which included commits to RCS files with non-trunk default branches.
- directory with info on RCU (read-copy update).
README.DAC960
- info on Mylex DAC960/DAC1100 PCI RAID Controller Driver for Linux.
-README.moxa
- - release notes for Moxa mutiport serial card.
SAK.txt
- info on Secure Attention Keys.
SubmittingDrivers
- info on typical Linux memory problems.
mips/
- directory with info about Linux on MIPS architecture.
-mkdev.cciss
- - script to make /dev entries for SMART controllers (see cciss.txt).
mono.txt
- how to execute Mono-based .NET binaries with the help of BINFMT_MISC.
moxa-smartio
version v0.99.0 or higher. Running old versions may cause problems
with programs using shared memory.
+udev
+----
+udev is a userspace application for populating /dev dynamically with
+entries only for devices actually present. udev replaces devfs.
+
Networking
==========
----------
o <http://powertweak.sourceforge.net/>
+udev
+----
+o <http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html>
+
Networking
**********
---------
o <http://nfs.sourceforge.net/>
-
</sect1>
</chapter>
+ <chapter id="debugfs">
+ <title>The debugfs filesystem</title>
+
+ <sect1><title>debugfs interface</title>
+!Efs/debugfs/inode.c
+!Efs/debugfs/file.c
+ </sect1>
+ </chapter>
+
<chapter id="vfs">
<title>The Linux VFS</title>
<sect1><title>The Directory Cache</title>
<chapter id="netdev">
<title>Network device support</title>
<sect1><title>Driver Support</title>
-!Edrivers/net/net_init.c
!Enet/core/dev.c
</sect1>
<sect1><title>8390 Based Network Cards</title>
Linux 2.4.2 Secure Attention Key (SAK) handling
-18 March 2001, Andrew Morton <andrewm@uow.edu.au>
+18 March 2001, Andrew Morton <akpm@osdl.org>
An operating system's Secure Attention Key is a security tool which is
provided as protection against trojan password capturing programs. It
/dev/console opened.
Unfortunately this includes a number of things which you don't
- actually want killed. This is because these appliccaitons are
+ actually want killed. This is because these applications are
incorrectly holding /dev/console open. Be sure to complain to your
Linux distributor about this!
--- /dev/null
+#!/bin/sh
+
+n_shelves=${n_shelves:-10}
+n_partitions=${n_partitions:-16}
+
+if test "$#" != "1"; then
+ echo "Usage: sh `basename $0` {dir}" 1>&2
+ exit 1
+fi
+dir=$1
+
+MAJOR=152
+
+echo "Creating AoE devnode files in $dir ..."
+
+set -e
+
+mkdir -p $dir
+
+# (Status info is in sysfs. See status.sh.)
+# rm -f $dir/stat
+# mknod -m 0400 $dir/stat c $MAJOR 1
+rm -f $dir/err
+mknod -m 0400 $dir/err c $MAJOR 2
+rm -f $dir/discover
+mknod -m 0200 $dir/discover c $MAJOR 3
+rm -f $dir/interfaces
+mknod -m 0200 $dir/interfaces c $MAJOR 4
+
+export n_partitions
+mkshelf=`echo $0 | sed 's!mkdevs!mkshelf!'`
+i=0
+while test $i -lt $n_shelves; do
+ sh -xc "sh $mkshelf $dir $i"
+ i=`expr $i + 1`
+done
in industrial control and other areas due to low cost and power
consumption. The IXP4xx family currently consists of several processors
that support different network offload functions such as encryption,
-routing, firewalling, etc. For more information on the various
-versions of the CPU, see:
+routing, firewalling, etc. The IXP46x family is an updated version which
+supports faster speeds, new memory and flash configurations, and more
+integration such as an on-chip I2C controller.
+
+For more information on the various versions of the CPU, see:
http://developer.intel.com/design/network/products/npfamily/ixp4xx.htm
- Dual serial ports
- PCI interface
- Flash access (MTD/JFFS)
-- I2C through GPIO
+- I2C through GPIO on IXP42x
- GPIO for input/output/interrupts
See include/asm-arm/arch-ixp4xx/platform.h for access functions.
- Timers (watchdog, OS)
also known as the Richfield board. It contains 4 PCI slots, 16MB
of flash, two 10/100 ports and one ADSL port.
+Intel IXDP465 Development Platform
+http://developer.intel.com/design/network/products/npfamily/ixdp465.htm
+
+ This is basically an IXDP425 with an IXP465 and 32M of flash instead
+ of just 16.
+
Intel IXDPG425 Development Platform
This is basically an ADI Coyote board with a NEC EHCI controller
The following people have contributed patches/comments/etc:
+Lennert Buytenhek
Lutz Jaenicke
Justin Mayfield
Robert E. Ranslam
-------------------------------------------------------------------------
-Last Update: 11/16/2004
+Last Update: 01/04/2005
Handheld (IPAQ), available in several varieties
+ HP iPAQ rx3715
+
+ S3C2440 based IPAQ, with a number of variations depending on
+ features shipped.
+
+
NAND
----
Roc Wu
Klaus Fetscher
Dimitry Andric
+ Shannon Holland
+
Document Changes
----------------
05 Sep 2004 - BJD - Added Klaus Fetscher to list of contributors
25 Oct 2004 - BJD - Added Dimitry Andric to list of contributors
25 Oct 2004 - BJD - Updated the MTD from the 2.6.9 merge
+ 21 Jan 2005 - BJD - Added rx3715, added Shannon to contributors
Document Author
---------------
-Ben Dooks, (c) 2004 Simtec Electronics
+Ben Dooks, (c) 2004-2005 Simtec Electronics
--- /dev/null
+ Semantics and Behavior of Atomic and
+ Bitmask Operations
+
+ David S. Miller
+
+ This document is intended to serve as a guide to Linux port
+maintainers on how to implement atomic counter, bitops, and spinlock
+interfaces properly.
+
+ The atomic_t type should be defined as a signed integer.
+Also, it should be made opaque such that any kind of cast to a normal
+C integer type will fail. Something like the following should
+suffice:
+
+ typedef struct { volatile int counter; } atomic_t;
+
+ The first operations to implement for atomic_t's are the
+initializers and plain reads.
+
+ #define ATOMIC_INIT(i) { (i) }
+ #define atomic_set(v, i) ((v)->counter = (i))
+
+The first macro is used in definitions, such as:
+
+static atomic_t my_counter = ATOMIC_INIT(1);
+
+The second interface can be used at runtime, as in:
+
+ struct foo { atomic_t counter; };
+ ...
+
+ struct foo *k;
+
+ k = kmalloc(sizeof(*k), GFP_KERNEL);
+ if (!k)
+ return -ENOMEM;
+ atomic_set(&k->counter, 0);
+
+Next, we have:
+
+ #define atomic_read(v) ((v)->counter)
+
+which simply reads the current value of the counter.
+
+Now, we move onto the actual atomic operation interfaces.
+
+ void atomic_add(int i, atomic_t *v);
+ void atomic_sub(int i, atomic_t *v);
+ void atomic_inc(atomic_t *v);
+ void atomic_dec(atomic_t *v);
+
+These four routines add and subtract integral values to/from the given
+atomic_t value. The first two routines pass explicit integers by
+which to make the adjustment, whereas the latter two use an implicit
+adjustment value of "1".
+
+One very important aspect of these two routines is that they DO NOT
+require any explicit memory barriers. They need only perform the
+atomic_t counter update in an SMP safe manner.
+
+Next, we have:
+
+ int atomic_inc_return(atomic_t *v);
+ int atomic_dec_return(atomic_t *v);
+
+These routines add 1 and subtract 1, respectively, from the given
+atomic_t and return the new counter value after the operation is
+performed.
+
+Unlike the above routines, it is required that explicit memory
+barriers are performed before and after the operation. It must be
+done such that all memory operations before and after the atomic
+operation calls are strongly ordered with respect to the atomic
+operation itself.
+
+For example, it should behave as if a smp_mb() call existed both
+before and after the atomic operation.
+
+If the atomic instructions used in an implementation provide explicit
+memory barrier semantics which satisfy the above requirements, that is
+fine as well.
+
+Let's move on:
+
+ int atomic_add_return(int i, atomic_t *v);
+ int atomic_sub_return(int i, atomic_t *v);
+
+These behave just like atomic_{inc,dec}_return() except that an
+explicit counter adjustment is given instead of the implicit "1".
+This means that like atomic_{inc,dec}_return(), the memory barrier
+semantics are required.
+
+Next:
+
+ int atomic_inc_and_test(atomic_t *v);
+ int atomic_dec_and_test(atomic_t *v);
+
+These two routines increment and decrement by 1, respectively, the
+given atomic counter. They return a boolean indicating whether the
+resulting counter value was zero or not.
+
+It requires explicit memory barrier semantics around the operation as
+above.
+
+ int atomic_sub_and_test(int i, atomic_t *v);
+
+This is identical to atomic_dec_and_test() except that an explicit
+decrement is given instead of the implicit "1". It requires explicit
+memory barrier semantics around the operation.
+
+ int atomic_add_negative(int i, atomic_t *v);
+
+The given increment is added to the given atomic counter value. A
+boolean is returned which indicates whether the resulting counter value
+is negative. It requires explicit memory barrier semantics around the
+operation.
+
+If a caller requires memory barrier semantics around an atomic_t
+operation which does not return a value, a set of interfaces are
+defined which accomplish this:
+
+ void smp_mb__before_atomic_dec(void);
+ void smp_mb__after_atomic_dec(void);
+ void smp_mb__before_atomic_inc(void);
+	void smp_mb__after_atomic_inc(void);
+
+For example, smp_mb__before_atomic_dec() can be used like so:
+
+ obj->dead = 1;
+ smp_mb__before_atomic_dec();
+ atomic_dec(&obj->ref_count);
+
+It makes sure that all memory operations preceding the atomic_dec()
+call are strongly ordered with respect to the atomic counter
+operation. In the above example, it guarantees that the assignment of
+"1" to obj->dead will be globally visible to other cpus before the
+atomic counter decrement.
+
+Without the explicit smp_mb__before_atomic_dec() call, the
+implementation could legally allow the atomic counter update to become
+visible to other cpus before the "obj->dead = 1;" assignment.
+
+The other three interfaces listed are used to provide explicit
+ordering with respect to memory operations after an atomic_dec() call
+(smp_mb__after_atomic_dec()) and around atomic_inc() calls
+(smp_mb__{before,after}_atomic_inc()).
+
+A missing memory barrier in the cases where they are required by the
+atomic_t implementation above can have disastrous results. Here is
+an example, which follows a pattern occurring frequently in the Linux
+kernel. It is the use of atomic counters to implement reference
+counting, and it works such that once the counter falls to zero it can
+be guaranteed that no other entity can be accessing the object:
+
+static void obj_list_add(struct obj *obj)
+{
+ obj->active = 1;
+ list_add(&obj->list);
+}
+
+static void obj_list_del(struct obj *obj)
+{
+ list_del(&obj->list);
+ obj->active = 0;
+}
+
+static void obj_destroy(struct obj *obj)
+{
+ BUG_ON(obj->active);
+ kfree(obj);
+}
+
+struct obj *obj_list_peek(struct list_head *head)
+{
+ if (!list_empty(head)) {
+ struct obj *obj;
+
+ obj = list_entry(head->next, struct obj, list);
+ atomic_inc(&obj->refcnt);
+ return obj;
+ }
+ return NULL;
+}
+
+void obj_poke(void)
+{
+ struct obj *obj;
+
+ spin_lock(&global_list_lock);
+ obj = obj_list_peek(&global_list);
+ spin_unlock(&global_list_lock);
+
+ if (obj) {
+ obj->ops->poke(obj);
+ if (atomic_dec_and_test(&obj->refcnt))
+ obj_destroy(obj);
+ }
+}
+
+void obj_timeout(struct obj *obj)
+{
+ spin_lock(&global_list_lock);
+ obj_list_del(obj);
+ spin_unlock(&global_list_lock);
+
+ if (atomic_dec_and_test(&obj->refcnt))
+ obj_destroy(obj);
+}
+
+(This is a simplification of the ARP queue management in the
+ generic neighbour discovery code of the networking subsystem. Olaf Kirch
+ found a bug wrt. memory barriers in kfree_skb() that exposed
+ the atomic_t memory barrier requirements quite clearly.)
+
+Given the above scheme, it must be the case that the obj->active
+update done by the obj list deletion be visible to other processors
+before the atomic counter decrement is performed.
+
+Otherwise, the counter could fall to zero, yet obj->active would still
+be set, thus triggering the assertion in obj_destroy(). The error
+sequence looks like this:
+
+ cpu 0 cpu 1
+ obj_poke() obj_timeout()
+ obj = obj_list_peek();
+ ... gains ref to obj, refcnt=2
+ obj_list_del(obj);
+ obj->active = 0 ...
+ ... visibility delayed ...
+ atomic_dec_and_test()
+ ... refcnt drops to 1 ...
+ atomic_dec_and_test()
+ ... refcount drops to 0 ...
+ obj_destroy()
+ BUG() triggers since obj->active
+ still seen as one
+ obj->active update visibility occurs
+
+With the memory barrier semantics required of the atomic_t operations
+which return values, the above sequence of memory visibility can never
+happen. Specifically, in the above case the atomic_dec_and_test()
+counter decrement would not become globally visible until the
+obj->active update does.
+
+As a historical note, 32-bit Sparc used to only allow usage of
+24 bits of its atomic_t type. This was because it used 8 bits
+as a spinlock for SMP safety. Sparc32 lacked a "compare and swap"
+type instruction. However, 32-bit Sparc has since been moved over
+to a "hash table of spinlocks" scheme, that allows the full 32-bit
+counter to be realized. Essentially, an array of spinlocks are
+indexed into based upon the address of the atomic_t being operated
+on, and that lock protects the atomic operation. Parisc uses the
+same scheme.
+
+Another note is that the atomic_t operations returning values are
+extremely slow on an old 386.
+
+We will now cover the atomic bitmask operations. You will find that
+their SMP and memory barrier semantics are similar in shape and scope
+to the atomic_t ops above.
+
+Native atomic bit operations are defined to operate on objects aligned
+to the size of an "unsigned long" C data type, and are at least of that
+size. The endianness of the bits within each "unsigned long" is the
+native endianness of the cpu.
+
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
+
+These routines set, clear, and change, respectively, the bit number
+indicated by "nr" on the bit mask pointed to by "addr".
+
+They must execute atomically, yet there are no implicit memory barrier
+semantics required of these interfaces.
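+
+For illustration only (this example is not part of the interface
+definition), a bit number of BITS_PER_LONG or more simply indexes
+further into an array of unsigned longs:
+
+	unsigned long bitmap[2] = { 0, 0 };
+
+	set_bit(0, bitmap);		/* lowest bit of bitmap[0] */
+	set_bit(BITS_PER_LONG, bitmap);	/* lowest bit of bitmap[1] */
+	change_bit(0, bitmap);		/* toggles bit 0 back to clear */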
+
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
+
+Like the above, except that these routines return a boolean which
+indicates whether the changed bit was set _BEFORE_ the atomic bit
+operation.
+
+WARNING! It is incredibly important that the value be a boolean,
+ie. "0" or "1". Do not try to be fancy and save a few instructions by
+declaring the above to return "long" and just returning something like
+"old_val & mask" because that will not work.
+
+For one thing, this return value gets truncated to int in many code
+paths using these interfaces, so on 64-bit if the bit is set in the
+upper 32-bits then testers will never see that.
+
+One great example of where this problem crops up are the thread_info
+flag operations. Routines such as test_and_set_ti_thread_flag() chop
+the return value into an int. There are other places where things
+like this occur as well.
+
+These routines, like the atomic_t counter operations returning values,
+require explicit memory barrier semantics around their execution. All
+memory operations before the atomic bit operation call must be made
+visible globally before the atomic bit operation is made visible.
+Likewise, the atomic bit operation must be visible globally before any
+subsequent memory operation is made visible. For example:
+
+ obj->dead = 1;
+ if (test_and_set_bit(0, &obj->flags))
+ /* ... */;
+ obj->killed = 1;
+
+The implementation of test_and_set_bit() must guarantee that
+"obj->dead = 1;" is visible to cpus before the atomic memory operation
+done by test_and_set_bit() becomes visible. Likewise, the atomic
+memory operation done by test_and_set_bit() must become visible before
+"obj->killed = 1;" is visible.
+
+Finally there is the basic operation:
+
+ int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);
+
+Which returns a boolean indicating if bit "nr" is set in the bitmask
+pointed to by "addr".
+
+If explicit memory barriers are required around clear_bit() (which
+does not return a value, and thus does not need to provide memory
+barrier semantics), two interfaces are provided:
+
+ void smp_mb__before_clear_bit(void);
+ void smp_mb__after_clear_bit(void);
+
+They are used as follows, and are akin to their atomic_t operation
+brothers:
+
+ /* All memory operations before this call will
+ * be globally visible before the clear_bit().
+ */
+ smp_mb__before_clear_bit();
+ clear_bit( ... );
+
+ /* The clear_bit() will be visible before all
+ * subsequent memory operations.
+ */
+ smp_mb__after_clear_bit();
+
+Finally, there are non-atomic versions of the bitmask operations
+provided. They are used in contexts where some other higher-level SMP
+locking scheme is being used to protect the bitmask, and thus less
+expensive non-atomic operations may be used in the implementation.
+They have names similar to the above bitmask operation interfaces,
+except that two underscores are prefixed to the interface name.
+
+ void __set_bit(unsigned long nr, volatile unsigned long *addr);
+ void __clear_bit(unsigned long nr, volatile unsigned long *addr);
+ void __change_bit(unsigned long nr, volatile unsigned long *addr);
+ int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+ int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+ int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
+
+These non-atomic variants also do not require any special memory
+barrier semantics.
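+
+For example (an illustrative sketch only, assuming a structure "foo" whose
+bitmap is already serialized by foo->lock):
+
+	spin_lock(&foo->lock);
+	__set_bit(nr, foo->bitmap);	/* foo->lock already provides exclusion */
+	spin_unlock(&foo->lock);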
+
+The routines xchg() and cmpxchg() need the same exact memory barriers
+as the atomic and bit operations returning values.
+
+Spinlocks and rwlocks have memory barrier expectations as well.
+The rule to follow is simple:
+
+1) When acquiring a lock, the implementation must make it globally
+ visible before any subsequent memory operation.
+
+2) When releasing a lock, the implementation must make it such that
+ all previous memory operations are globally visible before the
+ lock release.
+
+Which finally brings us to _atomic_dec_and_lock(). There is an
+architecture-neutral version implemented in lib/dec_and_lock.c,
+but most platforms will wish to optimize this in assembler.
+
+ int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
+
+Atomically decrement the given counter, and if it will drop to zero
+atomically acquire the given spinlock and perform the decrement
+of the counter to zero. If it does not drop to zero, do nothing
+with the spinlock.
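+
+As an illustrative sketch (reusing the "obj" example from earlier, via the
+atomic_dec_and_lock() wrapper), a typical caller looks like:
+
+	if (atomic_dec_and_lock(&obj->refcnt, &global_list_lock)) {
+		/* ours was the last reference; the lock is now held */
+		obj_list_del(obj);
+		spin_unlock(&global_list_lock);
+		obj_destroy(obj);
+	}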
+
+It is actually pretty simple to get the memory barrier correct.
+Simply satisfy the spinlock grab requirements, which is to make
+sure the spinlock operation is globally visible before any
+subsequent memory operation.
+
+We can demonstrate this operation more clearly if we define
+an abstract atomic operation:
+
+ long cas(long *mem, long old, long new);
+
+"cas" stands for "compare and swap". It atomically:
+
+1) Compares "old" with the value currently at "mem".
+2) If they are equal, "new" is written to "mem".
+3) Regardless, the current value at "mem" is returned.
+
+As an example usage, here is what an atomic counter update
+might look like:
+
+void example_atomic_inc(long *counter)
+{
+ long old, new, ret;
+
+ while (1) {
+ old = *counter;
+ new = old + 1;
+
+ ret = cas(counter, old, new);
+ if (ret == old)
+ break;
+ }
+}
+
+Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():
+
+int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
+{
+ long old, new, ret;
+ int went_to_zero;
+
+ went_to_zero = 0;
+ while (1) {
+ old = atomic_read(atomic);
+ new = old - 1;
+ if (new == 0) {
+ went_to_zero = 1;
+ spin_lock(lock);
+ }
+ ret = cas(atomic, old, new);
+ if (ret == old)
+ break;
+ if (went_to_zero) {
+ spin_unlock(lock);
+ went_to_zero = 0;
+ }
+ }
+
+ return went_to_zero;
+}
+
+Now, as far as memory barriers go, as long as spin_lock()
+strictly orders all subsequent memory operations (including
+the cas()) with respect to itself, things will be fine.
+
+Said another way, _atomic_dec_and_lock() must guarantee that
+a counter dropping to zero is never made visible before the
+spinlock being acquired.
+
+Note that this also means that for the case where the counter
+is not dropping to zero, there are no memory ordering
+requirements.
that it should be possible to put any filesystem with a block size >=
2KB on such a disc. For example, it should be possible to do:
+ # dvd+rw-format /dev/hdc (only needed if the disc has never
+ been formatted)
# mkudffs /dev/hdc
# mount /dev/hdc /cdrom -t udf -o rw,noatime
Both problems can be solved by using the pktcdvd driver, which always
generates aligned writes.
+ # dvd+rw-format /dev/hdc
# pktsetup dev_name /dev/hdc
# mkudffs /dev/pktcdvd/dev_name
# mount /dev/pktcdvd/dev_name /cdrom -t udf -o rw,noatime
------------
There is a CPU frequency changing CVS commit and general list where
you can report bugs, problems or submit patches. To post a message,
-send an email to cpufreq@www.linux.org.uk, to subscribe go to
-http://www.linux.org.uk/mailman/listinfo/cpufreq. Previous post to the
+send an email to cpufreq@lists.linux.org.uk, to subscribe go to
+http://lists.linux.org.uk/mailman/listinfo/cpufreq. Previous post to the
mailing list are available to subscribers at
-http://www.linux.org.uk/mailman/private/cpufreq/.
+http://lists.linux.org.uk/mailman/private/cpufreq/.
Links
* http://cvs.arm.linux.org.uk/
the CPUFreq Mailing list:
-* http://www.linux.org.uk/mailman/listinfo/cpufreq
+* http://lists.linux.org.uk/mailman/listinfo/cpufreq
Clock and voltage scaling for the SA-1100:
* http://www.lart.tudelft.nl/projects/scaling
- HAMA DVB-T USB device
http://www.hama.de/portal/articleId*110620/action*2598
-- CTS Portable (Chinese Television System)
+- CTS Portable (Chinese Television System) (2)
http://www.2cts.tv/ctsportable/
- Unknown USB DVB-T device with vendor ID Hyper-Paltek
Others:
-------
-- Ultima Electronic/Artec T1 USB TVBOX (AN2135 and AN2235)
+- Ultima Electronic/Artec T1 USB TVBOX (AN2135, AN2235, AN2235 with Panasonic Tuner)
http://82.161.246.249/products-tvbox.html
-- Compro Videomate DVB-U2000 - DVB-T USB
+- Compro Videomate DVB-U2000 - DVB-T USB (2)
http://www.comprousa.com/products/vmu2000.htm
- Grandtec USB DVB-T
http://www.grand.com.tw/
-- Avermedia AverTV DVBT USB
+- Avermedia AverTV DVBT USB (2)
http://www.avermedia.com/
- DiBcom USB DVB-T reference device (non-public)
Supported devices USB2.0
========================
-- Twinhan MagicBox II
+- Twinhan MagicBox II (2)
http://www.twinhan.com/product_terrestrial_7.asp
-- Yakumo DVB-T mobile
+- Hanftek UMT-010 (1)
+ http://www.globalsources.com/si/6008819757082/ProductDetail/Digital-TV/product_id-100046529
+
+- Typhoon/Yakumo/HAMA DVB-T mobile USB2.0 (1)
http://www.yakumo.de/produkte/index.php?pid=1&ag=DVB-T
+- Artec T1 USB TVBOX (FX2) (2)
+
- DiBcom USB2.0 DVB-T reference device (non-public)
+1) It is almost working.
+2) No test reports received yet.
+
0. NEWS:
+ 2005-01-13 - moved the mirrored pid_filter_table back to dvb-dibusb
+            - first almost working version for HanfTek UMT-010
+            - found out that Yakumo/HAMA/Typhoon are predecessors of the HanfTek
+ 2005-01-10 - refactoring completed, now everything is very delightful
+            - tuner quirks for some weird devices (the Artec T1 AN2235 device sometimes has a
+              Panasonic tuner fitted). Tuner probing implemented. Thanks a lot to Gunnar Wittich.
+ 2004-12-29 - after several days of struggling, fixed the bug of URBs not being returned to the driver.
+ 2004-12-26 - refactored the dibusb-driver, split it into separate files
+ - i2c-probing enabled
2004-12-06 - possibility for demod i2c-address probing
- new usb IDs (Compro,Artec)
2004-11-23 - merged changes from DiB3000MC_ver2.1
1. How to use?
NOTE: This driver was developed using Linux 2.6.6.,
-it is working with 2.6.7, 2.6.8.1, 2.6.9 .
+it is working with 2.6.7 and above.
Linux 2.4.x support is not planned, but patches are very welcome.
NOTE: I'm using Debian testing, so the following explaination (especially
the hotplug-path) needn't match your system, but probably it will :).
+The driver is included in the kernel since Linux 2.6.10.
+
1.1. Firmware
The USB driver needs to download a firmware to start working.
first have a look, which debug level are available:
modinfo dib3000mb
+modinfo dib3000-common
+modinfo dib3000mc
modinfo dvb-dibusb
+modprobe dib3000-common debug=<level>
modprobe dib3000mb debug=<level>
+modprobe dib3000mc debug=<level>
modprobe dvb-dibusb debug=<level>
should do the trick.
At this point you should be able to start a dvb-capable application. For myself
I used mplayer, dvbscan, tzap and kaxtv, they are working. Using the device
-as a slave device in vdr, was not working for me. Some work has to be done
-(patches and comments are very welcome).
+in vdr (at least the USB2.0 one) is working.
2. Known problems and bugs
-TODO:
-- signal-quality and strength calculations
+- none this time
2.1. Adding support for devices
maximum bandwidth of about 5-6 MBit/s when connected to a USB2.0 hub.
This is not enough for receiving the complete transport stream of a
DVB-T channel (which can be about 16 MBit/s). Normally this is not a
-problem, if you only want to watch TV, but watching a channel while
-recording another channel on the same frequency simply does not work.
-This applies to all USB1.1 DVB-T devices.
+problem, if you only want to watch TV (this does not apply to HDTV),
+but watching a channel while recording another channel on the same
+frequency simply does not work. This applies to all USB1.1 DVB-T
+devices, not only the dibusb.
A special problem of the dibusb for the USB1.1 is, that the USB control
IC has a problem with write accesses while having MPEG2-streaming
these features is maybe a solution. Additionally this behaviour of VDR exceeds
the USB1.1 bandwidth.
+Update:
+For the USB1.1 and VDR some work has been done (patches and comments are still
+very welcome). Maybe the problem is solved in the meantime because I now use
+the dmx_sw_filter function instead of dmx_sw_filter_packet. I hope the
+linux-dvb software filter is able to get the best of the garbled TS.
+
2.3. Comments
Patches, comments and suggestions are very very welcome
3. Acknowledgements
Amaury Demol (ademol@dibcom.fr) and Francois Kanounnikoff from DiBcom for
- providing specs, code and help, on which the dvb-dibusb and dib3000mb are
- based.
+ providing specs, code and help, on which the dvb-dibusb, dib3000mb and
+ dib3000mc are based.
David Matthews for identifying a new device type (Artec T1 with AN2235)
and for extending dibusb with remote control event handling. Thank you.
use File::Temp qw/ tempdir /;
use IO::Handle;
-@components = ( "sp8870", "sp887x", "tda10045", "tda10046", "av7110", "dec2000t", "dec2540t", "dec3000s", "vp7041", "dibusb" );
+@components = ( "sp8870", "sp887x", "tda10045", "tda10046", "av7110", "dec2000t",
+ "dec2540t", "dec3000s", "vp7041", "dibusb", "nxt2002" );
# Check args
syntax() if (scalar(@ARGV) != 1);
$outfile;
}
+sub nxt2002 {
+ my $sourcefile = "Broadband4PC_4_2_11.zip";
+ my $url = "http://www.bbti.us/download/windows/$sourcefile";
+ my $hash = "c6d2ea47a8f456d887ada0cfb718ff2a";
+ my $outfile = "dvb-fe-nxt2002.fw";
+ my $tmpdir = tempdir(DIR => "/tmp", CLEANUP => 1);
+
+ checkstandard();
+
+ wgetfile($sourcefile, $url);
+ unzip($sourcefile, $tmpdir);
+ verify("$tmpdir/SkyNETU.sys", $hash);
+ extract("$tmpdir/SkyNETU.sys", 375832, 5908, $outfile);
+
+ $outfile;
+}
+
# ---------------------------------------------------------------
# Utilities
Early userspace support
=======================
-Last update: 2003-08-21
+Last update: 2004-12-20 tlh
"Early userspace" is a set of libraries and programs that provide
- initramfs, a chunk of code that unpacks the compressed cpio image
midway through the kernel boot process.
- klibc, a userspace C library, currently packaged separately, that is
- optimised for correctness and small size.
+ optimized for correctness and small size.
The cpio file format used by initramfs is the "newc" (aka "cpio -c")
-format, and is documented in the file "buffer-format.txt". If you
-want to generate your own cpio files directly instead of hacking on
-gen_init_cpio, you will need to short-circuit the build process in
-usr/ so that gen_init_cpio does not get run, then simply pop your own
-initramfs_data.cpio.gz file into place.
-
+format, and is documented in the file "buffer-format.txt". There are
+two ways to add an early userspace image: specify an existing cpio
+archive to be used as the image or have the kernel build process build
+the image from specifications.
+
+CPIO ARCHIVE method
+
+You can create a cpio archive that contains the early userspace image.
+Your cpio archive should be specified in CONFIG_INITRAMFS_SOURCE and it
+will be used directly. Only a single cpio file may be specified in
+CONFIG_INITRAMFS_SOURCE and directory and file names are not allowed in
+combination with a cpio archive.
+
+IMAGE BUILDING method
+
+The kernel build process can also build an early userspace image from
+source parts rather than supplying a cpio archive. This method provides
+a way to create images with root-owned files even though the image was
+built by an unprivileged user.
+
+The image is specified as one or more sources in
+CONFIG_INITRAMFS_SOURCE. Sources can be either directories or files -
+cpio archives are *not* allowed when building from sources.
+
+A source directory will have it and all of its contents packaged. The
+specified directory name will be mapped to '/'. When packaging a
+directory, limited user and group ID translation can be performed.
+INITRAMFS_ROOT_UID can be set to a user ID that needs to be mapped to
+user root (0). INITRAMFS_ROOT_GID can be set to a group ID that needs
+to be mapped to group root (0).
+
+A source file must contain directives in the format required by the
+usr/gen_init_cpio utility (run 'usr/gen_init_cpio --help' to get the
+file format). The directives in the file will be passed directly to
+usr/gen_init_cpio.
+
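+As an illustrative sketch (the name 'root-files' and the host-side path
+below are just examples, not mandated by the build system), such a
+directive file might look like:
+
+	# root-files: passed straight through to usr/gen_init_cpio
+	dir  /dev 0755 0 0
+	nod  /dev/console 0600 0 0 c 5 1
+	dir  /root 0700 0 0
+	file /init usr/initramfs/init.sh 0755 0 0
+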
+When a combination of directories and files are specified then the
+initramfs image will be an aggregate of all of them. In this way a user
+can create a 'root-image' directory and install all files into it.
+Because device-special files cannot be created by an unprivileged user,
+special files can be listed in a 'root-files' file. Both 'root-image'
+and 'root-files' can be listed in CONFIG_INITRAMFS_SOURCE and a complete
+early userspace image can be built by an unprivileged user.
+
+As a technical note, when directories and files are specified, the
+entire CONFIG_INITRAMFS_SOURCE is passed to
+scripts/gen_initramfs_list.sh. This means that CONFIG_INITRAMFS_SOURCE
+can really be interpreted as any legal argument to
+gen_initramfs_list.sh. If a directory is specified as an argument then
+the contents are scanned, uid/gid translation is performed, and
+usr/gen_init_cpio file directives are output. If a file is
+specified as an argument to scripts/gen_initramfs_list.sh then the
+contents of the file are simply copied to the output. All of the output
+directives from directory scanning and file contents copying are
+processed by usr/gen_init_cpio.
+
+See also 'scripts/gen_initramfs_list.sh -h'.
Where's this all leading?
=========================
--- /dev/null
+The following is a list of files and features that are going to be
+removed in the kernel source tree. Every entry should contain what
+exactly is going away, why it is happening, and who is going to be doing
+the work. When the feature is removed from the kernel, it should also
+be removed from this file.
+
+---------------------------
+
+What: devfs
+When: July 2005
+Files: fs/devfs/*, include/linux/devfs_fs*.h and assorted devfs
+ function calls throughout the kernel tree
+Why: It has been unmaintained for a number of years, has unfixable
+ races, contains a naming policy within the kernel that is
+ against the LSB, and can be replaced by using udev.
+Who: Greg Kroah-Hartman <greg@kroah.com>
+
- info and mount options for the UDF filesystem.
ufs.txt
- info on the ufs filesystem.
-umsdos.txt
- - info on the umsdos extensions to the msdos filesystem.
vfat.txt
- info on using the VFAT filesystem used in Windows NT and Windows 95
vfs.txt
unsigned int (*poll) (struct file *, struct poll_table_struct *);
int (*ioctl) (struct inode *, struct file *, unsigned int,
unsigned long);
+ long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
+ long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
int (*mmap) (struct file *, struct vm_area_struct *);
int (*open) (struct inode *, struct file *);
int (*flush) (struct file *);
readdir: no
poll: no
ioctl: yes (see below)
+unlocked_ioctl: no (see below)
+compat_ioctl: no
mmap: no
open: maybe (see below)
flush: no
anything that resembles union-mount we won't have a struct file for all
components. And there are other reasons why the current interface is a mess...
+->ioctl() on regular files is superseded by the ->unlocked_ioctl() that
+doesn't take the BKL.
+
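+As an illustrative sketch (the "mydrv" names are purely hypothetical, not
+part of this document), a BKL-free method looks like:
+
+#include <linux/fs.h>
+
+static long mydrv_unlocked_ioctl(struct file *filp, unsigned int cmd,
+				 unsigned long arg)
+{
+	/* called without the BKL; take the driver's own lock if needed */
+	switch (cmd) {
+	default:
+		return -ENOTTY;
+	}
+}
+
+static struct file_operations mydrv_fops = {
+	.unlocked_ioctl	= mydrv_unlocked_ioctl,
+};
+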
->read on directories probably must go away - we should just enforce -EISDIR
in sys_read() and friends.
--- /dev/null
+ ====================
+ DEBUGGING FR-V LINUX
+ ====================
+
+
+The kernel contains a GDB stub that talks GDB remote protocol across a serial
+port. This permits GDB to single step through the kernel, set breakpoints and
+trap exceptions that happen in kernel space and interrupt execution. It also
+permits the NMI interrupt button or serial port events to jump the kernel into
+the debugger.
+
+On the CPUs that have on-chip UARTs (FR400, FR403, FR405, FR555), the
+GDB stub hijacks a serial port for its own purposes, and makes it
+generate level 15 interrupts (NMI). The kernel proper cannot see the serial
+port in question under these conditions.
+
+On the MB93091-VDK CPU boards, the GDB stub uses UART1, which would otherwise
+be /dev/ttyS1. On the MB93093-PDK, the GDB stub uses UART0. Therefore, on the
+PDK there is no externally accessible serial port and the serial port to
+which the touch screen is attached becomes /dev/ttyS0.
+
+Note that the GDB stub runs entirely within CPU debug mode, and so should not
+incur any exceptions or interrupts whilst it is active. In particular, note
+that the clock will lose time since it is implemented in software.
+
+
+==================
+KERNEL PREPARATION
+==================
+
+Firstly, a debuggable kernel must be built. To do this, unpack the kernel tree
+and copy the configuration that you wish to use to .config. Then reconfigure
+the following things on the "Kernel Hacking" tab:
+
+ (*) "Include debugging information"
+
+ Set this to "Y". This causes all C and Assembly files to be compiled
+ to include debugging information.
+
+ (*) "In-kernel GDB stub"
+
+ Set this to "Y". This causes the GDB stub to be compiled into the
+ kernel.
+
+ (*) "Immediate activation"
+
+ Set this to "Y" if you want the GDB stub to activate as soon as possible
+ and wait for GDB to connect. This allows you to start tracing right from
+ the beginning of start_kernel() in init/main.c.
+
+ (*) "Console through GDB stub"
+
+ Set this to "Y" if you wish to be able to use "console=gdb0" on the
+ command line. That tells the kernel to pass system console messages to
+ GDB (which then prints them on its standard output). This is useful when
+ debugging the serial drivers that'd otherwise be used to pass console
+ messages to the outside world.
+
+Then build as usual, download to the board and execute. Note that if
+"Immediate activation" was selected, then the kernel will wait for GDB to
+attach. If not, then the kernel will boot immediately and GDB will have to
+interrupt it or wait for an exception to occur before doing anything with
+the kernel.
+
+
+=========================
+KERNEL DEBUGGING WITH GDB
+=========================
+
+Set the serial port on the computer that's going to run GDB to the appropriate
+baud rate. Assuming the board's debug port is connected to ttyS0/COM1 on the
+computer doing the debugging:
+
+ stty -F /dev/ttyS0 115200
+
+Then start GDB in the base of the kernel tree:
+
+ frv-uclinux-gdb linux [uClinux]
+
+Or:
+
+ frv-uclinux-gdb vmlinux [MMU linux]
+
+When the prompt appears:
+
+ GNU gdb frv-031024
+ Copyright 2003 Free Software Foundation, Inc.
+ GDB is free software, covered by the GNU General Public License, and you are
+ welcome to change it and/or distribute copies of it under certain conditions.
+ Type "show copying" to see the conditions.
+ There is absolutely no warranty for GDB. Type "show warranty" for details.
+ This GDB was configured as "--host=i686-pc-linux-gnu --target=frv-uclinux"...
+ (gdb)
+
+Attach to the board like this:
+
+ (gdb) target remote /dev/ttyS0
+ Remote debugging using /dev/ttyS0
+ start_kernel () at init/main.c:395
+ (gdb)
+
+This should show the appropriate lines from the source too. The kernel can
+then be debugged almost as if it's any other program.
+
+
+===============================
+INTERRUPTING THE RUNNING KERNEL
+===============================
+
+The kernel can be interrupted whilst it is running, causing a jump back to the
+GDB stub and the debugger:
+
+ (*) Pressing Ctrl-C in GDB. This will cause GDB to try and interrupt the
+ kernel by sending an RS232 BREAK over the serial line to the GDB
+ stub. This will (mostly) immediately interrupt the kernel and return it
+ to the debugger.
+
+ (*) Pressing the NMI button on the board will also cause a jump into the
+ debugger.
+
+ (*) Setting a software breakpoint. This sets a break instruction at the
+     desired location; the GDB stub then traps the resulting exception.
+
+ (*) Setting a hardware breakpoint. The GDB stub is capable of using the IBAR
+ and DBAR registers to assist debugging.
+
+Furthermore, the GDB stub will intercept a number of exceptions automatically
+if they are caused by kernel execution. It will also intercept BUG() macro
+invocation.
+
--- /dev/null
+ =================================
+ FR451 MMU LINUX MEMORY MANAGEMENT
+ =================================
+
+============
+MMU HARDWARE
+============
+
+FR451 MMU Linux puts the MMU into EDAT mode whilst running. This means that it uses both the SAT
+registers and the DAT TLB to perform address translation.
+
+There are 8 IAMLR/IAMPR register pairs and 16 DAMLR/DAMPR register pairs for SAT mode.
+
+In DAT mode, there is also a TLB organised in cache format as 64 lines x 2 ways. Each line spans a
+16KB range of addresses, but can match a larger region.
+
+
+===========================
+MEMORY MANAGEMENT REGISTERS
+===========================
+
+Certain control registers are used by the kernel memory management routines:
+
+ REGISTERS USAGE
+ ====================== ==================================================
+ IAMR0, DAMR0 Kernel image and data mappings
+ IAMR1, DAMR1 First-chance TLB lookup mapping
+ DAMR2 Page attachment for cache flush by page
+ DAMR3 Current PGD mapping
+ SCR0, DAMR4 Instruction TLB PGE/PTD cache
+ SCR1, DAMR5 Data TLB PGE/PTD cache
+ DAMR6-10 kmap_atomic() mappings
+ DAMR11 I/O mapping
+ CXNR mm_struct context ID
+ TTBR Page directory (PGD) pointer (physical address)
+
+
+=====================
+GENERAL MEMORY LAYOUT
+=====================
+
+The physical memory layout is as follows:
+
+ PHYSICAL ADDRESS CONTROLLER DEVICE
+ =================== ============== =======================================
+ 00000000 - BFFFFFFF SDRAM SDRAM area
+ E0000000 - EFFFFFFF L-BUS CS2# VDK SLBUS/PCI window
+ F0000000 - F0FFFFFF L-BUS CS5# MB93493 CSC area (DAV daughter board)
+ F1000000 - F1FFFFFF L-BUS CS7# (CB70 CPU-card PCMCIA port I/O space)
+ FC000000 - FC0FFFFF L-BUS CS1# VDK MB86943 config space
+ FC100000 - FC1FFFFF L-BUS CS6# DM9000 NIC I/O space
+ FC200000 - FC2FFFFF L-BUS CS3# MB93493 CSR area (DAV daughter board)
+ FD000000 - FDFFFFFF L-BUS CS4# (CB70 CPU-card extra flash space)
+ FE000000 - FEFFFFFF Internal CPU peripherals
+ FF000000 - FF1FFFFF L-BUS CS0# Flash 1
+ FF200000 - FF3FFFFF L-BUS CS0# Flash 2
+ FFC00000 - FFC0001F L-BUS CS0# FPGA
+
+The virtual memory layout is:
+
+ VIRTUAL ADDRESS PHYSICAL TRANSLATOR FLAGS SIZE OCCUPATION
+ ================= ======== ============== ======= ======= ===================================
+ 00004000-BFFFFFFF various TLB,xAMR1 D-N-??V 3GB Userspace
+ C0000000-CFFFFFFF 00000000 xAMPR0 -L-S--V 256MB Kernel image and data
+ D0000000-D7FFFFFF various TLB,xAMR1 D-NS??V 128MB vmalloc area
+ D8000000-DBFFFFFF various TLB,xAMR1 D-NS??V 64MB kmap() area
+ DC000000-DCFFFFFF various TLB 1MB Secondary kmap_atomic() frame
+ DD000000-DD27FFFF various DAMR 160KB Primary kmap_atomic() frame
+ DD040000 DAMR2/IAMR2 -L-S--V page Page cache flush attachment point
+ DD080000 DAMR3 -L-SC-V page Page Directory (PGD)
+ DD0C0000 DAMR4 -L-SC-V page Cached insn TLB Page Table lookup
+ DD100000 DAMR5 -L-SC-V page Cached data TLB Page Table lookup
+ DD140000 DAMR6 -L-S--V page kmap_atomic(KM_BOUNCE_READ)
+ DD180000 DAMR7 -L-S--V page kmap_atomic(KM_SKB_SUNRPC_DATA)
+ DD1C0000 DAMR8 -L-S--V page kmap_atomic(KM_SKB_DATA_SOFTIRQ)
+ DD200000 DAMR9 -L-S--V page kmap_atomic(KM_USER0)
+ DD240000 DAMR10 -L-S--V page kmap_atomic(KM_USER1)
+ E0000000-FFFFFFFF E0000000 DAMR11 -L-SC-V 512MB I/O region
+
+IAMPR1 and DAMPR1 are used as an extension to the TLB.
+
+
+====================
+KMAP AND KMAP_ATOMIC
+====================
+
+To access pages in the page cache (which may not be directly accessible if highmem is available),
+the kernel calls kmap(), does the access and then calls kunmap(); or it calls kmap_atomic(), does
+the access and then calls kunmap_atomic().
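+
+As an illustrative sketch (this is the generic 2.6-era kernel API, not anything FR-V specific), the
+calling pattern for the atomic variant is:
+
+	void *vaddr;
+
+	vaddr = kmap_atomic(page, KM_USER0);	/* must not sleep until the matching kunmap_atomic() */
+	memcpy(vaddr, buffer, len);
+	kunmap_atomic(vaddr, KM_USER0);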
+
+kmap() creates an attachment between an arbitrary inaccessible page and a range of virtual
+addresses by installing a PTE in a special page table. The kernel can then access this page as it
+wills. When it's finished, the kernel calls kunmap() to clear the PTE.
+
+kmap_atomic() does something slightly different. In the interests of speed, it chooses one of two
+strategies:
+
+ (1) If possible, kmap_atomic() attaches the requested page to one of DAMPR5 through DAMPR10
+ register pairs; and the matching kunmap_atomic() clears the DAMPR. This makes high memory
+ support really fast as there's no need to flush the TLB or modify the page tables. The DAMLR
+ registers being used for this are preset during boot and don't change over the lifetime of the
+ process. There's a direct mapping between the first few kmap_atomic() types, DAMR number and
+ virtual address slot.
+
+ However, there are more kmap_atomic() types defined than there are DAMR registers available,
+ so we fall back to:
+
+ (2) kmap_atomic() uses a slot in the secondary frame (determined by the type parameter), and then
+ locks an entry in the TLB to translate that slot to the specified page. The number of slots is
+ obviously limited, and their positions are controlled such that each slot is matched by a
+ different line in the TLB. kunmap() ejects the entry from the TLB.
+
+Note that the first three kmap atomic types are really just declared as placeholders. The DAMPR
+registers involved are actually modified directly.
+
+Also note that kmap() itself may sleep, kmap_atomic() may never sleep and both always succeed;
+furthermore, a driver using kmap() may sleep before calling kunmap(), but may not sleep before
+calling kunmap_atomic() if it had previously called kmap_atomic().
+
+
+===============================
+USING MORE THAN 256MB OF MEMORY
+===============================
+
+The kernel cannot access more than 256MB of memory directly. The physical layout, however, permits
+up to 3GB of SDRAM (possibly 3.25GB) to be made available. By using CONFIG_HIGHMEM, the kernel can
+allow userspace (by way of page tables) and itself (by way of kmap) to deal with the memory
+allocation.
+
+External devices can, of course, still DMA to and from all of the SDRAM, even if the kernel can't
+see it directly. The kernel translates page references into real addresses for communicating to the
+devices.
+
+
+===================
+PAGE TABLE TOPOLOGY
+===================
+
+The page tables are arranged in 2-layer format. There is a middle layer (PMD) that would be used in
+3-layer format tables but that is folded into the top layer (PGD) and so consumes no extra memory
+or processing power.
+
+ +------+ PGD PMD
+ | TTBR |--->+-------------------+
+ +------+ | | : STE |
+ | PGE0 | PME0 : STE |
+ | | : STE |
+ +-------------------+ Page Table
+ | | : STE -------------->+--------+ +0x0000
+ | PGE1 | PME0 : STE -----------+ | PTE0 |
+ | | : STE -------+ | +--------+
+ +-------------------+ | | | PTE63 |
+ | | : STE | | +-->+--------+ +0x0100
+ | PGE2 | PME0 : STE | | | PTE64 |
+ | | : STE | | +--------+
+ +-------------------+ | | PTE127 |
+ | | : STE | +------>+--------+ +0x0200
+ | PGE3 | PME0 : STE | | PTE128 |
+ | | : STE | +--------+
+ +-------------------+ | PTE191 |
+ +--------+ +0x0300
+
+Each Page Directory (PGD) is 16KB (page size) in size and is divided into 64 entries (PGEs). Each
+PGE contains one Page Mid Directory (PMD).
+
+Each PMD is 256 bytes in size and contains a single entry (PME). Each PME holds 64 FR451 MMU
+segment table entries of 4 bytes apiece. Each PME "points to" a page table. In practice, each STE
+points to a subset of the page table, the first to PT+0x0000, the second to PT+0x0100, the third to
+PT+0x200, and so on.
+
+Each PGE and PME covers 64MB of the total virtual address space.
+
+Each Page Table (PTD) is 16KB (page size) in size, and is divided into 4096 entries (PTEs). Each
+entry can point to one 16KB page. In practice, each Linux page table is subdivided into 64 FR451
+MMU page tables. But they are all grouped together to make management easier, in particular rmap
+support is then trivial.
+
+Grouping page tables in this fashion makes PGE caching in SCR0/SCR1 more efficient because the
+coverage of the cached item is greater.
+
+Page tables for the vmalloc area are allocated at boot time and shared between all mm_structs.
+
+
+=================
+USER SPACE LAYOUT
+=================
+
+For MMU capable Linux, the regions userspace code is allowed to access are kept entirely separate
+from those dedicated to the kernel:
+
+ VIRTUAL ADDRESS SIZE PURPOSE
+ ================= ===== ===================================
+ 00000000-00003fff 4KB NULL pointer access trap
+ 00004000-01ffffff ~32MB lower mmap space (grows up)
+ 02000000-021fffff 2MB Stack space (grows down from top)
+ 02200000-nnnnnnnn Executable mapping
+ nnnnnnnn- brk space (grows up)
+ -bfffffff upper mmap space (grows down)
+
+This is arranged so as to make the best use of the 16KB page tables and the way in which PGEs/PMEs
+are cached by the TLB handler. The lower mmap space is filled first, and then the upper mmap space
+is filled.
+
+
+===============================
+GDB-STUB MMU DEBUGGING SERVICES
+===============================
+
+The gdb-stub included in this kernel provides a number of services to aid in the debugging of MMU
+related kernel services:
+
+ (*) Every time the kernel stops, certain state information is dumped into __debug_mmu. This
+ variable is defined in arch/frv/kernel/gdb-stub.c. Note that the gdbinit file in this
+ directory has some useful macros for dealing with this.
+
+ (*) __debug_mmu.tlb[]
+
+ This receives the current TLB contents. This can be viewed with the _tlb GDB macro:
+
+ (gdb) _tlb
+ tlb[0x00]: 01000005 00718203 01000002 00718203
+ tlb[0x01]: 01004002 006d4201 01004005 006d4203
+ tlb[0x02]: 01008002 006d0201 01008006 00004200
+ tlb[0x03]: 0100c006 007f4202 0100c002 0064c202
+ tlb[0x04]: 01110005 00774201 01110002 00774201
+ tlb[0x05]: 01114005 00770201 01114002 00770201
+ tlb[0x06]: 01118002 0076c201 01118005 0076c201
+ ...
+ tlb[0x3d]: 010f4002 00790200 001f4002 0054ca02
+ tlb[0x3e]: 010f8005 0078c201 010f8002 0078c201
+ tlb[0x3f]: 001fc002 0056ca01 001fc005 00538a01
+
+ (*) __debug_mmu.iamr[]
+ (*) __debug_mmu.damr[]
+
+     These receive the current IAMR and DAMR contents. These can be viewed with the _amr
+ GDB macro:
+
+ (gdb) _amr
+ AMRx DAMR IAMR
+ ==== ===================== =====================
+ amr0 : L:c0000000 P:00000cb9 : L:c0000000 P:000004b9
+ amr1 : L:01070005 P:006f9203 : L:0102c005 P:006a1201
+ amr2 : L:d8d00000 P:00000000 : L:d8d00000 P:00000000
+ amr3 : L:d8d04000 P:00534c0d : L:00000000 P:00000000
+ amr4 : L:d8d08000 P:00554c0d : L:00000000 P:00000000
+ amr5 : L:d8d0c000 P:00554c0d : L:00000000 P:00000000
+ amr6 : L:d8d10000 P:00000000 : L:00000000 P:00000000
+ amr7 : L:d8d14000 P:00000000 : L:00000000 P:00000000
+ amr8 : L:d8d18000 P:00000000
+ amr9 : L:d8d1c000 P:00000000
+ amr10: L:d8d20000 P:00000000
+ amr11: L:e0000000 P:e0000ccd
+
+ (*) The current task's page directory is bound to DAMR3.
+
+ This can be viewed with the _pgd GDB macro:
+
+ (gdb) _pgd
+ $3 = {{pge = {{ste = {0x554001, 0x554101, 0x554201, 0x554301, 0x554401,
+ 0x554501, 0x554601, 0x554701, 0x554801, 0x554901, 0x554a01,
+ 0x554b01, 0x554c01, 0x554d01, 0x554e01, 0x554f01, 0x555001,
+ 0x555101, 0x555201, 0x555301, 0x555401, 0x555501, 0x555601,
+ 0x555701, 0x555801, 0x555901, 0x555a01, 0x555b01, 0x555c01,
+ 0x555d01, 0x555e01, 0x555f01, 0x556001, 0x556101, 0x556201,
+ 0x556301, 0x556401, 0x556501, 0x556601, 0x556701, 0x556801,
+ 0x556901, 0x556a01, 0x556b01, 0x556c01, 0x556d01, 0x556e01,
+ 0x556f01, 0x557001, 0x557101, 0x557201, 0x557301, 0x557401,
+ 0x557501, 0x557601, 0x557701, 0x557801, 0x557901, 0x557a01,
+ 0x557b01, 0x557c01, 0x557d01, 0x557e01, 0x557f01}}}}, {pge = {{
+ ste = {0x0 <repeats 64 times>}}}} <repeats 51 times>, {pge = {{ste = {
+ 0x248001, 0x248101, 0x248201, 0x248301, 0x248401, 0x248501,
+ 0x248601, 0x248701, 0x248801, 0x248901, 0x248a01, 0x248b01,
+ 0x248c01, 0x248d01, 0x248e01, 0x248f01, 0x249001, 0x249101,
+ 0x249201, 0x249301, 0x249401, 0x249501, 0x249601, 0x249701,
+ 0x249801, 0x249901, 0x249a01, 0x249b01, 0x249c01, 0x249d01,
+ 0x249e01, 0x249f01, 0x24a001, 0x24a101, 0x24a201, 0x24a301,
+ 0x24a401, 0x24a501, 0x24a601, 0x24a701, 0x24a801, 0x24a901,
+ 0x24aa01, 0x24ab01, 0x24ac01, 0x24ad01, 0x24ae01, 0x24af01,
+ 0x24b001, 0x24b101, 0x24b201, 0x24b301, 0x24b401, 0x24b501,
+ 0x24b601, 0x24b701, 0x24b801, 0x24b901, 0x24ba01, 0x24bb01,
+ 0x24bc01, 0x24bd01, 0x24be01, 0x24bf01}}}}, {pge = {{ste = {
+ 0x0 <repeats 64 times>}}}} <repeats 11 times>}
+
+ (*) The PTD last used by the instruction TLB miss handler is attached to DAMR4.
+ (*) The PTD last used by the data TLB miss handler is attached to DAMR5.
+
+ These can be viewed with the _ptd_i and _ptd_d GDB macros:
+
+ (gdb) _ptd_d
+ $5 = {{pte = 0x0} <repeats 127 times>, {pte = 0x539b01}, {
+ pte = 0x0} <repeats 896 times>, {pte = 0x719303}, {pte = 0x6d5303}, {
+ pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {
+ pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {pte = 0x6a1303}, {
+ pte = 0x0} <repeats 12 times>, {pte = 0x709303}, {pte = 0x0}, {pte = 0x0},
+ {pte = 0x6fd303}, {pte = 0x6f9303}, {pte = 0x6f5303}, {pte = 0x0}, {
+ pte = 0x6ed303}, {pte = 0x531b01}, {pte = 0x50db01}, {
+ pte = 0x0} <repeats 13 times>, {pte = 0x5303}, {pte = 0x7f5303}, {
+ pte = 0x509b01}, {pte = 0x505b01}, {pte = 0x7c9303}, {pte = 0x7b9303}, {
+ pte = 0x7b5303}, {pte = 0x7b1303}, {pte = 0x7ad303}, {pte = 0x0}, {
+ pte = 0x0}, {pte = 0x7a1303}, {pte = 0x0}, {pte = 0x795303}, {pte = 0x0}, {
+ pte = 0x78d303}, {pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {pte = 0x0}, {
+ pte = 0x0}, {pte = 0x775303}, {pte = 0x771303}, {pte = 0x76d303}, {
+ pte = 0x0}, {pte = 0x765303}, {pte = 0x7c5303}, {pte = 0x501b01}, {
+ pte = 0x4f1b01}, {pte = 0x4edb01}, {pte = 0x0}, {pte = 0x4f9b01}, {
+ pte = 0x4fdb01}, {pte = 0x0} <repeats 2992 times>}
--- /dev/null
+November 23, 2004
+
+The following specification describes the SMSC LPC47B397-NC sensor chip
+(for which there is no public datasheet available). This document was
+provided by Craig Kelly (In-Store Broadcast Network) and edited/corrected
+by Mark M. Hoffman <mhoffman@lightlink.com>.
+
+* * * * *
+
+Methods for detecting the HP SIO and reading the thermal data on a dc7100.
+
+The thermal information on the dc7100 is contained in the SIO Hardware Monitor
+(HWM). The information is accessed through an index/data pair. The index/data
+pair is located at the HWM Base Address + 0 and the HWM Base Address + 1. The
+HWM Base address can be obtained from Logical Device 8, registers 0x60 (MSB)
+and 0x61 (LSB). Currently we are using 0x480 for the HWM Base Address and
+0x480 and 0x481 for the index/data pair.
+
+Reading temperature information.
+The temperature information is located in the following registers:
+Temp1 0x25 (Currently, this reflects the CPU temp on all systems).
+Temp2 0x26
+Temp3 0x27
+Temp4 0x80
+
+Programming Example
+The following is an example of how to read the HWM temperature registers:
+MOV DX,480H
+MOV AX,25H
+OUT DX,AL
+MOV DX,481H
+IN AL,DX
+
+AL contains the data in hex, the temperature in Celsius is the decimal
+equivalent.
+
+Ex: If AL contains 0x2A, the temperature is 42 degrees C.
+
+Reading tach information.
+The fan speed information is located in the following registers:
+ LSB MSB
+Tach1 0x28 0x29 (Currently, this reflects the CPU
+ fan speed on all systems).
+Tach2 0x2A 0x2B
+Tach3 0x2C 0x2D
+Tach4 0x2E 0x2F
+
+Important!!!
+Reading the tach LSB locks the tach MSB.
+The LSB Must be read first.
+
+How to convert the tach reading to RPM.
+The tach reading (TCount) is given by: (Tach MSB * 256) + (Tach LSB)
+The SIO counts the number of 90kHz (11.111us) pulses per revolution.
+RPM = 60/(TCount * 11.111us)
+
+Example:
+Reg 0x28 = 0x9B
+Reg 0x29 = 0x08
+
+TCount = 0x89B = 2203
+
+RPM = 60 / (2203 * 11.11111 E-6) = 2451 RPM
+
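+The same registers can also be read from a small userspace C program. The
+following is only an illustrative sketch (the 0x480/0x481 index/data pair and
+root privileges are assumed, as discussed above):
+
+/* Read Tach1 on the LPC47B397 HWM and convert the count to RPM.
+ * Compile with -O so the inb/outb inlines from <sys/io.h> are available;
+ * must be run as root for ioperm(). */
+#include <stdio.h>
+#include <sys/io.h>
+
+#define HWM_INDEX 0x480
+#define HWM_DATA  0x481
+
+static unsigned char hwm_read(unsigned char reg)
+{
+	outb(reg, HWM_INDEX);
+	return inb(HWM_DATA);
+}
+
+int main(void)
+{
+	unsigned int lsb, msb, tcount;
+
+	if (ioperm(HWM_INDEX, 2, 1) < 0) {
+		perror("ioperm");
+		return 1;
+	}
+	lsb = hwm_read(0x28);	/* LSB must be read first; it locks the MSB */
+	msb = hwm_read(0x29);
+	tcount = (msb << 8) | lsb;
+	if (tcount)
+		/* 90 kHz pulses: RPM = 60 / (TCount * 11.111us) */
+		printf("Tach1: %u RPM\n",
+		       (unsigned int)(60.0 / (tcount * 11.111e-6)));
+	return 0;
+}
+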
+Obtaining the SIO version.
+
+CONFIGURATION SEQUENCE
+To program the configuration registers, the following sequence must be followed:
+1. Enter Configuration Mode
+2. Configure the Configuration Registers
+3. Exit Configuration Mode.
+
+Enter Configuration Mode
+To place the chip into the Configuration State, the config key (0x55) is written
+to the CONFIG PORT (0x2E).
+
+Configuration Mode
+In configuration mode, the INDEX PORT is located at the CONFIG PORT address and
+the DATA PORT is at INDEX PORT address + 1.
+
+The desired configuration registers are accessed in two steps:
+a. Write the index of the Logical Device Number Configuration Register
+ (i.e., 0x07) to the INDEX PORT and then write the number of the
+ desired logical device to the DATA PORT.
+
+b. Write the address of the desired configuration register within the
+ logical device to the INDEX PORT and then write or read the config-
+ uration register through the DATA PORT.
+
+Note: If accessing the Global Configuration Registers, step (a) is not required.
+
+Exit Configuration Mode
+To exit the Configuration State, write 0xAA to the CONFIG PORT (0x2E).
+The chip returns to the RUN State. (This is important).
+
+Programming Example
+The following is an example of how to read the SIO Device ID located at 0x20
+
+; ENTER CONFIGURATION MODE
+MOV DX,02EH
+MOV AX,055H
+OUT DX,AL
+; GLOBAL CONFIGURATION REGISTER
+MOV DX,02EH
+MOV AL,20H
+OUT DX,AL
+; READ THE DATA
+MOV DX,02FH
+IN AL,DX
+; EXIT CONFIGURATION MODE
+MOV DX,02EH
+MOV AX,0AAH
+OUT DX,AL
+
+The registers of interest for identifying the SIO on the dc7100 are Device ID
+(0x20) and Device Rev (0x21).
+
+The Device ID will read 0x6F
+The Device Rev currently reads 0x01
+
+Obtaining the HWM Base Address.
+The following is an example of how to read the HWM Base Address located in
+Logical Device 8.
+
+; ENTER CONFIGURATION MODE
+MOV DX,02EH
+MOV AX,055H
+OUT DX,AL
+; CONFIGURE REGISTER CRE0,
+; LOGICAL DEVICE 8
+MOV DX,02EH
+MOV AL,07H
+OUT DX,AL ;Point to LD# Config Reg
+MOV DX,02FH
+MOV AL, 08H
+OUT DX,AL;Point to Logical Device 8
+;
+MOV DX,02EH
+MOV AL,60H
+OUT DX,AL ; Point to HWM Base Addr MSB
+MOV DX,02FH
+IN AL,DX ; Get MSB of HWM Base Addr
+; EXIT CONFIGURATION MODE
+MOV DX,02EH
+MOV AX,0AAH
+OUT DX,AL
DESCRIPTION:
-This module is a very simple fake I2C/SMBus driver. It implements three
-types of SMBus commands: write quick, (r/w) byte data, and (r/w) word data.
+This module is a very simple fake I2C/SMBus driver. It implements four
+types of SMBus commands: write quick, (r/w) byte, (r/w) byte data, and
+(r/w) word data.
No hardware is needed nor associated with this module. It will accept write
quick commands to all addresses; it will respond to the other commands (also
to all addresses) by reading from or writing to an array in memory. It will
also spam the kernel logs for every command it handles.
+A pointer register with auto-increment is implemented for all byte
+operations. This allows for continuous byte reads like those supported by
+EEPROMs, among others.
+
The typical use-case is like this:
1. load this module
2. use i2cset (from lm_sensors project) to pre-load some data
2 bootsect-loader
3 SYSLINUX
4 EtherBoot
+ 5 ELILO
+ 7 GRuB
+ 8 U-BOOT
Please contact <hpa@zytor.com> if you need a bootloader ID
value assigned.
"ide=reverse" : formerly called to pci sub-system, but now local.
+ "ide=nodma" : disable DMA globally for the IDE subsystem.
+
The following are valid ONLY on ide0, which usually corresponds
to the first ATA interface found on the particular host, and the defaults for
the base,ctl ports must not be altered.
--- /dev/null
+IP OVER INFINIBAND
+
+ The ib_ipoib driver is an implementation of the IP over InfiniBand
+ protocol as specified by the latest Internet-Drafts issued by the
+ IETF ipoib working group. It is a "native" implementation in the
+ sense of setting the interface type to ARPHRD_INFINIBAND and the
+ hardware address length to 20 (earlier proprietary implementations
+ masqueraded to the kernel as ethernet interfaces).
+
+Partitions and P_Keys
+
+ When the IPoIB driver is loaded, it creates one interface for each
+ port using the P_Key at index 0. To create an interface with a
+ different P_Key, write the desired P_Key into the main interface's
+ /sys/class/net/<intf name>/create_child file. For example:
+
+ echo 0x8001 > /sys/class/net/ib0/create_child
+
+ This will create an interface named ib0.8001 with P_Key 0x8001. To
+ remove a subinterface, use the "delete_child" file:
+
+ echo 0x8001 > /sys/class/net/ib0/delete_child
+
+ The P_Key for any interface is given by the "pkey" file, and the
+ main interface for a subinterface is in "parent."
+
+Debugging Information
+
+ By compiling the IPoIB driver with CONFIG_INFINIBAND_IPOIB_DEBUG set
+ to 'y', tracing messages are compiled into the driver. They are
+ turned on by setting the module parameters debug_level and
+ mcast_debug_level to 1. These parameters can be controlled at
+ runtime through files in /sys/module/ib_ipoib/.
+
+ CONFIG_INFINIBAND_IPOIB_DEBUG also enables the "ipoib_debugfs"
+ virtual filesystem. By mounting this filesystem, for example with
+
+ mkdir -p /ipoib_debugfs
+ mount -t ipoib_debugfs none /ipoib_debugfs
+
+ it is possible to get statistics about multicast groups from the
+ files /ipoib_debugfs/ib0_mcg and so on.
+
+ The performance impact of this option is negligible, so it
+ is safe to enable this option with debug_level set to 0 for normal
+ operation.
+
+ CONFIG_INFINIBAND_IPOIB_DEBUG_DATA enables even more debug output in
+ the data path when data_debug_level is set to 1. However, even with
+ the output disabled, enabling this configuration option will affect
+ performance, because it adds tests to the fast path.
+
+References
+
+ IETF IP over InfiniBand (ipoib) Working Group
+ http://ietf.org/html.charters/ipoib-charter.html
Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
==============================================================
-September 13, 2004
+November 17, 2004
Contents
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
-Adapters, version 3.2.x. This driver includes support for Itanium(TM)2 and
-EM64T systems.
-
+Adapters, version 3.3.x. This driver supports 2.4.x and 2.6.x kernels.
Identifying Your Adapter
========================
The latest release of ethtool can be found at:
http://sf.net/projects/gkernel.
- After ethtool is installed, ethtool-copy.h must be copied and renamed to
- ethtool.h in your kernel source tree at <linux_kernel_src>/include/linux.
- Backup the original ethtool.h as needed before copying. The driver then
- must be recompiled in order to take advantage of the latest ethtool
- features.
-
NOTE: This driver uses mii support from the kernel. As a result, when
there is no link, ethtool will report speed/duplex to be 10/half.
Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters
===============================================================
-September 13, 2004
+November 17, 2004
Contents
===============
This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family
-of Adapters, version 5.x.x. This driver includes support for Itanium(TM)2
-and EM64T systems.
+of Adapters, version 5.x.x.
For questions related to hardware requirements, refer to the documentation
supplied with your Intel PRO/1000 adapter. All hardware requirements listed
Default Value: 256
This value is the number of receive descriptors allocated by the driver.
Increasing this value allows the driver to buffer more incoming packets.
- Each descriptor is 16 bytes. A receive buffer is also allocated for each
- descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending
- on the MTU setting. The maximum MTU size is 16110.
+ Each descriptor is 16 bytes. A receive buffer is allocated for each
+ descriptor and can either be 2048 or 4096 bytes long, depending on the MTU
+ setting. An incoming packet can span one or more receive descriptors.
+ The maximum MTU size is 16110.
NOTE: MTU designates the frame size. It only needs to be set for Jumbo
Frames.
also be forced.
The AutoNeg parameter is used when more control is required over the auto-
-negotiation process. When this parameter is used, Speed and Duplex must not
-be specified. This parameter is a bitmap that specifies which speed and
-duplex settings are advertised to the link partner.
+negotiation process. When this parameter is used, Speed and Duplex parameters
+must not be specified. The following table describes supported values for the
+AutoNeg parameter:
-Bit 7 6 5 4 3 2 1 0
-Speed (Mbps) N/A N/A 1000 N/A 100 100 10 10
-Duplex Full Full Half Full Half
+Speed (Mbps)        1000    100     100     10      10
+Duplex              Full    Full    Half    Full    Half
+Value (in base 16)  0x20    0x08    0x04    0x02    0x01
-For example to limit the negotiated speed/duplex on the interface to 10 Mbps
-Half or Full duplex, set AutoNeg to 0x02:
- insmod e1000 AutoNeg=0x02
+Example: insmod e1000 AutoNeg=0x03 loads e1000 and advertises 10 Mbps full
+duplex and 10 Mbps half duplex for negotiation with the peer.
Note that setting AutoNeg does not guarantee that the board will link at the
highest specified speed or duplex mode, but the board will link at the
version 1.6 or later is required for this functionality.
The latest release of ethtool can be found from
- http://sf.net/projects/gkernel. After ethtool is installed,
- ethtool-copy.h must be copied and renamed to ethtool.h in your kernel
- source tree at <linux_kernel_src>/include/linux. Backup the original
- ethtool.h as needed before copying. The driver then must be recompiled
- in order to take advantage of the latest ethtool features.
+ http://sf.net/projects/gkernel.
NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
for a more complete ethtool feature set can be enabled by upgrading
--- /dev/null
+ =============================
+ NO-MMU MEMORY MAPPING SUPPORT
+ =============================
+
+The kernel has limited support for memory mapping under no-MMU conditions, such
+as are used in uClinux environments. From the userspace point of view, memory
+mapping is made use of in conjunction with the mmap() system call, the shmat()
+call and the execve() system call. From the kernel's point of view, execve()
+mapping is actually performed by the binfmt drivers, which call back into the
+mmap() routines to do the actual work.
+
+Memory mapping behaviour also involves the way fork(), vfork(), clone() and
+ptrace() work. Under uClinux there is no fork(), and clone() must be supplied
+the CLONE_VM flag.
+
+The behaviour is similar between the MMU and no-MMU cases, but not identical;
+and it's also much more restricted in the latter case:
+
+ (*) Anonymous mapping, MAP_PRIVATE
+
+ In the MMU case: VM regions backed by arbitrary pages; copy-on-write
+ across fork.
+
+ In the no-MMU case: VM regions backed by arbitrary contiguous runs of
+ pages.
+
+ (*) Anonymous mapping, MAP_SHARED
+
+ These behave very much like private mappings, except that they're
+ shared across fork() or clone() without CLONE_VM in the MMU case. Since
+ the no-MMU case doesn't support these, behaviour is identical to
+ MAP_PRIVATE there.
+
+ (*) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, !PROT_WRITE
+
+ In the MMU case: VM regions backed by pages read from file; changes to
+ the underlying file are reflected in the mapping; copied across fork.
+
+ In the no-MMU case: VM regions backed by arbitrary contiguous runs of
+ pages into which the appropriate bit of the file is read; any remaining
+ bit of the mapping is cleared; such mappings are shared if possible;
+ writes to the file do not affect the mapping; writes to the mapping are
+ visible in other processes (no MMU protection), but should not happen.
+
+ (*) File, MAP_PRIVATE, PROT_READ / PROT_EXEC, PROT_WRITE
+
+ In the MMU case: like the non-PROT_WRITE case, except that the pages in
+ question get copied before the write actually happens. From that point
+ on writes to that page in the file no longer get reflected into the
+ mapping's backing pages.
+
+ In the no-MMU case: works exactly as for the non-PROT_WRITE case.
+
+ (*) Regular file / blockdev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+
+ In the MMU case: VM regions backed by pages read from file; changes to
+ pages written back to file; writes to file reflected into pages backing
+ mapping; shared across fork.
+
+ In the no-MMU case: not supported.
+
+ (*) Memory backed regular file, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+
+ In the MMU case: As for ordinary regular files.
+
+ In the no-MMU case: The filesystem providing the memory-backed file
+ (such as ramfs or tmpfs) may choose to honour an open, truncate, mmap
+ sequence by providing a contiguous sequence of pages to map. In that
+ case, a shared-writable memory mapping will be possible. It will work
+ as for the MMU case. If the filesystem does not provide any such
+ support, then the mapping request will be denied.
+
+ (*) Memory backed chardev, MAP_SHARED, PROT_READ / PROT_EXEC / PROT_WRITE
+
+ In the MMU case: As for ordinary regular files.
+
+ In the no-MMU case: The character device driver may choose to honour
+ the mmap() by providing direct access to the underlying device if it
+ provides memory or quasi-memory that can be accessed directly. Examples
+ of such are frame buffers and flash devices. If the driver does not
+ provide any such support, then the mapping request will be denied.
+
+
+============================
+FURTHER NOTES ON NO-MMU MMAP
+============================
+
+ (*) A request for a private mapping of less than a page in size may not return
+ a page-aligned buffer. This is because the kernel calls kmalloc() to
+ allocate the buffer, not get_free_page().
+
+ (*) A list of all the mappings on the system is visible through /proc/maps in
+ no-MMU mode.
+
+ (*) Supplying MAP_FIXED or requesting a particular mapping address will
+ result in an error.
+
+ (*) Files mapped privately must have a read method provided by the driver or
+ filesystem so that the contents can be read into the memory allocated. An
+ error will result if they don't. This is most likely to be encountered
+ with character device files, pipes, fifos and sockets.
+
+
+============================================
+PROVIDING SHAREABLE CHARACTER DEVICE SUPPORT
+============================================
+
+To provide shareable character device support, a driver must provide a
+file->f_op->get_unmapped_area() operation. The mmap() routines will call this
+to get a proposed address for the mapping. This may return an error if it
+doesn't wish to honour the mapping because it's too long, at a weird offset,
+under some unsupported combination of flags or whatever.
+
+The vm_ops->close() routine will be invoked when the last mapping on a chardev
+is removed. An existing mapping will be shared, partially or not, if possible
+without notifying the driver.
+
+It is permitted also for the file->f_op->get_unmapped_area() operation to
+return -ENOSYS. This will be taken to mean that this operation just doesn't
+want to handle it, despite the fact it's got an operation. For instance, it
+might try directing the call to a secondary driver which turns out not to
+implement it. Such is the case for the framebuffer driver which attempts to
+direct the call to the device-specific driver.
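+
+As a purely illustrative sketch (mydev_base, mydev_size and the other
+mydev_* names are placeholders invented for this document, not part of
+any real driver), such an operation might look like:
+
+    #include <linux/fs.h>
+    #include <linux/mm.h>
+    #include <linux/errno.h>
+
+    /* placeholders: filled in when the device is probed */
+    static void *mydev_base;
+    static unsigned long mydev_size;
+
+    static unsigned long mydev_get_unmapped_area(struct file *file,
+                                                 unsigned long addr,
+                                                 unsigned long len,
+                                                 unsigned long pgoff,
+                                                 unsigned long flags)
+    {
+            unsigned long offset = pgoff << PAGE_SHIFT;
+
+            /* refuse mappings that fall outside the device memory */
+            if (offset > mydev_size || len > mydev_size - offset)
+                    return (unsigned long) -EINVAL;
+
+            /* propose the directly addressable device memory itself */
+            return (unsigned long) mydev_base + offset;
+    }
+
+    static struct file_operations mydev_fops = {
+            .get_unmapped_area = mydev_get_unmapped_area,
+            /* plus open(), read(), mmap(), ... as appropriate */
+    };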
+
+
+==============================================
+PROVIDING SHAREABLE MEMORY-BACKED FILE SUPPORT
+==============================================
+
+Provision of shared mappings on memory backed files is similar to the provision
+of support for shared mapped character devices. The main difference is that the
+filesystem providing the service will probably allocate a contiguous collection
+of pages and permit mappings to be made on that.
+
+It is recommended that a truncate operation applied to such a file that
+increases the file size, if that file is empty, be taken as a request to gather
+enough pages to honour a mapping. This is required to support POSIX shared
+memory.
+
+Memory backed devices are indicated by the mapping's backing device info having
+the memory_backed flag set.
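+
+A brief sketch of how such a filesystem might advertise this (the
+example_bdi name is a placeholder; only the memory_backed flag is the
+point here):
+
+    #include <linux/backing-dev.h>
+
+    static struct backing_dev_info example_bdi = {
+            .ra_pages      = 0,     /* no readahead on memory backed files */
+            .memory_backed = 1,     /* the mapping's pages live in memory */
+    };
+
+    /* ... and, when the filesystem creates an inode:
+     *         inode->i_mapping->backing_dev_info = &example_bdi;
+     */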
*/
void pm_unregister_all(pm_callback cback);
-/*
- * Device idle/use detection
- *
- * In general, drivers for all devices should call "pm_access"
- * before accessing the hardware (ie. before reading or modifying
- * a hardware register). Request or packet-driven drivers should
- * additionally call "pm_dev_idle" when a device is not being used.
- *
- * Examples:
- * 1) A keyboard driver would call pm_access whenever a key is pressed
- * 2) A network driver would call pm_access before submitting
- * a packet for transmit or receive and pm_dev_idle when its
- * transfer and receive queues are empty.
- * 3) A VGA driver would call pm_access before it accesses any
- * of the video controller registers
- *
- * Ultimately, the PM policy manager uses the access and idle
- * information to decide when to suspend individual devices
- * or when to suspend the entire system
- */
-
-/*
- * Description: Update device access time and wake up device, if necessary
- *
- * Parameters:
- * dev - PM device previously returned from pm_register
- *
- * Details: If called from an interrupt handler pm_access updates
- * access time but should never need to wake up the device
- * (if device is generating interrupts, it should be awake
- * already) This is important as we can not wake up
- * devices from an interrupt handler.
- */
-void pm_access(struct pm_dev *dev);
-
-/*
- * Description: Identify device as currently being idle
- *
- * Parameters:
- * dev - PM device previously returned from pm_register
- *
- * Details: A call to pm_dev_idle might signal to the policy manager
- * to put a device to sleep. If a new device request arrives
- * between the call to pm_dev_idle and the pm_callback
- * callback, the driver should fail the pm_callback request.
- */
-void pm_dev_idle(struct pm_dev *dev);
-
/*
* Power management request callback
*
There is currently no way to know what states a device or driver
supports a priori. This will change in the future.
+pm_message_t meaning
+
+pm_message_t has two fields: event ("major") and flags. If the driver
+does not know the event code, it aborts the request, returning an error. Some
+drivers may need to deal with special cases based on the actual type
+of suspend operation being done at the system level. This is why
+there are flags.
+
+Event codes are:
+
+ON -- no need to do anything except special cases like broken
+HW.
+
+# NOTIFICATION -- pretty much same as ON?
+
+FREEZE -- stop DMA and interrupts, and be prepared to reinit HW from
+scratch. That probably means stop accepting upstream requests, the
+actual policy of what to do with them being specific to a given
+driver. It's acceptable for a network driver to just drop packets
+while a block driver is expected to block the queue so no request is
+lost. (Use IDE as an example on how to do that). FREEZE requires no
+power state change, and it's expected for drivers to be able to
+quickly transition back to operating state.
+
+SUSPEND -- like FREEZE, but also put hardware into low-power state. If
+there's need to distinguish several levels of sleep, additional flag
+is probably best way to do that.
+
+Transitions are only from a resumed state to a suspended state, never
+between 2 suspended states. (ON -> FREEZE or ON -> SUSPEND can happen,
+FREEZE -> SUSPEND or SUSPEND -> FREEZE can not).
+
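+As a sketch only (the event constant names and the exact suspend
+prototype are illustrative here, and mydev_quiesce()/mydev_power_down()
+are placeholders), a driver that follows the rules above would do
+something like:
+
+    static int mydev_suspend(struct device *dev, pm_message_t msg)
+    {
+            switch (msg.event) {
+            case ON:            /* nothing to do, hardware keeps running */
+                    return 0;
+            case FREEZE:        /* stop DMA and interrupts, keep power */
+                    mydev_quiesce(dev);
+                    return 0;
+            case SUSPEND:       /* quiesce, then enter low-power state */
+                    mydev_quiesce(dev);
+                    mydev_power_down(dev);
+                    return 0;
+            default:            /* unknown event code: abort the request */
+                    return -EINVAL;
+            }
+    }
+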
+All events are:
+
+[NOTE NOTE NOTE: If you are driver author, you should not care; you
+should only look at event, and ignore flags.]
+
+#Prepare for suspend -- userland is still running but we are going to
+#enter suspend state. This gives drivers a chance to load firmware from
+#disk and store it in memory, or do other activities that require
+#operating userland, ability to kmalloc GFP_KERNEL, etc... All of these
+#are forbidden once the suspend dance is started. event = ON, flags =
+#PREPARE_TO_SUSPEND
+
+Apm standby -- prepare for APM event. Quiesce devices to make life
+easier for APM BIOS. event = FREEZE, flags = APM_STANDBY
+
+Apm suspend -- same as APM_STANDBY, but we should probably avoid
+spinning down disks. event = FREEZE, flags = APM_SUSPEND
+
+System halt, reboot -- quiesce devices to make life easier for BIOS. event
+= FREEZE, flags = SYSTEM_HALT or SYSTEM_REBOOT
+
+System shutdown -- at least disks need to be spun down, or data may be
+lost. Quiesce devices, just to make life easier for BIOS. event =
+FREEZE, flags = SYSTEM_SHUTDOWN
+
+Kexec -- turn off DMAs and put hardware into some state where new
+kernel can take over. event = FREEZE, flags = KEXEC
+
+Powerdown at end of swsusp -- very similar to SYSTEM_SHUTDOWN, except wake
+may need to be enabled on some devices. This actually has at least 3
+subtypes, system can reboot, enter S4 and enter S5 at the end of
+swsusp. event = FREEZE, flags = SWSUSP and one of SYSTEM_REBOOT,
+SYSTEM_SHUTDOWN, SYSTEM_S4
+
+Suspend to ram -- put devices into low power state. event = SUSPEND,
+flags = SUSPEND_TO_RAM
+
+Freeze for swsusp snapshot -- stop DMA and interrupts. No need to put
+devices into low power mode, but you must be able to reinitialize
+device from scratch in resume method. This has two flavors: it's done
+once on suspending kernel, once on resuming kernel. event = FREEZE,
+flags = DURING_SUSPEND or DURING_RESUME
+
+Device detach requested from /sys -- deinitialize device; probably same as
+SYSTEM_SHUTDOWN, I do not understand this one too much. Probably event
+= FREEZE, flags = DEV_DETACH.
+
+#These are not really events sent:
+#
+#System fully on -- device is working normally; this is probably never
+#passed to suspend() method... event = ON, flags = 0
+#
+#Ready after resume -- userland is now running, again. Time to free any
+#memory you ate during prepare to suspend... event = ON, flags =
+#READY_AFTER_RESUME
+#
Driver Detach Power Management
The driver core will not call any extra functions when binding the
device to the driver.
+pm_message_t meaning
+
+pm_message_t has two fields: event ("major") and flags. If the driver
+does not know the event code, it aborts the request, returning an error. Some
+drivers may need to deal with special cases based on the actual type
+of suspend operation being done at the system level. This is why
+there are flags.
+
+Event codes are:
+
+ON -- no need to do anything except special cases like broken
+HW.
+
+# NOTIFICATION -- pretty much same as ON?
+
+FREEZE -- stop DMA and interrupts, and be prepared to reinit HW from
+scratch. That probably means stop accepting upstream requests, the
+actual policy of what to do with them being specific to a given
+driver. It's acceptable for a network driver to just drop packets
+while a block driver is expected to block the queue so no request is
+lost. (Use IDE as an example on how to do that). FREEZE requires no
+power state change, and it's expected for drivers to be able to
+quickly transition back to operating state.
+
+SUSPEND -- like FREEZE, but also put hardware into low-power state. If
+there's need to distinguish several levels of sleep, additional flag
+is probably best way to do that.
+
+Transitions are only from a resumed state to a suspended state, never
+between 2 suspended states. (ON -> FREEZE or ON -> SUSPEND can happen,
+FREEZE -> SUSPEND or SUSPEND -> FREEZE can not).
+
+All events are:
+
+[NOTE NOTE NOTE: If you are driver author, you should not care; you
+should only look at event, and ignore flags.]
+
+#Prepare for suspend -- userland is still running but we are going to
+#enter suspend state. This gives drivers a chance to load firmware from
+#disk and store it in memory, or do other activities that require
+#operating userland, ability to kmalloc GFP_KERNEL, etc... All of these
+#are forbidden once the suspend dance is started. event = ON, flags =
+#PREPARE_TO_SUSPEND
+
+Apm standby -- prepare for APM event. Quiesce devices to make life
+easier for APM BIOS. event = FREEZE, flags = APM_STANDBY
+
+Apm suspend -- same as APM_STANDBY, but we should probably avoid
+spinning down disks. event = FREEZE, flags = APM_SUSPEND
+
+System halt, reboot -- quiesce devices to make life easier for BIOS. event
+= FREEZE, flags = SYSTEM_HALT or SYSTEM_REBOOT
+
+System shutdown -- at least disks need to be spun down, or data may be
+lost. Quiesce devices, just to make life easier for BIOS. event =
+FREEZE, flags = SYSTEM_SHUTDOWN
+
+Kexec -- turn off DMAs and put hardware into some state where new
+kernel can take over. event = FREEZE, flags = KEXEC
+
+Powerdown at end of swsusp -- very similar to SYSTEM_SHUTDOWN, except wake
+may need to be enabled on some devices. This actually has at least 3
+subtypes, system can reboot, enter S4 and enter S5 at the end of
+swsusp. event = FREEZE, flags = SWSUSP and one of SYSTEM_REBOOT,
+SYSTEM_SHUTDOWN, SYSTEM_S4
+
+Suspend to ram -- put devices into low power state. event = SUSPEND,
+flags = SUSPEND_TO_RAM
+
+Freeze for swsusp snapshot -- stop DMA and interrupts. No need to put
+devices into low power mode, but you must be able to reinitialize
+device from scratch in resume method. This has two flavors: it's done
+once on suspending kernel, once on resuming kernel. event = FREEZE,
+flags = DURING_SUSPEND or DURING_RESUME
+
+Device detach requested from /sys -- deinitialize device; probably same as
+SYSTEM_SHUTDOWN, I do not understand this one too much. Probably event
+= FREEZE, flags = DEV_DETACH.
+
+#These are not really events sent:
+#
+#System fully on -- device is working normally; this is probably never
+#passed to suspend() method... event = ON, flags = 0
+#
+#Ready after resume -- userland is now running, again. Time to free any
+#memory you ate during prepare to suspend... event = ON, flags =
+#READY_AFTER_RESUME
+#
00-INDEX
- this file
+cpu_features.txt
+ - info on how we support a variety of CPUs with minimal compile-time
+ options.
ppc_htab.txt
- info about the Linux/PPC /proc/ppc_htab entry
smp.txt
--- /dev/null
+
+
+ PCI Bus EEH Error Recovery
+ --------------------------
+ Linas Vepstas
+ <linas@austin.ibm.com>
+ 12 January 2005
+
+
+Overview:
+---------
+The IBM POWER-based pSeries and iSeries computers include PCI bus
+controller chips that have extended capabilities for detecting and
+reporting a large variety of PCI bus error conditions. These features
+go under the name of "EEH", for "Extended Error Handling". The EEH
+hardware features allow PCI bus errors to be cleared and a PCI
+card to be "rebooted", without also having to reboot the operating
+system.
+
+This is in contrast to traditional PCI error handling, where the
+PCI chip is wired directly to the CPU, and an error would cause
+a CPU machine-check/check-stop condition, halting the CPU entirely.
+Another "traditional" technique is to ignore such errors, which
+can lead to data corruption, both of user data or of kernel data,
+hung/unresponsive adapters, or system crashes/lockups. Thus,
+the idea behind EEH is that the operating system can become more
+reliable and robust by protecting it from PCI errors, and giving
+the OS the ability to "reboot"/recover individual PCI devices.
+
+Future systems from other vendors, based on the PCI-E specification,
+may contain similar features.
+
+
+Causes of EEH Errors
+--------------------
+EEH was originally designed to guard against hardware failure, such
+as PCI cards dying from heat, humidity, dust, vibration and bad
+electrical connections. The vast majority of EEH errors seen in
+"real life" are due to eithr poorly seated PCI cards, or,
+unfortunately quite commonly, due device driver bugs, device firmware
+bugs, and sometimes PCI card hardware bugs.
+
+The most common software bug is one that causes the device to
+attempt to DMA to a location in system memory that has not been
+reserved for DMA access for that card. This is a powerful feature,
+as it prevents what otherwise would have been silent memory
+corruption caused by the bad DMA. A number of device driver
+bugs have been found and fixed in this way over the past few
+years. Other possible causes of EEH errors include data or
+address line parity errors (for example, due to poor electrical
+connectivity due to a poorly seated card), and PCI-X split-completion
+errors (due to software, device firmware, or device PCI hardware bugs).
+The vast majority of "true hardware failures" can be cured by
+physically removing and re-seating the PCI card.
+
+
+Detection and Recovery
+----------------------
+In the following discussion, a generic overview of how to detect
+and recover from EEH errors will be presented. This is followed
+by an overview of how the current implementation in the Linux
+kernel does it. The actual implementation is subject to change,
+and some of the finer points are still being debated. These
+may in turn be swayed if or when other architectures implement
+similar functionality.
+
+When a PCI Host Bridge (PHB, the bus controller connecting the
+PCI bus to the system CPU electronics complex) detects a PCI error
+condition, it will "isolate" the affected PCI card. Isolation
+will block all writes (either to the card from the system, or
+from the card to the system), and it will cause all reads to
+return all-ff's (0xff, 0xffff, 0xffffffff for 8/16/32-bit reads).
+This value was chosen because it is the same value you would
+get if the device was physically unplugged from the slot.
+This includes access to PCI memory, I/O space, and PCI config
+space. Interrupts, however, will continue to be delivered.
+
+Detection and recovery are performed with the aid of ppc64
+firmware. The programming interfaces in the Linux kernel
+into the firmware are referred to as RTAS (Run-Time Abstraction
+Services). The Linux kernel does not (should not) access
+the EEH function in the PCI chipsets directly, primarily because
+there are a number of different chipsets out there, each with
+different interfaces and quirks. The firmware provides a
+uniform abstraction layer that will work with all pSeries
+and iSeries hardware (and be forwards-compatible).
+
+If the OS or device driver suspects that a PCI slot has been
+EEH-isolated, there is a firmware call it can make to determine if
+this is the case. If so, then the device driver should put itself
+into a consistent state (given that it won't be able to complete any
+pending work) and start recovery of the card. Recovery normally
+would consist of resetting the PCI device (holding the PCI #RST
+line high for two seconds), followed by setting up the device
+config space (the base address registers (BAR's), latency timer,
+cache line size, interrupt line, and so on). This is followed by a
+reinitialization of the device driver. In a worst-case scenario,
+the power to the card can be toggled, at least on hot-plug-capable
+slots. In principle, layers far above the device driver probably
+do not need to know that the PCI card has been "rebooted" in this
+way; ideally, there should be at most a pause in Ethernet/disk/USB
+I/O while the card is being reset.
+
+If the card cannot be recovered after three or four resets, the
+kernel/device driver should assume the worst-case scenario, that the
+card has died completely, and report this error to the sysadmin.
+In addition, error messages are reported through RTAS and also through
+syslogd (/var/log/messages) to alert the sysadmin of PCI resets.
+The correct way to deal with failed adapters is to use the standard
+PCI hotplug tools to remove and replace the dead card.
+
+
+Current PPC64 Linux EEH Implementation
+--------------------------------------
+At this time, a generic EEH recovery mechanism has been implemented,
+so that individual device drivers do not need to be modified to support
+EEH recovery. This generic mechanism piggy-backs on the PCI hotplug
+infrastructure, and percolates events up through the hotplug/udev
+infrastructure. Following is a detailed description of how this is
+accomplished.
+
+EEH must be enabled in the PHB's very early during the boot process,
+and if a PCI slot is hot-plugged. The former is performed by
+eeh_init() in arch/ppc64/kernel/eeh.c, and the latter by
+drivers/pci/hotplug/pSeries_pci.c calling in to the eeh.c code.
+EEH must be enabled before a PCI scan of the device can proceed.
+Current Power5 hardware will not work unless EEH is enabled,
+although older Power4 systems can run with it disabled. Effectively,
+EEH can no longer be turned off. PCI devices *must* be
+registered with the EEH code; the EEH code needs to know about
+the I/O address ranges of the PCI device in order to detect an
+error. Given an arbitrary address, the routine
+pci_get_device_by_addr() will find the pci device associated
+with that address (if any).
+
+The default include/asm-ppc64/io.h macros readb(), inb(), insb(),
+etc. include a check to see if the I/O read returned all-0xff's.
+If so, these make a call to eeh_dn_check_failure(), which in turn
+asks the firmware if the all-ff's value is the sign of a true EEH
+error. If it is not, processing continues as normal. The grand
+total number of these false alarms or "false positives" can be
+seen in /proc/ppc64/eeh (subject to change). Normally, almost
+all of these occur during boot, when the PCI bus is scanned, where
+a large number of 0xff reads are part of the bus scan procedure.
+
+If a frozen slot is detected, code in arch/ppc64/kernel/eeh.c will
+print a stack trace to syslog (/var/log/messages). This stack trace
+has proven to be very useful to device-driver authors for finding
+out at what point the EEH error was detected, as the error itself
+usually occurs slightly beforehand.
+
+Next, it uses the Linux kernel notifier chain/work queue mechanism to
+allow any interested parties to find out about the failure. Device
+drivers, or other parts of the kernel, can use
+eeh_register_notifier(struct notifier_block *) to find out about EEH
+events. The event will include a pointer to the pci device, the
+device node and some state info. Receivers of the event can "do as
+they wish"; the default handler will be described further in this
+section.
+
+To assist in the recovery of the device, eeh.c exports the
+following functions:
+
+rtas_set_slot_reset() -- assert the PCI #RST line for 1/8th of a second
+rtas_configure_bridge() -- ask firmware to configure any PCI bridges
+ located topologically under the pci slot.
+eeh_save_bars() and eeh_restore_bars(): save and restore the PCI
+ config-space info for a device and any devices under it.
+
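+As an illustrative sketch only (the mydrv_* names are placeholders, and
+the recovery helpers above are only named in comments because their full
+prototypes are not given here), a driver interested in these events
+could do:
+
+    #include <linux/notifier.h>
+
+    static int mydrv_eeh_event(struct notifier_block *nb,
+                               unsigned long action, void *data)
+    {
+            /* 'data' carries the pci device, device node and state info.
+             * A recovery path would typically use eeh_save_bars(),
+             * rtas_set_slot_reset(), rtas_configure_bridge() and
+             * eeh_restore_bars() before restarting the driver. */
+            return NOTIFY_OK;
+    }
+
+    static struct notifier_block mydrv_eeh_nb = {
+            .notifier_call = mydrv_eeh_event,
+    };
+
+    /* in the driver's init code: */
+    eeh_register_notifier(&mydrv_eeh_nb);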
+
+A handler for the EEH notifier_block events is implemented in
+drivers/pci/hotplug/pSeries_pci.c, called handle_eeh_events().
+It saves the device BAR's and then calls rpaphp_unconfig_pci_adapter().
+This last call causes the device driver for the card to be stopped,
+which causes hotplug events to go out to user space. This triggers
+user-space scripts that might issue commands such as "ifdown eth0"
+for ethernet cards, and so on. This handler then sleeps for 5 seconds,
+hoping to give the user-space scripts enough time to complete.
+It then resets the PCI card, reconfigures the device BAR's, and
+any bridges underneath. It then calls rpaphp_enable_pci_slot(),
+which restarts the device driver and triggers more user-space
+events (for example, calling "ifup eth0" for ethernet cards).
+
+
+Device Shutdown and User-Space Events
+-------------------------------------
+This section documents what happens when a pci slot is unconfigured,
+focusing on how the device driver gets shut down, and on how the
+events get delivered to user-space scripts.
+
+Following is an example sequence of events that cause a device driver
+close function to be called during the first phase of an EEH reset.
+The following sequence is an example of the pcnet32 device driver.
+
+ rpa_php_unconfig_pci_adapter (struct slot *) // in rpaphp_pci.c
+ {
+ calls
+ pci_remove_bus_device (struct pci_dev *) // in /drivers/pci/remove.c
+ {
+ calls
+ pci_destroy_dev (struct pci_dev *)
+ {
+ calls
+ device_unregister (&dev->dev) // in /drivers/base/core.c
+ {
+ calls
+ device_del (struct device *)
+ {
+ calls
+ bus_remove_device() // in /drivers/base/bus.c
+ {
+ calls
+ device_release_driver()
+ {
+ calls
+ struct device_driver->remove() which is just
+ pci_device_remove() // in /drivers/pci/pci_driver.c
+ {
+ calls
+ struct pci_driver->remove() which is just
+ pcnet32_remove_one() // in /drivers/net/pcnet32.c
+ {
+ calls
+ unregister_netdev() // in /net/core/dev.c
+ {
+ calls
+ dev_close() // in /net/core/dev.c
+ {
+ calls dev->stop();
+ which is just pcnet32_close() // in pcnet32.c
+ {
+ which does what you wanted
+ to stop the device
+ }
+ }
+ }
+ which
+ frees pcnet32 device driver memory
+ }
+ }}}}}}
+
+
+ in drivers/pci/pci_driver.c,
+ struct device_driver->remove() is just pci_device_remove()
+ which calls struct pci_driver->remove() which is pcnet32_remove_one()
+ which calls unregister_netdev() (in net/core/dev.c)
+ which calls dev_close() (in net/core/dev.c)
+ which calls dev->stop() which is pcnet32_close()
+ which then does the appropriate shutdown.
+
+---
+Following is the analogous stack trace for events sent to user-space
+when the pci device is unconfigured.
+
+rpa_php_unconfig_pci_adapter() { // in rpaphp_pci.c
+ calls
+ pci_remove_bus_device (struct pci_dev *) { // in /drivers/pci/remove.c
+ calls
+ pci_destroy_dev (struct pci_dev *) {
+ calls
+ device_unregister (&dev->dev) { // in /drivers/base/core.c
+ calls
+ device_del(struct device * dev) { // in /drivers/base/core.c
+ calls
+ kobject_del() { //in /libs/kobject.c
+ calls
+ kobject_hotplug() { // in /libs/kobject.c
+ calls
+ kset_hotplug() { // in /lib/kobject.c
+ calls
+ kset->hotplug_ops->hotplug() which is really just
+ a call to
+ dev_hotplug() { // in /drivers/base/core.c
+ calls
+ dev->bus->hotplug() which is really just a call to
+ pci_hotplug () { // in drivers/pci/hotplug.c
+ which prints device name, etc....
+ }
+ }
+ then kset_hotplug() calls
+ call_usermodehelper () with
+ argv[0]=hotplug_path[] which is "/sbin/hotplug"
+ --> event to userspace,
+ }
+ }
+ kobject_del() then calls sysfs_remove_dir(), which would
+ trigger any user-space daemon that was watching /sysfs,
+ and notice the delete event.
+
+
+Pro's and Con's of the Current Design
+-------------------------------------
+There are several issues with the current EEH software recovery design,
+which may be addressed in future revisions. But first, note that the
+big plus of the current design is that no changes need to be made to
+individual device drivers, so that the current design throws a wide net.
+The biggest negative of the design is that it potentially disturbs
+network daemons and file systems that didn't need to be disturbed.
+
+-- A minor complaint is that resetting the network card causes
+ user-space back-to-back ifdown/ifup burps that potentially disturb
+ network daemons that didn't even need to know that the pci
+ card was being rebooted.
+
+-- A more serious concern is that the same reset, for SCSI devices,
+ causes havoc to mounted file systems. Scripts cannot post-facto
+ unmount a file system without flushing pending buffers, but this
+ is impossible, because I/O has already been stopped. Thus,
+ ideally, the reset should happen at or below the block layer,
+ so that the file systems are not disturbed.
+
+ Reiserfs does not tolerate errors returned from the block device.
+ Ext3fs seems to be tolerant, retrying reads/writes until it does
+ succeed. Both have been only lightly tested in this scenario.
+
+ The SCSI-generic subsystem already has built-in code for performing
+ SCSI device resets, SCSI bus resets, and SCSI host-bus-adapter
+ (HBA) resets. These are cascaded into a chain of attempted
+ resets if a SCSI command fails. These are completely hidden
+ from the block layer. It would be very natural to add an EEH
+ reset into this chain of events.
+
+-- If a SCSI error occurs for the root device, all is lost unless
+ the sysadmin had the foresight to run /bin, /sbin, /etc, /var
+ and so on, out of ramdisk/tmpfs.
+
+
+Conclusions
+-----------
+There's forward progress ...
+
+
+* NOTE - this is an unmaintained driver. The original author cannot be located.
+
+SDL Communications is now SBS Technologies, and does not have any
+information on these ancient ISA cards on their website.
+
+James Nelson <james4765@gmail.com> - 12-12-2004
+
This is the README for RISCom/8 multi-port serial driver
- (C) 1994-1996 D.Gorodchanin (pgmdsg@ibi.com)
+ (C) 1994-1996 D.Gorodchanin
See file LICENSE for terms and conditions.
NOTE: English is not my native language.
1) This driver can support up to 4 boards at a time.
Use string "riscom8=0xXXX,0xXXX,0xXXX,0xXXX" at LILO prompt, for
setting I/O base addresses for boards. If you compile driver
- as module use insmod options "iobase=0xXXX iobase1=0xXXX iobase2=..."
+ as a module, use modprobe options "iobase=0xXXX iobase1=0xXXX iobase2=..."
2) The driver partially supports the famous 'setserial' program; you can use almost
any of its options, excluding port & irq settings.
3) There are some misc. defines at the beginning of riscom8.c, please read the
comments and try to change some of them in case of problems.
-
+
4) I consider the current state of the driver as BETA.
- If you REALLY think you found a bug, send me e-mail, I hope I'll
- fix it. For any other problems please ask support@sdlcomm.com.
5) SDL Communications WWW page is http://www.sdlcomm.com.
-6) You can use the script at the end of this file to create RISCom/8 devices.
+6) You can use the MAKEDEV program to create RISCom/8 /dev/ttyL* entries.
7) Minor numbers for first board are 0-7, for second 8-15, etc.
22 Apr 1996.
-
--------------------------------cut here-------------------------------------
-#!/bin/bash
-NORMAL_DEVICE=/dev/ttyL
-CALLOUT_DEVICE=/dev/cuL
-NORMAL_MAJOR=48
-CALLOUT_MAJOR=49
-
-echo "Creating devices... "
-for i in 0 1 2 3; do
- echo "Board No $[$i+1]"
- for j in 0 1 2 3 4 5 6 7; do
- k=$[ 8 * $i + $j]
- rm -f $NORMAL_DEVICE$k
- mknod $NORMAL_DEVICE$k c $NORMAL_MAJOR $k
- chmod a+rw $NORMAL_DEVICE$k
- echo -n $NORMAL_DEVICE$k" "
- rm -f $CALLOUT_DEVICE$k
- mknod $CALLOUT_DEVICE$k c $CALLOUT_MAJOR $k
- chmod a+rw $CALLOUT_DEVICE$k
- echo $CALLOUT_DEVICE$k
- done
-done
-echo "done."
--------------------------------cut here-------------------------------------
--- /dev/null
+Sat Jan 18 15:51:45 1997 Richard Henderson <rth@tamu.edu>
+
+ * Don't play with usage_count directly, instead hand around
+ the module header and use the module macros.
+
+Fri May 17 00:00:00 1996 Leonard N. Zubkoff <lnz@dandelion.com>
+
+ * BusLogic Driver Version 2.0.3 Released.
+
+Tue Apr 16 21:00:00 1996 Leonard N. Zubkoff <lnz@dandelion.com>
+
+ * BusLogic Driver Version 1.3.2 Released.
+
+Sun Dec 31 23:26:00 1995 Leonard N. Zubkoff <lnz@dandelion.com>
+
+ * BusLogic Driver Version 1.3.1 Released.
+
+Fri Nov 10 15:29:49 1995 Leonard N. Zubkoff <lnz@dandelion.com>
+
+ * Released new BusLogic driver.
+
+Wed Aug 9 22:37:04 1995 Andries Brouwer <aeb@cwi.nl>
+
+ As a preparation for new device code, separated the various
+ functions the request->dev field had into the device proper,
+ request->rq_dev and a status field request->rq_status.
+
+ The 2nd argument of bios_param is now a kdev_t.
+
+Wed Jul 19 10:43:15 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * scsi.c (scsi_proc_info): /proc/scsi/scsi now also lists all
+ attached devices.
+
+ * scsi_proc.c (proc_print_scsidevice): Added. Used by scsi.c and
+ eata_dma_proc.c to produce some device info for /proc/scsi.
+
+ * eata_dma.c (eata_queue)(eata_int_handler)(eata_scsi_done):
+ Changed handling of internal SCSI commands sent to the HBA.
+
+
+Wed Jul 19 10:09:17 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.11 released.
+
+ * eata_dma.c (eata_queue)(eata_int_handler): Added code to do
+ command latency measurements if requested by root through
+ /proc/scsi interface.
+ Throughout Use HZ constant for time references.
+
+ * eata_pio.c: Use HZ constant for time references.
+
+ * aic7xxx.c, aic7xxx.h, aic7xxx_asm.c: Changed copyright from BSD
+ to GNU style.
+
+ * scsi.h: Added READ_12 command opcode constant
+
+Wed Jul 19 09:25:30 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.10 released.
+
+ * scsi_proc.c (dispatch_scsi_info): Removed unused variable.
+
+Wed Jul 19 09:25:30 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.9 released.
+
+ * scsi.c Blacklist concept expanded to 'support' more device
+ deficiencies. blacklist[] renamed to device_list[]
+ (scan_scsis): Code cleanup.
+
+ * scsi_debug.c (scsi_debug_proc_info): Added support to control
+ device lockup simulation via /proc/scsi interface.
+
+
+Wed Jul 19 09:22:34 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.7 released.
+
+ * scsi_proc.c: Fixed a number of bugs in directory handling
+
+Wed Jul 19 09:18:28 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.5 released.
+
+ * Native wide, multichannel and /proc/scsi support now in official
+ kernel distribution.
+
+ * scsi.c/h, hosts.c/h et al reindented to increase readability
+ (especially on 80 column wide terminals).
+
+ * scsi.c, scsi_proc.c, ../../fs/proc/inode.c: Added
+ /proc/scsi/scsi which allows root to scan for hotplugged devices.
+
+ * scsi.c (scsi_proc_info): Added, to support /proc/scsi/scsi.
+ (scan_scsis): Added some 'spaghetti' code to allow scanning for
+ single devices.
+
+
+Thu Jun 20 15:20:27 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * proc.c: Renamed to scsi_proc.c
+
+Mon Jun 12 20:32:45 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * Linux 1.3.0 released.
+
+Mon May 15 19:33:14 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * scsi.c: Added native multichannel and wide scsi support.
+
+ * proc.c (dispatch_scsi_info) (build_proc_dir_hba_entries):
+ Updated /proc/scsi interface.
+
+Thu May 4 17:58:48 1995 Michael Neuffer <neuffer@goofy.zdv.uni-mainz.de>
+
+ * sd.c (requeue_sd_request): Zero out the scatterlist only if
+ scsi_malloc returned memory for it.
+
+ * eata_dma.c (register_HBA) (eata_queue): Add support for
+ large scatter/gather tables and set use_clustering accordingly
+
+ * hosts.c: Make use_clustering changeable in the Scsi_Host structure.
+
+Wed Apr 12 15:25:52 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.2.5 released.
+
+ * buslogic.c: Update to version 1.15 (From Leonard N. Zubkoff).
+ Fixed interrupt routine to avoid races when handling multiple
+ complete commands per interrupt. Seems to come up with faster
+ cards.
+
+ * eata_dma.c: Update to 2.3.5r. Modularize. Improved error handling
+ throughout and fixed bug interrupt routine which resulted in shifted
+ status bytes. Added blink LED state checks for ISA and EISA HBAs.
+ Memory management bug seems to have disappeared ==> increasing
+ C_P_L_CURRENT_MAX to 16 for now. Decreasing C_P_L_DIV to 3 for
+ performance reasons.
+
+ * scsi.c: If we get a FMK, EOM, or ILI when attempting to scan
+ the bus, assume that it was just noise on the bus, and ignore
+ the device.
+
+ * scsi.h: Update and add a bunch of missing commands which we
+ were never using.
+
+ * sd.c: Use restore_flags in do_sd_request - this may result in
+ latency conditions, but it gets rid of races and crashes.
+ Do not save flags again when searching for a second command to
+ queue.
+
+ * st.c: Use bytes, not STP->buffer->buffer_size when reading
+ from tape.
+
+
+Tue Apr 4 09:42:08 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.2.4 released.
+
+ * st.c: Fix typo - restoring wrong flags.
+
+Wed Mar 29 06:55:12 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.2.3 released.
+
+ * st.c: Perform some waiting operations with interrupts off.
+ Is this correct???
+
+Wed Mar 22 10:34:26 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.2.2 released.
+
+ * aha152x.c: Modularize. Add support for PCMCIA.
+
+ * eata.c: Update to version 2.0. Fixed bug preventing media
+ detection. If scsi_register_host returns NULL, fail gracefully.
+
+ * scsi.c: Detect as NEC (for photo-cd purposes) for the 84
+ and 25 models as "NEC_OLDCDR".
+
+ * scsi.h: Add define for NEC_OLDCDR
+
+ * sr.c: Add handling for NEC_OLDCDR. Treat as unknown.
+
+ * u14-34f.c: Update to version 2.0. Fixed same bug as in
+ eata.c.
+
+
+Mon Mar 6 11:11:20 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.2.0 released. Yeah!!!
+
+ * Minor spelling/punctuation changes throughout. Nothing
+ substantive.
+
+Mon Feb 20 21:33:03 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.95 released.
+
+ * qlogic.c: Update to version 0.41.
+
+ * seagate.c: Change some message to be more descriptive about what
+ we detected.
+
+ * sr.c: spelling/whitespace changes.
+
+Mon Feb 20 21:33:03 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.94 released.
+
+Mon Feb 20 08:57:17 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.93 released.
+
+ * hosts.h: Change io_port to long int from short.
+
+ * 53c7,8xx.c: crash on AEN fixed, SCSI reset is no longer a NOP,
+ NULL pointer panic on odd UDCs fixed, two bugs in diagnostic output
+ fixed, should initialize correctly if left running, now loadable,
+ new memory allocation, extraneous diagnostic output suppressed,
+ splx() replaced with save/restore flags. [ Drew ]
+
+ * hosts.c, hosts.h, scsi_ioctl.c, sd.c, sd_ioctl.c, sg.c, sr.c,
+ sr_ioctl.c: Add special junk at end that Emacs will use for
+ formatting the file.
+
+ * qlogic.c: Update to v0.40a. Improve parity handling.
+
+ * scsi.c: Add Hitachi DK312C to blacklist. Change "};" to "}" in
+ many places. Use scsi_init_malloc to get command block - may
+ need this to be dma compatible for some host adapters.
+ Restore interrupts after unregistering a host.
+
+ * sd.c: Use sti instead of restore flags - causes latency problems.
+
+ * seagate.c: Use controller_type to determine string used when
+ registering irq.
+
+ * sr.c: More photo-cd hacks to make sure we get the xa stuff right.
+ * sr.h, sr.c: Change is_xa to xa_flags field.
+
+ * st.c: Disable retries for write operations.
+
+Wed Feb 15 10:52:56 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.92 released.
+
+ * eata.c: Update to 1.17.
+
+ * eata_dma.c: Update to 2.31a. Add more support for /proc/scsi.
+ Continuing modularization. Less crashes because of the bug in the
+ memory management ==> increase C_P_L_CURRENT_MAX to 10
+ and decrease C_P_L_DIV to 4.
+
+ * hosts.c: If we remove last host registered, reuse host number.
+ When freeing memory from host being deregistered, free extra_bytes
+ too.
+
+ * scsi.c (scan_scsis): memset(SDpnt, 0) and set SCmd.device to SDpnt.
+ Change memory allocation to work around bugs in __get_dma_pages.
+ Do not free host if usage count is not zero (for modules).
+
+ * sr_ioctl.c: Increase IOCTL_TIMEOUT to 3000.
+
+ * st.c: Allow for ST_EXTRA_DEVS in st data structures.
+
+ * u14-34f.c: Update to 1.17.
+
+Thu Feb 9 10:11:16 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.91 released.
+
+ * eata.c: Update to 1.16. Use wish_block instead of host->block.
+
+ * hosts.c: Initialize wish_block to 0.
+
+ * hosts.h: Add wish_block.
+
+ * scsi.c: Use wish_block as indicator that the host should be added
+ to block list.
+
+ * sg.c: Add SG_EXTRA_DEVS to number of slots.
+
+ * u14-34f.c: Use wish_block.
+
+Tue Feb 7 11:46:04 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.90 released.
+
+ * eata.c: Change naming from eata_* to eata2x_*. Now at vers 1.15.
+ Update interrupt handler to take pt_regs as arg. Allow blocking
+ even if loaded as module. Initialize target_time_out array.
+ Do not put sti(); in timing loop.
+
+ * hosts.c: Do not reuse host numbers.
+ Use scsi_make_blocked_list to generate blocking list.
+
+ * script_asm.pl: Beats me. Don't know perl. Something to do with
+ phase index.
+
+ * scsi.c (scsi_make_blocked_list): New function - code copied from
+ hosts.c.
+
+ * scsi.c: Update code to disable photo CD for Toshiba cdroms.
+ Use just manufacturer name, not model number.
+
+ * sr.c: Fix setting density for Toshiba drives.
+
+ * u14-34f.c: Clear target_time_out array during reset.
+
+Wed Feb 1 09:20:45 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.89 released.
+
+ * Makefile, u14-34f.c: Modularize.
+
+ * Makefile, eata.c: Modularize. Now version 1.14
+
+ * NCR5380.c: Update interrupt handler with new arglist. Minor
+ cleanups.
+
+ * eata_dma.c: Begin to modularize. Add hooks for /proc/scsi.
+ New version 2.3.0a. Add code in interrupt handler to allow
+ certain CDROM drivers to be detected which return a
+ CHECK_CONDITION during SCSI bus scan. Add opcode check to get
+ all DATA IN and DATA OUT phases right. Utilize HBA_interpret flag.
+ Improvements in HBA identification. Various other minor stuff.
+
+ * hosts.c: Initialize ->dma_channel and ->io_port when registering
+ a new host.
+
+ * qlogic.c: Modularize and add PCMCIA support.
+
+ * scsi.c: Add Hitachi to blacklist.
+
+ * scsi.c: Change default to no lun scan (too many problem devices).
+
+ * scsi.h: Define QUEUE_FULL condition.
+
+ * sd.c: Do not check for non-existent partition until after
+ new media check.
+
+ * sg.c: Undo previous change which was wrong.
+
+ * sr_ioctl.c: Increase IOCTL_TIMEOUT to 2000.
+
+ * st.c: Patches from Kai - improve filemark handling.
+
+Tue Jan 31 17:32:12 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.88 released.
+
+ * Throughout - spelling/grammar fixups.
+
+ * scsi.c: Make sure that all buffers are 16 byte aligned - some
+ drivers (buslogic) need this.
+
+ * scsi.c (scan_scsis): Remove message printed.
+
+ * scsi.c (scsi_init): Move message here.
+
+Mon Jan 30 06:40:25 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.87 released.
+
+ * sr.c: Photo-cd related changes. (Gerd Knorr??).
+
+ * st.c: Changes from Kai related to EOM detection.
+
+Mon Jan 23 23:53:10 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.86 released.
+
+ * 53c7,8xx.h: Change SG size to 127.
+
+ * eata_dma: Update to version 2.10i. Remove bug in the registration
+ of multiple HBAs and channels. Minor other improvements and stylistic
+ changes.
+
+ * scsi.c: Test for Toshiba XM-3401TA and exclude from detection
+ as toshiba drive - photo cd does not work with this drive.
+
+ * sr.c: Update photocd code.
+
+Mon Jan 23 23:53:10 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.85 released.
+
+ * st.c, st_ioctl.c, sg.c, sd_ioctl.c, scsi_ioctl.c, hosts.c:
+ include linux/mm.h
+
+ * qlogic.c, buslogic.c, aha1542.c: Include linux/module.h.
+
+Sun Jan 22 22:08:46 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.84 released.
+
+ * Makefile: Support for loadable QLOGIC boards.
+
+ * aha152x.c: Update to version 1.8 from Juergen.
+
+ * eata_dma.c: Update from Michael Neuffer.
+ Remove hard limit of 2 commands per lun and make it better
+ configurable. Improvements in HBA identification.
+
+ * in2000.c: Fix biosparam to support large disks.
+
+ * qlogic.c: Minor changes (change sti -> restore_flags).
+
+Wed Jan 18 23:33:09 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.83 released.
+
+ * aha1542.c(aha1542_intr_handle): Use arguments handed down to find
+ which irq.
+
+ * buslogic.c: Likewise.
+
+ * eata_dma.c: Use min of 2 cmd_per_lun for OCS_enabled boards.
+
+ * scsi.c: Make RECOVERED_ERROR a SUGGEST_IS_OK.
+
+ * sd.c: Fail if we are opening a non-existent partition.
+
+ * sr.c: Bump SR_TIMEOUT to 15000.
+ Do not probe for media size at boot time(hard on changers).
+ Flag device as needing sector size instead.
+
+ * sr_ioctl.c: Remove CDROMMULTISESSION_SYS ioctl.
+
+ * ultrastor.c: Fix bug in call to ultrastor_interrupt (wrong #args).
+
+Mon Jan 16 07:18:23 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.82 released.
+
+ Throughout.
+ - Change all interrupt handlers to accept new calling convention.
+ In particular, we now receive the irq number as one of the arguments.
+
+ * More minor spelling corrections in some of the new files.
+
+ * aha1542.c, buslogic.c: Clean up interrupt handler a little now
+ that we receive the irq as an arg.
+
+ * aha274x.c: s/snarf_region/request_region/
+
+ * eata.c: Update to version 1.12. Fix some comments and display a
+ message if we cannot reserve the port addresses.
+
+ * u14-34f.c: Update to version 1.13. Fix some comments and display a
+ message if we cannot reserve the port addresses.
+
+ * eata_dma.c: Define get_board_data function (send INQUIRY command).
+ Use to improve detection of variants of different DPT boards. Change
+ version subnumber to "0g".
+
+ * fdomain.c: Update to version 5.26. Improve detection of some boards
+ repackaged by IBM.
+
+ * scsi.c (scsi_register_host): Change "name" to const char *.
+
+ * sr.c: Fix problem in set mode command for Toshiba drives.
+
+ * sr.c: Fix typo from patch 81.
+
+Fri Jan 13 12:54:46 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.81 released. Codefreeze for 1.2 release announced.
+
+ Big changes here.
+
+ * eata_dma.*: New files from Michael Neuffer.
+ (neuffer@goofy.zdv.uni-mainz.de). Should support
+ all eata/dpt cards.
+
+ * hosts.c, Makefile: Add eata_dma.
+
+ * README.st: Document MTEOM.
+
+ Patches from me (ERY) to finish support for low-level loadable scsi.
+ It now works, and is actually useful.
+
+ * Throughout - add new argument to scsi_init_malloc that takes an
+ additional parameter. This is used as a priority to kmalloc,
+ and you can specify the GFP_DMA flag if you need DMA-able memory.
+
+ * Makefile: For source files that are loadable, always add name
+ to SCSI_SRCS. Fill in modules: target.
+
+ * hosts.c: Change next_host to next_scsi_host, and make global.
+ Print hosts after we have identified all of them. Use info()
+ function if present, otherwise use name field.
+
+ * hosts.h: Change attach function to return int, not void.
+ Define number of device slots to allow for loadable devices.
+ Define tags to tell scsi module code what type of module we
+ are loading.
+
+ * scsi.c: Fix scan_scsis so that it can be run by a user process.
+ Do not use waiting loops - use up and down mechanism as long
+ as current != task[0].
+
+ * scsi.c(scan_scsis): Do not use stack variables for I/O - this
+ could be > 16Mb if we are loading a module at runtime (i.e. use
+ scsi_init_malloc to get some memory we know will be safe).
+
+ * scsi.c: Change dma freelist to be a set of pages. This allows
+ us to dynamically adjust the size of the list by adding more pages
+ to the pagelist. Fix scsi_malloc and scsi_free accordingly.
+
+ * scsi_module.c: Fix include.
+
+ * sd.c: Declare detach function. Increment/decrement module usage
+ count as required. Fix init functions to allow loaded devices.
+ Revalidate all new disks so we get the partition tables. Define
+ detach function.
+
+ * sr.c: Likewise.
+
+ * sg.c: Declare detach function. Allow attachment of devices on
+ loaded drivers.
+
+ * st.c: Declare detach function. Increment/decrement module usage
+ count as required.
+
+Tue Jan 10 10:09:58 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.79 released.
+
+ Patch from some undetermined individual who needs to get a life :-).
+
+ * sr.c: Attacked by spelling bee...
+
+ Patches from Gerd Knorr:
+
+ * sr.c: make printk messages for photoCD a little more informative.
+
+ * sr_ioctl.c: Fix CDROMMULTISESSION_SYS ioctl.
+
+Mon Jan 9 10:01:37 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.78 released.
+
+ * Makefile: Add empty modules: target.
+
+ * Wheee. Now change register_iomem to request_region.
+
+ * in2000.c: Bugfix - apparently this is the fix that we have
+ all been waiting for. It fixes a problem whereby the driver
+ is not stable under heavy load. Race condition and all that.
+ Patch from Peter Lu.
+
+Wed Jan 4 21:17:40 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.77 released.
+
+ * 53c7,8xx.c: Fix from Linus - emulate splx.
+
+ Throughout:
+
+ Change "snarf_region" with "register_iomem".
+
+ * scsi_module.c: New file. Contains support for low-level loadable
+ scsi drivers. [ERY].
+
+ * sd.c: More s/int/long/ changes.
+
+ * seagate.c: Explicitly include linux/config.h
+
+ * sg.c: Increment/decrement module usage count on open/close.
+
+ * sg.c: Be a bit more careful about the user not supplying enough
+ information for a valid command. Pass correct size down to
+ scsi_do_cmd.
+
+ * sr.c: More changes for Photo-CD. This apparently breaks NEC drives.
+
+ * sr_ioctl.c: Support CDROMMULTISESSION ioctl.
+
+
+Sun Jan 1 19:55:21 1995 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.76 released.
+
+ * constants.c: Add type cast in switch statement.
+
+ * scsi.c (scsi_free): Change datatype of "offset" to long.
+ (scsi_malloc): Change a few more variables to long. Who
+ did this and why was it important? 64 bit machines?
+
+
+ Lots of changes to use save_state/restore_state instead of cli/sti.
+ Files changed include:
+
+ * aha1542.c:
+ * aha1740.c:
+ * buslogic.c:
+ * in2000.c:
+ * scsi.c:
+ * scsi_debug.c:
+ * sd.c:
+ * sr.c:
+ * st.c:
+
+Wed Dec 28 16:38:29 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.75 released.
+
+ * buslogic.c: Spelling fix.
+
+ * scsi.c: Add HP C1790A and C2500A scanjet to blacklist.
+
+ * scsi.c: Spelling fixup.
+
+ * sd.c: Add support for sd_hardsizes (hard sector sizes).
+
+ * ultrastor.c: Use save_flags/restore_flags instead of cli/sti.
+
+Fri Dec 23 13:36:25 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.74 released.
+
+ * README.st: Update from Kai Makisara.
+
+ * eata.c: New version from Dario - version 1.11.
+ use scsicam bios_param routine. Add support for 2011
+ and 2021 boards.
+
+ * hosts.c: Add support for blocking. Linked list automatically
+ generated when shpnt->block is set.
+
+ * scsi.c: Add sankyo & HP scanjet to blacklist. Add support for
+ kicking things loose when we deadlock.
+
+ * scsi.c: Recognize scanners and processors in scan_scsis.
+
+ * scsi_ioctl.h: Increase timeout to 9 seconds.
+
+ * st.c: New version from Kai - add better support for backspace.
+
+ * u14-34f.c: New version from Dario. Supports blocking.
+
+Wed Dec 14 14:46:30 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.73 released.
+
+ * buslogic.c: Update from Dave Gentzel. Version 1.14.
+ Add module related stuff. More fault tolerant if out of
+ DMA memory.
+
+ * fdomain.c: New version from Rik Faith - version 5.22. Add support
+ for ISA-200S SCSI adapter.
+
+ * hosts.c: Spelling.
+
+ * qlogic.c: Update to version 0.38a. Add more support for PCMCIA.
+
+ * scsi.c: Mask device type with 0x1f during scan_scsis.
+ Add support for deadlocking, err, make that getting out of
+ deadlock situations that are created when we allow the user
+ to limit requests to one host adapter at a time.
+
+ * scsi.c: Bugfix - pass pid, not SCpnt as second arg to
+ scsi_times_out.
+
+ * scsi.c: Restore interrupt state to previous value instead of using
+ cli/sti pairs.
+
+ * scsi.c: Add a bunch of module stuff (all commented out for now).
+
+ * scsi.c: Clean up scsi_dump_status.
+
+Tue Dec 6 12:34:20 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.72 released.
+
+ * sg.c: Bugfix - always use sg_free, since we might have big buff.
+
+Fri Dec 2 11:24:53 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.71 released.
+
+ * sg.c: Clear buff field when not in use. Only call scsi_free if
+ non-null.
+
+ * scsi.h: Call wake_up(&wait_for_request) when done with a
+ command.
+
+ * scsi.c (scsi_times_out): Pass pid down so that we can protect
+ against race conditions.
+
+ * scsi.c (scsi_abort): Zero timeout field if we get the
+ NOT_RUNNING message back from low-level driver.
+
+
+ * scsi.c (scsi_done): Restore cmd_len, use_sg here.
+
+ * scsi.c (request_sense): Not here.
+
+ * hosts.h: Add new forbidden_addr, forbidden_size fields. Who
+ added these and why????
+
+ * hosts.c (scsi_mem_init): Mark pages as reserved if they fall in
+ the forbidden regions. I am not sure - I think this is so that
+ we can deal with boards that do incomplete decoding of their
+ address lines for the bios chips, but I am not entirely sure.
+
+ * buslogic.c: Set forbidden_addr stuff if using a buggy board.
+
+ * aha1740.c: Test for NULL pointer in SCtmp. This should not
+ occur, but a nice message is better than a kernel segfault.
+
+ * 53c7,8xx.c: Add new PCI chip ID for 815.
+
+Fri Dec 2 11:24:53 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.70 released.
+
+ * ChangeLog, st.c: Spelling.
+
+Tue Nov 29 18:48:42 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.69 released.
+
+ * u14-34f.h: Non-functional change. [Dario].
+
+ * u14-34f.c: Use block field in Scsi_Host to prevent commands from
+ being queued to more than one host at the same time (used when
+ motherboard does not deal with multiple bus-masters very well).
+ Only when SINGLE_HOST_OPERATIONS is defined.
+ Use new cmd_per_lun field. [Dario]
+
+ * eata.c: Likewise.
+
+ * st.c: More changes from Kai. Add ready flag to indicate drive
+ status.
+
+ * README.st: Document this.
+
+ * sr.c: Bugfix (do not subtract CD_BLOCK_OFFSET) for photo-cd
+ code.
+
+ * sg.c: Bugfix - fix problem where opcode is not correctly set up.
+
+ * seagate.[c,h]: Use #defines to set driver name.
+
+ * scsi_ioctl.c: Zero buffer before executing command.
+
+ * scsi.c: Use new cmd_per_lun field in Scsi_Hosts as appropriate.
+ Add Sony CDU55S to blacklist.
+
+ * hosts.h: Add new cmd_per_lun field to Scsi_Hosts.
+
+ * hosts.c: Initialize cmd_per_lun in Scsi_Hosts from template.
+
+ * buslogic.c: Use cmd_per_lun field - initialize to different
+ values depending upon bus type (i.e. use 1 if ISA, so we do not
+ hog memory). Use other patches which got lost from 1.1.68.
+
+ * aha1542.c: Spelling.
+
+Tue Nov 29 15:43:50 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.68 released.
+
+ Add support for 12 byte vendor specific commands in scsi-generics,
+ more (i.e. the last mandatory) low-level changes to support
+ loadable modules, plus a few other changes people have requested
+ lately. Changes by me (ERY) unless otherwise noted. Spelling
+ changes appear from some unknown corner of the universe.
+
+ * Throughout: Change COMMAND_SIZE() to use SCpnt->cmd_len.
+
+ * Throughout: Change info() low level function to take a Scsi_Host
+ pointer. This way the info function can return specific
+ information about the host in question, if desired.
+
+ * All low-level drivers: Add NULL in initializer for the
+ usage_count field added to Scsi_Host_Template.
+
+ * aha152x.[c,h]: Remove redundant info() function.
+
+ * aha1542.[c,h]: Likewise.
+
+ * aha1740.[c,h]: Likewise.
+
+ * aha274x.[c,h]: Likewise.
+
+ * eata.[c,h]: Likewise.
+
+ * pas16.[c,h]: Likewise.
+
+ * scsi_debug.[c,h]: Likewise.
+
+ * t128.[c,h]: Likewise.
+
+ * u14-34f.[c,h]: Likewise.
+
+ * ultrastor.[c,h]: Likewise.
+
+ * wd7000.[c,h]: Likewise.
+
+ * aha1542.c: Add support for command line options with lilo to set
+ DMA parameters, I/O port. From Matt Aarnio.
+
+ * buslogic.[c,h]: New version (1.13) from Dave Gentzel.
+
+ * hosts.h: Add new field to Scsi_Hosts "block" to allow blocking
+ all I/O to certain other cards. Helps prevent problems with some
+ ISA motherboards.
+
+ * hosts.h: Add usage_count to Scsi_Host_Template.
+
+ * hosts.h: Add n_io_port to Scsi_Host (used when releasing module).
+
+ * hosts.c: Initialize block field.
+
+ * in2000.c: Remove "static" declarations from exported functions.
+
+ * in2000.h: Likewise.
+
+ * scsi.c: Correctly set cmd_len field as required. Save and
+ change setting when doing a request_sense, restore when done.
+ Move abort timeout message. Fix panic in request_queueable to
+ print correct function name.
+
+ * scsi.c: When incrementing usage count, walk block linked list
+ for host, and or in SCSI_HOST_BLOCK bit. When decrementing usage
+ count to 0, clear this bit to allow usage to continue, wake up
+ processes waiting.
+
+
+ * scsi_ioctl.c: If we have an info() function, call it, otherwise
+ if we have a "name" field, use it, else do nothing.
+
+ * sd.c, sr.c: Clear cmd_len field prior to each command we
+ generate.
+
+ * sd.h: Add "has_part_table" bit to rscsi_disks.
+
+ * sg.[c,h]: Add support for vendor specific 12 byte commands (i.e.
+ override command length in COMMAND_SIZE).
+
+ * sr.c: Bugfix from Gerd in photocd code.
+
+ * sr.c: Bugfix in get_sectorsize - always use scsi_malloc buffer -
+ we cannot guarantee that the stack is < 16Mb.
+
+Tue Nov 22 15:40:46 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.67 released.
+
+ * sr.c: Change spelling of manufactor to manufacturer.
+
+ * scsi.h: Likewise.
+
+ * scsi.c: Likewise.
+
+ * qlogic.c: Spelling corrections.
+
+ * in2000.h: Spelling corrections.
+
+ * in2000.c: Update from Bill Earnest, change from
+ jshiffle@netcom.com. Support new bios versions.
+
+ * README.qlogic: Spelling correction.
+
+Tue Nov 22 15:40:46 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.66 released.
+
+ * u14-34f.c: Spelling corrections.
+
+ * sr.[h,c]: Add support for multi-session CDs from Gerd Knorr.
+
+ * scsi.h: Add manufactor field for keeping track of device
+ manufacturer.
+
+ * scsi.c: More spelling corrections.
+
+ * qlogic.h, qlogic.c, README.qlogic: New driver from Tom Zerucha.
+
+ * in2000.c, in2000.h: New driver from Brad McLean/Bill Earnest.
+
+ * fdomain.c: Spelling correction.
+
+ * eata.c: Spelling correction.
+
+Fri Nov 18 15:22:44 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.65 released.
+
+ * eata.h: Update version string to 1.08.00.
+
+ * eata.c: Set sg_tablesize correctly for DPT PM2012 boards.
+
+ * aha274x.seq: Spell checking.
+
+ * README.st: Likewise.
+
+ * README.aha274x: Likewise.
+
+ * ChangeLog: Likewise.
+
+Tue Nov 15 15:35:08 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.64 released.
+
+ * u14-34f.h: Update version number to 1.10.01.
+
+ * u14-34f.c: Use Scsi_Host can_queue variable instead of one from template.
+
+ * eata.[c,h]: New driver for DPT boards from Dario Ballabio.
+
+ * buslogic.c: Use can_queue field.
+
+Wed Nov 30 12:09:09 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.63 released.
+
+ * sd.c: Give I/O error if we attempt 512 byte I/O to a disk with
+ 1024 byte sectors.
+
+ * scsicam.c: Make sure we do read from whole disk (mask off
+ partition).
+
+ * scsi.c: Use can_queue in Scsi_Host structure.
+ Fix panic message about invalid host.
+
+ * hosts.c: Initialize can_queue from template.
+
+ * hosts.h: Add can_queue to Scsi_Host structure.
+
+ * aha1740.c: Print out warning about NULL ecbptr.
+
+Fri Nov 4 12:40:30 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.62 released.
+
+ * fdomain.c: Update to version 5.20. (From Rik Faith). Support
+ BIOS version 3.5.
+
+ * st.h: Add ST_EOD symbol.
+
+ * st.c: Patches from Kai Makisara - support additional densities,
+ add support for MTFSS, MTBSS, MTWSM commands.
+
+ * README.st: Update to document new commands.
+
+ * scsi.c: Add Mediavision CDR-H93MV to blacklist.
+
+Sat Oct 29 20:57:36 1994 Eric Youngdale (eric@andante.aib.com)
+
+ * Linux 1.1.60 released.
+
+ * u14-34f.[c,h]: New driver from Dario Ballabio.
+
+ * aic7770.c, aha274x_seq.h, aha274x.seq, aha274x.h, aha274x.c,
+ README.aha274x: New files, new driver from John Aycock.
+
+
+Tue Oct 11 08:47:39 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.54 released.
+
+ * Add third PCI chip id. [Drew]
+
+ * buslogic.c: Set BUSLOGIC_CMDLUN back to 1 [Eric].
+
+ * ultrastor.c: Fix asm directives for new GCC.
+
+ * sr.c, sd.c: Use new end_scsi_request function.
+
+ * scsi.h(end_scsi_request): Return pointer to block if still
+ active, else return NULL if inactive. Fixes race condition.
+
+Sun Oct 9 20:23:14 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.53 released.
+
+ * scsi.c: Do not allocate dma bounce buffers if we have exactly
+ 16Mb.
+
+Fri Sep 9 05:35:30 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.51 released.
+
+ * aha152x.c: Add support for disabling the parity check. Update
+ to version 1.4. [Juergen].
+
+ * seagate.c: Tweak debugging message.
+
+Wed Aug 31 10:15:55 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.50 released.
+
+ * aha152x.c: Add eb800 for Vtech Platinum SMP boards. [Juergen].
+
+ * scsi.c: Add Quantum PD1225S to blacklist.
+
+Fri Aug 26 09:38:45 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.49 released.
+
+ * sd.c: Fix bug when we were deleting the wrong entry if we
+ get an unsupported sector size device.
+
+ * sr.c: Another spelling patch.
+
+Thu Aug 25 09:15:27 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.48 released.
+
+ * Throughout: Use new semantics for request_dma, as appropriate.
+
+ * sr.c: Print correct device number.
+
+Sun Aug 21 17:49:23 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.47 released.
+
+ * NCR5380.c: Add support for LIMIT_TRANSFERSIZE.
+
+ * constants.h: Add prototype for print_Scsi_Cmnd.
+
+ * pas16.c: Some more minor tweaks. Test for Mediavision board.
+ Allow for disks > 1Gb. [Drew??]
+
+ * sr.c: Set SCpnt->transfersize.
+
+Tue Aug 16 17:29:35 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.46 released.
+
+ * Throughout: More spelling fixups.
+
+ * buslogic.c: Add a few more fixups from Dave. Disk translation
+ mainly.
+
+ * pas16.c: Add a few patches (Drew?).
+
+
+Thu Aug 11 20:45:15 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.44 released.
+
+ * hosts.c: Add type casts for scsi_init_malloc.
+
+ * scsicam.c: Add type cast.
+
+Wed Aug 10 19:23:01 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.43 released.
+
+ * Throughout: Spelling cleanups. [??]
+
+ * aha152x.c, NCR53*.c, fdomain.c, g_NCR5380.c, pas16.c, seagate.c,
+ t128.c: Use request_irq, not irqaction. [??]
+
+ * aha1542.c: Move test for shost before we start to use shost.
+
+ * aha1542.c, aha1740.c, ultrastor.c, wd7000.c: Use new
+ calling sequence for request_irq.
+
+ * buslogic.c: Update from Dave Gentzel.
+
+Tue Aug 9 09:32:59 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.42 released.
+
+ * NCR5380.c: Change NCR5380_print_status to static.
+
+ * seagate.c: A few more bugfixes. Only Drew knows what they are
+ for.
+
+ * ultrastor.c: Tweak some __asm__ directives so that it works
+ with newer compilers. [??]
+
+Sat Aug 6 21:29:36 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.40 released.
+
+ * NCR5380.c: Return SCSI_RESET_WAKEUP from reset function.
+
+ * aha1542.c: Reset mailbox status after a bus device reset.
+
+ * constants.c: Fix typo (;;).
+
+ * g_NCR5380.c:
+ * pas16.c: Correct usage of NCR5380_init.
+
+ * scsi.c: Remove redundant (and unused variables).
+
+ * sd.c: Use memset to clear all of rscsi_disks before we use it.
+
+ * sg.c: Ditto, except for scsi_generics.
+
+ * sr.c: Ditto, except for scsi_CDs.
+
+ * st.c: Initialize STp->device.
+
+ * seagate.c: Fix bug. [Drew]
+
+Thu Aug 4 08:47:27 1994 Eric Youngdale (eric@andante)
+
+ * Linux 1.1.39 released.
+
+ * Makefile: Fix typo in NCR53C7xx.
+
+ * st.c: Print correct number for device.
+
+Tue Aug 2 11:29:14 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.38 released.
+
+ Lots of changes in 1.1.38. All from Drew unless otherwise noted.
+
+ * 53c7,8xx.c: New file from Drew. PCI driver.
+
+ * 53c7,8xx.h: Likewise.
+
+ * 53c7,8xx.scr: Likewise.
+
+ * 53c8xx_d.h, 53c8xx_u.h, script_asm.pl: Likewise.
+
+ * scsicam.c: New file from Drew. Read block 0 on the disk and
+ read the partition table. Attempt to deduce the geometry from
+ the partition table if possible. Only used by 53c[7,8]xx right
+ now, but could be used by any device for which we have no way
+ of identifying the geometry.
+
+ * sd.c: Use device letters instead of sd%d in a lot of messages.
+
+ * seagate.c: Fix bug that resulted in lockups with some devices.
+
+ * sr.c (sr_open): Return -EROFS, not -EACCES if we attempt to open
+ device for write.
+
+ * hosts.c, Makefile: Update for new driver.
+
+ * NCR5380.c, NCR5380.h, g_NCR5380.h: Update from Drew to support
+ 53C400 chip.
+
+ * constants.c: Define CONST_CMND and CONST_MSG. Other minor
+ cleanups along the way. Improve handling of CONST_MSG.
+
+ * fdomain.c, fdomain.h: New version from Rik Faith. Update to
+ 5.18. Should now support TMC-3260 PCI card with 18C30 chip.
+
+ * pas16.c: Update with new irq initialization.
+
+ * t128.c: Update with minor cleanups.
+
+ * scsi.c (scsi_pid): New variable - gives each command a unique
+ id. Add Quantum LPS5235S to blacklist. Change in_scan to
+ in_scan_scsis and make global.
+
+ * scsi.h: Add some defines for extended message handling,
+ INITIATE/RELEASE_RECOVERY. Add a few new fields to support sync
+ transfers.
+
+ * scsi_ioctl.h: Add ioctl to request synchronous transfers.
+
+
+Tue Jul 26 21:36:58 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.37 released.
+
+ * aha1542.c: Always call aha1542_mbenable, use new udelay
+ mechanism so we do not wait a long time if the board does not
+ implement this command.
+
+ * g_NCR5380.c: Remove #include <linux/config.h> and #if
+ defined(CONFIG_SCSI_*).
+
+ * seagate.c: Likewise.
+
+ Next round of changes to support loadable modules. Getting closer
+ now, still not possible to do anything remotely usable.
+
+ hosts.c: Create a linked list of detected high level devices.
+ (scsi_register_device): New function to insert into this list.
+ (scsi_init): Call scsi_register_device for each of the known high
+ level drivers.
+
+ hosts.h: Add prototype for linked list header. Add structure
+ definition for device template structure which defines the linked
+ list.
+
+ scsi.c: (scan_scsis): Use linked list instead of knowledge about
+ existing high level device drivers.
+ (scsi_dev_init): Use init functions for drivers on linked list
+ instead of explicit list to initialize and attach devices to high
+ level drivers.
+
+ scsi.h: Add new field "attached" to scsi_device - count of number
+ of high level devices attached.
+
+ sd.c, sr.c, sg.c, st.c: Adjust init/attach functions to use new
+ scheme.
+
+Sat Jul 23 13:03:17 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.35 released.
+
+ * ultrastor.c: Change constraint on asm() operand so that it works
+ with gcc 2.6.0.
+
+Thu Jul 21 10:37:39 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.33 released.
+
+ * sr.c(sr_open): Do not allow opens with write access.
+
+Mon Jul 18 09:51:22 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.31 released.
+
+ * sd.c: Increase SD_TIMEOUT from 300 to 600.
+
+ * sr.c: Remove stray task_struct* variable that was no longer
+ used.
+
+ * sr_ioctl.c: Fix typo in up() call.
+
+Sun Jul 17 16:25:29 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.30 released.
+
+ * scsi.c (scan_scsis): Fix detection of some Toshiba CDROM drives
+ that report themselves as disk drives.
+
+ * (Throughout): Use request.sem instead of request.waiting.
+ Should fix swap problem with fdomain.
+
+Thu Jul 14 10:51:42 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.29 released.
+
+ * scsi.c (scan_scsis): Add new devices to end of linked list, not
+ to the beginning.
+
+ * scsi.h (SCSI_SLEEP): Remove brain dead hack to try to save
+ the task state before sleeping.
+
+Sat Jul 9 15:01:03 1994 Eric Youngdale (eric@esp22)
+
+ More changes to eventually support loadable modules. Mainly
+ we want to use linked lists instead of arrays because it is easier
+ to dynamically add and remove things this way.
+
+ Quite a bit more work is needed before loadable modules are
+ possible (and usable) with scsi, but this is most of the grunge
+ work.
+
+ * Linux 1.1.28 released.
+
+ * scsi.c, scsi.h (allocate_device, request_queueable): Change
+ argument from index into scsi_devices to a pointer to the
+ Scsi_Device struct.
+
+ * Throughout: Change all calls to allocate_device,
+ request_queueable to use new calling sequence.
+
+ * Throughout: Use SCpnt->device instead of
+ scsi_devices[SCpnt->index]. Ugh - the pointer was there all along
+ - much cleaner this way.
+
+ * scsi.c (scsi_init_malloc, scsi_free_malloc): New functions -
+ allow us to pretend that we have a working malloc when we
+ initialize. Use this instead of passing memory_start, memory_end
+ around all over the place.
+
+ * scsi.h, st.c, sr.c, sd.c, sg.c: Change *_init1 functions to use
+ scsi_init_malloc, remove all arguments, no return value.
+
+ * scsi.h: Remove index field from Scsi_Device and Scsi_Cmnd
+ structs.
+
+ * scsi.c (scsi_dev_init): Set up for scsi_init_malloc.
+ (scan_scsis): Get SDpnt from scsi_init_malloc, and refresh
+ when we discover a device. Free pointer before returning.
+ Change scsi_devices into a linked list.
+
+ * scsi.c (scan_scsis): Change to only scan one host.
+ (scsi_dev_init): Loop over all detected hosts, and scan them.
+
+ * hosts.c (scsi_init_free): Change so that number of extra bytes
+ is stored in struct, and we do not have to pass it each time.
+
+ * hosts.h: Change Scsi_Host_Template struct to include "next" and
+ "release" functions. Initialize to NULL in all low level
+ adapters.
+
+ * hosts.c: Rename scsi_hosts to builtin_scsi_hosts, create linked
+ list scsi_hosts, linked together with the new "next" field.
+
+Wed Jul 6 05:45:02 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.25 released.
+
+ * aha152x.c: Changes from Juergen - cleanups and updates.
+
+ * sd.c, sr.c: Use new check_media_change and revalidate
+ file_operations fields.
+
+ * st.c, st.h: Add changes from Kai Makisara, dated Jun 22.
+
+ * hosts.h: Change SG_ALL back to 0xff. Apparently soft error
+ in /dev/brain resulted in having this bumped up.
+ Change first parameter in bios_param function to be Disk * instead
+ of index into rscsi_disks.
+
+ * sd_ioctl.c: Pass pointer to rscsi_disks element instead of index
+ to array.
+
+ * sd.h: Add struct name "scsi_disk" to typedef for Scsi_Disk.
+
+ * scsi.c: Remove redundant Maxtor XT8760S from blacklist.
+ In scsi_reset, add printk when DEBUG defined.
+
+ * All low level drivers: Modify definitions of bios_param in
+ appropriate way.
+
+Thu Jun 16 10:31:59 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.20 released.
+
+ * scsi_ioctl.c: Only pass down the actual number of characters
+ required to scsi_do_cmd, not the one rounded up to a even number
+ of sectors.
+
+ * ultrastor.c: Changes from Caleb Epstein for 24f cards. Support
+ larger SG lists.
+
+ * ultrastor.c: Changes from me - use scsi_register to register
+	host. Add some consistency checking.
+
+Wed Jun 1 21:12:13 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.19 released.
+
+ * scsi.h: Add new return code for reset() function:
+ SCSI_RESET_PUNT.
+
+ * scsi.c: Make SCSI_RESET_PUNT the same as SCSI_RESET_WAKEUP for
+ now.
+
+ * aha1542.c: If the command responsible for the reset is not
+ pending, return SCSI_RESET_PUNT.
+
+ * aha1740.c, buslogic.c, wd7000.c, ultrastor.c: Return
+ SCSI_RESET_PUNT instead of SCSI_RESET_SNOOZE.
+
+Tue May 31 19:36:01 1994 Eric Youngdale (eric@esp22)
+
+ * buslogic.c: Do not print out message about "must be Adaptec"
+ if we have detected a buslogic card. Print out a warning message
+ if we are configuring for >16Mb, since the 445S at board level
+ D or earlier does not work right. The "D" level board can be made
+ to work by flipping an undocumented switch, but this is too subtle.
+
+ Changes based upon patches in Yggdrasil distribution.
+
+ * sg.c, sg.h: Return sense data to user.
+
+ * aha1542.c, aha1740.c, buslogic.c: Do not panic if
+ sense buffer is wrong size.
+
+ * hosts.c: Test for ultrastor card before any of the others.
+
+ * scsi.c: Allow boot-time option for max_scsi_luns=? so that
+ buggy firmware has an easy work-around.
+
+Sun May 15 20:24:34 1994 Eric Youngdale (eric@esp22)
+
+ * Linux 1.1.15 released.
+
+ Post-codefreeze thaw...
+
+ * buslogic.[c,h]: New driver from David Gentzel.
+
+ * hosts.h: Add use_clustering field to explicitly say whether
+ clustering should be used for devices attached to this host
+ adapter. The buslogic board apparently supports large SG lists,
+ but it is apparently faster if sd.c condenses this into a smaller
+ list.
+
+ * sd.c: Use this field instead of heuristic.
+
+ * All host adapter include files: Add appropriate initializer for
+ use_clustering field.
+
+ * scsi.h: Add #defines for return codes for the abort and reset
+ functions. There are now a specific set of return codes to fully
+ specify all of the possible things that the low-level adapter
+ could do.
+
+ * scsi.c: Act based upon return codes from abort/reset functions.
+
+ * All host adapter abort/reset functions: Return new return code.
+
+ * Add code in scsi.c to help debug timeouts. Use #define
+ DEBUG_TIMEOUT to enable this.
+
+ * scsi.c: If the host->irq field is set, use
+ disable_irq/enable_irq before calling queuecommand if we
+ are not already in an interrupt. Reduce races, and we
+ can be sloppier about cli/sti in the interrupt routines now
+ (reduce interrupt latency).
+
+ * constants.c: Fix some things to eliminate warnings. Add some
+ sense descriptions that were omitted before.
+
+ * aha1542.c: Watch for SCRD from host adapter - if we see it, set
+ a flag. Currently we only print out the number of pending
+ commands that might need to be restarted.
+
+ * aha1542.c (aha1542_abort): Look for lost interrupts, OGMB still
+ full, and attempt to recover. Otherwise give up.
+
+ * aha1542.c (aha1542_reset): Try BUS DEVICE RESET, and then pass
+ DID_RESET back up to the upper level code for all commands running
+ on this target (even on different LUNs).
+
+Sat May 7 14:54:01 1994
+
+ * Linux 1.1.12 released.
+
+ * st.c, st.h: New version from Kai. Supports boot time
+ specification of number of buffers.
+
+ * wd7000.[c,h]: Updated driver from John Boyd. Now supports
+ more than one wd7000 board in machine at one time, among other things.
+
+Wed Apr 20 22:20:35 1994
+
+ * Linux 1.1.8 released.
+
+ * sd.c: Add a few type casts where scsi_malloc is called.
+
+Wed Apr 13 12:53:29 1994
+
+ * Linux 1.1.4 released.
+
+ * scsi.c: Clean up a few printks (use %p to print pointers).
+
+Wed Apr 13 11:33:02 1994
+
+ * Linux 1.1.3 released.
+
+ * fdomain.c: Update to version 5.16 (Handle different FIFO sizes
+ better).
+
+Fri Apr 8 08:57:19 1994
+
+ * Linux 1.1.2 released.
+
+ * Throughout: SCSI portion of cluster diffs added.
+
+Tue Apr 5 07:41:50 1994
+
+ * Linux 1.1 development tree initiated.
+
+ * The linux 1.0 development tree is now effectively frozen except
+ for obvious bugfixes.
+
+******************************************************************
+******************************************************************
+******************************************************************
+******************************************************************
+
+Sun Apr 17 00:17:39 1994
+
+ * Linux 1.0, patchlevel 9 released.
+
+ * fdomain.c: Update to version 5.16 (Handle different FIFO sizes
+ better).
+
+Thu Apr 7 08:36:20 1994
+
+ * Linux 1.0, patchlevel8 released.
+
+ * fdomain.c: Update to version 5.15 from 5.9. Handles 3.4 bios.
+
+Sun Apr 3 14:43:03 1994
+
+ * Linux 1.0, patchlevel6 released.
+
+ * wd7000.c: Make stab at fixing race condition.
+
+Sat Mar 26 14:14:50 1994
+
+ * Linux 1.0, patchlevel5 released.
+
+ * aha152x.c, Makefile: Fix a few bugs (too much data message).
+ Add a few more bios signatures. (Patches from Juergen).
+
+ * aha1542.c: Fix race condition in aha1542_out.
+
+Mon Mar 21 16:36:20 1994
+
+ * Linux 1.0, patchlevel3 released.
+
+ * sd.c, st.c, sr.c, sg.c: Return -ENXIO, not -ENODEV if we attempt
+ to open a non-existent device.
+
+ * scsi.c: Add Chinon cdrom to blacklist.
+
+ * sr_ioctl.c: Check return status of verify_area.
+
+Sat Mar 6 16:06:19 1994
+
+ * Linux 1.0 released (technically a pre-release).
+
+ * scsi.c: Add IMS CDD521, Maxtor XT-8760S to blacklist.
+
+Tue Feb 15 10:58:20 1994
+
+ * pl15e released.
+
+ * aha1542.c: For 1542C, allow dynamic device scan with >1Gb turned
+ off.
+
+ * constants.c: Fix typo in definition of CONSTANTS.
+
+ * pl15d released.
+
+Fri Feb 11 10:10:16 1994
+
+ * pl15c released.
+
+ * scsi.c: Add Maxtor XT-3280 and Rodime RO3000S to blacklist.
+
+ * scsi.c: Allow tagged queueing for scsi 3 devices as well.
+ Some really old devices report a version number of 0. Disallow
+ LUN != 0 for these.
+
+Thu Feb 10 09:48:57 1994
+
+ * pl15b released.
+
+Sun Feb 6 12:19:46 1994
+
+ * pl15a released.
+
+Fri Feb 4 09:02:17 1994
+
+ * scsi.c: Add Teac cdrom to blacklist.
+
+Thu Feb 3 14:16:43 1994
+
+ * pl15 released.
+
+Tue Feb 1 15:47:43 1994
+
+ * pl14w released.
+
+ * wd7000.c (wd_bases): Fix typo in last change.
+
+Mon Jan 24 17:37:23 1994
+
+ * pl14u released.
+
+ * aha1542.c: Support 1542CF/extended bios. Different from 1542C
+
+ * wd7000.c: Allow bios at 0xd8000 as well.
+
+ * ultrastor.c: Do not truncate cylinders to 1024.
+
+ * fdomain.c: Update to version 5.9 (add new bios signature).
+
+ * NCR5380.c: Update from Drew - should work a lot better now.
+
+Sat Jan 8 15:13:10 1994
+
+ * pl14o released.
+
+ * sr_ioctl.c: Zero reserved field before trying to set audio volume.
+
+Wed Jan 5 13:21:10 1994
+
+ * pl14m released.
+
+ * fdomain.c: Update to version 5.8. No functional difference???
+
+Tue Jan 4 14:26:13 1994
+
+ * pl14l released.
+
+ * ultrastor.c: Remove outl, inl functions (now provided elsewhere).
+
+Mon Jan 3 12:27:25 1994
+
+ * pl14k released.
+
+ * aha152x.c: Remove insw and outsw functions.
+
+ * fdomain.c: Ditto.
+
+Wed Dec 29 09:47:20 1993
+
+ * pl14i released.
+
+ * scsi.c: Support RECOVERED_ERROR for tape drives.
+
+ * st.c: Update of tape driver from Kai.
+
+Tue Dec 21 09:18:30 1993
+
+ * pl14g released.
+
+ * aha1542.[c,h]: Support extended BIOS stuff.
+
+ * scsi.c: Clean up messages about disks, so they are displayed as
+ sda, sdb, etc instead of sd0, sd1, etc.
+
+ * sr.c: Force reread of capacity if disk was changed.
+ Clear buffer before asking for capacity/sectorsize (some drives
+ do not report this properly). Set needs_sector_size flag if
+ drive did not return sensible sector size.
+
+Mon Dec 13 12:13:47 1993
+
+ * aha152x.c: Update to version .101 from Juergen.
+
+Mon Nov 29 03:03:00 1993
+
+ * linux 0.99.14 released.
+
+ * All scsi stuff moved from kernel/blk_drv/scsi to drivers/scsi.
+
+ * Throughout: Grammatical corrections to various comments.
+
+ * Makefile: fix so that we do not need to compile things we are
+ not going to use.
+
+ * NCR5380.c, NCR5380.h, g_NCR5380.c, g_NCR5380.h, pas16.c,
+ pas16.h, t128.c, t128.h: New files from Drew.
+
+ * aha152x.c, aha152x.h: New files from Juergen Fischer.
+
+ * aha1542.c: Support for more than one 1542 in the machine
+ at the same time. Make functions static that do not need
+ visibility.
+
+ * aha1740.c: Set NEEDS_JUMPSTART flag in reset function, so we
+ know to restart the command. Change prototype of aha1740_reset
+ to take a command pointer.
+
+ * constants.c: Clean up a few things.
+
+ * fdomain.c: Update to version 5.6. Move snarf_region. Allow
+ board to be set at different SCSI ids. Remove support for
+ reselection (did not work well). Set JUMPSTART flag in reset
+ code.
+
+ * hosts.c: Support new low-level adapters. Allow for more than
+ one adapter of a given type.
+
+ * hosts.h: Allow for more than one adapter of a given type.
+
+ * scsi.c: Add scsi_device_types array, if NEEDS_JUMPSTART is set
+ after a low-level reset, start the command again. Sort blacklist,
+ and add Maxtor MXT-1240S, XT-4170S, NEC CDROM 84, Seagate ST157N.
+
+ * scsi.h: Add constants for tagged queueing.
+
+ * Throughout: Use constants from major.h instead of hardcoded
+ numbers for major numbers.
+
+ * scsi_ioctl.c: Fix bug in buffer length in ioctl_command. Use
+ verify_area in GET_IDLUN ioctl. Add new ioctls for
+ TAGGED_QUEUE_ENABLE, DISABLE. Only allow IOCTL_SEND_COMMAND by
+ superuser.
+
+ * sd.c: Only pay attention to UNIT_ATTENTION for removable disks.
+ Fix bug where sometimes portions of blocks would get lost
+ resulting in processes hanging. Add messages when we spin up a
+ disk, and fix a bug in the timing. Increase read-ahead for disks
+ that are on a scatter-gather capable host adapter.
+
+ * seagate.c: Fix so that some parameters can be set from the lilo
+ prompt. Supply jumpstart flag if we are resetting and need the
+ command restarted. Fix so that we return 1 if we detect a card
+ so that multiple card detection works correctly. Add yet another
+ signature for FD cards (950). Add another signature for ST0x.
+
+ * sg.c, sg.h: New files from Lawrence Foard for generic scsi
+ access.
+
+ * sr.c: Add type casts for (void*) so that we can do pointer
+ arithmetic. Works with GCC without this, but it is not strictly
+ correct. Same bugfix as was in sd.c. Increase read-ahead a la
+ disk driver.
+
+ * sr_ioctl.c: Use scsi_malloc buffer instead of buffer from stack
+ since we cannot guarantee that the stack is < 16Mb.
+
+ ultrastor.c: Update to support 24f properly (JFC's driver).
+
+ wd7000.c: Supply jumpstart flag for reset. Do not round up
+ number of cylinders in biosparam function.
+
+Sat Sep 4 20:49:56 1993
+
+ * 0.99pl13 released.
+
+ * Throughout: Use check_region/snarf_region for all low-level
+ drivers.
+
+ * aha1542.c: Do hard reset instead of soft (some ethercard probes
+ screw us up).
+
+ * scsi.c: Add new flag ASKED_FOR_SENSE so that we can tell if we are
+ in a loop whereby the device returns null sense data.
+
+ * sd.c: Add code to spin up a drive if it is not already spinning.
+ Do this one at a time to make it easier on power supplies.
+
+ * sd_ioctl.c: Use sync_dev instead of fsync_dev in BLKFLSBUF ioctl.
+
+ * seagate.c: Switch around DATA/CONTROL lines.
+
+ * st.c: Change sense to unsigned.
+
+Thu Aug 5 11:59:18 1993
+
+ * 0.99pl12 released.
+
+ * constants.c, constants.h: New files with ascii descriptions of
+ various conditions.
+
+ * Makefile: Do not try to count the number of low-level drivers,
+ just generate the list of .o files.
+
+ * aha1542.c: Replace 16 with sizeof(SCpnt->sense_buffer). Add tests
+ for addresses > 16Mb, panic if we find one.
+
+ * aha1740.c: Ditto with sizeof().
+
+ * fdomain.c: Update to version 3.18. Add new signature, register IRQ
+ with irqaction. Use ID 7 for new board. Be more intelligent about
+ obtaining the h/s/c numbers for biosparam.
+
+ * hosts.c: Do not depend upon Makefile generated count of the number
+ of low-level host adapters.
+
+ * scsi.c: Use array for scsi_command_size instead of a function. Add
+ Texel cdrom and Maxtor XT-4380S to blacklist. Allow compile time
+ option for no-multi lun scan. Add semaphore for possible problems
+ with handshaking, assume device is faulty until we know it not to be
+ the case. Add DEBUG_INIT symbol to dump info as we scan for devices.
+ Zero sense buffer so we can tell if we need to request it. When
+ examining sense information, request sense if buffer is all zero.
+ If RESET, request sense information to see what to do next.
+
+ * scsi_debug.c: Change some constants to use symbols like INT_MAX.
+
+ * scsi_ioctl.c (kernel_scsi_ioctl): New function -for making ioctl
+ calls from kernel space.
+
+ * sd.c: Increase timeout to 300. Use functions in constants.h to
+ display info. Use scsi_malloc buffer for READ_CAPACITY, since
+ we cannot guarantee that a stack based buffer is < 16Mb.
+
+ * sd_ioctl.c: Add BLKFLSBUF ioctl.
+
+ * seagate.c: Add new compile time options for ARBITRATE,
+ SLOW_HANDSHAKE, and SLOW_RATE. Update assembly loops for transferring
+ data. Use kernel_scsi_ioctl to request mode page with geometry.
+
+ * sr.c: Use functions in constants.c to display messages.
+
+ * st.c: Support for variable block size.
+
+ * ultrastor.c: Do not use cache for tape drives. Set
+ unchecked_isa_dma flag, even though this may not be needed (gets set
+ later).
+
+Sat Jul 17 18:32:44 1993
+
+ * 0.99pl11 released. C++ compilable.
+
+ * Throughout: Add type casts all over the place, and use "ip" instead
+ of "info" in the various biosparam functions.
+
+ * Makefile: Compile seagate.c with C++ compiler.
+
+ * aha1542.c: Always set ccb pointer as this gets trashed somehow on
+ some systems. Add a few type casts. Update biosparam function a little.
+
+ * aha1740.c: Add a few type casts.
+
+ * fdomain.c: Update to version 3.17 from 3.6. Now works with
+ TMC-18C50.
+
+ * scsi.c: Minor changes here and there with datatypes. Save use_sg
+ when requesting sense information so that this can properly be
+ restored if we retry the command. Set aside dma buffers assuming each
+ block is 1 page, not 1Kb minix block.
+
+ * scsi_ioctl.c: Add a few type casts. Other minor changes.
+
+ * sd.c: Correctly free all scsi_malloc'd memory if we run out of
+ dma_pool. Store blocksize information for each partition.
+
+ * seagate.c: Minor cleanups here and there.
+
+ * sr.c: Set up blocksize array for all discs. Fix bug in freeing
+ buffers if we run out of dma pool.
+
+Thu Jun 2 17:58:11 1993
+
+ * 0.99pl10 released.
+
+ * aha1542.c: Support for BT 445S (VL-bus board with no dma channel).
+
+	* fdomain.c: Upgrade to version 3.6. Preliminary support for TMC-18C50.
+
+ * scsi.c: First attempt to fix problem with old_use_sg. Change
+ NOT_READY to a SUGGEST_ABORT. Fix timeout race where time might
+ get decremented past zero.
+
+ * sd.c: Add block_fsync function to dispatch table.
+
+ * sr.c: Increase timeout to 500 from 250. Add entry for sync in
+ dispatch table (supply NULL). If we do not have a sectorsize,
+ try to get it in the sd_open function. Add new function just to
+ obtain sectorsize.
+
+ * sr.h: Add needs_sector_size semaphore.
+
+ * st.c: Add NULL for fsync in dispatch table.
+
+	* wd7000.c: Allow another condition for power on that is normal
+	and does not require a panic.
+
+Thu Apr 22 23:10:11 1993
+
+ * 0.99pl9 released.
+
+ * aha1542.c: Use (void) instead of () in setup_mailboxes.
+
+ * scsi.c: Initialize transfersize and underflow fields in SCmd to 0.
+ Do not panic for unsupported message bytes.
+
+ * scsi.h: Allocate 12 bytes instead of 10 for commands. Add
+ transfersize and underflow fields.
+
+ * scsi_ioctl.c: Further bugfix to ioctl_probe.
+
+ * sd.c: Use long instead of int for last parameter in sd_ioctl.
+ Initialize transfersize and underflow fields.
+
+ * sd_ioctl.c: Ditto for sd_ioctl(,,,,);
+
+ * seagate.c: New version from Drew. Includes new signatures for FD
+ cards. Support for 0ws jumper. Correctly initialize
+	scsi_hosts[hostnum].this_id. Improved handling of
+ disconnect/reconnect, and support command linking. Use
+ transfersize and underflow fields. Support scatter-gather.
+
+ * sr.c, sr_ioctl.c: Use long instead of int for last parameter in sr_ioctl.
+ Use buffer and buflength in do_ioctl. Patches from Chris Newbold for
+ scsi-2 audio commands.
+
+ * ultrastor.c: Comment out in_byte (compiler warning).
+
+ * wd7000.c: Change () to (void) in wd7000_enable_dma.
+
+Wed Mar 31 16:36:25 1993
+
+ * 0.99pl8 released.
+
+ * aha1542.c: Handle mailboxes better for 1542C.
+ Do not truncate number of cylinders at 1024 for biosparam call.
+
+ * aha1740.c: Fix a few minor bugs for multiple devices.
+ Same as above for biosparam.
+
+ * scsi.c: Add lockable semaphore for removable devices that can have
+ media removal prevented. Add another signature for flopticals.
+ (allocate_device): Fix race condition. Allow more space in dma pool
+ for blocksizes of up to 4Kb.
+
+ * scsi.h: Define COMMAND_SIZE. Define a SCSI specific version of
+ INIT_REQUEST that can run with interrupts off.
+
+ * scsi_ioctl.c: Make ioctl_probe function more idiot-proof. If
+ a removable device says ILLEGAL REQUEST to a door-locking command,
+ clear lockable flag. Add SCSI_IOCTL_GET_IDLUN ioctl. Do not attempt
+ to lock door for devices that do not have lockable semaphore set.
+
+ * sd.c: Fix race condition for multiple disks. Use INIT_SCSI_REQUEST
+ instead of INIT_REQUEST. Allow sector sizes of 1024 and 256. For
+ removable disks that are not ready, mark them as having a media change
+ (some drives do not report this later).
+
+ * seagate.c: Use volatile keyword for memory-mapped register pointers.
+
+ * sr.c: Fix race condition, a la sd.c. Increase the number of retries
+ to 1. Use INIT_SCSI_REQUEST. Allow 512 byte sector sizes. Do a
+ read_capacity when we init the device so we know the size and
+ sectorsize.
+
+ * st.c: If ioctl not found in st.c, try scsi_ioctl for others.
+
+ * ultrastor.c: Do not truncate number of cylinders at 1024 for
+ biosparam call.
+
+ * wd7000.c: Ditto.
+ Throughout: Use COMMAND_SIZE macro to determine length of scsi
+ command.
+
+
+
+Sat Mar 13 17:31:29 1993
+
+ * 0.99pl7 released.
+
+ Throughout: Improve punctuation in some messages, and use new
+ verify_area syntax.
+
+ * aha1542.c: Handle unexpected interrupts better.
+
+ * scsi.c: Ditto. Handle reset conditions a bit better, asking for
+ sense information and retrying if required.
+
+ * scsi_ioctl.c: Allow for 12 byte scsi commands.
+
+ * ultrastor.c: Update to use scatter-gather.
+
+Sat Feb 20 17:57:15 1993
+
+ * 0.99pl6 released.
+
+ * fdomain.c: Update to version 3.5. Handle spurious interrupts
+ better.
+
+ * sd.c: Use register_blkdev function.
+
+ * sr.c: Ditto.
+
+ * st.c: Use register_chrdev function.
+
+ * wd7000.c: Undo previous change.
+
+Sat Feb 6 11:20:43 1993
+
+ * 0.99pl5 released.
+
+ * scsi.c: Fix bug in testing for UNIT_ATTENTION.
+
+ * wd7000.c: Check at more addresses for bios. Fix bug in biosparam
+ (heads & sectors turned around).
+
+Wed Jan 20 18:13:59 1993
+
+ * 0.99pl4 released.
+
+ * scsi.c: Ignore leading spaces when looking for blacklisted devices.
+
+ * seagate.c: Add a few new signatures for FD cards. Another patch
+ with SCint to fix race condition. Use recursion_depth to keep track
+ of how many times we have been recursively called, and do not start
+ another command unless we are on the outer level. Fixes bug
+ with Syquest cartridge drives (used to crash kernel), because
+ they do not disconnect with large data transfers.
+
+Tue Jan 12 14:33:36 1993
+
+ * 0.99pl3 released.
+
+ * fdomain.c: Update to version 3.3 (a few new signatures).
+
+ * scsi.c: Add CDU-541, Denon DRD-25X to blacklist.
+ (allocate_request, request_queueable): Init request.waiting to NULL if
+ non-buffer type of request.
+
+ * seagate.c: Allow controller to be overridden with CONTROLLER symbol.
+ Set SCint=NULL when we are done, to remove race condition.
+
+ * st.c: Changes from Kai.
+
+Wed Dec 30 20:03:47 1992
+
+ * 0.99pl2 released.
+
+ * scsi.c: Blacklist back in. Remove Newbury drive as other bugfix
+ eliminates need for it here.
+
+ * sd.c: Return ENODEV instead of EACCES if no such device available.
+ (sd_init) Init blkdev_fops earlier so that sd_open is available sooner.
+
+ * sr.c: Same as above for sd.c.
+
+ * st.c: Return ENODEV instead of ENXIO if no device. Init chrdev_fops
+ sooner, so that it is always there even if no tapes.
+
+ * seagate.c (controller_type): New variable to keep track of ST0x or
+ FD. Modify signatures list to indicate controller type, and init
+ controller_type once we find a match.
+
+ * wd7000.c (wd7000_set_sync): Remove redundant function.
+
+Sun Dec 20 16:26:24 1992
+
+ * 0.99pl1 released.
+
+ * scsi_ioctl.c: Bugfix - check dev->index, not dev->id against
+ NR_SCSI_DEVICES.
+
+ * sr_ioctl.c: Verify that device exists before allowing an ioctl.
+
+ * st.c: Patches from Kai - change timeout values, improve end of tape
+ handling.
+
+Sun Dec 13 18:15:23 1992
+
+ * 0.99 kernel released. Baseline for this ChangeLog.
+Release Date : Thu Feb 03 12:27:22 EST 2005 - Seokmann Ju <sju@lsil.com>
+Current Version : 2.20.4.5 (scsi module), 2.20.2.5 (cmm module)
+Older Version : 2.20.4.4 (scsi module), 2.20.2.4 (cmm module)
+
+1. Modified name of two attributes in scsi_host_template.
+ On Wed, 2005-02-02 at 10:56 -0500, Ju, Seokmann wrote:
+ > + .sdev_attrs = megaraid_device_attrs,
+ > + .shost_attrs = megaraid_class_device_attrs,
+
+ These are, perhaps, slightly confusing names.
+ The terms device and class_device have well defined meanings in the
+ generic device model, neither of which is what you mean here.
+ Why not simply megaraid_sdev_attrs and megaraid_shost_attrs?
+
+ Other than this, it looks fine to me too.
+
+Release Date : Thu Jan 27 00:01:03 EST 2005 - Atul Mukker <atulm@lsil.com>
+Current Version : 2.20.4.4 (scsi module), 2.20.2.5 (cmm module)
+Older Version : 2.20.4.3 (scsi module), 2.20.2.4 (cmm module)
+
+1. Bump up the version of scsi module due to its conflict.
+
+Release Date : Thu Jan 21 00:01:03 EST 2005 - Atul Mukker <atulm@lsil.com>
+Current Version : 2.20.4.3 (scsi module), 2.20.2.5 (cmm module)
+Older Version : 2.20.4.2 (scsi module), 2.20.2.4 (cmm module)
+
+1. Remove driver ioctl for logical drive to scsi address translation and
+   replace it with the sysfs attribute. To remove drives and change
+   capacity, applications shall now use the device attribute to get the
+   logical drive number for a scsi device. For adding newly created
+   logical drives, the class device attribute would be required to uniquely
+   identify each controller.
+ - Atul Mukker <atulm@lsil.com>
+
+ "James, I've been thinking about this a little more, and you may be on
+ to something here. Let each driver add files as such:"
+
+ - Matt Domsch <Matt_Domsch@dell.com>, 12.15.2004
+ linux-scsi mailing list
+
+
+ "Then, if you simply publish your LD number as an extra parameter of
+ the device, you can look through /sys to find it."
+
+ - James Bottomley <James.Bottomley@SteelEye.com>, 01.03.2005
+ linux-scsi mailing list
+
+
+ "I don't see why not ... it's your driver, you can publish whatever
+ extra information you need as scsi_device attributes; that was one of
+ the designs of the extensible attribute system."
+
+ - James Bottomley <James.Bottomley@SteelEye.com>, 01.06.2005
+ linux-scsi mailing list
+
+2. Add AMI megaraid support - Brian King <brking@charter.net>
+ PCI_VENDOR_ID_AMI, PCI_DEVICE_ID_AMI_MEGARAID3,
+ PCI_VENDOR_ID_AMI, PCI_SUBSYS_ID_PERC3_DC,
+
+3. Make some code static - Adrian Bunk <bunk@stusta.de>
+ Date: Mon, 15 Nov 2004 03:14:57 +0100
+
+ The patch below makes some needlessly global code static.
+ -wait_queue_head_t wait_q;
+ +static wait_queue_head_t wait_q;
+
+ Signed-off-by: Adrian Bunk <bunk@stusta.de>
+
+4. Added NEC ROMB support - NEC MegaRAID PCI Express ROMB controller
+ PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_MEGARAID_NEC_ROMB_2E,
+ PCI_SUBSYS_ID_NEC, PCI_SUBSYS_ID_MEGARAID_NEC_ROMB_2E,
+
+5. Fixed tape drive issue: For any Direct CDB command to a physical device,
+   including tape, the timeout value set by the driver was 10 minutes. With
+   this value, most commands return within the timeout. However, commands
+   like ERASE or FORMAT can take more than an hour, depending on the
+   capacity of the device, so the command could be terminated before it
+   completes.
+   To address this issue, the 'timeout' field in the DCDB command is now
+   set to NO TIMEOUT (i.e., 4).
+
+
+
+Release Date : Thu Dec 9 19:10:23 EST 2004
+ - Sreenivas Bagalkote <sreenib@lsil.com>
+
+Current Version : 2.20.4.2 (scsi module), 2.20.2.4 (cmm module)
+Older Version : 2.20.4.1 (scsi module), 2.20.2.3 (cmm module)
+
+i. Introduced driver ioctl that returns scsi address for a given ld.
+
+ "Why can't the existing sysfs interfaces be used to do this?"
+ - Brian King (brking@us.ibm.com)
+
+ "I've looked into solving this another way, but I cannot see how
+ to get this driver-private mapping of logical drive number-> HCTL
+ without putting code something like this into the driver."
+
+ "...and by providing a mapping a function to userspace, the driver
+ is free to change its mapping algorithm in the future if necessary .."
+ - Matt Domsch (Matt_Domsch@dell.com)
+
Release Date : Thu Dec 9 19:02:14 EST 2004 - Sreenivas Bagalkote <sreenib@lsil.com>
Current Version : 2.20.4.1 (scsi module), 2.20.2.3 (cmm module)
i. Handle IOCTL cmd timeouts more properly.
ii. pci_dma_sync_{sg,single}_for_cpu was introduced into megaraid_mbox
- incorrectly (instead of _for_device). Changed to appropriate
+ incorrectly (instead of _for_device). Changed to appropriate
pci_dma_sync_{sg,single}_for_device.
Release Date : Wed Oct 06 11:15:29 EDT 2004 - Sreenivas Bagalkote <sreenib@lsil.com>
Yes 1 1 character received, marked as
TTY_PARITY
+ Other flags may be used (eg, xon/xoff characters) if your
+ hardware supports hardware "soft" flow control.
+
Locking: none.
Interrupts: caller dependent.
This call must not sleep
pm(port,state,oldstate)
- perform any power management related activities on the specified
- port. state indicates the new state (defined by ACPI D0-D3),
+ Perform any power management related activities on the specified
+ port. State indicates the new state (defined by ACPI D0-D3),
oldstate indicates the previous state. Essentially, D0 means
fully on, D3 means powered down.
This function should not be used to grab any resources.
+ This will be called when the port is initially opened and finally
+ closed, except when the port is also the system console. This
+ will occur even if CONFIG_PM is not set.
+
Locking: none.
Interrupts: caller dependent.
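+
+  As an illustration only (a hypothetical driver, not taken from this
+  document), a pm() implementation is typically a small function keyed off
+  the state argument; the actual power switching depends entirely on the
+  hardware:
+
+	static void mydrv_pm(struct uart_port *port, unsigned int state,
+			     unsigned int oldstate)
+	{
+		if (state == 3) {
+			/* entering D3 (powered down): e.g. gate the port clock */
+		} else if (state == 0) {
+			/* back to D0 (fully on): e.g. re-enable the clock */
+		}
+	}
+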
FIRMWARE
========
+[As of 2.6.11, the firmware can be loaded automatically with hotplug
+ when CONFIG_FW_LOADER is set. The mixartloader is necessary only
+ for older versions or when you build the driver into the kernel.]
+
For loading the firmware automatically after the module is loaded, use
the post-install command. For example, add the following entry to
/etc/modprobe.conf for miXart driver:
The block and non-block options are used to change the behavior of
opening the device file.
-As default, ALSA behaves as defined in POSIX, i.e. blocks the file
-when it's busy until the device becomes free (unless O_NONBLOCK is
-specified). Some applications assume the non-block open behavior,
-which are actually implemented in some real OSS drivers.
+
+By default, ALSA behaves like the original OSS drivers, i.e. it does not
+block the file when it's busy. The -EBUSY error is returned in this case.
This blocking behavior can be changed globally via nonblock_open
-module option of snd-pcm-oss. For using the non-block mode as default
+module option of snd-pcm-oss. To use the blocking mode as the default
for OSS devices, define like the following:
- options snd-pcm-oss nonblock_open=1
+ options snd-pcm-oss nonblock_open=0
The partial-frag and no-silence commands have been added recently.
Both commands are for optimization use only. The former command
--- /dev/null
+Copyright 2004 Linus Torvalds
+Copyright 2004 Pavel Machek <pavel@suse.cz>
+
+Using sparse for typechecking
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+"__bitwise" is a type attribute, so you have to do something like this:
+
+ typedef int __bitwise pm_request_t;
+
+ enum pm_request {
+ PM_SUSPEND = (__force pm_request_t) 1,
+ PM_RESUME = (__force pm_request_t) 2
+ };
+
+which makes PM_SUSPEND and PM_RESUME "bitwise" integers (the "__force" is
+there because sparse will complain about casting to/from a bitwise type,
+but in this case we really _do_ want to force the conversion). And because
+the enum values are all the same type, now "enum pm_request" will be that
+type too.
+
+And with gcc, all the __bitwise/__force stuff goes away, and it all ends
+up looking just like integers to gcc.
+
+Quite frankly, you don't need the enum there. The above all really just
+boils down to one special "int __bitwise" type.
+
+So the simpler way is to just do
+
+ typedef int __bitwise pm_request_t;
+
+ #define PM_SUSPEND ((__force pm_request_t) 1)
+ #define PM_RESUME ((__force pm_request_t) 2)
+
+and you now have all the infrastructure needed for strict typechecking.
+
+One small note: the constant integer "0" is special. You can use a
+constant zero as a bitwise integer type without sparse ever complaining.
+This is because "bitwise" (as the name implies) was designed for making
+sure that bitwise types don't get mixed up (little-endian vs big-endian
+vs cpu-endian vs whatever), and there the constant "0" really _is_
+special.
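+
+For illustration only (this is not from the kernel tree; the macro
+definitions below are a local stand-in for what the kernel headers provide
+when sparse defines __CHECKER__), here is a small file you can run sparse
+on to see the behavior described above:
+
+	/* bitwise-demo.c: minimal sketch, not kernel code */
+	#ifdef __CHECKER__
+	#define __bitwise __attribute__((bitwise))
+	#define __force   __attribute__((force))
+	#else
+	#define __bitwise
+	#define __force
+	#endif
+
+	typedef int __bitwise pm_request_t;
+
+	#define PM_SUSPEND ((__force pm_request_t) 1)
+	#define PM_RESUME  ((__force pm_request_t) 2)
+
+	static pm_request_t a = 0;          /* ok: constant 0 is special */
+	static pm_request_t b = PM_SUSPEND; /* ok: explicit __force conversion */
+	static pm_request_t c = 1;          /* sparse warns: incorrect type */
+
+Running "sparse -Wbitwise bitwise-demo.c" should complain only about the
+last initializer.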
+
+Modify top-level Makefile to say
+
+CHECK = sparse -Wbitwise
+
+or you don't get any checking at all.
+
+
+Where to get sparse
+~~~~~~~~~~~~~~~~~~~
+
+With BK, you can just get it from
+
+ bk://sparse.bkbits.net/sparse
+
+and DaveJ has tar-balls at
+
+ http://www.codemonkey.org.uk/projects/bitkeeper/sparse/
+
+
+Once you have it, just do
+
+ make
+ make install
+
+as your regular user, and it will install sparse in your ~/bin directory.
+After that, doing a kernel make with "make C=1" will run sparse on all the
+C files that get recompiled, or with "make C=2" will run sparse on the
+files whether they need to be recompiled or not (ie the latter is fast way
+to check the whole tree if you have already built it).
kernel to userspace interfaces. The kernel to userspace interface is
the one that application programs use, the syscall interface. That
interface is _very_ stable over time, and will not break. I have old
-programs that were built on a pre 0.9something kernel that still works
+programs that were built on a pre 0.9something kernel that still work
just fine on the latest 2.6 kernel release. This interface is the one
that users and application programmers can count on being stable.
ensures that your driver is always buildable, and works over time, with
very little effort on your part.
-The very good side affects of having your driver in the main kernel tree
+The very good side effects of having your driver in the main kernel tree
are:
- The quality of the driver will rise as the maintenance costs (to the
original developer) will decrease.
+* NOTE - This is an unmaintained driver. Lantronix, which bought Stallion
+technologies, is not active in driver maintenance, and they have no information
+on when or if they will have a 2.6 driver.
+
+James Nelson <james4765@gmail.com> - 12-12-2004
Stallion Multiport Serial Driver Readme
---------------------------------------
-Copyright (C) 1994-1999, Stallion Technologies (support@stallion.com).
+Copyright (C) 1994-1999, Stallion Technologies.
Version: 5.5.1
Date: 28MAR99
If you are using any of the Stallion intelligent multiport boards (Brumby,
ONboard, EasyConnection 8/64 (ISA, EISA, MCA), EasyConnection/RA-PCI) with
-Linux you will need to get the driver utility package. This package is
-available at most of the Linux archive sites (and on CD-ROMs that contain
-these archives). The file will be called stallion-X.X.X.tar.gz where X.X.X
-will be the version number. In particular this package contains the board
-embedded executable images that are required for these boards. It also
-contains the downloader program. These boards cannot be used without this.
+Linux you will need to get the driver utility package. This contains a
+firmware loader and the firmware images necessary to make the devices operate.
The Stallion Technologies ftp site, ftp.stallion.com, will always have
-the latest version of the driver utility package. Other sites that usually
-have the latest version are tsx-11.mit.edu, sunsite.unc.edu and their
-mirrors.
+the latest version of the driver utility package.
-ftp.stallion.com:/drivers/ata5/Linux/v550.tar.gz
-tsx-11.mit.edu:/pub/linux/packages/stallion/stallion-5.5.0.tar.gz
-sunsite.unc.edu:/pub/Linux/kernel/patches/serial/stallion-5.5.0.tar.gz
+ftp://ftp.stallion.com/drivers/ata5/Linux/ata-linux-550.tar.gz
As of the printing of this document the latest version of the driver
utility package is 5.5.0. If a later version is now available then you
should use the latest version.
If you are using the EasyIO, EasyConnection 8/32 or EasyConnection 8/64-PCI
-boards then you don't need this package. Although it does have a handy
-script to create the /dev device nodes for these boards, and a serial stats
+boards then you don't need this package, although it does have a serial stats
display program.
If you require DIP switch settings, EISA or MCA configuration files, or any
Typically to load up the smart board driver use:
- insmod stallion.o
+ modprobe stallion
This will load the EasyIO and EasyConnection 8/32 driver. It will output a
message to say that it loaded and print the driver version number. It will
To load the intelligent board driver use:
- insmod istallion.o
+ modprobe istallion
It will output similar messages to the smart board driver.
If not using an auto-detectable board type (that is a PCI board) then you
-will also need to supply command line arguments to the "insmod" command
+will also need to supply command line arguments to the modprobe command
when loading the driver. The general form of the configuration argument is
board?=<name>[,<ioaddr>[,<addr>][,<irq>]]
board? -- specifies the arbitrary board number of this board,
can be in the range 0 to 3.
- name -- textual name of this board. The board name is the comman
+ name -- textual name of this board. The board name is the common
board name, or any "shortened" version of that. The board
type number may also be used here.
Up to 4 board configuration arguments can be specified on the load line.
Here are some examples:
- insmod stallion.o board0=easyio,0x2a0,5
+ modprobe stallion board0=easyio,0x2a0,5
This configures an EasyIO board as board 0 at I/O address 0x2a0 and IRQ 5.
- insmod istallion.o board3=ec8/64,0x2c0,0xcc000
+ modprobe istallion board3=ec8/64,0x2c0,0xcc000
This configures an EasyConnection 8/64 ISA as board 3 at I/O address 0x2c0 at
memory address 0xcc000.
- insmod stallion.o board1=ec8/32-at,0x2a0,0x280,10
+ modprobe stallion board1=ec8/32-at,0x2a0,0x280,10
This configures an EasyConnection 8/32 ISA board at primary I/O address 0x2a0,
secondary address 0x280 and IRQ 10.
You will probably want to enter this module load and configuration information
into your system startup scripts so that the drivers are loaded and configured
-on each system boot. Typically the start up script would be something line
-/etc/rc.d/rc.modules.
+on each system boot. Typically the start up script would be something like
+/etc/modprobe.conf.
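+
+For example, with module-init-tools the same arguments can be given as an
+"options" line in /etc/modprobe.conf (a sketch only; substitute the board
+name, addresses and IRQ that match your hardware):
+
+	options stallion board0=easyio,0x2a0,5
+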
2.2 STATIC DRIVER CONFIGURATION:
To set up the driver(s) for the boards that you want to use you need to
edit the appropriate driver file and add configuration entries.
-If using EasyIO or EasyConnection 8/32 ISA or MCA boards, do:
- vi /usr/src/linux/drivers/char/stallion.c
+If using EasyIO or EasyConnection 8/32 ISA or MCA boards,
+ In drivers/char/stallion.c:
- find the definition of the stl_brdconf array (of structures)
near the top of the file
- modify this to match the boards you are going to install
- save and exit
If using ONboard, Brumby, Stallion or EasyConnection 8/64 (ISA or EISA)
-boards then do:
- vi /usr/src/linux/drivers/char/istallion.c
+boards,
+ In drivers/char/istallion.c:
- find the definition of the stli_brdconf array (of structures)
near the top of the file
- modify this to match the boards you are going to install
of course the ports will not be operational!
If you are using the modularized version of the driver you might want to put
-the insmod calls in the startup script as well (before the download lines
+the modprobe calls in the startup script as well (before the download lines
obviously).
3.2 USING THE SERIAL PORTS
Once the driver is installed you will need to setup some device nodes to
-access the serial ports. The simplest method is to use the stallion utility
-"mkdevnods" script. It will automatically create device entries for Stallion
-boards. This will create the normal serial port devices as /dev/ttyE# where
-# is the port number starting from 0. A bank of 64 minor device numbers is
-allocated to each board, so the first port on the second board is port 64,
-etc. A set of callout type devices is also created. They are created as the
-devices /dev/cue# where # is the same as for the ttyE devices.
+access the serial ports. The simplest method is to use the /dev/MAKEDEV program.
+It will automatically create device entries for Stallion boards. This will
+create the normal serial port devices as /dev/ttyE# where # is the port number
+starting from 0. A bank of 64 minor device numbers is allocated to each board,
+so the first port on the second board is port 64, etc. A set of callout type
+devices may also be created. They are created as the devices /dev/cue# where #
+is the same as for the ttyE devices.
For the most part the Stallion driver tries to emulate the standard PC system
COM ports and the standard Linux serial driver. The idea is that you should
+ Last update: 2005-01-17, version 1.4
+
+This file is maintained by H. Peter Anvin <unicode@lanana.org> as part
+of the Linux Assigned Names And Numbers Authority (LANANA) project.
+The current version can be found at:
+
+ http://www.lanana.org/docs/unicode/unicode.txt
+
+ ------------------------
+
The Linux kernel code has been rewritten to use Unicode to map
characters to fonts. By downloading a single Unicode-to-font table,
both the eight-bit character sets and UTF-8 mode are changed to use
permits for example the use of block graphics even with a Latin-1 font
loaded.
+Note that although these codes are similar to ISO 2022, neither the
+codes nor their uses match ISO 2022; Linux has two 8-bit codes (G0 and
+G1), whereas ISO 2022 has four 7-bit codes (G0-G3).
+
In accordance with the Unicode standard/ISO 10646 the range U+F000 to
U+F8FF has been reserved for OS-wide allocation (the Unicode Standard
refers to this as a "Corporate Zone", since this is inaccurate for
two (in case 1024- or 2048-character fonts ever become necessary).
This leaves U+E000 to U+EFFF as End User Zone.
-The Unicodes in the range U+F000 to U+F1FF have been hard-coded to map
-directly to the loaded font, bypassing the translation table. The
-user-defined map now defaults to U+F000 to U+F1FF, emulating the
-previous behaviour. This range may expand in the future should it be
-warranted.
+[v1.2]: The Unicode range U+F000 up to U+F7FF has been
+hard-coded to map directly to the loaded font, bypassing the
+translation table. The user-defined map now defaults to U+F000 to
+U+F0FF, emulating the previous behaviour. In practice, this range
+might be shorter; for example, vgacon can only handle 256-character
+(U+F000..U+F0FF) or 512-character (U+F000..U+F1FF) fonts.
+
Actual characters assigned in the Linux Zone
--------------------------------------------
-In addition, the following characters not present in Unicode 1.1.4 (at
-least, I have not found them!) have been defined; these are used by
-the DEC VT graphics map:
+In addition, the following characters not present in Unicode 1.1.4
+have been defined; these are used by the DEC VT graphics map. [v1.2]
+THIS USE IS OBSOLETE AND SHOULD NO LONGER BE USED; PLEASE SEE BELOW.
U+F800 DEC VT GRAPHICS HORIZONTAL LINE SCAN 1
U+F801 DEC VT GRAPHICS HORIZONTAL LINE SCAN 3
a smooth progression in the DEC VT graphics character set. I have
omitted the scan 5 line, since it is also used as a block-graphics
character, and hence has been coded as U+2500 FORMS LIGHT HORIZONTAL.
-However, I left U+F802 blank should the need arise.
-Klingon language support
-------------------------
+[v1.3]: These characters have been officially added to Unicode 3.2.0;
+they are added at U+23BA, U+23BB, U+23BC, U+23BD. Linux now uses the
+new values.
-Unfortunately, Unicode/ISO 10646 does not allocate code points for the
-language Klingon, probably fearing the potential code point explosion
-if many fictional languages were submitted for inclusion. There are
-also political reasons (the Japanese, for example, are not too happy
-about the whole 16-bit concept to begin with.) However, with Linux
-being a hacker-driven OS it seems this is a brilliant linguistic hack
-worth supporting. Hence I have chosen to add it to the list in the
-Linux Zone.
+[v1.2]: The following characters have been added to represent common
+keyboard symbols that are unlikely to ever be added to Unicode proper
+since they are horribly vendor-specific. This, of course, is an
+excellent example of horrible design.
-Several glyph forms for the Klingon alphabet have been proposed.
-However, since the set of symbols appear to be consistent throughout,
-with only the actual shapes being different, in keeping with standard
-Unicode practice these differences are considered font variants.
+U+F810 KEYBOARD SYMBOL FLYING FLAG
+U+F811 KEYBOARD SYMBOL PULLDOWN MENU
+U+F812 KEYBOARD SYMBOL OPEN APPLE
+U+F813 KEYBOARD SYMBOL SOLID APPLE
-Klingon has an alphabet of 26 characters, a positional numeric writing
-system with 10 digits, and is written left-to-right, top-to-bottom.
-Punctuation appears to be only used in Latin transliteration; it
-appears customary to write each sentence on its own line, and
-centered. Space has been reserved for punctuation should it prove
-necessary.
+Klingon language support
+------------------------
+
+In 1996, Linux was the first operating system in the world to add
+support for the artificial language Klingon, created by Marc Okrand
+for the "Star Trek" television series. This encoding was later
+adopted by the ConScript Unicode Registry and proposed (but ultimately
+rejected) for inclusion in Unicode Plane 1. Thus, it remains as a
+Linux/CSUR private assignment in the Linux Zone.
This encoding has been endorsed by the Klingon Language Institute.
For more information, contact them at:
located it at the end, on a 16-cell boundary in keeping with standard
Unicode practice.
+NOTE: This range is now officially managed by the ConScript Unicode
+Registry. The normative reference is at:
+
+ http://www.evertype.com/standards/csur/klingon.html
+
+Klingon has an alphabet of 26 characters, a positional numeric writing
+system with 10 digits, and is written left-to-right, top-to-bottom.
+
+Several glyph forms for the Klingon alphabet have been proposed.
+However, since the set of symbols appear to be consistent throughout,
+with only the actual shapes being different, in keeping with standard
+Unicode practice these differences are considered font variants.
+
U+F8D0 KLINGON LETTER A
U+F8D1 KLINGON LETTER B
U+F8D2 KLINGON LETTER CH
U+F8F8 KLINGON DIGIT EIGHT
U+F8F9 KLINGON DIGIT NINE
+U+F8FD KLINGON COMMA
+U+F8FE KLINGON FULL STOP
+U+F8FF KLINGON SYMBOL FOR EMPIRE
+
Other Fictional and Artificial Scripts
--------------------------------------
Since the assignment of the Klingon Linux Unicode block, a registry of
-fictional and artificial scripts has been established by John Cowan,
-<cowan@ccil.org>. The ConScript Unicode Registry is accessible at
-http://locke.ccil.org/~cowan/csur/; the ranges used fall at the bottom
-of the End User Zone and can hence not be normatively assigned, but it
-is recommended that people who wish to encode fictional scripts use
-these codes, in the interest of interoperability. For Klingon, CSUR
-has adopted the Linux encoding.
-
- H. Peter Anvin <hpa@zytor.com>
+fictional and artificial scripts has been established by John Cowan
+<jcowan@reutershealth.com> and Michael Everson <everson@evertype.com>.
+The ConScript Unicode Registry is accessible at:
+
+ http://www.evertype.com/standards/csur/
+
+The ranges used fall at the low end of the End User Zone and can hence
+not be normatively assigned, but it is recommended that people who
+wish to encode fictional scripts use these codes, in the interest of
+interoperability. For Klingon, CSUR has adopted the Linux encoding.
+The CSUR people are driving the addition of Tengwar and Cirth into Unicode
+Plane 1; the addition of Klingon to Unicode Plane 1 has been rejected
+and so the above encoding remains official.
1. Copyright
2. Disclaimer
3. License
-4. Overview
-5. Driver installation
+4. Overview and features
+5. Module dependencies
6. Module loading
7. Module parameters
8. Optional device control through "sysfs"
9. Supported devices
-10. How to add support for new image sensors
+10. How to add plug-ins for new image sensors
11. Notes for V4L2 application developers
-12. Contact information
-13. Credits
+12. Video frame formats
+13. Contact information
+14. Credits
1. Copyright
============
-Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it>
+Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it>
2. Disclaimer
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-4. Overview
-===========
+4. Overview and features
+========================
This driver attempts to support the video and audio streaming capabilities of
-the devices mounting the SONiX SN9C101, SN9C102 and SN9C103 (or SUI-102) PC
-Camera Controllers.
+the devices mounting the SONiX SN9C101, SN9C102 and SN9C103 PC Camera
+Controllers.
It is worth noting that SONiX has never collaborated with the author during the
development of this project, despite several requests for enough detailed
specifications of the register tables, compression engine and video data format
-of the above chips.
+of the above chips. Nevertheless, this information is no longer necessary,
+because all the aspects related to these chips are known and have been
+described in detail in this documentation.
The driver relies on the Video4Linux2 and USB core modules. It has been
designed to run properly on SMP systems as well.
pixel area of image sensor;
- image downscaling with arbitrary scaling factors from 1, 2 and 4 in both
directions (see "Notes for V4L2 application developers" paragraph);
-- two different video formats for uncompressed or compressed data (see also
- "Notes for V4L2 application developers" paragraph);
+- two different video formats for uncompressed or compressed data in low or
+ high compression quality (see also "Notes for V4L2 application developers"
+ and "Video frame formats" paragraphs);
- full support for the capabilities of many of the possible image sensors that
can be connected to the SN9C10x bridges, including, for instance, red, green,
blue and global gain adjustments and exposure (see "Supported devices"
paragraph for details);
- use of default color settings for sunlight conditions;
-- dynamic I/O interface for both SN9C10x and image sensor control (see
- "Optional device control through 'sysfs'" paragraph);
+- dynamic I/O interface for both SN9C10x and image sensor control and
+ monitoring (see "Optional device control through 'sysfs'" paragraph);
- dynamic driver control thanks to various module parameters (see "Module
parameters" paragraph);
- up to 64 cameras can be handled at the same time; they can be connected and
other camera.
Default: -1
-------------------------------------------------------------------------------
+Name: force_munmap;
+Type: bool array (min = 0, max = 64)
+Syntax: <0|1[,...]>
+Description: Force the application to unmap previously mapped buffer memory
+ before calling any VIDIOC_S_CROP or VIDIOC_S_FMT ioctl's. Not
+ all the applications support this feature. This parameter is
+ specific for each detected camera.
+ 0 = do not force memory unmapping
+ 1 = force memory unmapping (save memory)
+Default: 0
+-------------------------------------------------------------------------------
Name: debug
Type: int
Syntax: <n>
-------------------------------------------------------------------------------
-8. Optional device control through "sysfs"
+8. Optional device control through "sysfs" [1]
==========================================
It is possible to read and write both the SN9C10x and the image sensor
registers by using the "sysfs" filesystem interface.
SN9C10x bridge, while the other two control the sensor chip. "reg" and
"i2c_reg" hold the values of the current register index where the following
reading/writing operations are addressed at through "val" and "i2c_val". Their
-use is not intended for end-users, unless you know what you are doing. Note
-that "i2c_reg" and "i2c_val" won't be created if the sensor does not actually
-support the standard I2C protocol. Also, remember that you must be logged in as
+use is not intended for end-users. Note that "i2c_reg" and "i2c_val" won't be
+created if the sensor does not actually support the standard I2C protocol or
+its registers are not 8-bit long. Also, remember that you must be logged in as
root before writing to them.
As an example, suppose we were to want to read the value contained in the
[root@localhost #] echo 2 > val
Note that the SN9C10x always returns 0 when some of its registers are read.
-To avoid race conditions, all the I/O accesses to the files are serialized.
+To avoid race conditions, all the I/O accesses to the above files are
+serialized.
+
+The sysfs interface also provides the "frame_header" entry, which exports the
+frame header of the most recently requested and captured video frame. The header
+is 12 bytes long and is appended to every video frame by the SN9C10x
+controllers. As an example, this additional information can be used by the user
+application for implementing auto-exposure features via software.
+
+The following table describes the frame header:
+
+Byte # Value Description
+------ ----- -----------
+0x00 0xFF Frame synchronisation pattern.
+0x01 0xFF Frame synchronisation pattern.
+0x02 0x00 Frame synchronisation pattern.
+0x03 0xC4 Frame synchronisation pattern.
+0x04 0xC4 Frame synchronisation pattern.
+0x05 0x96 Frame synchronisation pattern.
+0x06 0x00 or 0x01 Unknown meaning. The exact value depends on the chip.
+0x07 0xXX Variable value, whose bits are ff00uzzc, where ff is a
+ frame counter, u is unknown, zz is a size indicator
+ (00 = VGA, 01 = SIF, 10 = QSIF) and c stands for
+ "compression enabled" (1 = yes, 0 = no).
+0x08 0xXX Brightness sum inside Auto-Exposure area (low-byte).
+0x09 0xXX Brightness sum inside Auto-Exposure area (high-byte).
+ For a pure white image, this number will be equal to 500
+ times the area of the specified AE area. For images
+ that are not pure white, the value scales down according
+ to relative whiteness.
+0x0A 0xXX Brightness sum outside Auto-Exposure area (low-byte).
+0x0B 0xXX Brightness sum outside Auto-Exposure area (high-byte).
+ For a pure white image, this number will be equal to 125
+ times the area outside of the specified AE area. For
+ images that are not pure white, the value scales down
+ according to relative whiteness.
+
+The AE area (sx, sy, ex, ey) in the active window can be set by programming the
+registers 0x1c, 0x1d, 0x1e and 0x1f of the SN9C10x controllers, where one unit
+corresponds to 32 pixels.
+
+[1] The frame header has been documented by Bertrik Sikken.
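+
+As an illustration only (this sketch is not part of the driver), the
+interesting fields can be extracted from a 12-byte header read from
+"frame_header"; here "h" is assumed to hold the raw header bytes:
+
+	static const unsigned char sync[6] = {0xFF, 0xFF, 0x00, 0xC4, 0xC4, 0x96};
+	unsigned int counter, compressed;
+	unsigned long ae_inside, ae_outside;
+
+	if (memcmp(h, sync, 6) == 0) {                 /* valid header?    */
+		counter    = (h[0x07] >> 6) & 0x03;    /* "ff" bits        */
+		compressed = h[0x07] & 0x01;           /* "c" bit          */
+		ae_inside  = h[0x08] | (h[0x09] << 8); /* low + high byte  */
+		ae_outside = h[0x0A] | (h[0x0B] << 8);
+	}
+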
9. Supported devices
Model Manufacturer
----- ------------
-PAS106B PixArt Imaging Inc.
-PAS202BCB PixArt Imaging Inc.
+HV7131D Hynix Semiconductor, Inc.
+MI-0343 Micron Technology, Inc.
+PAS106B PixArt Imaging, Inc.
+PAS202BCB PixArt Imaging, Inc.
TAS5110C1B Taiwan Advanced Sensor Corporation
TAS5130D1B Taiwan Advanced Sensor Corporation
driver.
-10. How to add support for new image sensors
-============================================
-It should be easy to write code for new sensors by using the small API that I
-have created for this purpose, which is present in "sn9c102_sensor.h"
+10. How to add plug-ins for new image sensors
+=============================================
+It should be easy to write plug-ins for new sensors by using the small API
+that has been created for this purpose, which is present in "sn9c102_sensor.h"
(documentation is included there). As an example, have a look at the code in
"sn9c102_pas106b.c", which uses the mentioned interface.
-At the moment, possible unsupported image sensors are: HV7131x series (VGA),
-MI03x series (VGA), OV7620 (VGA), OV7630 (VGA), CIS-VF10 (VGA).
+At the moment, possible unsupported image sensors are: CIS-VF10 (VGA),
+OV7620 (VGA), OV7630 (VGA).
11. Notes for V4L2 application developers
file descriptor. Once it is selected, the application must close and reopen the
device to switch to the other I/O method;
-- previously mapped buffer memory must always be unmapped before calling any
-of the "VIDIOC_S_CROP", "VIDIOC_TRY_FMT" and "VIDIOC_S_FMT" ioctl's. The same
-number of buffers as before will be allocated again to match the size of the
-new video frames, so you have to map the buffers again before any I/O attempts
-on them.
+- although it is not mandatory, previously mapped buffer memory should always
+be unmapped before calling any "VIDIOC_S_CROP" or "VIDIOC_S_FMT" ioctl's.
+The same number of buffers as before will be allocated again to match the size
+of the new video frames, so you have to map the buffers again before any I/O
+attempts on them.
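+
+A minimal sketch of the recommended sequence follows; "fd" is assumed to be
+the open video device, "buffers[]" and "lengths[]" to describe n_buffers
+previously mmap()ed buffers, and error handling and the usual headers
+(<linux/videodev2.h>, <sys/ioctl.h>, <sys/mman.h>, <string.h>) are omitted.
+The pixel format shown is the standard V4L2 8-bit sequential Bayer fourcc;
+check what the driver actually reports via VIDIOC_ENUM_FMT:
+
+	struct v4l2_format fmt;
+	int i;
+
+	for (i = 0; i < n_buffers; i++)          /* unmap the old buffers first */
+		munmap(buffers[i], lengths[i]);
+
+	memset(&fmt, 0, sizeof(fmt));
+	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	fmt.fmt.pix.width = 352;                 /* e.g. switch to SIF */
+	fmt.fmt.pix.height = 288;
+	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR8;
+	ioctl(fd, VIDIOC_S_FMT, &fmt);
+
+	/* The driver reallocates the same number of buffers for the new frame
+	   size; query (VIDIOC_QUERYBUF) and mmap() them again before any I/O. */
+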
Consistently with the hardware limits, this driver also supports image
downscaling with arbitrary scaling factors from 1, 2 and 4 in both directions.
This driver supports two different video formats: the first one is the "8-bit
Sequential Bayer" format and can be used to obtain uncompressed video data
from the device through the current I/O method, while the second one provides
-"raw" compressed video data (without the initial and final frame headers). The
-compression quality may vary from 0 to 1 and can be selected or queried thanks
-to the VIDIOC_S_JPEGCOMP and VIDIOC_G_JPEGCOMP V4L2 ioctl's. For maximum
-flexibility, the default active video format depends on how the image sensor
-being used is initialized (as described in the documentation of the API for the
-image sensors supplied by this driver).
+"raw" compressed video data (without frame headers not related to the
+compressed data). The compression quality may vary from 0 to 1 and can be
+selected or queried thanks to the VIDIOC_S_JPEGCOMP and VIDIOC_G_JPEGCOMP V4L2
+ioctl's. For maximum flexibility, both the default active video format and the
+default compression quality depend on how the image sensor being used is
+initialized (as described in the documentation of the API for the image sensors
+supplied by this driver).
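+
+As a minimal sketch (again assuming an open file descriptor "fd" for the
+video device node), the compression quality can be queried and changed from
+an application with the two ioctl's named above:
+
+	struct v4l2_jpegcompression jc;
+
+	if (ioctl(fd, VIDIOC_G_JPEGCOMP, &jc) == 0) {
+		jc.quality = 1;                  /* 0 = low, 1 = high quality */
+		ioctl(fd, VIDIOC_S_JPEGCOMP, &jc);
+	}
+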
-12. Contact information
+12. Video frame formats [1]
=======================
-I may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
+The SN9C10x PC Camera Controllers can send images in two possible video
+formats over the USB: either native "Sequential RGB Bayer" or Huffman
+compressed. The latter is used to achieve high frame rates. The current video
+format may be selected or queried from the user application by calling the
+VIDIOC_S_FMT or VIDIOC_G_FMT ioctl's, as described in the V4L2 API
+specifications.
+
+The name "Sequential Bayer" indicates the organization of the red, green and
+blue pixels in one video frame. Each pixel is associated with an 8-bit
+value and is laid out in memory according to the pattern shown below:
+
+B[0] G[1] B[2] G[3] ... B[m-2] G[m-1]
+G[m] R[m+1] G[m+2] R[m+3] ... G[2m-2] R[2m-1]
+...
+... B[(n-1)(m-2)] G[(n-1)(m-1)]
+... G[n(m-2)] R[n(m-1)]
+
+The above matrix also represents the sequential or progressive read-out mode of
+the (n, m) Bayer color filter array used in many CCD/CMOS image sensors.
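+
+Put another way (a small sketch; BLUE, GREEN and RED are whatever constants
+the application uses), for a frame m pixels wide the pixel at row r and
+column c lives at byte offset r*m + c, and its colour follows from the
+parities of r and c:
+
+	if ((r & 1) == 0)                        /* even rows: B G B G ... */
+		colour = ((c & 1) == 0) ? BLUE : GREEN;
+	else                                     /* odd rows:  G R G R ... */
+		colour = ((c & 1) == 0) ? GREEN : RED;
+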
+
+One compressed video frame consists of a bitstream that encodes for every R, G,
+or B pixel the difference between the value of the pixel itself and some
+reference pixel value. Pixels are organised in the Bayer pattern and the Bayer
+sub-pixels are tracked individually and alternately. For example, in the
+first line values for the B and G1 pixels are alternately encoded, while in
+the second line values for the G2 and R pixels are alternately encoded.
+
+The pixel reference value is calculated as follows:
+- the 4 top left pixels are encoded in raw uncompressed 8-bit format;
+- the value in the top two rows is the value of the pixel left of the current
+ pixel;
+- the value in the left column is the value of the pixel above the current
+ pixel;
+- for all other pixels, the reference value is the average of the value of the
+ pixel on the left and the value of the pixel above the current pixel;
+- there is one code in the bitstream that specifies the value of a pixel
+ directly (in 4-bit resolution);
+- pixel values need to be clamped inside the range [0..255] for proper
+ decoding.
+
+The algorithm purely describes the conversion from compressed Bayer code used
+in the SN9C10x chips to uncompressed Bayer. Additional steps are required to
+convert this to a color image (i.e. a color interpolation algorithm).
+
+The following Huffman codes have been found:
+0: +0 (relative to reference pixel value)
+100: +4
+101: -4?
+1110xxxx: set absolute value to xxxx.0000
+1101: +11
+1111: -11
+11001: +20
+110000: -20
+110001: ??? - these codes are apparently not used
+
+[1] The Huffman compression algorithm has been reverse-engineered and
+ documented by Bertrik Sikken.
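+
+A sketch of a per-pixel decoder written directly from the code table above;
+"get_bit()" is a hypothetical helper returning the next bit of the stream
+(most significant bit first) and "ref" is the reference value computed as
+described above:
+
+	static int sn9c10x_decode_pixel(int (*get_bit)(void), int ref)
+	{
+		int i, v;
+
+		if (get_bit() == 0)                     /* 0      : +0          */
+			v = ref;
+		else if (get_bit() == 0)                /* 10x    : +4 or -4    */
+			v = ref + ((get_bit() == 0) ? 4 : -4);
+		else if (get_bit() == 0) {              /* 110...               */
+			if (get_bit() == 1)             /* 1101   : +11         */
+				v = ref + 11;
+			else if (get_bit() == 1)        /* 11001  : +20         */
+				v = ref + 20;
+			else {                          /* 110000 : -20         */
+				get_bit();              /* (110001 is unused)   */
+				v = ref - 20;
+			}
+		} else if (get_bit() == 0) {            /* 1110xxxx: absolute   */
+			for (v = 0, i = 0; i < 4; i++)
+				v = (v << 1) | get_bit();
+			v <<= 4;                        /* value is xxxx.0000   */
+		} else                                  /* 1111   : -11         */
+			v = ref - 11;
+
+		if (v < 0)                              /* clamp to [0..255]    */
+			v = 0;
+		else if (v > 255)
+			v = 255;
+		return v;
+	}
+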
+
+
+13. Contact information
+=======================
+The author may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
-I can accept GPG/PGP encrypted e-mail. My GPG key ID is 'FCE635A4'.
-My public 1024-bit key should be available at any keyserver; the fingerprint
-is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.
+GPG/PGP encrypted e-mail's are accepted. The GPG key ID of the author is
+'FCE635A4'; the public 1024-bit key should be available at any keyserver;
+the fingerprint is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.
-13. Credits
+14. Credits
===========
-I would thank the following persons:
+Many thanks to the following persons for their contributions (listed in
+alphabetical order):
-- Stefano Mozzi, who donated 45 EU;
- Luca Capello for the donation of a webcam;
-- Mizuno Takafumi for the donation of a webcam;
+- Joao Rodrigo Fuzaro, Joao Limirio, Claudio Filho and Caio Begotti for the
+ donation of a webcam;
- Carlos Eduardo Medaglia Dyonisio, who added the support for the PAS202BCB
- image sensor.
+ image sensor;
+- Stefano Mozzi, who donated 45 EU;
+- Bertrik Sikken, who reverse-engineered and documented the Huffman compression
+ algorithm used in the SN9C10x controllers and implemented the first decoder;
+- Mizuno Takafumi for the donation of a webcam;
+- An "anonymous" donator (who didn't want his name to be revealed) for the
+ donation of a webcam.
into a pg_data_t. The bootmem_data_t is just one part of this. To
make the code look uniform between NUMA and regular UMA platforms,
UMA platforms have a statically allocated pg_data_t too (contig_page_data).
-For the sake of uniformity, the variable "numnodes" is also defined
+For the sake of uniformity, the function num_online_nodes() is also defined
for all platforms. As we run benchmarks, we might decide to NUMAize
more variables like low_on_memory, nr_free_pages etc into the pg_data_t.
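
As a small sketch of what this uniformity buys (helper names as found in the
kernel source; treat the example as illustrative): the same loop visits every
node's pg_data_t on NUMA and UMA kernels alike, since a UMA kernel simply
reports one online node backed by contig_page_data.

	int nid;

	for_each_online_node(nid) {
		pg_data_t *pgdat = NODE_DATA(nid);
		/* pgdat->bdata is this node's bootmem_data_t, and so on */
	}
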
--- /dev/null
+Any w1 device must be connected to a w1 bus master device, for example the
+ds9490 USB device or a w1-over-GPIO or RS232 converter.
+A driver for a w1 bus master must provide several functions (you can find
+them in the struct w1_bus_master definition in w1.h) which will then be
+called by the w1 core to send various commands over the w1 bus (by default
+the reset and search commands). When a device is found on the bus, the w1
+core checks whether a driver for its family is loaded.
+If such a driver is loaded, the w1 core creates a new w1_slave object and
+registers it in the system (it creates some generic sysfs files (see struct
+w1_family_ops in w1_family.h), notifies any registered listeners and so on).
+It is the device driver's business to provide a communication method
+upstream.
+For example, the w1_therm driver (the driver for the ds18?20 family of
+thermal sensors) provides a temperature reading function which is bound to
+the ->rbin() method of the above w1_family_ops structure.
+w1_smem, the driver for a simple 64-bit memory cell, provides an ID reading
+method.
+
+You can call the above methods by reading the appropriate sysfs files.
APICs
apic Use IO-APIC. Default
- Unless you have an NVidia or VIA/Uniprocessor board.
- Then it defaults to off.
noapic Don't use the IO-APIC.
pirq=... See Documentation/i386/IO-APIC.txt
+ noapictimer Don't set up the APIC timer
+
Early Console
syntax: earlyprintk=vga
This is useful when you use a panic=... timeout and need the box
quickly up again.
+ nohpet
+ Don't use the HPET timer.
+
Idle loop
idle=poll
reboot=b[ios] | t[riple] | k[bd] [, [w]arm | [c]old]
bios Use the CPU reboot vector for warm reset
warm Don't set the cold reboot flag
- cold Set the cold reboto flag
+ cold Set the cold reboot flag
triple Force a triple fault (init)
kbd Use the keyboard controller. cold reset (default)
Disadvantage is that not all hardware will be completely reinitialized
on reboot so there may be boot problems on some systems.
+ reboot=force
+
+ Don't stop other CPUs on reboot. This can make reboot more reliable
+ in some cases.
+
Non Executable Mappings
noexec=on|off
numa=off Only set up a single NUMA node spanning all memory.
+ numa=noacpi Don't parse the SRAT table for NUMA setup
+
+ numa=fake=X Fake X nodes and ignore NUMA setup of the actual machine.
ACPI
interpreter
acpi=force Force ACPI on (currently not needed)
+ acpi=strict Disable out of spec ACPI workarounds.
+
+ acpi_sci={edge,level,high,low} Set up ACPI SCI interrupt.
+
+ acpi=noirq Don't route interrupts
+
PCI
pci=off Don't use PCI
pci=assign-busses Assign busses
pci=irqmask=MASK Set PCI interrupt mask to MASK
pci=lastbus=NUMBER Scan up to NUMBER busses, no matter what the mptable says.
+ pci=noacpi Don't use ACPI to set up PCI interrupt routing.
IOMMU
pages Prereserve that many 128K pages for the software IO bounce buffering.
force Force all IO through the software TLB.
+
+Debugging
+
+ oops=panic Always panic on oopses. Default is to just kill the process,
+ but there is a small probability of deadlocking the machine.
+ This will also cause panics on machine check exceptions.
+ Useful together with panic=30 to trigger a reboot.
+
+ kstack=N Print that many words from the kernel stack in oops dumps.
+
+Misc
+
+ noreplacement Don't replace instructions with more appropriate ones
+ for the CPU. This may be useful on asymmetric MP systems
+ where some CPUs have fewer capabilities than the others.
+
-The paging design used on the x86-64 linux kernel port in 2.4.x provides:
-o per process virtual address space limit of 512 Gigabytes
-o top of userspace stack located at address 0x0000007fffffffff
-o PAGE_OFFSET = 0xffff800000000000
-o start of the kernel = 0xffffffff800000000
-o global RAM per system 2^64-PAGE_OFFSET-sizeof(kernel) = 128 Terabytes - 2 Gigabytes
-o no need of any common code change
-o no need to use highmem to handle the 128 Terabytes of RAM
+<previous description obsolete, deleted>
-Description:
+Virtual memory map with 4 level page tables:
- Userspace is able to modify and it sees only the 3rd/2nd/1st level
- pagetables (pgd_offset() implicitly walks the 1st slot of the 4th
- level pagetable and it returns an entry into the 3rd level pagetable).
- This is where the per-process 512 Gigabytes limit cames from.
+0000000000000000 - 00007fffffffffff (=47bits) user space, different per mm
+hole caused by [48:63] sign extension
+ffff800000000000 - ffff80ffffffffff (=40bits) guard hole
+ffff810000000000 - ffffc0ffffffffff (=46bits) direct mapping of phys. memory
+ffffc10000000000 - ffffc1ffffffffff (=40bits) hole
+ffffc20000000000 - ffffe1ffffffffff (=45bits) vmalloc/ioremap space
+... unused hole ...
+ffffffff80000000 - ffffffff82800000 (=40MB) kernel text mapping, from phys 0
+... unused hole ...
+ffffffff88000000 - fffffffffff00000 (=1919MB) module mapping space
- The common code pgd is the PDPE, the pmd is the PDE, the
- pte is the PTE. The PML4E remains invisible to the common
- code.
+vmalloc space is lazily synchronized into the different PML4 pages of
+the processes using the page fault handler, with init_level4_pgt as
+reference.
- The kernel uses all the first 47 bits of the negative half
- of the virtual address space to build the direct mapping using
- 2 Mbytes page size. The kernel virtual addresses have bit number
- 47 always set to 1 (and in turn also bits 48-63 are set to 1 too,
- due the sign extension). This is where the 128 Terabytes - 2 Gigabytes global
- limit of RAM cames from.
+Current X86-64 implementations only support 40 bits of address space,
+but we support up to 46 bits. This expands into MBZ space in the page tables.
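+
+As a worked example of the direct mapping above: the page at physical address
+0x12345000 is visible in the kernel's direct map at virtual address
+0xffff810012345000 (the physical address plus the ffff810000000000 base),
+which is what __va()/__pa() translate between on this layout.
+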
- Since the per-process limit is 512 Gigabytes (due to kernel common
- code 3 level pagetable limitation), the higher virtual address mapped
- into userspace is 0x7fffffffff and it makes sense to use it
- as the top of the userspace stack to allow the stack to grow as
- much as possible.
-
- Setting the PAGE_OFFSET to 2^39 (after the last userspace
- virtual address) wouldn't make much difference compared to
- setting PAGE_OFFSET to 0xffff800000000000 because we have an
- hole into the virtual address space. The last byte mapped by the
- 255th slot in the 4th level pagetable is at virtual address
- 0x00007fffffffffff and the first byte mapped by the 256th slot in the
- 4th level pagetable is at address 0xffff800000000000. Due to this
- hole we can't trivially build a direct mapping across all the
- 512 slots of the 4th level pagetable, so we simply use only the
- second (negative) half of the 4th level pagetable for that purpose
- (that provides us 128 Terabytes of contigous virtual addresses).
- Strictly speaking we could build a direct mapping also across the hole
- using some DISCONTIGMEM trick, but we don't need such a large
- direct mapping right now.
-
-Future:
-
- During 2.5.x we can break the 512 Gigabytes per-process limit
- possibly by removing from the common code any knowledge about the
- architectural dependent physical layout of the virtual to physical
- mapping.
-
- Once the 512 Gigabytes limit will be removed the kernel stack will
- be moved (most probably to virtual address 0x00007fffffffffff).
- Nothing will break in userspace due that move, as nothing breaks
- in IA32 compiling the kernel with CONFIG_2G.
-
-Linus agreed on not breaking common code and to live with the 512 Gigabytes
-per-process limitation for the 2.4.x timeframe and he has given me and Andi
-some very useful hints... (thanks! :)
-
-Thanks also to H. Peter Anvin for his interesting and useful suggestions on
-the x86-64-discuss lists!
-
-Other memory management related issues follows:
-
-PAGE_SIZE:
-
- If somebody is wondering why these days we still have a so small
- 4k pagesize (16 or 32 kbytes would be much better for performance
- of course), the PAGE_SIZE have to remain 4k for 32bit apps to
- provide 100% backwards compatible IA32 API (we can't allow silent
- fs corruption or as best a loss of coherency with the page cache
- by allocating MAP_SHARED areas in MAP_ANONYMOUS memory with a
- do_mmap_fake). I think it could be possible to have a dynamic page
- size between 32bit and 64bit apps but it would need extremely
- intrusive changes in the common code as first for page cache and
- we sure don't want to depend on them right now even if the
- hardware would support that.
-
-PAGETABLE SIZE:
-
- In turn we can't afford to have pagetables larger than 4k because
- we could not be able to allocate them due physical memory
- fragmentation, and failing to allocate the kernel stack is a minor
- issue compared to failing the allocation of a pagetable. If we
- fail the allocation of a pagetable the only thing we can do is to
- sched_yield polling the freelist (deadlock prone) or to segfault
- the task (not even the sighandler would be sure to run).
-
-KERNEL STACK:
-
- 1st stage:
-
- The kernel stack will be at first allocated with an order 2 allocation
- (16k) (the utilization of the stack for a 64bit platform really
- isn't exactly the double of a 32bit platform because the local
- variables may not be all 64bit wide, but not much less). This will
- make things even worse than they are right now on IA32 with
- respect of failing fork/clone due memory fragmentation.
-
- 2nd stage:
-
- We'll benchmark if reserving one register as task_struct
- pointer will improve performance of the kernel (instead of
- recalculating the task_struct pointer starting from the stack
- pointer each time). My guess is that recalculating will be faster
- but it worth a try.
-
- If reserving one register for the task_struct pointer
- will be faster we can as well split task_struct and kernel
- stack. task_struct can be a slab allocation or a
- PAGE_SIZEd allocation, and the kernel stack can then be
- allocated in a order 1 allocation. Really this is risky,
- since 8k on a 64bit platform is going to be less than 7k
- on a 32bit platform but we could try it out. This would
- reduce the fragmentation problem of an order of magnitude
- making it equal to the current IA32.
-
- We must also consider the x86-64 seems to provide in hardware a
- per-irq stack that could allow us to remove the irq handler
- footprint from the regular per-process-stack, so it could allow
- us to live with a smaller kernel stack compared to the other
- linux architectures.
-
- 3rd stage:
-
- Before going into production if we still have the order 2
- allocation we can add a sysctl that allows the kernel stack to be
- allocated with vmalloc during memory fragmentation. This have to
- remain turned off during benchmarks :) but it should be ok in real
- life.
-
-Order of PAGE_CACHE_SIZE and other allocations:
-
- On the long run we can increase the PAGE_CACHE_SIZE to be
- an order 2 allocations and also the slab/buffercache etc.ec..
- could be all done with order 2 allocations. To make the above
- to work we should change lots of common code thus it can be done
- only once the basic port will be in a production state. Having
- a working PAGE_CACHE_SIZE would be a benefit also for
- IA32 and other architectures of course.
-
-Andrea <andrea@suse.de> SuSE
+-Andi Kleen, Jul 2004
/* Note mask bit is true for DISABLED irqs. */
static unsigned int cached_irq_mask = 0xffff;
-static spinlock_t i8259_irq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(i8259_irq_lock);
static inline void
i8259_update_irq_hw(unsigned int irq, unsigned long mask)
* at the same time in multiple CPUs? To be safe I added a spinlock
* but it can be removed trivially if the palcode is robust against smp.
*/
-spinlock_t srm_irq_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(srm_irq_lock);
static inline void
srm_enable_irq(unsigned int irq)
extern void entUna(void);
extern void entDbg(void);
-/* process.c */
-extern void cpu_idle(void) __attribute__((noreturn));
-
/* ptrace.c */
extern int ptrace_set_bpt (struct task_struct *child);
extern int ptrace_cancel_bpt (struct task_struct *child);
#include <asm/uaccess.h>
-static spinlock_t srmcons_callback_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(srmcons_callback_lock);
static int srm_is_registered_console = 0;
/*
srmcons_get_private_struct(struct srmcons_private **ps)
{
static struct srmcons_private *srmconsp = NULL;
- static spinlock_t srmconsp_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(srmconsp_lock);
unsigned long flags;
int retval = 0;
}
srmconsp->tty = NULL;
- srmconsp->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&srmconsp->lock);
init_timer(&srmconsp->timer);
*ps = srmconsp;
0xff0000, 0xfe0000, 0xff0000, 0xff0000
};
static unsigned int cached_irq_masks[4];
-spinlock_t rawhide_irq_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(rawhide_irq_lock);
static inline void
rawhide_update_irq_hw(int hose, int mask)
#include "pci_impl.h"
#include "machvec_impl.h"
-spinlock_t sable_lynx_irq_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(sable_lynx_irq_lock);
typedef struct irq_swizzle_struct
{
/*
* Need SMP-safe access to interrupt CSRs
*/
-spinlock_t titan_irq_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(titan_irq_lock);
static void
titan_update_irq_hw(unsigned long mask)
static unsigned long cached_irq_mask[WILDFIRE_NR_IRQS/(sizeof(long)*8)];
-spinlock_t wildfire_irq_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(wildfire_irq_lock);
static int doing_init_irq_hw = 0;
.set noreorder
.text
- .align 4
- .globl bcopy
- .ent bcopy
-bcopy:
- ldgp $29, 0($27)
- .prologue 1
- mov $16,$0
- mov $17,$16
- mov $0,$17
- br $31, memmove !samegp
- .end bcopy
-
.align 4
.globl memmove
.ent memmove
return 0;
}
-static struct oprofile_operations oprof_axp_ops = {
- .create_files = op_axp_create_files,
- .setup = op_axp_setup,
- .shutdown = op_axp_shutdown,
- .start = op_axp_start,
- .stop = op_axp_stop,
- .cpu_type = NULL /* To be filled in below. */
-};
-
int __init
-oprofile_arch_init(struct oprofile_operations **ops)
+oprofile_arch_init(struct oprofile_operations *ops)
{
struct op_axp_model *lmodel = NULL;
return -ENODEV;
model = lmodel;
- oprof_axp_ops.cpu_type = lmodel->cpu_type;
- *ops = &oprof_axp_ops;
+ ops->create_files = op_axp_create_files;
+ ops->setup = op_axp_setup;
+ ops->shutdown = op_axp_shutdown;
+ ops->start = op_axp_start;
+ ops->stop = op_axp_stop;
+ ops->cpu_type = lmodel->cpu_type;
printk(KERN_INFO "oprofile: using %s performance monitoring.\n",
lmodel->cpu_type);
return;
/* Record the sample. */
- oprofile_add_sample(regs->pc, !user_mode(regs),
- which, smp_processor_id());
+ oprofile_add_sample(regs, which);
}
struct op_counter_config *ctr)
{
/* Record the sample. */
- oprofile_add_sample(regs->pc, !user_mode(regs),
- which, smp_processor_id());
+ oprofile_add_sample(regs, which);
}
struct op_counter_config *ctr)
{
/* Record the sample. */
- oprofile_add_sample(regs->pc, !user_mode(regs),
- which, smp_processor_id());
+ oprofile_add_sample(regs, which);
}
if (counter == 1)
fake_counter += PM_NUM_COUNTERS;
if (ctr[fake_counter].enabled)
- oprofile_add_sample(pc, kern, fake_counter,
- smp_processor_id());
+ oprofile_add_pc(pc, kern, fake_counter);
}
static void
to PALcode. Recognize ITB miss by PALcode
offset address, and get actual PC from
EXC_ADDR. */
- oprofile_add_sample(regs->pc, kern, which,
- smp_processor_id());
+ oprofile_add_pc(regs->pc, kern, which);
if ((pmpc & ((1 << 15) - 1)) == 581)
op_add_pm(regs->pc, kern, which,
ctr, PM_ITB_MISS);
}
}
- oprofile_add_sample(pmpc, kern, which, smp_processor_id());
+ oprofile_add_pc(pmpc, kern, which);
pctr_ctl = wrperfmon(5, 0);
if (pctr_ctl & (1UL << 27))
--- /dev/null
+/*
+ * linux/arch/arm/boot/compressed/head-sharpsl.S
+ *
+ * Copyright (C) 2004-2005 Richard Purdie <rpurdie@rpsys.net>
+ *
+ * Sharp's bootloader doesn't pass any kind of machine ID
+ * so we have to figure out the machine for ourselves...
+ *
+ * Support for Poodle, Corgi (SL-C700), Shepherd (SL-C750)
+ * and Husky (SL-C760).
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/mach-types.h>
+
+#ifndef CONFIG_PXA_SHARPSL
+#error What am I doing here...
+#endif
+
+ .section ".start", "ax"
+
+__SharpSL_start:
+
+ ldr r1, .W100ADDR @ Base address of w100 chip + regs offset
+
+ mov r6, #0x31 @ Load Magic Init value
+ str r6, [r1, #0x280] @ to SCRATCH_UMSK
+ mov r5, #0x3000
+.W100LOOP:
+ subs r5, r5, #1
+ bne .W100LOOP
+ mov r6, #0x30 @ Load 2nd Magic Init value
+ str r6, [r1, #0x280] @ to SCRATCH_UMSK
+
+ ldr r6, [r1, #0] @ Load Chip ID
+ ldr r3, .W100ID
+ ldr r7, .POODLEID
+ cmp r6, r3
+ bne .SHARPEND @ We have no w100 - Poodle
+
+ mrc p15, 0, r6, c0, c0 @ Get Processor ID
+ and r6, r6, #0xffffff00
+ ldr r7, .CORGIID
+ ldr r3, .PXA255ID
+ cmp r6, r3
+ blo .SHARPEND @ We have a PXA250 - Corgi
+
+ mov r1, #0x0c000000 @ Base address of NAND chip
+ ldrb r3, [r1, #24] @ Load FLASHCTL
+ bic r3, r3, #0x11 @ SET NCE
+ orr r3, r3, #0x0a @ SET CLR + FLWP
+ strb r3, [r1, #24] @ Save to FLASHCTL
+ mov r2, #0x90 @ Command "readid"
+ strb r2, [r1, #20] @ Save to FLASHIO
+ bic r3, r3, #2 @ CLR CLE
+ orr r3, r3, #4 @ SET ALE
+ strb r3, [r1, #24] @ Save to FLASHCTL
+ mov r2, #0 @ Address 0x00
+ strb r2, [r1, #20] @ Save to FLASHIO
+ bic r3, r3, #4 @ CLR ALE
+ strb r3, [r1, #24] @ Save to FLASHCTL
+.SHARP1:
+ ldrb r3, [r1, #24] @ Load FLASHCTL
+ tst r3, #32 @ Is chip ready?
+ beq .SHARP1
+ ldrb r2, [r1, #20] @ NAND Manufacturer ID
+ ldrb r3, [r1, #20] @ NAND Chip ID
+ ldr r7, .SHEPHERDID
+ cmp r3, #0x76 @ 64MiB flash
+ beq .SHARPEND @ We have Shepherd
+ ldr r7, .HUSKYID @ Must be Husky
+ b .SHARPEND
+
+.PXA255ID:
+ .word 0x69052d00 @ PXA255 Processor ID
+.W100ID:
+ .word 0x57411002 @ w100 Chip ID
+.W100ADDR:
+ .word 0x08010000 @ w100 Chip ID Reg Address
+.POODLEID:
+ .word MACH_TYPE_POODLE
+.CORGIID:
+ .word MACH_TYPE_CORGI
+.SHEPHERDID:
+ .word MACH_TYPE_SHEPHERD
+.HUSKYID:
+ .word MACH_TYPE_HUSKY
+.SHARPEND:
+
+
int j,i,m,k,nr_banks,size;
unsigned char *c;
+ k = 0;
+
/* Head of the taglist */
tag->hdr.tag = ATAG_CORE;
tag->hdr.size = tag_size(tag_core);
#define amba_hotplug NULL
#endif
-static int amba_suspend(struct device *dev, u32 state)
+static int amba_suspend(struct device *dev, pm_message_t state)
{
struct amba_driver *drv = to_amba_driver(dev->driver);
int ret = 0;
return dev->devid == drv->devid;
}
-static int locomo_bus_suspend(struct device *dev, u32 state)
+static int locomo_bus_suspend(struct device *dev, pm_message_t state)
{
struct locomo_dev *ldev = LOCOMO_DEV(dev);
struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
/*
* rtc_lock protects rtc_irq_data
*/
-static spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rtc_lock);
static unsigned long rtc_irq_data;
/*
--- /dev/null
+/*
+ * Support code for the SCOOP interface found on various Sharp PDAs
+ *
+ * Copyright (c) 2004 Richard Purdie
+ *
+ * Based on code written by Sharp/Lineo for 2.4 kernels
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/device.h>
+#include <asm/io.h>
+#include <asm/hardware/scoop.h>
+
+static void __iomem *scoop_io_base;
+
+#define SCOOP_REG(adr) (*(volatile unsigned short*)(scoop_io_base+(adr)))
+
+void reset_scoop(void)
+{
+ SCOOP_REG(SCOOP_MCR) = 0x0100; // 00
+ SCOOP_REG(SCOOP_CDR) = 0x0000; // 04
+ SCOOP_REG(SCOOP_CPR) = 0x0000; // 0C
+ SCOOP_REG(SCOOP_CCR) = 0x0000; // 10
+ SCOOP_REG(SCOOP_IMR) = 0x0000; // 18
+ SCOOP_REG(SCOOP_IRM) = 0x00FF; // 14
+ SCOOP_REG(SCOOP_ISR) = 0x0000; // 1C
+ SCOOP_REG(SCOOP_IRM) = 0x0000;
+}
+
+static DEFINE_SPINLOCK(scoop_lock);
+static u32 scoop_gpwr;
+
+unsigned short set_scoop_gpio(unsigned short bit)
+{
+ unsigned short gpio_bit;
+ unsigned long flag;
+
+ spin_lock_irqsave(&scoop_lock, flag);
+ gpio_bit = SCOOP_REG(SCOOP_GPWR) | bit;
+ SCOOP_REG(SCOOP_GPWR) = gpio_bit;
+ spin_unlock_irqrestore(&scoop_lock, flag);
+
+ return gpio_bit;
+}
+
+unsigned short reset_scoop_gpio(unsigned short bit)
+{
+ unsigned short gpio_bit;
+ unsigned long flag;
+
+ spin_lock_irqsave(&scoop_lock, flag);
+ gpio_bit = SCOOP_REG(SCOOP_GPWR) & ~bit;
+ SCOOP_REG(SCOOP_GPWR) = gpio_bit;
+ spin_unlock_irqrestore(&scoop_lock, flag);
+
+ return gpio_bit;
+}
+
+EXPORT_SYMBOL(set_scoop_gpio);
+EXPORT_SYMBOL(reset_scoop_gpio);
+
+unsigned short read_scoop_reg(unsigned short reg)
+{
+ return SCOOP_REG(reg);
+}
+
+void write_scoop_reg(unsigned short reg, unsigned short data)
+{
+ SCOOP_REG(reg) = data;
+}
+
+EXPORT_SYMBOL(reset_scoop);
+EXPORT_SYMBOL(read_scoop_reg);
+EXPORT_SYMBOL(write_scoop_reg);
+
+static int scoop_suspend(struct device *dev, uint32_t state, uint32_t level)
+{
+ if (level == SUSPEND_POWER_DOWN) {
+ scoop_gpwr = SCOOP_REG(SCOOP_GPWR);
+ SCOOP_REG(SCOOP_GPWR) = 0;
+ }
+ return 0;
+}
+
+static int scoop_resume(struct device *dev, uint32_t level)
+{
+ if (level == RESUME_POWER_ON) {
+ SCOOP_REG(SCOOP_GPWR) = scoop_gpwr;
+ }
+ return 0;
+}
+
+int __init scoop_probe(struct device *dev)
+{
+ struct scoop_config *inf;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
+ if (!mem)
+ return -EINVAL;
+
+ inf = dev->platform_data;
+ scoop_io_base = ioremap(mem->start, 0x1000);
+ if (!scoop_io_base)
+ return -ENOMEM;
+
+ SCOOP_REG(SCOOP_MCR) = 0x0140;
+
+ reset_scoop();
+
+ SCOOP_REG(SCOOP_GPCR) = inf->io_dir & 0xffff;
+ SCOOP_REG(SCOOP_GPWR) = inf->io_out & 0xffff;
+
+ return 0;
+}
+
+static struct device_driver scoop_driver = {
+ .name = "sharp-scoop",
+ .bus = &platform_bus_type,
+ .probe = scoop_probe,
+ .suspend = scoop_suspend,
+ .resume = scoop_resume,
+};
+
+int __init scoop_init(void)
+{
+ return driver_register(&scoop_driver);
+}
+
+subsys_initcall(scoop_init);
.write = via82c505_write_config,
};
-void __init via82c505_preinit(void *sysdata)
+void __init via82c505_preinit(void)
{
printk(KERN_DEBUG "PCI: VIA 82c505\n");
if (!request_region(0xA8,2,"via config")) {
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_SAME=y
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_IRC=y
CONFIG_IP_NF_NAT_FTP=y
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3
-# Wed Dec 15 17:03:41 2004
+# Linux kernel version: 2.6.10
+# Thu Jan 6 10:54:33 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
# CONFIG_ARCH_IQ80321 is not set
CONFIG_ARCH_IQ31244=y
# CONFIG_ARCH_IQ80331 is not set
+# CONFIG_MACH_IQ80332 is not set
CONFIG_ARCH_EP80219=y
CONFIG_ARCH_IOP321=y
# CONFIG_ARCH_IOP331 is not set
# General setup
#
CONFIG_PCI=y
-# CONFIG_ZBOOT_ROM is not set
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
# CONFIG_XIP_KERNEL is not set
# CONFIG_PM is not set
# CONFIG_PREEMPT is not set
# CONFIG_ARTHUR is not set
-CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200 mem=128M@0xa0000000"
+CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200"
CONFIG_ALIGNMENT_TRAP=y
#
#
# CONFIG_ACENIC is not set
# CONFIG_DL2K is not set
-# CONFIG_E1000 is not set
+CONFIG_E1000=y
+CONFIG_E1000_NAPI=y
# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_MOUSEDEV_PSAUX=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3
-# Wed Dec 15 16:58:36 2004
+# Linux kernel version: 2.6.10
+# Thu Jan 6 10:53:05 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_HOTPLUG is not set
CONFIG_KOBJECT_UEVENT=y
-# CONFIG_IKCONFIG is not set
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
# CONFIG_ARCH_IQ80321 is not set
CONFIG_ARCH_IQ31244=y
# CONFIG_ARCH_IQ80331 is not set
+# CONFIG_MACH_IQ80332 is not set
# CONFIG_ARCH_EP80219 is not set
CONFIG_ARCH_IOP321=y
# CONFIG_ARCH_IOP331 is not set
# General setup
#
CONFIG_PCI=y
-# CONFIG_ZBOOT_ROM is not set
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
# CONFIG_XIP_KERNEL is not set
# CONFIG_PM is not set
# CONFIG_PREEMPT is not set
# CONFIG_ARTHUR is not set
-CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200 mem=256M@0xa0000000"
+CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200"
CONFIG_ALIGNMENT_TRAP=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_MOUSEDEV_PSAUX=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3
-# Wed Dec 15 16:48:43 2004
+# Linux kernel version: 2.6.10
+# Thu Jan 6 10:52:05 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
CONFIG_ARCH_IQ80321=y
# CONFIG_ARCH_IQ31244 is not set
# CONFIG_ARCH_IQ80331 is not set
+# CONFIG_MACH_IQ80332 is not set
# CONFIG_ARCH_EP80219 is not set
CONFIG_ARCH_IOP321=y
# CONFIG_ARCH_IOP331 is not set
# General setup
#
CONFIG_PCI=y
-# CONFIG_ZBOOT_ROM is not set
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
# CONFIG_XIP_KERNEL is not set
# CONFIG_PM is not set
# CONFIG_PREEMPT is not set
# CONFIG_ARTHUR is not set
-CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200 mem=128M@0xa0000000"
+CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200"
CONFIG_ALIGNMENT_TRAP=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_MOUSEDEV_PSAUX=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
#
# I2C support
#
-# CONFIG_I2C is not set
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+CONFIG_I2C_IOP3XX=y
+# CONFIG_I2C_ISA is not set
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_SCx200_ACB is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
#
# Multimedia devices
# CONFIG_DEBUG_KERNEL is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_FRAME_POINTER=y
-# CONFIG_DEBUG_USER is not set
+CONFIG_DEBUG_USER=y
#
# Security options
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3
-# Wed Dec 15 16:43:39 2004
+# Linux kernel version: 2.6.10
+# Thu Jan 6 10:44:16 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
# CONFIG_IKCONFIG is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
-# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_FUTEX=y
CONFIG_EPOLL=y
# CONFIG_ARCH_IQ80321 is not set
# CONFIG_ARCH_IQ31244 is not set
CONFIG_ARCH_IQ80331=y
+# CONFIG_MACH_IQ80332 is not set
# CONFIG_ARCH_EP80219 is not set
CONFIG_ARCH_IOP331=y
#
# IOP3xx Chipset Features
#
+CONFIG_IOP331_STEPD=y
#
# Processor Type
# General setup
#
CONFIG_PCI=y
-# CONFIG_ZBOOT_ROM is not set
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
# CONFIG_XIP_KERNEL is not set
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
-# CONFIG_DEBUG_DRIVER is not set
# CONFIG_PM is not set
# CONFIG_PREEMPT is not set
# CONFIG_ARTHUR is not set
-CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200 mem=128M@0x00000000"
+CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200"
CONFIG_ALIGNMENT_TRAP=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_MOUSEDEV_PSAUX=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
#
# I2C support
#
-# CONFIG_I2C is not set
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+CONFIG_I2C_IOP3XX=y
+# CONFIG_I2C_ISA is not set
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_SCx200_ACB is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
#
# Multimedia devices
#
# Kernel hacking
#
-CONFIG_DEBUG_KERNEL=y
-# CONFIG_MAGIC_SYSRQ is not set
-# CONFIG_SCHEDSTATS is not set
-# CONFIG_DEBUG_SLAB is not set
-# CONFIG_DEBUG_SPINLOCK is not set
-# CONFIG_DEBUG_KOBJECT is not set
-CONFIG_DEBUG_BUGVERBOSE=y
+# CONFIG_DEBUG_KERNEL is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_USER=y
-# CONFIG_DEBUG_WAITQ is not set
-CONFIG_DEBUG_ERRORS=y
-# CONFIG_DEBUG_LL is not set
#
# Security options
# Library routines
#
# CONFIG_CRC_CCITT is not set
-CONFIG_CRC32=y
+# CONFIG_CRC32 is not set
# CONFIG_LIBCRC32C is not set
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.10
+# Thu Jan 6 10:51:02 2005
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_IOMAP=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+CONFIG_BSD_PROCESS_ACCT=y
+# CONFIG_BSD_PROCESS_ACCT_V3 is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+CONFIG_KOBJECT_UEVENT=y
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+# CONFIG_TINY_SHMEM is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+CONFIG_KMOD=y
+
+#
+# System Type
+#
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+CONFIG_ARCH_IOP3XX=y
+# CONFIG_ARCH_IXP4XX is not set
+# CONFIG_ARCH_IXP2000 is not set
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_VERSATILE is not set
+# CONFIG_ARCH_IMX is not set
+# CONFIG_ARCH_H720X is not set
+
+#
+# IOP3xx Implementation Options
+#
+
+#
+# IOP3xx Platform Types
+#
+# CONFIG_ARCH_IQ80321 is not set
+# CONFIG_ARCH_IQ31244 is not set
+# CONFIG_ARCH_IQ80331 is not set
+CONFIG_MACH_IQ80332=y
+# CONFIG_ARCH_EP80219 is not set
+CONFIG_ARCH_IOP331=y
+
+#
+# IOP3xx Chipset Features
+#
+# CONFIG_IOP331_STEPD is not set
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5T=y
+CONFIG_CPU_CACHE_VIVT=y
+CONFIG_CPU_TLB_V4WBI=y
+CONFIG_CPU_MINICACHE=y
+
+#
+# Processor Features
+#
+# CONFIG_ARM_THUMB is not set
+CONFIG_XSCALE_PMU=y
+
+#
+# General setup
+#
+CONFIG_PCI=y
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+# CONFIG_XIP_KERNEL is not set
+# CONFIG_PCI_LEGACY_PROC is not set
+CONFIG_PCI_NAMES=y
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_BINFMT_ELF=y
+CONFIG_BINFMT_AOUT=y
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_PM is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="ip=boot root=nfs console=ttyS0,115200"
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_REDBOOT_PARTS=y
+CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED=y
+CONFIG_MTD_REDBOOT_PARTS_READONLY=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+CONFIG_MTD_CFI_ADV_OPTIONS=y
+CONFIG_MTD_CFI_NOSWAP=y
+# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
+# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set
+# CONFIG_MTD_CFI_GEOMETRY is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_PHYSMAP_START=0xc0000000
+CONFIG_MTD_PHYSMAP_LEN=0x00800000
+CONFIG_MTD_PHYSMAP_BANKWIDTH=1
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+# CONFIG_MTD_EDB7312 is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=8192
+# CONFIG_BLK_DEV_INITRD is not set
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CDROM_PKTCDVD is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+
+#
+# Multi-device support (RAID and LVM)
+#
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+# CONFIG_MD_RAID10 is not set
+CONFIG_MD_RAID5=y
+# CONFIG_MD_RAID6 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_MD_FAULTY is not set
+CONFIG_BLK_DEV_DM=y
+# CONFIG_DM_CRYPT is not set
+# CONFIG_DM_SNAPSHOT is not set
+# CONFIG_DM_MIRROR is not set
+# CONFIG_DM_ZERO is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+CONFIG_PACKET_MMAP=y
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_IP_TCPDIAG=y
+# CONFIG_IP_TCPDIAG_IPV6 is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+# CONFIG_NET_ETHERNET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+CONFIG_E1000=y
+CONFIG_E1000_NAPI=y
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_NET_FC is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+CONFIG_CHR_DEV_SG=y
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+# CONFIG_SCSI_MULTI_LUN is not set
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+# CONFIG_SCSI_SPI_ATTRS is not set
+# CONFIG_SCSI_FC_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_3W_9XXX is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AACRAID is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_EATA_PIO is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_IPR is not set
+# CONFIG_SCSI_QLOGIC_ISP is not set
+# CONFIG_SCSI_QLOGIC_FC is not set
+# CONFIG_SCSI_QLOGIC_1280 is not set
+CONFIG_SCSI_QLA2XXX=y
+# CONFIG_SCSI_QLA21XX is not set
+# CONFIG_SCSI_QLA22XX is not set
+# CONFIG_SCSI_QLA2300 is not set
+# CONFIG_SCSI_QLA2322 is not set
+# CONFIG_SCSI_QLA6312 is not set
+# CONFIG_SCSI_QLA6322 is not set
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_NSP32 is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+CONFIG_I2C_IOP3XX=y
+# CONFIG_I2C_ISA is not set
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_SCx200_ACB is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+CONFIG_XFS_FS=y
+# CONFIG_XFS_RT is not set
+# CONFIG_XFS_QUOTA is not set
+CONFIG_XFS_SECURITY=y
+CONFIG_XFS_POSIX_ACL=y
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_TMPFS_XATTR is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+# CONFIG_JFFS2_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+# CONFIG_NFSD_V4 is not set
+# CONFIG_NFSD_TCP is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_EXPORTFS=y
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_DEBUG_KERNEL is not set
+# CONFIG_DEBUG_INFO is not set
+CONFIG_FRAME_POINTER=y
+CONFIG_DEBUG_USER=y
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
CONFIG_IP_NF_TARGET_REDIRECT=m
# CONFIG_IP_NF_TARGET_NETMAP is not set
# CONFIG_IP_NF_TARGET_SAME is not set
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.11-rc2
+# Tue Feb 1 14:01:46 2005
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_GENERIC_IOMAP=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_BROKEN_ON_SMP=y
+CONFIG_LOCK_KERNEL=y
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+CONFIG_KOBJECT_UEVENT=y
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+# CONFIG_TINY_SHMEM is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
+# CONFIG_KMOD is not set
+
+#
+# System Type
+#
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+# CONFIG_ARCH_IXP4XX is not set
+# CONFIG_ARCH_IXP2000 is not set
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_LH7A40X is not set
+CONFIG_ARCH_OMAP=y
+# CONFIG_ARCH_VERSATILE is not set
+# CONFIG_ARCH_IMX is not set
+# CONFIG_ARCH_H720X is not set
+
+#
+# TI OMAP Implementations
+#
+
+#
+# OMAP Core Type
+#
+# CONFIG_ARCH_OMAP730 is not set
+# CONFIG_ARCH_OMAP1510 is not set
+CONFIG_ARCH_OMAP16XX=y
+CONFIG_ARCH_OMAP_OTG=y
+
+#
+# OMAP Board Type
+#
+# CONFIG_MACH_OMAP_INNOVATOR is not set
+CONFIG_MACH_OMAP_H2=y
+# CONFIG_MACH_OMAP_H3 is not set
+# CONFIG_MACH_OMAP_H4 is not set
+# CONFIG_MACH_OMAP_OSK is not set
+# CONFIG_MACH_OMAP_GENERIC is not set
+
+#
+# OMAP Feature Selections
+#
+CONFIG_OMAP_MUX=y
+# CONFIG_OMAP_MUX_DEBUG is not set
+CONFIG_OMAP_MUX_WARNINGS=y
+CONFIG_OMAP_LL_DEBUG_UART1=y
+# CONFIG_OMAP_LL_DEBUG_UART2 is not set
+# CONFIG_OMAP_LL_DEBUG_UART3 is not set
+CONFIG_OMAP_ARM_192MHZ=y
+# CONFIG_OMAP_ARM_168MHZ is not set
+# CONFIG_OMAP_ARM_120MHZ is not set
+# CONFIG_OMAP_ARM_60MHZ is not set
+# CONFIG_OMAP_ARM_30MHZ is not set
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_ARM926T=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5TJ=y
+CONFIG_CPU_CACHE_VIVT=y
+CONFIG_CPU_COPY_V4WB=y
+CONFIG_CPU_TLB_V4WBI=y
+
+#
+# Processor Features
+#
+CONFIG_ARM_THUMB=y
+# CONFIG_CPU_ICACHE_DISABLE is not set
+# CONFIG_CPU_DCACHE_DISABLE is not set
+# CONFIG_CPU_DCACHE_WRITETHROUGH is not set
+# CONFIG_CPU_CACHE_ROUND_ROBIN is not set
+
+#
+# General setup
+#
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+# CONFIG_XIP_KERNEL is not set
+
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
+#
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+# CONFIG_VFP is not set
+CONFIG_BINFMT_ELF=y
+CONFIG_BINFMT_AOUT=y
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
+CONFIG_DEBUG_DRIVER=y
+CONFIG_PM=y
+CONFIG_PREEMPT=y
+# CONFIG_APM is not set
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="mem=32M console=ttyS0,115200n8 root=0801 ro init=/bin/sh"
+# CONFIG_LEDS is not set
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+CONFIG_MTD_DEBUG=y
+CONFIG_MTD_DEBUG_VERBOSE=3
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+# CONFIG_MTD_REDBOOT_PARTS is not set
+CONFIG_MTD_CMDLINE_PARTS=y
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_XIP is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+# CONFIG_MTD_EDB7312 is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CDROM_PKTCDVD is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_ATA_OVER_ETH=m
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_IP_TCPDIAG=y
+# CONFIG_IP_TCPDIAG_IPV6 is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_SMC91X=y
+
+#
+# Ethernet (1000 Mbit)
+#
+
+#
+# Ethernet (10000 Mbit)
+#
+
+#
+# Token Ring devices
+#
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+CONFIG_PPP=y
+# CONFIG_PPP_MULTILINK is not set
+# CONFIG_PPP_FILTER is not set
+# CONFIG_PPP_ASYNC is not set
+# CONFIG_PPP_SYNC_TTY is not set
+# CONFIG_PPP_DEFLATE is not set
+# CONFIG_PPP_BSDCOMP is not set
+# CONFIG_PPPOE is not set
+CONFIG_SLIP=y
+CONFIG_SLIP_COMPRESSED=y
+# CONFIG_SLIP_SMART is not set
+# CONFIG_SLIP_MODE_SLIP6 is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+# CONFIG_BLK_DEV_SD is not set
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+# CONFIG_CHR_DEV_SG is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+# CONFIG_SCSI_MULTI_LUN is not set
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+# CONFIG_SCSI_SPI_ATTRS is not set
+# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+
+#
+# I2O device support
+#
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_EVBUG=y
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_RAW is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_UINPUT=y
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+# CONFIG_I2C_ALGOBIT is not set
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ISA is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
+CONFIG_ISP1301_OMAP=y
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+CONFIG_ROMFS_FS=y
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+# CONFIG_VFAT_FS is not set
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=2
+# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_JFFS2_FS_NOR_ECC is not set
+# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
+CONFIG_JFFS2_ZLIB=y
+CONFIG_JFFS2_RTIME=y
+# CONFIG_JFFS2_RUBIN is not set
+CONFIG_CRAMFS=y
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_NFS_V4=y
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+CONFIG_SUNRPC_GSS=y
+CONFIG_RPCSEC_GSS_KRB5=y
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ASCII is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+CONFIG_FB_MODE_HELPERS=y
+# CONFIG_FB_TILEBLITTING is not set
+# CONFIG_FB_VIRTUAL is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_FONTS=y
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+# CONFIG_FONT_6x11 is not set
+# CONFIG_FONT_PEARL_8x8 is not set
+# CONFIG_FONT_ACORN_8x8 is not set
+# CONFIG_FONT_MINI_4x6 is not set
+# CONFIG_FONT_SUN8x16 is not set
+# CONFIG_FONT_SUN12x22 is not set
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
+
+#
+# Sound
+#
+CONFIG_SOUND=y
+
+#
+# Advanced Linux Sound Architecture
+#
+# CONFIG_SND is not set
+
+#
+# Open Sound System
+#
+CONFIG_SOUND_PRIME=y
+# CONFIG_SOUND_BT878 is not set
+# CONFIG_SOUND_FUSION is not set
+# CONFIG_SOUND_CS4281 is not set
+# CONFIG_SOUND_SONICVIBES is not set
+# CONFIG_SOUND_TRIDENT is not set
+# CONFIG_SOUND_MSNDCLAS is not set
+# CONFIG_SOUND_MSNDPIN is not set
+# CONFIG_SOUND_OSS is not set
+# CONFIG_SOUND_TVMIXER is not set
+# CONFIG_SOUND_AD1980 is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
+#
+# USB Gadget Support
+#
+CONFIG_USB_GADGET=y
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+# CONFIG_USB_GADGET_NET2280 is not set
+# CONFIG_USB_GADGET_PXA2XX is not set
+# CONFIG_USB_GADGET_GOKU is not set
+# CONFIG_USB_GADGET_SA1100 is not set
+# CONFIG_USB_GADGET_LH7A40X is not set
+# CONFIG_USB_GADGET_DUMMY_HCD is not set
+CONFIG_USB_GADGET_OMAP=y
+CONFIG_USB_OMAP=y
+# CONFIG_USB_GADGET_DUALSPEED is not set
+# CONFIG_USB_ZERO is not set
+CONFIG_USB_ETH=y
+CONFIG_USB_ETH_RNDIS=y
+# CONFIG_USB_GADGETFS is not set
+# CONFIG_USB_FILE_STORAGE is not set
+# CONFIG_USB_G_SERIAL is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# Kernel hacking
+#
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_KOBJECT is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_INFO=y
+# CONFIG_DEBUG_FS is not set
+CONFIG_FRAME_POINTER=y
+CONFIG_DEBUG_USER=y
+# CONFIG_DEBUG_WAITQ is not set
+CONFIG_DEBUG_ERRORS=y
+CONFIG_DEBUG_LL=y
+# CONFIG_DEBUG_ICEDCC is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+CONFIG_CRYPTO=y
+# CONFIG_CRYPTO_HMAC is not set
+# CONFIG_CRYPTO_NULL is not set
+# CONFIG_CRYPTO_MD4 is not set
+CONFIG_CRYPTO_MD5=y
+# CONFIG_CRYPTO_SHA1 is not set
+# CONFIG_CRYPTO_SHA256 is not set
+# CONFIG_CRYPTO_SHA512 is not set
+# CONFIG_CRYPTO_WP512 is not set
+CONFIG_CRYPTO_DES=y
+# CONFIG_CRYPTO_BLOWFISH is not set
+# CONFIG_CRYPTO_TWOFISH is not set
+# CONFIG_CRYPTO_SERPENT is not set
+# CONFIG_CRYPTO_AES is not set
+# CONFIG_CRYPTO_CAST5 is not set
+# CONFIG_CRYPTO_CAST6 is not set
+# CONFIG_CRYPTO_TEA is not set
+# CONFIG_CRYPTO_ARC4 is not set
+# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
+# CONFIG_CRYPTO_DEFLATE is not set
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_CRC32C is not set
+# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc2
-# Mon Nov 15 15:29:42 2004
+# Linux kernel version: 2.6.11-rc1-bk5
+# Tue Jan 18 11:36:49 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
CONFIG_UID16=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_IOMAP=y
#
#
# General setup
#
-# CONFIG_ZBOOT_ROM is not set
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
# CONFIG_XIP_KERNEL is not set
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
+#
+
#
# At least one math emulation must be selected
#
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
# CONFIG_DEBUG_DRIVER is not set
CONFIG_PM=y
# CONFIG_PREEMPT is not set
CONFIG_MTD_PARTITIONS=y
# CONFIG_MTD_CONCAT is not set
CONFIG_MTD_REDBOOT_PARTS=y
+CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED=y
# CONFIG_MTD_REDBOOT_PARTS_READONLY is not set
CONFIG_MTD_CMDLINE_PARTS=y
# CONFIG_MTD_CFI_I4 is not set
# CONFIG_MTD_CFI_I8 is not set
CONFIG_MTD_CFI_INTELEXT=y
-# CONFIG_MTD_CFI_AMDSTD is not set
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_AMDSTD_RETRY=0
# CONFIG_MTD_CFI_STAA is not set
CONFIG_MTD_CFI_UTIL=y
# CONFIG_MTD_RAM is not set
-# CONFIG_MTD_ROM is not set
+CONFIG_MTD_ROM=y
# CONFIG_MTD_ABSENT is not set
# CONFIG_MTD_OBSOLETE_CHIPS is not set
+# CONFIG_MTD_XIP is not set
#
# Mapping drivers for chip access
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
# CONFIG_MTD_NAND_S3C2410_DEBUG is not set
# CONFIG_MTD_NAND_S3C2410_HWECC is not set
# CONFIG_MTD_NAND_DISKONCHIP is not set
+# CONFIG_MTD_NAND_NANDSIM is not set
#
# Plug and Play support
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_PARIDE is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+CONFIG_ATA_OVER_ETH=m
#
# Multi-device support (RAID and LVM)
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
+CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
#
# CONFIG_DIGI is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
+# CONFIG_ISI is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_N_HDLC is not set
# CONFIG_RISCOM8 is not set
CONFIG_I2C_SENSOR=m
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83L785TS is not set
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_JFFS2_FS_NOR_ECC is not set
# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y
# Logo configuration
#
# CONFIG_LOGO is not set
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_KOBJECT is not set
-# CONFIG_DEBUG_BUGVERBOSE is not set
+CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_INFO=y
+# CONFIG_DEBUG_FS is not set
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_USER=y
# CONFIG_DEBUG_WAITQ is not set
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
*/
static DECLARE_WAIT_QUEUE_HEAD(kapmd_wait);
static DECLARE_COMPLETION(kapmd_exit);
-static spinlock_t kapmd_queue_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(kapmd_queue_lock);
static struct apm_queue kapmd_queue;
.long sys_ipc
.long sys_fsync
.long sys_sigreturn_wrapper
-/* 120 */ .long sys_clone_wapper
+/* 120 */ .long sys_clone_wrapper
.long sys_setdomainname
.long sys_newuname
.long sys_ni_syscall
.long sys_fremovexattr
.long sys_tkill
.long sys_sendfile64
-/* 240 */ .long sys_futex
+/* 240 */ .long sys_futex_wrapper
.long sys_sched_setaffinity
.long sys_sched_getaffinity
.long sys_io_setup
.long sys_remap_file_pages
.long sys_ni_syscall /* sys_set_thread_area */
/* 255 */ .long sys_ni_syscall /* sys_get_thread_area */
- .long sys_ni_syscall /* sys_set_tid_address */
+ .long sys_set_tid_address
.long sys_timer_create
.long sys_timer_settime
.long sys_timer_gettime
#include <asm/mach/dma.h>
-spinlock_t dma_spin_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(dma_spin_lock);
#if MAX_DMA_CHANNELS > 0
#include <asm/thread_info.h>
#include <asm/ptrace.h>
+#include <asm/unistd.h>
#include "entry-header.S"
tst ip, #_TIF_SYSCALL_TRACE @ are we tracing syscalls?
bne __sys_trace
- adrsvc al, lr, ret_fast_syscall @ return address
+ adr lr, ret_fast_syscall @ return address
cmp scno, #NR_syscalls @ check upper syscall limit
ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
mov r0, #0 @ trace entry [IP = 0]
bl syscall_trace
- adrsvc al, lr, __sys_trace_return @ return address
+ adr lr, __sys_trace_return @ return address
add r1, sp, #S_R0 + S_OFF @ pointer to regs
cmp scno, #NR_syscalls @ check upper syscall limit
ldmccia r1, {r0 - r3} @ have to reload r0 - r3
.type sys_syscall, #function
sys_syscall:
eor scno, r0, #OS_NUMBER << 20
- cmp scno, #NR_syscalls @ check range
- stmleia sp, {r5, r6} @ shuffle args
- movle r0, r1
- movle r1, r2
- movle r2, r3
- movle r3, r4
- ldrle pc, [tbl, scno, lsl #2]
+ cmp scno, #__NR_syscall - __NR_SYSCALL_BASE
+ cmpne scno, #NR_syscalls @ check range
+ stmloia sp, {r5, r6} @ shuffle args
+ movlo r0, r1
+ movlo r1, r2
+ movlo r2, r3
+ movlo r3, r4
+ ldrlo pc, [tbl, scno, lsl #2]
b sys_ni_syscall
sys_fork_wrapper:
add r3, sp, #S_OFF
b sys_execve
-sys_clone_wapper:
- add r2, sp, #S_OFF
+sys_clone_wrapper:
+ add ip, sp, #S_OFF
+ str ip, [sp, #4]
b sys_clone
sys_sigsuspend_wrapper:
ldr r2, [sp, #S_OFF + S_SP]
b do_sigaltstack
+sys_futex_wrapper:
+ str r5, [sp, #4] @ push sixth arg
+ b sys_futex
+
/*
* Note: off_4k (r5) is always units of 4K. If we can't do the requested
* offset, we return EINVAL.
#include <asm/errno.h>
#include <asm/hardware.h>
#include <asm/arch/irqs.h>
+#include <asm/arch/entry-macro.S>
#ifndef MODE_SVC
#define MODE_SVC 0x13
mov \rd, \rd, lsl #13
.endm
-/*
- * Like adr, but force SVC mode (if required)
- */
- .macro adrsvc, cond, reg, label
- adr\cond \reg, \label
- .endm
-
.macro alignment_trap, rbase, rtemp, sym
#ifdef CONFIG_ALIGNMENT_TRAP
#define OFF_CR_ALIGNMENT(x) cr_alignment - x
#include <asm/system.h>
#include <asm/uaccess.h>
+#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)
+#warning This file requires GCC 3.3.x or older to build. Alternatively,
+#warning please talk to GCC people to resolve the issues with the
+#warning assembly clobber list.
+#endif
+
static unsigned long no_fiq_insn;
/* Default reacquire function
*/
void set_fiq_regs(struct pt_regs *regs)
{
- register unsigned long tmp, tmp2;
+ register unsigned long tmp;
__asm__ volatile (
"mrs %0, cpsr\n\
- mov %1, %3\n\
- msr cpsr_c, %1 @ select FIQ mode\n\
+ msr cpsr_c, %2 @ select FIQ mode\n\
mov r0, r0\n\
- ldmia %2, {r8 - r14}\n\
+ ldmia %1, {r8 - r14}\n\
msr cpsr_c, %0 @ return to SVC mode\n\
mov r0, r0"
- : "=&r" (tmp), "=&r" (tmp2)
+ : "=&r" (tmp)
: "r" (®s->ARM_r8), "I" (PSR_I_BIT | PSR_F_BIT | FIQ_MODE)
/* These registers aren't modified by the above code in a way
visible to the compiler, but we mark them as clobbers anyway
void get_fiq_regs(struct pt_regs *regs)
{
- register unsigned long tmp, tmp2;
+ register unsigned long tmp;
__asm__ volatile (
"mrs %0, cpsr\n\
- mov %1, %3\n\
- msr cpsr_c, %1 @ select FIQ mode\n\
+ msr cpsr_c, %2 @ select FIQ mode\n\
mov r0, r0\n\
- stmia %2, {r8 - r14}\n\
+ stmia %1, {r8 - r14}\n\
msr cpsr_c, %0 @ return to SVC mode\n\
mov r0, r0"
- : "=&r" (tmp), "=&r" (tmp2)
+ : "=&r" (tmp)
: "r" (®s->ARM_r8), "I" (PSR_I_BIT | PSR_F_BIT | FIQ_MODE)
/* These registers aren't modified by the above code in a way
visible to the compiler, but we mark them as clobbers anyway
--- /dev/null
+/*
+ * linux/arch/arm/kernel/smp.c
+ *
+ * Copyright (C) 2002 ARM Limited, All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/cache.h>
+#include <linux/profile.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/cpu.h>
+#include <linux/smp.h>
+#include <linux/seq_file.h>
+
+#include <asm/atomic.h>
+#include <asm/cacheflush.h>
+#include <asm/cpu.h>
+#include <asm/processor.h>
+#include <asm/tlbflush.h>
+#include <asm/ptrace.h>
+
+/*
+ * bitmask of present and online CPUs.
+ * The present bitmask indicates that the CPU is physically present.
+ * The online bitmask indicates that the CPU is up and running.
+ */
+cpumask_t cpu_present_mask;
+cpumask_t cpu_online_map;
+
+/*
+ * structures for inter-processor calls
+ * - A collection of single bit ipi messages.
+ */
+struct ipi_data {
+ spinlock_t lock;
+ unsigned long ipi_count;
+ unsigned long bits;
+};
+
+static DEFINE_PER_CPU(struct ipi_data, ipi_data) = {
+ .lock = SPIN_LOCK_UNLOCKED,
+};
+
+enum ipi_msg_type {
+ IPI_TIMER,
+ IPI_RESCHEDULE,
+ IPI_CALL_FUNC,
+ IPI_CPU_STOP,
+};
+
+struct smp_call_struct {
+ void (*func)(void *info);
+ void *info;
+ int wait;
+ cpumask_t pending;
+ cpumask_t unfinished;
+};
+
+static struct smp_call_struct * volatile smp_call_function_data;
+static DEFINE_SPINLOCK(smp_call_function_lock);
+
+int __init __cpu_up(unsigned int cpu)
+{
+ struct task_struct *idle;
+ int ret;
+
+ /*
+ * Spawn a new process manually. Grab a pointer to
+ * its task struct so we can mess with it
+ */
+ idle = fork_idle(cpu);
+ if (IS_ERR(idle)) {
+ printk(KERN_ERR "CPU%u: fork() failed\n", cpu);
+ return PTR_ERR(idle);
+ }
+
+ /*
+ * Now bring the CPU into our world.
+ */
+ ret = boot_secondary(cpu, idle);
+ if (ret) {
+ printk(KERN_CRIT "cpu_up: processor %d failed to boot\n", cpu);
+ /*
+ * FIXME: We need to clean up the new idle thread. --rmk
+ */
+ }
+
+ return ret;
+}
+
+/*
+ * Called by both boot and secondaries to move global data into
+ * per-processor storage.
+ */
+void __init smp_store_cpu_info(unsigned int cpuid)
+{
+ struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpuid);
+
+ cpu_info->loops_per_jiffy = loops_per_jiffy;
+}
+
+void __init smp_cpus_done(unsigned int max_cpus)
+{
+ int cpu;
+ unsigned long bogosum = 0;
+
+ for_each_online_cpu(cpu)
+ bogosum += per_cpu(cpu_data, cpu).loops_per_jiffy;
+
+ printk(KERN_INFO "SMP: Total of %d processors activated "
+ "(%lu.%02lu BogoMIPS).\n",
+ num_online_cpus(),
+ bogosum / (500000/HZ),
+ (bogosum / (5000/HZ)) % 100);
+}
+
+void __init smp_prepare_boot_cpu(void)
+{
+ unsigned int cpu = smp_processor_id();
+
+ cpu_set(cpu, cpu_present_mask);
+ cpu_set(cpu, cpu_online_map);
+}
+
+static void send_ipi_message(cpumask_t callmap, enum ipi_msg_type msg)
+{
+ unsigned long flags;
+ unsigned int cpu;
+
+ local_irq_save(flags);
+
+ for_each_cpu_mask(cpu, callmap) {
+ struct ipi_data *ipi = &per_cpu(ipi_data, cpu);
+
+ spin_lock(&ipi->lock);
+ ipi->bits |= 1 << msg;
+ spin_unlock(&ipi->lock);
+ }
+
+ /*
+ * Call the platform specific cross-CPU call function.
+ */
+ smp_cross_call(callmap);
+
+ local_irq_restore(flags);
+}
+
+/*
+ * You must not call this function with disabled interrupts, from a
+ * hardware interrupt handler, nor from a bottom half handler.
+ */
+int smp_call_function_on_cpu(void (*func)(void *info), void *info, int retry,
+ int wait, cpumask_t callmap)
+{
+ struct smp_call_struct data;
+ unsigned long timeout;
+ int ret = 0;
+
+ data.func = func;
+ data.info = info;
+ data.wait = wait;
+
+ cpu_clear(smp_processor_id(), callmap);
+ if (cpus_empty(callmap))
+ goto out;
+
+ data.pending = callmap;
+ if (wait)
+ data.unfinished = callmap;
+
+ /*
+ * try to get the mutex on smp_call_function_data
+ */
+ spin_lock(&smp_call_function_lock);
+ smp_call_function_data = &data;
+
+ send_ipi_message(callmap, IPI_CALL_FUNC);
+
+ timeout = jiffies + HZ;
+ while (!cpus_empty(data.pending) && time_before(jiffies, timeout))
+ barrier();
+
+ /*
+ * did we time out?
+ */
+ if (!cpus_empty(data.pending)) {
+ /*
+ * this may be causing our panic - report it
+ */
+ printk(KERN_CRIT
+ "CPU%u: smp_call_function timeout for %p(%p)\n"
+ " callmap %lx pending %lx, %swait\n",
+ smp_processor_id(), func, info, callmap, data.pending,
+ wait ? "" : "no ");
+
+ /*
+ * TRACE
+ */
+ timeout = jiffies + (5 * HZ);
+ while (!cpus_empty(data.pending) && time_before(jiffies, timeout))
+ barrier();
+
+ if (cpus_empty(data.pending))
+ printk(KERN_CRIT " RESOLVED\n");
+ else
+ printk(KERN_CRIT " STILL STUCK\n");
+ }
+
+ /*
+ * whatever happened, we're done with the data, so release it
+ */
+ smp_call_function_data = NULL;
+ spin_unlock(&smp_call_function_lock);
+
+ if (!cpus_empty(data.pending)) {
+ ret = -ETIMEDOUT;
+ goto out;
+ }
+
+ if (wait)
+ while (!cpus_empty(data.unfinished))
+ barrier();
+ out:
+
+	return ret;
+}
+
+int smp_call_function(void (*func)(void *info), void *info, int retry,
+ int wait)
+{
+ return smp_call_function_on_cpu(func, info, retry, wait,
+ cpu_online_map);
+}
+
+void show_ipi_list(struct seq_file *p)
+{
+ unsigned int cpu;
+
+ seq_puts(p, "IPI:");
+
+ for_each_online_cpu(cpu)
+ seq_printf(p, " %10lu", per_cpu(ipi_data, cpu).ipi_count);
+
+ seq_putc(p, '\n');
+}
+
+static void ipi_timer(struct pt_regs *regs)
+{
+ int user = user_mode(regs);
+
+ irq_enter();
+ profile_tick(CPU_PROFILING, regs);
+ update_process_times(user);
+ irq_exit();
+}
+
+/*
+ * ipi_call_function - handle IPI from smp_call_function()
+ *
+ * Note that we copy data out of the cross-call structure and then
+ * let the caller know that we're here and have done with their data
+ */
+static void ipi_call_function(unsigned int cpu)
+{
+ struct smp_call_struct *data = smp_call_function_data;
+ void (*func)(void *info) = data->func;
+ void *info = data->info;
+ int wait = data->wait;
+
+ cpu_clear(cpu, data->pending);
+
+ func(info);
+
+ if (wait)
+ cpu_clear(cpu, data->unfinished);
+}
+
+static DEFINE_SPINLOCK(stop_lock);
+
+/*
+ * ipi_cpu_stop - handle IPI from smp_send_stop()
+ */
+static void ipi_cpu_stop(unsigned int cpu)
+{
+ spin_lock(&stop_lock);
+ printk(KERN_CRIT "CPU%u: stopping\n", cpu);
+ dump_stack();
+ spin_unlock(&stop_lock);
+
+ cpu_clear(cpu, cpu_online_map);
+
+ local_fiq_disable();
+ local_irq_disable();
+
+ while (1)
+ cpu_relax();
+}
+
+/*
+ * Main handler for inter-processor interrupts
+ *
+ * For ARM, the ipimask now only identifies a single
+ * category of IPI (Bit 1 IPIs have been replaced by a
+ * different mechanism):
+ *
+ * Bit 0 - Inter-processor function call
+ */
+void do_IPI(struct pt_regs *regs)
+{
+ unsigned int cpu = smp_processor_id();
+ struct ipi_data *ipi = &per_cpu(ipi_data, cpu);
+
+ ipi->ipi_count++;
+
+ for (;;) {
+ unsigned long msgs;
+
+ spin_lock(&ipi->lock);
+ msgs = ipi->bits;
+ ipi->bits = 0;
+ spin_unlock(&ipi->lock);
+
+ if (!msgs)
+ break;
+
+ do {
+ unsigned nextmsg;
+
+ nextmsg = msgs & -msgs;
+ msgs &= ~nextmsg;
+ nextmsg = ffz(~nextmsg);
+
+ switch (nextmsg) {
+ case IPI_TIMER:
+ ipi_timer(regs);
+ break;
+
+ case IPI_RESCHEDULE:
+ /*
+				 * nothing more to do - everything is
+ * done on the interrupt return path
+ */
+ break;
+
+ case IPI_CALL_FUNC:
+ ipi_call_function(cpu);
+ break;
+
+ case IPI_CPU_STOP:
+ ipi_cpu_stop(cpu);
+ break;
+
+ default:
+ printk(KERN_CRIT "CPU%u: Unknown IPI message 0x%x\n",
+ cpu, nextmsg);
+ break;
+ }
+ } while (msgs);
+ }
+}
+
+void smp_send_reschedule(int cpu)
+{
+ send_ipi_message(cpumask_of_cpu(cpu), IPI_RESCHEDULE);
+}
+
+void smp_send_timer(void)
+{
+ cpumask_t mask = cpu_online_map;
+ cpu_clear(smp_processor_id(), mask);
+ send_ipi_message(mask, IPI_TIMER);
+}
+
+void smp_send_stop(void)
+{
+ cpumask_t mask = cpu_online_map;
+ cpu_clear(smp_processor_id(), mask);
+ send_ipi_message(mask, IPI_CPU_STOP);
+}
+
+/*
+ * not supported here
+ */
+int __init setup_profiling_timer(unsigned int multiplier)
+{
+ return -EINVAL;
+}
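
The new smp.c above exposes smp_call_function() and smp_call_function_on_cpu() for
running a routine on other online CPUs via IPI. As a rough sketch only (flush_one()
and flush_all_cpus() are hypothetical names, not part of this patch), a caller that
respects the restriction noted in the code -- no disabled interrupts, no hard-IRQ or
bottom-half context -- might use the interface like this:

/* Hypothetical example, not part of the patch: run flush_one() on every
 * online CPU using the interface added above.  Arguments follow the new
 * prototype: smp_call_function(func, info, retry, wait).
 */
#include <linux/smp.h>

static void flush_one(void *info)
{
	/* per-CPU work; executed on each remote CPU from do_IPI() */
}

static void flush_all_cpus(void)
{
	smp_call_function(flush_one, NULL, 0, 1);	/* remote CPUs, wait for completion */
	flush_one(NULL);				/* then do the same locally */
}

In this sketch, do_IPI() above dispatches the IPI_CALL_FUNC message to
ipi_call_function(), which is what finally invokes flush_one() on each remote CPU.
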
#endif
.endm
-.insw_bad_alignment:
- adr r0, .insw_bad_align_msg
- mov r2, lr
- b panic
-.insw_bad_align_msg:
- .asciz "insw: bad buffer alignment (0x%p, lr=0x%08lX)\n"
- .align
-
-.insw_align: tst r1, #1
- bne .insw_bad_alignment
-
- ldrh r3, [r0]
- strh r3, [r1], #2
-
- subs r2, r2, #1
- RETINSTR(moveq, pc, lr)
+.insw_align: movs ip, r1, lsl #31
+ bne .insw_noalign
+ ldrh ip, [r0]
+ sub r2, r2, #1
+ strh ip, [r1], #2
ENTRY(__raw_readsw)
- teq r2, #0 @ do we have to check for the zero len?
+ teq r2, #0
moveq pc, lr
tst r1, #3
bne .insw_align
ldrh lr, [r0]
pack ip, ip, lr
- stmia r1!, {r3 - r5, ip}
-
subs r2, r2, #8
+ stmia r1!, {r3 - r5, ip}
bpl .insw_8_lp
- tst r2, #7
- LOADREGS(eqfd, sp!, {r4, r5, pc})
-
.no_insw_8: tst r2, #4
beq .no_insw_4
stmia r1!, {r3, r4}
-.no_insw_4: tst r2, #2
- beq .no_insw_2
+.no_insw_4: movs r2, r2, lsl #31
+ bcc .no_insw_2
ldrh r3, [r0]
ldrh ip, [r0]
pack r3, r3, ip
-
str r3, [r1], #4
-.no_insw_2: tst r2, #1
- ldrneh r3, [r0]
+.no_insw_2: ldrneh r3, [r0]
strneh r3, [r1]
- LOADREGS(fd, sp!, {r4, r5, pc})
+ ldmfd sp!, {r4, r5, pc}
+
+#ifdef __ARMEB__
+#define _BE_ONLY_(code...) code
+#define _LE_ONLY_(code...)
+#define push_hbyte0 lsr #8
+#define pull_hbyte1 lsl #24
+#else
+#define _BE_ONLY_(code...)
+#define _LE_ONLY_(code...) code
+#define push_hbyte0 lsl #24
+#define pull_hbyte1 lsr #8
+#endif
+
+.insw_noalign: stmfd sp!, {r4, lr}
+ ldrccb ip, [r1, #-1]!
+ bcc 1f
+
+ ldrh ip, [r0]
+ sub r2, r2, #1
+ _BE_ONLY_( mov ip, ip, ror #8 )
+ strb ip, [r1], #1
+ _LE_ONLY_( mov ip, ip, lsr #8 )
+ _BE_ONLY_( mov ip, ip, lsr #24 )
+
+1: subs r2, r2, #2
+ bmi 3f
+ _BE_ONLY_( mov ip, ip, lsl #24 )
+
+2: ldrh r3, [r0]
+ ldrh r4, [r0]
+ subs r2, r2, #2
+ orr ip, ip, r3, lsl #8
+ orr ip, ip, r4, push_hbyte0
+ str ip, [r1], #4
+ mov ip, r4, pull_hbyte1
+ bpl 2b
+
+ _BE_ONLY_( mov ip, ip, lsr #24 )
+
+3: tst r2, #1
+ strb ip, [r1], #1
+ ldrneh ip, [r0]
+ _BE_ONLY_( movne ip, ip, ror #8 )
+ strneb ip, [r1], #1
+ _LE_ONLY_( movne ip, ip, lsr #8 )
+ _BE_ONLY_( movne ip, ip, lsr #24 )
+ strneb ip, [r1]
+ ldmfd sp!, {r4, pc}
#endif
.endm
-.outsw_bad_alignment:
- adr r0, .outsw_bad_align_msg
- mov r2, lr
- b panic
-.outsw_bad_align_msg:
- .asciz "outsw: bad buffer alignment (0x%p, lr=0x%08lX)\n"
- .align
-
-.outsw_align: tst r1, #1
- bne .outsw_bad_alignment
+.outsw_align: movs ip, r1, lsl #31
+ bne .outsw_noalign
ldrh r3, [r1], #2
+ sub r2, r2, #1
strh r3, [r0]
- subs r2, r2, #1
- RETINSTR(moveq, pc, lr)
-
ENTRY(__raw_writesw)
- teq r2, #0 @ do we have to check for the zero len?
+ teq r2, #0
moveq pc, lr
- tst r1, #3
+ ands r3, r1, #3
bne .outsw_align
stmfd sp!, {r4, r5, lr}
bmi .no_outsw_8
.outsw_8_lp: ldmia r1!, {r3, r4, r5, ip}
+ subs r2, r2, #8
outword r3
outword r4
outword r5
outword ip
- subs r2, r2, #8
bpl .outsw_8_lp
- tst r2, #7
- LOADREGS(eqfd, sp!, {r4, r5, pc})
-
.no_outsw_8: tst r2, #4
beq .no_outsw_4
outword r3
outword ip
-.no_outsw_4: tst r2, #2
- beq .no_outsw_2
+.no_outsw_4: movs r2, r2, lsl #31
+ bcc .no_outsw_2
ldr r3, [r1], #4
outword r3
-.no_outsw_2: tst r2, #1
- ldrneh r3, [r1]
+.no_outsw_2: ldrneh r3, [r1]
strneh r3, [r0]
- LOADREGS(fd, sp!, {r4, r5, pc})
+ ldmfd sp!, {r4, r5, pc}
+
+#ifdef __ARMEB__
+#define pull_hbyte0 lsl #8
+#define push_hbyte1 lsr #24
+#else
+#define pull_hbyte0 lsr #24
+#define push_hbyte1 lsl #8
+#endif
+
+.outsw_noalign: ldr r3, [r1, -r3]!
+ subcs r2, r2, #1
+ bcs 2f
+ subs r2, r2, #2
+ bmi 3f
+
+1: mov ip, r3, lsr #8
+ strh ip, [r0]
+2: mov ip, r3, pull_hbyte0
+ ldr r3, [r1, #4]!
+ subs r2, r2, #2
+ orr ip, ip, r3, push_hbyte1
+ strh ip, [r0]
+ bpl 2b
+
+3: tst r2, #1
+2: movne ip, r3, lsr #8
+ strneh ip, [r0]
+ mov pc, lr
<http://www.crl.hpl.hp.com/projects/personalserver/>
If you have any questions or comments about the Compaq Personal
- Server, send e-mail to skiff@crl.dec.com.
+ Server, send e-mail to <skiff@crl.dec.com>.
config ARCH_EBSA285_ADDIN
bool "EBSA285 (addin mode)"
static char led_state;
static char hw_led_state;
-static spinlock_t leds_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(leds_lock);
static void ebsa285_leds_event(led_event_t evt)
{
/*
* This is a lock for accessing ports GP1_IO_BASE and GP2_IO_BASE
*/
-spinlock_t gpio_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(gpio_lock);
static unsigned int current_gpio_op;
static unsigned int current_gpio_io;
static char led_state;
static char hw_led_state;
-static spinlock_t leds_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(leds_lock);
extern spinlock_t gpio_lock;
static void netwinder_leds_event(led_event_t evt)
bool
help
Select code specific to h7202 variants
+config H7202_SERIAL23
+ depends on CPU_H7202
+ bool "Use serial ports 2+3"
+ help
+	  Say Y here if you wish to use serial ports 2+3. They share their
+	  pins with the keyboard matrix controller, so you have to choose
+	  between the two.
+
endif
* 2003 Robert Schwebel <r.schwebel@pengutronix.de>
* 2004 Sascha Hauer <s.hauer@pengutronix.de>
*
- * processor specific stuff for the Hynix h7201
+ * processor specific stuff for the Hynix h7202
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
static struct plat_serial8250_port serial_platform_data[] = {
{
- .membase = SERIAL0_BASE,
+ .membase = (void*)SERIAL0_VIRT,
+ .mapbase = SERIAL0_BASE,
.irq = IRQ_UART0,
.uartclk = 2*1843200,
.regshift = 2,
.flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
},
{
- .membase = SERIAL1_BASE,
+ .membase = (void*)SERIAL1_VIRT,
+ .mapbase = SERIAL1_BASE,
.irq = IRQ_UART1,
.uartclk = 2*1843200,
.regshift = 2,
.iotype = UPIO_MEM,
.flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
},
+#ifdef CONFIG_H7202_SERIAL23
{
- .membase = SERIAL2_BASE,
+ .membase = (void*)SERIAL2_VIRT,
+ .mapbase = SERIAL2_BASE,
.irq = IRQ_UART2,
.uartclk = 2*1843200,
.regshift = 2,
.flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
},
{
- .membase = SERIAL3_BASE,
+ .membase = (void*)SERIAL3_VIRT,
+ .mapbase = SERIAL3_BASE,
.irq = IRQ_UART3,
.uartclk = 2*1843200,
.regshift = 2,
.iotype = UPIO_MEM,
.flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
},
+#endif
{ },
};
/* Enable clocks */
CPU_REG (PMU_BASE, PMU_PLL_CTRL) |= PLL_2_EN | PLL_1_EN | PLL_3_MUTE;
+ CPU_REG (SERIAL0_VIRT, SERIAL_ENABLE) = SERIAL_ENABLE_EN;
+ CPU_REG (SERIAL1_VIRT, SERIAL_ENABLE) = SERIAL_ENABLE_EN;
+#ifdef CONFIG_H7202_SERIAL23
+ CPU_REG (SERIAL2_VIRT, SERIAL_ENABLE) = SERIAL_ENABLE_EN;
+ CPU_REG (SERIAL3_VIRT, SERIAL_ENABLE) = SERIAL_ENABLE_EN;
+ CPU_IO (GPIO_AMULSEL) = AMULSEL_USIN2 | AMULSEL_USOUT2 |
+ AMULSEL_USIN3 | AMULSEL_USOUT3;
+#endif
(void) platform_add_devices(devices, ARRAY_SIZE(devices));
}
i, channel->name);
DBOSR |= (1 << i);
}
- DISR |= (1 << i);
+ DISR = (1 << i);
}
return IRQ_HANDLED;
}
*/
printk(KERN_WARNING
"spurious IRQ for DMA channel %d\n", i);
- DISR |= (1 << i);
}
}
}
+ DISR = disr;
return IRQ_HANDLED;
}
static unsigned int imx_decode_pll(unsigned int pll)
{
u32 mfi = (pll >> 10) & 0xf;
- u32 mfn = pll & 0x3f;
- u32 mfd = (pll >> 16) & 0x3f;
+ u32 mfn = pll & 0x3ff;
+ u32 mfd = (pll >> 16) & 0x3ff;
u32 pd = (pll >> 26) & 0xf;
u32 f_ref = (CSCR & CSCR_SYSTEM_SEL) ? 16000000 : (CLK32 * 512);
* 7:2 register number
*
*/
-static spinlock_t v3_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(v3_lock);
#define PCI_BUS_NONMEM_START 0x00000000
#define PCI_BUS_NONMEM_SIZE SZ_256M
/*
* linux/arch/arm/mach-integrator/time.c
*
- * Copyright (C) 2000-2001 Deep Blue Solutions
+ * Copyright (C) 2000-2001 Deep Blue Solutions Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/time.h>
+#include <linux/mc146818rtc.h>
+#include <linux/interrupt.h>
#include <linux/init.h>
+#include <linux/device.h>
+#include <asm/hardware/amba.h>
#include <asm/hardware.h>
#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/rtc.h>
-#define RTC_DR (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 0)
-#define RTC_MR (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 4)
-#define RTC_STAT (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 8)
-#define RTC_EOI (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 8)
-#define RTC_LR (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 12)
-#define RTC_CR (IO_ADDRESS(INTEGRATOR_RTC_BASE) + 16)
+#include <asm/mach/time.h>
-#define RTC_CR_MIE 0x00000001
+#define RTC_DR (0)
+#define RTC_MR (4)
+#define RTC_STAT (8)
+#define RTC_EOI (8)
+#define RTC_LR (12)
+#define RTC_CR (16)
+#define RTC_CR_MIE (1 << 0)
extern int (*set_rtc)(void);
+static void __iomem *rtc_base;
static int integrator_set_rtc(void)
{
- __raw_writel(xtime.tv_sec, RTC_LR);
+ __raw_writel(xtime.tv_sec, rtc_base + RTC_LR);
return 1;
}
-static int integrator_rtc_init(void)
+static void rtc_read_alarm(struct rtc_wkalrm *alrm)
{
- __raw_writel(0, RTC_CR);
- __raw_writel(0, RTC_EOI);
+ rtc_time_to_tm(readl(rtc_base + RTC_MR), &alrm->time);
+}
+
+static int rtc_set_alarm(struct rtc_wkalrm *alrm)
+{
+ unsigned long time;
+ int ret;
+
+ ret = rtc_tm_to_time(&alrm->time, &time);
+ if (ret == 0)
+ writel(time, rtc_base + RTC_MR);
+ return ret;
+}
+
+static void rtc_read_time(struct rtc_time *tm)
+{
+ rtc_time_to_tm(readl(rtc_base + RTC_DR), tm);
+}
- xtime.tv_sec = __raw_readl(RTC_DR);
+/*
+ * Set the RTC time. Unfortunately, we can't accurately set
+ * the point at which the counter updates.
+ *
+ * Also, since RTC_LR is transferred to RTC_CR on next rising
+ * edge of the 1Hz clock, we must write the time one second
+ * in advance.
+ */
+static int rtc_set_time(struct rtc_time *tm)
+{
+ unsigned long time;
+ int ret;
+
+ ret = rtc_tm_to_time(tm, &time);
+ if (ret == 0)
+ writel(time + 1, rtc_base + RTC_LR);
+
+ return ret;
+}
+
+static struct rtc_ops rtc_ops = {
+ .owner = THIS_MODULE,
+ .read_time = rtc_read_time,
+ .set_time = rtc_set_time,
+ .read_alarm = rtc_read_alarm,
+ .set_alarm = rtc_set_alarm,
+};
+
+static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ writel(0, rtc_base + RTC_EOI);
+ return IRQ_HANDLED;
+}
+
+static int rtc_probe(struct amba_device *dev, void *id)
+{
+ int ret;
+
+ if (rtc_base)
+ return -EBUSY;
+
+ ret = amba_request_regions(dev, NULL);
+ if (ret)
+ goto out;
+
+ rtc_base = ioremap(dev->res.start, SZ_4K);
+ if (!rtc_base) {
+ ret = -ENOMEM;
+ goto res_out;
+ }
+
+ __raw_writel(0, rtc_base + RTC_CR);
+ __raw_writel(0, rtc_base + RTC_EOI);
+
+ xtime.tv_sec = __raw_readl(rtc_base + RTC_DR);
+
+ ret = request_irq(dev->irq[0], rtc_interrupt, SA_INTERRUPT,
+ "rtc-pl030", dev);
+ if (ret)
+ goto map_out;
+
+ ret = register_rtc(&rtc_ops);
+ if (ret)
+ goto irq_out;
set_rtc = integrator_set_rtc;
+ return 0;
+
+ irq_out:
+ free_irq(dev->irq[0], dev);
+ map_out:
+ iounmap(rtc_base);
+ rtc_base = NULL;
+ res_out:
+ amba_release_regions(dev);
+ out:
+ return ret;
+}
+
+static int rtc_remove(struct amba_device *dev)
+{
+ set_rtc = NULL;
+
+ writel(0, rtc_base + RTC_CR);
+
+ free_irq(dev->irq[0], dev);
+ unregister_rtc(&rtc_ops);
+
+ iounmap(rtc_base);
+ rtc_base = NULL;
+ amba_release_regions(dev);
+
+ return 0;
+}
+
+static struct timespec rtc_delta;
+
+static int rtc_suspend(struct amba_device *dev, u32 state)
+{
+ struct timespec rtc;
+
+ rtc.tv_sec = readl(rtc_base + RTC_DR);
+ rtc.tv_nsec = 0;
+ save_time_delta(&rtc_delta, &rtc);
return 0;
}
-__initcall(integrator_rtc_init);
+static int rtc_resume(struct amba_device *dev)
+{
+ struct timespec rtc;
+
+ rtc.tv_sec = readl(rtc_base + RTC_DR);
+ rtc.tv_nsec = 0;
+ restore_time_delta(&rtc_delta, &rtc);
+
+ return 0;
+}
+
+static struct amba_id rtc_ids[] = {
+ {
+ .id = 0x00041030,
+ .mask = 0x000fffff,
+ },
+ { 0, 0 },
+};
+
+static struct amba_driver rtc_driver = {
+ .drv = {
+ .name = "rtc-pl030",
+ },
+ .probe = rtc_probe,
+ .remove = rtc_remove,
+ .suspend = rtc_suspend,
+ .resume = rtc_resume,
+ .id_table = rtc_ids,
+};
+
+static int __init integrator_rtc_init(void)
+{
+ return amba_driver_register(&rtc_driver);
+}
+
+static void __exit integrator_rtc_exit(void)
+{
+ amba_driver_unregister(&rtc_driver);
+}
+
+module_init(integrator_rtc_init);
+module_exit(integrator_rtc_exit);
Say Y here if you want to run your kernel on the Intel IQ80331
evaluation kit for the IOP331 chipset.
+config MACH_IQ80332
+ bool "Enable support for IQ80332"
+ select ARCH_IOP331
+ help
+ Say Y here if you want to run your kernel on the Intel IQ80332
+	  evaluation kit for the IOP332 chipset.
+
config ARCH_EP80219
bool "Enable support for EP80219"
select ARCH_IOP321
bool
default ARCH_IQ80331
help
- The IQ80331 uses the IOP331 variant.
+	  The IQ80331, IQ80332, and IQ80333 use the IOP331 variant.
comment "IOP3xx Chipset Features"
-endmenu
+config IOP331_STEPD
+ bool "Chip stepping D of the IOP80331 processor or IOP80333"
+ depends on (ARCH_IOP331)
+ help
+	  Say Y here if your platform is based on stepping D of the
+	  IOP80331 or IOP80333.
+endmenu
endif
obj-$(CONFIG_ARCH_IQ31244) += iq31244-mm.o iq31244-pci.o
obj-$(CONFIG_ARCH_IQ80331) += iq80331-mm.o iq80331-pci.o
+
+obj-$(CONFIG_MACH_IQ80332) += iq80332-mm.o iq80332-pci.o
void iop321_init(void)
{
-#if CONFIG_ARCH_EP80219
- *IOP321_ATUCR = 0x2;
- *IOP321_OIOWTVR = 0x90000000;
- *IOP321_IABAR0 = 0x00000004;
- *IOP321_IABAR2 = 0xa000000c;
- *IOP321_IALR2 = 0xe0000000;
-#endif
-
DBG("PCI: Intel 80321 PCI init code.\n");
- DBG("\tATU: IOP321_ATUCMD=0x%04x\n", *IOP321_ATUCMD);
- DBG("\tATU: IOP321_OMWTVR0=0x%04x, IOP321_OIOWTVR=0x%04x\n",
+ DBG("ATU: IOP321_ATUCMD=0x%04x\n", *IOP321_ATUCMD);
+ DBG("ATU: IOP321_OMWTVR0=0x%04x, IOP321_OIOWTVR=0x%04x\n",
*IOP321_OMWTVR0,
*IOP321_OIOWTVR);
- DBG("\tATU: IOP321_ATUCR=0x%08x\n", *IOP321_ATUCR);
- DBG("\tATU: IOP321_IABAR0=0x%08x IOP321_IALR0=0x%08x IOP321_IATVR0=%08x\n", *IOP321_IABAR0, *IOP321_IALR0, *IOP321_IATVR0);
- DBG("\tATU: IOP321_ERBAR=0x%08x IOP321_ERLR=0x%08x IOP321_ERTVR=%08x\n", *IOP321_ERBAR, *IOP321_ERLR, *IOP321_ERTVR);
- DBG("\tATU: IOP321_IABAR2=0x%08x IOP321_IALR2=0x%08x IOP321_IATVR2=%08x\n", *IOP321_IABAR2, *IOP321_IALR2, *IOP321_IATVR2);
- DBG("\tATU: IOP321_IABAR3=0x%08x IOP321_IALR3=0x%08x IOP321_IATVR3=%08x\n", *IOP321_IABAR3, *IOP321_IALR3, *IOP321_IATVR3);
-
-#if 0
- hook_fault_code(4, iop321_pci_abort, SIGBUS, "external abort on linefetch");
- hook_fault_code(6, iop321_pci_abort, SIGBUS, "external abort on linefetch");
- hook_fault_code(8, iop321_pci_abort, SIGBUS, "external abort on non-linefetch");
- hook_fault_code(10, iop321_pci_abort, SIGBUS, "external abort on non-linefetch");
-#endif
+ DBG("ATU: IOP321_ATUCR=0x%08x\n", *IOP321_ATUCR);
+ DBG("ATU: IOP321_IABAR0=0x%08x IOP321_IALR0=0x%08x IOP321_IATVR0=%08x\n",
+ *IOP321_IABAR0, *IOP321_IALR0, *IOP321_IATVR0);
+ DBG("ATU: IOP321_OMWTVR0=0x%08x\n", *IOP321_OMWTVR0);
+ DBG("ATU: IOP321_IABAR1=0x%08x IOP321_IALR1=0x%08x\n",
+ *IOP321_IABAR1, *IOP321_IALR1);
+ DBG("ATU: IOP321_ERBAR=0x%08x IOP321_ERLR=0x%08x IOP321_ERTVR=%08x\n",
+ *IOP321_ERBAR, *IOP321_ERLR, *IOP321_ERTVR);
+ DBG("ATU: IOP321_IABAR2=0x%08x IOP321_IALR2=0x%08x IOP321_IATVR2=%08x\n",
+ *IOP321_IABAR2, *IOP321_IALR2, *IOP321_IATVR2);
+ DBG("ATU: IOP321_IABAR3=0x%08x IOP321_IALR3=0x%08x IOP321_IATVR3=%08x\n",
+ *IOP321_IABAR3, *IOP321_IALR3, *IOP321_IATVR3);
+
hook_fault_code(16+6, iop321_pci_abort, SIGBUS, "imprecise external abort");
}
*
* Author: Nicolas Pitre <nico@cam.org>
* Copyright (C) 2001 MontaVista Software, Inc.
+ * Copyright (C) 2004 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
/* virtual physical length type */
/* mem mapped registers */
- { IOP321_VIRT_MEM_BASE, IOP321_PHY_MEM_BASE, 0x00002000, MT_DEVICE },
+ { IOP321_VIRT_MEM_BASE, IOP321_PHYS_MEM_BASE, 0x00002000, MT_DEVICE },
/* PCI IO space */
- { 0xfe000000, 0x90000000, 0x00020000, MT_DEVICE }
+ { IOP321_PCI_LOWER_IO_VA, IOP321_PCI_LOWER_IO_PA, IOP321_PCI_IO_WINDOW_SIZE, MT_DEVICE }
};
#ifdef CONFIG_ARCH_IQ80321
}
};
+static struct resource iop32x_i2c_0_resources[] = {
+ [0] = {
+ .start = 0xfffff680,
+ .end = 0xfffff698,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_IOP321_I2C_0,
+ .end = IRQ_IOP321_I2C_0,
+ .flags = IORESOURCE_IRQ
+ }
+};
+
+static struct resource iop32x_i2c_1_resources[] = {
+ [0] = {
+ .start = 0xfffff6a0,
+ .end = 0xfffff6b8,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_IOP321_I2C_1,
+ .end = IRQ_IOP321_I2C_1,
+ .flags = IORESOURCE_IRQ
+ }
+};
+
+static struct platform_device iop32x_i2c_0_controller = {
+ .name = "IOP3xx-I2C",
+ .id = 0,
+ .num_resources = 2,
+ .resource = iop32x_i2c_0_resources
+};
+
+static struct platform_device iop32x_i2c_1_controller = {
+ .name = "IOP3xx-I2C",
+ .id = 1,
+ .num_resources = 2,
+ .resource = iop32x_i2c_1_resources
+};
+
+static struct platform_device *iop32x_devices[] __initdata = {
+ &iop32x_i2c_0_controller,
+ &iop32x_i2c_1_controller
+};
+
+void __init iop32x_init(void)
+{
+	if (iop_is_321())
+		platform_add_devices(iop32x_devices, ARRAY_SIZE(iop32x_devices));
+}
+
void __init iop321_map_io(void)
{
iotable_init(iop321_std_desc, ARRAY_SIZE(iop321_std_desc));
INITIRQ(iop321_init_irq)
.timer = &iop321_timer,
BOOT_PARAMS(0xa0000100)
+ INIT_MACHINE(iop32x_init)
MACHINE_END
#elif defined(CONFIG_ARCH_IQ31244)
MACHINE_START(IQ31244, "Intel IQ31244")
INITIRQ(iop321_init_irq)
.timer = &iop321_timer,
BOOT_PARAMS(0xa0000100)
+ INIT_MACHINE(iop32x_init)
MACHINE_END
#else
#error No machine descriptor defined for this IOP3XX implementation
* PCI support for the Intel IOP331 chipset
*
* Author: Dave Jiang (dave.jiang@intel.com)
- * Copyright (C) 2003 Intel Corp.
+ * Copyright (C) 2003, 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
#include <asm/arch/iop331.h>
-//#define DEBUG
+#undef DEBUG
+#undef DEBUG1
#ifdef DEBUG
#define DBG(x...) printk(x)
#define DBG(x...) do { } while (0)
#endif
+#ifdef DEBUG1
+#define DBG1(x...) printk(x)
+#else
+#define DBG1(x...) do { } while (0)
+#endif
+
/*
* This routine builds either a type0 or type1 configuration command. If the
* bus is on the 80331 then a type0 made, else a type1 is created.
void iop331_init(void)
{
- DBG("PCI: Intel 80331 PCI init code.\n");
- DBG("\tATU: IOP331_ATUCMD=0x%04x\n", *IOP331_ATUCMD);
- DBG("\tATU: IOP331_OMWTVR0=0x%04x, IOP331_OIOWTVR=0x%04x\n",
+ DBG1("PCI: Intel 80331 PCI init code.\n");
+ DBG1("\tATU: IOP331_ATUCMD=0x%04x\n", *IOP331_ATUCMD);
+ DBG1("\tATU: IOP331_OMWTVR0=0x%04x, IOP331_OIOWTVR=0x%04x\n",
*IOP331_OMWTVR0,
*IOP331_OIOWTVR);
- DBG("\tATU: IOP331_ATUCR=0x%08x\n", *IOP331_ATUCR);
- DBG("\tATU: IOP331_IABAR0=0x%08x IOP331_IALR0=0x%08x IOP331_IATVR0=%08x\n", *IOP331_IABAR0, *IOP331_IALR0, *IOP331_IATVR0);
- DBG("\tATU: IOP331_ERBAR=0x%08x IOP331_ERLR=0x%08x IOP331_ERTVR=%08x\n", *IOP331_ERBAR, *IOP331_ERLR, *IOP331_ERTVR);
- DBG("\tATU: IOP331_IABAR2=0x%08x IOP331_IALR2=0x%08x IOP331_IATVR2=%08x\n", *IOP331_IABAR2, *IOP331_IALR2, *IOP331_IATVR2);
- DBG("\tATU: IOP331_IABAR3=0x%08x IOP331_IALR3=0x%08x IOP331_IATVR3=%08x\n", *IOP331_IABAR3, *IOP331_IALR3, *IOP331_IATVR3);
-
- /* redboot changed, reset IABAR0 to something sane */
- /* fixes master aborts in plugged in cards */
- /* will clean up later and work nicely with redboot */
- *IOP331_IABAR0 = 0x00000004;
+ DBG1("\tATU: IOP331_OMWTVR1=0x%04x\n", *IOP331_OMWTVR1);
+ DBG1("\tATU: IOP331_ATUCR=0x%08x\n", *IOP331_ATUCR);
+ DBG1("\tATU: IOP331_IABAR0=0x%08x IOP331_IALR0=0x%08x IOP331_IATVR0=%08x\n", *IOP331_IABAR0, *IOP331_IALR0, *IOP331_IATVR0);
+	DBG1("\tATU: IOP331_IABAR1=0x%08x IOP331_IALR1=0x%08x\n", *IOP331_IABAR1, *IOP331_IALR1);
+ DBG1("\tATU: IOP331_ERBAR=0x%08x IOP331_ERLR=0x%08x IOP331_ERTVR=%08x\n", *IOP331_ERBAR, *IOP331_ERLR, *IOP331_ERTVR);
+ DBG1("\tATU: IOP331_IABAR2=0x%08x IOP331_IALR2=0x%08x IOP331_IATVR2=%08x\n", *IOP331_IABAR2, *IOP331_IALR2, *IOP331_IATVR2);
+ DBG1("\tATU: IOP331_IABAR3=0x%08x IOP331_IALR3=0x%08x IOP331_IATVR3=%08x\n", *IOP331_IABAR3, *IOP331_IALR3, *IOP331_IATVR3);
+
hook_fault_code(16+6, iop331_pci_abort, SIGBUS, "imprecise external abort");
}
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/config.h>
+#include <linux/init.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/device.h>
{ IOP331_VIRT_MEM_BASE, IOP331_PHYS_MEM_BASE, 0x00002000, MT_DEVICE },
/* PCI IO space */
- { 0xfe000000, 0x90000000, 0x00020000, MT_DEVICE }
+ { IOP331_PCI_LOWER_IO_VA, IOP331_PCI_LOWER_IO_PA, IOP331_PCI_IO_WINDOW_SIZE, MT_DEVICE }
};
static struct uart_port iop331_serial_ports[] = {
{
- .membase = (char*)(IQ80331_UART0_VIRT),
- .mapbase = (IQ80331_UART0_PHYS),
+ .membase = (char*)(IOP331_UART0_VIRT),
+ .mapbase = (IOP331_UART0_PHYS),
.irq = IRQ_IOP331_UART0,
.flags = UPF_SKIP_TEST,
.iotype = UPIO_MEM,
.type = PORT_XSCALE,
.fifosize = 32
} , {
- .membase = (char*)(IQ80331_UART1_VIRT),
- .mapbase = (IQ80331_UART1_PHYS),
+ .membase = (char*)(IOP331_UART1_VIRT),
+ .mapbase = (IOP331_UART1_PHYS),
.irq = IRQ_IOP331_UART1,
.flags = UPF_SKIP_TEST,
.iotype = UPIO_MEM,
}
};
+static struct resource iop33x_i2c_0_resources[] = {
+ [0] = {
+ .start = 0xfffff680,
+ .end = 0xfffff698,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_IOP331_I2C_0,
+ .end = IRQ_IOP331_I2C_0,
+ .flags = IORESOURCE_IRQ
+ }
+};
+
+static struct resource iop33x_i2c_1_resources[] = {
+ [0] = {
+ .start = 0xfffff6a0,
+ .end = 0xfffff6b8,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_IOP331_I2C_1,
+ .end = IRQ_IOP331_I2C_1,
+ .flags = IORESOURCE_IRQ
+ }
+};
+
+static struct platform_device iop33x_i2c_0_controller = {
+ .name = "IOP3xx-I2C",
+ .id = 0,
+ .num_resources = 2,
+ .resource = iop33x_i2c_0_resources
+};
+
+static struct platform_device iop33x_i2c_1_controller = {
+ .name = "IOP3xx-I2C",
+ .id = 1,
+ .num_resources = 2,
+ .resource = iop33x_i2c_1_resources
+};
+
+static struct platform_device *iop33x_devices[] __initdata = {
+ &iop33x_i2c_0_controller,
+ &iop33x_i2c_1_controller
+};
+
+void __init iop33x_init(void)
+{
+	if (iop_is_331())
+		platform_add_devices(iop33x_devices, ARRAY_SIZE(iop33x_devices));
+}
+
void __init iop331_map_io(void)
{
iotable_init(iop331_std_desc, ARRAY_SIZE(iop331_std_desc));
early_serial_setup(&iop331_serial_ports[1]);
}
-#ifdef CONFIG_ARCH_IQ80331
+#ifdef CONFIG_ARCH_IOP331
extern void iop331_init_irq(void);
extern struct sys_timer iop331_timer;
+#endif
+
+#ifdef CONFIG_ARCH_IQ80331
extern void iq80331_map_io(void);
#endif
+#ifdef CONFIG_MACH_IQ80332
+extern void iq80332_map_io(void);
+#endif
+
#if defined(CONFIG_ARCH_IQ80331)
MACHINE_START(IQ80331, "Intel IQ80331")
MAINTAINER("Intel Corp.")
BOOT_MEM(PHYS_OFFSET, 0xfefff000, 0xfffff000) // virtual, physical
- //BOOT_MEM(PHYS_OFFSET, IQ80331_UART0_VIRT, IQ80331_UART0_PHYS)
+ //BOOT_MEM(PHYS_OFFSET, IOP331_UART0_VIRT, IOP331_UART0_PHYS)
MAPIO(iq80331_map_io)
INITIRQ(iop331_init_irq)
- .timer = &iop331_timer,
+ .timer = &iop331_timer,
BOOT_PARAMS(0x0100)
+ INIT_MACHINE(iop33x_init)
MACHINE_END
+
+#elif defined(CONFIG_MACH_IQ80332)
+MACHINE_START(IQ80332, "Intel IQ80332")
+ MAINTAINER("Intel Corp.")
+ BOOT_MEM(PHYS_OFFSET, 0xfefff000, 0xfffff000) // virtual, physical
+ //BOOT_MEM(PHYS_OFFSET, IOP331_UART0_VIRT, IOP331_UART0_PHYS)
+ MAPIO(iq80332_map_io)
+ INITIRQ(iop331_init_irq)
+ .timer = &iop331_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(iop33x_init)
+MACHINE_END
+
#else
#error No machine descriptor defined for this IOP3XX implementation
#endif
*
* Author: Rory Bolt <rorybolt@pacbell.net>
* Copyright (C) 2002 Rory Bolt
+ * Copyright (C) 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
memset(res, 0, sizeof(struct resource) * 2);
- res[0].start = IQ31244_PCI_IO_BASE + 0x6e000000;
- res[0].end = IQ31244_PCI_IO_BASE + IQ31244_PCI_IO_SIZE - 1 + IQ31244_PCI_IO_OFFSET;
+ res[0].start = IOP321_PCI_LOWER_IO_VA;
+ res[0].end = IOP321_PCI_UPPER_IO_VA;
res[0].name = "IQ31244 PCI I/O Space";
res[0].flags = IORESOURCE_IO;
- res[1].start = IQ31244_PCI_MEM_BASE;
- res[1].end = IQ31244_PCI_MEM_BASE + IQ31244_PCI_MEM_SIZE;
+ res[1].start = IOP321_PCI_LOWER_MEM_PA;
+ res[1].end = IOP321_PCI_UPPER_MEM_PA;
res[1].name = "IQ31244 PCI Memory Space";
res[1].flags = IORESOURCE_MEM;
request_resource(&ioport_resource, &res[0]);
request_resource(&iomem_resource, &res[1]);
+ sys->mem_offset = IOP321_PCI_MEM_OFFSET;
+ sys->io_offset = IOP321_PCI_IO_OFFSET;
+
sys->resource[0] = &res[0];
sys->resource[1] = &res[1];
sys->resource[2] = NULL;
- sys->io_offset = IQ31244_PCI_IO_OFFSET;
- sys->mem_offset = IQ80321_PCI_MEM_BASE -
- (*IOP321_IABAR1 & PCI_BASE_ADDRESS_MEM_MASK);
-
- iop3xx_pcibios_min_io = IQ31244_PCI_IO_BASE;
- iop3xx_pcibios_min_mem = IQ31244_PCI_MEM_BASE;
return 1;
}
static void iq31244_preinit(void)
{
iop321_init();
- /* setting up the second translation window */
- *IOP321_OMWTVR1 = IQ31244_PCI_MEM_BASE + 0x04000000;
- *IOP321_OUMWTVR1 = 0x0;
}
static struct hw_pci iq31244_pci __initdata = {
*
* Author: Rory Bolt <rorybolt@pacbell.net>
* Copyright (C) 2002 Rory Bolt
+ * Copyright (C) 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
memset(res, 0, sizeof(struct resource) * 2);
- res[0].start = IQ80321_PCI_IO_BASE + IQ80321_PCI_IO_OFFSET;
- res[0].end = IQ80321_PCI_IO_BASE + IQ80321_PCI_IO_SIZE - 1 + IQ80321_PCI_IO_OFFSET;
+ res[0].start = IOP321_PCI_LOWER_IO_VA;
+ res[0].end = IOP321_PCI_UPPER_IO_VA;
res[0].name = "IQ80321 PCI I/O Space";
res[0].flags = IORESOURCE_IO;
- res[1].start = IQ80321_PCI_MEM_BASE;
- res[1].end = IQ80321_PCI_MEM_BASE + IQ80321_PCI_MEM_SIZE;
+ res[1].start = IOP321_PCI_LOWER_MEM_PA;
+ res[1].end = IOP321_PCI_UPPER_MEM_PA;
res[1].name = "IQ80321 PCI Memory Space";
res[1].flags = IORESOURCE_MEM;
request_resource(&ioport_resource, &res[0]);
request_resource(&iomem_resource, &res[1]);
- /*
- * Since the IQ80321 is a slave card on a PCI backplane,
- * it uses BAR1 to reserve a portion of PCI memory space for
- * use with the private devices on the secondary bus
- * (GigE and PCI-X slot). We read BAR1 and configure
- * our outbound translation windows to target that
- * address range and assign all devices in that
- * address range. W/O this, certain BIOSes will fail
- * to boot as the IQ80321 claims addresses that are
- * in use by other devices.
- *
- * Note that the same cannot be done with I/O space,
- * so hopefully the host will stick to the lower 64K for
- * PCI I/O and leave us alone.
- */
- sys->mem_offset = IQ80321_PCI_MEM_BASE -
- (*IOP321_IABAR1 & PCI_BASE_ADDRESS_MEM_MASK);
+ sys->mem_offset = IOP321_PCI_MEM_OFFSET;
+ sys->io_offset = IOP321_PCI_IO_OFFSET;
sys->resource[0] = &res[0];
sys->resource[1] = &res[1];
sys->resource[2] = NULL;
- sys->io_offset = IQ80321_PCI_IO_OFFSET;
-
- iop3xx_pcibios_min_io = IQ80321_PCI_IO_BASE;
- iop3xx_pcibios_min_mem = IQ80321_PCI_MEM_BASE;
return 1;
}
* PCI support for the Intel IQ80331 reference board
*
* Author: Dave Jiang <dave.jiang@intel.com>
- * Copyright (C) 2003 Intel Corp.
+ * Copyright (C) 2003, 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
memset(res, 0, sizeof(struct resource) * 2);
- res[0].start = IQ80331_PCI_IO_BASE + 0x6e000000;
- res[0].end = IQ80331_PCI_IO_BASE + IQ80331_PCI_IO_SIZE - 1 + IQ80331_PCI_IO_OFFSET;
+ res[0].start = IOP331_PCI_LOWER_IO_VA;
+ res[0].end = IOP331_PCI_UPPER_IO_VA;
res[0].name = "IQ80331 PCI I/O Space";
res[0].flags = IORESOURCE_IO;
- res[1].start = IQ80331_PCI_MEM_BASE;
- res[1].end = IQ80331_PCI_MEM_BASE + IQ80331_PCI_MEM_SIZE;
+ res[1].start = IOP331_PCI_LOWER_MEM_PA;
+ res[1].end = IOP331_PCI_UPPER_MEM_PA;
res[1].name = "IQ80331 PCI Memory Space";
res[1].flags = IORESOURCE_MEM;
request_resource(&ioport_resource, &res[0]);
request_resource(&iomem_resource, &res[1]);
- /*
- * Since the IQ80331 is a slave card on a PCI backplane,
- * it uses BAR1 to reserve a portion of PCI memory space for
- * use with the private devices on the secondary bus
- * (GigE and PCI-X slot). We read BAR1 and configure
- * our outbound translation windows to target that
- * address range and assign all devices in that
- * address range. W/O this, certain BIOSes will fail
- * to boot as the IQ80331 claims addresses that are
- * in use by other devices.
- *
- * Note that the same cannot be done with I/O space,
- * so hopefully the host will stick to the lower 64K for
- * PCI I/O and leave us alone.
- */
- sys->mem_offset = IQ80331_PCI_MEM_BASE -
- (*IOP331_IABAR1 & PCI_BASE_ADDRESS_MEM_MASK);
+ sys->mem_offset = IOP331_PCI_MEM_OFFSET;
+ sys->io_offset = IOP331_PCI_IO_OFFSET;
sys->resource[0] = &res[0];
sys->resource[1] = &res[1];
sys->resource[2] = NULL;
- sys->io_offset = IQ80331_PCI_IO_OFFSET;
-
- iop3xx_pcibios_min_io = IQ80331_PCI_IO_BASE;
- iop3xx_pcibios_min_mem = IQ80331_PCI_MEM_BASE;
return 1;
}
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/iq80332-mm.c
+ *
+ * Low level memory initialization for the IQ80332 platform
+ *
+ * Author: Dave Jiang <dave.jiang@intel.com>
+ * Copyright (C) 2004 Intel Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#include <linux/mm.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+
+#include <asm/mach/map.h>
+#include <asm/mach-types.h>
+
+
+/*
+ * IQ80332 specific IO mappings
+ *
+ * We use RedBoot's setup for the onboard devices.
+ */
+
+void __init iq80332_map_io(void)
+{
+ iop331_map_io();
+}
--- /dev/null
+/*
+ * arch/arm/mach-iop3xx/iq80332-pci.c
+ *
+ * PCI support for the Intel IQ80332 reference board
+ *
+ * Author: Dave Jiang <dave.jiang@intel.com>
+ * Copyright (C) 2004 Intel Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach/pci.h>
+#include <asm/mach-types.h>
+
+/*
+ * The following macro is used to lookup irqs in a standard table
+ * format for those systems that do not already have PCI
+ * interrupts properly routed. We assume 1 <= pin <= 4
+ */
+#define PCI_IRQ_TABLE_LOOKUP(minid,maxid) \
+({ int _ctl_ = -1; \
+ unsigned int _idsel = idsel - minid; \
+ if (_idsel <= maxid) \
+ _ctl_ = pci_irq_table[_idsel][pin-1]; \
+ _ctl_; })
+
+#define INTA IRQ_IQ80332_INTA
+#define INTB IRQ_IQ80332_INTB
+#define INTC IRQ_IQ80332_INTC
+#define INTD IRQ_IQ80332_INTD
+
+//#define INTE IRQ_IQ80332_I82544
+
+static inline int __init
+iq80332_map_irq(struct pci_dev *dev, u8 idsel, u8 pin)
+{
+ static int pci_irq_table[][8] = {
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {-1, -1, -1, -1},
+ {-1, -1, -1, -1},
+ {-1, -1, -1, -1},
+ {INTA, INTB, INTC, INTD}, /* PCI-X Slot */
+ {-1, -1, -1, -1},
+ {INTC, INTC, INTC, INTC}, /* GigE */
+ {-1, -1, -1, -1},
+ {-1, -1, -1, -1},
+ };
+
+ BUG_ON(pin < 1 || pin > 4);
+
+ return PCI_IRQ_TABLE_LOOKUP(1, 7);
+}
+
+static int iq80332_setup(int nr, struct pci_sys_data *sys)
+{
+ struct resource *res;
+
+ if(nr != 0)
+ return 0;
+
+ res = kmalloc(sizeof(struct resource) * 2, GFP_KERNEL);
+ if (!res)
+ panic("PCI: unable to alloc resources");
+
+ memset(res, 0, sizeof(struct resource) * 2);
+
+ res[0].start = IOP331_PCI_LOWER_IO_VA;
+ res[0].end = IOP331_PCI_UPPER_IO_VA;
+ res[0].name = "IQ80332 PCI I/O Space";
+ res[0].flags = IORESOURCE_IO;
+
+ res[1].start = IOP331_PCI_LOWER_MEM_PA;
+ res[1].end = IOP331_PCI_UPPER_MEM_PA;
+ res[1].name = "IQ80332 PCI Memory Space";
+ res[1].flags = IORESOURCE_MEM;
+
+ request_resource(&ioport_resource, &res[0]);
+ request_resource(&iomem_resource, &res[1]);
+
+ sys->mem_offset = IOP331_PCI_MEM_OFFSET;
+ sys->io_offset = IOP331_PCI_IO_OFFSET;
+
+ sys->resource[0] = &res[0];
+ sys->resource[1] = &res[1];
+ sys->resource[2] = NULL;
+
+ return 1;
+}
+
+static void iq80332_preinit(void)
+{
+ iop331_init();
+}
+
+static struct hw_pci iq80332_pci __initdata = {
+ .swizzle = pci_std_swizzle,
+ .nr_controllers = 1,
+ .setup = iq80332_setup,
+ .scan = iop331_scan_bus,
+ .preinit = iq80332_preinit,
+ .map_irq = iq80332_map_irq
+};
+
+static int __init iq80332_pci_init(void)
+{
+ if (machine_is_iq80332())
+ pci_common_init(&iq80332_pci);
+ return 0;
+}
+
+subsys_initcall(iq80332_pci_init);
+
help
Say 'Y' here if you want your kernel to support the Radisys
ENP2611 PCI network processing card. For more information on
- this card, see Documentation/arm/ENP2611.
+ this card, see <file:Documentation/arm/ENP2611>.
config ARCH_IXDP2400
bool "Support Intel IXDP2400"
help
Say 'Y' here if you want your kernel to support the Intel
IXDP2400 reference platform. For more information on
- this platform, see Documentation/arm/IXP2000.
+ this platform, see <file:Documentation/arm/IXP2000>.
config ARCH_IXDP2800
bool "Support Intel IXDP2800"
help
Say 'Y' here if you want your kernel to support the Intel
IXDP2800 reference platform. For more information on
- this platform, see Documentation/arm/IXP2000.
+ this platform, see <file:Documentation/arm/IXP2000>.
config ARCH_IXDP2X00
bool
help
Say 'Y' here if you want your kernel to support the Intel
IXDP2401 reference platform. For more information on
- this platform, see Documentation/arm/IXP2000.
+ this platform, see <file:Documentation/arm/IXP2000>.
config ARCH_IXDP2801
bool "Support Intel IXDP2801"
help
Say 'Y' here if you want your kernel to support the Intel
IXDP2801 reference platform. For more information on
- this platform, see Documentation/arm/IXP2000.
+ this platform, see <file:Documentation/arm/IXP2000>.
config ARCH_IXDP2X01
bool
#include <asm/mach/time.h>
#include <asm/mach/irq.h>
-static spinlock_t ixp2000_slowport_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ixp2000_slowport_lock);
static unsigned long ixp2000_slowport_irq_flags;
/*************************************************************************
iotable_init(ixp2000_small_io_desc, ARRAY_SIZE(ixp2000_small_io_desc));
iotable_init(ixp2000_large_io_desc, ARRAY_SIZE(ixp2000_large_io_desc));
early_serial_setup(&ixp2000_serial_port);
+
+ /* Set slowport to 8-bit mode. */
+ ixp2000_reg_write(IXP2000_SLOWPORT_FRM, 1);
}
/*************************************************************************
*************************************************************************/
static unsigned ticks_per_jiffy;
static unsigned ticks_per_usec;
+static unsigned next_jiffy_time;
unsigned long ixp2000_gettimeoffset (void)
{
- unsigned long elapsed;
+ unsigned long offset;
- /* Get ticks since last perfect jiffy */
- elapsed = ticks_per_jiffy - *IXP2000_T1_CSR;
+ offset = next_jiffy_time - *IXP2000_T4_CSR;
- return elapsed / ticks_per_usec;
+ return offset / ticks_per_usec;
}
static int ixp2000_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
/* clear timer 1 */
ixp2000_reg_write(IXP2000_T1_CLR, 1);
- timer_tick(regs);
+ while ((next_jiffy_time - *IXP2000_T4_CSR) > ticks_per_jiffy) {
+ timer_tick(regs);
+ next_jiffy_time -= ticks_per_jiffy;
+ }
write_sequnlock(&xtime_lock);
void __init ixp2000_init_time(unsigned long tick_rate)
{
ixp2000_reg_write(IXP2000_T1_CLR, 0);
- ixp2000_reg_write(IXP2000_T2_CLR, 0);
+ ixp2000_reg_write(IXP2000_T4_CLR, 0);
ticks_per_jiffy = (tick_rate + HZ/2) / HZ;
ticks_per_usec = tick_rate / 1000000;
ixp2000_reg_write(IXP2000_T1_CLD, ticks_per_jiffy);
ixp2000_reg_write(IXP2000_T1_CTL, (1 << 7));
+ /*
+ * We use T4 as a monotonic counter to track missed jiffies
+ */
+ ixp2000_reg_write(IXP2000_T4_CLD, -1);
+ ixp2000_reg_write(IXP2000_T4_CTL, (1 << 7));
+ next_jiffy_time = 0xffffffff - ticks_per_jiffy;
+
/* register for interrupt */
setup_irq(IRQ_IXP2000_TIMER1, &ixp2000_timer_irq);
}
ixp2000_init_time(50 * 1000 * 1000);
}
-static struct enp2611_timer = {
+static struct sys_timer enp2611_timer = {
.init = enp2611_timer_init,
.offset = ixp2000_gettimeoffset,
};
{
int irq;
- if (dev->bus->number == 0x00 && PCI_SLOT(dev->devfn) == 0x01) {
+ if (dev->bus->number == 0 && PCI_SLOT(dev->devfn) == 0) {
+ /* IXP2400. */
+ irq = IRQ_IXP2000_PCIA;
+ } else if (dev->bus->number == 0 && PCI_SLOT(dev->devfn) == 1) {
/* 21555 non-transparent bridge. */
irq = IRQ_IXP2000_PCIB;
- } else if (dev->bus->number == 0x01 && PCI_SLOT(dev->devfn) == 0x00) {
+ } else if (dev->bus->number == 0 && PCI_SLOT(dev->devfn) == 4) {
+ /* PCI2050B transparent bridge. */
+ irq = -1;
+ } else if (dev->bus->number == 1 && PCI_SLOT(dev->devfn) == 0) {
/* 82559 ethernet. */
irq = IRQ_IXP2000_PCIA;
+ } else if (dev->bus->number == 1 && PCI_SLOT(dev->devfn) == 1) {
+ /* SPI-3 option board. */
+ irq = IRQ_IXP2000_PCIB;
} else {
- printk(KERN_INFO "enp2611_pci_map_irq for unknown device\n");
- irq = IRQ_IXP2000_PCI;
+ printk(KERN_ERR "enp2611_pci_map_irq() called for unknown "
+ "device PCI:%d:%d:%d\n", dev->bus->number,
+ PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
+ irq = -1;
}
- printk(KERN_INFO "Assigned IRQ %d to PCI:%d:%d:%d\n", irq,
- dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
-
return irq;
}
int __init enp2611_pci_init(void)
{
- pci_common_init(&enp2611_pci);
+ if (machine_is_enp2611())
+ pci_common_init(&enp2611_pci);
+
return 0;
}
ixp2000_init_time(((3125000 * numerator) / (denominator)) / 2);
}
-static struct timer ixdp2400_timer = {
+static struct sys_timer ixdp2400_timer = {
.init = ixdp2400_timer_init,
.offset = ixp2000_gettimeoffset,
};
* Device behind the first bridge
*/
if(dev->bus->self->devfn == IXDP2X00_P2P_DEVFN) {
- switch(PCI_SLOT(dev->devfn)) {
+ switch(dev->devfn) {
case IXDP2X00_PMC_DEVFN:
return IRQ_IXDP2800_PMC;
unsigned long dummy;
static struct slowport_cfg old_cfg;
-#ifdef CONFGI_ARCH_IXDP2400
+#ifdef CONFIG_ARCH_IXDP2400
if (machine_is_ixdp2400())
ixp2000_acquire_slowport(&slowport_cpld_cfg, &old_cfg);
#endif
{
volatile u32 temp;
+ unsigned long flags;
pci_master_aborts = 1;
- cli();
+ local_irq_save(flags);
temp = *(IXP2000_PCI_CONTROL);
if (temp & ((1 << 8) | (1 << 5))) {
ixp2000_reg_write(IXP2000_PCI_CONTROL, temp);
temp = *(IXP2000_PCI_CMDSTAT);
}
}
- sti();
+ local_irq_restore(flags);
/*
* If it was an imprecise abort, then we need to correct the
clear_master_aborts(void)
{
volatile u32 temp;
+ unsigned long flags;
- cli();
+ local_irq_save(flags);
temp = *(IXP2000_PCI_CONTROL);
if (temp & ((1 << 8) | (1 << 5))) {
ixp2000_reg_write(IXP2000_PCI_CONTROL, temp);
temp = *(IXP2000_PCI_CMDSTAT);
}
}
- sti();
+ local_irq_restore(flags);
return 0;
}
help
Say 'Y' here if you want your kernel to support the Gateworks
Avila Network Platform. For more information on this platform,
- see Documentation/arm/IXP4xx.
+ see <file:Documentation/arm/IXP4xx>.
config ARCH_ADI_COYOTE
bool "Coyote"
help
Say 'Y' here if you want your kernel to support the ADI
Engineering Coyote Gateway Reference Platform. For more
- information on this platform, see Documentation/arm/IXP4xx.
+ information on this platform, see <file:Documentation/arm/IXP4xx>.
config ARCH_IXDP425
bool "IXDP425"
help
Say 'Y' here if you want your kernel to support Intel's
IXDP425 Development Platform (Also known as Richfield).
- For more information on this platform, see Documentation/arm/IXP4xx.
+ For more information on this platform, see <file:Documentation/arm/IXP4xx>.
config MACH_IXDPG425
bool "IXDPG425"
help
Say 'Y' here if you want your kernel to support Intel's
IXDPG425 Development Platform (Also known as Montajade).
+ For more information on this platform, see <file:Documentation/arm/IXP4xx>.
+
+config MACH_IXDP465
+ bool "IXDP465"
+ help
+ Say 'Y' here if you want your kernel to support Intel's
+ IXDP465 Development Platform (Also known as BMP).
For more information on this platform, see Documentation/arm/IXP4xx.
+
#
# IXCDP1100 is the exact same HW as IXDP425, but with a different machine
# number from the bootloader due to marketing monkeys, so we just enable it
help
Say 'Y' here if you want your kernel to support the Motorola
	  PrPMC1100 Processor Mezzanine Module. For more information on
- this platform, see Documentation/arm/IXP4xx.
+ this platform, see <file:Documentation/arm/IXP4xx>.
#
# Avila and IXDP share the same source for now. Will change in future
#
config ARCH_IXDP4XX
bool
- depends on ARCH_IXDP425 || ARCH_AVILA
+ depends on ARCH_IXDP425 || ARCH_AVILA || MACH_IXDP465
+ default y
+
+#
+# Certain registers and IRQs are only enabled if supporting IXP465 CPUs
+#
+config CPU_IXP46X
+ bool
+ depends on MACH_IXDP465
default y
+config MACH_GTWX5715
+ bool "Gemtek WX5715 (Linksys WRV54G)"
+ depends on ARCH_IXP4XX
+ help
+	  This board is currently used inside the Linksys WRV54G Gateway.
+
+ IXP425 - 266mhz
+	  IXP425 - 266 MHz
+	  32 MB SDRAM
+	  8 MB Flash
+ miniPCI slot 1 has an ISL3880 802.11g card (Prism54)
+ npe0 is connected to a Kendin KS8995M Switch (4 ports)
+ npe1 is the "wan" port
+ "Console" UART is available on J11 as console
+ "High Speed" UART is n/c (as far as I can tell)
+ 20 Pin ARM/Xscale JTAG interface on J2
+
+
comment "IXP4xx Options"
config IXP4XX_INDIRECT_PCI
1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
To access PCI via this space, we simply ioremap() the BAR
into the kernel and we can use the standard read[bwl]/write[bwl]
- macros. This is the preffered method due to speed but it
+ macros. This is the preferred method due to speed but it
limits the system to just 64MB of PCI memory. This can be
	  problematic if using video cards and other memory-heavy devices.
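	  As a minimal sketch of the direct-mapped method described above
	  (hypothetical driver code, not part of this patch; it assumes only
	  the standard pci_resource_start()/ioremap()/readl() interfaces and
	  an already-enabled device):

#include <linux/pci.h>
#include <asm/io.h>

/* Read the first 32-bit register behind BAR 0 through the 64MB direct window. */
static u32 example_read_bar0(struct pci_dev *dev)
{
        void __iomem *regs;
        u32 val;

        regs = ioremap(pci_resource_start(dev, 0), pci_resource_len(dev, 0));
        if (!regs)
                return 0;

        val = readl(regs);      /* plain read[bwl]/write[bwl] work in this window */
        iounmap(regs);
        return val;
}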
obj-$(CONFIG_MACH_IXDPG425) += ixdpg425-pci.o coyote-setup.o
obj-$(CONFIG_ARCH_ADI_COYOTE) += coyote-pci.o coyote-setup.o
obj-$(CONFIG_ARCH_PRPMC1100) += prpmc1100-pci.o prpmc1100-setup.o
+obj-$(CONFIG_MACH_GTWX5715) += gtwx5715-pci.o gtwx5715-setup.o
* these transactions are atomic or we will end up
* with corrupt data on the bus or in a driver.
*/
-static spinlock_t ixp4xx_pci_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ixp4xx_pci_lock);
/*
* Read from PCI config space
asm("mrc p15, 0, %0, cr0, cr0, 0;" : "=r"(processor_id) :);
/*
- * Determine which PCI read method to use
+ * Determine which PCI read method to use.
+ * Rev 0 IXP425 requires workaround.
*/
- if (!(processor_id & 0xf)) {
- printk("PCI: IXP4xx A0 silicon detected - "
+ if (!(processor_id & 0xf) && !cpu_is_ixp46x()) {
+ printk("PCI: IXP42x A0 silicon detected - "
"PCI Non-Prefetch Workaround Enabled\n");
ixp4xx_pci_read = ixp4xx_pci_read_errata;
} else
**************************************************************************/
static void ixp4xx_irq_mask(unsigned int irq)
{
- *IXP4XX_ICMR &= ~(1 << irq);
+ if (cpu_is_ixp46x() && irq >= 32)
+ *IXP4XX_ICMR2 &= ~(1 << (irq - 32));
+ else
+ *IXP4XX_ICMR &= ~(1 << irq);
}
static void ixp4xx_irq_mask_ack(unsigned int irq)
static void ixp4xx_irq_unmask(unsigned int irq)
{
- static int irq2gpio[NR_IRQS] = {
+ static int irq2gpio[32] = {
-1, -1, -1, -1, -1, -1, 0, 1,
-1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12, -1, -1,
};
- int line = irq2gpio[irq];
+ int line = (irq < 32) ? irq2gpio[irq] : -1;
/*
* This only works for LEVEL gpio IRQs as per the IXP4xx developer's
if (line >= 0)
gpio_line_isr_clear(line);
- *IXP4XX_ICMR |= (1 << irq);
+ if (cpu_is_ixp46x() && irq >= 32)
+ *IXP4XX_ICMR2 |= (1 << (irq - 32));
+ else
+ *IXP4XX_ICMR |= (1 << irq);
}
static struct irqchip ixp4xx_irq_chip = {
/* Disable all interrupt */
*IXP4XX_ICMR = 0x0;
+ if (cpu_is_ixp46x()) {
+ /* Route upper 32 sources to IRQ instead of FIQ */
+ *IXP4XX_ICLR2 = 0x00;
+
+ /* Disable upper 32 interrupts */
+ *IXP4XX_ICMR2 = 0x00;
+ }
+
for(i = 0; i < NR_IRQS; i++)
{
set_irq_chip(i, &ixp4xx_irq_chip);
.init = ixp4xx_timer_init,
.offset = ixp4xx_gettimeoffset,
};
+
+static struct resource ixp46x_i2c_resources[] = {
+ [0] = {
+ .start = 0xc8011000,
+ .end = 0xc801101c,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_IXP4XX_I2C,
+ .end = IRQ_IXP4XX_I2C,
+ .flags = IORESOURCE_IRQ
+ }
+};
+
+/*
+ * I2C controller. The IXP46x uses the same block as the IOP3xx, so
+ * we just use the same device name.
+ */
+static struct platform_device ixp46x_i2c_controller = {
+ .name = "IOP3xx-I2C",
+ .id = 0,
+ .num_resources = 2,
+ .resource = ixp46x_i2c_resources
+};
+
+static struct platform_device *ixp46x_devices[] __initdata = {
+ &ixp46x_i2c_controller
+};
+
+void __init ixp4xx_sys_init(void)
+{
+ if (cpu_is_ixp46x()) {
+ platform_add_devices(ixp46x_devices,
+ ARRAY_SIZE(ixp46x_devices));
+ }
+}
+
*
*/
+#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/gtwx5715-setup.c
+ *
+ * Gemtek GTWX5715 (Linksys WRV54G) board setup
+ *
+ * Copyright (C) 2004 George T. Joseph
+ * Derived from Coyote
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/serial.h>
+#include <linux/tty.h>
+#include <linux/serial_8250.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+#include <asm/arch/gtwx5715.h>
+
+/*
+ * Xscale UART registers are 32 bits wide with only the least
+ * significant 8 bits having any meaning. From a configuration
+ * perspective, this means 2 things...
+ *
+ * Setting .regshift = 2 so that the standard 16550 registers
+ * line up on every 4th byte.
+ *
+ * Shifting the register start virtual address +3 bytes when
+ * compiled big-endian. Since register writes are done on a
+ * single byte basis, if the shift isn't done the driver will
+ * write the value into the most significant byte of the register,
+ * which is ignored, instead of the least significant.
+ */
+
+#ifdef __ARMEB__
+#define REG_OFFSET 3
+#else
+#define REG_OFFSET 0
+#endif
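/*
 * A minimal illustration of the addressing scheme described in the comment
 * above (an assumption about how the 8250 core dereferences these fields,
 * mirroring the omap_serial_in() pattern elsewhere in this patch; the
 * helper name is hypothetical): with .regshift = 2, 16550 register N is
 * reached at membase + (N << 2), and REG_OFFSET has already nudged membase
 * onto the least significant byte on big-endian builds.
 */
static inline unsigned int example_serial_in(struct plat_serial8250_port *up,
                                             int reg)
{
        return __raw_readb(up->membase + (reg << up->regshift));
}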
+
+/*
+ * Only the second or "console" uart is connected on the gtwx5715.
+ */
+
+static struct resource gtwx5715_uart_resources[] = {
+ {
+ .start = IXP4XX_UART2_BASE_PHYS,
+ .end = IXP4XX_UART2_BASE_PHYS + 0x0fff,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = IRQ_IXP4XX_UART2,
+ .end = IRQ_IXP4XX_UART2,
+ .flags = IORESOURCE_IRQ,
+ },
+ { },
+};
+
+
+static struct plat_serial8250_port gtwx5715_uart_platform_data[] = {
+ {
+ .mapbase = IXP4XX_UART2_BASE_PHYS,
+ .membase = (char *)IXP4XX_UART2_BASE_VIRT + REG_OFFSET,
+ .irq = IRQ_IXP4XX_UART2,
+ .flags = UPF_BOOT_AUTOCONF,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ },
+ { },
+};
+
+static struct platform_device gtwx5715_uart_device = {
+ .name = "serial8250",
+ .id = 0,
+ .dev = {
+ .platform_data = gtwx5715_uart_platform_data,
+ },
+ .num_resources = 2,
+ .resource = gtwx5715_uart_resources,
+};
+
+
+void __init gtwx5715_map_io(void)
+{
+ ixp4xx_map_io();
+}
+
+static struct flash_platform_data gtwx5715_flash_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+};
+
+static struct resource gtwx5715_flash_resource = {
+ .start = GTWX5715_FLASH_BASE,
+ .end = GTWX5715_FLASH_BASE + GTWX5715_FLASH_SIZE,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device gtwx5715_flash = {
+ .name = "IXP4XX-Flash",
+ .id = 0,
+ .dev = {
+ .platform_data = >wx5715_flash_data,
+ },
+ .num_resources = 1,
+ .resource = >wx5715_flash_resource,
+};
+
+static struct platform_device *gtwx5715_devices[] __initdata = {
+ >wx5715_uart_device,
+ >wx5715_flash,
+};
+
+static void __init gtwx5715_init(void)
+{
+ platform_add_devices(gtwx5715_devices, ARRAY_SIZE(gtwx5715_devices));
+}
+
+
+MACHINE_START(GTWX5715, "Gemtek GTWX5715 (Linksys WRV54G)")
+ MAINTAINER("George Joseph")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_UART2_BASE_PHYS,
+ IXP4XX_UART2_BASE_VIRT)
+ MAPIO(gtwx5715_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(gtwx5715_init)
+MACHINE_END
+
+
*
*/
+#include <linux/kernel.h>
#include <linux/config.h>
#include <linux/pci.h>
#include <linux/init.h>
int __init ixdp425_pci_init(void)
{
- if (machine_is_ixdp425() ||
- machine_is_ixcdp1100() ||
- machine_is_avila())
+ if (machine_is_ixdp425() || machine_is_ixcdp1100() ||
+ machine_is_avila() || machine_is_ixdp465())
pci_common_init(&ixdp425_pci);
return 0;
}
*
*/
+#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
*
*/
+#include <linux/kernel.h>
#include <linux/config.h>
#include <linux/pci.h>
#include <linux/init.h>
* Author: Deepak Saxena <dsaxena@plexity.net>
*/
+#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/serial.h>
static void __init prpmc1100_init(void)
{
- platform_add_devices(&prpmc1100_devices, ARRAY_SIZE(prpmc1100_devices));
+ ixp4xx_sys_init();
+
+ platform_add_devices(prpmc1100_devices, ARRAY_SIZE(prpmc1100_devices));
}
MACHINE_START(PRPMC1100, "Motorola PrPMC1100")
#include <asm/arch/clocks.h>
#include <asm/arch/gpio.h>
#include <asm/arch/usb.h>
-#include <asm/arch/serial.h>
#include "common.h"
+extern int omap_gpio_init(void);
+
static int __initdata h2_serial_ports[OMAP_MAX_NR_PORTS] = {1, 1, 1};
static struct resource h2_smc91x_resources[] = {
.flags = IORESOURCE_MEM,
},
[1] = {
- .start = 0, /* Really GPIO 0 */
- .end = 0,
+ .start = OMAP_GPIO_IRQ(0),
+ .end = OMAP_GPIO_IRQ(0),
.flags = IORESOURCE_IRQ,
},
};
&h2_smc91x_device,
};
+static void __init h2_init_smc91x(void)
+{
+ if ((omap_request_gpio(0)) < 0) {
+ printk("Error requesting gpio 0 for smc91x irq\n");
+ return;
+ }
+ omap_set_gpio_edge_ctrl(0, OMAP_GPIO_FALLING_EDGE);
+}
+
void h2_init_irq(void)
{
omap_init_irq();
+ omap_gpio_init();
+ h2_init_smc91x();
}
static struct omap_usb_config h2_usb_config __initdata = {
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/arch/irqs.h>
+#include <asm/arch/mux.h>
#include <asm/arch/gpio.h>
#include <asm/mach-types.h>
-#include <asm/arch/serial.h>
#include "common.h"
-void h3_init_irq(void)
-{
- omap_init_irq();
-}
+extern int omap_gpio_init(void);
static int __initdata h3_serial_ports[OMAP_MAX_NR_PORTS] = {1, 1, 1};
.flags = IORESOURCE_MEM,
},
[1] = {
- .start = 0,
- .end = 0,
+ .start = OMAP_GPIO_IRQ(40),
+ .end = OMAP_GPIO_IRQ(40),
.flags = IORESOURCE_IRQ,
},
};
(void) platform_add_devices(devices, ARRAY_SIZE(devices));
}
+static void __init h3_init_smc91x(void)
+{
+ omap_cfg_reg(W15_1710_GPIO40);
+ if (omap_request_gpio(40) < 0) {
+ printk("Error requesting gpio 40 for smc91x irq\n");
+ return;
+ }
+ omap_set_gpio_edge_ctrl(40, OMAP_GPIO_FALLING_EDGE);
+}
+
+void h3_init_irq(void)
+{
+ omap_init_irq();
+ omap_gpio_init();
+ h3_init_smc91x();
+}
+
static void __init h3_map_io(void)
{
omap_map_io();
static LIST_HEAD(clocks);
static DECLARE_MUTEX(clocks_sem);
-static spinlock_t clockfw_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(clockfw_lock);
static void propagate_rate(struct clk * clk);
/* MPU virtual clock functions */
static int select_table_rate(unsigned long rate);
#include <asm/arch/board.h>
#include <asm/arch/mux.h>
#include <asm/arch/fpga.h>
-#include <asm/arch/serial.h>
-
#include "clock.h"
_omap_map_io();
}
-static inline unsigned int omap_serial_in(struct plat_serial8250_port *up,
+static inline unsigned int omap_serial_in(struct plat_serial8250_port *up,
int offset)
{
offset <<= up->regshift;
return (unsigned int)__raw_readb(up->membase + offset);
}
-static inline void omap_serial_outp(struct plat_serial8250_port *p, int offset,
+static inline void omap_serial_outp(struct plat_serial8250_port *p, int offset,
int value)
{
offset <<= p->regshift;
/*
* Internal UARTs need to be initialized for the 8250 autoconfig to work
- * properly.
+ * properly. Note that the TX watermark initialization may not be needed
+ * once the 8250.c watermark handling code is merged.
*/
static void __init omap_serial_reset(struct plat_serial8250_port *p)
{
- omap_serial_outp(p, UART_OMAP_MDR1, 0x07); /* disable UART */
- omap_serial_outp(p, UART_OMAP_MDR1, 0x00); /* enable UART */
+ omap_serial_outp(p, UART_OMAP_MDR1, 0x07); /* disable UART */
+ omap_serial_outp(p, UART_OMAP_SCR, 0x08); /* TX watermark */
+ omap_serial_outp(p, UART_OMAP_MDR1, 0x00); /* enable UART */
if (!cpu_is_omap1510()) {
omap_serial_outp(p, UART_OMAP_SYSC, 0x01);
int __init_or_module
omap_cfg_reg(const reg_cfg_t reg_cfg)
{
- static spinlock_t mux_spin_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(mux_spin_lock);
unsigned long flags;
reg_cfg_set *cfg;
mask32 = omap_readl(ARM_SYSST);
local_fiq_enable();
local_irq_enable();
+
+#if defined(CONFIG_OMAP_32K_TIMER) && defined(CONFIG_NO_IDLE_HZ)
+ /* Override timer to use VST for the next cycle */
+ omap_32k_timer_next_vst_interrupt();
+#endif
+
if ((mask32 & DSP_IDLE) == 0) {
__asm__ volatile ("mcr p15, 0, r0, c7, c0, 4");
} else {
*/
//#include <asm/arch/hardware.h>
-static int omap_pm_prepare(u32 state)
+static int omap_pm_prepare(suspend_state_t state)
{
int error = 0;
*
*/
-static int omap_pm_enter(u32 state)
+static int omap_pm_enter(suspend_state_t state)
{
switch (state)
{
* failed).
*/
-static int omap_pm_finish(u32 state)
+static int omap_pm_finish(suspend_state_t state)
{
return 0;
}
--- /dev/null
+/*
+ * Support for Sharp SL-C7xx PDAs
+ * Models: SL-C700 (Corgi), SL-C750 (Shepherd), SL-C760 (Husky)
+ *
+ * Copyright (c) 2004-2005 Richard Purdie
+ *
+ * Based on Sharp's 2.4 kernel patches/lubbock.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/major.h>
+#include <linux/fs.h>
+#include <linux/interrupt.h>
+#include <linux/mmc/host.h>
+
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/mach-types.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/irq.h>
+#include <asm/arch/mmc.h>
+#include <asm/arch/udc.h>
+#include <asm/arch/corgi.h>
+
+#include <asm/hardware/scoop.h>
+#include <video/w100fb.h>
+
+#include "generic.h"
+
+
+/*
+ * Corgi SCOOP Device
+ */
+static struct resource corgi_scoop_resources[] = {
+ [0] = {
+ .start = 0x10800000,
+ .end = 0x10800fff,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct scoop_config corgi_scoop_setup = {
+ .io_dir = CORGI_SCOOP_IO_DIR,
+ .io_out = CORGI_SCOOP_IO_OUT,
+};
+
+static struct platform_device corgiscoop_device = {
+ .name = "sharp-scoop",
+ .id = -1,
+ .dev = {
+ .platform_data = &corgi_scoop_setup,
+ },
+ .num_resources = ARRAY_SIZE(corgi_scoop_resources),
+ .resource = corgi_scoop_resources,
+};
+
+
+/*
+ * Corgi SSP Device
+ *
+ * Set the parent as the scoop device because a lot of SSP devices
+ * also use scoop functions and this makes the power up/down order
+ * work correctly.
+ */
+static struct platform_device corgissp_device = {
+ .name = "corgi-ssp",
+ .dev = {
+ .parent = &corgiscoop_device.dev,
+ },
+ .id = -1,
+};
+
+
+/*
+ * Corgi w100 Frame Buffer Device
+ */
+static struct w100fb_mach_info corgi_fb_info = {
+ .w100fb_ssp_send = corgi_ssp_lcdtg_send,
+ .comadj = -1,
+ .phadadj = -1,
+};
+
+static struct resource corgi_fb_resources[] = {
+ [0] = {
+ .start = 0x08000000,
+ .end = 0x08ffffff,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct platform_device corgifb_device = {
+ .name = "w100fb",
+ .id = -1,
+ .dev = {
+ .platform_data = &corgi_fb_info,
+ .parent = &corgissp_device.dev,
+ },
+ .num_resources = ARRAY_SIZE(corgi_fb_resources),
+ .resource = corgi_fb_resources,
+};
+
+
+/*
+ * Corgi Backlight Device
+ */
+static struct platform_device corgibl_device = {
+ .name = "corgi-bl",
+ .dev = {
+ .parent = &corgifb_device.dev,
+ },
+ .id = -1,
+};
+
+
+/*
+ * MMC/SD Device
+ *
+ * The card detect interrupt isn't debounced so we delay it by HZ/4
+ * to give the card a chance to fully insert/eject.
+ */
+static struct mmc_detect {
+ struct timer_list detect_timer;
+ void *devid;
+} mmc_detect;
+
+static void mmc_detect_callback(unsigned long data)
+{
+ mmc_detect_change(mmc_detect.devid);
+}
+
+static irqreturn_t corgi_mmc_detect_int(int irq, void *devid, struct pt_regs *regs)
+{
+ mmc_detect.devid=devid;
+ mod_timer(&mmc_detect.detect_timer, jiffies + HZ/4);
+ return IRQ_HANDLED;
+}
+
+static int corgi_mci_init(struct device *dev, irqreturn_t (*unused_detect_int)(int, void *, struct pt_regs *), void *data)
+{
+ int err;
+
+ /* setup GPIO for PXA25x MMC controller */
+ pxa_gpio_mode(GPIO6_MMCCLK_MD);
+ pxa_gpio_mode(GPIO8_MMCCS0_MD);
+ pxa_gpio_mode(CORGI_GPIO_nSD_DETECT | GPIO_IN);
+ pxa_gpio_mode(CORGI_GPIO_SD_PWR | GPIO_OUT);
+
+ init_timer(&mmc_detect.detect_timer);
+ mmc_detect.detect_timer.function = mmc_detect_callback;
+ mmc_detect.detect_timer.data = (unsigned long) &mmc_detect;
+
+ err = request_irq(CORGI_IRQ_GPIO_nSD_DETECT, corgi_mmc_detect_int, SA_INTERRUPT,
+ "MMC card detect", data);
+ if (err) {
+ printk(KERN_ERR "corgi_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
+ return -1;
+ }
+
+ set_irq_type(CORGI_IRQ_GPIO_nSD_DETECT, IRQT_BOTHEDGE);
+
+ return 0;
+}
+
+static void corgi_mci_setpower(struct device *dev, unsigned int vdd)
+{
+ struct pxamci_platform_data* p_d = dev->platform_data;
+
+ if (( 1 << vdd) & p_d->ocr_mask) {
+ printk(KERN_DEBUG "%s: on\n", __FUNCTION__);
+ GPSR1 = GPIO_bit(CORGI_GPIO_SD_PWR);
+ } else {
+ printk(KERN_DEBUG "%s: off\n", __FUNCTION__);
+ GPCR1 = GPIO_bit(CORGI_GPIO_SD_PWR);
+ }
+}
+
+static void corgi_mci_exit(struct device *dev, void *data)
+{
+ free_irq(CORGI_IRQ_GPIO_nSD_DETECT, data);
+ del_timer(&mmc_detect.detect_timer);
+}
+
+static struct pxamci_platform_data corgi_mci_platform_data = {
+ .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
+ .init = corgi_mci_init,
+ .setpower = corgi_mci_setpower,
+ .exit = corgi_mci_exit,
+};
+
+
+/*
+ * USB Device Controller
+ */
+static void corgi_udc_command(int cmd)
+{
+ switch(cmd) {
+ case PXA2XX_UDC_CMD_CONNECT:
+ GPSR(CORGI_GPIO_USB_PULLUP) = GPIO_bit(CORGI_GPIO_USB_PULLUP);
+ break;
+ case PXA2XX_UDC_CMD_DISCONNECT:
+ GPCR(CORGI_GPIO_USB_PULLUP) = GPIO_bit(CORGI_GPIO_USB_PULLUP);
+ break;
+ }
+}
+
+static struct pxa2xx_udc_mach_info udc_info __initdata = {
+ /* no connect GPIO; corgi can't tell connection status */
+ .udc_command = corgi_udc_command,
+};
+
+
+static struct platform_device *devices[] __initdata = {
+ &corgiscoop_device,
+ &corgissp_device,
+ &corgifb_device,
+ &corgibl_device,
+};
+
+static struct sharpsl_flash_param_info sharpsl_flash_param;
+
+static void corgi_get_param(void)
+{
+ sharpsl_flash_param.comadj_keyword = readl(FLASH_MEM_BASE + FLASH_COMADJ_MAGIC_ADR);
+ sharpsl_flash_param.comadj = readl(FLASH_MEM_BASE + FLASH_COMADJ_DATA_ADR);
+
+ sharpsl_flash_param.phad_keyword = readl(FLASH_MEM_BASE + FLASH_PHAD_MAGIC_ADR);
+ sharpsl_flash_param.phadadj = readl(FLASH_MEM_BASE + FLASH_PHAD_DATA_ADR);
+}
+
+static void __init corgi_init(void)
+{
+ if (sharpsl_flash_param.comadj_keyword == FLASH_COMADJ_MAJIC)
+ corgi_fb_info.comadj=sharpsl_flash_param.comadj;
+ else
+ corgi_fb_info.comadj=-1;
+
+ if (sharpsl_flash_param.phad_keyword == FLASH_PHAD_MAJIC)
+ corgi_fb_info.phadadj=sharpsl_flash_param.phadadj;
+ else
+ corgi_fb_info.phadadj=-1;
+
+ pxa_gpio_mode(CORGI_GPIO_USB_PULLUP | GPIO_OUT);
+ pxa_set_udc_info(&udc_info);
+ pxa_set_mci_info(&corgi_mci_platform_data);
+
+ platform_add_devices(devices, ARRAY_SIZE(devices));
+}
+
+static void __init fixup_corgi(struct machine_desc *desc,
+ struct tag *tags, char **cmdline, struct meminfo *mi)
+{
+ corgi_get_param();
+ mi->nr_banks=1;
+ mi->bank[0].start = 0xa0000000;
+ mi->bank[0].node = 0;
+ if (machine_is_corgi())
+ mi->bank[0].size = (32*1024*1024);
+ else
+ mi->bank[0].size = (64*1024*1024);
+}
+
+static void __init corgi_init_irq(void)
+{
+ pxa_init_irq();
+}
+
+static struct map_desc corgi_io_desc[] __initdata = {
+/* virtual physical length */
+/* { 0xf1000000, 0x08000000, 0x01000000, MT_DEVICE },*/ /* LCDC (readable for Qt driver) */
+/* { 0xef700000, 0x10800000, 0x00001000, MT_DEVICE },*/ /* SCOOP */
+ { 0xef800000, 0x00000000, 0x00800000, MT_DEVICE }, /* Boot Flash */
+};
+
+static void __init corgi_map_io(void)
+{
+ pxa_map_io();
+ iotable_init(corgi_io_desc,ARRAY_SIZE(corgi_io_desc));
+
+ /* setup sleep mode values */
+ PWER = 0x00000002;
+ PFER = 0x00000000;
+ PRER = 0x00000002;
+ PGSR0 = 0x0158C000;
+ PGSR1 = 0x00FF0080;
+ PGSR2 = 0x0001C004;
+ /* Stop 3.6MHz and drive HIGH to PCMCIA and CS */
+ PCFR |= PCFR_OPDE;
+}
+
+#ifdef CONFIG_MACH_CORGI
+MACHINE_START(CORGI, "SHARP Corgi")
+ BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
+ FIXUP(fixup_corgi)
+ MAPIO(corgi_map_io)
+ INITIRQ(corgi_init_irq)
+ .init_machine = corgi_init,
+ .timer = &pxa_timer,
+MACHINE_END
+#endif
+
+#ifdef CONFIG_MACH_SHEPHERD
+MACHINE_START(SHEPHERD, "SHARP Shepherd")
+ BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
+ FIXUP(fixup_corgi)
+ MAPIO(corgi_map_io)
+ INITIRQ(corgi_init_irq)
+ .init_machine = corgi_init,
+ .timer = &pxa_timer,
+MACHINE_END
+#endif
+
+#ifdef CONFIG_MACH_HUSKY
+MACHINE_START(HUSKY, "SHARP Husky")
+ BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
+ FIXUP(fixup_corgi)
+ MAPIO(corgi_map_io)
+ INITIRQ(corgi_init_irq)
+ .init_machine = corgi_init,
+ .timer = &pxa_timer,
+MACHINE_END
+#endif
+
--- /dev/null
+/*
+ * SSP control code for Sharp Corgi devices
+ *
+ * Copyright (c) 2004 Richard Purdie
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <asm/hardware.h>
+
+#include <asm/arch/ssp.h>
+#include <asm/arch/corgi.h>
+#include <asm/arch/pxa-regs.h>
+
+static spinlock_t corgi_ssp_lock = SPIN_LOCK_UNLOCKED;
+static struct ssp_dev corgi_ssp_dev;
+static struct ssp_state corgi_ssp_state;
+
+/*
+ * There are three devices connected to the SSP interface:
+ * 1. A touchscreen controller (TI ADS7846 compatible)
+ * 2. An LCD controller (with some backlight functionality)
+ * 3. A battery monitoring IC (Maxim MAX1111)
+ *
+ * Each device uses a different speed/mode of communication.
+ *
+ * The touchscreen is very sensitive and the most frequently used
+ * so the port is left configured for this.
+ *
+ * Devices are selected using Chip Selects on GPIOs.
+ */
+
+/*
+ * ADS7846 Routines
+ */
+unsigned long corgi_ssp_ads7846_putget(ulong data)
+{
+ unsigned long ret,flag;
+
+ spin_lock_irqsave(&corgi_ssp_lock, flag);
+ GPCR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS);
+
+ ssp_write_word(&corgi_ssp_dev,data);
+ ret = ssp_read_word(&corgi_ssp_dev);
+
+ GPSR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS);
+ spin_unlock_irqrestore(&corgi_ssp_lock, flag);
+
+ return ret;
+}
+
+/*
+ * NOTE: These functions should always be called in interrupt context
+ * and use the _lock and _unlock functions. They are very time sensitive.
+ */
+void corgi_ssp_ads7846_lock(void)
+{
+ spin_lock(&corgi_ssp_lock);
+ GPCR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS);
+}
+
+void corgi_ssp_ads7846_unlock(void)
+{
+ GPSR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS);
+ spin_unlock(&corgi_ssp_lock);
+}
+
+void corgi_ssp_ads7846_put(ulong data)
+{
+ ssp_write_word(&corgi_ssp_dev,data);
+}
+
+unsigned long corgi_ssp_ads7846_get(void)
+{
+ return ssp_read_word(&corgi_ssp_dev);
+}
+
+EXPORT_SYMBOL(corgi_ssp_ads7846_putget);
+EXPORT_SYMBOL(corgi_ssp_ads7846_lock);
+EXPORT_SYMBOL(corgi_ssp_ads7846_unlock);
+EXPORT_SYMBOL(corgi_ssp_ads7846_put);
+EXPORT_SYMBOL(corgi_ssp_ads7846_get);
+
+
+/*
+ * LCD/Backlight Routines
+ */
+unsigned long corgi_ssp_dac_put(ulong data)
+{
+ unsigned long flag;
+
+ spin_lock_irqsave(&corgi_ssp_lock, flag);
+ GPCR0 = GPIO_bit(CORGI_GPIO_LCDCON_CS);
+
+ ssp_disable(&corgi_ssp_dev);
+ ssp_config(&corgi_ssp_dev, (SSCR0_Motorola | (SSCR0_DSS & 0x07 )), SSCR1_SPH, 0, SSCR0_SerClkDiv(76));
+ ssp_enable(&corgi_ssp_dev);
+
+ ssp_write_word(&corgi_ssp_dev,data);
+ /* Read null data back from device to prevent SSP overflow */
+ ssp_read_word(&corgi_ssp_dev);
+
+ ssp_disable(&corgi_ssp_dev);
+ ssp_config(&corgi_ssp_dev, (SSCR0_National | (SSCR0_DSS & 0x0b )), 0, 0, SSCR0_SerClkDiv(2));
+ ssp_enable(&corgi_ssp_dev);
+ GPSR0 = GPIO_bit(CORGI_GPIO_LCDCON_CS);
+ spin_unlock_irqrestore(&corgi_ssp_lock, flag);
+
+ return 0;
+}
+
+void corgi_ssp_lcdtg_send(u8 adrs, u8 data)
+{
+ corgi_ssp_dac_put(((adrs & 0x07) << 5) | (data & 0x1f));
+}
+
+void corgi_ssp_blduty_set(int duty)
+{
+ corgi_ssp_lcdtg_send(0x02,duty);
+}
+
+EXPORT_SYMBOL(corgi_ssp_lcdtg_send);
+EXPORT_SYMBOL(corgi_ssp_blduty_set);
+
+/*
+ * Max1111 Routines
+ */
+int corgi_ssp_max1111_get(ulong data)
+{
+ unsigned long flag;
+ int voltage,voltage1,voltage2;
+
+ spin_lock_irqsave(&corgi_ssp_lock, flag);
+ GPCR0 = GPIO_bit(CORGI_GPIO_MAX1111_CS);
+ ssp_disable(&corgi_ssp_dev);
+ ssp_config(&corgi_ssp_dev, (SSCR0_Motorola | (SSCR0_DSS & 0x07 )), 0, 0, SSCR0_SerClkDiv(8));
+ ssp_enable(&corgi_ssp_dev);
+
+ udelay(1);
+
+ /* TB1/RB1 */
+ ssp_write_word(&corgi_ssp_dev,data);
+ ssp_read_word(&corgi_ssp_dev); /* null read */
+
+ /* TB12/RB2 */
+ ssp_write_word(&corgi_ssp_dev,0);
+ voltage1=ssp_read_word(&corgi_ssp_dev);
+
+ /* TB13/RB3*/
+ ssp_write_word(&corgi_ssp_dev,0);
+ voltage2=ssp_read_word(&corgi_ssp_dev);
+
+ ssp_disable(&corgi_ssp_dev);
+ ssp_config(&corgi_ssp_dev, (SSCR0_National | (SSCR0_DSS & 0x0b )), 0, 0, SSCR0_SerClkDiv(2));
+ ssp_enable(&corgi_ssp_dev);
+ GPSR0 = GPIO_bit(CORGI_GPIO_MAX1111_CS);
+ spin_unlock_irqrestore(&corgi_ssp_lock, flag);
+
+ if (voltage1 & 0xc0 || voltage2 & 0x3f)
+ voltage = -1;
+ else
+ voltage = ((voltage1 << 2) & 0xfc) | ((voltage2 >> 6) & 0x03);
+
+ return voltage;
+}
+
+EXPORT_SYMBOL(corgi_ssp_max1111_get);
+
+/*
+ * Support Routines
+ */
+int __init corgi_ssp_probe(struct device *dev)
+{
+ int ret;
+
+ /* Chip Select - Disable All */
+ GPDR0 |= GPIO_bit(CORGI_GPIO_LCDCON_CS); /* output */
+ GPSR0 = GPIO_bit(CORGI_GPIO_LCDCON_CS); /* High - Disable LCD Control/Timing Gen */
+ GPDR0 |= GPIO_bit(CORGI_GPIO_MAX1111_CS); /* output */
+ GPSR0 = GPIO_bit(CORGI_GPIO_MAX1111_CS); /* High - Disable MAX1111*/
+ GPDR0 |= GPIO_bit(CORGI_GPIO_ADS7846_CS); /* output */
+ GPSR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS); /* High - Disable ADS7846*/
+
+ ret=ssp_init(&corgi_ssp_dev,1);
+
+ if (ret)
+ printk(KERN_ERR "Unable to register SSP handler!\n");
+ else {
+ ssp_disable(&corgi_ssp_dev);
+ ssp_config(&corgi_ssp_dev, (SSCR0_National | (SSCR0_DSS & 0x0b )), 0, 0, SSCR0_SerClkDiv(2));
+ ssp_enable(&corgi_ssp_dev);
+ }
+
+ return ret;
+}
+
+static int corgi_ssp_remove(struct device *dev)
+{
+ ssp_exit(&corgi_ssp_dev);
+ return 0;
+}
+
+static int corgi_ssp_suspend(struct device *dev, u32 state, u32 level)
+{
+ if (level == SUSPEND_POWER_DOWN) {
+ ssp_flush(&corgi_ssp_dev);
+ ssp_save_state(&corgi_ssp_dev,&corgi_ssp_state);
+ }
+ return 0;
+}
+
+static int corgi_ssp_resume(struct device *dev, u32 level)
+{
+ if (level == RESUME_POWER_ON) {
+ GPSR0 = GPIO_bit(CORGI_GPIO_LCDCON_CS); /* High - Disable LCD Control/Timing Gen */
+ GPSR0 = GPIO_bit(CORGI_GPIO_MAX1111_CS); /* High - Disable MAX1111*/
+ GPSR0 = GPIO_bit(CORGI_GPIO_ADS7846_CS); /* High - Disable ADS7846*/
+ ssp_restore_state(&corgi_ssp_dev,&corgi_ssp_state);
+ ssp_enable(&corgi_ssp_dev);
+ }
+ return 0;
+}
+
+static struct device_driver corgissp_driver = {
+ .name = "corgi-ssp",
+ .bus = &platform_bus_type,
+ .probe = corgi_ssp_probe,
+ .remove = corgi_ssp_remove,
+ .suspend = corgi_ssp_suspend,
+ .resume = corgi_ssp_resume,
+};
+
+int __init corgi_ssp_init(void)
+{
+ return driver_register(&corgissp_driver);
+}
+
+arch_initcall(corgi_ssp_init);
#include <asm/arch/pxa-regs.h>
#include <asm/arch/mainstone.h>
+#include <asm/arch/audio.h>
#include <asm/arch/pxafb.h>
#include <asm/arch/mmc.h>
.resource = smc91x_resources,
};
+static int mst_audio_startup(snd_pcm_substream_t *substream, void *priv)
+{
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ MST_MSCWR2 &= ~MST_MSCWR2_AC97_SPKROFF;
+ return 0;
+}
+
+static void mst_audio_shutdown(snd_pcm_substream_t *substream, void *priv)
+{
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ MST_MSCWR2 |= MST_MSCWR2_AC97_SPKROFF;
+}
+
+static long mst_audio_suspend_mask;
+
+static void mst_audio_suspend(void *priv)
+{
+ mst_audio_suspend_mask = MST_MSCWR2;
+ MST_MSCWR2 |= MST_MSCWR2_AC97_SPKROFF;
+}
+
+static void mst_audio_resume(void *priv)
+{
+ MST_MSCWR2 &= mst_audio_suspend_mask | ~MST_MSCWR2_AC97_SPKROFF;
+}
+
+static pxa2xx_audio_ops_t mst_audio_ops = {
+ .startup = mst_audio_startup,
+ .shutdown = mst_audio_shutdown,
+ .suspend = mst_audio_suspend,
+ .resume = mst_audio_resume,
+};
+
+static struct platform_device mst_audio_device = {
+ .name = "pxa2xx-ac97",
+ .id = -1,
+ .dev = { .platform_data = &mst_audio_ops },
+};
static void mainstone_backlight_power(int on)
{
static void __init mainstone_init(void)
{
+ /*
+ * On Mainstone, we route AC97_SYSCLK via GPIO45 to
+ * the audio daughter card
+ */
+ pxa_gpio_mode(GPIO45_SYSCLK_AC97_MD);
+
platform_device_register(&smc91x_device);
+ platform_device_register(&mst_audio_device);
/* reading Mainstone's "Virtual Configuration Register"
might be handy to select LCD type here */
*
* Revision history:
* 22nd Aug 2003 Initial version.
- *
+ * 20th Dec 2004 Added ssp_config for changing port config without
+ * closing the port.
*/
#include <linux/module.h>
#include <asm/arch/ssp.h>
#include <asm/arch/pxa-regs.h>
+#define PXA_SSP_PORTS 3
+
+static DECLARE_MUTEX(sem);
+static int use_count[PXA_SSP_PORTS] = {0, 0, 0};
+
static irqreturn_t ssp_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
struct ssp_dev *dev = (struct ssp_dev*) dev_id;
SSCR0_P(dev->port) = ssp->cr0;
}
+/**
+ * ssp_config - configure SSP port settings
+ * @dev: SSP device to configure
+ * @mode: port operating mode
+ * @flags: port config flags
+ * @psp_flags: port PSP config flags
+ * @speed: port speed
+ *
+ * Port MUST be disabled by ssp_disable before making any config changes.
+ */
+int ssp_config(struct ssp_dev *dev, u32 mode, u32 flags, u32 psp_flags, u32 speed)
+{
+ dev->mode = mode;
+ dev->flags = flags;
+ dev->psp_flags = psp_flags;
+ dev->speed = speed;
+
+ /* set up port type, speed, port settings */
+ SSCR0_P(dev->port) = (dev->speed | dev->mode);
+ SSCR1_P(dev->port) = dev->flags;
+ SSPSP_P(dev->port) = dev->psp_flags;
+
+ return 0;
+}
+
/**
* ssp_init - setup the SSP port
*
* %-EBUSY if the resources are already in use
* %0 on success
*/
-int ssp_init(struct ssp_dev *dev, u32 port, u32 mode, u32 flags, u32 psp_flags,
- u32 speed)
+int ssp_init(struct ssp_dev *dev, u32 port)
{
int ret, irq;
+ if (port > PXA_SSP_PORTS || port == 0)
+ return -ENODEV;
+
+ down(&sem);
+ if (use_count[port - 1]) {
+ up(&sem);
+ return -EBUSY;
+ }
+ use_count[port - 1]++;
+
if (!request_mem_region(__PREG(SSCR0_P(port)), 0x2c, "SSP")) {
+ use_count[port - 1]--;
+ up(&sem);
return -EBUSY;
}
}
dev->port = port;
- dev->mode = mode;
- dev->flags = flags;
- dev->psp_flags = psp_flags;
- dev->speed = speed;
-
- /* set up port type, speed, port settings */
- SSCR0_P(dev->port) = (dev->speed | dev->mode);
- SSCR1_P(dev->port) = dev->flags;
- SSPSP_P(dev->port) = dev->psp_flags;
ret = request_irq(irq, ssp_interrupt, 0, "SSP", dev);
if (ret)
#endif
}
+ up(&sem);
return 0;
out_region:
- release_mem_region(__PREG(SSCR0_P(dev->port)), 0x2c);
+ release_mem_region(__PREG(SSCR0_P(port)), 0x2c);
+ use_count[port - 1]--;
+ up(&sem);
return ret;
}
{
int irq;
+ down(&sem);
SSCR0_P(dev->port) &= ~SSCR0_SSE;
/* find irq, save power and turn off SSP port clock */
free_irq(irq, dev);
release_mem_region(__PREG(SSCR0_P(dev->port)), 0x2c);
+ use_count[dev->port - 1]--;
+ up(&sem);
}
EXPORT_SYMBOL(ssp_write_word);
EXPORT_SYMBOL(ssp_restore_state);
EXPORT_SYMBOL(ssp_init);
EXPORT_SYMBOL(ssp_exit);
+EXPORT_SYMBOL(ssp_config);
+
+MODULE_DESCRIPTION("PXA SSP driver");
+MODULE_AUTHOR("Liam Girdwood");
+MODULE_LICENSE("GPL");
+
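The reworked interface above splits claiming the port (ssp_init) from setting
its operating parameters (ssp_config), and the kerneldoc requires the port to
be disabled while it is reconfigured.  The sketch below shows one way a caller
might use it; it is not part of the patch, the names my_ssp_setup and
my_ssp_teardown are hypothetical, and the SSCR0 values are illustrative only
(they mirror the ones used in corgi_ssp.c above).

    #include <asm/arch/ssp.h>

    static struct ssp_dev my_ssp;

    /* Hypothetical caller: claim SSP port 1 and configure it for Motorola
     * (SPI) frames, 8-bit data, at an illustrative clock divider. */
    static int my_ssp_setup(void)
    {
            int ret;

            ret = ssp_init(&my_ssp, 1);     /* claim the port, IRQ and region */
            if (ret)
                    return ret;

            ssp_disable(&my_ssp);           /* port must be off while reconfiguring */
            ssp_config(&my_ssp, SSCR0_Motorola | (SSCR0_DSS & 0x07),
                       0, 0, SSCR0_SerClkDiv(8));
            ssp_enable(&my_ssp);

            ssp_write_word(&my_ssp, 0x55);  /* send a word ... */
            return ssp_read_word(&my_ssp);  /* ... and collect the reply */
    }

    static void my_ssp_teardown(void)
    {
            ssp_exit(&my_ssp);              /* drops the IRQ, region and use count */
    }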
/* old functions */
-void inline s3c2410_clk_enable(unsigned int clocks, unsigned int enable)
+void inline s3c24xx_clk_enable(unsigned int clocks, unsigned int enable)
{
unsigned long clkcon;
unsigned long flags;
return 0;
}
-int s3c2410_clkcon_enable(struct clk *clk, int enable)
+int s3c24xx_clkcon_enable(struct clk *clk, int enable)
{
- s3c2410_clk_enable(clk->ctrlbit, enable);
+ s3c24xx_clk_enable(clk->ctrlbit, enable);
return 0;
}
{ .name = "nand",
.id = -1,
.parent = &clk_h,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_NAND
},
{ .name = "lcd",
.id = -1,
.parent = &clk_h,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_LCDC
},
{ .name = "usb-host",
.id = -1,
.parent = &clk_h,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_USBH
},
{ .name = "usb-device",
.id = -1,
.parent = &clk_h,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_USBD
},
{ .name = "timers",
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_PWMT
},
{ .name = "sdi",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_SDI
},
{ .name = "uart",
.id = 0,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_UART0
},
{ .name = "uart",
.id = 1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_UART1
},
{ .name = "uart",
.id = 2,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_UART2
},
{ .name = "gpio",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_GPIO
},
{ .name = "rtc",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_RTC
},
{ .name = "adc",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_ADC
},
{ .name = "i2c",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_IIC
},
{ .name = "iis",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_IIS
},
{ .name = "spi",
.id = -1,
.parent = &clk_p,
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2410_CLKCON_SPI
},
{ .name = "watchdog",
/* initialise the clock system */
-int s3c2410_register_clock(struct clk *clk)
+int s3c24xx_register_clock(struct clk *clk)
{
clk->owner = THIS_MODULE;
atomic_set(&clk->used, 0);
/* initialise all the clocks */
-int __init s3c2410_init_clocks(void)
+int __init s3c24xx_setup_clocks(void)
{
struct clk *clkp = init_clocks;
int ptr;
* and of course, this looks neater
*/
- s3c2410_clk_enable(S3C2410_CLKCON_NAND, 0);
- s3c2410_clk_enable(S3C2410_CLKCON_USBH, 0);
- s3c2410_clk_enable(S3C2410_CLKCON_USBD, 0);
- s3c2410_clk_enable(S3C2410_CLKCON_ADC, 0);
- s3c2410_clk_enable(S3C2410_CLKCON_IIC, 0);
- s3c2410_clk_enable(S3C2410_CLKCON_SPI, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_NAND, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_USBH, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_USBD, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_ADC, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_IIC, 0);
+ s3c24xx_clk_enable(S3C2410_CLKCON_SPI, 0);
/* assume uart clocks are correctly setup */
/* register our clocks */
- if (s3c2410_register_clock(&clk_f) < 0)
+ if (s3c24xx_register_clock(&clk_f) < 0)
printk(KERN_ERR "failed to register cpu fclk\n");
- if (s3c2410_register_clock(&clk_h) < 0)
+ if (s3c24xx_register_clock(&clk_h) < 0)
printk(KERN_ERR "failed to register cpu hclk\n");
- if (s3c2410_register_clock(&clk_p) < 0)
+ if (s3c24xx_register_clock(&clk_p) < 0)
printk(KERN_ERR "failed to register cpu pclk\n");
for (ptr = 0; ptr < ARRAY_SIZE(init_clocks); ptr++, clkp++) {
- ret = s3c2410_register_clock(clkp);
+ ret = s3c24xx_register_clock(clkp);
if (ret < 0) {
printk(KERN_ERR "Failed to register clock %s (%d)\n",
clkp->name, ret);
* Please DO NOT use these outside of arch/arm/mach-s3c2410
*/
-extern int s3c2410_clkcon_enable(struct clk *clk, int enable);
-extern int s3c2410_register_clock(struct clk *clk);
-extern int s3c2410_init_clocks(void);
+extern int s3c24xx_clkcon_enable(struct clk *clk, int enable);
+extern int s3c24xx_register_clock(struct clk *clk);
+extern int s3c24xx_setup_clocks(void);
unsigned long idcode;
unsigned long idmask;
void (*map_io)(struct map_desc *mach_desc, int size);
+ void (*init_uarts)(struct s3c2410_uartcfg *cfg, int no);
+ void (*init_clocks)(int xtal);
int (*init)(void);
const char *name;
};
static struct cpu_table cpu_ids[] __initdata = {
{
- .idcode = 0x32410000,
- .idmask = 0xffffffff,
- .map_io = s3c2410_map_io,
- .init = s3c2410_init,
- .name = name_s3c2410
+ .idcode = 0x32410000,
+ .idmask = 0xffffffff,
+ .map_io = s3c2410_map_io,
+ .init_clocks = s3c2410_init_clocks,
+ .init_uarts = s3c2410_init_uarts,
+ .init = s3c2410_init,
+ .name = name_s3c2410
},
{
- .idcode = 0x32410002,
- .idmask = 0xffffffff,
- .map_io = s3c2410_map_io,
- .init = s3c2410_init,
- .name = name_s3c2410a
+ .idcode = 0x32410002,
+ .idmask = 0xffffffff,
+ .map_io = s3c2410_map_io,
+ .init_clocks = s3c2410_init_clocks,
+ .init_uarts = s3c2410_init_uarts,
+ .init = s3c2410_init,
+ .name = name_s3c2410a
},
{
- .idcode = 0x32440000,
- .idmask = 0xffffffff,
- .map_io = s3c2440_map_io,
- .init = s3c2440_init,
- .name = name_s3c2440
+ .idcode = 0x32440000,
+ .idmask = 0xffffffff,
+ .map_io = s3c2440_map_io,
+ .init_clocks = s3c2440_init_clocks,
+ .init_uarts = s3c2440_init_uarts,
+ .init = s3c2440_init,
+ .name = name_s3c2440
},
{
- .idcode = 0x32440001,
- .idmask = 0xffffffff,
- .map_io = s3c2440_map_io,
- .init = s3c2440_init,
- .name = name_s3c2440a
+ .idcode = 0x32440001,
+ .idmask = 0xffffffff,
+ .map_io = s3c2440_map_io,
+ .init_clocks = s3c2440_init_clocks,
+ .init_uarts = s3c2440_init_uarts,
+ .init = s3c2440_init,
+ .name = name_s3c2440a
}
};
struct clk **ptr = b->clocks;
for (i = b->clocks_count; i > 0; i--, ptr++)
- s3c2410_register_clock(*ptr);
+ s3c24xx_register_clock(*ptr);
}
}
(cpu->map_io)(mach_desc, size);
}
+/* s3c24xx_init_clocks
+ *
+ * Initialise the clock subsystem and associated information from the
+ * given master crystal value.
+ *
+ * xtal = 0 -> use default PLL crystal value (normally 12MHz)
+ * != 0 -> PLL crystal value in Hz
+*/
+
+void __init s3c24xx_init_clocks(int xtal)
+{
+ if (xtal != 0)
+ s3c24xx_xtal = xtal;
+
+ if (cpu == NULL)
+ panic("s3c24xx_init_clocks: no cpu setup?\n");
+
+ if (cpu->init_clocks == NULL)
+ panic("s3c24xx_init_clocks: cpu has no clock init\n");
+ else
+ (cpu->init_clocks)(xtal);
+}
+
+void __init s3c24xx_init_uarts(struct s3c2410_uartcfg *cfg, int no)
+{
+ if (cpu == NULL)
+ return;
+
+ if (cpu->init_uarts == NULL) {
+ printk(KERN_ERR "s3c24xx_init_uarts: cpu has no uart init\n");
+ } else
+ (cpu->init_uarts)(cfg, no);
+}
+
static int __init s3c_arch_init(void)
{
int ret;
/* arch/arm/mach-s3c2410/cpu.h
*
- * Copyright (c) 2004 Simtec Electronics
- * Ben Dooks <ben@simtec.co.uk>
+ * Copyright (c) 2004-2005 Simtec Electronics
+ * Ben Dooks <ben@simtec.co.uk>
*
* Header file for S3C24XX CPU support
*
* Modifications:
* 24-Aug-2004 BJD Start of generic S3C24XX support
* 18-Oct-2004 BJD Moved board struct into this file
+ * 04-Jan-2005 BJD New uart initialisation
+ * 10-Jan-2005 BJD Moved generic init here, specific to cpu headers
+ * 14-Jan-2005 BJD Added s3c24xx_init_clocks() call
*/
#define IODESC_ENT(x) { S3C2410_VA_##x, S3C2410_PA_##x, S3C2410_SZ_##x, MT_DEVICE }
#define print_mhz(m) ((m) / MHZ), ((m / 1000) % 1000)
-#ifdef CONFIG_CPU_S3C2410
-extern int s3c2410_init(void);
-extern void s3c2410_map_io(struct map_desc *mach_desc, int size);
-#else
-#define s3c2410_map_io NULL
-#define s3c2410_init NULL
-#endif
+/* forward declaration */
+struct s3c2410_uartcfg;
-#ifdef CONFIG_CPU_S3C2440
-extern int s3c2440_init(void);
-extern void s3c2440_map_io(struct map_desc *mach_desc, int size);
-#else
-#define s3c2440_map_io NULL
-#define s3c2440_init NULL
-#endif
+/* core initialisation functions */
+
+extern void s3c24xx_init_irq(void);
extern void s3c24xx_init_io(struct map_desc *mach_desc, int size);
+extern void s3c24xx_init_uarts(struct s3c2410_uartcfg *cfg, int no);
+
+extern void s3c24xx_init_clocks(int xtal);
+
/* the board structure is used at first initialisation time
 * to get info such as the devices to register for this
 * board. This is done because platform_add_devices() cannot
extern void s3c24xx_set_board(struct s3c24xx_board *board);
+/* timer for 2410/2440 */
+struct sys_timer;
+extern struct sys_timer s3c24xx_timer;
EXPORT_SYMBOL(s3c2410_dma_devconfig);
+/* s3c2410_dma_getposition
+ *
+ * returns the current transfer points for the dma source and destination
+*/
+
+int s3c2410_dma_getposition(dmach_t channel, dma_addr_t *src, dma_addr_t *dst)
+{
+ s3c2410_dma_chan_t *chan = &s3c2410_chans[channel];
+
+ check_channel(channel);
+
+ if (src != NULL)
+ *src = dma_rdreg(chan, S3C2410_DMA_DCSRC);
+
+ if (dst != NULL)
+ *dst = dma_rdreg(chan, S3C2410_DMA_DCDST);
+
+ return 0;
+}
+
+EXPORT_SYMBOL(s3c2410_dma_getposition);
+
+
/* system device class */
#ifdef CONFIG_PM
s3c_irq_demux_uart(IRQ_S3CUART_RX2, regs);
}
-/* s3c2410_init_irq
+/* s3c24xx_init_irq
*
* Initialise S3C2410 IRQ system
*/
-void __init s3c2410_init_irq(void)
+void __init s3c24xx_init_irq(void)
{
unsigned long pend;
unsigned long last;
* Modifications:
* 16-Sep-2004 BJD Copied from mach-h1940.c
* 25-Oct-2004 BJD Updates for 2.6.10-rc1
+ * 10-Jan-2005 BJD Removed include of s3c2410.h s3c2440.h
+ * 14-Jan-2005 BJD Added new clock init
*/
#include <linux/kernel.h>
#include <asm/arch/regs-serial.h>
#include <asm/arch/regs-gpio.h>
-#include "s3c2410.h"
-#include "s3c2440.h"
#include "clock.h"
#include "devs.h"
#include "cpu.h"
void __init rx3715_map_io(void)
{
- s3c24xx_xtal = 16934000;
-
s3c24xx_init_io(rx3715_iodesc, ARRAY_SIZE(rx3715_iodesc));
- s3c2440_init_uarts(rx3715_uartcfgs, ARRAY_SIZE(rx3715_uartcfgs));
+ s3c24xx_init_clocks(16934000);
+ s3c24xx_init_uarts(rx3715_uartcfgs, ARRAY_SIZE(rx3715_uartcfgs));
s3c24xx_set_board(&rx3715_board);
}
void __init rx3715_init_irq(void)
{
- s3c2410_init_irq();
+ s3c24xx_init_irq();
}
#ifdef CONFIG_PM
MAPIO(rx3715_map_io)
INITIRQ(rx3715_init_irq)
INIT_MACHINE(rx3715_init_machine)
- .timer = &s3c2410_timer,
+ .timer = &s3c24xx_timer,
MACHINE_END
#include <asm/arch/regs-serial.h>
-#include "s3c2410.h"
#include "devs.h"
#include "cpu.h"
void __init smdk2410_map_io(void)
{
s3c24xx_init_io(smdk2410_iodesc, ARRAY_SIZE(smdk2410_iodesc));
- s3c2410_init_uarts(smdk2410_uartcfgs, ARRAY_SIZE(smdk2410_uartcfgs));
+ s3c24xx_init_clocks(0);
+ s3c24xx_init_uarts(smdk2410_uartcfgs, ARRAY_SIZE(smdk2410_uartcfgs));
s3c24xx_set_board(&smdk2410_board);
}
void __init smdk2410_init_irq(void)
{
- s3c2410_init_irq();
+ s3c24xx_init_irq();
}
MACHINE_START(SMDK2410, "SMDK2410") /* @TODO: request a new identifier and switch
BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
MAPIO(smdk2410_map_io)
INITIRQ(smdk2410_init_irq)
- .timer = &s3c2410_timer,
+ .timer = &s3c24xx_timer,
MACHINE_END
/* linux/arch/arm/mach-s3c2410/s3c2440-dsc.c
*
- * Copyright (c) 2004 Simtec Electronics
+ * Copyright (c) 2004-2005 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* Samsung S3C2440 Drive Strength Control support
* Modifications:
* 29-Aug-2004 BJD Start of drive-strength control
* 09-Nov-2004 BJD Added symbol export
+ * 11-Jan-2005 BJD Include fix
*/
#include <linux/kernel.h>
#include <asm/arch/regs-gpio.h>
#include <asm/arch/regs-dsc.h>
-#include "s3c2440.h"
#include "cpu.h"
+#include "s3c2440.h"
int s3c2440_set_dsc(unsigned int pin, unsigned int value)
{
/* linux/arch/arm/mach-s3c2410/s3c2440.c
*
- * Copyright (c) 2004 Simtec Electronics
+ * Copyright (c) 2004-2005 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* Samsung S3C2440 Mobile CPU support
* 09-Nov-2004 BJD Added sysdev for power management
* 04-Nov-2004 BJD New serial registration
* 15-Nov-2004 BJD Rename the i2c device for the s3c2440
+ * 14-Jan-2005 BJD Moved clock init code into separate function
+ * 14-Jan-2005 BJD Removed un-used clock bits
*/
#include <linux/kernel.h>
#include "cpu.h"
#include "pm.h"
-int s3c2440_clock_tick_rate = 12*1000*1000; /* current timers at 12MHz */
-
-/* clock info */
-unsigned long s3c2440_hdiv;
static struct map_desc s3c2440_iodesc[] __initdata = {
IODESC_ENT(USBHOST),
IODESC_ENT(LCD),
IODESC_ENT(TIMER),
IODESC_ENT(ADC),
+ IODESC_ENT(WATCHDOG),
};
static struct resource s3c_uart0_resource[] = {
static struct clk s3c2440_clk_cam = {
.name = "camera",
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2440_CLKCON_CAMERA
};
static struct clk s3c2440_clk_ac97 = {
.name = "ac97",
- .enable = s3c2410_clkcon_enable,
+ .enable = s3c24xx_clkcon_enable,
.ctrlbit = S3C2440_CLKCON_CAMERA
};
void __init s3c2440_map_io(struct map_desc *mach_desc, int size)
{
- unsigned long clkdiv;
- unsigned long camdiv;
-
/* register our io-tables */
iotable_init(s3c2440_iodesc, ARRAY_SIZE(s3c2440_iodesc));
iotable_init(mach_desc, size);
+ /* rename any peripherals used differing from the s3c2410 */
+
+ s3c_device_i2c.name = "s3c2440-i2c";
+}
+
+void __init s3c2440_init_clocks(int xtal)
+{
+ unsigned long clkdiv;
+ unsigned long camdiv;
+ int s3c2440_hdiv = 1;
/* now we've got our machine bits initialised, work out what
* clocks we've got */
s3c24xx_hclk = s3c24xx_fclk / s3c2440_hdiv;
s3c24xx_pclk = s3c24xx_hclk / ((clkdiv & S3C2440_CLKDIVN_PDIVN)? 2:1);
- /* print brieft summary of clocks, etc */
+ /* print brief summary of clocks, etc */
printk("S3C2440: core %ld.%03ld MHz, memory %ld.%03ld MHz, peripheral %ld.%03ld MHz\n",
print_mhz(s3c24xx_fclk), print_mhz(s3c24xx_hclk),
* console to use them, and to add new ones after the initialisation
*/
- s3c2410_init_clocks();
+ s3c24xx_setup_clocks();
/* add s3c2440 specific clocks */
s3c2440_clk_cam.parent = clk_get(NULL, "hclk");
s3c2440_clk_ac97.parent = clk_get(NULL, "pclk");
- s3c2410_register_clock(&s3c2440_clk_ac97);
- s3c2410_register_clock(&s3c2440_clk_cam);
+ s3c24xx_register_clock(&s3c2440_clk_ac97);
+ s3c24xx_register_clock(&s3c2440_clk_cam);
clk_disable(&s3c2440_clk_ac97);
clk_disable(&s3c2440_clk_cam);
-
- /* rename any peripherals used differing from the s3c2410 */
-
- s3c_device_i2c.name = "s3c2440-i2c";
}
int __init s3c2440_init(void)
/* arch/arm/mach-s3c2410/s3c2440.h
*
- * Copyright (c) 2004 Simtec Electronics
- * Ben Dooks <ben@simtec.co.uk>
+ * Copyright (c) 2004-2005 Simtec Electronics
+ * Ben Dooks <ben@simtec.co.uk>
*
* Header file for s3c2440 cpu support
*
* Modifications:
* 24-Aug-2004 BJD Start of S3C2440 CPU support
* 04-Nov-2004 BJD Added s3c2440_init_uarts()
+ * 04-Jan-2005 BJD Moved uart init to cpu code
+ * 10-Jan-2005 BJD Moved 2440 specific init here
+ * 14-Jan-2005 BJD Split the clock initialisation code
*/
-struct s3c2410_uartcfg;
+#ifdef CONFIG_CPU_S3C2440
-extern void s3c2440_init_irq(void);
+extern int s3c2440_init(void);
-extern void s3c2440_init_time(void);
+extern void s3c2440_map_io(struct map_desc *mach_desc, int size);
extern void s3c2440_init_uarts(struct s3c2410_uartcfg *cfg, int no);
+
+extern void s3c2440_init_clocks(int xtal);
+
+#else
+#define s3c2440_init_clocks NULL
+#define s3c2440_init_uarts NULL
+#define s3c2440_map_io NULL
+#define s3c2440_init NULL
+#endif
setup_irq(IRQ_TIMER4, &s3c2410_timer_irq);
}
-struct sys_timer s3c2410_timer = {
+struct sys_timer s3c24xx_timer = {
.init = s3c2410_timer_init,
.offset = s3c2410_gettimeoffset,
.resume = s3c2410_timer_setup
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
+#include <asm/hardware/scoop.h>
#include <asm/hardware/locomo.h>
#include "generic.h"
-static void __init scoop_init(void)
-{
+static struct resource collie_scoop_resources[] = {
+ [0] = {
+ .start = 0x40800000,
+ .end = 0x40800fff,
+ .flags = IORESOURCE_MEM,
+ },
+};
-#define COLLIE_SCP_INIT_DATA(adr,dat) (((adr)<<16)|(dat))
-#define COLLIE_SCP_INIT_DATA_END ((unsigned long)-1)
- static const unsigned long scp_init[] = {
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0140), // 00
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0100),
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_CDR, 0x0000), // 04
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_CPR, 0x0000), // 0C
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_CCR, 0x0000), // 10
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_IMR, 0x0000), // 18
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x00FF), // 14
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_ISR, 0x0000), // 1C
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x0000),
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPCR, COLLIE_SCP_IO_DIR), // 20
- COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPWR, COLLIE_SCP_IO_OUT), // 24
- COLLIE_SCP_INIT_DATA_END
- };
- int i;
- for (i = 0; scp_init[i] != COLLIE_SCP_INIT_DATA_END; i++) {
- int adr = scp_init[i] >> 16;
- COLLIE_SCP_REG(adr) = scp_init[i] & 0xFFFF;
- }
+static struct scoop_config collie_scoop_setup = {
+ .io_dir = COLLIE_SCOOP_IO_DIR,
+ .io_out = COLLIE_SCOOP_IO_OUT,
+};
+
+static struct platform_device colliescoop_device = {
+ .name = "sharp-scoop",
+ .id = -1,
+ .dev = {
+ .platform_data = &collie_scoop_setup,
+ },
+ .num_resources = ARRAY_SIZE(collie_scoop_resources),
+ .resource = collie_scoop_resources,
+};
-}
static struct resource locomo_resources[] = {
[0] = {
static struct platform_device *devices[] __initdata = {
&locomo_device,
+ &colliescoop_device,
};
static struct mtd_partition collie_partitions[] = {
static void collie_set_vpp(int vpp)
{
- COLLIE_SCP_REG_GPCR |= COLLIE_SCP_VPEN;
+ write_scoop_reg(SCOOP_GPCR, read_scoop_reg(SCOOP_GPCR) | COLLIE_SCP_VPEN);
if (vpp) {
- COLLIE_SCP_REG_GPWR |= COLLIE_SCP_VPEN;
+ write_scoop_reg(SCOOP_GPWR, read_scoop_reg(SCOOP_GPWR) | COLLIE_SCP_VPEN);
} else {
- COLLIE_SCP_REG_GPWR &= ~COLLIE_SCP_VPEN;
+ write_scoop_reg(SCOOP_GPWR, read_scoop_reg(SCOOP_GPWR) & ~COLLIE_SCP_VPEN);
}
}
GPDR |= GPIO_32_768kHz;
TUCR = TUCR_32_768kHz;
- scoop_init();
-
ret = platform_add_devices(devices, ARRAY_SIZE(devices));
if (ret) {
printk(KERN_WARNING "collie: Unable to register LoCoMo device\n");
/* virtual physical length type */
{0xe8000000, 0x00000000, 0x02000000, MT_DEVICE}, /* 32M main flash (cs0) */
{0xea000000, 0x08000000, 0x02000000, MT_DEVICE}, /* 32M boot flash (cs1) */
- {0xf0000000, 0x40000000, 0x01000000, MT_DEVICE}, /* 16M LOCOMO & SCOOP (cs4) */
};
static void __init collie_map_io(void)
}
#else
-#define neponset_suspend NULL
-#define neponset_resume NULL
+#define neponset_suspend NULL
+#define neponset_resume NULL
#endif
static struct device_driver neponset_device_driver = {
static void shark_ack_8259A_irq(unsigned int irq){}
-static void bogus_int(int irq, void *dev_id, struct pt_regs *regs)
+static irqreturn_t bogus_int(int irq, void *dev_id, struct pt_regs *regs)
{
printk("Got interrupt %i!\n",irq);
+ return IRQ_NONE;
}
static struct irqaction cascade;
//request_region(0xA0,0x2,"pic2");
cascade.handler = bogus_int;
- cascade.flags = 0;
- cascade.mask = 0;
cascade.name = "cascade";
- cascade.next = NULL;
- cascade.dev_id = NULL;
setup_irq(2,&cascade);
}
static short hw_led_state;
static short saved_state;
-static spinlock_t leds_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(leds_lock);
short sequoia_read(int addr) {
outw(addr,0x24);
else return 255;
}
-extern void __init via82c505_preinit(void *sysdata);
+extern void __init via82c505_preinit(void);
static struct hw_pci shark_pci __initdata = {
.setup = via82c505_setup,
.map_irq = shark_map_irq,
.nr_controllers = 1,
.scan = via82c505_scan_bus,
- .preinit = via82c505_preinit
+ .preinit = via82c505_preinit,
};
static int __init shark_pci_init(void)
*
* Copyright (C) 1995 Linus Torvalds
* Modifications for ARM processor (c) 1995-2001 Russell King
+ * Thumb alignment fault fixups (c) 2004 MontaVista Software, Inc.
+ * - Adapted from gdb/sim/arm/thumbemu.c -- Thumb instruction emulation.
+ * Copyright (C) 1996, Cygnus Software Technologies Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
#define TYPE_LDST 2
#define TYPE_DONE 3
+#ifdef __ARMEB__
+#define BE 1
+#define FIRST_BYTE_16 "mov %1, %1, ror #8\n"
+#define FIRST_BYTE_32 "mov %1, %1, ror #24\n"
+#define NEXT_BYTE "ror #24"
+#else
+#define BE 0
+#define FIRST_BYTE_16
+#define FIRST_BYTE_32
+#define NEXT_BYTE "lsr #8"
+#endif
+
#define __get8_unaligned_check(ins,val,addr,err) \
__asm__( \
"1: "ins" %1, [%2], #1\n" \
#define __get16_unaligned_check(ins,val,addr) \
do { \
unsigned int err = 0, v, a = addr; \
- __get8_unaligned_check(ins,val,a,err); \
__get8_unaligned_check(ins,v,a,err); \
- val |= v << 8; \
+ val = v << ((BE) ? 8 : 0); \
+ __get8_unaligned_check(ins,v,a,err); \
+ val |= v << ((BE) ? 0 : 8); \
if (err) \
goto fault; \
} while (0)
#define __get32_unaligned_check(ins,val,addr) \
do { \
unsigned int err = 0, v, a = addr; \
- __get8_unaligned_check(ins,val,a,err); \
__get8_unaligned_check(ins,v,a,err); \
- val |= v << 8; \
+ val = v << ((BE) ? 24 : 0); \
__get8_unaligned_check(ins,v,a,err); \
- val |= v << 16; \
+ val |= v << ((BE) ? 16 : 8); \
__get8_unaligned_check(ins,v,a,err); \
- val |= v << 24; \
+ val |= v << ((BE) ? 8 : 16); \
+ __get8_unaligned_check(ins,v,a,err); \
+ val |= v << ((BE) ? 0 : 24); \
if (err) \
goto fault; \
} while (0)
#define __put16_unaligned_check(ins,val,addr) \
do { \
unsigned int err = 0, v = val, a = addr; \
- __asm__( \
+ __asm__( FIRST_BYTE_16 \
"1: "ins" %1, [%2], #1\n" \
- " mov %1, %1, lsr #8\n" \
+ " mov %1, %1, "NEXT_BYTE"\n" \
"2: "ins" %1, [%2]\n" \
"3:\n" \
" .section .fixup,\"ax\"\n" \
#define __put32_unaligned_check(ins,val,addr) \
do { \
unsigned int err = 0, v = val, a = addr; \
- __asm__( \
+ __asm__( FIRST_BYTE_32 \
"1: "ins" %1, [%2], #1\n" \
- " mov %1, %1, lsr #8\n" \
+ " mov %1, %1, "NEXT_BYTE"\n" \
"2: "ins" %1, [%2], #1\n" \
- " mov %1, %1, lsr #8\n" \
+ " mov %1, %1, "NEXT_BYTE"\n" \
"3: "ins" %1, [%2], #1\n" \
- " mov %1, %1, lsr #8\n" \
+ " mov %1, %1, "NEXT_BYTE"\n" \
"4: "ins" %1, [%2]\n" \
"5:\n" \
" .section .fixup,\"ax\"\n" \
return TYPE_ERROR;
}
+/*
+ * Convert Thumb ld/st instruction forms to equivalent ARM instructions so
+ * we can reuse ARM userland alignment fault fixups for Thumb.
+ *
+ * This implementation was initially based on the algorithm found in
+ * gdb/sim/arm/thumbemu.c. It is basically just a code reduction of same
+ * to convert only Thumb ld/st instruction forms to equivalent ARM forms.
+ *
+ * NOTES:
+ * 1. Comments below refer to ARM ARM DDI0100E Thumb Instruction sections.
+ * 2. If for some reason we're passed a non-ld/st Thumb instruction to
+ * decode, we return 0xdeadc0de. This should never happen under normal
+ * circumstances but if it does, we've got other problems to deal with
+ * elsewhere and we obviously can't fix those problems here.
+ */
+
+static unsigned long
+thumb2arm(u16 tinstr)
+{
+ u32 L = (tinstr & (1<<11)) >> 11;
+
+ switch ((tinstr & 0xf800) >> 11) {
+ /* 6.5.1 Format 1: */
+ case 0x6000 >> 11: /* 7.1.52 STR(1) */
+ case 0x6800 >> 11: /* 7.1.26 LDR(1) */
+ case 0x7000 >> 11: /* 7.1.55 STRB(1) */
+ case 0x7800 >> 11: /* 7.1.30 LDRB(1) */
+ return 0xe5800000 |
+ ((tinstr & (1<<12)) << (22-12)) | /* fixup */
+ (L<<20) | /* L==1? */
+ ((tinstr & (7<<0)) << (12-0)) | /* Rd */
+ ((tinstr & (7<<3)) << (16-3)) | /* Rn */
+ ((tinstr & (31<<6)) >> /* immed_5 */
+ (6 - ((tinstr & (1<<12)) ? 0 : 2)));
+ case 0x8000 >> 11: /* 7.1.57 STRH(1) */
+ case 0x8800 >> 11: /* 7.1.32 LDRH(1) */
+ return 0xe1c000b0 |
+ (L<<20) | /* L==1? */
+ ((tinstr & (7<<0)) << (12-0)) | /* Rd */
+ ((tinstr & (7<<3)) << (16-3)) | /* Rn */
+ ((tinstr & (7<<6)) >> (6-1)) | /* immed_5[2:0] */
+ ((tinstr & (3<<9)) >> (9-8)); /* immed_5[4:3] */
+
+ /* 6.5.1 Format 2: */
+ case 0x5000 >> 11:
+ case 0x5800 >> 11:
+ {
+ static const u32 subset[8] = {
+ 0xe7800000, /* 7.1.53 STR(2) */
+ 0xe18000b0, /* 7.1.58 STRH(2) */
+ 0xe7c00000, /* 7.1.56 STRB(2) */
+ 0xe19000d0, /* 7.1.34 LDRSB */
+ 0xe7900000, /* 7.1.27 LDR(2) */
+ 0xe19000b0, /* 7.1.33 LDRH(2) */
+ 0xe7d00000, /* 7.1.31 LDRB(2) */
+ 0xe19000f0 /* 7.1.35 LDRSH */
+ };
+ return subset[(tinstr & (7<<9)) >> 9] |
+ ((tinstr & (7<<0)) << (12-0)) | /* Rd */
+ ((tinstr & (7<<3)) << (16-3)) | /* Rn */
+ ((tinstr & (7<<6)) >> (6-0)); /* Rm */
+ }
+
+ /* 6.5.1 Format 3: */
+ case 0x4800 >> 11: /* 7.1.28 LDR(3) */
+ /* NOTE: This case is not technically possible. We're
+ * loading 32-bit memory data via PC relative
+ * addressing mode. So we can and should eliminate
+ * this case. But I'll leave it here for now.
+ */
+ return 0xe59f0000 |
+ ((tinstr & (7<<8)) << (12-8)) | /* Rd */
+ ((tinstr & 255) << (2-0)); /* immed_8 */
+
+ /* 6.5.1 Format 4: */
+ case 0x9000 >> 11: /* 7.1.54 STR(3) */
+ case 0x9800 >> 11: /* 7.1.29 LDR(4) */
+ return 0xe58d0000 |
+ (L<<20) | /* L==1? */
+ ((tinstr & (7<<8)) << (12-8)) | /* Rd */
+ ((tinstr & 255) << 2); /* immed_8 */
+
+ /* 6.6.1 Format 1: */
+ case 0xc000 >> 11: /* 7.1.51 STMIA */
+ case 0xc800 >> 11: /* 7.1.25 LDMIA */
+ {
+ u32 Rn = (tinstr & (7<<8)) >> 8;
+ u32 W = ((L<<Rn) & (tinstr&255)) ? 0 : 1<<21;
+
+ return 0xe8800000 | W | (L<<20) | (Rn<<16) |
+ (tinstr&255);
+ }
+
+ /* 6.6.1 Format 2: */
+ case 0xb000 >> 11: /* 7.1.48 PUSH */
+ case 0xb800 >> 11: /* 7.1.47 POP */
+ if ((tinstr & (3 << 9)) == 0x0400) {
+ static const u32 subset[4] = {
+ 0xe92d0000, /* STMDB sp!,{registers} */
+ 0xe92d4000, /* STMDB sp!,{registers,lr} */
+ 0xe8bd0000, /* LDMIA sp!,{registers} */
+ 0xe8bd8000 /* LDMIA sp!,{registers,pc} */
+ };
+ return subset[(L<<1) | ((tinstr & (1<<8)) >> 8)] |
+ (tinstr & 255); /* register_list */
+ }
+ /* Else fall through for illegal instruction case */
+
+ default:
+ return 0xdeadc0de;
+ }
+}
+
static int
do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
union offset_union offset;
- unsigned long instr, instrptr;
+ unsigned long instr = 0, instrptr;
int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs);
unsigned int type;
+ mm_segment_t fs;
+ unsigned int fault;
+ u16 tinstr = 0;
instrptr = instruction_pointer(regs);
- instr = *(unsigned long *)instrptr;
+
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+ if (thumb_mode(regs)) {
+ fault = __get_user(tinstr, (u16 *)(instrptr & ~1));
+ if (!(fault))
+ instr = thumb2arm(tinstr);
+ } else
+ fault = __get_user(instr, (u32 *)instrptr);
+ set_fs(fs);
+
+ if (fault) {
+ type = TYPE_FAULT;
+ goto bad_or_fault;
+ }
if (user_mode(regs))
goto user;
fixup:
- regs->ARM_pc += 4;
+ regs->ARM_pc += thumb_mode(regs) ? 2 : 4;
switch (CODING_BITS(instr)) {
case 0x00000000: /* ldrh or strh */
bad_or_fault:
if (type == TYPE_ERROR)
goto bad;
- regs->ARM_pc -= 4;
+ regs->ARM_pc -= thumb_mode(regs) ? 2 : 4;
/*
* We got a fault - fix it up, or die.
*/
* Oops, we didn't handle the instruction.
*/
printk(KERN_ERR "Alignment trap: not handling instruction "
- "%08lx at [<%08lx>]\n", instr, instrptr);
+ "%0*lx at [<%08lx>]\n",
+ thumb_mode(regs) ? 4 : 8,
+ thumb_mode(regs) ? tinstr : instr, instrptr);
ai_skipped += 1;
return 1;
ai_user += 1;
if (ai_usermode & 1)
- printk("Alignment trap: %s (%d) PC=0x%08lx Instr=0x%08lx "
+ printk("Alignment trap: %s (%d) PC=0x%08lx Instr=0x%0*lx "
"Address=0x%08lx FSR 0x%03x\n", current->comm,
- current->pid, instrptr, instr, addr, fsr);
+ current->pid, instrptr,
+ thumb_mode(regs) ? 4 : 8,
+ thumb_mode(regs) ? tinstr : instr,
+ addr, fsr);
if (ai_usermode & 2)
goto fixup;
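The thumb2arm() translation added above can be checked by hand on one
Format 1 case.  The fragment below is an illustration only, not part of the
patch: it restates the Format 1 expression from thumb2arm() as a standalone
program (the helper name thumb_fmt1_to_arm is hypothetical) and shows that the
Thumb encoding 0x6848 for "ldr r0, [r1, #4]" maps to the ARM encoding
0xe5910004 for the same instruction, which is why the existing ARM fixup
handlers can be reused unchanged.

    #include <stdio.h>
    #include <stdint.h>

    /* Restatement of the Format 1 (immediate-offset load/store) case from
     * thumb2arm() above, for illustration; the real function lives in
     * arch/arm/mm/alignment.c and handles the other formats as well. */
    static uint32_t thumb_fmt1_to_arm(uint16_t tinstr)
    {
            uint32_t L = (tinstr & (1 << 11)) >> 11;        /* 1 = load, 0 = store */

            return 0xe5800000 |
                   ((tinstr & (1 << 12)) << (22 - 12)) |    /* B bit: byte access */
                   (L << 20) |                              /* L bit */
                   ((tinstr & (7 << 0)) << (12 - 0)) |      /* Rd */
                   ((tinstr & (7 << 3)) << (16 - 3)) |      /* Rn */
                   ((tinstr & (31 << 6)) >>                 /* immed_5, scaled by 4 */
                    (6 - ((tinstr & (1 << 12)) ? 0 : 2)));  /* for word accesses */
    }

    int main(void)
    {
            /* Thumb "ldr r0, [r1, #4]" is 0x6848; prints 0xe5910004. */
            printf("0x%08x\n", (unsigned)thumb_fmt1_to_arm(0x6848));
            return 0;
    }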
static pte_t *from_pte;
static pte_t *to_pte;
-static spinlock_t v6_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(v6_lock);
#define DCACHE_COLOUR(vaddr) ((vaddr & (SHMLBA - 1)) >> PAGE_SHIFT)
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- bic r0, r0, #0x1e00 @ i...??r.........
- bic r0, r0, #0x000e @ ............wca.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031 @ ..........DP...M
- orr r0, r0, #0x0100 @ .......S........
-
+ ldr r5, arm1020_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm1020_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- orr r0, r0, #0x4000 @ .R..............
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- orr r0, r0, #0x0800 @ ....Z...........
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ Enable D cache
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ I Cache on
+ orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
mov pc, lr
.size __arm1020_setup, . - __arm1020_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .0.1 1001 ..11 0101 /* FIXME: why no V bit? */
+ */
+ .type arm1020_cr1_clear, #object
+ .type arm1020_cr1_set, #object
+arm1020_cr1_clear:
+ .word 0x593f
+arm1020_cr1_set:
+ .word 0x1935
+
__INITDATA
/*
__arm1020_proc_info:
.long 0x4104a200 @ ARM 1020T (Architecture v5T)
.long 0xff0ffff0
- .long 0x00000c02 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm1020_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- bic r0, r0, #0x1e00 @ i...??r.........
- bic r0, r0, #0x000e @ ............wca.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031 @ ..........DP...M
- orr r0, r0, #0x0100 @ .......S........
-
+ ldr r5, arm1020e_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm1020e_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- orr r0, r0, #0x4000 @ .R..............
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- orr r0, r0, #0x0800 @ ....Z...........
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ Enable D cache
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ I Cache on
+ orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
mov pc, lr
.size __arm1020e_setup, . - __arm1020e_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .0.1 1001 ..11 0101 /* FIXME: why no V bit? */
+ */
+ .type arm1020e_cr1_clear, #object
+ .type arm1020e_cr1_set, #object
+arm1020e_cr1_clear:
+ .word 0x5f3f
+arm1020e_cr1_set:
+ .word 0x1935
+
__INITDATA
/*
__arm1020e_proc_info:
.long 0x4105a200 @ ARM 1020TE (Architecture v5TE)
.long 0xff0ffff0
- .long 0x00000c12 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm1020e_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- bic r0, r0, #0x1e00 @ ...i??r.........
- bic r0, r0, #0x000e @ ............wca.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031 @ ..........DP...M
- orr r0, r0, #0x2100 @ ..V....S........
-
+ ldr r5, arm1022_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm1022_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .R..............
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- orr r0, r0, #0x0800 @ ....Z...........
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .............C..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...I............
#endif
mov pc, lr
.size __arm1022_setup, . - __arm1022_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .011 1001 ..11 0101
+ *
+ */
+ .type arm1022_cr1_clear, #object
+ .type arm1022_cr1_set, #object
+arm1022_cr1_clear:
+ .word 0x7f3f
+arm1022_cr1_set:
+ .word 0x3935
+
__INITDATA
/*
__arm1022_proc_info:
.long 0x4105a220 @ ARM 1022E (v5TE)
.long 0xff0ffff0
- .long 0x00000c12 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm1022_setup
.long cpu_arch_name
.long cpu_elf_name
mov r0, #4 @ explicitly disable writeback
mcr p15, 7, r0, c15, c0, 0
#endif
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- bic r0, r0, #0x1e00 @ ...i??r.........
- bic r0, r0, #0x000e @ ............wca.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031 @ ..........DP...M
- orr r0, r0, #0x2100 @ ..V....S........
-
+ ldr r5, arm1026_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm1026_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
- orr r0, r0, #0x4000 @ .R..............
-#endif
-#ifndef CONFIG_CPU_BPREDICT_DISABLE
- orr r0, r0, #0x0800 @ ....Z...........
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .............C..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...I............
+ orr r0, r0, #0x4000 @ .R.. .... .... ....
#endif
mov pc, lr
.size __arm1026_setup, . - __arm1026_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .011 1001 ..11 0101
+ *
+ */
+ .type arm1026_cr1_clear, #object
+ .type arm1026_cr1_set, #object
+arm1026_cr1_clear:
+ .word 0x7f3f
+arm1026_cr1_set:
+ .word 0x3935
+
__INITDATA
/*
__arm1026_proc_info:
.long 0x4106a260 @ ARM 1026EJ-S (v5TEJ)
.long 0xff0ffff0
- .long 0x00000c12 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm1026_setup
.long cpu_arch_name
.long cpu_elf_name
__arm6_setup: mov r0, #0
mcr p15, 0, r0, c7, c0 @ flush caches on v3
mcr p15, 0, r0, c5, c0 @ flush TLBs on v3
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mov r0, #0x3d @ . ..RS BLDP WCAM
orr r0, r0, #0x100 @ . ..01 0011 1101
mov pc, lr
__arm7_setup: mov r0, #0
mcr p15, 0, r0, c7, c0 @ flush caches on v3
mcr p15, 0, r0, c5, c0 @ flush TLBs on v3
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
mcr p15, 0, r0, c3, c0 @ load domain access register
mov r0, #0x7d @ . ..RS BLDP WCAM
orr r0, r0, #0x100 @ . ..01 0111 1101
__arm710_proc_info:
.long 0x41007100
.long 0xfff8ff00
- .long 0x00000c1e
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm7_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, ip, c1, c0, 0 @ ctrl register
mov pc, r0
- __INIT
-
- .type __arm710_setup, #function
-__arm710_setup: mov r0, #0
- mcr p15, 0, r0, c7, c7, 0 @ invalidate caches
- mcr p15, 0, r0, c8, c7, 0 @ flush TLB (v4)
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
-
- mrc p15, 0, r0, c1, c0 @ get control register
- bic r0, r0, #0x0e00 @ ..V. ..RS BLDP WCAM
- orr r0, r0, #0x0100 @ .... .... .111 .... (old)
- orr r0, r0, #0x003d @ .... ..01 ..11 1101 (new)
- mov pc, lr @ __ret (head.S)
- .size __arm710_setup, . - __arm710_setup
-
- .type __arm720_setup, #function
-__arm720_setup: mov r0, #0
- mcr p15, 0, r0, c7, c7, 0 @ invalidate caches
- mcr p15, 0, r0, c8, c7, 0 @ flush TLB (v4)
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
-
- mrc p15, 0, r0, c1, c0 @ get control register
- bic r0, r0, #0x0e00 @ ..V. ..RS BLDP WCAM
- orr r0, r0, #0x2100 @ .... .... .111 .... (old)
- orr r0, r0, #0x003d @ ..1. ..01 ..11 1101 (new)
- mov pc, lr @ __ret (head.S)
- .size __arm720_setup, . - __arm720_setup
+ __INIT
+
+ .type __arm710_setup, #function
+__arm710_setup:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c7, 0 @ invalidate caches
+ mcr p15, 0, r0, c8, c7, 0 @ flush TLB (v4)
+ mrc p15, 0, r0, c1, c0 @ get control register
+ ldr r5, arm710_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm710_cr1_set
+ orr r0, r0, r5
+ mov pc, lr @ __ret (head.S)
+ .size __arm710_setup, . - __arm710_setup
+
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .... 0001 ..11 1101
+ *
+ */
+ .type arm710_cr1_clear, #object
+ .type arm710_cr1_set, #object
+arm710_cr1_clear:
+ .word 0x0f3f
+arm710_cr1_set:
+ .word 0x013d
+
+ .type __arm720_setup, #function
+__arm720_setup:
+ mov r0, #0
+ mcr p15, 0, r0, c7, c7, 0 @ invalidate caches
+ mcr p15, 0, r0, c8, c7, 0 @ flush TLB (v4)
+ mrc p15, 0, r0, c1, c0 @ get control register
+ ldr r5, arm720_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm720_cr1_set
+ orr r0, r0, r5
+ mov pc, lr @ __ret (head.S)
+ .size __arm720_setup, . - __arm720_setup
+
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * ..1. 1001 ..11 1101
+ *
+ */
+ .type arm720_cr1_clear, #object
+ .type arm720_cr1_set, #object
+arm720_cr1_clear:
+ .word 0x2f3f
+arm720_cr1_set:
+ .word 0x213d
__INITDATA
__arm710_proc_info:
.long 0x41807100 @ cpu_val
.long 0xffffff00 @ cpu_mask
- .long 0x00000c1e @ section_mmu_flags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm710_setup @ cpu_flush
.long cpu_arch_name @ arch_name
.long cpu_elf_name @ elf_name
__arm720_proc_info:
.long 0x41807200 @ cpu_val
.long 0xffffff00 @ cpu_mask
- .long 0x00000c1e @ section_mmu_flags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm720_setup @ cpu_flush
.long cpu_arch_name @ arch_name
.long cpu_elf_name @ elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- @ VI ZFRS BLDP WCAM
- bic r0, r0, #0x0e00
- bic r0, r0, #0x0002
- bic r0, r0, #0x000c
- bic r0, r0, #0x1000 @ ...0 000. .... 000.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031
- orr r0, r0, #0x2100 @ ..1. ...1 ..11 ...1
-
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .... .... .... .1..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...1 .... .... ....
-#endif
+ ldr r5, arm920_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm920_cr1_set
+ orr r0, r0, r5
mov pc, lr
.size __arm920_setup, . - __arm920_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * ..11 0001 ..11 0101
+ *
+ */
+ .type arm920_cr1_clear, #object
+ .type arm920_cr1_set, #object
+arm920_cr1_clear:
+ .word 0x3f3f
+arm920_cr1_set:
+ .word 0x3135
+
__INITDATA
/*
__arm920_proc_info:
.long 0x41009200
.long 0xff00fff0
- .long 0x00000c1e @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm920_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- @ VI ZFRS BLDP WCAM
- bic r0, r0, #0x0e00
- bic r0, r0, #0x0002
- bic r0, r0, #0x000c
- bic r0, r0, #0x1000 @ ...0 000. .... 000.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031
- orr r0, r0, #0x2100 @ ..1. ...1 ..11 ...1
-
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .... .... .... .1..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...1 .... .... ....
-#endif
+ ldr r5, arm922_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm922_cr1_set
+ orr r0, r0, r5
mov pc, lr
.size __arm922_setup, . - __arm922_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * ..11 0001 ..11 0101
+ *
+ */
+ .type arm922_cr1_clear, #object
+ .type arm922_cr1_set, #object
+arm922_cr1_clear:
+ .word 0x3f3f
+arm922_cr1_set:
+ .word 0x3135
+
__INITDATA
/*
__arm922_proc_info:
.long 0x41009220
.long 0xff00fff0
- .long 0x00000c1e @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm922_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
mov r0, #4 @ disable write-back on caches explicitly
mcr p15, 7, r0, c15, c0, 0
#endif
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- @ VI ZFRS BLDP WCAM
- bic r0, r0, #0x0e00
- bic r0, r0, #0x0002
- bic r0, r0, #0x000c
- bic r0, r0, #0x1000 @ ...0 000. .... 000.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031
- orr r0, r0, #0x2100 @ ..1. ...1 ..11 ...1
-
- /* Writebuffer on */
- orr r0, r0, #0x0008 @ .... .... .... 1...
-
+ ldr r5, arm925_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm925_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .1.. .... .... ....
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .... .... .... .1..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...1 .... .... ....
#endif
mov pc, lr
.size __arm925_setup, . - __arm925_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .011 0001 ..11 1101
+ *
+ */
+ .type arm925_cr1_clear, #object
+ .type arm925_cr1_set, #object
+arm925_cr1_clear:
+ .word 0x7f3f
+arm925_cr1_set:
+ .word 0x313d
+
__INITDATA
/*
__arm925_proc_info:
.long 0x54029250
.long 0xfffffff0
- .long 0x00000c12 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm925_setup
.long cpu_arch_name
.long cpu_elf_name
__arm915_proc_info:
.long 0x54029150
.long 0xfffffff0
- .long 0x00000c12 @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm925_setup
.long cpu_arch_name
.long cpu_elf_name
mcr p15, 0, r0, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r0, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r0, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
mcr p15, 7, r0, c15, c0, 0
#endif
- mov r0, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r0, c3, c0 @ load domain access register
mrc p15, 0, r0, c1, c0 @ get control register v4
-/*
- * Clear out 'unwanted' bits (then put them in if we need them)
- */
- @ VI ZFRS BLDP WCAM
- bic r0, r0, #0x0e00
- bic r0, r0, #0x0002
- bic r0, r0, #0x000c
- bic r0, r0, #0x1000 @ ...0 000. .... 000.
-/*
- * Turn on what we want
- */
- orr r0, r0, #0x0031
- orr r0, r0, #0x2100 @ ..1. ...1 ..11 ...1
-
+ ldr r5, arm926_cr1_clear
+ bic r0, r0, r5
+ ldr r5, arm926_cr1_set
+ orr r0, r0, r5
#ifdef CONFIG_CPU_CACHE_ROUND_ROBIN
orr r0, r0, #0x4000 @ .1.. .... .... ....
-#endif
-#ifndef CONFIG_CPU_DCACHE_DISABLE
- orr r0, r0, #0x0004 @ .... .... .... .1..
-#endif
-#ifndef CONFIG_CPU_ICACHE_DISABLE
- orr r0, r0, #0x1000 @ ...1 .... .... ....
#endif
mov pc, lr
.size __arm926_setup, . - __arm926_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * .011 0001 ..11 0101
+ *
+ */
+ .type arm926_cr1_clear, #object
+ .type arm926_cr1_set, #object
+arm926_cr1_clear:
+ .word 0x7f3f
+arm926_cr1_set:
+ .word 0x3135
+
__INITDATA
/*
__arm926_proc_info:
.long 0x41069260 @ ARM926EJ-S (v5TEJ)
.long 0xff0ffff0
- .long 0x00000c1e @ mmuflags
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_BIT4 | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __arm926_setup
.long cpu_arch_name
.long cpu_elf_name
.type __sa110_setup, #function
__sa110_setup:
- mrc p15, 0, r0, c1, c0 @ get control register v4
- bic r0, r0, #0x2e00 @ ..VI ZFRS BLDP WCAM
- bic r0, r0, #0x0002 @ ..0. 000. .... ..0.
- orr r0, r0, #0x003d
- orr r0, r0, #0x1100 @ ...1 ...1 ..11 11.1
mov r10, #0
mcr p15, 0, r10, c7, c7 @ invalidate I,D caches on v4
mcr p15, 0, r10, c7, c10, 4 @ drain write buffer on v4
mcr p15, 0, r10, c8, c7 @ invalidate I,D TLBs on v4
- mcr p15, 0, r4, c2, c0 @ load page table pointer
- mov r10, #0x1f @ Domains 0, 1 = client
- mcr p15, 0, r10, c3, c0 @ load domain access register
+ mrc p15, 0, r0, c1, c0 @ get control register v4
+ ldr r5, sa110_cr1_clear
+ bic r0, r0, r5
+ ldr r5, sa110_cr1_set
+ orr r0, r0, r5
mov pc, lr
.size __sa110_setup, . - __sa110_setup
+ /*
+ * R
+ * .RVI ZFRS BLDP WCAM
+ * ..01 0001 ..11 1101
+ *
+ */
+ .type sa110_cr1_clear, #object
+ .type sa110_cr1_set, #object
+sa110_cr1_clear:
+ .word 0x3f3f
+sa110_cr1_set:
+ .word 0x113d
+
__INITDATA
/*
__sa110_proc_info:
.long 0x4401a100
.long 0xfffffff0
- .long 0x00000c0e
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __sa110_setup
.long cpu_arch_name
.long cpu_elf_name
ENTRY(cpu_v6_switch_mm)
mov r2, #0
ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id
+ mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB
mcr p15, 0, r2, c7, c10, 4 @ drain write buffer
mcr p15, 0, r0, c2, c0, 0 @ set TTB 0
mcr p15, 0, r1, c13, c0, 1 @ set context ID
* - cache type register is implemented
*/
__v6_setup:
- mov r10, #0
- mcr p15, 0, r10, c7, c14, 0 @ clean+invalidate D cache
- mcr p15, 0, r10, c7, c5, 0 @ invalidate I cache
- mcr p15, 0, r10, c7, c15, 0 @ clean+invalidate cache
- mcr p15, 0, r10, c7, c10, 4 @ drain write buffer
- mcr p15, 0, r10, c8, c7, 0 @ invalidate I + D TLBs
- mcr p15, 0, r10, c2, c0, 2 @ TTB control register
- mcr p15, 0, r4, c2, c0, 0 @ load TTB0
+ mov r0, #0
+ mcr p15, 0, r0, c7, c14, 0 @ clean+invalidate D cache
+ mcr p15, 0, r0, c7, c5, 0 @ invalidate I cache
+ mcr p15, 0, r0, c7, c15, 0 @ clean+invalidate cache
+ mcr p15, 0, r0, c7, c10, 4 @ drain write buffer
+ mcr p15, 0, r0, c8, c7, 0 @ invalidate I + D TLBs
+ mcr p15, 0, r0, c2, c0, 2 @ TTB control register
mcr p15, 0, r4, c2, c0, 1 @ load TTB1
- mov r10, #0x1f @ domains 0, 1 = manager
- mcr p15, 0, r10, c3, c0, 0 @ load domain access register
- mrc p15, 0, r0, c1, c0, 0 @ read control register
#ifdef CONFIG_VFP
- mrc p15, 0, r10, c1, c0, 2
- orr r10, r10, #(3 << 20)
- mcr p15, 0, r10, c1, c0, 2 @ Enable full access to VFP
+ mrc p15, 0, r0, c1, c0, 2
+ orr r0, r0, #(3 << 20)
+ mcr p15, 0, r0, c1, c0, 2 @ Enable full access to VFP
#endif
- ldr r10, cr1_clear @ get mask for bits to clear
- bic r0, r0, r10 @ clear bits them
- ldr r10, cr1_set @ get mask for bits to set
- orr r0, r0, r10 @ set them
+ mrc p15, 0, r0, c1, c0, 0 @ read control register
+ ldr r5, v6_cr1_clear @ get mask for bits to clear
+	bic	r0, r0, r5			@ clear them
+ ldr r5, v6_cr1_set @ get mask for bits to set
+ orr r0, r0, r5 @ set them
mov pc, lr @ return to head.S:__ret
/*
* rrrr rrrx xxx0 0101 xxxx xxxx x111 xxxx < forced
* 0 110 0011 1.00 .111 1101 < we want
*/
- .type cr1_clear, #object
- .type cr1_set, #object
-cr1_clear:
- .word 0x0120c302
-cr1_set:
+ .type v6_cr1_clear, #object
+ .type v6_cr1_set, #object
+v6_cr1_clear:
+ .word 0x01e0fb7f
+v6_cr1_set:
.word 0x00c0387d
.type v6_processor_functions, #object
__v6_proc_info:
.long 0x0007b000
.long 0x0007f000
- .long 0x00000c0e
+ .long PMD_TYPE_SECT | \
+ PMD_SECT_BUFFERABLE | \
+ PMD_SECT_CACHEABLE | \
+ PMD_SECT_AP_WRITE | \
+ PMD_SECT_AP_READ
b __v6_setup
.long cpu_arch_name
.long cpu_elf_name
mov pc, lr
/*
- * v4_flush_kerm_tlb_range(start, end)
+ * v4_flush_kern_tlb_range(start, end)
*
* Invalidate a range of TLB entries in the specified kernel
* address range.
mov pc, lr
/*
- * v4_flush_kerm_tlb_range(start, end)
+ * v4_flush_kern_tlb_range(start, end)
*
* Invalidate a range of TLB entries in the specified kernel
* address range.
#include <linux/oprofile.h>
#include <linux/errno.h>
#include <asm/semaphore.h>
+#include <linux/sysdev.h>
#include "op_counter.h"
#include "op_arm_model.h"
static void pmu_stop(void);
static int pmu_create_files(struct super_block *, struct dentry *);
-static struct oprofile_operations pmu_ops = {
- .create_files = pmu_create_files,
- .setup = pmu_setup,
- .shutdown = pmu_stop,
- .start = pmu_start,
- .stop = pmu_stop,
+#ifdef CONFIG_PM
+static int pmu_suspend(struct sys_device *dev, u32 state)
+{
+ if (pmu_enabled)
+ pmu_stop();
+ return 0;
+}
+
+static int pmu_resume(struct sys_device *dev)
+{
+ if (pmu_enabled)
+ pmu_start();
+ return 0;
+}
+
+static struct sysdev_class oprofile_sysclass = {
+ set_kset_name("oprofile"),
+ .resume = pmu_resume,
+ .suspend = pmu_suspend,
};
-#ifdef CONFIG_PM
static struct sys_device device_oprofile = {
.id = 0,
.cls = &oprofile_sysclass,
int ret;
if (!(ret = sysdev_class_register(&oprofile_sysclass)))
- ret = sys_device_register(&device_oprofile);
+ ret = sysdev_register(&device_oprofile);
return ret;
}
-static void __exit exit_driverfs(void)
+static void exit_driverfs(void)
{
- sys_device_unregister(&device_oprofile);
+ sysdev_unregister(&device_oprofile);
sysdev_class_unregister(&oprofile_sysclass);
}
#else
up(&pmu_sem);
}
-int __init pmu_init(struct oprofile_operations **ops, struct op_arm_model_spec *spec)
+int __init pmu_init(struct oprofile_operations *ops, struct op_arm_model_spec *spec)
{
init_MUTEX(&pmu_sem);
pmu_model = spec;
init_driverfs();
- *ops = &pmu_ops;
- pmu_ops.cpu_type = pmu_model->name;
+ ops->create_files = pmu_create_files;
+ ops->setup = pmu_setup;
+ ops->shutdown = pmu_stop;
+ ops->start = pmu_start;
+ ops->stop = pmu_stop;
+ ops->cpu_type = pmu_model->name;
printk(KERN_INFO "oprofile: using %s PMU\n", spec->name);
+
return 0;
}
#include <linux/errno.h>
#include "op_arm_model.h"
-int __init oprofile_arch_init(struct oprofile_operations **ops)
+int __init oprofile_arch_init(struct oprofile_operations *ops)
{
int ret = -ENODEV;
#ifdef CONFIG_CPU_XSCALE
ret = pmu_init(ops, &op_xscale_spec);
#endif
+
return ret;
}
extern struct op_arm_model_spec op_xscale_spec;
#endif
-extern int pmu_init(struct oprofile_operations **ops, struct op_arm_model_spec *spec);
+extern int __init pmu_init(struct oprofile_operations *ops, struct op_arm_model_spec *spec);
extern void pmu_exit(void);
#endif /* OP_ARM_MODEL_H */
#ifdef CONFIG_ARCH_IOP331
#define XSCALE_PMU_IRQ IRQ_IOP331_CORE_PMU
#endif
+#ifdef CONFIG_ARCH_PXA
+#define XSCALE_PMU_IRQ IRQ_PMU
+#endif
/*
* Different types of events that can be counted by the XScale PMU
/* Overflow bit gets cleared. There's no workaround. */
/* Fixed in B stepping or later */
- pmnc &= ~(PMU_ENABLE | pmu->cnt_ovf[PMN0] | pmu->cnt_ovf[PMN1] |
- pmu->cnt_ovf[CCNT]);
- write_pmnc(pmnc);
+ /* Write the value back to clear the overflow flags. Overflow */
+ /* flags remain in pmnc for use below */
+ write_pmnc(pmnc & ~PMU_ENABLE);
for (i = CCNT; i <= PMN1; i++) {
if (!(pmu->int_mask[i] & pmu->int_enable))
static irqreturn_t xscale_pmu_interrupt(int irq, void *arg, struct pt_regs *regs)
{
- unsigned long pc = profile_pc(regs);
- int i, is_kernel = !user_mode(regs);
+ int i;
u32 pmnc;
if (pmu->id == PMU_XSC1)
continue;
write_counter(i, -(u32)results[i].reset_counter);
- oprofile_add_sample(pc, is_kernel, i, smp_processor_id());
+ oprofile_add_sample(regs, i);
results[i].ovf--;
}
*/
#include <linux/linkage.h>
#include <linux/init.h>
-#include <asm/thread_info.h>
+#include <asm/constants.h>
#include <asm/vfpmacros.h>
.globl do_vfp
@ VFP hardware support entry point.
@
@ r0 = faulted instruction
-@ r5 = faulted PC+4
+@ r2 = faulted PC+4
@ r9 = successful return
@ r10 = vfp_state union
@ lr = failure return
.globl vfp_support_entry
vfp_support_entry:
- DBGSTR3 "instr %08x pc %08x state %p", r0, r5, r10
+ DBGSTR3 "instr %08x pc %08x state %p", r0, r2, r10
VFPFMRX r1, FPEXC @ Is the VFP enabled?
DBGSTR1 "fpexc %08x", r1
ldr r3, last_VFP_context_address
orr r1, r1, #FPEXC_ENABLE @ user FPEXC has the enable bit set
ldr r4, [r3] @ last_VFP_context pointer
- bic r2, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled
+ bic r5, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled
cmp r4, r10
beq check_for_exception @ we are returning to the same
@ process, so the registers are
@ still there. In this case, we do
@ not want to drop a pending exception.
- VFPFMXR FPEXC, r2 @ enable VFP, disable any pending
+ VFPFMXR FPEXC, r5 @ enable VFP, disable any pending
@ exceptions, so we can get at the
@ rest of it
DBGSTR1 "save old state %p", r4
cmp r4, #0
beq no_old_VFP_process
- VFPFMRX r2, FPSCR @ current status
+ VFPFMRX r5, FPSCR @ current status
VFPFMRX r6, FPINST @ FPINST (always there, rev0 onwards)
tst r1, #FPEXC_FPV2 @ is there an FPINST2 to read?
VFPFMRX r8, FPINST2, NE @ FPINST2 if needed - avoids reading
@ nonexistant reg on rev0
VFPFSTMIA r4 @ save the working registers
add r4, r4, #8*16+4
- stmia r4, {r1, r2, r6, r8} @ save FPEXC, FPSCR, FPINST, FPINST2
+ stmia r4, {r1, r5, r6, r8} @ save FPEXC, FPSCR, FPINST, FPINST2
@ and point r4 at the word at the
@ start of the register dump
str r10, [r3] @ update the last_VFP_context pointer
@ Load the saved state back into the VFP
add r4, r10, #8*16+4
- ldmia r4, {r1, r2, r6, r8} @ load FPEXC, FPSCR, FPINST, FPINST2
+ ldmia r4, {r1, r5, r6, r8} @ load FPEXC, FPSCR, FPINST, FPINST2
VFPFLDMIA r10 @ reload the working registers while
@ FPEXC is in a safe state
tst r1, #FPEXC_FPV2 @ is there an FPINST2 to write?
VFPFMXR FPINST2, r8, NE @ FPINST2 if needed - avoids writing
@ nonexistant reg on rev0
VFPFMXR FPINST, r6
- VFPFMXR FPSCR, r2 @ restore status
+ VFPFMXR FPSCR, r5 @ restore status
check_for_exception:
tst r1, #FPEXC_EXCEPTION
@ out before setting an FPEXC that
@ stops us reading stuff
VFPFMXR FPEXC, r1 @ restore FPEXC last
- sub r5, r5, #4
- str r5, [sp, #S_PC] @ retry the instruction
+ sub r2, r2, #4
+ str r2, [sp, #S_PC] @ retry the instruction
mov pc, r9 @ we think we have handled things
look_for_VFP_exceptions:
tst r1, #FPEXC_EXCEPTION
bne process_exception
- VFPFMRX r2, FPSCR
- tst r2, #FPSCR_IXE @ IXE doesn't set FPEXC_EXCEPTION !
+ VFPFMRX r5, FPSCR
+ tst r5, #FPSCR_IXE @ IXE doesn't set FPEXC_EXCEPTION !
bne process_exception
@ Fall into hand on to next handler - appropriate coproc instr
process_exception:
DBGSTR "bounce"
- sub r5, r5, #4
- str r5, [sp, #S_PC] @ retry the instruction on exit from
+ sub r2, r2, #4
+ str r2, [sp, #S_PC] @ retry the instruction on exit from
@ the imprecise exception handling in
@ the support code
mov r2, sp @ nothing stacked - regdump is at TOS
Russell King
Keith Owens
+Also thanks to Nicholas Pitre for hints, and for the basis of our XIP support.
+
Currently maintaining the code are
Ian Molton (Maintainer / Archimedes)
# for more details.
#
# Copyright (C) 1995-2001 by Russell King
+# Copyright (c) 2004 Ian Molton
LDFLAGS_vmlinux :=-p -X
CPPFLAGS_vmlinux.lds = -DTEXTADDR=$(TEXTADDR) -DDATAADDR=$(DATAADDR)
CFLAGS +=-g
endif
-# Force -mno-fpu to be passed to the assembler. Some versions of gcc don't
-# do this with -msoft-float
-CFLAGS_BOOT :=-mapcs-26 -mcpu=arm3 -mshort-load-bytes -msoft-float -Wa,-mno-fpu -Uarm
-CFLAGS +=-mapcs-26 -mcpu=arm3 -mshort-load-bytes -msoft-float -Wa,-mno-fpu -Uarm
-AFLAGS +=-mapcs-26 -mcpu=arm3 -mno-fpu -msoft-float -Wa,-mno-fpu
-
-head-y := arch/arm26/machine/head.o arch/arm26/kernel/init_task.o
+CFLAGS_BOOT :=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm
+CFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float -Uarm
+AFLAGS +=-mapcs-26 -mcpu=arm3 -msoft-float
ifeq ($(CONFIG_XIP_KERNEL),y)
TEXTADDR := 0x03880000
DATAADDR := .
endif
+head-y := arch/arm26/kernel/head.o arch/arm26/kernel/init_task.o
+
ifeq ($(incdir-y),)
incdir-y :=
endif
echo '* zImage - Compressed kernel image (arch/$(ARCH)/boot/zImage)'
echo ' Image - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
echo ' bootpImage - Combined zImage and initial RAM disk'
+ echo ' xipImage - eXecute In Place capable image for ROM use (arch/$(ARCH)/boot/xipImage)'
echo ' initrd - Create an initial image'
echo ' install - Install uncompressed kernel'
echo ' zinstall - Install compressed kernel'
#
-# arch/arm/boot/Makefile
+# arch/arm26/boot/Makefile
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
ifeq ($(CONFIG_XIP_KERNEL),y)
$(obj)/xipImage: vmlinux FORCE
- $(OBJCOPY) -S -O binary -R .data -R .comment vmlinux vmlinux-text.bin
- $(OBJCOPY) -S -O binary -R .init -R .text -R .comment -R __ex_table -R __ksymtab vmlinux vmlinux-data.bin
+# $(OBJCOPY) -S -O binary -R .data -R .comment vmlinux vmlinux-text.bin
+# FIXME - where has .pci_fixup crept in from?
+ $(OBJCOPY) -S -O binary -R .data -R .pci_fixup -R .comment vmlinux vmlinux-text.bin
+ $(OBJCOPY) -S -O binary -R .init -R .text -R __ex_table -R .pci_fixup -R __ksymtab -R __ksymtab_gpl -R __kcrctab -R __kcrctab_gpl -R __param -R .comment vmlinux vmlinux-data.bin
cat vmlinux-text.bin vmlinux-data.bin > $@
$(RM) -f vmlinux-text.bin vmlinux-data.bin
@echo ' Kernel: $@ is ready'
/*
- * linux/arch/arm/boot/compressed/head.S
+ * linux/arch/arm26/boot/compressed/head.S
*
* Copyright (C) 1996-2002 Russell King
*
/*
- * linux/arch/arm/lib/ll_char_wr.S
+ * linux/arch/arm26/lib/ll_char_wr.S
*
* Copyright (C) 1995, 1996 Russell King.
*
/*
- * linux/include/asm-arm/arch-arc/uncompress.h
*
* Copyright (C) 1996 Russell King
*
/*
- * linux/arch/arm/boot/compressed/vmlinux.lds.in
+ * linux/arch/arm26/boot/compressed/vmlinux.lds.in
*
* Copyright (C) 2000 Russell King
*
#!/bin/sh
#
-# arch/arm/boot/install.sh
+# arch/arm26/boot/install.sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
#
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
# Adapted from code in arch/i386/boot/install.sh by Russell King
-# Stolen from arch/arm/boot/install.sh by Ian Molton
+# Stolen from arm32 by Ian Molton
#
# "make install" script for arm architecture
#
# Makefile for the linux kernel.
#
-ENTRY_OBJ = entry.o
-
# Object file lists.
-obj-y := compat.o dma.o entry.o irq.o \
- process.o ptrace.o semaphore.o setup.o signal.o sys_arm.o \
- time.o traps.o ecard.o time-acorn.o dma.o \
- ecard.o fiq.o time.o
+AFLAGS_head.o := -DTEXTADDR=$(TEXTADDR)
+
+obj-y := compat.o dma.o entry.o irq.o process.o ptrace.o \
+ semaphore.o setup.o signal.o sys_arm.o time.o traps.o \
+ ecard.o dma.o ecard.o fiq.o time.o
+
+extra-y := head.o init_task.o vmlinux.lds
obj-$(CONFIG_FIQ) += fiq.o
obj-$(CONFIG_MODULES) += armksyms.o
-extra-y := init_task.o vmlinux.lds
-
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#include <linux/module.h>
#include <linux/config.h>
#include <linux/module.h>
#include <linux/user.h>
#include <asm/elf.h>
#include <asm/io.h>
#include <asm/irq.h>
-#include <asm/pgalloc.h>
-//#include <asm/proc-fns.h>
#include <asm/processor.h>
#include <asm/semaphore.h>
#include <asm/system.h>
/*
* This has a special calling convention; it doesn't
* modify any of the usual registers, except for LR.
+ * FIXME - we used to use our own local version - looks to be in kernel/softirq now
*/
-extern void __do_softirq(void);
+//extern void __do_softirq(void);
#define EXPORT_SYMBOL_ALIAS(sym,orig) \
const char __kstrtab_##sym[] \
EXPORT_SYMBOL(kd_mksound);
#endif
-EXPORT_SYMBOL(__do_softirq);
+//EXPORT_SYMBOL(__do_softirq);
/* platform dependent support */
EXPORT_SYMBOL(dump_thread);
EXPORT_SYMBOL(sys_exit);
EXPORT_SYMBOL(sys_wait4);
- /* semaphores */
-EXPORT_SYMBOL(__down_failed);
-EXPORT_SYMBOL(__down_interruptible_failed);
-EXPORT_SYMBOL(__down_trylock_failed);
-EXPORT_SYMBOL(__up_wakeup);
-
EXPORT_SYMBOL(get_wchan);
#ifdef CONFIG_PREEMPT
int main(void)
{
- DEFINE(TSK_USED_MATH, offsetof(struct task_struct, used_math));
DEFINE(TSK_ACTIVE_MM, offsetof(struct task_struct, active_mm));
BLANK();
DEFINE(VMA_VM_MM, offsetof(struct vm_area_struct, vm_mm));
--- /dev/null
+/*
+ * linux/arch/arm26/kernel/calls.S
+ *
+ * Copyright (C) 2003 Ian Molton
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * FIXME
+ * This file is included twice in entry.S which may not be necessary
+ */
+
+//FIXME - clearly NR_syscalls is never defined here
+
+#ifndef NR_syscalls
+#define NR_syscalls 256
+#else
+
+__syscall_start:
+/* 0 */ .long sys_ni_syscall
+ .long sys_exit
+ .long sys_fork_wrapper
+ .long sys_read
+ .long sys_write
+/* 5 */ .long sys_open
+ .long sys_close
+ .long sys_ni_syscall /* was sys_waitpid */
+ .long sys_creat
+ .long sys_link
+/* 10 */ .long sys_unlink
+ .long sys_execve_wrapper
+ .long sys_chdir
+ .long sys_time /* used by libc4 */
+ .long sys_mknod
+/* 15 */ .long sys_chmod
+ .long sys_lchown16
+ .long sys_ni_syscall /* was sys_break */
+ .long sys_ni_syscall /* was sys_stat */
+ .long sys_lseek
+/* 20 */ .long sys_getpid
+ .long sys_mount
+ .long sys_oldumount /* used by libc4 */
+ .long sys_setuid16
+ .long sys_getuid16
+/* 25 */ .long sys_stime
+ .long sys_ptrace
+ .long sys_alarm /* used by libc4 */
+ .long sys_ni_syscall /* was sys_fstat */
+ .long sys_pause
+/* 30 */ .long sys_utime /* used by libc4 */
+ .long sys_ni_syscall /* was sys_stty */
+ .long sys_ni_syscall /* was sys_getty */
+ .long sys_access
+ .long sys_nice
+/* 35 */ .long sys_ni_syscall /* was sys_ftime */
+ .long sys_sync
+ .long sys_kill
+ .long sys_rename
+ .long sys_mkdir
+/* 40 */ .long sys_rmdir
+ .long sys_dup
+ .long sys_pipe
+ .long sys_times
+ .long sys_ni_syscall /* was sys_prof */
+/* 45 */ .long sys_brk
+ .long sys_setgid16
+ .long sys_getgid16
+ .long sys_ni_syscall /* was sys_signal */
+ .long sys_geteuid16
+/* 50 */ .long sys_getegid16
+ .long sys_acct
+ .long sys_umount
+ .long sys_ni_syscall /* was sys_lock */
+ .long sys_ioctl
+/* 55 */ .long sys_fcntl
+ .long sys_ni_syscall /* was sys_mpx */
+ .long sys_setpgid
+ .long sys_ni_syscall /* was sys_ulimit */
+ .long sys_ni_syscall /* was sys_olduname */
+/* 60 */ .long sys_umask
+ .long sys_chroot
+ .long sys_ustat
+ .long sys_dup2
+ .long sys_getppid
+/* 65 */ .long sys_getpgrp
+ .long sys_setsid
+ .long sys_sigaction
+ .long sys_ni_syscall /* was sys_sgetmask */
+ .long sys_ni_syscall /* was sys_ssetmask */
+/* 70 */ .long sys_setreuid16
+ .long sys_setregid16
+ .long sys_sigsuspend_wrapper
+ .long sys_sigpending
+ .long sys_sethostname
+/* 75 */ .long sys_setrlimit
+ .long sys_old_getrlimit /* used by libc4 */
+ .long sys_getrusage
+ .long sys_gettimeofday
+ .long sys_settimeofday
+/* 80 */ .long sys_getgroups16
+ .long sys_setgroups16
+ .long old_select /* used by libc4 */
+ .long sys_symlink
+ .long sys_ni_syscall /* was sys_lstat */
+/* 85 */ .long sys_readlink
+ .long sys_uselib
+ .long sys_swapon
+ .long sys_reboot
+ .long old_readdir /* used by libc4 */
+/* 90 */ .long old_mmap /* used by libc4 */
+ .long sys_munmap
+ .long sys_truncate
+ .long sys_ftruncate
+ .long sys_fchmod
+/* 95 */ .long sys_fchown16
+ .long sys_getpriority
+ .long sys_setpriority
+ .long sys_ni_syscall /* was sys_profil */
+ .long sys_statfs
+/* 100 */ .long sys_fstatfs
+ .long sys_ni_syscall
+ .long sys_socketcall
+ .long sys_syslog
+ .long sys_setitimer
+/* 105 */ .long sys_getitimer
+ .long sys_newstat
+ .long sys_newlstat
+ .long sys_newfstat
+ .long sys_ni_syscall /* was sys_uname */
+/* 110 */ .long sys_ni_syscall /* was sys_iopl */
+ .long sys_vhangup
+ .long sys_ni_syscall
+ .long sys_syscall /* call a syscall */
+ .long sys_wait4
+/* 115 */ .long sys_swapoff
+ .long sys_sysinfo
+ .long sys_ipc
+ .long sys_fsync
+ .long sys_sigreturn_wrapper
+/* 120 */ .long sys_clone_wapper
+ .long sys_setdomainname
+ .long sys_newuname
+ .long sys_ni_syscall
+ .long sys_adjtimex
+/* 125 */ .long sys_mprotect
+ .long sys_sigprocmask
+ .long sys_ni_syscall /* WAS: sys_create_module */
+ .long sys_init_module
+ .long sys_delete_module
+/* 130 */ .long sys_ni_syscall /* WAS: sys_get_kernel_syms */
+ .long sys_quotactl
+ .long sys_getpgid
+ .long sys_fchdir
+ .long sys_bdflush
+/* 135 */ .long sys_sysfs
+ .long sys_personality
+ .long sys_ni_syscall /* .long _sys_afs_syscall */
+ .long sys_setfsuid16
+ .long sys_setfsgid16
+/* 140 */ .long sys_llseek
+ .long sys_getdents
+ .long sys_select
+ .long sys_flock
+ .long sys_msync
+/* 145 */ .long sys_readv
+ .long sys_writev
+ .long sys_getsid
+ .long sys_fdatasync
+ .long sys_sysctl
+/* 150 */ .long sys_mlock
+ .long sys_munlock
+ .long sys_mlockall
+ .long sys_munlockall
+ .long sys_sched_setparam
+/* 155 */ .long sys_sched_getparam
+ .long sys_sched_setscheduler
+ .long sys_sched_getscheduler
+ .long sys_sched_yield
+ .long sys_sched_get_priority_max
+/* 160 */ .long sys_sched_get_priority_min
+ .long sys_sched_rr_get_interval
+ .long sys_nanosleep
+ .long sys_arm_mremap
+ .long sys_setresuid16
+/* 165 */ .long sys_getresuid16
+ .long sys_ni_syscall
+ .long sys_ni_syscall /* WAS: sys_query_module */
+ .long sys_poll
+ .long sys_nfsservctl
+/* 170 */ .long sys_setresgid16
+ .long sys_getresgid16
+ .long sys_prctl
+ .long sys_rt_sigreturn_wrapper
+ .long sys_rt_sigaction
+/* 175 */ .long sys_rt_sigprocmask
+ .long sys_rt_sigpending
+ .long sys_rt_sigtimedwait
+ .long sys_rt_sigqueueinfo
+ .long sys_rt_sigsuspend_wrapper
+/* 180 */ .long sys_pread64
+ .long sys_pwrite64
+ .long sys_chown16
+ .long sys_getcwd
+ .long sys_capget
+/* 185 */ .long sys_capset
+ .long sys_sigaltstack_wrapper
+ .long sys_sendfile
+ .long sys_ni_syscall
+ .long sys_ni_syscall
+/* 190 */ .long sys_vfork_wrapper
+ .long sys_getrlimit
+ .long sys_mmap2
+ .long sys_truncate64
+ .long sys_ftruncate64
+/* 195 */ .long sys_stat64
+ .long sys_lstat64
+ .long sys_fstat64
+ .long sys_lchown
+ .long sys_getuid
+/* 200 */ .long sys_getgid
+ .long sys_geteuid
+ .long sys_getegid
+ .long sys_setreuid
+ .long sys_setregid
+/* 205 */ .long sys_getgroups
+ .long sys_setgroups
+ .long sys_fchown
+ .long sys_setresuid
+ .long sys_getresuid
+/* 210 */ .long sys_setresgid
+ .long sys_getresgid
+ .long sys_chown
+ .long sys_setuid
+ .long sys_setgid
+/* 215 */ .long sys_setfsuid
+ .long sys_setfsgid
+ .long sys_getdents64
+ .long sys_pivot_root
+ .long sys_mincore
+/* 220 */ .long sys_madvise
+ .long sys_fcntl64
+ .long sys_ni_syscall /* TUX */
+ .long sys_ni_syscall /* WAS: sys_security */
+ .long sys_gettid
+/* 225 */ .long sys_readahead
+ .long sys_setxattr
+ .long sys_lsetxattr
+ .long sys_fsetxattr
+ .long sys_getxattr
+/* 230 */ .long sys_lgetxattr
+ .long sys_fgetxattr
+ .long sys_listxattr
+ .long sys_llistxattr
+ .long sys_flistxattr
+/* 235 */ .long sys_removexattr
+ .long sys_lremovexattr
+ .long sys_fremovexattr
+ .long sys_tkill
+__syscall_end:
+
+ .rept NR_syscalls - (__syscall_end - __syscall_start) / 4
+ .long sys_ni_syscall
+ .endr
+#endif
/*
- * linux/arch/arm/kernel/compat.c
+ * linux/arch/arm26/kernel/compat.c
*
* Copyright (C) 2001 Russell King
* 2003 Ian Molton
/*
- * linux/arch/arm/kernel/dma.c
+ * linux/arch/arm26/kernel/dma.c
*
* Copyright (C) 1995-2000 Russell King
* 2003 Ian Molton
#include <asm/dma.h>
-spinlock_t dma_spin_lock = SPIN_LOCK_UNLOCKED;
-
-#if MAX_DMA_CHANNELS > 0
+DEFINE_SPINLOCK(dma_spin_lock);
static dma_t dma_chan[MAX_DMA_CHANNELS];
arch_dma_init(dma_chan);
}
-#else
-
-int request_dma(dmach_t channel, const char *device_id)
-{
- return -EINVAL;
-}
-
-int get_dma_residue(dmach_t channel)
-{
- return 0;
-}
-
-#define GLOBAL_ALIAS(_a,_b) asm (".set " #_a "," #_b "; .globl " #_a)
-GLOBAL_ALIAS(disable_dma, get_dma_residue);
-GLOBAL_ALIAS(enable_dma, get_dma_residue);
-GLOBAL_ALIAS(free_dma, get_dma_residue);
-GLOBAL_ALIAS(get_dma_list, get_dma_residue);
-GLOBAL_ALIAS(set_dma_mode, get_dma_residue);
-GLOBAL_ALIAS(set_dma_page, get_dma_residue);
-GLOBAL_ALIAS(set_dma_count, get_dma_residue);
-GLOBAL_ALIAS(set_dma_addr, get_dma_residue);
-GLOBAL_ALIAS(set_dma_sg, get_dma_residue);
-GLOBAL_ALIAS(set_dma_speed, get_dma_residue);
-GLOBAL_ALIAS(init_dma, get_dma_residue);
-
-#endif
-
EXPORT_SYMBOL(request_dma);
EXPORT_SYMBOL(free_dma);
EXPORT_SYMBOL(enable_dma);
#include <asm/hardware.h>
#include <asm/io.h>
#include <asm/irq.h>
-#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
-#include <asm/irq.h>
#include <asm/irqchip.h>
#include <asm/tlbflush.h>
unsigned int len = req->length;
unsigned int off = req->address;
- if (req->ec->slot_no == 8) {
- /*
- * The card maintains an index which increments the address
- * into a 4096-byte page on each access. We need to keep
- * track of the counter.
- */
- static unsigned int index;
- unsigned int page;
-
- page = (off >> 12) * 4;
- if (page > 256 * 4)
- return;
-
- off &= 4095;
-
- /*
- * If we are reading offset 0, or our current index is
- * greater than the offset, reset the hardware index counter.
- */
- if (off == 0 || index > off) {
- *base_addr = 0;
- index = 0;
- }
-
- /*
- * Increment the hardware index counter until we get to the
- * required offset. The read bytes are discarded.
- */
- while (index < off) {
- unsigned char byte;
- byte = base_addr[page];
- index += 1;
- }
-
+ if (!req->use_loader || !req->ec->loader) {
+ off *= 4;
while (len--) {
- *buf++ = base_addr[page];
- index += 1;
+ *buf++ = base_addr[off];
+ off += 4;
}
} else {
-
- if (!req->use_loader || !req->ec->loader) {
- off *= 4;
- while (len--) {
- *buf++ = base_addr[off];
- off += 4;
- }
- } else {
- while(len--) {
- /*
- * The following is required by some
- * expansion card loader programs.
- */
- *(unsigned long *)0x108 = 0;
- *buf++ = ecard_loader_read(off++, base_addr,
- req->ec->loader);
- }
+ while(len--) {
+ /*
+ * The following is required by some
+ * expansion card loader programs.
+ */
+ *(unsigned long *)0x108 = 0;
+ *buf++ = ecard_loader_read(off++, base_addr,
+ req->ec->loader);
}
}
-
}
static void ecard_do_request(struct ecard_request *req)
for (ec = cards; ec; ec = ec->next) {
int pending;
- if (!ec->claimed || ec->irq == NO_IRQ || ec->slot_no == 8)
+ if (!ec->claimed || ec->irq == NO_IRQ)
continue;
if (ec->ops && ec->ops->irqpending)
unsigned long address = 0;
int slot = ec->slot_no;
- if (ec->slot_no == 8)
- return 0;
-
ectcr &= ~(1 << slot);
switch (type) {
case ECARD_MEMC:
- if (slot < 4)
- address = IO_EC_MEMC_BASE + (slot << 12);
+ address = IO_EC_MEMC_BASE + (slot << 12);
break;
case ECARD_IOC:
- if (slot < 4)
- address = IO_EC_IOC_BASE + (slot << 12);
- if (address)
- address += speed << 17;
+ address = IO_EC_IOC_BASE + (slot << 12) + (speed << 17);
break;
default:
unsigned int slot = ec->slot_no;
int i;
- if (slot < 4) {
- ec_set_resource(ec, ECARD_RES_MEMC,
- PODSLOT_MEMC_BASE + (slot << 14),
- PODSLOT_MEMC_SIZE, IORESOURCE_MEM);
- }
+ ec_set_resource(ec, ECARD_RES_MEMC,
+ PODSLOT_MEMC_BASE + (slot << 14),
+ PODSLOT_MEMC_SIZE, IORESOURCE_MEM);
for (i = 0; i < ECARD_RES_IOCSYNC - ECARD_RES_IOCSLOW; i++) {
ec_set_resource(ec, i + ECARD_RES_IOCSLOW,
/*
* hook the interrupt handlers
*/
- if (slot < 8) {
- ec->irq = 32 + slot;
- set_irq_chip(ec->irq, &ecard_chip);
- set_irq_handler(ec->irq, do_level_IRQ);
- set_irq_flags(ec->irq, IRQF_VALID);
- }
+ ec->irq = 32 + slot;
+ set_irq_chip(ec->irq, &ecard_chip);
+ set_irq_handler(ec->irq, do_level_IRQ);
+ set_irq_flags(ec->irq, IRQF_VALID);
for (ecp = &cards; *ecp; ecp = &(*ecp)->next);
printk("Probing expansion cards\n");
- for (slot = 0; slot < 4; slot ++) {
+ for (slot = 0; slot < MAX_ECARDS; slot ++) {
ecard_probe(slot, ECARD_IOC);
}
* Assembled from chunks of code in arch/arm
*
* Copyright (C) 2003 Ian Molton
+ * Based on the work of RMK.
*
*/
-#include <linux/config.h> /* for CONFIG_ARCH_xxxx */
#include <linux/linkage.h>
#include <asm/assembler.h>
#define BAD_IRQ 3
#define BAD_UNDEFINSTR 4
-#define PT_TRACESYS 0x00000002
-
@ OS version number used in SWIs
@ RISC OS is 0
@ RISC iX is 8
@
@ Stack format (ensured by USER_* and SVC_*)
+@ PSR and PC are combined on arm26
@
-#define S_FRAME_SIZE 72 @ FIXME: Really?
+
+#define S_OFF 8
+
#define S_OLD_R0 64
-#define S_PSR 60
#define S_PC 60
#define S_LR 56
#define S_SP 52
#define S_R2 8
#define S_R1 4
#define S_R0 0
-#define S_OFF 8
.macro save_user_regs
- str r0, [sp, #-4]!
- str lr, [sp, #-4]!
+ str r0, [sp, #-4]! @ Store SVC r0
+ str lr, [sp, #-4]! @ Store user mode PC
sub sp, sp, #15*4
- stmia sp, {r0 - lr}^
+ stmia sp, {r0 - lr}^ @ Store the other user-mode regs
mov r0, r0
.endm
.macro slow_restore_user_regs
- ldmia sp, {r0 - lr}^ @ restore the user regs
- mov r0, r0 @ no-op
+ ldmia sp, {r0 - lr}^ @ restore the user regs not including PC
+ mov r0, r0
ldr lr, [sp, #15*4] @ get user PC
add sp, sp, #15*4+8 @ free stack
movs pc, lr @ return
movs pc, lr
.endm
+ .macro save_svc_regs
+ str sp, [sp, #-16]!
+ str lr, [sp, #8]
+ str lr, [sp, #4]
+ stmfd sp!, {r0 - r12}
+ mov r0, #-1
+ str r0, [sp, #S_OLD_R0]
+ zero_fp
+ .endm
+
+ .macro save_svc_regs_irq
+ str sp, [sp, #-16]!
+ str lr, [sp, #4]
+ ldr lr, .LCirq
+ ldr lr, [lr]
+ str lr, [sp, #8]
+ stmfd sp!, {r0 - r12}
+ mov r0, #-1
+ str r0, [sp, #S_OLD_R0]
+ zero_fp
+ .endm
+
+ .macro restore_svc_regs
+ ldmfd sp, {r0 - pc}^
+ .endm
+
.macro mask_pc, rd, rm
bic \rd, \rm, #PCMASK
.endm
mov \rd, \rd, lsl #13
.endm
- /*
- * Like adr, but force SVC mode (if required)
- */
- .macro adrsvc, cond, reg, label
- adr\cond \reg, \label
- orr\cond \reg, \reg, #PSR_I_BIT | MODE_SVC26
- .endm
-
-
/*
* These are the registers used in the syscall handler, and allow us to
* have in theory up to 7 arguments to a function - r0 to r6.
*
- * r7 is reserved for the system call number for thumb mode.
- *
* Note that tbl == why is intentional.
*
* We must set at least "tsk" and "why" when calling ret_with_reschedule.
#error "Please fix"
#endif
-/*
- * Our do_softirq out of line code. See include/asm-arm26/hardirq.h for
- * the calling assembly.
- */
-ENTRY(__do_softirq)
- stmfd sp!, {r0 - r3, ip, lr}
- bl do_softirq
- ldmfd sp!, {r0 - r3, ip, pc}
-
- .align 5
-
/*
* This is the fast syscall return path. We do as little as
* possible here, and this includes saving r0 back into the SVC
bl syscall_trace
b ret_slow_syscall
-#include <asm/calls.h>
+// FIXME - is this strictly necessary?
+#include "calls.S"
/*=============================================================================
* SWI handler
tst ip, #_TIF_SYSCALL_TRACE @ are we tracing syscalls?
bne __sys_trace
- adrsvc al, lr, ret_fast_syscall @ return address
+ adral lr, ret_fast_syscall @ set return address
+ orral lr, lr, #PSR_I_BIT | MODE_SVC26 @ Force SVC mode on return
cmp scno, #NR_syscalls @ check upper syscall limit
ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
mov r0, #0 @ trace entry [IP = 0]
bl syscall_trace
- adrsvc al, lr, __sys_trace_return @ return address
+ adral lr, __sys_trace_return @ set return address
+ orral lr, lr, #PSR_I_BIT | MODE_SVC26 @ Force SVC mode on return
add r1, sp, #S_R0 + S_OFF @ pointer to regs
cmp scno, #NR_syscalls @ check upper syscall limit
ldmccia r1, {r0 - r3} @ have to reload r0 - r3
.type sys_call_table, #object
ENTRY(sys_call_table)
-#include <asm/calls.h>
+#include "calls.S"
/*============================================================================
* Special system call wrappers
.text
- .equ ioc_base_high, IOC_BASE & 0xff000000
- .equ ioc_base_low, IOC_BASE & 0x00ff0000
- .macro disable_fiq
- mov r12, #ioc_base_high
- .if ioc_base_low
- orr r12, r12, #ioc_base_low
- .endif
- strb r12, [r12, #0x38] @ Disable FIQ register
+ .macro handle_irq
+1: mov r4, #IOC_BASE
+ ldrb r6, [r4, #0x24] @ get high priority first
+ adr r5, irq_prio_h
+ teq r6, #0
+ ldreqb r6, [r4, #0x14] @ get low priority
+ adreq r5, irq_prio_l
+
+ teq r6, #0 @ If an IRQ happened...
+ ldrneb r0, [r5, r6] @ get IRQ number
+ movne r1, sp @ get struct pt_regs
+ adrne lr, 1b @ Set return address to 1b
+ orrne lr, lr, #PSR_I_BIT | MODE_SVC26 @ (and force SVC mode)
+ bne asm_do_IRQ @ process IRQ (if asserted)
.endm
- .macro get_irqnr_and_base, irqnr, base
- mov r4, #ioc_base_high @ point at IOC
- .if ioc_base_low
- orr r4, r4, #ioc_base_low
- .endif
- ldrb \irqnr, [r4, #0x24] @ get high priority first
- adr \base, irq_prio_h
- teq \irqnr, #0
- ldreqb \irqnr, [r4, #0x14] @ get low priority
- adreq \base, irq_prio_l
- .endm
/*
* Interrupt table (incorporates priority)
.endm
#if 1
-/* FIXME (well, ok, dont - but its easy to grep for :) */
/*
* Uncomment these if you wish to get more debugging into about data aborts.
+ * FIXME - I bet we can find a way to encode these and keep performance.
*/
#define FAULT_CODE_LDRSTRPOST 0x80
#define FAULT_CODE_LDRSTRPRE 0x40
#define FAULT_CODE_WRITE 0x02
#define FAULT_CODE_FORCECOW 0x01
-#define SVC_SAVE_ALL \
- str sp, [sp, #-16]! ;\
- str lr, [sp, #8] ;\
- str lr, [sp, #4] ;\
- stmfd sp!, {r0 - r12} ;\
- mov r0, #-1 ;\
- str r0, [sp, #S_OLD_R0] ;\
- zero_fp
-
-#define SVC_IRQ_SAVE_ALL \
- str sp, [sp, #-16]! ;\
- str lr, [sp, #4] ;\
- ldr lr, .LCirq ;\
- ldr lr, [lr] ;\
- str lr, [sp, #8] ;\
- stmfd sp!, {r0 - r12} ;\
- mov r0, #-1 ;\
- str r0, [sp, #S_OLD_R0] ;\
- zero_fp
-
-#define SVC_RESTORE_ALL \
- ldmfd sp, {r0 - pc}^
-
/*=============================================================================
* Undefined FIQs
*-----------------------------------------------------------------------------
/* FIXME - should we trap for a null pointer here? */
/* The SVC mode case */
-__und_svc: SVC_SAVE_ALL @ Non-user mode
+__und_svc: save_svc_regs @ Non-user mode
mask_pc r0, lr
and r2, lr, #3
sub r0, r0, #4
mov r1, sp
bl do_undefinstr
- SVC_RESTORE_ALL
+ restore_svc_regs
/* We get here if the FP emulator doesn't handle the undef instr.
* If the insn WAS handled, the emulator jumps to ret_from_exception by itself/
ldr lr, [sp,#S_PC] @ FIXME program to test this on. I think its
b .Lbug_undef @ broken at the moment though!)
-__pabt_invalid: SVC_SAVE_ALL
+__pabt_invalid: save_svc_regs
mov r0, sp @ Prefetch aborts are definitely *not*
mov r1, #BAD_PREFETCH @ allowed in non-user modes. We cant
and r2, lr, #3 @ recover from this problem.
b ret_from_exception
Laddrexcptn_not_user:
- SVC_SAVE_ALL
+ save_svc_regs
and r2, lr, #3
teq r2, #3
bne Laddrexcptn_illegal_mode
/*=============================================================================
* Interrupt (IRQ) handler
*-----------------------------------------------------------------------------
- * Note: if in user mode, then *no* kernel routine is running, so do not have
- * to save svc lr
- * (r13 points to irq temp save area)
+ * Note: if the IRQ was taken whilst in user mode, then *no* kernel routine
+ * is running, so we do not have to save the SVC lr.
+ *
+ * Entered in IRQ mode.
*/
-vector_IRQ: ldr r13, .LCirq @ I will leave this one in just in case...
- sub lr, lr, #4
- str lr, [r13]
- tst lr, #3
- bne __irq_svc
- teqp pc, #PSR_I_BIT | MODE_SVC26
+vector_IRQ: ldr sp, .LCirq @ Setup some temporary stack
+ sub lr, lr, #4
+ str lr, [sp] @ push return address
+
+ tst lr, #3
+ bne __irq_non_usr
+
+__irq_usr: teqp pc, #PSR_I_BIT | MODE_SVC26 @ Enter SVC mode
mov r0, r0
+
ldr lr, .LCirq
- ldr lr, [lr]
+ ldr lr, [lr] @ Restore lr for jump back to USR
+
save_user_regs
-1: get_irqnr_and_base r6, r5
- teq r6, #0
- ldrneb r0, [r5, r6] @ get IRQ number
- movne r1, sp
- @
- @ routine called with r0 = irq number, r1 = struct pt_regs *
- @
- adr lr, 1b
- orr lr, lr, #PSR_I_BIT | MODE_SVC26 @ Force SVC
- bne asm_do_IRQ
+ handle_irq
mov why, #0
- get_thread_info tsk @ FIXME - was r5, but seemed wrong.
+ get_thread_info tsk
b ret_to_user
+@ Place the IRQ priority table here so that the handle_irq macros above
+@ and below here can access it.
+
irq_prio_table
-__irq_svc: teqp pc, #PSR_I_BIT | MODE_SVC26
+__irq_non_usr: teqp pc, #PSR_I_BIT | MODE_SVC26 @ Enter SVC mode
mov r0, r0
- SVC_IRQ_SAVE_ALL
+
+ save_svc_regs_irq
+
and r2, lr, #3
teq r2, #3
- bne __irq_invalid
-1: get_irqnr_and_base r6, r5
- teq r6, #0
- ldrneb r0, [r5, r6] @ get IRQ number
- movne r1, sp
- @
- @ routine called with r0 = irq number, r1 = struct pt_regs *
- @
- adr lr, 1b
- orr lr, lr, #PSR_I_BIT | MODE_SVC26 @ Force SVC
- bne asm_do_IRQ @ Returns to 1b
- SVC_RESTORE_ALL
+ bne __irq_invalid @ IRQ not from SVC mode
+
+ handle_irq
+
+ restore_svc_regs
__irq_invalid: mov r0, sp
mov r1, #BAD_IRQ
b ret_from_exception
Ldata_not_user:
- SVC_SAVE_ALL
+ save_svc_regs
and r2, lr, #3
teq r2, #3
bne Ldata_illegal_mode
teqeqp pc, #MODE_SVC26
mask_pc r0, lr
bl Ldata_do
- SVC_RESTORE_ALL
+ restore_svc_regs
Ldata_illegal_mode:
mov r0, sp
--- /dev/null
+/*
+ * linux/arch/arm26/kernel/head.S
+ *
+ * Copyright (C) 1994-2000 Russell King
+ * Copyright (C) 2003 Ian Molton
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * 26-bit kernel startup code
+ */
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/mach-types.h>
+
+ .globl swapper_pg_dir
+ .equ swapper_pg_dir, 0x0207d000
+
+/*
+ * Entry point.
+ */
+ .section ".init.text",#alloc,#execinstr
+ENTRY(stext)
+
+__entry:
+ cmp pc, #0x02000000
+ ldrlt pc, LC0 @ if 0x01800000, call at 0x02080000
+ teq r0, #0 @ Check for old calling method
+ blne oldparams @ Move page if old
+
+ adr r0, LC0
+ ldmib r0, {r2-r5, sp} @ Setup stack (and fetch other values)
+
+ mov r0, #0 @ Clear BSS
+1: cmp r2, r3
+ strcc r0, [r2], #4
+ bcc 1b
+
+ bl detect_proc_type
+ str r0, [r4]
+ bl detect_arch_type
+ str r0, [r5]
+
+#ifdef CONFIG_XIP_KERNEL
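+/*
+ * For an XIP (execute-in-place) kernel the text stays in ROM, so only
+ * .data needs copying: it is stored in ROM after _endtext and is moved
+ * here to its RAM home between _sdata and __bss_start.
+ */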
+ ldr r3, ETEXT @ data section copy
+ ldr r4, SDATA
+ ldr r5, EDATA
+1:
+ ldr r6, [r3], #4
+ str r6, [r4], #4
+ cmp r4, r5
+ blt 1b
+#endif
+ mov fp, #0
+ b start_kernel
+
+LC0: .word _stext
+ .word __bss_start @ r2
+ .word _end @ r3
+ .word processor_id @ r4
+ .word __machine_arch_type @ r5
+ .word init_thread_union+8192 @ sp
+#ifdef CONFIG_XIP_KERNEL
+ETEXT: .word _endtext
+SDATA: .word _sdata
+EDATA: .word __bss_start
+#endif
+
+arm2_id:	.long	0x41560200	@ ARM2 and 250 don't have a CPUID
+arm250_id: .long 0x41560250 @ So we create some after probing for them
+ .align
+
+oldparams: mov r4, #0x02000000
+ add r3, r4, #0x00080000
+ add r4, r4, #0x0007c000
+1: ldmia r0!, {r5 - r12}
+ stmia r4!, {r5 - r12}
+ cmp r4, r3
+ blt 1b
+ mov pc, lr
+
+/*
+ * We need some way to automatically detect the difference between
+ * these two machines. Unfortunately, it is not possible to detect
+ * the presence of the SuperIO chip, because that will hang the old
+ * Archimedes machines solid.
+ */
+/* DAG: Outdated, these have been combined !!!!!!! */
+detect_arch_type:
+#if defined(CONFIG_ARCH_ARC)
+ mov r0, #MACH_TYPE_ARCHIMEDES
+#elif defined(CONFIG_ARCH_A5K)
+ mov r0, #MACH_TYPE_A5K
+#endif
+ mov pc, lr
+
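+/*
+ * CPU probing works by pointing the undefined instruction vector (the
+ * word at address 0x4) at 'continue' below and then trying instructions
+ * that older cores lack: SWP (absent on ARM2) and a CP#15 read (absent
+ * on ARM250). Whichever probe faults leaves the fake id loaded just
+ * before it in r0; if both succeed, r0 ends up with the real CP#15 id.
+ */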
+detect_proc_type:
+ mov ip, lr
+ mov r2, #0xea000000 @ Point undef instr to continuation
+ adr r0, continue - 12
+ orr r0, r2, r0, lsr #2
+ mov r1, #0
+ str r0, [r1, #4]
+ ldr r0, arm2_id
+	swp	r2, r2, [r1]		@ check for swp (ARM2 can't)
+	ldr	r0, arm250_id
+	mrc	15, 0, r3, c0, c0	@ check for CP#15 (ARM250 can't)
+ mov r0, r3
+continue: mov r2, #0xeb000000 @ Make undef vector loop
+ sub r2, r2, #2
+ str r2, [r1, #4]
+ mov pc, ip
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#include <linux/module.h>
#include <linux/config.h>
#include <linux/sched.h>
#include <linux/errno.h>
wake_up(&sem->wait);
}
-static spinlock_t semaphore_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(semaphore_lock);
void __sched __down(struct semaphore * sem)
{
* registers (r0 to r3 and lr), but not ip, as we use it as a return
* value in some cases..
*/
-asm(" .section .sched.text \n\
+asm(" .section .sched.text , #alloc, #execinstr \n\
.align 5 \n\
.globl __down_failed \n\
__down_failed: \n\
ldmfd sp!, {r0 - r3, pc}^ \n\
");
+EXPORT_SYMBOL(__down_failed);
+EXPORT_SYMBOL(__down_interruptible_failed);
+EXPORT_SYMBOL(__down_trylock_failed);
+EXPORT_SYMBOL(__up_wakeup);
+
* have a non-standard calling sequence on the Linux/arm
* platform.
*/
+#include <linux/module.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/slab.h>
out:
return error;
}
+
+/* FIXME - see if this is correct for arm26 */
+long execve(const char *filename, char **argv, char **envp)
+{
+ struct pt_regs regs;
+ int ret;
+	memset(&regs, 0, sizeof(struct pt_regs));
+	ret = do_execve((char *)filename, (char __user * __user *)argv, (char __user * __user *)envp, &regs);
+ if (ret < 0)
+ goto out;
+
+ /*
+ * Save argc to the register structure for userspace.
+ */
+ regs.ARM_r0 = ret;
+
+ /*
+ * We were successful. We won't be returning to our caller, but
+ * instead to user space by manipulating the kernel stack.
+ */
+ asm( "add r0, %0, %1\n\t"
+ "mov r1, %2\n\t"
+ "mov r2, %3\n\t"
+ "bl memmove\n\t" /* copy regs to top of stack */
+ "mov r8, #0\n\t" /* not a syscall */
+ "mov r9, %0\n\t" /* thread structure */
+ "mov sp, r0\n\t" /* reposition stack pointer */
+ "b ret_to_user"
+ :
+ : "r" (current_thread_info()),
+ "Ir" (THREAD_SIZE - 8 - sizeof(regs)),
+	  "r" (&regs),
+ "Ir" (sizeof(regs))
+ : "r0", "r1", "r2", "r3", "ip", "memory");
+
+ out:
+ return ret;
+}
+
+EXPORT_SYMBOL(execve);
#include <asm/hardware.h>
#include <asm/io.h>
#include <asm/irq.h>
-#include <asm/leds.h>
+#include <asm/ioc.h>
u64 jiffies_64 = INITIAL_JIFFIES;
extern unsigned long wall_jiffies;
/* this needs a better home */
-spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(rtc_lock);
/* change this if you have some constant time drift */
#define USECS_PER_JIFFY (1000000/HZ)
*/
int (*set_rtc)(void) = dummy_set_rtc;
-static unsigned long dummy_gettimeoffset(void)
+/*
+ * Get time offset based on the IOC's timer.
+ * FIXME - if this is called with interrupts off, why the shenanigans
+ * below?
+ */
+static unsigned long gettimeoffset(void)
{
- return 0;
+ unsigned int count1, count2, status;
+ long offset;
+
+ ioc_writeb (0, IOC_T0LATCH);
+ barrier ();
+ count1 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+ barrier ();
+ status = ioc_readb(IOC_IRQREQA);
+ barrier ();
+ ioc_writeb (0, IOC_T0LATCH);
+ barrier ();
+ count2 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+
+ offset = count2;
+ if (count2 < count1) {
+ /*
+ * We have not had an interrupt between reading count1
+ * and count2.
+ */
+ if (status & (1 << 5))
+ offset -= LATCH;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt between reading
+ * count1 and count2.
+ */
+ offset -= LATCH;
+ }
+
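+	/*
+	 * LATCH - offset is the number of timer ticks elapsed since the
+	 * last reload; scale by the jiffy length in microseconds
+	 * (tick_nsec / 1000) and divide by LATCH ticks-per-jiffy,
+	 * rounding to the nearest microsecond.
+	 */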
+ offset = (LATCH - offset) * (tick_nsec / 1000);
+ return (offset + LATCH/2) / LATCH;
}
/*
- * hook for getting the time offset. Note that it is
- * always called with interrupts disabled.
+ * Scheduler clock - returns current time in nanosec units.
*/
-unsigned long (*gettimeoffset)(void) = dummy_gettimeoffset;
+unsigned long long sched_clock(void)
+{
+ return (unsigned long long)jiffies * (1000000000 / HZ);
+}
static unsigned long next_rtc_update;
*/
void __init time_init(void)
{
- ioctime_init();
+ ioc_writeb(LATCH & 255, IOC_T0LTCHL);
+ ioc_writeb(LATCH >> 8, IOC_T0LTCHH);
+ ioc_writeb(0, IOC_T0GO);
+
setup_irq(IRQ_TIMER, &timer_irq);
}
/*
- * linux/arch/arm/kernel/traps.c
+ * linux/arch/arm26/kernel/traps.c
*
* Copyright (C) 1995-2002 Russell King
* Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds
* published by the Free Software Foundation.
*
* 'traps.c' handles hardware exceptions after we have saved some state in
- * 'linux/arch/arm/lib/traps.S'. Mostly a debugging aid, but will probably
+ * 'linux/arch/arm26/lib/traps.S'. Mostly a debugging aid, but will probably
* kill the offending process.
*/
+
+#include <linux/module.h>
#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <asm/atomic.h>
#include <asm/io.h>
-#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/system.h>
#include <asm/uaccess.h>
dump_mem("Stack: ", sp, 8192+(unsigned long)tsk->thread_info);
}
-EXPORT_SYMBOL(dump_stack);
-
void dump_stack(void)
{
#ifdef CONFIG_DEBUG_ERRORS
#endif
}
+EXPORT_SYMBOL(dump_stack);
+
//FIXME - was a static fn
void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
{
dump_mem("Stack: ", (unsigned long)sp, 8192+(unsigned long)task->thread_info);
}
-spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(die_lock);
/*
* This function is protected against re-entrancy.
return 0;
case NR(usr26):
- case NR(usr32):
break;
default:
_text = .; /* Text and read-only data */
*(.text)
SCHED_TEXT
+ LOCK_TEXT /* FIXME - borrowed from arm32 - check*/
*(.fixup)
*(.gnu.warning)
*(.rodata)
_sdata = .;
.data : {
+ . = ALIGN(8192);
/*
* first, the init thread union, aligned
- * to an 8192 byte boundary.
+ * to an 8192 byte boundary. (see arm26/kernel/init_task.c)
+	 * FIXME - should this be 32K aligned on arm26?
*/
*(.init.task)
_text = .; /* Text and read-only data */
*(.text)
SCHED_TEXT
+ LOCK_TEXT
*(.fixup)
*(.gnu.warning)
*(.rodata)
.data : {
/*
* first, the init task union, aligned
- * to an 8192 byte boundary.
+ * to an 8192 byte boundary. (see arm26/kernel/init_task.c)
*/
*(.init.task)
#
-# linux/arch/arm/lib/Makefile
+# linux/arch/arm26/lib/Makefile
#
# Copyright (C) 1995-2000 Russell King
#
lib-y := backtrace.o changebit.o csumipv6.o csumpartial.o \
csumpartialcopy.o csumpartialcopyuser.o clearbit.o \
copy_page.o delay.o findbit.o memchr.o memcpy.o \
- memset.o memzero.o setbit.o \
- strchr.o strrchr.o testchangebit.o \
+ memset.o memzero.o setbit.o \
+ strchr.o strrchr.o testchangebit.o \
testclearbit.o testsetbit.o getuser.o \
putuser.o ashldi3.o ashrdi3.o lshrdi3.o muldi3.o \
ucmpdi2.o udivdi3.o lib1funcs.o ecard.o io-acorn.o \
floppydma.o io-readsb.o io-writesb.o io-writesl.o \
- uaccess-kernel.o uaccess-user.o io-readsw-armv3.o \
- io-writesw-armv3.o io-readsl-armv3.o ecard.o \
- io-acorn.o floppydma.o
+ uaccess-kernel.o uaccess-user.o io-readsw.o \
+ io-writesw.o io-readsl.o ecard.o io-acorn.o \
+ floppydma.o
lib-n :=
/*
- * linux/arch/arm/lib/backtrace.S
+ * linux/arch/arm26/lib/backtrace.S
*
* Copyright (C) 1995, 1996 Russell King
*
/*
- * linux/arch/arm/lib/changebit.S
+ * linux/arch/arm26/lib/changebit.S
*
* Copyright (C) 1995-1996 Russell King
*
/*
- * linux/arch/arm/lib/clearbit.S
+ * linux/arch/arm26/lib/clearbit.S
*
* Copyright (C) 1995-1996 Russell King
*
/*
- * linux/arch/arm/lib/copypage.S
+ * linux/arch/arm26/lib/copypage.S
*
* Copyright (C) 1995-1999 Russell King
*
/*
- * linux/arch/arm/lib/csumipv6.S
+ * linux/arch/arm26/lib/csumipv6.S
*
* Copyright (C) 1995-1998 Russell King
*
/*
- * linux/arch/arm/lib/csumpartial.S
+ * linux/arch/arm26/lib/csumpartial.S
*
* Copyright (C) 1995-1998 Russell King
*
/*
- * linux/arch/arm/lib/csumpartialcopy.S
+ * linux/arch/arm26/lib/csumpartialcopy.S
*
* Copyright (C) 1995-1998 Russell King
*
/*
- * linux/arch/arm/lib/csumpartialcopygeneric.S
+ * linux/arch/arm26/lib/csumpartialcopygeneric.S
*
* Copyright (C) 1995-2001 Russell King
*
/*
- * linux/arch/arm/lib/delay.S
+ * linux/arch/arm26/lib/delay.S
*
* Copyright (C) 1995, 1996 Russell King
*
/*
- * linux/arch/arm/lib/ecard.S
+ * linux/arch/arm26/lib/ecard.S
*
* Copyright (C) 1995, 1996 Russell King
*
/*
- * linux/arch/arm/lib/floppydma.S
+ * linux/arch/arm26/lib/floppydma.S
*
* Copyright (C) 1995, 1996 Russell King
*
/*
- * linux/arch/arm/lib/getuser.S
+ * linux/arch/arm26/lib/getuser.S
*
* Copyright (C) 2001 Russell King
*
*/
#include <asm/asm_offsets.h>
#include <asm/thread_info.h>
+#include <asm/errno.h>
.global __get_user_1
__get_user_1:
mov r2, #0
__get_user_bad:
mov r1, #0
- mov r0, #-14
+ mov r0, #-EFAULT
ldmfd sp!, {pc}^
.section __ex_table, "a"
/*
- * linux/arch/arm/lib/io-acorn.S
+ * linux/arch/arm26/lib/io-acorn.S
*
* Copyright (C) 1995, 1996 Russell King
*
/*
- * linux/arch/arm/lib/io-readsb.S
+ * linux/arch/arm26/lib/io-readsb.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/io-writesb.S
+ * linux/arch/arm26/lib/io-writesb.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/io-writesl.S
+ * linux/arch/arm26/lib/io-writesl.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/memchr.S
+ * linux/arch/arm26/lib/memchr.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/memcpy.S
+ * linux/arch/arm26/lib/memcpy.S
*
* Copyright (C) 1995-1999 Russell King
*
/*
- * linux/arch/arm/lib/memset.S
+ * linux/arch/arm26/lib/memset.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/memzero.S
+ * linux/arch/arm26/lib/memzero.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/putuser.S
+ * linux/arch/arm26/lib/putuser.S
*
* Copyright (C) 2001 Russell King
*
*/
#include <asm/asm_offsets.h>
#include <asm/thread_info.h>
+#include <asm/errno.h>
.global __put_user_1
__put_user_1:
ldmfd sp!, {pc}^
__put_user_bad:
- mov r0, #-14
+ mov r0, #-EFAULT
mov pc, lr
.section __ex_table, "a"
/*
- * linux/arch/arm/lib/setbit.S
+ * linux/arch/arm26/lib/setbit.S
*
* Copyright (C) 1995-1996 Russell King
*
/*
- * linux/arch/arm/lib/strchr.S
+ * linux/arch/arm26/lib/strchr.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/strrchr.S
+ * linux/arch/arm26/lib/strrchr.S
*
* Copyright (C) 1995-2000 Russell King
*
/*
- * linux/arch/arm/lib/testchangebit.S
+ * linux/arch/arm26/lib/testchangebit.S
*
* Copyright (C) 1995-1996 Russell King
*
/*
- * linux/arch/arm/lib/testclearbit.S
+ * linux/arch/arm26/lib/testclearbit.S
*
* Copyright (C) 1995-1996 Russell King
*
/*
- * linux/arch/arm/lib/testsetbit.S
+ * linux/arch/arm26/lib/testsetbit.S
*
* Copyright (C) 1995-1996 Russell King
*
# Object file lists.
-obj-y := dma.o irq.o oldlatches.o \
- small_page.o
+obj-y := dma.o irq.o latches.o
-extra-y := head.o
-
-AFLAGS_head.o := -DTEXTADDR=$(TEXTADDR)
/*
- * linux/arch/arm/kernel/dma-arc.c
+ * linux/arch/arm26/kernel/dma.c
*
* Copyright (C) 1998-1999 Dave Gilbert / Russell King
* Copyright (C) 2003 Ian Molton
/*
- * linux/arch/arm/mach-arc/irq.c
+ * linux/arch/arm26/mach-arc/irq.c
*
* Copyright (C) 1996 Russell King
*
# Makefile for the linux arm26-specific parts of the memory manager.
#
-obj-y := init.o extable.o proc-funcs.o mm-memc.o fault.o
+obj-y := init.o extable.o proc-funcs.o memc.o fault.o \
+ small_page.o
/*
- * linux/arch/arm/mm/extable.c
+ * linux/arch/arm26/mm/extable.c
*/
#include <linux/config.h>
const struct exception_table_entry *fixup;
fixup = search_exception_tables(instruction_pointer(regs));
+
+ /*
+ * The kernel runs in SVC mode - make sure we keep running in SVC mode
+ * by frobbing the PSR appropriately (PSR and PC are in the same reg.
+ * on ARM26)
+ */
if (fixup)
regs->ARM_pc = fixup->fixup | PSR_I_BIT | MODE_SVC26;
/*
- * linux/arch/arm/mm/fault-common.c
+ * linux/arch/arm26/mm/fault.c
*
* Copyright (C) 1995 Linus Torvalds
* Modifications for ARM processor (c) 1995-2001 Russell King
tsk = current;
mm = tsk->mm;
- printk("do_page_fault: pid: %d %08x\n", tsk->pid, addr);
/*
* If we're in an interrupt or have no user
* context, we must not take the fault..
--- /dev/null
+/*
+ * linux/arch/arm26/mm/memc.c
+ *
+ * Copyright (C) 1998-2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Page table sludge for older ARM processor architectures.
+ */
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/page.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+
+#include <asm/map.h>
+
+#define MEMC_TABLE_SIZE (256*sizeof(unsigned long))
+
+kmem_cache_t *pte_cache, *pgd_cache;
+int page_nr;
+
+/*
+ * Allocate space for a page table and a MEMC table.
+ * Note that we place the MEMC
+ * table before the page directory. This means we can
+ * easily get to both tightly-associated data structures
+ * with a single pointer.
+ */
+static inline pgd_t *alloc_pgd_table(void)
+{
+ void *pg2k = kmem_cache_alloc(pgd_cache, GFP_KERNEL);
+
+ if (pg2k)
+ pg2k += MEMC_TABLE_SIZE;
+
+ return (pgd_t *)pg2k;
+}
+
+/*
+ * Free a page table. This function is the counterpart to get_pgd_slow
+ * below, not alloc_pgd_table above.
+ */
+void free_pgd_slow(pgd_t *pgd)
+{
+ unsigned long tbl = (unsigned long)pgd;
+
+ tbl -= MEMC_TABLE_SIZE;
+
+ kmem_cache_free(pgd_cache, (void *)tbl);
+}
+
+/*
+ * Allocate a new pgd and fill it in ready for use
+ *
+ * A new task's pgd is completely empty (all pages !present) except for:
+ *
+ * o The machine vectors at virtual address 0x0
+ * o The vmalloc region at the top of address space
+ *
+ */
+#define FIRST_KERNEL_PGD_NR (FIRST_USER_PGD_NR + USER_PTRS_PER_PGD)
+
+pgd_t *get_pgd_slow(struct mm_struct *mm)
+{
+ pgd_t *new_pgd, *init_pgd;
+ pmd_t *new_pmd, *init_pmd;
+ pte_t *new_pte, *init_pte;
+
+ new_pgd = alloc_pgd_table();
+ if (!new_pgd)
+ goto no_pgd;
+
+ /*
+ * This lock is here just to satisfy pmd_alloc and pte_lock
+ * FIXME: I bet we could avoid taking it pretty much altogether
+ */
+ spin_lock(&mm->page_table_lock);
+
+ /*
+ * On ARM, first page must always be allocated since it contains
+ * the machine vectors.
+ */
+ new_pmd = pmd_alloc(mm, new_pgd, 0);
+ if (!new_pmd)
+ goto no_pmd;
+
+ new_pte = pte_alloc_kernel(mm, new_pmd, 0);
+ if (!new_pte)
+ goto no_pte;
+
+ init_pgd = pgd_offset(&init_mm, 0);
+ init_pmd = pmd_offset(init_pgd, 0);
+ init_pte = pte_offset(init_pmd, 0);
+
+ set_pte(new_pte, *init_pte);
+
+ /*
+	 * The page table entries are zeroed
+ * when the table is created. (see the cache_ctor functions below)
+ * Now we need to plonk the kernel (vmalloc) area at the end of
+ * the address space. We copy this from the init thread, just like
+ * the init_pte we copied above...
+ */
+ memcpy(new_pgd + FIRST_KERNEL_PGD_NR, init_pgd + FIRST_KERNEL_PGD_NR,
+ (PTRS_PER_PGD - FIRST_KERNEL_PGD_NR) * sizeof(pgd_t));
+
+ spin_unlock(&mm->page_table_lock);
+
+ /* update MEMC tables */
+ cpu_memc_update_all(new_pgd);
+ return new_pgd;
+
+no_pte:
+ spin_unlock(&mm->page_table_lock);
+ pmd_free(new_pmd);
+ free_pgd_slow(new_pgd);
+ return NULL;
+
+no_pmd:
+ spin_unlock(&mm->page_table_lock);
+ free_pgd_slow(new_pgd);
+ return NULL;
+
+no_pgd:
+ return NULL;
+}
+
+/*
+ * No special code is required here.
+ */
+void setup_mm_for_reboot(char mode)
+{
+}
+
+/*
+ * This contains the code to setup the memory map on an ARM2/ARM250/ARM3
+ * o swapper_pg_dir = 0x0207d000
+ * o kernel proper starts at 0x02080000
+ * o create (allocate) a pte to contain the machine vectors
+ * o populate the pte (points to 0x02078000) (FIXME - is it zeroed?)
+ * o populate the init tasks page directory (pgd) with the new pte
+ * o zero the rest of the init tasks pgdir (FIXME - what about vmalloc?!)
+ */
+void __init memtable_init(struct meminfo *mi)
+{
+ pte_t *pte;
+ int i;
+
+ page_nr = max_low_pfn;
+
+ pte = alloc_bootmem_low_pages(PTRS_PER_PTE * sizeof(pte_t));
+ pte[0] = mk_pte_phys(PAGE_OFFSET + SCREEN_SIZE, PAGE_READONLY);
+ pmd_populate(&init_mm, pmd_offset(swapper_pg_dir, 0), pte);
+
+ for (i = 1; i < PTRS_PER_PGD; i++)
+ pgd_val(swapper_pg_dir[i]) = 0;
+}
+
+void __init iotable_init(struct map_desc *io_desc)
+{
+ /* nothing to do */
+}
+
+/*
+ * We never have holes in the memmap
+ */
+void __init create_memmap_holes(struct meminfo *mi)
+{
+}
+
+static void pte_cache_ctor(void *pte, kmem_cache_t *cache, unsigned long flags)
+{
+ memzero(pte, sizeof(pte_t) * PTRS_PER_PTE);
+}
+
+static void pgd_cache_ctor(void *pgd, kmem_cache_t *cache, unsigned long flags)
+{
+ memzero(pgd + MEMC_TABLE_SIZE, USER_PTRS_PER_PGD * sizeof(pgd_t));
+}
+
+void __init pgtable_cache_init(void)
+{
+ pte_cache = kmem_cache_create("pte-cache",
+ sizeof(pte_t) * PTRS_PER_PTE,
+ 0, 0, pte_cache_ctor, NULL);
+ if (!pte_cache)
+ BUG();
+
+ pgd_cache = kmem_cache_create("pgd-cache", MEMC_TABLE_SIZE +
+ sizeof(pgd_t) * PTRS_PER_PGD,
+ 0, 0, pgd_cache_ctor, NULL);
+ if (!pgd_cache)
+ BUG();
+}
/*
- * linux/arch/arm/mm/proc-arm2,3.S
+ * linux/arch/arm26/mm/proc-arm2,3.S
*
* Copyright (C) 1997-1999 Russell King
*
--- /dev/null
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+config FRV
+ bool
+ default y
+
+config UID16
+ bool
+ default y
+
+config RWSEM_GENERIC_SPINLOCK
+ bool
+ default y
+
+config RWSEM_XCHGADD_ALGORITHM
+ bool
+
+config GENERIC_FIND_NEXT_BIT
+ bool
+ default y
+
+config GENERIC_CALIBRATE_DELAY
+ bool
+ default n
+
+config GENERIC_HARDIRQS
+ bool
+ default n
+
+mainmenu "Fujitsu FR-V Kernel Configuration"
+
+source "init/Kconfig"
+
+
+menu "Fujitsu FR-V system setup"
+
+config MMU
+ bool "MMU support"
+ help
+	  This option switches on and off support for the FR-V MMU
+ (effectively switching between vmlinux and uClinux). Not all FR-V
+ CPUs support this. Currently only the FR451 has a sufficiently
+ featured MMU.
+
+config FRV_OUTOFLINE_ATOMIC_OPS
+ bool "Out-of-line the FRV atomic operations"
+ default n
+ help
+ Setting this option causes the FR-V atomic operations to be mostly
+ implemented out-of-line.
+
+ See Documentation/fujitsu/frv/atomic-ops.txt for more information.
+
+config HIGHMEM
+ bool "High memory support"
+ depends on MMU
+ default y
+ help
+ If you wish to use more than 256MB of memory with your MMU based
+ system, you will need to select this option. The kernel can only see
+ the memory between 0xC0000000 and 0xD0000000 directly... everything
+ else must be kmapped.
+
+ The arch is, however, capable of supporting up to 3GB of SDRAM.
+
+config HIGHPTE
+ bool "Allocate page tables in highmem"
+ depends on HIGHMEM
+ default y
+ help
+ The VM uses one page of memory for each page table. For systems
+ with a lot of RAM, this can be wasteful of precious low memory.
+ Setting this option will put user-space page tables in high memory.
+
+choice
+ prompt "uClinux kernel load address"
+ depends on !MMU
+ default UCPAGE_OFFSET_C0000000
+ help
+ This option sets the base address for the uClinux kernel. The kernel
+ will rearrange the SDRAM layout to start at this address, and move
+ itself to start there. It must be greater than 0, and it must be
+ sufficiently less than 0xE0000000 that the SDRAM does not intersect
+ the I/O region.
+
+ The base address must also be aligned such that the SDRAM controller
+ can decode it. For instance, a 512MB SDRAM bank must be 512MB aligned.
+
+config UCPAGE_OFFSET_20000000
+ bool "0x20000000"
+
+config UCPAGE_OFFSET_40000000
+ bool "0x40000000"
+
+config UCPAGE_OFFSET_60000000
+ bool "0x60000000"
+
+config UCPAGE_OFFSET_80000000
+ bool "0x80000000"
+
+config UCPAGE_OFFSET_A0000000
+ bool "0xA0000000"
+
+config UCPAGE_OFFSET_C0000000
+ bool "0xC0000000 (Recommended)"
+
+endchoice
+
+config PROTECT_KERNEL
+ bool "Protect core kernel against userspace"
+ depends on !MMU
+ default y
+ help
+ Selecting this option causes the uClinux kernel to change the
+	  protection attributes of the DAMPR register covering the core kernel
+	  image to prevent userspace from accessing the underlying memory
+	  directly.
+
+choice
+ prompt "CPU Caching mode"
+ default FRV_DEFL_CACHE_WBACK
+ help
+ This option determines the default caching mode for the kernel.
+
+	  Write-Back caching mode involves all reads and writes causing
+ the affected cacheline to be read into the cache first before being
+ operated upon. Memory is not then updated by a write until the cache
+ is filled and a cacheline needs to be displaced from the cache to
+ make room. Only at that point is it written back.
+
+ Write-Behind caching is similar to Write-Back caching, except that a
+ write won't fetch a cacheline into the cache if there isn't already
+ one there; it will write directly to memory instead.
+
+ Write-Through caching only fetches cachelines from memory on a
+ read. Writes always get written directly to memory. If the affected
+ cacheline is also in cache, it will be updated too.
+
+	  The final option is to turn off caching entirely.
+
+ Note that not all CPUs support Write-Behind caching. If the CPU on
+ which the kernel is running doesn't, it'll fall back to Write-Back
+ caching.
+
+config FRV_DEFL_CACHE_WBACK
+ bool "Write-Back"
+
+config FRV_DEFL_CACHE_WBEHIND
+ bool "Write-Behind"
+
+config FRV_DEFL_CACHE_WTHRU
+ bool "Write-Through"
+
+config FRV_DEFL_CACHE_DISABLED
+ bool "Disabled"
+
+endchoice
+
+menu "CPU core support"
+
+config CPU_FR401
+ bool "Include FR401 core support"
+ depends on !MMU
+ default y
+ help
+ This enables support for the FR401, FR401A and FR403 CPUs
+
+config CPU_FR405
+ bool "Include FR405 core support"
+ depends on !MMU
+ default y
+ help
+ This enables support for the FR405 CPU
+
+config CPU_FR451
+ bool "Include FR451 core support"
+ default y
+ help
+ This enables support for the FR451 CPU
+
+config CPU_FR451_COMPILE
+ bool "Specifically compile for FR451 core"
+ depends on CPU_FR451 && !CPU_FR401 && !CPU_FR405 && !CPU_FR551
+ default y
+ help
+ This causes appropriate flags to be passed to the compiler to
+ optimise for the FR451 CPU
+
+config CPU_FR551
+ bool "Include FR551 core support"
+ depends on !MMU
+ default y
+ help
+ This enables support for the FR555 CPU
+
+config CPU_FR551_COMPILE
+ bool "Specifically compile for FR551 core"
+ depends on CPU_FR551 && !CPU_FR401 && !CPU_FR405 && !CPU_FR451
+ default y
+ help
+ This causes appropriate flags to be passed to the compiler to
+ optimise for the FR555 CPU
+
+config FRV_L1_CACHE_SHIFT
+ int
+ default "5" if CPU_FR401 || CPU_FR405 || CPU_FR451
+ default "6" if CPU_FR551
+
+endmenu
+
+choice
+ prompt "System support"
+ default MB93091_VDK
+
+config MB93091_VDK
+ bool "MB93091 CPU board with or without motherboard"
+
+config MB93093_PDK
+ bool "MB93093 PDK unit"
+
+endchoice
+
+if MB93091_VDK
+choice
+ prompt "Motherboard support"
+ default MB93090_MB00
+
+config MB93090_MB00
+ bool "Use the MB93090-MB00 motherboard"
+ help
+ Select this option if the MB93091 CPU board is going to be used with
+ a MB93090-MB00 VDK motherboard
+
+config MB93091_NO_MB
+ bool "Use standalone"
+ help
+ Select this option if the MB93091 CPU board is going to be used
+ without a motherboard
+
+endchoice
+endif
+
+choice
+ prompt "GP-Relative data support"
+ default GPREL_DATA_8
+ help
+ This option controls what data, if any, should be placed in the GP
+ relative data sections. Using this means that the compiler can
+ generate accesses to the data using GR16-relative addressing which
+ is faster than absolute instructions and saves space (2 instructions
+ per access).
+
+ However, the GPREL region is limited in size because the immediate
+ value used in the load and store instructions is limited to a 12-bit
+ signed number.
+
+ So if the linker starts complaining that accesses to GPREL data are
+ out of range, try changing this option from the default.
+
+ Note that modules will always be compiled with this feature disabled
+ as the module data will not be in range of the GP base address.
+
+config GPREL_DATA_8
+ bool "Put data objects of up to 8 bytes into GP-REL"
+
+config GPREL_DATA_4
+ bool "Put data objects of up to 4 bytes into GP-REL"
+
+config GPREL_DATA_NONE
+ bool "Don't use GP-REL"
+
+endchoice
+
+config PCI
+ bool "Use PCI"
+ depends on MB93090_MB00
+ default y
+ help
+ Some FR-V systems (such as the MB93090-MB00 VDK) have PCI
+ onboard. If you have one of these boards and you wish to use the PCI
+ facilities, say Y here.
+
+ The PCI-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>, contains valuable
+ information about which PCI hardware does work under Linux and which
+ doesn't.
+
+config RESERVE_DMA_COHERENT
+ bool "Reserve DMA coherent memory"
+ depends on PCI && !MMU
+ default y
+ help
+ Many PCI drivers require access to uncached memory for DMA device
+ communications (such as is done with some Ethernet buffer rings). If
+ a fully featured MMU is available, this can be done through page
+ table settings, but if not, a region has to be set aside and marked
+ with a special DAMPR register.
+
+ Setting this option causes uClinux to set aside a portion of the
+ available memory for use in this manner. The memory will then be
+ unavailable for normal kernel use.
+
+source "drivers/pci/Kconfig"
+
+config PCMCIA
+ tristate "Use PCMCIA"
+ help
+ Say Y here if you want to attach PCMCIA- or PC-cards to your FR-V
+ board. These are credit-card size devices such as network cards,
+	  modems or hard drives often used with laptop computers. There are
+ actually two varieties of these cards: the older 16 bit PCMCIA cards
+ and the newer 32 bit CardBus cards. If you want to use CardBus
+ cards, you need to say Y here and also to "CardBus support" below.
+
+ To use your PC-cards, you will need supporting software from David
+	  Hinds' pcmcia-cs package (see the file <file:Documentation/Changes>
+ for location). Please also read the PCMCIA-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>.
+
+ To compile this driver as modules, choose M here: the
+ modules will be called pcmcia_core and ds.
+
+#config MATH_EMULATION
+# bool "Math emulation support (EXPERIMENTAL)"
+# depends on EXPERIMENTAL
+# help
+# At some point in the future, this will cause floating-point math
+# instructions to be emulated by the kernel on machines that lack a
+# floating-point math coprocessor. Thrill-seekers and chronically
+# sleep-deprived psychotic hacker types can say Y now, everyone else
+# should probably wait a while.
+
+menu "Power management options"
+source kernel/power/Kconfig
+endmenu
+
+endmenu
+
+
+menu "Executable formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+menu "Kernel hacking"
+
+config DEBUG_KERNEL
+ bool "Kernel debugging"
+ help
+ Say Y here if you are developing drivers or trying to debug and
+ identify kernel problems.
+
+config EARLY_PRINTK
+ bool "Early printk"
+ depends on EMBEDDED && DEBUG_KERNEL
+ default n
+ help
+ Write kernel log output directly into the VGA buffer or to a serial
+ port.
+
+ This is useful for kernel debugging when your machine crashes very
+ early before the console code is initialized. For normal operation
+ it is not recommended because it looks ugly and doesn't cooperate
+	  with klogd/syslogd or the X server. You should normally say N here,
+ unless you want to debug such a crash.
+
+config DEBUG_STACKOVERFLOW
+ bool "Check for stack overflows"
+ depends on DEBUG_KERNEL
+
+config DEBUG_SLAB
+ bool "Debug memory allocations"
+ depends on DEBUG_KERNEL
+ help
+ Say Y here to have the kernel do limited verification on memory
+ allocation as well as poisoning memory on free to catch use of freed
+ memory.
+
+config MAGIC_SYSRQ
+ bool "Magic SysRq key"
+ depends on DEBUG_KERNEL
+ help
+ If you say Y here, you will have some control over the system even
+ if the system crashes for example during kernel debugging (e.g., you
+ will be able to flush the buffer cache to disk, reboot the system
+ immediately or dump some status information). This is accomplished
+ by pressing various keys while holding SysRq (Alt+PrintScreen). It
+ also works on a serial console (on PC hardware at least), if you
+ send a BREAK and then within 5 seconds a command keypress. The
+ keys are documented in <file:Documentation/sysrq.txt>. Don't say Y
+ unless you really know what this hack does.
+
+config DEBUG_SPINLOCK
+ bool "Spinlock debugging"
+ depends on DEBUG_KERNEL
+ help
+ Say Y here and build SMP to catch missing spinlock initialization
+ and certain other kinds of spinlock errors commonly made. This is
+ best used in conjunction with the NMI watchdog so that spinlock
+ deadlocks are also debuggable.
+
+config DEBUG_SPINLOCK_SLEEP
+ bool "Sleep-inside-spinlock checking"
+ depends on DEBUG_KERNEL
+ help
+ If you say Y here, various routines which may sleep will become very
+ noisy if they are called with a spinlock held.
+
+config DEBUG_PAGEALLOC
+ bool "Page alloc debugging"
+ depends on DEBUG_KERNEL
+ help
+ Unmap pages from the kernel linear mapping after free_pages().
+ This results in a large slowdown, but helps to find certain types
+ of memory corruptions.
+
+config DEBUG_HIGHMEM
+ bool "Highmem debugging"
+ depends on DEBUG_KERNEL && HIGHMEM
+ help
+	  This option enables additional error checking for high memory systems.
+ Disable for production systems.
+
+config DEBUG_INFO
+ bool "Compile the kernel with debug info"
+ depends on DEBUG_KERNEL
+ help
+ If you say Y here the resulting kernel image will include
+ debugging info resulting in a larger kernel image.
+ Say Y here only if you plan to use gdb to debug the kernel.
+ If you don't debug the kernel, you can say N.
+
+config DEBUG_BUGVERBOSE
+ bool "Verbose BUG() reporting"
+ depends on DEBUG_KERNEL
+
+config FRAME_POINTER
+ bool "Compile the kernel with frame pointers"
+ depends on DEBUG_KERNEL
+ help
+ If you say Y here the resulting kernel image will be slightly larger
+ and slower, but it will give very useful debugging information.
+ If you don't debug the kernel, you can say N, but we may not be able
+ to solve problems without frame pointers.
+
+config GDBSTUB
+ bool "Remote GDB kernel debugging"
+ depends on DEBUG_KERNEL
+ select DEBUG_INFO
+ select FRAME_POINTER
+ help
+ If you say Y here, it will be possible to remotely debug the kernel
+ using gdb. This enlarges your kernel ELF image disk size by several
+	  megabytes and requires a machine with more than 16 MB (preferably
+	  32 MB) of RAM to avoid excessive linking time. This is only useful
+	  for kernel hackers. If unsure, say N.
+
+choice
+ prompt "GDB stub port"
+ default GDBSTUB_UART1
+ depends on GDBSTUB
+ help
+ Select the on-CPU port used for GDB-stub
+
+config GDBSTUB_UART0
+ bool "/dev/ttyS0"
+
+config GDBSTUB_UART1
+ bool "/dev/ttyS1"
+
+endchoice
+
+config GDBSTUB_IMMEDIATE
+ bool "Break into GDB stub immediately"
+ depends on GDBSTUB
+ help
+ If you say Y here, GDB stub will break into the program as soon as
+ possible, leaving the program counter at the beginning of
+ start_kernel() in init/main.c.
+
+config GDB_CONSOLE
+ bool "Console output to GDB"
+ depends on KGDB
+ help
+ If you are using GDB for remote debugging over a serial port and
+ would like kernel messages to be formatted into GDB $O packets so
+ that GDB prints them as program output, say 'Y'.
+
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
--- /dev/null
+#
+# frv/Makefile
+#
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (c) 2003, 2004 Red Hat Inc.
+# - Written by David Howells <dhowells@redhat.com>
+# - Derived from arch/m68knommu/Makefile,
+# Copyright (c) 1999,2001 D. Jeff Dionne <jeff@lineo.ca>,
+# Rt-Control Inc. / Lineo, Inc.
+#
+# Copyright (C) 1998,1999 D. Jeff Dionne <jeff@uclinux.org>,
+# Kenneth Albanowski <kjahds@kjahds.com>,
+#
+# Based on arch/m68k/Makefile:
+# Copyright (C) 1994 by Hamish Macdonald
+#
+
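+# locate the compiler's specs file so that the toolchain installation
+# directory can be derived from it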
+CCSPECS := $(shell $(CC) -v 2>&1 | grep "^Reading specs from " | head -1 | cut -c20-)
+CCDIR := $(strip $(patsubst %/specs,%,$(CCSPECS)))
+CPUCLASS := fr400
+
+# test for cross compiling
+COMPILE_ARCH = $(shell uname -m)
+
+ifdef CONFIG_MMU
+UTS_SYSNAME = -DUTS_SYSNAME=\"Linux\"
+else
+UTS_SYSNAME = -DUTS_SYSNAME=\"uClinux\"
+endif
+
+ARCHMODFLAGS += -G0 -mlong-calls
+
+ifdef CONFIG_GPREL_DATA_8
+CFLAGS += -G8
+else
+ifdef CONFIG_GPREL_DATA_4
+CFLAGS += -G4
+else
+ifdef CONFIG_GPREL_DATA_NONE
+CFLAGS += -G0
+endif
+endif
+endif
+
+#LDFLAGS_vmlinux := -Map linkmap.txt
+
+ifdef CONFIG_GC_SECTIONS
+CFLAGS += -ffunction-sections -fdata-sections
+LINKFLAGS += --gc-sections
+endif
+
+ifndef CONFIG_FRAME_POINTER
+CFLAGS += -mno-linked-fp
+endif
+
+ifdef CONFIG_CPU_FR451_COMPILE
+CFLAGS += -mcpu=fr450
+AFLAGS += -mcpu=fr450
+ASFLAGS += -mcpu=fr450
+else
+ifdef CONFIG_CPU_FR551_COMPILE
+CFLAGS += -mcpu=fr550
+AFLAGS += -mcpu=fr550
+ASFLAGS += -mcpu=fr550
+else
+CFLAGS += -mcpu=fr400
+AFLAGS += -mcpu=fr400
+ASFLAGS += -mcpu=fr400
+endif
+endif
+
+# pretend the kernel is going to run on an FR400 with no media-fp unit
+# - reserve CC3 for use with atomic ops
+# - all the extra registers are dealt with only at context switch time
+CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media
+CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15
+AFLAGS += -mno-fdpic
+ASFLAGS += -mno-fdpic
+
+# make sure the .S files get compiled with debug info
+# and disable optimisations that are unhelpful whilst debugging
+ifdef CONFIG_DEBUG_INFO
+CFLAGS += -O1
+AFLAGS += -Wa,--gdwarf2
+ASFLAGS += -Wa,--gdwarf2
+endif
+
+head-y := arch/frv/kernel/head.o arch/frv/kernel/init_task.o
+
+core-y += arch/frv/kernel/ arch/frv/mm/
+libs-y += arch/frv/lib/
+
+core-$(CONFIG_MB93090_MB00) += arch/frv/mb93090-mb00/
+
+all: Image
+
+Image: vmlinux
+ $(Q)$(MAKE) $(build)=arch/frv/boot $@
+
+bootstrap:
+ $(Q)$(MAKEBOOT) bootstrap
+
+archmrproper:
+ $(Q)$(MAKE) -C arch/frv/boot mrproper
+
+archclean:
+ $(Q)$(MAKE) -C arch/frv/boot clean
+
+archdep: scripts/mkdep symlinks
+ $(Q)$(MAKE) -C arch/frv/boot dep
--- /dev/null
+#
+# arch/arm/boot/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995-2000 Russell King
+#
+
+SYSTEM =$(TOPDIR)/$(LINUX)
+
+ZTEXTADDR = 0x02080000
+PARAMS_PHYS = 0x0207c000
+INITRD_PHYS = 0x02180000
+INITRD_VIRT = 0x02180000
+
+#
+# If you don't define ZRELADDR above,
+# then it defaults to ZTEXTADDR
+#
+ifeq ($(ZRELADDR),)
+ZRELADDR = $(ZTEXTADDR)
+endif
+
+export SYSTEM ZTEXTADDR ZBSSADDR ZRELADDR INITRD_PHYS INITRD_VIRT PARAMS_PHYS
+
+Image: $(obj)/Image
+
+targets: $(obj)/Image
+
+$(obj)/Image: vmlinux FORCE
+ $(OBJCOPY) -O binary -R .note -R .comment -S vmlinux $@
+
+#$(obj)/Image: $(CONFIGURE) $(SYSTEM)
+# $(OBJCOPY) -O binary -R .note -R .comment -g -S $(SYSTEM) $@
+
+bzImage: zImage
+
+zImage: $(CONFIGURE) compressed/$(LINUX)
+ $(OBJCOPY) -O binary -R .note -R .comment -S compressed/$(LINUX) $@
+
+bootpImage: bootp/bootp
+ $(OBJCOPY) -O binary -R .note -R .comment -S bootp/bootp $@
+
+compressed/$(LINUX): $(TOPDIR)/$(LINUX) dep
+ @$(MAKE) -C compressed $(LINUX)
+
+bootp/bootp: zImage initrd
+ @$(MAKE) -C bootp bootp
+
+initrd:
+ @test "$(INITRD_VIRT)" != "" || (echo This architecture does not support INITRD; exit -1)
+ @test "$(INITRD)" != "" || (echo You must specify INITRD; exit -1)
+
+#
+# installation
+#
+install: $(CONFIGURE) Image
+ sh ./install.sh $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) Image $(TOPDIR)/System.map "$(INSTALL_PATH)"
+
+zinstall: $(CONFIGURE) zImage
+ sh ./install.sh $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) zImage $(TOPDIR)/System.map "$(INSTALL_PATH)"
+
+#
+# miscellany
+#
+mrproper clean:
+ $(RM) Image zImage bootpImage
+# @$(MAKE) -C compressed clean
+# @$(MAKE) -C bootp clean
+
+dep:
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+
+heads-y := head-uc-fr401.o head-uc-fr451.o head-uc-fr555.o
+heads-$(CONFIG_MMU) := head-mmu-fr451.o
+
+extra-y:= head.o init_task.o vmlinux.lds
+
+obj-y := $(heads-y) entry.o entry-table.o break.o switch_to.o kernel_thread.o \
+ process.o traps.o ptrace.o signal.o dma.o \
+ sys_frv.o time.o semaphore.o setup.o frv_ksyms.o \
+ debug-stub.o irq.o irq-routing.o sleep.o uaccess.o
+
+obj-$(CONFIG_GDBSTUB) += gdb-stub.o gdb-io.o
+
+obj-$(CONFIG_MB93091_VDK) += irq-mb93091.o
+obj-$(CONFIG_MB93093_PDK) += irq-mb93093.o
+obj-$(CONFIG_FUJITSU_MB93493) += irq-mb93493.o
+obj-$(CONFIG_PM) += pm.o cmode.o
+obj-$(CONFIG_MB93093_PDK) += pm-mb93093.o
+obj-$(CONFIG_SYSCTL) += sysctl.o
--- /dev/null
+/* break.S: Break interrupt handling (kept separate from entry.S)
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/ptrace.h>
+#include <asm/spr-regs.h>
+
+#include <asm/errno.h>
+
+#
+# the break handler has its own stack
+#
+ .section .bss.stack
+ .globl __break_user_context
+ .balign 8192
+__break_stack:
+ .space (8192 - (USER_CONTEXT_SIZE + REG__DEBUG_XTRA)) & ~7
+__break_stack_tos:
+ .space REG__DEBUG_XTRA
+__break_user_context:
+ .space USER_CONTEXT_SIZE
+
+#
+# miscellaneous variables
+#
+ .section .bss
+#ifdef CONFIG_MMU
+ .globl __break_tlb_miss_real_return_info
+__break_tlb_miss_real_return_info:
+ .balign 8
+ .space 2*4 /* saved PCSR, PSR for TLB-miss handler fixup */
+#endif
+
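+# flag that the debugger can set to request that single-stepping be allowed to
+# follow execution into exception prologues instead of applying the usual
+# skip-over-the-trap-table workaround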
+__break_trace_through_exceptions:
+ .space 4
+
+#define CS2_ECS1 0xe1200000
+#define CS2_USERLED 0x4
+
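+# LEDS: debugging aid - the commented-out stores below would write ~\val to
+# what is presumably the board's user LED/status latch so that progress
+# through the break handler can be traced on hardware; as shipped it is a no-op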
+.macro LEDS val,reg
+# sethi.p %hi(CS2_ECS1+CS2_USERLED),gr30
+# setlo %lo(CS2_ECS1+CS2_USERLED),gr30
+# setlos #~\val,\reg
+# st \reg,@(gr30,gr0)
+# setlos #0x5555,\reg
+# sethi.p %hi(0xffc00100),gr30
+# setlo %lo(0xffc00100),gr30
+# sth \reg,@(gr30,gr0)
+# membar
+.endm
+
+###############################################################################
+#
+# entry point for Break Exceptions/Interrupts
+#
+###############################################################################
+ .text
+ .balign 4
+ .globl __entry_break
+__entry_break:
+#ifdef CONFIG_MMU
+ movgs gr31,scr3
+#endif
+ LEDS 0x1001,gr31
+
+ sethi.p %hi(__break_user_context),gr31
+ setlo %lo(__break_user_context),gr31
+
+ stdi gr2,@(gr31,#REG_GR(2))
+ movsg ccr,gr3
+ sti gr3,@(gr31,#REG_CCR)
+
+ # catch the return from a TLB-miss handler that had single-step disabled
+ # traps will be enabled, so we have to do this now
+#ifdef CONFIG_MMU
+ movsg bpcsr,gr3
+ sethi.p %hi(__break_tlb_miss_return_breaks_here),gr2
+ setlo %lo(__break_tlb_miss_return_breaks_here),gr2
+ subcc gr2,gr3,gr0,icc0
+ beq icc0,#2,__break_return_singlestep_tlbmiss
+#endif
+
+ # determine whether we have stepped through into an exception
+ # - we need to take special action to suspend h/w single stepping if we've done
+ # that, so that the gdbstub doesn't get bogged down endlessly stepping through
+ # external interrupt handling
+ movsg bpsr,gr3
+ andicc gr3,#BPSR_BET,gr0,icc0
+ bne icc0,#2,__break_maybe_userspace /* jump if PSR.ET was 1 */
+
+ LEDS 0x1003,gr2
+
+ movsg brr,gr3
+ andicc gr3,#BRR_ST,gr0,icc0
+ andicc.p gr3,#BRR_SB,gr0,icc1
+ bne icc0,#2,__break_step /* jump if single-step caused break */
+ beq icc1,#2,__break_continue /* jump if BREAK didn't cause break */
+
+ LEDS 0x1007,gr2
+
+ # handle special breaks
+ movsg bpcsr,gr3
+
+ sethi.p %hi(__entry_return_singlestep_breaks_here),gr2
+ setlo %lo(__entry_return_singlestep_breaks_here),gr2
+ subcc gr2,gr3,gr0,icc0
+ beq icc0,#2,__break_return_singlestep
+
+ bra __break_continue
+
+
+###############################################################################
+#
+# handle BREAK instruction in kernel-mode exception epilogue
+#
+###############################################################################
+__break_return_singlestep:
+ LEDS 0x100f,gr2
+
+ # special break insn requests single-stepping to be turned back on
+ # HERE RETT
+ # PSR.ET 0 0
+ # PSR.PS old PSR.S ?
+ # PSR.S 1 1
+ # BPSR.ET 0 1 (can't have caused orig excep otherwise)
+ # BPSR.BS 1 old PSR.S
+ movsg dcr,gr2
+ sethi.p %hi(DCR_SE),gr3
+ setlo %lo(DCR_SE),gr3
+ or gr2,gr3,gr2
+ movgs gr2,dcr
+
+ movsg psr,gr2
+ andi gr2,#PSR_PS,gr2
+ slli gr2,#11,gr2 /* PSR.PS -> BPSR.BS */
+ ori gr2,#BPSR_BET,gr2 /* 1 -> BPSR.BET */
+ movgs gr2,bpsr
+
+ # return to the invoker of the original kernel exception
+ movsg pcsr,gr2
+ movgs gr2,bpcsr
+
+ LEDS 0x101f,gr2
+
+ ldi @(gr31,#REG_CCR),gr3
+ movgs gr3,ccr
+ lddi.p @(gr31,#REG_GR(2)),gr2
+ xor gr31,gr31,gr31
+ movgs gr0,brr
+#ifdef CONFIG_MMU
+ movsg scr3,gr31
+#endif
+ rett #1
+
+###############################################################################
+#
+# handle BREAK instruction in TLB-miss handler return path
+#
+###############################################################################
+#ifdef CONFIG_MMU
+__break_return_singlestep_tlbmiss:
+ LEDS 0x1100,gr2
+
+ sethi.p %hi(__break_tlb_miss_real_return_info),gr3
+ setlo %lo(__break_tlb_miss_real_return_info),gr3
+ lddi @(gr3,#0),gr2
+ movgs gr2,pcsr
+ movgs gr3,psr
+
+ bra __break_return_singlestep
+#endif
+
+
+###############################################################################
+#
+# handle single stepping into an exception prologue from kernel mode
+# - we try and catch it whilst it is still in the main vector table
+# - if we catch it there, we have to jump to the fixup handler
+# - there is a fixup table that has a pointer for every 16b slot in the trap
+# table
+#
+###############################################################################
+__break_step:
+ LEDS 0x2003,gr2
+
+ # external interrupts seem to escape from the trap table before single
+ # step catches up with them
+ movsg bpcsr,gr2
+ sethi.p %hi(__entry_kernel_external_interrupt),gr3
+ setlo %lo(__entry_kernel_external_interrupt),gr3
+ subcc gr2,gr3,gr0,icc0
+ beq icc0,#2,__break_step_kernel_external_interrupt
+ sethi.p %hi(__entry_uspace_external_interrupt),gr3
+ setlo %lo(__entry_uspace_external_interrupt),gr3
+ subcc gr2,gr3,gr0,icc0
+ beq icc0,#2,__break_step_uspace_external_interrupt
+
+ LEDS 0x2007,gr2
+
+ # the two main vector tables are adjacent on one 8Kb slab
+ movsg bpcsr,gr2
+ setlos #0xffffe000,gr3
+ and gr2,gr3,gr2
+ sethi.p %hi(__trap_tables),gr3
+ setlo %lo(__trap_tables),gr3
+ subcc gr2,gr3,gr0,icc0
+ bne icc0,#2,__break_continue
+
+ LEDS 0x200f,gr2
+
+ # skip workaround if so requested by GDB
+ sethi.p %hi(__break_trace_through_exceptions),gr3
+ setlo %lo(__break_trace_through_exceptions),gr3
+ ld @(gr3,gr0),gr3
+ subcc gr3,gr0,gr0,icc0
+ bne icc0,#0,__break_continue
+
+ LEDS 0x201f,gr2
+
+ # access the fixup table - there's a 1:1 mapping between the slots in the trap tables and
+ # the slots in the trap fixup tables allowing us to simply divide the offset into the
+ # former by 4 to access the latter
+ sethi.p %hi(__trap_tables),gr3
+ setlo %lo(__trap_tables),gr3
+ movsg bpcsr,gr2
+ sub gr2,gr3,gr2
+ srli.p gr2,#2,gr2
+
+ sethi %hi(__trap_fixup_tables),gr3
+ setlo.p %lo(__trap_fixup_tables),gr3
+ andi gr2,#~3,gr2
+ ld @(gr2,gr3),gr2
+ jmpil @(gr2,#0)
+
+# step through an internal exception from kernel mode
+ .globl __break_step_kernel_softprog_interrupt
+__break_step_kernel_softprog_interrupt:
+ sethi.p %hi(__entry_kernel_softprog_interrupt_reentry),gr3
+ setlo %lo(__entry_kernel_softprog_interrupt_reentry),gr3
+ bra __break_return_as_kernel_prologue
+
+# step through an external interrupt from kernel mode
+ .globl __break_step_kernel_external_interrupt
+__break_step_kernel_external_interrupt:
+ sethi.p %hi(__entry_kernel_external_interrupt_reentry),gr3
+ setlo %lo(__entry_kernel_external_interrupt_reentry),gr3
+
+__break_return_as_kernel_prologue:
+ LEDS 0x203f,gr2
+
+ movgs gr3,bpcsr
+
+ # do the bit we had to skip
+#ifdef CONFIG_MMU
+ movsg ear0,gr2 /* EAR0 can get clobbered by gdb-stub (ICI/ICEI) */
+ movgs gr2,scr2
+#endif
+
+ or.p sp,gr0,gr2 /* set up the stack pointer */
+ subi sp,#REG__END,sp
+ sti.p gr2,@(sp,#REG_SP)
+
+ setlos #REG__STATUS_STEP,gr2
+ sti gr2,@(sp,#REG__STATUS) /* record single step status */
+
+ # cancel single-stepping mode
+ movsg dcr,gr2
+ sethi.p %hi(~DCR_SE),gr3
+ setlo %lo(~DCR_SE),gr3
+ and gr2,gr3,gr2
+ movgs gr2,dcr
+
+ LEDS 0x207f,gr2
+
+ ldi @(gr31,#REG_CCR),gr3
+ movgs gr3,ccr
+ lddi.p @(gr31,#REG_GR(2)),gr2
+ xor gr31,gr31,gr31
+ movgs gr0,brr
+#ifdef CONFIG_MMU
+ movsg scr3,gr31
+#endif
+ rett #1
+
+# step through an internal exception from uspace mode
+ .globl __break_step_uspace_softprog_interrupt
+__break_step_uspace_softprog_interrupt:
+ sethi.p %hi(__entry_uspace_softprog_interrupt_reentry),gr3
+ setlo %lo(__entry_uspace_softprog_interrupt_reentry),gr3
+ bra __break_return_as_uspace_prologue
+
+# step through an external interrupt from uspace mode
+ .globl __break_step_uspace_external_interrupt
+__break_step_uspace_external_interrupt:
+ sethi.p %hi(__entry_uspace_external_interrupt_reentry),gr3
+ setlo %lo(__entry_uspace_external_interrupt_reentry),gr3
+
+__break_return_as_uspace_prologue:
+ LEDS 0x20ff,gr2
+
+ movgs gr3,bpcsr
+
+ # do the bit we had to skip
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ ldi.p @(gr28,#0),gr28
+
+ setlos #REG__STATUS_STEP,gr2
+ sti gr2,@(gr28,#REG__STATUS) /* record single step status */
+
+ # cancel single-stepping mode
+ movsg dcr,gr2
+ sethi.p %hi(~DCR_SE),gr3
+ setlo %lo(~DCR_SE),gr3
+ and gr2,gr3,gr2
+ movgs gr2,dcr
+
+ LEDS 0x20fe,gr2
+
+ ldi @(gr31,#REG_CCR),gr3
+ movgs gr3,ccr
+ lddi.p @(gr31,#REG_GR(2)),gr2
+ xor gr31,gr31,gr31
+ movgs gr0,brr
+#ifdef CONFIG_MMU
+ movsg scr3,gr31
+#endif
+ rett #1
+
+#ifdef CONFIG_MMU
+# step through an ITLB-miss handler from user mode
+ .globl __break_user_insn_tlb_miss
+__break_user_insn_tlb_miss:
+ # we'll want to try the trap stub again
+ sethi.p %hi(__trap_user_insn_tlb_miss),gr2
+ setlo %lo(__trap_user_insn_tlb_miss),gr2
+ movgs gr2,bpcsr
+
+__break_tlb_miss_common:
+ LEDS 0x2101,gr2
+
+ # cancel single-stepping mode
+ movsg dcr,gr2
+ sethi.p %hi(~DCR_SE),gr3
+ setlo %lo(~DCR_SE),gr3
+ and gr2,gr3,gr2
+ movgs gr2,dcr
+
+ # we'll swap the real return address for one with a BREAK insn so that we can re-enable
+ # single stepping on return
+ movsg pcsr,gr2
+ sethi.p %hi(__break_tlb_miss_real_return_info),gr3
+ setlo %lo(__break_tlb_miss_real_return_info),gr3
+ sti gr2,@(gr3,#0)
+
+ sethi.p %hi(__break_tlb_miss_return_break),gr2
+ setlo %lo(__break_tlb_miss_return_break),gr2
+ movgs gr2,pcsr
+
+ # we also have to fudge PSR because the return BREAK is in kernel space and we want
+ # to get a BREAK fault not an access violation should the return be to userspace
+ movsg psr,gr2
+ sti.p gr2,@(gr3,#4)
+ ori gr2,#PSR_PS,gr2
+ movgs gr2,psr
+
+ LEDS 0x2102,gr2
+
+ ldi @(gr31,#REG_CCR),gr3
+ movgs gr3,ccr
+ lddi @(gr31,#REG_GR(2)),gr2
+ movsg scr3,gr31
+ movgs gr0,brr
+ rett #1
+
+# step through a DTLB-miss handler from user mode
+ .globl __break_user_data_tlb_miss
+__break_user_data_tlb_miss:
+ # we'll want to try the trap stub again
+ sethi.p %hi(__trap_user_data_tlb_miss),gr2
+ setlo %lo(__trap_user_data_tlb_miss),gr2
+ movgs gr2,bpcsr
+ bra __break_tlb_miss_common
+
+# step through an ITLB-miss handler from kernel mode
+ .globl __break_kernel_insn_tlb_miss
+__break_kernel_insn_tlb_miss:
+ # we'll want to try the trap stub again
+ sethi.p %hi(__trap_kernel_insn_tlb_miss),gr2
+ setlo %lo(__trap_kernel_insn_tlb_miss),gr2
+ movgs gr2,bpcsr
+ bra __break_tlb_miss_common
+
+# step through a DTLB-miss handler from kernel mode
+ .globl __break_kernel_data_tlb_miss
+__break_kernel_data_tlb_miss:
+ # we'll want to try the trap stub again
+ sethi.p %hi(__trap_kernel_data_tlb_miss),gr2
+ setlo %lo(__trap_kernel_data_tlb_miss),gr2
+ movgs gr2,bpcsr
+ bra __break_tlb_miss_common
+#endif
+
+###############################################################################
+#
+# handle debug events originating with userspace
+#
+###############################################################################
+__break_maybe_userspace:
+ LEDS 0x3003,gr2
+
+ setlos #BPSR_BS,gr2
+ andcc gr3,gr2,gr0,icc0
+ bne icc0,#0,__break_continue /* skip if PSR.S was 1 */
+
+ movsg brr,gr2
+ andicc gr2,#BRR_ST|BRR_SB,gr0,icc0
+ beq icc0,#0,__break_continue /* jump if not BREAK or single-step */
+
+ LEDS 0x3007,gr2
+
+ # do the first part of the exception prologue here
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ ldi @(gr28,#0),gr28
+ andi gr28,#~7,gr28
+
+ # set up the kernel stack pointer
+ sti sp ,@(gr28,#REG_SP)
+ ori gr28,0,sp
+ sti gr0 ,@(gr28,#REG_GR(28))
+
+ stdi gr20,@(gr28,#REG_GR(20))
+ stdi gr22,@(gr28,#REG_GR(22))
+
+ movsg tbr,gr20
+ movsg bpcsr,gr21
+ movsg psr,gr22
+
+ # determine the exception type and cancel single-stepping mode
+ or gr0,gr0,gr23
+
+ movsg dcr,gr2
+ sethi.p %hi(DCR_SE),gr3
+ setlo %lo(DCR_SE),gr3
+ andcc gr2,gr3,gr0,icc0
+ beq icc0,#0,__break_no_user_sstep /* must have been a BREAK insn */
+
+ not gr3,gr3
+ and gr2,gr3,gr2
+ movgs gr2,dcr
+ ori gr23,#REG__STATUS_STEP,gr23
+
+__break_no_user_sstep:
+ LEDS 0x300f,gr2
+
+ movsg brr,gr2
+ andi gr2,#BRR_ST|BRR_SB,gr2
+ slli gr2,#1,gr2
+ or gr23,gr2,gr23
+ sti.p gr23,@(gr28,#REG__STATUS) /* record single step status */
+
+ # adjust the value acquired from TBR - this indicates the exception
+ setlos #~TBR_TT,gr2
+ and.p gr20,gr2,gr20
+ setlos #TBR_TT_BREAK,gr2
+ or.p gr20,gr2,gr20
+
+ # fudge PSR.PS and BPSR.BS to return to kernel mode through the trap
+ # table as trap 126
+ andi gr22,#~PSR_PS,gr22 /* PSR.PS should be 0 */
+ movgs gr22,psr
+
+ setlos #BPSR_BS,gr2 /* BPSR.BS should be 1 and BPSR.BET 0 */
+ movgs gr2,bpsr
+
+ # return through remainder of the exception prologue
+ # - need to load gr23 with return handler address
+ sethi.p %hi(__entry_return_from_user_exception),gr23
+ setlo %lo(__entry_return_from_user_exception),gr23
+ sethi.p %hi(__entry_common),gr3
+ setlo %lo(__entry_common),gr3
+ movgs gr3,bpcsr
+
+ LEDS 0x301f,gr2
+
+ ldi @(gr31,#REG_CCR),gr3
+ movgs gr3,ccr
+ lddi.p @(gr31,#REG_GR(2)),gr2
+ xor gr31,gr31,gr31
+ movgs gr0,brr
+#ifdef CONFIG_MMU
+ movsg scr3,gr31
+#endif
+ rett #1
+
+###############################################################################
+#
+# resume normal debug-mode entry
+#
+###############################################################################
+__break_continue:
+ LEDS 0x4003,gr2
+
+ # set up the kernel stack pointer
+ sti sp,@(gr31,#REG_SP)
+
+ sethi.p %hi(__break_stack_tos),sp
+ setlo %lo(__break_stack_tos),sp
+
+ # finish building the exception frame
+ stdi gr4 ,@(gr31,#REG_GR(4))
+ stdi gr6 ,@(gr31,#REG_GR(6))
+ stdi gr8 ,@(gr31,#REG_GR(8))
+ stdi gr10,@(gr31,#REG_GR(10))
+ stdi gr12,@(gr31,#REG_GR(12))
+ stdi gr14,@(gr31,#REG_GR(14))
+ stdi gr16,@(gr31,#REG_GR(16))
+ stdi gr18,@(gr31,#REG_GR(18))
+ stdi gr20,@(gr31,#REG_GR(20))
+ stdi gr22,@(gr31,#REG_GR(22))
+ stdi gr24,@(gr31,#REG_GR(24))
+ stdi gr26,@(gr31,#REG_GR(26))
+ sti gr0 ,@(gr31,#REG_GR(28)) /* NULL frame pointer */
+ sti gr29,@(gr31,#REG_GR(29))
+ sti gr30,@(gr31,#REG_GR(30))
+ sti gr8 ,@(gr31,#REG_ORIG_GR8)
+
+#ifdef CONFIG_MMU
+ movsg scr3,gr19
+ sti gr19,@(gr31,#REG_GR(31))
+#endif
+
+ movsg bpsr ,gr19
+ movsg tbr ,gr20
+ movsg bpcsr,gr21
+ movsg psr ,gr22
+ movsg isr ,gr23
+ movsg cccr ,gr25
+ movsg lr ,gr26
+ movsg lcr ,gr27
+
+ andi.p gr22,#~(PSR_S|PSR_ET),gr5 /* rebuild PSR */
+ andi gr19,#PSR_ET,gr4
+ or.p gr4,gr5,gr5
+ srli gr19,#10,gr4
+ andi gr4,#PSR_S,gr4
+ or.p gr4,gr5,gr5
+
+ setlos #-1,gr6
+ sti gr20,@(gr31,#REG_TBR)
+ sti gr21,@(gr31,#REG_PC)
+ sti gr5 ,@(gr31,#REG_PSR)
+ sti gr23,@(gr31,#REG_ISR)
+ sti gr25,@(gr31,#REG_CCCR)
+ stdi gr26,@(gr31,#REG_LR)
+ sti gr6 ,@(gr31,#REG_SYSCALLNO)
+
+ # store CPU-specific regs
+ movsg iacc0h,gr4
+ movsg iacc0l,gr5
+ stdi gr4,@(gr31,#REG_IACC0)
+
+ movsg gner0,gr4
+ movsg gner1,gr5
+ stdi gr4,@(gr31,#REG_GNER0)
+
+ # build the debug register frame
+ movsg brr,gr4
+ movgs gr0,brr
+ movsg nmar,gr5
+ movsg dcr,gr6
+
+ stdi gr4 ,@(gr31,#REG_BRR)
+ sti gr19,@(gr31,#REG_BPSR)
+ sti.p gr6 ,@(gr31,#REG_DCR)
+
+ # trap exceptions during break handling and disable h/w breakpoints/watchpoints
+ sethi %hi(DCR_EBE),gr5
+ setlo.p %lo(DCR_EBE),gr5
+ sethi %hi(__entry_breaktrap_table),gr4
+ setlo %lo(__entry_breaktrap_table),gr4
+ movgs gr5,dcr
+ movgs gr4,tbr
+
+ # set up kernel global registers
+ sethi.p %hi(__kernel_current_task),gr5
+ setlo %lo(__kernel_current_task),gr5
+ ld @(gr5,gr0),gr29
+ ldi.p @(gr29,#4),gr15 ; __current_thread_info = current->thread_info
+
+ sethi %hi(_gp),gr16
+ setlo.p %lo(_gp),gr16
+
+ # make sure we (the kernel) get div-zero and misalignment exceptions
+ setlos #ISR_EDE|ISR_DTT_DIVBYZERO|ISR_EMAM_EXCEPTION,gr5
+ movgs gr5,isr
+
+ # enter the GDB stub
+ LEDS 0x4007,gr2
+
+ or.p gr0,gr0,fp
+ call debug_stub
+
+ LEDS 0x403f,gr2
+
+ # return from break
+ lddi @(gr31,#REG_IACC0),gr4
+ movgs gr4,iacc0h
+ movgs gr5,iacc0l
+
+ lddi @(gr31,#REG_GNER0),gr4
+ movgs gr4,gner0
+ movgs gr5,gner1
+
+ lddi @(gr31,#REG_LR) ,gr26
+ lddi @(gr31,#REG_CCR) ,gr24
+ lddi @(gr31,#REG_PSR) ,gr22
+ ldi @(gr31,#REG_PC) ,gr21
+ ldi @(gr31,#REG_TBR) ,gr20
+ ldi.p @(gr31,#REG_DCR) ,gr6
+
+ andi gr22,#PSR_S,gr19 /* rebuild BPSR */
+ andi.p gr22,#PSR_ET,gr5
+ slli gr19,#10,gr19
+ or gr5,gr19,gr19
+
+ movgs gr6 ,dcr
+ movgs gr19,bpsr
+ movgs gr20,tbr
+ movgs gr21,bpcsr
+ movgs gr23,isr
+ movgs gr24,ccr
+ movgs gr25,cccr
+ movgs gr26,lr
+ movgs gr27,lcr
+
+ LEDS 0x407f,gr2
+
+#ifdef CONFIG_MMU
+ ldi @(gr31,#REG_GR(31)),gr2
+ movgs gr2,scr3
+#endif
+
+ ldi @(gr31,#REG_GR(30)),gr30
+ ldi @(gr31,#REG_GR(29)),gr29
+ lddi @(gr31,#REG_GR(26)),gr26
+ lddi @(gr31,#REG_GR(24)),gr24
+ lddi @(gr31,#REG_GR(22)),gr22
+ lddi @(gr31,#REG_GR(20)),gr20
+ lddi @(gr31,#REG_GR(18)),gr18
+ lddi @(gr31,#REG_GR(16)),gr16
+ lddi @(gr31,#REG_GR(14)),gr14
+ lddi @(gr31,#REG_GR(12)),gr12
+ lddi @(gr31,#REG_GR(10)),gr10
+ lddi @(gr31,#REG_GR(8)) ,gr8
+ lddi @(gr31,#REG_GR(6)) ,gr6
+ lddi @(gr31,#REG_GR(4)) ,gr4
+ lddi @(gr31,#REG_GR(2)) ,gr2
+ ldi.p @(gr31,#REG_SP) ,sp
+
+ xor gr31,gr31,gr31
+ movgs gr0,brr
+#ifdef CONFIG_MMU
+ movsg scr3,gr31
+#endif
+ rett #1
+
+###################################################################################################
+#
+# GDB stub "system calls"
+#
+###################################################################################################
+
+#ifdef CONFIG_GDBSTUB
+ # void gdbstub_console_write(struct console *con, const char *p, unsigned n)
+ .globl gdbstub_console_write
+gdbstub_console_write:
+ break
+ bralr
+#endif
+
+ # GDB stub BUG() trap
+ # GR8 is the proposed signal number
+ .globl __debug_bug_trap
+__debug_bug_trap:
+ break
+ bralr
+
+	# transfer kernel exception to GDB for handling
+ .globl __break_hijack_kernel_event
+__break_hijack_kernel_event:
+ break
+ .globl __break_hijack_kernel_event_breaks_here
+__break_hijack_kernel_event_breaks_here:
+ nop
+
+#ifdef CONFIG_MMU
+ # handle a return from TLB-miss that requires single-step reactivation
+ .globl __break_tlb_miss_return_break
+__break_tlb_miss_return_break:
+ break
+__break_tlb_miss_return_breaks_here:
+ nop
+#endif
+
+ # guard the first .text label in the next file from confusion
+ nop
--- /dev/null
+/* cmode.S: clock mode management
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Woodhouse (dwmw2@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/ptrace.h>
+#include <asm/errno.h>
+#include <asm/cache.h>
+#include <asm/spr-regs.h>
+
+#define __addr_MASK 0xfeff9820 /* interrupt controller mask */
+
+#define __addr_SDRAMC 0xfe000400 /* SDRAM controller regs */
+#define SDRAMC_DSTS 0x28 /* SDRAM status */
+#define SDRAMC_DSTS_SSI 0x00000001 /* indicates that the SDRAM is in self-refresh mode */
+#define SDRAMC_DRCN 0x30 /* SDRAM refresh control */
+#define SDRAMC_DRCN_SR 0x00000001 /* transition SDRAM into self-refresh mode */
+#define __addr_CLKC 0xfeff9a00
+#define CLKC_SWCMODE 0x00000008
+#define __addr_LEDS 0xe1200004
+
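+# li: load the 32-bit constant \v into register \r using a packed sethi/setlo pair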
+.macro li v r
+ sethi.p %hi(\v),\r
+ setlo %lo(\v),\r
+.endm
+
+ .text
+ .balign 4
+
+
+###############################################################################
+#
+# Change CMODE
+# - void frv_change_cmode(int cmode)
+#
+###############################################################################
+ .globl frv_change_cmode
+ .type frv_change_cmode,@function
+
+.macro LEDS v
+#ifdef DEBUG_CMODE
+ setlos #~\v,gr10
+ sti gr10,@(gr11,#0)
+ membar
+#endif
+.endm
+
+frv_change_cmode:
+ movsg lr,gr9
+#ifdef DEBUG_CMODE
+ li __addr_LEDS,gr11
+#endif
+ dcef @(gr0,gr0),#1
+
+ # Shift argument left by 24 bits to fit in SWCMODE register later.
+ slli gr8,#24,gr8
+
+ # (1) Set '0' in the PSR.ET bit, and prohibit interrupts.
+ movsg psr,gr14
+ andi gr14,#~PSR_ET,gr3
+ movgs gr3,psr
+
+#if 0 // Fujitsu recommends skipping this step and will update the docs.
+ # (2) Set '0' to all bits of the MASK register of the interrupt
+ # controller, and mask interrupts.
+ li __addr_MASK,gr12
+ ldi @(gr12,#0),gr13
+ li 0xffff0000,gr4
+ sti gr4,@(gr12,#0)
+#endif
+
+	# (3) Stop the transfer function of the DMAC. Stop all bus masters
+	# from accessing SDRAM and the internal resources.
+
+ # (already done by caller)
+
+	# (4) Preload the following series of instructions into the
+	# instruction cache.
+ li #__cmode_icache_lock_start,gr3
+ li #__cmode_icache_lock_end,gr4
+
+1: icpl gr3,gr0,#1
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,1b
+
+ # Set up addresses in regs for later steps.
+ setlos SDRAMC_DRCN_SR,gr3
+ li __addr_SDRAMC,gr4
+ li __addr_CLKC,gr5
+ ldi @(gr5,#0),gr6
+ li #0x80000000,gr7
+ or gr6,gr7,gr6
+
+ bra __cmode_icache_lock_start
+
+ .balign L1_CACHE_BYTES
+__cmode_icache_lock_start:
+
+ # (5) Flush the content of all caches by the DCEF instruction.
+ dcef @(gr0,gr0),#1
+
+	# (6) Execute a dummy load from SDRAM.
+ ldi @(gr9,#0),gr0
+
+ # (7) Set '1' to the DRCN.SR bit, and change SDRAM to the
+ # self-refresh mode. Execute the dummy load to all memory
+ # devices set to cacheable on the external bus side in parallel
+ # with this.
+ sti gr3,@(gr4,#SDRAMC_DRCN)
+
+ # (8) Execute memory barrier instruction (MEMBAR).
+ membar
+
+	# (9) Read the DSTS register repeatedly until the DSTS.SSI field
+	# reads '1'.
+1: ldi @(gr4,#SDRAMC_DSTS),gr3
+ andicc gr3,#SDRAMC_DSTS_SSI,gr3,icc0
+ beq icc0,#0,1b
+
+ # (10) Execute memory barrier instruction (MEMBAR).
+ membar
+
+#if 1
+ # (11) Set the value of CMODE that you want to change to
+ # SWCMODE.SWCM[3:0].
+ sti gr8,@(gr5,#CLKC_SWCMODE)
+
+ # (12) Set '1' to the CLKC.SWEN bit. In that case, do not change
+ # fields other than SWEN of the CLKC register.
+ sti gr6,@(gr5,#0)
+#endif
+ # (13) Execute the instruction just after the memory barrier
+ # instruction that executes the self-loop 256 times. (Meanwhile,
+ # the CMODE switch is done.)
+ membar
+ setlos #256,gr7
+2: subicc gr7,#1,gr7,icc0
+ bne icc0,#2,2b
+
+ LEDS 0x36
+
+ # (14) Release the self-refresh of SDRAM.
+ sti gr0,@(gr4,#SDRAMC_DRCN)
+
+ # Wait for it...
+3: ldi @(gr4,#SDRAMC_DSTS),gr3
+ andicc gr3,#SDRAMC_DSTS_SSI,gr3,icc0
+ bne icc0,#2,3b
+
+#if 0
+ li 0x0100000,gr10
+4: subicc gr10,#1,gr10,icc0
+
+ bne icc0,#0,4b
+#endif
+
+__cmode_icache_lock_end:
+
+ li #__cmode_icache_lock_start,gr3
+ li #__cmode_icache_lock_end,gr4
+
+4: icul gr3
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,4b
+
+#if 0 // Fujitsu recommends skipping this step and will update the docs.
+ # (15) Release the interrupt mask setting of the MASK register of
+ # the interrupt controller if necessary.
+ sti gr13,@(gr12,#0)
+#endif
+	# (16) Set '1' in the PSR.ET bit, and permit interrupts.
+ movgs gr14,psr
+
+ bralr
+
+ .size frv_change_cmode, .-frv_change_cmode
--- /dev/null
+/* debug-stub.c: debug-mode stub
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/string.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/serial_reg.h>
+
+#include <asm/system.h>
+#include <asm/serial-regs.h>
+#include <asm/timer-regs.h>
+#include <asm/irc-regs.h>
+#include <asm/gdb-stub.h>
+#include "gdb-io.h"
+
+/* CPU board CON5 */
+#define __UART0(X) (*(volatile uint8_t *)(UART0_BASE + (UART_##X)))
+
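+/* polled-I/O helpers for the debug UART: LSR_WAIT_FOR0() spins until the named
+ * line-status bit (e.g. THRE) is set; the FLOWCTL_*0() macros query and drive
+ * the modem-control lines */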
+#define LSR_WAIT_FOR0(STATE) \
+do { \
+} while (!(__UART0(LSR) & UART_LSR_##STATE))
+
+#define FLOWCTL_QUERY0(LINE) ({ __UART0(MSR) & UART_MSR_##LINE; })
+#define FLOWCTL_CLEAR0(LINE) do { __UART0(MCR) &= ~UART_MCR_##LINE; } while (0)
+#define FLOWCTL_SET0(LINE) do { __UART0(MCR) |= UART_MCR_##LINE; } while (0)
+
+#define FLOWCTL_WAIT_FOR0(LINE) \
+do { \
+ gdbstub_do_rx(); \
+} while(!FLOWCTL_QUERY(LINE))
+
+static void __init debug_stub_init(void);
+
+extern asmlinkage void __break_hijack_kernel_event(void);
+extern asmlinkage void __break_hijack_kernel_event_breaks_here(void);
+
+/*****************************************************************************/
+/*
+ * debug mode handler stub
+ * - we come here with the CPU in debug mode and with exceptions disabled
+ * - handle debugging services for userspace
+ */
+asmlinkage void debug_stub(void)
+{
+ unsigned long hsr0;
+ int type = 0;
+
+ static u8 inited = 0;
+ if (!inited) {
+ debug_stub_init();
+ type = -1;
+ inited = 1;
+ }
+
+ hsr0 = __get_HSR(0);
+ if (hsr0 & HSR0_ETMD)
+ __set_HSR(0, hsr0 & ~HSR0_ETMD);
+
+ /* disable single stepping */
+ __debug_regs->dcr &= ~DCR_SE;
+
+ /* kernel mode can propose an exception be handled in debug mode by jumping to a special
+ * location */
+ if (__debug_frame->pc == (unsigned long) __break_hijack_kernel_event_breaks_here) {
+ /* replace the debug frame with the kernel frame and discard
+ * the top kernel context */
+ *__debug_frame = *__frame;
+ __frame = __debug_frame->next_frame;
+ __debug_regs->brr = (__debug_frame->tbr & TBR_TT) << 12;
+ __debug_regs->brr |= BRR_EB;
+ }
+
+ if (__debug_frame->pc == (unsigned long) __debug_bug_trap + 4) {
+ __debug_frame->pc = __debug_frame->lr;
+ type = __debug_frame->gr8;
+ }
+
+#ifdef CONFIG_GDBSTUB
+ gdbstub(type);
+#endif
+
+ if (hsr0 & HSR0_ETMD)
+ __set_HSR(0, __get_HSR(0) | HSR0_ETMD);
+
+} /* end debug_stub() */
+
+/*****************************************************************************/
+/*
+ * debug stub initialisation
+ */
+static void __init debug_stub_init(void)
+{
+ __set_IRR(6, 0xff000000); /* map ERRs to NMI */
+ __set_IITMR(1, 0x20000000); /* ERR0/1, UART0/1 IRQ detect levels */
+
+ asm volatile(" movgs gr0,ibar0 \n"
+ " movgs gr0,ibar1 \n"
+ " movgs gr0,ibar2 \n"
+ " movgs gr0,ibar3 \n"
+ " movgs gr0,dbar0 \n"
+ " movgs gr0,dbmr00 \n"
+ " movgs gr0,dbmr01 \n"
+ " movgs gr0,dbdr00 \n"
+ " movgs gr0,dbdr01 \n"
+ " movgs gr0,dbar1 \n"
+ " movgs gr0,dbmr10 \n"
+ " movgs gr0,dbmr11 \n"
+ " movgs gr0,dbdr10 \n"
+ " movgs gr0,dbdr11 \n"
+ );
+
+ /* deal with debugging stub initialisation and initial pause */
+ if (__debug_frame->pc == (unsigned long) __debug_stub_init_break)
+ __debug_frame->pc = (unsigned long) start_kernel;
+
+ /* enable the debug events we want to trap */
+ __debug_regs->dcr = DCR_EBE;
+
+#ifdef CONFIG_GDBSTUB
+ gdbstub_init();
+#endif
+
+ __clr_MASK_all();
+ __clr_MASK(15);
+ __clr_RC(15);
+
+} /* end debug_stub_init() */
+
+/*****************************************************************************/
+/*
+ * kernel "exit" trap for gdb stub
+ */
+void debug_stub_exit(int status)
+{
+
+#ifdef CONFIG_GDBSTUB
+ gdbstub_exit(status);
+#endif
+
+} /* end debug_stub_exit() */
+
+/*****************************************************************************/
+/*
+ * send string to serial port
+ */
+void debug_to_serial(const char *p, int n)
+{
+ char ch;
+
+ for (; n > 0; n--) {
+ ch = *p++;
+ FLOWCTL_SET0(DTR);
+ LSR_WAIT_FOR0(THRE);
+ // FLOWCTL_WAIT_FOR(CTS);
+
+ if (ch == 0x0a) {
+ __UART0(TX) = 0x0d;
+ mb();
+ LSR_WAIT_FOR0(THRE);
+ // FLOWCTL_WAIT_FOR(CTS);
+ }
+ __UART0(TX) = ch;
+ mb();
+
+ FLOWCTL_CLEAR0(DTR);
+ }
+
+} /* end debug_to_serial() */
+
+/*****************************************************************************/
+/*
+ * format and send a string to the serial port
+ */
+void debug_to_serial2(const char *fmt, ...)
+{
+ va_list va;
+ char buf[64];
+ int n;
+
+ va_start(va, fmt);
+ n = vsprintf(buf, fmt, va);
+ va_end(va);
+
+ debug_to_serial(buf, n);
+
+} /* end debug_to_serial2() */
+
+/*****************************************************************************/
+/*
+ * set up the ttyS0 serial port baud rate timers
+ */
+void __init console_set_baud(unsigned baud)
+{
+ unsigned value, high, low;
+ u8 lcr;
+
+ /* work out the divisor to give us the nearest higher baud rate */
+ value = __serial_clock_speed_HZ / 16 / baud;
+
+ /* determine the baud rate range */
+ high = __serial_clock_speed_HZ / 16 / value;
+ low = __serial_clock_speed_HZ / 16 / (value + 1);
+
+ /* pick the nearest bound */
+ if (low + (high - low) / 2 > baud)
+ value++;
+
+ lcr = __UART0(LCR);
+ __UART0(LCR) |= UART_LCR_DLAB;
+ mb();
+ __UART0(DLL) = value & 0xff;
+ __UART0(DLM) = (value >> 8) & 0xff;
+ mb();
+ __UART0(LCR) = lcr;
+ mb();
+
+} /* end console_set_baud() */
+
+/*****************************************************************************/
+/*
+ * read back the baud-rate divisor currently programmed into ttyS0
+ */
+int __init console_get_baud(void)
+{
+ unsigned value;
+ u8 lcr;
+
+ lcr = __UART0(LCR);
+ __UART0(LCR) |= UART_LCR_DLAB;
+ mb();
+ value = __UART0(DLM) << 8;
+ value |= __UART0(DLL);
+ __UART0(LCR) = lcr;
+ mb();
+
+ return value;
+} /* end console_get_baud() */
+
+/*****************************************************************************/
+/*
+ * display BUG() info
+ */
+#ifndef CONFIG_NO_KERNEL_MSG
+void __debug_bug_printk(const char *file, unsigned line)
+{
+ printk("kernel BUG at %s:%d!\n", file, line);
+
+} /* end __debug_bug_printk() */
+#endif
--- /dev/null
+/* dma.c: DMA controller management on FR401 and the like
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <asm/dma.h>
+#include <asm/gpio-regs.h>
+#include <asm/irc-regs.h>
+#include <asm/cpu-irqs.h>
+
+struct frv_dma_channel {
+ uint8_t flags;
+#define FRV_DMA_FLAGS_RESERVED 0x01
+#define FRV_DMA_FLAGS_INUSE 0x02
+#define FRV_DMA_FLAGS_PAUSED 0x04
+ uint8_t cap; /* capabilities available */
+ int irq; /* completion IRQ */
+ uint32_t dreqbit;
+ uint32_t dackbit;
+ uint32_t donebit;
+ const unsigned long ioaddr; /* DMA controller regs addr */
+ const char *devname;
+ dma_irq_handler_t handler;
+ void *data;
+};
+
+
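+/* accessors for the memory-mapped per-channel DMA controller registers; the
+ * register offset is formed by token-pasting the register name (e.g. CSTR
+ * becomes DMAC_CSTRx).  __set_DMAC() includes a write barrier, whereas
+ * ___set_DMAC() omits it so a group of writes can be followed by one mb(). */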
+#define __get_DMAC(IO,X) ({ *(volatile unsigned long *)((IO) + DMAC_##X##x); })
+
+#define __set_DMAC(IO,X,V) \
+do { \
+ *(volatile unsigned long *)((IO) + DMAC_##X##x) = (V); \
+ mb(); \
+} while(0)
+
+#define ___set_DMAC(IO,X,V) \
+do { \
+ *(volatile unsigned long *)((IO) + DMAC_##X##x) = (V); \
+} while(0)
+
+
+static struct frv_dma_channel frv_dma_channels[FRV_DMA_NCHANS] = {
+ [0] = {
+ .cap = FRV_DMA_CAP_DREQ | FRV_DMA_CAP_DACK | FRV_DMA_CAP_DONE,
+ .irq = IRQ_CPU_DMA0,
+ .dreqbit = SIR_DREQ0_INPUT,
+ .dackbit = SOR_DACK0_OUTPUT,
+ .donebit = SOR_DONE0_OUTPUT,
+ .ioaddr = 0xfe000900,
+ },
+ [1] = {
+ .cap = FRV_DMA_CAP_DREQ | FRV_DMA_CAP_DACK | FRV_DMA_CAP_DONE,
+ .irq = IRQ_CPU_DMA1,
+ .dreqbit = SIR_DREQ1_INPUT,
+ .dackbit = SOR_DACK1_OUTPUT,
+ .donebit = SOR_DONE1_OUTPUT,
+ .ioaddr = 0xfe000980,
+ },
+ [2] = {
+ .cap = FRV_DMA_CAP_DREQ | FRV_DMA_CAP_DACK,
+ .irq = IRQ_CPU_DMA2,
+ .dreqbit = SIR_DREQ2_INPUT,
+ .dackbit = SOR_DACK2_OUTPUT,
+ .ioaddr = 0xfe000a00,
+ },
+ [3] = {
+ .cap = FRV_DMA_CAP_DREQ | FRV_DMA_CAP_DACK,
+ .irq = IRQ_CPU_DMA3,
+ .dreqbit = SIR_DREQ3_INPUT,
+ .dackbit = SOR_DACK3_OUTPUT,
+ .ioaddr = 0xfe000a80,
+ },
+ [4] = {
+ .cap = FRV_DMA_CAP_DREQ,
+ .irq = IRQ_CPU_DMA4,
+ .dreqbit = SIR_DREQ4_INPUT,
+ .ioaddr = 0xfe001000,
+ },
+ [5] = {
+ .cap = FRV_DMA_CAP_DREQ,
+ .irq = IRQ_CPU_DMA5,
+ .dreqbit = SIR_DREQ5_INPUT,
+ .ioaddr = 0xfe001080,
+ },
+ [6] = {
+ .cap = FRV_DMA_CAP_DREQ,
+ .irq = IRQ_CPU_DMA6,
+ .dreqbit = SIR_DREQ6_INPUT,
+ .ioaddr = 0xfe001100,
+ },
+ [7] = {
+ .cap = FRV_DMA_CAP_DREQ,
+ .irq = IRQ_CPU_DMA7,
+ .dreqbit = SIR_DREQ7_INPUT,
+ .ioaddr = 0xfe001180,
+ },
+};
+
+static DEFINE_RWLOCK(frv_dma_channels_lock);
+
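+/* bitmask of channels with a transfer currently in flight: a channel's bit is
+ * set when it is started and cleared again from the completion IRQ handler */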
+unsigned long frv_dma_inprogress;
+
+#define frv_clear_dma_inprogress(channel) \
+ atomic_clear_mask(1 << (channel), &frv_dma_inprogress);
+
+#define frv_set_dma_inprogress(channel) \
+ atomic_set_mask(1 << (channel), &frv_dma_inprogress);
+
+/*****************************************************************************/
+/*
+ * DMA irq handler - determine channel involved, grab status and call real handler
+ */
+static irqreturn_t dma_irq_handler(int irq, void *_channel, struct pt_regs *regs)
+{
+ struct frv_dma_channel *channel = _channel;
+
+ frv_clear_dma_inprogress(channel - frv_dma_channels);
+ return channel->handler(channel - frv_dma_channels,
+ __get_DMAC(channel->ioaddr, CSTR),
+ channel->data,
+ regs);
+
+} /* end dma_irq_handler() */
+
+/*****************************************************************************/
+/*
+ * Determine which DMA controllers are present on this CPU
+ */
+void __init frv_dma_init(void)
+{
+ unsigned long psr = __get_PSR();
+ int num_dma, i;
+
+ /* First, determine how many DMA channels are available */
+ switch (PSR_IMPLE(psr)) {
+ case PSR_IMPLE_FR405:
+ case PSR_IMPLE_FR451:
+ case PSR_IMPLE_FR501:
+ case PSR_IMPLE_FR551:
+ num_dma = FRV_DMA_8CHANS;
+ break;
+
+ case PSR_IMPLE_FR401:
+ default:
+ num_dma = FRV_DMA_4CHANS;
+ break;
+ }
+
+ /* Now mark all of the non-existent channels as reserved */
+ for(i = num_dma; i < FRV_DMA_NCHANS; i++)
+ frv_dma_channels[i].flags = FRV_DMA_FLAGS_RESERVED;
+
+} /* end frv_dma_init() */
+
+/*****************************************************************************/
+/*
+ * allocate a DMA controller channel and the IRQ associated with it
+ */
+int frv_dma_open(const char *devname,
+ unsigned long dmamask,
+ int dmacap,
+ dma_irq_handler_t handler,
+ unsigned long irq_flags,
+ void *data)
+{
+ struct frv_dma_channel *channel;
+ int dma, ret;
+ uint32_t val;
+
+ write_lock(&frv_dma_channels_lock);
+
+ ret = -ENOSPC;
+
+ for (dma = FRV_DMA_NCHANS - 1; dma >= 0; dma--) {
+ channel = &frv_dma_channels[dma];
+
+ if (!test_bit(dma, &dmamask))
+ continue;
+
+ if ((channel->cap & dmacap) != dmacap)
+ continue;
+
+ if (!frv_dma_channels[dma].flags)
+ goto found;
+ }
+
+ goto out;
+
+ found:
+ ret = request_irq(channel->irq, dma_irq_handler, irq_flags, devname, channel);
+ if (ret < 0)
+ goto out;
+
+ /* okay, we've allocated all the resources */
+ channel = &frv_dma_channels[dma];
+
+ channel->flags |= FRV_DMA_FLAGS_INUSE;
+ channel->devname = devname;
+ channel->handler = handler;
+ channel->data = data;
+
+ /* Now make sure we are set up for DMA and not GPIO */
+ /* SIR bit must be set for DMA to work */
+ __set_SIR(channel->dreqbit | __get_SIR());
+ /* SOR bits depend on what the caller requests */
+ val = __get_SOR();
+ if(dmacap & FRV_DMA_CAP_DACK)
+ val |= channel->dackbit;
+ else
+ val &= ~channel->dackbit;
+ if(dmacap & FRV_DMA_CAP_DONE)
+ val |= channel->donebit;
+ else
+ val &= ~channel->donebit;
+ __set_SOR(val);
+
+ ret = dma;
+ out:
+ write_unlock(&frv_dma_channels_lock);
+ return ret;
+} /* end frv_dma_open() */
+
+EXPORT_SYMBOL(frv_dma_open);
+
+/*****************************************************************************/
+/*
+ * close a DMA channel and its associated interrupt
+ */
+void frv_dma_close(int dma)
+{
+ struct frv_dma_channel *channel = &frv_dma_channels[dma];
+ unsigned long flags;
+
+ write_lock_irqsave(&frv_dma_channels_lock, flags);
+
+ free_irq(channel->irq, channel);
+ frv_dma_stop(dma);
+
+ channel->flags &= ~FRV_DMA_FLAGS_INUSE;
+
+ write_unlock_irqrestore(&frv_dma_channels_lock, flags);
+} /* end frv_dma_close() */
+
+EXPORT_SYMBOL(frv_dma_close);
+
+/*****************************************************************************/
+/*
+ * set static configuration on a DMA channel
+ */
+void frv_dma_config(int dma, unsigned long ccfr, unsigned long cctr, unsigned long apr)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+
+ ___set_DMAC(ioaddr, CCFR, ccfr);
+ ___set_DMAC(ioaddr, CCTR, cctr);
+ ___set_DMAC(ioaddr, APR, apr);
+ mb();
+
+} /* end frv_dma_config() */
+
+EXPORT_SYMBOL(frv_dma_config);
+
+/*****************************************************************************/
+/*
+ * start a DMA channel
+ */
+void frv_dma_start(int dma,
+ unsigned long sba, unsigned long dba,
+ unsigned long pix, unsigned long six, unsigned long bcl)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+
+ ___set_DMAC(ioaddr, SBA, sba);
+ ___set_DMAC(ioaddr, DBA, dba);
+ ___set_DMAC(ioaddr, PIX, pix);
+ ___set_DMAC(ioaddr, SIX, six);
+ ___set_DMAC(ioaddr, BCL, bcl);
+ ___set_DMAC(ioaddr, CSTR, 0);
+ mb();
+
+ __set_DMAC(ioaddr, CCTR, __get_DMAC(ioaddr, CCTR) | DMAC_CCTRx_ACT);
+ frv_set_dma_inprogress(dma);
+
+} /* end frv_dma_start() */
+
+EXPORT_SYMBOL(frv_dma_start);
+
+/*****************************************************************************/
+/*
+ * restart a DMA channel that's been stopped in circular addressing mode by comparison-end
+ */
+void frv_dma_restart_circular(int dma, unsigned long six)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+
+ ___set_DMAC(ioaddr, SIX, six);
+ ___set_DMAC(ioaddr, CSTR, __get_DMAC(ioaddr, CSTR) & ~DMAC_CSTRx_CE);
+ mb();
+
+ __set_DMAC(ioaddr, CCTR, __get_DMAC(ioaddr, CCTR) | DMAC_CCTRx_ACT);
+ frv_set_dma_inprogress(dma);
+
+} /* end frv_dma_restart_circular() */
+
+EXPORT_SYMBOL(frv_dma_restart_circular);
+
+/*****************************************************************************/
+/*
+ * stop a DMA channel
+ */
+void frv_dma_stop(int dma)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+ uint32_t cctr;
+
+ ___set_DMAC(ioaddr, CSTR, 0);
+ cctr = __get_DMAC(ioaddr, CCTR);
+ cctr &= ~(DMAC_CCTRx_IE | DMAC_CCTRx_ACT);
+ cctr |= DMAC_CCTRx_FC; /* fifo clear */
+ __set_DMAC(ioaddr, CCTR, cctr);
+ __set_DMAC(ioaddr, BCL, 0);
+ frv_clear_dma_inprogress(dma);
+} /* end frv_dma_stop() */
+
+EXPORT_SYMBOL(frv_dma_stop);
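+
+/*
+ * A minimal usage sketch (not taken from an in-tree driver; the handler,
+ * device name, channel mask and register values are illustrative
+ * placeholders - the handler signature simply mirrors the call made by
+ * dma_irq_handler() above).  A driver asks for any channel out of a mask
+ * that has the capabilities it needs, programs it, and releases it when
+ * done:
+ *
+ *	static irqreturn_t my_dma_done(int dma, unsigned long cstr,
+ *				       void *data, struct pt_regs *regs)
+ *	{
+ *		// cstr holds the channel's CSTR status at interrupt time
+ *		return IRQ_HANDLED;
+ *	}
+ *
+ *	int dma = frv_dma_open("mydev", 0xff,
+ *			       FRV_DMA_CAP_DREQ | FRV_DMA_CAP_DACK,
+ *			       my_dma_done, 0, my_data);
+ *	if (dma >= 0) {
+ *		frv_dma_config(dma, ccfr, cctr, apr);
+ *		frv_dma_start(dma, sba, dba, pix, six, bcl);
+ *		...
+ *		frv_dma_stop(dma);
+ *		frv_dma_close(dma);
+ *	}
+ */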
+
+/*****************************************************************************/
+/*
+ * test interrupt status of DMA channel
+ */
+int is_frv_dma_interrupting(int dma)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+
+ return __get_DMAC(ioaddr, CSTR) & (1 << 23);
+
+} /* end is_frv_dma_interrupting() */
+
+EXPORT_SYMBOL(is_frv_dma_interrupting);
+
+/*****************************************************************************/
+/*
+ * dump data about a DMA channel
+ */
+void frv_dma_dump(int dma)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+ unsigned long cstr, pix, six, bcl;
+
+ cstr = __get_DMAC(ioaddr, CSTR);
+ pix = __get_DMAC(ioaddr, PIX);
+ six = __get_DMAC(ioaddr, SIX);
+ bcl = __get_DMAC(ioaddr, BCL);
+
+ printk("DMA[%d] cstr=%lx pix=%lx six=%lx bcl=%lx\n", dma, cstr, pix, six, bcl);
+
+} /* end frv_dma_dump() */
+
+EXPORT_SYMBOL(frv_dma_dump);
+
+/*****************************************************************************/
+/*
+ * pause all DMA controllers
+ * - called by clock mangling routines
+ * - caller must be holding interrupts disabled
+ */
+void frv_dma_pause_all(void)
+{
+ struct frv_dma_channel *channel;
+ unsigned long ioaddr;
+ unsigned long cstr, cctr;
+ int dma;
+
+ write_lock(&frv_dma_channels_lock);
+
+ for (dma = FRV_DMA_NCHANS - 1; dma >= 0; dma--) {
+ channel = &frv_dma_channels[dma];
+
+ if (!(channel->flags & FRV_DMA_FLAGS_INUSE))
+ continue;
+
+ ioaddr = channel->ioaddr;
+ cctr = __get_DMAC(ioaddr, CCTR);
+ if (cctr & DMAC_CCTRx_ACT) {
+ cctr &= ~DMAC_CCTRx_ACT;
+ __set_DMAC(ioaddr, CCTR, cctr);
+
+ do {
+ cstr = __get_DMAC(ioaddr, CSTR);
+ } while (cstr & DMAC_CSTRx_BUSY);
+
+ if (cstr & DMAC_CSTRx_FED)
+ channel->flags |= FRV_DMA_FLAGS_PAUSED;
+ frv_clear_dma_inprogress(dma);
+ }
+ }
+
+} /* end frv_dma_pause_all() */
+
+EXPORT_SYMBOL(frv_dma_pause_all);
+
+/*****************************************************************************/
+/*
+ * resume paused DMA controllers
+ * - called by clock mangling routines
+ * - caller must be holding interrupts disabled
+ */
+void frv_dma_resume_all(void)
+{
+ struct frv_dma_channel *channel;
+ unsigned long ioaddr;
+ unsigned long cstr, cctr;
+ int dma;
+
+ for (dma = FRV_DMA_NCHANS - 1; dma >= 0; dma--) {
+ channel = &frv_dma_channels[dma];
+
+ if (!(channel->flags & FRV_DMA_FLAGS_PAUSED))
+ continue;
+
+ ioaddr = channel->ioaddr;
+ cstr = __get_DMAC(ioaddr, CSTR);
+ cstr &= ~(DMAC_CSTRx_FED | DMAC_CSTRx_INT);
+ __set_DMAC(ioaddr, CSTR, cstr);
+
+ cctr = __get_DMAC(ioaddr, CCTR);
+ cctr |= DMAC_CCTRx_ACT;
+ __set_DMAC(ioaddr, CCTR, cctr);
+
+ channel->flags &= ~FRV_DMA_FLAGS_PAUSED;
+ frv_set_dma_inprogress(dma);
+ }
+
+ write_unlock(&frv_dma_channels_lock);
+
+} /* end frv_dma_resume_all() */
+
+EXPORT_SYMBOL(frv_dma_resume_all);
+
+/*****************************************************************************/
+/*
+ * dma status clear
+ */
+void frv_dma_status_clear(int dma)
+{
+ unsigned long ioaddr = frv_dma_channels[dma].ioaddr;
+ uint32_t cctr;
+ ___set_DMAC(ioaddr, CSTR, 0);
+
+ cctr = __get_DMAC(ioaddr, CCTR);
+} /* end frv_dma_status_clear() */
+
+EXPORT_SYMBOL(frv_dma_status_clear);
--- /dev/null
+/* entry-table.S: main trap vector tables and exception jump table
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/spr-regs.h>
+
+###############################################################################
+#
+# Declare the main trap and vector tables
+#
+# There are six tables:
+#
+# (1) The trap table for debug mode
+# (2) The trap table for kernel mode
+# (3) The trap table for user mode
+#
+# The CPU jumps to an appropriate slot in the appropriate table to perform
+# exception processing. We have three different tables for the three
+# different CPU modes because there is no hardware differentiation between
+# stack pointers for these three modes, and so we have to invent one when
+# crossing mode boundaries.
+#
+# (4) The exception handler vector table
+#
+# The user and kernel trap tables use the same prologue for normal
+# exception processing. The prologue then jumps to the handler in this
+# table, as indexed by the exception ID from the TBR.
+#
+# (5) The fixup table for kernel-trap single-step
+# (6) The fixup table for user-trap single-step
+#
+# Due to the way single-stepping works on this CPU (single-step is not
+# disabled when crossing exception boundaries, only when in debug mode),
+# we have to catch the single-step event in break.S and jump to the fixup
+# routine pointed to by this table.
+#
+# The linker script places the user mode and kernel mode trap tables onto
+# the same 8KB page, so that break.S can be more efficient when performing
+# single-step bypass management.
+#
+###############################################################################
+
+ # trap table for entry from debug mode
+ .section .trap.break,"ax"
+ .balign 256*16
+ .globl __entry_breaktrap_table
+__entry_breaktrap_table:
+
+ # trap table for entry from user mode
+ .section .trap.user,"ax"
+ .balign 256*16
+ .globl __entry_usertrap_table
+__entry_usertrap_table:
+
+ # trap table for entry from kernel mode
+ .section .trap.kernel,"ax"
+ .balign 256*16
+ .globl __entry_kerneltrap_table
+__entry_kerneltrap_table:
+
+ # exception handler jump table
+ .section .trap.vector,"ax"
+ .balign 256*4
+ .globl __entry_vector_table
+__entry_vector_table:
+
+ # trap fixup table for single-stepping in user mode
+ .section .trap.fixup.user,"a"
+ .balign 256*4
+ .globl __break_usertrap_fixup_table
+__break_usertrap_fixup_table:
+
+	# trap fixup table for single-stepping in kernel mode
+ .section .trap.fixup.kernel,"a"
+ .balign 256*4
+ .globl __break_kerneltrap_fixup_table
+__break_kerneltrap_fixup_table:
+
+	# handler declaration for a software or program interrupt
+.macro VECTOR_SOFTPROG tbr_tt, vec
+ .section .trap.user
+ .org \tbr_tt
+ bra __entry_uspace_softprog_interrupt
+ .section .trap.fixup.user
+ .org \tbr_tt >> 2
+ .long __break_step_uspace_softprog_interrupt
+ .section .trap.kernel
+ .org \tbr_tt
+ bra __entry_kernel_softprog_interrupt
+ .section .trap.fixup.kernel
+ .org \tbr_tt >> 2
+ .long __break_step_kernel_softprog_interrupt
+ .section .trap.vector
+ .org \tbr_tt >> 2
+ .long \vec
+.endm
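+
+	# For example, "VECTOR_SOFTPROG TBR_TT_ILLEGAL_INSTR, __entry_illegal_instruction"
+	# (used below) places a branch to the uspace/kernel softprog prologues at
+	# offset TBR_TT_ILLEGAL_INSTR in the user and kernel trap tables, records
+	# the single-step fixup routines at offset TBR_TT_ILLEGAL_INSTR>>2 in both
+	# fixup tables, and stores the handler address at the same >>2 offset in
+	# the vector table.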
+
+ # handler declaration for a maskable external interrupt
+.macro VECTOR_IRQ tbr_tt, vec
+ .section .trap.user
+ .org \tbr_tt
+ bra __entry_uspace_external_interrupt
+ .section .trap.fixup.user
+ .org \tbr_tt >> 2
+ .long __break_step_uspace_external_interrupt
+ .section .trap.kernel
+ .org \tbr_tt
+ bra __entry_kernel_external_interrupt
+ .section .trap.fixup.kernel
+ .org \tbr_tt >> 2
+ .long __break_step_kernel_external_interrupt
+ .section .trap.vector
+ .org \tbr_tt >> 2
+ .long \vec
+.endm
+
+ # handler declaration for an NMI external interrupt
+.macro VECTOR_NMI tbr_tt, vec
+ .section .trap.user
+ .org \tbr_tt
+ break
+ break
+ break
+ break
+ .section .trap.kernel
+ .org \tbr_tt
+ break
+ break
+ break
+ break
+ .section .trap.vector
+ .org \tbr_tt >> 2
+ .long \vec
+.endm
+
+	# handler declaration for an MMU-only software or program interrupt
+.macro VECTOR_SP_MMU tbr_tt, vec
+#ifdef CONFIG_MMU
+ VECTOR_SOFTPROG \tbr_tt, \vec
+#else
+ VECTOR_NMI \tbr_tt, 0
+#endif
+.endm
+
+
+###############################################################################
+#
+# specification of the vectors
+# - note: each macro inserts code into multiple sections
+#
+###############################################################################
+ VECTOR_SP_MMU TBR_TT_INSTR_MMU_MISS, __entry_insn_mmu_miss
+ VECTOR_SOFTPROG TBR_TT_INSTR_ACC_ERROR, __entry_insn_access_error
+ VECTOR_SOFTPROG TBR_TT_INSTR_ACC_EXCEP, __entry_insn_access_exception
+ VECTOR_SOFTPROG TBR_TT_PRIV_INSTR, __entry_privileged_instruction
+ VECTOR_SOFTPROG TBR_TT_ILLEGAL_INSTR, __entry_illegal_instruction
+ VECTOR_SOFTPROG TBR_TT_FP_EXCEPTION, __entry_media_exception
+ VECTOR_SOFTPROG TBR_TT_MP_EXCEPTION, __entry_media_exception
+ VECTOR_SOFTPROG TBR_TT_DATA_ACC_ERROR, __entry_data_access_error
+ VECTOR_SP_MMU TBR_TT_DATA_MMU_MISS, __entry_data_mmu_miss
+ VECTOR_SOFTPROG TBR_TT_DATA_ACC_EXCEP, __entry_data_access_exception
+ VECTOR_SOFTPROG TBR_TT_DATA_STR_ERROR, __entry_data_store_error
+ VECTOR_SOFTPROG TBR_TT_DIVISION_EXCEP, __entry_division_exception
+
+#ifdef CONFIG_MMU
+ .section .trap.user
+ .org TBR_TT_INSTR_TLB_MISS
+ .globl __trap_user_insn_tlb_miss
+__trap_user_insn_tlb_miss:
+ movsg ear0,gr28 /* faulting address */
+ movsg scr0,gr31 /* get mapped PTD coverage start address */
+ xor.p gr28,gr31,gr31 /* compare addresses */
+ bra __entry_user_insn_tlb_miss
+
+ .org TBR_TT_DATA_TLB_MISS
+ .globl __trap_user_data_tlb_miss
+__trap_user_data_tlb_miss:
+ movsg ear0,gr28 /* faulting address */
+ movsg scr1,gr31 /* get mapped PTD coverage start address */
+ xor.p gr28,gr31,gr31 /* compare addresses */
+ bra __entry_user_data_tlb_miss
+
+ .section .trap.kernel
+ .org TBR_TT_INSTR_TLB_MISS
+ .globl __trap_kernel_insn_tlb_miss
+__trap_kernel_insn_tlb_miss:
+ movsg ear0,gr29 /* faulting address */
+ movsg scr0,gr31 /* get mapped PTD coverage start address */
+ xor.p gr29,gr31,gr31 /* compare addresses */
+ bra __entry_kernel_insn_tlb_miss
+
+ .org TBR_TT_DATA_TLB_MISS
+ .globl __trap_kernel_data_tlb_miss
+__trap_kernel_data_tlb_miss:
+ movsg ear0,gr29 /* faulting address */
+ movsg scr1,gr31 /* get mapped PTD coverage start address */
+ xor.p gr29,gr31,gr31 /* compare addresses */
+ bra __entry_kernel_data_tlb_miss
+
+ .section .trap.fixup.user
+ .org TBR_TT_INSTR_TLB_MISS >> 2
+ .globl __trap_fixup_user_insn_tlb_miss
+__trap_fixup_user_insn_tlb_miss:
+ .long __break_user_insn_tlb_miss
+ .org TBR_TT_DATA_TLB_MISS >> 2
+ .globl __trap_fixup_user_data_tlb_miss
+__trap_fixup_user_data_tlb_miss:
+ .long __break_user_data_tlb_miss
+
+ .section .trap.fixup.kernel
+ .org TBR_TT_INSTR_TLB_MISS >> 2
+ .globl __trap_fixup_kernel_insn_tlb_miss
+__trap_fixup_kernel_insn_tlb_miss:
+ .long __break_kernel_insn_tlb_miss
+ .org TBR_TT_DATA_TLB_MISS >> 2
+ .globl __trap_fixup_kernel_data_tlb_miss
+__trap_fixup_kernel_data_tlb_miss:
+ .long __break_kernel_data_tlb_miss
+
+ .section .trap.vector
+ .org TBR_TT_INSTR_TLB_MISS >> 2
+ .long __entry_insn_mmu_fault
+ .org TBR_TT_DATA_TLB_MISS >> 2
+ .long __entry_data_mmu_fault
+#endif
+
+ VECTOR_SP_MMU TBR_TT_DATA_DAT_EXCEP, __entry_data_dat_fault
+ VECTOR_NMI TBR_TT_DECREMENT_TIMER, __entry_do_NMI
+ VECTOR_SOFTPROG TBR_TT_COMPOUND_EXCEP, __entry_compound_exception
+ VECTOR_IRQ TBR_TT_INTERRUPT_1, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_2, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_3, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_4, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_5, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_6, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_7, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_8, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_9, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_10, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_11, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_12, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_13, __entry_do_IRQ
+ VECTOR_IRQ TBR_TT_INTERRUPT_14, __entry_do_IRQ
+ VECTOR_NMI TBR_TT_INTERRUPT_15, __entry_do_NMI
+
+ # miscellaneous user mode entry points
+ .section .trap.user
+ .org TBR_TT_TRAP0
+ .rept 127
+ bra __entry_uspace_softprog_interrupt
+ bra __break_step_uspace_softprog_interrupt
+ .long 0,0
+ .endr
+ .org TBR_TT_BREAK
+ bra __entry_break
+ .long 0,0,0
+
+ # miscellaneous kernel mode entry points
+ .section .trap.kernel
+ .org TBR_TT_TRAP0
+ .rept 127
+ bra __entry_kernel_softprog_interrupt
+ bra __break_step_kernel_softprog_interrupt
+ .long 0,0
+ .endr
+ .org TBR_TT_BREAK
+ bra __entry_break
+ .long 0,0,0
+
+ # miscellaneous debug mode entry points
+ .section .trap.break
+ .org TBR_TT_BREAK
+ movsg bpcsr,gr30
+ jmpl @(gr30,gr0)
+
+ # miscellaneous vectors
+ .section .trap.vector
+ .org TBR_TT_TRAP0 >> 2
+ .long system_call
+ .rept 126
+ .long __entry_unsupported_trap
+ .endr
+ .org TBR_TT_BREAK >> 2
+ .long __entry_debug_exception
--- /dev/null
+/* entry.S: FR-V entry
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ *
+ * Entry to the kernel is "interesting":
+ * (1) There are no stack pointers, not even for the kernel
+ * (2) General Registers should not be clobbered
+ * (3) There are no kernel-only data registers
+ * (4) Since all addressing modes are relative to a General Register, no global
+ * variables can be reached
+ *
+ * We deal with this by declaring that we shall kill GR28 on entering the
+ * kernel from userspace
+ *
+ * However, since break interrupts can interrupt the CPU even when PSR.ET==0,
+ * they can't rely on GR28 to be anything useful, and so need to clobber a
+ * separate register (GR31). Break interrupts are managed in break.S
+ *
+ * GR29 _is_ saved, and holds the current task pointer globally
+ *
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/ptrace.h>
+#include <asm/errno.h>
+#include <asm/cache.h>
+#include <asm/spr-regs.h>
+
+#define nr_syscalls ((syscall_table_size)/4)
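+/* each entry in sys_call_table below is a 4-byte .long, hence the /4 above */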
+
+ .text
+ .balign 4
+
+.macro LEDS val
+# sethi.p %hi(0xe1200004),gr30
+# setlo %lo(0xe1200004),gr30
+# setlos #~\val,gr31
+# st gr31,@(gr30,gr0)
+# sethi.p %hi(0xffc00100),gr30
+# setlo %lo(0xffc00100),gr30
+# sth gr0,@(gr30,gr0)
+# membar
+.endm
+
+.macro LEDS32
+# not gr31,gr31
+# sethi.p %hi(0xe1200004),gr30
+# setlo %lo(0xe1200004),gr30
+# st.p gr31,@(gr30,gr0)
+# srli gr31,#16,gr31
+# sethi.p %hi(0xffc00100),gr30
+# setlo %lo(0xffc00100),gr30
+# sth gr31,@(gr30,gr0)
+# membar
+.endm
+
+###############################################################################
+#
+# entry point for External interrupts received whilst executing userspace code
+#
+###############################################################################
+ .globl __entry_uspace_external_interrupt
+ .type __entry_uspace_external_interrupt,@function
+__entry_uspace_external_interrupt:
+ LEDS 0x6200
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ ldi @(gr28,#0),gr28
+
+ # handle h/w single-step through exceptions
+ sti gr0,@(gr28,#REG__STATUS)
+
+ .globl __entry_uspace_external_interrupt_reentry
+__entry_uspace_external_interrupt_reentry:
+ LEDS 0x6201
+
+ setlos #REG__END,gr30
+ dcpl gr28,gr30,#0
+
+ # finish building the exception frame
+ sti sp, @(gr28,#REG_SP)
+ stdi gr2, @(gr28,#REG_GR(2))
+ stdi gr4, @(gr28,#REG_GR(4))
+ stdi gr6, @(gr28,#REG_GR(6))
+ stdi gr8, @(gr28,#REG_GR(8))
+ stdi gr10,@(gr28,#REG_GR(10))
+ stdi gr12,@(gr28,#REG_GR(12))
+ stdi gr14,@(gr28,#REG_GR(14))
+ stdi gr16,@(gr28,#REG_GR(16))
+ stdi gr18,@(gr28,#REG_GR(18))
+ stdi gr20,@(gr28,#REG_GR(20))
+ stdi gr22,@(gr28,#REG_GR(22))
+ stdi gr24,@(gr28,#REG_GR(24))
+ stdi gr26,@(gr28,#REG_GR(26))
+ sti gr0, @(gr28,#REG_GR(28))
+ sti gr29,@(gr28,#REG_GR(29))
+ stdi.p gr30,@(gr28,#REG_GR(30))
+
+ # set up the kernel stack pointer
+ ori gr28,0,sp
+
+ movsg tbr ,gr20
+ movsg psr ,gr22
+ movsg pcsr,gr21
+ movsg isr ,gr23
+ movsg ccr ,gr24
+ movsg cccr,gr25
+ movsg lr ,gr26
+ movsg lcr ,gr27
+
+ setlos.p #-1,gr4
+ andi gr22,#PSR_PS,gr5 /* try to rebuild original PSR value */
+ andi.p gr22,#~(PSR_PS|PSR_S),gr6
+ slli gr5,#1,gr5
+ or gr6,gr5,gr5
+ andi gr5,#~PSR_ET,gr5
+
+ sti gr20,@(gr28,#REG_TBR)
+ sti gr21,@(gr28,#REG_PC)
+ sti gr5 ,@(gr28,#REG_PSR)
+ sti gr23,@(gr28,#REG_ISR)
+ stdi gr24,@(gr28,#REG_CCR)
+ stdi gr26,@(gr28,#REG_LR)
+ sti gr4 ,@(gr28,#REG_SYSCALLNO)
+
+ movsg iacc0h,gr4
+ movsg iacc0l,gr5
+ stdi gr4,@(gr28,#REG_IACC0)
+
+ movsg gner0,gr4
+ movsg gner1,gr5
+ stdi gr4,@(gr28,#REG_GNER0)
+
+ # set up kernel global registers
+ sethi.p %hi(__kernel_current_task),gr5
+ setlo %lo(__kernel_current_task),gr5
+ sethi.p %hi(_gp),gr16
+ setlo %lo(_gp),gr16
+ ldi @(gr5,#0),gr29
+ ldi.p @(gr29,#4),gr15 ; __current_thread_info = current->thread_info
+
+ # make sure we (the kernel) get div-zero and misalignment exceptions
+ setlos #ISR_EDE|ISR_DTT_DIVBYZERO|ISR_EMAM_EXCEPTION,gr5
+ movgs gr5,isr
+
+ # switch to the kernel trap table
+ sethi.p %hi(__entry_kerneltrap_table),gr6
+ setlo %lo(__entry_kerneltrap_table),gr6
+ movgs gr6,tbr
+
+ # set the return address
+ sethi.p %hi(__entry_return_from_user_interrupt),gr4
+ setlo %lo(__entry_return_from_user_interrupt),gr4
+ movgs gr4,lr
+
+ # raise the minimum interrupt priority to 15 (NMI only) and enable exceptions
+ movsg psr,gr4
+
+ ori gr4,#PSR_PIL_14,gr4
+ movgs gr4,psr
+ ori gr4,#PSR_PIL_14|PSR_ET,gr4
+ movgs gr4,psr
+
+ LEDS 0x6202
+ bra do_IRQ
+
+ .size __entry_uspace_external_interrupt,.-__entry_uspace_external_interrupt
+
+###############################################################################
+#
+# entry point for External interrupts received whilst executing kernel code
+# - on arriving here, the following registers should already be set up:
+# GR15 - current thread_info struct pointer
+# GR16 - kernel GP-REL pointer
+# GR29 - current task struct pointer
+# TBR - kernel trap vector table
+# ISR - kernel's preferred integer controls
+#
+###############################################################################
+ .globl __entry_kernel_external_interrupt
+ .type __entry_kernel_external_interrupt,@function
+__entry_kernel_external_interrupt:
+ LEDS 0x6210
+
+ sub sp,gr15,gr31
+ LEDS32
+
+ # set up the stack pointer
+ or.p sp,gr0,gr30
+ subi sp,#REG__END,sp
+ sti gr30,@(sp,#REG_SP)
+
+ # handle h/w single-step through exceptions
+ sti gr0,@(sp,#REG__STATUS)
+
+ .globl __entry_kernel_external_interrupt_reentry
+__entry_kernel_external_interrupt_reentry:
+ LEDS 0x6211
+
+ # set up the exception frame
+ setlos #REG__END,gr30
+ dcpl sp,gr30,#0
+
+ sti.p gr28,@(sp,#REG_GR(28))
+ ori sp,0,gr28
+
+ # finish building the exception frame
+ stdi gr2,@(gr28,#REG_GR(2))
+ stdi gr4,@(gr28,#REG_GR(4))
+ stdi gr6,@(gr28,#REG_GR(6))
+ stdi gr8,@(gr28,#REG_GR(8))
+ stdi gr10,@(gr28,#REG_GR(10))
+ stdi gr12,@(gr28,#REG_GR(12))
+ stdi gr14,@(gr28,#REG_GR(14))
+ stdi gr16,@(gr28,#REG_GR(16))
+ stdi gr18,@(gr28,#REG_GR(18))
+ stdi gr20,@(gr28,#REG_GR(20))
+ stdi gr22,@(gr28,#REG_GR(22))
+ stdi gr24,@(gr28,#REG_GR(24))
+ stdi gr26,@(gr28,#REG_GR(26))
+ sti gr29,@(gr28,#REG_GR(29))
+ stdi gr30,@(gr28,#REG_GR(30))
+
+ movsg tbr ,gr20
+ movsg psr ,gr22
+ movsg pcsr,gr21
+ movsg isr ,gr23
+ movsg ccr ,gr24
+ movsg cccr,gr25
+ movsg lr ,gr26
+ movsg lcr ,gr27
+
+ setlos.p #-1,gr4
+ andi gr22,#PSR_PS,gr5 /* try to rebuild original PSR value */
+ andi.p gr22,#~(PSR_PS|PSR_S),gr6
+ slli gr5,#1,gr5
+ or gr6,gr5,gr5
+ andi.p gr5,#~PSR_ET,gr5
+
+ # set CCCR.CC3 to Undefined to abort atomic-modify completion inside the kernel
+ # - for an explanation of how it works, see: Documentation/fujitsu/frv/atomic-ops.txt
+ andi gr25,#~0xc0,gr25
+
+ sti gr20,@(gr28,#REG_TBR)
+ sti gr21,@(gr28,#REG_PC)
+ sti gr5 ,@(gr28,#REG_PSR)
+ sti gr23,@(gr28,#REG_ISR)
+ stdi gr24,@(gr28,#REG_CCR)
+ stdi gr26,@(gr28,#REG_LR)
+ sti gr4 ,@(gr28,#REG_SYSCALLNO)
+
+ movsg iacc0h,gr4
+ movsg iacc0l,gr5
+ stdi gr4,@(gr28,#REG_IACC0)
+
+ movsg gner0,gr4
+ movsg gner1,gr5
+ stdi gr4,@(gr28,#REG_GNER0)
+
+ # set the return address
+ sethi.p %hi(__entry_return_from_kernel_interrupt),gr4
+ setlo %lo(__entry_return_from_kernel_interrupt),gr4
+ movgs gr4,lr
+
+ # clear power-saving mode flags
+ movsg hsr0,gr4
+ andi gr4,#~HSR0_PDM,gr4
+ movgs gr4,hsr0
+
+ # raise the minimum interrupt priority to 15 (NMI only) and enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_PIL_14,gr4
+ movgs gr4,psr
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+
+ LEDS 0x6212
+ bra do_IRQ
+
+ .size __entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt
+
+
+###############################################################################
+#
+# entry point for Software and Program interrupts generated whilst executing userspace code
+#
+###############################################################################
+ .globl __entry_uspace_softprog_interrupt
+ .type __entry_uspace_softprog_interrupt,@function
+ .globl __entry_uspace_handle_mmu_fault
+__entry_uspace_softprog_interrupt:
+ LEDS 0x6000
+#ifdef CONFIG_MMU
+ movsg ear0,gr28
+__entry_uspace_handle_mmu_fault:
+ movgs gr28,scr2
+#endif
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ ldi @(gr28,#0),gr28
+
+ # handle h/w single-step through exceptions
+ sti gr0,@(gr28,#REG__STATUS)
+
+ .globl __entry_uspace_softprog_interrupt_reentry
+__entry_uspace_softprog_interrupt_reentry:
+ LEDS 0x6001
+
+ setlos #REG__END,gr30
+ dcpl gr28,gr30,#0
+
+ # set up the kernel stack pointer
+ sti.p sp,@(gr28,#REG_SP)
+ ori gr28,0,sp
+ sti gr0,@(gr28,#REG_GR(28))
+
+ stdi gr20,@(gr28,#REG_GR(20))
+ stdi gr22,@(gr28,#REG_GR(22))
+
+ movsg tbr,gr20
+ movsg pcsr,gr21
+ movsg psr,gr22
+
+ sethi.p %hi(__entry_return_from_user_exception),gr23
+ setlo %lo(__entry_return_from_user_exception),gr23
+ bra __entry_common
+
+ .size __entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt
+
+ # single-stepping was disabled on entry to a TLB handler that then faulted
+#ifdef CONFIG_MMU
+ .globl __entry_uspace_handle_mmu_fault_sstep
+__entry_uspace_handle_mmu_fault_sstep:
+ movgs gr28,scr2
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ ldi @(gr28,#0),gr28
+
+ # flag single-step re-enablement
+ sti gr0,@(gr28,#REG__STATUS)
+ bra __entry_uspace_softprog_interrupt_reentry
+#endif
+
+
+###############################################################################
+#
+# entry point for Software and Program interrupts generated whilst executing kernel code
+#
+###############################################################################
+ .globl __entry_kernel_softprog_interrupt
+ .type __entry_kernel_softprog_interrupt,@function
+__entry_kernel_softprog_interrupt:
+ LEDS 0x6004
+
+#ifdef CONFIG_MMU
+ movsg ear0,gr30
+ movgs gr30,scr2
+#endif
+
+ .globl __entry_kernel_handle_mmu_fault
+__entry_kernel_handle_mmu_fault:
+ # set up the stack pointer
+ subi sp,#REG__END,sp
+ sti sp,@(sp,#REG_SP)
+ sti sp,@(sp,#REG_SP-4)
+ andi sp,#~7,sp
+
+ # handle h/w single-step through exceptions
+ sti gr0,@(sp,#REG__STATUS)
+
+ .globl __entry_kernel_softprog_interrupt_reentry
+__entry_kernel_softprog_interrupt_reentry:
+ LEDS 0x6005
+
+ setlos #REG__END,gr30
+ dcpl sp,gr30,#0
+
+ # set up the exception frame
+ sti.p gr28,@(sp,#REG_GR(28))
+ ori sp,0,gr28
+
+ stdi gr20,@(gr28,#REG_GR(20))
+ stdi gr22,@(gr28,#REG_GR(22))
+
+ ldi @(sp,#REG_SP),gr22 /* reconstruct the old SP */
+ addi gr22,#REG__END,gr22
+ sti gr22,@(sp,#REG_SP)
+
+ # set CCCR.CC3 to Undefined to abort atomic-modify completion inside the kernel
+ # - for an explanation of how it works, see: Documentation/fujitsu/frv/atomic-ops.txt
+ movsg cccr,gr20
+ andi gr20,#~0xc0,gr20
+ movgs gr20,cccr
+
+ movsg tbr,gr20
+ movsg pcsr,gr21
+ movsg psr,gr22
+
+ sethi.p %hi(__entry_return_from_kernel_exception),gr23
+ setlo %lo(__entry_return_from_kernel_exception),gr23
+ bra __entry_common
+
+ .size __entry_kernel_softprog_interrupt,.-__entry_kernel_softprog_interrupt
+
+ # single-stepping was disabled on entry to a TLB handler that then faulted
+#ifdef CONFIG_MMU
+ .globl __entry_kernel_handle_mmu_fault_sstep
+__entry_kernel_handle_mmu_fault_sstep:
+ # set up the stack pointer
+ subi sp,#REG__END,sp
+ sti sp,@(sp,#REG_SP)
+ sti sp,@(sp,#REG_SP-4)
+ andi sp,#~7,sp
+
+ # flag single-step re-enablement
+ sethi #REG__STATUS_STEP,gr30
+ sti gr30,@(sp,#REG__STATUS)
+ bra __entry_kernel_softprog_interrupt_reentry
+#endif
+
+
+###############################################################################
+#
+# the rest of the kernel entry point code
+# - on arriving here, the following registers should be set up:
+# GR1 - kernel stack pointer
+# GR7 - syscall number (trap 0 only)
+# GR8-13 - syscall args (trap 0 only)
+# GR20 - saved TBR
+# GR21 - saved PC
+# GR22 - saved PSR
+# GR23 - return handler address
+# GR28 - exception frame on stack
+# SCR2 - saved EAR0 where applicable (clobbered by ICI & ICEF insns on FR451)
+# PSR - PSR.S 1, PSR.ET 0
+#
+###############################################################################
+ .globl __entry_common
+ .type __entry_common,@function
+__entry_common:
+ LEDS 0x6008
+
+ # finish building the exception frame
+ stdi gr2,@(gr28,#REG_GR(2))
+ stdi gr4,@(gr28,#REG_GR(4))
+ stdi gr6,@(gr28,#REG_GR(6))
+ stdi gr8,@(gr28,#REG_GR(8))
+ stdi gr10,@(gr28,#REG_GR(10))
+ stdi gr12,@(gr28,#REG_GR(12))
+ stdi gr14,@(gr28,#REG_GR(14))
+ stdi gr16,@(gr28,#REG_GR(16))
+ stdi gr18,@(gr28,#REG_GR(18))
+ stdi gr24,@(gr28,#REG_GR(24))
+ stdi gr26,@(gr28,#REG_GR(26))
+ sti gr29,@(gr28,#REG_GR(29))
+ stdi gr30,@(gr28,#REG_GR(30))
+
+ movsg lcr ,gr27
+ movsg lr ,gr26
+ movgs gr23,lr
+ movsg cccr,gr25
+ movsg ccr ,gr24
+ movsg isr ,gr23
+
+ setlos.p #-1,gr4
+ andi gr22,#PSR_PS,gr5 /* try to rebuild original PSR value */
+ andi.p gr22,#~(PSR_PS|PSR_S),gr6
+ slli gr5,#1,gr5
+ or gr6,gr5,gr5
+ andi gr5,#~PSR_ET,gr5
+
+ sti gr20,@(gr28,#REG_TBR)
+ sti gr21,@(gr28,#REG_PC)
+ sti gr5 ,@(gr28,#REG_PSR)
+ sti gr23,@(gr28,#REG_ISR)
+ stdi gr24,@(gr28,#REG_CCR)
+ stdi gr26,@(gr28,#REG_LR)
+ sti gr4 ,@(gr28,#REG_SYSCALLNO)
+
+ movsg iacc0h,gr4
+ movsg iacc0l,gr5
+ stdi gr4,@(gr28,#REG_IACC0)
+
+ movsg gner0,gr4
+ movsg gner1,gr5
+ stdi gr4,@(gr28,#REG_GNER0)
+
+ # set up kernel global registers
+ sethi.p %hi(__kernel_current_task),gr5
+ setlo %lo(__kernel_current_task),gr5
+ sethi.p %hi(_gp),gr16
+ setlo %lo(_gp),gr16
+ ldi @(gr5,#0),gr29
+ ldi @(gr29,#4),gr15 ; __current_thread_info = current->thread_info
+
+ # switch to the kernel trap table
+ sethi.p %hi(__entry_kerneltrap_table),gr6
+ setlo %lo(__entry_kerneltrap_table),gr6
+ movgs gr6,tbr
+
+ # make sure we (the kernel) get div-zero and misalignment exceptions
+ setlos #ISR_EDE|ISR_DTT_DIVBYZERO|ISR_EMAM_EXCEPTION,gr5
+ movgs gr5,isr
+
+ # clear power-saving mode flags
+ movsg hsr0,gr4
+ andi gr4,#~HSR0_PDM,gr4
+ movgs gr4,hsr0
+
+ # multiplex again using old TBR as a guide
+ setlos.p #TBR_TT,gr3
+ sethi %hi(__entry_vector_table),gr6
+ and.p gr20,gr3,gr5
+ setlo %lo(__entry_vector_table),gr6
+ srli gr5,#2,gr5
+ ld @(gr5,gr6),gr5
+
+ LEDS 0x6009
+ jmpl @(gr5,gr0)
+
+
+ .size __entry_common,.-__entry_common
+
+###############################################################################
+#
+# handle instruction MMU fault
+#
+###############################################################################
+#ifdef CONFIG_MMU
+ .globl __entry_insn_mmu_fault
+__entry_insn_mmu_fault:
+ LEDS 0x6010
+ setlos #0,gr8
+ movsg esr0,gr9
+ movsg scr2,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+
+ sethi.p %hi(do_page_fault),gr5
+ setlo %lo(do_page_fault),gr5
+ jmpl @(gr5,gr0) ; call do_page_fault(0,esr0,ear0)
+#endif
+
+
+###############################################################################
+#
+# handle instruction access error
+#
+###############################################################################
+ .globl __entry_insn_access_error
+__entry_insn_access_error:
+ LEDS 0x6011
+ sethi.p %hi(insn_access_error),gr5
+ setlo %lo(insn_access_error),gr5
+ movsg esfr1,gr8
+ movsg epcr0,gr9
+ movsg esr0,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call insn_access_error(esfr1,epcr0,esr0)
+
+###############################################################################
+#
+# handle various instructions of dubious legality
+#
+###############################################################################
+ .globl __entry_unsupported_trap
+ .globl __entry_illegal_instruction
+ .globl __entry_privileged_instruction
+ .globl __entry_debug_exception
+__entry_unsupported_trap:
+ subi gr21,#4,gr21
+ sti gr21,@(gr28,#REG_PC)
+__entry_illegal_instruction:
+__entry_privileged_instruction:
+__entry_debug_exception:
+ LEDS 0x6012
+ sethi.p %hi(illegal_instruction),gr5
+ setlo %lo(illegal_instruction),gr5
+ movsg esfr1,gr8
+ movsg epcr0,gr9
+ movsg esr0,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call ill_insn(esfr1,epcr0,esr0)
+
+###############################################################################
+#
+# handle media exception
+#
+###############################################################################
+ .globl __entry_media_exception
+__entry_media_exception:
+ LEDS 0x6013
+ sethi.p %hi(media_exception),gr5
+ setlo %lo(media_exception),gr5
+ movsg msr0,gr8
+ movsg msr1,gr9
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call media_excep(msr0,msr1)
+
+###############################################################################
+#
+# handle data MMU fault
+# handle data DAT fault (write-protect exception)
+#
+###############################################################################
+#ifdef CONFIG_MMU
+ .globl __entry_data_mmu_fault
+__entry_data_mmu_fault:
+ .globl __entry_data_dat_fault
+__entry_data_dat_fault:
+ LEDS 0x6014
+ setlos #1,gr8
+ movsg esr0,gr9
+ movsg scr2,gr10 ; saved EAR0
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+
+ sethi.p %hi(do_page_fault),gr5
+ setlo %lo(do_page_fault),gr5
+ jmpl @(gr5,gr0) ; call do_page_fault(1,esr0,ear0)
+#endif
+
+###############################################################################
+#
+# handle data and instruction access exceptions
+#
+###############################################################################
+ .globl __entry_insn_access_exception
+ .globl __entry_data_access_exception
+__entry_insn_access_exception:
+__entry_data_access_exception:
+ LEDS 0x6016
+ sethi.p %hi(memory_access_exception),gr5
+ setlo %lo(memory_access_exception),gr5
+ movsg esr0,gr8
+ movsg scr2,gr9 ; saved EAR0
+ movsg epcr0,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call memory_access_error(esr0,ear0,epcr0)
+
+###############################################################################
+#
+# handle data access error
+#
+###############################################################################
+ .globl __entry_data_access_error
+__entry_data_access_error:
+ LEDS 0x6016
+ sethi.p %hi(data_access_error),gr5
+ setlo %lo(data_access_error),gr5
+ movsg esfr1,gr8
+ movsg esr15,gr9
+ movsg ear15,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call data_access_error(esfr1,esr15,ear15)
+
+###############################################################################
+#
+# handle data store error
+#
+###############################################################################
+ .globl __entry_data_store_error
+__entry_data_store_error:
+ LEDS 0x6017
+ sethi.p %hi(data_store_error),gr5
+ setlo %lo(data_store_error),gr5
+ movsg esfr1,gr8
+ movsg esr14,gr9
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call data_store_error(esfr1,esr14)
+
+###############################################################################
+#
+# handle division exception
+#
+###############################################################################
+ .globl __entry_division_exception
+__entry_division_exception:
+ LEDS 0x6018
+ sethi.p %hi(division_exception),gr5
+ setlo %lo(division_exception),gr5
+ movsg esfr1,gr8
+ movsg esr0,gr9
+ movsg isr,gr10
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call div_excep(esfr1,esr0,isr)
+
+###############################################################################
+#
+# handle compound exception
+#
+###############################################################################
+ .globl __entry_compound_exception
+__entry_compound_exception:
+ LEDS 0x6019
+ sethi.p %hi(compound_exception),gr5
+ setlo %lo(compound_exception),gr5
+ movsg esfr1,gr8
+ movsg esr0,gr9
+ movsg esr14,gr10
+ movsg esr15,gr11
+ movsg msr0,gr12
+ movsg msr1,gr13
+
+ # now that we've accessed the exception regs, we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ jmpl @(gr5,gr0) ; call comp_excep(esfr1,esr0,esr14,esr15,msr0,msr1)
+
+###############################################################################
+#
+# handle interrupts and NMIs
+#
+###############################################################################
+ .globl __entry_do_IRQ
+__entry_do_IRQ:
+ LEDS 0x6020
+
+ # we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ bra do_IRQ
+
+ .globl __entry_do_NMI
+__entry_do_NMI:
+ LEDS 0x6021
+
+ # we can enable exceptions
+ movsg psr,gr4
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+ bra do_NMI
+
+###############################################################################
+#
+# the return path for a newly forked child process
+# - __switch_to() saved the old current pointer in GR8 for us
+#
+###############################################################################
+ .globl ret_from_fork
+ret_from_fork:
+ LEDS 0x6100
+ call schedule_tail
+
+ # fork & co. return 0 to child
+ setlos.p #0,gr8
+ bra __syscall_exit
+
+###################################################################################################
+#
+# Return to user mode is not as complex as all this looks,
+# but we want the default path for a system call return to
+# go as quickly as possible, which is why some of this is
+# less clear than it otherwise should be.
+#
+###################################################################################################
+ .balign L1_CACHE_BYTES
+ .globl system_call
+system_call:
+ LEDS 0x6101
+ movsg psr,gr4 ; enable exceptions
+ ori gr4,#PSR_ET,gr4
+ movgs gr4,psr
+
+ sti gr7,@(gr28,#REG_SYSCALLNO)
+ sti.p gr8,@(gr28,#REG_ORIG_GR8)
+
+ subicc gr7,#nr_syscalls,gr0,icc0
+ bnc icc0,#0,__syscall_badsys
+
+ ldi @(gr15,#TI_FLAGS),gr4
+ ori gr4,#_TIF_SYSCALL_TRACE,gr4
+ andicc gr4,#_TIF_SYSCALL_TRACE,gr0,icc0
+ bne icc0,#0,__syscall_trace_entry
+
+__syscall_call:
+ slli.p gr7,#2,gr7
+ sethi %hi(sys_call_table),gr5
+ setlo %lo(sys_call_table),gr5
+ ld @(gr5,gr7),gr4
+ calll @(gr4,gr0)
+
+
+###############################################################################
+#
+# return to interrupted process
+#
+###############################################################################
+__syscall_exit:
+ LEDS 0x6300
+
+ sti gr8,@(gr28,#REG_GR(8)) ; save return value
+
+ # rebuild saved psr - execve will change it for init/main.c
+ ldi @(gr28,#REG_PSR),gr22
+ srli gr22,#1,gr5
+ andi.p gr22,#~PSR_PS,gr22
+ andi gr5,#PSR_PS,gr5
+ or gr5,gr22,gr22
+ ori gr22,#PSR_S,gr22
+
+ # keep current PSR in GR23
+ movsg psr,gr23
+
+ # make sure we don't miss an interrupt setting need_resched or sigpending between
+ # sampling and the RETT
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+
+ ldi @(gr15,#TI_FLAGS),gr4
+ sethi.p %hi(_TIF_ALLWORK_MASK),gr5
+ setlo %lo(_TIF_ALLWORK_MASK),gr5
+ andcc gr4,gr5,gr0,icc0
+ bne icc0,#0,__syscall_exit_work
+
+ # restore all registers and return
+__entry_return_direct:
+ LEDS 0x6301
+
+ andi gr22,#~PSR_ET,gr22
+ movgs gr22,psr
+
+ ldi @(gr28,#REG_ISR),gr23
+ lddi @(gr28,#REG_CCR),gr24
+ lddi @(gr28,#REG_LR) ,gr26
+ ldi @(gr28,#REG_PC) ,gr21
+ ldi @(gr28,#REG_TBR),gr20
+
+ movgs gr20,tbr
+ movgs gr21,pcsr
+ movgs gr23,isr
+ movgs gr24,ccr
+ movgs gr25,cccr
+ movgs gr26,lr
+ movgs gr27,lcr
+
+ lddi @(gr28,#REG_GNER0),gr4
+ movgs gr4,gner0
+ movgs gr5,gner1
+
+ lddi @(gr28,#REG_IACC0),gr4
+ movgs gr4,iacc0h
+ movgs gr5,iacc0l
+
+ lddi @(gr28,#REG_GR(4)) ,gr4
+ lddi @(gr28,#REG_GR(6)) ,gr6
+ lddi @(gr28,#REG_GR(8)) ,gr8
+ lddi @(gr28,#REG_GR(10)),gr10
+ lddi @(gr28,#REG_GR(12)),gr12
+ lddi @(gr28,#REG_GR(14)),gr14
+ lddi @(gr28,#REG_GR(16)),gr16
+ lddi @(gr28,#REG_GR(18)),gr18
+ lddi @(gr28,#REG_GR(20)),gr20
+ lddi @(gr28,#REG_GR(22)),gr22
+ lddi @(gr28,#REG_GR(24)),gr24
+ lddi @(gr28,#REG_GR(26)),gr26
+ ldi @(gr28,#REG_GR(29)),gr29
+ lddi @(gr28,#REG_GR(30)),gr30
+
+ # check to see if a debugging return is required
+ LEDS 0x67f0
+ movsg ccr,gr2
+ ldi @(gr28,#REG__STATUS),gr3
+ andicc gr3,#REG__STATUS_STEP,gr0,icc0
+ bne icc0,#0,__entry_return_singlestep
+ movgs gr2,ccr
+
+ ldi @(gr28,#REG_SP) ,sp
+ lddi @(gr28,#REG_GR(2)) ,gr2
+ ldi @(gr28,#REG_GR(28)),gr28
+
+ LEDS 0x67fe
+// movsg pcsr,gr31
+// LEDS32
+
+#if 0
+ # store the current frame in the workram on the FR451
+ movgs gr28,scr2
+ sethi.p %hi(0xfe800000),gr28
+ setlo %lo(0xfe800000),gr28
+
+ stdi gr2,@(gr28,#REG_GR(2))
+ stdi gr4,@(gr28,#REG_GR(4))
+ stdi gr6,@(gr28,#REG_GR(6))
+ stdi gr8,@(gr28,#REG_GR(8))
+ stdi gr10,@(gr28,#REG_GR(10))
+ stdi gr12,@(gr28,#REG_GR(12))
+ stdi gr14,@(gr28,#REG_GR(14))
+ stdi gr16,@(gr28,#REG_GR(16))
+ stdi gr18,@(gr28,#REG_GR(18))
+ stdi gr24,@(gr28,#REG_GR(24))
+ stdi gr26,@(gr28,#REG_GR(26))
+ sti gr29,@(gr28,#REG_GR(29))
+ stdi gr30,@(gr28,#REG_GR(30))
+
+ movsg tbr ,gr30
+ sti gr30,@(gr28,#REG_TBR)
+ movsg pcsr,gr30
+ sti gr30,@(gr28,#REG_PC)
+ movsg psr ,gr30
+ sti gr30,@(gr28,#REG_PSR)
+ movsg isr ,gr30
+ sti gr30,@(gr28,#REG_ISR)
+ movsg ccr ,gr30
+ movsg cccr,gr31
+ stdi gr30,@(gr28,#REG_CCR)
+ movsg lr ,gr30
+ movsg lcr ,gr31
+ stdi gr30,@(gr28,#REG_LR)
+ sti gr0 ,@(gr28,#REG_SYSCALLNO)
+ movsg scr2,gr28
+#endif
+
+ rett #0
+
+ # return via break.S
+__entry_return_singlestep:
+ movgs gr2,ccr
+ lddi @(gr28,#REG_GR(2)) ,gr2
+ ldi @(gr28,#REG_SP) ,sp
+ ldi @(gr28,#REG_GR(28)),gr28
+ LEDS 0x67ff
+ break
+ .globl __entry_return_singlestep_breaks_here
+__entry_return_singlestep_breaks_here:
+ nop
+
+
+###############################################################################
+#
+# return to a process interrupted in kernel space
+# - we need to consider preemption if that is enabled
+#
+###############################################################################
+ .balign L1_CACHE_BYTES
+__entry_return_from_kernel_exception:
+ LEDS 0x6302
+ movsg psr,gr23
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+ bra __entry_return_direct
+
+ .balign L1_CACHE_BYTES
+__entry_return_from_kernel_interrupt:
+ LEDS 0x6303
+ movsg psr,gr23
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+
+#ifdef CONFIG_PREEMPT
+ ldi @(gr15,#TI_PRE_COUNT),gr5
+ subicc gr5,#0,gr0,icc0
+ beq icc0,#0,__entry_return_direct
+
+__entry_preempt_need_resched:
+ ldi @(gr15,#TI_FLAGS),gr4
+ andicc gr4,#_TIF_NEED_RESCHED,gr0,icc0
+ beq icc0,#1,__entry_return_direct
+
+ setlos #PREEMPT_ACTIVE,gr5
+ sti gr5,@(gr15,#TI_FLAGS)
+
+ andi gr23,#~PSR_PIL,gr23
+ movgs gr23,psr
+
+ call schedule
+ sti gr0,@(gr15,#TI_PRE_COUNT)
+
+ movsg psr,gr23
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+ bra __entry_preempt_need_resched
+#else
+ bra __entry_return_direct
+#endif
+
+
+###############################################################################
+#
+# perform work that needs to be done immediately before resumption
+#
+###############################################################################
+ .globl __entry_return_from_user_exception
+ .balign L1_CACHE_BYTES
+__entry_return_from_user_exception:
+ LEDS 0x6501
+
+__entry_resume_userspace:
+ # make sure we don't miss an interrupt setting need_resched or sigpending between
+ # sampling and the RETT
+ movsg psr,gr23
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+
+__entry_return_from_user_interrupt:
+ LEDS 0x6402
+ ldi @(gr15,#TI_FLAGS),gr4
+ sethi.p %hi(_TIF_WORK_MASK),gr5
+ setlo %lo(_TIF_WORK_MASK),gr5
+ andcc gr4,gr5,gr0,icc0
+ beq icc0,#1,__entry_return_direct
+
+__entry_work_pending:
+ LEDS 0x6404
+ andicc gr4,#_TIF_NEED_RESCHED,gr0,icc0
+ beq icc0,#1,__entry_work_notifysig
+
+__entry_work_resched:
+ LEDS 0x6408
+ movsg psr,gr23
+ andi gr23,#~PSR_PIL,gr23
+ movgs gr23,psr
+ call schedule
+ movsg psr,gr23
+ ori gr23,#PSR_PIL_14,gr23
+ movgs gr23,psr
+
+ LEDS 0x6401
+ ldi @(gr15,#TI_FLAGS),gr4
+ sethi.p %hi(_TIF_WORK_MASK),gr5
+ setlo %lo(_TIF_WORK_MASK),gr5
+ andcc gr4,gr5,gr0,icc0
+ beq icc0,#1,__entry_return_direct
+ andicc gr4,#_TIF_NEED_RESCHED,gr0,icc0
+ bne icc0,#1,__entry_work_resched
+
+__entry_work_notifysig:
+ LEDS 0x6410
+ ori.p gr4,#0,gr8
+ call do_notify_resume
+ bra __entry_return_direct
+
+ # perform syscall entry tracing
+__syscall_trace_entry:
+ LEDS 0x6320
+ setlos.p #0,gr8
+ call do_syscall_trace
+
+ ldi @(gr28,#REG_SYSCALLNO),gr7
+ lddi @(gr28,#REG_GR(8)) ,gr8
+ lddi @(gr28,#REG_GR(10)),gr10
+ lddi.p @(gr28,#REG_GR(12)),gr12
+
+ subicc gr7,#nr_syscalls,gr0,icc0
+ bnc icc0,#0,__syscall_badsys
+ bra __syscall_call
+
+ # perform syscall exit tracing
+__syscall_exit_work:
+ LEDS 0x6340
+ andicc gr4,#_TIF_SYSCALL_TRACE,gr0,icc0
+ beq icc0,#1,__entry_work_pending
+
+ movsg psr,gr23
+ andi gr23,#~PSR_PIL,gr23 ; could let do_syscall_trace() call schedule()
+ movgs gr23,psr
+
+ setlos.p #1,gr8
+ call do_syscall_trace
+ bra __entry_resume_userspace
+
+__syscall_badsys:
+ LEDS 0x6380
+ setlos #-ENOSYS,gr8
+ sti gr8,@(gr28,#REG_GR(8)) ; save return value
+ bra __entry_resume_userspace
+
+
+###############################################################################
+#
+# syscall vector table
+#
+###############################################################################
+#ifdef CONFIG_MMU
+#define __MMU(X) X
+#else
+#define __MMU(X) sys_ni_syscall
+#endif
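+
+/*
+ * __MMU() wraps the entries below that only make sense with an MMU (mprotect,
+ * msync, mlock and friends); on no-MMU kernels these collapse to
+ * sys_ni_syscall and so simply return -ENOSYS.
+ */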
+
+ .section .rodata
+ALIGN
+ .globl sys_call_table
+sys_call_table:
+ .long sys_restart_syscall /* 0 - old "setup()" system call, used for restarting */
+ .long sys_exit
+ .long sys_fork
+ .long sys_read
+ .long sys_write
+ .long sys_open /* 5 */
+ .long sys_close
+ .long sys_waitpid
+ .long sys_creat
+ .long sys_link
+ .long sys_unlink /* 10 */
+ .long sys_execve
+ .long sys_chdir
+ .long sys_time
+ .long sys_mknod
+ .long sys_chmod /* 15 */
+ .long sys_lchown16
+ .long sys_ni_syscall /* old break syscall holder */
+ .long sys_stat
+ .long sys_lseek
+ .long sys_getpid /* 20 */
+ .long sys_mount
+ .long sys_oldumount
+ .long sys_setuid16
+ .long sys_getuid16
+ .long sys_ni_syscall // sys_stime /* 25 */
+ .long sys_ptrace
+ .long sys_alarm
+ .long sys_fstat
+ .long sys_pause
+ .long sys_utime /* 30 */
+ .long sys_ni_syscall /* old stty syscall holder */
+ .long sys_ni_syscall /* old gtty syscall holder */
+ .long sys_access
+ .long sys_nice
+ .long sys_ni_syscall /* 35 */ /* old ftime syscall holder */
+ .long sys_sync
+ .long sys_kill
+ .long sys_rename
+ .long sys_mkdir
+ .long sys_rmdir /* 40 */
+ .long sys_dup
+ .long sys_pipe
+ .long sys_times
+ .long sys_ni_syscall /* old prof syscall holder */
+ .long sys_brk /* 45 */
+ .long sys_setgid16
+ .long sys_getgid16
+ .long sys_ni_syscall // sys_signal
+ .long sys_geteuid16
+ .long sys_getegid16 /* 50 */
+ .long sys_acct
+	.long sys_umount		/* recycled never used phys() */
+ .long sys_ni_syscall /* old lock syscall holder */
+ .long sys_ioctl
+ .long sys_fcntl /* 55 */
+ .long sys_ni_syscall /* old mpx syscall holder */
+ .long sys_setpgid
+ .long sys_ni_syscall /* old ulimit syscall holder */
+ .long sys_ni_syscall /* old old uname syscall */
+ .long sys_umask /* 60 */
+ .long sys_chroot
+ .long sys_ustat
+ .long sys_dup2
+ .long sys_getppid
+ .long sys_getpgrp /* 65 */
+ .long sys_setsid
+ .long sys_sigaction
+ .long sys_ni_syscall // sys_sgetmask
+ .long sys_ni_syscall // sys_ssetmask
+ .long sys_setreuid16 /* 70 */
+ .long sys_setregid16
+ .long sys_sigsuspend
+ .long sys_ni_syscall // sys_sigpending
+ .long sys_sethostname
+ .long sys_setrlimit /* 75 */
+ .long sys_ni_syscall // sys_old_getrlimit
+ .long sys_getrusage
+ .long sys_gettimeofday
+ .long sys_settimeofday
+ .long sys_getgroups16 /* 80 */
+ .long sys_setgroups16
+ .long sys_ni_syscall /* old_select slot */
+ .long sys_symlink
+ .long sys_lstat
+ .long sys_readlink /* 85 */
+ .long sys_uselib
+ .long sys_swapon
+ .long sys_reboot
+ .long sys_ni_syscall // old_readdir
+ .long sys_ni_syscall /* 90 */ /* old_mmap slot */
+ .long sys_munmap
+ .long sys_truncate
+ .long sys_ftruncate
+ .long sys_fchmod
+ .long sys_fchown16 /* 95 */
+ .long sys_getpriority
+ .long sys_setpriority
+ .long sys_ni_syscall /* old profil syscall holder */
+ .long sys_statfs
+ .long sys_fstatfs /* 100 */
+ .long sys_ni_syscall /* ioperm for i386 */
+ .long sys_socketcall
+ .long sys_syslog
+ .long sys_setitimer
+ .long sys_getitimer /* 105 */
+ .long sys_newstat
+ .long sys_newlstat
+ .long sys_newfstat
+	.long sys_ni_syscall	/* obsolete olduname() syscall */
+ .long sys_ni_syscall /* iopl for i386 */ /* 110 */
+ .long sys_vhangup
+	.long sys_ni_syscall	/* obsolete idle() syscall */
+ .long sys_ni_syscall /* vm86old for i386 */
+ .long sys_wait4
+ .long sys_swapoff /* 115 */
+ .long sys_sysinfo
+ .long sys_ipc
+ .long sys_fsync
+ .long sys_sigreturn
+ .long sys_clone /* 120 */
+ .long sys_setdomainname
+ .long sys_newuname
+ .long sys_ni_syscall /* old "cacheflush" */
+ .long sys_adjtimex
+ .long __MMU(sys_mprotect) /* 125 */
+ .long sys_sigprocmask
+ .long sys_ni_syscall /* old "create_module" */
+ .long sys_init_module
+ .long sys_delete_module
+ .long sys_ni_syscall /* old "get_kernel_syms" */
+ .long sys_quotactl
+ .long sys_getpgid
+ .long sys_fchdir
+ .long sys_bdflush
+ .long sys_sysfs /* 135 */
+ .long sys_personality
+ .long sys_ni_syscall /* for afs_syscall */
+ .long sys_setfsuid16
+ .long sys_setfsgid16
+ .long sys_llseek /* 140 */
+ .long sys_getdents
+ .long sys_select
+ .long sys_flock
+ .long __MMU(sys_msync)
+ .long sys_readv /* 145 */
+ .long sys_writev
+ .long sys_getsid
+ .long sys_fdatasync
+ .long sys_sysctl
+ .long __MMU(sys_mlock) /* 150 */
+ .long __MMU(sys_munlock)
+ .long __MMU(sys_mlockall)
+ .long __MMU(sys_munlockall)
+ .long sys_sched_setparam
+ .long sys_sched_getparam /* 155 */
+ .long sys_sched_setscheduler
+ .long sys_sched_getscheduler
+ .long sys_sched_yield
+ .long sys_sched_get_priority_max
+ .long sys_sched_get_priority_min /* 160 */
+ .long sys_sched_rr_get_interval
+ .long sys_nanosleep
+ .long __MMU(sys_mremap)
+ .long sys_setresuid16
+ .long sys_getresuid16 /* 165 */
+ .long sys_ni_syscall /* for vm86 */
+ .long sys_ni_syscall /* Old sys_query_module */
+ .long sys_poll
+ .long sys_nfsservctl
+ .long sys_setresgid16 /* 170 */
+ .long sys_getresgid16
+ .long sys_prctl
+ .long sys_rt_sigreturn
+ .long sys_rt_sigaction
+ .long sys_rt_sigprocmask /* 175 */
+ .long sys_rt_sigpending
+ .long sys_rt_sigtimedwait
+ .long sys_rt_sigqueueinfo
+ .long sys_rt_sigsuspend
+ .long sys_pread64 /* 180 */
+ .long sys_pwrite64
+ .long sys_chown16
+ .long sys_getcwd
+ .long sys_capget
+ .long sys_capset /* 185 */
+ .long sys_sigaltstack
+ .long sys_sendfile
+ .long sys_ni_syscall /* streams1 */
+ .long sys_ni_syscall /* streams2 */
+ .long sys_vfork /* 190 */
+ .long sys_getrlimit
+ .long sys_mmap2
+ .long sys_truncate64
+ .long sys_ftruncate64
+ .long sys_stat64 /* 195 */
+ .long sys_lstat64
+ .long sys_fstat64
+ .long sys_lchown
+ .long sys_getuid
+ .long sys_getgid /* 200 */
+ .long sys_geteuid
+ .long sys_getegid
+ .long sys_setreuid
+ .long sys_setregid
+ .long sys_getgroups /* 205 */
+ .long sys_setgroups
+ .long sys_fchown
+ .long sys_setresuid
+ .long sys_getresuid
+ .long sys_setresgid /* 210 */
+ .long sys_getresgid
+ .long sys_chown
+ .long sys_setuid
+ .long sys_setgid
+ .long sys_setfsuid /* 215 */
+ .long sys_setfsgid
+ .long sys_pivot_root
+ .long __MMU(sys_mincore)
+ .long __MMU(sys_madvise)
+ .long sys_getdents64 /* 220 */
+ .long sys_fcntl64
+ .long sys_ni_syscall /* reserved for TUX */
+ .long sys_ni_syscall /* Reserved for Security */
+ .long sys_gettid
+ .long sys_readahead /* 225 */
+ .long sys_setxattr
+ .long sys_lsetxattr
+ .long sys_fsetxattr
+ .long sys_getxattr
+ .long sys_lgetxattr /* 230 */
+ .long sys_fgetxattr
+ .long sys_listxattr
+ .long sys_llistxattr
+ .long sys_flistxattr
+ .long sys_removexattr /* 235 */
+ .long sys_lremovexattr
+ .long sys_fremovexattr
+ .long sys_tkill
+ .long sys_sendfile64
+ .long sys_futex /* 240 */
+ .long sys_sched_setaffinity
+ .long sys_sched_getaffinity
+ .long sys_ni_syscall //sys_set_thread_area
+ .long sys_ni_syscall //sys_get_thread_area
+ .long sys_io_setup /* 245 */
+ .long sys_io_destroy
+ .long sys_io_getevents
+ .long sys_io_submit
+ .long sys_io_cancel
+ .long sys_fadvise64 /* 250 */
+ .long sys_ni_syscall
+ .long sys_exit_group
+ .long sys_lookup_dcookie
+ .long sys_epoll_create
+ .long sys_epoll_ctl /* 255 */
+ .long sys_epoll_wait
+ .long __MMU(sys_remap_file_pages)
+ .long sys_set_tid_address
+ .long sys_timer_create
+ .long sys_timer_settime /* 260 */
+ .long sys_timer_gettime
+ .long sys_timer_getoverrun
+ .long sys_timer_delete
+ .long sys_clock_settime
+ .long sys_clock_gettime /* 265 */
+ .long sys_clock_getres
+ .long sys_clock_nanosleep
+ .long sys_statfs64
+ .long sys_fstatfs64
+ .long sys_tgkill /* 270 */
+ .long sys_utimes
+ .long sys_fadvise64_64
+ .long sys_ni_syscall /* sys_vserver */
+ .long sys_mbind
+ .long sys_get_mempolicy
+ .long sys_set_mempolicy
+ .long sys_mq_open
+ .long sys_mq_unlink
+ .long sys_mq_timedsend
+ .long sys_mq_timedreceive /* 280 */
+ .long sys_mq_notify
+ .long sys_mq_getsetattr
+ .long sys_ni_syscall /* reserved for kexec */
+ .long sys_waitid
+ .long sys_ni_syscall /* 285 */ /* available */
+ .long sys_add_key
+ .long sys_request_key
+ .long sys_keyctl
+ .long sys_ni_syscall // sys_vperfctr_open
+ .long sys_ni_syscall // sys_vperfctr_control /* 290 */
+ .long sys_ni_syscall // sys_vperfctr_unlink
+ .long sys_ni_syscall // sys_vperfctr_iresume
+ .long sys_ni_syscall // sys_vperfctr_read
+
+
+syscall_table_size = (. - sys_call_table)
--- /dev/null
+#include <linux/module.h>
+#include <linux/linkage.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/user.h>
+#include <linux/elfcore.h>
+#include <linux/in6.h>
+#include <linux/interrupt.h>
+#include <linux/config.h>
+
+#include <asm/setup.h>
+#include <asm/pgalloc.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+#include <asm/semaphore.h>
+#include <asm/checksum.h>
+#include <asm/hardirq.h>
+#include <asm/current.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern long __memcpy_user(void *dst, const void *src, size_t count);
+
+/* platform dependent support */
+
+EXPORT_SYMBOL(__ioremap);
+EXPORT_SYMBOL(iounmap);
+
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(strnlen);
+EXPORT_SYMBOL(strrchr);
+EXPORT_SYMBOL(strstr);
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strcat);
+EXPORT_SYMBOL(strlen);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strncmp);
+EXPORT_SYMBOL(strncpy);
+
+EXPORT_SYMBOL(ip_fast_csum);
+
+#if 0
+EXPORT_SYMBOL(local_irq_count);
+EXPORT_SYMBOL(local_bh_count);
+#endif
+EXPORT_SYMBOL(kernel_thread);
+
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(__res_bus_clock_speed_HZ);
+EXPORT_SYMBOL(__page_offset);
+EXPORT_SYMBOL(__memcpy_user);
+EXPORT_SYMBOL(flush_dcache_page);
+
+#ifndef CONFIG_MMU
+EXPORT_SYMBOL(memory_start);
+EXPORT_SYMBOL(memory_end);
+#endif
+
+EXPORT_SYMBOL(__debug_bug_trap);
+
+/* Networking helper routines. */
+EXPORT_SYMBOL(csum_partial_copy);
+
+/* The following are special because they're not called
+ explicitly (the C compiler generates them). Fortunately,
+   their interface isn't going to change any time soon, so
+   it's OK to leave them out of version control. */
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memscan);
+EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(strtok);
+
+EXPORT_SYMBOL(get_wchan);
+
+#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+EXPORT_SYMBOL(atomic_test_and_ANDNOT_mask);
+EXPORT_SYMBOL(atomic_test_and_OR_mask);
+EXPORT_SYMBOL(atomic_test_and_XOR_mask);
+EXPORT_SYMBOL(atomic_add_return);
+EXPORT_SYMBOL(atomic_sub_return);
+EXPORT_SYMBOL(__xchg_8);
+EXPORT_SYMBOL(__xchg_16);
+EXPORT_SYMBOL(__xchg_32);
+EXPORT_SYMBOL(__cmpxchg_8);
+EXPORT_SYMBOL(__cmpxchg_16);
+EXPORT_SYMBOL(__cmpxchg_32);
+#endif
+
+/*
+ * libgcc functions - functions that are used internally by the
+ * compiler... (the prototypes are not correct, but that
+ * doesn't really matter since they're not versioned).
+ */
+extern void __gcc_bcmp(void);
+extern void __ashldi3(void);
+extern void __ashrdi3(void);
+extern void __cmpdi2(void);
+extern void __divdi3(void);
+extern void __lshrdi3(void);
+extern void __moddi3(void);
+extern void __muldi3(void);
+extern void __negdi2(void);
+extern void __ucmpdi2(void);
+extern void __udivdi3(void);
+extern void __udivmoddi4(void);
+extern void __umoddi3(void);
+
+ /* gcc lib functions */
+//EXPORT_SYMBOL(__gcc_bcmp);
+EXPORT_SYMBOL(__ashldi3);
+EXPORT_SYMBOL(__ashrdi3);
+//EXPORT_SYMBOL(__cmpdi2);
+//EXPORT_SYMBOL(__divdi3);
+EXPORT_SYMBOL(__lshrdi3);
+//EXPORT_SYMBOL(__moddi3);
+EXPORT_SYMBOL(__muldi3);
+EXPORT_SYMBOL(__negdi2);
+//EXPORT_SYMBOL(__ucmpdi2);
+//EXPORT_SYMBOL(__udivdi3);
+//EXPORT_SYMBOL(__udivmoddi4);
+//EXPORT_SYMBOL(__umoddi3);
--- /dev/null
+/* gdb-stub.c: FRV GDB stub
+ *
+ * Copyright (C) 2003,4 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from Linux/MIPS version, Copyright (C) 1995 Andreas Busse
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/*
+ * To enable debugger support, two things need to happen. One, a
+ * call to set_debug_traps() is necessary in order to allow any breakpoints
+ * or error conditions to be properly intercepted and reported to gdb.
+ * Two, a breakpoint needs to be generated to begin communication. This
+ * is most easily accomplished by a call to breakpoint(). Breakpoint()
+ * simulates a breakpoint by executing a BREAK instruction.
+ *
+ *
+ * The following gdb commands are supported:
+ *
+ * command function Return value
+ *
+ * g return the value of the CPU registers hex data or ENN
+ * G set the value of the CPU registers OK or ENN
+ *
+ * mAA..AA,LLLL Read LLLL bytes at address AA..AA hex data or ENN
+ *    MAA..AA,LLLL:  Write LLLL bytes at address AA..AA      OK or ENN
+ *
+ * c Resume at current address SNN ( signal NN)
+ * cAA..AA Continue at address AA..AA SNN
+ *
+ * s Step one instruction SNN
+ * sAA..AA Step one instruction from AA..AA SNN
+ *
+ * k kill
+ *
+ * ? What was the last sigval ? SNN (signal NN)
+ *
+ * bBB..BB Set baud rate to BB..BB OK or BNN, then sets
+ * baud rate
+ *
+ * All commands and responses are sent with a packet which includes a
+ * checksum. A packet consists of
+ *
+ * $<packet info>#<checksum>.
+ *
+ * where
+ * <packet info> :: <characters representing the command or response>
+ * <checksum> :: < two hex digits computed as modulo 256 sum of <packetinfo>>
+ *
+ * When a packet is received, it is first acknowledged with either '+' or '-'.
+ * '+' indicates a successful transfer. '-' indicates a failed transfer.
+ *
+ * Example:
+ *
+ * Host: Reply:
+ * $m0,10#2a +$00010203040506070809101112131415#42
+ *
+ *
+ * ==============
+ * MORE EXAMPLES:
+ * ==============
+ *
+ * For reference -- the following are the steps that one
+ * company took (RidgeRun Inc) to get remote gdb debugging
+ * going. In this scenario the host machine was a PC and the
+ * target platform was a Galileo EVB64120A MIPS evaluation
+ * board.
+ *
+ * Step 1:
+ *    First download gdb-5.0.tar.gz from the internet
+ * and then build/install the package.
+ *
+ * Example:
+ * $ tar zxf gdb-5.0.tar.gz
+ * $ cd gdb-5.0
+ * $ ./configure --target=frv-elf-gdb
+ * $ make
+ * $ frv-elf-gdb
+ *
+ * Step 2:
+ * Configure linux for remote debugging and build it.
+ *
+ * Example:
+ * $ cd ~/linux
+ * $ make menuconfig <go to "Kernel Hacking" and turn on remote debugging>
+ * $ make dep; make vmlinux
+ *
+ * Step 3:
+ * Download the kernel to the remote target and start
+ * the kernel running. It will promptly halt and wait
+ * for the host gdb session to connect. It does this
+ * since the "Kernel Hacking" option has defined
+ * CONFIG_REMOTE_DEBUG which in turn enables your calls
+ * to:
+ * set_debug_traps();
+ * breakpoint();
+ *
+ * Step 4:
+ * Start the gdb session on the host.
+ *
+ * Example:
+ * $ frv-elf-gdb vmlinux
+ * (gdb) set remotebaud 115200
+ * (gdb) target remote /dev/ttyS1
+ * ...at this point you are connected to
+ * the remote target and can use gdb
+ *        in the normal fashion: setting
+ * breakpoints, single stepping,
+ * printing variables, etc.
+ *
+ */
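The checksum framing described above is simple enough to verify by hand. The fragment below is only an illustrative sketch, not part of the stub (which computes the sum inline in gdbstub_send_packet() and gdbstub_recv_packet()), and the name gdb_csum is invented here: the checksum is the modulo-256 sum of the characters between '$' and '#', so the body "m0,10" yields 0x2a, matching the "$m0,10#2a" example above.

static unsigned char gdb_csum(const char *body)
{
	unsigned char sum = 0;

	/* modulo-256 sum of every character in <packet info> */
	while (*body)
		sum += (unsigned char) *body++;

	return sum;	/* gdb_csum("m0,10") == 0x2a */
}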
+
+#include <linux/string.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/console.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/nmi.h>
+
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/gdb-stub.h>
+
+#define LEDS(x) do { /* *(u32*)0xe1200004 = ~(x); mb(); */ } while(0)
+
+#undef GDBSTUB_DEBUG_PROTOCOL
+
+extern void debug_to_serial(const char *p, int n);
+extern void gdbstub_console_write(struct console *co, const char *p, unsigned n);
+
+extern volatile uint32_t __break_error_detect[3]; /* ESFR1, ESR15, EAR15 */
+extern struct user_context __break_user_context;
+
+struct __debug_amr {
+ unsigned long L, P;
+} __attribute__((aligned(8)));
+
+struct __debug_mmu {
+ struct {
+ unsigned long hsr0, pcsr, esr0, ear0, epcr0;
+#ifdef CONFIG_MMU
+ unsigned long tplr, tppr, tpxr, cxnr;
+#endif
+ } regs;
+
+ struct __debug_amr iamr[16];
+ struct __debug_amr damr[16];
+
+#ifdef CONFIG_MMU
+ struct __debug_amr tlb[64*2];
+#endif
+};
+
+static struct __debug_mmu __debug_mmu;
+
+/*
+ * BUFMAX defines the maximum number of characters in inbound/outbound buffers
+ * at least NUMREGBYTES*2 are needed for register packets
+ */
+#define BUFMAX 2048
+
+#define BREAK_INSN 0x801000c0 /* use "break" as bkpt */
+
+static const char gdbstub_banner[] = "Linux/FR-V GDB Stub (c) RedHat 2003\n";
+
+volatile u8 gdbstub_rx_buffer[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+volatile u32 gdbstub_rx_inp = 0;
+volatile u32 gdbstub_rx_outp = 0;
+volatile u8 gdbstub_rx_overflow = 0;
+u8 gdbstub_rx_unget = 0;
+
+/* set with GDB whilst running to permit step through exceptions */
+extern volatile u32 __attribute__((section(".bss"))) gdbstub_trace_through_exceptions;
+
+static char input_buffer[BUFMAX];
+static char output_buffer[BUFMAX];
+
+static const char hexchars[] = "0123456789abcdef";
+
+static const char *regnames[] = {
+ "PSR ", "ISR ", "CCR ", "CCCR",
+ "LR ", "LCR ", "PC ", "_stt",
+ "sys ", "GR8*", "GNE0", "GNE1",
+ "IACH", "IACL",
+ "TBR ", "SP ", "FP ", "GR3 ",
+ "GR4 ", "GR5 ", "GR6 ", "GR7 ",
+ "GR8 ", "GR9 ", "GR10", "GR11",
+ "GR12", "GR13", "GR14", "GR15",
+ "GR16", "GR17", "GR18", "GR19",
+ "GR20", "GR21", "GR22", "GR23",
+ "GR24", "GR25", "GR26", "GR27",
+ "EFRM", "CURR", "GR30", "BFRM"
+};
+
+struct gdbstub_bkpt {
+ unsigned long addr; /* address of breakpoint */
+ unsigned len; /* size of breakpoint */
+ uint32_t originsns[7]; /* original instructions */
+};
+
+static struct gdbstub_bkpt gdbstub_bkpts[256];
+
+/*
+ * local prototypes
+ */
+
+static void gdbstub_recv_packet(char *buffer);
+static int gdbstub_send_packet(char *buffer);
+static int gdbstub_compute_signal(unsigned long tbr);
+static int hex(unsigned char ch);
+static int hexToInt(char **ptr, unsigned long *intValue);
+static unsigned char *mem2hex(const void *mem, char *buf, int count, int may_fault);
+static char *hex2mem(const char *buf, void *_mem, int count);
+
+/*
+ * Convert ch from a hex digit to an int
+ */
+static int hex(unsigned char ch)
+{
+ if (ch >= 'a' && ch <= 'f')
+ return ch-'a'+10;
+ if (ch >= '0' && ch <= '9')
+ return ch-'0';
+ if (ch >= 'A' && ch <= 'F')
+ return ch-'A'+10;
+ return -1;
+}
+
+void gdbstub_printk(const char *fmt, ...)
+{
+ static char buf[1024];
+ va_list args;
+ int len;
+
+ /* Emit the output into the temporary buffer */
+ va_start(args, fmt);
+ len = vsnprintf(buf, sizeof(buf), fmt, args);
+ va_end(args);
+ debug_to_serial(buf, len);
+}
+
+static inline char *gdbstub_strcpy(char *dst, const char *src)
+{
+ int loop = 0;
+ while ((dst[loop] = src[loop]))
+ loop++;
+ return dst;
+}
+
+static void gdbstub_purge_cache(void)
+{
+ asm volatile(" dcef @(gr0,gr0),#1 \n"
+ " icei @(gr0,gr0),#1 \n"
+ " membar \n"
+ " bar \n"
+ );
+}
+
+/*****************************************************************************/
+/*
+ * scan for the sequence $<data>#<checksum>
+ */
+static void gdbstub_recv_packet(char *buffer)
+{
+ unsigned char checksum;
+ unsigned char xmitcsum;
+ unsigned char ch;
+ int count, i, ret, error;
+
+ for (;;) {
+ /* wait around for the start character, ignore all other characters */
+ do {
+ gdbstub_rx_char(&ch, 0);
+ } while (ch != '$');
+
+ checksum = 0;
+ xmitcsum = -1;
+ count = 0;
+ error = 0;
+
+ /* now, read until a # or end of buffer is found */
+ while (count < BUFMAX) {
+ ret = gdbstub_rx_char(&ch, 0);
+ if (ret < 0)
+ error = ret;
+
+ if (ch == '#')
+ break;
+ checksum += ch;
+ buffer[count] = ch;
+ count++;
+ }
+
+ if (error == -EIO) {
+ gdbstub_proto("### GDB Rx Error - Skipping packet ###\n");
+ gdbstub_proto("### GDB Tx NAK\n");
+ gdbstub_tx_char('-');
+ continue;
+ }
+
+ if (count >= BUFMAX || error)
+ continue;
+
+ buffer[count] = 0;
+
+ /* read the checksum */
+ ret = gdbstub_rx_char(&ch, 0);
+ if (ret < 0)
+ error = ret;
+ xmitcsum = hex(ch) << 4;
+
+ ret = gdbstub_rx_char(&ch, 0);
+ if (ret < 0)
+ error = ret;
+ xmitcsum |= hex(ch);
+
+ if (error) {
+ if (error == -EIO)
+ gdbstub_proto("### GDB Rx Error - Skipping packet\n");
+ gdbstub_proto("### GDB Tx NAK\n");
+ gdbstub_tx_char('-');
+ continue;
+ }
+
+ /* check the checksum */
+ if (checksum != xmitcsum) {
+ gdbstub_proto("### GDB Tx NAK\n");
+ gdbstub_tx_char('-'); /* failed checksum */
+ continue;
+ }
+
+ gdbstub_proto("### GDB Rx '$%s#%02x' ###\n", buffer, checksum);
+ gdbstub_proto("### GDB Tx ACK\n");
+ gdbstub_tx_char('+'); /* successful transfer */
+
+ /* if a sequence char is present, reply the sequence ID */
+ if (buffer[2] == ':') {
+ gdbstub_tx_char(buffer[0]);
+ gdbstub_tx_char(buffer[1]);
+
+ /* remove sequence chars from buffer */
+ count = 0;
+ while (buffer[count]) count++;
+ for (i=3; i <= count; i++)
+ buffer[i - 3] = buffer[i];
+ }
+
+ break;
+ }
+} /* end gdbstub_recv_packet() */
+
+/*****************************************************************************/
+/*
+ * send the packet in buffer.
+ * - return 0 if successfully ACK'd
+ * - return 1 if abandoned due to new incoming packet
+ */
+static int gdbstub_send_packet(char *buffer)
+{
+ unsigned char checksum;
+ int count;
+ unsigned char ch;
+
+ /* $<packet info>#<checksum> */
+ gdbstub_proto("### GDB Tx '%s' ###\n", buffer);
+
+ do {
+ gdbstub_tx_char('$');
+ checksum = 0;
+ count = 0;
+
+ while ((ch = buffer[count]) != 0) {
+ gdbstub_tx_char(ch);
+ checksum += ch;
+ count += 1;
+ }
+
+ gdbstub_tx_char('#');
+ gdbstub_tx_char(hexchars[checksum >> 4]);
+ gdbstub_tx_char(hexchars[checksum & 0xf]);
+
+ } while (gdbstub_rx_char(&ch,0),
+#ifdef GDBSTUB_DEBUG_PROTOCOL
+ ch=='-' && (gdbstub_proto("### GDB Rx NAK\n"),0),
+ ch!='-' && ch!='+' && (gdbstub_proto("### GDB Rx ??? %02x\n",ch),0),
+#endif
+ ch!='+' && ch!='$');
+
+ if (ch=='+') {
+ gdbstub_proto("### GDB Rx ACK\n");
+ return 0;
+ }
+
+ gdbstub_proto("### GDB Tx Abandoned\n");
+ gdbstub_rx_unget = ch;
+ return 1;
+} /* end gdbstub_send_packet() */
+
+/*
+ * While we find nice hex chars, build an int.
+ * Return number of chars processed.
+ */
+static int hexToInt(char **ptr, unsigned long *_value)
+{
+ int count = 0, ch;
+
+ *_value = 0;
+ while (**ptr) {
+ ch = hex(**ptr);
+ if (ch < 0)
+ break;
+
+ *_value = (*_value << 4) | ((uint8_t) ch & 0xf);
+ count++;
+
+ (*ptr)++;
+ }
+
+ return count;
+}
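As a concrete illustration (a sketch only, with an arbitrary example argument string, not code taken from the stub), the command parsers below rely on hexToInt() stopping at the first non-hex character and reporting how many digits it consumed:

	char buf[] = "c0012344,4";	/* arbitrary example "addr,length" argument */
	char *p = buf;
	unsigned long addr;
	int n = hexToInt(&p, &addr);	/* n == 8, addr == 0xc0012344, *p == ',' */

This is exactly how the 'm', 'M', 'Z' and 'z' handlers further down split their "addr,length" arguments on the comma.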
+
+/*****************************************************************************/
+/*
+ * probe an address to see whether it maps to anything
+ */
+static inline int gdbstub_addr_probe(const void *vaddr)
+{
+#ifdef CONFIG_MMU
+ unsigned long paddr;
+
+ asm("lrad %1,%0,#1,#0,#0" : "=r"(paddr) : "r"(vaddr));
+ if (!(paddr & xAMPRx_V))
+ return 0;
+#endif
+
+ return 1;
+} /* end gdbstub_addr_probe() */
+
+#ifdef CONFIG_MMU
+static unsigned long __saved_dampr, __saved_damlr;
+
+static inline unsigned long gdbstub_virt_to_pte(unsigned long vaddr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+ unsigned long val, dampr5;
+
+ pgd = (pgd_t *) __get_DAMLR(3) + pgd_index(vaddr);
+ pud = pud_offset(pgd, vaddr);
+ pmd = pmd_offset(pud, vaddr);
+
+ if (pmd_bad(*pmd) || !pmd_present(*pmd))
+ return 0;
+
+ /* make sure dampr5 maps to the correct pmd */
+ dampr5 = __get_DAMPR(5);
+ val = pmd_val(*pmd);
+ __set_DAMPR(5, val | xAMPRx_L | xAMPRx_SS_16Kb | xAMPRx_S | xAMPRx_C | xAMPRx_V);
+
+	/* now it's safe to access the pmd */
+ pte = (pte_t *)__get_DAMLR(5) + __pte_index(vaddr);
+ if (pte_present(*pte))
+ val = pte_val(*pte);
+ else
+ val = 0;
+
+ /* restore original dampr5 */
+ __set_DAMPR(5, dampr5);
+
+ return val;
+}
+#endif
+
+static inline int gdbstub_addr_map(const void *vaddr)
+{
+#ifdef CONFIG_MMU
+ unsigned long pte;
+
+ __saved_dampr = __get_DAMPR(2);
+ __saved_damlr = __get_DAMLR(2);
+#endif
+ if (gdbstub_addr_probe(vaddr))
+ return 1;
+#ifdef CONFIG_MMU
+ pte = gdbstub_virt_to_pte((unsigned long) vaddr);
+ if (pte) {
+ __set_DAMPR(2, pte);
+ __set_DAMLR(2, (unsigned long) vaddr & PAGE_MASK);
+ return 1;
+ }
+#endif
+ return 0;
+}
+
+static inline void gdbstub_addr_unmap(void)
+{
+#ifdef CONFIG_MMU
+ __set_DAMPR(2, __saved_dampr);
+ __set_DAMLR(2, __saved_damlr);
+#endif
+}
+
+/*
+ * access potentially dodgy memory through a potentially dodgy pointer
+ */
+static inline int gdbstub_read_dword(const void *addr, uint32_t *_res)
+{
+ unsigned long brr;
+ uint32_t res;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " ld%I2 %M2,%0 \n"
+ " movsg brr,%1 \n"
+ : "=r"(res), "=r"(brr)
+ : "m"(*(uint32_t *) addr));
+ *_res = res;
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static inline int gdbstub_write_dword(void *addr, uint32_t val)
+{
+ unsigned long brr;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " st%I2 %1,%M2 \n"
+ " movsg brr,%0 \n"
+ : "=r"(brr)
+ : "r"(val), "m"(*(uint32_t *) addr));
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static inline int gdbstub_read_word(const void *addr, uint16_t *_res)
+{
+ unsigned long brr;
+ uint16_t res;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " lduh%I2 %M2,%0 \n"
+ " movsg brr,%1 \n"
+ : "=r"(res), "=r"(brr)
+ : "m"(*(uint16_t *) addr));
+ *_res = res;
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static inline int gdbstub_write_word(void *addr, uint16_t val)
+{
+ unsigned long brr;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " sth%I2 %1,%M2 \n"
+ " movsg brr,%0 \n"
+ : "=r"(brr)
+ : "r"(val), "m"(*(uint16_t *) addr));
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static inline int gdbstub_read_byte(const void *addr, uint8_t *_res)
+{
+ unsigned long brr;
+ uint8_t res;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " ldub%I2 %M2,%0 \n"
+ " movsg brr,%1 \n"
+ : "=r"(res), "=r"(brr)
+ : "m"(*(uint8_t *) addr));
+ *_res = res;
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static inline int gdbstub_write_byte(void *addr, uint8_t val)
+{
+ unsigned long brr;
+
+ if (!gdbstub_addr_map(addr))
+ return 0;
+
+ asm volatile(" movgs gr0,brr \n"
+ " stb%I2 %1,%M2 \n"
+ " movsg brr,%0 \n"
+ : "=r"(brr)
+ : "r"(val), "m"(*(uint8_t *) addr));
+ gdbstub_addr_unmap();
+ return likely(!brr);
+}
+
+static void __gdbstub_console_write(struct console *co, const char *p, unsigned n)
+{
+ char outbuf[26];
+ int qty;
+
+ outbuf[0] = 'O';
+
+ while (n > 0) {
+ qty = 1;
+
+ while (n > 0 && qty < 20) {
+ mem2hex(p, outbuf + qty, 2, 0);
+ qty += 2;
+ if (*p == 0x0a) {
+ outbuf[qty++] = '0';
+ outbuf[qty++] = 'd';
+ }
+ p++;
+ n--;
+ }
+
+ outbuf[qty] = 0;
+ gdbstub_send_packet(outbuf);
+ }
+}
+
+#if 0
+void debug_to_serial(const char *p, int n)
+{
+ gdbstub_console_write(NULL,p,n);
+}
+#endif
+
+#ifdef CONFIG_GDBSTUB_CONSOLE
+
+static kdev_t gdbstub_console_dev(struct console *con)
+{
+ return MKDEV(1,3); /* /dev/null */
+}
+
+static struct console gdbstub_console = {
+ .name = "gdb",
+ .write = gdbstub_console_write, /* in break.S */
+ .device = gdbstub_console_dev,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+#endif
+
+/*****************************************************************************/
+/*
+ * Convert the memory pointed to by mem into hex, placing result in buf.
+ * - if successful, return a pointer to the last char put in buf (NUL)
+ * - in case of mem fault, return NULL
+ * may_fault is non-zero if we are reading from arbitrary memory, but is currently
+ * not used.
+ */
+static unsigned char *mem2hex(const void *_mem, char *buf, int count, int may_fault)
+{
+ const uint8_t *mem = _mem;
+ uint8_t ch[4] __attribute__((aligned(4)));
+
+ if ((uint32_t)mem&1 && count>=1) {
+ if (!gdbstub_read_byte(mem,ch))
+ return NULL;
+ *buf++ = hexchars[ch[0] >> 4];
+ *buf++ = hexchars[ch[0] & 0xf];
+ mem++;
+ count--;
+ }
+
+ if ((uint32_t)mem&3 && count>=2) {
+ if (!gdbstub_read_word(mem,(uint16_t *)ch))
+ return NULL;
+ *buf++ = hexchars[ch[0] >> 4];
+ *buf++ = hexchars[ch[0] & 0xf];
+ *buf++ = hexchars[ch[1] >> 4];
+ *buf++ = hexchars[ch[1] & 0xf];
+ mem += 2;
+ count -= 2;
+ }
+
+ while (count>=4) {
+ if (!gdbstub_read_dword(mem,(uint32_t *)ch))
+ return NULL;
+ *buf++ = hexchars[ch[0] >> 4];
+ *buf++ = hexchars[ch[0] & 0xf];
+ *buf++ = hexchars[ch[1] >> 4];
+ *buf++ = hexchars[ch[1] & 0xf];
+ *buf++ = hexchars[ch[2] >> 4];
+ *buf++ = hexchars[ch[2] & 0xf];
+ *buf++ = hexchars[ch[3] >> 4];
+ *buf++ = hexchars[ch[3] & 0xf];
+ mem += 4;
+ count -= 4;
+ }
+
+ if (count>=2) {
+ if (!gdbstub_read_word(mem,(uint16_t *)ch))
+ return NULL;
+ *buf++ = hexchars[ch[0] >> 4];
+ *buf++ = hexchars[ch[0] & 0xf];
+ *buf++ = hexchars[ch[1] >> 4];
+ *buf++ = hexchars[ch[1] & 0xf];
+ mem += 2;
+ count -= 2;
+ }
+
+ if (count>=1) {
+ if (!gdbstub_read_byte(mem,ch))
+ return NULL;
+ *buf++ = hexchars[ch[0] >> 4];
+ *buf++ = hexchars[ch[0] & 0xf];
+ }
+
+ *buf = 0;
+
+ return buf;
+} /* end mem2hex() */
+
+/*****************************************************************************/
+/*
+ * convert the hex array pointed to by buf into binary to be placed in mem
+ * return a pointer to the character AFTER the last byte of buffer consumed
+ */
+static char *hex2mem(const char *buf, void *_mem, int count)
+{
+ uint8_t *mem = _mem;
+ union {
+ uint32_t l;
+ uint16_t w;
+ uint8_t b[4];
+ } ch;
+
+ if ((u32)mem&1 && count>=1) {
+ ch.b[0] = hex(*buf++) << 4;
+ ch.b[0] |= hex(*buf++);
+ if (!gdbstub_write_byte(mem,ch.b[0]))
+ return NULL;
+ mem++;
+ count--;
+ }
+
+ if ((u32)mem&3 && count>=2) {
+ ch.b[0] = hex(*buf++) << 4;
+ ch.b[0] |= hex(*buf++);
+ ch.b[1] = hex(*buf++) << 4;
+ ch.b[1] |= hex(*buf++);
+ if (!gdbstub_write_word(mem,ch.w))
+ return NULL;
+ mem += 2;
+ count -= 2;
+ }
+
+ while (count>=4) {
+ ch.b[0] = hex(*buf++) << 4;
+ ch.b[0] |= hex(*buf++);
+ ch.b[1] = hex(*buf++) << 4;
+ ch.b[1] |= hex(*buf++);
+ ch.b[2] = hex(*buf++) << 4;
+ ch.b[2] |= hex(*buf++);
+ ch.b[3] = hex(*buf++) << 4;
+ ch.b[3] |= hex(*buf++);
+ if (!gdbstub_write_dword(mem,ch.l))
+ return NULL;
+ mem += 4;
+ count -= 4;
+ }
+
+ if (count>=2) {
+ ch.b[0] = hex(*buf++) << 4;
+ ch.b[0] |= hex(*buf++);
+ ch.b[1] = hex(*buf++) << 4;
+ ch.b[1] |= hex(*buf++);
+ if (!gdbstub_write_word(mem,ch.w))
+ return NULL;
+ mem += 2;
+ count -= 2;
+ }
+
+ if (count>=1) {
+ ch.b[0] = hex(*buf++) << 4;
+ ch.b[0] |= hex(*buf++);
+ if (!gdbstub_write_byte(mem,ch.b[0]))
+ return NULL;
+ }
+
+ return (char *) buf;
+} /* end hex2mem() */
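mem2hex() and hex2mem() are inverses of one another over the same byte/word/dword accessors, which is what lets the 'm' and 'M' handlers below simply round-trip target memory through the packet buffers. A minimal sketch (assuming an ordinary, mapped buffer so that none of the fault paths are taken; the function name is illustrative only):

static void hex_roundtrip_sketch(void)
{
	uint8_t bytes[4];
	char hex[sizeof(bytes) * 2 + 1];

	/* two hex digits per byte: "deadbeef" -> 0xde 0xad 0xbe 0xef */
	hex2mem("deadbeef", bytes, sizeof(bytes));

	/* and back again, NUL-terminated: hex == "deadbeef" */
	mem2hex(bytes, hex, sizeof(bytes), 0);
}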
+
+/*****************************************************************************/
+/*
+ * This table contains the mapping between FRV TBR.TT exception codes,
+ * and signals, which are primarily what GDB understands. It also
+ * indicates which hardware traps we need to commandeer when
+ * initializing the stub.
+ */
+static const struct brr_to_sig_map {
+ unsigned long brr_mask; /* BRR bitmask */
+ unsigned long tbr_tt; /* TBR.TT code (in BRR.EBTT) */
+ unsigned int signo; /* Signal that we map this into */
+} brr_to_sig_map[] = {
+ { BRR_EB, TBR_TT_INSTR_ACC_ERROR, SIGSEGV },
+ { BRR_EB, TBR_TT_ILLEGAL_INSTR, SIGILL },
+ { BRR_EB, TBR_TT_PRIV_INSTR, SIGILL },
+ { BRR_EB, TBR_TT_MP_EXCEPTION, SIGFPE },
+ { BRR_EB, TBR_TT_DATA_ACC_ERROR, SIGSEGV },
+ { BRR_EB, TBR_TT_DATA_STR_ERROR, SIGSEGV },
+ { BRR_EB, TBR_TT_DIVISION_EXCEP, SIGFPE },
+ { BRR_EB, TBR_TT_COMPOUND_EXCEP, SIGSEGV },
+ { BRR_EB, TBR_TT_INTERRUPT_13, SIGALRM }, /* watchdog */
+ { BRR_EB, TBR_TT_INTERRUPT_14, SIGINT }, /* GDB serial */
+ { BRR_EB, TBR_TT_INTERRUPT_15, SIGQUIT }, /* NMI */
+ { BRR_CB, 0, SIGUSR1 },
+ { BRR_TB, 0, SIGUSR2 },
+ { BRR_DBNEx, 0, SIGTRAP },
+ { BRR_DBx, 0, SIGTRAP }, /* h/w watchpoint */
+ { BRR_IBx, 0, SIGTRAP }, /* h/w breakpoint */
+ { BRR_CBB, 0, SIGTRAP },
+ { BRR_SB, 0, SIGTRAP },
+ { BRR_ST, 0, SIGTRAP }, /* single step */
+ { 0, 0, SIGHUP } /* default */
+};
+
+/*****************************************************************************/
+/*
+ * convert the FRV BRR register contents into a UNIX signal number
+ */
+static inline int gdbstub_compute_signal(unsigned long brr)
+{
+ const struct brr_to_sig_map *map;
+ unsigned long tbr = (brr & BRR_EBTT) >> 12;
+
+ for (map = brr_to_sig_map; map->brr_mask; map++)
+ if (map->brr_mask & brr)
+ if (!map->tbr_tt || map->tbr_tt == tbr)
+ break;
+
+ return map->signo;
+} /* end gdbstub_compute_signal() */
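The table is scanned in order, so the first row whose BRR bits match (and whose TBR.TT code, where given, also matches) decides the signal, and the all-zero terminator turns anything unrecognised into SIGHUP. A one-line sketch of the effect:

	int sig = gdbstub_compute_signal(BRR_ST);	/* single step -> SIGTRAP */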
+
+/*****************************************************************************/
+/*
+ * set a software breakpoint or a hardware breakpoint or watchpoint
+ */
+static int gdbstub_set_breakpoint(unsigned long type, unsigned long addr, unsigned long len)
+{
+ unsigned long tmp;
+ int bkpt, loop, xloop;
+
+ union {
+ struct {
+ unsigned long mask0, mask1;
+ };
+ uint8_t bytes[8];
+ } dbmr;
+
+ //gdbstub_printk("setbkpt(%ld,%08lx,%ld)\n", type, addr, len);
+
+ switch (type) {
+ /* set software breakpoint */
+ case 0:
+ if (addr & 3 || len > 7*4)
+ return -EINVAL;
+
+ for (bkpt = 255; bkpt >= 0; bkpt--)
+ if (!gdbstub_bkpts[bkpt].addr)
+ break;
+ if (bkpt < 0)
+ return -ENOSPC;
+
+ for (loop = 0; loop < len/4; loop++)
+ if (!gdbstub_read_dword(&((uint32_t *) addr)[loop],
+ &gdbstub_bkpts[bkpt].originsns[loop]))
+ return -EFAULT;
+
+ for (loop = 0; loop < len/4; loop++)
+ if (!gdbstub_write_dword(&((uint32_t *) addr)[loop],
+ BREAK_INSN)
+ ) {
+ /* need to undo the changes if possible */
+ for (xloop = 0; xloop < loop; xloop++)
+ gdbstub_write_dword(&((uint32_t *) addr)[xloop],
+ gdbstub_bkpts[bkpt].originsns[xloop]);
+ return -EFAULT;
+ }
+
+ gdbstub_bkpts[bkpt].addr = addr;
+ gdbstub_bkpts[bkpt].len = len;
+
+#if 0
+ gdbstub_printk("Set BKPT[%02x]: %08lx #%d {%04x, %04x} -> { %04x, %04x }\n",
+ bkpt,
+ gdbstub_bkpts[bkpt].addr,
+ gdbstub_bkpts[bkpt].len,
+ gdbstub_bkpts[bkpt].originsns[0],
+ gdbstub_bkpts[bkpt].originsns[1],
+ ((uint32_t *) addr)[0],
+ ((uint32_t *) addr)[1]
+ );
+#endif
+ return 0;
+
+ /* set hardware breakpoint */
+ case 1:
+ if (addr & 3 || len != 4)
+ return -EINVAL;
+
+ if (!(__debug_regs->dcr & DCR_IBE0)) {
+ //gdbstub_printk("set h/w break 0: %08lx\n", addr);
+ __debug_regs->dcr |= DCR_IBE0;
+ asm volatile("movgs %0,ibar0" : : "r"(addr));
+ return 0;
+ }
+
+ if (!(__debug_regs->dcr & DCR_IBE1)) {
+ //gdbstub_printk("set h/w break 1: %08lx\n", addr);
+ __debug_regs->dcr |= DCR_IBE1;
+ asm volatile("movgs %0,ibar1" : : "r"(addr));
+ return 0;
+ }
+
+ if (!(__debug_regs->dcr & DCR_IBE2)) {
+ //gdbstub_printk("set h/w break 2: %08lx\n", addr);
+ __debug_regs->dcr |= DCR_IBE2;
+ asm volatile("movgs %0,ibar2" : : "r"(addr));
+ return 0;
+ }
+
+ if (!(__debug_regs->dcr & DCR_IBE3)) {
+ //gdbstub_printk("set h/w break 3: %08lx\n", addr);
+ __debug_regs->dcr |= DCR_IBE3;
+ asm volatile("movgs %0,ibar3" : : "r"(addr));
+ return 0;
+ }
+
+ return -ENOSPC;
+
+ /* set data read/write/access watchpoint */
+ case 2:
+ case 3:
+ case 4:
+ if ((addr & ~7) != ((addr + len - 1) & ~7))
+ return -EINVAL;
+
+ tmp = addr & 7;
+
+ memset(dbmr.bytes, 0xff, sizeof(dbmr.bytes));
+ for (loop = 0; loop < len; loop++)
+ dbmr.bytes[tmp + loop] = 0;
+
+ addr &= ~7;
+
+ if (!(__debug_regs->dcr & (DCR_DRBE0|DCR_DWBE0))) {
+ //gdbstub_printk("set h/w watchpoint 0 type %ld: %08lx\n", type, addr);
+ tmp = type==2 ? DCR_DWBE0 : type==3 ? DCR_DRBE0 : DCR_DRBE0|DCR_DWBE0;
+ __debug_regs->dcr |= tmp;
+ asm volatile(" movgs %0,dbar0 \n"
+ " movgs %1,dbmr00 \n"
+ " movgs %2,dbmr01 \n"
+ " movgs gr0,dbdr00 \n"
+ " movgs gr0,dbdr01 \n"
+ : : "r"(addr), "r"(dbmr.mask0), "r"(dbmr.mask1));
+ return 0;
+ }
+
+ if (!(__debug_regs->dcr & (DCR_DRBE1|DCR_DWBE1))) {
+ //gdbstub_printk("set h/w watchpoint 1 type %ld: %08lx\n", type, addr);
+ tmp = type==2 ? DCR_DWBE1 : type==3 ? DCR_DRBE1 : DCR_DRBE1|DCR_DWBE1;
+ __debug_regs->dcr |= tmp;
+ asm volatile(" movgs %0,dbar1 \n"
+ " movgs %1,dbmr10 \n"
+ " movgs %2,dbmr11 \n"
+ " movgs gr0,dbdr10 \n"
+ " movgs gr0,dbdr11 \n"
+ : : "r"(addr), "r"(dbmr.mask0), "r"(dbmr.mask1));
+ return 0;
+ }
+
+ return -ENOSPC;
+
+ default:
+ return -EINVAL;
+ }
+
+} /* end gdbstub_set_breakpoint() */
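These type codes line up with GDB's 'Z'/'z' breakpoint packets, which the command loop below parses as type, address and length before calling into this function and gdbstub_clear_breakpoint(). For a software breakpoint the exchange might look like the following (the address is an arbitrary, suitably aligned example):

	Host:                   Reply:
	$Z0,c0012344,4#d7       +$OK#9a
	$z0,c0012344,4#f7       +$OK#9a

The 'Z' request saves the original instruction word and writes BREAK_INSN over it; the matching 'z' request puts the saved word back.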
+
+/*****************************************************************************/
+/*
+ * clear a breakpoint or watchpoint
+ */
+int gdbstub_clear_breakpoint(unsigned long type, unsigned long addr, unsigned long len)
+{
+ unsigned long tmp;
+ int bkpt, loop;
+
+ union {
+ struct {
+ unsigned long mask0, mask1;
+ };
+ uint8_t bytes[8];
+ } dbmr;
+
+ //gdbstub_printk("clearbkpt(%ld,%08lx,%ld)\n", type, addr, len);
+
+ switch (type) {
+ /* clear software breakpoint */
+ case 0:
+ for (bkpt = 255; bkpt >= 0; bkpt--)
+ if (gdbstub_bkpts[bkpt].addr == addr && gdbstub_bkpts[bkpt].len == len)
+ break;
+ if (bkpt < 0)
+ return -ENOENT;
+
+ gdbstub_bkpts[bkpt].addr = 0;
+
+ for (loop = 0; loop < len/4; loop++)
+ if (!gdbstub_write_dword(&((uint32_t *) addr)[loop],
+ gdbstub_bkpts[bkpt].originsns[loop]))
+ return -EFAULT;
+ return 0;
+
+ /* clear hardware breakpoint */
+ case 1:
+ if (addr & 3 || len != 4)
+ return -EINVAL;
+
+#define __get_ibar(X) ({ unsigned long x; asm volatile("movsg ibar"#X",%0" : "=r"(x)); x; })
+
+ if (__debug_regs->dcr & DCR_IBE0 && __get_ibar(0) == addr) {
+ //gdbstub_printk("clear h/w break 0: %08lx\n", addr);
+ __debug_regs->dcr &= ~DCR_IBE0;
+ asm volatile("movgs gr0,ibar0");
+ return 0;
+ }
+
+ if (__debug_regs->dcr & DCR_IBE1 && __get_ibar(1) == addr) {
+ //gdbstub_printk("clear h/w break 1: %08lx\n", addr);
+ __debug_regs->dcr &= ~DCR_IBE1;
+ asm volatile("movgs gr0,ibar1");
+ return 0;
+ }
+
+ if (__debug_regs->dcr & DCR_IBE2 && __get_ibar(2) == addr) {
+ //gdbstub_printk("clear h/w break 2: %08lx\n", addr);
+ __debug_regs->dcr &= ~DCR_IBE2;
+ asm volatile("movgs gr0,ibar2");
+ return 0;
+ }
+
+ if (__debug_regs->dcr & DCR_IBE3 && __get_ibar(3) == addr) {
+ //gdbstub_printk("clear h/w break 3: %08lx\n", addr);
+ __debug_regs->dcr &= ~DCR_IBE3;
+ asm volatile("movgs gr0,ibar3");
+ return 0;
+ }
+
+ return -EINVAL;
+
+ /* clear data read/write/access watchpoint */
+ case 2:
+ case 3:
+ case 4:
+ if ((addr & ~7) != ((addr + len - 1) & ~7))
+ return -EINVAL;
+
+ tmp = addr & 7;
+
+ memset(dbmr.bytes, 0xff, sizeof(dbmr.bytes));
+ for (loop = 0; loop < len; loop++)
+ dbmr.bytes[tmp + loop] = 0;
+
+ addr &= ~7;
+
+#define __get_dbar(X) ({ unsigned long x; asm volatile("movsg dbar"#X",%0" : "=r"(x)); x; })
+#define __get_dbmr0(X) ({ unsigned long x; asm volatile("movsg dbmr"#X"0,%0" : "=r"(x)); x; })
+#define __get_dbmr1(X) ({ unsigned long x; asm volatile("movsg dbmr"#X"1,%0" : "=r"(x)); x; })
+
+ /* consider DBAR 0 */
+ tmp = type==2 ? DCR_DWBE0 : type==3 ? DCR_DRBE0 : DCR_DRBE0|DCR_DWBE0;
+
+ if ((__debug_regs->dcr & (DCR_DRBE0|DCR_DWBE0)) != tmp ||
+ __get_dbar(0) != addr ||
+ __get_dbmr0(0) != dbmr.mask0 ||
+ __get_dbmr1(0) != dbmr.mask1)
+ goto skip_dbar0;
+
+ //gdbstub_printk("clear h/w watchpoint 0 type %ld: %08lx\n", type, addr);
+ __debug_regs->dcr &= ~(DCR_DRBE0|DCR_DWBE0);
+ asm volatile(" movgs gr0,dbar0 \n"
+ " movgs gr0,dbmr00 \n"
+ " movgs gr0,dbmr01 \n"
+ " movgs gr0,dbdr00 \n"
+ " movgs gr0,dbdr01 \n");
+ return 0;
+
+ skip_dbar0:
+	/* consider DBAR 1 */
+ tmp = type==2 ? DCR_DWBE1 : type==3 ? DCR_DRBE1 : DCR_DRBE1|DCR_DWBE1;
+
+ if ((__debug_regs->dcr & (DCR_DRBE1|DCR_DWBE1)) != tmp ||
+ __get_dbar(1) != addr ||
+ __get_dbmr0(1) != dbmr.mask0 ||
+ __get_dbmr1(1) != dbmr.mask1)
+ goto skip_dbar1;
+
+ //gdbstub_printk("clear h/w watchpoint 1 type %ld: %08lx\n", type, addr);
+ __debug_regs->dcr &= ~(DCR_DRBE1|DCR_DWBE1);
+ asm volatile(" movgs gr0,dbar1 \n"
+ " movgs gr0,dbmr10 \n"
+ " movgs gr0,dbmr11 \n"
+ " movgs gr0,dbdr10 \n"
+ " movgs gr0,dbdr11 \n");
+ return 0;
+
+ skip_dbar1:
+ return -ENOSPC;
+
+ default:
+ return -EINVAL;
+ }
+} /* end gdbstub_clear_breakpoint() */
+
+/*****************************************************************************/
+/*
+ * check for an internal software breakpoint, and wind the PC back if necessary
+ */
+static void gdbstub_check_breakpoint(void)
+{
+ unsigned long addr = __debug_frame->pc - 4;
+ int bkpt;
+
+ for (bkpt = 255; bkpt >= 0; bkpt--)
+ if (gdbstub_bkpts[bkpt].addr == addr)
+ break;
+ if (bkpt >= 0)
+ __debug_frame->pc = addr;
+
+ //gdbstub_printk("alter pc [%d] %08lx\n", bkpt, __debug_frame->pc);
+
+} /* end gdbstub_check_breakpoint() */
+
+/*****************************************************************************/
+/*
+ * dump the saved register frame (for debugging the stub itself)
+ */
+static void __attribute__((unused)) gdbstub_show_regs(void)
+{
+ uint32_t *reg;
+ int loop;
+
+ gdbstub_printk("\n");
+
+ gdbstub_printk("Frame: @%p [%s]\n",
+ __debug_frame,
+ __debug_frame->psr & PSR_S ? "kernel" : "user");
+
+ reg = (uint32_t *) __debug_frame;
+ for (loop = 0; loop < REG__END; loop++) {
+ printk("%s %08x", regnames[loop + 0], reg[loop + 0]);
+
+ if (loop == REG__END - 1 || loop % 5 == 4)
+ printk("\n");
+ else
+ printk(" | ");
+ }
+
+ gdbstub_printk("Process %s (pid: %d)\n", current->comm, current->pid);
+} /* end gdbstub_show_regs() */
+
+/*****************************************************************************/
+/*
+ * dump debugging regs
+ */
+static void __attribute__((unused)) gdbstub_dump_debugregs(void)
+{
+ unsigned long x;
+
+ x = __debug_regs->dcr;
+ gdbstub_printk("DCR %08lx ", x);
+
+ x = __debug_regs->brr;
+ gdbstub_printk("BRR %08lx\n", x);
+
+ gdbstub_printk("IBAR0 %08lx ", __get_ibar(0));
+ gdbstub_printk("IBAR1 %08lx ", __get_ibar(1));
+ gdbstub_printk("IBAR2 %08lx ", __get_ibar(2));
+ gdbstub_printk("IBAR3 %08lx\n", __get_ibar(3));
+
+ gdbstub_printk("DBAR0 %08lx ", __get_dbar(0));
+ gdbstub_printk("DBMR00 %08lx ", __get_dbmr0(0));
+ gdbstub_printk("DBMR01 %08lx\n", __get_dbmr1(0));
+
+ gdbstub_printk("DBAR1 %08lx ", __get_dbar(1));
+ gdbstub_printk("DBMR10 %08lx ", __get_dbmr0(1));
+ gdbstub_printk("DBMR11 %08lx\n", __get_dbmr1(1));
+
+ gdbstub_printk("\n");
+} /* end gdbstub_dump_debugregs() */
+
+/*****************************************************************************/
+/*
+ * dump the MMU state into a structure so that it can be accessed with GDB
+ */
+void gdbstub_get_mmu_state(void)
+{
+ asm volatile("movsg hsr0,%0" : "=r"(__debug_mmu.regs.hsr0));
+ asm volatile("movsg pcsr,%0" : "=r"(__debug_mmu.regs.pcsr));
+ asm volatile("movsg esr0,%0" : "=r"(__debug_mmu.regs.esr0));
+ asm volatile("movsg ear0,%0" : "=r"(__debug_mmu.regs.ear0));
+ asm volatile("movsg epcr0,%0" : "=r"(__debug_mmu.regs.epcr0));
+
+ /* read the protection / SAT registers */
+ __debug_mmu.iamr[0].L = __get_IAMLR(0);
+ __debug_mmu.iamr[0].P = __get_IAMPR(0);
+ __debug_mmu.iamr[1].L = __get_IAMLR(1);
+ __debug_mmu.iamr[1].P = __get_IAMPR(1);
+ __debug_mmu.iamr[2].L = __get_IAMLR(2);
+ __debug_mmu.iamr[2].P = __get_IAMPR(2);
+ __debug_mmu.iamr[3].L = __get_IAMLR(3);
+ __debug_mmu.iamr[3].P = __get_IAMPR(3);
+ __debug_mmu.iamr[4].L = __get_IAMLR(4);
+ __debug_mmu.iamr[4].P = __get_IAMPR(4);
+ __debug_mmu.iamr[5].L = __get_IAMLR(5);
+ __debug_mmu.iamr[5].P = __get_IAMPR(5);
+ __debug_mmu.iamr[6].L = __get_IAMLR(6);
+ __debug_mmu.iamr[6].P = __get_IAMPR(6);
+ __debug_mmu.iamr[7].L = __get_IAMLR(7);
+ __debug_mmu.iamr[7].P = __get_IAMPR(7);
+ __debug_mmu.iamr[8].L = __get_IAMLR(8);
+ __debug_mmu.iamr[8].P = __get_IAMPR(8);
+ __debug_mmu.iamr[9].L = __get_IAMLR(9);
+ __debug_mmu.iamr[9].P = __get_IAMPR(9);
+ __debug_mmu.iamr[10].L = __get_IAMLR(10);
+ __debug_mmu.iamr[10].P = __get_IAMPR(10);
+ __debug_mmu.iamr[11].L = __get_IAMLR(11);
+ __debug_mmu.iamr[11].P = __get_IAMPR(11);
+ __debug_mmu.iamr[12].L = __get_IAMLR(12);
+ __debug_mmu.iamr[12].P = __get_IAMPR(12);
+ __debug_mmu.iamr[13].L = __get_IAMLR(13);
+ __debug_mmu.iamr[13].P = __get_IAMPR(13);
+ __debug_mmu.iamr[14].L = __get_IAMLR(14);
+ __debug_mmu.iamr[14].P = __get_IAMPR(14);
+ __debug_mmu.iamr[15].L = __get_IAMLR(15);
+ __debug_mmu.iamr[15].P = __get_IAMPR(15);
+
+ __debug_mmu.damr[0].L = __get_DAMLR(0);
+ __debug_mmu.damr[0].P = __get_DAMPR(0);
+ __debug_mmu.damr[1].L = __get_DAMLR(1);
+ __debug_mmu.damr[1].P = __get_DAMPR(1);
+ __debug_mmu.damr[2].L = __get_DAMLR(2);
+ __debug_mmu.damr[2].P = __get_DAMPR(2);
+ __debug_mmu.damr[3].L = __get_DAMLR(3);
+ __debug_mmu.damr[3].P = __get_DAMPR(3);
+ __debug_mmu.damr[4].L = __get_DAMLR(4);
+ __debug_mmu.damr[4].P = __get_DAMPR(4);
+ __debug_mmu.damr[5].L = __get_DAMLR(5);
+ __debug_mmu.damr[5].P = __get_DAMPR(5);
+ __debug_mmu.damr[6].L = __get_DAMLR(6);
+ __debug_mmu.damr[6].P = __get_DAMPR(6);
+ __debug_mmu.damr[7].L = __get_DAMLR(7);
+ __debug_mmu.damr[7].P = __get_DAMPR(7);
+ __debug_mmu.damr[8].L = __get_DAMLR(8);
+ __debug_mmu.damr[8].P = __get_DAMPR(8);
+ __debug_mmu.damr[9].L = __get_DAMLR(9);
+ __debug_mmu.damr[9].P = __get_DAMPR(9);
+ __debug_mmu.damr[10].L = __get_DAMLR(10);
+ __debug_mmu.damr[10].P = __get_DAMPR(10);
+ __debug_mmu.damr[11].L = __get_DAMLR(11);
+ __debug_mmu.damr[11].P = __get_DAMPR(11);
+ __debug_mmu.damr[12].L = __get_DAMLR(12);
+ __debug_mmu.damr[12].P = __get_DAMPR(12);
+ __debug_mmu.damr[13].L = __get_DAMLR(13);
+ __debug_mmu.damr[13].P = __get_DAMPR(13);
+ __debug_mmu.damr[14].L = __get_DAMLR(14);
+ __debug_mmu.damr[14].P = __get_DAMPR(14);
+ __debug_mmu.damr[15].L = __get_DAMLR(15);
+ __debug_mmu.damr[15].P = __get_DAMPR(15);
+
+#ifdef CONFIG_MMU
+ do {
+ /* read the DAT entries from the TLB */
+ struct __debug_amr *p;
+ int loop;
+
+ asm volatile("movsg tplr,%0" : "=r"(__debug_mmu.regs.tplr));
+ asm volatile("movsg tppr,%0" : "=r"(__debug_mmu.regs.tppr));
+ asm volatile("movsg tpxr,%0" : "=r"(__debug_mmu.regs.tpxr));
+ asm volatile("movsg cxnr,%0" : "=r"(__debug_mmu.regs.cxnr));
+
+ p = __debug_mmu.tlb;
+
+ /* way 0 */
+ asm volatile("movgs %0,tpxr" :: "r"(0 << TPXR_WAY_SHIFT));
+ for (loop = 0; loop < 64; loop++) {
+ asm volatile("tlbpr %0,gr0,#1,#0" :: "r"(loop << PAGE_SHIFT));
+ asm volatile("movsg tplr,%0" : "=r"(p->L));
+ asm volatile("movsg tppr,%0" : "=r"(p->P));
+ p++;
+ }
+
+ /* way 1 */
+ asm volatile("movgs %0,tpxr" :: "r"(1 << TPXR_WAY_SHIFT));
+ for (loop = 0; loop < 64; loop++) {
+ asm volatile("tlbpr %0,gr0,#1,#0" :: "r"(loop << PAGE_SHIFT));
+ asm volatile("movsg tplr,%0" : "=r"(p->L));
+ asm volatile("movsg tppr,%0" : "=r"(p->P));
+ p++;
+ }
+
+ asm volatile("movgs %0,tplr" :: "r"(__debug_mmu.regs.tplr));
+ asm volatile("movgs %0,tppr" :: "r"(__debug_mmu.regs.tppr));
+ asm volatile("movgs %0,tpxr" :: "r"(__debug_mmu.regs.tpxr));
+ } while(0);
+#endif
+
+} /* end gdbstub_get_mmu_state() */
+
+/*****************************************************************************/
+/*
+ * handle event interception and GDB remote protocol processing
+ * - on entry:
+ * PSR.ET==0, PSR.S==1 and the CPU is in debug mode
+ * __debug_frame points to the saved registers
+ * __frame points to the kernel mode exception frame, if it was in kernel
+ * mode when the break happened
+ */
+void gdbstub(int sigval)
+{
+ unsigned long addr, length, loop, dbar, temp, temp2, temp3;
+ uint32_t zero;
+ char *ptr;
+ int flush_cache = 0;
+
+ LEDS(0x5000);
+
+ if (sigval < 0) {
+#ifndef CONFIG_GDBSTUB_IMMEDIATE
+ /* return immediately if GDB immediate activation option not set */
+ return;
+#else
+ sigval = SIGINT;
+#endif
+ }
+
+ save_user_regs(&__break_user_context);
+
+#if 0
+ gdbstub_printk("--> gdbstub() %08x %p %08x %08x\n",
+ __debug_frame->pc,
+ __debug_frame,
+ __debug_regs->brr,
+ __debug_regs->bpsr);
+// gdbstub_show_regs();
+#endif
+
+ LEDS(0x5001);
+
+	/* if we were interrupted by input on the gdbstub serial port,
+ * restore the context prior to the interrupt so that we return to that
+ * directly
+ */
+ temp = (unsigned long) __entry_kerneltrap_table;
+ temp2 = (unsigned long) __entry_usertrap_table;
+ temp3 = __debug_frame->pc & ~15;
+
+ if (temp3 == temp + TBR_TT_INTERRUPT_15 ||
+ temp3 == temp2 + TBR_TT_INTERRUPT_15
+ ) {
+ asm volatile("movsg pcsr,%0" : "=r"(__debug_frame->pc));
+ __debug_frame->psr |= PSR_ET;
+ __debug_frame->psr &= ~PSR_S;
+ if (__debug_frame->psr & PSR_PS)
+ __debug_frame->psr |= PSR_S;
+ __debug_regs->brr = (__debug_frame->tbr & TBR_TT) << 12;
+ __debug_regs->brr |= BRR_EB;
+ sigval = SIGINT;
+ }
+
+ /* handle the decrement timer going off (FR451 only) */
+ if (temp3 == temp + TBR_TT_DECREMENT_TIMER ||
+ temp3 == temp2 + TBR_TT_DECREMENT_TIMER
+ ) {
+ asm volatile("movgs %0,timerd" :: "r"(10000000));
+ asm volatile("movsg pcsr,%0" : "=r"(__debug_frame->pc));
+ __debug_frame->psr |= PSR_ET;
+ __debug_frame->psr &= ~PSR_S;
+ if (__debug_frame->psr & PSR_PS)
+ __debug_frame->psr |= PSR_S;
+ __debug_regs->brr = (__debug_frame->tbr & TBR_TT) << 12;
+ __debug_regs->brr |= BRR_EB;
+		sigval = SIGXCPU;
+ }
+
+ LEDS(0x5002);
+
+ /* after a BREAK insn, the PC lands on the far side of it */
+ if (__debug_regs->brr & BRR_SB)
+ gdbstub_check_breakpoint();
+
+ LEDS(0x5003);
+
+ /* handle attempts to write console data via GDB "O" commands */
+ if (__debug_frame->pc == (unsigned long) gdbstub_console_write + 4) {
+ __gdbstub_console_write((struct console *) __debug_frame->gr8,
+ (const char *) __debug_frame->gr9,
+ (unsigned) __debug_frame->gr10);
+ goto done;
+ }
+
+ if (gdbstub_rx_unget) {
+ sigval = SIGINT;
+ goto packet_waiting;
+ }
+
+ if (!sigval)
+ sigval = gdbstub_compute_signal(__debug_regs->brr);
+
+ LEDS(0x5004);
+
+ /* send a message to the debugger's user saying what happened if it may
+ * not be clear cut (we can't map exceptions onto signals properly)
+ */
+ if (sigval != SIGINT && sigval != SIGTRAP && sigval != SIGILL) {
+ static const char title[] = "Break ";
+ static const char crlf[] = "\r\n";
+ unsigned long brr = __debug_regs->brr;
+ char hx;
+
+ ptr = output_buffer;
+ *ptr++ = 'O';
+ ptr = mem2hex(title, ptr, sizeof(title) - 1,0);
+
+ hx = hexchars[(brr & 0xf0000000) >> 28];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x0f000000) >> 24];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x00f00000) >> 20];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x000f0000) >> 16];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x0000f000) >> 12];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x00000f00) >> 8];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x000000f0) >> 4];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+ hx = hexchars[(brr & 0x0000000f)];
+ *ptr++ = hexchars[hx >> 4]; *ptr++ = hexchars[hx & 0xf];
+
+ ptr = mem2hex(crlf, ptr, sizeof(crlf) - 1, 0);
+ *ptr = 0;
+ gdbstub_send_packet(output_buffer); /* send it off... */
+ }
+
+ LEDS(0x5005);
+
+ /* tell the debugger that an exception has occurred */
+ ptr = output_buffer;
+
+ /* Send trap type (converted to signal) */
+ *ptr++ = 'T';
+ *ptr++ = hexchars[sigval >> 4];
+ *ptr++ = hexchars[sigval & 0xf];
+
+ /* Send Error PC */
+ *ptr++ = hexchars[GDB_REG_PC >> 4];
+ *ptr++ = hexchars[GDB_REG_PC & 0xf];
+ *ptr++ = ':';
+ ptr = mem2hex(&__debug_frame->pc, ptr, 4, 0);
+ *ptr++ = ';';
+
+ /*
+ * Send frame pointer
+ */
+ *ptr++ = hexchars[GDB_REG_FP >> 4];
+ *ptr++ = hexchars[GDB_REG_FP & 0xf];
+ *ptr++ = ':';
+ ptr = mem2hex(&__debug_frame->fp, ptr, 4, 0);
+ *ptr++ = ';';
+
+ /*
+ * Send stack pointer
+ */
+ *ptr++ = hexchars[GDB_REG_SP >> 4];
+ *ptr++ = hexchars[GDB_REG_SP & 0xf];
+ *ptr++ = ':';
+ ptr = mem2hex(&__debug_frame->sp, ptr, 4, 0);
+ *ptr++ = ';';
+
+ *ptr++ = 0;
+ gdbstub_send_packet(output_buffer); /* send it off... */
+
+ LEDS(0x5006);
+
+ packet_waiting:
+ gdbstub_get_mmu_state();
+
+ /* wait for input from remote GDB */
+ while (1) {
+ output_buffer[0] = 0;
+
+ LEDS(0x5007);
+ gdbstub_recv_packet(input_buffer);
+ LEDS(0x5600 | input_buffer[0]);
+
+ switch (input_buffer[0]) {
+ /* request repeat of last signal number */
+ case '?':
+ output_buffer[0] = 'S';
+ output_buffer[1] = hexchars[sigval >> 4];
+ output_buffer[2] = hexchars[sigval & 0xf];
+ output_buffer[3] = 0;
+ break;
+
+ case 'd':
+ /* toggle debug flag */
+ break;
+
+ /* return the value of the CPU registers
+ * - GR0, GR1, GR2, GR3, GR4, GR5, GR6, GR7,
+ * - GR8, GR9, GR10, GR11, GR12, GR13, GR14, GR15,
+ * - GR16, GR17, GR18, GR19, GR20, GR21, GR22, GR23,
+ * - GR24, GR25, GR26, GR27, GR28, GR29, GR30, GR31,
+ * - GR32, GR33, GR34, GR35, GR36, GR37, GR38, GR39,
+ * - GR40, GR41, GR42, GR43, GR44, GR45, GR46, GR47,
+ * - GR48, GR49, GR50, GR51, GR52, GR53, GR54, GR55,
+ * - GR56, GR57, GR58, GR59, GR60, GR61, GR62, GR63,
+ * - FP0, FP1, FP2, FP3, FP4, FP5, FP6, FP7,
+ * - FP8, FP9, FP10, FP11, FP12, FP13, FP14, FP15,
+ * - FP16, FP17, FP18, FP19, FP20, FP21, FP22, FP23,
+ * - FP24, FP25, FP26, FP27, FP28, FP29, FP30, FP31,
+ * - FP32, FP33, FP34, FP35, FP36, FP37, FP38, FP39,
+ * - FP40, FP41, FP42, FP43, FP44, FP45, FP46, FP47,
+ * - FP48, FP49, FP50, FP51, FP52, FP53, FP54, FP55,
+ * - FP56, FP57, FP58, FP59, FP60, FP61, FP62, FP63,
+ * - PC, PSR, CCR, CCCR,
+ * - _X132, _X133, _X134
+ * - TBR, BRR, DBAR0, DBAR1, DBAR2, DBAR3,
+ * - _X141, _X142, _X143, _X144,
+ * - LR, LCR
+ */
+ case 'g':
+ zero = 0;
+ ptr = output_buffer;
+
+ /* deal with GR0, GR1-GR27, GR28-GR31, GR32-GR63 */
+ ptr = mem2hex(&zero, ptr, 4, 0);
+
+ for (loop = 1; loop <= 27; loop++)
+ ptr = mem2hex((unsigned long *)__debug_frame + REG_GR(loop),
+ ptr, 4, 0);
+ temp = (unsigned long) __frame;
+ ptr = mem2hex(&temp, ptr, 4, 0);
+ ptr = mem2hex((unsigned long *)__debug_frame + REG_GR(29), ptr, 4, 0);
+ ptr = mem2hex((unsigned long *)__debug_frame + REG_GR(30), ptr, 4, 0);
+#ifdef CONFIG_MMU
+ ptr = mem2hex((unsigned long *)__debug_frame + REG_GR(31), ptr, 4, 0);
+#else
+ temp = (unsigned long) __debug_frame;
+ ptr = mem2hex(&temp, ptr, 4, 0);
+#endif
+
+ for (loop = 32; loop <= 63; loop++)
+ ptr = mem2hex((unsigned long *)__debug_frame + REG_GR(loop),
+ ptr, 4, 0);
+
+ /* deal with FR0-FR63 */
+ for (loop = 0; loop <= 63; loop++)
+ ptr = mem2hex((unsigned long *)&__break_user_context +
+ __FPMEDIA_FR(loop),
+ ptr, 4, 0);
+
+ /* deal with special registers */
+ ptr = mem2hex(&__debug_frame->pc, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->psr, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->ccr, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->cccr, ptr, 4, 0);
+ ptr = mem2hex(&zero, ptr, 4, 0);
+ ptr = mem2hex(&zero, ptr, 4, 0);
+ ptr = mem2hex(&zero, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->tbr, ptr, 4, 0);
+ ptr = mem2hex(&__debug_regs->brr , ptr, 4, 0);
+
+ asm volatile("movsg dbar0,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg dbar1,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg dbar2,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg dbar3,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+
+ asm volatile("movsg scr0,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg scr1,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg scr2,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+ asm volatile("movsg scr3,%0" : "=r"(dbar));
+ ptr = mem2hex(&dbar, ptr, 4, 0);
+
+ ptr = mem2hex(&__debug_frame->lr, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->lcr, ptr, 4, 0);
+
+ ptr = mem2hex(&__debug_frame->iacc0, ptr, 8, 0);
+
+ ptr = mem2hex(&__break_user_context.f.fsr[0], ptr, 4, 0);
+
+ for (loop = 0; loop <= 7; loop++)
+ ptr = mem2hex(&__break_user_context.f.acc[loop], ptr, 4, 0);
+
+ ptr = mem2hex(&__break_user_context.f.accg, ptr, 8, 0);
+
+ for (loop = 0; loop <= 1; loop++)
+ ptr = mem2hex(&__break_user_context.f.msr[loop], ptr, 4, 0);
+
+ ptr = mem2hex(&__debug_frame->gner0, ptr, 4, 0);
+ ptr = mem2hex(&__debug_frame->gner1, ptr, 4, 0);
+
+ ptr = mem2hex(&__break_user_context.f.fner[0], ptr, 4, 0);
+ ptr = mem2hex(&__break_user_context.f.fner[1], ptr, 4, 0);
+
+ break;
+
+ /* set the values of the CPU registers */
+ case 'G':
+ ptr = &input_buffer[1];
+
+ /* deal with GR0, GR1-GR27, GR28-GR31, GR32-GR63 */
+ ptr = hex2mem(ptr, &temp, 4);
+
+ for (loop = 1; loop <= 27; loop++)
+ ptr = hex2mem(ptr, (unsigned long *)__debug_frame + REG_GR(loop),
+ 4);
+
+ ptr = hex2mem(ptr, &temp, 4);
+ __frame = (struct pt_regs *) temp;
+ ptr = hex2mem(ptr, &__debug_frame->gr29, 4);
+ ptr = hex2mem(ptr, &__debug_frame->gr30, 4);
+#ifdef CONFIG_MMU
+ ptr = hex2mem(ptr, &__debug_frame->gr31, 4);
+#else
+ ptr = hex2mem(ptr, &temp, 4);
+#endif
+
+ for (loop = 32; loop <= 63; loop++)
+ ptr = hex2mem(ptr, (unsigned long *)__debug_frame + REG_GR(loop),
+ 4);
+
+ /* deal with FR0-FR63 */
+ for (loop = 0; loop <= 63; loop++)
+				ptr = hex2mem(ptr,
+					      (unsigned long *)&__break_user_context +
+					      __FPMEDIA_FR(loop),
+					      4);
+
+ /* deal with special registers */
+ ptr = hex2mem(ptr, &__debug_frame->pc, 4);
+ ptr = hex2mem(ptr, &__debug_frame->psr, 4);
+ ptr = hex2mem(ptr, &__debug_frame->ccr, 4);
+ ptr = hex2mem(ptr, &__debug_frame->cccr,4);
+
+ for (loop = 132; loop <= 140; loop++)
+ ptr = hex2mem(ptr, &temp, 4);
+
+ ptr = hex2mem(ptr, &temp, 4);
+ asm volatile("movgs %0,scr0" :: "r"(temp));
+ ptr = hex2mem(ptr, &temp, 4);
+ asm volatile("movgs %0,scr1" :: "r"(temp));
+ ptr = hex2mem(ptr, &temp, 4);
+ asm volatile("movgs %0,scr2" :: "r"(temp));
+ ptr = hex2mem(ptr, &temp, 4);
+ asm volatile("movgs %0,scr3" :: "r"(temp));
+
+ ptr = hex2mem(ptr, &__debug_frame->lr, 4);
+ ptr = hex2mem(ptr, &__debug_frame->lcr, 4);
+
+ ptr = hex2mem(ptr, &__debug_frame->iacc0, 8);
+
+ ptr = hex2mem(ptr, &__break_user_context.f.fsr[0], 4);
+
+ for (loop = 0; loop <= 7; loop++)
+ ptr = hex2mem(ptr, &__break_user_context.f.acc[loop], 4);
+
+ ptr = hex2mem(ptr, &__break_user_context.f.accg, 8);
+
+ for (loop = 0; loop <= 1; loop++)
+ ptr = hex2mem(ptr, &__break_user_context.f.msr[loop], 4);
+
+ ptr = hex2mem(ptr, &__debug_frame->gner0, 4);
+ ptr = hex2mem(ptr, &__debug_frame->gner1, 4);
+
+ ptr = hex2mem(ptr, &__break_user_context.f.fner[0], 4);
+ ptr = hex2mem(ptr, &__break_user_context.f.fner[1], 4);
+
+ gdbstub_strcpy(output_buffer,"OK");
+ break;
+
+ /* mAA..AA,LLLL Read LLLL bytes at address AA..AA */
+ case 'm':
+ ptr = &input_buffer[1];
+
+ if (hexToInt(&ptr, &addr) &&
+ *ptr++ == ',' &&
+ hexToInt(&ptr, &length)
+ ) {
+ if (mem2hex((char *)addr, output_buffer, length, 1))
+ break;
+ gdbstub_strcpy (output_buffer, "E03");
+ }
+ else {
+ gdbstub_strcpy(output_buffer,"E01");
+ }
+ break;
+
+		/* MAA..AA,LLLL: Write LLLL bytes at address AA..AA return OK */
+ case 'M':
+ ptr = &input_buffer[1];
+
+ if (hexToInt(&ptr, &addr) &&
+ *ptr++ == ',' &&
+ hexToInt(&ptr, &length) &&
+ *ptr++ == ':'
+ ) {
+ if (hex2mem(ptr, (char *)addr, length)) {
+ gdbstub_strcpy(output_buffer, "OK");
+ }
+ else {
+ gdbstub_strcpy(output_buffer, "E03");
+ }
+ }
+ else
+ gdbstub_strcpy(output_buffer, "E02");
+
+ flush_cache = 1;
+ break;
+
+ /* PNN,=RRRRRRRR: Write value R to reg N return OK */
+ case 'P':
+ ptr = &input_buffer[1];
+
+ if (!hexToInt(&ptr, &addr) ||
+ *ptr++ != '=' ||
+ !hexToInt(&ptr, &temp)
+ ) {
+ gdbstub_strcpy(output_buffer, "E01");
+ break;
+ }
+
+ temp2 = 1;
+ switch (addr) {
+ case GDB_REG_GR(0):
+ break;
+ case GDB_REG_GR(1) ... GDB_REG_GR(63):
+ __break_user_context.i.gr[addr - GDB_REG_GR(0)] = temp;
+ break;
+ case GDB_REG_FR(0) ... GDB_REG_FR(63):
+ __break_user_context.f.fr[addr - GDB_REG_FR(0)] = temp;
+ break;
+ case GDB_REG_PC:
+ __break_user_context.i.pc = temp;
+ break;
+ case GDB_REG_PSR:
+ __break_user_context.i.psr = temp;
+ break;
+ case GDB_REG_CCR:
+ __break_user_context.i.ccr = temp;
+ break;
+ case GDB_REG_CCCR:
+ __break_user_context.i.cccr = temp;
+ break;
+ case GDB_REG_BRR:
+ __debug_regs->brr = temp;
+ break;
+ case GDB_REG_LR:
+ __break_user_context.i.lr = temp;
+ break;
+ case GDB_REG_LCR:
+ __break_user_context.i.lcr = temp;
+ break;
+ case GDB_REG_FSR0:
+ __break_user_context.f.fsr[0] = temp;
+ break;
+ case GDB_REG_ACC(0) ... GDB_REG_ACC(7):
+ __break_user_context.f.acc[addr - GDB_REG_ACC(0)] = temp;
+ break;
+ case GDB_REG_ACCG(0):
+ *(uint32_t *) &__break_user_context.f.accg[0] = temp;
+ break;
+ case GDB_REG_ACCG(4):
+ *(uint32_t *) &__break_user_context.f.accg[4] = temp;
+ break;
+ case GDB_REG_MSR(0) ... GDB_REG_MSR(1):
+ __break_user_context.f.msr[addr - GDB_REG_MSR(0)] = temp;
+ break;
+ case GDB_REG_GNER(0) ... GDB_REG_GNER(1):
+ __break_user_context.i.gner[addr - GDB_REG_GNER(0)] = temp;
+ break;
+ case GDB_REG_FNER(0) ... GDB_REG_FNER(1):
+ __break_user_context.f.fner[addr - GDB_REG_FNER(0)] = temp;
+ break;
+ default:
+ temp2 = 0;
+ break;
+ }
+
+ if (temp2) {
+ gdbstub_strcpy(output_buffer, "OK");
+ }
+ else {
+ gdbstub_strcpy(output_buffer, "E02");
+ }
+ break;
+
+ /* cAA..AA Continue at address AA..AA(optional) */
+ case 'c':
+ /* try to read optional parameter, pc unchanged if no parm */
+ ptr = &input_buffer[1];
+ if (hexToInt(&ptr, &addr))
+ __debug_frame->pc = addr;
+ goto done;
+
+ /* kill the program */
+ case 'k' :
+ goto done; /* just continue */
+
+
+ /* reset the whole machine (FIXME: system dependent) */
+ case 'r':
+ break;
+
+
+ /* step to next instruction */
+ case 's':
+ __debug_regs->dcr |= DCR_SE;
+ goto done;
+
+ /* set baud rate (bBB) */
+ case 'b':
+ ptr = &input_buffer[1];
+ if (!hexToInt(&ptr, &temp)) {
+ gdbstub_strcpy(output_buffer,"B01");
+ break;
+ }
+
+ if (temp) {
+ /* ack before changing speed */
+ gdbstub_send_packet("OK");
+ gdbstub_set_baud(temp);
+ }
+ break;
+
+ /* set breakpoint */
+ case 'Z':
+ ptr = &input_buffer[1];
+
+ if (!hexToInt(&ptr,&temp) || *ptr++ != ',' ||
+ !hexToInt(&ptr,&addr) || *ptr++ != ',' ||
+ !hexToInt(&ptr,&length)
+ ) {
+ gdbstub_strcpy(output_buffer,"E01");
+ break;
+ }
+
+ if (temp >= 5) {
+ gdbstub_strcpy(output_buffer,"E03");
+ break;
+ }
+
+ if (gdbstub_set_breakpoint(temp, addr, length) < 0) {
+ gdbstub_strcpy(output_buffer,"E03");
+ break;
+ }
+
+ if (temp == 0)
+ flush_cache = 1; /* soft bkpt by modified memory */
+
+ gdbstub_strcpy(output_buffer,"OK");
+ break;
+
+ /* clear breakpoint */
+ case 'z':
+ ptr = &input_buffer[1];
+
+ if (!hexToInt(&ptr,&temp) || *ptr++ != ',' ||
+ !hexToInt(&ptr,&addr) || *ptr++ != ',' ||
+ !hexToInt(&ptr,&length)
+ ) {
+ gdbstub_strcpy(output_buffer,"E01");
+ break;
+ }
+
+ if (temp >= 5) {
+ gdbstub_strcpy(output_buffer,"E03");
+ break;
+ }
+
+ if (gdbstub_clear_breakpoint(temp, addr, length) < 0) {
+ gdbstub_strcpy(output_buffer,"E03");
+ break;
+ }
+
+ if (temp == 0)
+ flush_cache = 1; /* soft bkpt by modified memory */
+
+ gdbstub_strcpy(output_buffer,"OK");
+ break;
+
+ default:
+ gdbstub_proto("### GDB Unsupported Cmd '%s'\n",input_buffer);
+ break;
+ }
+
+ /* reply to the request */
+ LEDS(0x5009);
+ gdbstub_send_packet(output_buffer);
+ }
+
+ done:
+ restore_user_regs(&__break_user_context);
+
+ //gdbstub_dump_debugregs();
+ //gdbstub_printk("<-- gdbstub() %08x\n", __debug_frame->pc);
+
+ /* need to flush the instruction cache before resuming, as we may have
+ * deposited a breakpoint, and the icache probably has no way of
+ * knowing that a data ref to some location may have changed something
+ * that is in the instruction cache. NB: We flush both caches, just to
+ * be sure...
+ */
+
+ /* note: flushing the icache will clobber EAR0 on the FR451 */
+ if (flush_cache)
+ gdbstub_purge_cache();
+
+ LEDS(0x5666);
+
+} /* end gdbstub() */
+
+/*****************************************************************************/
+/*
+ * initialise the GDB stub
+ */
+void __init gdbstub_init(void)
+{
+#ifdef CONFIG_GDBSTUB_IMMEDIATE
+ unsigned char ch;
+ int ret;
+#endif
+
+ gdbstub_printk("%s", gdbstub_banner);
+ gdbstub_printk("DCR: %x\n", __debug_regs->dcr);
+
+ gdbstub_io_init();
+
+ /* try to talk to GDB (or anyone insane enough to want to type GDB protocol by hand) */
+ gdbstub_proto("### GDB Tx ACK\n");
+ gdbstub_tx_char('+'); /* 'hello world' */
+
+#ifdef CONFIG_GDBSTUB_IMMEDIATE
+ gdbstub_printk("GDB Stub waiting for packet\n");
+
+ /*
+ * In case GDB is started before us, ack any packets
+ * (presumably "$?#xx") sitting there.
+ */
+ do { gdbstub_rx_char(&ch, 0); } while (ch != '$');
+ do { gdbstub_rx_char(&ch, 0); } while (ch != '#');
+ do { ret = gdbstub_rx_char(&ch, 0); } while (ret != 0); /* eat first csum byte */
+ do { ret = gdbstub_rx_char(&ch, 0); } while (ret != 0); /* eat second csum byte */
+
+ gdbstub_proto("### GDB Tx NAK\n");
+ gdbstub_tx_char('-'); /* nak it */
+
+#else
+ gdbstub_printk("GDB Stub set\n");
+#endif
+
+#if 0
+ /* send banner */
+ ptr = output_buffer;
+ *ptr++ = 'O';
+ ptr = mem2hex(gdbstub_banner, ptr, sizeof(gdbstub_banner) - 1, 0);
+ gdbstub_send_packet(output_buffer);
+#endif
+#if defined(CONFIG_GDBSTUB_CONSOLE) && defined(CONFIG_GDBSTUB_IMMEDIATE)
+ register_console(&gdbstub_console);
+#endif
+
+} /* end gdbstub_init() */
+
+/*****************************************************************************/
+/*
+ * register the console at a more appropriate time
+ */
+#if defined (CONFIG_GDBSTUB_CONSOLE) && !defined(CONFIG_GDBSTUB_IMMEDIATE)
+static int __init gdbstub_postinit(void)
+{
+ printk("registering console\n");
+ register_console(&gdbstub_console);
+ return 0;
+} /* end gdbstub_postinit() */
+
+__initcall(gdbstub_postinit);
+#endif
+
+/*****************************************************************************/
+/*
+ * send an exit message to GDB
+ */
+void gdbstub_exit(int status)
+{
+ unsigned char checksum;
+ int count;
+ unsigned char ch;
+
+ sprintf(output_buffer,"W%02x",status&0xff);
+
+ gdbstub_tx_char('$');
+ checksum = 0;
+ count = 0;
+
+ while ((ch = output_buffer[count]) != 0) {
+ gdbstub_tx_char(ch);
+ checksum += ch;
+ count += 1;
+ }
+
+ gdbstub_tx_char('#');
+ gdbstub_tx_char(hexchars[checksum >> 4]);
+ gdbstub_tx_char(hexchars[checksum & 0xf]);
+
+ /* make sure the output is flushed, or else RedBoot might clobber it */
+ gdbstub_tx_char('-');
+ gdbstub_tx_flush();
+
+} /* end gdbstub_exit() */
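The exit report is a one-way "W" packet carrying the low byte of the exit status; no acknowledgement is waited for, hence the trailing '-' and the flush to push the characters out before RedBoot regains the UART. As a worked example:

	gdbstub_exit(0);	/* puts "$W00#b7" (then '-') on the wire */

since 'W' + '0' + '0' == 0x57 + 0x30 + 0x30 == 0xb7 modulo 256.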
+
+/*****************************************************************************/
+/*
+ * GDB wants to call malloc() and free() to allocate memory for calling kernel
+ * functions directly from its command line
+ */
+static void *malloc(size_t size) __attribute__((unused));
+static void *malloc(size_t size)
+{
+ return kmalloc(size, GFP_ATOMIC);
+}
+
+static void free(void *p) __attribute__((unused));
+static void free(void *p)
+{
+ kfree(p);
+}
+
+static uint32_t ___get_HSR0(void) __attribute__((unused));
+static uint32_t ___get_HSR0(void)
+{
+ return __get_HSR(0);
+}
+
+static uint32_t ___set_HSR0(uint32_t x) __attribute__((unused));
+static uint32_t ___set_HSR0(uint32_t x)
+{
+ __set_HSR(0, x);
+ return __get_HSR(0);
+}
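With these helpers compiled in (they are referenced only from the debugger, hence the unused attributes), storage for such calls can be obtained straight from the gdb command line. An illustrative session, assuming a connected target (the commands are standard gdb; the kernel-side effect is just the kmalloc/kfree wrappers above):

	(gdb) call malloc(64)
	(gdb) call free($1)

where $1 is gdb's value-history entry holding the pointer returned by the malloc() call.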
--- /dev/null
+/* head-mmu-fr451.S: FR451 mmu-linux specific bits of initialisation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <asm/mem-layout.h>
+#include <asm/spr-regs.h>
+#include <asm/mb86943a.h>
+#include "head.inc"
+
+
+#define __400_DBR0 0xfe000e00
+#define __400_DBR1 0xfe000e08
+#define __400_DBR2 0xfe000e10
+#define __400_DBR3 0xfe000e18
+#define __400_DAM0 0xfe000f00
+#define __400_DAM1 0xfe000f08
+#define __400_DAM2 0xfe000f10
+#define __400_DAM3 0xfe000f18
+#define __400_LGCR 0xfe000010
+#define __400_LCR 0xfe000100
+#define __400_LSBR 0xfe000c00
+
+ .section .text.init,"ax"
+ .balign 4
+
+###############################################################################
+#
+# describe the position and layout of the SDRAM controller registers
+#
+# ENTRY: EXIT:
+# GR5 - cacheline size
+# GR11 - displacement of 2nd SDRAM addr reg from GR14
+# GR12 - displacement of 3rd SDRAM addr reg from GR14
+# GR13 - displacement of 4th SDRAM addr reg from GR14
+# GR14 - address of 1st SDRAM addr reg
+# GR15 - amount to shift address by to match SDRAM addr reg
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+# CC0 - T if DBR0 is present
+# CC1 - T if DBR1 is present
+# CC2 - T if DBR2 is present
+# CC3 - T if DBR3 is present
+#
+###############################################################################
+ .globl __head_fr451_describe_sdram
+__head_fr451_describe_sdram:
+ sethi.p %hi(__400_DBR0),gr14
+ setlo %lo(__400_DBR0),gr14
+ setlos.p #__400_DBR1-__400_DBR0,gr11
+ setlos #__400_DBR2-__400_DBR0,gr12
+ setlos.p #__400_DBR3-__400_DBR0,gr13
+ setlos #32,gr5 ; cacheline size
+ setlos.p #0,gr15 ; amount to shift addr reg by
+ setlos #0x00ff,gr4
+ movgs gr4,cccr ; extant DARS/DAMK regs
+ bralr
+
+###############################################################################
+#
+# rearrange the bus controller registers
+#
+# ENTRY: EXIT:
+# GR26 &__head_reference [saved]
+# GR30 LED address revised LED address
+#
+###############################################################################
+ .globl __head_fr451_set_busctl
+__head_fr451_set_busctl:
+ sethi.p %hi(__400_LGCR),gr4
+ setlo %lo(__400_LGCR),gr4
+ sethi.p %hi(__400_LSBR),gr10
+ setlo %lo(__400_LSBR),gr10
+ sethi.p %hi(__400_LCR),gr11
+ setlo %lo(__400_LCR),gr11
+
+ # set the bus controller
+ ldi @(gr4,#0),gr5
+ ori gr5,#0xff,gr5 ; make sure all chip-selects are enabled
+ sti gr5,@(gr4,#0)
+
+ sethi.p %hi(__region_CS1),gr4
+ setlo %lo(__region_CS1),gr4
+ sethi.p %hi(__region_CS1_M),gr5
+ setlo %lo(__region_CS1_M),gr5
+ sethi.p %hi(__region_CS1_C),gr6
+ setlo %lo(__region_CS1_C),gr6
+ sti gr4,@(gr10,#1*0x08)
+ sti gr5,@(gr10,#1*0x08+0x100)
+ sti gr6,@(gr11,#1*0x08)
+ sethi.p %hi(__region_CS2),gr4
+ setlo %lo(__region_CS2),gr4
+ sethi.p %hi(__region_CS2_M),gr5
+ setlo %lo(__region_CS2_M),gr5
+ sethi.p %hi(__region_CS2_C),gr6
+ setlo %lo(__region_CS2_C),gr6
+ sti gr4,@(gr10,#2*0x08)
+ sti gr5,@(gr10,#2*0x08+0x100)
+ sti gr6,@(gr11,#2*0x08)
+ sethi.p %hi(__region_CS3),gr4
+ setlo %lo(__region_CS3),gr4
+ sethi.p %hi(__region_CS3_M),gr5
+ setlo %lo(__region_CS3_M),gr5
+ sethi.p %hi(__region_CS3_C),gr6
+ setlo %lo(__region_CS3_C),gr6
+ sti gr4,@(gr10,#3*0x08)
+ sti gr5,@(gr10,#3*0x08+0x100)
+ sti gr6,@(gr11,#3*0x08)
+ sethi.p %hi(__region_CS4),gr4
+ setlo %lo(__region_CS4),gr4
+ sethi.p %hi(__region_CS4_M),gr5
+ setlo %lo(__region_CS4_M),gr5
+ sethi.p %hi(__region_CS4_C),gr6
+ setlo %lo(__region_CS4_C),gr6
+ sti gr4,@(gr10,#4*0x08)
+ sti gr5,@(gr10,#4*0x08+0x100)
+ sti gr6,@(gr11,#4*0x08)
+ sethi.p %hi(__region_CS5),gr4
+ setlo %lo(__region_CS5),gr4
+ sethi.p %hi(__region_CS5_M),gr5
+ setlo %lo(__region_CS5_M),gr5
+ sethi.p %hi(__region_CS5_C),gr6
+ setlo %lo(__region_CS5_C),gr6
+ sti gr4,@(gr10,#5*0x08)
+ sti gr5,@(gr10,#5*0x08+0x100)
+ sti gr6,@(gr11,#5*0x08)
+ sethi.p %hi(__region_CS6),gr4
+ setlo %lo(__region_CS6),gr4
+ sethi.p %hi(__region_CS6_M),gr5
+ setlo %lo(__region_CS6_M),gr5
+ sethi.p %hi(__region_CS6_C),gr6
+ setlo %lo(__region_CS6_C),gr6
+ sti gr4,@(gr10,#6*0x08)
+ sti gr5,@(gr10,#6*0x08+0x100)
+ sti gr6,@(gr11,#6*0x08)
+ sethi.p %hi(__region_CS7),gr4
+ setlo %lo(__region_CS7),gr4
+ sethi.p %hi(__region_CS7_M),gr5
+ setlo %lo(__region_CS7_M),gr5
+ sethi.p %hi(__region_CS7_C),gr6
+ setlo %lo(__region_CS7_C),gr6
+ sti gr4,@(gr10,#7*0x08)
+ sti gr5,@(gr10,#7*0x08+0x100)
+ sti gr6,@(gr11,#7*0x08)
+ membar
+ bar
+
+ # adjust LED bank address
+#ifdef CONFIG_MB93091_VDK
+ sethi.p %hi(__region_CS2 + 0x01200004),gr30
+ setlo %lo(__region_CS2 + 0x01200004),gr30
+#endif
+ bralr
+
+###############################################################################
+#
+# determine the total SDRAM size
+#
+# ENTRY: EXIT:
+# GR25 - SDRAM size
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+###############################################################################
+ .globl __head_fr451_survey_sdram
+__head_fr451_survey_sdram:
+ sethi.p %hi(__400_DAM0),gr11
+ setlo %lo(__400_DAM0),gr11
+ sethi.p %hi(__400_DBR0),gr12
+ setlo %lo(__400_DBR0),gr12
+
+ sethi.p %hi(0xfe000000),gr17 ; unused SDRAM DBR value
+ setlo %lo(0xfe000000),gr17
+ setlos #0,gr25
+
+ ldi @(gr12,#0x00),gr4 ; DAR0
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS0
+ ldi @(gr11,#0x00),gr6 ; DAM0: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS0:
+
+ ldi @(gr12,#0x08),gr4 ; DAR1
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS1
+ ldi @(gr11,#0x08),gr6 ; DAM1: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS1:
+
+ ldi @(gr12,#0x10),gr4 ; DAR2
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS2
+ ldi @(gr11,#0x10),gr6 ; DAM2: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS2:
+
+ ldi @(gr12,#0x18),gr4 ; DAR3
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS3
+ ldi @(gr11,#0x18),gr6 ; DAM3: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS3:
+ bralr
+
+###############################################################################
+#
+# set the protection map with the I/DAMPR registers
+#
+# ENTRY: EXIT:
+# GR25 SDRAM size [saved]
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+#
+# Using this map:
+# REGISTERS ADDRESS RANGE VIEW
+# =============== ====================== ===============================
+# IAMPR0/DAMPR0 0xC0000000-0xCFFFFFFF Cached kernel RAM Window
+# DAMPR11 0xE0000000-0xFFFFFFFF Uncached I/O
+#
+###############################################################################
+ .globl __head_fr451_set_protection
+__head_fr451_set_protection:
+ movsg lr,gr27
+
+ # set the I/O region protection registers for FR451 in MMU mode
+#define PGPROT_IO xAMPRx_L|xAMPRx_M|xAMPRx_S_KERNEL|xAMPRx_C|xAMPRx_V
+
+ sethi.p %hi(__region_IO),gr5
+ setlo %lo(__region_IO),gr5
+ setlos #PGPROT_IO|xAMPRx_SS_512Mb,gr4
+ or gr4,gr5,gr4
+ movgs gr5,damlr11 ; General I/O tile
+ movgs gr4,dampr11
+
+ # need to open a window onto at least part of the RAM for the kernel's use
+ sethi.p %hi(__sdram_base),gr8
+ setlo %lo(__sdram_base),gr8 ; physical address
+ sethi.p %hi(__page_offset),gr9
+ setlo %lo(__page_offset),gr9 ; virtual address
+
+ setlos #xAMPRx_L|xAMPRx_M|xAMPRx_SS_256Mb|xAMPRx_S_KERNEL|xAMPRx_V,gr11
+ or gr8,gr11,gr8
+
+ movgs gr9,iamlr0 ; mapped from real address 0
+ movgs gr8,iampr0 ; cached kernel memory at 0xC0000000
+ movgs gr9,damlr0
+ movgs gr8,dampr0
+
+ # set a temporary mapping for the kernel running at address 0 until we've turned on the MMU
+ sethi.p %hi(__sdram_base),gr9
+ setlo %lo(__sdram_base),gr9 ; virtual address
+
+ and.p gr4,gr11,gr4
+ and gr5,gr11,gr5
+ or.p gr4,gr11,gr4
+ or gr5,gr11,gr5
+
+ movgs gr9,iamlr1 ; mapped from real address 0
+ movgs gr8,iampr1 ; cached kernel memory at 0x00000000
+ movgs gr9,damlr1
+ movgs gr8,dampr1
+
+ # we use DAMR2-10 for kmap_atomic(), cache flush and TLB management
+ # since the DAMLR regs are not going to change, we can set them now
+ # also set up IAMLR2 to the same as DAMLR5
+ sethi.p %hi(KMAP_ATOMIC_PRIMARY_FRAME),gr4
+ setlo %lo(KMAP_ATOMIC_PRIMARY_FRAME),gr4
+ sethi.p %hi(PAGE_SIZE),gr5
+ setlo %lo(PAGE_SIZE),gr5
+
+ movgs gr4,damlr2
+ movgs gr4,iamlr2
+ add gr4,gr5,gr4
+ movgs gr4,damlr3
+ add gr4,gr5,gr4
+ movgs gr4,damlr4
+ add gr4,gr5,gr4
+ movgs gr4,damlr5
+ add gr4,gr5,gr4
+ movgs gr4,damlr6
+ add gr4,gr5,gr4
+ movgs gr4,damlr7
+ add gr4,gr5,gr4
+ movgs gr4,damlr8
+ add gr4,gr5,gr4
+ movgs gr4,damlr9
+ add gr4,gr5,gr4
+ movgs gr4,damlr10
+
+ movgs gr0,dampr2
+ movgs gr0,dampr4
+ movgs gr0,dampr5
+ movgs gr0,dampr6
+ movgs gr0,dampr7
+ movgs gr0,dampr8
+ movgs gr0,dampr9
+ movgs gr0,dampr10
+
+ movgs gr0,iamlr3
+ movgs gr0,iamlr4
+ movgs gr0,iamlr5
+ movgs gr0,iamlr6
+ movgs gr0,iamlr7
+
+ movgs gr0,iampr2
+ movgs gr0,iampr3
+ movgs gr0,iampr4
+ movgs gr0,iampr5
+ movgs gr0,iampr6
+ movgs gr0,iampr7
+
+ # start in TLB context 0 with the swapper's page tables
+ movgs gr0,cxnr
+
+ sethi.p %hi(swapper_pg_dir),gr4
+ setlo %lo(swapper_pg_dir),gr4
+ sethi.p %hi(__page_offset),gr5
+ setlo %lo(__page_offset),gr5
+ sub gr4,gr5,gr4
+ movgs gr4,ttbr
+ setlos #xAMPRx_L|xAMPRx_M|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V,gr5
+ or gr4,gr5,gr4
+ movgs gr4,dampr3
+
+ # the FR451 also has an extra trap base register
+ movsg tbr,gr4
+ movgs gr4,btbr
+
+ LEDS 0x3300
+ jmpl @(gr27,gr0)
+
+###############################################################################
+#
+# finish setting up the protection registers
+#
+###############################################################################
+ .globl __head_fr451_finalise_protection
+__head_fr451_finalise_protection:
+ # turn on the timers as appropriate
+ movgs gr0,timerh
+ movgs gr0,timerl
+ movgs gr0,timerd
+ movsg hsr0,gr4
+ sethi.p %hi(HSR0_ETMI),gr5
+ setlo %lo(HSR0_ETMI),gr5
+ or gr4,gr5,gr4
+ movgs gr4,hsr0
+
+ # clear the TLB entry cache
+ movgs gr0,iamlr1
+ movgs gr0,iampr1
+ movgs gr0,damlr1
+ movgs gr0,dampr1
+
+ # clear the PGE cache
+ sethi.p %hi(__flush_tlb_all),gr4
+ setlo %lo(__flush_tlb_all),gr4
+ jmpl @(gr4,gr0)
--- /dev/null
+/* head-uc-fr401.S: FR401/3/5 uc-linux specific bits of initialisation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <asm/spr-regs.h>
+#include <asm/mb86943a.h>
+#include "head.inc"
+
+
+#define __400_DBR0 0xfe000e00
+#define __400_DBR1 0xfe000e08
+#define __400_DBR2 0xfe000e10 /* not on FR401 */
+#define __400_DBR3 0xfe000e18 /* not on FR401 */
+#define __400_DAM0 0xfe000f00
+#define __400_DAM1 0xfe000f08
+#define __400_DAM2 0xfe000f10 /* not on FR401 */
+#define __400_DAM3 0xfe000f18 /* not on FR401 */
+#define __400_LGCR 0xfe000010
+#define __400_LCR 0xfe000100
+#define __400_LSBR 0xfe000c00
+
+ .section .text.init,"ax"
+ .balign 4
+
+###############################################################################
+#
+# describe the position and layout of the SDRAM controller registers
+#
+# ENTRY: EXIT:
+# GR5 - cacheline size
+# GR11 - displacement of 2nd SDRAM addr reg from GR14
+# GR12 - displacement of 3rd SDRAM addr reg from GR14
+# GR13 - displacement of 4th SDRAM addr reg from GR14
+# GR14 - address of 1st SDRAM addr reg
+# GR15 - amount to shift address by to match SDRAM addr reg
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+# CC0 - T if DBR0 is present
+# CC1 - T if DBR1 is present
+# CC2 - T if DBR2 is present (not FR401/FR401A)
+# CC3 - T if DBR3 is present (not FR401/FR401A)
+#
+###############################################################################
+ .globl __head_fr401_describe_sdram
+__head_fr401_describe_sdram:
+ sethi.p %hi(__400_DBR0),gr14
+ setlo %lo(__400_DBR0),gr14
+ setlos.p #__400_DBR1-__400_DBR0,gr11
+ setlos #__400_DBR2-__400_DBR0,gr12
+ setlos.p #__400_DBR3-__400_DBR0,gr13
+ setlos #32,gr5 ; cacheline size
+ setlos.p #0,gr15 ; amount to shift addr reg by
+
+ # specify which DBR regs are present
+ setlos #0x00ff,gr4
+ movgs gr4,cccr
+ movsg psr,gr3 ; check for FR401/FR401A
+ srli gr3,#25,gr3
+ subicc gr3,#0x20>>1,gr0,icc0
+ bnelr icc0,#1
+ setlos #0x000f,gr4
+ movgs gr4,cccr
+ bralr
+
+###############################################################################
+#
+# rearrange the bus controller registers
+#
+# ENTRY: EXIT:
+# GR26 &__head_reference [saved]
+# GR30 LED address revised LED address
+#
+###############################################################################
+ .globl __head_fr401_set_busctl
+__head_fr401_set_busctl:
+ sethi.p %hi(__400_LGCR),gr4
+ setlo %lo(__400_LGCR),gr4
+ sethi.p %hi(__400_LSBR),gr10
+ setlo %lo(__400_LSBR),gr10
+ sethi.p %hi(__400_LCR),gr11
+ setlo %lo(__400_LCR),gr11
+
+ # set the bus controller
+ ldi @(gr4,#0),gr5
+ ori gr5,#0xff,gr5 ; make sure all chip-selects are enabled
+ sti gr5,@(gr4,#0)
+
+ sethi.p %hi(__region_CS1),gr4
+ setlo %lo(__region_CS1),gr4
+ sethi.p %hi(__region_CS1_M),gr5
+ setlo %lo(__region_CS1_M),gr5
+ sethi.p %hi(__region_CS1_C),gr6
+ setlo %lo(__region_CS1_C),gr6
+ sti gr4,@(gr10,#1*0x08)
+ sti gr5,@(gr10,#1*0x08+0x100)
+ sti gr6,@(gr11,#1*0x08)
+ sethi.p %hi(__region_CS2),gr4
+ setlo %lo(__region_CS2),gr4
+ sethi.p %hi(__region_CS2_M),gr5
+ setlo %lo(__region_CS2_M),gr5
+ sethi.p %hi(__region_CS2_C),gr6
+ setlo %lo(__region_CS2_C),gr6
+ sti gr4,@(gr10,#2*0x08)
+ sti gr5,@(gr10,#2*0x08+0x100)
+ sti gr6,@(gr11,#2*0x08)
+ sethi.p %hi(__region_CS3),gr4
+ setlo %lo(__region_CS3),gr4
+ sethi.p %hi(__region_CS3_M),gr5
+ setlo %lo(__region_CS3_M),gr5
+ sethi.p %hi(__region_CS3_C),gr6
+ setlo %lo(__region_CS3_C),gr6
+ sti gr4,@(gr10,#3*0x08)
+ sti gr5,@(gr10,#3*0x08+0x100)
+ sti gr6,@(gr11,#3*0x08)
+ sethi.p %hi(__region_CS4),gr4
+ setlo %lo(__region_CS4),gr4
+ sethi.p %hi(__region_CS4_M),gr5
+ setlo %lo(__region_CS4_M),gr5
+ sethi.p %hi(__region_CS4_C),gr6
+ setlo %lo(__region_CS4_C),gr6
+ sti gr4,@(gr10,#4*0x08)
+ sti gr5,@(gr10,#4*0x08+0x100)
+ sti gr6,@(gr11,#4*0x08)
+ sethi.p %hi(__region_CS5),gr4
+ setlo %lo(__region_CS5),gr4
+ sethi.p %hi(__region_CS5_M),gr5
+ setlo %lo(__region_CS5_M),gr5
+ sethi.p %hi(__region_CS5_C),gr6
+ setlo %lo(__region_CS5_C),gr6
+ sti gr4,@(gr10,#5*0x08)
+ sti gr5,@(gr10,#5*0x08+0x100)
+ sti gr6,@(gr11,#5*0x08)
+ sethi.p %hi(__region_CS6),gr4
+ setlo %lo(__region_CS6),gr4
+ sethi.p %hi(__region_CS6_M),gr5
+ setlo %lo(__region_CS6_M),gr5
+ sethi.p %hi(__region_CS6_C),gr6
+ setlo %lo(__region_CS6_C),gr6
+ sti gr4,@(gr10,#6*0x08)
+ sti gr5,@(gr10,#6*0x08+0x100)
+ sti gr6,@(gr11,#6*0x08)
+ sethi.p %hi(__region_CS7),gr4
+ setlo %lo(__region_CS7),gr4
+ sethi.p %hi(__region_CS7_M),gr5
+ setlo %lo(__region_CS7_M),gr5
+ sethi.p %hi(__region_CS7_C),gr6
+ setlo %lo(__region_CS7_C),gr6
+ sti gr4,@(gr10,#7*0x08)
+ sti gr5,@(gr10,#7*0x08+0x100)
+ sti gr6,@(gr11,#7*0x08)
+ membar
+ bar
+
+ # adjust LED bank address
+ sethi.p %hi(LED_ADDR - 0x20000000 +__region_CS2),gr30
+ setlo %lo(LED_ADDR - 0x20000000 +__region_CS2),gr30
+ bralr
+
+###############################################################################
+#
+# determine the total SDRAM size
+#
+# ENTRY: EXIT:
+# GR25 - SDRAM size
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+###############################################################################
+ .globl __head_fr401_survey_sdram
+__head_fr401_survey_sdram:
+ sethi.p %hi(__400_DAM0),gr11
+ setlo %lo(__400_DAM0),gr11
+ sethi.p %hi(__400_DBR0),gr12
+ setlo %lo(__400_DBR0),gr12
+
+ sethi.p %hi(0xfe000000),gr17 ; unused SDRAM DBR value
+ setlo %lo(0xfe000000),gr17
+ setlos #0,gr25
+
+ ldi @(gr12,#0x00),gr4 ; DAR0
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS0
+ ldi @(gr11,#0x00),gr6 ; DAM0: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS0:
+
+ ldi @(gr12,#0x08),gr4 ; DAR1
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS1
+ ldi @(gr11,#0x08),gr6 ; DAM1: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS1:
+
+ # FR401/FR401A does not have DCS2/3
+ movsg psr,gr3
+ srli gr3,#25,gr3
+ subicc gr3,#0x20>>1,gr0,icc0
+ beq icc0,#0,__head_no_DCS3
+
+ ldi @(gr12,#0x10),gr4 ; DAR2
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS2
+ ldi @(gr11,#0x10),gr6 ; DAM2: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS2:
+
+ ldi @(gr12,#0x18),gr4 ; DAR3
+ subcc gr4,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS3
+ ldi @(gr11,#0x18),gr6 ; DAM3: bits 31:20 match addr 31:20
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS3:
+ bralr
+
+###############################################################################
+#
+# set the protection map with the I/DAMPR registers
+#
+# ENTRY: EXIT:
+# GR25 SDRAM size [saved]
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+###############################################################################
+ .globl __head_fr401_set_protection
+__head_fr401_set_protection:
+ movsg lr,gr27
+
+ # set the I/O region protection registers for FR401/3/5
+ sethi.p %hi(__region_IO),gr5
+ setlo %lo(__region_IO),gr5
+ ori gr5,#xAMPRx_SS_512Mb|xAMPRx_S_KERNEL|xAMPRx_C|xAMPRx_V,gr5
+ movgs gr0,iampr7
+ movgs gr5,dampr7 ; General I/O tile
+
+ # need to tile the remaining IAMPR/DAMPR registers to cover as much of the RAM as possible
+ # - start with the highest numbered registers
+ sethi.p %hi(__kernel_image_end),gr8
+ setlo %lo(__kernel_image_end),gr8
+ sethi.p %hi(32768),gr4 ; allow for a maximal allocator bitmap
+ setlo %lo(32768),gr4
+ add gr8,gr4,gr8
+ sethi.p %hi(1024*2048-1),gr4 ; round up to nearest 2MiB
+ setlo %lo(1024*2048-1),gr4
+ add.p gr8,gr4,gr8
+ not gr4,gr4
+ and gr8,gr4,gr8
+
+ sethi.p %hi(__page_offset),gr9
+ setlo %lo(__page_offset),gr9
+ add gr9,gr25,gr9
+
+ # GR8 = base of uncovered RAM
+ # GR9 = top of uncovered RAM
+
+#ifdef CONFIG_MB93093_PDK
+ sethi.p %hi(__region_CS2),gr4
+ setlo %lo(__region_CS2),gr4
+ ori gr4,#xAMPRx_SS_1Mb|xAMPRx_S_KERNEL|xAMPRx_C|xAMPRx_V,gr4
+ movgs gr4,dampr6
+ movgs gr0,iampr6
+#else
+ call __head_split_region
+ movgs gr4,iampr6
+ movgs gr5,dampr6
+#endif
+ call __head_split_region
+ movgs gr4,iampr5
+ movgs gr5,dampr5
+ call __head_split_region
+ movgs gr4,iampr4
+ movgs gr5,dampr4
+ call __head_split_region
+ movgs gr4,iampr3
+ movgs gr5,dampr3
+ call __head_split_region
+ movgs gr4,iampr2
+ movgs gr5,dampr2
+ call __head_split_region
+ movgs gr4,iampr1
+ movgs gr5,dampr1
+
+ # cover kernel core image with kernel-only segment
+ sethi.p %hi(__page_offset),gr8
+ setlo %lo(__page_offset),gr8
+ call __head_split_region
+
+#ifdef CONFIG_PROTECT_KERNEL
+ ori.p gr4,#xAMPRx_S_KERNEL,gr4
+ ori gr5,#xAMPRx_S_KERNEL,gr5
+#endif
+
+ movgs gr4,iampr0
+ movgs gr5,dampr0
+ jmpl @(gr27,gr0)
--- /dev/null
+/* head-uc-fr451.S: FR451 uc-linux specific bits of initialisation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <asm/spr-regs.h>
+#include <asm/mb86943a.h>
+#include "head.inc"
+
+
+#define __400_DBR0 0xfe000e00
+#define __400_DBR1 0xfe000e08
+#define __400_DBR2 0xfe000e10
+#define __400_DBR3 0xfe000e18
+#define __400_DAM0 0xfe000f00
+#define __400_DAM1 0xfe000f08
+#define __400_DAM2 0xfe000f10
+#define __400_DAM3 0xfe000f18
+#define __400_LGCR 0xfe000010
+#define __400_LCR 0xfe000100
+#define __400_LSBR 0xfe000c00
+
+ .section .text.init,"ax"
+ .balign 4
+
+###############################################################################
+#
+# set the protection map with the I/DAMPR registers
+#
+# ENTRY: EXIT:
+# GR25 SDRAM size [saved]
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+###############################################################################
+ .globl __head_fr451_set_protection
+__head_fr451_set_protection:
+ movsg lr,gr27
+
+ movgs gr0,dampr10
+ movgs gr0,damlr10
+ movgs gr0,dampr9
+ movgs gr0,damlr9
+ movgs gr0,dampr8
+ movgs gr0,damlr8
+
+	# set the I/O region protection registers for the FR451
+ sethi.p %hi(__region_IO),gr5
+ setlo %lo(__region_IO),gr5
+ sethi.p %hi(0x1fffffff),gr7
+ setlo %lo(0x1fffffff),gr7
+ ori gr5,#xAMPRx_SS_512Mb|xAMPRx_S_KERNEL|xAMPRx_C|xAMPRx_V,gr5
+ movgs gr5,dampr11 ; General I/O tile
+ movgs gr7,damlr11
+
+ # need to tile the remaining IAMPR/DAMPR registers to cover as much of the RAM as possible
+ # - start with the highest numbered registers
+ sethi.p %hi(__kernel_image_end),gr8
+ setlo %lo(__kernel_image_end),gr8
+ sethi.p %hi(32768),gr4 ; allow for a maximal allocator bitmap
+ setlo %lo(32768),gr4
+ add gr8,gr4,gr8
+ sethi.p %hi(1024*2048-1),gr4 ; round up to nearest 2MiB
+ setlo %lo(1024*2048-1),gr4
+ add.p gr8,gr4,gr8
+ not gr4,gr4
+ and gr8,gr4,gr8
+
+ sethi.p %hi(__page_offset),gr9
+ setlo %lo(__page_offset),gr9
+ add gr9,gr25,gr9
+
+ sethi.p %hi(0xffffc000),gr11
+ setlo %lo(0xffffc000),gr11
+
+ # GR8 = base of uncovered RAM
+ # GR9 = top of uncovered RAM
+ # GR11 = xAMLR mask
+ LEDS 0x3317
+ call __head_split_region
+ movgs gr4,iampr7
+ movgs gr6,iamlr7
+ movgs gr5,dampr7
+ movgs gr7,damlr7
+
+ LEDS 0x3316
+ call __head_split_region
+ movgs gr4,iampr6
+ movgs gr6,iamlr6
+ movgs gr5,dampr6
+ movgs gr7,damlr6
+
+ LEDS 0x3315
+ call __head_split_region
+ movgs gr4,iampr5
+ movgs gr6,iamlr5
+ movgs gr5,dampr5
+ movgs gr7,damlr5
+
+ LEDS 0x3314
+ call __head_split_region
+ movgs gr4,iampr4
+ movgs gr6,iamlr4
+ movgs gr5,dampr4
+ movgs gr7,damlr4
+
+ LEDS 0x3313
+ call __head_split_region
+ movgs gr4,iampr3
+ movgs gr6,iamlr3
+ movgs gr5,dampr3
+ movgs gr7,damlr3
+
+ LEDS 0x3312
+ call __head_split_region
+ movgs gr4,iampr2
+ movgs gr6,iamlr2
+ movgs gr5,dampr2
+ movgs gr7,damlr2
+
+ LEDS 0x3311
+ call __head_split_region
+ movgs gr4,iampr1
+ movgs gr6,iamlr1
+ movgs gr5,dampr1
+ movgs gr7,damlr1
+
+ # cover kernel core image with kernel-only segment
+ LEDS 0x3310
+ sethi.p %hi(__page_offset),gr8
+ setlo %lo(__page_offset),gr8
+ call __head_split_region
+
+#ifdef CONFIG_PROTECT_KERNEL
+ ori.p gr4,#xAMPRx_S_KERNEL,gr4
+ ori gr5,#xAMPRx_S_KERNEL,gr5
+#endif
+
+ movgs gr4,iampr0
+ movgs gr6,iamlr0
+ movgs gr5,dampr0
+ movgs gr7,damlr0
+
+ # start in TLB context 0 with no page tables
+ movgs gr0,cxnr
+ movgs gr0,ttbr
+
+ # the FR451 also has an extra trap base register
+ movsg tbr,gr4
+ movgs gr4,btbr
+
+ # turn on the timers as appropriate
+ movgs gr0,timerh
+ movgs gr0,timerl
+ movgs gr0,timerd
+ movsg hsr0,gr4
+ sethi.p %hi(HSR0_ETMI),gr5
+ setlo %lo(HSR0_ETMI),gr5
+ or gr4,gr5,gr4
+ movgs gr4,hsr0
+
+ LEDS 0x3300
+ jmpl @(gr27,gr0)
--- /dev/null
+/* head-uc-fr555.S: FR555 uc-linux specific bits of initialisation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <asm/spr-regs.h>
+#include <asm/mb86943a.h>
+#include "head.inc"
+
+
+#define __551_DARS0 0xfeff0100
+#define __551_DARS1 0xfeff0104
+#define __551_DARS2 0xfeff0108
+#define __551_DARS3 0xfeff010c
+#define __551_DAMK0 0xfeff0110
+#define __551_DAMK1 0xfeff0114
+#define __551_DAMK2 0xfeff0118
+#define __551_DAMK3 0xfeff011c
+#define __551_LCR 0xfeff1100
+#define __551_LSBR 0xfeff1c00
+
+ .section .text.init,"ax"
+ .balign 4
+
+###############################################################################
+#
+# describe the position and layout of the SDRAM controller registers
+#
+# ENTRY: EXIT:
+# GR5 - cacheline size
+# GR11 - displacement of 2nd SDRAM addr reg from GR14
+# GR12 - displacement of 3rd SDRAM addr reg from GR14
+# GR13 - displacement of 4th SDRAM addr reg from GR14
+# GR14 - address of 1st SDRAM addr reg
+# GR15 - amount to shift address by to match SDRAM addr reg
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+# CC0 - T if DARS0 is present
+# CC1 - T if DARS1 is present
+# CC2 - T if DARS2 is present
+# CC3 - T if DARS3 is present
+#
+###############################################################################
+ .globl __head_fr555_describe_sdram
+__head_fr555_describe_sdram:
+ sethi.p %hi(__551_DARS0),gr14
+ setlo %lo(__551_DARS0),gr14
+ setlos.p #__551_DARS1-__551_DARS0,gr11
+ setlos #__551_DARS2-__551_DARS0,gr12
+ setlos.p #__551_DARS3-__551_DARS0,gr13
+ setlos #64,gr5 ; cacheline size
+ setlos #20,gr15 ; amount to shift addr by
+ setlos #0x00ff,gr4
+ movgs gr4,cccr ; extant DARS/DAMK regs
+ bralr
+
+###############################################################################
+#
+# rearrange the bus controller registers
+#
+# ENTRY: EXIT:
+# GR26 &__head_reference [saved]
+# GR30 LED address revised LED address
+#
+###############################################################################
+ .globl __head_fr555_set_busctl
+__head_fr555_set_busctl:
+ LEDS 0x100f
+ sethi.p %hi(__551_LSBR),gr10
+ setlo %lo(__551_LSBR),gr10
+ sethi.p %hi(__551_LCR),gr11
+ setlo %lo(__551_LCR),gr11
+
+ # set the bus controller
+ sethi.p %hi(__region_CS1),gr4
+ setlo %lo(__region_CS1),gr4
+ sethi.p %hi(__region_CS1_M),gr5
+ setlo %lo(__region_CS1_M),gr5
+ sethi.p %hi(__region_CS1_C),gr6
+ setlo %lo(__region_CS1_C),gr6
+ sti gr4,@(gr10,#1*0x08)
+ sti gr5,@(gr10,#1*0x08+0x100)
+ sti gr6,@(gr11,#1*0x08)
+ sethi.p %hi(__region_CS2),gr4
+ setlo %lo(__region_CS2),gr4
+ sethi.p %hi(__region_CS2_M),gr5
+ setlo %lo(__region_CS2_M),gr5
+ sethi.p %hi(__region_CS2_C),gr6
+ setlo %lo(__region_CS2_C),gr6
+ sti gr4,@(gr10,#2*0x08)
+ sti gr5,@(gr10,#2*0x08+0x100)
+ sti gr6,@(gr11,#2*0x08)
+ sethi.p %hi(__region_CS3),gr4
+ setlo %lo(__region_CS3),gr4
+ sethi.p %hi(__region_CS3_M),gr5
+ setlo %lo(__region_CS3_M),gr5
+ sethi.p %hi(__region_CS3_C),gr6
+ setlo %lo(__region_CS3_C),gr6
+ sti gr4,@(gr10,#3*0x08)
+ sti gr5,@(gr10,#3*0x08+0x100)
+ sti gr6,@(gr11,#3*0x08)
+ sethi.p %hi(__region_CS4),gr4
+ setlo %lo(__region_CS4),gr4
+ sethi.p %hi(__region_CS4_M),gr5
+ setlo %lo(__region_CS4_M),gr5
+ sethi.p %hi(__region_CS4_C),gr6
+ setlo %lo(__region_CS4_C),gr6
+ sti gr4,@(gr10,#4*0x08)
+ sti gr5,@(gr10,#4*0x08+0x100)
+ sti gr6,@(gr11,#4*0x08)
+ sethi.p %hi(__region_CS5),gr4
+ setlo %lo(__region_CS5),gr4
+ sethi.p %hi(__region_CS5_M),gr5
+ setlo %lo(__region_CS5_M),gr5
+ sethi.p %hi(__region_CS5_C),gr6
+ setlo %lo(__region_CS5_C),gr6
+ sti gr4,@(gr10,#5*0x08)
+ sti gr5,@(gr10,#5*0x08+0x100)
+ sti gr6,@(gr11,#5*0x08)
+ sethi.p %hi(__region_CS6),gr4
+ setlo %lo(__region_CS6),gr4
+ sethi.p %hi(__region_CS6_M),gr5
+ setlo %lo(__region_CS6_M),gr5
+ sethi.p %hi(__region_CS6_C),gr6
+ setlo %lo(__region_CS6_C),gr6
+ sti gr4,@(gr10,#6*0x08)
+ sti gr5,@(gr10,#6*0x08+0x100)
+ sti gr6,@(gr11,#6*0x08)
+ sethi.p %hi(__region_CS7),gr4
+ setlo %lo(__region_CS7),gr4
+ sethi.p %hi(__region_CS7_M),gr5
+ setlo %lo(__region_CS7_M),gr5
+ sethi.p %hi(__region_CS7_C),gr6
+ setlo %lo(__region_CS7_C),gr6
+ sti gr4,@(gr10,#7*0x08)
+ sti gr5,@(gr10,#7*0x08+0x100)
+ sti gr6,@(gr11,#7*0x08)
+ membar
+ bar
+
+ # adjust LED bank address
+#ifdef CONFIG_MB93091_VDK
+ sethi.p %hi(LED_ADDR - 0x20000000 +__region_CS2),gr30
+ setlo %lo(LED_ADDR - 0x20000000 +__region_CS2),gr30
+#endif
+ bralr
+
+###############################################################################
+#
+# determine the total SDRAM size
+#
+# ENTRY: EXIT:
+# GR25 - SDRAM size
+# GR26 &__head_reference [saved]
+# GR30 LED address [saved]
+#
+###############################################################################
+ .globl __head_fr555_survey_sdram
+__head_fr555_survey_sdram:
+ sethi.p %hi(__551_DAMK0),gr11
+ setlo %lo(__551_DAMK0),gr11
+ sethi.p %hi(__551_DARS0),gr12
+ setlo %lo(__551_DARS0),gr12
+
+ sethi.p %hi(0xfff),gr17 ; unused SDRAM AMK value
+ setlo %lo(0xfff),gr17
+ setlos #0,gr25
+
+ ldi @(gr11,#0x00),gr6 ; DAMK0: bits 11:0 match addr 11:0
+ subcc gr6,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS0
+ ldi @(gr12,#0x00),gr4 ; DARS0
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS0:
+
+ ldi @(gr11,#0x04),gr6 ; DAMK1: bits 11:0 match addr 11:0
+ subcc gr6,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS1
+ ldi @(gr12,#0x04),gr4 ; DARS1
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS1:
+
+ ldi @(gr11,#0x8),gr6 ; DAMK2: bits 11:0 match addr 11:0
+ subcc gr6,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS2
+ ldi @(gr12,#0x8),gr4 ; DARS2
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS2:
+
+ ldi @(gr11,#0xc),gr6 ; DAMK3: bits 11:0 match addr 11:0
+ subcc gr6,gr17,gr0,icc0
+ beq icc0,#0,__head_no_DCS3
+ ldi @(gr12,#0xc),gr4 ; DARS3
+ add gr25,gr6,gr25
+ addi gr25,#1,gr25
+__head_no_DCS3:
+
+ slli gr25,#20,gr25 ; shift [11:0] -> [31:20]
+ bralr
+
+###############################################################################
+#
+# set the protection map with the I/DAMPR registers
+#
+# ENTRY: EXIT:
+# GR25	SDRAM size			[saved]
+# GR30	LED address			[saved]
+#
+###############################################################################
+ .globl __head_fr555_set_protection
+__head_fr555_set_protection:
+ movsg lr,gr27
+
+ sethi.p %hi(0xfff00000),gr11
+ setlo %lo(0xfff00000),gr11
+
+ # set the I/O region protection registers for FR555
+ sethi.p %hi(__region_IO),gr7
+ setlo %lo(__region_IO),gr7
+ ori gr7,#xAMPRx_SS_512Mb|xAMPRx_S_KERNEL|xAMPRx_C|xAMPRx_V,gr5
+ movgs gr0,iampr15
+ movgs gr0,iamlr15
+ movgs gr5,dampr15
+ movgs gr7,damlr15
+
+ # need to tile the remaining IAMPR/DAMPR registers to cover as much of the RAM as possible
+ # - start with the highest numbered registers
+ sethi.p %hi(__kernel_image_end),gr8
+ setlo %lo(__kernel_image_end),gr8
+ sethi.p %hi(32768),gr4 ; allow for a maximal allocator bitmap
+ setlo %lo(32768),gr4
+ add gr8,gr4,gr8
+ sethi.p %hi(1024*2048-1),gr4 ; round up to nearest 2MiB
+ setlo %lo(1024*2048-1),gr4
+ add.p gr8,gr4,gr8
+ not gr4,gr4
+ and gr8,gr4,gr8
+
+ sethi.p %hi(__page_offset),gr9
+ setlo %lo(__page_offset),gr9
+ add gr9,gr25,gr9
+
+ # GR8 = base of uncovered RAM
+ # GR9 = top of uncovered RAM
+ # GR11 - mask for DAMLR/IAMLR regs
+ #
+ call __head_split_region
+ movgs gr4,iampr14
+ movgs gr6,iamlr14
+ movgs gr5,dampr14
+ movgs gr7,damlr14
+ call __head_split_region
+ movgs gr4,iampr13
+ movgs gr6,iamlr13
+ movgs gr5,dampr13
+ movgs gr7,damlr13
+ call __head_split_region
+ movgs gr4,iampr12
+ movgs gr6,iamlr12
+ movgs gr5,dampr12
+ movgs gr7,damlr12
+ call __head_split_region
+ movgs gr4,iampr11
+ movgs gr6,iamlr11
+ movgs gr5,dampr11
+ movgs gr7,damlr11
+ call __head_split_region
+ movgs gr4,iampr10
+ movgs gr6,iamlr10
+ movgs gr5,dampr10
+ movgs gr7,damlr10
+ call __head_split_region
+ movgs gr4,iampr9
+ movgs gr6,iamlr9
+ movgs gr5,dampr9
+ movgs gr7,damlr9
+ call __head_split_region
+ movgs gr4,iampr8
+ movgs gr6,iamlr8
+ movgs gr5,dampr8
+ movgs gr7,damlr8
+
+ call __head_split_region
+ movgs gr4,iampr7
+ movgs gr6,iamlr7
+ movgs gr5,dampr7
+ movgs gr7,damlr7
+ call __head_split_region
+ movgs gr4,iampr6
+ movgs gr6,iamlr6
+ movgs gr5,dampr6
+ movgs gr7,damlr6
+ call __head_split_region
+ movgs gr4,iampr5
+ movgs gr6,iamlr5
+ movgs gr5,dampr5
+ movgs gr7,damlr5
+ call __head_split_region
+ movgs gr4,iampr4
+ movgs gr6,iamlr4
+ movgs gr5,dampr4
+ movgs gr7,damlr4
+ call __head_split_region
+ movgs gr4,iampr3
+ movgs gr6,iamlr3
+ movgs gr5,dampr3
+ movgs gr7,damlr3
+ call __head_split_region
+ movgs gr4,iampr2
+ movgs gr6,iamlr2
+ movgs gr5,dampr2
+ movgs gr7,damlr2
+ call __head_split_region
+ movgs gr4,iampr1
+ movgs gr6,iamlr1
+ movgs gr5,dampr1
+ movgs gr7,damlr1
+
+ # cover kernel core image with kernel-only segment
+ sethi.p %hi(__page_offset),gr8
+ setlo %lo(__page_offset),gr8
+ call __head_split_region
+
+#ifdef CONFIG_PROTECT_KERNEL
+ ori.p gr4,#xAMPRx_S_KERNEL,gr4
+ ori gr5,#xAMPRx_S_KERNEL,gr5
+#endif
+
+ movgs gr4,iampr0
+ movgs gr6,iamlr0
+ movgs gr5,dampr0
+ movgs gr7,damlr0
+ jmpl @(gr27,gr0)
--- /dev/null
+/* head.S: kernel entry point for FR-V kernel
+ *
+ * Copyright (C) 2003, 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <asm/spr-regs.h>
+#include <asm/mb86943a.h>
+#include <asm/cache.h>
+#include "head.inc"
+
+###############################################################################
+#
+# void _boot(unsigned long magic, char *command_line) __attribute__((noreturn))
+#
+# - if magic is 0xdead1eaf, then command_line is assumed to point to the kernel
+# command line string
+#
+###############################################################################
+ .section .text.head,"ax"
+ .balign 4
+
+ .globl _boot, __head_reference
+ .type _boot,@function
+_boot:
+__head_reference:
+ sethi.p %hi(LED_ADDR),gr30
+ setlo %lo(LED_ADDR),gr30
+
+ LEDS 0x0000
+
+ # calculate reference address for PC-relative stuff
+ call 0f
+0: movsg lr,gr26
+ addi gr26,#__head_reference-0b,gr26
+
+ # invalidate and disable both of the caches and turn off the memory access checking
+ dcef @(gr0,gr0),1
+ bar
+
+ sethi.p %hi(~(HSR0_ICE|HSR0_DCE|HSR0_CBM|HSR0_EIMMU|HSR0_EDMMU)),gr4
+ setlo %lo(~(HSR0_ICE|HSR0_DCE|HSR0_CBM|HSR0_EIMMU|HSR0_EDMMU)),gr4
+ movsg hsr0,gr5
+ and gr4,gr5,gr5
+ movgs gr5,hsr0
+ movsg hsr0,gr5
+
+ LEDS 0x0001
+
+ icei @(gr0,gr0),1
+ dcei @(gr0,gr0),1
+ bar
+
+ # turn the instruction cache back on
+ sethi.p %hi(HSR0_ICE),gr4
+ setlo %lo(HSR0_ICE),gr4
+ movsg hsr0,gr5
+ or gr4,gr5,gr5
+ movgs gr5,hsr0
+ movsg hsr0,gr5
+
+ bar
+
+ LEDS 0x0002
+
+ # retrieve the parameters (including command line) before we overwrite them
+ sethi.p %hi(0xdead1eaf),gr7
+ setlo %lo(0xdead1eaf),gr7
+ subcc gr7,gr8,gr0,icc0
+ bne icc0,#0,__head_no_parameters
+
+ sethi.p %hi(redboot_command_line-1),gr6
+ setlo %lo(redboot_command_line-1),gr6
+ sethi.p %hi(__head_reference),gr4
+ setlo %lo(__head_reference),gr4
+ sub gr6,gr4,gr6
+ add.p gr6,gr26,gr6
+ subi gr9,#1,gr9
+ setlos.p #511,gr4
+ setlos #1,gr5
+
+__head_copy_cmdline:
+ ldubu.p @(gr9,gr5),gr16
+ subicc gr4,#1,gr4,icc0
+ stbu.p gr16,@(gr6,gr5)
+ subicc gr16,#0,gr0,icc1
+ bls icc0,#0,__head_end_cmdline
+ bne icc1,#1,__head_copy_cmdline
+__head_end_cmdline:
+ stbu gr0,@(gr6,gr5)
+__head_no_parameters:
+
+###############################################################################
+#
+# we need to relocate the SDRAM to 0x00000000 (linux) or 0xC0000000 (uClinux)
+# - note that we're going to have to run entirely out of the icache whilst
+# fiddling with the SDRAM controller registers
+#
+###############################################################################
+#ifdef CONFIG_MMU
+ call __head_fr451_describe_sdram
+
+#else
+ movsg psr,gr5
+ srli gr5,#28,gr5
+ subicc gr5,#3,gr0,icc0
+ beq icc0,#0,__head_fr551_sdram
+
+ call __head_fr401_describe_sdram
+ bra __head_do_sdram
+
+__head_fr551_sdram:
+ call __head_fr555_describe_sdram
+ LEDS 0x000d
+
+__head_do_sdram:
+#endif
+
+ # preload the registers with invalid values in case any DBR/DARS are marked not present
+ sethi.p %hi(0xfe000000),gr17 ; unused SDRAM DBR value
+ setlo %lo(0xfe000000),gr17
+ or.p gr17,gr0,gr20
+ or gr17,gr0,gr21
+ or.p gr17,gr0,gr22
+ or gr17,gr0,gr23
+
+ # consult the SDRAM controller CS address registers
+ cld @(gr14,gr0 ),gr20, cc0,#1 ; DBR0 / DARS0
+ cld @(gr14,gr11),gr21, cc1,#1 ; DBR1 / DARS1
+ cld @(gr14,gr12),gr22, cc2,#1 ; DBR2 / DARS2
+ cld.p @(gr14,gr13),gr23, cc3,#1 ; DBR3 / DARS3
+
+ sll gr20,gr15,gr20 ; shift values up for FR551
+ sll gr21,gr15,gr21
+ sll gr22,gr15,gr22
+ sll gr23,gr15,gr23
+
+ LEDS 0x0003
+
+ # assume the lowest valid CS line to be the SDRAM base and get its address
+ subcc gr20,gr17,gr0,icc0
+ subcc.p gr21,gr17,gr0,icc1
+ subcc gr22,gr17,gr0,icc2
+ subcc.p gr23,gr17,gr0,icc3
+ ckne icc0,cc4 ; T if DBR0 != 0xfe000000
+ ckne icc1,cc5
+ ckne icc2,cc6
+ ckne icc3,cc7
+ cor gr23,gr0,gr24, cc7,#1 ; GR24 = SDRAM base
+ cor gr22,gr0,gr24, cc6,#1
+ cor gr21,gr0,gr24, cc5,#1
+ cor gr20,gr0,gr24, cc4,#1
+
+ # calculate the displacement required to get the SDRAM into the right place in memory
+ sethi.p %hi(__sdram_base),gr16
+ setlo %lo(__sdram_base),gr16
+ sub gr16,gr24,gr16 ; delta = __sdram_base - DBRx
+
+ # calculate the new values to go in the controller regs
+ cadd.p gr20,gr16,gr20, cc4,#1 ; DCS#0 (new) = DCS#0 (old) + delta
+ cadd gr21,gr16,gr21, cc5,#1
+ cadd.p gr22,gr16,gr22, cc6,#1
+ cadd gr23,gr16,gr23, cc7,#1
+
+ srl gr20,gr15,gr20 ; shift values down for FR551
+ srl gr21,gr15,gr21
+ srl gr22,gr15,gr22
+ srl gr23,gr15,gr23
+
+ # work out the address at which the reg updater resides and lock it into icache
+ # also work out the address the updater will jump to when finished
+ sethi.p %hi(__head_move_sdram-__head_reference),gr18
+ setlo %lo(__head_move_sdram-__head_reference),gr18
+ sethi.p %hi(__head_sdram_moved-__head_reference),gr19
+ setlo %lo(__head_sdram_moved-__head_reference),gr19
+ add.p gr18,gr26,gr18
+ add gr19,gr26,gr19
+ add.p gr19,gr16,gr19 ; moved = addr + (__sdram_base - DBRx)
+ add gr18,gr5,gr4 ; two cachelines probably required
+
+ icpl gr18,gr0,#1 ; load and lock the cachelines
+ icpl gr4,gr0,#1
+ LEDS 0x0004
+ membar
+ bar
+ jmpl @(gr18,gr0)
+
+ .balign L1_CACHE_BYTES
+__head_move_sdram:
+ cst gr20,@(gr14,gr0 ), cc4,#1
+ cst gr21,@(gr14,gr11), cc5,#1
+ cst gr22,@(gr14,gr12), cc6,#1
+ cst gr23,@(gr14,gr13), cc7,#1
+ cld @(gr14,gr0 ),gr20, cc4,#1
+ cld @(gr14,gr11),gr21, cc5,#1
+	cld	@(gr14,gr12),gr22, cc6,#1
+ cld @(gr14,gr13),gr23, cc7,#1
+ bar
+ membar
+ jmpl @(gr19,gr0)
+
+ .balign L1_CACHE_BYTES
+__head_sdram_moved:
+ icul gr18
+ add gr18,gr5,gr4
+ icul gr4
+ icei @(gr0,gr0),1
+ dcei @(gr0,gr0),1
+
+ LEDS 0x0005
+
+ # recalculate reference address
+ call 0f
+0: movsg lr,gr26
+ addi gr26,#__head_reference-0b,gr26
+
+
+###############################################################################
+#
+# move the kernel image down to the bottom of the SDRAM
+#
+###############################################################################
+ sethi.p %hi(__kernel_image_size_no_bss+15),gr4
+ setlo %lo(__kernel_image_size_no_bss+15),gr4
+ srli.p gr4,#4,gr4 ; count
+ or gr26,gr26,gr16 ; source
+
+ sethi.p %hi(__sdram_base),gr17 ; destination
+ setlo %lo(__sdram_base),gr17
+
+ setlos #8,gr5
+ sub.p gr16,gr5,gr16 ; adjust src for LDDU
+ sub gr17,gr5,gr17 ; adjust dst for LDDU
+
+ sethi.p %hi(__head_move_kernel-__head_reference),gr18
+ setlo %lo(__head_move_kernel-__head_reference),gr18
+ sethi.p %hi(__head_kernel_moved-__head_reference+__sdram_base),gr19
+ setlo %lo(__head_kernel_moved-__head_reference+__sdram_base),gr19
+ add gr18,gr26,gr18
+ icpl gr18,gr0,#1
+ jmpl @(gr18,gr0)
+
+ .balign 32
+__head_move_kernel:
+ lddu @(gr16,gr5),gr10
+ lddu @(gr16,gr5),gr12
+ stdu.p gr10,@(gr17,gr5)
+ subicc gr4,#1,gr4,icc0
+ stdu.p gr12,@(gr17,gr5)
+ bhi icc0,#0,__head_move_kernel
+ jmpl @(gr19,gr0)
+
+ .balign 32
+__head_kernel_moved:
+ icul gr18
+ icei @(gr0,gr0),1
+ dcei @(gr0,gr0),1
+
+ LEDS 0x0006
+
+ # recalculate reference address
+ call 0f
+0: movsg lr,gr26
+ addi gr26,#__head_reference-0b,gr26
+
+
+###############################################################################
+#
+# rearrange the iomem map and set the protection registers
+#
+###############################################################################
+
+#ifdef CONFIG_MMU
+ LEDS 0x3301
+ call __head_fr451_set_busctl
+ LEDS 0x3303
+ call __head_fr451_survey_sdram
+ LEDS 0x3305
+ call __head_fr451_set_protection
+
+#else
+ movsg psr,gr5
+ srli gr5,#PSR_IMPLE_SHIFT,gr5
+ subicc gr5,#PSR_IMPLE_FR551,gr0,icc0
+ beq icc0,#0,__head_fr555_memmap
+ subicc gr5,#PSR_IMPLE_FR451,gr0,icc0
+ beq icc0,#0,__head_fr451_memmap
+
+ LEDS 0x3101
+ call __head_fr401_set_busctl
+ LEDS 0x3103
+ call __head_fr401_survey_sdram
+ LEDS 0x3105
+ call __head_fr401_set_protection
+ bra __head_done_memmap
+
+__head_fr451_memmap:
+ LEDS 0x3301
+ call __head_fr401_set_busctl
+ LEDS 0x3303
+ call __head_fr401_survey_sdram
+ LEDS 0x3305
+ call __head_fr451_set_protection
+ bra __head_done_memmap
+
+__head_fr555_memmap:
+ LEDS 0x3501
+ call __head_fr555_set_busctl
+ LEDS 0x3503
+ call __head_fr555_survey_sdram
+ LEDS 0x3505
+ call __head_fr555_set_protection
+
+__head_done_memmap:
+#endif
+ LEDS 0x0007
+
+###############################################################################
+#
+# turn the data cache and MMU on
+# - for the FR451 this'll mean that the window through which the kernel is
+# viewed will change
+#
+###############################################################################
+
+#ifdef CONFIG_MMU
+#define MMUMODE HSR0_EIMMU|HSR0_EDMMU|HSR0_EXMMU|HSR0_EDAT|HSR0_XEDAT
+#else
+#define MMUMODE HSR0_EIMMU|HSR0_EDMMU
+#endif
+
+ movsg hsr0,gr5
+
+ sethi.p %hi(MMUMODE),gr4
+ setlo %lo(MMUMODE),gr4
+ or gr4,gr5,gr5
+
+#if defined(CONFIG_FRV_DEFL_CACHE_WTHRU)
+ sethi.p %hi(HSR0_DCE|HSR0_CBM_WRITE_THRU),gr4
+ setlo %lo(HSR0_DCE|HSR0_CBM_WRITE_THRU),gr4
+#elif defined(CONFIG_FRV_DEFL_CACHE_WBACK)
+ sethi.p %hi(HSR0_DCE|HSR0_CBM_COPY_BACK),gr4
+ setlo %lo(HSR0_DCE|HSR0_CBM_COPY_BACK),gr4
+#elif defined(CONFIG_FRV_DEFL_CACHE_WBEHIND)
+ sethi.p %hi(HSR0_DCE|HSR0_CBM_COPY_BACK),gr4
+ setlo %lo(HSR0_DCE|HSR0_CBM_COPY_BACK),gr4
+
+ movsg psr,gr6
+ srli gr6,#24,gr6
+ cmpi gr6,#0x50,icc0 // FR451
+ beq icc0,#0,0f
+ cmpi gr6,#0x40,icc0 // FR405
+ bne icc0,#0,1f
+0:
+ # turn off write-allocate
+ sethi.p %hi(HSR0_NWA),gr6
+ setlo %lo(HSR0_NWA),gr6
+ or gr4,gr6,gr4
+1:
+
+#else
+#error No default cache configuration set
+#endif
+
+ or gr4,gr5,gr5
+ movgs gr5,hsr0
+ bar
+
+ LEDS 0x0008
+
+ sethi.p %hi(__head_mmu_enabled),gr19
+ setlo %lo(__head_mmu_enabled),gr19
+ jmpl @(gr19,gr0)
+
+__head_mmu_enabled:
+ icei @(gr0,gr0),#1
+ dcei @(gr0,gr0),#1
+
+ LEDS 0x0009
+
+#ifdef CONFIG_MMU
+ call __head_fr451_finalise_protection
+#endif
+
+ LEDS 0x000a
+
+###############################################################################
+#
+# set up the runtime environment
+#
+###############################################################################
+
+ # clear the BSS area
+ sethi.p %hi(__bss_start),gr4
+ setlo %lo(__bss_start),gr4
+ sethi.p %hi(_end),gr5
+ setlo %lo(_end),gr5
+ or.p gr0,gr0,gr18
+ or gr0,gr0,gr19
+
+0:
+ stdi gr18,@(gr4,#0)
+ stdi gr18,@(gr4,#8)
+ stdi gr18,@(gr4,#16)
+ stdi.p gr18,@(gr4,#24)
+	addi		gr4,#32,gr4
+ subcc gr5,gr4,gr0,icc0
+ bhi icc0,#2,0b
+
+ LEDS 0x000b
+
+ # save the SDRAM details
+ sethi.p %hi(__sdram_old_base),gr4
+ setlo %lo(__sdram_old_base),gr4
+ st gr24,@(gr4,gr0)
+
+ sethi.p %hi(__sdram_base),gr5
+ setlo %lo(__sdram_base),gr5
+ sethi.p %hi(memory_start),gr4
+ setlo %lo(memory_start),gr4
+ st gr5,@(gr4,gr0)
+
+ add gr25,gr5,gr25
+ sethi.p %hi(memory_end),gr4
+ setlo %lo(memory_end),gr4
+ st gr25,@(gr4,gr0)
+
+ # point the TBR at the kernel trap table
+ sethi.p %hi(__entry_kerneltrap_table),gr4
+ setlo %lo(__entry_kerneltrap_table),gr4
+ movgs gr4,tbr
+
+ # set up the exception frame for init
+ sethi.p %hi(__kernel_frame0_ptr),gr28
+ setlo %lo(__kernel_frame0_ptr),gr28
+ sethi.p %hi(_gp),gr16
+ setlo %lo(_gp),gr16
+ sethi.p %hi(__entry_usertrap_table),gr4
+ setlo %lo(__entry_usertrap_table),gr4
+
+ lddi @(gr28,#0),gr28 ; load __frame & current
+ ldi.p @(gr29,#4),gr15 ; set current_thread
+
+ or gr0,gr0,fp
+ or gr28,gr0,sp
+
+ sti.p gr4,@(gr28,REG_TBR)
+ setlos #ISR_EDE|ISR_DTT_DIVBYZERO|ISR_EMAM_EXCEPTION,gr5
+ movgs gr5,isr
+
+ # turn on and off various CPU services
+ movsg psr,gr22
+ sethi.p %hi(#PSR_EM|PSR_EF|PSR_CM|PSR_NEM),gr4
+ setlo %lo(#PSR_EM|PSR_EF|PSR_CM|PSR_NEM),gr4
+ or gr22,gr4,gr22
+ movgs gr22,psr
+
+ andi gr22,#~(PSR_PIL|PSR_PS|PSR_S),gr22
+ ori gr22,#PSR_ET,gr22
+ sti gr22,@(gr28,REG_PSR)
+
+
+###############################################################################
+#
+# set up the registers and jump into the kernel
+#
+###############################################################################
+
+ LEDS 0x000c
+
+ # initialise the processor and the peripherals
+ #call SYMBOL_NAME(processor_init)
+ #call SYMBOL_NAME(unit_init)
+ #LEDS 0x0aff
+
+ sethi.p #0xe5e5,gr3
+ setlo #0xe5e5,gr3
+ or.p gr3,gr0,gr4
+ or gr3,gr0,gr5
+ or.p gr3,gr0,gr6
+ or gr3,gr0,gr7
+ or.p gr3,gr0,gr8
+ or gr3,gr0,gr9
+ or.p gr3,gr0,gr10
+ or gr3,gr0,gr11
+ or.p gr3,gr0,gr12
+ or gr3,gr0,gr13
+ or.p gr3,gr0,gr14
+ or gr3,gr0,gr17
+ or.p gr3,gr0,gr18
+ or gr3,gr0,gr19
+ or.p gr3,gr0,gr20
+ or gr3,gr0,gr21
+ or.p gr3,gr0,gr23
+ or gr3,gr0,gr24
+ or.p gr3,gr0,gr25
+ or gr3,gr0,gr26
+ or.p gr3,gr0,gr27
+# or gr3,gr0,gr30
+ or gr3,gr0,gr31
+ movgs gr0,lr
+ movgs gr0,lcr
+ movgs gr0,ccr
+ movgs gr0,cccr
+
+#ifdef CONFIG_MMU
+ movgs gr3,scr2
+ movgs gr3,scr3
+#endif
+
+ LEDS 0x0fff
+
+ # invoke the debugging stub if present
+ # - arch/frv/kernel/debug-stub.c will shift control directly to init/main.c
+ # (it will not return here)
+ break
+ .globl __debug_stub_init_break
+__debug_stub_init_break:
+
+ # however, if you need to use an ICE, and don't care about using any userspace
+ # debugging tools (such as the ptrace syscall), you can just step over the break
+ # above and get to the kernel this way
+ # look at arch/frv/kernel/debug-stub.c: debug_stub_init() to see what you've missed
+ call start_kernel
+
+ .globl __head_end
+__head_end:
+ .size _boot, .-_boot
+
+ # provide a point for GDB to place a break
+ .section .text.start,"ax"
+ .globl _start
+ .balign 4
+_start:
+ call _boot
+
+ .previous
+###############################################################################
+#
+# split a tile off of the region defined by GR8-GR9
+#
+# ENTRY: EXIT:
+# GR4 - IAMPR value representing tile
+# GR5 - DAMPR value representing tile
+# GR6 - IAMLR value representing tile
+# GR7 - DAMLR value representing tile
+# GR8 region base pointer [saved]
+# GR9 region top pointer updated to exclude new tile
+# GR11 xAMLR mask [saved]
+# GR25 SDRAM size [saved]
+# GR30 LED address [saved]
+#
+# - GR8 and GR9 should be rounded up/down to the nearest megabyte before calling
+#
+###############################################################################
+ .globl __head_split_region
+ .type __head_split_region,@function
+__head_split_region:
+ subcc.p gr9,gr8,gr4,icc0
+ setlos #31,gr5
+ scan.p gr4,gr0,gr6
+ beq icc0,#0,__head_region_empty
+ sub.p gr5,gr6,gr6 ; bit number of highest set bit (1MB=>20)
+ setlos #1,gr4
+ sll.p gr4,gr6,gr4 ; size of region (1 << bitno)
+ subi gr6,#17,gr6 ; 1MB => 0x03
+ slli.p gr6,#4,gr6 ; 1MB => 0x30
+ sub gr9,gr4,gr9 ; move uncovered top down
+
+ or gr9,gr6,gr4
+ ori gr4,#xAMPRx_S_USER|xAMPRx_C_CACHED|xAMPRx_V,gr4
+ or.p gr4,gr0,gr5
+
+ and gr4,gr11,gr6
+ and.p gr5,gr11,gr7
+ bralr
+
+__head_region_empty:
+ or.p gr0,gr0,gr4
+ or gr0,gr0,gr5
+ or.p gr0,gr0,gr6
+ or gr0,gr0,gr7
+ bralr
+ .size __head_split_region, .-__head_split_region
+
+###############################################################################
+#
+# write the 32-bit hex number in GR8 to ttyS0
+#
+###############################################################################
+#if 0
+ .globl __head_write_to_ttyS0
+ .type __head_write_to_ttyS0,@function
+__head_write_to_ttyS0:
+ sethi.p %hi(0xfeff9c00),gr31
+ setlo %lo(0xfeff9c00),gr31
+ setlos #8,gr20
+
+0: ldubi @(gr31,#5*8),gr21
+ andi gr21,#0x60,gr21
+ subicc gr21,#0x60,gr21,icc0
+ bne icc0,#0,0b
+
+1: srli gr8,#28,gr21
+ slli gr8,#4,gr8
+
+ addi gr21,#'0',gr21
+ subicc gr21,#'9',gr0,icc0
+ bls icc0,#2,2f
+ addi gr21,#'A'-'0'-10,gr21
+2:
+ stbi gr21,@(gr31,#0*8)
+ subicc gr20,#1,gr20,icc0
+ bhi icc0,#2,1b
+
+ setlos #'\r',gr21
+ stbi gr21,@(gr31,#0*8)
+
+ setlos #'\n',gr21
+ stbi gr21,@(gr31,#0*8)
+
+3: ldubi @(gr31,#5*8),gr21
+ andi gr21,#0x60,gr21
+ subicc gr21,#0x60,gr21,icc0
+ bne icc0,#0,3b
+ bralr
+
+ .size __head_write_to_ttyS0, .-__head_write_to_ttyS0
+#endif
--- /dev/null
+/* irq-mb93091.c: MB93091 FPGA interrupt handling
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/irc-regs.h>
+#include <asm/irq-routing.h>
+
+#define __reg16(ADDR) (*(volatile unsigned short *)(ADDR))
+
+#define __get_IMR() ({ __reg16(0xffc00004); })
+#define __set_IMR(M) do { __reg16(0xffc00004) = (M); wmb(); } while(0)
+#define __get_IFR() ({ __reg16(0xffc0000c); })
+#define __clr_IFR(M) do { __reg16(0xffc0000c) = ~(M); wmb(); } while(0)
+
+static void frv_fpga_doirq(struct irq_source *source);
+static void frv_fpga_control(struct irq_group *group, int irq, int on);
+
+/*****************************************************************************/
+/*
+ * FPGA IRQ multiplexor
+ */
+static struct irq_source frv_fpga[4] = {
+#define __FPGA(X, M) \
+ [X] = { \
+ .muxname = "fpga."#X, \
+ .irqmask = M, \
+ .doirq = frv_fpga_doirq, \
+ }
+
+ __FPGA(0, 0x0028),
+ __FPGA(1, 0x0050),
+ __FPGA(2, 0x1c00),
+ __FPGA(3, 0x6386),
+};
+
+static struct irq_group frv_fpga_irqs = {
+ .first_irq = IRQ_BASE_FPGA,
+ .control = frv_fpga_control,
+ .sources = {
+ [ 1] = &frv_fpga[3],
+ [ 2] = &frv_fpga[3],
+ [ 3] = &frv_fpga[0],
+ [ 4] = &frv_fpga[1],
+ [ 5] = &frv_fpga[0],
+ [ 6] = &frv_fpga[1],
+ [ 7] = &frv_fpga[3],
+ [ 8] = &frv_fpga[3],
+ [ 9] = &frv_fpga[3],
+ [10] = &frv_fpga[2],
+ [11] = &frv_fpga[2],
+ [12] = &frv_fpga[2],
+ [13] = &frv_fpga[3],
+ [14] = &frv_fpga[3],
+ },
+};
+
+
+static void frv_fpga_control(struct irq_group *group, int index, int on)
+{
+ uint16_t imr = __get_IMR();
+
+ if (on)
+ imr &= ~(1 << index);
+ else
+ imr |= 1 << index;
+
+ __set_IMR(imr);
+}
+
+static void frv_fpga_doirq(struct irq_source *source)
+{
+ uint16_t mask, imr;
+
+ imr = __get_IMR();
+ mask = source->irqmask & ~imr & __get_IFR();
+ if (mask) {
+ __set_IMR(imr | mask);
+ __clr_IFR(mask);
+ distribute_irqs(&frv_fpga_irqs, mask);
+ __set_IMR(imr);
+ }
+}
+
+void __init fpga_init(void)
+{
+ __set_IMR(0x7ffe);
+ __clr_IFR(0x0000);
+
+ frv_irq_route_external(&frv_fpga[0], IRQ_CPU_EXTERNAL0);
+ frv_irq_route_external(&frv_fpga[1], IRQ_CPU_EXTERNAL1);
+ frv_irq_route_external(&frv_fpga[2], IRQ_CPU_EXTERNAL2);
+ frv_irq_route_external(&frv_fpga[3], IRQ_CPU_EXTERNAL3);
+ frv_irq_set_group(&frv_fpga_irqs);
+}
--- /dev/null
+/* irq-mb93093.c: MB93093 FPGA interrupt handling
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/irc-regs.h>
+#include <asm/irq-routing.h>
+
+#define __reg16(ADDR) (*(volatile unsigned short *)(__region_CS2 + (ADDR)))
+
+#define __get_IMR() ({ __reg16(0x0a); })
+#define __set_IMR(M) do { __reg16(0x0a) = (M); wmb(); } while(0)
+#define __get_IFR() ({ __reg16(0x02); })
+#define __clr_IFR(M) do { __reg16(0x02) = ~(M); wmb(); } while(0)
+
+static void frv_fpga_doirq(struct irq_source *source);
+static void frv_fpga_control(struct irq_group *group, int irq, int on);
+
+/*****************************************************************************/
+/*
+ * FPGA IRQ multiplexor
+ */
+static struct irq_source frv_fpga[4] = {
+#define __FPGA(X, M) \
+ [X] = { \
+ .muxname = "fpga."#X, \
+ .irqmask = M, \
+ .doirq = frv_fpga_doirq, \
+ }
+
+ __FPGA(0, 0x0700),
+};
+
+static struct irq_group frv_fpga_irqs = {
+ .first_irq = IRQ_BASE_FPGA,
+ .control = frv_fpga_control,
+ .sources = {
+ [ 8] = &frv_fpga[0],
+ [ 9] = &frv_fpga[0],
+ [10] = &frv_fpga[0],
+ },
+};
+
+
+static void frv_fpga_control(struct irq_group *group, int index, int on)
+{
+ uint16_t imr = __get_IMR();
+
+ if (on)
+ imr &= ~(1 << index);
+ else
+ imr |= 1 << index;
+
+ __set_IMR(imr);
+}
+
+static void frv_fpga_doirq(struct irq_source *source)
+{
+ uint16_t mask, imr;
+
+ imr = __get_IMR();
+ mask = source->irqmask & ~imr & __get_IFR();
+ if (mask) {
+ __set_IMR(imr | mask);
+ __clr_IFR(mask);
+ distribute_irqs(&frv_fpga_irqs, mask);
+ __set_IMR(imr);
+ }
+}
+
+void __init fpga_init(void)
+{
+ __set_IMR(0x0700);
+ __clr_IFR(0x0000);
+
+ frv_irq_route_external(&frv_fpga[0], IRQ_CPU_EXTERNAL2);
+ frv_irq_set_group(&frv_fpga_irqs);
+}
--- /dev/null
+/* irq-mb93493.c: MB93493 companion chip interrupt handler
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/irc-regs.h>
+#include <asm/irq-routing.h>
+#include <asm/mb93493-irqs.h>
+
+static void frv_mb93493_doirq(struct irq_source *source);
+
+/*****************************************************************************/
+/*
+ * MB93493 companion chip IRQ multiplexor
+ */
+static struct irq_source frv_mb93493[2] = {
+ [0] = {
+ .muxname = "mb93493.0",
+ .muxdata = __region_CS3 + 0x3d0,
+ .doirq = frv_mb93493_doirq,
+ .irqmask = 0x0000,
+ },
+ [1] = {
+ .muxname = "mb93493.1",
+ .muxdata = __region_CS3 + 0x3d4,
+ .doirq = frv_mb93493_doirq,
+ .irqmask = 0x0000,
+ },
+};
+
+static void frv_mb93493_control(struct irq_group *group, int index, int on)
+{
+ struct irq_source *source;
+ uint32_t iqsr;
+
+ if ((frv_mb93493[0].irqmask & (1 << index)))
+ source = &frv_mb93493[0];
+ else
+ source = &frv_mb93493[1];
+
+ iqsr = readl(source->muxdata);
+ if (on)
+ iqsr |= 1 << (index + 16);
+ else
+ iqsr &= ~(1 << (index + 16));
+
+ writel(iqsr, source->muxdata);
+}
+
+static struct irq_group frv_mb93493_irqs = {
+ .first_irq = IRQ_BASE_MB93493,
+ .control = frv_mb93493_control,
+};
+
+static void frv_mb93493_doirq(struct irq_source *source)
+{
+ uint32_t mask = readl(source->muxdata);
+ mask = mask & (mask >> 16) & 0xffff;
+
+ if (mask)
+ distribute_irqs(&frv_mb93493_irqs, mask);
+}
+
+static void __init mb93493_irq_route(int irq, int source)
+{
+ frv_mb93493[source].irqmask |= 1 << (irq - IRQ_BASE_MB93493);
+ frv_mb93493_irqs.sources[irq - IRQ_BASE_MB93493] = &frv_mb93493[source];
+}
+
+void __init route_mb93493_irqs(void)
+{
+ frv_irq_route_external(&frv_mb93493[0], IRQ_CPU_MB93493_0);
+ frv_irq_route_external(&frv_mb93493[1], IRQ_CPU_MB93493_1);
+
+ frv_irq_set_group(&frv_mb93493_irqs);
+
+ mb93493_irq_route(IRQ_MB93493_VDC, IRQ_MB93493_VDC_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_VCC, IRQ_MB93493_VCC_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_AUDIO_IN, IRQ_MB93493_AUDIO_IN_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_I2C_0, IRQ_MB93493_I2C_0_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_I2C_1, IRQ_MB93493_I2C_1_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_USB, IRQ_MB93493_USB_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_LOCAL_BUS, IRQ_MB93493_LOCAL_BUS_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_PCMCIA, IRQ_MB93493_PCMCIA_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_GPIO, IRQ_MB93493_GPIO_ROUTE);
+ mb93493_irq_route(IRQ_MB93493_AUDIO_OUT, IRQ_MB93493_AUDIO_OUT_ROUTE);
+}
--- /dev/null
+/* irq-routing.c: IRQ routing
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/random.h>
+#include <linux/init.h>
+#include <linux/serial_reg.h>
+#include <asm/io.h>
+#include <asm/irq-routing.h>
+#include <asm/irc-regs.h>
+#include <asm/serial-regs.h>
+#include <asm/dma.h>
+
+struct irq_level frv_irq_levels[16] = {
+ [0 ... 15] = {
+ .lock = SPIN_LOCK_UNLOCKED,
+ }
+};
+
+struct irq_group *irq_groups[NR_IRQ_GROUPS];
+
+extern struct irq_group frv_cpu_irqs;
+
+void __init frv_irq_route(struct irq_source *source, int irqlevel)
+{
+ source->level = &frv_irq_levels[irqlevel];
+ source->next = frv_irq_levels[irqlevel].sources;
+ frv_irq_levels[irqlevel].sources = source;
+}
+
+void __init frv_irq_route_external(struct irq_source *source, int irq)
+{
+ int irqlevel = 0;
+
+ switch (irq) {
+ case IRQ_CPU_EXTERNAL0: irqlevel = IRQ_XIRQ0_LEVEL; break;
+ case IRQ_CPU_EXTERNAL1: irqlevel = IRQ_XIRQ1_LEVEL; break;
+ case IRQ_CPU_EXTERNAL2: irqlevel = IRQ_XIRQ2_LEVEL; break;
+ case IRQ_CPU_EXTERNAL3: irqlevel = IRQ_XIRQ3_LEVEL; break;
+ case IRQ_CPU_EXTERNAL4: irqlevel = IRQ_XIRQ4_LEVEL; break;
+ case IRQ_CPU_EXTERNAL5: irqlevel = IRQ_XIRQ5_LEVEL; break;
+ case IRQ_CPU_EXTERNAL6: irqlevel = IRQ_XIRQ6_LEVEL; break;
+ case IRQ_CPU_EXTERNAL7: irqlevel = IRQ_XIRQ7_LEVEL; break;
+ default: BUG();
+ }
+
+ source->level = &frv_irq_levels[irqlevel];
+ source->next = frv_irq_levels[irqlevel].sources;
+ frv_irq_levels[irqlevel].sources = source;
+}
+
+void __init frv_irq_set_group(struct irq_group *group)
+{
+ irq_groups[group->first_irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP] = group;
+}
+
+void distribute_irqs(struct irq_group *group, unsigned long irqmask)
+{
+ struct irqaction *action;
+ int irq;
+
+ while (irqmask) {
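+		/* pick off the pending IRQs one at a time, highest bit index
+		 * first (the scan result is converted to a bit number below) */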
+ asm("scan %1,gr0,%0" : "=r"(irq) : "r"(irqmask));
+ if (irq < 0 || irq > 31)
+ asm volatile("break");
+ irq = 31 - irq;
+
+ irqmask &= ~(1 << irq);
+ action = group->actions[irq];
+
+ irq += group->first_irq;
+
+ if (action) {
+ int status = 0;
+
+// if (!(action->flags & SA_INTERRUPT))
+// local_irq_enable();
+
+ do {
+ status |= action->flags;
+ action->handler(irq, action->dev_id, __frame);
+ action = action->next;
+ } while (action);
+
+ if (status & SA_SAMPLE_RANDOM)
+ add_interrupt_randomness(irq);
+ local_irq_disable();
+ }
+ }
+}
+
+/*****************************************************************************/
+/*
+ * CPU UART interrupts
+ */
+static void frv_cpuuart_doirq(struct irq_source *source)
+{
+// uint8_t iir = readb(source->muxdata + UART_IIR * 8);
+// if ((iir & 0x0f) != UART_IIR_NO_INT)
+ distribute_irqs(&frv_cpu_irqs, source->irqmask);
+}
+
+struct irq_source frv_cpuuart[2] = {
+#define __CPUUART(X, A) \
+ [X] = { \
+ .muxname = "uart", \
+ .muxdata = (volatile void __iomem *) A, \
+ .irqmask = 1 << IRQ_CPU_UART##X, \
+ .doirq = frv_cpuuart_doirq, \
+ }
+
+ __CPUUART(0, UART0_BASE),
+ __CPUUART(1, UART1_BASE),
+};
+
+/*****************************************************************************/
+/*
+ * CPU DMA interrupts
+ */
+static void frv_cpudma_doirq(struct irq_source *source)
+{
+ uint32_t cstr = readl(source->muxdata + DMAC_CSTRx);
+ if (cstr & DMAC_CSTRx_INT)
+ distribute_irqs(&frv_cpu_irqs, source->irqmask);
+}
+
+struct irq_source frv_cpudma[8] = {
+#define __CPUDMA(X, A) \
+ [X] = { \
+ .muxname = "dma", \
+ .muxdata = (volatile void __iomem *) A, \
+ .irqmask = 1 << IRQ_CPU_DMA##X, \
+ .doirq = frv_cpudma_doirq, \
+ }
+
+ __CPUDMA(0, 0xfe000900),
+ __CPUDMA(1, 0xfe000980),
+ __CPUDMA(2, 0xfe000a00),
+ __CPUDMA(3, 0xfe000a80),
+ __CPUDMA(4, 0xfe001000),
+ __CPUDMA(5, 0xfe001080),
+ __CPUDMA(6, 0xfe001100),
+ __CPUDMA(7, 0xfe001180),
+};
+
+/*****************************************************************************/
+/*
+ * CPU timer interrupts - can't tell whether they've generated an interrupt or not
+ */
+static void frv_cputimer_doirq(struct irq_source *source)
+{
+ distribute_irqs(&frv_cpu_irqs, source->irqmask);
+}
+
+struct irq_source frv_cputimer[3] = {
+#define __CPUTIMER(X) \
+ [X] = { \
+ .muxname = "timer", \
+ .muxdata = 0, \
+ .irqmask = 1 << IRQ_CPU_TIMER##X, \
+ .doirq = frv_cputimer_doirq, \
+ }
+
+ __CPUTIMER(0),
+ __CPUTIMER(1),
+ __CPUTIMER(2),
+};
+
+/*****************************************************************************/
+/*
+ * external CPU interrupts - can't tell directly whether they've generated an interrupt or not
+ */
+static void frv_cpuexternal_doirq(struct irq_source *source)
+{
+ distribute_irqs(&frv_cpu_irqs, source->irqmask);
+}
+
+struct irq_source frv_cpuexternal[8] = {
+#define __CPUEXTERNAL(X) \
+ [X] = { \
+ .muxname = "ext", \
+ .muxdata = 0, \
+ .irqmask = 1 << IRQ_CPU_EXTERNAL##X, \
+ .doirq = frv_cpuexternal_doirq, \
+ }
+
+ __CPUEXTERNAL(0),
+ __CPUEXTERNAL(1),
+ __CPUEXTERNAL(2),
+ __CPUEXTERNAL(3),
+ __CPUEXTERNAL(4),
+ __CPUEXTERNAL(5),
+ __CPUEXTERNAL(6),
+ __CPUEXTERNAL(7),
+};
+
+#define set_IRR(N,A,B,C,D) __set_IRR(N, (A << 28) | (B << 24) | (C << 20) | (D << 16))
+
+struct irq_group frv_cpu_irqs = {
+ .sources = {
+ [IRQ_CPU_UART0] = &frv_cpuuart[0],
+ [IRQ_CPU_UART1] = &frv_cpuuart[1],
+ [IRQ_CPU_TIMER0] = &frv_cputimer[0],
+ [IRQ_CPU_TIMER1] = &frv_cputimer[1],
+ [IRQ_CPU_TIMER2] = &frv_cputimer[2],
+ [IRQ_CPU_DMA0] = &frv_cpudma[0],
+ [IRQ_CPU_DMA1] = &frv_cpudma[1],
+ [IRQ_CPU_DMA2] = &frv_cpudma[2],
+ [IRQ_CPU_DMA3] = &frv_cpudma[3],
+ [IRQ_CPU_DMA4] = &frv_cpudma[4],
+ [IRQ_CPU_DMA5] = &frv_cpudma[5],
+ [IRQ_CPU_DMA6] = &frv_cpudma[6],
+ [IRQ_CPU_DMA7] = &frv_cpudma[7],
+ [IRQ_CPU_EXTERNAL0] = &frv_cpuexternal[0],
+ [IRQ_CPU_EXTERNAL1] = &frv_cpuexternal[1],
+ [IRQ_CPU_EXTERNAL2] = &frv_cpuexternal[2],
+ [IRQ_CPU_EXTERNAL3] = &frv_cpuexternal[3],
+ [IRQ_CPU_EXTERNAL4] = &frv_cpuexternal[4],
+ [IRQ_CPU_EXTERNAL5] = &frv_cpuexternal[5],
+ [IRQ_CPU_EXTERNAL6] = &frv_cpuexternal[6],
+ [IRQ_CPU_EXTERNAL7] = &frv_cpuexternal[7],
+ },
+};
+
+/*****************************************************************************/
+/*
+ * route the CPU's interrupt sources
+ */
+void __init route_cpu_irqs(void)
+{
+ frv_irq_set_group(&frv_cpu_irqs);
+
+ __set_IITMR(0, 0x003f0000); /* DMA0-3, TIMER0-2 IRQ detect levels */
+ __set_IITMR(1, 0x20000000); /* ERR0-1, UART0-1, DMA4-7 IRQ detect levels */
+
+ /* route UART and error interrupts */
+ frv_irq_route(&frv_cpuuart[0], IRQ_UART0_LEVEL);
+ frv_irq_route(&frv_cpuuart[1], IRQ_UART1_LEVEL);
+
+ set_IRR(6, IRQ_GDBSTUB_LEVEL, IRQ_GDBSTUB_LEVEL, IRQ_UART1_LEVEL, IRQ_UART0_LEVEL);
+
+ /* route DMA channel interrupts */
+ frv_irq_route(&frv_cpudma[0], IRQ_DMA0_LEVEL);
+ frv_irq_route(&frv_cpudma[1], IRQ_DMA1_LEVEL);
+ frv_irq_route(&frv_cpudma[2], IRQ_DMA2_LEVEL);
+ frv_irq_route(&frv_cpudma[3], IRQ_DMA3_LEVEL);
+ frv_irq_route(&frv_cpudma[4], IRQ_DMA4_LEVEL);
+ frv_irq_route(&frv_cpudma[5], IRQ_DMA5_LEVEL);
+ frv_irq_route(&frv_cpudma[6], IRQ_DMA6_LEVEL);
+ frv_irq_route(&frv_cpudma[7], IRQ_DMA7_LEVEL);
+
+ set_IRR(4, IRQ_DMA3_LEVEL, IRQ_DMA2_LEVEL, IRQ_DMA1_LEVEL, IRQ_DMA0_LEVEL);
+ set_IRR(7, IRQ_DMA7_LEVEL, IRQ_DMA6_LEVEL, IRQ_DMA5_LEVEL, IRQ_DMA4_LEVEL);
+
+ /* route timer interrupts */
+ frv_irq_route(&frv_cputimer[0], IRQ_TIMER0_LEVEL);
+ frv_irq_route(&frv_cputimer[1], IRQ_TIMER1_LEVEL);
+ frv_irq_route(&frv_cputimer[2], IRQ_TIMER2_LEVEL);
+
+ set_IRR(5, 0, IRQ_TIMER2_LEVEL, IRQ_TIMER1_LEVEL, IRQ_TIMER0_LEVEL);
+
+ /* route external interrupts */
+ frv_irq_route(&frv_cpuexternal[0], IRQ_XIRQ0_LEVEL);
+ frv_irq_route(&frv_cpuexternal[1], IRQ_XIRQ1_LEVEL);
+ frv_irq_route(&frv_cpuexternal[2], IRQ_XIRQ2_LEVEL);
+ frv_irq_route(&frv_cpuexternal[3], IRQ_XIRQ3_LEVEL);
+ frv_irq_route(&frv_cpuexternal[4], IRQ_XIRQ4_LEVEL);
+ frv_irq_route(&frv_cpuexternal[5], IRQ_XIRQ5_LEVEL);
+ frv_irq_route(&frv_cpuexternal[6], IRQ_XIRQ6_LEVEL);
+ frv_irq_route(&frv_cpuexternal[7], IRQ_XIRQ7_LEVEL);
+
+ set_IRR(2, IRQ_XIRQ7_LEVEL, IRQ_XIRQ6_LEVEL, IRQ_XIRQ5_LEVEL, IRQ_XIRQ4_LEVEL);
+ set_IRR(3, IRQ_XIRQ3_LEVEL, IRQ_XIRQ2_LEVEL, IRQ_XIRQ1_LEVEL, IRQ_XIRQ0_LEVEL);
+
+#if defined(CONFIG_MB93091_VDK)
+ __set_TM1(0x55550000); /* XIRQ7-0 all active low */
+#elif defined(CONFIG_MB93093_PDK)
+ __set_TM1(0x15550000); /* XIRQ7 active high, 6-0 all active low */
+#else
+#error dont know external IRQ trigger levels for this setup
+#endif
+
+} /* end route_cpu_irqs() */
--- /dev/null
+/* irq.c: FRV IRQ handling
+ *
+ * Copyright (C) 2003, 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/*
+ * (mostly architecture independent, will move to kernel/irq.c in 2.5.)
+ *
+ * IRQs are in fact implemented a bit like signal handlers for the kernel.
+ * Naturally it's not a 1:1 relation, but there are similarities.
+ */
+
+#include <linux/config.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/smp_lock.h>
+#include <linux/init.h>
+#include <linux/kernel_stat.h>
+#include <linux/irq.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <asm/atomic.h>
+#include <asm/io.h>
+#include <asm/smp.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/irc-regs.h>
+#include <asm/irq-routing.h>
+#include <asm/gdb-stub.h>
+
+extern void __init fpga_init(void);
+extern void __init route_mb93493_irqs(void);
+
+static void register_irq_proc (unsigned int irq);
+
+/*
+ * Special irq handlers.
+ */
+
+irqreturn_t no_action(int cpl, void *dev_id, struct pt_regs *regs) { return IRQ_HANDLED; }
+
+atomic_t irq_err_count;
+
+/*
+ * Generic, controller-independent functions:
+ */
+int show_interrupts(struct seq_file *p, void *v)
+{
+ struct irqaction *action;
+ struct irq_group *group;
+ unsigned long flags;
+ int level, grp, ix, i, j;
+
+ i = *(loff_t *) v;
+
+ switch (i) {
+ case 0:
+ seq_printf(p, " ");
+ for (j = 0; j < NR_CPUS; j++)
+ if (cpu_online(j))
+ seq_printf(p, "CPU%d ",j);
+
+ seq_putc(p, '\n');
+ break;
+
+ case 1 ... NR_IRQ_GROUPS * NR_IRQ_ACTIONS_PER_GROUP:
+ local_irq_save(flags);
+
+ grp = (i - 1) / NR_IRQ_ACTIONS_PER_GROUP;
+ group = irq_groups[grp];
+ if (!group)
+ goto skip;
+
+ ix = (i - 1) % NR_IRQ_ACTIONS_PER_GROUP;
+ action = group->actions[ix];
+ if (!action)
+ goto skip;
+
+ seq_printf(p, "%3d: ", i - 1);
+
+#ifndef CONFIG_SMP
+ seq_printf(p, "%10u ", kstat_irqs(i));
+#else
+ for (j = 0; j < NR_CPUS; j++)
+ if (cpu_online(j))
+ seq_printf(p, "%10u ", kstat_cpu(j).irqs[i - 1]);
+#endif
+
+ level = group->sources[ix]->level - frv_irq_levels;
+
+ seq_printf(p, " %12s@%x", group->sources[ix]->muxname, level);
+ seq_printf(p, " %s", action->name);
+
+ for (action = action->next; action; action = action->next)
+ seq_printf(p, ", %s", action->name);
+
+ seq_putc(p, '\n');
+skip:
+ local_irq_restore(flags);
+ break;
+
+ case NR_IRQ_GROUPS * NR_IRQ_ACTIONS_PER_GROUP + 1:
+ seq_printf(p, "ERR: %10u\n", atomic_read(&irq_err_count));
+ break;
+
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+
+/*
+ * Generic enable/disable code: this just calls
+ * down into the PIC-specific version for the actual
+ * hardware disable after having gotten the irq
+ * controller lock.
+ */
+
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and Enables are
+ * nested.
+ * Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
+ */
+
+void disable_irq_nosync(unsigned int irq)
+{
+ struct irq_source *source;
+ struct irq_group *group;
+ struct irq_level *level;
+ unsigned long flags;
+ int idx = irq & (NR_IRQ_ACTIONS_PER_GROUP - 1);
+
+ group = irq_groups[irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP];
+ if (!group)
+ BUG();
+
+ source = group->sources[idx];
+ if (!source)
+ BUG();
+
+ level = source->level;
+
+ spin_lock_irqsave(&level->lock, flags);
+
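+	/* use the group's own per-IRQ control op where one is provided;
+	 * otherwise mask the whole CPU interrupt level this source feeds */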
+ if (group->control) {
+ if (!group->disable_cnt[idx]++)
+ group->control(group, idx, 0);
+ } else if (!level->disable_count++) {
+ __set_MASK(level - frv_irq_levels);
+ }
+
+ spin_unlock_irqrestore(&level->lock, flags);
+}
+
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Enables and Disables are
+ * nested.
+ * This function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
+ */
+
+void disable_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+
+#ifdef CONFIG_SMP
+ if (!local_irq_count(smp_processor_id())) {
+ do {
+ barrier();
+ } while (irq_desc[irq].status & IRQ_INPROGRESS);
+ }
+#endif
+}
+
+/**
+ * enable_irq - enable handling of an irq
+ * @irq: Interrupt to enable
+ *
+ * Undoes the effect of one call to disable_irq(). If this
+ * matches the last disable, processing of interrupts on this
+ * IRQ line is re-enabled.
+ *
+ * This function may be called from IRQ context.
+ */
+
+void enable_irq(unsigned int irq)
+{
+ struct irq_source *source;
+ struct irq_group *group;
+ struct irq_level *level;
+ unsigned long flags;
+ int idx = irq & (NR_IRQ_ACTIONS_PER_GROUP - 1);
+ int count;
+
+ group = irq_groups[irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP];
+ if (!group)
+ BUG();
+
+ source = group->sources[idx];
+ if (!source)
+ BUG();
+
+ level = source->level;
+
+ spin_lock_irqsave(&level->lock, flags);
+
+ if (group->control)
+ count = group->disable_cnt[idx];
+ else
+ count = level->disable_count;
+
+ switch (count) {
+ case 1:
+ if (group->control) {
+ if (group->actions[idx])
+ group->control(group, idx, 1);
+ } else {
+ if (level->usage)
+ __clr_MASK(level - frv_irq_levels);
+ }
+ /* fall-through */
+
+ default:
+ count--;
+ break;
+
+ case 0:
+ printk("enable_irq(%u) unbalanced from %p\n", irq, __builtin_return_address(0));
+ }
+
+ if (group->control)
+ group->disable_cnt[idx] = count;
+ else
+ level->disable_count = count;
+
+ spin_unlock_irqrestore(&level->lock, flags);
+}
+
+/*****************************************************************************/
+/*
+ * handles all normal device IRQs
+ * - registers are referred to by the __frame variable (GR28)
+ * - IRQ distribution is complicated in this arch because of the many PICs, the
+ * way they work and the way they cascade
+ */
+asmlinkage void do_IRQ(void)
+{
+ struct irq_source *source;
+ int level, cpu;
+
+ level = (__frame->tbr >> 4) & 0xf;
+ cpu = smp_processor_id();
+
+#if 0
+ {
+ static u32 irqcount;
+ *(volatile u32 *) 0xe1200004 = ~((irqcount++ << 8) | level);
+ *(volatile u16 *) 0xffc00100 = (u16) ~0x9999;
+ mb();
+ }
+#endif
+
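+	/* BUG if the exception frame lies within 512 bytes of the end of the
+	 * current task structure - a likely sign of kernel stack overflow */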
+ if ((unsigned long) __frame - (unsigned long) (current + 1) < 512)
+ BUG();
+
+ __set_MASK(level);
+ __clr_RC(level);
+ __clr_IRL();
+
+ kstat_this_cpu.irqs[level]++;
+
+ irq_enter();
+
+ for (source = frv_irq_levels[level].sources; source; source = source->next)
+ source->doirq(source);
+
+ irq_exit();
+
+ __clr_MASK(level);
+
+ /* only process softirqs if we didn't interrupt another interrupt handler */
+ if ((__frame->psr & PSR_PIL) == PSR_PIL_0)
+ if (local_softirq_pending())
+ do_softirq();
+
+#ifdef CONFIG_PREEMPT
+ local_irq_disable();
+ while (--current->preempt_count == 0) {
+ if (!(__frame->psr & PSR_S) ||
+ current->need_resched == 0 ||
+ in_interrupt())
+ break;
+ current->preempt_count++;
+ local_irq_enable();
+ preempt_schedule();
+ local_irq_disable();
+ }
+#endif
+
+#if 0
+ {
+ *(volatile u16 *) 0xffc00100 = (u16) ~0x6666;
+ mb();
+ }
+#endif
+
+} /* end do_IRQ() */
+
+/*****************************************************************************/
+/*
+ * handles all NMIs when not co-opted by the debugger
+ * - registers are referred to by the __frame variable (GR28)
+ */
+asmlinkage void do_NMI(void)
+{
+} /* end do_NMI() */
+
+/*****************************************************************************/
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+
+int request_irq(unsigned int irq,
+ irqreturn_t (*handler)(int, void *, struct pt_regs *),
+ unsigned long irqflags,
+ const char * devname,
+ void *dev_id)
+{
+ int retval;
+ struct irqaction *action;
+
+#if 1
+ /*
+ * Sanity-check: shared interrupts should REALLY pass in
+ * a real dev-ID, otherwise we'll have trouble later trying
+ * to figure out which interrupt is which (messes up the
+ * interrupt freeing logic etc).
+ */
+ if (irqflags & SA_SHIRQ) {
+ if (!dev_id)
+ printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n",
+ devname, (&irq)[-1]);
+ }
+#endif
+
+ if ((irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP) >= NR_IRQ_GROUPS)
+ return -EINVAL;
+ if (!handler)
+ return -EINVAL;
+
+ action = (struct irqaction *) kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ if (!action)
+ return -ENOMEM;
+
+ action->handler = handler;
+ action->flags = irqflags;
+ action->mask = CPU_MASK_NONE;
+ action->name = devname;
+ action->next = NULL;
+ action->dev_id = dev_id;
+
+ retval = setup_irq(irq, action);
+ if (retval)
+ kfree(action);
+ return retval;
+}
+
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+
+void free_irq(unsigned int irq, void *dev_id)
+{
+ struct irq_source *source;
+ struct irq_group *group;
+ struct irq_level *level;
+ struct irqaction **p, **pp;
+ unsigned long flags;
+
+ if ((irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP) >= NR_IRQ_GROUPS)
+ return;
+
+ group = irq_groups[irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP];
+ if (!group)
+ BUG();
+
+ source = group->sources[irq & (NR_IRQ_ACTIONS_PER_GROUP - 1)];
+ if (!source)
+ BUG();
+
+ level = source->level;
+ p = &group->actions[irq & (NR_IRQ_ACTIONS_PER_GROUP - 1)];
+
+ spin_lock_irqsave(&level->lock, flags);
+
+ for (pp = p; *pp; pp = &(*pp)->next) {
+ struct irqaction *action = *pp;
+
+ if (action->dev_id != dev_id)
+ continue;
+
+ /* found it - remove from the list of entries */
+ *pp = action->next;
+
+ level->usage--;
+
+ if (p == pp && group->control)
+ group->control(group, irq & (NR_IRQ_ACTIONS_PER_GROUP - 1), 0);
+
+ if (level->usage == 0)
+ __set_MASK(level - frv_irq_levels);
+
+ spin_unlock_irqrestore(&level->lock,flags);
+
+#ifdef CONFIG_SMP
+ /* Wait to make sure it's not being used on another CPU */
+ while (desc->status & IRQ_INPROGRESS)
+ barrier();
+#endif
+ kfree(action);
+ return;
+ }
+}
+
+/*
+ * IRQ autodetection code..
+ *
+ * This depends on the fact that any interrupt that comes in on an
+ * unassigned IRQ will cause GxICR_DETECT to be set
+ */
+
+static DECLARE_MUTEX(probe_sem);
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+
+unsigned long probe_irq_on(void)
+{
+ down(&probe_sem);
+ return 0;
+}
+
+/*
+ * Return a mask of triggered interrupts (this
+ * can handle only legacy ISA interrupts).
+ */
+
+/**
+ * probe_irq_mask - scan a bitmap of interrupt lines
+ * @val: mask of interrupts to consider
+ *
+ * Scan the ISA bus interrupt lines and return a bitmap of
+ * active interrupts. The interrupt probe logic state is then
+ * returned to its previous value.
+ *
+ * Note: we need to scan all the IRQs even though we will
+ * only return ISA IRQ numbers - just so that we reset them
+ * all to a known state.
+ */
+unsigned int probe_irq_mask(unsigned long xmask)
+{
+ up(&probe_sem);
+ return 0;
+}
+
+/*
+ * Return the one interrupt that triggered (this can
+ * handle any interrupt source).
+ */
+
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @xmask: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then minus the first candidate is returned to indicate
+ * there is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
+ */
+
+int probe_irq_off(unsigned long xmask)
+{
+ up(&probe_sem);
+ return -1;
+}
+
+/* this was setup_x86_irq but it seems pretty generic */
+int setup_irq(unsigned int irq, struct irqaction *new)
+{
+ struct irq_source *source;
+ struct irq_group *group;
+ struct irq_level *level;
+ struct irqaction **p, **pp;
+ unsigned long flags;
+
+ group = irq_groups[irq >> NR_IRQ_LOG2_ACTIONS_PER_GROUP];
+ if (!group)
+ BUG();
+
+ source = group->sources[irq & (NR_IRQ_ACTIONS_PER_GROUP - 1)];
+ if (!source)
+ BUG();
+
+ level = source->level;
+
+ p = &group->actions[irq & (NR_IRQ_ACTIONS_PER_GROUP - 1)];
+
+ /*
+ * Some drivers like serial.c use request_irq() heavily,
+ * so we have to be careful not to interfere with a
+ * running system.
+ */
+ if (new->flags & SA_SAMPLE_RANDOM) {
+ /*
+ * This function might sleep, we want to call it first,
+ * outside of the atomic block.
+ * Yes, this might clear the entropy pool if the wrong
+ * driver is attempted to be loaded, without actually
+	 * installing a new handler, but is this really a problem?
+	 * Only the sysadmin is able to do this.
+ */
+ rand_initialize_irq(irq);
+ }
+
+ /* must juggle the interrupt processing stuff with interrupts disabled */
+ spin_lock_irqsave(&level->lock, flags);
+
+ /* can't share interrupts unless all parties agree to */
+ if (level->usage != 0 && !(level->flags & new->flags & SA_SHIRQ)) {
+ spin_unlock_irqrestore(&level->lock,flags);
+ return -EBUSY;
+ }
+
+ /* add new interrupt at end of irq queue */
+ pp = p;
+ while (*pp)
+ pp = &(*pp)->next;
+
+ *pp = new;
+
+ level->usage++;
+ level->flags = new->flags;
+
+ /* turn the interrupts on */
+ if (level->usage == 1)
+ __clr_MASK(level - frv_irq_levels);
+
+ if (p == pp && group->control)
+ group->control(group, irq & (NR_IRQ_ACTIONS_PER_GROUP - 1), 1);
+
+ spin_unlock_irqrestore(&level->lock, flags);
+ register_irq_proc(irq);
+ return 0;
+}
+
+static struct proc_dir_entry * root_irq_dir;
+static struct proc_dir_entry * irq_dir [NR_IRQS];
+
+#define HEX_DIGITS 8
+
+static unsigned int parse_hex_value (const char *buffer,
+ unsigned long count, unsigned long *ret)
+{
+ unsigned char hexnum [HEX_DIGITS];
+ unsigned long value;
+ int i;
+
+ if (!count)
+ return -EINVAL;
+ if (count > HEX_DIGITS)
+ count = HEX_DIGITS;
+ if (copy_from_user(hexnum, buffer, count))
+ return -EFAULT;
+
+ /*
+ * Parse the first 8 characters as a hex string, any non-hex char
+ * is end-of-string. '00e1', 'e1', '00E1', 'E1' are all the same.
+ */
+ value = 0;
+
+ for (i = 0; i < count; i++) {
+ unsigned int c = hexnum[i];
+
+ switch (c) {
+ case '0' ... '9': c -= '0'; break;
+ case 'a' ... 'f': c -= 'a'-10; break;
+ case 'A' ... 'F': c -= 'A'-10; break;
+ default:
+ goto out;
+ }
+ value = (value << 4) | c;
+ }
+out:
+ *ret = value;
+ return 0;
+}
+
+
+static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ unsigned long *mask = (unsigned long *) data;
+ if (count < HEX_DIGITS+1)
+ return -EINVAL;
+ return sprintf (page, "%08lx\n", *mask);
+}
+
+static int prof_cpu_mask_write_proc (struct file *file, const char *buffer,
+ unsigned long count, void *data)
+{
+ unsigned long *mask = (unsigned long *) data, full_count = count, err;
+ unsigned long new_value;
+
+ show_state();
+ err = parse_hex_value(buffer, count, &new_value);
+ if (err)
+ return err;
+
+ *mask = new_value;
+ return full_count;
+}
+
+#define MAX_NAMELEN 10
+
+static void register_irq_proc (unsigned int irq)
+{
+ char name [MAX_NAMELEN];
+
+ if (!root_irq_dir || irq_dir[irq])
+ return;
+
+ memset(name, 0, MAX_NAMELEN);
+ sprintf(name, "%d", irq);
+
+ /* create /proc/irq/1234 */
+ irq_dir[irq] = proc_mkdir(name, root_irq_dir);
+}
+
+unsigned long prof_cpu_mask = -1;
+
+void init_irq_proc (void)
+{
+ struct proc_dir_entry *entry;
+ int i;
+
+ /* create /proc/irq */
+ root_irq_dir = proc_mkdir("irq", 0);
+
+ /* create /proc/irq/prof_cpu_mask */
+ entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
+ if (!entry)
+ return;
+
+ entry->nlink = 1;
+ entry->data = (void *)&prof_cpu_mask;
+ entry->read_proc = prof_cpu_mask_read_proc;
+ entry->write_proc = prof_cpu_mask_write_proc;
+
+ /*
+ * Create entries for all existing IRQs.
+ */
+ for (i = 0; i < NR_IRQS; i++)
+ register_irq_proc(i);
+}
+
+/*****************************************************************************/
+/*
+ * initialise the interrupt system
+ */
+void __init init_IRQ(void)
+{
+ route_cpu_irqs();
+ fpga_init();
+#ifdef CONFIG_FUJITSU_MB93493
+ route_mb93493_irqs();
+#endif
+} /* end init_IRQ() */
--- /dev/null
+/* kernel_thread.S: kernel thread creation
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/linkage.h>
+#include <asm/unistd.h>
+
+#define CLONE_VM 0x00000100 /* set if VM shared between processes */
+#define KERN_ERR "<3>"
+
+ .section .rodata
+kernel_thread_emsg:
+ .asciz KERN_ERR "failed to create kernel thread: error=%d\n"
+
+ .text
+ .balign 4
+
+###############################################################################
+#
+# Create a kernel thread
+#
+# int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+#
+###############################################################################
+ .globl kernel_thread
+ .type kernel_thread,@function
+kernel_thread:
+ or.p gr8,gr0,gr4
+ or gr9,gr0,gr5
+
+ # start by forking the current process, but with shared VM
+ setlos.p #__NR_clone,gr7 ; syscall number
+ ori gr10,#CLONE_VM,gr8 ; first syscall arg [clone_flags]
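+	# newsp is a recognisably bogus value: the child is a kernel thread
+	# and never touches a userspace stack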
+ sethi.p #0xe4e4,gr9 ; second syscall arg [newsp]
+ setlo #0xe4e4,gr9
+ setlos.p #0,gr10 ; third syscall arg [parent_tidptr]
+ setlos #0,gr11 ; fourth syscall arg [child_tidptr]
+ tira gr0,#0
+ setlos.p #4095,gr7
+ andcc gr8,gr8,gr0,icc0
+ addcc.p gr8,gr7,gr0,icc1
+ bnelr icc0,#2
+ bc icc1,#0,kernel_thread_error
+
+ # now invoke the work function
+ or gr5,gr0,gr8
+ calll @(gr4,gr0)
+
+ # and finally exit the thread
+ setlos #__NR_exit,gr7 ; syscall number
+ tira gr0,#0
+
+kernel_thread_error:
+ subi sp,#8,sp
+ movsg lr,gr4
+ sti gr8,@(sp,#0)
+ sti.p gr4,@(sp,#4)
+
+ or gr8,gr0,gr9
+ sethi.p %hi(kernel_thread_emsg),gr8
+ setlo %lo(kernel_thread_emsg),gr8
+
+ call printk
+
+ ldi @(sp,#4),gr4
+ ldi @(sp,#0),gr8
+ subi sp,#8,sp
+ jmpl @(gr4,gr0)
+
+ .size kernel_thread,.-kernel_thread
--- /dev/null
+/* local.h: local definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _FRV_LOCAL_H
+#define _FRV_LOCAL_H
+
+#include <asm/sections.h>
+
+#ifndef __ASSEMBLY__
+
+/* dma.c */
+extern unsigned long frv_dma_inprogress;
+
+extern void frv_dma_pause_all(void);
+extern void frv_dma_resume_all(void);
+
+/* sleep.S */
+extern asmlinkage void frv_cpu_suspend(unsigned long);
+extern asmlinkage void frv_cpu_core_sleep(void);
+
+/* setup.c */
+extern unsigned long __nongprelbss pdm_suspend_mode;
+extern void determine_clocks(int verbose);
+extern int __nongprelbss clock_p0_current;
+extern int __nongprelbss clock_cm_current;
+extern int __nongprelbss clock_cmode_current;
+
+#ifdef CONFIG_PM
+extern int __nongprelbss clock_cmodes_permitted;
+extern unsigned long __nongprelbss clock_bits_settable;
+#define CLOCK_BIT_CM 0x0000000f
+#define CLOCK_BIT_CM_H 0x00000001 /* CLKC.CM can be set to 0 */
+#define CLOCK_BIT_CM_M 0x00000002 /* CLKC.CM can be set to 1 */
+#define CLOCK_BIT_CM_L 0x00000004 /* CLKC.CM can be set to 2 */
+#define CLOCK_BIT_P0 0x00000010 /* CLKC.P0 can be changed */
+#define CLOCK_BIT_CMODE 0x00000020 /* CLKC.CMODE can be changed */
+
+extern void (*__power_switch_wake_setup)(void);
+extern int (*__power_switch_wake_check)(void);
+extern void (*__power_switch_wake_cleanup)(void);
+#endif
+
+/* time.c */
+extern void time_divisor_init(void);
+
+
+#endif /* __ASSEMBLY__ */
+#endif /* _FRV_LOCAL_H */
--- /dev/null
+/*
+ * FR-V MB93093 Power Management Routines
+ *
+ * Copyright (c) 2004 Red Hat, Inc.
+ *
+ * Written by: msalter@redhat.com
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/pm.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/sysctl.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <asm/uaccess.h>
+
+#include <asm/mb86943a.h>
+
+#include "local.h"
+
+static unsigned long imask;
+/*
+ * Setup interrupt masks, etc to enable wakeup by power switch
+ */
+static void mb93093_power_switch_setup(void)
+{
+ /* mask all but FPGA interrupt sources. */
+ imask = *(volatile unsigned long *)0xfeff9820;
+ *(volatile unsigned long *)0xfeff9820 = ~(1 << (IRQ_XIRQ2_LEVEL + 16)) & 0xfffe0000;
+}
+
+/*
+ * Cleanup interrupt masks, etc after wakeup by power switch
+ */
+static void mb93093_power_switch_cleanup(void)
+{
+ *(volatile unsigned long *)0xfeff9820 = imask;
+}
+
+/*
+ * Return non-zero if wakeup irq was caused by power switch
+ */
+static int mb93093_power_switch_check(void)
+{
+ return 1;
+}
+
+/*
+ * Initialize power interface
+ */
+static int __init mb93093_pm_init(void)
+{
+ __power_switch_wake_setup = mb93093_power_switch_setup;
+ __power_switch_wake_check = mb93093_power_switch_check;
+ __power_switch_wake_cleanup = mb93093_power_switch_cleanup;
+ return 0;
+}
+
+__initcall(mb93093_pm_init);
+
--- /dev/null
+/*
+ * FR-V Power Management Routines
+ *
+ * Copyright (c) 2004 Red Hat, Inc.
+ *
+ * Based on SA1100 version:
+ * Copyright (c) 2001 Cliff Brake <cbrake@accelent.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/pm.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/sysctl.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <asm/uaccess.h>
+
+#include <asm/mb86943a.h>
+
+#include "local.h"
+
+void (*pm_power_off)(void);
+
+extern void frv_change_cmode(int);
+
+/*
+ * Debug macros
+ */
+#define DEBUG
+
+int pm_do_suspend(void)
+{
+ local_irq_disable();
+
+ __set_LEDS(0xb1);
+
+ /* go zzz */
+ frv_cpu_suspend(pdm_suspend_mode);
+
+ __set_LEDS(0xb2);
+
+ local_irq_enable();
+
+ return 0;
+}
+
+static unsigned long __irq_mask;
+
+/*
+ * Setup interrupt masks, etc to enable wakeup by power switch
+ */
+static void __default_power_switch_setup(void)
+{
+ /* default is to mask all interrupt sources. */
+ __irq_mask = *(unsigned long *)0xfeff9820;
+ *(unsigned long *)0xfeff9820 = 0xfffe0000;
+}
+
+/*
+ * Cleanup interrupt masks, etc after wakeup by power switch
+ */
+static void __default_power_switch_cleanup(void)
+{
+ *(unsigned long *)0xfeff9820 = __irq_mask;
+}
+
+/*
+ * Return non-zero if wakeup irq was caused by power switch
+ */
+static int __default_power_switch_check(void)
+{
+ return 1;
+}
+
+void (*__power_switch_wake_setup)(void) = __default_power_switch_setup;
+int (*__power_switch_wake_check)(void) = __default_power_switch_check;
+void (*__power_switch_wake_cleanup)(void) = __default_power_switch_cleanup;
+
+int pm_do_bus_sleep(void)
+{
+ local_irq_disable();
+
+ /*
+ * Here is where we need some platform-dependent setup
+ * of the interrupt state so that appropriate wakeup
+ * sources are allowed and all others are masked.
+ */
+ __power_switch_wake_setup();
+
+ __set_LEDS(0xa1);
+
+ /* go zzz
+ *
+	 * This is in a loop in case the power switch shares an IRQ with other
+ * devices. The wake_check() tells us if we need to finish waking
+ * or go back to sleep.
+ */
+ do {
+ frv_cpu_suspend(HSR0_PDM_BUS_SLEEP);
+ } while (__power_switch_wake_check && !__power_switch_wake_check());
+
+ __set_LEDS(0xa2);
+
+ /*
+ * Here is where we need some platform-dependent restore
+ * of the interrupt state prior to being called.
+ */
+ __power_switch_wake_cleanup();
+
+ local_irq_enable();
+
+ return 0;
+}
+
+unsigned long sleep_phys_sp(void *sp)
+{
+ return virt_to_phys(sp);
+}
+
+#ifdef CONFIG_SYSCTL
+/*
+ * Use a temporary sysctl number. Horrid, but will be cleaned up in 2.6
+ * when all the PM interfaces exist nicely.
+ */
+#define CTL_PM 9899
+#define CTL_PM_SUSPEND 1
+#define CTL_PM_CMODE 2
+#define CTL_PM_P0 4
+#define CTL_PM_CM 5
+
+static int user_atoi(char *ubuf, size_t len)
+{
+ char buf[16];
+ unsigned long ret;
+
+ if (len > 15)
+ return -EINVAL;
+
+ if (copy_from_user(buf, ubuf, len))
+ return -EFAULT;
+
+ buf[len] = 0;
+ ret = simple_strtoul(buf, NULL, 0);
+ if (ret > INT_MAX)
+ return -ERANGE;
+ return ret;
+}
+
+/*
+ * Send us to sleep.
+ */
+static int sysctl_pm_do_suspend(ctl_table *ctl, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *fpos)
+{
+ int retval, mode;
+
+ if (*lenp <= 0)
+ return -EIO;
+
+ mode = user_atoi(buffer, *lenp);
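+	/* only two modes are accepted: 1 selects the normal suspend path and
+	 * 5 selects bus sleep (see the dispatch below) */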
+ if ((mode != 1) && (mode != 5))
+ return -EINVAL;
+
+ retval = pm_send_all(PM_SUSPEND, (void *)3);
+
+ if (retval == 0) {
+ if (mode == 5)
+ retval = pm_do_bus_sleep();
+ else
+ retval = pm_do_suspend();
+ pm_send_all(PM_RESUME, (void *)0);
+ }
+
+ return retval;
+}
+
+static int try_set_cmode(int new_cmode)
+{
+ if (new_cmode > 15)
+ return -EINVAL;
+ if (!(clock_cmodes_permitted & (1<<new_cmode)))
+ return -EINVAL;
+
+ /* tell all the drivers we're suspending */
+ pm_send_all(PM_SUSPEND, (void *)3);
+
+ /* now change cmode */
+ local_irq_disable();
+ frv_dma_pause_all();
+
+ frv_change_cmode(new_cmode);
+
+ determine_clocks(0);
+ time_divisor_init();
+
+#ifdef DEBUG
+ determine_clocks(1);
+#endif
+ frv_dma_resume_all();
+ local_irq_enable();
+
+ /* tell all the drivers we're resuming */
+ pm_send_all(PM_RESUME, (void *)0);
+ return 0;
+}
+
+
+static int cmode_procctl(ctl_table *ctl, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *fpos)
+{
+ int new_cmode;
+
+ if (!write)
+ return proc_dointvec(ctl, write, filp, buffer, lenp, fpos);
+
+ new_cmode = user_atoi(buffer, *lenp);
+
+ return try_set_cmode(new_cmode)?:*lenp;
+}
+
+static int cmode_sysctl(ctl_table *table, int *name, int nlen,
+ void *oldval, size_t *oldlenp,
+ void *newval, size_t newlen, void **context)
+{
+ if (oldval && oldlenp) {
+ size_t oldlen;
+
+ if (get_user(oldlen, oldlenp))
+ return -EFAULT;
+
+ if (oldlen != sizeof(int))
+ return -EINVAL;
+
+ if (put_user(clock_cmode_current, (unsigned int *)oldval) ||
+ put_user(sizeof(int), oldlenp))
+ return -EFAULT;
+ }
+ if (newval && newlen) {
+ int new_cmode;
+
+ if (newlen != sizeof(int))
+ return -EINVAL;
+
+ if (get_user(new_cmode, (int *)newval))
+ return -EFAULT;
+
+ return try_set_cmode(new_cmode)?:1;
+ }
+ return 1;
+}
+
+static int try_set_p0(int new_p0)
+{
+ unsigned long flags, clkc;
+
+ if (new_p0 < 0 || new_p0 > 1)
+ return -EINVAL;
+
+ local_irq_save(flags);
+ __set_PSR(flags & ~PSR_ET);
+
+ frv_dma_pause_all();
+
+ clkc = __get_CLKC();
+ if (new_p0)
+ clkc |= CLKC_P0;
+ else
+ clkc &= ~CLKC_P0;
+ __set_CLKC(clkc);
+
+ determine_clocks(0);
+ time_divisor_init();
+
+#ifdef DEBUG
+ determine_clocks(1);
+#endif
+ frv_dma_resume_all();
+ local_irq_restore(flags);
+ return 0;
+}
+
+static int try_set_cm(int new_cm)
+{
+ unsigned long flags, clkc;
+
+ if (new_cm < 0 || new_cm > 1)
+ return -EINVAL;
+
+ local_irq_save(flags);
+ __set_PSR(flags & ~PSR_ET);
+
+ frv_dma_pause_all();
+
+ clkc = __get_CLKC();
+ clkc &= ~CLKC_CM;
+ clkc |= new_cm;
+ __set_CLKC(clkc);
+
+ determine_clocks(0);
+ time_divisor_init();
+
+#if 1 //def DEBUG
+ determine_clocks(1);
+#endif
+
+ frv_dma_resume_all();
+ local_irq_restore(flags);
+ return 0;
+}
+
+static int p0_procctl(ctl_table *ctl, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *fpos)
+{
+ int new_p0;
+
+ if (!write)
+ return proc_dointvec(ctl, write, filp, buffer, lenp, fpos);
+
+ new_p0 = user_atoi(buffer, *lenp);
+
+ return try_set_p0(new_p0)?:*lenp;
+}
+
+static int p0_sysctl(ctl_table *table, int *name, int nlen,
+ void *oldval, size_t *oldlenp,
+ void *newval, size_t newlen, void **context)
+{
+ if (oldval && oldlenp) {
+ size_t oldlen;
+
+ if (get_user(oldlen, oldlenp))
+ return -EFAULT;
+
+ if (oldlen != sizeof(int))
+ return -EINVAL;
+
+ if (put_user(clock_p0_current, (unsigned int *)oldval) ||
+ put_user(sizeof(int), oldlenp))
+ return -EFAULT;
+ }
+ if (newval && newlen) {
+ int new_p0;
+
+ if (newlen != sizeof(int))
+ return -EINVAL;
+
+ if (get_user(new_p0, (int *)newval))
+ return -EFAULT;
+
+ return try_set_p0(new_p0)?:1;
+ }
+ return 1;
+}
+
+static int cm_procctl(ctl_table *ctl, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *fpos)
+{
+ int new_cm;
+
+ if (!write)
+ return proc_dointvec(ctl, write, filp, buffer, lenp, fpos);
+
+ new_cm = user_atoi(buffer, *lenp);
+
+ return try_set_cm(new_cm)?:*lenp;
+}
+
+static int cm_sysctl(ctl_table *table, int *name, int nlen,
+ void *oldval, size_t *oldlenp,
+ void *newval, size_t newlen, void **context)
+{
+ if (oldval && oldlenp) {
+ size_t oldlen;
+
+ if (get_user(oldlen, oldlenp))
+ return -EFAULT;
+
+ if (oldlen != sizeof(int))
+ return -EINVAL;
+
+ if (put_user(clock_cm_current, (unsigned int *)oldval) ||
+ put_user(sizeof(int), oldlenp))
+ return -EFAULT;
+ }
+ if (newval && newlen) {
+ int new_cm;
+
+ if (newlen != sizeof(int))
+ return -EINVAL;
+
+ if (get_user(new_cm, (int *)newval))
+ return -EFAULT;
+
+ return try_set_cm(new_cm)?:1;
+ }
+ return 1;
+}
+
+
+static struct ctl_table pm_table[] =
+{
+ {CTL_PM_SUSPEND, "suspend", NULL, 0, 0200, NULL, &sysctl_pm_do_suspend},
+ {CTL_PM_CMODE, "cmode", &clock_cmode_current, sizeof(int), 0644, NULL, &cmode_procctl, &cmode_sysctl, NULL},
+ {CTL_PM_P0, "p0", &clock_p0_current, sizeof(int), 0644, NULL, &p0_procctl, &p0_sysctl, NULL},
+ {CTL_PM_CM, "cm", &clock_cm_current, sizeof(int), 0644, NULL, &cm_procctl, &cm_sysctl, NULL},
+ {0}
+};
+
+static struct ctl_table pm_dir_table[] =
+{
+ {CTL_PM, "pm", NULL, 0, 0555, pm_table},
+ {0}
+};
+
+/*
+ * Initialize power interface
+ */
+static int __init pm_init(void)
+{
+ register_sysctl_table(pm_dir_table, 1);
+ return 0;
+}
+
+__initcall(pm_init);
+
+#endif
--- /dev/null
+/* process.c: FRV specific parts of process handling
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/process.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/user.h>
+#include <linux/elf.h>
+#include <linux/reboot.h>
+#include <linux/interrupt.h>
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/setup.h>
+#include <asm/pgtable.h>
+#include <asm/gdb-stub.h>
+#include <asm/mb-regs.h>
+
+#include "local.h"
+
+asmlinkage void ret_from_fork(void);
+
+#include <asm/pgalloc.h>
+
+struct task_struct *alloc_task_struct(void)
+{
+ struct task_struct *p = kmalloc(THREAD_SIZE, GFP_KERNEL);
+ if (p)
+ atomic_set((atomic_t *)(p+1), 1);
+ return p;
+}
+
+void free_task_struct(struct task_struct *p)
+{
+ if (atomic_dec_and_test((atomic_t *)(p+1)))
+ kfree(p);
+}
+
+static void core_sleep_idle(void)
+{
+#ifdef LED_DEBUG_SLEEP
+ /* Show that we're sleeping... */
+ __set_LEDS(0x55aa);
+#endif
+ frv_cpu_core_sleep();
+#ifdef LED_DEBUG_SLEEP
+ /* ... and that we woke up */
+ __set_LEDS(0);
+#endif
+ mb();
+}
+
+void (*idle)(void) = core_sleep_idle;
+
+/*
+ * The idle thread. There's no useful work to be
+ * done, so just try to conserve power and have a
+ * low exit latency (ie sit in a loop waiting for
+ * somebody to say that they'd like to reschedule)
+ */
+void cpu_idle(void)
+{
+ /* endless idle loop with no priority at all */
+ while (1) {
+ while (!need_resched()) {
+ irq_stat[smp_processor_id()].idle_timestamp = jiffies;
+
+ if (!frv_dma_inprogress && idle)
+ idle();
+ }
+
+ schedule();
+ }
+}
+
+void machine_restart(char * __unused)
+{
+ unsigned long reset_addr;
+#ifdef CONFIG_GDBSTUB
+ gdbstub_exit(0);
+#endif
+
+ if (PSR_IMPLE(__get_PSR()) == PSR_IMPLE_FR551)
+ reset_addr = 0xfefff500;
+ else
+ reset_addr = 0xfeff0500;
+
+ /* Software reset. */
+ asm volatile(" dcef @(gr0,gr0),1 ! membar !"
+ " sti %1,@(%0,0) !"
+ " nop ! nop ! nop ! nop ! nop ! "
+ " nop ! nop ! nop ! nop ! nop ! "
+ " nop ! nop ! nop ! nop ! nop ! "
+ " nop ! nop ! nop ! nop ! nop ! "
+ : : "r" (reset_addr), "r" (1) );
+
+ for (;;)
+ ;
+}
+
+void machine_halt(void)
+{
+#ifdef CONFIG_GDBSTUB
+ gdbstub_exit(0);
+#endif
+
+ for (;;);
+}
+
+void machine_power_off(void)
+{
+#ifdef CONFIG_GDBSTUB
+ gdbstub_exit(0);
+#endif
+
+ for (;;);
+}
+
+void flush_thread(void)
+{
+#if 0 //ndef NO_FPU
+ unsigned long zero = 0;
+#endif
+ set_fs(USER_DS);
+}
+
+inline unsigned long user_stack(const struct pt_regs *regs)
+{
+ while (regs->next_frame)
+ regs = regs->next_frame;
+ return user_mode(regs) ? regs->sp : 0;
+}
+
+asmlinkage int sys_fork(void)
+{
+#ifndef CONFIG_MMU
+ /* fork almost works, enough to trick you into looking elsewhere:-( */
+ return -EINVAL;
+#else
+ return do_fork(SIGCHLD, user_stack(__frame), __frame, 0, NULL, NULL);
+#endif
+}
+
+asmlinkage int sys_vfork(void)
+{
+ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, user_stack(__frame), __frame, 0,
+ NULL, NULL);
+}
+
+/*****************************************************************************/
+/*
+ * clone a process
+ * - tlsptr is retrieved by copy_thread()
+ */
+asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
+ int __user *parent_tidptr, int __user *child_tidptr,
+ int __user *tlsptr)
+{
+ if (!newsp)
+ newsp = user_stack(__frame);
+ return do_fork(clone_flags, newsp, __frame, 0, parent_tidptr, child_tidptr);
+} /* end sys_clone() */
+
+/*****************************************************************************/
+/*
+ * This gets called before we allocate a new thread and copy
+ * the current task into it.
+ */
+void prepare_to_copy(struct task_struct *tsk)
+{
+ //unlazy_fpu(tsk);
+} /* end prepare_to_copy() */
+
+/*****************************************************************************/
+/*
+ * set up the kernel stack and exception frames for a new process
+ */
+int copy_thread(int nr, unsigned long clone_flags,
+ unsigned long usp, unsigned long topstk,
+ struct task_struct *p, struct pt_regs *regs)
+{
+ struct pt_regs *childregs0, *childregs, *regs0;
+
+ regs0 = __kernel_frame0_ptr;
+ childregs0 = (struct pt_regs *)
+ ((unsigned long) p->thread_info + THREAD_SIZE - USER_CONTEXT_SIZE);
+ childregs = childregs0;
+
+ /* set up the userspace frame (the only place that the USP is stored) */
+ *childregs0 = *regs0;
+
+ childregs0->gr8 = 0;
+ childregs0->sp = usp;
+ childregs0->next_frame = NULL;
+
+ /* set up the return kernel frame if called from kernel_thread() */
+ if (regs != regs0) {
+ childregs--;
+ *childregs = *regs;
+ childregs->sp = (unsigned long) childregs0;
+ childregs->next_frame = childregs0;
+ childregs->gr15 = (unsigned long) p->thread_info;
+ childregs->gr29 = (unsigned long) p;
+ }
+
+ p->set_child_tid = p->clear_child_tid = NULL;
+
+ p->thread.frame = childregs;
+ p->thread.curr = p;
+ p->thread.sp = (unsigned long) childregs;
+ p->thread.fp = 0;
+ p->thread.lr = 0;
+ p->thread.pc = (unsigned long) ret_from_fork;
+ p->thread.frame0 = childregs0;
+
+ /* the new TLS pointer is passed in as arg #5 to sys_clone() */
+ if (clone_flags & CLONE_SETTLS)
+ childregs->gr29 = childregs->gr12;
+
+ save_user_regs(p->thread.user);
+
+ return 0;
+} /* end copy_thread() */
+
+/*
+ * fill in the user structure for a core dump..
+ */
+void dump_thread(struct pt_regs *regs, struct user *dump)
+{
+#if 0
+ /* changed the size calculations - should hopefully work better. lbt */
+ dump->magic = CMAGIC;
+ dump->start_code = 0;
+ dump->start_stack = user_stack(regs) & ~(PAGE_SIZE - 1);
+ dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
+ dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
+ dump->u_dsize -= dump->u_tsize;
+ dump->u_ssize = 0;
+
+ if (dump->start_stack < TASK_SIZE)
+ dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> PAGE_SHIFT;
+
+ dump->regs = *(struct user_context *) regs;
+#endif
+}
+
+/*
+ * sys_execve() executes a new program.
+ */
+asmlinkage int sys_execve(char *name, char **argv, char **envp)
+{
+ int error;
+ char * filename;
+
+ lock_kernel();
+ filename = getname(name);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
+ goto out;
+ error = do_execve(filename, argv, envp, __frame);
+ putname(filename);
+ out:
+ unlock_kernel();
+ return error;
+}
+
+unsigned long get_wchan(struct task_struct *p)
+{
+ struct pt_regs *regs0;
+ unsigned long fp, pc;
+ unsigned long stack_limit;
+ int count = 0;
+ if (!p || p == current || p->state == TASK_RUNNING)
+ return 0;
+
+ stack_limit = (unsigned long) (p + 1);
+ fp = p->thread.fp;
+ regs0 = p->thread.frame0;
+
+ do {
+ if (fp < stack_limit || fp >= (unsigned long) regs0 || fp & 3)
+ return 0;
+
+ pc = ((unsigned long *) fp)[2];
+
+ /* FIXME: This depends on the order of these functions. */
+ if (!in_sched_functions(pc))
+ return pc;
+
+ fp = *(unsigned long *) fp;
+ } while (count++ < 16);
+
+ return 0;
+}
+
+unsigned long thread_saved_pc(struct task_struct *tsk)
+{
+ /* Check whether the thread is blocked in resume() */
+ if (in_sched_functions(tsk->thread.pc))
+ return ((unsigned long *)tsk->thread.fp)[2];
+ else
+ return tsk->thread.pc;
+}
+
+int elf_check_arch(const struct elf32_hdr *hdr)
+{
+ unsigned long hsr0 = __get_HSR(0);
+ unsigned long psr = __get_PSR();
+
+ if (hdr->e_machine != EM_FRV)
+ return 0;
+
+ switch (hdr->e_flags & EF_FRV_GPR_MASK) {
+ case EF_FRV_GPR64:
+ if ((hsr0 & HSR0_GRN) == HSR0_GRN_32)
+ return 0;
+ case EF_FRV_GPR32:
+ case 0:
+ break;
+ default:
+ return 0;
+ }
+
+ switch (hdr->e_flags & EF_FRV_FPR_MASK) {
+ case EF_FRV_FPR64:
+ if ((hsr0 & HSR0_FRN) == HSR0_FRN_32)
+ return 0;
+ case EF_FRV_FPR32:
+ case EF_FRV_FPR_NONE:
+ case 0:
+ break;
+ default:
+ return 0;
+ }
+
+ if ((hdr->e_flags & EF_FRV_MULADD) == EF_FRV_MULADD)
+ if (PSR_IMPLE(psr) != PSR_IMPLE_FR405 &&
+ PSR_IMPLE(psr) != PSR_IMPLE_FR451)
+ return 0;
+
+ switch (hdr->e_flags & EF_FRV_CPU_MASK) {
+ case EF_FRV_CPU_GENERIC:
+ break;
+ case EF_FRV_CPU_FR300:
+ case EF_FRV_CPU_SIMPLE:
+ case EF_FRV_CPU_TOMCAT:
+ default:
+ return 0;
+ case EF_FRV_CPU_FR400:
+ if (PSR_IMPLE(psr) != PSR_IMPLE_FR401 &&
+ PSR_IMPLE(psr) != PSR_IMPLE_FR405 &&
+ PSR_IMPLE(psr) != PSR_IMPLE_FR451 &&
+ PSR_IMPLE(psr) != PSR_IMPLE_FR551)
+ return 0;
+ break;
+ case EF_FRV_CPU_FR450:
+ if (PSR_IMPLE(psr) != PSR_IMPLE_FR451)
+ return 0;
+ break;
+ case EF_FRV_CPU_FR500:
+ if (PSR_IMPLE(psr) != PSR_IMPLE_FR501)
+ return 0;
+ break;
+ case EF_FRV_CPU_FR550:
+ if (PSR_IMPLE(psr) != PSR_IMPLE_FR551)
+ return 0;
+ break;
+ }
+
+ return 1;
+}
--- /dev/null
+/* ptrace.c: FRV specific parts of process tracing
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/ptrace.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/user.h>
+#include <linux/config.h>
+#include <linux/security.h>
+
+#include <asm/uaccess.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/unistd.h>
+
+/*
+ * does not yet catch signals sent when the child dies;
+ * that needs to be handled in exit.c or in signal.c.
+ */
+
+/*
+ * Get contents of register REGNO in task TASK.
+ */
+static inline long get_reg(struct task_struct *task, int regno)
+{
+ struct user_context *user = task->thread.user;
+
+ if (regno < 0 || regno >= PT__END)
+ return 0;
+
+ return ((unsigned long *) user)[regno];
+}
+
+/*
+ * Write contents of register REGNO in task TASK.
+ */
+static inline int put_reg(struct task_struct *task, int regno,
+ unsigned long data)
+{
+ struct user_context *user = task->thread.user;
+
+ if (regno < 0 || regno >= PT__END)
+ return -EIO;
+
+ switch (regno) {
+ case PT_GR(0):
+ return 0;
+ case PT_PSR:
+ case PT__STATUS:
+ return -EIO;
+ default:
+ ((unsigned long *) user)[regno] = data;
+ return 0;
+ }
+}
+
+/*
+ * check that an address falls within the bounds of the target process's memory mappings
+ */
+static inline int is_user_addr_valid(struct task_struct *child,
+ unsigned long start, unsigned long len)
+{
+#ifdef CONFIG_MMU
+ if (start >= PAGE_OFFSET || len > PAGE_OFFSET - start)
+ return -EIO;
+ return 0;
+#else
+ struct vm_list_struct *vml;
+
+ for (vml = child->mm->context.vmlist; vml; vml = vml->next)
+ if (start >= vml->vma->vm_start && start + len <= vml->vma->vm_end)
+ return 0;
+
+ return -EIO;
+#endif
+}
+
+/*
+ * Called by kernel/ptrace.c when detaching..
+ *
+ * Control h/w single stepping
+ */
+void ptrace_disable(struct task_struct *child)
+{
+ child->thread.frame0->__status &= ~REG__STATUS_STEP;
+}
+
+void ptrace_enable(struct task_struct *child)
+{
+ child->thread.frame0->__status |= REG__STATUS_STEP;
+}
+
+asmlinkage int sys_ptrace(long request, long pid, long addr, long data)
+{
+ struct task_struct *child;
+ unsigned long tmp;
+ int ret;
+
+ lock_kernel();
+ ret = -EPERM;
+ if (request == PTRACE_TRACEME) {
+ /* are we already being traced? */
+ if (current->ptrace & PT_PTRACED)
+ goto out;
+ ret = security_ptrace(current->parent, current);
+ if (ret)
+ goto out;
+ /* set the ptrace bit in the process flags. */
+ current->ptrace |= PT_PTRACED;
+ ret = 0;
+ goto out;
+ }
+ ret = -ESRCH;
+ read_lock(&tasklist_lock);
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ read_unlock(&tasklist_lock);
+ if (!child)
+ goto out;
+
+ ret = -EPERM;
+ if (pid == 1) /* you may not mess with init */
+ goto out_tsk;
+
+ if (request == PTRACE_ATTACH) {
+ ret = ptrace_attach(child);
+ goto out_tsk;
+ }
+
+ ret = ptrace_check_attach(child, request == PTRACE_KILL);
+ if (ret < 0)
+ goto out_tsk;
+
+ switch (request) {
+ /* when I and D space are separate, these will need to be fixed. */
+ case PTRACE_PEEKTEXT: /* read word at location addr. */
+ case PTRACE_PEEKDATA: {
+ int copied;
+
+ ret = -EIO;
+ if (is_user_addr_valid(child, addr, sizeof(tmp)) < 0)
+ break;
+
+ copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
+ if (copied != sizeof(tmp))
+ break;
+
+ ret = put_user(tmp,(unsigned long *) data);
+ break;
+ }
+
+ /* read the word at location addr in the USER area. */
+ case PTRACE_PEEKUSR: {
+ tmp = 0;
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ ret = 0;
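+		/* offsets at and beyond PT__END report the text and data sizes,
+		 * the gap between brk and the stack, and the code and stack
+		 * start addresses */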
+ switch (addr >> 2) {
+ case 0 ... PT__END - 1:
+ tmp = get_reg(child, addr >> 2);
+ break;
+
+ case PT__END + 0:
+ tmp = child->mm->end_code - child->mm->start_code;
+ break;
+
+ case PT__END + 1:
+ tmp = child->mm->end_data - child->mm->start_data;
+ break;
+
+ case PT__END + 2:
+ tmp = child->mm->start_stack - child->mm->start_brk;
+ break;
+
+ case PT__END + 3:
+ tmp = child->mm->start_code;
+ break;
+
+ case PT__END + 4:
+ tmp = child->mm->start_stack;
+ break;
+
+ default:
+ ret = -EIO;
+ break;
+ }
+
+ if (ret == 0)
+ ret = put_user(tmp, (unsigned long *) data);
+ break;
+ }
+
+ /* when I and D space are separate, this will have to be fixed. */
+ case PTRACE_POKETEXT: /* write the word at location addr. */
+ case PTRACE_POKEDATA:
+ ret = -EIO;
+ if (is_user_addr_valid(child, addr, sizeof(tmp)) < 0)
+ break;
+ if (access_process_vm(child, addr, &data, sizeof(data), 1) != sizeof(data))
+ break;
+ ret = 0;
+ break;
+
+ case PTRACE_POKEUSR: /* write the word at location addr in the USER area */
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ ret = 0;
+ switch (addr >> 2) {
+ case 0 ... PT__END-1:
+ ret = put_reg(child, addr >> 2, data);
+ break;
+
+ default:
+ ret = -EIO;
+ break;
+ }
+ break;
+
+ case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+ case PTRACE_CONT: /* restart after signal. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ if (request == PTRACE_SYSCALL)
+ set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ else
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ child->exit_code = data;
+ ptrace_disable(child);
+ wake_up_process(child);
+ ret = 0;
+ break;
+
+ /* make the child exit. Best I can do is send it a sigkill.
+ * perhaps it should be put in the status that it wants to
+ * exit.
+ */
+ case PTRACE_KILL:
+ ret = 0;
+ if (child->exit_state == EXIT_ZOMBIE) /* already dead */
+ break;
+ child->exit_code = SIGKILL;
+ clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ ptrace_disable(child);
+ wake_up_process(child);
+ break;
+
+ case PTRACE_SINGLESTEP: /* set the trap flag. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ ptrace_enable(child);
+ child->exit_code = data;
+ wake_up_process(child);
+ ret = 0;
+ break;
+
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = ptrace_detach(child, data);
+ break;
+
+ case PTRACE_GETREGS: { /* Get all integer regs from the child. */
+ int i;
+ for (i = 0; i < PT__GPEND; i++) {
+ tmp = get_reg(child, i);
+ if (put_user(tmp, (unsigned long *) data)) {
+ ret = -EFAULT;
+ break;
+ }
+ data += sizeof(long);
+ }
+ ret = 0;
+ break;
+ }
+
+ case PTRACE_SETREGS: { /* Set all integer regs in the child. */
+ int i;
+ for (i = 0; i < PT__GPEND; i++) {
+ if (get_user(tmp, (unsigned long *) data)) {
+ ret = -EFAULT;
+ break;
+ }
+ put_reg(child, i, tmp);
+ data += sizeof(long);
+ }
+ ret = 0;
+ break;
+ }
+
+ case PTRACE_GETFPREGS: { /* Get the child FP/Media state. */
+ ret = 0;
+ if (copy_to_user((void *) data,
+ &child->thread.user->f,
+ sizeof(child->thread.user->f)))
+ ret = -EFAULT;
+ break;
+ }
+
+ case PTRACE_SETFPREGS: { /* Set the child FP/Media state. */
+ ret = 0;
+ if (copy_from_user(&child->thread.user->f,
+ (void *) data,
+ sizeof(child->thread.user->f)))
+ ret = -EFAULT;
+ break;
+ }
+
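+	/* hand back the address of the executable's or the interpreter's
+	 * FDPIC loadmap, as selected by addr */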
+ case PTRACE_GETFDPIC:
+ tmp = 0;
+ switch (addr) {
+ case PTRACE_GETFDPIC_EXEC:
+ tmp = child->mm->context.exec_fdpic_loadmap;
+ break;
+ case PTRACE_GETFDPIC_INTERP:
+ tmp = child->mm->context.interp_fdpic_loadmap;
+ break;
+ default:
+ break;
+ }
+
+ ret = 0;
+ if (put_user(tmp, (unsigned long *) data)) {
+ ret = -EFAULT;
+ break;
+ }
+ break;
+
+ default:
+ ret = -EIO;
+ break;
+ }
+out_tsk:
+ put_task_struct(child);
+out:
+ unlock_kernel();
+ return ret;
+}
+
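+/* set non-zero to enable the syscall trace dump below (the dump itself is
+ * currently compiled out with #if 0) */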
+int __nongprelbss kstrace;
+
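+/* table of syscall names and argument-format masks used by the trace dump:
+ * each nibble of argmask describes one argument, lowest nibble first -
+ * 1 = decimal, 2 = octal, 3 = hex, 4 = pointer, 5 = string (see the switch
+ * in do_syscall_trace()); 0xffffff means the call takes no arguments, and
+ * 0 means the format is unknown, so all six argument registers are dumped
+ * in hex */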
+static const struct {
+ const char *name;
+ unsigned argmask;
+} __syscall_name_table[NR_syscalls] = {
+ [0] = { "restart_syscall" },
+ [1] = { "exit", 0x000001 },
+ [2] = { "fork", 0xffffff },
+ [3] = { "read", 0x000141 },
+ [4] = { "write", 0x000141 },
+ [5] = { "open", 0x000235 },
+ [6] = { "close", 0x000001 },
+ [7] = { "waitpid", 0x000141 },
+ [8] = { "creat", 0x000025 },
+ [9] = { "link", 0x000055 },
+ [10] = { "unlink", 0x000005 },
+ [11] = { "execve", 0x000445 },
+ [12] = { "chdir", 0x000005 },
+ [13] = { "time", 0x000004 },
+ [14] = { "mknod", 0x000325 },
+ [15] = { "chmod", 0x000025 },
+ [16] = { "lchown", 0x000025 },
+ [17] = { "break" },
+ [18] = { "oldstat", 0x000045 },
+ [19] = { "lseek", 0x000131 },
+ [20] = { "getpid", 0xffffff },
+ [21] = { "mount", 0x043555 },
+ [22] = { "umount", 0x000005 },
+ [23] = { "setuid", 0x000001 },
+ [24] = { "getuid", 0xffffff },
+ [25] = { "stime", 0x000004 },
+ [26] = { "ptrace", 0x004413 },
+ [27] = { "alarm", 0x000001 },
+ [28] = { "oldfstat", 0x000041 },
+ [29] = { "pause", 0xffffff },
+ [30] = { "utime", 0x000045 },
+ [31] = { "stty" },
+ [32] = { "gtty" },
+ [33] = { "access", 0x000025 },
+ [34] = { "nice", 0x000001 },
+ [35] = { "ftime" },
+ [36] = { "sync", 0xffffff },
+ [37] = { "kill", 0x000011 },
+ [38] = { "rename", 0x000055 },
+ [39] = { "mkdir", 0x000025 },
+ [40] = { "rmdir", 0x000005 },
+ [41] = { "dup", 0x000001 },
+ [42] = { "pipe", 0x000004 },
+ [43] = { "times", 0x000004 },
+ [44] = { "prof" },
+ [45] = { "brk", 0x000004 },
+ [46] = { "setgid", 0x000001 },
+ [47] = { "getgid", 0xffffff },
+ [48] = { "signal", 0x000041 },
+ [49] = { "geteuid", 0xffffff },
+ [50] = { "getegid", 0xffffff },
+ [51] = { "acct", 0x000005 },
+ [52] = { "umount2", 0x000035 },
+ [53] = { "lock" },
+ [54] = { "ioctl", 0x000331 },
+ [55] = { "fcntl", 0x000331 },
+ [56] = { "mpx" },
+ [57] = { "setpgid", 0x000011 },
+ [58] = { "ulimit" },
+ [60] = { "umask", 0x000002 },
+ [61] = { "chroot", 0x000005 },
+ [62] = { "ustat", 0x000043 },
+ [63] = { "dup2", 0x000011 },
+ [64] = { "getppid", 0xffffff },
+ [65] = { "getpgrp", 0xffffff },
+ [66] = { "setsid", 0xffffff },
+ [67] = { "sigaction" },
+ [68] = { "sgetmask" },
+ [69] = { "ssetmask" },
+ [70] = { "setreuid" },
+ [71] = { "setregid" },
+ [72] = { "sigsuspend" },
+ [73] = { "sigpending" },
+ [74] = { "sethostname" },
+ [75] = { "setrlimit" },
+ [76] = { "getrlimit" },
+ [77] = { "getrusage" },
+ [78] = { "gettimeofday" },
+ [79] = { "settimeofday" },
+ [80] = { "getgroups" },
+ [81] = { "setgroups" },
+ [82] = { "select" },
+ [83] = { "symlink" },
+ [84] = { "oldlstat" },
+ [85] = { "readlink" },
+ [86] = { "uselib" },
+ [87] = { "swapon" },
+ [88] = { "reboot" },
+ [89] = { "readdir" },
+ [91] = { "munmap", 0x000034 },
+ [92] = { "truncate" },
+ [93] = { "ftruncate" },
+ [94] = { "fchmod" },
+ [95] = { "fchown" },
+ [96] = { "getpriority" },
+ [97] = { "setpriority" },
+ [99] = { "statfs" },
+ [100] = { "fstatfs" },
+ [102] = { "socketcall" },
+ [103] = { "syslog" },
+ [104] = { "setitimer" },
+ [105] = { "getitimer" },
+ [106] = { "stat" },
+ [107] = { "lstat" },
+ [108] = { "fstat" },
+ [111] = { "vhangup" },
+ [114] = { "wait4" },
+ [115] = { "swapoff" },
+ [116] = { "sysinfo" },
+ [117] = { "ipc" },
+ [118] = { "fsync" },
+ [119] = { "sigreturn" },
+ [120] = { "clone" },
+ [121] = { "setdomainname" },
+ [122] = { "uname" },
+	[123] = { "cacheflush" },
+ [124] = { "adjtimex" },
+ [125] = { "mprotect" },
+ [126] = { "sigprocmask" },
+ [127] = { "create_module" },
+ [128] = { "init_module" },
+ [129] = { "delete_module" },
+ [130] = { "get_kernel_syms" },
+ [131] = { "quotactl" },
+ [132] = { "getpgid" },
+ [133] = { "fchdir" },
+ [134] = { "bdflush" },
+ [135] = { "sysfs" },
+ [136] = { "personality" },
+ [137] = { "afs_syscall" },
+ [138] = { "setfsuid" },
+ [139] = { "setfsgid" },
+ [140] = { "_llseek", 0x014331 },
+ [141] = { "getdents" },
+ [142] = { "_newselect", 0x000141 },
+ [143] = { "flock" },
+ [144] = { "msync" },
+ [145] = { "readv" },
+ [146] = { "writev" },
+ [147] = { "getsid", 0x000001 },
+ [148] = { "fdatasync", 0x000001 },
+ [149] = { "_sysctl", 0x000004 },
+ [150] = { "mlock" },
+ [151] = { "munlock" },
+ [152] = { "mlockall" },
+ [153] = { "munlockall" },
+ [154] = { "sched_setparam" },
+ [155] = { "sched_getparam" },
+ [156] = { "sched_setscheduler" },
+ [157] = { "sched_getscheduler" },
+ [158] = { "sched_yield" },
+ [159] = { "sched_get_priority_max" },
+ [160] = { "sched_get_priority_min" },
+ [161] = { "sched_rr_get_interval" },
+ [162] = { "nanosleep", 0x000044 },
+ [163] = { "mremap" },
+ [164] = { "setresuid" },
+ [165] = { "getresuid" },
+ [166] = { "vm86" },
+ [167] = { "query_module" },
+ [168] = { "poll" },
+ [169] = { "nfsservctl" },
+ [170] = { "setresgid" },
+ [171] = { "getresgid" },
+ [172] = { "prctl", 0x333331 },
+ [173] = { "rt_sigreturn", 0xffffff },
+ [174] = { "rt_sigaction", 0x001441 },
+ [175] = { "rt_sigprocmask", 0x001441 },
+ [176] = { "rt_sigpending", 0x000014 },
+ [177] = { "rt_sigtimedwait", 0x001444 },
+ [178] = { "rt_sigqueueinfo", 0x000411 },
+ [179] = { "rt_sigsuspend", 0x000014 },
+ [180] = { "pread", 0x003341 },
+ [181] = { "pwrite", 0x003341 },
+ [182] = { "chown", 0x000115 },
+ [183] = { "getcwd" },
+ [184] = { "capget" },
+ [185] = { "capset" },
+ [186] = { "sigaltstack" },
+ [187] = { "sendfile" },
+ [188] = { "getpmsg" },
+ [189] = { "putpmsg" },
+ [190] = { "vfork", 0xffffff },
+ [191] = { "ugetrlimit" },
+ [192] = { "mmap2", 0x313314 },
+ [193] = { "truncate64" },
+ [194] = { "ftruncate64" },
+ [195] = { "stat64", 0x000045 },
+ [196] = { "lstat64", 0x000045 },
+ [197] = { "fstat64", 0x000041 },
+ [198] = { "lchown32" },
+ [199] = { "getuid32", 0xffffff },
+ [200] = { "getgid32", 0xffffff },
+ [201] = { "geteuid32", 0xffffff },
+ [202] = { "getegid32", 0xffffff },
+ [203] = { "setreuid32" },
+ [204] = { "setregid32" },
+ [205] = { "getgroups32" },
+ [206] = { "setgroups32" },
+ [207] = { "fchown32" },
+ [208] = { "setresuid32" },
+ [209] = { "getresuid32" },
+ [210] = { "setresgid32" },
+ [211] = { "getresgid32" },
+ [212] = { "chown32" },
+ [213] = { "setuid32" },
+ [214] = { "setgid32" },
+ [215] = { "setfsuid32" },
+ [216] = { "setfsgid32" },
+ [217] = { "pivot_root" },
+ [218] = { "mincore" },
+ [219] = { "madvise" },
+ [220] = { "getdents64" },
+ [221] = { "fcntl64" },
+ [223] = { "security" },
+ [224] = { "gettid" },
+ [225] = { "readahead" },
+ [226] = { "setxattr" },
+ [227] = { "lsetxattr" },
+ [228] = { "fsetxattr" },
+ [229] = { "getxattr" },
+ [230] = { "lgetxattr" },
+ [231] = { "fgetxattr" },
+ [232] = { "listxattr" },
+ [233] = { "llistxattr" },
+ [234] = { "flistxattr" },
+ [235] = { "removexattr" },
+ [236] = { "lremovexattr" },
+ [237] = { "fremovexattr" },
+ [238] = { "tkill" },
+ [239] = { "sendfile64" },
+ [240] = { "futex" },
+ [241] = { "sched_setaffinity" },
+ [242] = { "sched_getaffinity" },
+ [243] = { "set_thread_area" },
+ [244] = { "get_thread_area" },
+ [245] = { "io_setup" },
+ [246] = { "io_destroy" },
+ [247] = { "io_getevents" },
+ [248] = { "io_submit" },
+ [249] = { "io_cancel" },
+ [250] = { "fadvise64" },
+ [252] = { "exit_group", 0x000001 },
+ [253] = { "lookup_dcookie" },
+ [254] = { "epoll_create" },
+ [255] = { "epoll_ctl" },
+ [256] = { "epoll_wait" },
+ [257] = { "remap_file_pages" },
+ [258] = { "set_tid_address" },
+ [259] = { "timer_create" },
+ [260] = { "timer_settime" },
+ [261] = { "timer_gettime" },
+ [262] = { "timer_getoverrun" },
+ [263] = { "timer_delete" },
+ [264] = { "clock_settime" },
+ [265] = { "clock_gettime" },
+ [266] = { "clock_getres" },
+ [267] = { "clock_nanosleep" },
+ [268] = { "statfs64" },
+ [269] = { "fstatfs64" },
+ [270] = { "tgkill" },
+ [271] = { "utimes" },
+ [272] = { "fadvise64_64" },
+ [273] = { "vserver" },
+ [274] = { "mbind" },
+ [275] = { "get_mempolicy" },
+ [276] = { "set_mempolicy" },
+ [277] = { "mq_open" },
+ [278] = { "mq_unlink" },
+ [279] = { "mq_timedsend" },
+ [280] = { "mq_timedreceive" },
+ [281] = { "mq_notify" },
+ [282] = { "mq_getsetattr" },
+ [283] = { "sys_kexec_load" },
+};
+
+asmlinkage void do_syscall_trace(int leaving)
+{
+#if 0
+ unsigned long *argp;
+ const char *name;
+ unsigned argmask;
+ char buffer[16];
+
+ if (!kstrace)
+ return;
+
+ if (!current->mm)
+ return;
+
+ if (__frame->gr7 == __NR_close)
+ return;
+
+#if 0
+ if (__frame->gr7 != __NR_mmap2 &&
+ __frame->gr7 != __NR_vfork &&
+ __frame->gr7 != __NR_execve &&
+ __frame->gr7 != __NR_exit)
+ return;
+#endif
+
+ argmask = 0;
+ name = NULL;
+ if (__frame->gr7 < NR_syscalls) {
+ name = __syscall_name_table[__frame->gr7].name;
+ argmask = __syscall_name_table[__frame->gr7].argmask;
+ }
+ if (!name) {
+ sprintf(buffer, "sys_%lx", __frame->gr7);
+ name = buffer;
+ }
+
+ if (!leaving) {
+ if (!argmask) {
+ printk(KERN_CRIT "[%d] %s(%lx,%lx,%lx,%lx,%lx,%lx)\n",
+ current->pid,
+ name,
+ __frame->gr8,
+ __frame->gr9,
+ __frame->gr10,
+ __frame->gr11,
+ __frame->gr12,
+ __frame->gr13);
+ }
+ else if (argmask == 0xffffff) {
+ printk(KERN_CRIT "[%d] %s()\n",
+ current->pid,
+ name);
+ }
+ else {
+ printk(KERN_CRIT "[%d] %s(",
+ current->pid,
+ name);
+
+ argp = &__frame->gr8;
+
+ do {
+ switch (argmask & 0xf) {
+ case 1:
+ printk("%ld", (long) *argp);
+ break;
+ case 2:
+ printk("%lo", *argp);
+ break;
+ case 3:
+ printk("%lx", *argp);
+ break;
+ case 4:
+ printk("%p", (void *) *argp);
+ break;
+ case 5:
+ printk("\"%s\"", (char *) *argp);
+ break;
+ }
+
+ argp++;
+ argmask >>= 4;
+ if (argmask)
+ printk(",");
+
+ } while (argmask);
+
+ printk(")\n");
+ }
+ }
+ else {
+ if ((int)__frame->gr8 > -4096 && (int)__frame->gr8 < 4096)
+ printk(KERN_CRIT "[%d] %s() = %ld\n", current->pid, name, __frame->gr8);
+ else
+ printk(KERN_CRIT "[%d] %s() = %lx\n", current->pid, name, __frame->gr8);
+ }
+ return;
+#endif
+
+ if (!test_thread_flag(TIF_SYSCALL_TRACE))
+ return;
+
+ if (!(current->ptrace & PT_PTRACED))
+ return;
+
+ /* we need to indicate entry or exit to strace */
+ if (leaving)
+ __frame->__status |= REG__STATUS_SYSC_EXIT;
+ else
+ __frame->__status |= REG__STATUS_SYSC_ENTRY;
+
+ ptrace_notify(SIGTRAP);
+
+ /*
+ * this isn't the same as continuing with a signal, but it will do
+ * for normal use. strace only continues with a signal if the
+ * stopping signal is not SIGTRAP. -brl
+ */
+ if (current->exit_code) {
+ send_sig(current->exit_code, current, 1);
+ current->exit_code = 0;
+ }
+}
--- /dev/null
+/* semaphore.c: FR-V semaphores
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from lib/rwsem-spinlock.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <asm/semaphore.h>
+
+struct sem_waiter {
+ struct list_head list;
+ struct task_struct *task;
+};
+
+#if SEM_DEBUG
+void semtrace(struct semaphore *sem, const char *str)
+{
+ if (sem->debug)
+ printk("[%d] %s({%d,%d})\n",
+ current->pid,
+ str,
+ sem->counter,
+ list_empty(&sem->wait_list) ? 0 : 1);
+}
+#else
+#define semtrace(SEM,STR) do { } while(0)
+#endif
+
+/*
+ * wait for a token to be granted from a semaphore
+ * - entered with lock held and interrupts disabled
+ */
+void __down(struct semaphore *sem, unsigned long flags)
+{
+ struct task_struct *tsk = current;
+ struct sem_waiter waiter;
+
+ semtrace(sem, "Entering __down");
+
+ /* set up my own style of waitqueue */
+ waiter.task = tsk;
+ get_task_struct(tsk);
+
+ list_add_tail(&waiter.list, &sem->wait_list);
+
+ /* we don't need to touch the semaphore struct anymore */
+ spin_unlock_irqrestore(&sem->wait_lock, flags);
+
+ /* wait to be given the semaphore */
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+
+ for (;;) {
+ if (list_empty(&waiter.list))
+ break;
+ schedule();
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+ }
+
+ tsk->state = TASK_RUNNING;
+ semtrace(sem, "Leaving __down");
+}
+
+EXPORT_SYMBOL(__down);
+
+/*
+ * interruptibly wait for a token to be granted from a semaphore
+ * - entered with lock held and interrupts disabled
+ */
+int __down_interruptible(struct semaphore *sem, unsigned long flags)
+{
+ struct task_struct *tsk = current;
+ struct sem_waiter waiter;
+ int ret;
+
+ semtrace(sem,"Entering __down_interruptible");
+
+ /* set up my own style of waitqueue */
+ waiter.task = tsk;
+ get_task_struct(tsk);
+
+ list_add_tail(&waiter.list, &sem->wait_list);
+
+ /* we don't need to touch the semaphore struct anymore */
+ set_task_state(tsk, TASK_INTERRUPTIBLE);
+
+ spin_unlock_irqrestore(&sem->wait_lock, flags);
+
+ /* wait to be given the semaphore */
+ ret = 0;
+ for (;;) {
+ if (list_empty(&waiter.list))
+ break;
+ if (unlikely(signal_pending(current)))
+ goto interrupted;
+ schedule();
+ set_task_state(tsk, TASK_INTERRUPTIBLE);
+ }
+
+ out:
+ tsk->state = TASK_RUNNING;
+ semtrace(sem, "Leaving __down_interruptible");
+ return ret;
+
+ interrupted:
+ spin_lock_irqsave(&sem->wait_lock, flags);
+
+ if (!list_empty(&waiter.list)) {
+ list_del(&waiter.list);
+ ret = -EINTR;
+ }
+
+ spin_unlock_irqrestore(&sem->wait_lock, flags);
+ if (ret == -EINTR)
+ put_task_struct(current);
+ goto out;
+}
+
+EXPORT_SYMBOL(__down_interruptible);
+
+/*
+ * release a single token back to a semaphore
+ * - entered with lock held and interrupts disabled
+ */
+void __up(struct semaphore *sem)
+{
+ struct task_struct *tsk;
+ struct sem_waiter *waiter;
+
+ semtrace(sem,"Entering __up");
+
+ /* grant the token to the process at the front of the queue */
+ waiter = list_entry(sem->wait_list.next, struct sem_waiter, list);
+
+ /* We must be careful not to touch 'waiter' after we set ->task = NULL.
+	 * It is allocated on the waiter's stack and may become invalid at
+ * any time after that point (due to a wakeup from another source).
+ */
+ list_del_init(&waiter->list);
+ tsk = waiter->task;
+ mb();
+ waiter->task = NULL;
+ wake_up_process(tsk);
+ put_task_struct(tsk);
+
+ semtrace(sem,"Leaving __up");
+}
+
+EXPORT_SYMBOL(__up);
--- /dev/null
+/* setup.c: FRV specific setup
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/setup.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/fb.h>
+#include <linux/console.h>
+#include <linux/genhd.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/major.h>
+#include <linux/bootmem.h>
+#include <linux/highmem.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <linux/serial_reg.h>
+
+#include <asm/setup.h>
+#include <asm/serial.h>
+#include <asm/irq.h>
+#include <asm/sections.h>
+#include <asm/pgalloc.h>
+#include <asm/busctl-regs.h>
+#include <asm/serial-regs.h>
+#include <asm/timer-regs.h>
+#include <asm/irc-regs.h>
+#include <asm/spr-regs.h>
+#include <asm/mb-regs.h>
+#include <asm/mb93493-regs.h>
+#include <asm/gdb-stub.h>
+#include <asm/irq-routing.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_BLK_DEV_INITRD
+#include <linux/blk.h>
+#include <asm/pgtable.h>
+#endif
+
+#include "local.h"
+
+#ifdef CONFIG_MB93090_MB00
+static void __init mb93090_display(void);
+#endif
+#ifdef CONFIG_MMU
+static void __init setup_linux_memory(void);
+#else
+static void __init setup_uclinux_memory(void);
+#endif
+
+#ifdef CONFIG_CONSOLE
+extern struct consw *conswitchp;
+#ifdef CONFIG_FRAMEBUFFER
+extern struct consw fb_con;
+#endif
+#endif
+
+#ifdef CONFIG_MB93090_MB00
+static char __initdata mb93090_banner[] = "FJ/RH FR-V Linux";
+static char __initdata mb93090_version[] = UTS_RELEASE;
+
+int __nongprelbss mb93090_mb00_detected;
+#endif
+
+const char __frv_unknown_system[] = "unknown";
+const char __frv_mb93091_cb10[] = "mb93091-cb10";
+const char __frv_mb93091_cb11[] = "mb93091-cb11";
+const char __frv_mb93091_cb30[] = "mb93091-cb30";
+const char __frv_mb93091_cb41[] = "mb93091-cb41";
+const char __frv_mb93091_cb60[] = "mb93091-cb60";
+const char __frv_mb93091_cb70[] = "mb93091-cb70";
+const char __frv_mb93091_cb451[] = "mb93091-cb451";
+const char __frv_mb93090_mb00[] = "mb93090-mb00";
+
+const char __frv_mb93493[] = "mb93493";
+
+const char __frv_mb93093[] = "mb93093";
+
+static const char *__nongprelbss cpu_series;
+static const char *__nongprelbss cpu_core;
+static const char *__nongprelbss cpu_silicon;
+static const char *__nongprelbss cpu_mmu;
+static const char *__nongprelbss cpu_system;
+static const char *__nongprelbss cpu_board1;
+static const char *__nongprelbss cpu_board2;
+
+static unsigned long __nongprelbss cpu_psr_all;
+static unsigned long __nongprelbss cpu_hsr0_all;
+
+unsigned long __nongprelbss pdm_suspend_mode;
+
+unsigned long __nongprelbss rom_length;
+unsigned long __nongprelbss memory_start;
+unsigned long __nongprelbss memory_end;
+
+unsigned long __nongprelbss dma_coherent_mem_start;
+unsigned long __nongprelbss dma_coherent_mem_end;
+
+unsigned long __initdata __sdram_old_base;
+unsigned long __initdata num_mappedpages;
+
+struct cpuinfo_frv __nongprelbss boot_cpu_data;
+
+char command_line[COMMAND_LINE_SIZE];
+char __initdata redboot_command_line[COMMAND_LINE_SIZE];
+
+#ifdef CONFIG_PM
+#define __pminit
+#define __pminitdata
+#else
+#define __pminit __init
+#define __pminitdata __initdata
+#endif
+
+struct clock_cmode {
+ uint8_t xbus, sdram, corebus, core, dsu;
+};
+
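+/* clock ratios are encoded as a fraction in one byte: numerator in the high
+ * nibble, denominator in the low nibble; the CLOCK() macro in
+ * determine_clocks() decodes them */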
+#define _frac(N,D) ((N)<<4 | (D))
+#define _x0_16 _frac(1,6)
+#define _x0_25 _frac(1,4)
+#define _x0_33 _frac(1,3)
+#define _x0_375 _frac(3,8)
+#define _x0_5 _frac(1,2)
+#define _x0_66 _frac(2,3)
+#define _x0_75 _frac(3,4)
+#define _x1 _frac(1,1)
+#define _x1_5 _frac(3,2)
+#define _x2 _frac(2,1)
+#define _x3 _frac(3,1)
+#define _x4 _frac(4,1)
+#define _x4_5 _frac(9,2)
+#define _x6 _frac(6,1)
+#define _x8 _frac(8,1)
+#define _x9 _frac(9,1)
+
+int __nongprelbss clock_p0_current;
+int __nongprelbss clock_cm_current;
+int __nongprelbss clock_cmode_current;
+#ifdef CONFIG_PM
+int __nongprelbss clock_cmodes_permitted;
+unsigned long __nongprelbss clock_bits_settable;
+#endif
+
+static struct clock_cmode __pminitdata undef_clock_cmode = { _x1, _x1, _x1, _x1, _x1 };
+
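+/* per-CPU tables of clock ratios, indexed by the CMODE field of the CLKC
+ * register (see determine_clocks()) */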
+static struct clock_cmode __pminitdata clock_cmodes_fr401_fr403[16] = {
+ [4] = { _x1, _x1, _x2, _x2, _x0_25 },
+ [5] = { _x1, _x2, _x4, _x4, _x0_5 },
+ [8] = { _x1, _x1, _x1, _x2, _x0_25 },
+ [9] = { _x1, _x2, _x2, _x4, _x0_5 },
+ [11] = { _x1, _x4, _x4, _x8, _x1 },
+ [12] = { _x1, _x1, _x2, _x4, _x0_5 },
+ [13] = { _x1, _x2, _x4, _x8, _x1 },
+};
+
+static struct clock_cmode __pminitdata clock_cmodes_fr405[16] = {
+ [0] = { _x1, _x1, _x1, _x1, _x0_5 },
+ [1] = { _x1, _x1, _x1, _x3, _x0_25 },
+ [2] = { _x1, _x1, _x2, _x6, _x0_5 },
+ [3] = { _x1, _x2, _x2, _x6, _x0_5 },
+ [4] = { _x1, _x1, _x2, _x2, _x0_16 },
+ [8] = { _x1, _x1, _x1, _x2, _x0_16 },
+ [9] = { _x1, _x2, _x2, _x4, _x0_33 },
+ [12] = { _x1, _x1, _x2, _x4, _x0_33 },
+ [14] = { _x1, _x3, _x3, _x9, _x0_75 },
+ [15] = { _x1, _x1_5, _x1_5, _x4_5, _x0_375 },
+
+#define CLOCK_CMODES_PERMITTED_FR405 0xd31f
+};
+
+static struct clock_cmode __pminitdata clock_cmodes_fr555[16] = {
+ [0] = { _x1, _x2, _x2, _x4, _x0_33 },
+ [1] = { _x1, _x3, _x3, _x6, _x0_5 },
+ [2] = { _x1, _x2, _x4, _x8, _x0_66 },
+ [3] = { _x1, _x1_5, _x3, _x6, _x0_5 },
+ [4] = { _x1, _x3, _x3, _x9, _x0_75 },
+ [5] = { _x1, _x2, _x2, _x6, _x0_5 },
+ [6] = { _x1, _x1_5, _x1_5, _x4_5, _x0_375 },
+};
+
+static const struct clock_cmode __pminitdata *clock_cmodes;
+static int __pminitdata clock_doubled;
+
+static struct uart_port __initdata __frv_uart0 = {
+ .uartclk = 0,
+ .membase = (char *) UART0_BASE,
+ .irq = IRQ_CPU_UART0,
+ .regshift = 3,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
+};
+
+static struct uart_port __initdata __frv_uart1 = {
+ .uartclk = 0,
+ .membase = (char *) UART1_BASE,
+ .irq = IRQ_CPU_UART1,
+ .regshift = 3,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
+};
+
+#if 0
+static void __init printk_xampr(unsigned long ampr, unsigned long amlr, char i_d, int n)
+{
+ unsigned long phys, virt, cxn, size;
+
+#ifdef CONFIG_MMU
+ virt = amlr & 0xffffc000;
+ cxn = amlr & 0x3fff;
+#else
+ virt = ampr & 0xffffc000;
+ cxn = 0;
+#endif
+ phys = ampr & xAMPRx_PPFN;
+ size = 1 << (((ampr & xAMPRx_SS) >> 4) + 17);
+
+ printk("%cAMPR%d: va %08lx-%08lx [pa %08lx] %c%c%c%c [cxn:%04lx]\n",
+ i_d, n,
+ virt, virt + size - 1,
+ phys,
+ ampr & xAMPRx_S ? 'S' : '-',
+ ampr & xAMPRx_C ? 'C' : '-',
+ ampr & DAMPRx_WP ? 'W' : '-',
+ ampr & xAMPRx_V ? 'V' : '-',
+ cxn
+ );
+}
+#endif
+
+/*****************************************************************************/
+/*
+ * dump the memory map
+ */
+static void __init dump_memory_map(void)
+{
+
+#if 0
+ /* dump the protection map */
+ printk_xampr(__get_IAMPR(0), __get_IAMLR(0), 'I', 0);
+ printk_xampr(__get_IAMPR(1), __get_IAMLR(1), 'I', 1);
+ printk_xampr(__get_IAMPR(2), __get_IAMLR(2), 'I', 2);
+ printk_xampr(__get_IAMPR(3), __get_IAMLR(3), 'I', 3);
+ printk_xampr(__get_IAMPR(4), __get_IAMLR(4), 'I', 4);
+ printk_xampr(__get_IAMPR(5), __get_IAMLR(5), 'I', 5);
+ printk_xampr(__get_IAMPR(6), __get_IAMLR(6), 'I', 6);
+ printk_xampr(__get_IAMPR(7), __get_IAMLR(7), 'I', 7);
+ printk_xampr(__get_IAMPR(8), __get_IAMLR(8), 'I', 8);
+ printk_xampr(__get_IAMPR(9), __get_IAMLR(9), 'i', 9);
+ printk_xampr(__get_IAMPR(10), __get_IAMLR(10), 'I', 10);
+ printk_xampr(__get_IAMPR(11), __get_IAMLR(11), 'I', 11);
+ printk_xampr(__get_IAMPR(12), __get_IAMLR(12), 'I', 12);
+ printk_xampr(__get_IAMPR(13), __get_IAMLR(13), 'I', 13);
+ printk_xampr(__get_IAMPR(14), __get_IAMLR(14), 'I', 14);
+ printk_xampr(__get_IAMPR(15), __get_IAMLR(15), 'I', 15);
+
+ printk_xampr(__get_DAMPR(0), __get_DAMLR(0), 'D', 0);
+ printk_xampr(__get_DAMPR(1), __get_DAMLR(1), 'D', 1);
+ printk_xampr(__get_DAMPR(2), __get_DAMLR(2), 'D', 2);
+ printk_xampr(__get_DAMPR(3), __get_DAMLR(3), 'D', 3);
+ printk_xampr(__get_DAMPR(4), __get_DAMLR(4), 'D', 4);
+ printk_xampr(__get_DAMPR(5), __get_DAMLR(5), 'D', 5);
+ printk_xampr(__get_DAMPR(6), __get_DAMLR(6), 'D', 6);
+ printk_xampr(__get_DAMPR(7), __get_DAMLR(7), 'D', 7);
+ printk_xampr(__get_DAMPR(8), __get_DAMLR(8), 'D', 8);
+ printk_xampr(__get_DAMPR(9), __get_DAMLR(9), 'D', 9);
+ printk_xampr(__get_DAMPR(10), __get_DAMLR(10), 'D', 10);
+ printk_xampr(__get_DAMPR(11), __get_DAMLR(11), 'D', 11);
+ printk_xampr(__get_DAMPR(12), __get_DAMLR(12), 'D', 12);
+ printk_xampr(__get_DAMPR(13), __get_DAMLR(13), 'D', 13);
+ printk_xampr(__get_DAMPR(14), __get_DAMLR(14), 'D', 14);
+ printk_xampr(__get_DAMPR(15), __get_DAMLR(15), 'D', 15);
+#endif
+
+#if 0
+ /* dump the bus controller registers */
+ printk("LGCR: %08lx\n", __get_LGCR());
+ printk("Master: %08lx-%08lx CR=%08lx\n",
+ __get_LEMBR(), __get_LEMBR() + __get_LEMAM(),
+ __get_LMAICR());
+
+ int loop;
+ for (loop = 1; loop <= 7; loop++) {
+ unsigned long lcr = __get_LCR(loop), lsbr = __get_LSBR(loop);
+ printk("CS#%d: %08lx-%08lx %c%c%c%c%c%c%c%c%c\n",
+ loop,
+ lsbr, lsbr + __get_LSAM(loop),
+ lcr & 0x80000000 ? 'r' : '-',
+ lcr & 0x40000000 ? 'w' : '-',
+ lcr & 0x08000000 ? 'b' : '-',
+ lcr & 0x04000000 ? 'B' : '-',
+ lcr & 0x02000000 ? 'C' : '-',
+ lcr & 0x01000000 ? 'D' : '-',
+ lcr & 0x00800000 ? 'W' : '-',
+ lcr & 0x00400000 ? 'R' : '-',
+ (lcr & 0x00030000) == 0x00000000 ? '4' :
+ (lcr & 0x00030000) == 0x00010000 ? '2' :
+ (lcr & 0x00030000) == 0x00020000 ? '1' :
+ '-'
+ );
+ }
+#endif
+
+#if 0
+ printk("\n");
+#endif
+} /* end dump_memory_map() */
+
+/*****************************************************************************/
+/*
+ * attempt to detect a VDK motherboard and DAV daughter board on an MB93091 system
+ */
+#ifdef CONFIG_MB93091_VDK
+static void __init detect_mb93091(void)
+{
+#ifdef CONFIG_MB93090_MB00
+ /* Detect CB70 without motherboard */
+ if (!(cpu_system == __frv_mb93091_cb70 && ((*(unsigned short *)0xffc00030) & 0x100))) {
+ cpu_board1 = __frv_mb93090_mb00;
+ mb93090_mb00_detected = 1;
+ }
+#endif
+
+#ifdef CONFIG_FUJITSU_MB93493
+ cpu_board2 = __frv_mb93493;
+#endif
+
+} /* end detect_mb93091() */
+#endif
+
+/*****************************************************************************/
+/*
+ * determine the CPU type and set appropriate parameters
+ *
+ * Family Series CPU Core Silicon Imple Vers
+ * ----------------------------------------------------------
+ * FR-V --+-> FR400 --+-> FR401 --+-> MB93401 02 00 [1]
+ * | | |
+ * | | +-> MB93401/A 02 01
+ * | | |
+ * | | +-> MB93403 02 02
+ * | |
+ * | +-> FR405 ----> MB93405 04 00
+ * |
+ * +-> FR450 ----> FR451 ----> MB93451 05 00
+ * |
+ * +-> FR500 ----> FR501 --+-> MB93501 01 01 [2]
+ * | |
+ * | +-> MB93501/A 01 02
+ * |
+ * +-> FR550 --+-> FR551 ----> MB93555 03 01
+ *
+ * [1] The MB93401 is an obsolete CPU replaced by the MB93401A
+ * [2] The MB93501 is an obsolete CPU replaced by the MB93501A
+ *
+ * Imple is PSR(Processor Status Register)[31:28].
+ * Vers is PSR(Processor Status Register)[27:24].
+ *
+ * A "Silicon" consists of CPU core and some on-chip peripherals.
+ */
+static void __init determine_cpu(void)
+{
+ unsigned long hsr0 = __get_HSR(0);
+ unsigned long psr = __get_PSR();
+
+ /* work out what selectable services the CPU supports */
+ __set_PSR(psr | PSR_EM | PSR_EF | PSR_CM | PSR_NEM);
+ cpu_psr_all = __get_PSR();
+ __set_PSR(psr);
+
+ __set_HSR(0, hsr0 | HSR0_GRLE | HSR0_GRHE | HSR0_FRLE | HSR0_FRHE);
+ cpu_hsr0_all = __get_HSR(0);
+ __set_HSR(0, hsr0);
+
+ /* derive other service specs from the CPU type */
+ cpu_series = "unknown";
+ cpu_core = "unknown";
+ cpu_silicon = "unknown";
+ cpu_mmu = "Prot";
+ cpu_system = __frv_unknown_system;
+ clock_cmodes = NULL;
+ clock_doubled = 0;
+#ifdef CONFIG_PM
+ clock_bits_settable = CLOCK_BIT_CM_H | CLOCK_BIT_CM_M | CLOCK_BIT_P0;
+#endif
+
+ switch (PSR_IMPLE(psr)) {
+ case PSR_IMPLE_FR401:
+ cpu_series = "fr400";
+ cpu_core = "fr401";
+ pdm_suspend_mode = HSR0_PDM_PLL_RUN;
+
+ switch (PSR_VERSION(psr)) {
+ case PSR_VERSION_FR401_MB93401:
+ cpu_silicon = "mb93401";
+ cpu_system = __frv_mb93091_cb10;
+ clock_cmodes = clock_cmodes_fr401_fr403;
+ clock_doubled = 1;
+ break;
+ case PSR_VERSION_FR401_MB93401A:
+ cpu_silicon = "mb93401/A";
+ cpu_system = __frv_mb93091_cb11;
+ clock_cmodes = clock_cmodes_fr401_fr403;
+ break;
+ case PSR_VERSION_FR401_MB93403:
+ cpu_silicon = "mb93403";
+#ifndef CONFIG_MB93093_PDK
+ cpu_system = __frv_mb93091_cb30;
+#else
+ cpu_system = __frv_mb93093;
+#endif
+ clock_cmodes = clock_cmodes_fr401_fr403;
+ break;
+ default:
+ break;
+ }
+ break;
+
+ case PSR_IMPLE_FR405:
+ cpu_series = "fr400";
+ cpu_core = "fr405";
+ pdm_suspend_mode = HSR0_PDM_PLL_STOP;
+
+ switch (PSR_VERSION(psr)) {
+ case PSR_VERSION_FR405_MB93405:
+ cpu_silicon = "mb93405";
+ cpu_system = __frv_mb93091_cb60;
+ clock_cmodes = clock_cmodes_fr405;
+#ifdef CONFIG_PM
+ clock_bits_settable |= CLOCK_BIT_CMODE;
+ clock_cmodes_permitted = CLOCK_CMODES_PERMITTED_FR405;
+#endif
+
+ /* the FPGA on the CB70 has extra registers
+ * - it has 0x0046 in the VDK_ID FPGA register at 0x1a0, which is
+ * how we tell the difference between it and a CB60
+ */
+ if (*(volatile unsigned short *) 0xffc001a0 == 0x0046)
+ cpu_system = __frv_mb93091_cb70;
+ break;
+ default:
+ break;
+ }
+ break;
+
+ case PSR_IMPLE_FR451:
+ cpu_series = "fr450";
+ cpu_core = "fr451";
+ pdm_suspend_mode = HSR0_PDM_PLL_STOP;
+#ifdef CONFIG_PM
+ clock_bits_settable |= CLOCK_BIT_CMODE;
+ clock_cmodes_permitted = CLOCK_CMODES_PERMITTED_FR405;
+#endif
+ switch (PSR_VERSION(psr)) {
+ case PSR_VERSION_FR451_MB93451:
+ cpu_silicon = "mb93451";
+ cpu_mmu = "Prot, SAT, xSAT, DAT";
+ cpu_system = __frv_mb93091_cb451;
+ clock_cmodes = clock_cmodes_fr405;
+ break;
+ default:
+ break;
+ }
+ break;
+
+ case PSR_IMPLE_FR501:
+ cpu_series = "fr500";
+ cpu_core = "fr501";
+ pdm_suspend_mode = HSR0_PDM_PLL_STOP;
+
+ switch (PSR_VERSION(psr)) {
+ case PSR_VERSION_FR501_MB93501: cpu_silicon = "mb93501"; break;
+ case PSR_VERSION_FR501_MB93501A: cpu_silicon = "mb93501/A"; break;
+ default:
+ break;
+ }
+ break;
+
+ case PSR_IMPLE_FR551:
+ cpu_series = "fr550";
+ cpu_core = "fr551";
+ pdm_suspend_mode = HSR0_PDM_PLL_RUN;
+
+ switch (PSR_VERSION(psr)) {
+ case PSR_VERSION_FR551_MB93555:
+ cpu_silicon = "mb93555";
+ cpu_mmu = "Prot, SAT";
+ cpu_system = __frv_mb93091_cb41;
+ clock_cmodes = clock_cmodes_fr555;
+ clock_doubled = 1;
+ break;
+ default:
+ break;
+ }
+ break;
+
+ default:
+ break;
+ }
+
+ printk("- Series:%s CPU:%s Silicon:%s\n",
+ cpu_series, cpu_core, cpu_silicon);
+
+#ifdef CONFIG_MB93091_VDK
+ detect_mb93091();
+#endif
+
+#if defined(CONFIG_MB93093_PDK) && defined(CONFIG_FUJITSU_MB93493)
+ cpu_board2 = __frv_mb93493;
+#endif
+
+} /* end determine_cpu() */
+
+/*****************************************************************************/
+/*
+ * calculate the bus clock speed
+ */
+void __pminit determine_clocks(int verbose)
+{
+ const struct clock_cmode *mode, *tmode;
+ unsigned long clkc, psr, quot;
+
+ clkc = __get_CLKC();
+ psr = __get_PSR();
+
+ clock_p0_current = !!(clkc & CLKC_P0);
+ clock_cm_current = clkc & CLKC_CM;
+ clock_cmode_current = (clkc & CLKC_CMODE) >> CLKC_CMODE_s;
+
+ if (verbose)
+ printk("psr=%08lx hsr0=%08lx clkc=%08lx\n", psr, __get_HSR(0), clkc);
+
+ /* the CB70 has some alternative ways of setting the clock speed through switches accessed
+ * through the FPGA. */
+ if (cpu_system == __frv_mb93091_cb70) {
+ unsigned short clkswr = *(volatile unsigned short *) 0xffc00104UL & 0x1fffUL;
+
+ if (clkswr & 0x1000)
+ __clkin_clock_speed_HZ = 60000000UL;
+ else
+ __clkin_clock_speed_HZ =
+ ((clkswr >> 8) & 0xf) * 10000000 +
+ ((clkswr >> 4) & 0xf) * 1000000 +
+ ((clkswr ) & 0xf) * 100000;
+ }
+	/* the CB451 also takes its clock setting from the FPGA switches
+	 * (a fixed 24MHz value is left commented out below) */
+ else if (cpu_system == __frv_mb93091_cb451) {
+ //__clkin_clock_speed_HZ = 24000000UL; // CB451-FPGA
+ unsigned short clkswr = *(volatile unsigned short *) 0xffc00104UL & 0x1fffUL;
+
+ if (clkswr & 0x1000)
+ __clkin_clock_speed_HZ = 60000000UL;
+ else
+ __clkin_clock_speed_HZ =
+ ((clkswr >> 8) & 0xf) * 10000000 +
+ ((clkswr >> 4) & 0xf) * 1000000 +
+ ((clkswr ) & 0xf) * 100000;
+ }
+	/* otherwise determine the clock speed from VDK or other registers */
+ else {
+ __clkin_clock_speed_HZ = __get_CLKIN();
+ }
+
+ /* look up the appropriate clock relationships table entry */
+ mode = &undef_clock_cmode;
+ if (clock_cmodes) {
+ tmode = &clock_cmodes[(clkc & CLKC_CMODE) >> CLKC_CMODE_s];
+ if (tmode->xbus)
+ mode = tmode;
+ }
+
+#define CLOCK(SRC,RATIO) ((SRC) * (((RATIO) >> 4) & 0x0f) / ((RATIO) & 0x0f))
+
+ if (clock_doubled)
+ __clkin_clock_speed_HZ <<= 1;
+
+ __ext_bus_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->xbus);
+ __sdram_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->sdram);
+ __dsu_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->dsu);
+
+ switch (clkc & CLKC_CM) {
+ case 0: /* High */
+ __core_bus_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->corebus);
+ __core_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->core);
+ break;
+ case 1: /* Medium */
+ __core_bus_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->sdram);
+ __core_clock_speed_HZ = CLOCK(__clkin_clock_speed_HZ, mode->sdram);
+ break;
+ case 2: /* Low; not supported */
+ case 3: /* UNDEF */
+ printk("Unsupported CLKC CM %ld\n", clkc & CLKC_CM);
+ panic("Bye");
+ }
+
+ __res_bus_clock_speed_HZ = __ext_bus_clock_speed_HZ;
+ if (clkc & CLKC_P0)
+ __res_bus_clock_speed_HZ >>= 1;
+
+ if (verbose) {
+ printk("CLKIN: %lu.%3.3luMHz\n",
+ __clkin_clock_speed_HZ / 1000000,
+ (__clkin_clock_speed_HZ / 1000) % 1000);
+
+ printk("CLKS:"
+ " ext=%luMHz res=%luMHz sdram=%luMHz cbus=%luMHz core=%luMHz dsu=%luMHz\n",
+ __ext_bus_clock_speed_HZ / 1000000,
+ __res_bus_clock_speed_HZ / 1000000,
+ __sdram_clock_speed_HZ / 1000000,
+ __core_bus_clock_speed_HZ / 1000000,
+ __core_clock_speed_HZ / 1000000,
+ __dsu_clock_speed_HZ / 1000000
+ );
+ }
+
+ /* calculate the number of __delay() loop iterations per sec (2 insn loop) */
+ __delay_loops_MHz = __core_clock_speed_HZ / (1000000 * 2);
+
+ /* set the serial prescaler */
+ __serial_clock_speed_HZ = __res_bus_clock_speed_HZ;
+ quot = 1;
+ while (__serial_clock_speed_HZ / quot / 16 / 65536 > 3000)
+ quot += 1;
+
+ /* double the divisor if P0 is clear, so that if/when P0 is set, it's still achievable
+ * - we have to be careful - dividing too much can mean we can't get 115200 baud
+ */
+ if (__serial_clock_speed_HZ > 32000000 && !(clkc & CLKC_P0))
+ quot <<= 1;
+
+ __serial_clock_speed_HZ /= quot;
+ __frv_uart0.uartclk = __serial_clock_speed_HZ;
+ __frv_uart1.uartclk = __serial_clock_speed_HZ;
+
+ if (verbose)
+ printk(" uart=%luMHz\n", __serial_clock_speed_HZ / 1000000 * quot);
+
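+	/* let both UART transmitters drain before reprogramming the prescaler */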
+ while (!(__get_UART0_LSR() & UART_LSR_TEMT))
+ continue;
+
+ while (!(__get_UART1_LSR() & UART_LSR_TEMT))
+ continue;
+
+ __set_UCPVR(quot);
+ __set_UCPSR(0);
+} /* end determine_clocks() */
+
+/*****************************************************************************/
+/*
+ * reserve some DMA consistent memory
+ */
+#ifdef CONFIG_RESERVE_DMA_COHERENT
+static void __init reserve_dma_coherent(void)
+{
+ unsigned long ampr;
+
+ /* find the first non-kernel memory tile and steal it */
+#define __steal_AMPR(r) \
+ if (__get_DAMPR(r) & xAMPRx_V) { \
+ ampr = __get_DAMPR(r); \
+ __set_DAMPR(r, ampr | xAMPRx_S | xAMPRx_C); \
+ __set_IAMPR(r, 0); \
+ goto found; \
+ }
+
+ __steal_AMPR(1);
+ __steal_AMPR(2);
+ __steal_AMPR(3);
+ __steal_AMPR(4);
+ __steal_AMPR(5);
+ __steal_AMPR(6);
+
+ if (PSR_IMPLE(__get_PSR()) == PSR_IMPLE_FR551) {
+ __steal_AMPR(7);
+ __steal_AMPR(8);
+ __steal_AMPR(9);
+ __steal_AMPR(10);
+ __steal_AMPR(11);
+ __steal_AMPR(12);
+ __steal_AMPR(13);
+ __steal_AMPR(14);
+ }
+
+ /* unable to grant any DMA consistent memory */
+ printk("No DMA consistent memory reserved\n");
+ return;
+
+ found:
+ dma_coherent_mem_start = ampr & xAMPRx_PPFN;
+ ampr &= xAMPRx_SS;
+ ampr >>= 4;
+ ampr = 1 << (ampr - 3 + 20);
+ dma_coherent_mem_end = dma_coherent_mem_start + ampr;
+
+ printk("DMA consistent memory reserved %lx-%lx\n",
+ dma_coherent_mem_start, dma_coherent_mem_end);
+
+} /* end reserve_dma_coherent() */
+#endif
+
+/*****************************************************************************/
+/*
+ * calibrate the delay loop
+ */
+void __init calibrate_delay(void)
+{
+ loops_per_jiffy = __delay_loops_MHz * (1000000 / HZ);
+
+ printk("Calibrating delay loop... %lu.%02lu BogoMIPS\n",
+ loops_per_jiffy / (500000 / HZ),
+ (loops_per_jiffy / (5000 / HZ)) % 100);
+
+} /* end calibrate_delay() */
+
+/*****************************************************************************/
+/*
+ * look through the command line for some things we need to know immediately
+ */
+static void __init parse_cmdline_early(char *cmdline)
+{
+ if (!cmdline)
+ return;
+
+ while (*cmdline) {
+ if (*cmdline == ' ')
+ cmdline++;
+
+ /* "mem=XXX[kKmM]" sets SDRAM size to <mem>, overriding the value we worked
+ * out from the SDRAM controller mask register
+ */
+ if (!memcmp(cmdline, "mem=", 4)) {
+ unsigned long long mem_size;
+
+ mem_size = memparse(cmdline + 4, &cmdline);
+ memory_end = memory_start + mem_size;
+ }
+
+ while (*cmdline && *cmdline != ' ')
+ cmdline++;
+ }
+
+} /* end parse_cmdline_early() */
+
+/*****************************************************************************/
+/*
+ *
+ */
+void __init setup_arch(char **cmdline_p)
+{
+#ifdef CONFIG_MMU
+ printk("Linux FR-V port done by Red Hat Inc <dhowells@redhat.com>\n");
+#else
+ printk("uClinux FR-V port done by Red Hat Inc <dhowells@redhat.com>\n");
+#endif
+
+ memcpy(saved_command_line, redboot_command_line, COMMAND_LINE_SIZE);
+
+ determine_cpu();
+ determine_clocks(1);
+
+ /* For printk-directly-beats-on-serial-hardware hack */
+ console_set_baud(115200);
+#ifdef CONFIG_GDBSTUB
+ gdbstub_set_baud(115200);
+#endif
+
+#ifdef CONFIG_RESERVE_DMA_COHERENT
+ reserve_dma_coherent();
+#endif
+ dump_memory_map();
+
+#ifdef CONFIG_MB93090_MB00
+ if (mb93090_mb00_detected)
+ mb93090_display();
+#endif
+
+ /* register those serial ports that are available */
+#ifndef CONFIG_GDBSTUB_UART0
+ __reg(UART0_BASE + UART_IER * 8) = 0;
+ early_serial_setup(&__frv_uart0);
+// register_serial(&__frv_uart0);
+#endif
+#ifndef CONFIG_GDBSTUB_UART1
+ __reg(UART1_BASE + UART_IER * 8) = 0;
+ early_serial_setup(&__frv_uart1);
+// register_serial(&__frv_uart1);
+#endif
+
+#if defined(CONFIG_CHR_DEV_FLASH) || defined(CONFIG_BLK_DEV_FLASH)
+ /* we need to initialize the Flashrom device here since we might
+ * do things with flash early on in the boot
+ */
+ flash_probe();
+#endif
+
+ /* deal with the command line - RedBoot may have passed one to the kernel */
+ memcpy(command_line, saved_command_line, sizeof(command_line));
+ *cmdline_p = &command_line[0];
+ parse_cmdline_early(command_line);
+
+ /* set up the memory description
+ * - by now the stack is part of the init task */
+ printk("Memory %08lx-%08lx\n", memory_start, memory_end);
+
+ if (memory_start == memory_end) BUG();
+
+ init_mm.start_code = (unsigned long) &_stext;
+ init_mm.end_code = (unsigned long) &_etext;
+ init_mm.end_data = (unsigned long) &_edata;
+#if 0 /* DAVIDM - don't set brk just in case someone decides to use it */
+ init_mm.brk = (unsigned long) &_end;
+#else
+ init_mm.brk = (unsigned long) 0;
+#endif
+
+#ifdef DEBUG
+ printk("KERNEL -> TEXT=0x%06x-0x%06x DATA=0x%06x-0x%06x BSS=0x%06x-0x%06x\n",
+ (int) &_stext, (int) &_etext,
+ (int) &_sdata, (int) &_edata,
+ (int) &_sbss, (int) &_ebss);
+#endif
+
+#ifdef CONFIG_VT
+#if defined(CONFIG_VGA_CONSOLE)
+ conswitchp = &vga_con;
+#elif defined(CONFIG_DUMMY_CONSOLE)
+ conswitchp = &dummy_con;
+#endif
+#endif
+
+#ifdef CONFIG_BLK_DEV_BLKMEM
+ ROOT_DEV = MKDEV(BLKMEM_MAJOR,0);
+#endif
+ /*rom_length = (unsigned long)&_flashend - (unsigned long)&_romvec;*/
+
+#ifdef CONFIG_MMU
+ setup_linux_memory();
+#else
+ setup_uclinux_memory();
+#endif
+
+ /* get kmalloc into gear */
+ paging_init();
+
+ /* init DMA */
+ frv_dma_init();
+#ifdef DEBUG
+ printk("Done setup_arch\n");
+#endif
+
+ /* start the decrement timer running */
+// asm volatile("movgs %0,timerd" :: "r"(10000000));
+// __set_HSR(0, __get_HSR(0) | HSR0_ETMD);
+
+} /* end setup_arch() */
+
+#if 0
+/*****************************************************************************/
+/*
+ *
+ */
+static int __devinit setup_arch_serial(void)
+{
+ /* register those serial ports that are available */
+#ifndef CONFIG_GDBSTUB_UART0
+ early_serial_setup(&__frv_uart0);
+#endif
+#ifndef CONFIG_GDBSTUB_UART1
+ early_serial_setup(&__frv_uart1);
+#endif
+
+ return 0;
+} /* end setup_arch_serial() */
+
+late_initcall(setup_arch_serial);
+#endif
+
+/*****************************************************************************/
+/*
+ * set up the memory map for normal MMU linux
+ */
+#ifdef CONFIG_MMU
+static void __init setup_linux_memory(void)
+{
+ unsigned long bootmap_size, low_top_pfn, kstart, kend, high_mem;
+
+ kstart = (unsigned long) &__kernel_image_start - PAGE_OFFSET;
+ kend = (unsigned long) &__kernel_image_end - PAGE_OFFSET;
+
+ kstart = kstart & PAGE_MASK;
+ kend = (kend + PAGE_SIZE - 1) & PAGE_MASK;
+
+ /* give all the memory to the bootmap allocator, tell it to put the
+ * boot mem_map immediately following the kernel image
+ */
+ bootmap_size = init_bootmem_node(NODE_DATA(0),
+ kend >> PAGE_SHIFT, /* map addr */
+ memory_start >> PAGE_SHIFT, /* start of RAM */
+ memory_end >> PAGE_SHIFT /* end of RAM */
+ );
+
+ /* pass the memory that the kernel can immediately use over to the bootmem allocator */
+ max_mapnr = num_physpages = (memory_end - memory_start) >> PAGE_SHIFT;
+ low_top_pfn = (KERNEL_LOWMEM_END - KERNEL_LOWMEM_START) >> PAGE_SHIFT;
+ high_mem = 0;
+
+ if (num_physpages > low_top_pfn) {
+#ifdef CONFIG_HIGHMEM
+ high_mem = num_physpages - low_top_pfn;
+#else
+ max_mapnr = num_physpages = low_top_pfn;
+#endif
+ }
+ else {
+ low_top_pfn = num_physpages;
+ }
+
+ min_low_pfn = memory_start >> PAGE_SHIFT;
+ max_low_pfn = low_top_pfn;
+ max_pfn = memory_end >> PAGE_SHIFT;
+
+ num_mappedpages = low_top_pfn;
+
+ printk(KERN_NOTICE "%ldMB LOWMEM available.\n", low_top_pfn >> (20 - PAGE_SHIFT));
+
+ free_bootmem(memory_start, low_top_pfn << PAGE_SHIFT);
+
+#ifdef CONFIG_HIGHMEM
+ if (high_mem)
+ printk(KERN_NOTICE "%ldMB HIGHMEM available.\n", high_mem >> (20 - PAGE_SHIFT));
+#endif
+
+ /* take back the memory occupied by the kernel image and the bootmem alloc map */
+ reserve_bootmem(kstart, kend - kstart + bootmap_size);
+
+ /* reserve the memory occupied by the initial ramdisk */
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (LOADER_TYPE && INITRD_START) {
+ if (INITRD_START + INITRD_SIZE <= (low_top_pfn << PAGE_SHIFT)) {
+ reserve_bootmem(INITRD_START, INITRD_SIZE);
+ initrd_start = INITRD_START ? INITRD_START + PAGE_OFFSET : 0;
+ initrd_end = initrd_start + INITRD_SIZE;
+ }
+ else {
+ printk(KERN_ERR
+ "initrd extends beyond end of memory (0x%08lx > 0x%08lx)\n"
+ "disabling initrd\n",
+ INITRD_START + INITRD_SIZE,
+ low_top_pfn << PAGE_SHIFT);
+ initrd_start = 0;
+ }
+ }
+#endif
+
+} /* end setup_linux_memory() */
+#endif
+
+/*****************************************************************************/
+/*
+ * set up the memory map for uClinux
+ */
+#ifndef CONFIG_MMU
+static void __init setup_uclinux_memory(void)
+{
+#ifdef CONFIG_PROTECT_KERNEL
+ unsigned long dampr;
+#endif
+ unsigned long kend;
+ int bootmap_size;
+
+ kend = (unsigned long) &__kernel_image_end;
+ kend = (kend + PAGE_SIZE - 1) & PAGE_MASK;
+
+ /* give all the memory to the bootmap allocator, tell it to put the
+ * boot mem_map immediately following the kernel image
+ */
+ bootmap_size = init_bootmem_node(NODE_DATA(0),
+ kend >> PAGE_SHIFT, /* map addr */
+ memory_start >> PAGE_SHIFT, /* start of RAM */
+ memory_end >> PAGE_SHIFT /* end of RAM */
+ );
+
+ /* free all the usable memory */
+ free_bootmem(memory_start, memory_end - memory_start);
+
+ high_memory = (void *) (memory_end & PAGE_MASK);
+ max_mapnr = num_physpages = ((unsigned long) high_memory - PAGE_OFFSET) >> PAGE_SHIFT;
+
+ min_low_pfn = memory_start >> PAGE_SHIFT;
+ max_low_pfn = memory_end >> PAGE_SHIFT;
+ max_pfn = max_low_pfn;
+
+ /* now take back the bits the core kernel is occupying */
+#ifndef CONFIG_PROTECT_KERNEL
+ reserve_bootmem(kend, bootmap_size);
+ reserve_bootmem((unsigned long) &__kernel_image_start,
+ kend - (unsigned long) &__kernel_image_start);
+
+#else
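+	/* work out the size of the tile mapped by DAMPR0 from its SS field and
+	 * reserve that region (it covers the protected kernel image) */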
+ dampr = __get_DAMPR(0);
+ dampr &= xAMPRx_SS;
+ dampr = (dampr >> 4) + 17;
+ dampr = 1 << dampr;
+
+ reserve_bootmem(__get_DAMPR(0) & xAMPRx_PPFN, dampr);
+#endif
+
+ /* reserve some memory to do uncached DMA through if requested */
+#ifdef CONFIG_RESERVE_DMA_COHERENT
+ if (dma_coherent_mem_start)
+ reserve_bootmem(dma_coherent_mem_start,
+ dma_coherent_mem_end - dma_coherent_mem_start);
+#endif
+
+} /* end setup_uclinux_memory() */
+#endif
+
+/*****************************************************************************/
+/*
+ * get CPU information for use by procfs
+ */
+static int show_cpuinfo(struct seq_file *m, void *v)
+{
+ const char *gr, *fr, *fm, *fp, *cm, *nem, *ble;
+#ifdef CONFIG_PM
+ const char *sep;
+#endif
+
+ gr = cpu_hsr0_all & HSR0_GRHE ? "gr0-63" : "gr0-31";
+ fr = cpu_hsr0_all & HSR0_FRHE ? "fr0-63" : "fr0-31";
+ fm = cpu_psr_all & PSR_EM ? ", Media" : "";
+ fp = cpu_psr_all & PSR_EF ? ", FPU" : "";
+ cm = cpu_psr_all & PSR_CM ? ", CCCR" : "";
+ nem = cpu_psr_all & PSR_NEM ? ", NE" : "";
+ ble = cpu_psr_all & PSR_BE ? "BE" : "LE";
+
+ seq_printf(m,
+ "CPU-Series:\t%s\n"
+ "CPU-Core:\t%s, %s, %s%s%s\n"
+ "CPU:\t\t%s\n"
+ "MMU:\t\t%s\n"
+ "FP-Media:\t%s%s%s\n"
+ "System:\t\t%s",
+ cpu_series,
+ cpu_core, gr, ble, cm, nem,
+ cpu_silicon,
+ cpu_mmu,
+ fr, fm, fp,
+ cpu_system);
+
+ if (cpu_board1)
+ seq_printf(m, ", %s", cpu_board1);
+
+ if (cpu_board2)
+ seq_printf(m, ", %s", cpu_board2);
+
+ seq_printf(m, "\n");
+
+#ifdef CONFIG_PM
+ seq_printf(m, "PM-Controls:");
+ sep = "\t";
+
+ if (clock_bits_settable & CLOCK_BIT_CMODE) {
+ seq_printf(m, "%scmode=0x%04hx", sep, clock_cmodes_permitted);
+ sep = ", ";
+ }
+
+ if (clock_bits_settable & CLOCK_BIT_CM) {
+ seq_printf(m, "%scm=0x%lx", sep, clock_bits_settable & CLOCK_BIT_CM);
+ sep = ", ";
+ }
+
+ if (clock_bits_settable & CLOCK_BIT_P0) {
+ seq_printf(m, "%sp0=0x3", sep);
+ sep = ", ";
+ }
+
+ seq_printf(m, "%ssuspend=0x22\n", sep);
+#endif
+
+ seq_printf(m,
+ "PM-Status:\tcmode=%d, cm=%d, p0=%d\n",
+ clock_cmode_current, clock_cm_current, clock_p0_current);
+
+#define print_clk(TAG, VAR) \
+ seq_printf(m, "Clock-" TAG ":\t%lu.%2.2lu MHz\n", VAR / 1000000, (VAR / 10000) % 100)
+
+ print_clk("In", __clkin_clock_speed_HZ);
+ print_clk("Core", __core_clock_speed_HZ);
+ print_clk("SDRAM", __sdram_clock_speed_HZ);
+ print_clk("CBus", __core_bus_clock_speed_HZ);
+ print_clk("Res", __res_bus_clock_speed_HZ);
+ print_clk("Ext", __ext_bus_clock_speed_HZ);
+ print_clk("DSU", __dsu_clock_speed_HZ);
+
+ seq_printf(m,
+ "BogoMips:\t%lu.%02lu\n",
+ (loops_per_jiffy * HZ) / 500000, ((loops_per_jiffy * HZ) / 5000) % 100);
+
+ return 0;
+} /* end show_cpuinfo() */
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
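+	/* show_cpuinfo() ignores its cookie, so any non-NULL value will do for
+	 * the single CPU */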
+ return *pos < NR_CPUS ? (void *) 0x12345678 : NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ ++*pos;
+ return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+struct seq_operations cpuinfo_op = {
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = show_cpuinfo,
+};
+
+void arch_gettod(int *year, int *mon, int *day, int *hour,
+ int *min, int *sec)
+{
+ *year = *mon = *day = *hour = *min = *sec = 0;
+}
+
+/*****************************************************************************/
+/*
+ *
+ */
+#ifdef CONFIG_MB93090_MB00
+static void __init mb93090_sendlcdcmd(uint32_t cmd)
+{
+ unsigned long base = __addr_LCD();
+ int loop;
+
+ /* request reading of the busy flag */
+ __set_LCD(base, LCD_CMD_READ_BUSY);
+ __set_LCD(base, LCD_CMD_READ_BUSY & ~LCD_E);
+
+ /* wait for the busy flag to become clear */
+ for (loop = 10000; loop > 0; loop--)
+ if (!(__get_LCD(base) & 0x80))
+ break;
+
+ /* send the command */
+ __set_LCD(base, cmd);
+ __set_LCD(base, cmd & ~LCD_E);
+
+} /* end mb93090_sendlcdcmd() */
+
+/*****************************************************************************/
+/*
+ * write to the MB93090 LEDs and LCD
+ */
+static void __init mb93090_display(void)
+{
+ const char *p;
+
+ __set_LEDS(0);
+
+ /* set up the LCD */
+ mb93090_sendlcdcmd(LCD_CMD_CLEAR);
+ mb93090_sendlcdcmd(LCD_CMD_FUNCSET(1,1,0));
+ mb93090_sendlcdcmd(LCD_CMD_ON(0,0));
+ mb93090_sendlcdcmd(LCD_CMD_HOME);
+
+ mb93090_sendlcdcmd(LCD_CMD_SET_DD_ADDR(0));
+ for (p = mb93090_banner; *p; p++)
+ mb93090_sendlcdcmd(LCD_DATA_WRITE(*p));
+
+ mb93090_sendlcdcmd(LCD_CMD_SET_DD_ADDR(64));
+ for (p = mb93090_version; *p; p++)
+ mb93090_sendlcdcmd(LCD_DATA_WRITE(*p));
+
+} /* end mb93090_display() */
+
+#endif // CONFIG_MB93090_MB00
--- /dev/null
+/* signal.c: FRV specific bits of signal handling
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/signal.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/wait.h>
+#include <linux/ptrace.h>
+#include <linux/unistd.h>
+#include <linux/personality.h>
+#include <linux/suspend.h>
+#include <asm/ucontext.h>
+#include <asm/uaccess.h>
+#include <asm/cacheflush.h>
+
+#define DEBUG_SIG 0
+
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+struct fdpic_func_descriptor {
+ unsigned long text;
+ unsigned long GOT;
+};
+
+asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
+
+/*
+ * Atomically swap in the new signal mask, and wait for a signal.
+ */
+asmlinkage int sys_sigsuspend(int history0, int history1, old_sigset_t mask)
+{
+ sigset_t saveset;
+
+ mask &= _BLOCKABLE;
+	spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+	siginitset(&current->blocked, mask);
+ recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+ __frame->gr8 = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (do_signal(__frame, &saveset))
+ /* return the signal number as the return value of this function
+ * - this is an utterly evil hack. syscalls should not invoke do_signal()
+ * as entry.S sets regs->gr8 to the return value of the system call
+ * - we can't just use sigpending() as we'd have to discard SIG_IGN signals
+ * and call waitpid() if SIGCHLD needed discarding
+ * - this only works on the i386 because it passes arguments to the signal
+ * handler on the stack, and the return value in EAX is effectively
+ * discarded
+ */
+ return __frame->gr8;
+ }
+}
+
+asmlinkage int sys_rt_sigsuspend(sigset_t __user *unewset, size_t sigsetsize)
+{
+ sigset_t saveset, newset;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&newset, unewset, sizeof(newset)))
+ return -EFAULT;
+ sigdelsetmask(&newset, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+ current->blocked = newset;
+ recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+ __frame->gr8 = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ if (do_signal(__frame, &saveset))
+ /* return the signal number as the return value of this function
+ * - this is an utterly evil hack. syscalls should not invoke do_signal()
+ * as entry.S sets regs->gr8 to the return value of the system call
+ * - we can't just use sigpending() as we'd have to discard SIG_IGN signals
+ * and call waitpid() if SIGCHLD needed discarding
+ * - this only works on the i386 because it passes arguments to the signal
+ * handler on the stack, and the return value in EAX is effectively
+ * discarded
+ */
+ return __frame->gr8;
+ }
+}
+
+asmlinkage int sys_sigaction(int sig,
+ const struct old_sigaction __user *act,
+ struct old_sigaction __user *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset_t mask;
+ if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
+ return -EFAULT;
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ __get_user(mask, &act->sa_mask);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
+ __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
+ __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
+ return -EFAULT;
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+asmlinkage
+int sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss)
+{
+ return do_sigaltstack(uss, uoss, __frame->sp);
+}
+
+
+/*
+ * Do a signal return; undo the signal stack.
+ */
+
+struct sigframe
+{
+ void (*pretcode)(void);
+ int sig;
+ struct sigcontext sc;
+ unsigned long extramask[_NSIG_WORDS-1];
+ uint32_t retcode[2];
+};
+
+struct rt_sigframe
+{
+ void (*pretcode)(void);
+ int sig;
+ struct siginfo *pinfo;
+ void *puc;
+ struct siginfo info;
+ struct ucontext uc;
+ uint32_t retcode[2];
+};
+
+static int restore_sigcontext(struct sigcontext __user *sc, int *_gr8)
+{
+ struct user_context *user = current->thread.user;
+ unsigned long tbr, psr;
+
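+	/* carry the current TBR and PSR values across the copy so that the
+	 * user-supplied context cannot overwrite them */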
+ tbr = user->i.tbr;
+ psr = user->i.psr;
+ if (copy_from_user(user, &sc->sc_context, sizeof(sc->sc_context)))
+ goto badframe;
+ user->i.tbr = tbr;
+ user->i.psr = psr;
+
+ restore_user_regs(user);
+
+ user->i.syscallno = -1; /* disable syscall checks */
+
+ *_gr8 = user->i.gr[8];
+ return 0;
+
+ badframe:
+ return 1;
+}
+
+asmlinkage int sys_sigreturn(void)
+{
+ struct sigframe __user *frame = (struct sigframe __user *) __frame->sp;
+ sigset_t set;
+ int gr8;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__get_user(set.sig[0], &frame->sc.sc_oldmask))
+ goto badframe;
+
+ if (_NSIG_WORDS > 1 &&
+ __copy_from_user(&set.sig[1], &frame->extramask, sizeof(frame->extramask)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(&frame->sc, &gr8))
+ goto badframe;
+ return gr8;
+
+ badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+asmlinkage int sys_rt_sigreturn(void)
+{
+ struct rt_sigframe __user *frame = (struct rt_sigframe __user *) __frame->sp;
+ sigset_t set;
+ stack_t st;
+ int gr8;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+	spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(&frame->uc.uc_mcontext, &gr8))
+ goto badframe;
+
+ if (do_sigaltstack(&frame->uc.uc_stack, NULL, __frame->sp) == -EFAULT)
+ goto badframe;
+
+ return gr8;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+/*
+ * Set up a signal frame
+ */
+static int setup_sigcontext(struct sigcontext __user *sc, unsigned long mask)
+{
+ save_user_regs(current->thread.user);
+
+ if (copy_to_user(&sc->sc_context, current->thread.user, sizeof(sc->sc_context)) != 0)
+ goto badframe;
+
+ /* non-iBCS2 extensions.. */
+ if (__put_user(mask, &sc->sc_oldmask) < 0)
+ goto badframe;
+
+ return 0;
+
+ badframe:
+ return 1;
+}
+
+/*****************************************************************************/
+/*
+ * Determine which stack to use..
+ */
+static inline void __user *get_sigframe(struct k_sigaction *ka,
+ struct pt_regs *regs,
+ size_t frame_size)
+{
+ unsigned long sp;
+
+ /* Default to using normal stack */
+ sp = regs->sp;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (! on_sig_stack(sp))
+ sp = current->sas_ss_sp + current->sas_ss_size;
+ }
+
+ return (void __user *) ((sp - frame_size) & ~7UL);
+} /* end get_sigframe() */
+
+/*****************************************************************************/
+/*
+ *
+ */
+static void setup_frame(int sig, struct k_sigaction *ka, sigset_t *set, struct pt_regs * regs)
+{
+ struct sigframe __user *frame;
+ int rsig;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ rsig = sig;
+ if (sig < 32 &&
+ __current_thread_info->exec_domain &&
+ __current_thread_info->exec_domain->signal_invmap)
+ rsig = __current_thread_info->exec_domain->signal_invmap[sig];
+
+ if (__put_user(rsig, &frame->sig) < 0)
+ goto give_sigsegv;
+
+ if (setup_sigcontext(&frame->sc, set->sig[0]))
+ goto give_sigsegv;
+
+ if (_NSIG_WORDS > 1) {
+ if (__copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask)))
+ goto give_sigsegv;
+ }
+
+ /* Set up to return from userspace. If provided, use a stub
+ * already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ if (__put_user(ka->sa.sa_restorer, &frame->pretcode) < 0)
+ goto give_sigsegv;
+ }
+ else {
+ /* Set up the following code on the stack:
+ * setlos #__NR_sigreturn,gr7
+ * tira gr0,0
+ */
+ if (__put_user((void (*)(void))frame->retcode, &frame->pretcode) ||
+ __put_user(0x8efc0000|__NR_sigreturn, &frame->retcode[0]) ||
+ __put_user(0xc0700000, &frame->retcode[1]))
+ goto give_sigsegv;
+
+ flush_icache_range((unsigned long) frame->retcode,
+ (unsigned long) (frame->retcode + 2));
+ }
+
+ /* set up registers for signal handler */
+ regs->sp = (unsigned long) frame;
+ regs->lr = (unsigned long) &frame->retcode;
+ regs->gr8 = sig;
+
+ if (get_personality & FDPIC_FUNCPTRS) {
+ struct fdpic_func_descriptor __user *funcptr =
+			(struct fdpic_func_descriptor __user *) ka->sa.sa_handler;
+ __get_user(regs->pc, &funcptr->text);
+ __get_user(regs->gr15, &funcptr->GOT);
+ } else {
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->gr15 = 0;
+ }
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ printk("SIG deliver %d (%s:%d): sp=%p pc=%lx ra=%p\n",
+ sig, current->comm, current->pid, frame, regs->pc, frame->pretcode);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+
+ force_sig(SIGSEGV, current);
+} /* end setup_frame() */
+
+/*****************************************************************************/
+/*
+ * set up a realtime signal frame (carrying siginfo) and point the process at its handler
+ */
+static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs * regs)
+{
+ struct rt_sigframe __user *frame;
+ int rsig;
+
+ frame = get_sigframe(ka, regs, sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ rsig = sig;
+ if (sig < 32 &&
+ __current_thread_info->exec_domain &&
+ __current_thread_info->exec_domain->signal_invmap)
+ rsig = __current_thread_info->exec_domain->signal_invmap[sig];
+
+ if (__put_user(rsig, &frame->sig) ||
+ __put_user(&frame->info, &frame->pinfo) ||
+ __put_user(&frame->uc, &frame->puc))
+ goto give_sigsegv;
+
+ if (copy_siginfo_to_user(&frame->info, info))
+ goto give_sigsegv;
+
+ /* Create the ucontext. */
+ if (__put_user(0, &frame->uc.uc_flags) ||
+ __put_user(0, &frame->uc.uc_link) ||
+ __put_user((void*)current->sas_ss_sp, &frame->uc.uc_stack.ss_sp) ||
+ __put_user(sas_ss_flags(regs->sp), &frame->uc.uc_stack.ss_flags) ||
+ __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size))
+ goto give_sigsegv;
+
+ if (setup_sigcontext(&frame->uc.uc_mcontext, set->sig[0]))
+ goto give_sigsegv;
+
+ if (__copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)))
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ * already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ if (__put_user(ka->sa.sa_restorer, &frame->pretcode))
+ goto give_sigsegv;
+ }
+ else {
+ /* Set up the following code on the stack:
+		 *	setlos	#__NR_rt_sigreturn,gr7
+ * tira gr0,0
+ */
+ if (__put_user((void (*)(void))frame->retcode, &frame->pretcode) ||
+ __put_user(0x8efc0000|__NR_rt_sigreturn, &frame->retcode[0]) ||
+ __put_user(0xc0700000, &frame->retcode[1]))
+ goto give_sigsegv;
+
+ flush_icache_range((unsigned long) frame->retcode,
+ (unsigned long) (frame->retcode + 2));
+ }
+
+ /* Set up registers for signal handler */
+ regs->sp = (unsigned long) frame;
+ regs->lr = (unsigned long) &frame->retcode;
+ regs->gr8 = sig;
+ regs->gr9 = (unsigned long) &frame->info;
+
+ if (get_personality & FDPIC_FUNCPTRS) {
+		struct fdpic_func_descriptor __user *funcptr =
+ (struct fdpic_func_descriptor __user *) ka->sa.sa_handler;
+ __get_user(regs->pc, &funcptr->text);
+ __get_user(regs->gr15, &funcptr->GOT);
+ } else {
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->gr15 = 0;
+ }
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ printk("SIG deliver %d (%s:%d): sp=%p pc=%lx ra=%p\n",
+ sig, current->comm, current->pid, frame, regs->pc, frame->pretcode);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+
+} /* end setup_rt_frame() */
+
+/*****************************************************************************/
+/*
+ * OK, we're invoking a handler
+ */
+static void handle_signal(unsigned long sig, siginfo_t *info,
+ struct k_sigaction *ka, sigset_t *oldset,
+ struct pt_regs *regs)
+{
+ /* Are we from a system call? */
+ if (in_syscall(regs)) {
+ /* If so, check system call restarting.. */
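+		/* (winding the PC back by 4 bytes, i.e. one FRV instruction,
+		 *  makes the process re-execute the syscall trap insn) */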
+ switch (regs->gr8) {
+ case -ERESTART_RESTARTBLOCK:
+ case -ERESTARTNOHAND:
+ regs->gr8 = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+ regs->gr8 = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+ regs->gr8 = regs->orig_gr8;
+ regs->pc -= 4;
+ }
+ }
+
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+ setup_rt_frame(sig, ka, info, oldset, regs);
+ else
+ setup_frame(sig, ka, oldset, regs);
+
+ if (!(ka->sa.sa_flags & SA_NODEFER)) {
+		spin_lock_irq(&current->sighand->siglock);
+		sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+		sigaddset(&current->blocked, sig);
+		recalc_sigpending();
+		spin_unlock_irq(&current->sighand->siglock);
+ }
+} /* end handle_signal() */
+
+/*****************************************************************************/
+/*
+ * Note that 'init' is a special process: it doesn't get signals it doesn't
+ * want to handle. Thus you cannot kill init even with a SIGKILL even by
+ * mistake.
+ */
+int do_signal(struct pt_regs *regs, sigset_t *oldset)
+{
+ struct k_sigaction ka;
+ siginfo_t info;
+ int signr;
+
+ /*
+ * We want the common case to go fast, which
+ * is why we may in certain cases get here from
+ * kernel mode. Just return without doing anything
+ * if so.
+ */
+ if (!user_mode(regs))
+ return 1;
+
+ if (current->flags & PF_FREEZE) {
+ refrigerator(0);
+ goto no_signal;
+ }
+
+ if (!oldset)
+		oldset = &current->blocked;
+
+ signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+ if (signr > 0) {
+ handle_signal(signr, &info, &ka, oldset, regs);
+ return 1;
+ }
+
+ no_signal:
+ /* Did we come from a system call? */
+ if (regs->syscallno >= 0) {
+ /* Restart the system call - no handlers present */
+ if (regs->gr8 == -ERESTARTNOHAND ||
+ regs->gr8 == -ERESTARTSYS ||
+ regs->gr8 == -ERESTARTNOINTR) {
+ regs->gr8 = regs->orig_gr8;
+ regs->pc -= 4;
+ }
+
+ if (regs->gr8 == -ERESTART_RESTARTBLOCK){
+ regs->gr8 = __NR_restart_syscall;
+ regs->pc -= 4;
+ }
+ }
+
+ return 0;
+} /* end do_signal() */
+
+/*****************************************************************************/
+/*
+ * notification of userspace execution resumption
+ * - triggered by current->work.notify_resume
+ */
+asmlinkage void do_notify_resume(__u32 thread_info_flags)
+{
+ /* pending single-step? */
+ if (thread_info_flags & _TIF_SINGLESTEP)
+ clear_thread_flag(TIF_SINGLESTEP);
+
+ /* deal with pending signal delivery */
+ if (thread_info_flags & _TIF_SIGPENDING)
+ do_signal(__frame, NULL);
+
+} /* end do_notify_resume() */
--- /dev/null
+/* sleep.S: power saving mode entry
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Woodhouse (dwmw2@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/page.h>
+#include <asm/ptrace.h>
+#include <asm/errno.h>
+#include <asm/cache.h>
+#include <asm/spr-regs.h>
+
+#define __addr_MASK 0xfeff9820 /* interrupt controller mask */
+
+#define __addr_FR55X_DRCN 0xfeff0218 /* Address of DRCN register */
+#define FR55X_DSTS_OFFSET -4 /* Offset from DRCN to DSTS */
+#define FR55X_SDRAMC_DSTS_SSI 0x00000002 /* indicates that the SDRAM is in self-refresh mode */
+
+#define __addr_FR4XX_DRCN 0xfe000430 /* Address of DRCN register */
+#define FR4XX_DSTS_OFFSET -8 /* Offset from DRCN to DSTS */
+#define FR4XX_SDRAMC_DSTS_SSI 0x00000001 /* indicates that the SDRAM is in self-refresh mode */
+
+#define SDRAMC_DRCN_SR 0x00000001 /* transition SDRAM into self-refresh mode */
+
+ .section .bss
+ .balign 8
+ .globl __sleep_save_area
+__sleep_save_area:
+ .space 16
+
+
+ .text
+ .balign 4
+
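+# li: load the 32-bit immediate \v into register \r (sethi sets the upper
+# half, setlo the lower half)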
+.macro li v r
+ sethi.p %hi(\v),\r
+ setlo %lo(\v),\r
+.endm
+
+#ifdef CONFIG_PM
+###############################################################################
+#
+# CPU suspension routine
+# - void frv_cpu_suspend(unsigned long pdm_mode)
+#
+###############################################################################
+ .globl frv_cpu_suspend
+ .type frv_cpu_suspend,@function
+frv_cpu_suspend:
+
+ #----------------------------------------------------
+ # save hsr0, psr, isr, and lr for resume code
+ #----------------------------------------------------
+ li __sleep_save_area,gr11
+
+ movsg hsr0,gr4
+ movsg psr,gr5
+ movsg isr,gr6
+ movsg lr,gr7
+ stdi gr4,@(gr11,#0)
+ stdi gr6,@(gr11,#8)
+
+ # store the return address from sleep in GR14, and its complement in GR13 as a check
+ li __ramboot_resume,gr14
+#ifdef CONFIG_MMU
+	# Resuming via RAMBOOT# turns the MMU off, so the bootloader needs a physical address.
+ sethi.p %hi(__page_offset),gr13
+ setlo %lo(__page_offset),gr13
+ sub gr14,gr13,gr14
+#endif
+ not gr14,gr13
+
+ #----------------------------------------------------
+	# preload and lock into the icache the code that may have to run
+	# while the DRAM is in self-refresh state.
+ #----------------------------------------------------
+ movsg hsr0, gr3
+ li HSR0_ICE,gr4
+ or gr3,gr4,gr3
+ movgs gr3,hsr0
+ or gr3,gr8,gr7 // add the sleep bits for later
+
+ li #__icache_lock_start,gr3
+ li #__icache_lock_end,gr4
+1: icpl gr3,gr0,#1
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,1b
+
+ # disable exceptions
+ movsg psr,gr8
+ andi.p gr8,#~PSR_PIL,gr8
+ andi gr8,~PSR_ET,gr8
+ movgs gr8,psr
+ ori gr8,#PSR_ET,gr8
+
+ srli gr8,#28,gr4
+ subicc gr4,#3,gr0,icc0
+ beq icc0,#0,1f
+ # FR4xx
+ li __addr_FR4XX_DRCN,gr4
+ li FR4XX_SDRAMC_DSTS_SSI,gr5
+ li FR4XX_DSTS_OFFSET,gr6
+ bra __icache_lock_start
+1:
+ # FR5xx
+ li __addr_FR55X_DRCN,gr4
+ li FR55X_SDRAMC_DSTS_SSI,gr5
+ li FR55X_DSTS_OFFSET,gr6
+ bra __icache_lock_start
+
+ .size frv_cpu_suspend, .-frv_cpu_suspend
+
+#
+# the final part of the sleep sequence...
+# - we want it to be cacheline aligned so we can lock it into the icache easily
+# On entry: gr7 holds desired hsr0 sleep value
+# gr8 holds desired psr sleep value
+#
+ .balign L1_CACHE_BYTES
+ .type __icache_lock_start,@function
+__icache_lock_start:
+
+ #----------------------------------------------------
+ # put SDRAM in self-refresh mode
+ #----------------------------------------------------
+
+ # Flush all data in the cache using the DCEF instruction.
+ dcef @(gr0,gr0),#1
+
+ # Stop DMAC transfer
+
+ # Execute dummy load from SDRAM
+ ldi @(gr11,#0),gr11
+
+ # put the SDRAM into self-refresh mode
+ ld @(gr4,gr0),gr11
+ ori gr11,#SDRAMC_DRCN_SR,gr11
+ st gr11,@(gr4,gr0)
+ membar
+
+ # wait for SDRAM to reach self-refresh mode
+1: ld @(gr4,gr6),gr11
+ andcc gr11,gr5,gr11,icc0
+ beq icc0,#0,1b
+
+ # Set the GPIO register so that the IRQ[3:0] pins become valid, as required.
+ # Set the clock mode (CLKC register) as required.
+ # - At this time, also set the CLKC register P0 bit.
+
+ # Set the HSR0 register PDM field.
+ movgs gr7,hsr0
+
+ # Execute NOP 32 times.
+ .rept 32
+ nop
+ .endr
+
+#if 0 // Fujitsu recommends skipping this and will update the docs.
+ # Release the interrupt mask setting of the MASK register of the
+ # interrupt controller if necessary.
+ sti gr10,@(gr9,#0)
+ membar
+#endif
+
+ # Set the PSR register ET bit to 1 to enable interrupts.
+ movgs gr8,psr
+
+ ###################################################
+ # this is only reached if waking up via interrupt
+ ###################################################
+
+ # Execute NOP 32 times.
+ .rept 32
+ nop
+ .endr
+
+ #----------------------------------------------------
+ # wake SDRAM from self-refresh mode
+ #----------------------------------------------------
+ ld @(gr4,gr0),gr11
+ andi gr11,#~SDRAMC_DRCN_SR,gr11
+ st gr11,@(gr4,gr0)
+ membar
+2:
+ ld @(gr4,gr6),gr11 // Wait for it to come back...
+ andcc gr11,gr5,gr0,icc0
+ bne icc0,0,2b
+
+ # wait for the SDRAM to stabilise
+ li 0x0100000,gr3
+3: subicc gr3,#1,gr3,icc0
+ bne icc0,#0,3b
+
+ # now that DRAM is back, this is the end of the code which gets
+ # locked in icache.
+__icache_lock_end:
+ .size __icache_lock_start, .-__icache_lock_start
+
+ # Fall-through to the RAMBOOT# wakeup path
+
+###############################################################################
+#
+# resume from suspend re-entry point reached via RAMBOOT# and bootloader
+#
+###############################################################################
+__ramboot_resume:
+
+ #----------------------------------------------------
+ # restore hsr0, psr, isr, and leave saved lr in gr7
+ #----------------------------------------------------
+ li __sleep_save_area,gr11
+#ifdef CONFIG_MMU
+ movsg hsr0,gr4
+ sethi.p %hi(HSR0_EXMMU),gr3
+ setlo %lo(HSR0_EXMMU),gr3
+ andcc gr3,gr4,gr0,icc0
+ bne icc0,#0,2f
+
+ # need to use physical address
+ sethi.p %hi(__page_offset),gr3
+ setlo %lo(__page_offset),gr3
+ sub gr11,gr3,gr11
+
+ # flush all tlb entries
+ setlos #64,gr4
+ setlos.p #PAGE_SIZE,gr5
+ setlos #0,gr6
+1:
+ tlbpr gr6,gr0,#6,#0
+ subicc.p gr4,#1,gr4,icc0
+ add gr6,gr5,gr6
+ bne icc0,#2,1b
+
+ # need a temporary mapping for the current physical address we are
+ # using between time MMU is enabled and jump to virtual address is
+ # made.
+ sethi.p %hi(0x00000000),gr4
+ setlo %lo(0x00000000),gr4 ; physical address
+ setlos #xAMPRx_L|xAMPRx_M|xAMPRx_SS_256Mb|xAMPRx_S_KERNEL|xAMPRx_V,gr5
+ or gr4,gr5,gr5
+
+ movsg cxnr,gr13
+ or gr4,gr13,gr4
+
+ movgs gr4,iamlr1 ; mapped from real address 0
+ movgs gr5,iampr1 ; cached kernel memory at 0x00000000
+2:
+#endif
+
+ lddi @(gr11,#0),gr4 ; hsr0, psr
+ lddi @(gr11,#8),gr6 ; isr, lr
+ movgs gr4,hsr0
+ bar
+
+#ifdef CONFIG_MMU
+ sethi.p %hi(1f),gr11
+ setlo %lo(1f),gr11
+ jmpl @(gr11,gr0)
+1:
+ movgs gr0,iampr1 ; get rid of temporary mapping
+#endif
+ movgs gr5,psr
+ movgs gr6,isr
+
+ #----------------------------------------------------
+ # unlock the icache which was locked before going to sleep
+ #----------------------------------------------------
+ li __icache_lock_start,gr3
+ li __icache_lock_end,gr4
+1: icul gr3
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,1b
+
+ #----------------------------------------------------
+ # back to business as usual
+ #----------------------------------------------------
+ jmpl @(gr7,gr0) ;
+
+#endif /* CONFIG_PM */
+
+###############################################################################
+#
+# CPU core sleep mode routine
+#
+###############################################################################
+ .globl frv_cpu_core_sleep
+ .type frv_cpu_core_sleep,@function
+frv_cpu_core_sleep:
+
+ # Preload into icache.
+ li #__core_sleep_icache_lock_start,gr3
+ li #__core_sleep_icache_lock_end,gr4
+
+1: icpl gr3,gr0,#1
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,1b
+
+ bra __core_sleep_icache_lock_start
+
+ .balign L1_CACHE_BYTES
+__core_sleep_icache_lock_start:
+
+ # (1) Set the PSR register ET bit to 0 to disable interrupts.
+ movsg psr,gr8
+ andi.p gr8,#~(PSR_PIL),gr8
+ andi gr8,#~(PSR_ET),gr4
+ movgs gr4,psr
+
+#if 0 // Fujitsu recommends skipping this and will update the docs.
+	# (2) Set all bits in the MASK register of the interrupt controller
+	#     to '1' to mask interrupts.
+ sethi.p %hi(__addr_MASK),gr9
+ setlo %lo(__addr_MASK),gr9
+ sethi.p %hi(0xffff0000),gr4
+ setlo %lo(0xffff0000),gr4
+ ldi @(gr9,#0),gr10
+ sti gr4,@(gr9,#0)
+#endif
+ # (3) Flush all data in the cache using the DCEF instruction.
+ dcef @(gr0,gr0),#1
+
+ # (4) Execute the memory barrier instruction
+ membar
+
+ # (5) Set the GPIO register so that the IRQ[3:0] pins become valid, as required.
+ # (6) Set the clock mode (CLKC register) as required.
+ # - At this time, also set the CLKC register P0 bit.
+	# (7) Set the HSR0 register PDM field to 001.
+ movsg hsr0,gr4
+ ori gr4,HSR0_PDM_CORE_SLEEP,gr4
+ movgs gr4,hsr0
+
+ # (8) Execute NOP 32 times.
+ .rept 32
+ nop
+ .endr
+
+#if 0 // Fujitsu recommends skipping this and will update the docs.
+ # (9) Release the interrupt mask setting of the MASK register of the
+ # interrupt controller if necessary.
+ sti gr10,@(gr9,#0)
+ membar
+#endif
+
+ # (10) Set the PSR register ET bit to 1 to enable interrupts.
+ movgs gr8,psr
+
+__core_sleep_icache_lock_end:
+
+ # Unlock from icache
+ li __core_sleep_icache_lock_start,gr3
+ li __core_sleep_icache_lock_end,gr4
+1: icul gr3
+ addi gr3,#L1_CACHE_BYTES,gr3
+ cmp gr4,gr3,icc0
+ bhi icc0,#0,1b
+
+ bralr
+
+ .size frv_cpu_core_sleep, .-frv_cpu_core_sleep
--- /dev/null
+###############################################################################
+#
+# switch_to.S: context switch operation
+#
+# Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+# Written by David Howells (dhowells@redhat.com)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version
+# 2 of the License, or (at your option) any later version.
+#
+###############################################################################
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/processor.h>
+#include <asm/registers.h>
+#include <asm/spr-regs.h>
+
+.macro LEDS val
+ setlos #~\val,gr27
+ st gr27,@(gr30,gr0)
+ membar
+ dcf @(gr30,gr0)
+.endm
+
+ .section .sdata
+ .balign 8
+
+ # address of frame 0 (userspace) on current kernel stack
+ .globl __kernel_frame0_ptr
+__kernel_frame0_ptr:
+ .long init_thread_union + THREAD_SIZE - USER_CONTEXT_SIZE
+
+ # address of current task
+ .globl __kernel_current_task
+__kernel_current_task:
+ .long init_task
+
+ .section .text
+ .balign 4
+
+###############################################################################
+#
+# struct task_struct *__switch_to(struct thread_struct *prev_thread,
+# struct thread_struct *next_thread,
+# struct task_struct *prev)
+#
+###############################################################################
+ .globl __switch_to
+__switch_to:
+ # save outgoing process's context
+ sethi.p %hi(__switch_back),gr13
+ setlo %lo(__switch_back),gr13
+ movsg lr,gr12
+
+ stdi gr28,@(gr8,#__THREAD_FRAME)
+ sti sp ,@(gr8,#__THREAD_SP)
+ sti fp ,@(gr8,#__THREAD_FP)
+ stdi gr12,@(gr8,#__THREAD_LR)
+ stdi gr16,@(gr8,#__THREAD_GR(16))
+ stdi gr18,@(gr8,#__THREAD_GR(18))
+ stdi gr20,@(gr8,#__THREAD_GR(20))
+ stdi gr22,@(gr8,#__THREAD_GR(22))
+ stdi gr24,@(gr8,#__THREAD_GR(24))
+ stdi.p gr26,@(gr8,#__THREAD_GR(26))
+
+ or gr8,gr8,gr22
+ ldi.p @(gr8,#__THREAD_USER),gr8
+ call save_user_regs
+ or gr22,gr22,gr8
+
+ # retrieve the new context
+ sethi.p %hi(__kernel_frame0_ptr),gr6
+ setlo %lo(__kernel_frame0_ptr),gr6
+ movsg psr,gr4
+
+ lddi.p @(gr9,#__THREAD_FRAME),gr10
+ or gr10,gr10,gr27 ; save prev for the return value
+
+ ldi @(gr11,#4),gr19 ; get new_current->thread_info
+
+ lddi @(gr9,#__THREAD_SP),gr12
+ ldi @(gr9,#__THREAD_LR),gr14
+ ldi @(gr9,#__THREAD_PC),gr18
+ ldi.p @(gr9,#__THREAD_FRAME0),gr7
+
+ # actually switch kernel contexts with ordinary exceptions disabled
+ andi gr4,#~PSR_ET,gr5
+ movgs gr5,psr
+
+ or.p gr10,gr0,gr28 ; set __frame
+ or gr11,gr0,gr29 ; set __current
+ or.p gr12,gr0,sp
+ or gr13,gr0,fp
+ or gr19,gr0,gr15 ; set __current_thread_info
+
+ sti gr7,@(gr6,#0) ; set __kernel_frame0_ptr
+ sti gr29,@(gr6,#4) ; set __kernel_current_task
+
+ movgs gr14,lr
+ bar
+
+ srli gr15,#28,gr5
+ subicc gr5,#0xc,gr0,icc0
+ beq icc0,#0,111f
+ break
+ nop
+111:
+
+ # jump to __switch_back or ret_from_fork as appropriate
+ # - move prev to GR8
+ movgs gr4,psr
+ jmpl.p @(gr18,gr0)
+ or gr27,gr27,gr8
+
+###############################################################################
+#
+# restore incoming process's context
+# - on entry:
+# - SP, FP, LR, GR15, GR28 and GR29 will have been set up appropriately
+# - GR8 will point to the outgoing task_struct
+# - GR9 will point to the incoming thread_struct
+#
+###############################################################################
+__switch_back:
+ lddi @(gr9,#__THREAD_GR(16)),gr16
+ lddi @(gr9,#__THREAD_GR(18)),gr18
+ lddi @(gr9,#__THREAD_GR(20)),gr20
+ lddi @(gr9,#__THREAD_GR(22)),gr22
+ lddi @(gr9,#__THREAD_GR(24)),gr24
+ lddi @(gr9,#__THREAD_GR(26)),gr26
+
+ # fall through into restore_user_regs()
+ ldi.p @(gr9,#__THREAD_USER),gr8
+ or gr8,gr8,gr9
+
+###############################################################################
+#
+# restore extra general regs and FP/Media regs
+# - void *restore_user_regs(const struct user_context *target, void *retval)
+# - on entry:
+# - GR8 will point to the user context to swap in
+# - GR9 will contain the value to be returned in GR8 (prev task on context switch)
+#
+###############################################################################
+ .globl restore_user_regs
+restore_user_regs:
+ movsg hsr0,gr6
+ ori gr6,#HSR0_GRHE|HSR0_FRLE|HSR0_FRHE,gr6
+ movgs gr6,hsr0
+ movsg hsr0,gr6
+
+ movsg psr,gr7
+ ori gr7,#PSR_EF|PSR_EM,gr7
+ movgs gr7,psr
+ movsg psr,gr7
+ srli gr7,#24,gr7
+ bar
+
+ lddi @(gr8,#__FPMEDIA_MSR(0)),gr4
+
+ movgs gr4,msr0
+ movgs gr5,msr1
+
+ lddfi @(gr8,#__FPMEDIA_ACC(0)),fr16
+ lddfi @(gr8,#__FPMEDIA_ACC(2)),fr18
+ ldbfi @(gr8,#__FPMEDIA_ACCG(0)),fr20
+ ldbfi @(gr8,#__FPMEDIA_ACCG(1)),fr21
+ ldbfi @(gr8,#__FPMEDIA_ACCG(2)),fr22
+ ldbfi @(gr8,#__FPMEDIA_ACCG(3)),fr23
+
+ mwtacc fr16,acc0
+ mwtacc fr17,acc1
+ mwtacc fr18,acc2
+ mwtacc fr19,acc3
+ mwtaccg fr20,accg0
+ mwtaccg fr21,accg1
+ mwtaccg fr22,accg2
+ mwtaccg fr23,accg3
+
+ # some CPUs have extra ACCx and ACCGx regs and maybe FSRx regs
+ subicc.p gr7,#0x50,gr0,icc0
+ subicc gr7,#0x31,gr0,icc1
+ beq icc0,#0,__restore_acc_fr451
+ beq icc1,#0,__restore_acc_fr555
+__restore_acc_cont:
+
+	# some CPUs have GR32-GR63
+ setlos #HSR0_FRHE,gr4
+ andcc gr6,gr4,gr0,icc0
+ beq icc0,#1,__restore_skip_gr32_gr63
+
+ lddi @(gr8,#__INT_GR(32)),gr32
+ lddi @(gr8,#__INT_GR(34)),gr34
+ lddi @(gr8,#__INT_GR(36)),gr36
+ lddi @(gr8,#__INT_GR(38)),gr38
+ lddi @(gr8,#__INT_GR(40)),gr40
+ lddi @(gr8,#__INT_GR(42)),gr42
+ lddi @(gr8,#__INT_GR(44)),gr44
+ lddi @(gr8,#__INT_GR(46)),gr46
+ lddi @(gr8,#__INT_GR(48)),gr48
+ lddi @(gr8,#__INT_GR(50)),gr50
+ lddi @(gr8,#__INT_GR(52)),gr52
+ lddi @(gr8,#__INT_GR(54)),gr54
+ lddi @(gr8,#__INT_GR(56)),gr56
+ lddi @(gr8,#__INT_GR(58)),gr58
+ lddi @(gr8,#__INT_GR(60)),gr60
+ lddi @(gr8,#__INT_GR(62)),gr62
+__restore_skip_gr32_gr63:
+
+	# all CPUs have FR0-FR31
+ lddfi @(gr8,#__FPMEDIA_FR( 0)),fr0
+ lddfi @(gr8,#__FPMEDIA_FR( 2)),fr2
+ lddfi @(gr8,#__FPMEDIA_FR( 4)),fr4
+ lddfi @(gr8,#__FPMEDIA_FR( 6)),fr6
+ lddfi @(gr8,#__FPMEDIA_FR( 8)),fr8
+ lddfi @(gr8,#__FPMEDIA_FR(10)),fr10
+ lddfi @(gr8,#__FPMEDIA_FR(12)),fr12
+ lddfi @(gr8,#__FPMEDIA_FR(14)),fr14
+ lddfi @(gr8,#__FPMEDIA_FR(16)),fr16
+ lddfi @(gr8,#__FPMEDIA_FR(18)),fr18
+ lddfi @(gr8,#__FPMEDIA_FR(20)),fr20
+ lddfi @(gr8,#__FPMEDIA_FR(22)),fr22
+ lddfi @(gr8,#__FPMEDIA_FR(24)),fr24
+ lddfi @(gr8,#__FPMEDIA_FR(26)),fr26
+ lddfi @(gr8,#__FPMEDIA_FR(28)),fr28
+ lddfi.p @(gr8,#__FPMEDIA_FR(30)),fr30
+
+	# some CPUs have FR32-FR63
+ setlos #HSR0_FRHE,gr4
+ andcc gr6,gr4,gr0,icc0
+ beq icc0,#1,__restore_skip_fr32_fr63
+
+ lddfi @(gr8,#__FPMEDIA_FR(32)),fr32
+ lddfi @(gr8,#__FPMEDIA_FR(34)),fr34
+ lddfi @(gr8,#__FPMEDIA_FR(36)),fr36
+ lddfi @(gr8,#__FPMEDIA_FR(38)),fr38
+ lddfi @(gr8,#__FPMEDIA_FR(40)),fr40
+ lddfi @(gr8,#__FPMEDIA_FR(42)),fr42
+ lddfi @(gr8,#__FPMEDIA_FR(44)),fr44
+ lddfi @(gr8,#__FPMEDIA_FR(46)),fr46
+ lddfi @(gr8,#__FPMEDIA_FR(48)),fr48
+ lddfi @(gr8,#__FPMEDIA_FR(50)),fr50
+ lddfi @(gr8,#__FPMEDIA_FR(52)),fr52
+ lddfi @(gr8,#__FPMEDIA_FR(54)),fr54
+ lddfi @(gr8,#__FPMEDIA_FR(56)),fr56
+ lddfi @(gr8,#__FPMEDIA_FR(58)),fr58
+ lddfi @(gr8,#__FPMEDIA_FR(60)),fr60
+ lddfi @(gr8,#__FPMEDIA_FR(62)),fr62
+__restore_skip_fr32_fr63:
+
+ lddi @(gr8,#__FPMEDIA_FNER(0)),gr4
+ movsg fner0,gr4
+ movsg fner1,gr5
+ or.p gr9,gr9,gr8
+ bralr
+
+ # the FR451 also has ACC8-11/ACCG8-11 regs (but not 4-7...)
+__restore_acc_fr451:
+ lddfi @(gr8,#__FPMEDIA_ACC(4)),fr16
+ lddfi @(gr8,#__FPMEDIA_ACC(6)),fr18
+ ldbfi @(gr8,#__FPMEDIA_ACCG(4)),fr20
+ ldbfi @(gr8,#__FPMEDIA_ACCG(5)),fr21
+ ldbfi @(gr8,#__FPMEDIA_ACCG(6)),fr22
+ ldbfi @(gr8,#__FPMEDIA_ACCG(7)),fr23
+
+ mwtacc fr16,acc8
+ mwtacc fr17,acc9
+ mwtacc fr18,acc10
+ mwtacc fr19,acc11
+ mwtaccg fr20,accg8
+ mwtaccg fr21,accg9
+ mwtaccg fr22,accg10
+ mwtaccg fr23,accg11
+ bra __restore_acc_cont
+
+ # the FR555 also has ACC4-7/ACCG4-7 regs and an FSR0 reg
+__restore_acc_fr555:
+ lddfi @(gr8,#__FPMEDIA_ACC(4)),fr16
+ lddfi @(gr8,#__FPMEDIA_ACC(6)),fr18
+ ldbfi @(gr8,#__FPMEDIA_ACCG(4)),fr20
+ ldbfi @(gr8,#__FPMEDIA_ACCG(5)),fr21
+ ldbfi @(gr8,#__FPMEDIA_ACCG(6)),fr22
+ ldbfi @(gr8,#__FPMEDIA_ACCG(7)),fr23
+
+ mnop.p
+ mwtacc fr16,acc4
+ mnop.p
+ mwtacc fr17,acc5
+ mnop.p
+ mwtacc fr18,acc6
+ mnop.p
+ mwtacc fr19,acc7
+ mnop.p
+ mwtaccg fr20,accg4
+ mnop.p
+ mwtaccg fr21,accg5
+ mnop.p
+ mwtaccg fr22,accg6
+ mnop.p
+ mwtaccg fr23,accg7
+
+ ldi @(gr8,#__FPMEDIA_FSR(0)),gr4
+ movgs gr4,fsr0
+
+ bra __restore_acc_cont
+
+
+###############################################################################
+#
+# save extra general regs and FP/Media regs
+# - void save_user_regs(struct user_context *target)
+#
+###############################################################################
+ .globl save_user_regs
+save_user_regs:
+ movsg hsr0,gr6
+ ori gr6,#HSR0_GRHE|HSR0_FRLE|HSR0_FRHE,gr6
+ movgs gr6,hsr0
+ movsg hsr0,gr6
+
+ movsg psr,gr7
+ ori gr7,#PSR_EF|PSR_EM,gr7
+ movgs gr7,psr
+ movsg psr,gr7
+ srli gr7,#24,gr7
+ bar
+
+ movsg fner0,gr4
+ movsg fner1,gr5
+ stdi.p gr4,@(gr8,#__FPMEDIA_FNER(0))
+
+	# some CPUs have GR32-GR63
+ setlos #HSR0_GRHE,gr4
+ andcc gr6,gr4,gr0,icc0
+ beq icc0,#1,__save_skip_gr32_gr63
+
+ stdi gr32,@(gr8,#__INT_GR(32))
+ stdi gr34,@(gr8,#__INT_GR(34))
+ stdi gr36,@(gr8,#__INT_GR(36))
+ stdi gr38,@(gr8,#__INT_GR(38))
+ stdi gr40,@(gr8,#__INT_GR(40))
+ stdi gr42,@(gr8,#__INT_GR(42))
+ stdi gr44,@(gr8,#__INT_GR(44))
+ stdi gr46,@(gr8,#__INT_GR(46))
+ stdi gr48,@(gr8,#__INT_GR(48))
+ stdi gr50,@(gr8,#__INT_GR(50))
+ stdi gr52,@(gr8,#__INT_GR(52))
+ stdi gr54,@(gr8,#__INT_GR(54))
+ stdi gr56,@(gr8,#__INT_GR(56))
+ stdi gr58,@(gr8,#__INT_GR(58))
+ stdi gr60,@(gr8,#__INT_GR(60))
+ stdi gr62,@(gr8,#__INT_GR(62))
+__save_skip_gr32_gr63:
+
+	# all CPUs have FR0-FR31
+ stdfi fr0 ,@(gr8,#__FPMEDIA_FR( 0))
+ stdfi fr2 ,@(gr8,#__FPMEDIA_FR( 2))
+ stdfi fr4 ,@(gr8,#__FPMEDIA_FR( 4))
+ stdfi fr6 ,@(gr8,#__FPMEDIA_FR( 6))
+ stdfi fr8 ,@(gr8,#__FPMEDIA_FR( 8))
+ stdfi fr10,@(gr8,#__FPMEDIA_FR(10))
+ stdfi fr12,@(gr8,#__FPMEDIA_FR(12))
+ stdfi fr14,@(gr8,#__FPMEDIA_FR(14))
+ stdfi fr16,@(gr8,#__FPMEDIA_FR(16))
+ stdfi fr18,@(gr8,#__FPMEDIA_FR(18))
+ stdfi fr20,@(gr8,#__FPMEDIA_FR(20))
+ stdfi fr22,@(gr8,#__FPMEDIA_FR(22))
+ stdfi fr24,@(gr8,#__FPMEDIA_FR(24))
+ stdfi fr26,@(gr8,#__FPMEDIA_FR(26))
+ stdfi fr28,@(gr8,#__FPMEDIA_FR(28))
+ stdfi.p fr30,@(gr8,#__FPMEDIA_FR(30))
+
+	# some CPUs have FR32-FR63
+ setlos #HSR0_FRHE,gr4
+ andcc gr6,gr4,gr0,icc0
+ beq icc0,#1,__save_skip_fr32_fr63
+
+ stdfi fr32,@(gr8,#__FPMEDIA_FR(32))
+ stdfi fr34,@(gr8,#__FPMEDIA_FR(34))
+ stdfi fr36,@(gr8,#__FPMEDIA_FR(36))
+ stdfi fr38,@(gr8,#__FPMEDIA_FR(38))
+ stdfi fr40,@(gr8,#__FPMEDIA_FR(40))
+ stdfi fr42,@(gr8,#__FPMEDIA_FR(42))
+ stdfi fr44,@(gr8,#__FPMEDIA_FR(44))
+ stdfi fr46,@(gr8,#__FPMEDIA_FR(46))
+ stdfi fr48,@(gr8,#__FPMEDIA_FR(48))
+ stdfi fr50,@(gr8,#__FPMEDIA_FR(50))
+ stdfi fr52,@(gr8,#__FPMEDIA_FR(52))
+ stdfi fr54,@(gr8,#__FPMEDIA_FR(54))
+ stdfi fr56,@(gr8,#__FPMEDIA_FR(56))
+ stdfi fr58,@(gr8,#__FPMEDIA_FR(58))
+ stdfi fr60,@(gr8,#__FPMEDIA_FR(60))
+ stdfi fr62,@(gr8,#__FPMEDIA_FR(62))
+__save_skip_fr32_fr63:
+
+ mrdacc acc0 ,fr4
+ mrdacc acc1 ,fr5
+
+ stdfi.p fr4 ,@(gr8,#__FPMEDIA_ACC(0))
+
+ mrdacc acc2 ,fr6
+ mrdacc acc3 ,fr7
+
+ stdfi.p fr6 ,@(gr8,#__FPMEDIA_ACC(2))
+
+ mrdaccg accg0,fr4
+ stbfi.p fr4 ,@(gr8,#__FPMEDIA_ACCG(0))
+
+ mrdaccg accg1,fr5
+ stbfi.p fr5 ,@(gr8,#__FPMEDIA_ACCG(1))
+
+ mrdaccg accg2,fr6
+ stbfi.p fr6 ,@(gr8,#__FPMEDIA_ACCG(2))
+
+ mrdaccg accg3,fr7
+ stbfi fr7 ,@(gr8,#__FPMEDIA_ACCG(3))
+
+ movsg msr0 ,gr4
+ movsg msr1 ,gr5
+
+ stdi gr4 ,@(gr8,#__FPMEDIA_MSR(0))
+
+ # some CPUs have extra ACCx and ACCGx regs and maybe FSRx regs
+ subicc.p gr7,#0x50,gr0,icc0
+ subicc gr7,#0x31,gr0,icc1
+ beq icc0,#0,__save_acc_fr451
+ beq icc1,#0,__save_acc_fr555
+__save_acc_cont:
+
+ lddfi @(gr8,#__FPMEDIA_FR(4)),fr4
+ lddfi.p @(gr8,#__FPMEDIA_FR(6)),fr6
+ bralr
+
+ # the FR451 also has ACC8-11/ACCG8-11 regs (but not 4-7...)
+__save_acc_fr451:
+ mrdacc acc8 ,fr4
+ mrdacc acc9 ,fr5
+
+ stdfi.p fr4 ,@(gr8,#__FPMEDIA_ACC(4))
+
+ mrdacc acc10,fr6
+ mrdacc acc11,fr7
+
+ stdfi.p fr6 ,@(gr8,#__FPMEDIA_ACC(6))
+
+ mrdaccg accg8,fr4
+ stbfi.p fr4 ,@(gr8,#__FPMEDIA_ACCG(4))
+
+ mrdaccg accg9,fr5
+ stbfi.p fr5 ,@(gr8,#__FPMEDIA_ACCG(5))
+
+ mrdaccg accg10,fr6
+ stbfi.p fr6 ,@(gr8,#__FPMEDIA_ACCG(6))
+
+ mrdaccg accg11,fr7
+ stbfi fr7 ,@(gr8,#__FPMEDIA_ACCG(7))
+ bra __save_acc_cont
+
+ # the FR555 also has ACC4-7/ACCG4-7 regs and an FSR0 reg
+__save_acc_fr555:
+ mnop.p
+ mrdacc acc4 ,fr4
+ mnop.p
+ mrdacc acc5 ,fr5
+
+ stdfi fr4 ,@(gr8,#__FPMEDIA_ACC(4))
+
+ mnop.p
+ mrdacc acc6 ,fr6
+ mnop.p
+ mrdacc acc7 ,fr7
+
+ stdfi fr6 ,@(gr8,#__FPMEDIA_ACC(6))
+
+ mnop.p
+ mrdaccg accg4,fr4
+ stbfi fr4 ,@(gr8,#__FPMEDIA_ACCG(4))
+
+ mnop.p
+ mrdaccg accg5,fr5
+ stbfi fr5 ,@(gr8,#__FPMEDIA_ACCG(5))
+
+ mnop.p
+ mrdaccg accg6,fr6
+ stbfi fr6 ,@(gr8,#__FPMEDIA_ACCG(6))
+
+ mnop.p
+ mrdaccg accg7,fr7
+ stbfi fr7 ,@(gr8,#__FPMEDIA_ACCG(7))
+
+ movsg fsr0 ,gr4
+ sti gr4 ,@(gr8,#__FPMEDIA_FSR(0))
+ bra __save_acc_cont
--- /dev/null
+/* sys_frv.c: FRV arch-specific syscall wrappers
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/sys_m68k.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/sem.h>
+#include <linux/msg.h>
+#include <linux/shm.h>
+#include <linux/stat.h>
+#include <linux/mman.h>
+#include <linux/file.h>
+#include <linux/utsname.h>
+#include <linux/syscalls.h>
+
+#include <asm/setup.h>
+#include <asm/uaccess.h>
+#include <asm/ipc.h>
+
+/*
+ * sys_pipe() is the normal C calling standard for creating
+ * a pipe. It's not the way unix traditionally does this, though.
+ */
+asmlinkage long sys_pipe(unsigned long * fildes)
+{
+ int fd[2];
+ int error;
+
+ error = do_pipe(fd);
+ if (!error) {
+ if (copy_to_user(fildes, fd, 2*sizeof(int)))
+ error = -EFAULT;
+ }
+ return error;
+}
+
+asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+ unsigned long fd, unsigned long pgoff)
+{
+ int error = -EBADF;
+ struct file * file = NULL;
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ goto out;
+ }
+
+ /* As with sparc32, make sure the shift for mmap2 is constant
+ (12), no matter what PAGE_SIZE we have.... */
+
+ /* But unlike sparc32, don't just silently break if we're
+ trying to map something we can't */
+ if (pgoff & ((1<<(PAGE_SHIFT-12))-1))
+ return -EINVAL;
+
+ pgoff >>= (PAGE_SHIFT - 12);
+
+	down_write(&current->mm->mmap_sem);
+	error = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+	up_write(&current->mm->mmap_sem);
+
+ if (file)
+ fput(file);
+out:
+ return error;
+}
+
+#if 0 /* DAVIDM - do we want this */
+struct mmap_arg_struct64 {
+ __u32 addr;
+ __u32 len;
+ __u32 prot;
+ __u32 flags;
+ __u64 offset; /* 64 bits */
+ __u32 fd;
+};
+
+asmlinkage long sys_mmap64(struct mmap_arg_struct64 *arg)
+{
+ int error = -EFAULT;
+ struct file * file = NULL;
+ struct mmap_arg_struct64 a;
+ unsigned long pgoff;
+
+ if (copy_from_user(&a, arg, sizeof(a)))
+ return -EFAULT;
+
+ if ((long)a.offset & ~PAGE_MASK)
+ return -EINVAL;
+
+ pgoff = a.offset >> PAGE_SHIFT;
+ if ((a.offset >> PAGE_SHIFT) != pgoff)
+ return -EINVAL;
+
+ if (!(a.flags & MAP_ANONYMOUS)) {
+ error = -EBADF;
+ file = fget(a.fd);
+ if (!file)
+ goto out;
+ }
+ a.flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+
+	down_write(&current->mm->mmap_sem);
+	error = do_mmap_pgoff(file, a.addr, a.len, a.prot, a.flags, pgoff);
+	up_write(&current->mm->mmap_sem);
+ if (file)
+ fput(file);
+out:
+ return error;
+}
+#endif
+
+/*
+ * sys_ipc() is the de-multiplexer for the SysV IPC calls..
+ *
+ * This is really horribly ugly.
+ */
+asmlinkage long sys_ipc(unsigned long call,
+ unsigned long first,
+ unsigned long second,
+ unsigned long third,
+ void __user *ptr,
+ unsigned long fifth)
+{
+ int version, ret;
+
+ version = call >> 16; /* hack for backward compatibility */
+ call &= 0xffff;
+
+ switch (call) {
+ case SEMOP:
+ return sys_semtimedop(first, (struct sembuf __user *)ptr, second, NULL);
+ case SEMTIMEDOP:
+ return sys_semtimedop(first, (struct sembuf __user *)ptr, second,
+ (const struct timespec __user *)fifth);
+
+ case SEMGET:
+ return sys_semget (first, second, third);
+ case SEMCTL: {
+ union semun fourth;
+ if (!ptr)
+ return -EINVAL;
+ if (get_user(fourth.__pad, (void * __user *) ptr))
+ return -EFAULT;
+ return sys_semctl (first, second, third, fourth);
+ }
+
+ case MSGSND:
+ return sys_msgsnd (first, (struct msgbuf __user *) ptr,
+ second, third);
+ case MSGRCV:
+ switch (version) {
+ case 0: {
+ struct ipc_kludge tmp;
+ if (!ptr)
+ return -EINVAL;
+
+ if (copy_from_user(&tmp,
+ (struct ipc_kludge __user *) ptr,
+ sizeof (tmp)))
+ return -EFAULT;
+ return sys_msgrcv (first, tmp.msgp, second,
+ tmp.msgtyp, third);
+ }
+ default:
+ return sys_msgrcv (first,
+ (struct msgbuf __user *) ptr,
+ second, fifth, third);
+ }
+ case MSGGET:
+ return sys_msgget ((key_t) first, second);
+ case MSGCTL:
+ return sys_msgctl (first, second, (struct msqid_ds __user *) ptr);
+
+ case SHMAT:
+ switch (version) {
+ default: {
+ ulong raddr;
+ ret = do_shmat (first, (char __user *) ptr, second, &raddr);
+ if (ret)
+ return ret;
+ return put_user (raddr, (ulong __user *) third);
+ }
+ case 1: /* iBCS2 emulator entry point */
+ if (!segment_eq(get_fs(), get_ds()))
+ return -EINVAL;
+ /* The "(ulong *) third" is valid _only_ because of the kernel segment thing */
+ return do_shmat (first, (char __user *) ptr, second, (ulong *) third);
+ }
+ case SHMDT:
+ return sys_shmdt ((char __user *)ptr);
+ case SHMGET:
+ return sys_shmget (first, second, third);
+ case SHMCTL:
+ return sys_shmctl (first, second,
+ (struct shmid_ds __user *) ptr);
+ default:
+ return -ENOSYS;
+ }
+}
--- /dev/null
+/* sysctl.c: implementation of /proc/sys files relating to FRV specifically
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/sysctl.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <asm/uaccess.h>
+
+static const char frv_cache_wback[] = "wback";
+static const char frv_cache_wthru[] = "wthru";
+
+static void frv_change_dcache_mode(unsigned long newmode)
+{
+ unsigned long flags, hsr0;
+
+ local_irq_save(flags);
+
+ hsr0 = __get_HSR(0);
+ hsr0 &= ~HSR0_DCE;
+ __set_HSR(0, hsr0);
+
+ asm volatile(" dcef @(gr0,gr0),#1 \n"
+ " membar \n"
+ : : : "memory"
+ );
+
+ hsr0 = (hsr0 & ~HSR0_CBM) | newmode;
+ __set_HSR(0, hsr0);
+ hsr0 |= HSR0_DCE;
+ __set_HSR(0, hsr0);
+
+ local_irq_restore(flags);
+
+ //printk("HSR0 now %08lx\n", hsr0);
+}
+
+/*****************************************************************************/
+/*
+ * handle requests delivered via /proc to dynamically switch the write caching mode
+ */
+static int procctl_frv_cachemode(ctl_table *table, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *ppos)
+{
+ unsigned long hsr0;
+ char buff[8];
+ int len;
+
+ len = *lenp;
+
+ if (write) {
+ /* potential state change */
+ if (len <= 1 || len > sizeof(buff) - 1)
+ return -EINVAL;
+
+ if (copy_from_user(buff, buffer, len) != 0)
+ return -EFAULT;
+
+ if (buff[len - 1] == '\n')
+ buff[len - 1] = '\0';
+ else
+ buff[len] = '\0';
+
+ if (strcmp(buff, frv_cache_wback) == 0) {
+ /* switch dcache into write-back mode */
+ frv_change_dcache_mode(HSR0_CBM_COPY_BACK);
+ return 0;
+ }
+
+ if (strcmp(buff, frv_cache_wthru) == 0) {
+ /* switch dcache into write-through mode */
+ frv_change_dcache_mode(HSR0_CBM_WRITE_THRU);
+ return 0;
+ }
+
+ return -EINVAL;
+ }
+
+ /* read the state */
+ if (filp->f_pos > 0) {
+ *lenp = 0;
+ return 0;
+ }
+
+ hsr0 = __get_HSR(0);
+ switch (hsr0 & HSR0_CBM) {
+ case HSR0_CBM_WRITE_THRU:
+ memcpy(buff, frv_cache_wthru, sizeof(frv_cache_wthru) - 1);
+ buff[sizeof(frv_cache_wthru) - 1] = '\n';
+ len = sizeof(frv_cache_wthru);
+ break;
+ default:
+ memcpy(buff, frv_cache_wback, sizeof(frv_cache_wback) - 1);
+ buff[sizeof(frv_cache_wback) - 1] = '\n';
+ len = sizeof(frv_cache_wback);
+ break;
+ }
+
+ if (len > *lenp)
+ len = *lenp;
+
+ if (copy_to_user(buffer, buff, len) != 0)
+ return -EFAULT;
+
+ *lenp = len;
+ filp->f_pos = len;
+ return 0;
+
+} /* end procctl_frv_cachemode() */
+
+/*****************************************************************************/
+/*
+ * permit the mm_struct the nominated process is using to have its MMU context ID pinned
+ */
+#ifdef CONFIG_MMU
+static int procctl_frv_pin_cxnr(ctl_table *table, int write, struct file *filp,
+ void *buffer, size_t *lenp, loff_t *ppos)
+{
+ pid_t pid;
+ char buff[16], *p;
+ int len;
+
+ len = *lenp;
+
+ if (write) {
+ /* potential state change */
+ if (len <= 1 || len > sizeof(buff) - 1)
+ return -EINVAL;
+
+ if (copy_from_user(buff, buffer, len) != 0)
+ return -EFAULT;
+
+ if (buff[len - 1] == '\n')
+ buff[len - 1] = '\0';
+ else
+ buff[len] = '\0';
+
+ pid = simple_strtoul(buff, &p, 10);
+ if (*p)
+ return -EINVAL;
+
+ return cxn_pin_by_pid(pid);
+ }
+
+ /* read the currently pinned CXN */
+ if (filp->f_pos > 0) {
+ *lenp = 0;
+ return 0;
+ }
+
+ len = snprintf(buff, sizeof(buff), "%d\n", cxn_pinned);
+ if (len > *lenp)
+ len = *lenp;
+
+ if (copy_to_user(buffer, buff, len) != 0)
+ return -EFAULT;
+
+ *lenp = len;
+ filp->f_pos = len;
+ return 0;
+
+} /* end procctl_frv_pin_cxnr() */
+#endif
+
+/*
+ * FR-V specific sysctls
+ */
+static struct ctl_table frv_table[] =
+{
+ { 1, "cache-mode", NULL, 0, 0644, NULL, &procctl_frv_cachemode },
+#ifdef CONFIG_MMU
+ { 2, "pin-cxnr", NULL, 0, 0644, NULL, &procctl_frv_pin_cxnr },
+#endif
+ { 0 }
+};
+
+/*
+ * Use a temporary sysctl number. Horrid, but will be cleaned up in 2.6
+ * when all the PM interfaces exist nicely.
+ */
+#define CTL_FRV 9898
+static struct ctl_table frv_dir_table[] =
+{
+ {CTL_FRV, "frv", NULL, 0, 0555, frv_table},
+ {0}
+};
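+
+/*
+ * With the tables above, these controls should show up under /proc/sys/frv/.
+ * Example usage (a sketch, assuming sysctl/procfs support is configured in):
+ *
+ *	echo wthru > /proc/sys/frv/cache-mode	(switch the dcache to write-through)
+ *	echo wback > /proc/sys/frv/cache-mode	(switch the dcache back to copy-back)
+ *	echo <pid> > /proc/sys/frv/pin-cxnr	(pin that pid's MMU context; CONFIG_MMU only)
+ */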
+
+/*
+ * Initialize power interface
+ */
+static int __init frv_sysctl_init(void)
+{
+ register_sysctl_table(frv_dir_table, 1);
+ return 0;
+}
+
+__initcall(frv_sysctl_init);
--- /dev/null
+/* time.c: FRV arch-specific time handling
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/kernel/time.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h> /* CONFIG_HEARTBEAT */
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/interrupt.h>
+#include <linux/profile.h>
+#include <linux/irq.h>
+#include <linux/mm.h>
+
+#include <asm/io.h>
+#include <asm/timer-regs.h>
+#include <asm/mb-regs.h>
+#include <asm/mb86943a.h>
+#include <asm/irq-routing.h>
+
+#include <linux/timex.h>
+
+#define TICK_SIZE (tick_nsec / 1000)
+
+extern unsigned long wall_jiffies;
+
+u64 jiffies_64 = INITIAL_JIFFIES;
+EXPORT_SYMBOL(jiffies_64);
+
+unsigned long __nongprelbss __clkin_clock_speed_HZ;
+unsigned long __nongprelbss __ext_bus_clock_speed_HZ;
+unsigned long __nongprelbss __res_bus_clock_speed_HZ;
+unsigned long __nongprelbss __sdram_clock_speed_HZ;
+unsigned long __nongprelbss __core_bus_clock_speed_HZ;
+unsigned long __nongprelbss __core_clock_speed_HZ;
+unsigned long __nongprelbss __dsu_clock_speed_HZ;
+unsigned long __nongprelbss __serial_clock_speed_HZ;
+unsigned long __delay_loops_MHz;
+
+static irqreturn_t timer_interrupt(int irq, void *dummy, struct pt_regs *regs);
+
+static struct irqaction timer_irq = {
+ timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL
+};
+
+static inline int set_rtc_mmss(unsigned long nowtime)
+{
+ return -1;
+}
+
+/*
+ * timer_interrupt() needs to keep up the real-time clock,
+ * as well as call the "do_timer()" routine every clocktick
+ */
+static irqreturn_t timer_interrupt(int irq, void *dummy, struct pt_regs * regs)
+{
+ /* last time the cmos clock got updated */
+ static long last_rtc_update = 0;
+
+ /*
+ * Here we are in the timer irq handler. We just have irqs locally
+ * disabled but we don't know if the timer_bh is running on the other
+	 * CPU. We need to avoid an SMP race with it. NOTE: we don't need
+ * the irq version of write_lock because as just said we have irq
+ * locally disabled. -arca
+ */
+ write_seqlock(&xtime_lock);
+
+ do_timer(regs);
+ update_process_times(user_mode(regs));
+ profile_tick(CPU_PROFILING, regs);
+
+ /*
+ * If we have an externally synchronized Linux clock, then update
+ * CMOS clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
+ * called as close as possible to 500 ms before the new second starts.
+ */
+ if ((time_status & STA_UNSYNC) == 0 &&
+ xtime.tv_sec > last_rtc_update + 660 &&
+ (xtime.tv_nsec / 1000) >= 500000 - ((unsigned) TICK_SIZE) / 2 &&
+ (xtime.tv_nsec / 1000) <= 500000 + ((unsigned) TICK_SIZE) / 2
+ ) {
+ if (set_rtc_mmss(xtime.tv_sec) == 0)
+ last_rtc_update = xtime.tv_sec;
+ else
+ last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
+ }
+
+#ifdef CONFIG_HEARTBEAT
+ static unsigned short n;
+ n++;
+ __set_LEDS(n);
+#endif /* CONFIG_HEARTBEAT */
+
+ write_sequnlock(&xtime_lock);
+ return IRQ_HANDLED;
+}
+
+void time_divisor_init(void)
+{
+ unsigned short base, pre, prediv;
+
+ /* set the scheduling timer going */
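+	/* divide the resource bus clock down by the prescaler, the clock
+	 * select predivider and HZ so that timer 0 expires HZ times a second */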
+ pre = 1;
+ prediv = 4;
+ base = __res_bus_clock_speed_HZ / pre / HZ / (1 << prediv);
+
+ __set_TPRV(pre);
+ __set_TxCKSL_DATA(0, prediv);
+ __set_TCTR(TCTR_SC_CTR0 | TCTR_RL_RW_LH8 | TCTR_MODE_2);
+ __set_TCSR_DATA(0, base & 0xff);
+ __set_TCSR_DATA(0, base >> 8);
+}
+
+void time_init(void)
+{
+ unsigned int year, mon, day, hour, min, sec;
+
+ extern void arch_gettod(int *year, int *mon, int *day, int *hour, int *min, int *sec);
+
+ /* FIX by dqg : Set to zero for platforms that don't have tod */
+ /* without this time is undefined and can overflow time_t, causing */
+	/* very strange errors */
+ year = 1980;
+ mon = day = 1;
+ hour = min = sec = 0;
+ arch_gettod (&year, &mon, &day, &hour, &min, &sec);
+
+ if ((year += 1900) < 1970)
+ year += 100;
+ xtime.tv_sec = mktime(year, mon, day, hour, min, sec);
+ xtime.tv_nsec = 0;
+
+ /* install scheduling interrupt handler */
+ setup_irq(IRQ_CPU_TIMER0, &timer_irq);
+
+ time_divisor_init();
+}
+
+/*
+ * This version of gettimeofday has near microsecond resolution.
+ */
+void do_gettimeofday(struct timeval *tv)
+{
+ unsigned long seq;
+ unsigned long usec, sec;
+ unsigned long max_ntp_tick;
+
+ do {
+ unsigned long lost;
+
+ seq = read_seqbegin(&xtime_lock);
+
+ usec = 0;
+ lost = jiffies - wall_jiffies;
+
+ /*
+ * If time_adjust is negative then NTP is slowing the clock
+ * so make sure not to go into next possible interval.
+ * Better to lose some accuracy than have time go backwards..
+ */
+ if (unlikely(time_adjust < 0)) {
+ max_ntp_tick = (USEC_PER_SEC / HZ) - tickadj;
+ usec = min(usec, max_ntp_tick);
+
+ if (lost)
+ usec += lost * max_ntp_tick;
+ }
+ else if (unlikely(lost))
+ usec += lost * (USEC_PER_SEC / HZ);
+
+ sec = xtime.tv_sec;
+ usec += (xtime.tv_nsec / 1000);
+ } while (read_seqretry(&xtime_lock, seq));
+
+ while (usec >= 1000000) {
+ usec -= 1000000;
+ sec++;
+ }
+
+ tv->tv_sec = sec;
+ tv->tv_usec = usec;
+}
+
+int do_settimeofday(struct timespec *tv)
+{
+ time_t wtm_sec, sec = tv->tv_sec;
+ long wtm_nsec, nsec = tv->tv_nsec;
+
+ if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
+ return -EINVAL;
+
+ write_seqlock_irq(&xtime_lock);
+ /*
+ * This is revolting. We need to set "xtime" correctly. However, the
+ * value in this location is the value at the most recent update of
+ * wall time. Discover what correction gettimeofday() would have
+ * made, and then undo it!
+ */
+ nsec -= 0 * NSEC_PER_USEC;
+ nsec -= (jiffies - wall_jiffies) * TICK_NSEC;
+
+ wtm_sec = wall_to_monotonic.tv_sec + (xtime.tv_sec - sec);
+ wtm_nsec = wall_to_monotonic.tv_nsec + (xtime.tv_nsec - nsec);
+
+ set_normalized_timespec(&xtime, sec, nsec);
+ set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
+
+ time_adjust = 0; /* stop active adjtime() */
+ time_status |= STA_UNSYNC;
+ time_maxerror = NTP_PHASE_LIMIT;
+ time_esterror = NTP_PHASE_LIMIT;
+ write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
+ return 0;
+}
+
+/*
+ * Scheduler clock - returns current time in nanosec units.
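+ * (derived from jiffies_64 alone, so resolution is limited to one tick)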
+ */
+unsigned long long sched_clock(void)
+{
+ return jiffies_64 * (1000000000 / HZ);
+}
--- /dev/null
+/* traps.c: high-level exception handler for FR-V
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/types.h>
+#include <linux/user.h>
+#include <linux/string.h>
+#include <linux/linkage.h>
+#include <linux/init.h>
+
+#include <asm/setup.h>
+#include <asm/fpu.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/siginfo.h>
+#include <asm/unaligned.h>
+
+void show_backtrace(struct pt_regs *, unsigned long);
+
+extern asmlinkage void __break_hijack_kernel_event(void);
+
+/*****************************************************************************/
+/*
+ * instruction access error
+ */
+asmlinkage void insn_access_error(unsigned long esfr1, unsigned long epcr0, unsigned long esr0)
+{
+ siginfo_t info;
+
+ die_if_kernel("-- Insn Access Error --\n"
+ "EPCR0 : %08lx\n"
+ "ESR0 : %08lx\n",
+ epcr0, esr0);
+
+ info.si_signo = SIGSEGV;
+ info.si_code = SEGV_ACCERR;
+ info.si_errno = 0;
+ info.si_addr = (void *) ((epcr0 & EPCR0_V) ? (epcr0 & EPCR0_PC) : __frame->pc);
+
+ force_sig_info(info.si_signo, &info, current);
+} /* end insn_access_error() */
+
+/*****************************************************************************/
+/*
+ * handler for:
+ * - illegal instruction
+ * - privileged instruction
+ * - unsupported trap
+ * - debug exceptions
+ */
+asmlinkage void illegal_instruction(unsigned long esfr1, unsigned long epcr0, unsigned long esr0)
+{
+ siginfo_t info;
+
+ die_if_kernel("-- Illegal Instruction --\n"
+ "EPCR0 : %08lx\n"
+ "ESR0 : %08lx\n"
+ "ESFR1 : %08lx\n",
+ epcr0, esr0, esfr1);
+
+ info.si_errno = 0;
+	info.si_addr = (void *) ((epcr0 & EPCR0_V) ? (epcr0 & EPCR0_PC) : __frame->pc);
+
+ switch (__frame->tbr & TBR_TT) {
+ case TBR_TT_ILLEGAL_INSTR:
+ info.si_signo = SIGILL;
+ info.si_code = ILL_ILLOPC;
+ break;
+ case TBR_TT_PRIV_INSTR:
+ info.si_signo = SIGILL;
+ info.si_code = ILL_PRVOPC;
+ break;
+ case TBR_TT_TRAP2 ... TBR_TT_TRAP126:
+ info.si_signo = SIGILL;
+ info.si_code = ILL_ILLTRP;
+ break;
+ /* GDB uses "tira gr0, #1" as a breakpoint instruction. */
+ case TBR_TT_TRAP1:
+ case TBR_TT_BREAK:
+ info.si_signo = SIGTRAP;
+ info.si_code =
+ (__frame->__status & REG__STATUS_STEPPED) ? TRAP_TRACE : TRAP_BRKPT;
+ break;
+ }
+
+ force_sig_info(info.si_signo, &info, current);
+} /* end illegal_instruction() */
+
+/*****************************************************************************/
+/*
+ * media unit exception
+ */
+asmlinkage void media_exception(unsigned long msr0, unsigned long msr1)
+{
+ siginfo_t info;
+
+ die_if_kernel("-- Media Exception --\n"
+ "MSR0 : %08lx\n"
+ "MSR1 : %08lx\n",
+ msr0, msr1);
+
+ info.si_signo = SIGFPE;
+ info.si_code = FPE_MDAOVF;
+ info.si_errno = 0;
+ info.si_addr = (void *) __frame->pc;
+
+ force_sig_info(info.si_signo, &info, current);
+} /* end media_exception() */
+
+/*****************************************************************************/
+/*
+ * instruction or data access exception
+ */
+asmlinkage void memory_access_exception(unsigned long esr0,
+ unsigned long ear0,
+ unsigned long epcr0)
+{
+ siginfo_t info;
+
+#ifdef CONFIG_MMU
+ unsigned long fixup;
+
+ if ((esr0 & ESRx_EC) == ESRx_EC_DATA_ACCESS)
+ if (handle_misalignment(esr0, ear0, epcr0) == 0)
+ return;
+
+ if ((fixup = search_exception_table(__frame->pc)) != 0) {
+ __frame->pc = fixup;
+ return;
+ }
+#endif
+
+ die_if_kernel("-- Memory Access Exception --\n"
+ "ESR0 : %08lx\n"
+ "EAR0 : %08lx\n"
+ "EPCR0 : %08lx\n",
+ esr0, ear0, epcr0);
+
+ info.si_signo = SIGSEGV;
+ info.si_code = SEGV_ACCERR;
+ info.si_errno = 0;
+ info.si_addr = NULL;
+
+ if ((esr0 & (ESRx_VALID | ESR0_EAV)) == (ESRx_VALID | ESR0_EAV))
+ info.si_addr = (void *) ear0;
+
+ force_sig_info(info.si_signo, &info, current);
+
+} /* end memory_access_exception() */
+
+/*****************************************************************************/
+/*
+ * data access error
+ * - double-word data load from CPU control area (0xFExxxxxx)
+ * - read performed on inactive or self-refreshing SDRAM
+ * - error notification from slave device
+ * - misaligned address
+ * - access to out of bounds memory region
+ * - user mode accessing privileged memory region
+ * - write to R/O memory region
+ */
+asmlinkage void data_access_error(unsigned long esfr1, unsigned long esr15, unsigned long ear15)
+{
+ siginfo_t info;
+
+ die_if_kernel("-- Data Access Error --\n"
+ "ESR15 : %08lx\n"
+ "EAR15 : %08lx\n",
+ esr15, ear15);
+
+ info.si_signo = SIGSEGV;
+ info.si_code = SEGV_ACCERR;
+ info.si_errno = 0;
+ info.si_addr = (void *)
+ (((esr15 & (ESRx_VALID|ESR15_EAV)) == (ESRx_VALID|ESR15_EAV)) ? ear15 : 0);
+
+ force_sig_info(info.si_signo, &info, current);
+} /* end data_access_error() */
+
+/*****************************************************************************/
+/*
+ * data store error - should only happen if accessing inactive or self-refreshing SDRAM
+ */
+asmlinkage void data_store_error(unsigned long esfr1, unsigned long esr15)
+{
+ die_if_kernel("-- Data Store Error --\n"
+ "ESR15 : %08lx\n",
+ esr15);
+ BUG();
+} /* end data_store_error() */
+
+/*****************************************************************************/
+/*
+ * integer division exception
+ */
+asmlinkage void division_exception(unsigned long esfr1, unsigned long esr0, unsigned long isr)
+{
+ siginfo_t info;
+
+ die_if_kernel("-- Division Exception --\n"
+ "ESR0 : %08lx\n"
+ "ISR : %08lx\n",
+ esr0, isr);
+
+ info.si_signo = SIGFPE;
+ info.si_code = FPE_INTDIV;
+ info.si_errno = 0;
+ info.si_addr = (void *) __frame->pc;
+
+ force_sig_info(info.si_signo, &info, current);
+} /* end division_exception() */
+
+/*****************************************************************************/
+/*
+ * compound exception (several exception status registers reported at once)
+ */
+asmlinkage void compound_exception(unsigned long esfr1,
+ unsigned long esr0, unsigned long esr14, unsigned long esr15,
+ unsigned long msr0, unsigned long msr1)
+{
+ die_if_kernel("-- Compound Exception --\n"
+ "ESR0 : %08lx\n"
+		      "ESR14	: %08lx\n"
+ "ESR15 : %08lx\n"
+ "MSR0 : %08lx\n"
+ "MSR1 : %08lx\n",
+ esr0, esr14, esr15, msr0, msr1);
+ BUG();
+} /* end compound_exception() */
+
+/*****************************************************************************/
+/*
+ * The architecture-independent backtrace generator
+ */
+void dump_stack(void)
+{
+ show_stack(NULL, NULL);
+}
+
+void show_stack(struct task_struct *task, unsigned long *sp)
+{
+}
+
+void show_trace_task(struct task_struct *tsk)
+{
+ printk("CONTEXT: stack=0x%lx frame=0x%p LR=0x%lx RET=0x%lx\n",
+ tsk->thread.sp, tsk->thread.frame, tsk->thread.lr, tsk->thread.sched_lr);
+}
+
+static const char *regnames[] = {
+ "PSR ", "ISR ", "CCR ", "CCCR",
+ "LR ", "LCR ", "PC ", "_stt",
+ "sys ", "GR8*", "GNE0", "GNE1",
+ "IACH", "IACL",
+ "TBR ", "SP ", "FP ", "GR3 ",
+ "GR4 ", "GR5 ", "GR6 ", "GR7 ",
+ "GR8 ", "GR9 ", "GR10", "GR11",
+ "GR12", "GR13", "GR14", "GR15",
+ "GR16", "GR17", "GR18", "GR19",
+ "GR20", "GR21", "GR22", "GR23",
+ "GR24", "GR25", "GR26", "GR27",
+ "EFRM", "CURR", "GR30", "BFRM"
+};
+
+void show_regs(struct pt_regs *regs)
+{
+ uint32_t *reg;
+ int loop;
+
+ printk("\n");
+
+ printk("Frame: @%08x [%s]\n",
+ (uint32_t) regs,
+ regs->psr & PSR_S ? "kernel" : "user");
+
+ reg = (uint32_t *) regs;
+ for (loop = 0; loop < REG__END; loop++) {
+ printk("%s %08x", regnames[loop + 0], reg[loop + 0]);
+
+ if (loop == REG__END - 1 || loop % 5 == 4)
+ printk("\n");
+ else
+ printk(" | ");
+ }
+
+ printk("Process %s (pid: %d)\n", current->comm, current->pid);
+}
+
+void die_if_kernel(const char *str, ...)
+{
+ char buffer[256];
+ va_list va;
+
+ if (user_mode(__frame))
+ return;
+
+ va_start(va, str);
+ vsprintf(buffer, str, va);
+ va_end(va);
+
+ console_verbose();
+ printk("\n===================================\n");
+ printk("%s\n", buffer);
+ show_backtrace(__frame, 0);
+
+ __break_hijack_kernel_event();
+ do_exit(SIGSEGV);
+}
+
+/*****************************************************************************/
+/*
+ * dump the contents of an exception frame
+ */
+static void show_backtrace_regs(struct pt_regs *frame)
+{
+ uint32_t *reg;
+ int loop;
+
+ /* print the registers for this frame */
+ printk("<-- %s Frame: @%p -->\n",
+ frame->psr & PSR_S ? "Kernel Mode" : "User Mode",
+ frame);
+
+ reg = (uint32_t *) frame;
+ for (loop = 0; loop < REG__END; loop++) {
+ printk("%s %08x", regnames[loop + 0], reg[loop + 0]);
+
+ if (loop == REG__END - 1 || loop % 5 == 4)
+ printk("\n");
+ else
+ printk(" | ");
+ }
+
+ printk("--------\n");
+} /* end show_backtrace_regs() */
+
+/*****************************************************************************/
+/*
+ * generate a backtrace of the kernel stack
+ */
+void show_backtrace(struct pt_regs *frame, unsigned long sp)
+{
+ struct pt_regs *frame0;
+ unsigned long tos = 0, stop = 0, base;
+ int format;
+
+ base = ((((unsigned long) frame) + 8191) & ~8191) - sizeof(struct user_context);
+ frame0 = (struct pt_regs *) base;
+
+ if (sp) {
+ tos = sp;
+ stop = (unsigned long) frame;
+ }
+
+ printk("\nProcess %s (pid: %d)\n\n", current->comm, current->pid);
+
+ for (;;) {
+ /* dump stack segment between frames */
+ //printk("%08lx -> %08lx\n", tos, stop);
+ format = 0;
+ while (tos < stop) {
+ if (format == 0)
+ printk(" %04lx :", tos & 0xffff);
+
+ printk(" %08lx", *(unsigned long *) tos);
+
+ tos += 4;
+ format++;
+ if (format == 8) {
+ printk("\n");
+ format = 0;
+ }
+ }
+
+ if (format > 0)
+ printk("\n");
+
+ /* dump frame 0 outside of the loop */
+ if (frame == frame0)
+ break;
+
+ tos = frame->sp;
+ if (((unsigned long) frame) + sizeof(*frame) != tos) {
+ printk("-- TOS %08lx does not follow frame %p --\n",
+ tos, frame);
+ break;
+ }
+
+ show_backtrace_regs(frame);
+
+ /* dump the stack between this frame and the next */
+ stop = (unsigned long) frame->next_frame;
+ if (stop != base &&
+ (stop < tos ||
+ stop > base ||
+ (stop < base && stop + sizeof(*frame) > base) ||
+ stop & 3)) {
+ printk("-- next_frame %08lx is invalid (range %08lx-%08lx) --\n",
+ stop, tos, base);
+ break;
+ }
+
+ /* move to next frame */
+ frame = frame->next_frame;
+ }
+
+ /* we can always dump frame 0, even if the rest of the stack is corrupt */
+ show_backtrace_regs(frame0);
+
+} /* end show_backtrace() */
+
+/*****************************************************************************/
+/*
+ * initialise traps
+ */
+void __init trap_init (void)
+{
+} /* end trap_init() */
--- /dev/null
+/* uaccess.c: userspace access functions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/mm.h>
+#include <asm/uaccess.h>
+
+/*****************************************************************************/
+/*
+ * copy a null terminated string from userspace
+ */
+long strncpy_from_user(char *dst, const char *src, long count)
+{
+ unsigned long max;
+ char *p, ch;
+ long err = -EFAULT;
+
+ if (count < 0)
+ BUG();
+
+ p = dst;
+
+#ifndef CONFIG_MMU
+ if ((unsigned long) src < memory_start)
+ goto error;
+#endif
+
+ if ((unsigned long) src >= get_addr_limit())
+ goto error;
+
+ max = get_addr_limit() - (unsigned long) src;
+ if ((unsigned long) count > max) {
+ memset(dst + max, 0, count - max);
+ count = max;
+ }
+
+ err = 0;
+ for (; count > 0; count--, p++, src++) {
+ __get_user_asm(err, ch, src, "ub", "=r");
+ if (err < 0)
+ goto error;
+ if (!ch)
+ break;
+ *p = ch;
+ }
+
+ err = p - dst; /* return length excluding NUL */
+
+ error:
+ if (count > 0)
+ memset(p, 0, count); /* clear remainder of buffer [security] */
+
+ return err;
+} /* end strncpy_from_user() */
+
+/*****************************************************************************/
+/*
+ * Return the size of a string (including the ending 0)
+ *
+ * Return 0 on exception, a value greater than N if too long
+ */
+long strnlen_user(const char *src, long count)
+{
+ const char *p;
+ long err = 0;
+ char ch;
+
+ if (count < 0)
+ BUG();
+
+#ifndef CONFIG_MMU
+ if ((unsigned long) src < memory_start)
+ return 0;
+#endif
+
+ if ((unsigned long) src >= get_addr_limit())
+ return 0;
+
+ for (p = src; count > 0; count--, p++) {
+ __get_user_asm(err, ch, p, "ub", "=r");
+ if (err < 0)
+ return 0;
+ if (!ch)
+ break;
+ }
+
+ return p - src + 1; /* return length including NUL */
+} /* end strnlen_user() */
--- /dev/null
+/* ld script to make FRV Linux kernel
+ * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>;
+ */
+OUTPUT_FORMAT("elf32-frv", "elf32-frv", "elf32-frv")
+OUTPUT_ARCH(frv)
+ENTRY(_start)
+
+#include <asm-generic/vmlinux.lds.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/thread_info.h>
+
+jiffies = jiffies_64 + 4;
+
+__page_offset = 0xc0000000; /* start of area covered by struct pages */
+__kernel_image_start = __page_offset; /* address at which kernel image resides */
+
+SECTIONS
+{
+ . = __kernel_image_start;
+
+ /* discardable initialisation code and data */
+ . = ALIGN(PAGE_SIZE); /* Init code and data */
+ __init_begin = .;
+
+ _sinittext = .;
+ .init.text : {
+ *(.text.head)
+#ifndef CONFIG_DEBUG_INFO
+ *(.init.text)
+ *(.exit.text)
+ *(.exit.data)
+ *(.exitcall.exit)
+#endif
+ }
+ _einittext = .;
+ .init.data : { *(.init.data) }
+
+ . = ALIGN(8);
+ __setup_start = .;
+ .setup.init : { KEEP(*(.init.setup)) }
+ __setup_end = .;
+
+ __initcall_start = .;
+ .initcall.init : {
+ *(.initcall1.init)
+ *(.initcall2.init)
+ *(.initcall3.init)
+ *(.initcall4.init)
+ *(.initcall5.init)
+ *(.initcall6.init)
+ *(.initcall7.init)
+ }
+ __initcall_end = .;
+ __con_initcall_start = .;
+ .con_initcall.init : { *(.con_initcall.init) }
+ __con_initcall_end = .;
+ SECURITY_INIT
+ . = ALIGN(4);
+ __alt_instructions = .;
+ .altinstructions : { *(.altinstructions) }
+ __alt_instructions_end = .;
+ .altinstr_replacement : { *(.altinstr_replacement) }
+
+ __per_cpu_start = .;
+ .data.percpu : { *(.data.percpu) }
+ __per_cpu_end = .;
+
+ . = ALIGN(4096);
+ __initramfs_start = .;
+ .init.ramfs : { *(.init.ramfs) }
+ __initramfs_end = .;
+
+ . = ALIGN(THREAD_SIZE);
+ __init_end = .;
+
+ /* put sections together that have massive alignment issues */
+ . = ALIGN(THREAD_SIZE);
+ .data.init_task : {
+ /* init task record & stack */
+ *(.data.init_task)
+ }
+
+ .trap : {
+ /* trap table management - read entry-table.S before modifying */
+ . = ALIGN(8192);
+ __trap_tables = .;
+ *(.trap.user)
+ *(.trap.kernel)
+ . = ALIGN(4096);
+ *(.trap.break)
+ }
+
+ . = ALIGN(4096);
+ .data.page_aligned : { *(.data.idt) }
+
+ . = ALIGN(L1_CACHE_BYTES);
+ .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+
+ /* Text and read-only data */
+ . = ALIGN(4);
+ _text = .;
+ _stext = .;
+ .text : {
+ *(
+ .text.start .text .text.*
+#ifdef CONFIG_DEBUG_INFO
+ .init.text
+ .exit.text
+ .exitcall.exit
+#endif
+ )
+ SCHED_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+ *(.exitcall.exit)
+ } = 0x9090
+
+ _etext = .; /* End of text section */
+
+ RODATA
+
+ .rodata : {
+ *(.trap.vector)
+
+ /* this clause must not be modified - the ordering and adjacency are imperative */
+ __trap_fixup_tables = .;
+ *(.trap.fixup.user .trap.fixup.kernel)
+
+ }
+
+ . = ALIGN(8); /* Exception table */
+ __start___ex_table = .;
+ __ex_table : { KEEP(*(__ex_table)) }
+ __stop___ex_table = .;
+
+ _sdata = .;
+ .data : { /* Data */
+ *(.data .data.*)
+ *(.exit.data)
+ CONSTRUCTORS
+ }
+
+ _edata = .; /* End of data section */
+
+ /* GP section */
+ . = ALIGN(L1_CACHE_BYTES);
+ _gp = . + 2048;
+ PROVIDE (gp = _gp);
+
+ .sdata : { *(.sdata .sdata.*) }
+
+ /* BSS */
+ . = ALIGN(L1_CACHE_BYTES);
+ __bss_start = .;
+
+ .sbss : { *(.sbss .sbss.*) }
+ .bss : { *(.bss .bss.*) }
+ .bss.stack : { *(.bss) }
+
+ __bss_stop = .;
+ _end = . ;
+ . = ALIGN(PAGE_SIZE);
+ __kernel_image_end = .;
+
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+
+ .debug_line 0 : { *(.debug_line) }
+ .debug_info 0 : { *(.debug_info) }
+ .debug_abbrev 0 : { *(.debug_abbrev) }
+ .debug_aranges 0 : { *(.debug_aranges) }
+ .debug_frame 0 : { *(.debug_frame) }
+ .debug_pubnames 0 : { *(.debug_pubnames) }
+ .debug_str 0 : { *(.debug_str) }
+ .debug_ranges 0 : { *(.debug_ranges) }
+
+ .comment 0 : { *(.comment) }
+}
+
+__kernel_image_size_no_bss = __bss_start - __kernel_image_start;
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * IP/TCP/UDP checksumming routines
+ *
+ * Authors: Jorge Cwik, <jorge@laser.satlink.net>
+ * Arnt Gulbrandsen, <agulbra@nvg.unit.no>
+ * Tom May, <ftom@netcom.com>
+ * Andreas Schwab, <schwab@issan.informatik.uni-dortmund.de>
+ * Lots of code moved from tcp.c and ip.c; see those files
+ * for more names.
+ *
+ * 03/02/96 Jes Sorensen, Andreas Schwab, Roman Hodek:
+ * Fixed some nasty bugs, causing some horrible crashes.
+ * A: At some points, the sum (%0) was used as
+ * length-counter instead of the length counter
+ * (%1). Thanks to Roman Hodek for pointing this out.
+ * B: GCC seems to mess up if one uses too many
+ * data-registers to hold input values and one tries to
+ * specify d0 and d1 as scratch registers. Letting gcc choose these
+ * registers itself solves the problem.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/* Revised by Kenneth Albanowski for m68knommu. Basic problem: unaligned access kills, so most
+ of the assembly has to go. */
+
+#include <net/checksum.h>
+#include <asm/checksum.h>
+
+static inline unsigned short from32to16(unsigned long x)
+{
+	/* add up the two 16-bit halves; the result is 16 bits plus a possible carry */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
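+
+/* Worked example of the fold above (illustrative only, not from the original
+ * sources): x = 0x0001fffe gives 0xfffe + 0x0001 = 0xffff after one pass;
+ * x = 0x0001ffff gives 0xffff + 0x0001 = 0x10000, and only the second pass
+ * folds the carry back in to yield 0x0001 -- hence the addition is done twice.
+ */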
+
+static unsigned long do_csum(const unsigned char * buff, int len)
+{
+ int odd, count;
+ unsigned long result = 0;
+
+ if (len <= 0)
+ goto out;
+ odd = 1 & (unsigned long) buff;
+ if (odd) {
+ result = *buff;
+ len--;
+ buff++;
+ }
+ count = len >> 1; /* nr of 16-bit words.. */
+ if (count) {
+ if (2 & (unsigned long) buff) {
+ result += *(unsigned short *) buff;
+ count--;
+ len -= 2;
+ buff += 2;
+ }
+ count >>= 1; /* nr of 32-bit words.. */
+ if (count) {
+ unsigned long carry = 0;
+ do {
+ unsigned long w = *(unsigned long *) buff;
+ count--;
+ buff += 4;
+ result += carry;
+ result += w;
+ carry = (w > result);
+ } while (count);
+ result += carry;
+ result = (result & 0xffff) + (result >> 16);
+ }
+ if (len & 2) {
+ result += *(unsigned short *) buff;
+ buff += 2;
+ }
+ }
+ if (len & 1)
+ result += (*buff << 8);
+ result = from32to16(result);
+ if (odd)
+ result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
+out:
+ return result;
+}
+
+/*
+ * computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum)
+{
+ unsigned int result = do_csum(buff, len);
+
+ /* add in old sum, and carry.. */
+ result += sum;
+ if (sum > result)
+ result += 1;
+ return result;
+}
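+
+/* Minimal usage sketch (hypothetical buffers, not part of this patch): a
+ * checksum can be accumulated over several fragments by feeding the running
+ * sum back in, e.g.
+ *
+ *	sum = csum_partial(hdr, hdr_len, 0);
+ *	sum = csum_partial(payload, payload_len, sum);
+ *
+ * provided every fragment except the last has an even length, as noted in
+ * the comment above.
+ */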
+
+/*
+ * this routine is used for miscellaneous IP-like checksums, mainly
+ * in icmp.c
+ */
+unsigned short ip_compute_csum(const unsigned char * buff, int len)
+{
+ return ~do_csum(buff,len);
+}
+
+/*
+ * copy from fs while checksumming, otherwise like csum_partial
+ */
+
+unsigned int
+csum_partial_copy_from_user(const char *src, char *dst, int len, int sum, int *csum_err)
+{
+ if (csum_err) *csum_err = 0;
+ memcpy(dst, src, len);
+ return csum_partial(dst, len, sum);
+}
+
+/*
+ * copy from ds while checksumming, otherwise like csum_partial
+ */
+
+unsigned int
+csum_partial_copy(const char *src, char *dst, int len, int sum)
+{
+ memcpy(dst, src, len);
+ return csum_partial(dst, len, sum);
+}
--- /dev/null
+/* pci-frv.c: low-level PCI access routines
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from the i386 equivalent stuff
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/errno.h>
+
+#include "pci-frv.h"
+
+#if 0
+void
+pcibios_update_resource(struct pci_dev *dev, struct resource *root,
+ struct resource *res, int resource)
+{
+ u32 new, check;
+ int reg;
+
+ new = res->start | (res->flags & PCI_REGION_FLAG_MASK);
+ if (resource < 6) {
+ reg = PCI_BASE_ADDRESS_0 + 4*resource;
+ } else if (resource == PCI_ROM_RESOURCE) {
+ res->flags |= PCI_ROM_ADDRESS_ENABLE;
+ new |= PCI_ROM_ADDRESS_ENABLE;
+ reg = dev->rom_base_reg;
+ } else {
+ /* Somebody might have asked allocation of a non-standard resource */
+ return;
+ }
+
+ pci_write_config_dword(dev, reg, new);
+ pci_read_config_dword(dev, reg, &check);
+ if ((new ^ check) & ((new & PCI_BASE_ADDRESS_SPACE_IO) ? PCI_BASE_ADDRESS_IO_MASK : PCI_BASE_ADDRESS_MEM_MASK)) {
+ printk(KERN_ERR "PCI: Error while updating region "
+ "%s/%d (%08x != %08x)\n", dev->slot_name, resource,
+ new, check);
+ }
+}
+#endif
+
+/*
+ * We need to avoid collisions with `mirrored' VGA ports
+ * and other strange ISA hardware, so we always want the
+ * addresses to be allocated in the 0x000-0x0ff region
+ * modulo 0x400.
+ *
+ * Why? Because some silly external IO cards only decode
+ * the low 10 bits of the IO address. The 0x00-0xff region
+ * is reserved for motherboard devices that decode all 16
+ * bits, so it's ok to allocate at, say, 0x2800-0x28ff,
+ * but we want to try to avoid allocating at 0x2900-0x2bff
+ * which might be mirrored at 0x0100-0x03ff.
+ */
+void
+pcibios_align_resource(void *data, struct resource *res,
+ unsigned long size, unsigned long align)
+{
+ if (res->flags & IORESOURCE_IO) {
+ unsigned long start = res->start;
+
+ if (start & 0x300) {
+ start = (start + 0x3ff) & ~0x3ff;
+ res->start = start;
+ }
+ }
+}
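+
+/* Worked example of the rule above (illustrative only): an I/O request
+ * starting at 0x2948 has bits in 0x300 set, so it is rounded up to 0x2c00,
+ * which again falls in the 0x000-0x0ff region modulo 0x400 and therefore
+ * cannot collide with a card that decodes only the low 10 address bits.
+ */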
+
+
+/*
+ * Handle resources of PCI devices. If the world were perfect, we could
+ * just allocate all the resource regions and do nothing more. It isn't.
+ * On the other hand, we cannot just re-allocate all devices, as it would
+ * require us to know lots of host bridge internals. So we attempt to
+ * keep as much of the original configuration as possible, but tweak it
+ * when it's found to be wrong.
+ *
+ * Known BIOS problems we have to work around:
+ * - I/O or memory regions not configured
+ * - regions configured, but not enabled in the command register
+ * - bogus I/O addresses above 64K used
+ * - expansion ROMs left enabled (this may sound harmless, but given
+ * the fact the PCI specs explicitly allow address decoders to be
+ * shared between expansion ROMs and other resource regions, it's
+ * at least dangerous)
+ *
+ * Our solution:
+ * (1) Allocate resources for all buses behind PCI-to-PCI bridges.
+ * This gives us fixed barriers on where we can allocate.
+ * (2) Allocate resources for all enabled devices. If there is
+ * a collision, just mark the resource as unallocated. Also
+ * disable expansion ROMs during this step.
+ * (3) Try to allocate resources for disabled devices. If the
+ * resources were assigned correctly, everything goes well,
+ * if they weren't, they won't disturb allocation of other
+ * resources.
+ * (4) Assign new addresses to resources which were either
+ * not configured at all or misconfigured. If explicitly
+ * requested by the user, configure expansion ROM address
+ * as well.
+ */
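+
+/* The four steps above map onto pcibios_resource_survey() below:
+ * (1) pcibios_allocate_bus_resources(), (2) pcibios_allocate_resources(0)
+ * for enabled devices, (3) pcibios_allocate_resources(1) for disabled
+ * devices, and (4) pcibios_assign_resources().
+ */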
+
+static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
+{
+ struct list_head *ln;
+ struct pci_bus *bus;
+ struct pci_dev *dev;
+ int idx;
+ struct resource *r, *pr;
+
+ /* Depth-First Search on bus tree */
+ for (ln=bus_list->next; ln != bus_list; ln=ln->next) {
+ bus = pci_bus_b(ln);
+ if ((dev = bus->self)) {
+ for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
+ r = &dev->resource[idx];
+ if (!r->start)
+ continue;
+ pr = pci_find_parent_resource(dev, r);
+ if (!pr || request_resource(pr, r) < 0)
+ printk(KERN_ERR "PCI: Cannot allocate resource region %d of bridge %s\n", idx, dev->slot_name);
+ }
+ }
+ pcibios_allocate_bus_resources(&bus->children);
+ }
+}
+
+static void __init pcibios_allocate_resources(int pass)
+{
+ struct pci_dev *dev = NULL;
+ int idx, disabled;
+ u16 command;
+ struct resource *r, *pr;
+
+ while (dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev),
+ dev != NULL
+ ) {
+ pci_read_config_word(dev, PCI_COMMAND, &command);
+ for(idx = 0; idx < 6; idx++) {
+ r = &dev->resource[idx];
+ if (r->parent) /* Already allocated */
+ continue;
+ if (!r->start) /* Address not assigned at all */
+ continue;
+ if (r->flags & IORESOURCE_IO)
+ disabled = !(command & PCI_COMMAND_IO);
+ else
+ disabled = !(command & PCI_COMMAND_MEMORY);
+ if (pass == disabled) {
+ DBG("PCI: Resource %08lx-%08lx (f=%lx, d=%d, p=%d)\n",
+ r->start, r->end, r->flags, disabled, pass);
+ pr = pci_find_parent_resource(dev, r);
+ if (!pr || request_resource(pr, r) < 0) {
+ printk(KERN_ERR "PCI: Cannot allocate resource region %d of device %s\n", idx, pci_name(dev));
+ /* We'll assign a new address later */
+ r->end -= r->start;
+ r->start = 0;
+ }
+ }
+ }
+ if (!pass) {
+ r = &dev->resource[PCI_ROM_RESOURCE];
+ if (r->flags & PCI_ROM_ADDRESS_ENABLE) {
+ /* Turn the ROM off, leave the resource region, but keep it unregistered. */
+ u32 reg;
+ DBG("PCI: Switching off ROM of %s\n", pci_name(dev));
+ r->flags &= ~PCI_ROM_ADDRESS_ENABLE;
+				pci_read_config_dword(dev, dev->rom_base_reg, &reg);
+ pci_write_config_dword(dev, dev->rom_base_reg, reg & ~PCI_ROM_ADDRESS_ENABLE);
+ }
+ }
+ }
+}
+
+static void __init pcibios_assign_resources(void)
+{
+ struct pci_dev *dev = NULL;
+ int idx;
+ struct resource *r;
+
+ while (dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev),
+ dev != NULL
+ ) {
+ int class = dev->class >> 8;
+
+ /* Don't touch classless devices and host bridges */
+ if (!class || class == PCI_CLASS_BRIDGE_HOST)
+ continue;
+
+ for(idx=0; idx<6; idx++) {
+ r = &dev->resource[idx];
+
+ /*
+ * Don't touch IDE controllers and I/O ports of video cards!
+ */
+ if ((class == PCI_CLASS_STORAGE_IDE && idx < 4) ||
+ (class == PCI_CLASS_DISPLAY_VGA && (r->flags & IORESOURCE_IO)))
+ continue;
+
+ /*
+ * We shall assign a new address to this resource, either because
+ * the BIOS forgot to do so or because we have decided the old
+ * address was unusable for some reason.
+ */
+ if (!r->start && r->end)
+ pci_assign_resource(dev, idx);
+ }
+
+ if (pci_probe & PCI_ASSIGN_ROMS) {
+ r = &dev->resource[PCI_ROM_RESOURCE];
+ r->end -= r->start;
+ r->start = 0;
+ if (r->end)
+ pci_assign_resource(dev, PCI_ROM_RESOURCE);
+ }
+ }
+}
+
+void __init pcibios_resource_survey(void)
+{
+ DBG("PCI: Allocating resources\n");
+ pcibios_allocate_bus_resources(&pci_root_buses);
+ pcibios_allocate_resources(0);
+ pcibios_allocate_resources(1);
+ pcibios_assign_resources();
+}
+
+int pcibios_enable_resources(struct pci_dev *dev, int mask)
+{
+ u16 cmd, old_cmd;
+ int idx;
+ struct resource *r;
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ old_cmd = cmd;
+ for(idx=0; idx<6; idx++) {
+ /* Only set up the requested stuff */
+ if (!(mask & (1<<idx)))
+ continue;
+
+ r = &dev->resource[idx];
+ if (!r->start && r->end) {
+ printk(KERN_ERR "PCI: Device %s not available because of resource collisions\n", pci_name(dev));
+ return -EINVAL;
+ }
+ if (r->flags & IORESOURCE_IO)
+ cmd |= PCI_COMMAND_IO;
+ if (r->flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+ if (dev->resource[PCI_ROM_RESOURCE].start)
+ cmd |= PCI_COMMAND_MEMORY;
+ if (cmd != old_cmd) {
+ printk("PCI: Enabling device %s (%04x -> %04x)\n", pci_name(dev), old_cmd, cmd);
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+ }
+ return 0;
+}
+
+/*
+ * If we set up a device for bus mastering, we need to check the latency
+ * timer as certain crappy BIOSes forget to set it properly.
+ */
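+/* Sketch of the clamping done by pcibios_set_master() below: a device that
+ * powers up with a latency timer below 16 is raised to 64 (or to
+ * pcibios_max_latency if that is lower, as when the SiS 5597/5598 quirk in
+ * pci-vdk.c drops it to 32); a value above the maximum is clamped down;
+ * anything in between is left untouched.
+ */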
+unsigned int pcibios_max_latency = 255;
+
+void pcibios_set_master(struct pci_dev *dev)
+{
+ u8 lat;
+ pci_read_config_byte(dev, PCI_LATENCY_TIMER, &lat);
+ if (lat < 16)
+ lat = (64 <= pcibios_max_latency) ? 64 : pcibios_max_latency;
+ else if (lat > pcibios_max_latency)
+ lat = pcibios_max_latency;
+ else
+ return;
+ printk(KERN_DEBUG "PCI: Setting latency timer of device %s to %d\n", pci_name(dev), lat);
+ pci_write_config_byte(dev, PCI_LATENCY_TIMER, lat);
+}
--- /dev/null
+/* pci-irq.c: PCI IRQ routing on the FRV motherboard
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * derived from: arch/i386/kernel/pci-irq.c: (c) 1999--2000 Martin Mares <mj@suse.cz>
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/smp.h>
+#include <asm/irq-routing.h>
+
+#include "pci-frv.h"
+
+/*
+ * DEVICE DEVNO INT#A INT#B INT#C INT#D
+ * ======= ======= ======= ======= ======= =======
+ * MB86943 0 fpga.10 - - -
+ * RTL8029 16 fpga.12 - - -
+ * SLOT 1 19 fpga.6 fpga.5 fpga.4 fpga.3
+ * SLOT 2 18 fpga.5 fpga.4 fpga.3 fpga.6
+ * SLOT 3 17 fpga.4 fpga.3 fpga.6 fpga.5
+ *
+ */
+
+static const uint8_t __initdata pci_bus0_irq_routing[32][4] = {
+ [0 ] { IRQ_FPGA_MB86943_PCI_INTA },
+ [16] { IRQ_FPGA_RTL8029_INTA },
+ [17] { IRQ_FPGA_PCI_INTC, IRQ_FPGA_PCI_INTD, IRQ_FPGA_PCI_INTA, IRQ_FPGA_PCI_INTB },
+ [18] { IRQ_FPGA_PCI_INTB, IRQ_FPGA_PCI_INTC, IRQ_FPGA_PCI_INTD, IRQ_FPGA_PCI_INTA },
+ [19] { IRQ_FPGA_PCI_INTA, IRQ_FPGA_PCI_INTB, IRQ_FPGA_PCI_INTC, IRQ_FPGA_PCI_INTD },
+};
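+
+/* Example lookup (matching the table in the comment above): a device in
+ * slot 1 (devno 19) asserting INT#B has pin == 2, so pcibios_fixup_irqs()
+ * below picks pci_bus0_irq_routing[19][1] == IRQ_FPGA_PCI_INTB --
+ * presumably the fpga.5 line shown in the table.
+ */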
+
+void __init pcibios_irq_init(void)
+{
+}
+
+void __init pcibios_fixup_irqs(void)
+{
+ struct pci_dev *dev = NULL;
+ uint8_t line, pin;
+
+ while (dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev),
+ dev != NULL
+ ) {
+ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &pin);
+ if (pin) {
+ dev->irq = pci_bus0_irq_routing[PCI_SLOT(dev->devfn)][pin - 1];
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
+ }
+ pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &line);
+ }
+}
+
+void __init pcibios_penalize_isa_irq(int irq)
+{
+}
+
+void pcibios_enable_irq(struct pci_dev *dev)
+{
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
+}
--- /dev/null
+/* pci-vdk.c: MB93090-MB00 (VDK) PCI support
+ *
+ * Copyright (C) 2003, 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+
+#include <asm/segment.h>
+#include <asm/io.h>
+#include <asm/mb-regs.h>
+#include <asm/mb86943a.h>
+#include "pci-frv.h"
+
+unsigned int __nongpreldata pci_probe = 1;
+
+int __nongpreldata pcibios_last_bus = -1;
+struct pci_bus *__nongpreldata pci_root_bus;
+struct pci_ops *__nongpreldata pci_root_ops;
+
+/*
+ * Functions for accessing PCI configuration space
+ */
+
+#define CONFIG_CMD(bus, devfn, where) \
+ (0x80000000 | (bus->number << 16) | (devfn << 8) | (where & ~3))
+
+#define __set_PciCfgAddr(A) writel((A), (volatile void __iomem *) __region_CS1 + 0x80)
+
+#define __get_PciCfgDataB(A) readb((volatile void __iomem *) __region_CS1 + 0x88 + ((A) & 3))
+#define __get_PciCfgDataW(A) readw((volatile void __iomem *) __region_CS1 + 0x88 + ((A) & 2))
+#define __get_PciCfgDataL(A) readl((volatile void __iomem *) __region_CS1 + 0x88)
+
+#define __set_PciCfgDataB(A,V) \
+ writeb((V), (volatile void __iomem *) __region_CS1 + 0x88 + (3 - ((A) & 3)))
+
+#define __set_PciCfgDataW(A,V) \
+ writew((V), (volatile void __iomem *) __region_CS1 + 0x88 + (2 - ((A) & 2)))
+
+#define __set_PciCfgDataL(A,V) \
+ writel((V), (volatile void __iomem *) __region_CS1 + 0x88)
+
+#define __get_PciBridgeDataB(A) readb((volatile void __iomem *) __region_CS1 + 0x800 + (A))
+#define __get_PciBridgeDataW(A) readw((volatile void __iomem *) __region_CS1 + 0x800 + (A))
+#define __get_PciBridgeDataL(A) readl((volatile void __iomem *) __region_CS1 + 0x800 + (A))
+
+#define __set_PciBridgeDataB(A,V) writeb((V), (volatile void __iomem *) __region_CS1 + 0x800 + (A))
+#define __set_PciBridgeDataW(A,V) writew((V), (volatile void __iomem *) __region_CS1 + 0x800 + (A))
+#define __set_PciBridgeDataL(A,V) writel((V), (volatile void __iomem *) __region_CS1 + 0x800 + (A))
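+
+/* Summary of the access mechanism implied by the macros above: a config
+ * cycle is started by writing the CONFIG_CMD-encoded address (enable bit 31,
+ * bus in bits 23-16, devfn in bits 15-8, register in bits 7-2) to CS1+0x80,
+ * after which the data window at CS1+0x88 is read or written; sub-word
+ * writes select the byte lane via the (3 - (A & 3)) / (2 - (A & 2)) offsets.
+ * The host bridge's own config registers are instead reached directly at
+ * CS1+0x800 without a preceding address write.
+ */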
+
+static inline int __query(const struct pci_dev *dev)
+{
+// return dev->bus->number==0 && (dev->devfn==PCI_DEVFN(0,0));
+// return dev->bus->number==1;
+// return dev->bus->number==0 &&
+// (dev->devfn==PCI_DEVFN(2,0) || dev->devfn==PCI_DEVFN(3,0));
+ return 0;
+}
+
+/*****************************************************************************/
+/*
+ *
+ */
+static int pci_frv_read_config(struct pci_bus *bus, unsigned int devfn, int where, int size,
+ u32 *val)
+{
+ u32 _value;
+
+ if (bus->number == 0 && devfn == PCI_DEVFN(0, 0)) {
+ _value = __get_PciBridgeDataL(where & ~3);
+ }
+ else {
+ __set_PciCfgAddr(CONFIG_CMD(bus, devfn, where));
+ _value = __get_PciCfgDataL(where & ~3);
+ }
+
+ switch (size) {
+ case 1:
+ _value = _value >> ((where & 3) * 8);
+ break;
+
+ case 2:
+ _value = _value >> ((where & 2) * 8);
+ break;
+
+ case 4:
+ break;
+
+ default:
+ BUG();
+ }
+
+ *val = _value;
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int pci_frv_write_config(struct pci_bus *bus, unsigned int devfn, int where, int size,
+ u32 value)
+{
+ switch (size) {
+ case 1:
+ if (bus->number == 0 && devfn == PCI_DEVFN(0, 0)) {
+ __set_PciBridgeDataB(where, value);
+ }
+ else {
+ __set_PciCfgAddr(CONFIG_CMD(bus, devfn, where));
+ __set_PciCfgDataB(where, value);
+ }
+ break;
+
+ case 2:
+ if (bus->number == 0 && devfn == PCI_DEVFN(0, 0)) {
+ __set_PciBridgeDataW(where, value);
+ }
+ else {
+ __set_PciCfgAddr(CONFIG_CMD(bus, devfn, where));
+ __set_PciCfgDataW(where, value);
+ }
+ break;
+
+ case 4:
+ if (bus->number == 0 && devfn == PCI_DEVFN(0, 0)) {
+ __set_PciBridgeDataL(where, value);
+ }
+ else {
+ __set_PciCfgAddr(CONFIG_CMD(bus, devfn, where));
+ __set_PciCfgDataL(where, value);
+ }
+ break;
+
+ default:
+ BUG();
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static struct pci_ops pci_direct_frv = {
+ pci_frv_read_config,
+ pci_frv_write_config,
+};
+
+/*
+ * Before we decide to use direct hardware access mechanisms, we try to do some
+ * trivial checks to ensure it at least _seems_ to be working -- we just test
+ * whether bus 00 contains a host bridge (this is similar to checking
+ * techniques used in XFree86, but ours should be more reliable since we
+ * attempt to make use of direct access hints provided by the PCI BIOS).
+ *
+ * This should be close to trivial, but it isn't, because there are buggy
+ * chipsets (yes, you guessed it, by Intel and Compaq) that have no class ID.
+ */
+static int __init pci_sanity_check(struct pci_ops *o)
+{
+ struct pci_bus bus; /* Fake bus and device */
+ u32 id;
+
+ bus.number = 0;
+
+ if (o->read(&bus, 0, PCI_VENDOR_ID, 4, &id) == PCIBIOS_SUCCESSFUL) {
+ printk("PCI: VDK Bridge device:vendor: %08x\n", id);
+ if (id == 0x200e10cf)
+ return 1;
+ }
+
+ printk("PCI: VDK Bridge: Sanity check failed\n");
+ return 0;
+}
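+
+/* Note on the magic number above: the dword read at PCI_VENDOR_ID has the
+ * vendor ID in its low 16 bits and the device ID in its high 16 bits, so
+ * 0x200e10cf is device 0x200e, vendor 0x10cf -- presumably the Fujitsu
+ * MB86943 bridge referred to elsewhere in this file.
+ */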
+
+static struct pci_ops * __init pci_check_direct(void)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+
+ /* check if access works */
+ if (pci_sanity_check(&pci_direct_frv)) {
+ local_irq_restore(flags);
+ printk("PCI: Using configuration frv\n");
+// request_mem_region(0xBE040000, 256, "FRV bridge");
+// request_mem_region(0xBFFFFFF4, 12, "PCI frv");
+ return &pci_direct_frv;
+ }
+
+ local_irq_restore(flags);
+ return NULL;
+}
+
+/*
+ * Several buggy motherboards address only 16 devices and mirror
+ * them to next 16 IDs. We try to detect this `feature' on all
+ * primary buses (those containing host bridges as they are
+ * expected to be unique) and remove the ghost devices.
+ */
+
+static void __init pcibios_fixup_ghosts(struct pci_bus *b)
+{
+ struct list_head *ln, *mn;
+ struct pci_dev *d, *e;
+ int mirror = PCI_DEVFN(16,0);
+ int seen_host_bridge = 0;
+ int i;
+
+ for (ln=b->devices.next; ln != &b->devices; ln=ln->next) {
+ d = pci_dev_b(ln);
+ if ((d->class >> 8) == PCI_CLASS_BRIDGE_HOST)
+ seen_host_bridge++;
+ for (mn=ln->next; mn != &b->devices; mn=mn->next) {
+ e = pci_dev_b(mn);
+ if (e->devfn != d->devfn + mirror ||
+ e->vendor != d->vendor ||
+ e->device != d->device ||
+ e->class != d->class)
+ continue;
+ for(i=0; i<PCI_NUM_RESOURCES; i++)
+ if (e->resource[i].start != d->resource[i].start ||
+ e->resource[i].end != d->resource[i].end ||
+ e->resource[i].flags != d->resource[i].flags)
+ continue;
+ break;
+ }
+ if (mn == &b->devices)
+ return;
+ }
+ if (!seen_host_bridge)
+ return;
+ printk("PCI: Ignoring ghost devices on bus %02x\n", b->number);
+
+ ln = &b->devices;
+ while (ln->next != &b->devices) {
+ d = pci_dev_b(ln->next);
+ if (d->devfn >= mirror) {
+ list_del(&d->global_list);
+ list_del(&d->bus_list);
+ kfree(d);
+ } else
+ ln = ln->next;
+ }
+}
+
+/*
+ * Discover remaining PCI buses in case there are peer host bridges.
+ * We use the number of last PCI bus provided by the PCI BIOS.
+ */
+static void __init pcibios_fixup_peer_bridges(void)
+{
+ struct pci_bus bus;
+ struct pci_dev dev;
+ int n;
+ u16 l;
+
+ if (pcibios_last_bus <= 0 || pcibios_last_bus >= 0xff)
+ return;
+ printk("PCI: Peer bridge fixup\n");
+ for (n=0; n <= pcibios_last_bus; n++) {
+ if (pci_find_bus(0, n))
+ continue;
+ bus.number = n;
+ bus.ops = pci_root_ops;
+ dev.bus = &bus;
+ for(dev.devfn=0; dev.devfn<256; dev.devfn += 8)
+ if (!pci_read_config_word(&dev, PCI_VENDOR_ID, &l) &&
+ l != 0x0000 && l != 0xffff) {
+ printk("Found device at %02x:%02x [%04x]\n", n, dev.devfn, l);
+ printk("PCI: Discovered peer bus %02x\n", n);
+ pci_scan_bus(n, pci_root_ops, NULL);
+ break;
+ }
+ }
+}
+
+/*
+ * Exceptions for specific devices. Usually work-arounds for fatal design flaws.
+ */
+
+static void __init pci_fixup_umc_ide(struct pci_dev *d)
+{
+ /*
+ * UM8886BF IDE controller sets region type bits incorrectly,
+	 * therefore they look like memory even though they are really I/O.
+ */
+ int i;
+
+ printk("PCI: Fixing base address flags for device %s\n", d->slot_name);
+ for(i=0; i<4; i++)
+ d->resource[i].flags |= PCI_BASE_ADDRESS_SPACE_IO;
+}
+
+static void __init pci_fixup_ide_bases(struct pci_dev *d)
+{
+ int i;
+
+ /*
+ * PCI IDE controllers use non-standard I/O port decoding, respect it.
+ */
+ if ((d->class >> 8) != PCI_CLASS_STORAGE_IDE)
+ return;
+ printk("PCI: IDE base address fixup for %s\n", d->slot_name);
+ for(i=0; i<4; i++) {
+ struct resource *r = &d->resource[i];
+ if ((r->start & ~0x80) == 0x374) {
+ r->start |= 2;
+ r->end = r->start;
+ }
+ }
+}
+
+static void __init pci_fixup_ide_trash(struct pci_dev *d)
+{
+ int i;
+
+ /*
+ * There exist PCI IDE controllers which have utter garbage
+	 * in the first four base registers. Ignore that.
+ */
+ printk("PCI: IDE base address trash cleared for %s\n", d->slot_name);
+ for(i=0; i<4; i++)
+ d->resource[i].start = d->resource[i].end = d->resource[i].flags = 0;
+}
+
+static void __devinit pci_fixup_latency(struct pci_dev *d)
+{
+ /*
+ * SiS 5597 and 5598 chipsets require latency timer set to
+ * at most 32 to avoid lockups.
+ */
+ DBG("PCI: Setting max latency to 32\n");
+ pcibios_max_latency = 32;
+}
+
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_UMC, PCI_DEVICE_ID_UMC_UM8886BF, pci_fixup_umc_ide);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5513, pci_fixup_ide_trash);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5597, pci_fixup_latency);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SI, PCI_DEVICE_ID_SI_5598, pci_fixup_latency);
+DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_ide_bases);
+
+/*
+ * Called after each bus is probed, but before its children
+ * are examined.
+ */
+
+void __init pcibios_fixup_bus(struct pci_bus *bus)
+{
+#if 0
+ printk("### PCIBIOS_FIXUP_BUS(%d)\n",bus->number);
+#endif
+ pcibios_fixup_ghosts(bus);
+ pci_read_bridge_bases(bus);
+
+ if (bus->number == 0) {
+ struct list_head *ln;
+ struct pci_dev *dev;
+ for (ln=bus->devices.next; ln != &bus->devices; ln=ln->next) {
+ dev = pci_dev_b(ln);
+ if (dev->devfn == 0) {
+ dev->resource[0].start = 0;
+ dev->resource[0].end = 0;
+ }
+ }
+ }
+}
+
+/*
+ * Initialization. Try all known PCI access methods. Note that we support
+ * using both PCI BIOS and direct access: in such cases, we use I/O ports
+ * to access config space, but we still keep BIOS order of cards to be
+ * compatible with 2.0.X. This should go away some day.
+ */
+
+int __init pcibios_init(void)
+{
+ struct pci_ops *dir = NULL;
+
+ if (!mb93090_mb00_detected)
+ return -ENXIO;
+
+ __reg_MB86943_sl_ctl |= MB86943_SL_CTL_DRCT_MASTER_SWAP | MB86943_SL_CTL_DRCT_SLAVE_SWAP;
+
+ __reg_MB86943_ecs_base(1) = ((__region_CS2 + 0x01000000) >> 9) | 0x08000000;
+ __reg_MB86943_ecs_base(2) = ((__region_CS2 + 0x00000000) >> 9) | 0x08000000;
+
+ *(volatile uint32_t *) (__region_CS1 + 0x848) = 0xe0000000;
+ *(volatile uint32_t *) (__region_CS1 + 0x8b8) = 0x00000000;
+
+ __reg_MB86943_sl_pci_io_base = (__region_CS2 + 0x04000000) >> 9;
+ __reg_MB86943_sl_pci_mem_base = (__region_CS2 + 0x08000000) >> 9;
+ __reg_MB86943_pci_sl_io_base = __region_CS2 + 0x04000000;
+ __reg_MB86943_pci_sl_mem_base = __region_CS2 + 0x08000000;
+ mb();
+
+	*(volatile unsigned long *)(__region_CS2+0x01300014) = 1;
+
+ ioport_resource.start = (__reg_MB86943_sl_pci_io_base << 9) & 0xfffffc00;
+ ioport_resource.end = (__reg_MB86943_sl_pci_io_range << 9) | 0x3ff;
+ ioport_resource.end += ioport_resource.start;
+
+ printk("PCI IO window: %08lx-%08lx\n", ioport_resource.start, ioport_resource.end);
+
+ iomem_resource.start = (__reg_MB86943_sl_pci_mem_base << 9) & 0xfffffc00;
+
+ /* Reserve somewhere to write to flush posted writes. */
+ iomem_resource.start += 0x400;
+
+ iomem_resource.end = (__reg_MB86943_sl_pci_mem_range << 9) | 0x3ff;
+ iomem_resource.end += iomem_resource.start;
+
+ printk("PCI MEM window: %08lx-%08lx\n", iomem_resource.start, iomem_resource.end);
+ printk("PCI DMA memory: %08lx-%08lx\n", dma_coherent_mem_start, dma_coherent_mem_end);
+
+ if (!pci_probe)
+ return -ENXIO;
+
+ dir = pci_check_direct();
+ if (dir)
+ pci_root_ops = dir;
+ else {
+ printk("PCI: No PCI bus detected\n");
+ return -ENXIO;
+ }
+
+ printk("PCI: Probing PCI hardware\n");
+ pci_root_bus = pci_scan_bus(0, pci_root_ops, NULL);
+
+ pcibios_irq_init();
+ pcibios_fixup_peer_bridges();
+ pcibios_fixup_irqs();
+ pcibios_resource_survey();
+
+ return 0;
+}
+
+arch_initcall(pcibios_init);
+
+char * __init pcibios_setup(char *str)
+{
+ if (!strcmp(str, "off")) {
+ pci_probe = 0;
+ return NULL;
+ } else if (!strncmp(str, "lastbus=", 8)) {
+ pcibios_last_bus = simple_strtol(str+8, NULL, 0);
+ return NULL;
+ }
+ return str;
+}
+
+int pcibios_enable_device(struct pci_dev *dev, int mask)
+{
+ int err;
+
+ if ((err = pcibios_enable_resources(dev, mask)) < 0)
+ return err;
+ pcibios_enable_irq(dev);
+ return 0;
+}
--- /dev/null
+/* dma-alloc.c: consistent DMA memory allocation
+ *
+ * Derived from arch/ppc/mm/cachemap.c
+ *
+ * PowerPC version derived from arch/arm/mm/consistent.c
+ * Copyright (C) 2001 Dan Malek (dmalek@jlc.net)
+ *
+ * linux/arch/arm/mm/consistent.c
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * Consistent memory allocators. Used for DMA devices that want to
+ * share uncached memory with the processor core. The function return
+ * is the virtual address and 'dma_handle' is the physical address.
+ * Mostly stolen from the ARM port, with some changes for PowerPC.
+ * -- Dan
+ * Modified for 36-bit support. -Matt
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/pgalloc.h>
+#include <asm/io.h>
+#include <asm/hardirq.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/uaccess.h>
+#include <asm/smp.h>
+
+static int map_page(unsigned long va, unsigned long pa, pgprot_t prot)
+{
+ pgd_t *pge;
+ pud_t *pue;
+ pmd_t *pme;
+ pte_t *pte;
+ int err = -ENOMEM;
+
+ spin_lock(&init_mm.page_table_lock);
+
+ /* Use upper 10 bits of VA to index the first level map */
+ pge = pgd_offset_k(va);
+ pue = pud_offset(pge, va);
+ pme = pmd_offset(pue, va);
+
+ /* Use middle 10 bits of VA to index the second-level map */
+ pte = pte_alloc_kernel(&init_mm, pme, va);
+ if (pte != 0) {
+ err = 0;
+ set_pte(pte, mk_pte_phys(pa & PAGE_MASK, prot));
+ }
+
+ spin_unlock(&init_mm.page_table_lock);
+ return err;
+}
+
+/*
+ * This function will allocate the requested contiguous pages and
+ * map them into the kernel's vmalloc() space. This is done so we
+ * get unique mapping for these pages, outside of the kernel's 1:1
+ * virtual:physical mapping. This is necessary so we can cover large
+ * portions of the kernel with single large page TLB entries, and
+ * still get unique uncached pages for consistent DMA.
+ */
+void *consistent_alloc(int gfp, size_t size, dma_addr_t *dma_handle)
+{
+ struct vm_struct *area;
+ unsigned long page, va, pa;
+ void *ret;
+ int order, err, i;
+
+ if (in_interrupt())
+ BUG();
+
+ /* only allocate page size areas */
+ size = PAGE_ALIGN(size);
+ order = get_order(size);
+
+ page = __get_free_pages(gfp, order);
+ if (!page) {
+ BUG();
+ return NULL;
+ }
+
+ /* allocate some common virtual space to map the new pages */
+ area = get_vm_area(size, VM_ALLOC);
+ if (area == 0) {
+ free_pages(page, order);
+ return NULL;
+ }
+ va = VMALLOC_VMADDR(area->addr);
+ ret = (void *) va;
+
+ /* this gives us the real physical address of the first page */
+ *dma_handle = pa = virt_to_bus((void *) page);
+
+ /* set refcount=1 on all pages in an order>0 allocation so that vfree() will actually free
+ * all pages that were allocated.
+ */
+ if (order > 0) {
+ struct page *rpage = virt_to_page(page);
+
+ for (i = 1; i < (1 << order); i++)
+ set_page_count(rpage + i, 1);
+ }
+
+ err = 0;
+ for (i = 0; i < size && err == 0; i += PAGE_SIZE)
+ err = map_page(va + i, pa + i, PAGE_KERNEL_NOCACHE);
+
+ if (err) {
+ vfree((void *) va);
+ return NULL;
+ }
+
+ /* we need to ensure that there are no cachelines in use, or worse dirty in this area
+ * - can't do until after virtual address mappings are created
+ */
+ frv_cache_invalidate(va, va + size);
+
+ return ret;
+}
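+
+/* Minimal usage sketch (hypothetical driver code, not part of this patch):
+ *
+ *	dma_addr_t ring_dma;
+ *	void *ring = consistent_alloc(GFP_KERNEL, ring_size, &ring_dma);
+ *	if (!ring)
+ *		return -ENOMEM;
+ *	... program the device with ring_dma, access the ring via 'ring' ...
+ *	consistent_free(ring);
+ *
+ * The CPU-side pointer is an uncached mapping in vmalloc space, while
+ * *dma_handle is the bus address to hand to the device.
+ */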
+
+/*
+ * free page(s) as defined by the above mapping.
+ */
+void consistent_free(void *vaddr)
+{
+ if (in_interrupt())
+ BUG();
+ vfree(vaddr);
+}
+
+/*
+ * make an area consistent.
+ */
+void consistent_sync(void *vaddr, size_t size, int direction)
+{
+ unsigned long start = (unsigned long) vaddr;
+ unsigned long end = start + size;
+
+ switch (direction) {
+ case PCI_DMA_NONE:
+ BUG();
+ case PCI_DMA_FROMDEVICE: /* invalidate only */
+ frv_cache_invalidate(start, end);
+ break;
+ case PCI_DMA_TODEVICE: /* writeback only */
+ frv_dcache_writeback(start, end);
+ break;
+ case PCI_DMA_BIDIRECTIONAL: /* writeback and invalidate */
+ frv_dcache_writeback(start, end);
+ break;
+ }
+}
+
+/*
+ * consistent_sync_page() makes part of a page consistent; identical to
+ * consistent_sync(), but takes a struct page and an offset instead of a virtual address
+ */
+
+void consistent_sync_page(struct page *page, unsigned long offset,
+ size_t size, int direction)
+{
+ void *start;
+
+ start = page_address(page) + offset;
+ consistent_sync(start, size, direction);
+}
--- /dev/null
+/* elf-fdpic.c: ELF FDPIC memory layout management
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/elf-fdpic.h>
+
+/*****************************************************************************/
+/*
+ * lay out the userspace VM according to our grand design
+ */
+#ifdef CONFIG_MMU
+void elf_fdpic_arch_lay_out_mm(struct elf_fdpic_params *exec_params,
+ struct elf_fdpic_params *interp_params,
+ unsigned long *start_stack,
+ unsigned long *start_brk)
+{
+ *start_stack = 0x02200000UL;
+
+ /* if the only executable is a shared object, assume that it is an interpreter rather than
+ * a true executable, and map it such that "ld.so --list" comes out right
+ */
+ if (!(interp_params->flags & ELF_FDPIC_FLAG_PRESENT) &&
+ exec_params->hdr.e_type != ET_EXEC
+ ) {
+ exec_params->load_addr = PAGE_SIZE;
+
+ *start_brk = 0x80000000UL;
+ }
+ else {
+ exec_params->load_addr = 0x02200000UL;
+
+ if ((exec_params->flags & ELF_FDPIC_FLAG_ARRANGEMENT) ==
+ ELF_FDPIC_FLAG_INDEPENDENT
+ ) {
+ exec_params->flags &= ~ELF_FDPIC_FLAG_ARRANGEMENT;
+ exec_params->flags |= ELF_FDPIC_FLAG_CONSTDISP;
+ }
+ }
+
+} /* end elf_fdpic_arch_lay_out_mm() */
+#endif
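+
+/* Resulting user VM layout (as implied by the code above and below, MMU
+ * case): the stack is based at 0x02200000; a true executable is also loaded
+ * at 0x02200000, while a lone shared object (treated as an interpreter) is
+ * loaded at PAGE_SIZE with the brk pushed up to 0x80000000; non-fixed mmaps
+ * are placed from PAGE_SIZE upwards, stopping 0x00200000 short of the stack,
+ * and then from 0x80000000 up to TASK_SIZE.
+ */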
+
+/*****************************************************************************/
+/*
+ * place non-fixed mmaps first in the bottom part of memory, working up, and then in the top part
+ * of memory, working down
+ */
+unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ struct vm_area_struct *vma;
+ unsigned long limit;
+
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ /* only honour a hint if we're not going to clobber something doing so */
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(current->mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ goto success;
+ }
+
+ /* search between the bottom of user VM and the stack grow area */
+ addr = PAGE_SIZE;
+ limit = (current->mm->start_stack - 0x00200000);
+ if (addr + len <= limit) {
+ limit -= len;
+
+ if (addr <= limit) {
+ vma = find_vma(current->mm, PAGE_SIZE);
+ for (; vma; vma = vma->vm_next) {
+ if (addr > limit)
+ break;
+ if (addr + len <= vma->vm_start)
+ goto success;
+ addr = vma->vm_end;
+ }
+ }
+ }
+
+ /* search from just above the WorkRAM area to the top of memory */
+ addr = PAGE_ALIGN(0x80000000);
+ limit = TASK_SIZE - len;
+ if (addr <= limit) {
+ vma = find_vma(current->mm, addr);
+ for (; vma; vma = vma->vm_next) {
+ if (addr > limit)
+ break;
+ if (addr + len <= vma->vm_start)
+ goto success;
+ addr = vma->vm_end;
+ }
+
+ if (!vma && addr <= limit)
+ goto success;
+ }
+
+#if 0
+ printk("[area] l=%lx (ENOMEM) f='%s'\n",
+ len, filp ? filp->f_dentry->d_name.name : "");
+#endif
+ return -ENOMEM;
+
+ success:
+#if 0
+ printk("[area] l=%lx ad=%lx f='%s'\n",
+ len, addr, filp ? filp->f_dentry->d_name.name : "");
+#endif
+ return addr;
+} /* end arch_get_unmapped_area() */
--- /dev/null
+/*
+ * linux/arch/frv/mm/extable.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <asm/uaccess.h>
+
+extern const struct exception_table_entry __attribute__((aligned(8))) __start___ex_table[];
+extern const struct exception_table_entry __attribute__((aligned(8))) __stop___ex_table[];
+extern const void __memset_end, __memset_user_error_lr, __memset_user_error_handler;
+extern const void __memcpy_end, __memcpy_user_error_lr, __memcpy_user_error_handler;
+extern spinlock_t modlist_lock;
+
+/*****************************************************************************/
+/*
+ *
+ */
+static inline unsigned long search_one_table(const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ unsigned long value)
+{
+ while (first <= last) {
+ const struct exception_table_entry __attribute__((aligned(8))) *mid;
+ long diff;
+
+ mid = (last - first) / 2 + first;
+ diff = mid->insn - value;
+ if (diff == 0)
+ return mid->fixup;
+ else if (diff < 0)
+ first = mid + 1;
+ else
+ last = mid - 1;
+ }
+ return 0;
+} /* end search_one_table() */
+
+/*****************************************************************************/
+/*
+ * see if there's a fixup handler available to deal with a kernel fault
+ */
+unsigned long search_exception_table(unsigned long pc)
+{
+ unsigned long ret = 0;
+
+	/* determine whether the fault occurred during a memcpy_user or a memset_user */
+ if (__frame->lr == (unsigned long) &__memset_user_error_lr &&
+ (unsigned long) &memset <= pc && pc < (unsigned long) &__memset_end
+ ) {
+ /* the fault occurred in a protected memset
+ * - we search for the return address (in LR) instead of the program counter
+ * - it was probably during a clear_user()
+ */
+ return (unsigned long) &__memset_user_error_handler;
+ }
+ else if (__frame->lr == (unsigned long) &__memcpy_user_error_lr &&
+ (unsigned long) &memcpy <= pc && pc < (unsigned long) &__memcpy_end
+ ) {
+		/* the fault occurred in a protected memcpy
+ * - we search for the return address (in LR) instead of the program counter
+ * - it was probably during a copy_to/from_user()
+ */
+ return (unsigned long) &__memcpy_user_error_handler;
+ }
+
+#ifndef CONFIG_MODULES
+ /* there is only the kernel to search. */
+ ret = search_one_table(__start___ex_table, __stop___ex_table - 1, pc);
+ return ret;
+
+#else
+ /* the kernel is the last "module" -- no need to treat it special */
+ unsigned long flags;
+ struct module *mp;
+
+ spin_lock_irqsave(&modlist_lock, flags);
+
+ for (mp = module_list; mp != NULL; mp = mp->next) {
+ if (mp->ex_table_start == NULL || !(mp->flags & (MOD_RUNNING | MOD_INITIALIZING)))
+ continue;
+ ret = search_one_table(mp->ex_table_start, mp->ex_table_end - 1, pc);
+ if (ret)
+ break;
+ }
+
+ spin_unlock_irqrestore(&modlist_lock, flags);
+ return ret;
+#endif
+} /* end search_exception_table() */
--- /dev/null
+/*
+ * linux/arch/frv/mm/fault.c
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * - Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68knommu/mm/fault.c
+ * - Copyright (C) 1998 D. Jeff Dionne <jeff@lineo.ca>,
+ * - Copyright (C) 2000 Lineo, Inc. (www.lineo.com)
+ *
+ * Based on:
+ *
+ * linux/arch/m68k/mm/fault.c
+ *
+ * Copyright (C) 1995 Hamish Macdonald
+ */
+
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/ptrace.h>
+#include <linux/hardirq.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+#include <asm/gdb-stub.h>
+
+/*****************************************************************************/
+/*
+ * This routine handles page faults. It determines the problem, and
+ * then passes it off to one of the appropriate routines.
+ */
+asmlinkage void do_page_fault(int datammu, unsigned long esr0, unsigned long ear0)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm;
+ unsigned long _pme, lrai, lrad, fixup;
+ siginfo_t info;
+ pgd_t *pge;
+ pud_t *pue;
+ pte_t *pte;
+ int write;
+
+#if 0
+ const char *atxc[16] = {
+ [0x0] = "mmu-miss", [0x8] = "multi-dat", [0x9] = "multi-sat",
+ [0xa] = "tlb-miss", [0xc] = "privilege", [0xd] = "write-prot",
+ };
+
+ printk("do_page_fault(%d,%lx [%s],%lx)\n",
+ datammu, esr0, atxc[esr0 >> 20 & 0xf], ear0);
+#endif
+
+ mm = current->mm;
+
+ /*
+ * We fault-in kernel-space virtual memory on-demand. The
+ * 'reference' page table is init_mm.pgd.
+ *
+ * NOTE! We MUST NOT take any locks for this case. We may
+ * be in an interrupt or a critical region, and should
+ * only copy the information from the master page table,
+ * nothing more.
+ *
+ * This verifies that the fault happens in kernel space
+ * and that the fault was a page not present (invalid) error
+ */
+ if (!user_mode(__frame) && (esr0 & ESR0_ATXC) == ESR0_ATXC_AMRTLB_MISS) {
+ if (ear0 >= VMALLOC_START && ear0 < VMALLOC_END)
+ goto kernel_pte_fault;
+ if (ear0 >= PKMAP_BASE && ear0 < PKMAP_END)
+ goto kernel_pte_fault;
+ }
+
+ info.si_code = SEGV_MAPERR;
+
+ /*
+ * If we're in an interrupt or have no user
+ * context, we must not take the fault..
+ */
+ if (in_interrupt() || !mm)
+ goto no_context;
+
+ down_read(&mm->mmap_sem);
+
+ vma = find_vma(mm, ear0);
+ if (!vma)
+ goto bad_area;
+ if (vma->vm_start <= ear0)
+ goto good_area;
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ goto bad_area;
+
+ if (user_mode(__frame)) {
+		/*
+		 * accessing the stack well below the stack pointer is always
+		 * a bug; a small margin (two pages here) is allowed for
+		 * instructions that store below the stack pointer before it
+		 * has been adjusted.
+		 */
+ if ((ear0 & PAGE_MASK) + 2 * PAGE_SIZE < __frame->sp) {
+#if 0
+ printk("[%d] ### Access below stack @%lx (sp=%lx)\n",
+ current->pid, ear0, __frame->sp);
+ show_registers(__frame);
+ printk("[%d] ### Code: [%08lx] %02x %02x %02x %02x %02x %02x %02x %02x\n",
+ current->pid,
+ __frame->pc,
+ ((u8*)__frame->pc)[0],
+ ((u8*)__frame->pc)[1],
+ ((u8*)__frame->pc)[2],
+ ((u8*)__frame->pc)[3],
+ ((u8*)__frame->pc)[4],
+ ((u8*)__frame->pc)[5],
+ ((u8*)__frame->pc)[6],
+ ((u8*)__frame->pc)[7]
+ );
+#endif
+ goto bad_area;
+ }
+ }
+
+ if (expand_stack(vma, ear0))
+ goto bad_area;
+
+/*
+ * Ok, we have a good vm_area for this memory access, so
+ * we can handle it..
+ */
+ good_area:
+ info.si_code = SEGV_ACCERR;
+ write = 0;
+ switch (esr0 & ESR0_ATXC) {
+ default:
+ /* handle write to write protected page */
+ case ESR0_ATXC_WP_EXCEP:
+#ifdef TEST_VERIFY_AREA
+ if (!(user_mode(__frame)))
+ printk("WP fault at %08lx\n", __frame->pc);
+#endif
+ if (!(vma->vm_flags & VM_WRITE))
+ goto bad_area;
+ write = 1;
+ break;
+
+ /* handle read from protected page */
+ case ESR0_ATXC_PRIV_EXCEP:
+ goto bad_area;
+
+ /* handle read, write or exec on absent page
+ * - can't support write without permitting read
+ * - don't support execute without permitting read and vice-versa
+ */
+ case ESR0_ATXC_AMRTLB_MISS:
+ if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+ goto bad_area;
+ break;
+ }
+
+ /*
+ * If for any reason at all we couldn't handle the fault,
+ * make sure we exit gracefully rather than endlessly redo
+ * the fault.
+ */
+ switch (handle_mm_fault(mm, vma, ear0, write)) {
+ case 1:
+ current->min_flt++;
+ break;
+ case 2:
+ current->maj_flt++;
+ break;
+ case 0:
+ goto do_sigbus;
+ default:
+ goto out_of_memory;
+ }
+
+ up_read(&mm->mmap_sem);
+ return;
+
+/*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+ bad_area:
+ up_read(&mm->mmap_sem);
+
+ /* User mode accesses just cause a SIGSEGV */
+ if (user_mode(__frame)) {
+ info.si_signo = SIGSEGV;
+ info.si_errno = 0;
+ /* info.si_code has been set above */
+ info.si_addr = (void *) ear0;
+ force_sig_info(SIGSEGV, &info, current);
+ return;
+ }
+
+ no_context:
+ /* are we prepared to handle this kernel fault? */
+ if ((fixup = search_exception_table(__frame->pc)) != 0) {
+ __frame->pc = fixup;
+ return;
+ }
+
+/*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ */
+
+ bust_spinlocks(1);
+
+ if (ear0 < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request");
+ printk(" at virtual addr %08lx\n", ear0);
+ printk(" PC : %08lx\n", __frame->pc);
+ printk(" EXC : esr0=%08lx ear0=%08lx\n", esr0, ear0);
+
+ asm("lrai %1,%0,#1,#0,#0" : "=&r"(lrai) : "r"(ear0));
+ asm("lrad %1,%0,#1,#0,#0" : "=&r"(lrad) : "r"(ear0));
+
+ printk(KERN_ALERT " LRAI: %08lx\n", lrai);
+ printk(KERN_ALERT " LRAD: %08lx\n", lrad);
+
+ __break_hijack_kernel_event();
+
+ pge = pgd_offset(current->mm, ear0);
+ pue = pud_offset(pge, ear0);
+ _pme = pue->pue[0].ste[0];
+
+ printk(KERN_ALERT " PGE : %8p { PME %08lx }\n", pge, _pme);
+
+ if (_pme & xAMPRx_V) {
+ unsigned long dampr, damlr, val;
+
+ asm volatile("movsg dampr2,%0 ! movgs %2,dampr2 ! movsg damlr2,%1"
+ : "=&r"(dampr), "=r"(damlr)
+ : "r" (_pme | xAMPRx_L|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V)
+ );
+
+ pte = (pte_t *) damlr + __pte_index(ear0);
+ val = pte_val(*pte);
+
+ asm volatile("movgs %0,dampr2" :: "r" (dampr));
+
+ printk(KERN_ALERT " PTE : %8p { %08lx }\n", pte, val);
+ }
+
+ die_if_kernel("Oops\n");
+ do_exit(SIGKILL);
+
+/*
+ * We ran out of memory, or some other thing happened to us that made
+ * us unable to handle the page fault gracefully.
+ */
+ out_of_memory:
+ up_read(&mm->mmap_sem);
+ printk("VM: killing process %s\n", current->comm);
+ if (user_mode(__frame))
+ do_exit(SIGKILL);
+ goto no_context;
+
+ do_sigbus:
+ up_read(&mm->mmap_sem);
+
+ /*
+ * Send a sigbus, regardless of whether we were in kernel
+ * or user mode.
+ */
+ info.si_signo = SIGBUS;
+ info.si_errno = 0;
+ info.si_code = BUS_ADRERR;
+ info.si_addr = (void *) ear0;
+ force_sig_info(SIGBUS, &info, current);
+
+ /* Kernel mode? Handle exceptions or die */
+ if (!user_mode(__frame))
+ goto no_context;
+ return;
+
+/*
+ * The fault was caused by a kernel PTE (such as installed by vmalloc or kmap)
+ */
+ kernel_pte_fault:
+ {
+ /*
+ * Synchronize this task's top level page-table
+ * with the 'reference' page table.
+ *
+ * Do _not_ use "tsk" here. We might be inside
+ * an interrupt in the middle of a task switch..
+ */
+ int index = pgd_index(ear0);
+ pgd_t *pgd, *pgd_k;
+ pud_t *pud, *pud_k;
+ pmd_t *pmd, *pmd_k;
+ pte_t *pte_k;
+
+ pgd = (pgd_t *) __get_TTBR();
+ pgd = (pgd_t *)__va(pgd) + index;
+ pgd_k = ((pgd_t *)(init_mm.pgd)) + index;
+
+ if (!pgd_present(*pgd_k))
+ goto no_context;
+ //set_pgd(pgd, *pgd_k); /////// gcc ICE's on this line
+
+ pud_k = pud_offset(pgd_k, ear0);
+ if (!pud_present(*pud_k))
+ goto no_context;
+
+ pmd_k = pmd_offset(pud_k, ear0);
+ if (!pmd_present(*pmd_k))
+ goto no_context;
+
+ pud = pud_offset(pgd, ear0);
+ pmd = pmd_offset(pud, ear0);
+ set_pmd(pmd, *pmd_k);
+
+ pte_k = pte_offset_kernel(pmd_k, ear0);
+ if (!pte_present(*pte_k))
+ goto no_context;
+ return;
+ }
+} /* end do_page_fault() */
--- /dev/null
+/* init.c: memory initialisation for FRV
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Derived from:
+ * - linux/arch/m68knommu/mm/init.c
+ * - Copyright (C) 1998 D. Jeff Dionne <jeff@lineo.ca>, Kenneth Albanowski <kjahds@kjahds.com>,
+ * - Copyright (C) 2000 Lineo, Inc. (www.lineo.com)
+ * - linux/arch/m68k/mm/init.c
+ * - Copyright (C) 1995 Hamish Macdonald
+ */
+
+#include <linux/config.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/bootmem.h>
+#include <linux/highmem.h>
+
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/mmu_context.h>
+#include <asm/virtconvert.h>
+#include <asm/sections.h>
+#include <asm/tlb.h>
+
+#undef DEBUG
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
+
+/*
+ * BAD_PAGE is the page that is used for page faults when linux
+ * is out-of-memory. Older versions of linux just did a
+ * do_exit(), but using this instead means there is less risk
+ * for a process dying in kernel mode, possibly leaving an inode
+ * unused etc..
+ *
+ * BAD_PAGETABLE is the accompanying page-table: it is initialized
+ * to point to BAD_PAGE entries.
+ *
+ * ZERO_PAGE is a special page that is used for zero-initialized
+ * data and COW.
+ */
+static unsigned long empty_bad_page_table;
+static unsigned long empty_bad_page;
+unsigned long empty_zero_page;
+
+/*****************************************************************************/
+/*
+ *
+ */
+void show_mem(void)
+{
+ unsigned long i;
+ int free = 0, total = 0, reserved = 0, shared = 0;
+
+ printk("\nMem-info:\n");
+ show_free_areas();
+ i = max_mapnr;
+ while (i-- > 0) {
+ struct page *page = &mem_map[i];
+
+ total++;
+ if (PageReserved(page))
+ reserved++;
+ else if (!page_count(page))
+ free++;
+ else
+ shared += page_count(page) - 1;
+ }
+
+ printk("%d pages of RAM\n",total);
+ printk("%d free pages\n",free);
+ printk("%d reserved pages\n",reserved);
+ printk("%d pages shared\n",shared);
+
+} /* end show_mem() */
+
+/*****************************************************************************/
+/*
+ * paging_init() continues the virtual memory environment setup which
+ * was begun by the code in arch/head.S.
+ * The parameters are pointers to where to stick the starting and ending
+ * addresses of available kernel virtual memory.
+ */
+void __init paging_init(void)
+{
+ unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
+
+ /* allocate some pages for kernel housekeeping tasks */
+ empty_bad_page_table = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
+ empty_bad_page = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
+ empty_zero_page = (unsigned long) alloc_bootmem_pages(PAGE_SIZE);
+
+ memset((void *) empty_zero_page, 0, PAGE_SIZE);
+
+#ifdef CONFIG_HIGHMEM
+ if (num_physpages - num_mappedpages) {
+ pgd_t *pge;
+ pud_t *pue;
+ pmd_t *pme;
+
+ pkmap_page_table = alloc_bootmem_pages(PAGE_SIZE);
+
+ memset(pkmap_page_table, 0, PAGE_SIZE);
+
+ pge = swapper_pg_dir + pgd_index_k(PKMAP_BASE);
+ pue = pud_offset(pge, PKMAP_BASE);
+ pme = pmd_offset(pue, PKMAP_BASE);
+ __set_pmd(pme, virt_to_phys(pkmap_page_table) | _PAGE_TABLE);
+ }
+#endif
+
+	/* distribute the allocatable pages across the various zones and pass
+	 * them to the allocator
+	 */
+ zones_size[ZONE_DMA] = max_low_pfn - min_low_pfn;
+ zones_size[ZONE_NORMAL] = 0;
+#ifdef CONFIG_HIGHMEM
+ zones_size[ZONE_HIGHMEM] = num_physpages - num_mappedpages;
+#endif
+
+ free_area_init(zones_size);
+
+#ifdef CONFIG_MMU
+ /* initialise init's MMU context */
+ init_new_context(&init_task, &init_mm);
+#endif
+
+} /* end paging_init() */
+
+/*****************************************************************************/
+/*
+ *
+ */
+void __init mem_init(void)
+{
+ unsigned long npages = (memory_end - memory_start) >> PAGE_SHIFT;
+ unsigned long tmp;
+#ifdef CONFIG_MMU
+ unsigned long loop, pfn;
+ int datapages = 0;
+#endif
+ int codek = 0, datak = 0;
+
+ /* this will put all memory onto the freelists */
+ totalram_pages = free_all_bootmem();
+
+#ifdef CONFIG_MMU
+ for (loop = 0 ; loop < npages ; loop++)
+ if (PageReserved(&mem_map[loop]))
+ datapages++;
+
+#ifdef CONFIG_HIGHMEM
+ for (pfn = num_physpages - 1; pfn >= num_mappedpages; pfn--) {
+ struct page *page = &mem_map[pfn];
+
+ ClearPageReserved(page);
+ set_bit(PG_highmem, &page->flags);
+ set_page_count(page, 1);
+ __free_page(page);
+ totalram_pages++;
+ }
+#endif
+
+ codek = ((unsigned long) &_etext - (unsigned long) &_stext) >> 10;
+ datak = datapages << (PAGE_SHIFT - 10);
+
+#else
+ codek = (_etext - _stext) >> 10;
+ datak = 0; //(_ebss - _sdata) >> 10;
+#endif
+
+ tmp = nr_free_pages() << PAGE_SHIFT;
+ printk("Memory available: %luKiB/%luKiB RAM, %luKiB/%luKiB ROM (%dKiB kernel code, %dKiB data)\n",
+ tmp >> 10,
+ npages << (PAGE_SHIFT - 10),
+ (rom_length > 0) ? ((rom_length >> 10) - codek) : 0,
+ rom_length >> 10,
+ codek,
+ datak
+ );
+
+} /* end mem_init() */
+
+/*****************************************************************************/
+/*
+ * free the memory that was only required for initialisation
+ */
+void __init free_initmem(void)
+{
+#if defined(CONFIG_RAMKERNEL) && !defined(CONFIG_PROTECT_KERNEL)
+ unsigned long start, end, addr;
+
+ start = PAGE_ALIGN((unsigned long) &__init_begin); /* round up */
+ end = ((unsigned long) &__init_end) & PAGE_MASK; /* round down */
+
+ /* next to check that the page we free is not a partial page */
+ for (addr = start; addr < end; addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ set_page_count(virt_to_page(addr), 1);
+ free_page(addr);
+ totalram_pages++;
+ }
+
+ printk("Freeing unused kernel memory: %ldKiB freed (0x%lx - 0x%lx)\n",
+ (end - start) >> 10, start, end);
+#endif
+} /* end free_initmem() */
+
+/*****************************************************************************/
+/*
+ * free the initial ramdisk memory
+ */
+#ifdef CONFIG_BLK_DEV_INITRD
+void __init free_initrd_mem(unsigned long start, unsigned long end)
+{
+ int pages = 0;
+ for (; start < end; start += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(start));
+ set_page_count(virt_to_page(start), 1);
+ free_page(start);
+ totalram_pages++;
+ pages++;
+ }
+ printk("Freeing initrd memory: %dKiB freed\n", (pages * PAGE_SIZE) >> 10);
+} /* end free_initrd_mem() */
+#endif
--- /dev/null
+/* kmap.c: ioremapping handlers
+ *
+ * Copyright (C) 2003-5 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from arch/m68k/mm/kmap.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+
+#include <asm/setup.h>
+#include <asm/segment.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/io.h>
+#include <asm/system.h>
+
+#undef DEBUG
+
+/*****************************************************************************/
+/*
+ * Map some physical address range into the kernel address space.
+ */
+
+void *__ioremap(unsigned long physaddr, unsigned long size, int cacheflag)
+{
+ return (void *)physaddr;
+}
+
+/*
+ * Unmap a ioremap()ed region again
+ */
+void iounmap(void *addr)
+{
+}
+
+/*
+ * __iounmap unmaps nearly everything, so be careful
+ * it doesn't currently free the pointer/page tables any more, but that
+ * wasn't used anyway and might be added back later.
+ */
+void __iounmap(void *addr, unsigned long size)
+{
+}
+
+/*
+ * Set new cache mode for some kernel address space.
+ * The caller must push data for that range itself, if such data may already
+ * be in the cache.
+ */
+void kernel_set_cachemode(void *addr, unsigned long size, int cmode)
+{
+}
--- /dev/null
+/* mmu-context.c: MMU context allocation and management
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <asm/tlbflush.h>
+
+#define NR_CXN 4096
+
+static unsigned long cxn_bitmap[NR_CXN / (sizeof(unsigned long) * 8)];
+static LIST_HEAD(cxn_owners_lru);
+static DEFINE_SPINLOCK(cxn_owners_lock);
+
+int __nongpreldata cxn_pinned = -1;
+
+
+/*****************************************************************************/
+/*
+ * initialise a new context
+ */
+int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+ memset(&mm->context, 0, sizeof(mm->context));
+ INIT_LIST_HEAD(&mm->context.id_link);
+ mm->context.itlb_cached_pge = 0xffffffffUL;
+ mm->context.dtlb_cached_pge = 0xffffffffUL;
+
+ return 0;
+} /* end init_new_context() */
+
+/*****************************************************************************/
+/*
+ * make sure a kernel MMU context has a CPU context number
+ * - call with cxn_owners_lock held
+ */
+static unsigned get_cxn(mm_context_t *ctx)
+{
+ struct list_head *_p;
+ mm_context_t *p;
+ unsigned cxn;
+
+ if (!list_empty(&ctx->id_link)) {
+ list_move_tail(&ctx->id_link, &cxn_owners_lru);
+ }
+ else {
+ /* find the first unallocated context number
+ * - 0 is reserved for the kernel
+ */
+ cxn = find_next_zero_bit(&cxn_bitmap, NR_CXN, 1);
+ if (cxn < NR_CXN) {
+ set_bit(cxn, &cxn_bitmap);
+ }
+ else {
+ /* none remaining - need to steal someone else's cxn */
+ p = NULL;
+ list_for_each(_p, &cxn_owners_lru) {
+ p = list_entry(_p, mm_context_t, id_link);
+ if (!p->id_busy && p->id != cxn_pinned)
+ break;
+ }
+
+ BUG_ON(_p == &cxn_owners_lru);
+
+ cxn = p->id;
+ p->id = 0;
+ list_del_init(&p->id_link);
+ __flush_tlb_mm(cxn);
+ }
+
+ ctx->id = cxn;
+ list_add_tail(&ctx->id_link, &cxn_owners_lru);
+ }
+
+ return ctx->id;
+} /* end get_cxn() */
+
+/*****************************************************************************/
+/*
+ * switch MMU context: save the outgoing context's cached TLB-handler state,
+ * restore the incoming context's state and set up a mapping for its page directory
+ */
+void change_mm_context(mm_context_t *old, mm_context_t *ctx, pgd_t *pgd)
+{
+ unsigned long _pgd;
+
+ _pgd = virt_to_phys(pgd);
+
+ /* save the state of the outgoing MMU context */
+ old->id_busy = 0;
+
+ asm volatile("movsg scr0,%0" : "=r"(old->itlb_cached_pge));
+ asm volatile("movsg dampr4,%0" : "=r"(old->itlb_ptd_mapping));
+ asm volatile("movsg scr1,%0" : "=r"(old->dtlb_cached_pge));
+ asm volatile("movsg dampr5,%0" : "=r"(old->dtlb_ptd_mapping));
+
+ /* select an MMU context number */
+ spin_lock(&cxn_owners_lock);
+ get_cxn(ctx);
+ ctx->id_busy = 1;
+ spin_unlock(&cxn_owners_lock);
+
+ asm volatile("movgs %0,cxnr" : : "r"(ctx->id));
+
+ /* restore the state of the incoming MMU context */
+ asm volatile("movgs %0,scr0" : : "r"(ctx->itlb_cached_pge));
+ asm volatile("movgs %0,dampr4" : : "r"(ctx->itlb_ptd_mapping));
+ asm volatile("movgs %0,scr1" : : "r"(ctx->dtlb_cached_pge));
+ asm volatile("movgs %0,dampr5" : : "r"(ctx->dtlb_ptd_mapping));
+
+ /* map the PGD into uncached virtual memory */
+ asm volatile("movgs %0,ttbr" : : "r"(_pgd));
+ asm volatile("movgs %0,dampr3"
+ :: "r"(_pgd | xAMPRx_L | xAMPRx_M | xAMPRx_SS_16Kb |
+ xAMPRx_S | xAMPRx_C | xAMPRx_V));
+
+} /* end change_mm_context() */
+
+/*****************************************************************************/
+/*
+ * finished with an MMU context number
+ */
+void destroy_context(struct mm_struct *mm)
+{
+ mm_context_t *ctx = &mm->context;
+
+ spin_lock(&cxn_owners_lock);
+
+ if (!list_empty(&ctx->id_link)) {
+ if (ctx->id == cxn_pinned)
+ cxn_pinned = -1;
+
+ list_del_init(&ctx->id_link);
+ clear_bit(ctx->id, &cxn_bitmap);
+ __flush_tlb_mm(ctx->id);
+ ctx->id = 0;
+ }
+
+ spin_unlock(&cxn_owners_lock);
+} /* end destroy_context() */
+
+/*****************************************************************************/
+/*
+ * display the MMU context number a process is currently using
+ */
+#ifdef CONFIG_PROC_FS
+char *proc_pid_status_frv_cxnr(struct mm_struct *mm, char *buffer)
+{
+ spin_lock(&cxn_owners_lock);
+ buffer += sprintf(buffer, "CXNR: %u\n", mm->context.id);
+ spin_unlock(&cxn_owners_lock);
+
+ return buffer;
+} /* end proc_pid_status_frv_cxnr() */
+#endif
+
+/*****************************************************************************/
+/*
+ * (un)pin a process's mm_struct's MMU context ID
+ */
+int cxn_pin_by_pid(pid_t pid)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm = NULL;
+ int ret;
+
+ /* unpin if pid is zero */
+ if (pid == 0) {
+ cxn_pinned = -1;
+ return 0;
+ }
+
+ ret = -ESRCH;
+
+ /* get a handle on the mm_struct */
+ read_lock(&tasklist_lock);
+ tsk = find_task_by_pid(pid);
+ if (tsk) {
+ ret = -EINVAL;
+
+ task_lock(tsk);
+ if (tsk->mm) {
+ mm = tsk->mm;
+ atomic_inc(&mm->mm_users);
+ ret = 0;
+ }
+ task_unlock(tsk);
+ }
+ read_unlock(&tasklist_lock);
+
+ if (ret < 0)
+ return ret;
+
+ /* make sure it has a CXN and pin it */
+ spin_lock(&cxn_owners_lock);
+ cxn_pinned = get_cxn(&mm->context);
+ spin_unlock(&cxn_owners_lock);
+
+ mmput(mm);
+ return 0;
+} /* end cxn_pin_by_pid() */
--- /dev/null
+/* pgalloc.c: page directory & page table allocation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <asm/pgalloc.h>
+#include <asm/page.h>
+#include <asm/cacheflush.h>
+
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((aligned(PAGE_SIZE)));
+kmem_cache_t *pgd_cache;
+
+pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
+{
+ pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT);
+ if (pte)
+ clear_page(pte);
+ return pte;
+}
+
+struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
+{
+ struct page *page;
+
+#ifdef CONFIG_HIGHPTE
+ page = alloc_pages(GFP_KERNEL|__GFP_HIGHMEM|__GFP_REPEAT, 0);
+#else
+ page = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
+#endif
+ if (page)
+ clear_highpage(page);
+ flush_dcache_page(page);
+ return page;
+}
+
+void __set_pmd(pmd_t *pmdptr, unsigned long pmd)
+{
+ unsigned long *__ste_p = pmdptr->ste;
+ int loop;
+
+ if (!pmd) {
+ memset(__ste_p, 0, PME_SIZE);
+ }
+ else {
+ BUG_ON(pmd & (0x3f00 | xAMPRx_SS | 0xe));
+
+ for (loop = PME_SIZE; loop > 0; loop -= 4) {
+ *__ste_p++ = pmd;
+ pmd += __frv_PT_SIZE;
+ }
+ }
+
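+	/* the TLB-miss handlers access the page tables uncached, so push the updated entries out of the data cache */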
+ frv_dcache_writeback((unsigned long) pmdptr, (unsigned long) (pmdptr + 1));
+}
+
+/*
+ * List of all pgd's needed for non-PAE so it can invalidate entries
+ * in both cached and uncached pgd's; not needed for PAE since the
+ * kernel pmd is shared. If PAE were not to share the pmd a similar
+ * tactic would be needed. This is essentially codepath-based locking
+ * against pageattr.c; it is the unique case in which a valid change
+ * of kernel pagetables can't be lazily synchronized by vmalloc faults.
+ * vmalloc faults work because attached pagetables are never freed.
+ * If the locking proves to be non-performant, a ticketing scheme with
+ * checks at dup_mmap(), exec(), and other mmlist addition points
+ * could be used. The locking scheme was chosen on the basis of
+ * manfred's recommendations and having no core impact whatsoever.
+ * -- wli
+ */
+DEFINE_SPINLOCK(pgd_lock);
+struct page *pgd_list;
+
+static inline void pgd_list_add(pgd_t *pgd)
+{
+ struct page *page = virt_to_page(pgd);
+ page->index = (unsigned long) pgd_list;
+ if (pgd_list)
+ pgd_list->private = (unsigned long) &page->index;
+ pgd_list = page;
+ page->private = (unsigned long) &pgd_list;
+}
+
+static inline void pgd_list_del(pgd_t *pgd)
+{
+ struct page *next, **pprev, *page = virt_to_page(pgd);
+ next = (struct page *) page->index;
+ pprev = (struct page **) page->private;
+ *pprev = next;
+ if (next)
+ next->private = (unsigned long) pprev;
+}
+
+void pgd_ctor(void *pgd, kmem_cache_t *cache, unsigned long unused)
+{
+ unsigned long flags;
+
+ if (PTRS_PER_PMD == 1)
+ spin_lock_irqsave(&pgd_lock, flags);
+
+ memcpy((pgd_t *) pgd + USER_PGDS_IN_LAST_PML4,
+ swapper_pg_dir + USER_PGDS_IN_LAST_PML4,
+ (PTRS_PER_PGD - USER_PGDS_IN_LAST_PML4) * sizeof(pgd_t));
+
+ if (PTRS_PER_PMD > 1)
+ return;
+
+ pgd_list_add(pgd);
+ spin_unlock_irqrestore(&pgd_lock, flags);
+ memset(pgd, 0, USER_PGDS_IN_LAST_PML4 * sizeof(pgd_t));
+}
+
+/* never called when PTRS_PER_PMD > 1 */
+void pgd_dtor(void *pgd, kmem_cache_t *cache, unsigned long unused)
+{
+ unsigned long flags; /* can be called from interrupt context */
+
+ spin_lock_irqsave(&pgd_lock, flags);
+ pgd_list_del(pgd);
+ spin_unlock_irqrestore(&pgd_lock, flags);
+}
+
+pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+ pgd_t *pgd;
+
+ pgd = kmem_cache_alloc(pgd_cache, GFP_KERNEL);
+ if (!pgd)
+ return pgd;
+
+ return pgd;
+}
+
+void pgd_free(pgd_t *pgd)
+{
+ /* in the non-PAE case, clear_page_tables() clears user pgd entries */
+ kmem_cache_free(pgd_cache, pgd);
+}
+
+void __init pgtable_cache_init(void)
+{
+ pgd_cache = kmem_cache_create("pgd",
+ PTRS_PER_PGD * sizeof(pgd_t),
+ PTRS_PER_PGD * sizeof(pgd_t),
+ 0,
+ pgd_ctor,
+ pgd_dtor);
+ if (!pgd_cache)
+ panic("pgtable_cache_init(): Cannot create pgd cache");
+}
--- /dev/null
+/* tlb-flush.S: TLB flushing routines
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+#include <asm/ptrace.h>
+#include <asm/spr-regs.h>
+
+.macro DEBUG ch
+# sethi.p %hi(0xfeff9c00),gr4
+# setlo %lo(0xfeff9c00),gr4
+# setlos #\ch,gr5
+# stbi gr5,@(gr4,#0)
+# membar
+.endm
+
+ .section .rodata
+
+ # sizes corresponding to TPXR.LMAX
+ .balign 1
+__tlb_lmax_sizes:
+ .byte 0, 64, 0, 0
+ .byte 0, 0, 0, 0
+ .byte 0, 0, 0, 0
+ .byte 0, 0, 0, 0
+
+ .section .text
+ .balign 4
+
+###############################################################################
+#
+# flush everything
+# - void __flush_tlb_all(void)
+#
+###############################################################################
+ .globl __flush_tlb_all
+ .type __flush_tlb_all,@function
+__flush_tlb_all:
+ DEBUG 'A'
+
+ # kill cached PGE value
+ setlos #0xffffffff,gr4
+ movgs gr4,scr0
+ movgs gr4,scr1
+
+ # kill AMPR-cached TLB values
+ movgs gr0,iamlr1
+ movgs gr0,iampr1
+ movgs gr0,damlr1
+ movgs gr0,dampr1
+
+ # find out how many lines there are
+ movsg tpxr,gr5
+ sethi.p %hi(__tlb_lmax_sizes),gr4
+ srli gr5,#TPXR_LMAX_SHIFT,gr5
+ setlo.p %lo(__tlb_lmax_sizes),gr4
+ andi gr5,#TPXR_LMAX_SMASK,gr5
+ ldub @(gr4,gr5),gr4
+
+	# now, we assume that the TLB line step is one page in size
+ setlos.p #PAGE_SIZE,gr5
+ setlos #0,gr6
+1:
+ tlbpr gr6,gr0,#6,#0
+ subicc.p gr4,#1,gr4,icc0
+ add gr6,gr5,gr6
+ bne icc0,#2,1b
+
+ DEBUG 'B'
+ bralr
+
+ .size __flush_tlb_all, .-__flush_tlb_all
+
+###############################################################################
+#
+# flush everything to do with one context
+# - void __flush_tlb_mm(unsigned long contextid [GR8])
+#
+###############################################################################
+ .globl __flush_tlb_mm
+ .type __flush_tlb_mm,@function
+__flush_tlb_mm:
+ DEBUG 'M'
+
+ # kill cached PGE value
+ setlos #0xffffffff,gr4
+ movgs gr4,scr0
+ movgs gr4,scr1
+
+ # specify the context we want to flush
+ movgs gr8,tplr
+
+ # find out how many lines there are
+ movsg tpxr,gr5
+ sethi.p %hi(__tlb_lmax_sizes),gr4
+ srli gr5,#TPXR_LMAX_SHIFT,gr5
+ setlo.p %lo(__tlb_lmax_sizes),gr4
+ andi gr5,#TPXR_LMAX_SMASK,gr5
+ ldub @(gr4,gr5),gr4
+
+	# now, we assume that the TLB line step is one page in size
+ setlos.p #PAGE_SIZE,gr5
+ setlos #0,gr6
+0:
+ tlbpr gr6,gr0,#5,#0
+ subicc.p gr4,#1,gr4,icc0
+ add gr6,gr5,gr6
+ bne icc0,#2,0b
+
+ DEBUG 'N'
+ bralr
+
+ .size __flush_tlb_mm, .-__flush_tlb_mm
+
+###############################################################################
+#
+# flush a range of addresses from the TLB
+# - void __flush_tlb_page(unsigned long contextid [GR8],
+# unsigned long start [GR9])
+#
+###############################################################################
+ .globl __flush_tlb_page
+ .type __flush_tlb_page,@function
+__flush_tlb_page:
+ # kill cached PGE value
+ setlos #0xffffffff,gr4
+ movgs gr4,scr0
+ movgs gr4,scr1
+
+ # specify the context we want to flush
+ movgs gr8,tplr
+
+ # zap the matching TLB line and AMR values
+ setlos #~(PAGE_SIZE-1),gr5
+ and gr9,gr5,gr9
+ tlbpr gr9,gr0,#5,#0
+
+ bralr
+
+ .size __flush_tlb_page, .-__flush_tlb_page
+
+###############################################################################
+#
+# flush a range of addresses from the TLB
+# - void __flush_tlb_range(unsigned long contextid [GR8],
+# unsigned long start [GR9],
+# unsigned long end [GR10])
+#
+###############################################################################
+ .globl __flush_tlb_range
+ .type __flush_tlb_range,@function
+__flush_tlb_range:
+ # kill cached PGE value
+ setlos #0xffffffff,gr4
+ movgs gr4,scr0
+ movgs gr4,scr1
+
+ # specify the context we want to flush
+ movgs gr8,tplr
+
+ # round the start down to beginning of TLB line and end up to beginning of next TLB line
+ setlos.p #~(PAGE_SIZE-1),gr5
+ setlos #PAGE_SIZE,gr6
+ subi.p gr10,#1,gr10
+ and gr9,gr5,gr9
+ and gr10,gr5,gr10
+2:
+ tlbpr gr9,gr0,#5,#0
+ subcc.p gr9,gr10,gr0,icc0
+ add gr9,gr6,gr9
+ bne icc0,#0,2b ; most likely a 1-page flush
+
+ bralr
+
+ .size __flush_tlb_range, .-__flush_tlb_range
--- /dev/null
+/* tlb-miss.S: TLB miss handlers
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sys.h>
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/highmem.h>
+#include <asm/spr-regs.h>
+
+ .section .text
+ .balign 4
+
+ .globl __entry_insn_mmu_miss
+__entry_insn_mmu_miss:
+ break
+ nop
+
+ .globl __entry_insn_mmu_exception
+__entry_insn_mmu_exception:
+ break
+ nop
+
+ .globl __entry_data_mmu_miss
+__entry_data_mmu_miss:
+ break
+ nop
+
+ .globl __entry_data_mmu_exception
+__entry_data_mmu_exception:
+ break
+ nop
+
+###############################################################################
+#
+# handle a lookup failure of one sort or another in a kernel TLB handler
+# On entry:
+# GR29 - faulting address
+# SCR2 - saved CCR
+#
+###############################################################################
+ .type __tlb_kernel_fault,@function
+__tlb_kernel_fault:
+ # see if we're supposed to re-enable single-step mode upon return
+ sethi.p %hi(__break_tlb_miss_return_break),gr30
+ setlo %lo(__break_tlb_miss_return_break),gr30
+ movsg pcsr,gr31
+
+ subcc gr31,gr30,gr0,icc0
+ beq icc0,#0,__tlb_kernel_fault_sstep
+
+ movsg scr2,gr30
+ movgs gr30,ccr
+ movgs gr29,scr2 /* save EAR0 value */
+ sethi.p %hi(__kernel_current_task),gr29
+ setlo %lo(__kernel_current_task),gr29
+ ldi.p @(gr29,#0),gr29 /* restore GR29 */
+
+ bra __entry_kernel_handle_mmu_fault
+
+ # we've got to re-enable single-stepping
+__tlb_kernel_fault_sstep:
+ sethi.p %hi(__break_tlb_miss_real_return_info),gr30
+ setlo %lo(__break_tlb_miss_real_return_info),gr30
+ lddi @(gr30,0),gr30
+ movgs gr30,pcsr
+ movgs gr31,psr
+
+ movsg scr2,gr30
+ movgs gr30,ccr
+ movgs gr29,scr2 /* save EAR0 value */
+ sethi.p %hi(__kernel_current_task),gr29
+ setlo %lo(__kernel_current_task),gr29
+ ldi.p @(gr29,#0),gr29 /* restore GR29 */
+ bra __entry_kernel_handle_mmu_fault_sstep
+
+ .size __tlb_kernel_fault, .-__tlb_kernel_fault
+
+###############################################################################
+#
+# handle a lookup failure of one sort or another in a user TLB handler
+# On entry:
+# GR28 - faulting address
+# SCR2 - saved CCR
+#
+###############################################################################
+ .type __tlb_user_fault,@function
+__tlb_user_fault:
+ # see if we're supposed to re-enable single-step mode upon return
+ sethi.p %hi(__break_tlb_miss_return_break),gr30
+ setlo %lo(__break_tlb_miss_return_break),gr30
+ movsg pcsr,gr31
+ subcc gr31,gr30,gr0,icc0
+ beq icc0,#0,__tlb_user_fault_sstep
+
+ movsg scr2,gr30
+ movgs gr30,ccr
+ bra __entry_uspace_handle_mmu_fault
+
+ # we've got to re-enable single-stepping
+__tlb_user_fault_sstep:
+ sethi.p %hi(__break_tlb_miss_real_return_info),gr30
+ setlo %lo(__break_tlb_miss_real_return_info),gr30
+ lddi @(gr30,0),gr30
+ movgs gr30,pcsr
+ movgs gr31,psr
+ movsg scr2,gr30
+ movgs gr30,ccr
+ bra __entry_uspace_handle_mmu_fault_sstep
+
+ .size __tlb_user_fault, .-__tlb_user_fault
+
+###############################################################################
+#
+# Kernel instruction TLB miss handler
+# On entry:
+# GR1 - kernel stack pointer
+# GR28 - saved exception frame pointer
+# GR29 - faulting address
+# GR31 - EAR0 ^ SCR0
+# SCR0 - base of virtual range covered by cached PGE from last ITLB miss (or 0xffffffff)
+# DAMR3 - mapped page directory
+# DAMR4 - mapped page table as matched by SCR0
+#
+###############################################################################
+ .globl __entry_kernel_insn_tlb_miss
+ .type __entry_kernel_insn_tlb_miss,@function
+__entry_kernel_insn_tlb_miss:
+#if 0
+ sethi.p %hi(0xe1200004),gr30
+ setlo %lo(0xe1200004),gr30
+ st gr0,@(gr30,gr0)
+ sethi.p %hi(0xffc00100),gr30
+ setlo %lo(0xffc00100),gr30
+ sth gr30,@(gr30,gr0)
+ membar
+#endif
+
+ movsg ccr,gr30 /* save CCR */
+ movgs gr30,scr2
+
+ # see if the cached page table mapping is appropriate
+ srlicc.p gr31,#26,gr0,icc0
+ setlos 0x3ffc,gr30
+ srli.p gr29,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bne icc0,#0,__itlb_k_PTD_miss
+
+__itlb_k_PTD_mapped:
+ # access the PTD with EAR0[25:14]
+ # - DAMLR4 points to the virtual address of the appropriate page table
+ # - the PTD holds 4096 PTEs
+ # - the PTD must be accessed uncached
+ # - the PTE must be marked accessed if it was valid
+ #
+ and gr31,gr30,gr31
+ movsg damlr4,gr30
+ add gr30,gr31,gr31
+ ldi @(gr31,#0),gr30 /* fetch the PTE */
+ andicc gr30,#_PAGE_PRESENT,gr0,icc0
+ ori.p gr30,#_PAGE_ACCESSED,gr30
+ beq icc0,#0,__tlb_kernel_fault /* jump if PTE invalid */
+ sti.p gr30,@(gr31,#0) /* update the PTE */
+ andi gr30,#~_PAGE_ACCESSED,gr30
+
+ # we're using IAMR1 as an extra TLB entry
+ # - punt the entry here (if valid) to the real TLB and then replace with the new PTE
+	# - need to check DAMR1 lest we cause a multiple-DAT-hit exception
+ # - IAMPR1 has no WP bit, and we mustn't lose WP information
+ movsg iampr1,gr31
+ andicc gr31,#xAMPRx_V,gr0,icc0
+ setlos.p 0xfffff000,gr31
+ beq icc0,#0,__itlb_k_nopunt /* punt not required */
+
+ movsg iamlr1,gr31
+ movgs gr31,tplr /* set TPLR.CXN */
+ tlbpr gr31,gr0,#4,#0 /* delete matches from TLB, IAMR1, DAMR1 */
+
+ movsg dampr1,gr31
+ ori gr31,#xAMPRx_V,gr31 /* entry was invalidated by tlbpr #4 */
+ movgs gr31,tppr
+ movsg iamlr1,gr31 /* set TPLR.CXN */
+ movgs gr31,tplr
+ tlbpr gr31,gr0,#2,#0 /* save to the TLB */
+ movsg tpxr,gr31 /* check the TLB write error flag */
+ andicc.p gr31,#TPXR_E,gr0,icc0
+ setlos #0xfffff000,gr31
+ bne icc0,#0,__tlb_kernel_fault
+
+__itlb_k_nopunt:
+
+ # assemble the new TLB entry
+ and gr29,gr31,gr29
+ movsg cxnr,gr31
+ or gr29,gr31,gr29
+ movgs gr29,iamlr1 /* xAMLR = address | context number */
+ movgs gr30,iampr1
+ movgs gr29,damlr1
+ movgs gr30,dampr1
+
+ # return, restoring registers
+ movsg scr2,gr30
+ movgs gr30,ccr
+ sethi.p %hi(__kernel_current_task),gr29
+ setlo %lo(__kernel_current_task),gr29
+ ldi @(gr29,#0),gr29
+ rett #0
+ beq icc0,#3,0 /* prevent icache prefetch */
+
+ # the PTE we want wasn't in the PTD we have mapped, so we need to go looking for a more
+ # appropriate page table and map that instead
+ # - access the PGD with EAR0[31:26]
+ # - DAMLR3 points to the virtual address of the page directory
+ # - the PGD holds 64 PGEs and each PGE/PME points to a set of page tables
+__itlb_k_PTD_miss:
+ srli gr29,#26,gr31 /* calculate PGE offset */
+ slli gr31,#8,gr31 /* and clear bottom bits */
+
+ movsg damlr3,gr30
+ ld @(gr31,gr30),gr30 /* access the PGE */
+
+ andicc.p gr30,#_PAGE_PRESENT,gr0,icc0
+ andicc gr30,#xAMPRx_SS,gr0,icc1
+
+ # map this PTD instead and record coverage address
+ ori.p gr30,#xAMPRx_L|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V,gr30
+ beq icc0,#0,__tlb_kernel_fault /* jump if PGE not present */
+ slli.p gr31,#18,gr31
+ bne icc1,#0,__itlb_k_bigpage
+ movgs gr30,dampr4
+ movgs gr31,scr0
+
+ # we can now resume normal service
+ setlos 0x3ffc,gr30
+ srli.p gr29,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bra __itlb_k_PTD_mapped
+
+__itlb_k_bigpage:
+ break
+ nop
+
+ .size __entry_kernel_insn_tlb_miss, .-__entry_kernel_insn_tlb_miss
+
+###############################################################################
+#
+# Kernel data TLB miss handler
+# On entry:
+# GR1 - kernel stack pointer
+# GR28 - saved exception frame pointer
+# GR29 - faulting address
+# GR31 - EAR0 ^ SCR1
+# SCR1 - base of virtual range covered by cached PGE from last DTLB miss (or 0xffffffff)
+# DAMR3 - mapped page directory
+# DAMR5 - mapped page table as matched by SCR1
+#
+###############################################################################
+ .globl __entry_kernel_data_tlb_miss
+ .type __entry_kernel_data_tlb_miss,@function
+__entry_kernel_data_tlb_miss:
+#if 0
+ sethi.p %hi(0xe1200004),gr30
+ setlo %lo(0xe1200004),gr30
+ st gr0,@(gr30,gr0)
+ sethi.p %hi(0xffc00100),gr30
+ setlo %lo(0xffc00100),gr30
+ sth gr30,@(gr30,gr0)
+ membar
+#endif
+
+ movsg ccr,gr30 /* save CCR */
+ movgs gr30,scr2
+
+ # see if the cached page table mapping is appropriate
+ srlicc.p gr31,#26,gr0,icc0
+ setlos 0x3ffc,gr30
+ srli.p gr29,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bne icc0,#0,__dtlb_k_PTD_miss
+
+__dtlb_k_PTD_mapped:
+ # access the PTD with EAR0[25:14]
+ # - DAMLR5 points to the virtual address of the appropriate page table
+ # - the PTD holds 4096 PTEs
+ # - the PTD must be accessed uncached
+ # - the PTE must be marked accessed if it was valid
+ #
+ and gr31,gr30,gr31
+ movsg damlr5,gr30
+ add gr30,gr31,gr31
+ ldi @(gr31,#0),gr30 /* fetch the PTE */
+ andicc gr30,#_PAGE_PRESENT,gr0,icc0
+ ori.p gr30,#_PAGE_ACCESSED,gr30
+ beq icc0,#0,__tlb_kernel_fault /* jump if PTE invalid */
+ sti.p gr30,@(gr31,#0) /* update the PTE */
+ andi gr30,#~_PAGE_ACCESSED,gr30
+
+ # we're using DAMR1 as an extra TLB entry
+ # - punt the entry here (if valid) to the real TLB and then replace with the new PTE
+	# - need to check IAMR1 lest we cause a multiple-DAT-hit exception
+ movsg dampr1,gr31
+ andicc gr31,#xAMPRx_V,gr0,icc0
+ setlos.p 0xfffff000,gr31
+ beq icc0,#0,__dtlb_k_nopunt /* punt not required */
+
+ movsg damlr1,gr31
+ movgs gr31,tplr /* set TPLR.CXN */
+ tlbpr gr31,gr0,#4,#0 /* delete matches from TLB, IAMR1, DAMR1 */
+
+ movsg dampr1,gr31
+ ori gr31,#xAMPRx_V,gr31 /* entry was invalidated by tlbpr #4 */
+ movgs gr31,tppr
+ movsg damlr1,gr31 /* set TPLR.CXN */
+ movgs gr31,tplr
+ tlbpr gr31,gr0,#2,#0 /* save to the TLB */
+ movsg tpxr,gr31 /* check the TLB write error flag */
+ andicc.p gr31,#TPXR_E,gr0,icc0
+ setlos #0xfffff000,gr31
+ bne icc0,#0,__tlb_kernel_fault
+
+__dtlb_k_nopunt:
+
+ # assemble the new TLB entry
+ and gr29,gr31,gr29
+ movsg cxnr,gr31
+ or gr29,gr31,gr29
+ movgs gr29,iamlr1 /* xAMLR = address | context number */
+ movgs gr30,iampr1
+ movgs gr29,damlr1
+ movgs gr30,dampr1
+
+ # return, restoring registers
+ movsg scr2,gr30
+ movgs gr30,ccr
+ sethi.p %hi(__kernel_current_task),gr29
+ setlo %lo(__kernel_current_task),gr29
+ ldi @(gr29,#0),gr29
+ rett #0
+ beq icc0,#3,0 /* prevent icache prefetch */
+
+ # the PTE we want wasn't in the PTD we have mapped, so we need to go looking for a more
+ # appropriate page table and map that instead
+ # - access the PGD with EAR0[31:26]
+ # - DAMLR3 points to the virtual address of the page directory
+ # - the PGD holds 64 PGEs and each PGE/PME points to a set of page tables
+__dtlb_k_PTD_miss:
+ srli gr29,#26,gr31 /* calculate PGE offset */
+ slli gr31,#8,gr31 /* and clear bottom bits */
+
+ movsg damlr3,gr30
+ ld @(gr31,gr30),gr30 /* access the PGE */
+
+ andicc.p gr30,#_PAGE_PRESENT,gr0,icc0
+ andicc gr30,#xAMPRx_SS,gr0,icc1
+
+ # map this PTD instead and record coverage address
+ ori.p gr30,#xAMPRx_L|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V,gr30
+ beq icc0,#0,__tlb_kernel_fault /* jump if PGE not present */
+ slli.p gr31,#18,gr31
+ bne icc1,#0,__dtlb_k_bigpage
+ movgs gr30,dampr5
+ movgs gr31,scr1
+
+ # we can now resume normal service
+ setlos 0x3ffc,gr30
+ srli.p gr29,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bra __dtlb_k_PTD_mapped
+
+__dtlb_k_bigpage:
+ break
+ nop
+
+ .size __entry_kernel_data_tlb_miss, .-__entry_kernel_data_tlb_miss
+
+###############################################################################
+#
+# Userspace instruction TLB miss handler (with PGE prediction)
+# On entry:
+# GR28 - faulting address
+# GR31 - EAR0 ^ SCR0
+# SCR0 - base of virtual range covered by cached PGE from last ITLB miss (or 0xffffffff)
+# DAMR3 - mapped page directory
+# DAMR4 - mapped page table as matched by SCR0
+#
+###############################################################################
+ .globl __entry_user_insn_tlb_miss
+ .type __entry_user_insn_tlb_miss,@function
+__entry_user_insn_tlb_miss:
+#if 0
+ sethi.p %hi(0xe1200004),gr30
+ setlo %lo(0xe1200004),gr30
+ st gr0,@(gr30,gr0)
+ sethi.p %hi(0xffc00100),gr30
+ setlo %lo(0xffc00100),gr30
+ sth gr30,@(gr30,gr0)
+ membar
+#endif
+
+ movsg ccr,gr30 /* save CCR */
+ movgs gr30,scr2
+
+ # see if the cached page table mapping is appropriate
+ srlicc.p gr31,#26,gr0,icc0
+ setlos 0x3ffc,gr30
+ srli.p gr28,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bne icc0,#0,__itlb_u_PTD_miss
+
+__itlb_u_PTD_mapped:
+ # access the PTD with EAR0[25:14]
+ # - DAMLR4 points to the virtual address of the appropriate page table
+ # - the PTD holds 4096 PTEs
+ # - the PTD must be accessed uncached
+ # - the PTE must be marked accessed if it was valid
+ #
+ and gr31,gr30,gr31
+ movsg damlr4,gr30
+ add gr30,gr31,gr31
+ ldi @(gr31,#0),gr30 /* fetch the PTE */
+ andicc gr30,#_PAGE_PRESENT,gr0,icc0
+ ori.p gr30,#_PAGE_ACCESSED,gr30
+ beq icc0,#0,__tlb_user_fault /* jump if PTE invalid */
+ sti.p gr30,@(gr31,#0) /* update the PTE */
+ andi gr30,#~_PAGE_ACCESSED,gr30
+
+ # we're using IAMR1/DAMR1 as an extra TLB entry
+ # - punt the entry here (if valid) to the real TLB and then replace with the new PTE
+ movsg dampr1,gr31
+ andicc gr31,#xAMPRx_V,gr0,icc0
+ setlos.p 0xfffff000,gr31
+ beq icc0,#0,__itlb_u_nopunt /* punt not required */
+
+ movsg dampr1,gr31
+ movgs gr31,tppr
+ movsg damlr1,gr31 /* set TPLR.CXN */
+ movgs gr31,tplr
+ tlbpr gr31,gr0,#2,#0 /* save to the TLB */
+ movsg tpxr,gr31 /* check the TLB write error flag */
+ andicc.p gr31,#TPXR_E,gr0,icc0
+ setlos #0xfffff000,gr31
+ bne icc0,#0,__tlb_user_fault
+
+__itlb_u_nopunt:
+
+ # assemble the new TLB entry
+ and gr28,gr31,gr28
+ movsg cxnr,gr31
+ or gr28,gr31,gr28
+ movgs gr28,iamlr1 /* xAMLR = address | context number */
+ movgs gr30,iampr1
+ movgs gr28,damlr1
+ movgs gr30,dampr1
+
+ # return, restoring registers
+ movsg scr2,gr30
+ movgs gr30,ccr
+ rett #0
+ beq icc0,#3,0 /* prevent icache prefetch */
+
+ # the PTE we want wasn't in the PTD we have mapped, so we need to go looking for a more
+ # appropriate page table and map that instead
+ # - access the PGD with EAR0[31:26]
+ # - DAMLR3 points to the virtual address of the page directory
+ # - the PGD holds 64 PGEs and each PGE/PME points to a set of page tables
+__itlb_u_PTD_miss:
+ srli gr28,#26,gr31 /* calculate PGE offset */
+ slli gr31,#8,gr31 /* and clear bottom bits */
+
+ movsg damlr3,gr30
+ ld @(gr31,gr30),gr30 /* access the PGE */
+
+ andicc.p gr30,#_PAGE_PRESENT,gr0,icc0
+ andicc gr30,#xAMPRx_SS,gr0,icc1
+
+ # map this PTD instead and record coverage address
+ ori.p gr30,#xAMPRx_L|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V,gr30
+ beq icc0,#0,__tlb_user_fault /* jump if PGE not present */
+ slli.p gr31,#18,gr31
+ bne icc1,#0,__itlb_u_bigpage
+ movgs gr30,dampr4
+ movgs gr31,scr0
+
+ # we can now resume normal service
+ setlos 0x3ffc,gr30
+ srli.p gr28,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bra __itlb_u_PTD_mapped
+
+__itlb_u_bigpage:
+ break
+ nop
+
+ .size __entry_user_insn_tlb_miss, .-__entry_user_insn_tlb_miss
+
+###############################################################################
+#
+# Userspace data TLB miss handler
+# On entry:
+# GR28 - faulting address
+# GR31 - EAR0 ^ SCR1
+# SCR1 - base of virtual range covered by cached PGE from last DTLB miss (or 0xffffffff)
+# DAMR3 - mapped page directory
+# DAMR5 - mapped page table as matched by SCR1
+#
+###############################################################################
+ .globl __entry_user_data_tlb_miss
+ .type __entry_user_data_tlb_miss,@function
+__entry_user_data_tlb_miss:
+#if 0
+ sethi.p %hi(0xe1200004),gr30
+ setlo %lo(0xe1200004),gr30
+ st gr0,@(gr30,gr0)
+ sethi.p %hi(0xffc00100),gr30
+ setlo %lo(0xffc00100),gr30
+ sth gr30,@(gr30,gr0)
+ membar
+#endif
+
+ movsg ccr,gr30 /* save CCR */
+ movgs gr30,scr2
+
+ # see if the cached page table mapping is appropriate
+ srlicc.p gr31,#26,gr0,icc0
+ setlos 0x3ffc,gr30
+ srli.p gr28,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bne icc0,#0,__dtlb_u_PTD_miss
+
+__dtlb_u_PTD_mapped:
+ # access the PTD with EAR0[25:14]
+ # - DAMLR5 points to the virtual address of the appropriate page table
+ # - the PTD holds 4096 PTEs
+ # - the PTD must be accessed uncached
+ # - the PTE must be marked accessed if it was valid
+ #
+ and gr31,gr30,gr31
+ movsg damlr5,gr30
+
+__dtlb_u_using_iPTD:
+ add gr30,gr31,gr31
+ ldi @(gr31,#0),gr30 /* fetch the PTE */
+ andicc gr30,#_PAGE_PRESENT,gr0,icc0
+ ori.p gr30,#_PAGE_ACCESSED,gr30
+ beq icc0,#0,__tlb_user_fault /* jump if PTE invalid */
+ sti.p gr30,@(gr31,#0) /* update the PTE */
+ andi gr30,#~_PAGE_ACCESSED,gr30
+
+ # we're using DAMR1 as an extra TLB entry
+ # - punt the entry here (if valid) to the real TLB and then replace with the new PTE
+ movsg dampr1,gr31
+ andicc gr31,#xAMPRx_V,gr0,icc0
+ setlos.p 0xfffff000,gr31
+ beq icc0,#0,__dtlb_u_nopunt /* punt not required */
+
+ movsg dampr1,gr31
+ movgs gr31,tppr
+ movsg damlr1,gr31 /* set TPLR.CXN */
+ movgs gr31,tplr
+ tlbpr gr31,gr0,#2,#0 /* save to the TLB */
+ movsg tpxr,gr31 /* check the TLB write error flag */
+ andicc.p gr31,#TPXR_E,gr0,icc0
+ setlos #0xfffff000,gr31
+ bne icc0,#0,__tlb_user_fault
+
+__dtlb_u_nopunt:
+
+ # assemble the new TLB entry
+ and gr28,gr31,gr28
+ movsg cxnr,gr31
+ or gr28,gr31,gr28
+ movgs gr28,iamlr1 /* xAMLR = address | context number */
+ movgs gr30,iampr1
+ movgs gr28,damlr1
+ movgs gr30,dampr1
+
+ # return, restoring registers
+ movsg scr2,gr30
+ movgs gr30,ccr
+ rett #0
+ beq icc0,#3,0 /* prevent icache prefetch */
+
+ # the PTE we want wasn't in the PTD we have mapped, so we need to go looking for a more
+ # appropriate page table and map that instead
+ # - first of all, check the insn PGE cache - we may well get a hit there
+ # - access the PGD with EAR0[31:26]
+ # - DAMLR3 points to the virtual address of the page directory
+ # - the PGD holds 64 PGEs and each PGE/PME points to a set of page tables
+__dtlb_u_PTD_miss:
+ movsg scr0,gr31 /* consult the insn-PGE-cache key */
+ xor gr28,gr31,gr31
+ srlicc gr31,#26,gr0,icc0
+ srli gr28,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bne icc0,#0,__dtlb_u_iPGE_miss
+
+ # what we're looking for is covered by the insn-PGE-cache
+ setlos 0x3ffc,gr30
+ and gr31,gr30,gr31
+ movsg damlr4,gr30
+ bra __dtlb_u_using_iPTD
+
+__dtlb_u_iPGE_miss:
+ srli gr28,#26,gr31 /* calculate PGE offset */
+ slli gr31,#8,gr31 /* and clear bottom bits */
+
+ movsg damlr3,gr30
+ ld @(gr31,gr30),gr30 /* access the PGE */
+
+ andicc.p gr30,#_PAGE_PRESENT,gr0,icc0
+ andicc gr30,#xAMPRx_SS,gr0,icc1
+
+ # map this PTD instead and record coverage address
+ ori.p gr30,#xAMPRx_L|xAMPRx_SS_16Kb|xAMPRx_S|xAMPRx_C|xAMPRx_V,gr30
+ beq icc0,#0,__tlb_user_fault /* jump if PGE not present */
+ slli.p gr31,#18,gr31
+ bne icc1,#0,__dtlb_u_bigpage
+ movgs gr30,dampr5
+ movgs gr31,scr1
+
+ # we can now resume normal service
+ setlos 0x3ffc,gr30
+ srli.p gr28,#12,gr31 /* use EAR0[25:14] as PTE index */
+ bra __dtlb_u_PTD_mapped
+
+__dtlb_u_bigpage:
+ break
+ nop
+
+ .size __entry_user_data_tlb_miss, .-__entry_user_data_tlb_miss
--- /dev/null
+/* unaligned.c: unalignment fixup handler for CPUs on which it is supported (FR451 only)
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/types.h>
+#include <linux/user.h>
+#include <linux/string.h>
+#include <linux/linkage.h>
+#include <linux/init.h>
+
+#include <asm/setup.h>
+#include <asm/system.h>
+#include <asm/uaccess.h>
+
+#if 0
+#define kdebug(fmt, ...) printk("FDPIC "fmt"\n" ,##__VA_ARGS__ )
+#else
+#define kdebug(fmt, ...) do {} while(0)
+#endif
+
+#define _MA_SIGNED 0x01
+#define _MA_HALF 0x02
+#define _MA_WORD 0x04
+#define _MA_DWORD 0x08
+#define _MA_SZ_MASK 0x0e
+#define _MA_LOAD 0x10
+#define _MA_STORE 0x20
+#define _MA_UPDATE 0x40
+#define _MA_IMM 0x80
+
+#define _MA_LDxU _MA_LOAD | _MA_UPDATE
+#define _MA_LDxI _MA_LOAD | _MA_IMM
+#define _MA_STxU _MA_STORE | _MA_UPDATE
+#define _MA_STxI _MA_STORE | _MA_IMM
+
+static const uint8_t tbl_LDGRk_reg[0x40] = {
+ [0x02] = _MA_LOAD | _MA_HALF | _MA_SIGNED, /* LDSH @(GRi,GRj),GRk */
+ [0x03] = _MA_LOAD | _MA_HALF, /* LDUH @(GRi,GRj),GRk */
+ [0x04] = _MA_LOAD | _MA_WORD, /* LD @(GRi,GRj),GRk */
+ [0x05] = _MA_LOAD | _MA_DWORD, /* LDD @(GRi,GRj),GRk */
+ [0x12] = _MA_LDxU | _MA_HALF | _MA_SIGNED, /* LDSHU @(GRi,GRj),GRk */
+ [0x13] = _MA_LDxU | _MA_HALF, /* LDUHU @(GRi,GRj),GRk */
+ [0x14] = _MA_LDxU | _MA_WORD, /* LDU @(GRi,GRj),GRk */
+ [0x15] = _MA_LDxU | _MA_DWORD, /* LDDU @(GRi,GRj),GRk */
+};
+
+static const uint8_t tbl_STGRk_reg[0x40] = {
+ [0x01] = _MA_STORE | _MA_HALF, /* STH @(GRi,GRj),GRk */
+ [0x02] = _MA_STORE | _MA_WORD, /* ST @(GRi,GRj),GRk */
+ [0x03] = _MA_STORE | _MA_DWORD, /* STD @(GRi,GRj),GRk */
+ [0x11] = _MA_STxU | _MA_HALF, /* STHU @(GRi,GRj),GRk */
+ [0x12] = _MA_STxU | _MA_WORD, /* STU @(GRi,GRj),GRk */
+ [0x13] = _MA_STxU | _MA_DWORD, /* STDU @(GRi,GRj),GRk */
+};
+
+static const uint8_t tbl_LDSTGRk_imm[0x80] = {
+ [0x31] = _MA_LDxI | _MA_HALF | _MA_SIGNED, /* LDSHI @(GRi,d12),GRk */
+ [0x32] = _MA_LDxI | _MA_WORD, /* LDI @(GRi,d12),GRk */
+ [0x33] = _MA_LDxI | _MA_DWORD, /* LDDI @(GRi,d12),GRk */
+ [0x36] = _MA_LDxI | _MA_HALF, /* LDUHI @(GRi,d12),GRk */
+ [0x51] = _MA_STxI | _MA_HALF, /* STHI @(GRi,d12),GRk */
+ [0x52] = _MA_STxI | _MA_WORD, /* STI @(GRi,d12),GRk */
+ [0x53] = _MA_STxI | _MA_DWORD, /* STDI @(GRi,d12),GRk */
+};
+
+
+/*****************************************************************************/
+/*
+ * see if we can handle the exception by fixing up a misaligned memory access
+ */
+int handle_misalignment(unsigned long esr0, unsigned long ear0, unsigned long epcr0)
+{
+ unsigned long insn, addr, *greg;
+ int GRi, GRj, GRk, D12, op;
+
+ union {
+ uint64_t _64;
+ uint32_t _32[2];
+ uint16_t _16;
+ uint8_t _8[8];
+ } x;
+
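+	/* give up unless the exception address and PC registers are valid and the faulting address really is misaligned */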
+ if (!(esr0 & ESR0_EAV) || !(epcr0 & EPCR0_V) || !(ear0 & 7))
+ return -EAGAIN;
+
+ epcr0 &= EPCR0_PC;
+
+ if (__frame->pc != epcr0) {
+ kdebug("MISALIGN: Execution not halted on excepting instruction\n");
+ BUG();
+ }
+
+ if (__get_user(insn, (unsigned long *) epcr0) < 0)
+ return -EFAULT;
+
+ /* determine the instruction type first */
+ switch ((insn >> 18) & 0x7f) {
+ case 0x2:
+ /* LDx @(GRi,GRj),GRk */
+ op = tbl_LDGRk_reg[(insn >> 6) & 0x3f];
+ break;
+
+ case 0x3:
+ /* STx GRk,@(GRi,GRj) */
+ op = tbl_STGRk_reg[(insn >> 6) & 0x3f];
+ break;
+
+ default:
+ op = tbl_LDSTGRk_imm[(insn >> 18) & 0x7f];
+ break;
+ }
+
+ if (!op)
+ return -EAGAIN;
+
+ kdebug("MISALIGN: pc=%08lx insn=%08lx ad=%08lx op=%02x\n", epcr0, insn, ear0, op);
+
+ memset(&x, 0xba, 8);
+
+ /* validate the instruction parameters */
+ greg = (unsigned long *) &__frame->tbr;
+
+ GRi = (insn >> 12) & 0x3f;
+ GRk = (insn >> 25) & 0x3f;
+
+ if (GRi > 31 || GRk > 31)
+ return -ENOENT;
+
+ if (op & _MA_DWORD && GRk & 1)
+ return -EINVAL;
+
+ if (op & _MA_IMM) {
+ D12 = insn & 0xfff;
+ asm ("slli %0,#20,%0 ! srai %0,#20,%0" : "=r"(D12) : "0"(D12)); /* sign extend */
+ addr = (GRi ? greg[GRi] : 0) + D12;
+ }
+ else {
+ GRj = (insn >> 0) & 0x3f;
+ if (GRj > 31)
+ return -ENOENT;
+ addr = (GRi ? greg[GRi] : 0) + (GRj ? greg[GRj] : 0);
+ }
+
+ if (addr != ear0) {
+ kdebug("MISALIGN: Calculated addr (%08lx) does not match EAR0 (%08lx)\n",
+ addr, ear0);
+ return -EFAULT;
+ }
+
+ /* check the address is okay */
+ if (user_mode(__frame) && ___range_ok(ear0, 8) < 0)
+ return -EFAULT;
+
+ /* perform the memory op */
+ if (op & _MA_STORE) {
+ /* perform a store */
+ x._32[0] = 0;
+ if (GRk != 0) {
+ if (op & _MA_HALF) {
+ x._16 = greg[GRk];
+ }
+ else {
+ x._32[0] = greg[GRk];
+ }
+ }
+ if (op & _MA_DWORD)
+ x._32[1] = greg[GRk + 1];
+
+ kdebug("MISALIGN: Store GR%d { %08x:%08x } -> %08lx (%dB)\n",
+ GRk, x._32[1], x._32[0], addr, op & _MA_SZ_MASK);
+
+ if (__memcpy_user((void *) addr, &x, op & _MA_SZ_MASK) != 0)
+ return -EFAULT;
+ }
+ else {
+ /* perform a load */
+ if (__memcpy_user(&x, (void *) addr, op & _MA_SZ_MASK) != 0)
+ return -EFAULT;
+
+ if (op & _MA_HALF) {
+ if (op & _MA_SIGNED)
+ asm ("slli %0,#16,%0 ! srai %0,#16,%0"
+ : "=r"(x._32[0]) : "0"(x._16));
+ else
+ asm ("sethi #0,%0"
+ : "=r"(x._32[0]) : "0"(x._16));
+ }
+
+ kdebug("MISALIGN: Load %08lx (%dB) -> GR%d, { %08x:%08x }\n",
+ addr, op & _MA_SZ_MASK, GRk, x._32[1], x._32[0]);
+
+ if (GRk != 0)
+ greg[GRk] = x._32[0];
+ if (op & _MA_DWORD)
+ greg[GRk + 1] = x._32[1];
+ }
+
+ /* update the base pointer if required */
+ if (op & _MA_UPDATE)
+ greg[GRi] = addr;
+
+ /* well... we've done that insn */
+ __frame->pc = __frame->pc + 4;
+
+ return 0;
+} /* end handle_misalignment() */
#
# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.11-rc1
+# Sun Jan 16 17:24:38 2005
#
CONFIG_H8300=y
# CONFIG_MMU is not set
CONFIG_UID16=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ISA=y
+# CONFIG_PCI is not set
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
-CONFIG_STANDALONE=y
CONFIG_BROKEN_ON_SMP=y
#
# General setup
#
-# CONFIG_SYSVIPC is not set
+CONFIG_LOCALVERSION=""
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_SYSCTL is not set
+# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
# CONFIG_IKCONFIG is not set
CONFIG_EMBEDDED=y
-CONFIG_KALLSYMS=y
-CONFIG_FUTEX=y
-CONFIG_EPOLL=y
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
+# CONFIG_KALLSYMS is not set
+# CONFIG_FUTEX is not set
+# CONFIG_EPOLL is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+CONFIG_TINY_SHMEM=y
#
# Loadable module support
#
# Processor type and features
#
-# CONFIG_H8300H_GENERIC is not set
+CONFIG_H8300H_GENERIC=y
# CONFIG_H8300H_AKI3068NET is not set
# CONFIG_H8300H_H8MAX is not set
-CONFIG_H8300H_SIM=y
+# CONFIG_H8300H_SIM is not set
+# CONFIG_H8S_GENERIC is not set
# CONFIG_H8S_EDOSK2674 is not set
# CONFIG_H8S_SIM is not set
+
+#
+# Detail Selection
+#
# CONFIG_H83002 is not set
-CONFIG_H83007=y
+# CONFIG_H83007 is not set
# CONFIG_H83048 is not set
-# CONFIG_H83068 is not set
-# CONFIG_H8S2678 is not set
-CONFIG_CPU_H8300H=y
-CONFIG_CPU_CLOCK=16000
+CONFIG_H83068=y
+CONFIG_CPU_CLOCK=20000
# CONFIG_RAMKERNEL is not set
CONFIG_ROMKERNEL=y
+CONFIG_CPU_H8300H=y
+# CONFIG_PREEMPT is not set
#
# Executable file formats
#
CONFIG_BINFMT_FLAT=y
-# CONFIG_BINFMT_ZFLAT is not set
+CONFIG_BINFMT_ZFLAT=y
+# CONFIG_BINFMT_SHARED_FLAT is not set
# CONFIG_BINFMT_MISC is not set
#
# Generic Driver Options
#
+# CONFIG_STANDALONE is not set
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+# CONFIG_FW_LOADER is not set
+# CONFIG_DEBUG_DRIVER is not set
#
# Memory Technology Devices (MTD)
CONFIG_MTD=y
# CONFIG_MTD_DEBUG is not set
CONFIG_MTD_PARTITIONS=y
-# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_CONCAT=y
# CONFIG_MTD_REDBOOT_PARTS is not set
# CONFIG_MTD_CMDLINE_PARTS is not set
#
# CONFIG_MTD_CFI is not set
# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
CONFIG_MTD_RAM=y
-# CONFIG_MTD_ROM is not set
+CONFIG_MTD_ROM=y
# CONFIG_MTD_ABSENT is not set
-# CONFIG_MTD_OBSOLETE_CHIPS is not set
#
# Mapping drivers for chip access
# Self-contained MTD device drivers
#
# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_XD is not set
# CONFIG_BLK_DEV_LOOP is not set
-# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
-# CONFIG_BLK_DEV_INITRD is not set
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CDROM_PKTCDVD is not set
#
-# ATA/ATAPI/MFM/RLL support
+# IO Schedulers
#
-# CONFIG_IDE is not set
+CONFIG_IOSCHED_NOOP=y
+# CONFIG_IOSCHED_AS is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
#
-# IDE Extra configuration
+# ATA/ATAPI/MFM/RLL support
#
+# CONFIG_IDE is not set
#
# Networking support
#
-CONFIG_NET=y
-
-#
-# Networking options
-#
-# CONFIG_PACKET is not set
-# CONFIG_NETLINK_DEV is not set
-# CONFIG_UNIX is not set
-# CONFIG_NET_KEY is not set
-# CONFIG_INET is not set
-# CONFIG_INET_AH is not set
-# CONFIG_INET_ESP is not set
-# CONFIG_INET_IPCOMP is not set
-# CONFIG_DECNET is not set
-# CONFIG_BRIDGE is not set
-# CONFIG_NETFILTER is not set
-# CONFIG_ATM is not set
-# CONFIG_VLAN_8021Q is not set
-# CONFIG_LLC2 is not set
-# CONFIG_IPX is not set
-# CONFIG_ATALK is not set
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-# CONFIG_NET_DIVERT is not set
-# CONFIG_WAN_ROUTER is not set
-# CONFIG_NET_HW_FLOWCONTROL is not set
-
-#
-# QoS and/or fair queueing
-#
-# CONFIG_NET_SCHED is not set
-
-#
-# Network testing
-#
-# CONFIG_NET_PKTGEN is not set
-CONFIG_NETDEVICES=y
-
-#
-# ARCnet devices
-#
-# CONFIG_ARCNET is not set
-# CONFIG_DUMMY is not set
-# CONFIG_BONDING is not set
-# CONFIG_EQUALIZER is not set
-# CONFIG_TUN is not set
-
-#
-# Ethernet (10 or 100Mbit)
-#
-# CONFIG_NET_ETHERNET is not set
-
-#
-# Ethernet (1000 Mbit)
-#
-
-#
-# Ethernet (10000 Mbit)
-#
-# CONFIG_PPP is not set
-# CONFIG_SLIP is not set
-
-#
-# Wireless LAN (non-hamradio)
-#
-# CONFIG_NET_RADIO is not set
-
-#
-# Token Ring devices
-#
-# CONFIG_TR is not set
-# CONFIG_SHAPER is not set
-
-#
-# Wan interfaces
-#
-# CONFIG_WAN is not set
-
-#
-# Amateur Radio support
-#
-# CONFIG_HAMRADIO is not set
-
-#
-# IrDA (infrared) support
-#
-# CONFIG_IRDA is not set
-
-#
-# Bluetooth support
-#
-# CONFIG_BT is not set
+# CONFIG_NET is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
#
# Input device support
# Character devices
#
# CONFIG_VT is not set
-# CONFIG_SERIAL is not set
-CONFIG_SH_SCI=y
-CONFIG_SERIAL_CONSOLE=y
#
# Unix98 PTY support
#
# Non-8250 serial port support
#
+CONFIG_SERIAL_SH_SCI=y
+CONFIG_SERIAL_SH_SCI_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
#
# I2C support
#
# USB support
#
+# CONFIG_USB_ARCH_HAS_HCD is not set
+# CONFIG_USB_ARCH_HAS_OHCI is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
#
# USB Gadget Support
# CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y
# CONFIG_QUOTA is not set
+# CONFIG_DNOTIFY is not set
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
#
# DOS/FAT/NT Filesystems
#
-# CONFIG_FAT_FS is not set
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
+# CONFIG_SYSFS is not set
# CONFIG_DEVFS_FS is not set
# CONFIG_TMPFS is not set
# CONFIG_HUGETLB_PAGE is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
-#
-# Network File Systems
-#
-# CONFIG_EXPORTFS is not set
-
#
# Partition Types
#
#
# Kernel hacking
#
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_FS is not set
CONFIG_FULLDEBUG=y
-# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_HIGHPROFILE is not set
CONFIG_NO_KERNEL_MSG=y
-CONFIG_GDB_MAGICPRINT=y
# CONFIG_SYSCALL_PRINT is not set
+# CONFIG_GDB_DEBUG is not set
+# CONFIG_CONFIG_SH_STANDARD_BIOS is not set
# CONFIG_DEFAULT_CMDLINE is not set
# CONFIG_BLKDEV_RESERVE is not set
#
# Security options
#
+# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
-# CONFIG_CRC32 is not set
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
u32 ft_tab[4][256];
u32 fl_tab[4][256];
-u32 ls_tab[4][256];
-u32 im_tab[4][256];
+static u32 ls_tab[4][256];
+static u32 im_tab[4][256];
u32 il_tab[4][256];
u32 it_tab[4][256];
-void gen_tabs(void)
+static void gen_tabs(void)
{
u32 i, w;
u8 pow[512], log[256];
}
display_cacheinfo(c);
+ detect_ht(c);
+
+#ifdef CONFIG_X86_HT
+ /* AMD dual core looks like HT but isn't really. Hide it from the
+ scheduler. This works around problems with the domain scheduler.
+ Also probably gives slightly better scheduling and disables
+ SMT nice which is harmful on dual core.
+ TBD tune the domain scheduler for dual core. */
+ if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+ smp_num_siblings = 1;
+#endif
+
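+	/* CPUID leaf 0x80000008 reports the core count minus one in ECX[7:0];
+	   fall back to a single core if the reported count is not a power of two */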
+ if (cpuid_eax(0x80000000) >= 0x80000008) {
+ c->x86_num_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
+ if (c->x86_num_cores & (c->x86_num_cores - 1))
+ c->x86_num_cores = 1;
+ }
}
static unsigned int amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
depends on CPU_FREQ && EXPERIMENTAL
help
This adds the CPUFreq driver for FSB changing on nVidia nForce2
- plattforms.
+ platforms.
For details, take a look at <file:Documentation/cpu-freq/>.
#include <linux/acpi.h>
#include <acpi/processor.h>
+#include "speedstep-est-common.h"
+
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "acpi-cpufreq", msg)
MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski");
struct cpufreq_acpi_io {
struct acpi_processor_performance acpi_data;
struct cpufreq_frequency_table *freq_table;
+ unsigned int resume;
};
static struct cpufreq_acpi_io *acpi_io_data[NR_CPUS];
+static struct cpufreq_driver acpi_cpufreq_driver;
static int
acpi_processor_write_port(
}
if (state == data->acpi_data.state) {
- dprintk("Already at target state (P%d)\n", state);
- retval = 0;
- goto migrate_end;
+ if (unlikely(data->resume)) {
+ dprintk("Called after resume, resetting to P%d\n", state);
+ data->resume = 0;
+ } else {
+ dprintk("Already at target state (P%d)\n", state);
+ retval = 0;
+ goto migrate_end;
+ }
}
dprintk("Transitioning from P%d to P%d\n",
if (result)
goto err_free;
+ if (is_const_loops_cpu(cpu)) {
+ acpi_cpufreq_driver.flags |= CPUFREQ_CONST_LOOPS;
+ }
+
/* capability check */
if (data->acpi_data.state_count <= 1) {
dprintk("No P-States\n");
return (0);
}
+static int
+acpi_cpufreq_resume (
+ struct cpufreq_policy *policy)
+{
+ struct cpufreq_acpi_io *data = acpi_io_data[policy->cpu];
+
+ dprintk("acpi_cpufreq_resume\n");
+
+ data->resume = 1;
+
+ return (0);
+}
+
static struct freq_attr* acpi_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
.target = acpi_cpufreq_target,
.init = acpi_cpufreq_cpu_init,
.exit = acpi_cpufreq_cpu_exit,
+ .resume = acpi_cpufreq_resume,
.name = "acpi-cpufreq",
.owner = THIS_MODULE,
.attr = acpi_cpufreq_attr,
MODULE_PARM_DESC(min_fsb,
"Minimum FSB to use, if not defined: current FSB - 50");
-/* DEBUG
- * Define it if you want verbose debug output, e.g. for bug reporting
- */
-//#define NFORCE2_DEBUG
-
-#ifdef NFORCE2_DEBUG
-#define dprintk(msg...) printk(msg)
-#else
-#define dprintk(msg...) do { } while(0)
-#endif
+#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "cpufreq-nforce2", msg)
/*
* nforce2_calc_fsb - calculate FSB
--- /dev/null
+/*
+ * Routines common for drivers handling Enhanced Speedstep Technology
+ * Copyright (C) 2004 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
+ *
+ * Licensed under the terms of the GNU GPL License version 2 -- see
+ * COPYING for details.
+ */
+
+static inline int is_const_loops_cpu(unsigned int cpu)
+{
+ struct cpuinfo_x86 *c = cpu_data + cpu;
+
+ if (c->x86_vendor != X86_VENDOR_INTEL || !cpu_has(c, X86_FEATURE_EST))
+ return 0;
+
+ /*
+ * on P-4s, the TSC runs with constant frequency independent of cpu freq
+ * when we use EST
+ */
+ if (c->x86 == 0xf)
+ return 1;
+
+ return 0;
+}
+
printk(KERN_DEBUG "speedstep-lib: couldn't detect FSB speed. Please send an e-mail to <linux@brodo.de>\n");
/* Multiplier. */
- mult = msr_lo >> 24;
+ if (c->x86_model < 2)
+ mult = msr_lo >> 27;
+ else
+ mult = msr_lo >> 24;
dprintk("P4 - FSB %u kHz; Multiplier %u; Speed %u kHz\n", fsb, mult, (fsb * mult));
{ 0x00, 0, 0}
};
-unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c)
+unsigned int __init init_intel_cacheinfo(struct cpuinfo_x86 *c)
{
unsigned int trace = 0, l1i = 0, l1d = 0, l2 = 0, l3 = 0; /* Cache sizes */
#include <linux/spinlock.h>
#include <linux/preempt.h>
#include <asm/kdebug.h>
+#include <asm/desc.h>
/* kprobe_status settings */
#define KPROBE_HIT_ACTIVE 0x00000001
int arch_prepare_kprobe(struct kprobe *p)
{
- memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
return 0;
}
+void arch_copy_kprobe(struct kprobe *p)
+{
+ memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+}
+
void arch_remove_kprobe(struct kprobe *p)
{
}
* Interrupts are disabled on entry as trap3 is an interrupt gate and they
* remain disabled thorough out this function.
*/
-static inline int kprobe_handler(struct pt_regs *regs)
+static int kprobe_handler(struct pt_regs *regs)
{
struct kprobe *p;
int ret = 0;
- u8 *addr = (u8 *) (regs->eip - 1);
+ kprobe_opcode_t *addr = NULL;
+ unsigned long *lp;
/* We're in an interrupt, but this is clear and BUG()-safe. */
preempt_disable();
-
+ /* Check if the application is using LDT entry for its code segment and
+ * calculate the address by reading the base address from the LDT entry.
+ */
+ if ((regs->xcs & 4) && (current->mm)) {
+ lp = (unsigned long *) ((unsigned long)((regs->xcs >> 3) * 8)
+ + (char *) current->mm->context.ldt);
+ addr = (kprobe_opcode_t *) (get_desc_base(lp) + regs->eip -
+ sizeof(kprobe_opcode_t));
+ } else {
+ addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
+ }
/* Check we're not actually recursing */
if (kprobe_running()) {
/* We *are* holding lock here, so this is safe.
if (!mem_base)
goto out;
- dev->dma_mem = kmalloc(GFP_KERNEL, sizeof(struct dma_coherent_mem));
+ dev->dma_mem = kmalloc(sizeof(struct dma_coherent_mem), GFP_KERNEL);
if (!dev->dma_mem)
goto out;
memset(dev->dma_mem, 0, sizeof(struct dma_coherent_mem));
- dev->dma_mem->bitmap = kmalloc(GFP_KERNEL, bitmap_size);
+ dev->dma_mem->bitmap = kmalloc(bitmap_size, GFP_KERNEL);
if (!dev->dma_mem->bitmap)
goto free1_out;
memset(dev->dma_mem->bitmap, 0, bitmap_size);
if(!mem)
return;
dev->dma_mem = NULL;
+ iounmap(mem->virt_base);
kfree(mem->bitmap);
kfree(mem);
}
#include <linux/sysdev.h>
#include <linux/bcd.h>
#include <linux/efi.h>
+#include <linux/mca.h>
#include <asm/io.h>
#include <asm/smp.h>
extern unsigned long wall_jiffies;
-spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(rtc_lock);
-spinlock_t i8253_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);
struct timer_opts *cur_timer = &timer_none;
last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
}
-#ifdef CONFIG_MCA
- if( MCA_bus ) {
+ if (MCA_bus) {
/* The PS/2 uses level-triggered interrupts. You can't
turn them off, nor would you want to (any attempt to
enable edge-triggered interrupts usually gets intercepted by a
irq = inb_p( 0x61 ); /* read the current state */
outb_p( irq|0x80, 0x61 ); /* reset the IRQ */
}
-#endif
}
/*
hpet_reenable();
#endif
sec = get_cmos_time() + clock_cmos_diff;
- sleep_length = get_cmos_time() - sleep_start;
+ sleep_length = (get_cmos_time() - sleep_start) * HZ;
write_seqlock_irqsave(&xtime_lock, flags);
xtime.tv_sec = sec;
xtime.tv_nsec = 0;
write_sequnlock_irqrestore(&xtime_lock, flags);
- jiffies += sleep_length * HZ;
+ jiffies += sleep_length;
+ wall_jiffies += sleep_length;
return 0;
}
movl $0xA5A5A5A5, trampoline_data - r_base
# write marker so the master knows we're running
- lidt boot_idt - r_base # load idt with 0, 0
- lgdt boot_gdt - r_base # load gdt with whatever is appropriate
+	/* GDT tables in a non-default location can be beyond 16MB, and lgdt
+	 * will not be able to load the address because the default operand
+	 * size in real mode is 16 bits. Use lgdtl instead to force the
+	 * operand size to 32 bits.
+ */
+
+ lidtl boot_idt - r_base # load idt with 0, 0
+ lgdtl boot_gdt - r_base # load gdt with whatever is appropriate
xor %ax, %ax
inc %ax # protected mode (PE) bit
void (*pm_power_off)(void);
-int reboot_thru_bios;
-int reboot_smp;
-
void machine_restart(char * __unused)
{
#ifdef CONFIG_SMP
#include "irq_vectors.h"
-static spinlock_t cobalt_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cobalt_lock);
/*
* Set the given Cobalt APIC Redirection Table entry to point
oprofilefs.o oprofile_stats.o \
timer_int.o )
-oprofile-y := $(DRIVER_OBJS) init.o
+oprofile-y := $(DRIVER_OBJS) init.o backtrace.o
oprofile-$(CONFIG_X86_LOCAL_APIC) += nmi_int.o op_model_athlon.o \
op_model_ppro.o op_model_p4.o
oprofile-$(CONFIG_X86_IO_APIC) += nmi_timer_int.o
* with the NMI mode driver.
*/
-extern int nmi_init(struct oprofile_operations ** ops);
-extern int nmi_timer_init(struct oprofile_operations **ops);
+extern int nmi_init(struct oprofile_operations * ops);
+extern int nmi_timer_init(struct oprofile_operations * ops);
extern void nmi_exit(void);
+extern void x86_backtrace(struct pt_regs * const regs, unsigned int depth);
-int __init oprofile_arch_init(struct oprofile_operations ** ops)
+
+int __init oprofile_arch_init(struct oprofile_operations * ops)
{
- int ret = -ENODEV;
+ int ret;
+
+ ret = -ENODEV;
+
#ifdef CONFIG_X86_LOCAL_APIC
ret = nmi_init(ops);
#endif
-
#ifdef CONFIG_X86_IO_APIC
if (ret < 0)
ret = nmi_timer_init(ops);
#endif
+ ops->backtrace = x86_backtrace;
+
return ret;
}
}
-static void __exit exit_driverfs(void)
+static void exit_driverfs(void)
{
sysdev_unregister(&device_oprofile);
sysdev_class_unregister(&oprofile_sysclass);
static int nmi_callback(struct pt_regs * regs, int cpu)
{
- return model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
+ return model->check_ctrs(regs, &cpu_msrs[cpu]);
}
}
-struct oprofile_operations nmi_ops = {
- .create_files = nmi_create_files,
- .setup = nmi_setup,
- .shutdown = nmi_shutdown,
- .start = nmi_start,
- .stop = nmi_stop
-};
-
-
-static int __init p4_init(void)
+static int __init p4_init(char ** cpu_type)
{
- __u8 cpu_model = current_cpu_data.x86_model;
+ __u8 cpu_model = boot_cpu_data.x86_model;
- if (cpu_model > 3)
+ if (cpu_model > 4)
return 0;
#ifndef CONFIG_SMP
- nmi_ops.cpu_type = "i386/p4";
+ *cpu_type = "i386/p4";
model = &op_p4_spec;
return 1;
#else
switch (smp_num_siblings) {
case 1:
- nmi_ops.cpu_type = "i386/p4";
+ *cpu_type = "i386/p4";
model = &op_p4_spec;
return 1;
case 2:
- nmi_ops.cpu_type = "i386/p4-ht";
+ *cpu_type = "i386/p4-ht";
model = &op_p4_ht2_spec;
return 1;
}
}
-static int __init ppro_init(void)
+static int __init ppro_init(char ** cpu_type)
{
- __u8 cpu_model = current_cpu_data.x86_model;
+ __u8 cpu_model = boot_cpu_data.x86_model;
if (cpu_model > 0xd)
return 0;
if (cpu_model == 9) {
- nmi_ops.cpu_type = "i386/p6_mobile";
+ *cpu_type = "i386/p6_mobile";
} else if (cpu_model > 5) {
- nmi_ops.cpu_type = "i386/piii";
+ *cpu_type = "i386/piii";
} else if (cpu_model > 2) {
- nmi_ops.cpu_type = "i386/pii";
+ *cpu_type = "i386/pii";
} else {
- nmi_ops.cpu_type = "i386/ppro";
+ *cpu_type = "i386/ppro";
}
model = &op_ppro_spec;
/* in order to get driverfs right */
static int using_nmi;
-int __init nmi_init(struct oprofile_operations ** ops)
+int __init nmi_init(struct oprofile_operations *ops)
{
- __u8 vendor = current_cpu_data.x86_vendor;
- __u8 family = current_cpu_data.x86;
-
+ __u8 vendor = boot_cpu_data.x86_vendor;
+ __u8 family = boot_cpu_data.x86;
+ char *cpu_type;
+
if (!cpu_has_apic)
return -ENODEV;
return -ENODEV;
case 6:
model = &op_athlon_spec;
- nmi_ops.cpu_type = "i386/athlon";
+ cpu_type = "i386/athlon";
break;
case 0xf:
model = &op_athlon_spec;
/* Actually it could be i386/hammer too, but give
user space a consistent name. */
- nmi_ops.cpu_type = "x86-64/hammer";
+ cpu_type = "x86-64/hammer";
break;
}
break;
switch (family) {
/* Pentium IV */
case 0xf:
- if (!p4_init())
+ if (!p4_init(&cpu_type))
return -ENODEV;
break;
/* A P6-class processor */
case 6:
- if (!ppro_init())
+ if (!ppro_init(&cpu_type))
return -ENODEV;
break;
init_driverfs();
using_nmi = 1;
- *ops = &nmi_ops;
+ ops->create_files = nmi_create_files;
+ ops->setup = nmi_setup;
+ ops->shutdown = nmi_shutdown;
+ ops->start = nmi_start;
+ ops->stop = nmi_stop;
+ ops->cpu_type = cpu_type;
printk(KERN_INFO "oprofile: using NMI interrupt.\n");
return 0;
}
-void __exit nmi_exit(void)
+void nmi_exit(void)
{
if (using_nmi)
exit_driverfs();
static int nmi_timer_callback(struct pt_regs * regs, int cpu)
{
- unsigned long eip = instruction_pointer(regs);
-
- oprofile_add_sample(eip, !user_mode(regs), 0, cpu);
+ oprofile_add_sample(regs, 0);
return 1;
}
}
-static struct oprofile_operations nmi_timer_ops = {
- .start = timer_start,
- .stop = timer_stop,
- .cpu_type = "timer"
-};
-
-int __init nmi_timer_init(struct oprofile_operations ** ops)
+int __init nmi_timer_init(struct oprofile_operations * ops)
{
extern int nmi_active;
if (nmi_active <= 0)
return -ENODEV;
- *ops = &nmi_timer_ops;
+ ops->start = timer_start;
+ ops->stop = timer_stop;
+ ops->cpu_type = "timer";
printk(KERN_INFO "oprofile: using NMI timer interrupt.\n");
return 0;
}
}
-static int athlon_check_ctrs(unsigned int const cpu,
- struct op_msrs const * const msrs,
- struct pt_regs * const regs)
+static int athlon_check_ctrs(struct pt_regs * const regs,
+ struct op_msrs const * const msrs)
{
unsigned int low, high;
int i;
- unsigned long eip = profile_pc(regs);
- int is_kernel = !user_mode(regs);
for (i = 0 ; i < NUM_COUNTERS; ++i) {
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
- oprofile_add_sample(eip, is_kernel, i, cpu);
+ oprofile_add_sample(regs, i);
CTR_WRITE(reset_value[i], msrs, i);
}
}
}
-static int ppro_check_ctrs(unsigned int const cpu,
- struct op_msrs const * const msrs,
- struct pt_regs * const regs)
+static int ppro_check_ctrs(struct pt_regs * const regs,
+ struct op_msrs const * const msrs)
{
unsigned int low, high;
int i;
- unsigned long eip = profile_pc(regs);
- int is_kernel = !user_mode(regs);
for (i = 0 ; i < NUM_COUNTERS; ++i) {
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
- oprofile_add_sample(eip, is_kernel, i, cpu);
+ oprofile_add_sample(regs, i);
CTR_WRITE(reset_value[i], msrs, i);
}
}
unsigned int const num_controls;
void (*fill_in_addresses)(struct op_msrs * const msrs);
void (*setup_ctrs)(struct op_msrs const * const msrs);
- int (*check_ctrs)(unsigned int const cpu,
- struct op_msrs const * const msrs,
- struct pt_regs * const regs);
+ int (*check_ctrs)(struct pt_regs * const regs,
+ struct op_msrs const * const msrs);
void (*start)(struct op_msrs const * const msrs);
void (*stop)(struct op_msrs const * const msrs);
};
* This interrupt-safe spinlock protects all accesses to PCI
* configuration space.
*/
-spinlock_t pci_config_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(pci_config_lock);
/*
* Several buggy motherboards address only 16 devices and mirror
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/nodemask.h>
#include "pci.h"
#define BUS2QUAD(global) (mp_bus_id_to_node[global])
return 0;
pci_root_bus = pcibios_scan_root(0);
- if (numnodes > 1) {
- for (quad = 1; quad < numnodes; ++quad) {
+ if (num_online_nodes() > 1)
+ for_each_online_node(quad) {
+ if (quad == 0)
+ continue;
printk("Scanning PCI bus %d for quad %d\n",
QUADLOCAL2BUS(quad,0), quad);
pci_scan_bus(QUADLOCAL2BUS(quad,0),
&pci_root_ops, NULL);
}
- }
return 0;
}
}
}
if (!found) {
- printk(KERN_WARNING "PCI: Device %02x:%02x not found by BIOS\n",
- dev->bus->number, dev->devfn);
+ printk(KERN_WARNING "PCI: Device %s not found by BIOS\n",
+ pci_name(dev));
list_del(&dev->global_list);
list_add_tail(&dev->global_list, &sorted_devices);
}
extern int pcibios_scanned;
extern spinlock_t pci_config_lock;
-int pirq_enable_irq(struct pci_dev *dev);
-
extern int (*pcibios_enable_irq)(struct pci_dev *dev);
config IA64_GRANULE_64MB
bool "64MB"
- depends on !(IA64_GENERIC || IA64_HP_ZX1 || IA64_SGI_SN2)
+ depends on !(IA64_GENERIC || IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_SGI_SN2)
endchoice
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_IA64_BRL_EMU=y
-# CONFIG_ITANIUM_BSTEP_SPECIFIC is not set
CONFIG_IA64_L1_CACHE_SHIFT=6
# CONFIG_NUMA is not set
# CONFIG_VIRTUAL_MEM_MAP is not set
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1
-# Tue Nov 2 11:35:10 2004
+# Linux kernel version: 2.6.11-rc2
+# Sat Jan 22 11:17:02 2005
#
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_TIME_INTERPOLATION=y
CONFIG_EFI=y
CONFIG_GENERIC_IOMAP=y
# CONFIG_IA64_GENERIC is not set
CONFIG_IA64_DIG=y
# CONFIG_IA64_HP_ZX1 is not set
+# CONFIG_IA64_HP_ZX1_SWIOTLB is not set
# CONFIG_IA64_SGI_SN2 is not set
# CONFIG_IA64_HP_SIM is not set
# CONFIG_ITANIUM is not set
CONFIG_IA64_L1_CACHE_SHIFT=7
# CONFIG_NUMA is not set
CONFIG_VIRTUAL_MEM_MAP=y
+CONFIG_HOLES_IN_ZONE=y
CONFIG_IA64_CYCLONE=y
CONFIG_IOSAPIC=y
CONFIG_FORCE_MAX_ZONEORDER=18
CONFIG_IA64_MCA_RECOVERY=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
+CONFIG_ACPI_DEALLOCATE_IRQ=y
#
# Firmware Drivers
CONFIG_ACPI_BOOT=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_BUTTON=m
+# CONFIG_ACPI_VIDEO is not set
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
+# CONFIG_ACPI_HOTPLUG_CPU is not set
CONFIG_ACPI_THERMAL=m
CONFIG_ACPI_BLACKLIST_YEAR=0
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_POWER=y
CONFIG_ACPI_PCI=y
CONFIG_ACPI_SYSTEM=y
+# CONFIG_ACPI_CONTAINER is not set
#
# Bus options (PCI, PCMCIA)
CONFIG_HOTPLUG_PCI_ACPI=m
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
-# CONFIG_HOTPLUG_PCI_PCIE is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
#
#
# Plug and Play support
#
+# CONFIG_PNP is not set
#
# Block devices
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_NBD=m
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
CONFIG_BLK_DEV_RAM=m
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
CONFIG_SCSI_QLA2300=m
CONFIG_SCSI_QLA2322=m
# CONFIG_SCSI_QLA6312 is not set
-# CONFIG_SCSI_QLA6322 is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
CONFIG_MD_RAID5=m
CONFIG_MD_RAID6=m
CONFIG_MD_MULTIPATH=m
+# CONFIG_MD_FAULTY is not set
CONFIG_BLK_DEV_DM=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_IP_TCPDIAG=y
+# CONFIG_IP_TCPDIAG_IPV6 is not set
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
# CONFIG_FORCEDETH is not set
# CONFIG_DGRS is not set
CONFIG_EEPRO100=m
-# CONFIG_EEPRO100_PIO is not set
CONFIG_E100=m
# CONFIG_E100_NAPI is not set
# CONFIG_FEALNX is not set
# CONFIG_GAMEPORT_EMU10K1 is not set
# CONFIG_GAMEPORT_VORTEX is not set
# CONFIG_GAMEPORT_FM801 is not set
-# CONFIG_GAMEPORT_CS461x is not set
+# CONFIG_GAMEPORT_CS461X is not set
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
#
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
# CONFIG_CYCLADES is not set
+# CONFIG_MOXA_SMARTIO is not set
+# CONFIG_ISI is not set
# CONFIG_SYNCLINK is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_N_HDLC is not set
#
CONFIG_AGP=m
CONFIG_AGP_I460=m
-CONFIG_DRM=y
+CONFIG_DRM=m
CONFIG_DRM_TDFX=m
CONFIG_DRM_R128=m
CONFIG_DRM_RADEON=m
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_UHCI_HCD=y
+# CONFIG_USB_SL811_HCD is not set
#
# USB Device Class drivers
# CONFIG_USB_BLUETOOTH_TTY is not set
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_RW_DETECT is not set
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
-# CONFIG_USB_HPUSBSCSI is not set
#
# USB Multimedia devices
#
#
-# USB Network adaptors
+# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
-# CONFIG_USB_TIGL is not set
# CONFIG_USB_AUERSWALD is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_PHIDGETKIT is not set
# CONFIG_USB_PHIDGETSERVO is not set
+# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_TEST is not set
#
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
CONFIG_CIFS=m
# CONFIG_CIFS_STATS is not set
# CONFIG_CIFS_XATTR is not set
-# CONFIG_CIFS_POSIX is not set
+# CONFIG_CIFS_EXPERIMENTAL is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_CRC_CCITT is not set
CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Profiling support
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
+# CONFIG_DEBUG_FS is not set
CONFIG_IA64_GRANULE_16MB=y
# CONFIG_IA64_GRANULE_64MB is not set
# CONFIG_IA64_PRINT_HAZARDS is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_DEFLATE is not set
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_CRC32C is not set
# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.9-rc2-aegl
-# Mon Sep 27 19:03:13 2004
+# Linux kernel version: 2.6.10
+# Wed Dec 29 09:05:48 2004
#
#
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_BROKEN_ON_SMP=y
+CONFIG_LOCK_KERNEL=y
#
# General setup
# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=17
CONFIG_HOTPLUG=y
+CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_FUTEX=y
CONFIG_EPOLL=y
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
# CONFIG_TINY_SHMEM is not set
#
# CONFIG_MODULE_UNLOAD is not set
CONFIG_OBSOLETE_MODPARM=y
# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_KMOD is not set
#
CONFIG_HAVE_DEC_LOCK=y
CONFIG_IA32_SUPPORT=y
CONFIG_COMPAT=y
+CONFIG_IA64_MCA_RECOVERY=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
CONFIG_ACPI_BOOT=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_BUTTON=y
+CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_THERMAL=y
+CONFIG_ACPI_BLACKLIST_YEAR=0
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_BUS=y
CONFIG_ACPI_POWER=y
# CONFIG_HOTPLUG_PCI_SHPC is not set
#
-# PCMCIA/CardBus support
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
#
-# CONFIG_PCMCIA is not set
#
# Device Drivers
#
# Plug and Play support
#
+# CONFIG_PNP is not set
#
# Block devices
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_UB is not set
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+# CONFIG_CDROM_PKTCDVD is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_BLK_DEV_IDEFLOPPY is not set
# CONFIG_BLK_DEV_IDESCSI is not set
# CONFIG_IDE_TASK_IOCTL is not set
-# CONFIG_IDE_TASKFILE_IO is not set
#
# IDE chipset support/bugfixes
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
CONFIG_SCSI_QLOGIC_1280=y
+# CONFIG_SCSI_QLOGIC_1280_1040 is not set
CONFIG_SCSI_QLA2XXX=y
# CONFIG_SCSI_QLA21XX is not set
# CONFIG_SCSI_QLA22XX is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
+# CONFIG_IP_TCPDIAG is not set
+# CONFIG_IP_TCPDIAG_IPV6 is not set
#
# IP: Virtual Server Configuration
# IP: Netfilter Configuration
#
# CONFIG_IP_NF_CONNTRACK is not set
+# CONFIG_IP_NF_CONNTRACK_MARK is not set
# CONFIG_IP_NF_QUEUE is not set
# CONFIG_IP_NF_IPTABLES is not set
CONFIG_IP_NF_ARPTABLES=y
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
-# CONFIG_NET_HW_FLOWCONTROL is not set
#
# QoS and/or fair queueing
# CONFIG_EPIC100 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_VIA_RHINE is not set
-# CONFIG_VIA_VELOCITY is not set
#
# Ethernet (1000 Mbit)
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SK98LIN is not set
+# CONFIG_VIA_VELOCITY is not set
CONFIG_TIGON3=y
#
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_SENSOR is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_FSCHER is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_W83781D is not set
#
# Multimedia devices
#
-# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_DEV=y
+
+#
+# Video For Linux
+#
+
+#
+# Video Adapters
+#
+# CONFIG_VIDEO_BT848 is not set
+# CONFIG_VIDEO_CPIA is not set
+# CONFIG_VIDEO_SAA5246A is not set
+# CONFIG_VIDEO_SAA5249 is not set
+# CONFIG_TUNER_3036 is not set
+# CONFIG_VIDEO_STRADIS is not set
+# CONFIG_VIDEO_ZORAN is not set
+# CONFIG_VIDEO_ZR36120 is not set
+# CONFIG_VIDEO_SAA7134 is not set
+# CONFIG_VIDEO_MXB is not set
+# CONFIG_VIDEO_DPC is not set
+# CONFIG_VIDEO_HEXIUM_ORION is not set
+# CONFIG_VIDEO_HEXIUM_GEMINI is not set
+# CONFIG_VIDEO_CX88 is not set
+# CONFIG_VIDEO_OVCAMCHIP is not set
+
+#
+# Radio Adapters
+#
+# CONFIG_RADIO_GEMTEK_PCI is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_MAESTRO is not set
#
# Digital Video Broadcasting Devices
#
CONFIG_FB=y
CONFIG_FB_MODE_HELPERS=y
+# CONFIG_FB_TILEBLITTING is not set
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
CONFIG_FB_RADEON_DEBUG=y
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
+# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
#
# Sound
#
-# CONFIG_SOUND is not set
+CONFIG_SOUND=y
+
+#
+# Advanced Linux Sound Architecture
+#
+CONFIG_SND=y
+CONFIG_SND_TIMER=y
+CONFIG_SND_PCM=y
+CONFIG_SND_HWDEP=y
+CONFIG_SND_RAWMIDI=y
+CONFIG_SND_SEQUENCER=y
+# CONFIG_SND_SEQ_DUMMY is not set
+CONFIG_SND_OSSEMUL=y
+CONFIG_SND_MIXER_OSS=y
+CONFIG_SND_PCM_OSS=y
+CONFIG_SND_SEQUENCER_OSS=y
+# CONFIG_SND_VERBOSE_PRINTK is not set
+# CONFIG_SND_DEBUG is not set
+
+#
+# Generic devices
+#
+CONFIG_SND_MPU401_UART=y
+CONFIG_SND_OPL3_LIB=y
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_VIRMIDI is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+
+#
+# PCI devices
+#
+CONFIG_SND_AC97_CODEC=y
+# CONFIG_SND_ALI5451 is not set
+# CONFIG_SND_ATIIXP is not set
+# CONFIG_SND_ATIIXP_MODEM is not set
+# CONFIG_SND_AU8810 is not set
+# CONFIG_SND_AU8820 is not set
+# CONFIG_SND_AU8830 is not set
+# CONFIG_SND_AZT3328 is not set
+# CONFIG_SND_BT87X is not set
+# CONFIG_SND_CS46XX is not set
+# CONFIG_SND_CS4281 is not set
+# CONFIG_SND_EMU10K1 is not set
+# CONFIG_SND_KORG1212 is not set
+# CONFIG_SND_MIXART is not set
+# CONFIG_SND_NM256 is not set
+# CONFIG_SND_RME32 is not set
+# CONFIG_SND_RME96 is not set
+# CONFIG_SND_RME9652 is not set
+# CONFIG_SND_HDSP is not set
+# CONFIG_SND_TRIDENT is not set
+# CONFIG_SND_YMFPCI is not set
+# CONFIG_SND_ALS4000 is not set
+# CONFIG_SND_CMIPCI is not set
+# CONFIG_SND_ENS1370 is not set
+# CONFIG_SND_ENS1371 is not set
+# CONFIG_SND_ES1938 is not set
+# CONFIG_SND_ES1968 is not set
+# CONFIG_SND_MAESTRO3 is not set
+CONFIG_SND_FM801=y
+CONFIG_SND_FM801_TEA575X=y
+# CONFIG_SND_ICE1712 is not set
+# CONFIG_SND_ICE1724 is not set
+# CONFIG_SND_INTEL8X0 is not set
+# CONFIG_SND_INTEL8X0M is not set
+# CONFIG_SND_SONICVIBES is not set
+# CONFIG_SND_VIA82XX is not set
+# CONFIG_SND_VX222 is not set
+
+#
+# USB devices
+#
+# CONFIG_SND_USB_AUDIO is not set
+# CONFIG_SND_USB_USX2Y is not set
+
+#
+# Open Sound System
+#
+# CONFIG_SOUND_PRIME is not set
#
# USB support
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_SUSPEND is not set
# CONFIG_USB_OTG is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
#
# USB Host Controller Drivers
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_UHCI_HCD=y
+# CONFIG_USB_SL811_HCD is not set
#
# USB Device Class drivers
#
+# CONFIG_USB_AUDIO is not set
# CONFIG_USB_BLUETOOTH_TTY is not set
+# CONFIG_USB_MIDI is not set
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
CONFIG_USB_STORAGE=y
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_RW_DETECT is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
#
-# USB Human Interface Devices (HID)
+# USB Input Devices
#
CONFIG_USB_HID=y
CONFIG_USB_HIDINPUT=y
# USB Multimedia devices
#
# CONFIG_USB_DABUSB is not set
+# CONFIG_USB_VICAM is not set
+# CONFIG_USB_DSBR is not set
+# CONFIG_USB_IBMCAM is not set
+# CONFIG_USB_KONICAWC is not set
+# CONFIG_USB_OV511 is not set
+# CONFIG_USB_SE401 is not set
+# CONFIG_USB_SN9C102 is not set
+# CONFIG_USB_STV680 is not set
#
-# Video4Linux support is needed for USB Multimedia device support
-#
-
-#
-# USB Network adaptors
+# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETKIT is not set
# CONFIG_USB_PHIDGETSERVO is not set
+#
+# USB ATM/DSL drivers
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# File systems
#
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVPTS_FS_XATTR is not set
CONFIG_TMPFS=y
+CONFIG_TMPFS_XATTR=y
+CONFIG_TMPFS_SECURITY=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
#
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
+# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
CONFIG_IA64_GRANULE_16MB=y
# CONFIG_IA64_GRANULE_64MB is not set
#
# Security options
#
+# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
# CONFIG_CRYPTO_SHA1 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
-# CONFIG_CRYPTO_WHIRLPOOL is not set
+# CONFIG_CRYPTO_WP512 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_KHAZAD is not set
+# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_DEFLATE is not set
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_CRC32C is not set
# CONFIG_CRYPTO_TEST is not set
+
+#
+# Hardware crypto devices
+#
#
obj-y := sba_iommu.o
+obj-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += hwsw_iommu.o
+obj-$(CONFIG_IA64_GENERIC) += hwsw_iommu.o
static int
simeth_device_event(struct notifier_block *this,unsigned long event, void *ptr)
{
- struct net_device *dev = (struct net_device *)ptr;
+ struct net_device *dev = ptr;
struct simeth_local *local;
struct in_device *in_dev;
struct in_ifaddr **ifap = NULL;
static int
simeth_tx(struct sk_buff *skb, struct net_device *dev)
{
- struct simeth_local *local = (struct simeth_local *)dev->priv;
+ struct simeth_local *local = dev->priv;
#if 0
/* ensure we have at least ETH_ZLEN bytes (min frame size) */
int len;
int rcv_count = SIMETH_RECV_MAX;
- local = (struct simeth_local *)dev->priv;
+ local = dev->priv;
/*
* the loop concept has been borrowed from other drivers
* looks to me like it's a throttling thing to avoid pushing too many
static struct net_device_stats *
simeth_get_stats(struct net_device *dev)
{
- struct simeth_local *local = (struct simeth_local *) dev->priv;
+ struct simeth_local *local = dev->priv;
return &local->stats;
}
# Copyright (C) Alex Williamson (alex_williamson@hp.com)
#
-obj-$(CONFIG_IA64_GENERIC) += hpzx1_machvec.o
+obj-$(CONFIG_IA64_GENERIC) += hpzx1_machvec.o hpzx1_swiotlb_machvec.o
struct ia32_user_i387_struct *fpstate = (void*)fpu;
mm_segment_t old_fs;
- if (!tsk->used_math)
+ if (!tsk_used_math(tsk))
return 0;
old_fs = get_fs();
struct ia32_user_fxsr_struct *fpxstate = (void*) xfpu;
mm_segment_t old_fs;
- if (!tsk->used_math)
+ if (!tsk_used_math(tsk))
return 0;
old_fs = get_fs();
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/topology.h>
+#include <linux/nodemask.h>
#define SD_NODES_PER_DOMAIN 6
* Set up domains. Isolated domains just stay on the dummy domain.
*/
for_each_cpu_mask(i, cpu_default_map) {
- int node = cpu_to_node(i);
int group;
struct sched_domain *sd = NULL, *p;
- cpumask_t nodemask = node_to_cpumask(node);
+ cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
cpus_and(nodemask, nodemask, cpu_default_map);
sd = &per_cpu(node_domains, i);
*sd = SD_NODE_INIT;
- sd->span = sched_domain_node_span(node);
+ sd->span = sched_domain_node_span(cpu_to_node(i));
sd->parent = p;
cpus_and(sd->span, sd->span, cpu_default_map);
#endif
#include <linux/config.h>
/*
- * Preserved registers that are shared between code in ivt.S and entry.S. Be
- * careful not to step on these!
+ * Preserved registers that are shared between code in ivt.S and
+ * entry.S. Be careful not to step on these!
*/
-#define pLvSys p1 /* set 1 if leave from syscall; otherwise, set 0 */
-#define pKStk p2 /* will leave_{kernel,syscall} return to kernel-stacks? */
-#define pUStk p3 /* will leave_{kernel,syscall} return to user-stacks? */
-#define pSys p4 /* are we processing a (synchronous) system call? */
-#define pNonSys p5 /* complement of pSys */
+#define PRED_LEAVE_SYSCALL 1 /* TRUE iff leave from syscall */
+#define PRED_KERNEL_STACK 2 /* returning to kernel-stacks? */
+#define PRED_USER_STACK 3 /* returning to user-stacks? */
+#define PRED_SYSCALL 4 /* inside a system call? */
+#define PRED_NON_SYSCALL 5 /* complement of PRED_SYSCALL */
+
+#ifdef __ASSEMBLY__
+# define PASTE2(x,y) x##y
+# define PASTE(x,y) PASTE2(x,y)
+
+# define pLvSys PASTE(p,PRED_LEAVE_SYSCALL)
+# define pKStk PASTE(p,PRED_KERNEL_STACK)
+# define pUStk PASTE(p,PRED_USER_STACK)
+# define pSys PASTE(p,PRED_SYSCALL)
+# define pNonSys PASTE(p,PRED_NON_SYSCALL)
+#endif
#define PT(f) (IA64_PT_REGS_##f##_OFFSET)
#define SW(f) (IA64_SWITCH_STACK_##f##_OFFSET)
LOAD_FSYSCALL_TABLE(r14)
mov r16=IA64_KR(CURRENT) // 12 cycle read latency
+ tnat.nz p10,p9=r15
mov r19=NR_syscalls-1
;;
shladd r18=r17,3,r14
#endif
mov r10=-1
- mov r8=ENOSYS
+(p10) mov r8=EINVAL
+(p9) mov r8=ENOSYS
FSYS_RETURN
END(__kernel_syscall_via_epc)
// 2. Restore current thread pointer to kr6
// 3. Move stack ptr 16 bytes to conform to C calling convention
//
+// 04/11/12 Russ Anderson <rja@sgi.com>
+// Added per cpu MCA/INIT stack save areas.
+//
#include <linux/config.h>
#include <linux/threads.h>
ld8 tmp=[sal_to_os_handoff];; \
st8 [os_to_sal_handoff]=tmp;;
+#define GET_IA64_MCA_DATA(reg) \
+ GET_THIS_PADDR(reg, ia64_mca_data) \
+ ;; \
+ ld8 reg=[reg]
+
.global ia64_os_mca_dispatch
.global ia64_os_mca_dispatch_end
.global ia64_sal_to_os_handoff_state
.global ia64_os_to_sal_handoff_state
- .global ia64_mca_proc_state_dump
- .global ia64_mca_stack
- .global ia64_mca_stackframe
- .global ia64_mca_bspstore
- .global ia64_init_stack
.text
.align 16
// The following code purges TC and TR entries. Then reload all TC entries.
// Purge percpu data TC entries.
begin_tlb_purge_and_reload:
- mov r16=cr.lid
- LOAD_PHYSICAL(p0,r17,ia64_mca_tlb_list) // Physical address of ia64_mca_tlb_list
- mov r19=0
- mov r20=NR_CPUS
- ;;
-1: cmp.eq p6,p7=r19,r20
-(p6) br.spnt.few err
- ld8 r18=[r17],IA64_MCA_TLB_INFO_SIZE
- ;;
- add r19=1,r19
- cmp.eq p6,p7=r18,r16
-(p7) br.sptk.few 1b
- ;;
- adds r17=-IA64_MCA_TLB_INFO_SIZE,r17
- ;;
- mov r23=r17 // save current ia64_mca_percpu_info addr pointer.
- adds r17=16,r17
+
+#define O(member) IA64_CPUINFO_##member##_OFFSET
+
+ GET_THIS_PADDR(r2, cpu_info) // load phys addr of cpu_info into r2
;;
- ld8 r18=[r17],8 // r18=ptce_base
- ;;
- ld4 r19=[r17],4 // r19=ptce_count[0]
+ addl r17=O(PTCE_STRIDE),r2
+ addl r2=O(PTCE_BASE),r2
;;
- ld4 r20=[r17],4 // r20=ptce_count[1]
+ ld8 r18=[r2],(O(PTCE_COUNT)-O(PTCE_BASE));; // r18=ptce_base
+ ld4 r19=[r2],4 // r19=ptce_count[0]
+ ld4 r21=[r17],4 // r21=ptce_stride[0]
;;
- ld4 r21=[r17],4 // r21=ptce_stride[0]
+ ld4 r20=[r2] // r20=ptce_count[1]
+ ld4 r22=[r17] // r22=ptce_stride[1]
mov r24=0
;;
- ld4 r22=[r17],4 // r22=ptce_stride[1]
adds r20=-1,r20
;;
+#undef O
+
2:
cmp.ltu p6,p7=r24,r19
(p7) br.cond.dpnt.few 4f
srlz.d
;;
// 3. Purge ITR for PAL code.
- adds r17=48,r23
+ GET_THIS_PADDR(r2, ia64_mca_pal_base)
;;
- ld8 r16=[r17]
+ ld8 r16=[r2]
mov r18=IA64_GRANULE_SHIFT<<2
;;
ptr.i r16,r18
srlz.d
;;
// 2. Reload DTR register for PERCPU data.
- adds r17=8,r23
+ GET_THIS_PADDR(r2, ia64_mca_per_cpu_pte)
+ ;;
movl r16=PERCPU_ADDR // vaddr
movl r18=PERCPU_PAGE_SHIFT<<2
;;
mov cr.itir=r18
mov cr.ifa=r16
;;
- ld8 r18=[r17] // pte
+ ld8 r18=[r2] // load per-CPU PTE
mov r16=IA64_TR_PERCPU_DATA;
;;
itr.d dtr[r16]=r18
srlz.d
;;
// 3. Reload ITR for PAL code.
- adds r17=40,r23
+ GET_THIS_PADDR(r2, ia64_mca_pal_pte)
+ ;;
+ ld8 r18=[r2] // load PAL PTE
;;
- ld8 r18=[r17],8 // pte
+ GET_THIS_PADDR(r2, ia64_mca_pal_base)
;;
- ld8 r16=[r17] // vaddr
+ ld8 r16=[r2] // load PAL vaddr
mov r19=IA64_GRANULE_SHIFT<<2
;;
mov cr.itir=r19
done_tlb_purge_and_reload:
// Setup new stack frame for OS_MCA handling
- movl r2=ia64_mca_bspstore;; // local bspstore area location in r2
- DATA_VA_TO_PA(r2);;
- movl r3=ia64_mca_stackframe;; // save stack frame to memory in r3
- DATA_VA_TO_PA(r3);;
+ GET_IA64_MCA_DATA(r2)
+ ;;
+ add r3 = IA64_MCA_CPU_STACKFRAME_OFFSET, r2
+ add r2 = IA64_MCA_CPU_RBSTORE_OFFSET, r2
+ ;;
rse_switch_context(r6,r3,r2);; // RSC management in this new context
- movl r12=ia64_mca_stack
- mov r2=8*1024;; // stack size must be same as C array
- add r12=r2,r12;; // stack base @ bottom of array
- adds r12=-16,r12;; // allow 16 bytes of scratch
- // (C calling convention)
- DATA_VA_TO_PA(r12);;
+
+ GET_IA64_MCA_DATA(r2)
+ ;;
+ add r2 = IA64_MCA_CPU_STACK_OFFSET+IA64_MCA_STACK_SIZE-16, r2
+ ;;
+ mov r12=r2 // establish new stack-pointer
// Enter virtual mode from physical mode
VIRTUAL_MODE_ENTER(r2, r3, ia64_os_mca_virtual_begin, r4)
ia64_os_mca_virtual_end:
// restore the original stack frame here
- movl r2=ia64_mca_stackframe // restore stack frame from memory at r2
+ GET_IA64_MCA_DATA(r2)
+ ;;
+ add r2 = IA64_MCA_CPU_STACKFRAME_OFFSET, r2
;;
- DATA_VA_TO_PA(r2)
movl r4=IA64_PSR_MC
;;
rse_return_context(r4,r3,r2) // switch from interrupt context for RSE
ia64_os_mca_proc_state_dump:
// Save bank 1 GRs 16-31 which will be used by c-language code when we switch
// to virtual addressing mode.
- LOAD_PHYSICAL(p0,r2,ia64_mca_proc_state_dump)// convert OS state dump area to physical address
-
+ GET_IA64_MCA_DATA(r2)
+ ;;
+ add r2 = IA64_MCA_CPU_PROC_STATE_DUMP_OFFSET, r2
+ ;;
// save ar.NaT
mov r5=ar.unat // ar.unat
ia64_os_mca_proc_state_restore:
// Restore bank1 GR16-31
- movl r2=ia64_mca_proc_state_dump // Convert virtual address
- ;; // of OS state dump area
- DATA_VA_TO_PA(r2) // to physical address
+ GET_IA64_MCA_DATA(r2)
+ ;;
+ add r2 = IA64_MCA_CPU_PROC_STATE_DUMP_OFFSET, r2
restore_GRs: // restore bank-1 GRs 16-31
bsw.1;;
/* from mca_drv_asm.S */
extern void *mca_handler_bhhook(void);
-static spinlock_t mca_bh_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mca_bh_lock);
typedef enum {
MCA_IS_LOCAL = 0,
MCA_IS_GLOBAL = 1
} mca_type_t;
-#define MAX_PAGE_ISOLATE 32
+#define MAX_PAGE_ISOLATE 1024
static struct page *page_isolate[MAX_PAGE_ISOLATE];
static int num_page_isolate = 0;
* go virtual and don't want to destroy the iip or ipsr.
*/
#define MINSTATE_START_SAVE_MIN_PHYS \
-(pKStk) movl sp=ia64_init_stack+IA64_STK_OFFSET-IA64_PT_REGS_SIZE; \
+(pKStk) mov r3=IA64_KR(PER_CPU_DATA);; \
+(pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;; \
+(pKStk) ld8 r3 = [r3];; \
+(pKStk) addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;; \
+(pKStk) addl sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \
(pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
(pUStk) addl r22=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
;; \
(pUStk) mov r24=ar.rnat; \
-(pKStk) tpa r1=sp; /* compute physical addr of sp */ \
(pUStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
(pUStk) mov r23=ar.bspstore; /* save ar.bspstore */ \
(pUStk) dep r22=-1,r22,61,3; /* compute kernel virtual addr of RBS */ \
*
* Jan 28 2004 kaos@sgi.com
* Periodically check for outstanding MCA or INIT records.
+ *
+ * Dec 5 2004 kaos@sgi.com
+ * Standardize which records are cleared automatically.
*/
#include <linux/types.h>
}
}
-/* Check for outstanding MCA/INIT records every 5 minutes (arbitrary) */
-#define SALINFO_TIMER_DELAY (5*60*HZ)
+/* Check for outstanding MCA/INIT records every minute (arbitrary) */
+#define SALINFO_TIMER_DELAY (60*HZ)
static struct timer_list salinfo_timer;
static void
salinfo_log_read_cpu(void *context)
{
struct salinfo_data *data = context;
+ sal_log_record_header_t *rh;
data->log_size = ia64_sal_get_state_info(data->type, (u64 *) data->log_buffer);
- if (data->type == SAL_INFO_TYPE_CPE || data->type == SAL_INFO_TYPE_CMC)
+ rh = (sal_log_record_header_t *)(data->log_buffer);
+ /* Clear corrected errors as they are read from SAL */
+ if (rh->severity == sal_log_severity_corrected)
ia64_sal_clear_state_info(data->type);
}
static int
salinfo_log_clear(struct salinfo_data *data, int cpu)
{
+ sal_log_record_header_t *rh;
data->state = STATE_NO_DATA;
if (!test_bit(cpu, &data->cpu_event))
return 0;
data->saved_num = 0;
spin_unlock_irqrestore(&data_saved_lock, flags);
}
- /* ia64_mca_log_sal_error_record or salinfo_log_read_cpu already cleared
- * CPE and CMC errors
- */
- if (data->type != SAL_INFO_TYPE_CPE && data->type != SAL_INFO_TYPE_CMC)
+ rh = (sal_log_record_header_t *)(data->log_buffer);
+ /* Corrected errors have already been cleared from SAL */
+ if (rh->severity != sal_log_severity_corrected)
call_on_cpu(cpu, salinfo_log_clear_cpu, data);
/* clearing a record may make a new record visible */
salinfo_log_new_read(cpu, data);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * This file contains NUMA specific variables and functions which can
+ * be split away from DISCONTIGMEM and are used on NUMA machines with
+ * contiguous memory.
+ * 2002/08/07 Erich Focht <efocht@ess.nec.de>
+ * Populate cpu entries in sysfs for non-numa systems as well
+ * Intel Corporation - Ashok Raj
+ */
+
+#include <linux/config.h>
+#include <linux/cpu.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/node.h>
+#include <linux/init.h>
+#include <linux/bootmem.h>
+#include <linux/nodemask.h>
+#include <asm/mmzone.h>
+#include <asm/numa.h>
+#include <asm/cpu.h>
+
+#ifdef CONFIG_NUMA
+static struct node *sysfs_nodes;
+#endif
+static struct ia64_cpu *sysfs_cpus;
+
+int arch_register_cpu(int num)
+{
+ struct node *parent = NULL;
+
+#ifdef CONFIG_NUMA
+ parent = &sysfs_nodes[cpu_to_node(num)];
+#endif /* CONFIG_NUMA */
+
+ return register_cpu(&sysfs_cpus[num].cpu, num, parent);
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+
+void arch_unregister_cpu(int num)
+{
+ struct node *parent = NULL;
+
+#ifdef CONFIG_NUMA
+ int node = cpu_to_node(num);
+ parent = &sysfs_nodes[node];
+#endif /* CONFIG_NUMA */
+
+ return unregister_cpu(&sysfs_cpus[num].cpu, parent);
+}
+EXPORT_SYMBOL(arch_register_cpu);
+EXPORT_SYMBOL(arch_unregister_cpu);
+#endif /*CONFIG_HOTPLUG_CPU*/
+
+
+static int __init topology_init(void)
+{
+ int i, err = 0;
+
+#ifdef CONFIG_NUMA
+ sysfs_nodes = kmalloc(sizeof(struct node) * MAX_NUMNODES, GFP_KERNEL);
+ if (!sysfs_nodes) {
+ err = -ENOMEM;
+ goto out;
+ }
+ memset(sysfs_nodes, 0, sizeof(struct node) * MAX_NUMNODES);
+
+ /* MCD - Do we want to register all ONLINE nodes, or all POSSIBLE nodes? */
+ for_each_online_node(i)
+ if ((err = register_node(&sysfs_nodes[i], i, 0)))
+ goto out;
+#endif
+
+ sysfs_cpus = kmalloc(sizeof(struct ia64_cpu) * NR_CPUS, GFP_KERNEL);
+ if (!sysfs_cpus) {
+ err = -ENOMEM;
+ goto out;
+ }
+ memset(sysfs_cpus, 0, sizeof(struct ia64_cpu) * NR_CPUS);
+
+ for_each_present_cpu(i)
+ if((err = arch_register_cpu(i)))
+ goto out;
+out:
+ return err;
+}
+
+__initcall(topology_init);
* Find next zero bit in a bitmap reasonably efficiently..
*/
-int __find_next_zero_bit (void *addr, unsigned long size, unsigned long offset)
+int __find_next_zero_bit (const void *addr, unsigned long size, unsigned long offset)
{
unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
unsigned long result = offset & ~63UL;
extern unsigned long do_csum(const unsigned char *, long);
static unsigned int
-do_csum_partial_copy_from_user (const char __user *src, char *dst, int len,
- unsigned int psum, int *errp)
+do_csum_partial_copy_from_user (const unsigned char __user *src, unsigned char *dst,
+ int len, unsigned int psum, int *errp)
{
unsigned long result;
}
unsigned int
-csum_partial_copy_from_user (const char __user *src, char *dst, int len,
- unsigned int sum, int *errp)
+csum_partial_copy_from_user (const unsigned char __user *src, unsigned char *dst,
+ int len, unsigned int sum, int *errp)
{
if (!access_ok(VERIFY_READ, src, len)) {
*errp = -EFAULT;
}
unsigned int
-csum_partial_copy_nocheck(const char __user *src, char *dst, int len, unsigned int sum)
+csum_partial_copy_nocheck(const unsigned char __user *src, unsigned char *dst,
+ int len, unsigned int sum)
{
return do_csum_partial_copy_from_user(src, dst, len, sum, NULL);
}
* Copy data from IO memory space to "real" memory space.
* This needs to be optimized.
*/
-void
-__ia64_memcpy_fromio (void *to, volatile void __iomem *from, long count)
+void memcpy_fromio(void *to, const volatile void __iomem *from, long count)
{
char *dst = to;
*dst++ = readb(from++);
}
}
-EXPORT_SYMBOL(__ia64_memcpy_fromio);
+EXPORT_SYMBOL(memcpy_fromio);
/*
* Copy data from "real" memory space to IO memory space.
* This needs to be optimized.
*/
-void
-__ia64_memcpy_toio (volatile void __iomem *to, void *from, long count)
+void memcpy_toio(volatile void __iomem *to, const void *from, long count)
{
- char *src = from;
+ const char *src = from;
while (count) {
count--;
writeb(*src++, to++);
}
}
-EXPORT_SYMBOL(__ia64_memcpy_toio);
+EXPORT_SYMBOL(memcpy_toio);
/*
* "memset" on IO memory space.
* This needs to be optimized.
*/
-void
-__ia64_memset_c_io (volatile void __iomem *dst, unsigned long c, long count)
+void memset_io(volatile void __iomem *dst, int c, long count)
{
unsigned char ch = (char)(c & 0xff);
dst++;
}
}
-EXPORT_SYMBOL(__ia64_memset_c_io);
+EXPORT_SYMBOL(memset_io);
#ifdef CONFIG_IA64_GENERIC
*/
#include <asm/asmmacro.h>
-GLOBAL_ENTRY(bcopy)
- .regstk 3,0,0,0
- mov r8=in0
- mov in0=in1
- ;;
- mov in1=r8
- // gas doesn't handle control flow across procedures, so it doesn't
- // realize that a stop bit is needed before the "alloc" instruction
- // below
-{
- nop.m 0
- nop.f 0
- nop.i 0
-} ;;
-END(bcopy)
- // FALL THROUGH
GLOBAL_ENTRY(memcpy)
# define MEM_LAT 21 /* latency to memory */
#define EK(y...) EX(y)
-GLOBAL_ENTRY(bcopy)
- .regstk 3,0,0,0
- mov r8=in0
- mov in0=in1
- ;;
- mov in1=r8
- ;;
-END(bcopy)
-
/* McKinley specific optimization */
#define retval r8
#define IO_TLB_SEGSIZE 128
/*
- * log of the size of each IO TLB slab. The number of slabs is command line controllable.
+ * log of the size of each IO TLB slab. The number of slabs is command line
+ * controllable.
*/
#define IO_TLB_SHIFT 11
int swiotlb_force;
/*
- * Used to do a quick range check in swiotlb_unmap_single and swiotlb_sync_single_*, to see
- * if the memory was in fact allocated by this API.
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
*/
static char *io_tlb_start, *io_tlb_end;
/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and io_tlb_end.
- * This is command line adjustable via setup_io_tlb_npages.
- * Default to 64MB.
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
+ * io_tlb_end. This is command line adjustable via setup_io_tlb_npages.
*/
-static unsigned long io_tlb_nslabs = 32768;
+static unsigned long io_tlb_nslabs;
-/*
+/*
* When the IOMMU overflows we return a fallback buffer. This sets the size.
*/
static unsigned long io_tlb_overflow = 32*1024;
-void *io_tlb_overflow_buffer;
+void *io_tlb_overflow_buffer;
/*
- * This is a free list describing the number of free entries available from each index
+ * This is a free list describing the number of free entries available from
+ * each index
*/
static unsigned int *io_tlb_list;
static unsigned int io_tlb_index;
/*
- * We need to save away the original address corresponding to a mapped entry for the sync
- * operations.
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
*/
static unsigned char **io_tlb_orig_addr;
/*
* Protect the above data structures in the map and unmap calls
*/
-static spinlock_t io_tlb_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(io_tlb_lock);
static int __init
-setup_io_tlb_npages (char *str)
+setup_io_tlb_npages(char *str)
{
- if (isdigit(*str)) {
- io_tlb_nslabs = simple_strtoul(str, &str, 0) << (PAGE_SHIFT - IO_TLB_SHIFT);
+ if (isdigit(*str)) {
+ io_tlb_nslabs = simple_strtoul(str, &str, 0) <<
+ (PAGE_SHIFT - IO_TLB_SHIFT);
/* avoid tail segment of size < IO_TLB_SEGSIZE */
io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
}
/* make io_tlb_overflow tunable too? */
/*
- * Statically reserve bounce buffer space and initialize bounce buffer data structures for
- * the software IO TLB used to implement the PCI DMA API.
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
*/
void
-swiotlb_init (void)
+swiotlb_init_with_default_size (size_t default_size)
{
unsigned long i;
+ if (!io_tlb_nslabs) {
+ io_tlb_nslabs = (default_size >> PAGE_SHIFT);
+ io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+ }
+
/*
* Get IO TLB memory from the low pages
*/
- io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs * (1 << IO_TLB_SHIFT));
+ io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+ (1 << IO_TLB_SHIFT));
if (!io_tlb_start)
panic("Cannot allocate SWIOTLB buffer");
io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
io_tlb_index = 0;
io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
- /*
- * Get the overflow emergency buffer
+
+ /*
+ * Get the overflow emergency buffer
*/
- io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+ io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
}
-static inline int address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
- dma_addr_t mask = 0xffffffff;
- if (hwdev && hwdev->dma_mask)
- mask = *hwdev->dma_mask;
- return (addr & ~mask) != 0;
-}
+void
+swiotlb_init (void)
+{
+ swiotlb_init_with_default_size(64 * (1<<20)); /* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+ dma_addr_t mask = 0xffffffff;
+ /* If the device has a mask, use it, otherwise default to 32 bits */
+ if (hwdev && hwdev->dma_mask)
+ mask = *hwdev->dma_mask;
+ return (addr & ~mask) != 0;
+}
/*
* Allocates bounce buffer and returns its kernel virtual address.
*/
static void *
-map_single (struct device *hwdev, char *buffer, size_t size, int dir)
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
{
unsigned long flags;
char *dma_addr;
int i;
/*
- * For mappings greater than a page size, we limit the stride (and hence alignment)
- * to a page size.
+ * For mappings greater than a page, we limit the stride (and
+ * hence alignment) to a page size.
*/
nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
- if (size > (1 << PAGE_SHIFT))
+ if (size > PAGE_SIZE)
stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
else
stride = 1;
BUG();
/*
- * Find suitable number of IO TLB entries size that will fit this request and
- * allocate a buffer from that IO TLB pool.
+	 * Find a suitable number of IO TLB entries that will fit this
+ * request and allocate a buffer from that IO TLB pool.
*/
spin_lock_irqsave(&io_tlb_lock, flags);
{
do {
/*
- * If we find a slot that indicates we have 'nslots' number of
- * contiguous buffers, we allocate the buffers from that slot and
- * mark the entries as '0' indicating unavailable.
+ * If we find a slot that indicates we have 'nslots'
+ * number of contiguous buffers, we allocate the
+ * buffers from that slot and mark the entries as '0'
+ * indicating unavailable.
*/
if (io_tlb_list[index] >= nslots) {
int count = 0;
for (i = index; i < (int) (index + nslots); i++)
io_tlb_list[i] = 0;
- for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1)
- && io_tlb_list[i]; i--)
+ for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
io_tlb_list[i] = ++count;
dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
/*
- * Update the indices to avoid searching in the next round.
+ * Update the indices to avoid searching in
+ * the next round.
*/
io_tlb_index = ((index + nslots) < io_tlb_nslabs
? (index + nslots) : 0);
spin_unlock_irqrestore(&io_tlb_lock, flags);
/*
- * Save away the mapping from the original address to the DMA address. This is
- * needed when we sync the memory. Then we sync the buffer if needed.
+ * Save away the mapping from the original address to the DMA address.
+ * This is needed when we sync the memory. Then we sync the buffer if
+ * needed.
*/
io_tlb_orig_addr[index] = buffer;
if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
* dma_addr is the kernel virtual address of the bounce buffer to unmap.
*/
static void
-unmap_single (struct device *hwdev, char *dma_addr, size_t size, int dir)
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
{
unsigned long flags;
- int i, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+ int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
char *buffer = io_tlb_orig_addr[index];
/*
* First, sync the memory before unmapping the entry
*/
- if ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))
+ if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
/*
- * bounce... copy the data back into the original buffer * and delete the
- * bounce buffer.
+		 * bounce... copy the data back into the original buffer and
+ * delete the bounce buffer.
*/
memcpy(buffer, dma_addr, size);
/*
- * Return the buffer to the free list by setting the corresponding entries to
- * indicate the number of contigous entries available. While returning the
- * entries to the free list, we merge the entries with slots below and above the
- * pool being returned.
+ * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contiguous entries available.
+ * While returning the entries to the free list, we merge the entries
+ * with slots below and above the pool being returned.
*/
spin_lock_irqsave(&io_tlb_lock, flags);
{
- int count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
- io_tlb_list[index + nslots] : 0);
+ count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+ io_tlb_list[index + nslots] : 0);
/*
- * Step 1: return the slots to the free list, merging the slots with
- * superceeding slots
+ * Step 1: return the slots to the free list, merging the
+		 * slots with succeeding slots
*/
for (i = index + nslots - 1; i >= index; i--)
io_tlb_list[i] = ++count;
/*
- * Step 2: merge the returned slots with the preceding slots, if
- * available (non zero)
+ * Step 2: merge the returned slots with the preceding slots,
+ * if available (non zero)
*/
- for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) &&
- io_tlb_list[i]; i--)
+ for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
io_tlb_list[i] = ++count;
}
spin_unlock_irqrestore(&io_tlb_lock, flags);
}
static void
-sync_single (struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
{
int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
char *buffer = io_tlb_orig_addr[index];
}
void *
-swiotlb_alloc_coherent (struct device *hwdev, size_t size, dma_addr_t *dma_handle, int flags)
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ dma_addr_t *dma_handle, int flags)
{
unsigned long dev_addr;
void *ret;
+ int order = get_order(size);
- /* XXX fix me: the DMA API should pass us an explicit DMA mask instead: */
+ /*
+ * XXX fix me: the DMA API should pass us an explicit DMA mask
+ * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+ * bit range instead of a 16MB one).
+ */
flags |= GFP_DMA;
- ret = (void *)__get_free_pages(flags, get_order(size));
+ ret = (void *)__get_free_pages(flags, order);
+ if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+ /*
+ * The allocated memory isn't reachable by the device.
+ * Fall back on swiotlb_map_single().
+ */
+ free_pages((unsigned long) ret, order);
+ ret = NULL;
+ }
if (!ret) {
- /* DMA_FROM_DEVICE is to avoid the memcpy in map_single */
+ /*
+ * We are either out of memory or the device can't DMA
+ * to GFP_DMA memory; fall back on
+ * swiotlb_map_single(), which will grab memory from
+ * the lowest available address range.
+ */
dma_addr_t handle;
handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
if (dma_mapping_error(handle))
memset(ret, 0, size);
dev_addr = virt_to_phys(ret);
- if (address_needs_mapping(hwdev,dev_addr))
- panic("swiotlb_alloc_consistent: allocated memory is out of range for device");
+
+ /* Confirm address can be DMA'd by device */
+ if (address_needs_mapping(hwdev, dev_addr)) {
+ printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+ (unsigned long long)*hwdev->dma_mask, dev_addr);
+ panic("swiotlb_alloc_coherent: allocated memory is out of "
+ "range for device");
+ }
*dma_handle = dev_addr;
return ret;
}
void
-swiotlb_free_coherent (struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ dma_addr_t dma_handle)
{
if (!(vaddr >= (void *)io_tlb_start
&& vaddr < (void *)io_tlb_end))
swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
}
-static void swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
{
- /*
+ /*
* Ran out of IOMMU space for this operation. This is very bad.
* Unfortunately the drivers cannot handle this operation properly
* unless they check for pci_dma_mapping_error (most don't).
* When the mapping is small enough return a static buffer to limit
- * the damage, or panic when the transfer is too big.
- */
-
- printk(KERN_ERR
- "PCI-DMA: Out of SW-IOMMU space for %lu bytes at device %s\n",
- size, dev ? dev->bus_id : "?");
+ * the damage, or panic when the transfer is too big.
+ */
+ printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+ "device %s\n", size, dev ? dev->bus_id : "?");
if (size > io_tlb_overflow && do_panic) {
if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
panic("PCI-DMA: Memory would be corrupted\n");
- if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
- panic("PCI-DMA: Random memory would be DMAed\n");
- }
-}
+ if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+ panic("PCI-DMA: Random memory would be DMAed\n");
+ }
+}
/*
- * Map a single buffer of the indicated size for DMA in streaming mode. The PCI address
- * to use is returned.
+ * Map a single buffer of the indicated size for DMA in streaming mode. The
+ * PCI address to use is returned.
*
- * Once the device is given the dma address, the device owns this memory until either
- * swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
*/
dma_addr_t
-swiotlb_map_single (struct device *hwdev, void *ptr, size_t size, int dir)
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
{
unsigned long dev_addr = virt_to_phys(ptr);
- void *map;
+ void *map;
if (dir == DMA_NONE)
BUG();
/*
- * Check if the PCI device can DMA to ptr... if so, just return ptr
+ * If the pointer passed in happens to be in the device's DMA window,
+ * we can safely return the device addr and not worry about bounce
+ * buffering it.
*/
- if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
- /*
- * Device is bit capable of DMA'ing to the buffer... just return the PCI
- * address of ptr
- */
+ if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
return dev_addr;
/*
- * get a bounce buffer:
+ * Oh well, have to allocate and map a bounce buffer.
*/
map = map_single(hwdev, ptr, size, dir);
- if (!map) {
- swiotlb_full(hwdev, size, dir, 1);
- map = io_tlb_overflow_buffer;
+ if (!map) {
+ swiotlb_full(hwdev, size, dir, 1);
+ map = io_tlb_overflow_buffer;
}
dev_addr = virt_to_phys(map);
/*
- * Ensure that the address returned is DMA'ble:
+ * Ensure that the address returned is DMA'ble
*/
if (address_needs_mapping(hwdev, dev_addr))
panic("map_single: bounce buffer is not DMA'ble");
* flush them when they get mapped into an executable vm-area.
*/
static void
-mark_clean (void *addr, size_t size)
+mark_clean(void *addr, size_t size)
{
unsigned long pg_addr, end;
}
/*
- * Unmap a single streaming mode DMA translation. The dma_addr and size must match what
- * was provided for in a previous swiotlb_map_single call. All other usages are
- * undefined.
+ * Unmap a single streaming mode DMA translation. The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call. All
+ * other usages are undefined.
*
- * After this call, reads by the cpu to the buffer are guaranteed to see whatever the
- * device wrote there.
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
*/
void
-swiotlb_unmap_single (struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir)
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+ int dir)
{
char *dma_addr = phys_to_virt(dev_addr);
}
/*
- * Make physical memory consistent for a single streaming mode DMA translation after a
- * transfer.
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
*
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer using the cpu,
- * yet do not wish to teardown the PCI dma mapping, you must call this function before
- * doing so. At the next point you give the PCI dma address back to the card, you must
- * first perform a swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so. At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
*/
void
-swiotlb_sync_single_for_cpu (struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir)
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+ size_t size, int dir)
{
char *dma_addr = phys_to_virt(dev_addr);
}
void
-swiotlb_sync_single_for_device (struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir)
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+ size_t size, int dir)
{
char *dma_addr = phys_to_virt(dev_addr);
}
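The ownership rules spelled out in the swiotlb_sync_single_* comments
translate into the following hedged sketch of a driver that keeps a buffer
mapped across several transfers; all names are illustrative only:

#include <linux/dma-mapping.h>

/* Hypothetical driver helper: buffer stays mapped across device passes. */
static void example_inspect_buffer(struct device *dev, dma_addr_t handle,
                                   void *buf, size_t len)
{
        /* Device finished writing; hand the buffer back to the CPU. */
        dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

        /* ... read buf on the CPU here ... */

        /* Return ownership to the device before the next transfer. */
        dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
}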
/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA. This is the
- * scatter-gather version of the above swiotlb_map_single interface. Here the scatter
- * gather list elements are each tagged with the appropriate dma address and length. They
- * are obtained via sg_dma_{address,length}(SG).
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface. Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length. They are obtained via
+ * sg_dma_{address,length}(SG).
*
* NOTE: An implementation may be able to use a smaller number of
* DMA address/length pairs than there are SG table elements.
* The routine returns the number of addr/length pairs actually
* used, at most nents.
*
- * Device ownership issues as mentioned above for swiotlb_map_single are the same here.
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
*/
int
-swiotlb_map_sg (struct device *hwdev, struct scatterlist *sg, int nelems, int dir)
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+ int dir)
{
void *addr;
unsigned long dev_addr;
if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
if (!sg->dma_address) {
- /* Don't panic here, we expect pci_map_sg users
+ /* Don't panic here, we expect map_sg users
to do proper error handling. */
- swiotlb_full(hwdev, sg->length, dir, 0);
+ swiotlb_full(hwdev, sg->length, dir, 0);
swiotlb_unmap_sg(hwdev, sg - i, i, dir);
sg[0].dma_length = 0;
return 0;
}
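Because dma_map_sg() may use fewer DMA address/length pairs than there are SG
entries, drivers must iterate over the returned count rather than the original
nents, and must treat a zero return as failure, as the comment above requires.
A minimal sketch, assuming the scatterlist is already initialised and with
program_descriptor() standing in for a hypothetical device-specific helper:

#include <linux/errno.h>
#include <linux/dma-mapping.h>
#include <asm/scatterlist.h>

/* program_descriptor() is a hypothetical per-device helper. */
extern void program_descriptor(dma_addr_t addr, unsigned int len);

static int example_map_sg(struct device *dev, struct scatterlist *sg,
                          int nents)
{
        int i, count;

        count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
        if (count == 0)
                return -ENOMEM;         /* map_sg users must handle this */

        for (i = 0; i < count; i++, sg++)
                program_descriptor(sg_dma_address(sg), sg_dma_len(sg));

        /* Later, unmap with the original nents, not the returned count. */
        return 0;
}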
/*
- * Unmap a set of streaming mode DMA translations. Again, cpu read rules concerning calls
- * here are the same as for swiotlb_unmap_single() above.
+ * Unmap a set of streaming mode DMA translations. Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
*/
void
-swiotlb_unmap_sg (struct device *hwdev, struct scatterlist *sg, int nelems, int dir)
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+ int dir)
{
int i;
}
/*
- * Make physical memory consistent for a set of streaming mode DMA translations after a
- * transfer.
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
*
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules and
- * usage.
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
*/
void
-swiotlb_sync_sg_for_cpu (struct device *hwdev, struct scatterlist *sg, int nelems, int dir)
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+ int nelems, int dir)
{
int i;
for (i = 0; i < nelems; i++, sg++)
if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
- sync_single(hwdev, (void *) sg->dma_address, sg->dma_length, dir);
+ sync_single(hwdev, (void *) sg->dma_address,
+ sg->dma_length, dir);
}
void
-swiotlb_sync_sg_for_device (struct device *hwdev, struct scatterlist *sg, int nelems, int dir)
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+ int nelems, int dir)
{
int i;
for (i = 0; i < nelems; i++, sg++)
if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
- sync_single(hwdev, (void *) sg->dma_address, sg->dma_length, dir);
+ sync_single(hwdev, (void *) sg->dma_address,
+ sg->dma_length, dir);
}
int
-swiotlb_dma_mapping_error (dma_addr_t dma_addr)
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
{
return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
}
/*
- * Return whether the given PCI device DMA address mask can be supported properly. For
- * example, if your device can only drive the low 24-bits during PCI bus mastering, then
- * you would pass 0x00ffffff as the mask to this function.
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly. For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
*/
int
swiotlb_dma_supported (struct device *hwdev, u64 mask)
{
- return 1;
+ return (virt_to_phys (io_tlb_end) - 1) <= mask;
}
EXPORT_SYMBOL(swiotlb_init);
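The 24-bit example in the comment above corresponds to the generic
dma_set_mask() call sketched below (illustrative only, not from this patch).
Note that with the stricter swiotlb_dma_supported() check introduced here,
such a small mask may now be rejected if the bounce pool sits above the low
16MB of memory:

#include <linux/errno.h>
#include <linux/dma-mapping.h>

/* Hypothetical probe step for a device that can only address 24 bits. */
static int example_set_mask(struct device *dev)
{
        if (dma_set_mask(dev, 0x00ffffffULL))
                return -EIO;    /* platform cannot satisfy a 24-bit mask */
        return 0;
}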
#include <asm/mmzone.h>
#include <asm/numa.h>
-static struct node *sysfs_nodes;
-static struct cpu *sysfs_cpus;
/*
* The following structures are usually initialized by ACPI or
return (i < num_node_memblks) ? node_memblk[i].nid : (num_node_memblks ? -1 : 0);
}
-
-static int __init topology_init(void)
-{
- int i, err = 0;
-
- sysfs_nodes = kmalloc(sizeof(struct node) * numnodes, GFP_KERNEL);
- if (!sysfs_nodes) {
- err = -ENOMEM;
- goto out;
- }
- memset(sysfs_nodes, 0, sizeof(struct node) * numnodes);
-
- sysfs_cpus = kmalloc(sizeof(struct cpu) * NR_CPUS, GFP_KERNEL);
- if (!sysfs_cpus) {
- kfree(sysfs_nodes);
- err = -ENOMEM;
- goto out;
- }
- memset(sysfs_cpus, 0, sizeof(struct cpu) * NR_CPUS);
-
- for (i = 0; i < numnodes; i++)
- if ((err = register_node(&sysfs_nodes[i], i, NULL)))
- goto out;
-
- for (i = 0; i < NR_CPUS; i++)
- if (cpu_online(i))
- if((err = register_cpu(&sysfs_cpus[i], i,
- &sysfs_nodes[cpu_to_node(i)])))
- goto out;
- out:
- return err;
-}
-
-__initcall(topology_init);
oprofilefs.o oprofile_stats.o \
timer_int.o )
-oprofile-y := $(DRIVER_OBJS) init.o
+oprofile-y := $(DRIVER_OBJS) init.o backtrace.o
oprofile-$(CONFIG_PERFMON) += perfmon.o
#include <linux/init.h>
#include <linux/errno.h>
-extern int perfmon_init(struct oprofile_operations ** ops);
+extern int perfmon_init(struct oprofile_operations * ops);
extern void perfmon_exit(void);
+extern void ia64_backtrace(struct pt_regs * const regs, unsigned int depth);
-int __init oprofile_arch_init(struct oprofile_operations ** ops)
+int __init oprofile_arch_init(struct oprofile_operations * ops)
{
+ int ret = -ENODEV;
+
#ifdef CONFIG_PERFMON
- return perfmon_init(ops);
+ /* perfmon_init() can fail, but we have no way to report it */
+ ret = perfmon_init(ops);
#endif
- return -ENODEV;
+ ops->backtrace = ia64_backtrace;
+
+ return ret;
}
perfmon_handler(struct task_struct *task, void *buf, pfm_ovfl_arg_t *arg,
struct pt_regs *regs, unsigned long stamp)
{
- int cpu = smp_processor_id();
- unsigned long eip = instruction_pointer(regs);
int event = arg->pmd_eventid;
arg->ovfl_ctrl.bits.reset_ovfl_pmds = 1;
* without perfmon being shutdown (e.g. SIGSEGV)
*/
if (allow_ints)
- oprofile_add_sample(eip, !user_mode(regs), event, cpu);
+ oprofile_add_sample(regs, event);
return 0;
}
/* all the ops are handled via userspace for IA64 perfmon */
-static struct oprofile_operations perfmon_ops = {
- .start = perfmon_start,
- .stop = perfmon_stop,
-};
static int using_perfmon;
-int perfmon_init(struct oprofile_operations ** ops)
+int perfmon_init(struct oprofile_operations * ops)
{
int ret = pfm_register_buffer_fmt(&oprofile_fmt);
if (ret)
return -ENODEV;
- perfmon_ops.cpu_type = get_cpu_type();
- *ops = &perfmon_ops;
+ ops->cpu_type = get_cpu_type();
+ ops->start = perfmon_start;
+ ops->stop = perfmon_stop;
using_perfmon = 1;
printk(KERN_INFO "oprofile: using perfmon.\n");
return 0;
#include <asm/sn/sn_sal.h>
#include "ioerror.h"
#include <asm/sn/addrs.h>
-#include "shubio.h"
+#include <asm/sn/shubio.h>
#include <asm/sn/geo.h>
#include "xtalk/xwidgetdev.h"
#include "xtalk/hubdev.h"
*/
#include <linux/bootmem.h>
+#include <linux/nodemask.h>
#include <asm/sn/types.h>
#include <asm/sn/sn_sal.h>
#include <asm/sn/addrs.h>
struct pci_dev *host_pci_dev;
int status = 0;
- SN_PCIDEV_INFO(dev) = kmalloc(sizeof(struct pcidev_info), GFP_KERNEL);
+ dev->sysdata = kmalloc(sizeof(struct pcidev_info), GFP_KERNEL);
if (SN_PCIDEV_INFO(dev) <= 0)
BUG(); /* Cannot afford to run out of memory */
memset(SN_PCIDEV_INFO(dev), 0, sizeof(struct pcidev_info));
* after this point.
*/
- PCI_CONTROLLER(bus) = controller;
- SN_PCIBUS_BUSSOFT(bus) = provider_soft;
+ bus->sysdata = controller;
+ PCI_CONTROLLER(bus)->platform_data = provider_soft;
nasid = NASID_GET(SN_PCIBUS_BUSSOFT(bus)->bs_base);
cnode = nasid_to_cnodeid(nasid);
struct hubdev_info *hubdev_info;
- if (node >= numnodes) /* Headless/memless IO nodes */
+ if (node >= num_online_nodes()) /* Headless/memless IO nodes */
hubdev_info =
(struct hubdev_info *)alloc_bootmem_node(NODE_DATA(0),
sizeof(struct
return ((void *)(port | __IA64_UNCACHED_OFFSET));
} else {
/* but the simulator uses them... */
- unsigned long io_base;
unsigned long addr;
/*
* for accessing registers in bedrock local block
* (so we don't do port&0xfff)
*/
- if ((port >= 0x1f0 && port <= 0x1f7) ||
- port == 0x3f6 || port == 0x3f7) {
- io_base = (0xc000000fcc000000UL |
- ((unsigned long)get_nasid() << 38));
- addr = io_base | ((port >> 2) << 12) | (port & 0xfff);
- } else {
- addr = __ia64_get_io_port_base() | ((port >> 2) << 2);
- }
+ addr = (is_shub2() ? 0xc00000028c000000UL : 0xc0000087cc000000UL) | ((port >> 2) << 12);
+ if ((port >= 0x1f0 && port <= 0x1f7) || port == 0x3f6 || port == 0x3f7)
+ addr |= port;
return (void *)addr;
}
}
*/
void __sn_mmiowb(void)
{
- while ((((volatile unsigned long)(*pda->pio_write_status_addr)) &
- SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK) !=
- SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK)
+ volatile unsigned long *adr = pda->pio_write_status_addr;
+ unsigned long val = pda->pio_write_status_val;
+
+ while ((*adr & SH_PIO_WRITE_STATUS_PENDING_WRITE_COUNT_MASK) != val)
cpu_relax();
}
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2004 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2004-2005 Silicon Graphics, Inc. All rights reserved.
*
* SGI Altix topology and hardware performance monitoring API.
* Mark Goodwin <markgw@sgi.com>.
#include <linux/miscdevice.h>
#include <linux/cpumask.h>
#include <linux/smp_lock.h>
+#include <linux/nodemask.h>
#include <asm/processor.h>
#include <asm/topology.h>
#include <asm/smp.h>
return r;
}
+/* map SAL hwperf error code to system error code */
+static int sn_hwperf_map_err(int hwperf_err)
+{
+ int e;
+
+ switch(hwperf_err) {
+ case SN_HWPERF_OP_OK:
+ e = 0;
+ break;
+
+ case SN_HWPERF_OP_NOMEM:
+ e = -ENOMEM;
+ break;
+
+ case SN_HWPERF_OP_NO_PERM:
+ e = -EPERM;
+ break;
+
+ case SN_HWPERF_OP_IO_ERROR:
+ e = -EIO;
+ break;
+
+ case SN_HWPERF_OP_BUSY:
+ case SN_HWPERF_OP_RECONFIGURE:
+ e = -EAGAIN;
+ break;
+
+ case SN_HWPERF_OP_INVAL:
+ default:
+ e = -EINVAL;
+ break;
+ }
+
+ return e;
+}
+
/*
* ioctl for "sn_hwperf" misc device
*/
op_info.v0 = &v0;
op_info.op = op;
r = sn_hwperf_op_cpu(&op_info);
+ if (r) {
+ r = sn_hwperf_map_err(r);
+ goto error;
+ }
break;
default:
/* all other ops are a direct SAL call */
r = ia64_sn_hwperf_op(sn_hwperf_master_nasid, op,
a.arg, a.sz, (u64) p, 0, 0, &v0);
+ if (r) {
+ r = sn_hwperf_map_err(r);
+ goto error;
+ }
a.v0 = v0;
break;
}
}
error:
- if (p)
- vfree(p);
+ vfree(p);
lock_kernel();
return r;
{
struct seq_file *seq = file->private_data;
- if (seq->private)
- vfree(seq->private);
+ vfree(seq->private);
return seq_release(inode, file);
}
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2000,2002-2004 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2000,2002-2005 Silicon Graphics, Inc. All rights reserved.
*
- * Routines for PCI DMA mapping. See Documentation/DMA-mapping.txt for
+ * Routines for PCI DMA mapping. See Documentation/DMA-API.txt for
* a description of how these routines should be used.
*/
#include <linux/module.h>
+#include <asm/dma.h>
#include <asm/sn/sn_sal.h>
#include "pci/pcibus_provider_defs.h"
#include "pci/pcidev.h"
#include "pci/pcibr_provider.h"
-void sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
- int direction);
+#define SG_ENT_VIRT_ADDRESS(sg) (page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG) virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
/**
- * sn_pci_alloc_consistent - allocate memory for coherent DMA
- * @hwdev: device to allocate for
+ * sn_dma_supported - test a DMA mask
+ * @dev: device to test
+ * @mask: DMA mask to test
+ *
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly. For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function. Of course, SN only supports devices that have 32 or more
+ * address bits when using the PMU.
+ */
+int sn_dma_supported(struct device *dev, u64 mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ if (mask < 0x7fffffff)
+ return 0;
+ return 1;
+}
+EXPORT_SYMBOL(sn_dma_supported);
+
+/**
+ * sn_dma_set_mask - set the DMA mask
+ * @dev: device to set
+ * @dma_mask: new mask
+ *
+ * Set @dev's DMA mask if the hw supports it.
+ */
+int sn_dma_set_mask(struct device *dev, u64 dma_mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ if (!sn_dma_supported(dev, dma_mask))
+ return 0;
+
+ *dev->dma_mask = dma_mask;
+ return 1;
+}
+EXPORT_SYMBOL(sn_dma_set_mask);
+
+/**
+ * sn_dma_alloc_coherent - allocate memory for coherent DMA
+ * @dev: device to allocate for
* @size: size of the region
* @dma_handle: DMA (bus) address
+ * @flags: memory allocation flags
*
- * pci_alloc_consistent() returns a pointer to a memory region suitable for
+ * dma_alloc_coherent() returns a pointer to a memory region suitable for
* coherent DMA traffic to/from a PCI device. On SN platforms, this means
* that @dma_handle will have the %PCIIO_DMA_CMD flag set.
*
* This interface is usually used for "command" streams (e.g. the command
- * queue for a SCSI controller). See Documentation/DMA-mapping.txt for
+ * queue for a SCSI controller). See Documentation/DMA-API.txt for
* more information.
- *
- * Also known as platform_pci_alloc_consistent() by the IA64 machvec code.
*/
-void *sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
- dma_addr_t * dma_handle)
+void *sn_dma_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t * dma_handle, int flags)
{
void *cpuaddr;
unsigned long phys_addr;
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
-
- if (bussoft == NULL) {
- return NULL;
- }
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
- if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
- return NULL; /* unsupported asic type */
- }
+ BUG_ON(dev->bus != &pci_bus_type);
/*
* Allocate the memory.
/*
* 64 bit address translations should never fail.
* 32 bit translations can fail if there are insufficient mapping
- * resources.
+ * resources.
*/
- *dma_handle = pcibr_dma_map(pcidev_info, phys_addr, size, SN_PCIDMA_CONSISTENT);
+ *dma_handle = pcibr_dma_map(pcidev_info, phys_addr, size,
+ SN_PCIDMA_CONSISTENT);
if (!*dma_handle) {
- printk(KERN_ERR
- "sn_pci_alloc_consistent(): failed *dma_handle = 0x%lx hwdev->dev.coherent_dma_mask = 0x%lx \n",
- *dma_handle, hwdev->dev.coherent_dma_mask);
+ printk(KERN_ERR "%s: out of ATEs\n", __FUNCTION__);
free_pages((unsigned long)cpuaddr, get_order(size));
return NULL;
}
return cpuaddr;
}
+EXPORT_SYMBOL(sn_dma_alloc_coherent);
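For the "command stream" case described above, a driver would typically
allocate and release the ring through the generic coherent API. The sketch
below is illustrative only; the ring size and helper names are hypothetical:

#include <linux/gfp.h>
#include <linux/dma-mapping.h>

#define EXAMPLE_RING_BYTES      4096    /* hypothetical command ring size */

static void *example_alloc_ring(struct device *dev, dma_addr_t *bus_addr)
{
        /* Coherent for the life of the mapping; no dma_sync_* calls needed. */
        return dma_alloc_coherent(dev, EXAMPLE_RING_BYTES, bus_addr,
                                  GFP_KERNEL);
}

static void example_free_ring(struct device *dev, void *ring,
                              dma_addr_t bus_addr)
{
        dma_free_coherent(dev, EXAMPLE_RING_BYTES, ring, bus_addr);
}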
/**
- * sn_pci_free_consistent - free memory associated with coherent DMAable region
- * @hwdev: device to free for
+ * sn_dma_free_coherent - free memory associated with coherent DMAable region
+ * @dev: device to free for
* @size: size to free
- * @vaddr: kernel virtual address to free
+ * @cpu_addr: kernel virtual address to free
* @dma_handle: DMA address associated with this region
*
- * Frees the memory allocated by pci_alloc_consistent(). Also known
- * as platform_pci_free_consistent() by the IA64 machvec code.
+ * Frees the memory allocated by dma_alloc_coherent(), potentially unmapping
+ * any associated IOMMU mappings.
*/
-void
-sn_pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr,
- dma_addr_t dma_handle)
+void sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t dma_handle)
{
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
- if (! bussoft) {
- return;
- }
+ BUG_ON(dev->bus != &pci_bus_type);
pcibr_dma_unmap(pcidev_info, dma_handle, 0);
- free_pages((unsigned long)vaddr, get_order(size));
-}
-
-/**
- * sn_pci_map_sg - map a scatter-gather list for DMA
- * @hwdev: device to map for
- * @sg: scatterlist to map
- * @nents: number of entries
- * @direction: direction of the DMA transaction
- *
- * Maps each entry of @sg for DMA. Also known as platform_pci_map_sg by the
- * IA64 machvec code.
- */
-int
-sn_pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
- int direction)
-{
-
- int i;
- unsigned long phys_addr;
- struct scatterlist *saved_sg = sg;
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
-
- /* can't go anywhere w/o a direction in life */
- if (direction == PCI_DMA_NONE)
- BUG();
-
- if (! bussoft) {
- return 0;
- }
-
- /* SN cannot support DMA addresses smaller than 32 bits. */
- if (hwdev->dma_mask < 0x7fffffff)
- return 0;
-
- /*
- * Setup a DMA address for each entry in the
- * scatterlist.
- */
- for (i = 0; i < nents; i++, sg++) {
- phys_addr =
- __pa((unsigned long)page_address(sg->page) + sg->offset);
- sg->dma_address = pcibr_dma_map(pcidev_info, phys_addr, sg->length, 0);
-
- if (!sg->dma_address) {
- printk(KERN_ERR "sn_pci_map_sg: Unable to allocate "
- "anymore page map entries.\n");
- /*
- * We will need to free all previously allocated entries.
- */
- if (i > 0) {
- sn_pci_unmap_sg(hwdev, saved_sg, i, direction);
- }
- return (0);
- }
-
- sg->dma_length = sg->length;
- }
-
- return nents;
-
-}
-
-/**
- * sn_pci_unmap_sg - unmap a scatter-gather list
- * @hwdev: device to unmap
- * @sg: scatterlist to unmap
- * @nents: number of scatterlist entries
- * @direction: DMA direction
- *
- * Unmap a set of streaming mode DMA translations. Again, cpu read rules
- * concerning calls here are the same as for pci_unmap_single() below. Also
- * known as sn_pci_unmap_sg() by the IA64 machvec code.
- */
-void
-sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
- int direction)
-{
- int i;
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
-
- /* can't go anywhere w/o a direction in life */
- if (direction == PCI_DMA_NONE)
- BUG();
-
- if (! bussoft) {
- return;
- }
-
- for (i = 0; i < nents; i++, sg++) {
- pcibr_dma_unmap(pcidev_info, sg->dma_address, direction);
- sg->dma_address = (dma_addr_t) NULL;
- sg->dma_length = 0;
- }
+ free_pages((unsigned long)cpu_addr, get_order(size));
}
+EXPORT_SYMBOL(sn_dma_free_coherent);
/**
- * sn_pci_map_single - map a single region for DMA
- * @hwdev: device to map for
- * @ptr: kernel virtual address of the region to map
+ * sn_dma_map_single - map a single page for DMA
+ * @dev: device to map for
+ * @cpu_addr: kernel virtual address of the region to map
* @size: size of the region
* @direction: DMA direction
*
- * Map the region pointed to by @ptr for DMA and return the
- * DMA address. Also known as platform_pci_map_single() by
- * the IA64 machvec code.
+ * Map the region pointed to by @cpu_addr for DMA and return the
+ * DMA address.
*
* We map this to the one step pcibr_dmamap_trans interface rather than
* the two step pcibr_dmamap_alloc/pcibr_dmamap_addr because we have
* (which is pretty much unacceptable).
*
* TODO: simplify our interface;
- * get rid of dev_desc and vhdl (seems redundant given a pci_dev);
* figure out how to save dmamap handle so can use two step.
*/
-dma_addr_t
-sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
+dma_addr_t sn_dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+ int direction)
{
dma_addr_t dma_addr;
unsigned long phys_addr;
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
- if (direction == PCI_DMA_NONE)
- BUG();
-
- if (bussoft == NULL) {
- return 0;
- }
-
- if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
- return 0; /* unsupported asic type */
- }
-
- /* SN cannot support DMA addresses smaller than 32 bits. */
- if (hwdev->dma_mask < 0x7fffffff)
- return 0;
-
- /*
- * Call our dmamap interface
- */
+ BUG_ON(dev->bus != &pci_bus_type);
- phys_addr = __pa(ptr);
+ phys_addr = __pa(cpu_addr);
dma_addr = pcibr_dma_map(pcidev_info, phys_addr, size, 0);
if (!dma_addr) {
- printk(KERN_ERR "pci_map_single: Unable to allocate anymore "
- "page map entries.\n");
+ printk(KERN_ERR "%s: out of ATEs\n", __FUNCTION__);
return 0;
}
- return ((dma_addr_t) dma_addr);
+ return dma_addr;
}
+EXPORT_SYMBOL(sn_dma_map_single);
/**
- * sn_pci_dma_sync_single_* - make sure all DMAs or CPU accesses
- * have completed
- * @hwdev: device to sync
- * @dma_handle: DMA address to sync
+ * sn_dma_unmap_single - unmap a DMA mapped page
+ * @dev: device to unmap for
+ * @dma_addr: DMA address to unmap
* @size: size of region
* @direction: DMA direction
*
* This routine is supposed to sync the DMA region specified
- * by @dma_handle into the 'coherence domain'. We do not need to do
- * anything on our platform.
+ * by @dma_addr into the coherence domain. On SN, we're always cache
+ * coherent, so we just need to free any ATEs associated with this mapping.
*/
-void
-sn_pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size,
- int direction)
+void sn_dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+ int direction)
{
- struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
- struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
-
- if (direction == PCI_DMA_NONE)
- BUG();
-
- if (bussoft == NULL) {
- return;
- }
-
- if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
- return; /* unsupported asic type */
- }
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
+ BUG_ON(dev->bus != &pci_bus_type);
pcibr_dma_unmap(pcidev_info, dma_addr, direction);
}
+EXPORT_SYMBOL(sn_dma_unmap_single);
/**
- * sn_dma_supported - test a DMA mask
- * @hwdev: device to test
- * @mask: DMA mask to test
+ * sn_dma_unmap_sg - unmap a DMA scatterlist
+ * @dev: device to unmap
+ * @sg: scatterlist to unmap
+ * @nhwentries: number of scatterlist entries
+ * @direction: DMA direction
*
- * Return whether the given PCI device DMA address mask can be supported
- * properly. For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function. Of course, SN only supports devices that have 32 or more
- * address bits when using the PMU. We could theoretically support <32 bit
- * cards using direct mapping, but we'll worry about that later--on the off
- * chance that someone actually wants to use such a card.
+ * Unmap a set of streaming mode DMA translations.
*/
-int sn_pci_dma_supported(struct pci_dev *hwdev, u64 mask)
+void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nhwentries, int direction)
{
- if (mask < 0x7fffffff)
- return 0;
- return 1;
-}
-
-/*
- * New generic DMA routines just wrap sn2 PCI routines until we
- * support other bus types (if ever).
- */
+ int i;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
-int sn_dma_supported(struct device *dev, u64 mask)
-{
BUG_ON(dev->bus != &pci_bus_type);
- return sn_pci_dma_supported(to_pci_dev(dev), mask);
+ for (i = 0; i < nhwentries; i++, sg++) {
+ pcibr_dma_unmap(pcidev_info, sg->dma_address, direction);
+ sg->dma_address = (dma_addr_t) NULL;
+ sg->dma_length = 0;
+ }
}
+EXPORT_SYMBOL(sn_dma_unmap_sg);
-EXPORT_SYMBOL(sn_dma_supported);
-
-int sn_dma_set_mask(struct device *dev, u64 dma_mask)
+/**
+ * sn_dma_map_sg - map a scatterlist for DMA
+ * @dev: device to map for
+ * @sg: scatterlist to map
+ * @nhwentries: number of entries
+ * @direction: direction of the DMA transaction
+ *
+ * Maps each entry of @sg for DMA.
+ */
+int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+ int direction)
{
+ unsigned long phys_addr;
+ struct scatterlist *saved_sg = sg;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(to_pci_dev(dev));
+ int i;
+
BUG_ON(dev->bus != &pci_bus_type);
- if (!sn_dma_supported(dev, dma_mask))
- return 0;
+ /*
+ * Setup a DMA address for each entry in the scatterlist.
+ */
+ for (i = 0; i < nhwentries; i++, sg++) {
+ phys_addr = SG_ENT_PHYS_ADDRESS(sg);
+ sg->dma_address = pcibr_dma_map(pcidev_info, phys_addr,
+ sg->length, 0);
- *dev->dma_mask = dma_mask;
- return 1;
-}
+ if (!sg->dma_address) {
+ printk(KERN_ERR "%s: out of ATEs\n", __FUNCTION__);
-EXPORT_SYMBOL(sn_dma_set_mask);
+ /*
+ * Free any successfully allocated entries.
+ */
+ if (i > 0)
+ sn_dma_unmap_sg(dev, saved_sg, i, direction);
+ return 0;
+ }
-void *sn_dma_alloc_coherent(struct device *dev, size_t size,
- dma_addr_t * dma_handle, int flag)
-{
- BUG_ON(dev->bus != &pci_bus_type);
+ sg->dma_length = sg->length;
+ }
- return sn_pci_alloc_consistent(to_pci_dev(dev), size, dma_handle);
+ return nhwentries;
}
+EXPORT_SYMBOL(sn_dma_map_sg);
-EXPORT_SYMBOL(sn_dma_alloc_coherent);
-
-void
-sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
- dma_addr_t dma_handle)
+void sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
+ size_t size, int direction)
{
BUG_ON(dev->bus != &pci_bus_type);
-
- sn_pci_free_consistent(to_pci_dev(dev), size, cpu_addr, dma_handle);
}
+EXPORT_SYMBOL(sn_dma_sync_single_for_cpu);
-EXPORT_SYMBOL(sn_dma_free_coherent);
-
-dma_addr_t
-sn_dma_map_single(struct device *dev, void *cpu_addr, size_t size,
- int direction)
+void sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+ size_t size, int direction)
{
BUG_ON(dev->bus != &pci_bus_type);
-
- return sn_pci_map_single(to_pci_dev(dev), cpu_addr, size,
- (int)direction);
}
+EXPORT_SYMBOL(sn_dma_sync_single_for_device);
-EXPORT_SYMBOL(sn_dma_map_single);
-
-void
-sn_dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
- int direction)
+void sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+ int nelems, int direction)
{
BUG_ON(dev->bus != &pci_bus_type);
-
- sn_pci_unmap_single(to_pci_dev(dev), dma_addr, size, (int)direction);
}
+EXPORT_SYMBOL(sn_dma_sync_sg_for_cpu);
-EXPORT_SYMBOL(sn_dma_unmap_single);
-
-dma_addr_t
-sn_dma_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, int direction)
+void sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ int nelems, int direction)
{
BUG_ON(dev->bus != &pci_bus_type);
-
- return pci_map_page(to_pci_dev(dev), page, offset, size,
- (int)direction);
}
+EXPORT_SYMBOL(sn_dma_sync_sg_for_device);
-EXPORT_SYMBOL(sn_dma_map_page);
-
-void
-sn_dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
- int direction)
+int sn_dma_mapping_error(dma_addr_t dma_addr)
{
- BUG_ON(dev->bus != &pci_bus_type);
-
- pci_unmap_page(to_pci_dev(dev), dma_address, size, (int)direction);
+ return 0;
}
+EXPORT_SYMBOL(sn_dma_mapping_error);
-EXPORT_SYMBOL(sn_dma_unmap_page);
-
-int
-sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
- int direction)
+char *sn_pci_get_legacy_mem(struct pci_bus *bus)
{
- BUG_ON(dev->bus != &pci_bus_type);
+ if (!SN_PCIBUS_BUSSOFT(bus))
+ return ERR_PTR(-ENODEV);
- return sn_pci_map_sg(to_pci_dev(dev), sg, nents, (int)direction);
+ return (char *)(SN_PCIBUS_BUSSOFT(bus)->bs_legacy_mem | __IA64_UNCACHED_OFFSET);
}
-EXPORT_SYMBOL(sn_dma_map_sg);
-
-void
-sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
- int direction)
+int sn_pci_legacy_read(struct pci_bus *bus, u16 port, u32 *val, u8 size)
{
- BUG_ON(dev->bus != &pci_bus_type);
-
- sn_pci_unmap_sg(to_pci_dev(dev), sg, nhwentries, (int)direction);
-}
+ unsigned long addr;
+ int ret;
-EXPORT_SYMBOL(sn_dma_unmap_sg);
+ if (!SN_PCIBUS_BUSSOFT(bus))
+ return -ENODEV;
-void
-sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
- size_t size, int direction)
-{
- BUG_ON(dev->bus != &pci_bus_type);
-}
+ addr = SN_PCIBUS_BUSSOFT(bus)->bs_legacy_io | __IA64_UNCACHED_OFFSET;
+ addr += port;
-EXPORT_SYMBOL(sn_dma_sync_single_for_cpu);
+ ret = ia64_sn_probe_mem(addr, (long)size, (void *)val);
-void
-sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
- size_t size, int direction)
-{
- BUG_ON(dev->bus != &pci_bus_type);
-}
+ if (ret == 2)
+ return -EINVAL;
-EXPORT_SYMBOL(sn_dma_sync_single_for_device);
+ if (ret == 1)
+ *val = -1;
-void
-sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
- int direction)
-{
- BUG_ON(dev->bus != &pci_bus_type);
+ return size;
}
-EXPORT_SYMBOL(sn_dma_sync_sg_for_cpu);
-
-void
-sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
- int nelems, int direction)
+int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
{
- BUG_ON(dev->bus != &pci_bus_type);
-}
+ int ret = size;
+ unsigned long paddr;
+ unsigned long *addr;
-int sn_dma_mapping_error(dma_addr_t dma_addr)
-{
- return 0;
-}
+ if (!SN_PCIBUS_BUSSOFT(bus)) {
+ ret = -ENODEV;
+ goto out;
+ }
-EXPORT_SYMBOL(sn_dma_sync_sg_for_device);
-EXPORT_SYMBOL(sn_pci_unmap_single);
-EXPORT_SYMBOL(sn_pci_map_single);
-EXPORT_SYMBOL(sn_pci_map_sg);
-EXPORT_SYMBOL(sn_pci_unmap_sg);
-EXPORT_SYMBOL(sn_pci_alloc_consistent);
-EXPORT_SYMBOL(sn_pci_free_consistent);
-EXPORT_SYMBOL(sn_pci_dma_supported);
-EXPORT_SYMBOL(sn_dma_mapping_error);
+ /* Put the phys addr in uncached space */
+ paddr = SN_PCIBUS_BUSSOFT(bus)->bs_legacy_io | __IA64_UNCACHED_OFFSET;
+ paddr += port;
+ addr = (unsigned long *)paddr;
+
+ switch (size) {
+ case 1:
+ *(volatile u8 *)(addr) = (u8)(val);
+ break;
+ case 2:
+ *(volatile u16 *)(addr) = (u16)(val);
+ break;
+ case 4:
+ *(volatile u32 *)(addr) = (u32)(val);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ out:
+ return ret;
+}
config UID16
bool
- default y
+ default n
config GENERIC_ISA_DMA
bool
default y
-#config GENERIC_HARDIRQS
-# bool
-# default y
+config GENERIC_HARDIRQS
+ bool
+ default y
+
+config GENERIC_IRQ_PROBE
+ bool
+ default y
source "init/Kconfig"
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:49 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:10:44 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
#
# CONFIG_SCSI_SATA is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
# CONFIG_SCSI_DEBUG is not set
#
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_REISERFS_FS_XATTR is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
.long sys_time
.long sys_mknod
.long sys_chmod /* 15 */
- .long sys_lchown
+ .long sys_ni_syscall /* lchown16 syscall holder */
.long sys_ni_syscall /* old break syscall holder */
- .long sys_stat
+ .long sys_ni_syscall /* old stat syscall holder */
.long sys_lseek
.long sys_getpid /* 20 */
.long sys_mount
.long sys_oldumount
- .long sys_setuid
- .long sys_getuid
+ .long sys_ni_syscall /* setuid16 syscall holder */
+ .long sys_ni_syscall /* getuid16 syscall holder */
.long sys_stime /* 25 */
.long sys_ptrace
.long sys_alarm
- .long sys_fstat
+ .long sys_ni_syscall /* old fstat syscall holder */
.long sys_pause
.long sys_utime /* 30 */
- .long sys_cacheflush /* for M32R */ /* old stty syscall holder */
+ .long sys_ni_syscall /* old stty syscall holder */
.long sys_cachectl /* for M32R */ /* old gtty syscall holder */
.long sys_access
- .long sys_nice
+ .long sys_ni_syscall /* nice syscall holder */
.long sys_ni_syscall /* 35 - old ftime syscall holder */
.long sys_sync
.long sys_kill
.long sys_times
.long sys_ni_syscall /* old prof syscall holder */
.long sys_brk /* 45 */
- .long sys_setgid
- .long sys_getgid
- .long sys_signal
- .long sys_geteuid
- .long sys_getegid /* 50 */
+ .long sys_ni_syscall /* setgid16 syscall holder */
+ .long sys_getgid /* will be unused */
+ .long sys_ni_syscall /* signal syscall holder */
+ .long sys_ni_syscall /* geteuid16 syscall holder */
+ .long sys_ni_syscall /* 50 - getegid16 syscall holder */
.long sys_acct
.long sys_umount /* recycled never used phys() */
.long sys_ni_syscall /* old lock syscall holder */
.long sys_ioctl
- .long sys_fcntl /* 55 */
- .long sys_ni_syscall /* old mpx syscall holder */
+ .long sys_fcntl /* 55 - will be unused */
+ .long sys_ni_syscall /* mpx syscall holder */
.long sys_setpgid
.long sys_ni_syscall /* old ulimit syscall holder */
.long sys_ni_syscall /* sys_olduname */
.long sys_getppid
.long sys_getpgrp /* 65 */
.long sys_setsid
- .long sys_sigaction
- .long sys_sgetmask
- .long sys_ssetmask
- .long sys_setreuid /* 70 */
- .long sys_setregid
- .long sys_sigsuspend
- .long sys_sigpending
+ .long sys_ni_syscall /* sigaction syscall holder */
+ .long sys_ni_syscall /* sgetmask syscall holder */
+ .long sys_ni_syscall /* ssetmask syscall holder */
+ .long sys_ni_syscall /* 70 - setreuid16 syscall holder */
+ .long sys_ni_syscall /* setregid16 syscall holder */
+ .long sys_ni_syscall /* sigsuspend syscall holder */
+ .long sys_ni_syscall /* sigpending syscall holder */
.long sys_sethostname
.long sys_setrlimit /* 75 */
- .long sys_getrlimit
+ .long sys_getrlimit /* will be unused */
.long sys_getrusage
.long sys_gettimeofday
.long sys_settimeofday
- .long sys_getgroups /* 80 */
- .long sys_setgroups
+ .long sys_ni_syscall /* 80 - getgroups16 syscall holder */
+ .long sys_ni_syscall /* setgroups16 syscall holder */
.long sys_ni_syscall /* sys_oldselect */
.long sys_symlink
- .long sys_lstat
+ .long sys_ni_syscall /* old lstat syscall holder */
.long sys_readlink /* 85 */
.long sys_uselib
.long sys_swapon
.long sys_reboot
- .long old_readdir
+ .long sys_ni_syscall /* readdir syscall holder */
.long sys_ni_syscall /* 90 - old_mmap syscall holder */
.long sys_munmap
.long sys_truncate
.long sys_ftruncate
.long sys_fchmod
- .long sys_fchown /* 95 */
+ .long sys_ni_syscall /* 95 - fchown16 syscall holder */
.long sys_getpriority
.long sys_setpriority
.long sys_ni_syscall /* old profil syscall holder */
.long sys_statfs
.long sys_fstatfs /* 100 */
- .long sys_ni_syscall /* ioperm */
+ .long sys_ni_syscall /* ioperm syscall holder */
.long sys_socketcall
.long sys_syslog
.long sys_setitimer
.long sys_newstat
.long sys_newlstat
.long sys_newfstat
- .long sys_uname
- .long sys_ni_syscall /* 110 - iopl */
+ .long sys_ni_syscall /* old uname syscall holder */
+ .long sys_ni_syscall /* 110 - iopl syscall holder */
.long sys_vhangup
- .long sys_ni_syscall /* for idle */
- .long sys_ni_syscall /* for vm86old */
+ .long sys_ni_syscall /* idle syscall holder */
+ .long sys_ni_syscall /* vm86old syscall holder */
.long sys_wait4
.long sys_swapoff /* 115 */
.long sys_sysinfo
.long sys_ipc
.long sys_fsync
- .long sys_sigreturn
+ .long sys_ni_syscall /* sigreturn syscall holder */
.long sys_clone /* 120 */
.long sys_setdomainname
.long sys_newuname
- .long sys_ni_syscall /* sys_modify_ldt */
+ .long sys_ni_syscall /* modify_ldt syscall holder */
.long sys_adjtimex
.long sys_mprotect /* 125 */
- .long sys_sigprocmask
- .long sys_ni_syscall /* sys_create_module */
+ .long sys_ni_syscall /* sigprocmask syscall holder */
+ .long sys_ni_syscall /* create_module syscall holder */
.long sys_init_module
.long sys_delete_module
- .long sys_ni_syscall /* 130 sys_get_kernel_syms */
+ .long sys_ni_syscall /* 130 - get_kernel_syms */
.long sys_quotactl
.long sys_getpgid
.long sys_fchdir
.long sys_bdflush
.long sys_sysfs /* 135 */
.long sys_personality
- .long sys_ni_syscall /* for afs_syscall */
- .long sys_setfsuid
- .long sys_setfsgid
+ .long sys_ni_syscall /* afs_syscall syscall holder */
+ .long sys_ni_syscall /* setfsuid16 syscall holder */
+ .long sys_ni_syscall /* setfsgid16 syscall holder */
.long sys_llseek /* 140 */
.long sys_getdents
.long sys_select
.long sys_sched_rr_get_interval
.long sys_nanosleep
.long sys_mremap
- .long sys_setresuid
- .long sys_getresuid /* 165 */
- .long sys_tas /* vm86 */
- .long sys_ni_syscall /* sys_query_module */
+ .long sys_ni_syscall /* setresuid16 syscall holder */
+ .long sys_ni_syscall /* 165 - getresuid16 syscall holder */
+ .long sys_tas /* vm86 syscall holder */
+ .long sys_ni_syscall /* query_module syscall holder */
.long sys_poll
.long sys_nfsservctl
.long sys_setresgid /* 170 */
.long sys_rt_sigsuspend
.long sys_pread64 /* 180 */
.long sys_pwrite64
- .long sys_chown
+ .long sys_ni_syscall /* chown16 syscall holder */
.long sys_getcwd
.long sys_capget
.long sys_capset /* 185 */
/*
* linux/arch/m32r/kernel/irq.c
*
- * Copyright (c) 2003, 2004 Hitoshi Yamamoto
- *
- * Taken from i386 2.6.4 version.
+ * Copyright (c) 2003, 2004 Hitoshi Yamamoto
+ * Copyright (c) 2004 Hirokazu Takata <takata at linux-m32r.org>
*/
/*
*
* Copyright (C) 1992, 1998 Linus Torvalds, Ingo Molnar
*
- * This file contains the code used by various IRQ handling routines:
- * asking for different IRQ's should be done through these routines
- * instead of just grabbing them. Thus setups with different IRQ numbers
- * shouldn't result in any weird surprises, and installing new handlers
- * should be easier.
- */
-
-/*
- * (mostly architecture independent, will move to kernel/irq.c in 2.5.)
- *
- * IRQs are in fact implemented a bit like signal handlers for the kernel.
- * Naturally it's not a 1:1 relation, but there are similarities.
+ * This file contains the lowest level m32r-specific interrupt
+ * entry and irq statistics code. All the remaining irq logic is
+ * done by the generic kernel/irq/ code and in the
+ * m32r-specific irq controller code.
*/
-#include <linux/config.h>
-#include <linux/errno.h>
-#include <linux/module.h>
-#include <linux/signal.h>
-#include <linux/sched.h>
-#include <linux/ioport.h>
-#include <linux/interrupt.h>
-#include <linux/timex.h>
-#include <linux/slab.h>
-#include <linux/random.h>
-#include <linux/smp_lock.h>
-#include <linux/init.h>
#include <linux/kernel_stat.h>
-#include <linux/irq.h>
-#include <linux/proc_fs.h>
+#include <linux/interrupt.h>
#include <linux/seq_file.h>
-#include <linux/kallsyms.h>
-#include <linux/bitops.h>
-
-#include <asm/atomic.h>
-#include <asm/io.h>
-#include <asm/smp.h>
-#include <asm/system.h>
+#include <linux/module.h>
#include <asm/uaccess.h>
-#include <asm/delay.h>
-#include <asm/irq.h>
-
-/*
- * Linux has a controller-independent x86 interrupt architecture.
- * every controller has a 'controller-template', that is used
- * by the main code to do the right thing. Each driver-visible
- * interrupt source is transparently wired to the apropriate
- * controller. Thus drivers need not be aware of the
- * interrupt-controller.
- *
- * Various interrupt controllers we handle: 8259 PIC, SMP IO-APIC,
- * PIIX4's internal 8259 PIC and SGI's Visual Workstation Cobalt (IO-)APIC.
- * (IO-APICs assumed to be messaging to Pentium local-APICs)
- *
- * the code is designed to be easily extended with new/different
- * interrupt controllers, without having to do assembly magic.
- */
-
-/*
- * Controller mappings for all interrupt sources:
- */
-irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = {
- [0 ... NR_IRQS-1] = {
- .handler = &no_irq_type,
- .lock = SPIN_LOCK_UNLOCKED
- }
-};
-
-static void register_irq_proc (unsigned int irq);
-
-/*
- * Special irq handlers.
- */
-
-irqreturn_t no_action(int cpl, void *dev_id, struct pt_regs *regs)
-{ return IRQ_NONE; }
-
-/*
- * Generic no controller code
- */
-
-static void enable_none(unsigned int irq) { }
-static unsigned int startup_none(unsigned int irq) { return 0; }
-static void disable_none(unsigned int irq) { }
-static void ack_none(unsigned int irq)
-{
-/*
- * 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves, it doesn't deserve
- * a generic callback i think.
- */
- printk("unexpected IRQ trap at vector %02x\n", irq);
-}
-
-/* startup is the same as "enable", shutdown is same as "disable" */
-#define shutdown_none disable_none
-#define end_none enable_none
-
-struct hw_interrupt_type no_irq_type = {
- "none",
- startup_none,
- shutdown_none,
- enable_none,
- disable_none,
- ack_none,
- end_none
-};
atomic_t irq_err_count;
atomic_t irq_mis_count;
return 0;
}
-#ifdef CONFIG_SMP
-inline void synchronize_irq(unsigned int irq)
-{
- while (irq_desc[irq].status & IRQ_INPROGRESS)
- cpu_relax();
-}
-#endif
-
-/*
- * This should really return information about whether
- * we should do bottom half handling etc. Right now we
- * end up _always_ checking the bottom half, which is a
- * waste of time and is not what some drivers would
- * prefer.
- */
-int handle_IRQ_event(unsigned int irq,
- struct pt_regs *regs, struct irqaction *action)
-{
- int status = 1; /* Force the "do bottom halves" bit */
- int ret, retval = 0;
-
- if (!(action->flags & SA_INTERRUPT))
- local_irq_enable();
-
- do {
- ret = action->handler(irq, action->dev_id, regs);
- if (ret == IRQ_HANDLED)
- status |= action->flags;
- action = action->next;
- retval |= ret;
- } while (action);
- if (status & SA_SAMPLE_RANDOM)
- add_interrupt_randomness(irq);
- local_irq_disable();
- return retval;
-}
-
-static void __report_bad_irq(int irq, irq_desc_t *desc, irqreturn_t action_ret)
-{
- struct irqaction *action;
-
- if (action_ret != IRQ_HANDLED && action_ret != IRQ_NONE) {
- printk(KERN_ERR "irq event %d: bogus return value %x\n",
- irq, action_ret);
- } else {
- printk(KERN_ERR "irq %d: nobody cared!\n", irq);
- }
- dump_stack();
- printk(KERN_ERR "handlers:\n");
- action = desc->action;
- do {
- printk(KERN_ERR "[<%p>]", action->handler);
- print_symbol(" (%s)",
- (unsigned long)action->handler);
- printk("\n");
- action = action->next;
- } while (action);
-}
-
-static void report_bad_irq(int irq, irq_desc_t *desc, irqreturn_t action_ret)
-{
- static int count = 100;
-
- if (count) {
- count--;
- __report_bad_irq(irq, desc, action_ret);
- }
-}
-
-static int noirqdebug;
-
-static int __init noirqdebug_setup(char *str)
-{
- noirqdebug = 1;
- printk("IRQ lockup detection disabled\n");
- return 1;
-}
-
-__setup("noirqdebug", noirqdebug_setup);
-
-/*
- * If 99,900 of the previous 100,000 interrupts have not been handled then
- * assume that the IRQ is stuck in some manner. Drop a diagnostic and try to
- * turn the IRQ off.
- *
- * (The other 100-of-100,000 interrupts may have been a correctly-functioning
- * device sharing an IRQ with the failing one)
- *
- * Called under desc->lock
- */
-static void note_interrupt(int irq, irq_desc_t *desc, irqreturn_t action_ret)
-{
- if (action_ret != IRQ_HANDLED) {
- desc->irqs_unhandled++;
- if (action_ret != IRQ_NONE)
- report_bad_irq(irq, desc, action_ret);
- }
-
- desc->irq_count++;
- if (desc->irq_count < 100000)
- return;
-
- desc->irq_count = 0;
- if (desc->irqs_unhandled > 99900) {
- /*
- * The interrupt is stuck
- */
- __report_bad_irq(irq, desc, action_ret);
- /*
- * Now kill the IRQ
- */
- printk(KERN_EMERG "Disabling IRQ #%d\n", irq);
- desc->status |= IRQ_DISABLED;
- desc->handler->disable(irq);
- }
- desc->irqs_unhandled = 0;
-}
-
-/*
- * Generic enable/disable code: this just calls
- * down into the PIC-specific version for the actual
- * hardware disable after having gotten the irq
- * controller lock.
- */
-
-/**
- * disable_irq_nosync - disable an irq without waiting
- * @irq: Interrupt to disable
- *
- * Disable the selected interrupt line. Disables and Enables are
- * nested.
- * Unlike disable_irq(), this function does not ensure existing
- * instances of the IRQ handler have completed before returning.
- *
- * This function may be called from IRQ context.
- */
-
-inline void disable_irq_nosync(unsigned int irq)
-{
- irq_desc_t *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&desc->lock, flags);
- if (!desc->depth++) {
- desc->status |= IRQ_DISABLED;
- desc->handler->disable(irq);
- }
- spin_unlock_irqrestore(&desc->lock, flags);
-}
-
-/**
- * disable_irq - disable an irq and wait for completion
- * @irq: Interrupt to disable
- *
- * Disable the selected interrupt line. Enables and Disables are
- * nested.
- * This function waits for any pending IRQ handlers for this interrupt
- * to complete before returning. If you use this function while
- * holding a resource the IRQ handler may need you will deadlock.
- *
- * This function may be called - with care - from IRQ context.
- */
-
-void disable_irq(unsigned int irq)
-{
- irq_desc_t *desc = irq_desc + irq;
- disable_irq_nosync(irq);
- if (desc->action)
- synchronize_irq(irq);
-}
-
-/**
- * enable_irq - enable handling of an irq
- * @irq: Interrupt to enable
- *
- * Undoes the effect of one call to disable_irq(). If this
- * matches the last disable, processing of interrupts on this
- * IRQ line is re-enabled.
- *
- * This function may be called from IRQ context.
- */
-
-void enable_irq(unsigned int irq)
-{
- irq_desc_t *desc = irq_desc + irq;
- unsigned long flags;
-
- spin_lock_irqsave(&desc->lock, flags);
- switch (desc->depth) {
- case 1: {
- unsigned int status = desc->status & ~IRQ_DISABLED;
- desc->status = status;
- if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
- desc->status = status | IRQ_REPLAY;
- hw_resend_irq(desc->handler,irq);
- }
- desc->handler->enable(irq);
- /* fall-through */
- }
- default:
- desc->depth--;
- break;
- case 0:
- printk("enable_irq(%u) unbalanced from %p\n", irq,
- __builtin_return_address(0));
- }
- spin_unlock_irqrestore(&desc->lock, flags);
-}
-
/*
* do_IRQ handles all normal device IRQ's (the special
* SMP cross-CPU interrupts have their own specific
*/
asmlinkage unsigned int do_IRQ(int irq, struct pt_regs *regs)
{
- /*
- * We ack quickly, we don't want the irq controller
- * thinking we're snobs just because some other CPU has
- * disabled global interrupts (we have already done the
- * INT_ACK cycles, it's too late to try to pretend to the
- * controller that we aren't taking the interrupt).
- *
- * 0 return value means that this irq is already being
- * handled by some other CPU. (or is disabled)
- */
- irq_desc_t *desc = irq_desc + irq;
- struct irqaction * action;
- unsigned int status;
-
irq_enter();
#ifdef CONFIG_DEBUG_STACKOVERFLOW
/* FIXME M32R */
#endif
- kstat_this_cpu.irqs[irq]++;
- spin_lock(&desc->lock);
- desc->handler->ack(irq);
- /*
- REPLAY is when Linux resends an IRQ that was dropped earlier
- WAITING is used by probe to mark irqs that are being tested
- */
- status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING);
- status |= IRQ_PENDING; /* we _want_ to handle it */
-
- /*
- * If the IRQ is disabled for whatever reason, we cannot
- * use the action we have.
- */
- action = NULL;
- if (likely(!(status & (IRQ_DISABLED | IRQ_INPROGRESS)))) {
- action = desc->action;
- status &= ~IRQ_PENDING; /* we commit to handling */
- status |= IRQ_INPROGRESS; /* we are handling it */
- }
- desc->status = status;
-
- /*
- * If there is no IRQ handler or it was disabled, exit early.
- Since we set PENDING, if another processor is handling
- a different instance of this same irq, the other processor
- will take care of it.
- */
- if (unlikely(!action))
- goto out;
-
- /*
- * Edge triggered interrupts need to remember
- * pending events.
- * This applies to any hw interrupts that allow a second
- * instance of the same irq to arrive while we are in do_IRQ
- * or in the handler. But the code here only handles the _second_
- * instance of the irq, not the third or fourth. So it is mostly
- * useful for irq hardware that does not mask cleanly in an
- * SMP environment.
- */
- for (;;) {
- irqreturn_t action_ret;
-
- spin_unlock(&desc->lock);
- action_ret = handle_IRQ_event(irq, regs, action);
- spin_lock(&desc->lock);
- if (!noirqdebug)
- note_interrupt(irq, desc, action_ret);
- if (likely(!(desc->status & IRQ_PENDING)))
- break;
- desc->status &= ~IRQ_PENDING;
- }
- desc->status &= ~IRQ_INPROGRESS;
-
-out:
- /*
- * The ->end() handler has to deal with interrupts which got
- * disabled while the handler was running.
- */
- desc->handler->end(irq);
- spin_unlock(&desc->lock);
-
+ __do_IRQ(irq, regs);
irq_exit();
-#if defined(CONFIG_SMP)
- if (irq == M32R_IRQ_MFT2)
- smp_send_timer();
-#endif /* CONFIG_SMP */
-
return 1;
}
-
-int can_request_irq(unsigned int irq, unsigned long irqflags)
-{
- struct irqaction *action;
-
- if (irq >= NR_IRQS)
- return 0;
- action = irq_desc[irq].action;
- if (action) {
- if (irqflags & action->flags & SA_SHIRQ)
- action = NULL;
- }
- return !action;
-}
-
-/**
- * request_irq - allocate an interrupt line
- * @irq: Interrupt line to allocate
- * @handler: Function to be called when the IRQ occurs
- * @irqflags: Interrupt type flags
- * @devname: An ascii name for the claiming device
- * @dev_id: A cookie passed back to the handler function
- *
- * This call allocates interrupt resources and enables the
- * interrupt line and IRQ handling. From the point this
- * call is made your handler function may be invoked. Since
- * your handler function must clear any interrupt the board
- * raises, you must take care both to initialise your hardware
- * and to set up the interrupt handler in the right order.
- *
- * Dev_id must be globally unique. Normally the address of the
- * device data structure is used as the cookie. Since the handler
- * receives this value it makes sense to use it.
- *
- * If your interrupt is shared you must pass a non NULL dev_id
- * as this is required when freeing the interrupt.
- *
- * Flags:
- *
- * SA_SHIRQ Interrupt is shared
- *
- * SA_INTERRUPT Disable local interrupts while processing
- *
- * SA_SAMPLE_RANDOM The interrupt can be used for entropy
- *
- */
-
-int request_irq(unsigned int irq,
- irqreturn_t (*handler)(int, void *, struct pt_regs *),
- unsigned long irqflags,
- const char * devname,
- void *dev_id)
-{
- int retval;
- struct irqaction * action;
-
-#if 1
- /*
- * Sanity-check: shared interrupts should REALLY pass in
- * a real dev-ID, otherwise we'll have trouble later trying
- * to figure out which interrupt is which (messes up the
- * interrupt freeing logic etc).
- */
- if (irqflags & SA_SHIRQ) {
- if (!dev_id)
- printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n", devname, (&irq)[-1]);
- }
-#endif
-
- if (irq >= NR_IRQS)
- return -EINVAL;
- if (!handler)
- return -EINVAL;
-
- action = (struct irqaction *)
- kmalloc(sizeof(struct irqaction), GFP_ATOMIC);
- if (!action)
- return -ENOMEM;
-
- action->handler = handler;
- action->flags = irqflags;
- cpus_clear(action->mask);
- action->name = devname;
- action->next = NULL;
- action->dev_id = dev_id;
-
- retval = setup_irq(irq, action);
- if (retval)
- kfree(action);
- return retval;
-}
-
-EXPORT_SYMBOL(request_irq);
-
-/**
- * free_irq - free an interrupt
- * @irq: Interrupt line to free
- * @dev_id: Device identity to free
- *
- * Remove an interrupt handler. The handler is removed and if the
- * interrupt line is no longer in use by any driver it is disabled.
- * On a shared IRQ the caller must ensure the interrupt is disabled
- * on the card it drives before calling this function. The function
- * does not return until any executing interrupts for this IRQ
- * have completed.
- *
- * This function must not be called from interrupt context.
- */
-
-void free_irq(unsigned int irq, void *dev_id)
-{
- irq_desc_t *desc;
- struct irqaction **p;
- unsigned long flags;
-
- if (irq >= NR_IRQS)
- return;
-
- desc = irq_desc + irq;
- spin_lock_irqsave(&desc->lock,flags);
- p = &desc->action;
- for (;;) {
- struct irqaction * action = *p;
- if (action) {
- struct irqaction **pp = p;
- p = &action->next;
- if (action->dev_id != dev_id)
- continue;
-
- /* Found it - now remove it from the list of entries */
- *pp = action->next;
- if (!desc->action) {
- desc->status |= IRQ_DISABLED;
- desc->handler->shutdown(irq);
- }
- spin_unlock_irqrestore(&desc->lock,flags);
-
- /* Wait to make sure it's not being used on another CPU */
- synchronize_irq(irq);
- kfree(action);
- return;
- }
- printk("Trying to free free IRQ%d\n",irq);
- spin_unlock_irqrestore(&desc->lock,flags);
- return;
- }
-}
-
-EXPORT_SYMBOL(free_irq);
-
-/*
- * IRQ autodetection code..
- *
- * This depends on the fact that any interrupt that
- * comes in on to an unassigned handler will get stuck
- * with "IRQ_WAITING" cleared and the interrupt
- * disabled.
- */
-
-static DECLARE_MUTEX(probe_sem);
-
-/**
- * probe_irq_on - begin an interrupt autodetect
- *
- * Commence probing for an interrupt. The interrupts are scanned
- * and a mask of potential interrupt lines is returned.
- *
- */
-
-unsigned long probe_irq_on(void)
-{
- unsigned int i;
- irq_desc_t *desc;
- unsigned long val;
- unsigned long delay;
-
- down(&probe_sem);
- /*
- * something may have generated an irq long ago and we want to
- * flush such a longstanding irq before considering it as spurious.
- */
- for (i = NR_IRQS-1; i > 0; i--) {
- desc = irq_desc + i;
-
- spin_lock_irq(&desc->lock);
- if (!irq_desc[i].action)
- irq_desc[i].handler->startup(i);
- spin_unlock_irq(&desc->lock);
- }
-
- /* Wait for longstanding interrupts to trigger. */
- for (delay = jiffies + HZ/50; time_after(delay, jiffies); )
- /* about 20ms delay */ barrier();
-
- /*
- * enable any unassigned irqs
- * (we must startup again here because if a longstanding irq
- * happened in the previous stage, it may have masked itself)
- */
- for (i = NR_IRQS-1; i > 0; i--) {
- desc = irq_desc + i;
-
- spin_lock_irq(&desc->lock);
- if (!desc->action) {
- desc->status |= IRQ_AUTODETECT | IRQ_WAITING;
- if (desc->handler->startup(i))
- desc->status |= IRQ_PENDING;
- }
- spin_unlock_irq(&desc->lock);
- }
-
- /*
- * Wait for spurious interrupts to trigger
- */
- for (delay = jiffies + HZ/10; time_after(delay, jiffies); )
- /* about 100ms delay */ barrier();
-
- /*
- * Now filter out any obviously spurious interrupts
- */
- val = 0;
- for (i = 0; i < NR_IRQS; i++) {
- irq_desc_t *desc = irq_desc + i;
- unsigned int status;
-
- spin_lock_irq(&desc->lock);
- status = desc->status;
-
- if (status & IRQ_AUTODETECT) {
- /* It triggered already - consider it spurious. */
- if (!(status & IRQ_WAITING)) {
- desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
- } else
- if (i < 32)
- val |= 1 << i;
- }
- spin_unlock_irq(&desc->lock);
- }
-
- return val;
-}
-
-EXPORT_SYMBOL(probe_irq_on);
-
-/*
- * Return a mask of triggered interrupts (this
- * can handle only legacy ISA interrupts).
- */
-
-/**
- * probe_irq_mask - scan a bitmap of interrupt lines
- * @val: mask of interrupts to consider
- *
- * Scan the ISA bus interrupt lines and return a bitmap of
- * active interrupts. The interrupt probe logic state is then
- * returned to its previous value.
- *
- * Note: we need to scan all the irq's even though we will
- * only return ISA irq numbers - just so that we reset them
- * all to a known state.
- */
-unsigned int probe_irq_mask(unsigned long val)
-{
- int i;
- unsigned int mask;
-
- mask = 0;
- for (i = 0; i < NR_IRQS; i++) {
- irq_desc_t *desc = irq_desc + i;
- unsigned int status;
-
- spin_lock_irq(&desc->lock);
- status = desc->status;
-
- if (status & IRQ_AUTODETECT) {
- if (i < 16 && !(status & IRQ_WAITING))
- mask |= 1 << i;
-
- desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
- }
- spin_unlock_irq(&desc->lock);
- }
- up(&probe_sem);
-
- return mask & val;
-}
-
-/*
- * Return the one interrupt that triggered (this can
- * handle any interrupt source).
- */
-
-/**
- * probe_irq_off - end an interrupt autodetect
- * @val: mask of potential interrupts (unused)
- *
- * Scans the unused interrupt lines and returns the line which
- * appears to have triggered the interrupt. If no interrupt was
- * found then zero is returned. If more than one interrupt is
- * found then minus the first candidate is returned to indicate
- * their is doubt.
- *
- * The interrupt probe logic state is returned to its previous
- * value.
- *
- * BUGS: When used in a module (which arguably shouldnt happen)
- * nothing prevents two IRQ probe callers from overlapping. The
- * results of this are non-optimal.
- */
-
-int probe_irq_off(unsigned long val)
-{
- int i, irq_found, nr_irqs;
-
- nr_irqs = 0;
- irq_found = 0;
- for (i = 0; i < NR_IRQS; i++) {
- irq_desc_t *desc = irq_desc + i;
- unsigned int status;
-
- spin_lock_irq(&desc->lock);
- status = desc->status;
-
- if (status & IRQ_AUTODETECT) {
- if (!(status & IRQ_WAITING)) {
- if (!nr_irqs)
- irq_found = i;
- nr_irqs++;
- }
- desc->status = status & ~IRQ_AUTODETECT;
- desc->handler->shutdown(i);
- }
- spin_unlock_irq(&desc->lock);
- }
- up(&probe_sem);
-
- if (nr_irqs > 1)
- irq_found = -irq_found;
- return irq_found;
-}
-
-EXPORT_SYMBOL(probe_irq_off);
-
-/* this was setup_x86_irq but it seems pretty generic */
-int setup_irq(unsigned int irq, struct irqaction * new)
-{
- int shared = 0;
- unsigned long flags;
- struct irqaction *old, **p;
- irq_desc_t *desc = irq_desc + irq;
-
- if (desc->handler == &no_irq_type)
- return -ENOSYS;
- /*
- * Some drivers like serial.c use request_irq() heavily,
- * so we have to be careful not to interfere with a
- * running system.
- */
- if (new->flags & SA_SAMPLE_RANDOM) {
- /*
- * This function might sleep, we want to call it first,
- * outside of the atomic block.
- * Yes, this might clear the entropy pool if the wrong
- * driver is attempted to be loaded, without actually
- * installing a new handler, but is this really a problem,
- * only the sysadmin is able to do this.
- */
- rand_initialize_irq(irq);
- }
-
- /*
- * The following block of code has to be executed atomically
- */
- spin_lock_irqsave(&desc->lock,flags);
- p = &desc->action;
- if ((old = *p) != NULL) {
- /* Can't share interrupts unless both agree to */
- if (!(old->flags & new->flags & SA_SHIRQ)) {
- spin_unlock_irqrestore(&desc->lock,flags);
- return -EBUSY;
- }
-
- /* add new interrupt at end of irq queue */
- do {
- p = &old->next;
- old = *p;
- } while (old);
- shared = 1;
- }
-
- *p = new;
-
- if (!shared) {
- desc->depth = 0;
- desc->status &= ~(IRQ_DISABLED | IRQ_AUTODETECT | IRQ_WAITING | IRQ_INPROGRESS);
- desc->handler->startup(irq);
- }
- spin_unlock_irqrestore(&desc->lock,flags);
-
- register_irq_proc(irq);
- return 0;
-}
-
-static struct proc_dir_entry * root_irq_dir;
-static struct proc_dir_entry * irq_dir [NR_IRQS];
-
-#ifdef CONFIG_SMP
-
-static struct proc_dir_entry *smp_affinity_entry[NR_IRQS];
-
-cpumask_t irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = CPU_MASK_ALL };
-
-static int irq_affinity_read_proc(char *page, char **start, off_t off,
- int count, int *eof, void *data)
-{
- int len = cpumask_scnprintf(page, count, irq_affinity[(long)data]);
- if (count - len < 2)
- return -EINVAL;
- len += sprintf(page + len, "\n");
- return len;
-}
-
-static int irq_affinity_write_proc(struct file *file, const char __user *buffer,
- unsigned long count, void *data)
-{
- int irq = (long)data, full_count = count, err;
- cpumask_t new_value, tmp;
-
- if (!irq_desc[irq].handler->set_affinity)
- return -EIO;
-
- err = cpumask_parse(buffer, count, new_value);
- if (err)
- return err;
-
- /*
- * Do not allow disabling IRQs completely - it's a too easy
- * way to make the system unusable accidentally :-) At least
- * one online CPU still has to be targeted.
- */
- cpus_and(tmp, new_value, cpu_online_map);
- if (cpus_empty(tmp))
- return -EINVAL;
-
- irq_affinity[irq] = new_value;
- irq_desc[irq].handler->set_affinity(irq,
- cpumask_of_cpu(first_cpu(new_value)));
-
- return full_count;
-}
-
-#endif
-
-static int prof_cpu_mask_read_proc (char *page, char **start, off_t off,
- int count, int *eof, void *data)
-{
- int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
- if (count - len < 2)
- return -EINVAL;
- len += sprintf(page + len, "\n");
- return len;
-}
-
-static int prof_cpu_mask_write_proc (struct file *file, const char __user *buffer,
- unsigned long count, void *data)
-{
- cpumask_t *mask = (cpumask_t *)data;
- unsigned long full_count = count, err;
- cpumask_t new_value;
-
- err = cpumask_parse(buffer, count, new_value);
- if (err)
- return err;
-
- *mask = new_value;
- return full_count;
-}
-
-#define MAX_NAMELEN 10
-
-static void register_irq_proc (unsigned int irq)
-{
- char name [MAX_NAMELEN];
-
- if (!root_irq_dir || (irq_desc[irq].handler == &no_irq_type) ||
- irq_dir[irq])
- return;
-
- memset(name, 0, MAX_NAMELEN);
- sprintf(name, "%d", irq);
-
- /* create /proc/irq/1234 */
- irq_dir[irq] = proc_mkdir(name, root_irq_dir);
-
-#ifdef CONFIG_SMP
- {
- struct proc_dir_entry *entry;
-
- /* create /proc/irq/1234/smp_affinity */
- entry = create_proc_entry("smp_affinity", 0600, irq_dir[irq]);
-
- if (entry) {
- entry->nlink = 1;
- entry->data = (void *)(long)irq;
- entry->read_proc = irq_affinity_read_proc;
- entry->write_proc = irq_affinity_write_proc;
- }
-
- smp_affinity_entry[irq] = entry;
- }
-#endif
-}
-
-unsigned long prof_cpu_mask = -1;
-
-void init_irq_proc (void)
-{
- struct proc_dir_entry *entry;
- int i;
-
- /* create /proc/irq */
- root_irq_dir = proc_mkdir("irq", NULL);
-
- /* create /proc/irq/prof_cpu_mask */
- entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
-
- if (!entry)
- return;
-
- entry->nlink = 1;
- entry->data = (void *)&prof_cpu_mask;
- entry->read_proc = prof_cpu_mask_read_proc;
- entry->write_proc = prof_cpu_mask_write_proc;
-
- /*
- * Create entries for all existing IRQs.
- */
- for (i = 0; i < NR_IRQS; i++)
- register_irq_proc(i);
-}
-
/*
* linux/arch/m32r/kernel/process.c
- * orig : sh
*
* Copyright (c) 2001, 2002 Hiroyuki Kondo, Hirokazu Takata,
* Hitoshi Yamamoto
}
asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
- unsigned long r2, unsigned long r3, unsigned long r4, unsigned long r5,
- unsigned long r6, struct pt_regs regs)
+ unsigned long parent_tidptr,
+ unsigned long child_tidptr,
+ unsigned long r4, unsigned long r5, unsigned long r6,
+ struct pt_regs regs)
{
if (!newsp)
newsp = regs.spu;
-	return do_fork(clone_flags, newsp, &regs, 0, NULL, NULL);
+	return do_fork(clone_flags, newsp, &regs, 0,
+ (int __user *)parent_tidptr, (int __user *)child_tidptr);
}
/*
/*
* sys_execve() executes a new program.
*/
-asmlinkage int sys_execve(char __user *ufilename, char __user * __user *uargv, char __user * __user *uenvp,
- unsigned long r3, unsigned long r4, unsigned long r5, unsigned long r6,
- struct pt_regs regs)
+asmlinkage int sys_execve(char __user *ufilename, char __user * __user *uargv,
+ char __user * __user *uenvp,
+ unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, struct pt_regs regs)
{
int error;
char *filename;
/* M32R_FIXME */
return (0);
}
-
* linux/arch/m32r/kernel/ptrace.c
*
* Copyright (C) 2002 Hirokazu Takata, Takeo Takahashi
- * Copyright (C) 2004 Hirokazu Takata <takata at linux-m32r.org>
+ * Copyright (C) 2004 Hirokazu Takata, Kei Sakamoto
*
* Original x86 implementation:
* By Ross Biro 1/23/92
#ifndef NO_FPU
else if (off >= (long)(&dummy->fpu >> 2) &&
off < (long)(&dummy->u_fpvalid >> 2)) {
- if (!tsk->used_math) {
+ if (!tsk_used_math(tsk)) {
if (off == (long)(&dummy->fpu.fpscr >> 2))
tmp = FPSCR_INIT;
else
tmp = ((long *)(&tsk->thread.fpu >> 2))
[off - (long)&dummy->fpu];
} else if (off == (long)(&dummy->u_fpvalid >> 2))
- tmp = tsk->used_math;
+ tmp = !!tsk_used_math(tsk);
#endif /* not NO_FPU */
else
tmp = 0;
#ifndef NO_FPU
else if (off >= (long)(&dummy->fpu >> 2) &&
off < (long)(&dummy->u_fpvalid >> 2)) {
- tsk->used_math = 1;
+ set_stopped_child_used_math(tsk);
((long *)&tsk->thread.fpu)
[off - (long)&dummy->fpu] = data;
ret = 0;
} else if (off == (long)(&dummy->u_fpvalid >> 2)) {
- tsk->used_math = data ? 1 : 0;
+ conditional_stopped_child_used_math(data, tsk);
ret = 0;
}
#endif /* not NO_FPU */
struct debug_trap *p = &child->thread.debug_trap;
unsigned long addr = next_pc & ~3;
- if (p->nr_trap != 0) {
+ if (p->nr_trap == MAX_TRAPS) {
printk("kernel BUG at %s %d: p->nr_trap = %d\n",
__FILE__, __LINE__, p->nr_trap);
return -1;
}
- p->addr = addr;
- p->insn = next_insn;
+ p->addr[p->nr_trap] = addr;
+ p->insn[p->nr_trap] = next_insn;
p->nr_trap++;
if (next_pc & 3) {
*code = (next_insn & 0xffff0000) | 0x10f1;
return 0;
}
-int withdraw_debug_trap_for_signal(struct task_struct *child)
-{
- struct debug_trap *p = &child->thread.debug_trap;
- int nr_trap = p->nr_trap;
-
- if (nr_trap) {
- access_process_vm(child, p->addr, &p->insn, sizeof(p->insn), 1);
- p->nr_trap = 0;
- p->addr = 0;
- p->insn = 0;
- }
- return nr_trap;
-}
-
static int
unregister_debug_trap(struct task_struct *child, unsigned long addr,
unsigned long *code)
{
struct debug_trap *p = &child->thread.debug_trap;
+ int i;
- if (p->nr_trap != 1 || p->addr != addr) {
+ /* Search debug trap entry. */
+ for (i = 0; i < p->nr_trap; i++) {
+ if (p->addr[i] == addr)
+ break;
+ }
+ if (i >= p->nr_trap) {
/* The trap may be requested from debugger.
* ptrace should do nothing in this case.
*/
return 0;
}
- *code = p->insn;
- p->insn = 0;
- p->addr = 0;
+
+	/* Recover the original instruction code. */
+ *code = p->insn[i];
+
+ /* Shift debug trap entries. */
+ while (i < p->nr_trap - 1) {
+ p->insn[i] = p->insn[i + 1];
+ p->addr[i] = p->addr[i + 1];
+ i++;
+ }
p->nr_trap--;
return 1;
}
unregister_all_debug_traps(struct task_struct *child)
{
struct debug_trap *p = &child->thread.debug_trap;
+ int i;
- if (p->nr_trap) {
- access_process_vm(child, p->addr, &p->insn, sizeof(p->insn), 1);
- p->addr = 0;
- p->insn = 0;
- p->nr_trap = 0;
- }
+ for (i = 0; i < p->nr_trap; i++)
+ access_process_vm(child, p->addr[i], &p->insn[i], sizeof(p->insn[i]), 1);
+ p->nr_trap = 0;
}
static inline void
return 0; /* success */
}
-void
-embed_debug_trap_for_signal(struct task_struct *child)
-{
- unsigned long next_pc;
- unsigned long pc, insn;
- int ret;
-
- pc = get_stack_long(child, PT_BPC);
- ret = access_process_vm(child, pc&~3, &insn, sizeof(insn), 0);
- if (ret != sizeof(insn)) {
- printk("kernel BUG at %s %d: access_process_vm returns %d\n",
- __FILE__, __LINE__, ret);
- return;
- }
- compute_next_pc(insn, pc, &next_pc, child);
- if (next_pc & 0x80000000) {
- printk("kernel BUG at %s %d: next_pc = 0x%08x\n",
- __FILE__, __LINE__, (int)next_pc);
- return;
- }
- if (embed_debug_trap(child, next_pc)) {
- printk("kernel BUG at %s %d: embed_debug_trap error\n",
- __FILE__, __LINE__);
- return;
- }
- invalidate_cache();
-}
-
void
withdraw_debug_trap(struct pt_regs *regs)
{
init_debug_traps(struct task_struct *child)
{
struct debug_trap *p = &child->thread.debug_trap;
+ int i;
p->nr_trap = 0;
- p->addr = 0;
- p->insn = 0;
+ for (i = 0; i < MAX_TRAPS; i++) {
+ p->addr[i] = 0;
+ p->insn[i] = 0;
+ }
}
current->exit_code = 0;
}
}
-
#endif
#ifdef CONFIG_DISCONTIGMEM
- numnodes = 2;
+ nodes_clear(node_online_map);
+ node_set_online(0);
+ node_set_online(1);
#endif /* CONFIG_DISCONTIGMEM */
init_mm.start_code = (unsigned long) _text;
/* Force FPU initialization */
current_thread_info()->status = 0;
- current->used_math = 0;
+ clear_used_math();
#ifdef CONFIG_MMU
/* Set up MMU */
int do_signal(struct pt_regs *, sigset_t *);
-/*
- * Atomically swap in the new signal mask, and wait for a signal.
- */
-asmlinkage int
-sys_sigsuspend(old_sigset_t mask, unsigned long r1,
- unsigned long r2, unsigned long r3, unsigned long r4,
- unsigned long r5, unsigned long r6, struct pt_regs regs)
-{
- sigset_t saveset;
-
- mask &= _BLOCKABLE;
-	spin_lock_irq(&current->sighand->siglock);
-	saveset = current->blocked;
-	siginitset(&current->blocked, mask);
-	recalc_sigpending();
-	spin_unlock_irq(&current->sighand->siglock);
-
- regs.r0 = -EINTR;
- while (1) {
- current->state = TASK_INTERRUPTIBLE;
- schedule();
-		if (do_signal(&regs, &saveset))
- return regs.r0;
- }
-}
-
asmlinkage int
sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize,
unsigned long r2, unsigned long r3, unsigned long r4,
}
}
-asmlinkage int
-sys_sigaction(int sig, const struct old_sigaction __user *act,
- struct old_sigaction __user *oact)
-{
- struct k_sigaction new_ka, old_ka;
- int ret;
-
- if (act) {
- old_sigset_t mask;
- if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
- __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
- __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
- return -EFAULT;
- __get_user(new_ka.sa.sa_flags, &act->sa_flags);
- __get_user(mask, &act->sa_mask);
- siginitset(&new_ka.sa.sa_mask, mask);
- }
-
- ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
-
- if (!ret && oact) {
- if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
- __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
- __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
- return -EFAULT;
- __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
- __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
- }
-
- return ret;
-}
-
asmlinkage int
sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
unsigned long r2, unsigned long r3, unsigned long r4,
* Do a signal return; undo the signal stack.
*/
-struct sigframe
-{
-// char *pretcode;
- int sig;
- struct sigcontext sc;
-// struct _fpstate fpstate;
- unsigned long extramask[_NSIG_WORDS-1];
- char retcode[4];
-};
-
struct rt_sigframe
{
-// char *pretcode;
int sig;
struct siginfo *pinfo;
void *puc;
struct siginfo info;
struct ucontext uc;
// struct _fpstate fpstate;
- char retcode[8];
};
static int
return err;
}
-asmlinkage int
-sys_sigreturn(unsigned long r0, unsigned long r1,
- unsigned long r2, unsigned long r3, unsigned long r4,
- unsigned long r5, unsigned long r6, struct pt_regs regs)
-{
- struct sigframe __user *frame = (struct sigframe __user *)regs.spu;
- sigset_t set;
- int result;
-
- if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
- goto badframe;
- if (__get_user(set.sig[0], &frame->sc.oldmask)
- || (_NSIG_WORDS > 1
- && __copy_from_user(&set.sig[1], &frame->extramask,
- sizeof(frame->extramask))))
- goto badframe;
-
- sigdelsetmask(&set, ~_BLOCKABLE);
-	spin_lock_irq(&current->sighand->siglock);
-	current->blocked = set;
-	recalc_sigpending();
-	spin_unlock_irq(&current->sighand->siglock);
-
-	if (restore_sigcontext(&regs, &frame->sc, &result))
- goto badframe;
- return result;
-
-badframe:
- force_sig(SIGSEGV, current);
- return 0;
-}
-
asmlinkage int
sys_rt_sigreturn(unsigned long r0, unsigned long r1,
unsigned long r2, unsigned long r3, unsigned long r4,
return (void __user *)((sp - frame_size) & -8ul);
}
-static void setup_frame(int sig, struct k_sigaction *ka,
- sigset_t *set, struct pt_regs *regs)
-{
- struct sigframe __user *frame;
- int err = 0;
- int signal;
-
- frame = get_sigframe(ka, regs->spu, sizeof(*frame));
-
- if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
- goto give_sigsegv;
-
- signal = current_thread_info()->exec_domain
- && current_thread_info()->exec_domain->signal_invmap
- && sig < 32
- ? current_thread_info()->exec_domain->signal_invmap[sig]
- : sig;
-
- err |= __put_user(signal, &frame->sig);
- if (err)
- goto give_sigsegv;
-
- err |= setup_sigcontext(&frame->sc, regs, set->sig[0]);
- if (err)
- goto give_sigsegv;
-
- if (_NSIG_WORDS > 1) {
- err |= __copy_to_user(frame->extramask, &set->sig[1],
- sizeof(frame->extramask));
- if (err)
- goto give_sigsegv;
- }
-
- if (ka->sa.sa_flags & SA_RESTORER)
- regs->lr = (unsigned long)ka->sa.sa_restorer;
- else {
- /* This is : ldi r7, #__NR_sigreturn ; trap #2 */
- unsigned long code = 0x670010f2 | (__NR_sigreturn << 16);
-
- regs->lr = (unsigned long)frame->retcode;
- err |= __put_user(code, (long __user *)(frame->retcode+0));
- if (err)
- goto give_sigsegv;
- flush_cache_sigtramp((unsigned long)frame->retcode);
- }
-
- /* Set up registers for signal handler */
- regs->spu = (unsigned long)frame;
- regs->r0 = signal; /* Arg for signal handler */
- regs->r1 = (unsigned long)&frame->sc;
- regs->bpc = (unsigned long)ka->sa.sa_handler;
-
- set_fs(USER_DS);
-
-#if DEBUG_SIG
- printk("SIG deliver (%s:%d): sp=%p pc=%p\n",
- current->comm, current->pid, frame, regs->pc);
-#endif
-
- return;
-
-give_sigsegv:
- force_sigsegv(sig, current);
-}
-
static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs *regs)
{
goto give_sigsegv;
/* Set up to return from userspace. */
- if (ka->sa.sa_flags & SA_RESTORER)
- regs->lr = (unsigned long)ka->sa.sa_restorer;
- else {
- /* This is : ldi r7, #__NR_rt_sigreturn ; trap #2 */
- unsigned long code1 = 0x97f00000 | (__NR_rt_sigreturn);
- unsigned long code2 = 0x10f2f000;
-
- regs->lr = (unsigned long)frame->retcode;
- err |= __put_user(code1, (long __user *)(frame->retcode+0));
- err |= __put_user(code2, (long __user *)(frame->retcode+4));
- if (err)
- goto give_sigsegv;
- flush_cache_sigtramp((unsigned long)frame->retcode);
- }
+ regs->lr = (unsigned long)ka->sa.sa_restorer;
/* Set up registers for signal handler */
regs->spu = (unsigned long)frame;
}
/* Set up the stack frame */
- if (ka->sa.sa_flags & SA_SIGINFO)
- setup_rt_frame(sig, ka, info, oldset, regs);
- else
- setup_frame(sig, ka, oldset, regs);
+ setup_rt_frame(sig, ka, info, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
 		spin_lock_irq(&current->sighand->siglock);
* Structure and data for smp_call_function(). This is designed to minimise
* static memory requirements. It also looks cleaner.
*/
-static spinlock_t call_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(call_lock);
struct call_data_struct {
void (*func) (void *info);
/*
* For flush_cache_all()
*/
-static spinlock_t flushcache_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(flushcache_lock);
static volatile unsigned long flushcache_cpumask = 0;
/*
static struct mm_struct *flush_mm;
static struct vm_area_struct *flush_vma;
static volatile unsigned long flush_va;
-static spinlock_t tlbstate_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(tlbstate_lock);
#define FLUSH_ALL 0xffffffff
DECLARE_PER_CPU(int, prof_multiplier);
"ldi r4, #1 \n\t"
"st r4, @%2 \n\t"
: "=&r"(ipicr_val)
- : "r"(flags), "r"(&ipilock->lock), "r"(ipicr_addr),
+ : "r"(flags), "r"(&ipilock->slock), "r"(ipicr_addr),
"r"(mask), "r"(try), "r"(my_physid_mask)
: "memory", "r4"
#ifdef CONFIG_CHIP_M32700_TS1
#define Dprintk(x...)
#endif
-extern int cpu_idle(void);
extern cpumask_t cpu_initialized;
/*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*/
int ipi;
for (ipi = 0 ; ipi < NR_IPIS ; ipi++)
- ipi_lock[ipi] = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&ipi_lock[ipi]);
}
/*==========================================================================*
*/
local_flush_tlb_all();
- return cpu_idle();
+ cpu_idle();
+ return 0;
}
/*==========================================================================*
#else /* CONFIG_SMP */
#include <linux/spinlock.h>
-static spinlock_t tas_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(tas_lock);
asmlinkage int sys_tas(int *addr)
{
printk("\n");
}
-spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(die_lock);
void die(const char * str, struct pt_regs * regs, long err)
{
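The recurring change in the hunks above replaces open-coded spinlock initializers (static spinlock_t foo = SPIN_LOCK_UNLOCKED;) and runtime SPIN_LOCK_UNLOCKED assignments with the DEFINE_SPINLOCK() and spin_lock_init() helpers. A minimal sketch of the idiom follows; the lock and function names are illustrative only, not taken from the patch:

#include <linux/spinlock.h>

/* Replaces: static spinlock_t example_lock = SPIN_LOCK_UNLOCKED; */
static DEFINE_SPINLOCK(example_lock);

static void example_critical_section(void)
{
	unsigned long flags;

	/* Locks declared with DEFINE_SPINLOCK() are ready to use at once. */
	spin_lock_irqsave(&example_lock, flags);
	/* ... code that must not race with other CPUs or local interrupts ... */
	spin_unlock_irqrestore(&example_lock, flags);
}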
* operating system. INET is implemented using the BSD Socket
* interface as the means of communication with the user level.
*
- * MIPS specific IP/TCP/UDP checksumming routines
+ * M32R specific IP/TCP/UDP checksumming routines
+ * (Some code taken from MIPS architecture)
*
- * Authors: Ralf Baechle, <ralf@waldorf-gmbh.de>
- * Lots of code moved from tcp.c and ip.c; see those files
- * for more names.
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
+ * Copyright (C) 1994, 1995 Waldorf Electronics GmbH
+ * Copyright (C) 1998, 1999 Ralf Baechle
+ * Copyright (C) 2001-2005 Hiroyuki Kondo, Hirokazu Takata
*
*/
/*
* Copy while checksumming, otherwise like csum_partial
*/
-unsigned int csum_partial_copy_nocheck (const char *src, char *dst,
- int len, unsigned int sum)
+unsigned int
+csum_partial_copy_nocheck (const unsigned char *src, unsigned char *dst,
+ int len, unsigned int sum)
{
sum = csum_partial(src, len, sum);
memcpy(dst, src, len);
* Copy from userspace and compute checksum. If we catch an exception
* then zero the rest of the buffer.
*/
-unsigned int csum_partial_copy_from_user (const char __user *src, char *dst,
- int len, unsigned int sum,
- int *err_ptr)
+unsigned int
+csum_partial_copy_from_user (const unsigned char __user *src,
+ unsigned char *dst,
+ int len, unsigned int sum, int *err_ptr)
{
int missing;
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:45 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:10:50 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
CONFIG_SMP=y
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
#
# CONFIG_SCSI_SATA is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
# CONFIG_SCSI_DEBUG is not set
#
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_REISERFS_FS_XATTR is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:49 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:10:54 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
#
# CONFIG_SCSI_SATA is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
# CONFIG_SCSI_DEBUG is not set
#
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_REISERFS_FS_XATTR is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:51 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:10:57 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_SHMEM=y
CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
-# CONFIG_TINY_SHMEM is not set
+CONFIG_TINY_SHMEM=y
#
# Loadable module support
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# Pseudo filesystems
#
CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVFS_FS=y
CONFIG_DEVFS_MOUNT=y
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:53 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:11:02 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
CONFIG_IRAM_SIZE=0x00080000
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
CONFIG_SMP=y
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
CONFIG_MTD_PARTITIONS=y
# CONFIG_MTD_CONCAT is not set
CONFIG_MTD_REDBOOT_PARTS=y
+CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
# CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED is not set
# CONFIG_MTD_REDBOOT_PARTS_READONLY is not set
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_JFFS2_FS_NOR_ECC is not set
# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:55 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:11:07 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
CONFIG_IRAM_SIZE=0x00080000
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
CONFIG_MTD_PARTITIONS=y
# CONFIG_MTD_CONCAT is not set
CONFIG_MTD_REDBOOT_PARTS=y
+CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
# CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED is not set
# CONFIG_MTD_REDBOOT_PARTS_READONLY is not set
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_JFFS2_FS_NOR_ECC is not set
# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:08:58 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:11:10 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
#
# CONFIG_SCSI_SATA is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
# CONFIG_SCSI_DEBUG is not set
#
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_REISERFS_FS_XATTR is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
mem_prof_init();
- for (nid = 0 ; nid < numnodes ; nid++) {
+ for_each_online_node(nid) {
mp = &mem_prof[nid];
NODE_DATA(nid)=(pg_data_t *)&m32r_node_data[nid];
NODE_DATA(nid)->bdata = &node_bdata[nid];
mem_prof_t *mp;
pgdat_list = NULL;
- for (nid = numnodes - 1 ; nid >= 0 ; nid--) {
+ for (nid = num_online_nodes() - 1 ; nid >= 0 ; nid--) {
NODE_DATA(nid)->pgdat_next = pgdat_list;
pgdat_list = NODE_DATA(nid);
}
- for (nid = 0 ; nid < numnodes ; nid++) {
+ for_each_online_node(nid) {
mp = &mem_prof[nid];
for (i = 0 ; i < MAX_NR_ZONES ; i++) {
zones_size[i] = 0;
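The discontig-memory hunks above, together with the earlier setup hunk that replaces "numnodes = 2" with node_set_online() calls, move the node loops from the old numnodes counter to the nodemask API. A minimal sketch of the pattern, with a hypothetical per-node initializer:

#include <linux/nodemask.h>

static void example_node_setup(void)
{
	int nid;

	/* Declare which nodes exist, as the board setup code above now does. */
	nodes_clear(node_online_map);
	node_set_online(0);
	node_set_online(1);

	/* Walk only the nodes that were marked online. */
	for_each_online_node(nid) {
		/* per-node bootmem and zone initialization goes here */
	}
}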
* linux/arch/m32r/mm/fault.c
*
* Copyright (c) 2001, 2002 Hitoshi Yamamoto, and H. Kondo
+ * Copyright (c) 2004 Naoto Sugai, NIIBE Yutaka
*
* Some code taken from i386 version.
* Copyright (C) 1995 Linus Torvalds
*/
-/* $Id$ */
-
#include <linux/config.h>
#include <linux/signal.h>
#include <linux/sched.h>
* bit 2 == 0 means kernel, 1 means user-mode
* bit 3 == 0 means data, 1 means instruction
*======================================================================*/
+#define ACE_PROTECTION 1
+#define ACE_WRITE 2
+#define ACE_USERMODE 4
+#define ACE_INSTRUCTION 8
+
asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code,
unsigned long address)
{
* nothing more.
*
* This verifies that the fault happens in kernel space
- * (error_code & 4) == 0, and that the fault was not a
- * protection error (error_code & 1) == 0.
+ * (error_code & ACE_USERMODE) == 0, and that the fault was not a
+ * protection error (error_code & ACE_PROTECTION) == 0.
*/
- if (address >= TASK_SIZE && !(error_code & 4))
+ if (address >= TASK_SIZE && !(error_code & ACE_USERMODE))
goto vmalloc_fault;
mm = tsk->mm;
* thus avoiding the deadlock.
*/
if (!down_read_trylock(&mm->mmap_sem)) {
- if ((error_code & 4) == 0 &&
+ if ((error_code & ACE_USERMODE) == 0 &&
!search_exception_tables(regs->psw))
goto bad_area_nosemaphore;
down_read(&mm->mmap_sem);
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
#if 0
- if (error_code & 4) {
+ if (error_code & ACE_USERMODE) {
/*
* accessing the stack below "spu" is always a bug.
* The "+ 4" is there due to the push instruction
good_area:
info.si_code = SEGV_ACCERR;
write = 0;
- switch (error_code & 3) {
+ switch (error_code & (ACE_WRITE|ACE_PROTECTION)) {
default: /* 3: write, present */
/* fall through */
- case 2: /* write, not present */
+ case ACE_WRITE: /* write, not present */
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
write++;
break;
- case 1: /* read, present */
+ case ACE_PROTECTION: /* read, present */
case 0: /* read, not present */
if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
goto bad_area;
}
+ /*
+ * For instruction access exception, check if the area is executable
+ */
+ if ((error_code & ACE_INSTRUCTION) && !(vma->vm_flags & VM_EXEC))
+ goto bad_area;
+
survive:
/*
* If for any reason at all we couldn't handle the fault,
* make sure we exit gracefully rather than endlessly redo
* the fault.
*/
- addr = (address & PAGE_MASK) | (error_code & 8);
+ addr = (address & PAGE_MASK);
+ set_thread_fault_code(error_code);
switch (handle_mm_fault(mm, vma, addr, write)) {
case VM_FAULT_MINOR:
tsk->min_flt++;
default:
BUG();
}
-
+ set_thread_fault_code(0);
up_read(&mm->mmap_sem);
return;
bad_area_nosemaphore:
/* User mode accesses just cause a SIGSEGV */
- if (error_code & 4) {
+ if (error_code & ACE_USERMODE) {
tsk->thread.address = address;
tsk->thread.error_code = error_code | (address >= TASK_SIZE);
tsk->thread.trap_no = 14;
goto survive;
}
printk("VM: killing process %s\n", tsk->comm);
- if (error_code & 4)
+ if (error_code & ACE_USERMODE)
do_exit(SIGKILL);
goto no_context;
up_read(&mm->mmap_sem);
/* Kernel mode? Handle exception or die */
- if (!(error_code & 4))
+ if (!(error_code & ACE_USERMODE))
goto no_context;
tsk->thread.address = address;
if (!pte_present(*pte_k))
goto no_context;
- addr = (address & PAGE_MASK) | (error_code & 8);
+ addr = (address & PAGE_MASK) | (error_code & ACE_INSTRUCTION);
update_mmu_cache(NULL, addr, *pte_k);
return;
}
unsigned long *entry1, *entry2;
unsigned long pte_data, flags;
unsigned int *entry_dat;
- int inst = vaddr & 8;
+ int inst = get_thread_fault_code() & ACE_INSTRUCTION;
int i;
/* Ptrace may call this routine. */
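Per the comment and the new ACE_* definitions in the fault-handler hunks above, the page-fault error code is a small bitmask: bit 0 set means a protection fault on a present page, bit 1 a write access, bit 2 a user-mode access, and bit 3 an instruction fetch. A short illustrative decoder (the helper name is hypothetical and not part of the patch):

static void example_decode_fault(unsigned long error_code)
{
	int user  = (error_code & ACE_USERMODE)    != 0;
	int write = (error_code & ACE_WRITE)       != 0;
	int prot  = (error_code & ACE_PROTECTION)  != 0;
	int insn  = (error_code & ACE_INSTRUCTION) != 0;

	printk("fault: %s-mode %s %s, page %s\n",
	       user  ? "user"        : "kernel",
	       insn  ? "instruction" : "data",
	       write ? "write"       : "read",
	       prot  ? "present"     : "not present");
}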
* Copyright (C) 1995 Linus Torvalds
*/
-/* $Id$ */
-
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/highmem.h>
#include <linux/bitops.h>
+#include <linux/nodemask.h>
#include <asm/types.h>
#include <asm/processor.h>
#include <asm/page.h>
int reservedpages, nid, i;
reservedpages = 0;
- for (nid = 0 ; nid < numnodes ; nid++)
+ for_each_online_node(nid)
for (i = 0 ; i < MAX_LOW_PFN(nid) - START_PFN(nid) ; i++)
if (PageReserved(NODE_DATA(nid)->node_mem_map + i))
reservedpages++;
#endif
num_physpages = 0;
- for (nid = 0 ; nid < numnodes ; nid++)
+ for_each_online_node(nid)
num_physpages += MAX_LOW_PFN(nid) - START_PFN(nid) + 1;
num_physpages -= hole_pages;
memset(empty_zero_page, 0, PAGE_SIZE);
/* this will put all low memory onto the freelists */
- for (nid = 0 ; nid < numnodes ; nid++)
+ for_each_online_node(nid)
totalram_pages += free_all_bootmem_node(NODE_DATA(nid));
reservedpages = reservedpages_count() - hole_pages;
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:09:00 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:11:13 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_FUTEX is not set
# CONFIG_EPOLL is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_SHMEM=y
CONFIG_CC_ALIGN_FUNCTIONS=0
CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
-# CONFIG_TINY_SHMEM is not set
+CONFIG_TINY_SHMEM=y
#
# Loadable module support
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_PREEMPT=y
# CONFIG_HAVE_DEC_LOCK is not set
# CONFIG_SMP is not set
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_JBD is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# Pseudo filesystems
#
CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
# CONFIG_DEVFS_FS is not set
CONFIG_DEVPTS_FS_XATTR=y
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
# CONFIG_DEBUG_KERNEL is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_FRAME_POINTER is not set
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#include <linux/errno.h>
#include <linux/init.h>
-extern void timer_init(struct oprofile_operations ** ops);
-
-int __init oprofile_arch_init(struct oprofile_operations ** ops)
+int __init oprofile_arch_init(struct oprofile_operations * ops)
{
return -ENODEV;
}
-
void oprofile_arch_exit(void)
{
}
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc1-bk21
-# Fri Nov 12 16:09:02 2004
+# Linux kernel version: 2.6.11-rc4
+# Wed Feb 16 21:11:41 2005
#
CONFIG_M32R=y
-CONFIG_UID16=y
+# CONFIG_UID16 is not set
CONFIG_GENERIC_ISA_DMA=y
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
#
# Code maturity level options
# CONFIG_DISCONTIGMEM is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
+CONFIG_GENERIC_CALIBRATE_DELAY=y
# CONFIG_PREEMPT is not set
# CONFIG_SMP is not set
#
CONFIG_PCCARD=y
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_IOSCHED_AS is not set
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+# CONFIG_ATA_OVER_ETH is not set
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
#
# CONFIG_SCSI_SATA is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
# CONFIG_SCSI_DEBUG is not set
#
# CONFIG_SERIO_I8042 is not set
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
# CONFIG_USB_ARCH_HAS_HCD is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_REISERFS_PROC_INFO is not set
# CONFIG_REISERFS_FS_XATTR is not set
# CONFIG_JFS_FS is not set
+
+#
+# XFS support
+#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
-# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# Kernel hacking
#
CONFIG_DEBUG_KERNEL=y
-# CONFIG_DEBUG_STACKOVERFLOW is not set
-# CONFIG_DEBUG_SLAB is not set
-# CONFIG_DEBUG_IOVIRT is not set
# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
-# CONFIG_DEBUG_PAGEALLOC is not set
-CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_BUGVERBOSE is not set
+CONFIG_DEBUG_INFO=y
+# CONFIG_DEBUG_FS is not set
# CONFIG_FRAME_POINTER is not set
+# CONFIG_DEBUG_STACKOVERFLOW is not set
+# CONFIG_DEBUG_STACK_USAGE is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
#
# Security options
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:25 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:22:54 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:29 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:22:58 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:34 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:11 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:38 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:15 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:44 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:40 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:47 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:44 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:49 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:49 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:52 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:53 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:55 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:23:57 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:21:58 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:24:01 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc3-m68k
-# Sun Dec 5 14:22:01 2004
+# Linux kernel version: 2.6.10-m68k
+# Sun Dec 26 11:24:05 2004
#
CONFIG_M68K=y
CONFIG_MMU=y
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_LOCAL=y
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# Character devices
#
*/
unsigned int
-csum_partial_copy_from_user(const char *src, char *dst, int len,
- int sum, int *csum_err)
+csum_partial_copy_from_user(const unsigned char *src, unsigned char *dst,
+ int len, int sum, int *csum_err)
{
/*
* GCC doesn't like more than 10 operands for the asm
*/
unsigned int
-csum_partial_copy_nocheck(const char *src, char *dst, int len, int sum)
+csum_partial_copy_nocheck(const unsigned char *src, unsigned char *dst, int len, int sum)
{
unsigned long tmp1, tmp2;
__asm__("movel %2,%4\n\t"
#endif
/*
- * The senTec COBRA5272 board has nearly the same
- * memory layout as the M5272C3.
- * We assume 16MB ram.
+ * The senTec COBRA5272 board has nearly the same memory layout as
+ * the M5272C3. We assume 16MiB ram.
*/
#if defined(CONFIG_COBRA5272)
#define RAM_START 0x20000
#endif
/*
- * The senTec COBRA5282 board has the same
- * memory layout as the M5282EVB.
+ * The senTec COBRA5282 board has the same memory layout as the M5282EVB.
*/
#if defined(CONFIG_COBRA5282)
#define RAM_START 0x10000
/* Revised by Kenneth Albanowski for m68knommu. Basic problem: unaligned access kills, so most
of the assembly has to go. */
+#include <linux/module.h>
#include <net/checksum.h>
static inline unsigned short from32to16(unsigned long x)
*/
unsigned int
-csum_partial_copy_from_user(const char *src, char *dst, int len, int sum, int *csum_err)
+csum_partial_copy_from_user(const unsigned char *src, unsigned char *dst,
+ int len, int sum, int *csum_err)
{
if (csum_err) *csum_err = 0;
memcpy(dst, src, len);
*/
unsigned int
-csum_partial_copy(const char *src, char *dst, int len, int sum)
+csum_partial_copy(const unsigned char *src, unsigned char *dst, int len, int sum)
{
memcpy(dst, src, len);
return csum_partial(dst, len, sum);
* published by the Free Software Foundation.
*/
+#include <linux/module.h>
#include <asm/param.h>
#include <asm/delay.h>
+EXPORT_SYMBOL(udelay);
+
void udelay(unsigned long usecs)
{
_udelay(usecs);
high_memory = (void *) end_mem;
start_mem = PAGE_ALIGN(start_mem);
- max_mapnr = num_physpages = MAP_NR(high_memory);
+ max_mapnr = num_physpages = (((unsigned long) high_memory) - PAGE_OFFSET) >> PAGE_SHIFT;
/* this will put all memory onto the freelists */
totalram_pages = free_all_bootmem();
--- /dev/null
+/*****************************************************************************/
+
+/*
+ * head.S -- common startup code for ColdFire CPUs.
+ *
+ * (C) Copyright 1999-2004, Greg Ungerer (gerg@snapgear.com).
+ */
+
+/*****************************************************************************/
+
+#include <linux/config.h>
+#include <linux/sys.h>
+#include <linux/linkage.h>
+#include <asm/thread_info.h>
+#include <asm/coldfire.h>
+#include <asm/mcfcache.h>
+#include <asm/mcfsim.h>
+
+/*****************************************************************************/
+
+/*
+ * Define fixed memory sizes. Configuration of a fixed memory size
+ * overrides everything else. If the user defined a size we just
+ * blindly use it (they know what they are doing, right? :-)
+ */
+#if defined(CONFIG_RAM32MB)
+#define MEM_SIZE	0x02000000	/* memory size 32MiB */
+#elif defined(CONFIG_RAM16MB)
+#define MEM_SIZE	0x01000000	/* memory size 16MiB */
+#elif defined(CONFIG_RAM8MB)
+#define MEM_SIZE	0x00800000	/* memory size 8MiB */
+#elif defined(CONFIG_RAM4MB)
+#define MEM_SIZE	0x00400000	/* memory size 4MiB */
+#elif defined(CONFIG_RAM1MB)
+#define MEM_SIZE	0x00100000	/* memory size 1MiB */
+#endif
+
+/*
+ * Memory size exceptions for special cases. Some boards may be set
+ * for auto memory sizing, but we can't do it that way for some reason.
+ * For example the 5206eLITE board has static RAM, and auto-detecting
+ * the SDRAM will do you no good at all.
+ */
+#ifdef CONFIG_RAMAUTO
+#if defined(CONFIG_M5206eLITE)
+#define MEM_SIZE 0x00100000 /* 1MiB default memory */
+#endif
+#endif /* CONFIG_RAMAUTO */
+
+/*
+ * If we don't have a fixed memory size now, then let's build in code
+ * to auto-detect the DRAM size. Obviously this is the preferred
+ * method, and should work for most boards (it won't work for those
+ * that do not have their RAM starting at address 0).
+ */
+#if defined(MEM_SIZE)
+.macro GET_MEM_SIZE
+ movel #MEM_SIZE,%d0 /* hard coded memory size */
+.endm
+
+#elif defined(CONFIG_M5206) || defined(CONFIG_M5206e) || \
+ defined(CONFIG_M5249) || defined(CONFIG_M527x) || \
+ defined(CONFIG_M528x) || defined(CONFIG_M5307) || \
+ defined(CONFIG_M5407)
+/*
+ * Not all these devices have exactly the same DRAM controller,
+ * but the DCMR register is virtually identical - give or take
+ * a couple of bits. The only exception is the 5272 devices; their
+ * DRAM controller is quite different.
+ */
+.macro GET_MEM_SIZE
+ movel MCF_MBAR+MCFSIM_DMR0,%d0 /* get mask for 1st bank */
+ btst #0,%d0 /* check if region enabled */
+ beq 1f
+ andl #0xfffc0000,%d0
+ beq 1f
+ addl #0x00040000,%d0 /* convert mask to size */
+1:
+ movel MCF_MBAR+MCFSIM_DMR1,%d1 /* get mask for 2nd bank */
+ btst #0,%d1 /* check if region enabled */
+ beq 2f
+ andl #0xfffc0000, %d1
+ beq 2f
+ addl #0x00040000,%d1
+ addl %d1,%d0 /* total mem size in d0 */
+2:
+.endm
+
+#elif defined(CONFIG_M5272)
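+/*
+ * The 5272 SDRAM controller encodes the bank size as an address mask
+ * in CSOR7; negating the mask gives the size in bytes.
+ */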
+.macro GET_MEM_SIZE
+ movel MCF_MBAR+MCFSIM_CSOR7,%d0 /* get SDRAM address mask */
+ andil #0xfffff000,%d0 /* mask out chip select options */
+ negl %d0 /* negate bits */
+.endm
+
+#else
+#error "ERROR: I don't know how to determine your board's memory size"
+#endif
+
+
+/*
+ * Most ColdFire boards have their DRAM starting at address 0.
+ * Notable exception is the 5206eLITE board.
+ */
+#if defined(CONFIG_M5206eLITE)
+#define MEM_BASE 0x30000000
+#endif
+
+#ifndef MEM_BASE
+#define MEM_BASE 0x00000000 /* memory base at address 0 */
+#endif
+
+/*
+ * The default location for the vectors is at the base of RAM.
+ * Some boards might like to use internal SRAM or something like
+ * that. If no board specific header defines an alternative then
+ * use the base of RAM.
+ */
+#ifndef VBR_BASE
+#define VBR_BASE MEM_BASE /* vector address */
+#endif
+
+/*****************************************************************************/
+
+/*
+ * Boards and platforms can do specific early hardware setup if
+ * they need to. Most don't need this, define away if not required.
+ */
+#ifndef PLATFORM_SETUP
+#define PLATFORM_SETUP
+#endif
+
+/*****************************************************************************/
+
+.global _start
+.global _rambase
+.global _ramvec
+.global _ramstart
+.global _ramend
+
+/*****************************************************************************/
+
+.data
+
+/*
+ * During startup we store away the RAM setup. These are not in the
+ * bss, since their values are determined and written before the bss
+ * has been cleared.
+ */
+_rambase:
+.long 0
+_ramvec:
+.long 0
+_ramstart:
+.long 0
+_ramend:
+.long 0
+
+/*****************************************************************************/
+
+.text
+
+/*
+ * This is the code's first entry point. This is where it all
+ * begins...
+ */
+
+_start:
+ nop /* filler */
+ movew #0x2700, %sr /* no interrupts */
+
+ /*
+ * Do any platform or board specific setup now. Most boards
+	 * don't need anything. Those that do define this in
+ * their board specific includes.
+ */
+ PLATFORM_SETUP
+
+ /*
+ * Create basic memory configuration. Set VBR accordingly,
+ * and size memory.
+ */
+ movel #VBR_BASE,%a7
+ movec %a7,%VBR /* set vectors addr */
+ movel %a7,_ramvec
+
+ movel #MEM_BASE,%a7 /* mark the base of RAM */
+ movel %a7,_rambase
+
+ GET_MEM_SIZE /* macro code determines size */
+ movel %d0,_ramend /* set end ram addr */
+
+ /*
+	 * Now that we know what the memory is, let's enable the cache
+	 * and get things moving. This is ColdFire CPU specific.
+ */
+ CACHE_ENABLE /* enable CPU cache */
+
+
+#ifdef CONFIG_ROMFS_FS
+ /*
+ * Move ROM filesystem above bss :-)
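+	 * The copy runs backwards (from the end) because the source and
+	 * destination regions may overlap.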
+ */
+ lea _sbss,%a0 /* get start of bss */
+ lea _ebss,%a1 /* set up destination */
+ movel %a0,%a2 /* copy of bss start */
+
+ movel 8(%a0),%d0 /* get size of ROMFS */
+ addql #8,%d0 /* allow for rounding */
+ andl #0xfffffffc, %d0 /* whole words */
+
+ addl %d0,%a0 /* copy from end */
+ addl %d0,%a1 /* copy from end */
+ movel %a1,_ramstart /* set start of ram */
+
+_copy_romfs:
+ movel -(%a0),%d0 /* copy dword */
+ movel %d0,-(%a1)
+ cmpl %a0,%a2 /* check if at end */
+ bne _copy_romfs
+
+#else /* CONFIG_ROMFS_FS */
+ lea _ebss,%a1
+ movel %a1,_ramstart
+#endif /* CONFIG_ROMFS_FS */
+
+
+ /*
+ * Zero out the bss region.
+ */
+ lea _sbss,%a0 /* get start of bss */
+ lea _ebss,%a1 /* get end of bss */
+ clrl %d0 /* set value */
+_clear_bss:
+ movel %d0,(%a0)+ /* clear each word */
+ cmpl %a0,%a1 /* check if at end */
+ bne _clear_bss
+
+ /*
+ * Load the current task pointer and stack.
+ */
+ lea init_thread_union,%a0
+ lea THREAD_SIZE(%a0),%sp
+
+ /*
+	 * Assembler start up done, start code proper.
+ */
+ jsr start_kernel /* start Linux kernel */
+
+_exit:
+ jmp _exit /* should never get here */
+
+/*****************************************************************************/
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+#include <linux/config.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/irq.h>
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
-
-#include <linux/config.h>
#include <linux/string.h>
#include <linux/sched.h>
#include <linux/threads.h>
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
-
+#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/sched.h>
* functions. The drivers allocate the data buffers and assign them
* to the descriptors.
*/
-static spinlock_t au1xxx_dbdma_spin_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(au1xxx_dbdma_spin_lock);
/* I couldn't find a macro that did this......
*/
* and if we try that first we are likely to not waste larger
* slabs of memory.
*/
- desc_base = kmalloc(entries * sizeof(au1x_ddma_desc_t), GFP_KERNEL);
+ desc_base = (u32)kmalloc(entries * sizeof(au1x_ddma_desc_t), GFP_KERNEL);
if (desc_base == 0)
return 0;
kfree((const void *)desc_base);
i = entries * sizeof(au1x_ddma_desc_t);
i += (sizeof(au1x_ddma_desc_t) - 1);
- if ((desc_base = kmalloc(i, GFP_KERNEL)) == 0)
+ if ((desc_base = (u32)kmalloc(i, GFP_KERNEL)) == 0)
return 0;
desc_base = ALIGN_ADDR(desc_base, sizeof(au1x_ddma_desc_t));
* Copyright 2000 MontaVista Software Inc.
* Author: MontaVista Software, Inc.
* stevel@mvista.com or source@mvista.com
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
-
+#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
*/
-spinlock_t au1000_dma_spin_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(au1000_dma_spin_lock);
struct dma_chan au1000_dma_table[NUM_AU1000_DMA_CHANNELS] = {
{.dev_id = -1,},
extern void counter0_irq(int irq, void *dev_id, struct pt_regs *regs);
#endif
-static spinlock_t irq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(irq_lock);
static unsigned int startup_irq(unsigned int irq_nr)
#ifdef CONFIG_DMA_NONCOHERENT
/*
- * Set the NC bit in controller for pre-AC silicon
+ * Set the NC bit in controller for Au1500 pre-AC silicon
*/
- au_writel( 1<<16 | au_readl(Au1500_PCI_CFG), Au1500_PCI_CFG);
- printk("Non-coherent PCI accesses enabled\n");
+ u32 prid = read_c0_prid();
+ if ( (prid & 0xFF000000) == 0x01000000 && prid < 0x01030202) {
+ au_writel( 1<<16 | au_readl(Au1500_PCI_CFG), Au1500_PCI_CFG);
+ printk("Non-coherent PCI accesses enabled\n");
+ }
#endif
set_io_port_base(virt_io_addr);
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
-#include <linux/config.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/init.h>
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
-#include <linux/config.h>
#include <asm/asm.h>
#include <asm/mipsregs.h>
#include <asm/addrspace.h>
static unsigned long last_pc0, last_match20;
#endif
-static spinlock_t time_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(time_lock);
static inline void ack_r4ktimer(unsigned long newval)
{
/* This is for machines which generate the exact clock. */
#define USECS_PER_JIFFY (1000000/HZ)
-#define USECS_PER_JIFFY_FRAC (0x100000000*1000000/HZ&0xffffffff)
-
+#define USECS_PER_JIFFY_FRAC (0x100000000LL*1000000/HZ&0xffffffff)
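+/* 0x100000000 is 2^32 and needs 64-bit arithmetic, hence the LL suffix */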
static unsigned long
div64_32(unsigned long v1, unsigned long v2, unsigned long v3)
r4k_cur = (read_c0_count() + r4k_offset);
write_c0_compare(r4k_cur);
- /* no RTC on the pb1000 */
- xtime.tv_sec = 0;
- //xtime.tv_usec = 0;
-
#ifdef CONFIG_PM
/*
* setup counter 0, since it keeps ticking after a
dev->conf_desc),
0);
} else {
- int len = dev->conf_desc->wTotalLength;
+ int len = le16_to_cpu(dev->conf_desc->wTotalLength);
dbg("sending whole config desc,"
" size=%d, our size=%d", desc_len, len);
desc_len = desc_len > len ? len : desc_len;
epd->bEndpointAddress |= (u8)ep->address;
ep->direction = epd->bEndpointAddress & 0x80;
ep->type = epd->bmAttributes & 0x03;
- ep->max_pkt_size = epd->wMaxPacketSize;
+ ep->max_pkt_size = le16_to_cpu(epd->wMaxPacketSize);
spin_lock_init(&ep->lock);
ep->desc = epd;
ep->reg = &ep_reg[ep->address];
/*
* initialize the full config descriptor
*/
- usbdev.full_conf_desc = fcd = kmalloc(config_desc->wTotalLength,
+ usbdev.full_conf_desc = fcd = kmalloc(le16_to_cpu(config_desc->wTotalLength),
ALLOC_FLAGS);
if (!fcd) {
err("failed to alloc full config descriptor");
#include <asm/au1000.h>
#include <asm/csb250.h>
-#ifdef CONFIG_USB_OHCI
-// Enable the workaround for the OHCI DoneHead
-// register corruption problem.
-#define CONFIG_AU1000_OHCI_FIX
-#endif
-
-#ifdef CONFIG_RTC
-extern struct rtc_ops csb250_rtc_ops;
-#endif
-
extern int (*board_pci_idsel)(unsigned int devsel, int assert);
int csb250_pci_idsel(unsigned int devsel, int assert);
au_writel(au_readl(SYS_POWERCTRL) | (0x3 << 5), SYS_POWERCTRL);
#ifdef CONFIG_RTC
- rtc_ops = &csb250_rtc_ops;
// Enable the RTC if not already enabled
if (!(au_readl(0xac000028) & 0x20)) {
printk("enabling clock ...\n");
#include <linux/proc_fs.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
+#include <linux/wait.h>
#include <asm/segment.h>
#include <asm/irq.h>
ts = wm97xx_ts_get_handle(0);
/* proceed only after everybody is ready */
- while ( ! wm97xx_ts_ready(ts) ) {
- /* give a little time for initializations to complete */
- interruptible_sleep_on_timeout(&pendown_wait, HZ / 4);
- }
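+	/*
+	 * wait_event_timeout() re-checks the condition itself, avoiding the
+	 * lost-wakeup race inherent in the old sleep_on-style interface.
+	 */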
+ wait_event_timeout(pendown_wait, wm97xx_ts_ready(ts), HZ/4);
/* board-specific calibration */
wm97xx_ts_set_cal(ts,
#include <asm/pgtable.h>
#include <asm/au1000.h>
-extern struct rtc_ops no_rtc_ops;
-
void board_reset (void)
{
}
{
u32 pin_func;
- rtc_ops = &no_rtc_ops;
-
#ifdef CONFIG_AU1X00_USB_DEVICE
// 2nd USB port is USB device
pin_func = au_readl(SYS_PINFUNC) & (u32)(~0x8000);
#include <linux/ioport.h>
#include <linux/mm.h>
#include <linux/console.h>
-#include <linux/mc146818rtc.h>
#include <linux/delay.h>
#include <asm/cpu.h>
#include <asm/bootinfo.h>
#include <asm/irq.h>
-#include <asm/keyboard.h>
#include <asm/mipsregs.h>
#include <asm/reboot.h>
#include <asm/pgtable.h>
-#include <asm/au1000.h>
+#include <asm/mach-au1x00/au1000.h>
-extern struct rtc_ops no_rtc_ops;
+void board_reset (void)
+{
+ /* Hit BCSR.SYSTEM_CONTROL[SW_RST] */
+ au_writel(0x00000000, 0xAE00001C);
+}
void __init board_setup(void)
{
- rtc_ops = &no_rtc_ops;
-
#if defined (CONFIG_USB_OHCI) || defined (CONFIG_AU1X00_USB_DEVICE)
#ifdef CONFIG_AU1X00_USB_DEVICE
// 2nd USB port is USB device
// enable USB power switch
au_writel( au_readl(GPIO2_DIR) | 0x10, GPIO2_DIR );
au_writel( 0x100000, GPIO2_OUTPUT );
-#endif // defined (CONFIG_USB_OHCI) || defined (CONFIG_AU1000_USB_DEVICE)
+#endif // defined (CONFIG_USB_OHCI) || defined (CONFIG_AU1X00_USB_DEVICE)
#ifdef CONFIG_PCI
#if defined(__MIPSEB__)
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-
+#include <linux/string.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/bootmem.h>
#include <asm/addrspace.h>
#include <asm/bootinfo.h>
-#include <linux/config.h>
-#include <linux/string.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
int prom_argc;
char **prom_argv, **prom_envp;
return "MTX-1";
}
-int __init prom_init(int argc, char **argv, char **envp, int *prom_vec)
+void __init prom_init(void)
{
unsigned char *memsize_str;
unsigned long memsize;
- prom_argc = argc;
- prom_argv = argv;
- prom_envp = envp;
+ prom_argc = fw_arg0;
+ prom_argv = (char **) fw_arg1;
+ prom_envp = (char **) fw_arg2;
mips_machgroup = MACH_GROUP_ALCHEMY;
mips_machtype = MACH_MTX1; /* set the platform # */
+
prom_init_cmdline();
memsize_str = prom_getenv("memsize");
- if (!memsize_str) {
+ if (!memsize_str)
memsize = 0x04000000;
- } else {
+ else
memsize = simple_strtol(memsize_str, NULL, 0);
- }
add_memory_region(0, memsize, BOOT_MEM_RAM);
- return 0;
}
#include <asm/io.h>
#include <asm/mipsregs.h>
#include <asm/system.h>
-#include <asm/au1000.h>
+#include <asm/mach-au1x00/au1000.h>
-/* Need to define this.
-*/
au1xxx_irq_map_t au1xxx_irq_map[] = {
- { 0. 0. 0}
+ { AU1500_GPIO_204, INTC_INT_HIGH_LEVEL, 0},
+ { AU1500_GPIO_201, INTC_INT_LOW_LEVEL, 0 },
+ { AU1500_GPIO_202, INTC_INT_LOW_LEVEL, 0 },
+ { AU1500_GPIO_203, INTC_INT_LOW_LEVEL, 0 },
+ { AU1500_GPIO_205, INTC_INT_LOW_LEVEL, 0 },
};
-int au1xxx_nr_irqs = 0;
+int au1xxx_nr_irqs = sizeof(au1xxx_irq_map)/sizeof(au1xxx_irq_map_t);
#include <asm/mipsregs.h>
#include <asm/reboot.h>
#include <asm/pgtable.h>
-#include <asm/au1000.h>
-#include <asm/pb1000.h>
-
-#ifdef CONFIG_USB_OHCI
-// Enable the workaround for the OHCI DoneHead
-// register corruption problem.
-#define CONFIG_AU1000_OHCI_FIX
- ^^^^^^^^^^^^^^^^^^^^^^
- !!! I shall not define symbols starting with CONFIG_ !!!
-#endif
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-pb1x00/pb1000.h>
void board_reset (void)
{
#include <asm/io.h>
#include <asm/mipsregs.h>
#include <asm/system.h>
-#include <asm/au1000.h>
+#include <asm/mach-au1x00/au1000.h>
au1xxx_irq_map_t au1xxx_irq_map[] = {
{ AU1000_GPIO_15, INTC_INT_LOW_LEVEL, 0 },
#include <asm/mach-au1x00/au1000.h>
#include <asm/mach-pb1x00/pb1100.h>
-#ifdef CONFIG_USB_OHCI
-// Enable the workaround for the OHCI DoneHead
-// register corruption problem.
-#define CONFIG_AU1000_OHCI_FIX
- ^^^^^^^^^^^^^^^^^^^^^^
- !!! I shall not define symbols starting with CONFIG_ !!!
-#endif
-
void board_reset (void)
{
/* Hit BCSR.SYSTEM_CONTROL[SW_RST] */
#include <asm/mach-au1x00/au1000.h>
#include <asm/mach-pb1x00/pb1500.h>
-#ifdef CONFIG_USB_OHCI
-// Enable the workaround for the OHCI DoneHead
-// register corruption problem.
-#define CONFIG_AU1000_OHCI_FIX
- ^^^^^^^^^^^^^^^^^^^^^^
- !!! I shall not define symbols starting with CONFIG_ !!!
-#endif
-
void board_reset (void)
{
/* Hit BCSR.SYSTEM_CONTROL[SW_RST] */
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/ioport.h>
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc2
-# Sun Nov 21 14:11:57 2004
+# Linux kernel version: 2.6.11-rc2
+# Wed Jan 26 02:49:02 2005
#
CONFIG_MIPS=y
# CONFIG_MIPS64 is not set
# CONFIG_SNI_RM200_PCI is not set
# CONFIG_TOSHIBA_RBTX4927 is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_DMA_COHERENT=y
CONFIG_MIPS_DISABLE_OBSOLETE_IDE=y
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_MIPS_L1_CACHE_SHIFT=5
-# CONFIG_FB is not set
#
# CPU selection
# CONFIG_PAGE_SIZE_16KB is not set
# CONFIG_PAGE_SIZE_64KB is not set
CONFIG_CPU_HAS_PREFETCH=y
-# CONFIG_VTAG_ICACHE is not set
CONFIG_64BIT_PHYS_ADDR=y
# CONFIG_CPU_ADVANCED is not set
CONFIG_CPU_HAS_LLSC=y
#
CONFIG_PCCARD=m
# CONFIG_PCMCIA_DEBUG is not set
-# CONFIG_PCMCIA_OBSOLETE is not set
CONFIG_PCMCIA=m
CONFIG_CARDBUS=y
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLKMTD is not set
+# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
CONFIG_MTD_NAND_IDS=m
CONFIG_MTD_NAND_AU1550=m
# CONFIG_MTD_NAND_DISKONCHIP is not set
+# CONFIG_MTD_NAND_NANDSIM is not set
#
# Parallel port support
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_LBD is not set
CONFIG_CDROM_PKTCDVD=m
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+CONFIG_ATA_OVER_ETH=m
#
# ATA/ATAPI/MFM/RLL support
# CONFIG_IP_NF_QUEUE is not set
# CONFIG_IP_NF_IPTABLES is not set
# CONFIG_IP_NF_ARPTABLES is not set
-# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
-# CONFIG_IP_NF_COMPAT_IPFWADM is not set
CONFIG_XFRM=y
CONFIG_XFRM_USER=m
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
+# CONFIG_SERIO_LIBPS2 is not set
CONFIG_SERIO_RAW=m
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
#
#
# Graphics support
#
+# CONFIG_FB is not set
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_UTF8 is not set
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
#
# Kernel hacking
#
CONFIG_CRYPTO_CRC32C=m
# CONFIG_CRYPTO_TEST is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc2
-# Sun Nov 21 14:12:03 2004
+# Linux kernel version: 2.6.11-rc2
+# Wed Jan 26 02:49:07 2005
#
CONFIG_MIPS=y
# CONFIG_MIPS64 is not set
# CONFIG_SNI_RM200_PCI is not set
# CONFIG_TOSHIBA_RBTX4927 is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_DMA_NONCOHERENT=y
# CONFIG_CPU_LITTLE_ENDIAN is not set
CONFIG_SWAP_IO_SPACE=y
CONFIG_BOOT_ELF32=y
CONFIG_MIPS_L1_CACHE_SHIFT=5
-CONFIG_FB=y
#
# CPU selection
CONFIG_PCI_NAMES=y
CONFIG_MMU=y
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
+#
+
+#
+# PCI Hotplug Support
+#
+# CONFIG_HOTPLUG_PCI is not set
+
#
# Executable file formats
#
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
#
# Memory Technology Devices (MTD)
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_LBD is not set
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+CONFIG_ATA_OVER_ETH=m
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
+# CONFIG_SCSI_ISCSI_ATTRS is not set
#
# SCSI low-level drivers
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
-# CONFIG_SCSI_QLOGIC_1280_1040 is not set
CONFIG_SCSI_QLA2XXX=m
# CONFIG_SCSI_QLA21XX is not set
# CONFIG_SCSI_QLA22XX is not set
# CONFIG_SCSI_QLA2300 is not set
# CONFIG_SCSI_QLA2322 is not set
# CONFIG_SCSI_QLA6312 is not set
-# CONFIG_SCSI_QLA6322 is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_NSP32 is not set
# CONFIG_IP_NF_QUEUE is not set
# CONFIG_IP_NF_IPTABLES is not set
# CONFIG_IP_NF_ARPTABLES is not set
-# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
-# CONFIG_IP_NF_COMPAT_IPFWADM is not set
#
# IPv6: Netfilter Configuration
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
+# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
#
# Graphics support
#
+CONFIG_FB=y
CONFIG_FB_MODE_HELPERS=y
# CONFIG_FB_TILEBLITTING is not set
# CONFIG_FB_CIRRUS is not set
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_UTF8 is not set
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
#
# Kernel hacking
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.10-rc2
-# Sun Nov 21 14:12:04 2004
+# Linux kernel version: 2.6.11-rc2
+# Wed Jan 26 02:49:08 2005
#
CONFIG_MIPS=y
CONFIG_MIPS64=y
# CONFIG_SIBYTE_SB1xxx_SOC is not set
# CONFIG_SNI_RM200_PCI is not set
CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_DMA_NONCOHERENT=y
# CONFIG_CPU_LITTLE_ENDIAN is not set
# CONFIG_SYSCLK_83 is not set
CONFIG_SYSCLK_100=y
CONFIG_MIPS_L1_CACHE_SHIFT=5
-# CONFIG_FB is not set
#
# CPU selection
CONFIG_PCI_NAMES=y
CONFIG_MMU=y
+#
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
+#
+
+#
+# PCI Hotplug Support
+#
+# CONFIG_HOTPLUG_PCI is not set
+
#
# Executable file formats
#
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_FW_LOADER is not set
#
# Memory Technology Devices (MTD)
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_COW_COMMON is not set
# CONFIG_BLK_DEV_LOOP is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CDROM_PKTCDVD=y
CONFIG_CDROM_PKTCDVD_BUFFERS=8
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
+CONFIG_ATA_OVER_ETH=y
#
# ATA/ATAPI/MFM/RLL support
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
+# CONFIG_SERIO_LIBPS2 is not set
CONFIG_SERIO_RAW=y
#
#
# Ftape, the floppy tape device driver
#
-# CONFIG_AGP is not set
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
#
# Graphics support
#
+# CONFIG_FB is not set
#
# Console display driver support
#
# CONFIG_VGA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
+# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
+
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# InfiniBand support
+#
+# CONFIG_INFINIBAND is not set
+
#
# File systems
#
#
# CONFIG_NLS is not set
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
#
# Kernel hacking
#
#
# CONFIG_CRYPTO is not set
+#
+# Hardware crypto devices
+#
+
#
# Library routines
#
* Copyright (C) 2000 Geert Uytterhoeven <geert@sonycom.com>
* Sony Software Development Center Europe (SDCE), Brussels
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kbd_ll.h>
#include <linux/kernel.h>
static void __init ddb5074_setup(void)
{
- extern int panic_timeout;
-
set_io_port_base(NILE4_PCI_IO_BASE);
isa_slot_offset = NILE4_PCI_MEM_BASE;
board_timer_setup = ddb_timer_init;
* Copyright (C) 2000 Geert Uytterhoeven <geert@sonycom.com>
* Sony Software Development Center Europe (SDCE), Brussels
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kbd_ll.h>
#include <linux/kernel.h>
static void __init ddb5476_setup(void)
{
- extern int panic_timeout;
-
set_io_port_base(KSEG1ADDR(DDB_PCI_IO_BASE));
board_time_init = ddb_time_init;
static int ddb5477_setup(void)
{
- extern int panic_timeout;
-
/* initialize board - we don't trust the loader */
ddb5477_board_init();
/*
* arch/mips/dec/decstation.c
*/
-#include <linux/config.h>
#define RELOC
#define INITRD
#include <asm/dec/ioasic_ints.h>
-static spinlock_t ioasic_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ioasic_lock);
static int ioasic_irq_base;
* There is no default value -- it has to be initialized.
*/
u32 cached_kn02_csr;
-spinlock_t kn02_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(kn02_lock);
static int kn02_irq_base;
* Copyright (C) 1998 Harald Koerfgen
* Copyright (C) 2000, 2001, 2002, 2003 Maciej W. Rozycki
*/
-#include <linux/config.h>
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/param.h>
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel_stat.h>
#include <linux/module.h>
/* Sets the exception_handler array. */
set_except_vector(0, galileo_handle_int);
- cli();
+ local_irq_disable();
/*
* Enable timer. Other interrupts will be enabled as they are
irq_desc[i].handler = &no_irq_type;
irq_desc[i].action = NULL;
irq_desc[i].depth = 0;
- irq_desc[i].lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&irq_desc[i].lock);
}
gt64120_irq_setup();
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
-#include <linux/config.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel_stat.h>
}
/* Fix up the DiskOnChip mapping */
- GT_WRITE(0x468, 0xfef73);
+ GT_WRITE(GT_DEV_B3_OFS, 0xfef73);
}
early_initcall(momenco_ocelot_setup);
printk("Enabling L3 cache...");
/* Enable the L3 cache in the GT64120A's CPU Configuration register */
- tmp = GT_READ(0);
- GT_WRITE(0, tmp | (1<<14));
+ tmp = GT_READ(GT_CPU_OFS);
+ GT_WRITE(GT_CPU_OFS, tmp | (1<<14));
/* Enable the L3 cache in the CPU */
set_c0_config(1<<12 /* CONF_TE */);
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-#include <linux/config.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <asm/it8172/it8172_int.h>
#include <asm/it8172/it8172_dbg.h>
-#undef DEBUG_IRQ
-#ifdef DEBUG_IRQ
-/* note: prints function name for you */
-#define DPRINTK(fmt, args...) printk("%s: " fmt, __FUNCTION__ , ## args)
-#else
-#define DPRINTK(fmt, args...)
-#endif
-
/* revisit */
#define EXT_IRQ0_TO_IP 2 /* IP 2 */
#define EXT_IRQ5_TO_IP 7 /* IP 7 */
struct it8172_intc_regs volatile *it8172_hw0_icregs =
(struct it8172_intc_regs volatile *)(KSEG1ADDR(IT8172_PCI_IO_BASE + IT_INTC_BASE));
-/* Function for careful CP0 interrupt mask access */
-static inline void modify_cp0_intmask(unsigned clr_mask, unsigned set_mask)
-{
- unsigned long status = read_c0_status();
- status &= ~((clr_mask & 0xFF) << 8);
- status |= (set_mask & 0xFF) << 8;
- write_c0_status(status);
-}
-
-static inline void mask_irq(unsigned int irq_nr)
-{
- modify_cp0_intmask(irq_nr, 0);
-}
-
-static inline void unmask_irq(unsigned int irq_nr)
-{
- modify_cp0_intmask(0, irq_nr);
-}
-
-void local_disable_irq(unsigned int irq_nr)
-{
- unsigned long flags;
-
- local_irq_save(flags);
- disable_it8172_irq(irq_nr);
- local_irq_restore(flags);
-}
-
-void local_enable_irq(unsigned int irq_nr)
+static void disable_it8172_irq(unsigned int irq_nr)
{
- unsigned long flags;
-
- local_irq_save(flags);
- enable_it8172_irq(irq_nr);
- local_irq_restore(flags);
-}
-
-
-void disable_it8172_irq(unsigned int irq_nr)
-{
- DPRINTK("disable_it8172_irq %d\n", irq_nr);
-
if ( (irq_nr >= IT8172_LPC_IRQ_BASE) && (irq_nr <= IT8172_SERIRQ_15)) {
/* LPC interrupt */
- DPRINTK("DB lpc_mask %x\n", it8172_hw0_icregs->lpc_mask);
it8172_hw0_icregs->lpc_mask |=
(1 << (irq_nr - IT8172_LPC_IRQ_BASE));
- DPRINTK("DA lpc_mask %x\n", it8172_hw0_icregs->lpc_mask);
- }
- else if ( (irq_nr >= IT8172_LB_IRQ_BASE) && (irq_nr <= IT8172_IOCHK_IRQ)) {
+ } else if ( (irq_nr >= IT8172_LB_IRQ_BASE) && (irq_nr <= IT8172_IOCHK_IRQ)) {
/* Local Bus interrupt */
- DPRINTK("DB lb_mask %x\n", it8172_hw0_icregs->lb_mask);
it8172_hw0_icregs->lb_mask |=
(1 << (irq_nr - IT8172_LB_IRQ_BASE));
- DPRINTK("DA lb_mask %x\n", it8172_hw0_icregs->lb_mask);
- }
- else if ( (irq_nr >= IT8172_PCI_DEV_IRQ_BASE) && (irq_nr <= IT8172_DMA_IRQ)) {
+ } else if ( (irq_nr >= IT8172_PCI_DEV_IRQ_BASE) && (irq_nr <= IT8172_DMA_IRQ)) {
/* PCI and other interrupts */
- DPRINTK("DB pci_mask %x\n", it8172_hw0_icregs->pci_mask);
it8172_hw0_icregs->pci_mask |=
(1 << (irq_nr - IT8172_PCI_DEV_IRQ_BASE));
- DPRINTK("DA pci_mask %x\n", it8172_hw0_icregs->pci_mask);
- }
- else if ( (irq_nr >= IT8172_NMI_IRQ_BASE) && (irq_nr <= IT8172_POWER_NMI_IRQ)) {
+ } else if ( (irq_nr >= IT8172_NMI_IRQ_BASE) && (irq_nr <= IT8172_POWER_NMI_IRQ)) {
/* NMI interrupts */
- DPRINTK("DB nmi_mask %x\n", it8172_hw0_icregs->nmi_mask);
it8172_hw0_icregs->nmi_mask |=
(1 << (irq_nr - IT8172_NMI_IRQ_BASE));
- DPRINTK("DA nmi_mask %x\n", it8172_hw0_icregs->nmi_mask);
- }
- else {
+ } else {
panic("disable_it8172_irq: bad irq %d", irq_nr);
}
}
-void enable_it8172_irq(unsigned int irq_nr)
+static void enable_it8172_irq(unsigned int irq_nr)
{
- DPRINTK("enable_it8172_irq %d\n", irq_nr);
if ( (irq_nr >= IT8172_LPC_IRQ_BASE) && (irq_nr <= IT8172_SERIRQ_15)) {
/* LPC interrupt */
- DPRINTK("EB before lpc_mask %x\n", it8172_hw0_icregs->lpc_mask);
it8172_hw0_icregs->lpc_mask &=
~(1 << (irq_nr - IT8172_LPC_IRQ_BASE));
- DPRINTK("EA after lpc_mask %x\n", it8172_hw0_icregs->lpc_mask);
}
else if ( (irq_nr >= IT8172_LB_IRQ_BASE) && (irq_nr <= IT8172_IOCHK_IRQ)) {
/* Local Bus interrupt */
- DPRINTK("EB lb_mask %x\n", it8172_hw0_icregs->lb_mask);
it8172_hw0_icregs->lb_mask &=
~(1 << (irq_nr - IT8172_LB_IRQ_BASE));
- DPRINTK("EA lb_mask %x\n", it8172_hw0_icregs->lb_mask);
}
else if ( (irq_nr >= IT8172_PCI_DEV_IRQ_BASE) && (irq_nr <= IT8172_DMA_IRQ)) {
/* PCI and other interrupts */
- DPRINTK("EB pci_mask %x\n", it8172_hw0_icregs->pci_mask);
it8172_hw0_icregs->pci_mask &=
~(1 << (irq_nr - IT8172_PCI_DEV_IRQ_BASE));
- DPRINTK("EA pci_mask %x\n", it8172_hw0_icregs->pci_mask);
}
else if ( (irq_nr >= IT8172_NMI_IRQ_BASE) && (irq_nr <= IT8172_POWER_NMI_IRQ)) {
/* NMI interrupts */
- DPRINTK("EB nmi_mask %x\n", it8172_hw0_icregs->nmi_mask);
it8172_hw0_icregs->nmi_mask &=
~(1 << (irq_nr - IT8172_NMI_IRQ_BASE));
- DPRINTK("EA nmi_mask %x\n", it8172_hw0_icregs->nmi_mask);
}
else {
panic("enable_it8172_irq: bad irq %d", irq_nr);
unsigned long flags;
local_irq_save(flags);
- unmask_irq(1<<EXT_IRQ5_TO_IP); /* timer interrupt */
+ set_c0_status(0x100 << EXT_IRQ5_TO_IP);
local_irq_restore(flags);
}
cause = read_c0_cause();
printk("status %x cause %x\n", status, cause);
printk("epc %x badvaddr %x \n", regs->cp0_epc, regs->cp0_badvaddr);
-// while(1);
#endif
}
status >>= 1;
}
irq += IT8172_PCI_DEV_IRQ_BASE;
- //printk("pci int %d\n", irq);
- }
- else if (intstatus & 0x1) {
+ } else if (intstatus & 0x1) {
/* Local Bus interrupt */
irq = 0;
status |= it8172_hw0_icregs->lb_req;
status >>= 1;
}
irq += IT8172_LB_IRQ_BASE;
- //printk("lb int %d\n", irq);
- }
- else if (intstatus & 0x2) {
+ } else if (intstatus & 0x2) {
/* LPC interrupt */
/* Since some lpc interrupts are edge triggered,
* we could lose an interrupt this way because
status >>= 1;
}
irq += IT8172_LPC_IRQ_BASE;
- //printk("LPC int %d\n", irq);
} else
return;
extern asmlinkage void jazz_handle_int(void);
-static spinlock_t r4030_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(r4030_lock);
static void enable_r4030_irq(unsigned int irq)
{
static unsigned long vdma_pagetable_start;
-static spinlock_t vdma_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(vdma_lock);
/*
* Debug stuff
static void jmr3927_machine_restart(char *command)
{
- cli();
+ local_irq_disable();
puts("Rebooting...");
do_reset();
}
static void __init jmr3927_setup(void)
{
- extern int panic_timeout;
char *argptr;
set_io_port_base(JMR3927_PORT_BASE + JMR3927_PCIIO);
* Miguel de Icaza, 1997.
*/
#include <linux/mm.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <asm/uaccess.h>
#include <asm/inventory.h>
+#include <asm/uaccess.h>
#define MAX_INVENTORY 50
int inventory_items = 0;
return inventory_items * sizeof (inventory_t);
}
-static int __init init_inventory(void)
+int __init init_inventory(void)
{
/*
* gross hack while we put the right bits all over the kernel
return 0;
}
-
-module_init(init_inventory);
--- /dev/null
+/*
+ * Copyright (C) 2003 Ralf Baechle
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * Handler for RM9000 extended interrupts. These are a non-standard
+ * feature so we handle them separately from standard interrupts.
+ */
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <asm/irq_cpu.h>
+#include <asm/mipsregs.h>
+#include <asm/system.h>
+
+static int irq_base;
+
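+/*
+ * The four extended interrupt sources are controlled by bits 12-15 of
+ * the IntControl register, hence the 0x1000 shifts below.
+ */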
+static inline void unmask_rm9k_irq(unsigned int irq)
+{
+ set_c0_intcontrol(0x1000 << (irq - irq_base));
+}
+
+static inline void mask_rm9k_irq(unsigned int irq)
+{
+ clear_c0_intcontrol(0x1000 << (irq - irq_base));
+}
+
+static inline void rm9k_cpu_irq_enable(unsigned int irq)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ unmask_rm9k_irq(irq);
+ local_irq_restore(flags);
+}
+
+static void rm9k_cpu_irq_disable(unsigned int irq)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ mask_rm9k_irq(irq);
+ local_irq_restore(flags);
+}
+
+static unsigned int rm9k_cpu_irq_startup(unsigned int irq)
+{
+ rm9k_cpu_irq_enable(irq);
+
+ return 0;
+}
+
+#define rm9k_cpu_irq_shutdown rm9k_cpu_irq_disable
+
+/*
+ * Performance counter interrupts are global on all processors.
+ */
+static void local_rm9k_perfcounter_irq_startup(void *args)
+{
+ unsigned int irq = (unsigned int) args;
+
+ rm9k_cpu_irq_enable(irq);
+}
+
+static unsigned int rm9k_perfcounter_irq_startup(unsigned int irq)
+{
+ on_each_cpu(local_rm9k_perfcounter_irq_startup, (void *) irq, 0, 1);
+
+ return 0;
+}
+
+static void local_rm9k_perfcounter_irq_shutdown(void *args)
+{
+ unsigned int irq = (unsigned int) args;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ mask_rm9k_irq(irq);
+ local_irq_restore(flags);
+}
+
+static void rm9k_perfcounter_irq_shutdown(unsigned int irq)
+{
+ on_each_cpu(local_rm9k_perfcounter_irq_shutdown, (void *) irq, 0, 1);
+}
+
+
+/*
+ * While we ack the interrupt, interrupts are disabled and thus we don't need
+ * to deal with concurrency issues. Same for rm9k_cpu_irq_end.
+ */
+static void rm9k_cpu_irq_ack(unsigned int irq)
+{
+ mask_rm9k_irq(irq);
+}
+
+static void rm9k_cpu_irq_end(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
+ unmask_rm9k_irq(irq);
+}
+
+static hw_irq_controller rm9k_irq_controller = {
+ "RM9000",
+ rm9k_cpu_irq_startup,
+ rm9k_cpu_irq_shutdown,
+ rm9k_cpu_irq_enable,
+ rm9k_cpu_irq_disable,
+ rm9k_cpu_irq_ack,
+ rm9k_cpu_irq_end,
+};
+
+static hw_irq_controller rm9k_perfcounter_irq = {
+ "RM9000",
+ rm9k_perfcounter_irq_startup,
+ rm9k_perfcounter_irq_shutdown,
+ rm9k_cpu_irq_enable,
+ rm9k_cpu_irq_disable,
+ rm9k_cpu_irq_ack,
+ rm9k_cpu_irq_end,
+};
+
+unsigned int rm9000_perfcount_irq;
+
+EXPORT_SYMBOL(rm9000_perfcount_irq);
+
+void __init rm9k_cpu_irq_init(int base)
+{
+ int i;
+
+ clear_c0_intcontrol(0x0000f000); /* Mask all */
+
+ for (i = base; i < base + 4; i++) {
+ irq_desc[i].status = IRQ_DISABLED;
+ irq_desc[i].action = NULL;
+ irq_desc[i].depth = 1;
+ irq_desc[i].handler = &rm9k_irq_controller;
+ }
+
+ rm9000_perfcount_irq = base + 1;
+ irq_desc[rm9000_perfcount_irq].handler = &rm9k_perfcounter_irq;
+
+ irq_base = base;
+}
#include <linux/spinlock.h>
static LIST_HEAD(dbe_list);
-static spinlock_t dbe_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(dbe_lock);
/* Given an address, look for it in the module exception tables. */
const struct exception_table_entry *search_module_dbetables(unsigned long addr)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1991, 1992 Linus Torvalds
+ * Copyright (C) 1994 - 2000 Ralf Baechle
+ * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
+ */
+
+static inline int
+setup_sigcontext(struct pt_regs *regs, struct sigcontext *sc)
+{
+ int err = 0;
+
+ err |= __put_user(regs->cp0_epc, &sc->sc_pc);
+ err |= __put_user(regs->cp0_status, &sc->sc_status);
+
+#define save_gp_reg(i) do { \
+ err |= __put_user(regs->regs[i], &sc->sc_regs[i]); \
+} while(0)
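+	/* $zero is hardwired to 0, so store the constant directly */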
+ __put_user(0, &sc->sc_regs[0]); save_gp_reg(1); save_gp_reg(2);
+ save_gp_reg(3); save_gp_reg(4); save_gp_reg(5); save_gp_reg(6);
+ save_gp_reg(7); save_gp_reg(8); save_gp_reg(9); save_gp_reg(10);
+ save_gp_reg(11); save_gp_reg(12); save_gp_reg(13); save_gp_reg(14);
+ save_gp_reg(15); save_gp_reg(16); save_gp_reg(17); save_gp_reg(18);
+ save_gp_reg(19); save_gp_reg(20); save_gp_reg(21); save_gp_reg(22);
+ save_gp_reg(23); save_gp_reg(24); save_gp_reg(25); save_gp_reg(26);
+ save_gp_reg(27); save_gp_reg(28); save_gp_reg(29); save_gp_reg(30);
+ save_gp_reg(31);
+#undef save_gp_reg
+
+ err |= __put_user(regs->hi, &sc->sc_mdhi);
+ err |= __put_user(regs->lo, &sc->sc_mdlo);
+ err |= __put_user(regs->cp0_cause, &sc->sc_cause);
+ err |= __put_user(regs->cp0_badvaddr, &sc->sc_badvaddr);
+
+ err |= __put_user(!!used_math(), &sc->sc_used_math);
+
+ if (!used_math())
+ goto out;
+
+ /*
+ * Save FPU state to signal context. Signal handler will "inherit"
+ * current FPU state.
+ */
+ preempt_disable();
+
+ if (!is_fpu_owner()) {
+ own_fpu();
+ restore_fp(current);
+ }
+ err |= save_fp_context(sc);
+
+ preempt_enable();
+
+out:
+ return err;
+}
+
+static inline int
+restore_sigcontext(struct pt_regs *regs, struct sigcontext *sc)
+{
+ int err = 0;
+ unsigned int used_math;
+
+ /* Always make any pending restarted system calls return -EINTR */
+ current_thread_info()->restart_block.fn = do_no_restart_syscall;
+
+ err |= __get_user(regs->cp0_epc, &sc->sc_pc);
+ err |= __get_user(regs->hi, &sc->sc_mdhi);
+ err |= __get_user(regs->lo, &sc->sc_mdlo);
+
+#define restore_gp_reg(i) do { \
+ err |= __get_user(regs->regs[i], &sc->sc_regs[i]); \
+} while(0)
+ restore_gp_reg( 1); restore_gp_reg( 2); restore_gp_reg( 3);
+ restore_gp_reg( 4); restore_gp_reg( 5); restore_gp_reg( 6);
+ restore_gp_reg( 7); restore_gp_reg( 8); restore_gp_reg( 9);
+ restore_gp_reg(10); restore_gp_reg(11); restore_gp_reg(12);
+ restore_gp_reg(13); restore_gp_reg(14); restore_gp_reg(15);
+ restore_gp_reg(16); restore_gp_reg(17); restore_gp_reg(18);
+ restore_gp_reg(19); restore_gp_reg(20); restore_gp_reg(21);
+ restore_gp_reg(22); restore_gp_reg(23); restore_gp_reg(24);
+ restore_gp_reg(25); restore_gp_reg(26); restore_gp_reg(27);
+ restore_gp_reg(28); restore_gp_reg(29); restore_gp_reg(30);
+ restore_gp_reg(31);
+#undef restore_gp_reg
+
+ err |= __get_user(used_math, &sc->sc_used_math);
+ conditional_used_math(used_math);
+
+ preempt_disable();
+
+ if (used_math()) {
+ /* restore fpu context if we have used it before */
+ own_fpu();
+ err |= restore_fp_context(sc);
+ } else {
+ /* signal handler may have used FPU. Give it up. */
+ lose_fpu();
+ }
+
+ preempt_enable();
+
+ return err;
+}
+
+/*
+ * Determine which stack to use..
+ */
+static inline void *
+get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size)
+{
+ unsigned long sp, almask;
+
+ /* Default to using normal stack */
+ sp = regs->regs[29];
+
+ /*
+	 * The FPU emulator may have its own trampoline active just
+	 * above the user stack, 16 bytes before the next lowest
+	 * 16-byte boundary. Try to avoid trashing it.
+ */
+ sp -= 32;
+
+ /* This is the X/Open sanctioned signal stack switching. */
+ if ((ka->sa.sa_flags & SA_ONSTACK) && (sas_ss_flags (sp) == 0))
+ sp = current->sas_ss_sp + current->sas_ss_size;
+
+ if (PLAT_TRAMPOLINE_STUFF_LINE)
+ almask = ~(PLAT_TRAMPOLINE_STUFF_LINE - 1);
+ else
+ almask = ALMASK;
+
+ return (void *)((sp - frame_size) & almask);
+}
*(.options)
*(.pdr)
*(.reginfo)
+ *(.mdebug*)
}
/* This is the MIPS specific mdebug section. */
case PM_256M: return "256Mb";
#endif
}
+
+ return "unknown";
}
#define BARRIER() \
case PM_256M: return "256Mb";
#endif
}
+
+ return "unknown";
}
#define BARRIER() \
printk("tasks->mm.pgd == %08lx\n", (unsigned long) t->mm->pgd);
page_dir = pgd_offset(t->mm, 0);
- printk("page_dir == %08lx\n", (unsigned long) page_dir);
+ printk("page_dir == %016lx\n", (unsigned long) page_dir);
pgd = pgd_offset(t->mm, addr);
- printk("pgd == %08lx, ", (unsigned long) pgd);
+ printk("pgd == %016lx\n", (unsigned long) pgd);
pmd = pmd_offset(pgd, addr);
- printk("pmd == %08lx, ", (unsigned long) pmd);
+ printk("pmd == %016lx\n", (unsigned long) pmd);
pte = pte_offset(pmd, addr);
- printk("pte == %08lx, ", (unsigned long) pte);
+ printk("pte == %016lx\n", (unsigned long) pte);
page = *pte;
printk("page == %08lx\n", pte_val(page));
# Makefile for MIPS-specific library files..
#
-lib-y += csum_partial_copy.o dec_and_lock.o memcpy.o promlib.o strlen_user.o \
- strncpy_user.o strnlen_user.o
+lib-y += csum_partial_copy.o dec_and_lock.o iomap.o memcpy.o promlib.o \
+ strlen_user.o strncpy_user.o strnlen_user.o
EXTRA_AFLAGS := $(CFLAGS)
/*
* copy while checksumming, otherwise like csum_partial
*/
-unsigned int csum_partial_copy_nocheck(const char *src, char *dst,
+unsigned int csum_partial_copy_nocheck(const unsigned char *src, unsigned char *dst,
int len, unsigned int sum)
{
/*
* Copy from userspace and compute checksum. If we catch an exception
* then zero the rest of the buffer.
*/
-unsigned int csum_partial_copy_from_user (const char *src, char *dst,
+unsigned int csum_partial_copy_from_user (const unsigned char *src, unsigned char *dst,
int len, unsigned int sum, int *err_ptr)
{
int missing;
* cp1emu.c: a MIPS coprocessor 1 (fpu) instruction emulator
*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* Kevin D. Kissell, kevink@mips.com and Carsten Langgaard, carstenl@mips.com
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*
* MIPS floating point support
*
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* This program is free software; you can distribute it and/or modify it
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
*/
/*
* MIPS floating point support
- * Copyright (C) 1994-2000 Algorithmics Ltd. All rights reserved.
+ * Copyright (C) 1994-2000 Algorithmics Ltd.
* http://www.algor.co.uk
*
* ########################################################################
* Atlas board.
*
*/
-#include <linux/config.h>
#include <linux/compiler.h>
#include <linux/init.h>
#include <linux/sched.h>
* This is the interface to the remote debugger stub.
*/
#include <linux/types.h>
-#include <linux/config.h>
#include <linux/serial.h>
#include <linux/serialP.h>
#include <linux/serial_reg.h>
/*
* Carsten Langgaard, carstenl@mips.com
- * Copyright (C) 2000, 2001 MIPS Technologies, Inc.
+ * Copyright (C) 2000, 2001, 2004 MIPS Technologies, Inc.
* Copyright (C) 2001 Ralf Baechle
*
* This program is free software; you can distribute it and/or modify it
* The interrupt controller is located in the South Bridge a PIIX4 device
* with two internal 82C95 interrupt controllers.
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/sched.h>
extern asmlinkage void mipsIRQ(void);
-static spinlock_t mips_irq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mips_irq_lock);
static inline int mips_pcibios_iack(void)
{
case MIPS_REVISION_CORID_CORE_FPGAR2:
data = GT_READ(GT_INTRCAUSE_OFS);
printk("GT_INTRCAUSE = %08x\n", data);
- data = GT_READ(0x70);
- datahi = GT_READ(0x78);
- printk("GT_CPU_ERR_ADDR = %02x%08x\n", datahi, data);
+ data = GT_READ(GT_CPUERR_ADDRLO_OFS);
+ datahi = GT_READ(GT_CPUERR_ADDRHI_OFS);
+ printk("GT_CPUERR_ADDR = %02x%08x\n", datahi, data);
break;
case MIPS_REVISION_CORID_BONITO64:
case MIPS_REVISION_CORID_CORE_20K:
r4k_blast_scache_page = blast_scache128_page;
}
+static void (* r4k_blast_scache_page_indexed)(unsigned long addr);
+
+static inline void r4k_blast_scache_page_indexed_setup(void)
+{
+ unsigned long sc_lsize = cpu_scache_line_size();
+
+ if (sc_lsize == 16)
+ r4k_blast_scache_page_indexed = blast_scache16_page_indexed;
+ else if (sc_lsize == 32)
+ r4k_blast_scache_page_indexed = blast_scache32_page_indexed;
+ else if (sc_lsize == 64)
+ r4k_blast_scache_page_indexed = blast_scache64_page_indexed;
+ else if (sc_lsize == 128)
+ r4k_blast_scache_page_indexed = blast_scache128_page_indexed;
+}
+
static void (* r4k_blast_scache)(void);
static inline void r4k_blast_scache_setup(void)
{
struct mm_struct *mm = args;
- if (!cpu_has_dc_aliases)
- return;
-
if (!cpu_context(smp_processor_id(), mm))
return;
static void r4k_flush_cache_mm(struct mm_struct *mm)
{
+ if (!cpu_has_dc_aliases)
+ return;
+
on_each_cpu(local_r4k_flush_cache_mm, mm, 1, 1);
}
pmd_t *pmdp;
pte_t *ptep;
- /*
- * If ownes no valid ASID yet, cannot possibly have gotten
- * this page into the cache.
- */
- if (cpu_context(smp_processor_id(), mm) == 0)
- return;
-
page &= PAGE_MASK;
pgdp = pgd_offset(mm, page);
pmdp = pmd_offset(pgdp, page);
* in that case, which doesn't overly flush the cache too much.
*/
if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID)) {
- if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc))
+ if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
r4k_blast_dcache_page(page);
+ if (exec && !cpu_icache_snoops_remote_store)
+ r4k_blast_scache_page(page);
+ }
if (exec)
r4k_blast_icache_page(page);
* to work correctly.
*/
page = INDEX_BASE + (page & (dcache_size - 1));
- if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc))
+ if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
r4k_blast_dcache_page_indexed(page);
+ if (exec && !cpu_icache_snoops_remote_store)
+ r4k_blast_scache_page_indexed(page);
+ }
if (exec) {
if (cpu_has_vtag_icache) {
int cpu = smp_processor_id();
{
struct flush_cache_page_args args;
+ /*
+	 * If it owns no valid ASID yet, it cannot possibly have gotten
+ * this page into the cache.
+ */
+ if (cpu_context(smp_processor_id(), vma->vm_mm) == 0)
+ return;
+
args.vma = vma;
args.page = page;
struct flush_icache_range_args *fir_args = args;
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
unsigned long ic_lsize = current_cpu_data.icache.linesz;
+ unsigned long sc_lsize = current_cpu_data.scache.linesz;
unsigned long start = fir_args->start;
unsigned long end = fir_args->end;
unsigned long addr, aend;
if (!cpu_has_ic_fills_f_dc) {
- if (end - start > dcache_size)
+ if (end - start > dcache_size) {
r4k_blast_dcache();
- else {
+ } else {
addr = start & ~(dc_lsize - 1);
aend = (end - 1) & ~(dc_lsize - 1);
addr += dc_lsize;
}
}
+
+ if (!cpu_icache_snoops_remote_store) {
+ if (end - start > scache_size) {
+ r4k_blast_scache();
+ } else {
+ addr = start & ~(sc_lsize - 1);
+ aend = (end - 1) & ~(sc_lsize - 1);
+
+ while (1) {
+ /* Hit_Writeback_Inv_D */
+ protected_writeback_scache_line(addr);
+ if (addr == aend)
+ break;
+ addr += sc_lsize;
+ }
+ }
+ }
}
if (end - start > icache_size)
if (!cpu_has_ic_fills_f_dc) {
unsigned long addr = (unsigned long) page_address(page);
r4k_blast_dcache_page(addr);
+ if (!cpu_icache_snoops_remote_store)
+ r4k_blast_scache_page(addr);
ClearPageDcacheDirty(page);
}
{
unsigned long ic_lsize = current_cpu_data.icache.linesz;
unsigned long dc_lsize = current_cpu_data.dcache.linesz;
+ unsigned long sc_lsize = current_cpu_data.scache.linesz;
unsigned long addr = (unsigned long) arg;
R4600_HIT_CACHEOP_WAR_IMPL;
protected_writeback_dcache_line(addr & ~(dc_lsize - 1));
+ if (!cpu_icache_snoops_remote_store)
+ protected_writeback_scache_line(addr & ~(sc_lsize - 1));
protected_flush_icache_line(addr & ~(ic_lsize - 1));
if (MIPS4K_ICACHE_REFILL_WAR) {
__asm__ __volatile__ (
}
}
-static char *way_string[] = { NULL, "direct mapped", "2-way", "3-way", "4-way",
- "5-way", "6-way", "7-way", "8-way"
+static char *way_string[] __initdata = { NULL, "direct mapped", "2-way",
+ "3-way", "4-way", "5-way", "6-way", "7-way", "8-way"
};
static void __init probe_pcache(void)
r4k_blast_icache_page_indexed_setup();
r4k_blast_icache_setup();
r4k_blast_scache_page_setup();
+ r4k_blast_scache_page_indexed_setup();
r4k_blast_scache_setup();
/*
/* Special cache error handler for SB1 */
memcpy((void *)(CAC_BASE + 0x100), &except_vec2_sb1, 0x80);
memcpy((void *)(UNCAC_BASE + 0x100), &except_vec2_sb1, 0x80);
- memcpy((void *)KSEG1ADDR(&handle_vec2_sb1), &handle_vec2_sb1, 0x80);
+ memcpy((void *)CKSEG1ADDR(&handle_vec2_sb1), &handle_vec2_sb1, 0x80);
probe_cache_sizes();
#endif /* CONFIG_DMA_NONCOHERENT */
-asmlinkage int sys_cacheflush(void *addr, int bytes, int cache)
+/*
+ * We could optimize the case where the cache argument is not BCACHE but
+ * that seems a very atypical use ...
+ */
+asmlinkage int sys_cacheflush(unsigned long addr, unsigned long int bytes,
+ unsigned int cache)
{
- /* This should flush more selectivly ... */
- __flush_cache_all();
+ if (verify_area(VERIFY_WRITE, (void *) addr, bytes))
+ return -EFAULT;
+
+ flush_icache_range(addr, addr + bytes);
return 0;
}
/* Masks to select bits for Hamming parity, mask_72_64[i] for bit[i] */
static const uint64_t mask_72_64[8] = {
- 0x0738C808099264FFL,
- 0x38C808099264FF07L,
- 0xC808099264FF0738L,
- 0x08099264FF0738C8L,
- 0x099264FF0738C808L,
- 0x9264FF0738C80809L,
- 0x64FF0738C8080992L,
- 0xFF0738C808099264L
+ 0x0738C808099264FFULL,
+ 0x38C808099264FF07ULL,
+ 0xC808099264FF0738ULL,
+ 0x08099264FF0738C8ULL,
+ 0x099264FF0738C808ULL,
+ 0x9264FF0738C80809ULL,
+ 0x64FF0738C8080992ULL,
+ 0xFF0738C808099264ULL
};
/* Calculate the parity on a range of bits */
((lru >> 4) & 0x3),
((lru >> 6) & 0x3));
}
- va = (taglo & 0xC0000FFFFFFFE000) | addr;
+ va = (taglo & 0xC0000FFFFFFFE000ULL) | addr;
if ((taglo & (1 << 31)) && (((taglo >> 62) & 0x3) == 3))
- va |= 0x3FFFF00000000000;
+ va |= 0x3FFFF00000000000ULL;
valid = ((taghi >> 29) & 1);
if (valid) {
tlo_tmp = taglo & 0xfff3ff;
: "r" ((way << 13) | addr));
taglo = ((unsigned long long)taglohi << 32) | taglolo;
- pa = (taglo & 0xFFFFFFE000) | addr;
+ pa = (taglo & 0xFFFFFFE000ULL) | addr;
if (way == 0) {
lru = (taghi >> 14) & 0xff;
prom_printf("[Bank %d Set 0x%02x] LRU > %d %d %d %d > MRU\n",
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <asm/asm.h>
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000 Ani Joshi <ajoshi@unixbox.com>
+ * Copyright (C) 2000, 2001 Ralf Baechle <ralf@gnu.org>
+ * Copyright (C) 2005 Ilya A. Volynets-Evenbakh <ilya@total-knowledge.com>
+ * swiped from i386, and cloned for MIPS by Geert, polished by Ralf.
+ * IP32 changes by Ilya.
+ */
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/cache.h>
+#include <asm/io.h>
+#include <asm/ip32/crime.h>
+
+/*
+ * Warning on the terminology - Linux calls an uncached area coherent;
+ * MIPS terminology calls memory areas with hardware maintained coherency
+ * coherent.
+ */
+
+/*
+ * A few notes:
+ * 1. CPU sees memory as two chunks: 0-256M@0x0, and the rest @0x40000000+256M
+ * 2. PCI sees memory as one big chunk @0x0 (or we could use 0x40000000 for native-endian)
+ * 3. All other devices see memory as one big chunk at 0x40000000
+ * 4. Non-PCI devices will pass NULL as struct device*
+ * Thus we translate differently, depending on the device.
+ */
+
+#define RAM_OFFSET_MASK 0x3fffffff
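+
+/*
+ * Roughly, the device-visible address computed below comes out as
+ *
+ *	bus = (virt_to_phys(cpu_addr) & RAM_OFFSET_MASK)
+ *	      + (dev == NULL ? CRIME_HI_MEM_BASE : 0)
+ *
+ * i.e. PCI devices get the bare offset while everything else (which
+ * passes dev == NULL) gets the offset relocated by CRIME_HI_MEM_BASE.
+ */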
+
+void *dma_alloc_noncoherent(struct device *dev, size_t size,
+ dma_addr_t * dma_handle, int gfp)
+{
+ void *ret;
+ /* ignore region specifiers */
+ gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
+
+ if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
+ gfp |= GFP_DMA;
+ ret = (void *) __get_free_pages(gfp, get_order(size));
+
+ if (ret != NULL) {
+ unsigned long addr = virt_to_phys(ret)&RAM_OFFSET_MASK;
+ memset(ret, 0, size);
+ if(dev==NULL)
+ addr+= CRIME_HI_MEM_BASE;
+ *dma_handle = addr;
+ }
+
+ return ret;
+}
+
+EXPORT_SYMBOL(dma_alloc_noncoherent);
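+
+/*
+ * Typical use from a driver, for illustration (dev and size are whatever
+ * the caller has at hand):
+ *
+ *	dma_addr_t handle;
+ *	void *cpu_addr = dma_alloc_noncoherent(dev, size, &handle, GFP_KERNEL);
+ *
+ * cpu_addr is a cached kernel virtual address, handle is the address the
+ * device should be programmed with.
+ */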
+
+void *dma_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t * dma_handle, int gfp)
+{
+ void *ret;
+
+ ret = dma_alloc_noncoherent(dev, size, dma_handle, gfp);
+ if (ret) {
+ dma_cache_wback_inv((unsigned long) ret, size);
+ ret = UNCAC_ADDR(ret);
+ }
+
+ return ret;
+}
+
+EXPORT_SYMBOL(dma_alloc_coherent);
+
+void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_handle)
+{
+ free_pages((unsigned long) vaddr, get_order(size));
+}
+
+EXPORT_SYMBOL(dma_free_noncoherent);
+
+void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_handle)
+{
+ unsigned long addr = (unsigned long) vaddr;
+
+ addr = CAC_ADDR(addr);
+ free_pages(addr, get_order(size));
+}
+
+EXPORT_SYMBOL(dma_free_coherent);
+
+static inline void __dma_sync(unsigned long addr, size_t size,
+ enum dma_data_direction direction)
+{
+ switch (direction) {
+ case DMA_TO_DEVICE:
+ dma_cache_wback(addr, size);
+ break;
+
+ case DMA_FROM_DEVICE:
+ dma_cache_inv(addr, size);
+ break;
+
+ case DMA_BIDIRECTIONAL:
+ dma_cache_wback_inv(addr, size);
+ break;
+
+ default:
+ BUG();
+ }
+}
+
+dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
+ enum dma_data_direction direction)
+{
+ unsigned long addr = (unsigned long) ptr;
+
+ switch (direction) {
+ case DMA_TO_DEVICE:
+ dma_cache_wback(addr, size);
+ break;
+
+ case DMA_FROM_DEVICE:
+ dma_cache_inv(addr, size);
+ break;
+
+ case DMA_BIDIRECTIONAL:
+ dma_cache_wback_inv(addr, size);
+ break;
+
+ default:
+ BUG();
+ }
+
+	addr = virt_to_phys(ptr) & RAM_OFFSET_MASK;
+ if(dev == NULL)
+ addr+=CRIME_HI_MEM_BASE;
+ return (dma_addr_t)addr;
+}
+
+EXPORT_SYMBOL(dma_map_single);
+
+void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+ enum dma_data_direction direction)
+{
+ switch (direction) {
+ case DMA_TO_DEVICE:
+ break;
+
+ case DMA_FROM_DEVICE:
+ break;
+
+ case DMA_BIDIRECTIONAL:
+ break;
+
+ default:
+ BUG();
+ }
+}
+
+EXPORT_SYMBOL(dma_unmap_single);
+
+int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction direction)
+{
+ int i;
+
+ BUG_ON(direction == DMA_NONE);
+
+ for (i = 0; i < nents; i++, sg++) {
+ unsigned long addr;
+
+ addr = (unsigned long) page_address(sg->page)+sg->offset;
+ if (addr)
+ __dma_sync(addr, sg->length, direction);
+		addr = __pa(addr) & RAM_OFFSET_MASK;
+ if(dev == NULL)
+ addr += CRIME_HI_MEM_BASE;
+ sg->dma_address = (dma_addr_t)addr;
+ }
+
+ return nents;
+}
+
+EXPORT_SYMBOL(dma_map_sg);
+
+dma_addr_t dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction direction)
+{
+ unsigned long addr;
+
+ BUG_ON(direction == DMA_NONE);
+
+ addr = (unsigned long) page_address(page) + offset;
+ dma_cache_wback_inv(addr, size);
+	addr = __pa(addr) & RAM_OFFSET_MASK;
+ if(dev == NULL)
+ addr += CRIME_HI_MEM_BASE;
+
+ return (dma_addr_t)addr;
+}
+
+EXPORT_SYMBOL(dma_map_page);
+
+void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
+ enum dma_data_direction direction)
+{
+ BUG_ON(direction == DMA_NONE);
+
+ if (direction != DMA_TO_DEVICE) {
+ unsigned long addr;
+
+ dma_address&=RAM_OFFSET_MASK;
+ addr = dma_address + PAGE_OFFSET;
+ if(dma_address>=256*1024*1024)
+ addr+=CRIME_HI_MEM_BASE;
+ dma_cache_wback_inv(addr, size);
+ }
+}
+
+EXPORT_SYMBOL(dma_unmap_page);
+
+void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+ enum dma_data_direction direction)
+{
+ unsigned long addr;
+ int i;
+
+ BUG_ON(direction == DMA_NONE);
+
+ if (direction == DMA_TO_DEVICE)
+ return;
+
+ for (i = 0; i < nhwentries; i++, sg++) {
+ addr = (unsigned long) page_address(sg->page);
+ if (!addr)
+ continue;
+ dma_cache_wback_inv(addr + sg->offset, sg->length);
+ }
+}
+
+EXPORT_SYMBOL(dma_unmap_sg);
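+
+/*
+ * The dma_sync_* routines below recover a CPU virtual address from the
+ * bus address by inverting the mapping used at map time, roughly
+ *
+ *	vaddr = (handle & RAM_OFFSET_MASK) + PAGE_OFFSET
+ *		(+ CRIME_HI_MEM_BASE if the masked handle is >= 256MB)
+ *
+ * and then write back and/or invalidate that range according to the
+ * DMA direction.
+ */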
+
+void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction direction)
+{
+ unsigned long addr;
+
+ BUG_ON(direction == DMA_NONE);
+
+ dma_handle&=RAM_OFFSET_MASK;
+ addr = dma_handle + PAGE_OFFSET;
+ if(dma_handle>=256*1024*1024)
+ addr+=CRIME_HI_MEM_BASE;
+ __dma_sync(addr, size, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_single_for_cpu);
+
+void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction direction)
+{
+ unsigned long addr;
+
+ BUG_ON(direction == DMA_NONE);
+
+ dma_handle&=RAM_OFFSET_MASK;
+ addr = dma_handle + PAGE_OFFSET;
+ if(dma_handle>=256*1024*1024)
+ addr+=CRIME_HI_MEM_BASE;
+ __dma_sync(addr, size, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_single_for_device);
+
+void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
+ unsigned long offset, size_t size, enum dma_data_direction direction)
+{
+ unsigned long addr;
+
+ BUG_ON(direction == DMA_NONE);
+
+ dma_handle&=RAM_OFFSET_MASK;
+ addr = dma_handle + offset + PAGE_OFFSET;
+ if(dma_handle>=256*1024*1024)
+ addr+=CRIME_HI_MEM_BASE;
+ __dma_sync(addr, size, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_single_range_for_cpu);
+
+void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
+ unsigned long offset, size_t size, enum dma_data_direction direction)
+{
+ unsigned long addr;
+
+ BUG_ON(direction == DMA_NONE);
+
+ dma_handle&=RAM_OFFSET_MASK;
+ addr = dma_handle + offset + PAGE_OFFSET;
+ if(dma_handle>=256*1024*1024)
+ addr+=CRIME_HI_MEM_BASE;
+ __dma_sync(addr, size, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_single_range_for_device);
+
+void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
+ enum dma_data_direction direction)
+{
+ int i;
+
+ BUG_ON(direction == DMA_NONE);
+
+ /* Make sure that gcc doesn't leave the empty loop body. */
+ for (i = 0; i < nelems; i++, sg++)
+ __dma_sync((unsigned long)page_address(sg->page),
+ sg->length, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_sg_for_cpu);
+
+void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
+ enum dma_data_direction direction)
+{
+ int i;
+
+ BUG_ON(direction == DMA_NONE);
+
+ /* Make sure that gcc doesn't leave the empty loop body. */
+ for (i = 0; i < nelems; i++, sg++)
+ __dma_sync((unsigned long)page_address(sg->page),
+ sg->length, direction);
+}
+
+EXPORT_SYMBOL(dma_sync_sg_for_device);
+
+int dma_mapping_error(dma_addr_t dma_addr)
+{
+ return 0;
+}
+
+EXPORT_SYMBOL(dma_mapping_error);
+
+int dma_supported(struct device *dev, u64 mask)
+{
+ /*
+ * we fall back to GFP_DMA when the mask isn't all 1s,
+ * so we can't guarantee allocations that must be
+	 * within a tighter range than GFP_DMA.
+ */
+ if (mask < 0x00ffffff)
+ return 0;
+
+ return 1;
+}
+
+EXPORT_SYMBOL(dma_supported);
+
+int dma_is_consistent(dma_addr_t dma_addr)
+{
+ return 1;
+}
+
+EXPORT_SYMBOL(dma_is_consistent);
+
+void dma_cache_sync(void *vaddr, size_t size, enum dma_data_direction direction)
+{
+ if (direction == DMA_NONE)
+ return;
+
+ dma_cache_wback_inv((unsigned long)vaddr, size);
+}
+
+EXPORT_SYMBOL(dma_cache_sync);
+
void *addr;
might_sleep();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
addr = kmap_high(page);
flush_tlb_one((unsigned long)addr);
{
if (in_interrupt())
BUG();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return;
kunmap_high(page);
}
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
idx = type + KM_TYPE_NR*smp_processor_id();
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2003, 2004 Ralf Baechle (ralf@linux-mips.org)
+ * Copyright (C) 2003, 04, 05 Ralf Baechle (ralf@linux-mips.org)
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>
EXPORT_SYMBOL(copy_page);
-/*
- * An address fits into a single register so it's safe to use 64-bit registers
- * if we have 64-bit adresses.
- */
-#define cpu_has_64bit_registers cpu_has_64bit_addresses
-
/*
* This is suboptimal for 32-bit kernels; we assume that R10000 is only used
* with 64-bit kernels. The prefetch offsets have been experimentally tuned
union mips_instruction mi;
unsigned int width;
- if (cpu_has_64bit_registers) {
+ if (cpu_has_64bit_gp_regs) {
mi.i_format.opcode = ld_op;
width = 8;
} else {
BUG_ON(offset > 0x7fff);
- mi.i_format.opcode = cpu_has_64bit_addresses ? daddiu_op : addiu_op;
+ mi.i_format.opcode = cpu_has_64bit_gp_regs ? daddiu_op : addiu_op;
mi.i_format.rs = 4; /* $a0 */
mi.i_format.rt = 6; /* $a2 */
mi.i_format.simmediate = offset;
BUG_ON(offset > 0x7fff);
- mi.i_format.opcode = cpu_has_64bit_addresses ? daddiu_op : addiu_op;
+ mi.i_format.opcode = cpu_has_64bit_gp_regs ? daddiu_op : addiu_op;
mi.i_format.rs = 5; /* $a1 */
mi.i_format.rt = 5; /* $a1 */
mi.i_format.simmediate = offset;
BUG_ON(offset > 0x7fff);
- mi.i_format.opcode = cpu_has_64bit_addresses ? daddiu_op : addiu_op;
+ mi.i_format.opcode = cpu_has_64bit_gp_regs ? daddiu_op : addiu_op;
mi.i_format.rs = 4; /* $a0 */
mi.i_format.rt = 4; /* $a0 */
mi.i_format.simmediate = offset;
* Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
* Copyright (C) 1997, 2001 Ralf Baechle (ralf@gnu.org)
* Copyright (C) 2000 SiByte, Inc.
+ * Copyright (C) 2005 Thiemo Seufer
*
* Written by Justin Carlson of SiByte, Inc.
* and Kip Walker of Broadcom Corp.
#define SB1_PREF_STORE_STREAMED_HINT "5"
#endif
-#ifdef CONFIG_SIBYTE_DMA_PAGEOPS
static inline void clear_page_cpu(void *page)
-#else
-void clear_page(void *page)
-#endif
{
unsigned char *addr = (unsigned char *) page;
unsigned char *end = addr + PAGE_SIZE;
* since we know we're on an SB1, we force the assembler to take
* 64-bit operands to speed things up
*/
- do {
- __asm__ __volatile__(
- " .set mips4 \n"
+ __asm__ __volatile__(
+ " .set push \n"
+ " .set mips4 \n"
+ " .set noreorder \n"
#ifdef CONFIG_CPU_HAS_PREFETCH
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 0(%0) \n" /* Prefetch the first 4 lines */
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 32(%0) \n"
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 64(%0) \n"
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 96(%0) \n"
+ " daddiu %0, %0, 128 \n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -128(%0) \n" /* Prefetch the first 4 lines */
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -96(%0) \n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -64(%0) \n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -32(%0) \n"
+ "1: sd $0, -128(%0) \n" /* Throw out a cacheline of 0's */
+ " sd $0, -120(%0) \n"
+ " sd $0, -112(%0) \n"
+ " sd $0, -104(%0) \n"
+ " daddiu %0, %0, 32 \n"
+ " bnel %0, %1, 1b \n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -32(%0) \n"
+ " daddiu %0, %0, -128 \n"
#endif
- "1: sd $0, 0(%0) \n" /* Throw out a cacheline of 0's */
- " sd $0, 8(%0) \n"
- " sd $0, 16(%0) \n"
- " sd $0, 24(%0) \n"
-#ifdef CONFIG_CPU_HAS_PREFETCH
- " pref " SB1_PREF_STORE_STREAMED_HINT ",128(%0) \n" /* Prefetch 4 lines ahead */
-#endif
- " .set mips0 \n"
- :
- : "r" (addr)
- : "memory");
- addr += 32;
- } while (addr != end);
+ " sd $0, 0(%0) \n" /* Throw out a cacheline of 0's */
+ "1: sd $0, 8(%0) \n"
+ " sd $0, 16(%0) \n"
+ " sd $0, 24(%0) \n"
+ " daddiu %0, %0, 32 \n"
+ " bnel %0, %1, 1b \n"
+ " sd $0, 0(%0) \n"
+ " .set pop \n"
+ : "+r" (addr)
+ : "r" (end)
+ : "memory");
}
-#ifdef CONFIG_SIBYTE_DMA_PAGEOPS
static inline void copy_page_cpu(void *to, void *from)
-#else
-void copy_page(void *to, void *from)
-#endif
{
- unsigned char *src = from;
- unsigned char *dst = to;
+ unsigned char *src = (unsigned char *)from;
+ unsigned char *dst = (unsigned char *)to;
unsigned char *end = src + PAGE_SIZE;
/*
- * This should be optimized in assembly...can't use ld/sd, though,
- * because the top 32 bits could be nuked if we took an interrupt
- * during the routine. And this is not a good place to be cli()'ing
- *
* The pref's used here are using "streaming" hints, which cause the
* copied data to be kicked out of the cache sooner. A page copy often
* ends up copying a lot more data than is commonly used, so this seems
* to make sense in terms of reducing cache pollution, but I've no real
* performance data to back this up
*/
-
- do {
- __asm__ __volatile__(
- " .set mips4 \n"
+ __asm__ __volatile__(
+ " .set push \n"
+ " .set mips4 \n"
+ " .set noreorder \n"
#ifdef CONFIG_CPU_HAS_PREFETCH
- " pref " SB1_PREF_LOAD_STREAMED_HINT ", 0(%0)\n" /* Prefetch the first 3 lines */
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 0(%1)\n"
- " pref " SB1_PREF_LOAD_STREAMED_HINT ", 32(%0)\n"
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 32(%1)\n"
- " pref " SB1_PREF_LOAD_STREAMED_HINT ", 64(%0)\n"
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 64(%1)\n"
+ " daddiu %0, %0, 128 \n"
+ " daddiu %1, %1, 128 \n"
+ " pref " SB1_PREF_LOAD_STREAMED_HINT ", -128(%0)\n" /* Prefetch the first 4 lines */
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -128(%1)\n"
+ " pref " SB1_PREF_LOAD_STREAMED_HINT ", -96(%0)\n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -96(%1)\n"
+ " pref " SB1_PREF_LOAD_STREAMED_HINT ", -64(%0)\n"
+ " pref " SB1_PREF_STORE_STREAMED_HINT ", -64(%1)\n"
+ " pref " SB1_PREF_LOAD_STREAMED_HINT ", -32(%0)\n"
+ "1: pref " SB1_PREF_STORE_STREAMED_HINT ", -32(%1)\n"
+# ifdef CONFIG_MIPS64
+ " ld $8, -128(%0) \n" /* Block copy a cacheline */
+ " ld $9, -120(%0) \n"
+ " ld $10, -112(%0) \n"
+ " ld $11, -104(%0) \n"
+ " sd $8, -128(%1) \n"
+ " sd $9, -120(%1) \n"
+ " sd $10, -112(%1) \n"
+ " sd $11, -104(%1) \n"
+# else
+ " lw $2, -128(%0) \n" /* Block copy a cacheline */
+ " lw $3, -124(%0) \n"
+ " lw $6, -120(%0) \n"
+ " lw $7, -116(%0) \n"
+ " lw $8, -112(%0) \n"
+ " lw $9, -108(%0) \n"
+ " lw $10, -104(%0) \n"
+ " lw $11, -100(%0) \n"
+ " sw $2, -128(%1) \n"
+ " sw $3, -124(%1) \n"
+ " sw $6, -120(%1) \n"
+ " sw $7, -116(%1) \n"
+ " sw $8, -112(%1) \n"
+ " sw $9, -108(%1) \n"
+ " sw $10, -104(%1) \n"
+ " sw $11, -100(%1) \n"
+# endif
+ " daddiu %0, %0, 32 \n"
+ " daddiu %1, %1, 32 \n"
+ " bnel %0, %2, 1b \n"
+ " pref " SB1_PREF_LOAD_STREAMED_HINT ", -32(%0)\n"
+ " daddiu %0, %0, -128 \n"
+ " daddiu %1, %1, -128 \n"
#endif
- "1: lw $2, 0(%0) \n" /* Block copy a cacheline */
- " lw $3, 4(%0) \n"
- " lw $4, 8(%0) \n"
- " lw $5, 12(%0) \n"
- " lw $6, 16(%0) \n"
- " lw $7, 20(%0) \n"
- " lw $8, 24(%0) \n"
- " lw $9, 28(%0) \n"
-#ifdef CONFIG_CPU_HAS_PREFETCH
- " pref " SB1_PREF_LOAD_STREAMED_HINT ", 96(%0) \n" /* Prefetch ahead */
- " pref " SB1_PREF_STORE_STREAMED_HINT ", 96(%1) \n"
+#ifdef CONFIG_MIPS64
+ " ld $8, 0(%0) \n" /* Block copy a cacheline */
+ "1: ld $9, 8(%0) \n"
+ " ld $10, 16(%0) \n"
+ " ld $11, 24(%0) \n"
+ " sd $8, 0(%1) \n"
+ " sd $9, 8(%1) \n"
+ " sd $10, 16(%1) \n"
+ " sd $11, 24(%1) \n"
+#else
+ " lw $2, 0(%0) \n" /* Block copy a cacheline */
+ "1: lw $3, 4(%0) \n"
+ " lw $6, 8(%0) \n"
+ " lw $7, 12(%0) \n"
+ " lw $8, 16(%0) \n"
+ " lw $9, 20(%0) \n"
+ " lw $10, 24(%0) \n"
+ " lw $11, 28(%0) \n"
+ " sw $2, 0(%1) \n"
+ " sw $3, 4(%1) \n"
+ " sw $6, 8(%1) \n"
+ " sw $7, 12(%1) \n"
+ " sw $8, 16(%1) \n"
+ " sw $9, 20(%1) \n"
+ " sw $10, 24(%1) \n"
+ " sw $11, 28(%1) \n"
+#endif
+ " daddiu %0, %0, 32 \n"
+ " daddiu %1, %1, 32 \n"
+ " bnel %0, %2, 1b \n"
+#ifdef CONFIG_MIPS64
+ " ld $8, 0(%0) \n"
+#else
+ " lw $2, 0(%0) \n"
+#endif
+ " .set pop \n"
+ : "+r" (src), "+r" (dst)
+ : "r" (end)
+#ifdef CONFIG_MIPS64
+ : "$8","$9","$10","$11","memory");
+#else
+ : "$2","$3","$6","$7","$8","$9","$10","$11","memory");
#endif
- " sw $2, 0(%1) \n"
- " sw $3, 4(%1) \n"
- " sw $4, 8(%1) \n"
- " sw $5, 12(%1) \n"
- " sw $6, 16(%1) \n"
- " sw $7, 20(%1) \n"
- " sw $8, 24(%1) \n"
- " sw $9, 28(%1) \n"
- " .set mips0 \n"
- :
- : "r" (src), "r" (dst)
- : "$2","$3","$4","$5","$6","$7","$8","$9","memory");
- src += 32;
- dst += 32;
- } while (src != end);
}
* particular CPU.
*/
typedef struct dmadscr_s {
- uint64_t dscr_a;
- uint64_t dscr_b;
- uint64_t pad_a;
- uint64_t pad_b;
+ u64 dscr_a;
+ u64 dscr_b;
+ u64 pad_a;
+ u64 pad_b;
} dmadscr_t;
static dmadscr_t page_descr[NR_CPUS] __attribute__((aligned(SMP_CACHE_BYTES)));
void sb1_dma_init(void)
{
int cpu = smp_processor_id();
- uint64_t base_val = PHYSADDR(&page_descr[cpu]) | V_DM_DSCR_BASE_RINGSZ(1);
-
- __raw_writeq(base_val,
- IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
- __raw_writeq(base_val | M_DM_DSCR_BASE_RESET,
- IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
- __raw_writeq(base_val | M_DM_DSCR_BASE_ENABL,
- IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
+ u64 base_val = CPHYSADDR(&page_descr[cpu]) | V_DM_DSCR_BASE_RINGSZ(1);
+
+ bus_writeq(base_val,
+ (void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
+ bus_writeq(base_val | M_DM_DSCR_BASE_RESET,
+ (void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
+ bus_writeq(base_val | M_DM_DSCR_BASE_ENABL,
+ (void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
}
void clear_page(void *page)
int cpu = smp_processor_id();
/* if the page is above Kseg0, use old way */
- if (KSEGX(page) != CAC_BASE)
+ if ((long)KSEGX(page) != (long)CKSEG0)
return clear_page_cpu(page);
- page_descr[cpu].dscr_a = PHYSADDR(page) | M_DM_DSCRA_ZERO_MEM | M_DM_DSCRA_L2C_DEST | M_DM_DSCRA_INTERRUPT;
+ page_descr[cpu].dscr_a = CPHYSADDR(page) | M_DM_DSCRA_ZERO_MEM | M_DM_DSCRA_L2C_DEST | M_DM_DSCRA_INTERRUPT;
page_descr[cpu].dscr_b = V_DM_DSCRB_SRC_LENGTH(PAGE_SIZE);
- __raw_writeq(1, IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_COUNT)));
+ bus_writeq(1, (void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_COUNT)));
/*
* Don't really want to do it this way, but there's no
* reliable way to delay completion detection.
*/
- while (!(__raw_readq(IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE_DEBUG)) & M_DM_DSCR_BASE_INTERRUPT)))
+	while (!(bus_readq((void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE_DEBUG))) &
+		 M_DM_DSCR_BASE_INTERRUPT))
;
- __raw_readq(IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
+ bus_readq((void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
}
void copy_page(void *to, void *from)
{
- unsigned long from_phys = PHYSADDR(from);
- unsigned long to_phys = PHYSADDR(to);
+ unsigned long from_phys = CPHYSADDR(from);
+ unsigned long to_phys = CPHYSADDR(to);
int cpu = smp_processor_id();
/* if either page is above Kseg0, use old way */
- if ((KSEGX(to) != CAC_BASE) || (KSEGX(from) != CAC_BASE))
+ if ((long)KSEGX(to) != (long)CKSEG0
+ || (long)KSEGX(from) != (long)CKSEG0)
return copy_page_cpu(to, from);
- page_descr[cpu].dscr_a = PHYSADDR(to_phys) | M_DM_DSCRA_L2C_DEST | M_DM_DSCRA_INTERRUPT;
- page_descr[cpu].dscr_b = PHYSADDR(from_phys) | V_DM_DSCRB_SRC_LENGTH(PAGE_SIZE);
- __raw_writeq(1, IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_COUNT)));
+ page_descr[cpu].dscr_a = CPHYSADDR(to_phys) | M_DM_DSCRA_L2C_DEST | M_DM_DSCRA_INTERRUPT;
+ page_descr[cpu].dscr_b = CPHYSADDR(from_phys) | V_DM_DSCRB_SRC_LENGTH(PAGE_SIZE);
+ bus_writeq(1, (void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_COUNT)));
/*
* Don't really want to do it this way, but there's no
* reliable way to delay completion detection.
*/
- while (!(__raw_readq(IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE_DEBUG)) & M_DM_DSCR_BASE_INTERRUPT)))
+	while (!(bus_readq((void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE_DEBUG))) &
+		 M_DM_DSCR_BASE_INTERRUPT))
;
- __raw_readq(IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
+ bus_readq((void *)IOADDR(A_DM_REGISTER(cpu, R_DM_DSCR_BASE)));
}
-#endif
+#else /* !CONFIG_SIBYTE_DMA_PAGEOPS */
+
+void clear_page(void *page)
+{
+ return clear_page_cpu(page);
+}
+
+void copy_page(void *to, void *from)
+{
+ return copy_page_cpu(to, from);
+}
+
+#endif /* !CONFIG_SIBYTE_DMA_PAGEOPS */
EXPORT_SYMBOL(clear_page);
EXPORT_SYMBOL(copy_page);
/* Initialize the entire pgd. */
pgd_init((unsigned long)swapper_pg_dir);
- pgd_init((unsigned long)swapper_pg_dir +
- sizeof(pgd_t ) * USER_PTRS_PER_PGD);
+ pgd_init((unsigned long)swapper_pg_dir
+ + sizeof(pgd_t) * USER_PTRS_PER_PGD);
#ifdef CONFIG_HIGHMEM
pgd_base = swapper_pg_dir;
/* Initialize the entire pgd. */
pgd_init((unsigned long)swapper_pg_dir);
pmd_init((unsigned long)invalid_pmd_table, (unsigned long)invalid_pte_table);
- memset((void *)invalid_pte_table, 0, sizeof(pte_t) * PTRS_PER_PTE);
}
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) 2000 Kanoj Sarcar (kanoj@sgi.com)
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1999 Ralf Baechle
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ */
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+
+ .macro tlb_do_page_fault, write
+ NESTED(tlb_do_page_fault_\write, PT_SIZE, sp)
+ SAVE_ALL
+ MFC0 a2, CP0_BADVADDR
+ KMODE
+ move a0, sp
+ REG_S a2, PT_BVADDR(sp)
+ li a1, \write
+ jal do_page_fault
+ j ret_from_exception
+ END(tlb_do_page_fault_\write)
+ .endm
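+
+/*
+ * The two expansions below generate tlb_do_page_fault_0 (read fault) and
+ * tlb_do_page_fault_1 (write fault).  The synthesized TLB fastpath
+ * handlers jump here whenever they cannot fix a fault up themselves and
+ * the C do_page_fault() has to take over.
+ */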
+
+ tlb_do_page_fault 0
+ tlb_do_page_fault 1
*
* Synthesize TLB refill handlers at runtime.
*
- * Copyright (C) 2004 by Thiemo Seufer
+ * Copyright (C) 2004,2005 by Thiemo Seufer
*/
#include <stdarg.h>
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
-#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/inst.h>
#include <asm/elf.h>
#include <asm/smp.h>
+#include <asm/war.h>
/* #define DEBUG_TLB */
return BCM1250_M3_WAR;
}
+static __init int __attribute__((unused)) r10000_llsc_war(void)
+{
+ return R10000_LLSC_WAR;
+}
+
/*
* A little micro-assembler, intended for TLB refill handler
* synthesizing. It is intentionally kept simple, does only support
enum opcode {
insn_invalid,
insn_addu, insn_addiu, insn_and, insn_andi, insn_beq,
- insn_bgez, insn_bgezl, insn_bltz, insn_bltzl, insn_bne,
- insn_daddu, insn_daddiu, insn_dmfc0, insn_dmtc0,
+ insn_beql, insn_bgez, insn_bgezl, insn_bltz, insn_bltzl,
+ insn_bne, insn_daddu, insn_daddiu, insn_dmfc0, insn_dmtc0,
insn_dsll, insn_dsll32, insn_dsra, insn_dsrl, insn_dsrl32,
insn_dsubu, insn_eret, insn_j, insn_jal, insn_jr, insn_ld,
- insn_lui, insn_lw, insn_mfc0, insn_mtc0, insn_ori, insn_rfe,
- insn_sd, insn_sll, insn_sra, insn_srl, insn_subu, insn_sw,
- insn_tlbp, insn_tlbwi, insn_tlbwr, insn_xor, insn_xori
+ insn_ll, insn_lld, insn_lui, insn_lw, insn_mfc0, insn_mtc0,
+ insn_ori, insn_rfe, insn_sc, insn_scd, insn_sd, insn_sll,
+ insn_sra, insn_srl, insn_subu, insn_sw, insn_tlbp, insn_tlbwi,
+ insn_tlbwr, insn_xor, insn_xori
};
struct insn {
{ insn_and, M(spec_op,0,0,0,0,and_op), RS | RT | RD },
{ insn_andi, M(andi_op,0,0,0,0,0), RS | RT | UIMM },
{ insn_beq, M(beq_op,0,0,0,0,0), RS | RT | BIMM },
+ { insn_beql, M(beql_op,0,0,0,0,0), RS | RT | BIMM },
{ insn_bgez, M(bcond_op,0,bgez_op,0,0,0), RS | BIMM },
{ insn_bgezl, M(bcond_op,0,bgezl_op,0,0,0), RS | BIMM },
{ insn_bltz, M(bcond_op,0,bltz_op,0,0,0), RS | BIMM },
{ insn_jal, M(jal_op,0,0,0,0,0), JIMM },
{ insn_jr, M(spec_op,0,0,0,0,jr_op), RS },
{ insn_ld, M(ld_op,0,0,0,0,0), RS | RT | SIMM },
+ { insn_ll, M(ll_op,0,0,0,0,0), RS | RT | SIMM },
+ { insn_lld, M(lld_op,0,0,0,0,0), RS | RT | SIMM },
{ insn_lui, M(lui_op,0,0,0,0,0), RT | SIMM },
{ insn_lw, M(lw_op,0,0,0,0,0), RS | RT | SIMM },
{ insn_mfc0, M(cop0_op,mfc_op,0,0,0,0), RT | RD },
{ insn_mtc0, M(cop0_op,mtc_op,0,0,0,0), RT | RD },
{ insn_ori, M(ori_op,0,0,0,0,0), RS | RT | UIMM },
{ insn_rfe, M(cop0_op,cop_op,0,0,0,rfe_op), 0 },
+ { insn_sc, M(sc_op,0,0,0,0,0), RS | RT | SIMM },
+ { insn_scd, M(scd_op,0,0,0,0,0), RS | RT | SIMM },
{ insn_sd, M(sd_op,0,0,0,0,0), RS | RT | SIMM },
{ insn_sll, M(spec_op,0,0,0,0,sll_op), RT | RD | RE },
{ insn_sra, M(spec_op,0,0,0,0,sra_op), RT | RD | RE },
I_u2u1u3(_andi);
I_u3u1u2(_and);
I_u1u2s3(_beq);
+I_u1u2s3(_beql);
I_u1s2(_bgez);
I_u1s2(_bgezl);
I_u1s2(_bltz);
I_u1(_jal);
I_u1(_jr);
I_u2s3u1(_ld);
+I_u2s3u1(_ll);
+I_u2s3u1(_lld);
I_u1s2(_lui);
I_u2s3u1(_lw);
I_u1u2(_mfc0);
I_u1u2(_mtc0);
I_u2u1u3(_ori);
I_0(_rfe);
+I_u2s3u1(_sc);
+I_u2s3u1(_scd);
I_u2s3u1(_sd);
I_u2u1u3(_sll);
I_u2u1u3(_sra);
label_leave,
label_vmalloc,
label_vmalloc_done,
- label_tlbwr_hazard,
- label_split
+ label_tlbw_hazard,
+ label_split,
+ label_nopage_tlbl,
+ label_nopage_tlbs,
+ label_nopage_tlbm,
+ label_smp_pgtable_change,
+ label_r3000_write_probe_fail,
+ label_r3000_write_probe_ok
};
struct label {
L_LA(_leave)
L_LA(_vmalloc)
L_LA(_vmalloc_done)
-L_LA(_tlbwr_hazard)
+L_LA(_tlbw_hazard)
L_LA(_split)
+L_LA(_nopage_tlbl)
+L_LA(_nopage_tlbs)
+L_LA(_nopage_tlbm)
+L_LA(_smp_pgtable_change)
+L_LA(_r3000_write_probe_fail)
+L_LA(_r3000_write_probe_ok)
/* convenience macros for instructions */
#ifdef CONFIG_MIPS64
# define i_ADDIU(buf, rs, rt, val) i_daddiu(buf, rs, rt, val)
# define i_ADDU(buf, rs, rt, rd) i_daddu(buf, rs, rt, rd)
# define i_SUBU(buf, rs, rt, rd) i_dsubu(buf, rs, rt, rd)
+# define i_LL(buf, rs, rt, off) i_lld(buf, rs, rt, off)
+# define i_SC(buf, rs, rt, off) i_scd(buf, rs, rt, off)
#else
# define i_LW(buf, rs, rt, off) i_lw(buf, rs, rt, off)
# define i_SW(buf, rs, rt, off) i_sw(buf, rs, rt, off)
# define i_ADDIU(buf, rs, rt, val) i_addiu(buf, rs, rt, val)
# define i_ADDU(buf, rs, rt, rd) i_addu(buf, rs, rt, rd)
# define i_SUBU(buf, rs, rt, rd) i_subu(buf, rs, rt, rd)
+# define i_LL(buf, rs, rt, off) i_ll(buf, rs, rt, off)
+# define i_SC(buf, rs, rt, off) i_sc(buf, rs, rt, off)
#endif
#define i_b(buf, off) i_beq(buf, 0, 0, off)
+#define i_beqz(buf, rs, off) i_beq(buf, rs, 0, off)
+#define i_beqzl(buf, rs, off) i_beql(buf, rs, 0, off)
#define i_bnez(buf, rs, off) i_bne(buf, rs, 0, off)
+#define i_bnezl(buf, rs, off) i_bnel(buf, rs, 0, off)
#define i_move(buf, a, b) i_ADDU(buf, a, 0, b)
#define i_nop(buf) i_sll(buf, 0, 0, 0)
#define i_ssnop(buf) i_sll(buf, 0, 0, 1)
#define i_ehb(buf) i_sll(buf, 0, 0, 3)
-#if CONFIG_MIPS64
-static __init int in_compat_space_p(long addr)
+#ifdef CONFIG_MIPS64
+static __init int __attribute__((unused)) in_compat_space_p(long addr)
{
/* Is this address in 32bit compat space? */
return (((addr) & 0xffffffff00000000) == 0xffffffff00000000);
}
-static __init int rel_highest(long val)
+static __init int __attribute__((unused)) rel_highest(long val)
{
return ((((val + 0x800080008000L) >> 48) & 0xffff) ^ 0x8000) - 0x8000;
}
-static __init int rel_higher(long val)
+static __init int __attribute__((unused)) rel_higher(long val)
{
return ((((val + 0x80008000L) >> 32) & 0xffff) ^ 0x8000) - 0x8000;
}
__resolve_relocs(rel, l);
}
-static __init void copy_handler(struct reloc *rel, struct label *lab,
- u32 *first, u32 *end, u32* target)
+static __init void move_relocs(struct reloc *rel, u32 *first, u32 *end,
+ long off)
{
- long off = (long)(target - first);
-
- memcpy(target, first, (end - first) * sizeof(u32));
-
for (; rel->lab != label_invalid; rel++)
if (rel->addr >= first && rel->addr < end)
rel->addr += off;
+}
+static __init void move_labels(struct label *lab, u32 *first, u32 *end,
+ long off)
+{
for (; lab->lab != label_invalid; lab++)
if (lab->addr >= first && lab->addr < end)
lab->addr += off;
}
+static __init void copy_handler(struct reloc *rel, struct label *lab,
+ u32 *first, u32 *end, u32 *target)
+{
+ long off = (long)(target - first);
+
+ memcpy(target, first, (end - first) * sizeof(u32));
+
+ move_relocs(rel, first, end, off);
+ move_labels(lab, first, end, off);
+}
+
static __init int __attribute__((unused)) insn_has_bdelay(struct reloc *rel,
u32 *addr)
{
i_b(p, 0);
}
+static void il_beqz(u32 **p, struct reloc **r, unsigned int reg,
+ enum label_id l)
+{
+ r_mips_pc16(r, *p, l);
+ i_beqz(p, reg, 0);
+}
+
+static void __attribute__((unused))
+il_beqzl(u32 **p, struct reloc **r, unsigned int reg, enum label_id l)
+{
+ r_mips_pc16(r, *p, l);
+ i_beqzl(p, reg, 0);
+}
+
static void il_bnez(u32 **p, struct reloc **r, unsigned int reg,
enum label_id l)
{
i_bgezl(p, reg, 0);
}
-/* The only registers allowed in TLB handlers. */
+/* The only general purpose registers allowed in TLB handlers. */
#define K0 26
#define K1 27
static __initdata struct label labels[128];
static __initdata struct reloc relocs[128];
-#ifdef CONFIG_MIPS32
/*
* The R3000 TLB handler is simple.
*/
panic("TLB refill handler space exceeded");
printk("Synthesized TLB handler (%u instructions).\n",
- p - tlb_handler);
+ (unsigned int)(p - tlb_handler));
#ifdef DEBUG_TLB
{
int i;
+
for (i = 0; i < (p - tlb_handler); i++)
printk("%08x\n", tlb_handler[i]);
}
memcpy((void *)CAC_BASE, tlb_handler, 0x80);
flush_icache_range(CAC_BASE, CAC_BASE + 0x80);
}
-#endif /* CONFIG_MIPS32 */
/*
* The R4000 TLB handler is much more complicated. We have two
}
/*
- * Write random TLB entry, and care about the hazards from the
- * preceeding mtc0 and for the following eret.
+ * Write a random or indexed TLB entry, and take care of the hazards from
+ * the preceding mtc0 and the following eret.
*/
-static __init void build_tlb_write_random_entry(u32 **p, struct label **l,
- struct reloc **r)
+enum tlb_write_entry { tlb_random, tlb_indexed };
+
+static __init void build_tlb_write_entry(u32 **p, struct label **l,
+ struct reloc **r,
+ enum tlb_write_entry wmode)
{
+ void(*tlbw)(u32 **) = NULL;
+
+ switch (wmode) {
+ case tlb_random: tlbw = i_tlbwr; break;
+ case tlb_indexed: tlbw = i_tlbwi; break;
+ }
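+
+	/*
+	 * From here on tlbw points at either i_tlbwr (used with tlb_random
+	 * by the refill handler) or i_tlbwi (used with tlb_indexed by the
+	 * load/store/modify handlers); the per-CPU hazard handling below is
+	 * shared between the two cases.
+	 */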
+
switch (current_cpu_data.cputype) {
case CPU_R4000PC:
case CPU_R4000SC:
case CPU_R4400MC:
/*
* This branch uses up a mtc0 hazard nop slot and saves
- * two nops after the tlbwr.
+ * two nops after the tlbw instruction.
*/
- il_bgezl(p, r, 0, label_tlbwr_hazard);
- i_tlbwr(p);
- l_tlbwr_hazard(l, *p);
+ il_bgezl(p, r, 0, label_tlbw_hazard);
+ tlbw(p);
+ l_tlbw_hazard(l, *p);
i_nop(p);
break;
case CPU_R5000:
case CPU_R5000A:
case CPU_5KC:
+ case CPU_TX49XX:
case CPU_AU1000:
case CPU_AU1100:
case CPU_AU1500:
case CPU_AU1550:
i_nop(p);
- i_tlbwr(p);
+ tlbw(p);
break;
case CPU_R10000:
case CPU_4KSC:
case CPU_20KC:
case CPU_25KF:
- i_tlbwr(p);
+ tlbw(p);
break;
case CPU_NEVADA:
i_nop(p); /* QED specifies 2 nops hazard */
/*
* This branch uses up a mtc0 hazard nop slot and saves
- * a nop after the tlbwr.
+ * a nop after the tlbw instruction.
*/
- il_bgezl(p, r, 0, label_tlbwr_hazard);
- i_tlbwr(p);
- l_tlbwr_hazard(l, *p);
+ il_bgezl(p, r, 0, label_tlbw_hazard);
+ tlbw(p);
+ l_tlbw_hazard(l, *p);
+ break;
+
+ case CPU_RM7000:
+ i_nop(p);
+ i_nop(p);
+ i_nop(p);
+ i_nop(p);
+ tlbw(p);
break;
case CPU_4KEC:
case CPU_24K:
i_ehb(p);
- i_tlbwr(p);
+ tlbw(p);
break;
case CPU_RM9000:
i_ssnop(p);
i_ssnop(p);
i_ssnop(p);
- i_tlbwr(p);
+ tlbw(p);
i_ssnop(p);
i_ssnop(p);
i_ssnop(p);
i_ssnop(p);
break;
+ case CPU_VR4111:
+ case CPU_VR4121:
+ case CPU_VR4122:
+ case CPU_VR4181:
+ case CPU_VR4181A:
+ i_nop(p);
+ i_nop(p);
+ tlbw(p);
+ i_nop(p);
+ i_nop(p);
+ break;
+
+ case CPU_VR4131:
+ case CPU_VR4133:
+ i_nop(p);
+ i_nop(p);
+ tlbw(p);
+ break;
+
default:
panic("No TLB refill handler yet (CPU type: %d)",
current_cpu_data.cputype);
}
}
-#if CONFIG_MIPS64
+#ifdef CONFIG_MIPS64
/*
* TMP and PTR are scratch.
* TMP will be clobbered, PTR will hold the pmd entry.
il_bltz(p, r, tmp, label_vmalloc);
/* No i_nop needed here, since the next insn doesn't touch TMP. */
-# ifdef CONFIG_SMP
+#ifdef CONFIG_SMP
/*
* 64 bit SMP has the lower part of &pgd_current[smp_processor_id()]
* stored in CONTEXT.
if (in_compat_space_p(pgdc)) {
i_dmfc0(p, ptr, C0_CONTEXT);
i_dsra(p, ptr, ptr, 23);
+ i_ld(p, ptr, 0, ptr);
} else {
+#ifdef CONFIG_BUILD_ELF64
+ i_dmfc0(p, ptr, C0_CONTEXT);
+ i_dsrl(p, ptr, ptr, 23);
+ i_dsll(p, ptr, ptr, 3);
+ i_LA_mostly(p, tmp, pgdc);
+ i_daddu(p, ptr, ptr, tmp);
+ i_dmfc0(p, tmp, C0_BADVADDR);
+ i_ld(p, ptr, rel_lo(pgdc), ptr);
+#else
i_dmfc0(p, ptr, C0_CONTEXT);
i_lui(p, tmp, rel_highest(pgdc));
i_dsll(p, ptr, ptr, 9);
i_dsrl32(p, ptr, ptr, 0);
i_and(p, ptr, ptr, tmp);
i_dmfc0(p, tmp, C0_BADVADDR);
+ i_ld(p, ptr, 0, ptr);
+#endif
}
- i_ld(p, ptr, 0, ptr);
-# else
+#else
i_LA_mostly(p, ptr, pgdc);
i_ld(p, ptr, rel_lo(pgdc), ptr);
-# endif
+#endif
l_vmalloc_done(l, *p);
i_dsrl(p, tmp, tmp, PGDIR_SHIFT-3); /* get pgd offset in bytes */
}
}
-#else /* CONFIG_MIPS32 */
+#else /* !CONFIG_MIPS64 */
/*
* TMP and PTR are scratch.
* TMP will be clobbered, PTR will hold the pgd entry.
*/
-static __init void build_get_pgde32(u32 **p, unsigned int tmp, unsigned int ptr)
+static __init void __attribute__((unused))
+build_get_pgde32(u32 **p, unsigned int tmp, unsigned int ptr)
{
long pgdc = (long)pgd_current;
i_sll(p, tmp, tmp, PGD_T_LOG2);
i_addu(p, ptr, ptr, tmp); /* add in pgd offset */
}
-#endif /* CONFIG_MIPS32 */
+
+#endif /* !CONFIG_MIPS64 */
static __init void build_adjust_context(u32 **p, unsigned int ctx)
{
- unsigned int shift = 0;
- unsigned int mask = 0xff0;
-
-#if !defined(CONFIG_MIPS64) && !defined(CONFIG_64BIT_PHYS_ADDR)
- shift++;
- mask |= 0x008;
-#endif
+ unsigned int shift = 4 - (PTE_T_LOG2 + 1);
+ unsigned int mask = (PTRS_PER_PTE / 2 - 1) << (PTE_T_LOG2 + 1);
switch (current_cpu_data.cputype) {
case CPU_VR41XX:
* Kernel is a special case. Only a few CPUs use it.
*/
#ifdef CONFIG_64BIT_PHYS_ADDR
- if (cpu_has_64bit_gp_regs) {
+ if (cpu_has_64bits) {
i_ld(p, tmp, 0, ptep); /* get even pte */
i_ld(p, ptep, sizeof(pte_t), ptep); /* get odd pte */
i_dsrl(p, tmp, tmp, 6); /* convert to entrylo0 */
i_MFC0(&p, K0, C0_BADVADDR);
i_MFC0(&p, K1, C0_ENTRYHI);
i_xor(&p, K0, K0, K1);
- i_SRL(&p, K0, K0, PAGE_SHIFT+1);
+ i_SRL(&p, K0, K0, PAGE_SHIFT + 1);
il_bnez(&p, &r, K0, label_leave);
/* No need for i_nop */
}
#ifdef CONFIG_MIPS64
- build_get_pmde64(&p, &l, &r, K0, K1); /* get pmd ptr in K1 */
+ build_get_pmde64(&p, &l, &r, K0, K1); /* get pmd in K1 */
#else
- build_get_pgde32(&p, K0, K1); /* get pgd ptr in K1 */
+ build_get_pgde32(&p, K0, K1); /* get pgd in K1 */
#endif
build_get_ptep(&p, K0, K1);
build_update_entries(&p, K0, K1);
- build_tlb_write_random_entry(&p, &l, &r);
+ build_tlb_write_entry(&p, &l, &r, tlb_random);
l_leave(&l, p);
i_eret(&p); /* return from trap */
i_nop(&f);
else {
copy_handler(relocs, labels, split, split + 1, f);
+ move_labels(labels, f, f + 1, -1);
f++;
split++;
}
#endif /* CONFIG_MIPS64 */
resolve_relocs(relocs, labels);
- printk("Synthesized TLB handler (%u instructions).\n", final_len);
+ printk("Synthesized TLB refill handler (%u instructions).\n",
+ final_len);
#ifdef DEBUG_TLB
{
flush_icache_range(CAC_BASE, CAC_BASE + 0x100);
}
+/*
+ * TLB load/store/modify handlers.
+ *
+ * Only the fastpath gets synthesized at runtime, the slowpath for
+ * do_page_fault remains normal asm.
+ */
+extern void tlb_do_page_fault_0(void);
+extern void tlb_do_page_fault_1(void);
+
+#define __tlb_handler_align \
+ __attribute__((__aligned__(1 << CONFIG_MIPS_L1_CACHE_SHIFT)))
+
+/*
+ * 128 instructions for the fastpath handler is generous and should
+ * never be exceeded.
+ */
+#define FASTPATH_SIZE 128
+
+u32 __tlb_handler_align handle_tlbl[FASTPATH_SIZE];
+u32 __tlb_handler_align handle_tlbs[FASTPATH_SIZE];
+u32 __tlb_handler_align handle_tlbm[FASTPATH_SIZE];
+
+static void __init
+iPTE_LW(u32 **p, struct label **l, unsigned int pte, int offset,
+ unsigned int ptr)
+{
+#ifdef CONFIG_SMP
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (cpu_has_64bits)
+ i_lld(p, pte, offset, ptr);
+ else
+# endif
+ i_LL(p, pte, offset, ptr);
+#else
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (cpu_has_64bits)
+ i_ld(p, pte, offset, ptr);
+ else
+# endif
+ i_LW(p, pte, offset, ptr);
+#endif
+}
+
+static void __init
+iPTE_SW(u32 **p, struct reloc **r, unsigned int pte, int offset,
+ unsigned int ptr)
+{
+#ifdef CONFIG_SMP
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (cpu_has_64bits)
+ i_scd(p, pte, offset, ptr);
+ else
+# endif
+ i_SC(p, pte, offset, ptr);
+
+ if (r10000_llsc_war())
+ il_beqzl(p, r, pte, label_smp_pgtable_change);
+ else
+ il_beqz(p, r, pte, label_smp_pgtable_change);
+
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (!cpu_has_64bits) {
+ /* no i_nop needed */
+ i_ll(p, pte, sizeof(pte_t) / 2, ptr);
+ i_ori(p, pte, pte, _PAGE_VALID);
+ i_sc(p, pte, sizeof(pte_t) / 2, ptr);
+ il_beqz(p, r, pte, label_smp_pgtable_change);
+ /* no i_nop needed */
+ i_lw(p, pte, 0, ptr);
+ } else
+ i_nop(p);
+# else
+ i_nop(p);
+# endif
+#else
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (cpu_has_64bits)
+ i_sd(p, pte, offset, ptr);
+ else
+# endif
+ i_SW(p, pte, offset, ptr);
+
+# ifdef CONFIG_64BIT_PHYS_ADDR
+ if (!cpu_has_64bits) {
+ i_lw(p, pte, sizeof(pte_t) / 2, ptr);
+ i_ori(p, pte, pte, _PAGE_VALID);
+ i_sw(p, pte, sizeof(pte_t) / 2, ptr);
+ i_lw(p, pte, 0, ptr);
+ }
+# endif
+#endif
+}
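+
+/*
+ * In the common SMP case (no 64-bit PHYS_ADDR special casing) the
+ * iPTE_LW/iPTE_SW pair above makes a pte update come out roughly as
+ *
+ *	ll	pte, 0(ptr)		 # load-linked the pte
+ *	...				 # test and set bits in pte
+ *	sc	pte, 0(ptr)		 # store-conditional it back
+ *	beqz	pte, smp_pgtable_change	 # raced with another CPU, retry
+ *
+ * so a racing pte update on another CPU causes a retry instead of being
+ * silently overwritten.
+ */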
+
+/*
+ * Check if PTE is present, if not then jump to LABEL. PTR points to
+ * the page table where this PTE is located, PTE will be re-loaded
+ * with its original value.
+ */
+static void __init
+build_pte_present(u32 **p, struct label **l, struct reloc **r,
+ unsigned int pte, unsigned int ptr, enum label_id lid)
+{
+ i_andi(p, pte, pte, _PAGE_PRESENT | _PAGE_READ);
+ i_xori(p, pte, pte, _PAGE_PRESENT | _PAGE_READ);
+ il_bnez(p, r, pte, lid);
+ iPTE_LW(p, l, pte, 0, ptr);
+}
+
+/* Make PTE valid, store result in PTR. */
+static void __init
+build_make_valid(u32 **p, struct reloc **r, unsigned int pte,
+ unsigned int ptr)
+{
+ i_ori(p, pte, pte, _PAGE_VALID | _PAGE_ACCESSED);
+ iPTE_SW(p, r, pte, 0, ptr);
+}
+
+/*
+ * Check if PTE can be written to, if not branch to LABEL. Regardless
+ * restore PTE with value from PTR when done.
+ */
+static void __init
+build_pte_writable(u32 **p, struct label **l, struct reloc **r,
+ unsigned int pte, unsigned int ptr, enum label_id lid)
+{
+ i_andi(p, pte, pte, _PAGE_PRESENT | _PAGE_WRITE);
+ i_xori(p, pte, pte, _PAGE_PRESENT | _PAGE_WRITE);
+ il_bnez(p, r, pte, lid);
+ iPTE_LW(p, l, pte, 0, ptr);
+}
+
+/*
+ * Make PTE writable, update software status bits as well, then store
+ * at PTR.
+ */
+static void __init
+build_make_write(u32 **p, struct reloc **r, unsigned int pte,
+ unsigned int ptr)
+{
+ i_ori(p, pte, pte,
+ _PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID | _PAGE_DIRTY);
+ iPTE_SW(p, r, pte, 0, ptr);
+}
+
+/*
+ * Check if PTE can be modified, if not branch to LABEL. Regardless
+ * restore PTE with value from PTR when done.
+ */
+static void __init
+build_pte_modifiable(u32 **p, struct label **l, struct reloc **r,
+ unsigned int pte, unsigned int ptr, enum label_id lid)
+{
+ i_andi(p, pte, pte, _PAGE_WRITE);
+ il_beqz(p, r, pte, lid);
+ iPTE_LW(p, l, pte, 0, ptr);
+}
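+
+/*
+ * The handlers synthesized below wire these checks up as follows:
+ *
+ *	tlbl (load):	pte needs _PAGE_PRESENT and _PAGE_READ
+ *	tlbs (store):	pte needs _PAGE_PRESENT and _PAGE_WRITE
+ *	tlbm (modify):	pte needs _PAGE_WRITE
+ *
+ * Anything else branches to the nopage label and from there to the C
+ * do_page_fault() slow path.
+ */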
+
+/*
+ * R3000 style TLB load/store/modify handlers.
+ */
+
+/* Load the pte from the page table at PTR into ENTRYLO0. */
+static void __init
+build_r3000_pte_reload(u32 **p, unsigned int ptr)
+{
+ i_lw(p, ptr, 0, ptr);
+ i_nop(p); /* load delay */
+ i_mtc0(p, ptr, C0_ENTRYLO0);
+ i_nop(p); /* cp0 delay */
+}
+
+/*
+ * The index register may have the probe fail bit set,
+ * because we would trap on accesses to kseg2, i.e. without a refill.
+ */
+static void __init
+build_r3000_tlb_write(u32 **p, struct label **l, struct reloc **r,
+ unsigned int tmp)
+{
+ i_mfc0(p, tmp, C0_INDEX);
+ i_nop(p); /* cp0 delay */
+ il_bltz(p, r, tmp, label_r3000_write_probe_fail);
+ i_nop(p); /* branch delay */
+ i_tlbwi(p);
+ il_b(p, r, label_r3000_write_probe_ok);
+ i_nop(p); /* branch delay */
+ l_r3000_write_probe_fail(l, *p);
+ i_tlbwr(p);
+ l_r3000_write_probe_ok(l, *p);
+}
+
+static void __init
+build_r3000_tlbchange_handler_head(u32 **p, unsigned int pte,
+ unsigned int ptr)
+{
+ long pgdc = (long)pgd_current;
+
+ i_mfc0(p, pte, C0_BADVADDR);
+ i_lui(p, ptr, rel_hi(pgdc)); /* cp0 delay */
+ i_lw(p, ptr, rel_lo(pgdc), ptr);
+ i_srl(p, pte, pte, 22); /* load delay */
+ i_sll(p, pte, pte, 2);
+ i_addu(p, ptr, ptr, pte);
+ i_mfc0(p, pte, C0_CONTEXT);
+ i_lw(p, ptr, 0, ptr); /* cp0 delay */
+ i_andi(p, pte, pte, 0xffc); /* load delay */
+ i_addu(p, ptr, ptr, pte);
+ i_lw(p, pte, 0, ptr);
+ i_nop(p); /* load delay */
+ i_tlbp(p);
+}
+
+static void __init
+build_r3000_tlbchange_handler_tail(u32 **p, unsigned int tmp)
+{
+ i_mfc0(p, tmp, C0_EPC);
+ i_nop(p); /* cp0 delay */
+ i_jr(p, tmp);
+ i_rfe(p); /* branch delay */
+}
+
+static void __init build_r3000_tlb_load_handler(void)
+{
+ u32 *p = handle_tlbl;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbl, 0, sizeof(handle_tlbl));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ build_r3000_tlbchange_handler_head(&p, K0, K1);
+ build_pte_present(&p, &l, &r, K0, K1, label_nopage_tlbl);
+ build_make_valid(&p, &r, K0, K1);
+ build_r3000_pte_reload(&p, K1);
+ build_r3000_tlb_write(&p, &l, &r, K0);
+ build_r3000_tlbchange_handler_tail(&p, K0);
+
+ l_nopage_tlbl(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_0 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbl) > FASTPATH_SIZE)
+ panic("TLB load handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB load handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbl));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbl[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbl,
+ (unsigned long)handle_tlbl + FASTPATH_SIZE * sizeof(u32));
+}
+
+static void __init build_r3000_tlb_store_handler(void)
+{
+ u32 *p = handle_tlbs;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbs, 0, sizeof(handle_tlbs));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ build_r3000_tlbchange_handler_head(&p, K0, K1);
+ build_pte_writable(&p, &l, &r, K0, K1, label_nopage_tlbs);
+ build_make_write(&p, &r, K0, K1);
+ build_r3000_pte_reload(&p, K1);
+ build_r3000_tlb_write(&p, &l, &r, K0);
+ build_r3000_tlbchange_handler_tail(&p, K0);
+
+ l_nopage_tlbs(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbs) > FASTPATH_SIZE)
+ panic("TLB store handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB store handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbs));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbs[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbs,
+ (unsigned long)handle_tlbs + FASTPATH_SIZE * sizeof(u32));
+}
+
+static void __init build_r3000_tlb_modify_handler(void)
+{
+ u32 *p = handle_tlbm;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbm, 0, sizeof(handle_tlbm));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ build_r3000_tlbchange_handler_head(&p, K0, K1);
+ build_pte_modifiable(&p, &l, &r, K0, K1, label_nopage_tlbm);
+ build_make_write(&p, &r, K0, K1);
+ build_r3000_pte_reload(&p, K1);
+ i_tlbwi(&p);
+ build_r3000_tlbchange_handler_tail(&p, K0);
+
+ l_nopage_tlbm(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbm) > FASTPATH_SIZE)
+ panic("TLB modify handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB modify handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbm));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbm[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbm,
+ (unsigned long)handle_tlbm + FASTPATH_SIZE * sizeof(u32));
+}
+
+/*
+ * R4000 style TLB load/store/modify handlers.
+ */
+static void __init
+build_r4000_tlbchange_handler_head(u32 **p, struct label **l,
+ struct reloc **r, unsigned int pte,
+ unsigned int ptr)
+{
+#ifdef CONFIG_MIPS64
+ build_get_pmde64(p, l, r, pte, ptr); /* get pmd in ptr */
+#else
+ build_get_pgde32(p, pte, ptr); /* get pgd in ptr */
+#endif
+
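+	/* Compute the address of the PTE that maps the faulting address. */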
+ i_MFC0(p, pte, C0_BADVADDR);
+ i_LW(p, ptr, 0, ptr);
+ i_SRL(p, pte, pte, PAGE_SHIFT + PTE_ORDER - PTE_T_LOG2);
+ i_andi(p, pte, pte, (PTRS_PER_PTE - 1) << PTE_T_LOG2);
+ i_ADDU(p, ptr, ptr, pte);
+
+#ifdef CONFIG_SMP
+ l_smp_pgtable_change(l, *p);
+#endif
+ iPTE_LW(p, l, pte, 0, ptr); /* get even pte */
+ build_tlb_probe_entry(p);
+}
+
+static void __init
+build_r4000_tlbchange_handler_tail(u32 **p, struct label **l,
+ struct reloc **r, unsigned int tmp,
+ unsigned int ptr)
+{
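+	/* Clear the low bit so ptr points at the even PTE of the pair. */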
+ i_ori(p, ptr, ptr, sizeof(pte_t));
+ i_xori(p, ptr, ptr, sizeof(pte_t));
+ build_update_entries(p, tmp, ptr);
+ build_tlb_write_entry(p, l, r, tlb_indexed);
+ l_leave(l, *p);
+ i_eret(p); /* return from trap */
+
+#ifdef CONFIG_MIPS64
+ build_get_pgd_vmalloc64(p, l, r, tmp, ptr);
+#endif
+}
+
+static void __init build_r4000_tlb_load_handler(void)
+{
+ u32 *p = handle_tlbl;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbl, 0, sizeof(handle_tlbl));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ if (bcm1250_m3_war()) {
+ i_MFC0(&p, K0, C0_BADVADDR);
+ i_MFC0(&p, K1, C0_ENTRYHI);
+ i_xor(&p, K0, K0, K1);
+ i_SRL(&p, K0, K0, PAGE_SHIFT + 1);
+ il_bnez(&p, &r, K0, label_leave);
+ /* No need for i_nop */
+ }
+
+ build_r4000_tlbchange_handler_head(&p, &l, &r, K0, K1);
+ build_pte_present(&p, &l, &r, K0, K1, label_nopage_tlbl);
+ build_make_valid(&p, &r, K0, K1);
+ build_r4000_tlbchange_handler_tail(&p, &l, &r, K0, K1);
+
+ l_nopage_tlbl(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_0 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbl) > FASTPATH_SIZE)
+ panic("TLB load handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB load handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbl));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbl[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbl,
+ (unsigned long)handle_tlbl + FASTPATH_SIZE * sizeof(u32));
+}
+
+static void __init build_r4000_tlb_store_handler(void)
+{
+ u32 *p = handle_tlbs;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbs, 0, sizeof(handle_tlbs));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ build_r4000_tlbchange_handler_head(&p, &l, &r, K0, K1);
+ build_pte_writable(&p, &l, &r, K0, K1, label_nopage_tlbs);
+ build_make_write(&p, &r, K0, K1);
+ build_r4000_tlbchange_handler_tail(&p, &l, &r, K0, K1);
+
+ l_nopage_tlbs(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbs) > FASTPATH_SIZE)
+ panic("TLB store handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB store handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbs));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbs[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbs,
+ (unsigned long)handle_tlbs + FASTPATH_SIZE * sizeof(u32));
+}
+
+static void __init build_r4000_tlb_modify_handler(void)
+{
+ u32 *p = handle_tlbm;
+ struct label *l = labels;
+ struct reloc *r = relocs;
+
+ memset(handle_tlbm, 0, sizeof(handle_tlbm));
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+
+ build_r4000_tlbchange_handler_head(&p, &l, &r, K0, K1);
+ build_pte_modifiable(&p, &l, &r, K0, K1, label_nopage_tlbm);
+ /* Present and writable bits set, set accessed and dirty bits. */
+ build_make_write(&p, &r, K0, K1);
+ build_r4000_tlbchange_handler_tail(&p, &l, &r, K0, K1);
+
+ l_nopage_tlbm(&l, p);
+ i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
+ i_nop(&p);
+
+ if ((p - handle_tlbm) > FASTPATH_SIZE)
+ panic("TLB modify handler fastpath space exceeded");
+
+ resolve_relocs(relocs, labels);
+ printk("Synthesized TLB modify handler fastpath (%u instructions).\n",
+ (unsigned int)(p - handle_tlbm));
+
+#ifdef DEBUG_TLB
+ {
+ int i;
+
+ for (i = 0; i < FASTPATH_SIZE; i++)
+ printk("%08x\n", handle_tlbm[i]);
+ }
+#endif
+
+ flush_icache_range((unsigned long)handle_tlbm,
+ (unsigned long)handle_tlbm + FASTPATH_SIZE * sizeof(u32));
+}
+
void __init build_tlb_refill_handler(void)
{
+ /*
+	 * The refill handler is generated per-CPU; multi-node systems
+ * may have local storage for it. The other handlers are only
+ * needed once.
+ */
+ static int run_once = 0;
+
switch (current_cpu_data.cputype) {
-#ifdef CONFIG_MIPS32
case CPU_R2000:
case CPU_R3000:
case CPU_R3000A:
case CPU_TX3922:
case CPU_TX3927:
build_r3000_tlb_refill_handler();
+ if (!run_once) {
+ build_r3000_tlb_load_handler();
+ build_r3000_tlb_store_handler();
+ build_r3000_tlb_modify_handler();
+ run_once++;
+ }
break;
case CPU_R6000:
case CPU_R6000A:
panic("No R6000 TLB refill handler yet");
break;
-#endif
case CPU_R8000:
panic("No R8000 TLB refill handler yet");
default:
build_r4000_tlb_refill_handler();
+ if (!run_once) {
+ build_r4000_tlb_load_handler();
+ build_r4000_tlb_store_handler();
+ build_r4000_tlb_modify_handler();
+ run_once++;
+ }
}
}
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
- * Copyright (C) 1997, 2001 Ralf Baechle
+ * Copyright (C) 1997, 01, 05 Ralf Baechle
* Copyright 2001 MontaVista Software Inc.
* Author: jsun@mvista.com or jsun@junsun.net
*
* Copyright (C) 2004 MontaVista Software Inc.
* Author: Manish Lachwani, mlachwani@mvista.com
*/
-#include <linux/config.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/delay.h>
* BRIEF MODULE DESCRIPTION
* Momentum Computer Ocelot-3 board dependent boot routines
*
- * Copyright (C) 1996, 1997, 2001 Ralf Baechle
+ * Copyright (C) 1996, 1997, 01, 05 Ralf Baechle
* Copyright (C) 2000 RidgeRun, Inc.
* Copyright (C) 2001 Red Hat, Inc.
* Copyright (C) 2002 Momentum Computer
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/types.h>
*
* Copyright 2001 MontaVista Software Inc.
* Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
- * Copyright (C) 2000, 2001 Ralf Baechle (ralf@gnu.org)
+ * Copyright (C) 2000, 01, 05 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
*/
-#include <linux/config.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel_stat.h>
--- /dev/null
+EXTRA_CFLAGS := -Werror
+
+obj-$(CONFIG_OPROFILE) += oprofile.o
+
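+# The generic oprofile core lives in drivers/oprofile; pull its objects in here.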
+DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
+ oprof.o cpu_buffer.o buffer_sync.o \
+ event_buffer.o oprofile_files.o \
+ oprofilefs.o oprofile_stats.o \
+ timer_int.o )
+
+oprofile-y := $(DRIVER_OBJS) common.o
+
+oprofile-$(CONFIG_CPU_MIPS32) += op_model_mipsxx.o
+oprofile-$(CONFIG_CPU_MIPS64) += op_model_mipsxx.o
+oprofile-$(CONFIG_CPU_RM9000) += op_model_rm9000.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle
+ */
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/oprofile.h>
+#include <linux/smp.h>
+#include <asm/cpu-info.h>
+
+#include "op_impl.h"
+
+extern struct op_mips_model op_model_mipsxx __attribute__((weak));
+extern struct op_mips_model op_model_rm9000 __attribute__((weak));
+
+static struct op_mips_model *model;
+
+static struct op_counter_config ctr[20];
+
+static int op_mips_setup(void)
+{
+ /* Pre-compute the values to stuff in the hardware registers. */
+ model->reg_setup(ctr);
+
+ /* Configure the registers on all cpus. */
+ on_each_cpu(model->cpu_setup, 0, 0, 1);
+
+ return 0;
+}
+
+static int op_mips_create_files(struct super_block * sb, struct dentry * root)
+{
+ int i;
+
+ for (i = 0; i < model->num_counters; ++i) {
+ struct dentry *dir;
+ char buf[3];
+
+ snprintf(buf, sizeof buf, "%d", i);
+ dir = oprofilefs_mkdir(sb, root, buf);
+
+ oprofilefs_create_ulong(sb, dir, "enabled", &ctr[i].enabled);
+ oprofilefs_create_ulong(sb, dir, "event", &ctr[i].event);
+ oprofilefs_create_ulong(sb, dir, "count", &ctr[i].count);
+ /* Dummies. */
+ oprofilefs_create_ulong(sb, dir, "kernel", &ctr[i].kernel);
+ oprofilefs_create_ulong(sb, dir, "user", &ctr[i].user);
+ oprofilefs_create_ulong(sb, dir, "exl", &ctr[i].exl);
+ oprofilefs_create_ulong(sb, dir, "unit_mask", &ctr[i].unit_mask);
+ }
+
+ return 0;
+}
+
+static int op_mips_start(void)
+{
+ on_each_cpu(model->cpu_start, NULL, 0, 1);
+
+ return 0;
+}
+
+static void op_mips_stop(void)
+{
+ /* Disable performance monitoring for all counters. */
+ on_each_cpu(model->cpu_stop, NULL, 0, 1);
+}
+
+void __init oprofile_arch_init(struct oprofile_operations *ops)
+{
+ struct op_mips_model *lmodel = NULL;
+
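+	/* Select the counter model matching the current CPU type. */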
+ switch (current_cpu_data.cputype) {
+ case CPU_24K:
+ lmodel = &op_model_mipsxx;
+ break;
+
+ case CPU_RM9000:
+ lmodel = &op_model_rm9000;
+ break;
+	}
+
+ if (!lmodel)
+ return;
+
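+	/* Let the model claim its resources, e.g. the performance counter IRQ. */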
+ if (lmodel->init())
+ return;
+
+ model = lmodel;
+
+ ops->create_files = op_mips_create_files;
+ ops->setup = op_mips_setup;
+ ops->start = op_mips_start;
+ ops->stop = op_mips_stop;
+ ops->cpu_type = lmodel->cpu_type;
+
+ printk(KERN_INFO "oprofile: using %s performance monitoring.\n",
+ lmodel->cpu_type);
+}
+
+void oprofile_arch_exit(void)
+{
+ model->exit();
+}
--- /dev/null
+/**
+ * @file arch/mips/oprofile/op_impl.h
+ *
+ * @remark Copyright 2002 OProfile authors
+ * @remark Read the file COPYING
+ *
+ * @author Richard Henderson <rth@twiddle.net>
+ */
+
+#ifndef OP_IMPL_H
+#define OP_IMPL_H 1
+
+/* Per-counter configuration as set via oprofilefs. */
+struct op_counter_config {
+ unsigned long enabled;
+ unsigned long event;
+ unsigned long count;
+ /* Dummies because I am too lazy to hack the userspace tools. */
+ unsigned long kernel;
+ unsigned long user;
+ unsigned long exl;
+ unsigned long unit_mask;
+};
+
+/* Per-architecture configury and hooks. */
+struct op_mips_model {
+ void (*reg_setup) (struct op_counter_config *);
+ void (*cpu_setup) (void * dummy);
+ int (*init)(void);
+ void (*exit)(void);
+ void (*cpu_start)(void *args);
+ void (*cpu_stop)(void *args);
+ char *cpu_type;
+ unsigned char num_counters;
+};
+
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle
+ */
+#include <linux/oprofile.h>
+#include <linux/interrupt.h>
+#include <linux/smp.h>
+
+#include "op_impl.h"
+
+#define RM9K_COUNTER1_EVENT(event) ((event) << 0)
+#define RM9K_COUNTER1_SUPERVISOR (1ULL << 7)
+#define RM9K_COUNTER1_KERNEL (1ULL << 8)
+#define RM9K_COUNTER1_USER (1ULL << 9)
+#define RM9K_COUNTER1_ENABLE (1ULL << 10)
+#define RM9K_COUNTER1_OVERFLOW (1ULL << 15)
+
+#define RM9K_COUNTER2_EVENT(event) ((event) << 16)
+#define RM9K_COUNTER2_SUPERVISOR (1ULL << 23)
+#define RM9K_COUNTER2_KERNEL (1ULL << 24)
+#define RM9K_COUNTER2_USER (1ULL << 25)
+#define RM9K_COUNTER2_ENABLE (1ULL << 26)
+#define RM9K_COUNTER2_OVERFLOW (1ULL << 31)
+
+extern unsigned int rm9000_perfcount_irq;
+
+static struct rm9k_register_config {
+ unsigned int control;
+ unsigned int reset_counter1;
+ unsigned int reset_counter2;
+} reg;
+
+/* Compute all of the registers in preparation for enabling profiling. */
+
+static void rm9000_reg_setup(struct op_counter_config *ctr)
+{
+ unsigned int control = 0;
+
+ /* Compute the performance counter control word. */
+ /* For now count kernel and user mode */
+ if (ctr[0].enabled)
+ control |= RM9K_COUNTER1_EVENT(ctr[0].event) |
+ RM9K_COUNTER1_KERNEL |
+ RM9K_COUNTER1_USER |
+ RM9K_COUNTER1_ENABLE;
+ if (ctr[1].enabled)
+ control |= RM9K_COUNTER2_EVENT(ctr[1].event) |
+ RM9K_COUNTER2_KERNEL |
+ RM9K_COUNTER2_USER |
+ RM9K_COUNTER2_ENABLE;
+ reg.control = control;
+
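+	/* Preload values so each counter overflows after "count" events. */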
+ reg.reset_counter1 = 0x80000000 - ctr[0].count;
+ reg.reset_counter2 = 0x80000000 - ctr[1].count;
+}
+
+/* Program all of the registers in preparation for enabling profiling. */
+
+static void rm9000_cpu_setup (void *args)
+{
+ uint64_t perfcount;
+
+ perfcount = ((uint64_t) reg.reset_counter2 << 32) | reg.reset_counter1;
+ write_c0_perfcount(perfcount);
+}
+
+static void rm9000_cpu_start(void *args)
+{
+ /* Start all counters on current CPU */
+ write_c0_perfcontrol(reg.control);
+}
+
+static void rm9000_cpu_stop(void *args)
+{
+ /* Stop all counters on current CPU */
+ write_c0_perfcontrol(0);
+}
+
+static irqreturn_t rm9000_perfcount_handler(int irq, void * dev_id,
+ struct pt_regs *regs)
+{
+ unsigned int control = read_c0_perfcontrol();
+ uint32_t counter1, counter2;
+ uint64_t counters;
+
+ /*
+ * RM9000 combines two 32-bit performance counters into a single
+ * 64-bit coprocessor zero register. To avoid a race updating the
+ * registers we need to stop the counters while we're messing with
+ * them ...
+ */
+ write_c0_perfcontrol(0);
+
+ counters = read_c0_perfcount();
+ counter1 = counters;
+ counter2 = counters >> 32;
+
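+	/* Report any overflowed counter to oprofile and reload its start value. */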
+ if (control & RM9K_COUNTER1_OVERFLOW) {
+ oprofile_add_sample(regs, 0);
+ counter1 = reg.reset_counter1;
+ }
+ if (control & RM9K_COUNTER2_OVERFLOW) {
+ oprofile_add_sample(regs, 1);
+ counter2 = reg.reset_counter2;
+ }
+
+ counters = ((uint64_t)counter2 << 32) | counter1;
+ write_c0_perfcount(counters);
+ write_c0_perfcontrol(reg.control);
+
+ return IRQ_HANDLED;
+}
+
+static int rm9000_init(void)
+{
+ return request_irq(rm9000_perfcount_irq, rm9000_perfcount_handler,
+ 0, "Perfcounter", NULL);
+}
+
+static void rm9000_exit(void)
+{
+ free_irq(rm9000_perfcount_irq, NULL);
+}
+
+struct op_mips_model op_model_rm9000 = {
+ .reg_setup = rm9000_reg_setup,
+ .cpu_setup = rm9000_cpu_setup,
+ .init = rm9000_init,
+ .exit = rm9000_exit,
+ .cpu_start = rm9000_cpu_start,
+ .cpu_stop = rm9000_cpu_stop,
+ .cpu_type = "mips/rm9000",
+ .num_counters = 2
+};
+#include <linux/config.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <asm/mips-boards/atlasint.h>
--- /dev/null
+/*
+ * arch/mips/pci/fixup-sb1250.c
+ *
+ * Copyright (C) 2004 MIPS Technologies, Inc. All rights reserved.
+ * Author: Maciej W. Rozycki <macro@mips.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/pci.h>
+
+/*
+ * The BCM1250, etc. PCI/HT bridge reports as a host bridge.
+ */
+static void __init quirk_sb1250_ht(struct pci_dev *dev)
+{
+ dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIBYTE, PCI_DEVICE_ID_BCM1250_HT,
+ quirk_sb1250_ht);
--- /dev/null
+/*
+ * arch/mips/vr41xx/nec-cmbvr4133/pci_fixup.c
+ *
+ * The NEC CMB-VR4133 Board specific PCI fixups.
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2003-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ * Modified for support in 2.6
+ * Author: Manish Lachwani (mlachwani@mvista.com)
+ *
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/io.h>
+#include <asm/vr41xx/cmbvr4133.h>
+
+extern int vr4133_rockhopper;
+extern void ali_m1535plus_init(struct pci_dev *dev);
+extern void ali_m5229_init(struct pci_dev *dev);
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ /*
+	 * We have to reset the AMD PCnet adapter on Rockhopper since
+	 * PMON leaves it enabled and generating interrupts. This leads
+	 * to a lockup if some PCI device driver later enables the IRQ line
+ * shared with PCnet and there is no AMD PCnet driver to catch its
+ * interrupts.
+ */
+#ifdef CONFIG_ROCKHOPPER
+ if (dev->vendor == PCI_VENDOR_ID_AMD &&
+ dev->device == PCI_DEVICE_ID_AMD_LANCE) {
+ inl(pci_resource_start(dev, 0) + 0x18);
+ }
+#endif
+
+ /*
+	 * We have to open the bridges' windows down to 0 because otherwise
+	 * we cannot access ISA south bridge I/O registers that get mapped from
+	 * 0. For example, the 8259 PIC would be inaccessible without that.
+ */
+ if(dev->vendor == PCI_VENDOR_ID_INTEL && dev->device == PCI_DEVICE_ID_INTEL_S21152BB) {
+ pci_write_config_byte(dev, PCI_IO_BASE, 0);
+ if(dev->bus->number == 0) {
+ pci_write_config_word(dev, PCI_IO_BASE_UPPER16, 0);
+ } else {
+ pci_write_config_word(dev, PCI_IO_BASE_UPPER16, 1);
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * M1535 IRQ mapping
+ * Feel free to change this, although it shouldn't be needed
+ */
+#define M1535_IRQ_INTA 7
+#define M1535_IRQ_INTB 9
+#define M1535_IRQ_INTC 10
+#define M1535_IRQ_INTD 11
+
+#define M1535_IRQ_USB 9
+#define M1535_IRQ_IDE 14
+#define M1535_IRQ_IDE2 15
+#define M1535_IRQ_PS2 12
+#define M1535_IRQ_RTC 8
+#define M1535_IRQ_FDC 6
+#define M1535_IRQ_AUDIO 5
+#define M1535_IRQ_COM1 4
+#define M1535_IRQ_COM2 4
+#define M1535_IRQ_IRDA 3
+#define M1535_IRQ_KBD 1
+#define M1535_IRQ_TMR 0
+
+/* Rockhopper "slots" assignment; this is hard-coded ... */
+#define ROCKHOPPER_M5451_SLOT 1
+#define ROCKHOPPER_M1535_SLOT 2
+#define ROCKHOPPER_M5229_SLOT 11
+#define ROCKHOPPER_M5237_SLOT 15
+#define ROCKHOPPER_PMU_SLOT 12
+/* ... and hard-wired. */
+#define ROCKHOPPER_PCI1_SLOT 3
+#define ROCKHOPPER_PCI2_SLOT 4
+#define ROCKHOPPER_PCI3_SLOT 5
+#define ROCKHOPPER_PCI4_SLOT 6
+#define ROCKHOPPER_PCNET_SLOT 1
+
+#define M1535_IRQ_MASK(n) (1 << (n))
+
+#define M1535_IRQ_EDGE (M1535_IRQ_MASK(M1535_IRQ_TMR) | \
+ M1535_IRQ_MASK(M1535_IRQ_KBD) | \
+ M1535_IRQ_MASK(M1535_IRQ_COM1) | \
+ M1535_IRQ_MASK(M1535_IRQ_COM2) | \
+ M1535_IRQ_MASK(M1535_IRQ_IRDA) | \
+ M1535_IRQ_MASK(M1535_IRQ_RTC) | \
+ M1535_IRQ_MASK(M1535_IRQ_FDC) | \
+ M1535_IRQ_MASK(M1535_IRQ_PS2))
+
+#define M1535_IRQ_LEVEL (M1535_IRQ_MASK(M1535_IRQ_IDE) | \
+ M1535_IRQ_MASK(M1535_IRQ_USB) | \
+ M1535_IRQ_MASK(M1535_IRQ_INTA) | \
+ M1535_IRQ_MASK(M1535_IRQ_INTB) | \
+ M1535_IRQ_MASK(M1535_IRQ_INTC) | \
+ M1535_IRQ_MASK(M1535_IRQ_INTD))
+
+struct irq_map_entry {
+ u16 bus;
+ u8 slot;
+ u8 irq;
+};
+static struct irq_map_entry int_map[] = {
+ {1, ROCKHOPPER_M5451_SLOT, M1535_IRQ_AUDIO}, /* Audio controller */
+ {1, ROCKHOPPER_PCI1_SLOT, M1535_IRQ_INTD}, /* PCI slot #1 */
+ {1, ROCKHOPPER_PCI2_SLOT, M1535_IRQ_INTC}, /* PCI slot #2 */
+ {1, ROCKHOPPER_M5237_SLOT, M1535_IRQ_USB}, /* USB host controller */
+ {1, ROCKHOPPER_M5229_SLOT, IDE_PRIMARY_IRQ}, /* IDE controller */
+ {2, ROCKHOPPER_PCNET_SLOT, M1535_IRQ_INTD}, /* AMD Am79c973 on-board
+ ethernet */
+ {2, ROCKHOPPER_PCI3_SLOT, M1535_IRQ_INTB}, /* PCI slot #3 */
+ {2, ROCKHOPPER_PCI4_SLOT, M1535_IRQ_INTC} /* PCI slot #4 */
+};
+
+static int pci_intlines[] =
+ { M1535_IRQ_INTA, M1535_IRQ_INTB, M1535_IRQ_INTC, M1535_IRQ_INTD };
+
+/* Determine the Rockhopper IRQ line number for the PCI device */
+int rockhopper_get_irq(struct pci_dev *dev, u8 pin, u8 slot)
+{
+ struct pci_bus *bus;
+ int i;
+
+ bus = dev->bus;
+ if (bus == NULL)
+ return -1;
+
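+	/* Look the slot up in the routing table and apply the usual INTA-INTD pin rotation. */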
+ for (i = 0; i < sizeof (int_map) / sizeof (int_map[0]); i++) {
+ if (int_map[i].bus == bus->number && int_map[i].slot == slot) {
+ int line;
+ for (line = 0; line < 4; line++)
+ if (pci_intlines[line] == int_map[i].irq)
+ break;
+ if (line < 4)
+ return pci_intlines[(line + (pin - 1)) % 4];
+ else
+ return int_map[i].irq;
+ }
+ }
+ return -1;
+}
+
+#ifdef CONFIG_ROCKHOPPER
+void i8259_init(void)
+{
+ outb(0x11, 0x20); /* Master ICW1 */
+ outb(I8259_IRQ_BASE, 0x21); /* Master ICW2 */
+ outb(0x04, 0x21); /* Master ICW3 */
+ outb(0x01, 0x21); /* Master ICW4 */
+ outb(0xff, 0x21); /* Master IMW */
+
+ outb(0x11, 0xa0); /* Slave ICW1 */
+ outb(I8259_IRQ_BASE + 8, 0xa1); /* Slave ICW2 */
+ outb(0x02, 0xa1); /* Slave ICW3 */
+ outb(0x01, 0xa1); /* Slave ICW4 */
+ outb(0xff, 0xa1); /* Slave IMW */
+
+ outb(0x00, 0x4d0);
+ outb(0x02, 0x4d1); /* USB IRQ9 is level */
+}
+#endif
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ extern int pci_probe_only;
+ pci_probe_only = 1;
+
+#ifdef CONFIG_ROCKHOPPER
+ if( dev->bus->number == 1 && vr4133_rockhopper ) {
+ if(slot == ROCKHOPPER_PCI1_SLOT || slot == ROCKHOPPER_PCI2_SLOT)
+ dev->irq = CMBVR41XX_INTA_IRQ;
+ else
+ dev->irq = rockhopper_get_irq(dev, pin, slot);
+ } else
+ dev->irq = CMBVR41XX_INTA_IRQ;
+#else
+ dev->irq = CMBVR41XX_INTA_IRQ;
+#endif
+
+ return dev->irq;
+}
+
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, ali_m1535plus_init);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229, ali_m5229_init);
+
+
*
* ASIC PCI only supports type 1 config cycles.
*/
-static int set_config_address(unsigned char busno, unsigned int devfn, int reg)
+static int set_config_address(unsigned int busno, unsigned int devfn, int reg)
{
- if ((busno > 255) || (devfn > 255) || (reg > 255))
+ if ((devfn > 255) || (reg > 255))
return PCIBIOS_BAD_REGISTER_NUMBER;
if (busno == 0 && devfn >= PCI_DEVFN(8, 0))
#include <linux/init.h>
#include <asm/addrspace.h>
+#include <asm/byteorder.h>
#include <asm/tx4927/tx4927_pci.h>
-#include <asm/debug.h>
/* initialize in setup */
struct resource pci_io_resource = {
dev = PCI_SLOT(devfn);
func = PCI_FUNC(devfn);
- if (size == 2) {
- if (where & 1)
- return PCIBIOS_BAD_REGISTER_NUMBER;
- }
-
- if (size == 4) {
- if (where & 3)
- return PCIBIOS_BAD_REGISTER_NUMBER;
- }
-
/* check if the bus is top-level */
if (bus->parent != NULL) {
busno = bus->number;
switch (size) {
case 1:
*val = *(volatile u8 *) ((ulong) & tx4927_pcicptr->
- g2pcfgdata | (where & 3));
+ g2pcfgdata |
+#ifdef __LITTLE_ENDIAN
+ (where & 3));
+#else
+ ((where & 0x3) ^ 0x3));
+#endif
break;
case 2:
*val = *(volatile u16 *) ((ulong) & tx4927_pcicptr->
- g2pcfgdata | (where & 3));
+ g2pcfgdata |
+#ifdef __LITTLE_ENDIAN
+ (where & 3));
+#else
+ ((where & 0x3) ^ 0x2));
+#endif
break;
case 4:
*val = tx4927_pcicptr->g2pcfgdata;
dev = PCI_SLOT(devfn);
func = PCI_FUNC(devfn);
- if (size == 1) {
- if (where & 1)
- return PCIBIOS_BAD_REGISTER_NUMBER;
- }
-
- if (size == 4) {
- if (where & 3)
- return PCIBIOS_BAD_REGISTER_NUMBER;
- }
-
/* check if the bus is top-level */
if (bus->parent != NULL) {
busno = bus->number;
switch (size) {
case 1:
*(volatile u8 *) ((ulong) & tx4927_pcicptr->
- g2pcfgdata | (where & 3)) = val;
+ g2pcfgdata |
+#ifdef __LITTLE_ENDIAN
+ (where & 3)) = val;
+#else
+ ((where & 0x3) ^ 0x3)) = val;
+#endif
break;
case 2:
*(volatile u16 *) ((ulong) & tx4927_pcicptr->
- g2pcfgdata | (where & 3)) = val;
+ g2pcfgdata |
+#ifdef __LITTLE_ENDIAN
+ (where & 3)) = val;
+#else
+ ((where & 0x3) ^ 0x2)) = val;
+#endif
break;
case 4:
tx4927_pcicptr->g2pcfgdata = val;
}
struct pci_ops sb1250_pci_ops = {
- .read = sb1250_pcibios_read,
- .write = sb1250_pcibios_write
+ .read = sb1250_pcibios_read,
+ .write = sb1250_pcibios_write,
};
static struct resource sb1250_mem_resource = {
.end = 0x5fffffffUL,
.flags = IORESOURCE_MEM,
};
-
+
static struct resource sb1250_io_resource = {
.name = "SB1250 PCI I/O",
.start = 0x00000000UL,
/* CFE will assign PCI resources */
pci_probe_only = 1;
+ /* Avoid ISA compat ranges. */
+ PCIBIOS_MIN_IO = 0x00008000UL;
+ PCIBIOS_MIN_MEM = 0x01000000UL;
+
/* Set I/O resource limits. */
- ioport_resource.end = 0x01ffffff; /* 32MB accessible by sb1250 */
- iomem_resource.end = 0xffffffff; /* no HT support yet */
+ ioport_resource.end = 0x01ffffffUL; /* 32MB accessible by sb1250 */
+ iomem_resource.end = 0xffffffffUL; /* no HT support yet */
cfg_space =
ioremap(A_PHYS_LDTPCI_CFG_MATCH_BITS, 16 * 1024 * 1024);
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
-#include <asm/gt64240.h>
+#include <asm/titan_dep.h>
extern struct pci_ops titan_pci_ops;
* anyway. So we just claim 64kB here.
*/
#define TITAN_IO_SIZE 0x0000ffffUL
+#define TITAN_IO_BASE 0xe8000000UL
static struct resource py_io_resource = {
"Titan IO MEM", 0x00001000UL, TITAN_IO_SIZE - 1, IORESOURCE_IO,
{
unsigned long io_v_base;
- io_v_base = (unsigned long) ioremap(0xe0000000UL,TITAN_IO_SIZE);
+ io_v_base = (unsigned long) ioremap(TITAN_IO_BASE, TITAN_IO_SIZE);
if (!io_v_base)
panic(ioremap_failed);
set_io_port_base(io_v_base);
+ TITAN_WRITE(RM9000x2_OCD_LKM7, TITAN_READ(RM9000x2_OCD_LKM7) | 1);
ioport_resource.end = TITAN_IO_SIZE - 1;
*
* Copyright (C) 2003 PMC-Sierra Inc.
* Author: Manish Lachwani (lachwani@pmc-sierra.com)
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Header file for atmel_read_eeprom.c
*/
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <asm/pmon.h>
#include <asm/titan_dep.h>
+extern unsigned int (*mips_hpt_read)(void);
+extern void (*mips_hpt_init)(unsigned int);
+
#define LAUNCHSTACK_SIZE 256
static spinlock_t launch_lock __initdata;
{
spin_lock(&launch_lock);
- debug_vectors->cpustart(1, &prom_smp_bootstrap,
- launchstack + LAUNCHSTACK_SIZE, 0);
+ pmon_cpustart(1, &prom_smp_bootstrap,
+ launchstack + LAUNCHSTACK_SIZE, 0);
}
/*
* We don't want to start the secondary CPU yet nor do we have a nice probing
* feature in PMON so we just assume presence of the secondary core.
*/
-void prom_prepare_cpus(unsigned int max_cpus)
+static char maxcpus_string[] __initdata =
+ KERN_WARNING "max_cpus set to 0; using 1 instead\n";
+
+void __init prom_prepare_cpus(unsigned int max_cpus)
{
+ int enabled = 0, i;
+
+ if (max_cpus == 0) {
+ printk(maxcpus_string);
+ max_cpus = 1;
+ }
+
cpus_clear(phys_cpu_present_map);
- /*
- * The boot CPU
- */
- cpu_set(0, phys_cpu_present_map);
- __cpu_number_map[0] = 0;
- __cpu_logical_map[0] = 0;
+ for (i = 0; i < 2; i++) {
+ if (i == max_cpus)
+ break;
+
+		/*
+		 * Flag this CPU as present and register it in the CPU number maps.
+		 */
+ cpu_set(i, phys_cpu_present_map);
+ __cpu_number_map[i] = i;
+ __cpu_logical_map[i] = i;
+ enabled++;
+ }
/*
- * The secondary core
+ * Be paranoid. Enable the IPI only if we're really about to go SMP.
*/
- cpu_set(1, phys_cpu_present_map);
- __cpu_number_map[1] = 1;
- __cpu_logical_map[1] = 1;
+ if (enabled > 1)
+ set_c0_status(STATUSF_IP5);
}
/*
*/
void prom_init_secondary(void)
{
+ mips_hpt_init(mips_hpt_read());
+
set_c0_status(ST0_CO | ST0_IE | ST0_IM);
}
#define EEPROM_DATO 0x08 /* Data out */
#define EEPROM_DATI 0x10 /* Data in */
-/* We need to use this functions early... */
+/* We need to use these functions early... */
#define delay() ({ \
int x; \
for (x=0; x<100000; x++) __asm__ __volatile__(""); })
ip27-klconfig.o ip27-klnuma.o ip27-memory.o ip27-nmi.o ip27-reset.o \
ip27-timer.o ip27-hubio.o ip27-xtalk.o
+obj-$(CONFIG_KGDB) += ip27-dbgio.o
obj-$(CONFIG_SMP) += ip27-smp.o
EXTRA_AFLAGS := $(CFLAGS)
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
-#include <linux/mmzone.h> /* for numnodes */
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/cpumask.h>
* copy over the caliased exception handlers.
*/
if (get_compact_nodeid() == cnode) {
- extern char except_vec0, except_vec1_r4k;
extern char except_vec2_generic, except_vec3_generic;
+ extern void build_tlb_refill_handler(void);
- memcpy((void *)(KSEG0 + 0x100), &except_vec2_generic, 0x80);
- memcpy((void *)(KSEG0 + 0x180), &except_vec3_generic, 0x80);
- memcpy((void *)KSEG0, &except_vec0, 0x80);
- memcpy((void *)KSEG0 + 0x080, &except_vec1_r4k, 0x80);
- memcpy((void *)(KSEG0 + 0x100), (void *) KSEG0, 0x80);
- memcpy((void *)(KSEG0 + 0x180), &except_vec3_generic, 0x100);
+ memcpy((void *)(CKSEG0 + 0x100), &except_vec2_generic, 0x80);
+ memcpy((void *)(CKSEG0 + 0x180), &except_vec3_generic, 0x80);
+ build_tlb_refill_handler();
+ memcpy((void *)(CKSEG0 + 0x100), (void *) CKSEG0, 0x80);
+ memcpy((void *)(CKSEG0 + 0x180), &except_vec3_generic, 0x100);
__flush_cache_all();
}
#endif
mfc0 t0, CP0_STATUS
and s0, t0
move a0, sp
- la ra, ret_from_irq
+ PTR_LA ra, ret_from_irq
/* First check for RT interrupt. */
andi t0, s0, CAUSEF_IP4
#include <linux/init.h>
#include <linux/mmzone.h>
#include <linux/kernel.h>
+#include <linux/nodemask.h>
#include <linux/string.h>
#include <asm/page.h>
#include <asm/sn/mapped_kernel.h>
#include <asm/sn/sn_private.h>
-extern char _end;
static cpumask_t ktext_repmask;
/*
* kernel. For example, we should never put a copy on a headless node,
* and we should respect the topology of the machine.
*/
-void __init setup_replication_mask(int maxnodes)
+void __init setup_replication_mask()
{
- static int numa_kernel_replication_ratio;
cnodeid_t cnode;
/* Set only the master cnode's bit. The master cnode is always 0. */
cpus_clear(ktext_repmask);
cpu_set(0, ktext_repmask);
- numa_kernel_replication_ratio = 0;
#ifdef CONFIG_REPLICATE_KTEXT
#ifndef CONFIG_MAPPED_KERNEL
#error Kernel replication works with mapped kernel support. No calias support.
#endif
- numa_kernel_replication_ratio = 1;
-#endif
-
- for (cnode = 1; cnode < numnodes; cnode++) {
- /* See if this node should get a copy of the kernel */
- if (numa_kernel_replication_ratio &&
- !(cnode % numa_kernel_replication_ratio)) {
-
- /* Advertise that we have a copy of the kernel */
- cpu_set(cnode, ktext_repmask);
- }
+ for_each_online_node(cnode) {
+ if (cnode == 0)
+ continue;
+ /* Advertise that we have a copy of the kernel */
+ cpu_set(cnode, ktext_repmask);
}
-
+#endif
/* Set up a GDA pointer to the replication mask. */
GDA->g_ktext_repmask = &ktext_repmask;
}
memcpy((void *)dest_kern_start, (void *)source_start, kern_size);
}
-void __init replicate_kernel_text(int maxnodes)
+void __init replicate_kernel_text()
{
cnodeid_t cnode;
nasid_t client_nasid;
/* Record where the master node should get its kernel text */
set_ktext_source(master_nasid, master_nasid);
- for (cnode = 1; cnode < maxnodes; cnode++) {
+ for_each_online_node(cnode) {
+ if (cnode == 0)
+ continue;
client_nasid = COMPACT_TO_NASID_NODEID(cnode);
/* Check if this node should get a copy of the kernel */
*/
pfn_t node_getfirstfree(cnodeid_t cnode)
{
- unsigned long loadbase = CKSEG0;
+ unsigned long loadbase = REP_BASE;
nasid_t nasid = COMPACT_TO_NASID_NODEID(cnode);
unsigned long offset;
#ifdef CONFIG_MAPPED_KERNEL
- loadbase = CKSSEG + 16777216;
+ loadbase += 16777216;
#endif
offset = PAGE_ALIGN((unsigned long)(&_end)) - loadbase;
if ((cnode == 0) || (cpu_isset(cnode, ktext_repmask)))
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2000 by Ralf Baechle
+ * Copyright (C) 2000, 05 by Ralf Baechle (ralf@linux-mips.org)
* Copyright (C) 2000 by Silicon Graphics, Inc.
* Copyright (C) 2004 by Christoph Hellwig
*
 * On SGI IP27 the ARC memory configuration data is completely bogus but
 * alternative, easier to use mechanisms are available.
*/
+#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/module.h>
+#include <linux/nodemask.h>
#include <linux/swap.h>
#include <linux/bootmem.h>
#include <asm/page.h>
static hubreg_t region_mask;
-static void gen_region_mask(hubreg_t *region_mask, int maxnodes)
+static void gen_region_mask(hubreg_t *region_mask)
{
cnodeid_t cnode;
(*region_mask) = 0;
- for (cnode = 0; cnode < maxnodes; cnode++) {
+ for_each_online_node(cnode) {
(*region_mask) |= 1ULL << get_region(cnode);
}
}
int port;
/* Figure out which routers nodes in question are connected to */
- for (cnode = 0; cnode < numnodes; cnode++) {
+ for_each_online_node(cnode) {
nasid = COMPACT_TO_NASID_NODEID(cnode);
if (nasid == -1) continue;
for (col = 0; col < MAX_COMPACT_NODES; col++)
__node_distances[row][col] = -1;
- for (row = 0; row < numnodes; row++) {
+ for_each_online_node(row) {
nasid = COMPACT_TO_NASID_NODEID(row);
- for (col = 0; col < numnodes; col++) {
+ for_each_online_node(col) {
nasid2 = COMPACT_TO_NASID_NODEID(col);
__node_distances[row][col] =
compute_node_distance(nasid, nasid2);
printk("************** Topology ********************\n");
printk(" ");
- for (col = 0; col < numnodes; col++)
+ for_each_online_node(col)
printk("%02d ", col);
printk("\n");
- for (row = 0; row < numnodes; row++) {
+ for_each_online_node(row) {
printk("%02d ", row);
- for (col = 0; col < numnodes; col++)
+ for_each_online_node(col)
printk("%2d ", node_distance(row, col));
printk("\n");
}
- for (cnode = 0; cnode < numnodes; cnode++) {
+ for_each_online_node(cnode) {
nasid = COMPACT_TO_NASID_NODEID(cnode);
if (nasid == -1) continue;
init_topology_matrix();
dump_topology();
-	gen_region_mask(&region_mask, numnodes);
+	gen_region_mask(&region_mask);
- setup_replication_mask(numnodes);
+ setup_replication_mask();
/*
* Set all nodes' calias sizes to 8k
*/
- for (i = 0; i < numnodes; i++) {
+ for_each_online_node(i) {
nasid_t nasid;
nasid = COMPACT_TO_NASID_NODEID(i);
num_physpages = 0;
- for (node = 0; node < numnodes; node++) {
+ for_each_online_node(node) {
ignore = nodebytes = 0;
for (slot = 0; slot < MAX_MEM_SLOTS; slot++) {
slot_psize = slot_psize_compute(node, slot);
szmem();
for (node = 0; node < MAX_COMPACT_NODES; node++) {
- if (node < numnodes) {
+ if (node_online(node)) {
node_mem_init(node);
continue;
}
pagetable_init();
- for (node = 0; node < numnodes; node++) {
+ for_each_online_node(node) {
pfn_t start_pfn = slot_getbasepfn(node, 0);
pfn_t end_pfn = node_getmaxclick(node) + 1;
high_memory = (void *) __va(num_physpages << PAGE_SHIFT);
- for (node = 0; node < numnodes; node++) {
+ for_each_online_node(node) {
unsigned slot, numslots;
struct page *end, *p;
#include <linux/kallsyms.h>
#include <linux/kernel.h>
#include <linux/mmzone.h>
+#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/smp.h>
#include <asm/atomic.h>
typedef unsigned long machreg_t;
-spinlock_t nmi_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(nmi_lock);
/*
 * Let's see what else we need to do here. Set up sp, gp?
{
cnodeid_t cnode;
- for(cnode = 0 ; cnode < numnodes; cnode++)
+ for_each_online_node(cnode)
nmi_node_eframe_save(cnode);
}
* send NMIs to all cpus on a 256p system.
*/
for (i=0; i < 1500; i++) {
- for (node=0; node < numnodes; node++)
+ for_each_online_node(node)
if (NODEPDA(node)->dump_count == 0)
break;
- if (node == numnodes)
+ if (node == MAX_NUMNODES)
break;
if (i == 1000) {
- for (node=0; node < numnodes; node++)
+ for_each_online_node(node)
if (NODEPDA(node)->dump_count == 0) {
cpu = node_to_first_cpu(node);
for (n=0; n < CNODE_NUM_CPUS(node); cpu++, n++) {
#include <linux/timer.h>
#include <linux/smp.h>
#include <linux/mmzone.h>
+#include <linux/nodemask.h>
#include <asm/io.h>
#include <asm/irq.h>
smp_send_stop();
#endif
#if 0
- for (i = 0; i < numnodes; i++)
+ for_each_online_node(i)
REMOTE_HUB_S(COMPACT_TO_NASID_NODEID(i), PROMOP_REG,
PROMOP_REBOOT);
#else
#ifdef CONFIG_SMP
smp_send_stop();
#endif
- for (i = 0; i < numnodes; i++)
+ for_each_online_node(i)
REMOTE_HUB_S(COMPACT_TO_NASID_NODEID(i), PROMOP_REG,
PROMOP_RESTART);
LOCAL_HUB_S(NI_PORT_RESET, NPR_PORTRESET | NPR_LOCALRESET);
*/
#include <linux/init.h>
#include <linux/sched.h>
+#include <linux/nodemask.h>
#include <asm/page.h>
#include <asm/processor.h>
#include <asm/sn/arch.h>
for (i = 0; i < MAXCPUS; i++)
cpuid_to_compact_node[i] = INVALID_CNODEID;
- numnodes = 0;
+ /*
+ * MCD - this whole "compact node" stuff can probably be dropped,
+ * as we can handle sparse numbering now
+ */
+ nodes_clear(node_online_map);
for (i = 0; i < MAX_COMPACT_NODES; i++) {
nasid_t nasid = gdap->g_nasidtable[i];
if (nasid == INVALID_NASID)
break;
compact_to_nasid_node[i] = nasid;
nasid_to_compact_node[nasid] = i;
- numnodes++;
+ node_set_online(num_online_nodes());
highest = do_cpumask(i, nasid, highest);
}
- printk("Discovered %d cpus on %d nodes\n", highest + 1, numnodes);
+ printk("Discovered %d cpus on %d nodes\n", highest + 1, num_online_nodes());
}
static void intr_clear_bits(nasid_t nasid, volatile hubreg_t *pend,
{
cnodeid_t cnode;
- for (cnode = 0; cnode < numnodes; cnode++)
+ for_each_online_node(cnode)
intr_clear_all(COMPACT_TO_NASID_NODEID(cnode));
- replicate_kernel_text(numnodes);
+ replicate_kernel_text();
/*
* Assumption to be fixed: we're always booted on logical / physical
/*
- * Copytight (C) 1999, 2000 Ralf Baechle (ralf@gnu.org)
+ * Copyright (C) 1999, 2000, 05 Ralf Baechle (ralf@linux-mips.org)
 * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
*/
#include <linux/bcd.h>
-#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#
obj-y += ip32-berr.o ip32-irq.o ip32-irq-glue.o ip32-setup.o ip32-reset.o \
- crime.o
+ crime.o ip32-memory.o
EXTRA_AFLAGS := $(CFLAGS)
* for more details.
*
* Copyright (C) 2001, 2003 Keith M Wesolowski
+ * Copyright (C) 2005 Ilya A. Volynets <ilya@total-knowledge.com>
*/
#include <linux/types.h>
#include <linux/init.h>
{
unsigned int id, rev;
const int field = 2 * sizeof(unsigned long);
-
+
+ set_io_port_base((unsigned long) ioremap(MACEPCI_LOW_IO, 0x2000000));
crime = ioremap(CRIME_BASE, sizeof(struct sgi_crime));
mace = ioremap(MACE_BASE, sizeof(struct sgi_mace));
_machine_restart = ip32_machine_restart;
_machine_halt = ip32_machine_halt;
_machine_power_off = ip32_machine_power_off;
- request_irq(MACEISA_RTC_IRQ, ip32_rtc_int, 0, "rtc", NULL);
+
init_timer(&blink_timer);
blink_timer.function = blink_timeout;
notifier_chain_register(&panic_notifier_list, &panic_block);
+ request_irq(MACEISA_RTC_IRQ, ip32_rtc_int, 0, "rtc", NULL);
+
return 0;
}
* for more details.
*
* Copyright (C) 2000 Harald Koerfgen
- * Copyright (C) 2002, 03 Ilya A. Volynets
+ * Copyright (C) 2002, 2003, 2005 Ilya A. Volynets
*/
#include <linux/config.h>
#include <linux/console.h>
static int __init ip32_setup(void)
{
- set_io_port_base((unsigned long) ioremap(MACEPCI_LOW_IO, 0x2000000));
-
- crime_init();
-
board_be_init = ip32_be_init;
rtc_get_time = mc146818_get_cmos_time;
u_int64_t tb_options = M_SCD_TRACE_CFG_FREEZE_FULL;
/* Generate an SCD_PERFCNT interrupt in TB_PERIOD Zclks to
trigger start of trace. XXX vary sampling period */
- __raw_writeq(0, IOADDR(A_SCD_PERF_CNT_1));
- scdperfcnt = __raw_readq(IOADDR(A_SCD_PERF_CNT_CFG));
+ bus_writeq(0, IOADDR(A_SCD_PERF_CNT_1));
+ scdperfcnt = bus_readq(IOADDR(A_SCD_PERF_CNT_CFG));
/* Unfortunately, in Pass 2 we must clear all counters to knock down
a previous interrupt request. This means that bus profiling
requires ALL of the SCD perf counters. */
- __raw_writeq((scdperfcnt & ~M_SPC_CFG_SRC1) | // keep counters 0,2,3 as is
+ bus_writeq((scdperfcnt & ~M_SPC_CFG_SRC1) | // keep counters 0,2,3 as is
M_SPC_CFG_ENABLE | // enable counting
M_SPC_CFG_CLEAR | // clear all counters
V_SPC_CFG_SRC1(1), // counter 1 counts cycles
- IOADDR(A_SCD_PERF_CNT_CFG));
- __raw_writeq(next, IOADDR(A_SCD_PERF_CNT_1));
+ IOADDR(A_SCD_PERF_CNT_CFG));
+ bus_writeq(next, IOADDR(A_SCD_PERF_CNT_1));
/* Reset the trace buffer */
- __raw_writeq(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
+ bus_writeq(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
#if 0 && defined(M_SCD_TRACE_CFG_FORCECNT)
/* XXXKW may want to expose control to the data-collector */
tb_options |= M_SCD_TRACE_CFG_FORCECNT;
#endif
- __raw_writeq(tb_options, IOADDR(A_SCD_TRACE_CFG));
+ bus_writeq(tb_options, IOADDR(A_SCD_TRACE_CFG));
sbp.tb_armed = 1;
}
/* XXX should use XKPHYS to make writes bypass L2 */
u_int64_t *p = sbp.sbprof_tbbuf[sbp.next_tb_sample++];
/* Read out trace */
- __raw_writeq(M_SCD_TRACE_CFG_START_READ, IOADDR(A_SCD_TRACE_CFG));
+ bus_writeq(M_SCD_TRACE_CFG_START_READ, IOADDR(A_SCD_TRACE_CFG));
__asm__ __volatile__ ("sync" : : : "memory");
/* Loop runs backwards because bundles are read out in reverse order */
for (i = 256 * 6; i > 0; i -= 6) {
// Subscripts decrease to put bundle in the order
// t0 lo, t0 hi, t1 lo, t1 hi, t2 lo, t2 hi
- p[i-1] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t2 hi
- p[i-2] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t2 lo
- p[i-3] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t1 hi
- p[i-4] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t1 lo
- p[i-5] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t0 hi
- p[i-6] = __raw_readq(IOADDR(A_SCD_TRACE_READ)); // read t0 lo
+ p[i-1] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t2 hi
+ p[i-2] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t2 lo
+ p[i-3] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t1 hi
+ p[i-4] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t1 lo
+ p[i-5] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t0 hi
+ p[i-6] = bus_readq(IOADDR(A_SCD_TRACE_READ)); // read t0 lo
}
if (!sbp.tb_enable) {
DBG(printk(DEVNAME ": tb_intr shutdown\n"));
- __raw_writeq(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
+ bus_writeq(M_SCD_TRACE_CFG_RESET,
+ IOADDR(A_SCD_TRACE_CFG));
sbp.tb_armed = 0;
wake_up(&sbp.tb_sync);
} else {
} else {
/* No more trace buffer samples */
DBG(printk(DEVNAME ": tb_intr full\n"));
- __raw_writeq(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
+ bus_writeq(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
sbp.tb_armed = 0;
if (!sbp.tb_enable) {
wake_up(&sbp.tb_sync);
return -EBUSY;
}
/* Make sure there isn't a perf-cnt interrupt waiting */
- scdperfcnt = __raw_readq(IOADDR(A_SCD_PERF_CNT_CFG));
+ scdperfcnt = bus_readq(IOADDR(A_SCD_PERF_CNT_CFG));
/* Disable and clear counters, override SRC_1 */
- __raw_writeq((scdperfcnt & ~(M_SPC_CFG_SRC1 | M_SPC_CFG_ENABLE)) |
+ bus_writeq((scdperfcnt & ~(M_SPC_CFG_SRC1 | M_SPC_CFG_ENABLE)) |
M_SPC_CFG_ENABLE |
M_SPC_CFG_CLEAR |
V_SPC_CFG_SRC1(1),
- IOADDR(A_SCD_PERF_CNT_CFG));
+ IOADDR(A_SCD_PERF_CNT_CFG));
/* We grab this interrupt to prevent others from trying to use
it, even though we don't want to service the interrupts
/* I need the core to mask these, but the interrupt mapper to
pass them through. I am exploiting my knowledge that
cp0_status masks out IP[5]. krw */
- __raw_writeq(K_INT_MAP_I3,
- IOADDR(A_IMR_REGISTER(0, R_IMR_INTERRUPT_MAP_BASE) + (K_INT_PERF_CNT<<3)));
+ bus_writeq(K_INT_MAP_I3,
+ IOADDR(A_IMR_REGISTER(0, R_IMR_INTERRUPT_MAP_BASE) +
+ (K_INT_PERF_CNT << 3)));
/* Initialize address traps */
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_UP_0));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_UP_1));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_UP_2));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_UP_3));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_UP_0));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_UP_1));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_UP_2));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_UP_3));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_0));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_1));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_2));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_3));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_0));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_1));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_2));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_DOWN_3));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_CFG_0));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_CFG_1));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_CFG_2));
- __raw_writeq(0, IOADDR(A_ADDR_TRAP_CFG_3));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_CFG_0));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_CFG_1));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_CFG_2));
+ bus_writeq(0, IOADDR(A_ADDR_TRAP_CFG_3));
/* Initialize Trace Event 0-7 */
// when interrupt
- __raw_writeq(M_SCD_TREVT_INTERRUPT, IOADDR(A_SCD_TRACE_EVENT_0));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_1));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_2));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_3));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_4));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_5));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_6));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_EVENT_7));
+ bus_writeq(M_SCD_TREVT_INTERRUPT, IOADDR(A_SCD_TRACE_EVENT_0));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_1));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_2));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_3));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_4));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_5));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_6));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_EVENT_7));
/* Initialize Trace Sequence 0-7 */
// Start on event 0 (interrupt)
- __raw_writeq(V_SCD_TRSEQ_FUNC_START|0x0fff,
- IOADDR(A_SCD_TRACE_SEQUENCE_0));
+ bus_writeq(V_SCD_TRSEQ_FUNC_START | 0x0fff,
+ IOADDR(A_SCD_TRACE_SEQUENCE_0));
// dsamp when d used | asamp when a used
- __raw_writeq(M_SCD_TRSEQ_ASAMPLE|M_SCD_TRSEQ_DSAMPLE|K_SCD_TRSEQ_TRIGGER_ALL,
- IOADDR(A_SCD_TRACE_SEQUENCE_1));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_2));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_3));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_4));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_5));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_6));
- __raw_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_7));
+ bus_writeq(M_SCD_TRSEQ_ASAMPLE | M_SCD_TRSEQ_DSAMPLE |
+ K_SCD_TRSEQ_TRIGGER_ALL,
+ IOADDR(A_SCD_TRACE_SEQUENCE_1));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_2));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_3));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_4));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_5));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_6));
+ bus_writeq(0, IOADDR(A_SCD_TRACE_SEQUENCE_7));
/* Now indicate the PERF_CNT interrupt as a trace-relevant interrupt */
- __raw_writeq((1ULL << K_INT_PERF_CNT), IOADDR(A_IMR_REGISTER(0, R_IMR_INTERRUPT_TRACE)));
+ bus_writeq((1ULL << K_INT_PERF_CNT),
+ IOADDR(A_IMR_REGISTER(0, R_IMR_INTERRUPT_TRACE)));
arm_tb();
csr_out32(M_SCD_TRACE_CFG_START_READ, IOADDR(A_SCD_TRACE_CFG));
for (i=0; i<256*6; i++)
- printk("%016llx\n", (unsigned long long)__raw_readq(IOADDR(A_SCD_TRACE_READ)));
+ printk("%016llx\n",
+ (unsigned long long)bus_readq(IOADDR(A_SCD_TRACE_READ)));
csr_out32(M_SCD_TRACE_CFG_RESET, IOADDR(A_SCD_TRACE_CFG));
csr_out32(M_SCD_TRACE_CFG_START, IOADDR(A_SCD_TRACE_CFG));
int bad_config = 0;
sb1_pass = read_c0_prid() & 0xff;
- sys_rev = __raw_readq(IOADDR(A_SCD_SYSTEM_REVISION));
+ sys_rev = bus_readq(IOADDR(A_SCD_SYSTEM_REVISION));
soc_type = SYS_SOC_TYPE(sys_rev);
soc_pass = G_SYS_REVISION(sys_rev);
machine_restart(NULL);
}
- plldiv = G_SYS_PLL_DIV(__raw_readq(IOADDR(A_SCD_SYSTEM_CFG)));
+ plldiv = G_SYS_PLL_DIV(bus_readq(IOADDR(A_SCD_SYSTEM_CFG)));
zbbus_mhz = ((plldiv >> 1) * 50) + ((plldiv & 1) * 25);
prom_printf("Broadcom SiByte %s %s @ %d MHz (SB1 rev %d)\n",
*/
void core_send_ipi(int cpu, unsigned int action)
{
- __raw_writeq((((u64)action)<< 48), mailbox_set_regs[cpu]);
+ bus_writeq((((u64)action) << 48), mailbox_set_regs[cpu]);
}
void sb1250_mailbox_interrupt(struct pt_regs *regs)
kstat_this_cpu.irqs[K_INT_MBOX_0]++;
/* Load the mailbox register to figure out what we're supposed to do */
- action = (____raw_readq(mailbox_regs[cpu]) >> 48) & 0xffff;
+ action = (__bus_readq(mailbox_regs[cpu]) >> 48) & 0xffff;
/* Clear the mailbox to clear the interrupt */
- ____raw_writeq(((u64)action)<<48, mailbox_clear_regs[cpu]);
+ __bus_writeq(((u64)action) << 48, mailbox_clear_regs[cpu]);
/*
* Nothing to do for SMP_RESCHEDULE_YOURSELF; returning from the
sb1250_mask_irq(cpu, irq);
/* Map the timer interrupt to ip[4] of this cpu */
- __raw_writeq(IMR_IP4_VAL, IOADDR(A_IMR_REGISTER(cpu, R_IMR_INTERRUPT_MAP_BASE) +
- (irq << 3)));
+ bus_writeq(IMR_IP4_VAL,
+ IOADDR(A_IMR_REGISTER(cpu, R_IMR_INTERRUPT_MAP_BASE) +
+ (irq << 3)));
	/* The general purpose timer ticks at 1 MHz, independent of the rest of the system */
/* Disable the timer and set up the count */
- __raw_writeq(0, IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
+ bus_writeq(0, IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
#ifdef CONFIG_SIMULATION
- __raw_writeq(50000 / HZ, IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_INIT)));
+ bus_writeq(50000 / HZ,
+ IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_INIT)));
#else
- __raw_writeq(1000000/HZ, IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_INIT)));
+ bus_writeq(1000000/HZ,
+ IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_INIT)));
#endif
/* Set the timer running */
- __raw_writeq(M_SCD_TIMER_ENABLE|M_SCD_TIMER_MODE_CONTINUOUS,
- IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
+ bus_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS,
+ IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
sb1250_unmask_irq(cpu, irq);
sb1250_steal_irq(irq);
int irq = K_INT_TIMER_0 + cpu;
/* Reset the timer */
- ____raw_writeq(M_SCD_TIMER_ENABLE|M_SCD_TIMER_MODE_CONTINUOUS,
- IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
+ __bus_writeq(M_SCD_TIMER_ENABLE | M_SCD_TIMER_MODE_CONTINUOUS,
+ IOADDR(A_SCD_TIMER_REGISTER(cpu, R_SCD_TIMER_CFG)));
/*
* CPU 0 handles the global timer interrupt job
unsigned long sb1250_gettimeoffset(void)
{
unsigned long count =
- __raw_readq(IOADDR(A_SCD_TIMER_REGISTER(0, R_SCD_TIMER_CNT)));
+ bus_readq(IOADDR(A_SCD_TIMER_REGISTER(0, R_SCD_TIMER_CNT)));
return 1000000/HZ - count;
}
#define M41T81REG_SQW 0x13 /* square wave register */
#define M41T81_CCR_ADDRESS 0x68
-#define SMB_CSR(reg) (IOADDR(A_SMB_REGISTER(1, reg)))
+#define SMB_CSR(reg) ((u8 *) (IOADDR(A_SMB_REGISTER(1, reg))))
static int m41t81_read(uint8_t addr)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq(addr & 0xff, SMB_CSR(R_SMB_CMD));
- __raw_writeq((V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_WR1BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq(addr & 0xff, SMB_CSR(R_SMB_CMD));
+ bus_writeq((V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_WR1BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_RD1BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq((V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_RD1BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
}
- return (__raw_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
+ return (bus_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
}
static int m41t81_write(uint8_t addr, int b)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((addr & 0xFF), SMB_CSR(R_SMB_CMD));
- __raw_writeq((b & 0xff), SMB_CSR(R_SMB_DATA));
- __raw_writeq(V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_WR2BYTE,
- SMB_CSR(R_SMB_START));
+ bus_writeq((addr & 0xFF), SMB_CSR(R_SMB_CMD));
+ bus_writeq((b & 0xff), SMB_CSR(R_SMB_DATA));
+ bus_writeq(V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_WR2BYTE,
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
}
/* read the same byte again to make sure it is written */
- __raw_writeq(V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_RD1BYTE,
- SMB_CSR(R_SMB_START));
+ bus_writeq(V_SMB_ADDR(M41T81_CCR_ADDRESS) | V_SMB_TT_RD1BYTE,
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
return 0;
#define X1241_CCR_ADDRESS 0x6F
-#define SMB_CSR(reg) (IOADDR(A_SMB_REGISTER(1, reg)))
+#define SMB_CSR(reg) ((u8 *) (IOADDR(A_SMB_REGISTER(1, reg))))
static int xicor_read(uint8_t addr)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((addr >> 8) & 0x7, SMB_CSR(R_SMB_CMD));
- __raw_writeq((addr & 0xff), SMB_CSR(R_SMB_DATA));
- __raw_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR2BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq((addr >> 8) & 0x7, SMB_CSR(R_SMB_CMD));
+ bus_writeq((addr & 0xff), SMB_CSR(R_SMB_DATA));
+ bus_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR2BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_RD1BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_RD1BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
}
- return (__raw_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
+ return (bus_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
}
static int xicor_write(uint8_t addr, int b)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq(addr, SMB_CSR(R_SMB_CMD));
- __raw_writeq((addr & 0xff) | ((b & 0xff) << 8), SMB_CSR(R_SMB_DATA));
- __raw_writeq(V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR3BYTE,
- SMB_CSR(R_SMB_START));
+ bus_writeq(addr, SMB_CSR(R_SMB_CMD));
+ bus_writeq((addr & 0xff) | ((b & 0xff) << 8), SMB_CSR(R_SMB_DATA));
+ bus_writeq(V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR3BYTE,
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
} else {
return 0;
#include <linux/bootmem.h>
#include <linux/blkdev.h>
#include <linux/init.h>
+#include <linux/kernel.h>
#include <linux/tty.h>
#include <linux/initrd.h>
static int __init swarm_setup(void)
{
- extern int panic_timeout;
-
sb1250_setup();
panic_timeout = 5; /* For debug. */
static int xicor_read(uint8_t addr)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((addr >> 8) & 0x7, SMB_CSR(R_SMB_CMD));
- __raw_writeq((addr & 0xff), SMB_CSR(R_SMB_DATA));
- __raw_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR2BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq((addr >> 8) & 0x7, SMB_CSR(R_SMB_CMD));
+ bus_writeq((addr & 0xff), SMB_CSR(R_SMB_DATA));
+ bus_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR2BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_RD1BYTE), SMB_CSR(R_SMB_START));
+ bus_writeq((V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_RD1BYTE),
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
}
- return (__raw_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
+ return (bus_readq(SMB_CSR(R_SMB_DATA)) & 0xff);
}
static int xicor_write(uint8_t addr, int b)
{
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- __raw_writeq(addr, SMB_CSR(R_SMB_CMD));
- __raw_writeq((addr & 0xff) | ((b & 0xff) << 8), SMB_CSR(R_SMB_DATA));
- __raw_writeq(V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR3BYTE,
- SMB_CSR(R_SMB_START));
+ bus_writeq(addr, SMB_CSR(R_SMB_CMD));
+ bus_writeq((addr & 0xff) | ((b & 0xff) << 8), SMB_CSR(R_SMB_DATA));
+ bus_writeq(V_SMB_ADDR(X1241_CCR_ADDRESS) | V_SMB_TT_WR3BYTE,
+ SMB_CSR(R_SMB_START));
- while (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
+ while (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_BUSY)
;
- if (__raw_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
+ if (bus_readq(SMB_CSR(R_SMB_STATUS)) & M_SMB_ERROR) {
/* Clear error bit by writing a 1 */
- __raw_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
+ bus_writeq(M_SMB_ERROR, SMB_CSR(R_SMB_STATUS));
return -1;
} else {
return 0;
/* Establish communication with the Xicor 1241 RTC */
/* XXXKW how do I share the SMBus with the I2C subsystem? */
- __raw_writeq(K_SMB_FREQ_400KHZ, SMB_CSR(R_SMB_FREQ));
- __raw_writeq(0, SMB_CSR(R_SMB_CONTROL));
+ bus_writeq(K_SMB_FREQ_400KHZ, SMB_CSR(R_SMB_FREQ));
+ bus_writeq(0, SMB_CSR(R_SMB_CONTROL));
if ((status = xicor_read(X1241REG_SR_RTCF)) < 0) {
printk("x1241: couldn't detect on SWARM SMBus 1\n");
#include <asm/io.h>
#include <asm/sni.h>
-spinlock_t pciasic_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(pciasic_lock);
extern asmlinkage void sni_rm200_pci_handle_int(void);
reg_wr08(RBTX4927_SW_RESET_DO, RBTX4927_SW_RESET_DO_SET);
/* do something passive while waiting for reset */
- cli();
+ local_irq_disable();
while (1)
asm_wait();
void toshiba_rbtx4927_halt(void)
{
printk(KERN_NOTICE "System Halted\n");
- cli();
+ local_irq_disable();
while (1) {
asm_wait();
}
* RTC ops
*/
-spinlock_t rtc_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(rtc_lock);
/* per VR41xx docs, bad data can be read if between 2 counts */
static inline unsigned short
*
* Copyright 2001 MontaVista Software Inc.
* Author: jsun@mvista.com or jsun@junsun.net
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
*
*/
-#include <linux/config.h>
#include <linux/ide.h>
#include <linux/init.h>
#include <linux/delay.h>
--- /dev/null
+/*
+ * arch/mips/vr41xx/nec-cmbvr4133/init.c
+ *
+ * PROM library initialisation code for NEC CMB-VR4133 board.
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Jun Sun <jsun@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2001-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ * Support for NEC-CMBVR4133 in 2.6
+ * Manish Lachwani (mlachwani@mvista.com)
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+
+#include <asm/bootinfo.h>
+
+#ifdef CONFIG_ROCKHOPPER
+#include <asm/io.h>
+#include <linux/pci.h>
+
+#define PCICONFDREG 0xaf000c14
+#define PCICONFAREG 0xaf000c18
+#endif
+
+const char *get_system_type(void)
+{
+ return "NEC CMB-VR4133";
+}
+
+#ifdef CONFIG_ROCKHOPPER
+void disable_pcnet(void)
+{
+ u32 data;
+
+ /*
+ * Workaround for the bug in PMON on VR4133. PMON leaves
+ * AMD PCNet controller (on Rockhopper) initialized and running in
+ * bus master mode. We have to disable it before doing any
+ * further initialization, or we get problems with PCI bus 2
+ * and random lockups and crashes.
+ */
+
+ writel((2 << 16) |
+ (PCI_DEVFN(1,0) << 8) |
+ (0 & 0xfc) |
+ 1UL,
+ PCICONFAREG);
+
+ data = readl(PCICONFDREG);
+
+ writel((2 << 16) |
+ (PCI_DEVFN(1,0) << 8) |
+ (4 & 0xfc) |
+ 1UL,
+ PCICONFAREG);
+
+ data = readl(PCICONFDREG);
+
+ writel((2 << 16) |
+ (PCI_DEVFN(1,0) << 8) |
+ (4 & 0xfc) |
+ 1UL,
+ PCICONFAREG);
+
+ data &= ~4;
+
+ writel(data, PCICONFDREG);
+}
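+
+/*
+ * Editor's note (illustration, not part of the original patch): the
+ * words written to PCICONFAREG above follow the VR4133 configuration
+ * address layout (bus << 16) | (devfn << 8) | (register & ~3) | 1.
+ * With bus 2 and PCI_DEVFN(1,0) == 0x08, the command/status dword at
+ * offset 4 is addressed by (2 << 16) | (0x08 << 8) | 0x04 | 1 == 0x20805;
+ * clearing bit 2 (PCI_COMMAND_MASTER) of the data read back is what
+ * stops the PCNet controller from bus mastering.
+ */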
+#endif
+
--- /dev/null
+/*
+ * arch/mips/vr41xx/nec-cmbvr4133/irq.c
+ *
+ * Interrupt routines for the NEC CMB-VR4133 board.
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2003-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ * Support for NEC-CMBVR4133 in 2.6
+ * Manish Lachwani (mlachwani@mvista.com)
+ */
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+
+#include <asm/io.h>
+#include <asm/vr41xx/cmbvr4133.h>
+
+extern void enable_8259A_irq(unsigned int irq);
+extern void disable_8259A_irq(unsigned int irq);
+extern void mask_and_ack_8259A(unsigned int irq);
+extern void init_8259A(int hoge);
+
+extern int vr4133_rockhopper;
+
+static unsigned int startup_i8259_irq(unsigned int irq)
+{
+ enable_8259A_irq(irq - I8259_IRQ_BASE);
+ return 0;
+}
+
+static void shutdown_i8259_irq(unsigned int irq)
+{
+ disable_8259A_irq(irq - I8259_IRQ_BASE);
+}
+
+static void enable_i8259_irq(unsigned int irq)
+{
+ enable_8259A_irq(irq - I8259_IRQ_BASE);
+}
+
+static void disable_i8259_irq(unsigned int irq)
+{
+ disable_8259A_irq(irq - I8259_IRQ_BASE);
+}
+
+static void ack_i8259_irq(unsigned int irq)
+{
+ mask_and_ack_8259A(irq - I8259_IRQ_BASE);
+}
+
+static void end_i8259_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_8259A_irq(irq - I8259_IRQ_BASE);
+}
+
+static struct hw_interrupt_type i8259_irq_type = {
+ .typename = "XT-PIC",
+ .startup = startup_i8259_irq,
+ .shutdown = shutdown_i8259_irq,
+ .enable = enable_i8259_irq,
+ .disable = disable_i8259_irq,
+ .ack = ack_i8259_irq,
+ .end = end_i8259_irq,
+};
+
+static int i8259_get_irq_number(int irq)
+{
+ unsigned long isr;
+
+ isr = inb(0x20);
+ irq = ffz(~isr);
+ if (irq == 2) {
+ isr = inb(0xa0);
+ irq = 8 + ffz(~isr);
+ }
+
+ if (irq < 0 || irq > 15)
+ return -EINVAL;
+
+ return I8259_IRQ_BASE + irq;
+}
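+
+/*
+ * Editor's note: IRQ 2 on the master 8259 is the cascade input from the
+ * slave PIC, so when the lowest pending bit above is 2 the routine
+ * re-reads the slave at port 0xa0 and reports 8 plus the slave's bit,
+ * giving IRQs 8-15; anything outside 0-15 is rejected with -EINVAL.
+ */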
+
+static struct irqaction i8259_slave_cascade = {
+ .handler = &no_action,
+ .name = "cascade",
+};
+
+void __init rockhopper_init_irq(void)
+{
+ int i;
+
+ if (!vr4133_rockhopper) {
+ printk(KERN_ERR "Not a Rockhopper Board\n");
+ return;
+ }
+
+ for (i = I8259_IRQ_BASE; i <= I8259_IRQ_LAST; i++)
+ irq_desc[i].handler = &i8259_irq_type;
+
+ setup_irq(I8259_SLAVE_IRQ, &i8259_slave_cascade);
+
+ vr41xx_set_irq_trigger(CMBVR41XX_INTC_PIN, TRIGGER_LEVEL, SIGNAL_THROUGH);
+ vr41xx_set_irq_level(CMBVR41XX_INTC_PIN, LEVEL_HIGH);
+ vr41xx_cascade_irq(CMBVR41XX_INTC_IRQ, i8259_get_irq_number);
+}
--- /dev/null
+/*
+ * arch/mips/vr41xx/nec-cmbvr4133/m1535plus.c
+ *
+ * Initialize the ALi M1535+ (including the M5229 and M5237).
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2003-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ * Support for NEC-CMBVR4133 in 2.6
+ * Author: Manish Lachwani (mlachwani@mvista.com)
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/serial.h>
+
+#include <asm/vr41xx/cmbvr4133.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+#define CONFIG_PORT(port) ((port) ? 0x3f0 : 0x370)
+#define DATA_PORT(port) ((port) ? 0x3f1 : 0x371)
+#define INDEX_PORT(port) CONFIG_PORT(port)
+
+#define ENTER_CONFIG_MODE(port) \
+ do { \
+ outb_p(0x51, CONFIG_PORT(port)); \
+ outb_p(0x23, CONFIG_PORT(port)); \
+ } while(0)
+
+#define SELECT_LOGICAL_DEVICE(port, dev_no) \
+ do { \
+ outb_p(0x07, INDEX_PORT(port)); \
+ outb_p((dev_no), DATA_PORT(port)); \
+ } while(0)
+
+#define WRITE_CONFIG_DATA(port,index,data) \
+ do { \
+ outb_p((index), INDEX_PORT(port)); \
+ outb_p((data), DATA_PORT(port)); \
+ } while(0)
+
+#define EXIT_CONFIG_MODE(port) outb(0xbb, CONFIG_PORT(port))
+
+#define PCI_CONFIG_ADDR KSEG1ADDR(0x0f000c18)
+#define PCI_CONFIG_DATA KSEG1ADDR(0x0f000c14)
+
+#ifdef CONFIG_BLK_DEV_FD
+
+void __devinit ali_m1535plus_fdc_init(int port)
+{
+ ENTER_CONFIG_MODE(port);
+ SELECT_LOGICAL_DEVICE(port, 0); /* FDC */
+ WRITE_CONFIG_DATA(port, 0x30, 0x01); /* FDC: enable */
+ WRITE_CONFIG_DATA(port, 0x60, 0x03); /* I/O port base: 0x3f0 */
+ WRITE_CONFIG_DATA(port, 0x61, 0xf0);
+ WRITE_CONFIG_DATA(port, 0x70, 0x06); /* IRQ: 6 */
+ WRITE_CONFIG_DATA(port, 0x74, 0x02); /* DMA: channel 2 */
+ WRITE_CONFIG_DATA(port, 0xf0, 0x08);
+ WRITE_CONFIG_DATA(port, 0xf1, 0x00);
+ WRITE_CONFIG_DATA(port, 0xf2, 0xff);
+ WRITE_CONFIG_DATA(port, 0xf4, 0x00);
+ EXIT_CONFIG_MODE(port);
+}
+
+#endif
+
+void __devinit ali_m1535plus_parport_init(int port)
+{
+ ENTER_CONFIG_MODE(port);
+ SELECT_LOGICAL_DEVICE(port, 3); /* Parallel Port */
+ WRITE_CONFIG_DATA(port, 0x30, 0x01);
+ WRITE_CONFIG_DATA(port, 0x60, 0x03); /* I/O port base: 0x378 */
+ WRITE_CONFIG_DATA(port, 0x61, 0x78);
+ WRITE_CONFIG_DATA(port, 0x70, 0x07); /* IRQ: 7 */
+ WRITE_CONFIG_DATA(port, 0x74, 0x04); /* DMA: None */
+ WRITE_CONFIG_DATA(port, 0xf0, 0x8c); /* IRQ polarity: Active Low */
+ WRITE_CONFIG_DATA(port, 0xf1, 0xc5);
+ EXIT_CONFIG_MODE(port);
+}
+
+void __devinit ali_m1535plus_keyboard_init(int port)
+{
+ ENTER_CONFIG_MODE(port);
+ SELECT_LOGICAL_DEVICE(port, 7); /* KEYBOARD */
+ WRITE_CONFIG_DATA(port, 0x30, 0x01); /* KEYBOARD: enable */
+ WRITE_CONFIG_DATA(port, 0x70, 0x01); /* IRQ: 1 */
+ WRITE_CONFIG_DATA(port, 0x72, 0x0c); /* PS/2 Mouse IRQ: 12 */
+ WRITE_CONFIG_DATA(port, 0xf0, 0x00);
+ EXIT_CONFIG_MODE(port);
+}
+
+void __devinit ali_m1535plus_hotkey_init(int port)
+{
+ ENTER_CONFIG_MODE(port);
+ SELECT_LOGICAL_DEVICE(port, 0xc); /* HOTKEY */
+ WRITE_CONFIG_DATA(port, 0x30, 0x00);
+ WRITE_CONFIG_DATA(port, 0xf0, 0x35);
+ WRITE_CONFIG_DATA(port, 0xf1, 0x14);
+ WRITE_CONFIG_DATA(port, 0xf2, 0x11);
+ WRITE_CONFIG_DATA(port, 0xf3, 0x71);
+ WRITE_CONFIG_DATA(port, 0xf5, 0x05);
+ EXIT_CONFIG_MODE(port);
+}
+
+void ali_m1535plus_init(struct pci_dev *dev)
+{
+ pci_write_config_byte(dev, 0x40, 0x18); /* PCI Interface Control */
+ pci_write_config_byte(dev, 0x41, 0xc0); /* PS2 keyb & mouse enable */
+ pci_write_config_byte(dev, 0x42, 0x41); /* ISA bus cycle control */
+ pci_write_config_byte(dev, 0x43, 0x00); /* ISA bus cycle control 2 */
+ pci_write_config_byte(dev, 0x44, 0x5d); /* IDE enable & IRQ 14 */
+ pci_write_config_byte(dev, 0x45, 0x0b); /* PCI int polling mode */
+ pci_write_config_byte(dev, 0x47, 0x00); /* BIOS chip select control */
+
+ /* IRQ routing */
+ pci_write_config_byte(dev, 0x48, 0x03); /* INTA IRQ10, INTB disable */
+ pci_write_config_byte(dev, 0x49, 0x00); /* INTC and INTD disable */
+ pci_write_config_byte(dev, 0x4a, 0x00); /* INTE and INTF disable */
+ pci_write_config_byte(dev, 0x4b, 0x90); /* Audio IRQ11, Modem disable */
+
+ pci_write_config_word(dev, 0x50, 0x4000); /* Parity check IDE enable */
+ pci_write_config_word(dev, 0x52, 0x0000); /* USB & RTC disable */
+ pci_write_config_word(dev, 0x54, 0x0002); /* ??? no info */
+ pci_write_config_word(dev, 0x56, 0x0002); /* PCS1J signal disable */
+
+ pci_write_config_byte(dev, 0x59, 0x00); /* PCSDS */
+ pci_write_config_byte(dev, 0x5a, 0x00);
+ pci_write_config_byte(dev, 0x5b, 0x00);
+ pci_write_config_word(dev, 0x5c, 0x0000);
+ pci_write_config_byte(dev, 0x5e, 0x00);
+ pci_write_config_byte(dev, 0x5f, 0x00);
+ pci_write_config_word(dev, 0x60, 0x0000);
+
+ pci_write_config_byte(dev, 0x6c, 0x00);
+ pci_write_config_byte(dev, 0x6d, 0x48); /* ROM address mapping */
+ pci_write_config_byte(dev, 0x6e, 0x00); /* ??? what for? */
+
+ pci_write_config_byte(dev, 0x70, 0x12); /* Serial IRQ control */
+ pci_write_config_byte(dev, 0x71, 0xEF); /* DMA channel select */
+ pci_write_config_byte(dev, 0x72, 0x03); /* USB IDSEL */
+ pci_write_config_byte(dev, 0x73, 0x00); /* ??? no info */
+
+ /*
+ * IRQ setup ALi M5237 USB Host Controller
+ * IRQ: 9
+ */
+ pci_write_config_byte(dev, 0x74, 0x01); /* USB IRQ9 */
+
+ pci_write_config_byte(dev, 0x75, 0x1f); /* IDE2 IRQ 15 */
+ pci_write_config_byte(dev, 0x76, 0x80); /* ACPI disable */
+ pci_write_config_byte(dev, 0x77, 0x40); /* Modem disable */
+ pci_write_config_dword(dev, 0x78, 0x20000000); /* Pin select 2 */
+ pci_write_config_byte(dev, 0x7c, 0x00); /* Pin select 3 */
+ pci_write_config_byte(dev, 0x81, 0x00); /* ID read/write control */
+ pci_write_config_byte(dev, 0x90, 0x00); /* PCI PM block control */
+ pci_write_config_word(dev, 0xa4, 0x0000); /* PMSCR */
+
+#ifdef CONFIG_BLK_DEV_FD
+ ali_m1535plus_fdc_init(1);
+#endif
+
+ ali_m1535plus_keyboard_init(1);
+ ali_m1535plus_hotkey_init(1);
+}
+
+static inline void ali_config_writeb(u8 reg, u8 val, int devfn)
+{
+ u32 data;
+ int shift;
+
+ writel((1 << 16) | (devfn << 8) | (reg & 0xfc) | 1UL, PCI_CONFIG_ADDR);
+ data = readl(PCI_CONFIG_DATA);
+
+ shift = (reg & 3) << 3;
+ data &= ~(0xff << shift);
+ data |= (((u32)val) << shift);
+
+ writel(data, PCI_CONFIG_DATA);
+}
+
+static inline u8 ali_config_readb(u8 reg, int devfn)
+{
+ u32 data;
+
+ writel((1 << 16) | (devfn << 8) | (reg & 0xfc) | 1UL, PCI_CONFIG_ADDR);
+ data = readl(PCI_CONFIG_DATA);
+
+ return (u8)(data >> ((reg & 3) << 3));
+}
+
+static inline u16 ali_config_readw(u8 reg, int devfn)
+{
+ u32 data;
+
+ writel((1 << 16) | (devfn << 8) | (reg & 0xfc) | 1UL, PCI_CONFIG_ADDR);
+ data = readl(PCI_CONFIG_DATA);
+
+ return (u16)(data >> ((reg & 2) << 3));
+}
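+
+/*
+ * Editor's note: the accessors above address the aligned 32-bit dword
+ * (reg & 0xfc) and then pick the byte lane with (reg & 3) << 3.  For
+ * example, register 0x5a lives in the dword at 0x58 and occupies bits
+ * 23:16, so ali_config_writeb() masks out 0xff << 16 before inserting
+ * the new value.
+ */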
+
+int vr4133_rockhopper = 0;
+void __init ali_m5229_preinit(void)
+{
+ if (ali_config_readw(PCI_VENDOR_ID,16) == PCI_VENDOR_ID_AL &&
+ ali_config_readw(PCI_DEVICE_ID,16) == PCI_DEVICE_ID_AL_M1533) {
+ printk(KERN_INFO "Found an NEC Rockhopper \n");
+ vr4133_rockhopper = 1;
+ /*
+ * Enable ALi M5229 IDE Controller (both channels)
+ * IDSEL: A27
+ */
+ ali_config_writeb(0x58, 0x4c, 16);
+ }
+}
+
+void __init ali_m5229_init(struct pci_dev *dev)
+{
+ /*
+ * Enable Primary/Secondary Channel Cable Detect 40-Pin
+ */
+ pci_write_config_word(dev, 0x4a, 0xc023);
+
+ /*
+ * Set only the 3rd byte for the master IDE's cycle and
+ * enable Internal IDE Function
+ */
+ pci_write_config_byte(dev, 0x50, 0x23); /* Class code attr register */
+
+ pci_write_config_byte(dev, 0x09, 0xff); /* Set native mode & stuff */
+ pci_write_config_byte(dev, 0x52, 0x00); /* use timing registers */
+ pci_write_config_byte(dev, 0x58, 0x02); /* Primary addr setup timing */
+ pci_write_config_byte(dev, 0x59, 0x22); /* Primary cmd block timing */
+ pci_write_config_byte(dev, 0x5a, 0x22); /* Pr drv 0 R/W timing */
+ pci_write_config_byte(dev, 0x5b, 0x22); /* Pr drv 1 R/W timing */
+ pci_write_config_byte(dev, 0x5c, 0x02); /* Sec addr setup timing */
+ pci_write_config_byte(dev, 0x5d, 0x22); /* Sec cmd block timing */
+ pci_write_config_byte(dev, 0x5e, 0x22); /* Sec drv 0 R/W timing */
+ pci_write_config_byte(dev, 0x5f, 0x22); /* Sec drv 1 R/W timing */
+ pci_write_config_byte(dev, PCI_LATENCY_TIMER, 0x20);
+ pci_write_config_word(dev, PCI_COMMAND,
+ PCI_COMMAND_PARITY | PCI_COMMAND_MASTER |
+ PCI_COMMAND_IO);
+}
+
--- /dev/null
+/*
+ * arch/mips/vr41xx/nec-cmbvr4133/setup.c
+ *
+ * Setup for the NEC CMB-VR4133.
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2001-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ * Support for CMBVR4133 board in 2.6
+ * Author: Manish Lachwani (mlachwani@mvista.com)
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/ide.h>
+#include <linux/ioport.h>
+
+#include <asm/reboot.h>
+#include <asm/time.h>
+#include <asm/vr41xx/cmbvr4133.h>
+#include <asm/bootinfo.h>
+
+#ifdef CONFIG_MTD
+#include <linux/mtd/physmap.h>
+#include <linux/mtd/partitions.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+
+static struct mtd_partition cmbvr4133_mtd_parts[] = {
+ {
+ .name = "User FS",
+ .size = 0x1be0000,
+ .offset = 0,
+ .mask_flags = 0,
+ },
+ {
+ .name = "PMON",
+ .size = 0x140000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE, /* force read-only */
+ },
+ {
+ .name = "User FS2",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = 0,
+ }
+};
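+
+/*
+ * Editor's note: with the 32 MiB window set up by physmap_configure()
+ * below (0x02000000 bytes at 0x1C000000), the MTDPART_SIZ_FULL entry
+ * works out to 0x2000000 - 0x1be0000 - 0x140000 = 0x2e0000 bytes for
+ * "User FS2".
+ */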
+
+#define number_partitions (sizeof(cmbvr4133_mtd_parts)/sizeof(struct mtd_partition))
+#endif
+
+extern void (*late_time_init)(void);
+
+static void __init vr4133_serial_init(void)
+{
+ vr41xx_select_siu_interface(SIU_RS232C, IRDA_NONE);
+ vr41xx_siu_init();
+ vr41xx_dsiu_init();
+}
+
+extern void i8259_init(void);
+
+static int __init nec_cmbvr4133_setup(void)
+{
+#ifdef CONFIG_ROCKHOPPER
+ extern void disable_pcnet(void);
+
+ disable_pcnet();
+#endif
+ set_io_port_base(KSEG1ADDR(0x16000000));
+
+ mips_machgroup = MACH_GROUP_NEC_VR41XX;
+ mips_machtype = MACH_NEC_CMBVR4133;
+
+ late_time_init = vr4133_serial_init;
+
+#ifdef CONFIG_PCI
+#ifdef CONFIG_ROCKHOPPER
+ ali_m5229_preinit();
+#endif
+#endif
+
+#ifdef CONFIG_ROCKHOPPER
+ rockhopper_init_irq();
+#endif
+
+#ifdef CONFIG_MTD
+ /* we use generic physmap mapping driver and we use partitions */
+ physmap_configure(0x1C000000, 0x02000000, 4, NULL);
+ physmap_set_partitions(cmbvr4133_mtd_parts, number_partitions);
+#endif
+
+ /* 128 MB memory support */
+ add_memory_region(0, 0x08000000, BOOT_MEM_RAM);
+
+#ifdef CONFIG_ROCKHOPPER
+ i8259_init();
+#endif
+ return 0;
+}
+
+early_initcall(nec_cmbvr4133_setup);
#
# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.10-pa5
+# Wed Jan 5 13:40:36 2005
#
CONFIG_PARISC=y
CONFIG_MMU=y
#
CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
-# CONFIG_STANDALONE is not set
CONFIG_BROKEN=y
CONFIG_BROKEN_ON_SMP=y
#
# General setup
#
+CONFIG_LOCALVERSION=""
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=16
CONFIG_HOTPLUG=y
+CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_EMBEDDED=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_FUTEX=y
CONFIG_EPOLL=y
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+# CONFIG_TINY_SHMEM is not set
#
# Loadable module support
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_OBSOLETE_MODPARM=y
# CONFIG_MODVERSIONS is not set
+# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_KMOD=y
#
# CONFIG_PDC_CHASSIS is not set
#
-# PCMCIA/CardBus support
+# PCCARD (PCMCIA/CardBus) support
+#
+# CONFIG_PCCARD is not set
+
+#
+# PC-card bridges
#
-CONFIG_PCMCIA=m
-CONFIG_PCMCIA_DEBUG=y
-CONFIG_YENTA=m
-CONFIG_CARDBUS=y
-# CONFIG_I82092 is not set
-# CONFIG_TCIC is not set
#
# PCI Hotplug Support
#
# Generic Driver Options
#
-# CONFIG_FW_LOADER is not set
-CONFIG_DEBUG_DRIVER=y
+# CONFIG_STANDALONE is not set
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+CONFIG_FW_LOADER=y
+# CONFIG_DEBUG_DRIVER is not set
#
# Memory Technology Devices (MTD)
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
-# CONFIG_BLK_DEV_CARMEL is not set
+# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=6144
CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_CDROM_PKTCDVD=m
+CONFIG_CDROM_PKTCDVD_BUFFERS=8
+# CONFIG_CDROM_PKTCDVD_WCACHE is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
#
# ATA/ATAPI/MFM/RLL support
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_ADVANSYS is not set
-# CONFIG_SCSI_MEGARAID is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_SCSI_SATA is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_CPQFCTS is not set
CONFIG_SCSI_QLOGIC_FC=m
# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set
CONFIG_SCSI_QLOGIC_1280=m
+# CONFIG_SCSI_QLOGIC_1280_1040 is not set
CONFIG_SCSI_QLA2XXX=y
# CONFIG_SCSI_QLA21XX is not set
# CONFIG_SCSI_QLA22XX is not set
# CONFIG_SCSI_DC390T is not set
CONFIG_SCSI_DEBUG=m
-#
-# PCMCIA SCSI adapter support
-#
-# CONFIG_PCMCIA_FDOMAIN is not set
-# CONFIG_PCMCIA_QLOGIC is not set
-# CONFIG_PCMCIA_SYM53C500 is not set
-
#
# Multi-device support (RAID and LVM)
#
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
+# CONFIG_MD_RAID10 is not set
# CONFIG_MD_RAID5 is not set
# CONFIG_MD_RAID6 is not set
# CONFIG_MD_MULTIPATH is not set
+# CONFIG_MD_FAULTY is not set
# CONFIG_BLK_DEV_DM is not set
#
#
CONFIG_FUSION=m
CONFIG_FUSION_MAX_SGE=40
-CONFIG_FUSION_ISENSE=m
CONFIG_FUSION_CTL=m
#
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_IPCOMP is not set
+# CONFIG_INET_TUNNEL is not set
+CONFIG_IP_TCPDIAG=y
+# CONFIG_IP_TCPDIAG_IPV6 is not set
#
# IP: Virtual Server Configuration
# IP: Netfilter Configuration
#
CONFIG_IP_NF_CONNTRACK=m
+# CONFIG_IP_NF_CT_ACCT is not set
+# CONFIG_IP_NF_CONNTRACK_MARK is not set
+# CONFIG_IP_NF_CT_PROTO_SCTP is not set
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_MATCH_STATE=m
CONFIG_IP_NF_MATCH_CONNTRACK=m
CONFIG_IP_NF_MATCH_OWNER=m
+CONFIG_IP_NF_MATCH_ADDRTYPE=m
+CONFIG_IP_NF_MATCH_REALM=m
+CONFIG_IP_NF_MATCH_SCTP=m
+CONFIG_IP_NF_MATCH_COMMENT=m
+CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_TARGET_LOG=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
-CONFIG_IP_NF_TARGET_LOG=m
-CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
+CONFIG_IP_NF_RAW=m
+CONFIG_IP_NF_TARGET_NOTRACK=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
# CONFIG_IP_NF_COMPAT_IPFWADM is not set
-CONFIG_IP_NF_TARGET_NOTRACK=m
-CONFIG_IP_NF_RAW=m
CONFIG_XFRM=y
CONFIG_XFRM_USER=m
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
-# CONFIG_NET_FASTROUTE is not set
-# CONFIG_NET_HW_FLOWCONTROL is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
+CONFIG_NET_CLS_ROUTE=y
#
# Network testing
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
-CONFIG_PCMCIA_XIRCOM=m
-CONFIG_PCMCIA_XIRTULIP=m
CONFIG_HP100=m
CONFIG_NET_PCI=y
CONFIG_PCNET32=m
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SK98LIN is not set
+# CONFIG_VIA_VELOCITY is not set
CONFIG_TIGON3=m
#
# Obsolete Wireless cards support (pre-802.11)
#
# CONFIG_STRIP is not set
-CONFIG_PCMCIA_WAVELAN=m
-CONFIG_PCMCIA_NETWAVE=m
-
-#
-# Wireless 802.11 Frequency Hopping cards support
-#
-# CONFIG_PCMCIA_RAYCS is not set
#
# Wireless 802.11b ISA/PCI cards support
#
-# CONFIG_AIRO is not set
CONFIG_HERMES=m
CONFIG_PLX_HERMES=m
CONFIG_TMD_HERMES=m
CONFIG_PCI_HERMES=m
# CONFIG_ATMEL is not set
-#
-# Wireless 802.11b Pcmcia/Cardbus cards support
-#
-CONFIG_PCMCIA_HERMES=m
-CONFIG_AIRO_CS=m
-# CONFIG_PCMCIA_WL3501 is not set
-
#
# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
#
# CONFIG_PRISM54 is not set
CONFIG_NET_WIRELESS=y
-#
-# PCMCIA network device support
-#
-CONFIG_NET_PCMCIA=y
-CONFIG_PCMCIA_3C589=m
-CONFIG_PCMCIA_3C574=m
-# CONFIG_PCMCIA_FMVJ18X is not set
-# CONFIG_PCMCIA_PCNET is not set
-# CONFIG_PCMCIA_NMCLAN is not set
-CONFIG_PCMCIA_SMC91C92=m
-CONFIG_PCMCIA_XIRC2PS=m
-# CONFIG_PCMCIA_AXNET is not set
-
#
# Wan interfaces
#
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
-# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=8
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
-# CONFIG_QIC02_TAPE is not set
#
# IPMI
#
# Ftape, the floppy tape device driver
#
-# CONFIG_FTAPE is not set
# CONFIG_AGP is not set
# CONFIG_DRM is not set
-
-#
-# PCMCIA character devices
-#
-# CONFIG_SYNCLINK_CS is not set
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=256
#
# CONFIG_I2C is not set
+#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
#
# Misc devices
#
#
# Console display driver support
#
-# CONFIG_MDA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE_COLUMNS=160
CONFIG_DUMMY_CONSOLE_ROWS=64
CONFIG_DUMMY_CONSOLE=y
# USB support
#
# CONFIG_USB is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+
+#
+# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
+#
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
#
# File systems
#
# CONFIG_JFS_POSIX_ACL is not set
# CONFIG_JFS_DEBUG is not set
# CONFIG_JFS_STATISTICS is not set
+CONFIG_FS_POSIX_ACL=y
CONFIG_XFS_FS=m
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
# CONFIG_QUOTA is not set
+# CONFIG_DNOTIFY is not set
# CONFIG_AUTOFS_FS is not set
-# CONFIG_AUTOFS4_FS is not set
+CONFIG_AUTOFS4_FS=y
#
# CD-ROM/DVD Filesystems
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
CONFIG_UDF_FS=m
+CONFIG_UDF_NLS=y
#
# DOS/FAT/NT Filesystems
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_NTFS_FS is not set
#
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVPTS_FS_XATTR is not set
CONFIG_TMPFS=y
+# CONFIG_TMPFS_XATTR is not set
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
CONFIG_RPCSEC_GSS_KRB5=y
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
CONFIG_SMB_FS=m
CONFIG_SMB_NLS_DEFAULT=y
CONFIG_SMB_NLS_REMOTE="cp437"
CONFIG_CIFS=m
# CONFIG_CIFS_STATS is not set
+# CONFIG_CIFS_XATTR is not set
+# CONFIG_CIFS_EXPERIMENTAL is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
# Kernel hacking
#
CONFIG_DEBUG_KERNEL=y
-# CONFIG_DEBUG_SLAB is not set
CONFIG_MAGIC_SYSRQ=y
+# CONFIG_SCHEDSTATS is not set
+# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
-# CONFIG_FRAME_POINTER is not set
+# CONFIG_DEBUG_KOBJECT is not set
# CONFIG_DEBUG_INFO is not set
#
# Security options
#
+# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
CONFIG_CRYPTO_SHA1=m
CONFIG_CRYPTO_SHA256=m
CONFIG_CRYPTO_SHA512=m
+CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_AES=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_TEA=m
# CONFIG_CRYPTO_ARC4 is not set
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
CONFIG_CRYPTO_CRC32C=m
#
# Library routines
#
+CONFIG_CRC_CCITT=m
CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=m
ENTRY_NAME(hpux_unimplemented_wrapper)
ENTRY_NAME(hpux_unimplemented_wrapper) /* 195 */
ENTRY_NAME(hpux_statfs)
- ENTRY_NAME(hpux_unimplemented_wrapper)
+ ENTRY_NAME(hpux_fstatfs)
ENTRY_NAME(hpux_unimplemented_wrapper)
ENTRY_NAME(hpux_unimplemented_wrapper)
ENTRY_NAME(sys_waitpid) /* 200 */
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
+#include <linux/file.h>
#include <linux/fs.h>
+#include <linux/namei.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/syscalls.h>
#include <linux/utsname.h>
-#include <linux/vmalloc.h>
#include <linux/vfs.h>
+#include <linux/vmalloc.h>
#include <asm/errno.h>
#include <asm/pgalloc.h>
int hpux_setpgrp(void)
{
- extern int sys_setpgid(int, int);
return sys_setpgid(0,0);
}
* aid in porting forward if and when sys_ustat() is changed from
* its form in kernel 2.2.5.
*/
-static int hpux_ustat(dev_t dev, struct hpux_ustat *ubuf)
+static int hpux_ustat(dev_t dev, struct hpux_ustat __user *ubuf)
{
struct super_block *s;
struct hpux_ustat tmp; /* Changed to hpux_ustat */
if (err)
goto out;
- memset(&tmp,0,sizeof(struct hpux_ustat)); /* Changed to hpux_ustat */
+ memset(&tmp,0,sizeof(tmp));
tmp.f_tfree = (int32_t)sbuf.f_bfree;
tmp.f_tinode = (u_int32_t)sbuf.f_ffree;
tmp.f_blksize = (u_int32_t)sbuf.f_bsize; /* Added this line */
- /* Changed to hpux_ustat: */
- err = copy_to_user(ubuf,&tmp,sizeof(struct hpux_ustat)) ? -EFAULT : 0;
+ err = copy_to_user(ubuf, &tmp, sizeof(tmp)) ? -EFAULT : 0;
out:
return err;
}
int16_t f_pad;
};
-/* hpux statfs */
-int hpux_statfs(const char *path, struct hpux_statfs *buf)
+static int vfs_statfs_hpux(struct super_block *sb, struct hpux_statfs *buf)
{
- int error;
- int len;
- char *kpath;
-
- len = strlen_user((char *)path);
-
- kpath = (char *) kmalloc(len+1, GFP_KERNEL);
- if ( !kpath ) {
- printk(KERN_DEBUG "failed to kmalloc kpath\n");
- return 0;
- }
+ struct kstatfs st;
+ int retval;
+
+ retval = vfs_statfs(sb, &st);
+ if (retval)
+ return retval;
+
+ memset(buf, 0, sizeof(*buf));
+ buf->f_type = st.f_type;
+ buf->f_bsize = st.f_bsize;
+ buf->f_blocks = st.f_blocks;
+ buf->f_bfree = st.f_bfree;
+ buf->f_bavail = st.f_bavail;
+ buf->f_files = st.f_files;
+ buf->f_ffree = st.f_ffree;
+ buf->f_fsid[0] = st.f_fsid.val[0];
+ buf->f_fsid[1] = st.f_fsid.val[1];
- if ( copy_from_user(kpath, (char *)path, len+1) ) {
- printk(KERN_DEBUG "failed to copy_from_user kpath\n");
- kfree(kpath);
return 0;
- }
-
- printk(KERN_DEBUG "hpux_statfs(\"%s\",-)\n", kpath);
+}
- kfree(kpath);
+/* hpux statfs */
+asmlinkage long hpux_statfs(const char __user *path,
+ struct hpux_statfs __user *buf)
+{
+ struct nameidata nd;
+ int error;
- /* just fake it, beginning of structures match */
- error = sys_statfs(path, (struct statfs *) buf);
+ error = user_path_walk(path, &nd);
+ if (!error) {
+ struct hpux_statfs tmp;
+ error = vfs_statfs_hpux(nd.dentry->d_inode->i_sb, &tmp);
+ if (!error && copy_to_user(buf, &tmp, sizeof(tmp)))
+ error = -EFAULT;
+ path_release(&nd);
+ }
+ return error;
+}
- /* ignoring rest of statfs struct, but it should be zeros. Need to do
- something with f_fsid[1], which is the fstype for sysfs */
+asmlinkage long hpux_fstatfs(unsigned int fd, struct hpux_statfs __user * buf)
+{
+ struct file *file;
+ struct hpux_statfs tmp;
+ int error;
+ error = -EBADF;
+ file = fget(fd);
+ if (!file)
+ goto out;
+ error = vfs_statfs_hpux(file->f_dentry->d_inode->i_sb, &tmp);
+ if (!error && copy_to_user(buf, &tmp, sizeof(tmp)))
+ error = -EFAULT;
+ fput(file);
+ out:
return error;
}
"setdomainname",
"async_daemon",
"getdirentries", /* 195 */
- "statfs",
- "fstatfs",
+ NULL,
+ NULL,
"vfsmount",
NULL,
"waitpid", /* 200 */
current->thread.map_base = DEFAULT_MAP_BASE32; \
current->thread.task_size = DEFAULT_TASK_SIZE32 \
-#define jiffies_to_timeval jiffies_to_compat_timeval
+#undef cputime_to_timeval
+#define cputime_to_timeval cputime_to_compat_timeval
static __inline__ void
-jiffies_to_compat_timeval(unsigned long jiffies, struct compat_timeval *value)
+cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value)
{
+ unsigned long jiffies = cputime_to_jiffies(cputime);
value->tv_usec = (jiffies % HZ) * (1000000L / HZ);
value->tv_sec = jiffies / HZ;
}
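+
+/*
+ * Editor's note: assuming (for illustration only) HZ == 250, a cputime
+ * of 1234 ticks converts to tv_sec = 1234 / 250 = 4 and
+ * tv_usec = (1234 % 250) * (1000000 / 250) = 936000.
+ */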
* Copyright (c) 2001 Matthew Wilcox for Hewlett Packard
* Copyright (c) 2001 Helge Deller <deller@gmx.de>
* Copyright (c) 2001,2002 Ryan Bradetich
+ * Copyright (c) 2004 Thibaut VARENE <varenet@parisc-linux.org>
*
* The file handles registering devices and drivers, then matching them.
* It's the closest we get to a dating agency.
+ *
+ * If you're thinking about modifying this file, here are some gotchas to
+ * bear in mind:
+ * - 715/Mirage device paths have a dummy device between Lasi and its children
+ * - The EISA adapter may show up as a sibling or child of Wax
+ * - Dino has an optionally functional serial port. If firmware enables it,
+ * it shows up as a child of Dino. If firmware disables it, the buswalk
+ * finds it and it shows up as a child of Cujo
+ * - Dino has both parisc and pci devices as children
+ * - parisc devices are discovered in a random order, including children
+ * before parents in some cases.
*/
#include <linux/slab.h>
struct hppa_dma_ops *hppa_dma_ops;
EXPORT_SYMBOL(hppa_dma_ops);
-static struct parisc_device root;
+static struct device root = {
+ .bus_id = "parisc",
+};
-#define for_each_padev(dev) \
- for (dev = root.child; dev != NULL; dev = next_dev(dev))
+#define for_each_padev(padev) \
+ for (padev = next_dev(&root); padev != NULL; \
+ padev = next_dev(&padev->dev))
-#define check_dev(dev) \
- (dev->id.hw_type != HPHW_FAULTY) ? dev : next_dev(dev)
+#define check_dev(padev) \
+ (padev->id.hw_type != HPHW_FAULTY) ? padev : next_dev(&padev->dev)
/**
* next_dev - enumerates registered devices
* next_dev does a depth-first search of the tree, returning parents
* before children. Returns NULL when there are no more devices.
*/
-struct parisc_device *next_dev(struct parisc_device *dev)
+static struct parisc_device *next_dev(struct device *dev)
{
- if (dev->child) {
- return check_dev(dev->child);
- } else if (dev->sibling) {
- return dev->sibling;
+ if (!list_empty(&dev->children)) {
+ dev = list_to_dev(dev->children.next);
+ return check_dev(to_parisc_device(dev));
}
- /* Exhausted tree at this level, time to go up. */
- do {
+ while (dev != &root) {
+ if (dev->node.next != &dev->parent->children) {
+ dev = list_to_dev(dev->node.next);
+ return to_parisc_device(dev);
+ }
dev = dev->parent;
- if (dev && dev->sibling)
- return dev->sibling;
- } while (dev != &root);
+ }
return NULL;
}
* Walks up the device tree looking for a device of the specified type.
* If it finds it, it returns it. If not, it returns NULL.
*/
-const struct parisc_device *find_pa_parent_type(const struct parisc_device *dev, int type)
+const struct parisc_device *
+find_pa_parent_type(const struct parisc_device *padev, int type)
{
+ const struct device *dev = &padev->dev;
while (dev != &root) {
- if (dev->id.hw_type == type)
- return dev;
+ struct parisc_device *candidate = to_parisc_device(dev);
+ if (candidate->id.hw_type == type)
+ return candidate;
dev = dev->parent;
}
return NULL;
}
-static void
-get_node_path(struct parisc_device *dev, struct hardware_path *path)
+#ifdef CONFIG_PCI
+static inline int is_pci_dev(struct device *dev)
+{
+ return dev->bus == &pci_bus_type;
+}
+#else
+static inline int is_pci_dev(struct device *dev)
+{
+ return 0;
+}
+#endif
+
+/*
+ * get_node_path fills in @path with the firmware path to the device.
+ * Note that if @dev is a parisc device, we don't fill in the 'mod' field.
+ * This is because both callers pass the parent and fill in the mod
+ * themselves. If @dev is a PCI device, we do fill it in, even though this
+ * is inconsistent.
+ */
+static void get_node_path(struct device *dev, struct hardware_path *path)
{
int i = 5;
memset(&path->bc, -1, 6);
+
+ if (is_pci_dev(dev)) {
+ unsigned int devfn = to_pci_dev(dev)->devfn;
+ path->mod = PCI_FUNC(devfn);
+ path->bc[i--] = PCI_SLOT(devfn);
+ dev = dev->parent;
+ }
+
while (dev != &root) {
- path->bc[i--] = dev->hw_path;
+ if (is_pci_dev(dev)) {
+ unsigned int devfn = to_pci_dev(dev)->devfn;
+ path->bc[i--] = PCI_SLOT(devfn) | (PCI_FUNC(devfn)<< 5);
+ } else if (dev->bus == &parisc_bus_type) {
+ path->bc[i--] = to_parisc_device(dev)->hw_path;
+ }
dev = dev->parent;
}
}
{
struct hardware_path path;
- get_node_path(dev->parent, &path);
+ get_node_path(dev->dev.parent, &path);
path.mod = dev->hw_path;
return print_hwpath(&path, output);
}
#if defined(CONFIG_PCI) || defined(CONFIG_ISA)
/**
- * get_pci_node_path - Returns hardware path for PCI devices
- * dev: The device to return the path for
- * output: Pointer to a previously-allocated array to place the path in.
+ * get_pci_node_path - Determines the hardware path for a PCI device
+ * @pdev: The device to return the path for
+ * @path: Pointer to a previously-allocated array to place the path in.
*
* This function fills in the hardware_path structure with the route to
* the specified PCI device. This structure is suitable for passing to
* PDC calls.
*/
-void get_pci_node_path(struct pci_dev *dev, struct hardware_path *path)
+void get_pci_node_path(struct pci_dev *pdev, struct hardware_path *path)
{
- struct pci_bus *bus;
- const struct parisc_device *padev;
- int i = 5;
-
- memset(&path->bc, -1, 6);
- path->mod = PCI_FUNC(dev->devfn);
- path->bc[i--] = PCI_SLOT(dev->devfn);
- for (bus = dev->bus; bus->parent; bus = bus->parent) {
- unsigned int devfn = bus->self->devfn;
- path->bc[i--] = PCI_SLOT(devfn) | (PCI_FUNC(devfn) << 5);
- }
-
- padev = HBA_DATA(bus->bridge->platform_data)->dev;
- while (padev != &root) {
- path->bc[i--] = padev->hw_path;
- padev = padev->parent;
- }
+ get_node_path(&pdev->dev, path);
}
EXPORT_SYMBOL(get_pci_node_path);
#endif /* defined(CONFIG_PCI) || defined(CONFIG_ISA) */
+static void setup_bus_id(struct parisc_device *padev)
+{
+ struct hardware_path path;
+ char *output = padev->dev.bus_id;
+ int i;
+
+ get_node_path(padev->dev.parent, &path);
-struct parisc_device * create_tree_node(char id, struct parisc_device *parent,
- struct parisc_device **insert)
+ for (i = 0; i < 6; i++) {
+ if (path.bc[i] == -1)
+ continue;
+ output += sprintf(output, "%u:", (unsigned char) path.bc[i]);
+ }
+ sprintf(output, "%u", (unsigned char) padev->hw_path);
+}
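+
+/*
+ * Editor's note: for a device whose ancestors have hw_paths 8 and 0 and
+ * whose own hw_path is 4, the loop above yields the sysfs bus_id
+ * "8:0:4"; bc[] slots that are still -1 are skipped.
+ */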
+
+struct parisc_device * create_tree_node(char id, struct device *parent)
{
struct parisc_device *dev = kmalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
return NULL;
+
memset(dev, 0, sizeof(*dev));
dev->hw_path = id;
dev->id.hw_type = HPHW_FAULTY;
- dev->parent = parent;
- dev->sibling = *insert;
- *insert = dev;
+
+ dev->dev.parent = parent;
+ setup_bus_id(dev);
+
+ dev->dev.bus = &parisc_bus_type;
+ dev->dma_mask = 0xffffffffUL; /* PARISC devices are 32-bit */
+
+ /* make the generic dma mask a pointer to the parisc one */
+ dev->dev.dma_mask = &dev->dma_mask;
+ dev->dev.coherent_dma_mask = dev->dma_mask;
+ device_register(&dev->dev);
+
return dev;
}
* Checks all the children of @parent for a matching @id. If none
* found, it allocates a new device and returns it.
*/
-struct parisc_device *
-alloc_tree_node(struct parisc_device *parent, char id)
+static struct parisc_device * alloc_tree_node(struct device *parent, char id)
{
- struct parisc_device *prev;
- if ((!parent->child) || (parent->child->hw_path > id)) {
- return create_tree_node(id, parent, &parent->child);
- }
+ struct device *dev;
- prev = parent->child;
- if (prev->hw_path == id)
- return prev;
-
- while (prev->sibling && prev->sibling->hw_path < id) {
- prev = prev->sibling;
+ list_for_each_entry(dev, &parent->children, node) {
+ struct parisc_device *padev = to_parisc_device(dev);
+ if (padev->hw_path == id)
+ return padev;
}
- if ((prev->sibling) && (prev->sibling->hw_path == id))
- return prev->sibling;
-
- return create_tree_node(id, parent, &prev->sibling);
+ return create_tree_node(id, parent);
}
-static struct parisc_device *find_parisc_device(struct hardware_path *modpath)
+static struct parisc_device *create_parisc_device(struct hardware_path *modpath)
{
int i;
- struct parisc_device *parent = &root;
+ struct device *parent = &root;
for (i = 0; i < 6; i++) {
if (modpath->bc[i] == -1)
continue;
- parent = alloc_tree_node(parent, modpath->bc[i]);
+ parent = &alloc_tree_node(parent, modpath->bc[i])->dev;
}
return alloc_tree_node(parent, modpath->mod);
}
if (status != PDC_OK)
return NULL;
- dev = find_parisc_device(mod_path);
+ dev = create_parisc_device(mod_path);
if (dev->id.hw_type != HPHW_FAULTY) {
char p[64];
print_pa_hwpath(dev, p);
static int parisc_generic_match(struct device *dev, struct device_driver *drv)
{
- return match_device(to_parisc_driver(drv), to_parisc_device(dev));
+ return match_device(to_parisc_driver(drv), to_parisc_device(dev));
+}
+
+#define pa_dev_attr(name, field, format_string) \
+static ssize_t name##_show(struct device *dev, char *buf) \
+{ \
+ struct parisc_device *padev = to_parisc_device(dev); \
+ return sprintf(buf, format_string, padev->field); \
}
+#define pa_dev_attr_id(field, format) pa_dev_attr(field, id.field, format)
+
+pa_dev_attr(irq, irq, "%u\n");
+pa_dev_attr_id(hw_type, "0x%02x\n");
+pa_dev_attr(rev, id.hversion_rev, "0x%x\n");
+pa_dev_attr_id(hversion, "0x%03x\n");
+pa_dev_attr_id(sversion, "0x%05x\n");
+
+static struct device_attribute parisc_device_attrs[] = {
+ __ATTR_RO(irq),
+ __ATTR_RO(hw_type),
+ __ATTR_RO(rev),
+ __ATTR_RO(hversion),
+ __ATTR_RO(sversion),
+ __ATTR_NULL,
+};
+
struct bus_type parisc_bus_type = {
.name = "parisc",
.match = parisc_generic_match,
+ .dev_attrs = parisc_device_attrs,
};
/**
return 0;
}
+/**
+ * match_pci_device - Matches a pci device against a given hardware path
+ * entry.
+ * @dev: the generic device (known to be contained by a pci_dev).
+ * @index: the current BC index
+ * @modpath: the hardware path.
+ * @return: true if the device matches the hardware path.
+ */
+static int match_pci_device(struct device *dev, int index,
+ struct hardware_path *modpath)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int id;
+
+ if (index == 5) {
+ /* we are at the end of the path, and on the actual device */
+ unsigned int devfn = pdev->devfn;
+ return ((modpath->bc[5] == PCI_SLOT(devfn)) &&
+ (modpath->mod == PCI_FUNC(devfn)));
+ }
+
+ id = PCI_SLOT(pdev->devfn) | (PCI_FUNC(pdev->devfn) << 5);
+ return (modpath->bc[index] == id);
+}
+
+/**
+ * match_parisc_device - Matches a parisc device against a given hardware
+ * path entry.
+ * @dev: the generic device (known to be contained by a parisc_device).
+ * @index: the current BC index
+ * @modpath: the hardware path.
+ * @return: true if the device matches the hardware path.
+ */
+static int match_parisc_device(struct device *dev, int index,
+ struct hardware_path *modpath)
+{
+ struct parisc_device *curr = to_parisc_device(dev);
+ char id = (index == 6) ? modpath->mod : modpath->bc[index];
+
+ return (curr->hw_path == id);
+}
+
+/**
+ * parse_tree_node - returns a device entry in the iotree
+ * @parent: the parent node in the tree
+ * @index: the current BC index
+ * @modpath: the hardware_path struct to match a device against
+ * @return: The corresponding device if found, NULL otherwise.
+ *
+ * Checks all the children of @parent for an entry matching @modpath at
+ * the given @index. If none is found, it returns NULL.
+ */
+static struct device *
+parse_tree_node(struct device *parent, int index, struct hardware_path *modpath)
+{
+ struct device *device;
+
+ list_for_each_entry(device, &parent->children, node) {
+ if (device->bus == &parisc_bus_type) {
+ if (match_parisc_device(device, index, modpath))
+ return device;
+ } else if (is_pci_dev(device)) {
+ if (match_pci_device(device, index, modpath))
+ return device;
+ } else if (device->bus == NULL) {
+ /* we are on a bus bridge */
+ struct device *new = parse_tree_node(device, index, modpath);
+ if (new)
+ return new;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * hwpath_to_device - Finds the generic device corresponding to a given hardware path.
+ * @modpath: the hardware path.
+ * @return: The target device, NULL if not found.
+ */
+struct device *hwpath_to_device(struct hardware_path *modpath)
+{
+ int i;
+ struct device *parent = &root;
+ for (i = 0; i < 6; i++) {
+ if (modpath->bc[i] == -1)
+ continue;
+ parent = parse_tree_node(parent, i, modpath);
+ if (!parent)
+ return NULL;
+ }
+ if (is_pci_dev(parent)) /* pci devices already parse MOD */
+ return parent;
+ else
+ return parse_tree_node(parent, 6, modpath);
+}
+EXPORT_SYMBOL(hwpath_to_device);
+
#define BC_PORT_MASK 0x8
#define BC_LOWER_PORT 0x8
((dev->id.hw_type == HPHW_IOA) || (dev->id.hw_type == HPHW_BCPORT))
#define IS_LOWER_PORT(dev) \
- ((gsc_readl(&((struct bc_module *)dev->hpa)->io_status) \
+ ((gsc_readl(dev->hpa + offsetof(struct bc_module, io_status)) \
& BC_PORT_MASK) == BC_LOWER_PORT)
#define MAX_NATIVE_DEVICES 64
#define FLEX_MASK F_EXTEND(0xfffc0000)
#define IO_IO_LOW offsetof(struct bc_module, io_io_low)
#define IO_IO_HIGH offsetof(struct bc_module, io_io_high)
-#define READ_IO_IO_LOW(dev) (unsigned long)(signed int)__raw_readl(dev->hpa + IO_IO_LOW)
-#define READ_IO_IO_HIGH(dev) (unsigned long)(signed int)__raw_readl(dev->hpa + IO_IO_HIGH)
+#define READ_IO_IO_LOW(dev) (unsigned long)(signed int)gsc_readl(dev->hpa + IO_IO_LOW)
+#define READ_IO_IO_HIGH(dev) (unsigned long)(signed int)gsc_readl(dev->hpa + IO_IO_HIGH)
static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
- struct parisc_device *parent);
+ struct device *parent);
void walk_lower_bus(struct parisc_device *dev)
{
io_io_high = (READ_IO_IO_HIGH(dev)+ ~FLEX_MASK) & FLEX_MASK;
}
- walk_native_bus(io_io_low, io_io_high, dev);
+ walk_native_bus(io_io_low, io_io_high, &dev->dev);
}
/**
* keyboard ports). This problem is not yet solved.
*/
static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
- struct parisc_device *parent)
+ struct device *parent)
{
int i, devices_found = 0;
unsigned long hpa = io_io_low;
&root);
}
-void fixup_child_irqs(struct parisc_device *parent, int base,
- int (*choose_irq)(struct parisc_device *))
-{
- struct parisc_device *dev;
-
- if (!parent->child)
- return;
-
- for (dev = check_dev(parent->child); dev; dev = dev->sibling) {
- int irq = choose_irq(dev);
- if (irq > 0) {
-#ifdef __LP64__
- irq += 32;
-#endif
- dev->irq = base + irq;
- }
- }
-}
-
static void print_parisc_device(struct parisc_device *dev)
{
char hw_path[64];
printk("\n");
}
-void print_subdevices(struct parisc_device *parent)
-{
- struct parisc_device *dev;
- for (dev = parent->child; dev != parent->sibling; dev = next_dev(dev)) {
- print_parisc_device(dev);
- }
-}
-
-
-/*
- * parisc_generic_device_register_recursive() - internal function to recursively
- * register all parisc devices
- */
-static void parisc_generic_device_register_recursive( struct parisc_device *dev )
-{
- char tmp1[32];
-
- /* has this device been registered already ? */
- if (dev->dev.dma_mask != NULL)
- return;
-
- /* register all parents recursively */
- if (dev->parent && dev->parent!=&root)
- parisc_generic_device_register_recursive(dev->parent);
-
- /* set up the generic device tree for this */
- snprintf(tmp1, sizeof(tmp1), "%d", dev->hw_path);
- if (dev->parent && dev->parent != &root) {
- struct parisc_device *ndev;
- char tmp2[32];
-
- dev->dev.parent = &dev->parent->dev;
- for(ndev = dev->parent; ndev != &root;
- ndev = ndev->parent) {
- snprintf(tmp2, sizeof(tmp2), "%d:%s",
- ndev->hw_path, tmp1);
- strlcpy(tmp1, tmp2, sizeof(tmp1));
- }
- }
-
- dev->dev.bus = &parisc_bus_type;
- snprintf(dev->dev.bus_id, sizeof(dev->dev.bus_id), "parisc%s",
- tmp1);
- /* make the generic dma mask a pointer to the parisc one */
- dev->dev.dma_mask = &dev->dma_mask;
- dev->dev.coherent_dma_mask = dev->dma_mask;
- pr_debug("device_register(%s)\n", dev->dev.bus_id);
- device_register(&dev->dev);
-}
-
-/*
- * parisc_generic_device_register() - register all parisc devices
+/**
+ * init_parisc_bus - Some preparation to be done before inventory
*/
-void parisc_generic_device_register(void)
+void init_parisc_bus(void)
{
- struct parisc_device *dev;
-
bus_register(&parisc_bus_type);
-
- for_each_padev(dev) {
- parisc_generic_device_register_recursive( dev );
- }
+ device_register(&root);
+ get_device(&root);
}
/**
}
/**
- * do_system_map_inventory - Retrieve firmware devices via SYSTEM_MAP.
+ * system_map_inventory - Retrieve firmware devices via SYSTEM_MAP.
*
* This function attempts to retrieve and register all the devices firmware
* knows about via the SYSTEM_MAP PDC call.
int i;
long status = PDC_OK;
- for (i = 0; status != PDC_BAD_PROC && status != PDC_NE_MOD; i++) {
+ for (i = 0; i < 256; i++) {
struct parisc_device *dev;
struct pdc_system_map_mod_info module_result;
struct pdc_module_path module_path;
status = pdc_system_map_find_mods(&module_result,
&module_path, i);
+ if ((status == PDC_BAD_PROC) || (status == PDC_NE_MOD))
+ break;
if (status != PDC_OK)
continue;
-
+
dev = alloc_pa_dev(module_result.mod_addr, &module_path.path);
if (!dev)
continue;
void __init do_device_inventory(void)
{
- extern void parisc_generic_device_register(void);
-
printk(KERN_INFO "Searching for devices...\n");
+ init_parisc_bus();
+
switch (pdc_type) {
case PDC_TYPE_PAT:
default:
panic("Unknown PDC type!\n");
}
- parisc_generic_device_register();
printk(KERN_INFO "Found devices:\n");
print_parisc_devices();
}
#include <asm/io.h>
EXPORT_SYMBOL(__ioremap);
EXPORT_SYMBOL(iounmap);
-EXPORT_SYMBOL(__memcpy_toio);
-EXPORT_SYMBOL(__memcpy_fromio);
-EXPORT_SYMBOL(__memset_io);
+EXPORT_SYMBOL(memcpy_toio);
+EXPORT_SYMBOL(memcpy_fromio);
+EXPORT_SYMBOL(memset_io);
#include <asm/unistd.h>
EXPORT_SYMBOL(sys_open);
lib-y := lusercopy.o bitops.o checksum.o io.o memset.o fixup.o memcpy.o
+obj-y := iomap.o
+
lib-$(CONFIG_SMP) += debuglocks.o
/*
* copy while checksumming, otherwise like csum_partial
*/
-unsigned int csum_partial_copy_nocheck(const char *src, char *dst,
+unsigned int csum_partial_copy_nocheck(const unsigned char *src, unsigned char *dst,
int len, unsigned int sum)
{
/*
* Copy from userspace and compute checksum. If we catch an exception
* then zero the rest of the buffer.
*/
-unsigned int csum_partial_copy_from_user (const char *src, char *dst,
+unsigned int csum_partial_copy_from_user (const unsigned char *src, unsigned char *dst,
int len, unsigned int sum,
int *err_ptr)
{
* Assumes the device can cope with 32-bit transfers. If it can't,
* don't use this function.
*/
-void __memcpy_toio(unsigned long dest, unsigned long src, int count)
+void memcpy_toio(volatile void __iomem *dst, const void *src, int count)
{
- if ((dest & 3) != (src & 3))
+ if (((unsigned long)dst & 3) != ((unsigned long)src & 3))
goto bytecopy;
- while (dest & 3) {
- writeb(*(char *)src, dest++);
+ while ((unsigned long)dst & 3) {
+ writeb(*(char *)src, dst++);
src++;
count--;
}
while (count > 3) {
- __raw_writel(*(u32 *)src, dest);
+ __raw_writel(*(u32 *)src, dst);
src += 4;
- dest += 4;
+ dst += 4;
count -= 4;
}
bytecopy:
while (count--) {
- writeb(*(char *)src, dest++);
+ writeb(*(char *)src, dst++);
src++;
}
}
** Minimize total number of transfers at cost of CPU cycles.
** TODO: only look at src alignment and adjust the stores to dest.
*/
-void __memcpy_fromio(unsigned long dest, unsigned long src, int count)
+void memcpy_fromio(void *dst, const volatile void __iomem *src, int count)
{
/* first compare alignment of src/dst */
- if ( ((dest ^ src) & 1) || (count < 2) )
+ if ( (((unsigned long)dst ^ (unsigned long)src) & 1) || (count < 2) )
goto bytecopy;
- if ( ((dest ^ src) & 2) || (count < 4) )
+ if ( (((unsigned long)dst ^ (unsigned long)src) & 2) || (count < 4) )
goto shortcopy;
/* Then check for misaligned start address */
- if (src & 1) {
- *(u8 *)dest = readb(src);
+ if ((unsigned long)src & 1) {
+ *(u8 *)dst = readb(src);
src++;
- dest++;
+ dst++;
count--;
if (count < 2) goto bytecopy;
}
- if (src & 2) {
- *(u16 *)dest = __raw_readw(src);
+ if ((unsigned long)src & 2) {
+ *(u16 *)dst = __raw_readw(src);
src += 2;
- dest += 2;
+ dst += 2;
count -= 2;
}
while (count > 3) {
- *(u32 *)dest = __raw_readl(src);
- dest += 4;
+ *(u32 *)dst = __raw_readl(src);
+ dst += 4;
src += 4;
count -= 4;
}
shortcopy:
while (count > 1) {
- *(u16 *)dest = __raw_readw(src);
+ *(u16 *)dst = __raw_readw(src);
src += 2;
- dest += 2;
+ dst += 2;
count -= 2;
}
bytecopy:
while (count--) {
- *(char *)dest = readb(src);
+ *(char *)dst = readb(src);
src++;
- dest++;
+ dst++;
}
}
* Assumes the device can cope with 32-bit transfers. If it can't,
* don't use this function.
*/
-void __memset_io(unsigned long dest, char fill, int count)
+void memset_io(volatile void __iomem *addr, unsigned char val, int count)
{
- u32 fill32 = (fill << 24) | (fill << 16) | (fill << 8) | fill;
- while (dest & 3) {
- writeb(fill, dest++);
+ u32 val32 = (val << 24) | (val << 16) | (val << 8) | val;
+ while ((unsigned long)addr & 3) {
+ writeb(val, addr++);
count--;
}
while (count > 3) {
- __raw_writel(fill32, dest);
- dest += 4;
+ __raw_writel(val32, addr);
+ addr += 4;
count -= 4;
}
while (count--) {
- writeb(fill, dest++);
+ writeb(val, addr++);
}
}
--- /dev/null
+/*
+ * iomap.c - Implement iomap interface for PA-RISC
+ * Copyright (c) 2004 Matthew Wilcox
+ */
+
+#include <linux/ioport.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+/*
+ * The iomap space on 32-bit PA-RISC is intended to look like this:
+ * 00000000-7fffffff virtual mapped IO
+ * 80000000-8fffffff ISA/EISA port space that can't be virtually mapped
+ * 90000000-9fffffff Dino port space
+ * a0000000-afffffff Astro port space
+ * b0000000-bfffffff PAT port space
+ * c0000000-cfffffff non-swapped memory IO
+ * f0000000-ffffffff legacy IO memory pointers
+ *
+ * For the moment, here's what it looks like:
+ * 80000000-8fffffff All ISA/EISA port space
+ * f0000000-ffffffff legacy IO memory pointers
+ *
+ * On 64-bit, everything is extended, so:
+ * 8000000000000000-8fffffffffffffff All ISA/EISA port space
+ * f000000000000000-ffffffffffffffff legacy IO memory pointers
+ */
+
+/*
+ * Technically, this should be 'if (VMALLOC_START < addr < VMALLOC_END)',
+ * but that's slow and we know it'll be within the first 2GB.
+ */
+#ifdef CONFIG_64BIT
+#define INDIRECT_ADDR(addr) (((unsigned long)(addr) & 1UL<<63) != 0)
+#define ADDR_TO_REGION(addr) (((unsigned long)addr >> 60) & 7)
+#define IOPORT_MAP_BASE (8UL << 60)
+#else
+#define INDIRECT_ADDR(addr) (((unsigned long)(addr) & 1UL<<31) != 0)
+#define ADDR_TO_REGION(addr) (((unsigned long)addr >> 28) & 7)
+#define IOPORT_MAP_BASE (8UL << 28)
+#endif
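+
+/*
+ * Editor's illustration: on 32-bit, a port cookie such as 0x80000378 has
+ * bit 31 set, so INDIRECT_ADDR() is true and ADDR_TO_REGION() yields
+ * (0x8 & 7) == 0, selecting the ioport ops; ADDR2PORT() below then
+ * recovers port 0x378.  An ioremap()ed legacy pointer like 0xf0001000
+ * decodes to region 7, which the iomap_ops[] table later in this file
+ * maps to the I/O-memory ops in the normal (non-DEBUG_IOREMAP)
+ * configuration.
+ */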
+
+struct iomap_ops {
+ unsigned int (*read8)(void __iomem *);
+ unsigned int (*read16)(void __iomem *);
+ unsigned int (*read32)(void __iomem *);
+ void (*write8)(u8, void __iomem *);
+ void (*write16)(u16, void __iomem *);
+ void (*write32)(u32, void __iomem *);
+ void (*read8r)(void __iomem *, void *, unsigned long);
+ void (*read16r)(void __iomem *, void *, unsigned long);
+ void (*read32r)(void __iomem *, void *, unsigned long);
+ void (*write8r)(void __iomem *, const void *, unsigned long);
+ void (*write16r)(void __iomem *, const void *, unsigned long);
+ void (*write32r)(void __iomem *, const void *, unsigned long);
+};
+
+/* Generic ioport ops. To be replaced later by specific dino/elroy/wax code */
+
+#define ADDR2PORT(addr) ((unsigned long __force)(addr) & 0xffffff)
+
+static unsigned int ioport_read8(void __iomem *addr)
+{
+ return inb(ADDR2PORT(addr));
+}
+
+static unsigned int ioport_read16(void __iomem *addr)
+{
+ return inw(ADDR2PORT(addr));
+}
+
+static unsigned int ioport_read32(void __iomem *addr)
+{
+ return inl(ADDR2PORT(addr));
+}
+
+static void ioport_write8(u8 datum, void __iomem *addr)
+{
+ outb(datum, ADDR2PORT(addr));
+}
+
+static void ioport_write16(u16 datum, void __iomem *addr)
+{
+ outw(datum, ADDR2PORT(addr));
+}
+
+static void ioport_write32(u32 datum, void __iomem *addr)
+{
+ outl(datum, ADDR2PORT(addr));
+}
+
+static void ioport_read8r(void __iomem *addr, void *dst, unsigned long count)
+{
+ insb(ADDR2PORT(addr), dst, count);
+}
+
+static void ioport_read16r(void __iomem *addr, void *dst, unsigned long count)
+{
+ insw(ADDR2PORT(addr), dst, count);
+}
+
+static void ioport_read32r(void __iomem *addr, void *dst, unsigned long count)
+{
+ insl(ADDR2PORT(addr), dst, count);
+}
+
+static void ioport_write8r(void __iomem *addr, const void *s, unsigned long n)
+{
+ outsb(ADDR2PORT(addr), s, n);
+}
+
+static void ioport_write16r(void __iomem *addr, const void *s, unsigned long n)
+{
+ outsw(ADDR2PORT(addr), s, n);
+}
+
+static void ioport_write32r(void __iomem *addr, const void *s, unsigned long n)
+{
+ outsl(ADDR2PORT(addr), s, n);
+}
+
+static const struct iomap_ops ioport_ops = {
+ ioport_read8,
+ ioport_read16,
+ ioport_read32,
+ ioport_write8,
+ ioport_write16,
+ ioport_write32,
+ ioport_read8r,
+ ioport_read16r,
+ ioport_read32r,
+ ioport_write8r,
+ ioport_write16r,
+ ioport_write32r,
+};
+
+/* Legacy I/O memory ops */
+
+static unsigned int iomem_read8(void __iomem *addr)
+{
+ return readb(addr);
+}
+
+static unsigned int iomem_read16(void __iomem *addr)
+{
+ return readw(addr);
+}
+
+static unsigned int iomem_read32(void __iomem *addr)
+{
+ return readl(addr);
+}
+
+static void iomem_write8(u8 datum, void __iomem *addr)
+{
+ writeb(datum, addr);
+}
+
+static void iomem_write16(u16 datum, void __iomem *addr)
+{
+ writew(datum, addr);
+}
+
+static void iomem_write32(u32 datum, void __iomem *addr)
+{
+ writel(datum, addr);
+}
+
+static void iomem_read8r(void __iomem *addr, void *dst, unsigned long count)
+{
+ while (count--) {
+ *(u8 *)dst = __raw_readb(addr);
+ dst++;
+ }
+}
+
+static void iomem_read16r(void __iomem *addr, void *dst, unsigned long count)
+{
+ while (count--) {
+ *(u16 *)dst = __raw_readw(addr);
+ dst += 2;
+ }
+}
+
+static void iomem_read32r(void __iomem *addr, void *dst, unsigned long count)
+{
+ while (count--) {
+ *(u32 *)dst = __raw_readl(addr);
+ dst += 4;
+ }
+}
+
+static void iomem_write8r(void __iomem *addr, const void *s, unsigned long n)
+{
+ while (n--) {
+ __raw_writeb(*(u8 *)s, addr);
+ s++;
+ }
+}
+
+static void iomem_write16r(void __iomem *addr, const void *s, unsigned long n)
+{
+ while (n--) {
+ __raw_writew(*(u16 *)s, addr);
+ s += 2;
+ }
+}
+
+static void iomem_write32r(void __iomem *addr, const void *s, unsigned long n)
+{
+ while (n--) {
+ __raw_writel(*(u32 *)s, addr);
+ s += 4;
+ }
+}
+
+static const struct iomap_ops iomem_ops = {
+ iomem_read8,
+ iomem_read16,
+ iomem_read32,
+ iomem_write8,
+ iomem_write16,
+ iomem_write32,
+ iomem_read8r,
+ iomem_read16r,
+ iomem_read32r,
+ iomem_write8r,
+ iomem_write16r,
+ iomem_write32r,
+};
+
+const struct iomap_ops *iomap_ops[8] = {
+ [0] = &ioport_ops,
+#ifdef CONFIG_DEBUG_IOREMAP
+ [6] = &iomem_ops,
+#else
+ [7] = &iomem_ops
+#endif
+};
+
+
+unsigned int ioread8(void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr)))
+ return iomap_ops[ADDR_TO_REGION(addr)]->read8(addr);
+ return *((u8 *)addr);
+}
+
+unsigned int ioread16(void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr)))
+ return iomap_ops[ADDR_TO_REGION(addr)]->read16(addr);
+ return le16_to_cpup((u16 *)addr);
+}
+
+unsigned int ioread32(void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr)))
+ return iomap_ops[ADDR_TO_REGION(addr)]->read32(addr);
+ return le32_to_cpup((u32 *)addr);
+}
+
+void iowrite8(u8 datum, void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write8(datum, addr);
+ } else {
+ *((u8 *)addr) = datum;
+ }
+}
+
+void iowrite16(u16 datum, void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write16(datum, addr);
+ } else {
+ *((u16 *)addr) = cpu_to_le16(datum);
+ }
+}
+
+void iowrite32(u32 datum, void __iomem *addr)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write32(datum, addr);
+ } else {
+ *((u32 *)addr) = cpu_to_le32(datum);
+ }
+}
+
+/* Repeating interfaces */
+
+void ioread8_rep(void __iomem *addr, void *dst, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->read8r(addr, dst, count);
+ } else {
+ while (count--) {
+ *(u8 *)dst = *(u8 *)addr;
+ dst++;
+ }
+ }
+}
+
+void ioread16_rep(void __iomem *addr, void *dst, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->read16r(addr, dst, count);
+ } else {
+ while (count--) {
+ *(u16 *)dst = *(u16 *)addr;
+ dst += 2;
+ }
+ }
+}
+
+void ioread32_rep(void __iomem *addr, void *dst, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->read32r(addr, dst, count);
+ } else {
+ while (count--) {
+ *(u32 *)dst = *(u32 *)addr;
+ dst += 4;
+ }
+ }
+}
+
+void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write8r(addr, src, count);
+ } else {
+ while (count--) {
+ *(u8 *)addr = *(u8 *)src;
+ src++;
+ }
+ }
+}
+
+void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write16r(addr, src, count);
+ } else {
+ while (count--) {
+ *(u16 *)addr = *(u16 *)src;
+ src += 2;
+ }
+ }
+}
+
+void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count)
+{
+ if (unlikely(INDIRECT_ADDR(addr))) {
+ iomap_ops[ADDR_TO_REGION(addr)]->write32r(addr, src, count);
+ } else {
+ while (count--) {
+ *(u32 *)addr = *(u32 *)src;
+ src += 4;
+ }
+ }
+}
+
+/* Mapping interfaces */
+
+void __iomem *ioport_map(unsigned long port, unsigned int nr)
+{
+ return (void __iomem *)(IOPORT_MAP_BASE | port);
+}
+
+void ioport_unmap(void __iomem *addr)
+{
+ if (!INDIRECT_ADDR(addr)) {
+ iounmap(addr);
+ }
+}
+
+/* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
+void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
+{
+ unsigned long start = pci_resource_start(dev, bar);
+ unsigned long len = pci_resource_len(dev, bar);
+ unsigned long flags = pci_resource_flags(dev, bar);
+
+ if (!len || !start)
+ return NULL;
+ if (maxlen && len > maxlen)
+ len = maxlen;
+ if (flags & IORESOURCE_IO)
+ return ioport_map(start, len);
+ if (flags & IORESOURCE_MEM) {
+ if (flags & IORESOURCE_CACHEABLE)
+ return ioremap(start, len);
+ return ioremap_nocache(start, len);
+ }
+ /* What? */
+ return NULL;
+}
+
+void pci_iounmap(struct pci_dev *dev, void __iomem * addr)
+{
+ if (!INDIRECT_ADDR(addr)) {
+ iounmap(addr);
+ }
+}
+
+EXPORT_SYMBOL(ioread8);
+EXPORT_SYMBOL(ioread16);
+EXPORT_SYMBOL(ioread32);
+EXPORT_SYMBOL(iowrite8);
+EXPORT_SYMBOL(iowrite16);
+EXPORT_SYMBOL(iowrite32);
+EXPORT_SYMBOL(ioread8_rep);
+EXPORT_SYMBOL(ioread16_rep);
+EXPORT_SYMBOL(ioread32_rep);
+EXPORT_SYMBOL(iowrite8_rep);
+EXPORT_SYMBOL(iowrite16_rep);
+EXPORT_SYMBOL(iowrite32_rep);
+EXPORT_SYMBOL(ioport_map);
+EXPORT_SYMBOL(ioport_unmap);
+EXPORT_SYMBOL(pci_iomap);
+EXPORT_SYMBOL(pci_iounmap);
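
As a rough illustration of how a driver consumes this interface: pci_iomap() hands back a cookie (IOPORT_MAP_BASE | port for I/O space, or an ioremapped address for memory space), and the ioread*/iowrite* accessors pick the right iomap_ops entry from the high address bits. The device, BAR number, and register offsets in this sketch are hypothetical:

    struct pci_dev *pdev;           /* hypothetical device, already enabled */
    void __iomem *regs;
    u32 id;

    regs = pci_iomap(pdev, 0, 0);   /* map all of BAR 0 */
    if (!regs)
            return -ENOMEM;

    /* the same accessors work whether BAR 0 decodes as port or MMIO space */
    id = ioread32(regs + 0x00);     /* hypothetical ID register */
    iowrite32(0x1, regs + 0x04);    /* hypothetical enable register */

    pci_iounmap(pdev, regs);

Because the dispatch happens inside the accessors, such driver code runs unchanged on a BAR that is port space on one machine and MMIO on another.
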
return dst;
}
-void bcopy(const void * srcp, void * destp, size_t count)
-{
- mtsp(get_kernel_space(), 1);
- mtsp(get_kernel_space(), 2);
- pa_memcpy(destp, srcp, count);
-}
-
EXPORT_SYMBOL(copy_to_user);
EXPORT_SYMBOL(copy_from_user);
EXPORT_SYMBOL(copy_in_user);
EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(bcopy);
#endif
#include <linux/vmalloc.h>
#include <linux/errno.h>
+#include <linux/module.h>
#include <asm/io.h>
#include <asm/pgalloc.h>
}
#endif /* USE_HPPA_IOREMAP */
+#ifdef CONFIG_DEBUG_IOREMAP
+static unsigned long last = 0;
+
+void gsc_bad_addr(unsigned long addr)
+{
+ if (time_after(jiffies, last + HZ*10)) {
+ printk("gsc_foo() called with bad address 0x%lx\n", addr);
+ dump_stack();
+ last = jiffies;
+ }
+}
+EXPORT_SYMBOL(gsc_bad_addr);
+
+void __raw_bad_addr(const volatile void __iomem *addr)
+{
+ if (time_after(jiffies, last + HZ*10)) {
+ printk("__raw_foo() called with bad address 0x%p\n", addr);
+ dump_stack();
+ last = jiffies;
+ }
+}
+EXPORT_SYMBOL(__raw_bad_addr);
+#endif
+
/*
* Generic mapping function (not visible outside):
*/
* have to convert them into an offset in a page-aligned mapping, but the
* caller shouldn't need to know that small detail.
*/
-void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
+void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
{
#if !(USE_HPPA_IOREMAP)
phys_addr |= 0xfc000000;
}
- return (void *)phys_addr;
+#ifdef CONFIG_DEBUG_IOREMAP
+ return (void __iomem *)(phys_addr - (0x1UL << NYBBLE_SHIFT));
+#else
+ return (void __iomem *)phys_addr;
+#endif
#else
void * addr;
vfree(addr);
return NULL;
}
- return (void *) (offset + (char *)addr);
+ return (void __iomem *) (offset + (char *)addr);
#endif
}
-void iounmap(void *addr)
+void iounmap(void __iomem *addr)
{
#if !(USE_HPPA_IOREMAP)
return;
#else
if (addr > high_memory)
- return vfree((void *) (PAGE_MASK & (unsigned long) addr));
+ return vfree((void *) (PAGE_MASK & (unsigned long __force) addr));
#endif
}
#include <linux/kernel.h>
#include <linux/oprofile.h>
-extern void timer_init(struct oprofile_operations ** ops);
-
-int __init oprofile_arch_init(struct oprofile_operations ** ops)
+int __init oprofile_arch_init(struct oprofile_operations * ops)
{
return -ENODEV;
}
depends on NET_ETHERNET
help
Enable Ethernet support via the Motorola MPC8xx serial
- commmunications controller.
+ communications controller.
choice
prompt "SCC used for Ethernet"
flush_dcache_range((unsigned long)skb->data,
(unsigned long)skb->data + skb->len);
+ /* disable interrupts while triggering transmit */
spin_lock_irq(&fep->lock);
/* Send it on its way. Tell FEC its ready, interrupt when done,
struct sk_buff *skb;
fep = dev->priv;
+ /* lock while transmitting */
spin_lock(&fep->lock);
bdp = fep->dirty_tx;
if ((mip = mii_head) != NULL) {
ep->fec_mii_data = mip->mii_regval;
+
}
}
retval = 0;
- save_flags(flags);
- cli();
+ /* lock while modifying mii_list */
+ spin_lock_irqsave(&fep->lock, flags);
if ((mip = mii_free) != NULL) {
mii_free = mip->mii_next;
retval = 1;
}
- restore_flags(flags);
+ spin_unlock_irqrestore(&fep->lock, flags);
return(retval);
}
--- /dev/null
+/*
+ * drivers/serial/mpsc/mpsc_defs.h
+ *
+ * Register definitions for the Marvell Multi-Protocol Serial Controller (MPSC),
+ * Serial DMA Controller (SDMA), and Baud Rate Generator (BRG).
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#ifndef _PPC_BOOT_MPSC_DEFS_H__
+#define _PPC_BOOT_MPSC_DEFS_H__
+
+#define MPSC_NUM_CTLRS 2
+
+/*
+ *****************************************************************************
+ *
+ * Multi-Protocol Serial Controller Interface Registers
+ *
+ *****************************************************************************
+ */
+
+/* Main Configuration Register Offsets */
+#define MPSC_MMCRL 0x0000
+#define MPSC_MMCRH 0x0004
+#define MPSC_MPCR 0x0008
+#define MPSC_CHR_1 0x000c
+#define MPSC_CHR_2 0x0010
+#define MPSC_CHR_3 0x0014
+#define MPSC_CHR_4 0x0018
+#define MPSC_CHR_5 0x001c
+#define MPSC_CHR_6 0x0020
+#define MPSC_CHR_7 0x0024
+#define MPSC_CHR_8 0x0028
+#define MPSC_CHR_9 0x002c
+#define MPSC_CHR_10 0x0030
+#define MPSC_CHR_11 0x0034
+
+#define MPSC_MPCR_CL_5 0
+#define MPSC_MPCR_CL_6 1
+#define MPSC_MPCR_CL_7 2
+#define MPSC_MPCR_CL_8 3
+#define MPSC_MPCR_SBL_1 0
+#define MPSC_MPCR_SBL_2 3
+
+#define MPSC_CHR_2_TEV (1<<1)
+#define MPSC_CHR_2_TA (1<<7)
+#define MPSC_CHR_2_TTCS (1<<9)
+#define MPSC_CHR_2_REV (1<<17)
+#define MPSC_CHR_2_RA (1<<23)
+#define MPSC_CHR_2_CRD (1<<25)
+#define MPSC_CHR_2_EH (1<<31)
+#define MPSC_CHR_2_PAR_ODD 0
+#define MPSC_CHR_2_PAR_SPACE 1
+#define MPSC_CHR_2_PAR_EVEN 2
+#define MPSC_CHR_2_PAR_MARK 3
+
+/* MPSC Signal Routing */
+#define MPSC_MRR 0x0000
+#define MPSC_RCRR 0x0004
+#define MPSC_TCRR 0x0008
+
+/*
+ *****************************************************************************
+ *
+ * Serial DMA Controller Interface Registers
+ *
+ *****************************************************************************
+ */
+
+#define SDMA_SDC 0x0000
+#define SDMA_SDCM 0x0008
+#define SDMA_RX_DESC 0x0800
+#define SDMA_RX_BUF_PTR 0x0808
+#define SDMA_SCRDP 0x0810
+#define SDMA_TX_DESC 0x0c00
+#define SDMA_SCTDP 0x0c10
+#define SDMA_SFTDP 0x0c14
+
+#define SDMA_DESC_CMDSTAT_PE (1<<0)
+#define SDMA_DESC_CMDSTAT_CDL (1<<1)
+#define SDMA_DESC_CMDSTAT_FR (1<<3)
+#define SDMA_DESC_CMDSTAT_OR (1<<6)
+#define SDMA_DESC_CMDSTAT_BR (1<<9)
+#define SDMA_DESC_CMDSTAT_MI (1<<10)
+#define SDMA_DESC_CMDSTAT_A (1<<11)
+#define SDMA_DESC_CMDSTAT_AM (1<<12)
+#define SDMA_DESC_CMDSTAT_CT (1<<13)
+#define SDMA_DESC_CMDSTAT_C (1<<14)
+#define SDMA_DESC_CMDSTAT_ES (1<<15)
+#define SDMA_DESC_CMDSTAT_L (1<<16)
+#define SDMA_DESC_CMDSTAT_F (1<<17)
+#define SDMA_DESC_CMDSTAT_P (1<<18)
+#define SDMA_DESC_CMDSTAT_EI (1<<23)
+#define SDMA_DESC_CMDSTAT_O (1<<31)
+
+#define SDMA_DESC_DFLT (SDMA_DESC_CMDSTAT_O | \
+ SDMA_DESC_CMDSTAT_EI)
+
+#define SDMA_SDC_RFT (1<<0)
+#define SDMA_SDC_SFM (1<<1)
+#define SDMA_SDC_BLMR (1<<6)
+#define SDMA_SDC_BLMT (1<<7)
+#define SDMA_SDC_POVR (1<<8)
+#define SDMA_SDC_RIFB (1<<9)
+
+#define SDMA_SDCM_ERD (1<<7)
+#define SDMA_SDCM_AR (1<<15)
+#define SDMA_SDCM_STD (1<<16)
+#define SDMA_SDCM_TXD (1<<23)
+#define SDMA_SDCM_AT (1<<31)
+
+#define SDMA_0_CAUSE_RXBUF (1<<0)
+#define SDMA_0_CAUSE_RXERR (1<<1)
+#define SDMA_0_CAUSE_TXBUF (1<<2)
+#define SDMA_0_CAUSE_TXEND (1<<3)
+#define SDMA_1_CAUSE_RXBUF (1<<8)
+#define SDMA_1_CAUSE_RXERR (1<<9)
+#define SDMA_1_CAUSE_TXBUF (1<<10)
+#define SDMA_1_CAUSE_TXEND (1<<11)
+
+#define SDMA_CAUSE_RX_MASK (SDMA_0_CAUSE_RXBUF | SDMA_0_CAUSE_RXERR | \
+ SDMA_1_CAUSE_RXBUF | SDMA_1_CAUSE_RXERR)
+#define SDMA_CAUSE_TX_MASK (SDMA_0_CAUSE_TXBUF | SDMA_0_CAUSE_TXEND | \
+ SDMA_1_CAUSE_TXBUF | SDMA_1_CAUSE_TXEND)
+
+/* SDMA Interrupt registers */
+#define SDMA_INTR_CAUSE 0x0000
+#define SDMA_INTR_MASK 0x0080
+
+/*
+ *****************************************************************************
+ *
+ * Baud Rate Generator Interface Registers
+ *
+ *****************************************************************************
+ */
+
+#define BRG_BCR 0x0000
+#define BRG_BTR 0x0004
+
+#endif /*_PPC_BOOT_MPSC_DEFS_H__ */
--- /dev/null
+/*
+ * arch/ppc/boot/simple/misc-chestnut.S
+ *
+ * Setup for the IBM Chestnut (ibm-750fxgx_eval)
+ *
+ * Author: <source@mvista.com>
+ *
+ * <2004> (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+
+#include <asm/ppc_asm.h>
+#include <asm/mv64x60_defs.h>
+#include <platforms/chestnut.h>
+
+ .globl mv64x60_board_init
+mv64x60_board_init:
+ /*
+ * move UART to 0xffc00000
+ */
+
+ li r23,16
+
+ addis r25,0,CONFIG_MV64X60_BASE@h
+ ori r25,r25,MV64x60_CPU2DEV_2_BASE
+ addis r26,0,CHESTNUT_UART_BASE@h
+ srw r26,r26,r23
+ stwbrx r26,0,(r25)
+ sync
+
+ addis r25,0,CONFIG_MV64X60_BASE@h
+ ori r25,r25,MV64x60_CPU2DEV_2_SIZE
+ addis r26,0,0x00100000@h
+ srw r26,r26,r23
+ stwbrx r26,0,(r25)
+ sync
+
+ blr
--- /dev/null
+/*
+ * arch/ppc/boot/simple/misc-cpci690.c
+ *
+ * Add birec data for Force CPCI690 board.
+ *
+ * Author: Mark A. Greer <source@mvista.com>
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/types.h>
+long mv64x60_mpsc_clk_freq = 133000000;
--- /dev/null
+/*
+ * arch/ppc/boot/simple/misc-katana.c
+ *
+ * Add birec data for Artesyn KATANA board.
+ *
+ * Author: Mark A. Greer <source@mvista.com>
+ *
+ * 2004 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/types.h>
+long mv64x60_mpsc_clk_freq = 133333333;
/*
- * 2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * 2004-2005 (c) MontaVista, Software, Inc. This file is licensed under
* the terms of the GNU General Public License version 2. This program
* is licensed "as is" without any warranty of any kind, whether express
* or implied.
#include <linux/string.h>
#include <linux/ctype.h>
#include <asm/ppcboot.h>
-#include <platforms/4xx/ocotea.h>
+#include <asm/ibm4xx.h>
extern unsigned long decompress_kernel(unsigned long load_addr, int num_words,
unsigned long cksum);
decompress_kernel(load_addr, num_words, cksum);
- mac64 = simple_strtoull((char *)OCOTEA_PIBS_MAC_BASE, 0, 16);
+ mac64 = simple_strtoull((char *)PIBS_MAC_BASE, 0, 16);
memcpy(hold_residual->bi_enetaddr, (char *)&mac64+2, 6);
- mac64 = simple_strtoull((char *)(OCOTEA_PIBS_MAC_BASE+OCOTEA_PIBS_MAC_OFFSET), 0, 16);
+#ifdef CONFIG_440GX
+ mac64 = simple_strtoull((char *)(PIBS_MAC_BASE+PIBS_MAC_OFFSET), 0, 16);
memcpy(hold_residual->bi_enet1addr, (char *)&mac64+2, 6);
- mac64 = simple_strtoull((char *)(OCOTEA_PIBS_MAC_BASE+OCOTEA_PIBS_MAC_OFFSET*2), 0, 16);
+ mac64 = simple_strtoull((char *)(PIBS_MAC_BASE+PIBS_MAC_OFFSET*2), 0, 16);
memcpy(hold_residual->bi_enet2addr, (char *)&mac64+2, 6);
- mac64 = simple_strtoull((char *)(OCOTEA_PIBS_MAC_BASE+OCOTEA_PIBS_MAC_OFFSET*3), 0, 16);
+ mac64 = simple_strtoull((char *)(PIBS_MAC_BASE+PIBS_MAC_OFFSET*3), 0, 16);
memcpy(hold_residual->bi_enet3addr, (char *)&mac64+2, 6);
+#endif
return (void *)hold_residual;
}
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_REDIRECT=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_REDIRECT=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
# CONFIG_IP_NF_TARGET_NETMAP is not set
# CONFIG_IP_NF_TARGET_SAME is not set
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_FTP=m
# CONFIG_IP_NF_MANGLE is not set
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_REDIRECT=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
# CONFIG_IP_NF_TARGET_NETMAP is not set
# CONFIG_IP_NF_TARGET_SAME is not set
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
# CONFIG_IP_NF_TARGET_NETMAP is not set
# CONFIG_IP_NF_TARGET_SAME is not set
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_NAT_FTP=m
# CONFIG_IP_NF_MANGLE is not set
cmplwi cr2,r3,0x800c /* 7410 */
cmplwi cr3,r3,0x8001 /* 7455 */
cmplwi cr4,r3,0x8002 /* 7457 */
- cmplwi cr5,r3,0x7000 /* 750FX */
+ cmplwi cr5,r3,0x8003 /* 7447A */
+ cmplwi cr6,r3,0x7000 /* 750FX */
/* cr1 is 7400 || 7410 */
cror 4*cr1+eq,4*cr1+eq,4*cr2+eq
/* cr0 is 74xx */
cror 4*cr0+eq,4*cr0+eq,4*cr3+eq
cror 4*cr0+eq,4*cr0+eq,4*cr4+eq
cror 4*cr0+eq,4*cr0+eq,4*cr1+eq
+ cror 4*cr0+eq,4*cr0+eq,4*cr5+eq
bne 1f
/* Backup 74xx specific regs */
mfspr r4,SPRN_MSSCR0
mfspr r4,SPRN_LDSTDB
stw r4,CS_LDSTDB(r5)
1:
- bne cr5,1f
+ bne cr6,1f
/* Backup 750FX specific registers */
mfspr r4,SPRN_HID1
stw r4,CS_HID1(r5)
cmplwi cr2,r3,0x800c /* 7410 */
cmplwi cr3,r3,0x8001 /* 7455 */
cmplwi cr4,r3,0x8002 /* 7457 */
- cmplwi cr5,r3,0x7000 /* 750FX */
+ cmplwi cr5,r3,0x8003 /* 7447A */
+ cmplwi cr6,r3,0x7000 /* 750FX */
/* cr1 is 7400 || 7410 */
cror 4*cr1+eq,4*cr1+eq,4*cr2+eq
/* cr0 is 74xx */
cror 4*cr0+eq,4*cr0+eq,4*cr3+eq
cror 4*cr0+eq,4*cr0+eq,4*cr4+eq
cror 4*cr0+eq,4*cr0+eq,4*cr1+eq
+ cror 4*cr0+eq,4*cr0+eq,4*cr5+eq
bne 2f
/* Restore 74xx specific regs */
lwz r4,CS_MSSCR0(r5)
mtspr SPRN_LDSTDB,r4
isync
sync
-2: bne cr5,1f
+2: bne cr6,1f
/* Restore 750FX specific registers
* that is restore HID2 on rev 2.x and PLL config & switch
* to PLL 0 on all
* This is the page table (2MB) covering uncached, DMA consistent allocations
*/
static pte_t *consistent_pte;
-static spinlock_t consistent_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(consistent_lock);
/*
* VM region handling support.
* set. All other Linux PTE bits control the behavior
* of the MMU.
*/
- li r11, 0x00f0
+2: li r11, 0x00f0
rlwimi r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
DO_8xx_CPU6(0x2d80, r3)
mtspr MI_RPN, r10 /* Update TLB entry */
#endif
rfi
-2: mfspr r10, M_TW /* Restore registers */
- lwz r11, 0(r0)
- mtcr r11
- lwz r11, 4(r0)
-#ifdef CONFIG_8xx_CPU6
- lwz r3, 8(r0)
-#endif
- b InstructionAccess
-
. = 0x1200
DataStoreTLBMiss:
#ifdef CONFIG_8xx_CPU6
* set. All other Linux PTE bits control the behavior
* of the MMU.
*/
- li r11, 0x00f0
+2: li r11, 0x00f0
rlwimi r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
DO_8xx_CPU6(0x3d80, r3)
mtspr MD_RPN, r10 /* Update TLB entry */
#endif
rfi
-2: mfspr r10, M_TW /* Restore registers */
- lwz r11, 0(r0)
- mtcr r11
- lwz r11, 4(r0)
-#ifdef CONFIG_8xx_CPU6
- lwz r3, 8(r0)
-#endif
- b DataAccess
-
/* This is an instruction TLB error on the MPC8xx. This could be due
* to many reasons, such as executing guarded memory or illegal instruction
* addresses. There is nothing to do but handle a big time error fault.
addi r3,r1,STACK_FRAME_OVERHEAD; \
EXC_XFER_TEMPLATE(DebugException, 0x2002, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), NOCOPY, crit_transfer_to_handler, ret_from_crit_exc)
+#define INSTRUCTION_STORAGE_EXCEPTION \
+ START_EXCEPTION(InstructionStorage) \
+ NORMAL_EXCEPTION_PROLOG; \
+ mfspr r5,SPRN_ESR; /* Grab the ESR and save it */ \
+ stw r5,_ESR(r11); \
+ mr r4,r12; /* Pass SRR0 as arg2 */ \
+ li r5,0; /* Pass zero as arg3 */ \
+ EXC_XFER_EE_LITE(0x0400, handle_page_fault)
+
+#define ALIGNMENT_EXCEPTION \
+ START_EXCEPTION(Alignment) \
+ NORMAL_EXCEPTION_PROLOG; \
+ mfspr r4,SPRN_DEAR; /* Grab the DEAR and save it */ \
+ stw r4,_DEAR(r11); \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ EXC_XFER_EE(0x0600, AlignmentException)
+
+#define PROGRAM_EXCEPTION \
+ START_EXCEPTION(Program) \
+ NORMAL_EXCEPTION_PROLOG; \
+ mfspr r4,SPRN_ESR; /* Grab the ESR and save it */ \
+ stw r4,_ESR(r11); \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ EXC_XFER_STD(0x0700, ProgramCheckException)
+
+#define DECREMENTER_EXCEPTION \
+ START_EXCEPTION(Decrementer) \
+ NORMAL_EXCEPTION_PROLOG; \
+ lis r0,TSR_DIS@h; /* Setup the DEC interrupt mask */ \
+ mtspr SPRN_TSR,r0; /* Clear the DEC interrupt */ \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ EXC_XFER_LITE(0x0900, timer_interrupt)
+
#endif /* __HEAD_BOOKE_H__ */
tlbsx 0,r6 /* Fall through, we had to match */
match_TLB:
mfspr r7,SPRN_MAS0
- rlwinm r3,r7,16,28,31 /* Extract MAS0(Entry) */
+ rlwinm r3,r7,16,20,31 /* Extract MAS0(Entry) */
mfspr r7,SPRN_MAS1 /* Insure IPROT set */
oris r7,r7,MAS1_IPROT@h
andi. r9,r9,0xfff
li r6,0 /* Set Entry counter to 0 */
1: lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
- rlwimi r7,r6,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
+ rlwimi r7,r6,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
mtspr SPRN_MAS0,r7
tlbre
mfspr r7,SPRN_MAS1
andi. r5, r3, 0x1 /* Find an entry not used and is non-zero */
addi r5, r5, 0x1
lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
- rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ rlwimi r7,r3,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
mtspr SPRN_MAS0,r7
tlbre
/* Just modify the entry ID and EPN for the temp mapping */
lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
- rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ rlwimi r7,r5,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
mtspr SPRN_MAS0,r7
xori r6,r4,1 /* Setup TMP mapping in the other Address space */
slwi r6,r6,12
/* 5. Invalidate mapping we started in */
lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
- rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ rlwimi r7,r3,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
mtspr SPRN_MAS0,r7
tlbre
li r6,0
/* 8. Clear out the temp mapping */
lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
- rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ rlwimi r7,r5,16,4,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
mtspr SPRN_MAS0,r7
tlbre
mtspr SPRN_MAS1,r8
mtspr SPRN_IVPR,r4
/* Setup the defaults for TLB entries */
- li r2,MAS4_TSIZED(BOOKE_PAGESZ_4K)
+ li r2,(MAS4_TSIZED(BOOKE_PAGESZ_4K))@l
mtspr SPRN_MAS4, r2
#if 0
b data_access
/* Instruction Storage Interrupt */
- START_EXCEPTION(InstructionStorage)
- NORMAL_EXCEPTION_PROLOG
- mfspr r5,SPRN_ESR /* Grab the ESR and save it */
- stw r5,_ESR(r11)
- mr r4,r12 /* Pass SRR0 as arg2 */
- li r5,0 /* Pass zero as arg3 */
- EXC_XFER_EE_LITE(0x0400, handle_page_fault)
+ INSTRUCTION_STORAGE_EXCEPTION
/* External Input Interrupt */
EXCEPTION(0x0500, ExternalInput, do_IRQ, EXC_XFER_LITE)
/* Alignment Interrupt */
- START_EXCEPTION(Alignment)
- NORMAL_EXCEPTION_PROLOG
- mfspr r4,SPRN_DEAR /* Grab the DEAR and save it */
- stw r4,_DEAR(r11)
- addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_EE(0x0600, AlignmentException)
+ ALIGNMENT_EXCEPTION
/* Program Interrupt */
- START_EXCEPTION(Program)
- NORMAL_EXCEPTION_PROLOG
- mfspr r4,SPRN_ESR /* Grab the ESR and save it */
- stw r4,_ESR(r11)
- addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_STD(0x0700, ProgramCheckException)
+ PROGRAM_EXCEPTION
/* Floating Point Unavailable Interrupt */
EXCEPTION(0x0800, FloatingPointUnavailable, UnknownException, EXC_XFER_EE)
EXCEPTION(0x2900, AuxillaryProcessorUnavailable, UnknownException, EXC_XFER_EE)
/* Decrementer Interrupt */
- START_EXCEPTION(Decrementer)
- NORMAL_EXCEPTION_PROLOG
- lis r0,TSR_DIS@h /* Setup the DEC interrupt mask */
- mtspr SPRN_TSR,r0 /* Clear the DEC interrupt */
- addi r3,r1,STACK_FRAME_OVERHEAD
- EXC_XFER_LITE(0x0900, timer_interrupt)
+ DECREMENTER_EXCEPTION
/* Fixed Internal Timer Interrupt */
/* TODO: Add FIT support */
ori r11, r11, swapper_pg_dir@l
mfspr r12,SPRN_MAS1 /* Set TID to 0 */
- li r13,MAS1_TID@l
- andc r12,r12,r13
+ rlwinm r12,r12,0,16,1
mtspr SPRN_MAS1,r12
b 4f
ori r11, r11, swapper_pg_dir@l
mfspr r12,SPRN_MAS1 /* Set TID to 0 */
- li r13,MAS1_TID@l
- andc r12,r12,r13
+ rlwinm r12,r12,0,16,1
mtspr SPRN_MAS1,r12
b 4f
EXCEPTION(0x2050, SPEFloatingPointRound, UnknownException, EXC_XFER_EE)
/* Performance Monitor */
- EXCEPTION(0x2060, PerformanceMonitor, UnknownException, EXC_XFER_EE)
+ EXCEPTION(0x2060, PerformanceMonitor, PerformanceMonitorException, EXC_XFER_STD)
+
/* Debug Interrupt */
DEBUG_EXCEPTION
/*
* The body of the idle task.
*/
-int cpu_idle(void)
+void cpu_idle(void)
{
for (;;)
if (ppc_md.idle != NULL)
ppc_md.idle();
else
default_idle();
- return 0;
}
#if defined(CONFIG_SYSCTL) && defined(CONFIG_6xx)
#include <asm/cputable.h>
#include <asm/ppc_asm.h>
#include <asm/cache.h>
+#include <asm/page.h>
/* Usage:
/* Tweak some bits */
rlwinm r5,r3,0,0,0 /* r5 contains the new enable bit */
rlwinm r3,r3,0,22,20 /* Turn off the invalidate bit */
- rlwinm r3,r3,0,1,31 /* Turn off the enable bit */
+ rlwinm r3,r3,0,2,31 /* Turn off the enable & PE bits */
rlwinm r3,r3,0,5,3 /* Turn off the clken bit */
/* Check to see if we need to flush */
rlwinm. r4,r4,0,0,0
/* flush_disable_L1() - Flush and disable L1 cache
*
* clobbers r0, r3, ctr, cr0
- *
+ * Must be called with interrupts disabled and MMU enabled.
*/
_GLOBAL(__flush_disable_L1)
/* Stop pending alitvec streams and memory accesses */
*/
li r3,0x4000 /* 512kB / 32B */
mtctr r3
- li r3, 0
+ lis r3,KERNELBASE@h
1:
lwz r0,0(r3)
addi r3,r3,0x0020 /* Go to start of next cache line */
/* Now flush those cache lines */
li r3,0x4000 /* 512kB / 32B */
mtctr r3
- li r3, 0
+ lis r3,KERNELBASE@h
1:
dcbf 0,r3
addi r3,r3,0x0020 /* Go to start of next cache line */
if (flags & IORESOURCE_IO)
return ioport_map(start, len);
if (flags & IORESOURCE_MEM)
- return (void __iomem *) start;
+ /* Not checking IORESOURCE_CACHEABLE because PPC does
+ * not currently distinguish between ioremap and
+ * ioremap_nocache.
+ */
+ return ioremap(start, len);
/* What? */
return NULL;
}
--- /dev/null
+/* kernel/perfmon.c
+ * PPC 32 Performance Monitor Infrastructure
+ *
+ * Author: Andy Fleming
+ * Copyright (c) 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/interrupt.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/prctl.h>
+
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/reg.h>
+#include <asm/xmon.h>
+
+/* A lock to regulate grabbing the interrupt */
+DEFINE_SPINLOCK(perfmon_lock);
+
+#ifdef CONFIG_FSL_BOOKE
+static void dummy_perf(struct pt_regs *regs)
+{
+ unsigned int pmgc0 = mfpmr(PMRN_PMGC0);
+
+ pmgc0 &= ~PMGC0_PMIE;
+ mtpmr(PMRN_PMGC0, pmgc0);
+}
+
+#else
+/* Ensure exceptions are disabled */
+
+static void dummy_perf(struct pt_regs *regs)
+{
+ unsigned int mmcr0 = mfspr(SPRN_MMCR0);
+
+ mmcr0 &= ~MMCR0_PMXE;
+ mtspr(SPRN_MMCR0, mmcr0);
+}
+#endif
+
+void (*perf_irq)(struct pt_regs *) = dummy_perf;
+
+/* Grab the interrupt, if it's free.
+ * Returns 0 on success, -1 if the interrupt is taken already */
+int request_perfmon_irq(void (*handler)(struct pt_regs *))
+{
+ int err = 0;
+
+ spin_lock(&perfmon_lock);
+
+ if (perf_irq == dummy_perf)
+ perf_irq = handler;
+ else {
+ pr_info("perfmon irq already handled by %p\n", perf_irq);
+ err = -1;
+ }
+
+ spin_unlock(&perfmon_lock);
+
+ return err;
+}
+
+void free_perfmon_irq(void)
+{
+ spin_lock(&perfmon_lock);
+
+ perf_irq = dummy_perf;
+
+ spin_unlock(&perfmon_lock);
+}
+
+EXPORT_SYMBOL(perf_irq);
+EXPORT_SYMBOL(request_perfmon_irq);
+EXPORT_SYMBOL(free_perfmon_irq);
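
request_perfmon_irq()/free_perfmon_irq() hand the single PMC interrupt hook to one client at a time by swapping perf_irq; the oprofile code later in this series uses it exactly this way. A minimal sketch with a hypothetical handler:

    static void my_pmc_handler(struct pt_regs *regs)
    {
            /* read/reload the counters, feed samples to a consumer, ... */
    }

    if (request_perfmon_irq(&my_pmc_handler))
            return -EBUSY;          /* someone else already owns the hook */
    /* ... profile ... */
    free_perfmon_irq();             /* restores dummy_perf on teardown */
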
--- /dev/null
+/* kernel/perfmon_fsl_booke.c
+ * Freescale Book-E Performance Monitor code
+ *
+ * Author: Andy Fleming
+ * Copyright (c) 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/interrupt.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/prctl.h>
+
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/reg.h>
+#include <asm/xmon.h>
+#include <asm/perfmon.h>
+
+static inline u32 get_pmlca(int ctr);
+static inline void set_pmlca(int ctr, u32 pmlca);
+
+static inline u32 get_pmlca(int ctr)
+{
+ u32 pmlca;
+
+ switch (ctr) {
+ case 0:
+ pmlca = mfpmr(PMRN_PMLCA0);
+ break;
+ case 1:
+ pmlca = mfpmr(PMRN_PMLCA1);
+ break;
+ case 2:
+ pmlca = mfpmr(PMRN_PMLCA2);
+ break;
+ case 3:
+ pmlca = mfpmr(PMRN_PMLCA3);
+ break;
+ default:
+ panic("Bad ctr number\n");
+ }
+
+ return pmlca;
+}
+
+static inline void set_pmlca(int ctr, u32 pmlca)
+{
+ switch (ctr) {
+ case 0:
+ mtpmr(PMRN_PMLCA0, pmlca);
+ break;
+ case 1:
+ mtpmr(PMRN_PMLCA1, pmlca);
+ break;
+ case 2:
+ mtpmr(PMRN_PMLCA2, pmlca);
+ break;
+ case 3:
+ mtpmr(PMRN_PMLCA3, pmlca);
+ break;
+ default:
+ panic("Bad ctr number\n");
+ }
+}
+
+void init_pmc_stop(int ctr)
+{
+ u32 pmlca = (PMLCA_FC | PMLCA_FCS | PMLCA_FCU |
+ PMLCA_FCM1 | PMLCA_FCM0);
+ u32 pmlcb = 0;
+
+ switch (ctr) {
+ case 0:
+ mtpmr(PMRN_PMLCA0, pmlca);
+ mtpmr(PMRN_PMLCB0, pmlcb);
+ break;
+ case 1:
+ mtpmr(PMRN_PMLCA1, pmlca);
+ mtpmr(PMRN_PMLCB1, pmlcb);
+ break;
+ case 2:
+ mtpmr(PMRN_PMLCA2, pmlca);
+ mtpmr(PMRN_PMLCB2, pmlcb);
+ break;
+ case 3:
+ mtpmr(PMRN_PMLCA3, pmlca);
+ mtpmr(PMRN_PMLCB3, pmlcb);
+ break;
+ default:
+ panic("Bad ctr number!\n");
+ }
+}
+
+void set_pmc_event(int ctr, int event)
+{
+ u32 pmlca;
+
+ pmlca = get_pmlca(ctr);
+
+ pmlca = (pmlca & ~PMLCA_EVENT_MASK) |
+ ((event << PMLCA_EVENT_SHIFT) &
+ PMLCA_EVENT_MASK);
+
+ set_pmlca(ctr, pmlca);
+}
+
+void set_pmc_user_kernel(int ctr, int user, int kernel)
+{
+ u32 pmlca;
+
+ pmlca = get_pmlca(ctr);
+
+ if(user)
+ pmlca &= ~PMLCA_FCU;
+ else
+ pmlca |= PMLCA_FCU;
+
+ if(kernel)
+ pmlca &= ~PMLCA_FCS;
+ else
+ pmlca |= PMLCA_FCS;
+
+ set_pmlca(ctr, pmlca);
+}
+
+void set_pmc_marked(int ctr, int mark0, int mark1)
+{
+ u32 pmlca = get_pmlca(ctr);
+
+ if(mark0)
+ pmlca &= ~PMLCA_FCM0;
+ else
+ pmlca |= PMLCA_FCM0;
+
+ if(mark1)
+ pmlca &= ~PMLCA_FCM1;
+ else
+ pmlca |= PMLCA_FCM1;
+
+ set_pmlca(ctr, pmlca);
+}
+
+void pmc_start_ctr(int ctr, int enable)
+{
+ u32 pmlca = get_pmlca(ctr);
+
+ pmlca &= ~PMLCA_FC;
+
+ if (enable)
+ pmlca |= PMLCA_CE;
+ else
+ pmlca &= ~PMLCA_CE;
+
+ set_pmlca(ctr, pmlca);
+}
+
+void pmc_start_ctrs(int enable)
+{
+ u32 pmgc0 = mfpmr(PMRN_PMGC0);
+
+ pmgc0 &= ~PMGC0_FAC;
+ pmgc0 |= PMGC0_FCECE;
+
+ if (enable)
+ pmgc0 |= PMGC0_PMIE;
+ else
+ pmgc0 &= ~PMGC0_PMIE;
+
+ mtpmr(PMRN_PMGC0, pmgc0);
+}
+
+void pmc_stop_ctrs(void)
+{
+ u32 pmgc0 = mfpmr(PMRN_PMGC0);
+
+ pmgc0 |= PMGC0_FAC;
+
+ pmgc0 &= ~(PMGC0_PMIE | PMGC0_FCECE);
+
+ mtpmr(PMRN_PMGC0, pmgc0);
+}
+
+void dump_pmcs(void)
+{
+ printk("pmgc0: %x\n", mfpmr(PMRN_PMGC0));
+ printk("pmc\t\tpmlca\t\tpmlcb\n");
+ printk("%8x\t%8x\t%8x\n", mfpmr(PMRN_PMC0),
+ mfpmr(PMRN_PMLCA0), mfpmr(PMRN_PMLCB0));
+ printk("%8x\t%8x\t%8x\n", mfpmr(PMRN_PMC1),
+ mfpmr(PMRN_PMLCA1), mfpmr(PMRN_PMLCB1));
+ printk("%8x\t%8x\t%8x\n", mfpmr(PMRN_PMC2),
+ mfpmr(PMRN_PMLCA2), mfpmr(PMRN_PMLCB2));
+ printk("%8x\t%8x\t%8x\n", mfpmr(PMRN_PMC3),
+ mfpmr(PMRN_PMLCA3), mfpmr(PMRN_PMLCB3));
+}
+
+EXPORT_SYMBOL(init_pmc_stop);
+EXPORT_SYMBOL(set_pmc_event);
+EXPORT_SYMBOL(set_pmc_user_kernel);
+EXPORT_SYMBOL(set_pmc_marked);
+EXPORT_SYMBOL(pmc_start_ctr);
+EXPORT_SYMBOL(pmc_start_ctrs);
+EXPORT_SYMBOL(pmc_stop_ctrs);
+EXPORT_SYMBOL(dump_pmcs);
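
Taken together these helpers give the usual freeze/program/unfreeze sequence. A sketch of counting one event on PMC0 without interrupts (the event number is only a placeholder; real codes come from the e500 core manual, and mtpmr() comes from the e500 register headers):

    pmc_stop_ctrs();                /* freeze all counters first */
    init_pmc_stop(0);               /* PMC0: fully frozen, no event yet */
    set_pmc_event(0, 1);            /* placeholder event number */
    set_pmc_user_kernel(0, 1, 1);   /* count in both user and kernel mode */
    set_pmc_marked(0, 1, 1);        /* count regardless of MSR[PMM] */
    mtpmr(PMRN_PMC0, 0);            /* start the count from zero */
    pmc_start_ctr(0, 0);            /* unfreeze PMC0, no overflow interrupt */
    pmc_start_ctrs(0);              /* global unfreeze, PMIE left off */
    /* ... run the workload ... */
    dump_pmcs();                    /* print PMC0 and its control registers */
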
}
EXPORT_SYMBOL(_raw_spin_unlock);
-
/*
- * Just like x86, implement read-write locks as a 32-bit counter
- * with the high bit (sign) being the "write" bit.
- * -- Cort
+ * For rwlocks, zero is unlocked, -1 is write-locked,
+ * positive is read-locked.
*/
-void _raw_read_lock(rwlock_t *rw)
+static __inline__ int __read_trylock(rwlock_t *rw)
{
- unsigned long stuck = INIT_STUCK;
- int cpu = smp_processor_id();
+ signed int tmp;
+
+ __asm__ __volatile__(
+"2: lwarx %0,0,%1 # __read_trylock\n\
+ addic. %0,%0,1\n\
+ ble- 1f\n"
+ PPC405_ERR77(0,%1)
+" stwcx. %0,0,%1\n\
+ bne- 2b\n\
+ isync\n\
+1:"
+ : "=&r"(tmp)
+ : "r"(&rw->lock)
+ : "cr0", "memory");
-again:
- /* get our read lock in there */
- atomic_inc((atomic_t *) &(rw)->lock);
- if ( (signed long)((rw)->lock) < 0) /* someone has a write lock */
- {
- /* turn off our read lock */
- atomic_dec((atomic_t *) &(rw)->lock);
- /* wait for the write lock to go away */
- while ((signed long)((rw)->lock) < 0)
- {
- if(!--stuck)
- {
- printk("_read_lock(%p) CPU#%d\n", rw, cpu);
+ return tmp;
+}
+
+int _raw_read_trylock(rwlock_t *rw)
+{
+ return __read_trylock(rw) > 0;
+}
+EXPORT_SYMBOL(_raw_read_trylock);
+
+void _raw_read_lock(rwlock_t *rw)
+{
+ unsigned int stuck;
+
+ while (__read_trylock(rw) <= 0) {
+ stuck = INIT_STUCK;
+ while (!read_can_lock(rw)) {
+ if (--stuck == 0) {
+ printk("_read_lock(%p) CPU#%d lock %d\n",
+ rw, _smp_processor_id(), rw->lock);
stuck = INIT_STUCK;
}
}
- /* try to get the read lock again */
- goto again;
}
- wmb();
}
EXPORT_SYMBOL(_raw_read_lock);
void _raw_read_unlock(rwlock_t *rw)
{
if ( rw->lock == 0 )
- printk("_read_unlock(): %s/%d (nip %08lX) lock %lx\n",
+ printk("_read_unlock(): %s/%d (nip %08lX) lock %d\n",
current->comm,current->pid,current->thread.regs->nip,
rw->lock);
wmb();
void _raw_write_lock(rwlock_t *rw)
{
- unsigned long stuck = INIT_STUCK;
- int cpu = smp_processor_id();
-
-again:
- if ( test_and_set_bit(31,&(rw)->lock) ) /* someone has a write lock */
- {
- while ( (rw)->lock & (1<<31) ) /* wait for write lock */
- {
- if(!--stuck)
- {
- printk("write_lock(%p) CPU#%d lock %lx)\n",
- rw, cpu,rw->lock);
- stuck = INIT_STUCK;
- }
- barrier();
- }
- goto again;
- }
-
- if ( (rw)->lock & ~(1<<31)) /* someone has a read lock */
- {
- /* clear our write lock and wait for reads to go away */
- clear_bit(31,&(rw)->lock);
- while ( (rw)->lock & ~(1<<31) )
- {
- if(!--stuck)
- {
- printk("write_lock(%p) 2 CPU#%d lock %lx)\n",
- rw, cpu,rw->lock);
+ unsigned int stuck;
+
+ while (cmpxchg(&rw->lock, 0, -1) != 0) {
+ stuck = INIT_STUCK;
+ while (!write_can_lock(rw)) {
+ if (--stuck == 0) {
+ printk("write_lock(%p) CPU#%d lock %d)\n",
+ rw, _smp_processor_id(), rw->lock);
stuck = INIT_STUCK;
}
- barrier();
}
- goto again;
}
wmb();
}
int _raw_write_trylock(rwlock_t *rw)
{
- if (test_and_set_bit(31, &(rw)->lock)) /* someone has a write lock */
+ if (cmpxchg(&rw->lock, 0, -1) != 0)
return 0;
-
- if ((rw)->lock & ~(1<<31)) { /* someone has a read lock */
- /* clear our write lock and wait for reads to go away */
- clear_bit(31,&(rw)->lock);
- return 0;
- }
wmb();
return 1;
}
void _raw_write_unlock(rwlock_t *rw)
{
- if ( !(rw->lock & (1<<31)) )
- printk("_write_lock(): %s/%d (nip %08lX) lock %lx\n",
+ if (rw->lock >= 0)
+ printk("_write_lock(): %s/%d (nip %08lX) lock %d\n",
current->comm,current->pid,current->thread.regs->nip,
rw->lock);
wmb();
- clear_bit(31,&(rw)->lock);
+ rw->lock = 0;
}
EXPORT_SYMBOL(_raw_write_unlock);
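
For reference, the lock-word convention the routines above rely on, written out as illustrative helpers rather than the real read_can_lock()/write_can_lock() macros:

    /* rw->lock ==  0   : unlocked
     * rw->lock ==  N>0 : N readers hold the lock
     * rw->lock == -1   : one writer holds the lock
     */
    static inline int can_read(rwlock_t *rw)  { return rw->lock >= 0; }
    static inline int can_write(rwlock_t *rw) { return rw->lock == 0; }
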
flags |= _PAGE_COHERENT;
#endif
- TLBCAM[index].MAS0 = MAS0_TLBSEL | (index << 16);
- TLBCAM[index].MAS1 = MAS1_VALID | MAS1_IPROT | MAS1_TSIZE(tsize) | ((pid << 16) & MAS1_TID);
+ TLBCAM[index].MAS0 = MAS0_TLBSEL(1) | MAS0_ESEL(index);
+ TLBCAM[index].MAS1 = MAS1_VALID | MAS1_IPROT | MAS1_TSIZE(tsize) | MAS1_TID(pid);
TLBCAM[index].MAS2 = virt & PAGE_MASK;
TLBCAM[index].MAS2 |= (flags & _PAGE_WRITETHRU) ? MAS2_W : 0;
void invalidate_tlbcam_entry(int index)
{
- TLBCAM[index].MAS0 = MAS0_TLBSEL | (index << 16);
+ TLBCAM[index].MAS0 = MAS0_TLBSEL(1) | MAS0_ESEL(index);
TLBCAM[index].MAS1 = ~MAS1_VALID;
loadcam_entry(index);
ptepage = virt_to_page(ptep);
mm = (struct mm_struct *) ptepage->mapping;
ptephys = __pa(ptep) & PAGE_MASK;
- addr = ptepage->index + (((unsigned long)ptep & ~PAGE_MASK) << 9);
+ addr = ptepage->index + (((unsigned long)ptep & ~PAGE_MASK) << 10);
flush_hash_pages(mm->context, addr, ptephys, 1);
}
oprofilefs.o oprofile_stats.o \
timer_int.o )
-oprofile-y := $(DRIVER_OBJS) init.o
+oprofile-y := $(DRIVER_OBJS) common.o
+
+ifeq ($(CONFIG_FSL_BOOKE),y)
+ oprofile-y += op_model_fsl_booke.o
+endif
+
--- /dev/null
+/*
+ * PPC 32 oprofile support
+ * Based on PPC64 oprofile support
+ * Copyright (C) 2004 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * Copyright (C) Freescale Semiconductor, Inc 2004
+ *
+ * Author: Andy Fleming
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/oprofile.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/errno.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+#include <asm/perfmon.h>
+#include <asm/cputable.h>
+
+#include "op_impl.h"
+
+static struct op_ppc32_model *model;
+
+static struct op_counter_config ctr[OP_MAX_COUNTER];
+static struct op_system_config sys;
+
+static void op_handle_interrupt(struct pt_regs *regs)
+{
+ model->handle_interrupt(regs, ctr);
+}
+
+static int op_ppc32_setup(void)
+{
+ /* Install our interrupt handler into the existing hook. */
+ if(request_perfmon_irq(&op_handle_interrupt))
+ return -EBUSY;
+
+ mb();
+
+ /* Pre-compute the values to stuff in the hardware registers. */
+ model->reg_setup(ctr, &sys, model->num_counters);
+
+#if 0
+ /* FIXME: Make multi-cpu work */
+ /* Configure the registers on all cpus. */
+ on_each_cpu(model->reg_setup, NULL, 0, 1);
+#endif
+
+ return 0;
+}
+
+static void op_ppc32_shutdown(void)
+{
+ mb();
+
+ /* Remove our interrupt handler. We may be removing this module. */
+ free_perfmon_irq();
+}
+
+static void op_ppc32_cpu_start(void *dummy)
+{
+ model->start(ctr);
+}
+
+static int op_ppc32_start(void)
+{
+ on_each_cpu(op_ppc32_cpu_start, NULL, 0, 1);
+ return 0;
+}
+
+static inline void op_ppc32_cpu_stop(void *dummy)
+{
+ model->stop();
+}
+
+static void op_ppc32_stop(void)
+{
+ on_each_cpu(op_ppc32_cpu_stop, NULL, 0, 1);
+}
+
+static int op_ppc32_create_files(struct super_block *sb, struct dentry *root)
+{
+ int i;
+
+ for (i = 0; i < model->num_counters; ++i) {
+ struct dentry *dir;
+ char buf[3];
+
+ snprintf(buf, sizeof buf, "%d", i);
+ dir = oprofilefs_mkdir(sb, root, buf);
+
+ oprofilefs_create_ulong(sb, dir, "enabled", &ctr[i].enabled);
+ oprofilefs_create_ulong(sb, dir, "event", &ctr[i].event);
+ oprofilefs_create_ulong(sb, dir, "count", &ctr[i].count);
+ oprofilefs_create_ulong(sb, dir, "kernel", &ctr[i].kernel);
+ oprofilefs_create_ulong(sb, dir, "user", &ctr[i].user);
+
+ /* FIXME: Not sure if this is used */
+ oprofilefs_create_ulong(sb, dir, "unit_mask", &ctr[i].unit_mask);
+ }
+
+ oprofilefs_create_ulong(sb, root, "enable_kernel", &sys.enable_kernel);
+ oprofilefs_create_ulong(sb, root, "enable_user", &sys.enable_user);
+
+ /* Default to tracing both kernel and user */
+ sys.enable_kernel = 1;
+ sys.enable_user = 1;
+
+ return 0;
+}
+
+static struct oprofile_operations oprof_ppc32_ops = {
+ .create_files = op_ppc32_create_files,
+ .setup = op_ppc32_setup,
+ .shutdown = op_ppc32_shutdown,
+ .start = op_ppc32_start,
+ .stop = op_ppc32_stop,
+ .cpu_type = NULL /* To be filled in below. */
+};
+
+int __init oprofile_arch_init(struct oprofile_operations *ops)
+{
+ char *name;
+ int cpu_id = smp_processor_id();
+
+#ifdef CONFIG_FSL_BOOKE
+ model = &op_model_fsl_booke;
+#else
+ return -ENODEV;
+#endif
+
+ name = kmalloc(32, GFP_KERNEL);
+
+ if (NULL == name)
+ return -ENOMEM;
+
+ sprintf(name, "ppc/%s", cur_cpu_spec[cpu_id]->cpu_name);
+
+ oprof_ppc32_ops.cpu_type = name;
+
+ model->num_counters = cur_cpu_spec[cpu_id]->num_pmcs;
+
+ *ops = oprof_ppc32_ops;
+
+ printk(KERN_INFO "oprofile: using %s performance monitoring.\n",
+ oprof_ppc32_ops.cpu_type);
+
+ return 0;
+}
+
+void oprofile_arch_exit(void)
+{
+ kfree(oprof_ppc32_ops.cpu_type);
+ oprof_ppc32_ops.cpu_type = NULL;
+}
--- /dev/null
+/*
+ * Copyright (C) 2004 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * Based on alpha version.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef OP_IMPL_H
+#define OP_IMPL_H 1
+
+#define OP_MAX_COUNTER 8
+
+/* Per-counter configuration as set via oprofilefs. */
+struct op_counter_config {
+ unsigned long enabled;
+ unsigned long event;
+ unsigned long count;
+ unsigned long kernel;
+ unsigned long user;
+ unsigned long unit_mask;
+};
+
+/* System-wide configuration as set via oprofilefs. */
+struct op_system_config {
+ unsigned long enable_kernel;
+ unsigned long enable_user;
+};
+
+/* Per-arch configuration */
+struct op_ppc32_model {
+ void (*reg_setup) (struct op_counter_config *,
+ struct op_system_config *,
+ int num_counters);
+ void (*start) (struct op_counter_config *);
+ void (*stop) (void);
+ void (*handle_interrupt) (struct pt_regs *,
+ struct op_counter_config *);
+ int num_counters;
+};
+
+#endif /* OP_IMPL_H */
--- /dev/null
+/*
+ * oprofile/op_model_e500.c
+ *
+ * Freescale Book-E oprofile support, based on ppc64 oprofile support
+ * Copyright (C) 2004 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * Copyright (c) 2004 Freescale Semiconductor, Inc
+ *
+ * Author: Andy Fleming
+ * Maintainer: Kumar Gala <Kumar.Gala@freescale.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/oprofile.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <asm/ptrace.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/cputable.h>
+#include <asm/reg_booke.h>
+#include <asm/page.h>
+#include <asm/perfmon.h>
+
+#include "op_impl.h"
+
+static unsigned long reset_value[OP_MAX_COUNTER];
+
+static int num_counters;
+static int oprofile_running;
+
+static inline unsigned int ctr_read(unsigned int i)
+{
+ switch(i) {
+ case 0:
+ return mfpmr(PMRN_PMC0);
+ case 1:
+ return mfpmr(PMRN_PMC1);
+ case 2:
+ return mfpmr(PMRN_PMC2);
+ case 3:
+ return mfpmr(PMRN_PMC3);
+ default:
+ return 0;
+ }
+}
+
+static inline void ctr_write(unsigned int i, unsigned int val)
+{
+ switch(i) {
+ case 0:
+ mtpmr(PMRN_PMC0, val);
+ break;
+ case 1:
+ mtpmr(PMRN_PMC1, val);
+ break;
+ case 2:
+ mtpmr(PMRN_PMC2, val);
+ break;
+ case 3:
+ mtpmr(PMRN_PMC3, val);
+ break;
+ default:
+ break;
+ }
+}
+
+
+static void fsl_booke_reg_setup(struct op_counter_config *ctr,
+ struct op_system_config *sys,
+ int num_ctrs)
+{
+ int i;
+
+ num_counters = num_ctrs;
+
+ /* freeze all counters */
+ pmc_stop_ctrs();
+
+ /* Our counters count up, and "count" is how many events
+ * we want before the next interrupt; we interrupt on
+ * overflow.  So we calculate the starting value which
+ * will give us "count" events until overflow.
+ * Then we set the events on the enabled counters */
+ for (i = 0; i < num_counters; ++i) {
+ reset_value[i] = 0x80000000UL - ctr[i].count;
+
+ init_pmc_stop(i);
+
+ set_pmc_event(i, ctr[i].event);
+
+ set_pmc_user_kernel(i, ctr[i].user, ctr[i].kernel);
+ }
+}
+
+static void fsl_booke_start(struct op_counter_config *ctr)
+{
+ int i;
+
+ mtmsr(mfmsr() | MSR_PMM);
+
+ for (i = 0; i < num_counters; ++i) {
+ if (ctr[i].enabled) {
+ ctr_write(i, reset_value[i]);
+ /* Set each enabled counter to only
+ * count when the Mark bit is not set */
+ set_pmc_marked(i, 1, 0);
+ pmc_start_ctr(i, 1);
+ } else {
+ ctr_write(i, 0);
+
+ /* Set the ctr to be stopped */
+ pmc_start_ctr(i, 0);
+ }
+ }
+
+ /* Clear the freeze bit, and enable the interrupt.
+ * The counters won't actually start until the rfi clears
+ * the PMM bit */
+ pmc_start_ctrs(1);
+
+ oprofile_running = 1;
+
+ pr_debug("start on cpu %d, pmgc0 %x\n", smp_processor_id(),
+ mfpmr(PMRN_PMGC0));
+}
+
+static void fsl_booke_stop(void)
+{
+ /* freeze counters */
+ pmc_stop_ctrs();
+
+ oprofile_running = 0;
+
+ pr_debug("stop on cpu %d, pmgc0 %x\n", smp_processor_id(),
+ mfpmr(PMRN_PMGC0));
+
+ mb();
+}
+
+
+static void fsl_booke_handle_interrupt(struct pt_regs *regs,
+ struct op_counter_config *ctr)
+{
+ unsigned long pc;
+ int is_kernel;
+ int val;
+ int i;
+
+ /* set the PMM bit (see comment below) */
+ mtmsr(mfmsr() | MSR_PMM);
+
+ pc = regs->nip;
+ is_kernel = (pc >= KERNELBASE);
+
+ for (i = 0; i < num_counters; ++i) {
+ val = ctr_read(i);
+ if (val < 0) {
+ if (oprofile_running && ctr[i].enabled) {
+ oprofile_add_pc(pc, is_kernel, i);
+ ctr_write(i, reset_value[i]);
+ } else {
+ ctr_write(i, 0);
+ }
+ }
+ }
+
+ /* The freeze bit was set by the interrupt. */
+ /* Clear the freeze bit, and reenable the interrupt.
+ * The counters won't actually start until the rfi clears
+ * the PMM bit */
+ pmc_start_ctrs(1);
+}
+
+struct op_ppc32_model op_model_fsl_booke = {
+ .reg_setup = fsl_booke_reg_setup,
+ .start = fsl_booke_start,
+ .stop = fsl_booke_stop,
+ .handle_interrupt = fsl_booke_handle_interrupt,
+};
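
To make the reset_value arithmetic in fsl_booke_reg_setup() concrete: the PMCs count up and overflow is defined as bit 31 becoming set, which is why ctr_read() going negative is the trigger. With an illustrative period of count = 100000 events:

    reset_value = 0x80000000UL - 100000;    /* = 0x7ffe7960 */
    /* after exactly 100000 counted events the PMC reaches 0x80000000,
     * ctr_read() returns a negative value, and the interrupt handler
     * records one sample and reloads the counter with reset_value. */
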
#include <asm/ibm4xx.h>
#include <asm/ocp.h>
+#include <asm/ppc4xx_pic.h>
#include <platforms/4xx/ibm405ep.h>
{ .vendor = OCP_VENDOR_INVALID
}
};
+
+/* Polarity and triggering settings for internal interrupt sources */
+struct ppc4xx_uic_settings ppc4xx_core_uic_cfg[] __initdata = {
+ { .polarity = 0xffff7f80,
+ .triggering = 0x00000000,
+ .ext_irq_mask = 0x0000007f, /* IRQ0 - IRQ6 */
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/4xx/ibm440sp.c
+ *
+ * PPC440SP I/O descriptions
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2002-2005 MontaVista Software Inc.
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003, 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <platforms/4xx/ibm440sp.h>
+#include <asm/ocp.h>
+
+static struct ocp_func_emac_data ibm440sp_emac0_def = {
+ .rgmii_idx = -1, /* No RGMII */
+ .rgmii_mux = -1, /* No RGMII */
+ .zmii_idx = -1, /* No ZMII */
+ .zmii_mux = -1, /* No ZMII */
+ .mal_idx = 0, /* MAL device index */
+ .mal_rx_chan = 0, /* MAL rx channel number */
+ .mal_tx_chan = 0, /* MAL tx channel number */
+ .wol_irq = 61, /* WOL interrupt number */
+ .mdio_idx = -1, /* No shared MDIO */
+ .tah_idx = -1, /* No TAH */
+ .jumbo = 1, /* Jumbo frames supported */
+};
+OCP_SYSFS_EMAC_DATA()
+
+static struct ocp_func_mal_data ibm440sp_mal0_def = {
+ .num_tx_chans = 4, /* Number of TX channels */
+ .num_rx_chans = 4, /* Number of RX channels */
+ .txeob_irq = 38, /* TX End Of Buffer IRQ */
+ .rxeob_irq = 39, /* RX End Of Buffer IRQ */
+ .txde_irq = 34, /* TX Descriptor Error IRQ */
+ .rxde_irq = 35, /* RX Descriptor Error IRQ */
+ .serr_irq = 33, /* MAL System Error IRQ */
+};
+OCP_SYSFS_MAL_DATA()
+
+static struct ocp_func_iic_data ibm440sp_iic0_def = {
+ .fast_mode = 0, /* Use standard mode (100 kHz) */
+};
+
+static struct ocp_func_iic_data ibm440sp_iic1_def = {
+ .fast_mode = 0, /* Use standard mode (100 kHz) */
+};
+OCP_SYSFS_IIC_DATA()
+
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_OPB,
+ .index = 0,
+ .paddr = 0x0000000140000000ULL,
+ .irq = OCP_IRQ_NA,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_16550,
+ .index = 0,
+ .paddr = PPC440SP_UART0_ADDR,
+ .irq = UART0_INT,
+ .pm = IBM_CPM_UART0,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_16550,
+ .index = 1,
+ .paddr = PPC440SP_UART1_ADDR,
+ .irq = UART1_INT,
+ .pm = IBM_CPM_UART1,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_16550,
+ .index = 2,
+ .paddr = PPC440SP_UART2_ADDR,
+ .irq = UART2_INT,
+ .pm = IBM_CPM_UART2,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = 0x00000001f0000400ULL,
+ .irq = 2,
+ .pm = IBM_CPM_IIC0,
+ .additions = &ibm440sp_iic0_def,
+ .show = &ocp_show_iic_data
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_IIC,
+ .index = 1,
+ .paddr = 0x00000001f0000500ULL,
+ .irq = 3,
+ .pm = IBM_CPM_IIC1,
+ .additions = &ibm440sp_iic1_def,
+ .show = &ocp_show_iic_data
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_GPIO,
+ .index = 0,
+ .paddr = 0x00000001f0000700ULL,
+ .irq = OCP_IRQ_NA,
+ .pm = IBM_CPM_GPIO0,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_MAL,
+ .paddr = OCP_PADDR_NA,
+ .irq = OCP_IRQ_NA,
+ .pm = OCP_CPM_NA,
+ .additions = &ibm440sp_mal0_def,
+ .show = &ocp_show_mal_data,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_EMAC,
+ .index = 0,
+ .paddr = 0x00000001f0000800ULL,
+ .irq = 60,
+ .pm = OCP_CPM_NA,
+ .additions = &ibm440sp_emac0_def,
+ .show = &ocp_show_emac_data,
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/4xx/ibm440sp.h
+ *
+ * PPC440SP definitions
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * Copyright 2004-2005 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __PPC_PLATFORMS_IBM440SP_H
+#define __PPC_PLATFORMS_IBM440SP_H
+
+#include <linux/config.h>
+
+#include <asm/ibm44x.h>
+
+/* UART */
+#define PPC440SP_UART0_ADDR 0x00000001f0000200ULL
+#define PPC440SP_UART1_ADDR 0x00000001f0000300ULL
+#define PPC440SP_UART2_ADDR 0x00000001f0000600ULL
+#define UART0_INT 0
+#define UART1_INT 1
+#define UART2_INT 2
+
+/* Clock and Power Management */
+#define IBM_CPM_IIC0 0x80000000 /* IIC interface */
+#define IBM_CPM_IIC1 0x40000000 /* IIC interface */
+#define IBM_CPM_PCI 0x20000000 /* PCI bridge */
+#define IBM_CPM_CPU 0x02000000 /* processor core */
+#define IBM_CPM_DMA 0x01000000 /* DMA controller */
+#define IBM_CPM_BGO 0x00800000 /* PLB to OPB bus arbiter */
+#define IBM_CPM_BGI 0x00400000 /* OPB to PLB bridge */
+#define IBM_CPM_EBC 0x00200000 /* External Bus Controller */
+#define IBM_CPM_EBM 0x00100000 /* Ext Bus Master Interface */
+#define IBM_CPM_DMC 0x00080000 /* SDRAM peripheral controller */
+#define IBM_CPM_PLB 0x00040000 /* PLB bus arbiter */
+#define IBM_CPM_SRAM 0x00020000 /* SRAM memory controller */
+#define IBM_CPM_PPM 0x00002000 /* PLB Performance Monitor */
+#define IBM_CPM_UIC1 0x00001000 /* Universal Interrupt Controller */
+#define IBM_CPM_GPIO0 0x00000800 /* General Purpose IO (??) */
+#define IBM_CPM_GPT 0x00000400 /* General Purpose Timers */
+#define IBM_CPM_UART0 0x00000200 /* serial port 0 */
+#define IBM_CPM_UART1 0x00000100 /* serial port 1 */
+#define IBM_CPM_UART2 0x00000100 /* serial port 2 */
+#define IBM_CPM_UIC0 0x00000080 /* Universal Interrupt Controller */
+#define IBM_CPM_TMRCLK 0x00000040 /* CPU timers */
+#define IBM_CPM_EMAC0 0x00000020 /* EMAC 0 */
+
+#define DFLT_IBM4xx_PM ~(IBM_CPM_UIC | IBM_CPM_UIC1 | IBM_CPM_CPU \
+ | IBM_CPM_EBC | IBM_CPM_SRAM | IBM_CPM_BGO \
+ | IBM_CPM_EBM | IBM_CPM_PLB | IBM_CPM_OPB \
+ | IBM_CPM_TMRCLK | IBM_CPM_DMA | IBM_CPM_PCI \
+ | IBM_CPM_TAHOE0 | IBM_CPM_TAHOE1 \
+ | IBM_CPM_EMAC0 | IBM_CPM_EMAC1 \
+ | IBM_CPM_EMAC2 | IBM_CPM_EMAC3 )
+#endif /* __PPC_PLATFORMS_IBM440SP_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/4xx/luan.c
+ *
+ * Luan board specific routines
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * Copyright 2004-2005 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/types.h>
+#include <linux/major.h>
+#include <linux/blkdev.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/ide.h>
+#include <linux/initrd.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/dma.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/ocp.h>
+#include <asm/pci-bridge.h>
+#include <asm/time.h>
+#include <asm/todc.h>
+#include <asm/bootinfo.h>
+#include <asm/ppc4xx_pic.h>
+#include <asm/ppcboot.h>
+
+#include <syslib/ibm44x_common.h>
+#include <syslib/ibm440gx_common.h>
+#include <syslib/ibm440sp_common.h>
+
+/*
+ * This is a horrible kludge; we eventually need to abstract this
+ * generic PHY stuff so that the standard phy mode defines can be
+ * easily used from arch code.
+ */
+#include "../../../../drivers/net/ibm_emac/ibm_emac_phy.h"
+
+bd_t __res;
+
+static struct ibm44x_clocks clocks __initdata;
+
+static void __init
+luan_calibrate_decr(void)
+{
+ unsigned int freq;
+
+ if (mfspr(SPRN_CCR1) & CCR1_TCS)
+ freq = LUAN_TMR_CLK;
+ else
+ freq = clocks.cpu;
+
+ ibm44x_calibrate_decr(freq);
+}
+
+static int
+luan_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "vendor\t\t: IBM\n");
+ seq_printf(m, "machine\t\t: PPC440SP EVB (Luan)\n");
+
+ return 0;
+}
+
+static inline int
+luan_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ struct pci_controller *hose = pci_bus_to_hose(dev->bus->number);
+
+ /* PCIX0 in adapter mode, no host interrupt routing */
+
+ /* PCIX1 */
+ if (hose->index == 0) {
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ { 49, 49, 49, 49 }, /* IDSEL 1 - PCIX1 Slot 0 */
+ { 49, 49, 49, 49 }, /* IDSEL 2 - PCIX1 Slot 1 */
+ { 49, 49, 49, 49 }, /* IDSEL 3 - PCIX1 Slot 2 */
+ { 49, 49, 49, 49 }, /* IDSEL 4 - PCIX1 Slot 3 */
+ };
+ const long min_idsel = 1, max_idsel = 4, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+ /* PCIX2 */
+ } else if (hose->index == 1) {
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ { 50, 50, 50, 50 }, /* IDSEL 1 - PCIX2 Slot 0 */
+ { 50, 50, 50, 50 }, /* IDSEL 2 - PCIX2 Slot 1 */
+ { 50, 50, 50, 50 }, /* IDSEL 3 - PCIX2 Slot 2 */
+ { 50, 50, 50, 50 }, /* IDSEL 4 - PCIX2 Slot 3 */
+ };
+ const long min_idsel = 1, max_idsel = 4, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+ }
+ return -1;
+}
+
+static void __init luan_set_emacdata(void)
+{
+ struct ocp_def *def;
+ struct ocp_func_emac_data *emacdata;
+
+ /* Set phy_map, phy_mode, and mac_addr for the EMAC */
+ def = ocp_get_one_device(OCP_VENDOR_IBM, OCP_FUNC_EMAC, 0);
+ emacdata = def->additions;
+ emacdata->phy_map = 0x00000001; /* Skip 0x00 */
+ emacdata->phy_mode = PHY_MODE_GMII;
+ memcpy(emacdata->mac_addr, __res.bi_enetaddr, 6);
+}
+
+#define PCIX_READW(offset) \
+ (readw((void *)((u32)pcix_reg_base+offset)))
+
+#define PCIX_WRITEW(value, offset) \
+ (writew(value, (void *)((u32)pcix_reg_base+offset)))
+
+#define PCIX_WRITEL(value, offset) \
+ (writel(value, (void *)((u32)pcix_reg_base+offset)))
+
+static void __init
+luan_setup_pcix(void)
+{
+ int i;
+ void *pcix_reg_base;
+
+ for (i=0;i<3;i++) {
+ pcix_reg_base = ioremap64(PCIX0_REG_BASE + i*PCIX_REG_OFFSET, PCIX_REG_SIZE);
+
+ /* Enable PCIX0 I/O, Mem, and Busmaster cycles */
+ PCIX_WRITEW(PCIX_READW(PCIX0_COMMAND) | PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER, PCIX0_COMMAND);
+
+ /* Disable all windows */
+ PCIX_WRITEL(0, PCIX0_POM0SA);
+ PCIX_WRITEL(0, PCIX0_POM1SA);
+ PCIX_WRITEL(0, PCIX0_POM2SA);
+ PCIX_WRITEL(0, PCIX0_PIM0SA);
+ PCIX_WRITEL(0, PCIX0_PIM0SAH);
+ PCIX_WRITEL(0, PCIX0_PIM1SA);
+ PCIX_WRITEL(0, PCIX0_PIM2SA);
+ PCIX_WRITEL(0, PCIX0_PIM2SAH);
+
+ /*
+ * Setup 512MB PLB->PCI outbound mem window
+ * (a_n000_0000->0_n000_0000)
+		 */
+ PCIX_WRITEL(0x0000000a, PCIX0_POM0LAH);
+ PCIX_WRITEL(0x80000000 | i*LUAN_PCIX_MEM_SIZE, PCIX0_POM0LAL);
+ PCIX_WRITEL(0x00000000, PCIX0_POM0PCIAH);
+ PCIX_WRITEL(0x80000000 | i*LUAN_PCIX_MEM_SIZE, PCIX0_POM0PCIAL);
+ PCIX_WRITEL(0xe0000001, PCIX0_POM0SA);
+
+ /* Setup 2GB PCI->PLB inbound memory window at 0, enable MSIs */
+ PCIX_WRITEL(0x00000000, PCIX0_PIM0LAH);
+ PCIX_WRITEL(0x00000000, PCIX0_PIM0LAL);
+ PCIX_WRITEL(0xe0000007, PCIX0_PIM0SA);
+ PCIX_WRITEL(0xffffffff, PCIX0_PIM0SAH);
+
+ iounmap(pcix_reg_base);
+ }
+
+ eieio();
+}
+
+static void __init
+luan_setup_hose(struct pci_controller *hose,
+ int lower_mem,
+ int upper_mem,
+ int cfga,
+ int cfgd,
+ u64 pcix_io_base)
+{
+ char name[20];
+
+ sprintf(name, "PCIX%d host bridge", hose->index);
+
+ hose->pci_mem_offset = LUAN_PCIX_MEM_OFFSET;
+
+ pci_init_resource(&hose->io_resource,
+ LUAN_PCIX_LOWER_IO,
+ LUAN_PCIX_UPPER_IO,
+ IORESOURCE_IO,
+ name);
+
+ pci_init_resource(&hose->mem_resources[0],
+ lower_mem,
+ upper_mem,
+ IORESOURCE_MEM,
+ name);
+
+ hose->io_space.start = LUAN_PCIX_LOWER_IO;
+ hose->io_space.end = LUAN_PCIX_UPPER_IO;
+ hose->mem_space.start = lower_mem;
+ hose->mem_space.end = upper_mem;
+ isa_io_base =
+ (unsigned long)ioremap64(pcix_io_base, PCIX_IO_SIZE);
+ hose->io_base_virt = (void *)isa_io_base;
+
+ setup_indirect_pci(hose, cfga, cfgd);
+ hose->set_cfg_type = 1;
+}
+
+static void __init
+luan_setup_hoses(void)
+{
+ struct pci_controller *hose1, *hose2;
+
+ /* Configure windows on the PCI-X host bridge */
+ luan_setup_pcix();
+
+ /* Allocate hoses for PCIX1 and PCIX2 */
+ hose1 = pcibios_alloc_controller();
+ hose2 = pcibios_alloc_controller();
+ if (!hose1 || !hose2)
+ return;
+
+ /* Setup PCIX1 */
+ hose1->first_busno = 0;
+ hose1->last_busno = 0xff;
+
+ luan_setup_hose(hose1,
+ LUAN_PCIX1_LOWER_MEM,
+ LUAN_PCIX1_UPPER_MEM,
+ PCIX1_CFGA,
+ PCIX1_CFGD,
+ PCIX1_IO_BASE);
+
+ hose1->last_busno = pciauto_bus_scan(hose1, hose1->first_busno);
+
+ /* Setup PCIX2 */
+ hose2->first_busno = hose1->last_busno + 1;
+ hose2->last_busno = 0xff;
+
+ luan_setup_hose(hose2,
+ LUAN_PCIX2_LOWER_MEM,
+ LUAN_PCIX2_UPPER_MEM,
+ PCIX2_CFGA,
+ PCIX2_CFGD,
+ PCIX2_IO_BASE);
+
+ hose2->last_busno = pciauto_bus_scan(hose2, hose2->first_busno);
+
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = luan_map_irq;
+}
+
+TODC_ALLOC();
+
+static void __init
+luan_early_serial_map(void)
+{
+ struct uart_port port;
+
+ /* Setup ioremapped serial port access */
+ memset(&port, 0, sizeof(port));
+ port.membase = ioremap64(PPC440SP_UART0_ADDR, 8);
+ port.irq = UART0_INT;
+ port.uartclk = clocks.uart0;
+ port.regshift = 0;
+ port.iotype = SERIAL_IO_MEM;
+ port.flags = ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST;
+ port.line = 0;
+
+ if (early_serial_setup(&port) != 0) {
+ printk("Early serial init of port 0 failed\n");
+ }
+
+ port.membase = ioremap64(PPC440SP_UART1_ADDR, 8);
+ port.irq = UART1_INT;
+ port.uartclk = clocks.uart1;
+ port.line = 1;
+
+ if (early_serial_setup(&port) != 0) {
+ printk("Early serial init of port 1 failed\n");
+ }
+
+ port.membase = ioremap64(PPC440SP_UART2_ADDR, 8);
+ port.irq = UART2_INT;
+ port.uartclk = BASE_BAUD;
+ port.line = 2;
+
+ if (early_serial_setup(&port) != 0) {
+ printk("Early serial init of port 2 failed\n");
+ }
+}
+
+static void __init
+luan_setup_arch(void)
+{
+ luan_set_emacdata();
+
+#if !defined(CONFIG_BDI_SWITCH)
+ /*
+ * The Abatron BDI JTAG debugger does not tolerate others
+ * mucking with the debug registers.
+ */
+ mtspr(SPRN_DBCR0, (DBCR0_TDE | DBCR0_IDM));
+#endif
+
+ /*
+ * Determine various clocks.
+ * To be completely correct we should get SysClk
+ * from FPGA, because it can be changed by on-board switches
+ * --ebs
+ */
+ /* 440GX and 440SP clocking is the same -mdp */
+ ibm440gx_get_clocks(&clocks, 33333333, 6 * 1843200);
+ ocp_sys_info.opb_bus_freq = clocks.opb;
+
+ /* init to some ~sane value until calibrate_delay() runs */
+ loops_per_jiffy = 50000000/HZ;
+
+ /* Setup PCIXn host bridges */
+ luan_setup_hoses();
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ luan_early_serial_map();
+
+ /* Identify the system */
+ printk("Luan port (MontaVista Software, Inc. <source@mvista.com>)\n");
+}
+
+void __init platform_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7)
+{
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed in a board information, copy it into the
+ * residual data area.
+ */
+ if (r3)
+ __res = *(bd_t *)(r3 + KERNELBASE);
+
+ ibm44x_platform_init();
+
+ ppc_md.setup_arch = luan_setup_arch;
+ ppc_md.show_cpuinfo = luan_show_cpuinfo;
+ ppc_md.find_end_of_memory = ibm440sp_find_end_of_memory;
+ ppc_md.get_irq = NULL; /* Set in ppc4xx_pic_init() */
+
+ ppc_md.calibrate_decr = luan_calibrate_decr;
+#ifdef CONFIG_KGDB
+ ppc_md.early_serial_map = luan_early_serial_map;
+#endif
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/4xx/luan.h
+ *
+ * Luan board definitions
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * Copyright 2004-2005 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_LUAN_H__
+#define __ASM_LUAN_H__
+
+#include <linux/config.h>
+#include <platforms/4xx/ibm440sp.h>
+
+/* F/W TLB mapping used in bootloader glue to reset EMAC */
+#define PPC44x_EMAC0_MR0 0xa0000800
+
+/* Location of MAC addresses in PIBS image */
+#define PIBS_FLASH_BASE 0xffe00000
+#define PIBS_MAC_BASE (PIBS_FLASH_BASE+0x1b0400)
+
+/* External timer clock frequency */
+#define LUAN_TMR_CLK 25000000
+
+/* Flash */
+#define LUAN_FPGA_REG_0 0x0000000148300000ULL
+#define LUAN_BOOT_LARGE_FLASH(x) (x & 0x40)
+#define LUAN_SMALL_FLASH_LOW 0x00000001ff900000ULL
+#define LUAN_SMALL_FLASH_HIGH 0x00000001ffe00000ULL
+#define LUAN_SMALL_FLASH_SIZE 0x100000
+#define LUAN_LARGE_FLASH_LOW 0x00000001ff800000ULL
+#define LUAN_LARGE_FLASH_HIGH 0x00000001ffc00000ULL
+#define LUAN_LARGE_FLASH_SIZE 0x400000
+
+/*
+ * Serial port defines
+ */
+#define RS_TABLE_SIZE 3
+
+/* PIBS defined UART mappings, used before early_serial_setup */
+#define UART0_IO_BASE 0xa0000200
+#define UART1_IO_BASE 0xa0000300
+#define UART2_IO_BASE 0xa0000600
+
+#define BASE_BAUD 11059200
+#define STD_UART_OP(num) \
+ { 0, BASE_BAUD, 0, UART##num##_INT, \
+ (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST), \
+ iomem_base: UART##num##_IO_BASE, \
+ io_type: SERIAL_IO_MEM},
+
+#define SERIAL_PORT_DFNS \
+ STD_UART_OP(0) \
+ STD_UART_OP(1) \
+ STD_UART_OP(2)
+
+/* PCI support */
+#define LUAN_PCIX_LOWER_IO 0x00000000
+#define LUAN_PCIX_UPPER_IO 0x0000ffff
+#define LUAN_PCIX0_LOWER_MEM 0x80000000
+#define LUAN_PCIX0_UPPER_MEM 0x9fffffff
+#define LUAN_PCIX1_LOWER_MEM 0xa0000000
+#define LUAN_PCIX1_UPPER_MEM 0xbfffffff
+#define LUAN_PCIX2_LOWER_MEM 0xc0000000
+#define LUAN_PCIX2_UPPER_MEM 0xdfffffff
+
+#define LUAN_PCIX_MEM_SIZE 0x20000000
+#define LUAN_PCIX_MEM_OFFSET 0x00000000
+
+#endif /* __ASM_LUAN_H__ */
+#endif /* __KERNEL__ */
ppc_md.setup_arch = oak_setup_arch;
ppc_md.show_percpuinfo = oak_show_percpuinfo;
ppc_md.irq_canonicalize = NULL;
- ppc_md.init_IRQ = oak_init_IRQ;
- ppc_md.get_irq = oak_get_irq;
+ ppc_md.init_IRQ = ppc4xx_pic_init;
+ ppc_md.get_irq = NULL; /* Set in ppc4xx_pic_init() */
ppc_md.init = NULL;
ppc_md.restart = oak_restart;
return 0;
}
-/*
- * Document me.
- */
-void __init
-oak_init_IRQ(void)
-{
- int i;
-
- ppc4xx_pic_init();
-
- for (i = 0; i < NR_IRQS; i++) {
- irq_desc[i].handler = ppc4xx_pic;
- }
-
- return;
-}
-
-/*
- * Document me.
- */
-int
-oak_get_irq(struct pt_regs *regs)
-{
- return (ppc4xx_pic_get_irq(regs));
-}
-
/*
* Document me.
*/
This option enables support for the WindRiver PowerQUICC III
SBC8560 board.
+config STX_GP3
+ bool "Silicon Turnkey Express GP3"
+ help
+ This option enables support for the Silicon Turnkey Express GP3
+ board.
+
endchoice
# It's often necessary to know the specific 85xx processor type.
config MPC8560
bool
- depends on SBC8560 || MPC8560_ADS
+ depends on SBC8560 || MPC8560_ADS || STX_GP3
default y
config 85xx_PCI2
depends on MPC8555_CDS
default y
-config FSL_OCP
- bool
- depends on 85xx
- default y
-
config PPC_GEN550
bool
depends on MPC8540 || SBC8560 || MPC8555
#include <linux/serial_core.h>
#include <linux/initrd.h>
#include <linux/module.h>
+#include <linux/fsl_devices.h>
#include <asm/system.h>
#include <asm/pgtable.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
#include <asm/kgdb.h>
-#include <asm/ocp.h>
+#include <asm/ppc_sys.h>
#include <mm/mmu_decl.h>
-#include <syslib/ppc85xx_common.h>
#include <syslib/ppc85xx_setup.h>
-struct ocp_gfar_data mpc85xx_tsec1_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
- .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
- | GFAR_HAS_RMON
- | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
- .phyid = 0,
- .phyregidx = 0,
-};
-
-struct ocp_gfar_data mpc85xx_tsec2_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
- .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
- | GFAR_HAS_RMON
- | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
- .phyid = 1,
- .phyregidx = 0,
-};
-
-struct ocp_gfar_data mpc85xx_fec_def = {
- .interruptTransmit = MPC85xx_IRQ_FEC,
- .interruptError = MPC85xx_IRQ_FEC,
- .interruptReceive = MPC85xx_IRQ_FEC,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = 0,
- .phyid = 3,
- .phyregidx = 0,
-};
-
-struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
- .flags = FS_I2C_SEPARATE_DFSRR,
-};
-
/* ************************************************************************
*
* Setup the architecture
static void __init
mpc8540ads_setup_arch(void)
{
- struct ocp_def *def;
- struct ocp_gfar_data *einfo;
bd_t *binfo = (bd_t *) __res;
unsigned int freq;
+ struct gianfar_platform_data *pdata;
/* get the core frequency */
freq = binfo->bi_intfreq;
invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
#endif
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
- }
-
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
- }
-
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 2);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enet2addr, 6);
- }
+ /* setup the board related information for the enet controllers */
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC1);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 0;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enetaddr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC2);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 1;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet1addr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_FEC);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 3;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet2addr, 6);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
#else
ROOT_DEV = Root_HDA1;
#endif
-
- ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
}
/* ************************************************************************ */
#ifdef CONFIG_SERIAL_TEXT_DEBUG
{
bd_t *binfo = (bd_t *) __res;
+ struct uart_port p;
/* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+
+ memset(&p, 0, sizeof (p));
+ p.iotype = SERIAL_IO_MEM;
+ p.membase = (void *) binfo->bi_immr_base + MPC85xx_UART0_OFFSET;
+ p.uartclk = binfo->bi_busfreq;
+
+ gen550_init(0, &p);
+
+ memset(&p, 0, sizeof (p));
+ p.iotype = SERIAL_IO_MEM;
+ p.membase = (void *) binfo->bi_immr_base + MPC85xx_UART1_OFFSET;
+ p.uartclk = binfo->bi_busfreq;
+
+ gen550_init(1, &p);
}
#endif
strcpy(cmd_line, (char *) (r6 + KERNELBASE));
}
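+	/* Identify the specific MPC85xx SoC from the System Version Register */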
+ identify_ppc_sys_by_id(mfspr(SVR));
+
/* setup the PowerPC module struct */
ppc_md.setup_arch = mpc8540ads_setup_arch;
ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
#include <syslib/ppc85xx_setup.h>
#include <platforms/85xx/mpc85xx_ads_common.h>
-#define SERIAL_PORT_DFNS \
- STD_UART_OP(0) \
- STD_UART_OP(1)
-
#endif /* __MACH_MPC8540ADS_H__ */
#define __MACH_MPC8555CDS_H__
#include <linux/config.h>
-#include <linux/serial.h>
+#include <syslib/ppc85xx_setup.h>
#include <platforms/85xx/mpc85xx_cds_common.h>
#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
#include <linux/serial_core.h>
#include <linux/initrd.h>
#include <linux/module.h>
+#include <linux/fsl_devices.h>
#include <asm/system.h>
#include <asm/pgtable.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
#include <asm/kgdb.h>
-#include <asm/ocp.h>
+#include <asm/ppc_sys.h>
#include <asm/cpm2.h>
#include <mm/mmu_decl.h>
extern void cpm2_reset(void);
-struct ocp_gfar_data mpc85xx_tsec1_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
- .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
- | GFAR_HAS_RMON | GFAR_HAS_COALESCE
- | GFAR_HAS_PHY_INTR),
- .phyid = 0,
- .phyregidx = 0,
-};
-
-struct ocp_gfar_data mpc85xx_tsec2_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
- .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
- | GFAR_HAS_RMON | GFAR_HAS_COALESCE
- | GFAR_HAS_PHY_INTR),
- .phyid = 1,
- .phyregidx = 0,
-};
-
-struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
- .flags = FS_I2C_SEPARATE_DFSRR,
-};
-
/* ************************************************************************
*
* Setup the architecture
static void __init
mpc8560ads_setup_arch(void)
{
- struct ocp_def *def;
- struct ocp_gfar_data *einfo;
bd_t *binfo = (bd_t *) __res;
unsigned int freq;
+ struct gianfar_platform_data *pdata;
cpm2_reset();
mpc85xx_setup_hose();
#endif
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
- }
-
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
- }
+ /* setup the board related information for the enet controllers */
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC1);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 0;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enetaddr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC2);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 1;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet1addr, 6);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
#else
ROOT_DEV = Root_HDA1;
#endif
-
- ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
}
static irqreturn_t cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
strcpy(cmd_line, (char *) (r6 + KERNELBASE));
}
+ identify_ppc_sys_by_id(mfspr(SVR));
+
/* setup the PowerPC module struct */
ppc_md.setup_arch = mpc8560ads_setup_arch;
ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
#include <asm/mpc85xx.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
-#include <asm/ocp.h>
#include <mm/mmu_decl.h>
#include <linux/initrd.h>
#include <linux/tty.h>
#include <linux/serial_core.h>
+#include <linux/fsl_devices.h>
#include <asm/system.h>
#include <asm/pgtable.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
#include <asm/immap_cpm2.h>
-#include <asm/ocp.h>
+#include <asm/ppc_sys.h>
#include <asm/kgdb.h>
#include <mm/mmu_decl.h>
#endif
};
-struct ocp_gfar_data mpc85xx_tsec1_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
- .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
- GFAR_HAS_PHY_INTR),
- .phyid = 0,
- .phyregidx = 0,
-};
-
-struct ocp_gfar_data mpc85xx_tsec2_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
- .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
- .interruptPHY = MPC85xx_IRQ_EXT5,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
- GFAR_HAS_PHY_INTR),
- .phyid = 1,
- .phyregidx = 0,
-};
-
-struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
- .flags = FS_I2C_SEPARATE_DFSRR,
-};
-
/* ************************************************************************ */
int
mpc85xx_cds_show_cpuinfo(struct seq_file *m)
#define ARCADIA_HOST_BRIDGE_IDSEL 17
#define ARCADIA_2ND_BRIDGE_IDSEL 3
+extern int mpc85xx_pci1_last_busno;
+
int
mpc85xx_exclude_device(u_char bus, u_char devfn)
{
if (bus == 0 && PCI_SLOT(devfn) == 0)
return PCIBIOS_DEVICE_NOT_FOUND;
#ifdef CONFIG_85xx_PCI2
- /* With the current code we know PCI2 will be bus 2, however this may
- * not be guarnteed */
- if (bus == 2 && PCI_SLOT(devfn) == 0)
- return PCIBIOS_DEVICE_NOT_FOUND;
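+	/* Also skip the host bridge itself on PCI2's root bus (the bus after PCI1's last) */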
+ if (mpc85xx_pci1_last_busno)
+ if (bus == (mpc85xx_pci1_last_busno + 1) && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
#endif
/* We explicitly do not go past the Tundra 320 Bridge */
if (bus == 1)
static void __init
mpc85xx_cds_setup_arch(void)
{
- struct ocp_def *def;
- struct ocp_gfar_data *einfo;
bd_t *binfo = (bd_t *) __res;
unsigned int freq;
+ struct gianfar_platform_data *pdata;
/* get the core frequency */
freq = binfo->bi_intfreq;
invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
#endif
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
- }
+ /* setup the board related information for the enet controllers */
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC1);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 0;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enetaddr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC2);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 1;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet1addr, 6);
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
- }
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
#else
ROOT_DEV = Root_HDA1;
#endif
-
- ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
}
/* ************************************************************************ */
#ifdef CONFIG_SERIAL_TEXT_DEBUG
{
bd_t *binfo = (bd_t *) __res;
+ struct uart_port p;
/* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
- binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+ binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+ memset(&p, 0, sizeof (p));
+ p.iotype = SERIAL_IO_MEM;
+ p.membase = (void *) binfo->bi_immr_base + MPC85xx_UART0_OFFSET;
+ p.uartclk = binfo->bi_busfreq;
+
+ gen550_init(0, &p);
+
+ memset(&p, 0, sizeof (p));
+ p.iotype = SERIAL_IO_MEM;
+ p.membase = (void *) binfo->bi_immr_base + MPC85xx_UART1_OFFSET;
+ p.uartclk = binfo->bi_busfreq;
+
+ gen550_init(1, &p);
}
#endif
strcpy(cmd_line, (char *) (r6 + KERNELBASE));
}
+ identify_ppc_sys_by_id(mfspr(SVR));
+
/* setup the PowerPC module struct */
ppc_md.setup_arch = mpc85xx_cds_setup_arch;
ppc_md.show_cpuinfo = mpc85xx_cds_show_cpuinfo;
#define MPC85XX_PCI1_IO_SIZE 0x01000000
/* PCI 2 memory map */
-#define MPC85XX_PCI2_LOWER_IO 0x01000000
-#define MPC85XX_PCI2_UPPER_IO 0x01ffffff
+/* Note: the standard PPC fixups will cause IO space to get bumped by
+ * hose->io_base_virt - isa_io_base => MPC85XX_PCI1_IO_SIZE */
+#define MPC85XX_PCI2_LOWER_IO 0x00000000
+#define MPC85XX_PCI2_UPPER_IO 0x00ffffff
#define MPC85XX_PCI2_LOWER_MEM 0xa0000000
#define MPC85XX_PCI2_UPPER_MEM 0xbfffffff
#define MPC85XX_PCI2_IO_SIZE 0x01000000
-#define SERIAL_PORT_DFNS \
- STD_UART_OP(0) \
- STD_UART_OP(1)
-
#endif /* __MACH_MPC85XX_CDS_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_devices.c
+ *
+ * MPC85xx Device descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/serial_8250.h>
+#include <linux/fsl_devices.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/ppc_sys.h>
+
+/* We use offsets for IORESOURCE_MEM since we do not know at compile time
+ * what CCSRBAR is; they will get fixed up by mach_mpc85xx_fixup
+ */
+
+static struct gianfar_platform_data mpc85xx_tsec1_pdata = {
+ .device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT |
+ FSL_GIANFAR_DEV_HAS_COALESCE | FSL_GIANFAR_DEV_HAS_RMON |
+ FSL_GIANFAR_DEV_HAS_MULTI_INTR,
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct gianfar_platform_data mpc85xx_tsec2_pdata = {
+ .device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT |
+ FSL_GIANFAR_DEV_HAS_COALESCE | FSL_GIANFAR_DEV_HAS_RMON |
+ FSL_GIANFAR_DEV_HAS_MULTI_INTR,
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct gianfar_platform_data mpc85xx_fec_pdata = {
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct fsl_i2c_platform_data mpc85xx_fsl_i2c_pdata = {
+ .device_flags = FSL_I2C_DEV_SEPARATE_DFSRR,
+};
+
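+/* DUART register block offsets within the MPC85xx CCSR area */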
+static struct plat_serial8250_port serial_platform_data[] = {
+ [0] = {
+ .mapbase = 0x4500,
+ .irq = MPC85xx_IRQ_DUART,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_SHARE_IRQ,
+ },
+ [1] = {
+ .mapbase = 0x4600,
+ .irq = MPC85xx_IRQ_DUART,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_SHARE_IRQ,
+ },
+};
+
+struct platform_device ppc_sys_platform_devices[] = {
+ [MPC85xx_TSEC1] = {
+ .name = "fsl-gianfar",
+ .id = 1,
+ .dev.platform_data = &mpc85xx_tsec1_pdata,
+ .num_resources = 4,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET1_OFFSET,
+ .end = MPC85xx_ENET1_OFFSET +
+ MPC85xx_ENET1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "tx",
+ .start = MPC85xx_IRQ_TSEC1_TX,
+ .end = MPC85xx_IRQ_TSEC1_TX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "rx",
+ .start = MPC85xx_IRQ_TSEC1_RX,
+ .end = MPC85xx_IRQ_TSEC1_RX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "error",
+ .start = MPC85xx_IRQ_TSEC1_ERROR,
+ .end = MPC85xx_IRQ_TSEC1_ERROR,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_TSEC2] = {
+ .name = "fsl-gianfar",
+ .id = 2,
+ .dev.platform_data = &mpc85xx_tsec2_pdata,
+ .num_resources = 4,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET2_OFFSET,
+ .end = MPC85xx_ENET2_OFFSET +
+ MPC85xx_ENET2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "tx",
+ .start = MPC85xx_IRQ_TSEC2_TX,
+ .end = MPC85xx_IRQ_TSEC2_TX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "rx",
+ .start = MPC85xx_IRQ_TSEC2_RX,
+ .end = MPC85xx_IRQ_TSEC2_RX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "error",
+ .start = MPC85xx_IRQ_TSEC2_ERROR,
+ .end = MPC85xx_IRQ_TSEC2_ERROR,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_FEC] = {
+ .name = "fsl-gianfar",
+ .id = 3,
+ .dev.platform_data = &mpc85xx_fec_pdata,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET3_OFFSET,
+ .end = MPC85xx_ENET3_OFFSET +
+ MPC85xx_ENET3_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+
+ },
+ {
+ .start = MPC85xx_IRQ_FEC,
+ .end = MPC85xx_IRQ_FEC,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_IIC1] = {
+ .name = "fsl-i2c",
+ .id = 1,
+ .dev.platform_data = &mpc85xx_fsl_i2c_pdata,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_IIC1_OFFSET,
+ .end = MPC85xx_IIC1_OFFSET +
+ MPC85xx_IIC1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_IIC1,
+ .end = MPC85xx_IRQ_IIC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA0] = {
+ .name = "fsl-dma",
+ .id = 0,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA0_OFFSET,
+ .end = MPC85xx_DMA0_OFFSET +
+ MPC85xx_DMA0_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA0,
+ .end = MPC85xx_IRQ_DMA0,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA1] = {
+ .name = "fsl-dma",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA1_OFFSET,
+ .end = MPC85xx_DMA1_OFFSET +
+ MPC85xx_DMA1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA1,
+ .end = MPC85xx_IRQ_DMA1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA2] = {
+ .name = "fsl-dma",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA2_OFFSET,
+ .end = MPC85xx_DMA2_OFFSET +
+ MPC85xx_DMA2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA2,
+ .end = MPC85xx_IRQ_DMA2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA3] = {
+ .name = "fsl-dma",
+ .id = 3,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA3_OFFSET,
+ .end = MPC85xx_DMA3_OFFSET +
+ MPC85xx_DMA3_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA3,
+ .end = MPC85xx_IRQ_DMA3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DUART] = {
+ .name = "serial8250",
+ .id = 0,
+ .dev.platform_data = serial_platform_data,
+ },
+ [MPC85xx_PERFMON] = {
+ .name = "fsl-perfmon",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_PERFMON_OFFSET,
+ .end = MPC85xx_PERFMON_OFFSET +
+ MPC85xx_PERFMON_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_PERFMON,
+ .end = MPC85xx_IRQ_PERFMON,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_SEC2] = {
+ .name = "fsl-sec2",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_SEC2_OFFSET,
+ .end = MPC85xx_SEC2_OFFSET +
+ MPC85xx_SEC2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_SEC2,
+ .end = MPC85xx_IRQ_SEC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+#ifdef CONFIG_CPM2
+ [MPC85xx_CPM_FCC1] = {
+ .name = "fsl-cpm-fcc",
+ .id = 1,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91300,
+ .end = 0x9131F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x91380,
+ .end = 0x9139F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC1,
+ .end = SIU_INT_FCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_FCC2] = {
+ .name = "fsl-cpm-fcc",
+ .id = 2,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91320,
+ .end = 0x9133F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x913A0,
+ .end = 0x913CF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC2,
+ .end = SIU_INT_FCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_FCC3] = {
+ .name = "fsl-cpm-fcc",
+ .id = 3,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91340,
+ .end = 0x9135F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x913D0,
+ .end = 0x913FF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC3,
+ .end = SIU_INT_FCC3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_I2C] = {
+ .name = "fsl-cpm-i2c",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91860,
+ .end = 0x918BF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_I2C,
+ .end = SIU_INT_I2C,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC1] = {
+ .name = "fsl-cpm-scc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A00,
+ .end = 0x91A1F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC1,
+ .end = SIU_INT_SCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC2] = {
+ .name = "fsl-cpm-scc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A20,
+ .end = 0x91A3F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC2,
+ .end = SIU_INT_SCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC3] = {
+ .name = "fsl-cpm-scc",
+ .id = 3,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A40,
+ .end = 0x91A5F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC3,
+ .end = SIU_INT_SCC3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC4] = {
+ .name = "fsl-cpm-scc",
+ .id = 4,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A60,
+ .end = 0x91A7F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC4,
+ .end = SIU_INT_SCC4,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SPI] = {
+ .name = "fsl-cpm-spi",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91AA0,
+ .end = 0x91AFF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SPI,
+ .end = SIU_INT_SPI,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_MCC1] = {
+ .name = "fsl-cpm-mcc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B30,
+ .end = 0x91B3F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_MCC1,
+ .end = SIU_INT_MCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_MCC2] = {
+ .name = "fsl-cpm-mcc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B50,
+ .end = 0x91B5F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_MCC2,
+ .end = SIU_INT_MCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SMC1] = {
+ .name = "fsl-cpm-smc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A80,
+ .end = 0x91A8F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SMC1,
+ .end = SIU_INT_SMC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SMC2] = {
+ .name = "fsl-cpm-smc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A90,
+ .end = 0x91A9F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SMC2,
+ .end = SIU_INT_SMC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_USB] = {
+ .name = "fsl-cpm-usb",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B60,
+ .end = 0x91B7F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_USB,
+ .end = SIU_INT_USB,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+#endif /* CONFIG_CPM2 */
+};
+
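+/* Convert the CCSR-relative memory resources above into absolute addresses once CCSRBAR is known */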
+static int __init mach_mpc85xx_fixup(struct platform_device *pdev)
+{
+ ppc_sys_fixup_mem_resource(pdev, CCSRBAR);
+ return 0;
+}
+
+static int __init mach_mpc85xx_init(void)
+{
+ ppc_sys_device_fixup = mach_mpc85xx_fixup;
+ return 0;
+}
+
+postcore_initcall(mach_mpc85xx_init);
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_sys.c
+ *
+ * MPC85xx System descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <asm/ppc_sys.h>
+
+struct ppc_sys_spec *cur_ppc_sys_spec;
+struct ppc_sys_spec ppc_sys_specs[] = {
+ {
+ .ppc_sys_name = "MPC8540",
+ .mask = 0xFFFF0000,
+ .value = 0x80300000,
+ .num_devices = 10,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_FEC, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8560",
+ .mask = 0xFFFF0000,
+ .value = 0x80700000,
+ .num_devices = 19,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3, MPC85xx_CPM_SCC4,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_MCC1, MPC85xx_CPM_MCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8541",
+ .mask = 0xFFFF0000,
+ .value = 0x80720000,
+ .num_devices = 13,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8541E",
+ .mask = 0xFFFF0000,
+ .value = 0x807A0000,
+ .num_devices = 14,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART, MPC85xx_SEC2,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8555",
+ .mask = 0xFFFF0000,
+ .value = 0x80710000,
+ .num_devices = 20,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_SMC1, MPC85xx_CPM_SMC2,
+ MPC85xx_CPM_USB,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8555E",
+ .mask = 0xFFFF0000,
+ .value = 0x80790000,
+ .num_devices = 21,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART, MPC85xx_SEC2,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_SMC1, MPC85xx_CPM_SMC2,
+ MPC85xx_CPM_USB,
+ },
+ },
+ { /* default match */
+ .ppc_sys_name = "",
+ .mask = 0x00000000,
+ .value = 0x00000000,
+ },
+};
#include <linux/serial_core.h>
#include <linux/initrd.h>
#include <linux/module.h>
+#include <linux/fsl_devices.h>
#include <asm/system.h>
#include <asm/pgtable.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
#include <asm/kgdb.h>
-#include <asm/ocp.h>
+#include <asm/ppc_sys.h>
#include <mm/mmu_decl.h>
#include <syslib/ppc85xx_common.h>
#include <syslib/ppc85xx_setup.h>
-struct ocp_gfar_data mpc85xx_tsec1_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
- .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
- .interruptPHY = MPC85xx_IRQ_EXT6,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
- .phyid = 25,
- .phyregidx = 0,
-};
-
-struct ocp_gfar_data mpc85xx_tsec2_def = {
- .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
- .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
- .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
- .interruptPHY = MPC85xx_IRQ_EXT7,
- .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
- .phyid = 26,
- .phyregidx = 0,
-};
-
-struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
- .flags = FS_I2C_SEPARATE_DFSRR,
-};
-
-
#ifdef CONFIG_SERIAL_8250
static void __init
sbc8560_early_serial_map(void)
static void __init
sbc8560_setup_arch(void)
{
- struct ocp_def *def;
- struct ocp_gfar_data *einfo;
bd_t *binfo = (bd_t *) __res;
unsigned int freq;
+ struct gianfar_platform_data *pdata;
/* get the core frequency */
freq = binfo->bi_intfreq;
invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
#endif
- /* Set up MAC addresses for the Ethernet devices */
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
- }
-
- def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
- if (def) {
- einfo = (struct ocp_gfar_data *) def->additions;
- memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
- }
+ /* setup the board related information for the enet controllers */
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC1);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT6;
+ pdata->phyid = 25;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enetaddr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC2);
+ pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR;
+ pdata->interruptPHY = MPC85xx_IRQ_EXT7;
+ pdata->phyid = 26;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet1addr, 6);
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
#else
ROOT_DEV = Root_HDA1;
#endif
-
- ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
}
/* ************************************************************************ */
strcpy(cmd_line, (char *) (r6 + KERNELBASE));
}
+ identify_ppc_sys_by_id(mfspr(SVR));
+
/* setup the PowerPC module struct */
ppc_md.setup_arch = sbc8560_setup_arch;
ppc_md.show_cpuinfo = sbc8560_show_cpuinfo;
#include <asm/mpc85xx.h>
#include <asm/irq.h>
#include <asm/immap_85xx.h>
-#include <asm/ocp.h>
#include <mm/mmu_decl.h>
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/stx_gp3.c
+ *
+ * STx GP3 board specific routines
+ *
+ * Dan Malek <dan@embeddededge.com>
+ * Copyright 2004 Embedded Edge, LLC
+ *
+ * Copied from mpc8560_ads.c
+ * Copyright 2002, 2003 Motorola Inc.
+ *
+ * Ported to 2.6, Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004-2005 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/blkdev.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/root_dev.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+#include <linux/fsl_devices.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/immap_cpm2.h>
+#include <asm/mpc85xx.h>
+#include <asm/ppc_sys.h>
+
+#include <syslib/cpm2_pic.h>
+#include <syslib/ppc85xx_common.h>
+
+extern void cpm2_reset(void);
+
+unsigned char __res[sizeof(bd_t)];
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+unsigned long pci_dram_offset = 0;
+#endif
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+static u8 gp3_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+ 0x0, /* External 0: */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 1: PCI slot 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI slot 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI slot 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 4: PCI slot 3 */
+#else
+ 0x0, /* External 1: */
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+ 0x0, /* External 4: */
+#endif
+ 0x0, /* External 5: */
+ 0x0, /* External 6: */
+ 0x0, /* External 7: */
+ 0x0, /* External 8: */
+ 0x0, /* External 9: */
+ 0x0, /* External 10: */
+ 0x0, /* External 11: */
+};
+
+/*
+ * Setup the architecture
+ */
+static void __init
+gp3_setup_arch(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+ struct gianfar_platform_data *pdata;
+
+ cpm2_reset();
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("gp3_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+ /* setup the board related information for the enet controllers */
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC1);
+/* pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR; */
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 2;
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enetaddr, 6);
+
+ pdata = (struct gianfar_platform_data *) ppc_sys_get_pdata(MPC85xx_TSEC2);
+/* pdata->board_flags = FSL_GIANFAR_BRD_HAS_PHY_INTR; */
+ pdata->interruptPHY = MPC85xx_IRQ_EXT5;
+ pdata->phyid = 4;
+ /* fixup phy address */
+ pdata->phy_reg_addr += binfo->bi_immr_base;
+ memcpy(pdata->mac_addr, binfo->bi_enet1addr, 6);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ printk ("bi_immr_base = %8.8lx\n", binfo->bi_immr_base);
+}
+
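+/* Demultiplex interrupts cascaded from the CPM2 interrupt controller */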
+static irqreturn_t cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
+{
+ while ((irq = cpm2_get_irq(regs)) >= 0)
+ __do_IRQ(irq, regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction cpm2_irqaction = {
+ .handler = cpm2_cascade,
+ .flags = SA_INTERRUPT,
+ .mask = CPU_MASK_NONE,
+ .name = "cpm2_cascade",
+};
+
+static void __init
+gp3_init_IRQ(void)
+{
+ int i;
+ volatile cpm2_map_t *immap = cpm2_immr;
+ bd_t *binfo = (bd_t *) __res;
+
+ /*
+ * Setup OpenPIC
+ */
+
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr =
+ binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = gp3_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (gp3_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /*
+	 * Let OpenPIC interrupts start from an offset, to
+ * leave space for cascading interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+ /*
+ * Setup CPM2 PIC
+ */
+
+	/* disable all CPM interrupts */
+ immap->im_intctl.ic_simrh = 0x0;
+ immap->im_intctl.ic_simrl = 0x0;
+
+ for (i = CPM_IRQ_OFFSET; i < (NR_CPM_INTS + CPM_IRQ_OFFSET); i++)
+ irq_desc[i].handler = &cpm2_pic;
+
+ /*
+ * Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ immap->im_intctl.ic_sicr = 0;
+ immap->im_intctl.ic_scprrh = 0x05309770;
+ immap->im_intctl.ic_scprrl = 0x05309770;
+
+ setup_irq(MPC85xx_IRQ_CPM, &cpm2_irqaction);
+
+ return;
+}
+
+static int
+gp3_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ bd_t *binfo = (bd_t *) __res;
+ uint memsize;
+ unsigned int freq;
+ extern unsigned long total_memory; /* in mm/init */
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ memsize = total_memory;
+
+	seq_printf(m, "Vendor\t\t: RPC Electronics STx\n");
+
+ switch (svid & 0xffff0000) {
+ case SVR_8540:
+ seq_printf(m, "Machine\t\t: GP3 - MPC8540\n");
+ break;
+ case SVR_8560:
+ seq_printf(m, "Machine\t\t: GP3 - MPC8560\n");
+ break;
+ default:
+ seq_printf(m, "Machine\t\t: unknown\n");
+ break;
+ }
+ seq_printf(m, "bus freq\t: %u.%.6u MHz\n", freq / 1000000,
+ freq % 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+#ifdef CONFIG_PCI
+int mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel,
+ unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQA, PIRQB, PIRQC, PIRQD},
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA},
+ };
+
+ const long min_idsel = 12, max_idsel = 15, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
+int mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+#endif /* CONFIG_PCI */
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed in a board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+
+ }
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ identify_ppc_sys_by_id(mfspr(SVR));
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = gp3_setup_arch;
+ ppc_md.show_cpuinfo = gp3_show_cpuinfo;
+
+ ppc_md.init_IRQ = gp3_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+ if (ppc_md.progress)
+ ppc_md.progress("platform_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/stx8560_gp3.h
+ *
+ * STx GP3 board definitions
+ *
+ * Dan Malek (dan@embeddededge.com)
+ * Copyright 2004 Embedded Edge, LLC
+ *
+ * Ported to 2.6, Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004-2005 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_STX_GP3_H
+#define __MACH_STX_GP3_H
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <asm/ppcboot.h>
+
+#define BOARD_CCSRBAR ((uint)0xe0000000)
+#define CCSRBAR_SIZE ((uint)1024*1024)
+
+#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
+
+#define BCSR_ADDR ((uint)0xfc000000)
+#define BCSR_SIZE ((uint)(16 * 1024))
+
+#define BCSR_TSEC1_RESET 0x00000080
+#define BCSR_TSEC2_RESET 0x00000040
+#define BCSR_LED1 0x00000008
+#define BCSR_LED2 0x00000004
+#define BCSR_LED3 0x00000002
+#define BCSR_LED4 0x00000001
+
+extern void mpc85xx_setup_hose(void) __init;
+extern void mpc85xx_restart(char *cmd);
+extern void mpc85xx_power_off(void);
+extern void mpc85xx_halt(void);
+extern int mpc85xx_show_cpuinfo(struct seq_file *m);
+extern void mpc85xx_init_IRQ(void) __init;
+extern unsigned long mpc85xx_find_end_of_memory(void) __init;
+extern void mpc85xx_calibrate_decr(void) __init;
+
+#define PCI_CFG_ADDR_OFFSET (0x8000)
+#define PCI_CFG_DATA_OFFSET (0x8004)
+
+/* PCI interrupt controller */
+#define PIRQA MPC85xx_IRQ_EXT1
+#define PIRQB MPC85xx_IRQ_EXT2
+#define PIRQC MPC85xx_IRQ_EXT3
+#define PIRQD MPC85xx_IRQ_EXT4
+#define PCI_MIN_IDSEL 16
+#define PCI_MAX_IDSEL 19
+#define PCI_IRQ_SLOT 4
+
+#define MPC85XX_PCI1_LOWER_IO 0x00000000
+#define MPC85XX_PCI1_UPPER_IO 0x00ffffff
+
+#define MPC85XX_PCI1_LOWER_MEM 0x80000000
+#define MPC85XX_PCI1_UPPER_MEM 0x9fffffff
+
+#define MPC85XX_PCI1_IO_BASE 0xe2000000
+#define MPC85XX_PCI1_MEM_OFFSET 0x00000000
+
+#define MPC85XX_PCI1_IO_SIZE 0x01000000
+
+#endif /* __MACH_STX_GP3_H */
void
apus_restart(char *cmd)
{
- cli();
+ local_irq_disable();
APUS_WRITE(APUS_REG_LOCK,
REGLOCK_BLACKMAGICK1|REGLOCK_BLACKMAGICK2);
--- /dev/null
+/*
+ * arch/ppc/platforms/chestnut.c
+ *
+ * Board setup routines for IBM Chestnut
+ *
+ * Author: <source@mvista.com>
+ *
+ * <2004> (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/blkdev.h>
+#include <linux/console.h>
+#include <linux/root_dev.h>
+#include <linux/initrd.h>
+#include <linux/delay.h>
+#include <linux/seq_file.h>
+#include <linux/ide.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/time.h>
+#include <asm/dma.h>
+#include <asm/io.h>
+#include <linux/irq.h>
+#include <asm/hw_irq.h>
+#include <asm/machdep.h>
+#include <asm/kgdb.h>
+#include <asm/bootinfo.h>
+#include <asm/mv64x60.h>
+#include <platforms/chestnut.h>
+
+static u32 boot_base; /* Virtual addr of 8bit boot */
+static u32 cpld_base; /* Virtual addr of CPLD Regs */
+
+static mv64x60_handle_t bh;
+
+extern void gen550_progress(char *, unsigned short);
+extern void gen550_init(int, struct uart_port *);
+extern void mv64360_pcibios_fixup(mv64x60_handle_t *bh);
+
+#define BIT(x) (1<<x)
+#define CHESTNUT_PRESERVE_MASK (BIT(MV64x60_CPU2DEV_0_WIN) | \
+ BIT(MV64x60_CPU2DEV_1_WIN) | \
+ BIT(MV64x60_CPU2DEV_2_WIN) | \
+ BIT(MV64x60_CPU2DEV_3_WIN) | \
+ BIT(MV64x60_CPU2BOOT_WIN))
+/**************************************************************************
+ * FUNCTION: chestnut_calibrate_decr
+ *
+ * DESCRIPTION: initialize decrementer interrupt frequency (used as system
+ * timer)
+ *
+ ****/
+static void __init
+chestnut_calibrate_decr(void)
+{
+ ulong freq;
+
+ freq = CHESTNUT_BUS_SPEED / 4;
+
+ printk("time_init: decrementer frequency = %lu.%.6lu MHz\n",
+ freq/1000000, freq%1000000);
+
+ tb_ticks_per_jiffy = freq / HZ;
+ tb_to_us = mulhwu_scale_factor(freq, 1000000);
+
+ return;
+}
+
+static int
+chestnut_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "vendor\t\t: IBM\n");
+ seq_printf(m, "machine\t\t: 750FX/GX Eval Board (Chestnut/Buckeye)\n");
+
+ return 0;
+}
+
+/**************************************************************************
+ * FUNCTION: chestnut_find_end_of_memory
+ *
+ * DESCRIPTION: ppc_md memory size callback
+ *
+ ****/
+unsigned long __init
+chestnut_find_end_of_memory(void)
+{
+ static int mem_size = 0;
+
+ if (mem_size == 0) {
+ mem_size = mv64x60_get_mem_size(CONFIG_MV64X60_NEW_BASE,
+ MV64x60_TYPE_MV64460);
+ }
+ return(mem_size);
+}
+
+#if defined(CONFIG_SERIAL_8250)
+static void __init
+chestnut_early_serial_map(void)
+{
+ struct uart_port port;
+
+ /* Setup serial port access */
+ memset(&port, 0, sizeof(port));
+ port.uartclk = BASE_BAUD * 16;
+ port.irq = UART0_INT;
+ port.flags = STD_COM_FLAGS | UPF_IOREMAP;
+ port.iotype = SERIAL_IO_MEM;
+ port.mapbase = CHESTNUT_UART0_IO_BASE;
+ port.regshift = 0;
+
+ if (early_serial_setup(&port) != 0)
+ printk("Early serial init of port 0 failed\n");
+
+	/* Assume early_serial_setup() doesn't modify port */
+ port.line = 1;
+ port.irq = UART1_INT;
+ port.mapbase = CHESTNUT_UART1_IO_BASE;
+
+ if (early_serial_setup(&port) != 0)
+ printk("Early serial init of port 1 failed\n");
+}
+#endif
+
+/**************************************************************************
+ * FUNCTION: chestnut_map_irq
+ *
+ * DESCRIPTION: look up the PCI interrupt line for a given slot (IDSEL) and pin
+ *
+ ****/
+static int __init
+chestnut_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] = {
+ {CHESTNUT_PCI_SLOT0_IRQ, CHESTNUT_PCI_SLOT0_IRQ,
+ CHESTNUT_PCI_SLOT0_IRQ, CHESTNUT_PCI_SLOT0_IRQ},
+ {CHESTNUT_PCI_SLOT1_IRQ, CHESTNUT_PCI_SLOT1_IRQ,
+ CHESTNUT_PCI_SLOT1_IRQ, CHESTNUT_PCI_SLOT1_IRQ},
+ {CHESTNUT_PCI_SLOT2_IRQ, CHESTNUT_PCI_SLOT2_IRQ,
+ CHESTNUT_PCI_SLOT2_IRQ, CHESTNUT_PCI_SLOT2_IRQ},
+ {CHESTNUT_PCI_SLOT3_IRQ, CHESTNUT_PCI_SLOT3_IRQ,
+ CHESTNUT_PCI_SLOT3_IRQ, CHESTNUT_PCI_SLOT3_IRQ},
+ };
+ const long min_idsel = 1, max_idsel = 4, irqs_per_slot = 4;
+
+ return (PCI_IRQ_TABLE_LOOKUP);
+}
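+
+/*
+ * For reference (a sketch of the conventional PCI_IRQ_TABLE_LOOKUP
+ * semantics, not a board-specific guarantee): the macro evaluates to
+ * pci_irq_table[idsel - min_idsel][pin - 1] when idsel is in range, so
+ * e.g. idsel 2, pin A resolves to CHESTNUT_PCI_SLOT1_IRQ here.
+ */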
+
+
+/**************************************************************************
+ * FUNCTION: chestnut_setup_bridge
+ *
+ * DESCRIPTION: initialize board-specific settings on the MV64360
+ *
+ ****/
+static void __init
+chestnut_setup_bridge(void)
+{
+ struct mv64x60_setup_info si;
+ int i;
+
+	if (ppc_md.progress)
+ ppc_md.progress("chestnut_setup_bridge: enter", 0);
+
+ memset(&si, 0, sizeof(si));
+
+ si.phys_reg_base = CONFIG_MV64X60_NEW_BASE;
+
+ /* setup only PCI bus 0 (bus 1 not used) */
+ si.pci_0.enable_bus = 1;
+ si.pci_0.pci_io.cpu_base = CHESTNUT_PCI0_IO_PROC_ADDR;
+ si.pci_0.pci_io.pci_base_hi = 0;
+ si.pci_0.pci_io.pci_base_lo = CHESTNUT_PCI0_IO_PCI_ADDR;
+ si.pci_0.pci_io.size = CHESTNUT_PCI0_IO_SIZE;
+ si.pci_0.pci_io.swap = MV64x60_CPU2PCI_SWAP_NONE; /* no swapping */
+ si.pci_0.pci_mem[0].cpu_base = CHESTNUT_PCI0_MEM_PROC_ADDR;
+ si.pci_0.pci_mem[0].pci_base_hi = CHESTNUT_PCI0_MEM_PCI_HI_ADDR;
+ si.pci_0.pci_mem[0].pci_base_lo = CHESTNUT_PCI0_MEM_PCI_LO_ADDR;
+ si.pci_0.pci_mem[0].size = CHESTNUT_PCI0_MEM_SIZE;
+ si.pci_0.pci_mem[0].swap = MV64x60_CPU2PCI_SWAP_NONE; /* no swapping */
+ si.pci_0.pci_cmd_bits = 0;
+ si.pci_0.latency_timer = 0x80;
+
+ si.window_preserve_mask_32_lo = CHESTNUT_PRESERVE_MASK;
+
+ for (i=0; i<MV64x60_CPU2MEM_WINDOWS; i++) {
+ si.cpu_prot_options[i] = 0;
+#ifdef CONFIG_NOT_CACHE_COHERENT
+ si.cpu_snoop_options[i] = MV64360_CPU_SNOOP_NONE;
+#else
+ si.cpu_snoop_options[i] = MV64360_CPU_SNOOP_WB; /* risky */
+#endif
+ si.pci_0.acc_cntl_options[i] =
+#ifdef CONFIG_NOT_CACHE_COHERENT
+ MV64360_PCI_ACC_CNTL_SNOOP_NONE |
+#else
+ MV64360_PCI_ACC_CNTL_SNOOP_WB | /* risky */
+#endif
+ MV64360_PCI_ACC_CNTL_SWAP_NONE |
+ MV64360_PCI_ACC_CNTL_MBURST_32_BYTES |
+ MV64360_PCI_ACC_CNTL_RDSIZE_32_BYTES;
+ }
+
+ /* Lookup host bridge - on CPU 0 - no SMP support */
+ if (mv64x60_init(&bh, &si)) {
+ printk("\n\nPCI Bridge initialization failed!\n");
+ }
+
+ pci_dram_offset = 0;
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = chestnut_map_irq;
+ ppc_md.pci_exclude_device = mv64x60_pci_exclude_device;
+
+ mv64x60_set_bus(&bh, 0, 0);
+ bh.hose_a->first_busno = 0;
+ bh.hose_a->last_busno = 0xff;
+ bh.hose_a->last_busno = pciauto_bus_scan(bh.hose_a, 0);
+
+}
+
+void __init
+chestnut_setup_peripherals(void)
+{
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2BOOT_WIN,
+ CHESTNUT_BOOT_8BIT_BASE, CHESTNUT_BOOT_8BIT_SIZE, 0);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_0_WIN,
+ CHESTNUT_32BIT_BASE, CHESTNUT_32BIT_SIZE, 0);
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_1_WIN,
+ CHESTNUT_CPLD_BASE, CHESTNUT_CPLD_SIZE, 0);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_2_WIN,
+ CHESTNUT_UART_BASE, CHESTNUT_UART_SIZE, 0);
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_3_WIN,
+ CHESTNUT_FRAM_BASE, CHESTNUT_FRAM_SIZE, 0);
+	/* Set up window for internal SRAM (256 KByte in size) */
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2SRAM_WIN,
+ CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_SIZE, 0);
+
+ boot_base = (u32)ioremap(CHESTNUT_BOOT_8BIT_BASE,
+ CHESTNUT_BOOT_8BIT_SIZE);
+ cpld_base = (u32)ioremap(CHESTNUT_CPLD_BASE, CHESTNUT_CPLD_SIZE);
+
+ /*
+ * Configure internal SRAM -
+	 * Cache coherent write back, in case
+	 * CONFIG_MV64360_SRAM_CACHE_COHERENT is set
+ * Parity enabled.
+ * Parity error propagation
+ * Arbitration not parked for CPU only
+ * Other bits are reserved.
+ */
+#ifdef CONFIG_MV64360_SRAM_CACHE_COHERENT
+ mv64x60_write(&bh, MV64360_SRAM_CONFIG, 0x001600b2);
+#else
+ mv64x60_write(&bh, MV64360_SRAM_CONFIG, 0x001600b0);
+#endif
+
+ /*
+	 * Clear the SRAM to 0. Note that this generates parity errors on the
+	 * internal data path in the SRAM, since this is the first access to it
+	 * after reset, while it is not yet configured
+ */
+ memset((void *)CHESTNUT_INTERNAL_SRAM_BASE, 0, CHESTNUT_INTERNAL_SRAM_SIZE);
+ /*
+ * Configure MPP pins for PCI DMA
+ *
+ * PCI Slot GNT pin REQ pin
+ * 0 MPP16 MPP17
+ * 1 MPP18 MPP19
+ * 2 MPP20 MPP21
+ * 3 MPP22 MPP23
+ */
+ mv64x60_write(&bh, MV64x60_MPP_CNTL_2,
+ (0x1 << 0) | /* MPPSel16 PCI0_GNT[0] */
+ (0x1 << 4) | /* MPPSel17 PCI0_REQ[0] */
+ (0x1 << 8) | /* MPPSel18 PCI0_GNT[1] */
+ (0x1 << 12) | /* MPPSel19 PCI0_REQ[1] */
+ (0x1 << 16) | /* MPPSel20 PCI0_GNT[2] */
+ (0x1 << 20) | /* MPPSel21 PCI0_REQ[2] */
+ (0x1 << 24) | /* MPPSel22 PCI0_GNT[3] */
+ (0x1 << 28)); /* MPPSel23 PCI0_REQ[3] */
+ /*
+ * Set unused MPP pins for output, as per schematic note
+ *
+ * Unused Pins: MPP01, MPP02, MPP04, MPP05, MPP06
+ * MPP09, MPP10, MPP13, MPP14, MPP15
+ */
+ mv64x60_clr_bits(&bh, MV64x60_MPP_CNTL_0,
+ (0xf << 4) | /* MPPSel01 GPIO[1] */
+ (0xf << 8) | /* MPPSel02 GPIO[2] */
+ (0xf << 16) | /* MPPSel04 GPIO[4] */
+ (0xf << 20) | /* MPPSel05 GPIO[5] */
+ (0xf << 24)); /* MPPSel06 GPIO[6] */
+ mv64x60_clr_bits(&bh, MV64x60_MPP_CNTL_1,
+ (0xf << 4) | /* MPPSel09 GPIO[9] */
+ (0xf << 8) | /* MPPSel10 GPIO[10] */
+ (0xf << 20) | /* MPPSel13 GPIO[13] */
+ (0xf << 24) | /* MPPSel14 GPIO[14] */
+ (0xf << 28)); /* MPPSel15 GPIO[15] */
+ mv64x60_set_bits(&bh, MV64x60_GPP_IO_CNTL,
+ BIT(1) | BIT(2) | BIT(4) | BIT(5) | BIT(6) |
+ BIT(9) | BIT(10) | BIT(13) | BIT(14) | BIT(15)); /* Output */
+
+ /*
+ * Configure the following MPP pins to indicate a level
+ * triggered interrupt
+ *
+	 * MPP24 - Board Reset (just map the MPP & GPP for chestnut_restart)
+ * MPP25 - UART A (high)
+ * MPP26 - UART B (high)
+ * MPP28 - PCI Slot 3 (low)
+ * MPP29 - PCI Slot 2 (low)
+ * MPP30 - PCI Slot 1 (low)
+ * MPP31 - PCI Slot 0 (low)
+ */
+ mv64x60_clr_bits(&bh, MV64x60_MPP_CNTL_3,
+ BIT(3) | BIT(2) | BIT(1) | BIT(0) | /* MPP 24 */
+ BIT(7) | BIT(6) | BIT(5) | BIT(4) | /* MPP 25 */
+ BIT(11) | BIT(10) | BIT(9) | BIT(8) | /* MPP 26 */
+ BIT(19) | BIT(18) | BIT(17) | BIT(16) | /* MPP 28 */
+ BIT(23) | BIT(22) | BIT(21) | BIT(20) | /* MPP 29 */
+ BIT(27) | BIT(26) | BIT(25) | BIT(24) | /* MPP 30 */
+ BIT(31) | BIT(30) | BIT(29) | BIT(28)); /* MPP 31 */
+
+ /*
+ * Define GPP 25 (high), 26 (high), 28 (low), 29 (low), 30 (low),
+ * 31 (low) interrupt polarity input signal and level triggered
+ */
+ mv64x60_clr_bits(&bh, MV64x60_GPP_LEVEL_CNTL, BIT(25) | BIT(26));
+ mv64x60_set_bits(&bh, MV64x60_GPP_LEVEL_CNTL,
+ BIT(28) | BIT(29) | BIT(30) | BIT(31));
+ mv64x60_clr_bits(&bh, MV64x60_GPP_IO_CNTL,
+ BIT(25) | BIT(26) | BIT(28) | BIT(29) | BIT(30) |
+ BIT(31));
+
+ /* Config GPP interrupt controller to respond to level trigger */
+ mv64x60_set_bits(&bh, MV64360_COMM_ARBITER_CNTL, BIT(10));
+
+ /*
+ * Dismiss and then enable interrupt on GPP interrupt cause for CPU #0
+ */
+ mv64x60_write(&bh, MV64x60_GPP_INTR_CAUSE,
+ ~(BIT(25) | BIT(26) | BIT(28) | BIT(29) | BIT(30) |
+ BIT(31)));
+ mv64x60_set_bits(&bh, MV64x60_GPP_INTR_MASK,
+ BIT(25) | BIT(26) | BIT(28) | BIT(29) | BIT(30) |
+ BIT(31));
+
+ /*
+ * Dismiss and then enable interrupt on CPU #0 high cause register
+ * BIT27 summarizes GPP interrupts 24-31
+ */
+ mv64x60_set_bits(&bh, MV64360_IC_CPU0_INTR_MASK_HI, BIT(27));
+
+ if (ppc_md.progress)
+ ppc_md.progress("chestnut_setup_bridge: exit", 0);
+}
+
+/**************************************************************************
+ * FUNCTION: chestnut_setup_arch
+ *
+ * DESCRIPTION: ppc_md machine configuration callback
+ *
+ ****/
+static void __init
+chestnut_setup_arch(void)
+{
+ if (ppc_md.progress)
+ ppc_md.progress("chestnut_setup_arch: enter", 0);
+
+ /* init to some ~sane value until calibrate_delay() runs */
+ loops_per_jiffy = 50000000 / HZ;
+
+ /* if the time base value is greater than bus freq/4 (the TB and
+ * decrementer tick rate) + signed integer rollover value, we
+ * can spend a fair amount of time waiting for the rollover to
+ * happen. To get around this, initialize the time base register
+ * to a "safe" value.
+ */
+ set_tb(0, 0);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_SDA2;
+#endif
+
+ /*
+ * Set up the L2CR register.
+ */
+ _set_L2CR(_get_L2CR() | L2CR_L2E);
+
+ chestnut_setup_bridge();
+ chestnut_setup_peripherals();
+
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+
+#if defined(CONFIG_SERIAL_8250)
+ chestnut_early_serial_map();
+#endif
+
+ /* Identify the system */
+ printk(KERN_INFO "System Identification: IBM 750FX/GX Eval Board\n");
+ printk(KERN_INFO "IBM 750FX/GX port (C) 2004 MontaVista Software, Inc. (source@mvista.com)\n");
+
+ if (ppc_md.progress)
+ ppc_md.progress("chestnut_setup_arch: exit", 0);
+
+ return;
+}
+
+/**************************************************************************
+ * FUNCTION: chestnut_restart
+ *
+ * DESCRIPTION: ppc_md machine reset callback
+ * reset the board via the CPLD command register
+ *
+ ****/
+static void
+chestnut_restart(char *cmd)
+{
+ volatile ulong i = 10000000;
+
+ local_irq_disable();
+
+ /*
+ * Set CPLD Reg 3 bit 0 to 1 to allow MPP signals on reset to work
+ *
+ * MPP24 - board reset
+ */
+ writeb(0x1, (void __iomem *)(cpld_base+3));
+
+ /* GPP pin tied to MPP earlier */
+ mv64x60_set_bits(&bh, MV64x60_GPP_VALUE_SET, BIT(24));
+
+ while (i-- > 0);
+ panic("restart failed\n");
+}
+
+static void
+chestnut_halt(void)
+{
+ local_irq_disable();
+ for (;;);
+ /* NOTREACHED */
+}
+
+static void
+chestnut_power_off(void)
+{
+ chestnut_halt();
+ /* NOTREACHED */
+}
+
+#define SET_PCI_COMMAND_INVALIDATE
+#ifdef SET_PCI_COMMAND_INVALIDATE
+/*
+ * Dave Wilhardt found that PCI_COMMAND_INVALIDATE must
+ * be set for each device if you are using cache coherency.
+ */
+static void __init
+set_pci_command_invalidate(void)
+{
+ struct pci_dev *dev = NULL;
+ u16 val;
+
+ while ((dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) {
+ pci_read_config_word(dev, PCI_COMMAND, &val);
+ val |= PCI_COMMAND_INVALIDATE;
+ pci_write_config_word(dev, PCI_COMMAND, val);
+
+ pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE,
+ L1_CACHE_LINE_SIZE >> 2);
+ }
+}
+#endif
+
+static void __init
+chestnut_pci_fixups(void)
+{
+#ifdef SET_PCI_COMMAND_INVALIDATE
+ set_pci_command_invalidate();
+#endif
+}
+
+/**************************************************************************
+ * FUNCTION: chestnut_map_io
+ *
+ * DESCRIPTION: configure fixed memory-mapped IO
+ *
+ ****/
+static void __init
+chestnut_map_io(void)
+{
+#ifdef CONFIG_MV64360_SRAM_CACHEABLE
+ io_block_mapping(CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_SIZE,
+ _PAGE_KERNEL | _PAGE_GUARDED);
+#else
+#ifdef CONFIG_MV64360_SRAM_CACHE_COHERENT
+ io_block_mapping(CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_SIZE,
+ _PAGE_KERNEL | _PAGE_GUARDED | _PAGE_COHERENT);
+#else
+ io_block_mapping(CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_BASE,
+ CHESTNUT_INTERNAL_SRAM_SIZE,
+ _PAGE_IO);
+#endif /* !CONFIG_MV64360_SRAM_CACHE_COHERENT */
+#endif /* !CONFIG_MV64360_SRAM_CACHEABLE */
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ io_block_mapping(CHESTNUT_UART_BASE, CHESTNUT_UART_BASE, 0x100000, _PAGE_IO);
+#endif
+}
+
+/**************************************************************************
+ * FUNCTION: chestnut_set_bat
+ *
+ * DESCRIPTION: configures a (temporary) bat mapping for early access to
+ * device I/O
+ *
+ ****/
+static __inline__ void
+chestnut_set_bat(void)
+{
+ mb();
+ mtspr(DBAT3U, 0xf0001ffe);
+ mtspr(DBAT3L, 0xf000002a);
+ mb();
+
+ return;
+}
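+
+/*
+ * Decoding the values above (a sketch, assuming the usual 32-bit PowerPC
+ * BAT register layout): DBAT3U = 0xf0001ffe maps a 256MB block at
+ * 0xf0000000 that is valid in supervisor mode (BL = 0x7ff, Vs = 1), and
+ * DBAT3L = 0xf000002a marks it cache-inhibited, guarded, read/write
+ * (WIMG = 0101, PP = 10), which covers the MV64460 register space at
+ * 0xf1000000 before the page tables are available.
+ */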
+
+/**************************************************************************
+ * FUNCTION: platform_init
+ *
+ * DESCRIPTION: main entry point for configuring board-specific machine
+ * callbacks
+ *
+ ****/
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ parse_bootinfo(find_bootinfo());
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ isa_mem_base = 0;
+
+ ppc_md.setup_arch = chestnut_setup_arch;
+ ppc_md.show_cpuinfo = chestnut_show_cpuinfo;
+ ppc_md.irq_canonicalize = NULL;
+ ppc_md.init_IRQ = mv64360_init_irq;
+ ppc_md.get_irq = mv64360_get_irq;
+ ppc_md.init = NULL;
+
+ ppc_md.find_end_of_memory = chestnut_find_end_of_memory;
+ ppc_md.setup_io_mappings = chestnut_map_io;
+ ppc_md.pcibios_fixup = chestnut_pci_fixups;
+
+ ppc_md.restart = chestnut_restart;
+ ppc_md.power_off = chestnut_power_off;
+ ppc_md.halt = chestnut_halt;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = chestnut_calibrate_decr;
+
+ ppc_md.nvram_read_val = NULL;
+ ppc_md.nvram_write_val = NULL;
+
+ ppc_md.heartbeat = NULL;
+
+ ppc_md.pcibios_fixup = chestnut_pci_fixups;
+
+ bh.p_base = CONFIG_MV64X60_NEW_BASE;
+
+ chestnut_set_bat();
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif
+#if defined(CONFIG_KGDB)
+ ppc_md.kgdb_map_scc = gen550_kgdb_map_scc;
+#endif
+
+ if (ppc_md.progress)
+ ppc_md.progress("chestnut_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/chestnut.h
+ *
+ * Definitions for IBM 750FXGX Eval (Chestnut)
+ *
+ * Author: <source@mvista.com>
+ *
+ * Based on Artesyn Katana code done by Tim Montgomery <timm@artesyncp.com>
+ * Based on code done by Rabeeh Khoury - rabeeh@galileo.co.il
+ * Based on code done by Mark A. Greer <mgreer@mvista.com>
+ *
+ * <2004> (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+/*
+ * This is the CPU physical memory map (windows must be at least 1MB and start
+ * on a boundary that is a multiple of the window size):
+ *
+ * On the IBM 750FXGX Eval board, switch U17 allows the MV64460 registers
+ * to be placed at only two addresses, 0x14000000 or 0xf1000000; only the
+ * 0xf1000000 mapping is implemented at this time.
+ *
+ * 0xfff00000-0xffffffff - 8 Flash
+ * 0xffd00000-0xffd00004 - CPLD
+ * 0xffc00000-0xffc0000f - UART
+ * 0xffb00000-0xffb07fff - FRAM
+ * 0xffa00000-0xffafffff - *** HOLE ***
+ * 0xff900000-0xff9fffff - MV64460 Integrated SRAM
+ * 0xfe000000-0xff8fffff - *** HOLE ***
+ * 0xfc000000-0xfdffffff - 32bit Flash
+ * 0xf1010000-0xfbffffff - *** HOLE ***
+ * 0xf1000000-0xf100ffff - MV64460 Registers
+ */
+
+#ifndef __PPC_PLATFORMS_CHESTNUT_H__
+#define __PPC_PLATFORMS_CHESTNUT_H__
+
+#define CHESTNUT_BOOT_8BIT_BASE 0xfff00000
+#define CHESTNUT_BOOT_8BIT_SIZE_ACTUAL (1024*1024)
+#define CHESTNUT_BOOT_SRAM_BASE 0xffe00000
+#define CHESTNUT_BOOT_SRAM_SIZE_ACTUAL (1024*1024)
+#define CHESTNUT_CPLD_BASE 0xffd00000
+#define CHESTNUT_CPLD_SIZE_ACTUAL 5
+#define CHESTNUT_CPLD_REG3 (CHESTNUT_CPLD_BASE+3)
+#define CHESTNUT_UART_BASE 0xffc00000
+#define CHESTNUT_UART_SIZE_ACTUAL 16
+#define CHESTNUT_FRAM_BASE 0xffb00000
+#define CHESTNUT_FRAM_SIZE_ACTUAL (32*1024)
+#define CHESTNUT_BRIDGE_REG_BASE 0xf1000000
+#define CHESTNUT_INTERNAL_SRAM_BASE 0xff900000
+#define CHESTNUT_INTERNAL_SRAM_SIZE_ACTUAL (256*1024)
+#define CHESTNUT_32BIT_BASE 0xfc000000
+#define CHESTNUT_32BIT_SIZE (32*1024*1024)
+
+#define CHESTNUT_BOOT_8BIT_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_BOOT_8BIT_SIZE_ACTUAL)
+#define CHESTNUT_BOOT_SRAM_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_BOOT_SRAM_SIZE_ACTUAL)
+#define CHESTNUT_CPLD_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_CPLD_SIZE_ACTUAL)
+#define CHESTNUT_UART_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_UART_SIZE_ACTUAL)
+#define CHESTNUT_FRAM_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_FRAM_SIZE_ACTUAL)
+#define CHESTNUT_INTERNAL_SRAM_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ CHESTNUT_INTERNAL_SRAM_SIZE_ACTUAL)
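+
+/*
+ * Note: per the memory-map comment at the top of this file, bridge windows
+ * must be at least 1MB (MV64360_WINDOW_SIZE_MIN), so tiny regions such as
+ * the 5-byte CPLD or the 16-byte UART block are padded up to the minimum
+ * window size by the max() wrappers above.
+ */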
+
+#define CHESTNUT_BUS_SPEED 200000000
+#define CHESTNUT_PIBS_DATABASE 0xf0000 /* from PIBS src code */
+
+#define MV64360_ETH_PORT_SERIAL_CONTROL_REG_PORT0 0x243c
+#define MV64360_ETH_PORT_SERIAL_CONTROL_REG_PORT1 0x283c
+
+/*
+ * PCI windows
+ */
+
+#define CHESTNUT_PCI0_MEM_PROC_ADDR 0x80000000
+#define CHESTNUT_PCI0_MEM_PCI_HI_ADDR 0x00000000
+#define CHESTNUT_PCI0_MEM_PCI_LO_ADDR 0x80000000
+#define CHESTNUT_PCI0_MEM_SIZE 0x10000000
+#define CHESTNUT_PCI0_IO_PROC_ADDR 0xa0000000
+#define CHESTNUT_PCI0_IO_PCI_ADDR 0x00000000
+#define CHESTNUT_PCI0_IO_SIZE 0x01000000
+
+/*
+ * Board-specific IRQ info
+ */
+#define CHESTNUT_PCI_SLOT0_IRQ 64+31
+#define CHESTNUT_PCI_SLOT1_IRQ 64+30
+#define CHESTNUT_PCI_SLOT2_IRQ 64+29
+#define CHESTNUT_PCI_SLOT3_IRQ 64+28
+
+/* serial port definitions */
+#define CHESTNUT_UART0_IO_BASE CHESTNUT_UART_BASE+8
+#define CHESTNUT_UART1_IO_BASE CHESTNUT_UART_BASE
+
+#define UART0_INT 64+25
+#define UART1_INT 64+26
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 2
+#endif
+
+/* Rate for the 3.6864 MHz clock for the onboard serial chip */
+#define BASE_BAUD ( 3686400 / 16 )
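+/*
+ * 3686400 / 16 = 230400; with a 16x-oversampling 16550-style UART this is
+ * the highest baud rate the 3.6864 MHz clock can generate (divisor == 1).
+ */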
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+#define STD_UART_OP(num) \
+ { 0, BASE_BAUD, 0, UART##num##_INT, STD_COM_FLAGS, \
+ iomem_base: (u8 *)CHESTNUT_UART##num##_IO_BASE, \
+ io_type: SERIAL_IO_MEM},
+
+#define SERIAL_PORT_DFNS \
+ STD_UART_OP(0) \
+ STD_UART_OP(1)
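+
+/*
+ * As a sketch of the expansion: STD_UART_OP(0) token-pastes in UART0_INT
+ * and CHESTNUT_UART0_IO_BASE, so SERIAL_PORT_DFNS provides the two
+ * on-board UARTs to the legacy serial port table that consumes it.
+ */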
+
+#endif /* __PPC_PLATFORMS_CHESTNUT_H__ */
#include <asm/open_pic.h>
/* LongTrail */
-unsigned long gg2_pci_config_base;
+void __iomem *gg2_pci_config_base;
/*
* The VLSI Golden Gate II has only 512K of PCI configuration space, so we
int __chrp gg2_read_config(struct pci_bus *bus, unsigned int devfn, int off,
int len, u32 *val)
{
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
struct pci_controller *hose = bus->sysdata;
if (bus->number > 7)
cfg_data = hose->cfg_data + ((bus->number<<16) | (devfn<<8) | off);
switch (len) {
case 1:
- *val = in_8((u8 *)cfg_data);
+ *val = in_8(cfg_data);
break;
case 2:
- *val = in_le16((u16 *)cfg_data);
+ *val = in_le16(cfg_data);
break;
default:
- *val = in_le32((u32 *)cfg_data);
+ *val = in_le32(cfg_data);
break;
}
return PCIBIOS_SUCCESSFUL;
int __chrp gg2_write_config(struct pci_bus *bus, unsigned int devfn, int off,
int len, u32 val)
{
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
struct pci_controller *hose = bus->sysdata;
if (bus->number > 7)
cfg_data = hose->cfg_data + ((bus->number<<16) | (devfn<<8) | off);
switch (len) {
case 1:
- out_8((u8 *)cfg_data, val);
+ out_8(cfg_data, val);
break;
case 2:
- out_le16((u16 *)cfg_data, val);
+ out_le16(cfg_data, val);
break;
default:
- out_le32((u32 *)cfg_data, val);
+ out_le32(cfg_data, val);
break;
}
return PCIBIOS_SUCCESSFUL;
rtas_read_config(struct pci_bus *bus, unsigned int devfn, int offset,
int len, u32 *val)
{
+ struct pci_controller *hose = bus->sysdata;
unsigned long addr = (offset & 0xff) | ((devfn & 0xff) << 8)
- | ((bus->number & 0xff) << 16);
+ | (((bus->number - hose->first_busno) & 0xff) << 16)
+ | (hose->index << 24);
unsigned long ret = ~0UL;
int rval;
rtas_write_config(struct pci_bus *bus, unsigned int devfn, int offset,
int len, u32 val)
{
+ struct pci_controller *hose = bus->sysdata;
unsigned long addr = (offset & 0xff) | ((devfn & 0xff) << 8)
- | ((bus->number & 0xff) << 16);
+ | (((bus->number - hose->first_busno) & 0xff) << 16)
+ | (hose->index << 24);
int rval;
rval = call_rtas("write-pci-config", 3, 1, NULL, addr, len, val);
iounmap(reg);
}
+/* Marvell Discovery II based Pegasos 2 */
+static void __init setup_peg2(struct pci_controller *hose, struct device_node *dev)
+{
+ struct device_node *root = find_path_device("/");
+ struct device_node *rtas;
+
+ rtas = of_find_node_by_name (root, "rtas");
+ if (rtas) {
+ hose->ops = &rtas_pci_ops;
+ } else {
+ printk ("RTAS supporting Pegasos OF not found, please upgrade"
+ " your firmware\n");
+ }
+ pci_assign_all_busses = 1;
+}
+
void __init
chrp_find_bridges(void)
{
struct pci_controller *hose;
unsigned int *dma;
char *model, *machine;
- int is_longtrail = 0, is_mot = 0;
+ int is_longtrail = 0, is_mot = 0, is_pegasos = 0;
struct device_node *root = find_path_device("/");
/*
if (machine != NULL) {
is_longtrail = strncmp(machine, "IBM,LongTrail", 13) == 0;
is_mot = strncmp(machine, "MOT", 3) == 0;
+ if (strncmp(machine, "Pegasos2", 8) == 0)
+ is_pegasos = 2;
+ else if (strncmp(machine, "Pegasos", 7) == 0)
+ is_pegasos = 1;
}
for (dev = root->child; dev != NULL; dev = dev->sibling) {
if (dev->type == NULL || strcmp(dev->type, "pci") != 0)
|| strncmp(model, "Motorola, Grackle", 17) == 0) {
setup_grackle(hose);
} else if (is_longtrail) {
+ void __iomem *p = ioremap(GG2_PCI_CONFIG_BASE, 0x80000);
hose->ops = &gg2_pci_ops;
- hose->cfg_data = (unsigned char *)
- ioremap(GG2_PCI_CONFIG_BASE, 0x80000);
- gg2_pci_config_base = (unsigned long) hose->cfg_data;
+ hose->cfg_data = p;
+ gg2_pci_config_base = p;
+ } else if (is_pegasos == 1) {
+ setup_indirect_pci(hose, 0xfec00cf8, 0xfee00cfc);
+ } else if (is_pegasos == 2) {
+ setup_peg2(hose, dev);
} else {
printk("No methods for %s (model %s), using RTAS\n",
dev->full_name, model);
}
}
- ppc_md.pcibios_fixup = chrp_pcibios_fixup;
+ /* Do not fixup interrupts from OF tree on pegasos */
+ if (is_pegasos == 0)
+ ppc_md.pcibios_fixup = chrp_pcibios_fixup;
}
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <linux/initrd.h>
+#include <linux/module.h>
#include <asm/io.h>
#include <asm/pgtable.h>
extern unsigned long pmac_find_end_of_memory(void);
extern int of_show_percpuinfo(struct seq_file *, int);
+int _chrp_type;
+EXPORT_SYMBOL(_chrp_type);
+
/*
* XXX this should be in xmon.h, but putting it there means xmon.h
* has to include <linux/interrupt.h> (to get irqreturn_t), which
if (!strncmp(model, "IBM,LongTrail", 13)) {
/* VLSI VAS96011/12 `Golden Gate 2' */
/* Memory banks */
- sdramen = (in_le32((unsigned *)(gg2_pci_config_base+
- GG2_PCI_DRAM_CTRL))
+ sdramen = (in_le32(gg2_pci_config_base + GG2_PCI_DRAM_CTRL)
>>31) & 1;
for (i = 0; i < (sdramen ? 4 : 6); i++) {
- t = in_le32((unsigned *)(gg2_pci_config_base+
+ t = in_le32(gg2_pci_config_base+
GG2_PCI_DRAM_BANK0+
- i*4));
+ i*4);
if (!(t & 1))
continue;
switch ((t>>8) & 0x1f) {
gg2_memtypes[sdramen ? 1 : ((t>>1) & 3)]);
}
/* L2 cache */
- t = in_le32((unsigned *)(gg2_pci_config_base+GG2_PCI_CC_CTRL));
+ t = in_le32(gg2_pci_config_base+GG2_PCI_CC_CTRL);
seq_printf(m, "board l2\t: %s %s (%s)\n",
gg2_cachesizes[(t>>7) & 3],
gg2_cachetypes[(t>>2) & 3],
}
-void __init
-chrp_setup_arch(void)
+static void __init pegasos_set_l2cr(void)
+{
+ struct device_node *np;
+
+ /* On Pegasos, enable the l2 cache if needed, as the OF forgets it */
+ if (_chrp_type != _CHRP_Pegasos)
+ return;
+
+ /* Enable L2 cache if needed */
+ np = find_type_devices("cpu");
+ if (np != NULL) {
+ unsigned int *l2cr = (unsigned int *)
+ get_property (np, "l2cr", NULL);
+ if (l2cr == NULL) {
+ printk ("Pegasos l2cr : no cpu l2cr property found\n");
+ return;
+ }
+ if (!((*l2cr) & 0x80000000)) {
+ printk ("Pegasos l2cr : L2 cache was not active, "
+ "activating\n");
+ _set_L2CR(0);
+ _set_L2CR((*l2cr) | 0x80000000);
+ }
+ }
+}
+
+void __init chrp_setup_arch(void)
{
struct device_node *device;
#endif
ROOT_DEV = Root_SDA2; /* sda2 (sda1 is for the kernel) */
+ /* On pegasos, enable the L2 cache if not already done by OF */
+ pegasos_set_l2cr();
+
/* Lookup PCI host bridges */
chrp_find_bridges();
chrp_find_openpic();
- prom_get_irq_senses(init_senses, NUM_8259_INTERRUPTS, NR_IRQS);
- OpenPIC_InitSenses = init_senses;
- OpenPIC_NumInitSenses = NR_IRQS - NUM_8259_INTERRUPTS;
+ if (OpenPIC_Addr) {
+ prom_get_irq_senses(init_senses, NUM_8259_INTERRUPTS, NR_IRQS);
+ OpenPIC_InitSenses = init_senses;
+ OpenPIC_NumInitSenses = NR_IRQS - NUM_8259_INTERRUPTS;
- openpic_init(NUM_8259_INTERRUPTS);
- /* We have a cascade on OpenPIC IRQ 0, Linux IRQ 16 */
- openpic_hookup_cascade(NUM_8259_INTERRUPTS, "82c59 cascade",
- i8259_irq);
+ openpic_init(NUM_8259_INTERRUPTS);
+ /* We have a cascade on OpenPIC IRQ 0, Linux IRQ 16 */
+ openpic_hookup_cascade(NUM_8259_INTERRUPTS, "82c59 cascade",
+ i8259_irq);
+ }
for (i = 0; i < NUM_8259_INTERRUPTS; i++)
irq_desc[i].handler = &i8259_pic;
i8259_init(chrp_int_ack);
chrp_init(unsigned long r3, unsigned long r4, unsigned long r5,
unsigned long r6, unsigned long r7)
{
+ struct device_node *root = find_path_device ("/");
+ char *machine = NULL;
+
#ifdef CONFIG_BLK_DEV_INITRD
/* take care of initrd if we have one */
if ( r6 )
DMA_MODE_WRITE = 0x48;
isa_io_base = CHRP_ISA_IO_BASE; /* default value */
+ if (root)
+ machine = get_property(root, "model", NULL);
+ if (machine && strncmp(machine, "Pegasos", 7) == 0) {
+ _chrp_type = _CHRP_Pegasos;
+ } else if (machine && strncmp(machine, "IBM", 3) == 0) {
+ _chrp_type = _CHRP_IBM;
+ } else if (machine && strncmp(machine, "MOT", 3) == 0) {
+ _chrp_type = _CHRP_Motorola;
+ } else {
+ /* Let's assume it is an IBM chrp if all else fails */
+ _chrp_type = _CHRP_IBM;
+ }
+
ppc_md.setup_arch = chrp_setup_arch;
ppc_md.show_percpuinfo = of_show_percpuinfo;
ppc_md.show_cpuinfo = chrp_show_cpuinfo;
+
ppc_md.irq_canonicalize = chrp_irq_canonicalize;
ppc_md.init_IRQ = chrp_init_IRQ;
- ppc_md.get_irq = openpic_get_irq;
+ if (_chrp_type == _CHRP_Pegasos)
+ ppc_md.get_irq = i8259_irq;
+ else
+ ppc_md.get_irq = openpic_get_irq;
ppc_md.init = chrp_init2;
do_openpic_setup_cpu();
}
-static spinlock_t timebase_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(timebase_lock);
static unsigned int timebase_upper = 0, timebase_lower = 0;
void __devinit
int base;
rtcs = find_compatible_devices("rtc", "pnpPNP,b00");
+ if (rtcs == NULL)
+ rtcs = find_compatible_devices("rtc", "ds1385-rtc");
if (rtcs == NULL || rtcs->addrs == NULL)
return 0;
base = rtcs->addrs[0].address;
--- /dev/null
+/*
+ * arch/ppc/platforms/cpci690.c
+ *
+ * Board setup routines for the Force CPCI690 board.
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/ide.h>
+#include <linux/irq.h>
+#include <linux/fs.h>
+#include <linux/seq_file.h>
+#include <linux/console.h>
+#include <linux/initrd.h>
+#include <linux/root_dev.h>
+#include <linux/mv643xx.h>
+#include <asm/bootinfo.h>
+#include <asm/machdep.h>
+#include <asm/todc.h>
+#include <asm/time.h>
+#include <asm/mv64x60.h>
+#include <platforms/cpci690.h>
+
+#define BOARD_VENDOR "Force"
+#define BOARD_MACHINE "CPCI690"
+
+/* Set IDE controllers into Native mode? */
+#define SET_PCI_IDE_NATIVE
+
+static struct mv64x60_handle bh;
+static u32 cpci690_br_base;
+
+static const unsigned int cpu_7xx[16] = { /* 7xx & 74xx (but not 745x) */
+ 18, 15, 14, 2, 4, 13, 5, 9, 6, 11, 8, 10, 16, 12, 7, 0
+};
+
+TODC_ALLOC();
+
+static int __init
+cpci690_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ struct pci_controller *hose = pci_bus_to_hose(dev->bus->number);
+
+ if (hose->index == 0) {
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ { 90, 91, 88, 89}, /* IDSEL 30/20 - Sentinel */
+ };
+
+ const long min_idsel = 20, max_idsel = 20, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+ }
+ else {
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ { 93, 94, 95, 92}, /* IDSEL 28/18 - PMC slot 2 */
+ { 0, 0, 0, 0}, /* IDSEL 29/19 - Not used */
+ { 94, 95, 92, 93}, /* IDSEL 30/20 - PMC slot 1 */
+ };
+
+ const long min_idsel = 18, max_idsel = 20, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+ }
+}
+
+static int
+cpci690_get_bus_speed(void)
+{
+ return 133333333;
+}
+
+static int
+cpci690_get_cpu_speed(void)
+{
+ unsigned long hid1;
+
+ hid1 = mfspr(HID1) >> 28;
+ return cpci690_get_bus_speed() * cpu_7xx[hid1]/2;
+}
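+
+/*
+ * Worked example (a sketch, assuming HID1[0:3] holds the PLL ratio select
+ * as on 7xx/74xx parts): cpu_7xx[] stores bus-to-core multipliers in
+ * half-steps, so an HID1 field of 2 selects 14, i.e. a 7x multiplier,
+ * and cpci690_get_cpu_speed() returns 133333333 * 14 / 2, roughly 933 MHz.
+ */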
+
+#define KB (1024UL)
+#define MB (1024UL * KB)
+#define GB (1024UL * MB)
+
+unsigned long __init
+cpci690_find_end_of_memory(void)
+{
+ u32 mem_ctlr_size;
+ static u32 board_size;
+ static u8 first_time = 1;
+
+ if (first_time) {
+ /* Using cpci690_set_bat() mapping ==> virt addr == phys addr */
+ switch (in_8((u8 *) (cpci690_br_base +
+ CPCI690_BR_MEM_CTLR)) & 0x07) {
+ case 0x01:
+ board_size = 256*MB;
+ break;
+ case 0x02:
+ board_size = 512*MB;
+ break;
+ case 0x03:
+ board_size = 768*MB;
+ break;
+ case 0x04:
+ board_size = 1*GB;
+ break;
+ case 0x05:
+ board_size = 1*GB + 512*MB;
+ break;
+ case 0x06:
+ board_size = 2*GB;
+ break;
+ default:
+ board_size = 0xffffffff; /* use mem ctlr size */
+ } /* switch */
+
+ mem_ctlr_size = mv64x60_get_mem_size(CONFIG_MV64X60_NEW_BASE,
+ MV64x60_TYPE_GT64260A);
+
+ /* Check that mem ctlr & board reg agree. If not, pick MIN. */
+ if (board_size != mem_ctlr_size) {
+			printk(KERN_WARNING "Board register & memory controller "
+ "mem size disagree (board reg: 0x%lx, "
+ "mem ctlr: 0x%lx)\n",
+ (ulong)board_size, (ulong)mem_ctlr_size);
+ board_size = min(board_size, mem_ctlr_size);
+ }
+
+ first_time = 0;
+ } /* if */
+
+ return board_size;
+}
+
+static void __init
+cpci690_setup_bridge(void)
+{
+ struct mv64x60_setup_info si;
+ int i;
+
+ memset(&si, 0, sizeof(si));
+
+ si.phys_reg_base = CONFIG_MV64X60_NEW_BASE;
+
+ si.pci_0.enable_bus = 1;
+ si.pci_0.pci_io.cpu_base = CPCI690_PCI0_IO_START_PROC_ADDR;
+ si.pci_0.pci_io.pci_base_hi = 0;
+ si.pci_0.pci_io.pci_base_lo = CPCI690_PCI0_IO_START_PCI_ADDR;
+ si.pci_0.pci_io.size = CPCI690_PCI0_IO_SIZE;
+ si.pci_0.pci_io.swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_0.pci_mem[0].cpu_base = CPCI690_PCI0_MEM_START_PROC_ADDR;
+ si.pci_0.pci_mem[0].pci_base_hi = CPCI690_PCI0_MEM_START_PCI_HI_ADDR;
+ si.pci_0.pci_mem[0].pci_base_lo = CPCI690_PCI0_MEM_START_PCI_LO_ADDR;
+ si.pci_0.pci_mem[0].size = CPCI690_PCI0_MEM_SIZE;
+ si.pci_0.pci_mem[0].swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_0.pci_cmd_bits = 0;
+ si.pci_0.latency_timer = 0x80;
+
+ si.pci_1.enable_bus = 1;
+ si.pci_1.pci_io.cpu_base = CPCI690_PCI1_IO_START_PROC_ADDR;
+ si.pci_1.pci_io.pci_base_hi = 0;
+ si.pci_1.pci_io.pci_base_lo = CPCI690_PCI1_IO_START_PCI_ADDR;
+ si.pci_1.pci_io.size = CPCI690_PCI1_IO_SIZE;
+ si.pci_1.pci_io.swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_1.pci_mem[0].cpu_base = CPCI690_PCI1_MEM_START_PROC_ADDR;
+ si.pci_1.pci_mem[0].pci_base_hi = CPCI690_PCI1_MEM_START_PCI_HI_ADDR;
+ si.pci_1.pci_mem[0].pci_base_lo = CPCI690_PCI1_MEM_START_PCI_LO_ADDR;
+ si.pci_1.pci_mem[0].size = CPCI690_PCI1_MEM_SIZE;
+ si.pci_1.pci_mem[0].swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_1.pci_cmd_bits = 0;
+ si.pci_1.latency_timer = 0x80;
+
+ for (i=0; i<MV64x60_CPU2MEM_WINDOWS; i++) {
+ si.cpu_prot_options[i] = 0;
+ si.cpu_snoop_options[i] = GT64260_CPU_SNOOP_WB;
+ si.pci_0.acc_cntl_options[i] =
+ GT64260_PCI_ACC_CNTL_DREADEN |
+ GT64260_PCI_ACC_CNTL_RDPREFETCH |
+ GT64260_PCI_ACC_CNTL_RDLINEPREFETCH |
+ GT64260_PCI_ACC_CNTL_RDMULPREFETCH |
+ GT64260_PCI_ACC_CNTL_SWAP_NONE |
+ GT64260_PCI_ACC_CNTL_MBURST_32_BTYES;
+ si.pci_0.snoop_options[i] = GT64260_PCI_SNOOP_WB;
+ si.pci_1.acc_cntl_options[i] =
+ GT64260_PCI_ACC_CNTL_DREADEN |
+ GT64260_PCI_ACC_CNTL_RDPREFETCH |
+ GT64260_PCI_ACC_CNTL_RDLINEPREFETCH |
+ GT64260_PCI_ACC_CNTL_RDMULPREFETCH |
+ GT64260_PCI_ACC_CNTL_SWAP_NONE |
+ GT64260_PCI_ACC_CNTL_MBURST_32_BTYES;
+ si.pci_1.snoop_options[i] = GT64260_PCI_SNOOP_WB;
+ }
+
+ /* Lookup PCI host bridges */
+ if (mv64x60_init(&bh, &si))
+ printk(KERN_ERR "Bridge initialization failed.\n");
+
+ pci_dram_offset = 0; /* System mem at same addr on PCI & cpu bus */
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = cpci690_map_irq;
+ ppc_md.pci_exclude_device = mv64x60_pci_exclude_device;
+
+ mv64x60_set_bus(&bh, 0, 0);
+ bh.hose_a->first_busno = 0;
+ bh.hose_a->last_busno = 0xff;
+ bh.hose_a->last_busno = pciauto_bus_scan(bh.hose_a, 0);
+
+ bh.hose_b->first_busno = bh.hose_a->last_busno + 1;
+ mv64x60_set_bus(&bh, 1, bh.hose_b->first_busno);
+ bh.hose_b->last_busno = 0xff;
+ bh.hose_b->last_busno = pciauto_bus_scan(bh.hose_b,
+ bh.hose_b->first_busno);
+
+ return;
+}
+
+static void __init
+cpci690_setup_peripherals(void)
+{
+ /* Set up windows to CPLD, RTC/TODC, IPMI. */
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_0_WIN, CPCI690_BR_BASE,
+ CPCI690_BR_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_0_WIN);
+ cpci690_br_base = (u32)ioremap(CPCI690_BR_BASE, CPCI690_BR_SIZE);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_1_WIN, CPCI690_TODC_BASE,
+ CPCI690_TODC_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_1_WIN);
+ TODC_INIT(TODC_TYPE_MK48T35, 0, 0,
+ ioremap(CPCI690_TODC_BASE, CPCI690_TODC_SIZE), 8);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_2_WIN, CPCI690_IPMI_BASE,
+ CPCI690_IPMI_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_2_WIN);
+
+ mv64x60_set_bits(&bh, MV64x60_PCI0_ARBITER_CNTL, (1<<31));
+ mv64x60_set_bits(&bh, MV64x60_PCI1_ARBITER_CNTL, (1<<31));
+
+ mv64x60_set_bits(&bh, MV64x60_CPU_MASTER_CNTL, (1<<9)); /* Only 1 cpu */
+
+ /*
+	 * Turn off the timers/counters. The watchdog timer is left alone
+	 * because its register cannot be read on the 64260A, so we cannot
+	 * tell whether a write would enable or disable it.
+ */
+ mv64x60_clr_bits(&bh, MV64x60_TIMR_CNTR_0_3_CNTL,
+ ((1<<0) | (1<<8) | (1<<16) | (1<<24)));
+ mv64x60_clr_bits(&bh, GT64260_TIMR_CNTR_4_7_CNTL,
+ ((1<<0) | (1<<8) | (1<<16) | (1<<24)));
+
+ /*
+ * Set MPSC Multiplex RMII
+ * NOTE: ethernet driver modifies bit 0 and 1
+ */
+ mv64x60_write(&bh, GT64260_MPP_SERIAL_PORTS_MULTIPLEX, 0x00001102);
+
+#define GPP_EXTERNAL_INTERRUPTS \
+ ((1<<24) | (1<<25) | (1<<26) | (1<<27) | \
+ (1<<28) | (1<<29) | (1<<30) | (1<<31))
+ /* PCI interrupts are inputs */
+ mv64x60_clr_bits(&bh, MV64x60_GPP_IO_CNTL, GPP_EXTERNAL_INTERRUPTS);
+ /* PCI interrupts are active low */
+ mv64x60_set_bits(&bh, MV64x60_GPP_LEVEL_CNTL, GPP_EXTERNAL_INTERRUPTS);
+
+ /* Clear any pending interrupts for these inputs and enable them. */
+ mv64x60_write(&bh, MV64x60_GPP_INTR_CAUSE, ~GPP_EXTERNAL_INTERRUPTS);
+ mv64x60_set_bits(&bh, MV64x60_GPP_INTR_MASK, GPP_EXTERNAL_INTERRUPTS);
+
+ /* Route MPP interrupt inputs to GPP */
+ mv64x60_write(&bh, MV64x60_MPP_CNTL_2, 0x00000000);
+ mv64x60_write(&bh, MV64x60_MPP_CNTL_3, 0x00000000);
+
+ return;
+}
+
+static void __init
+cpci690_setup_arch(void)
+{
+ if (ppc_md.progress)
+ ppc_md.progress("cpci690_setup_arch: enter", 0);
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_SDA2;
+#endif
+
+ if (ppc_md.progress)
+ ppc_md.progress("cpci690_setup_arch: Enabling L2 cache", 0);
+
+ /* Enable L2 and L3 caches (if 745x) */
+ _set_L2CR(_get_L2CR() | L2CR_L2E);
+ _set_L3CR(_get_L3CR() | L3CR_L3E);
+
+ if (ppc_md.progress)
+ ppc_md.progress("cpci690_setup_arch: Initializing bridge", 0);
+
+ cpci690_setup_bridge(); /* set up PCI bridge(s) */
+ cpci690_setup_peripherals(); /* set up chip selects/GPP/MPP etc */
+
+ if (ppc_md.progress)
+ ppc_md.progress("cpci690_setup_arch: bridge init complete", 0);
+
+ printk(KERN_INFO "%s %s port (C) 2003 MontaVista Software, Inc. "
+ "(source@mvista.com)\n", BOARD_VENDOR, BOARD_MACHINE);
+
+ if (ppc_md.progress)
+ ppc_md.progress("cpci690_setup_arch: exit", 0);
+
+ return;
+}
+
+/* Platform device data fixup routines. */
+#if defined(CONFIG_SERIAL_MPSC)
+static void __init
+cpci690_fixup_mpsc_pdata(struct platform_device *pdev)
+{
+ struct mpsc_pdata *pdata;
+
+ pdata = (struct mpsc_pdata *)pdev->dev.platform_data;
+
+ pdata->max_idle = 40;
+ pdata->default_baud = 9600;
+ pdata->brg_clk_src = 8;
+ pdata->brg_clk_freq = 133000000;
+
+ return;
+}
+
+static int __init
+cpci690_platform_notify(struct device *dev)
+{
+ static struct {
+ char *bus_id;
+ void ((*rtn)(struct platform_device *pdev));
+ } dev_map[] = {
+ { MPSC_CTLR_NAME "0", cpci690_fixup_mpsc_pdata },
+ { MPSC_CTLR_NAME "1", cpci690_fixup_mpsc_pdata },
+ };
+ struct platform_device *pdev;
+ int i;
+
+ if (dev && dev->bus_id)
+ for (i=0; i<ARRAY_SIZE(dev_map); i++)
+ if (!strncmp(dev->bus_id, dev_map[i].bus_id,
+ BUS_ID_SIZE)) {
+
+ pdev = container_of(dev,
+ struct platform_device, dev);
+ dev_map[i].rtn(pdev);
+ }
+
+ return 0;
+}
+#endif
+
+static void
+cpci690_reset_board(void)
+{
+ u32 i = 10000;
+
+ local_irq_disable();
+ out_8((u8 *)(cpci690_br_base + CPCI690_BR_SW_RESET), 0x11);
+
+	while (i != 0) i--;
+ panic("restart failed\n");
+}
+
+static void
+cpci690_restart(char *cmd)
+{
+ cpci690_reset_board();
+}
+
+static void
+cpci690_halt(void)
+{
+ while (1);
+ /* NOTREACHED */
+}
+
+static void
+cpci690_power_off(void)
+{
+ cpci690_halt();
+ /* NOTREACHED */
+}
+
+static int
+cpci690_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "vendor\t\t: " BOARD_VENDOR "\n");
+ seq_printf(m, "machine\t\t: " BOARD_MACHINE "\n");
+ seq_printf(m, "cpu MHz\t\t: %d\n", cpci690_get_cpu_speed()/1000/1000);
+ seq_printf(m, "bus MHz\t\t: %d\n", cpci690_get_bus_speed()/1000/1000);
+
+ return 0;
+}
+
+static void __init
+cpci690_calibrate_decr(void)
+{
+ ulong freq;
+
+ freq = cpci690_get_bus_speed()/4;
+
+ printk(KERN_INFO "time_init: decrementer frequency = %lu.%.6lu MHz\n",
+ freq/1000000, freq%1000000);
+
+ tb_ticks_per_jiffy = freq / HZ;
+ tb_to_us = mulhwu_scale_factor(freq, 1000000);
+
+ return;
+}
+
+static __inline__ void
+cpci690_set_bat(u32 addr, u32 size)
+{
+ addr &= 0xfffe0000;
+ size &= 0x1ffe0000;
+ size = ((size >> 17) - 1) << 2;
+
+ mb();
+ mtspr(DBAT1U, addr | size | 0x2); /* Vs == 1; Vp == 0 */
+ mtspr(DBAT1L, addr | 0x2a); /* WIMG bits == 0101; PP == r/w access */
+ mb();
+
+ return;
+}
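+
+/*
+ * Worked example: platform_init() below calls
+ * cpci690_set_bat(CPCI690_BR_BASE, 32 * MB), i.e. addr = 0xf0000000 and
+ * size = 0x02000000. The size field becomes ((0x02000000 >> 17) - 1) << 2
+ * = 0x3fc, so DBAT1U is written with 0xf00003fe (a 32MB supervisor-valid
+ * block) and DBAT1L with 0xf000002a (cache-inhibited, guarded, read/write).
+ */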
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+static void __init
+cpci690_map_io(void)
+{
+ io_block_mapping(CONFIG_MV64X60_NEW_BASE, CONFIG_MV64X60_NEW_BASE,
+ 128 * KB, _PAGE_IO);
+}
+#endif
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+#ifdef CONFIG_BLK_DEV_INITRD
+ initrd_start=initrd_end=0;
+ initrd_below_start_ok=0;
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ parse_bootinfo(find_bootinfo());
+
+ loops_per_jiffy = cpci690_get_cpu_speed() / HZ;
+
+ isa_mem_base = 0;
+
+ ppc_md.setup_arch = cpci690_setup_arch;
+ ppc_md.show_cpuinfo = cpci690_show_cpuinfo;
+ ppc_md.init_IRQ = gt64260_init_irq;
+ ppc_md.get_irq = gt64260_get_irq;
+ ppc_md.restart = cpci690_restart;
+ ppc_md.power_off = cpci690_power_off;
+ ppc_md.halt = cpci690_halt;
+ ppc_md.find_end_of_memory = cpci690_find_end_of_memory;
+ ppc_md.time_init = todc_time_init;
+ ppc_md.set_rtc_time = todc_set_rtc_time;
+ ppc_md.get_rtc_time = todc_get_rtc_time;
+ ppc_md.nvram_read_val = todc_direct_read_val;
+ ppc_md.nvram_write_val = todc_direct_write_val;
+ ppc_md.calibrate_decr = cpci690_calibrate_decr;
+
+ /*
+ * Need to map in board regs (used by cpci690_find_end_of_memory())
+	 * and the bridge's regs (used by progress).
+ */
+ cpci690_set_bat(CPCI690_BR_BASE, 32 * MB);
+ cpci690_br_base = CPCI690_BR_BASE;
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ ppc_md.setup_io_mappings = cpci690_map_io;
+ ppc_md.progress = mv64x60_mpsc_progress;
+ mv64x60_progress_init(CONFIG_MV64X60_NEW_BASE);
+#endif /* CONFIG_SERIAL_TEXT_DEBUG */
+#ifdef CONFIG_KGDB
+ ppc_md.setup_io_mappings = cpci690_map_io;
+ ppc_md.early_serial_map = cpci690_early_serial_map;
+#endif /* CONFIG_KGDB */
+
+#if defined(CONFIG_SERIAL_MPSC)
+ platform_notify = cpci690_platform_notify;
+#endif
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/cpci690.h
+ *
+ * Definitions for Force CPCI690
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+/*
+ * The GT64260 has 2 PCI buses each with 1 window from the CPU bus to
+ * PCI I/O space and 4 windows from the CPU bus to PCI MEM space.
+ */
+
+#ifndef __PPC_PLATFORMS_CPCI690_H
+#define __PPC_PLATFORMS_CPCI690_H
+
+/*
+ * Define bd_t to pass in the MAC addresses used by the GT64260's enet ctlrs.
+ */
+#define CPCI690_BI_MAGIC 0xFE8765DC
+
+typedef struct board_info {
+ u32 bi_magic;
+ u8 bi_enetaddr[3][6];
+} bd_t;
+
+/* PCI bus Resource setup */
+#define CPCI690_PCI0_MEM_START_PROC_ADDR 0x80000000
+#define CPCI690_PCI0_MEM_START_PCI_HI_ADDR 0x00000000
+#define CPCI690_PCI0_MEM_START_PCI_LO_ADDR 0x80000000
+#define CPCI690_PCI0_MEM_SIZE 0x10000000
+#define CPCI690_PCI0_IO_START_PROC_ADDR 0xa0000000
+#define CPCI690_PCI0_IO_START_PCI_ADDR 0x00000000
+#define CPCI690_PCI0_IO_SIZE 0x01000000
+
+#define CPCI690_PCI1_MEM_START_PROC_ADDR 0x90000000
+#define CPCI690_PCI1_MEM_START_PCI_HI_ADDR 0x00000000
+#define CPCI690_PCI1_MEM_START_PCI_LO_ADDR 0x90000000
+#define CPCI690_PCI1_MEM_SIZE 0x10000000
+#define CPCI690_PCI1_IO_START_PROC_ADDR 0xa1000000
+#define CPCI690_PCI1_IO_START_PCI_ADDR 0x01000000
+#define CPCI690_PCI1_IO_SIZE 0x01000000
+
+/* Board Registers */
+#define CPCI690_BR_BASE 0xf0000000
+#define CPCI690_BR_SIZE_ACTUAL 0x8
+#define CPCI690_BR_SIZE max(GT64260_WINDOW_SIZE_MIN, \
+ CPCI690_BR_SIZE_ACTUAL)
+#define CPCI690_BR_LED_CNTL 0x00
+#define CPCI690_BR_SW_RESET 0x01
+#define CPCI690_BR_MISC_STATUS 0x02
+#define CPCI690_BR_SWITCH_STATUS 0x03
+#define CPCI690_BR_MEM_CTLR 0x04
+#define CPCI690_BR_LAST_RESET_1 0x05
+#define CPCI690_BR_LAST_RESET_2 0x06
+
+#define CPCI690_TODC_BASE 0xf0100000
+#define CPCI690_TODC_SIZE_ACTUAL	0x8000	/* Size of NVRAM + RTC */
+#define CPCI690_TODC_SIZE max(GT64260_WINDOW_SIZE_MIN, \
+ CPCI690_TODC_SIZE_ACTUAL)
+#define CPCI690_MAC_OFFSET 0x7c10 /* MAC in RTC NVRAM */
+
+#define CPCI690_IPMI_BASE 0xf0200000
+#define CPCI690_IPMI_SIZE_ACTUAL 0x10 /* 16 bytes of IPMI */
+#define CPCI690_IPMI_SIZE max(GT64260_WINDOW_SIZE_MIN, \
+ CPCI690_IPMI_SIZE_ACTUAL)
+
+#endif /* __PPC_PLATFORMS_CPCI690_H */
--- /dev/null
+/*
+ * arch/ppc/platforms/katana.c
+ *
+ * Board setup routines for the Artesyn Katana 750 based boards.
+ *
+ * Tim Montgomery <timm@artesyncp.com>
+ *
+ * Based on code done by Rabeeh Khoury - rabeeh@galileo.co.il
+ * Based on code done by - Mark A. Greer <mgreer@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+/*
+ * Supports the Artesyn 750i, 752i, and 3750. The 752i is virtually identical
+ * to the 750i except that it has an mv64460 bridge.
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/console.h>
+#include <linux/initrd.h>
+#include <linux/root_dev.h>
+#include <linux/delay.h>
+#include <linux/seq_file.h>
+#include <linux/smp.h>
+#include <linux/mv643xx.h>
+#ifdef CONFIG_BOOTIMG
+#include <linux/bootimg.h>
+#endif
+#include <asm/page.h>
+#include <asm/time.h>
+#include <asm/smp.h>
+#include <asm/todc.h>
+#include <asm/bootinfo.h>
+#include <asm/mv64x60.h>
+#include <platforms/katana.h>
+
+static struct mv64x60_handle bh;
+static katana_id_t katana_id;
+static u32 cpld_base;
+static u32 sram_base;
+
+/* PCI Interrupt routing */
+static int __init
+katana_irq_lookup_750i(unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] = {
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ /* IDSEL 4 (PMC 1) */
+ { KATANA_PCI_INTB_IRQ_750i, KATANA_PCI_INTC_IRQ_750i,
+ KATANA_PCI_INTD_IRQ_750i, KATANA_PCI_INTA_IRQ_750i },
+ /* IDSEL 5 (PMC 2) */
+ { KATANA_PCI_INTC_IRQ_750i, KATANA_PCI_INTD_IRQ_750i,
+ KATANA_PCI_INTA_IRQ_750i, KATANA_PCI_INTB_IRQ_750i },
+ /* IDSEL 6 (T8110) */
+ {KATANA_PCI_INTD_IRQ_750i, 0, 0, 0 },
+ };
+ const long min_idsel = 4, max_idsel = 6, irqs_per_slot = 4;
+
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
+static int __init
+katana_irq_lookup_3750(unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] = {
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ { KATANA_PCI_INTA_IRQ_3750, 0, 0, 0 }, /* IDSEL 3 (BCM5691) */
+ { KATANA_PCI_INTB_IRQ_3750, 0, 0, 0 }, /* IDSEL 4 (MV64360 #2)*/
+ { KATANA_PCI_INTC_IRQ_3750, 0, 0, 0 }, /* IDSEL 5 (MV64360 #3)*/
+ };
+ const long min_idsel = 3, max_idsel = 5, irqs_per_slot = 4;
+
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
+static int __init
+katana_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ switch (katana_id) {
+ case KATANA_ID_750I:
+ case KATANA_ID_752I:
+ return katana_irq_lookup_750i(idsel, pin);
+
+ case KATANA_ID_3750:
+ return katana_irq_lookup_3750(idsel, pin);
+
+ default:
+ printk(KERN_ERR "Bogus board ID\n");
+ return 0;
+ }
+}
+
+/* Board info retrieval routines */
+void __init
+katana_get_board_id(void)
+{
+ switch (in_8((volatile char *)(cpld_base + KATANA_CPLD_PRODUCT_ID))) {
+ case KATANA_PRODUCT_ID_3750:
+ katana_id = KATANA_ID_3750;
+ break;
+
+ case KATANA_PRODUCT_ID_750i:
+ katana_id = KATANA_ID_750I;
+ break;
+
+ case KATANA_PRODUCT_ID_752i:
+ katana_id = KATANA_ID_752I;
+ break;
+
+ default:
+ printk(KERN_ERR "Unsupported board\n");
+ }
+}
+
+int __init
+katana_get_proc_num(void)
+{
+ u16 val;
+ u8 save_exclude;
+ static int proc = -1;
+ static u8 first_time = 1;
+
+ if (first_time) {
+ if (katana_id != KATANA_ID_3750)
+ proc = 0;
+ else {
+ save_exclude = mv64x60_pci_exclude_bridge;
+ mv64x60_pci_exclude_bridge = 0;
+
+ early_read_config_word(bh.hose_a, 0,
+ PCI_DEVFN(0,0), PCI_DEVICE_ID, &val);
+
+ mv64x60_pci_exclude_bridge = save_exclude;
+
+ switch(val) {
+ case PCI_DEVICE_ID_KATANA_3750_PROC0:
+ proc = 0;
+ break;
+
+ case PCI_DEVICE_ID_KATANA_3750_PROC1:
+ proc = 1;
+ break;
+
+ case PCI_DEVICE_ID_KATANA_3750_PROC2:
+ proc = 2;
+ break;
+
+ default:
+ printk(KERN_ERR "Bogus Device ID\n");
+ }
+ }
+
+ first_time = 0;
+ }
+
+ return proc;
+}
+
+static inline int
+katana_is_monarch(void)
+{
+ return in_8((volatile char *)(cpld_base + KATANA_CPLD_BD_CFG_3)) &
+ KATANA_CPLD_BD_CFG_3_MONARCH;
+}
+
+static void __init
+katana_enable_ipmi(void)
+{
+ u8 reset_out;
+
+ /* Enable access to IPMI ctlr by clearing IPMI PORTSEL bit in CPLD */
+ reset_out = in_8((volatile char *)(cpld_base + KATANA_CPLD_RESET_OUT));
+ reset_out &= ~KATANA_CPLD_RESET_OUT_PORTSEL;
+ out_8((volatile void *)(cpld_base + KATANA_CPLD_RESET_OUT), reset_out);
+ return;
+}
+
+static unsigned long
+katana_bus_freq(void)
+{
+ u8 bd_cfg_0;
+
+ bd_cfg_0 = in_8((volatile char *)(cpld_base + KATANA_CPLD_BD_CFG_0));
+
+ switch (bd_cfg_0 & KATANA_CPLD_BD_CFG_0_SYSCLK_MASK) {
+ case KATANA_CPLD_BD_CFG_0_SYSCLK_200:
+ return 200000000;
+ break;
+
+ case KATANA_CPLD_BD_CFG_0_SYSCLK_166:
+ return 166666666;
+ break;
+
+ case KATANA_CPLD_BD_CFG_0_SYSCLK_133:
+ return 133333333;
+ break;
+
+ case KATANA_CPLD_BD_CFG_0_SYSCLK_100:
+ return 100000000;
+ break;
+
+ default:
+ return 133333333;
+ break;
+ }
+}
+
+/* Bridge & platform setup routines */
+void __init
+katana_intr_setup(void)
+{
+ /* MPP 8, 9, and 10 */
+ mv64x60_clr_bits(&bh, MV64x60_MPP_CNTL_1, 0xfff);
+
+ /* MPP 14 */
+ if ((katana_id == KATANA_ID_750I) || (katana_id == KATANA_ID_752I))
+ mv64x60_clr_bits(&bh, MV64x60_MPP_CNTL_1, 0x0f000000);
+
+ /*
+ * Define GPP 8,9,and 10 interrupt polarity as active low
+ * input signal and level triggered
+ */
+ mv64x60_set_bits(&bh, MV64x60_GPP_LEVEL_CNTL, 0x700);
+ mv64x60_clr_bits(&bh, MV64x60_GPP_IO_CNTL, 0x700);
+
+ if ((katana_id == KATANA_ID_750I) || (katana_id == KATANA_ID_752I)) {
+ mv64x60_set_bits(&bh, MV64x60_GPP_LEVEL_CNTL, (1<<14));
+ mv64x60_clr_bits(&bh, MV64x60_GPP_IO_CNTL, (1<<14));
+ }
+
+ /* Config GPP intr ctlr to respond to level trigger */
+ mv64x60_set_bits(&bh, MV64x60_COMM_ARBITER_CNTL, (1<<10));
+
+	/* Erratum FEr PCI-#8 */
+ mv64x60_clr_bits(&bh, MV64x60_PCI0_CMD, (1<<5) | (1<<9));
+ mv64x60_clr_bits(&bh, MV64x60_PCI1_CMD, (1<<5) | (1<<9));
+
+ /*
+ * Dismiss and then enable interrupt on GPP interrupt cause
+ * for CPU #0
+ */
+ mv64x60_write(&bh, MV64x60_GPP_INTR_CAUSE, ~0x700);
+ mv64x60_set_bits(&bh, MV64x60_GPP_INTR_MASK, 0x700);
+
+ if ((katana_id == KATANA_ID_750I) || (katana_id == KATANA_ID_752I)) {
+ mv64x60_write(&bh, MV64x60_GPP_INTR_CAUSE, ~(1<<14));
+ mv64x60_set_bits(&bh, MV64x60_GPP_INTR_MASK, (1<<14));
+ }
+
+ /*
+ * Dismiss and then enable interrupt on CPU #0 high cause reg
+ * BIT25 summarizes GPP interrupts 8-15
+ */
+ mv64x60_set_bits(&bh, MV64360_IC_CPU0_INTR_MASK_HI, (1<<25));
+ return;
+}
+
+void __init
+katana_setup_peripherals(void)
+{
+ u32 base, size_0, size_1;
+
+ /* Set up windows for boot CS, soldered & socketed flash, and CPLD */
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2BOOT_WIN,
+ KATANA_BOOT_WINDOW_BASE, KATANA_BOOT_WINDOW_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2BOOT_WIN);
+
+ /* Assume firmware set up window sizes correctly for dev 0 & 1 */
+ mv64x60_get_32bit_window(&bh, MV64x60_CPU2DEV_0_WIN, &base, &size_0);
+
+ if (size_0 > 0) {
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_0_WIN,
+ KATANA_SOLDERED_FLASH_BASE, size_0, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_0_WIN);
+ }
+
+ mv64x60_get_32bit_window(&bh, MV64x60_CPU2DEV_1_WIN, &base, &size_1);
+
+ if (size_1 > 0) {
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_1_WIN,
+ (KATANA_SOLDERED_FLASH_BASE + size_0), size_1, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_1_WIN);
+ }
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_2_WIN,
+ KATANA_SOCKET_BASE, KATANA_SOCKETED_FLASH_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_2_WIN);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2DEV_3_WIN,
+ KATANA_CPLD_BASE, KATANA_CPLD_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2DEV_3_WIN);
+ cpld_base = (u32)ioremap(KATANA_CPLD_BASE, KATANA_CPLD_SIZE);
+
+ mv64x60_set_32bit_window(&bh, MV64x60_CPU2SRAM_WIN,
+ KATANA_INTERNAL_SRAM_BASE, MV64360_SRAM_SIZE, 0);
+ bh.ci->enable_window_32bit(&bh, MV64x60_CPU2SRAM_WIN);
+ sram_base = (u32)ioremap(KATANA_INTERNAL_SRAM_BASE, MV64360_SRAM_SIZE);
+
+ /* Set up Enet->SRAM window */
+ mv64x60_set_32bit_window(&bh, MV64x60_ENET2MEM_4_WIN,
+ KATANA_INTERNAL_SRAM_BASE, MV64360_SRAM_SIZE, 0x2);
+ bh.ci->enable_window_32bit(&bh, MV64x60_ENET2MEM_4_WIN);
+
+ /* Give enet r/w access to memory region */
+ mv64x60_set_bits(&bh, MV64360_ENET2MEM_ACC_PROT_0, (0x3 << (4 << 1)));
+ mv64x60_set_bits(&bh, MV64360_ENET2MEM_ACC_PROT_1, (0x3 << (4 << 1)));
+ mv64x60_set_bits(&bh, MV64360_ENET2MEM_ACC_PROT_2, (0x3 << (4 << 1)));
+
+ mv64x60_clr_bits(&bh, MV64x60_PCI1_PCI_DECODE_CNTL, (1 << 3));
+ mv64x60_clr_bits(&bh, MV64x60_TIMR_CNTR_0_3_CNTL,
+ ((1 << 0) | (1 << 8) | (1 << 16) | (1 << 24)));
+
+	/* Must wait until the window is set up before retrieving the board id */
+ katana_get_board_id();
+
+ /* Enumerate pci bus (must know board id before getting proc number) */
+ if (katana_get_proc_num() == 0)
+ bh.hose_b->last_busno = pciauto_bus_scan(bh.hose_b, 0);
+
+#if defined(CONFIG_NOT_COHERENT_CACHE)
+ mv64x60_write(&bh, MV64360_SRAM_CONFIG, 0x00160000);
+#else
+ mv64x60_write(&bh, MV64360_SRAM_CONFIG, 0x001600b2);
+#endif
+
+ /*
+	 * Clear the SRAM to 0. Note that this generates parity errors on the
+	 * internal data path in the SRAM, since this is the first access to it
+	 * after reset, while it is not yet configured.
+ */
+ memset((void *)sram_base, 0, MV64360_SRAM_SIZE);
+
+	/* Only processor zero [on 3750] is a PCI interrupt controller */
+ if (katana_get_proc_num() == 0)
+ katana_intr_setup();
+
+ return;
+}
+
+static void __init
+katana_setup_bridge(void)
+{
+ struct mv64x60_setup_info si;
+ int i;
+
+ memset(&si, 0, sizeof(si));
+
+ si.phys_reg_base = KATANA_BRIDGE_REG_BASE;
+
+ si.pci_1.enable_bus = 1;
+ si.pci_1.pci_io.cpu_base = KATANA_PCI1_IO_START_PROC_ADDR;
+ si.pci_1.pci_io.pci_base_hi = 0;
+ si.pci_1.pci_io.pci_base_lo = KATANA_PCI1_IO_START_PCI_ADDR;
+ si.pci_1.pci_io.size = KATANA_PCI1_IO_SIZE;
+ si.pci_1.pci_io.swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_1.pci_mem[0].cpu_base = KATANA_PCI1_MEM_START_PROC_ADDR;
+ si.pci_1.pci_mem[0].pci_base_hi = KATANA_PCI1_MEM_START_PCI_HI_ADDR;
+ si.pci_1.pci_mem[0].pci_base_lo = KATANA_PCI1_MEM_START_PCI_LO_ADDR;
+ si.pci_1.pci_mem[0].size = KATANA_PCI1_MEM_SIZE;
+ si.pci_1.pci_mem[0].swap = MV64x60_CPU2PCI_SWAP_NONE;
+ si.pci_1.pci_cmd_bits = 0;
+ si.pci_1.latency_timer = 0x80;
+
+ for (i = 0; i < MV64x60_CPU2MEM_WINDOWS; i++) {
+#if defined(CONFIG_NOT_COHERENT_CACHE)
+ si.cpu_prot_options[i] = 0;
+ si.enet_options[i] = MV64360_ENET2MEM_SNOOP_NONE;
+ si.mpsc_options[i] = MV64360_MPSC2MEM_SNOOP_NONE;
+ si.idma_options[i] = MV64360_IDMA2MEM_SNOOP_NONE;
+
+ si.pci_1.acc_cntl_options[i] =
+ MV64360_PCI_ACC_CNTL_SNOOP_NONE |
+ MV64360_PCI_ACC_CNTL_SWAP_NONE |
+ MV64360_PCI_ACC_CNTL_MBURST_128_BYTES |
+ MV64360_PCI_ACC_CNTL_RDSIZE_256_BYTES;
+#else
+ si.cpu_prot_options[i] = 0;
+ si.enet_options[i] = MV64360_ENET2MEM_SNOOP_NONE; /* errata */
+ si.mpsc_options[i] = MV64360_MPSC2MEM_SNOOP_NONE; /* errata */
+ si.idma_options[i] = MV64360_IDMA2MEM_SNOOP_NONE; /* errata */
+
+ si.pci_1.acc_cntl_options[i] =
+ MV64360_PCI_ACC_CNTL_SNOOP_WB |
+ MV64360_PCI_ACC_CNTL_SWAP_NONE |
+ MV64360_PCI_ACC_CNTL_MBURST_32_BYTES |
+ MV64360_PCI_ACC_CNTL_RDSIZE_32_BYTES;
+#endif
+ }
+
+ /* Lookup PCI host bridges */
+ if (mv64x60_init(&bh, &si))
+ printk(KERN_WARNING "Bridge initialization failed.\n");
+
+ pci_dram_offset = 0; /* sys mem at same addr on PCI & cpu bus */
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = katana_map_irq;
+ ppc_md.pci_exclude_device = mv64x60_pci_exclude_device;
+
+ mv64x60_set_bus(&bh, 1, 0);
+ bh.hose_b->first_busno = 0;
+ bh.hose_b->last_busno = 0xff;
+
+ return;
+}
+
+static void __init
+katana_setup_arch(void)
+{
+ if (ppc_md.progress)
+ ppc_md.progress("katana_setup_arch: enter", 0);
+
+ set_tb(0, 0);
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_SDA2;
+#endif
+
+ /*
+ * Set up the L2CR register.
+ *
+ * 750FX has only L2E, L2PE (bits 2-8 are reserved)
+	 * DD2.0 has a bug that requires the L2 to be in writethrough (WRT)
+	 * mode to avoid dirty data in the cache
+ */
+ if (PVR_REV(mfspr(PVR)) == 0x0200) {
+		printk(KERN_INFO "DD2.0 detected. Setting L2 cache "
+ "to Writethrough mode\n");
+ _set_L2CR(L2CR_L2E | L2CR_L2PE | L2CR_L2WT);
+ }
+ else
+ _set_L2CR(L2CR_L2E | L2CR_L2PE);
+
+ if (ppc_md.progress)
+ ppc_md.progress("katana_setup_arch: calling setup_bridge", 0);
+
+ katana_setup_bridge();
+ katana_setup_peripherals();
+ katana_enable_ipmi();
+
+ printk(KERN_INFO "Artesyn Communication Products, LLC - Katana(TM)\n");
+ if (ppc_md.progress)
+ ppc_md.progress("katana_setup_arch: exit", 0);
+ return;
+}
+
+/* Platform device data fixup routines. */
+#if defined(CONFIG_SERIAL_MPSC)
+static void __init
+katana_fixup_mpsc_pdata(struct platform_device *pdev)
+{
+ struct mpsc_pdata *pdata;
+
+ pdata = (struct mpsc_pdata *)pdev->dev.platform_data;
+
+ pdata->max_idle = 40;
+ pdata->default_baud = KATANA_DEFAULT_BAUD;
+ pdata->brg_clk_src = KATANA_MPSC_CLK_SRC;
+ pdata->brg_clk_freq = KATANA_MPSC_CLK_FREQ;
+
+ return;
+}
+#endif
+
+#if defined(CONFIG_MV643XX_ETH)
+static void __init
+katana_fixup_eth_pdata(struct platform_device *pdev)
+{
+ struct mv64xxx_eth_platform_data *eth_pd;
+ static u16 phy_addr[] = {
+ KATANA_ETH0_PHY_ADDR,
+ KATANA_ETH1_PHY_ADDR,
+ KATANA_ETH2_PHY_ADDR,
+ };
+ int rx_size = KATANA_ETH_RX_QUEUE_SIZE * MV64340_ETH_DESC_SIZE;
+ int tx_size = KATANA_ETH_TX_QUEUE_SIZE * MV64340_ETH_DESC_SIZE;
+
+ eth_pd = pdev->dev.platform_data;
+ eth_pd->force_phy_addr = 1;
+ eth_pd->phy_addr = phy_addr[pdev->id];
+ eth_pd->tx_queue_size = KATANA_ETH_TX_QUEUE_SIZE;
+ eth_pd->rx_queue_size = KATANA_ETH_RX_QUEUE_SIZE;
+ eth_pd->tx_sram_addr = mv643xx_sram_alloc(tx_size);
+
+ if (eth_pd->tx_sram_addr)
+ eth_pd->tx_sram_size = tx_size;
+ else
+ printk(KERN_ERR "mv643xx_sram_alloc failed\n");
+
+ eth_pd->rx_sram_addr = mv643xx_sram_alloc(rx_size);
+ if (eth_pd->rx_sram_addr)
+ eth_pd->rx_sram_size = rx_size;
+ else
+ printk(KERN_ERR "mv643xx_sram_alloc failed\n");
+}
+#endif
+
+static int __init
+katana_platform_notify(struct device *dev)
+{
+ static struct {
+ char *bus_id;
+ void ((*rtn)(struct platform_device *pdev));
+ } dev_map[] = {
+#if defined(CONFIG_SERIAL_MPSC)
+ { MPSC_CTLR_NAME "0", katana_fixup_mpsc_pdata },
+ { MPSC_CTLR_NAME "1", katana_fixup_mpsc_pdata },
+#endif
+#if defined(CONFIG_MV643XX_ETH)
+ { MV64XXX_ETH_NAME "0", katana_fixup_eth_pdata },
+ { MV64XXX_ETH_NAME "1", katana_fixup_eth_pdata },
+ { MV64XXX_ETH_NAME "2", katana_fixup_eth_pdata },
+#endif
+ };
+ struct platform_device *pdev;
+ int i;
+
+ if (dev && dev->bus_id)
+ for (i=0; i<ARRAY_SIZE(dev_map); i++)
+ if (!strncmp(dev->bus_id, dev_map[i].bus_id,
+ BUS_ID_SIZE)) {
+
+ pdev = container_of(dev,
+ struct platform_device, dev);
+ dev_map[i].rtn(pdev);
+ }
+
+ return 0;
+}
+
+static void
+katana_restart(char *cmd)
+{
+ volatile ulong i = 10000000;
+
+ /* issue hard reset to the reset command register */
+ out_8((volatile char *)(cpld_base + KATANA_CPLD_RST_CMD),
+ KATANA_CPLD_RST_CMD_HR);
+
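+	/* spin for a while to give the CPLD time to reset the board */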
+ while (i-- > 0) ;
+ panic("restart failed\n");
+}
+
+static void
+katana_halt(void)
+{
+ while (1) ;
+ /* NOTREACHED */
+}
+
+static void
+katana_power_off(void)
+{
+ katana_halt();
+ /* NOTREACHED */
+}
+
+static int
+katana_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "vendor\t\t: Artesyn Communication Products, LLC\n");
+
+ seq_printf(m, "board\t\t: ");
+
+ switch (katana_id) {
+ case KATANA_ID_3750:
+ seq_printf(m, "Katana 3750\n");
+ break;
+
+ case KATANA_ID_750I:
+ seq_printf(m, "Katana 750i\n");
+ break;
+
+ case KATANA_ID_752I:
+ seq_printf(m, "Katana 752i\n");
+ break;
+
+ default:
+ seq_printf(m, "Unknown\n");
+ break;
+ }
+
+ seq_printf(m, "product ID\t: 0x%x\n",
+ in_8((volatile char *)(cpld_base + KATANA_CPLD_PRODUCT_ID)));
+ seq_printf(m, "hardware rev\t: 0x%x\n",
+ in_8((volatile char *)(cpld_base+KATANA_CPLD_HARDWARE_VER)));
+ seq_printf(m, "PLD rev\t\t: 0x%x\n",
+ in_8((volatile char *)(cpld_base + KATANA_CPLD_PLD_VER)));
+	seq_printf(m, "PLB freq\t: %ldMHz\n", katana_bus_freq() / 1000000);
+ seq_printf(m, "PCI\t\t: %sMonarch\n", katana_is_monarch()? "" : "Non-");
+
+ return 0;
+}
+
+static void __init
+katana_calibrate_decr(void)
+{
+ ulong freq;
+
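+	/* the 750's timebase/decrementer ticks at one quarter of the bus clock */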
+ freq = katana_bus_freq() / 4;
+
+ printk(KERN_INFO "time_init: decrementer frequency = %lu.%.6lu MHz\n",
+ freq / 1000000, freq % 1000000);
+
+ tb_ticks_per_jiffy = freq / HZ;
+ tb_to_us = mulhwu_scale_factor(freq, 1000000);
+
+ return;
+}
+
+unsigned long __init
+katana_find_end_of_memory(void)
+{
+ return mv64x60_get_mem_size(KATANA_BRIDGE_REG_BASE,
+ MV64x60_TYPE_MV64360);
+}
+
+static inline void
+katana_set_bat(void)
+{
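+	/* BAT-map 0xf0000000-0xffffffff (256MB, cache-inhibited, guarded) so the
+	 * bridge registers are reachable before the real I/O mappings exist */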
+ mb();
+ mtspr(DBAT2U, 0xf0001ffe);
+ mtspr(DBAT2L, 0xf000002a);
+ mb();
+
+ return;
+}
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) && defined(CONFIG_SERIAL_MPSC_CONSOLE)
+static void __init
+katana_map_io(void)
+{
+ io_block_mapping(0xf8100000, 0xf8100000, 0x00020000, _PAGE_IO);
+}
+#endif
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ parse_bootinfo(find_bootinfo());
+
+ isa_mem_base = 0;
+
+ ppc_md.setup_arch = katana_setup_arch;
+ ppc_md.show_cpuinfo = katana_show_cpuinfo;
+ ppc_md.init_IRQ = mv64360_init_irq;
+ ppc_md.get_irq = mv64360_get_irq;
+ ppc_md.restart = katana_restart;
+ ppc_md.power_off = katana_power_off;
+ ppc_md.halt = katana_halt;
+ ppc_md.find_end_of_memory = katana_find_end_of_memory;
+ ppc_md.calibrate_decr = katana_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) && defined(CONFIG_SERIAL_MPSC_CONSOLE)
+ ppc_md.setup_io_mappings = katana_map_io;
+ ppc_md.progress = mv64x60_mpsc_progress;
+ mv64x60_progress_init(KATANA_BRIDGE_REG_BASE);
+#endif
+
+#if defined(CONFIG_SERIAL_MPSC) || defined(CONFIG_MV643XX_ETH)
+ platform_notify = katana_platform_notify;
+#endif
+
+	katana_set_bat(); /* Needed for katana_find_end_of_memory and progress */
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/katana.h
+ *
+ * Definitions for Artesyn Katana750i/3750 board.
+ *
+ * Tim Montgomery <timm@artesyncp.com>
+ *
+ * Based on code done by Rabeeh Khoury - rabeeh@galileo.co.il
+ * Based on code done by Mark A. Greer <mgreer@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+/*
+ * The MV64360 has 2 PCI buses each with 1 window from the CPU bus to
+ * PCI I/O space and 4 windows from the CPU bus to PCI MEM space.
+ * We'll only use one PCI MEM window on each PCI bus.
+ *
+ * This is the CPU physical memory map (windows must be at least 1MB and start
+ * on a boundary that is a multiple of the window size):
+ *
+ * 0xff800000-0xffffffff - Boot window
+ * 0xf8400000-0xf85fffff - Internal SRAM
+ * 0xf8200000-0xf823ffff - CPLD
+ * 0xf8100000-0xf810ffff - MV64360 Registers
+ * 0xf8000000-0xf80fffff - PLCC socket
+ * 0xf0000000-0xf01fffff - Consistent memory pool
+ * 0xe8000000-0xefffffff - soldered flash
+ * 0xc0000000-0xc0ffffff - PCI I/O
+ * 0x80000000-0xbfffffff - PCI MEM
+ */
+
+#ifndef __PPC_PLATFORMS_KATANA_H
+#define __PPC_PLATFORMS_KATANA_H
+
+/* CPU Physical Memory Map setup. */
+#define KATANA_BOOT_WINDOW_BASE 0xff800000
+#define KATANA_INTERNAL_SRAM_BASE 0xf8400000
+#define KATANA_CPLD_BASE 0xf8200000
+#define KATANA_BRIDGE_REG_BASE 0xf8100000
+#define KATANA_SOCKET_BASE 0xf8000000
+#define KATANA_SOLDERED_FLASH_BASE 0xe8000000
+
+#define KATANA_BOOT_WINDOW_SIZE_ACTUAL 0x00800000 /* 8MB */
+#define KATANA_CPLD_SIZE_ACTUAL 0x00020000 /* 128KB */
+#define KATANA_SOCKETED_FLASH_SIZE_ACTUAL 0x00080000 /* 512KB */
+#define KATANA_SOLDERED_FLASH_SIZE_ACTUAL 0x02000000 /* 32MB */
+
+#define KATANA_BOOT_WINDOW_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ KATANA_BOOT_WINDOW_SIZE_ACTUAL)
+#define KATANA_CPLD_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ KATANA_CPLD_SIZE_ACTUAL)
+#define KATANA_SOCKETED_FLASH_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ KATANA_SOCKETED_FLASH_SIZE_ACTUAL)
+#define KATANA_SOLDERED_FLASH_SIZE max(MV64360_WINDOW_SIZE_MIN, \
+ KATANA_SOLDERED_FLASH_SIZE_ACTUAL)
+
+#define KATANA_PCI1_MEM_START_PROC_ADDR 0x80000000
+#define KATANA_PCI1_MEM_START_PCI_HI_ADDR 0x00000000
+#define KATANA_PCI1_MEM_START_PCI_LO_ADDR 0x80000000
+#define KATANA_PCI1_MEM_SIZE 0x40000000
+#define KATANA_PCI1_IO_START_PROC_ADDR 0xc0000000
+#define KATANA_PCI1_IO_START_PCI_ADDR 0x00000000
+#define KATANA_PCI1_IO_SIZE 0x01000000
+
+/* Board-specific IRQ info */
+#define KATANA_PCI_INTA_IRQ_3750 64+8
+#define KATANA_PCI_INTB_IRQ_3750 64+9
+#define KATANA_PCI_INTC_IRQ_3750 64+10
+
+#define KATANA_PCI_INTA_IRQ_750i 64+8
+#define KATANA_PCI_INTB_IRQ_750i 64+9
+#define KATANA_PCI_INTC_IRQ_750i 64+10
+#define KATANA_PCI_INTD_IRQ_750i 64+14
+
+#define KATANA_CPLD_RST_EVENT 0x00000000
+#define KATANA_CPLD_RST_CMD 0x00001000
+#define KATANA_CPLD_PCI_ERR_INT_EN 0x00002000
+#define KATANA_CPLD_PCI_ERR_INT_PEND 0x00003000
+#define KATANA_CPLD_PRODUCT_ID 0x00004000
+#define KATANA_CPLD_EREADY 0x00005000
+
+#define KATANA_CPLD_HARDWARE_VER 0x00007000
+#define KATANA_CPLD_PLD_VER 0x00008000
+#define KATANA_CPLD_BD_CFG_0 0x00009000
+#define KATANA_CPLD_BD_CFG_1 0x0000a000
+#define KATANA_CPLD_BD_CFG_3 0x0000c000
+#define KATANA_CPLD_LED 0x0000d000
+#define KATANA_CPLD_RESET_OUT 0x0000e000
+
+#define KATANA_CPLD_RST_EVENT_INITACT 0x80
+#define KATANA_CPLD_RST_EVENT_SW 0x40
+#define KATANA_CPLD_RST_EVENT_WD 0x20
+#define KATANA_CPLD_RST_EVENT_COPS 0x10
+#define KATANA_CPLD_RST_EVENT_COPH 0x08
+#define KATANA_CPLD_RST_EVENT_CPCI 0x02
+#define KATANA_CPLD_RST_EVENT_FP 0x01
+
+#define KATANA_CPLD_RST_CMD_SCL 0x80
+#define KATANA_CPLD_RST_CMD_SDA 0x40
+#define KATANA_CPLD_RST_CMD_I2C 0x10
+#define KATANA_CPLD_RST_CMD_FR 0x08
+#define KATANA_CPLD_RST_CMD_SR 0x04
+#define KATANA_CPLD_RST_CMD_HR 0x01
+
+#define KATANA_CPLD_BD_CFG_0_SYSCLK_MASK 0xc0
+#define KATANA_CPLD_BD_CFG_0_SYSCLK_200 0x00
+#define KATANA_CPLD_BD_CFG_0_SYSCLK_166 0x80
+#define KATANA_CPLD_BD_CFG_0_SYSCLK_133 0xc0
+#define KATANA_CPLD_BD_CFG_0_SYSCLK_100 0x40
+
+#define KATANA_CPLD_BD_CFG_1_FL_BANK_MASK 0x03
+#define KATANA_CPLD_BD_CFG_1_FL_BANK_16MB 0x00
+#define KATANA_CPLD_BD_CFG_1_FL_BANK_32MB 0x01
+#define KATANA_CPLD_BD_CFG_1_FL_BANK_64MB 0x02
+#define KATANA_CPLD_BD_CFG_1_FL_BANK_128MB 0x03
+
+#define KATANA_CPLD_BD_CFG_1_FL_NUM_BANKS_MASK 0x04
+#define KATANA_CPLD_BD_CFG_1_FL_NUM_BANKS_ONE 0x00
+#define KATANA_CPLD_BD_CFG_1_FL_NUM_BANKS_TWO 0x04
+
+#define KATANA_CPLD_BD_CFG_3_MONARCH 0x04
+
+#define KATANA_CPLD_RESET_OUT_PORTSEL 0x80
+#define KATANA_CPLD_RESET_OUT_WD 0x20
+#define KATANA_CPLD_RESET_OUT_COPH 0x08
+#define KATANA_CPLD_RESET_OUT_PCI_RST_PCI 0x02
+#define KATANA_CPLD_RESET_OUT_PCI_RST_FP 0x01
+
+#define KATANA_MBOX_RESET_REQUEST 0xC83A
+#define KATANA_MBOX_RESET_ACK 0xE430
+#define KATANA_MBOX_RESET_DONE 0x32E5
+
+#define HSL_PLD_BASE 0x00010000
+#define HSL_PLD_J4SGA_REG_OFF 0
+#define HSL_PLD_J4GA_REG_OFF 1
+#define HSL_PLD_J2GA_REG_OFF 2
+#define GA_MASK 0x1f
+#define HSL_PLD_SIZE 0x1000
+#define K3750_GPP_GEO_ADDR_PINS 0xf8000000
+#define K3750_GPP_GEO_ADDR_SHIFT 27
+
+#define K3750_GPP_EVENT_PROC_0 (1 << 21)
+#define K3750_GPP_EVENT_PROC_1_2 (1 << 2)
+
+#define PCI_VENDOR_ID_ARTESYN 0x1223
+#define PCI_DEVICE_ID_KATANA_3750_PROC0 0x0041
+#define PCI_DEVICE_ID_KATANA_3750_PROC1 0x0042
+#define PCI_DEVICE_ID_KATANA_3750_PROC2 0x0043
+
+#define COPROC_MEM_FUNCTION 0
+#define COPROC_MEM_BAR 0
+#define COPROC_REGS_FUNCTION 0
+#define COPROC_REGS_BAR 4
+#define COPROC_FLASH_FUNCTION 2
+#define COPROC_FLASH_BAR 4
+
+#define KATANA_IPMB_LOCAL_I2C_ADDR 0x08
+
+#define KATANA_DEFAULT_BAUD 9600
+#define KATANA_MPSC_CLK_SRC 8 /* TCLK */
+#define KATANA_MPSC_CLK_FREQ 133333333 /* 133.3333... MHz */
+
+#define KATANA_ETH0_PHY_ADDR 12
+#define KATANA_ETH1_PHY_ADDR 11
+#define KATANA_ETH2_PHY_ADDR 4
+
+#define KATANA_PRODUCT_ID_3750 0x01
+#define KATANA_PRODUCT_ID_750i 0x02
+#define KATANA_PRODUCT_ID_752i 0x04
+
+#define KATANA_ETH_TX_QUEUE_SIZE 800
+#define KATANA_ETH_RX_QUEUE_SIZE 400
+
+#define KATANA_ETH_PORT_CONFIG_VALUE \
+ ETH_UNICAST_NORMAL_MODE | \
+ ETH_DEFAULT_RX_QUEUE_0 | \
+ ETH_DEFAULT_RX_ARP_QUEUE_0 | \
+ ETH_RECEIVE_BC_IF_NOT_IP_OR_ARP | \
+ ETH_RECEIVE_BC_IF_IP | \
+ ETH_RECEIVE_BC_IF_ARP | \
+ ETH_CAPTURE_TCP_FRAMES_DIS | \
+ ETH_CAPTURE_UDP_FRAMES_DIS | \
+ ETH_DEFAULT_RX_TCP_QUEUE_0 | \
+ ETH_DEFAULT_RX_UDP_QUEUE_0 | \
+ ETH_DEFAULT_RX_BPDU_QUEUE_0
+
+#define KATANA_ETH_PORT_CONFIG_EXTEND_VALUE \
+ ETH_SPAN_BPDU_PACKETS_AS_NORMAL | \
+ ETH_PARTITION_DISABLE
+
+#define GT_ETH_IPG_INT_RX(value) \
+ ((value & 0x3fff) << 8)
+
+#define KATANA_ETH_PORT_SDMA_CONFIG_VALUE \
+ ETH_RX_BURST_SIZE_4_64BIT | \
+ GT_ETH_IPG_INT_RX(0) | \
+ ETH_TX_BURST_SIZE_4_64BIT
+
+#define KATANA_ETH_PORT_SERIAL_CONTROL_VALUE \
+ ETH_FORCE_LINK_PASS | \
+ ETH_ENABLE_AUTO_NEG_FOR_DUPLX | \
+ ETH_DISABLE_AUTO_NEG_FOR_FLOW_CTRL | \
+ ETH_ADV_SYMMETRIC_FLOW_CTRL | \
+ ETH_FORCE_FC_MODE_NO_PAUSE_DIS_TX | \
+ ETH_FORCE_BP_MODE_NO_JAM | \
+ BIT9 | \
+ ETH_DO_NOT_FORCE_LINK_FAIL | \
+ ETH_RETRANSMIT_16_ATTEMPTS | \
+ ETH_ENABLE_AUTO_NEG_SPEED_GMII | \
+ ETH_DTE_ADV_0 | \
+ ETH_DISABLE_AUTO_NEG_BYPASS | \
+ ETH_AUTO_NEG_NO_CHANGE | \
+ ETH_MAX_RX_PACKET_9700BYTE | \
+ ETH_CLR_EXT_LOOPBACK | \
+ ETH_SET_FULL_DUPLEX_MODE | \
+ ETH_ENABLE_FLOW_CTRL_TX_RX_IN_FULL_DUPLEX
+
+#ifndef __ASSEMBLY__
+
+typedef enum {
+ KATANA_ID_3750,
+ KATANA_ID_750I,
+ KATANA_ID_752I,
+ KATANA_ID_MAX
+} katana_id_t;
+
+#endif
+
+#endif /* __PPC_PLATFORMS_KATANA_H */
static void
pal4_restart(char *cmd)
{
- __cli();
+ local_irq_disable();
__asm__ __volatile__("lis 3,0xfff0\n \
ori 3,3,0x100\n \
mtspr 26,3\n \
static void
pal4_power_off(void)
{
- __cli();
+ local_irq_disable();
for(;;);
}
--- /dev/null
+/*
+ * This file contains low-level cache management functions
+ * used for sleep and CPU speed changes on Apple machines.
+ * (In fact the only thing that is Apple-specific is that we assume
+ * that we can read from ROM at physical address 0xfff00000.)
+ *
+ * Copyright (C) 2004 Paul Mackerras (paulus@samba.org) and
+ * Benjamin Herrenschmidt (benh@kernel.crashing.org)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/ppc_asm.h>
+#include <asm/cputable.h>
+
+/*
+ * Flush and disable all data caches (dL1, L2, L3). This is used
+ * when going to sleep, when doing a PMU based cpufreq transition,
+ * or when "offlining" a CPU on SMP machines. This code is
+ * over-paranoid, but I've had enough issues with various CPU revs
+ * and bugs that I decided it was worth being over-cautious.
+ */
+
+_GLOBAL(flush_disable_caches)
+BEGIN_FTR_SECTION
+ b flush_disable_745x
+END_FTR_SECTION_IFSET(CPU_FTR_SPEC7450)
+BEGIN_FTR_SECTION
+ b flush_disable_75x
+END_FTR_SECTION_IFSET(CPU_FTR_L2CR)
+ b __flush_disable_L1
+
+/* This is the code for G3 and 74[01]0 */
+flush_disable_75x:
+ mflr r10
+
+ /* Turn off EE and DR in MSR */
+ mfmsr r11
+ rlwinm r0,r11,0,~MSR_EE
+ rlwinm r0,r0,0,~MSR_DR
+ sync
+ mtmsr r0
+ isync
+
+ /* Stop DST streams */
+BEGIN_FTR_SECTION
+ DSSALL
+ sync
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+
+ /* Stop DPM */
+ mfspr r8,SPRN_HID0 /* Save HID0 in r8 */
+ rlwinm r4,r8,0,12,10 /* Turn off HID0[DPM] */
+ sync
+ mtspr SPRN_HID0,r4 /* Disable DPM */
+ sync
+
+ /* disp-flush L1 */
+ li r4,0x4000
+ mtctr r4
+ lis r4,0xfff0
+1: lwzx r0,r0,r4
+ addi r4,r4,32
+ bdnz 1b
+ sync
+ isync
+
+ /* disable / invalidate / enable L1 data */
+ mfspr r3,SPRN_HID0
+	rlwinm	r3,r3,0,~HID0_DCE
+ mtspr SPRN_HID0,r3
+ sync
+ isync
+ ori r3,r3,HID0_DCE|HID0_DCI
+ sync
+ isync
+ mtspr SPRN_HID0,r3
+ xori r3,r3,HID0_DCI
+ mtspr SPRN_HID0,r3
+ sync
+
+	/* Get the current L2CR value into r5 */
+ mfspr r5,L2CR
+ /* Set to data-only (pre-745x bit) */
+ oris r3,r5,L2CR_L2DO@h
+ b 2f
+ /* When disabling L2, code must be in L1 */
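+	/* (the mtspr/sync/branch sequence below fits in one aligned 32-byte
+	 *  line; the branches execute the tail of the line first so the whole
+	 *  line is already in the L1 icache when the mtspr runs) */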
+ .balign 32
+1: mtspr L2CR,r3
+3: sync
+ isync
+ b 1f
+2: b 3f
+3: sync
+ isync
+ b 1b
+1: /* disp-flush L2. The interesting thing here is that the L2 can be
+	 * up to 2MB ... so using the ROM, we'll end up wrapping back to memory
+	 * but that is probably fine. We disp-flush over 4MB to be safe
+ */
+ lis r4,2
+ mtctr r4
+ lis r4,0xfff0
+1: lwzx r0,r0,r4
+ addi r4,r4,32
+ bdnz 1b
+ sync
+ isync
+ /* now disable L2 */
+ rlwinm r5,r5,0,~L2CR_L2E
+ b 2f
+ /* When disabling L2, code must be in L1 */
+ .balign 32
+1: mtspr L2CR,r5
+3: sync
+ isync
+ b 1f
+2: b 3f
+3: sync
+ isync
+ b 1b
+1: sync
+ isync
+ /* Invalidate L2. This is pre-745x, we clear the L2I bit ourselves */
+ oris r4,r5,L2CR_L2I@h
+ mtspr L2CR,r4
+ sync
+ isync
+ xoris r4,r4,L2CR_L2I@h
+ sync
+ mtspr L2CR,r4
+ sync
+
+ /* now disable the L1 data cache */
+ mfspr r0,HID0
+ rlwinm r0,r0,0,~HID0_DCE
+ mtspr HID0,r0
+ sync
+ isync
+
+ /* Restore HID0[DPM] to whatever it was before */
+ sync
+ mtspr SPRN_HID0,r8
+ sync
+
+ /* restore DR and EE */
+ sync
+ mtmsr r11
+ isync
+
+ mtlr r10
+ blr
+
+/* This code is for 745x processors */
+flush_disable_745x:
+ /* Turn off EE and DR in MSR */
+ mfmsr r11
+ rlwinm r0,r11,0,~MSR_EE
+ rlwinm r0,r0,0,~MSR_DR
+ sync
+ mtmsr r0
+ isync
+
+ /* Stop prefetch streams */
+ DSSALL
+ sync
+
+ /* Disable L2 prefetching */
+ mfspr r0,SPRN_MSSCR0
+ rlwinm r0,r0,0,0,29
+ mtspr SPRN_MSSCR0,r0
+ sync
+ isync
+ lis r4,0
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+ dcbf 0,r4
+
+ /* Due to a bug with the HW flush on some CPU revs, we occasionally
+ * experience data corruption. I'm adding a displacement flush along
+	 * with a dcbf loop over a few MB to "help". The problem isn't totally
+ * fixed by this in theory, but at least, in practice, I couldn't reproduce
+ * it even with a big hammer...
+ */
+
+ lis r4,0x0002
+ mtctr r4
+ li r4,0
+1:
+ lwzx r0,r0,r4
+ addi r4,r4,32 /* Go to start of next cache line */
+ bdnz 1b
+ isync
+
+ /* Now, flush the first 4MB of memory */
+ lis r4,0x0002
+ mtctr r4
+ li r4,0
+ sync
+1:
+ dcbf 0,r4
+ addi r4,r4,32 /* Go to start of next cache line */
+ bdnz 1b
+
+ /* Flush and disable the L1 data cache */
+ mfspr r6,SPRN_LDSTCR
+ lis r3,0xfff0 /* read from ROM for displacement flush */
+ li r4,0xfe /* start with only way 0 unlocked */
+ li r5,128 /* 128 lines in each way */
+1: mtctr r5
+ rlwimi r6,r4,0,24,31
+ mtspr SPRN_LDSTCR,r6
+ sync
+ isync
+2: lwz r0,0(r3) /* touch each cache line */
+ addi r3,r3,32
+ bdnz 2b
+ rlwinm r4,r4,1,24,30 /* move on to the next way */
+ ori r4,r4,1
+ cmpwi r4,0xff /* all done? */
+ bne 1b
+ /* now unlock the L1 data cache */
+ li r4,0
+ rlwimi r6,r4,0,24,31
+ sync
+ mtspr SPRN_LDSTCR,r6
+ sync
+ isync
+
+ /* Flush the L2 cache using the hardware assist */
+ mfspr r3,L2CR
+ cmpwi r3,0 /* check if it is enabled first */
+ bge 4f
+ oris r0,r3,(L2CR_L2IO_745x|L2CR_L2DO_745x)@h
+ b 2f
+ /* When disabling/locking L2, code must be in L1 */
+ .balign 32
+1: mtspr L2CR,r0 /* lock the L2 cache */
+3: sync
+ isync
+ b 1f
+2: b 3f
+3: sync
+ isync
+ b 1b
+1: sync
+ isync
+ ori r0,r3,L2CR_L2HWF_745x
+ sync
+ mtspr L2CR,r0 /* set the hardware flush bit */
+3: mfspr r0,L2CR /* wait for it to go to 0 */
+ andi. r0,r0,L2CR_L2HWF_745x
+ bne 3b
+ sync
+ rlwinm r3,r3,0,~L2CR_L2E
+ b 2f
+ /* When disabling L2, code must be in L1 */
+ .balign 32
+1: mtspr L2CR,r3 /* disable the L2 cache */
+3: sync
+ isync
+ b 1f
+2: b 3f
+3: sync
+ isync
+ b 1b
+1: sync
+ isync
+ oris r4,r3,L2CR_L2I@h
+ mtspr L2CR,r4
+ sync
+ isync
+1: mfspr r4,L2CR
+ andis. r0,r4,L2CR_L2I@h
+ bne 1b
+ sync
+
+BEGIN_FTR_SECTION
+ /* Flush the L3 cache using the hardware assist */
+4: mfspr r3,L3CR
+ cmpwi r3,0 /* check if it is enabled */
+ bge 6f
+ oris r0,r3,L3CR_L3IO@h
+ ori r0,r0,L3CR_L3DO
+ sync
+ mtspr L3CR,r0 /* lock the L3 cache */
+ sync
+ isync
+ ori r0,r0,L3CR_L3HWF
+ sync
+ mtspr L3CR,r0 /* set the hardware flush bit */
+5: mfspr r0,L3CR /* wait for it to go to zero */
+ andi. r0,r0,L3CR_L3HWF
+ bne 5b
+ rlwinm r3,r3,0,~L3CR_L3E
+ sync
+ mtspr L3CR,r3 /* disable the L3 cache */
+ sync
+ ori r4,r3,L3CR_L3I
+ mtspr SPRN_L3CR,r4
+1: mfspr r4,SPRN_L3CR
+ andi. r0,r4,L3CR_L3I
+ bne 1b
+ sync
+END_FTR_SECTION_IFSET(CPU_FTR_L3CR)
+
+6: mfspr r0,HID0 /* now disable the L1 data cache */
+ rlwinm r0,r0,0,~HID0_DCE
+ mtspr HID0,r0
+ sync
+ isync
+ mtmsr r11 /* restore DR and EE */
+ isync
+ blr
static int nvram_mult, is_core_99;
static int core99_bank = 0;
static int nvram_partitions[3];
-static spinlock_t nv_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nv_lock);
extern int pmac_newworld;
extern int system_running;
extern u8 pci_cache_line_size;
extern int pcibios_assign_bus_offset;
-struct pci_dev *k2_skiplist[2];
+struct device_node *k2_skiplist[2];
/*
* Magic constants for enabling cache coherency in the bandit/PSX bridge.
|(((unsigned long)(off)) & 0xFCUL) \
|1UL)
-static unsigned int __pmac
+static void volatile __iomem * __pmac
macrisc_cfg_access(struct pci_controller* hose, u8 bus, u8 dev_fn, u8 offset)
{
unsigned int caddr;
if (bus == hose->first_busno) {
if (dev_fn < (11 << 3))
- return 0;
+ return NULL;
caddr = MACRISC_CFA0(dev_fn, offset);
} else
caddr = MACRISC_CFA1(bus, dev_fn, offset);
} while (in_le32(hose->cfg_addr) != caddr);
offset &= has_uninorth ? 0x07 : 0x03;
- return (unsigned int)(hose->cfg_data) + (unsigned int)offset;
+ return hose->cfg_data + offset;
}
static int __pmac
int len, u32 *val)
{
struct pci_controller *hose = bus->sysdata;
- unsigned int addr;
+ void volatile __iomem *addr;
addr = macrisc_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
*/
switch (len) {
case 1:
- *val = in_8((u8 *)addr);
+ *val = in_8(addr);
break;
case 2:
- *val = in_le16((u16 *)addr);
+ *val = in_le16(addr);
break;
default:
- *val = in_le32((u32 *)addr);
+ *val = in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
int len, u32 val)
{
struct pci_controller *hose = bus->sysdata;
- unsigned int addr;
+ void volatile __iomem *addr;
addr = macrisc_cfg_access(hose, bus->number, devfn, offset);
if (!addr)
*/
switch (len) {
case 1:
- out_8((u8 *)addr, val);
- (void) in_8((u8 *)addr);
+ out_8(addr, val);
+ (void) in_8(addr);
break;
case 2:
- out_le16((u16 *)addr, val);
- (void) in_le16((u16 *)addr);
+ out_le16(addr, val);
+ (void) in_le16(addr);
break;
default:
- out_le32((u32 *)addr, val);
- (void) in_le32((u32 *)addr);
+ out_le32(addr, val);
+ (void) in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
+ (((unsigned long)bus) << 16) \
+ 0x01000000UL)
-static unsigned long __pmac
+static void volatile __iomem * __pmac
u3_ht_cfg_access(struct pci_controller* hose, u8 bus, u8 devfn, u8 offset)
{
if (bus == hose->first_busno) {
if (PCI_FUNC(devfn) != 0 || PCI_SLOT(devfn) > 7 ||
PCI_SLOT(devfn) < 1)
return 0;
- return ((unsigned long)hose->cfg_data) + U3_HT_CFA0(devfn, offset);
+ return hose->cfg_data + U3_HT_CFA0(devfn, offset);
} else
- return ((unsigned long)hose->cfg_data) + U3_HT_CFA1(bus, devfn, offset);
+ return hose->cfg_data + U3_HT_CFA1(bus, devfn, offset);
}
static int __pmac
int len, u32 *val)
{
struct pci_controller *hose = bus->sysdata;
- unsigned int addr;
+ void volatile __iomem *addr;
int i;
struct device_node *np = pci_busdev_to_OF_node(bus, devfn);
* cycle accesses. Fix that here.
*/
for (i=0; i<2; i++)
- if (k2_skiplist[i] && k2_skiplist[i]->bus == bus &&
- k2_skiplist[i]->devfn == devfn) {
+ if (k2_skiplist[i] == np) {
switch (len) {
case 1:
*val = 0xff; break;
*/
switch (len) {
case 1:
- *val = in_8((u8 *)addr);
+ *val = in_8(addr);
break;
case 2:
- *val = in_le16((u16 *)addr);
+ *val = in_le16(addr);
break;
default:
- *val = in_le32((u32 *)addr);
+ *val = in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
int len, u32 val)
{
struct pci_controller *hose = bus->sysdata;
- unsigned int addr;
+ void volatile __iomem *addr;
int i;
struct device_node *np = pci_busdev_to_OF_node(bus, devfn);
* cycle accesses. Fix that here.
*/
for (i=0; i<2; i++)
- if (k2_skiplist[i] && k2_skiplist[i]->bus == bus &&
- k2_skiplist[i]->devfn == devfn)
+ if (k2_skiplist[i] == np)
return PCIBIOS_SUCCESSFUL;
addr = u3_ht_cfg_access(hose, bus->number, devfn, offset);
*/
switch (len) {
case 1:
- out_8((u8 *)addr, val);
- (void) in_8((u8 *)addr);
+ out_8(addr, val);
+ (void) in_8(addr);
break;
case 2:
- out_le16((u16 *)addr, val);
- (void) in_le16((u16 *)addr);
+ out_le16(addr, val);
+ (void) in_le16(addr);
break;
default:
- out_le32((u32 *)addr, val);
- (void) in_le32((u32 *)addr);
+ out_le32(addr, val);
+ (void) in_le32(addr);
break;
}
return PCIBIOS_SUCCESSFUL;
/* read the word at offset 0 in config space for device 11 */
out_le32(bp->cfg_addr, (1UL << BANDIT_DEVNUM) + PCI_VENDOR_ID);
udelay(2);
- vendev = in_le32((volatile unsigned int *)bp->cfg_data);
+ vendev = in_le32(bp->cfg_data);
if (vendev == (PCI_DEVICE_ID_APPLE_BANDIT << 16) +
PCI_VENDOR_ID_APPLE) {
/* read the revision id */
/* read the word at offset 0x50 */
out_le32(bp->cfg_addr, (1UL << BANDIT_DEVNUM) + BANDIT_MAGIC);
udelay(2);
- magic = in_le32((volatile unsigned int *)bp->cfg_data);
+ magic = in_le32(bp->cfg_data);
if ((magic & BANDIT_COHERENT) != 0)
return;
magic |= BANDIT_COHERENT;
udelay(2);
- out_le32((volatile unsigned int *)bp->cfg_data, magic);
+ out_le32(bp->cfg_data, magic);
printk(KERN_INFO "Cache coherency enabled for bandit/PSX\n");
}
unsigned int val;
out_be32(bp->cfg_addr, GRACKLE_CFA(0, 0, 0xa8));
- val = in_le32((volatile unsigned int *)bp->cfg_data);
+ val = in_le32(bp->cfg_data);
val = enable? (val | GRACKLE_PICR1_STG) :
(val & ~GRACKLE_PICR1_STG);
out_be32(bp->cfg_addr, GRACKLE_CFA(0, 0, 0xa8));
- out_le32((volatile unsigned int *)bp->cfg_data, val);
- (void)in_le32((volatile unsigned int *)bp->cfg_data);
+ out_le32(bp->cfg_data, val);
+ (void)in_le32(bp->cfg_data);
}
static inline void grackle_set_loop_snoop(struct pci_controller *bp, int enable)
unsigned int val;
out_be32(bp->cfg_addr, GRACKLE_CFA(0, 0, 0xa8));
- val = in_le32((volatile unsigned int *)bp->cfg_data);
+ val = in_le32(bp->cfg_data);
val = enable? (val | GRACKLE_PICR1_LOOPSNOOP) :
(val & ~GRACKLE_PICR1_LOOPSNOOP);
out_be32(bp->cfg_addr, GRACKLE_CFA(0, 0, 0xa8));
- out_le32((volatile unsigned int *)bp->cfg_data, val);
- (void)in_le32((volatile unsigned int *)bp->cfg_data);
+ out_le32(bp->cfg_data, val);
+ (void)in_le32(bp->cfg_data);
}
static int __init
setup_bandit(struct pci_controller* hose, struct reg_property* addr)
{
hose->ops = ¯isc_pci_ops;
- hose->cfg_addr = (volatile unsigned int *)
- ioremap(addr->address + 0x800000, 0x1000);
- hose->cfg_data = (volatile unsigned char *)
- ioremap(addr->address + 0xc00000, 0x1000);
+ hose->cfg_addr = ioremap(addr->address + 0x800000, 0x1000);
+ hose->cfg_data = ioremap(addr->address + 0xc00000, 0x1000);
init_bandit(hose);
}
{
/* assume a `chaos' bridge */
hose->ops = &chaos_pci_ops;
- hose->cfg_addr = (volatile unsigned int *)
- ioremap(addr->address + 0x800000, 0x1000);
- hose->cfg_data = (volatile unsigned char *)
- ioremap(addr->address + 0xc00000, 0x1000);
+ hose->cfg_addr = ioremap(addr->address + 0x800000, 0x1000);
+ hose->cfg_data = ioremap(addr->address + 0xc00000, 0x1000);
}
#ifdef CONFIG_POWER4
* the reg address cell, we shall fix that by killing struct
* reg_property and using some accessor functions instead
*/
- hose->cfg_data = (volatile unsigned char *)ioremap(0xf2000000, 0x02000000);
+ hose->cfg_data = ioremap(0xf2000000, 0x02000000);
/*
* /ht node doesn't expose a "ranges" property, so we "remove" regions that
* (iBook second controller)
*/
if (dev->vendor == PCI_VENDOR_ID_APPLE
- && dev->device == PCI_DEVICE_ID_APPLE_KL_USB && !node)
+ && (dev->class == ((PCI_CLASS_SERIAL_USB << 8) | 0x10))
+ && !node) {
+ printk(KERN_INFO "Apple USB OHCI %s disabled by firmware\n",
+ pci_name(dev));
return -EINVAL;
+ }
if (!node)
return 0;
#include <asm/mpc8260.h>
void __init
-m82xx_board_init(void)
+m82xx_board_setup(void)
{
/* Enable the 2nd UART port */
- *(volatile uint *)(BCSR_ADDR + 4) &= ~BCSR1_RS232_EN2;
+ *(volatile uint *)(BCSR_ADDR + 4) &= ~BCSR1_RS232_EN2;
}
#define BCSR1_FETH_RST ((uint)0x04000000) /* 0 == reset */
#define BCSR1_RS232_EN1 ((uint)0x02000000) /* 0 == enable */
#define BCSR1_RS232_EN2 ((uint)0x01000000) /* 0 == enable */
+#define BCSR3_FETHIEN2 ((uint)0x10000000) /* 0 == enable */
+#define BCSR3_FETH2_RST ((uint)0x80000000) /* 0 == reset */
#define PHY_INTERRUPT SIU_INT_IRQ7
int len, u32 *val)
{
struct pci_controller *hose = bus->sysdata;
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
if (bus->number != 0 || DEVNO(devfn) < MIN_DEVNR
|| DEVNO(devfn) > MAX_DEVNR)
cfg_data = hose->cfg_data + CFGADDR(devfn) + offset;
switch (len) {
case 1:
- *val = in_8((u8 *)cfg_data);
+ *val = in_8(cfg_data);
break;
case 2:
- *val = in_le16((u16 *)cfg_data);
+ *val = in_le16(cfg_data);
break;
default:
- *val = in_le32((u32 *)cfg_data);
+ *val = in_le32(cfg_data);
break;
}
return PCIBIOS_SUCCESSFUL;
int len, u32 val)
{
struct pci_controller *hose = bus->sysdata;
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
if (bus->number != 0 || DEVNO(devfn) < MIN_DEVNR
|| DEVNO(devfn) > MAX_DEVNR)
cfg_data = hose->cfg_data + CFGADDR(devfn) + offset;
switch (len) {
case 1:
- out_8((u8 *)cfg_data, val);
+ out_8(cfg_data, val);
break;
case 2:
- out_le16((u16 *)cfg_data, val);
+ out_le16(cfg_data, val);
break;
default:
- out_le32((u32 *)cfg_data, val);
+ out_le32(cfg_data, val);
break;
}
return PCIBIOS_SUCCESSFUL;
#define SERIAL_BAUD 9600
+/* SERIAL_PORT_DFNS is defined in <asm/serial.h> */
+#ifndef SERIAL_PORT_DFNS
+#define SERIAL_PORT_DFNS
+#endif
+
static struct serial_state rs_table[RS_TABLE_SIZE] = {
SERIAL_PORT_DFNS /* defined in <asm/serial.h> */
};
rs_table[i].port = serial_req->iobase;
rs_table[i].iomem_base = serial_req->membase;
rs_table[i].iomem_reg_shift = serial_req->regshift;
+ rs_table[i].baud_base = serial_req->uartclk ? serial_req->uartclk / 16 : BASE_BAUD;
}
#ifdef CONFIG_SERIAL_TEXT_DEBUG
#define cached_A1 (cached_8259[0])
#define cached_21 (cached_8259[1])
-static spinlock_t i8259_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(i8259_lock);
int i8259_pic_irq_offset;
};
static struct resource pic1_iores = {
- "8259 (master)", 0x20, 0x21, IORESOURCE_BUSY
+ .name = "8259 (master)",
+ .start = 0x20,
+ .end = 0x21,
+ .flags = IORESOURCE_BUSY,
};
static struct resource pic2_iores = {
- "8259 (slave)", 0xa0, 0xa1, IORESOURCE_BUSY
+ .name = "8259 (slave)",
+ .start = 0xa0,
+ .end = 0xa1,
+ .flags = IORESOURCE_BUSY,
};
static struct resource pic_edgectrl_iores = {
- "8259 edge control", 0x4d0, 0x4d1, IORESOURCE_BUSY
+ .name = "8259 edge control",
+ .start = 0x4d0,
+ .end = 0x4d1,
+ .flags = IORESOURCE_BUSY,
};
static struct irqaction i8259_irqaction = {
--- /dev/null
+/*
+ * arch/ppc/syslib/ibm440sp_common.c
+ *
+ * PPC440SP system library
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2002-2005 MontaVista Software Inc.
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003, 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/serial.h>
+
+#include <asm/param.h>
+#include <asm/ibm44x.h>
+#include <asm/mmu.h>
+#include <asm/machdep.h>
+#include <asm/time.h>
+#include <asm/ppc4xx_pic.h>
+
+/*
+ * Read the 440SP memory controller to get size of system memory.
+ */
+unsigned long __init ibm440sp_find_end_of_memory(void)
+{
+ u32 i;
+ u32 mem_size = 0;
+
+ /* Read two bank sizes and sum */
+ for (i=0; i<2; i++)
+ switch (mfdcr(DCRN_MQ0_BS0BAS + i) & MQ0_CONFIG_SIZE_MASK) {
+ case MQ0_CONFIG_SIZE_8M:
+ mem_size += PPC44x_MEM_SIZE_8M;
+ break;
+ case MQ0_CONFIG_SIZE_16M:
+ mem_size += PPC44x_MEM_SIZE_16M;
+ break;
+ case MQ0_CONFIG_SIZE_32M:
+ mem_size += PPC44x_MEM_SIZE_32M;
+ break;
+ case MQ0_CONFIG_SIZE_64M:
+ mem_size += PPC44x_MEM_SIZE_64M;
+ break;
+ case MQ0_CONFIG_SIZE_128M:
+ mem_size += PPC44x_MEM_SIZE_128M;
+ break;
+ case MQ0_CONFIG_SIZE_256M:
+ mem_size += PPC44x_MEM_SIZE_256M;
+ break;
+ case MQ0_CONFIG_SIZE_512M:
+ mem_size += PPC44x_MEM_SIZE_512M;
+ break;
+ case MQ0_CONFIG_SIZE_1G:
+ mem_size += PPC44x_MEM_SIZE_1G;
+ break;
+ case MQ0_CONFIG_SIZE_2G:
+ mem_size += PPC44x_MEM_SIZE_2G;
+ break;
+ default:
+ break;
+ }
+ return mem_size;
+}
--- /dev/null
+/*
+ * arch/ppc/syslib/ibm440sp_common.h
+ *
+ * PPC440SP system library
+ *
+ * Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004-2005 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#ifdef __KERNEL__
+#ifndef __PPC_SYSLIB_IBM440SP_COMMON_H
+#define __PPC_SYSLIB_IBM440SP_COMMON_H
+
+#ifndef __ASSEMBLY__
+
+extern unsigned long __init ibm440sp_find_end_of_memory(void);
+
+#endif /* __ASSEMBLY__ */
+#endif /* __PPC_SYSLIB_IBM440SP_COMMON_H */
+#endif /* __KERNEL__ */
* PPC44x system library
*
* Matt Porter <mporter@kernel.crashing.org>
- * Copyright 2002-2004 MontaVista Software Inc.
+ * Copyright 2002-2005 MontaVista Software Inc.
*
* Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
* Copyright (c) 2003, 2004 Zultys Technologies
#include <linux/time.h>
#include <linux/types.h>
#include <linux/serial.h>
+#include <linux/module.h>
#include <asm/ibm44x.h>
#include <asm/mmu.h>
* address in the 440's 36-bit address space. Fix
* them up with the appropriate ERPN
*/
- if ((addr >= PPC44x_IO_LO) && (addr < PPC44x_IO_HI))
+ if ((addr >= PPC44x_IO_LO) && (addr <= PPC44x_IO_HI))
page_4gb = PPC44x_IO_PAGE;
- else if ((addr >= PPC44x_PCICFG_LO) && (addr < PPC44x_PCICFG_HI))
+ else if ((addr >= PPC44x_PCI0CFG_LO) && (addr <= PPC44x_PCI0CFG_HI))
page_4gb = PPC44x_PCICFG_PAGE;
- else if ((addr >= PPC44x_PCIMEM_LO) && (addr < PPC44x_PCIMEM_HI))
+#ifdef CONFIG_440SP
+ else if ((addr >= PPC44x_PCI1CFG_LO) && (addr <= PPC44x_PCI1CFG_HI))
+ page_4gb = PPC44x_PCICFG_PAGE;
+ else if ((addr >= PPC44x_PCI2CFG_LO) && (addr <= PPC44x_PCI2CFG_HI))
+ page_4gb = PPC44x_PCICFG_PAGE;
+#endif
+ else if ((addr >= PPC44x_PCIMEM_LO) && (addr <= PPC44x_PCIMEM_HI))
page_4gb = PPC44x_PCIMEM_PAGE;
return (page_4gb | addr);
};
+EXPORT_SYMBOL(fixup_bigphys_addr);
void __init ibm44x_calibrate_decr(unsigned int freq)
{
return mem_size;
}
-static void __init ibm44x_init_irq(void)
-{
- int i;
-
- ppc4xx_pic_init();
-
- for (i = 0; i < NR_IRQS; i++)
- irq_desc[i].handler = ppc4xx_pic;
-}
-
void __init ibm44x_platform_init(void)
{
- ppc_md.init_IRQ = ibm44x_init_irq;
+ ppc_md.init_IRQ = ppc4xx_pic_init;
ppc_md.find_end_of_memory = ibm44x_find_end_of_memory;
ppc_md.restart = ibm44x_restart;
ppc_md.power_off = ibm44x_power_off;
#endif
}
+/* Called from MachineCheckException */
+void platform_machine_check(struct pt_regs *regs)
+{
+ printk("PLB0: BEAR=0x%08x%08x ACR= 0x%08x BESR= 0x%08x\n",
+ mfdcr(DCRN_PLB0_BEARH), mfdcr(DCRN_PLB0_BEARL),
+ mfdcr(DCRN_PLB0_ACR), mfdcr(DCRN_PLB0_BESR));
+ printk("POB0: BEAR=0x%08x%08x BESR0=0x%08x BESR1=0x%08x\n",
+ mfdcr(DCRN_POB0_BEARH), mfdcr(DCRN_POB0_BEARL),
+ mfdcr(DCRN_POB0_BESR0), mfdcr(DCRN_POB0_BESR1));
+ printk("OPB0: BEAR=0x%08x%08x BSTAT=0x%08x\n",
+ mfdcr(DCRN_OPB0_BEARH), mfdcr(DCRN_OPB0_BEARL),
+ mfdcr(DCRN_OPB0_BSTAT));
+}
int len, u32 *val)
{
struct pci_controller *hose = bus->sysdata;
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
u8 cfg_type = 0;
if (ppc_md.pci_exclude_device)
cfg_data = hose->cfg_data + (offset & 3);
switch (len) {
case 1:
- *val = in_8((u8 *)cfg_data);
+ *val = in_8(cfg_data);
break;
case 2:
- *val = in_le16((u16 *)cfg_data);
+ *val = in_le16(cfg_data);
break;
default:
- *val = in_le32((u32 *)cfg_data);
+ *val = in_le32(cfg_data);
break;
}
return PCIBIOS_SUCCESSFUL;
int len, u32 val)
{
struct pci_controller *hose = bus->sysdata;
- volatile unsigned char *cfg_data;
+ volatile void __iomem *cfg_data;
u8 cfg_type = 0;
if (ppc_md.pci_exclude_device)
cfg_data = hose->cfg_data + (offset & 3);
switch (len) {
case 1:
- out_8((u8 *)cfg_data, val);
+ out_8(cfg_data, val);
break;
case 2:
- out_le16((u16 *)cfg_data, val);
+ out_le16(cfg_data, val);
break;
default:
- out_le32((u32 *)cfg_data, val);
+ out_le32(cfg_data, val);
break;
}
return PCIBIOS_SUCCESSFUL;
};
void __init
-setup_indirect_pci_nomap(struct pci_controller* hose, u32 cfg_addr,
- u32 cfg_data)
+setup_indirect_pci_nomap(struct pci_controller* hose, void __iomem * cfg_addr,
+ void __iomem * cfg_data)
{
- hose->cfg_addr = (unsigned int *)cfg_addr;
- hose->cfg_data = (unsigned char *)cfg_data;
+ hose->cfg_addr = cfg_addr;
+ hose->cfg_data = cfg_data;
hose->ops = &indirect_pci_ops;
}
setup_indirect_pci(struct pci_controller* hose, u32 cfg_addr, u32 cfg_data)
{
unsigned long base = cfg_addr & PAGE_MASK;
- char *mbase;
+ void __iomem *mbase, *addr, *data;
mbase = ioremap(base, PAGE_SIZE);
- cfg_addr = (u32)(mbase + (cfg_addr & ~PAGE_MASK));
+ addr = mbase + (cfg_addr & ~PAGE_MASK);
if ((cfg_data & PAGE_MASK) != base)
mbase = ioremap(cfg_data & PAGE_MASK, PAGE_SIZE);
- cfg_data = (u32)(mbase + (cfg_data & ~PAGE_MASK));
- setup_indirect_pci_nomap(hose, cfg_addr, cfg_data);
+ data = mbase + (cfg_data & ~PAGE_MASK);
+ setup_indirect_pci_nomap(hose, addr, data);
}
--- /dev/null
+/*
+ * arch/ppc/syslib/mv64x60_dbg.c
+ *
+ * KGDB and progress routines for the Marvell/Galileo MV64x60 (Discovery).
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+/*
+ *****************************************************************************
+ *
+ * Low-level MPSC/UART I/O routines
+ *
+ *****************************************************************************
+ */
+
+
+#include <linux/config.h>
+#include <linux/irq.h>
+#include <asm/delay.h>
+#include <asm/mv64x60.h>
+
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG)
+
+#define MPSC_CHR_1 0x000c
+#define MPSC_CHR_2 0x0010
+
+static struct mv64x60_handle mv64x60_dbg_bh;
+
+void
+mv64x60_progress_init(u32 base)
+{
+ mv64x60_dbg_bh.v_base = base;
+ return;
+}
+
+static void
+mv64x60_polled_putc(int chan, char c)
+{
+ u32 offset;
+
+ if (chan == 0)
+ offset = 0x8000;
+ else
+ offset = 0x9000;
+
+ mv64x60_write(&mv64x60_dbg_bh, offset + MPSC_CHR_1, (u32)c);
+ mv64x60_write(&mv64x60_dbg_bh, offset + MPSC_CHR_2, 0x200);
+ udelay(2000);
+}
+
+void
+mv64x60_mpsc_progress(char *s, unsigned short hex)
+{
+ volatile char c;
+
+ mv64x60_polled_putc(0, '\r');
+
+ while ((c = *s++) != 0)
+ mv64x60_polled_putc(0, c);
+
+ mv64x60_polled_putc(0, '\n');
+ mv64x60_polled_putc(0, '\r');
+
+ return;
+}
+#endif /* CONFIG_SERIAL_TEXT_DEBUG */
+
+
+#if defined(CONFIG_KGDB)
+
+#if defined(CONFIG_KGDB_TTYS0)
+#define KGDB_PORT 0
+#elif defined(CONFIG_KGDB_TTYS1)
+#define KGDB_PORT 1
+#else
+#error "Invalid kgdb_tty port"
+#endif
+
+void
+putDebugChar(unsigned char c)
+{
+ mv64x60_polled_putc(KGDB_PORT, (char)c);
+}
+
+int
+getDebugChar(void)
+{
+ unsigned char c;
+
+ while (!mv64x60_polled_getc(KGDB_PORT, &c));
+ return (int)c;
+}
+
+void
+putDebugString(char* str)
+{
+ while (*str != '\0') {
+ putDebugChar(*str);
+ str++;
+ }
+ putDebugChar('\r');
+ return;
+}
+
+void
+kgdb_interruptible(int enable)
+{
+}
+
+void
+kgdb_map_scc(void)
+{
+ if (ppc_md.early_serial_map)
+ ppc_md.early_serial_map();
+}
+#endif /* CONFIG_KGDB */
--- /dev/null
+/*
+ * arch/ppc/syslib/mv64x60_win.c
+ *
+ * Tables with info on how to manipulate the 32 & 64 bit windows on the
+ * various types of Marvell bridge chips.
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2004 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/bootmem.h>
+#include <linux/mv643xx.h>
+
+#include <asm/byteorder.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/machdep.h>
+#include <asm/pci-bridge.h>
+#include <asm/delay.h>
+#include <asm/mv64x60.h>
+
+
+/*
+ *****************************************************************************
+ *
+ * Tables describing how to set up windows on each type of bridge
+ *
+ *****************************************************************************
+ */
+struct mv64x60_32bit_window
+ gt64260_32bit_windows[MV64x60_32BIT_WIN_COUNT] __initdata = {
+ /* CPU->MEM Windows */
+ [MV64x60_CPU2MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_0_BASE,
+ .size_reg = MV64x60_CPU2MEM_0_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_1_BASE,
+ .size_reg = MV64x60_CPU2MEM_1_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_2_BASE,
+ .size_reg = MV64x60_CPU2MEM_2_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_3_BASE,
+ .size_reg = MV64x60_CPU2MEM_3_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->Device Windows */
+ [MV64x60_CPU2DEV_0_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_0_BASE,
+ .size_reg = MV64x60_CPU2DEV_0_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2DEV_1_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_1_BASE,
+ .size_reg = MV64x60_CPU2DEV_1_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2DEV_2_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_2_BASE,
+ .size_reg = MV64x60_CPU2DEV_2_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2DEV_3_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_3_BASE,
+ .size_reg = MV64x60_CPU2DEV_3_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->Boot Window */
+ [MV64x60_CPU2BOOT_WIN] = {
+ .base_reg = MV64x60_CPU2BOOT_0_BASE,
+ .size_reg = MV64x60_CPU2BOOT_0_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 0 Windows */
+ [MV64x60_CPU2PCI0_IO_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_IO_BASE,
+ .size_reg = MV64x60_CPU2PCI0_IO_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_0_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_0_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_1_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_1_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_2_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_2_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_3_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_3_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 1 Windows */
+ [MV64x60_CPU2PCI1_IO_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_IO_BASE,
+ .size_reg = MV64x60_CPU2PCI1_IO_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_0_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_0_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_1_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_1_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_2_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_2_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_3_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_3_SIZE,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->SRAM Window (64260 has no integrated SRAM) */
+ /* CPU->PCI 0 Remap I/O Window */
+ [MV64x60_CPU2PCI0_IO_REMAP_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_IO_REMAP,
+ .size_reg = 0,
+ .base_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 1 Remap I/O Window */
+ [MV64x60_CPU2PCI1_IO_REMAP_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_IO_REMAP,
+ .size_reg = 0,
+ .base_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU Memory Protection Windows */
+ [MV64x60_CPU_PROT_0_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_0,
+ .size_reg = MV64x60_CPU_PROT_SIZE_0,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_PROT_1_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_1,
+ .size_reg = MV64x60_CPU_PROT_SIZE_1,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_PROT_2_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_2,
+ .size_reg = MV64x60_CPU_PROT_SIZE_2,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_PROT_3_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_3,
+ .size_reg = MV64x60_CPU_PROT_SIZE_3,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU Snoop Windows */
+ [MV64x60_CPU_SNOOP_0_WIN] = {
+ .base_reg = GT64260_CPU_SNOOP_BASE_0,
+ .size_reg = GT64260_CPU_SNOOP_SIZE_0,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_SNOOP_1_WIN] = {
+ .base_reg = GT64260_CPU_SNOOP_BASE_1,
+ .size_reg = GT64260_CPU_SNOOP_SIZE_1,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_SNOOP_2_WIN] = {
+ .base_reg = GT64260_CPU_SNOOP_BASE_2,
+ .size_reg = GT64260_CPU_SNOOP_SIZE_2,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU_SNOOP_3_WIN] = {
+ .base_reg = GT64260_CPU_SNOOP_BASE_3,
+ .size_reg = GT64260_CPU_SNOOP_SIZE_3,
+ .base_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 0->System Memory Remap Windows */
+ [MV64x60_PCI02MEM_REMAP_0_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_0_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_1_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_2_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_3_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ /* PCI 1->System Memory Remap Windows */
+ [MV64x60_PCI12MEM_REMAP_0_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_0_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_1_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_2_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_3_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ /* ENET->SRAM Window (64260 doesn't have separate windows) */
+ /* MPSC->SRAM Window (64260 doesn't have separate windows) */
+ /* IDMA->SRAM Window (64260 doesn't have separate windows) */
+};
+
+struct mv64x60_64bit_window
+ gt64260_64bit_windows[MV64x60_64BIT_WIN_COUNT] __initdata = {
+ /* CPU->PCI 0 MEM Remap Windows */
+ [MV64x60_CPU2PCI0_MEM_0_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_0_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_0_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_1_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_1_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_1_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_2_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_2_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_2_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_3_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_3_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_3_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 1 MEM Remap Windows */
+ [MV64x60_CPU2PCI1_MEM_0_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_0_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_0_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_1_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_1_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_1_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_2_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_2_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_2_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_3_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_3_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_3_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 12,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 0->MEM Access Control Windows */
+ [MV64x60_PCI02MEM_ACC_CNTL_0_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_0_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_0_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_0_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_1_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_1_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_1_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_1_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_2_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_2_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_2_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_2_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_3_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_3_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_3_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_3_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 1->MEM Access Control Windows */
+ [MV64x60_PCI12MEM_ACC_CNTL_0_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_0_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_0_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_0_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_1_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_1_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_1_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_1_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_2_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_2_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_2_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_2_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_3_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_3_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_3_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_3_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 0->MEM Snoop Windows */
+ [MV64x60_PCI02MEM_SNOOP_0_WIN] = {
+ .base_hi_reg = GT64260_PCI0_SNOOP_0_BASE_HI,
+ .base_lo_reg = GT64260_PCI0_SNOOP_0_BASE_LO,
+ .size_reg = GT64260_PCI0_SNOOP_0_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_SNOOP_1_WIN] = {
+ .base_hi_reg = GT64260_PCI0_SNOOP_1_BASE_HI,
+ .base_lo_reg = GT64260_PCI0_SNOOP_1_BASE_LO,
+ .size_reg = GT64260_PCI0_SNOOP_1_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_SNOOP_2_WIN] = {
+ .base_hi_reg = GT64260_PCI0_SNOOP_2_BASE_HI,
+ .base_lo_reg = GT64260_PCI0_SNOOP_2_BASE_LO,
+ .size_reg = GT64260_PCI0_SNOOP_2_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_SNOOP_3_WIN] = {
+ .base_hi_reg = GT64260_PCI0_SNOOP_3_BASE_HI,
+ .base_lo_reg = GT64260_PCI0_SNOOP_3_BASE_LO,
+ .size_reg = GT64260_PCI0_SNOOP_3_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 1->MEM Snoop Windows */
+ [MV64x60_PCI12MEM_SNOOP_0_WIN] = {
+ .base_hi_reg = GT64260_PCI1_SNOOP_0_BASE_HI,
+ .base_lo_reg = GT64260_PCI1_SNOOP_0_BASE_LO,
+ .size_reg = GT64260_PCI1_SNOOP_0_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_SNOOP_1_WIN] = {
+ .base_hi_reg = GT64260_PCI1_SNOOP_1_BASE_HI,
+ .base_lo_reg = GT64260_PCI1_SNOOP_1_BASE_LO,
+ .size_reg = GT64260_PCI1_SNOOP_1_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_SNOOP_2_WIN] = {
+ .base_hi_reg = GT64260_PCI1_SNOOP_2_BASE_HI,
+ .base_lo_reg = GT64260_PCI1_SNOOP_2_BASE_LO,
+ .size_reg = GT64260_PCI1_SNOOP_2_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_SNOOP_3_WIN] = {
+ .base_hi_reg = GT64260_PCI1_SNOOP_3_BASE_HI,
+ .base_lo_reg = GT64260_PCI1_SNOOP_3_BASE_LO,
+ .size_reg = GT64260_PCI1_SNOOP_3_SIZE,
+ .base_lo_bits = 12,
+ .size_bits = 12,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+};
+
+struct mv64x60_32bit_window
+ mv64360_32bit_windows[MV64x60_32BIT_WIN_COUNT] __initdata = {
+ /* CPU->MEM Windows */
+ [MV64x60_CPU2MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_0_BASE,
+ .size_reg = MV64x60_CPU2MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 0 },
+ [MV64x60_CPU2MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_1_BASE,
+ .size_reg = MV64x60_CPU2MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 1 },
+ [MV64x60_CPU2MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_2_BASE,
+ .size_reg = MV64x60_CPU2MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 2 },
+ [MV64x60_CPU2MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2MEM_3_BASE,
+ .size_reg = MV64x60_CPU2MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 3 },
+ /* CPU->Device Windows */
+ [MV64x60_CPU2DEV_0_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_0_BASE,
+ .size_reg = MV64x60_CPU2DEV_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 4 },
+ [MV64x60_CPU2DEV_1_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_1_BASE,
+ .size_reg = MV64x60_CPU2DEV_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 5 },
+ [MV64x60_CPU2DEV_2_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_2_BASE,
+ .size_reg = MV64x60_CPU2DEV_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 6 },
+ [MV64x60_CPU2DEV_3_WIN] = {
+ .base_reg = MV64x60_CPU2DEV_3_BASE,
+ .size_reg = MV64x60_CPU2DEV_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 7 },
+ /* CPU->Boot Window */
+ [MV64x60_CPU2BOOT_WIN] = {
+ .base_reg = MV64x60_CPU2BOOT_0_BASE,
+ .size_reg = MV64x60_CPU2BOOT_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 8 },
+ /* CPU->PCI 0 Windows */
+ [MV64x60_CPU2PCI0_IO_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_IO_BASE,
+ .size_reg = MV64x60_CPU2PCI0_IO_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 9 },
+ [MV64x60_CPU2PCI0_MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_0_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 10 },
+ [MV64x60_CPU2PCI0_MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_1_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 11 },
+ [MV64x60_CPU2PCI0_MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_2_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 12 },
+ [MV64x60_CPU2PCI0_MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_MEM_3_BASE,
+ .size_reg = MV64x60_CPU2PCI0_MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 13 },
+ /* CPU->PCI 1 Windows */
+ [MV64x60_CPU2PCI1_IO_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_IO_BASE,
+ .size_reg = MV64x60_CPU2PCI1_IO_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 14 },
+ [MV64x60_CPU2PCI1_MEM_0_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_0_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 15 },
+ [MV64x60_CPU2PCI1_MEM_1_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_1_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 16 },
+ [MV64x60_CPU2PCI1_MEM_2_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_2_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 17 },
+ [MV64x60_CPU2PCI1_MEM_3_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_MEM_3_BASE,
+ .size_reg = MV64x60_CPU2PCI1_MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 18 },
+ /* CPU->SRAM Window */
+ [MV64x60_CPU2SRAM_WIN] = {
+ .base_reg = MV64360_CPU2SRAM_BASE,
+ .size_reg = 0,
+ .base_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUWIN_ENAB | 19 },
+ /* CPU->PCI 0 Remap I/O Window */
+ [MV64x60_CPU2PCI0_IO_REMAP_WIN] = {
+ .base_reg = MV64x60_CPU2PCI0_IO_REMAP,
+ .size_reg = 0,
+ .base_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 1 Remap I/O Window */
+ [MV64x60_CPU2PCI1_IO_REMAP_WIN] = {
+ .base_reg = MV64x60_CPU2PCI1_IO_REMAP,
+ .size_reg = 0,
+ .base_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU Memory Protection Windows */
+ [MV64x60_CPU_PROT_0_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_0,
+ .size_reg = MV64x60_CPU_PROT_SIZE_0,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUPROT_ENAB | 31 },
+ [MV64x60_CPU_PROT_1_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_1,
+ .size_reg = MV64x60_CPU_PROT_SIZE_1,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUPROT_ENAB | 31 },
+ [MV64x60_CPU_PROT_2_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_2,
+ .size_reg = MV64x60_CPU_PROT_SIZE_2,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUPROT_ENAB | 31 },
+ [MV64x60_CPU_PROT_3_WIN] = {
+ .base_reg = MV64x60_CPU_PROT_BASE_3,
+ .size_reg = MV64x60_CPU_PROT_SIZE_3,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = MV64x60_EXTRA_CPUPROT_ENAB | 31 },
+ /* CPU Snoop Windows -- don't exist on 64360 */
+ /* PCI 0->System Memory Remap Windows */
+ [MV64x60_PCI02MEM_REMAP_0_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_0_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_1_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_2_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI02MEM_REMAP_3_WIN] = {
+ .base_reg = MV64x60_PCI0_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ /* PCI 1->System Memory Remap Windows */
+ [MV64x60_PCI12MEM_REMAP_0_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_0_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_1_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_2_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ [MV64x60_PCI12MEM_REMAP_3_WIN] = {
+ .base_reg = MV64x60_PCI1_SLAVE_MEM_1_REMAP,
+ .size_reg = 0,
+ .base_bits = 20,
+ .size_bits = 0,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = 0 },
+ /* ENET->System Memory Windows */
+ [MV64x60_ENET2MEM_0_WIN] = {
+ .base_reg = MV64360_ENET2MEM_0_BASE,
+ .size_reg = MV64360_ENET2MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 0 },
+ [MV64x60_ENET2MEM_1_WIN] = {
+ .base_reg = MV64360_ENET2MEM_1_BASE,
+ .size_reg = MV64360_ENET2MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 1 },
+ [MV64x60_ENET2MEM_2_WIN] = {
+ .base_reg = MV64360_ENET2MEM_2_BASE,
+ .size_reg = MV64360_ENET2MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 2 },
+ [MV64x60_ENET2MEM_3_WIN] = {
+ .base_reg = MV64360_ENET2MEM_3_BASE,
+ .size_reg = MV64360_ENET2MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 3 },
+ [MV64x60_ENET2MEM_4_WIN] = {
+ .base_reg = MV64360_ENET2MEM_4_BASE,
+ .size_reg = MV64360_ENET2MEM_4_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 4 },
+ [MV64x60_ENET2MEM_5_WIN] = {
+ .base_reg = MV64360_ENET2MEM_5_BASE,
+ .size_reg = MV64360_ENET2MEM_5_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_ENET_ENAB | 5 },
+ /* MPSC->System Memory Windows */
+ [MV64x60_MPSC2MEM_0_WIN] = {
+ .base_reg = MV64360_MPSC2MEM_0_BASE,
+ .size_reg = MV64360_MPSC2MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_MPSC_ENAB | 0 },
+ [MV64x60_MPSC2MEM_1_WIN] = {
+ .base_reg = MV64360_MPSC2MEM_1_BASE,
+ .size_reg = MV64360_MPSC2MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_MPSC_ENAB | 1 },
+ [MV64x60_MPSC2MEM_2_WIN] = {
+ .base_reg = MV64360_MPSC2MEM_2_BASE,
+ .size_reg = MV64360_MPSC2MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_MPSC_ENAB | 2 },
+ [MV64x60_MPSC2MEM_3_WIN] = {
+ .base_reg = MV64360_MPSC2MEM_3_BASE,
+ .size_reg = MV64360_MPSC2MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_MPSC_ENAB | 3 },
+ /* IDMA->System Memory Windows */
+ [MV64x60_IDMA2MEM_0_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_0_BASE,
+ .size_reg = MV64360_IDMA2MEM_0_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 0 },
+ [MV64x60_IDMA2MEM_1_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_1_BASE,
+ .size_reg = MV64360_IDMA2MEM_1_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 1 },
+ [MV64x60_IDMA2MEM_2_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_2_BASE,
+ .size_reg = MV64360_IDMA2MEM_2_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 2 },
+ [MV64x60_IDMA2MEM_3_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_3_BASE,
+ .size_reg = MV64360_IDMA2MEM_3_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 3 },
+ [MV64x60_IDMA2MEM_4_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_4_BASE,
+ .size_reg = MV64360_IDMA2MEM_4_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 4 },
+ [MV64x60_IDMA2MEM_5_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_5_BASE,
+ .size_reg = MV64360_IDMA2MEM_5_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 5 },
+ [MV64x60_IDMA2MEM_6_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_6_BASE,
+ .size_reg = MV64360_IDMA2MEM_6_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 6 },
+ [MV64x60_IDMA2MEM_7_WIN] = {
+ .base_reg = MV64360_IDMA2MEM_7_BASE,
+ .size_reg = MV64360_IDMA2MEM_7_SIZE,
+ .base_bits = 16,
+ .size_bits = 16,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_IDMA_ENAB | 7 },
+};
+
+struct mv64x60_64bit_window
+ mv64360_64bit_windows[MV64x60_64BIT_WIN_COUNT] __initdata = {
+ /* CPU->PCI 0 MEM Remap Windows */
+ [MV64x60_CPU2PCI0_MEM_0_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_0_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_0_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_1_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_1_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_1_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_2_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_2_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_2_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI0_MEM_3_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI0_MEM_3_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI0_MEM_3_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* CPU->PCI 1 MEM Remap Windows */
+ [MV64x60_CPU2PCI1_MEM_0_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_0_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_0_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_1_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_1_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_1_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_2_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_2_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_2_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ [MV64x60_CPU2PCI1_MEM_3_REMAP_WIN] = {
+ .base_hi_reg = MV64x60_CPU2PCI1_MEM_3_REMAP_HI,
+ .base_lo_reg = MV64x60_CPU2PCI1_MEM_3_REMAP_LO,
+ .size_reg = 0,
+ .base_lo_bits = 16,
+ .size_bits = 0,
+ .get_from_field = mv64x60_shift_left,
+ .map_to_field = mv64x60_shift_right,
+ .extra = 0 },
+ /* PCI 0->MEM Access Control Windows */
+ [MV64x60_PCI02MEM_ACC_CNTL_0_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_0_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_0_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_0_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_1_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_1_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_1_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_1_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_2_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_2_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_2_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_2_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI02MEM_ACC_CNTL_3_WIN] = {
+ .base_hi_reg = MV64x60_PCI0_ACC_CNTL_3_BASE_HI,
+ .base_lo_reg = MV64x60_PCI0_ACC_CNTL_3_BASE_LO,
+ .size_reg = MV64x60_PCI0_ACC_CNTL_3_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ /* PCI 1->MEM Access Control Windows */
+ [MV64x60_PCI12MEM_ACC_CNTL_0_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_0_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_0_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_0_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_1_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_1_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_1_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_1_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_2_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_2_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_2_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_2_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ [MV64x60_PCI12MEM_ACC_CNTL_3_WIN] = {
+ .base_hi_reg = MV64x60_PCI1_ACC_CNTL_3_BASE_HI,
+ .base_lo_reg = MV64x60_PCI1_ACC_CNTL_3_BASE_LO,
+ .size_reg = MV64x60_PCI1_ACC_CNTL_3_SIZE,
+ .base_lo_bits = 20,
+ .size_bits = 20,
+ .get_from_field = mv64x60_mask,
+ .map_to_field = mv64x60_mask,
+ .extra = MV64x60_EXTRA_PCIACC_ENAB | 0 },
+ /* PCI 0->MEM Snoop Windows -- don't exist on 64360 */
+ /* PCI 1->MEM Snoop Windows -- don't exist on 64360 */
+};
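
The get_from_field/map_to_field hooks referenced throughout these tables convert between a 32-bit bus address and the encoding a particular window register expects, with base_bits/size_bits giving the field width. As an editorial sketch only (the real helpers are defined elsewhere in mv64x60.c and may differ in detail), helpers of the following shape would match the field widths used above, assuming each register field holds the top num_bits of the address:

	/* Sketch: address -> right-justified field holding its top num_bits */
	static u32 example_shift_right(u32 val, u32 num_bits)
	{
		return val >> (32 - num_bits);
	}

	/* Sketch: right-justified field -> address (inverse of the above) */
	static u32 example_shift_left(u32 val, u32 num_bits)
	{
		return val << (32 - num_bits);
	}

	/* Sketch: field stored in place; simply drop the low-order bits */
	static u32 example_mask(u32 val, u32 num_bits)
	{
		return val & (0xffffffff << (32 - num_bits));
	}
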
vec);
}
-static spinlock_t openpic2_setup_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(openpic2_setup_lock);
/*
* Initialize a timer interrupt (and disable it)
OpenPIC_Processor Processor[OPENPIC_MAX_PROCESSORS];
};
-extern volatile struct OpenPIC *OpenPIC;
+extern volatile struct OpenPIC __iomem *OpenPIC;
/*
--- /dev/null
+/*
+ *
+ * Copyright (c) 1999 Grant Erickson <grant@lcse.umn.edu>
+ *
+ * Module name: ppc403_pic.c
+ *
+ * Description:
+ * Interrupt controller driver for PowerPC 403-based processors.
+ */
+
+/*
+ * The PowerPC 403 cores' Asynchronous Interrupt Controller (AIC) has
+ * 32 possible interrupts, a majority of which are not implemented on
+ * all cores. There are six configurable, external interrupt pins and
+ * there are eight internal interrupts for the on-chip serial port
+ * (SPU), DMA controller, and JTAG controller.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/stddef.h>
+
+#include <asm/processor.h>
+#include <asm/system.h>
+#include <asm/irq.h>
+#include <asm/ppc4xx_pic.h>
+
+/* Function Prototypes */
+
+static void ppc403_aic_enable(unsigned int irq);
+static void ppc403_aic_disable(unsigned int irq);
+static void ppc403_aic_disable_and_ack(unsigned int irq);
+
+static struct hw_interrupt_type ppc403_aic = {
+ "403GC AIC",
+ NULL,
+ NULL,
+ ppc403_aic_enable,
+ ppc403_aic_disable,
+ ppc403_aic_disable_and_ack,
+ 0
+};
+
+int
+ppc403_pic_get_irq(struct pt_regs *regs)
+{
+ int irq;
+ unsigned long bits;
+
+ /*
+ * Only report the status of those interrupts that are actually
+ * enabled.
+ */
+
+ bits = mfdcr(DCRN_EXISR) & mfdcr(DCRN_EXIER);
+
+ /*
+ * Walk through the interrupts from highest priority to lowest, and
+ * report the first pending interrupt found.
+ * We want PPC, not C bit numbering, so just subtract the ffs()
+ * result from 32.
+ */
+ irq = 32 - ffs(bits);
+
+ if (irq == NR_AIC_IRQS)
+ irq = -1;
+
+ return (irq);
+}
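
As a quick worked example of the numbering conversion above (editorial note, not part of the patch): if only PPC bit 16 is pending, the masked status word is 0x00008000; ffs() counts 1-based from the least-significant bit and returns 16, so 32 - 16 recovers PPC bit 16. With nothing pending, ffs(0) returns 0, the subtraction yields 32, and the NR_AIC_IRQS test (assuming NR_AIC_IRQS is 32) turns that into -1.
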
+
+static void
+ppc403_aic_enable(unsigned int irq)
+{
+ int bit, word;
+
+ bit = irq & 0x1f;
+ word = irq >> 5;
+
+ ppc_cached_irq_mask[word] |= (1 << (31 - bit));
+ mtdcr(DCRN_EXIER, ppc_cached_irq_mask[word]);
+}
+
+static void
+ppc403_aic_disable(unsigned int irq)
+{
+ int bit, word;
+
+ bit = irq & 0x1f;
+ word = irq >> 5;
+
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ mtdcr(DCRN_EXIER, ppc_cached_irq_mask[word]);
+}
+
+static void
+ppc403_aic_disable_and_ack(unsigned int irq)
+{
+ int bit, word;
+
+ bit = irq & 0x1f;
+ word = irq >> 5;
+
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ mtdcr(DCRN_EXIER, ppc_cached_irq_mask[word]);
+ mtdcr(DCRN_EXISR, (1 << (31 - bit)));
+}
+
+void __init
+ppc4xx_pic_init(void)
+{
+ int i;
+
+ /*
+ * Disable all external interrupts until they are
+ * explicitly requested.
+ */
+ ppc_cached_irq_mask[0] = 0;
+
+ mtdcr(DCRN_EXIER, ppc_cached_irq_mask[0]);
+
+ ppc_md.get_irq = ppc403_pic_get_irq;
+
+ for (i = 0; i < NR_IRQS; i++)
+ irq_desc[i].handler = &ppc403_aic;
+}
return;
}
-#ifdef PPC4xx_DMA64BIT
+#ifdef PPC4xx_DMA_64BIT
mtdcr(DCRN_DMASAH0 + dmanr*2, (u32)(src_addr >> 32));
#else
mtdcr(DCRN_DMASA0 + dmanr*2, (u32)src_addr);
return;
}
-#ifdef PPC4xx_DMA64BIT
+#ifdef PPC4xx_DMA_64BIT
mtdcr(DCRN_DMADAH0 + dmanr*2, (u32)(dst_addr >> 32));
#else
mtdcr(DCRN_DMADA0 + dmanr*2, (u32)dst_addr);
return DMA_STATUS_BAD_CHANNEL;
}
+ memcpy(p_dma_ch, &dma_channels[dmanr], sizeof (ppc_dma_ch_t));
+
#if DCRN_POL > 0
polarity = mfdcr(DCRN_POL);
#else
return (GET_DMA_PW(control));
}
+/*
+ * Clears the channel status bits
+ */
+int
+ppc4xx_clr_dma_status(unsigned int dmanr)
+{
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk(KERN_ERR "ppc4xx_clr_dma_status: bad channel: %d\n", dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+ mtdcr(DCRN_DMASR, ((u32)DMA_CH0_ERR | (u32)DMA_CS0 | (u32)DMA_TS0) >> dmanr);
+ return DMA_STATUS_GOOD;
+}
+
+/*
+ * Enables the burst on the channel (BTEN bit in the control/count register)
+ * Note:
+ * For scatter/gather dma, this function MUST be called before the
+ * ppc4xx_alloc_dma_handle() func as the chan count register is copied into the
+ * sgl list and used as each sgl element is added.
+ */
+int
+ppc4xx_enable_burst(unsigned int dmanr)
+{
+ unsigned int ctc;
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk(KERN_ERR "ppc4xx_enable_burst: bad channel: %d\n", dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+ ctc = mfdcr(DCRN_DMACT0 + (dmanr * 0x8)) | DMA_CTC_BTEN;
+ mtdcr(DCRN_DMACT0 + (dmanr * 0x8), ctc);
+ return DMA_STATUS_GOOD;
+}
+/*
+ * Disables the burst on the channel (BTEN bit in the control/count register)
+ * Note:
+ * For scatter/gather dma, this function MUST be called before the
+ * ppc4xx_alloc_dma_handle() func as the chan count register is copied into the
+ * sgl list and used as each sgl element is added.
+ */
+int
+ppc4xx_disable_burst(unsigned int dmanr)
+{
+ unsigned int ctc;
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk(KERN_ERR "ppc4xx_disable_burst: bad channel: %d\n", dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+ ctc = mfdcr(DCRN_DMACT0 + (dmanr * 0x8)) &~ DMA_CTC_BTEN;
+ mtdcr(DCRN_DMACT0 + (dmanr * 0x8), ctc);
+ return DMA_STATUS_GOOD;
+}
+/*
+ * Sets the burst size (number of peripheral widths) for the channel
+ * (BSIZ bits in the control/count register)
+ * must be one of:
+ * DMA_CTC_BSIZ_2
+ * DMA_CTC_BSIZ_4
+ * DMA_CTC_BSIZ_8
+ * DMA_CTC_BSIZ_16
+ * Note:
+ * For scatter/gather dma, this function MUST be called before the
+ * ppc4xx_alloc_dma_handle() func as the chan count register is copied into the
+ * sgl list and used as each sgl element is added.
+ */
+int
+ppc4xx_set_burst_size(unsigned int dmanr, unsigned int bsize)
+{
+ unsigned int ctc;
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk(KERN_ERR "ppc4xx_set_burst_size: bad channel: %d\n", dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+ ctc = mfdcr(DCRN_DMACT0 + (dmanr * 0x8)) &~ DMA_CTC_BSIZ_MSK;
+ ctc |= (bsize & DMA_CTC_BSIZ_MSK);
+ mtdcr(DCRN_DMACT0 + (dmanr * 0x8), ctc);
+ return DMA_STATUS_GOOD;
+}
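
Because the ordering constraint repeated in the comments above is easy to miss, here is a minimal usage sketch (editorial illustration, inside a driver setup function; the channel number and transfer mode are placeholders and error handling is elided):

	sgl_handle_t handle;
	unsigned int dmanr = 0;	/* placeholder channel number */
	unsigned int mode = 0;	/* placeholder transfer mode */

	/* Burst settings first: they are latched into the control/count
	 * register that ppc4xx_alloc_dma_handle() copies into the SGL. */
	ppc4xx_enable_burst(dmanr);
	ppc4xx_set_burst_size(dmanr, DMA_CTC_BSIZ_16);

	/* Only now build the scatter/gather handle. */
	if (ppc4xx_alloc_dma_handle(&handle, mode, dmanr) != DMA_STATUS_GOOD)
		return;
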
EXPORT_SYMBOL(ppc4xx_init_dma_channel);
EXPORT_SYMBOL(ppc4xx_get_channel_config);
EXPORT_SYMBOL(ppc4xx_enable_dma_interrupt);
EXPORT_SYMBOL(ppc4xx_disable_dma_interrupt);
EXPORT_SYMBOL(ppc4xx_get_dma_status);
+EXPORT_SYMBOL(ppc4xx_clr_dma_status);
+EXPORT_SYMBOL(ppc4xx_enable_burst);
+EXPORT_SYMBOL(ppc4xx_disable_burst);
+EXPORT_SYMBOL(ppc4xx_set_burst_size);
void __init
ppc4xx_init_IRQ(void)
{
- int i;
-
ppc4xx_pic_init();
-
- for (i = 0; i < NR_IRQS; i++)
- irq_desc[i].handler = ppc4xx_pic;
}
static void
ppc_ide_md.ide_init_hwif = ppc4xx_ide_init_hwif_ports;
#endif /* defined(CONFIG_PCI) && defined(CONFIG_IDE) */
}
+
+/* Called from MachineCheckException */
+void platform_machine_check(struct pt_regs *regs)
+{
+#if defined(DCRN_PLB0_BEAR)
+ printk("PLB0: BEAR= 0x%08x ACR= 0x%08x BESR= 0x%08x\n",
+ mfdcr(DCRN_PLB0_BEAR), mfdcr(DCRN_PLB0_ACR),
+ mfdcr(DCRN_PLB0_BESR));
+#endif
+#if defined(DCRN_POB0_BEAR)
+ printk("PLB0 to OPB: BEAR= 0x%08x BESR0= 0x%08x BESR1= 0x%08x\n",
+ mfdcr(DCRN_POB0_BEAR), mfdcr(DCRN_POB0_BESR0),
+ mfdcr(DCRN_POB0_BESR1));
+#endif
+
+}
psgl->ptail = psgl->phead;
psgl->ptail_dma = psgl->phead_dma;
} else {
+ if(p_dma_ch->int_on_final_sg) {
+ /* mask out all dma interrupts, except error, on tail
+ before adding new tail. */
+ psgl->ptail->control_count &=
+ ~(SG_TCI_ENABLE | SG_ETI_ENABLE);
+ }
psgl->ptail->next = psgl->ptail_dma + sizeof(ppc_sgl_t);
psgl->ptail++;
psgl->ptail_dma += sizeof(ppc_sgl_t);
}
sgl_addr = (ppc_sgl_t *) __va(mfdcr(DCRN_ASG0 + (psgl->dmanr * 0x8)));
- count_left = mfdcr(DCRN_DMACT0 + (psgl->dmanr * 0x8));
+ count_left = mfdcr(DCRN_DMACT0 + (psgl->dmanr * 0x8)) & SG_COUNT_MASK;
if (!sgl_addr) {
printk("ppc4xx_get_dma_sgl_residue: sgl addr register is null\n");
int
ppc4xx_alloc_dma_handle(sgl_handle_t * phandle, unsigned int mode, unsigned int dmanr)
{
- sgl_list_info_t *psgl;
+ sgl_list_info_t *psgl=NULL;
dma_addr_t dma_addr;
ppc_dma_ch_t *p_dma_ch = &dma_channels[dmanr];
uint32_t sg_command;
+ uint32_t ctc_settings;
void *ret;
if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
mtdcr(DCRN_ASGC, sg_command);
psgl->sgl_control = SG_ERI_ENABLE | SG_LINK;
+ /* keep control count register settings */
+ ctc_settings = mfdcr(DCRN_DMACT0 + (dmanr * 0x8))
+ & (DMA_CTC_BSIZ_MSK | DMA_CTC_BTEN); /*burst mode settings*/
+ psgl->sgl_control |= ctc_settings;
+
if (p_dma_ch->int_enable) {
if (p_dma_ch->tce_enable)
psgl->sgl_control |= SG_TCI_ENABLE;
#include <asm/mpc85xx.h>
#include <asm/mmu.h>
-#include <asm/ocp.h>
/* ************************************************************************ */
/* Return the value of CCSRBAR for the current board */
return BOARD_CCSRBAR;
}
-/* ************************************************************************ */
-/* Update the 85xx OCP tables paddr field */
-void
-mpc85xx_update_paddr_ocp(struct ocp_device *dev, void *arg)
-{
- phys_addr_t ccsrbar;
- if (arg) {
- ccsrbar = *(phys_addr_t *)arg;
- dev->def->paddr += ccsrbar;
- }
-}
-
EXPORT_SYMBOL(get_ccsrbar);
#include <linux/config.h>
#include <linux/init.h>
-#include <asm/ocp.h>
/* Provide access to ccsrbar for any modules, etc */
phys_addr_t get_ccsrbar(void);
-/* Update the 85xx OCP tables paddr field */
-void mpc85xx_update_paddr_ocp(struct ocp_device *dev, void *ccsrbar);
-
#endif /* __PPC_SYSLIB_PPC85XX_COMMON_H */
#include <linux/serial.h>
#include <linux/tty.h> /* for linux/serial_core.h */
#include <linux/serial_core.h>
+#include <linux/serial_8250.h>
#include <asm/prom.h>
#include <asm/time.h>
#include <asm/mpc85xx.h>
#include <asm/immap_85xx.h>
#include <asm/mmu.h>
-#include <asm/ocp.h>
+#include <asm/ppc_sys.h>
#include <asm/kgdb.h>
#include <syslib/ppc85xx_setup.h>
void __init
mpc85xx_early_serial_map(void)
{
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
struct uart_port serial_req;
+#endif
+ struct plat_serial8250_port *pdata;
bd_t *binfo = (bd_t *) __res;
- phys_addr_t duart_paddr = binfo->bi_immr_base + MPC85xx_UART0_OFFSET;
+ pdata = (struct plat_serial8250_port *) ppc_sys_get_pdata(MPC85xx_DUART);
/* Setup serial port access */
+ pdata[0].uartclk = binfo->bi_busfreq;
+ pdata[0].mapbase += binfo->bi_immr_base;
+ pdata[0].membase = ioremap(pdata[0].mapbase, MPC85xx_UART0_SIZE);
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
memset(&serial_req, 0, sizeof (serial_req));
- serial_req.uartclk = binfo->bi_busfreq;
- serial_req.line = 0;
- serial_req.irq = MPC85xx_IRQ_DUART;
- serial_req.flags = ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST;
serial_req.iotype = SERIAL_IO_MEM;
- serial_req.membase = ioremap(duart_paddr, MPC85xx_UART0_SIZE);
- serial_req.mapbase = duart_paddr;
+ serial_req.mapbase = pdata[0].mapbase;
+ serial_req.membase = pdata[0].membase;
serial_req.regshift = 0;
-#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
gen550_init(0, &serial_req);
#endif
- if (early_serial_setup(&serial_req) != 0)
- printk("Early serial init of port 0 failed\n");
-
- /* Assume early_serial_setup() doesn't modify serial_req */
- duart_paddr = binfo->bi_immr_base + MPC85xx_UART1_OFFSET;
- serial_req.line = 1;
- serial_req.mapbase = duart_paddr;
- serial_req.membase = ioremap(duart_paddr, MPC85xx_UART1_SIZE);
+ pdata[1].uartclk = binfo->bi_busfreq;
+ pdata[1].mapbase += binfo->bi_immr_base;
+ pdata[1].membase = ioremap(pdata[1].mapbase, MPC85xx_UART0_SIZE);
#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ /* Assume gen550_init() doesn't modify serial_req */
+ serial_req.mapbase = pdata[1].mapbase;
+ serial_req.membase = pdata[1].membase;
+
gen550_init(1, &serial_req);
#endif
-
- if (early_serial_setup(&serial_req) != 0)
- printk("Early serial init of port 1 failed\n");
}
#endif
}
#endif /* CONFIG_85xx_PCI2 */
+int mpc85xx_pci1_last_busno = 0;
+
void __init
mpc85xx_setup_hose(void)
{
IORESOURCE_IO, "PCI2 host bridge");
hose_b->last_busno = pciauto_bus_scan(hose_b, hose_b->first_busno);
+
+ /* let board code know what the last bus number was on PCI1 */
+ mpc85xx_pci1_last_busno = hose_a->last_busno;
#endif
return;
}
#define PCIX_STATUS 0x64
/* Serial Config */
-#define MPC85XX_0_SERIAL (CCSRBAR + 0x4500)
-#define MPC85XX_1_SERIAL (CCSRBAR + 0x4600)
-
#ifdef CONFIG_SERIAL_MANY_PORTS
#define RS_TABLE_SIZE 64
#else
#define BASE_BAUD 115200
#endif
-#define STD_UART_OP(num) \
- { 0, BASE_BAUD, num, MPC85xx_IRQ_DUART, \
- (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST), \
- iomem_base: (u8 *)MPC85XX_##num##_SERIAL, \
- io_type: SERIAL_IO_MEM},
-
/* Offset of CPM register space */
#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc_sys.c
+ *
+ * PPC System library functions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <asm/ppc_sys.h>
+
+int (*ppc_sys_device_fixup) (struct platform_device * pdev);
+
+static int ppc_sys_inited;
+
+void __init identify_ppc_sys_by_id(u32 id)
+{
+ unsigned int i = 0;
+ while (1) {
+ if ((ppc_sys_specs[i].mask & id) == ppc_sys_specs[i].value)
+ break;
+ i++;
+ }
+
+ cur_ppc_sys_spec = &ppc_sys_specs[i];
+
+ return;
+}
+
+void __init identify_ppc_sys_by_name(char *name)
+{
+ /* TODO */
+ return;
+}
+
+/* Update all memory resources by paddr, call before platform_device_register */
+void __init
+ppc_sys_fixup_mem_resource(struct platform_device *pdev, phys_addr_t paddr)
+{
+ int i;
+ for (i = 0; i < pdev->num_resources; i++) {
+ struct resource *r = &pdev->resource[i];
+ if ((r->flags & IORESOURCE_MEM) == IORESOURCE_MEM) {
+ r->start += paddr;
+ r->end += paddr;
+ }
+ }
+}
+
+/* Get platform_data pointer out of platform device, call before platform_device_register */
+void *__init ppc_sys_get_pdata(enum ppc_sys_devices dev)
+{
+ return ppc_sys_platform_devices[dev].dev.platform_data;
+}
+
+void ppc_sys_device_remove(enum ppc_sys_devices dev)
+{
+ unsigned int i;
+
+ if (ppc_sys_inited) {
+ platform_device_unregister(&ppc_sys_platform_devices[dev]);
+ } else {
+ if (cur_ppc_sys_spec == NULL)
+ return;
+ for (i = 0; i < cur_ppc_sys_spec->num_devices; i++)
+ if (cur_ppc_sys_spec->device_list[i] == dev)
+ cur_ppc_sys_spec->device_list[i] = -1;
+ }
+}
+
+static int __init ppc_sys_init(void)
+{
+ unsigned int i, dev_id, ret = 0;
+
+ BUG_ON(cur_ppc_sys_spec == NULL);
+
+ for (i = 0; i < cur_ppc_sys_spec->num_devices; i++) {
+ dev_id = cur_ppc_sys_spec->device_list[i];
+ if (dev_id != -1) {
+ if (ppc_sys_device_fixup != NULL)
+ ppc_sys_device_fixup(&ppc_sys_platform_devices
+ [dev_id]);
+ if (platform_device_register
+ (&ppc_sys_platform_devices[dev_id])) {
+ ret = 1;
+ printk(KERN_ERR
+ "unable to register device %d\n",
+ dev_id);
+ }
+ }
+ }
+
+ ppc_sys_inited = 1;
+ return ret;
+}
+
+subsys_initcall(ppc_sys_init);
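
A sketch of how board setup code is expected to drive this API (editorial illustration, not part of the patch): identify the SoC first, then rebase each on-chip device's MMIO resources before ppc_sys_init() registers them. The DUART changes later in this patch show the matching ppc_sys_get_pdata() call. Here svr and immrbar are placeholders for values the real board code reads from the system version register and the bd_t:

	u32 svr = 0;			/* placeholder: system version register */
	phys_addr_t immrbar = 0;	/* placeholder: CCSR/IMMR physical base */
	int i;

	identify_ppc_sys_by_id(svr);

	for (i = 0; i < cur_ppc_sys_spec->num_devices; i++) {
		int dev = cur_ppc_sys_spec->device_list[i];

		if (dev != -1)
			ppc_sys_fixup_mem_resource(
				&ppc_sys_platform_devices[dev], immrbar);
	}
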
}
#endif
-static spinlock_t rtas_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rtas_lock);
/* this can be called after setup -- Cort */
int __openfirmware
#define mk_config_type1(bus, dev, offset) \
mk_config_addr(bus, dev, offset) | 1;
+static DEFINE_SPINLOCK(pcibios_lock);
+
int qspan_pcibios_read_config_byte(unsigned char bus, unsigned char dev_fn,
unsigned char offset, unsigned char *val)
{
}
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
offset ^= 0x03;
}
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
sp = ((ushort *)&temp) + ((offset >> 1) & 1);
}
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
return PCIBIOS_SUCCESSFUL;
*cp = val;
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
return PCIBIOS_SUCCESSFUL;
*sp = val;
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
return PCIBIOS_SUCCESSFUL;
return PCIBIOS_DEVICE_NOT_FOUND;
#ifdef CONFIG_RPXCLASSIC
- save_flags(flags);
- cli();
+ /* disable interrupts */
+ spin_lock_irqsave(&pcibios_lock, flags);
*((uint *)RPX_CSR_ADDR) &= ~BCSR2_QSPACESEL;
eieio();
#endif
#ifdef CONFIG_RPXCLASSIC
*((uint *)RPX_CSR_ADDR) |= BCSR2_QSPACESEL;
eieio();
- restore_flags(flags);
+ spin_unlock_irqrestore(&pcibios_lock, flags);
#endif
return PCIBIOS_SUCCESSFUL;
#define intc_in_be32(addr) mfdcr((addr))
#endif
-/* Global Variables */
-struct hw_interrupt_type *ppc4xx_pic;
-
static void
xilinx_intc_enable(unsigned int irq)
{
void __init
ppc4xx_pic_init(void)
{
+ int i;
+
#if XPAR_XINTC_USE_DCR == 0
intc = ioremap(XPAR_INTC_0_BASEADDR, 32);
/* Turn on the Master Enable. */
intc_out_be32(intc + MER, 0x3UL);
- ppc4xx_pic = &xilinx_intc;
ppc_md.get_irq = xilinx_pic_get_irq;
+
+ for (i = 0; i < NR_IRQS; ++i)
+ irq_desc[i].handler = &xilinx_intc;
}
bool "Check for stack overflows"
depends on DEBUG_KERNEL
+config KPROBES
+ bool "Kprobes"
+ depends on DEBUG_KERNEL
+ help
+ Kprobes allows you to trap at almost any kernel address and
+ execute a callback function. register_kprobe() establishes
+ a probepoint and specifies the callback. Kprobes is useful
+ for kernel debugging, non-intrusive instrumentation and testing.
+ If in doubt, say "N".
+
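
As a rough illustration of the register_kprobe() flow named in the help text (editorial sketch, not part of the patch; the probe point below is a placeholder symbol that a real user replaces with the address of the kernel instruction to probe):

	#include <linux/module.h>
	#include <linux/kprobes.h>

	static int example_pre(struct kprobe *p, struct pt_regs *regs)
	{
		printk(KERN_INFO "kprobe hit at %p\n", p->addr);
		return 0;	/* let the probed instruction be single-stepped */
	}

	static struct kprobe example_kp = {
		.pre_handler = example_pre,
	};

	static int __init example_init(void)
	{
		/* placeholder: supply a real kernel .text address here */
		extern kprobe_opcode_t example_probe_point[];

		example_kp.addr = example_probe_point;
		return register_kprobe(&example_kp);
	}

	static void __exit example_exit(void)
	{
		unregister_kprobe(&example_kp);
	}

	module_init(example_init);
	module_exit(example_exit);
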
config DEBUG_STACK_USAGE
bool "Stack utilization instrumentation"
depends on DEBUG_KERNEL
config XMON
bool "Include xmon kernel debugger"
- depends on DEBUGGER
+ depends on DEBUGGER && !PPC_ISERIES
help
Include in-kernel hooks for the xmon kernel monitor/debugger.
Unless you are intending to debug the kernel, say N here.
bdnz 8b
blr
- .globl bcopy
-bcopy:
- mr r6,r3
- mr r3,r4
- mr r4,r6
- b memcpy
-
.globl memmove
memmove:
cmplw 0,r3,r4
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_SAME=y
-# CONFIG_IP_NF_NAT_LOCAL is not set
# CONFIG_IP_NF_NAT_SNMP_BASIC is not set
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_TARGET_TOS=y
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_IPS is not set
-CONFIG_SCSI_IBMVSCSI=m
+CONFIG_SCSI_IBMVSCSI=y
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#
# CONFIG_NET_TULIP is not set
# CONFIG_HP100 is not set
-CONFIG_IBMVETH=m
+CONFIG_IBMVETH=y
CONFIG_NET_PCI=y
CONFIG_PCNET32=y
# CONFIG_AMD8111_ETH is not set
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_SAME=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
CONFIG_IP_NF_NAT_SNMP_BASIC=m
CONFIG_IP_NF_NAT_IRC=m
CONFIG_IP_NF_NAT_FTP=m
#include <linux/time.h>
-#define jiffies_to_timeval jiffies_to_compat_timeval
+#undef cputime_to_timeval
+#define cputime_to_timeval cputime_to_compat_timeval
static __inline__ void
-jiffies_to_compat_timeval(unsigned long jiffies, struct compat_timeval *value)
+cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value)
{
+ unsigned long jiffies = cputime_to_jiffies(cputime);
value->tv_usec = (jiffies % HZ) * (1000000L / HZ);
value->tv_sec = jiffies / HZ;
}
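
A quick worked example of the conversion above (editorial note): with HZ at 1000, a cputime value that converts to 1500 jiffies gives tv_sec = 1500 / 1000 = 1 and tv_usec = (1500 % 1000) * (1000000 / 1000) = 500000.
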
#define cached_A1 (cached_8259[0])
#define cached_21 (cached_8259[1])
-static spinlock_t i8259_lock __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED;
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(i8259_lock);
static int i8259_pic_irq_offset;
static int i8259_present;
#include <asm/iSeries/HvTypes.h>
#include <asm/iSeries/mf.h>
#include <asm/iSeries/LparData.h>
-//#include <asm/iSeries/iSeries_VpdInfo.h>
#include <asm/iSeries/iSeries_pci.h>
#include "pci.h"
typedef struct SlotMapStruct SlotMap;
#define SLOT_ENTRY_SIZE 16
-/*
- * Bus, Card, Board, FrameId, CardLocation.
- */
-LocationData* iSeries_GetLocationData(struct pci_dev *PciDev)
-{
- struct iSeries_Device_Node *DevNode =
- (struct iSeries_Device_Node *)PciDev->sysdata;
- LocationData *LocationPtr =
- (LocationData *)kmalloc(LOCATION_DATA_SIZE, GFP_KERNEL);
-
- if (LocationPtr == NULL) {
- printk("PCI: LocationData area allocation failed!\n");
- return NULL;
- }
- memset(LocationPtr, 0, LOCATION_DATA_SIZE);
- LocationPtr->Bus = ISERIES_BUS(DevNode);
- LocationPtr->Board = DevNode->Board;
- LocationPtr->FrameId = DevNode->FrameId;
- LocationPtr->Card = PCI_SLOT(DevNode->DevFn);
- strcpy(&LocationPtr->CardLocation[0], &DevNode->CardLocation[0]);
- return LocationPtr;
-}
-EXPORT_SYMBOL(iSeries_GetLocationData);
-
/*
* Formats the device information.
* - Pass in pci_dev* pointer to the device.
return len;
}
-/*
- * Build a character string of the device location, Frame 1, Card C10
- */
-int device_Location(struct pci_dev *PciDev, char *BufPtr)
-{
- struct iSeries_Device_Node *DevNode =
- (struct iSeries_Device_Node *)PciDev->sysdata;
- return sprintf(BufPtr, "PCI: Bus%3d, AgentId%3d, Vendor %04X, Location %s",
- DevNode->DsaAddr.Dsa.busNumber, DevNode->AgentId,
- DevNode->Vendor, DevNode->Location);
-}
-
/*
* Parse the Slot Area
*/
if (parms->itc_size == 0)
panic("PCI_DMA: parms->size is zero, parms is 0x%p", parms);
- tbl->it_size = parms->itc_size;
+ /* itc_size is in pages worth of table, it_size is in # of entries */
+ tbl->it_size = (parms->itc_size * PAGE_SIZE) / sizeof(union tce_entry);
tbl->it_busno = parms->itc_busno;
tbl->it_offset = parms->itc_offset;
tbl->it_index = parms->itc_index;
- tbl->it_entrysize = sizeof(union tce_entry);
tbl->it_blocksize = 1;
tbl->it_type = TCE_PCI;
kfree(tbl);
}
+static void iommu_dev_setup_iSeries(struct pci_dev *dev) { }
+static void iommu_bus_setup_iSeries(struct pci_bus *bus) { }
-void tce_init_iSeries(void)
+void iommu_init_early_iSeries(void)
{
ppc_md.tce_build = tce_build_iSeries;
ppc_md.tce_free = tce_free_iSeries;
+ ppc_md.iommu_dev_setup = iommu_dev_setup_iSeries;
+ ppc_md.iommu_bus_setup = iommu_bus_setup_iSeries;
+
pci_iommu_init();
}
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/irq.h>
+#include <linux/delay.h>
#include <asm/io.h>
#include <asm/iSeries/HvCallPci.h>
int iSeries_Device_ToggleReset(struct pci_dev *PciDev, int AssertTime,
int DelayTime)
{
- unsigned long AssertDelay, WaitDelay;
+ unsigned int AssertDelay, WaitDelay;
struct iSeries_Device_Node *DeviceNode =
(struct iSeries_Device_Node *)PciDev->sysdata;
* Set defaults, Assert is .5 second, Wait is 3 seconds.
*/
if (AssertTime == 0)
- AssertDelay = (5 * HZ) / 10;
+ AssertDelay = 500;
else
- AssertDelay = (AssertTime * HZ) / 10;
+ AssertDelay = AssertTime * 100;
if (DelayTime == 0)
- WaitDelay = (30 * HZ) / 10;
+ WaitDelay = 3000;
else
- WaitDelay = (DelayTime * HZ) / 10;
+ WaitDelay = DelayTime * 100;
/*
* Assert reset
DeviceNode->ReturnCode = HvCallPci_setSlotReset(ISERIES_BUS(DeviceNode),
0x00, DeviceNode->AgentId, 1);
if (DeviceNode->ReturnCode == 0) {
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(AssertDelay); /* Sleep for the time */
+ msleep(AssertDelay); /* Sleep for the time */
DeviceNode->ReturnCode =
HvCallPci_setSlotReset(ISERIES_BUS(DeviceNode),
0x00, DeviceNode->AgentId, 0);
/*
* Wait for device to reset
*/
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(WaitDelay);
+ msleep(WaitDelay);
}
if (DeviceNode->ReturnCode == 0)
PCIFR("Slot 0x%04X.%02 Reset\n", ISERIES_BUS(DeviceNode),
#include <asm/pgtable.h>
#include <asm/io.h>
#include <asm/smp.h>
-#include <asm/naca.h>
#include <asm/paca.h>
#include <asm/iSeries/LparData.h>
#include <asm/iSeries/HvCall.h>
np = 0;
for (i=0; i < NR_CPUS; ++i) {
- if (paca[i].lppaca.xDynProcStatus < 2) {
+ if (paca[i].lppaca.dyn_proc_status < 2) {
cpu_set(i, cpu_possible_map);
cpu_set(i, cpu_present_map);
cpu_set(i, cpu_sibling_map[i]);
unsigned np = 0;
for (i=0; i < NR_CPUS; ++i) {
- if (paca[i].lppaca.xDynProcStatus < 2) {
+ if (paca[i].lppaca.dyn_proc_status < 2) {
/*paca[i].active = 1;*/
++np;
}
BUG_ON(nr < 0 || nr >= NR_CPUS);
/* Verify that our partition has a processor nr */
- if (paca[nr].lppaca.xDynProcStatus >= 2)
+ if (paca[nr].lppaca.dyn_proc_status >= 2)
return;
/* The processor is currently spinning, waiting
if (flags & IORESOURCE_IO)
return ioport_map(start, len);
if (flags & IORESOURCE_MEM)
- return (void __iomem *) start;
+ return ioremap(start, len);
/* What? */
return NULL;
}
--- /dev/null
+/*
+ * Kernel Probes (KProbes)
+ * arch/ppc64/kernel/kprobes.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ *
+ * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
+ * Probes initial implementation ( includes contributions from
+ * Rusty Russell).
+ * 2004-July Suparna Bhattacharya <suparna@in.ibm.com> added jumper probes
+ * interface to access function arguments.
+ * 2004-Nov Ananth N Mavinakayanahalli <ananth@in.ibm.com> kprobes port
+ * for PPC64
+ */
+
+#include <linux/config.h>
+#include <linux/kprobes.h>
+#include <linux/ptrace.h>
+#include <linux/spinlock.h>
+#include <linux/preempt.h>
+#include <asm/kdebug.h>
+#include <asm/sstep.h>
+
+/* kprobe_status settings */
+#define KPROBE_HIT_ACTIVE 0x00000001
+#define KPROBE_HIT_SS 0x00000002
+
+static struct kprobe *current_kprobe;
+static unsigned long kprobe_status, kprobe_saved_msr;
+static struct pt_regs jprobe_saved_regs;
+
+int arch_prepare_kprobe(struct kprobe *p)
+{
+ kprobe_opcode_t insn = *p->addr;
+
+ if (IS_MTMSRD(insn) || IS_RFID(insn))
+ /* cannot put bp on RFID/MTMSRD */
+ return 1;
+ return 0;
+}
+
+void arch_copy_kprobe(struct kprobe *p)
+{
+ memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+}
+
+void arch_remove_kprobe(struct kprobe *p)
+{
+}
+
+static inline void disarm_kprobe(struct kprobe *p, struct pt_regs *regs)
+{
+ *p->addr = p->opcode;
+ regs->nip = (unsigned long)p->addr;
+}
+
+static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
+{
+ regs->msr |= MSR_SE;
+ regs->nip = (unsigned long)&p->ainsn.insn;
+}
+
+static inline int kprobe_handler(struct pt_regs *regs)
+{
+ struct kprobe *p;
+ int ret = 0;
+ unsigned int *addr = (unsigned int *)regs->nip;
+
+ /* We're in an interrupt, but this is clear and BUG()-safe. */
+ preempt_disable();
+
+ /* Check we're not actually recursing */
+ if (kprobe_running()) {
+ /* We *are* holding lock here, so this is safe.
+ Disarm the probe we just hit, and ignore it. */
+ p = get_kprobe(addr);
+ if (p) {
+ disarm_kprobe(p, regs);
+ ret = 1;
+ } else {
+ p = current_kprobe;
+ if (p->break_handler && p->break_handler(p, regs)) {
+ goto ss_probe;
+ }
+ }
+ /* If it's not ours, it can't be a delete race (we hold the lock). */
+ goto no_kprobe;
+ }
+
+ lock_kprobes();
+ p = get_kprobe(addr);
+ if (!p) {
+ unlock_kprobes();
+#if 0
+ if (*addr != BREAKPOINT_INSTRUCTION) {
+ /*
+ * The breakpoint instruction was removed right
+ * after we hit it. Another cpu has removed
+ * either a probepoint or a debugger breakpoint
+ * at this address. In either case, no further
+ * handling of this interrupt is appropriate.
+ */
+ ret = 1;
+ }
+#endif
+ /* Not one of ours: let kernel handle it */
+ goto no_kprobe;
+ }
+
+ kprobe_status = KPROBE_HIT_ACTIVE;
+ current_kprobe = p;
+ kprobe_saved_msr = regs->msr;
+ if (p->pre_handler(p, regs)) {
+ /* handler has already set things up, so skip ss setup */
+ return 1;
+ }
+
+ss_probe:
+ prepare_singlestep(p, regs);
+ kprobe_status = KPROBE_HIT_SS;
+ return 1;
+
+no_kprobe:
+ preempt_enable_no_resched();
+ return ret;
+}
+
+/*
+ * Called after single-stepping. p->addr is the address of the
+ * instruction that has been replaced by the "breakpoint"
+ * instruction. To avoid the SMP problems that can occur when we
+ * temporarily put back the original opcode to single-step, we
+ * single-stepped a copy of the instruction. The address of this
+ * copy is p->ainsn.insn.
+ */
+static void resume_execution(struct kprobe *p, struct pt_regs *regs)
+{
+ int ret;
+
+ regs->nip = (unsigned long)p->addr;
+ ret = emulate_step(regs, p->ainsn.insn[0]);
+ if (ret == 0)
+ regs->nip = (unsigned long)p->addr + 4;
+
+ regs->msr &= ~MSR_SE;
+}
+
+static inline int post_kprobe_handler(struct pt_regs *regs)
+{
+ if (!kprobe_running())
+ return 0;
+
+ if (current_kprobe->post_handler)
+ current_kprobe->post_handler(current_kprobe, regs, 0);
+
+ resume_execution(current_kprobe, regs);
+ regs->msr |= kprobe_saved_msr;
+
+ unlock_kprobes();
+ preempt_enable_no_resched();
+
+ /*
+ * if somebody else is singlestepping across a probe point, msr
+ * will have SE set, in which case, continue the remaining processing
+ * of do_debug, as if this is not a probe hit.
+ */
+ if (regs->msr & MSR_SE)
+ return 0;
+
+ return 1;
+}
+
+/* Interrupts disabled, kprobe_lock held. */
+static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+{
+ if (current_kprobe->fault_handler
+ && current_kprobe->fault_handler(current_kprobe, regs, trapnr))
+ return 1;
+
+ if (kprobe_status & KPROBE_HIT_SS) {
+ resume_execution(current_kprobe, regs);
+ regs->msr |= kprobe_saved_msr;
+
+ unlock_kprobes();
+ preempt_enable_no_resched();
+ }
+ return 0;
+}
+
+/*
+ * Wrapper routine for handling exceptions.
+ */
+int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
+ void *data)
+{
+ struct die_args *args = (struct die_args *)data;
+ switch (val) {
+ case DIE_IABR_MATCH:
+ case DIE_DABR_MATCH:
+ case DIE_BPT:
+ if (kprobe_handler(args->regs))
+ return NOTIFY_STOP;
+ break;
+ case DIE_SSTEP:
+ if (post_kprobe_handler(args->regs))
+ return NOTIFY_STOP;
+ break;
+ case DIE_GPF:
+ case DIE_PAGE_FAULT:
+ if (kprobe_running() &&
+ kprobe_fault_handler(args->regs, args->trapnr))
+ return NOTIFY_STOP;
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_DONE;
+}
+
+int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
+{
+ struct jprobe *jp = container_of(p, struct jprobe, kp);
+
+ memcpy(&jprobe_saved_regs, regs, sizeof(struct pt_regs));
+
+ /* setup return addr to the jprobe handler routine */
+ regs->nip = (unsigned long)(((func_descr_t *)jp->entry)->entry);
+ regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
+
+ return 1;
+}
+
+void jprobe_return(void)
+{
+ preempt_enable_no_resched();
+ asm volatile("trap" ::: "memory");
+}
+
+void jprobe_return_end(void)
+{
+};
+
+int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+{
+ /*
+ * FIXME - we should ideally be validating that we got here because
+ * of the "trap" in jprobe_return() above, before restoring the
+ * saved regs...
+ */
+ memcpy(regs, &jprobe_saved_regs, sizeof(struct pt_regs));
+ return 1;
+}
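
The three jprobe entry points above are invoked by the generic kprobes core; a client looks roughly like this (editorial sketch, not part of the patch; the handler signature must mirror the probed function, and the probed address is a placeholder set in the same way as for an ordinary kprobe):

	static long example_jprobe_handler(int arg0, int arg1)
	{
		/* runs with the probed function's register arguments */
		printk(KERN_INFO "args: %d %d\n", arg0, arg1);

		jprobe_return();	/* mandatory: never return normally */
		return 0;		/* not reached */
	}

	static struct jprobe example_jp = {
		.entry = (kprobe_opcode_t *) example_jprobe_handler,
	};

	/* example_jp.kp.addr is then pointed at the probed function (a
	 * placeholder here) and the probe armed with
	 * register_jprobe(&example_jp). */
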
/* Do the mapping of the IO space */
phbs_remap_io();
- /* Fixup the pci_bus sysdata pointers */
- pci_fix_bus_sysdata();
-
- /* Setup the iommu */
- iommu_setup_u3();
-
DBG(" <- maple_pcibios_fixup\n");
}
extern void maple_pci_init(void);
extern void maple_pcibios_fixup(void);
extern int maple_pci_get_legacy_ide_irq(struct pci_dev *dev, int channel);
-extern void generic_find_legacy_serial_ports(unsigned int *default_speed);
+extern void generic_find_legacy_serial_ports(u64 *physport,
+ unsigned int *default_speed);
static void maple_restart(char *cmd)
#ifdef CONFIG_SMP
smp_ops = &maple_smp_ops;
#endif
- /* Setup the PCI DMA to "direct" by default. May be overriden
- * by iommu later on
- */
- pci_dma_init_direct();
-
/* Lookup PCI hosts */
maple_pci_init();
static void __init maple_init_early(void)
{
unsigned int default_speed;
+ u64 physport;
DBG(" -> maple_init_early\n");
hpte_init_native();
/* Find the serial port */
- generic_find_legacy_serial_ports(&default_speed);
+ generic_find_legacy_serial_ports(&physport, &default_speed);
- DBG("naca->serialPortAddr: %lx\n", (long)naca->serialPortAddr);
+ DBG("phys port addr: %lx\n", (long)physport);
- if (naca->serialPortAddr) {
+ if (physport) {
void *comport;
/* Map the uart for udbg. */
- comport = (void *)__ioremap(naca->serialPortAddr, 16, _PAGE_NO_CACHE);
+ comport = (void *)__ioremap(physport, 16, _PAGE_NO_CACHE);
udbg_init_uart(comport, default_speed);
ppc_md.udbg_putc = udbg_putc;
}
/* Setup interrupt mapping options */
- naca->interrupt_controller = IC_OPEN_PIC;
+ ppc64_interrupt_controller = IC_OPEN_PIC;
+
+ iommu_init_early_u3();
DBG(" <- maple_init_early\n");
}
static struct mpic *mpics;
static struct mpic *mpic_primary;
-static spinlock_t mpic_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mpic_lock);
/*
#ifdef CONFIG_SMP
struct mpic *mpic = mpic_primary;
unsigned long flags;
-#ifdef CONFIG_IRQ_ALL_CPUS
u32 msk = 1 << hard_smp_processor_id();
unsigned int i;
-#endif
BUG_ON(mpic == NULL);
spin_lock_irqsave(&mpic_lock, flags);
-#ifdef CONFIG_IRQ_ALL_CPUS
/* let the mpic know we want intrs. default affinity is 0xffffffff
* until changed via /proc. That's how it's done on x86. If we want
* it differently, then we should make sure we also change the default
* values of irq_affinity in irq.c.
*/
- for (i = 0; i < mpic->num_sources ; i++)
- mpic_irq_write(i, MPIC_IRQ_DESTINATION,
- mpic_irq_read(i, MPIC_IRQ_DESTINATION) | msk);
-#endif /* CONFIG_IRQ_ALL_CPUS */
+ if (distribute_irqs) {
+ for (i = 0; i < mpic->num_sources ; i++)
+ mpic_irq_write(i, MPIC_IRQ_DESTINATION,
+ mpic_irq_read(i, MPIC_IRQ_DESTINATION) | msk);
+ }
/* Set current processor priority to 0 */
mpic_cpu_write(MPIC_CPU_CURRENT_TASK_PRI, 0);
#include <asm/rtas.h>
#include <asm/prom.h>
#include <asm/machdep.h>
+#include <asm/systemcfg.h>
#undef DEBUG_NVRAM
#include <asm/machdep.h>
#include <asm/abs_addr.h>
#include <asm/plpar_wrappers.h>
+#include <asm/systemcfg.h>
#include "pci.h"
+#define DBG(fmt...)
+
+extern int is_python(struct device_node *);
static void tce_build_pSeries(struct iommu_table *tbl, long index,
long npages, unsigned long uaddr,
}
}
-DEFINE_PER_CPU(void *, tce_page) = NULL;
+static DEFINE_PER_CPU(void *, tce_page) = NULL;
static void tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
long npages, unsigned long uaddr,
}
}
-
-static void iommu_buses_init(void)
-{
- struct pci_controller *phb, *tmp;
- struct device_node *dn, *first_dn;
- int num_slots, num_slots_ilog2;
- int first_phb = 1;
- unsigned long tcetable_ilog2;
-
- /*
- * We default to a TCE table that maps 2GB (4MB table, 22 bits),
- * however some machines have a 3GB IO hole and for these we
- * create a table that maps 1GB (2MB table, 21 bits)
- */
- if (io_hole_start < 0x80000000UL)
- tcetable_ilog2 = 21;
- else
- tcetable_ilog2 = 22;
-
- /* XXX Should we be using pci_root_buses instead? -ojn
- */
-
- list_for_each_entry_safe(phb, tmp, &hose_list, list_node) {
- first_dn = ((struct device_node *)phb->arch_data)->child;
-
- /* Carve 2GB into the largest dma_window_size possible */
- for (dn = first_dn, num_slots = 0; dn != NULL; dn = dn->sibling)
- num_slots++;
- num_slots_ilog2 = __ilog2(num_slots);
-
- if ((1<<num_slots_ilog2) != num_slots)
- num_slots_ilog2++;
-
- phb->dma_window_size = 1 << (tcetable_ilog2 - num_slots_ilog2);
-
- /* Reserve 16MB of DMA space on the first PHB.
- * We should probably be more careful and use firmware props.
- * In reality this space is remapped, not lost. But we don't
- * want to get that smart to handle it -- too much work.
- */
- phb->dma_window_base_cur = first_phb ? (1 << 12) : 0;
- first_phb = 0;
-
- for (dn = first_dn; dn != NULL; dn = dn->sibling)
- iommu_devnode_init_pSeries(dn);
- }
-}
-
-
-static void iommu_buses_init_lpar(struct list_head *bus_list)
-{
- struct list_head *ln;
- struct pci_bus *bus;
- struct device_node *busdn;
- unsigned int *dma_window;
-
- for (ln=bus_list->next; ln != bus_list; ln=ln->next) {
- bus = pci_bus_b(ln);
-
- if (bus->self)
- busdn = pci_device_to_OF_node(bus->self);
- else
- busdn = bus->sysdata; /* must be a phb */
-
- dma_window = (unsigned int *)get_property(busdn, "ibm,dma-window", NULL);
- if (dma_window) {
- /* Bussubno hasn't been copied yet.
- * Do it now because iommu_table_setparms_lpar needs it.
- */
- busdn->bussubno = bus->number;
- iommu_devnode_init_pSeries(busdn);
- }
-
- /* look for a window on a bridge even if the PHB had one */
- iommu_buses_init_lpar(&bus->children);
- }
-}
-
-
static void iommu_table_setparms(struct pci_controller *phb,
struct device_node *dn,
struct iommu_table *tbl)
tbl->it_busno = phb->bus->number;
/* Units of tce entries */
- tbl->it_offset = phb->dma_window_base_cur;
-
- /* Adjust the current table offset to the next
- * region. Measured in TCE entries. Force an
- * alignment to the size allotted per IOA. This
- * makes it easier to remove the 1st 16MB.
- */
- phb->dma_window_base_cur += (phb->dma_window_size>>3);
- phb->dma_window_base_cur &=
- ~((phb->dma_window_size>>3)-1);
-
- /* Set the tce table size - measured in pages */
- tbl->it_size = ((phb->dma_window_base_cur -
- tbl->it_offset) << 3) >> PAGE_SHIFT;
+ tbl->it_offset = phb->dma_window_base_cur >> PAGE_SHIFT;
/* Test if we are going over 2GB of DMA space */
- if (phb->dma_window_base_cur > (1 << 19))
+ if (phb->dma_window_base_cur + phb->dma_window_size > (1L << 31))
panic("PCI_DMA: Unexpected number of IOAs under this PHB.\n");
+ phb->dma_window_base_cur += phb->dma_window_size;
+
+ /* Set the tce table size - measured in entries */
+ tbl->it_size = phb->dma_window_size >> PAGE_SHIFT;
+
tbl->it_index = 0;
- tbl->it_entrysize = sizeof(union tce_entry);
tbl->it_blocksize = 16;
tbl->it_type = TCE_PCI;
}
*/
static void iommu_table_setparms_lpar(struct pci_controller *phb,
struct device_node *dn,
- struct iommu_table *tbl)
+ struct iommu_table *tbl,
+ unsigned int *dma_window)
{
- unsigned int *dma_window;
-
- dma_window = (unsigned int *)get_property(dn, "ibm,dma-window", NULL);
-
- if (!dma_window)
- panic("iommu_table_setparms_lpar: device %s has no"
- " ibm,dma-window property!\n", dn->full_name);
-
tbl->it_busno = dn->bussubno;
- tbl->it_size = (((((unsigned long)dma_window[4] << 32) |
- (unsigned long)dma_window[5]) >> PAGE_SHIFT) << 3) >> PAGE_SHIFT;
- tbl->it_offset = ((((unsigned long)dma_window[2] << 32) |
- (unsigned long)dma_window[3]) >> 12);
+
+ /* TODO: Parse field size properties properly. */
+ tbl->it_size = (((unsigned long)dma_window[4] << 32) |
+ (unsigned long)dma_window[5]) >> PAGE_SHIFT;
+ tbl->it_offset = (((unsigned long)dma_window[2] << 32) |
+ (unsigned long)dma_window[3]) >> PAGE_SHIFT;
tbl->it_base = 0;
tbl->it_index = dma_window[0];
- tbl->it_entrysize = sizeof(union tce_entry);
tbl->it_blocksize = 16;
tbl->it_type = TCE_PCI;
}
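
As used above, the "ibm,dma-window" property is read as six 32-bit cells: cell 0 becomes it_index, cells 2-3 a 64-bit bus offset and cells 4-5 a 64-bit window size (the TODO notes that the cell widths are not yet parsed from the tree). A minimal stand-alone sketch of that decode, assuming 4 KB IOMMU pages and hypothetical property values:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* assumed 4 KB pages */

int main(void)
{
	/* hypothetical "ibm,dma-window" cells: index, <unused>, offset hi/lo, size hi/lo */
	uint32_t dma_window[6] = { 0x800, 0, 0x0, 0x0, 0x0, 0x40000000 };	/* 1 GB window at bus offset 0 */

	uint64_t offset = ((uint64_t)dma_window[2] << 32) | dma_window[3];
	uint64_t size   = ((uint64_t)dma_window[4] << 32) | dma_window[5];

	/* same shifts as iommu_table_setparms_lpar(): both values are in TCE entries */
	printf("it_index  = 0x%x\n", dma_window[0]);
	printf("it_offset = 0x%llx\n", (unsigned long long)(offset >> PAGE_SHIFT));
	printf("it_size   = 0x%llx\n", (unsigned long long)(size >> PAGE_SHIFT));	/* 0x40000 entries */
	return 0;
}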
+static void iommu_bus_setup_pSeries(struct pci_bus *bus)
+{
+ struct device_node *dn, *pdn;
+ struct iommu_table *tbl;
+
+ DBG("iommu_bus_setup_pSeries, bus %p, bus->self %p\n", bus, bus->self);
+
+ /* For each (root) bus, we carve up the available DMA space in 256MB
+ * pieces. Since each piece is used by one (sub) bus/device, that would
+ * give a maximum of 7 devices per PHB. In most cases, this is plenty.
+ *
+ * The exception is on Python PHBs (pre-POWER4). Here we don't have EADS
+ * bridges below the PHB to allocate the sectioned tables to, so instead
+ * we allocate a 1GB table at the PHB level.
+ */
+
+ dn = pci_bus_to_OF_node(bus);
+
+ if (!bus->self) {
+ /* Root bus */
+ if (is_python(dn)) {
+ unsigned int *iohole;
+
+ DBG("Python root bus %s\n", bus->name);
+
+ iohole = (unsigned int *)get_property(dn, "io-hole", 0);
+
+ if (iohole) {
+ /* On first bus we need to leave room for the
+ * ISA address space. Just skip the first 256MB
+				 * altogether. This leaves 768MB for the window.
+ */
+ DBG("PHB has io-hole, reserving 256MB\n");
+ dn->phb->dma_window_size = 3 << 28;
+ dn->phb->dma_window_base_cur = 1 << 28;
+ } else {
+ /* 1GB window by default */
+ dn->phb->dma_window_size = 1 << 30;
+ dn->phb->dma_window_base_cur = 0;
+ }
+
+ tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL);
+
+ iommu_table_setparms(dn->phb, dn, tbl);
+ dn->iommu_table = iommu_init_table(tbl);
+ } else {
+ /* Do a 128MB table at root. This is used for the IDE
+ * controller on some SMP-mode POWER4 machines. It
+ * doesn't hurt to allocate it on other machines
+ * -- it'll just be unused since new tables are
+ * allocated on the EADS level.
+ *
+ * Allocate at offset 128MB to avoid having to deal
+ * with ISA holes; 128MB table for IDE is plenty.
+ */
+ dn->phb->dma_window_size = 1 << 27;
+ dn->phb->dma_window_base_cur = 1 << 27;
-void iommu_devnode_init_pSeries(struct device_node *dn)
+ tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL);
+
+ iommu_table_setparms(dn->phb, dn, tbl);
+ dn->iommu_table = iommu_init_table(tbl);
+
+ /* All child buses have 256MB tables */
+ dn->phb->dma_window_size = 1 << 28;
+ }
+ } else {
+ pdn = pci_bus_to_OF_node(bus->parent);
+
+ if (!bus->parent->self && !is_python(pdn)) {
+ struct iommu_table *tbl;
+ /* First child and not python means this is the EADS
+ * level. Allocate new table for this slot with 256MB
+ * window.
+ */
+
+ tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL);
+
+ iommu_table_setparms(dn->phb, dn, tbl);
+
+ dn->iommu_table = iommu_init_table(tbl);
+ } else {
+ /* Lower than first child or under python, use parent table */
+ dn->iommu_table = pdn->iommu_table;
+ }
+ }
+}
+
+
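
To make the sizing above concrete, here is a small stand-alone sketch (4 KB IOMMU pages assumed, helper name hypothetical) that prints the window base, size and resulting TCE-entry count for each case handled in iommu_bus_setup_pSeries():

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KB IOMMU pages */

/* hypothetical helper: mirrors the dma_window_base_cur/dma_window_size choices above */
static void show(const char *what, unsigned long base, unsigned long size)
{
	printf("%-24s base 0x%08lx  %4lu MB  -> %6lu TCE entries\n",
	       what, base, size >> 20, size >> PAGE_SHIFT);
}

int main(void)
{
	show("Python PHB, io-hole", 1UL << 28, 3UL << 28);	/* skip first 256MB, 768MB window */
	show("Python PHB, no hole", 0,         1UL << 30);	/* 1GB window */
	show("root bus IDE table",  1UL << 27, 1UL << 27);	/* 128MB at offset 128MB */
	show("per EADS slot",       0,         1UL << 28);	/* 256MB per child bus */
	return 0;
}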
+static void iommu_bus_setup_pSeriesLP(struct pci_bus *bus)
{
struct iommu_table *tbl;
+ struct device_node *dn, *pdn;
+ unsigned int *dma_window = NULL;
- tbl = (struct iommu_table *)kmalloc(sizeof(struct iommu_table),
- GFP_KERNEL);
-
- if (systemcfg->platform == PLATFORM_PSERIES_LPAR)
- iommu_table_setparms_lpar(dn->phb, dn, tbl);
- else
- iommu_table_setparms(dn->phb, dn, tbl);
+ dn = pci_bus_to_OF_node(bus);
+
+ /* Find nearest ibm,dma-window, walking up the device tree */
+ for (pdn = dn; pdn != NULL; pdn = pdn->parent) {
+ dma_window = (unsigned int *)get_property(pdn, "ibm,dma-window", NULL);
+ if (dma_window != NULL)
+ break;
+ }
+
+ if (dma_window == NULL) {
+ DBG("iommu_bus_setup_pSeriesLP: bus %s seems to have no ibm,dma-window property\n", dn->full_name);
+ return;
+ }
+
+ if (!pdn->iommu_table) {
+ /* Bussubno hasn't been copied yet.
+ * Do it now because iommu_table_setparms_lpar needs it.
+ */
+ pdn->bussubno = bus->number;
+
+ tbl = (struct iommu_table *)kmalloc(sizeof(struct iommu_table),
+ GFP_KERNEL);
- dn->iommu_table = iommu_init_table(tbl);
+ iommu_table_setparms_lpar(pdn->phb, pdn, tbl, dma_window);
+
+ pdn->iommu_table = iommu_init_table(tbl);
+ }
+
+ if (pdn != dn)
+ dn->iommu_table = pdn->iommu_table;
}
-void iommu_setup_pSeries(void)
+
+static void iommu_dev_setup_pSeries(struct pci_dev *dev)
{
- struct pci_dev *dev = NULL;
struct device_node *dn, *mydn;
- if (systemcfg->platform == PLATFORM_PSERIES_LPAR)
- iommu_buses_init_lpar(&pci_root_buses);
- else
- iommu_buses_init();
-
- /* Now copy the iommu_table ptr from the bus devices down to every
+ DBG("iommu_dev_setup_pSeries, dev %p (%s)\n", dev, dev->pretty_name);
+ /* Now copy the iommu_table ptr from the bus device down to the
* pci device_node. This means get_iommu_table() won't need to search
* up the device tree to find it.
*/
- for_each_pci_dev(dev) {
- mydn = dn = pci_device_to_OF_node(dev);
+ mydn = dn = pci_device_to_OF_node(dev);
- while (dn && dn->iommu_table == NULL)
- dn = dn->parent;
- if (dn)
- mydn->iommu_table = dn->iommu_table;
+ while (dn && dn->iommu_table == NULL)
+ dn = dn->parent;
+
+ if (dn) {
+ mydn->iommu_table = dn->iommu_table;
+ } else {
+ DBG("iommu_dev_setup_pSeries, dev %p (%s) has no iommu table\n", dev, dev->pretty_name);
}
}
+static void iommu_bus_setup_null(struct pci_bus *b) { }
+static void iommu_dev_setup_null(struct pci_dev *d) { }
+
/* These are called very early. */
-void tce_init_pSeries(void)
+void iommu_init_early_pSeries(void)
{
- if (!(systemcfg->platform & PLATFORM_LPAR)) {
+ if (of_chosen && get_property(of_chosen, "linux,iommu-off", NULL)) {
+ /* Direct I/O, IOMMU off */
+ ppc_md.iommu_dev_setup = iommu_dev_setup_null;
+ ppc_md.iommu_bus_setup = iommu_bus_setup_null;
+ pci_direct_iommu_init();
+
+ return;
+ }
+
+ if (systemcfg->platform & PLATFORM_LPAR) {
+ if (cur_cpu_spec->firmware_features & FW_FEATURE_MULTITCE) {
+ ppc_md.tce_build = tce_buildmulti_pSeriesLP;
+ ppc_md.tce_free = tce_freemulti_pSeriesLP;
+ } else {
+ ppc_md.tce_build = tce_build_pSeriesLP;
+ ppc_md.tce_free = tce_free_pSeriesLP;
+ }
+ ppc_md.iommu_bus_setup = iommu_bus_setup_pSeriesLP;
+ } else {
ppc_md.tce_build = tce_build_pSeries;
ppc_md.tce_free = tce_free_pSeries;
- } else if (cur_cpu_spec->firmware_features & FW_FEATURE_MULTITCE) {
- ppc_md.tce_build = tce_buildmulti_pSeriesLP;
- ppc_md.tce_free = tce_freemulti_pSeriesLP;
- } else {
- ppc_md.tce_build = tce_build_pSeriesLP;
- ppc_md.tce_free = tce_free_pSeriesLP;
+ ppc_md.iommu_bus_setup = iommu_bus_setup_pSeries;
}
+ ppc_md.iommu_dev_setup = iommu_dev_setup_pSeries;
+
pci_iommu_init();
}
#include <linux/adb.h>
#include <linux/module.h>
#include <linux/delay.h>
-
#include <linux/irq.h>
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <asm/dma.h>
#include <asm/machdep.h>
#include <asm/irq.h>
-#include <asm/naca.h>
#include <asm/time.h>
#include <asm/nvram.h>
-
-#include "i8259.h"
+#include <asm/plpar_wrappers.h>
#include <asm/xics.h>
-#include <asm/ppcdebug.h>
#include <asm/cputable.h>
+#include "i8259.h"
#include "mpic.h"
+#include "pci.h"
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#define DBG(fmt...)
#endif
-extern void find_and_init_phbs(void);
extern void pSeries_final_fixup(void);
extern void pSeries_get_boot_time(struct rtc_time *rtc_time);
extern void pSeries_get_rtc_time(struct rtc_time *rtc_time);
extern int pSeries_set_rtc_time(struct rtc_time *rtc_time);
extern void find_udbg_vterm(void);
-extern void SystemReset_FWNMI(void), MachineCheck_FWNMI(void); /* from head.S */
-extern void generic_find_legacy_serial_ports(unsigned int *default_speed);
+extern void system_reset_fwnmi(void); /* from head.S */
+extern void machine_check_fwnmi(void); /* from head.S */
+extern void generic_find_legacy_serial_ports(u64 *physport,
+ unsigned int *default_speed);
int fwnmi_active; /* TRUE if an FWNMI handler is present */
-unsigned long virtPython0Facilities = 0; // python0 facility area (memory mapped io) (64-bit format) VIRTUAL address.
-
-extern unsigned long loops_per_jiffy;
-
extern unsigned long ppc_proc_freq;
extern unsigned long ppc_tb_freq;
+extern void pSeries_system_reset_exception(struct pt_regs *regs);
+extern int pSeries_machine_check_exception(struct pt_regs *regs);
+
static volatile void __iomem * chrp_int_ack_special;
struct mpic *pSeries_mpic;
if (ibm_nmi_register == RTAS_UNKNOWN_SERVICE)
return;
ret = rtas_call(ibm_nmi_register, 2, 1, NULL,
- __pa((unsigned long)SystemReset_FWNMI),
- __pa((unsigned long)MachineCheck_FWNMI));
+ __pa((unsigned long)system_reset_fwnmi),
+ __pa((unsigned long)machine_check_fwnmi));
if (ret == 0)
fwnmi_active = 1;
}
static void __init pSeries_setup_arch(void)
{
/* Fixup ppc_md depending on the type of interrupt controller */
- if (naca->interrupt_controller == IC_OPEN_PIC) {
+ if (ppc64_interrupt_controller == IC_OPEN_PIC) {
ppc_md.init_IRQ = pSeries_init_mpic;
ppc_md.get_irq = mpic_get_irq;
/* Allocate the mpic now, so that find_and_init_phbs() can
fwnmi_init();
/* Find and initialize PCI host bridges */
- /* iSeries needs to be done much later. */
+ init_pci_config_tokens();
eeh_init();
find_and_init_phbs();
* to properly parse the OF interrupt tree & do the virtual irq mapping
*/
__irq_offset_value = NUM_ISA_INTERRUPTS;
- naca->interrupt_controller = IC_INVALID;
+ ppc64_interrupt_controller = IC_INVALID;
for (np = NULL; (np = of_find_node_by_name(np, "interrupt-controller"));) {
typep = (char *)get_property(np, "compatible", NULL);
if (strstr(typep, "open-pic"))
- naca->interrupt_controller = IC_OPEN_PIC;
+ ppc64_interrupt_controller = IC_OPEN_PIC;
else if (strstr(typep, "ppc-xicp"))
- naca->interrupt_controller = IC_PPC_XIC;
+ ppc64_interrupt_controller = IC_PPC_XIC;
else
- printk("initialize_naca: failed to recognize"
+ printk("pSeries_discover_pic: failed to recognize"
" interrupt-controller\n");
break;
}
void *comport;
int iommu_off = 0;
unsigned int default_speed;
+ u64 physport;
DBG(" -> pSeries_init_early()\n");
get_property(of_chosen, "linux,iommu-off", NULL));
}
- generic_find_legacy_serial_ports(&default_speed);
+ generic_find_legacy_serial_ports(&physport, &default_speed);
if (systemcfg->platform & PLATFORM_LPAR)
find_udbg_vterm();
- else if (naca->serialPortAddr) {
+ else if (physport) {
/* Map the uart for udbg. */
- comport = (void *)__ioremap(naca->serialPortAddr, 16, _PAGE_NO_CACHE);
+ comport = (void *)__ioremap(physport, 16, _PAGE_NO_CACHE);
udbg_init_uart(comport, default_speed);
ppc_md.udbg_putc = udbg_putc;
}
- if (iommu_off)
- pci_dma_init_direct();
- else
- tce_init_pSeries();
+ iommu_init_early_pSeries();
pSeries_discover_pic();
char *os;
static int display_character, set_indicator;
static int max_width;
- static spinlock_t progress_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(progress_lock);
static int pending_newline = 0; /* did last write end with unprinted newline? */
if (!rtas.base)
.calibrate_decr = pSeries_calibrate_decr,
.progress = pSeries_progress,
.check_legacy_ioport = pSeries_check_legacy_ioport,
+ .system_reset_exception = pSeries_system_reset_exception,
+ .machine_check_exception = pSeries_machine_check_exception,
};
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/smp.h>
-#include <linux/smp_lock.h>
#include <linux/interrupt.h>
-#include <linux/kernel_stat.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <asm/io.h>
#include <asm/prom.h>
#include <asm/smp.h>
-#include <asm/naca.h>
#include <asm/paca.h>
#include <asm/time.h>
-#include <asm/ppcdebug.h>
#include <asm/machdep.h>
#include <asm/xics.h>
#include <asm/cputable.h>
#define DBG(fmt...)
#endif
-extern void pseries_secondary_smp_init(unsigned long);
+extern void pSeries_secondary_smp_init(unsigned long);
/* Get state of physical CPU.
* Return codes:
#ifdef CONFIG_HOTPLUG_CPU
-int __cpu_disable(void)
+int pSeries_cpu_disable(void)
{
- /* FIXME: go put this in a header somewhere */
- extern void xics_migrate_irqs_away(void);
-
systemcfg->processorCount--;
/*fix boot_cpuid here*/
return 0;
}
-void __cpu_die(unsigned int cpu)
+void pSeries_cpu_die(unsigned int cpu)
{
int tries;
int cpu_status;
cpu_status = query_cpu_stopped(pcpu);
if (cpu_status == 0 || cpu_status == -1)
break;
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ/5);
+ msleep(200);
}
if (cpu_status != 0) {
printk("Querying DEAD? cpu %i (%i) shows %i\n",
{
int status;
unsigned long start_here = __pa((u32)*((unsigned long *)
- pseries_secondary_smp_init));
+ pSeries_secondary_smp_init));
unsigned int pcpu;
/* At boot time the cpus are already spinning in hold
}
}
-extern void xics_request_IPIs(void);
-
static int __init smp_xics_probe(void)
{
xics_request_IPIs();
{
if (cpu != boot_cpuid)
xics_setup_cpu();
+
+ if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR)
+ vpa_init(cpu);
+
+ /*
+ * Put the calling processor into the GIQ. This is really only
+ * necessary from a secondary thread as the OF start-cpu interface
+ * performs this function for us on primary threads.
+ */
+ rtas_set_indicator(GLOBAL_INTERRUPT_QUEUE,
+ (1UL << interrupt_server_size) - 1 - default_distrib_server, 1);
}
-static spinlock_t timebase_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(timebase_lock);
static unsigned long timebase = 0;
static void __devinit pSeries_give_timebase(void)
spin_unlock(&timebase_lock);
}
-static void __devinit pSeries_late_setup_cpu(int cpu)
-{
- extern unsigned int default_distrib_server;
-
- if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR) {
- vpa_init(cpu);
- }
-
-#ifdef CONFIG_IRQ_ALL_CPUS
- /* Put the calling processor into the GIQ. This is really only
- * necessary from a secondary thread as the OF start-cpu interface
- * performs this function for us on primary threads.
- */
- /* TODO: 9005 is #defined in rtas-proc.c -- move to a header */
- rtas_set_indicator(9005, default_distrib_server, 1);
-#endif
-}
-
-
-void __devinit smp_pSeries_kick_cpu(int nr)
+static void __devinit smp_pSeries_kick_cpu(int nr)
{
BUG_ON(nr < 0 || nr >= NR_CPUS);
.probe = smp_mpic_probe,
.kick_cpu = smp_pSeries_kick_cpu,
.setup_cpu = smp_mpic_setup_cpu,
- .late_setup_cpu = pSeries_late_setup_cpu,
};
static struct smp_ops_t pSeries_xics_smp_ops = {
.probe = smp_xics_probe,
.kick_cpu = smp_pSeries_kick_cpu,
.setup_cpu = smp_xics_setup_cpu,
- .late_setup_cpu = pSeries_late_setup_cpu,
};
/* This is called very early */
DBG(" -> smp_init_pSeries()\n");
- if (naca->interrupt_controller == IC_OPEN_PIC)
+ if (ppc64_interrupt_controller == IC_OPEN_PIC)
smp_ops = &pSeries_mpic_smp_ops;
else
smp_ops = &pSeries_xics_smp_ops;
+#ifdef CONFIG_HOTPLUG_CPU
+ smp_ops->cpu_disable = pSeries_cpu_disable;
+ smp_ops->cpu_die = pSeries_cpu_die;
+#endif
+
/* Start secondary threads on SMT systems; primary threads
* are already in the running state.
*/
rtas_call(rtas_token("start-cpu"), 3, 1, &ret,
get_hard_smp_processor_id(i),
__pa((u32)*((unsigned long *)
- pseries_secondary_smp_init)),
+ pSeries_secondary_smp_init)),
i);
}
}
- if (cur_cpu_spec->firmware_features & FW_FEATURE_SPLPAR)
- vpa_init(boot_cpuid);
-
/* Non-lpar has additional take/give timebase */
if (rtas_token("freeze-time-base") != RTAS_UNKNOWN_SERVICE) {
smp_ops->give_timebase = pSeries_give_timebase;
--- /dev/null
+/*
+ * Support for DMA from PCI devices to main memory on
+ * machines without an iommu or with directly addressable
+ * RAM (typically a pmac with 2Gb of RAM or less)
+ *
+ * Copyright (C) 2003 Benjamin Herrenschmidt (benh@kernel.crashing.org)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/init.h>
+#include <linux/bootmem.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/sections.h>
+#include <asm/io.h>
+#include <asm/prom.h>
+#include <asm/pci-bridge.h>
+#include <asm/machdep.h>
+#include <asm/pmac_feature.h>
+#include <asm/abs_addr.h>
+
+#include "pci.h"
+
+static void *pci_direct_alloc_consistent(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle)
+{
+ void *ret;
+
+ ret = (void *)__get_free_pages(GFP_ATOMIC, get_order(size));
+ if (ret != NULL) {
+ memset(ret, 0, size);
+ *dma_handle = virt_to_abs(ret);
+ }
+ return ret;
+}
+
+static void pci_direct_free_consistent(struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle)
+{
+ free_pages((unsigned long)vaddr, get_order(size));
+}
+
+static dma_addr_t pci_direct_map_single(struct pci_dev *hwdev, void *ptr,
+ size_t size, enum dma_data_direction direction)
+{
+ return virt_to_abs(ptr);
+}
+
+static void pci_direct_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
+ size_t size, enum dma_data_direction direction)
+{
+}
+
+static int pci_direct_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+ int nents, enum dma_data_direction direction)
+{
+ int i;
+
+ for (i = 0; i < nents; i++, sg++) {
+ sg->dma_address = page_to_phys(sg->page) + sg->offset;
+ sg->dma_length = sg->length;
+ }
+
+ return nents;
+}
+
+static void pci_direct_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+ int nents, enum dma_data_direction direction)
+{
+}
+
+void __init pci_direct_iommu_init(void)
+{
+ pci_dma_ops.pci_alloc_consistent = pci_direct_alloc_consistent;
+ pci_dma_ops.pci_free_consistent = pci_direct_free_consistent;
+ pci_dma_ops.pci_map_single = pci_direct_map_single;
+ pci_dma_ops.pci_unmap_single = pci_direct_unmap_single;
+ pci_dma_ops.pci_map_sg = pci_direct_map_sg;
+ pci_dma_ops.pci_unmap_sg = pci_direct_unmap_sg;
+}
* We use a single global lock to protect accesses. Each driver has
* to take care of its own locking
*/
-static spinlock_t feature_lock __pmacdata = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(feature_lock __pmacdata);
#define LOCK(flags) spin_lock_irqsave(&feature_lock, flags);
#define UNLOCK(flags) spin_unlock_irqrestore(&feature_lock, flags);
static u32 uninorth_rev __pmacdata;
static void *u3_ht;
-extern struct pci_dev *k2_skiplist[2];
+extern struct device_node *k2_skiplist[2];
/*
* For each motherboard family, we have a table of functions pointers
{
struct macio_chip* macio = &macio_chips[0];
unsigned long flags;
- struct pci_dev *pdev = NULL;
if (node == NULL)
return -ENODEV;
- /* XXX FIXME: We should fix pci_device_from_OF_node here, and
- * get to a real pci_dev or we'll get into trouble with PCI
- * domains the day we get overlapping numbers (like if we ever
- * decide to show the HT root.
- * Note that we only get the slot when value is 0. This is called
- * early during boot with value 1 to enable all devices, at which
- * point, we don't yet have probed pci_find_slot, so it would fail
- * to look for the slot at this point.
- */
- if (!value)
- pdev = pci_find_slot(node->busno, node->devfn);
-
LOCK(flags);
if (value) {
MACIO_BIS(KEYLARGO_FCR1, K2_FCR1_GMAC_CLK_ENABLE);
mb();
k2_skiplist[0] = NULL;
} else {
- k2_skiplist[0] = pdev;
+ k2_skiplist[0] = node;
mb();
MACIO_BIC(KEYLARGO_FCR1, K2_FCR1_GMAC_CLK_ENABLE);
}
{
struct macio_chip* macio = &macio_chips[0];
unsigned long flags;
- struct pci_dev *pdev = NULL;
-
- /* XXX FIXME: We should fix pci_device_from_OF_node here, and
- * get to a real pci_dev or we'll get into trouble with PCI
- * domains the day we get overlapping numbers (like if we ever
- * decide to show the HT root
- * Note that we only get the slot when value is 0. This is called
- * early during boot with value 1 to enable all devices, at which
- * point, we don't yet have probed pci_find_slot, so it would fail
- * to look for the slot at this point.
- */
+
if (node == NULL)
return -ENODEV;
- if (!value)
- pdev = pci_find_slot(node->busno, node->devfn);
-
LOCK(flags);
if (value) {
MACIO_BIS(KEYLARGO_FCR1, K2_FCR1_FW_CLK_ENABLE);
mb();
k2_skiplist[1] = NULL;
} else {
- k2_skiplist[1] = pdev;
+ k2_skiplist[1] = node;
mb();
MACIO_BIC(KEYLARGO_FCR1, K2_FCR1_FW_CLK_ENABLE);
}
static volatile unsigned char *nvram_data;
static int core99_bank = 0;
// XXX Turn that into a sem
-static spinlock_t nv_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nv_lock);
extern int system_running;
* assuming we won't have both UniNorth and Bandit */
static int has_uninorth;
static struct pci_controller *u3_agp;
-struct pci_dev *k2_skiplist[2];
+struct device_node *k2_skiplist[2];
static int __init fixup_one_level_bus_range(struct device_node *node, int higher)
{
struct device_node *busdn, *dn;
int i;
- /*
- * When a device in K2 is powered down, we die on config
- * cycle accesses. Fix that here.
- */
- for (i=0; i<2; i++)
- if (k2_skiplist[i] && k2_skiplist[i]->bus == bus &&
- k2_skiplist[i]->devfn == devfn)
- return 1;
-
/* We only allow config cycles to devices that are in OF device-tree
* as we are apparently having some weird things going on with some
* revs of K2 on recent G5s
if (dn == NULL)
return -1;
+ /*
+ * When a device in K2 is powered down, we die on config
+ * cycle accesses. Fix that here.
+ */
+ for (i=0; i<2; i++)
+ if (k2_skiplist[i] == dn)
+ return 1;
+
return 0;
}
for_each_pci_dev(dev)
pci_read_irq_line(dev);
-
- pci_fix_bus_sysdata();
-
- iommu_setup_u3();
}
static void __init pmac_fixup_phb_resources(void)
pmac_setup_smp();
#endif
- /* Setup the PCI DMA to "direct" by default. May be overriden
- * by iommu later on
- */
- pci_dma_init_direct();
-
/* Lookup PCI hosts */
pmac_pci_init();
}
/* Setup interrupt mapping options */
- naca->interrupt_controller = IC_OPEN_PIC;
+ ppc64_interrupt_controller = IC_OPEN_PIC;
+
+ iommu_init_early_u3();
DBG(" <- pmac_init_early\n");
}
#include <linux/slab.h>
#include <linux/kernel.h>
-#include <asm/naca.h>
-#include <asm/paca.h>
#include <asm/systemcfg.h>
#include <asm/rtas.h>
#include <asm/uaccess.h>
};
#endif
-/*
- * NOTE: since paca data is always in flux the values will never be a
- * consistant set.
- */
-static void __init proc_create_paca(struct proc_dir_entry *dir, int num)
-{
- struct proc_dir_entry *ent;
- struct paca_struct *lpaca = paca + num;
- char buf[16];
-
- sprintf(buf, "%02x", num);
- ent = create_proc_entry(buf, S_IRUSR, dir);
- if (ent) {
- ent->nlink = 1;
- ent->data = lpaca;
- ent->size = 4096;
- ent->proc_fops = &page_map_fops;
- }
-}
-
/*
* Create the ppc64 and ppc64/rtas directories early. This allows us to
* assume that they have been previously created in drivers.
static int __init proc_ppc64_init(void)
{
- unsigned long i;
struct proc_dir_entry *pde;
- pde = create_proc_entry("ppc64/naca", S_IRUSR, NULL);
- if (!pde)
- return 1;
- pde->nlink = 1;
- pde->data = naca;
- pde->size = 4096;
- pde->proc_fops = &page_map_fops;
-
pde = create_proc_entry("ppc64/systemcfg", S_IFREG|S_IRUGO, NULL);
if (!pde)
return 1;
pde->size = 4096;
pde->proc_fops = &page_map_fops;
- /* /proc/ppc64/paca/XX -- raw paca contents. Only readable to root */
- pde = proc_mkdir("ppc64/paca", NULL);
- if (!pde)
- return 1;
- for_each_cpu(i)
- proc_create_paca(pde, i);
-
#ifdef CONFIG_PPC_PSERIES
if ((systemcfg->platform & PLATFORM_PSERIES))
proc_ppc64_create_ofdt();
#include <asm/system.h>
#include <asm/mmu.h>
#include <asm/pgtable.h>
-#include <asm/naca.h>
#include <asm/pci.h>
#include <asm/iommu.h>
#include <asm/bootinfo.h>
prom_debug("TCE table: %s\n", path);
prom_debug("\tnode = 0x%x\n", node);
- prom_debug("\tbase = 0x%x\n", vbase);
+ prom_debug("\tbase = 0x%x\n", base);
prom_debug("\tsize = 0x%x\n", minsize);
/* Initialize the table to have a one-to-one mapping
static unsigned int dart_emptyval;
static struct iommu_table iommu_table_u3;
+static int iommu_table_u3_inited;
static int dart_dirty;
#define DBG(...)
unsigned int regword;
unsigned int i;
unsigned long tmp;
- struct page *p;
if (dart_tablebase == 0 || dart_tablesize == 0) {
printk(KERN_INFO "U3-DART: table not allocated, using direct DMA\n");
* that to work around what looks like a problem with the HT bridge
* prefetching into invalid pages and corrupting data
*/
- tmp = __get_free_pages(GFP_ATOMIC, 1);
- if (tmp == 0)
- panic("U3-DART: Cannot allocate spare page !");
- dart_emptyval = DARTMAP_VALID |
- ((virt_to_abs(tmp) >> PAGE_SHIFT) & DARTMAP_RPNMASK);
+ tmp = lmb_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!tmp)
+ panic("U3-DART: Cannot allocate spare page!");
+ dart_emptyval = DARTMAP_VALID | ((tmp >> PAGE_SHIFT) & DARTMAP_RPNMASK);
/* Map in DART registers. FIXME: Use device node to get base address */
dart = ioremap(DART_BASE, 0x7000);
if (dart == NULL)
- panic("U3-DART: Cannot map registers !");
+ panic("U3-DART: Cannot map registers!");
/* Set initial control register contents: table base,
* table size and enable bit
((dart_tablebase >> PAGE_SHIFT) << DARTCNTL_BASE_SHIFT) |
(((dart_tablesize >> PAGE_SHIFT) & DARTCNTL_SIZE_MASK)
<< DARTCNTL_SIZE_SHIFT);
- p = virt_to_page(dart_tablebase);
dart_vbase = ioremap(virt_to_abs(dart_tablebase), dart_tablesize);
/* Fill initial table */
/* Invalidate DART to get rid of possible stale TLBs */
dart_tlb_invalidate_all();
+ printk(KERN_INFO "U3/CPC925 DART IOMMU initialized\n");
+
+ return 0;
+}
+
+static void iommu_table_u3_setup(void)
+{
iommu_table_u3.it_busno = 0;
-
- /* Units of tce entries */
iommu_table_u3.it_offset = 0;
-
- /* Set the tce table size - measured in pages */
- iommu_table_u3.it_size = dart_tablesize >> PAGE_SHIFT;
+ /* it_size is in number of entries */
+ iommu_table_u3.it_size = dart_tablesize / sizeof(u32);
/* Initialize the common IOMMU code */
iommu_table_u3.it_base = (unsigned long)dart_vbase;
iommu_table_u3.it_index = 0;
iommu_table_u3.it_blocksize = 1;
- iommu_table_u3.it_entrysize = sizeof(u32);
iommu_init_table(&iommu_table_u3);
/* Reserve the last page of the DART to avoid possible prefetch
* past the DART mapped area
*/
- set_bit(iommu_table_u3.it_mapsize - 1, iommu_table_u3.it_map);
+ set_bit(iommu_table_u3.it_size - 1, iommu_table_u3.it_map);
+}
- printk(KERN_INFO "U3/CPC925 DART IOMMU initialized\n");
+static void iommu_dev_setup_u3(struct pci_dev *dev)
+{
+ struct device_node *dn;
- return 0;
+ /* We only have one iommu table on the mac for now, which makes
+ * things simple. Setup all PCI devices to point to this table
+ *
+ * We must use pci_device_to_OF_node() to make sure that
+ * we get the real "final" pointer to the device in the
+ * pci_dev sysdata and not the temporary PHB one
+ */
+ dn = pci_device_to_OF_node(dev);
+
+ if (dn)
+ dn->iommu_table = &iommu_table_u3;
}
-void iommu_setup_u3(void)
+static void iommu_bus_setup_u3(struct pci_bus *bus)
+{
+ struct device_node *dn;
+
+ if (!iommu_table_u3_inited) {
+ iommu_table_u3_inited = 1;
+ iommu_table_u3_setup();
+ }
+
+ dn = pci_bus_to_OF_node(bus);
+
+ if (dn)
+ dn->iommu_table = &iommu_table_u3;
+}
+
+static void iommu_dev_setup_null(struct pci_dev *dev) { }
+static void iommu_bus_setup_null(struct pci_bus *bus) { }
+
+void iommu_init_early_u3(void)
{
- struct pci_controller *phb, *tmp;
- struct pci_dev *dev = NULL;
struct device_node *dn;
/* Find the DART in the device-tree */
ppc_md.tce_flush = dart_flush;
/* Initialize the DART HW */
- if (dart_init(dn))
- return;
-
- /* Setup pci_dma ops */
- pci_iommu_init();
-
- /* We only have one iommu table on the mac for now, which makes
- * things simple. Setup all PCI devices to point to this table
- */
- for_each_pci_dev(dev) {
- /* We must use pci_device_to_OF_node() to make sure that
- * we get the real "final" pointer to the device in the
- * pci_dev sysdata and not the temporary PHB one
- */
- struct device_node *dn = pci_device_to_OF_node(dev);
- if (dn)
- dn->iommu_table = &iommu_table_u3;
- }
- /* We also make sure we set all PHBs ... */
- list_for_each_entry_safe(phb, tmp, &hose_list, list_node) {
- dn = (struct device_node *)phb->arch_data;
- dn->iommu_table = &iommu_table_u3;
+ if (dart_init(dn)) {
+ /* If init failed, use direct iommu and null setup functions */
+ ppc_md.iommu_dev_setup = iommu_dev_setup_null;
+ ppc_md.iommu_bus_setup = iommu_bus_setup_null;
+
+ /* Setup pci_dma ops */
+ pci_direct_iommu_init();
+ } else {
+ ppc_md.iommu_dev_setup = iommu_dev_setup_u3;
+ ppc_md.iommu_bus_setup = iommu_bus_setup_u3;
+
+ /* Setup pci_dma ops */
+ pci_iommu_init();
}
}
+
void __init alloc_u3_dart_table(void)
{
/* Only reserve DART space if machine has more than 2GB of RAM
holder_cpu = lock_value & 0xffff;
BUG_ON(holder_cpu >= NR_CPUS);
holder_paca = &paca[holder_cpu];
- yield_count = holder_paca->lppaca.xYieldCount;
+ yield_count = holder_paca->lppaca.yield_count;
if ((yield_count & 1) == 0)
return; /* virtual cpu is currently running */
rmb();
holder_cpu = lock_value & 0xffff;
BUG_ON(holder_cpu >= NR_CPUS);
holder_paca = &paca[holder_cpu];
- yield_count = holder_paca->lppaca.xYieldCount;
+ yield_count = holder_paca->lppaca.yield_count;
if ((yield_count & 1) == 0)
return; /* virtual cpu is currently running */
rmb();
#include <asm/sstep.h>
#include <asm/processor.h>
-extern char SystemCall_common[];
+extern char system_call_common[];
/* Bits in SRR1 that are copied from MSR */
#define MSR_MASK 0xffffffff87c0ffff
regs->gpr[11] = regs->nip + 4;
regs->gpr[12] = regs->msr & MSR_MASK;
regs->gpr[13] = (unsigned long) get_paca();
- regs->nip = (unsigned long) &SystemCall_common;
+ regs->nip = (unsigned long) &system_call_common;
regs->msr = MSR_KERNEL;
return 1;
case 18: /* b */
std r3,STK_PARM(r4)(r1)
/* Get htab_hash_mask */
- ld r4,htab_data@got(2)
- ld r27,16(r4) /* htab_data.htab_hash_mask -> r27 */
+ ld r4,htab_hash_mask@got(2)
+ ld r27,0(r4) /* htab_hash_mask -> r27 */
/* Check if we may already be in the hashtable, in this case, we
* go to out-of-line code to try to modify the HPTE
#define HPTE_LOCK_BIT 3
-static spinlock_t native_tlbie_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(native_tlbie_lock);
static inline void native_lock_hpte(HPTE *hptep)
{
unsigned long hpteflags, int bolted, int large)
{
unsigned long arpn = physRpn_to_absRpn(prpn);
- HPTE *hptep = htab_data.htab + hpte_group;
+ HPTE *hptep = htab_address + hpte_group;
Hpte_dword0 dw0;
HPTE lhpte;
int i;
slot_offset = mftb() & 0x7;
for (i = 0; i < HPTES_PER_GROUP; i++) {
- hptep = htab_data.htab + hpte_group + slot_offset;
+ hptep = htab_address + hpte_group + slot_offset;
dw0 = hptep->dw0.dw0;
if (dw0.v && !dw0.bolted) {
hash = hpt_hash(vpn, 0);
for (j = 0; j < 2; j++) {
- slot = (hash & htab_data.htab_hash_mask) * HPTES_PER_GROUP;
+ slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
for (i = 0; i < HPTES_PER_GROUP; i++) {
- hptep = htab_data.htab + slot;
+ hptep = htab_address + slot;
dw0 = hptep->dw0.dw0;
if ((dw0.avpn == (vpn >> 11)) && dw0.v &&
static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
unsigned long va, int large, int local)
{
- HPTE *hptep = htab_data.htab + slot;
+ HPTE *hptep = htab_address + slot;
Hpte_dword0 dw0;
unsigned long avpn = va >> 23;
int ret = 0;
*/
static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea)
{
- unsigned long vsid, va, vpn, flags;
+ unsigned long vsid, va, vpn, flags = 0;
long slot;
HPTE *hptep;
int lock_tlbie = !(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE);
slot = native_hpte_find(vpn);
if (slot == -1)
panic("could not find page to bolt\n");
- hptep = htab_data.htab + slot;
+ hptep = htab_address + slot;
set_pp_bit(newpp, hptep);
static void native_hpte_invalidate(unsigned long slot, unsigned long va,
int large, int local)
{
- HPTE *hptep = htab_data.htab + slot;
+ HPTE *hptep = htab_address + slot;
Hpte_dword0 dw0;
unsigned long avpn = va >> 23;
unsigned long flags;
secondary = (pte_val(batch->pte[i]) & _PAGE_SECONDARY) >> 15;
if (secondary)
hash = ~hash;
- slot = (hash & htab_data.htab_hash_mask) * HPTES_PER_GROUP;
+ slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
slot += (pte_val(batch->pte[i]) & _PAGE_GROUP_IX) >> 12;
- hptep = htab_data.htab + slot;
+ hptep = htab_address + slot;
avpn = va >> 23;
if (large)
#include <asm/mmu.h>
#include <asm/mmu_context.h>
#include <asm/paca.h>
-#include <asm/naca.h>
#include <asm/cputable.h>
extern void slb_allocate(unsigned long ea);
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
unsigned long offset = get_paca()->slb_cache_ptr;
- unsigned long esid_data;
+ unsigned long esid_data = 0;
unsigned long pc = KSTK_EIP(tsk);
unsigned long stack = KSTK_ESP(tsk);
unsigned long unmapped_base;
}
/* Workaround POWER5 < DD2.1 issue */
- if (offset == 1 || offset > SLB_CACHE_ENTRIES) {
- /* flush segment in EEH region, we shouldn't ever
- * access addresses in this region. */
- asm volatile("slbie %0" : : "r"(EEHREGIONBASE));
- }
+ if (offset == 1 || offset > SLB_CACHE_ENTRIES)
+ asm volatile("slbie %0" : : "r" (esid_data));
get_paca()->slb_cache_ptr = 0;
get_paca()->context = mm->context;
#include <asm/mmu.h>
#include <asm/mmu_context.h>
#include <asm/paca.h>
-#include <asm/naca.h>
#include <asm/cputable.h>
/* Both the segment table and SLB code uses the following cache */
#define OP_MAX_COUNTER 8
-#define MSR_PMM (1UL << (63 - 61))
-
-/* freeze counters. set to 1 on a perfmon exception */
-#define MMCR0_FC (1UL << (31 - 0))
-
-/* freeze in supervisor state */
-#define MMCR0_KERNEL_DISABLE (1UL << (31 - 1))
-
-/* freeze in problem state */
-#define MMCR0_PROBLEM_DISABLE (1UL << (31 - 2))
-
-/* freeze counters while MSR mark = 1 */
-#define MMCR0_FCM1 (1UL << (31 - 3))
-
-/* performance monitor exception enable */
-#define MMCR0_PMXE (1UL << (31 - 5))
-
-/* freeze counters on enabled condition or event */
-#define MMCR0_FCECE (1UL << (31 - 6))
-
-/* PMC1 count enable*/
-#define MMCR0_PMC1INTCONTROL (1UL << (31 - 16))
-
-/* PMCn count enable*/
-#define MMCR0_PMCNINTCONTROL (1UL << (31 - 17))
-
-/* performance monitor alert has occurred, set to 0 after handling exception */
-#define MMCR0_PMAO (1UL << (31 - 24))
-
-/* state of MSR HV when SIAR set */
-#define MMCRA_SIHV (1UL << (63 - 35))
-
-/* state of MSR PR when SIAR set */
-#define MMCRA_SIPR (1UL << (63 - 36))
-
-/* enable sampling */
-#define MMCRA_SAMPLE_ENABLE (1UL << (63 - 63))
-
/* Per-counter configuration as set via oprofilefs. */
struct op_counter_config {
unsigned long valid;
mmcr0 |= MMCR0_FCM1|MMCR0_PMXE|MMCR0_FCECE;
/* Only applies to POWER3, but should be safe on RS64 */
- mmcr0 |= MMCR0_PMC1INTCONTROL|MMCR0_PMCNINTCONTROL;
+ mmcr0 |= MMCR0_PMC1CE|MMCR0_PMCjCE;
mtspr(SPRN_MMCR0, mmcr0);
dbg("setup on cpu %d, mmcr0 %lx\n", smp_processor_id(),
int i;
unsigned long pc = mfspr(SPRN_SIAR);
int is_kernel = (pc >= KERNELBASE);
- unsigned int cpu = smp_processor_id();
/* set the PMM bit (see comment below) */
mtmsrd(mfmsr() | MSR_PMM);
val = ctr_read(i);
if (val < 0) {
if (ctr[i].enabled) {
- oprofile_add_sample(pc, is_kernel, i, cpu);
+ oprofile_add_pc(pc, is_kernel, i);
ctr_write(i, reset_value[i]);
} else {
ctr_write(i, 0);
--- /dev/null
+/*
+ * Copyright (C) 1996 Paul Mackerras.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * NOTE: assert(sizeof(buf) > 184)
+ */
+#include <asm/processor.h>
+#include <asm/ppc_asm.h>
+
+_GLOBAL(xmon_setjmp)
+ mflr r0
+ std r0,0(r3)
+ std r1,8(r3)
+ std r2,16(r3)
+ mfcr r0
+ std r0,24(r3)
+ std r13,32(r3)
+ std r14,40(r3)
+ std r15,48(r3)
+ std r16,56(r3)
+ std r17,64(r3)
+ std r18,72(r3)
+ std r19,80(r3)
+ std r20,88(r3)
+ std r21,96(r3)
+ std r22,104(r3)
+ std r23,112(r3)
+ std r24,120(r3)
+ std r25,128(r3)
+ std r26,136(r3)
+ std r27,144(r3)
+ std r28,152(r3)
+ std r29,160(r3)
+ std r30,168(r3)
+ std r31,176(r3)
+ li r3,0
+ blr
+
+_GLOBAL(xmon_longjmp)
+ cmpdi r4,0
+ bne 1f
+ li r4,1
+1: ld r13,32(r3)
+ ld r14,40(r3)
+ ld r15,48(r3)
+ ld r16,56(r3)
+ ld r17,64(r3)
+ ld r18,72(r3)
+ ld r19,80(r3)
+ ld r20,88(r3)
+ ld r21,96(r3)
+ ld r22,104(r3)
+ ld r23,112(r3)
+ ld r24,120(r3)
+ ld r25,128(r3)
+ ld r26,136(r3)
+ ld r27,144(r3)
+ ld r28,152(r3)
+ ld r29,160(r3)
+ ld r30,168(r3)
+ ld r31,176(r3)
+ ld r0,24(r3)
+ mtcrf 56,r0
+ ld r0,0(r3)
+ ld r1,8(r3)
+ ld r2,16(r3)
+ mtlr r0
+ mr r3,r4
+ blr
static int __init setup_xmon_sysrq(void)
{
- __sysrq_put_key_op('x', &sysrq_xmon_op);
+ register_sysrq_key('x', &sysrq_xmon_op);
return 0;
}
__initcall(setup_xmon_sysrq);
__u32 sival_ptr;
} sigval_t32;
-typedef struct siginfo32 {
+typedef struct compat_siginfo {
int si_signo;
int si_errno;
int si_code;
/* POSIX.1b timers */
struct {
- unsigned int _timer1;
- unsigned int _timer2;
-
+ timer_t _tid; /* timer id */
+ int _overrun; /* overrun count */
+ sigval_t _sigval; /* same as below */
+ int _sys_private; /* not to be passed to user */
} _timer;
/* POSIX.1b signals */
int _fd;
} _sigpoll;
} _sifields;
-} siginfo_t32;
+} compat_siginfo_t;
/*
* How these fields are to be accessed.
#define si_addr _sifields._sigfault._addr
#define si_band _sifields._sigpoll._band
#define si_fd _sifields._sigpoll._fd
+#define si_tid _sifields._timer._tid
+#define si_overrun _sifields._timer._overrun
/* asm/sigcontext.h */
typedef union
} _sigev_un;
};
-extern int copy_siginfo_to_user32(siginfo_t32 __user *to, siginfo_t *from);
-extern int copy_siginfo_from_user32(siginfo_t *to, siginfo_t32 __user *from);
-
#endif /* _ASM_S390X_S390_H */
* S390 version
* Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
+ * Christian Borntraeger (cborntra@de.ibm.com),
*/
-#include <linux/stddef.h>
#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/stddef.h>
#include <linux/string.h>
#include <asm/ebcdic.h>
-#include <linux/spinlock.h>
#include <asm/cpcmd.h>
#include <asm/system.h>
-static spinlock_t cpcmd_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cpcmd_lock);
static char cpcmd_buf[240];
-void cpcmd(char *cmd, char *response, int rlen)
+/*
+ * the caller of __cpcmd has to ensure that the response buffer is below 2 GB
+ */
+void __cpcmd(char *cmd, char *response, int rlen)
{
- const int mask = 0x40000000L;
+ const int mask = 0x40000000L;
unsigned long flags;
- int cmdlen;
+ int cmdlen;
spin_lock_irqsave(&cpcmd_lock, flags);
cmdlen = strlen(cmd);
- BUG_ON(cmdlen>240);
+ BUG_ON(cmdlen > 240);
strcpy(cpcmd_buf, cmd);
ASCEBC(cpcmd_buf, cmdlen);
if (response != NULL && rlen > 0) {
+ memset(response, 0, rlen);
#ifndef CONFIG_ARCH_S390X
- asm volatile ("LRA 2,0(%0)\n\t"
+ asm volatile ("LRA 2,0(%0)\n\t"
"LR 4,%1\n\t"
"O 4,%4\n\t"
"LRA 3,0(%2)\n\t"
spin_unlock_irqrestore(&cpcmd_lock, flags);
}
+EXPORT_SYMBOL(__cpcmd);
+
+#ifdef CONFIG_ARCH_S390X
+void cpcmd(char *cmd, char *response, int rlen)
+{
+ char *lowbuf;
+ if ((rlen == 0) || (response == NULL)
+ || !((unsigned long)response >> 31))
+ __cpcmd(cmd, response, rlen);
+ else {
+ lowbuf = kmalloc(rlen, GFP_KERNEL | GFP_DMA);
+ if (!lowbuf) {
+ printk(KERN_WARNING
+ "cpcmd: could not allocate response buffer\n");
+ return;
+ }
+ __cpcmd(cmd, lowbuf, rlen);
+ memcpy(response, lowbuf, rlen);
+ kfree(lowbuf);
+ }
+}
+
+EXPORT_SYMBOL(cpcmd);
+#endif /* CONFIG_ARCH_S390X */
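
A minimal usage sketch for the interface above (caller, buffer size and CP command are hypothetical): cpcmd() hands a command to the z/VM control program and, when a response buffer is supplied, copies back at most rlen bytes, bouncing through a GFP_DMA buffer on 64-bit when the caller's buffer lies above 2 GB.

/* Hypothetical caller sketch -- not part of the patch above. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
#include <asm/cpcmd.h>

static char cp_reply[80] __initdata;

static int __init cpcmd_demo_init(void)
{
	/* __cpcmd() zero-fills the response first, so leaving one spare
	 * byte keeps the string NUL-terminated for printing. */
	cpcmd("QUERY USERID", cp_reply, sizeof(cp_reply) - 1);
	printk(KERN_INFO "cpcmd demo: %s\n", cp_reply);
	return 0;
}
module_init(cpcmd_demo_init);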
local_irq_save(flags);
+ account_system_vtime(current);
+
+ local_bh_disable();
+
if (local_softirq_pending()) {
/* Get current stack pointer. */
asm volatile("la %0,0(15)" : "=a" (old));
__do_softirq();
}
+ account_system_vtime(current);
+
+ __local_bh_enable();
+
local_irq_restore(flags);
}
#include <linux/types.h>
#include <linux/timex.h>
#include <linux/notifier.h>
+#include <linux/kernel_stat.h>
+#include <linux/rcupdate.h>
#include <asm/s390_ext.h>
#include <asm/timer.h>
static ext_int_info_t ext_int_info_timer;
DEFINE_PER_CPU(struct vtimer_queue, virt_cpu_timer);
-void start_cpu_timer(void)
+#ifdef CONFIG_VIRT_CPU_ACCOUNTING
+/*
+ * Update process times based on virtual cpu times stored by entry.S
+ * to the lowcore fields user_timer, system_timer & steal_clock.
+ */
+void account_user_vtime(struct task_struct *tsk)
+{
+ cputime_t cputime;
+ __u64 timer, clock;
+ int rcu_user_flag;
+
+ timer = S390_lowcore.last_update_timer;
+ clock = S390_lowcore.last_update_clock;
+ asm volatile (" STPT %0\n" /* Store current cpu timer value */
+ " STCK %1" /* Store current tod clock value */
+ : "=m" (S390_lowcore.last_update_timer),
+ "=m" (S390_lowcore.last_update_clock) );
+ S390_lowcore.system_timer += timer - S390_lowcore.last_update_timer;
+ S390_lowcore.steal_clock += S390_lowcore.last_update_clock - clock;
+
+ cputime = S390_lowcore.user_timer >> 12;
+ rcu_user_flag = cputime != 0;
+ S390_lowcore.user_timer -= cputime << 12;
+ S390_lowcore.steal_clock -= cputime << 12;
+ account_user_time(tsk, cputime);
+
+ cputime = S390_lowcore.system_timer >> 12;
+ S390_lowcore.system_timer -= cputime << 12;
+ S390_lowcore.steal_clock -= cputime << 12;
+ account_system_time(tsk, HARDIRQ_OFFSET, cputime);
+
+ cputime = S390_lowcore.steal_clock;
+ if ((__s64) cputime > 0) {
+ cputime >>= 12;
+ S390_lowcore.steal_clock -= cputime << 12;
+ account_steal_time(tsk, cputime);
+ }
+
+ run_local_timers();
+ if (rcu_pending(smp_processor_id()))
+ rcu_check_callbacks(smp_processor_id(), rcu_user_flag);
+ scheduler_tick();
+}
+
+/*
+ * Update process times based on virtual cpu times stored by entry.S
+ * to the lowcore fields user_timer, system_timer & steal_clock.
+ */
+void account_system_vtime(struct task_struct *tsk)
+{
+ cputime_t cputime;
+ __u64 timer;
+
+ timer = S390_lowcore.last_update_timer;
+ asm volatile (" STPT %0" /* Store current cpu timer value */
+ : "=m" (S390_lowcore.last_update_timer) );
+ S390_lowcore.system_timer += timer - S390_lowcore.last_update_timer;
+
+ cputime = S390_lowcore.system_timer >> 12;
+ S390_lowcore.system_timer -= cputime << 12;
+ S390_lowcore.steal_clock -= cputime << 12;
+ account_system_time(tsk, 0, cputime);
+}
+
+static inline void set_vtimer(__u64 expires)
+{
+ __u64 timer;
+
+ asm volatile (" STPT %0\n" /* Store current cpu timer value */
+		       " SPT %1" /* Set new value immediately afterwards */
+ : "=m" (timer) : "m" (expires) );
+ S390_lowcore.system_timer += S390_lowcore.last_update_timer - timer;
+ S390_lowcore.last_update_timer = expires;
+
+ /* store expire time for this CPU timer */
+ per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
+}
+#else
+static inline void set_vtimer(__u64 expires)
+{
+ S390_lowcore.last_update_timer = expires;
+ asm volatile ("SPT %0" : : "m" (S390_lowcore.last_update_timer));
+
+ /* store expire time for this CPU timer */
+ per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
+}
+#endif
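
The shifts by 12 above rely on the S/390 CPU-timer/TOD format, where bit 51 is the microsecond bit: dropping the low 12 bits converts a timer delta to whole microseconds, and the `timer -= cputime << 12` updates carry the sub-microsecond remainder forward. A tiny stand-alone illustration (values hypothetical):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* hypothetical accumulated user time in CPU-timer units:
	 * bit 51 = 1 microsecond, so the low 12 bits are fractional */
	uint64_t user_timer = (2500ULL << 12) + 123;	/* 2500 us plus a fraction */

	uint64_t cputime = user_timer >> 12;		/* whole microseconds accounted now */
	user_timer -= cputime << 12;			/* fraction carried to the next update */

	printf("accounted %llu us, %llu/4096 us carried over\n",
	       (unsigned long long)cputime, (unsigned long long)user_timer);
	return 0;
}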
+
+static void start_cpu_timer(void)
{
struct vtimer_queue *vt_list;
set_vtimer(vt_list->idle);
}
-void stop_cpu_timer(void)
+static void stop_cpu_timer(void)
{
__u64 done;
struct vtimer_queue *vt_list;
set_vtimer(VTIMER_MAX_SLICE);
}
-void set_vtimer(__u64 expires)
-{
- asm volatile ("SPT %0" : : "m" (expires));
-
- /* store expire time for this CPU timer */
- per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
-}
-
/*
* Sorted add to a list. List is linear searched until first bigger
* element is found.
*/
-void list_add_sorted(struct vtimer_list *timer, struct list_head *head)
+static void list_add_sorted(struct vtimer_list *timer, struct list_head *head)
{
struct vtimer_list *event;
{
struct vtimer_queue *vt_list;
unsigned long cr0;
- __u64 timer;
/* kick the virtual timer */
- timer = VTIMER_MAX_SLICE;
- asm volatile ("SPT %0" : : "m" (timer));
+ S390_lowcore.exit_timer = VTIMER_MAX_SLICE;
+ S390_lowcore.last_update_timer = VTIMER_MAX_SLICE;
+ asm volatile ("SPT %0" : : "m" (S390_lowcore.last_update_timer));
+ asm volatile ("STCK %0" : "=m" (S390_lowcore.last_update_clock));
__ctl_store(cr0, 0, 0);
cr0 |= 0x400;
__ctl_load(cr0, 0, 0);
}
EXPORT_SYMBOL(memcpy);
-/**
- * bcopy - Copy one area of memory to another
- * @src: Where to copy from
- * @dest: Where to copy to
- * @n: The size of the area.
- *
- * Note that this is the same as memcpy(), with the arguments reversed.
- * memcpy() is the standard, bcopy() is a legacy BSD function.
- */
-void bcopy(const void *srcp, void *destp, size_t n)
-{
- __builtin_memcpy(destp, srcp, n);
-}
-EXPORT_SYMBOL(bcopy);
-
/**
* memset - Fill a region of memory with the given value
* @s: Pointer to the start of the area.
int segcnt;
};
-static spinlock_t dcss_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(dcss_lock);
static struct list_head dcss_list = LIST_HEAD_INIT(dcss_list);
static char *segtype_string[] = { "SW", "EW", "SR", "ER", "SN", "EN", "SC",
"EW/EN-MIXED" };
struct list_head *l;
struct dcss_segment *tmp, *retval = NULL;
- BUG_ON (!spin_is_locked(&dcss_lock));
+ assert_spin_locked(&dcss_lock);
dcss_mkname (name, dcss_name);
list_for_each (l, &dcss_list) {
tmp = list_entry (l, struct dcss_segment, list);
struct list_head *l;
struct dcss_segment *tmp;
- BUG_ON (!spin_is_locked(&dcss_lock));
+ assert_spin_locked(&dcss_lock);
list_for_each(l, &dcss_list) {
tmp = list_entry(l, struct dcss_segment, list);
if ((tmp->start_addr >> 20) > (seg->end >> 20))
/*
* save segment content permanently
*/
-void segment_save(char *name)
+void
+segment_save(char *name)
{
struct dcss_segment *seg;
int startpfn = 0;
segtype_string[seg->range[i].start & 0xff]);
}
sprintf(cmd2, "SAVESEG %s", name);
- cpcmd(cmd1, NULL, 80);
- cpcmd(cmd2, NULL, 80);
+ cpcmd(cmd1, NULL, 0);
+ cpcmd(cmd2, NULL, 0);
spin_unlock(&dcss_lock);
}
#include <linux/init.h>
#include <linux/errno.h>
-//extern int irq_init(struct oprofile_operations** ops);
-extern void timer_init(struct oprofile_operations** ops);
-
-int __init oprofile_arch_init(struct oprofile_operations** ops)
+int __init oprofile_arch_init(struct oprofile_operations* ops)
{
- timer_init(ops);
- return 0;
+ return -ENODEV;
}
void oprofile_arch_exit(void)
if (PXSEG(port))
*(volatile unsigned char *)port = value;
-#if defined(CONFIG_HS7751RVOIP_CIDEC)
+#if defined(CONFIG_HS7751RVOIP_CODEC)
else if (codec_port(port))
		*(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE)) = value;
#endif
{
if (PXSEG(port))
*(volatile unsigned char *)port = value;
-#if defined(CONFIG_HS7751RVOIP_CIDEC)
+#if defined(CONFIG_HS7751RVOIP_CODEC)
else if (codec_port(port))
		*(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE)) = value;
#endif
help
Find out whether you have a PCI motherboard. PCI is the name of a
bus system, i.e. the way the CPU talks to the other stuff inside
- your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or
- VESA. If you have PCI, say Y, otherwise N.
+ your box. If you have PCI, say Y, otherwise N.
The PCI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>, contains valuable
return;
}
- if (tsk->used_math) {
+ if (used_math()) {
/* Using the FPU again. */
restore_fpu(tsk);
} else {
/* First time FPU user. */
fpu_init();
- tsk->used_math = 1;
+ set_used_math();
}
set_tsk_thread_flag(tsk, TIF_USEDFPU);
}
#include <linux/tty.h>
#include <linux/personality.h>
#include <linux/binfmts.h>
-#include <linux/suspend.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
if (!(cpu_data->flags & CPU_HAS_FPU))
return 0;
- tsk->used_math = 1;
+ set_used_math();
return __copy_from_user(&tsk->thread.fpu.hard, &sc->sc_fpregs[0],
sizeof(long)*(16*2+2));
}
if (!(cpu_data->flags & CPU_HAS_FPU))
return 0;
- if (!tsk->used_math) {
+ if (!used_math()) {
__put_user(0, &sc->sc_ownedfp);
return 0;
}
/* This will cause a "finit" to be triggered by the next
attempted FPU operation by the 'current' process.
*/
- tsk->used_math = 0;
+ clear_used_math();
unlazy_fpu(tsk, regs);
return __copy_to_user(&sc->sc_fpregs[0], &tsk->thread.fpu.hard,
regs->sr |= SR_FD; /* Release FPU */
clear_fpu(tsk, regs);
- tsk->used_math = 0;
+ clear_used_math();
__get_user (owned_fp, &sc->sc_ownedfp);
if (owned_fp)
err |= restore_sigcontext_fpu(sc);
if (!user_mode(regs))
return 1;
- if (current->flags & PF_FREEZE) {
- refrigerator(0);
+ if (try_to_freeze(0))
goto no_signal;
- }
if (!oldset)
		oldset = &current->blocked;
#include <linux/init.h>
#include <linux/errno.h>
-int __init oprofile_arch_init(struct oprofile_operations **ops)
+int __init oprofile_arch_init(struct oprofile_operations *ops)
{
return -ENODEV;
}
*/
static int sh7750_timer_notify(struct notifier_block *self,
- unsigned long val, void *data)
+ unsigned long val, void *regs)
{
- struct pt_regs *regs = data;
- unsigned long pc;
-
- pc = instruction_pointer(regs);
- oprofile_add_sample(pc, !user_mode(regs), 0, smp_processor_id());
-
+ oprofile_add_sample((struct pt_regs *)regs, 0);
return 0;
}
bool
default y
+config GENERIC_CALIBRATE_DELAY
+ bool
+ default y
+
config LOG_BUF_SHIFT
int
default 14
}
}
-void cpu_idle(void *unused)
+void cpu_idle(void)
{
default_idle();
}
last_task_used_math = NULL;
}
/* Force FPU state to be reinitialised after exec */
- current->used_math = 0;
+ clear_used_math();
#endif
/* if we are a kernel thread, about to change to user thread,
int fpvalid;
struct task_struct *tsk = current;
- fpvalid = tsk->used_math;
+ fpvalid = !!tsk_used_math(tsk);
if (fpvalid) {
if (current == last_task_used_math) {
grab_fpu();
struct pt_regs *regs;
regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
- if (!task->used_math) {
+ if (!tsk_used_math(task)) {
if (addr == offsetof(struct user_fpu_struct, fpscr)) {
tmp = FPSCR_INIT;
} else {
regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
- if (!task->used_math) {
+ if (!tsk_used_math(task)) {
fpinit(&task->thread.fpu.hard);
- task->used_math = 1;
+ set_stopped_child_used_math(task);
} else if (last_task_used_math == task) {
grab_fpu();
fpsave(&task->thread.fpu.hard);
(addr < offsetof(struct user, u_fpvalid))) {
tmp = get_fpu_long(child, addr - offsetof(struct user, fpu));
} else if (addr == offsetof(struct user, u_fpvalid)) {
- tmp = child->used_math;
+ tmp = !!tsk_used_math(child);
} else {
break;
}
int fpvalid;
err |= __get_user (fpvalid, &sc->sc_fpvalid);
- current->used_math = fpvalid;
+ conditional_used_math(fpvalid);
if (! fpvalid)
return err;
int err = 0;
int fpvalid;
- fpvalid = current->used_math;
+ fpvalid = !!used_math();
err |= __put_user(fpvalid, &sc->sc_fpvalid);
if (! fpvalid)
return err;
	err |= __copy_to_user(&sc->sc_fpregs[0], &current->thread.fpu.hard,
(sizeof(long long) * 32) + (sizeof(int) * 1));
- current->used_math = 0;
+ clear_used_math();
return err;
}
if (!user_mode(regs))
return 1;
- if (current->flags & PF_FREEZE) {
- refrigerator(0);
+ if (try_to_freeze(0))
goto no_signal;
- }
if (!oldset)
		oldset = &current->blocked;
/* Copy while checksumming, otherwise like csum_partial. */
unsigned int
-csum_partial_copy(const char *src, char *dst, int len, unsigned int sum)
+csum_partial_copy(const unsigned char *src, unsigned char *dst, int len, unsigned int sum)
{
sum = csum_partial(src, len, sum);
memcpy(dst, src, len);
/* Copy from userspace and compute checksum. If we catch an exception
then zero the rest of the buffer. */
unsigned int
-csum_partial_copy_from_user(const char *src, char *dst, int len,
+csum_partial_copy_from_user(const unsigned char *src, unsigned char *dst, int len,
unsigned int sum, int *err_ptr)
{
int missing;
/* Copy to userspace and compute checksum. */
unsigned int
-csum_partial_copy_to_user(const char *src, char *dst, int len,
+csum_partial_copy_to_user(const unsigned char *src, unsigned char *dst, int len,
unsigned int sum, int *err_ptr)
{
sum = csum_partial(src, len, sum);
// Post SIM:
unsigned int
-csum_partial_copy_nocheck(const char *src, char *dst, int len, unsigned int sum)
+csum_partial_copy_nocheck(const unsigned char *src, unsigned char *dst, int len, unsigned int sum)
{
// unsigned dummy;
pr_debug("csum_partial_copy_nocheck src %p dst %p len %d\n", src, dst,
* in entry.S::floppy_tdone
*/
void __iomem *auxio_register = NULL;
-static spinlock_t auxio_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(auxio_lock);
void __init auxio_probe(void)
{
#endif
}
}
- prom_getproperty(auxio_nd, "reg", (char *) auxregs, sizeof(auxregs));
+ if(prom_getproperty(auxio_nd, "reg", (char *) auxregs, sizeof(auxregs)) <= 0)
+ return;
prom_apply_obio_ranges(auxregs, 0x1);
/* Map the register both read and write */
r.flags = auxregs[0].which_io & 0xF;
return;
/* Map the power control register. */
- prom_getproperty(node, "reg", (char *)®s, sizeof(regs));
+ if (prom_getproperty(node, "reg", (char *)®s, sizeof(regs)) <= 0)
+ return;
prom_apply_obio_ranges(®s, 1);
memset(&r, 0, sizeof(r));
r.flags = regs.which_io & 0xF;
for (i = 0; i < NUM_SUN_MACHINES; i++) {
if(Sun_Machines[i].id_machtype == machtype) {
- if (machtype != (SM_SUN4M_OBP | 0x00))
+ if (machtype != (SM_SUN4M_OBP | 0x00) ||
+ prom_getproperty(prom_root_node, "banner-name",
+ sysname, sizeof(sysname)) <= 0)
printk("TYPE: %s\n", Sun_Machines[i].name);
- else {
- prom_getproperty(prom_root_node, "banner-name",
- sysname, sizeof(sysname));
+ else
printk("TYPE: %s\n", sysname);
- }
return;
}
}
static int pcic0_up;
static struct linux_pcic pcic0;
-unsigned int pcic_regs;
+void * __iomem pcic_regs;
volatile int pcic_speculative;
volatile int pcic_trapped;
pcic0_up = 1;
pcic->pcic_res_regs.name = "pcic_registers";
- pcic->pcic_regs = (unsigned long)
- ioremap(regs[0].phys_addr, regs[0].reg_size);
+ pcic->pcic_regs = ioremap(regs[0].phys_addr, regs[0].reg_size);
if (!pcic->pcic_regs) {
prom_printf("PCIC: Error, cannot map PCIC registers.\n");
prom_halt();
}
pcic->pcic_res_cfg_addr.name = "pcic_cfg_addr";
- if ((pcic->pcic_config_space_addr = (unsigned long)
+ if ((pcic->pcic_config_space_addr =
ioremap(regs[2].phys_addr, regs[2].reg_size * 2)) == 0) {
prom_printf("PCIC: Error, cannot map"
"PCI Configuration Space Address.\n");
	 * must be the same. Thus, we need to adjust the size of the data.
*/
pcic->pcic_res_cfg_data.name = "pcic_cfg_data";
- if ((pcic->pcic_config_space_data = (unsigned long)
+ if ((pcic->pcic_config_space_data =
ioremap(regs[3].phys_addr, regs[3].reg_size * 2)) == 0) {
prom_printf("PCIC: Error, cannot map"
"PCI Configuration Space Data.\n");
 * We do not use horrible macros here because we want to
* advance pointer by sizeof(size).
*/
-void outsb(unsigned long addr, const void *src, unsigned long count) {
+void outsb(unsigned long addr, const void *src, unsigned long count)
+{
while (count) {
count -= 1;
- writeb(*(const char *)src, addr);
+ outb(*(const char *)src, addr);
src += 1;
- addr += 1;
+ /* addr += 1; */
}
}
-void outsw(unsigned long addr, const void *src, unsigned long count) {
+void outsw(unsigned long addr, const void *src, unsigned long count)
+{
while (count) {
count -= 2;
- writew(*(const short *)src, addr);
+ outw(*(const short *)src, addr);
src += 2;
- addr += 2;
+ /* addr += 2; */
}
}
-void outsl(unsigned long addr, const void *src, unsigned long count) {
+void outsl(unsigned long addr, const void *src, unsigned long count)
+{
while (count) {
count -= 4;
- writel(*(const long *)src, addr);
+ outl(*(const long *)src, addr);
src += 4;
- addr += 4;
+ /* addr += 4; */
}
}
-void insb(unsigned long addr, void *dst, unsigned long count) {
+void insb(unsigned long addr, void *dst, unsigned long count)
+{
while (count) {
count -= 1;
- *(unsigned char *)dst = readb(addr);
+ *(unsigned char *)dst = inb(addr);
dst += 1;
- addr += 1;
+ /* addr += 1; */
}
}
-void insw(unsigned long addr, void *dst, unsigned long count) {
+void insw(unsigned long addr, void *dst, unsigned long count)
+{
while (count) {
count -= 2;
- *(unsigned short *)dst = readw(addr);
+ *(unsigned short *)dst = inw(addr);
dst += 2;
- addr += 2;
+ /* addr += 2; */
}
}
-void insl(unsigned long addr, void *dst, unsigned long count) {
+void insl(unsigned long addr, void *dst, unsigned long count)
+{
while (count) {
count -= 4;
/*
* XXX I am sure we are in for an unaligned trap here.
*/
- *(unsigned long *)dst = readl(addr);
+ *(unsigned long *)dst = inl(addr);
dst += 4;
- addr += 4;
+ /* addr += 4; */
}
}
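
The rewritten helpers above switch from the MMIO accessors (readb/writeb) to
the port accessors (inb/outb) and no longer advance the address, because
string I/O repeatedly targets a single port while only the memory buffer
moves. A minimal usage sketch (illustration only; 0x1f0 is a hypothetical
port number, not taken from this patch):

	unsigned char buf[16];

	insb(0x1f0, buf, sizeof(buf));	/* 16 inb() reads of port 0x1f0 into buf[] */
	outsb(0x1f0, buf, sizeof(buf));	/* 16 outb() writes of buf[] to port 0x1f0 */
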
wake_up(&sem->wait);
}
-static spinlock_t semaphore_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(semaphore_lock);
void __sched __down(struct semaphore * sem)
{
/* Fix idle thread fields. */
__asm__ __volatile__("ld [%0], %%g6\n\t"
- "sta %%g6, [%%g0] %1\n\t"
- : : "r" (&current_set[cpuid]), "i" (ASI_M_VIKING_TMP2)
+ : : "r" (&current_set[cpuid])
: "memory" /* paranoid */);
cpu_leds[cpuid] = 0x9;
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
}
-extern int cpu_idle(void *unused);
extern void init_IRQ(void);
extern void cpu_panic(void);
unsigned char processors_out[NR_CPUS]; /* Set when ipi exited. */
} ccall_info __attribute__((aligned(8)));
-static spinlock_t cross_call_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cross_call_lock);
/* Cross calls must be serialized, at least currently. */
void smp4d_cross_call(smpfunc_t func, unsigned long arg1, unsigned long arg2,
SMP_PRINTK(("smp4d_message_pass %d %d %08lx %d\n", target, msg, data, wait));
if (msg == MSG_STOP_CPU && target == MSG_ALL_BUT_SELF) {
unsigned long flags;
- static spinlock_t stop_cpu_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(stop_cpu_lock);
spin_lock_irqsave(&stop_cpu_lock, flags);
smp4d_stop_cpu_sender = me;
smp4d_cross_call((smpfunc_t)smp4d_stop_cpu, 0, 0, 0, 0, 0);
void __init smp4d_blackbox_current(unsigned *addr)
{
- /* We have a nice Linux current register :) */
- int rd = addr[1] & 0x3e000000;
+ int rd = *addr & 0x3e000000;
- addr[0] = 0x10800006; /* b .+24 */
- addr[1] = 0xc0800820 | rd; /* lda [%g0] ASI_M_VIKING_TMP2, reg */
+ addr[0] = 0xc0800800 | rd; /* lda [%g0] ASI_M_VIKING_TMP1, reg */
+ addr[2] = 0x81282002 | rd | (rd >> 11); /* sll reg, 2, reg */
+ addr[4] = 0x01000000; /* nop */
}
void __init sun4d_init_smp(void)
{
int i;
- extern unsigned int patchme_store_new_current[];
extern unsigned int t_nmi[], linux_trap_ipi15_sun4d[], linux_trap_ipi15_sun4m[];
- /* Store current into Linux current register :) */
- __asm__ __volatile__("sta %%g6, [%%g0] %0" : : "i"(ASI_M_VIKING_TMP2));
-
- /* Patch switch_to */
- patchme_store_new_current[0] = (patchme_store_new_current[0] & 0x3e000000) | 0xc0a00820;
-
/* Patch ipi15 trap table */
t_nmi[1] = t_nmi[1] + (linux_trap_ipi15_sun4d - linux_trap_ipi15_sun4m);
#else /* SMP */
+static spinlock_t dummy = SPIN_LOCK_UNLOCKED;
#define ATOMIC_HASH_SIZE 1
-#define ATOMIC_HASH(a) 0
+#define ATOMIC_HASH(a) (&dummy)
#endif /* SMP */
int offset, count; /* siamese twins */
int off_new;
int align1;
- int i;
+ int i, color;
- if (align == 0)
- align = 1;
+ if (t->num_colors) {
+ /* align is overloaded to be the page color */
+ color = align;
+ align = t->num_colors;
+ } else {
+ color = 0;
+ if (align == 0)
+ align = 1;
+ }
align1 = align - 1;
if ((align & align1) != 0)
BUG();
BUG();
if (len <= 0 || len > t->size)
BUG();
+ color &= align1;
spin_lock(&t->lock);
if (len < t->last_size)
count = 0;
for (;;) {
off_new = find_next_zero_bit(t->map, t->size, offset);
- off_new = (off_new + align1) & ~align1;
+ off_new = ((off_new + align1) & ~align1) + color;
count += off_new - offset;
offset = off_new;
if (offset >= t->size)
spin_lock_init(&t->lock);
t->map = map;
t->size = size;
- t->last_size = 0;
- t->first_free = 0;
}
-/* memcpy.S: Sparc optimized memcpy, bcopy and memmove code
- * Hand optimized from GNU libc's memcpy, bcopy and memmove
+/* memcpy.S: Sparc optimized memcpy and memmove code
+ * Hand optimized from GNU libc's memcpy and memmove
* Copyright (C) 1991,1996 Free Software Foundation
* Copyright (C) 1995 Linus Torvalds (Linus.Torvalds@helsinki.fi)
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
retl
nop ! Only bcopy returns here and it returns void...
-FUNC(bcopy)
- mov %o0, %o3
- mov %o1, %o0
- mov %o3, %o1
- tst %o2
- bcs 0b
- /* Do the cmp in the delay slot */
#ifdef __KERNEL__
FUNC(amemmove)
FUNC(__memmove)
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
idx = type + KM_TYPE_NR*smp_processor_id();
ctxd_t *srmmu_context_table;
int viking_mxcc_present;
-static spinlock_t srmmu_context_spinlock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(srmmu_context_spinlock);
int is_hypersparc;
static unsigned long srmmu_pte_pfn(pte_t pte)
{
if (srmmu_device_memory(pte_val(pte))) {
- /* XXX Anton obviously had something in mind when he did this.
- * But what?
+ /* Just return something that will cause
+ * pfn_valid() to return false. This makes
+ * copy_one_pte() simply copy the PTE over
+ * directly.
*/
- /* return (struct page *)~0; */
- BUG();
+ return ~0UL;
}
return (pte_val(pte) & SRMMU_PTE_PMASK) >> (PAGE_SHIFT-4);
}
BTFIXUPSET_CALL(free_pgd_fast, srmmu_free_pgd_fast, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(get_pgd_fast, srmmu_get_pgd_fast, BTFIXUPCALL_NORM);
+ BTFIXUPSET_HALF(pte_readi, SRMMU_NOREAD);
BTFIXUPSET_HALF(pte_writei, SRMMU_WRITE);
BTFIXUPSET_HALF(pte_dirtyi, SRMMU_DIRTY);
BTFIXUPSET_HALF(pte_youngi, SRMMU_REF);
int st_p;
char propb[64];
char *p;
+ int propl;
switch(prom_vers) {
case PROM_V0:
if(strncmp(propb, "serial", sizeof("serial")))
return PROMDEV_I_UNK;
}
- prom_getproperty(prom_root_node, "stdin-path", propb, sizeof(propb));
- p = propb;
- while(*p) p++; p -= 2;
- if(p[0] == ':') {
- if(p[1] == 'a')
- return PROMDEV_ITTYA;
- else if(p[1] == 'b')
- return PROMDEV_ITTYB;
+ propl = prom_getproperty(prom_root_node, "stdin-path", propb, sizeof(propb));
+ if(propl > 2) {
+ p = propb;
+ while(*p) p++; p -= 2;
+ if(p[0] == ':') {
+ if(p[1] == 'a')
+ return PROMDEV_ITTYA;
+ else if(p[1] == 'b')
+ return PROMDEV_ITTYB;
+ }
}
return PROMDEV_I_UNK;
}
restore_current();
spin_unlock_irqrestore(&prom_lock, flags);
propl = prom_getproperty(st_p, "device_type", propb, sizeof(propb));
- if (propl >= 0 && propl == sizeof("display") &&
+ if (propl == sizeof("display") &&
strncmp("display", propb, sizeof("display")) == 0)
{
return PROMDEV_OSCREEN;
if(propl >= 0 &&
strncmp("serial", propb, sizeof("serial")) != 0)
return PROMDEV_O_UNK;
- prom_getproperty(prom_root_node, "stdout-path", propb, sizeof(propb));
- if(strncmp(propb, con_name_jmc, CON_SIZE_JMC) == 0)
+ propl = prom_getproperty(prom_root_node, "stdout-path",
+ propb, sizeof(propb));
+ if(propl == CON_SIZE_JMC &&
+ strncmp(propb, con_name_jmc, CON_SIZE_JMC) == 0)
return PROMDEV_OTTYA;
- p = propb;
- while(*p) p++; p -= 2;
- if(p[0]==':') {
- if(p[1] == 'a')
- return PROMDEV_OTTYA;
- else if(p[1] == 'b')
- return PROMDEV_OTTYB;
+ if(propl > 2) {
+ p = propb;
+ while(*p) p++; p-= 2;
+ if(p[0]==':') {
+ if(p[1] == 'a')
+ return PROMDEV_OTTYA;
+ else if(p[1] == 'b')
+ return PROMDEV_OTTYB;
+ }
}
} else {
switch(*romvec->pv_stdin) {
extern void restore_current(void);
-spinlock_t prom_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(prom_lock);
/* Reset and reboot the machine with the command 'bcommand'. */
void
}
}
-static void
+void
prom_adjust_ranges(struct linux_prom_ranges *ranges1, int nranges1,
struct linux_prom_ranges *ranges2, int nranges2)
{
*/
int prom_nodematch(int node, char *name)
{
+ int error;
+
static char namebuf[128];
- prom_getproperty(node, "name", namebuf, sizeof(namebuf));
+ error = prom_getproperty(node, "name", namebuf, sizeof(namebuf));
+ if (error == -1) return 0;
if(strcmp(namebuf, name) == 0) return 1;
return 0;
}
#include <linux/time.h>
-#define jiffies_to_timeval jiffies_to_compat_timeval
+#undef cputime_to_timeval
+#define cputime_to_timeval cputime_to_compat_timeval
static __inline__ void
-jiffies_to_compat_timeval(unsigned long jiffies, struct compat_timeval *value)
+cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value)
{
+ unsigned long jiffies = cputime_to_jiffies(cputime);
value->tv_usec = (jiffies % HZ) * (1000000L / HZ);
value->tv_sec = jiffies / HZ;
}
/* Used to synchronize accesses to NatSemi SUPER I/O chip configuration
* operations in asm/ns87303.h
*/
-spinlock_t ns87303_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(ns87303_lock);
extern void cpu_probe(void);
extern void central_probe(void);
*/
int arch_prepare_kprobe(struct kprobe *p)
+{
+ return 0;
+}
+
+void arch_copy_kprobe(struct kprobe *p)
{
p->ainsn.insn[0] = *p->addr;
p->ainsn.insn[1] = BREAKPOINT_INSTRUCTION_2;
- return 0;
}
void arch_remove_kprobe(struct kprobe *p)
.globl FUNC_NAME
.type FUNC_NAME,#function
FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */
+ srlx %o2, 31, %g2
+ cmp %g2, 0
+ tne %xcc, 5
PREAMBLE
mov %o0, %g5
cmp %o2, 0
.globl FUNC_NAME
.type FUNC_NAME,#function
FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */
+ srlx %o2, 31, %g2
+ cmp %g2, 0
+ tne %xcc, 5
PREAMBLE
mov %o0, %g5
cmp %o2, 0
* Copyright (C) 1999 David S. Miller (davem@redhat.com)
*/
+#include <linux/config.h>
#include <asm/asi.h>
+ /* On SMP we need to use memory barriers to ensure
+ * correct memory operation ordering, nop these out
+ * for uniprocessor.
+ */
+#ifdef CONFIG_SMP
+#define ATOMIC_PRE_BARRIER membar #StoreLoad | #LoadLoad
+#define ATOMIC_POST_BARRIER membar #StoreLoad | #StoreStore
+#else
+#define ATOMIC_PRE_BARRIER nop
+#define ATOMIC_POST_BARRIER nop
+#endif
+
.text
- .align 64
- .globl __atomic_add
- .type __atomic_add,#function
-__atomic_add: /* %o0 = increment, %o1 = atomic_ptr */
- lduw [%o1], %g5
+ /* Two versions of the atomic routines, one that
+ * does not return a value and does not perform
+ * memory barriers, and a second which returns
+ * a value and does the barriers.
+ */
+ .globl atomic_add
+ .type atomic_add,#function
+atomic_add: /* %o0 = increment, %o1 = atomic_ptr */
+1: lduw [%o1], %g5
+ add %g5, %o0, %g7
+ cas [%o1], %g5, %g7
+ cmp %g5, %g7
+ bne,pn %icc, 1b
+ nop
+ retl
+ nop
+ .size atomic_add, .-atomic_add
+
+ .globl atomic_sub
+ .type atomic_sub,#function
+atomic_sub: /* %o0 = decrement, %o1 = atomic_ptr */
+1: lduw [%o1], %g5
+ sub %g5, %o0, %g7
+ cas [%o1], %g5, %g7
+ cmp %g5, %g7
+ bne,pn %icc, 1b
+ nop
+ retl
+ nop
+ .size atomic_sub, .-atomic_sub
+
+ .globl atomic_add_ret
+ .type atomic_add_ret,#function
+atomic_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
+ ATOMIC_PRE_BARRIER
+1: lduw [%o1], %g5
add %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
- bne,pn %icc, __atomic_add
- membar #StoreLoad | #StoreStore
+ bne,pn %icc, 1b
+ add %g7, %o0, %g7
+ ATOMIC_POST_BARRIER
retl
- add %g7, %o0, %o0
- .size __atomic_add, .-__atomic_add
+ sra %g7, 0, %o0
+ .size atomic_add_ret, .-atomic_add_ret
- .globl __atomic_sub
- .type __atomic_sub,#function
-__atomic_sub: /* %o0 = increment, %o1 = atomic_ptr */
- lduw [%o1], %g5
+ .globl atomic_sub_ret
+ .type atomic_sub_ret,#function
+atomic_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
+ ATOMIC_PRE_BARRIER
+1: lduw [%o1], %g5
sub %g5, %o0, %g7
cas [%o1], %g5, %g7
cmp %g5, %g7
- bne,pn %icc, __atomic_sub
- membar #StoreLoad | #StoreStore
+ bne,pn %icc, 1b
+ sub %g7, %o0, %g7
+ ATOMIC_POST_BARRIER
+ retl
+ sra %g7, 0, %o0
+ .size atomic_sub_ret, .-atomic_sub_ret
+
+ .globl atomic64_add
+ .type atomic64_add,#function
+atomic64_add: /* %o0 = increment, %o1 = atomic_ptr */
+1: ldx [%o1], %g5
+ add %g5, %o0, %g7
+ casx [%o1], %g5, %g7
+ cmp %g5, %g7
+ bne,pn %xcc, 1b
+ nop
+ retl
+ nop
+ .size atomic64_add, .-atomic64_add
+
+ .globl atomic64_sub
+ .type atomic64_sub,#function
+atomic64_sub: /* %o0 = decrement, %o1 = atomic_ptr */
+1: ldx [%o1], %g5
+ sub %g5, %o0, %g7
+ casx [%o1], %g5, %g7
+ cmp %g5, %g7
+ bne,pn %xcc, 1b
+ nop
retl
- sub %g7, %o0, %o0
- .size __atomic_sub, .-__atomic_sub
+ nop
+ .size atomic64_sub, .-atomic64_sub
- .globl __atomic64_add
- .type __atomic64_add,#function
-__atomic64_add: /* %o0 = increment, %o1 = atomic_ptr */
- ldx [%o1], %g5
+ .globl atomic64_add_ret
+ .type atomic64_add_ret,#function
+atomic64_add_ret: /* %o0 = increment, %o1 = atomic_ptr */
+ ATOMIC_PRE_BARRIER
+1: ldx [%o1], %g5
add %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
- bne,pn %xcc, __atomic64_add
- membar #StoreLoad | #StoreStore
+ bne,pn %xcc, 1b
+ add %g7, %o0, %g7
+ ATOMIC_POST_BARRIER
retl
- add %g7, %o0, %o0
- .size __atomic64_add, .-__atomic64_add
+ mov %g7, %o0
+ .size atomic64_add_ret, .-atomic64_add_ret
- .globl __atomic64_sub
- .type __atomic64_sub,#function
-__atomic64_sub: /* %o0 = increment, %o1 = atomic_ptr */
- ldx [%o1], %g5
+ .globl atomic64_sub_ret
+ .type atomic64_sub_ret,#function
+atomic64_sub_ret: /* %o0 = decrement, %o1 = atomic_ptr */
+ ATOMIC_PRE_BARRIER
+1: ldx [%o1], %g5
sub %g5, %o0, %g7
casx [%o1], %g5, %g7
cmp %g5, %g7
- bne,pn %xcc, __atomic64_sub
- membar #StoreLoad | #StoreStore
+ bne,pn %xcc, 1b
+ sub %g7, %o0, %g7
+ ATOMIC_POST_BARRIER
retl
- sub %g7, %o0, %o0
- .size __atomic64_sub, .-__atomic64_sub
+ mov %g7, %o0
+ .size atomic64_sub_ret, .-atomic64_sub_ret
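
The comment at the top of this file separates the plain atomic_add/atomic_sub
entry points (no return value, no memory barriers) from the *_ret variants
(value returned, barriers around the operation). As a rough C-level sketch of
the compare-and-swap retry loop the assembler implements -- assuming a generic
cmpxchg()-style helper stands in for the cas instruction, so this is an
illustration rather than the actual routine:

	static int atomic_add_ret_sketch(int increment, int *ptr)
	{
		int old, new;

		do {
			old = *ptr;				/* lduw [%o1], %g5    */
			new = old + increment;			/* add  %g5, %o0, %g7 */
		} while (cmpxchg(ptr, old, new) != old);	/* cas retry loop     */

		return new;					/* sra  %g7, 0, %o0   */
	}
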
* Copyright (C) 2000 David S. Miller (davem@redhat.com)
*/
+#include <linux/config.h>
#include <asm/asi.h>
+ /* On SMP we need to use memory barriers to ensure
+ * correct memory operation ordering, nop these out
+ * for uniprocessor.
+ */
+#ifdef CONFIG_SMP
+#define BITOP_PRE_BARRIER membar #StoreLoad | #LoadLoad
+#define BITOP_POST_BARRIER membar #StoreLoad | #StoreStore
+#else
+#define BITOP_PRE_BARRIER nop
+#define BITOP_POST_BARRIER nop
+#endif
+
.text
- .align 64
- .globl ___test_and_set_bit
- .type ___test_and_set_bit,#function
-___test_and_set_bit: /* %o0=nr, %o1=addr */
+
+ .globl test_and_set_bit
+ .type test_and_set_bit,#function
+test_and_set_bit: /* %o0=nr, %o1=addr */
+ BITOP_PRE_BARRIER
+ srlx %o0, 6, %g1
+ mov 1, %g5
+ sllx %g1, 3, %g3
+ and %o0, 63, %g2
+ sllx %g5, %g2, %g5
+ add %o1, %g3, %o1
+1: ldx [%o1], %g7
+ or %g7, %g5, %g1
+ casx [%o1], %g7, %g1
+ cmp %g7, %g1
+ bne,pn %xcc, 1b
+ and %g7, %g5, %g2
+ BITOP_POST_BARRIER
+ clr %o0
+ retl
+ movrne %g2, 1, %o0
+ .size test_and_set_bit, .-test_and_set_bit
+
+ .globl test_and_clear_bit
+ .type test_and_clear_bit,#function
+test_and_clear_bit: /* %o0=nr, %o1=addr */
+ BITOP_PRE_BARRIER
+ srlx %o0, 6, %g1
+ mov 1, %g5
+ sllx %g1, 3, %g3
+ and %o0, 63, %g2
+ sllx %g5, %g2, %g5
+ add %o1, %g3, %o1
+1: ldx [%o1], %g7
+ andn %g7, %g5, %g1
+ casx [%o1], %g7, %g1
+ cmp %g7, %g1
+ bne,pn %xcc, 1b
+ and %g7, %g5, %g2
+ BITOP_POST_BARRIER
+ clr %o0
+ retl
+ movrne %g2, 1, %o0
+ .size test_and_clear_bit, .-test_and_clear_bit
+
+ .globl test_and_change_bit
+ .type test_and_change_bit,#function
+test_and_change_bit: /* %o0=nr, %o1=addr */
+ BITOP_PRE_BARRIER
+ srlx %o0, 6, %g1
+ mov 1, %g5
+ sllx %g1, 3, %g3
+ and %o0, 63, %g2
+ sllx %g5, %g2, %g5
+ add %o1, %g3, %o1
+1: ldx [%o1], %g7
+ xor %g7, %g5, %g1
+ casx [%o1], %g7, %g1
+ cmp %g7, %g1
+ bne,pn %xcc, 1b
+ and %g7, %g5, %g2
+ BITOP_POST_BARRIER
+ clr %o0
+ retl
+ movrne %g2, 1, %o0
+ .size test_and_change_bit, .-test_and_change_bit
+
+ .globl set_bit
+ .type set_bit,#function
+set_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
- ldx [%o1], %g7
-1: andcc %g7, %g5, %o0
- bne,pn %xcc, 2f
- xor %g7, %g5, %g1
+1: ldx [%o1], %g7
+ or %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
- bne,a,pn %xcc, 1b
- ldx [%o1], %g7
-2: retl
- membar #StoreLoad | #StoreStore
- .size ___test_and_set_bit, .-___test_and_set_bit
+ bne,pn %xcc, 1b
+ nop
+ retl
+ nop
+ .size set_bit, .-set_bit
- .globl ___test_and_clear_bit
- .type ___test_and_clear_bit,#function
-___test_and_clear_bit: /* %o0=nr, %o1=addr */
+ .globl clear_bit
+ .type clear_bit,#function
+clear_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
- ldx [%o1], %g7
-1: andcc %g7, %g5, %o0
- be,pn %xcc, 2f
- xor %g7, %g5, %g1
+1: ldx [%o1], %g7
+ andn %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
- bne,a,pn %xcc, 1b
- ldx [%o1], %g7
-2: retl
- membar #StoreLoad | #StoreStore
- .size ___test_and_clear_bit, .-___test_and_clear_bit
+ bne,pn %xcc, 1b
+ nop
+ retl
+ nop
+ .size clear_bit, .-clear_bit
- .globl ___test_and_change_bit
- .type ___test_and_change_bit,#function
-___test_and_change_bit: /* %o0=nr, %o1=addr */
+ .globl change_bit
+ .type change_bit,#function
+change_bit: /* %o0=nr, %o1=addr */
srlx %o0, 6, %g1
mov 1, %g5
sllx %g1, 3, %g3
and %o0, 63, %g2
sllx %g5, %g2, %g5
add %o1, %g3, %o1
- ldx [%o1], %g7
-1: and %g7, %g5, %o0
+1: ldx [%o1], %g7
xor %g7, %g5, %g1
casx [%o1], %g7, %g1
cmp %g7, %g1
- bne,a,pn %xcc, 1b
- ldx [%o1], %g7
-2: retl
- membar #StoreLoad | #StoreStore
- nop
- .size ___test_and_change_bit, .-___test_and_change_bit
+ bne,pn %xcc, 1b
+ nop
+ retl
+ nop
+ .size change_bit, .-change_bit
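
The bitops above follow the same pattern: compute which 64-bit word the bit
lives in and a mask for it, then retry a compare-and-swap until it succeeds.
An illustrative C equivalent (assumption, not part of the patch; cmpxchg()
again stands in for the casx instruction):

	static int test_and_set_bit_sketch(unsigned long nr, volatile unsigned long *addr)
	{
		volatile unsigned long *word = addr + (nr >> 6);	/* srlx %o0, 6; sllx, 3 */
		unsigned long mask = 1UL << (nr & 63);			/* and %o0, 63; sllx    */
		unsigned long old, new;

		do {
			old = *word;					/* ldx [%o1], %g7  */
			new = old | mask;				/* or  %g7, %g5    */
		} while (cmpxchg(word, old, new) != old);		/* casx retry loop */

		return (old & mask) != 0;				/* movrne %g2, 1   */
	}
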
{
n *= 4;
- n *= (cpu_data(smp_processor_id()).udelay_val * (HZ/4));
+ n *= (cpu_data(_smp_processor_id()).udelay_val * (HZ/4));
n >>= 32;
__delay(n + 1);
.align 32
.globl memmove
.type memmove,#function
-memmove:
+memmove: /* o0=dst o1=src o2=len */
mov %o0, %g1
cmp %o0, %o1
- blu,pt %xcc, memcpy
- sub %o0, %o1, %g5
- add %o1, %o2, %g3
- cmp %g3, %o0
bleu,pt %xcc, memcpy
add %o1, %o2, %g5
- add %o0, %o2, %o5
-
+ cmp %g5, %o0
+ bleu,pt %xcc, memcpy
+ add %o0, %o2, %o5
sub %g5, 1, %o1
+
sub %o5, 1, %o0
1: ldub [%o1], %g5
subcc %o2, 1, %o2
char *dst = to;
const char __user *src = from;
- while (size--) {
+ while (size) {
if (__get_user(*dst, src))
break;
dst++;
src++;
+ size--;
}
if (size)
char __user *dst = to;
const char *src = from;
- while (size--) {
+ while (size) {
if (__put_user(*src, dst))
break;
dst++;
src++;
+ size--;
}
return size;
char __user *dst = to;
char __user *src = from;
- while (size--) {
+ while (size) {
char tmp;
if (__get_user(tmp, src))
break;
dst++;
src++;
+ size--;
}
return size;
return 0;
}
+static inline int io_remap_pud_range(pud_t * pud, unsigned long address, unsigned long size,
+ unsigned long offset, pgprot_t prot, int space)
+{
+ unsigned long end;
+
+ address &= ~PUD_MASK;
+ end = address + size;
+ if (end > PUD_SIZE)
+ end = PUD_SIZE;
+ offset -= address;
+ do {
+ pmd_t *pmd = pmd_alloc(current->mm, pud, address);
+ if (!pmd)
+ return -ENOMEM;
+ io_remap_pmd_range(pmd, address, end - address, address + offset, prot, space);
+ address = (address + PUD_SIZE) & PUD_MASK;
+ pud++;
+ } while (address < end);
+ return 0;
+}
+
int io_remap_page_range(struct vm_area_struct *vma, unsigned long from, unsigned long offset, unsigned long size, pgprot_t prot, int space)
{
int error = 0;
spin_lock(&mm->page_table_lock);
while (from < end) {
- pmd_t *pmd = pmd_alloc(current->mm, dir, from);
+ pud_t *pud = pud_alloc(current->mm, dir, from);
error = -ENOMEM;
- if (!pmd)
+ if (!pud)
break;
- error = io_remap_pmd_range(pmd, from, end - from, offset + from, prot, space);
+ error = io_remap_pud_range(pud, from, end - from, offset + from, prot, space);
if (error)
break;
from = (from + PGDIR_SIZE) & PGDIR_MASK;
*/
BUG_ON(s > e);
-#if 0
- /* Currently free_pgtables guarantees this. */
s &= PMD_MASK;
e = (e + PMD_SIZE - 1) & PMD_MASK;
-#endif
+
vpte_base = (tlb_type == spitfire ?
VPTE_BASE_SPITFIRE :
VPTE_BASE_CHEETAH);
#include <linux/errno.h>
#include <linux/init.h>
-extern void timer_init(struct oprofile_operations ** ops);
-
-int __init oprofile_arch_init(struct oprofile_operations ** ops)
+int __init oprofile_arch_init(struct oprofile_operations * ops)
{
return -ENODEV;
}
EXTRA_CFLAGS := -Werror
lib-y := bootstr.o devops.o init.o memory.o misc.o \
- tree.o console.o printf.o p1275.o map.o
+ tree.o console.o printf.o p1275.o map.o cif.o
--- /dev/null
+/* cif.S: PROM entry/exit assembler trampolines.
+ *
+ * Copyright (C) 1996,1997 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
+ * Copyright (C) 2005 David S. Miller <davem@davemloft.net>
+ */
+
+#include <asm/pstate.h>
+
+ .text
+ .globl prom_cif_interface
+prom_cif_interface:
+ sethi %hi(p1275buf), %o0
+ or %o0, %lo(p1275buf), %o0
+ ldx [%o0 + 0x010], %o1 ! prom_cif_stack
+ save %o1, -0x190, %sp
+ ldx [%i0 + 0x008], %l2 ! prom_cif_handler
+ rdpr %pstate, %l4
+ wrpr %g0, 0x15, %pstate ! save alternate globals
+ stx %g1, [%sp + 2047 + 0x0b0]
+ stx %g2, [%sp + 2047 + 0x0b8]
+ stx %g3, [%sp + 2047 + 0x0c0]
+ stx %g4, [%sp + 2047 + 0x0c8]
+ stx %g5, [%sp + 2047 + 0x0d0]
+ stx %g6, [%sp + 2047 + 0x0d8]
+ stx %g7, [%sp + 2047 + 0x0e0]
+ wrpr %g0, 0x814, %pstate ! save interrupt globals
+ stx %g1, [%sp + 2047 + 0x0e8]
+ stx %g2, [%sp + 2047 + 0x0f0]
+ stx %g3, [%sp + 2047 + 0x0f8]
+ stx %g4, [%sp + 2047 + 0x100]
+ stx %g5, [%sp + 2047 + 0x108]
+ stx %g6, [%sp + 2047 + 0x110]
+ stx %g7, [%sp + 2047 + 0x118]
+ wrpr %g0, 0x14, %pstate ! save normal globals
+ stx %g1, [%sp + 2047 + 0x120]
+ stx %g2, [%sp + 2047 + 0x128]
+ stx %g3, [%sp + 2047 + 0x130]
+ stx %g4, [%sp + 2047 + 0x138]
+ stx %g5, [%sp + 2047 + 0x140]
+ stx %g6, [%sp + 2047 + 0x148]
+ stx %g7, [%sp + 2047 + 0x150]
+ wrpr %g0, 0x414, %pstate ! save mmu globals
+ stx %g1, [%sp + 2047 + 0x158]
+ stx %g2, [%sp + 2047 + 0x160]
+ stx %g3, [%sp + 2047 + 0x168]
+ stx %g4, [%sp + 2047 + 0x170]
+ stx %g5, [%sp + 2047 + 0x178]
+ stx %g6, [%sp + 2047 + 0x180]
+ stx %g7, [%sp + 2047 + 0x188]
+ mov %g1, %l0 ! also save to locals, so we can handle
+ mov %g2, %l1 ! tlb faults later on, when accessing
+ mov %g3, %l3 ! the stack.
+ mov %g7, %l5
+ wrpr %l4, PSTATE_IE, %pstate ! turn off interrupts
+ call %l2
+ add %i0, 0x018, %o0 ! prom_args
+ wrpr %g0, 0x414, %pstate ! restore mmu globals
+ mov %l0, %g1
+ mov %l1, %g2
+ mov %l3, %g3
+ mov %l5, %g7
+ wrpr %g0, 0x14, %pstate ! restore normal globals
+ ldx [%sp + 2047 + 0x120], %g1
+ ldx [%sp + 2047 + 0x128], %g2
+ ldx [%sp + 2047 + 0x130], %g3
+ ldx [%sp + 2047 + 0x138], %g4
+ ldx [%sp + 2047 + 0x140], %g5
+ ldx [%sp + 2047 + 0x148], %g6
+ ldx [%sp + 2047 + 0x150], %g7
+ wrpr %g0, 0x814, %pstate ! restore interrupt globals
+ ldx [%sp + 2047 + 0x0e8], %g1
+ ldx [%sp + 2047 + 0x0f0], %g2
+ ldx [%sp + 2047 + 0x0f8], %g3
+ ldx [%sp + 2047 + 0x100], %g4
+ ldx [%sp + 2047 + 0x108], %g5
+ ldx [%sp + 2047 + 0x110], %g6
+ ldx [%sp + 2047 + 0x118], %g7
+ wrpr %g0, 0x15, %pstate ! restore alternate globals
+ ldx [%sp + 2047 + 0x0b0], %g1
+ ldx [%sp + 2047 + 0x0b8], %g2
+ ldx [%sp + 2047 + 0x0c0], %g3
+ ldx [%sp + 2047 + 0x0c8], %g4
+ ldx [%sp + 2047 + 0x0d0], %g5
+ ldx [%sp + 2047 + 0x0d8], %g6
+ ldx [%sp + 2047 + 0x0e0], %g7
+ wrpr %l4, 0, %pstate ! restore original pstate
+ ret
+ restore
+
+ .globl prom_cif_callback
+prom_cif_callback:
+ sethi %hi(p1275buf), %o1
+ or %o1, %lo(p1275buf), %o1
+ save %sp, -0x270, %sp
+ rdpr %pstate, %l4
+ wrpr %g0, 0x15, %pstate ! save PROM alternate globals
+ stx %g1, [%sp + 2047 + 0x0b0]
+ stx %g2, [%sp + 2047 + 0x0b8]
+ stx %g3, [%sp + 2047 + 0x0c0]
+ stx %g4, [%sp + 2047 + 0x0c8]
+ stx %g5, [%sp + 2047 + 0x0d0]
+ stx %g6, [%sp + 2047 + 0x0d8]
+ stx %g7, [%sp + 2047 + 0x0e0]
+ ! restore Linux alternate globals
+ ldx [%sp + 2047 + 0x190], %g1
+ ldx [%sp + 2047 + 0x198], %g2
+ ldx [%sp + 2047 + 0x1a0], %g3
+ ldx [%sp + 2047 + 0x1a8], %g4
+ ldx [%sp + 2047 + 0x1b0], %g5
+ ldx [%sp + 2047 + 0x1b8], %g6
+ ldx [%sp + 2047 + 0x1c0], %g7
+ wrpr %g0, 0x814, %pstate ! save PROM interrupt globals
+ stx %g1, [%sp + 2047 + 0x0e8]
+ stx %g2, [%sp + 2047 + 0x0f0]
+ stx %g3, [%sp + 2047 + 0x0f8]
+ stx %g4, [%sp + 2047 + 0x100]
+ stx %g5, [%sp + 2047 + 0x108]
+ stx %g6, [%sp + 2047 + 0x110]
+ stx %g7, [%sp + 2047 + 0x118]
+ ! restore Linux interrupt globals
+ ldx [%sp + 2047 + 0x1c8], %g1
+ ldx [%sp + 2047 + 0x1d0], %g2
+ ldx [%sp + 2047 + 0x1d8], %g3
+ ldx [%sp + 2047 + 0x1e0], %g4
+ ldx [%sp + 2047 + 0x1e8], %g5
+ ldx [%sp + 2047 + 0x1f0], %g6
+ ldx [%sp + 2047 + 0x1f8], %g7
+ wrpr %g0, 0x14, %pstate ! save PROM normal globals
+ stx %g1, [%sp + 2047 + 0x120]
+ stx %g2, [%sp + 2047 + 0x128]
+ stx %g3, [%sp + 2047 + 0x130]
+ stx %g4, [%sp + 2047 + 0x138]
+ stx %g5, [%sp + 2047 + 0x140]
+ stx %g6, [%sp + 2047 + 0x148]
+ stx %g7, [%sp + 2047 + 0x150]
+ ! restore Linux normal globals
+ ldx [%sp + 2047 + 0x200], %g1
+ ldx [%sp + 2047 + 0x208], %g2
+ ldx [%sp + 2047 + 0x210], %g3
+ ldx [%sp + 2047 + 0x218], %g4
+ ldx [%sp + 2047 + 0x220], %g5
+ ldx [%sp + 2047 + 0x228], %g6
+ ldx [%sp + 2047 + 0x230], %g7
+ wrpr %g0, 0x414, %pstate ! save PROM mmu globals
+ stx %g1, [%sp + 2047 + 0x158]
+ stx %g2, [%sp + 2047 + 0x160]
+ stx %g3, [%sp + 2047 + 0x168]
+ stx %g4, [%sp + 2047 + 0x170]
+ stx %g5, [%sp + 2047 + 0x178]
+ stx %g6, [%sp + 2047 + 0x180]
+ stx %g7, [%sp + 2047 + 0x188]
+ ! restore Linux mmu globals
+ ldx [%sp + 2047 + 0x238], %o0
+ ldx [%sp + 2047 + 0x240], %o1
+ ldx [%sp + 2047 + 0x248], %l2
+ ldx [%sp + 2047 + 0x250], %l3
+ ldx [%sp + 2047 + 0x258], %l5
+ ldx [%sp + 2047 + 0x260], %l6
+ ldx [%sp + 2047 + 0x268], %l7
+ ! switch to Linux tba
+ sethi %hi(sparc64_ttable_tl0), %l1
+ rdpr %tba, %l0 ! save PROM tba
+ mov %o0, %g1
+ mov %o1, %g2
+ mov %l2, %g3
+ mov %l3, %g4
+ mov %l5, %g5
+ mov %l6, %g6
+ mov %l7, %g7
+ wrpr %l1, %tba ! install Linux tba
+ wrpr %l4, 0, %pstate ! restore PSTATE
+ call prom_world
+ mov %g0, %o0
+ ldx [%i1 + 0x000], %l2
+ call %l2
+ mov %i0, %o0
+ mov %o0, %l1
+ call prom_world
+ or %g0, 1, %o0
+ wrpr %g0, 0x14, %pstate ! interrupts off
+ ! restore PROM mmu globals
+ ldx [%sp + 2047 + 0x158], %o0
+ ldx [%sp + 2047 + 0x160], %o1
+ ldx [%sp + 2047 + 0x168], %l2
+ ldx [%sp + 2047 + 0x170], %l3
+ ldx [%sp + 2047 + 0x178], %l5
+ ldx [%sp + 2047 + 0x180], %l6
+ ldx [%sp + 2047 + 0x188], %l7
+ wrpr %g0, 0x414, %pstate ! restore PROM mmu globals
+ mov %o0, %g1
+ mov %o1, %g2
+ mov %l2, %g3
+ mov %l3, %g4
+ mov %l5, %g5
+ mov %l6, %g6
+ mov %l7, %g7
+ wrpr %l0, %tba ! restore PROM tba
+ wrpr %g0, 0x14, %pstate ! restore PROM normal globals
+ ldx [%sp + 2047 + 0x120], %g1
+ ldx [%sp + 2047 + 0x128], %g2
+ ldx [%sp + 2047 + 0x130], %g3
+ ldx [%sp + 2047 + 0x138], %g4
+ ldx [%sp + 2047 + 0x140], %g5
+ ldx [%sp + 2047 + 0x148], %g6
+ ldx [%sp + 2047 + 0x150], %g7
+ wrpr %g0, 0x814, %pstate ! restore PROM interrupt globals
+ ldx [%sp + 2047 + 0x0e8], %g1
+ ldx [%sp + 2047 + 0x0f0], %g2
+ ldx [%sp + 2047 + 0x0f8], %g3
+ ldx [%sp + 2047 + 0x100], %g4
+ ldx [%sp + 2047 + 0x108], %g5
+ ldx [%sp + 2047 + 0x110], %g6
+ ldx [%sp + 2047 + 0x118], %g7
+ wrpr %g0, 0x15, %pstate ! restore PROM alternate globals
+ ldx [%sp + 2047 + 0x0b0], %g1
+ ldx [%sp + 2047 + 0x0b8], %g2
+ ldx [%sp + 2047 + 0x0c0], %g3
+ ldx [%sp + 2047 + 0x0c8], %g4
+ ldx [%sp + 2047 + 0x0d0], %g5
+ ldx [%sp + 2047 + 0x0d8], %g6
+ ldx [%sp + 2047 + 0x0e0], %g7
+ wrpr %l4, 0, %pstate
+ ret
+ restore %l1, 0, %o0
+
extern void prom_world(int);
-void prom_cif_interface (void)
-{
- __asm__ __volatile__ (
-" mov %0, %%o0\n"
-" ldx [%%o0 + 0x010], %%o1 ! prom_cif_stack\n"
-" save %%o1, -0x190, %%sp\n"
-" ldx [%%i0 + 0x008], %%l2 ! prom_cif_handler\n"
-" rdpr %%pstate, %%l4\n"
-" wrpr %%g0, 0x15, %%pstate ! save alternate globals\n"
-" stx %%g1, [%%sp + 2047 + 0x0b0]\n"
-" stx %%g2, [%%sp + 2047 + 0x0b8]\n"
-" stx %%g3, [%%sp + 2047 + 0x0c0]\n"
-" stx %%g4, [%%sp + 2047 + 0x0c8]\n"
-" stx %%g5, [%%sp + 2047 + 0x0d0]\n"
-" stx %%g6, [%%sp + 2047 + 0x0d8]\n"
-" stx %%g7, [%%sp + 2047 + 0x0e0]\n"
-" wrpr %%g0, 0x814, %%pstate ! save interrupt globals\n"
-" stx %%g1, [%%sp + 2047 + 0x0e8]\n"
-" stx %%g2, [%%sp + 2047 + 0x0f0]\n"
-" stx %%g3, [%%sp + 2047 + 0x0f8]\n"
-" stx %%g4, [%%sp + 2047 + 0x100]\n"
-" stx %%g5, [%%sp + 2047 + 0x108]\n"
-" stx %%g6, [%%sp + 2047 + 0x110]\n"
-" stx %%g7, [%%sp + 2047 + 0x118]\n"
-" wrpr %%g0, 0x14, %%pstate ! save normal globals\n"
-" stx %%g1, [%%sp + 2047 + 0x120]\n"
-" stx %%g2, [%%sp + 2047 + 0x128]\n"
-" stx %%g3, [%%sp + 2047 + 0x130]\n"
-" stx %%g4, [%%sp + 2047 + 0x138]\n"
-" stx %%g5, [%%sp + 2047 + 0x140]\n"
-" stx %%g6, [%%sp + 2047 + 0x148]\n"
-" stx %%g7, [%%sp + 2047 + 0x150]\n"
-" wrpr %%g0, 0x414, %%pstate ! save mmu globals\n"
-" stx %%g1, [%%sp + 2047 + 0x158]\n"
-" stx %%g2, [%%sp + 2047 + 0x160]\n"
-" stx %%g3, [%%sp + 2047 + 0x168]\n"
-" stx %%g4, [%%sp + 2047 + 0x170]\n"
-" stx %%g5, [%%sp + 2047 + 0x178]\n"
-" stx %%g6, [%%sp + 2047 + 0x180]\n"
-" stx %%g7, [%%sp + 2047 + 0x188]\n"
-" mov %%g1, %%l0 ! also save to locals, so we can handle\n"
-" mov %%g2, %%l1 ! tlb faults later on, when accessing\n"
-" mov %%g3, %%l3 ! the stack.\n"
-" mov %%g7, %%l5\n"
-" wrpr %%l4, %1, %%pstate ! turn off interrupts\n"
-" call %%l2\n"
-" add %%i0, 0x018, %%o0 ! prom_args\n"
-" wrpr %%g0, 0x414, %%pstate ! restore mmu globals\n"
-" mov %%l0, %%g1\n"
-" mov %%l1, %%g2\n"
-" mov %%l3, %%g3\n"
-" mov %%l5, %%g7\n"
-" wrpr %%g0, 0x14, %%pstate ! restore normal globals\n"
-" ldx [%%sp + 2047 + 0x120], %%g1\n"
-" ldx [%%sp + 2047 + 0x128], %%g2\n"
-" ldx [%%sp + 2047 + 0x130], %%g3\n"
-" ldx [%%sp + 2047 + 0x138], %%g4\n"
-" ldx [%%sp + 2047 + 0x140], %%g5\n"
-" ldx [%%sp + 2047 + 0x148], %%g6\n"
-" ldx [%%sp + 2047 + 0x150], %%g7\n"
-" wrpr %%g0, 0x814, %%pstate ! restore interrupt globals\n"
-" ldx [%%sp + 2047 + 0x0e8], %%g1\n"
-" ldx [%%sp + 2047 + 0x0f0], %%g2\n"
-" ldx [%%sp + 2047 + 0x0f8], %%g3\n"
-" ldx [%%sp + 2047 + 0x100], %%g4\n"
-" ldx [%%sp + 2047 + 0x108], %%g5\n"
-" ldx [%%sp + 2047 + 0x110], %%g6\n"
-" ldx [%%sp + 2047 + 0x118], %%g7\n"
-" wrpr %%g0, 0x15, %%pstate ! restore alternate globals\n"
-" ldx [%%sp + 2047 + 0x0b0], %%g1\n"
-" ldx [%%sp + 2047 + 0x0b8], %%g2\n"
-" ldx [%%sp + 2047 + 0x0c0], %%g3\n"
-" ldx [%%sp + 2047 + 0x0c8], %%g4\n"
-" ldx [%%sp + 2047 + 0x0d0], %%g5\n"
-" ldx [%%sp + 2047 + 0x0d8], %%g6\n"
-" ldx [%%sp + 2047 + 0x0e0], %%g7\n"
-" wrpr %%l4, 0, %%pstate ! restore original pstate\n"
-" ret\n"
-" restore\n"
-" " : : "r" (&p1275buf), "i" (PSTATE_IE));
-}
-
-void prom_cif_callback(void)
-{
- __asm__ __volatile__ (
-" mov %0, %%o1\n"
-" save %%sp, -0x270, %%sp\n"
-" rdpr %%pstate, %%l4\n"
-" wrpr %%g0, 0x15, %%pstate ! save PROM alternate globals\n"
-" stx %%g1, [%%sp + 2047 + 0x0b0]\n"
-" stx %%g2, [%%sp + 2047 + 0x0b8]\n"
-" stx %%g3, [%%sp + 2047 + 0x0c0]\n"
-" stx %%g4, [%%sp + 2047 + 0x0c8]\n"
-" stx %%g5, [%%sp + 2047 + 0x0d0]\n"
-" stx %%g6, [%%sp + 2047 + 0x0d8]\n"
-" stx %%g7, [%%sp + 2047 + 0x0e0]\n"
-" ! restore Linux alternate globals\n"
-" ldx [%%sp + 2047 + 0x190], %%g1\n"
-" ldx [%%sp + 2047 + 0x198], %%g2\n"
-" ldx [%%sp + 2047 + 0x1a0], %%g3\n"
-" ldx [%%sp + 2047 + 0x1a8], %%g4\n"
-" ldx [%%sp + 2047 + 0x1b0], %%g5\n"
-" ldx [%%sp + 2047 + 0x1b8], %%g6\n"
-" ldx [%%sp + 2047 + 0x1c0], %%g7\n"
-" wrpr %%g0, 0x814, %%pstate ! save PROM interrupt globals\n"
-" stx %%g1, [%%sp + 2047 + 0x0e8]\n"
-" stx %%g2, [%%sp + 2047 + 0x0f0]\n"
-" stx %%g3, [%%sp + 2047 + 0x0f8]\n"
-" stx %%g4, [%%sp + 2047 + 0x100]\n"
-" stx %%g5, [%%sp + 2047 + 0x108]\n"
-" stx %%g6, [%%sp + 2047 + 0x110]\n"
-" stx %%g7, [%%sp + 2047 + 0x118]\n"
-" ! restore Linux interrupt globals\n"
-" ldx [%%sp + 2047 + 0x1c8], %%g1\n"
-" ldx [%%sp + 2047 + 0x1d0], %%g2\n"
-" ldx [%%sp + 2047 + 0x1d8], %%g3\n"
-" ldx [%%sp + 2047 + 0x1e0], %%g4\n"
-" ldx [%%sp + 2047 + 0x1e8], %%g5\n"
-" ldx [%%sp + 2047 + 0x1f0], %%g6\n"
-" ldx [%%sp + 2047 + 0x1f8], %%g7\n"
-" wrpr %%g0, 0x14, %%pstate ! save PROM normal globals\n"
-" stx %%g1, [%%sp + 2047 + 0x120]\n"
-" stx %%g2, [%%sp + 2047 + 0x128]\n"
-" stx %%g3, [%%sp + 2047 + 0x130]\n"
-" stx %%g4, [%%sp + 2047 + 0x138]\n"
-" stx %%g5, [%%sp + 2047 + 0x140]\n"
-" stx %%g6, [%%sp + 2047 + 0x148]\n"
-" stx %%g7, [%%sp + 2047 + 0x150]\n"
-" ! restore Linux normal globals\n"
-" ldx [%%sp + 2047 + 0x200], %%g1\n"
-" ldx [%%sp + 2047 + 0x208], %%g2\n"
-" ldx [%%sp + 2047 + 0x210], %%g3\n"
-" ldx [%%sp + 2047 + 0x218], %%g4\n"
-" ldx [%%sp + 2047 + 0x220], %%g5\n"
-" ldx [%%sp + 2047 + 0x228], %%g6\n"
-" ldx [%%sp + 2047 + 0x230], %%g7\n"
-" wrpr %%g0, 0x414, %%pstate ! save PROM mmu globals\n"
-" stx %%g1, [%%sp + 2047 + 0x158]\n"
-" stx %%g2, [%%sp + 2047 + 0x160]\n"
-" stx %%g3, [%%sp + 2047 + 0x168]\n"
-" stx %%g4, [%%sp + 2047 + 0x170]\n"
-" stx %%g5, [%%sp + 2047 + 0x178]\n"
-" stx %%g6, [%%sp + 2047 + 0x180]\n"
-" stx %%g7, [%%sp + 2047 + 0x188]\n"
-" ! restore Linux mmu globals\n"
-" ldx [%%sp + 2047 + 0x238], %%o0\n"
-" ldx [%%sp + 2047 + 0x240], %%o1\n"
-" ldx [%%sp + 2047 + 0x248], %%l2\n"
-" ldx [%%sp + 2047 + 0x250], %%l3\n"
-" ldx [%%sp + 2047 + 0x258], %%l5\n"
-" ldx [%%sp + 2047 + 0x260], %%l6\n"
-" ldx [%%sp + 2047 + 0x268], %%l7\n"
-" ! switch to Linux tba\n"
-" sethi %%hi(sparc64_ttable_tl0), %%l1\n"
-" rdpr %%tba, %%l0 ! save PROM tba\n"
-" mov %%o0, %%g1\n"
-" mov %%o1, %%g2\n"
-" mov %%l2, %%g3\n"
-" mov %%l3, %%g4\n"
-" mov %%l5, %%g5\n"
-" mov %%l6, %%g6\n"
-" mov %%l7, %%g7\n"
-" wrpr %%l1, %%tba ! install Linux tba\n"
-" wrpr %%l4, 0, %%pstate ! restore PSTATE\n"
-" call prom_world\n"
-" mov %%g0, %%o0\n"
-" ldx [%%i1 + 0x000], %%l2\n"
-" call %%l2\n"
-" mov %%i0, %%o0\n"
-" mov %%o0, %%l1\n"
-" call prom_world\n"
-" or %%g0, 1, %%o0\n"
-" wrpr %%g0, 0x14, %%pstate ! interrupts off\n"
-" ! restore PROM mmu globals\n"
-" ldx [%%sp + 2047 + 0x158], %%o0\n"
-" ldx [%%sp + 2047 + 0x160], %%o1\n"
-" ldx [%%sp + 2047 + 0x168], %%l2\n"
-" ldx [%%sp + 2047 + 0x170], %%l3\n"
-" ldx [%%sp + 2047 + 0x178], %%l5\n"
-" ldx [%%sp + 2047 + 0x180], %%l6\n"
-" ldx [%%sp + 2047 + 0x188], %%l7\n"
-" wrpr %%g0, 0x414, %%pstate ! restore PROM mmu globals\n"
-" mov %%o0, %%g1\n"
-" mov %%o1, %%g2\n"
-" mov %%l2, %%g3\n"
-" mov %%l3, %%g4\n"
-" mov %%l5, %%g5\n"
-" mov %%l6, %%g6\n"
-" mov %%l7, %%g7\n"
-" wrpr %%l0, %%tba ! restore PROM tba\n"
-" wrpr %%g0, 0x14, %%pstate ! restore PROM normal globals\n"
-" ldx [%%sp + 2047 + 0x120], %%g1\n"
-" ldx [%%sp + 2047 + 0x128], %%g2\n"
-" ldx [%%sp + 2047 + 0x130], %%g3\n"
-" ldx [%%sp + 2047 + 0x138], %%g4\n"
-" ldx [%%sp + 2047 + 0x140], %%g5\n"
-" ldx [%%sp + 2047 + 0x148], %%g6\n"
-" ldx [%%sp + 2047 + 0x150], %%g7\n"
-" wrpr %%g0, 0x814, %%pstate ! restore PROM interrupt globals\n"
-" ldx [%%sp + 2047 + 0x0e8], %%g1\n"
-" ldx [%%sp + 2047 + 0x0f0], %%g2\n"
-" ldx [%%sp + 2047 + 0x0f8], %%g3\n"
-" ldx [%%sp + 2047 + 0x100], %%g4\n"
-" ldx [%%sp + 2047 + 0x108], %%g5\n"
-" ldx [%%sp + 2047 + 0x110], %%g6\n"
-" ldx [%%sp + 2047 + 0x118], %%g7\n"
-" wrpr %%g0, 0x15, %%pstate ! restore PROM alternate globals\n"
-" ldx [%%sp + 2047 + 0x0b0], %%g1\n"
-" ldx [%%sp + 2047 + 0x0b8], %%g2\n"
-" ldx [%%sp + 2047 + 0x0c0], %%g3\n"
-" ldx [%%sp + 2047 + 0x0c8], %%g4\n"
-" ldx [%%sp + 2047 + 0x0d0], %%g5\n"
-" ldx [%%sp + 2047 + 0x0d8], %%g6\n"
-" ldx [%%sp + 2047 + 0x0e0], %%g7\n"
-" wrpr %%l4, 0, %%pstate\n"
-" ret\n"
-" restore %%l1, 0, %%o0\n"
-" " : : "r" (&p1275buf), "i" (PSTATE_PRIV));
-}
+extern void prom_cif_interface(void);
+extern void prom_cif_callback(void);
/*
* This provides SMP safety on the p1275buf. prom_callback() drops this lock
* to allow recursive acquisition.
*/
-spinlock_t prom_entry_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(prom_entry_lock);
long p1275_cmd (char *service, long fmt, ...)
{
if (! current->files->fd[fd] ||
! current->files->fd[fd]->f_dentry ||
! (ino = current->files->fd[fd]->f_dentry->d_inode) ||
- ! ino->i_sock) {
+ ! S_ISSOCK(ino->i_mode)) {
spin_unlock(&current->files->file_lock);
return TBADF;
}
struct module_info *mi;
ino = filp->f_dentry->d_inode;
- if (! ino->i_sock)
+ if (!S_ISSOCK(ino->i_mode))
return -EBADF;
sock = filp->private_data;
if (! sock) {
unsigned int mask = 0;
ino=filp->f_dentry->d_inode;
- if (ino && ino->i_sock) {
+ if (ino && S_ISSOCK(ino->i_mode)) {
struct sol_socket_struct *sock;
sock = (struct sol_socket_struct*)filp->private_data;
if (sock && sock->pfirst) {
asmlinkage int solaris_ioctl(unsigned int fd, unsigned int cmd, u32 arg);
-static spinlock_t timod_pagelock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(timod_pagelock);
static char * page = NULL ;
#ifndef DEBUG_SOLARIS_KMALLOC
if(!filp) goto out;
ino = filp->f_dentry->d_inode;
- if (!ino) goto out;
-
- if (!ino->i_sock)
+ if (!ino || !S_ISSOCK(ino->i_mode))
goto out;
ctlptr = (struct strbuf __user *)A(arg1);
ino = filp->f_dentry->d_inode;
if (!ino) goto out;
- if (!ino->i_sock &&
+ if (!S_ISSOCK(ino->i_mode) &&
(imajor(ino) != 30 || iminor(ino) != 1))
goto out;
--- /dev/null
+config 64_BIT
+ bool
+ default n
+
+config TOP_ADDR
+ hex
+ default 0xc0000000 if !HOST_2G_2G
+ default 0x80000000 if HOST_2G_2G
+
+config 3_LEVEL_PGTABLES
+ bool "Three-level pagetables"
+ default n
+ help
+ Three-level pagetables will let UML have more than 4G of physical
+ memory. All the memory that can't be mapped directly will be treated
+ as high memory.
+
+config ARCH_HAS_SC_SIGNALS
+ bool
+ default y
+
+config ARCH_REUSE_HOST_VSYSCALL_AREA
+ bool
+ default y
--- /dev/null
+config 64_BIT
+ bool
+ default y
+
+config 3_LEVEL_PGTABLES
+ bool
+ default y
+
+config ARCH_HAS_SC_SIGNALS
+ bool
+ default n
+
+config ARCH_REUSE_HOST_VSYSCALL_AREA
+ bool
+ default n
# Licensed under the GPL
#
-core-y += $(ARCH_DIR)/os-$(OS)/
+# To get a definition of F_SETSIG
+USER_CFLAGS += -D_GNU_SOURCE -D_LARGEFILE64_SOURCE
+CFLAGS += -D_LARGEFILE64_SOURCE
# Licensed under the GPL
#
-MODE_INCLUDE += -I$(srctree)/$(ARCH_DIR)/kernel/tt/include
+# Copyright 2003 - 2004 Pathscale, Inc
+# Released under the GPL
+
+SUBARCH_LIBS := arch/um/sys-x86_64/
+START := 0x60000000
+
+CFLAGS += -U__$(SUBARCH)__ -fno-builtin
ARCH_USER_CFLAGS := -D__x86_64__
+
+ELF_ARCH := i386:x86-64
+ELF_FORMAT := elf64-x86-64
+
+SYS_UTIL_DIR := $(ARCH_DIR)/sys-x86_64/util
+SYS_DIR := $(ARCH_DIR)/include/sysdep-x86_64
+
+SYS_HEADERS = $(SYS_DIR)/sc.h $(SYS_DIR)/thread.h
+
+prepare: $(SYS_HEADERS)
+
+$(SYS_DIR)/sc.h: $(SYS_UTIL_DIR)/mk_sc
+ $(call filechk,gen_header)
+
+$(SYS_DIR)/thread.h: $(SYS_UTIL_DIR)/mk_thread
+ $(call filechk,gen_header)
+
+$(SYS_UTIL_DIR)/mk_sc: scripts_basic FORCE
+ $(Q)$(MAKE) $(build)=$(SYS_UTIL_DIR) $@
+
+$(SYS_UTIL_DIR)/mk_thread: scripts_basic $(ARCH_SYMLINKS) $(GEN_HEADERS) FORCE
+ $(Q)$(MAKE) $(build)=$(SYS_UTIL_DIR) $@
+
+CLEAN_FILES += $(SYS_HEADERS)
+
+LIBC_DIR := /usr/lib64
+
+export LIBC_DIR
return(uml_strdup(str));
}
-static inline int cow_seek_file(int fd, __u64 offset)
+static inline int cow_seek_file(int fd, unsigned long long offset)
{
return(os_seek_file(fd, offset));
}
-static inline int cow_file_size(char *file, __u64 *size_out)
+static inline int cow_file_size(char *file, unsigned long long *size_out)
{
return(os_file_size(file, size_out));
}
--- /dev/null
+#include <linux/init.h>
+#include <linux/console.h>
+
+#include "chan_user.h"
+
+/* ----------------------------------------------------------------------------- */
+/* trivial console driver -- simply dump everything to stderr */
+
+/*
+ * Don't register by default -- as this registers very early in the
+ * boot process it becomes the default console, and since this isn't a
+ * real tty driver, init is then unable to open /dev/console.
+ *
+ * In most cases this isn't what you want ...
+ */
+static int use_stderr_console = 0;
+
+static void stderr_console_write(struct console *console, const char *string,
+ unsigned len)
+{
+ generic_write(2 /* stderr */, string, len, NULL);
+}
+
+static struct console stderr_console = {
+ .name = "stderr",
+ .write = stderr_console_write,
+ .flags = CON_PRINTBUFFER,
+};
+
+static int __init stderr_console_init(void)
+{
+ if (use_stderr_console)
+ register_console(&stderr_console);
+ return 0;
+}
+console_initcall(stderr_console_init);
+
+static int stderr_setup(char *str)
+{
+ if (!str)
+ return 0;
+ use_stderr_console = simple_strtoul(str,&str,0);
+ return 1;
+}
+__setup("stderr=", stderr_setup);
#include "linux/tty.h"
#include "linux/list.h"
+#include "linux/console.h"
#include "chan_user.h"
+#include "line.h"
struct chan {
struct list_head list;
};
extern void chan_interrupt(struct list_head *chans, struct work_struct *task,
- struct tty_struct *tty, int irq, void *dev);
+ struct tty_struct *tty, int irq);
extern int parse_chan_pair(char *str, struct list_head *chans, int pri,
int device, struct chan_opts *opts);
extern int open_chan(struct list_head *chans);
int write_irq);
extern int console_write_chan(struct list_head *chans, const char *buf,
int len);
+extern int console_open_chan(struct line *line, struct console *co,
+ struct chan_opts *opts);
extern void close_chan(struct list_head *chans);
-extern void chan_enable_winch(struct list_head *chans, void *line);
-extern void enable_chan(struct list_head *chans, void *data);
+extern void chan_enable_winch(struct list_head *chans, struct tty_struct *tty);
+extern void enable_chan(struct list_head *chans, struct tty_struct *tty);
extern int chan_window_size(struct list_head *chans,
unsigned short *rows_out,
unsigned short *cols_out);
unsigned short *cols_out);
extern void generic_free(void *data);
-extern void register_winch(int fd, void *device_data);
-extern void register_winch_irq(int fd, int tty_fd, int pid, void *line);
+struct tty_struct;
+extern void register_winch(int fd, struct tty_struct *tty);
+extern void register_winch_irq(int fd, int tty_fd, int pid, struct tty_struct *tty);
#define __channel_help(fn, prefix) \
__uml_help(fn, prefix "[0-9]*=<channel description>\n" \
#ifndef __FRAME_KERN_H_
#define __FRAME_KERN_H_
-#include "frame.h"
-#include "sysdep/frame_kern.h"
+#define _S(nr) (1<<((nr)-1))
+#define _BLOCKABLE (~(_S(SIGKILL) | _S(SIGSTOP)))
extern int setup_signal_stack_sc(unsigned long stack_top, int sig,
struct k_sigaction *ka,
struct uml_net {
struct list_head list;
struct net_device *dev;
+ struct platform_device pdev;
int index;
unsigned char mac[ETH_ALEN];
int have_mac;
#ifndef __PROCESS_H__
#define __PROCESS_H__
-#include <asm/sigcontext.h>
+#include <signal.h>
extern void sig_handler(int sig, struct sigcontext sc);
extern void alarm_handler(int sig, struct sigcontext sc);
extern int ptrace_getregs(long pid, unsigned long *regs_out);
extern int ptrace_setregs(long pid, unsigned long *regs_in);
extern int ptrace_getfpregs(long pid, unsigned long *regs_out);
+extern int ptrace_setfpregs(long pid, unsigned long *regs);
extern void arch_enter_kernel(void *task, int pid);
extern void arch_leave_kernel(void *task, int pid);
extern void ptrace_pokeuser(unsigned long addr, unsigned long data);
#ifndef PTRACE_SYSEMU
#define PTRACE_SYSEMU 31
#endif
+#ifndef PTRACE_SYSEMU_SINGLESTEP
+#define PTRACE_SYSEMU_SINGLESTEP 32
+#endif
+
+/* On architectures that started to support PTRACE_O_TRACESYSGOOD
+ * in Linux 2.4, there are two different definitions of
+ * PTRACE_SETOPTIONS: Linux 2.4 uses 21 while Linux 2.6 uses 0x4200.
+ * For binary compatibility, 2.6 also supports the old "21", named
+ * PTRACE_OLDSETOPTIONS. On these architectures, UML must always use
+ * "21" to ensure the kernel runs on both 2.4 and 2.6 hosts without
+ * recompilation. So, we use PTRACE_OLDSETOPTIONS in UML.
+ * We also want to be able to build the kernel on 2.4, which doesn't
+ * have PTRACE_OLDSETOPTIONS. So, if it is missing, we declare
+ * PTRACE_OLDSETOPTIONS to be the same as PTRACE_SETOPTIONS.
+ *
+ * On architectures that start to support PTRACE_O_TRACESYSGOOD in
+ * Linux 2.6, PTRACE_OLDSETOPTIONS is never defined and also isn't
+ * supported by the host kernel. In that case, our trick lets us use
+ * the new 0x4200 with the name PTRACE_OLDSETOPTIONS.
+ */
+#ifndef PTRACE_OLDSETOPTIONS
+#define PTRACE_OLDSETOPTIONS PTRACE_SETOPTIONS
+#endif
void set_using_sysemu(int value);
int get_using_sysemu(void);
extern int sysemu_supported;
+#define SELECT_PTRACE_OPERATION(sysemu_mode, singlestep_mode) \
+ (((int[3][3] ) { \
+ { PTRACE_SYSCALL, PTRACE_SYSCALL, PTRACE_SINGLESTEP }, \
+ { PTRACE_SYSEMU, PTRACE_SYSEMU, PTRACE_SINGLESTEP }, \
+ { PTRACE_SYSEMU, PTRACE_SYSEMU_SINGLESTEP, PTRACE_SYSEMU_SINGLESTEP }}) \
+ [sysemu_mode][singlestep_mode])
+
#endif
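
The PTRACE_OLDSETOPTIONS note and the SELECT_PTRACE_OPERATION table above pick
the ptrace request to issue for each combination of sysemu support and
single-stepping. A hedged sketch of how a tracer resume path might consume the
table (pid and singlestep_mode are illustrative names, not taken from this
patch):

	int op = SELECT_PTRACE_OPERATION(get_using_sysemu(), singlestep_mode);

	if (ptrace(op, pid, 0, 0) < 0)
		perror("ptrace resume");	/* illustrative error handling */
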
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#ifndef __REGISTERS_H
+#define __REGISTERS_H
+
+#include "sysdep/ptrace.h"
+
+extern void init_thread_registers(union uml_pt_regs *to);
+extern void save_registers(int pid, union uml_pt_regs *regs);
+extern void restore_registers(int pid, union uml_pt_regs *regs);
+extern void init_registers(int pid);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
#include "uml-config.h"
#ifdef UML_CONFIG_MODE_TT
-#include "ptrace-tt.h"
+#include "sysdep/sc.h"
#endif
#ifdef UML_CONFIG_MODE_SKAS
-#include "ptrace-skas.h"
+
+/* syscall emulation path in ptrace */
+
+#ifndef PTRACE_SYSEMU
+#define PTRACE_SYSEMU 31
+#endif
+
+void set_using_sysemu(int value);
+int get_using_sysemu(void);
+extern int sysemu_supported;
+
+#include "skas_ptregs.h"
+
+#define HOST_FRAME_SIZE 17
+
+#define REGS_IP(r) ((r)[HOST_IP])
+#define REGS_SP(r) ((r)[HOST_SP])
+#define REGS_EFLAGS(r) ((r)[HOST_EFLAGS])
+#define REGS_EAX(r) ((r)[HOST_EAX])
+#define REGS_EBX(r) ((r)[HOST_EBX])
+#define REGS_ECX(r) ((r)[HOST_ECX])
+#define REGS_EDX(r) ((r)[HOST_EDX])
+#define REGS_ESI(r) ((r)[HOST_ESI])
+#define REGS_EDI(r) ((r)[HOST_EDI])
+#define REGS_EBP(r) ((r)[HOST_EBP])
+#define REGS_CS(r) ((r)[HOST_CS])
+#define REGS_SS(r) ((r)[HOST_SS])
+#define REGS_DS(r) ((r)[HOST_DS])
+#define REGS_ES(r) ((r)[HOST_ES])
+#define REGS_FS(r) ((r)[HOST_FS])
+#define REGS_GS(r) ((r)[HOST_GS])
+
+#define REGS_SET_SYSCALL_RETURN(r, res) REGS_EAX(r) = (res)
+
+#define REGS_RESTART_SYSCALL(r) IP_RESTART_SYSCALL(REGS_IP(r))
+
+#define REGS_SEGV_IS_FIXABLE(r) SEGV_IS_FIXABLE((r)->trap_type)
+
+#define REGS_FAULT_ADDR(r) ((r)->fault_addr)
+
+#define REGS_FAULT_WRITE(r) FAULT_WRITE((r)->fault_type)
+
+#endif
+#ifndef PTRACE_SYSEMU_SINGLESTEP
+#define PTRACE_SYSEMU_SINGLESTEP 32
#endif
#include "choose-mode.h"
#define FP_FRAME_SIZE (27)
#define FPX_FRAME_SIZE (128)
+#define MAX_REG_OFFSET (FRAME_SIZE_OFFSET)
+#define MAX_REG_NR (FRAME_SIZE)
+
#ifdef PTRACE_GETREGS
#define UM_HAVE_GETREGS
#endif
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#ifndef __I386_SIGNAL_H_
+#define __I386_SIGNAL_H_
+
+#include <signal.h>
+
+#define ARCH_GET_SIGCONTEXT(sc, sig) \
+ do sc = (struct sigcontext *) (&sig + 1); while(0)
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
[ __NR_multiplexer ] = sys_ni_syscall, \
[ __NR_mmap ] = old_mmap, \
[ __NR_madvise ] = sys_madvise, \
- [ __NR_mincore ] = sys_mincore,
+ [ __NR_mincore ] = sys_mincore, \
+ [ __NR_iopl ] = (syscall_handler_t *) sys_ni_syscall, \
+ [ __NR_utimes ] = (syscall_handler_t *) sys_utimes, \
+ [ __NR_fadvise64 ] = (syscall_handler_t *) sys_fadvise64,
-#define LAST_ARCH_SYSCALL __NR_mincore
+#define LAST_ARCH_SYSCALL __NR_fadvise64
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
--- /dev/null
+/*
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_SYSDEP_CHECKSUM_H
+#define __UM_SYSDEP_CHECKSUM_H
+
+#include "linux/string.h"
+#include "linux/in6.h"
+#include "asm/uaccess.h"
+
+extern unsigned int csum_partial_copy_from(const char *src, char *dst, int len,
+ int sum, int *err_ptr);
+extern unsigned csum_partial(const unsigned char *buff, unsigned len,
+ unsigned sum);
+
+/*
+ * Note: when you get a NULL pointer exception here this means someone
+ * passed in an incorrect kernel address to one of these functions.
+ *
+ * If you use these functions directly please don't forget the
+ * verify_area().
+ */
+
+static __inline__
+unsigned int csum_partial_copy_nocheck(const char *src, char *dst,
+ int len, int sum)
+{
+ memcpy(dst, src, len);
+ return(csum_partial(dst, len, sum));
+}
+
+static __inline__
+unsigned int csum_partial_copy_from_user(const char *src, char *dst,
+ int len, int sum, int *err_ptr)
+{
+ return csum_partial_copy_from(src, dst, len, sum, err_ptr);
+}
+
+/**
+ * csum_fold - Fold and invert a 32bit checksum.
+ * sum: 32bit unfolded sum
+ *
+ * Fold a 32bit running checksum to 16bit and invert it. This is usually
+ * the last step before putting a checksum into a packet.
+ * Make sure not to mix with 64bit checksums.
+ */
+static inline unsigned int csum_fold(unsigned int sum)
+{
+ __asm__(
+ " addl %1,%0\n"
+ " adcl $0xffff,%0"
+ : "=r" (sum)
+ : "r" (sum << 16), "0" (sum & 0xffff0000)
+ );
+ return (~sum) >> 16;
+}
+
+/**
+ * csum_tcpudp_nofold - Compute an IPv4 pseudo header checksum.
+ * @saddr: source address
+ * @daddr: destination address
+ * @len: length of packet
+ * @proto: ip protocol of packet
+ * @sum: initial sum to be added in (32bit unfolded)
+ *
+ * Returns the pseudo header checksum for the input data. Result is
+ * 32bit unfolded.
+ */
+static inline unsigned long
+csum_tcpudp_nofold(unsigned saddr, unsigned daddr, unsigned short len,
+ unsigned short proto, unsigned int sum)
+{
+ asm(" addl %1, %0\n"
+ " adcl %2, %0\n"
+ " adcl %3, %0\n"
+ " adcl $0, %0\n"
+ : "=r" (sum)
+ : "g" (daddr), "g" (saddr), "g" ((ntohs(len)<<16)+proto*256), "0" (sum));
+ return sum;
+}
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented
+ */
+static inline unsigned short int csum_tcpudp_magic(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum)
+{
+ return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
+}
+
+/**
+ * ip_fast_csum - Compute the IPv4 header checksum efficiently.
+ * iph: ipv4 header
+ * ihl: length of header / 4
+ */
+static inline unsigned short ip_fast_csum(unsigned char *iph, unsigned int ihl)
+{
+ unsigned int sum;
+
+ asm( " movl (%1), %0\n"
+ " subl $4, %2\n"
+ " jbe 2f\n"
+ " addl 4(%1), %0\n"
+ " adcl 8(%1), %0\n"
+ " adcl 12(%1), %0\n"
+ "1: adcl 16(%1), %0\n"
+ " lea 4(%1), %1\n"
+ " decl %2\n"
+ " jne 1b\n"
+ " adcl $0, %0\n"
+ " movl %0, %2\n"
+ " shrl $16, %0\n"
+ " addw %w2, %w0\n"
+ " adcl $0, %0\n"
+ " notl %0\n"
+ "2:"
+ /* Since the input registers which are loaded with iph and ihl
+ are modified, we must also specify them as outputs, or gcc
+ will assume they contain their original values. */
+ : "=r" (sum), "=r" (iph), "=r" (ihl)
+ : "1" (iph), "2" (ihl)
+ : "memory");
+ return(sum);
+}
+
+static inline unsigned add32_with_carry(unsigned a, unsigned b)
+{
+ asm("addl %2,%0\n\t"
+ "adcl $0,%0"
+ : "=r" (a)
+ : "0" (a), "r" (b));
+ return a;
+}
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
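
csum_fold() above is documented as folding a 32-bit running checksum to 16 bits
and inverting it. A portable C sketch of that step, for illustration only (the
header itself uses the x86 inline assembly shown above):

	static inline unsigned short csum_fold_sketch(unsigned int sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);	/* fold high half into low half */
		sum = (sum & 0xffff) + (sum >> 16);	/* absorb any carry of the fold */
		return (unsigned short)~sum;		/* one's-complement the result  */
	}
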
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#ifndef __SYSDEP_X86_64_PTRACE_H
+#define __SYSDEP_X86_64_PTRACE_H
+
+#include "uml-config.h"
+
+#ifdef UML_CONFIG_MODE_TT
+#include "sysdep/sc.h"
+#endif
+
+#ifdef UML_CONFIG_MODE_SKAS
+#include "skas_ptregs.h"
+
+#define REGS_IP(r) ((r)[HOST_IP])
+#define REGS_SP(r) ((r)[HOST_SP])
+
+#define REGS_RBX(r) ((r)[HOST_RBX])
+#define REGS_RCX(r) ((r)[HOST_RCX])
+#define REGS_RDX(r) ((r)[HOST_RDX])
+#define REGS_RSI(r) ((r)[HOST_RSI])
+#define REGS_RDI(r) ((r)[HOST_RDI])
+#define REGS_RBP(r) ((r)[HOST_RBP])
+#define REGS_RAX(r) ((r)[HOST_RAX])
+#define REGS_R8(r) ((r)[HOST_R8])
+#define REGS_R9(r) ((r)[HOST_R9])
+#define REGS_R10(r) ((r)[HOST_R10])
+#define REGS_R11(r) ((r)[HOST_R11])
+#define REGS_R12(r) ((r)[HOST_R12])
+#define REGS_R13(r) ((r)[HOST_R13])
+#define REGS_R14(r) ((r)[HOST_R14])
+#define REGS_R15(r) ((r)[HOST_R15])
+#define REGS_CS(r) ((r)[HOST_CS])
+#define REGS_EFLAGS(r) ((r)[HOST_EFLAGS])
+#define REGS_SS(r) ((r)[HOST_SS])
+
+#define HOST_FS_BASE 21
+#define HOST_GS_BASE 22
+#define HOST_DS 23
+#define HOST_ES 24
+#define HOST_FS 25
+#define HOST_GS 26
+
+#define REGS_FS_BASE(r) ((r)[HOST_FS_BASE])
+#define REGS_GS_BASE(r) ((r)[HOST_GS_BASE])
+#define REGS_DS(r) ((r)[HOST_DS])
+#define REGS_ES(r) ((r)[HOST_ES])
+#define REGS_FS(r) ((r)[HOST_FS])
+#define REGS_GS(r) ((r)[HOST_GS])
+
+#define REGS_ORIG_RAX(r) ((r)[HOST_ORIG_RAX])
+
+#define REGS_SET_SYSCALL_RETURN(r, res) REGS_RAX(r) = (res)
+
+#define REGS_RESTART_SYSCALL(r) IP_RESTART_SYSCALL(REGS_IP(r))
+
+#define REGS_SEGV_IS_FIXABLE(r) SEGV_IS_FIXABLE((r)->trap_type)
+
+#define REGS_FAULT_ADDR(r) ((r)->fault_addr)
+
+#define REGS_FAULT_WRITE(r) FAULT_WRITE((r)->fault_type)
+
+#define REGS_TRAP(r) ((r)->trap_type)
+
+#define REGS_ERR(r) ((r)->fault_type)
+
+#endif
+
+#include "choose-mode.h"
+
+/* XXX */
+union uml_pt_regs {
+#ifdef UML_CONFIG_MODE_TT
+ struct tt_regs {
+ long syscall;
+ unsigned long orig_rax;
+ void *sc;
+ } tt;
+#endif
+#ifdef UML_CONFIG_MODE_SKAS
+ struct skas_regs {
+ /* XXX */
+ unsigned long regs[27];
+ unsigned long fp[65];
+ unsigned long fault_addr;
+ unsigned long fault_type;
+ unsigned long trap_type;
+ long syscall;
+ int is_user;
+ } skas;
+#endif
+};
+
+#define EMPTY_UML_PT_REGS { }
+
+/* XXX */
+extern int mode_tt;
+
+#define UPT_RBX(r) CHOOSE_MODE(SC_RBX(UPT_SC(r)), REGS_RBX((r)->skas.regs))
+#define UPT_RCX(r) CHOOSE_MODE(SC_RCX(UPT_SC(r)), REGS_RCX((r)->skas.regs))
+#define UPT_RDX(r) CHOOSE_MODE(SC_RDX(UPT_SC(r)), REGS_RDX((r)->skas.regs))
+#define UPT_RSI(r) CHOOSE_MODE(SC_RSI(UPT_SC(r)), REGS_RSI((r)->skas.regs))
+#define UPT_RDI(r) CHOOSE_MODE(SC_RDI(UPT_SC(r)), REGS_RDI((r)->skas.regs))
+#define UPT_RBP(r) CHOOSE_MODE(SC_RBP(UPT_SC(r)), REGS_RBP((r)->skas.regs))
+#define UPT_RAX(r) CHOOSE_MODE(SC_RAX(UPT_SC(r)), REGS_RAX((r)->skas.regs))
+#define UPT_R8(r) CHOOSE_MODE(SC_R8(UPT_SC(r)), REGS_R8((r)->skas.regs))
+#define UPT_R9(r) CHOOSE_MODE(SC_R9(UPT_SC(r)), REGS_R9((r)->skas.regs))
+#define UPT_R10(r) CHOOSE_MODE(SC_R10(UPT_SC(r)), REGS_R10((r)->skas.regs))
+#define UPT_R11(r) CHOOSE_MODE(SC_R11(UPT_SC(r)), REGS_R11((r)->skas.regs))
+#define UPT_R12(r) CHOOSE_MODE(SC_R12(UPT_SC(r)), REGS_R12((r)->skas.regs))
+#define UPT_R13(r) CHOOSE_MODE(SC_R13(UPT_SC(r)), REGS_R13((r)->skas.regs))
+#define UPT_R14(r) CHOOSE_MODE(SC_R14(UPT_SC(r)), REGS_R14((r)->skas.regs))
+#define UPT_R15(r) CHOOSE_MODE(SC_R15(UPT_SC(r)), REGS_R15((r)->skas.regs))
+#define UPT_CS(r) CHOOSE_MODE(SC_CS(UPT_SC(r)), REGS_CS((r)->skas.regs))
+#define UPT_FS(r) CHOOSE_MODE(SC_FS(UPT_SC(r)), REGS_FS((r)->skas.regs))
+#define UPT_GS(r) CHOOSE_MODE(SC_GS(UPT_SC(r)), REGS_GS((r)->skas.regs))
+#define UPT_DS(r) CHOOSE_MODE(SC_DS(UPT_SC(r)), REGS_DS((r)->skas.regs))
+#define UPT_ES(r) CHOOSE_MODE(SC_ES(UPT_SC(r)), REGS_ES((r)->skas.regs))
+#define UPT_CS(r) CHOOSE_MODE(SC_CS(UPT_SC(r)), REGS_CS((r)->skas.regs))
+#define UPT_ORIG_RAX(r) \
+ CHOOSE_MODE((r)->tt.orig_rax, REGS_ORIG_RAX((r)->skas.regs))
+
+#define UPT_IP(r) CHOOSE_MODE(SC_IP(UPT_SC(r)), REGS_IP((r)->skas.regs))
+#define UPT_SP(r) CHOOSE_MODE(SC_SP(UPT_SC(r)), REGS_SP((r)->skas.regs))
+
+#define UPT_EFLAGS(r) \
+ CHOOSE_MODE(SC_EFLAGS(UPT_SC(r)), REGS_EFLAGS((r)->skas.regs))
+#define UPT_SC(r) ((r)->tt.sc)
+#define UPT_SYSCALL_NR(r) CHOOSE_MODE((r)->tt.syscall, (r)->skas.syscall)
+
+extern int user_context(unsigned long sp);
+
+#define UPT_IS_USER(r) \
+ CHOOSE_MODE(user_context(UPT_SP(r)), (r)->skas.is_user)
+
+#define UPT_SYSCALL_ARG1(r) UPT_RDI(r)
+#define UPT_SYSCALL_ARG2(r) UPT_RSI(r)
+#define UPT_SYSCALL_ARG3(r) UPT_RDX(r)
+#define UPT_SYSCALL_ARG4(r) UPT_R10(r)
+#define UPT_SYSCALL_ARG5(r) UPT_R8(r)
+#define UPT_SYSCALL_ARG6(r) UPT_R9(r)
+
+struct syscall_args {
+ unsigned long args[6];
+};
+
+#define SYSCALL_ARGS(r) ((struct syscall_args) \
+ { .args = { UPT_SYSCALL_ARG1(r), \
+ UPT_SYSCALL_ARG2(r), \
+ UPT_SYSCALL_ARG3(r), \
+ UPT_SYSCALL_ARG4(r), \
+ UPT_SYSCALL_ARG5(r), \
+ UPT_SYSCALL_ARG6(r) } } )
+
+#define UPT_REG(regs, reg) \
+ ({ unsigned long val; \
+ switch(reg){ \
+ case R8: val = UPT_R8(regs); break; \
+ case R9: val = UPT_R9(regs); break; \
+ case R10: val = UPT_R10(regs); break; \
+ case R11: val = UPT_R11(regs); break; \
+ case R12: val = UPT_R12(regs); break; \
+ case R13: val = UPT_R13(regs); break; \
+ case R14: val = UPT_R14(regs); break; \
+ case R15: val = UPT_R15(regs); break; \
+ case RIP: val = UPT_IP(regs); break; \
+ case RSP: val = UPT_SP(regs); break; \
+ case RAX: val = UPT_RAX(regs); break; \
+ case RBX: val = UPT_RBX(regs); break; \
+ case RCX: val = UPT_RCX(regs); break; \
+ case RDX: val = UPT_RDX(regs); break; \
+ case RSI: val = UPT_RSI(regs); break; \
+ case RDI: val = UPT_RDI(regs); break; \
+ case RBP: val = UPT_RBP(regs); break; \
+ case ORIG_RAX: val = UPT_ORIG_RAX(regs); break; \
+ case CS: val = UPT_CS(regs); break; \
+ case DS: val = UPT_DS(regs); break; \
+ case ES: val = UPT_ES(regs); break; \
+ case FS: val = UPT_FS(regs); break; \
+ case GS: val = UPT_GS(regs); break; \
+ case EFLAGS: val = UPT_EFLAGS(regs); break; \
+ default : \
+ panic("Bad register in UPT_REG : %d\n", reg); \
+ val = -1; \
+ } \
+ val; \
+ })
+
+
+#define UPT_SET(regs, reg, val) \
+	({ unsigned long __upt_val = (val); \
+	switch(reg){ \
+	case R8: UPT_R8(regs) = __upt_val; break; \
+	case R9: UPT_R9(regs) = __upt_val; break; \
+	case R10: UPT_R10(regs) = __upt_val; break; \
+	case R11: UPT_R11(regs) = __upt_val; break; \
+	case R12: UPT_R12(regs) = __upt_val; break; \
+	case R13: UPT_R13(regs) = __upt_val; break; \
+	case R14: UPT_R14(regs) = __upt_val; break; \
+	case R15: UPT_R15(regs) = __upt_val; break; \
+	case RIP: UPT_IP(regs) = __upt_val; break; \
+	case RSP: UPT_SP(regs) = __upt_val; break; \
+	case RAX: UPT_RAX(regs) = __upt_val; break; \
+	case RBX: UPT_RBX(regs) = __upt_val; break; \
+	case RCX: UPT_RCX(regs) = __upt_val; break; \
+	case RDX: UPT_RDX(regs) = __upt_val; break; \
+	case RSI: UPT_RSI(regs) = __upt_val; break; \
+	case RDI: UPT_RDI(regs) = __upt_val; break; \
+	case RBP: UPT_RBP(regs) = __upt_val; break; \
+	case ORIG_RAX: UPT_ORIG_RAX(regs) = __upt_val; break; \
+	case CS: UPT_CS(regs) = __upt_val; break; \
+	case DS: UPT_DS(regs) = __upt_val; break; \
+	case ES: UPT_ES(regs) = __upt_val; break; \
+	case FS: UPT_FS(regs) = __upt_val; break; \
+	case GS: UPT_GS(regs) = __upt_val; break; \
+	case EFLAGS: UPT_EFLAGS(regs) = __upt_val; break; \
+	default : \
+		panic("Bad register in UPT_SET : %d\n", reg); \
+		break; \
+	} \
+	__upt_val; \
+	})
+
+#define UPT_SET_SYSCALL_RETURN(r, res) \
+ CHOOSE_MODE(SC_SET_SYSCALL_RETURN(UPT_SC(r), (res)), \
+ REGS_SET_SYSCALL_RETURN((r)->skas.regs, (res)))
+
+#define UPT_RESTART_SYSCALL(r) \
+ CHOOSE_MODE(SC_RESTART_SYSCALL(UPT_SC(r)), \
+ REGS_RESTART_SYSCALL((r)->skas.regs))
+
+#define UPT_SEGV_IS_FIXABLE(r) \
+ CHOOSE_MODE(SC_SEGV_IS_FIXABLE(UPT_SC(r)), \
+ REGS_SEGV_IS_FIXABLE(&r->skas))
+
+#define UPT_FAULT_ADDR(r) \
+ CHOOSE_MODE(SC_FAULT_ADDR(UPT_SC(r)), REGS_FAULT_ADDR(&r->skas))
+
+#define UPT_FAULT_WRITE(r) \
+ CHOOSE_MODE(SC_FAULT_WRITE(UPT_SC(r)), REGS_FAULT_WRITE(&r->skas))
+
+#define UPT_TRAP(r) CHOOSE_MODE(SC_TRAP_TYPE(UPT_SC(r)), REGS_TRAP(&r->skas))
+#define UPT_ERR(r) CHOOSE_MODE(SC_FAULT_TYPE(UPT_SC(r)), REGS_ERR(&r->skas))
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
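The UPT_SYSCALL_ARG1..6 mapping above is just the standard x86_64 syscall calling
convention (arguments in rdi, rsi, rdx, r10, r8 and r9, number in orig_rax). As a
rough standalone illustration, assuming an x86_64 glibc host (none of the names
below come from this patch), the same registers can be read out of a traced child
with plain ptrace(2):

/* Standalone sketch, not part of the patch: dump a traced child's syscall
 * number and arguments at its next syscall stop.  The register choice mirrors
 * UPT_SYSCALL_NR and UPT_SYSCALL_ARG1..6 above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(void)
{
	struct user_regs_struct regs;
	pid_t pid = fork();

	if(pid == 0){
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);		/* let the parent take control */
		syscall(SYS_getpid);	/* something for the parent to see */
		_exit(0);
	}

	waitpid(pid, NULL, 0);			 /* child stopped by SIGSTOP */
	ptrace(PTRACE_SYSCALL, pid, NULL, NULL); /* run to next syscall stop */
	waitpid(pid, NULL, 0);

	ptrace(PTRACE_GETREGS, pid, NULL, &regs);
	printf("nr %llu args %llu %llu %llu %llu %llu %llu\n",
	       regs.orig_rax, regs.rdi, regs.rsi, regs.rdx,
	       regs.r10, regs.r8, regs.r9);

	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	return 0;
}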
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#ifndef __SYSDEP_X86_64_PTRACE_USER_H__
+#define __SYSDEP_X86_64_PTRACE_USER_H__
+
+#define __FRAME_OFFSETS
+#include <asm/ptrace.h>
+#undef __FRAME_OFFSETS
+
+#define PT_INDEX(off) ((off) / sizeof(unsigned long))
+
+#define PT_SYSCALL_NR(regs) ((regs)[PT_INDEX(ORIG_RAX)])
+#define PT_SYSCALL_NR_OFFSET (ORIG_RAX)
+
+#define PT_SYSCALL_ARG1(regs) (((unsigned long *) (regs))[PT_INDEX(RDI)])
+#define PT_SYSCALL_ARG1_OFFSET (RDI)
+
+#define PT_SYSCALL_ARG2(regs) (((unsigned long *) (regs))[PT_INDEX(RSI)])
+#define PT_SYSCALL_ARG2_OFFSET (RSI)
+
+#define PT_SYSCALL_ARG3(regs) (((unsigned long *) (regs))[PT_INDEX(RDX)])
+#define PT_SYSCALL_ARG3_OFFSET (RDX)
+
+#define PT_SYSCALL_ARG4(regs) (((unsigned long *) (regs))[PT_INDEX(RCX)])
+#define PT_SYSCALL_ARG4_OFFSET (RCX)
+
+#define PT_SYSCALL_ARG5(regs) (((unsigned long *) (regs))[PT_INDEX(R8)])
+#define PT_SYSCALL_ARG5_OFFSET (R8)
+
+#define PT_SYSCALL_ARG6(regs) (((unsigned long *) (regs))[PT_INDEX(R9)])
+#define PT_SYSCALL_ARG6_OFFSET (R9)
+
+#define PT_SYSCALL_RET_OFFSET (RAX)
+
+#define PT_IP_OFFSET (RIP)
+#define PT_IP(regs) ((regs)[PT_INDEX(RIP)])
+
+#define PT_SP_OFFSET (RSP)
+#define PT_SP(regs) ((regs)[PT_INDEX(RSP)])
+
+#define PT_ORIG_RAX_OFFSET (ORIG_RAX)
+#define PT_ORIG_RAX(regs) ((regs)[PT_INDEX(ORIG_RAX)])
+
+#define MAX_REG_OFFSET (FRAME_SIZE)
+#define MAX_REG_NR ((MAX_REG_OFFSET) / sizeof(unsigned long))
+
+/* x86_64 FC3 doesn't define this in /usr/include/linux/ptrace.h even though
+ * it's defined in the kernel's include/linux/ptrace.h. Additionally, use the
+ * 2.4 name and value for 2.4 host compatibility.
+ */
+#ifndef PTRACE_OLDSETOPTIONS
+#define PTRACE_OLDSETOPTIONS 21
+#endif
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#ifndef __X86_64_SIGNAL_H_
+#define __X86_64_SIGNAL_H_
+
+#define ARCH_GET_SIGCONTEXT(sc, sig_addr) \
+ do { \
+ struct ucontext *__uc; \
+ asm("movq %%rdx, %0" : "=r" (__uc)); \
+ sc = (struct sigcontext *) &__uc->uc_mcontext; \
+ } while(0)
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
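ARCH_GET_SIGCONTEXT above grabs the sigcontext by reading %rdx at handler entry,
i.e. the third argument the kernel passes to an SA_SIGINFO handler before anything
has clobbered it. A minimal standalone sketch of the portable route to the same
state, assuming an x86_64 glibc host (REG_RIP comes from <ucontext.h> and needs
_GNU_SOURCE; nothing here is part of the patch):

#define _GNU_SOURCE
#include <stdio.h>
#include <signal.h>
#include <ucontext.h>

static void handler(int sig, siginfo_t *info, void *uc_ptr)
{
	ucontext_t *uc = uc_ptr;

	/* printf() is not async-signal-safe; fine for a one-shot demo */
	printf("signal %d, interrupted RIP = 0x%llx\n", sig,
	       (unsigned long long) uc->uc_mcontext.gregs[REG_RIP]);
}

int main(void)
{
	struct sigaction sa;

	sa.sa_sigaction = handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGUSR1, &sa, NULL);

	raise(SIGUSR1);
	return 0;
}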
[ __NR_iopl ] = (syscall_handler_t *) sys_ni_syscall, \
[ __NR_set_thread_area ] = (syscall_handler_t *) sys_ni_syscall, \
[ __NR_get_thread_area ] = (syscall_handler_t *) sys_ni_syscall, \
+ [ __NR_remap_file_pages ] = (syscall_handler_t *) sys_remap_file_pages, \
[ __NR_semtimedop ] = (syscall_handler_t *) sys_semtimedop, \
+ [ __NR_fadvise64 ] = (syscall_handler_t *) sys_fadvise64, \
+ [ 223 ] = (syscall_handler_t *) sys_ni_syscall, \
+ [ __NR_utimes ] = (syscall_handler_t *) sys_utimes, \
+ [ __NR_vserver ] = (syscall_handler_t *) sys_ni_syscall, \
[ 251 ] = (syscall_handler_t *) sys_ni_syscall,
#define LAST_ARCH_SYSCALL 251
#define __UML_UACCESS_H__
extern int __do_copy_to_user(void *to, const void *from, int n,
- void **fault_addr, void **fault_catcher);
+ void **fault_addr, void **fault_catcher);
extern unsigned long __do_user_copy(void *to, const void *from, int n,
void **fault_addr, void **fault_catcher,
void (*op)(void *to, const void *from,
#include "linux/errno.h"
#include "linux/module.h"
-extern unsigned int arch_csum_partial(const char *buff, int len, int sum);
+unsigned int arch_csum_partial(const unsigned char *buff, int len, int sum);
-extern unsigned int csum_partial(char *buff, int len, int sum)
+unsigned int csum_partial(unsigned char *buff, int len, int sum)
{
- return(arch_csum_partial(buff, len, sum));
+ return arch_csum_partial(buff, len, sum);
}
EXPORT_SYMBOL(csum_partial);
-unsigned int csum_partial_copy_to(const char *src, char *dst, int len,
- int sum, int *err_ptr)
+unsigned int csum_partial_copy_to(const unsigned char *src, char __user *dst,
+ int len, int sum, int *err_ptr)
{
if(copy_to_user(dst, src, len)){
*err_ptr = -EFAULT;
return(arch_csum_partial(src, len, sum));
}
-unsigned int csum_partial_copy_from(const char *src, char *dst, int len,
- int sum, int *err_ptr)
+unsigned int csum_partial_copy_from(const unsigned char __user *src, char *dst,
+ int len, int sum, int *err_ptr)
{
if(copy_from_user(dst, src, len)){
*err_ptr = -EFAULT;
return(-1);
}
- return(arch_csum_partial(dst, len, sum));
+ return arch_csum_partial(dst, len, sum);
}
/*
SECTIONS
{
+ PROVIDE (__executable_start = START);
. = START + SIZEOF_HEADERS;
.interp : { *(.interp) }
+ /* Used in arch/um/kernel/mem.c. Any memory between START and __binary_start
+ * is remapped.*/
__binary_start = .;
. = ALIGN(4096); /* Init code and data */
_stext = .;
return(len);
}
-static int write_proc_exitcode(struct file *file, const char *buffer,
+static int write_proc_exitcode(struct file *file, const char __user *buffer,
unsigned long count, void *data)
{
char *end, buf[sizeof("nnnnn\0")];
extern int uml_exitcode;
+extern void scan_elf_aux( char **envp);
+
int main(int argc, char **argv, char **envp)
{
char **new_argv;
set_handler(SIGTERM, last_ditch_exit, SA_ONESHOT | SA_NODEFER, -1);
set_handler(SIGHUP, last_ditch_exit, SA_ONESHOT | SA_NODEFER, -1);
+ scan_elf_aux( envp);
+
do_uml_initcalls();
ret = linux_main(argc, argv);
+ /* Disable SIGPROF - I have no idea why libc doesn't do this or turn
+ * off the profiling time, but UML dies with a SIGPROF just before
+ * exiting when profiling is active.
+ */
+ change_sig(SIGPROF, 0);
+
/* Reboot */
if(ret){
int err;
printf("\n");
- /* Let any pending signals fire, then disable them. This
- * ensures that they won't be delivered after the exec, when
- * they are definitely not expected.
- */
- unblock_signals();
+ /* stop timers and set SIG*ALRM to be ignored */
disable_timer();
+
+ /* disable SIGIO for the fds and set SIGIO to be ignored */
err = deactivate_all_fds();
if(err)
printf("deactivate_all_fds failed, errno = %d\n",
-err);
+ /* Let any pending signals fire now. This ensures
+ * that they won't be delivered after the exec, when
+ * they are definitely not expected.
+ */
+ unblock_signals();
+
execvp(new_argv[0], new_argv);
perror("Failed to exec kernel");
ret = 1;
* physical memory - kmalloc/kfree
* kernel virtual memory - vmalloc/vfree
* anywhere else - malloc/free
- * If kmalloc is not yet possible, then the kernel memory regions
- * may not be set up yet, and the variables not initialized. So,
- * free is called.
+ * If kmalloc is not yet possible, then either high_physmem and/or
+ * end_vm are still 0 (as at startup), in which case we call free, or
+ * they have been set, but addr has not been allocated from those
+ * areas. So, in both cases __real_free is called.
*
* CAN_KMALLOC is checked because it would be bad to free a buffer
* with kmalloc/vmalloc after they have been turned off during
* shutdown.
+ * XXX: However, we sometimes turn CAN_KMALLOC off temporarily, so
+ * there is a possibility of memory leaks.
*/
if((addr >= uml_physmem) && (addr < high_physmem)){
while((mask = va_arg(ap, int)) != -1){
sigaddset(&action.sa_mask, mask);
}
+ va_end(ap);
action.sa_flags = flags;
action.sa_restorer = NULL;
if(sigaction(sig, &action, NULL) < 0)
extern int have_fpx_regs;
extern void user_time_init_skas(void);
-extern int copy_sc_from_user_skas(int pid, union uml_pt_regs *regs,
- void *from_ptr);
-extern int copy_sc_to_user_skas(int pid, void *to_ptr, void *fp,
- union uml_pt_regs *regs,
- unsigned long fault_addr, int fault_type);
extern void sig_handler_common_skas(int sig, void *sc_ptr);
extern void halt_skas(void);
extern void reboot_skas(void);
#define __SKAS_UACCESS_H
#include "asm/errno.h"
+#include "asm/fixmap.h"
#define access_ok_skas(type, addr, size) \
((segment_eq(get_fs(), KERNEL_DS)) || \
(((unsigned long) (addr) < TASK_SIZE) && \
- ((unsigned long) (addr) + (size) <= TASK_SIZE)))
+ ((unsigned long) (addr) + (size) <= TASK_SIZE)) || \
+ ((type == VERIFY_READ ) && \
+ ((unsigned long) (addr) >= FIXADDR_USER_START) && \
+ ((unsigned long) (addr) + (size) <= FIXADDR_USER_END) && \
+ ((unsigned long) (addr) + (size) >= (unsigned long)(addr))))
static inline int verify_area_skas(int type, const void * addr,
unsigned long size)
/* Round up to the nearest 4M */
unsigned long top = ROUND_4M((unsigned long) &arg);
+#ifdef CONFIG_HOST_TASK_SIZE
+ *host_size_out = CONFIG_HOST_TASK_SIZE;
+ *task_size_out = CONFIG_HOST_TASK_SIZE;
+#else
*host_size_out = top;
*task_size_out = top;
+#endif
return(((unsigned long) set_task_sizes_skas) & ~0xffffff);
}
/*
* Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Copyright 2003 PathScale, Inc.
* Licensed under the GPL
*/
unsigned long end_addr, int force)
{
pgd_t *npgd;
+ pud_t *npud;
pmd_t *npmd;
pte_t *npte;
- unsigned long addr;
+ unsigned long addr, end;
int r, w, x, err, fd;
if(mm == NULL) return;
fd = mm->context.skas.mm_fd;
for(addr = start_addr; addr < end_addr;){
npgd = pgd_offset(mm, addr);
- npmd = pmd_offset(npgd, addr);
- if(pmd_present(*npmd)){
- npte = pte_offset_kernel(npmd, addr);
- r = pte_read(*npte);
- w = pte_write(*npte);
- x = pte_exec(*npte);
- if(!pte_dirty(*npte)) w = 0;
- if(!pte_young(*npte)){
- r = 0;
- w = 0;
- }
- if(force || pte_newpage(*npte)){
- err = unmap(fd, (void *) addr, PAGE_SIZE);
+ if(!pgd_present(*npgd)){
+ if(force || pgd_newpage(*npgd)){
+ end = addr + PGDIR_SIZE;
+ if(end > end_addr)
+ end = end_addr;
+ err = unmap(fd, (void *) addr, end - addr);
if(err < 0)
panic("munmap failed, errno = %d\n",
-err);
- if(pte_present(*npte))
- map(fd, addr,
- pte_val(*npte) & PAGE_MASK,
- PAGE_SIZE, r, w, x);
+ pgd_mkuptodate(*npgd);
}
- else if(pte_newprot(*npte)){
- protect(fd, addr, PAGE_SIZE, r, w, x, 1);
+ addr += PGDIR_SIZE;
+ continue;
+ }
+
+ npud = pud_offset(npgd, addr);
+ if(!pud_present(*npud)){
+ if(force || pud_newpage(*npud)){
+ end = addr + PUD_SIZE;
+ if(end > end_addr)
+ end = end_addr;
+ err = unmap(fd, (void *) addr, end - addr);
+ if(err < 0)
+ panic("munmap failed, errno = %d\n",
+ -err);
+ pud_mkuptodate(*npud);
}
- *npte = pte_mkuptodate(*npte);
- addr += PAGE_SIZE;
+ addr += PUD_SIZE;
+ continue;
}
- else {
+
+ npmd = pmd_offset(npud, addr);
+ if(!pmd_present(*npmd)){
if(force || pmd_newpage(*npmd)){
- err = unmap(fd, (void *) addr, PMD_SIZE);
+ end = addr + PMD_SIZE;
+ if(end > end_addr)
+ end = end_addr;
+ err = unmap(fd, (void *) addr, end - addr);
if(err < 0)
panic("munmap failed, errno = %d\n",
-err);
pmd_mkuptodate(*npmd);
}
addr += PMD_SIZE;
+ continue;
+ }
+
+ npte = pte_offset_kernel(npmd, addr);
+ r = pte_read(*npte);
+ w = pte_write(*npte);
+ x = pte_exec(*npte);
+ if(!pte_dirty(*npte))
+ w = 0;
+ if(!pte_young(*npte)){
+ r = 0;
+ w = 0;
+ }
+ if(force || pte_newpage(*npte)){
+ err = unmap(fd, (void *) addr, PAGE_SIZE);
+ if(err < 0)
+ panic("munmap failed, errno = %d\n", -err);
+ if(pte_present(*npte))
+ map(fd, addr, pte_val(*npte) & PAGE_MASK,
+ PAGE_SIZE, r, w, x);
}
+ else if(pte_newprot(*npte))
+ protect(fd, addr, PAGE_SIZE, r, w, x, 1);
+
+ *npte = pte_mkuptodate(*npte);
+ addr += PAGE_SIZE;
}
}
{
struct mm_struct *mm;
pgd_t *pgd;
+ pud_t *pud;
pmd_t *pmd;
pte_t *pte;
- unsigned long addr;
+ unsigned long addr, last;
int updated = 0, err;
mm = &init_mm;
for(addr = start; addr < end;){
pgd = pgd_offset(mm, addr);
- pmd = pmd_offset(pgd, addr);
- if(pmd_present(*pmd)){
- pte = pte_offset_kernel(pmd, addr);
- if(!pte_present(*pte) || pte_newpage(*pte)){
+ pud = pud_offset(pgd, addr);
+ pmd = pmd_offset(pud, addr);
+ if(!pgd_present(*pgd)){
+ if(pgd_newpage(*pgd)){
updated = 1;
+ last = addr + PGDIR_SIZE;
+ if(last > end)
+ last = end;
err = os_unmap_memory((void *) addr,
- PAGE_SIZE);
+ last - addr);
if(err < 0)
panic("munmap failed, errno = %d\n",
-err);
- if(pte_present(*pte))
- map_memory(addr,
- pte_val(*pte) & PAGE_MASK,
- PAGE_SIZE, 1, 1, 1);
}
- else if(pte_newprot(*pte)){
+ addr += PGDIR_SIZE;
+ continue;
+ }
+
+ pud = pud_offset(pgd, addr);
+ if(!pud_present(*pud)){
+ if(pud_newpage(*pud)){
updated = 1;
- protect_memory(addr, PAGE_SIZE, 1, 1, 1, 1);
+ last = addr + PUD_SIZE;
+ if(last > end)
+ last = end;
+ err = os_unmap_memory((void *) addr,
+ last - addr);
+ if(err < 0)
+ panic("munmap failed, errno = %d\n",
+ -err);
}
- addr += PAGE_SIZE;
+ addr += PUD_SIZE;
+ continue;
}
- else {
+
+ pmd = pmd_offset(pud, addr);
+ if(!pmd_present(*pmd)){
if(pmd_newpage(*pmd)){
updated = 1;
- err = os_unmap_memory((void *) addr, PMD_SIZE);
+ last = addr + PMD_SIZE;
+ if(last > end)
+ last = end;
+ err = os_unmap_memory((void *) addr,
+ last - addr);
if(err < 0)
panic("munmap failed, errno = %d\n",
-err);
}
addr += PMD_SIZE;
+ continue;
+ }
+
+ pte = pte_offset_kernel(pmd, addr);
+ if(!pte_present(*pte) || pte_newpage(*pte)){
+ updated = 1;
+ err = os_unmap_memory((void *) addr, PAGE_SIZE);
+ if(err < 0)
+ panic("munmap failed, errno = %d\n", -err);
+ if(pte_present(*pte))
+ map_memory(addr, pte_val(*pte) & PAGE_MASK,
+ PAGE_SIZE, 1, 1, 1);
}
+ else if(pte_newprot(*pte)){
+ updated = 1;
+ protect_memory(addr, PAGE_SIZE, 1, 1, 1, 1);
+ }
+ addr += PAGE_SIZE;
}
}
--- /dev/null
+#include <stdio.h>
+#include <asm/ptrace.h>
+#include <asm/user.h>
+
+#define PRINT_REG(name, val) printf("#define HOST_%s %d\n", (name), (val))
+
+int main(int argc, char **argv)
+{
+ printf("/* Automatically generated by "
+ "arch/um/kernel/skas/util/mk_ptregs */\n");
+ printf("\n");
+ printf("#ifndef __SKAS_PT_REGS_\n");
+ printf("#define __SKAS_PT_REGS_\n");
+ printf("\n");
+ printf("#define HOST_FRAME_SIZE %d\n", FRAME_SIZE);
+ printf("#define HOST_FP_SIZE %d\n",
+ sizeof(struct user_i387_struct) / sizeof(unsigned long));
+ printf("#define HOST_XFP_SIZE %d\n",
+ sizeof(struct user_fxsr_struct) / sizeof(unsigned long));
+
+ PRINT_REG("IP", EIP);
+ PRINT_REG("SP", UESP);
+ PRINT_REG("EFLAGS", EFL);
+ PRINT_REG("EAX", EAX);
+ PRINT_REG("EBX", EBX);
+ PRINT_REG("ECX", ECX);
+ PRINT_REG("EDX", EDX);
+ PRINT_REG("ESI", ESI);
+ PRINT_REG("EDI", EDI);
+ PRINT_REG("EBP", EBP);
+ PRINT_REG("CS", CS);
+ PRINT_REG("SS", SS);
+ PRINT_REG("DS", DS);
+ PRINT_REG("FS", FS);
+ PRINT_REG("ES", ES);
+ PRINT_REG("GS", GS);
+ printf("\n");
+ printf("#endif\n");
+ return(0);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#include <stdio.h>
+#define __FRAME_OFFSETS
+#include <asm/ptrace.h>
+
+#define PRINT_REG(name, val) \
+ printf("#define HOST_%s (%d / sizeof(unsigned long))\n", (name), (val))
+
+int main(int argc, char **argv)
+{
+ printf("/* Automatically generated by "
+ "arch/um/kernel/skas/util/mk_ptregs */\n");
+ printf("\n");
+ printf("#ifndef __SKAS_PT_REGS_\n");
+ printf("#define __SKAS_PT_REGS_\n");
+ printf("#define HOST_FRAME_SIZE (%d / sizeof(unsigned long))\n",
+ FRAME_SIZE);
+ PRINT_REG("RBX", RBX);
+ PRINT_REG("RCX", RCX);
+ PRINT_REG("RDI", RDI);
+ PRINT_REG("RSI", RSI);
+ PRINT_REG("RDX", RDX);
+ PRINT_REG("RBP", RBP);
+ PRINT_REG("RAX", RAX);
+ PRINT_REG("R8", R8);
+ PRINT_REG("R9", R9);
+ PRINT_REG("R10", R10);
+ PRINT_REG("R11", R11);
+ PRINT_REG("R12", R12);
+ PRINT_REG("R13", R13);
+ PRINT_REG("R14", R14);
+ PRINT_REG("R15", R15);
+ PRINT_REG("ORIG_RAX", ORIG_RAX);
+ PRINT_REG("CS", CS);
+ PRINT_REG("SS", SS);
+ PRINT_REG("EFLAGS", EFLAGS);
+#if 0
+ PRINT_REG("FS", FS);
+ PRINT_REG("GS", GS);
+ PRINT_REG("DS", DS);
+ PRINT_REG("ES", ES);
+#endif
+
+ PRINT_REG("IP", RIP);
+ PRINT_REG("SP", RSP);
+ printf("#define HOST_FP_SIZE 0\n");
+ printf("#define HOST_XFP_SIZE 0\n");
+ printf("\n");
+ printf("\n");
+ printf("#endif\n");
+ return(0);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
return(pgd_offset(mm, address));
}
-pmd_t *pmd_offset_proc(pgd_t *pgd, unsigned long address)
+pud_t *pud_offset_proc(pgd_t *pgd, unsigned long address)
{
- return(pmd_offset(pgd, address));
+ return(pud_offset(pgd, address));
+}
+
+pmd_t *pmd_offset_proc(pud_t *pud, unsigned long address)
+{
+ return(pmd_offset(pud, address));
}
pte_t *pte_offset_proc(pmd_t *pmd, unsigned long address)
pte_t *addr_pte(struct task_struct *task, unsigned long addr)
{
- return(pte_offset_kernel(pmd_offset(pgd_offset(task->mm, addr), addr),
- addr));
+ pgd_t *pgd = pgd_offset(task->mm, addr);
+ pud_t *pud = pud_offset(pgd, addr);
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ return(pte_offset_map(pmd, addr));
}
/*
void signal_usr1(int sig)
{
if(debugger_pid != -1){
- printk(UM_KERN_ERR "The debugger is already running\n");
+ printf("The debugger is already running\n");
return;
}
debugger_pid = start_debugger(linux_prog, 0, 0, &debugger_fd);
void child_signal(pid_t pid, int status){ }
int init_ptrace_proxy(int idle_pid, int startup, int stop)
{
- printk(UM_KERN_ERR "debug requested when CONFIG_PT_PROXY is off\n");
+ printf("debug requested when CONFIG_PT_PROXY is off\n");
kill_child_dead(idle_pid);
exit(1);
}
void signal_usr1(int sig)
{
- printk(UM_KERN_ERR "debug requested when CONFIG_PT_PROXY is off\n");
+ printf("debug requested when CONFIG_PT_PROXY is off\n");
}
int attach_debugger(int idle_pid, int pid, int stop)
{
- printk(UM_KERN_ERR "attach_debugger called when CONFIG_PT_PROXY "
+ printf("attach_debugger called when CONFIG_PT_PROXY "
"is off\n");
return(-1);
}
extern int tracer(int (*init_proc)(void *), void *sp);
extern void user_time_init_tt(void);
-extern int copy_sc_from_user_tt(void *to_ptr, void *from_ptr, void *data);
-extern int copy_sc_to_user_tt(void *to_ptr, void *fp, void *from_ptr,
- void *data);
extern void sig_handler_common_tt(int sig, void *sc);
extern void syscall_handler_tt(int sig, union uml_pt_regs *regs);
extern void reboot_tt(void);
extern int is_tracing(void *task);
extern void syscall_handler(int sig, union uml_pt_regs *regs);
extern void exit_kernel(int pid, void *task);
-extern int do_syscall(void *task, int pid, int local_using_sysemu);
+extern void do_syscall(void *task, int pid, int local_using_sysemu);
+extern void do_sigtrap(void *task);
extern int is_valid_pid(int pid);
extern void remap_data(void *segment_start, void *segment_end, int w);
SECTIONS
{
+ /*This must contain the right address - not quite the default ELF one.*/
+ PROVIDE (__executable_start = START);
. = START + SIZEOF_HEADERS;
+ /* Used in arch/um/kernel/mem.c. Any memory between START and __binary_start
+ * is remapped.*/
__binary_start = .;
#ifdef MODE_TT
.thread_private : {
}
. = ALIGN(4096);
.remap : { arch/um/kernel/tt/unmap_fin.o (.text) }
-#endif
+
+ /* We want it only if we are in MODE_TT. In both cases, however, when MODE_TT
+ * is off the resulting binary segfaults.*/
. = ALIGN(4096); /* Init code and data */
+#endif
+
_stext = .;
__init_begin = .;
.init.text : {
--- /dev/null
+/*
+ * arch/um/kernel/elf_aux.c
+ *
+ * Scan the Elf auxiliary vector provided by the host to extract
+ * information about vsyscall-page, etc.
+ *
+ * Copyright (C) 2004 Fujitsu Siemens Computers GmbH
+ * Author: Bodo Stroesser (bodo.stroesser@fujitsu-siemens.com)
+ */
+#include <elf.h>
+#include <stddef.h>
+#include "init.h"
+#include "elf_user.h"
+
+#if ELF_CLASS == ELFCLASS32
+typedef Elf32_auxv_t elf_auxv_t;
+#else
+typedef Elf64_auxv_t elf_auxv_t;
+#endif
+
+char * elf_aux_platform;
+long elf_aux_hwcap;
+
+unsigned long vsyscall_ehdr;
+unsigned long vsyscall_end;
+
+unsigned long __kernel_vsyscall;
+
+__init void scan_elf_aux( char **envp)
+{
+ long page_size = 0;
+ elf_auxv_t * auxv;
+
+ while ( *envp++ != NULL) ;
+
+ for ( auxv = (elf_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
+ switch ( auxv->a_type ) {
+ case AT_SYSINFO:
+ __kernel_vsyscall = auxv->a_un.a_val;
+ break;
+ case AT_SYSINFO_EHDR:
+ vsyscall_ehdr = auxv->a_un.a_val;
+ break;
+ case AT_HWCAP:
+ elf_aux_hwcap = auxv->a_un.a_val;
+ break;
+ case AT_PLATFORM:
+ elf_aux_platform = auxv->a_un.a_ptr;
+ break;
+ case AT_PAGESZ:
+ page_size = auxv->a_un.a_val;
+ break;
+ }
+ }
+ if ( ! __kernel_vsyscall || ! vsyscall_ehdr ||
+ ! elf_aux_hwcap || ! elf_aux_platform ||
+ ! page_size || (vsyscall_ehdr % page_size) ) {
+ __kernel_vsyscall = 0;
+ vsyscall_ehdr = 0;
+ elf_aux_hwcap = 0;
+ elf_aux_platform = "i586";
+ }
+ else {
+ vsyscall_end = vsyscall_ehdr + page_size;
+ }
+}
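For comparison, the auxiliary vector that scan_elf_aux() walks is visible to any
host process in the same place, immediately after the environment block. A minimal
standalone sketch (not part of the patch), assuming the usual Linux/glibc convention
that main() may take envp as a third argument:

#include <stdio.h>
#include <elf.h>

int main(int argc, char **argv, char **envp)
{
	Elf64_auxv_t *auxv;	/* Elf32_auxv_t on 32-bit hosts */

	while(*envp++ != NULL)
		;		/* the auxv sits just past the environment */

	for(auxv = (Elf64_auxv_t *) envp; auxv->a_type != AT_NULL; auxv++){
		if(auxv->a_type == AT_PAGESZ)
			printf("AT_PAGESZ       %lu\n",
			       (unsigned long) auxv->a_un.a_val);
		else if(auxv->a_type == AT_SYSINFO_EHDR)
			printf("AT_SYSINFO_EHDR 0x%lx\n",
			       (unsigned long) auxv->a_un.a_val);
		else if(auxv->a_type == AT_PLATFORM)
			printf("AT_PLATFORM     %s\n",
			       (char *) auxv->a_un.a_val);
	}
	return 0;
}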
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#include <signal.h>
+#include "time_user.h"
+#include "mode.h"
+#include "sysdep/signal.h"
+
+void sig_handler(int sig)
+{
+ struct sigcontext *sc;
+
+ ARCH_GET_SIGCONTEXT(sc, sig);
+ CHOOSE_MODE_PROC(sig_handler_common_tt, sig_handler_common_skas,
+ sig, sc);
+}
+
+extern int timer_irq_inited;
+
+void alarm_handler(int sig)
+{
+ struct sigcontext *sc;
+
+ ARCH_GET_SIGCONTEXT(sc, sig);
+ if(!timer_irq_inited) return;
+
+ if(sig == SIGALRM)
+ switch_timers(0);
+
+ CHOOSE_MODE_PROC(sig_handler_common_tt, sig_handler_common_skas,
+ sig, sc);
+
+ if(sig == SIGALRM)
+ switch_timers(1);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#
+# Copyright (C) 2000 Jeff Dike (jdike@karaya.com)
+# Licensed under the GPL
+#
+
+obj-$(CONFIG_MODE_SKAS) = registers.o
+
+USER_OBJS := $(foreach file,$(obj-y),$(obj)/$(file))
+
+$(USER_OBJS) : %.o: %.c
+ $(CC) $(CFLAGS_$(notdir $@)) $(USER_CFLAGS) -c -o $@ $<
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#include <errno.h>
+#include <string.h>
+#include <sys/ptrace.h>
+#include "sysdep/ptrace.h"
+#include "uml-config.h"
+#include "skas_ptregs.h"
+#include "registers.h"
+#include "user.h"
+
+/* These are set once at boot time and not changed thereafter */
+
+static unsigned long exec_regs[HOST_FRAME_SIZE];
+static unsigned long exec_fp_regs[HOST_FP_SIZE];
+static unsigned long exec_fpx_regs[HOST_XFP_SIZE];
+static int have_fpx_regs = 1;
+
+void init_thread_registers(union uml_pt_regs *to)
+{
+ memcpy(to->skas.regs, exec_regs, sizeof(to->skas.regs));
+ memcpy(to->skas.fp, exec_fp_regs, sizeof(to->skas.fp));
+ if(have_fpx_regs)
+ memcpy(to->skas.xfp, exec_fpx_regs, sizeof(to->skas.xfp));
+}
+
+static int move_registers(int pid, int int_op, union uml_pt_regs *regs,
+ int fp_op, unsigned long *fp_regs)
+{
+ if(ptrace(int_op, pid, 0, regs->skas.regs) < 0)
+ return(-errno);
+
+ if(ptrace(fp_op, pid, 0, fp_regs) < 0)
+ return(-errno);
+
+ return(0);
+}
+
+void save_registers(int pid, union uml_pt_regs *regs)
+{
+ unsigned long *fp_regs;
+ int err, fp_op;
+
+ if(have_fpx_regs){
+ fp_op = PTRACE_GETFPXREGS;
+ fp_regs = regs->skas.xfp;
+ }
+ else {
+ fp_op = PTRACE_GETFPREGS;
+ fp_regs = regs->skas.fp;
+ }
+
+ err = move_registers(pid, PTRACE_GETREGS, regs, fp_op, fp_regs);
+ if(err)
+ panic("save_registers - saving registers failed, errno = %d\n",
+ -err);
+}
+
+void restore_registers(int pid, union uml_pt_regs *regs)
+{
+ unsigned long *fp_regs;
+ int err, fp_op;
+
+ if(have_fpx_regs){
+ fp_op = PTRACE_SETFPXREGS;
+ fp_regs = regs->skas.xfp;
+ }
+ else {
+ fp_op = PTRACE_SETFPREGS;
+ fp_regs = regs->skas.fp;
+ }
+
+ err = move_registers(pid, PTRACE_SETREGS, regs, fp_op, fp_regs);
+ if(err)
+ panic("restore_registers - saving registers failed, "
+ "errno = %d\n", -err);
+}
+
+void init_registers(int pid)
+{
+ int err;
+
+ err = ptrace(PTRACE_GETREGS, pid, 0, exec_regs);
+ if(err)
+ panic("check_ptrace : PTRACE_GETREGS failed, errno = %d",
+ err);
+
+ err = ptrace(PTRACE_GETFPXREGS, pid, 0, exec_fpx_regs);
+ if(!err)
+ return;
+
+ have_fpx_regs = 0;
+	if(errno != EIO)
+		panic("check_ptrace : PTRACE_GETFPXREGS failed, errno = %d",
+		      errno);
+
+ err = ptrace(PTRACE_GETFPREGS, pid, 0, exec_fp_regs);
+ if(err)
+ panic("check_ptrace : PTRACE_GETFPREGS failed, errno = %d",
+ err);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#
+# Copyright (C) 2000 Jeff Dike (jdike@karaya.com)
+# Licensed under the GPL
+#
+
+obj-$(CONFIG_MODE_SKAS) = registers.o
+
+USER_OBJS := $(foreach file,$(obj-y),$(obj)/$(file))
+
+$(USER_OBJS) : %.o: %.c
+ $(CC) $(CFLAGS_$(notdir $@)) $(USER_CFLAGS) -c -o $@ $<
--- /dev/null
+/*
+ * Copyright (C) 2004 PathScale, Inc
+ * Licensed under the GPL
+ */
+
+#include <errno.h>
+#include <string.h>
+#include <sys/ptrace.h>
+#include "sysdep/ptrace.h"
+#include "uml-config.h"
+#include "skas_ptregs.h"
+#include "registers.h"
+#include "user.h"
+
+/* These are set once at boot time and not changed thereafter */
+
+static unsigned long exec_regs[HOST_FRAME_SIZE];
+static unsigned long exec_fp_regs[HOST_FP_SIZE];
+
+void init_thread_registers(union uml_pt_regs *to)
+{
+ memcpy(to->skas.regs, exec_regs, sizeof(to->skas.regs));
+ memcpy(to->skas.fp, exec_fp_regs, sizeof(to->skas.fp));
+}
+
+static int move_registers(int pid, int int_op, int fp_op,
+ union uml_pt_regs *regs)
+{
+ if(ptrace(int_op, pid, 0, regs->skas.regs) < 0)
+ return(-errno);
+
+ if(ptrace(fp_op, pid, 0, regs->skas.fp) < 0)
+ return(-errno);
+
+ return(0);
+}
+
+void save_registers(int pid, union uml_pt_regs *regs)
+{
+ int err;
+
+ err = move_registers(pid, PTRACE_GETREGS, PTRACE_GETFPREGS, regs);
+ if(err)
+ panic("save_registers - saving registers failed, errno = %d\n",
+ -err);
+}
+
+void restore_registers(int pid, union uml_pt_regs *regs)
+{
+ int err;
+
+ err = move_registers(pid, PTRACE_SETREGS, PTRACE_SETFPREGS, regs);
+ if(err)
+ panic("restore_registers - saving registers failed, "
+ "errno = %d\n", -err);
+}
+
+void init_registers(int pid)
+{
+ int err;
+
+ err = ptrace(PTRACE_GETREGS, pid, 0, exec_regs);
+ if(err)
+ panic("check_ptrace : PTRACE_GETREGS failed, errno = %d",
+ err);
+
+ err = ptrace(PTRACE_GETFPREGS, pid, 0, exec_fp_regs);
+ if(err)
+ panic("check_ptrace : PTRACE_GETFPREGS failed, errno = %d",
+ err);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+void __delay(unsigned long time)
+{
+ /* Stolen from the i386 __loop_delay */
+ int d0;
+ __asm__ __volatile__(
+ "\tjmp 1f\n"
+ ".align 16\n"
+ "1:\tjmp 2f\n"
+ ".align 16\n"
+ "2:\tdecl %0\n\tjns 2b"
+ :"=&a" (d0)
+ :"0" (time));
+}
+
* Licensed under the GPL
*/
+#include <linux/config.h>
+#include <linux/compiler.h>
#include "linux/sched.h"
#include "asm/elf.h"
#include "asm/ptrace.h"
unsigned short instr;
int n;
- n = copy_from_user(&instr, (void *) addr, sizeof(instr));
+ n = copy_from_user(&instr, (void __user *) addr, sizeof(instr));
if(n){
printk("is_syscall : failed to read instruction from 0x%lx\n",
addr);
*/
#ifdef CONFIG_MODE_TT
-static inline int convert_fxsr_to_user_tt(struct _fpstate *buf,
+static inline int convert_fxsr_to_user_tt(struct _fpstate __user *buf,
struct pt_regs *regs)
{
struct i387_fxsave_struct *fxsave = SC_FXSR_ENV(PT_REGS_SC(regs));
unsigned long env[7];
- struct _fpreg *to;
+ struct _fpreg __user *to;
struct _fpxreg *from;
int i;
}
#endif
-static inline int convert_fxsr_to_user(struct _fpstate *buf,
+static inline int convert_fxsr_to_user(struct _fpstate __user *buf,
struct pt_regs *regs)
{
return(CHOOSE_MODE(convert_fxsr_to_user_tt(buf, regs), 0));
#ifdef CONFIG_MODE_TT
static inline int convert_fxsr_from_user_tt(struct pt_regs *regs,
- struct _fpstate *buf)
+ struct _fpstate __user *buf)
{
struct i387_fxsave_struct *fxsave = SC_FXSR_ENV(PT_REGS_SC(regs));
unsigned long env[7];
struct _fpxreg *to;
- struct _fpreg *from;
+ struct _fpreg __user *from;
int i;
if ( __copy_from_user( env, buf, 7 * sizeof(long) ) )
#endif
static inline int convert_fxsr_from_user(struct pt_regs *regs,
- struct _fpstate *buf)
+ struct _fpstate __user *buf)
{
return(CHOOSE_MODE(convert_fxsr_from_user_tt(regs, buf), 0));
}
{
int err;
- err = convert_fxsr_to_user((struct _fpstate *) buf,
+ err = convert_fxsr_to_user((struct _fpstate __user *) buf,
&child->thread.regs);
if(err) return(-EFAULT);
else return(0);
int err;
err = convert_fxsr_from_user(&child->thread.regs,
- (struct _fpstate *) buf);
+ (struct _fpstate __user *) buf);
if(err) return(-EFAULT);
else return(0);
}
struct i387_fxsave_struct *fxsave = SC_FXSR_ENV(PT_REGS_SC(regs));
int err;
- err = __copy_to_user((void *) buf, fxsave,
+ err = __copy_to_user((void __user *) buf, fxsave,
sizeof(struct user_fxsr_struct));
if(err) return -EFAULT;
else return 0;
struct i387_fxsave_struct *fxsave = SC_FXSR_ENV(PT_REGS_SC(regs));
int err;
- err = __copy_from_user(fxsave, (void *) buf,
+ err = __copy_from_user(fxsave, (void __user *) buf,
sizeof(struct user_fxsr_struct) );
if(err) return -EFAULT;
else return 0;
#include <asm/sigcontext.h>
#include "sysdep/ptrace.h"
#include "kern_util.h"
-#include "frame_user.h"
-
-int sc_size(void *data)
-{
- struct arch_frame_data *arch = data;
-
- return(sizeof(struct sigcontext) + arch->fpstate_size);
-}
void sc_to_sc(void *to_ptr, void *from_ptr)
{
struct sigcontext *to = to_ptr, *from = from_ptr;
- int size = sizeof(*to) + signal_frame_sc.common.arch.fpstate_size;
- memcpy(to, from, size);
- if(from->fpstate != NULL) to->fpstate = (struct _fpstate *) (to + 1);
+ memcpy(to, from, sizeof(*to) + sizeof(struct _fpstate));
+ if(from->fpstate != NULL)
+ to->fpstate = (struct _fpstate *) (to + 1);
}
unsigned long *sc_sigmask(void *sc_ptr)
{
struct sigcontext *sc = sc_ptr;
-
- return(&sc->oldmask);
+ return &sc->oldmask;
}
int sc_get_fpregs(unsigned long buf, void *sc_ptr)
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/signal.h"
+#include "linux/ptrace.h"
+#include "asm/current.h"
+#include "asm/ucontext.h"
+#include "asm/uaccess.h"
+#include "asm/unistd.h"
+#include "frame_kern.h"
+#include "signal_user.h"
+#include "ptrace_user.h"
+#include "sigcontext.h"
+#include "mode.h"
+
+#ifdef CONFIG_MODE_SKAS
+
+#include "skas.h"
+
+static int copy_sc_from_user_skas(struct pt_regs *regs,
+ struct sigcontext *from)
+{
+ struct sigcontext sc;
+ unsigned long fpregs[HOST_FP_SIZE];
+ int err;
+
+ err = copy_from_user(&sc, from, sizeof(sc));
+ err |= copy_from_user(fpregs, sc.fpstate, sizeof(fpregs));
+ if(err)
+ return(err);
+
+ REGS_GS(regs->regs.skas.regs) = sc.gs;
+ REGS_FS(regs->regs.skas.regs) = sc.fs;
+ REGS_ES(regs->regs.skas.regs) = sc.es;
+ REGS_DS(regs->regs.skas.regs) = sc.ds;
+ REGS_EDI(regs->regs.skas.regs) = sc.edi;
+ REGS_ESI(regs->regs.skas.regs) = sc.esi;
+ REGS_EBP(regs->regs.skas.regs) = sc.ebp;
+ REGS_SP(regs->regs.skas.regs) = sc.esp;
+ REGS_EBX(regs->regs.skas.regs) = sc.ebx;
+ REGS_EDX(regs->regs.skas.regs) = sc.edx;
+ REGS_ECX(regs->regs.skas.regs) = sc.ecx;
+ REGS_EAX(regs->regs.skas.regs) = sc.eax;
+ REGS_IP(regs->regs.skas.regs) = sc.eip;
+ REGS_CS(regs->regs.skas.regs) = sc.cs;
+ REGS_EFLAGS(regs->regs.skas.regs) = sc.eflags;
+ REGS_SS(regs->regs.skas.regs) = sc.ss;
+ regs->regs.skas.fault_addr = sc.cr2;
+ regs->regs.skas.fault_type = FAULT_WRITE(sc.err);
+ regs->regs.skas.trap_type = sc.trapno;
+
+ err = ptrace_setfpregs(userspace_pid[0], fpregs);
+ if(err < 0){
+ printk("copy_sc_from_user_skas - PTRACE_SETFPREGS failed, "
+ "errno = %d\n", err);
+ return(1);
+ }
+
+ return(0);
+}
+
+int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate *to_fp,
+ struct pt_regs *regs, unsigned long fault_addr,
+ int fault_type)
+{
+ struct sigcontext sc;
+ unsigned long fpregs[HOST_FP_SIZE];
+ int err;
+
+ sc.gs = REGS_GS(regs->regs.skas.regs);
+ sc.fs = REGS_FS(regs->regs.skas.regs);
+ sc.es = REGS_ES(regs->regs.skas.regs);
+ sc.ds = REGS_DS(regs->regs.skas.regs);
+ sc.edi = REGS_EDI(regs->regs.skas.regs);
+ sc.esi = REGS_ESI(regs->regs.skas.regs);
+ sc.ebp = REGS_EBP(regs->regs.skas.regs);
+ sc.esp = REGS_SP(regs->regs.skas.regs);
+ sc.ebx = REGS_EBX(regs->regs.skas.regs);
+ sc.edx = REGS_EDX(regs->regs.skas.regs);
+ sc.ecx = REGS_ECX(regs->regs.skas.regs);
+ sc.eax = REGS_EAX(regs->regs.skas.regs);
+ sc.eip = REGS_IP(regs->regs.skas.regs);
+ sc.cs = REGS_CS(regs->regs.skas.regs);
+ sc.eflags = REGS_EFLAGS(regs->regs.skas.regs);
+ sc.esp_at_signal = regs->regs.skas.regs[UESP];
+ sc.ss = regs->regs.skas.regs[SS];
+ sc.cr2 = fault_addr;
+ sc.err = TO_SC_ERR(fault_type);
+ sc.trapno = regs->regs.skas.trap_type;
+
+ err = ptrace_getfpregs(userspace_pid[0], fpregs);
+ if(err < 0){
+ printk("copy_sc_to_user_skas - PTRACE_GETFPREGS failed, "
+ "errno = %d\n", err);
+ return(1);
+ }
+ to_fp = (to_fp ? to_fp : (struct _fpstate *) (to + 1));
+ sc.fpstate = to_fp;
+
+ if(err)
+ return(err);
+
+ return(copy_to_user(to, &sc, sizeof(sc)) ||
+ copy_to_user(to_fp, fpregs, sizeof(fpregs)));
+}
+#endif
+
+#ifdef CONFIG_MODE_TT
+int copy_sc_from_user_tt(struct sigcontext *to, struct sigcontext *from,
+ int fpsize)
+{
+ struct _fpstate *to_fp, *from_fp;
+ unsigned long sigs;
+ int err;
+
+ to_fp = to->fpstate;
+ from_fp = from->fpstate;
+ sigs = to->oldmask;
+ err = copy_from_user(to, from, sizeof(*to));
+ to->oldmask = sigs;
+ if(to_fp != NULL){
+ err |= copy_from_user(&to->fpstate, &to_fp,
+ sizeof(to->fpstate));
+ err |= copy_from_user(to_fp, from_fp, fpsize);
+ }
+ return(err);
+}
+
+int copy_sc_to_user_tt(struct sigcontext *to, struct _fpstate *fp,
+ struct sigcontext *from, int fpsize)
+{
+ struct _fpstate *to_fp, *from_fp;
+ int err;
+
+ to_fp = (fp ? fp : (struct _fpstate *) (to + 1));
+ from_fp = from->fpstate;
+ err = copy_to_user(to, from, sizeof(*to));
+ if(from_fp != NULL){
+ err |= copy_to_user(&to->fpstate, &to_fp,
+ sizeof(to->fpstate));
+ err |= copy_to_user(to_fp, from_fp, fpsize);
+ }
+ return(err);
+}
+#endif
+
+static int copy_sc_from_user(struct pt_regs *to, void __user *from)
+{
+ int ret;
+
+ ret = CHOOSE_MODE(copy_sc_from_user_tt(UPT_SC(&to->regs), from,
+ sizeof(struct _fpstate)),
+ copy_sc_from_user_skas(to, from));
+ return(ret);
+}
+
+static int copy_sc_to_user(struct sigcontext *to, struct _fpstate *fp,
+ struct pt_regs *from)
+{
+ return(CHOOSE_MODE(copy_sc_to_user_tt(to, fp, UPT_SC(&from->regs),
+ sizeof(*fp)),
+ copy_sc_to_user_skas(to, fp, from,
+ current->thread.cr2,
+ current->thread.err)));
+}
+
+static int copy_ucontext_to_user(struct ucontext *uc, struct _fpstate *fp,
+ sigset_t *set, unsigned long sp)
+{
+ int err = 0;
+
+ err |= put_user(current->sas_ss_sp, &uc->uc_stack.ss_sp);
+ err |= put_user(sas_ss_flags(sp), &uc->uc_stack.ss_flags);
+ err |= put_user(current->sas_ss_size, &uc->uc_stack.ss_size);
+	err |= copy_sc_to_user(&uc->uc_mcontext, fp, &current->thread.regs);
+ err |= copy_to_user(&uc->uc_sigmask, set, sizeof(*set));
+ return(err);
+}
+
+struct sigframe
+{
+ char *pretcode;
+ int sig;
+ struct sigcontext sc;
+ struct _fpstate fpstate;
+ unsigned long extramask[_NSIG_WORDS-1];
+ char retcode[8];
+};
+
+struct rt_sigframe
+{
+ char *pretcode;
+ int sig;
+ struct siginfo *pinfo;
+ void *puc;
+ struct siginfo info;
+ struct ucontext uc;
+ struct _fpstate fpstate;
+ char retcode[8];
+};
+
+int setup_signal_stack_sc(unsigned long stack_top, int sig,
+ struct k_sigaction *ka, struct pt_regs *regs,
+ sigset_t *mask)
+{
+ struct sigframe __user *frame;
+ void *restorer;
+ int err = 0;
+
+ stack_top &= -8UL;
+ frame = (struct sigframe *) stack_top - 1;
+ if(verify_area(VERIFY_WRITE, frame, sizeof(*frame)))
+ return(1);
+
+ restorer = (void *) frame->retcode;
+ if(ka->sa.sa_flags & SA_RESTORER)
+ restorer = ka->sa.sa_restorer;
+
+ err |= __put_user(restorer, &frame->pretcode);
+ err |= __put_user(sig, &frame->sig);
+ err |= copy_sc_to_user(&frame->sc, NULL, regs);
+ err |= __put_user(mask->sig[0], &frame->sc.oldmask);
+ if (_NSIG_WORDS > 1)
+ err |= __copy_to_user(&frame->extramask, &mask->sig[1],
+ sizeof(frame->extramask));
+
+ /*
+ * This is popl %eax ; movl $,%eax ; int $0x80
+ *
+ * WE DO NOT USE IT ANY MORE! It's only left here for historical
+ * reasons and because gdb uses it as a signature to notice
+ * signal handler stack frames.
+ */
+ err |= __put_user(0xb858, (short __user *)(frame->retcode+0));
+ err |= __put_user(__NR_sigreturn, (int __user *)(frame->retcode+2));
+ err |= __put_user(0x80cd, (short __user *)(frame->retcode+6));
+
+ if(err)
+ return(err);
+
+ PT_REGS_SP(regs) = (unsigned long) frame;
+ PT_REGS_IP(regs) = (unsigned long) ka->sa.sa_handler;
+ PT_REGS_EAX(regs) = (unsigned long) sig;
+ PT_REGS_EDX(regs) = (unsigned long) 0;
+ PT_REGS_ECX(regs) = (unsigned long) 0;
+
+ if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED))
+ ptrace_notify(SIGTRAP);
+ return(0);
+}
+
+int setup_signal_stack_si(unsigned long stack_top, int sig,
+ struct k_sigaction *ka, struct pt_regs *regs,
+ siginfo_t *info, sigset_t *mask)
+{
+ struct rt_sigframe __user *frame;
+ void *restorer;
+ int err = 0;
+
+ stack_top &= -8UL;
+ frame = (struct rt_sigframe *) stack_top - 1;
+ if(verify_area(VERIFY_WRITE, frame, sizeof(*frame)))
+ return(1);
+
+ restorer = (void *) frame->retcode;
+ if(ka->sa.sa_flags & SA_RESTORER)
+ restorer = ka->sa.sa_restorer;
+
+ err |= __put_user(restorer, &frame->pretcode);
+ err |= __put_user(sig, &frame->sig);
+ err |= __put_user(&frame->info, &frame->pinfo);
+ err |= __put_user(&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user(&frame->info, info);
+ err |= copy_ucontext_to_user(&frame->uc, &frame->fpstate, mask,
+ PT_REGS_SP(regs));
+
+ /*
+ * This is movl $,%eax ; int $0x80
+ *
+ * WE DO NOT USE IT ANY MORE! It's only left here for historical
+ * reasons and because gdb uses it as a signature to notice
+ * signal handler stack frames.
+ */
+ err |= __put_user(0xb8, (char __user *)(frame->retcode+0));
+ err |= __put_user(__NR_rt_sigreturn, (int __user *)(frame->retcode+1));
+ err |= __put_user(0x80cd, (short __user *)(frame->retcode+5));
+
+ if(err)
+ return(err);
+
+ PT_REGS_SP(regs) = (unsigned long) frame;
+ PT_REGS_IP(regs) = (unsigned long) ka->sa.sa_handler;
+ PT_REGS_EAX(regs) = (unsigned long) sig;
+ PT_REGS_EDX(regs) = (unsigned long) &frame->info;
+ PT_REGS_ECX(regs) = (unsigned long) &frame->uc;
+
+ if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED))
+ ptrace_notify(SIGTRAP);
+ return(0);
+}
+
+long sys_sigreturn(struct pt_regs regs)
+{
+	unsigned long __user sp = PT_REGS_SP(&current->thread.regs);
+ struct sigframe __user *frame = (struct sigframe *)(sp - 8);
+ sigset_t set;
+ struct sigcontext __user *sc = &frame->sc;
+ unsigned long __user *oldmask = &sc->oldmask;
+ unsigned long __user *extramask = frame->extramask;
+ int sig_size = (_NSIG_WORDS - 1) * sizeof(unsigned long);
+
+	if(copy_from_user(&set.sig[0], oldmask, sizeof(set.sig[0])) ||
+ copy_from_user(&set.sig[1], extramask, sig_size))
+ goto segfault;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sighand->siglock);
+	current->blocked = set;
+	recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+	if(copy_sc_from_user(&current->thread.regs, sc))
+		goto segfault;
+
+	/* Avoid ERESTART handling */
+	PT_REGS_SYSCALL_NR(&current->thread.regs) = -1;
+	return(PT_REGS_SYSCALL_RET(&current->thread.regs));
+
+ segfault:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+long sys_rt_sigreturn(struct pt_regs regs)
+{
+	unsigned long __user sp = PT_REGS_SP(&current->thread.regs);
+ struct rt_sigframe __user *frame = (struct rt_sigframe *) (sp - 4);
+ sigset_t set;
+ struct ucontext __user *uc = &frame->uc;
+ int sig_size = _NSIG_WORDS * sizeof(unsigned long);
+
+ if(copy_from_user(&set, &uc->uc_sigmask, sig_size))
+ goto segfault;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sighand->siglock);
+	current->blocked = set;
+	recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+	if(copy_sc_from_user(&current->thread.regs, &uc->uc_mcontext))
+		goto segfault;
+
+	/* Avoid ERESTART handling */
+	PT_REGS_SYSCALL_NR(&current->thread.regs) = -1;
+	return(PT_REGS_SYSCALL_RET(&current->thread.regs));
+
+ segfault:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
/*
- * Copyright (C) 2000 Jeff Dike (jdike@karaya.com)
+ * Copyright (C) 2000 - 2003 Jeff Dike (jdike@addtoit.com)
* Licensed under the GPL
*/
#include "linux/sched.h"
+#include "linux/shm.h"
+#include "asm/ipc.h"
#include "asm/mman.h"
#include "asm/uaccess.h"
#include "asm/unistd.h"
unsigned long prot, unsigned long flags,
unsigned long fd, unsigned long offset);
-int old_mmap_i386(struct mmap_arg_struct *arg)
+long old_mmap_i386(struct mmap_arg_struct __user *arg)
{
struct mmap_arg_struct a;
int err = -EFAULT;
struct sel_arg_struct {
unsigned long n;
- fd_set *inp, *outp, *exp;
- struct timeval *tvp;
+ fd_set __user *inp;
+ fd_set __user *outp;
+ fd_set __user *exp;
+ struct timeval __user *tvp;
};
-int old_select(struct sel_arg_struct *arg)
+long old_select(struct sel_arg_struct __user *arg)
{
struct sel_arg_struct a;
/* The i386 version skips reading from %esi, the fourth argument. So we must do
* this, too.
*/
-int sys_clone(unsigned long clone_flags, unsigned long newsp, int *parent_tid,
- int unused, int *child_tid)
+long sys_clone(unsigned long clone_flags, unsigned long newsp,
+ int __user *parent_tid, int unused, int __user *child_tid)
{
long ret;
return(ret);
}
+/*
+ * sys_ipc() is the de-multiplexer for the SysV IPC calls..
+ *
+ * This is really horribly ugly.
+ */
+long sys_ipc (uint call, int first, int second,
+ int third, void *__user ptr, long fifth)
+{
+ int version, ret;
+
+ version = call >> 16; /* hack for backward compatibility */
+ call &= 0xffff;
+
+ switch (call) {
+ case SEMOP:
+ return sys_semtimedop(first, (struct sembuf *) ptr, second,
+ NULL);
+ case SEMTIMEDOP:
+ return sys_semtimedop(first, (struct sembuf *) ptr, second,
+ (const struct timespec *) fifth);
+ case SEMGET:
+ return sys_semget (first, second, third);
+ case SEMCTL: {
+ union semun fourth;
+ if (!ptr)
+ return -EINVAL;
+ if (get_user(fourth.__pad, (void **) ptr))
+ return -EFAULT;
+ return sys_semctl (first, second, third, fourth);
+ }
+
+ case MSGSND:
+ return sys_msgsnd (first, (struct msgbuf *) ptr,
+ second, third);
+ case MSGRCV:
+ switch (version) {
+ case 0: {
+ struct ipc_kludge tmp;
+ if (!ptr)
+ return -EINVAL;
+
+ if (copy_from_user(&tmp,
+ (struct ipc_kludge *) ptr,
+ sizeof (tmp)))
+ return -EFAULT;
+ return sys_msgrcv (first, tmp.msgp, second,
+ tmp.msgtyp, third);
+ }
+ default:
+ panic("msgrcv with version != 0");
+ return sys_msgrcv (first,
+ (struct msgbuf *) ptr,
+ second, fifth, third);
+ }
+ case MSGGET:
+ return sys_msgget ((key_t) first, second);
+ case MSGCTL:
+ return sys_msgctl (first, second, (struct msqid_ds *) ptr);
+
+ case SHMAT:
+ switch (version) {
+ default: {
+ ulong raddr;
+ ret = do_shmat (first, (char *) ptr, second, &raddr);
+ if (ret)
+ return ret;
+ return put_user (raddr, (ulong *) third);
+ }
+ case 1: /* iBCS2 emulator entry point */
+ if (!segment_eq(get_fs(), get_ds()))
+ return -EINVAL;
+ return do_shmat (first, (char *) ptr, second, (ulong *) third);
+ }
+ case SHMDT:
+ return sys_shmdt ((char *)ptr);
+ case SHMGET:
+ return sys_shmget (first, second, third);
+ case SHMCTL:
+ return sys_shmctl (first, second,
+ (struct shmid_ds *) ptr);
+ default:
+ return -ENOSYS;
+ }
+}
+
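The demultiplexing above means no special client code is needed to reach it; the
ordinary SysV wrappers do. A minimal standalone sketch (not part of the patch),
assuming a host with SysV shared memory enabled:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	if(id < 0){
		perror("shmget");
		return 1;
	}
	printf("created SysV shm segment %d\n", id);
	shmctl(id, IPC_RMID, NULL);	/* remove it again */
	return 0;
}

Run under strace on a 32-bit x86 host, this typically shows up as the single
multiplexed ipc() syscall rather than dedicated shmget/shmctl syscalls.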
+long sys_sigaction(int sig, const struct old_sigaction __user *act,
+ struct old_sigaction __user *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset_t mask;
+ if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
+ return -EFAULT;
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ __get_user(mask, &act->sa_mask);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
+ __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
+ __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
+ return -EFAULT;
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
--- /dev/null
+#
+# Copyright 2003 PathScale, Inc.
+#
+# Licensed under the GPL
+#
+
+lib-y = bitops.o bugs.o csum-partial.o delay.o fault.o mem.o memcpy.o \
+ ptrace.o ptrace_user.o semaphore.o sigcontext.o signal.o \
+ syscalls.o sysrq.o thunk.o
+
+USER_OBJS := ptrace_user.o sigcontext.o
+USER_OBJS := $(foreach file,$(USER_OBJS),$(obj)/$(file))
+
+SYMLINKS = bitops.c csum-copy.S csum-partial.c csum-wrappers.c memcpy.S \
+ semaphore.c thunk.S
+SYMLINKS := $(foreach f,$(SYMLINKS),$(src)/$f)
+
+clean-files := $(SYMLINKS)
+
+bitops.c-dir = lib
+csum-copy.S-dir = lib
+csum-partial.c-dir = lib
+csum-wrappers.c-dir = lib
+memcpy.S-dir = lib
+semaphore.c-dir = kernel
+thunk.S-dir = lib
+
+define make_link
+ -rm -f $1
+ ln -sf $(TOPDIR)/arch/x86_64/$($(notdir $1)-dir)/$(notdir $1) $1
+endef
+
+$(SYMLINKS):
+ $(call make_link,$@)
+
+$(USER_OBJS) : %.o: %.c
+ $(CC) $(CFLAGS_$(notdir $@)) $(USER_CFLAGS) -c -o $@ $<
+
+CFLAGS_csum-partial.o := -Dcsum_partial=arch_csum_partial
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ * Copied from arch/x86_64
+ *
+ * Licensed under the GPL
+ */
+
+#include "asm/processor.h"
+
+void __delay(unsigned long loops)
+{
+ unsigned long i;
+
+ for(i = 0; i < loops; i++) ;
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#define __FRAME_OFFSETS
+#include "asm/ptrace.h"
+#include "linux/sched.h"
+#include "linux/errno.h"
+#include "asm/elf.h"
+
+/* XXX x86_64 */
+unsigned long not_ss;
+unsigned long not_ds;
+unsigned long not_es;
+
+#define SC_SS(r) (not_ss)
+#define SC_DS(r) (not_ds)
+#define SC_ES(r) (not_es)
+
+/* determines which flags the user has access to. */
+/* 1 = access 0 = no access */
+#define FLAG_MASK 0x44dd5UL
+
+int putreg(struct task_struct *child, int regno, unsigned long value)
+{
+ unsigned long tmp;
+
+#ifdef TIF_IA32
+ /* Some code in the 64bit emulation may not be 64bit clean.
+ Don't take any chances. */
+ if (test_tsk_thread_flag(child, TIF_IA32))
+ value &= 0xffffffff;
+#endif
+ switch (regno){
+ case FS:
+ case GS:
+ case DS:
+ case ES:
+ case SS:
+ case CS:
+ if (value && (value & 3) != 3)
+ return -EIO;
+ value &= 0xffff;
+ break;
+
+ case FS_BASE:
+ case GS_BASE:
+ if (!((value >> 48) == 0 || (value >> 48) == 0xffff))
+ return -EIO;
+ break;
+
+ case EFLAGS:
+ value &= FLAG_MASK;
+ tmp = PT_REGS_EFLAGS(&child->thread.regs) & ~FLAG_MASK;
+ value |= tmp;
+ break;
+ }
+
+ PT_REGS_SET(&child->thread.regs, regno, value);
+ return 0;
+}
+
+unsigned long getreg(struct task_struct *child, int regno)
+{
+ unsigned long retval = ~0UL;
+ switch (regno) {
+ case FS:
+ case GS:
+ case DS:
+ case ES:
+ case SS:
+ case CS:
+ retval = 0xffff;
+ /* fall through */
+ default:
+ retval &= PT_REG(&child->thread.regs, regno);
+#ifdef TIF_IA32
+ if (test_tsk_thread_flag(child, TIF_IA32))
+ retval &= 0xffffffff;
+#endif
+ }
+ return retval;
+}
+
+void arch_switch(void)
+{
+/* XXX
+ printk("arch_switch\n");
+*/
+}
+
+int is_syscall(unsigned long addr)
+{
+ panic("is_syscall");
+}
+
+int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu )
+{
+ panic("dump_fpu");
+ return(1);
+}
+
+int get_fpregs(unsigned long buf, struct task_struct *child)
+{
+ panic("get_fpregs");
+ return(0);
+}
+
+int set_fpregs(unsigned long buf, struct task_struct *child)
+{
+ panic("set_fpregs");
+ return(0);
+}
+
+int get_fpxregs(unsigned long buf, struct task_struct *tsk)
+{
+ panic("get_fpxregs");
+ return(0);
+}
+
+int set_fpxregs(unsigned long buf, struct task_struct *tsk)
+{
+	panic("set_fpxregs");
+ return(0);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2003 PathScale, Inc.
+ * Licensed under the GPL
+ */
+
+#include "linux/stddef.h"
+#include "linux/errno.h"
+#include "linux/personality.h"
+#include "linux/ptrace.h"
+#include "asm/current.h"
+#include "asm/uaccess.h"
+#include "asm/sigcontext.h"
+#include "asm/ptrace.h"
+#include "asm/arch/ucontext.h"
+#include "choose-mode.h"
+#include "sysdep/ptrace.h"
+#include "frame_kern.h"
+
+#ifdef CONFIG_MODE_SKAS
+
+#include "skas.h"
+
+static int copy_sc_from_user_skas(struct pt_regs *regs,
+ struct sigcontext *from)
+{
+ int err = 0;
+
+#define GETREG(regs, regno, sc, regname) \
+ __get_user((regs)->regs.skas.regs[(regno) / sizeof(unsigned long)], \
+ &(sc)->regname)
+
+ err |= GETREG(regs, R8, from, r8);
+ err |= GETREG(regs, R9, from, r9);
+ err |= GETREG(regs, R10, from, r10);
+ err |= GETREG(regs, R11, from, r11);
+ err |= GETREG(regs, R12, from, r12);
+ err |= GETREG(regs, R13, from, r13);
+ err |= GETREG(regs, R14, from, r14);
+ err |= GETREG(regs, R15, from, r15);
+ err |= GETREG(regs, RDI, from, rdi);
+ err |= GETREG(regs, RSI, from, rsi);
+ err |= GETREG(regs, RBP, from, rbp);
+ err |= GETREG(regs, RBX, from, rbx);
+ err |= GETREG(regs, RDX, from, rdx);
+ err |= GETREG(regs, RAX, from, rax);
+ err |= GETREG(regs, RCX, from, rcx);
+ err |= GETREG(regs, RSP, from, rsp);
+ err |= GETREG(regs, RIP, from, rip);
+ err |= GETREG(regs, EFLAGS, from, eflags);
+ err |= GETREG(regs, CS, from, cs);
+
+#undef GETREG
+
+ return(err);
+}
+
+int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate *to_fp,
+ struct pt_regs *regs, unsigned long mask)
+{
+ unsigned long eflags;
+ int err = 0;
+
+ err |= __put_user(0, &to->gs);
+ err |= __put_user(0, &to->fs);
+
+#define PUTREG(regs, regno, sc, regname) \
+ __put_user((regs)->regs.skas.regs[(regno) / sizeof(unsigned long)], \
+ &(sc)->regname)
+
+ err |= PUTREG(regs, RDI, to, rdi);
+ err |= PUTREG(regs, RSI, to, rsi);
+ err |= PUTREG(regs, RBP, to, rbp);
+ err |= PUTREG(regs, RSP, to, rsp);
+ err |= PUTREG(regs, RBX, to, rbx);
+ err |= PUTREG(regs, RDX, to, rdx);
+ err |= PUTREG(regs, RCX, to, rcx);
+ err |= PUTREG(regs, RAX, to, rax);
+ err |= PUTREG(regs, R8, to, r8);
+ err |= PUTREG(regs, R9, to, r9);
+ err |= PUTREG(regs, R10, to, r10);
+ err |= PUTREG(regs, R11, to, r11);
+ err |= PUTREG(regs, R12, to, r12);
+ err |= PUTREG(regs, R13, to, r13);
+ err |= PUTREG(regs, R14, to, r14);
+ err |= PUTREG(regs, R15, to, r15);
+ err |= PUTREG(regs, CS, to, cs); /* XXX x86_64 doesn't do this */
+ err |= __put_user(current->thread.err, &to->err);
+ err |= __put_user(current->thread.trap_no, &to->trapno);
+ err |= PUTREG(regs, RIP, to, rip);
+ err |= PUTREG(regs, EFLAGS, to, eflags);
+#undef PUTREG
+
+ err |= __put_user(mask, &to->oldmask);
+ err |= __put_user(current->thread.cr2, &to->cr2);
+
+ return(err);
+}
+
+#endif
+
+#ifdef CONFIG_MODE_TT
+int copy_sc_from_user_tt(struct sigcontext *to, struct sigcontext *from,
+ int fpsize)
+{
+ struct _fpstate *to_fp, *from_fp;
+ unsigned long sigs;
+ int err;
+
+ to_fp = to->fpstate;
+ from_fp = from->fpstate;
+ sigs = to->oldmask;
+ err = copy_from_user(to, from, sizeof(*to));
+ to->oldmask = sigs;
+ return(err);
+}
+
+int copy_sc_to_user_tt(struct sigcontext *to, struct _fpstate *fp,
+ struct sigcontext *from, int fpsize)
+{
+ struct _fpstate *to_fp, *from_fp;
+ int err;
+
+ to_fp = (fp ? fp : (struct _fpstate *) (to + 1));
+ from_fp = from->fpstate;
+ err = copy_to_user(to, from, sizeof(*to));
+ return(err);
+}
+
+#endif
+
+static int copy_sc_from_user(struct pt_regs *to, void __user *from)
+{
+ int ret;
+
+ ret = CHOOSE_MODE(copy_sc_from_user_tt(UPT_SC(&to->regs), from,
+ sizeof(struct _fpstate)),
+ copy_sc_from_user_skas(to, from));
+ return(ret);
+}
+
+static int copy_sc_to_user(struct sigcontext *to, struct _fpstate *fp,
+ struct pt_regs *from, unsigned long mask)
+{
+ return(CHOOSE_MODE(copy_sc_to_user_tt(to, fp, UPT_SC(&from->regs),
+ sizeof(*fp)),
+ copy_sc_to_user_skas(to, fp, from, mask)));
+}
+
+struct rt_sigframe
+{
+ char *pretcode;
+ struct ucontext uc;
+ struct siginfo info;
+};
+
+#define round_down(m, n) (((m) / (n)) * (n))
+
+int setup_signal_stack_si(unsigned long stack_top, int sig,
+ struct k_sigaction *ka, struct pt_regs * regs,
+ siginfo_t *info, sigset_t *set)
+{
+ struct rt_sigframe __user *frame;
+ struct _fpstate __user *fp = NULL;
+ int err = 0;
+ struct task_struct *me = current;
+
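+	/* Place the frame on the signal stack: round down to 16 bytes (the
+	 * extra -8 leaves the usual post-call alignment for the handler) and
+	 * step below the 128-byte red zone the x86-64 ABI keeps under %rsp. */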
+ frame = (struct rt_sigframe __user *)
+ round_down(stack_top - sizeof(struct rt_sigframe), 16) - 8;
+ frame -= 128;
+
+ if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate)))
+ goto out;
+
+#if 0 /* XXX */
+ if (save_i387(fp) < 0)
+ err |= -1;
+#endif
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto out;
+
+ if (ka->sa.sa_flags & SA_SIGINFO) {
+ err |= copy_siginfo_to_user(&frame->info, info);
+ if (err)
+ goto out;
+ }
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user(me->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(PT_REGS_SP(regs)),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(me->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= copy_sc_to_user(&frame->uc.uc_mcontext, fp, regs, set->sig[0]);
+ err |= __put_user(fp, &frame->uc.uc_mcontext.fpstate);
+ if (sizeof(*set) == 16) {
+ __put_user(set->sig[0], &frame->uc.uc_sigmask.sig[0]);
+ __put_user(set->sig[1], &frame->uc.uc_sigmask.sig[1]);
+ }
+ else
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set,
+ sizeof(*set));
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ /* x86-64 should always use SA_RESTORER. */
+ if (ka->sa.sa_flags & SA_RESTORER)
+ err |= __put_user(ka->sa.sa_restorer, &frame->pretcode);
+ else
+ /* could use a vstub here */
+ goto out;
+
+ if (err)
+ goto out;
+
+ /* Set up registers for signal handler */
+ {
+ struct exec_domain *ed = current_thread_info()->exec_domain;
+ if (unlikely(ed && ed->signal_invmap && sig < 32))
+ sig = ed->signal_invmap[sig];
+ }
+
+ PT_REGS_RDI(regs) = sig;
+ /* In case the signal handler was declared without prototypes */
+ PT_REGS_RAX(regs) = 0;
+
+ /* This also works for non SA_SIGINFO handlers because they expect the
+ next argument after the signal number on the stack. */
+ PT_REGS_RSI(regs) = (unsigned long) &frame->info;
+ PT_REGS_RDX(regs) = (unsigned long) &frame->uc;
+ PT_REGS_RIP(regs) = (unsigned long) ka->sa.sa_handler;
+
+ PT_REGS_RSP(regs) = (unsigned long) frame;
+ out:
+ return(err);
+}
+
+long sys_rt_sigreturn(struct pt_regs *regs)
+{
+	unsigned long __user sp = PT_REGS_SP(&current->thread.regs);
+ struct rt_sigframe __user *frame =
+ (struct rt_sigframe __user *)(sp - 8);
+ struct ucontext __user *uc = &frame->uc;
+ sigset_t set;
+
+ if(copy_from_user(&set, &uc->uc_sigmask, sizeof(set)))
+ goto segfault;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+	spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+	spin_unlock_irq(&current->sighand->siglock);
+
+	if(copy_sc_from_user(&current->thread.regs, &uc->uc_mcontext))
+ goto segfault;
+
+ /* Avoid ERESTART handling */
+	PT_REGS_SYSCALL_NR(&current->thread.regs) = -1;
+	return(PT_REGS_SYSCALL_RET(&current->thread.regs));
+
+ segfault:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#include "linux/linkage.h"
+#include "linux/slab.h"
+#include "linux/shm.h"
+#include "asm/uaccess.h"
+#define __FRAME_OFFSETS
+#include "asm/ptrace.h"
+#include "asm/unistd.h"
+#include "asm/prctl.h" /* XXX This should get the constants from libc */
+#include "choose-mode.h"
+
+asmlinkage long wrap_sys_shmat(int shmid, char __user *shmaddr, int shmflg)
+{
+ unsigned long raddr;
+
+ return do_shmat(shmid, shmaddr, shmflg, &raddr) ?: (long) raddr;
+}
+
+#ifdef CONFIG_MODE_TT
+extern int modify_ldt(int func, void *ptr, unsigned long bytecount);
+
+long sys_modify_ldt_tt(int func, void *ptr, unsigned long bytecount)
+{
+ /* XXX This should check VERIFY_WRITE depending on func, check this
+ * in i386 as well.
+ */
+ if(verify_area(VERIFY_READ, ptr, bytecount))
+ return(-EFAULT);
+ return(modify_ldt(func, ptr, bytecount));
+}
+#endif
+
+#ifdef CONFIG_MODE_SKAS
+extern int userspace_pid;
+
+#ifndef __NR_mm_indirect
+#define __NR_mm_indirect 241
+#endif
+
+long sys_modify_ldt_skas(int func, void *ptr, unsigned long bytecount)
+{
+ unsigned long args[6];
+ void *buf;
+ int res, n;
+
+ buf = kmalloc(bytecount, GFP_KERNEL);
+ if(buf == NULL)
+ return(-ENOMEM);
+
+ res = 0;
+
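+	/* Write requests (func 1 and 0x11) take their descriptor from the
+	 * user buffer; read requests (func 0 and 2) have the result copied
+	 * back to userspace further down. */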
+ switch(func){
+ case 1:
+ case 0x11:
+ res = copy_from_user(buf, ptr, bytecount);
+ break;
+ }
+
+ if(res != 0){
+ res = -EFAULT;
+ goto out;
+ }
+
+ args[0] = func;
+ args[1] = (unsigned long) buf;
+ args[2] = bytecount;
+	res = syscall(__NR_mm_indirect, &current->mm->context.u,
+ __NR_modify_ldt, args);
+
+ if(res < 0)
+ goto out;
+
+ switch(func){
+ case 0:
+ case 2:
+ n = res;
+ res = copy_to_user(ptr, buf, n);
+ if(res != 0)
+ res = -EFAULT;
+ else
+ res = n;
+ break;
+ }
+
+ out:
+ kfree(buf);
+ return(res);
+}
+#endif
+
+long sys_modify_ldt(int func, void *ptr, unsigned long bytecount)
+{
+ return(CHOOSE_MODE_PROC(sys_modify_ldt_tt, sys_modify_ldt_skas, func,
+ ptr, bytecount));
+}
+
+#ifdef CONFIG_MODE_TT
+extern long arch_prctl(int code, unsigned long addr);
+
+static long arch_prctl_tt(int code, unsigned long addr)
+{
+ unsigned long tmp;
+ long ret;
+
+ switch(code){
+ case ARCH_SET_GS:
+ case ARCH_SET_FS:
+ ret = arch_prctl(code, addr);
+ break;
+ case ARCH_GET_FS:
+ case ARCH_GET_GS:
+ ret = arch_prctl(code, (unsigned long) &tmp);
+ if(!ret)
+ ret = put_user(tmp, &addr);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return(ret);
+}
+#endif
+
+#ifdef CONFIG_MODE_SKAS
+
+static long arch_prctl_skas(int code, unsigned long addr)
+{
+ long ret = 0;
+
+ switch(code){
+ case ARCH_SET_GS:
+ current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr;
+ break;
+ case ARCH_SET_FS:
+ current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)] = addr;
+ break;
+	case ARCH_GET_FS:
+		ret = put_user(current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)], &addr);
+		break;
+	case ARCH_GET_GS:
+		ret = put_user(current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)], &addr);
+		break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return(ret);
+}
+#endif
+
+long sys_arch_prctl(int code, unsigned long addr)
+{
+ return(CHOOSE_MODE_PROC(arch_prctl_tt, arch_prctl_skas, code, addr));
+}
+
+long sys_clone(unsigned long clone_flags, unsigned long newsp,
+ void __user *parent_tid, void __user *child_tid)
+{
+ long ret;
+
+	/* XXX: a normal arch would pass newsp here, and would also pass regs
+	 * to do_fork instead of NULL. Currently the arch-independent code
+	 * ignores these values, while the UML code (actually copy_thread)
+	 * does the right thing. But this should probably change. */
+ /*if (!newsp)
+ newsp = UPT_SP(current->thread.regs);*/
+ current->thread.forking = 1;
+ ret = do_fork(clone_flags, newsp, NULL, 0, parent_tid, child_tid);
+ current->thread.forking = 0;
+ return(ret);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#include "linux/kernel.h"
+#include "linux/utsname.h"
+#include "linux/module.h"
+#include "asm/current.h"
+#include "asm/ptrace.h"
+#include "sysrq.h"
+
+void __show_regs(struct pt_regs * regs)
+{
+ printk("\n");
+ print_modules();
+ printk("Pid: %d, comm: %.20s %s %s\n",
+ current->pid, current->comm, print_tainted(), system_utsname.release);
+ printk("RIP: %04lx:[<%016lx>] ", PT_REGS_CS(regs) & 0xffff,
+ PT_REGS_RIP(regs));
+ printk("\nRSP: %016lx EFLAGS: %08lx\n", PT_REGS_RSP(regs),
+ PT_REGS_EFLAGS(regs));
+ printk("RAX: %016lx RBX: %016lx RCX: %016lx\n",
+ PT_REGS_RAX(regs), PT_REGS_RBX(regs), PT_REGS_RCX(regs));
+ printk("RDX: %016lx RSI: %016lx RDI: %016lx\n",
+ PT_REGS_RDX(regs), PT_REGS_RSI(regs), PT_REGS_RDI(regs));
+ printk("RBP: %016lx R08: %016lx R09: %016lx\n",
+ PT_REGS_RBP(regs), PT_REGS_R8(regs), PT_REGS_R9(regs));
+ printk("R10: %016lx R11: %016lx R12: %016lx\n",
+ PT_REGS_R10(regs), PT_REGS_R11(regs), PT_REGS_R12(regs));
+ printk("R13: %016lx R14: %016lx R15: %016lx\n",
+ PT_REGS_R13(regs), PT_REGS_R14(regs), PT_REGS_R15(regs));
+}
+
+void show_regs(struct pt_regs *regs)
+{
+ __show_regs(regs);
+	show_trace((unsigned long *) &regs);
+}
+
+/* Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+# Copyright 2003 - 2004 Pathscale, Inc
+# Released under the GPL
+
+hostprogs-y := mk_sc mk_thread
+always := $(hostprogs-y)
+
+mk_thread-objs := mk_thread_kern.o mk_thread_user.o
+
+HOSTCFLAGS_mk_thread_kern.o := $(CFLAGS) $(CPPFLAGS)
+HOSTCFLAGS_mk_thread_user.o := $(USER_CFLAGS)
--- /dev/null
+/* Copyright (C) 2003 - 2004 PathScale, Inc
+ * Released under the GPL
+ */
+
+#include <stdio.h>
+#include <signal.h>
+#include <linux/stddef.h>
+
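+/* Each macro below prints a #define that accesses one sigcontext (or
+ * _fpstate) field by its byte offset; the output forms a generated header. */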
+#define SC_OFFSET(name, field) \
+ printf("#define " name \
+ "(sc) *((unsigned long *) &(((char *) (sc))[%ld]))\n",\
+ offsetof(struct sigcontext, field))
+
+#define SC_FP_OFFSET(name, field) \
+ printf("#define " name \
+ "(sc) *((unsigned long *) &(((char *) (SC_FPSTATE(sc)))[%ld]))\n",\
+ offsetof(struct _fpstate, field))
+
+#define SC_FP_OFFSET_PTR(name, field, type) \
+ printf("#define " name \
+ "(sc) ((" type " *) &(((char *) (SC_FPSTATE(sc)))[%d]))\n",\
+ offsetof(struct _fpstate, field))
+
+int main(int argc, char **argv)
+{
+ SC_OFFSET("SC_RBX", rbx);
+ SC_OFFSET("SC_RCX", rcx);
+ SC_OFFSET("SC_RDX", rdx);
+ SC_OFFSET("SC_RSI", rsi);
+ SC_OFFSET("SC_RDI", rdi);
+ SC_OFFSET("SC_RBP", rbp);
+ SC_OFFSET("SC_RAX", rax);
+ SC_OFFSET("SC_R8", r8);
+ SC_OFFSET("SC_R9", r9);
+ SC_OFFSET("SC_R10", r10);
+ SC_OFFSET("SC_R11", r11);
+ SC_OFFSET("SC_R12", r12);
+ SC_OFFSET("SC_R13", r13);
+ SC_OFFSET("SC_R14", r14);
+ SC_OFFSET("SC_R15", r15);
+ SC_OFFSET("SC_IP", rip);
+ SC_OFFSET("SC_SP", rsp);
+ SC_OFFSET("SC_CR2", cr2);
+ SC_OFFSET("SC_ERR", err);
+ SC_OFFSET("SC_TRAPNO", trapno);
+ SC_OFFSET("SC_CS", cs);
+ SC_OFFSET("SC_FS", fs);
+ SC_OFFSET("SC_GS", gs);
+ SC_OFFSET("SC_EFLAGS", eflags);
+ SC_OFFSET("SC_SIGMASK", oldmask);
+#if 0
+ SC_OFFSET("SC_ORIG_RAX", orig_rax);
+ SC_OFFSET("SC_DS", ds);
+ SC_OFFSET("SC_ES", es);
+ SC_OFFSET("SC_SS", ss);
+#endif
+ return(0);
+}
--- /dev/null
+#include "linux/config.h"
+#include "linux/stddef.h"
+#include "linux/sched.h"
+
+extern void print_head(void);
+extern void print_constant_ptr(char *name, int value);
+extern void print_constant(char *name, char *type, int value);
+extern void print_tail(void);
+
+#define THREAD_OFFSET(field) offsetof(struct task_struct, thread.field)
+
+int main(int argc, char **argv)
+{
+ print_head();
+#ifdef CONFIG_MODE_TT
+ print_constant("TASK_EXTERN_PID", "int", THREAD_OFFSET(mode.tt.extern_pid));
+#endif
+ print_tail();
+ return(0);
+}
+
--- /dev/null
+#include <stdio.h>
+
+void print_head(void)
+{
+ printf("/*\n");
+ printf(" * Generated by mk_thread\n");
+ printf(" */\n");
+ printf("\n");
+ printf("#ifndef __UM_THREAD_H\n");
+ printf("#define __UM_THREAD_H\n");
+ printf("\n");
+}
+
+void print_constant_ptr(char *name, int value)
+{
+ printf("#define %s(task) ((unsigned long *) "
+ "&(((char *) (task))[%d]))\n", name, value);
+}
+
+void print_constant(char *name, char *type, int value)
+{
+ printf("#define %s(task) *((%s *) &(((char *) (task))[%d]))\n", name, type,
+ value);
+}
+
+void print_tail(void)
+{
+ printf("\n");
+ printf("#endif\n");
+}
static unsigned long memcons_offs = 0;
/* Spinlock protecting memcons_offs. */
-static spinlock_t memcons_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(memcons_lock);
static size_t write (const char *buf, size_t len)
static unsigned char leds_image[LED_NUM_DIGITS] = { 0 };
/* Spinlock protecting the above leds. */
-static spinlock_t leds_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(leds_lock);
/* Common body of LED read/write functions, checks POS and LEN for
correctness, declares a variable using IMG_DECL, initialized pointing at
static struct mb_sram_free_area *mb_sram_free_free_areas = 0;
/* Spinlock protecting the above globals. */
-static spinlock_t mb_sram_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mb_sram_lock);
/* Allocate a memory block at least SIZE bytes long in the Mother-A SRAM
space. */
static struct dma_mapping *free_dma_mappings = 0;
/* Spinlock protecting the above globals. */
-static spinlock_t dma_mappings_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(dma_mappings_lock);
static struct dma_mapping *new_dma_mapping (size_t size)
{
wake_up(&sem->wait);
}
-static spinlock_t semaphore_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(semaphore_lock);
void __sched __down(struct semaphore * sem)
{
void EARLY_INIT_SECTION_ATTR mach_early_init (void)
{
- extern int panic_timeout;
-
/* The sim85e2 simulator tracks `undefined' values, so to make
debugging easier, we begin by zeroing out all otherwise
undefined registers. This is not strictly necessary.
/*
* copy while checksumming, otherwise like csum_partial
*/
-unsigned int csum_partial_copy(const char *src, char *dst,
+unsigned int csum_partial_copy(const unsigned char *src, unsigned char *dst,
int len, unsigned int sum)
{
/*
* Copy from userspace and compute checksum. If we catch an exception
* then zero the rest of the buffer.
*/
-unsigned int csum_partial_copy_from_user (const char *src, char *dst,
+unsigned int csum_partial_copy_from_user (const unsigned char *src, unsigned char *dst,
int len, unsigned int sum,
int *err_ptr)
{
return dst;
}
-void bcopy (const char *src, char *dst, int size)
-{
- memcpy (dst, src, size);
-}
-
void *memmove (void *dst, const void *src, __kernel_size_t size)
{
if ((unsigned long)dst < (unsigned long)src
case SHMDT:
return sys_shmdt(compat_ptr(ptr));
case SHMGET:
- return sys_shmget(first, second, third);
+ return sys_shmget(first, (unsigned)second, third);
case SHMCTL:
return compat_sys_shmctl(first, second, compat_ptr(ptr));
}
do_suspend_lowlevel:
.LFB5:
subq $8, %rsp
-.LCFI2:
- testl %edi, %edi
- jne .L99
xorl %eax, %eax
call save_processor_state
void __init iommu_hole_init(void)
{
int fix, num;
- u32 aper_size, aper_alloc = 0, aper_order;
- u64 aper_base;
+ u32 aper_size, aper_alloc = 0, aper_order, last_aper_order = 0;
+ u64 aper_base, last_aper_base = 0;
int valid_agp = 0;
if (iommu_aperture_disabled || !fix_aperture)
if (!aperture_valid(name, aper_base, aper_size)) {
fix = 1;
break;
- }
+ }
+
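+		/* All northbridges must agree on the aperture order and base;
+		 * otherwise mark the aperture as needing to be fixed up. */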
+ if ((last_aper_order && aper_order != last_aper_order) ||
+ (last_aper_base && aper_base != last_aper_base)) {
+ fix = 1;
+ break;
+ }
+ last_aper_order = aper_order;
+ last_aper_base = aper_base;
}
if (!fix && !fallback_aper_force)
#include <linux/stddef.h>
#include <linux/errno.h>
#include <linux/hardirq.h>
+#include <linux/suspend.h>
#include <asm/pda.h>
#include <asm/processor.h>
#include <asm/segment.h>
offsetof (struct rt_sigframe32, uc.uc_mcontext));
BLANK();
#endif
-
+ DEFINE(SIZEOF_PBE, sizeof(struct pbe));
+ DEFINE(pbe_address, offsetof(struct pbe, address));
+ DEFINE(pbe_orig_address, offsetof(struct pbe, orig_address));
return 0;
}
#ifdef __i386__
#define VGABASE (__ISA_IO_base + 0xb8000)
#else
-#define VGABASE ((void *)0xffffffff800b8000UL)
+#define VGABASE ((void __iomem *)0xffffffff800b8000UL)
#endif
#define MAX_YPOS 25
early_console = &early_serial_console;
} else if (!strncmp(buf, "vga", 3)) {
early_console = &early_vga_console;
- } else {
- early_console = NULL;
- return -1;
}
early_console_initialized = 1;
register_console(early_console);
#include <asm/unistd.h>
#include <asm/thread_info.h>
#include <asm/hw_irq.h>
-#include <asm/errno.h>
.code64
CFI_ENDPROC
.endm
+ENTRY(thermal_interrupt)
+ apicinterrupt THERMAL_APIC_VECTOR,smp_thermal_interrupt
+
#ifdef CONFIG_SMP
ENTRY(reschedule_interrupt)
apicinterrupt RESCHEDULE_VECTOR,smp_reschedule_interrupt
u8 id;
u8 cluster_cnt[NUM_APIC_CLUSTERS];
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+ /* AMD always uses flat mode right now */
+ genapic = &apic_flat;
+ goto print;
+ }
+
memset(cluster_cnt, 0, sizeof(cluster_cnt));
for (i = 0; i < NR_CPUS; i++) {
else
genapic = &apic_cluster;
+print:
printk(KERN_INFO "Setting APIC routing to %s\n", genapic->name);
}
#include <asm/pgtable.h>
#include <asm/kdebug.h>
+static DECLARE_MUTEX(kprobe_mutex);
+
/* kprobe_status settings */
#define KPROBE_HIT_ACTIVE 0x00000001
#define KPROBE_HIT_SS 0x00000002
int arch_prepare_kprobe(struct kprobe *p)
{
/* insn: must be on special executable page on x86_64. */
+	down(&kprobe_mutex);
	p->ainsn.insn = get_insn_slot();
+	up(&kprobe_mutex);
if (!p->ainsn.insn) {
return -ENOMEM;
}
- memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE);
return 0;
}
+void arch_copy_kprobe(struct kprobe *p)
+{
+ memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE);
+}
+
void arch_remove_kprobe(struct kprobe *p)
{
+	down(&kprobe_mutex);
	free_insn_slot(p->ainsn.insn);
+	up(&kprobe_mutex);
}
static inline void disarm_kprobe(struct kprobe *p, struct pt_regs *regs)
}
/* All out of space. Need to allocate a new page. Use slot 0.*/
- kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_ATOMIC);
+ kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL);
if (!kip) {
return NULL;
}
kip->insns = (kprobe_opcode_t*) __vmalloc(PAGE_SIZE,
- GFP_ATOMIC|__GFP_HIGHMEM, __pgprot(__PAGE_KERNEL_EXEC));
+ GFP_KERNEL|__GFP_HIGHMEM, __pgprot(__PAGE_KERNEL_EXEC));
if (!kip->insns) {
kfree(kip);
return NULL;
--- /dev/null
+/*
+ * Intel specific MCE features.
+ * Copyright 2004 Zwane Mwaikambo <zwane@linuxpower.ca>
+ */
+
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/percpu.h>
+#include <asm/processor.h>
+#include <asm/msr.h>
+#include <asm/mce.h>
+#include <asm/hw_irq.h>
+
+static DEFINE_PER_CPU(unsigned long, next_check);
+
+asmlinkage void smp_thermal_interrupt(void)
+{
+ struct mce m;
+
+ ack_APIC_irq();
+
+ irq_enter();
+ if (time_before(jiffies, __get_cpu_var(next_check)))
+ goto done;
+
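+	/* Rate-limit: report at most once per CPU every 300 seconds. */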
+ __get_cpu_var(next_check) = jiffies + HZ*300;
+ memset(&m, 0, sizeof(m));
+ m.cpu = smp_processor_id();
+ m.bank = MCE_THERMAL_BANK;
+ rdtscll(m.tsc);
+ rdmsrl(MSR_IA32_THERM_STATUS, m.status);
+ if (m.status & 0x1) {
+ printk(KERN_EMERG
+ "CPU%d: Temperature above threshold, cpu clock throttled\n", m.cpu);
+ add_taint(TAINT_MACHINE_CHECK);
+ } else {
+ printk(KERN_EMERG "CPU%d: Temperature/speed normal\n", m.cpu);
+ }
+
+ mce_log(&m);
+done:
+ irq_exit();
+}
+
+static void __init intel_init_thermal(struct cpuinfo_x86 *c)
+{
+ u32 l, h;
+ int tm2 = 0;
+ unsigned int cpu = smp_processor_id();
+
+ if (!cpu_has(c, X86_FEATURE_ACPI))
+ return;
+
+ if (!cpu_has(c, X86_FEATURE_ACC))
+ return;
+
+ /* first check if TM1 is already enabled by the BIOS, in which
+ * case there might be some SMM goo which handles it, so we can't even
+ * put a handler since it might be delivered via SMI already.
+ */
+ rdmsr(MSR_IA32_MISC_ENABLE, l, h);
+ h = apic_read(APIC_LVTTHMR);
+ if ((l & (1 << 3)) && (h & APIC_DM_SMI)) {
+ printk(KERN_DEBUG
+ "CPU%d: Thermal monitoring handled by SMI\n", cpu);
+ return;
+ }
+
+ if (cpu_has(c, X86_FEATURE_TM2) && (l & (1 << 13)))
+ tm2 = 1;
+
+ if (h & APIC_VECTOR_MASK) {
+ printk(KERN_DEBUG
+ "CPU%d: Thermal LVT vector (%#x) already "
+ "installed\n", cpu, (h & APIC_VECTOR_MASK));
+ return;
+ }
+
+ h = THERMAL_APIC_VECTOR;
+ h |= (APIC_DM_FIXED | APIC_LVT_MASKED);
+ apic_write_around(APIC_LVTTHMR, h);
+
+ rdmsr(MSR_IA32_THERM_INTERRUPT, l, h);
+ wrmsr(MSR_IA32_THERM_INTERRUPT, l | 0x03, h);
+
+ rdmsr(MSR_IA32_MISC_ENABLE, l, h);
+ wrmsr(MSR_IA32_MISC_ENABLE, l | (1 << 3), h);
+
+ l = apic_read(APIC_LVTTHMR);
+ apic_write_around(APIC_LVTTHMR, l & ~APIC_LVT_MASKED);
+ printk(KERN_INFO "CPU%d: Thermal monitoring enabled (%s)\n",
+ cpu, tm2 ? "TM2" : "TM1");
+ return;
+}
+
+void __init mce_intel_feature_init(struct cpuinfo_x86 *c)
+{
+ intel_init_thermal(c);
+}
* This is maintained separately from nmi_active because the NMI
* watchdog may also be driven from the I/O APIC timer.
*/
-static spinlock_t lapic_nmi_owner_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(lapic_nmi_owner_lock);
static unsigned int lapic_nmi_owner;
#define LAPIC_NMI_WATCHDOG (1<<0)
#define LAPIC_NMI_RESERVED (1<<1)
mdelay((10*1000)/nmi_hz); // wait 10 ticks
for (cpu = 0; cpu < NR_CPUS; cpu++) {
- if (!cpu_online(cpu))
+#ifdef CONFIG_SMP
+ /* Check cpu_callin_map here because that is set
+ after the timer is started. */
+ if (!cpu_isset(cpu, cpu_callin_map))
continue;
+#endif
if (cpu_pda[cpu].__nmi_count - counts[cpu] <= 5) {
printk("CPU#%d: NMI appears to be stuck (%d)!\n",
cpu,
-/* Originally gcc generated, modified by hand
+/* Copyright 2004,2005 Pavel Machek <pavel@suse.cz>, Andi Kleen <ak@suse.de>, Rafael J. Wysocki <rjw@sisk.pl>
*
- * This may not use any stack, nor any variable that is not "NoSave":
+ * Distribute under GPLv2.
+ *
+ * swsusp_arch_resume may not use any stack, nor any variable that is
+ * not "NoSave" during copying pages:
*
 * It's rewriting one kernel image with another. What is stack in "old"
* image could very well be data page in "new" image, and overwriting
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/page.h>
+#include <asm/offset.h>
ENTRY(swsusp_arch_suspend)
movq %rcx, %cr3;
movq %rax, %cr4; # turn PGE back on
+ movq pagedir_nosave(%rip), %rdx
+ /* compute the limit */
movl nr_copy_pages(%rip), %eax
- xorl %ecx, %ecx
- movq $0, %r10
testl %eax, %eax
jz done
-.L105:
- xorl %esi, %esi
- movq $0, %r11
- jmp .L104
- .p2align 4,,7
-copy_one_page:
- movq %r10, %rcx
-.L104:
- movq pagedir_nosave(%rip), %rdx
- movq %rcx, %rax
- salq $5, %rax
- movq 8(%rdx,%rax), %rcx
- movq (%rdx,%rax), %rax
- movzbl (%rsi,%rax), %eax
- movb %al, (%rsi,%rcx)
-
- movq %cr3, %rax; # flush TLB
- movq %rax, %cr3;
+ movq %rdx,%r8
+ movl $SIZEOF_PBE,%r9d
+ mul %r9 # with rax, clobbers rdx
+ movq %r8, %rdx
+ addq %r8, %rax
+loop:
+ /* get addresses from the pbe and copy the page */
+ movq pbe_address(%rdx), %rsi
+ movq pbe_orig_address(%rdx), %rdi
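+	/* copy 512 quadwords, i.e. one 4096-byte page */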
+ movq $512, %rcx
+ rep
+ movsq
- movq %r11, %rax
- incq %rax
- cmpq $4095, %rax
- movq %rax, %rsi
- movq %rax, %r11
- jbe copy_one_page
- movq %r10, %rax
- incq %rax
- movq %rax, %rcx
- movq %rax, %r10
- mov nr_copy_pages(%rip), %eax
- cmpq %rax, %rcx
- jb .L105
+ /* progress to the next pbe */
+ addq $SIZEOF_PBE, %rdx
+ cmpq %rax, %rdx
+ jb loop
done:
movl $24, %eax
movl %eax, %ds
inline void __const_udelay(unsigned long xloops)
{
- __delay(((xloops * current_cpu_data.loops_per_jiffy) >> 32) * HZ);
+ __delay(((xloops * cpu_data[_smp_processor_id()].loops_per_jiffy) >> 32) * HZ);
}
void __udelay(unsigned long usecs)
* AMD K8 NUMA support.
* Discover the memory map and associated nodes.
*
- * Doesn't use the ACPI SRAT table because it has a questionable license.
- * Instead the northbridge registers are read directly.
- * XXX in 2.5 we could use the generic SRAT code
+ * This version reads it directly from the K8 northbridge.
*
* Copyright 2002,2003 Andi Kleen, SuSE Labs.
*/
int __init k8_scan_nodes(unsigned long start, unsigned long end)
{
unsigned long prevbase;
- struct node nodes[MAXNODE];
+ struct node nodes[8];
int nodeid, i, nb;
int found = 0;
u32 reg;
+ unsigned numnodes;
+ nodemask_t nodes_parsed;
+
+ nodes_clear(nodes_parsed);
nb = find_northbridge();
if (nb < 0)
printk(KERN_INFO "Scanning NUMA topology in Northbridge %d\n", nb);
reg = read_pci_config(0, nb, 0, 0x60);
- numnodes = ((reg >> 4) & 7) + 1;
+ numnodes = ((reg >> 4) & 0xF) + 1;
- printk(KERN_INFO "Number of nodes %d (%x)\n", numnodes, reg);
+ printk(KERN_INFO "Number of nodes %d\n", numnodes);
memset(&nodes,0,sizeof(nodes));
prevbase = 0;
nodeid = limit & 7;
if ((base & 3) == 0) {
- if (i < numnodes)
+ if (i < numnodes)
printk("Skipping disabled node %d\n", i);
continue;
}
- if (nodeid >= numnodes) {
+ if (nodeid >= numnodes) {
printk("Ignoring excess node %d (%lx:%lx)\n", nodeid,
base, limit);
continue;
nodeid, (base>>8)&3, (limit>>8) & 3);
return -1;
}
- if (node_online(nodeid)) {
+ if (node_isset(nodeid, nodes_parsed)) {
printk(KERN_INFO "Node %d already present. Skipping\n",
nodeid);
continue;
nodes[nodeid].end = limit;
prevbase = base;
+
+ node_set(nodeid, nodes_parsed);
}
if (!found)
return -1;
- memnode_shift = compute_hash_shift(nodes);
+ memnode_shift = compute_hash_shift(nodes, numnodes);
if (memnode_shift < 0) {
printk(KERN_ERR "No NUMA node hash function found. Contact maintainer\n");
return -1;
}
printk(KERN_INFO "Using node hash shift of %d\n", memnode_shift);
- for (i = 0; i < MAXNODE; i++) {
+ for (i = 0; i < 8; i++) {
if (nodes[i].start != nodes[i].end) {
/* assume 1:1 NODE:CPU */
cpu_to_node[i] = i;
- setup_node_bootmem(i, nodes[i].start, nodes[i].end);
- }
+ setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+ }
}
numa_init_array();
--- /dev/null
+/*
+ * ACPI 3.0 based NUMA setup
+ * Copyright 2004 Andi Kleen, SuSE Labs.
+ *
+ * Reads the ACPI SRAT table to figure out what memory belongs to which CPUs.
+ *
+ * Called from acpi_numa_init while reading the SRAT and SLIT tables.
+ * Assumes all memory regions belonging to a single proximity domain
+ * are in one chunk. Holes between them will be included in the node.
+ */
+
+#include <linux/kernel.h>
+#include <linux/acpi.h>
+#include <linux/mmzone.h>
+#include <linux/bitmap.h>
+#include <linux/module.h>
+#include <linux/topology.h>
+#include <asm/proto.h>
+#include <asm/numa.h>
+
+static struct acpi_table_slit *acpi_slit;
+
+static nodemask_t nodes_parsed __initdata;
+static nodemask_t nodes_found __initdata;
+static struct node nodes[MAX_NUMNODES] __initdata;
+static __u8 pxm2node[256] = { [0 ... 255] = 0xff };
+
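+/* Map an ACPI proximity domain (PXM) to a Linux node number, allocating a
+ * new node the first time a PXM is seen; 0xff in pxm2node means unassigned. */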
+static __init int setup_node(int pxm)
+{
+ unsigned node = pxm2node[pxm];
+ if (node == 0xff) {
+ if (nodes_weight(nodes_found) >= MAX_NUMNODES)
+ return -1;
+ node = first_unset_node(nodes_found);
+ node_set(node, nodes_found);
+ pxm2node[pxm] = node;
+ }
+ return pxm2node[pxm];
+}
+
+static __init int conflicting_nodes(unsigned long start, unsigned long end)
+{
+ int i;
+ for_each_online_node(i) {
+ struct node *nd = &nodes[i];
+ if (nd->start == nd->end)
+ continue;
+ if (nd->end > start && nd->start < end)
+ return 1;
+ if (nd->end == end && nd->start == start)
+ return 1;
+ }
+ return -1;
+}
+
+static __init void cutoff_node(int i, unsigned long start, unsigned long end)
+{
+ struct node *nd = &nodes[i];
+ if (nd->start < start) {
+ nd->start = start;
+ if (nd->end < nd->start)
+ nd->start = nd->end;
+ }
+ if (nd->end > end) {
+ if (!(end & 0xfff))
+ end--;
+ nd->end = end;
+ if (nd->start > nd->end)
+ nd->start = nd->end;
+ }
+}
+
+static __init void bad_srat(void)
+{
+ printk(KERN_ERR "SRAT: SRAT not used.\n");
+ acpi_numa = -1;
+}
+
+static __init inline int srat_disabled(void)
+{
+ return numa_off || acpi_numa < 0;
+}
+
+/* Callback for SLIT parsing */
+void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
+{
+ acpi_slit = slit;
+}
+
+/* Callback for Proximity Domain -> LAPIC mapping */
+void __init
+acpi_numa_processor_affinity_init(struct acpi_table_processor_affinity *pa)
+{
+ int pxm, node;
+ if (srat_disabled() || pa->flags.enabled == 0)
+ return;
+ pxm = pa->proximity_domain;
+ node = setup_node(pxm);
+ if (node < 0) {
+ printk(KERN_ERR "SRAT: Too many proximity domains %x\n", pxm);
+ bad_srat();
+ return;
+ }
+ if (pa->apic_id >= NR_CPUS) {
+ printk(KERN_ERR "SRAT: lapic %u too large.\n",
+ pa->apic_id);
+ bad_srat();
+ return;
+ }
+ cpu_to_node[pa->apic_id] = node;
+ acpi_numa = 1;
+ printk(KERN_INFO "SRAT: PXM %u -> APIC %u -> Node %u\n",
+ pxm, pa->apic_id, node);
+}
+
+/* Callback for parsing of the Proximity Domain <-> Memory Area mappings */
+void __init
+acpi_numa_memory_affinity_init(struct acpi_table_memory_affinity *ma)
+{
+ struct node *nd;
+ unsigned long start, end;
+ int node, pxm;
+ int i;
+
+ if (srat_disabled() || ma->flags.enabled == 0)
+ return;
+ /* hotplug bit is ignored for now */
+ pxm = ma->proximity_domain;
+ node = setup_node(pxm);
+ if (node < 0) {
+ printk(KERN_ERR "SRAT: Too many proximity domains.\n");
+ bad_srat();
+ return;
+ }
+ start = ma->base_addr_lo | ((u64)ma->base_addr_hi << 32);
+ end = start + (ma->length_lo | ((u64)ma->length_hi << 32));
+ i = conflicting_nodes(start, end);
+ if (i >= 0) {
+ printk(KERN_ERR
+ "SRAT: pxm %d overlap %lx-%lx with node %d(%Lx-%Lx)\n",
+ pxm, start, end, i, nodes[i].start, nodes[i].end);
+ bad_srat();
+ return;
+ }
+ nd = &nodes[node];
+ if (!node_test_and_set(node, nodes_parsed)) {
+ nd->start = start;
+ nd->end = end;
+ } else {
+ if (start < nd->start)
+ nd->start = start;
+ if (nd->end < end)
+ nd->end = end;
+ }
+ if (!(nd->end & 0xfff))
+ nd->end--;
+ printk(KERN_INFO "SRAT: Node %u PXM %u %Lx-%Lx\n", node, pxm,
+ nd->start, nd->end);
+}
+
+void __init acpi_numa_arch_fixup(void) {}
+
+/* Use the information discovered above to actually set up the nodes. */
+int __init acpi_scan_nodes(unsigned long start, unsigned long end)
+{
+ int i;
+ if (acpi_numa <= 0)
+ return -1;
+ memnode_shift = compute_hash_shift(nodes, nodes_weight(nodes_parsed));
+ if (memnode_shift < 0) {
+ printk(KERN_ERR
+ "SRAT: No NUMA node hash function found. Contact maintainer\n");
+ bad_srat();
+ return -1;
+ }
+ for (i = 0; i < MAX_NUMNODES; i++) {
+ if (!node_isset(i, nodes_parsed))
+ continue;
+ cutoff_node(i, start, end);
+ if (nodes[i].start == nodes[i].end) {
+ node_clear(i, nodes_parsed);
+ continue;
+ }
+ setup_node_bootmem(i, nodes[i].start, nodes[i].end);
+ }
+ for (i = 0; i < NR_CPUS; i++) {
+ if (cpu_to_node[i] == NUMA_NO_NODE)
+ continue;
+ if (!node_isset(cpu_to_node[i], nodes_parsed))
+ cpu_to_node[i] = NUMA_NO_NODE;
+ }
+ numa_init_array();
+ return 0;
+}
+
+int node_to_pxm(int n)
+{
+ int i;
+ if (pxm2node[n] == n)
+ return n;
+ for (i = 0; i < 256; i++)
+ if (pxm2node[i] == n)
+ return i;
+ return 0;
+}
+
+int __node_distance(int a, int b)
+{
+ int index;
+
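+	/* Without a SLIT, use the ACPI default distances: 10 (local), 20 (remote). */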
+ if (!acpi_slit)
+ return a == b ? 10 : 20;
+ index = acpi_slit->localities * node_to_pxm(a);
+ return acpi_slit->entry[index + node_to_pxm(b)];
+}
+
+EXPORT_SYMBOL(__node_distance);
oprofilefs.o oprofile_stats.o \
timer_int.o )
-OPROFILE-y := init.o
+OPROFILE-y := init.o backtrace.o
OPROFILE-$(CONFIG_X86_LOCAL_APIC) += nmi_int.o op_model_athlon.o op_model_p4.o \
op_model_ppro.o
OPROFILE-$(CONFIG_X86_IO_APIC) += nmi_timer_int.o
return (x >> y) | (x << (64 - y));
}
-const u64 sha512_K[80] = {
+static const u64 sha512_K[80] = {
0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL,
0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL,
0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL,
int fd1772_init(void)
{
- static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(lock);
int i, err = -ENOMEM;
if (!machine_is_archimedes())
static void (*do_mfm)(void) = NULL;
static struct request_queue *mfm_queue;
-static spinlock_t mfm_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mfm_lock);
#define MAJOR_NR MFM_ACORN_MAJOR
#define QUEUE (mfm_queue)
#
# ACPI Bus and Device Drivers
#
+processor-objs += processor_core.o processor_throttling.o \
+ processor_idle.o processor_thermal.o
+ifdef CONFIG_CPU_FREQ
+processor-objs += processor_perflib.o
+endif
+
obj-$(CONFIG_ACPI_BUS) += sleep/
obj-$(CONFIG_ACPI_BUS) += bus.o
obj-$(CONFIG_ACPI_AC) += ac.o
obj-$(CONFIG_ACPI_PCI) += pci_root.o pci_link.o pci_irq.o pci_bind.o
obj-$(CONFIG_ACPI_POWER) += power.o
obj-$(CONFIG_ACPI_PROCESSOR) += processor.o
+obj-$(CONFIG_ACPI_CONTAINER) += container.o
obj-$(CONFIG_ACPI_THERMAL) += thermal.o
obj-$(CONFIG_ACPI_SYSTEM) += system.o event.o
obj-$(CONFIG_ACPI_DEBUG) += debug.o
--- /dev/null
+/*
+ * acpi_container.c - ACPI Generic Container Driver
+ * ($Revision: )
+ *
+ * Copyright (C) 2004 Anil S Keshavamurthy (anil.s.keshavamurthy@intel.com)
+ * Copyright (C) 2004 Keiichiro Tokunaga (tokunaga.keiich@jp.fujitsu.com)
+ * Copyright (C) 2004 Motoyuki Ito (motoyuki@soft.fujitsu.com)
+ * Copyright (C) 2004 Intel Corp.
+ * Copyright (C) 2004 FUJITSU LIMITED
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/acpi.h>
+#include <acpi/acpi_bus.h>
+#include <acpi/acpi_drivers.h>
+#include <acpi/container.h>
+
+#define ACPI_CONTAINER_DRIVER_NAME "ACPI container driver"
+#define ACPI_CONTAINER_DEVICE_NAME "ACPI container device"
+#define ACPI_CONTAINER_CLASS "container"
+
+#define INSTALL_NOTIFY_HANDLER 1
+#define UNINSTALL_NOTIFY_HANDLER 2
+
+#define ACPI_CONTAINER_COMPONENT 0x01000000
+#define _COMPONENT ACPI_CONTAINER_COMPONENT
+ACPI_MODULE_NAME ("acpi_container")
+
+MODULE_AUTHOR("Anil S Keshavamurthy");
+MODULE_DESCRIPTION(ACPI_CONTAINER_DRIVER_NAME);
+MODULE_LICENSE("GPL");
+
+#define ACPI_STA_PRESENT (0x00000001)
+
+static int acpi_container_add(struct acpi_device *device);
+static int acpi_container_remove(struct acpi_device *device, int type);
+
+static struct acpi_driver acpi_container_driver = {
+ .name = ACPI_CONTAINER_DRIVER_NAME,
+ .class = ACPI_CONTAINER_CLASS,
+ .ids = "ACPI0004,PNP0A05,PNP0A06",
+ .ops = {
+ .add = acpi_container_add,
+ .remove = acpi_container_remove,
+ },
+};
+
+
+/*******************************************************************/
+
+static int
+is_device_present(acpi_handle handle)
+{
+ acpi_handle temp;
+ acpi_status status;
+ unsigned long sta;
+
+ ACPI_FUNCTION_TRACE("is_device_present");
+
+ status = acpi_get_handle(handle, "_STA", &temp);
+ if (ACPI_FAILURE(status))
+		return_VALUE(1); /* _STA not found, assume device present */
+
+ status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
+ if (ACPI_FAILURE(status))
+ return_VALUE(0); /* Firmware error */
+
+ return_VALUE((sta & ACPI_STA_PRESENT) == ACPI_STA_PRESENT);
+}
+
+/*******************************************************************/
+static int
+acpi_container_add(struct acpi_device *device)
+{
+ struct acpi_container *container;
+
+ ACPI_FUNCTION_TRACE("acpi_container_add");
+
+ if (!device) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "device is NULL\n"));
+ return_VALUE(-EINVAL);
+ }
+
+ container = kmalloc(sizeof(struct acpi_container), GFP_KERNEL);
+ if(!container)
+ return_VALUE(-ENOMEM);
+
+ memset(container, 0, sizeof(struct acpi_container));
+ container->handle = device->handle;
+ strcpy(acpi_device_name(device), ACPI_CONTAINER_DEVICE_NAME);
+ strcpy(acpi_device_class(device), ACPI_CONTAINER_CLASS);
+ acpi_driver_data(device) = container;
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device <%s> bid <%s>\n", \
+ acpi_device_name(device), acpi_device_bid(device)));
+
+
+ return_VALUE(0);
+}
+
+static int
+acpi_container_remove(struct acpi_device *device, int type)
+{
+ acpi_status status = AE_OK;
+ struct acpi_container *pc = NULL;
+ pc = (struct acpi_container*) acpi_driver_data(device);
+
+ if (pc)
+ kfree(pc);
+
+ return status;
+}
+
+
+static int
+container_device_add(struct acpi_device **device, acpi_handle handle)
+{
+ acpi_handle phandle;
+ struct acpi_device *pdev;
+ int result;
+
+ ACPI_FUNCTION_TRACE("container_device_add");
+
+ if (acpi_get_parent(handle, &phandle)) {
+ return_VALUE(-ENODEV);
+ }
+
+ if (acpi_bus_get_device(phandle, &pdev)) {
+ return_VALUE(-ENODEV);
+ }
+
+ if (acpi_bus_add(device, pdev, handle, ACPI_BUS_TYPE_DEVICE)) {
+ return_VALUE(-ENODEV);
+ }
+
+ result = acpi_bus_scan(*device);
+
+ return_VALUE(result);
+}
+
+static void
+container_notify_cb(acpi_handle handle, u32 type, void *context)
+{
+ struct acpi_device *device = NULL;
+ int result;
+ int present;
+ acpi_status status;
+
+ ACPI_FUNCTION_TRACE("container_notify_cb");
+
+ present = is_device_present(handle);
+
+ switch (type) {
+ case ACPI_NOTIFY_BUS_CHECK:
+ /* Fall through */
+ case ACPI_NOTIFY_DEVICE_CHECK:
+ printk("Container driver received %s event\n",
+ (type == ACPI_NOTIFY_BUS_CHECK)?
+ "ACPI_NOTIFY_BUS_CHECK":"ACPI_NOTIFY_DEVICE_CHECK");
+ if (present) {
+ status = acpi_bus_get_device(handle, &device);
+ if (ACPI_FAILURE(status) || !device) {
+ result = container_device_add(&device, handle);
+ if (!result)
+ kobject_hotplug(&device->kobj, KOBJ_ONLINE);
+ } else {
+				/* device already exists, so this is a remove request */
+ kobject_hotplug(&device->kobj, KOBJ_OFFLINE);
+ }
+ }
+ break;
+ case ACPI_NOTIFY_EJECT_REQUEST:
+ if (!acpi_bus_get_device(handle, &device) && device) {
+ kobject_hotplug(&device->kobj, KOBJ_OFFLINE);
+ }
+ break;
+ default:
+ break;
+ }
+ return_VOID;
+}
+
+static acpi_status
+container_walk_namespace_cb(acpi_handle handle,
+ u32 lvl,
+ void *context,
+ void **rv)
+{
+ char *hid = NULL;
+ struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+ struct acpi_device_info *info;
+ acpi_status status;
+ int *action = context;
+
+ ACPI_FUNCTION_TRACE("container_walk_namespace_cb");
+
+ status = acpi_get_object_info(handle, &buffer);
+ if (ACPI_FAILURE(status) || !buffer.pointer) {
+ return_ACPI_STATUS(AE_OK);
+ }
+
+ info = buffer.pointer;
+ if (info->valid & ACPI_VALID_HID)
+ hid = info->hardware_id.value;
+
+ if (hid == NULL) {
+ goto end;
+ }
+
+ if (strcmp(hid, "ACPI0004") && strcmp(hid, "PNP0A05") &&
+ strcmp(hid, "PNP0A06")) {
+ goto end;
+ }
+
+ switch(*action) {
+ case INSTALL_NOTIFY_HANDLER:
+ acpi_install_notify_handler(handle,
+ ACPI_SYSTEM_NOTIFY,
+ container_notify_cb,
+ NULL);
+ break;
+ case UNINSTALL_NOTIFY_HANDLER:
+ acpi_remove_notify_handler(handle,
+ ACPI_SYSTEM_NOTIFY,
+ container_notify_cb);
+ break;
+ default:
+ break;
+ }
+
+end:
+ acpi_os_free(buffer.pointer);
+
+ return_ACPI_STATUS(AE_OK);
+}
+
+
+int __init
+acpi_container_init(void)
+{
+ int result = 0;
+ int action = INSTALL_NOTIFY_HANDLER;
+
+ result = acpi_bus_register_driver(&acpi_container_driver);
+ if (result < 0) {
+ return(result);
+ }
+
+ /* register notify handler to every container device */
+ acpi_walk_namespace(ACPI_TYPE_DEVICE,
+ ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX,
+ container_walk_namespace_cb,
+ &action, NULL);
+
+ return(0);
+}
+
+void __exit
+acpi_container_exit(void)
+{
+ int action = UNINSTALL_NOTIFY_HANDLER;
+
+ ACPI_FUNCTION_TRACE("acpi_container_exit");
+
+ acpi_walk_namespace(ACPI_TYPE_DEVICE,
+ ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX,
+ container_walk_namespace_cb,
+ &action, NULL);
+
+ acpi_bus_unregister_driver(&acpi_container_driver);
+
+ return_VOID;
+}
+
+module_init(acpi_container_init);
+module_exit(acpi_container_exit);
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
case AML_REVISION_OP:
- obj_desc->integer.value = ACPI_CA_SUPPORT_LEVEL;
+ obj_desc->integer.value = ACPI_CA_VERSION;
break;
default:
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
}
/*
- * If there is no parent, we are executing at the method level.
- * An executing method typically has no parent, since each method
- * is parsed separately.
+ * If there is no parent, or the parent is a scope_op, we are executing
+ * at the method level. An executing method typically has no parent,
+ * since each method is parsed separately. A method invoked externally
+ * via execute_control_method has a scope_op as the parent.
*/
- if (!op->common.parent) {
+ if ((!op->common.parent) ||
+ (op->common.parent->common.aml_opcode == AML_SCOPE_OP)) {
/*
* If this is the last statement in the method, we know it is not a
* Return() operator (would not come here.) The following code is the
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
goto cleanup;
}
- /* Resolve all operands */
+ /*
+ * All opcodes require operand resolution, with the only exceptions
+ * being the object_type and size_of operators.
+ */
+ if (!(walk_state->op_info->flags & AML_NO_OPERAND_RESOLVE)) {
+ /* Resolve all operands */
+
+ status = acpi_ex_resolve_operands (walk_state->opcode,
+ &(walk_state->operands [walk_state->num_operands -1]),
+ walk_state);
+ if (ACPI_SUCCESS (status)) {
+ ACPI_DUMP_OPERANDS (ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE,
+ acpi_ps_get_opcode_name (walk_state->opcode),
+ walk_state->num_operands, "after ex_resolve_operands");
+ }
+ }
- status = acpi_ex_resolve_operands (walk_state->opcode,
- &(walk_state->operands [walk_state->num_operands -1]),
- walk_state);
if (ACPI_SUCCESS (status)) {
- ACPI_DUMP_OPERANDS (ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE,
- acpi_ps_get_opcode_name (walk_state->opcode),
- walk_state->num_operands, "after ex_resolve_operands");
-
/*
* Dispatch the request to the appropriate interpreter handler
* routine. There is one routine per opcode "type" based upon the
break;
}
+ /* Done with this result state (Now that operand stack is built) */
+
+ status = acpi_ds_result_stack_pop (walk_state);
+ if (ACPI_FAILURE (status)) {
+ goto cleanup;
+ }
+
/*
* If a result object was returned from above, push it on the
* current result stack
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
}
+#ifdef ACPI_ENABLE_OBJECT_CACHE
/******************************************************************************
*
* FUNCTION: acpi_ds_delete_walk_state_cache
acpi_ut_delete_generic_cache (ACPI_MEM_LIST_WALK);
return_VOID;
}
+#endif
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
if ((gpe_event_info->flags & ACPI_GPE_XRUPT_TYPE_MASK) == ACPI_GPE_EDGE_TRIGGERED) {
status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
- ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: Unable to clear GPE[%2X]\n",
- gpe_number));
+ ACPI_REPORT_ERROR (("acpi_ev_gpe_dispatch: %s, Unable to clear GPE[%2X]\n",
+ acpi_format_exception (status), gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
}
status = acpi_hw_clear_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
- "acpi_ev_gpe_dispatch: Unable to clear GPE[%2X]\n",
- gpe_number));
+ "acpi_ev_gpe_dispatch: %s, Unable to clear GPE[%2X]\n",
+ acpi_format_exception (status), gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
}
status = acpi_ev_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
- "acpi_ev_gpe_dispatch: Unable to disable GPE[%2X]\n",
- gpe_number));
+ "acpi_ev_gpe_dispatch: %s, Unable to disable GPE[%2X]\n",
+ acpi_format_exception (status), gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
* Execute the method associated with the GPE
* NOTE: Level-triggered GPEs are cleared after the method completes.
*/
- if (ACPI_FAILURE (acpi_os_queue_for_execution (OSD_PRIORITY_GPE,
- acpi_ev_asynch_execute_gpe_method,
- gpe_event_info))) {
+ status = acpi_os_queue_for_execution (OSD_PRIORITY_GPE,
+ acpi_ev_asynch_execute_gpe_method, gpe_event_info);
+ if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
- "acpi_ev_gpe_dispatch: Unable to queue handler for GPE[%2X], event is disabled\n",
- gpe_number));
+ "acpi_ev_gpe_dispatch: %s, Unable to queue handler for GPE[%2X] - event disabled\n",
+ acpi_format_exception (status), gpe_number));
}
break;
status = acpi_ev_disable_gpe (gpe_event_info);
if (ACPI_FAILURE (status)) {
ACPI_REPORT_ERROR ((
- "acpi_ev_gpe_dispatch: Unable to disable GPE[%2X]\n",
- gpe_number));
+ "acpi_ev_gpe_dispatch: %s, Unable to disable GPE[%2X]\n",
+ acpi_format_exception (status), gpe_number));
return_VALUE (ACPI_INTERRUPT_NOT_HANDLED);
}
break;
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/*
* Runtime option: Should Wake GPEs be enabled at runtime? The default
- * is No,they should only be enabled just as the machine goes to sleep.
+ * is No, they should only be enabled just as the machine goes to sleep.
*/
if (acpi_gbl_leave_wake_gpes_disabled) {
/*
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* DESCRIPTION: Clear an ACPI event (fixed)
*
******************************************************************************/
-#ifdef ACPI_FUTURE_USAGE
+
acpi_status
acpi_clear_event (
u32 event)
return_ACPI_STATUS (status);
}
EXPORT_SYMBOL(acpi_clear_event);
-#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
ACPI_MEMSET (&table_info, 0, sizeof (struct acpi_table_desc));
- table_info.type = 5;
+ table_info.type = ACPI_TABLE_SSDT;
table_info.pointer = table;
table_info.length = (acpi_size) table->length;
table_info.allocation = ACPI_MEM_ALLOCATED;
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*/
result = 0;
- /* Transfer no more than an integer's worth of data */
-
- if (count > acpi_gbl_integer_byte_width) {
- count = acpi_gbl_integer_byte_width;
- }
-
/*
* String conversion is different than Buffer conversion
*/
case ACPI_TYPE_BUFFER:
+ /* Check for zero-length buffer */
+
+ if (!count) {
+ return_ACPI_STATUS (AE_AML_BUFFER_LIMIT);
+ }
+
+ /* Transfer no more than an integer's worth of data */
+
+ if (count > acpi_gbl_integer_byte_width) {
+ count = acpi_gbl_integer_byte_width;
+ }
+
/*
* Convert buffer to an integer - we simply grab enough raw data
* from the buffer to fill an integer
/* Save the Result */
return_desc->integer.value = result;
+ acpi_ex_truncate_for32bit_table (return_desc);
*result_desc = return_desc;
return_ACPI_STATUS (AE_OK);
}
/*
* Create a new Buffer object
* Size will be the string length
+ *
+ * NOTE: Add one to the string length to include the null terminator.
+ * The ACPI spec is unclear on this subject, but there is existing
+ * ASL/AML code that depends on the null being transferred to the new
+ * buffer.
*/
- return_desc = acpi_ut_create_buffer_object ((acpi_size) obj_desc->string.length);
+ return_desc = acpi_ut_create_buffer_object ((acpi_size) obj_desc->string.length + 1);
if (!return_desc) {
return_ACPI_STATUS (AE_NO_MEMORY);
}
{
union acpi_operand_object *return_desc;
u8 *new_buf;
+ u32 i;
u32 string_length = 0;
u16 base = 16;
- u32 i;
u8 separator = ',';
case ACPI_TYPE_BUFFER:
+ /* Setup string length, base, and separator */
+
switch (type) {
case ACPI_EXPLICIT_CONVERT_DECIMAL: /* Used by to_decimal_string operator */
/*
* decimal values separated by commas."
*/
base = 10;
- string_length = obj_desc->buffer.length; /* 4 chars for each decimal */
- /*lint -fallthrough */
+ /*
+ * Calculate the final string length. Individual string values
+ * are variable length (include separator for each)
+ */
+ for (i = 0; i < obj_desc->buffer.length; i++) {
+ if (obj_desc->buffer.pointer[i] >= 100) {
+ string_length += 4;
+ }
+ else if (obj_desc->buffer.pointer[i] >= 10) {
+ string_length += 3;
+ }
+ else {
+ string_length += 2;
+ }
+ }
+ break;
case ACPI_IMPLICIT_CONVERT_HEX:
/*
*"The entire contents of the buffer are converted to a string of
* two-character hexadecimal numbers, each separated by a space."
*/
- if (type == ACPI_IMPLICIT_CONVERT_HEX) {
- separator = ' ';
- }
-
- /*lint -fallthrough */
+ separator = ' ';
+ string_length = (obj_desc->buffer.length * 3);
+ break;
case ACPI_EXPLICIT_CONVERT_HEX: /* Used by to_hex_string operator */
/*
* From ACPI: "If Data is a buffer, it is converted to a string of
* hexadecimal values separated by commas."
*/
- string_length += (obj_desc->buffer.length * 3);
- if (string_length > ACPI_MAX_STRING_CONVERSION) /* ACPI limit */ {
- return_ACPI_STATUS (AE_AML_STRING_LIMIT);
- }
-
- /* Create a new string object and string buffer */
-
- return_desc = acpi_ut_create_string_object ((acpi_size) string_length -1);
- if (!return_desc) {
- return_ACPI_STATUS (AE_NO_MEMORY);
- }
+ string_length = (obj_desc->buffer.length * 3);
+ break;
- new_buf = return_desc->buffer.pointer;
+ default:
+ return_ACPI_STATUS (AE_BAD_PARAMETER);
+ }
- /*
- * Convert buffer bytes to hex or decimal values
- * (separated by commas)
- */
- for (i = 0; i < obj_desc->buffer.length; i++) {
- new_buf += acpi_ex_convert_to_ascii (
- (acpi_integer) obj_desc->buffer.pointer[i], base,
- new_buf, 1);
- *new_buf++ = separator; /* each separated by a comma or space */
- }
+ /*
+ * Perform the conversion.
+ * (-1 because of extra separator included in string_length from above)
+ */
+ string_length--;
+ if (string_length > ACPI_MAX_STRING_CONVERSION) /* ACPI limit */ {
+ return_ACPI_STATUS (AE_AML_STRING_LIMIT);
+ }
- /* Null terminate the string (overwrites final comma from above) */
+ /*
+ * Create a new string object and string buffer
+ */
+ return_desc = acpi_ut_create_string_object ((acpi_size) string_length);
+ if (!return_desc) {
+ return_ACPI_STATUS (AE_NO_MEMORY);
+ }
- new_buf--;
- *new_buf = 0;
+ new_buf = return_desc->buffer.pointer;
- /* Recalculate length */
+ /*
+ * Convert buffer bytes to hex or decimal values
+ * (separated by commas or spaces)
+ */
+ for (i = 0; i < obj_desc->buffer.length; i++) {
+ new_buf += acpi_ex_convert_to_ascii (
+ (acpi_integer) obj_desc->buffer.pointer[i], base,
+ new_buf, 1);
+ *new_buf++ = separator; /* each separated by a comma or space */
+ }
- return_desc->string.length = ACPI_STRLEN (return_desc->string.pointer);
- break;
+ /* Null terminate the string (overwrites final comma/space from above) */
- default:
- return_ACPI_STATUS (AE_BAD_PARAMETER);
- }
+ new_buf--;
+ *new_buf = 0;
break;
default:
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/* obj_desc is a valid object */
if (depth > 0) {
- ACPI_DEBUG_PRINT_RAW ((ACPI_DB_EXEC, "%*s[%u] ", depth, " ", depth));
+ ACPI_DEBUG_PRINT ((ACPI_DB_EXEC, "%*s[%u] %p ",
+ depth, " ", depth, obj_desc));
+ }
+ else {
+ ACPI_DEBUG_PRINT ((ACPI_DB_EXEC, "%p ", obj_desc));
}
- ACPI_DEBUG_PRINT_RAW ((ACPI_DB_EXEC, "%p ", obj_desc));
-
switch (ACPI_GET_OBJECT_TYPE (obj_desc)) {
case ACPI_TYPE_LOCAL_REFERENCE:
acpi_ex_out_integer ("bit_length", obj_desc->common_field.bit_length);
acpi_ex_out_integer ("fld_bit_offset", obj_desc->common_field.start_field_bit_offset);
acpi_ex_out_integer ("base_byte_offset", obj_desc->common_field.base_byte_offset);
- acpi_ex_out_integer ("datum_valid_bits", obj_desc->common_field.datum_valid_bits);
- acpi_ex_out_integer ("end_fld_valid_bits",obj_desc->common_field.end_field_valid_bits);
- acpi_ex_out_integer ("end_buf_valid_bits",obj_desc->common_field.end_buffer_valid_bits);
acpi_ex_out_pointer ("parent_node", obj_desc->common_field.node);
switch (ACPI_GET_OBJECT_TYPE (obj_desc)) {
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
}
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_get_buffer_datum
- *
- * PARAMETERS: Datum - Where the Datum is returned
- * Buffer - Raw field buffer
- * buffer_length - Entire length (used for big-endian only)
- * byte_granularity - 1/2/4/8 Granularity of the field
- * (aka Datum Size)
- * buffer_offset - Datum offset into the buffer
- *
- * RETURN: none
- *
- * DESCRIPTION: Get a datum from the buffer according to the buffer field
- * byte granularity
- *
- ******************************************************************************/
-
-void
-acpi_ex_get_buffer_datum (
- acpi_integer *datum,
- void *buffer,
- u32 buffer_length,
- u32 byte_granularity,
- u32 buffer_offset)
-{
- u32 index;
-
-
- ACPI_FUNCTION_TRACE_U32 ("ex_get_buffer_datum", byte_granularity);
-
-
- /* Get proper index into buffer (handles big/little endian) */
-
- index = ACPI_BUFFER_INDEX (buffer_length, buffer_offset, byte_granularity);
-
- /* Move the requested number of bytes */
-
- switch (byte_granularity) {
- case ACPI_FIELD_BYTE_GRANULARITY:
-
- *datum = ((u8 *) buffer) [index];
- break;
-
- case ACPI_FIELD_WORD_GRANULARITY:
-
- ACPI_MOVE_16_TO_64 (datum, &(((u16 *) buffer) [index]));
- break;
-
- case ACPI_FIELD_DWORD_GRANULARITY:
-
- ACPI_MOVE_32_TO_64 (datum, &(((u32 *) buffer) [index]));
- break;
-
- case ACPI_FIELD_QWORD_GRANULARITY:
-
- ACPI_MOVE_64_TO_64 (datum, &(((u64 *) buffer) [index]));
- break;
-
- default:
- /* Should not get here */
- break;
- }
-
- return_VOID;
-}
-
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_set_buffer_datum
- *
- * PARAMETERS: merged_datum - Value to store
- * Buffer - Receiving buffer
- * buffer_length - Entire length (used for big-endian only)
- * byte_granularity - 1/2/4/8 Granularity of the field
- * (aka Datum Size)
- * buffer_offset - Datum offset into the buffer
- *
- * RETURN: none
- *
- * DESCRIPTION: Store the merged datum to the buffer according to the
- * byte granularity
- *
- ******************************************************************************/
-
-void
-acpi_ex_set_buffer_datum (
- acpi_integer merged_datum,
- void *buffer,
- u32 buffer_length,
- u32 byte_granularity,
- u32 buffer_offset)
-{
- u32 index;
-
-
- ACPI_FUNCTION_TRACE_U32 ("ex_set_buffer_datum", byte_granularity);
-
-
- /* Get proper index into buffer (handles big/little endian) */
-
- index = ACPI_BUFFER_INDEX (buffer_length, buffer_offset, byte_granularity);
-
- /* Move the requested number of bytes */
-
- switch (byte_granularity) {
- case ACPI_FIELD_BYTE_GRANULARITY:
-
- ((u8 *) buffer) [index] = (u8) merged_datum;
- break;
-
- case ACPI_FIELD_WORD_GRANULARITY:
-
- ACPI_MOVE_64_TO_16 (&(((u16 *) buffer)[index]), &merged_datum);
- break;
-
- case ACPI_FIELD_DWORD_GRANULARITY:
-
- ACPI_MOVE_64_TO_32 (&(((u32 *) buffer)[index]), &merged_datum);
- break;
-
- case ACPI_FIELD_QWORD_GRANULARITY:
-
- ACPI_MOVE_64_TO_64 (&(((u64 *) buffer)[index]), &merged_datum);
- break;
-
- default:
- /* Should not get here */
- break;
- }
-
- return_VOID;
-}
-
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_common_buffer_setup
- *
- * PARAMETERS: obj_desc - Field object
- * buffer_length - Length of caller's buffer
- * datum_count - Where the datum_count is returned
- *
- * RETURN: Status, datum_count
- *
- * DESCRIPTION: Common code to validate the incoming buffer size and compute
- * the number of field "datums" that must be read or written.
- * A "datum" is the smallest unit that can be read or written
- * to the field, it is either 1,2,4, or 8 bytes.
- *
- ******************************************************************************/
-
-acpi_status
-acpi_ex_common_buffer_setup (
- union acpi_operand_object *obj_desc,
- u32 buffer_length,
- u32 *datum_count)
-{
- u32 byte_field_length;
- u32 actual_byte_field_length;
-
-
- ACPI_FUNCTION_TRACE ("ex_common_buffer_setup");
-
-
- /*
- * Incoming buffer must be at least as long as the field, we do not
- * allow "partial" field reads/writes. We do not care if the buffer is
- * larger than the field, this typically happens when an integer is
- * read/written to a field that is actually smaller than an integer.
- */
- byte_field_length = ACPI_ROUND_BITS_UP_TO_BYTES (
- obj_desc->common_field.bit_length);
- if (byte_field_length > buffer_length) {
- ACPI_DEBUG_PRINT ((ACPI_DB_BFIELD,
- "Field size %X (bytes) is too large for buffer (%X)\n",
- byte_field_length, buffer_length));
-
- return_ACPI_STATUS (AE_BUFFER_OVERFLOW);
- }
-
- /*
- * Create "actual" field byte count (minimum number of bytes that
- * must be read), then convert to datum count (minimum number
- * of datum-sized units that must be read)
- */
- actual_byte_field_length = ACPI_ROUND_BITS_UP_TO_BYTES (
- obj_desc->common_field.start_field_bit_offset +
- obj_desc->common_field.bit_length);
-
-
- *datum_count = ACPI_ROUND_UP_TO (actual_byte_field_length,
- obj_desc->common_field.access_byte_width);
-
- ACPI_DEBUG_PRINT ((ACPI_DB_BFIELD,
- "buffer_bytes %X, actual_bytes %X, Datums %X, byte_gran %X\n",
- byte_field_length, actual_byte_field_length,
- *datum_count, obj_desc->common_field.access_byte_width));
-
- return_ACPI_STATUS (AE_OK);
-}
-
-
/*******************************************************************************
*
* FUNCTION: acpi_ex_extract_from_field
u32 buffer_length)
{
acpi_status status;
- u32 field_datum_byte_offset;
- u32 buffer_datum_offset;
- acpi_integer previous_raw_datum = 0;
- acpi_integer this_raw_datum = 0;
- acpi_integer merged_datum = 0;
+ acpi_integer raw_datum;
+ acpi_integer merged_datum;
+ u32 field_offset = 0;
+ u32 buffer_offset = 0;
+ u32 buffer_tail_bits;
u32 datum_count;
+ u32 field_datum_count;
u32 i;
ACPI_FUNCTION_TRACE ("ex_extract_from_field");
- /* Validate buffer, compute number of datums */
+ /* Validate target buffer and clear it */
- status = acpi_ex_common_buffer_setup (obj_desc, buffer_length, &datum_count);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
+ if (buffer_length < ACPI_ROUND_BITS_UP_TO_BYTES (
+ obj_desc->common_field.bit_length)) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Field size %X (bits) is too large for buffer (%X)\n",
+ obj_desc->common_field.bit_length, buffer_length));
- /*
- * Clear the caller's buffer (the whole buffer length as given)
- * This is very important, especially in the cases where the buffer
- * is longer than the size of the field.
- */
+ return_ACPI_STATUS (AE_BUFFER_OVERFLOW);
+ }
ACPI_MEMSET (buffer, 0, buffer_length);
- field_datum_byte_offset = 0;
- buffer_datum_offset= 0;
-
- /* Read the entire field */
-
- for (i = 0; i < datum_count; i++) {
- status = acpi_ex_field_datum_io (obj_desc, field_datum_byte_offset,
- &this_raw_datum, ACPI_READ);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
- /* We might actually be done if the request fits in one datum */
-
- if ((datum_count == 1) &&
- (obj_desc->common_field.flags & AOPOBJ_SINGLE_DATUM)) {
- /* 1) Shift the valid data bits down to start at bit 0 */
+ /* Compute the number of datums (access width data items) */
- merged_datum = (this_raw_datum >> obj_desc->common_field.start_field_bit_offset);
+ datum_count = ACPI_ROUND_UP_TO (
+ obj_desc->common_field.bit_length,
+ obj_desc->common_field.access_bit_width);
+ field_datum_count = ACPI_ROUND_UP_TO (
+ obj_desc->common_field.bit_length +
+ obj_desc->common_field.start_field_bit_offset,
+ obj_desc->common_field.access_bit_width);
- /* 2) Mask off any upper unused bits (bits not part of the field) */
+ /* Priming read from the field */
- if (obj_desc->common_field.end_buffer_valid_bits) {
- merged_datum &= ACPI_MASK_BITS_ABOVE (obj_desc->common_field.end_buffer_valid_bits);
- }
+ status = acpi_ex_field_datum_io (obj_desc, field_offset, &raw_datum, ACPI_READ);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+ merged_datum = raw_datum >> obj_desc->common_field.start_field_bit_offset;
- /* Store the datum to the caller buffer */
+ /* Read the rest of the field */
- acpi_ex_set_buffer_datum (merged_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, buffer_datum_offset);
+ for (i = 1; i < field_datum_count; i++) {
+ /* Get next input datum from the field */
- return_ACPI_STATUS (AE_OK);
+ field_offset += obj_desc->common_field.access_byte_width;
+ status = acpi_ex_field_datum_io (obj_desc, field_offset,
+ &raw_datum, ACPI_READ);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
}
- /* Special handling for the last datum to ignore extra bits */
-
- if ((i >= (datum_count -1)) &&
- (obj_desc->common_field.end_field_valid_bits)) {
- /*
- * This is the last iteration of the loop. We need to clear
- * any unused bits (bits that are not part of this field) before
- * we store the final merged datum into the caller buffer.
- */
- this_raw_datum &=
- ACPI_MASK_BITS_ABOVE (obj_desc->common_field.end_field_valid_bits);
- }
+ /* Merge with previous datum if necessary */
- /*
- * Create the (possibly) merged datum to be stored to the caller buffer
- */
- if (obj_desc->common_field.start_field_bit_offset == 0) {
- /* Field is not skewed and we can just copy the datum */
+ merged_datum |= raw_datum <<
+ (obj_desc->common_field.access_bit_width - obj_desc->common_field.start_field_bit_offset);
- acpi_ex_set_buffer_datum (this_raw_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, buffer_datum_offset);
- buffer_datum_offset++;
+ if (i == datum_count) {
+ break;
}
- else {
- /* Not aligned -- on the first iteration, just save the datum */
- if (i != 0) {
- /*
- * Put together the appropriate bits of the two raw data to make a
- * single complete field datum
- *
- * 1) Normalize the first datum down to bit 0
- */
- merged_datum = (previous_raw_datum >> obj_desc->common_field.start_field_bit_offset);
+ /* Write merged datum to target buffer */
- /* 2) Insert the second datum "above" the first datum */
+ ACPI_MEMCPY (((char *) buffer) + buffer_offset, &merged_datum,
+ ACPI_MIN(obj_desc->common_field.access_byte_width,
+ buffer_length - buffer_offset));
- merged_datum |= (this_raw_datum << obj_desc->common_field.datum_valid_bits);
-
- acpi_ex_set_buffer_datum (merged_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, buffer_datum_offset);
- buffer_datum_offset++;
- }
+ buffer_offset += obj_desc->common_field.access_byte_width;
+ merged_datum = raw_datum >> obj_desc->common_field.start_field_bit_offset;
+ }
- /*
- * Save the raw datum that was just acquired since it may contain bits
- * of the *next* field datum
- */
- previous_raw_datum = this_raw_datum;
- }
+ /* Mask off any extra bits in the last datum */
- field_datum_byte_offset += obj_desc->common_field.access_byte_width;
+ buffer_tail_bits = obj_desc->common_field.bit_length % obj_desc->common_field.access_bit_width;
+ if (buffer_tail_bits) {
+ merged_datum &= ACPI_MASK_BITS_ABOVE (buffer_tail_bits);
}
- /* For non-aligned case, there is one last datum to insert */
+ /* Write the last datum to the buffer */
- if (obj_desc->common_field.start_field_bit_offset != 0) {
- merged_datum = (this_raw_datum >> obj_desc->common_field.start_field_bit_offset);
-
- acpi_ex_set_buffer_datum (merged_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, buffer_datum_offset);
- }
+ ACPI_MEMCPY (((char *) buffer) + buffer_offset, &merged_datum,
+ ACPI_MIN(obj_desc->common_field.access_byte_width,
+ buffer_length - buffer_offset));
return_ACPI_STATUS (AE_OK);
}
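
The rewritten extraction loop above replaces the cached *_valid_bits bookkeeping with a straight shift-and-merge pass. A minimal standalone sketch of the same idea, assuming an 8-bit access width and a little-endian byte buffer (the function and variable names here are illustrative, not part of the ACPICA sources):

#include <stdint.h>
#include <string.h>

/*
 * Extract bit_length bits starting at start_bit from 'region' into 'out'.
 * Mirrors the structure above: priming read, shift each raw datum down by
 * the start offset, fold the next datum's low bits into the vacated high
 * bits, and mask the final datum to the field length.
 */
static void extract_bits(const uint8_t *region, unsigned start_bit,
			 unsigned bit_length, uint8_t *out, unsigned out_len)
{
	unsigned offset       = start_bit % 8;                 /* start_field_bit_offset */
	unsigned first        = start_bit / 8;                 /* base_byte_offset */
	unsigned out_datums   = (bit_length + 7) / 8;          /* datum_count */
	unsigned field_datums = (bit_length + offset + 7) / 8; /* field_datum_count */
	unsigned i, o = 0;
	uint8_t merged;

	memset(out, 0, out_len);

	/* Priming read */
	merged = region[first] >> offset;

	for (i = 1; i < field_datums; i++) {
		uint8_t raw = region[first + i];

		/* The next raw datum supplies the high bits of the current output byte */
		merged |= (uint8_t)(raw << (8 - offset));

		if (i == out_datums)
			break;			/* last output datum is written below */

		if (o < out_len)
			out[o++] = merged;
		merged = raw >> offset;
	}

	/* Mask off any bits beyond the field length in the last datum */
	if (bit_length % 8)
		merged &= (uint8_t)((1u << (bit_length % 8)) - 1);
	if (o < out_len)
		out[o] = merged;
}
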
u32 buffer_length)
{
acpi_status status;
- u32 field_datum_byte_offset;
- u32 datum_offset;
acpi_integer mask;
acpi_integer merged_datum;
- acpi_integer previous_raw_datum;
- acpi_integer this_raw_datum;
+ acpi_integer raw_datum = 0;
+ u32 field_offset = 0;
+ u32 buffer_offset = 0;
+ u32 buffer_tail_bits;
u32 datum_count;
+ u32 field_datum_count;
+ u32 i;
ACPI_FUNCTION_TRACE ("ex_insert_into_field");
- /* Validate buffer, compute number of datums */
-
- status = acpi_ex_common_buffer_setup (obj_desc, buffer_length, &datum_count);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
- /*
- * Break the request into up to three parts (similar to an I/O request):
- * 1) non-aligned part at start
- * 2) aligned part in middle
- * 3) non-aligned part at the end
- */
- field_datum_byte_offset = 0;
- datum_offset= 0;
-
- /* Get a single datum from the caller's buffer */
-
- acpi_ex_get_buffer_datum (&previous_raw_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, datum_offset);
-
- /*
- * Part1:
- * Write a partial field datum if field does not begin on a datum boundary
- * Note: The code in this section also handles the aligned case
- *
- * Construct Mask with 1 bits where the field is, 0 bits elsewhere
- * (Only the bottom 5 bits of bit_length are valid for a shift operation)
- *
- * Mask off bits that are "below" the field (if any)
- */
- mask = ACPI_MASK_BITS_BELOW (obj_desc->common_field.start_field_bit_offset);
-
- /* If the field fits in one datum, may need to mask upper bits */
+ /* Validate input buffer */
- if ((obj_desc->common_field.flags & AOPOBJ_SINGLE_DATUM) &&
- obj_desc->common_field.end_field_valid_bits) {
- /* There are bits above the field, mask them off also */
+ if (buffer_length < ACPI_ROUND_BITS_UP_TO_BYTES (
+ obj_desc->common_field.bit_length)) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Field size %X (bits) is too large for buffer (%X)\n",
+ obj_desc->common_field.bit_length, buffer_length));
- mask &= ACPI_MASK_BITS_ABOVE (obj_desc->common_field.end_field_valid_bits);
+ return_ACPI_STATUS (AE_BUFFER_OVERFLOW);
}
- /* Shift and mask the value into the field position */
-
- merged_datum = (previous_raw_datum << obj_desc->common_field.start_field_bit_offset);
- merged_datum &= mask;
-
- /* Apply the update rule (if necessary) and write the datum to the field */
+ /* Compute the number of datums (access width data items) */
- status = acpi_ex_write_with_update_rule (obj_desc, mask, merged_datum,
- field_datum_byte_offset);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
+ mask = ACPI_MASK_BITS_BELOW (obj_desc->common_field.start_field_bit_offset);
+ datum_count = ACPI_ROUND_UP_TO (obj_desc->common_field.bit_length,
+ obj_desc->common_field.access_bit_width);
+ field_datum_count = ACPI_ROUND_UP_TO (obj_desc->common_field.bit_length +
+ obj_desc->common_field.start_field_bit_offset,
+ obj_desc->common_field.access_bit_width);
- /* We just wrote the first datum */
+ /* Get initial Datum from the input buffer */
- datum_offset++;
+ ACPI_MEMCPY (&raw_datum, buffer,
+ ACPI_MIN(obj_desc->common_field.access_byte_width,
+ buffer_length - buffer_offset));
- /* If the entire field fits within one datum, we are done. */
+ merged_datum = raw_datum << obj_desc->common_field.start_field_bit_offset;
- if ((datum_count == 1) &&
- (obj_desc->common_field.flags & AOPOBJ_SINGLE_DATUM)) {
- return_ACPI_STATUS (AE_OK);
- }
+ /* Write the entire field */
- /*
- * Part2:
- * Write the aligned data.
- *
- * We don't need to worry about the update rule for these data, because
- * all of the bits in each datum are part of the field.
- *
- * The last datum must be special cased because it might contain bits
- * that are not part of the field -- therefore the "update rule" must be
- * applied in Part3 below.
- */
- while (datum_offset < datum_count) {
- field_datum_byte_offset += obj_desc->common_field.access_byte_width;
+ for (i = 1; i < field_datum_count; i++) {
+ /* Write merged datum to the target field */
- /*
- * Get the next raw buffer datum. It may contain bits of the previous
- * field datum
- */
- acpi_ex_get_buffer_datum (&this_raw_datum, buffer, buffer_length,
- obj_desc->common_field.access_byte_width, datum_offset);
+ merged_datum &= mask;
+ status = acpi_ex_write_with_update_rule (obj_desc, mask, merged_datum, field_offset);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
- /* Create the field datum based on the field alignment */
+ /* Start new output datum by merging with previous input datum */
- if (obj_desc->common_field.start_field_bit_offset != 0) {
- /*
- * Put together appropriate bits of the two raw buffer data to make
- * a single complete field datum
- */
- merged_datum =
- (previous_raw_datum >> obj_desc->common_field.datum_valid_bits) |
- (this_raw_datum << obj_desc->common_field.start_field_bit_offset);
- }
- else {
- /* Field began aligned on datum boundary */
+ field_offset += obj_desc->common_field.access_byte_width;
+ merged_datum = raw_datum >>
+ (obj_desc->common_field.access_bit_width - obj_desc->common_field.start_field_bit_offset);
+ mask = ACPI_INTEGER_MAX;
- merged_datum = this_raw_datum;
+ if (i == datum_count) {
+ break;
}
- /*
- * Special handling for the last datum if the field does NOT end on
- * a datum boundary. Update Rule must be applied to the bits outside
- * the field.
- */
- datum_offset++;
- if ((datum_offset == datum_count) &&
- (obj_desc->common_field.end_field_valid_bits)) {
- /*
- * If there are dangling non-aligned bits, perform one more merged write
- * Else - field is aligned at the end, no need for any more writes
- */
+ /* Get the next input datum from the buffer */
- /*
- * Part3:
- * This is the last datum and the field does not end on a datum boundary.
- * Build the partial datum and write with the update rule.
- *
- * Mask off the unused bits above (after) the end-of-field
- */
- mask = ACPI_MASK_BITS_ABOVE (obj_desc->common_field.end_field_valid_bits);
- merged_datum &= mask;
+ buffer_offset += obj_desc->common_field.access_byte_width;
+ ACPI_MEMCPY (&raw_datum, ((char *) buffer) + buffer_offset,
+ ACPI_MIN(obj_desc->common_field.access_byte_width,
+ buffer_length - buffer_offset));
+ merged_datum |= raw_datum << obj_desc->common_field.start_field_bit_offset;
+ }
- /* Write the last datum with the update rule */
+ /* Mask off any extra bits in the last datum */
- status = acpi_ex_write_with_update_rule (obj_desc, mask, merged_datum,
- field_datum_byte_offset);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
- }
- else {
- /* Normal (aligned) case -- write the completed datum */
+ buffer_tail_bits = (obj_desc->common_field.bit_length +
+ obj_desc->common_field.start_field_bit_offset) % obj_desc->common_field.access_bit_width;
+ if (buffer_tail_bits) {
+ mask &= ACPI_MASK_BITS_ABOVE (buffer_tail_bits);
+ }
- status = acpi_ex_field_datum_io (obj_desc, field_datum_byte_offset,
- &merged_datum, ACPI_WRITE);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
- }
+ /* Write the last datum to the field */
- /*
- * Save the most recent datum since it may contain bits of the *next*
- * field datum. Update current byte offset.
- */
- previous_raw_datum = this_raw_datum;
- }
+ merged_datum &= mask;
+ status = acpi_ex_write_with_update_rule (obj_desc, mask, merged_datum, field_offset);
return_ACPI_STATUS (status);
}
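
The insertion path above is the mirror image: each output datum is built from the current source datum shifted up by the start offset plus the carried-over high bits of the previous one, and only the first and last datums need a restrictive mask for the update rule. A minimal sketch under the same assumptions as the extraction example (8-bit access width, illustrative names; write_datum stands in for acpi_ex_write_with_update_rule with the "preserve" rule):

#include <stdint.h>

/* Bits outside 'mask' keep their current value, as with the preserve update rule */
static void write_datum(uint8_t *region, unsigned index, uint8_t mask, uint8_t value)
{
	region[index] = (uint8_t)((region[index] & ~mask) | (value & mask));
}

static void insert_bits(uint8_t *region, unsigned start_bit,
			unsigned bit_length, const uint8_t *src)
{
	unsigned offset       = start_bit % 8;
	unsigned first        = start_bit / 8;
	unsigned out_datums   = (bit_length + 7) / 8;           /* source bytes */
	unsigned field_datums = (bit_length + offset + 7) / 8;  /* region bytes touched */
	unsigned tail         = (bit_length + offset) % 8;      /* buffer_tail_bits */
	uint8_t mask   = (uint8_t)(0xFF << offset);  /* preserve bits below the field */
	uint8_t raw    = src[0];
	uint8_t merged = (uint8_t)(raw << offset);
	unsigned i, next = 0, idx = first;

	for (i = 1; i < field_datums; i++) {
		write_datum(region, idx++, mask, merged);

		/* Carry the high bits of the previous source byte into the next datum */
		merged = (uint8_t)(raw >> (8 - offset));
		mask = 0xFF;

		if (i == out_datums)
			break;			/* no further source bytes to merge in */

		raw = src[++next];
		merged |= (uint8_t)(raw << offset);
	}

	/* Last datum: also preserve the bits beyond the end of the field */
	if (tail)
		mask &= (uint8_t)((1u << tail) - 1);
	write_datum(region, idx, mask, merged & mask);
}
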
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
if (local_operand1 != operand1) {
acpi_ut_remove_reference (local_operand1);
}
- return_ACPI_STATUS (AE_OK);
+ return_ACPI_STATUS (status);
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
status = acpi_ex_convert_to_string (operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_DECIMAL);
+ if (return_desc == operand[0]) {
+ /* No conversion performed, add ref to handle return value */
+ acpi_ut_add_reference (return_desc);
+ }
break;
status = acpi_ex_convert_to_string (operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_HEX);
+ if (return_desc == operand[0]) {
+ /* No conversion performed, add ref to handle return value */
+ acpi_ut_add_reference (return_desc);
+ }
break;
case AML_TO_BUFFER_OP: /* to_buffer (Data, Result) */
status = acpi_ex_convert_to_buffer (operand[0], &return_desc);
+ if (return_desc == operand[0]) {
+ /* No conversion performed, add ref to handle return value */
+ acpi_ut_add_reference (return_desc);
+ }
break;
status = acpi_ex_convert_to_integer (operand[0], &return_desc,
ACPI_ANY_BASE);
+ if (return_desc == operand[0]) {
+ /* No conversion performed, add ref to handle return value */
+ acpi_ut_add_reference (return_desc);
+ }
break;
goto cleanup;
}
- /*
- * Store the return value computed above into the target object
- */
- status = acpi_ex_store (return_desc, operand[1], walk_state);
+ if (ACPI_SUCCESS (status)) {
+ /*
+ * Store the return value computed above into the target object
+ */
+ status = acpi_ex_store (return_desc, operand[1], walk_state);
+ }
cleanup:
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
return_desc->string.pointer = buffer;
return_desc->string.length = (u32) length;
}
+
+ /* Mark buffer initialized */
+
+ return_desc->buffer.flags |= AOPOBJ_DATA_VALID;
break;
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* FUNCTION: acpi_ex_do_match
*
* PARAMETERS: match_op - The AML match operand
- * package_value - Value from the target package
- * match_value - Value to be matched
+ * package_obj - Object from the target package
+ * match_obj - Object to be matched
*
* RETURN: TRUE if the match is successful, FALSE otherwise
*
- * DESCRIPTION: Implements the low-level match for the ASL Match operator
+ * DESCRIPTION: Implements the low-level match for the ASL Match operator.
+ * Package elements will be implicitly converted to the type of
+ * the match object (Integer/Buffer/String).
*
******************************************************************************/
u8
acpi_ex_do_match (
u32 match_op,
- acpi_integer package_value,
- acpi_integer match_value)
+ union acpi_operand_object *package_obj,
+ union acpi_operand_object *match_obj)
{
-
+ u8 logical_result = TRUE;
+ acpi_status status;
+
+
+ /*
+ * Note: Since the package_obj/match_obj ordering is opposite to that of
+ * the standard logical operators, we have to reverse them when we call
+ * do_logical_op in order to make the implicit conversion rules work
+ * correctly. However, this means we have to flip the entire equation
+	 * correctly. However, this means we have to flip the entire equation
+	 * as well. A bit ugly perhaps, but overall better than fussing with
+	 * the parameters at runtime, over and over again.
+ * Below, P[i] refers to the package element, M refers to the Match object.
+ */
switch (match_op) {
- case MATCH_MTR: /* always true */
+ case MATCH_MTR:
- break;
+ /* Always true */
+ break;
- case MATCH_MEQ: /* true if equal */
+ case MATCH_MEQ:
- if (package_value != match_value) {
+ /*
+ * True if equal: (P[i] == M)
+ * Change to: (M == P[i])
+ */
+ status = acpi_ex_do_logical_op (AML_LEQUAL_OP, match_obj, package_obj,
+ &logical_result);
+ if (ACPI_FAILURE (status)) {
return (FALSE);
}
break;
+ case MATCH_MLE:
- case MATCH_MLE: /* true if less than or equal */
-
- if (package_value > match_value) {
+ /*
+ * True if less than or equal: (P[i] <= M) (P[i] not_greater than M)
+ * Change to: (M >= P[i]) (M not_less than P[i])
+ */
+ status = acpi_ex_do_logical_op (AML_LLESS_OP, match_obj, package_obj,
+ &logical_result);
+ if (ACPI_FAILURE (status)) {
return (FALSE);
}
+ logical_result = (u8) !logical_result;
break;
+ case MATCH_MLT:
- case MATCH_MLT: /* true if less than */
-
- if (package_value >= match_value) {
+ /*
+ * True if less than: (P[i] < M)
+ * Change to: (M > P[i])
+ */
+ status = acpi_ex_do_logical_op (AML_LGREATER_OP, match_obj, package_obj,
+ &logical_result);
+ if (ACPI_FAILURE (status)) {
return (FALSE);
}
break;
+ case MATCH_MGE:
- case MATCH_MGE: /* true if greater than or equal */
-
- if (package_value < match_value) {
+ /*
+ * True if greater than or equal: (P[i] >= M) (P[i] not_less than M)
+ * Change to: (M <= P[i]) (M not_greater than P[i])
+ */
+ status = acpi_ex_do_logical_op (AML_LGREATER_OP, match_obj, package_obj,
+ &logical_result);
+ if (ACPI_FAILURE (status)) {
return (FALSE);
}
+		logical_result = (u8) !logical_result;
break;
+ case MATCH_MGT:
- case MATCH_MGT: /* true if greater than */
-
- if (package_value <= match_value) {
+ /*
+ * True if greater than: (P[i] > M)
+ * Change to: (M < P[i])
+ */
+ status = acpi_ex_do_logical_op (AML_LLESS_OP, match_obj, package_obj,
+ &logical_result);
+ if (ACPI_FAILURE (status)) {
return (FALSE);
}
break;
+ default:
- default: /* undefined */
+ /* Undefined */
return (FALSE);
}
-
- return TRUE;
+ return logical_result;
}
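
The operand reversal spelled out in the comment above reduces to a small truth table. A standalone sketch (the enum values and plain integer comparisons are illustrative stand-ins for the real MATCH_* opcodes and for acpi_ex_do_logical_op, which additionally performs the implicit Integer/Buffer/String conversion of the match object):

#include <stdbool.h>

enum { MTR, MEQ, MLE, MLT, MGE, MGT };	/* illustrative stand-ins for MATCH_* */

/* p is the package element P[i], m is the Match object M */
static bool match(int op, long p, long m)
{
	switch (op) {
	case MTR: return true;		/* always true            */
	case MEQ: return m == p;	/* P == M  ->   M == P    */
	case MLE: return !(m < p);	/* P <= M  ->  !(M <  P)  */
	case MLT: return m > p;		/* P <  M  ->   M >  P    */
	case MGE: return !(m > p);	/* P >= M  ->  !(M >  P)  */
	case MGT: return m < p;		/* P >  M  ->   M <  P    */
	default:  return false;		/* undefined              */
	}
}
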
switch (walk_state->opcode) {
case AML_MATCH_OP:
/*
- * Match (search_package[0], match_op1[1], match_object1[2],
- * match_op2[3], match_object2[4], start_index[5])
+ * Match (search_pkg[0], match_op1[1], match_obj1[2],
+ * match_op2[3], match_obj2[4], start_index[5])
*/
- /* Validate match comparison sub-opcodes */
+ /* Validate both Match Term Operators (MTR, MEQ, etc.) */
if ((operand[1]->integer.value > MAX_MATCH_OPERATOR) ||
(operand[3]->integer.value > MAX_MATCH_OPERATOR)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "operation encoding out of range\n"));
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Match operator out of range\n"));
status = AE_AML_OPERAND_VALUE;
goto cleanup;
}
+ /* Get the package start_index, validate against the package length */
+
index = (u32) operand[5]->integer.value;
if (index >= (u32) operand[0]->package.count) {
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Index beyond package end\n"));
goto cleanup;
}
+ /* Create an integer for the return value */
+
return_desc = acpi_ut_create_internal_object (ACPI_TYPE_INTEGER);
if (!return_desc) {
status = AE_NO_MEMORY;
return_desc->integer.value = ACPI_INTEGER_MAX;
/*
- * Examine each element until a match is found. Within the loop,
+ * Examine each element until a match is found. Both match conditions
+ * must be satisfied for a match to occur. Within the loop,
* "continue" signifies that the current element does not match
* and the next should be examined.
*
* Upon finding a match, the loop will terminate via "break" at
- * the bottom. If it terminates "normally", match_value will be -1
- * (its initial value) indicating that no match was found. When
- * returned as a Number, this will produce the Ones value as specified.
+ * the bottom. If it terminates "normally", match_value will be
+ * ACPI_INTEGER_MAX (Ones) (its initial value) indicating that no
+ * match was found.
*/
for ( ; index < operand[0]->package.count; index++) {
+ /* Get the current package element */
+
this_element = operand[0]->package.elements[index];
- /*
- * Treat any NULL or non-numeric elements as non-matching.
- */
- if (!this_element ||
- ACPI_GET_OBJECT_TYPE (this_element) != ACPI_TYPE_INTEGER) {
+ /* Treat any uninitialized (NULL) elements as non-matching */
+
+ if (!this_element) {
continue;
}
/*
- * "continue" (proceed to next iteration of enclosing
- * "for" loop) signifies a non-match.
+ * Both match conditions must be satisfied. Execution of a continue
+ * (proceed to next iteration of enclosing for loop) signifies a
+ * non-match.
*/
if (!acpi_ex_do_match ((u32) operand[1]->integer.value,
- this_element->integer.value, operand[2]->integer.value)) {
+ this_element, operand[2])) {
continue;
}
if (!acpi_ex_do_match ((u32) operand[3]->integer.value,
- this_element->integer.value, operand[4]->integer.value)) {
+ this_element, operand[4])) {
continue;
}
return_desc->integer.value = index;
break;
}
-
break;
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
obj_desc->common_field.access_byte_width = (u8)
ACPI_DIV_8 (access_bit_width); /* 1, 2, 4, 8 */
+ obj_desc->common_field.access_bit_width = (u8) access_bit_width;
+
/*
* base_byte_offset is the address of the start of the field within the
* region. It is the byte address of the first *datum* (field-width data
obj_desc->common_field.start_field_bit_offset = (u8)
(field_bit_position - ACPI_MUL_8 (obj_desc->common_field.base_byte_offset));
- /*
- * Valid bits -- the number of bits that compose a partial datum,
- * 1) At the end of the field within the region (arbitrary starting bit
- * offset)
- * 2) At the end of a buffer used to contain the field (starting offset
- * always zero)
- */
- obj_desc->common_field.end_field_valid_bits = (u8)
- ((obj_desc->common_field.start_field_bit_offset + field_bit_length) %
- access_bit_width);
- /* start_buffer_bit_offset always = 0 */
-
- obj_desc->common_field.end_buffer_valid_bits = (u8)
- (field_bit_length % access_bit_width);
-
- /*
- * datum_valid_bits is the number of valid field bits in the first
- * field datum.
- */
- obj_desc->common_field.datum_valid_bits = (u8)
- (access_bit_width - obj_desc->common_field.start_field_bit_offset);
-
/*
* Does the entire field fit within a single field access element? (datum)
* (i.e., without crossing a datum boundary)
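
With the cached *_valid_bits members gone, only three quantities describe the field layout. A worked example with illustrative numbers: a field at bit position 19 with a bit length of 10 and byte (8-bit) access gives

	access_bit_width       = 8
	access_byte_width      = 8 / 8                  = 1
	base_byte_offset       = (19 / 8) rounded down  = 2
	start_field_bit_offset = 19 - (2 * 8)           = 3

The removed members are now recomputed where needed: the extract path derives its tail mask from bit_length % access_bit_width = 10 % 8 = 2 (the old end_buffer_valid_bits), and the insert path from (bit_length + start_field_bit_offset) % access_bit_width = 13 % 8 = 5 (the old end_field_valid_bits).
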
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
goto next_operand;
- case ARGI_ANYTYPE:
+ case ARGI_DATAREFOBJ: /* Store operator only */
/*
* We don't want to resolve index_op reference objects during
goto next_operand;
+ case ARGI_DATAREFOBJ:
+
+ /* Used by the Store() operator only */
+
+ switch (ACPI_GET_OBJECT_TYPE (obj_desc)) {
+ case ACPI_TYPE_INTEGER:
+ case ACPI_TYPE_PACKAGE:
+ case ACPI_TYPE_STRING:
+ case ACPI_TYPE_BUFFER:
+ case ACPI_TYPE_BUFFER_FIELD:
+ case ACPI_TYPE_LOCAL_REFERENCE:
+ case ACPI_TYPE_LOCAL_REGION_FIELD:
+ case ACPI_TYPE_LOCAL_BANK_FIELD:
+ case ACPI_TYPE_LOCAL_INDEX_FIELD:
+ case ACPI_TYPE_DDB_HANDLE:
+
+ /* Valid operand */
+ break;
+
+ default:
+
+ if (acpi_gbl_enable_interpreter_slack) {
+ /*
+ * Enable original behavior of Store(), allowing any and all
+ * objects as the source operand. The ACPI spec does not
+ * allow this, however.
+ */
+ break;
+ }
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+				"Needed [Integer/Buffer/String/Package/Ref/Ddb], found [%s] %p\n",
+ acpi_ut_get_object_type_name (obj_desc), obj_desc));
+
+ return_ACPI_STATUS (AE_AML_OPERAND_TYPE);
+ }
+ goto next_operand;
+
+
default:
/* Unknown type */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
switch (index_desc->reference.target_type) {
case ACPI_TYPE_PACKAGE:
/*
- * Storing to a package element is not simple. The source must be
- * evaluated and converted to the type of the destination and then the
- * source is copied into the destination - we can't just point to the
- * source object.
- */
- /*
+ * Storing to a package element. Copy the object and replace
+ * any existing object with the new object. No implicit
+ * conversion is performed.
+ *
* The object at *(index_desc->Reference.Where) is the
* element within the package that is to be modified.
* The parent package object is at index_desc->Reference.Object
*/
obj_desc = *(index_desc->reference.where);
- /* Do the conversion/store */
-
- status = acpi_ex_store_object_to_object (source_desc, obj_desc, &new_desc,
- walk_state);
+ status = acpi_ut_copy_iobject_to_iobject (source_desc, &new_desc, walk_state);
if (ACPI_FAILURE (status)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Could not store object to indexed package element\n"));
return_ACPI_STATUS (status);
}
- /*
- * If a new object was created, we must install it as the new
- * package element
- */
- if (new_desc != obj_desc) {
- acpi_ut_remove_reference (obj_desc);
- *(index_desc->reference.where) = new_desc;
+ if (obj_desc) {
+ /* Decrement reference count by the ref count of the parent package */
- /* If same as the original source, add a reference */
-
- if (new_desc == source_desc) {
- acpi_ut_add_reference (new_desc);
+ for (i = 0; i < ((union acpi_operand_object *) index_desc->reference.object)->common.reference_count; i++) {
+ acpi_ut_remove_reference (obj_desc);
}
+ }
- /* Increment reference count by the ref count of the parent package -1 */
+ *(index_desc->reference.where) = new_desc;
- for (i = 1; i < ((union acpi_operand_object *) index_desc->reference.object)->common.reference_count; i++) {
- acpi_ut_add_reference (new_desc);
- }
+ /* Increment reference count by the ref count of the parent package -1 */
+
+ for (i = 1; i < ((union acpi_operand_object *) index_desc->reference.object)->common.reference_count; i++) {
+ acpi_ut_add_reference (new_desc);
}
+
break;
case ACPI_TYPE_BUFFER_FIELD:
/*
- * Store into a Buffer (not actually a real buffer_field) at a
- * location defined by an Index.
+ * Store into a Buffer or String (not actually a real buffer_field)
+ * at a location defined by an Index.
*
* The first 8-bit element of the source object is written to the
* 8-bit Buffer location defined by the Index destination object,
*/
/*
- * Make sure the target is a Buffer
+ * Make sure the target is a Buffer or String. An error should
+ * not happen here, since the reference_object was constructed
+ * by the INDEX_OP code.
*/
obj_desc = index_desc->reference.object;
- if (ACPI_GET_OBJECT_TYPE (obj_desc) != ACPI_TYPE_BUFFER) {
+ if ((ACPI_GET_OBJECT_TYPE (obj_desc) != ACPI_TYPE_BUFFER) &&
+ (ACPI_GET_OBJECT_TYPE (obj_desc) != ACPI_TYPE_STRING)) {
return_ACPI_STATUS (AE_AML_OPERAND_TYPE);
}
break;
case ACPI_TYPE_BUFFER:
-
- value = source_desc->buffer.pointer[0];
- break;
-
case ACPI_TYPE_STRING:
- value = (u8) source_desc->string.pointer[0];
+ /* Note: Takes advantage of common string/buffer fields */
+
+ value = source_desc->buffer.pointer[0];
break;
default:
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
{
union acpi_operand_object *actual_src_desc;
acpi_status status = AE_OK;
+ acpi_object_type original_src_type;
ACPI_FUNCTION_TRACE_PTR ("ex_store_object_to_object", source_desc);
return_ACPI_STATUS (status);
}
- if (ACPI_GET_OBJECT_TYPE (source_desc) != ACPI_GET_OBJECT_TYPE (dest_desc)) {
+ original_src_type = ACPI_GET_OBJECT_TYPE (source_desc);
+ if (original_src_type != ACPI_GET_OBJECT_TYPE (dest_desc)) {
/*
* The source type does not match the type of the destination.
* Perform the "implicit conversion" of the source to the current type
* Otherwise, actual_src_desc is a temporary object to hold the
* converted object.
*/
- status = acpi_ex_convert_to_target_type (ACPI_GET_OBJECT_TYPE (dest_desc), source_desc,
- &actual_src_desc, walk_state);
+ status = acpi_ex_convert_to_target_type (ACPI_GET_OBJECT_TYPE (dest_desc),
+ source_desc, &actual_src_desc, walk_state);
if (ACPI_FAILURE (status)) {
return_ACPI_STATUS (status);
}
if (source_desc == actual_src_desc) {
/*
- * No conversion was performed. Return the source_desc as the
+ * No conversion was performed. Return the source_desc as the
* new object.
*/
*new_desc = source_desc;
case ACPI_TYPE_BUFFER:
- status = acpi_ex_store_buffer_to_buffer (actual_src_desc, dest_desc);
+ /*
+ * Note: There is different store behavior depending on the original
+ * source type
+ */
+ status = acpi_ex_store_buffer_to_buffer (original_src_type, actual_src_desc,
+ dest_desc);
break;
case ACPI_TYPE_PACKAGE:
- status = acpi_ut_copy_iobject_to_iobject (actual_src_desc, &dest_desc, walk_state);
+ status = acpi_ut_copy_iobject_to_iobject (actual_src_desc, &dest_desc,
+ walk_state);
break;
default:
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_status
acpi_ex_store_buffer_to_buffer (
+ acpi_object_type original_src_type,
union acpi_operand_object *source_desc,
union acpi_operand_object *target_desc)
{
return_ACPI_STATUS (AE_NO_MEMORY);
}
- target_desc->common.flags &= ~AOPOBJ_STATIC_POINTER;
target_desc->buffer.length = length;
}
- /*
- * Buffer is a static allocation,
- * only place what will fit in the buffer.
- */
+ /* Copy source buffer to target buffer */
+
if (length <= target_desc->buffer.length) {
/* Clear existing buffer and copy in the new one */
ACPI_MEMSET (target_desc->buffer.pointer, 0, target_desc->buffer.length);
ACPI_MEMCPY (target_desc->buffer.pointer, buffer, length);
- }
- else {
+
/*
- * Truncate the source, copy only what will fit
+ * If the original source was a string, we must truncate the buffer,
+ * according to the ACPI spec. Integer-to-Buffer and Buffer-to-Buffer
+ * copy must not truncate the original buffer.
*/
+ if (original_src_type == ACPI_TYPE_STRING) {
+ /* Set the new length of the target */
+
+ target_desc->buffer.length = length;
+ }
+ }
+ else {
+ /* Truncate the source, copy only what will fit */
+
ACPI_MEMCPY (target_desc->buffer.pointer, buffer, target_desc->buffer.length);
ACPI_DEBUG_PRINT ((ACPI_DB_INFO,
- "Truncating src buffer from %X to %X\n",
+ "Truncating source buffer from %X to %X\n",
length, target_desc->buffer.length));
}
/* Copy flags */
target_desc->buffer.flags = source_desc->buffer.flags;
+ target_desc->common.flags &= ~AOPOBJ_STATIC_POINTER;
return_ACPI_STATUS (AE_OK);
}
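
To make the new truncation rule concrete, a minimal sketch on plain length/data pairs (store_buffer is a hypothetical stand-in for acpi_ex_store_buffer_to_buffer; it ignores flags and allocation and models only the length handling added above):

#include <string.h>
#include <stdbool.h>
#include <stddef.h>

struct simple_buf {
	unsigned char	*data;
	size_t		length;
};

static void store_buffer(bool src_was_string, const struct simple_buf *src,
			 struct simple_buf *dst)
{
	size_t n = src->length;

	if (n <= dst->length) {
		/* Clear the target, then copy the source in */
		memset(dst->data, 0, dst->length);
		memcpy(dst->data, src->data, n);

		/* Only a String source shrinks the target length */
		if (src_was_string)
			dst->length = n;
	} else {
		/* Source larger than target: truncate the source */
		memcpy(dst->data, src->data, dst->length);
	}
}
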
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
return_ACPI_STATUS (status);
}
- if (sleep_state != ACPI_STATE_S5) {
- /* Disable BM arbitration */
-
- status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 1, ACPI_MTX_DO_NOT_LOCK);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
- }
-
/*
* 1) Disable/Clear all GPEs
* 2) Enable all wakeup GPEs
(void) acpi_set_register(acpi_gbl_fixed_event_info[ACPI_EVENT_POWER_BUTTON].status_register_id,
1, ACPI_MTX_DO_NOT_LOCK);
- /* Enable BM arbitration */
-
- status = acpi_set_register (ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_LOCK);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-
arg.integer.value = ACPI_SST_WORKING;
status = acpi_evaluate_object (NULL, METHOD_NAME__SST, &arg_list, NULL);
if (ACPI_FAILURE (status) && status != AE_NOT_FOUND) {
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define IBM_PARAM(feature) \
module_param_call(feature, set_ibm_param, NULL, NULL, 0)
-static void __exit acpi_ibm_exit(void)
+static void acpi_ibm_exit(void)
{
int i;
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*/
switch (init_val->type) {
case ACPI_TYPE_METHOD:
- obj_desc->method.param_count = (u8) ACPI_STRTOUL
- (val, NULL, 10);
+ obj_desc->method.param_count = (u8) ACPI_TO_INTEGER (val);
obj_desc->common.flags |= AOPOBJ_DATA_VALID;
#if defined (_ACPI_ASL_COMPILER) || defined (_ACPI_DUMP_App)
case ACPI_TYPE_INTEGER:
- obj_desc->integer.value =
- (acpi_integer) ACPI_STRTOUL (val, NULL, 10);
+ obj_desc->integer.value = ACPI_TO_INTEGER (val);
break;
case ACPI_TYPE_MUTEX:
obj_desc->mutex.node = new_node;
- obj_desc->mutex.sync_level = (u8) ACPI_STRTOUL
- (val, NULL, 10);
+ obj_desc->mutex.sync_level = (u8) (ACPI_TO_INTEGER (val) - 1);
if (ACPI_STRCMP (init_val->name, "_GL_") == 0) {
/*
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
status = AE_OK;
}
else {
+ /* Delete any return object (especially if implicit_return is enabled) */
+
+ if (pinfo.return_object) {
+ acpi_ut_remove_reference (pinfo.return_object);
+ }
+
/* Count of successful INIs */
info->num_INI++;
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
{
acpi_status status;
struct acpi_namespace_node *node;
- struct acpi_device_info info;
+ struct acpi_device_info *info;
struct acpi_device_info *return_info;
struct acpi_compatible_id_list *cid_list = NULL;
acpi_size size;
return (status);
}
+ info = ACPI_MEM_CALLOCATE (sizeof (struct acpi_device_info));
+ if (!info) {
+ return (AE_NO_MEMORY);
+ }
+
status = acpi_ut_acquire_mutex (ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE (status)) {
- return (status);
+ goto cleanup;
}
node = acpi_ns_map_handle_to_node (handle);
if (!node) {
(void) acpi_ut_release_mutex (ACPI_MTX_NAMESPACE);
- return (AE_BAD_PARAMETER);
+		status = AE_BAD_PARAMETER;
+		goto cleanup;
}
/* Init return structure */
size = sizeof (struct acpi_device_info);
- ACPI_MEMSET (&info, 0, size);
- info.type = node->type;
- info.name = node->name.integer;
- info.valid = 0;
+ info->type = node->type;
+ info->name = node->name.integer;
+ info->valid = 0;
status = acpi_ut_release_mutex (ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE (status)) {
- return (status);
+ goto cleanup;
}
/* If not a device, we are all done */
- if (info.type == ACPI_TYPE_DEVICE) {
+ if (info->type == ACPI_TYPE_DEVICE) {
/*
* Get extra info for ACPI Devices objects only:
* Run the Device _HID, _UID, _CID, _STA, _ADR and _sx_d methods.
*
* Note: none of these methods are required, so they may or may
- * not be present for this device. The Info.Valid bitfield is used
+ * not be present for this device. The Info->Valid bitfield is used
* to indicate which methods were found and ran successfully.
*/
/* Execute the Device._HID method */
- status = acpi_ut_execute_HID (node, &info.hardware_id);
+ status = acpi_ut_execute_HID (node, &info->hardware_id);
if (ACPI_SUCCESS (status)) {
- info.valid |= ACPI_VALID_HID;
+ info->valid |= ACPI_VALID_HID;
}
/* Execute the Device._UID method */
- status = acpi_ut_execute_UID (node, &info.unique_id);
+ status = acpi_ut_execute_UID (node, &info->unique_id);
if (ACPI_SUCCESS (status)) {
- info.valid |= ACPI_VALID_UID;
+ info->valid |= ACPI_VALID_UID;
}
/* Execute the Device._CID method */
if (ACPI_SUCCESS (status)) {
size += ((acpi_size) cid_list->count - 1) *
sizeof (struct acpi_compatible_id);
- info.valid |= ACPI_VALID_CID;
+ info->valid |= ACPI_VALID_CID;
}
/* Execute the Device._STA method */
- status = acpi_ut_execute_STA (node, &info.current_status);
+ status = acpi_ut_execute_STA (node, &info->current_status);
if (ACPI_SUCCESS (status)) {
- info.valid |= ACPI_VALID_STA;
+ info->valid |= ACPI_VALID_STA;
}
/* Execute the Device._ADR method */
status = acpi_ut_evaluate_numeric_object (METHOD_NAME__ADR, node,
- &info.address);
+ &info->address);
if (ACPI_SUCCESS (status)) {
- info.valid |= ACPI_VALID_ADR;
+ info->valid |= ACPI_VALID_ADR;
}
/* Execute the Device._sx_d methods */
- status = acpi_ut_execute_sxds (node, info.highest_dstates);
+ status = acpi_ut_execute_sxds (node, info->highest_dstates);
if (ACPI_SUCCESS (status)) {
- info.valid |= ACPI_VALID_SXDS;
+ info->valid |= ACPI_VALID_SXDS;
}
-
- status = AE_OK;
}
/* Validate/Allocate/Clear caller buffer */
/* Populate the return buffer */
return_info = buffer->pointer;
- ACPI_MEMCPY (return_info, &info, sizeof (struct acpi_device_info));
+ ACPI_MEMCPY (return_info, info, sizeof (struct acpi_device_info));
if (cid_list) {
ACPI_MEMCPY (&return_info->compatibility_id, cid_list, cid_list->size);
cleanup:
+ ACPI_MEM_FREE (info);
if (cid_list) {
ACPI_MEM_FREE (cid_list);
}
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
*/
-
+#include <linux/module.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/kernel.h>
result = acpi_table_parse_srat(ACPI_SRAT_MEMORY_AFFINITY,
acpi_parse_memory_affinity,
NR_NODE_MEMBLKS); // IA64 specific
- } else {
- /* FIXME */
- printk("Warning: acpi_table_parse(ACPI_SRAT) returned %d!\n",result);
}
/* SLIT: System Locality Information Table */
result = acpi_table_parse(ACPI_SLIT, acpi_parse_slit);
- if (result < 1) {
- /* FIXME */
- printk("Warning: acpi_table_parse(ACPI_SLIT) returned %d!\n",result);
- }
acpi_numa_arch_fixup();
return 0;
}
+
+int
+acpi_get_pxm(acpi_handle h)
+{
+ unsigned long pxm;
+ acpi_status status;
+ acpi_handle handle;
+ acpi_handle phandle = h;
+
+ do {
+ handle = phandle;
+ status = acpi_evaluate_integer(handle, "_PXM", NULL, &pxm);
+ if (ACPI_SUCCESS(status))
+ return (int)pxm;
+ status = acpi_get_parent(handle, &phandle);
+	} while (ACPI_SUCCESS(status));
+ return -1;
+}
+EXPORT_SYMBOL(acpi_get_pxm);
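
A short usage sketch for the new helper (kernel context assumed; the handle would typically come from an acpi_device during bus enumeration, and the caller is responsible for mapping the proximity domain to a node id):

/* Illustrative only: -1 means no _PXM was found on the device or any ancestor */
static int example_device_proximity(acpi_handle handle)
{
	int pxm = acpi_get_pxm(handle);

	if (pxm < 0)
		return -1;

	return pxm;	/* proximity domain, e.g. input to an arch pxm-to-node map */
}
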
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ARGI_LOCAL6 ARG_NONE
#define ARGI_LOCAL7 ARG_NONE
#define ARGI_LOR_OP ARGI_LIST2 (ARGI_INTEGER, ARGI_INTEGER)
-#define ARGI_MATCH_OP ARGI_LIST6 (ARGI_PACKAGE, ARGI_INTEGER, ARGI_INTEGER, ARGI_INTEGER, ARGI_INTEGER, ARGI_INTEGER)
+#define ARGI_MATCH_OP                   ARGI_LIST6 (ARGI_PACKAGE, ARGI_INTEGER, ARGI_COMPUTEDATA, ARGI_INTEGER, ARGI_COMPUTEDATA, ARGI_INTEGER)
#define ARGI_METHOD_OP ARGI_INVALID_OPCODE
#define ARGI_METHODCALL_OP ARGI_INVALID_OPCODE
#define ARGI_MID_OP ARGI_LIST4 (ARGI_BUFFER_OR_STRING,ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_SHIFT_LEFT_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_SHIFT_RIGHT_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_SIGNAL_OP ARGI_LIST1 (ARGI_EVENT)
-#define ARGI_SIZE_OF_OP ARGI_LIST1 (ARGI_REFERENCE) /* Force delay of operand resolution */
+#define ARGI_SIZE_OF_OP ARGI_LIST1 (ARGI_DATAOBJECT)
#define ARGI_SLEEP_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_STALL_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_STATICSTRING_OP ARGI_INVALID_OPCODE
-#define ARGI_STORE_OP ARGI_LIST2 (ARGI_ANYTYPE, ARGI_TARGETREF)
+#define ARGI_STORE_OP ARGI_LIST2 (ARGI_DATAREFOBJ, ARGI_TARGETREF)
#define ARGI_STRING_OP ARGI_INVALID_OPCODE
#define ARGI_SUBTRACT_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_THERMAL_ZONE_OP ARGI_INVALID_OPCODE
#define ARGI_TO_HEX_STR_OP ARGI_LIST2 (ARGI_COMPUTEDATA,ARGI_FIXED_TARGET)
#define ARGI_TO_INTEGER_OP ARGI_LIST2 (ARGI_COMPUTEDATA,ARGI_FIXED_TARGET)
#define ARGI_TO_STRING_OP ARGI_LIST3 (ARGI_BUFFER, ARGI_INTEGER, ARGI_FIXED_TARGET)
-#define ARGI_TYPE_OP ARGI_LIST1 (ARGI_REFERENCE) /* Force delay of operand resolution */
+#define ARGI_TYPE_OP ARGI_LIST1 (ARGI_ANYTYPE)
#define ARGI_UNLOAD_OP ARGI_LIST1 (ARGI_DDBHANDLE)
#define ARGI_VAR_PACKAGE_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_WAIT_OP ARGI_LIST2 (ARGI_EVENT, ARGI_INTEGER)
/* 2D */ ACPI_OP ("FindSetRightBit", ARGP_FIND_SET_RIGHT_BIT_OP,ARGI_FIND_SET_RIGHT_BIT_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_1T_1R, AML_FLAGS_EXEC_1A_1T_1R | AML_CONSTANT),
/* 2E */ ACPI_OP ("DerefOf", ARGP_DEREF_OF_OP, ARGI_DEREF_OF_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R),
/* 2F */ ACPI_OP ("Notify", ARGP_NOTIFY_OP, ARGI_NOTIFY_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_2A_0T_0R, AML_FLAGS_EXEC_2A_0T_0R),
-/* 30 */ ACPI_OP ("SizeOf", ARGP_SIZE_OF_OP, ARGI_SIZE_OF_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R),
+/* 30 */ ACPI_OP ("SizeOf", ARGP_SIZE_OF_OP, ARGI_SIZE_OF_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R | AML_NO_OPERAND_RESOLVE),
/* 31 */ ACPI_OP ("Index", ARGP_INDEX_OP, ARGI_INDEX_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_2A_1T_1R, AML_FLAGS_EXEC_2A_1T_1R | AML_CONSTANT),
/* 32 */ ACPI_OP ("Match", ARGP_MATCH_OP, ARGI_MATCH_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_6A_0T_1R, AML_FLAGS_EXEC_6A_0T_1R | AML_CONSTANT),
/* 33 */ ACPI_OP ("CreateDWordField", ARGP_CREATE_DWORD_FIELD_OP,ARGI_CREATE_DWORD_FIELD_OP, ACPI_TYPE_BUFFER_FIELD, AML_CLASS_CREATE, AML_TYPE_CREATE_FIELD, AML_HAS_ARGS | AML_NSOBJECT | AML_NSNODE | AML_DEFER | AML_CREATE),
/* 34 */ ACPI_OP ("CreateWordField", ARGP_CREATE_WORD_FIELD_OP, ARGI_CREATE_WORD_FIELD_OP, ACPI_TYPE_BUFFER_FIELD, AML_CLASS_CREATE, AML_TYPE_CREATE_FIELD, AML_HAS_ARGS | AML_NSOBJECT | AML_NSNODE | AML_DEFER | AML_CREATE),
/* 35 */ ACPI_OP ("CreateByteField", ARGP_CREATE_BYTE_FIELD_OP, ARGI_CREATE_BYTE_FIELD_OP, ACPI_TYPE_BUFFER_FIELD, AML_CLASS_CREATE, AML_TYPE_CREATE_FIELD, AML_HAS_ARGS | AML_NSOBJECT | AML_NSNODE | AML_DEFER | AML_CREATE),
/* 36 */ ACPI_OP ("CreateBitField", ARGP_CREATE_BIT_FIELD_OP, ARGI_CREATE_BIT_FIELD_OP, ACPI_TYPE_BUFFER_FIELD, AML_CLASS_CREATE, AML_TYPE_CREATE_FIELD, AML_HAS_ARGS | AML_NSOBJECT | AML_NSNODE | AML_DEFER | AML_CREATE),
-/* 37 */ ACPI_OP ("ObjectType", ARGP_TYPE_OP, ARGI_TYPE_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R),
+/* 37 */ ACPI_OP ("ObjectType", ARGP_TYPE_OP, ARGI_TYPE_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R | AML_NO_OPERAND_RESOLVE),
/* 38 */ ACPI_OP ("LAnd", ARGP_LAND_OP, ARGI_LAND_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_2A_0T_1R, AML_FLAGS_EXEC_2A_0T_1R | AML_LOGICAL_NUMERIC | AML_CONSTANT),
/* 39 */ ACPI_OP ("LOr", ARGP_LOR_OP, ARGI_LOR_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_2A_0T_1R, AML_FLAGS_EXEC_2A_0T_1R | AML_LOGICAL_NUMERIC | AML_CONSTANT),
/* 3A */ ACPI_OP ("LNot", ARGP_LNOT_OP, ARGI_LNOT_OP, ACPI_TYPE_ANY, AML_CLASS_EXECUTE, AML_TYPE_EXEC_1A_0T_1R, AML_FLAGS_EXEC_1A_0T_1R | AML_CONSTANT),
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
}
+#ifdef ACPI_ENABLE_OBJECT_CACHE
/*******************************************************************************
*
* FUNCTION: acpi_ps_delete_parse_cache
acpi_ut_delete_generic_cache (ACPI_MEM_LIST_PSNODE_EXT);
return_VOID;
}
+#endif
/*******************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_status status = AE_OK;
struct acpi_pci_data *data = NULL;
struct acpi_pci_data *pdata = NULL;
- char pathname[ACPI_PATHNAME_MAX] = {0};
- struct acpi_buffer buffer = {ACPI_PATHNAME_MAX, pathname};
+ char *pathname = NULL;
+ struct acpi_buffer buffer = {0, NULL};
acpi_handle handle = NULL;
ACPI_FUNCTION_TRACE("acpi_pci_bind");
if (!device || !device->parent)
return_VALUE(-EINVAL);
+ pathname = kmalloc(ACPI_PATHNAME_MAX, GFP_KERNEL);
+ if(!pathname)
+ return_VALUE(-ENOMEM);
+ memset(pathname, 0, ACPI_PATHNAME_MAX);
+ buffer.length = ACPI_PATHNAME_MAX;
+ buffer.pointer = pathname;
+
data = kmalloc(sizeof(struct acpi_pci_data), GFP_KERNEL);
- if (!data)
+ if (!data){
+ kfree (pathname);
return_VALUE(-ENOMEM);
+ }
memset(data, 0, sizeof(struct acpi_pci_data));
acpi_get_name(device->handle, ACPI_FULL_PATHNAME, &buffer);
data->id.device, data->id.function));
data->bus = data->dev->subordinate;
device->ops.bind = acpi_pci_bind;
+ device->ops.unbind = acpi_pci_unbind;
}
/*
}
end:
+ kfree(pathname);
if (result)
kfree(data);
return_VALUE(result);
}
+int acpi_pci_unbind(
+ struct acpi_device *device)
+{
+ int result = 0;
+ acpi_status status = AE_OK;
+ struct acpi_pci_data *data = NULL;
+ char *pathname = NULL;
+ struct acpi_buffer buffer = {0, NULL};
+
+ ACPI_FUNCTION_TRACE("acpi_pci_unbind");
+
+ if (!device || !device->parent)
+ return_VALUE(-EINVAL);
+
+ pathname = (char *) kmalloc(ACPI_PATHNAME_MAX, GFP_KERNEL);
+ if(!pathname)
+ return_VALUE(-ENOMEM);
+ memset(pathname, 0, ACPI_PATHNAME_MAX);
+
+ buffer.length = ACPI_PATHNAME_MAX;
+ buffer.pointer = pathname;
+ acpi_get_name(device->handle, ACPI_FULL_PATHNAME, &buffer);
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Unbinding PCI device [%s]...\n",
+ pathname));
+ kfree(pathname);
+
+ status = acpi_get_data(device->handle, acpi_pci_data_handler, (void**)&data);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to get data from device %s\n",
+ acpi_device_bid(device)));
+ result = -ENODEV;
+ goto end;
+ }
+
+ status = acpi_detach_data(device->handle, acpi_pci_data_handler);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to detach data from device %s\n",
+ acpi_device_bid(device)));
+ result = -ENODEV;
+ goto end;
+ }
+ if (data->dev->subordinate) {
+ acpi_pci_irq_del_prt(data->id.segment, data->bus->number);
+ }
+ kfree(data);
+
+end:
+ return_VALUE(result);
+}
int
acpi_pci_bind_root (
int result = 0;
acpi_status status = AE_OK;
struct acpi_pci_data *data = NULL;
- char pathname[ACPI_PATHNAME_MAX] = {0};
- struct acpi_buffer buffer = {ACPI_PATHNAME_MAX, pathname};
+ char *pathname = NULL;
+ struct acpi_buffer buffer = {0, NULL};
ACPI_FUNCTION_TRACE("acpi_pci_bind_root");
- if (!device || !id || !bus)
+ pathname = (char *)kmalloc(ACPI_PATHNAME_MAX, GFP_KERNEL);
+ if(!pathname)
+ return_VALUE(-ENOMEM);
+ memset(pathname, 0, ACPI_PATHNAME_MAX);
+
+ buffer.length = ACPI_PATHNAME_MAX;
+ buffer.pointer = pathname;
+
+ if (!device || !id || !bus){
+ kfree(pathname);
return_VALUE(-EINVAL);
+ }
data = kmalloc(sizeof(struct acpi_pci_data), GFP_KERNEL);
- if (!data)
+ if (!data){
+ kfree(pathname);
return_VALUE(-ENOMEM);
+ }
memset(data, 0, sizeof(struct acpi_pci_data));
data->id = *id;
data->bus = bus;
device->ops.bind = acpi_pci_bind;
+ device->ops.unbind = acpi_pci_unbind;
acpi_get_name(device->handle, ACPI_FULL_PATHNAME, &buffer);
}
end:
+ kfree(pathname);
if (result != 0)
kfree(data);
--- /dev/null
+/*
+ * acpi_processor.c - ACPI Processor Driver ($Revision: 71 $)
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2004 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ * - Added processor hotplug support
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * TBD:
+ * 1. Make # power states dynamic.
+ * 2. Support duty_cycle values that span bit 4.
+ * 3. Optimize by having scheduler determine busyness instead of
+ * having us try to calculate it here.
+ * 4. Need C1 timing -- must modify kernel (IRQ handler) to get this.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/cpufreq.h>
+#include <linux/cpu.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/dmi.h>
+#include <linux/moduleparam.h>
+
+#include <asm/io.h>
+#include <asm/system.h>
+#include <asm/cpu.h>
+#include <asm/delay.h>
+#include <asm/uaccess.h>
+#include <asm/processor.h>
+#include <asm/smp.h>
+#include <asm/acpi.h>
+
+#include <acpi/acpi_bus.h>
+#include <acpi/acpi_drivers.h>
+#include <acpi/processor.h>
+
+
+#define ACPI_PROCESSOR_COMPONENT 0x01000000
+#define ACPI_PROCESSOR_CLASS "processor"
+#define ACPI_PROCESSOR_DRIVER_NAME "ACPI Processor Driver"
+#define ACPI_PROCESSOR_DEVICE_NAME "Processor"
+#define ACPI_PROCESSOR_FILE_INFO "info"
+#define ACPI_PROCESSOR_FILE_THROTTLING "throttling"
+#define ACPI_PROCESSOR_FILE_LIMIT "limit"
+#define ACPI_PROCESSOR_NOTIFY_PERFORMANCE 0x80
+#define ACPI_PROCESSOR_NOTIFY_POWER 0x81
+
+#define ACPI_PROCESSOR_LIMIT_USER 0
+#define ACPI_PROCESSOR_LIMIT_THERMAL 1
+
+#define ACPI_STA_PRESENT 0x00000001
+
+#define _COMPONENT ACPI_PROCESSOR_COMPONENT
+ACPI_MODULE_NAME ("acpi_processor")
+
+MODULE_AUTHOR("Paul Diefenbaugh");
+MODULE_DESCRIPTION(ACPI_PROCESSOR_DRIVER_NAME);
+MODULE_LICENSE("GPL");
+
+
+static int acpi_processor_add (struct acpi_device *device);
+static int acpi_processor_start (struct acpi_device *device);
+static int acpi_processor_remove (struct acpi_device *device, int type);
+static int acpi_processor_info_open_fs(struct inode *inode, struct file *file);
+static void acpi_processor_notify ( acpi_handle handle, u32 event, void *data);
+static acpi_status acpi_processor_hotadd_init(acpi_handle handle, int *p_cpu);
+static int acpi_processor_handle_eject(struct acpi_processor *pr);
+
+static struct acpi_driver acpi_processor_driver = {
+ .name = ACPI_PROCESSOR_DRIVER_NAME,
+ .class = ACPI_PROCESSOR_CLASS,
+ .ids = ACPI_PROCESSOR_HID,
+ .ops = {
+ .add = acpi_processor_add,
+ .remove = acpi_processor_remove,
+ .start = acpi_processor_start,
+ },
+};
+
+#define INSTALL_NOTIFY_HANDLER 1
+#define UNINSTALL_NOTIFY_HANDLER 2
+
+
+struct file_operations acpi_processor_info_fops = {
+ .open = acpi_processor_info_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+
+struct acpi_processor *processors[NR_CPUS];
+struct acpi_processor_errata errata;
+
+
+/* --------------------------------------------------------------------------
+ Errata Handling
+ -------------------------------------------------------------------------- */
+
+int
+acpi_processor_errata_piix4 (
+ struct pci_dev *dev)
+{
+ u8 rev = 0;
+ u8 value1 = 0;
+ u8 value2 = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_errata_piix4");
+
+ if (!dev)
+ return_VALUE(-EINVAL);
+
+ /*
+ * Note that 'dev' references the PIIX4 ACPI Controller.
+ */
+
+ pci_read_config_byte(dev, PCI_REVISION_ID, &rev);
+
+ switch (rev) {
+ case 0:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found PIIX4 A-step\n"));
+ break;
+ case 1:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found PIIX4 B-step\n"));
+ break;
+ case 2:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found PIIX4E\n"));
+ break;
+ case 3:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found PIIX4M\n"));
+ break;
+ default:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found unknown PIIX4\n"));
+ break;
+ }
+
+ switch (rev) {
+
+ case 0: /* PIIX4 A-step */
+ case 1: /* PIIX4 B-step */
+ /*
+ * See specification changes #13 ("Manual Throttle Duty Cycle")
+ * and #14 ("Enabling and Disabling Manual Throttle"), plus
+ * erratum #5 ("STPCLK# Deassertion Time") from the January
+ * 2002 PIIX4 specification update. Applies to only older
+ * PIIX4 models.
+ */
+ errata.piix4.throttle = 1;
+
+ case 2: /* PIIX4E */
+ case 3: /* PIIX4M */
+ /*
+ * See erratum #18 ("C3 Power State/BMIDE and Type-F DMA
+ * Livelock") from the January 2002 PIIX4 specification update.
+ * Applies to all PIIX4 models.
+ */
+
+ /*
+ * BM-IDE
+ * ------
+ * Find the PIIX4 IDE Controller and get the Bus Master IDE
+ * Status register address. We'll use this later to read
+ * each IDE controller's DMA status to make sure we catch all
+ * DMA activity.
+ */
+ dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82371AB,
+ PCI_ANY_ID, PCI_ANY_ID, NULL);
+ if (dev) {
+ errata.piix4.bmisx = pci_resource_start(dev, 4);
+ pci_dev_put(dev);
+ }
+
+ /*
+ * Type-F DMA
+ * ----------
+ * Find the PIIX4 ISA Controller and read the Motherboard
+ * DMA controller's status to see if Type-F (Fast) DMA mode
+ * is enabled (bit 7) on either channel. Note that we'll
+ * disable C3 support if this is enabled, as some legacy
+ * devices won't operate well if fast DMA is disabled.
+ */
+ dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82371AB_0,
+ PCI_ANY_ID, PCI_ANY_ID, NULL);
+ if (dev) {
+ pci_read_config_byte(dev, 0x76, &value1);
+ pci_read_config_byte(dev, 0x77, &value2);
+ if ((value1 & 0x80) || (value2 & 0x80))
+ errata.piix4.fdma = 1;
+ pci_dev_put(dev);
+ }
+
+ break;
+ }
+
+ if (errata.piix4.bmisx)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Bus master activity detection (BM-IDE) erratum enabled\n"));
+ if (errata.piix4.fdma)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Type-F DMA livelock erratum (C3 disabled)\n"));
+
+ return_VALUE(0);
+}
+
+
+int
+acpi_processor_errata (
+ struct acpi_processor *pr)
+{
+ int result = 0;
+ struct pci_dev *dev = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_errata");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ /*
+ * PIIX4
+ */
+ dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+ PCI_DEVICE_ID_INTEL_82371AB_3, PCI_ANY_ID, PCI_ANY_ID, NULL);
+ if (dev) {
+ result = acpi_processor_errata_piix4(dev);
+ pci_dev_put(dev);
+ }
+
+ return_VALUE(result);
+}
+
+
+/* --------------------------------------------------------------------------
+ FS Interface (/proc)
+ -------------------------------------------------------------------------- */
+
+struct proc_dir_entry *acpi_processor_dir = NULL;
+
+static int acpi_processor_info_seq_show(struct seq_file *seq, void *offset)
+{
+ struct acpi_processor *pr = (struct acpi_processor *)seq->private;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_info_seq_show");
+
+ if (!pr)
+ goto end;
+
+ seq_printf(seq, "processor id: %d\n"
+ "acpi id: %d\n"
+ "bus mastering control: %s\n"
+ "power management: %s\n"
+ "throttling control: %s\n"
+ "limit interface: %s\n",
+ pr->id,
+ pr->acpi_id,
+ pr->flags.bm_control ? "yes" : "no",
+ pr->flags.power ? "yes" : "no",
+ pr->flags.throttling ? "yes" : "no",
+ pr->flags.limit ? "yes" : "no");
+
+end:
+ return_VALUE(0);
+}
+
+static int acpi_processor_info_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, acpi_processor_info_seq_show,
+ PDE(inode)->data);
+}
+
+
+static int
+acpi_processor_add_fs (
+ struct acpi_device *device)
+{
+ struct proc_dir_entry *entry = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_add_fs");
+
+ if (!acpi_device_dir(device)) {
+ acpi_device_dir(device) = proc_mkdir(acpi_device_bid(device),
+ acpi_processor_dir);
+ if (!acpi_device_dir(device))
+ return_VALUE(-ENODEV);
+ }
+ acpi_device_dir(device)->owner = THIS_MODULE;
+
+ /* 'info' [R] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_INFO,
+ S_IRUGO, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_INFO));
+ else {
+ entry->proc_fops = &acpi_processor_info_fops;
+ entry->data = acpi_driver_data(device);
+ entry->owner = THIS_MODULE;
+ }
+
+ /* 'throttling' [R/W] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_THROTTLING,
+ S_IFREG|S_IRUGO|S_IWUSR, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_THROTTLING));
+ else {
+ entry->proc_fops = &acpi_processor_throttling_fops;
+ entry->proc_fops->write = acpi_processor_write_throttling;
+ entry->data = acpi_driver_data(device);
+ entry->owner = THIS_MODULE;
+ }
+
+ /* 'limit' [R/W] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_LIMIT,
+ S_IFREG|S_IRUGO|S_IWUSR, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_LIMIT));
+ else {
+ entry->proc_fops = &acpi_processor_limit_fops;
+ entry->proc_fops->write = acpi_processor_write_limit;
+ entry->data = acpi_driver_data(device);
+ entry->owner = THIS_MODULE;
+ }
+
+ return_VALUE(0);
+}
+
+
+static int
+acpi_processor_remove_fs (
+ struct acpi_device *device)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_remove_fs");
+
+ if (acpi_device_dir(device)) {
+ remove_proc_entry(ACPI_PROCESSOR_FILE_INFO,acpi_device_dir(device));
+ remove_proc_entry(ACPI_PROCESSOR_FILE_THROTTLING,
+ acpi_device_dir(device));
+ remove_proc_entry(ACPI_PROCESSOR_FILE_LIMIT,acpi_device_dir(device));
+ remove_proc_entry(acpi_device_bid(device), acpi_processor_dir);
+ acpi_device_dir(device) = NULL;
+ }
+
+ return_VALUE(0);
+}
+
+/* Use the acpiid in MADT to map cpus in case of SMP */
+#ifndef CONFIG_SMP
+#define convert_acpiid_to_cpu(acpi_id) (0xff)
+#else
+
+#ifdef CONFIG_IA64
+#define arch_acpiid_to_apicid ia64_acpiid_to_sapicid
+#define arch_cpu_to_apicid ia64_cpu_to_sapicid
+#define ARCH_BAD_APICID (0xffff)
+#else
+#define arch_acpiid_to_apicid x86_acpiid_to_apicid
+#define arch_cpu_to_apicid x86_cpu_to_apicid
+#define ARCH_BAD_APICID (0xff)
+#endif
+
+static u8 convert_acpiid_to_cpu(u8 acpi_id)
+{
+ u16 apic_id;
+ int i;
+
+ apic_id = arch_acpiid_to_apicid[acpi_id];
+ if (apic_id == ARCH_BAD_APICID)
+ return -1;
+
+ for (i = 0; i < NR_CPUS; i++) {
+ if (arch_cpu_to_apicid[i] == apic_id)
+ return i;
+ }
+ return -1;
+}
+#endif
+
+/* --------------------------------------------------------------------------
+ Driver Interface
+ -------------------------------------------------------------------------- */
+
+static int
+acpi_processor_get_info (
+ struct acpi_processor *pr)
+{
+ acpi_status status = 0;
+ union acpi_object object = {0};
+ struct acpi_buffer buffer = {sizeof(union acpi_object), &object};
+ u8 cpu_index;
+ static int cpu0_initialized;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_info");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (num_online_cpus() > 1)
+ errata.smp = TRUE;
+
+ acpi_processor_errata(pr);
+
+ /*
+ * Check to see if we have bus mastering arbitration control. This
+ * is required for proper C3 usage (to maintain cache coherency).
+ */
+ if (acpi_fadt.V1_pm2_cnt_blk && acpi_fadt.pm2_cnt_len) {
+ pr->flags.bm_control = 1;
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Bus mastering arbitration control present\n"));
+ }
+ else
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "No bus mastering arbitration control\n"));
+
+ /*
+	 * Evaluate the processor object. Note that it is common on SMP to
+ * have the first (boot) processor with a valid PBLK address while
+ * all others have a NULL address.
+ */
+ status = acpi_evaluate_object(pr->handle, NULL, NULL, &buffer);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Error evaluating processor object\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ /*
+ * TBD: Synch processor ID (via LAPIC/LSAPIC structures) on SMP.
+ * >>> 'acpi_get_processor_id(acpi_id, &id)' in arch/xxx/acpi.c
+ */
+ pr->acpi_id = object.processor.proc_id;
+
+ cpu_index = convert_acpiid_to_cpu(pr->acpi_id);
+
+ /* Handle UP system running SMP kernel, with no LAPIC in MADT */
+ if ( !cpu0_initialized && (cpu_index == 0xff) &&
+ (num_online_cpus() == 1)) {
+ cpu_index = 0;
+ }
+
+ cpu0_initialized = 1;
+
+ pr->id = cpu_index;
+
+ /*
+ * Extra Processor objects may be enumerated on MP systems with
+ * less than the max # of CPUs. They should be ignored _iff
+ * they are physically not present.
+ */
+ if (cpu_index >= NR_CPUS) {
+ if (ACPI_FAILURE(acpi_processor_hotadd_init(pr->handle, &pr->id))) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Error getting cpuindex for acpiid 0x%x\n",
+ pr->acpi_id));
+ return_VALUE(-ENODEV);
+ }
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Processor [%d:%d]\n", pr->id,
+ pr->acpi_id));
+
+ if (!object.processor.pblk_address)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No PBLK (NULL address)\n"));
+ else if (object.processor.pblk_length != 6)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid PBLK length [%d]\n",
+ object.processor.pblk_length));
+ else {
+ pr->throttling.address = object.processor.pblk_address;
+ pr->throttling.duty_offset = acpi_fadt.duty_offset;
+ pr->throttling.duty_width = acpi_fadt.duty_width;
+
+ pr->pblk = object.processor.pblk_address;
+
+ /*
+ * We don't care about error returns - we just try to mark
+ * these reserved so that nobody else is confused into thinking
+ * that this region might be unused..
+ *
+ * (In particular, allocating the IO range for Cardbus)
+ */
+ request_region(pr->throttling.address, 6, "ACPI CPU throttle");
+ }
+
+#ifdef CONFIG_CPU_FREQ
+ acpi_processor_ppc_has_changed(pr);
+#endif
+ acpi_processor_get_throttling_info(pr);
+ acpi_processor_get_limit_info(pr);
+
+ return_VALUE(0);
+}
+
+static int
+acpi_processor_start(
+ struct acpi_device *device)
+{
+ int result = 0;
+ acpi_status status = AE_OK;
+ struct acpi_processor *pr;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_start");
+
+ pr = acpi_driver_data(device);
+
+ result = acpi_processor_get_info(pr);
+ if (result) {
+ /* Processor is physically not present */
+ return_VALUE(0);
+ }
+
+ BUG_ON((pr->id >= NR_CPUS) || (pr->id < 0));
+
+ processors[pr->id] = pr;
+
+ result = acpi_processor_add_fs(device);
+ if (result)
+ goto end;
+
+ status = acpi_install_notify_handler(pr->handle, ACPI_DEVICE_NOTIFY,
+ acpi_processor_notify, pr);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Error installing device notify handler\n"));
+ }
+
+ acpi_processor_power_init(pr, device);
+
+ if (pr->flags.throttling) {
+ printk(KERN_INFO PREFIX "%s [%s] (supports",
+ acpi_device_name(device), acpi_device_bid(device));
+ printk(" %d throttling states", pr->throttling.state_count);
+ printk(")\n");
+ }
+
+end:
+
+ return_VALUE(result);
+}
+
+
+
+static void
+acpi_processor_notify (
+ acpi_handle handle,
+ u32 event,
+ void *data)
+{
+ struct acpi_processor *pr = (struct acpi_processor *) data;
+ struct acpi_device *device = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_notify");
+
+ if (!pr)
+ return_VOID;
+
+ if (acpi_bus_get_device(pr->handle, &device))
+ return_VOID;
+
+ switch (event) {
+ case ACPI_PROCESSOR_NOTIFY_PERFORMANCE:
+ acpi_processor_ppc_has_changed(pr);
+ acpi_bus_generate_event(device, event,
+ pr->performance_platform_limit);
+ break;
+ case ACPI_PROCESSOR_NOTIFY_POWER:
+ acpi_processor_cst_has_changed(pr);
+ acpi_bus_generate_event(device, event, 0);
+ break;
+ default:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Unsupported event [0x%x]\n", event));
+ break;
+ }
+
+ return_VOID;
+}
+
+
+static int
+acpi_processor_add (
+ struct acpi_device *device)
+{
+ struct acpi_processor *pr = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_add");
+
+ if (!device)
+ return_VALUE(-EINVAL);
+
+ pr = kmalloc(sizeof(struct acpi_processor), GFP_KERNEL);
+ if (!pr)
+ return_VALUE(-ENOMEM);
+ memset(pr, 0, sizeof(struct acpi_processor));
+
+ pr->handle = device->handle;
+ strcpy(acpi_device_name(device), ACPI_PROCESSOR_DEVICE_NAME);
+ strcpy(acpi_device_class(device), ACPI_PROCESSOR_CLASS);
+ acpi_driver_data(device) = pr;
+
+ return_VALUE(0);
+}
+
+
+static int
+acpi_processor_remove (
+ struct acpi_device *device,
+ int type)
+{
+ acpi_status status = AE_OK;
+ struct acpi_processor *pr = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_remove");
+
+ if (!device || !acpi_driver_data(device))
+ return_VALUE(-EINVAL);
+
+ pr = (struct acpi_processor *) acpi_driver_data(device);
+
+ if (pr->id >= NR_CPUS) {
+ kfree(pr);
+ return_VALUE(0);
+ }
+
+ if (type == ACPI_BUS_REMOVAL_EJECT) {
+ if (acpi_processor_handle_eject(pr))
+ return_VALUE(-EINVAL);
+ }
+
+ acpi_processor_power_exit(pr, device);
+
+ status = acpi_remove_notify_handler(pr->handle, ACPI_DEVICE_NOTIFY,
+ acpi_processor_notify);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Error removing notify handler\n"));
+ }
+
+ acpi_processor_remove_fs(device);
+
+ processors[pr->id] = NULL;
+
+ kfree(pr);
+
+ return_VALUE(0);
+}
+
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+/****************************************************************************
+ * Acpi processor hotplug support *
+ ****************************************************************************/
+
+static int is_processor_present(acpi_handle handle);
+
+static int
+is_processor_present(
+ acpi_handle handle)
+{
+ acpi_status status;
+ unsigned long sta = 0;
+
+ ACPI_FUNCTION_TRACE("is_processor_present");
+
+ status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
+ if (ACPI_FAILURE(status) || !(sta & ACPI_STA_PRESENT)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Processor Device is not present\n"));
+ return_VALUE(0);
+ }
+ return_VALUE(1);
+}
+
+
+static
+int acpi_processor_device_add(
+ acpi_handle handle,
+ struct acpi_device **device)
+{
+ acpi_handle phandle;
+ struct acpi_device *pdev;
+ struct acpi_processor *pr;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_device_add");
+
+ if (acpi_get_parent(handle, &phandle)) {
+ return_VALUE(-ENODEV);
+ }
+
+ if (acpi_bus_get_device(phandle, &pdev)) {
+ return_VALUE(-ENODEV);
+ }
+
+ if (acpi_bus_add(device, pdev, handle, ACPI_BUS_TYPE_PROCESSOR)) {
+ return_VALUE(-ENODEV);
+ }
+
+ acpi_bus_scan(*device);
+
+ pr = acpi_driver_data(*device);
+ if (!pr)
+ return_VALUE(-ENODEV);
+
+ if ((pr->id >=0) && (pr->id < NR_CPUS)) {
+ kobject_hotplug(&(*device)->kobj, KOBJ_ONLINE);
+ }
+ return_VALUE(0);
+}
+
+
+static void
+acpi_processor_hotplug_notify (
+ acpi_handle handle,
+ u32 event,
+ void *data)
+{
+ struct acpi_processor *pr;
+ struct acpi_device *device = NULL;
+ int result;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_hotplug_notify");
+
+ switch (event) {
+ case ACPI_NOTIFY_BUS_CHECK:
+ case ACPI_NOTIFY_DEVICE_CHECK:
+ printk("Processor driver received %s event\n",
+ (event==ACPI_NOTIFY_BUS_CHECK)?
+ "ACPI_NOTIFY_BUS_CHECK":"ACPI_NOTIFY_DEVICE_CHECK");
+
+ if (!is_processor_present(handle))
+ break;
+
+ if (acpi_bus_get_device(handle, &device)) {
+ result = acpi_processor_device_add(handle, &device);
+ if (result)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to add the device\n"));
+ break;
+ }
+
+ pr = acpi_driver_data(device);
+ if (!pr) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Driver data is NULL\n"));
+ break;
+ }
+
+ if (pr->id >= 0 && (pr->id < NR_CPUS)) {
+ kobject_hotplug(&device->kobj, KOBJ_OFFLINE);
+ break;
+ }
+
+ result = acpi_processor_start(device);
+ if ((!result) && ((pr->id >=0) && (pr->id < NR_CPUS))) {
+ kobject_hotplug(&device->kobj, KOBJ_ONLINE);
+ } else {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Device [%s] failed to start\n",
+ acpi_device_bid(device)));
+ }
+ break;
+ case ACPI_NOTIFY_EJECT_REQUEST:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,"received ACPI_NOTIFY_EJECT_REQUEST\n"));
+
+ if (acpi_bus_get_device(handle, &device)) {
+			ACPI_DEBUG_PRINT((ACPI_DB_ERROR,"Device doesn't exist, dropping EJECT\n"));
+ break;
+ }
+ pr = acpi_driver_data(device);
+ if (!pr) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,"Driver data is NULL, dropping EJECT\n"));
+ return_VOID;
+ }
+
+ if ((pr->id < NR_CPUS) && (cpu_present(pr->id)))
+ kobject_hotplug(&device->kobj, KOBJ_OFFLINE);
+ break;
+ default:
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Unsupported event [0x%x]\n", event));
+ break;
+ }
+
+ return_VOID;
+}
+
+static acpi_status
+processor_walk_namespace_cb(acpi_handle handle,
+ u32 lvl,
+ void *context,
+ void **rv)
+{
+ acpi_status status;
+ int *action = context;
+ acpi_object_type type = 0;
+
+ status = acpi_get_type(handle, &type);
+ if (ACPI_FAILURE(status))
+ return(AE_OK);
+
+ if (type != ACPI_TYPE_PROCESSOR)
+ return(AE_OK);
+
+ switch(*action) {
+ case INSTALL_NOTIFY_HANDLER:
+ acpi_install_notify_handler(handle,
+ ACPI_SYSTEM_NOTIFY,
+ acpi_processor_hotplug_notify,
+ NULL);
+ break;
+ case UNINSTALL_NOTIFY_HANDLER:
+ acpi_remove_notify_handler(handle,
+ ACPI_SYSTEM_NOTIFY,
+ acpi_processor_hotplug_notify);
+ break;
+ default:
+ break;
+ }
+
+ return(AE_OK);
+}
+
+
+static acpi_status
+acpi_processor_hotadd_init(
+ acpi_handle handle,
+ int *p_cpu)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_hotadd_init");
+
+ if (!is_processor_present(handle)) {
+ return_VALUE(AE_ERROR);
+ }
+
+ if (acpi_map_lsapic(handle, p_cpu))
+ return_VALUE(AE_ERROR);
+
+ if (arch_register_cpu(*p_cpu)) {
+ acpi_unmap_lsapic(*p_cpu);
+ return_VALUE(AE_ERROR);
+ }
+
+ return_VALUE(AE_OK);
+}
+
+
+static int
+acpi_processor_handle_eject(struct acpi_processor *pr)
+{
+ if (cpu_online(pr->id)) {
+ return(-EINVAL);
+ }
+ arch_unregister_cpu(pr->id);
+ acpi_unmap_lsapic(pr->id);
+ return(0);
+}
+#else
+static acpi_status
+acpi_processor_hotadd_init(
+ acpi_handle handle,
+ int *p_cpu)
+{
+ return AE_ERROR;
+}
+static int
+acpi_processor_handle_eject(struct acpi_processor *pr)
+{
+ return(-EINVAL);
+}
+#endif
+
+
+static
+void acpi_processor_install_hotplug_notify(void)
+{
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+ int action = INSTALL_NOTIFY_HANDLER;
+ acpi_walk_namespace(ACPI_TYPE_PROCESSOR,
+ ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX,
+ processor_walk_namespace_cb,
+ &action, NULL);
+#endif
+}
+
+
+static
+void acpi_processor_uninstall_hotplug_notify(void)
+{
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+ int action = UNINSTALL_NOTIFY_HANDLER;
+ acpi_walk_namespace(ACPI_TYPE_PROCESSOR,
+ ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX,
+ processor_walk_namespace_cb,
+ &action, NULL);
+#endif
+}
+
+/*
+ * We keep the driver loaded even when ACPI is not running.
+ * This is needed for the powernow-k8 driver, which works even without
+ * ACPI, but needs symbols from this driver
+ */
+
+static int __init
+acpi_processor_init (void)
+{
+ int result = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_init");
+
+ memset(&processors, 0, sizeof(processors));
+ memset(&errata, 0, sizeof(errata));
+
+ acpi_processor_dir = proc_mkdir(ACPI_PROCESSOR_CLASS, acpi_root_dir);
+ if (!acpi_processor_dir)
+ return_VALUE(0);
+ acpi_processor_dir->owner = THIS_MODULE;
+
+ result = acpi_bus_register_driver(&acpi_processor_driver);
+ if (result < 0) {
+ remove_proc_entry(ACPI_PROCESSOR_CLASS, acpi_root_dir);
+ return_VALUE(0);
+ }
+
+ acpi_processor_install_hotplug_notify();
+
+ acpi_thermal_cpufreq_init();
+
+ acpi_processor_ppc_init();
+
+ return_VALUE(0);
+}
+
+
+static void __exit
+acpi_processor_exit (void)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_exit");
+
+ acpi_processor_ppc_exit();
+
+ acpi_thermal_cpufreq_exit();
+
+ acpi_processor_uninstall_hotplug_notify();
+
+ acpi_bus_unregister_driver(&acpi_processor_driver);
+
+ remove_proc_entry(ACPI_PROCESSOR_CLASS, acpi_root_dir);
+
+ return_VOID;
+}
+
+
+module_init(acpi_processor_init);
+module_exit(acpi_processor_exit);
+
+EXPORT_SYMBOL(acpi_processor_set_thermal_limit);
+
+MODULE_ALIAS("processor");
--- /dev/null
+/*
+ * processor_idle - idle state submodule to the ACPI processor driver
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2004 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ * - Added processor hotplug support
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/acpi.h>
+#include <linux/dmi.h>
+#include <linux/moduleparam.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+#include <acpi/acpi_bus.h>
+#include <acpi/processor.h>
+
+#define ACPI_PROCESSOR_COMPONENT 0x01000000
+#define ACPI_PROCESSOR_CLASS "processor"
+#define ACPI_PROCESSOR_DRIVER_NAME "ACPI Processor Driver"
+#define _COMPONENT ACPI_PROCESSOR_COMPONENT
+ACPI_MODULE_NAME ("acpi_processor")
+
+#define ACPI_PROCESSOR_FILE_POWER "power"
+
+#define US_TO_PM_TIMER_TICKS(t) ((t * (PM_TIMER_FREQUENCY/1000)) / 1000)
+#define C2_OVERHEAD 4 /* 1us (3.579 ticks per us) */
+#define C3_OVERHEAD 4 /* 1us (3.579 ticks per us) */
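+
+/*
+ * Worked example for the macro above (illustrative only, assuming
+ * PM_TIMER_FREQUENCY is the standard ACPI PM timer rate of 3579545 Hz):
+ * US_TO_PM_TIMER_TICKS(100) = (100 * (3579545/1000)) / 1000 = 357 ticks,
+ * i.e. roughly 3.579 PM timer ticks per microsecond, which is why the
+ * ~1us C2/C3 overhead above is rounded up to 4 ticks.
+ */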
+
+static void (*pm_idle_save)(void);
+module_param(max_cstate, uint, 0644);
+
+static unsigned int nocst = 0;
+module_param(nocst, uint, 0000);
+
+/*
+ * bm_history -- bit-mask with a bit per jiffy of bus-master activity
+ * 1000 HZ: 0xFFFFFFFF: 32 jiffies = 32ms
+ * 800 HZ: 0xFFFFFFFF: 32 jiffies = 40ms
+ * 100 HZ: 0x0000000F: 4 jiffies = 40ms
+ * reduce history for more aggressive entry into C3
+ */
+static unsigned int bm_history = (HZ >= 800 ? 0xFFFFFFFF : ((1U << (HZ / 25)) - 1));
+module_param(bm_history, uint, 0644);
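+
+/*
+ * Worked example for the bm_history default above (illustrative only):
+ * with HZ = 100, (1U << (100 / 25)) - 1 = (1U << 4) - 1 = 0x0000000F, so
+ * the last 4 jiffies (40ms) of bus-master activity are remembered; with
+ * HZ >= 800 the full 32-bit mask 0xFFFFFFFF is used, i.e. 32 jiffies.
+ */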
+/* --------------------------------------------------------------------------
+ Power Management
+ -------------------------------------------------------------------------- */
+
+/*
+ * IBM ThinkPad R40e crashes mysteriously when going into C2 or C3.
+ * For now disable this. Probably a bug somewhere else.
+ *
+ * To skip this limit, boot/load with a large max_cstate limit.
+ */
+static int no_c2c3(struct dmi_system_id *id)
+{
+ if (max_cstate > ACPI_PROCESSOR_MAX_POWER)
+ return 0;
+
+ printk(KERN_NOTICE PREFIX "%s detected - C2,C3 disabled."
+ " Override with \"processor.max_cstate=%d\"\n", id->ident,
+ ACPI_PROCESSOR_MAX_POWER + 1);
+
+ max_cstate = 1;
+
+ return 0;
+}
+
+
+
+
+static struct dmi_system_id __initdata processor_power_dmi_table[] = {
+ { no_c2c3, "IBM ThinkPad R40e", {
+ DMI_MATCH(DMI_BIOS_VENDOR,"IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION,"1SET60WW") }},
+ { no_c2c3, "Medion 41700", {
+ DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION,"R01-A1J") }},
+ {},
+};
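+
+/*
+ * Usage sketch (hypothetical entry, not part of the table above): another
+ * machine would be blacklisted by adding one more element before the empty
+ * terminator, e.g.
+ *
+ *	{ no_c2c3, "Some Vendor Laptop", {
+ *	  DMI_MATCH(DMI_BIOS_VENDOR, "Some Vendor"),
+ *	  DMI_MATCH(DMI_BIOS_VERSION, "1.00") }},
+ */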
+
+
+static inline u32
+ticks_elapsed (
+ u32 t1,
+ u32 t2)
+{
+ if (t2 >= t1)
+ return (t2 - t1);
+ else if (!acpi_fadt.tmr_val_ext)
+ return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
+ else
+ return ((0xFFFFFFFF - t1) + t2);
+}
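+
+/*
+ * Wraparound example (illustrative only): with a 24-bit PM timer
+ * (acpi_fadt.tmr_val_ext == 0), t1 = 0x00FFFFF0 and t2 = 0x00000010 give
+ * ((0x00FFFFFF - 0x00FFFFF0) + 0x10) & 0x00FFFFFF = 0x1F = 31 ticks,
+ * whereas a naive t2 - t1 would underflow to a huge unsigned value.
+ */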
+
+
+static void
+acpi_processor_power_activate (
+ struct acpi_processor *pr,
+ struct acpi_processor_cx *new)
+{
+ struct acpi_processor_cx *old;
+
+ if (!pr || !new)
+ return;
+
+ old = pr->power.state;
+
+ if (old)
+ old->promotion.count = 0;
+ new->demotion.count = 0;
+
+ /* Cleanup from old state. */
+ if (old) {
+ switch (old->type) {
+ case ACPI_STATE_C3:
+ /* Disable bus master reload */
+ if (new->type != ACPI_STATE_C3)
+ acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0, ACPI_MTX_DO_NOT_LOCK);
+ break;
+ }
+ }
+
+ /* Prepare to use new state. */
+ switch (new->type) {
+ case ACPI_STATE_C3:
+ /* Enable bus master reload */
+ if (old->type != ACPI_STATE_C3)
+ acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 1, ACPI_MTX_DO_NOT_LOCK);
+ break;
+ }
+
+ pr->power.state = new;
+
+ return;
+}
+
+
+static void acpi_processor_idle (void)
+{
+ struct acpi_processor *pr = NULL;
+ struct acpi_processor_cx *cx = NULL;
+ struct acpi_processor_cx *next_state = NULL;
+ int sleep_ticks = 0;
+ u32 t1, t2 = 0;
+
+ pr = processors[_smp_processor_id()];
+ if (!pr)
+ return;
+
+ /*
+ * Interrupts must be disabled during bus mastering calculations and
+ * for C2/C3 transitions.
+ */
+ local_irq_disable();
+
+ /*
+ * Check whether we truly need to go idle, or should
+ * reschedule:
+ */
+ if (unlikely(need_resched())) {
+ local_irq_enable();
+ return;
+ }
+
+ cx = pr->power.state;
+ if (!cx)
+ goto easy_out;
+
+ /*
+ * Check BM Activity
+ * -----------------
+ * Check for bus mastering activity (if required), record, and check
+ * for demotion.
+ */
+ if (pr->flags.bm_check) {
+ u32 bm_status = 0;
+ unsigned long diff = jiffies - pr->power.bm_check_timestamp;
+
+ if (diff > 32)
+ diff = 32;
+
+ while (diff) {
+ /* if we didn't get called, assume there was busmaster activity */
+ diff--;
+ if (diff)
+ pr->power.bm_activity |= 0x1;
+ pr->power.bm_activity <<= 1;
+ }
+
+ acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS,
+ &bm_status, ACPI_MTX_DO_NOT_LOCK);
+ if (bm_status) {
+ pr->power.bm_activity++;
+ acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS,
+ 1, ACPI_MTX_DO_NOT_LOCK);
+ }
+ /*
+ * PIIX4 Erratum #18: Note that BM_STS doesn't always reflect
+ * the true state of bus mastering activity; forcing us to
+ * manually check the BMIDEA bit of each IDE channel.
+ */
+ else if (errata.piix4.bmisx) {
+ if ((inb_p(errata.piix4.bmisx + 0x02) & 0x01)
+ || (inb_p(errata.piix4.bmisx + 0x0A) & 0x01))
+ pr->power.bm_activity++;
+ }
+
+ pr->power.bm_check_timestamp = jiffies;
+
+ /*
+ * Apply bus mastering demotion policy. Automatically demote
+ * to avoid a faulty transition. Note that the processor
+ * won't enter a low-power state during this call (to this
+	 * function) but should upon the next.
+ *
+ * TBD: A better policy might be to fallback to the demotion
+	 * state (use it for this quantum only) instead of
+ * demoting -- and rely on duration as our sole demotion
+ * qualification. This may, however, introduce DMA
+ * issues (e.g. floppy DMA transfer overrun/underrun).
+ */
+ if (pr->power.bm_activity & cx->demotion.threshold.bm) {
+ local_irq_enable();
+ next_state = cx->demotion.state;
+ goto end;
+ }
+ }
+
+ cx->usage++;
+
+ /*
+ * Sleep:
+ * ------
+ * Invoke the current Cx state to put the processor to sleep.
+ */
+ switch (cx->type) {
+
+ case ACPI_STATE_C1:
+ /*
+ * Invoke C1.
+ * Use the appropriate idle routine, the one that would
+ * be used without acpi C-states.
+ */
+ if (pm_idle_save)
+ pm_idle_save();
+ else
+ safe_halt();
+ /*
+ * TBD: Can't get time duration while in C1, as resumes
+ * go to an ISR rather than here. Need to instrument
+ * base interrupt handler.
+ */
+ sleep_ticks = 0xFFFFFFFF;
+ break;
+
+ case ACPI_STATE_C2:
+ /* Get start time (ticks) */
+ t1 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Invoke C2 */
+ inb(cx->address);
+ /* Dummy op - must do something useless after P_LVL2 read */
+ t2 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Get end time (ticks) */
+ t2 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Re-enable interrupts */
+ local_irq_enable();
+ /* Compute time (ticks) that we were actually asleep */
+ sleep_ticks = ticks_elapsed(t1, t2) - cx->latency_ticks - C2_OVERHEAD;
+ break;
+
+ case ACPI_STATE_C3:
+ /* Disable bus master arbitration */
+ acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1, ACPI_MTX_DO_NOT_LOCK);
+ /* Get start time (ticks) */
+ t1 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Invoke C3 */
+ inb(cx->address);
+ /* Dummy op - must do something useless after P_LVL3 read */
+ t2 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Get end time (ticks) */
+ t2 = inl(acpi_fadt.xpm_tmr_blk.address);
+ /* Enable bus master arbitration */
+ acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0, ACPI_MTX_DO_NOT_LOCK);
+ /* Re-enable interrupts */
+ local_irq_enable();
+ /* Compute time (ticks) that we were actually asleep */
+ sleep_ticks = ticks_elapsed(t1, t2) - cx->latency_ticks - C3_OVERHEAD;
+ break;
+
+ default:
+ local_irq_enable();
+ return;
+ }
+
+ next_state = pr->power.state;
+
+ /*
+ * Promotion?
+ * ----------
+ * Track the number of longs (time asleep is greater than threshold)
+ * and promote when the count threshold is reached. Note that bus
+ * mastering activity may prevent promotions.
+ * Do not promote above max_cstate.
+ */
+ if (cx->promotion.state &&
+ ((cx->promotion.state - pr->power.states) <= max_cstate)) {
+ if (sleep_ticks > cx->promotion.threshold.ticks) {
+ cx->promotion.count++;
+ cx->demotion.count = 0;
+ if (cx->promotion.count >= cx->promotion.threshold.count) {
+ if (pr->flags.bm_check) {
+ if (!(pr->power.bm_activity & cx->promotion.threshold.bm)) {
+ next_state = cx->promotion.state;
+ goto end;
+ }
+ }
+ else {
+ next_state = cx->promotion.state;
+ goto end;
+ }
+ }
+ }
+ }
+
+ /*
+ * Demotion?
+ * ---------
+ * Track the number of shorts (time asleep is less than time threshold)
+ * and demote when the usage threshold is reached.
+ */
+ if (cx->demotion.state) {
+ if (sleep_ticks < cx->demotion.threshold.ticks) {
+ cx->demotion.count++;
+ cx->promotion.count = 0;
+ if (cx->demotion.count >= cx->demotion.threshold.count) {
+ next_state = cx->demotion.state;
+ goto end;
+ }
+ }
+ }
+
+end:
+ /*
+ * Demote if current state exceeds max_cstate
+ */
+ if ((pr->power.state - pr->power.states) > max_cstate) {
+ if (cx->demotion.state)
+ next_state = cx->demotion.state;
+ }
+
+ /*
+ * New Cx State?
+ * -------------
+ * If we're going to start using a new Cx state we must clean up
+ * from the previous and prepare to use the new.
+ */
+ if (next_state != pr->power.state)
+ acpi_processor_power_activate(pr, next_state);
+
+ return;
+
+ easy_out:
+ /* do C1 instead of busy loop */
+ if (pm_idle_save)
+ pm_idle_save();
+ else
+ safe_halt();
+ return;
+}
+
+
+static int
+acpi_processor_set_power_policy (
+ struct acpi_processor *pr)
+{
+ unsigned int i;
+ unsigned int state_is_set = 0;
+ struct acpi_processor_cx *lower = NULL;
+ struct acpi_processor_cx *higher = NULL;
+ struct acpi_processor_cx *cx;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_set_power_policy");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ /*
+ * This function sets the default Cx state policy (OS idle handler).
+ * Our scheme is to promote quickly to C2 but more conservatively
+ * to C3. We're favoring C2 for its characteristics of low latency
+ * (quick response), good power savings, and ability to allow bus
+ * mastering activity. Note that the Cx state policy is completely
+ * customizable and can be altered dynamically.
+ */
+
+ /* startup state */
+ for (i=1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
+ cx = &pr->power.states[i];
+ if (!cx->valid)
+ continue;
+
+ if (!state_is_set)
+ pr->power.state = cx;
+ state_is_set++;
+ break;
+ }
+
+ if (!state_is_set)
+ return_VALUE(-ENODEV);
+
+ /* demotion */
+ for (i=1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
+ cx = &pr->power.states[i];
+ if (!cx->valid)
+ continue;
+
+ if (lower) {
+ cx->demotion.state = lower;
+ cx->demotion.threshold.ticks = cx->latency_ticks;
+ cx->demotion.threshold.count = 1;
+ if (cx->type == ACPI_STATE_C3)
+ cx->demotion.threshold.bm = bm_history;
+ }
+
+ lower = cx;
+ }
+
+ /* promotion */
+ for (i = (ACPI_PROCESSOR_MAX_POWER - 1); i > 0; i--) {
+ cx = &pr->power.states[i];
+ if (!cx->valid)
+ continue;
+
+ if (higher) {
+ cx->promotion.state = higher;
+ cx->promotion.threshold.ticks = cx->latency_ticks;
+ if (cx->type >= ACPI_STATE_C2)
+ cx->promotion.threshold.count = 4;
+ else
+ cx->promotion.threshold.count = 10;
+ if (higher->type == ACPI_STATE_C3)
+ cx->promotion.threshold.bm = bm_history;
+ }
+
+ higher = cx;
+ }
+
+ return_VALUE(0);
+}
+
+
+static int acpi_processor_get_power_info_fadt (struct acpi_processor *pr)
+{
+ int i;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_power_info_fadt");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (!pr->pblk)
+ return_VALUE(-ENODEV);
+
+ for (i = 0; i < ACPI_PROCESSOR_MAX_POWER; i++)
+ memset(pr->power.states, 0, sizeof(struct acpi_processor_cx));
+
+ /* if info is obtained from pblk/fadt, type equals state */
+ pr->power.states[ACPI_STATE_C1].type = ACPI_STATE_C1;
+ pr->power.states[ACPI_STATE_C2].type = ACPI_STATE_C2;
+ pr->power.states[ACPI_STATE_C3].type = ACPI_STATE_C3;
+
+ /* the C0 state only exists as a filler in our array,
+ * and all processors need to support C1 */
+ pr->power.states[ACPI_STATE_C0].valid = 1;
+ pr->power.states[ACPI_STATE_C1].valid = 1;
+
+ /* determine C2 and C3 address from pblk */
+ pr->power.states[ACPI_STATE_C2].address = pr->pblk + 4;
+ pr->power.states[ACPI_STATE_C3].address = pr->pblk + 5;
+
+ /* determine latencies from FADT */
+ pr->power.states[ACPI_STATE_C2].latency = acpi_fadt.plvl2_lat;
+ pr->power.states[ACPI_STATE_C3].latency = acpi_fadt.plvl3_lat;
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "lvl2[0x%08x] lvl3[0x%08x]\n",
+ pr->power.states[ACPI_STATE_C2].address,
+ pr->power.states[ACPI_STATE_C3].address));
+
+ return_VALUE(0);
+}
+
+
+static int acpi_processor_get_power_info_cst (struct acpi_processor *pr)
+{
+ acpi_status status = 0;
+ acpi_integer count;
+ int i;
+ struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+ union acpi_object *cst;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_power_info_cst");
+
+ if (errata.smp)
+ return_VALUE(-ENODEV);
+
+ if (nocst)
+ return_VALUE(-ENODEV);
+
+ pr->power.count = 0;
+ for (i = 0; i < ACPI_PROCESSOR_MAX_POWER; i++)
+ memset(pr->power.states, 0, sizeof(struct acpi_processor_cx));
+
+ status = acpi_evaluate_object(pr->handle, "_CST", NULL, &buffer);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No _CST, giving up\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ cst = (union acpi_object *) buffer.pointer;
+
+ /* There must be at least 2 elements */
+ if (!cst || (cst->type != ACPI_TYPE_PACKAGE) || cst->package.count < 2) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "not enough elements in _CST\n"));
+ status = -EFAULT;
+ goto end;
+ }
+
+ count = cst->package.elements[0].integer.value;
+
+ /* Validate number of power states. */
+ if (count < 1 || count != cst->package.count - 1) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "count given by _CST is not valid\n"));
+ status = -EFAULT;
+ goto end;
+ }
+
+ /* We support up to ACPI_PROCESSOR_MAX_POWER. */
+ if (count > ACPI_PROCESSOR_MAX_POWER) {
+ printk(KERN_WARNING "Limiting number of power states to max (%d)\n", ACPI_PROCESSOR_MAX_POWER);
+ printk(KERN_WARNING "Please increase ACPI_PROCESSOR_MAX_POWER if needed.\n");
+ count = ACPI_PROCESSOR_MAX_POWER;
+ }
+
+ /* Tell driver that at least _CST is supported. */
+ pr->flags.has_cst = 1;
+
+ for (i = 1; i <= count; i++) {
+ union acpi_object *element;
+ union acpi_object *obj;
+ struct acpi_power_register *reg;
+ struct acpi_processor_cx cx;
+
+ memset(&cx, 0, sizeof(cx));
+
+ element = (union acpi_object *) &(cst->package.elements[i]);
+ if (element->type != ACPI_TYPE_PACKAGE)
+ continue;
+
+ if (element->package.count != 4)
+ continue;
+
+ obj = (union acpi_object *) &(element->package.elements[0]);
+
+ if (obj->type != ACPI_TYPE_BUFFER)
+ continue;
+
+ reg = (struct acpi_power_register *) obj->buffer.pointer;
+
+ if (reg->space_id != ACPI_ADR_SPACE_SYSTEM_IO &&
+ (reg->space_id != ACPI_ADR_SPACE_FIXED_HARDWARE))
+ continue;
+
+ cx.address = (reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) ?
+ 0 : reg->address;
+
+ /* There should be an easy way to extract an integer... */
+ obj = (union acpi_object *) &(element->package.elements[1]);
+ if (obj->type != ACPI_TYPE_INTEGER)
+ continue;
+
+ cx.type = obj->integer.value;
+
+ if ((cx.type != ACPI_STATE_C1) &&
+ (reg->space_id != ACPI_ADR_SPACE_SYSTEM_IO))
+ continue;
+
+ if ((cx.type < ACPI_STATE_C1) ||
+ (cx.type > ACPI_STATE_C3))
+ continue;
+
+ obj = (union acpi_object *) &(element->package.elements[2]);
+ if (obj->type != ACPI_TYPE_INTEGER)
+ continue;
+
+ cx.latency = obj->integer.value;
+
+ obj = (union acpi_object *) &(element->package.elements[3]);
+ if (obj->type != ACPI_TYPE_INTEGER)
+ continue;
+
+ cx.power = obj->integer.value;
+
+ (pr->power.count)++;
+ memcpy(&(pr->power.states[pr->power.count]), &cx, sizeof(cx));
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d power states\n", pr->power.count));
+
+ /* Validate number of power states discovered */
+ if (pr->power.count < 2)
+ status = -ENODEV;
+
+end:
+ acpi_os_free(buffer.pointer);
+
+ return_VALUE(status);
+}
+
+
+static void acpi_processor_power_verify_c2(struct acpi_processor_cx *cx)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_get_power_verify_c2");
+
+ if (!cx->address)
+ return_VOID;
+
+ /*
+ * C2 latency must be less than or equal to 100
+ * microseconds.
+ */
+ else if (cx->latency > ACPI_PROCESSOR_MAX_C2_LATENCY) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "latency too large [%d]\n",
+ cx->latency));
+ return_VOID;
+ }
+
+ /* We're (currently) only supporting C2 on UP */
+ else if (errata.smp) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "C2 not supported in SMP mode\n"));
+ return_VOID;
+ }
+
+ /*
+ * Otherwise we've met all of our C2 requirements.
+	 * Normalize the C2 latency to expedite policy
+ */
+ cx->valid = 1;
+ cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency);
+
+ return_VOID;
+}
+
+
+static void acpi_processor_power_verify_c3(
+ struct acpi_processor *pr,
+ struct acpi_processor_cx *cx)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_get_power_verify_c3");
+
+ if (!cx->address)
+ return_VOID;
+
+ /*
+ * C3 latency must be less than or equal to 1000
+ * microseconds.
+ */
+ else if (cx->latency > ACPI_PROCESSOR_MAX_C3_LATENCY) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "latency too large [%d]\n",
+ cx->latency));
+ return_VOID;
+ }
+
+ /* bus mastering control is necessary */
+ else if (!pr->flags.bm_control) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "C3 support requires bus mastering control\n"));
+ return_VOID;
+ }
+
+	/* We're (currently) only supporting C3 on UP */
+ else if (errata.smp) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "C3 not supported in SMP mode\n"));
+ return_VOID;
+ }
+
+ /*
+ * PIIX4 Erratum #18: We don't support C3 when Type-F (fast)
+ * DMA transfers are used by any ISA device to avoid livelock.
+ * Note that we could disable Type-F DMA (as recommended by
+ * the erratum), but this is known to disrupt certain ISA
+ * devices thus we take the conservative approach.
+ */
+ else if (errata.piix4.fdma) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "C3 not supported on PIIX4 with Type-F DMA\n"));
+ return_VOID;
+ }
+
+ /*
+ * Otherwise we've met all of our C3 requirements.
+	 * Normalize the C3 latency to expedite policy. Enable
+ * checking of bus mastering status (bm_check) so we can
+ * use this in our C3 policy
+ */
+ cx->valid = 1;
+ cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency);
+ pr->flags.bm_check = 1;
+
+ return_VOID;
+}
+
+
+static int acpi_processor_power_verify(struct acpi_processor *pr)
+{
+ unsigned int i;
+ unsigned int working = 0;
+
+ for (i=1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
+ struct acpi_processor_cx *cx = &pr->power.states[i];
+
+ switch (cx->type) {
+ case ACPI_STATE_C1:
+ cx->valid = 1;
+ break;
+
+ case ACPI_STATE_C2:
+ acpi_processor_power_verify_c2(cx);
+ break;
+
+ case ACPI_STATE_C3:
+ acpi_processor_power_verify_c3(pr, cx);
+ break;
+ }
+
+ if (cx->valid)
+ working++;
+ }
+
+ return (working);
+}
+
+static int acpi_processor_get_power_info (
+ struct acpi_processor *pr)
+{
+ unsigned int i;
+ int result;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_power_info");
+
+ /* NOTE: the idle thread may not be running while calling
+ * this function */
+
+ result = acpi_processor_get_power_info_cst(pr);
+ if ((result) || (acpi_processor_power_verify(pr) < 2)) {
+ result = acpi_processor_get_power_info_fadt(pr);
+ if (result)
+ return_VALUE(result);
+
+ if (acpi_processor_power_verify(pr) < 2)
+ return_VALUE(-ENODEV);
+ }
+
+ /*
+ * Set Default Policy
+ * ------------------
+ * Now that we know which states are supported, set the default
+ * policy. Note that this policy can be changed dynamically
+ * (e.g. encourage deeper sleeps to conserve battery life when
+ * not on AC).
+ */
+ result = acpi_processor_set_power_policy(pr);
+ if (result)
+ return_VALUE(result);
+
+ /*
+ * if one state of type C2 or C3 is available, mark this
+ * CPU as being "idle manageable"
+ */
+ for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
+ if (pr->power.states[i].valid)
+ pr->power.count = i;
+ if ((pr->power.states[i].valid) &&
+ (pr->power.states[i].type >= ACPI_STATE_C2))
+ pr->flags.power = 1;
+ }
+
+ return_VALUE(0);
+}
+
+int acpi_processor_cst_has_changed (struct acpi_processor *pr)
+{
+ int result = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_cst_has_changed");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (errata.smp || nocst) {
+ return_VALUE(-ENODEV);
+ }
+
+ if (!pr->flags.power_setup_done)
+ return_VALUE(-ENODEV);
+
+ /* Fall back to the default idle loop */
+ pm_idle = pm_idle_save;
+ synchronize_kernel();
+
+ pr->flags.power = 0;
+ result = acpi_processor_get_power_info(pr);
+ if ((pr->flags.power == 1) && (pr->flags.power_setup_done))
+ pm_idle = acpi_processor_idle;
+
+ return_VALUE(result);
+}
+
+/* proc interface */
+
+static int acpi_processor_power_seq_show(struct seq_file *seq, void *offset)
+{
+ struct acpi_processor *pr = (struct acpi_processor *)seq->private;
+ unsigned int i;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_power_seq_show");
+
+ if (!pr)
+ goto end;
+
+ seq_printf(seq, "active state: C%zd\n"
+ "max_cstate: C%d\n"
+ "bus master activity: %08x\n",
+ pr->power.state ? pr->power.state - pr->power.states : 0,
+ max_cstate,
+ (unsigned)pr->power.bm_activity);
+
+ seq_puts(seq, "states:\n");
+
+ for (i = 1; i <= pr->power.count; i++) {
+ seq_printf(seq, " %cC%d: ",
+ (&pr->power.states[i] == pr->power.state?'*':' '), i);
+
+ if (!pr->power.states[i].valid) {
+ seq_puts(seq, "<not supported>\n");
+ continue;
+ }
+
+ switch (pr->power.states[i].type) {
+ case ACPI_STATE_C1:
+ seq_printf(seq, "type[C1] ");
+ break;
+ case ACPI_STATE_C2:
+ seq_printf(seq, "type[C2] ");
+ break;
+ case ACPI_STATE_C3:
+ seq_printf(seq, "type[C3] ");
+ break;
+ default:
+ seq_printf(seq, "type[--] ");
+ break;
+ }
+
+ if (pr->power.states[i].promotion.state)
+ seq_printf(seq, "promotion[C%zd] ",
+ (pr->power.states[i].promotion.state -
+ pr->power.states));
+ else
+ seq_puts(seq, "promotion[--] ");
+
+ if (pr->power.states[i].demotion.state)
+ seq_printf(seq, "demotion[C%zd] ",
+ (pr->power.states[i].demotion.state -
+ pr->power.states));
+ else
+ seq_puts(seq, "demotion[--] ");
+
+ seq_printf(seq, "latency[%03d] usage[%08d]\n",
+ pr->power.states[i].latency,
+ pr->power.states[i].usage);
+ }
+
+end:
+ return_VALUE(0);
+}
+
+static int acpi_processor_power_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, acpi_processor_power_seq_show,
+ PDE(inode)->data);
+}
+
+static struct file_operations acpi_processor_power_fops = {
+ .open = acpi_processor_power_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+
+int acpi_processor_power_init(struct acpi_processor *pr, struct acpi_device *device)
+{
+ acpi_status status = 0;
+ static int first_run = 0;
+ struct proc_dir_entry *entry = NULL;
+ unsigned int i;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_power_init");
+
+ if (!first_run) {
+ dmi_check_system(processor_power_dmi_table);
+ if (max_cstate < ACPI_C_STATES_MAX)
+ printk(KERN_NOTICE "ACPI: processor limited to max C-state %d\n", max_cstate);
+ first_run++;
+ }
+
+ if (!errata.smp && (pr->id == 0) && acpi_fadt.cst_cnt && !nocst) {
+ status = acpi_os_write_port(acpi_fadt.smi_cmd, acpi_fadt.cst_cnt, 8);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Notifying BIOS of _CST ability failed\n"));
+ }
+ }
+
+ acpi_processor_get_power_info(pr);
+
+ /*
+ * Install the idle handler if processor power management is supported.
+	 * Note that the previously set idle handler will be used on
+ * platforms that only support C1.
+ */
+ if ((pr->flags.power) && (!boot_option_idle_override)) {
+ printk(KERN_INFO PREFIX "CPU%d (power states:", pr->id);
+ for (i = 1; i <= pr->power.count; i++)
+ if (pr->power.states[i].valid)
+ printk(" C%d[C%d]", i, pr->power.states[i].type);
+ printk(")\n");
+
+ if (pr->id == 0) {
+ pm_idle_save = pm_idle;
+ pm_idle = acpi_processor_idle;
+ }
+ }
+
+ /* 'power' [R] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_POWER,
+ S_IRUGO, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_POWER));
+ else {
+ entry->proc_fops = &acpi_processor_power_fops;
+ entry->data = acpi_driver_data(device);
+ entry->owner = THIS_MODULE;
+ }
+
+ pr->flags.power_setup_done = 1;
+
+ return_VALUE(0);
+}
+
+int acpi_processor_power_exit(struct acpi_processor *pr, struct acpi_device *device)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_power_exit");
+
+ pr->flags.power_setup_done = 0;
+
+ if (acpi_device_dir(device))
+ remove_proc_entry(ACPI_PROCESSOR_FILE_POWER,acpi_device_dir(device));
+
+ /* Unregister the idle handler when processor #0 is removed. */
+ if (pr->id == 0) {
+ pm_idle = pm_idle_save;
+
+ /*
+ * We are about to unload the current idle thread pm callback
+ * (pm_idle), Wait for all processors to update cached/local
+ * copies of pm_idle before proceeding.
+ */
+ cpu_idle_wait();
+ }
+
+ return_VALUE(0);
+}
--- /dev/null
+/*
+ * processor_perflib.c - ACPI Processor P-States Library ($Revision: 71 $)
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2004 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ * - Added processor hotplug support
+ *
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+
+#ifdef CONFIG_X86_ACPI_CPUFREQ_PROC_INTF
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <asm/uaccess.h>
+#endif
+
+#include <acpi/acpi_bus.h>
+#include <acpi/processor.h>
+
+
+#define ACPI_PROCESSOR_COMPONENT 0x01000000
+#define ACPI_PROCESSOR_CLASS "processor"
+#define ACPI_PROCESSOR_DRIVER_NAME "ACPI Processor Driver"
+#define ACPI_PROCESSOR_FILE_PERFORMANCE "performance"
+#define _COMPONENT ACPI_PROCESSOR_COMPONENT
+ACPI_MODULE_NAME ("acpi_processor")
+
+
+static DECLARE_MUTEX(performance_sem);
+
+/*
+ * _PPC support is implemented as a CPUfreq policy notifier:
+ * This means that each time a CPUfreq driver that is also registered
+ * with the ACPI core is asked to change its speed policy, the maximum
+ * value is adjusted so that it stays within the platform limit.
+ *
+ * Also, when a new platform limit value is detected, the CPUfreq
+ * policy is adjusted accordingly.
+ */
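+
+/*
+ * Illustrative example (hypothetical values): if _PPC evaluates to 1 and the
+ * _PSS entry for P1 reports a core_frequency of 1600 MHz, the notifier below
+ * clamps policy->max to 1600000 kHz.
+ */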
+
+#define PPC_REGISTERED 1
+#define PPC_IN_USE 2
+
+static int acpi_processor_ppc_status = 0;
+
+static int acpi_processor_ppc_notifier(struct notifier_block *nb,
+ unsigned long event,
+ void *data)
+{
+ struct cpufreq_policy *policy = data;
+ struct acpi_processor *pr;
+ unsigned int ppc = 0;
+
+ down(&performance_sem);
+
+ if (event != CPUFREQ_INCOMPATIBLE)
+ goto out;
+
+ pr = processors[policy->cpu];
+ if (!pr || !pr->performance)
+ goto out;
+
+ ppc = (unsigned int) pr->performance_platform_limit;
+ if (!ppc)
+ goto out;
+
+ if (ppc > pr->performance->state_count)
+ goto out;
+
+ cpufreq_verify_within_limits(policy, 0,
+ pr->performance->states[ppc].core_frequency * 1000);
+
+ out:
+ up(&performance_sem);
+
+ return 0;
+}
+
+
+static struct notifier_block acpi_ppc_notifier_block = {
+ .notifier_call = acpi_processor_ppc_notifier,
+};
+
+
+static int
+acpi_processor_get_platform_limit (
+ struct acpi_processor* pr)
+{
+ acpi_status status = 0;
+ unsigned long ppc = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_platform_limit");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ /*
+ * _PPC indicates the maximum state currently supported by the platform
+	 * (e.g. 0 = states 0..n; 1 = states 1..n; etc.)
+ */
+ status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc);
+
+ if (status != AE_NOT_FOUND)
+ acpi_processor_ppc_status |= PPC_IN_USE;
+
+ if(ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PPC\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ pr->performance_platform_limit = (int) ppc;
+
+ return_VALUE(0);
+}
+
+
+int acpi_processor_ppc_has_changed(
+ struct acpi_processor *pr)
+{
+ int ret = acpi_processor_get_platform_limit(pr);
+ if (ret < 0)
+ return (ret);
+ else
+ return cpufreq_update_policy(pr->id);
+}
+
+
+void acpi_processor_ppc_init(void) {
+ if (!cpufreq_register_notifier(&acpi_ppc_notifier_block, CPUFREQ_POLICY_NOTIFIER))
+ acpi_processor_ppc_status |= PPC_REGISTERED;
+ else
+ printk(KERN_DEBUG "Warning: Processor Platform Limit not supported.\n");
+}
+
+
+void acpi_processor_ppc_exit(void) {
+ if (acpi_processor_ppc_status & PPC_REGISTERED)
+ cpufreq_unregister_notifier(&acpi_ppc_notifier_block, CPUFREQ_POLICY_NOTIFIER);
+
+ acpi_processor_ppc_status &= ~PPC_REGISTERED;
+}
+
+/*
+ * When a cpufreq driver is registered with this ACPI processor driver, the
+ * _PCT and _PSS structures are read out and written into struct
+ * acpi_processor_performance.
+ */
+static int acpi_processor_set_pdc (struct acpi_processor *pr)
+{
+ acpi_status status = AE_OK;
+ u32 arg0_buf[3];
+ union acpi_object arg0 = {ACPI_TYPE_BUFFER};
+ struct acpi_object_list no_object = {1, &arg0};
+ struct acpi_object_list *pdc;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_set_pdc");
+
+ arg0.buffer.length = 12;
+ arg0.buffer.pointer = (u8 *) arg0_buf;
+ arg0_buf[0] = ACPI_PDC_REVISION_ID;
+ arg0_buf[1] = 0;
+ arg0_buf[2] = 0;
+
+ pdc = (pr->performance->pdc) ? pr->performance->pdc : &no_object;
+
+ status = acpi_evaluate_object(pr->handle, "_PDC", pdc, NULL);
+
+ if ((ACPI_FAILURE(status)) && (pr->performance->pdc))
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Error evaluating _PDC, using legacy perf. control...\n"));
+
+ return_VALUE(status);
+}
+
+
+static int
+acpi_processor_get_performance_control (
+ struct acpi_processor *pr)
+{
+ int result = 0;
+ acpi_status status = 0;
+ struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+ union acpi_object *pct = NULL;
+ union acpi_object obj = {0};
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_performance_control");
+
+ status = acpi_evaluate_object(pr->handle, "_PCT", NULL, &buffer);
+ if(ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PCT\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ pct = (union acpi_object *) buffer.pointer;
+ if (!pct || (pct->type != ACPI_TYPE_PACKAGE)
+ || (pct->package.count != 2)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _PCT data\n"));
+ result = -EFAULT;
+ goto end;
+ }
+
+ /*
+ * control_register
+ */
+
+ obj = pct->package.elements[0];
+
+ if ((obj.type != ACPI_TYPE_BUFFER)
+ || (obj.buffer.length < sizeof(struct acpi_pct_register))
+ || (obj.buffer.pointer == NULL)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Invalid _PCT data (control_register)\n"));
+ result = -EFAULT;
+ goto end;
+ }
+ memcpy(&pr->performance->control_register, obj.buffer.pointer, sizeof(struct acpi_pct_register));
+
+
+ /*
+ * status_register
+ */
+
+ obj = pct->package.elements[1];
+
+ if ((obj.type != ACPI_TYPE_BUFFER)
+ || (obj.buffer.length < sizeof(struct acpi_pct_register))
+ || (obj.buffer.pointer == NULL)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Invalid _PCT data (status_register)\n"));
+ result = -EFAULT;
+ goto end;
+ }
+
+ memcpy(&pr->performance->status_register, obj.buffer.pointer, sizeof(struct acpi_pct_register));
+
+end:
+ acpi_os_free(buffer.pointer);
+
+ return_VALUE(result);
+}
+
+
+static int
+acpi_processor_get_performance_states (
+ struct acpi_processor *pr)
+{
+ int result = 0;
+ acpi_status status = AE_OK;
+ struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
+ struct acpi_buffer format = {sizeof("NNNNNN"), "NNNNNN"};
+ struct acpi_buffer state = {0, NULL};
+ union acpi_object *pss = NULL;
+ int i;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_performance_states");
+
+ status = acpi_evaluate_object(pr->handle, "_PSS", NULL, &buffer);
+ if(ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Error evaluating _PSS\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ pss = (union acpi_object *) buffer.pointer;
+ if (!pss || (pss->type != ACPI_TYPE_PACKAGE)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _PSS data\n"));
+ result = -EFAULT;
+ goto end;
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d performance states\n",
+ pss->package.count));
+
+ pr->performance->state_count = pss->package.count;
+ pr->performance->states = kmalloc(sizeof(struct acpi_processor_px) * pss->package.count, GFP_KERNEL);
+ if (!pr->performance->states) {
+ result = -ENOMEM;
+ goto end;
+ }
+
+ for (i = 0; i < pr->performance->state_count; i++) {
+
+ struct acpi_processor_px *px = &(pr->performance->states[i]);
+
+ state.length = sizeof(struct acpi_processor_px);
+ state.pointer = px;
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Extracting state %d\n", i));
+
+ status = acpi_extract_package(&(pss->package.elements[i]),
+ &format, &state);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _PSS data\n"));
+ result = -EFAULT;
+ kfree(pr->performance->states);
+ goto end;
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "State [%d]: core_frequency[%d] power[%d] transition_latency[%d] bus_master_latency[%d] control[0x%x] status[0x%x]\n",
+ i,
+ (u32) px->core_frequency,
+ (u32) px->power,
+ (u32) px->transition_latency,
+ (u32) px->bus_master_latency,
+ (u32) px->control,
+ (u32) px->status));
+
+ if (!px->core_frequency) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid _PSS data: freq is zero\n"));
+ result = -EFAULT;
+ kfree(pr->performance->states);
+ goto end;
+ }
+ }
+
+end:
+ acpi_os_free(buffer.pointer);
+
+ return_VALUE(result);
+}
+
+
+static int
+acpi_processor_get_performance_info (
+ struct acpi_processor *pr)
+{
+ int result = 0;
+ acpi_status status = AE_OK;
+ acpi_handle handle = NULL;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_performance_info");
+
+ if (!pr || !pr->performance || !pr->handle)
+ return_VALUE(-EINVAL);
+
+ acpi_processor_set_pdc(pr);
+
+ status = acpi_get_handle(pr->handle, "_PCT", &handle);
+ if (ACPI_FAILURE(status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "ACPI-based processor performance control unavailable\n"));
+ return_VALUE(-ENODEV);
+ }
+
+ result = acpi_processor_get_performance_control(pr);
+ if (result)
+ return_VALUE(result);
+
+ result = acpi_processor_get_performance_states(pr);
+ if (result)
+ return_VALUE(result);
+
+ result = acpi_processor_get_platform_limit(pr);
+ if (result)
+ return_VALUE(result);
+
+ return_VALUE(0);
+}
+
+
+int acpi_processor_notify_smm(struct module *calling_module) {
+ acpi_status status;
+ static int is_done = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_notify_smm");
+
+ if (!(acpi_processor_ppc_status & PPC_REGISTERED))
+ return_VALUE(-EBUSY);
+
+ if (!try_module_get(calling_module))
+ return_VALUE(-EINVAL);
+
+	/* is_done is set to negative if an error occurred,
+	 * and to positive if _no_ error occurred, but SMM
+ * was already notified. This avoids double notification
+ * which might lead to unexpected results...
+ */
+ if (is_done > 0) {
+ module_put(calling_module);
+ return_VALUE(0);
+ }
+ else if (is_done < 0) {
+ module_put(calling_module);
+ return_VALUE(is_done);
+ }
+
+ is_done = -EIO;
+
+ /* Can't write pstate_cnt to smi_cmd if either value is zero */
+ if ((!acpi_fadt.smi_cmd) ||
+ (!acpi_fadt.pstate_cnt)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "No SMI port or pstate_cnt\n"));
+ module_put(calling_module);
+ return_VALUE(0);
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Writing pstate_cnt [0x%x] to smi_cmd [0x%x]\n", acpi_fadt.pstate_cnt, acpi_fadt.smi_cmd));
+
+	/* FADT v1 doesn't support pstate_cnt, but many BIOS vendors use
+ * it anyway, so we need to support it... */
+ if (acpi_fadt_is_v1) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Using v1.0 FADT reserved value for pstate_cnt\n"));
+ }
+
+ status = acpi_os_write_port (acpi_fadt.smi_cmd,
+ (u32) acpi_fadt.pstate_cnt, 8);
+ if (ACPI_FAILURE (status)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Failed to write pstate_cnt [0x%x] to "
+ "smi_cmd [0x%x]\n", acpi_fadt.pstate_cnt, acpi_fadt.smi_cmd));
+ module_put(calling_module);
+ return_VALUE(status);
+ }
+
+ /* Success. If there's no _PPC, we need to fear nothing, so
+ * we can allow the cpufreq driver to be rmmod'ed. */
+ is_done = 1;
+
+ if (!(acpi_processor_ppc_status & PPC_IN_USE))
+ module_put(calling_module);
+
+ return_VALUE(0);
+}
+EXPORT_SYMBOL(acpi_processor_notify_smm);
+
+
+#ifdef CONFIG_X86_ACPI_CPUFREQ_PROC_INTF
+/* /proc/acpi/processor/../performance interface (DEPRECATED) */
+
+static int acpi_processor_perf_open_fs(struct inode *inode, struct file *file);
+static struct file_operations acpi_processor_perf_fops = {
+ .open = acpi_processor_perf_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int acpi_processor_perf_seq_show(struct seq_file *seq, void *offset)
+{
+ struct acpi_processor *pr = (struct acpi_processor *)seq->private;
+ int i;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_perf_seq_show");
+
+ if (!pr)
+ goto end;
+
+ if (!pr->performance) {
+ seq_puts(seq, "<not supported>\n");
+ goto end;
+ }
+
+ seq_printf(seq, "state count: %d\n"
+ "active state: P%d\n",
+ pr->performance->state_count,
+ pr->performance->state);
+
+ seq_puts(seq, "states:\n");
+ for (i = 0; i < pr->performance->state_count; i++)
+ seq_printf(seq, " %cP%d: %d MHz, %d mW, %d uS\n",
+ (i == pr->performance->state?'*':' '), i,
+ (u32) pr->performance->states[i].core_frequency,
+ (u32) pr->performance->states[i].power,
+ (u32) pr->performance->states[i].transition_latency);
+
+end:
+ return_VALUE(0);
+}
+
+static int acpi_processor_perf_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, acpi_processor_perf_seq_show,
+ PDE(inode)->data);
+}
+
+static ssize_t
+acpi_processor_write_performance (
+ struct file *file,
+ const char __user *buffer,
+ size_t count,
+ loff_t *data)
+{
+ int result = 0;
+ struct seq_file *m = (struct seq_file *) file->private_data;
+ struct acpi_processor *pr = (struct acpi_processor *) m->private;
+ struct acpi_processor_performance *perf;
+ char state_string[12] = {'\0'};
+ unsigned int new_state = 0;
+ struct cpufreq_policy policy;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_write_performance");
+
+ if (!pr || (count > sizeof(state_string) - 1))
+ return_VALUE(-EINVAL);
+
+ perf = pr->performance;
+ if (!perf)
+ return_VALUE(-EINVAL);
+
+ if (copy_from_user(state_string, buffer, count))
+ return_VALUE(-EFAULT);
+
+ state_string[count] = '\0';
+ new_state = simple_strtoul(state_string, NULL, 0);
+
+ if (new_state >= perf->state_count)
+ return_VALUE(-EINVAL);
+
+ cpufreq_get_policy(&policy, pr->id);
+
+ policy.cpu = pr->id;
+ policy.min = perf->states[new_state].core_frequency * 1000;
+ policy.max = perf->states[new_state].core_frequency * 1000;
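+
+	/*
+	 * Example: writing "2" selects P-state 2; both policy.min and policy.max
+	 * are pinned to that state's core frequency (converted to kHz), so the
+	 * policy set below forces cpufreq to run at exactly that speed.
+	 */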
+
+ result = cpufreq_set_policy(&policy);
+ if (result)
+ return_VALUE(result);
+
+ return_VALUE(count);
+}
+
+static void
+acpi_cpufreq_add_file (
+ struct acpi_processor *pr)
+{
+ struct proc_dir_entry *entry = NULL;
+ struct acpi_device *device = NULL;
+
+	ACPI_FUNCTION_TRACE("acpi_cpufreq_add_file");
+
+ if (acpi_bus_get_device(pr->handle, &device))
+ return_VOID;
+
+ /* add file 'performance' [R/W] */
+ entry = create_proc_entry(ACPI_PROCESSOR_FILE_PERFORMANCE,
+ S_IFREG|S_IRUGO|S_IWUSR, acpi_device_dir(device));
+ if (!entry)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to create '%s' fs entry\n",
+ ACPI_PROCESSOR_FILE_PERFORMANCE));
+ else {
+ entry->proc_fops = &acpi_processor_perf_fops;
+ entry->proc_fops->write = acpi_processor_write_performance;
+ entry->data = acpi_driver_data(device);
+ entry->owner = THIS_MODULE;
+ }
+ return_VOID;
+}
+
+static void
+acpi_cpufreq_remove_file (
+ struct acpi_processor *pr)
+{
+ struct acpi_device *device = NULL;
+
+	ACPI_FUNCTION_TRACE("acpi_cpufreq_remove_file");
+
+ if (acpi_bus_get_device(pr->handle, &device))
+ return_VOID;
+
+ /* remove file 'performance' */
+ remove_proc_entry(ACPI_PROCESSOR_FILE_PERFORMANCE,
+ acpi_device_dir(device));
+
+ return_VOID;
+}
+
+#else
+static void acpi_cpufreq_add_file (struct acpi_processor *pr) { return; }
+static void acpi_cpufreq_remove_file (struct acpi_processor *pr) { return; }
+#endif /* CONFIG_X86_ACPI_CPUFREQ_PROC_INTF */
+
+
+int
+acpi_processor_register_performance (
+ struct acpi_processor_performance * performance,
+ unsigned int cpu)
+{
+ struct acpi_processor *pr;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_register_performance");
+
+ if (!(acpi_processor_ppc_status & PPC_REGISTERED))
+ return_VALUE(-EINVAL);
+
+ down(&performance_sem);
+
+ pr = processors[cpu];
+ if (!pr) {
+ up(&performance_sem);
+ return_VALUE(-ENODEV);
+ }
+
+ if (pr->performance) {
+ up(&performance_sem);
+ return_VALUE(-EBUSY);
+ }
+
+ pr->performance = performance;
+
+ if (acpi_processor_get_performance_info(pr)) {
+ pr->performance = NULL;
+ up(&performance_sem);
+ return_VALUE(-EIO);
+ }
+
+ acpi_cpufreq_add_file(pr);
+
+ up(&performance_sem);
+ return_VALUE(0);
+}
+EXPORT_SYMBOL(acpi_processor_register_performance);
+
+
+void
+acpi_processor_unregister_performance (
+ struct acpi_processor_performance * performance,
+ unsigned int cpu)
+{
+ struct acpi_processor *pr;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_unregister_performance");
+
+ down(&performance_sem);
+
+ pr = processors[cpu];
+ if (!pr) {
+ up(&performance_sem);
+ return_VOID;
+ }
+
+ kfree(pr->performance->states);
+ pr->performance = NULL;
+
+ acpi_cpufreq_remove_file(pr);
+
+ up(&performance_sem);
+
+ return_VOID;
+}
+EXPORT_SYMBOL(acpi_processor_unregister_performance);
--- /dev/null
+/*
+ * processor_thermal.c - Passive cooling submodule of the ACPI processor driver
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2004 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ * - Added processor hotplug support
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <asm/uaccess.h>
+
+#include <acpi/acpi_bus.h>
+#include <acpi/processor.h>
+#include <acpi/acpi_drivers.h>
+
+#define ACPI_PROCESSOR_COMPONENT 0x01000000
+#define ACPI_PROCESSOR_CLASS "processor"
+#define ACPI_PROCESSOR_DRIVER_NAME "ACPI Processor Driver"
+#define _COMPONENT ACPI_PROCESSOR_COMPONENT
+ACPI_MODULE_NAME ("acpi_processor")
+
+
+/* --------------------------------------------------------------------------
+ Limit Interface
+ -------------------------------------------------------------------------- */
+
+static int
+acpi_processor_apply_limit (
+ struct acpi_processor* pr)
+{
+ int result = 0;
+ u16 px = 0;
+ u16 tx = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_apply_limit");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (!pr->flags.limit)
+ return_VALUE(-ENODEV);
+
+ if (pr->flags.throttling) {
+ if (pr->limit.user.tx > tx)
+ tx = pr->limit.user.tx;
+ if (pr->limit.thermal.tx > tx)
+ tx = pr->limit.thermal.tx;
+
+ result = acpi_processor_set_throttling(pr, tx);
+ if (result)
+ goto end;
+ }
+
+ pr->limit.state.px = px;
+ pr->limit.state.tx = tx;
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Processor [%d] limit set to (P%d:T%d)\n",
+ pr->id,
+ pr->limit.state.px,
+ pr->limit.state.tx));
+
+end:
+ if (result)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Unable to set limit\n"));
+
+ return_VALUE(result);
+}
+
+
+#ifdef CONFIG_CPU_FREQ
+
+/* If a passive cooling situation is detected, primarily CPUfreq is used, as it
+ * offers (in most cases) voltage scaling in addition to frequency scaling, and
+ * thus a cubic (instead of linear) reduction of energy. Also, we allow for
+ * _any_ cpufreq driver and not only the acpi-cpufreq driver.
+ */
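+
+/*
+ * Illustrative example (hypothetical 2000 MHz CPU): each call to
+ * acpi_thermal_cpufreq_increase() below raises the reduction percentage by
+ * 20 points, capped at 60%, so the CPUFREQ_ADJUST notifier successively
+ * limits the CPU to 1600, 1200 and then 800 MHz.
+ */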
+
+static unsigned int cpufreq_thermal_reduction_pctg[NR_CPUS];
+static unsigned int acpi_thermal_cpufreq_is_init = 0;
+
+
+static int cpu_has_cpufreq(unsigned int cpu)
+{
+ struct cpufreq_policy policy;
+ if (!acpi_thermal_cpufreq_is_init)
+ return -ENODEV;
+ if (!cpufreq_get_policy(&policy, cpu))
+ return -ENODEV;
+ return 0;
+}
+
+
+static int acpi_thermal_cpufreq_increase(unsigned int cpu)
+{
+ if (!cpu_has_cpufreq(cpu))
+ return -ENODEV;
+
+ if (cpufreq_thermal_reduction_pctg[cpu] < 60) {
+ cpufreq_thermal_reduction_pctg[cpu] += 20;
+ cpufreq_update_policy(cpu);
+ return 0;
+ }
+
+ return -ERANGE;
+}
+
+
+static int acpi_thermal_cpufreq_decrease(unsigned int cpu)
+{
+ if (!cpu_has_cpufreq(cpu))
+ return -ENODEV;
+
+ if (cpufreq_thermal_reduction_pctg[cpu] >= 20) {
+ cpufreq_thermal_reduction_pctg[cpu] -= 20;
+ cpufreq_update_policy(cpu);
+ return 0;
+ }
+
+ return -ERANGE;
+}
+
+
+static int acpi_thermal_cpufreq_notifier(
+ struct notifier_block *nb,
+ unsigned long event,
+ void *data)
+{
+ struct cpufreq_policy *policy = data;
+ unsigned long max_freq = 0;
+
+ if (event != CPUFREQ_ADJUST)
+ goto out;
+
+ max_freq = (policy->cpuinfo.max_freq * (100 - cpufreq_thermal_reduction_pctg[policy->cpu])) / 100;
+
+ cpufreq_verify_within_limits(policy, 0, max_freq);
+
+ out:
+ return 0;
+}
+
+
+static struct notifier_block acpi_thermal_cpufreq_notifier_block = {
+ .notifier_call = acpi_thermal_cpufreq_notifier,
+};
+
+
+void acpi_thermal_cpufreq_init(void) {
+ int i;
+
+ for (i=0; i<NR_CPUS; i++)
+ cpufreq_thermal_reduction_pctg[i] = 0;
+
+ i = cpufreq_register_notifier(&acpi_thermal_cpufreq_notifier_block, CPUFREQ_POLICY_NOTIFIER);
+ if (!i)
+ acpi_thermal_cpufreq_is_init = 1;
+}
+
+void acpi_thermal_cpufreq_exit(void) {
+ if (acpi_thermal_cpufreq_is_init)
+ cpufreq_unregister_notifier(&acpi_thermal_cpufreq_notifier_block, CPUFREQ_POLICY_NOTIFIER);
+
+ acpi_thermal_cpufreq_is_init = 0;
+}
+
+#else /* ! CONFIG_CPU_FREQ */
+
+static int acpi_thermal_cpufreq_increase(unsigned int cpu) { return -ENODEV; }
+static int acpi_thermal_cpufreq_decrease(unsigned int cpu) { return -ENODEV; }
+
+
+#endif
+
+
+int
+acpi_processor_set_thermal_limit (
+ acpi_handle handle,
+ int type)
+{
+ int result = 0;
+ struct acpi_processor *pr = NULL;
+ struct acpi_device *device = NULL;
+ int tx = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_set_thermal_limit");
+
+ if ((type < ACPI_PROCESSOR_LIMIT_NONE)
+ || (type > ACPI_PROCESSOR_LIMIT_DECREMENT))
+ return_VALUE(-EINVAL);
+
+ result = acpi_bus_get_device(handle, &device);
+ if (result)
+ return_VALUE(result);
+
+ pr = (struct acpi_processor *) acpi_driver_data(device);
+ if (!pr)
+ return_VALUE(-ENODEV);
+
+ /* Thermal limits are always relative to the current Px/Tx state. */
+ if (pr->flags.throttling)
+ pr->limit.thermal.tx = pr->throttling.state;
+
+ /*
+ * Our default policy is to only use throttling at the lowest
+ * performance state.
+ */
+
+ tx = pr->limit.thermal.tx;
+
+ switch (type) {
+
+ case ACPI_PROCESSOR_LIMIT_NONE:
+ do {
+ result = acpi_thermal_cpufreq_decrease(pr->id);
+ } while (!result);
+ tx = 0;
+ break;
+
+ case ACPI_PROCESSOR_LIMIT_INCREMENT:
+ /* if going up: P-states first, T-states later */
+
+ result = acpi_thermal_cpufreq_increase(pr->id);
+ if (!result)
+ goto end;
+ else if (result == -ERANGE)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "At maximum performance state\n"));
+
+ if (pr->flags.throttling) {
+ if (tx == (pr->throttling.state_count - 1))
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "At maximum throttling state\n"));
+ else
+ tx++;
+ }
+ break;
+
+ case ACPI_PROCESSOR_LIMIT_DECREMENT:
+ /* if going down: T-states first, P-states later */
+
+ if (pr->flags.throttling) {
+ if (tx == 0)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "At minimum throttling state\n"));
+ else {
+ tx--;
+ goto end;
+ }
+ }
+
+ result = acpi_thermal_cpufreq_decrease(pr->id);
+ if (result == -ERANGE)
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "At minimum performance state\n"));
+
+ break;
+ }
+
+end:
+ if (pr->flags.throttling) {
+ pr->limit.thermal.px = 0;
+ pr->limit.thermal.tx = tx;
+
+ result = acpi_processor_apply_limit(pr);
+ if (result)
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR,
+ "Unable to set thermal limit\n"));
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Thermal limit now (P%d:T%d)\n",
+ pr->limit.thermal.px,
+ pr->limit.thermal.tx));
+ } else
+ result = 0;
+
+ return_VALUE(result);
+}
+
+
+int
+acpi_processor_get_limit_info (
+ struct acpi_processor *pr)
+{
+ ACPI_FUNCTION_TRACE("acpi_processor_get_limit_info");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (pr->flags.throttling)
+ pr->flags.limit = 1;
+
+ return_VALUE(0);
+}
+
+
+/* /proc interface */
+
+static int acpi_processor_limit_seq_show(struct seq_file *seq, void *offset)
+{
+ struct acpi_processor *pr = (struct acpi_processor *)seq->private;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_limit_seq_show");
+
+ if (!pr)
+ goto end;
+
+ if (!pr->flags.limit) {
+ seq_puts(seq, "<not supported>\n");
+ goto end;
+ }
+
+ seq_printf(seq, "active limit: P%d:T%d\n"
+ "user limit: P%d:T%d\n"
+ "thermal limit: P%d:T%d\n",
+ pr->limit.state.px, pr->limit.state.tx,
+ pr->limit.user.px, pr->limit.user.tx,
+ pr->limit.thermal.px, pr->limit.thermal.tx);
+
+end:
+ return_VALUE(0);
+}
+
+int acpi_processor_limit_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, acpi_processor_limit_seq_show,
+ PDE(inode)->data);
+}
+
+ssize_t acpi_processor_write_limit (
+ struct file *file,
+ const char __user *buffer,
+ size_t count,
+ loff_t *data)
+{
+ int result = 0;
+ struct seq_file *m = (struct seq_file *)file->private_data;
+ struct acpi_processor *pr = (struct acpi_processor *)m->private;
+ char limit_string[25] = {'\0'};
+ int px = 0;
+ int tx = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_write_limit");
+
+ if (!pr || (count > sizeof(limit_string) - 1)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid argument\n"));
+ return_VALUE(-EINVAL);
+ }
+
+ if (copy_from_user(limit_string, buffer, count)) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid data\n"));
+ return_VALUE(-EFAULT);
+ }
+
+ limit_string[count] = '\0';
+
+ if (sscanf(limit_string, "%d:%d", &px, &tx) != 2) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid data format\n"));
+ return_VALUE(-EINVAL);
+ }
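+
+	/*
+	 * Example: writing the string "0:2" requests limit P0:T2. Only the
+	 * T-state part is applied below when throttling is supported; the
+	 * P-state value is parsed but otherwise ignored by this handler.
+	 */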
+
+ if (pr->flags.throttling) {
+ if ((tx < 0) || (tx > (pr->throttling.state_count - 1))) {
+ ACPI_DEBUG_PRINT((ACPI_DB_ERROR, "Invalid tx\n"));
+ return_VALUE(-EINVAL);
+ }
+ pr->limit.user.tx = tx;
+ }
+
+ result = acpi_processor_apply_limit(pr);
+
+ return_VALUE(count);
+}
+
+
+struct file_operations acpi_processor_limit_fops = {
+ .open = acpi_processor_limit_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
--- /dev/null
+/*
+ * processor_throttling.c - Throttling submodule of the ACPI processor driver
+ *
+ * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
+ * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
+ * Copyright (C) 2004 Dominik Brodowski <linux@brodo.de>
+ * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+ * - Added processor hotplug support
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/cpufreq.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+#include <acpi/acpi_bus.h>
+#include <acpi/processor.h>
+
+#define ACPI_PROCESSOR_COMPONENT 0x01000000
+#define ACPI_PROCESSOR_CLASS "processor"
+#define ACPI_PROCESSOR_DRIVER_NAME "ACPI Processor Driver"
+#define _COMPONENT ACPI_PROCESSOR_COMPONENT
+ACPI_MODULE_NAME ("acpi_processor")
+
+
+/* --------------------------------------------------------------------------
+ Throttling Control
+ -------------------------------------------------------------------------- */
+
+static int
+acpi_processor_get_throttling (
+ struct acpi_processor *pr)
+{
+ int state = 0;
+ u32 value = 0;
+ u32 duty_mask = 0;
+ u32 duty_value = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_throttling");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if (!pr->flags.throttling)
+ return_VALUE(-ENODEV);
+
+ pr->throttling.state = 0;
+
+ duty_mask = pr->throttling.state_count - 1;
+
+ duty_mask <<= pr->throttling.duty_offset;
+
+ local_irq_disable();
+
+ value = inl(pr->throttling.address);
+
+ /*
+ * Compute the current throttling state when throttling is enabled
+ * (bit 4 is on).
+ */
+ if (value & 0x10) {
+ duty_value = value & duty_mask;
+ duty_value >>= pr->throttling.duty_offset;
+
+ if (duty_value)
+ state = pr->throttling.state_count-duty_value;
+ }
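+
+	/*
+	 * Example (8 T-states, duty_offset 1): a raw value of 0x1C has bit 4 set
+	 * and duty bits 0x0C, so duty_value = 6 and the current state is
+	 * reported as T2 (8 - 6).
+	 */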
+
+ pr->throttling.state = state;
+
+ local_irq_enable();
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Throttling state is T%d (%d%% throttling applied)\n",
+ state, pr->throttling.states[state].performance));
+
+ return_VALUE(0);
+}
+
+
+int acpi_processor_set_throttling (
+ struct acpi_processor *pr,
+ int state)
+{
+ u32 value = 0;
+ u32 duty_mask = 0;
+ u32 duty_value = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_set_throttling");
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ if ((state < 0) || (state > (pr->throttling.state_count - 1)))
+ return_VALUE(-EINVAL);
+
+ if (!pr->flags.throttling)
+ return_VALUE(-ENODEV);
+
+ if (state == pr->throttling.state)
+ return_VALUE(0);
+
+ /*
+ * Calculate the duty_value and duty_mask.
+ */
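+	/*
+	 * Example (hypothetical 3-bit duty field at offset 1): with 8 T-states,
+	 * requesting state T2 gives duty_value = (8 - 2) << 1 = 0x0C and
+	 * duty_mask = ~(7 << 1) = ~0x0E, which clears the old duty bits first.
+	 */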
+ if (state) {
+ duty_value = pr->throttling.state_count - state;
+
+ duty_value <<= pr->throttling.duty_offset;
+
+ /* Used to clear all duty_value bits */
+ duty_mask = pr->throttling.state_count - 1;
+
+ duty_mask <<= acpi_fadt.duty_offset;
+ duty_mask = ~duty_mask;
+ }
+
+ local_irq_disable();
+
+ /*
+	 * Disable throttling by writing a 0 to bit 4. Note that throttling
+	 * must be turned off before the duty_value can be changed.
+ */
+ value = inl(pr->throttling.address);
+ if (value & 0x10) {
+ value &= 0xFFFFFFEF;
+ outl(value, pr->throttling.address);
+ }
+
+ /*
+ * Write the new duty_value and then enable throttling. Note
+ * that a state value of 0 leaves throttling disabled.
+ */
+ if (state) {
+ value &= duty_mask;
+ value |= duty_value;
+ outl(value, pr->throttling.address);
+
+ value |= 0x00000010;
+ outl(value, pr->throttling.address);
+ }
+
+ pr->throttling.state = state;
+
+ local_irq_enable();
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Throttling state set to T%d (%d%%)\n", state,
+ (pr->throttling.states[state].performance?pr->throttling.states[state].performance/10:0)));
+
+ return_VALUE(0);
+}
+
+
+int
+acpi_processor_get_throttling_info (
+ struct acpi_processor *pr)
+{
+ int result = 0;
+ int step = 0;
+ int i = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_get_throttling_info");
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "pblk_address[0x%08x] duty_offset[%d] duty_width[%d]\n",
+ pr->throttling.address,
+ pr->throttling.duty_offset,
+ pr->throttling.duty_width));
+
+ if (!pr)
+ return_VALUE(-EINVAL);
+
+ /* TBD: Support ACPI 2.0 objects */
+
+ if (!pr->throttling.address) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No throttling register\n"));
+ return_VALUE(0);
+ }
+ else if (!pr->throttling.duty_width) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No throttling states\n"));
+ return_VALUE(0);
+ }
+ /* TBD: Support duty_cycle values that span bit 4. */
+ else if ((pr->throttling.duty_offset
+ + pr->throttling.duty_width) > 4) {
+ ACPI_DEBUG_PRINT((ACPI_DB_WARN, "duty_cycle spans bit 4\n"));
+ return_VALUE(0);
+ }
+
+ /*
+ * PIIX4 Errata: We don't support throttling on the original PIIX4.
+ * This shouldn't be an issue as few (if any) mobile systems ever
+ * used this part.
+ */
+ if (errata.piix4.throttle) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Throttling not supported on PIIX4 A- or B-step\n"));
+ return_VALUE(0);
+ }
+
+ pr->throttling.state_count = 1 << acpi_fadt.duty_width;
+
+ /*
+ * Compute state values. Note that throttling displays a linear power/
+ * performance relationship (at 50% performance the CPU will consume
+ * 50% power). Values are in 1/10th of a percent to preserve accuracy.
+ */
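+	/*
+	 * Example: a duty_width of 3 yields 1 << 3 = 8 T-states and a step of
+	 * 1000 / 8 = 125, so states[4].performance is stored as 500 (50.0%).
+	 */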
+
+ step = (1000 / pr->throttling.state_count);
+
+ for (i=0; i<pr->throttling.state_count; i++) {
+ pr->throttling.states[i].performance = step * i;
+ pr->throttling.states[i].power = step * i;
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d throttling states\n",
+ pr->throttling.state_count));
+
+ pr->flags.throttling = 1;
+
+ /*
+ * Disable throttling (if enabled). We'll let subsequent policy (e.g.
+ * thermal) decide to lower performance if it so chooses, but for now
+ * we'll crank up the speed.
+ */
+
+ result = acpi_processor_get_throttling(pr);
+ if (result)
+ goto end;
+
+ if (pr->throttling.state) {
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Disabling throttling (was T%d)\n",
+ pr->throttling.state));
+ result = acpi_processor_set_throttling(pr, 0);
+ if (result)
+ goto end;
+ }
+
+end:
+ if (result)
+ pr->flags.throttling = 0;
+
+ return_VALUE(result);
+}
+
+
+/* proc interface */
+
+static int acpi_processor_throttling_seq_show(struct seq_file *seq, void *offset)
+{
+ struct acpi_processor *pr = (struct acpi_processor *)seq->private;
+ int i = 0;
+ int result = 0;
+
+ ACPI_FUNCTION_TRACE("acpi_processor_throttling_seq_show");
+
+ if (!pr)
+ goto end;
+
+ if (!(pr->throttling.state_count > 0)) {
+ seq_puts(seq, "<not supported>\n");
+ goto end;
+ }
+
+ result = acpi_processor_get_throttling(pr);
+
+ if (result) {
+ seq_puts(seq, "Could not determine current throttling state.\n");
+ goto end;
+ }
+
+ seq_printf(seq, "state count: %d\n"
+ "active state: T%d\n",
+ pr->throttling.state_count,
+ pr->throttling.state);
+
+ seq_puts(seq, "states:\n");
+ for (i = 0; i < pr->throttling.state_count; i++)
+ seq_printf(seq, " %cT%d: %02d%%\n",
+ (i == pr->throttling.state?'*':' '), i,
+ (pr->throttling.states[i].performance?pr->throttling.states[i].performance/10:0));
+
+end:
+ return_VALUE(0);
+}
+
+int acpi_processor_throttling_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, acpi_processor_throttling_seq_show,
+ PDE(inode)->data);
+}
+
+ssize_t acpi_processor_write_throttling (
+ struct file *file,
+ const char __user *buffer,
+ size_t count,
+ loff_t *data)
+{
+ int result = 0;
+ struct seq_file *m = (struct seq_file *)file->private_data;
+ struct acpi_processor *pr = (struct acpi_processor *)m->private;
+ char state_string[12] = {'\0'};
+
+ ACPI_FUNCTION_TRACE("acpi_processor_write_throttling");
+
+ if (!pr || (count > sizeof(state_string) - 1))
+ return_VALUE(-EINVAL);
+
+ if (copy_from_user(state_string, buffer, count))
+ return_VALUE(-EFAULT);
+
+ state_string[count] = '\0';
+
+ result = acpi_processor_set_throttling(pr,
+ simple_strtoul(state_string, NULL, 0));
+ if (result)
+ return_VALUE(result);
+
+ return_VALUE(count);
+}
+
+struct file_operations acpi_processor_throttling_fops = {
+ .open = acpi_processor_throttling_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* Execute the method, no return value
*/
status = acpi_ns_evaluate_relative ("_SRS", &info);
+ if (ACPI_SUCCESS (status)) {
+ /* Delete any return object (especially if implicit_return is enabled) */
+
+ if (info.return_object) {
+ acpi_ut_remove_reference (info.return_object);
+ }
+ }
/*
* Clean up and return the status from acpi_ns_evaluate_relative
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
{
u32 sec, min, hr;
u32 day, mo, yr;
+ unsigned char rtc_control = 0;
ACPI_FUNCTION_TRACE("acpi_system_alarm_seq_show");
sec = CMOS_READ(RTC_SECONDS_ALARM);
min = CMOS_READ(RTC_MINUTES_ALARM);
hr = CMOS_READ(RTC_HOURS_ALARM);
+ rtc_control = CMOS_READ(RTC_CONTROL);
-#if 0 /* If we ever get an FACP with proper values... */
+ /* If we ever get an FACP with proper values... */
if (acpi_gbl_FADT->day_alrm)
- day = CMOS_READ(acpi_gbl_FADT->day_alrm);
+		/* Per the ACPI spec, only the low 6 bits are significant */
+ day = CMOS_READ(acpi_gbl_FADT->day_alrm) & 0x3F;
else
day = CMOS_READ(RTC_DAY_OF_MONTH);
if (acpi_gbl_FADT->mon_alrm)
yr = CMOS_READ(acpi_gbl_FADT->century) * 100 + CMOS_READ(RTC_YEAR);
else
yr = CMOS_READ(RTC_YEAR);
-#else
- day = CMOS_READ(RTC_DAY_OF_MONTH);
- mo = CMOS_READ(RTC_MONTH);
- yr = CMOS_READ(RTC_YEAR);
-#endif
spin_unlock(&rtc_lock);
- BCD_TO_BIN(sec);
- BCD_TO_BIN(min);
- BCD_TO_BIN(hr);
- BCD_TO_BIN(day);
- BCD_TO_BIN(mo);
- BCD_TO_BIN(yr);
+ if (!(rtc_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(sec);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(hr);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(mo);
+ BCD_TO_BIN(yr);
+ }
-#if 0
/* we're trusting the FADT (see above)*/
-#else
+ if (!acpi_gbl_FADT->century)
/* If we're not trusting the FADT, we should at least make it
* right for _this_ century... ehm, what is _this_ century?
*
* s/2000/2100
*
*/
- yr += 2000;
-#endif
+ yr += 2000;
seq_printf(seq,"%4.4u-", yr);
(mo > 12) ? seq_puts(seq, "**-") : seq_printf(seq, "%2.2u-", mo);
}
spin_lock_irq(&rtc_lock);
+ /*
+	 * Disable the alarm interrupt before setting the alarm timer; otherwise
+	 * a spurious ACPI interrupt occurs when ACPI_EVENT_RTC is enabled.
+ */
+ rtc_control &= ~RTC_AIE;
+ CMOS_WRITE(rtc_control, RTC_CONTROL);
+ CMOS_READ(RTC_INTR_FLAGS);
/* write the fields the rtc knows about */
CMOS_WRITE(hr, RTC_HOURS_ALARM);
* offsets into the CMOS RAM here -- which for some reason are pointing
* to the RTC area of memory.
*/
-#if 0
if (acpi_gbl_FADT->day_alrm)
CMOS_WRITE(day, acpi_gbl_FADT->day_alrm);
if (acpi_gbl_FADT->mon_alrm)
CMOS_WRITE(mo, acpi_gbl_FADT->mon_alrm);
if (acpi_gbl_FADT->century)
CMOS_WRITE(yr/100, acpi_gbl_FADT->century);
-#endif
/* enable the rtc alarm interrupt */
- if (!(rtc_control & RTC_AIE)) {
- rtc_control |= RTC_AIE;
- CMOS_WRITE(rtc_control,RTC_CONTROL);
- CMOS_READ(RTC_INTR_FLAGS);
- }
+ rtc_control |= RTC_AIE;
+ CMOS_WRITE(rtc_control, RTC_CONTROL);
+ CMOS_READ(RTC_INTR_FLAGS);
spin_unlock_irq(&rtc_lock);
- acpi_set_register(ACPI_BITREG_RT_CLOCK_ENABLE, 1, ACPI_MTX_LOCK);
+ acpi_clear_event(ACPI_EVENT_RTC);
+ acpi_enable_event(ACPI_EVENT_RTC, 0);
*ppos += count;
char strbuf[5];
char str[5] = "";
int len = count;
+ struct acpi_device *found_dev = NULL;
if (len > 4) len = 4;
if (!strncmp(dev->pnp.bus_id, str, 4)) {
dev->wakeup.state.enabled = dev->wakeup.state.enabled ? 0:1;
+ found_dev = dev;
break;
}
}
+ if (found_dev) {
+ list_for_each_safe(node, next, &acpi_wakeup_device_list) {
+ struct acpi_device * dev = container_of(node,
+ struct acpi_device, wakeup_list);
+
+ if ((dev != found_dev) &&
+ (dev->wakeup.gpe_number == found_dev->wakeup.gpe_number) &&
+ (dev->wakeup.gpe_device == found_dev->wakeup.gpe_device)) {
+ printk(KERN_WARNING "ACPI: '%s' and '%s' have the same GPE, "
+				"can't disable/enable one separately\n",
+ dev->pnp.bus_id, found_dev->pnp.bus_id);
+ dev->wakeup.state.enabled = found_dev->wakeup.state.enabled;
+ }
+ }
+ }
spin_unlock(&acpi_device_lock);
return count;
}
};
+static u32 rtc_handler(void * context)
+{
+ acpi_clear_event(ACPI_EVENT_RTC);
+ acpi_disable_event(ACPI_EVENT_RTC, 0);
+
+ return ACPI_INTERRUPT_HANDLED;
+}
+
static int acpi_sleep_proc_init(void)
{
struct proc_dir_entry *entry = NULL;
if (entry)
entry->proc_fops = &acpi_system_wakeup_device_fops;
+ acpi_install_fixed_event_handler(ACPI_EVENT_RTC, rtc_handler, NULL);
return 0;
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_tb_get_rsdt_address (&address);
+ table_info.type = ACPI_TABLE_XSDT;
status = acpi_tb_get_table (&address, &table_info);
if (ACPI_FAILURE (status)) {
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not get the RSDT/XSDT, %s\n",
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/* Check oem_id and oem_table_id */
- if ((oem_id[0] && ACPI_STRCMP (oem_id, table->oem_id)) ||
- (oem_table_id[0] && ACPI_STRCMP (oem_table_id, table->oem_table_id))) {
+ if ((oem_id[0] && ACPI_STRNCMP (
+ oem_id, table->oem_id, sizeof (table->oem_id))) ||
+ (oem_table_id[0] && ACPI_STRNCMP (
+ oem_table_id, table->oem_table_id, sizeof (table->oem_table_id)))) {
return_ACPI_STATUS (AE_AML_NAME_NOT_FOUND);
}
status = acpi_tb_find_rsdp (&table_info, flags);
if (ACPI_FAILURE (status)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "RSDP structure not found, %s Flags=%X\n",
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "RSDP structure not found, %s Flags=%X\n",
acpi_format_exception (status), flags));
return_ACPI_STATUS (AE_NO_ACPI_TABLES);
}
u8 *start_address,
u32 length)
{
- u32 offset;
u8 *mem_rover;
+ u8 *end_address;
+ u8 checksum;
ACPI_FUNCTION_TRACE ("tb_scan_memory_for_rsdp");
- /* Search from given start addr for the requested length */
+ end_address = start_address + length;
- for (offset = 0, mem_rover = start_address;
- offset < length;
- offset += ACPI_RSDP_SCAN_STEP, mem_rover += ACPI_RSDP_SCAN_STEP) {
+ /* Search from given start address for the requested length */
+ for (mem_rover = start_address; mem_rover < end_address;
+ mem_rover += ACPI_RSDP_SCAN_STEP) {
/* The signature and checksum must both be correct */
- if (ACPI_STRNCMP ((char *) mem_rover,
- RSDP_SIG, sizeof (RSDP_SIG)-1) == 0 &&
- acpi_tb_checksum (mem_rover, ACPI_RSDP_CHECKSUM_LENGTH) == 0) {
- /* If so, we have found the RSDP */
+ if (ACPI_STRNCMP ((char *) mem_rover, RSDP_SIG, sizeof (RSDP_SIG)-1) != 0) {
+ /* No signature match, keep looking */
+
+ continue;
+ }
+
+ /* Signature matches, check the appropriate checksum */
+
+ if ((ACPI_CAST_PTR (struct rsdp_descriptor, mem_rover))->revision < 2) {
+ /* ACPI version 1.0 */
+
+ checksum = acpi_tb_checksum (mem_rover, ACPI_RSDP_CHECKSUM_LENGTH);
+ }
+ else {
+ /* Post ACPI 1.0, use extended_checksum */
+
+ checksum = acpi_tb_checksum (mem_rover, ACPI_RSDP_XCHECKSUM_LENGTH);
+ }
+
+ if (checksum == 0) {
+ /* Checksum valid, we have found a valid RSDP */
ACPI_DEBUG_PRINT ((ACPI_DB_INFO,
- "RSDP located at physical address %p\n",mem_rover));
+ "RSDP located at physical address %p\n", mem_rover));
return_PTR (mem_rover);
}
+
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO,
+ "Found an RSDP at physical address %p, but it has a bad checksum\n",
+ mem_rover));
}
/* Searched entire block, no RSDP was found */
- ACPI_DEBUG_PRINT ((ACPI_DB_INFO,"Searched entire block, no RSDP was found.\n"));
+ ACPI_DEBUG_PRINT ((ACPI_DB_INFO,
+ "Searched entire block, no valid RSDP was found.\n"));
return_PTR (NULL);
}
ACPI_EBDA_PTR_LENGTH,
(void *) &table_ptr);
if (ACPI_FAILURE (status)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not map memory at %8.8X for length %X\n",
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Could not map memory at %8.8X for length %X\n",
ACPI_EBDA_PTR_LOCATION, ACPI_EBDA_PTR_LENGTH));
return_ACPI_STATUS (status);
}
ACPI_EBDA_WINDOW_SIZE,
(void *) &table_ptr);
if (ACPI_FAILURE (status)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not map memory at %8.8X for length %X\n",
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Could not map memory at %8.8X for length %X\n",
physical_address, ACPI_EBDA_WINDOW_SIZE));
return_ACPI_STATUS (status);
}
ACPI_HI_RSDP_WINDOW_SIZE,
(void *) &table_ptr);
if (ACPI_FAILURE (status)) {
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR, "Could not map memory at %8.8X for length %X\n",
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Could not map memory at %8.8X for length %X\n",
ACPI_HI_RSDP_WINDOW_BASE, ACPI_HI_RSDP_WINDOW_SIZE));
return_ACPI_STATUS (status);
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
ACPI_FUNCTION_ENTRY ();
+ cache_info = &acpi_gbl_memory_lists[list_id];
+
+#ifdef ACPI_ENABLE_OBJECT_CACHE
+
	/* If walk cache is full, just free this walk state object */
- cache_info = &acpi_gbl_memory_lists[list_id];
if (cache_info->cache_depth >= cache_info->max_cache_depth) {
ACPI_MEM_FREE (object);
ACPI_MEM_TRACKING (cache_info->total_freed++);
(void) acpi_ut_release_mutex (ACPI_MTX_CACHES);
}
+
+#else
+
+ /* Object cache is disabled; just free the object */
+
+ ACPI_MEM_FREE (object);
+ ACPI_MEM_TRACKING (cache_info->total_freed++);
+#endif
}
cache_info = &acpi_gbl_memory_lists[list_id];
+
+#ifdef ACPI_ENABLE_OBJECT_CACHE
+
if (ACPI_FAILURE (acpi_ut_acquire_mutex (ACPI_MTX_CACHES))) {
return (NULL);
}
ACPI_MEM_TRACKING (cache_info->total_allocated++);
}
+#else
+
+ /* Object cache is disabled; just allocate the object */
+
+ object = ACPI_MEM_CALLOCATE (cache_info->object_size);
+ ACPI_MEM_TRACKING (cache_info->total_allocated++);
+#endif
+
return (object);
}
+#ifdef ACPI_ENABLE_OBJECT_CACHE
/******************************************************************************
*
* FUNCTION: acpi_ut_delete_generic_cache
cache_info->cache_depth--;
}
}
+#endif
/*******************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
break;
}
+ if ((acpi_gbl_enable_interpreter_slack) &&
+ (!expected_return_btypes)) {
+ /*
+ * We received a return object, but one was not expected. This can
+ * happen frequently if the "implicit return" feature is enabled.
+ * Just delete the return object and return AE_OK.
+ */
+ acpi_ut_remove_reference (info.return_object);
+ return_ACPI_STATUS (AE_OK);
+ }
+
/* Is the return object one of the expected types? */
if (!(expected_return_btypes & return_btype)) {
prefix_node, path, AE_TYPE);
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Type returned from %s was incorrect: %X\n",
- path, ACPI_GET_OBJECT_TYPE (info.return_object)));
+ "Type returned from %s was incorrect: %s, expected Btypes: %X\n",
+ path, acpi_ut_get_object_type_name (info.return_object),
+ expected_return_btypes));
/* On error exit, we must delete the return object */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
******************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
string++;
}
+ /* Any string left? */
+
+ if (!(*string)) {
+ goto error_exit;
+ }
+
/* Main loop: convert the string to a 64-bit integer */
while (*string) {
acpi_mutex_handle mutex_id)
{
acpi_status status;
- u32 i;
u32 this_thread_id;
this_thread_id = acpi_os_get_thread_id ();
- /*
- * Deadlock prevention. Check if this thread owns any mutexes of value
- * greater than or equal to this one. If so, the thread has violated
- * the mutex ordering rule. This indicates a coding error somewhere in
- * the ACPI subsystem code.
- */
- for (i = mutex_id; i < MAX_MUTEX; i++) {
- if (acpi_gbl_mutex_info[i].owner_id == this_thread_id) {
- if (i == mutex_id) {
+#ifdef ACPI_MUTEX_DEBUG
+ {
+ u32 i;
+ /*
+ * Mutex debug code, for internal debugging only.
+ *
+ * Deadlock prevention. Check if this thread owns any mutexes of value
+ * greater than or equal to this one. If so, the thread has violated
+ * the mutex ordering rule. This indicates a coding error somewhere in
+ * the ACPI subsystem code.
+ */
+ for (i = mutex_id; i < MAX_MUTEX; i++) {
+ if (acpi_gbl_mutex_info[i].owner_id == this_thread_id) {
+ if (i == mutex_id) {
+ ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
+ "Mutex [%s] already acquired by this thread [%X]\n",
+ acpi_ut_get_mutex_name (mutex_id), this_thread_id));
+
+ return (AE_ALREADY_ACQUIRED);
+ }
+
ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Mutex [%s] already acquired by this thread [%X]\n",
- acpi_ut_get_mutex_name (mutex_id), this_thread_id));
+ "Invalid acquire order: Thread %X owns [%s], wants [%s]\n",
+ this_thread_id, acpi_ut_get_mutex_name (i),
+ acpi_ut_get_mutex_name (mutex_id)));
- return (AE_ALREADY_ACQUIRED);
+ return (AE_ACQUIRE_DEADLOCK);
}
-
- ACPI_DEBUG_PRINT ((ACPI_DB_ERROR,
- "Invalid acquire order: Thread %X owns [%s], wants [%s]\n",
- this_thread_id, acpi_ut_get_mutex_name (i),
- acpi_ut_get_mutex_name (mutex_id)));
-
- return (AE_ACQUIRE_DEADLOCK);
}
}
+#endif
ACPI_DEBUG_PRINT ((ACPI_DB_MUTEX,
"Thread %X attempting to acquire Mutex [%s]\n",
* DESCRIPTION: Create a new state and push it
*
******************************************************************************/
-
+#ifdef ACPI_FUTURE_USAGE
acpi_status
acpi_ut_create_pkg_state_and_push (
void *internal_object,
acpi_ut_push_generic_state (state_list, state);
return (AE_OK);
}
-
+#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*
}
+#ifdef ACPI_ENABLE_OBJECT_CACHE
/*******************************************************************************
*
* FUNCTION: acpi_ut_delete_generic_state_cache
acpi_ut_delete_generic_cache (ACPI_MEM_LIST_STATE);
return_VOID;
}
+#endif
/*******************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
}
+#ifdef ACPI_ENABLE_OBJECT_CACHE
/*******************************************************************************
*
* FUNCTION: acpi_ut_delete_object_cache
acpi_ut_delete_generic_cache (ACPI_MEM_LIST_OPERAND);
return_VOID;
}
+#endif
/*******************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
ACPI_FUNCTION_TRACE ("acpi_purge_cached_objects");
+#ifdef ACPI_ENABLE_OBJECT_CACHE
acpi_ut_delete_generic_state_cache ();
acpi_ut_delete_object_cache ();
acpi_ds_delete_walk_state_cache ();
acpi_ps_delete_parse_cache ();
+#endif
return_ACPI_STATUS (AE_OK);
}
unsigned long *data)
{
acpi_status status = AE_OK;
- union acpi_object element;
- struct acpi_buffer buffer = {sizeof(union acpi_object), &element};
+ union acpi_object *element;
+ struct acpi_buffer buffer = {0,NULL};
ACPI_FUNCTION_TRACE("acpi_evaluate_integer");
if (!data)
return_ACPI_STATUS(AE_BAD_PARAMETER);
+ element = kmalloc(sizeof(union acpi_object), GFP_KERNEL);
+ if(!element)
+ return_ACPI_STATUS(AE_NO_MEMORY);
+
+ memset(element, 0, sizeof(union acpi_object));
+ buffer.length = sizeof(union acpi_object);
+ buffer.pointer = element;
status = acpi_evaluate_object(handle, pathname, arguments, &buffer);
if (ACPI_FAILURE(status)) {
acpi_util_eval_error(handle, pathname, status);
return_ACPI_STATUS(status);
}
- if (element.type != ACPI_TYPE_INTEGER) {
+ if (element->type != ACPI_TYPE_INTEGER) {
acpi_util_eval_error(handle, pathname, AE_BAD_DATA);
return_ACPI_STATUS(AE_BAD_DATA);
}
- *data = element.integer.value;
+ *data = element->integer.value;
+ kfree(element);
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Return value [%lu]\n", *data));
.release = single_release,
};
+static char device_decode[][30] = {
+ "motherboard VGA device",
+ "PCI VGA device",
+ "AGP VGA device",
+ "UNKNOWN",
+};
+
static void acpi_video_device_notify ( acpi_handle handle, u32 event, void *data);
static void acpi_video_device_rebind( struct acpi_video_bus *video);
static void acpi_video_device_bind( struct acpi_video_bus *video, struct acpi_video_device *device);
struct acpi_video_bus *video = (struct acpi_video_bus *) seq->private;
int status;
unsigned long id;
- char device_decode[][30] = {
- "motherboard VGA device",
- "PCI VGA device",
- "AGP VGA device",
- "UNKNOWN",
- };
ACPI_FUNCTION_TRACE("acpi_video_bus_POST_seq_show");
dod->package.count));
active_device_list= kmalloc(
- dod->package.count*sizeof(struct acpi_video_enumerated_device),
+ (1+dod->package.count)*sizeof(struct acpi_video_enumerated_device),
GFP_KERNEL);
if (!active_device_list) {
}
-int atmtcp_attach(struct atm_vcc *vcc,int itf)
+static int atmtcp_attach(struct atm_vcc *vcc,int itf)
{
struct atm_dev *dev;
}
-int atmtcp_create_persistent(int itf)
+static int atmtcp_create_persistent(int itf)
{
return atmtcp_create(itf,1,NULL);
}
-int atmtcp_remove_persistent(int itf)
+static int atmtcp_remove_persistent(int itf)
{
struct atm_dev *dev;
struct atmtcp_dev_data *dev_data;
#define SI_HIGH ID_DIN /* HOST_CNTL_ID_PROM_DATA_IN */
#define EEPROM_DELAY 400 /* microseconds */
-/* Read from EEPROM = 0000 0011b */
-unsigned int readtab[] = {
- CS_HIGH | CLK_HIGH,
- CS_LOW | CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW,
- CLK_HIGH, /* 0 */
- CLK_LOW | SI_HIGH,
- CLK_HIGH | SI_HIGH, /* 1 */
- CLK_LOW | SI_HIGH,
- CLK_HIGH | SI_HIGH /* 1 */
-};
-
-/* Clock to read from/write to the EEPROM */
-unsigned int clocktab[] = {
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW,
- CLK_HIGH,
- CLK_LOW
-};
-
-
#endif /* _HE_H_ */
#ifdef __KERNEL__
int idt77105_init(struct atm_dev *dev) __init;
-int idt77105_stop(struct atm_dev *dev);
#endif
/*
struct rsq_entry *next;
struct rsq_entry *last;
dma_addr_t paddr;
-} rsq_info;
+};
/*****************************************************************************/
#define FE_DS3_PHY 0x0080 /* DS3 */
#define FE_E3_PHY 0x0090 /* E3 */
-extern void ia_mb25_init (IADEV *);
-
/*********************** SUNI_PM7345 PHY DEFINE HERE *********************/
typedef struct _suni_pm7345_t
{
#define SUNI_DS3_FOVRI 0x02 /* FIFO overrun */
#define SUNI_DS3_FUDRI 0x01 /* FIFO underrun */
-extern void ia_suni_pm7345_init (IADEV *iadev);
-
///////////////////SUNI_PM7345 PHY DEFINE END /////////////////////////////
/* ia_eeprom define*/
#include <asm/uaccess.h>
#include <asm/atomic.h>
#include "nicstar.h"
-#include "nicstarmac.h"
#ifdef CONFIG_ATM_NICSTAR_USE_SUNI
#include "suni.h"
#endif /* CONFIG_ATM_NICSTAR_USE_SUNI */
* Read this ForeRunner's MAC address from eprom/eeprom
*/
+typedef void __iomem *virt_addr_t;
+
#define CYCLE_DELAY 5
/* This was the original definition
#define SI_LOW 0x0000 /* Serial input data low */
/* Read Status Register = 0000 0101b */
+#if 0
static u_int32_t rdsrtab[] =
{
CS_HIGH | CLK_HIGH,
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH /* 1 */
};
+#endif /* 0 */
/* Read from EEPROM = 0000 0011b */
* eeprom, then pull the result from bit 16 of the NicSTaR's General Purpose
* register.
*/
-
+#if 0
u_int32_t
nicstar_read_eprom_status( virt_addr_t base )
{
osp_MicroDelay( CYCLE_DELAY );
return rbyte;
}
+#endif /* 0 */
/*
}
-void
+static void
nicstar_init_eprom( virt_addr_t base )
{
u_int32_t val;
* above.
*/
-void
+static void
nicstar_read_eprom(
virt_addr_t base,
u_int8_t prom_offset,
#include <linux/atm_zatm.h>
#include <linux/capability.h>
#include <linux/bitops.h>
+#include <linux/wait.h>
#include <asm/byteorder.h>
#include <asm/system.h>
#include <asm/string.h>
struct zatm_vcc *zatm_vcc;
unsigned long flags;
int chan;
-struct sk_buff *skb;
-int once = 1;
zatm_vcc = ZATM_VCC(vcc);
zatm_dev = ZATM_DEV(vcc->dev);
chan = zatm_vcc->tx_chan;
if (!chan) return;
DPRINTK("close_tx\n");
- while (skb_peek(&zatm_vcc->backlog)) {
-if (once) {
-printk("waiting for backlog to drain ...\n");
-event_dump();
-once = 0;
-}
- sleep_on(&zatm_vcc->tx_wait);
+ if (skb_peek(&zatm_vcc->backlog)) {
+ printk("waiting for backlog to drain ...\n");
+ event_dump();
+ wait_event(zatm_vcc->tx_wait, !skb_peek(&zatm_vcc->backlog));
}
-once = 1;
- while ((skb = skb_peek(&zatm_vcc->tx_queue))) {
-if (once) {
-printk("waiting for TX queue to drain ... %p\n",skb);
-event_dump();
-once = 0;
-}
- DPRINTK("waiting for TX queue to drain ... %p\n",skb);
- sleep_on(&zatm_vcc->tx_wait);
+ if (skb_peek(&zatm_vcc->tx_queue)) {
+ printk("waiting for TX queue to drain ...\n");
+ event_dump();
+ wait_event(zatm_vcc->tx_wait, !skb_peek(&zatm_vcc->tx_queue));
}
spin_lock_irqsave(&zatm_dev->lock, flags);
#if 0
goto out_disable;
zatm_dev->pci_dev = pci_dev;
- dev = (struct atm_dev *)zatm_dev;
+ dev->dev_data = zatm_dev;
zatm_dev->copper = (int)ent->driver_data;
if ((ret = zatm_init(dev)) || (ret = zatm_start(dev)))
goto out_release;
obj-y := core.o sys.o interface.o bus.o \
driver.o class.o class_simple.o platform.o \
- cpu.o firmware.o init.o map.o dmapool.o
+ cpu.o firmware.o init.o map.o dmapool.o \
+ attribute_container.o transport_class.o
obj-y += power/
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o
--- /dev/null
+/*
+ * attribute_container.c - implementation of a simple container for classes
+ *
+ * Copyright (c) 2005 - James Bottomley <James.Bottomley@steeleye.com>
+ *
+ * This file is licensed under GPLv2
+ *
+ * The basic idea here is to enable a device to be attached to an
+ * aritrary numer of classes without having to allocate storage for them.
+ * Instead, the contained classes select the devices they need to attach
+ * to via a matching function.
+ */
+
+#include <linux/attribute_container.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/module.h>
+
+/* This is a private structure used to tie the classdev and the
+ * container .. it should never be visible outside this file */
+struct internal_container {
+ struct list_head node;
+ struct attribute_container *cont;
+ struct class_device classdev;
+};
+
+/**
+ * attribute_container_classdev_to_container - given a classdev, return the container
+ *
+ * @classdev: the class device created by attribute_container_add_device.
+ *
+ * Returns the container associated with this classdev.
+ */
+struct attribute_container *
+attribute_container_classdev_to_container(struct class_device *classdev)
+{
+ struct internal_container *ic =
+ container_of(classdev, struct internal_container, classdev);
+ return ic->cont;
+}
+EXPORT_SYMBOL_GPL(attribute_container_classdev_to_container);
+
+static struct list_head attribute_container_list;
+
+static DECLARE_MUTEX(attribute_container_mutex);
+
+/**
+ * attribute_container_register - register an attribute container
+ *
+ * @cont: The container to register. This must be allocated by the
+ * caller and should also be zeroed by it.
+ */
+int
+attribute_container_register(struct attribute_container *cont)
+{
+ INIT_LIST_HEAD(&cont->node);
+ INIT_LIST_HEAD(&cont->containers);
+
+ down(&attribute_container_mutex);
+ list_add_tail(&cont->node, &attribute_container_list);
+ up(&attribute_container_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(attribute_container_register);
+
+/**
+ * attribute_container_unregister - remove a container registration
+ *
+ * @cont: previously registered container to remove
+ */
+int
+attribute_container_unregister(struct attribute_container *cont)
+{
+ int retval = -EBUSY;
+ down(&attribute_container_mutex);
+ if (!list_empty(&cont->containers))
+ goto out;
+ retval = 0;
+ list_del(&cont->node);
+ out:
+ up(&attribute_container_mutex);
+ return retval;
+
+}
+EXPORT_SYMBOL_GPL(attribute_container_unregister);
+
+/* private function used as class release */
+static void attribute_container_release(struct class_device *classdev)
+{
+ struct internal_container *ic
+ = container_of(classdev, struct internal_container, classdev);
+ struct device *dev = classdev->dev;
+
+ kfree(ic);
+ put_device(dev);
+}
+
+/**
+ * attribute_container_add_device - see if any container is interested in dev
+ *
+ * @dev: device to add attributes to
+ * @fn: function to trigger addition of class device.
+ *
+ * This function allocates storage for the class device(s) to be
+ * attached to dev (one for each matching attribute_container). If no
+ * fn is provided, the code will simply register the class device via
+ * class_device_add. If a function is provided, it is expected to add
+ * the class device at the appropriate time. One of the things that
+ * might be necessary is to allocate and initialise the classdev and
+ * then add it at a later time. To do this, call this routine for
+ * allocation and initialisation and then use
+ * attribute_container_device_trigger() to call class_device_add() on
+ * it. Note: after this, the class device contains a reference to dev
+ * which is not relinquished until the release of the classdev.
+ */
+void
+attribute_container_add_device(struct device *dev,
+ int (*fn)(struct attribute_container *,
+ struct device *,
+ struct class_device *))
+{
+ struct attribute_container *cont;
+
+ down(&attribute_container_mutex);
+ list_for_each_entry(cont, &attribute_container_list, node) {
+ struct internal_container *ic;
+
+ if (attribute_container_no_classdevs(cont))
+ continue;
+
+ if (!cont->match(cont, dev))
+ continue;
+ ic = kmalloc(sizeof(struct internal_container), GFP_KERNEL);
+ if (!ic) {
+ dev_printk(KERN_ERR, dev, "failed to allocate class container\n");
+ continue;
+ }
+ memset(ic, 0, sizeof(struct internal_container));
+ INIT_LIST_HEAD(&ic->node);
+ ic->cont = cont;
+ class_device_initialize(&ic->classdev);
+ ic->classdev.dev = get_device(dev);
+ ic->classdev.class = cont->class;
+ cont->class->release = attribute_container_release;
+ strcpy(ic->classdev.class_id, dev->bus_id);
+ if (fn)
+ fn(cont, dev, &ic->classdev);
+ else
+ attribute_container_add_class_device(&ic->classdev);
+ list_add_tail(&ic->node, &cont->containers);
+ }
+ up(&attribute_container_mutex);
+}
+
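+/*
+ * Illustrative usage sketch, not part of this file: a hypothetical
+ * subsystem could attach classdevs to its devices roughly as follows
+ * (foo_match, foo_cont, foo_class, foo_attrs and foo_bus_type are
+ * made-up names).
+ *
+ *	static int foo_match(struct attribute_container *cont,
+ *			     struct device *dev)
+ *	{
+ *		return dev->bus == &foo_bus_type;
+ *	}
+ *
+ *	static struct attribute_container foo_cont = {
+ *		.class = &foo_class,
+ *		.attrs = foo_attrs,
+ *		.match = foo_match,
+ *	};
+ *
+ *	attribute_container_register(&foo_cont);
+ *	...
+ *	attribute_container_add_device(dev, NULL);
+ */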
+/**
+ * attribute_container_remove_device - make device eligible for removal.
+ *
+ * @dev: The generic device
+ * @fn: A function to call to remove the device
+ *
+ * This routine triggers device removal. If fn is NULL, then it is
+ * simply done via class_device_unregister (note that if something
+ * still has a reference to the classdev, then the memory occupied
+ * will not be freed until the classdev is released). If you want a
+ * two phase release: remove from visibility and then delete the
+ * device, then you should use this routine with a fn that calls
+ * class_device_del() and then use
+ * attribute_container_device_trigger() to do the final put on the
+ * classdev.
+ */
+void
+attribute_container_remove_device(struct device *dev,
+ void (*fn)(struct attribute_container *,
+ struct device *,
+ struct class_device *))
+{
+ struct attribute_container *cont;
+
+ down(&attribute_container_mutex);
+ list_for_each_entry(cont, &attribute_container_list, node) {
+ struct internal_container *ic, *tmp;
+
+ if (attribute_container_no_classdevs(cont))
+ continue;
+
+ if (!cont->match(cont, dev))
+ continue;
+ list_for_each_entry_safe(ic, tmp, &cont->containers, node) {
+ if (dev != ic->classdev.dev)
+ continue;
+ list_del(&ic->node);
+ if (fn)
+ fn(cont, dev, &ic->classdev);
+ else {
+ attribute_container_remove_attrs(&ic->classdev);
+ class_device_unregister(&ic->classdev);
+ }
+ }
+ }
+ up(&attribute_container_mutex);
+}
+EXPORT_SYMBOL_GPL(attribute_container_remove_device);
+
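+/*
+ * Illustrative sketch of the two-phase removal described above (the
+ * callback names are hypothetical): hide the classdev first, then drop
+ * the final reference through a trigger.
+ *
+ *	attribute_container_remove_device(dev, fn_calling_class_device_del);
+ *	...
+ *	attribute_container_device_trigger(dev, fn_doing_the_final_put);
+ */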
+/**
+ * attribute_container_device_trigger - execute a trigger for each matching classdev
+ *
+ * @dev: The generic device to run the trigger for
+ * @fn: the function to execute for each classdev.
+ *
+ * This function is for executing a trigger when you need to know both
+ * the container and the classdev. If you only care about the
+ * container, then use attribute_container_trigger() instead.
+ */
+void
+attribute_container_device_trigger(struct device *dev,
+ int (*fn)(struct attribute_container *,
+ struct device *,
+ struct class_device *))
+{
+ struct attribute_container *cont;
+
+ down(&attribute_container_mutex);
+ list_for_each_entry(cont, &attribute_container_list, node) {
+ struct internal_container *ic, *tmp;
+
+ if (!cont->match(cont, dev))
+ continue;
+
+ list_for_each_entry_safe(ic, tmp, &cont->containers, node) {
+ if (dev == ic->classdev.dev)
+ fn(cont, dev, &ic->classdev);
+ }
+ }
+ up(&attribute_container_mutex);
+}
+EXPORT_SYMBOL_GPL(attribute_container_device_trigger);
+
+/**
+ * attribute_container_trigger - trigger a function for each matching container
+ *
+ * @dev: The generic device to activate the trigger for
+ * @fn: the function to trigger
+ *
+ * This routine triggers a function that only needs to know the
+ * matching containers (not the classdev) associated with a device.
+ * It is more lightweight than attribute_container_device_trigger, so
+ * should be used in preference unless the triggering function
+ * actually needs to know the classdev.
+ */
+void
+attribute_container_trigger(struct device *dev,
+ int (*fn)(struct attribute_container *,
+ struct device *))
+{
+ struct attribute_container *cont;
+
+ down(&attribute_container_mutex);
+ list_for_each_entry(cont, &attribute_container_list, node) {
+ if (cont->match(cont, dev))
+ fn(cont, dev);
+ }
+ up(&attribute_container_mutex);
+}
+EXPORT_SYMBOL_GPL(attribute_container_trigger);
+
+/**
+ * attribute_container_add_attrs - add attributes
+ *
+ * @classdev: The class device
+ *
+ * This simply creates all the class device sysfs files from the
+ * attributes listed in the container
+ */
+int
+attribute_container_add_attrs(struct class_device *classdev)
+{
+ struct attribute_container *cont =
+ attribute_container_classdev_to_container(classdev);
+ struct class_device_attribute **attrs = cont->attrs;
+ int i, error;
+
+ if (!attrs)
+ return 0;
+
+ for (i = 0; attrs[i]; i++) {
+ error = class_device_create_file(classdev, attrs[i]);
+ if (error)
+ return error;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(attribute_container_add_attrs);
+
+/**
+ * attribute_container_add_class_device - same function as class_device_add
+ *
+ * @classdev: the class device to add
+ *
+ * This performs essentially the same function as class_device_add except for
+ * attribute containers, namely add the classdev to the system and then
+ * create the attribute files
+ */
+int
+attribute_container_add_class_device(struct class_device *classdev)
+{
+ int error = class_device_add(classdev);
+ if (error)
+ return error;
+ return attribute_container_add_attrs(classdev);
+}
+EXPORT_SYMBOL_GPL(attribute_container_add_class_device);
+
+/**
+ * attribute_container_add_class_device_adapter - simple adapter for triggers
+ *
+ * This function is identical to attribute_container_add_class_device except
+ * that it is designed to be called from the triggers
+ */
+int
+attribute_container_add_class_device_adapter(struct attribute_container *cont,
+ struct device *dev,
+ struct class_device *classdev)
+{
+ return attribute_container_add_class_device(classdev);
+}
+EXPORT_SYMBOL_GPL(attribute_container_add_class_device_adapter);
+
+/**
+ * attribute_container_remove_attrs - remove any attribute files
+ *
+ * @classdev: The class device to remove the files from
+ *
+ */
+void
+attribute_container_remove_attrs(struct class_device *classdev)
+{
+ struct attribute_container *cont =
+ attribute_container_classdev_to_container(classdev);
+ struct class_device_attribute **attrs = cont->attrs;
+ int i;
+
+ if (!attrs)
+ return;
+
+ for (i = 0; attrs[i]; i++)
+ class_device_remove_file(classdev, attrs[i]);
+}
+EXPORT_SYMBOL_GPL(attribute_container_remove_attrs);
+
+/**
+ * attribute_container_class_device_del - equivalent of class_device_del
+ *
+ * @classdev: the class device
+ *
+ * This function simply removes all the attribute files and then calls
+ * class_device_del.
+ */
+void
+attribute_container_class_device_del(struct class_device *classdev)
+{
+ attribute_container_remove_attrs(classdev);
+ class_device_del(classdev);
+}
+EXPORT_SYMBOL_GPL(attribute_container_class_device_del);
+
+int __init
+attribute_container_init(void)
+{
+ INIT_LIST_HEAD(&attribute_container_list);
+ return 0;
+}
static int CurrentNSect;
static char *CurrentBuffer;
-static spinlock_t acsi_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(acsi_lock);
#define SET_TIMER() mod_timer(&acsi_timer, jiffies + ACSI_TIMEOUT)
static int writefromint;
static char *raw_buf;
-static spinlock_t amiflop_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(amiflop_lock);
#define RAW_BUF_SIZE 30000 /* size of raw disk data */
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+#define VERSION "5"
+#define AOE_MAJOR 152
+#define DEVICE_NAME "aoe"
+#ifndef AOE_PARTITIONS
+#define AOE_PARTITIONS 16
+#endif
+#define SYSMINOR(aoemajor, aoeminor) ((aoemajor) * 10 + (aoeminor))
+#define AOEMAJOR(sysminor) ((sysminor) / 10)
+#define AOEMINOR(sysminor) ((sysminor) % 10)
+#define WHITESPACE " \t\v\f\n"
+
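+/*
+ * Worked example of the mapping above (illustrative): shelf 2, slot 3
+ * yields SYSMINOR(2, 3) == 23, the disk shows up as etherd/e2.3, and its
+ * first block-device minor is 23 * AOE_PARTITIONS.  The decimal encoding
+ * is why NPERSHELF (below) limits a shelf to 10 slots.
+ */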
+enum {
+ AOECMD_ATA,
+ AOECMD_CFG,
+
+ AOEFL_RSP = (1<<3),
+ AOEFL_ERR = (1<<2),
+
+ AOEAFL_EXT = (1<<6),
+ AOEAFL_DEV = (1<<4),
+ AOEAFL_ASYNC = (1<<1),
+ AOEAFL_WRITE = (1<<0),
+
+ AOECCMD_READ = 0,
+ AOECCMD_TEST,
+ AOECCMD_PTEST,
+ AOECCMD_SET,
+ AOECCMD_FSET,
+
+ AOE_HVER = 0x10,
+};
+
+struct aoe_hdr {
+ unsigned char dst[6];
+ unsigned char src[6];
+ unsigned char type[2];
+ unsigned char verfl;
+ unsigned char err;
+ unsigned char major[2];
+ unsigned char minor;
+ unsigned char cmd;
+ unsigned char tag[4];
+};
+
+struct aoe_atahdr {
+ unsigned char aflags;
+ unsigned char errfeat;
+ unsigned char scnt;
+ unsigned char cmdstat;
+ unsigned char lba0;
+ unsigned char lba1;
+ unsigned char lba2;
+ unsigned char lba3;
+ unsigned char lba4;
+ unsigned char lba5;
+ unsigned char res[2];
+};
+
+struct aoe_cfghdr {
+ unsigned char bufcnt[2];
+ unsigned char fwver[2];
+ unsigned char res;
+ unsigned char aoeccmd;
+ unsigned char cslen[2];
+};
+
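+/*
+ * All multi-byte fields above are raw byte arrays in on-the-wire
+ * (big-endian) order; callers convert with __be16_to_cpu()/__be32_to_cpu(),
+ * as done in aoecmd.c and aoenet.c.
+ */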
+enum {
+ DEVFL_UP = 1, /* device is installed in system and ready for AoE->ATA commands */
+ DEVFL_TKILL = (1<<1), /* flag for timer to know when to kill self */
+ DEVFL_EXT = (1<<2), /* device accepts lba48 commands */
+ DEVFL_CLOSEWAIT = (1<<3), /* device is waiting for all closes to revalidate */
+ DEVFL_WC_UPDATE = (1<<4), /* this device needs to update write cache status */
+	DEVFL_WORKON = (1<<5),
+
+ BUFFL_FAIL = 1,
+};
+
+enum {
+ MAXATADATA = 1024,
+ NPERSHELF = 10,
+ FREETAG = -1,
+ MIN_BUFS = 8,
+};
+
+struct buf {
+ struct list_head bufs;
+ ulong flags;
+ ulong nframesout;
+ char *bufaddr;
+ ulong resid;
+ ulong bv_resid;
+ sector_t sector;
+ struct bio *bio;
+ struct bio_vec *bv;
+};
+
+struct frame {
+ int tag;
+ ulong waited;
+ struct buf *buf;
+ char *bufaddr;
+ int writedatalen;
+ int ndata;
+
+ /* largest possible */
+ unsigned char data[sizeof(struct aoe_hdr) + sizeof(struct aoe_atahdr)];
+};
+
+struct aoedev {
+ struct aoedev *next;
+ unsigned char addr[6]; /* remote mac addr */
+ ushort flags;
+ ulong sysminor;
+ ulong aoemajor;
+ ulong aoeminor;
+ ulong nopen; /* (bd_openers isn't available without sleeping) */
+ ulong rttavg; /* round trip average of requests/responses */
+ u16 fw_ver; /* version of blade's firmware */
+ struct work_struct work;/* disk create work struct */
+ struct gendisk *gd;
+ request_queue_t blkq;
+ struct hd_geometry geo;
+ sector_t ssize;
+ struct timer_list timer;
+ spinlock_t lock;
+ struct net_device *ifp; /* interface ed is attached to */
+ struct sk_buff *skblist;/* packets needing to be sent */
+ mempool_t *bufpool; /* for deadlock-free Buf allocation */
+ struct list_head bufq; /* queue of bios to work on */
+ struct buf *inprocess; /* the one we're currently working on */
+ ulong lasttag; /* last tag sent */
+ ulong nframes; /* number of frames below */
+ struct frame *frames;
+};
+
+
+int aoeblk_init(void);
+void aoeblk_exit(void);
+void aoeblk_gdalloc(void *);
+void aoedisk_rm_sysfs(struct aoedev *d);
+
+int aoechr_init(void);
+void aoechr_exit(void);
+void aoechr_error(char *);
+void aoechr_hdump(char *, int len);
+
+void aoecmd_work(struct aoedev *d);
+void aoecmd_cfg(ushort, unsigned char);
+void aoecmd_ata_rsp(struct sk_buff *);
+void aoecmd_cfg_rsp(struct sk_buff *);
+
+int aoedev_init(void);
+void aoedev_exit(void);
+struct aoedev *aoedev_bymac(unsigned char *);
+void aoedev_downdev(struct aoedev *d);
+struct aoedev *aoedev_set(ulong, unsigned char *, struct net_device *, ulong);
+int aoedev_busy(void);
+
+int aoenet_init(void);
+void aoenet_exit(void);
+void aoenet_xmit(struct sk_buff *);
+int is_aoe_netif(struct net_device *ifp);
+int set_aoe_iflist(const char __user *str, size_t size);
+
+u64 mac_addr(char addr[6]);
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoeblk.c
+ * block device routines
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/ioctl.h>
+#include <linux/genhd.h>
+#include <linux/netdevice.h>
+#include "aoe.h"
+
+static kmem_cache_t *buf_pool_cache;
+
+/* add attributes for our block devices in sysfs */
+static ssize_t aoedisk_show_state(struct gendisk * disk, char *page)
+{
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE,
+ "%s%s\n",
+ (d->flags & DEVFL_UP) ? "up" : "down",
+ (d->flags & DEVFL_CLOSEWAIT) ? ",closewait" : "");
+}
+static ssize_t aoedisk_show_mac(struct gendisk * disk, char *page)
+{
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE, "%012llx\n", mac_addr(d->addr));
+}
+static ssize_t aoedisk_show_netif(struct gendisk * disk, char *page)
+{
+ struct aoedev *d = disk->private_data;
+
+ return snprintf(page, PAGE_SIZE, "%s\n", d->ifp->name);
+}
+
+static struct disk_attribute disk_attr_state = {
+ .attr = {.name = "state", .mode = S_IRUGO },
+ .show = aoedisk_show_state
+};
+static struct disk_attribute disk_attr_mac = {
+ .attr = {.name = "mac", .mode = S_IRUGO },
+ .show = aoedisk_show_mac
+};
+static struct disk_attribute disk_attr_netif = {
+ .attr = {.name = "netif", .mode = S_IRUGO },
+ .show = aoedisk_show_netif
+};
+
+static void
+aoedisk_add_sysfs(struct aoedev *d)
+{
+ sysfs_create_file(&d->gd->kobj, &disk_attr_state.attr);
+ sysfs_create_file(&d->gd->kobj, &disk_attr_mac.attr);
+ sysfs_create_file(&d->gd->kobj, &disk_attr_netif.attr);
+}
+void
+aoedisk_rm_sysfs(struct aoedev *d)
+{
+ sysfs_remove_link(&d->gd->kobj, "state");
+ sysfs_remove_link(&d->gd->kobj, "mac");
+ sysfs_remove_link(&d->gd->kobj, "netif");
+}
+
+static int
+aoeblk_open(struct inode *inode, struct file *filp)
+{
+ struct aoedev *d;
+ ulong flags;
+
+ d = inode->i_bdev->bd_disk->private_data;
+
+ spin_lock_irqsave(&d->lock, flags);
+ if (d->flags & DEVFL_UP) {
+ d->nopen++;
+ spin_unlock_irqrestore(&d->lock, flags);
+ return 0;
+ }
+ spin_unlock_irqrestore(&d->lock, flags);
+ return -ENODEV;
+}
+
+static int
+aoeblk_release(struct inode *inode, struct file *filp)
+{
+ struct aoedev *d;
+ ulong flags;
+
+ d = inode->i_bdev->bd_disk->private_data;
+
+ spin_lock_irqsave(&d->lock, flags);
+
+ if (--d->nopen == 0 && (d->flags & DEVFL_CLOSEWAIT)) {
+ d->flags &= ~DEVFL_CLOSEWAIT;
+ spin_unlock_irqrestore(&d->lock, flags);
+ aoecmd_cfg(d->aoemajor, d->aoeminor);
+ return 0;
+ }
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ return 0;
+}
+
+static int
+aoeblk_make_request(request_queue_t *q, struct bio *bio)
+{
+ struct aoedev *d;
+ struct buf *buf;
+ struct sk_buff *sl;
+ ulong flags;
+
+ blk_queue_bounce(q, &bio);
+
+ d = bio->bi_bdev->bd_disk->private_data;
+ buf = mempool_alloc(d->bufpool, GFP_NOIO);
+ if (buf == NULL) {
+ printk(KERN_INFO "aoe: aoeblk_make_request: buf allocation "
+ "failure\n");
+ bio_endio(bio, bio->bi_size, -ENOMEM);
+ return 0;
+ }
+ memset(buf, 0, sizeof(*buf));
+ INIT_LIST_HEAD(&buf->bufs);
+ buf->bio = bio;
+ buf->resid = bio->bi_size;
+ buf->sector = bio->bi_sector;
+ buf->bv = buf->bio->bi_io_vec;
+ buf->bv_resid = buf->bv->bv_len;
+ buf->bufaddr = page_address(buf->bv->bv_page) + buf->bv->bv_offset;
+
+ spin_lock_irqsave(&d->lock, flags);
+
+ if ((d->flags & DEVFL_UP) == 0) {
+ printk(KERN_INFO "aoe: aoeblk_make_request: device %ld.%ld is not up\n",
+ d->aoemajor, d->aoeminor);
+ spin_unlock_irqrestore(&d->lock, flags);
+ mempool_free(buf, d->bufpool);
+ bio_endio(bio, bio->bi_size, -ENXIO);
+ return 0;
+ }
+
+ list_add_tail(&buf->bufs, &d->bufq);
+ aoecmd_work(d);
+
+ sl = d->skblist;
+ d->skblist = NULL;
+
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ aoenet_xmit(sl);
+ return 0;
+}
+
+/* This ioctl implementation expects userland to have the device node
+ * permissions set so that only privileged users can open an aoe
+ * block device directly.
+ */
+static int
+aoeblk_ioctl(struct inode *inode, struct file *filp, uint cmd, ulong arg)
+{
+ struct aoedev *d;
+
+ if (!arg)
+ return -EINVAL;
+
+ d = inode->i_bdev->bd_disk->private_data;
+ if ((d->flags & DEVFL_UP) == 0) {
+ printk(KERN_ERR "aoe: aoeblk_ioctl: disk not up\n");
+ return -ENODEV;
+ }
+
+ if (cmd == HDIO_GETGEO) {
+ d->geo.start = get_start_sect(inode->i_bdev);
+ if (!copy_to_user((void __user *) arg, &d->geo, sizeof d->geo))
+ return 0;
+ return -EFAULT;
+ }
+ printk(KERN_INFO "aoe: aoeblk_ioctl: unknown ioctl %d\n", cmd);
+ return -EINVAL;
+}
+
+static struct block_device_operations aoe_bdops = {
+ .open = aoeblk_open,
+ .release = aoeblk_release,
+ .ioctl = aoeblk_ioctl,
+ .owner = THIS_MODULE,
+};
+
+/* alloc_disk and add_disk can sleep */
+void
+aoeblk_gdalloc(void *vp)
+{
+ struct aoedev *d = vp;
+ struct gendisk *gd;
+ ulong flags;
+
+ gd = alloc_disk(AOE_PARTITIONS);
+ if (gd == NULL) {
+ printk(KERN_ERR "aoe: aoeblk_gdalloc: cannot allocate disk "
+ "structure for %ld.%ld\n", d->aoemajor, d->aoeminor);
+ spin_lock_irqsave(&d->lock, flags);
+ d->flags &= ~DEVFL_WORKON;
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+
+ d->bufpool = mempool_create(MIN_BUFS,
+ mempool_alloc_slab, mempool_free_slab,
+ buf_pool_cache);
+ if (d->bufpool == NULL) {
+ printk(KERN_ERR "aoe: aoeblk_gdalloc: cannot allocate bufpool "
+ "for %ld.%ld\n", d->aoemajor, d->aoeminor);
+ put_disk(gd);
+ spin_lock_irqsave(&d->lock, flags);
+ d->flags &= ~DEVFL_WORKON;
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+
+ spin_lock_irqsave(&d->lock, flags);
+ blk_queue_make_request(&d->blkq, aoeblk_make_request);
+ gd->major = AOE_MAJOR;
+ gd->first_minor = d->sysminor * AOE_PARTITIONS;
+ gd->fops = &aoe_bdops;
+ gd->private_data = d;
+ gd->capacity = d->ssize;
+ snprintf(gd->disk_name, sizeof gd->disk_name, "etherd/e%ld.%ld",
+ d->aoemajor, d->aoeminor);
+
+ gd->queue = &d->blkq;
+ d->gd = gd;
+ d->flags &= ~DEVFL_WORKON;
+ d->flags |= DEVFL_UP;
+
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ add_disk(gd);
+ aoedisk_add_sysfs(d);
+
+ printk(KERN_INFO "aoe: %012llx e%lu.%lu v%04x has %llu "
+ "sectors\n", mac_addr(d->addr), d->aoemajor, d->aoeminor,
+ d->fw_ver, (long long)d->ssize);
+}
+
+void
+aoeblk_exit(void)
+{
+ kmem_cache_destroy(buf_pool_cache);
+}
+
+int __init
+aoeblk_init(void)
+{
+ buf_pool_cache = kmem_cache_create("aoe_bufs",
+ sizeof(struct buf),
+ 0, 0, NULL, NULL);
+ if (buf_pool_cache == NULL)
+ return -ENOMEM;
+
+ return 0;
+}
+
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoechr.c
+ * AoE character device driver
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include "aoe.h"
+
+enum {
+ //MINOR_STAT = 1, (moved to sysfs)
+ MINOR_ERR = 2,
+ MINOR_DISCOVER,
+ MINOR_INTERFACES,
+ MSGSZ = 2048,
+ NARGS = 10,
+ NMSG = 100, /* message backlog to retain */
+};
+
+struct aoe_chardev {
+ ulong minor;
+ char name[32];
+};
+
+enum { EMFL_VALID = 1 };
+
+struct ErrMsg {
+ short flags;
+ short len;
+ char *msg;
+};
+
+static struct ErrMsg emsgs[NMSG];
+static int emsgs_head_idx, emsgs_tail_idx;
+static struct semaphore emsgs_sema;
+static spinlock_t emsgs_lock;
+static int nblocked_emsgs_readers;
+static struct class_simple *aoe_class;
+static struct aoe_chardev chardevs[] = {
+ { MINOR_ERR, "err" },
+ { MINOR_DISCOVER, "discover" },
+ { MINOR_INTERFACES, "interfaces" },
+};
+
+static int
+discover(void)
+{
+ aoecmd_cfg(0xffff, 0xff);
+ return 0;
+}
+
+static int
+interfaces(const char __user *str, size_t size)
+{
+ if (set_aoe_iflist(str, size)) {
+ printk(KERN_CRIT
+ "%s: could not set interface list: %s\n",
+ __FUNCTION__, "too many interfaces");
+ return -EINVAL;
+ }
+ return 0;
+}
+
+void
+aoechr_error(char *msg)
+{
+ struct ErrMsg *em;
+ char *mp;
+ ulong flags, n;
+
+ n = strlen(msg);
+
+ spin_lock_irqsave(&emsgs_lock, flags);
+
+ em = emsgs + emsgs_tail_idx;
+ if ((em->flags & EMFL_VALID)) {
+bail: spin_unlock_irqrestore(&emsgs_lock, flags);
+ return;
+ }
+
+ mp = kmalloc(n, GFP_ATOMIC);
+ if (mp == NULL) {
+ printk(KERN_CRIT "aoe: aoechr_error: allocation failure, len=%ld\n", n);
+ goto bail;
+ }
+
+ memcpy(mp, msg, n);
+ em->msg = mp;
+ em->flags |= EMFL_VALID;
+ em->len = n;
+
+ emsgs_tail_idx++;
+ emsgs_tail_idx %= ARRAY_SIZE(emsgs);
+
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+
+ if (nblocked_emsgs_readers)
+ up(&emsgs_sema);
+}
+
+#define PERLINE 16
+void
+aoechr_hdump(char *buf, int n)
+{
+ int bufsiz;
+ char *fbuf;
+ int linelen;
+ char *p, *e, *fp;
+
+ bufsiz = n * 3; /* 2 hex digits and a space */
+ bufsiz += n / PERLINE + 1; /* the newline characters */
+ bufsiz += 1; /* the final '\0' */
+
+ fbuf = kmalloc(bufsiz, GFP_ATOMIC);
+ if (!fbuf) {
+ printk(KERN_INFO
+ "%s: cannot allocate memory\n",
+ __FUNCTION__);
+ return;
+ }
+
+	for (p = buf; n > 0;) {
+ linelen = n > PERLINE ? PERLINE : n;
+ n -= linelen;
+
+ fp = fbuf;
+ for (e=p+linelen; p<e; p++)
+ fp += sprintf(fp, "%2.2X ", *p & 255);
+ sprintf(fp, "\n");
+ aoechr_error(fbuf);
+ }
+
+ kfree(fbuf);
+}
+
+static ssize_t
+aoechr_write(struct file *filp, const char __user *buf, size_t cnt, loff_t *offp)
+{
+ int ret = -EINVAL;
+
+ switch ((unsigned long) filp->private_data) {
+ default:
+ printk(KERN_INFO "aoe: aoechr_write: can't write to that file.\n");
+ break;
+ case MINOR_DISCOVER:
+ ret = discover();
+ break;
+ case MINOR_INTERFACES:
+ ret = interfaces(buf, cnt);
+ break;
+ }
+ if (ret == 0)
+ ret = cnt;
+ return ret;
+}
+
+static int
+aoechr_open(struct inode *inode, struct file *filp)
+{
+ int n, i;
+
+ n = MINOR(inode->i_rdev);
+ filp->private_data = (void *) (unsigned long) n;
+
+ for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
+ if (chardevs[i].minor == n)
+ return 0;
+ return -EINVAL;
+}
+
+static int
+aoechr_rel(struct inode *inode, struct file *filp)
+{
+ return 0;
+}
+
+static ssize_t
+aoechr_read(struct file *filp, char __user *buf, size_t cnt, loff_t *off)
+{
+ int n;
+ char *mp;
+ struct ErrMsg *em;
+ ssize_t len;
+ ulong flags;
+
+ n = (int) filp->private_data;
+ switch (n) {
+ case MINOR_ERR:
+ spin_lock_irqsave(&emsgs_lock, flags);
+loop:
+ em = emsgs + emsgs_head_idx;
+ if ((em->flags & EMFL_VALID) == 0) {
+ if (filp->f_flags & O_NDELAY) {
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+ return -EAGAIN;
+ }
+ nblocked_emsgs_readers++;
+
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+
+ n = down_interruptible(&emsgs_sema);
+
+ spin_lock_irqsave(&emsgs_lock, flags);
+
+ nblocked_emsgs_readers--;
+
+ if (n) {
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+ return -ERESTARTSYS;
+ }
+ goto loop;
+ }
+ if (em->len > cnt) {
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+ return -EAGAIN;
+ }
+ mp = em->msg;
+ len = em->len;
+ em->msg = NULL;
+ em->flags &= ~EMFL_VALID;
+
+ emsgs_head_idx++;
+ emsgs_head_idx %= ARRAY_SIZE(emsgs);
+
+ spin_unlock_irqrestore(&emsgs_lock, flags);
+
+ n = copy_to_user(buf, mp, len);
+ kfree(mp);
+ return n == 0 ? len : -EFAULT;
+ default:
+ return -EFAULT;
+ }
+}
+
+struct file_operations aoe_fops = {
+ .write = aoechr_write,
+ .read = aoechr_read,
+ .open = aoechr_open,
+ .release = aoechr_rel,
+ .owner = THIS_MODULE,
+};
+
+int __init
+aoechr_init(void)
+{
+ int n, i;
+
+ n = register_chrdev(AOE_MAJOR, "aoechr", &aoe_fops);
+ if (n < 0) {
+ printk(KERN_ERR "aoe: aoechr_init: can't register char device\n");
+ return n;
+ }
+ sema_init(&emsgs_sema, 0);
+ spin_lock_init(&emsgs_lock);
+ aoe_class = class_simple_create(THIS_MODULE, "aoe");
+ if (IS_ERR(aoe_class)) {
+ unregister_chrdev(AOE_MAJOR, "aoechr");
+ return PTR_ERR(aoe_class);
+ }
+ for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
+ class_simple_device_add(aoe_class,
+ MKDEV(AOE_MAJOR, chardevs[i].minor),
+ NULL, chardevs[i].name);
+
+ return 0;
+}
+
+void
+aoechr_exit(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(chardevs); ++i)
+ class_simple_device_remove(MKDEV(AOE_MAJOR, chardevs[i].minor));
+ class_simple_destroy(aoe_class);
+ unregister_chrdev(AOE_MAJOR, "aoechr");
+}
+
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoecmd.c
+ * Filesystem request handling methods
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include "aoe.h"
+
+#define TIMERTICK (HZ / 10)
+#define MINTIMER (2 * TIMERTICK)
+#define MAXTIMER (HZ << 1)
+#define MAXWAIT (60 * 3) /* After MAXWAIT seconds, give up and fail dev */
+
+static struct sk_buff *
+new_skb(struct net_device *if_dev, ulong len)
+{
+ struct sk_buff *skb;
+
+ skb = alloc_skb(len, GFP_ATOMIC);
+ if (skb) {
+ skb->nh.raw = skb->mac.raw = skb->data;
+ skb->dev = if_dev;
+ skb->protocol = __constant_htons(ETH_P_AOE);
+ skb->priority = 0;
+ skb_put(skb, len);
+ skb->next = skb->prev = NULL;
+
+ /* tell the network layer not to perform IP checksums
+ * or to get the NIC to do it
+ */
+ skb->ip_summed = CHECKSUM_NONE;
+ }
+ return skb;
+}
+
+static struct sk_buff *
+skb_prepare(struct aoedev *d, struct frame *f)
+{
+ struct sk_buff *skb;
+ char *p;
+
+ skb = new_skb(d->ifp, f->ndata + f->writedatalen);
+ if (!skb) {
+ printk(KERN_INFO "aoe: skb_prepare: failure to allocate skb\n");
+ return NULL;
+ }
+
+ p = skb->mac.raw;
+ memcpy(p, f->data, f->ndata);
+
+ if (f->writedatalen) {
+ p += sizeof(struct aoe_hdr) + sizeof(struct aoe_atahdr);
+ memcpy(p, f->bufaddr, f->writedatalen);
+ }
+
+ return skb;
+}
+
+static struct frame *
+getframe(struct aoedev *d, int tag)
+{
+ struct frame *f, *e;
+
+ f = d->frames;
+ e = f + d->nframes;
+ for (; f<e; f++)
+ if (f->tag == tag)
+ return f;
+ return NULL;
+}
+
+/*
+ * Leave the top bit clear so we have tagspace for userland.
+ * The bottom 16 bits are the xmit tick for rexmit/rttavg processing.
+ * This driver reserves tag -1 to mean "unused frame."
+ */
+static int
+newtag(struct aoedev *d)
+{
+ register ulong n;
+
+ n = jiffies & 0xffff;
+ return n |= (++d->lasttag & 0x7fff) << 16;
+}
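+/*
+ * Tag layout sketch for the scheme above: bits 16-30 carry the
+ * incrementing sequence number, bits 0-15 the jiffies snapshot used by
+ * tsince() and calc_rttavg(), and bit 31 stays clear so that tags with
+ * the top bit set remain available to userland tools (aoenet_rcv drops
+ * responses carrying such tags).
+ */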
+
+static int
+aoehdr_atainit(struct aoedev *d, struct aoe_hdr *h)
+{
+ u16 type = __constant_cpu_to_be16(ETH_P_AOE);
+ u16 aoemajor = __cpu_to_be16(d->aoemajor);
+ u32 host_tag = newtag(d);
+ u32 tag = __cpu_to_be32(host_tag);
+
+ memcpy(h->src, d->ifp->dev_addr, sizeof h->src);
+ memcpy(h->dst, d->addr, sizeof h->dst);
+ memcpy(h->type, &type, sizeof type);
+ h->verfl = AOE_HVER;
+ memcpy(h->major, &aoemajor, sizeof aoemajor);
+ h->minor = d->aoeminor;
+ h->cmd = AOECMD_ATA;
+ memcpy(h->tag, &tag, sizeof tag);
+
+ return host_tag;
+}
+
+static void
+aoecmd_ata_rw(struct aoedev *d, struct frame *f)
+{
+ struct aoe_hdr *h;
+ struct aoe_atahdr *ah;
+ struct buf *buf;
+ struct sk_buff *skb;
+ ulong bcnt;
+ register sector_t sector;
+ char writebit, extbit;
+
+ writebit = 0x10;
+ extbit = 0x4;
+
+ buf = d->inprocess;
+
+ sector = buf->sector;
+ bcnt = buf->bv_resid;
+ if (bcnt > MAXATADATA)
+ bcnt = MAXATADATA;
+
+ /* initialize the headers & frame */
+ h = (struct aoe_hdr *) f->data;
+ ah = (struct aoe_atahdr *) (h+1);
+ f->ndata = sizeof *h + sizeof *ah;
+ memset(h, 0, f->ndata);
+ f->tag = aoehdr_atainit(d, h);
+ f->waited = 0;
+ f->buf = buf;
+ f->bufaddr = buf->bufaddr;
+
+ /* set up ata header */
+ ah->scnt = bcnt >> 9;
+ ah->lba0 = sector;
+ ah->lba1 = sector >>= 8;
+ ah->lba2 = sector >>= 8;
+ ah->lba3 = sector >>= 8;
+ if (d->flags & DEVFL_EXT) {
+ ah->aflags |= AOEAFL_EXT;
+ ah->lba4 = sector >>= 8;
+ ah->lba5 = sector >>= 8;
+ } else {
+ extbit = 0;
+ ah->lba3 &= 0x0f;
+ ah->lba3 |= 0xe0; /* LBA bit + obsolete 0xa0 */
+ }
+
+ if (bio_data_dir(buf->bio) == WRITE) {
+ ah->aflags |= AOEAFL_WRITE;
+ f->writedatalen = bcnt;
+ } else {
+ writebit = 0;
+ f->writedatalen = 0;
+ }
+
+ ah->cmdstat = WIN_READ | writebit | extbit;
+
+ /* mark all tracking fields and load out */
+ buf->nframesout += 1;
+ buf->bufaddr += bcnt;
+ buf->bv_resid -= bcnt;
+/* printk(KERN_INFO "aoe: bv_resid=%ld\n", buf->bv_resid); */
+ buf->resid -= bcnt;
+ buf->sector += bcnt >> 9;
+ if (buf->resid == 0) {
+ d->inprocess = NULL;
+ } else if (buf->bv_resid == 0) {
+ buf->bv++;
+ buf->bv_resid = buf->bv->bv_len;
+ buf->bufaddr = page_address(buf->bv->bv_page) + buf->bv->bv_offset;
+ }
+
+ skb = skb_prepare(d, f);
+ if (skb) {
+ skb->next = d->skblist;
+ d->skblist = skb;
+ }
+}
+
+/* enters with d->lock held */
+void
+aoecmd_work(struct aoedev *d)
+{
+ struct frame *f;
+ struct buf *buf;
+loop:
+ f = getframe(d, FREETAG);
+ if (f == NULL)
+ return;
+ if (d->inprocess == NULL) {
+ if (list_empty(&d->bufq))
+ return;
+ buf = container_of(d->bufq.next, struct buf, bufs);
+ list_del(d->bufq.next);
+/*printk(KERN_INFO "aoecmd_work: bi_size=%ld\n", buf->bio->bi_size); */
+ d->inprocess = buf;
+ }
+ aoecmd_ata_rw(d, f);
+ goto loop;
+}
+
+static void
+rexmit(struct aoedev *d, struct frame *f)
+{
+ struct sk_buff *skb;
+ struct aoe_hdr *h;
+ char buf[128];
+ u32 n;
+ u32 net_tag;
+
+ n = newtag(d);
+
+ snprintf(buf, sizeof buf,
+ "%15s e%ld.%ld oldtag=%08x@%08lx newtag=%08x\n",
+ "retransmit",
+ d->aoemajor, d->aoeminor, f->tag, jiffies, n);
+ aoechr_error(buf);
+
+ h = (struct aoe_hdr *) f->data;
+ f->tag = n;
+ net_tag = __cpu_to_be32(n);
+ memcpy(h->tag, &net_tag, sizeof net_tag);
+
+ skb = skb_prepare(d, f);
+ if (skb) {
+ skb->next = d->skblist;
+ d->skblist = skb;
+ }
+}
+
+static int
+tsince(int tag)
+{
+ int n;
+
+ n = jiffies & 0xffff;
+ n -= tag & 0xffff;
+ if (n < 0)
+ n += 1<<16;
+ return n;
+}
+
+static void
+rexmit_timer(ulong vp)
+{
+ struct aoedev *d;
+ struct frame *f, *e;
+ struct sk_buff *sl;
+ register long timeout;
+ ulong flags, n;
+
+ d = (struct aoedev *) vp;
+ sl = NULL;
+
+ /* timeout is always ~150% of the moving average */
+ timeout = d->rttavg;
+ timeout += timeout >> 1;
+
+ spin_lock_irqsave(&d->lock, flags);
+
+ if (d->flags & DEVFL_TKILL) {
+tdie: spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+ f = d->frames;
+ e = f + d->nframes;
+ for (; f<e; f++) {
+ if (f->tag != FREETAG && tsince(f->tag) >= timeout) {
+ n = f->waited += timeout;
+ n /= HZ;
+ if (n > MAXWAIT) { /* waited too long. device failure. */
+ aoedev_downdev(d);
+ goto tdie;
+ }
+ rexmit(d, f);
+ }
+ }
+
+ sl = d->skblist;
+ d->skblist = NULL;
+ if (sl) {
+ n = d->rttavg <<= 1;
+ if (n > MAXTIMER)
+ d->rttavg = MAXTIMER;
+ }
+
+ d->timer.expires = jiffies + TIMERTICK;
+ add_timer(&d->timer);
+
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ aoenet_xmit(sl);
+}
+
+static void
+ataid_complete(struct aoedev *d, unsigned char *id)
+{
+ u64 ssize;
+ u16 n;
+
+ /* word 83: command set supported */
+ n = __le16_to_cpu(*((u16 *) &id[83<<1]));
+
+ /* word 86: command set/feature enabled */
+ n |= __le16_to_cpu(*((u16 *) &id[86<<1]));
+
+ if (n & (1<<10)) { /* bit 10: LBA 48 */
+ d->flags |= DEVFL_EXT;
+
+ /* word 100: number lba48 sectors */
+ ssize = __le64_to_cpu(*((u64 *) &id[100<<1]));
+
+ /* set as in ide-disk.c:init_idedisk_capacity */
+ d->geo.cylinders = ssize;
+ d->geo.cylinders /= (255 * 63);
+ d->geo.heads = 255;
+ d->geo.sectors = 63;
+ } else {
+ d->flags &= ~DEVFL_EXT;
+
+ /* number lba28 sectors */
+ ssize = __le32_to_cpu(*((u32 *) &id[60<<1]));
+
+ /* NOTE: obsolete in ATA 6 */
+ d->geo.cylinders = __le16_to_cpu(*((u16 *) &id[54<<1]));
+ d->geo.heads = __le16_to_cpu(*((u16 *) &id[55<<1]));
+ d->geo.sectors = __le16_to_cpu(*((u16 *) &id[56<<1]));
+ }
+ d->ssize = ssize;
+ d->geo.start = 0;
+ if (d->gd != NULL) {
+ d->gd->capacity = ssize;
+ d->flags |= DEVFL_UP;
+ return;
+ }
+ if (d->flags & DEVFL_WORKON) {
+ printk(KERN_INFO "aoe: ataid_complete: can't schedule work, it's already on! "
+ "(This really shouldn't happen).\n");
+ return;
+ }
+ INIT_WORK(&d->work, aoeblk_gdalloc, d);
+ schedule_work(&d->work);
+ d->flags |= DEVFL_WORKON;
+}
+
+static void
+calc_rttavg(struct aoedev *d, int rtt)
+{
+ register long n;
+
+ n = rtt;
+ if (n < MINTIMER)
+ n = MINTIMER;
+ else if (n > MAXTIMER)
+ n = MAXTIMER;
+
+ /* g == .25; cf. Congestion Avoidance and Control, Jacobson & Karels; 1988 */
+ n -= d->rttavg;
+ d->rttavg += n >> 2;
+}
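+/*
+ * Worked example, assuming HZ=1000 (so MINTIMER=200, MAXTIMER=2000
+ * jiffies): with d->rttavg == 400 and a sample rtt of 480, rttavg gains
+ * (480 - 400) >> 2 and becomes 420.  rexmit_timer() then retransmits
+ * frames outstanding longer than ~150% of this average.
+ */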
+
+void
+aoecmd_ata_rsp(struct sk_buff *skb)
+{
+ struct aoedev *d;
+ struct aoe_hdr *hin;
+ struct aoe_atahdr *ahin, *ahout;
+ struct frame *f;
+ struct buf *buf;
+ struct sk_buff *sl;
+ register long n;
+ ulong flags;
+ char ebuf[128];
+
+ hin = (struct aoe_hdr *) skb->mac.raw;
+ d = aoedev_bymac(hin->src);
+ if (d == NULL) {
+ snprintf(ebuf, sizeof ebuf, "aoecmd_ata_rsp: ata response "
+ "for unknown device %d.%d\n",
+ __be16_to_cpu(*((u16 *) hin->major)),
+ hin->minor);
+ aoechr_error(ebuf);
+ return;
+ }
+
+ spin_lock_irqsave(&d->lock, flags);
+
+ f = getframe(d, __be32_to_cpu(*((u32 *) hin->tag)));
+ if (f == NULL) {
+ spin_unlock_irqrestore(&d->lock, flags);
+ snprintf(ebuf, sizeof ebuf,
+ "%15s e%d.%d tag=%08x@%08lx\n",
+ "unexpected rsp",
+ __be16_to_cpu(*((u16 *) hin->major)),
+ hin->minor,
+ __be32_to_cpu(*((u32 *) hin->tag)),
+ jiffies);
+ aoechr_error(ebuf);
+ return;
+ }
+
+ calc_rttavg(d, tsince(f->tag));
+
+ ahin = (struct aoe_atahdr *) (hin+1);
+ ahout = (struct aoe_atahdr *) (f->data + sizeof(struct aoe_hdr));
+ buf = f->buf;
+
+ if (ahin->cmdstat & 0xa9) { /* these bits cleared on success */
+ printk(KERN_CRIT "aoe: aoecmd_ata_rsp: ata error cmd=%2.2Xh "
+ "stat=%2.2Xh\n", ahout->cmdstat, ahin->cmdstat);
+ if (buf)
+ buf->flags |= BUFFL_FAIL;
+ } else {
+ switch (ahout->cmdstat) {
+ case WIN_READ:
+ case WIN_READ_EXT:
+ n = ahout->scnt << 9;
+ if (skb->len - sizeof *hin - sizeof *ahin < n) {
+ printk(KERN_CRIT "aoe: aoecmd_ata_rsp: runt "
+ "ata data size in read. skb->len=%d\n",
+ skb->len);
+ /* fail frame f? just returning will rexmit. */
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+ memcpy(f->bufaddr, ahin+1, n);
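+			/* fall through: a completed read needs no further handling */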
+ case WIN_WRITE:
+ case WIN_WRITE_EXT:
+ break;
+ case WIN_IDENTIFY:
+ if (skb->len - sizeof *hin - sizeof *ahin < 512) {
+ printk(KERN_INFO "aoe: aoecmd_ata_rsp: runt data size "
+ "in ataid. skb->len=%d\n", skb->len);
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+ ataid_complete(d, (char *) (ahin+1));
+ /* d->flags |= DEVFL_WC_UPDATE; */
+ break;
+ default:
+ printk(KERN_INFO "aoe: aoecmd_ata_rsp: unrecognized "
+ "outbound ata command %2.2Xh for %d.%d\n",
+ ahout->cmdstat,
+ __be16_to_cpu(*((u16 *) hin->major)),
+ hin->minor);
+ }
+ }
+
+ if (buf) {
+ buf->nframesout -= 1;
+ if (buf->nframesout == 0 && buf->resid == 0) {
+ n = !(buf->flags & BUFFL_FAIL);
+			bio_endio(buf->bio, buf->bio->bi_size, n ? 0 : -EIO);
+ mempool_free(buf, d->bufpool);
+ }
+ }
+
+ f->buf = NULL;
+ f->tag = FREETAG;
+
+ aoecmd_work(d);
+
+ sl = d->skblist;
+ d->skblist = NULL;
+
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ aoenet_xmit(sl);
+}
+
+void
+aoecmd_cfg(ushort aoemajor, unsigned char aoeminor)
+{
+ struct aoe_hdr *h;
+ struct aoe_cfghdr *ch;
+ struct sk_buff *skb, *sl;
+ struct net_device *ifp;
+ u16 aoe_type = __constant_cpu_to_be16(ETH_P_AOE);
+ u16 net_aoemajor = __cpu_to_be16(aoemajor);
+
+ sl = NULL;
+
+ read_lock(&dev_base_lock);
+ for (ifp = dev_base; ifp; dev_put(ifp), ifp = ifp->next) {
+ dev_hold(ifp);
+ if (!is_aoe_netif(ifp))
+ continue;
+
+ skb = new_skb(ifp, sizeof *h + sizeof *ch);
+ if (skb == NULL) {
+ printk(KERN_INFO "aoe: aoecmd_cfg: skb alloc failure\n");
+ continue;
+ }
+ h = (struct aoe_hdr *) skb->mac.raw;
+ memset(h, 0, sizeof *h + sizeof *ch);
+
+ memset(h->dst, 0xff, sizeof h->dst);
+ memcpy(h->src, ifp->dev_addr, sizeof h->src);
+ memcpy(h->type, &aoe_type, sizeof aoe_type);
+ h->verfl = AOE_HVER;
+ memcpy(h->major, &net_aoemajor, sizeof net_aoemajor);
+ h->minor = aoeminor;
+ h->cmd = AOECMD_CFG;
+
+ skb->next = sl;
+ sl = skb;
+ }
+ read_unlock(&dev_base_lock);
+
+ aoenet_xmit(sl);
+}
+
+/*
+ * Since we only call this in one place (and it only prepares one frame)
+ * we just return the skb. Usually we'd chain it up to the d->skblist.
+ */
+static struct sk_buff *
+aoecmd_ata_id(struct aoedev *d)
+{
+ struct aoe_hdr *h;
+ struct aoe_atahdr *ah;
+ struct frame *f;
+ struct sk_buff *skb;
+
+ f = getframe(d, FREETAG);
+ if (f == NULL) {
+ printk(KERN_CRIT "aoe: aoecmd_ata_id: can't get a frame. "
+ "This shouldn't happen.\n");
+ return NULL;
+ }
+
+ /* initialize the headers & frame */
+ h = (struct aoe_hdr *) f->data;
+ ah = (struct aoe_atahdr *) (h+1);
+ f->ndata = sizeof *h + sizeof *ah;
+ memset(h, 0, f->ndata);
+ f->tag = aoehdr_atainit(d, h);
+ f->waited = 0;
+ f->writedatalen = 0;
+
+ /* this message initializes the device, so we reset the rttavg */
+ d->rttavg = MAXTIMER;
+
+ /* set up ata header */
+ ah->scnt = 1;
+ ah->cmdstat = WIN_IDENTIFY;
+ ah->lba3 = 0xa0;
+
+ skb = skb_prepare(d, f);
+
+ /* we now want to start the rexmit tracking */
+ d->flags &= ~DEVFL_TKILL;
+ d->timer.data = (ulong) d;
+ d->timer.function = rexmit_timer;
+ d->timer.expires = jiffies + TIMERTICK;
+ add_timer(&d->timer);
+
+ return skb;
+}
+
+void
+aoecmd_cfg_rsp(struct sk_buff *skb)
+{
+ struct aoedev *d;
+ struct aoe_hdr *h;
+ struct aoe_cfghdr *ch;
+ ulong flags, bufcnt, sysminor, aoemajor;
+ struct sk_buff *sl;
+ enum { MAXFRAMES = 8, MAXSYSMINOR = 255 };
+
+ h = (struct aoe_hdr *) skb->mac.raw;
+ ch = (struct aoe_cfghdr *) (h+1);
+
+ /*
+ * Enough people have their dip switches set backwards to
+ * warrant a loud message for this special case.
+ */
+ aoemajor = __be16_to_cpu(*((u16 *) h->major));
+ if (aoemajor == 0xfff) {
+ printk(KERN_CRIT "aoe: aoecmd_cfg_rsp: Warning: shelf "
+ "address is all ones. Check shelf dip switches\n");
+ return;
+ }
+
+ sysminor = SYSMINOR(aoemajor, h->minor);
+ if (sysminor > MAXSYSMINOR) {
+ printk(KERN_INFO "aoe: aoecmd_cfg_rsp: sysminor %ld too "
+ "large\n", sysminor);
+ return;
+ }
+
+ bufcnt = __be16_to_cpu(*((u16 *) ch->bufcnt));
+ if (bufcnt > MAXFRAMES) /* keep it reasonable */
+ bufcnt = MAXFRAMES;
+
+ d = aoedev_set(sysminor, h->src, skb->dev, bufcnt);
+ if (d == NULL) {
+ printk(KERN_INFO "aoe: aoecmd_cfg_rsp: device set failure\n");
+ return;
+ }
+
+ spin_lock_irqsave(&d->lock, flags);
+
+ if (d->flags & (DEVFL_UP | DEVFL_CLOSEWAIT)) {
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+
+ d->fw_ver = __be16_to_cpu(*((u16 *) ch->fwver));
+
+ /* we get here only if the device is new */
+ sl = aoecmd_ata_id(d);
+
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ aoenet_xmit(sl);
+}
+
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoedev.c
+ * AoE device utility functions; maintains device list.
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include <linux/netdevice.h>
+#include "aoe.h"
+
+static struct aoedev *devlist;
+static spinlock_t devlist_lock;
+
+struct aoedev *
+aoedev_bymac(unsigned char *macaddr)
+{
+ struct aoedev *d;
+ ulong flags;
+
+ spin_lock_irqsave(&devlist_lock, flags);
+
+ for (d=devlist; d; d=d->next)
+ if (!memcmp(d->addr, macaddr, 6))
+ break;
+
+ spin_unlock_irqrestore(&devlist_lock, flags);
+ return d;
+}
+
+/* called with devlist lock held */
+static struct aoedev *
+aoedev_newdev(ulong nframes)
+{
+ struct aoedev *d;
+ struct frame *f, *e;
+
+ d = kcalloc(1, sizeof *d, GFP_ATOMIC);
+ if (d == NULL)
+ return NULL;
+ f = kcalloc(nframes, sizeof *f, GFP_ATOMIC);
+ if (f == NULL) {
+ kfree(d);
+ return NULL;
+ }
+
+ d->nframes = nframes;
+ d->frames = f;
+ e = f + nframes;
+ for (; f<e; f++)
+ f->tag = FREETAG;
+
+ spin_lock_init(&d->lock);
+ init_timer(&d->timer);
+ d->bufpool = NULL; /* defer to aoeblk_gdalloc */
+ INIT_LIST_HEAD(&d->bufq);
+ d->next = devlist;
+ devlist = d;
+
+ return d;
+}
+
+void
+aoedev_downdev(struct aoedev *d)
+{
+ struct frame *f, *e;
+ struct buf *buf;
+ struct bio *bio;
+
+ d->flags |= DEVFL_TKILL;
+ del_timer(&d->timer);
+
+ f = d->frames;
+ e = f + d->nframes;
+ for (; f<e; f->tag = FREETAG, f->buf = NULL, f++) {
+ if (f->tag == FREETAG || f->buf == NULL)
+ continue;
+ buf = f->buf;
+ bio = buf->bio;
+ if (--buf->nframesout == 0) {
+ mempool_free(buf, d->bufpool);
+ bio_endio(bio, bio->bi_size, -EIO);
+ }
+ }
+ d->inprocess = NULL;
+
+ while (!list_empty(&d->bufq)) {
+ buf = container_of(d->bufq.next, struct buf, bufs);
+ list_del(d->bufq.next);
+ bio = buf->bio;
+ mempool_free(buf, d->bufpool);
+ bio_endio(bio, bio->bi_size, -EIO);
+ }
+
+ if (d->nopen)
+ d->flags |= DEVFL_CLOSEWAIT;
+ if (d->gd)
+ d->gd->capacity = 0;
+
+ d->flags &= ~DEVFL_UP;
+}
+
+struct aoedev *
+aoedev_set(ulong sysminor, unsigned char *addr, struct net_device *ifp, ulong bufcnt)
+{
+ struct aoedev *d;
+ ulong flags;
+
+ spin_lock_irqsave(&devlist_lock, flags);
+
+ for (d=devlist; d; d=d->next)
+ if (d->sysminor == sysminor
+ || memcmp(d->addr, addr, sizeof d->addr) == 0)
+ break;
+
+ if (d == NULL && (d = aoedev_newdev(bufcnt)) == NULL) {
+ spin_unlock_irqrestore(&devlist_lock, flags);
+ printk(KERN_INFO "aoe: aoedev_set: aoedev_newdev failure.\n");
+ return NULL;
+ }
+
+ spin_unlock_irqrestore(&devlist_lock, flags);
+ spin_lock_irqsave(&d->lock, flags);
+
+ d->ifp = ifp;
+
+ if (d->sysminor != sysminor
+ || memcmp(d->addr, addr, sizeof d->addr)
+ || (d->flags & DEVFL_UP) == 0) {
+ aoedev_downdev(d); /* flushes outstanding frames */
+ memcpy(d->addr, addr, sizeof d->addr);
+ d->sysminor = sysminor;
+ d->aoemajor = AOEMAJOR(sysminor);
+ d->aoeminor = AOEMINOR(sysminor);
+ }
+
+ spin_unlock_irqrestore(&d->lock, flags);
+ return d;
+}
+
+static void
+aoedev_freedev(struct aoedev *d)
+{
+ if (d->gd) {
+ aoedisk_rm_sysfs(d);
+ del_gendisk(d->gd);
+ put_disk(d->gd);
+ }
+ kfree(d->frames);
+ mempool_destroy(d->bufpool);
+ kfree(d);
+}
+
+void
+aoedev_exit(void)
+{
+ struct aoedev *d;
+ ulong flags;
+
+ flush_scheduled_work();
+
+ while ((d = devlist)) {
+ devlist = d->next;
+
+ spin_lock_irqsave(&d->lock, flags);
+ aoedev_downdev(d);
+ spin_unlock_irqrestore(&d->lock, flags);
+
+ del_timer_sync(&d->timer);
+ aoedev_freedev(d);
+ }
+}
+
+int __init
+aoedev_init(void)
+{
+ spin_lock_init(&devlist_lock);
+ return 0;
+}
+
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoemain.c
+ * Module initialization routines, discover timer
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include <linux/module.h>
+#include "aoe.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Sam Hopkins <sah@coraid.com>");
+MODULE_DESCRIPTION("AoE block/char driver for 2.6.[0-9]+");
+MODULE_VERSION(VERSION);
+
+enum { TINIT, TRUN, TKILL };
+
+static void
+discover_timer(ulong vp)
+{
+ static struct timer_list t;
+ static volatile ulong die;
+ static spinlock_t lock;
+ ulong flags;
+ enum { DTIMERTICK = HZ * 60 }; /* one minute */
+
+ switch (vp) {
+ case TINIT:
+ init_timer(&t);
+ spin_lock_init(&lock);
+ t.data = TRUN;
+ t.function = discover_timer;
+ die = 0;
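+		/* fall through: arm the timer immediately on the first call */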
+ case TRUN:
+ spin_lock_irqsave(&lock, flags);
+ if (!die) {
+ t.expires = jiffies + DTIMERTICK;
+ add_timer(&t);
+ }
+ spin_unlock_irqrestore(&lock, flags);
+
+ aoecmd_cfg(0xffff, 0xff);
+ return;
+ case TKILL:
+ spin_lock_irqsave(&lock, flags);
+ die = 1;
+ spin_unlock_irqrestore(&lock, flags);
+
+ del_timer_sync(&t);
+ default:
+ return;
+ }
+}
+
+static void
+aoe_exit(void)
+{
+ discover_timer(TKILL);
+
+ aoenet_exit();
+ unregister_blkdev(AOE_MAJOR, DEVICE_NAME);
+ aoechr_exit();
+ aoedev_exit();
+ aoeblk_exit(); /* free cache after de-allocating bufs */
+}
+
+static int __init
+aoe_init(void)
+{
+ int ret;
+
+ ret = aoedev_init();
+ if (ret)
+ return ret;
+ ret = aoechr_init();
+ if (ret)
+ goto chr_fail;
+ ret = aoeblk_init();
+ if (ret)
+ goto blk_fail;
+ ret = aoenet_init();
+ if (ret)
+ goto net_fail;
+ ret = register_blkdev(AOE_MAJOR, DEVICE_NAME);
+ if (ret < 0) {
+ printk(KERN_ERR "aoe: aoeblk_init: can't register major\n");
+ goto blkreg_fail;
+ }
+
+ printk(KERN_INFO
+ "aoe: aoe_init: AoE v2.6-%s initialised.\n",
+ VERSION);
+ discover_timer(TINIT);
+ return 0;
+
+ blkreg_fail:
+ aoenet_exit();
+ net_fail:
+ aoeblk_exit();
+ blk_fail:
+ aoechr_exit();
+ chr_fail:
+ aoedev_exit();
+
+ printk(KERN_INFO "aoe: aoe_init: initialisation failure.\n");
+ return ret;
+}
+
+module_init(aoe_init);
+module_exit(aoe_exit);
+
--- /dev/null
+/* Copyright (c) 2004 Coraid, Inc. See COPYING for GPL terms. */
+/*
+ * aoenet.c
+ * Ethernet portion of AoE driver
+ */
+
+#include <linux/hdreg.h>
+#include <linux/blkdev.h>
+#include <linux/netdevice.h>
+#include "aoe.h"
+
+#define NECODES 5
+
+static char *aoe_errlist[] =
+{
+ "no such error",
+ "unrecognized command code",
+ "bad argument parameter",
+ "device unavailable",
+ "config string present",
+ "unsupported version"
+};
+
+enum {
+ IFLISTSZ = 1024,
+};
+
+static char aoe_iflist[IFLISTSZ];
+
+int
+is_aoe_netif(struct net_device *ifp)
+{
+ register char *p, *q;
+ register int len;
+
+ if (aoe_iflist[0] == '\0')
+ return 1;
+
+ for (p = aoe_iflist; *p; p = q + strspn(q, WHITESPACE)) {
+ q = p + strcspn(p, WHITESPACE);
+ if (q != p)
+ len = q - p;
+ else
+ len = strlen(p); /* last token in aoe_iflist */
+
+ if (strlen(ifp->name) == len && !strncmp(ifp->name, p, len))
+ return 1;
+ if (q == p)
+ break;
+ }
+
+ return 0;
+}
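+/*
+ * aoe_iflist is a whitespace-separated list of interface names, e.g.
+ * "eth2 eth3" (illustrative); an empty list means any interface may carry
+ * AoE traffic.  It is filled from userspace through set_aoe_iflist().
+ */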
+
+int
+set_aoe_iflist(const char __user *user_str, size_t size)
+{
+ if (size >= IFLISTSZ)
+ return -EINVAL;
+
+ if (copy_from_user(aoe_iflist, user_str, size)) {
+ printk(KERN_INFO "aoe: %s: copy from user failed\n", __FUNCTION__);
+ return -EFAULT;
+ }
+ aoe_iflist[size] = 0x00;
+ return 0;
+}
+
+u64
+mac_addr(char addr[6])
+{
+ u64 n = 0;
+ char *p = (char *) &n;
+
+ memcpy(p + 2, addr, 6); /* (sizeof addr != 6) */
+
+ return __be64_to_cpu(n);
+}
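+/*
+ * Example (illustrative): for MAC 00:16:3e:00:00:01 this returns
+ * 0x00163e000001, which aoeblk.c prints through its %012llx format.
+ */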
+
+static struct sk_buff *
+skb_check(struct sk_buff *skb)
+{
+ if (skb_is_nonlinear(skb))
+ if ((skb = skb_share_check(skb, GFP_ATOMIC)))
+ if (skb_linearize(skb, GFP_ATOMIC) < 0) {
+ dev_kfree_skb(skb);
+ return NULL;
+ }
+ return skb;
+}
+
+void
+aoenet_xmit(struct sk_buff *sl)
+{
+ struct sk_buff *skb;
+
+ while ((skb = sl)) {
+ sl = sl->next;
+ skb->next = skb->prev = NULL;
+ dev_queue_xmit(skb);
+ }
+}
+
+/*
+ * (1) len doesn't include the header by default. I want this.
+ */
+static int
+aoenet_rcv(struct sk_buff *skb, struct net_device *ifp, struct packet_type *pt)
+{
+ struct aoe_hdr *h;
+ ulong n;
+
+ skb = skb_check(skb);
+ if (!skb)
+ return 0;
+
+ if (!is_aoe_netif(ifp))
+ goto exit;
+
+ //skb->len += ETH_HLEN; /* (1) */
+ skb_push(skb, ETH_HLEN); /* (1) */
+
+ h = (struct aoe_hdr *) skb->mac.raw;
+ n = __be32_to_cpu(*((u32 *) h->tag));
+ if ((h->verfl & AOEFL_RSP) == 0 || (n & 1<<31))
+ goto exit;
+
+ if (h->verfl & AOEFL_ERR) {
+ n = h->err;
+ if (n > NECODES)
+ n = 0;
+ if (net_ratelimit())
+ printk(KERN_ERR "aoe: aoenet_rcv: error packet from %d.%d; "
+ "ecode=%d '%s'\n",
+ __be16_to_cpu(*((u16 *) h->major)), h->minor,
+ h->err, aoe_errlist[n]);
+ goto exit;
+ }
+
+ switch (h->cmd) {
+ case AOECMD_ATA:
+ aoecmd_ata_rsp(skb);
+ break;
+ case AOECMD_CFG:
+ aoecmd_cfg_rsp(skb);
+ break;
+ default:
+ printk(KERN_INFO "aoe: aoenet_rcv: unknown cmd %d\n", h->cmd);
+ }
+exit:
+ dev_kfree_skb(skb);
+ return 0;
+}
+
+static struct packet_type aoe_pt = {
+ .type = __constant_htons(ETH_P_AOE),
+ .func = aoenet_rcv,
+};
+
+int __init
+aoenet_init(void)
+{
+ dev_add_pack(&aoe_pt);
+ return 0;
+}
+
+void
+aoenet_exit(void)
+{
+ dev_remove_pack(&aoe_pt);
+}
+
static int DriveType = TYPE_HD;
-static spinlock_t ataflop_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ataflop_lock);
/* Array for translating minors into disk formats */
static struct {
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
- * Questions/Comments/Bugfixes to arrays@compaq.com
+ * Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
*/
#ifdef CONFIG_CISS_SCSI_TAPE
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
- * Questions/Comments/Bugfixes to arrays@compaq.com
+ * Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
* If you want to make changes, improve or add functionality to this
* driver, you'll probably need the Compaq Array Controller Interface
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
- * Questions/Comments/Bugfixes to arrays@compaq.com
+ * Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
*/
#ifndef IDA_IOCTL_H
* a single lock.
* Thanks go to Jens Axboe and Al Viro for their LKML emails explaining this!
*/
-static spinlock_t nbd_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nbd_lock);
#ifndef NDEBUG
static const char *ioctl_cmd_to_ascii(int cmd)
MODULE_LICENSE("GPL");
#ifndef NDEBUG
-MODULE_PARM(debugflags, "i");
+module_param(debugflags, int, 0644);
MODULE_PARM_DESC(debugflags, "flags for controlling debug output");
#endif
/*
* elevator noop
*/
-#include <linux/kernel.h>
-#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/elevator.h>
#include <linux/bio.h>
-#include <linux/config.h>
#include <linux/module.h>
-#include <linux/slab.h>
#include <linux/init.h>
-#include <linux/compiler.h>
-
-#include <asm/uaccess.h>
/*
* See if we can find a request that this buffer can be coalesced with.
*/
-int elevator_noop_merge(request_queue_t *q, struct request **req,
- struct bio *bio)
+static int elevator_noop_merge(request_queue_t *q, struct request **req,
+ struct bio *bio)
{
struct list_head *entry = &q->queue_head;
struct request *__rq;
return ELEVATOR_NO_MERGE;
}
-void elevator_noop_merge_requests(request_queue_t *q, struct request *req,
- struct request *next)
+static void elevator_noop_merge_requests(request_queue_t *q, struct request *req,
+ struct request *next)
{
list_del_init(&next->queuelist);
}
-void elevator_noop_add_request(request_queue_t *q, struct request *rq,
- int where)
+static void elevator_noop_add_request(request_queue_t *q, struct request *rq,
+ int where)
{
- struct list_head *insert = q->queue_head.prev;
-
if (where == ELEVATOR_INSERT_FRONT)
- insert = &q->queue_head;
-
- list_add_tail(&rq->queuelist, &q->queue_head);
+ list_add(&rq->queuelist, &q->queue_head);
+ else
+ list_add_tail(&rq->queuelist, &q->queue_head);
/*
* new merges must not precede this barrier
q->last_merge = rq;
}
-struct request *elevator_noop_next_request(request_queue_t *q)
+static struct request *elevator_noop_next_request(request_queue_t *q)
{
if (!list_empty(&q->queue_head))
return list_entry_rq(q->queue_head.next);
.elevator_owner = THIS_MODULE,
};
-int noop_init(void)
+static int __init noop_init(void)
{
return elv_register(&elevator_noop);
}
-void noop_exit(void)
+static void __exit noop_exit(void)
{
elv_unregister(&elevator_noop);
}
before 1999 were Series 5) Series 5 drives will NOT always have the
Series noted on the bottom of the drive. Series 6 drives will.
- In other words, if your BACKPACK drive dosen't say "Series 6" on the
+ In other words, if your BACKPACK drive doesn't say "Series 6" on the
bottom, enable this option.
If you chose to build PARIDE support into your kernel, you may
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Micro Solutions Inc.");
MODULE_DESCRIPTION("BACKPACK Protocol module, compatible with PARIDE");
-MODULE_PARM(verbose,"i");
+module_param(verbose, bool, 0644);
module_init(bpck6_init)
module_exit(bpck6_exit)
static int ps_tq_active = 0;
static int ps_nice = 0;
-static spinlock_t ps_spinlock __attribute__((unused)) = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ps_spinlock __attribute__((unused)));
static DECLARE_WORK(ps_tq, ps_tq_int, NULL);
goto no_bio;
for (i = 0; i < PAGES_PER_PACKET; i++) {
- pkt->pages[i] = alloc_page(GFP_KERNEL);
+ pkt->pages[i] = alloc_page(GFP_KERNEL|__GFP_ZERO);
if (!pkt->pages[i])
goto no_page;
}
- for (i = 0; i < PAGES_PER_PACKET; i++)
- clear_page(page_address(pkt->pages[i]));
spin_lock_init(&pkt->lock);
/*
* Copy CD_FRAMESIZE bytes from src_bio into a destination page
*/
-static void pkt_copy_bio_data(struct bio *src_bio, int seg, int offs,
- struct page *dst_page, int dst_offs)
+static void pkt_copy_bio_data(struct bio *src_bio, int seg, int offs, struct page *dst_page, int dst_offs)
{
unsigned int copy_size = CD_FRAMESIZE;
printk("Mode-%c disc\n", pd->settings.block_mode == 8 ? '1' : '2');
}
-static int pkt_mode_sense(struct pktcdvd_device *pd, struct packet_command *cgc,
- int page_code, int page_control)
+static int pkt_mode_sense(struct pktcdvd_device *pd, struct packet_command *cgc, int page_code, int page_control)
{
memset(cgc->cmd, 0, sizeof(cgc->cmd));
.fops = &pkt_ctl_fops
};
-int pkt_init(void)
+static int pkt_init(void)
{
int ret;
return ret;
}
-void pkt_exit(void)
+static void pkt_exit(void)
{
remove_proc_entry("pktcdvd", proc_root_driver);
misc_deregister(&pkt_misc);
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
- * Questions/Comments/Bugfixes to arrays@compaq.com
+ * Questions/Comments/Bugfixes to iss_storagedev@hp.com
*
* If you want to make changes, improve or add functionality to this
* driver, you'll probably need the Compaq Array Controller Interface
static int floppy_count;
static struct floppy_state floppy_states[MAX_FLOPPIES];
-static spinlock_t swim_iop_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(swim_iop_lock);
#define CURRENT elv_next_request(swim_queue)
}
port->disk = disk;
- sprintf(disk->disk_name, DRV_NAME "%u_%u", host->id, i);
+ sprintf(disk->disk_name, DRV_NAME "/%u",
+ (unsigned int) (host->id * CARM_MAX_PORTS) + i);
sprintf(disk->devfs_name, DRV_NAME "/%u_%u", host->id, i);
disk->major = host->major;
disk->first_minor = i * CARM_MINORS_PER_MAJOR;
*/
#define UB_URB_TIMEOUT (HZ*2)
#define UB_DATA_TIMEOUT (HZ*5) /* ZIP does spin-ups in the data phase */
-#define UB_CTRL_TIMEOUT (HZ/2) /* 500ms ought to be enough to clear a stall */
+#define UB_STAT_TIMEOUT (HZ*5) /* Same spinups and eject for a dataless cmd. */
+#define UB_CTRL_TIMEOUT (HZ/2) /* 500ms ought to be enough to clear a stall */
/*
* An instance of a SCSI command in transit.
/*
*/
+static int ub_bd_rq_fn_1(struct ub_dev *sc, struct request *rq);
+static int ub_cmd_build_block(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
+ struct request *rq);
+static int ub_cmd_build_packet(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
+ struct request *rq);
static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_end_rq(struct request *rq, int uptodate);
static int ub_submit_scsi(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_scsi_dispatch(struct ub_dev *sc);
static void ub_scsi_urb_compl(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd, int rc);
+static void __ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_sense(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static int ub_submit_clear_stall(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
*/
#define UB_MAX_HOSTS 26
static char ub_hostv[UB_MAX_HOSTS];
-static spinlock_t ub_lock = SPIN_LOCK_UNLOCKED; /* Locks globals and ->openc */
+static DEFINE_SPINLOCK(ub_lock); /* Locks globals and ->openc */
/*
* The SCSI command tracing procedures.
* The request function is our main entry point
*/
-static inline int ub_bd_rq_fn_1(request_queue_t *q)
+static void ub_bd_rq_fn(request_queue_t *q)
{
-#if 0
- int writing = 0, pci_dir, i, n_elem;
- u32 tmp;
- unsigned int msg_size;
-#endif
struct ub_dev *sc = q->queuedata;
struct request *rq;
-#if 0 /* We use rq->buffer for now */
- struct scatterlist *sg;
- int n_elem;
-#endif
+
+ while ((rq = elv_next_request(q)) != NULL) {
+ if (ub_bd_rq_fn_1(sc, rq) != 0) {
+ blk_stop_queue(q);
+ break;
+ }
+ }
+}
+
+static int ub_bd_rq_fn_1(struct ub_dev *sc, struct request *rq)
+{
struct ub_scsi_cmd *cmd;
- int ub_dir;
- unsigned int block, nblks;
int rc;
- if ((rq = elv_next_request(q)) == NULL)
- return 1;
-
if (atomic_read(&sc->poison) || sc->changed) {
blkdev_dequeue_request(rq);
ub_end_rq(rq, 0);
return 0;
}
- if ((cmd = ub_get_cmd(sc)) == NULL) {
- blk_stop_queue(q);
- return 1;
- }
+ if ((cmd = ub_get_cmd(sc)) == NULL)
+ return -1;
+ memset(cmd, 0, sizeof(struct ub_scsi_cmd));
blkdev_dequeue_request(rq);
+ if (blk_pc_request(rq)) {
+ rc = ub_cmd_build_packet(sc, cmd, rq);
+ } else {
+ rc = ub_cmd_build_block(sc, cmd, rq);
+ }
+ if (rc != 0) {
+ ub_put_cmd(sc, cmd);
+ ub_end_rq(rq, 0);
+ blk_start_queue(sc->disk->queue);
+ return 0;
+ }
+
+ cmd->state = UB_CMDST_INIT;
+ cmd->done = ub_rw_cmd_done;
+ cmd->back = rq;
+
+ cmd->tag = sc->tagcnt++;
+ if ((rc = ub_submit_scsi(sc, cmd)) != 0) {
+ ub_put_cmd(sc, cmd);
+ ub_end_rq(rq, 0);
+ blk_start_queue(sc->disk->queue);
+ return 0;
+ }
+
+ return 0;
+}
+
+static int ub_cmd_build_block(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
+ struct request *rq)
+{
+ int ub_dir;
+#if 0 /* We use rq->buffer for now */
+ struct scatterlist *sg;
+ int n_elem;
+#endif
+ unsigned int block, nblks;
+
if (rq_data_dir(rq) == WRITE)
ub_dir = UB_DIR_WRITE;
else
return 0;
}
#endif
+
/*
* XXX Unfortunately, this check does not work. It is quite possible
* to get bogus non-null rq->buffer if you allow sg by mistake.
*/
static int do_print = 1;
if (do_print) {
- printk(KERN_WARNING "%s: unmapped request\n", sc->name);
+ printk(KERN_WARNING "%s: unmapped block request"
+ " flags 0x%lx sectors %lu\n",
+ sc->name, rq->flags, rq->nr_sectors);
do_print = 0;
}
- ub_put_cmd(sc, cmd);
- ub_end_rq(rq, 0);
- blk_start_queue(q);
- return 0;
+ return -1;
}
/*
block = rq->sector >> sc->capacity.bshift;
nblks = rq->nr_sectors >> sc->capacity.bshift;
- memset(cmd, 0, sizeof(struct ub_scsi_cmd));
cmd->cdb[0] = (ub_dir == UB_DIR_READ)? READ_10: WRITE_10;
/* 10-byte uses 4 bytes of LBA: 2147483648KB, 2097152MB, 2048GB */
cmd->cdb[2] = block >> 24;
cmd->cdb[7] = nblks >> 8;
cmd->cdb[8] = nblks;
cmd->cdb_len = 10;
+
cmd->dir = ub_dir;
- cmd->state = UB_CMDST_INIT;
cmd->data = rq->buffer;
cmd->len = rq->nr_sectors * 512;
- cmd->done = ub_rw_cmd_done;
- cmd->back = rq;
-
- cmd->tag = sc->tagcnt++;
- if ((rc = ub_submit_scsi(sc, cmd)) != 0) {
- ub_put_cmd(sc, cmd);
- ub_end_rq(rq, 0);
- blk_start_queue(q);
- return 0;
- }
return 0;
}
-static void ub_bd_rq_fn(request_queue_t *q)
+static int ub_cmd_build_packet(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
+ struct request *rq)
{
- do { } while (ub_bd_rq_fn_1(q) == 0);
+
+ if (rq->data_len != 0 && rq->data == NULL) {
+ static int do_print = 1;
+ if (do_print) {
+ printk(KERN_WARNING "%s: unmapped packet request"
+ " flags 0x%lx length %d\n",
+ sc->name, rq->flags, rq->data_len);
+ do_print = 0;
+ }
+ return -1;
+ }
+
+ memcpy(&cmd->cdb, rq->cmd, rq->cmd_len);
+ cmd->cdb_len = rq->cmd_len;
+
+ if (rq->data_len == 0) {
+ cmd->dir = UB_DIR_NONE;
+ } else {
+ if (rq_data_dir(rq) == WRITE)
+ cmd->dir = UB_DIR_WRITE;
+ else
+ cmd->dir = UB_DIR_READ;
+ }
+ cmd->data = rq->data;
+ cmd->len = rq->data_len;
+
+ return 0;
}
static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
request_queue_t *q = disk->queue;
int uptodate;
+ if (blk_pc_request(rq)) {
+ /* UB_SENSE_SIZE is smaller than SCSI_SENSE_BUFFERSIZE */
+ memcpy(rq->sense, sc->top_sense, UB_SENSE_SIZE);
+ rq->sense_len = UB_SENSE_SIZE;
+ }
+
if (cmd->error == 0)
uptodate = 1;
else
bcb = &sc->work_bcb;
+ /*
+ * ``If the allocation length is eighteen or greater, and a device
+ * server returns less than eighteen bytes of data, the application
+ * client should assume that the bytes not transferred would have been
+ * zeroes had the device server returned those bytes.''
+ *
+ * We zero sense for all commands so that when a packet request
+ * fails it does not return a stale sense.
+ */
+ memset(&sc->top_sense, 0, UB_SENSE_SIZE);
+
/* set up the command wrapper */
bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN);
bcb->Tag = cmd->tag; /* Endianness is not important */
if (urb->status == -EPIPE) {
/*
* STALL while clearing STALL.
- * A STALL is illegal on a control pipe!
+ * The control pipe clears itself - nothing to do.
* XXX Might try to reset the device here and retry.
*/
printk(KERN_NOTICE "%s: "
if (urb->status == -EPIPE) {
/*
* STALL while clearing STALL.
- * A STALL is illegal on a control pipe!
+ * The control pipe clears itself - nothing to do.
* XXX Might try to reset the device here and retry.
*/
printk(KERN_NOTICE "%s: "
rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
if (rc != 0) {
printk(KERN_NOTICE "%s: "
- "unable to submit clear for device %u (%d)\n",
+ "unable to submit clear for device %u"
+ " (code %d)\n",
sc->name, sc->dev->devnum, rc);
/*
* This is typically ENOMEM or some other such shit.
rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
if (rc != 0) {
printk(KERN_NOTICE "%s: "
- "unable to submit clear for device %u (%d)\n",
+ "unable to submit clear for device %u"
+ " (code %d)\n",
sc->name, sc->dev->devnum, rc);
/*
* This is typically ENOMEM or some other such shit.
rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
if (rc != 0) {
printk(KERN_NOTICE "%s: "
- "unable to submit clear for device %u (%d)\n",
+ "unable to submit clear for device %u"
+ " (code %d)\n",
sc->name, sc->dev->devnum, rc);
/*
* This is typically ENOMEM or some other such shit.
sc->name, sc->dev->devnum);
goto Bad_End;
}
-
- /*
- * ub_state_stat only not dropping the count...
- */
- UB_INIT_COMPLETION(sc->work_done);
-
- sc->last_pipe = sc->recv_bulk_pipe;
- usb_fill_bulk_urb(&sc->work_urb, sc->dev,
- sc->recv_bulk_pipe, &sc->work_bcs,
- US_BULK_CS_WRAP_LEN, ub_urb_complete, sc);
- sc->work_urb.transfer_flags = URB_ASYNC_UNLINK;
- sc->work_urb.actual_length = 0;
- sc->work_urb.error_count = 0;
- sc->work_urb.status = 0;
-
- rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC);
- if (rc != 0) {
- /* XXX Clear stalls */
- printk("%s: CSW #%d submit failed (%d)\n",
- sc->name, cmd->tag, rc); /* P3 */
- ub_complete(&sc->work_done);
- ub_state_done(sc, cmd, rc);
- return;
- }
-
- sc->work_timer.expires = jiffies + UB_URB_TIMEOUT;
- add_timer(&sc->work_timer);
+ __ub_state_stat(sc, cmd);
return;
}
goto Bad_End;
}
+#if 0
if (bcs->Signature != cpu_to_le32(US_BULK_CS_SIGN) &&
bcs->Signature != cpu_to_le32(US_BULK_CS_OLYMPUS_SIGN)) {
- /* XXX Rate-limit, even for P3 tagged */
- /* P3 */ printk("ub: signature 0x%x\n", bcs->Signature);
/* Windows ignores signatures, so do we. */
}
+#endif
if (bcs->Tag != cmd->tag) {
- /* P3 */ printk("%s: tag orig 0x%x reply 0x%x\n",
- sc->name, cmd->tag, bcs->Tag);
- goto Bad_End;
+ /*
+ * This usually happens when we disagree with the
+ * device's microcode about something. For instance,
+ * a few of them throw this after timeouts. They buffer
+ * commands and reply to commands we timed out before.
+ * Without flushing these replies we loop forever.
+ */
+ if (++cmd->stat_count >= 4) {
+ printk(KERN_NOTICE "%s: "
+ "tag mismatch orig 0x%x reply 0x%x "
+ "on device %u\n",
+ sc->name, cmd->tag, bcs->Tag,
+ sc->dev->devnum);
+ goto Bad_End;
+ }
+ __ub_state_stat(sc, cmd);
+ return;
}
switch (bcs->Status) {
/*
* Factorization helper for the command state machine:
- * Submit a CSW read and go to STAT state.
+ * Submit a CSW read.
*/
-static void ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
+static void __ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
int rc;
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
/* XXX Clear stalls */
- printk("ub: CSW #%d submit failed (%d)\n", cmd->tag, rc); /* P3 */
+ printk("%s: CSW #%d submit failed (%d)\n", sc->name, cmd->tag, rc); /* P3 */
ub_complete(&sc->work_done);
ub_state_done(sc, cmd, rc);
return;
}
- sc->work_timer.expires = jiffies + UB_URB_TIMEOUT;
+ sc->work_timer.expires = jiffies + UB_STAT_TIMEOUT;
add_timer(&sc->work_timer);
+}
+
+/*
+ * Factorization helper for the command state machine:
+ * Submit a CSW read and go to STAT state.
+ */
+static void ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
+{
+ __ub_state_stat(sc, cmd);
cmd->stat_count = 0;
cmd->state = UB_CMDST_STAT;
goto error;
}
- /*
- * ``If the allocation length is eighteen or greater, and a device
- * server returns less than eithteen bytes of data, the application
- * client should assume that the bytes not transferred would have been
- * zeroes had the device server returned those bytes.''
- */
- memset(&sc->top_sense, 0, UB_SENSE_SIZE);
-
scmd = &sc->top_rqs_cmd;
scmd->cdb[0] = REQUEST_SENSE;
scmd->cdb[4] = UB_SENSE_SIZE;
static int ub_bd_ioctl(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
{
-// void __user *usermem = (void *) arg;
-// struct carm_port *port = ino->i_bdev->bd_disk->private_data;
-// struct hd_geometry geom;
-
-#if 0
- switch (cmd) {
- case HDIO_GETGEO:
- if (usermem == NULL) // XXX Bizzare. Why?
- return -EINVAL;
-
- geom.heads = (u8) port->dev_geom_head;
- geom.sectors = (u8) port->dev_geom_sect;
- geom.cylinders = port->dev_geom_cyl;
- geom.start = get_start_sect(ino->i_bdev);
-
- if (copy_to_user(usermem, &geom, sizeof(geom)))
- return -EFAULT;
- return 0;
-
- default: ;
- }
-#endif
+ struct gendisk *disk = inode->i_bdev->bd_disk;
+ void __user *usermem = (void __user *) arg;
- return -ENOTTY;
+ return scsi_cmd_ioctl(filp, disk, cmd, usermem);
}
/*
static int list_count = 0;
static int current_device = -1;
-static spinlock_t z2ram_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(z2ram_lock);
static struct block_device_operations z2_fops;
static struct gendisk *z2ram_gendisk;
Say Y here to compile support for HCI BCM203x devices into the
kernel or say M to compile it as module (bcm203x).
+config BT_HCIBPA10X
+ tristate "HCI BPA10x USB driver"
+ depends on USB
+ help
+ Bluetooth HCI BPA10x USB driver.
+ This driver provides support for the Digianswer BPA 100/105 Bluetooth
+ sniffer devices.
+
+ Say Y here to compile support for HCI BPA10x devices into the
+ kernel or say M to compile it as module (bpa10x).
+
config BT_HCIBFUSB
tristate "HCI BlueFRITZ! USB driver"
depends on USB
obj-$(CONFIG_BT_HCIVHCI) += hci_vhci.o
obj-$(CONFIG_BT_HCIUART) += hci_uart.o
obj-$(CONFIG_BT_HCIBCM203X) += bcm203x.o
+obj-$(CONFIG_BT_HCIBPA10X) += bpa10x.o
obj-$(CONFIG_BT_HCIBFUSB) += bfusb.o
obj-$(CONFIG_BT_HCIDTL1) += dtl1_cs.o
obj-$(CONFIG_BT_HCIBT3C) += bt3c_cs.o
/* ======================== Module parameters ======================== */
-/* Bit map of interrupts to choose from */
-static unsigned int irq_mask = 0x86bc;
-static int irq_list[4] = { -1 };
-
-module_param(irq_mask, uint, 0);
-module_param_array(irq_list, int, NULL, 0);
-
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
MODULE_DESCRIPTION("Bluetooth driver for the Anycom BlueCard (LSE039/LSE041)");
MODULE_LICENSE("GPL");
bluecard_info_t *info;
client_reg_t client_reg;
dev_link_t *link;
- int i, ret;
+ int ret;
/* Create new info device */
info = kmalloc(sizeof(*info), GFP_KERNEL);
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
-
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = bluecard_interrupt;
link->irq.Instance = info;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void __exit exit_bluecard_cs(void)
{
pcmcia_unregister_driver(&bluecard_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- bluecard_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(init_bluecard_cs);
--- /dev/null
+/*
+ *
+ * Digianswer Bluetooth USB driver
+ *
+ * Copyright (C) 2004-2005 Marcel Holtmann <marcel@holtmann.org>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include <linux/usb.h>
+
+#include <net/bluetooth/bluetooth.h>
+#include <net/bluetooth/hci_core.h>
+
+#ifndef CONFIG_BT_HCIBPA10X_DEBUG
+#undef BT_DBG
+#define BT_DBG(D...)
+#endif
+
+#define VERSION "0.8"
+
+static int ignore = 0;
+
+static struct usb_device_id bpa10x_table[] = {
+ /* Tektronix BPA 100/105 (Digianswer) */
+ { USB_DEVICE(0x08fd, 0x0002) },
+
+ { } /* Terminating entry */
+};
+
+MODULE_DEVICE_TABLE(usb, bpa10x_table);
+
+#define BPA10X_CMD_EP 0x00
+#define BPA10X_EVT_EP 0x81
+#define BPA10X_TX_EP 0x02
+#define BPA10X_RX_EP 0x82
+
+#define BPA10X_CMD_BUF_SIZE 252
+#define BPA10X_EVT_BUF_SIZE 16
+#define BPA10X_TX_BUF_SIZE 384
+#define BPA10X_RX_BUF_SIZE 384
+
+struct bpa10x_data {
+ struct hci_dev *hdev;
+ struct usb_device *udev;
+
+ rwlock_t lock;
+
+ struct sk_buff_head cmd_queue;
+ struct urb *cmd_urb;
+ struct urb *evt_urb;
+ struct sk_buff *evt_skb;
+ unsigned int evt_len;
+
+ struct sk_buff_head tx_queue;
+ struct urb *tx_urb;
+ struct urb *rx_urb;
+};
+
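+/* Vendor packets carry a five byte header of their own: a type byte
+ * followed by a 16-bit sequence number and a 16-bit payload length.
+ */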
+#define HCI_VENDOR_HDR_SIZE 5
+
+struct hci_vendor_hdr {
+ __u8 type;
+ __u16 snum;
+ __u16 dlen;
+} __attribute__ ((packed));
+
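+/* bpa10x_recv_bulk: split a bulk-in transfer into individual ACL, SCO and
+ * vendor packets and hand each one to the HCI core.  A one byte packet
+ * type precedes every packet; an unknown type skips the rest of the
+ * buffer.
+ */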
+static void bpa10x_recv_bulk(struct bpa10x_data *data, unsigned char *buf, int count)
+{
+ struct hci_acl_hdr *ah;
+ struct hci_sco_hdr *sh;
+ struct hci_vendor_hdr *vh;
+ struct sk_buff *skb;
+ int len;
+
+ while (count) {
+ switch (*buf++) {
+ case HCI_ACLDATA_PKT:
+ ah = (struct hci_acl_hdr *) buf;
+ len = HCI_ACL_HDR_SIZE + __le16_to_cpu(ah->dlen);
+ skb = bt_skb_alloc(len, GFP_ATOMIC);
+ if (skb) {
+ memcpy(skb_put(skb, len), buf, len);
+ skb->dev = (void *) data->hdev;
+ skb->pkt_type = HCI_ACLDATA_PKT;
+ hci_recv_frame(skb);
+ }
+ break;
+
+ case HCI_SCODATA_PKT:
+ sh = (struct hci_sco_hdr *) buf;
+ len = HCI_SCO_HDR_SIZE + sh->dlen;
+ skb = bt_skb_alloc(len, GFP_ATOMIC);
+ if (skb) {
+ memcpy(skb_put(skb, len), buf, len);
+ skb->dev = (void *) data->hdev;
+ skb->pkt_type = HCI_SCODATA_PKT;
+ hci_recv_frame(skb);
+ }
+ break;
+
+ case HCI_VENDOR_PKT:
+ vh = (struct hci_vendor_hdr *) buf;
+ len = HCI_VENDOR_HDR_SIZE + __le16_to_cpu(vh->dlen);
+ skb = bt_skb_alloc(len, GFP_ATOMIC);
+ if (skb) {
+ memcpy(skb_put(skb, len), buf, len);
+ skb->dev = (void *) data->hdev;
+ skb->pkt_type = HCI_VENDOR_PKT;
+ hci_recv_frame(skb);
+ }
+ break;
+
+ default:
+ len = count - 1;
+ break;
+ }
+
+ buf += len;
+ count -= (len + 1);
+ }
+}
+
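+/* bpa10x_recv_event: reassemble HCI event packets arriving over the
+ * interrupt pipe.  A packet may span several transfers, so a partially
+ * received event is parked in evt_skb until evt_len bytes have arrived.
+ */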
+static int bpa10x_recv_event(struct bpa10x_data *data, unsigned char *buf, int size)
+{
+ BT_DBG("data %p buf %p size %d", data, buf, size);
+
+ if (data->evt_skb) {
+ struct sk_buff *skb = data->evt_skb;
+
+ memcpy(skb_put(skb, size), buf, size);
+
+ if (skb->len == data->evt_len) {
+ data->evt_skb = NULL;
+ data->evt_len = 0;
+ hci_recv_frame(skb);
+ }
+ } else {
+ struct sk_buff *skb;
+ struct hci_event_hdr *hdr;
+ unsigned char pkt_type;
+ int pkt_len = 0;
+
+ if (size < HCI_EVENT_HDR_SIZE + 1) {
+ BT_ERR("%s event packet block with size %d is too short",
+ data->hdev->name, size);
+ return -EILSEQ;
+ }
+
+ pkt_type = *buf++;
+ size--;
+
+ if (pkt_type != HCI_EVENT_PKT) {
+ BT_ERR("%s unexpected event packet start byte 0x%02x",
+ data->hdev->name, pkt_type);
+ return -EPROTO;
+ }
+
+ hdr = (struct hci_event_hdr *) buf;
+ pkt_len = HCI_EVENT_HDR_SIZE + hdr->plen;
+
+ skb = bt_skb_alloc(pkt_len, GFP_ATOMIC);
+ if (!skb) {
+ BT_ERR("%s no memory for new event packet",
+ data->hdev->name);
+ return -ENOMEM;
+ }
+
+ skb->dev = (void *) data->hdev;
+ skb->pkt_type = pkt_type;
+
+ memcpy(skb_put(skb, size), buf, size);
+
+ if (pkt_len == size) {
+ hci_recv_frame(skb);
+ } else {
+ data->evt_skb = skb;
+ data->evt_len = pkt_len;
+ }
+ }
+
+ return 0;
+}
+
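+/* bpa10x_wakeup: feed the transmit side.  If the control or bulk-out URB
+ * is idle, dequeue the next frame from the matching queue and submit it;
+ * a failed submission (other than -ENODEV) puts the frame back on its
+ * queue.
+ */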
+static void bpa10x_wakeup(struct bpa10x_data *data)
+{
+ struct urb *urb;
+ struct sk_buff *skb;
+ int err;
+
+ BT_DBG("data %p", data);
+
+ urb = data->cmd_urb;
+ if (urb->status == -EINPROGRESS)
+ skb = NULL;
+ else
+ skb = skb_dequeue(&data->cmd_queue);
+
+ if (skb) {
+ struct usb_ctrlrequest *cr;
+
+ if (skb->len > BPA10X_CMD_BUF_SIZE) {
+ BT_ERR("%s command packet with size %d is too big",
+ data->hdev->name, skb->len);
+ kfree_skb(skb);
+ return;
+ }
+
+ cr = (struct usb_ctrlrequest *) urb->setup_packet;
+ cr->wLength = __cpu_to_le16(skb->len);
+
+ memcpy(urb->transfer_buffer, skb->data, skb->len);
+ urb->transfer_buffer_length = skb->len;
+
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0 && err != -ENODEV) {
+ BT_ERR("%s submit failed for command urb %p with error %d",
+ data->hdev->name, urb, err);
+ skb_queue_head(&data->cmd_queue, skb);
+ } else
+ kfree_skb(skb);
+ }
+
+ urb = data->tx_urb;
+ if (urb->status == -EINPROGRESS)
+ skb = NULL;
+ else
+ skb = skb_dequeue(&data->tx_queue);
+
+ if (skb) {
+ memcpy(urb->transfer_buffer, skb->data, skb->len);
+ urb->transfer_buffer_length = skb->len;
+
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0 && err != -ENODEV) {
+ BT_ERR("%s submit failed for command urb %p with error %d",
+ data->hdev->name, urb, err);
+ skb_queue_head(&data->tx_queue, skb);
+ } else
+ kfree_skb(skb);
+ }
+}
+
+static void bpa10x_complete(struct urb *urb, struct pt_regs *regs)
+{
+ struct bpa10x_data *data = urb->context;
+ unsigned char *buf = urb->transfer_buffer;
+ int err, count = urb->actual_length;
+
+ BT_DBG("data %p urb %p buf %p count %d", data, urb, buf, count);
+
+ read_lock(&data->lock);
+
+ if (!test_bit(HCI_RUNNING, &data->hdev->flags))
+ goto unlock;
+
+ if (urb->status < 0 || !count)
+ goto resubmit;
+
+ if (usb_pipein(urb->pipe)) {
+ data->hdev->stat.byte_rx += count;
+
+ if (usb_pipetype(urb->pipe) == PIPE_INTERRUPT)
+ bpa10x_recv_event(data, buf, count);
+
+ if (usb_pipetype(urb->pipe) == PIPE_BULK)
+ bpa10x_recv_bulk(data, buf, count);
+ } else {
+ data->hdev->stat.byte_tx += count;
+
+ bpa10x_wakeup(data);
+ }
+
+resubmit:
+ if (usb_pipein(urb->pipe)) {
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0 && err != -ENODEV) {
+ BT_ERR("%s urb %p type %d resubmit status %d",
+ data->hdev->name, urb, usb_pipetype(urb->pipe), err);
+ }
+ }
+
+unlock:
+ read_unlock(&data->lock);
+}
+
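+/* bpa10x_alloc_urb: allocate an URB together with its transfer buffer
+ * and, for the control pipe, a vendor specific setup packet, all wired
+ * to bpa10x_complete as the completion handler.
+ */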
+static inline struct urb *bpa10x_alloc_urb(struct usb_device *udev, unsigned int pipe, size_t size, int flags, void *data)
+{
+ struct urb *urb;
+ struct usb_ctrlrequest *cr;
+ unsigned char *buf;
+
+ BT_DBG("udev %p data %p", udev, data);
+
+ urb = usb_alloc_urb(0, flags);
+ if (!urb)
+ return NULL;
+
+ buf = kmalloc(size, flags);
+ if (!buf) {
+ usb_free_urb(urb);
+ return NULL;
+ }
+
+ switch (usb_pipetype(pipe)) {
+ case PIPE_CONTROL:
+ cr = kmalloc(sizeof(*cr), flags);
+ if (!cr) {
+ kfree(buf);
+ usb_free_urb(urb);
+ return NULL;
+ }
+
+ cr->bRequestType = USB_TYPE_VENDOR;
+ cr->bRequest = 0;
+ cr->wIndex = 0;
+ cr->wValue = 0;
+ cr->wLength = __cpu_to_le16(0);
+
+ usb_fill_control_urb(urb, udev, pipe, (void *) cr, buf, 0, bpa10x_complete, data);
+ break;
+
+ case PIPE_INTERRUPT:
+ usb_fill_int_urb(urb, udev, pipe, buf, size, bpa10x_complete, data, 1);
+ break;
+
+ case PIPE_BULK:
+ usb_fill_bulk_urb(urb, udev, pipe, buf, size, bpa10x_complete, data);
+ break;
+
+ default:
+ kfree(buf);
+ usb_free_urb(urb);
+ return NULL;
+ }
+
+ return urb;
+}
+
+static inline void bpa10x_free_urb(struct urb *urb)
+{
+ BT_DBG("urb %p", urb);
+
+ if (!urb)
+ return;
+
+ if (urb->setup_packet)
+ kfree(urb->setup_packet);
+
+ if (urb->transfer_buffer)
+ kfree(urb->transfer_buffer);
+
+ usb_free_urb(urb);
+}
+
+static int bpa10x_open(struct hci_dev *hdev)
+{
+ struct bpa10x_data *data = hdev->driver_data;
+ struct usb_device *udev = data->udev;
+ unsigned long flags;
+ int err;
+
+ BT_DBG("hdev %p data %p", hdev, data);
+
+ if (test_and_set_bit(HCI_RUNNING, &hdev->flags))
+ return 0;
+
+ data->cmd_urb = bpa10x_alloc_urb(udev, usb_sndctrlpipe(udev, BPA10X_CMD_EP),
+ BPA10X_CMD_BUF_SIZE, GFP_KERNEL, data);
+ if (!data->cmd_urb) {
+ err = -ENOMEM;
+ goto done;
+ }
+
+ data->evt_urb = bpa10x_alloc_urb(udev, usb_rcvintpipe(udev, BPA10X_EVT_EP),
+ BPA10X_EVT_BUF_SIZE, GFP_KERNEL, data);
+ if (!data->evt_urb) {
+ bpa10x_free_urb(data->cmd_urb);
+ err = -ENOMEM;
+ goto done;
+ }
+
+ data->rx_urb = bpa10x_alloc_urb(udev, usb_rcvbulkpipe(udev, BPA10X_RX_EP),
+ BPA10X_RX_BUF_SIZE, GFP_KERNEL, data);
+ if (!data->rx_urb) {
+ bpa10x_free_urb(data->evt_urb);
+ bpa10x_free_urb(data->cmd_urb);
+ err = -ENOMEM;
+ goto done;
+ }
+
+ data->tx_urb = bpa10x_alloc_urb(udev, usb_sndbulkpipe(udev, BPA10X_TX_EP),
+ BPA10X_TX_BUF_SIZE, GFP_KERNEL, data);
+ if (!data->tx_urb) {
+ bpa10x_free_urb(data->rx_urb);
+ bpa10x_free_urb(data->evt_urb);
+ bpa10x_free_urb(data->cmd_urb);
+ err = -ENOMEM;
+ goto done;
+ }
+
+ write_lock_irqsave(&data->lock, flags);
+
+ err = usb_submit_urb(data->evt_urb, GFP_ATOMIC);
+ if (err < 0) {
+ BT_ERR("%s submit failed for event urb %p with error %d",
+ data->hdev->name, data->evt_urb, err);
+ } else {
+ err = usb_submit_urb(data->rx_urb, GFP_ATOMIC);
+ if (err < 0) {
+ BT_ERR("%s submit failed for rx urb %p with error %d",
+ data->hdev->name, data->rx_urb, err);
+ usb_kill_urb(data->evt_urb);
+ }
+ }
+
+ write_unlock_irqrestore(&data->lock, flags);
+
+done:
+ if (err < 0)
+ clear_bit(HCI_RUNNING, &hdev->flags);
+
+ return err;
+}
+
+static int bpa10x_close(struct hci_dev *hdev)
+{
+ struct bpa10x_data *data = hdev->driver_data;
+ unsigned long flags;
+
+ BT_DBG("hdev %p data %p", hdev, data);
+
+ if (!test_and_clear_bit(HCI_RUNNING, &hdev->flags))
+ return 0;
+
+ write_lock_irqsave(&data->lock, flags);
+
+ skb_queue_purge(&data->cmd_queue);
+ usb_kill_urb(data->cmd_urb);
+ usb_kill_urb(data->evt_urb);
+ usb_kill_urb(data->rx_urb);
+ usb_kill_urb(data->tx_urb);
+
+ write_unlock_irqrestore(&data->lock, flags);
+
+ bpa10x_free_urb(data->cmd_urb);
+ bpa10x_free_urb(data->evt_urb);
+ bpa10x_free_urb(data->rx_urb);
+ bpa10x_free_urb(data->tx_urb);
+
+ return 0;
+}
+
+static int bpa10x_flush(struct hci_dev *hdev)
+{
+ struct bpa10x_data *data = hdev->driver_data;
+
+ BT_DBG("hdev %p data %p", hdev, data);
+
+ skb_queue_purge(&data->cmd_queue);
+
+ return 0;
+}
+
+static int bpa10x_send_frame(struct sk_buff *skb)
+{
+ struct hci_dev *hdev = (struct hci_dev *) skb->dev;
+ struct bpa10x_data *data;
+
+ BT_DBG("hdev %p skb %p type %d len %d", hdev, skb, skb->pkt_type, skb->len);
+
+ if (!hdev) {
+ BT_ERR("Frame for unknown HCI device");
+ return -ENODEV;
+ }
+
+ if (!test_bit(HCI_RUNNING, &hdev->flags))
+ return -EBUSY;
+
+ data = hdev->driver_data;
+
+ /* Prepend skb with frame type */
+ memcpy(skb_push(skb, 1), &(skb->pkt_type), 1);
+
+ switch (skb->pkt_type) {
+ case HCI_COMMAND_PKT:
+ hdev->stat.cmd_tx++;
+ skb_queue_tail(&data->cmd_queue, skb);
+ break;
+
+ case HCI_ACLDATA_PKT:
+ hdev->stat.acl_tx++;
+ skb_queue_tail(&data->tx_queue, skb);
+ break;
+
+ case HCI_SCODATA_PKT:
+ hdev->stat.sco_tx++;
+ skb_queue_tail(&data->tx_queue, skb);
+ break;
+ };
+
+ read_lock(&data->lock);
+
+ bpa10x_wakeup(data);
+
+ read_unlock(&data->lock);
+
+ return 0;
+}
+
+static void bpa10x_destruct(struct hci_dev *hdev)
+{
+ struct bpa10x_data *data = hdev->driver_data;
+
+ BT_DBG("hdev %p data %p", hdev, data);
+
+ kfree(data);
+}
+
+static int bpa10x_probe(struct usb_interface *intf, const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct hci_dev *hdev;
+ struct bpa10x_data *data;
+ int err;
+
+ BT_DBG("intf %p id %p", intf, id);
+
+ if (ignore)
+ return -ENODEV;
+
+ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ BT_ERR("Can't allocate data structure");
+ return -ENOMEM;
+ }
+
+ memset(data, 0, sizeof(*data));
+
+ data->udev = udev;
+
+ rwlock_init(&data->lock);
+
+ skb_queue_head_init(&data->cmd_queue);
+ skb_queue_head_init(&data->tx_queue);
+
+ hdev = hci_alloc_dev();
+ if (!hdev) {
+ BT_ERR("Can't allocate HCI device");
+ kfree(data);
+ return -ENOMEM;
+ }
+
+ data->hdev = hdev;
+
+ hdev->type = HCI_USB;
+ hdev->driver_data = data;
+ SET_HCIDEV_DEV(hdev, &intf->dev);
+
+ hdev->open = bpa10x_open;
+ hdev->close = bpa10x_close;
+ hdev->flush = bpa10x_flush;
+ hdev->send = bpa10x_send_frame;
+ hdev->destruct = bpa10x_destruct;
+
+ hdev->owner = THIS_MODULE;
+
+ err = hci_register_dev(hdev);
+ if (err < 0) {
+ BT_ERR("Can't register HCI device");
+ kfree(data);
+ hci_free_dev(hdev);
+ return err;
+ }
+
+ usb_set_intfdata(intf, data);
+
+ return 0;
+}
+
+static void bpa10x_disconnect(struct usb_interface *intf)
+{
+ struct bpa10x_data *data = usb_get_intfdata(intf);
+ struct hci_dev *hdev = data->hdev;
+
+ BT_DBG("intf %p", intf);
+
+ if (!hdev)
+ return;
+
+ usb_set_intfdata(intf, NULL);
+
+ if (hci_unregister_dev(hdev) < 0)
+ BT_ERR("Can't unregister HCI device %s", hdev->name);
+
+ hci_free_dev(hdev);
+}
+
+static struct usb_driver bpa10x_driver = {
+ .owner = THIS_MODULE,
+ .name = "bpa10x",
+ .probe = bpa10x_probe,
+ .disconnect = bpa10x_disconnect,
+ .id_table = bpa10x_table,
+};
+
+static int __init bpa10x_init(void)
+{
+ int err;
+
+ BT_INFO("Digianswer Bluetooth USB driver ver %s", VERSION);
+
+ err = usb_register(&bpa10x_driver);
+ if (err < 0)
+ BT_ERR("Failed to register USB driver");
+
+ return err;
+}
+
+static void __exit bpa10x_exit(void)
+{
+ usb_deregister(&bpa10x_driver);
+}
+
+module_init(bpa10x_init);
+module_exit(bpa10x_exit);
+
+module_param(ignore, bool, 0644);
+MODULE_PARM_DESC(ignore, "Ignore devices from the matching table");
+
+MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
+MODULE_DESCRIPTION("Digianswer Bluetooth USB driver ver " VERSION);
+MODULE_VERSION(VERSION);
+MODULE_LICENSE("GPL");
/* ======================== Module parameters ======================== */
-/* Bit map of interrupts to choose from */
-static unsigned int irq_mask = 0xffff;
-static int irq_list[4] = { -1 };
-
-module_param(irq_mask, uint, 0);
-module_param_array(irq_list, int, NULL, 0);
-
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
MODULE_DESCRIPTION("Bluetooth driver for Bluetooth PCMCIA cards with HCI UART interface");
MODULE_LICENSE("GPL");
btuart_info_t *info;
client_reg_t client_reg;
dev_link_t *link;
- int i, ret;
+ int ret;
/* Create new info device */
info = kmalloc(sizeof(*info), GFP_KERNEL);
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
-
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = btuart_interrupt;
link->irq.Instance = info;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void btuart_config(dev_link_t *link)
{
- static ioaddr_t base[5] = { 0x3f8, 0x2f8, 0x3e8, 0x2e8, 0x0 };
+ static kio_addr_t base[5] = { 0x3f8, 0x2f8, 0x3e8, 0x2e8, 0x0 };
client_handle_t handle = link->handle;
btuart_info_t *info = link->priv;
tuple_t tuple;
static void __exit exit_btuart_cs(void)
{
pcmcia_unregister_driver(&btuart_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- btuart_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(init_btuart_cs);
/* ======================== Module parameters ======================== */
-/* Bit map of interrupts to choose from */
-static unsigned int irq_mask = 0xffff;
-static int irq_list[4] = { -1 };
-
-module_param(irq_mask, uint, 0);
-module_param_array(irq_list, int, NULL, 0);
-
MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
MODULE_DESCRIPTION("Bluetooth driver for Nokia Connectivity Card DTL-1");
MODULE_LICENSE("GPL");
dtl1_info_t *info;
client_reg_t client_reg;
dev_link_t *link;
- int i, ret;
+ int ret;
/* Create new info device */
info = kmalloc(sizeof(*info), GFP_KERNEL);
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
-
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = dtl1_interrupt;
link->irq.Instance = info;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void __exit exit_dtl1_cs(void)
{
pcmcia_unregister_driver(&dtl1_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- dtl1_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(init_dtl1_cs);
have, say Y and find out whether you have one of the following
drives.
- For each of these drivers, a file Documentation/cdrom/{driver_name}
+ For each of these drivers, a <file:Documentation/cdrom/{driver_name}>
exists. Especially in cases where you do not know exactly which kind
of drive you have you should read there. Most of these drivers use a
file drivers/cdrom/{driver_name}.h where you can define your
config MCD
tristate "Mitsumi (standard) [no XA/Multisession] CDROM support"
- depends on CD_NO_IDESCSI
+ depends on CD_NO_IDESCSI && BROKEN
---help---
This is the older of the two drivers for the older Mitsumi models
LU-005, FX-001 and FX-001D. This is not the right driver for the
static struct cm206_struct *cd; /* the main memory structure */
static struct request_queue *cm206_queue;
-static spinlock_t cm206_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cm206_lock);
/* First, we define some polling functions. These are actually
only being used in the initialization. */
#define MAJOR_NR GOLDSTAR_CDROM_MAJOR
#include <linux/blkdev.h>
-#define gscd_port gscd /* for compatible parameter passing with "insmod" */
#include "gscd.h"
static int gscdPresent = 0;
static unsigned char gscd_buf[2048]; /* buffer for block size conversion */
static int gscd_bn = -1;
static short gscd_port = GSCD_BASE_ADDR;
-MODULE_PARM(gscd, "h");
+module_param_named(gscd, gscd_port, short, 0);
/* Maybe this will come up again later ...
* static DECLARE_WAIT_QUEUE_HEAD(gscd_waitq);
static int AudioEnd_f;
static struct timer_list gscd_timer = TIMER_INITIALIZER(NULL, 0, 0);
-static spinlock_t gscd_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(gscd_lock);
static struct request_queue *gscd_queue;
static struct block_device_operations gscd_fops = {
#define ISP16_IO_BASE 0xF8D
#define ISP16_IO_SIZE 5 /* ports used from 0xF8D up to 0xF91 */
-
-int isp16_init(void);
#include <asm/uaccess.h>
#include <linux/blkdev.h>
-#define mcd_port mcd /* for compatible parameter passing with "insmod" */
#include "mcd.h"
/* I added A flag to drop to 1x speed if too many errors 0 = 1X ; 1 = 2X */
static short mcd_port = CONFIG_MCD_BASE; /* used as "mcd" by "insmod" */
static int mcd_irq = CONFIG_MCD_IRQ; /* must directly follow mcd_port */
-MODULE_PARM(mcd, "1-2i");
static int McdTimeout, McdTries;
static DECLARE_WAIT_QUEUE_HEAD(mcd_waitq);
static void mcd_release(struct cdrom_device_info *cdi);
static int mcd_media_changed(struct cdrom_device_info *cdi, int disc_nr);
static int mcd_tray_move(struct cdrom_device_info *cdi, int position);
-static spinlock_t mcd_spinlock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(mcd_spinlock);
static int mcd_audio_ioctl(struct cdrom_device_info *cdi, unsigned int cmd,
void *arg);
static int mcd_drive_status(struct cdrom_device_info *cdi, int slot_nr);
static struct gendisk *mcd_gendisk;
-#ifndef MODULE
static int __init mcd_setup(char *str)
{
int ints[9];
__setup("mcd=", mcd_setup);
-#endif /* MODULE */
+#ifdef MODULE
+static int __init param_set_mcd(const char *val, struct kernel_param *kp)
+{
+ mcd_setup(val);
+ return 0;
+}
+module_param_call(mcd, param_set_mcd, NULL, NULL, 0);
+#endif
static int mcd_media_changed(struct cdrom_device_info *cdi, int disc_nr)
{
#define optcd_port optcd /* Needed for the modutils. */
static short optcd_port = OPTCD_PORTBASE; /* I/O base of drive. */
-MODULE_PARM(optcd_port, "h");
+module_param(optcd_port, short, 0);
/* Drive registers, read */
#define DATA_PORT optcd_port /* Read data/status */
#define STATUS_PORT optcd_port+1 /* Indicate data/status availability */
static DECLARE_WAIT_QUEUE_HEAD(waitq);
static void sleep_timer(unsigned long data);
static struct timer_list delay_timer = TIMER_INITIALIZER(sleep_timer, 0, 0);
-static spinlock_t optcd_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(optcd_lock);
static struct request_queue *opt_queue;
/* Timer routine: wake up when desired flag goes low,
/*
* Protects access to global structures etc.
*/
-static spinlock_t sbpcd_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
+static __cacheline_aligned DEFINE_SPINLOCK(sbpcd_lock);
static struct request_queue *sbpcd_queue;
MODULE_PARM(sbpcd, "2i");
static volatile unsigned char sjcd_completion_error = 0;
static unsigned short sjcd_command_is_in_progress = 0;
static unsigned short sjcd_error_reported = 0;
-static spinlock_t sjcd_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(sjcd_lock);
static int sjcd_open_count;
static int sjcd_base = SJCD_BASE_ADDR;
-MODULE_PARM(sjcd_base, "i");
+module_param(sjcd_base, int, 0);
static DECLARE_WAIT_QUEUE_HEAD(sjcd_waitq);
/* The base I/O address of the Sony Interface. This is a variable (not a
#define) so it can be easily changed via some future ioctl() */
static unsigned int sony535_cd_base_io = CDU535_ADDRESS;
-MODULE_PARM(sony535_cd_base_io, "i");
+module_param(sony535_cd_base_io, int, 0);
/*
* The following are I/O addresses of the various registers for the drive. The
static unsigned short read_status_reg;
static unsigned short data_reg;
-static spinlock_t sonycd535_lock = SPIN_LOCK_UNLOCKED; /* queue lock */
+static DEFINE_SPINLOCK(sonycd535_lock); /* queue lock */
static struct request_queue *sonycd535_queue;
static int initialized; /* Has the drive been initialized? */
0x5c U+005c
0x5d U+005d
0x5e U+005e
-0x5f U+005f U+f804
+0x5f U+005f U+23bd U+f804
0x60 U+0060
0x61 U+0061 U+00e3
0x62 U+0062
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
+drm-objs := drm_auth.o drm_bufs.o drm_context.o drm_dma.o drm_drawable.o \
+ drm_drv.o drm_fops.o drm_init.o drm_ioctl.o drm_irq.o \
+ drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \
+ drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \
+ drm_sysfs.o
+
gamma-objs := gamma_drv.o gamma_dma.o
tdfx-objs := tdfx_drv.o
r128-objs := r128_drv.o r128_cce.o r128_state.o r128_irq.o
ffb-objs := ffb_drv.o ffb_context.o
sis-objs := sis_drv.o sis_ds.o sis_mm.o
+obj-$(CONFIG_DRM) += drm.o
obj-$(CONFIG_DRM_GAMMA) += gamma.o
obj-$(CONFIG_DRM_TDFX) += tdfx.o
obj-$(CONFIG_DRM_R128) += r128.o
--- /dev/null
+/**
+ * \file drm_agpsupport.h
+ * DRM support for AGP/GART backend
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#include <linux/module.h>
+
+#if __OS_HAS_AGP
+
+/**
+ * AGP information ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a (output) drm_agp_info structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device has been initialized and acquired and fills in the
+ * drm_agp_info structure with the information in drm_agp_head::agp_info.
+ */
+int drm_agp_info(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DRM_AGP_KERN *kern;
+ drm_agp_info_t info;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+
+ kern = &dev->agp->agp_info;
+ info.agp_version_major = kern->version.major;
+ info.agp_version_minor = kern->version.minor;
+ info.mode = kern->mode;
+ info.aperture_base = kern->aper_base;
+ info.aperture_size = kern->aper_size * 1024 * 1024;
+ info.memory_allowed = kern->max_memory << PAGE_SHIFT;
+ info.memory_used = kern->current_memory << PAGE_SHIFT;
+ info.id_vendor = kern->device->vendor;
+ info.id_device = kern->device->device;
+
+ if (copy_to_user((drm_agp_info_t __user *)arg, &info, sizeof(info)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Acquire the AGP device (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device hasn't been acquired before and calls
+ * agp_acquire().
+ */
+int drm_agp_acquire(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ if (!dev->agp)
+ return -ENODEV;
+ if (dev->agp->acquired)
+ return -EBUSY;
+ if ((retcode = agp_backend_acquire()))
+ return retcode;
+ dev->agp->acquired = 1;
+ return 0;
+}
+
+/**
+ * Release the AGP device (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device has been acquired and calls agp_backend_release().
+ */
+int drm_agp_release(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+ agp_backend_release();
+ dev->agp->acquired = 0;
+ return 0;
+
+}
+
+/**
+ * Release the AGP device.
+ *
+ * Calls agp_backend_release().
+ */
+void drm_agp_do_release(void)
+{
+ agp_backend_release();
+}
+
+/**
+ * Enable the AGP bus.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_agp_mode structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device has been acquired but not enabled, and calls
+ * agp_enable().
+ */
+int drm_agp_enable(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_mode_t mode;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+
+ if (copy_from_user(&mode, (drm_agp_mode_t __user *)arg, sizeof(mode)))
+ return -EFAULT;
+
+ dev->agp->mode = mode.mode;
+ agp_enable(mode.mode);
+ dev->agp->base = dev->agp->agp_info.aper_base;
+ dev->agp->enabled = 1;
+ return 0;
+}
+
+/**
+ * Allocate AGP memory.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_agp_buffer structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device is present and has been acquired, allocates the
+ * memory via alloc_agp() and creates a drm_agp_mem entry for it.
+ */
+int drm_agp_alloc(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+ DRM_AGP_MEM *memory;
+ unsigned long pages;
+ u32 type;
+ drm_agp_buffer_t __user *argp = (void __user *)arg;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+ if (copy_from_user(&request, argp, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_alloc(sizeof(*entry), DRM_MEM_AGPLISTS)))
+ return -ENOMEM;
+
+ memset(entry, 0, sizeof(*entry));
+
+ pages = (request.size + PAGE_SIZE - 1) / PAGE_SIZE;
+ type = (u32) request.type;
+
+ if (!(memory = drm_alloc_agp(pages, type))) {
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -ENOMEM;
+ }
+
+ entry->handle = (unsigned long)memory->key + 1;
+ entry->memory = memory;
+ entry->bound = 0;
+ entry->pages = pages;
+ entry->prev = NULL;
+ entry->next = dev->agp->memory;
+ if (dev->agp->memory)
+ dev->agp->memory->prev = entry;
+ dev->agp->memory = entry;
+
+ request.handle = entry->handle;
+ request.physical = memory->physical;
+
+ if (copy_to_user(argp, &request, sizeof(request))) {
+ dev->agp->memory = entry->next;
+ dev->agp->memory->prev = NULL;
+ drm_free_agp(memory, pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return -EFAULT;
+ }
+ return 0;
+}
+
+/**
+ * Search for the AGP memory entry associated with a handle.
+ *
+ * \param dev DRM device structure.
+ * \param handle AGP memory handle.
+ * \return pointer to the drm_agp_mem structure associated with \p handle.
+ *
+ * Walks through drm_agp_head::memory until finding a matching handle.
+ */
+static drm_agp_mem_t *drm_agp_lookup_entry(drm_device_t *dev,
+ unsigned long handle)
+{
+ drm_agp_mem_t *entry;
+
+ for (entry = dev->agp->memory; entry; entry = entry->next) {
+ if (entry->handle == handle)
+ return entry;
+ }
+ return NULL;
+}
+
+/**
+ * Unbind AGP memory from the GATT (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_agp_binding structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device is present and acquired, looks up the AGP memory
+ * entry and passes it to the unbind_agp() function.
+ */
+int drm_agp_unbind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+ int ret;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t __user *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (!entry->bound)
+ return -EINVAL;
+ ret = drm_unbind_agp(entry->memory);
+ if (ret == 0)
+ entry->bound = 0;
+ return ret;
+}
+
+/**
+ * Bind AGP memory into the GATT (ioctl)
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_agp_binding structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device is present and has been acquired and that no memory
+ * is currently bound into the GATT. Looks up the AGP memory entry and passes
+ * it to the bind_agp() function.
+ */
+int drm_agp_bind(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_binding_t request;
+ drm_agp_mem_t *entry;
+ int retcode;
+ int page;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_binding_t __user *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound)
+ return -EINVAL;
+ page = (request.offset + PAGE_SIZE - 1) / PAGE_SIZE;
+ if ((retcode = drm_bind_agp(entry->memory, page)))
+ return retcode;
+ entry->bound = dev->agp->base + (page << PAGE_SHIFT);
+ DRM_DEBUG("base = 0x%lx entry->bound = 0x%lx\n",
+ dev->agp->base, entry->bound);
+ return 0;
+}
+
+/**
+ * Free AGP memory (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_agp_buffer structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the AGP device is present and has been acquired and looks up the
+ * AGP memory entry. If the memory is currently bound, unbind it via
+ * unbind_agp(). Frees it via free_agp() as well as the entry itself,
+ * and unlinks it from the doubly linked list it is inserted in.
+ */
+int drm_agp_free(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_agp_buffer_t request;
+ drm_agp_mem_t *entry;
+
+ if (!dev->agp || !dev->agp->acquired)
+ return -EINVAL;
+ if (copy_from_user(&request, (drm_agp_buffer_t __user *)arg, sizeof(request)))
+ return -EFAULT;
+ if (!(entry = drm_agp_lookup_entry(dev, request.handle)))
+ return -EINVAL;
+ if (entry->bound)
+ drm_unbind_agp(entry->memory);
+
+ if (entry->prev)
+ entry->prev->next = entry->next;
+ else
+ dev->agp->memory = entry->next;
+
+ if (entry->next)
+ entry->next->prev = entry->prev;
+
+ drm_free_agp(entry->memory, entry->pages);
+ drm_free(entry, sizeof(*entry), DRM_MEM_AGPLISTS);
+ return 0;
+}
+
+/**
+ * Initialize the AGP resources.
+ *
+ * \return pointer to a drm_agp_head structure.
+ *
+ */
+drm_agp_head_t *drm_agp_init(void)
+{
+ drm_agp_head_t *head = NULL;
+
+ if (!(head = drm_alloc(sizeof(*head), DRM_MEM_AGPLISTS)))
+ return NULL;
+ memset((void *)head, 0, sizeof(*head));
+ agp_copy_info(&head->agp_info);
+ if (head->agp_info.chipset == NOT_SUPPORTED) {
+ drm_free(head, sizeof(*head), DRM_MEM_AGPLISTS);
+ return NULL;
+ }
+ head->memory = NULL;
+#if LINUX_VERSION_CODE <= 0x020408
+ head->cant_use_aperture = 0;
+ head->page_mask = ~(0xfff);
+#else
+ head->cant_use_aperture = head->agp_info.cant_use_aperture;
+ head->page_mask = head->agp_info.page_mask;
+#endif
+
+ return head;
+}
+
+/** Calls agp_allocate_memory() */
+DRM_AGP_MEM *drm_agp_allocate_memory(size_t pages, u32 type)
+{
+ return agp_allocate_memory(pages, type);
+}
+
+/** Calls agp_free_memory() */
+int drm_agp_free_memory(DRM_AGP_MEM *handle)
+{
+ if (!handle)
+ return 0;
+ agp_free_memory(handle);
+ return 1;
+}
+
+/** Calls agp_bind_memory() */
+int drm_agp_bind_memory(DRM_AGP_MEM *handle, off_t start)
+{
+ if (!handle)
+ return -EINVAL;
+ return agp_bind_memory(handle, start);
+}
+
+/** Calls agp_unbind_memory() */
+int drm_agp_unbind_memory(DRM_AGP_MEM *handle)
+{
+ if (!handle)
+ return -EINVAL;
+ return agp_unbind_memory(handle);
+}
+
+#endif /* __OS_HAS_AGP */
--- /dev/null
+/**
+ * \file drm_auth.h
+ * IOCTLs for authentication
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Tue Feb 2 08:37:54 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+/**
+ * Generate a hash key from a magic.
+ *
+ * \param magic magic.
+ * \return hash key.
+ *
+ * The key is the magic number modulo the hash table size, #DRM_HASH_SIZE,
+ * which must be a power of 2.
+ */
+static int drm_hash_magic(drm_magic_t magic)
+{
+ return magic & (DRM_HASH_SIZE-1);
+}
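+
+/*
+ * Illustrative sketch (assuming the usual DRM_HASH_SIZE of 16): magics 1,
+ * 17 and 33 all hash to bucket 1, so a lookup only has to walk that
+ * bucket's list rather than every outstanding magic number.
+ */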
+
+/**
+ * Find the file with the given magic number.
+ *
+ * \param dev DRM device.
+ * \param magic magic number.
+ *
+ * Searches drm_device::magiclist, among all entries with the same hash key,
+ * for the file with the matching magic number, while holding the
+ * drm_device::struct_sem lock.
+ */
+static drm_file_t *drm_find_file(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_file_t *retval = NULL;
+ drm_magic_entry_t *pt;
+ int hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; pt = pt->next) {
+ if (pt->magic == magic) {
+ retval = pt->priv;
+ break;
+ }
+ }
+ up(&dev->struct_sem);
+ return retval;
+}
+
+/**
+ * Adds a magic number.
+ *
+ * \param dev DRM device.
+ * \param priv file private data.
+ * \param magic magic number.
+ *
+ * Creates a drm_magic_entry structure and appends it to the linked list
+ * associated with the magic number's hash key in drm_device::magiclist, while
+ * holding the drm_device::struct_sem lock.
+ */
+int drm_add_magic(drm_device_t *dev, drm_file_t *priv, drm_magic_t magic)
+{
+ int hash;
+ drm_magic_entry_t *entry;
+
+ DRM_DEBUG("%d\n", magic);
+
+ hash = drm_hash_magic(magic);
+ entry = drm_alloc(sizeof(*entry), DRM_MEM_MAGIC);
+ if (!entry) return -ENOMEM;
+ memset(entry, 0, sizeof(*entry));
+ entry->magic = magic;
+ entry->priv = priv;
+ entry->next = NULL;
+
+ down(&dev->struct_sem);
+ if (dev->magiclist[hash].tail) {
+ dev->magiclist[hash].tail->next = entry;
+ dev->magiclist[hash].tail = entry;
+ } else {
+ dev->magiclist[hash].head = entry;
+ dev->magiclist[hash].tail = entry;
+ }
+ up(&dev->struct_sem);
+
+ return 0;
+}
+
+/**
+ * Remove a magic number.
+ *
+ * \param dev DRM device.
+ * \param magic magic number.
+ *
+ * Searches and unlinks the entry in drm_device::magiclist with the magic
+ * number hash key, while holding the drm_device::struct_sem lock.
+ */
+int drm_remove_magic(drm_device_t *dev, drm_magic_t magic)
+{
+ drm_magic_entry_t *prev = NULL;
+ drm_magic_entry_t *pt;
+ int hash;
+
+
+ DRM_DEBUG("%d\n", magic);
+ hash = drm_hash_magic(magic);
+
+ down(&dev->struct_sem);
+ for (pt = dev->magiclist[hash].head; pt; prev = pt, pt = pt->next) {
+ if (pt->magic == magic) {
+ if (dev->magiclist[hash].head == pt) {
+ dev->magiclist[hash].head = pt->next;
+ }
+ if (dev->magiclist[hash].tail == pt) {
+ dev->magiclist[hash].tail = prev;
+ }
+ if (prev) {
+ prev->next = pt->next;
+ }
+			up(&dev->struct_sem);
+			/* Free the entry now that it is unlinked from the list. */
+			drm_free(pt, sizeof(*pt), DRM_MEM_MAGIC);
+			return 0;
+		}
+	}
+	up(&dev->struct_sem);
+
+	return -EINVAL;
+}
+
+/**
+ * Get a unique magic number (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a resulting drm_auth structure.
+ * \return zero on success, or a negative number on failure.
+ *
+ * If there is a magic number in drm_file::magic then use it, otherwise
+ * searches for a unique non-zero magic number and adds it, associating it
+ * with \p filp.
+ */
+int drm_getmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ static drm_magic_t sequence = 0;
+ static DEFINE_SPINLOCK(lock);
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+
+ /* Find unique magic */
+ if (priv->magic) {
+ auth.magic = priv->magic;
+ } else {
+ do {
+ spin_lock(&lock);
+ if (!sequence) ++sequence; /* reserve 0 */
+ auth.magic = sequence++;
+ spin_unlock(&lock);
+ } while (drm_find_file(dev, auth.magic));
+ priv->magic = auth.magic;
+ drm_add_magic(dev, priv, auth.magic);
+ }
+
+ DRM_DEBUG("%u\n", auth.magic);
+ if (copy_to_user((drm_auth_t __user *)arg, &auth, sizeof(auth)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Authenticate with a magic.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_auth structure.
+ * \return zero if authentication succeeded, or a negative number otherwise.
+ *
+ * Checks if \p filp is associated with the magic number passed in \p arg.
+ */
+int drm_authmagic(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_auth_t auth;
+ drm_file_t *file;
+
+ if (copy_from_user(&auth, (drm_auth_t __user *)arg, sizeof(auth)))
+ return -EFAULT;
+ DRM_DEBUG("%u\n", auth.magic);
+ if ((file = drm_find_file(dev, auth.magic))) {
+ file->authenticated = 1;
+ drm_remove_magic(dev, auth.magic);
+ return 0;
+ }
+ return -EINVAL;
+}
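+
+/*
+ * Typical flow (a sketch, not enforced by this file alone): an
+ * unauthenticated client obtains a magic with DRM_IOCTL_GET_MAGIC
+ * (drm_getmagic), hands it out of band to the privileged process that
+ * owns the device (e.g. the X server), and that process then calls
+ * DRM_IOCTL_AUTH_MAGIC (drm_authmagic) so the client's open file
+ * becomes authenticated for the ioctls that require it.
+ */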
--- /dev/null
+/**
+ * \file drm_bufs.h
+ * Generic buffer template
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Thu Nov 23 03:10:50 2000 by gareth@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/vmalloc.h>
+#include "drmP.h"
+
+/**
+ * Compute size order.  Returns the exponent of the smallest power of two
+ * which is greater than or equal to the given number.
+ *
+ * \param size size.
+ * \return order.
+ *
+ * \todo Can be made faster.
+ */
+int drm_order( unsigned long size )
+{
+ int order;
+ unsigned long tmp;
+
+ for (order = 0, tmp = size >> 1; tmp; tmp >>= 1, order++)
+ ;
+
+ if (size & (size - 1))
+ ++order;
+
+ return order;
+}
+EXPORT_SYMBOL(drm_order);
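+
+/*
+ * A few illustrative values of drm_order(), as a sketch of the rounding
+ * behaviour of the loop above:
+ *
+ *   drm_order(1)    == 0    (2^0  == 1)
+ *   drm_order(4096) == 12   (already a power of two)
+ *   drm_order(4097) == 13   (rounded up to 2^13 == 8192)
+ */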
+
+/**
+ * Ioctl to specify a range of memory that is available for mapping by a non-root process.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_map structure.
+ * \return zero on success or a negative value on error.
+ *
+ * Adjusts the memory offset to its absolute value according to the mapping
+ * type. Adds the map to the map list drm_device::maplist. Adds MTRR's where
+ * applicable and if supported by the kernel.
+ */
+int drm_addmap( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map;
+ drm_map_t __user *argp = (void __user *)arg;
+ drm_map_list_t *list;
+
+ if ( !(filp->f_mode & 3) ) return -EACCES; /* Require read/write */
+
+ map = drm_alloc( sizeof(*map), DRM_MEM_MAPS );
+ if ( !map )
+ return -ENOMEM;
+
+ if ( copy_from_user( map, argp, sizeof(*map) ) ) {
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EFAULT;
+ }
+
+ /* Only allow shared memory to be removable since we only keep enough
+	 * bookkeeping information about shared memory to allow for removal
+ * when processes fork.
+ */
+ if ( (map->flags & _DRM_REMOVABLE) && map->type != _DRM_SHM ) {
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EINVAL;
+ }
+ DRM_DEBUG( "offset = 0x%08lx, size = 0x%08lx, type = %d\n",
+ map->offset, map->size, map->type );
+ if ( (map->offset & (~PAGE_MASK)) || (map->size & (~PAGE_MASK)) ) {
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EINVAL;
+ }
+ map->mtrr = -1;
+ map->handle = NULL;
+
+ switch ( map->type ) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+#if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__)
+ if ( map->offset + map->size < map->offset ||
+ map->offset < virt_to_phys(high_memory) ) {
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EINVAL;
+ }
+#endif
+#ifdef __alpha__
+ map->offset += dev->hose->mem_space->start;
+#endif
+ if (drm_core_has_MTRR(dev)) {
+ if ( map->type == _DRM_FRAME_BUFFER ||
+ (map->flags & _DRM_WRITE_COMBINING) ) {
+ map->mtrr = mtrr_add( map->offset, map->size,
+ MTRR_TYPE_WRCOMB, 1 );
+ }
+ }
+ if (map->type == _DRM_REGISTERS)
+ map->handle = drm_ioremap( map->offset, map->size,
+ dev );
+ break;
+
+ case _DRM_SHM:
+ map->handle = vmalloc_32(map->size);
+ DRM_DEBUG( "%lu %d %p\n",
+ map->size, drm_order( map->size ), map->handle );
+ if ( !map->handle ) {
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -ENOMEM;
+ }
+ map->offset = (unsigned long)map->handle;
+ if ( map->flags & _DRM_CONTAINS_LOCK ) {
+ /* Prevent a 2nd X Server from creating a 2nd lock */
+ if (dev->lock.hw_lock != NULL) {
+ vfree( map->handle );
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EBUSY;
+ }
+ dev->sigdata.lock =
+ dev->lock.hw_lock = map->handle; /* Pointer to lock */
+ }
+ break;
+ case _DRM_AGP:
+ if (drm_core_has_AGP(dev)) {
+#ifdef __alpha__
+ map->offset += dev->hose->mem_space->start;
+#endif
+ map->offset += dev->agp->base;
+ map->mtrr = dev->agp->agp_mtrr; /* for getmap */
+ }
+ break;
+ case _DRM_SCATTER_GATHER:
+ if (!dev->sg) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+ map->offset += dev->sg->handle;
+ break;
+
+ default:
+ drm_free( map, sizeof(*map), DRM_MEM_MAPS );
+ return -EINVAL;
+ }
+
+ list = drm_alloc(sizeof(*list), DRM_MEM_MAPS);
+ if(!list) {
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ return -EINVAL;
+ }
+ memset(list, 0, sizeof(*list));
+ list->map = map;
+
+ down(&dev->struct_sem);
+ list_add(&list->head, &dev->maplist->head);
+ up(&dev->struct_sem);
+
+ if ( copy_to_user( argp, map, sizeof(*map) ) )
+ return -EFAULT;
+ if ( map->type != _DRM_SHM ) {
+ if ( copy_to_user( &argp->handle,
+ &map->offset,
+ sizeof(map->offset) ) )
+ return -EFAULT;
+ }
+ return 0;
+}
+
+
+/**
+ * Remove a map from the map list and deallocate its resources if the mapping
+ * isn't in use.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_map_t structure.
+ * \return zero on success or a negative value on error.
+ *
+ * Searches for the map in drm_device::maplist, removes it from the list,
+ * checks whether it is still being used, and frees any associated resources
+ * (such as MTRRs) if it is not in use.
+ *
+ * \sa addmap().
+ */
+int drm_rmmap(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ struct list_head *list;
+ drm_map_list_t *r_list = NULL;
+ drm_vma_entry_t *pt, *prev;
+ drm_map_t *map;
+ drm_map_t request;
+ int found_maps = 0;
+
+ if (copy_from_user(&request, (drm_map_t __user *)arg,
+ sizeof(request))) {
+ return -EFAULT;
+ }
+
+ down(&dev->struct_sem);
+ list = &dev->maplist->head;
+ list_for_each(list, &dev->maplist->head) {
+ r_list = list_entry(list, drm_map_list_t, head);
+
+ if(r_list->map &&
+ r_list->map->handle == request.handle &&
+ r_list->map->flags & _DRM_REMOVABLE) break;
+ }
+
+	/* List has wrapped around to the head pointer, or it's empty and we
+	 * didn't find anything.
+ */
+ if(list == (&dev->maplist->head)) {
+ up(&dev->struct_sem);
+ return -EINVAL;
+ }
+ map = r_list->map;
+ list_del(list);
+ drm_free(list, sizeof(*list), DRM_MEM_MAPS);
+
+ for (pt = dev->vmalist, prev = NULL; pt; prev = pt, pt = pt->next) {
+ if (pt->vma->vm_private_data == map) found_maps++;
+ }
+
+ if(!found_maps) {
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+ if (drm_core_has_MTRR(dev)) {
+ if (map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+ }
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ vfree(map->handle);
+ break;
+ case _DRM_AGP:
+ case _DRM_SCATTER_GATHER:
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ up(&dev->struct_sem);
+ return 0;
+}
+
+/**
+ * Cleanup after an error on one of the addbufs() functions.
+ *
+ * \param entry buffer entry where the error occurred.
+ *
+ * Frees any pages and buffers associated with the given entry.
+ */
+static void drm_cleanup_buf_error(drm_device_t *dev, drm_buf_entry_t *entry)
+{
+ int i;
+
+ if (entry->seg_count) {
+ for (i = 0; i < entry->seg_count; i++) {
+ if (entry->seglist[i]) {
+ drm_free_pages(entry->seglist[i],
+ entry->page_order,
+ DRM_MEM_DMA);
+ }
+ }
+ drm_free(entry->seglist,
+ entry->seg_count *
+ sizeof(*entry->seglist),
+ DRM_MEM_SEGS);
+
+ entry->seg_count = 0;
+ }
+
+ if (entry->buf_count) {
+ for (i = 0; i < entry->buf_count; i++) {
+ if (entry->buflist[i].dev_private) {
+ drm_free(entry->buflist[i].dev_private,
+ entry->buflist[i].dev_priv_size,
+ DRM_MEM_BUFS);
+ }
+ }
+ drm_free(entry->buflist,
+ entry->buf_count *
+ sizeof(*entry->buflist),
+ DRM_MEM_BUFS);
+
+ entry->buf_count = 0;
+ }
+}
+
+#if __OS_HAS_AGP
+/**
+ * Add AGP buffers for DMA transfers (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_buf_desc_t request.
+ * \return zero on success or a negative number on failure.
+ *
+ * After some sanity checks creates a drm_buf structure for each buffer and
+ * reallocates the buffer list of the same size order to accommodate the new
+ * buffers.
+ */
+int drm_addbufs_agp( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+ drm_buf_t **temp_buflist;
+ drm_buf_desc_t __user *argp = (void __user *)arg;
+
+ if ( !dma ) return -EINVAL;
+
+ if ( copy_from_user( &request, argp,
+ sizeof(request) ) )
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order( request.size );
+ size = 1 << order;
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN)
+ ? PAGE_ALIGN(size) : size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
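+	/*
+	 * Worked example (sketch): with PAGE_SHIFT == 12, a request.size of
+	 * 64KB gives order 16, size 65536, page_order 4 and total 64KB, so
+	 * each buffer occupies exactly one 2^page_order-page allocation.
+	 */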
+
+ byte_count = 0;
+ agp_offset = dev->agp->base + request.agp_start;
+
+ DRM_DEBUG( "count: %d\n", count );
+ DRM_DEBUG( "order: %d\n", order );
+ DRM_DEBUG( "size: %d\n", size );
+ DRM_DEBUG( "agp_offset: %lu\n", agp_offset );
+ DRM_DEBUG( "alignment: %d\n", alignment );
+ DRM_DEBUG( "page_order: %d\n", page_order );
+ DRM_DEBUG( "total: %d\n", total );
+
+ if ( order < DRM_MIN_ORDER || order > DRM_MAX_ORDER ) return -EINVAL;
+ if ( dev->queue_count ) return -EBUSY; /* Not while in use */
+
+ spin_lock( &dev->count_lock );
+ if ( dev->buf_use ) {
+ spin_unlock( &dev->count_lock );
+ return -EBUSY;
+ }
+ atomic_inc( &dev->buf_alloc );
+ spin_unlock( &dev->count_lock );
+
+ down( &dev->struct_sem );
+ entry = &dma->bufs[order];
+ if ( entry->buf_count ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if (count < 0 || count > 4096) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc( count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS );
+ if ( !entry->buflist ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( entry->buflist, 0, count * sizeof(*entry->buflist) );
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+
+ offset = 0;
+
+ while ( entry->buf_count < count ) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+
+ buf->offset = (dma->byte_count + offset);
+ buf->bus_address = agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head( &buf->dma_wait );
+ buf->filp = NULL;
+
+ buf->dev_priv_size = dev->driver->dev_priv_size;
+ buf->dev_private = drm_alloc( buf->dev_priv_size,
+ DRM_MEM_BUFS );
+ if(!buf->dev_private) {
+ /* Set count correctly so we free the proper amount. */
+ entry->buf_count = count;
+ drm_cleanup_buf_error(dev,entry);
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( buf->dev_private, 0, buf->dev_priv_size );
+
+ DRM_DEBUG( "buffer %d @ %p\n",
+ entry->buf_count, buf->address );
+
+ offset += alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ DRM_DEBUG( "byte_count: %d\n", byte_count );
+
+ temp_buflist = drm_realloc( dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS );
+ if(!temp_buflist) {
+ /* Free the entry because it isn't valid */
+ drm_cleanup_buf_error(dev,entry);
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ dma->buflist = temp_buflist;
+
+ for ( i = 0 ; i < entry->buf_count ; i++ ) {
+ dma->buflist[i + dma->buf_count] = &entry->buflist[i];
+ }
+
+ dma->buf_count += entry->buf_count;
+ dma->byte_count += byte_count;
+
+ DRM_DEBUG( "dma->buf_count : %d\n", dma->buf_count );
+ DRM_DEBUG( "entry->buf_count : %d\n", entry->buf_count );
+
+ up( &dev->struct_sem );
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) )
+ return -EFAULT;
+
+ dma->flags = _DRM_DMA_USE_AGP;
+
+ atomic_dec( &dev->buf_alloc );
+ return 0;
+}
+#endif /* __OS_HAS_AGP */
+
+int drm_addbufs_pci( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int count;
+ int order;
+ int size;
+ int total;
+ int page_order;
+ drm_buf_entry_t *entry;
+ unsigned long page;
+ drm_buf_t *buf;
+ int alignment;
+ unsigned long offset;
+ int i;
+ int byte_count;
+ int page_count;
+ unsigned long *temp_pagelist;
+ drm_buf_t **temp_buflist;
+ drm_buf_desc_t __user *argp = (void __user *)arg;
+
+ if (!drm_core_check_feature(dev, DRIVER_PCI_DMA)) return -EINVAL;
+ if ( !dma ) return -EINVAL;
+
+ if ( copy_from_user( &request, argp, sizeof(request) ) )
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order( request.size );
+ size = 1 << order;
+
+ DRM_DEBUG( "count=%d, size=%d (%d), order=%d, queue_count=%d\n",
+ request.count, request.size, size,
+ order, dev->queue_count );
+
+ if ( order < DRM_MIN_ORDER || order > DRM_MAX_ORDER ) return -EINVAL;
+ if ( dev->queue_count ) return -EBUSY; /* Not while in use */
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN)
+ ? PAGE_ALIGN(size) : size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ spin_lock( &dev->count_lock );
+ if ( dev->buf_use ) {
+ spin_unlock( &dev->count_lock );
+ return -EBUSY;
+ }
+ atomic_inc( &dev->buf_alloc );
+ spin_unlock( &dev->count_lock );
+
+ down( &dev->struct_sem );
+ entry = &dma->bufs[order];
+ if ( entry->buf_count ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if (count < 0 || count > 4096) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc( count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS );
+ if ( !entry->buflist ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( entry->buflist, 0, count * sizeof(*entry->buflist) );
+
+ entry->seglist = drm_alloc( count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS );
+ if ( !entry->seglist ) {
+ drm_free( entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS );
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( entry->seglist, 0, count * sizeof(*entry->seglist) );
+
+ /* Keep the original pagelist until we know all the allocations
+ * have succeeded
+ */
+ temp_pagelist = drm_alloc( (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES );
+ if (!temp_pagelist) {
+ drm_free( entry->buflist,
+ count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS );
+ drm_free( entry->seglist,
+ count * sizeof(*entry->seglist),
+ DRM_MEM_SEGS );
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memcpy(temp_pagelist,
+ dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist));
+ DRM_DEBUG( "pagelist: %d entries\n",
+ dma->page_count + (count << page_order) );
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+ byte_count = 0;
+ page_count = 0;
+
+ while ( entry->buf_count < count ) {
+ page = drm_alloc_pages( page_order, DRM_MEM_DMA );
+ if ( !page ) {
+ /* Set count correctly so we free the proper amount. */
+ entry->buf_count = count;
+ entry->seg_count = count;
+ drm_cleanup_buf_error(dev, entry);
+ drm_free( temp_pagelist,
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES );
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ entry->seglist[entry->seg_count++] = page;
+ for ( i = 0 ; i < (1 << page_order) ; i++ ) {
+ DRM_DEBUG( "page %d @ 0x%08lx\n",
+ dma->page_count + page_count,
+ page + PAGE_SIZE * i );
+ temp_pagelist[dma->page_count + page_count++]
+ = page + PAGE_SIZE * i;
+ }
+ for ( offset = 0 ;
+ offset + size <= total && entry->buf_count < count ;
+ offset += alignment, ++entry->buf_count ) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+ buf->offset = (dma->byte_count + byte_count + offset);
+ buf->address = (void *)(page + offset);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head( &buf->dma_wait );
+ buf->filp = NULL;
+
+ buf->dev_priv_size = dev->driver->dev_priv_size;
+ buf->dev_private = drm_alloc( buf->dev_priv_size,
+ DRM_MEM_BUFS );
+ if(!buf->dev_private) {
+ /* Set count correctly so we free the proper amount. */
+ entry->buf_count = count;
+ entry->seg_count = count;
+ drm_cleanup_buf_error(dev,entry);
+ drm_free( temp_pagelist,
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES );
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( buf->dev_private, 0, buf->dev_priv_size );
+
+ DRM_DEBUG( "buffer %d @ %p\n",
+ entry->buf_count, buf->address );
+ }
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ temp_buflist = drm_realloc( dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS );
+ if (!temp_buflist) {
+ /* Free the entry because it isn't valid */
+ drm_cleanup_buf_error(dev,entry);
+ drm_free( temp_pagelist,
+ (dma->page_count + (count << page_order))
+ * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES );
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ dma->buflist = temp_buflist;
+
+ for ( i = 0 ; i < entry->buf_count ; i++ ) {
+ dma->buflist[i + dma->buf_count] = &entry->buflist[i];
+ }
+
+	/* No allocations failed, so now we can replace the original pagelist
+ * with the new one.
+ */
+ if (dma->page_count) {
+ drm_free(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ }
+ dma->pagelist = temp_pagelist;
+
+ dma->buf_count += entry->buf_count;
+ dma->seg_count += entry->seg_count;
+ dma->page_count += entry->seg_count << page_order;
+ dma->byte_count += PAGE_SIZE * (entry->seg_count << page_order);
+
+ up( &dev->struct_sem );
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) )
+ return -EFAULT;
+
+ atomic_dec( &dev->buf_alloc );
+ return 0;
+
+}
+
+int drm_addbufs_sg( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t __user *argp = (void __user *)arg;
+ drm_buf_desc_t request;
+ drm_buf_entry_t *entry;
+ drm_buf_t *buf;
+ unsigned long offset;
+ unsigned long agp_offset;
+ int count;
+ int order;
+ int size;
+ int alignment;
+ int page_order;
+ int total;
+ int byte_count;
+ int i;
+ drm_buf_t **temp_buflist;
+
+ if (!drm_core_check_feature(dev, DRIVER_SG)) return -EINVAL;
+
+ if ( !dma ) return -EINVAL;
+
+ if ( copy_from_user( &request, argp, sizeof(request) ) )
+ return -EFAULT;
+
+ count = request.count;
+ order = drm_order( request.size );
+ size = 1 << order;
+
+ alignment = (request.flags & _DRM_PAGE_ALIGN)
+ ? PAGE_ALIGN(size) : size;
+ page_order = order - PAGE_SHIFT > 0 ? order - PAGE_SHIFT : 0;
+ total = PAGE_SIZE << page_order;
+
+ byte_count = 0;
+ agp_offset = request.agp_start;
+
+ DRM_DEBUG( "count: %d\n", count );
+ DRM_DEBUG( "order: %d\n", order );
+ DRM_DEBUG( "size: %d\n", size );
+ DRM_DEBUG( "agp_offset: %lu\n", agp_offset );
+ DRM_DEBUG( "alignment: %d\n", alignment );
+ DRM_DEBUG( "page_order: %d\n", page_order );
+ DRM_DEBUG( "total: %d\n", total );
+
+ if ( order < DRM_MIN_ORDER || order > DRM_MAX_ORDER ) return -EINVAL;
+ if ( dev->queue_count ) return -EBUSY; /* Not while in use */
+
+ spin_lock( &dev->count_lock );
+ if ( dev->buf_use ) {
+ spin_unlock( &dev->count_lock );
+ return -EBUSY;
+ }
+ atomic_inc( &dev->buf_alloc );
+ spin_unlock( &dev->count_lock );
+
+ down( &dev->struct_sem );
+ entry = &dma->bufs[order];
+ if ( entry->buf_count ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM; /* May only call once for each order */
+ }
+
+ if (count < 0 || count > 4096) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -EINVAL;
+ }
+
+ entry->buflist = drm_alloc( count * sizeof(*entry->buflist),
+ DRM_MEM_BUFS );
+ if ( !entry->buflist ) {
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ memset( entry->buflist, 0, count * sizeof(*entry->buflist) );
+
+ entry->buf_size = size;
+ entry->page_order = page_order;
+
+ offset = 0;
+
+ while ( entry->buf_count < count ) {
+ buf = &entry->buflist[entry->buf_count];
+ buf->idx = dma->buf_count + entry->buf_count;
+ buf->total = alignment;
+ buf->order = order;
+ buf->used = 0;
+
+ buf->offset = (dma->byte_count + offset);
+ buf->bus_address = agp_offset + offset;
+ buf->address = (void *)(agp_offset + offset + dev->sg->handle);
+ buf->next = NULL;
+ buf->waiting = 0;
+ buf->pending = 0;
+ init_waitqueue_head( &buf->dma_wait );
+ buf->filp = NULL;
+
+ buf->dev_priv_size = dev->driver->dev_priv_size;
+ buf->dev_private = drm_alloc( buf->dev_priv_size,
+ DRM_MEM_BUFS );
+ if(!buf->dev_private) {
+ /* Set count correctly so we free the proper amount. */
+ entry->buf_count = count;
+ drm_cleanup_buf_error(dev,entry);
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+
+ memset( buf->dev_private, 0, buf->dev_priv_size );
+
+ DRM_DEBUG( "buffer %d @ %p\n",
+ entry->buf_count, buf->address );
+
+ offset += alignment;
+ entry->buf_count++;
+ byte_count += PAGE_SIZE << page_order;
+ }
+
+ DRM_DEBUG( "byte_count: %d\n", byte_count );
+
+ temp_buflist = drm_realloc( dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ (dma->buf_count + entry->buf_count)
+ * sizeof(*dma->buflist),
+ DRM_MEM_BUFS );
+ if(!temp_buflist) {
+ /* Free the entry because it isn't valid */
+ drm_cleanup_buf_error(dev,entry);
+ up( &dev->struct_sem );
+ atomic_dec( &dev->buf_alloc );
+ return -ENOMEM;
+ }
+ dma->buflist = temp_buflist;
+
+ for ( i = 0 ; i < entry->buf_count ; i++ ) {
+ dma->buflist[i + dma->buf_count] = &entry->buflist[i];
+ }
+
+ dma->buf_count += entry->buf_count;
+ dma->byte_count += byte_count;
+
+ DRM_DEBUG( "dma->buf_count : %d\n", dma->buf_count );
+ DRM_DEBUG( "entry->buf_count : %d\n", entry->buf_count );
+
+ up( &dev->struct_sem );
+
+ request.count = entry->buf_count;
+ request.size = size;
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) )
+ return -EFAULT;
+
+ dma->flags = _DRM_DMA_USE_SG;
+
+ atomic_dec( &dev->buf_alloc );
+ return 0;
+}
+
+/**
+ * Add buffers for DMA transfers (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_buf_desc_t request.
+ * \return zero on success or a negative number on failure.
+ *
+ * According to the memory type specified in drm_buf_desc::flags and the
+ * build options, the call is dispatched to addbufs_agp(), addbufs_sg() or
+ * addbufs_pci() for AGP, scatter-gather or consistent PCI memory
+ * respectively.
+ */
+int drm_addbufs( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_buf_desc_t request;
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ return -EINVAL;
+
+ if ( copy_from_user( &request, (drm_buf_desc_t __user *)arg,
+ sizeof(request) ) )
+ return -EFAULT;
+
+#if __OS_HAS_AGP
+ if ( request.flags & _DRM_AGP_BUFFER )
+ return drm_addbufs_agp( inode, filp, cmd, arg );
+ else
+#endif
+ if ( request.flags & _DRM_SG_BUFFER )
+ return drm_addbufs_sg( inode, filp, cmd, arg );
+ else
+ return drm_addbufs_pci( inode, filp, cmd, arg );
+}
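+
+/*
+ * Userspace view (sketch, using only names that appear above): to request
+ * 32 page-aligned AGP buffers of 64KB each, a client would fill a
+ * drm_buf_desc_t roughly as follows before issuing DRM_IOCTL_ADD_BUFS:
+ *
+ *   drm_buf_desc_t req = { 0 };
+ *   req.count     = 32;
+ *   req.size      = 64 * 1024;
+ *   req.flags     = _DRM_AGP_BUFFER | _DRM_PAGE_ALIGN;
+ *   req.agp_start = 0;              (offset into the bound AGP region)
+ *   ioctl(fd, DRM_IOCTL_ADD_BUFS, &req);
+ */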
+
+
+/**
+ * Get information about the buffer mappings.
+ *
+ * This was originally meant for debugging purposes, or for use by a
+ * sophisticated client library to determine how best to use the available
+ * buffers (e.g., large buffers can be used for image transfer).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_buf_info structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Increments drm_device::buf_use while holding the drm_device::count_lock
+ * lock, preventing further buffer allocation after this call.  Information
+ * about each requested buffer is then copied into user space.
+ */
+int drm_infobufs( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_info_t request;
+ drm_buf_info_t __user *argp = (void __user *)arg;
+ int i;
+ int count;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ return -EINVAL;
+
+ if ( !dma ) return -EINVAL;
+
+ spin_lock( &dev->count_lock );
+ if ( atomic_read( &dev->buf_alloc ) ) {
+ spin_unlock( &dev->count_lock );
+ return -EBUSY;
+ }
+ ++dev->buf_use; /* Can't allocate more after this call */
+ spin_unlock( &dev->count_lock );
+
+ if ( copy_from_user( &request, argp, sizeof(request) ) )
+ return -EFAULT;
+
+ for ( i = 0, count = 0 ; i < DRM_MAX_ORDER + 1 ; i++ ) {
+ if ( dma->bufs[i].buf_count ) ++count;
+ }
+
+ DRM_DEBUG( "count = %d\n", count );
+
+ if ( request.count >= count ) {
+ for ( i = 0, count = 0 ; i < DRM_MAX_ORDER + 1 ; i++ ) {
+ if ( dma->bufs[i].buf_count ) {
+ drm_buf_desc_t __user *to = &request.list[count];
+ drm_buf_entry_t *from = &dma->bufs[i];
+ drm_freelist_t *list = &dma->bufs[i].freelist;
+ if ( copy_to_user( &to->count,
+ &from->buf_count,
+ sizeof(from->buf_count) ) ||
+ copy_to_user( &to->size,
+ &from->buf_size,
+ sizeof(from->buf_size) ) ||
+ copy_to_user( &to->low_mark,
+ &list->low_mark,
+ sizeof(list->low_mark) ) ||
+ copy_to_user( &to->high_mark,
+ &list->high_mark,
+ sizeof(list->high_mark) ) )
+ return -EFAULT;
+
+ DRM_DEBUG( "%d %d %d %d %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].freelist.low_mark,
+ dma->bufs[i].freelist.high_mark );
+ ++count;
+ }
+ }
+ }
+ request.count = count;
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) )
+ return -EFAULT;
+
+ return 0;
+}
+
+/**
+ * Specifies a low and high water mark for buffer allocation
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg a pointer to a drm_buf_desc structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies that the size order is within the admissible range and updates the
+ * low and high water marks of the corresponding drm_device_dma::bufs entry.
+ *
+ * \note This ioctl is deprecated and mostly never used.
+ */
+int drm_markbufs( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_desc_t request;
+ int order;
+ drm_buf_entry_t *entry;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ return -EINVAL;
+
+ if ( !dma ) return -EINVAL;
+
+ if ( copy_from_user( &request,
+ (drm_buf_desc_t __user *)arg,
+ sizeof(request) ) )
+ return -EFAULT;
+
+ DRM_DEBUG( "%d, %d, %d\n",
+ request.size, request.low_mark, request.high_mark );
+ order = drm_order( request.size );
+ if ( order < DRM_MIN_ORDER || order > DRM_MAX_ORDER ) return -EINVAL;
+ entry = &dma->bufs[order];
+
+ if ( request.low_mark < 0 || request.low_mark > entry->buf_count )
+ return -EINVAL;
+ if ( request.high_mark < 0 || request.high_mark > entry->buf_count )
+ return -EINVAL;
+
+ entry->freelist.low_mark = request.low_mark;
+ entry->freelist.high_mark = request.high_mark;
+
+ return 0;
+}
+
+/**
+ * Unreserve the buffers in the list, previously reserved using drmDMA.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_buf_free structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Calls free_buffer() for each used buffer.
+ * This function is primarily used for debugging.
+ */
+int drm_freebufs( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_free_t request;
+ int i;
+ int idx;
+ drm_buf_t *buf;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ return -EINVAL;
+
+ if ( !dma ) return -EINVAL;
+
+ if ( copy_from_user( &request,
+ (drm_buf_free_t __user *)arg,
+ sizeof(request) ) )
+ return -EFAULT;
+
+ DRM_DEBUG( "%d\n", request.count );
+ for ( i = 0 ; i < request.count ; i++ ) {
+ if ( copy_from_user( &idx,
+ &request.list[i],
+ sizeof(idx) ) )
+ return -EFAULT;
+ if ( idx < 0 || idx >= dma->buf_count ) {
+ DRM_ERROR( "Index %d (of %d max)\n",
+ idx, dma->buf_count - 1 );
+ return -EINVAL;
+ }
+ buf = dma->buflist[idx];
+ if ( buf->filp != filp ) {
+ DRM_ERROR( "Process %d freeing buffer not owned\n",
+ current->pid );
+ return -EINVAL;
+ }
+ drm_free_buffer( dev, buf );
+ }
+
+ return 0;
+}
+
+/**
+ * Maps all of the DMA buffers into client-virtual space (ioctl).
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg pointer to a drm_buf_map structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Maps the AGP or SG buffer region with do_mmap(), and copies information
+ * about each buffer into user space. The PCI buffers are already mapped on the
+ * addbufs_pci() call.
+ */
+int drm_mapbufs( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ drm_buf_map_t __user *argp = (void __user *)arg;
+ int retcode = 0;
+ const int zero = 0;
+ unsigned long virtual;
+ unsigned long address;
+ drm_buf_map_t request;
+ int i;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ return -EINVAL;
+
+ if ( !dma ) return -EINVAL;
+
+ spin_lock( &dev->count_lock );
+ if ( atomic_read( &dev->buf_alloc ) ) {
+ spin_unlock( &dev->count_lock );
+ return -EBUSY;
+ }
+ dev->buf_use++; /* Can't allocate more after this call */
+ spin_unlock( &dev->count_lock );
+
+ if ( copy_from_user( &request, argp, sizeof(request) ) )
+ return -EFAULT;
+
+ if ( request.count >= dma->buf_count ) {
+ if ((drm_core_has_AGP(dev) && (dma->flags & _DRM_DMA_USE_AGP)) ||
+ (drm_core_check_feature(dev, DRIVER_SG) && (dma->flags & _DRM_DMA_USE_SG)) ) {
+ drm_map_t *map = dev->agp_buffer_map;
+
+ if ( !map ) {
+ retcode = -EINVAL;
+ goto done;
+ }
+
+#if LINUX_VERSION_CODE <= 0x020402
+			down( &current->mm->mmap_sem );
+#else
+			down_write( &current->mm->mmap_sem );
+#endif
+ virtual = do_mmap( filp, 0, map->size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED,
+ (unsigned long)map->offset );
+#if LINUX_VERSION_CODE <= 0x020402
+			up( &current->mm->mmap_sem );
+#else
+			up_write( &current->mm->mmap_sem );
+#endif
+ } else {
+#if LINUX_VERSION_CODE <= 0x020402
+			down( &current->mm->mmap_sem );
+#else
+			down_write( &current->mm->mmap_sem );
+#endif
+ virtual = do_mmap( filp, 0, dma->byte_count,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED, 0 );
+#if LINUX_VERSION_CODE <= 0x020402
+			up( &current->mm->mmap_sem );
+#else
+			up_write( &current->mm->mmap_sem );
+#endif
+ }
+ if ( virtual > -1024UL ) {
+ /* Real error */
+ retcode = (signed long)virtual;
+ goto done;
+ }
+ request.virtual = (void __user *)virtual;
+
+ for ( i = 0 ; i < dma->buf_count ; i++ ) {
+ if ( copy_to_user( &request.list[i].idx,
+ &dma->buflist[i]->idx,
+ sizeof(request.list[0].idx) ) ) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if ( copy_to_user( &request.list[i].total,
+ &dma->buflist[i]->total,
+ sizeof(request.list[0].total) ) ) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ if ( copy_to_user( &request.list[i].used,
+ &zero,
+ sizeof(zero) ) ) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ address = virtual + dma->buflist[i]->offset; /* *** */
+ if ( copy_to_user( &request.list[i].address,
+ &address,
+ sizeof(address) ) ) {
+ retcode = -EFAULT;
+ goto done;
+ }
+ }
+ }
+ done:
+ request.count = dma->buf_count;
+ DRM_DEBUG( "%d buffers, retcode = %d\n", request.count, retcode );
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) )
+ return -EFAULT;
+
+ return retcode;
+}
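+
+/*
+ * Userspace counterpart (sketch): after adding buffers a client typically
+ * calls DRM_IOCTL_MAP_BUFS once, passing a drm_buf_map_t whose count is at
+ * least drm_device_dma::buf_count and whose list array has room for that
+ * many entries; the kernel maps the region with do_mmap() and copies each
+ * buffer's idx, total and address back out, as done above.
+ */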
+
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
+#define DRIVER_AUTHOR "Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl"
-#include "drm_auth.h"
-#include "drm_agpsupport.h"
-#include "drm_bufs.h"
-#include "drm_context.h"
-#include "drm_dma.h"
-#include "drm_irq.h"
-#include "drm_drawable.h"
-#include "drm_drv.h"
-#include "drm_fops.h"
-#include "drm_init.h"
-#include "drm_ioctl.h"
-#include "drm_lock.h"
-#include "drm_memory.h"
-#include "drm_proc.h"
-#include "drm_vm.h"
-#include "drm_stub.h"
-#include "drm_scatter.h"
+#define DRIVER_NAME "drm"
+#define DRIVER_DESC "DRM shared core routines"
+#define DRIVER_DATE "20040925"
+
+#define DRM_IF_MAJOR 1
+#define DRM_IF_MINOR 2
+
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 0
+#define DRIVER_PATCHLEVEL 0
--- /dev/null
+/**
+ * \file drm_dma.h
+ * DMA IOCTL and function support
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+/**
+ * Initialize the DMA data.
+ *
+ * \param dev DRM device.
+ * \return zero on success or a negative value on failure.
+ *
+ * Allocate and initialize a drm_device_dma structure.
+ */
+int drm_dma_setup( drm_device_t *dev )
+{
+ int i;
+
+ dev->dma = drm_alloc( sizeof(*dev->dma), DRM_MEM_DRIVER );
+ if ( !dev->dma )
+ return -ENOMEM;
+
+ memset( dev->dma, 0, sizeof(*dev->dma) );
+
+ for ( i = 0 ; i <= DRM_MAX_ORDER ; i++ )
+ memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
+
+ return 0;
+}
+
+/**
+ * Cleanup the DMA resources.
+ *
+ * \param dev DRM device.
+ *
+ * Free all pages associated with DMA buffers, the buffer and page lists, and
+ * finally the drm_device::dma structure itself.
+ */
+void drm_dma_takedown(drm_device_t *dev)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i, j;
+
+ if (!dma) return;
+
+ /* Clear dma buffers */
+ for (i = 0; i <= DRM_MAX_ORDER; i++) {
+ if (dma->bufs[i].seg_count) {
+ DRM_DEBUG("order %d: buf_count = %d,"
+ " seg_count = %d\n",
+ i,
+ dma->bufs[i].buf_count,
+ dma->bufs[i].seg_count);
+ for (j = 0; j < dma->bufs[i].seg_count; j++) {
+ if (dma->bufs[i].seglist[j]) {
+ drm_free_pages(dma->bufs[i].seglist[j],
+ dma->bufs[i].page_order,
+ DRM_MEM_DMA);
+ }
+ }
+ drm_free(dma->bufs[i].seglist,
+ dma->bufs[i].seg_count
+ * sizeof(*dma->bufs[0].seglist),
+ DRM_MEM_SEGS);
+ }
+ if (dma->bufs[i].buf_count) {
+ for (j = 0; j < dma->bufs[i].buf_count; j++) {
+ if (dma->bufs[i].buflist[j].dev_private) {
+ drm_free(dma->bufs[i].buflist[j].dev_private,
+ dma->bufs[i].buflist[j].dev_priv_size,
+ DRM_MEM_BUFS);
+ }
+ }
+ drm_free(dma->bufs[i].buflist,
+ dma->bufs[i].buf_count *
+ sizeof(*dma->bufs[0].buflist),
+ DRM_MEM_BUFS);
+ }
+ }
+
+ if (dma->buflist) {
+ drm_free(dma->buflist,
+ dma->buf_count * sizeof(*dma->buflist),
+ DRM_MEM_BUFS);
+ }
+
+ if (dma->pagelist) {
+ drm_free(dma->pagelist,
+ dma->page_count * sizeof(*dma->pagelist),
+ DRM_MEM_PAGES);
+ }
+ drm_free(dev->dma, sizeof(*dev->dma), DRM_MEM_DRIVER);
+ dev->dma = NULL;
+}
+
+
+/**
+ * Free a buffer.
+ *
+ * \param dev DRM device.
+ * \param buf buffer to free.
+ *
+ * Resets the fields of \p buf.
+ */
+void drm_free_buffer(drm_device_t *dev, drm_buf_t *buf)
+{
+ if (!buf) return;
+
+ buf->waiting = 0;
+ buf->pending = 0;
+ buf->filp = NULL;
+ buf->used = 0;
+
+ if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE) && waitqueue_active(&buf->dma_wait)) {
+ wake_up_interruptible(&buf->dma_wait);
+ }
+}
+
+/**
+ * Reclaim the buffers.
+ *
+ * \param filp file pointer.
+ *
+ * Frees each buffer associated with \p filp not already on the hardware.
+ */
+void drm_core_reclaim_buffers(drm_device_t *dev, struct file *filp)
+{
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma) return;
+ for (i = 0; i < dma->buf_count; i++) {
+ if (dma->buflist[i]->filp == filp) {
+ switch (dma->buflist[i]->list) {
+ case DRM_LIST_NONE:
+ drm_free_buffer(dev, dma->buflist[i]);
+ break;
+ case DRM_LIST_WAIT:
+ dma->buflist[i]->list = DRM_LIST_RECLAIM;
+ break;
+ default:
+ /* Buffer already on hardware. */
+ break;
+ }
+ }
+ }
+}
+EXPORT_SYMBOL(drm_core_reclaim_buffers);
+
--- /dev/null
+/**
+ * \file drm_drawable.h
+ * IOCTLs for drawables
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Tue Feb 2 08:37:54 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+/** No-op. */
+int drm_adddraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_draw_t draw;
+
+ draw.handle = 0; /* NOOP */
+ DRM_DEBUG("%d\n", draw.handle);
+ if (copy_to_user((drm_draw_t __user *)arg, &draw, sizeof(draw)))
+ return -EFAULT;
+ return 0;
+}
+
+/** No-op. */
+int drm_rmdraw(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ return 0; /* NOOP */
+}
--- /dev/null
+/**
+ * \file drm_drv.h
+ * Generic driver template
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ *
+ * To use this template, you must at least define the following (samples
+ * given for the MGA driver):
+ *
+ * \code
+ * #define DRIVER_AUTHOR "VA Linux Systems, Inc."
+ *
+ * #define DRIVER_NAME "mga"
+ * #define DRIVER_DESC "Matrox G200/G400"
+ * #define DRIVER_DATE "20001127"
+ *
+ * #define DRIVER_MAJOR 2
+ * #define DRIVER_MINOR 0
+ * #define DRIVER_PATCHLEVEL 2
+ *
+ * #define DRIVER_IOCTL_COUNT DRM_ARRAY_SIZE( mga_ioctls )
+ *
+ * #define drm_x mga_##x
+ * \endcode
+ */
+
+/*
+ * Created: Thu Nov 23 03:10:50 2000 by gareth@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#include "drm_core.h"
+
+/** Ioctl table */
+drm_ioctl_desc_t drm_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { drm_version, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_by_busid, 0, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_MAP)] = { drm_getmap, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CLIENT)] = { drm_getclient, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_STATS)] = { drm_getstats, 0, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_VERSION)] = { drm_setversion, 0, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_noop, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_noop, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_MAP)] = { drm_rmmap, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SET_SAREA_CTX)] = { drm_setsareactx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_SAREA_CTX)] = { drm_getsareactx, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { drm_addctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { drm_rmctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { drm_modctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { drm_getctx, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { drm_switchctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { drm_newctx, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { drm_resctx, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { drm_lock, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { drm_unlock, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_noop, 1, 0 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { drm_addbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { drm_markbufs, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { drm_infobufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_MAP_BUFS)] = { drm_mapbufs, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { drm_freebufs, 1, 0 },
+ /* The DRM_IOCTL_DMA ioctl should be defined by the driver. */
+
+ [DRM_IOCTL_NR(DRM_IOCTL_CONTROL)] = { drm_control, 1, 1 },
+
+#if __OS_HAS_AGP
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE)] = { drm_agp_acquire, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_RELEASE)] = { drm_agp_release, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ENABLE)] = { drm_agp_enable, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_INFO)] = { drm_agp_info, 1, 0 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_ALLOC)] = { drm_agp_alloc, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_FREE)] = { drm_agp_free, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_BIND)] = { drm_agp_bind, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_AGP_UNBIND)] = { drm_agp_unbind, 1, 1 },
+#endif
+
+ [DRM_IOCTL_NR(DRM_IOCTL_SG_ALLOC)] = { drm_sg_alloc, 1, 1 },
+ [DRM_IOCTL_NR(DRM_IOCTL_SG_FREE)] = { drm_sg_free, 1, 1 },
+
+ [DRM_IOCTL_NR(DRM_IOCTL_WAIT_VBLANK)] = { drm_wait_vblank, 0, 0 },
+};
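+
+/*
+ * How to read the table above (sketch): each entry pairs the handler with
+ * two flags, believed to be "authentication required" and "root only".
+ * DRM_IOCTL_VERSION, for instance, needs neither, while DRM_IOCTL_ADD_MAP
+ * requires both.
+ */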
+
+#define DRIVER_IOCTL_COUNT DRM_ARRAY_SIZE( drm_ioctls )
+
+/**
+ * Take down the DRM device.
+ *
+ * \param dev DRM device structure.
+ *
+ * Frees every resource in \p dev.
+ *
+ * \sa drm_device and setup().
+ */
+int drm_takedown( drm_device_t *dev )
+{
+ drm_magic_entry_t *pt, *next;
+ drm_map_t *map;
+ drm_map_list_t *r_list;
+ struct list_head *list, *list_next;
+ drm_vma_entry_t *vma, *vma_next;
+ int i;
+
+ DRM_DEBUG( "\n" );
+
+ if (dev->driver->pretakedown)
+ dev->driver->pretakedown(dev);
+
+ if ( dev->irq_enabled ) drm_irq_uninstall( dev );
+
+ down( &dev->struct_sem );
+ del_timer( &dev->timer );
+
+ if ( dev->devname ) {
+ drm_free( dev->devname, strlen( dev->devname ) + 1,
+ DRM_MEM_DRIVER );
+ dev->devname = NULL;
+ }
+
+ if ( dev->unique ) {
+ drm_free( dev->unique, strlen( dev->unique ) + 1,
+ DRM_MEM_DRIVER );
+ dev->unique = NULL;
+ dev->unique_len = 0;
+ }
+ /* Clear pid list */
+ for ( i = 0 ; i < DRM_HASH_SIZE ; i++ ) {
+ for ( pt = dev->magiclist[i].head ; pt ; pt = next ) {
+ next = pt->next;
+ drm_free( pt, sizeof(*pt), DRM_MEM_MAGIC );
+ }
+ dev->magiclist[i].head = dev->magiclist[i].tail = NULL;
+ }
+
+ /* Clear AGP information */
+ if (drm_core_has_AGP(dev) && dev->agp) {
+ drm_agp_mem_t *entry;
+ drm_agp_mem_t *nexte;
+
+ /* Remove AGP resources, but leave dev->agp
+ intact until drv_cleanup is called. */
+ for ( entry = dev->agp->memory ; entry ; entry = nexte ) {
+ nexte = entry->next;
+ if ( entry->bound ) drm_unbind_agp( entry->memory );
+ drm_free_agp( entry->memory, entry->pages );
+ drm_free( entry, sizeof(*entry), DRM_MEM_AGPLISTS );
+ }
+ dev->agp->memory = NULL;
+
+ if ( dev->agp->acquired ) drm_agp_do_release();
+
+ dev->agp->acquired = 0;
+ dev->agp->enabled = 0;
+ }
+
+ /* Clear vma list (only built for debugging) */
+ if ( dev->vmalist ) {
+ for ( vma = dev->vmalist ; vma ; vma = vma_next ) {
+ vma_next = vma->next;
+ drm_free( vma, sizeof(*vma), DRM_MEM_VMAS );
+ }
+ dev->vmalist = NULL;
+ }
+
+ if( dev->maplist ) {
+ list_for_each_safe( list, list_next, &dev->maplist->head ) {
+ r_list = (drm_map_list_t *)list;
+
+ if ( ( map = r_list->map ) ) {
+ switch ( map->type ) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+ if (drm_core_has_MTRR(dev)) {
+ if ( map->mtrr >= 0 ) {
+ int retcode;
+ retcode = mtrr_del( map->mtrr,
+ map->offset,
+ map->size );
+ DRM_DEBUG( "mtrr_del=%d\n", retcode );
+ }
+ }
+ drm_ioremapfree( map->handle, map->size, dev );
+ break;
+ case _DRM_SHM:
+ vfree(map->handle);
+ break;
+
+ case _DRM_AGP:
+ /* Do nothing here, because this is all
+ * handled in the AGP/GART driver.
+ */
+ break;
+ case _DRM_SCATTER_GATHER:
+ /* Handle it */
+ if (drm_core_check_feature(dev, DRIVER_SG) && dev->sg) {
+ drm_sg_cleanup(dev->sg);
+ dev->sg = NULL;
+ }
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ list_del( list );
+ drm_free(r_list, sizeof(*r_list), DRM_MEM_MAPS);
+ }
+ drm_free(dev->maplist, sizeof(*dev->maplist), DRM_MEM_MAPS);
+ dev->maplist = NULL;
+ }
+
+ if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE) && dev->queuelist ) {
+ for ( i = 0 ; i < dev->queue_count ; i++ ) {
+ if ( dev->queuelist[i] ) {
+ drm_free( dev->queuelist[i],
+ sizeof(*dev->queuelist[0]),
+ DRM_MEM_QUEUES );
+ dev->queuelist[i] = NULL;
+ }
+ }
+ drm_free( dev->queuelist,
+ dev->queue_slots * sizeof(*dev->queuelist),
+ DRM_MEM_QUEUES );
+ dev->queuelist = NULL;
+ }
+ dev->queue_count = 0;
+
+ if (drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ drm_dma_takedown( dev );
+
+ if ( dev->lock.hw_lock ) {
+ dev->sigdata.lock = dev->lock.hw_lock = NULL; /* SHM removed */
+ dev->lock.filp = NULL;
+ wake_up_interruptible( &dev->lock.lock_queue );
+ }
+ up( &dev->struct_sem );
+
+ return 0;
+}
+
+
+
+/**
+ * Module initialization. Called via init_module at module load time, or via
+ * linux/init/main.c (this is not currently supported).
+ *
+ * \return zero on success or a negative number on failure.
+ *
+ * Initializes an array of drm_device structures, and attempts to
+ * initialize all available devices, using consecutive minors, registering the
+ * stubs and initializing the AGP device.
+ *
+ * Expands the \c DRIVER_PREINIT and \c DRIVER_POST_INIT macros before and
+ * after the initialization for driver customization.
+ */
+int drm_init( struct drm_driver *driver )
+{
+ struct pci_dev *pdev = NULL;
+ struct pci_device_id *pid;
+ int i;
+
+ DRM_DEBUG( "\n" );
+
+ drm_mem_init();
+
+ for (i=0; driver->pci_driver.id_table[i].vendor != 0; i++) {
+ pid = (struct pci_device_id *)&driver->pci_driver.id_table[i];
+
+ pdev=NULL;
+ /* pass back in pdev to account for multiple identical cards */
+ while ((pdev = pci_get_subsys(pid->vendor, pid->device, pid->subvendor, pid->subdevice, pdev)) != NULL) {
+ /* stealth mode requires a manual probe */
+ pci_dev_get(pdev);
+ drm_probe(pdev, pid, driver);
+ }
+ }
+ return 0;
+}
+EXPORT_SYMBOL(drm_init);
+
+/**
+ * Called via cleanup_module() at module unload time.
+ *
+ * Cleans up all DRM devices, calling drm_takedown().
+ *
+ * \sa drm_init().
+ */
+static void drm_cleanup( drm_device_t *dev )
+{
+ DRM_DEBUG( "\n" );
+
+ if (!dev) {
+		DRM_ERROR("cleanup called with no dev\n");
+ return;
+ }
+
+ drm_takedown( dev );
+
+ drm_ctxbitmap_cleanup( dev );
+
+ if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) &&
+ dev->agp && dev->agp->agp_mtrr >= 0) {
+ int retval;
+ retval = mtrr_del( dev->agp->agp_mtrr,
+ dev->agp->agp_info.aper_base,
+ dev->agp->agp_info.aper_size*1024*1024 );
+ DRM_DEBUG( "mtrr_del=%d\n", retval );
+ }
+
+ if (drm_core_has_AGP(dev) && dev->agp ) {
+ drm_free( dev->agp, sizeof(*dev->agp), DRM_MEM_AGPLISTS );
+ dev->agp = NULL;
+ }
+
+ if (dev->driver->postcleanup)
+ dev->driver->postcleanup(dev);
+
+ if ( drm_put_minor(dev) )
+ DRM_ERROR( "Cannot unload module\n" );
+}
+
+void drm_exit (struct drm_driver *driver)
+{
+ int i;
+ drm_device_t *dev = NULL;
+ drm_minor_t *minor;
+
+ DRM_DEBUG( "\n" );
+
+ for (i = 0; i < drm_cards_limit; i++) {
+ minor = &drm_minors[i];
+ if (!minor->dev)
+ continue;
+ if (minor->dev->driver!=driver)
+ continue;
+
+ dev = minor->dev;
+
+ }
+ if (dev) {
+ /* release the pci driver */
+ if (dev->pdev)
+ pci_dev_put(dev->pdev);
+ drm_cleanup(dev);
+ }
+
+ DRM_INFO( "Module unloaded\n" );
+}
+EXPORT_SYMBOL(drm_exit);
+
+/** File operations structure */
+static struct file_operations drm_stub_fops = {
+ .owner = THIS_MODULE,
+ .open = drm_stub_open
+};
+
+static int __init drm_core_init(void)
+{
+ int ret = -ENOMEM;
+
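+	/* Clamp the requested card limit to the number of supported minors */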
+ drm_cards_limit = (drm_cards_limit < DRM_MAX_MINOR + 1 ? drm_cards_limit : DRM_MAX_MINOR + 1);
+ drm_minors = drm_calloc(drm_cards_limit,
+ sizeof(*drm_minors), DRM_MEM_STUB);
+ if(!drm_minors)
+ goto err_p1;
+
+ if (register_chrdev(DRM_MAJOR, "drm", &drm_stub_fops))
+ goto err_p1;
+
+ drm_class = drm_sysfs_create(THIS_MODULE, "drm");
+ if (IS_ERR(drm_class)) {
+ printk (KERN_ERR "DRM: Error creating drm class.\n");
+ ret = PTR_ERR(drm_class);
+ goto err_p2;
+ }
+
+ drm_proc_root = create_proc_entry("dri", S_IFDIR, NULL);
+ if (!drm_proc_root) {
+ DRM_ERROR("Cannot create /proc/dri\n");
+ ret = -1;
+ goto err_p3;
+ }
+
+ DRM_INFO( "Initialized %s %d.%d.%d %s\n",
+ DRIVER_NAME,
+ DRIVER_MAJOR,
+ DRIVER_MINOR,
+ DRIVER_PATCHLEVEL,
+ DRIVER_DATE
+ );
+ return 0;
+err_p3:
+ drm_sysfs_destroy(drm_class);
+err_p2:
+ unregister_chrdev(DRM_MAJOR, "drm");
+ drm_free(drm_minors, sizeof(*drm_minors) * drm_cards_limit, DRM_MEM_STUB);
+err_p1:
+ return ret;
+}
+
+static void __exit drm_core_exit (void)
+{
+ remove_proc_entry("dri", NULL);
+ drm_sysfs_destroy(drm_class);
+
+ unregister_chrdev(DRM_MAJOR, "drm");
+
+ drm_free(drm_minors, sizeof(*drm_minors) *
+ drm_cards_limit, DRM_MEM_STUB);
+}
+
+
+module_init( drm_core_init );
+module_exit( drm_core_exit );
+
+
+/**
+ * Get version information
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_version structure.
+ * \return zero on success or negative number on failure.
+ *
+ * Fills in the version information in \p arg.
+ */
+int drm_version( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_version_t __user *argp = (void __user *)arg;
+ drm_version_t version;
+ int ret;
+
+ if ( copy_from_user( &version, argp, sizeof(version) ) )
+ return -EFAULT;
+
+	/* the version hook is required; it returns the personality module version */
+ if ((ret = dev->driver->version(&version)))
+ return ret;
+
+ if ( copy_to_user( argp, &version, sizeof(version) ) )
+ return -EFAULT;
+ return 0;
+}
+
+
+
+/**
+ * Called whenever a process performs an ioctl on /dev/drm.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument.
+ * \return zero on success or negative number on failure.
+ *
+ * Looks up the ioctl function in the ::ioctls table, checking for root
+ * privileges if so required, and dispatches to the respective function.
+ */
+int drm_ioctl( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_ioctl_desc_t *ioctl;
+ drm_ioctl_t *func;
+ unsigned int nr = DRM_IOCTL_NR(cmd);
+ int retcode = -EINVAL;
+
+ atomic_inc( &dev->ioctl_count );
+ atomic_inc( &dev->counts[_DRM_STAT_IOCTLS] );
+ ++priv->ioctl_count;
+
+ DRM_DEBUG( "pid=%d, cmd=0x%02x, nr=0x%02x, dev 0x%lx, auth=%d\n",
+ current->pid, cmd, nr, (long)old_encode_dev(dev->device),
+ priv->authenticated );
+
+ if (nr < DRIVER_IOCTL_COUNT)
+ ioctl = &drm_ioctls[nr];
+ else if ((nr >= DRM_COMMAND_BASE) && (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls))
+ ioctl = &dev->driver->ioctls[nr - DRM_COMMAND_BASE];
+ else
+ goto err_i1;
+
+ func = ioctl->func;
+ /* is there a local override? */
+ if ((nr == DRM_IOCTL_NR(DRM_IOCTL_DMA)) && dev->driver->dma_ioctl)
+ func = dev->driver->dma_ioctl;
+
+ if ( !func ) {
+ DRM_DEBUG( "no function\n" );
+ retcode = -EINVAL;
+ } else if ( ( ioctl->root_only && !capable( CAP_SYS_ADMIN ) )||
+ ( ioctl->auth_needed && !priv->authenticated ) ) {
+ retcode = -EACCES;
+ } else {
+ retcode = func( inode, filp, cmd, arg );
+ }
+
+err_i1:
+ atomic_dec( &dev->ioctl_count );
+ if (retcode) DRM_DEBUG( "ret = %x\n", retcode);
+ return retcode;
+}
+EXPORT_SYMBOL(drm_ioctl);
+
--- /dev/null
+/**
+ * \file drm_fops.h
+ * File operations for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Daryll Strauss <daryll@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Mon Jan 4 08:58:31 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#include <linux/poll.h>
+
+static int drm_setup( drm_device_t *dev )
+{
+ int i;
+ int ret;
+
+ if (dev->driver->presetup)
+ {
+ ret=dev->driver->presetup(dev);
+ if (ret!=0)
+ return ret;
+ }
+
+ atomic_set( &dev->ioctl_count, 0 );
+ atomic_set( &dev->vma_count, 0 );
+ dev->buf_use = 0;
+ atomic_set( &dev->buf_alloc, 0 );
+
+ if (drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ {
+ i = drm_dma_setup( dev );
+ if ( i < 0 )
+ return i;
+ }
+
+ for ( i = 0 ; i < DRM_ARRAY_SIZE(dev->counts) ; i++ )
+ atomic_set( &dev->counts[i], 0 );
+
+ for ( i = 0 ; i < DRM_HASH_SIZE ; i++ ) {
+ dev->magiclist[i].head = NULL;
+ dev->magiclist[i].tail = NULL;
+ }
+
+ dev->maplist = drm_alloc(sizeof(*dev->maplist),
+ DRM_MEM_MAPS);
+ if(dev->maplist == NULL) return -ENOMEM;
+ memset(dev->maplist, 0, sizeof(*dev->maplist));
+ INIT_LIST_HEAD(&dev->maplist->head);
+
+ dev->ctxlist = drm_alloc(sizeof(*dev->ctxlist),
+ DRM_MEM_CTXLIST);
+ if(dev->ctxlist == NULL) return -ENOMEM;
+ memset(dev->ctxlist, 0, sizeof(*dev->ctxlist));
+ INIT_LIST_HEAD(&dev->ctxlist->head);
+
+ dev->vmalist = NULL;
+ dev->sigdata.lock = dev->lock.hw_lock = NULL;
+ init_waitqueue_head( &dev->lock.lock_queue );
+ dev->queue_count = 0;
+ dev->queue_reserved = 0;
+ dev->queue_slots = 0;
+ dev->queuelist = NULL;
+ dev->irq_enabled = 0;
+ dev->context_flag = 0;
+ dev->interrupt_flag = 0;
+ dev->dma_flag = 0;
+ dev->last_context = 0;
+ dev->last_switch = 0;
+ dev->last_checked = 0;
+ init_waitqueue_head( &dev->context_wait );
+ dev->if_version = 0;
+
+ dev->ctx_start = 0;
+ dev->lck_start = 0;
+
+ dev->buf_rp = dev->buf;
+ dev->buf_wp = dev->buf;
+ dev->buf_end = dev->buf + DRM_BSZ;
+ dev->buf_async = NULL;
+ init_waitqueue_head( &dev->buf_readers );
+ init_waitqueue_head( &dev->buf_writers );
+
+ DRM_DEBUG( "\n" );
+
+ /*
+ * The kernel's context could be created here, but is now created
+ * in drm_dma_enqueue. This is more resource-efficient for
+ * hardware that does not do DMA, but may mean that
+ * drm_select_queue fails between the time the interrupt is
+ * initialized and the time the queues are initialized.
+ */
+ if (dev->driver->postsetup)
+ dev->driver->postsetup(dev);
+
+ return 0;
+}
+
+/**
+ * Open file.
+ *
+ * \param inode device inode
+ * \param filp file pointer.
+ * \return zero on success or a negative number on failure.
+ *
+ * Searches for the DRM device with the same minor number, calls open_helper(), and
+ * increments the device open count. If the open count was previously zero,
+ * i.e., this is the first time the device is opened, then calls setup().
+ */
+int drm_open( struct inode *inode, struct file *filp )
+{
+ drm_device_t *dev = NULL;
+ int minor = iminor(inode);
+ int retcode = 0;
+
+ if (!((minor >= 0) && (minor < drm_cards_limit)))
+ return -ENODEV;
+
+ dev = drm_minors[minor].dev;
+ if (!dev)
+ return -ENODEV;
+
+ retcode = drm_open_helper( inode, filp, dev );
+ if ( !retcode ) {
+ atomic_inc( &dev->counts[_DRM_STAT_OPENS] );
+ spin_lock( &dev->count_lock );
+ if ( !dev->open_count++ ) {
+ spin_unlock( &dev->count_lock );
+ return drm_setup( dev );
+ }
+ spin_unlock( &dev->count_lock );
+ }
+
+ return retcode;
+}
+EXPORT_SYMBOL(drm_open);
+
+/**
+ * Release file.
+ *
+ * \param inode device inode
+ * \param filp file pointer.
+ * \return zero on success or a negative number on failure.
+ *
+ * If the hardware lock is held, frees it and takes it again for the kernel
+ * context, since that is necessary to reclaim buffers. Unlinks the file private
+ * data from its list and frees it. Decreases the open count and, if it reaches
+ * zero, calls takedown().
+ */
+int drm_release( struct inode *inode, struct file *filp )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ int retcode = 0;
+
+ lock_kernel();
+ dev = priv->dev;
+
+ DRM_DEBUG( "open_count = %d\n", dev->open_count );
+
+ if (dev->driver->prerelease)
+ dev->driver->prerelease(dev, filp);
+
+ /* ========================================================
+ * Begin inline drm_release
+ */
+
+ DRM_DEBUG( "pid = %d, device = 0x%lx, open_count = %d\n",
+ current->pid, (long)old_encode_dev(dev->device), dev->open_count );
+
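+	/* If this file currently holds the hardware lock, free it before closing */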
+ if ( priv->lock_count && dev->lock.hw_lock &&
+ _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock) &&
+ dev->lock.filp == filp ) {
+ DRM_DEBUG( "File %p released, freeing lock for context %d\n",
+ filp,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock) );
+
+ if (dev->driver->release)
+ dev->driver->release(dev, filp);
+
+ drm_lock_free( dev, &dev->lock.hw_lock->lock,
+ _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock) );
+
+ /* FIXME: may require heavy-handed reset of
+ hardware at this point, possibly
+ processed via a callback to the X
+ server. */
+ }
+ else if ( dev->driver->release && priv->lock_count && dev->lock.hw_lock ) {
+ /* The lock is required to reclaim buffers */
+ DECLARE_WAITQUEUE( entry, current );
+
+ add_wait_queue( &dev->lock.lock_queue, &entry );
+ for (;;) {
+ __set_current_state(TASK_INTERRUPTIBLE);
+ if ( !dev->lock.hw_lock ) {
+ /* Device has been unregistered */
+ retcode = -EINTR;
+ break;
+ }
+ if ( drm_lock_take( &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT ) ) {
+ dev->lock.filp = filp;
+ dev->lock.lock_time = jiffies;
+ atomic_inc( &dev->counts[_DRM_STAT_LOCKS] );
+ break; /* Got lock */
+ }
+ /* Contention */
+ schedule();
+ if ( signal_pending( current ) ) {
+ retcode = -ERESTARTSYS;
+ break;
+ }
+ }
+ __set_current_state(TASK_RUNNING);
+ remove_wait_queue( &dev->lock.lock_queue, &entry );
+ if( !retcode ) {
+ if (dev->driver->release)
+ dev->driver->release(dev, filp);
+ drm_lock_free( dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT );
+ }
+ }
+
+ if (drm_core_check_feature(dev, DRIVER_HAVE_DMA))
+ {
+ dev->driver->reclaim_buffers(dev, filp);
+ }
+
+ drm_fasync( -1, filp, 0 );
+
+ down( &dev->ctxlist_sem );
+ if ( !list_empty( &dev->ctxlist->head ) ) {
+ drm_ctx_list_t *pos, *n;
+
+ list_for_each_entry_safe( pos, n, &dev->ctxlist->head, head ) {
+ if ( pos->tag == priv &&
+ pos->handle != DRM_KERNEL_CONTEXT ) {
+ if (dev->driver->context_dtor)
+ dev->driver->context_dtor(dev, pos->handle);
+
+ drm_ctxbitmap_free( dev, pos->handle );
+
+ list_del( &pos->head );
+ drm_free( pos, sizeof(*pos), DRM_MEM_CTXLIST );
+ --dev->ctx_count;
+ }
+ }
+ }
+ up( &dev->ctxlist_sem );
+
+ down( &dev->struct_sem );
+ if ( priv->remove_auth_on_close == 1 ) {
+ drm_file_t *temp = dev->file_first;
+ while ( temp ) {
+ temp->authenticated = 0;
+ temp = temp->next;
+ }
+ }
+ if ( priv->prev ) {
+ priv->prev->next = priv->next;
+ } else {
+ dev->file_first = priv->next;
+ }
+ if ( priv->next ) {
+ priv->next->prev = priv->prev;
+ } else {
+ dev->file_last = priv->prev;
+ }
+ up( &dev->struct_sem );
+
+ if (dev->driver->free_filp_priv)
+ dev->driver->free_filp_priv(dev, priv);
+
+ drm_free( priv, sizeof(*priv), DRM_MEM_FILES );
+
+ /* ========================================================
+ * End inline drm_release
+ */
+
+ atomic_inc( &dev->counts[_DRM_STAT_CLOSES] );
+ spin_lock( &dev->count_lock );
+ if ( !--dev->open_count ) {
+ if ( atomic_read( &dev->ioctl_count ) || dev->blocked ) {
+ DRM_ERROR( "Device busy: %d %d\n",
+ atomic_read( &dev->ioctl_count ),
+ dev->blocked );
+ spin_unlock( &dev->count_lock );
+ unlock_kernel();
+ return -EBUSY;
+ }
+ spin_unlock( &dev->count_lock );
+ unlock_kernel();
+ return drm_takedown( dev );
+ }
+ spin_unlock( &dev->count_lock );
+
+ unlock_kernel();
+
+ return retcode;
+}
+EXPORT_SYMBOL(drm_release);
+
+/**
+ * Called whenever a process opens /dev/drm.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param dev device.
+ * \return zero on success or a negative number on failure.
+ *
+ * Creates and initializes a drm_file structure for the file private data in \p
+ * filp and adds it into the doubly linked list in \p dev.
+ */
+int drm_open_helper(struct inode *inode, struct file *filp, drm_device_t *dev)
+{
+ int minor = iminor(inode);
+ drm_file_t *priv;
+ int ret;
+
+ if (filp->f_flags & O_EXCL) return -EBUSY; /* No exclusive opens */
+ if (!drm_cpu_valid()) return -EINVAL;
+
+ DRM_DEBUG("pid = %d, minor = %d\n", current->pid, minor);
+
+ priv = drm_alloc(sizeof(*priv), DRM_MEM_FILES);
+ if(!priv) return -ENOMEM;
+
+ memset(priv, 0, sizeof(*priv));
+ filp->private_data = priv;
+ priv->uid = current->euid;
+ priv->pid = current->pid;
+ priv->minor = minor;
+ priv->dev = dev;
+ priv->ioctl_count = 0;
+ priv->authenticated = capable(CAP_SYS_ADMIN);
+ priv->lock_count = 0;
+
+ if (dev->driver->open_helper) {
+ ret=dev->driver->open_helper(dev, priv);
+ if (ret < 0)
+ goto out_free;
+ }
+
+ down(&dev->struct_sem);
+ if (!dev->file_last) {
+ priv->next = NULL;
+ priv->prev = NULL;
+ dev->file_first = priv;
+ dev->file_last = priv;
+ } else {
+ priv->next = NULL;
+ priv->prev = dev->file_last;
+ dev->file_last->next = priv;
+ dev->file_last = priv;
+ }
+ up(&dev->struct_sem);
+
+#ifdef __alpha__
+ /*
+ * Default the hose
+ */
+ if (!dev->hose) {
+ struct pci_dev *pci_dev;
+ pci_dev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, NULL);
+ if (pci_dev) {
+ dev->hose = pci_dev->sysdata;
+ pci_dev_put(pci_dev);
+ }
+ if (!dev->hose) {
+ struct pci_bus *b = pci_bus_b(pci_root_buses.next);
+ if (b) dev->hose = b->sysdata;
+ }
+ }
+#endif
+
+ return 0;
+out_free:
+ drm_free(priv, sizeof(*priv), DRM_MEM_FILES);
+ filp->private_data=NULL;
+ return ret;
+}
+
+/** No-op. */
+int drm_flush(struct file *filp)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+
+ DRM_DEBUG("pid = %d, device = 0x%lx, open_count = %d\n",
+ current->pid, (long)old_encode_dev(dev->device), dev->open_count);
+ return 0;
+}
+EXPORT_SYMBOL(drm_flush);
+
+/** No-op. */
+int drm_fasync(int fd, struct file *filp, int on)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ int retcode;
+
+ DRM_DEBUG("fd = %d, device = 0x%lx\n", fd, (long)old_encode_dev(dev->device));
+ retcode = fasync_helper(fd, filp, on, &dev->buf_async);
+ if (retcode < 0) return retcode;
+ return 0;
+}
+EXPORT_SYMBOL(drm_fasync);
+
+/** No-op. */
+unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
+{
+ return 0;
+}
+EXPORT_SYMBOL(drm_poll);
+
+
+/** No-op. */
+ssize_t drm_read(struct file *filp, char __user *buf, size_t count, loff_t *off)
+{
+ return 0;
+}
--- /dev/null
+/**
+ * \file drm_init.h
+ * Setup/Cleanup for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Mon Jan 4 08:58:31 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+/**
+ * Check whether DRI will run on this CPU.
+ *
+ * \return non-zero if the DRI will run on this CPU, or zero otherwise.
+ */
+int drm_cpu_valid(void)
+{
+#if defined(__i386__)
+ if (boot_cpu_data.x86 == 3) return 0; /* No cmpxchg on a 386 */
+#endif
+#if defined(__sparc__) && !defined(__sparc_v9__)
+ return 0; /* No cmpxchg before v9 sparc. */
+#endif
+ return 1;
+}
--- /dev/null
+/**
+ * \file drm_ioctl.h
+ * IOCTL processing for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Fri Jan 8 09:01:26 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#include "drm_core.h"
+
+#include "linux/pci.h"
+
+/**
+ * Get the bus id.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_unique structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Copies the bus id from drm_device::unique into user space.
+ */
+int drm_getunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t __user *argp = (void __user *)arg;
+ drm_unique_t u;
+
+ if (copy_from_user(&u, argp, sizeof(u)))
+ return -EFAULT;
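+	/* Copy the busid string out only if the user buffer is large enough to hold it */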
+ if (u.unique_len >= dev->unique_len) {
+ if (copy_to_user(u.unique, dev->unique, dev->unique_len))
+ return -EFAULT;
+ }
+ u.unique_len = dev->unique_len;
+ if (copy_to_user(argp, &u, sizeof(u)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Set the bus id.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_unique structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Copies the bus id from userspace into drm_device::unique, and verifies that
+ * it matches the device this DRM is attached to (EINVAL otherwise). Deprecated
+ * in interface version 1.1 and will return EBUSY when setversion has requested
+ * version 1.1 or greater.
+ */
+int drm_setunique(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_unique_t u;
+ int domain, bus, slot, func, ret;
+
+ if (dev->unique_len || dev->unique) return -EBUSY;
+
+ if (copy_from_user(&u, (drm_unique_t __user *)arg, sizeof(u)))
+ return -EFAULT;
+
+ if (!u.unique_len || u.unique_len > 1024) return -EINVAL;
+
+ dev->unique_len = u.unique_len;
+ dev->unique = drm_alloc(u.unique_len + 1, DRM_MEM_DRIVER);
+ if(!dev->unique) return -ENOMEM;
+ if (copy_from_user(dev->unique, u.unique, dev->unique_len))
+ return -EFAULT;
+
+ dev->unique[dev->unique_len] = '\0';
+
+ dev->devname = drm_alloc(strlen(dev->driver->pci_driver.name) + strlen(dev->unique) + 2,
+ DRM_MEM_DRIVER);
+ if (!dev->devname)
+ return -ENOMEM;
+
+ sprintf(dev->devname, "%s@%s", dev->driver->pci_driver.name, dev->unique);
+
+ /* Return error if the busid submitted doesn't match the device's actual
+ * busid.
+ */
+ ret = sscanf(dev->unique, "PCI:%d:%d:%d", &bus, &slot, &func);
+ if (ret != 3)
+ return DRM_ERR(EINVAL);
+ domain = bus >> 8;
+ bus &= 0xff;
+
+ if ((domain != dev->pci_domain) ||
+ (bus != dev->pci_bus) ||
+ (slot != dev->pci_slot) ||
+ (func != dev->pci_func))
+ return -EINVAL;
+
+ return 0;
+}
+
+static int
+drm_set_busid(drm_device_t *dev)
+{
+ if (dev->unique != NULL)
+ return EBUSY;
+
+ dev->unique_len = 20;
+ dev->unique = drm_alloc(dev->unique_len + 1, DRM_MEM_DRIVER);
+ if (dev->unique == NULL)
+ return ENOMEM;
+
+ snprintf(dev->unique, dev->unique_len, "pci:%04x:%02x:%02x.%d",
+ dev->pci_domain, dev->pci_bus, dev->pci_slot, dev->pci_func);
+
+ dev->devname = drm_alloc(strlen(dev->driver->pci_driver.name) + dev->unique_len + 2,
+ DRM_MEM_DRIVER);
+ if (dev->devname == NULL)
+ return ENOMEM;
+
+ sprintf(dev->devname, "%s@%s", dev->driver->pci_driver.name, dev->unique);
+
+ return 0;
+}
+
+
+/**
+ * Get mapping information.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_map structure.
+ *
+ * \return zero on success or a negative number on failure.
+ *
+ * Searches for the mapping with the specified offset and copies its information
+ * into userspace.
+ */
+int drm_getmap( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t __user *argp = (void __user *)arg;
+ drm_map_t map;
+ drm_map_list_t *r_list = NULL;
+ struct list_head *list;
+ int idx;
+ int i;
+
+ if (copy_from_user(&map, argp, sizeof(map)))
+ return -EFAULT;
+ idx = map.offset;
+
+ down(&dev->struct_sem);
+ if (idx < 0) {
+ up(&dev->struct_sem);
+ return -EINVAL;
+ }
+
+ i = 0;
+ list_for_each(list, &dev->maplist->head) {
+ if(i == idx) {
+ r_list = list_entry(list, drm_map_list_t, head);
+ break;
+ }
+ i++;
+ }
+ if(!r_list || !r_list->map) {
+ up(&dev->struct_sem);
+ return -EINVAL;
+ }
+
+ map.offset = r_list->map->offset;
+ map.size = r_list->map->size;
+ map.type = r_list->map->type;
+ map.flags = r_list->map->flags;
+ map.handle = r_list->map->handle;
+ map.mtrr = r_list->map->mtrr;
+ up(&dev->struct_sem);
+
+ if (copy_to_user(argp, &map, sizeof(map))) return -EFAULT;
+ return 0;
+}
+
+/**
+ * Get client information.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_client structure.
+ *
+ * \return zero on success or a negative number on failure.
+ *
+ * Searches for the client with the specified index and copies its information
+ * into userspace.
+ */
+int drm_getclient( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_client_t __user *argp = (void __user *)arg;
+ drm_client_t client;
+ drm_file_t *pt;
+ int idx;
+ int i;
+
+ if (copy_from_user(&client, argp, sizeof(client)))
+ return -EFAULT;
+ idx = client.idx;
+ down(&dev->struct_sem);
+ for (i = 0, pt = dev->file_first; i < idx && pt; i++, pt = pt->next)
+ ;
+
+ if (!pt) {
+ up(&dev->struct_sem);
+ return -EINVAL;
+ }
+ client.auth = pt->authenticated;
+ client.pid = pt->pid;
+ client.uid = pt->uid;
+ client.magic = pt->magic;
+ client.iocs = pt->ioctl_count;
+ up(&dev->struct_sem);
+
+ if (copy_to_user((drm_client_t __user *)arg, &client, sizeof(client)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Get statistics information.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_stats structure.
+ *
+ * \return zero on success or a negative number on failure.
+ */
+int drm_getstats( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_stats_t stats;
+ int i;
+
+ memset(&stats, 0, sizeof(stats));
+
+ down(&dev->struct_sem);
+
+ for (i = 0; i < dev->counters; i++) {
+ if (dev->types[i] == _DRM_STAT_LOCK)
+ stats.data[i].value
+ = (dev->lock.hw_lock
+ ? dev->lock.hw_lock->lock : 0);
+ else
+ stats.data[i].value = atomic_read(&dev->counts[i]);
+ stats.data[i].type = dev->types[i];
+ }
+
+ stats.count = dev->counters;
+
+ up(&dev->struct_sem);
+
+ if (copy_to_user((drm_stats_t __user *)arg, &stats, sizeof(stats)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Setversion ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_lock structure.
+ * \return zero on success or negative number on failure.
+ *
+ * Sets the requested interface version.
+ */
+int drm_setversion(DRM_IOCTL_ARGS)
+{
+ DRM_DEVICE;
+ drm_set_version_t sv;
+ drm_set_version_t retv;
+ int if_version;
+ drm_set_version_t __user *argp = (void __user *)data;
+ drm_version_t version;
+
+ DRM_COPY_FROM_USER_IOCTL(sv, argp, sizeof(sv));
+
+ memset(&version, 0, sizeof(version));
+
+ dev->driver->version(&version);
+ retv.drm_di_major = DRM_IF_MAJOR;
+ retv.drm_di_minor = DRM_IF_MINOR;
+ retv.drm_dd_major = version.version_major;
+ retv.drm_dd_minor = version.version_minor;
+
+ DRM_COPY_TO_USER_IOCTL(argp, retv, sizeof(sv));
+
+ if (sv.drm_di_major != -1) {
+ if (sv.drm_di_major != DRM_IF_MAJOR ||
+ sv.drm_di_minor < 0 || sv.drm_di_minor > DRM_IF_MINOR)
+ return EINVAL;
+		if_version = DRM_IF_VERSION(sv.drm_di_major, sv.drm_di_minor);
+ dev->if_version = DRM_MAX(if_version, dev->if_version);
+ if (sv.drm_di_minor >= 1) {
+ /*
+ * Version 1.1 includes tying of DRM to specific device
+ */
+ drm_set_busid(dev);
+ }
+ }
+
+ if (sv.drm_dd_major != -1) {
+ if (sv.drm_dd_major != version.version_major ||
+ sv.drm_dd_minor < 0 || sv.drm_dd_minor > version.version_minor)
+ return EINVAL;
+
+ if (dev->driver->set_version)
+ dev->driver->set_version(dev, &sv);
+ }
+ return 0;
+}
+
+/** No-op ioctl. */
+int drm_noop(struct inode *inode, struct file *filp, unsigned int cmd,
+ unsigned long arg)
+{
+ DRM_DEBUG("\n");
+ return 0;
+}
--- /dev/null
+/**
+ * \file drm_irq.h
+ * IRQ support
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+/**
+ * Get interrupt from bus id.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_irq_busid structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Finds the PCI device with the specified bus id and gets its IRQ number.
+ * This IOCTL is deprecated, and will now return EINVAL for any busid not equal
+ * to that of the device that this DRM instance is attached to.
+ */
+int drm_irq_by_busid(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_irq_busid_t __user *argp = (void __user *)arg;
+ drm_irq_busid_t p;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ if (copy_from_user(&p, argp, sizeof(p)))
+ return -EFAULT;
+
+ if ((p.busnum >> 8) != dev->pci_domain ||
+ (p.busnum & 0xff) != dev->pci_bus ||
+ p.devnum != dev->pci_slot ||
+ p.funcnum != dev->pci_func)
+ return -EINVAL;
+
+ p.irq = dev->irq;
+
+ DRM_DEBUG("%d:%d:%d => IRQ %d\n",
+ p.busnum, p.devnum, p.funcnum, p.irq);
+ if (copy_to_user(argp, &p, sizeof(p)))
+ return -EFAULT;
+ return 0;
+}
+
+/**
+ * Install IRQ handler.
+ *
+ * \param dev DRM device.
+ * \param irq IRQ number.
+ *
+ * Initializes the IRQ-related data and sets up drm_device::vbl_queue. Installs the handler, calling the driver
+ * \c drm_driver_irq_preinstall() and \c drm_driver_irq_postinstall() functions
+ * before and after the installation.
+ */
+int drm_irq_install( drm_device_t *dev )
+{
+ int ret;
+ unsigned long sh_flags=0;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ if ( dev->irq == 0 )
+ return -EINVAL;
+
+ down( &dev->struct_sem );
+
+ /* Driver must have been initialized */
+ if ( !dev->dev_private ) {
+ up( &dev->struct_sem );
+ return -EINVAL;
+ }
+
+ if ( dev->irq_enabled ) {
+ up( &dev->struct_sem );
+ return -EBUSY;
+ }
+ dev->irq_enabled = 1;
+ up( &dev->struct_sem );
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+ if (drm_core_check_feature(dev, DRIVER_IRQ_VBL)) {
+ init_waitqueue_head(&dev->vbl_queue);
+
+ spin_lock_init( &dev->vbl_lock );
+
+ INIT_LIST_HEAD( &dev->vbl_sigs.head );
+
+ dev->vbl_pending = 0;
+ }
+
+ /* Before installing handler */
+ dev->driver->irq_preinstall(dev);
+
+ /* Install handler */
+ if (drm_core_check_feature(dev, DRIVER_IRQ_SHARED))
+ sh_flags = SA_SHIRQ;
+
+ ret = request_irq( dev->irq, dev->driver->irq_handler,
+ sh_flags, dev->devname, dev );
+ if ( ret < 0 ) {
+ down( &dev->struct_sem );
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+ return ret;
+ }
+
+ /* After installing handler */
+ dev->driver->irq_postinstall(dev);
+
+ return 0;
+}
+
+/**
+ * Uninstall the IRQ handler.
+ *
+ * \param dev DRM device.
+ *
+ * Calls the driver's \c drm_driver_irq_uninstall() function and stops the IRQ.
+ */
+int drm_irq_uninstall( drm_device_t *dev )
+{
+ int irq_enabled;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ down( &dev->struct_sem );
+ irq_enabled = dev->irq_enabled;
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+
+ if ( !irq_enabled )
+ return -EINVAL;
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+ dev->driver->irq_uninstall(dev);
+
+ free_irq( dev->irq, dev );
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_irq_uninstall);
+
+/**
+ * IRQ control ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_control structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Calls irq_install() or irq_uninstall() according to \p arg.
+ */
+int drm_control( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+
+	/* if we don't have an IRQ, we fall back for compatibility reasons - this used to be a separate function in drm_dma.h */
+
+ if ( copy_from_user( &ctl, (drm_control_t __user *)arg, sizeof(ctl) ) )
+ return -EFAULT;
+
+ switch ( ctl.func ) {
+ case DRM_INST_HANDLER:
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return 0;
+ if (dev->if_version < DRM_IF_VERSION(1, 2) &&
+ ctl.irq != dev->irq)
+ return -EINVAL;
+ return drm_irq_install( dev );
+ case DRM_UNINST_HANDLER:
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return 0;
+ return drm_irq_uninstall( dev );
+ default:
+ return -EINVAL;
+ }
+}
+
+/**
+ * Wait for VBLANK.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param data user argument, pointing to a drm_wait_vblank structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the IRQ is installed.
+ *
+ * If a signal is requested, checks whether this task has already scheduled the same
+ * signal for the same vblank sequence number; nothing to be done in
+ * that case. If the number of tasks waiting for the interrupt exceeds 100, the
+ * function fails. Otherwise adds a new entry to drm_device::vbl_sigs for this
+ * task.
+ *
+ * If a signal is not requested, then calls vblank_wait().
+ */
+int drm_wait_vblank( DRM_IOCTL_ARGS )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_wait_vblank_t __user *argp = (void __user *)data;
+ drm_wait_vblank_t vblwait;
+ struct timeval now;
+ int ret = 0;
+ unsigned int flags;
+
+ if (!drm_core_check_feature(dev, DRIVER_IRQ_VBL))
+ return -EINVAL;
+
+ if (!dev->irq)
+ return -EINVAL;
+
+ DRM_COPY_FROM_USER_IOCTL( vblwait, argp, sizeof(vblwait) );
+
+ switch ( vblwait.request.type & ~_DRM_VBLANK_FLAGS_MASK ) {
+ case _DRM_VBLANK_RELATIVE:
+ vblwait.request.sequence += atomic_read( &dev->vbl_received );
+ vblwait.request.type &= ~_DRM_VBLANK_RELATIVE;
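+		/* fall through - relative requests are converted to absolute ones */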
+ case _DRM_VBLANK_ABSOLUTE:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ flags = vblwait.request.type & _DRM_VBLANK_FLAGS_MASK;
+
+ if ( flags & _DRM_VBLANK_SIGNAL ) {
+ unsigned long irqflags;
+ drm_vbl_sig_t *vbl_sig;
+
+ vblwait.reply.sequence = atomic_read( &dev->vbl_received );
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ /* Check if this task has already scheduled the same signal
+ * for the same vblank sequence number; nothing to be done in
+ * that case
+ */
+ list_for_each_entry( vbl_sig, &dev->vbl_sigs.head, head ) {
+ if (vbl_sig->sequence == vblwait.request.sequence
+ && vbl_sig->info.si_signo == vblwait.request.signal
+ && vbl_sig->task == current)
+ {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ goto done;
+ }
+ }
+
+ if ( dev->vbl_pending >= 100 ) {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ return -EBUSY;
+ }
+
+ dev->vbl_pending++;
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+
+ if ( !( vbl_sig = drm_alloc( sizeof( drm_vbl_sig_t ), DRM_MEM_DRIVER ) ) ) {
+ return -ENOMEM;
+ }
+
+ memset( (void *)vbl_sig, 0, sizeof(*vbl_sig) );
+
+ vbl_sig->sequence = vblwait.request.sequence;
+ vbl_sig->info.si_signo = vblwait.request.signal;
+ vbl_sig->task = current;
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ list_add_tail( (struct list_head *) vbl_sig, &dev->vbl_sigs.head );
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ } else {
+ if (dev->driver->vblank_wait)
+ ret = dev->driver->vblank_wait( dev, &vblwait.request.sequence );
+
+ do_gettimeofday( &now );
+ vblwait.reply.tval_sec = now.tv_sec;
+ vblwait.reply.tval_usec = now.tv_usec;
+ }
+
+done:
+ DRM_COPY_TO_USER_IOCTL( argp, vblwait, sizeof(vblwait) );
+
+ return ret;
+}
+
+/**
+ * Send the VBLANK signals.
+ *
+ * \param dev DRM device.
+ *
+ * Sends a signal for each task in drm_device::vbl_sigs and empties the list.
+ */
+void drm_vbl_send_signals( drm_device_t *dev )
+{
+ struct list_head *list, *tmp;
+ drm_vbl_sig_t *vbl_sig;
+ unsigned int vbl_seq = atomic_read( &dev->vbl_received );
+ unsigned long flags;
+
+ spin_lock_irqsave( &dev->vbl_lock, flags );
+
+ list_for_each_safe( list, tmp, &dev->vbl_sigs.head ) {
+ vbl_sig = list_entry( list, drm_vbl_sig_t, head );
+ if ( ( vbl_seq - vbl_sig->sequence ) <= (1<<23) ) {
+ vbl_sig->info.si_code = vbl_seq;
+ send_sig_info( vbl_sig->info.si_signo, &vbl_sig->info, vbl_sig->task );
+
+ list_del( list );
+
+ drm_free( vbl_sig, sizeof(*vbl_sig), DRM_MEM_DRIVER );
+
+ dev->vbl_pending--;
+ }
+ }
+
+ spin_unlock_irqrestore( &dev->vbl_lock, flags );
+}
+EXPORT_SYMBOL(drm_vbl_send_signals);
+
+
--- /dev/null
+/**
+ * \file drm_lock.h
+ * IOCTLs for locking
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Tue Feb 2 08:37:54 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+/**
+ * Lock ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_lock structure.
+ * \return zero on success or negative number on failure.
+ *
+ * Adds the current task to the lock wait queue and attempts to take the lock.
+ */
+int drm_lock( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ DECLARE_WAITQUEUE( entry, current );
+ drm_lock_t lock;
+ int ret = 0;
+
+ ++priv->lock_count;
+
+ if ( copy_from_user( &lock, (drm_lock_t __user *)arg, sizeof(lock) ) )
+ return -EFAULT;
+
+ if ( lock.context == DRM_KERNEL_CONTEXT ) {
+ DRM_ERROR( "Process %d using kernel context %d\n",
+ current->pid, lock.context );
+ return -EINVAL;
+ }
+
+ DRM_DEBUG( "%d (pid %d) requests lock (0x%08x), flags = 0x%08x\n",
+ lock.context, current->pid,
+ dev->lock.hw_lock->lock, lock.flags );
+
+ if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE))
+ if ( lock.context < 0 )
+ return -EINVAL;
+
+ add_wait_queue( &dev->lock.lock_queue, &entry );
+ for (;;) {
+ __set_current_state(TASK_INTERRUPTIBLE);
+ if ( !dev->lock.hw_lock ) {
+ /* Device has been unregistered */
+ ret = -EINTR;
+ break;
+ }
+ if ( drm_lock_take( &dev->lock.hw_lock->lock,
+ lock.context ) ) {
+ dev->lock.filp = filp;
+ dev->lock.lock_time = jiffies;
+ atomic_inc( &dev->counts[_DRM_STAT_LOCKS] );
+ break; /* Got lock */
+ }
+
+ /* Contention */
+ schedule();
+ if ( signal_pending( current ) ) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ }
+ __set_current_state(TASK_RUNNING);
+ remove_wait_queue( &dev->lock.lock_queue, &entry );
+
+ sigemptyset( &dev->sigmask );
+ sigaddset( &dev->sigmask, SIGSTOP );
+ sigaddset( &dev->sigmask, SIGTSTP );
+ sigaddset( &dev->sigmask, SIGTTIN );
+ sigaddset( &dev->sigmask, SIGTTOU );
+ dev->sigdata.context = lock.context;
+ dev->sigdata.lock = dev->lock.hw_lock;
+ block_all_signals( drm_notifier,
+ &dev->sigdata, &dev->sigmask );
+
+ if (dev->driver->dma_ready && (lock.flags & _DRM_LOCK_READY))
+ dev->driver->dma_ready(dev);
+
+ if ( dev->driver->dma_quiescent && (lock.flags & _DRM_LOCK_QUIESCENT ))
+ return dev->driver->dma_quiescent(dev);
+
+ /* dev->driver->kernel_context_switch isn't used by any of the x86
+ * drivers but is used by the Sparc driver.
+ */
+
+ if (dev->driver->kernel_context_switch &&
+ dev->last_context != lock.context) {
+ dev->driver->kernel_context_switch(dev, dev->last_context,
+ lock.context);
+ }
+ DRM_DEBUG( "%d %s\n", lock.context, ret ? "interrupted" : "has lock" );
+
+ return ret;
+}
+
+/**
+ * Unlock ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_lock structure.
+ * \return zero on success or negative number on failure.
+ *
+ * Transfer and free the lock.
+ */
+int drm_unlock( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_lock_t lock;
+
+ if ( copy_from_user( &lock, (drm_lock_t __user *)arg, sizeof(lock) ) )
+ return -EFAULT;
+
+ if ( lock.context == DRM_KERNEL_CONTEXT ) {
+ DRM_ERROR( "Process %d using kernel context %d\n",
+ current->pid, lock.context );
+ return -EINVAL;
+ }
+
+ atomic_inc( &dev->counts[_DRM_STAT_UNLOCKS] );
+
+ /* kernel_context_switch isn't used by any of the x86 drm
+ * modules but is required by the Sparc driver.
+ */
+ if (dev->driver->kernel_context_switch_unlock)
+ dev->driver->kernel_context_switch_unlock(dev, &lock);
+ else {
+ drm_lock_transfer( dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT );
+
+ if ( drm_lock_free( dev, &dev->lock.hw_lock->lock,
+ DRM_KERNEL_CONTEXT ) ) {
+ DRM_ERROR( "\n" );
+ }
+ }
+
+ unblock_all_signals();
+ return 0;
+}
+
+/**
+ * Take the heavyweight lock.
+ *
+ * \param lock lock pointer.
+ * \param context locking context.
+ * \return one if the lock is held, or zero otherwise.
+ *
+ * Attempt to mark the lock as held by the given context, via the \p cmpxchg instruction.
+ */
+int drm_lock_take(__volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
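+	/* Atomically mark the lock held by this context, or flag contention, using cmpxchg */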
+ do {
+ old = *lock;
+ if (old & _DRM_LOCK_HELD) new = old | _DRM_LOCK_CONT;
+ else new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ if (_DRM_LOCKING_CONTEXT(old) == context) {
+ if (old & _DRM_LOCK_HELD) {
+ if (context != DRM_KERNEL_CONTEXT) {
+ DRM_ERROR("%d holds heavyweight lock\n",
+ context);
+ }
+ return 0;
+ }
+ }
+ if (new == (context | _DRM_LOCK_HELD)) {
+ /* Have lock */
+ return 1;
+ }
+ return 0;
+}
+
+/**
+ * This takes a lock forcibly and hands it to context. Should ONLY be used
+ * inside *_unlock to give the lock to the kernel before calling *_dma_schedule.
+ *
+ * \param dev DRM device.
+ * \param lock lock pointer.
+ * \param context locking context.
+ * \return always one.
+ *
+ * Resets the lock file pointer.
+ * Marks the lock as held by the given context, via the \p cmpxchg instruction.
+ */
+int drm_lock_transfer(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ dev->lock.filp = NULL;
+ do {
+ old = *lock;
+ new = context | _DRM_LOCK_HELD;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ return 1;
+}
+
+/**
+ * Free lock.
+ *
+ * \param dev DRM device.
+ * \param lock lock.
+ * \param context context.
+ *
+ * Resets the lock file pointer.
+ * Marks the lock as not held, via the \p cmpxchg instruction. Wakes any task
+ * waiting on the lock queue.
+ */
+int drm_lock_free(drm_device_t *dev,
+ __volatile__ unsigned int *lock, unsigned int context)
+{
+ unsigned int old, new, prev;
+
+ dev->lock.filp = NULL;
+ do {
+ old = *lock;
+ new = 0;
+ prev = cmpxchg(lock, old, new);
+ } while (prev != old);
+ if (_DRM_LOCK_IS_HELD(old) && _DRM_LOCKING_CONTEXT(old) != context) {
+ DRM_ERROR("%d freed heavyweight lock held by %d\n",
+ context,
+ _DRM_LOCKING_CONTEXT(old));
+ return 1;
+ }
+ wake_up_interruptible(&dev->lock.lock_queue);
+ return 0;
+}
+
+/**
+ * If we get here, it means that the process has called DRM_IOCTL_LOCK
+ * without calling DRM_IOCTL_UNLOCK.
+ *
+ * If the lock is not held, then let the signal proceed as usual. If the lock
+ * is held, then set the contended flag and keep the signal blocked.
+ *
+ * \param priv pointer to a drm_sigdata structure.
+ * \return one if the signal should be delivered normally, or zero if the
+ * signal should be blocked.
+ */
+int drm_notifier(void *priv)
+{
+ drm_sigdata_t *s = (drm_sigdata_t *)priv;
+ unsigned int old, new, prev;
+
+
+ /* Allow signal delivery if lock isn't held */
+ if (!s->lock || !_DRM_LOCK_IS_HELD(s->lock->lock)
+ || _DRM_LOCKING_CONTEXT(s->lock->lock) != s->context) return 1;
+
+ /* Otherwise, set flag to force call to
+ drmUnlock */
+ do {
+ old = s->lock->lock;
+ new = old | _DRM_LOCK_CONT;
+ prev = cmpxchg(&s->lock->lock, old, new);
+ } while (prev != old);
+ return 0;
+}
--- /dev/null
+/**
+ * \file drm_memory.h
+ * Memory management wrappers for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Thu Feb 4 14:00:34 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/config.h>
+#include <linux/highmem.h>
+#include "drmP.h"
+
+#ifdef DEBUG_MEMORY
+#include "drm_memory_debug.h"
+#else
+
+/** No-op. */
+void drm_mem_init(void)
+{
+}
+
+/**
+ * Called when "/proc/dri/%dev%/mem" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param len requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ *
+ * No-op.
+ */
+int drm_mem_info(char *buf, char **start, off_t offset,
+ int len, int *eof, void *data)
+{
+ return 0;
+}
+
+/** Wrapper around kmalloc() */
+void *drm_calloc(size_t nmemb, size_t size, int area)
+{
+ void *addr;
+
+ addr = kmalloc(size * nmemb, GFP_KERNEL);
+ if (addr != NULL)
+ memset((void *)addr, 0, size * nmemb);
+
+ return addr;
+}
+EXPORT_SYMBOL(drm_calloc);
+
+/** Wrapper around kmalloc() and kfree() */
+void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area)
+{
+ void *pt;
+
+ if (!(pt = kmalloc(size, GFP_KERNEL))) return NULL;
+ if (oldpt && oldsize) {
+ memcpy(pt, oldpt, oldsize);
+ kfree(oldpt);
+ }
+ return pt;
+}
+
+/**
+ * Allocate pages.
+ *
+ * \param order size order.
+ * \param area memory area. (Not used.)
+ * \return page address on success, or zero on failure.
+ *
+ * Allocate and reserve free pages.
+ */
+unsigned long drm_alloc_pages(int order, int area)
+{
+ unsigned long address;
+ unsigned long bytes = PAGE_SIZE << order;
+ unsigned long addr;
+ unsigned int sz;
+
+ address = __get_free_pages(GFP_KERNEL, order);
+ if (!address)
+ return 0;
+
+ /* Zero */
+ memset((void *)address, 0, bytes);
+
+ /* Reserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+ SetPageReserved(virt_to_page(addr));
+ }
+
+ return address;
+}
+
+/**
+ * Free pages.
+ *
+ * \param address address of the pages to free.
+ * \param order size order.
+ * \param area memory area. (Not used.)
+ *
+ * Unreserve and free pages allocated by alloc_pages().
+ */
+void drm_free_pages(unsigned long address, int order, int area)
+{
+ unsigned long bytes = PAGE_SIZE << order;
+ unsigned long addr;
+ unsigned int sz;
+
+ if (!address)
+ return;
+
+ /* Unreserve */
+ for (addr = address, sz = bytes;
+ sz > 0;
+ addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ }
+
+ free_pages(address, order);
+}
+
+
+#if __OS_HAS_AGP
+/** Wrapper around agp_allocate_memory() */
+DRM_AGP_MEM *drm_alloc_agp(int pages, u32 type)
+{
+ return drm_agp_allocate_memory(pages, type);
+}
+
+/** Wrapper around agp_free_memory() */
+int drm_free_agp(DRM_AGP_MEM *handle, int pages)
+{
+ return drm_agp_free_memory(handle) ? 0 : -EINVAL;
+}
+
+/** Wrapper around agp_bind_memory() */
+int drm_bind_agp(DRM_AGP_MEM *handle, unsigned int start)
+{
+ return drm_agp_bind_memory(handle, start);
+}
+
+/** Wrapper around agp_unbind_memory() */
+int drm_unbind_agp(DRM_AGP_MEM *handle)
+{
+ return drm_agp_unbind_memory(handle);
+}
+#endif /* agp */
+#endif /* debug_memory */
#include <linux/config.h>
#include <linux/highmem.h>
+#include <linux/vmalloc.h>
#include "drmP.h"
/**
* Cut down version of drm_memory_debug.h, which used to be called
- * drm_memory.h. If you want the debug functionality, change 0 to 1
- * below.
+ * drm_memory.h.
*/
-#define DEBUG_MEMORY 0
#if __OS_HAS_AGP
# endif
#endif
-#include <asm/tlbflush.h>
-
/*
* Find the drm_map that covers the range [offset, offset+size).
*/
drm_follow_page (void *vaddr)
{
pgd_t *pgd = pgd_offset_k((unsigned long) vaddr);
- pmd_t *pmd = pmd_offset(pgd, (unsigned long) vaddr);
+ pud_t *pud = pud_offset(pgd, (unsigned long) vaddr);
+ pmd_t *pmd = pmd_offset(pud, (unsigned long) vaddr);
pte_t *ptep = pte_offset_kernel(pmd, (unsigned long) vaddr);
return pte_pfn(*ptep) << PAGE_SHIFT;
}
}
-#if DEBUG_MEMORY
-#include "drm_memory_debug.h"
-#else
-
-/** No-op. */
-void DRM(mem_init)(void)
-{
-}
-
-/**
- * Called when "/proc/dri/%dev%/mem" is read.
- *
- * \param buf output buffer.
- * \param start start of output data.
- * \param offset requested start offset.
- * \param len requested number of bytes.
- * \param eof whether there is no more data to return.
- * \param data private data.
- * \return number of written bytes.
- *
- * No-op.
- */
-int DRM(mem_info)(char *buf, char **start, off_t offset,
- int len, int *eof, void *data)
-{
- return 0;
-}
-
-/** Wrapper around kmalloc() */
-void *DRM(alloc)(size_t size, int area)
-{
- return kmalloc(size, GFP_KERNEL);
-}
-
-/** Wrapper around kmalloc() */
-void *DRM(calloc)(size_t size, size_t nmemb, int area)
-{
- void *addr;
-
- addr = kmalloc(size * nmemb, GFP_KERNEL);
- if (addr != NULL)
- memset((void *)addr, 0, size * nmemb);
-
- return addr;
-}
-
-/** Wrapper around kmalloc() and kfree() */
-void *DRM(realloc)(void *oldpt, size_t oldsize, size_t size, int area)
-{
- void *pt;
-
- if (!(pt = kmalloc(size, GFP_KERNEL))) return NULL;
- if (oldpt && oldsize) {
- memcpy(pt, oldpt, oldsize);
- kfree(oldpt);
- }
- return pt;
-}
-
-/** Wrapper around kfree() */
-void DRM(free)(void *pt, size_t size, int area)
-{
- kfree(pt);
-}
-
-/**
- * Allocate pages.
- *
- * \param order size order.
- * \param area memory area. (Not used.)
- * \return page address on success, or zero on failure.
- *
- * Allocate and reserve free pages.
- */
-unsigned long DRM(alloc_pages)(int order, int area)
-{
- unsigned long address;
- unsigned long bytes = PAGE_SIZE << order;
- unsigned long addr;
- unsigned int sz;
-
- address = __get_free_pages(GFP_KERNEL, order);
- if (!address)
- return 0;
-
- /* Zero */
- memset((void *)address, 0, bytes);
-
- /* Reserve */
- for (addr = address, sz = bytes;
- sz > 0;
- addr += PAGE_SIZE, sz -= PAGE_SIZE) {
- SetPageReserved(virt_to_page(addr));
- }
-
- return address;
-}
-
-/**
- * Free pages.
- *
- * \param address address of the pages to free.
- * \param order size order.
- * \param area memory area. (Not used.)
- *
- * Unreserve and free pages allocated by alloc_pages().
- */
-void DRM(free_pages)(unsigned long address, int order, int area)
-{
- unsigned long bytes = PAGE_SIZE << order;
- unsigned long addr;
- unsigned int sz;
-
- if (!address)
- return;
-
- /* Unreserve */
- for (addr = address, sz = bytes;
- sz > 0;
- addr += PAGE_SIZE, sz -= PAGE_SIZE) {
- ClearPageReserved(virt_to_page(addr));
- }
-
- free_pages(address, order);
-}
-
-/** Wrapper around drm_ioremap() */
-void *DRM(ioremap)(unsigned long offset, unsigned long size, drm_device_t *dev)
-{
- return drm_ioremap(offset, size, dev);
-}
-
-/** Wrapper around drm_ioremap_nocache() */
-void *DRM(ioremap_nocache)(unsigned long offset, unsigned long size, drm_device_t *dev)
-{
- return drm_ioremap_nocache(offset, size, dev);
-}
-
-/** Wrapper around drm_iounmap() */
-void DRM(ioremapfree)(void *pt, unsigned long size, drm_device_t *dev)
-{
- drm_ioremapfree(pt, size, dev);
-}
-
-#if __OS_HAS_AGP
-/** Wrapper around agp_allocate_memory() */
-DRM_AGP_MEM *DRM(alloc_agp)(int pages, u32 type)
-{
- return DRM(agp_allocate_memory)(pages, type);
-}
-
-/** Wrapper around agp_free_memory() */
-int DRM(free_agp)(DRM_AGP_MEM *handle, int pages)
-{
- return DRM(agp_free_memory)(handle) ? 0 : -EINVAL;
-}
-
-/** Wrapper around agp_bind_memory() */
-int DRM(bind_agp)(DRM_AGP_MEM *handle, unsigned int start)
-{
- return DRM(agp_bind_memory)(handle, start);
-}
-
-/** Wrapper around agp_unbind_memory() */
-int DRM(unbind_agp)(DRM_AGP_MEM *handle)
-{
- return DRM(agp_unbind_memory)(handle);
-}
-#endif /* agp */
-#endif /* debug_memory */
--- /dev/null
+/* drm_pci.c -- PCI DMA memory management wrappers for DRM -*- linux-c -*- */
+/**
+ * \file drm_pci.c
+ * \brief Functions and ioctls to manage PCI memory
+ *
+ * \warning These interfaces aren't stable yet.
+ *
+ * \todo Implement the remaining ioctl's for the PCI pools.
+ * \todo The wrappers here are so thin that they would be better off inlined.
+ *
+ * \author Jose Fonseca <jrfonseca@tungstengraphics.com>
+ * \author Leif Delgass <ldelgass@retinalburn.net>
+ */
+
+/*
+ * Copyright 2003 José Fonseca.
+ * Copyright 2003 Leif Delgass.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/pci.h>
+#include "drmP.h"
+
+/**********************************************************************/
+/** \name PCI memory */
+/*@{*/
+
+/**
+ * \brief Allocate a PCI consistent memory block, for DMA.
+ */
+void *drm_pci_alloc(drm_device_t * dev, size_t size, size_t align,
+ dma_addr_t maxaddr, dma_addr_t * busaddr)
+{
+ void *address;
+#if DRM_DEBUG_MEMORY
+ int area = DRM_MEM_DMA;
+
+ spin_lock(&drm_mem_lock);
+ if ((drm_ram_used >> PAGE_SHIFT)
+ > (DRM_RAM_PERCENT * drm_ram_available) / 100) {
+ spin_unlock(&drm_mem_lock);
+ return 0;
+ }
+ spin_unlock(&drm_mem_lock);
+#endif
+
+ /* pci_alloc_consistent only guarantees alignment to the smallest
+ * PAGE_SIZE order which is greater than or equal to the requested size.
+ * Return NULL here for now to make sure nobody tries for larger alignment
+ */
+ if (align > size)
+ return NULL;
+
+ if (pci_set_dma_mask(dev->pdev, maxaddr) != 0) {
+ DRM_ERROR("Setting pci dma mask failed\n");
+ return NULL;
+ }
+
+ address = pci_alloc_consistent(dev->pdev, size, busaddr);
+
+#if DRM_DEBUG_MEMORY
+ if (address == NULL) {
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].fail_count;
+ spin_unlock(&drm_mem_lock);
+ return NULL;
+ }
+
+ spin_lock(&drm_mem_lock);
+ ++drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_allocated += size;
+ drm_ram_used += size;
+ spin_unlock(&drm_mem_lock);
+#else
+ if (address == NULL)
+ return NULL;
+#endif
+
+ memset(address, 0, size);
+
+ return address;
+}
+EXPORT_SYMBOL(drm_pci_alloc);
+
+/**
+ * \brief Free a PCI consistent memory block.
+ */
+void
+drm_pci_free(drm_device_t * dev, size_t size, void *vaddr, dma_addr_t busaddr)
+{
+#if DRM_DEBUG_MEMORY
+ int area = DRM_MEM_DMA;
+ int alloc_count;
+ int free_count;
+#endif
+
+ if (!vaddr) {
+#if DRM_DEBUG_MEMORY
+ DRM_MEM_ERROR(area, "Attempt to free address 0\n");
+#endif
+ } else {
+ pci_free_consistent(dev->pdev, size, vaddr, busaddr);
+ }
+
+#if DRM_DEBUG_MEMORY
+ spin_lock(&drm_mem_lock);
+ free_count = ++drm_mem_stats[area].free_count;
+ alloc_count = drm_mem_stats[area].succeed_count;
+ drm_mem_stats[area].bytes_freed += size;
+ drm_ram_used -= size;
+ spin_unlock(&drm_mem_lock);
+ if (free_count > alloc_count) {
+ DRM_MEM_ERROR(area,
+ "Excess frees: %d frees, %d allocs\n",
+ free_count, alloc_count);
+ }
+#endif
+
+}
+EXPORT_SYMBOL(drm_pci_free);
+
+/*@}*/
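
For orientation, here is a minimal, hypothetical sketch of how a driver would consume these wrappers; the descriptor-area name and size are invented for the example and are not part of this patch.

/* Hypothetical usage sketch, not part of this patch: allocate a small
 * DMA-coherent descriptor area at init time and release it on teardown
 * with the drm_pci_alloc()/drm_pci_free() wrappers above.
 */
#include "drmP.h"

#define EXAMPLE_DESC_BYTES (4 * PAGE_SIZE)	/* invented size */

static void *example_desc_cpu;		/* CPU (kernel virtual) address */
static dma_addr_t example_desc_bus;	/* bus address handed to the device */

static int example_dma_init(drm_device_t *dev)
{
	/* 32-bit DMA mask; the requested alignment must not exceed size */
	example_desc_cpu = drm_pci_alloc(dev, EXAMPLE_DESC_BYTES, PAGE_SIZE,
					 0xffffffffUL, &example_desc_bus);
	if (!example_desc_cpu)
		return -ENOMEM;
	return 0;
}

static void example_dma_cleanup(drm_device_t *dev)
{
	if (example_desc_cpu)
		drm_pci_free(dev, EXAMPLE_DESC_BYTES,
			     example_desc_cpu, example_desc_bus);
}
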
Please contact dri-devel@lists.sf.net to add new cards to this list
*/
#define radeon_PCI_IDS \
- {0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4137, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4965, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4966, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4967, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C58, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C65, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x514F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5158, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5159, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x515A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x516A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x516B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x516C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5836, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5960, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5961, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5962, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5963, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5968, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
- {0x1002, 0x5c64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|CHIP_IS_IGP}, \
+ {0x1002, 0x4137, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|CHIP_IS_IGP}, \
+ {0x1002, 0x4144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
+ {0x1002, 0x4145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
+ {0x1002, 0x4146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
+ {0x1002, 0x4147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
+ {0x1002, 0x4150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4151, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4153, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4154, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4156, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350}, \
+ {0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS250|CHIP_IS_IGP}, \
+ {0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x4243, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|CHIP_IS_IGP|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS200|CHIP_IS_IGP|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS250|CHIP_IS_IGP|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250}, \
+ {0x1002, 0x4965, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250}, \
+ {0x1002, 0x4966, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250}, \
+ {0x1002, 0x4967, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250}, \
+ {0x1002, 0x4C57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV200|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C58, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV200|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C65, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R250|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x4E50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV350|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x5144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R100|CHIP_SINGLE_CRTC}, \
+ {0x1002, 0x5145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R100|CHIP_SINGLE_CRTC}, \
+ {0x1002, 0x5146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R100|CHIP_SINGLE_CRTC}, \
+ {0x1002, 0x5147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R100|CHIP_SINGLE_CRTC}, \
+ {0x1002, 0x5148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x5149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x514F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV200}, \
+ {0x1002, 0x5158, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV200}, \
+ {0x1002, 0x5159, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100}, \
+ {0x1002, 0x515A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100}, \
+ {0x1002, 0x5168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x5169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x516A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x516B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x516C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R200}, \
+ {0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|CHIP_IS_IGP}, \
+ {0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|CHIP_IS_IGP|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x5836, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|CHIP_IS_IGP}, \
+ {0x1002, 0x5837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS300|CHIP_IS_IGP}, \
+ {0x1002, 0x5960, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5961, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5962, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5963, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5968, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
+ {0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|CHIP_IS_MOBILITY}, \
+ {0x1002, 0x5c64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280}, \
{0, 0, 0}
#define r128_PCI_IDS \
{0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0x8086, 0x2582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2982, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
{0, 0, 0}
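
The CHIP_* values in the radeon table above travel through the driver_data field of each PCI ID entry; drm_probe() (added in drm_stub.c later in this patch) passes that value to the driver's preinit hook. A hypothetical sketch of how a driver might consume it follows; the hook name and the specific flag tests are assumptions for illustration only.

/* Hypothetical sketch, not from this patch: driver_data flags from the
 * PCI ID table reach the driver through its preinit hook.
 */
static int example_preinit(drm_device_t *dev, unsigned long flags)
{
	/* flags holds the CHIP_* bits of the matching PCI ID entry */
	if (flags & CHIP_IS_MOBILITY)
		DRM_DEBUG("mobility variant detected\n");
	if (flags & CHIP_IS_IGP)
		DRM_DEBUG("integrated (IGP) variant detected\n");
	return 0;
}
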
--- /dev/null
+/**
+ * \file drm_proc.h
+ * /proc support for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ *
+ * \par Acknowledgements:
+ * Matthew J Sottek <matthew.j.sottek@intel.com> sent in a patch to fix
+ * the problem with the proc files not outputting all their information.
+ */
+
+/*
+ * Created: Mon Jan 11 09:48:47 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+static int drm_name_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+static int drm_vm_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+static int drm_clients_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+static int drm_queues_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+static int drm_bufs_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+#if DRM_DEBUG_CODE
+static int drm_vma_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data);
+#endif
+
+/**
+ * Proc file list.
+ */
+struct drm_proc_list {
+ const char *name; /**< file name */
+ int (*f)(char *, char **, off_t, int, int *, void *); /**< proc callback*/
+} drm_proc_list[] = {
+ { "name", drm_name_info },
+ { "mem", drm_mem_info },
+ { "vm", drm_vm_info },
+ { "clients", drm_clients_info },
+ { "queues", drm_queues_info },
+ { "bufs", drm_bufs_info },
+#if DRM_DEBUG_CODE
+ { "vma", drm_vma_info },
+#endif
+};
+#define DRM_PROC_ENTRIES (sizeof(drm_proc_list)/sizeof(drm_proc_list[0]))
+
+/**
+ * Initialize the DRI proc filesystem for a device.
+ *
+ * \param dev DRM device.
+ * \param minor device minor number.
+ * \param root DRI proc dir entry.
+ * \param dev_root resulting DRI device proc dir entry.
+ * \return root entry pointer on success, or NULL on failure.
+ *
+ * Create the DRI proc root entry "/proc/dri", the device proc root entry
+ * "/proc/dri/%minor%/", and each entry in proc_list as
+ * "/proc/dri/%minor%/%name%".
+ */
+int drm_proc_init(drm_device_t *dev, int minor,
+ struct proc_dir_entry *root,
+ struct proc_dir_entry **dev_root)
+{
+ struct proc_dir_entry *ent;
+ int i, j;
+ char name[64];
+
+ sprintf(name, "%d", minor);
+ *dev_root = create_proc_entry(name, S_IFDIR, root);
+ if (!*dev_root) {
+ DRM_ERROR("Cannot create /proc/dri/%s\n", name);
+ return -1;
+ }
+
+ for (i = 0; i < DRM_PROC_ENTRIES; i++) {
+ ent = create_proc_entry(drm_proc_list[i].name,
+ S_IFREG|S_IRUGO, *dev_root);
+ if (!ent) {
+ DRM_ERROR("Cannot create /proc/dri/%s/%s\n",
+ name, drm_proc_list[i].name);
+ for (j = 0; j < i; j++)
+ remove_proc_entry(drm_proc_list[i].name,
+ *dev_root);
+ remove_proc_entry(name, root);
+ return -1;
+ }
+ ent->read_proc = drm_proc_list[i].f;
+ ent->data = dev;
+ }
+
+ return 0;
+}
+
+
+/**
+ * Cleanup the proc filesystem resources.
+ *
+ * \param minor device minor number.
+ * \param root DRI proc dir entry.
+ * \param dev_root DRI device proc dir entry.
+ * \return always zero.
+ *
+ * Remove all proc entries created by proc_init().
+ */
+int drm_proc_cleanup(int minor, struct proc_dir_entry *root,
+ struct proc_dir_entry *dev_root)
+{
+ int i;
+ char name[64];
+
+ if (!root || !dev_root) return 0;
+
+ for (i = 0; i < DRM_PROC_ENTRIES; i++)
+ remove_proc_entry(drm_proc_list[i].name, dev_root);
+ sprintf(name, "%d", minor);
+ remove_proc_entry(name, root);
+
+ return 0;
+}
+
+/**
+ * Called when "/proc/dri/.../name" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param request requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ *
+ * Prints the device name together with the bus id if available.
+ */
+static int drm_name_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+
+ if (offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ if (dev->unique) {
+ DRM_PROC_PRINT("%s 0x%lx %s\n",
+ dev->driver->pci_driver.name, (long)old_encode_dev(dev->device), dev->unique);
+ } else {
+ DRM_PROC_PRINT("%s 0x%lx\n", dev->driver->pci_driver.name, (long)old_encode_dev(dev->device));
+ }
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+/**
+ * Called when "/proc/dri/.../vm" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param request requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ *
+ * Prints information about all mappings in drm_device::maplist.
+ */
+static int drm__vm_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+ drm_map_t *map;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+
+ /* Hardcoded from _DRM_FRAME_BUFFER,
+ _DRM_REGISTERS, _DRM_SHM, _DRM_AGP, and
+ _DRM_SCATTER_GATHER. */
+ const char *types[] = { "FB", "REG", "SHM", "AGP", "SG" };
+ const char *type;
+ int i;
+
+ if (offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ DRM_PROC_PRINT("slot offset size type flags "
+ "address mtrr\n\n");
+ i = 0;
+ if (dev->maplist != NULL) list_for_each(list, &dev->maplist->head) {
+ r_list = list_entry(list, drm_map_list_t, head);
+ map = r_list->map;
+ if(!map) continue;
+ if (map->type < 0 || map->type > 4) type = "??";
+ else type = types[map->type];
+ DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08lx ",
+ i,
+ map->offset,
+ map->size,
+ type,
+ map->flags,
+ (unsigned long)map->handle);
+ if (map->mtrr < 0) {
+ DRM_PROC_PRINT("none\n");
+ } else {
+ DRM_PROC_PRINT("%4d\n", map->mtrr);
+ }
+ i++;
+ }
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+/**
+ * Simply calls _vm_info() while holding the drm_device::struct_sem lock.
+ */
+static int drm_vm_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int ret;
+
+ down(&dev->struct_sem);
+ ret = drm__vm_info(buf, start, offset, request, eof, data);
+ up(&dev->struct_sem);
+ return ret;
+}
+
+/**
+ * Called when "/proc/dri/.../queues" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param request requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ */
+static int drm__queues_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+ int i;
+ drm_queue_t *q;
+
+ if (offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ DRM_PROC_PRINT(" ctx/flags use fin"
+ " blk/rw/rwf wait flushed queued"
+ " locks\n\n");
+ for (i = 0; i < dev->queue_count; i++) {
+ q = dev->queuelist[i];
+ atomic_inc(&q->use_count);
+ DRM_PROC_PRINT_RET(atomic_dec(&q->use_count),
+ "%5d/0x%03x %5d %5d"
+ " %5d/%c%c/%c%c%c %5Zd\n",
+ i,
+ q->flags,
+ atomic_read(&q->use_count),
+ atomic_read(&q->finalization),
+ atomic_read(&q->block_count),
+ atomic_read(&q->block_read) ? 'r' : '-',
+ atomic_read(&q->block_write) ? 'w' : '-',
+ waitqueue_active(&q->read_queue) ? 'r':'-',
+ waitqueue_active(&q->write_queue) ? 'w':'-',
+ waitqueue_active(&q->flush_queue) ? 'f':'-',
+ DRM_BUFCOUNT(&q->waitlist));
+ atomic_dec(&q->use_count);
+ }
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+/**
+ * Simply calls _queues_info() while holding the drm_device::struct_sem lock.
+ */
+static int drm_queues_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int ret;
+
+ down(&dev->struct_sem);
+ ret = drm__queues_info(buf, start, offset, request, eof, data);
+ up(&dev->struct_sem);
+ return ret;
+}
+
+/**
+ * Called when "/proc/dri/.../bufs" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param request requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ */
+static int drm__bufs_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+ drm_device_dma_t *dma = dev->dma;
+ int i;
+
+ if (!dma || offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ DRM_PROC_PRINT(" o size count free segs pages kB\n\n");
+ for (i = 0; i <= DRM_MAX_ORDER; i++) {
+ if (dma->bufs[i].buf_count)
+ DRM_PROC_PRINT("%2d %8d %5d %5d %5d %5d %5ld\n",
+ i,
+ dma->bufs[i].buf_size,
+ dma->bufs[i].buf_count,
+ atomic_read(&dma->bufs[i]
+ .freelist.count),
+ dma->bufs[i].seg_count,
+ dma->bufs[i].seg_count
+ *(1 << dma->bufs[i].page_order),
+ (dma->bufs[i].seg_count
+ * (1 << dma->bufs[i].page_order))
+ * PAGE_SIZE / 1024);
+ }
+ DRM_PROC_PRINT("\n");
+ for (i = 0; i < dma->buf_count; i++) {
+ if (i && !(i%32)) DRM_PROC_PRINT("\n");
+ DRM_PROC_PRINT(" %d", dma->buflist[i]->list);
+ }
+ DRM_PROC_PRINT("\n");
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+/**
+ * Simply calls _bufs_info() while holding the drm_device::struct_sem lock.
+ */
+static int drm_bufs_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int ret;
+
+ down(&dev->struct_sem);
+ ret = drm__bufs_info(buf, start, offset, request, eof, data);
+ up(&dev->struct_sem);
+ return ret;
+}
+
+/**
+ * Called when "/proc/dri/.../clients" is read.
+ *
+ * \param buf output buffer.
+ * \param start start of output data.
+ * \param offset requested start offset.
+ * \param request requested number of bytes.
+ * \param eof whether there is no more data to return.
+ * \param data private data.
+ * \return number of written bytes.
+ */
+static int drm__clients_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+ drm_file_t *priv;
+
+ if (offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ DRM_PROC_PRINT("a dev pid uid magic ioctls\n\n");
+ for (priv = dev->file_first; priv; priv = priv->next) {
+ DRM_PROC_PRINT("%c %3d %5d %5d %10u %10lu\n",
+ priv->authenticated ? 'y' : 'n',
+ priv->minor,
+ priv->pid,
+ priv->uid,
+ priv->magic,
+ priv->ioctl_count);
+ }
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+/**
+ * Simply calls _clients_info() while holding the drm_device::struct_sem lock.
+ */
+static int drm_clients_info(char *buf, char **start, off_t offset,
+ int request, int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int ret;
+
+ down(&dev->struct_sem);
+ ret = drm__clients_info(buf, start, offset, request, eof, data);
+ up(&dev->struct_sem);
+ return ret;
+}
+
+#if DRM_DEBUG_CODE
+
+static int drm__vma_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int len = 0;
+ drm_vma_entry_t *pt;
+ struct vm_area_struct *vma;
+#if defined(__i386__)
+ unsigned int pgprot;
+#endif
+
+ if (offset > DRM_PROC_LIMIT) {
+ *eof = 1;
+ return 0;
+ }
+
+ *start = &buf[offset];
+ *eof = 0;
+
+ DRM_PROC_PRINT("vma use count: %d, high_memory = %p, 0x%08lx\n",
+ atomic_read(&dev->vma_count),
+ high_memory, virt_to_phys(high_memory));
+ for (pt = dev->vmalist; pt; pt = pt->next) {
+ if (!(vma = pt->vma)) continue;
+ DRM_PROC_PRINT("\n%5d 0x%08lx-0x%08lx %c%c%c%c%c%c 0x%08lx",
+ pt->pid,
+ vma->vm_start,
+ vma->vm_end,
+ vma->vm_flags & VM_READ ? 'r' : '-',
+ vma->vm_flags & VM_WRITE ? 'w' : '-',
+ vma->vm_flags & VM_EXEC ? 'x' : '-',
+ vma->vm_flags & VM_MAYSHARE ? 's' : 'p',
+ vma->vm_flags & VM_LOCKED ? 'l' : '-',
+ vma->vm_flags & VM_IO ? 'i' : '-',
+ VM_OFFSET(vma));
+
+#if defined(__i386__)
+ pgprot = pgprot_val(vma->vm_page_prot);
+ DRM_PROC_PRINT(" %c%c%c%c%c%c%c%c%c",
+ pgprot & _PAGE_PRESENT ? 'p' : '-',
+ pgprot & _PAGE_RW ? 'w' : 'r',
+ pgprot & _PAGE_USER ? 'u' : 's',
+ pgprot & _PAGE_PWT ? 't' : 'b',
+ pgprot & _PAGE_PCD ? 'u' : 'c',
+ pgprot & _PAGE_ACCESSED ? 'a' : '-',
+ pgprot & _PAGE_DIRTY ? 'd' : '-',
+ pgprot & _PAGE_PSE ? 'm' : 'k',
+ pgprot & _PAGE_GLOBAL ? 'g' : 'l' );
+#endif
+ DRM_PROC_PRINT("\n");
+ }
+
+ if (len > request + offset) return request;
+ *eof = 1;
+ return len - offset;
+}
+
+static int drm_vma_info(char *buf, char **start, off_t offset, int request,
+ int *eof, void *data)
+{
+ drm_device_t *dev = (drm_device_t *)data;
+ int ret;
+
+ down(&dev->struct_sem);
+ ret = drm__vma_info(buf, start, offset, request, eof, data);
+ up(&dev->struct_sem);
+ return ret;
+}
+#endif
+
+
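
For orientation, a hypothetical userspace snippet that reads one of the files drm_proc_init() registers above; the path assumes the first DRM minor.

/* Hypothetical userspace sketch, not part of this patch: dump the
 * "clients" proc file created for minor 0.
 */
#include <stdio.h>

int example_dump_drm_clients(void)
{
	char line[256];
	FILE *f = fopen("/proc/dri/0/clients", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}
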
--- /dev/null
+/**
+ * \file drm_scatter.h
+ * IOCTLs to manage scatter/gather memory
+ *
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Mon Dec 18 23:20:54 2000 by gareth@valinux.com
+ *
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/config.h>
+#include <linux/vmalloc.h>
+#include "drmP.h"
+
+#define DEBUG_SCATTER 0
+
+void drm_sg_cleanup( drm_sg_mem_t *entry )
+{
+ struct page *page;
+ int i;
+
+ for ( i = 0 ; i < entry->pages ; i++ ) {
+ page = entry->pagelist[i];
+ if ( page )
+ ClearPageReserved( page );
+ }
+
+ vfree( entry->virtual );
+
+ drm_free( entry->busaddr,
+ entry->pages * sizeof(*entry->busaddr),
+ DRM_MEM_PAGES );
+ drm_free( entry->pagelist,
+ entry->pages * sizeof(*entry->pagelist),
+ DRM_MEM_PAGES );
+ drm_free( entry,
+ sizeof(*entry),
+ DRM_MEM_SGLISTS );
+}
+
+int drm_sg_alloc( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_scatter_gather_t __user *argp = (void __user *)arg;
+ drm_scatter_gather_t request;
+ drm_sg_mem_t *entry;
+ unsigned long pages, i, j;
+
+ DRM_DEBUG( "%s\n", __FUNCTION__ );
+
+ if (!drm_core_check_feature(dev, DRIVER_SG))
+ return -EINVAL;
+
+ if ( dev->sg )
+ return -EINVAL;
+
+ if ( copy_from_user( &request, argp, sizeof(request) ) )
+ return -EFAULT;
+
+ entry = drm_alloc( sizeof(*entry), DRM_MEM_SGLISTS );
+ if ( !entry )
+ return -ENOMEM;
+
+ memset( entry, 0, sizeof(*entry) );
+
+ pages = (request.size + PAGE_SIZE - 1) / PAGE_SIZE;
+ DRM_DEBUG( "sg size=%ld pages=%ld\n", request.size, pages );
+
+ entry->pages = pages;
+ entry->pagelist = drm_alloc( pages * sizeof(*entry->pagelist),
+ DRM_MEM_PAGES );
+ if ( !entry->pagelist ) {
+ drm_free( entry, sizeof(*entry), DRM_MEM_SGLISTS );
+ return -ENOMEM;
+ }
+
+ memset(entry->pagelist, 0, pages * sizeof(*entry->pagelist));
+
+ entry->busaddr = drm_alloc( pages * sizeof(*entry->busaddr),
+ DRM_MEM_PAGES );
+ if ( !entry->busaddr ) {
+ drm_free( entry->pagelist,
+ entry->pages * sizeof(*entry->pagelist),
+ DRM_MEM_PAGES );
+ drm_free( entry,
+ sizeof(*entry),
+ DRM_MEM_SGLISTS );
+ return -ENOMEM;
+ }
+ memset( (void *)entry->busaddr, 0, pages * sizeof(*entry->busaddr) );
+
+ entry->virtual = vmalloc_32( pages << PAGE_SHIFT );
+ if ( !entry->virtual ) {
+ drm_free( entry->busaddr,
+ entry->pages * sizeof(*entry->busaddr),
+ DRM_MEM_PAGES );
+ drm_free( entry->pagelist,
+ entry->pages * sizeof(*entry->pagelist),
+ DRM_MEM_PAGES );
+ drm_free( entry,
+ sizeof(*entry),
+ DRM_MEM_SGLISTS );
+ return -ENOMEM;
+ }
+
+ /* This also forces the mapping of COW pages, so our page list
+ * will be valid. Please don't remove it...
+ */
+ memset( entry->virtual, 0, pages << PAGE_SHIFT );
+
+ entry->handle = (unsigned long)entry->virtual;
+
+ DRM_DEBUG( "sg alloc handle = %08lx\n", entry->handle );
+ DRM_DEBUG( "sg alloc virtual = %p\n", entry->virtual );
+
+ for ( i = entry->handle, j = 0 ; j < pages ; i += PAGE_SIZE, j++ ) {
+ entry->pagelist[j] = vmalloc_to_page((void *)i);
+ if (!entry->pagelist[j])
+ goto failed;
+ SetPageReserved(entry->pagelist[j]);
+ }
+
+ request.handle = entry->handle;
+
+ if ( copy_to_user( argp, &request, sizeof(request) ) ) {
+ drm_sg_cleanup( entry );
+ return -EFAULT;
+ }
+
+ dev->sg = entry;
+
+#if DEBUG_SCATTER
+ /* Verify that each page points to its virtual address, and vice
+ * versa.
+ */
+ {
+ int error = 0;
+
+ for ( i = 0 ; i < pages ; i++ ) {
+ unsigned long *tmp;
+
+ tmp = page_address( entry->pagelist[i] );
+ for ( j = 0 ;
+ j < PAGE_SIZE / sizeof(unsigned long) ;
+ j++, tmp++ ) {
+ *tmp = 0xcafebabe;
+ }
+ tmp = (unsigned long *)((u8 *)entry->virtual +
+ (PAGE_SIZE * i));
+ for( j = 0 ;
+ j < PAGE_SIZE / sizeof(unsigned long) ;
+ j++, tmp++ ) {
+ if ( *tmp != 0xcafebabe && error == 0 ) {
+ error = 1;
+ DRM_ERROR( "Scatter allocation error, "
+ "pagelist does not match "
+ "virtual mapping\n" );
+ }
+ }
+ tmp = page_address( entry->pagelist[i] );
+ for(j = 0 ;
+ j < PAGE_SIZE / sizeof(unsigned long) ;
+ j++, tmp++) {
+ *tmp = 0;
+ }
+ }
+ if (error == 0)
+ DRM_ERROR( "Scatter allocation matches pagelist\n" );
+ }
+#endif
+
+ return 0;
+
+ failed:
+ drm_sg_cleanup( entry );
+ return -ENOMEM;
+}
+
+int drm_sg_free( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_scatter_gather_t request;
+ drm_sg_mem_t *entry;
+
+ if (!drm_core_check_feature(dev, DRIVER_SG))
+ return -EINVAL;
+
+ if ( copy_from_user( &request,
+ (drm_scatter_gather_t __user *)arg,
+ sizeof(request) ) )
+ return -EFAULT;
+
+ entry = dev->sg;
+ dev->sg = NULL;
+
+ if ( !entry || entry->handle != request.handle )
+ return -EINVAL;
+
+ DRM_DEBUG( "sg free virtual = %p\n", entry->virtual );
+
+ drm_sg_cleanup( entry );
+
+ return 0;
+}
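
A hypothetical userspace sketch of exercising the scatter/gather ioctls served by drm_sg_alloc() and drm_sg_free() above; the header path, the DRM_IOCTL_SG_ALLOC/DRM_IOCTL_SG_FREE request names and the 8 MiB size are assumptions taken from the DRM interface headers, not from this patch.

/* Hypothetical userspace sketch, not part of this patch. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>		/* assumed install path of the DRM interface header */

int example_sg_roundtrip(const char *node)
{
	drm_scatter_gather_t sg = { .size = 8 * 1024 * 1024 };
	int ret, fd = open(node, O_RDWR);

	if (fd < 0)
		return -1;
	ret = ioctl(fd, DRM_IOCTL_SG_ALLOC, &sg);
	if (ret == 0) {
		/* sg.handle now identifies the allocation */
		ioctl(fd, DRM_IOCTL_SG_FREE, &sg);
	}
	close(fd);
	return ret;
}

Usage would be something like example_sg_roundtrip("/dev/dri/card0") against a node whose driver advertises DRIVER_SG.
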
--- /dev/null
+/**
+ * \file drm_stub.h
+ * Stub support
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ */
+
+/*
+ * Created: Fri Jan 19 10:48:35 2001 by faith@acm.org
+ *
+ * Copyright 2001 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include "drmP.h"
+#include "drm_core.h"
+
+unsigned int drm_cards_limit = 16; /* Enough for one machine */
+unsigned int drm_debug = 0; /* 1 to enable debug output */
+EXPORT_SYMBOL(drm_debug);
+
+MODULE_AUTHOR( DRIVER_AUTHOR );
+MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_LICENSE("GPL and additional rights");
+MODULE_PARM_DESC(drm_cards_limit, "Maximum number of graphics cards");
+MODULE_PARM_DESC(drm_debug, "Enable debug output");
+
+module_param_named(cards_limit, drm_cards_limit, int, 0444);
+module_param_named(debug, drm_debug, int, 0666);
+
+drm_minor_t *drm_minors;
+struct drm_sysfs_class *drm_class;
+struct proc_dir_entry *drm_proc_root;
+
+static int drm_fill_in_dev(drm_device_t *dev, struct pci_dev *pdev, const struct pci_device_id *ent, struct drm_driver *driver)
+{
+ int retcode;
+
+ spin_lock_init(&dev->count_lock);
+ init_timer( &dev->timer );
+ sema_init( &dev->struct_sem, 1 );
+ sema_init( &dev->ctxlist_sem, 1 );
+
+ dev->pdev = pdev;
+
+#ifdef __alpha__
+ dev->hose = pdev->sysdata;
+ dev->pci_domain = dev->hose->bus->number;
+#else
+ dev->pci_domain = 0;
+#endif
+ dev->pci_bus = pdev->bus->number;
+ dev->pci_slot = PCI_SLOT(pdev->devfn);
+ dev->pci_func = PCI_FUNC(pdev->devfn);
+ dev->irq = pdev->irq;
+
+ /* the DRM has 6 basic counters */
+ dev->counters = 6;
+ dev->types[0] = _DRM_STAT_LOCK;
+ dev->types[1] = _DRM_STAT_OPENS;
+ dev->types[2] = _DRM_STAT_CLOSES;
+ dev->types[3] = _DRM_STAT_IOCTLS;
+ dev->types[4] = _DRM_STAT_LOCKS;
+ dev->types[5] = _DRM_STAT_UNLOCKS;
+
+ dev->driver = driver;
+
+ if (dev->driver->preinit)
+ if ((retcode = dev->driver->preinit(dev, ent->driver_data)))
+ goto error_out_unreg;
+
+ if (drm_core_has_AGP(dev)) {
+ dev->agp = drm_agp_init();
+ if (drm_core_check_feature(dev, DRIVER_REQUIRE_AGP) && (dev->agp == NULL)) {
+ DRM_ERROR( "Cannot initialize the agpgart module.\n" );
+ retcode = -EINVAL;
+ goto error_out_unreg;
+ }
+ if (drm_core_has_MTRR(dev)) {
+ if (dev->agp)
+ dev->agp->agp_mtrr = mtrr_add( dev->agp->agp_info.aper_base,
+ dev->agp->agp_info.aper_size*1024*1024,
+ MTRR_TYPE_WRCOMB,
+ 1 );
+ }
+ }
+
+ retcode = drm_ctxbitmap_init( dev );
+ if( retcode ) {
+ DRM_ERROR( "Cannot allocate memory for context bitmap.\n" );
+ goto error_out_unreg;
+ }
+
+ dev->device = MKDEV(DRM_MAJOR, dev->minor );
+
+ /* postinit is a required function to display the signon banner */
+ if ((retcode = dev->driver->postinit(dev, ent->driver_data)))
+ goto error_out_unreg;
+
+ return 0;
+
+error_out_unreg:
+ drm_takedown(dev);
+ return retcode;
+}
+
+/**
+ * File \c open operation.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ *
+ * Puts the dev->fops corresponding to the device minor number into
+ * \p filp, calls the \c open method, and restores the file operations.
+ */
+int drm_stub_open(struct inode *inode, struct file *filp)
+{
+ drm_device_t *dev = NULL;
+ int minor = iminor(inode);
+ int err = -ENODEV;
+ struct file_operations *old_fops;
+
+ DRM_DEBUG("\n");
+
+ if (!((minor >= 0) && (minor < drm_cards_limit)))
+ return -ENODEV;
+
+ dev = drm_minors[minor].dev;
+ if (!dev)
+ return -ENODEV;
+
+ old_fops = filp->f_op;
+ filp->f_op = fops_get(&dev->driver->fops);
+ if (filp->f_op->open && (err = filp->f_op->open(inode, filp))) {
+ fops_put(filp->f_op);
+ filp->f_op = fops_get(old_fops);
+ }
+ fops_put(old_fops);
+
+ return err;
+}
+
+/**
+ * Get a device minor number.
+ *
+ * \param pdev PCI device structure
+ * \param ent entry from the PCI ID table with device type flags
+ * \return zero on success or a negative number on failure.
+ *
+ * Attempts to get the inter-module "drm" information. If we are first,
+ * registers the character device and inter-module information.
+ * Tries to register; if registration fails, backs out the previous work.
+ */
+int drm_probe(struct pci_dev *pdev, const struct pci_device_id *ent, struct drm_driver *driver)
+{
+ struct class_device *dev_class;
+ drm_device_t *dev;
+ int ret;
+ int minor;
+ drm_minor_t *minors = &drm_minors[0];
+
+ DRM_DEBUG("\n");
+
+ for (minor = 0; minor < drm_cards_limit; minor++, minors++) {
+ if (minors->type == DRM_MINOR_FREE) {
+
+ DRM_DEBUG("assigning minor %d\n", minor);
+ dev = drm_calloc(1, sizeof(*dev), DRM_MEM_STUB);
+ if (!dev)
+ return -ENOMEM;
+
+ *minors = (drm_minor_t){.dev = dev, .type=DRM_MINOR_PRIMARY};
+ dev->minor = minor;
+
+ pci_enable_device(pdev);
+
+ if ((ret=drm_fill_in_dev(dev, pdev, ent, driver))) {
+ printk(KERN_ERR "DRM: Fill_in_dev failed.\n");
+ goto err_g1;
+ }
+ if ((ret = drm_proc_init(dev, minor, drm_proc_root, &minors->dev_root))) {
+ printk (KERN_ERR "DRM: Failed to initialize /proc/dri.\n");
+ goto err_g1;
+ }
+
+
+ dev_class = drm_sysfs_device_add(drm_class,
+ MKDEV(DRM_MAJOR,
+ minor),
+ &pdev->dev,
+ "card%d", minor);
+ if (IS_ERR(dev_class)) {
+ printk(KERN_ERR "DRM: Error sysfs_device_add.\n");
+ ret = PTR_ERR(dev_class);
+ goto err_g2;
+ }
+
+ DRM_DEBUG("new minor assigned %d\n", minor);
+ return 0;
+ }
+ }
+ DRM_ERROR("out of minors\n");
+ return -ENOMEM;
+err_g2:
+ drm_proc_cleanup(minor, drm_proc_root, minors->dev_root);
+err_g1:
+ *minors = (drm_minor_t){.dev = NULL, .type = DRM_MINOR_FREE};
+ drm_free(dev, sizeof(*dev), DRM_MEM_STUB);
+ return ret;
+}
+EXPORT_SYMBOL(drm_probe);
+
+
+/**
+ * Put a device minor number.
+ *
+ * \param minor minor number.
+ * \return always zero.
+ *
+ * Cleans up the proc resources. If the minor is zero, releases the foreign
+ * "drm" data; otherwise unregisters the "drm" data, frees the stub list and
+ * unregisters the character device.
+ */
+int drm_put_minor(drm_device_t *dev)
+{
+ drm_minor_t *minors = &drm_minors[dev->minor];
+
+ DRM_DEBUG("release minor %d\n", dev->minor);
+
+ drm_proc_cleanup(dev->minor, drm_proc_root, minors->dev_root);
+ drm_sysfs_device_remove(MKDEV(DRM_MAJOR, dev->minor));
+
+ *minors = (drm_minor_t){.dev = NULL, .type = DRM_MINOR_FREE};
+ drm_free(dev, sizeof(*dev), DRM_MEM_STUB);
+
+ return 0;
+}
+
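
To show where drm_probe() sits in a driver, a hypothetical fragment follows; the names are invented, but the call matches the drm_probe() signature above.

/* Hypothetical sketch, not from this patch: a DRM driver's PCI probe
 * callback forwards to drm_probe() with its own drm_driver description.
 */
#include "drmP.h"

static struct drm_driver example_driver;	/* .fops, .preinit, ... filled in elsewhere */

static int example_pci_probe(struct pci_dev *pdev,
			     const struct pci_device_id *ent)
{
	return drm_probe(pdev, ent, &example_driver);
}
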
--- /dev/null
+/*
+ * drm_sysfs.c - Modifications to drm_sysfs_class.c to support
+ *               extra sysfs attributes from DRM. The normal drm_sysfs_class
+ *               does not allow adding attributes.
+ *
+ * Copyright (c) 2004 Jon Smirl <jonsmirl@gmail.com>
+ * Copyright (c) 2003-2004 Greg Kroah-Hartman <greg@kroah.com>
+ * Copyright (c) 2003-2004 IBM Corp.
+ *
+ * This file is released under the GPLv2
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/device.h>
+#include <linux/kdev_t.h>
+#include <linux/err.h>
+
+#include "drm_core.h"
+
+struct drm_sysfs_class {
+ struct class_device_attribute attr;
+ struct class class;
+};
+#define to_drm_sysfs_class(d) container_of(d, struct drm_sysfs_class, class)
+
+struct simple_dev {
+ struct list_head node;
+ dev_t dev;
+ struct class_device class_dev;
+};
+#define to_simple_dev(d) container_of(d, struct simple_dev, class_dev)
+
+static LIST_HEAD(simple_dev_list);
+static DEFINE_SPINLOCK(simple_dev_list_lock);
+
+static void release_simple_dev(struct class_device *class_dev)
+{
+ struct simple_dev *s_dev = to_simple_dev(class_dev);
+ kfree(s_dev);
+}
+
+static ssize_t show_dev(struct class_device *class_dev, char *buf)
+{
+ struct simple_dev *s_dev = to_simple_dev(class_dev);
+ return print_dev_t(buf, s_dev->dev);
+}
+
+static void drm_sysfs_class_release(struct class *class)
+{
+ struct drm_sysfs_class *cs = to_drm_sysfs_class(class);
+ kfree(cs);
+}
+
+/* Display the version of drm_core. This doesn't work right in the current design */
+static ssize_t version_show(struct class *dev, char *buf)
+{
+ return sprintf(buf, "%s %d.%d.%d %s\n", DRIVER_NAME, DRIVER_MAJOR,
+ DRIVER_MINOR, DRIVER_PATCHLEVEL, DRIVER_DATE);
+}
+
+static CLASS_ATTR(version, S_IRUGO, version_show, NULL);
+
+/**
+ * drm_sysfs_create - create a struct drm_sysfs_class structure
+ * @owner: pointer to the module that is to "own" this struct drm_sysfs_class
+ * @name: pointer to a string for the name of this class.
+ *
+ * This is used to create a struct drm_sysfs_class pointer that can then be used
+ * in calls to drm_sysfs_device_add().
+ *
+ * Note, the pointer created here is to be destroyed when finished by making a
+ * call to drm_sysfs_destroy().
+ */
+struct drm_sysfs_class *drm_sysfs_create(struct module *owner, char *name)
+{
+ struct drm_sysfs_class *cs;
+ int retval;
+
+ cs = kmalloc(sizeof(*cs), GFP_KERNEL);
+ if (!cs) {
+ retval = -ENOMEM;
+ goto error;
+ }
+ memset(cs, 0x00, sizeof(*cs));
+
+ cs->class.name = name;
+ cs->class.class_release = drm_sysfs_class_release;
+ cs->class.release = release_simple_dev;
+
+ cs->attr.attr.name = "dev";
+ cs->attr.attr.mode = S_IRUGO;
+ cs->attr.attr.owner = owner;
+ cs->attr.show = show_dev;
+ cs->attr.store = NULL;
+
+ retval = class_register(&cs->class);
+ if (retval)
+ goto error;
+ class_create_file(&cs->class, &class_attr_version);
+
+ return cs;
+
+ error:
+ kfree(cs);
+ return ERR_PTR(retval);
+}
+
+/**
+ * drm_sysfs_destroy - destroys a struct drm_sysfs_class structure
+ * @cs: pointer to the struct drm_sysfs_class that is to be destroyed
+ *
+ * Note, the pointer to be destroyed must have been created with a call to
+ * drm_sysfs_create().
+ */
+void drm_sysfs_destroy(struct drm_sysfs_class *cs)
+{
+ if ((cs == NULL) || (IS_ERR(cs)))
+ return;
+
+ class_unregister(&cs->class);
+}
+
+/**
+ * drm_sysfs_device_add - adds a class device to sysfs for a character driver
+ * @cs: pointer to the struct drm_sysfs_class that this device should be registered to.
+ * @dev: the dev_t for the device to be added.
+ * @device: a pointer to a struct device that is associated with this class device.
+ * @fmt: string for the class device's name
+ *
+ * A struct class_device will be created in sysfs, registered to the specified
+ * class. A "dev" file will be created, showing the dev_t for the device. The
+ * pointer to the struct class_device will be returned from the call. Any further
+ * sysfs files that might be required can be created using this pointer.
+ * Note: the struct drm_sysfs_class passed to this function must have previously been
+ * created with a call to drm_sysfs_create().
+ */
+struct class_device *drm_sysfs_device_add(struct drm_sysfs_class *cs, dev_t dev,
+ struct device *device,
+ const char *fmt, ...)
+{
+ va_list args;
+ struct simple_dev *s_dev = NULL;
+ int retval;
+
+ if ((cs == NULL) || (IS_ERR(cs))) {
+ retval = -ENODEV;
+ goto error;
+ }
+
+ s_dev = kmalloc(sizeof(*s_dev), GFP_KERNEL);
+ if (!s_dev) {
+ retval = -ENOMEM;
+ goto error;
+ }
+ memset(s_dev, 0x00, sizeof(*s_dev));
+
+ s_dev->dev = dev;
+ s_dev->class_dev.dev = device;
+ s_dev->class_dev.class = &cs->class;
+
+ va_start(args, fmt);
+ vsnprintf(s_dev->class_dev.class_id, BUS_ID_SIZE, fmt, args);
+ va_end(args);
+ retval = class_device_register(&s_dev->class_dev);
+ if (retval)
+ goto error;
+
+ class_device_create_file(&s_dev->class_dev, &cs->attr);
+
+ spin_lock(&simple_dev_list_lock);
+ list_add(&s_dev->node, &simple_dev_list);
+ spin_unlock(&simple_dev_list_lock);
+
+ return &s_dev->class_dev;
+
+ error:
+ kfree(s_dev);
+ return ERR_PTR(retval);
+}
+
+/**
+ * drm_sysfs_device_remove - removes a class device that was created with drm_sysfs_device_add()
+ * @dev: the dev_t of the device that was previously registered.
+ *
+ * This call unregisters and cleans up a class device that was created with a
+ * call to drm_sysfs_device_add()
+ */
+void drm_sysfs_device_remove(dev_t dev)
+{
+ struct simple_dev *s_dev = NULL;
+ int found = 0;
+
+ spin_lock(&simple_dev_list_lock);
+ list_for_each_entry(s_dev, &simple_dev_list, node) {
+ if (s_dev->dev == dev) {
+ found = 1;
+ break;
+ }
+ }
+ if (found) {
+ list_del(&s_dev->node);
+ spin_unlock(&simple_dev_list_lock);
+ class_device_unregister(&s_dev->class_dev);
+ } else {
+ spin_unlock(&simple_dev_list_lock);
+ }
+}
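
Putting the helpers above together, a hypothetical lifecycle sketch; minor 0 and all names here are assumed for illustration.

/* Hypothetical sketch, not from this patch: create the class, add one
 * class device for a card, and tear both down again.
 */
#include "drmP.h"
#include "drm_core.h"

static struct drm_sysfs_class *example_class;

static int example_sysfs_setup(struct pci_dev *pdev)
{
	struct class_device *cd;

	example_class = drm_sysfs_create(THIS_MODULE, "drm");
	if (IS_ERR(example_class))
		return PTR_ERR(example_class);

	cd = drm_sysfs_device_add(example_class, MKDEV(DRM_MAJOR, 0),
				  &pdev->dev, "card%d", 0);
	if (IS_ERR(cd)) {
		drm_sysfs_destroy(example_class);
		return PTR_ERR(cd);
	}
	return 0;
}

static void example_sysfs_teardown(void)
{
	drm_sysfs_device_remove(MKDEV(DRM_MAJOR, 0));
	drm_sysfs_destroy(example_class);
}
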
--- /dev/null
+/**
+ * \file drm_vm.h
+ * Memory mapping for DRM
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Mon Jan 4 08:58:31 1999 by faith@valinux.com
+ *
+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#if defined(__ia64__)
+#include <linux/efi.h>
+#endif
+
+
+/**
+ * \c nopage method for AGP virtual memory.
+ *
+ * \param vma virtual memory area.
+ * \param address access address.
+ * \return pointer to the page structure.
+ *
+ * Find the right map and if it's AGP memory find the real physical page to
+ * map, get the page, increment the use count and return it.
+ */
+#if __OS_HAS_AGP
+static __inline__ struct page *drm_do_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+
+ /*
+ * Find the right map
+ */
+ if (!drm_core_has_AGP(dev))
+ goto vm_nopage_error;
+
+ if(!dev->agp || !dev->agp->cant_use_aperture) goto vm_nopage_error;
+
+ list_for_each(list, &dev->maplist->head) {
+ r_list = list_entry(list, drm_map_list_t, head);
+ map = r_list->map;
+ if (!map) continue;
+ if (map->offset == VM_OFFSET(vma)) break;
+ }
+
+ if (map && map->type == _DRM_AGP) {
+ unsigned long offset = address - vma->vm_start;
+ unsigned long baddr = VM_OFFSET(vma) + offset;
+ struct drm_agp_mem *agpmem;
+ struct page *page;
+
+#ifdef __alpha__
+ /*
+ * Adjust to a bus-relative address
+ */
+ baddr -= dev->hose->mem_space->start;
+#endif
+
+ /*
+ * It's AGP memory - find the real physical page to map
+ */
+ for(agpmem = dev->agp->memory; agpmem; agpmem = agpmem->next) {
+ if (agpmem->bound <= baddr &&
+ agpmem->bound + agpmem->pages * PAGE_SIZE > baddr)
+ break;
+ }
+
+ if (!agpmem) goto vm_nopage_error;
+
+ /*
+ * Get the page, inc the use count, and return it
+ */
+ offset = (baddr - agpmem->bound) >> PAGE_SHIFT;
+ page = virt_to_page(__va(agpmem->memory->memory[offset]));
+ get_page(page);
+
+ DRM_DEBUG("baddr = 0x%lx page = 0x%p, offset = 0x%lx, count=%d\n",
+ baddr, __va(agpmem->memory->memory[offset]), offset,
+ page_count(page));
+
+ return page;
+ }
+vm_nopage_error:
+ return NOPAGE_SIGBUS; /* Disallow mremap */
+}
+#else /* __OS_HAS_AGP */
+static __inline__ struct page *drm_do_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ return NOPAGE_SIGBUS;
+}
+#endif /* __OS_HAS_AGP */
+
+/**
+ * \c nopage method for shared virtual memory.
+ *
+ * \param vma virtual memory area.
+ * \param address access address.
+ * \return pointer to the page structure.
+ *
+ * Get the mapping, find the real physical page to map, get the page, and
+ * return it.
+ */
+static __inline__ struct page *drm_do_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ drm_map_t *map = (drm_map_t *)vma->vm_private_data;
+ unsigned long offset;
+ unsigned long i;
+ struct page *page;
+
+ if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */
+ if (!map) return NOPAGE_OOM; /* Nothing allocated */
+
+ offset = address - vma->vm_start;
+ i = (unsigned long)map->handle + offset;
+ page = vmalloc_to_page((void *)i);
+ if (!page)
+ return NOPAGE_OOM;
+ get_page(page);
+
+ DRM_DEBUG("shm_nopage 0x%lx\n", address);
+ return page;
+}
+
+
+/**
+ * \c close method for shared virtual memory.
+ *
+ * \param vma virtual memory area.
+ *
+ * Deletes map information if we are the last
+ * person to close a mapping and it's not in the global maplist.
+ */
+void drm_vm_shm_close(struct vm_area_struct *vma)
+{
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_vma_entry_t *pt, *prev, *next;
+ drm_map_t *map;
+ drm_map_list_t *r_list;
+ struct list_head *list;
+ int found_maps = 0;
+
+ DRM_DEBUG("0x%08lx,0x%08lx\n",
+ vma->vm_start, vma->vm_end - vma->vm_start);
+ atomic_dec(&dev->vma_count);
+
+ map = vma->vm_private_data;
+
+ down(&dev->struct_sem);
+ for (pt = dev->vmalist, prev = NULL; pt; pt = next) {
+ next = pt->next;
+ if (pt->vma->vm_private_data == map) found_maps++;
+ if (pt->vma == vma) {
+ if (prev) {
+ prev->next = pt->next;
+ } else {
+ dev->vmalist = pt->next;
+ }
+ drm_free(pt, sizeof(*pt), DRM_MEM_VMAS);
+ } else {
+ prev = pt;
+ }
+ }
+ /* We were the only map that was found */
+ if(found_maps == 1 &&
+ map->flags & _DRM_REMOVABLE) {
+ /* Check to see if we are in the maplist, if we are not, then
+ * we delete this mappings information.
+ */
+ found_maps = 0;
+ list = &dev->maplist->head;
+ list_for_each(list, &dev->maplist->head) {
+ r_list = list_entry(list, drm_map_list_t, head);
+ if (r_list->map == map) found_maps++;
+ }
+
+ if(!found_maps) {
+ switch (map->type) {
+ case _DRM_REGISTERS:
+ case _DRM_FRAME_BUFFER:
+ if (drm_core_has_MTRR(dev) && map->mtrr >= 0) {
+ int retcode;
+ retcode = mtrr_del(map->mtrr,
+ map->offset,
+ map->size);
+ DRM_DEBUG("mtrr_del = %d\n", retcode);
+ }
+ drm_ioremapfree(map->handle, map->size, dev);
+ break;
+ case _DRM_SHM:
+ vfree(map->handle);
+ break;
+ case _DRM_AGP:
+ case _DRM_SCATTER_GATHER:
+ break;
+ }
+ drm_free(map, sizeof(*map), DRM_MEM_MAPS);
+ }
+ }
+ up(&dev->struct_sem);
+}
+
+/**
+ * \c nopage method for DMA virtual memory.
+ *
+ * \param vma virtual memory area.
+ * \param address access address.
+ * \return pointer to the page structure.
+ *
+ * Determine the page number from the page offset and get it from drm_device_dma::pagelist.
+ */
+static __inline__ struct page *drm_do_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_device_dma_t *dma = dev->dma;
+ unsigned long offset;
+ unsigned long page_nr;
+ struct page *page;
+
+ if (!dma) return NOPAGE_SIGBUS; /* Error */
+ if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */
+ if (!dma->pagelist) return NOPAGE_OOM ; /* Nothing allocated */
+
+ offset = address - vma->vm_start; /* vm_[pg]off[set] should be 0 */
+ page_nr = offset >> PAGE_SHIFT;
+ page = virt_to_page((dma->pagelist[page_nr] +
+ (offset & (~PAGE_MASK))));
+
+ get_page(page);
+
+ DRM_DEBUG("dma_nopage 0x%lx (page %lu)\n", address, page_nr);
+ return page;
+}
+
+/**
+ * \c nopage method for scatter-gather virtual memory.
+ *
+ * \param vma virtual memory area.
+ * \param address access address.
+ * \return pointer to the page structure.
+ *
+ * Determine the map offset from the page offset and get it from drm_sg_mem::pagelist.
+ */
+static __inline__ struct page *drm_do_vm_sg_nopage(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ drm_map_t *map = (drm_map_t *)vma->vm_private_data;
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_sg_mem_t *entry = dev->sg;
+ unsigned long offset;
+ unsigned long map_offset;
+ unsigned long page_offset;
+ struct page *page;
+
+ if (!entry) return NOPAGE_SIGBUS; /* Error */
+ if (address > vma->vm_end) return NOPAGE_SIGBUS; /* Disallow mremap */
+ if (!entry->pagelist) return NOPAGE_OOM ; /* Nothing allocated */
+
+
+ offset = address - vma->vm_start;
+ map_offset = map->offset - dev->sg->handle;
+ page_offset = (offset >> PAGE_SHIFT) + (map_offset >> PAGE_SHIFT);
+ page = entry->pagelist[page_offset];
+ get_page(page);
+
+ return page;
+}
+
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0)
+
+static struct page *drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int *type) {
+ if (type) *type = VM_FAULT_MINOR;
+ return drm_do_vm_nopage(vma, address);
+}
+
+static struct page *drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int *type) {
+ if (type) *type = VM_FAULT_MINOR;
+ return drm_do_vm_shm_nopage(vma, address);
+}
+
+static struct page *drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int *type) {
+ if (type) *type = VM_FAULT_MINOR;
+ return drm_do_vm_dma_nopage(vma, address);
+}
+
+static struct page *drm_vm_sg_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int *type) {
+ if (type) *type = VM_FAULT_MINOR;
+ return drm_do_vm_sg_nopage(vma, address);
+}
+
+#else /* LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,0) */
+
+static struct page *drm_vm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int unused) {
+ return drm_do_vm_nopage(vma, address);
+}
+
+static struct page *drm_vm_shm_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int unused) {
+ return drm_do_vm_shm_nopage(vma, address);
+}
+
+static struct page *drm_vm_dma_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int unused) {
+ return drm_do_vm_dma_nopage(vma, address);
+}
+
+static struct page *drm_vm_sg_nopage(struct vm_area_struct *vma,
+ unsigned long address,
+ int unused) {
+ return drm_do_vm_sg_nopage(vma, address);
+}
+
+#endif
+
+
+/** AGP virtual memory operations */
+static struct vm_operations_struct drm_vm_ops = {
+ .nopage = drm_vm_nopage,
+ .open = drm_vm_open,
+ .close = drm_vm_close,
+};
+
+/** Shared virtual memory operations */
+static struct vm_operations_struct drm_vm_shm_ops = {
+ .nopage = drm_vm_shm_nopage,
+ .open = drm_vm_open,
+ .close = drm_vm_shm_close,
+};
+
+/** DMA virtual memory operations */
+static struct vm_operations_struct drm_vm_dma_ops = {
+ .nopage = drm_vm_dma_nopage,
+ .open = drm_vm_open,
+ .close = drm_vm_close,
+};
+
+/** Scatter-gather virtual memory operations */
+static struct vm_operations_struct drm_vm_sg_ops = {
+ .nopage = drm_vm_sg_nopage,
+ .open = drm_vm_open,
+ .close = drm_vm_close,
+};
+
+
+/**
+ * \c open method for all virtual memory types.
+ *
+ * \param vma virtual memory area.
+ *
+ * Create a new drm_vma_entry structure as the \p vma private data entry and
+ * add it to drm_device::vmalist.
+ */
+void drm_vm_open(struct vm_area_struct *vma)
+{
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_vma_entry_t *vma_entry;
+
+ DRM_DEBUG("0x%08lx,0x%08lx\n",
+ vma->vm_start, vma->vm_end - vma->vm_start);
+ atomic_inc(&dev->vma_count);
+
+ vma_entry = drm_alloc(sizeof(*vma_entry), DRM_MEM_VMAS);
+ if (vma_entry) {
+ down(&dev->struct_sem);
+ vma_entry->vma = vma;
+ vma_entry->next = dev->vmalist;
+ vma_entry->pid = current->pid;
+ dev->vmalist = vma_entry;
+ up(&dev->struct_sem);
+ }
+}
+
+/**
+ * \c close method for all virtual memory types.
+ *
+ * \param vma virtual memory area.
+ *
+ * Search the \p vma private data entry in drm_device::vmalist, unlink it, and
+ * free it.
+ */
+void drm_vm_close(struct vm_area_struct *vma)
+{
+ drm_file_t *priv = vma->vm_file->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_vma_entry_t *pt, *prev;
+
+ DRM_DEBUG("0x%08lx,0x%08lx\n",
+ vma->vm_start, vma->vm_end - vma->vm_start);
+ atomic_dec(&dev->vma_count);
+
+ down(&dev->struct_sem);
+ for (pt = dev->vmalist, prev = NULL; pt; prev = pt, pt = pt->next) {
+ if (pt->vma == vma) {
+ if (prev) {
+ prev->next = pt->next;
+ } else {
+ dev->vmalist = pt->next;
+ }
+ drm_free(pt, sizeof(*pt), DRM_MEM_VMAS);
+ break;
+ }
+ }
+ up(&dev->struct_sem);
+}
+
+/**
+ * mmap DMA memory.
+ *
+ * \param filp file pointer.
+ * \param vma virtual memory area.
+ * \return zero on success or a negative number on failure.
+ *
+ * Sets the virtual memory area operations structure to vm_dma_ops, the file
+ * pointer, and calls vm_open().
+ */
+int drm_mmap_dma(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev;
+ drm_device_dma_t *dma;
+ unsigned long length = vma->vm_end - vma->vm_start;
+
+ lock_kernel();
+ dev = priv->dev;
+ dma = dev->dma;
+ DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n",
+ vma->vm_start, vma->vm_end, VM_OFFSET(vma));
+
+ /* Length must match exact page count */
+ if (!dma || (length >> PAGE_SHIFT) != dma->page_count) {
+ unlock_kernel();
+ return -EINVAL;
+ }
+ unlock_kernel();
+
+ vma->vm_ops = &drm_vm_dma_ops;
+
+#if LINUX_VERSION_CODE <= 0x02040e /* KERNEL_VERSION(2,4,14) */
+ vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
+#else
+ vma->vm_flags |= VM_RESERVED; /* Don't swap */
+#endif
+
+ vma->vm_file = filp; /* Needed for drm_vm_open() */
+ drm_vm_open(vma);
+ return 0;
+}
+
+unsigned long drm_core_get_map_ofs(drm_map_t *map)
+{
+ return map->offset;
+}
+EXPORT_SYMBOL(drm_core_get_map_ofs);
+
+unsigned long drm_core_get_reg_ofs(struct drm_device *dev)
+{
+#ifdef __alpha__
+ return dev->hose->dense_mem_base - dev->hose->mem_space->start;
+#else
+ return 0;
+#endif
+}
+EXPORT_SYMBOL(drm_core_get_reg_ofs);
+
+/**
+ * mmap a DRM map or DMA memory.
+ *
+ * \param filp file pointer.
+ * \param vma virtual memory area.
+ * \return zero on success or a negative number on failure.
+ *
+ * If the virtual memory area has no offset associated with it then it's a DMA
+ * area, so calls mmap_dma(). Otherwise searches the map in drm_device::maplist,
+ * checks that the restricted flag is not set, sets the virtual memory operations
+ * according to the mapping type and remaps the pages. Finally sets the file
+ * pointer and calls vm_open().
+ */
+int drm_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_map_t *map = NULL;
+ drm_map_list_t *r_list;
+ unsigned long offset = 0;
+ struct list_head *list;
+
+ DRM_DEBUG("start = 0x%lx, end = 0x%lx, offset = 0x%lx\n",
+ vma->vm_start, vma->vm_end, VM_OFFSET(vma));
+
+ if ( !priv->authenticated ) return -EACCES;
+
+ /* We check for "dma". On Apple's UniNorth, it's valid to have
+ * the AGP mapped at physical address 0
+ * --BenH.
+ */
+ if (!VM_OFFSET(vma)
+#if __OS_HAS_AGP
+ && (!dev->agp || dev->agp->agp_info.device->vendor != PCI_VENDOR_ID_APPLE)
+#endif
+ )
+ return drm_mmap_dma(filp, vma);
+
+ /* A sequential search of a linked list is
+ fine here because: 1) there will only be
+ about 5-10 entries in the list and, 2) a
+ DRI client only has to do this mapping
+ once, so it doesn't have to be optimized
+ for performance, even if the list was a
+ bit longer. */
+ list_for_each(list, &dev->maplist->head) {
+ unsigned long off;
+
+ r_list = list_entry(list, drm_map_list_t, head);
+ map = r_list->map;
+ if (!map) continue;
+ off = dev->driver->get_map_ofs(map);
+ if (off == VM_OFFSET(vma)) break;
+ }
+
+ if (!map || ((map->flags&_DRM_RESTRICTED) && !capable(CAP_SYS_ADMIN)))
+ return -EPERM;
+
+ /* Check for valid size. */
+ if (map->size != vma->vm_end - vma->vm_start) return -EINVAL;
+
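+	 /* Read-only maps lose write permission unless the caller has CAP_SYS_ADMIN. */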
+ if (!capable(CAP_SYS_ADMIN) && (map->flags & _DRM_READ_ONLY)) {
+ vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE);
+#if defined(__i386__) || defined(__x86_64__)
+ pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
+#else
+ /* Ye gads this is ugly. With more thought
+ we could move this up higher and use
+ `protection_map' instead. */
+ vma->vm_page_prot = __pgprot(pte_val(pte_wrprotect(
+ __pte(pgprot_val(vma->vm_page_prot)))));
+#endif
+ }
+
+ switch (map->type) {
+ case _DRM_AGP:
+ if (drm_core_has_AGP(dev) && dev->agp->cant_use_aperture) {
+ /*
+			 * On some platforms the CPU cannot access the bus DMA address directly, so for
+			 * memory of type DRM_AGP we sort out the real physical pages and mappings
+			 * in nopage()
+ */
+#if defined(__powerpc__)
+ pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
+#endif
+ vma->vm_ops = &drm_vm_ops;
+ break;
+ }
+ /* fall through to _DRM_FRAME_BUFFER... */
+ case _DRM_FRAME_BUFFER:
+ case _DRM_REGISTERS:
+ if (VM_OFFSET(vma) >= __pa(high_memory)) {
+#if defined(__i386__) || defined(__x86_64__)
+ if (boot_cpu_data.x86 > 3 && map->type != _DRM_AGP) {
+ pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
+ pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
+ }
+#elif defined(__powerpc__)
+ pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;
+#endif
+ vma->vm_flags |= VM_IO; /* not in core dump */
+ }
+#if defined(__ia64__)
+ if (efi_range_is_wc(vma->vm_start, vma->vm_end -
+ vma->vm_start))
+ vma->vm_page_prot =
+ pgprot_writecombine(vma->vm_page_prot);
+ else
+ vma->vm_page_prot =
+ pgprot_noncached(vma->vm_page_prot);
+#endif
+ offset = dev->driver->get_reg_ofs(dev);
+#ifdef __sparc__
+ if (io_remap_page_range(DRM_RPR_ARG(vma) vma->vm_start,
+ VM_OFFSET(vma) + offset,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot, 0))
+#else
+ if (remap_pfn_range(DRM_RPR_ARG(vma) vma->vm_start,
+ (VM_OFFSET(vma) + offset) >> PAGE_SHIFT,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot))
+#endif
+ return -EAGAIN;
+ DRM_DEBUG(" Type = %d; start = 0x%lx, end = 0x%lx,"
+ " offset = 0x%lx\n",
+ map->type,
+ vma->vm_start, vma->vm_end, VM_OFFSET(vma) + offset);
+ vma->vm_ops = &drm_vm_ops;
+ break;
+ case _DRM_SHM:
+ vma->vm_ops = &drm_vm_shm_ops;
+ vma->vm_private_data = (void *)map;
+ /* Don't let this area swap. Change when
+ DRM_KERNEL advisory is supported. */
+#if LINUX_VERSION_CODE <= 0x02040e /* KERNEL_VERSION(2,4,14) */
+ vma->vm_flags |= VM_LOCKED;
+#else
+ vma->vm_flags |= VM_RESERVED;
+#endif
+ break;
+ case _DRM_SCATTER_GATHER:
+ vma->vm_ops = &drm_vm_sg_ops;
+ vma->vm_private_data = (void *)map;
+#if LINUX_VERSION_CODE <= 0x02040e /* KERNEL_VERSION(2,4,14) */
+ vma->vm_flags |= VM_LOCKED;
+#else
+ vma->vm_flags |= VM_RESERVED;
+#endif
+ break;
+ default:
+ return -EINVAL; /* This should never happen. */
+ }
+#if LINUX_VERSION_CODE <= 0x02040e /* KERNEL_VERSION(2,4,14) */
+ vma->vm_flags |= VM_LOCKED | VM_SHM; /* Don't swap */
+#else
+ vma->vm_flags |= VM_RESERVED; /* Don't swap */
+#endif
+
+ vma->vm_file = filp; /* Needed for drm_vm_open() */
+ drm_vm_open(vma);
+ return 0;
+}
+EXPORT_SYMBOL(drm_mmap);
struct ffb_hw_context *hw_state[FFB_MAX_CTXS];
} ffb_dev_priv_t;
-extern struct file_operations DRM(fops);
extern unsigned long ffb_get_unmapped_area(struct file *filp,
unsigned long hint,
unsigned long len,
extern void ffb_set_context_ioctls(void);
extern drm_ioctl_desc_t DRM(ioctls)[];
+extern int ffb_driver_context_switch(drm_device_t *dev, int old, int new);
bl->rp = bl->bufs;
bl->wp = bl->bufs;
bl->end = &bl->bufs[bl->count+1];
- bl->write_lock = SPIN_LOCK_UNLOCKED;
- bl->read_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&bl->write_lock);
+ spin_lock_init(&bl->read_lock);
return 0;
}
bl->low_mark = 0;
bl->high_mark = 0;
atomic_set(&bl->wfh, 0);
- bl->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&bl->lock);
++bl->initialized;
return 0;
}
/* i810 specific ioctls
* The device specific ioctl range is 0x40 to 0x79.
*/
-#define DRM_IOCTL_I810_INIT DRM_IOW( 0x40, drm_i810_init_t)
-#define DRM_IOCTL_I810_VERTEX DRM_IOW( 0x41, drm_i810_vertex_t)
-#define DRM_IOCTL_I810_CLEAR DRM_IOW( 0x42, drm_i810_clear_t)
-#define DRM_IOCTL_I810_FLUSH DRM_IO( 0x43)
-#define DRM_IOCTL_I810_GETAGE DRM_IO( 0x44)
-#define DRM_IOCTL_I810_GETBUF DRM_IOWR(0x45, drm_i810_dma_t)
-#define DRM_IOCTL_I810_SWAP DRM_IO( 0x46)
-#define DRM_IOCTL_I810_COPY DRM_IOW( 0x47, drm_i810_copy_t)
-#define DRM_IOCTL_I810_DOCOPY DRM_IO( 0x48)
-#define DRM_IOCTL_I810_OV0INFO DRM_IOR( 0x49, drm_i810_overlay_t)
-#define DRM_IOCTL_I810_FSTATUS DRM_IO ( 0x4a)
-#define DRM_IOCTL_I810_OV0FLIP DRM_IO ( 0x4b)
-#define DRM_IOCTL_I810_MC DRM_IOW( 0x4c, drm_i810_mc_t)
-#define DRM_IOCTL_I810_RSTATUS DRM_IO ( 0x4d )
-#define DRM_IOCTL_I810_FLIP DRM_IO ( 0x4e )
+#define DRM_I810_INIT 0x00
+#define DRM_I810_VERTEX 0x01
+#define DRM_I810_CLEAR 0x02
+#define DRM_I810_FLUSH 0x03
+#define DRM_I810_GETAGE 0x04
+#define DRM_I810_GETBUF 0x05
+#define DRM_I810_SWAP 0x06
+#define DRM_I810_COPY 0x07
+#define DRM_I810_DOCOPY 0x08
+#define DRM_I810_OV0INFO 0x09
+#define DRM_I810_FSTATUS 0x0a
+#define DRM_I810_OV0FLIP 0x0b
+#define DRM_I810_MC 0x0c
+#define DRM_I810_RSTATUS 0x0d
+#define DRM_I810_FLIP 0x0e
+
+#define DRM_IOCTL_I810_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I810_INIT, drm_i810_init_t)
+#define DRM_IOCTL_I810_VERTEX DRM_IOW( DRM_COMMAND_BASE + DRM_I810_VERTEX, drm_i810_vertex_t)
+#define DRM_IOCTL_I810_CLEAR DRM_IOW( DRM_COMMAND_BASE + DRM_I810_CLEAR, drm_i810_clear_t)
+#define DRM_IOCTL_I810_FLUSH DRM_IO( DRM_COMMAND_BASE + DRM_I810_FLUSH)
+#define DRM_IOCTL_I810_GETAGE DRM_IO( DRM_COMMAND_BASE + DRM_I810_GETAGE)
+#define DRM_IOCTL_I810_GETBUF DRM_IOWR(DRM_COMMAND_BASE + DRM_I810_GETBUF, drm_i810_dma_t)
+#define DRM_IOCTL_I810_SWAP DRM_IO( DRM_COMMAND_BASE + DRM_I810_SWAP)
+#define DRM_IOCTL_I810_COPY DRM_IOW( DRM_COMMAND_BASE + DRM_I810_COPY, drm_i810_copy_t)
+#define DRM_IOCTL_I810_DOCOPY DRM_IO( DRM_COMMAND_BASE + DRM_I810_DOCOPY)
+#define DRM_IOCTL_I810_OV0INFO DRM_IOR( DRM_COMMAND_BASE + DRM_I810_OV0INFO, drm_i810_overlay_t)
+#define DRM_IOCTL_I810_FSTATUS DRM_IO ( DRM_COMMAND_BASE + DRM_I810_FSTATUS)
+#define DRM_IOCTL_I810_OV0FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_I810_OV0FLIP)
+#define DRM_IOCTL_I810_MC DRM_IOW( DRM_COMMAND_BASE + DRM_I810_MC, drm_i810_mc_t)
+#define DRM_IOCTL_I810_RSTATUS DRM_IO ( DRM_COMMAND_BASE + DRM_I810_RSTATUS)
+#define DRM_IOCTL_I810_FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_I810_FLIP)
typedef struct _drm_i810_clear {
int clear_color;
*/
#include <linux/config.h>
-#include "i810.h"
#include "drmP.h"
#include "drm.h"
#include "i810_drm.h"
#include "i810_drv.h"
-#include "drm_core.h"
+#include "drm_pciids.h"
+
+static int postinit( struct drm_device *dev, unsigned long flags )
+{
+ /* i810 has 4 more counters */
+ dev->counters += 4;
+ dev->types[6] = _DRM_STAT_IRQ;
+ dev->types[7] = _DRM_STAT_PRIMARY;
+ dev->types[8] = _DRM_STAT_SECONDARY;
+ dev->types[9] = _DRM_STAT_DMA;
+
+ DRM_INFO( "Initialized %s %d.%d.%d %s on minor %d: %s\n",
+ DRIVER_NAME,
+ DRIVER_MAJOR,
+ DRIVER_MINOR,
+ DRIVER_PATCHLEVEL,
+ DRIVER_DATE,
+ dev->minor,
+ pci_pretty_name(dev->pdev)
+ );
+ return 0;
+}
+
+static int version( drm_version_t *version )
+{
+ int len;
+
+ version->version_major = DRIVER_MAJOR;
+ version->version_minor = DRIVER_MINOR;
+ version->version_patchlevel = DRIVER_PATCHLEVEL;
+ DRM_COPY( version->name, DRIVER_NAME );
+ DRM_COPY( version->date, DRIVER_DATE );
+ DRM_COPY( version->desc, DRIVER_DESC );
+ return 0;
+}
+
+static struct pci_device_id pciidlist[] = {
+ i810_PCI_IDS
+};
+
+extern drm_ioctl_desc_t i810_ioctls[];
+extern int i810_max_ioctl;
+
+static struct drm_driver driver = {
+ .driver_features = DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | DRIVER_USE_MTRR | DRIVER_HAVE_DMA | DRIVER_DMA_QUEUE,
+ .dev_priv_size = sizeof(drm_i810_buf_priv_t),
+ .pretakedown = i810_driver_pretakedown,
+ .release = i810_driver_release,
+ .dma_quiescent = i810_driver_dma_quiescent,
+ .reclaim_buffers = i810_reclaim_buffers,
+ .get_map_ofs = drm_core_get_map_ofs,
+ .get_reg_ofs = drm_core_get_reg_ofs,
+ .postinit = postinit,
+ .version = version,
+ .ioctls = i810_ioctls,
+ .fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .release = drm_release,
+ .ioctl = drm_ioctl,
+ .mmap = drm_mmap,
+ .poll = drm_poll,
+ .fasync = drm_fasync,
+ },
+ .pci_driver = {
+ .name = DRIVER_NAME,
+ .id_table = pciidlist,
+ },
+};
+
+static int __init i810_init(void)
+{
+ driver.num_ioctls = i810_max_ioctl;
+ return drm_init(&driver);
+}
+
+static void __exit i810_exit(void)
+{
+ drm_exit(&driver);
+}
+
+module_init(i810_init);
+module_exit(i810_exit);
+
+MODULE_AUTHOR( DRIVER_AUTHOR );
+MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_LICENSE("GPL and additional rights");
#ifndef _I810_DRV_H_
#define _I810_DRV_H_
+/* General customization:
+ */
+
+#define DRIVER_AUTHOR "VA Linux Systems Inc."
+
+#define DRIVER_NAME "i810"
+#define DRIVER_DESC "Intel i810"
+#define DRIVER_DATE "20030605"
+
+/* Interface history
+ *
+ * 1.1 - XFree86 4.1
+ * 1.2 - XvMC interfaces
+ * - XFree86 4.2
+ * 1.2.1 - Disable copying code (leave stub ioctls for backwards compatibility)
+ * - Remove requirement for interrupt (leave stubs again)
+ * 1.3 - Add page flipping.
+ * 1.4 - fix DRM interface
+ */
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 4
+#define DRIVER_PATCHLEVEL 0
+
typedef struct drm_i810_buf_priv {
u32 *in_use;
int my_use_idx;
extern int i810_dma_cleanup(drm_device_t *dev);
extern int i810_flush_ioctl(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
-extern void i810_reclaim_buffers(struct file *filp);
+extern void i810_reclaim_buffers(drm_device_t *dev, struct file *filp);
extern int i810_getage(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
extern int i810_mmap_buffers(struct file *filp, struct vm_area_struct *vma);
extern void i810_dma_quiescent(drm_device_t *dev);
-int i810_dma_vertex(struct inode *inode, struct file *filp,
+extern int i810_dma_vertex(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
-int i810_swap_bufs(struct inode *inode, struct file *filp,
+extern int i810_swap_bufs(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
-int i810_clear_bufs(struct inode *inode, struct file *filp,
+extern int i810_clear_bufs(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
-int i810_flip_bufs(struct inode *inode, struct file *filp,
+extern int i810_flip_bufs(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
+extern int i810_driver_dma_quiescent(drm_device_t *dev);
+extern void i810_driver_release(drm_device_t *dev, struct file *filp);
+extern void i810_driver_pretakedown(drm_device_t *dev);
+
#define I810_BASE(reg) ((unsigned long) \
dev_priv->mmio_map->handle)
#define I810_ADDR(reg) (I810_BASE(reg) + reg)
/* I830 specific ioctls
* The device specific ioctl range is 0x40 to 0x79.
*/
-#define DRM_IOCTL_I830_INIT DRM_IOW( 0x40, drm_i830_init_t)
-#define DRM_IOCTL_I830_VERTEX DRM_IOW( 0x41, drm_i830_vertex_t)
-#define DRM_IOCTL_I830_CLEAR DRM_IOW( 0x42, drm_i830_clear_t)
-#define DRM_IOCTL_I830_FLUSH DRM_IO ( 0x43)
-#define DRM_IOCTL_I830_GETAGE DRM_IO ( 0x44)
-#define DRM_IOCTL_I830_GETBUF DRM_IOWR(0x45, drm_i830_dma_t)
-#define DRM_IOCTL_I830_SWAP DRM_IO ( 0x46)
-#define DRM_IOCTL_I830_COPY DRM_IOW( 0x47, drm_i830_copy_t)
-#define DRM_IOCTL_I830_DOCOPY DRM_IO ( 0x48)
-#define DRM_IOCTL_I830_FLIP DRM_IO ( 0x49)
-#define DRM_IOCTL_I830_IRQ_EMIT DRM_IOWR(0x4a, drm_i830_irq_emit_t)
-#define DRM_IOCTL_I830_IRQ_WAIT DRM_IOW( 0x4b, drm_i830_irq_wait_t)
-#define DRM_IOCTL_I830_GETPARAM DRM_IOWR(0x4c, drm_i830_getparam_t)
-#define DRM_IOCTL_I830_SETPARAM DRM_IOWR(0x4d, drm_i830_setparam_t)
+#define DRM_I830_INIT 0x00
+#define DRM_I830_VERTEX 0x01
+#define DRM_I830_CLEAR 0x02
+#define DRM_I830_FLUSH 0x03
+#define DRM_I830_GETAGE 0x04
+#define DRM_I830_GETBUF 0x05
+#define DRM_I830_SWAP 0x06
+#define DRM_I830_COPY 0x07
+#define DRM_I830_DOCOPY 0x08
+#define DRM_I830_FLIP 0x09
+#define DRM_I830_IRQ_EMIT 0x0a
+#define DRM_I830_IRQ_WAIT 0x0b
+#define DRM_I830_GETPARAM 0x0c
+#define DRM_I830_SETPARAM 0x0d
+
+#define DRM_IOCTL_I830_INIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I830_INIT, drm_i830_init_t)
+#define DRM_IOCTL_I830_VERTEX		DRM_IOW( DRM_COMMAND_BASE + DRM_I830_VERTEX, drm_i830_vertex_t)
+#define DRM_IOCTL_I830_CLEAR		DRM_IOW( DRM_COMMAND_BASE + DRM_I830_CLEAR, drm_i830_clear_t)
+#define DRM_IOCTL_I830_FLUSH		DRM_IO ( DRM_COMMAND_BASE + DRM_I830_FLUSH)
+#define DRM_IOCTL_I830_GETAGE		DRM_IO ( DRM_COMMAND_BASE + DRM_I830_GETAGE)
+#define DRM_IOCTL_I830_GETBUF		DRM_IOWR(DRM_COMMAND_BASE + DRM_I830_GETBUF, drm_i830_dma_t)
+#define DRM_IOCTL_I830_SWAP		DRM_IO ( DRM_COMMAND_BASE + DRM_I830_SWAP)
+#define DRM_IOCTL_I830_COPY		DRM_IOW( DRM_COMMAND_BASE + DRM_I830_COPY, drm_i830_copy_t)
+#define DRM_IOCTL_I830_DOCOPY		DRM_IO ( DRM_COMMAND_BASE + DRM_I830_DOCOPY)
+#define DRM_IOCTL_I830_FLIP		DRM_IO ( DRM_COMMAND_BASE + DRM_I830_FLIP)
+#define DRM_IOCTL_I830_IRQ_EMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_I830_IRQ_EMIT, drm_i830_irq_emit_t)
+#define DRM_IOCTL_I830_IRQ_WAIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I830_IRQ_WAIT, drm_i830_irq_wait_t)
+#define DRM_IOCTL_I830_GETPARAM		DRM_IOWR(DRM_COMMAND_BASE + DRM_I830_GETPARAM, drm_i830_getparam_t)
+#define DRM_IOCTL_I830_SETPARAM		DRM_IOWR(DRM_COMMAND_BASE + DRM_I830_SETPARAM, drm_i830_setparam_t)
typedef struct _drm_i830_clear {
int clear_color;
#ifndef _I830_DRV_H_
#define _I830_DRV_H_
+/* General customization:
+ */
+
+#define DRIVER_AUTHOR "VA Linux Systems Inc."
+
+#define DRIVER_NAME "i830"
+#define DRIVER_DESC "Intel 830M"
+#define DRIVER_DATE "20021108"
+
+/* Interface history:
+ *
+ * 1.1: Original.
+ * 1.2: ?
+ * 1.3: New irq emit/wait ioctls.
+ * New pageflip ioctl.
+ * New getparam ioctl.
+ * State for texunits 3&4 in sarea.
+ * New (alternative) layout for texture state.
+ */
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 3
+#define DRIVER_PATCHLEVEL 2
+
+/* The driver will work either way: IRQs save CPU time when waiting for
+ * the card, but are subject to subtle interactions between the BIOS,
+ * the hardware, and the driver.
+ */
+/* XXX: Add vblank support? */
+#define USE_IRQS 0
+
typedef struct drm_i830_buf_priv {
u32 *in_use;
int my_use_idx;
extern int i830_dma_cleanup(drm_device_t *dev);
extern int i830_flush_ioctl(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg);
-extern void i830_reclaim_buffers(struct file *filp);
+extern void i830_reclaim_buffers(drm_device_t *dev, struct file *filp);
extern int i830_getage(struct inode *inode, struct file *filp, unsigned int cmd,
unsigned long arg);
extern int i830_mmap_buffers(struct file *filp, struct vm_area_struct *vma);
extern void i830_driver_irq_preinstall( drm_device_t *dev );
extern void i830_driver_irq_postinstall( drm_device_t *dev );
extern void i830_driver_irq_uninstall( drm_device_t *dev );
+extern void i830_driver_pretakedown(drm_device_t *dev);
+extern void i830_driver_release(drm_device_t *dev, struct file *filp);
+extern int i830_driver_dma_quiescent(drm_device_t *dev);
#define I830_BASE(reg) ((unsigned long) \
dev_priv->mmio_map->handle)
*
**************************************************************************/
-#include "i915.h"
#include "drmP.h"
#include "drm.h"
#include "i915_drm.h"
#include "i915_drv.h"
-static inline void i915_print_status_page(drm_device_t * dev)
-{
- drm_i915_private_t *dev_priv = dev->dev_private;
- u32 *temp = dev_priv->hw_status_page;
-
- if (!temp) {
- DRM_DEBUG("no status page\n");
- return;
- }
-
- DRM_DEBUG("hw_status: Interrupt Status : %x\n", temp[0]);
- DRM_DEBUG("hw_status: LpRing Head ptr : %x\n", temp[1]);
- DRM_DEBUG("hw_status: IRing Head ptr : %x\n", temp[2]);
- DRM_DEBUG("hw_status: Reserved : %x\n", temp[3]);
- DRM_DEBUG("hw_status: Driver Counter : %d\n", temp[5]);
-
-}
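+/* Each entry: handler, auth_needed flag, root_only flag. */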
+drm_ioctl_desc_t i915_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_I915_INIT)] = {i915_dma_init, 1, 1},
+ [DRM_IOCTL_NR(DRM_I915_FLUSH)] = {i915_flush_ioctl, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_FLIP)] = {i915_flip_bufs, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_BATCHBUFFER)] = {i915_batchbuffer, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_IRQ_EMIT)] = {i915_irq_emit, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_IRQ_WAIT)] = {i915_irq_wait, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_GETPARAM)] = {i915_getparam, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_SETPARAM)] = {i915_setparam, 1, 1},
+ [DRM_IOCTL_NR(DRM_I915_ALLOC)] = {i915_mem_alloc, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_FREE)] = {i915_mem_free, 1, 0},
+ [DRM_IOCTL_NR(DRM_I915_INIT_HEAP)] = {i915_mem_init_heap, 1, 1},
+ [DRM_IOCTL_NR(DRM_I915_CMDBUFFER)] = {i915_cmdbuffer, 1, 0}
+};
+
+int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);
/* Really want an OS-independent resettable timer. Would like to have
* this loop run for (eg) 3 sec, but have the timer reset every time
* is freed, it's too late.
*/
if (dev->irq)
- DRM(irq_uninstall) (dev);
+ drm_irq_uninstall (dev);
if (dev->dev_private) {
drm_i915_private_t *dev_priv =
}
if (dev_priv->hw_status_page) {
- pci_free_consistent(dev->pdev, PAGE_SIZE,
- dev_priv->hw_status_page,
- dev_priv->dma_status_page);
+ drm_pci_free(dev, PAGE_SIZE, dev_priv->hw_status_page,
+ dev_priv->dma_status_page);
/* Need to rewrite hardware status page */
I915_WRITE(0x02080, 0x1ffff000);
}
- DRM(free) (dev->dev_private, sizeof(drm_i915_private_t),
+ drm_free (dev->dev_private, sizeof(drm_i915_private_t),
DRM_MEM_DRIVER);
dev->dev_private = NULL;
dev_priv->allow_batchbuffer = 1;
/* Program Hardware Status Page */
- dev_priv->hw_status_page =
- pci_alloc_consistent(dev->pdev, PAGE_SIZE,
- &dev_priv->dma_status_page);
+ dev_priv->hw_status_page = drm_pci_alloc(dev, PAGE_SIZE, PAGE_SIZE,
+ 0xffffffff,
+ &dev_priv->dma_status_page);
if (!dev_priv->hw_status_page) {
dev->dev_private = (void *)dev_priv;
switch (init.func) {
case I915_INIT_DMA:
- dev_priv = DRM(alloc) (sizeof(drm_i915_private_t),
+ dev_priv = drm_alloc (sizeof(drm_i915_private_t),
DRM_MEM_DRIVER);
if (dev_priv == NULL)
return DRM_ERR(ENOMEM);
{
DRM_DEVICE;
- if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
- DRM_ERROR("i915_flush_ioctl called without lock held\n");
- return DRM_ERR(EINVAL);
- }
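+	/* LOCK_TEST_WITH_RETURN() returns -EINVAL unless the caller holds the hardware lock. */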
+ LOCK_TEST_WITH_RETURN(dev, filp);
return i915_quiescent(dev);
}
DRM_DEBUG("i915 batchbuffer, start %x used %d cliprects %d\n",
batch.start, batch.used, batch.num_cliprects);
- if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
- DRM_ERROR("i915_batchbuffer called without lock held\n");
- return DRM_ERR(EINVAL);
- }
+ LOCK_TEST_WITH_RETURN(dev, filp);
if (batch.num_cliprects && DRM_VERIFYAREA_READ(batch.cliprects,
batch.num_cliprects *
DRM_DEBUG("i915 cmdbuffer, buf %p sz %d cliprects %d\n",
cmdbuf.buf, cmdbuf.sz, cmdbuf.num_cliprects);
- if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
- DRM_ERROR("i915_cmdbuffer called without lock held\n");
- return DRM_ERR(EINVAL);
- }
+ LOCK_TEST_WITH_RETURN(dev, filp);
if (cmdbuf.num_cliprects &&
DRM_VERIFYAREA_READ(cmdbuf.cliprects,
DRM_DEVICE;
DRM_DEBUG("%s\n", __FUNCTION__);
- if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
- DRM_ERROR("i915_flip_buf called without lock held\n");
- return DRM_ERR(EINVAL);
- }
+
+ LOCK_TEST_WITH_RETURN(dev, filp);
return i915_dispatch_flip(dev);
}
return 0;
}
-static void i915_driver_pretakedown(drm_device_t *dev)
+void i915_driver_pretakedown(drm_device_t *dev)
{
if ( dev->dev_private ) {
drm_i915_private_t *dev_priv = dev->dev_private;
i915_dma_cleanup( dev );
}
-static void i915_driver_prerelease(drm_device_t *dev, DRMFILE filp)
+void i915_driver_prerelease(drm_device_t *dev, DRMFILE filp)
{
if ( dev->dev_private ) {
drm_i915_private_t *dev_priv = dev->dev_private;
}
}
-void i915_driver_register_fns(drm_device_t *dev)
-{
- dev->driver_features = DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | DRIVER_USE_MTRR | DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED;
- dev->fn_tbl.pretakedown = i915_driver_pretakedown;
- dev->fn_tbl.prerelease = i915_driver_prerelease;
- dev->fn_tbl.irq_preinstall = i915_driver_irq_preinstall;
- dev->fn_tbl.irq_postinstall = i915_driver_irq_postinstall;
- dev->fn_tbl.irq_uninstall = i915_driver_irq_uninstall;
- dev->fn_tbl.irq_handler = i915_driver_irq_handler;
-
- dev->counters += 4;
- dev->types[6] = _DRM_STAT_IRQ;
- dev->types[7] = _DRM_STAT_PRIMARY;
- dev->types[8] = _DRM_STAT_SECONDARY;
- dev->types[9] = _DRM_STAT_DMA;
-}
/* I915 specific ioctls
* The device specific ioctl range is 0x40 to 0x79.
*/
-#define DRM_IOCTL_I915_INIT DRM_IOW( 0x40, drm_i915_init_t)
-#define DRM_IOCTL_I915_FLUSH DRM_IO ( 0x41)
-#define DRM_IOCTL_I915_FLIP DRM_IO ( 0x42)
-#define DRM_IOCTL_I915_BATCHBUFFER DRM_IOW( 0x43, drm_i915_batchbuffer_t)
-#define DRM_IOCTL_I915_IRQ_EMIT DRM_IOWR(0x44, drm_i915_irq_emit_t)
-#define DRM_IOCTL_I915_IRQ_WAIT DRM_IOW( 0x45, drm_i915_irq_wait_t)
-#define DRM_IOCTL_I915_GETPARAM DRM_IOWR(0x46, drm_i915_getparam_t)
-#define DRM_IOCTL_I915_SETPARAM DRM_IOW( 0x47, drm_i915_setparam_t)
-#define DRM_IOCTL_I915_ALLOC DRM_IOWR(0x48, drm_i915_mem_alloc_t)
-#define DRM_IOCTL_I915_FREE DRM_IOW( 0x49, drm_i915_mem_free_t)
-#define DRM_IOCTL_I915_INIT_HEAP DRM_IOW( 0x4a, drm_i915_mem_init_heap_t)
-#define DRM_IOCTL_I915_CMDBUFFER DRM_IOW( 0x4b, drm_i915_cmdbuffer_t)
+#define DRM_I915_INIT 0x00
+#define DRM_I915_FLUSH 0x01
+#define DRM_I915_FLIP 0x02
+#define DRM_I915_BATCHBUFFER 0x03
+#define DRM_I915_IRQ_EMIT 0x04
+#define DRM_I915_IRQ_WAIT 0x05
+#define DRM_I915_GETPARAM 0x06
+#define DRM_I915_SETPARAM 0x07
+#define DRM_I915_ALLOC 0x08
+#define DRM_I915_FREE 0x09
+#define DRM_I915_INIT_HEAP 0x0a
+#define DRM_I915_CMDBUFFER 0x0b
+
+#define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
+#define DRM_IOCTL_I915_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH)
+#define DRM_IOCTL_I915_FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLIP)
+#define DRM_IOCTL_I915_BATCHBUFFER DRM_IOW( DRM_COMMAND_BASE + DRM_I915_BATCHBUFFER, drm_i915_batchbuffer_t)
+#define DRM_IOCTL_I915_IRQ_EMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_IRQ_EMIT, drm_i915_irq_emit_t)
+#define DRM_IOCTL_I915_IRQ_WAIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_IRQ_WAIT, drm_i915_irq_wait_t)
+#define DRM_IOCTL_I915_GETPARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GETPARAM, drm_i915_getparam_t)
+#define DRM_IOCTL_I915_SETPARAM DRM_IOW( DRM_COMMAND_BASE + DRM_I915_SETPARAM, drm_i915_setparam_t)
+#define DRM_IOCTL_I915_ALLOC DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_ALLOC, drm_i915_mem_alloc_t)
+#define DRM_IOCTL_I915_FREE DRM_IOW( DRM_COMMAND_BASE + DRM_I915_FREE, drm_i915_mem_free_t)
+#define DRM_IOCTL_I915_INIT_HEAP DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT_HEAP, drm_i915_mem_init_heap_t)
+#define DRM_IOCTL_I915_CMDBUFFER DRM_IOW( DRM_COMMAND_BASE + DRM_I915_CMDBUFFER, drm_i915_cmdbuffer_t)
/* Allow drivers to submit batchbuffers directly to hardware, relying
* on the security mechanisms provided by hardware.
*
**************************************************************************/
-#include "i915.h"
#include "drmP.h"
#include "drm.h"
#include "i915_drm.h"
#include "i915_drv.h"
-#include "drm_core.h"
+#include "drm_pciids.h"
+
+static int postinit( struct drm_device *dev, unsigned long flags )
+{
+ dev->counters += 4;
+ dev->types[6] = _DRM_STAT_IRQ;
+ dev->types[7] = _DRM_STAT_PRIMARY;
+ dev->types[8] = _DRM_STAT_SECONDARY;
+ dev->types[9] = _DRM_STAT_DMA;
+
+ DRM_INFO( "Initialized %s %d.%d.%d %s on minor %d: %s\n",
+ DRIVER_NAME,
+ DRIVER_MAJOR,
+ DRIVER_MINOR,
+ DRIVER_PATCHLEVEL,
+ DRIVER_DATE,
+ dev->minor,
+ pci_pretty_name(dev->pdev)
+ );
+ return 0;
+}
+
+static int version( drm_version_t *version )
+{
+ int len;
+
+ version->version_major = DRIVER_MAJOR;
+ version->version_minor = DRIVER_MINOR;
+ version->version_patchlevel = DRIVER_PATCHLEVEL;
+ DRM_COPY( version->name, DRIVER_NAME );
+ DRM_COPY( version->date, DRIVER_DATE );
+ DRM_COPY( version->desc, DRIVER_DESC );
+ return 0;
+}
+
+static struct pci_device_id pciidlist[] = {
+ i915_PCI_IDS
+};
+
+extern drm_ioctl_desc_t i915_ioctls[];
+extern int i915_max_ioctl;
+
+static struct drm_driver driver = {
+ .driver_features = DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | DRIVER_USE_MTRR |
+ DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED,
+ .pretakedown = i915_driver_pretakedown,
+ .prerelease = i915_driver_prerelease,
+ .irq_preinstall = i915_driver_irq_preinstall,
+ .irq_postinstall = i915_driver_irq_postinstall,
+ .irq_uninstall = i915_driver_irq_uninstall,
+ .irq_handler = i915_driver_irq_handler,
+ .reclaim_buffers = drm_core_reclaim_buffers,
+ .get_map_ofs = drm_core_get_map_ofs,
+ .get_reg_ofs = drm_core_get_reg_ofs,
+ .postinit = postinit,
+ .version = version,
+ .ioctls = i915_ioctls,
+ .fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .release = drm_release,
+ .ioctl = drm_ioctl,
+ .mmap = drm_mmap,
+ .poll = drm_poll,
+ .fasync = drm_fasync,
+ },
+ .pci_driver = {
+ .name = DRIVER_NAME,
+ .id_table = pciidlist,
+ }
+};
+
+static int __init i915_init(void)
+{
+ driver.num_ioctls = i915_max_ioctl;
+ return drm_init(&driver);
+}
+
+static void __exit i915_exit(void)
+{
+ drm_exit(&driver);
+}
+
+module_init(i915_init);
+module_exit(i915_exit);
+
+MODULE_AUTHOR( DRIVER_AUTHOR );
+MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_LICENSE("GPL and additional rights");
#ifndef _I915_DRV_H_
#define _I915_DRV_H_
+/* General customization:
+ */
+
+#define DRIVER_AUTHOR "Tungsten Graphics, Inc."
+
+#define DRIVER_NAME "i915"
+#define DRIVER_DESC "Intel Graphics"
+#define DRIVER_DATE "20040405"
+
+/* Interface history:
+ *
+ * 1.1: Original.
+ */
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 1
+#define DRIVER_PATCHLEVEL 0
+
+/* We use our own dma mechanisms, not the drm template code. However,
+ * the shared IRQ code is useful to us:
+ */
+#define __HAVE_PM 1
+
typedef struct _drm_i915_ring_buffer {
int tail_mask;
unsigned long Start;
extern int i915_setparam(DRM_IOCTL_ARGS);
extern int i915_cmdbuffer(DRM_IOCTL_ARGS);
extern void i915_kernel_lost_context(drm_device_t * dev);
+extern void i915_driver_pretakedown(drm_device_t *dev);
+extern void i915_driver_prerelease(drm_device_t *dev, DRMFILE filp);
/* i915_irq.c */
extern int i915_irq_emit(DRM_IOCTL_ARGS);
*
**************************************************************************/
-#include "i915.h"
#include "drmP.h"
#include "drm.h"
#include "i915_drm.h"
drm_i915_irq_emit_t emit;
int result;
- if (!_DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock)) {
- DRM_ERROR("i915_irq_emit called without lock held\n");
- return DRM_ERR(EINVAL);
- }
+ LOCK_TEST_WITH_RETURN(dev, filp);
if (!dev_priv) {
DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
*
**************************************************************************/
-#include "i915.h"
#include "drmP.h"
#include "drm.h"
#include "i915_drm.h"
{
/* Maybe cut off the start of an existing block */
if (start > p->start) {
- struct mem_block *newblock = DRM_MALLOC(sizeof(*newblock));
+ struct mem_block *newblock = drm_alloc(sizeof(*newblock), DRM_MEM_BUFLISTS);
if (!newblock)
goto out;
newblock->start = start;
/* Maybe cut off the end of an existing block */
if (size < p->size) {
- struct mem_block *newblock = DRM_MALLOC(sizeof(*newblock));
+ struct mem_block *newblock = drm_alloc(sizeof(*newblock), DRM_MEM_BUFLISTS);
if (!newblock)
goto out;
newblock->start = start + size;
p->size += q->size;
p->next = q->next;
p->next->prev = p;
- DRM_FREE(q, sizeof(*q));
+ drm_free(q, sizeof(*q), DRM_MEM_BUFLISTS);
}
if (p->prev->filp == NULL) {
q->size += p->size;
q->next = p->next;
q->next->prev = q;
- DRM_FREE(p, sizeof(*q));
+ drm_free(p, sizeof(*q), DRM_MEM_BUFLISTS);
}
}
*/
static int init_heap(struct mem_block **heap, int start, int size)
{
- struct mem_block *blocks = DRM_MALLOC(sizeof(*blocks));
+ struct mem_block *blocks = drm_alloc(sizeof(*blocks), DRM_MEM_BUFLISTS);
if (!blocks)
return -ENOMEM;
- *heap = DRM_MALLOC(sizeof(**heap));
+ *heap = drm_alloc(sizeof(**heap), DRM_MEM_BUFLISTS);
if (!*heap) {
- DRM_FREE(blocks, sizeof(*blocks));
+ drm_free(blocks, sizeof(*blocks), DRM_MEM_BUFLISTS);
return -ENOMEM;
}
p->size += q->size;
p->next = q->next;
p->next->prev = p;
- DRM_FREE(q, sizeof(*q));
+ drm_free(q, sizeof(*q), DRM_MEM_BUFLISTS);
}
}
}
for (p = (*heap)->next; p != *heap;) {
struct mem_block *q = p;
p = p->next;
- DRM_FREE(q, sizeof(*q));
+ drm_free(q, sizeof(*q), DRM_MEM_BUFLISTS);
}
- DRM_FREE(*heap, sizeof(**heap));
+ drm_free(*heap, sizeof(**heap), DRM_MEM_BUFLISTS);
*heap = NULL;
}
* Gareth Hughes <gareth@valinux.com>
*/
-#include "mga.h"
#include "drmP.h"
#include "drm.h"
#include "mga_drm.h"
#include "mga_drv.h"
+drm_ioctl_desc_t mga_ioctls[] = {
+ [DRM_IOCTL_NR(DRM_MGA_INIT)] = { mga_dma_init, 1, 1 },
+ [DRM_IOCTL_NR(DRM_MGA_FLUSH)] = { mga_dma_flush, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_RESET)] = { mga_dma_reset, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_SWAP)] = { mga_dma_swap, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_CLEAR)] = { mga_dma_clear, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_VERTEX)] = { mga_dma_vertex, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_INDICES)] = { mga_dma_indices, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_ILOAD)] = { mga_dma_iload, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_BLIT)] = { mga_dma_blit, 1, 0 },
+ [DRM_IOCTL_NR(DRM_MGA_GETPARAM)]= { mga_getparam, 1, 0 },
+};
+
+int mga_max_ioctl = DRM_ARRAY_SIZE(mga_ioctls);
/* ================================================================
* DMA hardware state programming functions
* Gareth Hughes <gareth@valinux.com>
*/
-#include "mga.h"
#include "drmP.h"
#include "drm.h"
#include "mga_drm.h"
#define __SIS_DRM_H__
/* SiS specific ioctls */
-#define DRM_IOCTL_SIS_FB_ALLOC DRM_IOWR(0x44, drm_sis_mem_t)
-#define DRM_IOCTL_SIS_FB_FREE DRM_IOW( 0x45, drm_sis_mem_t)
-#define DRM_IOCTL_SIS_AGP_INIT DRM_IOWR(0x53, drm_sis_agp_t)
-#define DRM_IOCTL_SIS_AGP_ALLOC DRM_IOWR(0x54, drm_sis_mem_t)
-#define DRM_IOCTL_SIS_AGP_FREE DRM_IOW( 0x55, drm_sis_mem_t)
-#define DRM_IOCTL_SIS_FB_INIT DRM_IOW( 0x56, drm_sis_fb_t)
+#define NOT_USED_0_3
+#define DRM_SIS_FB_ALLOC 0x04
+#define DRM_SIS_FB_FREE 0x05
+#define NOT_USED_6_12
+#define DRM_SIS_AGP_INIT 0x13
+#define DRM_SIS_AGP_ALLOC 0x14
+#define DRM_SIS_AGP_FREE 0x15
+#define DRM_SIS_FB_INIT 0x16
+
+#define DRM_IOCTL_SIS_FB_ALLOC DRM_IOWR(DRM_COMMAND_BASE + DRM_SIS_FB_ALLOC, drm_sis_mem_t)
+#define DRM_IOCTL_SIS_FB_FREE DRM_IOW( DRM_COMMAND_BASE + DRM_SIS_FB_FREE, drm_sis_mem_t)
+#define DRM_IOCTL_SIS_AGP_INIT DRM_IOWR(DRM_COMMAND_BASE + DRM_SIS_AGP_INIT, drm_sis_agp_t)
+#define DRM_IOCTL_SIS_AGP_ALLOC DRM_IOWR(DRM_COMMAND_BASE + DRM_SIS_AGP_ALLOC, drm_sis_mem_t)
+#define DRM_IOCTL_SIS_AGP_FREE DRM_IOW( DRM_COMMAND_BASE + DRM_SIS_AGP_FREE, drm_sis_mem_t)
+#define DRM_IOCTL_SIS_FB_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_SIS_FB_INIT, drm_sis_fb_t)
/*
#define DRM_IOCTL_SIS_FLIP DRM_IOW( 0x48, drm_sis_flip_t)
#define DRM_IOCTL_SIS_FLIP_INIT DRM_IO( 0x49)
*/
#include <linux/config.h>
-#include "sis.h"
#include "drmP.h"
#include "sis_drm.h"
#include "sis_drv.h"
-#include "drm_core.h"
+#include "drm_pciids.h"
+
+static int postinit( struct drm_device *dev, unsigned long flags )
+{
+ DRM_INFO( "Initialized %s %d.%d.%d %s on minor %d: %s\n",
+ DRIVER_NAME,
+ DRIVER_MAJOR,
+ DRIVER_MINOR,
+ DRIVER_PATCHLEVEL,
+ DRIVER_DATE,
+ dev->minor,
+ pci_pretty_name(dev->pdev)
+ );
+ return 0;
+}
+
+static int version( drm_version_t *version )
+{
+ int len;
+
+ version->version_major = DRIVER_MAJOR;
+ version->version_minor = DRIVER_MINOR;
+ version->version_patchlevel = DRIVER_PATCHLEVEL;
+ DRM_COPY( version->name, DRIVER_NAME );
+ DRM_COPY( version->date, DRIVER_DATE );
+ DRM_COPY( version->desc, DRIVER_DESC );
+ return 0;
+}
+
+static struct pci_device_id pciidlist[] = {
+ sisdrv_PCI_IDS
+};
+
+extern drm_ioctl_desc_t sis_ioctls[];
+extern int sis_max_ioctl;
+
+static struct drm_driver driver = {
+ .driver_features = DRIVER_USE_AGP | DRIVER_USE_MTRR,
+ .context_ctor = sis_init_context,
+ .context_dtor = sis_final_context,
+ .reclaim_buffers = drm_core_reclaim_buffers,
+ .get_map_ofs = drm_core_get_map_ofs,
+ .get_reg_ofs = drm_core_get_reg_ofs,
+ .postinit = postinit,
+ .version = version,
+ .ioctls = sis_ioctls,
+ .fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .release = drm_release,
+ .ioctl = drm_ioctl,
+ .mmap = drm_mmap,
+ .poll = drm_poll,
+ .fasync = drm_fasync,
+ },
+ .pci_driver = {
+ .name = DRIVER_NAME,
+ .id_table = pciidlist,
+ }
+};
+
+static int __init sis_init(void)
+{
+ driver.num_ioctls = sis_max_ioctl;
+ return drm_init(&driver);
+}
+
+static void __exit sis_exit(void)
+{
+ drm_exit(&driver);
+}
+
+module_init(sis_init);
+module_exit(sis_exit);
+
+MODULE_AUTHOR( DRIVER_AUTHOR );
+MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_LICENSE("GPL and additional rights");
#ifndef _SIS_DRV_H_
#define _SIS_DRV_H_
+/* General customization:
+ */
+
+#define DRIVER_AUTHOR "SIS"
+#define DRIVER_NAME "sis"
+#define DRIVER_DESC "SIS 300/630/540"
+#define DRIVER_DATE "20030826"
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 1
+#define DRIVER_PATCHLEVEL 0
+
#include "sis_ds.h"
typedef struct drm_sis_private {
extern int sis_ioctl_agp_free( DRM_IOCTL_ARGS );
extern int sis_fb_init( DRM_IOCTL_ARGS );
+extern int sis_init_context(drm_device_t *dev, int context);
+extern int sis_final_context(drm_device_t *dev, int context);
+
#endif
/*
* GLX Hardware Device Driver common code
- * Copyright (C) 1999 Keith Whitwell
+ * Copyright (C) 1999 Wittawat Yamwong
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
* OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * KEITH WHITWELL, OR ANY OTHER CONTRIBUTORS BE LIABLE FOR ANY CLAIM,
+ * WITTAWAT YAMWONG, OR ANY OTHER CONTRIBUTORS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
struct mem_block_t *heap;
int ofs,size;
int align;
- int free:1;
- int reserved:1;
+ unsigned int free:1;
+ unsigned int reserved:1;
};
typedef struct mem_block_t TMemBlock;
typedef struct mem_block_t *PMemBlock;
static inline unsigned char ds1286_is_updating(void);
-static spinlock_t ds1286_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ds1286_lock);
static int ds1286_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data);
static void __aux_write_ack(int val);
#endif
-static spinlock_t kbd_controller_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(kbd_controller_lock);
static unsigned char handle_kbd_event(void);
/* used only by send_data - set by keyboard_interrupt */
*/
#define EFI_RTC_EPOCH 1998
-static spinlock_t efi_rtc_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(efi_rtc_lock);
static int efi_rtc_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg);
* changed * appropriately. See below.
*/
- char zftc_src[] ="$Source: /homes/cvs/ftape-stacked/ftape/compressor/zftape-compress.c,v $";
- char zftc_rev[] = "$Revision: 1.1.6.1 $";
- char zftc_dat[] = "$Date: 1997/11/16 15:15:56 $";
-
-#include <linux/version.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/module.h>
if (TRACE_LEVEL >= ft_t_info) {
printk(
KERN_INFO "(c) 1997 Claus-Justus Heine (claus@momo.math.rwth-aachen.de)\n"
-KERN_INFO "Compressor for zftape (lzrw3 algorithm)\n"
-KERN_INFO "Compiled for kernel version %s\n", UTS_RELEASE);
+KERN_INFO "Compressor for zftape (lzrw3 algorithm)\n");
}
#else /* !MODULE */
/* print a short no-nonsense boot message */
- printk("zftape compressor v1.00a 970514 for Linux " UTS_RELEASE "\n");
+ printk("zftape compressor v1.00a 970514\n");
printk("For use with " FTAPE_VERSION "\n");
#endif /* MODULE */
TRACE(ft_t_info, "zft_compressor_init @ 0x%p", zft_compressor_init);
#include "../lowlevel/fdc-io.h"
#include "../lowlevel/fc-10.h"
-__u16 inbs_magic[] = {
+static __u16 inbs_magic[] = {
0x3, 0x3, 0x0, 0x4, 0x7, 0x2, 0x5, 0x3, 0x1, 0x4,
0x3, 0x5, 0x2, 0x0, 0x3, 0x7, 0x4, 0x2,
0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7
};
-__u16 fc10_ports[] = {
+static __u16 fc10_ports[] = {
0x180, 0x210, 0x2A0, 0x300, 0x330, 0x340, 0x370
};
extern volatile fdc_mode_enum fdc_mode;
extern int fdc_setup_error; /* outdated ??? */
extern wait_queue_head_t ftape_wait_intr;
-extern int ftape_motor; /* fdc motor line state */
extern volatile int ftape_current_cylinder; /* track nr FDC thinks we're on */
extern volatile __u8 fdc_head; /* FDC head */
extern volatile __u8 fdc_cyl; /* FDC track */
extern int fdc_ready_wait(unsigned int timeout);
extern int fdc_command(const __u8 * cmd_data, int cmd_len);
extern int fdc_result(__u8 * res_data, int res_len);
-extern int fdc_issue_command(const __u8 * out_data, int out_count,
- __u8 * in_data, int in_count);
extern int fdc_interrupt_wait(unsigned int time);
-extern int fdc_set_seek_rate(int seek_rate);
extern int fdc_seek(int track);
extern int fdc_sense_drive_status(int *st3);
extern void fdc_motor(int motor);
extern void fdc_reset(void);
-extern int fdc_recalibrate(void);
extern void fdc_disable(void);
extern int fdc_fifo_threshold(__u8 threshold,
int *fifo_state, int *lock_state, int *fifo_thr);
forward, backward
} mode_type;
+#if 0
+static void ftape_put_bad_sector_entry(int segment_id, SectorMap new_map);
+#endif
+
#if 0
/* fix_tape converts a normal QIC-80 tape into a 'wide' tape.
* For testing purposes only !
}
}
-void ftape_put_bad_sector_entry(int segment_id, SectorMap new_map)
+#if 0
+static void ftape_put_bad_sector_entry(int segment_id, SectorMap new_map)
{
SectorCount *ptr = (SectorCount *)bad_sector_map;
int count;
}
TRACE_EXIT;
}
+#endif /* 0 */
SectorMap ftape_get_bad_sector_entry(int segment_id)
{
extern void update_bad_sector_map(__u8 * buffer);
extern void ftape_extract_bad_sector_map(__u8 * buffer);
extern SectorMap ftape_get_bad_sector_entry(int segment_id);
-extern void ftape_put_bad_sector_entry(int segment_id, SectorMap mask);
extern __u8 *ftape_find_end_of_bsm_list(__u8 * address);
extern void ftape_init_bsm(void);
#endif
}
-void ftape_set_status(const ftape_info *status)
-{
- ftape_status = *status;
-}
-
static int ftape_not_operational(int status)
{
/* return true if status indicates tape can not be used.
return i;
}
-void ftape_detach_drive(void)
+static void ftape_detach_drive(void)
{
TRACE_FUN(ft_t_any);
ft_history.rewinds = 0;
}
-int ftape_activate_drive(vendor_struct * drive_type)
+static int ftape_activate_drive(vendor_struct * drive_type)
{
int result = 0;
TRACE_FUN(ft_t_flow);
TRACE_EXIT result;
}
-int ftape_get_drive_status(void)
+static int ftape_get_drive_status(void)
{
int result;
int status;
TRACE_EXIT 0;
}
-void ftape_log_vendor_id(void)
+static void ftape_log_vendor_id(void)
{
int vendor_index;
TRACE_FUN(ft_t_flow);
TRACE_EXIT 0;
}
-int ftape_init_drive(void)
+static int ftape_init_drive(void)
{
int status;
qic_model model;
unsigned int data_rate,
unsigned int tape_len);
extern int ftape_calibrate_data_rate(unsigned int qic_std);
-extern int ftape_init_drive(void);
extern const ftape_info *ftape_get_status(void);
#endif
return result;
}
-int ftape_parameter_wait(unsigned int parm, unsigned int timeout, int *status)
+static int ftape_parameter_wait(unsigned int parm, unsigned int timeout, int *status)
{
int result;
TRACE_EXIT 0;
}
-int ftape_in_error_state(int status)
-{
- TRACE_FUN(ft_t_any);
-
- if ((status & QIC_STATUS_READY) && (status & QIC_STATUS_ERROR)) {
- TRACE_ABORT(1, ft_t_warn, "warning: error status set!");
- }
- TRACE_EXIT 0;
-}
-
int ftape_report_configuration(qic_model *model,
unsigned int *rate,
int *qic_std,
TRACE_EXIT (result < 0) ? -EIO : 0;
}
-int ftape_report_rom_version(int *version)
+static int ftape_report_rom_version(int *version)
{
if (ftape_report_operation(version, QIC_REPORT_ROM_VERSION, 8) < 0) {
}
}
-int ftape_report_signature(int *signature)
-{
- int result;
-
- result = ftape_command(28);
- result = ftape_report_operation(signature, 9, 8);
- result = ftape_command(30);
- return (result < 0) ? -EIO : 0;
-}
-
void ftape_report_vendor_id(unsigned int *id)
{
int result;
unsigned int timeout,
int *status);
extern int ftape_parameter(unsigned int parameter);
-extern int ftape_parameter_wait(unsigned int parameter,
- unsigned int timeout,
- int *status);
extern int ftape_report_operation(int *status,
qic117_cmd_t command,
int result_length);
extern int ftape_report_status(int *status);
extern int ftape_ready_wait(unsigned int timeout, int *status);
extern int ftape_seek_head_to_track(unsigned int track);
-extern int ftape_in_error_state(int status);
extern int ftape_set_data_rate(unsigned int new_rate, unsigned int qic_std);
extern int ftape_report_error(unsigned int *error,
qic117_cmd_t *command,
return len;
}
-int ftape_read_proc(char *page, char **start, off_t off,
- int count, int *eof, void *data)
+static int ftape_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
{
char *ptr = page;
size_t len;
/* Read Id of first sector passing tape head.
*/
-int ftape_read_id(void)
+static int ftape_read_id(void)
{
int status;
__u8 out[2];
extern buffer_struct *ftape_get_buffer (ft_buffer_queue_t pos);
extern int ftape_buffer_id (ft_buffer_queue_t pos);
extern void ftape_reset_buffer(void);
-extern int ftape_read_id(void);
extern void ftape_tape_parameters(__u8 drive_configuration);
extern int ftape_wait_segment(buffer_state_enum state);
extern int ftape_dumb_stop(void);
TRACE_ABORT(0, ft_t_noise,
"allocated buffer @ %p, %d bytes", *(void **)new, size);
}
-int zft_vcalloc_always(void *new, size_t size)
-{
- TRACE_FUN(ft_t_flow);
-
- zft_vfree(new, size);
- TRACE_EXIT zft_vcalloc_once(new, size);
-}
int zft_vmalloc_always(void *new, size_t size)
{
TRACE_FUN(ft_t_flow);
extern int zft_vmalloc_once(void *new, size_t size);
extern int zft_vcalloc_once(void *new, size_t size);
extern int zft_vmalloc_always(void *new, size_t size);
-extern int zft_vcalloc_always(void *new, size_t size);
extern void zft_vfree(void *old, size_t size);
extern void *zft_kmalloc(size_t size);
extern void zft_kfree(void *old, size_t size);
#include <linux/config.h>
#include <linux/module.h>
#include <linux/errno.h>
-#include <linux/version.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include "../zftape/zftape-ctl.h"
#include "../zftape/zftape-buffers.h"
-char zft_src[] __initdata = "$Source: /homes/cvs/ftape-stacked/ftape/zftape/zftape-init.c,v $";
-char zft_rev[] __initdata = "$Revision: 1.8 $";
-char zft_dat[] __initdata = "$Date: 1997/11/06 00:48:56 $";
-
MODULE_AUTHOR("(c) 1996, 1997 Claus-Justus Heine "
"(claus@momo.math.rwth-aachen.de)");
MODULE_DESCRIPTION(ZFTAPE_VERSION " - "
}
}
-struct zft_cmpr_ops *zft_cmpr_unregister(void)
-{
- struct zft_cmpr_ops *old_ops = zft_cmpr_ops;
- TRACE_FUN(ft_t_flow);
-
- zft_cmpr_ops = NULL;
- TRACE_EXIT old_ops;
-}
-
/* lock the zft-compressor() module.
*/
int zft_cmpr_lock(int try_to_load)
KERN_INFO
"Support for QIC-113 compatible volume table, dynamic memory allocation\n"
KERN_INFO
-"and builtin compression (lzrw3 algorithm).\n"
-KERN_INFO
-"Compiled for Linux version %s\n", UTS_RELEASE);
+"and builtin compression (lzrw3 algorithm).\n");
}
#else /* !MODULE */
/* print a short no-nonsense boot message */
- printk(KERN_INFO ZFTAPE_VERSION " for Linux " UTS_RELEASE "\n");
+ printk(KERN_INFO ZFTAPE_VERSION "\n");
#endif /* MODULE */
TRACE(ft_t_info, "zft_init @ 0x%p", zft_init);
TRACE(ft_t_info,
/* zftape-init.c defined global functions.
*/
extern int zft_cmpr_register(struct zft_cmpr_ops *new_ops);
-extern struct zft_cmpr_ops *zft_cmpr_unregister(void);
extern int zft_cmpr_lock(int try_to_load);
#endif
int zft_deblock_segment = -1;
zft_status_enum zft_io_state = zft_idle;
int zft_header_changed;
-int zft_bad_sector_map_changed;
int zft_qic113; /* conform to old specs. and old zftape */
int zft_use_compression;
zft_position zft_pos = {
extern int zft_deblock_segment;
extern zft_status_enum zft_io_state;
extern int zft_header_changed;
-extern int zft_bad_sector_map_changed;
extern int zft_qic113; /* conform to old specs. and old zftape */
extern int zft_use_compression;
extern unsigned int zft_blk_sz;
static zft_volinfo eot_vtbl;
static zft_volinfo *cur_vtbl;
-inline void zft_new_vtbl_entry(void)
+static inline void zft_new_vtbl_entry(void)
{
struct list_head *tmp = &zft_last_vtbl->node;
zft_volinfo *new = zft_kmalloc(sizeof(zft_volinfo));
* that buffer already contains the old volume-table, so that vtbl
 * entries without the zft_volume flag set can safely be ignored.
*/
-void zft_create_volume_headers(__u8 *buffer)
+static void zft_create_volume_headers(__u8 *buffer)
{
__u8 *entry;
struct list_head *tmp;
/* exported functions */
extern void zft_init_vtbl (void);
extern void zft_free_vtbl (void);
-extern void zft_new_vtbl_entry (void);
extern int zft_extract_volume_headers(__u8 *buffer);
extern int zft_update_volume_table (unsigned int segment);
extern int zft_open_volume (zft_position *pos,
/* zftape-init.c */
EXPORT_SYMBOL(zft_cmpr_register);
-EXPORT_SYMBOL(zft_cmpr_unregister);
/* zftape-read.c */
EXPORT_SYMBOL(zft_fetch_segment_fraction);
/* zftape-buffers.c */
}
-int gs_real_chars_in_buffer(struct tty_struct *tty)
+static int gs_real_chars_in_buffer(struct tty_struct *tty)
{
struct gs_port *port;
func_enter ();
}
-void gs_shutdown_port (struct gs_port *port)
+static void gs_shutdown_port (struct gs_port *port)
{
unsigned long flags;
}
-void gs_do_softint(void *private_)
-{
- struct gs_port *port = private_;
- struct tty_struct *tty;
-
- func_enter ();
-
- if (!port) return;
-
- tty = port->tty;
-
- if (!tty) return;
-
- if (test_and_clear_bit(RS_EVENT_WRITE_WAKEUP, &port->event)) {
- tty_wakeup(tty);
- wake_up_interruptible(&tty->write_wait);
- }
- func_exit ();
-}
-
-
int gs_block_til_ready(void *port_, struct file * filp)
{
struct gs_port *port = port_;
EXPORT_SYMBOL(gs_stop);
EXPORT_SYMBOL(gs_start);
EXPORT_SYMBOL(gs_hangup);
-EXPORT_SYMBOL(gs_do_softint);
EXPORT_SYMBOL(gs_block_til_ready);
EXPORT_SYMBOL(gs_close);
EXPORT_SYMBOL(gs_set_termios);
static u32 hpet_ntimer, hpet_nhpet, hpet_max_freq = HPET_USER_FREQ;
/* A lock for concurrent access by app and isr hpet activity. */
-static spinlock_t hpet_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(hpet_lock);
/* A lock for concurrent intermodule access to hpet and isr hpet activity. */
-static spinlock_t hpet_task_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(hpet_task_lock);
#define HPET_DEV_NAME (7)
* Protect the list of hvc_struct instances from inserts and removals during
* list traversal.
*/
-static spinlock_t hvc_structs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(hvc_structs_lock);
/*
* Initial console vtermnos for console API usage prior to full console
kobject_init(&hp->kobj);
hp->kobj.ktype = &hvc_kobj_type;
- hp->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&hp->lock);
spin_lock(&hvc_structs_lock);
hp->index = ++hvc_count;
list_add_tail(&(hp->next), &hvc_structs);
static unsigned long *hvcs_pi_buff;
/* Only allow one hvcs_struct to use the hvcs_pi_buff at a time. */
-static spinlock_t hvcs_pi_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(hvcs_pi_lock);
/* One vty-server per hvcs_struct */
struct hvcs_struct {
#define from_kobj(kobj) container_of(kobj, struct hvcs_struct, kobj)
static struct list_head hvcs_structs = LIST_HEAD_INIT(hvcs_structs);
-static spinlock_t hvcs_structs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(hvcs_structs_lock);
static void hvcs_unthrottle(struct tty_struct *tty);
static void hvcs_throttle(struct tty_struct *tty);
/* hvcsd->tty is zeroed out with the memset */
memset(hvcsd, 0x00, sizeof(*hvcsd));
- hvcsd->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&hvcsd->lock);
/* Automatically incs the refcount the first time */
kobject_init(&hvcsd->kobj);
/* Set up the callback for terminating the hvcs_struct's life */
hvcs_tty_driver->driver_name = hvcs_driver_name;
hvcs_tty_driver->name = hvcs_device_node;
+ hvcs_tty_driver->devfs_name = hvcs_device_node;
/*
* We'll let the system assign us a major number, indicated by leaving
static int __init dmi_iterate(void (*decode)(DMIHeader *))
{
- unsigned char buf[20];
- long fp = 0x000e0000L;
- fp -= 16;
-
- while (fp < 0x000fffffL) {
- fp += 16;
- isa_memcpy_fromio(buf, fp, 20);
- if (memcmp(buf, "_DMI_", 5)==0) {
- u16 num = buf[13]<<8 | buf[12];
- u16 len = buf [7]<<8 | buf [6];
- u32 base = buf[11]<<24 | buf[10]<<16 | buf[9]<<8 | buf[8];
+ unsigned char buf[20];
+ void __iomem *p = ioremap(0xe0000, 0x20000), *q;
+
+ if (!p)
+ return -1;
+
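+	/* Scan the BIOS area 0xE0000-0xFFFFF in 16-byte steps for the "_DMI_" anchor. */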
+ for (q = p; q < p + 0x20000; q += 16) {
+ memcpy_fromio(buf, q, 20);
+ if (memcmp(buf, "_DMI_", 5)==0) {
+ u16 num = buf[13]<<8 | buf[12];
+ u16 len = buf [7]<<8 | buf [6];
+ u32 base = buf[11]<<24 | buf[10]<<16 | buf[9]<<8 | buf[8];
#ifdef I8K_DEBUG
- printk(KERN_INFO "DMI %d.%d present.\n",
- buf[14]>>4, buf[14]&0x0F);
- printk(KERN_INFO "%d structures occupying %d bytes.\n",
- buf[13]<<8 | buf[12],
- buf [7]<<8 | buf[6]);
- printk(KERN_INFO "DMI table at 0x%08X.\n",
- buf[11]<<24 | buf[10]<<16 | buf[9]<<8 | buf[8]);
+ printk(KERN_INFO "DMI %d.%d present.\n",
+ buf[14]>>4, buf[14]&0x0F);
+ printk(KERN_INFO "%d structures occupying %d bytes.\n",
+ buf[13]<<8 | buf[12],
+ buf [7]<<8 | buf[6]);
+ printk(KERN_INFO "DMI table at 0x%08X.\n",
+ buf[11]<<24 | buf[10]<<16 | buf[9]<<8 | buf[8]);
#endif
- if (dmi_table(base, len, num, decode)==0) {
- return 0;
- }
+ if (dmi_table(base, len, num, decode)==0) {
+ iounmap(p);
+ return 0;
+ }
+ }
}
- }
- return -1;
+ iounmap(p);
+ return -1;
}
/* end of DMI code */
/* fip_firm.h - Intelliport II loadware */
/* -31232 bytes read from ff.lod */
-unsigned char fip_firm[] __initdata = {
+static unsigned char fip_firm[] __initdata = {
0x3C,0x42,0x37,0x18,0x02,0x01,0x03,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x57,0x65,0x64,0x20,0x44,0x65,0x63,0x20,0x30,0x31,0x20,0x31,0x32,0x3A,0x32,0x34,
0x3A,0x33,0x30,0x20,0x31,0x39,0x39,0x39,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
//static UCHAR ct37[]={ 5, BYP|VIP, 0x25,0,0,0,0 }; // FLOW PACKET
// Back to normal
-static UCHAR ct38[] = {11, BTH|VAR, 0x26,0,0,0,0,0,0,0,0,0,0 }; // DEF KEY SEQ
+//static UCHAR ct38[] = {11, BTH|VAR, 0x26,0,0,0,0,0,0,0,0,0,0 }; // DEF KEY SEQ
//static UCHAR ct39[]={ 3, BTH|END, 0x27,0,0 }; // OPOSTON
//static UCHAR ct40[]={ 1, BTH|END, 0x28 }; // OPOSTOFF
static UCHAR ct41[] = { 1, BYP, 0x29 }; // RESUME
//static UCHAR ct50[]={ 1, BTH, 0x32 }; // DTRFLOWENAB
//static UCHAR ct51[]={ 1, BTH, 0x33 }; // DTRFLOWDSAB
//static UCHAR ct52[]={ 1, BTH, 0x34 }; // BAUDTABRESET
-static UCHAR ct53[] = { 3, BTH, 0x35,0,0 }; // BAUDREMAP
+//static UCHAR ct53[] = { 3, BTH, 0x35,0,0 }; // BAUDREMAP
static UCHAR ct54[] = { 3, BTH, 0x36,0,0 }; // CUSTOMBAUD1
static UCHAR ct55[] = { 3, BTH, 0x37,0,0 }; // CUSTOMBAUD2
static UCHAR ct56[] = { 2, BTH|END, 0x38,0 }; // PAUSE
//* Code *
//********
-//******************************************************************************
-// Function: i2cmdSetSeq(type, size, string)
-// Parameters: type - sequence number
-// size - length of sequence
-// string - substitution string
-//
-// Returns: Pointer to command structure
-//
-// Description:
-//
-// This routine sets the parameters of command 38 Define Hot Key sequence (alias
-// "special receive sequence"). Returns a pointer to the structure. Endeavours
-// to be bullet-proof in that the sequence number is forced in range, and any
-// out-of-range sizes are forced to zero.
-//******************************************************************************
-cmdSyntaxPtr
-i2cmdSetSeq(unsigned char type, unsigned char size, unsigned char *string)
-{
- cmdSyntaxPtr pCM = (cmdSyntaxPtr) ct38;
- unsigned char *pc;
-
- pCM->cmd[1] = ((type > 0xf) ? 0xf : type); // Sequence number
- size = ((size > 0x8) ? 0 : size); // size
- pCM->cmd[2] = size;
- pCM->length = 3 + size; // UPDATES THE LENGTH!
-
- pc = &(pCM->cmd[3]);
-
- while(size--) {
- *pc++ = *string++;
- }
- return pCM;
-}
-
//******************************************************************************
// Function: i2cmdUnixFlags(iflag, cflag, lflag)
// Parameters: Unix tty flags
return pCM;
}
-//******************************************************************************
-// Function: i2cmdBaudRemap(dest,src)
-// Parameters: ?
-//
-// Returns: Pointer to command structure
-//
-// Description:
-//
-// This routine sets the parameters of command 53 and returns a pointer to the
-// appropriate structure.
-//******************************************************************************
-cmdSyntaxPtr
-i2cmdBaudRemap(unsigned char dest, unsigned char src)
-{
- cmdSyntaxPtr pCM = (cmdSyntaxPtr) ct53;
-
- pCM->cmd[1] = dest;
- pCM->cmd[2] = src;
- return pCM;
-}
-
//******************************************************************************
// Function: i2cmdBaudDef(which, rate)
// Parameters: ?
// there is more than one parameter to assign, we must use a function rather
// than a macro (as is usually done).
//
-extern cmdSyntaxPtr i2cmdSetSeq(UCHAR seqno, UCHAR size, UCHAR *string);
extern cmdSyntaxPtr i2cmdUnixFlags(USHORT iflag,USHORT cflag,USHORT lflag);
-extern cmdSyntaxPtr i2cmdBaudRemap(UCHAR dest, UCHAR src);
extern cmdSyntaxPtr i2cmdBaudDef(int which, USHORT rate);
// Declarations for the global arrays used to bear the commands and their
// library code in response to data movement and shouldn't ever be sent by the
// user code. See i2pack.h and the body of i2lib.c for details.
-// COMMAND 38: Define the hot-key sequence
-// seqno: sequence number 0-15
-// size: number of characters in sequence (1-8)
-// string: pointer to the characters
-// (if size == 0, "undefines" this sequence
-//
-#define CMD_SET_SEQ(seqno,size,string) i2cmdSetSeq(seqno,size,string)
-
// Enable on-board post-processing, using options given in oflag argument.
// Formerly, this command was automatically preceded by a CMD_OPOST_OFF command
// because the loadware does not permit sending back-to-back CMD_OPOST_ON
#define CMD_DTRFL_DSAB (cmdSyntaxPtr)(ct51) // Disable DTR flow control
#define CMD_BAUD_RESET (cmdSyntaxPtr)(ct52) // Reset baudrate table
-// COMMAND 53: Remap baud rate table
-// dest = index of table entry to be changed
-// src = index value to substitute.
-// at default mapping table is f(x) = x
-//
-#define CMD_BAUD_REMAP(dest,src) i2cmdBaudRemap(dest,src)
-
// COMMAND 54: Define custom rate #1
// rate = (short) 1/10 of the desired baud rate
//
//* Code *
//********
-inline int
+static inline int
i2Validate ( i2ChanStrPtr pCh )
{
//ip2trace(pCh->port_index, ITRC_VERIFY,ITRC_ENTER,2,pCh->validity,
IPMI is a standard for managing sensors (temperature,
voltage, etc.) in a system.
- See Documentation/IPMI.txt for more details on the driver.
+ See <file:Documentation/IPMI.txt> for more details on the driver.
If unsure, say N.
extern void (*pm_power_off)(void);
/* Stuff from the get device id command. */
-unsigned int mfg_id;
-unsigned int prod_id;
-unsigned char capabilities;
+static unsigned int mfg_id;
+static unsigned int prod_id;
+static unsigned char capabilities;
/* We use our own messages for this operation, we don't let the system
allocate them, since we may be in a panic situation. The whole
};
static struct poweroff_function poweroff_functions[] = {
- { "ATCA", ipmi_atca_detect, ipmi_poweroff_atca },
- { "CPI1", ipmi_cpi1_detect, ipmi_poweroff_cpi1 },
+ { .platform_type = "ATCA",
+ .detect = ipmi_atca_detect,
+ .poweroff_func = ipmi_poweroff_atca },
+ { .platform_type = "CPI1",
+ .detect = ipmi_cpi1_detect,
+ .poweroff_func = ipmi_poweroff_cpi1 },
/* Chassis should generally be last, other things should override
it. */
- { "chassis", ipmi_chassis_detect, ipmi_poweroff_chassis },
+ { .platform_type = "chassis",
+ .detect = ipmi_chassis_detect,
+ .poweroff_func = ipmi_poweroff_chassis },
};
#define NUM_PO_FUNCS (sizeof(poweroff_functions) \
/ sizeof(struct poweroff_function))
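The poweroff_functions[] change above switches from positional to C99 designated initializers, so the table stays correct even if fields in struct poweroff_function are reordered. The NUM_PO_FUNCS macro keeps the classic sizeof-array / sizeof-element count; newer kernel code usually spells that ARRAY_SIZE(). A minimal sketch of the idiom with a hypothetical handler table:

/* Sketch only: struct handler and the functions named here are
 * hypothetical, used just to show the designated-initializer style. */
struct handler {
	const char *name;
	int (*detect)(void);
	void (*run)(void);
};

static struct handler handlers[] = {
	{ .name = "first",  .detect = first_detect,  .run = first_run  },
	{ .name = "second", .detect = second_detect, .run = second_run },
};

#define NUM_HANDLERS (sizeof(handlers) / sizeof(handlers[0]))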
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-
#include <linux/module.h>
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/miscdevice.h>
static int ite_gpio_open(struct inode *inode, struct file *file)
{
- unsigned int minor = iminor(inode);
- if (minor != GPIO_MINOR)
- return -ENODEV;
-
return 0;
}
static int ite_gpio_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
{
-
static struct ite_gpio_ioctl_data ioctl_data;
if (copy_from_user(&ioctl_data, (struct ite_gpio_ioctl_data *)arg,
return -ENOIOCTLCMD;
}
+
return 0;
}
-static void ite_gpio_irq_handler(int this_irq, void *dev_id, struct pt_regs *regs)
+static void ite_gpio_irq_handler(int this_irq, void *dev_id,
+ struct pt_regs *regs)
{
int i,line;
}
}
-static struct file_operations ite_gpio_fops =
-{
+static struct file_operations ite_gpio_fops = {
.owner = THIS_MODULE,
.ioctl = ite_gpio_ioctl,
.open = ite_gpio_open,
.release = ite_gpio_release,
};
-/* GPIO_MINOR in include/linux/miscdevice.h */
-static struct miscdevice ite_gpio_miscdev =
-{
- GPIO_MINOR,
+static struct miscdevice ite_gpio_miscdev = {
+ MISC_DYNAMIC_MINOR,
"ite_gpio",
&ite_gpio_fops
};
return 0;
}
-void __exit ite_gpio_exit(void)
+static void __exit ite_gpio_exit(void)
{
misc_deregister(&ite_gpio_miscdev);
}
#include <asm/sn/intr.h>
#include <asm/sn/shub_mmr.h>
#include <asm/sn/nodepda.h>
-
-/* This is ugly and jbarnes has promised me to fix this later */
-#include "../../arch/ia64/sn/include/shubio.h"
+#include <asm/sn/shubio.h>
MODULE_AUTHOR("Jesse Barnes <jbarnes@sgi.com>");
MODULE_DESCRIPTION("SGI Altix RTC Timer");
#include <asm/uaccess.h>
#include <asm/system.h>
-static spinlock_t nvram_state_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nvram_state_lock);
static int nvram_open_cnt; /* #times opened */
static int nvram_open_mode; /* special open modes */
#define NVRAM_WRITE 1 /* opened for writing (exclusive) */
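SPIN_LOCK_UNLOCKED assignments are converted throughout this series: a lock with static storage becomes DEFINE_SPINLOCK(), while a lock embedded in a runtime-allocated structure is set up with spin_lock_init(), as the rio hunks below do for portSem, HostLock and RupLock. A minimal sketch of both forms (struct thing is hypothetical):

#include <linux/spinlock.h>
#include <linux/slab.h>

/* Static lock: declared and initialized in one step. */
static DEFINE_SPINLOCK(state_lock);

/* Hypothetical structure with an embedded lock. */
struct thing {
	spinlock_t lock;
	int count;
};

static struct thing *thing_create(void)
{
	struct thing *t = kmalloc(sizeof(*t), GFP_KERNEL);

	if (t)
		spin_lock_init(&t->lock);	/* runtime initialization */
	return t;
}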
#include <linux/init.h>
#include <linux/kbd_ll.h>
#include <linux/delay.h>
-#include <linux/random.h>
#include <linux/poll.h>
#include <linux/miscdevice.h>
#include <linux/slab.h>
}
-spinlock_t kbd_controller_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(kbd_controller_lock);
static unsigned char handle_kbd_event(void);
return;
}
- add_mouse_randomness(scancode);
if (aux_count) {
int head = queue->head;
i--;
}
if (count-i) {
- file->f_dentry->d_inode->i_atime = CURRENT_TIME;
+ struct inode *inode = file->f_dentry->d_inode;
+ inode->i_atime = current_fs_time(inode->i_sb);
return count-i;
}
if (signal_pending(current))
int RIOBootCodeUNKNOWN(struct rio_info *, struct DownLoad *);
void msec_timeout(struct Host *);
int RIOBootRup(struct rio_info *, uint, struct Host *, struct PKT *);
-int RIOBootComplete(struct rio_info *, struct Host *, uint, struct PktCmd *);
int RIOBootOk(struct rio_info *,struct Host *, ulong);
int RIORtaBound(struct rio_info *, uint);
void FillSlot(int, int, uint, struct Host *);
int RIOKillNeighbour(struct rio_info *, caddr_t);
int RIOSuspendBootRta(struct Host *, int, int);
int RIOFoadWakeup(struct rio_info *);
-int RIOCommandRup(struct rio_info *, uint, struct Host *, struct PKT *);
struct CmdBlk * RIOGetCmdBlk(void);
void RIOFreeCmdBlk(struct CmdBlk *);
int RIOQueueCmdBlk(struct Host *, uint, struct CmdBlk *);
void RIOPollHostCommands(struct rio_info *, struct Host *);
-int RIOStrlen(register char *);
-int RIOStrCmp(register char *, register char *);
-int RIOStrnCmp(register char *, register char *, int);
-void RIOStrNCpy(char *, char *, int);
int RIOWFlushMark(int, struct CmdBlk *);
int RIORFlushEnable(int, struct CmdBlk *);
int RIOUnUse(int, struct CmdBlk *);
/* rioctrl.c */
int copyin(int, caddr_t, int);
-int copyout(caddr_t, int, int);
int riocontrol(struct rio_info *, dev_t,int,caddr_t,int);
int RIOPreemptiveCmd(struct rio_info *,struct Port *,uchar);
caddr_t RIOCheckForATCard(int);
int RIOAssignAT(struct rio_info *, int, caddr_t, int);
int RIOBoardTest(paddr_t, caddr_t, uchar, int);
-int RIOScrub(int, BYTE *, int);
-void RIOAllocateInterrupts(struct rio_info *);
-void RIOStopInterrupts(struct rio_info *, int, int);
void RIOAllocDataStructs(struct rio_info *);
void RIOSetupDataStructs(struct rio_info *);
int RIODefaultName(struct rio_info *, struct Host *, uint);
-int RIOReport(struct rio_info *);
struct rioVersion * RIOVersid(void);
int RIOMapin(paddr_t, int, caddr_t *);
void RIOMapout(paddr_t, long, caddr_t);
void RIOHostReset(uint, volatile struct DpRam *, uint);
/* riointr.c */
-void riopoll(struct rio_info *);
-void riointr(struct rio_info *);
void RIOTxEnable(char *);
void RIOServiceHost(struct rio_info *, struct Host *, int);
-void RIOReceive(struct rio_info *, struct Port *);
int riotproc(struct rio_info *, register struct ttystatics *, int, int);
/* rioparam.c */
/* rioroute.c */
int RIORouteRup(struct rio_info *, uint, struct Host *, struct PKT *);
void RIOFixPhbs(struct rio_info *, struct Host *, uint);
-int RIOCheckIsolated(struct rio_info *, struct Host *, uint);
-int RIOIsolate(struct rio_info *, struct Host *, uint);
-int RIOCheck(struct Host *, uint);
uint GetUnitType(uint);
int RIOSetChange(struct rio_info *);
-void RIOConCon(struct rio_info *, struct Host *, uint, uint, uint, uint, int);
int RIOFindFreeID(struct rio_info *, struct Host *, uint *, uint *);
-int RIOFreeDisconnected(struct rio_info *, struct Host *, int );
-int RIORemoveFromSavedTable(struct rio_info *, struct Map *);
/* riotty.c */
int riotopen(struct tty_struct * tty, struct file * filp);
int riotclose(void *ptr);
-int RIOCookMode(struct ttystatics *);
int riotioctl(struct rio_info *, struct tty_struct *, register int, register caddr_t);
void ttyseth(struct Port *, struct ttystatics *, struct old_sgttyb *sg);
*
* */
-
-#define RCS_ID "$Id: rio.c,v 1.1 1999/07/11 10:13:54 wolff Exp wolff $"
-#define RCS_REV "$Revision: 1.1 $"
-
-
#include <linux/module.h>
#include <linux/config.h>
#include <linux/kdev_t.h>
unsigned int cmd, unsigned long arg);
static int rio_init_drivers(void);
-void my_hd (void *addr, int len);
+static void my_hd (void *addr, int len);
static struct tty_driver *rio_driver, *rio_driver2;
sources use all over the place. */
struct rio_info *p;
-/* struct rio_board boards[RIO_HOSTS]; */
-struct rio_port *rio_ports;
-
-int rio_initialized;
-int rio_nports;
int rio_debug;
- Set rio_poll to 1 to poll every timer tick (10ms on Intel).
This is used when the card cannot use an interrupt for some reason.
*/
-int rio_poll = 1;
+static int rio_poll = 1;
/* These are the only open spaces in my computer. Yours may have more
or less.... */
-int rio_probe_addrs[]= {0xc0000, 0xd0000, 0xe0000};
+static int rio_probe_addrs[]= {0xc0000, 0xd0000, 0xe0000};
#define NR_RIO_ADDRS (sizeof(rio_probe_addrs)/sizeof (int))
.ioctl = rio_fw_ioctl,
};
-struct miscdevice rio_fw_device = {
+static struct miscdevice rio_fw_device = {
RIOCTL_MISC_MINOR, "rioctl", &rio_fw_fops
};
#ifdef DEBUG
-void my_hd (void *ad, int len)
+static void my_hd (void *ad, int len)
{
int i, j, ch;
unsigned char *addr = ad;
}
-void rio_reset_interrupt (struct Host *HostP)
+static void rio_reset_interrupt (struct Host *HostP)
{
func_enter();
* ********************************************************************** */
-struct vpd_prom *get_VPD_PROM (struct Host *hp)
+static struct vpd_prom *get_VPD_PROM (struct Host *hp)
{
static struct vpd_prom vpdp;
char *p;
port->gs.close_delay = HZ/2;
port->gs.closing_wait = 30 * HZ;
port->gs.rd = &rio_real_driver;
- port->portSem = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&port->portSem);
/*
* Initializing wait queue
*/
EEprom. As the bit is read/write for the CPU, we can fix it here,
if we detect that it isn't set correctly. -- REW */
-void fix_rio_pci (struct pci_dev *pdev)
+static void fix_rio_pci (struct pci_dev *pdev)
{
unsigned int hwbase;
unsigned long rebase;
hp->Type = RIO_PCI;
hp->Copy = rio_pcicopy;
hp->Mode = RIO_PCI_BOOT_FROM_RAM;
- hp->HostLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&hp->HostLock);
rio_reset_interrupt (hp);
rio_start_card_running (hp);
hp->Type = RIO_PCI;
hp->Copy = rio_pcicopy;
hp->Mode = RIO_PCI_BOOT_FROM_RAM;
- hp->HostLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&hp->HostLock);
rio_dprintk (RIO_DEBUG_PROBE, "Ivec: %x\n", hp->Ivec);
rio_dprintk (RIO_DEBUG_PROBE, "Mode: %x\n", hp->Mode);
* Moreover, the ISA card will work with the
* special PCI copy anyway. -- REW */
hp->Mode = 0;
- hp->HostLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&hp->HostLock);
vpdp = get_VPD_PROM (hp);
rio_dprintk (RIO_DEBUG_PROBE, "Got VPD ROM\n");
#include "cmdblk.h"
#include "route.h"
+static int RIOBootComplete( struct rio_info *p, struct Host *HostP, uint Rup, struct PktCmd *PktCmdP );
+
static uchar
RIOAtVec2Ctrl[] =
{
HostP->UnixRups[RupN].RupP = &HostP->RupP[RupN];
HostP->UnixRups[RupN].Id = RupN+1;
HostP->UnixRups[RupN].BaseSysPort = NO_PORT;
- HostP->UnixRups[RupN].RupLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&HostP->UnixRups[RupN].RupLock);
}
for ( RupN = 0; RupN<LINKS_PER_UNIT; RupN++ ) {
HostP->UnixRups[RupN+MAX_RUP].RupP = &HostP->LinkStrP[RupN].rup;
HostP->UnixRups[RupN+MAX_RUP].Id = 0;
HostP->UnixRups[RupN+MAX_RUP].BaseSysPort = NO_PORT;
- HostP->UnixRups[RupN+MAX_RUP].RupLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&HostP->UnixRups[RupN+MAX_RUP].RupLock);
}
/*
** If booted by an RTA, HostP->Mapping[Rup].RtaUniqueNum is the booting RTA.
** RtaUniq is the booted RTA.
*/
-int RIOBootComplete( struct rio_info *p, struct Host *HostP, uint Rup, struct PktCmd *PktCmdP )
+static int RIOBootComplete( struct rio_info *p, struct Host *HostP, uint Rup, struct PktCmd *PktCmdP )
{
struct Map *MapP = NULL;
struct Map *MapP2 = NULL;
/*
** Incoming command on the COMMAND_RUP to be processed.
*/
-int
+static int
RIOCommandRup(p, Rup, HostP, PacketP)
struct rio_info * p;
uint Rup;
} while ( Rup );
}
-
-/*
-** Return the length of the named string
-*/
-int
-RIOStrlen(Str)
-register char *Str;
-{
- register int len = 0;
-
- while ( *Str++ )
- len++;
- return len;
-}
-
-/*
-** compares s1 to s2 and return 0 if they match.
-*/
-int
-RIOStrCmp(s1, s2)
-register char *s1;
-register char *s2;
-{
- while ( *s1 && *s2 && *s1==*s2 )
- s1++, s2++;
- return *s1-*s2;
-}
-
-/*
-** compares s1 to s2 for upto n bytes and return 0 if they match.
-*/
-int
-RIOStrnCmp(s1, s2, n)
-register char *s1;
-register char *s2;
-int n;
-{
- while ( n && *s1 && *s2 && *s1==*s2 )
- n--, s1++, s2++;
- return n ? *s1!=*s2 : 0;
-}
-
-/*
-** copy up to 'len' bytes from 'from' to 'to'.
-*/
-void
-RIOStrNCpy(to, from, len)
-char *to;
-char *from;
-int len;
-{
- while ( len-- && (*to++ = *from++) )
- ;
- to[-1]='\0';
-}
-
int
RIOWFlushMark(iPortP, CmdBlkP)
int iPortP;
#define drv_makedev(maj, min) ((((uint) maj & 0xff) << 8) | ((uint) min & 0xff))
-#ifdef linux
int copyin (int arg, caddr_t dp, int siz)
{
int rv;
else return rv;
}
-
-int copyout (caddr_t dp, int arg, int siz)
+static int copyout (caddr_t dp, int arg, int siz)
{
int rv;
else return rv;
}
-#else
-
-int
-copyin(arg, dp, siz)
-int arg;
-caddr_t dp;
-int siz;
-{
- if (rbounds ((unsigned long) arg) >= siz) {
- bcopy ( arg, dp, siz );
- return OK;
- } else
- return ( COPYFAIL );
-}
-
-int
-copyout (dp, arg, siz)
-caddr_t dp;
-int arg;
-int siz;
-{
- if (wbounds ((unsigned long) arg) >= siz ) {
- bcopy ( dp, arg, siz );
- return OK;
- } else
- return ( COPYFAIL );
-}
-#endif
-
int
riocontrol(p, dev, cmd, arg, su)
struct rio_info * p;
p->RIOPortp[loop]->TtyP = &p->channel[loop];
#endif
- p->RIOPortp[loop]->portSem = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&p->RIOPortp[loop]->portSem);
p->RIOPortp[loop]->InUse = NOT_INUSE;
}
#undef bcopy
#define bcopy rio_pcicopy
-int
-RIOPCIinit(struct rio_info *p, int Mode);
+int RIOPCIinit(struct rio_info *p, int Mode);
+
+#if 0
+static void RIOAllocateInterrupts(struct rio_info *);
+static int RIOReport(struct rio_info *);
+static void RIOStopInterrupts(struct rio_info *, int, int);
+#endif
+static int RIOScrub(int, BYTE *, int);
#if 0
extern int rio_intr();
** Call with op not zero, and the RAM will be read and compared with val[op-1]
** to check that the data from the previous phase was retained.
*/
-int
+static int
RIOScrub(op, ram, size)
int op;
BYTE * ram;
** and force into polled mode if told to. Patch up the
** interrupt vector & salute The Queen when you've done.
*/
-void
+#if 0
+static void
RIOAllocateInterrupts(p)
struct rio_info * p;
{
** new-fangled interrupt thingies. Set everything up to just
** poll.
*/
-void
+static void
RIOStopInterrupts(p, Reason, Host)
struct rio_info * p;
int Reason;
}
}
-#if 0
/*
** This function is called at init time to setup the data structures.
*/
}
RIODefaultName(p, HostP, rup);
}
- HostP->UnixRups[rup].RupLock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&HostP->UnixRups[rup].RupLock);
}
}
}
#define RIO_RELEASE "Linux"
#define RELEASE_ID "1.0"
-int
+#if 0
+static int
RIOReport(p)
struct rio_info * p;
{
}
return 0;
}
-
-/*
-** This function returns release/version information as used by ioctl() calls
-** It returns a MAX_VERSION_LEN byte string, null terminated.
-*/
-char *
-OLD_RIOVersid( void )
-{
- static char Info[MAX_VERSION_LEN];
- char * RIORelease = RIO_RELEASE;
- char * cp;
- int ct = 0;
-
- for ( ct=0; RIORelease[ct] && ct<MAX_VERSION_LEN; ct++ )
- Info[ct] = RIORelease[ct];
- if ( ct>=MAX_VERSION_LEN ) {
- Info[MAX_VERSION_LEN-1] = '\0';
- return Info;
- }
- Info[ct++]=' ';
- if ( ct>=MAX_VERSION_LEN ) {
- Info[MAX_VERSION_LEN-1] = '\0';
- return Info;
- }
-
- cp=""; /* Fill the RCS Id here */
-
- while ( *cp && ct<MAX_VERSION_LEN )
- Info[ct++] = *cp++;
- if ( ct<MAX_VERSION_LEN-1 )
- Info[ct] = '\0';
- Info[MAX_VERSION_LEN-1] = '\0';
- return Info;
-}
-
+#endif
static struct rioVersion stVersion;
#include "rioioctl.h"
+static void RIOReceive(struct rio_info *, struct Port *);
-/*
-** riopoll is called every clock tick. Once the /dev/rio device has been
-** opened, and polldistributed( ) has been called, this routine is called
-** every clock tick *by every cpu*. The 'interesting' piece of code that
-** manipulates 'RIONumCpus' and 'RIOCpuCountdown' is used to fair-share
-** the work between the CPUs. If there are 'N' cpus, then each poll time
-** we increment a counter, modulo 'N-1'. When this counter is 0, we call
-** the interrupt handler. This has the effect that polls are serviced
-** by processor 'N', 'N-1', 'N-2', ... '0', round and round. Neat.
-*/
-void
-riopoll(p)
-struct rio_info * p;
-{
- int host;
-
- /*
- ** Here's the deal. We try to fair share as much as possible amongst
- ** all the processors that are available. Since each processor
- ** should generate HZ ticks per second and since we only need HZ ticks
- ** in total for proper operation we simply attempt to cycle round each
- ** processor in turn, using RIOCpuCountdown to decide whether to call
- ** the interrupt routine. ( In fact the count zeroes when it reaches
- ** one less than the total number of processors - so e.g. on a two
- ** processor system RIOService will get called 2*HZ times per second. )
- ** this_cpu (cur_cpu()) tells us the number of the current processor
- ** as follows:
- **
- ** 0 - default CPU
- ** 1 - first extra CPU
- ** 2 - second extra CPU
- ** etc.
- */
-
- /*
- ** okay, we've got a cpu that hasn't had a go recently
- ** - lets check to see what needs doing.
- */
- for ( host=0; host<p->RIONumHosts; host++ ) {
- struct Host *HostP = &p->RIOHosts[host];
-
- rio_spin_lock( &HostP->HostLock );
-
- if ( ( (HostP->Flags & RUN_STATE) != RC_RUNNING ) ||
- HostP->InIntr ) {
- rio_spin_unlock (&HostP->HostLock);
- continue;
- }
-
- if ( RWORD( HostP->ParmMapP->rup_intr ) ||
- RWORD( HostP->ParmMapP->rx_intr ) ||
- RWORD( HostP->ParmMapP->tx_intr ) ) {
- HostP->InIntr = 1;
-
-#ifdef FUTURE_RELEASE
- if( HostP->Type == RIO_EISA )
- INBZ( HostP->Slot, EISA_INTERRUPT_RESET );
- else
-#endif
- WBYTE( HostP->ResetInt , 0xff );
-
- rio_spin_lock(&HostP->HostLock);
-
- p->_RIO_Polled++;
- RIOServiceHost(p, HostP, 'p' );
- rio_spin_lock( &HostP->HostLock);
- HostP->InIntr = 0;
- rio_spin_unlock (&HostP->HostLock);
- }
- }
- rio_spin_unlock (&p->RIOIntrSem);
-}
-
-
-char *firstchars (char *p, int nch)
+static char *firstchars (char *p, int nch)
{
static char buf[2][128];
static int t=0;
}
-/*
-** When a real-life interrupt comes in here, we try to find out
-** which host card it belongs to, and then service only that host
-** Notice the cunning way that, once we've found a candidate, we
-** continue just in case we are sharing interrupts.
-*/
-void
-riointr(p)
-struct rio_info * p;
-{
- int host;
-
- for ( host=0; host<p->RIONumHosts; host++ ) {
- struct Host *HostP = &p->RIOHosts[host];
-
- rio_dprintk (RIO_DEBUG_INTR, "riointr() doing host %d type %d\n", host, HostP->Type);
-
- switch( HostP->Type ) {
- case RIO_AT:
- case RIO_MCA:
- case RIO_PCI:
- rio_spin_lock(&HostP->HostLock);
- WBYTE(HostP->ResetInt , 0xff);
- if ( !HostP->InIntr ) {
- HostP->InIntr = 1;
- rio_spin_unlock (&HostP->HostLock);
- p->_RIO_Interrupted++;
- RIOServiceHost(p, HostP, 'i');
- rio_spin_lock(&HostP->HostLock);
- HostP->InIntr = 0;
- }
- rio_spin_unlock(&HostP->HostLock);
- break;
-#ifdef FUTURE_RELEASE
- case RIO_EISA:
- if ( ivec == HostP->Ivec )
- {
- OldSpl = LOCKB( &HostP->HostLock );
- INBZ( HostP->Slot, EISA_INTERRUPT_RESET );
- if ( !HostP->InIntr )
- {
- HostP->InIntr = 1;
- UNLOCKB( &HostP->HostLock, OldSpl );
- if ( this_cpu < RIO_CPU_LIMIT )
- {
- int intrSpl = LOCKB( &RIOIntrLock );
- UNLOCKB( &RIOIntrLock, intrSpl );
- }
- p->_RIO_Interrupted++;
- RIOServiceHost( HostP, 'i' );
- OldSpl = LOCKB( &HostP->HostLock );
- HostP->InIntr = 0;
- }
- UNLOCKB( &HostP->HostLock, OldSpl );
- done++;
- }
- break;
-#endif
- }
-
- HostP->IntSrvDone++;
- }
-
-#ifdef FUTURE_RELEASE
- if ( !done )
- {
- cmn_err( CE_WARN, "RIO: Interrupt received with vector 0x%x\n", ivec );
- cmn_err( CE_CONT, " Valid vectors are:\n");
- for ( host=0; host<RIONumHosts; host++ )
- {
- switch( RIOHosts[host].Type )
- {
- case RIO_AT:
- case RIO_MCA:
- case RIO_EISA:
- cmn_err( CE_CONT, "0x%x ", RIOHosts[host].Ivec );
- break;
- case RIO_PCI:
- cmn_err( CE_CONT, "0x%x ", get_intr_arg( RIOHosts[host].PciDevInfo.busnum, IDIST_PCI_IRQ( RIOHosts[host].PciDevInfo.slotnum, RIOHosts[host].PciDevInfo.funcnum ) ));
- break;
- }
- }
- cmn_err( CE_CONT, "\n" );
- }
-#endif
-}
-
/*
** RIO Host Service routine. Does all the work traditionally associated with an
** interrupt.
** NB: Called with the tty locked. The spl from the lockb( ) is passed.
** we return the ttySpl level that we re-locked at.
*/
-void
+static void
RIOReceive(p, PortP)
struct rio_info * p;
struct Port * PortP;
#include "list.h"
#include "sam.h"
+static int RIOCheckIsolated(struct rio_info *, struct Host *, uint);
+static int RIOIsolate(struct rio_info *, struct Host *, uint);
+static int RIOCheck(struct Host *, uint);
+static void RIOConCon(struct rio_info *, struct Host *, uint, uint, uint, uint, int);
+
+
/*
** Incoming on the ROUTE_RUP
** I wrote this while I was tired. Forgive me.
** the world about it. This is done to ensure that the configurator
** only gets up-to-date information about what is going on.
*/
-int
+static int
RIOCheckIsolated(p, HostP, UnitId)
struct rio_info * p;
struct Host *HostP;
** all the units attached to it. This will mean that the entire
** subnet will re-introduce itself.
*/
-int
+static int
RIOIsolate(p, HostP, UnitId)
struct rio_info * p;
struct Host * HostP;
return 1;
}
-int
+static int
RIOCheck(HostP, UnitId)
struct Host *HostP;
uint UnitId;
return(0);
}
-void
+static void
RIOConCon(p, HostP, FromId, FromLink, ToId, ToLink, Change)
struct rio_info * p;
struct Host *HostP;
** Delete an RTA entry from the saved table given to us
** by the configuration program.
*/
-int
+static int
RIORemoveFromSavedTable(struct rio_info *p, struct Map *pMap)
{
int entry;
** Scan the unit links and return zero if the unit is completely
** disconnected.
*/
-int
+static int
RIOFreeDisconnected(struct rio_info *p, struct Host *HostP, int unit)
{
int link;
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
+#include <linux/string.h>
#include <asm/io.h>
#include <asm/system.h>
return -ENXIO;
}
rio_dprintk (RIO_DEBUG_TABLE, "RIONewTable: entering(9)\n");
- if (RIOStrCmp(MapP->Name,
+ if (strcmp(MapP->Name,
p->RIOConnectTable[SubEnt].Name)==0 && !(MapP->Flags & RTA16_SECOND_SLOT)) { /* (9) */
rio_dprintk (RIO_DEBUG_TABLE, "RTA name %s used twice\n", MapP->Name);
p->RIOError.Error = NAME_USED_TWICE;
for ( Host2=0; Host2<p->RIONumHosts; Host2++ ) {
if (Host2 == Host)
continue;
- if (RIOStrCmp(p->RIOHosts[Host].Name, p->RIOHosts[Host2].Name)
+ if (strcmp(p->RIOHosts[Host].Name, p->RIOHosts[Host2].Name)
== 0) {
NameIsUnique = 0;
Host1++;
PortP->Store = 0;
PortP->FirstOpen = 1;
- /*
- ** handle the xprint issues
- */
-#ifdef XPRINT_SUPPORT
- PortP->Xprint.XpActive = 0;
- PortP->Xprint.XttyP = &riox_tty[SysPort];
- /* TO FROM MAXLEN */
- RIOStrNCpy( PortP->Xprint.XpOn, RIOConf.XpOn, MAX_XP_CTRL_LEN );
- RIOStrNCpy( PortP->Xprint.XpOff, RIOConf.XpOff, MAX_XP_CTRL_LEN );
- PortP->Xprint.XpCps = RIOConf.XpCps;
- PortP->Xprint.XpLen = RIOStrlen(PortP->Xprint.XpOn)+
- RIOStrlen(PortP->Xprint.XpOff);
-#endif
-
/*
** Buffers 'n things
*/
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/tty.h>
+#include <linux/string.h>
#include <asm/io.h>
#include <asm/system.h>
#include <asm/string.h>
int RIOShortCommand(struct rio_info *p, struct Port *PortP,
int command, int len, int arg);
+#if 0
+static int RIOCookMode(struct ttystatics *);
+#endif
extern int conv_vb[]; /* now defined in ttymgr.c */
extern int conv_bv[]; /* now defined in ttymgr.c */
** COOK_WELL if the line discipline must be used to do the processing
** COOK_MEDIUM if the card can do all the processing necessary.
*/
-int
+#if 0
+static int
RIOCookMode(struct ttystatics *tp)
{
/*
*/
return COOK_MEDIUM;
}
-
+#endif
static void
RIOClearUp(PortP)
pseterr(EFAULT);
}
PortP->Xprint.XpOn[MAX_XP_CTRL_LEN-1] = '\0';
- PortP->Xprint.XpLen = RIOStrlen(PortP->Xprint.XpOn)+
- RIOStrlen(PortP->Xprint.XpOff);
+ PortP->Xprint.XpLen = strlen(PortP->Xprint.XpOn)+
+ strlen(PortP->Xprint.XpOff);
rio_spin_unlock_irqrestore(&PortP->portSem, flags);
return 0;
pseterr(EFAULT);
}
PortP->Xprint.XpOff[MAX_XP_CTRL_LEN-1] = '\0';
- PortP->Xprint.XpLen = RIOStrlen(PortP->Xprint.XpOn)+
- RIOStrlen(PortP->Xprint.XpOff);
+ PortP->Xprint.XpLen = strlen(PortP->Xprint.XpOn)+
+ strlen(PortP->Xprint.XpOff);
rio_spin_unlock_irqrestore(&PortP->portSem, flags);
return 0;
static int s3c2410_rtc_tickno = NO_IRQ;
static int s3c2410_rtc_freq = 1;
-static spinlock_t s3c2410_rtc_pie_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(s3c2410_rtc_pie_lock);
/* IRQ Handlers */
MODULE_LICENSE("GPL");
static int major = 0; /* default to dynamic major */
-MODULE_PARM(major, "i");
+module_param(major, int, 0);
MODULE_PARM_DESC(major, "Major device number");
static ssize_t scx200_gpio_write(struct file *file, const char __user *data,
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#define VERSION(ver,rel,seq) (((ver)<<16) | ((rel)<<8) | (seq))
#if defined(__i386__)
# define BREAKPOINT() asm(" int $3");
#else
#define RCLRVALUE 0xffff
-MGSL_PARAMS default_params = {
+static MGSL_PARAMS default_params = {
MGSL_MODE_HDLC, /* unsigned long mode */
0, /* unsigned char loopback; */
HDLC_FLAG_UNDERRUN_ABORT15, /* unsigned short flags; */
#define usc_EnableReceiver(a,b) \
usc_OutReg( (a), RMR, (u16)((usc_InReg((a),RMR) & 0xfffc) | (b)) )
-u16 usc_InDmaReg( struct mgsl_struct *info, u16 Port );
-void usc_OutDmaReg( struct mgsl_struct *info, u16 Port, u16 Value );
-void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd );
+static u16 usc_InDmaReg( struct mgsl_struct *info, u16 Port );
+static void usc_OutDmaReg( struct mgsl_struct *info, u16 Port, u16 Value );
+static void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd );
-u16 usc_InReg( struct mgsl_struct *info, u16 Port );
-void usc_OutReg( struct mgsl_struct *info, u16 Port, u16 Value );
-void usc_RTCmd( struct mgsl_struct *info, u16 Cmd );
+static u16 usc_InReg( struct mgsl_struct *info, u16 Port );
+static void usc_OutReg( struct mgsl_struct *info, u16 Port, u16 Value );
+static void usc_RTCmd( struct mgsl_struct *info, u16 Cmd );
void usc_RCmd( struct mgsl_struct *info, u16 Cmd );
void usc_TCmd( struct mgsl_struct *info, u16 Cmd );
#define usc_SetTransmitSyncChars(a,s0,s1) usc_OutReg((a), TSR, (u16)(((u16)s0<<8)|(u16)s1))
-void usc_process_rxoverrun_sync( struct mgsl_struct *info );
-void usc_start_receiver( struct mgsl_struct *info );
-void usc_stop_receiver( struct mgsl_struct *info );
+static void usc_process_rxoverrun_sync( struct mgsl_struct *info );
+static void usc_start_receiver( struct mgsl_struct *info );
+static void usc_stop_receiver( struct mgsl_struct *info );
-void usc_start_transmitter( struct mgsl_struct *info );
-void usc_stop_transmitter( struct mgsl_struct *info );
-void usc_set_txidle( struct mgsl_struct *info );
-void usc_load_txfifo( struct mgsl_struct *info );
+static void usc_start_transmitter( struct mgsl_struct *info );
+static void usc_stop_transmitter( struct mgsl_struct *info );
+static void usc_set_txidle( struct mgsl_struct *info );
+static void usc_load_txfifo( struct mgsl_struct *info );
-void usc_enable_aux_clock( struct mgsl_struct *info, u32 DataRate );
-void usc_enable_loopback( struct mgsl_struct *info, int enable );
+static void usc_enable_aux_clock( struct mgsl_struct *info, u32 DataRate );
+static void usc_enable_loopback( struct mgsl_struct *info, int enable );
-void usc_get_serial_signals( struct mgsl_struct *info );
-void usc_set_serial_signals( struct mgsl_struct *info );
+static void usc_get_serial_signals( struct mgsl_struct *info );
+static void usc_set_serial_signals( struct mgsl_struct *info );
-void usc_reset( struct mgsl_struct *info );
+static void usc_reset( struct mgsl_struct *info );
-void usc_set_sync_mode( struct mgsl_struct *info );
-void usc_set_sdlc_mode( struct mgsl_struct *info );
-void usc_set_async_mode( struct mgsl_struct *info );
-void usc_enable_async_clock( struct mgsl_struct *info, u32 DataRate );
+static void usc_set_sync_mode( struct mgsl_struct *info );
+static void usc_set_sdlc_mode( struct mgsl_struct *info );
+static void usc_set_async_mode( struct mgsl_struct *info );
+static void usc_enable_async_clock( struct mgsl_struct *info, u32 DataRate );
-void usc_loopback_frame( struct mgsl_struct *info );
+static void usc_loopback_frame( struct mgsl_struct *info );
-void mgsl_tx_timeout(unsigned long context);
+static void mgsl_tx_timeout(unsigned long context);
-void usc_loopmode_cancel_transmit( struct mgsl_struct * info );
-void usc_loopmode_insert_request( struct mgsl_struct * info );
-int usc_loopmode_active( struct mgsl_struct * info);
-void usc_loopmode_send_done( struct mgsl_struct * info );
-int usc_loopmode_send_active( struct mgsl_struct * info );
+static void usc_loopmode_cancel_transmit( struct mgsl_struct * info );
+static void usc_loopmode_insert_request( struct mgsl_struct * info );
+static int usc_loopmode_active( struct mgsl_struct * info);
+static void usc_loopmode_send_done( struct mgsl_struct * info );
-int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg);
+static int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg);
#ifdef CONFIG_HDLC
#define dev_to_port(D) (dev_to_hdlc(D)->priv)
((Nrdd) << 11) + \
((Nrad) << 6) )
-void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit);
+static void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit);
/*
* Adapter diagnostic routines
*/
-BOOLEAN mgsl_register_test( struct mgsl_struct *info );
-BOOLEAN mgsl_irq_test( struct mgsl_struct *info );
-BOOLEAN mgsl_dma_test( struct mgsl_struct *info );
-BOOLEAN mgsl_memory_test( struct mgsl_struct *info );
-int mgsl_adapter_test( struct mgsl_struct *info );
+static BOOLEAN mgsl_register_test( struct mgsl_struct *info );
+static BOOLEAN mgsl_irq_test( struct mgsl_struct *info );
+static BOOLEAN mgsl_dma_test( struct mgsl_struct *info );
+static BOOLEAN mgsl_memory_test( struct mgsl_struct *info );
+static int mgsl_adapter_test( struct mgsl_struct *info );
/*
* device and resource management routines
*/
-int mgsl_claim_resources(struct mgsl_struct *info);
-void mgsl_release_resources(struct mgsl_struct *info);
-void mgsl_add_device(struct mgsl_struct *info);
-struct mgsl_struct* mgsl_allocate_device(void);
+static int mgsl_claim_resources(struct mgsl_struct *info);
+static void mgsl_release_resources(struct mgsl_struct *info);
+static void mgsl_add_device(struct mgsl_struct *info);
+static struct mgsl_struct* mgsl_allocate_device(void);
/*
 * DMA buffer manipulation functions.
*/
-void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex );
-int mgsl_get_rx_frame( struct mgsl_struct *info );
-int mgsl_get_raw_rx_frame( struct mgsl_struct *info );
-void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info );
-void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info );
-int num_free_tx_dma_buffers(struct mgsl_struct *info);
-void mgsl_load_tx_dma_buffer( struct mgsl_struct *info, const char *Buffer, unsigned int BufferSize);
-void mgsl_load_pci_memory(char* TargetPtr, const char* SourcePtr, unsigned short count);
+static void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex );
+static int mgsl_get_rx_frame( struct mgsl_struct *info );
+static int mgsl_get_raw_rx_frame( struct mgsl_struct *info );
+static void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info );
+static void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info );
+static int num_free_tx_dma_buffers(struct mgsl_struct *info);
+static void mgsl_load_tx_dma_buffer( struct mgsl_struct *info, const char *Buffer, unsigned int BufferSize);
+static void mgsl_load_pci_memory(char* TargetPtr, const char* SourcePtr, unsigned short count);
/*
* DMA and Shared Memory buffer allocation and formatting
*/
-int mgsl_allocate_dma_buffers(struct mgsl_struct *info);
-void mgsl_free_dma_buffers(struct mgsl_struct *info);
-int mgsl_alloc_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount);
-void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount);
-int mgsl_alloc_buffer_list_memory(struct mgsl_struct *info);
-void mgsl_free_buffer_list_memory(struct mgsl_struct *info);
-int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info);
-void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info);
-int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info);
-void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info);
-int load_next_tx_holding_buffer(struct mgsl_struct *info);
-int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize);
+static int mgsl_allocate_dma_buffers(struct mgsl_struct *info);
+static void mgsl_free_dma_buffers(struct mgsl_struct *info);
+static int mgsl_alloc_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount);
+static void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList,int Buffercount);
+static int mgsl_alloc_buffer_list_memory(struct mgsl_struct *info);
+static void mgsl_free_buffer_list_memory(struct mgsl_struct *info);
+static int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info);
+static void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info);
+static int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info);
+static void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info);
+static int load_next_tx_holding_buffer(struct mgsl_struct *info);
+static int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize);
/*
* Bottom half interrupt handlers
*/
-void mgsl_bh_handler(void* Context);
-void mgsl_bh_receive(struct mgsl_struct *info);
-void mgsl_bh_transmit(struct mgsl_struct *info);
-void mgsl_bh_status(struct mgsl_struct *info);
+static void mgsl_bh_handler(void* Context);
+static void mgsl_bh_receive(struct mgsl_struct *info);
+static void mgsl_bh_transmit(struct mgsl_struct *info);
+static void mgsl_bh_status(struct mgsl_struct *info);
/*
* Interrupt handler routines and dispatch table.
*/
-void mgsl_isr_null( struct mgsl_struct *info );
-void mgsl_isr_transmit_data( struct mgsl_struct *info );
-void mgsl_isr_receive_data( struct mgsl_struct *info );
-void mgsl_isr_receive_status( struct mgsl_struct *info );
-void mgsl_isr_transmit_status( struct mgsl_struct *info );
-void mgsl_isr_io_pin( struct mgsl_struct *info );
-void mgsl_isr_misc( struct mgsl_struct *info );
-void mgsl_isr_receive_dma( struct mgsl_struct *info );
-void mgsl_isr_transmit_dma( struct mgsl_struct *info );
+static void mgsl_isr_null( struct mgsl_struct *info );
+static void mgsl_isr_transmit_data( struct mgsl_struct *info );
+static void mgsl_isr_receive_data( struct mgsl_struct *info );
+static void mgsl_isr_receive_status( struct mgsl_struct *info );
+static void mgsl_isr_transmit_status( struct mgsl_struct *info );
+static void mgsl_isr_io_pin( struct mgsl_struct *info );
+static void mgsl_isr_misc( struct mgsl_struct *info );
+static void mgsl_isr_receive_dma( struct mgsl_struct *info );
+static void mgsl_isr_transmit_dma( struct mgsl_struct *info );
typedef void (*isr_dispatch_func)(struct mgsl_struct *);
-isr_dispatch_func UscIsrTable[7] =
+static isr_dispatch_func UscIsrTable[7] =
{
mgsl_isr_null,
mgsl_isr_misc,
/*
* Global linked list of SyncLink devices
*/
-struct mgsl_struct *mgsl_device_list;
+static struct mgsl_struct *mgsl_device_list;
static int mgsl_device_count;
/*
static int txdmabufs[MAX_TOTAL_DEVICES];
static int txholdbufs[MAX_TOTAL_DEVICES];
-MODULE_PARM(break_on_load,"i");
-MODULE_PARM(ttymajor,"i");
-MODULE_PARM(io,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i");
-MODULE_PARM(irq,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i");
-MODULE_PARM(dma,"1-" __MODULE_STRING(MAX_ISA_DEVICES) "i");
-MODULE_PARM(debug_level,"i");
-MODULE_PARM(maxframe,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i");
-MODULE_PARM(dosyncppp,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i");
-MODULE_PARM(txdmabufs,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i");
-MODULE_PARM(txholdbufs,"1-" __MODULE_STRING(MAX_TOTAL_DEVICES) "i");
+module_param(break_on_load, bool, 0);
+module_param(ttymajor, int, 0);
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(dma, int, NULL, 0);
+module_param(debug_level, int, 0);
+module_param_array(maxframe, int, NULL, 0);
+module_param_array(dosyncppp, int, NULL, 0);
+module_param_array(txdmabufs, int, NULL, 0);
+module_param_array(txholdbufs, int, NULL, 0);
static char *driver_name = "SyncLink serial driver";
static char *driver_version = "$Revision: 4.28 $";
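The synclink hunk above retires MODULE_PARM() in favour of module_param() and module_param_array(), which take an explicit type and a sysfs permission mask instead of a format string. A minimal sketch of the new-style declarations (the parameter names are hypothetical):

#include <linux/module.h>
#include <linux/moduleparam.h>

static int debug_level;		/* single integer parameter */
static int io[4];		/* array parameter, up to 4 values */

/* Permission 0 keeps the parameter out of sysfs; pass e.g. 0444 to
 * expose it read-only under /sys/module/<name>/parameters/. */
module_param(debug_level, int, 0);
module_param_array(io, int, NULL, 0);	/* NULL: don't record how many were set */
MODULE_PARM_DESC(debug_level, "Debug verbosity");
MODULE_PARM_DESC(io, "Base I/O addresses");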
* (gdb) to get the .text address for the add-symbol-file command.
* This allows remote debugging of dynamically loadable modules.
*/
-void* mgsl_get_text_ptr(void)
+static void* mgsl_get_text_ptr(void)
{
return mgsl_get_text_ptr;
}
/* mgsl_bh_action() Return next bottom half action to perform.
* Return Value: BH action code or 0 if nothing to do.
*/
-int mgsl_bh_action(struct mgsl_struct *info)
+static int mgsl_bh_action(struct mgsl_struct *info)
{
unsigned long flags;
int rc = 0;
/*
* Perform bottom half processing of work items queued by ISR.
*/
-void mgsl_bh_handler(void* Context)
+static void mgsl_bh_handler(void* Context)
{
struct mgsl_struct *info = (struct mgsl_struct*)Context;
int action;
__FILE__,__LINE__,info->device_name);
}
-void mgsl_bh_receive(struct mgsl_struct *info)
+static void mgsl_bh_receive(struct mgsl_struct *info)
{
int (*get_rx_frame)(struct mgsl_struct *info) =
(info->params.mode == MGSL_MODE_HDLC ? mgsl_get_rx_frame : mgsl_get_raw_rx_frame);
} while(get_rx_frame(info));
}
-void mgsl_bh_transmit(struct mgsl_struct *info)
+static void mgsl_bh_transmit(struct mgsl_struct *info)
{
struct tty_struct *tty = info->tty;
unsigned long flags;
spin_unlock_irqrestore(&info->irq_spinlock,flags);
}
-void mgsl_bh_status(struct mgsl_struct *info)
+static void mgsl_bh_status(struct mgsl_struct *info)
{
if ( debug_level >= DEBUG_LEVEL_BH )
printk( "%s(%d):mgsl_bh_status() entry on %s\n",
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_receive_status( struct mgsl_struct *info )
+static void mgsl_isr_receive_status( struct mgsl_struct *info )
{
u16 status = usc_InReg( info, RCSR );
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_transmit_status( struct mgsl_struct *info )
+static void mgsl_isr_transmit_status( struct mgsl_struct *info )
{
u16 status = usc_InReg( info, TCSR );
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_io_pin( struct mgsl_struct *info )
+static void mgsl_isr_io_pin( struct mgsl_struct *info )
{
struct mgsl_icount *icount;
u16 status = usc_InReg( info, MISR );
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_transmit_data( struct mgsl_struct *info )
+static void mgsl_isr_transmit_data( struct mgsl_struct *info )
{
if ( debug_level >= DEBUG_LEVEL_ISR )
printk("%s(%d):mgsl_isr_transmit_data xmit_cnt=%d\n",
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_receive_data( struct mgsl_struct *info )
+static void mgsl_isr_receive_data( struct mgsl_struct *info )
{
int Fifocount;
u16 status;
* Arguments: info pointer to device extension (instance data)
* Return Value: None
*/
-void mgsl_isr_misc( struct mgsl_struct *info )
+static void mgsl_isr_misc( struct mgsl_struct *info )
{
u16 status = usc_InReg( info, MISR );
* Arguments: info pointer to device extension (instance data)
* Return Value: None
*/
-void mgsl_isr_null( struct mgsl_struct *info )
+static void mgsl_isr_null( struct mgsl_struct *info )
{
} /* end of mgsl_isr_null() */
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_receive_dma( struct mgsl_struct *info )
+static void mgsl_isr_receive_dma( struct mgsl_struct *info )
{
u16 status;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_isr_transmit_dma( struct mgsl_struct *info )
+static void mgsl_isr_transmit_dma( struct mgsl_struct *info )
{
u16 status;
return mgsl_ioctl_common(info, cmd, arg);
}
-int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg)
+static int mgsl_ioctl_common(struct mgsl_struct *info, unsigned int cmd, unsigned long arg)
{
int error;
struct mgsl_icount cnow; /* kernel counter temps */
*
* Return Value:
*/
-int mgsl_read_proc(char *page, char **start, off_t off, int count,
+static int mgsl_read_proc(char *page, char **start, off_t off, int count,
int *eof, void *data)
{
int len = 0, l;
* Arguments: info pointer to device instance data
* Return Value: 0 if success, otherwise error
*/
-int mgsl_allocate_dma_buffers(struct mgsl_struct *info)
+static int mgsl_allocate_dma_buffers(struct mgsl_struct *info)
{
unsigned short BuffersPerFrame;
* Arguments: info pointer to device instance data
* Return Value: 0 if success, otherwise error
*/
-int mgsl_alloc_buffer_list_memory( struct mgsl_struct *info )
+static int mgsl_alloc_buffer_list_memory( struct mgsl_struct *info )
{
unsigned int i;
* the buffer list contains the information necessary to free
* the individual buffers!
*/
-void mgsl_free_buffer_list_memory( struct mgsl_struct *info )
+static void mgsl_free_buffer_list_memory( struct mgsl_struct *info )
{
if ( info->buffer_list && info->bus_type != MGSL_BUS_TYPE_PCI )
kfree(info->buffer_list);
*
* Return Value: 0 if success, otherwise -ENOMEM
*/
-int mgsl_alloc_frame_memory(struct mgsl_struct *info,DMABUFFERENTRY *BufferList,int Buffercount)
+static int mgsl_alloc_frame_memory(struct mgsl_struct *info,DMABUFFERENTRY *BufferList,int Buffercount)
{
int i;
unsigned long phys_addr;
*
* Return Value: None
*/
-void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList, int Buffercount)
+static void mgsl_free_frame_memory(struct mgsl_struct *info, DMABUFFERENTRY *BufferList, int Buffercount)
{
int i;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_free_dma_buffers( struct mgsl_struct *info )
+static void mgsl_free_dma_buffers( struct mgsl_struct *info )
{
mgsl_free_frame_memory( info, info->rx_buffer_list, info->rx_buffer_count );
mgsl_free_frame_memory( info, info->tx_buffer_list, info->tx_buffer_count );
*
* Return Value: 0 if success, otherwise -ENOMEM
*/
-int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info)
+static int mgsl_alloc_intermediate_rxbuffer_memory(struct mgsl_struct *info)
{
info->intermediate_rxbuffer = kmalloc(info->max_frame_size, GFP_KERNEL | GFP_DMA);
if ( info->intermediate_rxbuffer == NULL )
*
* Return Value: None
*/
-void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info)
+static void mgsl_free_intermediate_rxbuffer_memory(struct mgsl_struct *info)
{
if ( info->intermediate_rxbuffer )
kfree(info->intermediate_rxbuffer);
*
* Return Value: 0 if success, otherwise -ENOMEM
*/
-int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info)
+static int mgsl_alloc_intermediate_txbuffer_memory(struct mgsl_struct *info)
{
int i;
*
* Return Value: None
*/
-void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info)
+static void mgsl_free_intermediate_txbuffer_memory(struct mgsl_struct *info)
{
int i;
* into adapter's tx dma buffer,
* 0 otherwise
*/
-int load_next_tx_holding_buffer(struct mgsl_struct *info)
+static int load_next_tx_holding_buffer(struct mgsl_struct *info)
{
int ret = 0;
*
* Return Value: 1 if able to store, 0 otherwise
*/
-int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize)
+static int save_tx_buffer_request(struct mgsl_struct *info,const char *Buffer, unsigned int BufferSize)
{
struct tx_holding_buffer *ptx;
return 1;
}
-int mgsl_claim_resources(struct mgsl_struct *info)
+static int mgsl_claim_resources(struct mgsl_struct *info)
{
if (request_region(info->io_base,info->io_addr_size,"synclink") == NULL) {
printk( "%s(%d):I/O address conflict on device %s Addr=%08X\n",
} /* end of mgsl_claim_resources() */
-void mgsl_release_resources(struct mgsl_struct *info)
+static void mgsl_release_resources(struct mgsl_struct *info)
{
if ( debug_level >= DEBUG_LEVEL_INFO )
printk( "%s(%d):mgsl_release_resources(%s) entry\n",
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_add_device( struct mgsl_struct *info )
+static void mgsl_add_device( struct mgsl_struct *info )
{
info->next_device = NULL;
info->line = mgsl_device_count;
* Arguments: none
* Return Value: pointer to mgsl_struct if success, otherwise NULL
*/
-struct mgsl_struct* mgsl_allocate_device(void)
+static struct mgsl_struct* mgsl_allocate_device(void)
{
struct mgsl_struct *info;
*
* None
*/
-void usc_RTCmd( struct mgsl_struct *info, u16 Cmd )
+static void usc_RTCmd( struct mgsl_struct *info, u16 Cmd )
{
/* output command to CCAR in bits <15..11> */
/* preserve bits <10..7>, bits <6..0> must be zero */
*
* None
*/
-void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd )
+static void usc_DmaCmd( struct mgsl_struct *info, u16 Cmd )
{
/* write command mask to DCAR */
outw( Cmd + info->mbre_bit, info->io_base );
* None
*
*/
-void usc_OutDmaReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue )
+static void usc_OutDmaReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue )
{
/* Note: The DCAR is located at the adapter base address */
/* Note: must preserve state of BIT8 in DCAR */
* The 16-bit value read from register
*
*/
-u16 usc_InDmaReg( struct mgsl_struct *info, u16 RegAddr )
+static u16 usc_InDmaReg( struct mgsl_struct *info, u16 RegAddr )
{
/* Note: The DCAR is located at the adapter base address */
/* Note: must preserve state of BIT8 in DCAR */
* None
*
*/
-void usc_OutReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue )
+static void usc_OutReg( struct mgsl_struct *info, u16 RegAddr, u16 RegValue )
{
outw( RegAddr + info->loopback_bits, info->io_base + CCAR );
outw( RegValue, info->io_base + CCAR );
*
* 16-bit value read from register
*/
-u16 usc_InReg( struct mgsl_struct *info, u16 RegAddr )
+static u16 usc_InReg( struct mgsl_struct *info, u16 RegAddr )
{
outw( RegAddr + info->loopback_bits, info->io_base + CCAR );
return inw( info->io_base + CCAR );
* Arguments: info pointer to device instance data
* Return Value: NONE
*/
-void usc_set_sdlc_mode( struct mgsl_struct *info )
+static void usc_set_sdlc_mode( struct mgsl_struct *info )
{
u16 RegValue;
int PreSL1660;
* enable 1 = enable loopback, 0 = disable
* Return Value: None
*/
-void usc_enable_loopback(struct mgsl_struct *info, int enable)
+static void usc_enable_loopback(struct mgsl_struct *info, int enable)
{
if (enable) {
/* blank external TXD output */
*
* Return Value: None
*/
-void usc_enable_aux_clock( struct mgsl_struct *info, u32 data_rate )
+static void usc_enable_aux_clock( struct mgsl_struct *info, u32 data_rate )
{
u32 XtalSpeed;
u16 Tc;
*
* Return Value: None
*/
-void usc_process_rxoverrun_sync( struct mgsl_struct *info )
+static void usc_process_rxoverrun_sync( struct mgsl_struct *info )
{
int start_index;
int end_index;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_stop_receiver( struct mgsl_struct *info )
+static void usc_stop_receiver( struct mgsl_struct *info )
{
if (debug_level >= DEBUG_LEVEL_ISR)
printk("%s(%d):usc_stop_receiver(%s)\n",
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_start_receiver( struct mgsl_struct *info )
+static void usc_start_receiver( struct mgsl_struct *info )
{
u32 phys_addr;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_start_transmitter( struct mgsl_struct *info )
+static void usc_start_transmitter( struct mgsl_struct *info )
{
u32 phys_addr;
unsigned int FrameSize;
 * Arguments:	info	pointer to device instance data
* Return Value: None
*/
-void usc_stop_transmitter( struct mgsl_struct *info )
+static void usc_stop_transmitter( struct mgsl_struct *info )
{
if (debug_level >= DEBUG_LEVEL_ISR)
printk("%s(%d):usc_stop_transmitter(%s)\n",
* Arguments: info pointer to device extension (instance data)
* Return Value: None
*/
-void usc_load_txfifo( struct mgsl_struct *info )
+static void usc_load_txfifo( struct mgsl_struct *info )
{
int Fifocount;
u8 TwoBytes[2];
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_reset( struct mgsl_struct *info )
+static void usc_reset( struct mgsl_struct *info )
{
if ( info->bus_type == MGSL_BUS_TYPE_PCI ) {
int i;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_set_async_mode( struct mgsl_struct *info )
+static void usc_set_async_mode( struct mgsl_struct *info )
{
u16 RegValue;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_loopback_frame( struct mgsl_struct *info )
+static void usc_loopback_frame( struct mgsl_struct *info )
{
int i;
unsigned long oldmode = info->params.mode;
* Arguments: info pointer to adapter info structure
* Return Value: None
*/
-void usc_set_sync_mode( struct mgsl_struct *info )
+static void usc_set_sync_mode( struct mgsl_struct *info )
{
usc_loopback_frame( info );
usc_set_sdlc_mode( info );
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_set_txidle( struct mgsl_struct *info )
+static void usc_set_txidle( struct mgsl_struct *info )
{
u16 usc_idle_mode = IDLEMODE_FLAGS;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_get_serial_signals( struct mgsl_struct *info )
+static void usc_get_serial_signals( struct mgsl_struct *info )
{
u16 status;
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void usc_set_serial_signals( struct mgsl_struct *info )
+static void usc_set_serial_signals( struct mgsl_struct *info )
{
u16 Control;
unsigned char V24Out = info->serial_signals;
* 0 disables the AUX clock.
* Return Value: None
*/
-void usc_enable_async_clock( struct mgsl_struct *info, u32 data_rate )
+static void usc_enable_async_clock( struct mgsl_struct *info, u32 data_rate )
{
if ( data_rate ) {
/*
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info )
+static void mgsl_reset_tx_dma_buffers( struct mgsl_struct *info )
{
unsigned int i;
* Arguments: info pointer to device instance data
* Return Value: number of free tx dma buffers
*/
-int num_free_tx_dma_buffers(struct mgsl_struct *info)
+static int num_free_tx_dma_buffers(struct mgsl_struct *info)
{
return info->tx_buffer_count - info->tx_dma_buffers_used;
}
* Arguments: info pointer to device instance data
* Return Value: None
*/
-void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info )
+static void mgsl_reset_rx_dma_buffers( struct mgsl_struct *info )
{
unsigned int i;
*
* Return Value: None
*/
-void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex )
+static void mgsl_free_rx_frame_buffers( struct mgsl_struct *info, unsigned int StartIndex, unsigned int EndIndex )
{
int Done = 0;
DMABUFFERENTRY *pBufEntry;
* Arguments: info pointer to device extension
* Return Value: 1 if frame returned, otherwise 0
*/
-int mgsl_get_rx_frame(struct mgsl_struct *info)
+static int mgsl_get_rx_frame(struct mgsl_struct *info)
{
unsigned int StartIndex, EndIndex; /* index of 1st and last buffers of Rx frame */
unsigned short status;
* Arguments: info pointer to device extension
* Return Value: 1 if frame returned, otherwise 0
*/
-int mgsl_get_raw_rx_frame(struct mgsl_struct *info)
+static int mgsl_get_raw_rx_frame(struct mgsl_struct *info)
{
unsigned int CurrentIndex, NextIndex;
unsigned short status;
*
* Return Value: None
*/
-void mgsl_load_tx_dma_buffer(struct mgsl_struct *info, const char *Buffer,
- unsigned int BufferSize)
+static void mgsl_load_tx_dma_buffer(struct mgsl_struct *info,
+ const char *Buffer, unsigned int BufferSize)
{
unsigned short Copycount;
unsigned int i = 0;
* Arguments: info pointer to device instance data
* Return Value: TRUE if test passed, otherwise FALSE
*/
-BOOLEAN mgsl_register_test( struct mgsl_struct *info )
+static BOOLEAN mgsl_register_test( struct mgsl_struct *info )
{
static unsigned short BitPatterns[] =
{ 0x0000, 0xffff, 0xaaaa, 0x5555, 0x1234, 0x6969, 0x9696, 0x0f0f };
* Arguments: info pointer to device instance data
* Return Value: TRUE if test passed, otherwise FALSE
*/
-BOOLEAN mgsl_irq_test( struct mgsl_struct *info )
+static BOOLEAN mgsl_irq_test( struct mgsl_struct *info )
{
unsigned long EndTime;
unsigned long flags;
* Arguments: info pointer to device instance data
* Return Value: TRUE if test passed, otherwise FALSE
*/
-BOOLEAN mgsl_dma_test( struct mgsl_struct *info )
+static BOOLEAN mgsl_dma_test( struct mgsl_struct *info )
{
unsigned short FifoLevel;
unsigned long phys_addr;
* Arguments: info pointer to device instance data
* Return Value: 0 if success, otherwise -ENODEV
*/
-int mgsl_adapter_test( struct mgsl_struct *info )
+static int mgsl_adapter_test( struct mgsl_struct *info )
{
if ( debug_level >= DEBUG_LEVEL_INFO )
printk( "%s(%d):Testing device %s\n",
* Arguments: info pointer to device instance data
* Return Value: TRUE if test passed, otherwise FALSE
*/
-BOOLEAN mgsl_memory_test( struct mgsl_struct *info )
+static BOOLEAN mgsl_memory_test( struct mgsl_struct *info )
{
static unsigned long BitPatterns[] = { 0x0, 0x55555555, 0xaaaaaaaa,
0x66666666, 0x99999999, 0xffffffff, 0x12345678 };
*
* Return Value: None
*/
-void mgsl_load_pci_memory( char* TargetPtr, const char* SourcePtr,
+static void mgsl_load_pci_memory( char* TargetPtr, const char* SourcePtr,
unsigned short count )
{
/* 16 32-bit writes @ 60ns each = 960ns max latency on local bus */
} /* End Of mgsl_load_pci_memory() */
-void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit)
+static void mgsl_trace_block(struct mgsl_struct *info,const char* data, int count, int xmit)
{
int i;
int linecount;
* Arguments: context pointer to device instance data
* Return Value: None
*/
-void mgsl_tx_timeout(unsigned long context)
+static void mgsl_tx_timeout(unsigned long context)
{
struct mgsl_struct *info = (struct mgsl_struct*)context;
unsigned long flags;
/* release the line by echoing RxD to TxD
* upon completion of a transmit frame
*/
-void usc_loopmode_send_done( struct mgsl_struct * info )
+static void usc_loopmode_send_done( struct mgsl_struct * info )
{
info->loopmode_send_done_requested = FALSE;
/* clear CMR:13 to 0 to start echoing RxData to TxData */
/* abort a transmit in progress while in HDLC LoopMode
*/
-void usc_loopmode_cancel_transmit( struct mgsl_struct * info )
+static void usc_loopmode_cancel_transmit( struct mgsl_struct * info )
{
/* reset tx dma channel and purge TxFifo */
usc_RTCmd( info, RTCmd_PurgeTxFifo );
* is an Insert Into Loop action. Upon receipt of a GoAhead sequence (RxAbort)
* we must clear CMR:13 to begin repeating TxData to RxData
*/
-void usc_loopmode_insert_request( struct mgsl_struct * info )
+static void usc_loopmode_insert_request( struct mgsl_struct * info )
{
info->loopmode_insert_requested = TRUE;
/* return 1 if station is inserted into the loop, otherwise 0
*/
-int usc_loopmode_active( struct mgsl_struct * info)
+static int usc_loopmode_active( struct mgsl_struct * info)
{
return usc_InReg( info, CCSR ) & BIT7 ? 1 : 0 ;
}
-/* return 1 if USC is in loop send mode, otherwise 0
- */
-int usc_loopmode_send_active( struct mgsl_struct * info )
-{
- return usc_InReg( info, CCSR ) & BIT6 ? 1 : 0 ;
-}
-
#ifdef CONFIG_HDLC
/**
static int tosh_fn = 0;
-MODULE_PARM(tosh_fn, "i");
+module_param(tosh_fn, int, 0);
static int tosh_ioctl(struct inode *, struct file *, unsigned int,
* laptop, otherwise zero and determines the Machine ID, BIOS version and
* date, and SCI version.
*/
-int tosh_probe(void)
+static int tosh_probe(void)
{
int i,major,minor,day,year,month,flag;
unsigned char signature[7] = { 0x54,0x4f,0x53,0x48,0x49,0x42,0x41 };
#define VIOCONS_KERN_WARN KERN_WARNING "viocons: "
#define VIOCONS_KERN_INFO KERN_INFO "viocons: "
-static spinlock_t consolelock = SPIN_LOCK_UNLOCKED;
-static spinlock_t consoleloglock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(consolelock);
+static DEFINE_SPINLOCK(consoleloglock);
#ifdef CONFIG_MAGIC_SYSRQ
static int vio_sysrq_pressed;
2 to activate CPUfreq drivers debugging, and
4 to activate CPUfreq governor debugging
-config CPU_FREQ_PROC_INTF
- tristate "/proc/cpufreq interface (deprecated)"
- depends on CPU_FREQ && PROC_FS
- help
- This enables the /proc/cpufreq interface for controlling
- CPUFreq. Please note that it is recommended to use the sysfs
- interface instead (which is built automatically).
-
- For details, take a look at <file:Documentation/cpu-freq/>.
-
- If in doubt, say N.
+config CPU_FREQ_STAT
+ tristate "CPU frequency translation statistics"
+ depends on CPU_FREQ && CPU_FREQ_TABLE
+ default y
+ help
+	  This driver exports CPU frequency statistics information through the
+	  sysfs file system.
+
+config CPU_FREQ_STAT_DETAILS
+ bool "CPU frequency translation statistics details"
+ depends on CPU_FREQ && CPU_FREQ_STAT
+ default n
+ help
+	  This will show the detailed CPU frequency translation table in the
+	  sysfs file system.
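As an illustration (paths assumed, not taken from this patch): the stats driver added further below registers these attributes in a "stats" group under each policy's sysfs directory, which on a typical system appears as /sys/devices/system/cpu/cpuN/cpufreq/stats/. A minimal user-space reader, assuming that path, could look like the following sketch.

#include <stdio.h>

int main(void)
{
	/* Path assumed from the "stats" attribute group registered by the
	 * driver below; adjust the CPU number as needed. */
	const char *path =
		"/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
	char line[128];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	/* Each line is "<frequency in kHz> <time accumulated by the driver>". */
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}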
choice
prompt "Default CPUFreq governor"
If in doubt, say Y.
-config CPU_FREQ_24_API
- bool "/proc/sys/cpu/ interface (2.4. / OLD)"
- depends on CPU_FREQ_GOV_USERSPACE
- depends on SYSCTL
- help
- This enables the /proc/sys/cpu/ sysctl interface for controlling
- the CPUFreq,"userspace" governor. This is the same interface
- as known from the 2.4.-kernel patches for CPUFreq, and offers
- the same functionality as long as "userspace" is the
- selected governor for the specified CPU.
-
- For details, take a look at <file:Documentation/cpu-freq/>.
-
- If in doubt, say N.
-
config CPU_FREQ_GOV_ONDEMAND
tristate "'ondemand' cpufreq policy governor"
depends on CPU_FREQ
# CPUfreq core
obj-$(CONFIG_CPU_FREQ) += cpufreq.o
+# CPUfreq stats
+obj-$(CONFIG_CPU_FREQ_STAT) += cpufreq_stats.o
# CPUfreq governors
obj-$(CONFIG_CPU_FREQ_GOV_PERFORMANCE) += cpufreq_performance.o
# CPUfreq cross-arch helpers
obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o
-obj-$(CONFIG_CPU_FREQ_PROC_INTF) += proc_intf.o
static int down_skip[NR_CPUS];
struct cpu_dbs_info_s *this_dbs_info;
+ struct cpufreq_policy *policy;
+ unsigned int j;
+
this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
if (!this_dbs_info->enable)
return;
+ policy = this_dbs_info->cur_policy;
/*
* The default safe range is 20% to 80%
* Every sampling_rate, we check
* Frequency reduction happens at minimum steps of
* 5% of max_frequency
*/
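	/*
	 * In outline (from the checks below): when the idle time measured
	 * over the sampling window falls below the "up" threshold, the
	 * governor jumps straight to policy->max; when it rises above the
	 * "down" threshold, the frequency is stepped down by roughly 5%
	 * of policy->max per sample.
	 */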
+
/* Check for frequency increase */
total_idle_ticks = kstat_cpu(cpu).cpustat.idle +
kstat_cpu(cpu).cpustat.iowait;
idle_ticks = total_idle_ticks -
this_dbs_info->prev_cpu_idle_up;
this_dbs_info->prev_cpu_idle_up = total_idle_ticks;
+
+
+ for_each_cpu_mask(j, policy->cpus) {
+ unsigned int tmp_idle_ticks;
+ struct cpu_dbs_info_s *j_dbs_info;
+
+ if (j == cpu)
+ continue;
+
+ j_dbs_info = &per_cpu(cpu_dbs_info, j);
+ /* Check for frequency increase */
+ total_idle_ticks = kstat_cpu(j).cpustat.idle +
+ kstat_cpu(j).cpustat.iowait;
+ tmp_idle_ticks = total_idle_ticks -
+ j_dbs_info->prev_cpu_idle_up;
+ j_dbs_info->prev_cpu_idle_up = total_idle_ticks;
+
+ if (tmp_idle_ticks < idle_ticks)
+ idle_ticks = tmp_idle_ticks;
+ }
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate);
if (idle_ticks < up_idle_ticks) {
- __cpufreq_driver_target(this_dbs_info->cur_policy,
- this_dbs_info->cur_policy->max,
+ __cpufreq_driver_target(policy, policy->max,
CPUFREQ_RELATION_H);
down_skip[cpu] = 0;
this_dbs_info->prev_cpu_idle_down = total_idle_ticks;
if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor)
return;
+ total_idle_ticks = kstat_cpu(cpu).cpustat.idle +
+ kstat_cpu(cpu).cpustat.iowait;
idle_ticks = total_idle_ticks -
this_dbs_info->prev_cpu_idle_down;
+ this_dbs_info->prev_cpu_idle_down = total_idle_ticks;
+
+ for_each_cpu_mask(j, policy->cpus) {
+ unsigned int tmp_idle_ticks;
+ struct cpu_dbs_info_s *j_dbs_info;
+
+ if (j == cpu)
+ continue;
+
+ j_dbs_info = &per_cpu(cpu_dbs_info, j);
+		/* Check for frequency decrease */
+ total_idle_ticks = kstat_cpu(j).cpustat.idle +
+ kstat_cpu(j).cpustat.iowait;
+ tmp_idle_ticks = total_idle_ticks -
+ j_dbs_info->prev_cpu_idle_down;
+ j_dbs_info->prev_cpu_idle_down = total_idle_ticks;
+
+ if (tmp_idle_ticks < idle_ticks)
+ idle_ticks = tmp_idle_ticks;
+ }
+
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
down_skip[cpu] = 0;
- this_dbs_info->prev_cpu_idle_down = total_idle_ticks;
freq_down_sampling_rate = dbs_tuners_ins.sampling_rate *
dbs_tuners_ins.sampling_down_factor;
sampling_rate_in_HZ(freq_down_sampling_rate);
if (idle_ticks > down_idle_ticks ) {
- freq_down_step = (5 * this_dbs_info->cur_policy->max) / 100;
+ freq_down_step = (5 * policy->max) / 100;
/* max freq cannot be less than 100. But who knows.... */
if (unlikely(freq_down_step == 0))
freq_down_step = 5;
- __cpufreq_driver_target(this_dbs_info->cur_policy,
- this_dbs_info->cur_policy->cur - freq_down_step,
+ __cpufreq_driver_target(policy,
+ policy->cur - freq_down_step,
CPUFREQ_RELATION_H);
return;
}
static inline void dbs_timer_init(void)
{
INIT_WORK(&dbs_work, do_dbs_timer, NULL);
- schedule_work(&dbs_work);
+ schedule_delayed_work(&dbs_work,
+ sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate));
return;
}
{
unsigned int cpu = policy->cpu;
struct cpu_dbs_info_s *this_dbs_info;
+ unsigned int j;
this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
break;
down(&dbs_sem);
- this_dbs_info->cur_policy = policy;
+ for_each_cpu_mask(j, policy->cpus) {
+ struct cpu_dbs_info_s *j_dbs_info;
+ j_dbs_info = &per_cpu(cpu_dbs_info, j);
+ j_dbs_info->cur_policy = policy;
- this_dbs_info->prev_cpu_idle_up =
- kstat_cpu(cpu).cpustat.idle +
- kstat_cpu(cpu).cpustat.iowait;
- this_dbs_info->prev_cpu_idle_down =
- kstat_cpu(cpu).cpustat.idle +
- kstat_cpu(cpu).cpustat.iowait;
+ j_dbs_info->prev_cpu_idle_up =
+ kstat_cpu(j).cpustat.idle +
+ kstat_cpu(j).cpustat.iowait;
+ j_dbs_info->prev_cpu_idle_down =
+ kstat_cpu(j).cpustat.idle +
+ kstat_cpu(j).cpustat.iowait;
+ }
this_dbs_info->enable = 1;
sysfs_create_group(&policy->kobj, &dbs_attr_group);
dbs_enable++;
--- /dev/null
+/*
+ * drivers/cpufreq/cpufreq_stats.c
+ *
+ * Copyright (C) 2003-2004 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>.
+ * (C) 2004 Zou Nan hai <nanhai.zou@intel.com>.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sysdev.h>
+#include <linux/cpu.h>
+#include <linux/sysfs.h>
+#include <linux/cpufreq.h>
+#include <linux/jiffies.h>
+#include <linux/percpu.h>
+#include <linux/kobject.h>
+#include <linux/spinlock.h>
+
+static spinlock_t cpufreq_stats_lock;
+
+#define CPUFREQ_STATDEVICE_ATTR(_name,_mode,_show) \
+static struct freq_attr _attr_##_name = {\
+ .attr = {.name = __stringify(_name), .owner = THIS_MODULE, \
+ .mode = _mode, }, \
+ .show = _show,\
+};
+
+static unsigned long
+delta_time(unsigned long old, unsigned long new)
+{
+ return (old > new) ? (old - new): (new + ~old + 1);
+}
+
+struct cpufreq_stats {
+ unsigned int cpu;
+ unsigned int total_trans;
+ unsigned long long last_time;
+ unsigned int max_state;
+ unsigned int state_num;
+ unsigned int last_index;
+ unsigned long long *time_in_state;
+ unsigned int *freq_table;
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+ unsigned int *trans_table;
+#endif
+};
+
+static struct cpufreq_stats *cpufreq_stats_table[NR_CPUS];
+
+struct cpufreq_stats_attribute {
+ struct attribute attr;
+ ssize_t(*show) (struct cpufreq_stats *, char *);
+};
+
+static int
+cpufreq_stats_update (unsigned int cpu)
+{
+ struct cpufreq_stats *stat;
+ spin_lock(&cpufreq_stats_lock);
+ stat = cpufreq_stats_table[cpu];
+ if (stat->time_in_state)
+ stat->time_in_state[stat->last_index] +=
+ delta_time(stat->last_time, jiffies);
+ stat->last_time = jiffies;
+ spin_unlock(&cpufreq_stats_lock);
+ return 0;
+}
+
+static ssize_t
+show_total_trans(struct cpufreq_policy *policy, char *buf)
+{
+ struct cpufreq_stats *stat = cpufreq_stats_table[policy->cpu];
+	if (!stat)
+ return 0;
+ return sprintf(buf, "%d\n",
+ cpufreq_stats_table[stat->cpu]->total_trans);
+}
+
+static ssize_t
+show_time_in_state(struct cpufreq_policy *policy, char *buf)
+{
+ ssize_t len = 0;
+ int i;
+ struct cpufreq_stats *stat = cpufreq_stats_table[policy->cpu];
+	if (!stat)
+ return 0;
+ cpufreq_stats_update(stat->cpu);
+ for (i = 0; i < stat->state_num; i++) {
+ len += sprintf(buf + len, "%u %llu\n",
+ stat->freq_table[i], stat->time_in_state[i]);
+ }
+ return len;
+}
+
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+static ssize_t
+show_trans_table(struct cpufreq_policy *policy, char *buf)
+{
+ ssize_t len = 0;
+ int i, j;
+
+ struct cpufreq_stats *stat = cpufreq_stats_table[policy->cpu];
+	if (!stat)
+ return 0;
+ cpufreq_stats_update(stat->cpu);
+ for (i = 0; i < stat->state_num; i++) {
+ if (len >= PAGE_SIZE)
+ break;
+ len += snprintf(buf + len, PAGE_SIZE - len, "%9u:\t",
+ stat->freq_table[i]);
+
+ for (j = 0; j < stat->state_num; j++) {
+ if (len >= PAGE_SIZE)
+ break;
+ len += snprintf(buf + len, PAGE_SIZE - len, "%u\t",
+ stat->trans_table[i*stat->max_state+j]);
+ }
+ len += snprintf(buf + len, PAGE_SIZE - len, "\n");
+ }
+ return len;
+}
+CPUFREQ_STATDEVICE_ATTR(trans_table,0444,show_trans_table);
+#endif
+
+CPUFREQ_STATDEVICE_ATTR(total_trans,0444,show_total_trans);
+CPUFREQ_STATDEVICE_ATTR(time_in_state,0444,show_time_in_state);
+
+static struct attribute *default_attrs[] = {
+ &_attr_total_trans.attr,
+ &_attr_time_in_state.attr,
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+ &_attr_trans_table.attr,
+#endif
+ NULL
+};
+static struct attribute_group stats_attr_group = {
+ .attrs = default_attrs,
+ .name = "stats"
+};
+
+static int
+freq_table_get_index(struct cpufreq_stats *stat, unsigned int freq)
+{
+ int index;
+ for (index = 0; index < stat->max_state; index++)
+ if (stat->freq_table[index] == freq)
+ return index;
+ return -1;
+}
+
+static void
+cpufreq_stats_free_table (unsigned int cpu)
+{
+ struct cpufreq_stats *stat = cpufreq_stats_table[cpu];
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+ if (policy && policy->cpu == cpu)
+ sysfs_remove_group(&policy->kobj, &stats_attr_group);
+ if (stat) {
+ kfree(stat->time_in_state);
+ kfree(stat);
+ }
+ cpufreq_stats_table[cpu] = NULL;
+ if (policy)
+ cpufreq_cpu_put(policy);
+}
+
+static int
+cpufreq_stats_create_table (struct cpufreq_policy *policy,
+ struct cpufreq_frequency_table *table)
+{
+ unsigned int i, j, count = 0, ret = 0;
+ struct cpufreq_stats *stat;
+ struct cpufreq_policy *data;
+ unsigned int alloc_size;
+ unsigned int cpu = policy->cpu;
+ if (cpufreq_stats_table[cpu])
+ return -EBUSY;
+ if ((stat = kmalloc(sizeof(struct cpufreq_stats), GFP_KERNEL)) == NULL)
+ return -ENOMEM;
+ memset(stat, 0, sizeof (struct cpufreq_stats));
+
+ data = cpufreq_cpu_get(cpu);
+ if ((ret = sysfs_create_group(&data->kobj, &stats_attr_group)))
+ goto error_out;
+
+ stat->cpu = cpu;
+ cpufreq_stats_table[cpu] = stat;
+
+ for (i=0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+ unsigned int freq = table[i].frequency;
+ if (freq == CPUFREQ_ENTRY_INVALID)
+ continue;
+ count++;
+ }
+
+ alloc_size = count * sizeof(int) + count * sizeof(long long);
+
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+ alloc_size += count * count * sizeof(int);
+#endif
+ stat->max_state = count;
+ stat->time_in_state = kmalloc(alloc_size, GFP_KERNEL);
+ if (!stat->time_in_state) {
+ ret = -ENOMEM;
+ goto error_out;
+ }
+ memset(stat->time_in_state, 0, alloc_size);
+ stat->freq_table = (unsigned int *)(stat->time_in_state + count);
+
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+ stat->trans_table = stat->freq_table + count;
+#endif
+ j = 0;
+ for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
+ unsigned int freq = table[i].frequency;
+ if (freq == CPUFREQ_ENTRY_INVALID)
+ continue;
+ if (freq_table_get_index(stat, freq) == -1)
+ stat->freq_table[j++] = freq;
+ }
+ stat->state_num = j;
+ spin_lock(&cpufreq_stats_lock);
+ stat->last_time = jiffies;
+ stat->last_index = freq_table_get_index(stat, policy->cur);
+ spin_unlock(&cpufreq_stats_lock);
+ cpufreq_cpu_put(data);
+ return 0;
+error_out:
+ cpufreq_cpu_put(data);
+ kfree(stat);
+ cpufreq_stats_table[cpu] = NULL;
+ return ret;
+}
+
+static int
+cpufreq_stat_notifier_policy (struct notifier_block *nb, unsigned long val,
+ void *data)
+{
+ int ret;
+ struct cpufreq_policy *policy = data;
+ struct cpufreq_frequency_table *table;
+ unsigned int cpu = policy->cpu;
+ if (val != CPUFREQ_NOTIFY)
+ return 0;
+ table = cpufreq_frequency_get_table(cpu);
+ if (!table)
+ return 0;
+ if ((ret = cpufreq_stats_create_table(policy, table)))
+ return ret;
+ return 0;
+}
+
+static int
+cpufreq_stat_notifier_trans (struct notifier_block *nb, unsigned long val,
+ void *data)
+{
+ struct cpufreq_freqs *freq = data;
+ struct cpufreq_stats *stat;
+ int old_index, new_index;
+
+ if (val != CPUFREQ_POSTCHANGE)
+ return 0;
+
+ stat = cpufreq_stats_table[freq->cpu];
+ if (!stat)
+ return 0;
+ old_index = freq_table_get_index(stat, freq->old);
+ new_index = freq_table_get_index(stat, freq->new);
+
+ cpufreq_stats_update(freq->cpu);
+ if (old_index == new_index)
+ return 0;
+
+ spin_lock(&cpufreq_stats_lock);
+ stat->last_index = new_index;
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+ stat->trans_table[old_index * stat->max_state + new_index]++;
+#endif
+ stat->total_trans++;
+ spin_unlock(&cpufreq_stats_lock);
+ return 0;
+}
+
+static struct notifier_block notifier_policy_block = {
+ .notifier_call = cpufreq_stat_notifier_policy
+};
+
+static struct notifier_block notifier_trans_block = {
+ .notifier_call = cpufreq_stat_notifier_trans
+};
+
+static int
+__init cpufreq_stats_init(void)
+{
+ int ret;
+ unsigned int cpu;
+ spin_lock_init(&cpufreq_stats_lock);
+ if ((ret = cpufreq_register_notifier(¬ifier_policy_block,
+ CPUFREQ_POLICY_NOTIFIER)))
+ return ret;
+
+ if ((ret = cpufreq_register_notifier(¬ifier_trans_block,
+ CPUFREQ_TRANSITION_NOTIFIER))) {
+ cpufreq_unregister_notifier(¬ifier_policy_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ return ret;
+ }
+
+ for_each_cpu(cpu)
+ cpufreq_update_policy(cpu);
+ return 0;
+}
+static void
+__exit cpufreq_stats_exit(void)
+{
+ unsigned int cpu;
+ cpufreq_unregister_notifier(¬ifier_policy_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ cpufreq_unregister_notifier(¬ifier_trans_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
+ for_each_cpu(cpu)
+ cpufreq_stats_free_table(cpu);
+}
+
+MODULE_AUTHOR ("Zou Nan hai <nanhai.zou@intel.com>");
+MODULE_DESCRIPTION ("'cpufreq_stats' - A driver to export cpufreq stats through sysfs filesystem");
+MODULE_LICENSE ("GPL");
+
+module_init(cpufreq_stats_init);
+module_exit(cpufreq_stats_exit);
}
EXPORT_SYMBOL_GPL(cpufreq_frequency_table_put_attr);
+struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu)
+{
+ return show_table[cpu];
+}
+EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table);
MODULE_AUTHOR ("Dominik Brodowski <linux@brodo.de>");
MODULE_DESCRIPTION ("CPUfreq frequency table helpers");
-/*
- * linux/drivers/cpufreq/proc_intf.c
- *
- * Copyright (C) 2002 - 2003 Dominik Brodowski
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/cpufreq.h>
-#include <linux/ctype.h>
-#include <linux/proc_fs.h>
-#include <asm/uaccess.h>
-
-#warning This module will be removed from the 2.6. kernel series soon after 2005-01-01
-
-#define CPUFREQ_ALL_CPUS ((NR_CPUS))
-
-static unsigned int warning_print = 0;
-
-/**
- * cpufreq_parse_policy - parse a policy string
- * @input_string: the string to parse.
- * @policy: the policy written inside input_string
- *
- * This function parses a "policy string" - something the user echo'es into
- * /proc/cpufreq or gives as boot parameter - into a struct cpufreq_policy.
- * If there are invalid/missing entries, they are replaced with current
- * cpufreq policy.
- */
-static int cpufreq_parse_policy(char input_string[42], struct cpufreq_policy *policy)
-{
- unsigned int min = 0;
- unsigned int max = 0;
- unsigned int cpu = 0;
- char str_governor[16];
- struct cpufreq_policy current_policy;
- unsigned int result = -EFAULT;
-
- if (!policy)
- return -EINVAL;
-
- policy->min = 0;
- policy->max = 0;
- policy->policy = 0;
- policy->cpu = CPUFREQ_ALL_CPUS;
-
- if (sscanf(input_string, "%d:%d:%d:%15s", &cpu, &min, &max, str_governor) == 4)
- {
- policy->min = min;
- policy->max = max;
- policy->cpu = cpu;
- result = 0;
- goto scan_policy;
- }
- if (sscanf(input_string, "%d%%%d%%%d%%%15s", &cpu, &min, &max, str_governor) == 4)
- {
- if (!cpufreq_get_policy(¤t_policy, cpu)) {
- policy->min = (min * current_policy.cpuinfo.max_freq) / 100;
- policy->max = (max * current_policy.cpuinfo.max_freq) / 100;
- policy->cpu = cpu;
- result = 0;
- goto scan_policy;
- }
- }
-
- if (sscanf(input_string, "%d:%d:%15s", &min, &max, str_governor) == 3)
- {
- policy->min = min;
- policy->max = max;
- result = 0;
- goto scan_policy;
- }
-
- if (sscanf(input_string, "%d%%%d%%%15s", &min, &max, str_governor) == 3)
- {
- if (!cpufreq_get_policy(¤t_policy, cpu)) {
- policy->min = (min * current_policy.cpuinfo.max_freq) / 100;
- policy->max = (max * current_policy.cpuinfo.max_freq) / 100;
- result = 0;
- goto scan_policy;
- }
- }
-
- return -EINVAL;
-
-scan_policy:
- result = cpufreq_parse_governor(str_governor, &policy->policy, &policy->governor);
-
- return result;
-}
-
-/**
- * cpufreq_proc_read - read /proc/cpufreq
- *
- * This function prints out the current cpufreq policy.
- */
-static int cpufreq_proc_read (
- char *page,
- char **start,
- off_t off,
- int count,
- int *eof,
- void *data)
-{
- char *p = page;
- int len = 0;
- struct cpufreq_policy policy;
- unsigned int min_pctg = 0;
- unsigned int max_pctg = 0;
- unsigned int i = 0;
-
- if (off != 0)
- goto end;
-
- if (!warning_print) {
- warning_print++;
- printk(KERN_INFO "Access to /proc/cpufreq is deprecated and "
- "will be removed from (new) 2.6. kernels soon "
- "after 2005-01-01\n");
- }
-
- p += sprintf(p, " minimum CPU frequency - maximum CPU frequency - policy\n");
- for (i=0;i<NR_CPUS;i++) {
- if (!cpu_online(i))
- continue;
-
- if (cpufreq_get_policy(&policy, i))
- continue;
-
- if (!policy.cpuinfo.max_freq)
- continue;
-
- min_pctg = (policy.min * 100) / policy.cpuinfo.max_freq;
- max_pctg = (policy.max * 100) / policy.cpuinfo.max_freq;
-
- p += sprintf(p, "CPU%3d %9d kHz (%3d %%) - %9d kHz (%3d %%) - ",
- i , policy.min, min_pctg, policy.max, max_pctg);
- if (policy.policy) {
- switch (policy.policy) {
- case CPUFREQ_POLICY_POWERSAVE:
- p += sprintf(p, "powersave\n");
- break;
- case CPUFREQ_POLICY_PERFORMANCE:
- p += sprintf(p, "performance\n");
- break;
- default:
- p += sprintf(p, "INVALID\n");
- break;
- }
- } else
- p += scnprintf(p, CPUFREQ_NAME_LEN, "%s\n", policy.governor->name);
- }
-end:
- len = (p - page);
- if (len <= off+count)
- *eof = 1;
- *start = page + off;
- len -= off;
- if (len>count)
- len = count;
- if (len<0)
- len = 0;
-
- return len;
-}
-
-
-/**
- * cpufreq_proc_write - handles writing into /proc/cpufreq
- *
- * This function calls the parsing script and then sets the policy
- * accordingly.
- */
-static int cpufreq_proc_write (
- struct file *file,
- const char __user *buffer,
- unsigned long count,
- void *data)
-{
- int result = 0;
- char proc_string[42] = {'\0'};
- struct cpufreq_policy policy;
- unsigned int i = 0;
-
-
- if ((count > sizeof(proc_string) - 1))
- return -EINVAL;
-
- if (copy_from_user(proc_string, buffer, count))
- return -EFAULT;
-
- if (!warning_print) {
- warning_print++;
- printk(KERN_INFO "Access to /proc/cpufreq is deprecated and "
- "will be removed from (new) 2.6. kernels soon "
- "after 2005-01-01\n");
- }
-
- proc_string[count] = '\0';
-
- result = cpufreq_parse_policy(proc_string, &policy);
- if (result)
- return -EFAULT;
-
- if (policy.cpu == CPUFREQ_ALL_CPUS)
- {
- for (i=0; i<NR_CPUS; i++)
- {
- policy.cpu = i;
- if (cpu_online(i))
- cpufreq_set_policy(&policy);
- }
- }
- else
- cpufreq_set_policy(&policy);
-
- return count;
-}
-
-
-/**
- * cpufreq_proc_init - add "cpufreq" to the /proc root directory
- *
- * This function adds "cpufreq" to the /proc root directory.
- */
-static int __init cpufreq_proc_init (void)
-{
- struct proc_dir_entry *entry = NULL;
-
- /* are these acceptable values? */
- entry = create_proc_entry("cpufreq", S_IFREG|S_IRUGO|S_IWUSR,
- &proc_root);
-
- if (!entry) {
- printk(KERN_ERR "unable to create /proc/cpufreq entry\n");
- return -EIO;
- } else {
- entry->read_proc = cpufreq_proc_read;
- entry->write_proc = cpufreq_proc_write;
- }
-
- return 0;
-}
-
-
-/**
- * cpufreq_proc_exit - removes "cpufreq" from the /proc root directory.
- *
- * This function removes "cpufreq" from the /proc root directory.
- */
-static void __exit cpufreq_proc_exit (void)
-{
- remove_proc_entry("cpufreq", &proc_root);
- return;
-}
-
-MODULE_AUTHOR ("Dominik Brodowski <linux@brodo.de>");
-MODULE_DESCRIPTION ("CPUfreq /proc/cpufreq interface");
-MODULE_LICENSE ("GPL");
-
-module_init(cpufreq_proc_init);
-module_exit(cpufreq_proc_exit);
--- /dev/null
+menu "Hardware crypto devices"
+
+config CRYPTO_DEV_PADLOCK
+ tristate "Support for VIA PadLock ACE"
+ depends on CRYPTO && X86 && !X86_64
+ help
+ Some VIA processors come with an integrated crypto engine
+	  (the so-called VIA PadLock ACE, Advanced Cryptography Engine)
+ that provides instructions for very fast {en,de}cryption
+ with some algorithms.
+
+ The instructions are used only when the CPU supports them.
+ Otherwise software encryption is used. If you are unsure,
+ say Y.
+
+config CRYPTO_DEV_PADLOCK_AES
+ bool "Support for AES in VIA PadLock"
+ depends on CRYPTO_DEV_PADLOCK
+ default y
+ help
+ Use VIA PadLock for AES algorithm.
+
+endmenu
--- /dev/null
+
+obj-$(CONFIG_CRYPTO_DEV_PADLOCK) += padlock.o
+
+padlock-objs-$(CONFIG_CRYPTO_DEV_PADLOCK_AES) += padlock-aes.o
+
+padlock-objs := padlock-generic.o $(padlock-objs-y)
+
--- /dev/null
+/*
+ * Cryptographic API.
+ *
+ * Support for VIA PadLock hardware crypto engine.
+ *
+ * Copyright (c) 2004 Michal Ludvig <michal@logix.cz>
+ *
+ * Key expansion routine taken from crypto/aes.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * ---------------------------------------------------------------------------
+ * Copyright (c) 2002, Dr Brian Gladman <brg@gladman.me.uk>, Worcester, UK.
+ * All rights reserved.
+ *
+ * LICENSE TERMS
+ *
+ * The free distribution and use of this software in both source and binary
+ * form is allowed (with or without changes) provided that:
+ *
+ * 1. distributions of this source code include the above copyright
+ * notice, this list of conditions and the following disclaimer;
+ *
+ * 2. distributions in binary form include the above copyright
+ * notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other associated materials;
+ *
+ * 3. the copyright holder's name is not used to endorse products
+ * built using this software without specific written permission.
+ *
+ * ALTERNATIVELY, provided that this notice is retained in full, this product
+ * may be distributed under the terms of the GNU General Public License (GPL),
+ * in which case the provisions of the GPL apply INSTEAD OF those given above.
+ *
+ * DISCLAIMER
+ *
+ * This software is provided 'as is' with no explicit or implied warranties
+ * in respect of its properties, including, but not limited to, correctness
+ * and/or fitness for purpose.
+ * ---------------------------------------------------------------------------
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/crypto.h>
+#include <linux/interrupt.h>
+#include <asm/byteorder.h>
+#include "padlock.h"
+
+#define AES_MIN_KEY_SIZE 16 /* in uint8_t units */
+#define AES_MAX_KEY_SIZE 32 /* ditto */
+#define AES_BLOCK_SIZE 16 /* ditto */
+#define AES_EXTENDED_KEY_SIZE 64 /* in uint32_t units */
+#define AES_EXTENDED_KEY_SIZE_B (AES_EXTENDED_KEY_SIZE * sizeof(uint32_t))
+
+struct aes_ctx {
+ uint32_t e_data[AES_EXTENDED_KEY_SIZE+4];
+ uint32_t d_data[AES_EXTENDED_KEY_SIZE+4];
+ uint32_t *E;
+ uint32_t *D;
+ int key_length;
+};
+
+/* ====== Key management routines ====== */
+
+static inline uint32_t
+generic_rotr32 (const uint32_t x, const unsigned bits)
+{
+ const unsigned n = bits % 32;
+ return (x >> n) | (x << (32 - n));
+}
+
+static inline uint32_t
+generic_rotl32 (const uint32_t x, const unsigned bits)
+{
+ const unsigned n = bits % 32;
+ return (x << n) | (x >> (32 - n));
+}
+
+#define rotl generic_rotl32
+#define rotr generic_rotr32
+
+/*
+ * #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
+ */
+static inline uint8_t
+byte(const uint32_t x, const unsigned n)
+{
+ return x >> (n << 3);
+}
+
+#define uint32_t_in(x) le32_to_cpu(*(const uint32_t *)(x))
+#define uint32_t_out(to, from) (*(uint32_t *)(to) = cpu_to_le32(from))
+
+#define E_KEY ctx->E
+#define D_KEY ctx->D
+
+static uint8_t pow_tab[256];
+static uint8_t log_tab[256];
+static uint8_t sbx_tab[256];
+static uint8_t isb_tab[256];
+static uint32_t rco_tab[10];
+static uint32_t ft_tab[4][256];
+static uint32_t it_tab[4][256];
+
+static uint32_t fl_tab[4][256];
+static uint32_t il_tab[4][256];
+
+static inline uint8_t
+f_mult (uint8_t a, uint8_t b)
+{
+ uint8_t aa = log_tab[a], cc = aa + log_tab[b];
+
+ return pow_tab[cc + (cc < aa ? 1 : 0)];
+}
+
+#define ff_mult(a,b) (a && b ? f_mult(a, b) : 0)
+
+#define f_rn(bo, bi, n, k) \
+ bo[n] = ft_tab[0][byte(bi[n],0)] ^ \
+ ft_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+ ft_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ ft_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+
+#define i_rn(bo, bi, n, k) \
+ bo[n] = it_tab[0][byte(bi[n],0)] ^ \
+ it_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+ it_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ it_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+
+#define ls_box(x) \
+ ( fl_tab[0][byte(x, 0)] ^ \
+ fl_tab[1][byte(x, 1)] ^ \
+ fl_tab[2][byte(x, 2)] ^ \
+ fl_tab[3][byte(x, 3)] )
+
+#define f_rl(bo, bi, n, k) \
+ bo[n] = fl_tab[0][byte(bi[n],0)] ^ \
+ fl_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+ fl_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ fl_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+
+#define i_rl(bo, bi, n, k) \
+ bo[n] = il_tab[0][byte(bi[n],0)] ^ \
+ il_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+ il_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ il_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+
+static void
+gen_tabs (void)
+{
+ uint32_t i, t;
+ uint8_t p, q;
+
+ /* log and power tables for GF(2**8) finite field with
+	   0x011b as modular polynomial - the simplest primitive
+ root is 0x03, used here to generate the tables */
+
+ for (i = 0, p = 1; i < 256; ++i) {
+ pow_tab[i] = (uint8_t) p;
+ log_tab[p] = (uint8_t) i;
+
+ p ^= (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+ }
+
+ log_tab[1] = 0;
+
+ for (i = 0, p = 1; i < 10; ++i) {
+ rco_tab[i] = p;
+
+ p = (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+ }
+
+ for (i = 0; i < 256; ++i) {
+ p = (i ? pow_tab[255 - log_tab[i]] : 0);
+ q = ((p >> 7) | (p << 1)) ^ ((p >> 6) | (p << 2));
+ p ^= 0x63 ^ q ^ ((q >> 6) | (q << 2));
+ sbx_tab[i] = p;
+ isb_tab[p] = (uint8_t) i;
+ }
+
+ for (i = 0; i < 256; ++i) {
+ p = sbx_tab[i];
+
+ t = p;
+ fl_tab[0][i] = t;
+ fl_tab[1][i] = rotl (t, 8);
+ fl_tab[2][i] = rotl (t, 16);
+ fl_tab[3][i] = rotl (t, 24);
+
+ t = ((uint32_t) ff_mult (2, p)) |
+ ((uint32_t) p << 8) |
+ ((uint32_t) p << 16) | ((uint32_t) ff_mult (3, p) << 24);
+
+ ft_tab[0][i] = t;
+ ft_tab[1][i] = rotl (t, 8);
+ ft_tab[2][i] = rotl (t, 16);
+ ft_tab[3][i] = rotl (t, 24);
+
+ p = isb_tab[i];
+
+ t = p;
+ il_tab[0][i] = t;
+ il_tab[1][i] = rotl (t, 8);
+ il_tab[2][i] = rotl (t, 16);
+ il_tab[3][i] = rotl (t, 24);
+
+ t = ((uint32_t) ff_mult (14, p)) |
+ ((uint32_t) ff_mult (9, p) << 8) |
+ ((uint32_t) ff_mult (13, p) << 16) |
+ ((uint32_t) ff_mult (11, p) << 24);
+
+ it_tab[0][i] = t;
+ it_tab[1][i] = rotl (t, 8);
+ it_tab[2][i] = rotl (t, 16);
+ it_tab[3][i] = rotl (t, 24);
+ }
+}
+
+#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+
+#define imix_col(y,x) \
+ u = star_x(x); \
+ v = star_x(u); \
+ w = star_x(v); \
+ t = w ^ (x); \
+ (y) = u ^ v ^ w; \
+ (y) ^= rotr(u ^ t, 8) ^ \
+ rotr(v ^ t, 16) ^ \
+ rotr(t,24)
+
+/* initialise the key schedule from the user supplied key */
+
+#define loop4(i) \
+{ t = rotr(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[4 * i]; E_KEY[4 * i + 4] = t; \
+ t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t; \
+ t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t; \
+ t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t; \
+}
+
+#define loop6(i) \
+{ t = rotr(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[6 * i]; E_KEY[6 * i + 6] = t; \
+ t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t; \
+ t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t; \
+ t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t; \
+ t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t; \
+ t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t; \
+}
+
+#define loop8(i) \
+{ t = rotr(t, 8); ; t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[8 * i]; E_KEY[8 * i + 8] = t; \
+ t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t; \
+ t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t; \
+ t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t; \
+ t = E_KEY[8 * i + 4] ^ ls_box(t); \
+ E_KEY[8 * i + 12] = t; \
+ t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t; \
+ t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t; \
+ t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t; \
+}
+
+/* Tells whether the ACE can generate
+ the extended key for a given key_len. */
+static inline int
+aes_hw_extkey_available(uint8_t key_len)
+{
+ /* TODO: We should check the actual CPU model/stepping
+ as it's possible that the capability will be
+ added in the next CPU revisions. */
+ if (key_len == 16)
+ return 1;
+ return 0;
+}
+
+static int
+aes_set_key(void *ctx_arg, const uint8_t *in_key, unsigned int key_len, uint32_t *flags)
+{
+ struct aes_ctx *ctx = ctx_arg;
+ uint32_t i, t, u, v, w;
+ uint32_t P[AES_EXTENDED_KEY_SIZE];
+ uint32_t rounds;
+
+ if (key_len != 16 && key_len != 24 && key_len != 32) {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->key_length = key_len;
+
+ ctx->E = ctx->e_data;
+ ctx->D = ctx->d_data;
+
+	/* Ensure 16-byte alignment of the keys for VIA PadLock. */
+ if ((int)(ctx->e_data) & 0x0F)
+ ctx->E += 4 - (((int)(ctx->e_data) & 0x0F) / sizeof (ctx->e_data[0]));
+
+ if ((int)(ctx->d_data) & 0x0F)
+ ctx->D += 4 - (((int)(ctx->d_data) & 0x0F) / sizeof (ctx->d_data[0]));
+
+ E_KEY[0] = uint32_t_in (in_key);
+ E_KEY[1] = uint32_t_in (in_key + 4);
+ E_KEY[2] = uint32_t_in (in_key + 8);
+ E_KEY[3] = uint32_t_in (in_key + 12);
+
+ /* Don't generate extended keys if the hardware can do it. */
+ if (aes_hw_extkey_available(key_len))
+ return 0;
+
+ switch (key_len) {
+ case 16:
+ t = E_KEY[3];
+ for (i = 0; i < 10; ++i)
+ loop4 (i);
+ break;
+
+ case 24:
+ E_KEY[4] = uint32_t_in (in_key + 16);
+ t = E_KEY[5] = uint32_t_in (in_key + 20);
+ for (i = 0; i < 8; ++i)
+ loop6 (i);
+ break;
+
+ case 32:
+ E_KEY[4] = uint32_t_in (in_key + 16);
+ E_KEY[5] = uint32_t_in (in_key + 20);
+ E_KEY[6] = uint32_t_in (in_key + 24);
+ t = E_KEY[7] = uint32_t_in (in_key + 28);
+ for (i = 0; i < 7; ++i)
+ loop8 (i);
+ break;
+ }
+
+ D_KEY[0] = E_KEY[0];
+ D_KEY[1] = E_KEY[1];
+ D_KEY[2] = E_KEY[2];
+ D_KEY[3] = E_KEY[3];
+
+ for (i = 4; i < key_len + 24; ++i) {
+ imix_col (D_KEY[i], E_KEY[i]);
+ }
+
+ /* PadLock needs a different format of the decryption key. */
+ rounds = 10 + (key_len - 16) / 4;
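+	/* e.g. a 16-byte key gives 10 rounds, a 24-byte key 12, a 32-byte key 14 (AES-128/192/256). */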
+
+ for (i = 0; i < rounds; i++) {
+ P[((i + 1) * 4) + 0] = D_KEY[((rounds - i - 1) * 4) + 0];
+ P[((i + 1) * 4) + 1] = D_KEY[((rounds - i - 1) * 4) + 1];
+ P[((i + 1) * 4) + 2] = D_KEY[((rounds - i - 1) * 4) + 2];
+ P[((i + 1) * 4) + 3] = D_KEY[((rounds - i - 1) * 4) + 3];
+ }
+
+ P[0] = E_KEY[(rounds * 4) + 0];
+ P[1] = E_KEY[(rounds * 4) + 1];
+ P[2] = E_KEY[(rounds * 4) + 2];
+ P[3] = E_KEY[(rounds * 4) + 3];
+
+ memcpy(D_KEY, P, AES_EXTENDED_KEY_SIZE_B);
+
+ return 0;
+}
+
+/* ====== Encryption/decryption routines ====== */
+
+/* This is the real call to PadLock. */
+static inline void
+padlock_xcrypt_ecb(uint8_t *input, uint8_t *output, uint8_t *key,
+ void *control_word, uint32_t count)
+{
+ asm volatile ("pushfl; popfl"); /* enforce key reload. */
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
+ : "+S"(input), "+D"(output)
+ : "d"(control_word), "b"(key), "c"(count));
+}
+
+static void
+aes_padlock(void *ctx_arg, uint8_t *out_arg, const uint8_t *in_arg, int encdec)
+{
+ /* Don't blindly modify this structure - the items must
+	   be aligned on 16-byte boundaries! */
+ struct padlock_xcrypt_data {
+ uint8_t buf[AES_BLOCK_SIZE];
+ union cword cword;
+ };
+
+ struct aes_ctx *ctx = ctx_arg;
+ char bigbuf[sizeof(struct padlock_xcrypt_data) + 16];
+ struct padlock_xcrypt_data *data;
+ void *key;
+
+	/* Place 'data' at the first 16-byte aligned address in 'bigbuf'. */
+ if (((long)bigbuf) & 0x0F)
+ data = (void*)(bigbuf + 16 - ((long)bigbuf & 0x0F));
+ else
+ data = (void*)bigbuf;
+
+ /* Prepare Control word. */
+ memset (data, 0, sizeof(struct padlock_xcrypt_data));
+ data->cword.b.encdec = !encdec; /* in the rest of cryptoapi ENC=1/DEC=0 */
+ data->cword.b.rounds = 10 + (ctx->key_length - 16) / 4;
+ data->cword.b.ksize = (ctx->key_length - 16) / 8;
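+	/* e.g. key_length 16 -> rounds 10, ksize 0; key_length 32 -> rounds 14, ksize 2 */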
+
+	/* Can the hardware generate the extended key by itself? */
+ if (!aes_hw_extkey_available(ctx->key_length))
+ data->cword.b.keygen = 1;
+
+ /* ctx->E starts with a plain key - if the hardware is capable
+	   of generating the extended key itself, we must supply
+ the plain key for both Encryption and Decryption. */
+ if (encdec == CRYPTO_DIR_ENCRYPT || data->cword.b.keygen == 0)
+ key = ctx->E;
+ else
+ key = ctx->D;
+
+ memcpy(data->buf, in_arg, AES_BLOCK_SIZE);
+ padlock_xcrypt_ecb(data->buf, data->buf, key, &data->cword, 1);
+ memcpy(out_arg, data->buf, AES_BLOCK_SIZE);
+}
+
+static void
+aes_encrypt(void *ctx_arg, uint8_t *out, const uint8_t *in)
+{
+ aes_padlock(ctx_arg, out, in, CRYPTO_DIR_ENCRYPT);
+}
+
+static void
+aes_decrypt(void *ctx_arg, uint8_t *out, const uint8_t *in)
+{
+ aes_padlock(ctx_arg, out, in, CRYPTO_DIR_DECRYPT);
+}
+
+static struct crypto_alg aes_alg = {
+ .cra_name = "aes",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct aes_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
+ .cra_u = {
+ .cipher = {
+ .cia_min_keysize = AES_MIN_KEY_SIZE,
+ .cia_max_keysize = AES_MAX_KEY_SIZE,
+ .cia_setkey = aes_set_key,
+ .cia_encrypt = aes_encrypt,
+ .cia_decrypt = aes_decrypt
+ }
+ }
+};
+
+int __init padlock_init_aes(void)
+{
+ printk(KERN_NOTICE PFX "Using VIA PadLock ACE for AES algorithm.\n");
+
+ gen_tabs();
+ return crypto_register_alg(&aes_alg);
+}
+
+void __exit padlock_fini_aes(void)
+{
+ crypto_unregister_alg(&aes_alg);
+}
--- /dev/null
+/*
+ * Cryptographic API.
+ *
+ * Support for VIA PadLock hardware crypto engine.
+ *
+ * Copyright (c) 2004 Michal Ludvig <michal@logix.cz>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/crypto.h>
+#include <asm/byteorder.h>
+#include "padlock.h"
+
+static int __init
+padlock_init(void)
+{
+ int ret = -ENOSYS;
+
+ if (!cpu_has_xcrypt) {
+ printk(KERN_ERR PFX "VIA PadLock not detected.\n");
+ return -ENODEV;
+ }
+
+ if (!cpu_has_xcrypt_enabled) {
+ printk(KERN_ERR PFX "VIA PadLock detected, but not enabled. Hmm, strange...\n");
+ return -ENODEV;
+ }
+
+#ifdef CONFIG_CRYPTO_DEV_PADLOCK_AES
+ if ((ret = padlock_init_aes())) {
+ printk(KERN_ERR PFX "VIA PadLock AES initialization failed.\n");
+ return ret;
+ }
+#endif
+
+ if (ret == -ENOSYS)
+ printk(KERN_ERR PFX "Hmm, VIA PadLock was compiled without any algorithm.\n");
+
+ return ret;
+}
+
+static void __exit
+padlock_fini(void)
+{
+#ifdef CONFIG_CRYPTO_DEV_PADLOCK_AES
+ padlock_fini_aes();
+#endif
+}
+
+module_init(padlock_init);
+module_exit(padlock_fini);
+
+MODULE_DESCRIPTION("VIA PadLock crypto engine support.");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Michal Ludvig");
--- /dev/null
+/*
+ * Driver for VIA PadLock
+ *
+ * Copyright (c) 2004 Michal Ludvig <michal@logix.cz>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#ifndef _CRYPTO_PADLOCK_H
+#define _CRYPTO_PADLOCK_H
+
+/* Control word. */
+union cword {
+ uint32_t cword[4];
+ struct {
+ int rounds:4;
+ int algo:3;
+ int keygen:1;
+ int interm:1;
+ int encdec:1;
+ int ksize:2;
+ } b;
+};
+
+#define PFX "padlock: "
+
+#ifdef CONFIG_CRYPTO_DEV_PADLOCK_AES
+int padlock_init_aes(void);
+void padlock_fini_aes(void);
+#endif
+
+#endif /* _CRYPTO_PADLOCK_H */
# Makefile for the Linux device tree
-# Being anal sometimes saves a crash/reboot cycle... ;-)
-EXTRA_CFLAGS := -Werror
-
obj-$(CONFIG_EISA) += eisa-bus.o
obj-${CONFIG_EISA_PCI_EISA} += pci_eisa.o
return -ENODEV;
}
-
-#ifdef CONFIG_IA64_EARLY_PRINTK_UART
-unsigned long
-hcdp_early_uart (void)
-{
- efi_system_table_t *systab;
- efi_config_table_t *config_tables;
- unsigned long addr = 0;
- struct pcdp *pcdp = 0;
- struct pcdp_uart *uart;
- int i;
-
- systab = (efi_system_table_t *) ia64_boot_param->efi_systab;
- if (!systab)
- return 0;
- systab = __va(systab);
-
- config_tables = (efi_config_table_t *) systab->tables;
- if (!config_tables)
- return 0;
- config_tables = __va(config_tables);
-
- for (i = 0; i < systab->nr_tables; i++) {
- if (efi_guidcmp(config_tables[i].guid, HCDP_TABLE_GUID) == 0) {
- pcdp = (struct pcdp *) config_tables[i].table;
- break;
- }
- }
- if (!pcdp)
- return 0;
- pcdp = __va(pcdp);
-
- for (i = 0, uart = pcdp->uart; i < pcdp->num_uarts; i++, uart++) {
- if (uart->type == PCDP_CONSOLE_UART) {
- addr = uart->addr.address;
- break;
- }
- }
- return addr;
-}
-#endif /* CONFIG_IA64_EARLY_PRINTK_UART */
tristate "MPC8xx CPM I2C interface"
depends on 8xx && I2C
+config I2C_ALGO_SIBYTE
+ tristate "SiByte SMBus interface"
+ depends on SIBYTE_SB1xxx_SOC && I2C
+ help
+ Supports the SiByte SOC on-chip I2C interfaces (2 channels).
+
+config I2C_ALGO_SGI
+ tristate "I2C SGI interfaces"
+ depends on I2C && (SGI_IP22 || SGI_IP32 || X86_VISWS)
+ help
+ Supports the SGI interfaces like the ones found on SGI Indy VINO
+ or SGI O2 MACE.
+
endmenu
obj-$(CONFIG_I2C_ALGOPCF) += i2c-algo-pcf.o
obj-$(CONFIG_I2C_ALGOPCA) += i2c-algo-pca.o
obj-$(CONFIG_I2C_ALGOITE) += i2c-algo-ite.o
+obj-$(CONFIG_I2C_ALGO_SIBYTE) += i2c-algo-sibyte.o
+obj-$(CONFIG_I2C_ALGO_SGI) += i2c-algo-sgi.o
ifeq ($(CONFIG_I2C_DEBUG_ALGO),y)
EXTRA_CFLAGS += -DDEBUG
state = pca_status(adap);
if ( state != 0xF8 ) {
- printk(KERN_ERR DRIVER ": bus is not idle. status is %#04x\n", state );
+ dev_dbg(&i2c_adap->dev, "bus is not idle. status is %#04x\n", state );
/* FIXME: what to do. Force stop ? */
return -EREMOTEIO;
}
static u32 pca_func(struct i2c_adapter *adap)
{
- return I2C_FUNC_SMBUS_EMUL;
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}
static int pca_init(struct i2c_algo_pca_data *adap)
static u32 pcf_func(struct i2c_adapter *adap)
{
- return I2C_FUNC_SMBUS_EMUL | I2C_FUNC_10BIT_ADDR |
- I2C_FUNC_PROTOCOL_MANGLING;
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |
+ I2C_FUNC_10BIT_ADDR | I2C_FUNC_PROTOCOL_MANGLING;
}
/* -----exported algorithm data: ------------------------------------- */
--- /dev/null
+/*
+ * i2c-algo-sgi.c: i2c driver algorithms for SGI adapters.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License version 2 as published by the Free Software Foundation.
+ *
+ * Copyright (C) 2003 Ladislav Michl <ladis@linux-mips.org>
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+
+#include <linux/i2c.h>
+#include <linux/i2c-algo-sgi.h>
+
+
+#define SGI_I2C_FORCE_IDLE (0 << 0)
+#define SGI_I2C_NOT_IDLE (1 << 0)
+#define SGI_I2C_WRITE (0 << 1)
+#define SGI_I2C_READ (1 << 1)
+#define SGI_I2C_RELEASE_BUS (0 << 2)
+#define SGI_I2C_HOLD_BUS (1 << 2)
+#define SGI_I2C_XFER_DONE (0 << 4)
+#define SGI_I2C_XFER_BUSY (1 << 4)
+#define SGI_I2C_ACK (0 << 5)
+#define SGI_I2C_NACK (1 << 5)
+#define SGI_I2C_BUS_OK (0 << 7)
+#define SGI_I2C_BUS_ERR (1 << 7)
+
+#define get_control() adap->getctrl(adap->data)
+#define set_control(val) adap->setctrl(adap->data, val)
+#define read_data() adap->rdata(adap->data)
+#define write_data(val) adap->wdata(adap->data, val)
+
+
+static int wait_xfer_done(struct i2c_algo_sgi_data *adap)
+{
+ int i;
+
+ for (i = 0; i < adap->xfer_timeout; i++) {
+ if ((get_control() & SGI_I2C_XFER_BUSY) == 0)
+ return 0;
+ udelay(1);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int wait_ack(struct i2c_algo_sgi_data *adap)
+{
+ int i;
+
+ if (wait_xfer_done(adap))
+ return -ETIMEDOUT;
+ for (i = 0; i < adap->ack_timeout; i++) {
+ if ((get_control() & SGI_I2C_NACK) == 0)
+ return 0;
+ udelay(1);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int force_idle(struct i2c_algo_sgi_data *adap)
+{
+ int i;
+
+ set_control(SGI_I2C_FORCE_IDLE);
+ for (i = 0; i < adap->xfer_timeout; i++) {
+ if ((get_control() & SGI_I2C_NOT_IDLE) == 0)
+ goto out;
+ udelay(1);
+ }
+ return -ETIMEDOUT;
+out:
+ if (get_control() & SGI_I2C_BUS_ERR)
+ return -EIO;
+ return 0;
+}
+
+static int do_address(struct i2c_algo_sgi_data *adap, unsigned int addr,
+ int rd)
+{
+ if (rd)
+ set_control(SGI_I2C_NOT_IDLE);
+	/* Check if the bus is idle; if not, force it idle */
+ if (get_control() & SGI_I2C_NOT_IDLE)
+ if (force_idle(adap))
+ return -EIO;
+ /* Write out the i2c chip address and specify operation */
+ set_control(SGI_I2C_HOLD_BUS | SGI_I2C_WRITE | SGI_I2C_NOT_IDLE);
+ if (rd)
+ addr |= 1;
+ write_data(addr);
+ if (wait_ack(adap))
+ return -EIO;
+ return 0;
+}
+
+static int i2c_read(struct i2c_algo_sgi_data *adap, unsigned char *buf,
+ unsigned int len)
+{
+ int i;
+
+ set_control(SGI_I2C_HOLD_BUS | SGI_I2C_READ | SGI_I2C_NOT_IDLE);
+ for (i = 0; i < len; i++) {
+ if (wait_xfer_done(adap))
+ return -EIO;
+ buf[i] = read_data();
+ }
+ set_control(SGI_I2C_RELEASE_BUS | SGI_I2C_FORCE_IDLE);
+
+ return 0;
+
+}
+
+static int i2c_write(struct i2c_algo_sgi_data *adap, unsigned char *buf,
+ unsigned int len)
+{
+ int i;
+
+ /* We are already in write state */
+ for (i = 0; i < len; i++) {
+ write_data(buf[i]);
+ if (wait_ack(adap))
+ return -EIO;
+ }
+ return 0;
+}
+
+static int sgi_xfer(struct i2c_adapter *i2c_adap, struct i2c_msg msgs[],
+ int num)
+{
+ struct i2c_algo_sgi_data *adap = i2c_adap->algo_data;
+ struct i2c_msg *p;
+ int i, err = 0;
+
+ for (i = 0; !err && i < num; i++) {
+ p = &msgs[i];
+ err = do_address(adap, p->addr, p->flags & I2C_M_RD);
+ if (err || !p->len)
+ continue;
+ if (p->flags & I2C_M_RD)
+ err = i2c_read(adap, p->buf, p->len);
+ else
+ err = i2c_write(adap, p->buf, p->len);
+ }
+
+ return err;
+}
+
+static u32 sgi_func(struct i2c_adapter *adap)
+{
+ return I2C_FUNC_SMBUS_EMUL;
+}
+
+static struct i2c_algorithm sgi_algo = {
+ .name = "SGI algorithm",
+ .id = I2C_ALGO_SGI,
+ .master_xfer = sgi_xfer,
+ .functionality = sgi_func,
+};
+
+/*
+ * registering functions to load algorithms at runtime
+ */
+int i2c_sgi_add_bus(struct i2c_adapter *adap)
+{
+ adap->id |= sgi_algo.id;
+ adap->algo = &sgi_algo;
+
+ return i2c_add_adapter(adap);
+}
+
+
+int i2c_sgi_del_bus(struct i2c_adapter *adap)
+{
+ return i2c_del_adapter(adap);
+}
+
+EXPORT_SYMBOL(i2c_sgi_add_bus);
+EXPORT_SYMBOL(i2c_sgi_del_bus);
+
+MODULE_AUTHOR("Ladislav Michl <ladis@linux-mips.org>");
+MODULE_DESCRIPTION("I2C-Bus SGI algorithm");
+MODULE_LICENSE("GPL");
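For context, a bus driver uses this algorithm by filling in a struct i2c_algo_sgi_data with its register accessors and calling i2c_sgi_add_bus(). The sketch below is only illustrative: the accessor bodies, timeout values, and adapter name are placeholders, and the callback signatures are inferred from how the algorithm above invokes them, not quoted from the real linux/i2c-algo-sgi.h.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-sgi.h>

/* Stand-in storage so the sketch is self-contained; a real driver would
 * read and write the VINO/MACE I2C registers here instead. */
static unsigned int example_ctrl;
static unsigned int example_data_reg;

static unsigned int example_getctrl(void *data)
{
	return example_ctrl;
}

static void example_setctrl(void *data, unsigned int val)
{
	example_ctrl = val;
}

static unsigned int example_rdata(void *data)
{
	return example_data_reg;
}

static void example_wdata(void *data, unsigned int val)
{
	example_data_reg = val;
}

static struct i2c_algo_sgi_data example_algo_data = {
	.getctrl	= example_getctrl,
	.setctrl	= example_setctrl,
	.rdata		= example_rdata,
	.wdata		= example_wdata,
	.xfer_timeout	= 200,		/* placeholder; polled with udelay(1) */
	.ack_timeout	= 1000,		/* placeholder */
};

static struct i2c_adapter example_adapter = {
	.name		= "example SGI I2C adapter",
	.algo_data	= &example_algo_data,
};

static int __init example_init(void)
{
	/* i2c_sgi_add_bus() installs sgi_algo and registers the adapter. */
	return i2c_sgi_add_bus(&example_adapter);
}

static void __exit example_exit(void)
{
	i2c_sgi_del_bus(&example_adapter);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");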
--- /dev/null
+/* ------------------------------------------------------------------------- */
+/* i2c-algo-sibyte.c i2c driver algorithms for bit-shift adapters */
+/* ------------------------------------------------------------------------- */
+/* Copyright (C) 2001,2002,2003 Broadcom Corporation
+ Copyright (C) 1995-2000 Simon G. Vogl
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
+/* ------------------------------------------------------------------------- */
+
+/* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and even
+ Frodo Looijaard <frodol@dds.nl>. */
+
+/* Ported for SiByte SOCs by Broadcom Corporation. */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/sibyte/sb1250_regs.h>
+#include <asm/sibyte/sb1250_smbus.h>
+
+#include <linux/i2c.h>
+#include <linux/i2c-algo-sibyte.h>
+
+/* ----- global defines ----------------------------------------------- */
+#define SMB_CSR(a,r) ((long)(a->reg_base + r))
+
+/* ----- global variables --------------------------------------------- */
+
+/* module parameters:
+ */
+static int bit_scan=0; /* have a look at what's hanging 'round */
+
+
+static int smbus_xfer(struct i2c_adapter *i2c_adap, u16 addr,
+ unsigned short flags, char read_write,
+ u8 command, int size, union i2c_smbus_data * data)
+{
+ struct i2c_algo_sibyte_data *adap = i2c_adap->algo_data;
+ int data_bytes = 0;
+ int error;
+
+ while (csr_in32(SMB_CSR(adap, R_SMB_STATUS)) & M_SMB_BUSY)
+ ;
+
+ switch (size) {
+ case I2C_SMBUS_QUICK:
+ csr_out32((V_SMB_ADDR(addr) | (read_write == I2C_SMBUS_READ ? M_SMB_QDATA : 0) |
+ V_SMB_TT_QUICKCMD), SMB_CSR(adap, R_SMB_START));
+ break;
+ case I2C_SMBUS_BYTE:
+ if (read_write == I2C_SMBUS_READ) {
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_RD1BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ data_bytes = 1;
+ } else {
+ csr_out32(V_SMB_CMD(command), SMB_CSR(adap, R_SMB_CMD));
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_WR1BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ }
+ break;
+ case I2C_SMBUS_BYTE_DATA:
+ csr_out32(V_SMB_CMD(command), SMB_CSR(adap, R_SMB_CMD));
+ if (read_write == I2C_SMBUS_READ) {
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_CMD_RD1BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ data_bytes = 1;
+ } else {
+ csr_out32(V_SMB_LB(data->byte), SMB_CSR(adap, R_SMB_DATA));
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_WR2BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ }
+ break;
+ case I2C_SMBUS_WORD_DATA:
+ csr_out32(V_SMB_CMD(command), SMB_CSR(adap, R_SMB_CMD));
+ if (read_write == I2C_SMBUS_READ) {
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_CMD_RD2BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ data_bytes = 2;
+ } else {
+ csr_out32(V_SMB_LB(data->word & 0xff), SMB_CSR(adap, R_SMB_DATA));
+ csr_out32(V_SMB_MB(data->word >> 8), SMB_CSR(adap, R_SMB_DATA));
+ csr_out32((V_SMB_ADDR(addr) | V_SMB_TT_WR2BYTE),
+ SMB_CSR(adap, R_SMB_START));
+ }
+ break;
+ default:
+ return -1; /* XXXKW better error code? */
+ }
+
+ while (csr_in32(SMB_CSR(adap, R_SMB_STATUS)) & M_SMB_BUSY)
+ ;
+
+ error = csr_in32(SMB_CSR(adap, R_SMB_STATUS));
+ if (error & M_SMB_ERROR) {
+ /* Clear error bit by writing a 1 */
+ csr_out32(M_SMB_ERROR, SMB_CSR(adap, R_SMB_STATUS));
+ return -1; /* XXXKW better error code? */
+ }
+
+ if (data_bytes == 1)
+ data->byte = csr_in32(SMB_CSR(adap, R_SMB_DATA)) & 0xff;
+ if (data_bytes == 2)
+ data->word = csr_in32(SMB_CSR(adap, R_SMB_DATA)) & 0xffff;
+
+ return 0;
+}
+
+static int algo_control(struct i2c_adapter *adapter,
+ unsigned int cmd, unsigned long arg)
+{
+ return 0;
+}
+
+static u32 bit_func(struct i2c_adapter *adap)
+{
+ return (I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+ I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA);
+}
+
+
+/* -----exported algorithm data: ------------------------------------- */
+
+static struct i2c_algorithm i2c_sibyte_algo = {
+ "SiByte algorithm",
+ I2C_ALGO_SIBYTE,
+ NULL, /* master_xfer */
+ smbus_xfer, /* smbus_xfer */
+ NULL, /* slave_xmit */
+ NULL, /* slave_recv */
+ algo_control, /* ioctl */
+ bit_func, /* functionality */
+};
+
+/*
+ * registering functions to load algorithms at runtime
+ */
+int i2c_sibyte_add_bus(struct i2c_adapter *i2c_adap, int speed)
+{
+ int i;
+ struct i2c_algo_sibyte_data *adap = i2c_adap->algo_data;
+
+ /* register new adapter to i2c module... */
+
+ i2c_adap->id |= i2c_sibyte_algo.id;
+ i2c_adap->algo = &i2c_sibyte_algo;
+
+	/* Program the requested bus frequency (typically 100 kHz) */
+ csr_out32(speed, SMB_CSR(adap,R_SMB_FREQ));
+ csr_out32(0, SMB_CSR(adap,R_SMB_CONTROL));
+
+ /* scan bus */
+ if (bit_scan) {
+ union i2c_smbus_data data;
+ int rc;
+ printk(KERN_INFO " i2c-algo-sibyte.o: scanning bus %s.\n",
+ i2c_adap->name);
+ for (i = 0x00; i < 0x7f; i++) {
+ /* XXXKW is this a realistic probe? */
+ rc = smbus_xfer(i2c_adap, i, 0, I2C_SMBUS_READ, 0,
+ I2C_SMBUS_BYTE_DATA, &data);
+ if (!rc) {
+ printk("(%02x)",i);
+ } else
+ printk(".");
+ }
+ printk("\n");
+ }
+
+ i2c_add_adapter(i2c_adap);
+
+ return 0;
+}
+
+
+int i2c_sibyte_del_bus(struct i2c_adapter *adap)
+{
+ int res;
+
+ if ((res = i2c_del_adapter(adap)) < 0)
+ return res;
+
+ return 0;
+}
+
+int __init i2c_algo_sibyte_init (void)
+{
+ printk("i2c-algo-sibyte.o: i2c SiByte algorithm module\n");
+ return 0;
+}
+
+
+EXPORT_SYMBOL(i2c_sibyte_add_bus);
+EXPORT_SYMBOL(i2c_sibyte_del_bus);
+
+#ifdef MODULE
+MODULE_AUTHOR("Kip Walker, Broadcom Corp.");
+MODULE_DESCRIPTION("SiByte I2C-Bus algorithm");
+MODULE_PARM(bit_scan, "i");
+MODULE_PARM_DESC(bit_scan, "Scan for active chips on the bus");
+MODULE_LICENSE("GPL");
+
+int init_module(void)
+{
+ return i2c_algo_sibyte_init();
+}
+
+void cleanup_module(void)
+{
+}
+#endif
--- /dev/null
+/*
+ * i2c-au1550.c: SMBus (i2c) adapter for Alchemy PSC interface
+ * Copyright (C) 2004 Embedded Edge, LLC <dan@embeddededge.com>
+ *
+ * 2.6 port by Matt Porter <mporter@kernel.crashing.org>
+ *
+ * The documentation describes this as an SMBus controller, but it doesn't
+ * understand any of the SMBus protocol in hardware. It's really an I2C
+ * controller that could emulate most of the SMBus in software.
+ *
+ * This is just a skeleton adapter to use with the Au1550 PSC
+ * algorithm. It was developed for the Pb1550, but will work with
+ * any Au1550 board that has a similar PSC configuration.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/i2c.h>
+
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-pb1x00/pb1550.h>
+#include <asm/mach-au1x00/au1xxx_psc.h>
+
+#include "i2c-au1550.h"
+
+static int
+wait_xfer_done(struct i2c_au1550_data *adap)
+{
+ u32 stat;
+ int i;
+ volatile psc_smb_t *sp;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ /* Wait for Tx FIFO Underflow.
+ */
+ for (i = 0; i < adap->xfer_timeout; i++) {
+ stat = sp->psc_smbevnt;
+ au_sync();
+ if ((stat & PSC_SMBEVNT_TU) != 0) {
+ /* Clear it. */
+ sp->psc_smbevnt = PSC_SMBEVNT_TU;
+ au_sync();
+ return 0;
+ }
+ udelay(1);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int
+wait_ack(struct i2c_au1550_data *adap)
+{
+ u32 stat;
+ volatile psc_smb_t *sp;
+
+ if (wait_xfer_done(adap))
+ return -ETIMEDOUT;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ stat = sp->psc_smbevnt;
+ au_sync();
+
+ if ((stat & (PSC_SMBEVNT_DN | PSC_SMBEVNT_AN | PSC_SMBEVNT_AL)) != 0)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static int
+wait_master_done(struct i2c_au1550_data *adap)
+{
+ u32 stat;
+ int i;
+ volatile psc_smb_t *sp;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ /* Wait for Master Done.
+ */
+ for (i = 0; i < adap->xfer_timeout; i++) {
+ stat = sp->psc_smbevnt;
+ au_sync();
+ if ((stat & PSC_SMBEVNT_MD) != 0)
+ return 0;
+ udelay(1);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int
+do_address(struct i2c_au1550_data *adap, unsigned int addr, int rd)
+{
+ volatile psc_smb_t *sp;
+ u32 stat;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ /* Reset the FIFOs, clear events.
+ */
+ sp->psc_smbpcr = PSC_SMBPCR_DC;
+ sp->psc_smbevnt = PSC_SMBEVNT_ALLCLR;
+ au_sync();
+ do {
+ stat = sp->psc_smbpcr;
+ au_sync();
+ } while ((stat & PSC_SMBPCR_DC) != 0);
+
+ /* Write out the i2c chip address and specify operation
+ */
+ addr <<= 1;
+ if (rd)
+ addr |= 1;
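+	/* e.g. 7-bit address 0x50 becomes 0xa0 for a write, 0xa1 for a read */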
+
+ /* Put byte into fifo, start up master.
+ */
+ sp->psc_smbtxrx = addr;
+ au_sync();
+ sp->psc_smbpcr = PSC_SMBPCR_MS;
+ au_sync();
+ if (wait_ack(adap))
+ return -EIO;
+ return 0;
+}
+
+static u32
+wait_for_rx_byte(struct i2c_au1550_data *adap, u32 *ret_data)
+{
+ int j;
+ u32 data, stat;
+ volatile psc_smb_t *sp;
+
+ if (wait_xfer_done(adap))
+ return -EIO;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ j = adap->xfer_timeout * 100;
+ do {
+ j--;
+ if (j <= 0)
+ return -EIO;
+
+ stat = sp->psc_smbstat;
+ au_sync();
+ if ((stat & PSC_SMBSTAT_RE) == 0)
+ j = 0;
+ else
+ udelay(1);
+ } while (j > 0);
+ data = sp->psc_smbtxrx;
+ au_sync();
+ *ret_data = data;
+
+ return 0;
+}
+
+static int
+i2c_read(struct i2c_au1550_data *adap, unsigned char *buf,
+ unsigned int len)
+{
+ int i;
+ u32 data;
+ volatile psc_smb_t *sp;
+
+ if (len == 0)
+ return 0;
+
+ /* A read is performed by stuffing the transmit fifo with
+ * zero bytes for timing, waiting for bytes to appear in the
+ * receive fifo, then reading the bytes.
+ */
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ i = 0;
+ while (i < (len-1)) {
+ sp->psc_smbtxrx = 0;
+ au_sync();
+ if (wait_for_rx_byte(adap, &data))
+ return -EIO;
+
+ buf[i] = data;
+ i++;
+ }
+
+ /* The last byte has to indicate transfer done.
+ */
+ sp->psc_smbtxrx = PSC_SMBTXRX_STP;
+ au_sync();
+ if (wait_master_done(adap))
+ return -EIO;
+
+ data = sp->psc_smbtxrx;
+ au_sync();
+ buf[i] = data;
+ return 0;
+}
+
+static int
+i2c_write(struct i2c_au1550_data *adap, unsigned char *buf,
+ unsigned int len)
+{
+ int i;
+ u32 data;
+ volatile psc_smb_t *sp;
+
+ if (len == 0)
+ return 0;
+
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+
+ i = 0;
+ while (i < (len-1)) {
+ data = buf[i];
+ sp->psc_smbtxrx = data;
+ au_sync();
+ if (wait_ack(adap))
+ return -EIO;
+ i++;
+ }
+
+ /* The last byte has to indicate transfer done.
+ */
+ data = buf[i];
+ data |= PSC_SMBTXRX_STP;
+ sp->psc_smbtxrx = data;
+ au_sync();
+ if (wait_master_done(adap))
+ return -EIO;
+ return 0;
+}
+
+static int
+au1550_xfer(struct i2c_adapter *i2c_adap, struct i2c_msg msgs[], int num)
+{
+ struct i2c_au1550_data *adap = i2c_adap->algo_data;
+ struct i2c_msg *p;
+ int i, err = 0;
+
+ for (i = 0; !err && i < num; i++) {
+ p = &msgs[i];
+ err = do_address(adap, p->addr, p->flags & I2C_M_RD);
+ if (err || !p->len)
+ continue;
+ if (p->flags & I2C_M_RD)
+ err = i2c_read(adap, p->buf, p->len);
+ else
+ err = i2c_write(adap, p->buf, p->len);
+ }
+
+ /* Return the number of messages processed, or the error code.
+ */
+ if (err == 0)
+ err = num;
+ return err;
+}
+
+static u32
+au1550_func(struct i2c_adapter *adap)
+{
+ return I2C_FUNC_I2C;
+}
+
+static struct i2c_algorithm au1550_algo = {
+ .name = "Au1550 algorithm",
+ .id = I2C_ALGO_AU1550,
+ .master_xfer = au1550_xfer,
+ .functionality = au1550_func,
+};
+
+/*
+ * registering functions to load algorithms at runtime
+ * Prior to calling us, the 50MHz clock frequency and routing
+ * must have been set up for the PSC indicated by the adapter.
+ */
+int
+i2c_au1550_add_bus(struct i2c_adapter *i2c_adap)
+{
+ struct i2c_au1550_data *adap = i2c_adap->algo_data;
+ volatile psc_smb_t *sp;
+ u32 stat;
+
+ i2c_adap->algo = &au1550_algo;
+
+ /* Now, set up the PSC for SMBus PIO mode.
+ */
+ sp = (volatile psc_smb_t *)(adap->psc_base);
+ sp->psc_ctrl = PSC_CTRL_DISABLE;
+ au_sync();
+ sp->psc_sel = PSC_SEL_PS_SMBUSMODE;
+ sp->psc_smbcfg = 0;
+ au_sync();
+ sp->psc_ctrl = PSC_CTRL_ENABLE;
+ au_sync();
+ do {
+ stat = sp->psc_smbstat;
+ au_sync();
+ } while ((stat & PSC_SMBSTAT_SR) == 0);
+
+ sp->psc_smbcfg = (PSC_SMBCFG_RT_FIFO8 | PSC_SMBCFG_TT_FIFO8 |
+ PSC_SMBCFG_DD_DISABLE);
+
+ /* Divide by 8 to get a 6.25 MHz clock. The later protocol
+ * timings are based on this clock.
+ */
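+	/* For illustration: with the 50 MHz PSC source assumed above,
+	 * DIV8 gives 50 MHz / 8 = 6.25 MHz, i.e. one protocol timer
+	 * tick every 160 ns for the timing fields programmed below.
+	 */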
+ sp->psc_smbcfg |= PSC_SMBCFG_SET_DIV(PSC_SMBCFG_DIV8);
+ sp->psc_smbmsk = PSC_SMBMSK_ALLMASK;
+ au_sync();
+
+ /* Set the protocol timer values. See Table 71 in the
+ * Au1550 Data Book for standard timing values.
+ */
+ sp->psc_smbtmr = PSC_SMBTMR_SET_TH(0) | PSC_SMBTMR_SET_PS(15) | \
+ PSC_SMBTMR_SET_PU(15) | PSC_SMBTMR_SET_SH(15) | \
+ PSC_SMBTMR_SET_SU(15) | PSC_SMBTMR_SET_CL(15) | \
+ PSC_SMBTMR_SET_CH(15);
+ au_sync();
+
+ sp->psc_smbcfg |= PSC_SMBCFG_DE_ENABLE;
+ do {
+ stat = sp->psc_smbstat;
+ au_sync();
+ } while ((stat & PSC_SMBSTAT_DR) == 0);
+
+ return i2c_add_adapter(i2c_adap);
+}
+
+
+int
+i2c_au1550_del_bus(struct i2c_adapter *adap)
+{
+ return i2c_del_adapter(adap);
+}
+
+static int
+pb1550_reg(struct i2c_client *client)
+{
+ return 0;
+}
+
+static int
+pb1550_unreg(struct i2c_client *client)
+{
+ return 0;
+}
+
+static struct i2c_au1550_data pb1550_i2c_info = {
+ SMBUS_PSC_BASE, 200, 200
+};
+
+static struct i2c_adapter pb1550_board_adapter = {
+ name: "pb1550 adapter",
+ id: I2C_HW_AU1550_PSC,
+ algo: NULL,
+ algo_data: &pb1550_i2c_info,
+ client_register: pb1550_reg,
+ client_unregister: pb1550_unreg,
+};
+
+/* BIG hack to support the control interface on the Wolfson WM8731
+ * audio codec on the Pb1550 board. We get an address and two data
+ * bytes to write, create an i2c message, and send it across the
+ * i2c transfer function. We do this here because we have access to
+ * the i2c adapter structure.
+ */
+static struct i2c_msg wm_i2c_msg; /* We don't want this stuff on the stack */
+static u8 i2cbuf[2];
+
+int
+pb1550_wm_codec_write(u8 addr, u8 reg, u8 val)
+{
+ wm_i2c_msg.addr = addr;
+ wm_i2c_msg.flags = 0;
+ wm_i2c_msg.buf = i2cbuf;
+ wm_i2c_msg.len = 2;
+ i2cbuf[0] = reg;
+ i2cbuf[1] = val;
+
+ return pb1550_board_adapter.algo->master_xfer(&pb1550_board_adapter, &wm_i2c_msg, 1);
+}
+
+static int __init
+i2c_au1550_init(void)
+{
+ printk(KERN_INFO "Au1550 I2C: ");
+
+ /* This is where we would set up a 50MHz clock source
+ * and routing. On the Pb1550, the SMBus is PSC2, which
+	 * uses a shared clock with USB. This has already been
+	 * configured by Yamon as a 48MHz clock, which is close enough
+ * for our work.
+ */
+ if (i2c_au1550_add_bus(&pb1550_board_adapter) < 0) {
+ printk("failed to initialize.\n");
+ return -ENODEV;
+ }
+
+ printk("initialized.\n");
+ return 0;
+}
+
+static void __exit
+i2c_au1550_exit(void)
+{
+ i2c_au1550_del_bus(&pb1550_board_adapter);
+}
+
+MODULE_AUTHOR("Dan Malek, Embedded Edge, LLC.");
+MODULE_DESCRIPTION("SMBus adapter for the Alchemy Pb1550 board");
+MODULE_LICENSE("GPL");
+
+module_init (i2c_au1550_init);
+module_exit (i2c_au1550_exit);
};
static struct pci_device_id hydra_ids[] = {
- {
- .vendor = PCI_VENDOR_ID_APPLE,
- .device = PCI_DEVICE_ID_APPLE_HYDRA,
- .subvendor = PCI_ANY_ID,
- .subdevice = PCI_ANY_ID,
- },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, PCI_DEVICE_ID_APPLE_HYDRA) },
{ 0, }
};
/* ------------------------------------------------------------------------- */
-/* i2c-iop3xx.c i2c driver algorithms for Intel XScale IOP3xx */
+/* i2c-iop3xx.c i2c driver algorithms for Intel XScale IOP3xx & IXP46x */
/* ------------------------------------------------------------------------- */
-/* Copyright (C) 2003 Peter Milne, D-TACQ Solutions Ltd
- * <Peter dot Milne at D hyphen TACQ dot com>
-
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, version 2.
-
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */
-/* ------------------------------------------------------------------------- */
-/*
- With acknowledgements to i2c-algo-ibm_ocp.c by
- Ian DaSilva, MontaVista Software, Inc. idasilva@mvista.com
-
- And i2c-algo-pcf.c, which was created by Simon G. Vogl and Hans Berglund:
-
- Copyright (C) 1995-1997 Simon G. Vogl, 1998-2000 Hans Berglund
-
- And which acknowledged Kyösti Mälkki <kmalkki@cc.hut.fi>,
- Frodo Looijaard <frodol@dds.nl>, Martin Bailey<mbailey@littlefeet-inc.com>
-
- ---------------------------------------------------------------------------*/
+/* Copyright (C) 2003 Peter Milne, D-TACQ Solutions Ltd
+ * <Peter dot Milne at D hyphen TACQ dot com>
+ *
+ * With acknowledgements to i2c-algo-ibm_ocp.c by
+ * Ian DaSilva, MontaVista Software, Inc. idasilva@mvista.com
+ *
+ * And i2c-algo-pcf.c, which was created by Simon G. Vogl and Hans Berglund:
+ *
+ * Copyright (C) 1995-1997 Simon G. Vogl, 1998-2000 Hans Berglund
+ *
+ * And which acknowledged Kyösti Mälkki <kmalkki@cc.hut.fi>,
+ * Frodo Looijaard <frodol@dds.nl>, Martin Bailey<mbailey@littlefeet-inc.com>
+ *
+ * Major cleanup by Deepak Saxena <dsaxena@plexity.net>, 01/2005:
+ *
+ * - Use driver model to pass per-chip info instead of hardcoding and #ifdefs
+ * - Use ioremap/__raw_readl/__raw_writel instead of direct dereference
+ * - Make it work with IXP46x chips
+ * - Cleanup function names, coding style, etc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, version 2.
+ */
#include <linux/config.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/sched.h>
+#include <linux/device.h>
#include <linux/i2c.h>
+#include <asm/io.h>
-#include <asm/arch-iop3xx/iop321.h>
-#include <asm/arch-iop3xx/iop321-irqs.h>
#include "i2c-iop3xx.h"
+/* global unit counter */
+static int i2c_id = 0;
-/* ----- global defines ----------------------------------------------- */
-#define PASSERT(x) do { if (!(x) ) \
- printk(KERN_CRIT "PASSERT %s in %s:%d\n", #x, __FILE__, __LINE__ );\
- } while (0)
-
-
-/* ----- global variables --------------------------------------------- */
-
-
-static inline unsigned char iic_cook_addr(struct i2c_msg *msg)
+static inline unsigned char
+iic_cook_addr(struct i2c_msg *msg)
{
unsigned char addr;
if (msg->flags & I2C_M_RD)
addr |= 1;
- /* PGM: what is M_REV_DIR_ADDR - do we need it ?? */
+ /*
+ * Read or Write?
+ */
if (msg->flags & I2C_M_REV_DIR_ADDR)
addr ^= 1;
return addr;
}
-
-static inline void iop3xx_adap_reset(struct i2c_algo_iop3xx_data *iop3xx_adap)
+static void
+iop3xx_i2c_reset(struct i2c_algo_iop3xx_data *iop3xx_adap)
{
/* Follows devman 9.3 */
- *iop3xx_adap->biu->CR = IOP321_ICR_UNIT_RESET;
- *iop3xx_adap->biu->SR = IOP321_ISR_CLEARBITS;
- *iop3xx_adap->biu->CR = 0;
+ __raw_writel(IOP3XX_ICR_UNIT_RESET, iop3xx_adap->ioaddr + CR_OFFSET);
+ __raw_writel(IOP3XX_ISR_CLEARBITS, iop3xx_adap->ioaddr + SR_OFFSET);
+ __raw_writel(0, iop3xx_adap->ioaddr + CR_OFFSET);
}
-static inline void iop3xx_adap_set_slave_addr(struct i2c_algo_iop3xx_data *iop3xx_adap)
+static void
+iop3xx_i2c_set_slave_addr(struct i2c_algo_iop3xx_data *iop3xx_adap)
{
- *iop3xx_adap->biu->SAR = MYSAR;
+ __raw_writel(MYSAR, iop3xx_adap->ioaddr + SAR_OFFSET);
}
-static inline void iop3xx_adap_enable(struct i2c_algo_iop3xx_data *iop3xx_adap)
+static void
+iop3xx_i2c_enable(struct i2c_algo_iop3xx_data *iop3xx_adap)
{
- u32 cr = IOP321_ICR_GCD|IOP321_ICR_SCLEN|IOP321_ICR_UE;
+ u32 cr = IOP3XX_ICR_GCD | IOP3XX_ICR_SCLEN | IOP3XX_ICR_UE;
+
+ /*
+ * Everytime unit enable is asserted, GPOD needs to be cleared
+ * on IOP321 to avoid data corruption on the bus.
+ */
+#ifdef CONFIG_ARCH_IOP321
+#define IOP321_GPOD_I2C0 0x00c0 /* clear these bits to enable ch0 */
+#define IOP321_GPOD_I2C1 0x0030 /* clear these bits to enable ch1 */
+ *IOP321_GPOD &= (iop3xx_adap->id == 0) ? ~IOP321_GPOD_I2C0 :
+ ~IOP321_GPOD_I2C1;
+#endif
/* NB SR bits not same position as CR IE bits :-( */
- iop3xx_adap->biu->SR_enabled =
- IOP321_ISR_ALD | IOP321_ISR_BERRD |
- IOP321_ISR_RXFULL | IOP321_ISR_TXEMPTY;
+ iop3xx_adap->SR_enabled =
+ IOP3XX_ISR_ALD | IOP3XX_ISR_BERRD |
+ IOP3XX_ISR_RXFULL | IOP3XX_ISR_TXEMPTY;
- cr |= IOP321_ICR_ALDIE | IOP321_ICR_BERRIE |
- IOP321_ICR_RXFULLIE | IOP321_ICR_TXEMPTYIE;
+ cr |= IOP3XX_ICR_ALD_IE | IOP3XX_ICR_BERR_IE |
+ IOP3XX_ICR_RXFULL_IE | IOP3XX_ICR_TXEMPTY_IE;
- *iop3xx_adap->biu->CR = cr;
+ __raw_writel(cr, iop3xx_adap->ioaddr + CR_OFFSET);
}
-static void iop3xx_adap_transaction_cleanup(struct i2c_algo_iop3xx_data *iop3xx_adap)
+static void
+iop3xx_i2c_transaction_cleanup(struct i2c_algo_iop3xx_data *iop3xx_adap)
{
- unsigned cr = *iop3xx_adap->biu->CR;
+ unsigned long cr = __raw_readl(iop3xx_adap->ioaddr + CR_OFFSET);
- cr &= ~(IOP321_ICR_MSTART | IOP321_ICR_TBYTE |
- IOP321_ICR_MSTOP | IOP321_ICR_SCLEN);
- *iop3xx_adap->biu->CR = cr;
-}
+ cr &= ~(IOP3XX_ICR_MSTART | IOP3XX_ICR_TBYTE |
+ IOP3XX_ICR_MSTOP | IOP3XX_ICR_SCLEN);
-static void iop3xx_adap_final_cleanup(struct i2c_algo_iop3xx_data *iop3xx_adap)
-{
- unsigned cr = *iop3xx_adap->biu->CR;
-
- cr &= ~(IOP321_ICR_ALDIE | IOP321_ICR_BERRIE |
- IOP321_ICR_RXFULLIE | IOP321_ICR_TXEMPTYIE);
- iop3xx_adap->biu->SR_enabled = 0;
- *iop3xx_adap->biu->CR = cr;
+ __raw_writel(cr, iop3xx_adap->ioaddr + CR_OFFSET);
}
/*
* NB: the handler has to clear the source of the interrupt!
* Then it passes the SR flags of interest to BH via adap data
*/
-static irqreturn_t iop3xx_i2c_handler(int this_irq,
- void *dev_id,
- struct pt_regs *regs)
+static irqreturn_t
+iop3xx_i2c_irq_handler(int this_irq, void *dev_id, struct pt_regs *regs)
{
struct i2c_algo_iop3xx_data *iop3xx_adap = dev_id;
+ u32 sr = __raw_readl(iop3xx_adap->ioaddr + SR_OFFSET);
- u32 sr = *iop3xx_adap->biu->SR;
-
- if ((sr &= iop3xx_adap->biu->SR_enabled)) {
- *iop3xx_adap->biu->SR = sr;
- iop3xx_adap->biu->SR_received |= sr;
+ if ((sr &= iop3xx_adap->SR_enabled)) {
+ __raw_writel(sr, iop3xx_adap->ioaddr + SR_OFFSET);
+ iop3xx_adap->SR_received |= sr;
wake_up_interruptible(&iop3xx_adap->waitq);
}
return IRQ_HANDLED;
}
/* check all error conditions, clear them, report most important */
-static int iop3xx_adap_error(u32 sr)
+static int
+iop3xx_i2c_error(u32 sr)
{
int rc = 0;
- if ((sr&IOP321_ISR_BERRD)) {
+ if ((sr & IOP3XX_ISR_BERRD)) {
if ( !rc ) rc = -I2C_ERR_BERR;
}
- if ((sr&IOP321_ISR_ALD)) {
+ if ((sr & IOP3XX_ISR_ALD)) {
if ( !rc ) rc = -I2C_ERR_ALD;
}
return rc;
}
-static inline u32 get_srstat(struct i2c_algo_iop3xx_data *iop3xx_adap)
+static inline u32
+iop3xx_i2c_get_srstat(struct i2c_algo_iop3xx_data *iop3xx_adap)
{
unsigned long flags;
u32 sr;
spin_lock_irqsave(&iop3xx_adap->lock, flags);
- sr = iop3xx_adap->biu->SR_received;
- iop3xx_adap->biu->SR_received = 0;
+ sr = iop3xx_adap->SR_received;
+ iop3xx_adap->SR_received = 0;
spin_unlock_irqrestore(&iop3xx_adap->lock, flags);
return sr;
typedef int (* compare_func)(unsigned test, unsigned mask);
/* returns 1 on correct comparison */
-static int iop3xx_adap_wait_event(struct i2c_algo_iop3xx_data *iop3xx_adap,
- unsigned flags, unsigned* status,
- compare_func compare)
+static int
+iop3xx_i2c_wait_event(struct i2c_algo_iop3xx_data *iop3xx_adap,
+ unsigned flags, unsigned* status,
+ compare_func compare)
{
unsigned sr = 0;
int interrupted;
do {
interrupted = wait_event_interruptible_timeout (
iop3xx_adap->waitq,
- (done = compare( sr = get_srstat(iop3xx_adap),flags )),
- iop3xx_adap->timeout
+			(done = compare(sr = iop3xx_i2c_get_srstat(iop3xx_adap), flags)),
+			1 * HZ
);
- if ((rc = iop3xx_adap_error(sr)) < 0) {
+ if ((rc = iop3xx_i2c_error(sr)) < 0) {
*status = sr;
return rc;
- }else if (!interrupted) {
+ } else if (!interrupted) {
*status = sr;
return -ETIMEDOUT;
}
/*
* Concrete compare_funcs
*/
-static int all_bits_clear(unsigned test, unsigned mask)
+static int
+all_bits_clear(unsigned test, unsigned mask)
{
return (test & mask) == 0;
}
-static int any_bits_set(unsigned test, unsigned mask)
+
+static int
+any_bits_set(unsigned test, unsigned mask)
{
return (test & mask) != 0;
}
-static int iop3xx_adap_wait_tx_done(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
+static int
+iop3xx_i2c_wait_tx_done(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
{
- return iop3xx_adap_wait_event(
+ return iop3xx_i2c_wait_event(
iop3xx_adap,
- IOP321_ISR_TXEMPTY|IOP321_ISR_ALD|IOP321_ISR_BERRD,
+ IOP3XX_ISR_TXEMPTY | IOP3XX_ISR_ALD | IOP3XX_ISR_BERRD,
status, any_bits_set);
}
-static int iop3xx_adap_wait_rx_done(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
+static int
+iop3xx_i2c_wait_rx_done(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
{
- return iop3xx_adap_wait_event(
+ return iop3xx_i2c_wait_event(
iop3xx_adap,
- IOP321_ISR_RXFULL|IOP321_ISR_ALD|IOP321_ISR_BERRD,
+ IOP3XX_ISR_RXFULL | IOP3XX_ISR_ALD | IOP3XX_ISR_BERRD,
status, any_bits_set);
}
-static int iop3xx_adap_wait_idle(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
-{
- return iop3xx_adap_wait_event(
- iop3xx_adap, IOP321_ISR_UNITBUSY, status, all_bits_clear);
-}
-
-/*
- * Description: This performs the IOP3xx initialization sequence
- * Valid for IOP321. Maybe valid for IOP310?.
- */
-static int iop3xx_adap_init (struct i2c_algo_iop3xx_data *iop3xx_adap)
+static int
+iop3xx_i2c_wait_idle(struct i2c_algo_iop3xx_data *iop3xx_adap, int *status)
{
- *IOP321_GPOD &= ~(iop3xx_adap->channel==0 ?
- IOP321_GPOD_I2C0:
- IOP321_GPOD_I2C1);
-
- iop3xx_adap_reset(iop3xx_adap);
- iop3xx_adap_set_slave_addr(iop3xx_adap);
- iop3xx_adap_enable(iop3xx_adap);
-
- return 0;
+ return iop3xx_i2c_wait_event(
+ iop3xx_adap, IOP3XX_ISR_UNITBUSY, status, all_bits_clear);
}
-static int iop3xx_adap_send_target_slave_addr(struct i2c_algo_iop3xx_data *iop3xx_adap,
- struct i2c_msg* msg)
+static int
+iop3xx_i2c_send_target_addr(struct i2c_algo_iop3xx_data *iop3xx_adap,
+ struct i2c_msg* msg)
{
- unsigned cr = *iop3xx_adap->biu->CR;
+ unsigned long cr = __raw_readl(iop3xx_adap->ioaddr + CR_OFFSET);
int status;
int rc;
- *iop3xx_adap->biu->DBR = iic_cook_addr(msg);
+ __raw_writel(iic_cook_addr(msg), iop3xx_adap->ioaddr + DBR_OFFSET);
- cr &= ~(IOP321_ICR_MSTOP | IOP321_ICR_NACK);
- cr |= IOP321_ICR_MSTART | IOP321_ICR_TBYTE;
-
- *iop3xx_adap->biu->CR = cr;
- rc = iop3xx_adap_wait_tx_done(iop3xx_adap, &status);
- /* this assert fires every time, contrary to IOP manual
- PASSERT((status&IOP321_ISR_UNITBUSY)!=0);
- */
- PASSERT((status&IOP321_ISR_RXREAD)==0);
-
+ cr &= ~(IOP3XX_ICR_MSTOP | IOP3XX_ICR_NACK);
+ cr |= IOP3XX_ICR_MSTART | IOP3XX_ICR_TBYTE;
+
+ __raw_writel(cr, iop3xx_adap->ioaddr + CR_OFFSET);
+ rc = iop3xx_i2c_wait_tx_done(iop3xx_adap, &status);
+
return rc;
}
-static int iop3xx_adap_write_byte(struct i2c_algo_iop3xx_data *iop3xx_adap, char byte, int stop)
+static int
+iop3xx_i2c_write_byte(struct i2c_algo_iop3xx_data *iop3xx_adap, char byte,
+ int stop)
{
- unsigned cr = *iop3xx_adap->biu->CR;
+ unsigned long cr = __raw_readl(iop3xx_adap->ioaddr + CR_OFFSET);
int status;
int rc = 0;
- *iop3xx_adap->biu->DBR = byte;
- cr &= ~IOP321_ICR_MSTART;
+ __raw_writel(byte, iop3xx_adap->ioaddr + DBR_OFFSET);
+ cr &= ~IOP3XX_ICR_MSTART;
if (stop) {
- cr |= IOP321_ICR_MSTOP;
+ cr |= IOP3XX_ICR_MSTOP;
} else {
- cr &= ~IOP321_ICR_MSTOP;
+ cr &= ~IOP3XX_ICR_MSTOP;
}
- *iop3xx_adap->biu->CR = cr |= IOP321_ICR_TBYTE;
- rc = iop3xx_adap_wait_tx_done(iop3xx_adap, &status);
+ cr |= IOP3XX_ICR_TBYTE;
+ __raw_writel(cr, iop3xx_adap->ioaddr + CR_OFFSET);
+ rc = iop3xx_i2c_wait_tx_done(iop3xx_adap, &status);
return rc;
}
-static int iop3xx_adap_read_byte(struct i2c_algo_iop3xx_data *iop3xx_adap,
- char* byte, int stop)
+static int
+iop3xx_i2c_read_byte(struct i2c_algo_iop3xx_data *iop3xx_adap, char* byte,
+ int stop)
{
- unsigned cr = *iop3xx_adap->biu->CR;
+ unsigned long cr = __raw_readl(iop3xx_adap->ioaddr + CR_OFFSET);
int status;
int rc = 0;
- cr &= ~IOP321_ICR_MSTART;
+ cr &= ~IOP3XX_ICR_MSTART;
if (stop) {
- cr |= IOP321_ICR_MSTOP|IOP321_ICR_NACK;
+ cr |= IOP3XX_ICR_MSTOP | IOP3XX_ICR_NACK;
} else {
- cr &= ~(IOP321_ICR_MSTOP|IOP321_ICR_NACK);
+ cr &= ~(IOP3XX_ICR_MSTOP | IOP3XX_ICR_NACK);
}
- *iop3xx_adap->biu->CR = cr |= IOP321_ICR_TBYTE;
+ cr |= IOP3XX_ICR_TBYTE;
+ __raw_writel(cr, iop3xx_adap->ioaddr + CR_OFFSET);
- rc = iop3xx_adap_wait_rx_done(iop3xx_adap, &status);
+ rc = iop3xx_i2c_wait_rx_done(iop3xx_adap, &status);
- *byte = *iop3xx_adap->biu->DBR;
+ *byte = __raw_readl(iop3xx_adap->ioaddr + DBR_OFFSET);
return rc;
}
-static int iop3xx_i2c_writebytes(struct i2c_adapter *i2c_adap,
- const char *buf, int count)
+static int
+iop3xx_i2c_writebytes(struct i2c_adapter *i2c_adap, const char *buf, int count)
{
struct i2c_algo_iop3xx_data *iop3xx_adap = i2c_adap->algo_data;
int ii;
int rc = 0;
- for (ii = 0; rc == 0 && ii != count; ++ii) {
- rc = iop3xx_adap_write_byte(iop3xx_adap, buf[ii], ii==count-1);
- }
+ for (ii = 0; rc == 0 && ii != count; ++ii)
+ rc = iop3xx_i2c_write_byte(iop3xx_adap, buf[ii], ii==count-1);
return rc;
}
-static int iop3xx_i2c_readbytes(struct i2c_adapter *i2c_adap,
- char *buf, int count)
+static int
+iop3xx_i2c_readbytes(struct i2c_adapter *i2c_adap, char *buf, int count)
{
struct i2c_algo_iop3xx_data *iop3xx_adap = i2c_adap->algo_data;
int ii;
int rc = 0;
- for (ii = 0; rc == 0 && ii != count; ++ii) {
- rc = iop3xx_adap_read_byte(iop3xx_adap, &buf[ii], ii==count-1);
- }
+ for (ii = 0; rc == 0 && ii != count; ++ii)
+ rc = iop3xx_i2c_read_byte(iop3xx_adap, &buf[ii], ii==count-1);
+
return rc;
}
* Each transfer (i.e. a read or a write) is separated by a repeated start
* condition.
*/
-static int iop3xx_handle_msg(struct i2c_adapter *i2c_adap, struct i2c_msg* pmsg)
+static int
+iop3xx_i2c_handle_msg(struct i2c_adapter *i2c_adap, struct i2c_msg* pmsg)
{
struct i2c_algo_iop3xx_data *iop3xx_adap = i2c_adap->algo_data;
int rc;
- rc = iop3xx_adap_send_target_slave_addr(iop3xx_adap, pmsg);
+ rc = iop3xx_i2c_send_target_addr(iop3xx_adap, pmsg);
if (rc < 0) {
return rc;
}
/*
* master_xfer() - main read/write entry
*/
-static int iop3xx_master_xfer(struct i2c_adapter *i2c_adap, struct i2c_msg msgs[], int num)
+static int
+iop3xx_i2c_master_xfer(struct i2c_adapter *i2c_adap, struct i2c_msg msgs[],
+ int num)
{
struct i2c_algo_iop3xx_data *iop3xx_adap = i2c_adap->algo_data;
int im = 0;
int ret = 0;
int status;
- iop3xx_adap_wait_idle(iop3xx_adap, &status);
- iop3xx_adap_reset(iop3xx_adap);
- iop3xx_adap_enable(iop3xx_adap);
+ iop3xx_i2c_wait_idle(iop3xx_adap, &status);
+ iop3xx_i2c_reset(iop3xx_adap);
+ iop3xx_i2c_enable(iop3xx_adap);
for (im = 0; ret == 0 && im != num; im++) {
- ret = iop3xx_handle_msg(i2c_adap, &msgs[im]);
+ ret = iop3xx_i2c_handle_msg(i2c_adap, &msgs[im]);
}
- iop3xx_adap_transaction_cleanup(iop3xx_adap);
+ iop3xx_i2c_transaction_cleanup(iop3xx_adap);
if(ret)
return ret;
return im;
}
-static int algo_control(struct i2c_adapter *adapter, unsigned int cmd,
+static int
+iop3xx_i2c_algo_control(struct i2c_adapter *adapter, unsigned int cmd,
unsigned long arg)
{
return 0;
}
-static u32 iic_func(struct i2c_adapter *adap)
+static u32
+iop3xx_i2c_func(struct i2c_adapter *adap)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}
-
-/* -----exported algorithm data: ------------------------------------- */
-
-static struct i2c_algorithm iic_algo = {
+static struct i2c_algorithm iop3xx_i2c_algo = {
.name = "IOP3xx I2C algorithm",
- .id = I2C_ALGO_OCP_IOP3XX,
- .master_xfer = iop3xx_master_xfer,
- .algo_control = algo_control,
- .functionality = iic_func,
+ .id = I2C_ALGO_IOP3XX,
+ .master_xfer = iop3xx_i2c_master_xfer,
+ .algo_control = iop3xx_i2c_algo_control,
+ .functionality = iop3xx_i2c_func,
};
-/*
- * registering functions to load algorithms at runtime
- */
-static int i2c_iop3xx_add_bus(struct i2c_adapter *iic_adap)
+static int
+iop3xx_i2c_remove(struct device *device)
{
- struct i2c_algo_iop3xx_data *iop3xx_adap = iic_adap->algo_data;
+ struct platform_device *pdev = to_platform_device(device);
+ struct i2c_adapter *padapter = dev_get_drvdata(&pdev->dev);
+ struct i2c_algo_iop3xx_data *adapter_data =
+ (struct i2c_algo_iop3xx_data *)padapter->algo_data;
+ struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ unsigned long cr = __raw_readl(adapter_data->ioaddr + CR_OFFSET);
+
+ /*
+ * Disable the actual HW unit
+ */
+ cr &= ~(IOP3XX_ICR_ALD_IE | IOP3XX_ICR_BERR_IE |
+ IOP3XX_ICR_RXFULL_IE | IOP3XX_ICR_TXEMPTY_IE);
+ __raw_writel(cr, adapter_data->ioaddr + CR_OFFSET);
+
+ iounmap((void __iomem*)adapter_data->ioaddr);
+ release_mem_region(res->start, IOP3XX_I2C_IO_SIZE);
+ kfree(adapter_data);
+ kfree(padapter);
+
+ dev_set_drvdata(&pdev->dev, NULL);
+
+ return 0;
+}
- if (!request_region( REGION_START(iop3xx_adap),
- REGION_LENGTH(iop3xx_adap),
- iic_adap->name)) {
- return -ENODEV;
+static int
+iop3xx_i2c_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *res;
+ int ret;
+ struct i2c_adapter *new_adapter;
+ struct i2c_algo_iop3xx_data *adapter_data;
+
+ new_adapter = kmalloc(sizeof(struct i2c_adapter), GFP_KERNEL);
+ if (!new_adapter) {
+ ret = -ENOMEM;
+ goto out;
}
+ memset((void*)new_adapter, 0, sizeof(*new_adapter));
- init_waitqueue_head(&iop3xx_adap->waitq);
- spin_lock_init(&iop3xx_adap->lock);
+ adapter_data = kmalloc(sizeof(struct i2c_algo_iop3xx_data), GFP_KERNEL);
+ if (!adapter_data) {
+ ret = -ENOMEM;
+ goto free_adapter;
+ }
+ memset((void*)adapter_data, 0, sizeof(*adapter_data));
- if (request_irq(
- iop3xx_adap->biu->irq,
- iop3xx_i2c_handler,
- /* SA_SAMPLE_RANDOM */ 0,
- iic_adap->name,
- iop3xx_adap)) {
- return -ENODEV;
- }
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -ENODEV;
+ goto free_both;
+ }
- /* register new iic_adapter to i2c module... */
- iic_adap->id |= iic_algo.id;
- iic_adap->algo = &iic_algo;
+ if (!request_mem_region(res->start, IOP3XX_I2C_IO_SIZE, pdev->name)) {
+ ret = -EBUSY;
+ goto free_both;
+ }
- iic_adap->timeout = 100; /* default values, should */
- iic_adap->retries = 3; /* be replaced by defines */
+ /* set the adapter enumeration # */
+ adapter_data->id = i2c_id++;
- iop3xx_adap_init(iic_adap->algo_data);
- i2c_add_adapter(iic_adap);
- return 0;
-}
+ adapter_data->ioaddr = (u32)ioremap(res->start, IOP3XX_I2C_IO_SIZE);
+ if (!adapter_data->ioaddr) {
+ ret = -ENOMEM;
+ goto release_region;
+ }
-static int i2c_iop3xx_del_bus(struct i2c_adapter *iic_adap)
-{
- struct i2c_algo_iop3xx_data *iop3xx_adap = iic_adap->algo_data;
+ res = request_irq(platform_get_irq(pdev, 0), iop3xx_i2c_irq_handler, 0,
+ pdev->name, adapter_data);
+ if (res) {
+ ret = -EIO;
+ goto unmap;
+ }
- iop3xx_adap_final_cleanup(iop3xx_adap);
- free_irq(iop3xx_adap->biu->irq, iop3xx_adap);
+ memcpy(new_adapter->name, pdev->name, strlen(pdev->name));
+ new_adapter->id = I2C_HW_IOP3XX;
+ new_adapter->owner = THIS_MODULE;
+ new_adapter->dev.parent = &pdev->dev;
- release_region(REGION_START(iop3xx_adap), REGION_LENGTH(iop3xx_adap));
+ /*
+ * Default values...should these come in from board code?
+ */
+ new_adapter->timeout = 100;
+ new_adapter->retries = 3;
+ new_adapter->algo = &iop3xx_i2c_algo;
- return i2c_del_adapter(iic_adap);
-}
+ init_waitqueue_head(&adapter_data->waitq);
+ spin_lock_init(&adapter_data->lock);
-#ifdef CONFIG_ARCH_IOP321
+ iop3xx_i2c_reset(adapter_data);
+ iop3xx_i2c_set_slave_addr(adapter_data);
+ iop3xx_i2c_enable(adapter_data);
-static struct iop3xx_biu biu0 = {
- .CR = IOP321_ICR0,
- .SR = IOP321_ISR0,
- .SAR = IOP321_ISAR0,
- .DBR = IOP321_IDBR0,
- .BMR = IOP321_IBMR0,
- .irq = IRQ_IOP321_I2C_0,
-};
+ dev_set_drvdata(&pdev->dev, new_adapter);
+ new_adapter->algo_data = adapter_data;
-static struct iop3xx_biu biu1 = {
- .CR = IOP321_ICR1,
- .SR = IOP321_ISR1,
- .SAR = IOP321_ISAR1,
- .DBR = IOP321_IDBR1,
- .BMR = IOP321_IBMR1,
- .irq = IRQ_IOP321_I2C_1,
-};
+ i2c_add_adapter(new_adapter);
-#define ADAPTER_NAME_ROOT "IOP321 i2c biu adapter "
-#else
-#error Please define the BIU struct iop3xx_biu for your processor arch
-#endif
+ return 0;
-static struct i2c_algo_iop3xx_data algo_iop3xx_data0 = {
- .channel = 0,
- .biu = &biu0,
- .timeout = 1*HZ,
-};
-static struct i2c_algo_iop3xx_data algo_iop3xx_data1 = {
- .channel = 1,
- .biu = &biu1,
- .timeout = 1*HZ,
-};
+unmap:
+ iounmap((void __iomem*)adapter_data->ioaddr);
-static struct i2c_adapter iop3xx_ops0 = {
- .owner = THIS_MODULE,
- .name = ADAPTER_NAME_ROOT "0",
- .id = I2C_HW_IOP321,
- .algo_data = &algo_iop3xx_data0,
-};
-static struct i2c_adapter iop3xx_ops1 = {
- .owner = THIS_MODULE,
- .name = ADAPTER_NAME_ROOT "1",
- .id = I2C_HW_IOP321,
- .algo_data = &algo_iop3xx_data1,
+release_region:
+ release_mem_region(res->start, IOP3XX_I2C_IO_SIZE);
+
+free_both:
+ kfree(adapter_data);
+
+free_adapter:
+ kfree(new_adapter);
+
+out:
+ return ret;
+}
+
+
+static struct device_driver iop3xx_i2c_driver = {
+ .name = "IOP3xx-I2C",
+ .bus = &platform_bus_type,
+ .probe = iop3xx_i2c_probe,
+ .remove = iop3xx_i2c_remove
};
-static int __init i2c_iop3xx_init (void)
+static int __init
+i2c_iop3xx_init (void)
{
- return i2c_iop3xx_add_bus(&iop3xx_ops0) ||
- i2c_iop3xx_add_bus(&iop3xx_ops1);
+ return driver_register(&iop3xx_i2c_driver);
}
-static void __exit i2c_iop3xx_exit (void)
+static void __exit
+i2c_iop3xx_exit (void)
{
- i2c_iop3xx_del_bus(&iop3xx_ops0);
- i2c_iop3xx_del_bus(&iop3xx_ops1);
+ driver_unregister(&iop3xx_i2c_driver);
+ return;
}
module_init (i2c_iop3xx_init);
/*
* iop321 hardware bit definitions
*/
-#define IOP321_ICR_FAST_MODE 0x8000 /* 1=400kBps, 0=100kBps */
-#define IOP321_ICR_UNIT_RESET 0x4000 /* 1=RESET */
-#define IOP321_ICR_SADIE 0x2000 /* 1=Slave Detect Interrupt Enable */
-#define IOP321_ICR_ALDIE 0x1000 /* 1=Arb Loss Detect Interrupt Enable */
-#define IOP321_ICR_SSDIE 0x0800 /* 1=Slave STOP Detect Interrupt Enable */
-#define IOP321_ICR_BERRIE 0x0400 /* 1=Bus Error Interrupt Enable */
-#define IOP321_ICR_RXFULLIE 0x0200 /* 1=Receive Full Interrupt Enable */
-#define IOP321_ICR_TXEMPTYIE 0x0100 /* 1=Transmit Empty Interrupt Enable */
-#define IOP321_ICR_GCD 0x0080 /* 1=General Call Disable */
+#define IOP3XX_ICR_FAST_MODE 0x8000 /* 1=400kBps, 0=100kBps */
+#define IOP3XX_ICR_UNIT_RESET 0x4000 /* 1=RESET */
+#define IOP3XX_ICR_SAD_IE 0x2000 /* 1=Slave Detect Interrupt Enable */
+#define IOP3XX_ICR_ALD_IE 0x1000 /* 1=Arb Loss Detect Interrupt Enable */
+#define IOP3XX_ICR_SSD_IE 0x0800 /* 1=Slave STOP Detect Interrupt Enable */
+#define IOP3XX_ICR_BERR_IE 0x0400 /* 1=Bus Error Interrupt Enable */
+#define IOP3XX_ICR_RXFULL_IE 0x0200 /* 1=Receive Full Interrupt Enable */
+#define IOP3XX_ICR_TXEMPTY_IE 0x0100 /* 1=Transmit Empty Interrupt Enable */
+#define IOP3XX_ICR_GCD 0x0080 /* 1=General Call Disable */
/*
- * IOP321_ICR_GCD: 1 disables response as slave. "This bit must be set
+ * IOP3XX_ICR_GCD: 1 disables response as slave. "This bit must be set
* when sending a master mode general call message from the I2C unit"
*/
-#define IOP321_ICR_UE 0x0040 /* 1=Unit Enable */
+#define IOP3XX_ICR_UE 0x0040 /* 1=Unit Enable */
/*
* "NOTE: To avoid I2C bus integrity problems,
* the user needs to ensure that the GPIO Output Data Register -
* The user prepares to enable I2C port 0 and
* I2C port 1 by clearing GPOD bits 7:6 and GPOD bits 5:4, respectively.
*/
-#define IOP321_ICR_SCLEN 0x0020 /* 1=SCL enable for master mode */
-#define IOP321_ICR_MABORT 0x0010 /* 1=Send a STOP with no data
+#define IOP3XX_ICR_SCLEN 0x0020 /* 1=SCL enable for master mode */
+#define IOP3XX_ICR_MABORT 0x0010 /* 1=Send a STOP with no data
* NB TBYTE must be clear */
-#define IOP321_ICR_TBYTE 0x0008 /* 1=Send/Receive a byte. i2c clears */
-#define IOP321_ICR_NACK 0x0004 /* 1=reply with NACK */
-#define IOP321_ICR_MSTOP 0x0002 /* 1=send a STOP after next data byte */
-#define IOP321_ICR_MSTART 0x0001 /* 1=initiate a START */
+#define IOP3XX_ICR_TBYTE 0x0008 /* 1=Send/Receive a byte. i2c clears */
+#define IOP3XX_ICR_NACK 0x0004 /* 1=reply with NACK */
+#define IOP3XX_ICR_MSTOP 0x0002 /* 1=send a STOP after next data byte */
+#define IOP3XX_ICR_MSTART 0x0001 /* 1=initiate a START */
-#define IOP321_ISR_BERRD 0x0400 /* 1=BUS ERROR Detected */
-#define IOP321_ISR_SAD 0x0200 /* 1=Slave ADdress Detected */
-#define IOP321_ISR_GCAD 0x0100 /* 1=General Call Address Detected */
-#define IOP321_ISR_RXFULL 0x0080 /* 1=Receive Full */
-#define IOP321_ISR_TXEMPTY 0x0040 /* 1=Transmit Empty */
-#define IOP321_ISR_ALD 0x0020 /* 1=Arbitration Loss Detected */
-#define IOP321_ISR_SSD 0x0010 /* 1=Slave STOP Detected */
-#define IOP321_ISR_BBUSY 0x0008 /* 1=Bus BUSY */
-#define IOP321_ISR_UNITBUSY 0x0004 /* 1=Unit Busy */
-#define IOP321_ISR_NACK 0x0002 /* 1=Unit Rx or Tx a NACK */
-#define IOP321_ISR_RXREAD 0x0001 /* 1=READ 0=WRITE (R/W bit of slave addr */
+#define IOP3XX_ISR_BERRD 0x0400 /* 1=BUS ERROR Detected */
+#define IOP3XX_ISR_SAD 0x0200 /* 1=Slave ADdress Detected */
+#define IOP3XX_ISR_GCAD 0x0100 /* 1=General Call Address Detected */
+#define IOP3XX_ISR_RXFULL 0x0080 /* 1=Receive Full */
+#define IOP3XX_ISR_TXEMPTY 0x0040 /* 1=Transmit Empty */
+#define IOP3XX_ISR_ALD 0x0020 /* 1=Arbitration Loss Detected */
+#define IOP3XX_ISR_SSD 0x0010 /* 1=Slave STOP Detected */
+#define IOP3XX_ISR_BBUSY 0x0008 /* 1=Bus BUSY */
+#define IOP3XX_ISR_UNITBUSY 0x0004 /* 1=Unit Busy */
+#define IOP3XX_ISR_NACK 0x0002 /* 1=Unit Rx or Tx a NACK */
+#define IOP3XX_ISR_RXREAD 0x0001 /* 1=READ 0=WRITE (R/W bit of slave addr */
-#define IOP321_ISR_CLEARBITS 0x07f0
+#define IOP3XX_ISR_CLEARBITS 0x07f0
-#define IOP321_ISAR_SAMASK 0x007f
+#define IOP3XX_ISAR_SAMASK 0x007f
-#define IOP321_IDBR_MASK 0x00ff
+#define IOP3XX_IDBR_MASK 0x00ff
-#define IOP321_IBMR_SCL 0x0002
-#define IOP321_IBMR_SDA 0x0001
+#define IOP3XX_IBMR_SCL 0x0002
+#define IOP3XX_IBMR_SDA 0x0001
-#define IOP321_GPOD_I2C0 0x00c0 /* clear these bits to enable ch0 */
-#define IOP321_GPOD_I2C1 0x0030 /* clear these bits to enable ch1 */
+#define IOP3XX_GPOD_I2C0 0x00c0 /* clear these bits to enable ch0 */
+#define IOP3XX_GPOD_I2C1 0x0030 /* clear these bits to enable ch1 */
#define MYSAR 0x02 /* SWAG a suitable slave address */
#define I2C_ERR_ALD (I2C_ERR+1)
-struct iop3xx_biu { /* Bus Interface Unit - the hardware */
-/* physical hardware defs - regs*/
- u32 *CR;
- u32 *SR;
- u32 *SAR;
- u32 *DBR;
- u32 *BMR;
-/* irq bit vector */
- u32 irq;
-/* stored flags */
- u32 SR_enabled, SR_received;
-};
+#define CR_OFFSET 0
+#define SR_OFFSET 0x4
+#define SAR_OFFSET 0x8
+#define DBR_OFFSET 0xc
+#define CCR_OFFSET 0x10
+#define BMR_OFFSET 0x14
-struct i2c_algo_iop3xx_data {
- int channel;
+#define IOP3XX_I2C_IO_SIZE 0x18
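+/* i.e. six 32-bit registers, CR..BMR at offsets 0x00-0x14, 0x18 bytes in all */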
+struct i2c_algo_iop3xx_data {
+ u32 ioaddr;
wait_queue_head_t waitq;
spinlock_t lock;
- int timeout;
- struct iop3xx_biu* biu;
+ u32 SR_enabled, SR_received;
+ int id;
};
-#define REGION_START(adap) ((u32)((adap)->biu->CR))
-#define REGION_END(adap) ((u32)((adap)->biu->BMR+1))
-#define REGION_LENGTH(adap) (REGION_END(adap)-REGION_START(adap))
-
-#define IRQ_STATUS_MASK(adap) (1<<adap->biu->irq)
-
#endif /* I2C_IOP3XX_H */
/*
* (C) Copyright 2003-2004
* Humboldt Solutions Ltd, adrian@humboldt.co.uk.
-
+
* This is a combined i2c adapter and algorithm driver for the
* MPC107/Tsi107 PowerPC northbridge and processors that include
- * the same I2C unit (8240, 8245, 85xx).
+ * the same I2C unit (8240, 8245, 85xx).
*
- * Release 0.6
+ * Release 0.8
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
#include <linux/init.h>
#include <linux/pci.h>
#include <asm/io.h>
+#ifdef CONFIG_FSL_OCP
#include <asm/ocp.h>
+#define FSL_I2C_DEV_SEPARATE_DFSRR FS_I2C_SEPARATE_DFSRR
+#define FSL_I2C_DEV_CLOCK_5200 FS_I2C_CLOCK_5200
+#else
+#include <linux/fsl_devices.h>
+#endif
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
struct mpc_i2c {
char *base;
- struct ocp_def *ocpdef;
u32 interrupt;
wait_queue_head_t queue;
struct i2c_adapter adap;
+ int irq;
+ u32 flags;
};
static __inline__ void writeccr(struct mpc_i2c *i2c, u32 x)
static int i2c_wait(struct mpc_i2c *i2c, unsigned timeout, int writing)
{
- DECLARE_WAITQUEUE(wait, current);
unsigned long orig_jiffies = jiffies;
u32 x;
int result = 0;
- if (i2c->ocpdef->irq == OCP_IRQ_NA) {
+	if (i2c->irq == 0) {
while (!(readb(i2c->base + MPC_I2C_SR) & CSR_MIF)) {
schedule();
if (time_after(jiffies, orig_jiffies + timeout)) {
x = readb(i2c->base + MPC_I2C_SR);
writeb(0, i2c->base + MPC_I2C_SR);
} else {
- set_current_state(TASK_INTERRUPTIBLE);
- add_wait_queue(&i2c->queue, &wait);
- while (!(i2c->interrupt & CSR_MIF)) {
- if (signal_pending(current)) {
- pr_debug("I2C: Interrupted\n");
- result = -EINTR;
- break;
- }
- if (time_after(jiffies, orig_jiffies + timeout)) {
- pr_debug("I2C: timeout\n");
- result = -EIO;
- break;
- }
- msleep_interruptible(jiffies_to_msecs(timeout));
+ /* Interrupt mode */
+ result = wait_event_interruptible_timeout(i2c->queue,
+ (i2c->interrupt & CSR_MIF), timeout * HZ);
+
+ if (unlikely(result < 0))
+ pr_debug("I2C: wait interrupted\n");
+ else if (unlikely(!(i2c->interrupt & CSR_MIF))) {
+ pr_debug("I2C: wait timeout\n");
+ result = -ETIMEDOUT;
}
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&i2c->queue, &wait);
+
x = i2c->interrupt;
i2c->interrupt = 0;
}
- if (result < -0)
+ if (result < 0)
return result;
if (!(x & CSR_MCF)) {
static void mpc_i2c_setclock(struct mpc_i2c *i2c)
{
- struct ocp_fs_i2c_data *i2c_data = i2c->ocpdef->additions;
/* Set clock and filters */
- if (i2c_data && (i2c_data->flags & FS_I2C_SEPARATE_DFSRR)) {
+ if (i2c->flags & FSL_I2C_DEV_SEPARATE_DFSRR) {
writeb(0x31, i2c->base + MPC_I2C_FDR);
writeb(0x10, i2c->base + MPC_I2C_DFSRR);
- } else if (i2c_data && (i2c_data->flags & FS_I2C_CLOCK_5200))
+ } else if (i2c->flags & FSL_I2C_DEV_CLOCK_5200)
writeb(0x3f, i2c->base + MPC_I2C_FDR);
else
writel(0x1031, i2c->base + MPC_I2C_FDR);
const u8 * data, int length, int restart)
{
int i;
- unsigned timeout = HZ;
+ unsigned timeout = i2c->adap.timeout;
u32 flags = restart ? CCR_RSTA : 0;
/* Start with MEN */
static int mpc_read(struct mpc_i2c *i2c, int target,
u8 * data, int length, int restart)
{
- unsigned timeout = HZ;
+ unsigned timeout = i2c->adap.timeout;
int i;
u32 flags = restart ? CCR_RSTA : 0;
.retries = 1
};
+#ifdef CONFIG_FSL_OCP
static int __devinit mpc_i2c_probe(struct ocp_device *ocp)
{
int result = 0;
if (!(i2c = kmalloc(sizeof(*i2c), GFP_KERNEL))) {
return -ENOMEM;
}
- i2c->ocpdef = ocp->def;
+ memset(i2c, 0, sizeof(*i2c));
+
+ i2c->irq = ocp->def->irq;
+ i2c->flags = ((struct ocp_fs_i2c_data *)ocp->def->additions)->flags;
init_waitqueue_head(&i2c->queue);
if (!request_mem_region(ocp->def->paddr, MPC_I2C_REGION, "i2c-mpc")) {
goto fail_map;
}
- if (ocp->def->irq != OCP_IRQ_NA)
+	if (i2c->irq != OCP_IRQ_NA) {
if ((result = request_irq(ocp->def->irq, mpc_i2c_isr,
0, "i2c-mpc", i2c)) < 0) {
printk(KERN_ERR
"i2c-mpc - failed to attach interrupt\n");
goto fail_irq;
}
+ } else
+ i2c->irq = 0;
i2c->adap = mpc_ops;
i2c_set_adapdata(&i2c->adap, i2c);
+
if ((result = i2c_add_adapter(&i2c->adap)) < 0) {
printk(KERN_ERR "i2c-mpc - failed to add adapter\n");
goto fail_add;
i2c_del_adapter(&i2c->adap);
if (ocp->def->irq != OCP_IRQ_NA)
- free_irq(i2c->ocpdef->irq, i2c);
+ free_irq(i2c->irq, i2c);
iounmap(i2c->base);
- release_mem_region(i2c->ocpdef->paddr, MPC_I2C_REGION);
+ release_mem_region(ocp->def->paddr, MPC_I2C_REGION);
kfree(i2c);
}
module_init(iic_init);
module_exit(iic_exit);
+#else
+static int fsl_i2c_probe(struct device *device)
+{
+ int result = 0;
+ struct mpc_i2c *i2c;
+ struct platform_device *pdev = to_platform_device(device);
+ struct fsl_i2c_platform_data *pdata;
+ struct resource *r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
+ pdata = (struct fsl_i2c_platform_data *) pdev->dev.platform_data;
+
+ if (!(i2c = kmalloc(sizeof(*i2c), GFP_KERNEL))) {
+ return -ENOMEM;
+ }
+ memset(i2c, 0, sizeof(*i2c));
+
+ i2c->irq = platform_get_irq(pdev, 0);
+ i2c->flags = pdata->device_flags;
+ init_waitqueue_head(&i2c->queue);
+
+ i2c->base = ioremap((phys_addr_t)r->start, MPC_I2C_REGION);
+
+ if (!i2c->base) {
+ printk(KERN_ERR "i2c-mpc - failed to map controller\n");
+ result = -ENOMEM;
+ goto fail_map;
+ }
+
+ if (i2c->irq != 0)
+ if ((result = request_irq(i2c->irq, mpc_i2c_isr,
+ 0, "fsl-i2c", i2c)) < 0) {
+ printk(KERN_ERR
+ "i2c-mpc - failed to attach interrupt\n");
+ goto fail_irq;
+ }
+
+ i2c->adap = mpc_ops;
+ i2c_set_adapdata(&i2c->adap, i2c);
+ i2c->adap.dev.parent = &pdev->dev;
+ if ((result = i2c_add_adapter(&i2c->adap)) < 0) {
+ printk(KERN_ERR "i2c-mpc - failed to add adapter\n");
+ goto fail_add;
+ }
+
+ mpc_i2c_setclock(i2c);
+ dev_set_drvdata(device, i2c);
+ return result;
+
+ fail_add:
+ if (i2c->irq != 0)
+		free_irq(i2c->irq, i2c);
+ fail_irq:
+ iounmap(i2c->base);
+ fail_map:
+ kfree(i2c);
+ return result;
+};
+
+static int fsl_i2c_remove(struct device *device)
+{
+ struct mpc_i2c *i2c = dev_get_drvdata(device);
+
+ dev_set_drvdata(device, NULL);
+ i2c_del_adapter(&i2c->adap);
+
+ if (i2c->irq != 0)
+ free_irq(i2c->irq, i2c);
+
+ iounmap(i2c->base);
+ kfree(i2c);
+ return 0;
+};
+
+/* Structure for a device driver */
+static struct device_driver fsl_i2c_driver = {
+ .name = "fsl-i2c",
+ .bus = &platform_bus_type,
+ .probe = fsl_i2c_probe,
+ .remove = fsl_i2c_remove,
+};
+
+static int __init fsl_i2c_init(void)
+{
+ return driver_register(&fsl_i2c_driver);
+}
+
+static void __exit fsl_i2c_exit(void)
+{
+ driver_unregister(&fsl_i2c_driver);
+}
+
+module_init(fsl_i2c_init);
+module_exit(fsl_i2c_exit);
+
+#endif /* CONFIG_FSL_OCP */
MODULE_AUTHOR("Adrian Cox <adrian@humboldt.co.uk>");
MODULE_DESCRIPTION
/*
* S3/VIA 8365/8375 registers
*/
-#ifndef PCI_DEVICE_ID_S3_SAVAGE4
-#define PCI_DEVICE_ID_S3_SAVAGE4 0x8a25
-#endif
-#ifndef PCI_DEVICE_ID_S3_PROSAVAGE8
-#define PCI_DEVICE_ID_S3_PROSAVAGE8 0x8d04
-#endif
-
#define VGA_CR_IX 0x3d4
#define VGA_CR_DATA 0x3d5
--- /dev/null
+/*
+ * Copyright (C) 2004 Steven J. Hill
+ * Copyright (C) 2001,2002,2003 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/i2c-algo-sibyte.h>
+#include <asm/sibyte/sb1250_regs.h>
+#include <asm/sibyte/sb1250_smbus.h>
+
+static struct i2c_algo_sibyte_data sibyte_board_data[2] = {
+ { NULL, 0, (void *) (KSEG1+A_SMB_BASE(0)) },
+ { NULL, 1, (void *) (KSEG1+A_SMB_BASE(1)) }
+};
+
+static struct i2c_adapter sibyte_board_adapter[2] = {
+ {
+ .owner = THIS_MODULE,
+ .id = I2C_HW_SIBYTE,
+ .class = I2C_CLASS_HWMON,
+ .algo = NULL,
+ .algo_data = &sibyte_board_data[0],
+ .name = "SiByte SMBus 0",
+ },
+ {
+ .owner = THIS_MODULE,
+ .id = I2C_HW_SIBYTE,
+ .class = I2C_CLASS_HWMON,
+ .algo = NULL,
+ .algo_data = &sibyte_board_data[1],
+ .name = "SiByte SMBus 1",
+ },
+};
+
+static int __init i2c_sibyte_init(void)
+{
+ printk("i2c-swarm.o: i2c SMBus adapter module for SiByte board\n");
+ if (i2c_sibyte_add_bus(&sibyte_board_adapter[0], K_SMB_FREQ_100KHZ) < 0)
+ return -ENODEV;
+ if (i2c_sibyte_add_bus(&sibyte_board_adapter[1], K_SMB_FREQ_400KHZ) < 0)
+ return -ENODEV;
+ return 0;
+}
+
+static void __exit i2c_sibyte_exit(void)
+{
+ i2c_sibyte_del_bus(&sibyte_board_adapter[0]);
+ i2c_sibyte_del_bus(&sibyte_board_adapter[1]);
+}
+
+module_init(i2c_sibyte_init);
+module_exit(i2c_sibyte_exit);
+
+MODULE_AUTHOR("Kip Walker <kwalker@broadcom.com>, Steven J. Hill <sjhill@realitydiluted.com>");
+MODULE_DESCRIPTION("SMBus adapter routines for SiByte boards");
+MODULE_LICENSE("GPL");
#include <linux/errno.h>
#include <linux/i2c.h>
+static u8 stub_pointer;
static u8 stub_bytes[256];
static u16 stub_words[256];
ret = 0;
break;
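+	/* Illustrative note: the stub emulates an auto-incrementing register
+	 * pointer; Send Byte sets it, Receive Byte returns stub_bytes[] at the
+	 * pointer and post-increments it, and the byte-data accesses below
+	 * leave it pointing just past the register they touched.
+	 */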
+ case I2C_SMBUS_BYTE:
+ if (read_write == I2C_SMBUS_WRITE) {
+ stub_pointer = command;
+ dev_dbg(&adap->dev, "smbus byte - addr 0x%02x, "
+ "wrote 0x%02x.\n",
+ addr, command);
+ } else {
+ data->byte = stub_bytes[stub_pointer++];
+ dev_dbg(&adap->dev, "smbus byte - addr 0x%02x, "
+ "read 0x%02x.\n",
+ addr, data->byte);
+ }
+
+ ret = 0;
+ break;
+
case I2C_SMBUS_BYTE_DATA:
if (read_write == I2C_SMBUS_WRITE) {
stub_bytes[command] = data->byte;
"read 0x%02x at 0x%02x.\n",
addr, data->byte, command);
}
+ stub_pointer = command + 1;
ret = 0;
break;
static u32 stub_func(struct i2c_adapter *adapter)
{
- return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE_DATA |
- I2C_FUNC_SMBUS_WORD_DATA;
+ return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+ I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA;
}
static struct i2c_algorithm smbus_algorithm = {
client->id, value);
data->config1 = value;
adm1026_write_value(client, ADM1026_REG_CONFIG1, value);
+
+ /* initialize fan_div[] to hardware defaults */
+ value = adm1026_read_value(client, ADM1026_REG_FAN_DIV_0_3) |
+ (adm1026_read_value(client, ADM1026_REG_FAN_DIV_4_7) << 8);
+ for (i = 0;i <= 7;++i) {
+ data->fan_div[i] = DIV_FROM_REG(value & 0x03);
+ value >>= 2;
+ }
}
void adm1026_print_gpio(struct i2c_client *client)
struct adm1026_data *data = i2c_get_clientdata(client);
int i;
- dev_dbg(&client->dev, "(%d): GPIO config is:",
- client->id);
+ dev_dbg(&client->dev, "(%d): GPIO config is:", client->id);
for (i = 0;i <= 7;++i) {
if (data->config2 & (1 << i)) {
dev_dbg(&client->dev, "\t(%d): %sGP%s%d\n", client->id,
/* Many DS1621 constants specified below */
/* Config register used for detection */
/* 7 6 5 4 3 2 1 0 */
-/* |Done|THF |TLF |NVB | 1 | 0 |POL |1SHOT| */
-#define DS1621_REG_CONFIG_MASK 0x0C
-#define DS1621_REG_CONFIG_VAL 0x08
+/* |Done|THF |TLF |NVB | X | X |POL |1SHOT| */
+#define DS1621_REG_CONFIG_NVB 0x10
#define DS1621_REG_CONFIG_POLARITY 0x02
#define DS1621_REG_CONFIG_1SHOT 0x01
#define DS1621_REG_CONFIG_DONE 0x80
#define DS1621_REG_TEMP_MAX 0xA2 /* word, RW */
#define DS1621_REG_CONF 0xAC /* byte, RW */
#define DS1621_COM_START 0xEE /* no data */
+#define DS1621_COM_STOP 0x22 /* no data */
/* The DS1621 configuration register */
#define DS1621_ALARM_TEMP_HIGH 0x40
/* Now, we do the remaining detection. It is lousy. */
if (kind < 0) {
+ /* The NVB bit should be low if no EEPROM write has been
+ requested during the latest 10ms, which is highly
+ improbable in our case. */
conf = ds1621_read_value(new_client, DS1621_REG_CONF);
- if ((conf & DS1621_REG_CONFIG_MASK) != DS1621_REG_CONFIG_VAL)
+ if (conf & DS1621_REG_CONFIG_NVB)
goto exit_free;
+ /* The 7 lowest bits of a temperature should always be 0. */
temp = ds1621_read_value(new_client, DS1621_REG_TEMP);
if (temp & 0x007f)
goto exit_free;
static void b_peripheral(struct isp1301 *isp)
{
- enable_vbus_draw(isp, 8);
OTG_CTRL_REG = OTG_CTRL_REG & OTG_XCEIV_OUTPUTS;
usb_gadget_vbus_connect(isp->otg.gadget);
#ifdef CONFIG_USB_OTG
+ enable_vbus_draw(isp, 8);
otg_update_isp(isp);
#else
+ enable_vbus_draw(isp, 100);
/* UDC driver just set OTG_BSESSVLD */
isp1301_set_bits(isp, ISP1301_OTG_CONTROL_1, OTG1_DP_PULLUP);
isp1301_clear_bits(isp, ISP1301_OTG_CONTROL_1, OTG1_DP_PULLDOWN);
#endif
}
-static int isp_update_otg(struct isp1301 *isp, u8 stat)
+static void isp_update_otg(struct isp1301 *isp, u8 stat)
{
u8 isp_stat, isp_bstat;
enum usb_otg_state state = isp->otg.state;
if (the_transceiver)
return 0;
- isp = kmalloc(sizeof *isp, GFP_KERNEL);
+ isp = kcalloc(1, sizeof *isp, GFP_KERNEL);
if (!isp)
return 0;
- memset(isp, 0, sizeof *isp);
-
INIT_WORK(&isp->work, isp1301_work, isp);
init_timer(&isp->timer);
isp->timer.function = isp1301_timer;
(data->config & 0x04) ? "tachometer input" :
"alert output");
dev_dbg(&client->dev, "PWM clock %s kHz, output frequency %u Hz\n",
- (data->config_fan & 0x04) ? "1.4" : "360",
- ((data->config_fan & 0x04) ? 700 : 180000) / data->pwm1_freq);
+ (data->config_fan & 0x08) ? "1.4" : "360",
+ ((data->config_fan & 0x08) ? 700 : 180000) / data->pwm1_freq);
dev_dbg(&client->dev, "PWM output active %s, %s mode\n",
(data->config_fan & 0x10) ? "low" : "high",
(data->config_fan & 0x20) ? "manual" : "auto");
for (i = 0; i < 3; i++) {
if (((data->address[i] = extra_isa[i]))
- && !request_region(extra_isa[i], PC87360_EXTENT, "pc87360")) {
+ && !request_region(extra_isa[i], PC87360_EXTENT,
+ pc87360_driver.name)) {
dev_err(&new_client->dev, "Region 0x%x-0x%x already "
"in use!\n", extra_isa[i],
extra_isa[i]+PC87360_EXTENT-1);
/* Fan clock dividers may be needed before any data is read */
for (i = 0; i < data->fannr; i++) {
- data->fan_status[i] = pc87360_read_value(data, LD_FAN,
- NO_BANK, PC87360_REG_FAN_STATUS(i));
+ if (FAN_CONFIG_MONITOR(data->fan_conf, i))
+ data->fan_status[i] = pc87360_read_value(data,
+ LD_FAN, NO_BANK,
+ PC87360_REG_FAN_STATUS(i));
}
if (init > 0) {
}
if (data->fannr) {
- device_create_file(&new_client->dev, &dev_attr_fan1_input);
- device_create_file(&new_client->dev, &dev_attr_fan2_input);
- device_create_file(&new_client->dev, &dev_attr_fan1_min);
- device_create_file(&new_client->dev, &dev_attr_fan2_min);
- device_create_file(&new_client->dev, &dev_attr_fan1_div);
- device_create_file(&new_client->dev, &dev_attr_fan2_div);
- device_create_file(&new_client->dev, &dev_attr_fan1_status);
- device_create_file(&new_client->dev, &dev_attr_fan2_status);
+ if (FAN_CONFIG_MONITOR(data->fan_conf, 0)) {
+ device_create_file(&new_client->dev,
+ &dev_attr_fan1_input);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan1_min);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan1_div);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan1_status);
+ }
+
+ if (FAN_CONFIG_MONITOR(data->fan_conf, 1)) {
+ device_create_file(&new_client->dev,
+ &dev_attr_fan2_input);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan2_min);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan2_div);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan2_status);
+ }
if (FAN_CONFIG_CONTROL(data->fan_conf, 0))
device_create_file(&new_client->dev, &dev_attr_pwm1);
device_create_file(&new_client->dev, &dev_attr_pwm2);
}
if (data->fannr == 3) {
- device_create_file(&new_client->dev, &dev_attr_fan3_input);
- device_create_file(&new_client->dev, &dev_attr_fan3_min);
- device_create_file(&new_client->dev, &dev_attr_fan3_div);
- device_create_file(&new_client->dev, &dev_attr_fan3_status);
+ if (FAN_CONFIG_MONITOR(data->fan_conf, 2)) {
+ device_create_file(&new_client->dev,
+ &dev_attr_fan3_input);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan3_min);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan3_div);
+ device_create_file(&new_client->dev,
+ &dev_attr_fan3_status);
+ }
if (FAN_CONFIG_CONTROL(data->fan_conf, 2))
device_create_file(&new_client->dev, &dev_attr_pwm3);
--- /dev/null
+/*
+ smsc47b397.c - Part of lm_sensors, Linux kernel modules
+ for hardware monitoring
+
+ Supports the SMSC LPC47B397-NC Super-I/O chip.
+
+ Author/Maintainer: Mark M. Hoffman <mhoffman@lightlink.com>
+ Copyright (C) 2004 Utilitek Systems, Inc.
+
+ derived in part from smsc47m1.c:
+ Copyright (C) 2002 Mark D. Studebaker <mdsxyz123@yahoo.com>
+ Copyright (C) 2004 Jean Delvare <khali@linux-fr.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/ioport.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+#include <linux/init.h>
+#include <asm/io.h>
+
+static unsigned short normal_i2c[] = { I2C_CLIENT_END };
+/* Address is autodetected, there is no default value */
+static unsigned int normal_isa[] = { 0x0000, I2C_CLIENT_ISA_END };
+static struct i2c_force_data forces[] = {{NULL}};
+
+enum chips { any_chip, smsc47b397 };
+static struct i2c_address_data addr_data = {
+ .normal_i2c = normal_i2c,
+ .normal_isa = normal_isa,
+ .probe = normal_i2c, /* cheat */
+ .ignore = normal_i2c, /* cheat */
+ .forces = forces,
+};
+
+/* Super-I/O registers and commands */
+
+#define REG 0x2e /* The register to read/write */
+#define VAL 0x2f /* The value to read/write */
+
+static inline void superio_outb(int reg, int val)
+{
+ outb(reg, REG);
+ outb(val, VAL);
+}
+
+static inline int superio_inb(int reg)
+{
+ outb(reg, REG);
+ return inb(VAL);
+}
+
+/* select superio logical device */
+static inline void superio_select(int ld)
+{
+ superio_outb(0x07, ld);
+}
+
+static inline void superio_enter(void)
+{
+ outb(0x55, REG);
+}
+
+static inline void superio_exit(void)
+{
+ outb(0xAA, REG);
+}
+
+#define SUPERIO_REG_DEVID 0x20
+#define SUPERIO_REG_DEVREV 0x21
+#define SUPERIO_REG_BASE_MSB 0x60
+#define SUPERIO_REG_BASE_LSB 0x61
+#define SUPERIO_REG_LD8 0x08
+
+#define SMSC_EXTENT 0x02
+
+/* 0 <= nr <= 3 */
+static u8 smsc47b397_reg_temp[] = {0x25, 0x26, 0x27, 0x80};
+#define SMSC47B397_REG_TEMP(nr) (smsc47b397_reg_temp[(nr)])
+
+/* 0 <= nr <= 3 */
+#define SMSC47B397_REG_FAN_LSB(nr) (0x28 + 2 * (nr))
+#define SMSC47B397_REG_FAN_MSB(nr) (0x29 + 2 * (nr))
+
+struct smsc47b397_data {
+ struct i2c_client client;
+ struct semaphore lock;
+
+ struct semaphore update_lock;
+ unsigned long last_updated; /* in jiffies */
+ int valid;
+
+ /* register values */
+ u16 fan[4];
+ u8 temp[4];
+};
+
+static int smsc47b397_read_value(struct i2c_client *client, u8 reg)
+{
+ struct smsc47b397_data *data = i2c_get_clientdata(client);
+ int res;
+
+ down(&data->lock);
+ outb(reg, client->addr);
+ res = inb_p(client->addr + 1);
+ up(&data->lock);
+ return res;
+}
+
+static struct smsc47b397_data *smsc47b397_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct smsc47b397_data *data = i2c_get_clientdata(client);
+ int i;
+
+ down(&data->update_lock);
+
+ if (time_after(jiffies - data->last_updated, (unsigned long)HZ)
+ || time_before(jiffies, data->last_updated) || !data->valid) {
+
+ dev_dbg(&client->dev, "starting device update...\n");
+
+ /* 4 temperature inputs, 4 fan inputs */
+ for (i = 0; i < 4; i++) {
+ data->temp[i] = smsc47b397_read_value(client,
+ SMSC47B397_REG_TEMP(i));
+
+ /* must read LSB first */
+ data->fan[i] = smsc47b397_read_value(client,
+ SMSC47B397_REG_FAN_LSB(i));
+ data->fan[i] |= smsc47b397_read_value(client,
+ SMSC47B397_REG_FAN_MSB(i)) << 8;
+ }
+
+ data->last_updated = jiffies;
+ data->valid = 1;
+
+ dev_dbg(&client->dev, "... device update complete\n");
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+/* TEMP: 0.001C/bit (-128C to +127C)
+ REG: 1C/bit, two's complement */
+static int temp_from_reg(u8 reg)
+{
+ return (s8)reg * 1000;
+}
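+
+/* For example, a raw register value of 0x19 (25) converts to 25000
+ * (25.000 degC), and 0xF6 (-10, two's complement) converts to -10000. */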
+
+/* 0 <= nr <= 3 */
+static ssize_t show_temp(struct device *dev, char *buf, int nr)
+{
+ struct smsc47b397_data *data = smsc47b397_update_device(dev);
+ return sprintf(buf, "%d\n", temp_from_reg(data->temp[nr]));
+}
+
+#define sysfs_temp(num) \
+static ssize_t show_temp##num(struct device *dev, char *buf) \
+{ \
+ return show_temp(dev, buf, num-1); \
+} \
+static DEVICE_ATTR(temp##num##_input, S_IRUGO, show_temp##num, NULL)
+
+sysfs_temp(1);
+sysfs_temp(2);
+sysfs_temp(3);
+sysfs_temp(4);
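+
+/* For example, sysfs_temp(2) above expands to show_temp2(), which calls
+ * show_temp(dev, buf, 1) and declares the read-only temp2_input attribute. */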
+
+#define device_create_file_temp(client, num) \
+ device_create_file(&client->dev, &dev_attr_temp##num##_input)
+
+/* FAN: 1 RPM/bit
+ REG: count of 90kHz pulses / revolution */
+static int fan_from_reg(u16 reg)
+{
+	if (reg == 0)
+		return 0;	/* avoid dividing by zero for a stalled or absent fan */
+	return 90000 * 60 / reg;
+}
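+
+/* For example, a reading of 5400 (90 kHz counts per revolution) converts
+ * to 90000 * 60 / 5400 = 1000 RPM. */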
+
+/* 0 <= nr <= 3 */
+static ssize_t show_fan(struct device *dev, char *buf, int nr)
+{
+ struct smsc47b397_data *data = smsc47b397_update_device(dev);
+ return sprintf(buf, "%d\n", fan_from_reg(data->fan[nr]));
+}
+
+#define sysfs_fan(num) \
+static ssize_t show_fan##num(struct device *dev, char *buf) \
+{ \
+ return show_fan(dev, buf, num-1); \
+} \
+static DEVICE_ATTR(fan##num##_input, S_IRUGO, show_fan##num, NULL)
+
+sysfs_fan(1);
+sysfs_fan(2);
+sysfs_fan(3);
+sysfs_fan(4);
+
+#define device_create_file_fan(client, num) \
+ device_create_file(&client->dev, &dev_attr_fan##num##_input)
+
+static int smsc47b397_detect(struct i2c_adapter *adapter, int addr, int kind);
+
+static int smsc47b397_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, smsc47b397_detect);
+}
+
+static int smsc47b397_detach_client(struct i2c_client *client)
+{
+ int err;
+
+ if ((err = i2c_detach_client(client))) {
+ dev_err(&client->dev, "Client deregistration failed, "
+ "client not detached.\n");
+ return err;
+ }
+
+ release_region(client->addr, SMSC_EXTENT);
+ kfree(i2c_get_clientdata(client));
+
+ return 0;
+}
+
+static struct i2c_driver smsc47b397_driver = {
+ .owner = THIS_MODULE,
+ .name = "smsc47b397",
+ .id = I2C_DRIVERID_SMSC47B397,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = smsc47b397_attach_adapter,
+ .detach_client = smsc47b397_detach_client,
+};
+
+static int smsc47b397_detect(struct i2c_adapter *adapter, int addr, int kind)
+{
+ struct i2c_client *new_client;
+ struct smsc47b397_data *data;
+ int err = 0;
+
+ if (!i2c_is_isa_adapter(adapter)) {
+ return 0;
+ }
+
+ if (!request_region(addr, SMSC_EXTENT, smsc47b397_driver.name)) {
+ dev_err(&adapter->dev, "Region 0x%x already in use!\n", addr);
+ return -EBUSY;
+ }
+
+ if (!(data = kmalloc(sizeof(struct smsc47b397_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto error_release;
+ }
+ memset(data, 0x00, sizeof(struct smsc47b397_data));
+
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = addr;
+ init_MUTEX(&data->lock);
+ new_client->adapter = adapter;
+ new_client->driver = &smsc47b397_driver;
+ new_client->flags = 0;
+
+ strlcpy(new_client->name, "smsc47b397", I2C_NAME_SIZE);
+
+ init_MUTEX(&data->update_lock);
+
+ if ((err = i2c_attach_client(new_client)))
+ goto error_free;
+
+ device_create_file_temp(new_client, 1);
+ device_create_file_temp(new_client, 2);
+ device_create_file_temp(new_client, 3);
+ device_create_file_temp(new_client, 4);
+
+ device_create_file_fan(new_client, 1);
+ device_create_file_fan(new_client, 2);
+ device_create_file_fan(new_client, 3);
+ device_create_file_fan(new_client, 4);
+
+ return 0;
+
+error_free:
+ kfree(new_client);
+error_release:
+ release_region(addr, SMSC_EXTENT);
+ return err;
+}
+
+static int __init smsc47b397_find(unsigned int *addr)
+{
+ u8 id, rev;
+
+ superio_enter();
+ id = superio_inb(SUPERIO_REG_DEVID);
+
+ if (id != 0x6f) {
+ superio_exit();
+ return -ENODEV;
+ }
+
+ rev = superio_inb(SUPERIO_REG_DEVREV);
+
+ superio_select(SUPERIO_REG_LD8);
+ *addr = (superio_inb(SUPERIO_REG_BASE_MSB) << 8)
+ | superio_inb(SUPERIO_REG_BASE_LSB);
+
+ printk(KERN_INFO "smsc47b397: found SMSC LPC47B397-NC "
+ "(base address 0x%04x, revision %u)\n", *addr, rev);
+
+ superio_exit();
+ return 0;
+}
+
+static int __init smsc47b397_init(void)
+{
+ int ret;
+
+ if ((ret = smsc47b397_find(normal_isa)))
+ return ret;
+
+ return i2c_add_driver(&smsc47b397_driver);
+}
+
+static void __exit smsc47b397_exit(void)
+{
+ i2c_del_driver(&smsc47b397_driver);
+}
+
+MODULE_AUTHOR("Mark M. Hoffman <mhoffman@lightlink.com>");
+MODULE_DESCRIPTION("SMSC LPC47B397 driver");
+MODULE_LICENSE("GPL");
+
+module_init(smsc47b397_init);
+module_exit(smsc47b397_exit);
return 0;
}
- if (!request_region(address, SMSC_EXTENT, "smsc47m1")) {
+ if (!request_region(address, SMSC_EXTENT, smsc47m1_driver.name)) {
dev_err(&adapter->dev, "Region 0x%x already in use!\n", address);
return -EBUSY;
}
static struct vrm_model vrm_models[] = {
{X86_VENDOR_AMD, 0x6, ANY, 90}, /* Athlon Duron etc */
- {X86_VENDOR_AMD, 0xF, 0x4, 90}, /* Athlon 64 */
- {X86_VENDOR_AMD, 0xF, 0x5, 24}, /* Opteron */
+ {X86_VENDOR_AMD, 0xF, ANY, 24}, /* Athlon 64, Opteron */
{X86_VENDOR_INTEL, 0x6, 0x9, 85}, /* 0.13um too */
{X86_VENDOR_INTEL, 0x6, 0xB, 85}, /* 0xB Tualatin */
{X86_VENDOR_INTEL, 0x6, ANY, 82}, /* any P6 */
return vrm_ret;
}
-/* and now something completely different for Non-x86 world*/
+/* and now for something completely different for Non-x86 world*/
#else
int i2c_which_vrm(void)
{
*/
static ide_startstop_t etrax_dma_intr (ide_drive_t *drive)
{
- int i, dma_stat;
- byte stat;
-
LED_DISK_READ(0);
LED_DISK_WRITE(0);
- dma_stat = HWIF(drive)->ide_dma_end(drive);
- stat = HWIF(drive)->INB(IDE_STATUS_REG); /* get drive status */
- if (OK_STAT(stat,DRIVE_READY,drive->bad_wstat|DRQ_STAT)) {
- if (!dma_stat) {
- struct request *rq;
- rq = HWGROUP(drive)->rq;
- for (i = rq->nr_sectors; i > 0;) {
- i -= rq->current_nr_sectors;
- DRIVER(drive)->end_request(drive, 1, rq->nr_sectors);
- }
- return ide_stopped;
- }
- printk("%s: bad DMA status\n", drive->name);
- }
- return DRIVER(drive)->error(drive, "dma_intr", stat);
+ return ide_dma_intr(drive);
}
/*
.name = "ide-default",
.version = IDEDEFAULT_VERSION,
.attach = idedefault_attach,
+ .cleanup = ide_unregister_subdriver,
.drives = LIST_HEAD_INIT(idedefault_driver.drives)
};
#include <asm/io.h>
-#include "cy82c693.h"
+/* the current version */
+#define CY82_VERSION "CY82C693U driver v0.34 99-13-12 Andreas S. Krebs (akrebs@altavista.net)"
+
+/*
+ * The following are used to debug the driver.
+ */
+#define CY82C693_DEBUG_LOGS 0
+#define CY82C693_DEBUG_INFO 0
+
+/* define CY82C693_SETDMA_CLOCK to set DMA Controller Clock Speed to ATCLK */
+#undef CY82C693_SETDMA_CLOCK
+
+/*
+ * NOTE: the value for the busmaster timeout is tricky and I got it by
+ * trial and error!  Using too low a value causes DMA timeouts and
+ * reduces IDE performance, while too high a value causes audio
+ * playback to stutter.
+ * If you know a better value or how to calculate it, please let me know.
+ */
+
+/* twice the value written in cy82c693ub datasheet */
+#define BUSMASTER_TIMEOUT 0x50
+/*
+ * the value above was tested on my machine and it seems to work okay
+ */
+
+/* here are the offset definitions for the registers */
+#define CY82_IDE_CMDREG 0x04
+#define CY82_IDE_ADDRSETUP 0x48
+#define CY82_IDE_MASTER_IOR 0x4C
+#define CY82_IDE_MASTER_IOW 0x4D
+#define CY82_IDE_SLAVE_IOR 0x4E
+#define CY82_IDE_SLAVE_IOW 0x4F
+#define CY82_IDE_MASTER_8BIT 0x50
+#define CY82_IDE_SLAVE_8BIT 0x51
+
+#define CY82_INDEX_PORT 0x22
+#define CY82_DATA_PORT 0x23
+
+#define CY82_INDEX_CTRLREG1 0x01
+#define CY82_INDEX_CHANNEL0 0x30
+#define CY82_INDEX_CHANNEL1 0x31
+#define CY82_INDEX_TIMEOUT 0x32
+
+/* the max PIO mode - from datasheet */
+#define CY82C693_MAX_PIO 4
+
+/* the min and max PCI bus speed in MHz - from datasheet */
+#define CY82C963_MIN_BUS_SPEED 25
+#define CY82C963_MAX_BUS_SPEED 33
+
+/* the struct for the PIO mode timings */
+typedef struct pio_clocks_s {
+ u8 address_time; /* Address setup (clocks) */
+ u8 time_16r; /* clocks for 16bit IOR (0xF0=Active/data, 0x0F=Recovery) */
+ u8 time_16w; /* clocks for 16bit IOW (0xF0=Active/data, 0x0F=Recovery) */
+ u8 time_8; /* clocks for 8bit (0xF0=Active/data, 0x0F=Recovery) */
+} pio_clocks_t;
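+
+/* e.g. a (hypothetical) time_16r value of 0x32 encodes 3 active/data clocks
+ * in the high nibble and 2 recovery clocks in the low nibble */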
/*
* calc clocks using bus_speed
}
}
+static ide_pci_device_t cy82c693_chipsets[] __devinitdata = {
+ { /* 0 */
+ .name = "CY82C693",
+ .init_chipset = init_chipset_cy82c693,
+ .init_iops = init_iops_cy82c693,
+ .init_hwif = init_hwif_cy82c693,
+ .channels = 1,
+ .autodma = AUTODMA,
+ .bootable = ON_BOARD,
+ }
+};
+
static int __devinit cy82c693_init_one(struct pci_dev *dev, const struct pci_device_id *id)
{
ide_pci_device_t *d = &cy82c693_chipsets[id->driver_data];
struct pci_dev *dev2;
+ int ret = -ENODEV;
/* CY82C693 is more than only a IDE controller.
Function 1 is primary IDE channel, function 2 - secondary. */
if ((dev->class >> 8) == PCI_CLASS_STORAGE_IDE &&
PCI_FUNC(dev->devfn) == 1) {
dev2 = pci_find_slot(dev->bus->number, dev->devfn + 1);
- ide_setup_pci_devices(dev, dev2, d);
+ ret = ide_setup_pci_devices(dev, dev2, d);
}
- return 0;
+ return ret;
}
static struct pci_device_id cy82c693_pci_tbl[] = {
d->name = chipset_names[(pcicmd & PCI_COMMAND_MEMORY) ? 1 : 0];
d->bootable = (pcicmd & PCI_COMMAND_MEMORY) ? OFF_BOARD : NEVER_BOARD;
- ide_setup_pci_device(dev, d);
- return 0;
+ return ide_setup_pci_device(dev, d);
}
static struct pci_device_id hpt34x_pci_tbl[] = {
static DECLARE_RWSEM(hl_drivers_sem);
static LIST_HEAD(hl_irqs);
-static rwlock_t hl_irqs_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(hl_irqs_lock);
-static rwlock_t addr_space_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(addr_space_lock);
/* addr_space list will have zero and max already included as bounds */
static struct hpsb_address_ops dummy_ops = { NULL, NULL, NULL, NULL };
host->pdev = dev;
pci_set_drvdata(dev, lynx);
- lynx->lock = SPIN_LOCK_UNLOCKED;
- lynx->phy_reg_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&lynx->lock);
+ spin_lock_init(&lynx->phy_reg_lock);
#ifndef CONFIG_IEEE1394_PCILYNX_LOCALRAM
lynx->pcl_mem = pci_alloc_consistent(dev, LOCALRAM_SIZE,
tasklet_init(&lynx->iso_rcv.tq, (void (*)(unsigned long))iso_rcv_bh,
(unsigned long)lynx);
- lynx->iso_rcv.lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&lynx->iso_rcv.lock);
- lynx->async.queue_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&lynx->async.queue_lock);
lynx->async.channel = CHANNEL_ASYNC_SEND;
- lynx->iso_send.queue_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&lynx->iso_send.queue_lock);
lynx->iso_send.channel = CHANNEL_ISO_SEND;
PRINT(KERN_INFO, lynx->id, "remapped memory spaces reg 0x%p, rom 0x%p, "
--- /dev/null
+menu "InfiniBand support"
+
+config INFINIBAND
+ tristate "InfiniBand support"
+ ---help---
+ Core support for InfiniBand (IB). Make sure to also select
+ any protocols you wish to use as well as drivers for your
+ InfiniBand hardware.
+
+source "drivers/infiniband/hw/mthca/Kconfig"
+
+source "drivers/infiniband/ulp/ipoib/Kconfig"
+
+endmenu
--- /dev/null
+obj-$(CONFIG_INFINIBAND) += core/
+obj-$(CONFIG_INFINIBAND_MTHCA) += hw/mthca/
+obj-$(CONFIG_INFINIBAND_IPOIB) += ulp/ipoib/
--- /dev/null
+EXTRA_CFLAGS += -Idrivers/infiniband/include
+
+obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o ib_umad.o
+
+ib_core-y := packer.o ud_header.o verbs.o sysfs.o \
+ device.o fmr_pool.o cache.o
+
+ib_mad-y := mad.o smi.o agent.o
+
+ib_sa-y := sa_query.o
+
+ib_umad-y := user_mad.o
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: agent.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <linux/dma-mapping.h>
+
+#include <asm/bug.h>
+
+#include <ib_smi.h>
+
+#include "smi.h"
+#include "agent_priv.h"
+#include "mad_priv.h"
+
+
+spinlock_t ib_agent_port_list_lock;
+static LIST_HEAD(ib_agent_port_list);
+
+extern kmem_cache_t *ib_mad_cache;
+
+
+/*
+ * Caller must hold ib_agent_port_list_lock
+ */
+static inline struct ib_agent_port_private *
+__ib_get_agent_port(struct ib_device *device, int port_num,
+ struct ib_mad_agent *mad_agent)
+{
+ struct ib_agent_port_private *entry;
+
+ BUG_ON(!(!!device ^ !!mad_agent)); /* Exactly one MUST be (!NULL) */
+
+ if (device) {
+ list_for_each_entry(entry, &ib_agent_port_list, port_list) {
+ if (entry->dr_smp_agent->device == device &&
+ entry->port_num == port_num)
+ return entry;
+ }
+ } else {
+ list_for_each_entry(entry, &ib_agent_port_list, port_list) {
+ if ((entry->dr_smp_agent == mad_agent) ||
+ (entry->lr_smp_agent == mad_agent) ||
+ (entry->perf_mgmt_agent == mad_agent))
+ return entry;
+ }
+ }
+ return NULL;
+}
+
+static inline struct ib_agent_port_private *
+ib_get_agent_port(struct ib_device *device, int port_num,
+ struct ib_mad_agent *mad_agent)
+{
+ struct ib_agent_port_private *entry;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ib_agent_port_list_lock, flags);
+ entry = __ib_get_agent_port(device, port_num, mad_agent);
+ spin_unlock_irqrestore(&ib_agent_port_list_lock, flags);
+
+ return entry;
+}
+
+int smi_check_local_dr_smp(struct ib_smp *smp,
+ struct ib_device *device,
+ int port_num)
+{
+ struct ib_agent_port_private *port_priv;
+
+ if (smp->mgmt_class != IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ return 1;
+ port_priv = ib_get_agent_port(device, port_num, NULL);
+ if (!port_priv) {
+ printk(KERN_DEBUG SPFX "smi_check_local_dr_smp %s port %d "
+ "not open\n",
+ device->name, port_num);
+ return 1;
+ }
+
+ return smi_check_local_smp(port_priv->dr_smp_agent, smp);
+}
+
+static int agent_mad_send(struct ib_mad_agent *mad_agent,
+ struct ib_agent_port_private *port_priv,
+ struct ib_mad_private *mad_priv,
+ struct ib_grh *grh,
+ struct ib_wc *wc)
+{
+ struct ib_agent_send_wr *agent_send_wr;
+ struct ib_sge gather_list;
+ struct ib_send_wr send_wr;
+ struct ib_send_wr *bad_send_wr;
+ struct ib_ah_attr ah_attr;
+ unsigned long flags;
+ int ret = 1;
+
+ agent_send_wr = kmalloc(sizeof(*agent_send_wr), GFP_KERNEL);
+ if (!agent_send_wr)
+ goto out;
+ agent_send_wr->mad = mad_priv;
+
+ /* PCI mapping */
+ gather_list.addr = dma_map_single(mad_agent->device->dma_device,
+ &mad_priv->mad,
+ sizeof(mad_priv->mad),
+ DMA_TO_DEVICE);
+ gather_list.length = sizeof(mad_priv->mad);
+ gather_list.lkey = (*port_priv->mr).lkey;
+
+ send_wr.next = NULL;
+ send_wr.opcode = IB_WR_SEND;
+ send_wr.sg_list = &gather_list;
+ send_wr.num_sge = 1;
+ send_wr.wr.ud.remote_qpn = wc->src_qp; /* DQPN */
+ send_wr.wr.ud.timeout_ms = 0;
+ send_wr.send_flags = IB_SEND_SIGNALED | IB_SEND_SOLICITED;
+
+ ah_attr.dlid = wc->slid;
+ ah_attr.port_num = mad_agent->port_num;
+ ah_attr.src_path_bits = wc->dlid_path_bits;
+ ah_attr.sl = wc->sl;
+ ah_attr.static_rate = 0;
+ ah_attr.ah_flags = 0; /* No GRH */
+ if (mad_priv->mad.mad.mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT) {
+ if (wc->wc_flags & IB_WC_GRH) {
+ ah_attr.ah_flags = IB_AH_GRH;
+			/* Should sgid be looked up? */
+ ah_attr.grh.sgid_index = 0;
+ ah_attr.grh.hop_limit = grh->hop_limit;
+ ah_attr.grh.flow_label = be32_to_cpup(
+ &grh->version_tclass_flow) & 0xfffff;
+ ah_attr.grh.traffic_class = (be32_to_cpup(
+ &grh->version_tclass_flow) >> 20) & 0xff;
+ memcpy(ah_attr.grh.dgid.raw,
+ grh->sgid.raw,
+ sizeof(ah_attr.grh.dgid));
+ }
+ }
+
+ agent_send_wr->ah = ib_create_ah(mad_agent->qp->pd, &ah_attr);
+ if (IS_ERR(agent_send_wr->ah)) {
+ printk(KERN_ERR SPFX "No memory for address handle\n");
+ kfree(agent_send_wr);
+ goto out;
+ }
+
+ send_wr.wr.ud.ah = agent_send_wr->ah;
+ if (mad_priv->mad.mad.mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT) {
+ send_wr.wr.ud.pkey_index = wc->pkey_index;
+ send_wr.wr.ud.remote_qkey = IB_QP1_QKEY;
+ } else { /* for SMPs */
+ send_wr.wr.ud.pkey_index = 0;
+ send_wr.wr.ud.remote_qkey = 0;
+ }
+ send_wr.wr.ud.mad_hdr = &mad_priv->mad.mad.mad_hdr;
+ send_wr.wr_id = (unsigned long)agent_send_wr;
+
+ pci_unmap_addr_set(agent_send_wr, mapping, gather_list.addr);
+
+ /* Send */
+ spin_lock_irqsave(&port_priv->send_list_lock, flags);
+ if (ib_post_send_mad(mad_agent, &send_wr, &bad_send_wr)) {
+ spin_unlock_irqrestore(&port_priv->send_list_lock, flags);
+ dma_unmap_single(mad_agent->device->dma_device,
+ pci_unmap_addr(agent_send_wr, mapping),
+ sizeof(mad_priv->mad),
+ DMA_TO_DEVICE);
+ ib_destroy_ah(agent_send_wr->ah);
+ kfree(agent_send_wr);
+ } else {
+ list_add_tail(&agent_send_wr->send_list,
+ &port_priv->send_posted_list);
+ spin_unlock_irqrestore(&port_priv->send_list_lock, flags);
+ ret = 0;
+ }
+
+out:
+ return ret;
+}
+
+int agent_send(struct ib_mad_private *mad,
+ struct ib_grh *grh,
+ struct ib_wc *wc,
+ struct ib_device *device,
+ int port_num)
+{
+ struct ib_agent_port_private *port_priv;
+ struct ib_mad_agent *mad_agent;
+
+ port_priv = ib_get_agent_port(device, port_num, NULL);
+ if (!port_priv) {
+ printk(KERN_DEBUG SPFX "agent_send %s port %d not open\n",
+ device->name, port_num);
+ return 1;
+ }
+
+ /* Get mad agent based on mgmt_class in MAD */
+ switch (mad->mad.mad.mad_hdr.mgmt_class) {
+ case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
+ mad_agent = port_priv->dr_smp_agent;
+ break;
+ case IB_MGMT_CLASS_SUBN_LID_ROUTED:
+ mad_agent = port_priv->lr_smp_agent;
+ break;
+ case IB_MGMT_CLASS_PERF_MGMT:
+ mad_agent = port_priv->perf_mgmt_agent;
+ break;
+ default:
+ return 1;
+ }
+
+ return agent_mad_send(mad_agent, port_priv, mad, grh, wc);
+}
+
+static void agent_send_handler(struct ib_mad_agent *mad_agent,
+ struct ib_mad_send_wc *mad_send_wc)
+{
+ struct ib_agent_port_private *port_priv;
+ struct ib_agent_send_wr *agent_send_wr;
+ unsigned long flags;
+
+ /* Find matching MAD agent */
+ port_priv = ib_get_agent_port(NULL, 0, mad_agent);
+ if (!port_priv) {
+ printk(KERN_ERR SPFX "agent_send_handler: no matching MAD "
+ "agent %p\n", mad_agent);
+ return;
+ }
+
+ agent_send_wr = (struct ib_agent_send_wr *)(unsigned long)mad_send_wc->wr_id;
+ spin_lock_irqsave(&port_priv->send_list_lock, flags);
+ /* Remove completed send from posted send MAD list */
+ list_del(&agent_send_wr->send_list);
+ spin_unlock_irqrestore(&port_priv->send_list_lock, flags);
+
+ /* Unmap PCI */
+ dma_unmap_single(mad_agent->device->dma_device,
+ pci_unmap_addr(agent_send_wr, mapping),
+ sizeof(agent_send_wr->mad->mad),
+ DMA_TO_DEVICE);
+
+ ib_destroy_ah(agent_send_wr->ah);
+
+ /* Release allocated memory */
+ kmem_cache_free(ib_mad_cache, agent_send_wr->mad);
+ kfree(agent_send_wr);
+}
+
+int ib_agent_port_open(struct ib_device *device, int port_num)
+{
+ int ret;
+ struct ib_agent_port_private *port_priv;
+ struct ib_mad_reg_req reg_req;
+ unsigned long flags;
+
+ /* First, check if port already open for SMI */
+ port_priv = ib_get_agent_port(device, port_num, NULL);
+ if (port_priv) {
+ printk(KERN_DEBUG SPFX "%s port %d already open\n",
+ device->name, port_num);
+ return 0;
+ }
+
+ /* Create new device info */
+ port_priv = kmalloc(sizeof *port_priv, GFP_KERNEL);
+ if (!port_priv) {
+ printk(KERN_ERR SPFX "No memory for ib_agent_port_private\n");
+ ret = -ENOMEM;
+ goto error1;
+ }
+
+ memset(port_priv, 0, sizeof *port_priv);
+ port_priv->port_num = port_num;
+ spin_lock_init(&port_priv->send_list_lock);
+ INIT_LIST_HEAD(&port_priv->send_posted_list);
+
+ /* Obtain MAD agent for directed route SM class */
+ reg_req.mgmt_class = IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE;
+ reg_req.mgmt_class_version = 1;
+
+ port_priv->dr_smp_agent = ib_register_mad_agent(device, port_num,
+ IB_QPT_SMI,
+ NULL, 0,
+ &agent_send_handler,
+ NULL, NULL);
+
+ if (IS_ERR(port_priv->dr_smp_agent)) {
+ ret = PTR_ERR(port_priv->dr_smp_agent);
+ goto error2;
+ }
+
+ /* Obtain MAD agent for LID routed SM class */
+ reg_req.mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ port_priv->lr_smp_agent = ib_register_mad_agent(device, port_num,
+ IB_QPT_SMI,
+ NULL, 0,
+ &agent_send_handler,
+ NULL, NULL);
+ if (IS_ERR(port_priv->lr_smp_agent)) {
+ ret = PTR_ERR(port_priv->lr_smp_agent);
+ goto error3;
+ }
+
+ /* Obtain MAD agent for PerfMgmt class */
+ reg_req.mgmt_class = IB_MGMT_CLASS_PERF_MGMT;
+ port_priv->perf_mgmt_agent = ib_register_mad_agent(device, port_num,
+ IB_QPT_GSI,
+ NULL, 0,
+ &agent_send_handler,
+ NULL, NULL);
+ if (IS_ERR(port_priv->perf_mgmt_agent)) {
+ ret = PTR_ERR(port_priv->perf_mgmt_agent);
+ goto error4;
+ }
+
+ port_priv->mr = ib_get_dma_mr(port_priv->dr_smp_agent->qp->pd,
+ IB_ACCESS_LOCAL_WRITE);
+ if (IS_ERR(port_priv->mr)) {
+ printk(KERN_ERR SPFX "Couldn't get DMA MR\n");
+ ret = PTR_ERR(port_priv->mr);
+ goto error5;
+ }
+
+ spin_lock_irqsave(&ib_agent_port_list_lock, flags);
+ list_add_tail(&port_priv->port_list, &ib_agent_port_list);
+ spin_unlock_irqrestore(&ib_agent_port_list_lock, flags);
+
+ return 0;
+
+error5:
+ ib_unregister_mad_agent(port_priv->perf_mgmt_agent);
+error4:
+ ib_unregister_mad_agent(port_priv->lr_smp_agent);
+error3:
+ ib_unregister_mad_agent(port_priv->dr_smp_agent);
+error2:
+ kfree(port_priv);
+error1:
+ return ret;
+}
+
+int ib_agent_port_close(struct ib_device *device, int port_num)
+{
+ struct ib_agent_port_private *port_priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ib_agent_port_list_lock, flags);
+ port_priv = __ib_get_agent_port(device, port_num, NULL);
+ if (port_priv == NULL) {
+ spin_unlock_irqrestore(&ib_agent_port_list_lock, flags);
+ printk(KERN_ERR SPFX "Port %d not found\n", port_num);
+ return -ENODEV;
+ }
+ list_del(&port_priv->port_list);
+ spin_unlock_irqrestore(&ib_agent_port_list_lock, flags);
+
+ ib_dereg_mr(port_priv->mr);
+
+ ib_unregister_mad_agent(port_priv->perf_mgmt_agent);
+ ib_unregister_mad_agent(port_priv->lr_smp_agent);
+ ib_unregister_mad_agent(port_priv->dr_smp_agent);
+ kfree(port_priv);
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: agent_priv.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#ifndef __IB_AGENT_PRIV_H__
+#define __IB_AGENT_PRIV_H__
+
+#include <linux/pci.h>
+
+#define SPFX "ib_agent: "
+
+struct ib_agent_send_wr {
+ struct list_head send_list;
+ struct ib_ah *ah;
+ struct ib_mad_private *mad;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+};
+
+struct ib_agent_port_private {
+ struct list_head port_list;
+ struct list_head send_posted_list;
+ spinlock_t send_list_lock;
+ int port_num;
+ struct ib_mad_agent *dr_smp_agent; /* DR SM class */
+ struct ib_mad_agent *lr_smp_agent; /* LR SM class */
+ struct ib_mad_agent *perf_mgmt_agent; /* PerfMgmt class */
+ struct ib_mr *mr;
+};
+
+#endif /* __IB_AGENT_PRIV_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: cache.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+
+#include "core_priv.h"
+
+struct ib_pkey_cache {
+ int table_len;
+ u16 table[0];
+};
+
+struct ib_gid_cache {
+ int table_len;
+ union ib_gid table[0];
+};
+
+struct ib_update_work {
+ struct work_struct work;
+ struct ib_device *device;
+ u8 port_num;
+};
+
+static inline int start_port(struct ib_device *device)
+{
+ return device->node_type == IB_NODE_SWITCH ? 0 : 1;
+}
+
+static inline int end_port(struct ib_device *device)
+{
+ return device->node_type == IB_NODE_SWITCH ? 0 : device->phys_port_cnt;
+}
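+
+/* e.g. a 2-port HCA exposes ports 1..2, while a switch exposes only its
+ * management port 0, so start_port() == end_port() == 0 in that case */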
+
+int ib_get_cached_gid(struct ib_device *device,
+ u8 port_num,
+ int index,
+ union ib_gid *gid)
+{
+ struct ib_gid_cache *cache;
+ unsigned long flags;
+ int ret = 0;
+
+ if (port_num < start_port(device) || port_num > end_port(device))
+ return -EINVAL;
+
+ read_lock_irqsave(&device->cache.lock, flags);
+
+ cache = device->cache.gid_cache[port_num - start_port(device)];
+
+ if (index < 0 || index >= cache->table_len)
+ ret = -EINVAL;
+ else
+ *gid = cache->table[index];
+
+ read_unlock_irqrestore(&device->cache.lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_get_cached_gid);
+
+int ib_find_cached_gid(struct ib_device *device,
+ union ib_gid *gid,
+ u8 *port_num,
+ u16 *index)
+{
+ struct ib_gid_cache *cache;
+ unsigned long flags;
+ int p, i;
+ int ret = -ENOENT;
+
+ *port_num = -1;
+ if (index)
+ *index = -1;
+
+ read_lock_irqsave(&device->cache.lock, flags);
+
+ for (p = 0; p <= end_port(device) - start_port(device); ++p) {
+ cache = device->cache.gid_cache[p];
+ for (i = 0; i < cache->table_len; ++i) {
+ if (!memcmp(gid, &cache->table[i], sizeof *gid)) {
+ *port_num = p;
+ if (index)
+ *index = i;
+ ret = 0;
+ goto found;
+ }
+ }
+ }
+found:
+ read_unlock_irqrestore(&device->cache.lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_find_cached_gid);
+
+int ib_get_cached_pkey(struct ib_device *device,
+ u8 port_num,
+ int index,
+ u16 *pkey)
+{
+ struct ib_pkey_cache *cache;
+ unsigned long flags;
+ int ret = 0;
+
+ if (port_num < start_port(device) || port_num > end_port(device))
+ return -EINVAL;
+
+ read_lock_irqsave(&device->cache.lock, flags);
+
+ cache = device->cache.pkey_cache[port_num - start_port(device)];
+
+ if (index < 0 || index >= cache->table_len)
+ ret = -EINVAL;
+ else
+ *pkey = cache->table[index];
+
+ read_unlock_irqrestore(&device->cache.lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_get_cached_pkey);
+
+int ib_find_cached_pkey(struct ib_device *device,
+ u8 port_num,
+ u16 pkey,
+ u16 *index)
+{
+ struct ib_pkey_cache *cache;
+ unsigned long flags;
+ int i;
+ int ret = -ENOENT;
+
+ if (port_num < start_port(device) || port_num > end_port(device))
+ return -EINVAL;
+
+ read_lock_irqsave(&device->cache.lock, flags);
+
+ cache = device->cache.pkey_cache[port_num - start_port(device)];
+
+ *index = -1;
+
+ for (i = 0; i < cache->table_len; ++i)
+ if ((cache->table[i] & 0x7fff) == (pkey & 0x7fff)) {
+ *index = i;
+ ret = 0;
+ break;
+ }
+
+ read_unlock_irqrestore(&device->cache.lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_find_cached_pkey);
+
+static void ib_cache_update(struct ib_device *device,
+ u8 port)
+{
+ struct ib_port_attr *tprops = NULL;
+ struct ib_pkey_cache *pkey_cache = NULL, *old_pkey_cache;
+ struct ib_gid_cache *gid_cache = NULL, *old_gid_cache;
+ int i;
+ int ret;
+
+ tprops = kmalloc(sizeof *tprops, GFP_KERNEL);
+ if (!tprops)
+ return;
+
+ ret = ib_query_port(device, port, tprops);
+ if (ret) {
+ printk(KERN_WARNING "ib_query_port failed (%d) for %s\n",
+ ret, device->name);
+ goto err;
+ }
+
+ pkey_cache = kmalloc(sizeof *pkey_cache + tprops->pkey_tbl_len *
+ sizeof *pkey_cache->table, GFP_KERNEL);
+ if (!pkey_cache)
+ goto err;
+
+ pkey_cache->table_len = tprops->pkey_tbl_len;
+
+ gid_cache = kmalloc(sizeof *gid_cache + tprops->gid_tbl_len *
+ sizeof *gid_cache->table, GFP_KERNEL);
+ if (!gid_cache)
+ goto err;
+
+ gid_cache->table_len = tprops->gid_tbl_len;
+
+ for (i = 0; i < pkey_cache->table_len; ++i) {
+ ret = ib_query_pkey(device, port, i, pkey_cache->table + i);
+ if (ret) {
+ printk(KERN_WARNING "ib_query_pkey failed (%d) for %s (index %d)\n",
+ ret, device->name, i);
+ goto err;
+ }
+ }
+
+ for (i = 0; i < gid_cache->table_len; ++i) {
+ ret = ib_query_gid(device, port, i, gid_cache->table + i);
+ if (ret) {
+ printk(KERN_WARNING "ib_query_gid failed (%d) for %s (index %d)\n",
+ ret, device->name, i);
+ goto err;
+ }
+ }
+
+ write_lock_irq(&device->cache.lock);
+
+ old_pkey_cache = device->cache.pkey_cache[port - start_port(device)];
+ old_gid_cache = device->cache.gid_cache [port - start_port(device)];
+
+ device->cache.pkey_cache[port - start_port(device)] = pkey_cache;
+ device->cache.gid_cache [port - start_port(device)] = gid_cache;
+
+ write_unlock_irq(&device->cache.lock);
+
+ kfree(old_pkey_cache);
+ kfree(old_gid_cache);
+ kfree(tprops);
+ return;
+
+err:
+ kfree(pkey_cache);
+ kfree(gid_cache);
+ kfree(tprops);
+}
+
+static void ib_cache_task(void *work_ptr)
+{
+ struct ib_update_work *work = work_ptr;
+
+ ib_cache_update(work->device, work->port_num);
+ kfree(work);
+}
+
+static void ib_cache_event(struct ib_event_handler *handler,
+ struct ib_event *event)
+{
+ struct ib_update_work *work;
+
+ if (event->event == IB_EVENT_PORT_ERR ||
+ event->event == IB_EVENT_PORT_ACTIVE ||
+ event->event == IB_EVENT_LID_CHANGE ||
+ event->event == IB_EVENT_PKEY_CHANGE ||
+ event->event == IB_EVENT_SM_CHANGE) {
+ work = kmalloc(sizeof *work, GFP_ATOMIC);
+ if (work) {
+ INIT_WORK(&work->work, ib_cache_task, work);
+ work->device = event->device;
+ work->port_num = event->element.port_num;
+ schedule_work(&work->work);
+ }
+ }
+}
+
+static void ib_cache_setup_one(struct ib_device *device)
+{
+ int p;
+
+ rwlock_init(&device->cache.lock);
+
+ device->cache.pkey_cache =
+ kmalloc(sizeof *device->cache.pkey_cache *
+ (end_port(device) - start_port(device) + 1), GFP_KERNEL);
+ device->cache.gid_cache =
+ kmalloc(sizeof *device->cache.pkey_cache *
+ (end_port(device) - start_port(device) + 1), GFP_KERNEL);
+
+ if (!device->cache.pkey_cache || !device->cache.gid_cache) {
+ printk(KERN_WARNING "Couldn't allocate cache "
+ "for %s\n", device->name);
+ goto err;
+ }
+
+ for (p = 0; p <= end_port(device) - start_port(device); ++p) {
+ device->cache.pkey_cache[p] = NULL;
+ device->cache.gid_cache [p] = NULL;
+ ib_cache_update(device, p + start_port(device));
+ }
+
+ INIT_IB_EVENT_HANDLER(&device->cache.event_handler,
+ device, ib_cache_event);
+ if (ib_register_event_handler(&device->cache.event_handler))
+ goto err_cache;
+
+ return;
+
+err_cache:
+ for (p = 0; p <= end_port(device) - start_port(device); ++p) {
+ kfree(device->cache.pkey_cache[p]);
+ kfree(device->cache.gid_cache[p]);
+ }
+
+err:
+ kfree(device->cache.pkey_cache);
+ kfree(device->cache.gid_cache);
+}
+
+static void ib_cache_cleanup_one(struct ib_device *device)
+{
+ int p;
+
+ ib_unregister_event_handler(&device->cache.event_handler);
+ flush_scheduled_work();
+
+ for (p = 0; p <= end_port(device) - start_port(device); ++p) {
+ kfree(device->cache.pkey_cache[p]);
+ kfree(device->cache.gid_cache[p]);
+ }
+
+ kfree(device->cache.pkey_cache);
+ kfree(device->cache.gid_cache);
+}
+
+static struct ib_client cache_client = {
+ .name = "cache",
+ .add = ib_cache_setup_one,
+ .remove = ib_cache_cleanup_one
+};
+
+int __init ib_cache_setup(void)
+{
+ return ib_register_client(&cache_client);
+}
+
+void __exit ib_cache_cleanup(void)
+{
+ ib_unregister_client(&cache_client);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: device.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+
+#include <asm/semaphore.h>
+
+#include "core_priv.h"
+
+MODULE_AUTHOR("Roland Dreier");
+MODULE_DESCRIPTION("core kernel InfiniBand API");
+MODULE_LICENSE("Dual BSD/GPL");
+
+struct ib_client_data {
+ struct list_head list;
+ struct ib_client *client;
+ void * data;
+};
+
+static LIST_HEAD(device_list);
+static LIST_HEAD(client_list);
+
+/*
+ * device_sem protects access to both device_list and client_list.
+ * There's no real point to using multiple locks or something fancier
+ * like an rwsem: we always access both lists, and we're always
+ * modifying one list or the other.  In any case this is not a
+ * hot path so there's no point in trying to optimize.
+ */
+static DECLARE_MUTEX(device_sem);
+
+static int ib_device_check_mandatory(struct ib_device *device)
+{
+#define IB_MANDATORY_FUNC(x) { offsetof(struct ib_device, x), #x }
+ static const struct {
+ size_t offset;
+ char *name;
+ } mandatory_table[] = {
+ IB_MANDATORY_FUNC(query_device),
+ IB_MANDATORY_FUNC(query_port),
+ IB_MANDATORY_FUNC(query_pkey),
+ IB_MANDATORY_FUNC(query_gid),
+ IB_MANDATORY_FUNC(alloc_pd),
+ IB_MANDATORY_FUNC(dealloc_pd),
+ IB_MANDATORY_FUNC(create_ah),
+ IB_MANDATORY_FUNC(destroy_ah),
+ IB_MANDATORY_FUNC(create_qp),
+ IB_MANDATORY_FUNC(modify_qp),
+ IB_MANDATORY_FUNC(destroy_qp),
+ IB_MANDATORY_FUNC(post_send),
+ IB_MANDATORY_FUNC(post_recv),
+ IB_MANDATORY_FUNC(create_cq),
+ IB_MANDATORY_FUNC(destroy_cq),
+ IB_MANDATORY_FUNC(poll_cq),
+ IB_MANDATORY_FUNC(req_notify_cq),
+ IB_MANDATORY_FUNC(get_dma_mr),
+ IB_MANDATORY_FUNC(dereg_mr)
+ };
+ int i;
+
+ for (i = 0; i < sizeof mandatory_table / sizeof mandatory_table[0]; ++i) {
+ if (!*(void **) ((void *) device + mandatory_table[i].offset)) {
+ printk(KERN_WARNING "Device %s is missing mandatory function %s\n",
+ device->name, mandatory_table[i].name);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static struct ib_device *__ib_device_get_by_name(const char *name)
+{
+ struct ib_device *device;
+
+ list_for_each_entry(device, &device_list, core_list)
+ if (!strncmp(name, device->name, IB_DEVICE_NAME_MAX))
+ return device;
+
+ return NULL;
+}
+
+
+static int alloc_name(char *name)
+{
+ long *inuse;
+ char buf[IB_DEVICE_NAME_MAX];
+ struct ib_device *device;
+ int i;
+
+ inuse = (long *) get_zeroed_page(GFP_KERNEL);
+ if (!inuse)
+ return -ENOMEM;
+
+ list_for_each_entry(device, &device_list, core_list) {
+ if (!sscanf(device->name, name, &i))
+ continue;
+ if (i < 0 || i >= PAGE_SIZE * 8)
+ continue;
+ snprintf(buf, sizeof buf, name, i);
+ if (!strncmp(buf, device->name, IB_DEVICE_NAME_MAX))
+ set_bit(i, inuse);
+ }
+
+ i = find_first_zero_bit(inuse, PAGE_SIZE * 8);
+ free_page((unsigned long) inuse);
+ snprintf(buf, sizeof buf, name, i);
+
+ if (__ib_device_get_by_name(buf))
+ return -ENFILE;
+
+ strlcpy(name, buf, IB_DEVICE_NAME_MAX);
+ return 0;
+}
+
+/**
+ * ib_alloc_device - allocate an IB device struct
+ * @size:size of structure to allocate
+ *
+ * Low-level drivers should use ib_alloc_device() to allocate &struct
+ * ib_device. @size is the size of the structure to be allocated,
+ * including any private data used by the low-level driver.
+ * ib_dealloc_device() must be used to free structures allocated with
+ * ib_alloc_device().
+ */
+struct ib_device *ib_alloc_device(size_t size)
+{
+ void *dev;
+
+ BUG_ON(size < sizeof (struct ib_device));
+
+ dev = kmalloc(size, GFP_KERNEL);
+ if (!dev)
+ return NULL;
+
+ memset(dev, 0, size);
+
+ return dev;
+}
+EXPORT_SYMBOL(ib_alloc_device);
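+
+/*
+ * Illustrative use by a low-level driver ("mydrv" is a hypothetical name):
+ * the driver embeds struct ib_device as the first member of its private
+ * structure and allocates both with a single call:
+ *
+ *	struct mydrv_dev *mdev =
+ *		(struct mydrv_dev *) ib_alloc_device(sizeof *mdev);
+ */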
+
+/**
+ * ib_dealloc_device - free an IB device struct
+ * @device:structure to free
+ *
+ * Free a structure allocated with ib_alloc_device().
+ */
+void ib_dealloc_device(struct ib_device *device)
+{
+ if (device->reg_state == IB_DEV_UNINITIALIZED) {
+ kfree(device);
+ return;
+ }
+
+ BUG_ON(device->reg_state != IB_DEV_UNREGISTERED);
+
+ ib_device_unregister_sysfs(device);
+}
+EXPORT_SYMBOL(ib_dealloc_device);
+
+static int add_client_context(struct ib_device *device, struct ib_client *client)
+{
+ struct ib_client_data *context;
+ unsigned long flags;
+
+ context = kmalloc(sizeof *context, GFP_KERNEL);
+ if (!context) {
+ printk(KERN_WARNING "Couldn't allocate client context for %s/%s\n",
+ device->name, client->name);
+ return -ENOMEM;
+ }
+
+ context->client = client;
+ context->data = NULL;
+
+ spin_lock_irqsave(&device->client_data_lock, flags);
+ list_add(&context->list, &device->client_data_list);
+ spin_unlock_irqrestore(&device->client_data_lock, flags);
+
+ return 0;
+}
+
+/**
+ * ib_register_device - Register an IB device with IB core
+ * @device:Device to register
+ *
+ * Low-level drivers use ib_register_device() to register their
+ * devices with the IB core. All registered clients will receive a
+ * callback for each device that is added. @device must be allocated
+ * with ib_alloc_device().
+ */
+int ib_register_device(struct ib_device *device)
+{
+ int ret;
+
+ down(&device_sem);
+
+ if (strchr(device->name, '%')) {
+ ret = alloc_name(device->name);
+ if (ret)
+ goto out;
+ }
+
+ if (ib_device_check_mandatory(device)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&device->event_handler_list);
+ INIT_LIST_HEAD(&device->client_data_list);
+ spin_lock_init(&device->event_handler_lock);
+ spin_lock_init(&device->client_data_lock);
+
+ ret = ib_device_register_sysfs(device);
+ if (ret) {
+ printk(KERN_WARNING "Couldn't register device %s with driver model\n",
+ device->name);
+ goto out;
+ }
+
+ list_add_tail(&device->core_list, &device_list);
+
+ device->reg_state = IB_DEV_REGISTERED;
+
+ {
+ struct ib_client *client;
+
+ list_for_each_entry(client, &client_list, list)
+ if (client->add && !add_client_context(device, client))
+ client->add(device);
+ }
+
+ out:
+ up(&device_sem);
+ return ret;
+}
+EXPORT_SYMBOL(ib_register_device);
+
+/**
+ * ib_unregister_device - Unregister an IB device
+ * @device:Device to unregister
+ *
+ * Unregister an IB device. All clients will receive a remove callback.
+ */
+void ib_unregister_device(struct ib_device *device)
+{
+ struct ib_client *client;
+ struct ib_client_data *context, *tmp;
+ unsigned long flags;
+
+ down(&device_sem);
+
+ list_for_each_entry_reverse(client, &client_list, list)
+ if (client->remove)
+ client->remove(device);
+
+ list_del(&device->core_list);
+
+ up(&device_sem);
+
+ spin_lock_irqsave(&device->client_data_lock, flags);
+ list_for_each_entry_safe(context, tmp, &device->client_data_list, list)
+ kfree(context);
+ spin_unlock_irqrestore(&device->client_data_lock, flags);
+
+ device->reg_state = IB_DEV_UNREGISTERED;
+}
+EXPORT_SYMBOL(ib_unregister_device);
+
+/**
+ * ib_register_client - Register an IB client
+ * @client:Client to register
+ *
+ * Upper level users of the IB drivers can use ib_register_client() to
+ * register callbacks for IB device addition and removal. When an IB
+ * device is added, each registered client's add method will be called
+ * (in the order the clients were registered), and when a device is
+ * removed, each client's remove method will be called (in the reverse
+ * order that clients were registered). In addition, when
+ * ib_register_client() is called, the client will receive an add
+ * callback for all devices already registered.
+ */
+int ib_register_client(struct ib_client *client)
+{
+ struct ib_device *device;
+
+ down(&device_sem);
+
+ list_add_tail(&client->list, &client_list);
+ list_for_each_entry(device, &device_list, core_list)
+ if (client->add && !add_client_context(device, client))
+ client->add(device);
+
+ up(&device_sem);
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_register_client);
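+
+/*
+ * Illustrative registration (the names below are hypothetical; compare the
+ * "cache" client registered in cache.c):
+ *
+ *	static struct ib_client my_client = {
+ *		.name   = "my_client",
+ *		.add    = my_add_one,
+ *		.remove = my_remove_one
+ *	};
+ *
+ *	ib_register_client(&my_client);
+ */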
+
+/**
+ * ib_unregister_client - Unregister an IB client
+ * @client:Client to unregister
+ *
+ * Upper level users use ib_unregister_client() to remove their client
+ * registration. When ib_unregister_client() is called, the client
+ * will receive a remove callback for each IB device still registered.
+ */
+void ib_unregister_client(struct ib_client *client)
+{
+ struct ib_client_data *context, *tmp;
+ struct ib_device *device;
+ unsigned long flags;
+
+ down(&device_sem);
+
+ list_for_each_entry(device, &device_list, core_list) {
+ if (client->remove)
+ client->remove(device);
+
+ spin_lock_irqsave(&device->client_data_lock, flags);
+ list_for_each_entry_safe(context, tmp, &device->client_data_list, list)
+ if (context->client == client) {
+ list_del(&context->list);
+ kfree(context);
+ }
+ spin_unlock_irqrestore(&device->client_data_lock, flags);
+ }
+ list_del(&client->list);
+
+ up(&device_sem);
+}
+EXPORT_SYMBOL(ib_unregister_client);
+
+/**
+ * ib_get_client_data - Get IB client context
+ * @device:Device to get context for
+ * @client:Client to get context for
+ *
+ * ib_get_client_data() returns client context set with
+ * ib_set_client_data().
+ */
+void *ib_get_client_data(struct ib_device *device, struct ib_client *client)
+{
+ struct ib_client_data *context;
+ void *ret = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&device->client_data_lock, flags);
+ list_for_each_entry(context, &device->client_data_list, list)
+ if (context->client == client) {
+ ret = context->data;
+ break;
+ }
+ spin_unlock_irqrestore(&device->client_data_lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_get_client_data);
+
+/**
+ * ib_set_client_data - Set IB client context
+ * @device:Device to set context for
+ * @client:Client to set context for
+ * @data:Context to set
+ *
+ * ib_set_client_data() sets client context that can be retrieved with
+ * ib_get_client_data().
+ */
+void ib_set_client_data(struct ib_device *device, struct ib_client *client,
+ void *data)
+{
+ struct ib_client_data *context;
+ unsigned long flags;
+
+ spin_lock_irqsave(&device->client_data_lock, flags);
+ list_for_each_entry(context, &device->client_data_list, list)
+ if (context->client == client) {
+ context->data = data;
+ goto out;
+ }
+
+ printk(KERN_WARNING "No client context found for %s/%s\n",
+ device->name, client->name);
+
+out:
+ spin_unlock_irqrestore(&device->client_data_lock, flags);
+}
+EXPORT_SYMBOL(ib_set_client_data);
+
+/**
+ * ib_register_event_handler - Register an IB event handler
+ * @event_handler:Handler to register
+ *
+ * ib_register_event_handler() registers an event handler that will be
+ * called back when asynchronous IB events occur (as defined in
+ * chapter 11 of the InfiniBand Architecture Specification). This
+ * callback may occur in interrupt context.
+ */
+int ib_register_event_handler (struct ib_event_handler *event_handler)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&event_handler->device->event_handler_lock, flags);
+ list_add_tail(&event_handler->list,
+ &event_handler->device->event_handler_list);
+ spin_unlock_irqrestore(&event_handler->device->event_handler_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_register_event_handler);
+
+/**
+ * ib_unregister_event_handler - Unregister an event handler
+ * @event_handler:Handler to unregister
+ *
+ * Unregister an event handler registered with
+ * ib_register_event_handler().
+ */
+int ib_unregister_event_handler(struct ib_event_handler *event_handler)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&event_handler->device->event_handler_lock, flags);
+ list_del(&event_handler->list);
+ spin_unlock_irqrestore(&event_handler->device->event_handler_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_unregister_event_handler);
+
+/**
+ * ib_dispatch_event - Dispatch an asynchronous event
+ * @event:Event to dispatch
+ *
+ * Low-level drivers must call ib_dispatch_event() to dispatch the
+ * event to all registered event handlers when an asynchronous event
+ * occurs.
+ */
+void ib_dispatch_event(struct ib_event *event)
+{
+ unsigned long flags;
+ struct ib_event_handler *handler;
+
+ spin_lock_irqsave(&event->device->event_handler_lock, flags);
+
+ list_for_each_entry(handler, &event->device->event_handler_list, list)
+ handler->handler(handler, event);
+
+ spin_unlock_irqrestore(&event->device->event_handler_lock, flags);
+}
+EXPORT_SYMBOL(ib_dispatch_event);
+
+/**
+ * ib_query_device - Query IB device attributes
+ * @device:Device to query
+ * @device_attr:Device attributes
+ *
+ * ib_query_device() returns the attributes of a device through the
+ * @device_attr pointer.
+ */
+int ib_query_device(struct ib_device *device,
+ struct ib_device_attr *device_attr)
+{
+ return device->query_device(device, device_attr);
+}
+EXPORT_SYMBOL(ib_query_device);
+
+/**
+ * ib_query_port - Query IB port attributes
+ * @device:Device to query
+ * @port_num:Port number to query
+ * @port_attr:Port attributes
+ *
+ * ib_query_port() returns the attributes of a port through the
+ * @port_attr pointer.
+ */
+int ib_query_port(struct ib_device *device,
+ u8 port_num,
+ struct ib_port_attr *port_attr)
+{
+ return device->query_port(device, port_num, port_attr);
+}
+EXPORT_SYMBOL(ib_query_port);
+
+/**
+ * ib_query_gid - Get GID table entry
+ * @device:Device to query
+ * @port_num:Port number to query
+ * @index:GID table index to query
+ * @gid:Returned GID
+ *
+ * ib_query_gid() fetches the specified GID table entry.
+ */
+int ib_query_gid(struct ib_device *device,
+ u8 port_num, int index, union ib_gid *gid)
+{
+ return device->query_gid(device, port_num, index, gid);
+}
+EXPORT_SYMBOL(ib_query_gid);
+
+/**
+ * ib_query_pkey - Get P_Key table entry
+ * @device:Device to query
+ * @port_num:Port number to query
+ * @index:P_Key table index to query
+ * @pkey:Returned P_Key
+ *
+ * ib_query_pkey() fetches the specified P_Key table entry.
+ */
+int ib_query_pkey(struct ib_device *device,
+ u8 port_num, u16 index, u16 *pkey)
+{
+ return device->query_pkey(device, port_num, index, pkey);
+}
+EXPORT_SYMBOL(ib_query_pkey);
+
+/**
+ * ib_modify_device - Change IB device attributes
+ * @device:Device to modify
+ * @device_modify_mask:Mask of attributes to change
+ * @device_modify:New attribute values
+ *
+ * ib_modify_device() changes a device's attributes as specified by
+ * the @device_modify_mask and @device_modify structure.
+ */
+int ib_modify_device(struct ib_device *device,
+ int device_modify_mask,
+ struct ib_device_modify *device_modify)
+{
+ return device->modify_device(device, device_modify_mask,
+ device_modify);
+}
+EXPORT_SYMBOL(ib_modify_device);
+
+/**
+ * ib_modify_port - Modifies the attributes for the specified port.
+ * @device: The device to modify.
+ * @port_num: The number of the port to modify.
+ * @port_modify_mask: Mask used to specify which attributes of the port
+ * to change.
+ * @port_modify: New attribute values for the port.
+ *
+ * ib_modify_port() changes a port's attributes as specified by the
+ * @port_modify_mask and @port_modify structure.
+ */
+int ib_modify_port(struct ib_device *device,
+ u8 port_num, int port_modify_mask,
+ struct ib_port_modify *port_modify)
+{
+ return device->modify_port(device, port_num, port_modify_mask,
+ port_modify);
+}
+EXPORT_SYMBOL(ib_modify_port);
+
+static int __init ib_core_init(void)
+{
+ int ret;
+
+ ret = ib_sysfs_setup();
+ if (ret)
+ printk(KERN_WARNING "Couldn't create InfiniBand device class\n");
+
+ ret = ib_cache_setup();
+ if (ret) {
+ printk(KERN_WARNING "Couldn't set up InfiniBand P_Key/GID cache\n");
+ ib_sysfs_cleanup();
+ }
+
+ return ret;
+}
+
+static void __exit ib_core_cleanup(void)
+{
+ ib_cache_cleanup();
+ ib_sysfs_cleanup();
+}
+
+module_init(ib_core_init);
+module_exit(ib_core_cleanup);
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: fmr_pool.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/errno.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/jhash.h>
+#include <linux/kthread.h>
+
+#include <ib_fmr_pool.h>
+
+#include "core_priv.h"
+
+enum {
+ IB_FMR_MAX_REMAPS = 32,
+
+ IB_FMR_HASH_BITS = 8,
+ IB_FMR_HASH_SIZE = 1 << IB_FMR_HASH_BITS,
+ IB_FMR_HASH_MASK = IB_FMR_HASH_SIZE - 1
+};
+
+/*
+ * If an FMR is not in use, then the list member will point to either
+ * its pool's free_list (if the FMR can be mapped again; that is,
+ * remap_count < IB_FMR_MAX_REMAPS) or its pool's dirty_list (if the
+ * FMR needs to be unmapped before being remapped). In either of
+ * these cases it is a bug if the ref_count is not 0. In other words,
+ * if ref_count is > 0, then the list member must not be linked into
+ * either free_list or dirty_list.
+ *
+ * The cache_node member is used to link the FMR into a cache bucket
+ * (if caching is enabled). This is independent of the reference
+ * count of the FMR. When a valid FMR is released, its ref_count is
+ * decremented, and if ref_count reaches 0, the FMR is placed in
+ * either free_list or dirty_list as appropriate. However, it is not
+ * removed from the cache and may be "revived" if a call to
+ * ib_fmr_register_physical() occurs before the FMR is remapped. In
+ * this case we just increment the ref_count and remove the FMR from
+ * free_list/dirty_list.
+ *
+ * Before we remap an FMR from free_list, we remove it from the cache
+ * (to prevent another user from obtaining a stale FMR). When an FMR
+ * is released, we add it to the tail of the free list, so that our
+ * cache eviction policy is "least recently used."
+ *
+ * All manipulation of ref_count, list and cache_node is protected by
+ * pool_lock to maintain consistency.
+ */
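+
+/*
+ * A compact sketch of the lifecycle described above (nothing new, just the
+ * same rules restated):
+ *
+ *   map:     FMR taken off free_list, ref_count set to 1, linked into its
+ *            cache bucket (when caching is enabled)
+ *   release: ref_count is decremented; at 0 the FMR goes back on free_list
+ *            (still remappable) or on dirty_list once remap_count reaches
+ *            IB_FMR_MAX_REMAPS
+ *   flush:   dirty_list entries are unmapped in one ib_unmap_fmr() batch and
+ *            returned to free_list with remap_count reset to 0
+ */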
+
+struct ib_fmr_pool {
+ spinlock_t pool_lock;
+
+ int pool_size;
+ int max_pages;
+ int dirty_watermark;
+ int dirty_len;
+ struct list_head free_list;
+ struct list_head dirty_list;
+ struct hlist_head *cache_bucket;
+
+ void (*flush_function)(struct ib_fmr_pool *pool,
+ void * arg);
+ void *flush_arg;
+
+ struct task_struct *thread;
+
+ atomic_t req_ser;
+ atomic_t flush_ser;
+
+ wait_queue_head_t force_wait;
+};
+
+static inline u32 ib_fmr_hash(u64 first_page)
+{
+ return jhash_2words((u32) first_page,
+ (u32) (first_page >> 32),
+ 0);
+}
+
+/* Caller must hold pool_lock */
+static inline struct ib_pool_fmr *ib_fmr_cache_lookup(struct ib_fmr_pool *pool,
+ u64 *page_list,
+ int page_list_len,
+ u64 io_virtual_address)
+{
+ struct hlist_head *bucket;
+ struct ib_pool_fmr *fmr;
+ struct hlist_node *pos;
+
+ if (!pool->cache_bucket)
+ return NULL;
+
+ bucket = pool->cache_bucket + ib_fmr_hash(*page_list);
+
+ hlist_for_each_entry(fmr, pos, bucket, cache_node)
+ if (io_virtual_address == fmr->io_virtual_address &&
+ page_list_len == fmr->page_list_len &&
+ !memcmp(page_list, fmr->page_list,
+ page_list_len * sizeof *page_list))
+ return fmr;
+
+ return NULL;
+}
+
+static void ib_fmr_batch_release(struct ib_fmr_pool *pool)
+{
+ int ret;
+ struct ib_pool_fmr *fmr;
+ LIST_HEAD(unmap_list);
+ LIST_HEAD(fmr_list);
+
+ spin_lock_irq(&pool->pool_lock);
+
+ list_for_each_entry(fmr, &pool->dirty_list, list) {
+ hlist_del_init(&fmr->cache_node);
+ fmr->remap_count = 0;
+ list_add_tail(&fmr->fmr->list, &fmr_list);
+
+#ifdef DEBUG
+		if (fmr->ref_count != 0) {
+			printk(KERN_WARNING "Unmapping FMR %p with ref count %d",
+			       fmr, fmr->ref_count);
+ }
+#endif
+ }
+
+ list_splice(&pool->dirty_list, &unmap_list);
+ INIT_LIST_HEAD(&pool->dirty_list);
+ pool->dirty_len = 0;
+
+ spin_unlock_irq(&pool->pool_lock);
+
+ if (list_empty(&unmap_list)) {
+ return;
+ }
+
+ ret = ib_unmap_fmr(&fmr_list);
+ if (ret)
+ printk(KERN_WARNING "ib_unmap_fmr returned %d", ret);
+
+ spin_lock_irq(&pool->pool_lock);
+ list_splice(&unmap_list, &pool->free_list);
+ spin_unlock_irq(&pool->pool_lock);
+}
+
+static int ib_fmr_cleanup_thread(void *pool_ptr)
+{
+ struct ib_fmr_pool *pool = pool_ptr;
+
+ do {
+ if (pool->dirty_len >= pool->dirty_watermark ||
+ atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) < 0) {
+ ib_fmr_batch_release(pool);
+
+ atomic_inc(&pool->flush_ser);
+ wake_up_interruptible(&pool->force_wait);
+
+ if (pool->flush_function)
+ pool->flush_function(pool, pool->flush_arg);
+ }
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (pool->dirty_len < pool->dirty_watermark &&
+ atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) >= 0 &&
+ !kthread_should_stop())
+ schedule();
+ __set_current_state(TASK_RUNNING);
+ } while (!kthread_should_stop());
+
+ return 0;
+}
+
+/**
+ * ib_create_fmr_pool - Create an FMR pool
+ * @pd:Protection domain for FMRs
+ * @params:FMR pool parameters
+ *
+ * Create a pool of FMRs. Return value is pointer to new pool or
+ * error code if creation failed.
+ */
+struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd,
+ struct ib_fmr_pool_param *params)
+{
+ struct ib_device *device;
+ struct ib_fmr_pool *pool;
+ int i;
+ int ret;
+
+ if (!params)
+ return ERR_PTR(-EINVAL);
+
+ device = pd->device;
+ if (!device->alloc_fmr || !device->dealloc_fmr ||
+ !device->map_phys_fmr || !device->unmap_fmr) {
+ printk(KERN_WARNING "Device %s does not support fast memory regions",
+ device->name);
+ return ERR_PTR(-ENOSYS);
+ }
+
+ pool = kmalloc(sizeof *pool, GFP_KERNEL);
+ if (!pool) {
+ printk(KERN_WARNING "couldn't allocate pool struct");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ pool->cache_bucket = NULL;
+
+ pool->flush_function = params->flush_function;
+ pool->flush_arg = params->flush_arg;
+
+ INIT_LIST_HEAD(&pool->free_list);
+ INIT_LIST_HEAD(&pool->dirty_list);
+
+ if (params->cache) {
+ pool->cache_bucket =
+ kmalloc(IB_FMR_HASH_SIZE * sizeof *pool->cache_bucket,
+ GFP_KERNEL);
+ if (!pool->cache_bucket) {
+ printk(KERN_WARNING "Failed to allocate cache in pool");
+ ret = -ENOMEM;
+ goto out_free_pool;
+ }
+
+ for (i = 0; i < IB_FMR_HASH_SIZE; ++i)
+ INIT_HLIST_HEAD(pool->cache_bucket + i);
+ }
+
+ pool->pool_size = 0;
+ pool->max_pages = params->max_pages_per_fmr;
+ pool->dirty_watermark = params->dirty_watermark;
+ pool->dirty_len = 0;
+ spin_lock_init(&pool->pool_lock);
+ atomic_set(&pool->req_ser, 0);
+ atomic_set(&pool->flush_ser, 0);
+ init_waitqueue_head(&pool->force_wait);
+
+ pool->thread = kthread_create(ib_fmr_cleanup_thread,
+ pool,
+ "ib_fmr(%s)",
+ device->name);
+ if (IS_ERR(pool->thread)) {
+ printk(KERN_WARNING "couldn't start cleanup thread");
+ ret = PTR_ERR(pool->thread);
+ goto out_free_pool;
+ }
+
+ {
+ struct ib_pool_fmr *fmr;
+ struct ib_fmr_attr attr = {
+ .max_pages = params->max_pages_per_fmr,
+ .max_maps = IB_FMR_MAX_REMAPS,
+ .page_size = PAGE_SHIFT
+ };
+
+ for (i = 0; i < params->pool_size; ++i) {
+ fmr = kmalloc(sizeof *fmr + params->max_pages_per_fmr * sizeof (u64),
+ GFP_KERNEL);
+ if (!fmr) {
+ printk(KERN_WARNING "failed to allocate fmr struct "
+ "for FMR %d", i);
+ goto out_fail;
+ }
+
+ fmr->pool = pool;
+ fmr->remap_count = 0;
+ fmr->ref_count = 0;
+ INIT_HLIST_NODE(&fmr->cache_node);
+
+ fmr->fmr = ib_alloc_fmr(pd, params->access, &attr);
+ if (IS_ERR(fmr->fmr)) {
+ printk(KERN_WARNING "fmr_create failed for FMR %d", i);
+ kfree(fmr);
+ goto out_fail;
+ }
+
+ list_add_tail(&fmr->list, &pool->free_list);
+ ++pool->pool_size;
+ }
+ }
+
+ return pool;
+
+ out_free_pool:
+ kfree(pool->cache_bucket);
+ kfree(pool);
+
+ return ERR_PTR(ret);
+
+ out_fail:
+ ib_destroy_fmr_pool(pool);
+
+ return ERR_PTR(-ENOMEM);
+}
+EXPORT_SYMBOL(ib_create_fmr_pool);
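+
+/*
+ * Illustrative pool creation sketch (the field values below are arbitrary
+ * examples, not tuned recommendations; "pd" is assumed to be a protection
+ * domain the caller already owns):
+ *
+ *	struct ib_fmr_pool_param params = {
+ *		.max_pages_per_fmr = 64,
+ *		.pool_size         = 32,
+ *		.dirty_watermark   = 8,
+ *		.cache             = 1,
+ *		.access            = IB_ACCESS_LOCAL_WRITE,
+ *	};
+ *	struct ib_fmr_pool *pool = ib_create_fmr_pool(pd, &params);
+ *
+ *	if (IS_ERR(pool))
+ *		return PTR_ERR(pool);
+ */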
+
+/**
+ * ib_destroy_fmr_pool - Free FMR pool
+ * @pool:FMR pool to free
+ *
+ * Destroy an FMR pool and free all associated resources.
+ */
+int ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
+{
+ struct ib_pool_fmr *fmr;
+ struct ib_pool_fmr *tmp;
+ int i;
+
+ kthread_stop(pool->thread);
+ ib_fmr_batch_release(pool);
+
+ i = 0;
+ list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) {
+ ib_dealloc_fmr(fmr->fmr);
+ list_del(&fmr->list);
+ kfree(fmr);
+ ++i;
+ }
+
+ if (i < pool->pool_size)
+ printk(KERN_WARNING "pool still has %d regions registered",
+ pool->pool_size - i);
+
+ kfree(pool->cache_bucket);
+ kfree(pool);
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_destroy_fmr_pool);
+
+/**
+ * ib_flush_fmr_pool - Invalidate all unmapped FMRs
+ * @pool:FMR pool to flush
+ *
+ * Ensure that all unmapped FMRs are fully invalidated.
+ */
+int ib_flush_fmr_pool(struct ib_fmr_pool *pool)
+{
+ int serial;
+
+ atomic_inc(&pool->req_ser);
+ /*
+ * It's OK if someone else bumps req_ser again here -- we'll
+ * just wait a little longer.
+ */
+ serial = atomic_read(&pool->req_ser);
+
+ wake_up_process(pool->thread);
+
+ if (wait_event_interruptible(pool->force_wait,
+ atomic_read(&pool->flush_ser) -
+ atomic_read(&pool->req_ser) >= 0))
+ return -EINTR;
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_flush_fmr_pool);
+
+/**
+ * ib_fmr_pool_map_phys - Map an FMR from an FMR pool
+ * @pool_handle:FMR pool to allocate FMR from
+ * @page_list:List of pages to map
+ * @list_len:Number of pages in @page_list
+ * @io_virtual_address:I/O virtual address for new FMR
+ *
+ * Map an FMR from an FMR pool.
+ */
+struct ib_pool_fmr *ib_fmr_pool_map_phys(struct ib_fmr_pool *pool_handle,
+ u64 *page_list,
+ int list_len,
+ u64 *io_virtual_address)
+{
+ struct ib_fmr_pool *pool = pool_handle;
+ struct ib_pool_fmr *fmr;
+ unsigned long flags;
+ int result;
+
+ if (list_len < 1 || list_len > pool->max_pages)
+ return ERR_PTR(-EINVAL);
+
+ spin_lock_irqsave(&pool->pool_lock, flags);
+ fmr = ib_fmr_cache_lookup(pool,
+ page_list,
+ list_len,
+ *io_virtual_address);
+ if (fmr) {
+ /* found in cache */
+ ++fmr->ref_count;
+ if (fmr->ref_count == 1) {
+ list_del(&fmr->list);
+ }
+
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+
+ return fmr;
+ }
+
+ if (list_empty(&pool->free_list)) {
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+ return ERR_PTR(-EAGAIN);
+ }
+
+ fmr = list_entry(pool->free_list.next, struct ib_pool_fmr, list);
+ list_del(&fmr->list);
+ hlist_del_init(&fmr->cache_node);
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+
+ result = ib_map_phys_fmr(fmr->fmr, page_list, list_len,
+ *io_virtual_address);
+
+ if (result) {
+ spin_lock_irqsave(&pool->pool_lock, flags);
+ list_add(&fmr->list, &pool->free_list);
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+
+ printk(KERN_WARNING "fmr_map returns %d",
+ result);
+
+ return ERR_PTR(result);
+ }
+
+ ++fmr->remap_count;
+ fmr->ref_count = 1;
+
+ if (pool->cache_bucket) {
+ fmr->io_virtual_address = *io_virtual_address;
+ fmr->page_list_len = list_len;
+ memcpy(fmr->page_list, page_list, list_len * sizeof(*page_list));
+
+ spin_lock_irqsave(&pool->pool_lock, flags);
+ hlist_add_head(&fmr->cache_node,
+ pool->cache_bucket + ib_fmr_hash(fmr->page_list[0]));
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+ }
+
+ return fmr;
+}
+EXPORT_SYMBOL(ib_fmr_pool_map_phys);
+
+/**
+ * ib_fmr_pool_unmap - Unmap FMR
+ * @fmr:FMR to unmap
+ *
+ * Unmap an FMR. The FMR mapping may remain valid until the FMR is
+ * reused (or until ib_flush_fmr_pool() is called).
+ */
+int ib_fmr_pool_unmap(struct ib_pool_fmr *fmr)
+{
+ struct ib_fmr_pool *pool;
+ unsigned long flags;
+
+ pool = fmr->pool;
+
+ spin_lock_irqsave(&pool->pool_lock, flags);
+
+ --fmr->ref_count;
+ if (!fmr->ref_count) {
+ if (fmr->remap_count < IB_FMR_MAX_REMAPS) {
+ list_add_tail(&fmr->list, &pool->free_list);
+ } else {
+ list_add_tail(&fmr->list, &pool->dirty_list);
+ ++pool->dirty_len;
+ wake_up_process(pool->thread);
+ }
+ }
+
+#ifdef DEBUG
+ if (fmr->ref_count < 0)
+ printk(KERN_WARNING "FMR %p has ref count %d < 0",
+ fmr, fmr->ref_count);
+#endif
+
+ spin_unlock_irqrestore(&pool->pool_lock, flags);
+
+ return 0;
+}
+EXPORT_SYMBOL(ib_fmr_pool_unmap);
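+
+/*
+ * Illustrative map/unmap cycle ("pool", "page_list", "npages" and the
+ * starting I/O virtual address are assumed to be prepared by the caller;
+ * how the lkey/rkey are used depends on the consumer):
+ *
+ *	u64 io_addr = ...;	/* caller-chosen I/O virtual address */
+ *	struct ib_pool_fmr *pfmr;
+ *
+ *	pfmr = ib_fmr_pool_map_phys(pool, page_list, npages, &io_addr);
+ *	if (IS_ERR(pfmr))
+ *		return PTR_ERR(pfmr);
+ *
+ *	... post work requests using pfmr->fmr->lkey / pfmr->fmr->rkey ...
+ *
+ *	ib_fmr_pool_unmap(pfmr);
+ */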
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Voltaire, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mad.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+
+#include <ib_mad.h>
+
+#include "mad_priv.h"
+#include "smi.h"
+#include "agent.h"
+
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_DESCRIPTION("kernel IB MAD API");
+MODULE_AUTHOR("Hal Rosenstock");
+MODULE_AUTHOR("Sean Hefty");
+
+
+kmem_cache_t *ib_mad_cache;
+static struct list_head ib_mad_port_list;
+static u32 ib_mad_client_id = 0;
+
+/* Port list lock */
+static spinlock_t ib_mad_port_list_lock;
+
+
+/* Forward declarations */
+static int method_in_use(struct ib_mad_mgmt_method_table **method,
+ struct ib_mad_reg_req *mad_reg_req);
+static void remove_mad_reg_req(struct ib_mad_agent_private *priv);
+static struct ib_mad_agent_private *find_mad_agent(
+ struct ib_mad_port_private *port_priv,
+ struct ib_mad *mad, int solicited);
+static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_private *mad);
+static void cancel_mads(struct ib_mad_agent_private *mad_agent_priv);
+static void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
+ struct ib_mad_send_wc *mad_send_wc);
+static void timeout_sends(void *data);
+static void local_completions(void *data);
+static int solicited_mad(struct ib_mad *mad);
+static int add_nonoui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ struct ib_mad_agent_private *agent_priv,
+ u8 mgmt_class);
+static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ struct ib_mad_agent_private *agent_priv);
+
+/*
+ * Returns an ib_mad_port_private structure or NULL for a device/port
+ * Assumes ib_mad_port_list_lock is being held
+ */
+static inline struct ib_mad_port_private *
+__ib_get_mad_port(struct ib_device *device, int port_num)
+{
+ struct ib_mad_port_private *entry;
+
+ list_for_each_entry(entry, &ib_mad_port_list, port_list) {
+ if (entry->device == device && entry->port_num == port_num)
+ return entry;
+ }
+ return NULL;
+}
+
+/*
+ * Wrapper function to return an ib_mad_port_private structure or NULL
+ * for a device/port
+ */
+static inline struct ib_mad_port_private *
+ib_get_mad_port(struct ib_device *device, int port_num)
+{
+ struct ib_mad_port_private *entry;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ib_mad_port_list_lock, flags);
+ entry = __ib_get_mad_port(device, port_num);
+ spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
+
+ return entry;
+}
+
+static inline u8 convert_mgmt_class(u8 mgmt_class)
+{
+ /* Alias IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE to 0 */
+ return mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE ?
+ 0 : mgmt_class;
+}
+
+static int get_spl_qp_index(enum ib_qp_type qp_type)
+{
+ switch (qp_type)
+ {
+ case IB_QPT_SMI:
+ return 0;
+ case IB_QPT_GSI:
+ return 1;
+ default:
+ return -1;
+ }
+}
+
+static int vendor_class_index(u8 mgmt_class)
+{
+ return mgmt_class - IB_MGMT_CLASS_VENDOR_RANGE2_START;
+}
+
+static int is_vendor_class(u8 mgmt_class)
+{
+ if ((mgmt_class < IB_MGMT_CLASS_VENDOR_RANGE2_START) ||
+ (mgmt_class > IB_MGMT_CLASS_VENDOR_RANGE2_END))
+ return 0;
+ return 1;
+}
+
+static int is_vendor_oui(char *oui)
+{
+ if (oui[0] || oui[1] || oui[2])
+ return 1;
+ return 0;
+}
+
+static int is_vendor_method_in_use(
+ struct ib_mad_mgmt_vendor_class *vendor_class,
+ struct ib_mad_reg_req *mad_reg_req)
+{
+ struct ib_mad_mgmt_method_table *method;
+ int i;
+
+ for (i = 0; i < MAX_MGMT_OUI; i++) {
+ if (!memcmp(vendor_class->oui[i], mad_reg_req->oui, 3)) {
+ method = vendor_class->method_table[i];
+ if (method) {
+ if (method_in_use(&method, mad_reg_req))
+ return 1;
+ else
+ break;
+ }
+ }
+ }
+ return 0;
+}
+
+/*
+ * ib_register_mad_agent - Register to send/receive MADs
+ */
+struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
+ u8 port_num,
+ enum ib_qp_type qp_type,
+ struct ib_mad_reg_req *mad_reg_req,
+ u8 rmpp_version,
+ ib_mad_send_handler send_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_agent *ret = ERR_PTR(-EINVAL);
+ struct ib_mad_agent_private *mad_agent_priv;
+ struct ib_mad_reg_req *reg_req = NULL;
+ struct ib_mad_mgmt_class_table *class;
+ struct ib_mad_mgmt_vendor_class_table *vendor;
+ struct ib_mad_mgmt_vendor_class *vendor_class;
+ struct ib_mad_mgmt_method_table *method;
+ int ret2, qpn;
+ unsigned long flags;
+ u8 mgmt_class, vclass;
+
+ /* Validate parameters */
+ qpn = get_spl_qp_index(qp_type);
+ if (qpn == -1)
+ goto error1;
+
+ if (rmpp_version)
+ goto error1; /* XXX: until RMPP implemented */
+
+ /* Validate MAD registration request if supplied */
+ if (mad_reg_req) {
+ if (mad_reg_req->mgmt_class_version >= MAX_MGMT_VERSION)
+ goto error1;
+ if (!recv_handler)
+ goto error1;
+ if (mad_reg_req->mgmt_class >= MAX_MGMT_CLASS) {
+ /*
+ * IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE is the only
+ * one in this range currently allowed
+ */
+ if (mad_reg_req->mgmt_class !=
+ IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ goto error1;
+ } else if (mad_reg_req->mgmt_class == 0) {
+ /*
+ * Class 0 is reserved in IBA and is used for
+ * aliasing of IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE
+ */
+ goto error1;
+ } else if (is_vendor_class(mad_reg_req->mgmt_class)) {
+ /*
+ * If class is in "new" vendor range,
+ * ensure supplied OUI is not zero
+ */
+ if (!is_vendor_oui(mad_reg_req->oui))
+ goto error1;
+ }
+ /* Make sure class supplied is consistent with QP type */
+ if (qp_type == IB_QPT_SMI) {
+ if ((mad_reg_req->mgmt_class !=
+ IB_MGMT_CLASS_SUBN_LID_ROUTED) &&
+ (mad_reg_req->mgmt_class !=
+ IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE))
+ goto error1;
+ } else {
+ if ((mad_reg_req->mgmt_class ==
+ IB_MGMT_CLASS_SUBN_LID_ROUTED) ||
+ (mad_reg_req->mgmt_class ==
+ IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE))
+ goto error1;
+ }
+ } else {
+ /* No registration request supplied */
+ if (!send_handler)
+ goto error1;
+ }
+
+ /* Validate device and port */
+ port_priv = ib_get_mad_port(device, port_num);
+ if (!port_priv) {
+ ret = ERR_PTR(-ENODEV);
+ goto error1;
+ }
+
+ /* Allocate structures */
+ mad_agent_priv = kmalloc(sizeof *mad_agent_priv, GFP_KERNEL);
+ if (!mad_agent_priv) {
+ ret = ERR_PTR(-ENOMEM);
+ goto error1;
+ }
+
+ if (mad_reg_req) {
+ reg_req = kmalloc(sizeof *reg_req, GFP_KERNEL);
+ if (!reg_req) {
+ ret = ERR_PTR(-ENOMEM);
+ goto error2;
+ }
+ /* Make a copy of the MAD registration request */
+ memcpy(reg_req, mad_reg_req, sizeof *reg_req);
+ }
+
+ /* Now, fill in the various structures */
+ memset(mad_agent_priv, 0, sizeof *mad_agent_priv);
+ mad_agent_priv->qp_info = &port_priv->qp_info[qpn];
+ mad_agent_priv->reg_req = reg_req;
+ mad_agent_priv->rmpp_version = rmpp_version;
+ mad_agent_priv->agent.device = device;
+ mad_agent_priv->agent.recv_handler = recv_handler;
+ mad_agent_priv->agent.send_handler = send_handler;
+ mad_agent_priv->agent.context = context;
+ mad_agent_priv->agent.qp = port_priv->qp_info[qpn].qp;
+ mad_agent_priv->agent.port_num = port_num;
+
+ spin_lock_irqsave(&port_priv->reg_lock, flags);
+ mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;
+
+ /*
+ * Make sure MAD registration (if supplied)
+ * is non overlapping with any existing ones
+ */
+ if (mad_reg_req) {
+ mgmt_class = convert_mgmt_class(mad_reg_req->mgmt_class);
+ if (!is_vendor_class(mgmt_class)) {
+ class = port_priv->version[mad_reg_req->
+ mgmt_class_version].class;
+ if (class) {
+ method = class->method_table[mgmt_class];
+ if (method) {
+ if (method_in_use(&method,
+ mad_reg_req))
+ goto error3;
+ }
+ }
+ ret2 = add_nonoui_reg_req(mad_reg_req, mad_agent_priv,
+ mgmt_class);
+ } else {
+ /* "New" vendor class range */
+ vendor = port_priv->version[mad_reg_req->
+ mgmt_class_version].vendor;
+ if (vendor) {
+ vclass = vendor_class_index(mgmt_class);
+ vendor_class = vendor->vendor_class[vclass];
+ if (vendor_class) {
+ if (is_vendor_method_in_use(
+ vendor_class,
+ mad_reg_req))
+ goto error3;
+ }
+ }
+ ret2 = add_oui_reg_req(mad_reg_req, mad_agent_priv);
+ }
+ if (ret2) {
+ ret = ERR_PTR(ret2);
+ goto error3;
+ }
+ }
+
+ /* Add mad agent into port's agent list */
+ list_add_tail(&mad_agent_priv->agent_list, &port_priv->agent_list);
+ spin_unlock_irqrestore(&port_priv->reg_lock, flags);
+
+ spin_lock_init(&mad_agent_priv->lock);
+ INIT_LIST_HEAD(&mad_agent_priv->send_list);
+ INIT_LIST_HEAD(&mad_agent_priv->wait_list);
+ INIT_WORK(&mad_agent_priv->timed_work, timeout_sends, mad_agent_priv);
+ INIT_LIST_HEAD(&mad_agent_priv->local_list);
+ INIT_WORK(&mad_agent_priv->local_work, local_completions,
+ mad_agent_priv);
+ atomic_set(&mad_agent_priv->refcount, 1);
+ init_waitqueue_head(&mad_agent_priv->wait);
+
+ return &mad_agent_priv->agent;
+
+error3:
+ spin_unlock_irqrestore(&port_priv->reg_lock, flags);
+ kfree(reg_req);
+error2:
+ kfree(mad_agent_priv);
+error1:
+ return ret;
+}
+EXPORT_SYMBOL(ib_register_mad_agent);
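+
+/*
+ * Illustrative registration sketch (the management class, class version and
+ * method are examples only; "device", "port_num", the handler callbacks and
+ * "context" are assumed to be supplied by the caller):
+ *
+ *	struct ib_mad_reg_req reg_req = {
+ *		.mgmt_class         = IB_MGMT_CLASS_PERF_MGMT,
+ *		.mgmt_class_version = 1,
+ *	};
+ *	struct ib_mad_agent *agent;
+ *
+ *	set_bit(IB_MGMT_METHOD_GET, reg_req.method_mask);
+ *	agent = ib_register_mad_agent(device, port_num, IB_QPT_GSI,
+ *				      &reg_req, 0, send_handler,
+ *				      recv_handler, context);
+ *	if (IS_ERR(agent))
+ *		return PTR_ERR(agent);
+ */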
+
+static inline int is_snooping_sends(int mad_snoop_flags)
+{
+ return (mad_snoop_flags &
+ (/*IB_MAD_SNOOP_POSTED_SENDS |
+ IB_MAD_SNOOP_RMPP_SENDS |*/
+ IB_MAD_SNOOP_SEND_COMPLETIONS /*|
+ IB_MAD_SNOOP_RMPP_SEND_COMPLETIONS*/));
+}
+
+static inline int is_snooping_recvs(int mad_snoop_flags)
+{
+ return (mad_snoop_flags &
+ (IB_MAD_SNOOP_RECVS /*|
+ IB_MAD_SNOOP_RMPP_RECVS*/));
+}
+
+static int register_snoop_agent(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_snoop_private *mad_snoop_priv)
+{
+ struct ib_mad_snoop_private **new_snoop_table;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ /* Check for empty slot in array. */
+ for (i = 0; i < qp_info->snoop_table_size; i++)
+ if (!qp_info->snoop_table[i])
+ break;
+
+ if (i == qp_info->snoop_table_size) {
+ /* Grow table. */
+		new_snoop_table = kmalloc(sizeof mad_snoop_priv *
+					  (qp_info->snoop_table_size + 1),
+					  GFP_ATOMIC);
+ if (!new_snoop_table) {
+ i = -ENOMEM;
+ goto out;
+ }
+ if (qp_info->snoop_table) {
+ memcpy(new_snoop_table, qp_info->snoop_table,
+ sizeof mad_snoop_priv *
+ qp_info->snoop_table_size);
+ kfree(qp_info->snoop_table);
+ }
+ qp_info->snoop_table = new_snoop_table;
+ qp_info->snoop_table_size++;
+ }
+ qp_info->snoop_table[i] = mad_snoop_priv;
+ atomic_inc(&qp_info->snoop_count);
+out:
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+ return i;
+}
+
+struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
+ u8 port_num,
+ enum ib_qp_type qp_type,
+ int mad_snoop_flags,
+ ib_mad_snoop_handler snoop_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_agent *ret;
+ struct ib_mad_snoop_private *mad_snoop_priv;
+ int qpn;
+
+ /* Validate parameters */
+ if ((is_snooping_sends(mad_snoop_flags) && !snoop_handler) ||
+ (is_snooping_recvs(mad_snoop_flags) && !recv_handler)) {
+ ret = ERR_PTR(-EINVAL);
+ goto error1;
+ }
+ qpn = get_spl_qp_index(qp_type);
+ if (qpn == -1) {
+ ret = ERR_PTR(-EINVAL);
+ goto error1;
+ }
+ port_priv = ib_get_mad_port(device, port_num);
+ if (!port_priv) {
+ ret = ERR_PTR(-ENODEV);
+ goto error1;
+ }
+ /* Allocate structures */
+ mad_snoop_priv = kmalloc(sizeof *mad_snoop_priv, GFP_KERNEL);
+ if (!mad_snoop_priv) {
+ ret = ERR_PTR(-ENOMEM);
+ goto error1;
+ }
+
+ /* Now, fill in the various structures */
+ memset(mad_snoop_priv, 0, sizeof *mad_snoop_priv);
+ mad_snoop_priv->qp_info = &port_priv->qp_info[qpn];
+ mad_snoop_priv->agent.device = device;
+ mad_snoop_priv->agent.recv_handler = recv_handler;
+ mad_snoop_priv->agent.snoop_handler = snoop_handler;
+ mad_snoop_priv->agent.context = context;
+ mad_snoop_priv->agent.qp = port_priv->qp_info[qpn].qp;
+ mad_snoop_priv->agent.port_num = port_num;
+ mad_snoop_priv->mad_snoop_flags = mad_snoop_flags;
+ init_waitqueue_head(&mad_snoop_priv->wait);
+ mad_snoop_priv->snoop_index = register_snoop_agent(
+ &port_priv->qp_info[qpn],
+ mad_snoop_priv);
+ if (mad_snoop_priv->snoop_index < 0) {
+ ret = ERR_PTR(mad_snoop_priv->snoop_index);
+ goto error2;
+ }
+
+ atomic_set(&mad_snoop_priv->refcount, 1);
+ return &mad_snoop_priv->agent;
+
+error2:
+ kfree(mad_snoop_priv);
+error1:
+ return ret;
+}
+EXPORT_SYMBOL(ib_register_mad_snoop);
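+
+/*
+ * Illustrative snoop registration (the flag combination is just an example;
+ * "device", "port_num", the handlers and "context" come from the caller):
+ *
+ *	snoop_agent = ib_register_mad_snoop(device, port_num, IB_QPT_GSI,
+ *					    IB_MAD_SNOOP_SEND_COMPLETIONS |
+ *					    IB_MAD_SNOOP_RECVS,
+ *					    snoop_handler, recv_handler,
+ *					    context);
+ */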
+
+static void unregister_mad_agent(struct ib_mad_agent_private *mad_agent_priv)
+{
+ struct ib_mad_port_private *port_priv;
+ unsigned long flags;
+
+ /* Note that we could still be handling received MADs */
+
+ /*
+ * Canceling all sends results in dropping received response
+ * MADs, preventing us from queuing additional work
+ */
+ cancel_mads(mad_agent_priv);
+
+ port_priv = mad_agent_priv->qp_info->port_priv;
+ cancel_delayed_work(&mad_agent_priv->timed_work);
+ flush_workqueue(port_priv->wq);
+
+ spin_lock_irqsave(&port_priv->reg_lock, flags);
+ remove_mad_reg_req(mad_agent_priv);
+ list_del(&mad_agent_priv->agent_list);
+ spin_unlock_irqrestore(&port_priv->reg_lock, flags);
+
+ /* XXX: Cleanup pending RMPP receives for this agent */
+
+ atomic_dec(&mad_agent_priv->refcount);
+ wait_event(mad_agent_priv->wait,
+ !atomic_read(&mad_agent_priv->refcount));
+
+	kfree(mad_agent_priv->reg_req);
+ kfree(mad_agent_priv);
+}
+
+static void unregister_mad_snoop(struct ib_mad_snoop_private *mad_snoop_priv)
+{
+ struct ib_mad_qp_info *qp_info;
+ unsigned long flags;
+
+ qp_info = mad_snoop_priv->qp_info;
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ qp_info->snoop_table[mad_snoop_priv->snoop_index] = NULL;
+ atomic_dec(&qp_info->snoop_count);
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+
+ atomic_dec(&mad_snoop_priv->refcount);
+ wait_event(mad_snoop_priv->wait,
+ !atomic_read(&mad_snoop_priv->refcount));
+
+ kfree(mad_snoop_priv);
+}
+
+/*
+ * ib_unregister_mad_agent - Unregisters a client from using MAD services
+ */
+int ib_unregister_mad_agent(struct ib_mad_agent *mad_agent)
+{
+ struct ib_mad_agent_private *mad_agent_priv;
+ struct ib_mad_snoop_private *mad_snoop_priv;
+
+ /* If the TID is zero, the agent can only snoop. */
+ if (mad_agent->hi_tid) {
+ mad_agent_priv = container_of(mad_agent,
+ struct ib_mad_agent_private,
+ agent);
+ unregister_mad_agent(mad_agent_priv);
+ } else {
+ mad_snoop_priv = container_of(mad_agent,
+ struct ib_mad_snoop_private,
+ agent);
+ unregister_mad_snoop(mad_snoop_priv);
+ }
+ return 0;
+}
+EXPORT_SYMBOL(ib_unregister_mad_agent);
+
+static void dequeue_mad(struct ib_mad_list_head *mad_list)
+{
+ struct ib_mad_queue *mad_queue;
+ unsigned long flags;
+
+ BUG_ON(!mad_list->mad_queue);
+ mad_queue = mad_list->mad_queue;
+ spin_lock_irqsave(&mad_queue->lock, flags);
+ list_del(&mad_list->list);
+ mad_queue->count--;
+ spin_unlock_irqrestore(&mad_queue->lock, flags);
+}
+
+static void snoop_send(struct ib_mad_qp_info *qp_info,
+ struct ib_send_wr *send_wr,
+ struct ib_mad_send_wc *mad_send_wc,
+ int mad_snoop_flags)
+{
+ struct ib_mad_snoop_private *mad_snoop_priv;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ for (i = 0; i < qp_info->snoop_table_size; i++) {
+ mad_snoop_priv = qp_info->snoop_table[i];
+ if (!mad_snoop_priv ||
+ !(mad_snoop_priv->mad_snoop_flags & mad_snoop_flags))
+ continue;
+
+ atomic_inc(&mad_snoop_priv->refcount);
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+ mad_snoop_priv->agent.snoop_handler(&mad_snoop_priv->agent,
+ send_wr, mad_send_wc);
+ if (atomic_dec_and_test(&mad_snoop_priv->refcount))
+ wake_up(&mad_snoop_priv->wait);
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ }
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+}
+
+static void snoop_recv(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_recv_wc *mad_recv_wc,
+ int mad_snoop_flags)
+{
+ struct ib_mad_snoop_private *mad_snoop_priv;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ for (i = 0; i < qp_info->snoop_table_size; i++) {
+ mad_snoop_priv = qp_info->snoop_table[i];
+ if (!mad_snoop_priv ||
+ !(mad_snoop_priv->mad_snoop_flags & mad_snoop_flags))
+ continue;
+
+ atomic_inc(&mad_snoop_priv->refcount);
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+ mad_snoop_priv->agent.recv_handler(&mad_snoop_priv->agent,
+ mad_recv_wc);
+ if (atomic_dec_and_test(&mad_snoop_priv->refcount))
+ wake_up(&mad_snoop_priv->wait);
+ spin_lock_irqsave(&qp_info->snoop_lock, flags);
+ }
+ spin_unlock_irqrestore(&qp_info->snoop_lock, flags);
+}
+
+static void build_smp_wc(u64 wr_id, u16 slid, u16 pkey_index, u8 port_num,
+ struct ib_wc *wc)
+{
+ memset(wc, 0, sizeof *wc);
+ wc->wr_id = wr_id;
+ wc->status = IB_WC_SUCCESS;
+ wc->opcode = IB_WC_RECV;
+ wc->pkey_index = pkey_index;
+ wc->byte_len = sizeof(struct ib_mad) + sizeof(struct ib_grh);
+ wc->src_qp = IB_QP0;
+ wc->qp_num = IB_QP0;
+ wc->slid = slid;
+ wc->sl = 0;
+ wc->dlid_path_bits = 0;
+ wc->port_num = port_num;
+}
+
+/*
+ * Return 0 if SMP is to be sent
+ * Return 1 if SMP was consumed locally (whether or not solicited)
+ * Return < 0 if error
+ */
+static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
+ struct ib_smp *smp,
+ struct ib_send_wr *send_wr)
+{
+ int ret, alloc_flags, solicited;
+ unsigned long flags;
+ struct ib_mad_local_private *local;
+ struct ib_mad_private *mad_priv;
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_agent_private *recv_mad_agent = NULL;
+ struct ib_device *device = mad_agent_priv->agent.device;
+ u8 port_num = mad_agent_priv->agent.port_num;
+ struct ib_wc mad_wc;
+
+ if (!smi_handle_dr_smp_send(smp, device->node_type, port_num)) {
+ ret = -EINVAL;
+ printk(KERN_ERR PFX "Invalid directed route\n");
+ goto out;
+ }
+ /* Check to post send on QP or process locally */
+ ret = smi_check_local_dr_smp(smp, device, port_num);
+ if (!ret || !device->process_mad)
+ goto out;
+
+ if (in_atomic() || irqs_disabled())
+ alloc_flags = GFP_ATOMIC;
+ else
+ alloc_flags = GFP_KERNEL;
+ local = kmalloc(sizeof *local, alloc_flags);
+ if (!local) {
+ ret = -ENOMEM;
+ printk(KERN_ERR PFX "No memory for ib_mad_local_private\n");
+ goto out;
+ }
+ local->mad_priv = NULL;
+ local->recv_mad_agent = NULL;
+ mad_priv = kmem_cache_alloc(ib_mad_cache, alloc_flags);
+ if (!mad_priv) {
+ ret = -ENOMEM;
+ printk(KERN_ERR PFX "No memory for local response MAD\n");
+ kfree(local);
+ goto out;
+ }
+
+ build_smp_wc(send_wr->wr_id, smp->dr_slid, send_wr->wr.ud.pkey_index,
+ send_wr->wr.ud.port_num, &mad_wc);
+
+ /* No GRH for DR SMP */
+ ret = device->process_mad(device, 0, port_num, &mad_wc, NULL,
+ (struct ib_mad *)smp,
+ (struct ib_mad *)&mad_priv->mad);
+ switch (ret)
+ {
+ case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY:
+ /*
+ * See if response is solicited and
+ * there is a recv handler
+ */
+ if (solicited_mad(&mad_priv->mad.mad) &&
+ mad_agent_priv->agent.recv_handler) {
+ local->mad_priv = mad_priv;
+ local->recv_mad_agent = mad_agent_priv;
+ /*
+ * Reference MAD agent until receive
+ * side of local completion handled
+ */
+ atomic_inc(&mad_agent_priv->refcount);
+ } else
+ kmem_cache_free(ib_mad_cache, mad_priv);
+ break;
+ case IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED:
+ kmem_cache_free(ib_mad_cache, mad_priv);
+ break;
+ case IB_MAD_RESULT_SUCCESS:
+ /* Treat like an incoming receive MAD */
+ solicited = solicited_mad(&mad_priv->mad.mad);
+ port_priv = ib_get_mad_port(mad_agent_priv->agent.device,
+ mad_agent_priv->agent.port_num);
+ if (port_priv) {
+ mad_priv->mad.mad.mad_hdr.tid =
+ ((struct ib_mad *)smp)->mad_hdr.tid;
+ recv_mad_agent = find_mad_agent(port_priv,
+ &mad_priv->mad.mad,
+ solicited);
+ }
+ if (!port_priv || !recv_mad_agent) {
+ kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(local);
+ ret = 0;
+ goto out;
+ }
+ local->mad_priv = mad_priv;
+ local->recv_mad_agent = recv_mad_agent;
+ break;
+ default:
+ kmem_cache_free(ib_mad_cache, mad_priv);
+ kfree(local);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ local->send_wr = *send_wr;
+ local->send_wr.sg_list = local->sg_list;
+ memcpy(local->sg_list, send_wr->sg_list,
+ sizeof *send_wr->sg_list * send_wr->num_sge);
+ local->send_wr.next = NULL;
+ local->tid = send_wr->wr.ud.mad_hdr->tid;
+ local->wr_id = send_wr->wr_id;
+ /* Reference MAD agent until send side of local completion handled */
+ atomic_inc(&mad_agent_priv->refcount);
+ /* Queue local completion to local list */
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ list_add_tail(&local->completion_list, &mad_agent_priv->local_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ queue_work(mad_agent_priv->qp_info->port_priv->wq,
+ &mad_agent_priv->local_work);
+ ret = 1;
+out:
+ return ret;
+}
+
+static int ib_send_mad(struct ib_mad_agent_private *mad_agent_priv,
+ struct ib_mad_send_wr_private *mad_send_wr)
+{
+ struct ib_mad_qp_info *qp_info;
+ struct ib_send_wr *bad_send_wr;
+ unsigned long flags;
+ int ret;
+
+ /* Replace user's WR ID with our own to find WR upon completion */
+ qp_info = mad_agent_priv->qp_info;
+ mad_send_wr->wr_id = mad_send_wr->send_wr.wr_id;
+ mad_send_wr->send_wr.wr_id = (unsigned long)&mad_send_wr->mad_list;
+ mad_send_wr->mad_list.mad_queue = &qp_info->send_queue;
+
+ spin_lock_irqsave(&qp_info->send_queue.lock, flags);
+ if (qp_info->send_queue.count++ < qp_info->send_queue.max_active) {
+ list_add_tail(&mad_send_wr->mad_list.list,
+ &qp_info->send_queue.list);
+ spin_unlock_irqrestore(&qp_info->send_queue.lock, flags);
+ ret = ib_post_send(mad_agent_priv->agent.qp,
+ &mad_send_wr->send_wr, &bad_send_wr);
+ if (ret) {
+ printk(KERN_ERR PFX "ib_post_send failed: %d\n", ret);
+ dequeue_mad(&mad_send_wr->mad_list);
+ }
+ } else {
+ list_add_tail(&mad_send_wr->mad_list.list,
+ &qp_info->overflow_list);
+ spin_unlock_irqrestore(&qp_info->send_queue.lock, flags);
+ ret = 0;
+ }
+ return ret;
+}
+
+/*
+ * ib_post_send_mad - Posts MAD(s) to the send queue of the QP associated
+ * with the registered client
+ */
+int ib_post_send_mad(struct ib_mad_agent *mad_agent,
+ struct ib_send_wr *send_wr,
+ struct ib_send_wr **bad_send_wr)
+{
+ int ret = -EINVAL;
+ struct ib_mad_agent_private *mad_agent_priv;
+
+ /* Validate supplied parameters */
+ if (!bad_send_wr)
+ goto error1;
+
+ if (!mad_agent || !send_wr)
+ goto error2;
+
+ if (!mad_agent->send_handler)
+ goto error2;
+
+ mad_agent_priv = container_of(mad_agent,
+ struct ib_mad_agent_private,
+ agent);
+
+ /* Walk list of send WRs and post each on send list */
+ while (send_wr) {
+ unsigned long flags;
+ struct ib_send_wr *next_send_wr;
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_smp *smp;
+
+ /* Validate more parameters */
+ if (send_wr->num_sge > IB_MAD_SEND_REQ_MAX_SG)
+ goto error2;
+
+ if (send_wr->wr.ud.timeout_ms && !mad_agent->recv_handler)
+ goto error2;
+
+ if (!send_wr->wr.ud.mad_hdr) {
+ printk(KERN_ERR PFX "MAD header must be supplied "
+ "in WR %p\n", send_wr);
+ goto error2;
+ }
+
+ /*
+ * Save pointer to next work request to post in case the
+ * current one completes, and the user modifies the work
+ * request associated with the completion
+ */
+ next_send_wr = (struct ib_send_wr *)send_wr->next;
+
+ smp = (struct ib_smp *)send_wr->wr.ud.mad_hdr;
+ if (smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
+ ret = handle_outgoing_dr_smp(mad_agent_priv, smp,
+ send_wr);
+ if (ret < 0) /* error */
+ goto error2;
+ else if (ret == 1) /* locally consumed */
+ goto next;
+ }
+
+ /* Allocate MAD send WR tracking structure */
+ mad_send_wr = kmalloc(sizeof *mad_send_wr,
+ (in_atomic() || irqs_disabled()) ?
+ GFP_ATOMIC : GFP_KERNEL);
+ if (!mad_send_wr) {
+ printk(KERN_ERR PFX "No memory for "
+ "ib_mad_send_wr_private\n");
+ ret = -ENOMEM;
+ goto error2;
+ }
+
+ mad_send_wr->send_wr = *send_wr;
+ mad_send_wr->send_wr.sg_list = mad_send_wr->sg_list;
+ memcpy(mad_send_wr->sg_list, send_wr->sg_list,
+ sizeof *send_wr->sg_list * send_wr->num_sge);
+ mad_send_wr->send_wr.next = NULL;
+ mad_send_wr->tid = send_wr->wr.ud.mad_hdr->tid;
+ mad_send_wr->agent = mad_agent;
+ /* Timeout will be updated after send completes */
+ mad_send_wr->timeout = msecs_to_jiffies(send_wr->wr.
+ ud.timeout_ms);
+ mad_send_wr->retry = 0;
+ /* One reference for each work request to QP + response */
+ mad_send_wr->refcount = 1 + (mad_send_wr->timeout > 0);
+ mad_send_wr->status = IB_WC_SUCCESS;
+
+ /* Reference MAD agent until send completes */
+ atomic_inc(&mad_agent_priv->refcount);
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ list_add_tail(&mad_send_wr->agent_list,
+ &mad_agent_priv->send_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ ret = ib_send_mad(mad_agent_priv, mad_send_wr);
+ if (ret) {
+ /* Fail send request */
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ list_del(&mad_send_wr->agent_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ atomic_dec(&mad_agent_priv->refcount);
+ goto error2;
+ }
+next:
+ send_wr = next_send_wr;
+ }
+ return 0;
+
+error2:
+ *bad_send_wr = send_wr;
+error1:
+ return ret;
+}
+EXPORT_SYMBOL(ib_post_send_mad);
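+
+/*
+ * Note on the error path above: when posting fails, the work request that
+ * could not be posted is handed back through *bad_send_wr, so a caller
+ * submitting a chain of send WRs can tell where posting stopped:
+ *
+ *	ret = ib_post_send_mad(agent, send_wr, &bad_send_wr);
+ *	if (ret)
+ *		... WRs from bad_send_wr onward were not posted ...
+ */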
+
+/*
+ * ib_free_recv_mad - Returns data buffers used to receive
+ * a MAD to the access layer
+ */
+void ib_free_recv_mad(struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_mad_recv_buf *entry;
+ struct ib_mad_private_header *mad_priv_hdr;
+ struct ib_mad_private *priv;
+
+ mad_priv_hdr = container_of(mad_recv_wc,
+ struct ib_mad_private_header,
+ recv_wc);
+ priv = container_of(mad_priv_hdr, struct ib_mad_private, header);
+
+ /*
+ * Walk receive buffer list associated with this WC
+ * No need to remove them from list of receive buffers
+ */
+ list_for_each_entry(entry, &mad_recv_wc->recv_buf.list, list) {
+ /* Free previous receive buffer */
+ kmem_cache_free(ib_mad_cache, priv);
+ mad_priv_hdr = container_of(mad_recv_wc,
+ struct ib_mad_private_header,
+ recv_wc);
+ priv = container_of(mad_priv_hdr, struct ib_mad_private,
+ header);
+ }
+
+ /* Free last buffer */
+ kmem_cache_free(ib_mad_cache, priv);
+}
+EXPORT_SYMBOL(ib_free_recv_mad);
+
+void ib_coalesce_recv_mad(struct ib_mad_recv_wc *mad_recv_wc,
+ void *buf)
+{
+ printk(KERN_ERR PFX "ib_coalesce_recv_mad() not implemented yet\n");
+}
+EXPORT_SYMBOL(ib_coalesce_recv_mad);
+
+struct ib_mad_agent *ib_redirect_mad_qp(struct ib_qp *qp,
+ u8 rmpp_version,
+ ib_mad_send_handler send_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context)
+{
+ return ERR_PTR(-EINVAL); /* XXX: for now */
+}
+EXPORT_SYMBOL(ib_redirect_mad_qp);
+
+int ib_process_mad_wc(struct ib_mad_agent *mad_agent,
+ struct ib_wc *wc)
+{
+ printk(KERN_ERR PFX "ib_process_mad_wc() not implemented yet\n");
+ return 0;
+}
+EXPORT_SYMBOL(ib_process_mad_wc);
+
+static int method_in_use(struct ib_mad_mgmt_method_table **method,
+ struct ib_mad_reg_req *mad_reg_req)
+{
+ int i;
+
+ for (i = find_first_bit(mad_reg_req->method_mask, IB_MGMT_MAX_METHODS);
+ i < IB_MGMT_MAX_METHODS;
+ i = find_next_bit(mad_reg_req->method_mask, IB_MGMT_MAX_METHODS,
+ 1+i)) {
+ if ((*method)->agent[i]) {
+ printk(KERN_ERR PFX "Method %d already in use\n", i);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+static int allocate_method_table(struct ib_mad_mgmt_method_table **method)
+{
+ /* Allocate management method table */
+ *method = kmalloc(sizeof **method, GFP_ATOMIC);
+ if (!*method) {
+ printk(KERN_ERR PFX "No memory for "
+ "ib_mad_mgmt_method_table\n");
+ return -ENOMEM;
+ }
+ /* Clear management method table */
+ memset(*method, 0, sizeof **method);
+
+ return 0;
+}
+
+/*
+ * Check to see if there are any methods still in use
+ */
+static int check_method_table(struct ib_mad_mgmt_method_table *method)
+{
+ int i;
+
+ for (i = 0; i < IB_MGMT_MAX_METHODS; i++)
+ if (method->agent[i])
+ return 1;
+ return 0;
+}
+
+/*
+ * Check to see if there are any method tables for this class still in use
+ */
+static int check_class_table(struct ib_mad_mgmt_class_table *class)
+{
+ int i;
+
+ for (i = 0; i < MAX_MGMT_CLASS; i++)
+ if (class->method_table[i])
+ return 1;
+ return 0;
+}
+
+static int check_vendor_class(struct ib_mad_mgmt_vendor_class *vendor_class)
+{
+ int i;
+
+ for (i = 0; i < MAX_MGMT_OUI; i++)
+ if (vendor_class->method_table[i])
+ return 1;
+ return 0;
+}
+
+static int find_vendor_oui(struct ib_mad_mgmt_vendor_class *vendor_class,
+ char *oui)
+{
+ int i;
+
+ for (i = 0; i < MAX_MGMT_OUI; i++)
+ /* Is there matching OUI for this vendor class ? */
+ if (!memcmp(vendor_class->oui[i], oui, 3))
+ return i;
+
+ return -1;
+}
+
+static int check_vendor_table(struct ib_mad_mgmt_vendor_class_table *vendor)
+{
+ int i;
+
+ for (i = 0; i < MAX_MGMT_VENDOR_RANGE2; i++)
+ if (vendor->vendor_class[i])
+ return 1;
+
+ return 0;
+}
+
+static void remove_methods_mad_agent(struct ib_mad_mgmt_method_table *method,
+ struct ib_mad_agent_private *agent)
+{
+ int i;
+
+ /* Remove any methods for this mad agent */
+ for (i = 0; i < IB_MGMT_MAX_METHODS; i++) {
+ if (method->agent[i] == agent) {
+ method->agent[i] = NULL;
+ }
+ }
+}
+
+static int add_nonoui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ struct ib_mad_agent_private *agent_priv,
+ u8 mgmt_class)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_mgmt_class_table **class;
+ struct ib_mad_mgmt_method_table **method;
+ int i, ret;
+
+ port_priv = agent_priv->qp_info->port_priv;
+ class = &port_priv->version[mad_reg_req->mgmt_class_version].class;
+ if (!*class) {
+ /* Allocate management class table for "new" class version */
+ *class = kmalloc(sizeof **class, GFP_ATOMIC);
+ if (!*class) {
+ printk(KERN_ERR PFX "No memory for "
+ "ib_mad_mgmt_class_table\n");
+ ret = -ENOMEM;
+ goto error1;
+ }
+ /* Clear management class table */
+ memset(*class, 0, sizeof(**class));
+ /* Allocate method table for this management class */
+ method = &(*class)->method_table[mgmt_class];
+ if ((ret = allocate_method_table(method)))
+ goto error2;
+ } else {
+ method = &(*class)->method_table[mgmt_class];
+ if (!*method) {
+ /* Allocate method table for this management class */
+ if ((ret = allocate_method_table(method)))
+ goto error1;
+ }
+ }
+
+ /* Now, make sure methods are not already in use */
+ if (method_in_use(method, mad_reg_req))
+ goto error3;
+
+ /* Finally, add in methods being registered */
+ for (i = find_first_bit(mad_reg_req->method_mask,
+ IB_MGMT_MAX_METHODS);
+ i < IB_MGMT_MAX_METHODS;
+ i = find_next_bit(mad_reg_req->method_mask, IB_MGMT_MAX_METHODS,
+ 1+i)) {
+ (*method)->agent[i] = agent_priv;
+ }
+ return 0;
+
+error3:
+ /* Remove any methods for this mad agent */
+ remove_methods_mad_agent(*method, agent_priv);
+ /* Now, check to see if there are any methods in use */
+ if (!check_method_table(*method)) {
+ /* If not, release management method table */
+ kfree(*method);
+ *method = NULL;
+ }
+ ret = -EINVAL;
+ goto error1;
+error2:
+ kfree(*class);
+ *class = NULL;
+error1:
+ return ret;
+}
+
+static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ struct ib_mad_agent_private *agent_priv)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_mgmt_vendor_class_table **vendor_table;
+ struct ib_mad_mgmt_vendor_class_table *vendor = NULL;
+ struct ib_mad_mgmt_vendor_class *vendor_class = NULL;
+ struct ib_mad_mgmt_method_table **method;
+ int i, ret = -ENOMEM;
+ u8 vclass;
+
+ /* "New" vendor (with OUI) class */
+ vclass = vendor_class_index(mad_reg_req->mgmt_class);
+ port_priv = agent_priv->qp_info->port_priv;
+ vendor_table = &port_priv->version[
+ mad_reg_req->mgmt_class_version].vendor;
+ if (!*vendor_table) {
+ /* Allocate mgmt vendor class table for "new" class version */
+ vendor = kmalloc(sizeof *vendor, GFP_ATOMIC);
+ if (!vendor) {
+ printk(KERN_ERR PFX "No memory for "
+ "ib_mad_mgmt_vendor_class_table\n");
+ goto error1;
+ }
+ /* Clear management vendor class table */
+ memset(vendor, 0, sizeof(*vendor));
+ *vendor_table = vendor;
+ }
+ if (!(*vendor_table)->vendor_class[vclass]) {
+ /* Allocate table for this management vendor class */
+ vendor_class = kmalloc(sizeof *vendor_class, GFP_ATOMIC);
+ if (!vendor_class) {
+ printk(KERN_ERR PFX "No memory for "
+ "ib_mad_mgmt_vendor_class\n");
+ goto error2;
+ }
+ memset(vendor_class, 0, sizeof(*vendor_class));
+ (*vendor_table)->vendor_class[vclass] = vendor_class;
+ }
+ for (i = 0; i < MAX_MGMT_OUI; i++) {
+ /* Is there matching OUI for this vendor class ? */
+ if (!memcmp((*vendor_table)->vendor_class[vclass]->oui[i],
+ mad_reg_req->oui, 3)) {
+ method = &(*vendor_table)->vendor_class[
+ vclass]->method_table[i];
+ BUG_ON(!*method);
+ goto check_in_use;
+ }
+ }
+ for (i = 0; i < MAX_MGMT_OUI; i++) {
+ /* OUI slot available ? */
+ if (!is_vendor_oui((*vendor_table)->vendor_class[
+ vclass]->oui[i])) {
+ method = &(*vendor_table)->vendor_class[
+ vclass]->method_table[i];
+ BUG_ON(*method);
+ /* Allocate method table for this OUI */
+ if ((ret = allocate_method_table(method)))
+ goto error3;
+ memcpy((*vendor_table)->vendor_class[vclass]->oui[i],
+ mad_reg_req->oui, 3);
+ goto check_in_use;
+ }
+ }
+ printk(KERN_ERR PFX "All OUI slots in use\n");
+ goto error3;
+
+check_in_use:
+ /* Now, make sure methods are not already in use */
+ if (method_in_use(method, mad_reg_req))
+ goto error4;
+
+ /* Finally, add in methods being registered */
+ for (i = find_first_bit(mad_reg_req->method_mask,
+ IB_MGMT_MAX_METHODS);
+ i < IB_MGMT_MAX_METHODS;
+ i = find_next_bit(mad_reg_req->method_mask, IB_MGMT_MAX_METHODS,
+ 1+i)) {
+ (*method)->agent[i] = agent_priv;
+ }
+ return 0;
+
+error4:
+ /* Remove any methods for this mad agent */
+ remove_methods_mad_agent(*method, agent_priv);
+ /* Now, check to see if there are any methods in use */
+ if (!check_method_table(*method)) {
+ /* If not, release management method table */
+ kfree(*method);
+ *method = NULL;
+ }
+ ret = -EINVAL;
+error3:
+ if (vendor_class) {
+ (*vendor_table)->vendor_class[vclass] = NULL;
+ kfree(vendor_class);
+ }
+error2:
+ if (vendor) {
+ *vendor_table = NULL;
+ kfree(vendor);
+ }
+error1:
+ return ret;
+}
+
+static void remove_mad_reg_req(struct ib_mad_agent_private *agent_priv)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_mad_mgmt_class_table *class;
+ struct ib_mad_mgmt_method_table *method;
+ struct ib_mad_mgmt_vendor_class_table *vendor;
+ struct ib_mad_mgmt_vendor_class *vendor_class;
+ int index;
+ u8 mgmt_class;
+
+ /*
+ * Was MAD registration request supplied
+ * with original registration ?
+ */
+ if (!agent_priv->reg_req) {
+ goto out;
+ }
+
+ port_priv = agent_priv->qp_info->port_priv;
+ class = port_priv->version[
+ agent_priv->reg_req->mgmt_class_version].class;
+ if (!class)
+ goto vendor_check;
+
+ mgmt_class = convert_mgmt_class(agent_priv->reg_req->mgmt_class);
+ method = class->method_table[mgmt_class];
+ if (method) {
+ /* Remove any methods for this mad agent */
+ remove_methods_mad_agent(method, agent_priv);
+ /* Now, check to see if there are any methods still in use */
+ if (!check_method_table(method)) {
+ /* If not, release management method table */
+ kfree(method);
+ class->method_table[mgmt_class] = NULL;
+ /* Any management classes left ? */
+ if (!check_class_table(class)) {
+ /* If not, release management class table */
+ kfree(class);
+ port_priv->version[
+ agent_priv->reg_req->
+ mgmt_class_version].class = NULL;
+ }
+ }
+ }
+
+vendor_check:
+ vendor = port_priv->version[
+ agent_priv->reg_req->mgmt_class_version].vendor;
+ if (!vendor)
+ goto out;
+
+ mgmt_class = vendor_class_index(agent_priv->reg_req->mgmt_class);
+ vendor_class = vendor->vendor_class[mgmt_class];
+ if (vendor_class) {
+ index = find_vendor_oui(vendor_class, agent_priv->reg_req->oui);
+ if (index == -1)
+ goto out;
+ method = vendor_class->method_table[index];
+ if (method) {
+ /* Remove any methods for this mad agent */
+ remove_methods_mad_agent(method, agent_priv);
+ /*
+ * Now, check to see if there are
+ * any methods still in use
+ */
+ if (!check_method_table(method)) {
+ /* If not, release management method table */
+ kfree(method);
+ vendor_class->method_table[index] = NULL;
+ memset(vendor_class->oui[index], 0, 3);
+ /* Any OUIs left ? */
+ if (!check_vendor_class(vendor_class)) {
+ /* If not, release vendor class table */
+ kfree(vendor_class);
+ vendor->vendor_class[mgmt_class] = NULL;
+ /* Any other vendor classes left ? */
+ if (!check_vendor_table(vendor)) {
+ kfree(vendor);
+ port_priv->version[
+ agent_priv->reg_req->
+ mgmt_class_version].
+ vendor = NULL;
+ }
+ }
+ }
+ }
+ }
+
+out:
+ return;
+}
+
+static int response_mad(struct ib_mad *mad)
+{
+ /* Trap represses are responses although response bit is reset */
+ return ((mad->mad_hdr.method == IB_MGMT_METHOD_TRAP_REPRESS) ||
+ (mad->mad_hdr.method & IB_MGMT_METHOD_RESP));
+}
+
+static int solicited_mad(struct ib_mad *mad)
+{
+ /* CM MADs are never solicited */
+ if (mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_CM) {
+ return 0;
+ }
+
+ /* XXX: Determine whether MAD is using RMPP */
+
+ /* Not using RMPP */
+ /* Is this MAD a response to a previous MAD ? */
+ return response_mad(mad);
+}
+
+static struct ib_mad_agent_private *
+find_mad_agent(struct ib_mad_port_private *port_priv,
+ struct ib_mad *mad,
+ int solicited)
+{
+ struct ib_mad_agent_private *mad_agent = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_priv->reg_lock, flags);
+
+ /*
+ * Whether MAD was solicited determines type of routing to
+ * MAD client.
+ */
+ if (solicited) {
+ u32 hi_tid;
+ struct ib_mad_agent_private *entry;
+
+ /*
+ * Routing is based on high 32 bits of transaction ID
+ * of MAD.
+ */
+ hi_tid = be64_to_cpu(mad->mad_hdr.tid) >> 32;
+ list_for_each_entry(entry, &port_priv->agent_list,
+ agent_list) {
+ if (entry->agent.hi_tid == hi_tid) {
+ mad_agent = entry;
+ break;
+ }
+ }
+ } else {
+ struct ib_mad_mgmt_class_table *class;
+ struct ib_mad_mgmt_method_table *method;
+ struct ib_mad_mgmt_vendor_class_table *vendor;
+ struct ib_mad_mgmt_vendor_class *vendor_class;
+ struct ib_vendor_mad *vendor_mad;
+ int index;
+
+ /*
+ * Routing is based on version, class, and method
+ * For "newer" vendor MADs, also based on OUI
+ */
+ if (mad->mad_hdr.class_version >= MAX_MGMT_VERSION)
+ goto out;
+ if (!is_vendor_class(mad->mad_hdr.mgmt_class)) {
+ class = port_priv->version[
+ mad->mad_hdr.class_version].class;
+ if (!class)
+ goto out;
+ method = class->method_table[convert_mgmt_class(
+ mad->mad_hdr.mgmt_class)];
+ if (method)
+ mad_agent = method->agent[mad->mad_hdr.method &
+ ~IB_MGMT_METHOD_RESP];
+ } else {
+ vendor = port_priv->version[
+ mad->mad_hdr.class_version].vendor;
+ if (!vendor)
+ goto out;
+ vendor_class = vendor->vendor_class[vendor_class_index(
+ mad->mad_hdr.mgmt_class)];
+ if (!vendor_class)
+ goto out;
+ /* Find matching OUI */
+ vendor_mad = (struct ib_vendor_mad *)mad;
+ index = find_vendor_oui(vendor_class, vendor_mad->oui);
+ if (index == -1)
+ goto out;
+ method = vendor_class->method_table[index];
+ if (method) {
+ mad_agent = method->agent[mad->mad_hdr.method &
+ ~IB_MGMT_METHOD_RESP];
+ }
+ }
+ }
+
+ if (mad_agent) {
+ if (mad_agent->agent.recv_handler)
+ atomic_inc(&mad_agent->refcount);
+ else {
+ printk(KERN_NOTICE PFX "No receive handler for client "
+ "%p on port %d\n",
+ &mad_agent->agent, port_priv->port_num);
+ mad_agent = NULL;
+ }
+ }
+out:
+ spin_unlock_irqrestore(&port_priv->reg_lock, flags);
+
+ return mad_agent;
+}
+
+static int validate_mad(struct ib_mad *mad, u32 qp_num)
+{
+ int valid = 0;
+
+ /* Make sure MAD base version is understood */
+ if (mad->mad_hdr.base_version != IB_MGMT_BASE_VERSION) {
+ printk(KERN_ERR PFX "MAD received with unsupported base "
+ "version %d\n", mad->mad_hdr.base_version);
+ goto out;
+ }
+
+ /* Filter SMI packets sent to other than QP0 */
+ if ((mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED) ||
+ (mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)) {
+ if (qp_num == 0)
+ valid = 1;
+ } else {
+ /* Filter GSI packets sent to QP0 */
+ if (qp_num != 0)
+ valid = 1;
+ }
+
+out:
+ return valid;
+}
+
+/*
+ * Return start of fully reassembled MAD, or NULL if the MAD isn't assembled yet
+ */
+static struct ib_mad_private *
+reassemble_recv(struct ib_mad_agent_private *mad_agent_priv,
+ struct ib_mad_private *recv)
+{
+ /* Until we have RMPP, all receives are reassembled!... */
+ INIT_LIST_HEAD(&recv->header.recv_wc.recv_buf.list);
+ return recv;
+}
+
+static struct ib_mad_send_wr_private*
+find_send_req(struct ib_mad_agent_private *mad_agent_priv,
+ u64 tid)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+
+ list_for_each_entry(mad_send_wr, &mad_agent_priv->wait_list,
+ agent_list) {
+ if (mad_send_wr->tid == tid)
+ return mad_send_wr;
+ }
+
+ /*
+ * It's possible to receive the response before we've
+ * been notified that the send has completed
+ */
+ list_for_each_entry(mad_send_wr, &mad_agent_priv->send_list,
+ agent_list) {
+ if (mad_send_wr->tid == tid && mad_send_wr->timeout) {
+ /* Verify request has not been canceled */
+ return (mad_send_wr->status == IB_WC_SUCCESS) ?
+ mad_send_wr : NULL;
+ }
+ }
+ return NULL;
+}
+
+static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
+ struct ib_mad_private *recv,
+ int solicited)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_send_wc mad_send_wc;
+ unsigned long flags;
+
+ /* Fully reassemble receive before processing */
+ recv = reassemble_recv(mad_agent_priv, recv);
+ if (!recv) {
+ if (atomic_dec_and_test(&mad_agent_priv->refcount))
+ wake_up(&mad_agent_priv->wait);
+ return;
+ }
+
+ /* Complete corresponding request */
+ if (solicited) {
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ mad_send_wr = find_send_req(mad_agent_priv,
+ recv->mad.mad.mad_hdr.tid);
+ if (!mad_send_wr) {
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ ib_free_recv_mad(&recv->header.recv_wc);
+ if (atomic_dec_and_test(&mad_agent_priv->refcount))
+ wake_up(&mad_agent_priv->wait);
+ return;
+ }
+ /* Timeout = 0 means that we won't wait for a response */
+ mad_send_wr->timeout = 0;
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ /* Defined behavior is to complete response before request */
+ recv->header.recv_wc.wc->wr_id = mad_send_wr->wr_id;
+ mad_agent_priv->agent.recv_handler(
+ &mad_agent_priv->agent,
+ &recv->header.recv_wc);
+ atomic_dec(&mad_agent_priv->refcount);
+
+ mad_send_wc.status = IB_WC_SUCCESS;
+ mad_send_wc.vendor_err = 0;
+ mad_send_wc.wr_id = mad_send_wr->wr_id;
+ ib_mad_complete_send_wr(mad_send_wr, &mad_send_wc);
+ } else {
+ mad_agent_priv->agent.recv_handler(
+ &mad_agent_priv->agent,
+ &recv->header.recv_wc);
+ if (atomic_dec_and_test(&mad_agent_priv->refcount))
+ wake_up(&mad_agent_priv->wait);
+ }
+}
+
+static void ib_mad_recv_done_handler(struct ib_mad_port_private *port_priv,
+ struct ib_wc *wc)
+{
+ struct ib_mad_qp_info *qp_info;
+ struct ib_mad_private_header *mad_priv_hdr;
+ struct ib_mad_private *recv, *response;
+ struct ib_mad_list_head *mad_list;
+ struct ib_mad_agent_private *mad_agent;
+ int solicited;
+
+ response = kmem_cache_alloc(ib_mad_cache, GFP_KERNEL);
+ if (!response)
+ printk(KERN_ERR PFX "ib_mad_recv_done_handler no memory "
+ "for response buffer\n");
+
+ mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id;
+ qp_info = mad_list->mad_queue->qp_info;
+ dequeue_mad(mad_list);
+
+ mad_priv_hdr = container_of(mad_list, struct ib_mad_private_header,
+ mad_list);
+ recv = container_of(mad_priv_hdr, struct ib_mad_private, header);
+ dma_unmap_single(port_priv->device->dma_device,
+ pci_unmap_addr(&recv->header, mapping),
+ sizeof(struct ib_mad_private) -
+ sizeof(struct ib_mad_private_header),
+ DMA_FROM_DEVICE);
+
+ /* Setup MAD receive work completion from "normal" work completion */
+ recv->header.recv_wc.wc = wc;
+ recv->header.recv_wc.mad_len = sizeof(struct ib_mad);
+ recv->header.recv_wc.recv_buf.mad = &recv->mad.mad;
+ recv->header.recv_wc.recv_buf.grh = &recv->grh;
+
+ if (atomic_read(&qp_info->snoop_count))
+ snoop_recv(qp_info, &recv->header.recv_wc, IB_MAD_SNOOP_RECVS);
+
+ /* Validate MAD */
+ if (!validate_mad(&recv->mad.mad, qp_info->qp->qp_num))
+ goto out;
+
+ if (recv->mad.mad.mad_hdr.mgmt_class ==
+ IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
+ if (!smi_handle_dr_smp_recv(&recv->mad.smp,
+ port_priv->device->node_type,
+ port_priv->port_num,
+ port_priv->device->phys_port_cnt))
+ goto out;
+ if (!smi_check_forward_dr_smp(&recv->mad.smp))
+ goto local;
+ if (!smi_handle_dr_smp_send(&recv->mad.smp,
+ port_priv->device->node_type,
+ port_priv->port_num))
+ goto out;
+ if (!smi_check_local_dr_smp(&recv->mad.smp,
+ port_priv->device,
+ port_priv->port_num))
+ goto out;
+ }
+
+local:
+ /* Give driver "right of first refusal" on incoming MAD */
+ if (port_priv->device->process_mad) {
+ int ret;
+
+ if (!response) {
+ printk(KERN_ERR PFX "No memory for response MAD\n");
+ /*
+ * Is it better to assume that
+ * it wouldn't be processed ?
+ */
+ goto out;
+ }
+
+ ret = port_priv->device->process_mad(port_priv->device, 0,
+ port_priv->port_num,
+ wc, &recv->grh,
+ &recv->mad.mad,
+ &response->mad.mad);
+ if (ret & IB_MAD_RESULT_SUCCESS) {
+ if (ret & IB_MAD_RESULT_CONSUMED)
+ goto out;
+ if (ret & IB_MAD_RESULT_REPLY) {
+ /* Send response */
+ if (!agent_send(response, &recv->grh, wc,
+ port_priv->device,
+ port_priv->port_num))
+ response = NULL;
+ goto out;
+ }
+ }
+ }
+
+ /* Determine corresponding MAD agent for incoming receive MAD */
+ solicited = solicited_mad(&recv->mad.mad);
+ mad_agent = find_mad_agent(port_priv, &recv->mad.mad, solicited);
+ if (mad_agent) {
+ ib_mad_complete_recv(mad_agent, recv, solicited);
+ /*
+ * recv is freed up in error cases in ib_mad_complete_recv
+ * or via recv_handler in ib_mad_complete_recv()
+ */
+ recv = NULL;
+ }
+
+out:
+ /* Post another receive request for this QP */
+ if (response) {
+ ib_mad_post_receive_mads(qp_info, response);
+ if (recv)
+ kmem_cache_free(ib_mad_cache, recv);
+ } else
+ ib_mad_post_receive_mads(qp_info, recv);
+}
+
+static void adjust_timeout(struct ib_mad_agent_private *mad_agent_priv)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+ unsigned long delay;
+
+ if (list_empty(&mad_agent_priv->wait_list)) {
+ cancel_delayed_work(&mad_agent_priv->timed_work);
+ } else {
+ mad_send_wr = list_entry(mad_agent_priv->wait_list.next,
+ struct ib_mad_send_wr_private,
+ agent_list);
+
+ if (time_after(mad_agent_priv->timeout,
+ mad_send_wr->timeout)) {
+ mad_agent_priv->timeout = mad_send_wr->timeout;
+ cancel_delayed_work(&mad_agent_priv->timed_work);
+ delay = mad_send_wr->timeout - jiffies;
+ if ((long)delay <= 0)
+ delay = 1;
+ queue_delayed_work(mad_agent_priv->qp_info->
+ port_priv->wq,
+ &mad_agent_priv->timed_work, delay);
+ }
+ }
+}
+
+static void wait_for_response(struct ib_mad_agent_private *mad_agent_priv,
+ struct ib_mad_send_wr_private *mad_send_wr )
+{
+ struct ib_mad_send_wr_private *temp_mad_send_wr;
+ struct list_head *list_item;
+ unsigned long delay;
+
+ list_del(&mad_send_wr->agent_list);
+
+ delay = mad_send_wr->timeout;
+ mad_send_wr->timeout += jiffies;
+
+ list_for_each_prev(list_item, &mad_agent_priv->wait_list) {
+ temp_mad_send_wr = list_entry(list_item,
+ struct ib_mad_send_wr_private,
+ agent_list);
+ if (time_after(mad_send_wr->timeout,
+ temp_mad_send_wr->timeout))
+ break;
+ }
+ list_add(&mad_send_wr->agent_list, list_item);
+
+ /* Reschedule a work item if we have a shorter timeout */
+ if (mad_agent_priv->wait_list.next == &mad_send_wr->agent_list) {
+ cancel_delayed_work(&mad_agent_priv->timed_work);
+ queue_delayed_work(mad_agent_priv->qp_info->port_priv->wq,
+ &mad_agent_priv->timed_work, delay);
+ }
+}
+
+/*
+ * Process a send work completion
+ */
+static void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
+ struct ib_mad_send_wc *mad_send_wc)
+{
+ struct ib_mad_agent_private *mad_agent_priv;
+ unsigned long flags;
+
+ mad_agent_priv = container_of(mad_send_wr->agent,
+ struct ib_mad_agent_private, agent);
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ if (mad_send_wc->status != IB_WC_SUCCESS &&
+ mad_send_wr->status == IB_WC_SUCCESS) {
+ mad_send_wr->status = mad_send_wc->status;
+ mad_send_wr->refcount -= (mad_send_wr->timeout > 0);
+ }
+
+ if (--mad_send_wr->refcount > 0) {
+ if (mad_send_wr->refcount == 1 && mad_send_wr->timeout &&
+ mad_send_wr->status == IB_WC_SUCCESS) {
+ wait_for_response(mad_agent_priv, mad_send_wr);
+ }
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ return;
+ }
+
+ /* Remove send from MAD agent and notify client of completion */
+ list_del(&mad_send_wr->agent_list);
+ adjust_timeout(mad_agent_priv);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ if (mad_send_wr->status != IB_WC_SUCCESS )
+ mad_send_wc->status = mad_send_wr->status;
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ mad_send_wc);
+
+ /* Release reference on agent taken when sending */
+ if (atomic_dec_and_test(&mad_agent_priv->refcount))
+ wake_up(&mad_agent_priv->wait);
+
+ kfree(mad_send_wr);
+}
+
+static void ib_mad_send_done_handler(struct ib_mad_port_private *port_priv,
+ struct ib_wc *wc)
+{
+ struct ib_mad_send_wr_private *mad_send_wr, *queued_send_wr;
+ struct ib_mad_list_head *mad_list;
+ struct ib_mad_qp_info *qp_info;
+ struct ib_mad_queue *send_queue;
+ struct ib_send_wr *bad_send_wr;
+ unsigned long flags;
+ int ret;
+
+ mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id;
+ mad_send_wr = container_of(mad_list, struct ib_mad_send_wr_private,
+ mad_list);
+ send_queue = mad_list->mad_queue;
+ qp_info = send_queue->qp_info;
+
+retry:
+ queued_send_wr = NULL;
+ spin_lock_irqsave(&send_queue->lock, flags);
+ list_del(&mad_list->list);
+
+ /* Move queued send to the send queue */
+ if (send_queue->count-- > send_queue->max_active) {
+ mad_list = container_of(qp_info->overflow_list.next,
+ struct ib_mad_list_head, list);
+ queued_send_wr = container_of(mad_list,
+ struct ib_mad_send_wr_private,
+ mad_list);
+ list_del(&mad_list->list);
+ list_add_tail(&mad_list->list, &send_queue->list);
+ }
+ spin_unlock_irqrestore(&send_queue->lock, flags);
+
+ /* Restore client wr_id in WC and complete send */
+ wc->wr_id = mad_send_wr->wr_id;
+ if (atomic_read(&qp_info->snoop_count))
+ snoop_send(qp_info, &mad_send_wr->send_wr,
+ (struct ib_mad_send_wc *)wc,
+ IB_MAD_SNOOP_SEND_COMPLETIONS);
+ ib_mad_complete_send_wr(mad_send_wr, (struct ib_mad_send_wc *)wc);
+
+ if (queued_send_wr) {
+ ret = ib_post_send(qp_info->qp, &queued_send_wr->send_wr,
+ &bad_send_wr);
+ if (ret) {
+ printk(KERN_ERR PFX "ib_post_send failed: %d\n", ret);
+ mad_send_wr = queued_send_wr;
+ wc->status = IB_WC_LOC_QP_OP_ERR;
+ goto retry;
+ }
+ }
+}
+
+static void mark_sends_for_retry(struct ib_mad_qp_info *qp_info)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_list_head *mad_list;
+ unsigned long flags;
+
+ spin_lock_irqsave(&qp_info->send_queue.lock, flags);
+ list_for_each_entry(mad_list, &qp_info->send_queue.list, list) {
+ mad_send_wr = container_of(mad_list,
+ struct ib_mad_send_wr_private,
+ mad_list);
+ mad_send_wr->retry = 1;
+ }
+ spin_unlock_irqrestore(&qp_info->send_queue.lock, flags);
+}
+
+static void mad_error_handler(struct ib_mad_port_private *port_priv,
+ struct ib_wc *wc)
+{
+ struct ib_mad_list_head *mad_list;
+ struct ib_mad_qp_info *qp_info;
+ struct ib_mad_send_wr_private *mad_send_wr;
+ int ret;
+
+ /* Determine if failure was a send or receive */
+ mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id;
+ qp_info = mad_list->mad_queue->qp_info;
+ if (mad_list->mad_queue == &qp_info->recv_queue)
+ /*
+ * Receive errors indicate that the QP has entered the error
+ * state - error handling/shutdown code will cleanup
+ */
+ return;
+
+ /*
+ * Send errors will transition the QP to SQE - move
+ * QP to RTS and repost flushed work requests
+ */
+ mad_send_wr = container_of(mad_list, struct ib_mad_send_wr_private,
+ mad_list);
+ if (wc->status == IB_WC_WR_FLUSH_ERR) {
+ if (mad_send_wr->retry) {
+ /* Repost send */
+ struct ib_send_wr *bad_send_wr;
+
+ mad_send_wr->retry = 0;
+ ret = ib_post_send(qp_info->qp, &mad_send_wr->send_wr,
+ &bad_send_wr);
+ if (ret)
+ ib_mad_send_done_handler(port_priv, wc);
+ } else
+ ib_mad_send_done_handler(port_priv, wc);
+ } else {
+ struct ib_qp_attr *attr;
+
+ /* Transition QP to RTS and fail offending send */
+ attr = kmalloc(sizeof *attr, GFP_KERNEL);
+ if (attr) {
+ attr->qp_state = IB_QPS_RTS;
+ attr->cur_qp_state = IB_QPS_SQE;
+ ret = ib_modify_qp(qp_info->qp, attr,
+ IB_QP_STATE | IB_QP_CUR_STATE);
+ kfree(attr);
+ if (ret)
+ printk(KERN_ERR PFX "mad_error_handler - "
+ "ib_modify_qp to RTS : %d\n", ret);
+ else
+ mark_sends_for_retry(qp_info);
+ }
+ ib_mad_send_done_handler(port_priv, wc);
+ }
+}
+
+/*
+ * IB MAD completion callback
+ */
+static void ib_mad_completion_handler(void *data)
+{
+ struct ib_mad_port_private *port_priv;
+ struct ib_wc wc;
+
+ port_priv = (struct ib_mad_port_private *)data;
+ ib_req_notify_cq(port_priv->cq, IB_CQ_NEXT_COMP);
+
+ while (ib_poll_cq(port_priv->cq, 1, &wc) == 1) {
+ if (wc.status == IB_WC_SUCCESS) {
+ switch (wc.opcode) {
+ case IB_WC_SEND:
+ ib_mad_send_done_handler(port_priv, &wc);
+ break;
+ case IB_WC_RECV:
+ ib_mad_recv_done_handler(port_priv, &wc);
+ break;
+ default:
+ BUG_ON(1);
+ break;
+ }
+ } else
+ mad_error_handler(port_priv, &wc);
+ }
+}
+
+static void cancel_mads(struct ib_mad_agent_private *mad_agent_priv)
+{
+ unsigned long flags;
+ struct ib_mad_send_wr_private *mad_send_wr, *temp_mad_send_wr;
+ struct ib_mad_send_wc mad_send_wc;
+ struct list_head cancel_list;
+
+ INIT_LIST_HEAD(&cancel_list);
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ list_for_each_entry_safe(mad_send_wr, temp_mad_send_wr,
+ &mad_agent_priv->send_list, agent_list) {
+ if (mad_send_wr->status == IB_WC_SUCCESS) {
+ mad_send_wr->status = IB_WC_WR_FLUSH_ERR;
+ mad_send_wr->refcount -= (mad_send_wr->timeout > 0);
+ }
+ }
+
+ /* Empty wait list to prevent receives from finding a request */
+ list_splice_init(&mad_agent_priv->wait_list, &cancel_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ /* Report all cancelled requests */
+ mad_send_wc.status = IB_WC_WR_FLUSH_ERR;
+ mad_send_wc.vendor_err = 0;
+
+ list_for_each_entry_safe(mad_send_wr, temp_mad_send_wr,
+ &cancel_list, agent_list) {
+ mad_send_wc.wr_id = mad_send_wr->wr_id;
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ &mad_send_wc);
+
+ list_del(&mad_send_wr->agent_list);
+ kfree(mad_send_wr);
+ atomic_dec(&mad_agent_priv->refcount);
+ }
+}
+
+static struct ib_mad_send_wr_private*
+find_send_by_wr_id(struct ib_mad_agent_private *mad_agent_priv,
+ u64 wr_id)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+
+ list_for_each_entry(mad_send_wr, &mad_agent_priv->wait_list,
+ agent_list) {
+ if (mad_send_wr->wr_id == wr_id)
+ return mad_send_wr;
+ }
+
+ list_for_each_entry(mad_send_wr, &mad_agent_priv->send_list,
+ agent_list) {
+ if (mad_send_wr->wr_id == wr_id)
+ return mad_send_wr;
+ }
+ return NULL;
+}
+
+void ib_cancel_mad(struct ib_mad_agent *mad_agent,
+ u64 wr_id)
+{
+ struct ib_mad_agent_private *mad_agent_priv;
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_send_wc mad_send_wc;
+ unsigned long flags;
+
+ mad_agent_priv = container_of(mad_agent, struct ib_mad_agent_private,
+ agent);
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ mad_send_wr = find_send_by_wr_id(mad_agent_priv, wr_id);
+ if (!mad_send_wr) {
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ goto out;
+ }
+
+ if (mad_send_wr->status == IB_WC_SUCCESS)
+ mad_send_wr->refcount -= (mad_send_wr->timeout > 0);
+
+ if (mad_send_wr->refcount != 0) {
+ mad_send_wr->status = IB_WC_WR_FLUSH_ERR;
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ goto out;
+ }
+
+ list_del(&mad_send_wr->agent_list);
+ adjust_timeout(mad_agent_priv);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ mad_send_wc.status = IB_WC_WR_FLUSH_ERR;
+ mad_send_wc.vendor_err = 0;
+ mad_send_wc.wr_id = mad_send_wr->wr_id;
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ &mad_send_wc);
+
+ kfree(mad_send_wr);
+ if (atomic_dec_and_test(&mad_agent_priv->refcount))
+ wake_up(&mad_agent_priv->wait);
+
+out:
+ return;
+}
+EXPORT_SYMBOL(ib_cancel_mad);
+
+static void local_completions(void *data)
+{
+ struct ib_mad_agent_private *mad_agent_priv;
+ struct ib_mad_local_private *local;
+ struct ib_mad_agent_private *recv_mad_agent;
+ unsigned long flags;
+ struct ib_wc wc;
+ struct ib_mad_send_wc mad_send_wc;
+
+ mad_agent_priv = (struct ib_mad_agent_private *)data;
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ while (!list_empty(&mad_agent_priv->local_list)) {
+ local = list_entry(mad_agent_priv->local_list.next,
+ struct ib_mad_local_private,
+ completion_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+ if (local->mad_priv) {
+ recv_mad_agent = local->recv_mad_agent;
+ if (!recv_mad_agent) {
+ printk(KERN_ERR PFX "No receive MAD agent for local completion\n");
+ kmem_cache_free(ib_mad_cache, local->mad_priv);
+ goto local_send_completion;
+ }
+
+ /*
+ * Defined behavior is to complete response
+ * before request
+ */
+ build_smp_wc(local->wr_id, IB_LID_PERMISSIVE,
+ 0 /* pkey index */,
+ recv_mad_agent->agent.port_num, &wc);
+
+ local->mad_priv->header.recv_wc.wc = &wc;
+ local->mad_priv->header.recv_wc.mad_len =
+ sizeof(struct ib_mad);
+ INIT_LIST_HEAD(&local->mad_priv->header.recv_wc.recv_buf.list);
+ local->mad_priv->header.recv_wc.recv_buf.grh = NULL;
+ local->mad_priv->header.recv_wc.recv_buf.mad =
+ &local->mad_priv->mad.mad;
+ if (atomic_read(&recv_mad_agent->qp_info->snoop_count))
+ snoop_recv(recv_mad_agent->qp_info,
+ &local->mad_priv->header.recv_wc,
+ IB_MAD_SNOOP_RECVS);
+ recv_mad_agent->agent.recv_handler(
+ &recv_mad_agent->agent,
+ &local->mad_priv->header.recv_wc);
+ spin_lock_irqsave(&recv_mad_agent->lock, flags);
+ atomic_dec(&recv_mad_agent->refcount);
+ spin_unlock_irqrestore(&recv_mad_agent->lock, flags);
+ }
+
+local_send_completion:
+ /* Complete send */
+ mad_send_wc.status = IB_WC_SUCCESS;
+ mad_send_wc.vendor_err = 0;
+ mad_send_wc.wr_id = local->wr_id;
+ if (atomic_read(&mad_agent_priv->qp_info->snoop_count))
+ snoop_send(mad_agent_priv->qp_info, &local->send_wr,
+ &mad_send_wc,
+ IB_MAD_SNOOP_SEND_COMPLETIONS);
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ &mad_send_wc);
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ list_del(&local->completion_list);
+ atomic_dec(&mad_agent_priv->refcount);
+ kfree(local);
+ }
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+}
+
+static void timeout_sends(void *data)
+{
+ struct ib_mad_agent_private *mad_agent_priv;
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_send_wc mad_send_wc;
+ unsigned long flags, delay;
+
+ mad_agent_priv = (struct ib_mad_agent_private *)data;
+
+ mad_send_wc.status = IB_WC_RESP_TIMEOUT_ERR;
+ mad_send_wc.vendor_err = 0;
+
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ while (!list_empty(&mad_agent_priv->wait_list)) {
+ mad_send_wr = list_entry(mad_agent_priv->wait_list.next,
+ struct ib_mad_send_wr_private,
+ agent_list);
+
+ if (time_after(mad_send_wr->timeout, jiffies)) {
+ delay = mad_send_wr->timeout - jiffies;
+ if ((long)delay <= 0)
+ delay = 1;
+ queue_delayed_work(mad_agent_priv->qp_info->
+ port_priv->wq,
+ &mad_agent_priv->timed_work, delay);
+ break;
+ }
+
+ list_del(&mad_send_wr->agent_list);
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+
+ mad_send_wc.wr_id = mad_send_wr->wr_id;
+ mad_agent_priv->agent.send_handler(&mad_agent_priv->agent,
+ &mad_send_wc);
+
+ kfree(mad_send_wr);
+ atomic_dec(&mad_agent_priv->refcount);
+ spin_lock_irqsave(&mad_agent_priv->lock, flags);
+ }
+ spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
+}
+
+static void ib_mad_thread_completion_handler(struct ib_cq *cq)
+{
+ struct ib_mad_port_private *port_priv = cq->cq_context;
+
+ queue_work(port_priv->wq, &port_priv->work);
+}
+
+/*
+ * Allocate receive MADs and post receive WRs for them
+ */
+static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_private *mad)
+{
+ unsigned long flags;
+ int post, ret;
+ struct ib_mad_private *mad_priv;
+ struct ib_sge sg_list;
+ struct ib_recv_wr recv_wr, *bad_recv_wr;
+ struct ib_mad_queue *recv_queue = &qp_info->recv_queue;
+
+ /* Initialize common scatter list fields */
+ sg_list.length = sizeof *mad_priv - sizeof mad_priv->header;
+ sg_list.lkey = (*qp_info->port_priv->mr).lkey;
+
+ /* Initialize common receive WR fields */
+ recv_wr.next = NULL;
+ recv_wr.sg_list = &sg_list;
+ recv_wr.num_sge = 1;
+ recv_wr.recv_flags = IB_RECV_SIGNALED;
+
+ do {
+ /* Allocate and map receive buffer */
+ if (mad) {
+ mad_priv = mad;
+ mad = NULL;
+ } else {
+ mad_priv = kmem_cache_alloc(ib_mad_cache, GFP_KERNEL);
+ if (!mad_priv) {
+ printk(KERN_ERR PFX "No memory for receive buffer\n");
+ ret = -ENOMEM;
+ break;
+ }
+ }
+ sg_list.addr = dma_map_single(qp_info->port_priv->
+ device->dma_device,
+ &mad_priv->grh,
+ sizeof *mad_priv -
+ sizeof mad_priv->header,
+ DMA_FROM_DEVICE);
+ pci_unmap_addr_set(&mad_priv->header, mapping, sg_list.addr);
+ recv_wr.wr_id = (unsigned long)&mad_priv->header.mad_list;
+ mad_priv->header.mad_list.mad_queue = recv_queue;
+
+ /* Post receive WR */
+ spin_lock_irqsave(&recv_queue->lock, flags);
+ post = (++recv_queue->count < recv_queue->max_active);
+ list_add_tail(&mad_priv->header.mad_list.list, &recv_queue->list);
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
+ ret = ib_post_recv(qp_info->qp, &recv_wr, &bad_recv_wr);
+ if (ret) {
+ spin_lock_irqsave(&recv_queue->lock, flags);
+ list_del(&mad_priv->header.mad_list.list);
+ recv_queue->count--;
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
+ dma_unmap_single(qp_info->port_priv->device->dma_device,
+ pci_unmap_addr(&mad_priv->header,
+ mapping),
+ sizeof *mad_priv -
+ sizeof mad_priv->header,
+ DMA_FROM_DEVICE);
+ kmem_cache_free(ib_mad_cache, mad_priv);
+ printk(KERN_ERR PFX "ib_post_recv failed: %d\n", ret);
+ break;
+ }
+ } while (post);
+
+ return ret;
+}
+
+/*
+ * Return all the posted receive MADs
+ */
+static void cleanup_recv_queue(struct ib_mad_qp_info *qp_info)
+{
+ struct ib_mad_private_header *mad_priv_hdr;
+ struct ib_mad_private *recv;
+ struct ib_mad_list_head *mad_list;
+
+ while (!list_empty(&qp_info->recv_queue.list)) {
+
+ mad_list = list_entry(qp_info->recv_queue.list.next,
+ struct ib_mad_list_head, list);
+ mad_priv_hdr = container_of(mad_list,
+ struct ib_mad_private_header,
+ mad_list);
+ recv = container_of(mad_priv_hdr, struct ib_mad_private,
+ header);
+
+ /* Remove from posted receive MAD list */
+ list_del(&mad_list->list);
+
+ /* Undo PCI mapping */
+ dma_unmap_single(qp_info->port_priv->device->dma_device,
+ pci_unmap_addr(&recv->header, mapping),
+ sizeof(struct ib_mad_private) -
+ sizeof(struct ib_mad_private_header),
+ DMA_FROM_DEVICE);
+ kmem_cache_free(ib_mad_cache, recv);
+ }
+
+ qp_info->recv_queue.count = 0;
+}
+
+/*
+ * Start the port
+ */
+static int ib_mad_port_start(struct ib_mad_port_private *port_priv)
+{
+ int ret, i;
+ struct ib_qp_attr *attr;
+ struct ib_qp *qp;
+
+ attr = kmalloc(sizeof *attr, GFP_KERNEL);
+ if (!attr) {
+ printk(KERN_ERR PFX "Couldn't kmalloc ib_qp_attr\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < IB_MAD_QPS_CORE; i++) {
+ qp = port_priv->qp_info[i].qp;
+ /*
+ * PKey index for QP1 is irrelevant but
+ * one is needed for the Reset to Init transition
+ */
+ attr->qp_state = IB_QPS_INIT;
+ attr->pkey_index = 0;
+ attr->qkey = (qp->qp_num == 0) ? 0 : IB_QP1_QKEY;
+ ret = ib_modify_qp(qp, attr, IB_QP_STATE |
+ IB_QP_PKEY_INDEX | IB_QP_QKEY);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't change QP%d state to "
+ "INIT: %d\n", i, ret);
+ goto out;
+ }
+
+ attr->qp_state = IB_QPS_RTR;
+ ret = ib_modify_qp(qp, attr, IB_QP_STATE);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't change QP%d state to "
+ "RTR: %d\n", i, ret);
+ goto out;
+ }
+
+ attr->qp_state = IB_QPS_RTS;
+ attr->sq_psn = IB_MAD_SEND_Q_PSN;
+ ret = ib_modify_qp(qp, attr, IB_QP_STATE | IB_QP_SQ_PSN);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't change QP%d state to "
+ "RTS: %d\n", i, ret);
+ goto out;
+ }
+ }
+
+ ret = ib_req_notify_cq(port_priv->cq, IB_CQ_NEXT_COMP);
+ if (ret) {
+ printk(KERN_ERR PFX "Failed to request completion "
+ "notification: %d\n", ret);
+ goto out;
+ }
+
+ for (i = 0; i < IB_MAD_QPS_CORE; i++) {
+ ret = ib_mad_post_receive_mads(&port_priv->qp_info[i], NULL);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't post receive WRs\n");
+ goto out;
+ }
+ }
+out:
+ kfree(attr);
+ return ret;
+}
+
+static void qp_event_handler(struct ib_event *event, void *qp_context)
+{
+ struct ib_mad_qp_info *qp_info = qp_context;
+
+ /* It's worse than that! He's dead, Jim! */
+ printk(KERN_ERR PFX "Fatal error (%d) on MAD QP (%d)\n",
+ event->event, qp_info->qp->qp_num);
+}
+
+static void init_mad_queue(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_queue *mad_queue)
+{
+ mad_queue->qp_info = qp_info;
+ mad_queue->count = 0;
+ spin_lock_init(&mad_queue->lock);
+ INIT_LIST_HEAD(&mad_queue->list);
+}
+
+static void init_mad_qp(struct ib_mad_port_private *port_priv,
+ struct ib_mad_qp_info *qp_info)
+{
+ qp_info->port_priv = port_priv;
+ init_mad_queue(qp_info, &qp_info->send_queue);
+ init_mad_queue(qp_info, &qp_info->recv_queue);
+ INIT_LIST_HEAD(&qp_info->overflow_list);
+ spin_lock_init(&qp_info->snoop_lock);
+ qp_info->snoop_table = NULL;
+ qp_info->snoop_table_size = 0;
+ atomic_set(&qp_info->snoop_count, 0);
+}
+
+static int create_mad_qp(struct ib_mad_qp_info *qp_info,
+ enum ib_qp_type qp_type)
+{
+ struct ib_qp_init_attr qp_init_attr;
+ int ret;
+
+ memset(&qp_init_attr, 0, sizeof qp_init_attr);
+ qp_init_attr.send_cq = qp_info->port_priv->cq;
+ qp_init_attr.recv_cq = qp_info->port_priv->cq;
+ qp_init_attr.sq_sig_type = IB_SIGNAL_ALL_WR;
+ qp_init_attr.rq_sig_type = IB_SIGNAL_ALL_WR;
+ qp_init_attr.cap.max_send_wr = IB_MAD_QP_SEND_SIZE;
+ qp_init_attr.cap.max_recv_wr = IB_MAD_QP_RECV_SIZE;
+ qp_init_attr.cap.max_send_sge = IB_MAD_SEND_REQ_MAX_SG;
+ qp_init_attr.cap.max_recv_sge = IB_MAD_RECV_REQ_MAX_SG;
+ qp_init_attr.qp_type = qp_type;
+ qp_init_attr.port_num = qp_info->port_priv->port_num;
+ qp_init_attr.qp_context = qp_info;
+ qp_init_attr.event_handler = qp_event_handler;
+ qp_info->qp = ib_create_qp(qp_info->port_priv->pd, &qp_init_attr);
+ if (IS_ERR(qp_info->qp)) {
+ printk(KERN_ERR PFX "Couldn't create ib_mad QP%d\n",
+ get_spl_qp_index(qp_type));
+ ret = PTR_ERR(qp_info->qp);
+ goto error;
+ }
+ /* Use minimum queue sizes unless the CQ is resized */
+ qp_info->send_queue.max_active = IB_MAD_QP_SEND_SIZE;
+ qp_info->recv_queue.max_active = IB_MAD_QP_RECV_SIZE;
+ return 0;
+
+error:
+ return ret;
+}
+
+static void destroy_mad_qp(struct ib_mad_qp_info *qp_info)
+{
+ ib_destroy_qp(qp_info->qp);
+ if (qp_info->snoop_table)
+ kfree(qp_info->snoop_table);
+}
+
+/*
+ * Open the port
+ * Create the QP, PD, MR, and CQ if needed
+ */
+static int ib_mad_port_open(struct ib_device *device,
+ int port_num)
+{
+ int ret, cq_size;
+ struct ib_mad_port_private *port_priv;
+ unsigned long flags;
+ char name[sizeof "ib_mad123"];
+
+ /* First, check if port already open at MAD layer */
+ port_priv = ib_get_mad_port(device, port_num);
+ if (port_priv) {
+ printk(KERN_DEBUG PFX "%s port %d already open\n",
+ device->name, port_num);
+ return 0;
+ }
+
+ /* Create new device info */
+ port_priv = kmalloc(sizeof *port_priv, GFP_KERNEL);
+ if (!port_priv) {
+ printk(KERN_ERR PFX "No memory for ib_mad_port_private\n");
+ return -ENOMEM;
+ }
+ memset(port_priv, 0, sizeof *port_priv);
+ port_priv->device = device;
+ port_priv->port_num = port_num;
+ spin_lock_init(&port_priv->reg_lock);
+ INIT_LIST_HEAD(&port_priv->agent_list);
+ init_mad_qp(port_priv, &port_priv->qp_info[0]);
+ init_mad_qp(port_priv, &port_priv->qp_info[1]);
+
+ cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2;
+ port_priv->cq = ib_create_cq(port_priv->device,
+ (ib_comp_handler)
+ ib_mad_thread_completion_handler,
+ NULL, port_priv, cq_size);
+ if (IS_ERR(port_priv->cq)) {
+ printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n");
+ ret = PTR_ERR(port_priv->cq);
+ goto error3;
+ }
+
+ port_priv->pd = ib_alloc_pd(device);
+ if (IS_ERR(port_priv->pd)) {
+ printk(KERN_ERR PFX "Couldn't create ib_mad PD\n");
+ ret = PTR_ERR(port_priv->pd);
+ goto error4;
+ }
+
+ port_priv->mr = ib_get_dma_mr(port_priv->pd, IB_ACCESS_LOCAL_WRITE);
+ if (IS_ERR(port_priv->mr)) {
+ printk(KERN_ERR PFX "Couldn't get ib_mad DMA MR\n");
+ ret = PTR_ERR(port_priv->mr);
+ goto error5;
+ }
+
+ ret = create_mad_qp(&port_priv->qp_info[0], IB_QPT_SMI);
+ if (ret)
+ goto error6;
+ ret = create_mad_qp(&port_priv->qp_info[1], IB_QPT_GSI);
+ if (ret)
+ goto error7;
+
+ snprintf(name, sizeof name, "ib_mad%d", port_num);
+ port_priv->wq = create_singlethread_workqueue(name);
+ if (!port_priv->wq) {
+ ret = -ENOMEM;
+ goto error8;
+ }
+ INIT_WORK(&port_priv->work, ib_mad_completion_handler, port_priv);
+
+ ret = ib_mad_port_start(port_priv);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't start port\n");
+ goto error9;
+ }
+
+ spin_lock_irqsave(&ib_mad_port_list_lock, flags);
+ list_add_tail(&port_priv->port_list, &ib_mad_port_list);
+ spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
+ return 0;
+
+error9:
+ destroy_workqueue(port_priv->wq);
+error8:
+ destroy_mad_qp(&port_priv->qp_info[1]);
+error7:
+ destroy_mad_qp(&port_priv->qp_info[0]);
+error6:
+ ib_dereg_mr(port_priv->mr);
+error5:
+ ib_dealloc_pd(port_priv->pd);
+error4:
+ ib_destroy_cq(port_priv->cq);
+ cleanup_recv_queue(&port_priv->qp_info[1]);
+ cleanup_recv_queue(&port_priv->qp_info[0]);
+error3:
+ kfree(port_priv);
+
+ return ret;
+}
+
+/*
+ * Close the port
+ * If there are no classes using the port, free the port
+ * resources (CQ, MR, PD, QP) and remove the port's info structure
+ */
+static int ib_mad_port_close(struct ib_device *device, int port_num)
+{
+ struct ib_mad_port_private *port_priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ib_mad_port_list_lock, flags);
+ port_priv = __ib_get_mad_port(device, port_num);
+ if (port_priv == NULL) {
+ spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
+ printk(KERN_ERR PFX "Port %d not found\n", port_num);
+ return -ENODEV;
+ }
+ list_del(&port_priv->port_list);
+ spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
+
+ /* Stop processing completions. */
+ flush_workqueue(port_priv->wq);
+ destroy_workqueue(port_priv->wq);
+ destroy_mad_qp(&port_priv->qp_info[1]);
+ destroy_mad_qp(&port_priv->qp_info[0]);
+ ib_dereg_mr(port_priv->mr);
+ ib_dealloc_pd(port_priv->pd);
+ ib_destroy_cq(port_priv->cq);
+ cleanup_recv_queue(&port_priv->qp_info[1]);
+ cleanup_recv_queue(&port_priv->qp_info[0]);
+ /* XXX: Handle deallocation of MAD registration tables */
+
+ kfree(port_priv);
+
+ return 0;
+}
+
+static void ib_mad_init_device(struct ib_device *device)
+{
+ int ret, num_ports, cur_port, i, ret2;
+
+ if (device->node_type == IB_NODE_SWITCH) {
+ num_ports = 1;
+ cur_port = 0;
+ } else {
+ num_ports = device->phys_port_cnt;
+ cur_port = 1;
+ }
+ for (i = 0; i < num_ports; i++, cur_port++) {
+ ret = ib_mad_port_open(device, cur_port);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't open %s port %d\n",
+ device->name, cur_port);
+ goto error_device_open;
+ }
+ ret = ib_agent_port_open(device, cur_port);
+ if (ret) {
+ printk(KERN_ERR PFX "Couldn't open %s port %d "
+ "for agents\n",
+ device->name, cur_port);
+ goto error_device_open;
+ }
+ }
+
+ goto error_device_query;
+
+error_device_open:
+ while (i > 0) {
+ cur_port--;
+ ret2 = ib_agent_port_close(device, cur_port);
+ if (ret2) {
+ printk(KERN_ERR PFX "Couldn't close %s port %d "
+ "for agents\n",
+ device->name, cur_port);
+ }
+ ret2 = ib_mad_port_close(device, cur_port);
+ if (ret2) {
+ printk(KERN_ERR PFX "Couldn't close %s port %d\n",
+ device->name, cur_port);
+ }
+ i--;
+ }
+
+error_device_query:
+ return;
+}
+
+static void ib_mad_remove_device(struct ib_device *device)
+{
+ int ret = 0, i, num_ports, cur_port, ret2;
+
+ if (device->node_type == IB_NODE_SWITCH) {
+ num_ports = 1;
+ cur_port = 0;
+ } else {
+ num_ports = device->phys_port_cnt;
+ cur_port = 1;
+ }
+ for (i = 0; i < num_ports; i++, cur_port++) {
+ ret2 = ib_agent_port_close(device, cur_port);
+ if (ret2) {
+ printk(KERN_ERR PFX "Couldn't close %s port %d "
+ "for agents\n",
+ device->name, cur_port);
+ if (!ret)
+ ret = ret2;
+ }
+ ret2 = ib_mad_port_close(device, cur_port);
+ if (ret2) {
+ printk(KERN_ERR PFX "Couldn't close %s port %d\n",
+ device->name, cur_port);
+ if (!ret)
+ ret = ret2;
+ }
+ }
+}
+
+static struct ib_client mad_client = {
+ .name = "mad",
+ .add = ib_mad_init_device,
+ .remove = ib_mad_remove_device
+};
+
+static int __init ib_mad_init_module(void)
+{
+ int ret;
+
+ spin_lock_init(&ib_mad_port_list_lock);
+ spin_lock_init(&ib_agent_port_list_lock);
+
+ ib_mad_cache = kmem_cache_create("ib_mad",
+ sizeof(struct ib_mad_private),
+ 0,
+ SLAB_HWCACHE_ALIGN,
+ NULL,
+ NULL);
+ if (!ib_mad_cache) {
+ printk(KERN_ERR PFX "Couldn't create ib_mad cache\n");
+ ret = -ENOMEM;
+ goto error1;
+ }
+
+ INIT_LIST_HEAD(&ib_mad_port_list);
+
+ if (ib_register_client(&mad_client)) {
+ printk(KERN_ERR PFX "Couldn't register ib_mad client\n");
+ ret = -EINVAL;
+ goto error2;
+ }
+
+ return 0;
+
+error2:
+ kmem_cache_destroy(ib_mad_cache);
+error1:
+ return ret;
+}
+
+static void __exit ib_mad_cleanup_module(void)
+{
+ ib_unregister_client(&mad_client);
+
+ if (kmem_cache_destroy(ib_mad_cache)) {
+ printk(KERN_DEBUG PFX "Failed to destroy ib_mad cache\n");
+ }
+}
+
+module_init(ib_mad_init_module);
+module_exit(ib_mad_cleanup_module);
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005, Voltaire, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mad_priv.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#ifndef __IB_MAD_PRIV_H__
+#define __IB_MAD_PRIV_H__
+
+#include <linux/pci.h>
+#include <linux/kthread.h>
+#include <linux/workqueue.h>
+#include <ib_mad.h>
+#include <ib_smi.h>
+
+
+#define PFX "ib_mad: "
+
+#define IB_MAD_QPS_CORE 2 /* Always QP0 and QP1 as a minimum */
+
+/* QP and CQ parameters */
+#define IB_MAD_QP_SEND_SIZE 128
+#define IB_MAD_QP_RECV_SIZE 512
+#define IB_MAD_SEND_REQ_MAX_SG 2
+#define IB_MAD_RECV_REQ_MAX_SG 1
+
+#define IB_MAD_SEND_Q_PSN 0
+
+/* Registration table sizes */
+#define MAX_MGMT_CLASS 80
+#define MAX_MGMT_VERSION 8
+#define MAX_MGMT_OUI 8
+#define MAX_MGMT_VENDOR_RANGE2	(IB_MGMT_CLASS_VENDOR_RANGE2_END - \
+				 IB_MGMT_CLASS_VENDOR_RANGE2_START + 1)
+
+struct ib_mad_list_head {
+ struct list_head list;
+ struct ib_mad_queue *mad_queue;
+};
+
+struct ib_mad_private_header {
+ struct ib_mad_list_head mad_list;
+ struct ib_mad_recv_wc recv_wc;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+} __attribute__ ((packed));
+
+struct ib_mad_private {
+ struct ib_mad_private_header header;
+ struct ib_grh grh;
+ union {
+ struct ib_mad mad;
+ struct ib_rmpp_mad rmpp_mad;
+ struct ib_smp smp;
+ } mad;
+} __attribute__ ((packed));
+
+struct ib_mad_agent_private {
+ struct list_head agent_list;
+ struct ib_mad_agent agent;
+ struct ib_mad_reg_req *reg_req;
+ struct ib_mad_qp_info *qp_info;
+
+ spinlock_t lock;
+ struct list_head send_list;
+ struct list_head wait_list;
+ struct work_struct timed_work;
+ unsigned long timeout;
+ struct list_head local_list;
+ struct work_struct local_work;
+
+ atomic_t refcount;
+ wait_queue_head_t wait;
+ u8 rmpp_version;
+};
+
+struct ib_mad_snoop_private {
+ struct ib_mad_agent agent;
+ struct ib_mad_qp_info *qp_info;
+ int snoop_index;
+ int mad_snoop_flags;
+ atomic_t refcount;
+ wait_queue_head_t wait;
+};
+
+struct ib_mad_send_wr_private {
+ struct ib_mad_list_head mad_list;
+ struct list_head agent_list;
+ struct ib_mad_agent *agent;
+ struct ib_send_wr send_wr;
+ struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
+ u64 wr_id; /* client WR ID */
+ u64 tid;
+ unsigned long timeout;
+ int retry;
+ int refcount;
+ enum ib_wc_status status;
+};
+
+struct ib_mad_local_private {
+ struct list_head completion_list;
+ struct ib_mad_private *mad_priv;
+ struct ib_mad_agent_private *recv_mad_agent;
+ struct ib_send_wr send_wr;
+ struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
+ u64 wr_id; /* client WR ID */
+ u64 tid;
+};
+
+struct ib_mad_mgmt_method_table {
+ struct ib_mad_agent_private *agent[IB_MGMT_MAX_METHODS];
+};
+
+struct ib_mad_mgmt_class_table {
+ struct ib_mad_mgmt_method_table *method_table[MAX_MGMT_CLASS];
+};
+
+struct ib_mad_mgmt_vendor_class {
+ u8 oui[MAX_MGMT_OUI][3];
+ struct ib_mad_mgmt_method_table *method_table[MAX_MGMT_OUI];
+};
+
+struct ib_mad_mgmt_vendor_class_table {
+ struct ib_mad_mgmt_vendor_class *vendor_class[MAX_MGMT_VENDOR_RANGE2];
+};
+
+struct ib_mad_mgmt_version_table {
+ struct ib_mad_mgmt_class_table *class;
+ struct ib_mad_mgmt_vendor_class_table *vendor;
+};
+
+struct ib_mad_queue {
+ spinlock_t lock;
+ struct list_head list;
+ int count;
+ int max_active;
+ struct ib_mad_qp_info *qp_info;
+};
+
+struct ib_mad_qp_info {
+ struct ib_mad_port_private *port_priv;
+ struct ib_qp *qp;
+ struct ib_mad_queue send_queue;
+ struct ib_mad_queue recv_queue;
+ struct list_head overflow_list;
+ spinlock_t snoop_lock;
+ struct ib_mad_snoop_private **snoop_table;
+ int snoop_table_size;
+ atomic_t snoop_count;
+};
+
+struct ib_mad_port_private {
+ struct list_head port_list;
+ struct ib_device *device;
+ int port_num;
+ struct ib_cq *cq;
+ struct ib_pd *pd;
+ struct ib_mr *mr;
+
+ spinlock_t reg_lock;
+ struct ib_mad_mgmt_version_table version[MAX_MGMT_VERSION];
+ struct list_head agent_list;
+ struct workqueue_struct *wq;
+ struct work_struct work;
+ struct ib_mad_qp_info qp_info[IB_MAD_QPS_CORE];
+};
+
+#endif /* __IB_MAD_PRIV_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: sa_query.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/err.h>
+#include <linux/random.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/kref.h>
+#include <linux/idr.h>
+
+#include <ib_pack.h>
+#include <ib_sa.h>
+
+MODULE_AUTHOR("Roland Dreier");
+MODULE_DESCRIPTION("InfiniBand subnet administration query support");
+MODULE_LICENSE("Dual BSD/GPL");
+
+/*
+ * These two structures must be packed because they have 64-bit fields
+ * that are only 32-bit aligned. 64-bit architectures will lay them
+ * out wrong otherwise. (And unfortunately they are sent on the wire
+ * so we can't change the layout)
+ */
+struct ib_sa_hdr {
+ u64 sm_key;
+ u16 attr_offset;
+ u16 reserved;
+ ib_sa_comp_mask comp_mask;
+} __attribute__ ((packed));
+
+struct ib_sa_mad {
+ struct ib_mad_hdr mad_hdr;
+ struct ib_rmpp_hdr rmpp_hdr;
+ struct ib_sa_hdr sa_hdr;
+ u8 data[200];
+} __attribute__ ((packed));
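+
+/*
+ * A quick size sanity check (illustrative): the headers above plus the
+ * 200-byte data area add up to the 256-byte MAD size
+ * (24 + 12 + 20 + 200 = 256), so something along the lines of
+ *
+ *	BUILD_BUG_ON(sizeof (struct ib_sa_mad) != 256);
+ *
+ * placed in ib_sa_init(), for example, would catch a packing or
+ * alignment regression on any architecture.
+ */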
+
+struct ib_sa_sm_ah {
+ struct ib_ah *ah;
+ struct kref ref;
+};
+
+struct ib_sa_port {
+ struct ib_mad_agent *agent;
+ struct ib_mr *mr;
+ struct ib_sa_sm_ah *sm_ah;
+ struct work_struct update_task;
+ spinlock_t ah_lock;
+ u8 port_num;
+};
+
+struct ib_sa_device {
+ int start_port, end_port;
+ struct ib_event_handler event_handler;
+ struct ib_sa_port port[0];
+};
+
+struct ib_sa_query {
+ void (*callback)(struct ib_sa_query *, int, struct ib_sa_mad *);
+ void (*release)(struct ib_sa_query *);
+ struct ib_sa_port *port;
+ struct ib_sa_mad *mad;
+ struct ib_sa_sm_ah *sm_ah;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+ int id;
+};
+
+struct ib_sa_path_query {
+ void (*callback)(int, struct ib_sa_path_rec *, void *);
+ void *context;
+ struct ib_sa_query sa_query;
+};
+
+struct ib_sa_mcmember_query {
+ void (*callback)(int, struct ib_sa_mcmember_rec *, void *);
+ void *context;
+ struct ib_sa_query sa_query;
+};
+
+static void ib_sa_add_one(struct ib_device *device);
+static void ib_sa_remove_one(struct ib_device *device);
+
+static struct ib_client sa_client = {
+ .name = "sa",
+ .add = ib_sa_add_one,
+ .remove = ib_sa_remove_one
+};
+
+static spinlock_t idr_lock;
+static DEFINE_IDR(query_idr);
+
+static spinlock_t tid_lock;
+static u32 tid;
+
+enum {
+ IB_SA_ATTR_CLASS_PORTINFO = 0x01,
+ IB_SA_ATTR_NOTICE = 0x02,
+ IB_SA_ATTR_INFORM_INFO = 0x03,
+ IB_SA_ATTR_NODE_REC = 0x11,
+ IB_SA_ATTR_PORT_INFO_REC = 0x12,
+ IB_SA_ATTR_SL2VL_REC = 0x13,
+ IB_SA_ATTR_SWITCH_REC = 0x14,
+ IB_SA_ATTR_LINEAR_FDB_REC = 0x15,
+ IB_SA_ATTR_RANDOM_FDB_REC = 0x16,
+ IB_SA_ATTR_MCAST_FDB_REC = 0x17,
+ IB_SA_ATTR_SM_INFO_REC = 0x18,
+ IB_SA_ATTR_LINK_REC = 0x20,
+ IB_SA_ATTR_GUID_INFO_REC = 0x30,
+ IB_SA_ATTR_SERVICE_REC = 0x31,
+ IB_SA_ATTR_PARTITION_REC = 0x33,
+ IB_SA_ATTR_RANGE_REC = 0x34,
+ IB_SA_ATTR_PATH_REC = 0x35,
+ IB_SA_ATTR_VL_ARB_REC = 0x36,
+ IB_SA_ATTR_MC_GROUP_REC = 0x37,
+ IB_SA_ATTR_MC_MEMBER_REC = 0x38,
+ IB_SA_ATTR_TRACE_REC = 0x39,
+ IB_SA_ATTR_MULTI_PATH_REC = 0x3a,
+ IB_SA_ATTR_SERVICE_ASSOC_REC = 0x3b
+};
+
+#define PATH_REC_FIELD(field) \
+ .struct_offset_bytes = offsetof(struct ib_sa_path_rec, field), \
+ .struct_size_bytes = sizeof ((struct ib_sa_path_rec *) 0)->field, \
+ .field_name = "sa_path_rec:" #field
+
+static const struct ib_field path_rec_table[] = {
+ { RESERVED,
+ .offset_words = 0,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { RESERVED,
+ .offset_words = 1,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { PATH_REC_FIELD(dgid),
+ .offset_words = 2,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { PATH_REC_FIELD(sgid),
+ .offset_words = 6,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { PATH_REC_FIELD(dlid),
+ .offset_words = 10,
+ .offset_bits = 0,
+ .size_bits = 16 },
+ { PATH_REC_FIELD(slid),
+ .offset_words = 10,
+ .offset_bits = 16,
+ .size_bits = 16 },
+ { PATH_REC_FIELD(raw_traffic),
+ .offset_words = 11,
+ .offset_bits = 0,
+ .size_bits = 1 },
+ { RESERVED,
+ .offset_words = 11,
+ .offset_bits = 1,
+ .size_bits = 3 },
+ { PATH_REC_FIELD(flow_label),
+ .offset_words = 11,
+ .offset_bits = 4,
+ .size_bits = 20 },
+ { PATH_REC_FIELD(hop_limit),
+ .offset_words = 11,
+ .offset_bits = 24,
+ .size_bits = 8 },
+ { PATH_REC_FIELD(traffic_class),
+ .offset_words = 12,
+ .offset_bits = 0,
+ .size_bits = 8 },
+ { PATH_REC_FIELD(reversible),
+ .offset_words = 12,
+ .offset_bits = 8,
+ .size_bits = 1 },
+ { PATH_REC_FIELD(numb_path),
+ .offset_words = 12,
+ .offset_bits = 9,
+ .size_bits = 7 },
+ { PATH_REC_FIELD(pkey),
+ .offset_words = 12,
+ .offset_bits = 16,
+ .size_bits = 16 },
+ { RESERVED,
+ .offset_words = 13,
+ .offset_bits = 0,
+ .size_bits = 12 },
+ { PATH_REC_FIELD(sl),
+ .offset_words = 13,
+ .offset_bits = 12,
+ .size_bits = 4 },
+ { PATH_REC_FIELD(mtu_selector),
+ .offset_words = 13,
+ .offset_bits = 16,
+ .size_bits = 2 },
+ { PATH_REC_FIELD(mtu),
+ .offset_words = 13,
+ .offset_bits = 18,
+ .size_bits = 6 },
+ { PATH_REC_FIELD(rate_selector),
+ .offset_words = 13,
+ .offset_bits = 24,
+ .size_bits = 2 },
+ { PATH_REC_FIELD(rate),
+ .offset_words = 13,
+ .offset_bits = 26,
+ .size_bits = 6 },
+ { PATH_REC_FIELD(packet_life_time_selector),
+ .offset_words = 14,
+ .offset_bits = 0,
+ .size_bits = 2 },
+ { PATH_REC_FIELD(packet_life_time),
+ .offset_words = 14,
+ .offset_bits = 2,
+ .size_bits = 6 },
+ { PATH_REC_FIELD(preference),
+ .offset_words = 14,
+ .offset_bits = 8,
+ .size_bits = 8 },
+ { RESERVED,
+ .offset_words = 14,
+ .offset_bits = 16,
+ .size_bits = 48 },
+};
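+
+/*
+ * For example, the PATH_REC_FIELD(dlid) entry above records that the
+ * 16-bit dlid member of struct ib_sa_path_rec lives at word 10, bits
+ * 0-15 of the wire format; ib_pack() and ib_unpack() walk this table to
+ * convert between the host structure and the on-the-wire layout.
+ */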
+
+#define MCMEMBER_REC_FIELD(field) \
+ .struct_offset_bytes = offsetof(struct ib_sa_mcmember_rec, field), \
+ .struct_size_bytes = sizeof ((struct ib_sa_mcmember_rec *) 0)->field, \
+ .field_name = "sa_mcmember_rec:" #field
+
+static const struct ib_field mcmember_rec_table[] = {
+ { MCMEMBER_REC_FIELD(mgid),
+ .offset_words = 0,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { MCMEMBER_REC_FIELD(port_gid),
+ .offset_words = 4,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { MCMEMBER_REC_FIELD(qkey),
+ .offset_words = 8,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { MCMEMBER_REC_FIELD(mlid),
+ .offset_words = 9,
+ .offset_bits = 0,
+ .size_bits = 16 },
+ { MCMEMBER_REC_FIELD(mtu_selector),
+ .offset_words = 9,
+ .offset_bits = 16,
+ .size_bits = 2 },
+ { MCMEMBER_REC_FIELD(mtu),
+ .offset_words = 9,
+ .offset_bits = 18,
+ .size_bits = 6 },
+ { MCMEMBER_REC_FIELD(traffic_class),
+ .offset_words = 9,
+ .offset_bits = 24,
+ .size_bits = 8 },
+ { MCMEMBER_REC_FIELD(pkey),
+ .offset_words = 10,
+ .offset_bits = 0,
+ .size_bits = 16 },
+ { MCMEMBER_REC_FIELD(rate_selector),
+ .offset_words = 10,
+ .offset_bits = 16,
+ .size_bits = 2 },
+ { MCMEMBER_REC_FIELD(rate),
+ .offset_words = 10,
+ .offset_bits = 18,
+ .size_bits = 6 },
+ { MCMEMBER_REC_FIELD(packet_life_time_selector),
+ .offset_words = 10,
+ .offset_bits = 24,
+ .size_bits = 2 },
+ { MCMEMBER_REC_FIELD(packet_life_time),
+ .offset_words = 10,
+ .offset_bits = 26,
+ .size_bits = 6 },
+ { MCMEMBER_REC_FIELD(sl),
+ .offset_words = 11,
+ .offset_bits = 0,
+ .size_bits = 4 },
+ { MCMEMBER_REC_FIELD(flow_label),
+ .offset_words = 11,
+ .offset_bits = 4,
+ .size_bits = 20 },
+ { MCMEMBER_REC_FIELD(hop_limit),
+ .offset_words = 11,
+ .offset_bits = 24,
+ .size_bits = 8 },
+ { MCMEMBER_REC_FIELD(scope),
+ .offset_words = 12,
+ .offset_bits = 0,
+ .size_bits = 4 },
+ { MCMEMBER_REC_FIELD(join_state),
+ .offset_words = 12,
+ .offset_bits = 4,
+ .size_bits = 4 },
+ { MCMEMBER_REC_FIELD(proxy_join),
+ .offset_words = 12,
+ .offset_bits = 8,
+ .size_bits = 1 },
+ { RESERVED,
+ .offset_words = 12,
+ .offset_bits = 9,
+ .size_bits = 23 },
+};
+
+static void free_sm_ah(struct kref *kref)
+{
+ struct ib_sa_sm_ah *sm_ah = container_of(kref, struct ib_sa_sm_ah, ref);
+
+ ib_destroy_ah(sm_ah->ah);
+ kfree(sm_ah);
+}
+
+static void update_sm_ah(void *port_ptr)
+{
+ struct ib_sa_port *port = port_ptr;
+ struct ib_sa_sm_ah *new_ah, *old_ah;
+ struct ib_port_attr port_attr;
+ struct ib_ah_attr ah_attr;
+
+ if (ib_query_port(port->agent->device, port->port_num, &port_attr)) {
+ printk(KERN_WARNING "Couldn't query port\n");
+ return;
+ }
+
+ new_ah = kmalloc(sizeof *new_ah, GFP_KERNEL);
+ if (!new_ah) {
+ printk(KERN_WARNING "Couldn't allocate new SM AH\n");
+ return;
+ }
+
+ kref_init(&new_ah->ref);
+
+ memset(&ah_attr, 0, sizeof ah_attr);
+ ah_attr.dlid = port_attr.sm_lid;
+ ah_attr.sl = port_attr.sm_sl;
+ ah_attr.port_num = port->port_num;
+
+ new_ah->ah = ib_create_ah(port->agent->qp->pd, &ah_attr);
+ if (IS_ERR(new_ah->ah)) {
+ printk(KERN_WARNING "Couldn't create new SM AH\n");
+ kfree(new_ah);
+ return;
+ }
+
+ spin_lock_irq(&port->ah_lock);
+ old_ah = port->sm_ah;
+ port->sm_ah = new_ah;
+ spin_unlock_irq(&port->ah_lock);
+
+ if (old_ah)
+ kref_put(&old_ah->ref, free_sm_ah);
+}
+
+static void ib_sa_event(struct ib_event_handler *handler, struct ib_event *event)
+{
+ if (event->event == IB_EVENT_PORT_ERR ||
+ event->event == IB_EVENT_PORT_ACTIVE ||
+ event->event == IB_EVENT_LID_CHANGE ||
+ event->event == IB_EVENT_PKEY_CHANGE ||
+ event->event == IB_EVENT_SM_CHANGE) {
+ struct ib_sa_device *sa_dev =
+ ib_get_client_data(event->device, &sa_client);
+
+ schedule_work(&sa_dev->port[event->element.port_num -
+ sa_dev->start_port].update_task);
+ }
+}
+
+/**
+ * ib_sa_cancel_query - try to cancel an SA query
+ * @id:ID of query to cancel
+ * @query:query pointer to cancel
+ *
+ * Try to cancel an SA query. If the id and query don't match up or
+ * the query has already completed, nothing is done. Otherwise the
+ * query is canceled and will complete with a status of -EINTR.
+ */
+void ib_sa_cancel_query(int id, struct ib_sa_query *query)
+{
+ unsigned long flags;
+ struct ib_mad_agent *agent;
+
+ spin_lock_irqsave(&idr_lock, flags);
+ if (idr_find(&query_idr, id) != query) {
+ spin_unlock_irqrestore(&idr_lock, flags);
+ return;
+ }
+ agent = query->port->agent;
+ spin_unlock_irqrestore(&idr_lock, flags);
+
+ ib_cancel_mad(agent, id);
+}
+EXPORT_SYMBOL(ib_sa_cancel_query);
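+
+/*
+ * Usage sketch: a caller that kept the id returned by one of the query
+ * routines below, together with the query pointer, can abort the
+ * request with
+ *
+ *	ib_sa_cancel_query(id, query);
+ *
+ * and, unless the query already completed, its callback then runs with
+ * status -EINTR.
+ */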
+
+static void init_mad(struct ib_sa_mad *mad, struct ib_mad_agent *agent)
+{
+ unsigned long flags;
+
+ memset(mad, 0, sizeof *mad);
+
+ mad->mad_hdr.base_version = IB_MGMT_BASE_VERSION;
+ mad->mad_hdr.mgmt_class = IB_MGMT_CLASS_SUBN_ADM;
+ mad->mad_hdr.class_version = IB_SA_CLASS_VERSION;
+
+ spin_lock_irqsave(&tid_lock, flags);
+ mad->mad_hdr.tid =
+ cpu_to_be64(((u64) agent->hi_tid) << 32 | tid++);
+ spin_unlock_irqrestore(&tid_lock, flags);
+}
+
+static int send_mad(struct ib_sa_query *query, int timeout_ms)
+{
+ struct ib_sa_port *port = query->port;
+ unsigned long flags;
+ int ret;
+ struct ib_sge gather_list;
+ struct ib_send_wr *bad_wr, wr = {
+ .opcode = IB_WR_SEND,
+ .sg_list = &gather_list,
+ .num_sge = 1,
+ .send_flags = IB_SEND_SIGNALED,
+ .wr = {
+ .ud = {
+ .mad_hdr = &query->mad->mad_hdr,
+ .remote_qpn = 1,
+ .remote_qkey = IB_QP1_QKEY,
+ .timeout_ms = timeout_ms
+ }
+ }
+ };
+
+retry:
+ if (!idr_pre_get(&query_idr, GFP_ATOMIC))
+ return -ENOMEM;
+ spin_lock_irqsave(&idr_lock, flags);
+ ret = idr_get_new(&query_idr, query, &query->id);
+ spin_unlock_irqrestore(&idr_lock, flags);
+ if (ret == -EAGAIN)
+ goto retry;
+ if (ret)
+ return ret;
+
+ wr.wr_id = query->id;
+
+ spin_lock_irqsave(&port->ah_lock, flags);
+ kref_get(&port->sm_ah->ref);
+ query->sm_ah = port->sm_ah;
+ wr.wr.ud.ah = port->sm_ah->ah;
+ spin_unlock_irqrestore(&port->ah_lock, flags);
+
+ gather_list.addr = dma_map_single(port->agent->device->dma_device,
+ query->mad,
+ sizeof (struct ib_sa_mad),
+ DMA_TO_DEVICE);
+ gather_list.length = sizeof (struct ib_sa_mad);
+ gather_list.lkey = port->mr->lkey;
+ pci_unmap_addr_set(query, mapping, gather_list.addr);
+
+ ret = ib_post_send_mad(port->agent, &wr, &bad_wr);
+ if (ret) {
+ dma_unmap_single(port->agent->device->dma_device,
+ pci_unmap_addr(query, mapping),
+ sizeof (struct ib_sa_mad),
+ DMA_TO_DEVICE);
+ kref_put(&query->sm_ah->ref, free_sm_ah);
+ spin_lock_irqsave(&idr_lock, flags);
+ idr_remove(&query_idr, query->id);
+ spin_unlock_irqrestore(&idr_lock, flags);
+ }
+
+ return ret;
+}
+
+static void ib_sa_path_rec_callback(struct ib_sa_query *sa_query,
+ int status,
+ struct ib_sa_mad *mad)
+{
+ struct ib_sa_path_query *query =
+ container_of(sa_query, struct ib_sa_path_query, sa_query);
+
+ if (mad) {
+ struct ib_sa_path_rec rec;
+
+ ib_unpack(path_rec_table, ARRAY_SIZE(path_rec_table),
+ mad->data, &rec);
+ query->callback(status, &rec, query->context);
+ } else
+ query->callback(status, NULL, query->context);
+}
+
+static void ib_sa_path_rec_release(struct ib_sa_query *sa_query)
+{
+ kfree(sa_query->mad);
+ kfree(container_of(sa_query, struct ib_sa_path_query, sa_query));
+}
+
+/**
+ * ib_sa_path_rec_get - Start a Path get query
+ * @device:device to send query on
+ * @port_num: port number to send query on
+ * @rec:Path Record to send in query
+ * @comp_mask:component mask to send in query
+ * @timeout_ms:time to wait for response
+ * @gfp_mask:GFP mask to use for internal allocations
+ * @callback:function called when query completes, times out or is
+ * canceled
+ * @context:opaque user context passed to callback
+ * @sa_query:query context, used to cancel query
+ *
+ * Send a Path Record Get query to the SA to look up a path. The
+ * callback function will be called when the query completes (or
+ * fails); status is 0 for a successful response, -EINTR if the query
+ * is canceled, -ETIMEDOUT if the query timed out, or -EIO if an error
+ * occurred sending the query. The resp parameter of the callback is
+ * only valid if status is 0.
+ *
+ * If the return value of ib_sa_path_rec_get() is negative, it is an
+ * error code. Otherwise it is a query ID that can be used to cancel
+ * the query.
+ */
+int ib_sa_path_rec_get(struct ib_device *device, u8 port_num,
+ struct ib_sa_path_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_path_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **sa_query)
+{
+ struct ib_sa_path_query *query;
+ struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
+ struct ib_sa_port *port = &sa_dev->port[port_num - sa_dev->start_port];
+ struct ib_mad_agent *agent = port->agent;
+ int ret;
+
+ query = kmalloc(sizeof *query, gfp_mask);
+ if (!query)
+ return -ENOMEM;
+ query->sa_query.mad = kmalloc(sizeof *query->sa_query.mad, gfp_mask);
+ if (!query->sa_query.mad) {
+ kfree(query);
+ return -ENOMEM;
+ }
+
+ query->callback = callback;
+ query->context = context;
+
+ init_mad(query->sa_query.mad, agent);
+
+ query->sa_query.callback = ib_sa_path_rec_callback;
+ query->sa_query.release = ib_sa_path_rec_release;
+ query->sa_query.port = port;
+ query->sa_query.mad->mad_hdr.method = IB_MGMT_METHOD_GET;
+ query->sa_query.mad->mad_hdr.attr_id = cpu_to_be16(IB_SA_ATTR_PATH_REC);
+ query->sa_query.mad->sa_hdr.comp_mask = comp_mask;
+
+ ib_pack(path_rec_table, ARRAY_SIZE(path_rec_table),
+ rec, query->sa_query.mad->data);
+
+ *sa_query = &query->sa_query;
+ ret = send_mad(&query->sa_query, timeout_ms);
+ if (ret) {
+ *sa_query = NULL;
+ kfree(query->sa_query.mad);
+ kfree(query);
+ }
+
+ return ret ? ret : query->sa_query.id;
+}
+EXPORT_SYMBOL(ib_sa_path_rec_get);
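+
+/*
+ * Usage sketch (hypothetical caller; handle_path(), my_sgid, dst_gid and
+ * my_context are placeholders, and the IB_SA_PATH_REC_* component-mask
+ * bits are assumed to be the ones declared in ib_sa.h):
+ *
+ *	static void my_path_callback(int status, struct ib_sa_path_rec *resp,
+ *				     void *context)
+ *	{
+ *		if (!status)
+ *			handle_path(resp);
+ *	}
+ *
+ *	struct ib_sa_path_rec rec = { .sgid = my_sgid, .dgid = dst_gid };
+ *	struct ib_sa_query *query;
+ *	int id = ib_sa_path_rec_get(device, port_num, &rec,
+ *				    IB_SA_PATH_REC_DGID | IB_SA_PATH_REC_SGID,
+ *				    1000, GFP_KERNEL, my_path_callback,
+ *				    my_context, &query);
+ *
+ * A negative id is an error code; otherwise the pair (id, query) can be
+ * handed to ib_sa_cancel_query() while the request is outstanding.
+ */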
+
+static void ib_sa_mcmember_rec_callback(struct ib_sa_query *sa_query,
+ int status,
+ struct ib_sa_mad *mad)
+{
+ struct ib_sa_mcmember_query *query =
+ container_of(sa_query, struct ib_sa_mcmember_query, sa_query);
+
+ if (mad) {
+ struct ib_sa_mcmember_rec rec;
+
+ ib_unpack(mcmember_rec_table, ARRAY_SIZE(mcmember_rec_table),
+ mad->data, &rec);
+ query->callback(status, &rec, query->context);
+ } else
+ query->callback(status, NULL, query->context);
+}
+
+static void ib_sa_mcmember_rec_release(struct ib_sa_query *sa_query)
+{
+ kfree(sa_query->mad);
+ kfree(container_of(sa_query, struct ib_sa_mcmember_query, sa_query));
+}
+
+int ib_sa_mcmember_rec_query(struct ib_device *device, u8 port_num,
+ u8 method,
+ struct ib_sa_mcmember_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_mcmember_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **sa_query)
+{
+ struct ib_sa_mcmember_query *query;
+ struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
+ struct ib_sa_port *port = &sa_dev->port[port_num - sa_dev->start_port];
+ struct ib_mad_agent *agent = port->agent;
+ int ret;
+
+ query = kmalloc(sizeof *query, gfp_mask);
+ if (!query)
+ return -ENOMEM;
+ query->sa_query.mad = kmalloc(sizeof *query->sa_query.mad, gfp_mask);
+ if (!query->sa_query.mad) {
+ kfree(query);
+ return -ENOMEM;
+ }
+
+ query->callback = callback;
+ query->context = context;
+
+ init_mad(query->sa_query.mad, agent);
+
+ query->sa_query.callback = ib_sa_mcmember_rec_callback;
+ query->sa_query.release = ib_sa_mcmember_rec_release;
+ query->sa_query.port = port;
+ query->sa_query.mad->mad_hdr.method = method;
+ query->sa_query.mad->mad_hdr.attr_id = cpu_to_be16(IB_SA_ATTR_MC_MEMBER_REC);
+ query->sa_query.mad->sa_hdr.comp_mask = comp_mask;
+
+ ib_pack(mcmember_rec_table, ARRAY_SIZE(mcmember_rec_table),
+ rec, query->sa_query.mad->data);
+
+ *sa_query = &query->sa_query;
+ ret = send_mad(&query->sa_query, timeout_ms);
+ if (ret) {
+ *sa_query = NULL;
+ kfree(query->sa_query.mad);
+ kfree(query);
+ }
+
+ return ret ? ret : query->sa_query.id;
+}
+EXPORT_SYMBOL(ib_sa_mcmember_rec_query);
+
+static void send_handler(struct ib_mad_agent *agent,
+ struct ib_mad_send_wc *mad_send_wc)
+{
+ struct ib_sa_query *query;
+ unsigned long flags;
+
+ spin_lock_irqsave(&idr_lock, flags);
+ query = idr_find(&query_idr, mad_send_wc->wr_id);
+ spin_unlock_irqrestore(&idr_lock, flags);
+
+ if (!query)
+ return;
+
+ switch (mad_send_wc->status) {
+ case IB_WC_SUCCESS:
+ /* No callback -- already got recv */
+ break;
+ case IB_WC_RESP_TIMEOUT_ERR:
+ query->callback(query, -ETIMEDOUT, NULL);
+ break;
+ case IB_WC_WR_FLUSH_ERR:
+ query->callback(query, -EINTR, NULL);
+ break;
+ default:
+ query->callback(query, -EIO, NULL);
+ break;
+ }
+
+ dma_unmap_single(agent->device->dma_device,
+ pci_unmap_addr(query, mapping),
+ sizeof (struct ib_sa_mad),
+ DMA_TO_DEVICE);
+ kref_put(&query->sm_ah->ref, free_sm_ah);
+
+ query->release(query);
+
+ spin_lock_irqsave(&idr_lock, flags);
+ idr_remove(&query_idr, mad_send_wc->wr_id);
+ spin_unlock_irqrestore(&idr_lock, flags);
+}
+
+static void recv_handler(struct ib_mad_agent *mad_agent,
+ struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_sa_query *query;
+ unsigned long flags;
+
+ spin_lock_irqsave(&idr_lock, flags);
+ query = idr_find(&query_idr, mad_recv_wc->wc->wr_id);
+ spin_unlock_irqrestore(&idr_lock, flags);
+
+ if (query) {
+ if (mad_recv_wc->wc->status == IB_WC_SUCCESS)
+ query->callback(query,
+ mad_recv_wc->recv_buf.mad->mad_hdr.status ?
+ -EINVAL : 0,
+ (struct ib_sa_mad *) mad_recv_wc->recv_buf.mad);
+ else
+ query->callback(query, -EIO, NULL);
+ }
+
+ ib_free_recv_mad(mad_recv_wc);
+}
+
+static void ib_sa_add_one(struct ib_device *device)
+{
+ struct ib_sa_device *sa_dev;
+ int s, e, i;
+
+ if (device->node_type == IB_NODE_SWITCH)
+ s = e = 0;
+ else {
+ s = 1;
+ e = device->phys_port_cnt;
+ }
+
+ sa_dev = kmalloc(sizeof *sa_dev +
+ (e - s + 1) * sizeof (struct ib_sa_port),
+ GFP_KERNEL);
+ if (!sa_dev)
+ return;
+
+ sa_dev->start_port = s;
+ sa_dev->end_port = e;
+
+ for (i = 0; i <= e - s; ++i) {
+ sa_dev->port[i].mr = NULL;
+ sa_dev->port[i].sm_ah = NULL;
+ sa_dev->port[i].port_num = i + s;
+ spin_lock_init(&sa_dev->port[i].ah_lock);
+
+ sa_dev->port[i].agent =
+ ib_register_mad_agent(device, i + s, IB_QPT_GSI,
+ NULL, 0, send_handler,
+ recv_handler, sa_dev);
+ if (IS_ERR(sa_dev->port[i].agent))
+ goto err;
+
+ sa_dev->port[i].mr = ib_get_dma_mr(sa_dev->port[i].agent->qp->pd,
+ IB_ACCESS_LOCAL_WRITE);
+ if (IS_ERR(sa_dev->port[i].mr)) {
+ ib_unregister_mad_agent(sa_dev->port[i].agent);
+ goto err;
+ }
+
+ INIT_WORK(&sa_dev->port[i].update_task,
+ update_sm_ah, &sa_dev->port[i]);
+ }
+
+ ib_set_client_data(device, &sa_client, sa_dev);
+
+ /*
+ * We register our event handler after everything is set up,
+ * and then update our cached info after the event handler is
+ * registered to avoid any problems if a port changes state
+ * during our initialization.
+ */
+
+ INIT_IB_EVENT_HANDLER(&sa_dev->event_handler, device, ib_sa_event);
+ if (ib_register_event_handler(&sa_dev->event_handler))
+ goto err;
+
+ for (i = 0; i <= e - s; ++i)
+ update_sm_ah(&sa_dev->port[i]);
+
+ return;
+
+err:
+ while (--i >= 0) {
+ ib_dereg_mr(sa_dev->port[i].mr);
+ ib_unregister_mad_agent(sa_dev->port[i].agent);
+ }
+
+ kfree(sa_dev);
+
+ return;
+}
+
+static void ib_sa_remove_one(struct ib_device *device)
+{
+ struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
+ int i;
+
+ if (!sa_dev)
+ return;
+
+ ib_unregister_event_handler(&sa_dev->event_handler);
+
+ for (i = 0; i <= sa_dev->end_port - sa_dev->start_port; ++i) {
+ ib_unregister_mad_agent(sa_dev->port[i].agent);
+ kref_put(&sa_dev->port[i].sm_ah->ref, free_sm_ah);
+ }
+
+ kfree(sa_dev);
+}
+
+static int __init ib_sa_init(void)
+{
+ int ret;
+
+ spin_lock_init(&idr_lock);
+ spin_lock_init(&tid_lock);
+
+ get_random_bytes(&tid, sizeof tid);
+
+ ret = ib_register_client(&sa_client);
+ if (ret)
+ printk(KERN_ERR "Couldn't register ib_sa client\n");
+
+ return ret;
+}
+
+static void __exit ib_sa_cleanup(void)
+{
+ ib_unregister_client(&sa_client);
+}
+
+module_init(ib_sa_init);
+module_exit(ib_sa_cleanup);
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: smi.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <ib_smi.h>
+
+
+/*
+ * Fix up a directed route SMP for sending
+ * Return 0 if the SMP should be discarded
+ */
+int smi_handle_dr_smp_send(struct ib_smp *smp,
+ u8 node_type,
+ int port_num)
+{
+ u8 hop_ptr, hop_cnt;
+
+ hop_ptr = smp->hop_ptr;
+ hop_cnt = smp->hop_cnt;
+
+ /* See section 14.2.2.2, Vol 1 IB spec */
+ if (!ib_get_smp_direction(smp)) {
+ /* C14-9:1 */
+ if (hop_cnt && hop_ptr == 0) {
+ smp->hop_ptr++;
+ return (smp->initial_path[smp->hop_ptr] ==
+ port_num);
+ }
+
+ /* C14-9:2 */
+ if (hop_ptr && hop_ptr < hop_cnt) {
+ if (node_type != IB_NODE_SWITCH)
+ return 0;
+
+ /* smp->return_path set when received */
+ smp->hop_ptr++;
+ return (smp->initial_path[smp->hop_ptr] ==
+ port_num);
+ }
+
+ /* C14-9:3 -- We're at the end of the DR segment of path */
+ if (hop_ptr == hop_cnt) {
+ /* smp->return_path set when received */
+ smp->hop_ptr++;
+ return (node_type == IB_NODE_SWITCH ||
+ smp->dr_dlid == IB_LID_PERMISSIVE);
+ }
+
+ /* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
+ /* C14-9:5 -- Fail unreasonable hop pointer */
+ return (hop_ptr == hop_cnt + 1);
+
+ } else {
+ /* C14-13:1 */
+ if (hop_cnt && hop_ptr == hop_cnt + 1) {
+ smp->hop_ptr--;
+ return (smp->return_path[smp->hop_ptr] ==
+ port_num);
+ }
+
+ /* C14-13:2 */
+ if (2 <= hop_ptr && hop_ptr <= hop_cnt) {
+ if (node_type != IB_NODE_SWITCH)
+ return 0;
+
+ smp->hop_ptr--;
+ return (smp->return_path[smp->hop_ptr] ==
+ port_num);
+ }
+
+ /* C14-13:3 -- at the end of the DR segment of path */
+ if (hop_ptr == 1) {
+ smp->hop_ptr--;
+ /* C14-13:3 -- SMPs destined for SM shouldn't be here */
+ return (node_type == IB_NODE_SWITCH ||
+ smp->dr_slid == IB_LID_PERMISSIVE);
+ }
+
+ /* C14-13:4 -- hop_ptr = 0 -> should have gone to SM */
+ if (hop_ptr == 0)
+ return 1;
+
+ /* C14-13:5 -- Check for unreasonable hop pointer */
+ return 0;
+ }
+}
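+
+/*
+ * Worked example of the outbound rules above: an SMP with hop_cnt = 2
+ * leaves the SM with hop_ptr = 0.  Each pass through this send path
+ * increments hop_ptr (C14-9:1 at the source node, C14-9:2 at the
+ * intermediate switch, C14-9:3 at the end of the directed route), so
+ * hop_ptr reaches hop_cnt + 1 = 3 exactly when C14-9:4 says the SMP
+ * should be handed to the SMA/SM.  The returning SMP walks hop_ptr back
+ * down under the corresponding C14-13 rules.
+ */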
+
+/*
+ * Adjust information for a received SMP
+ * Return 0 if the SMP should be dropped
+ */
+int smi_handle_dr_smp_recv(struct ib_smp *smp,
+ u8 node_type,
+ int port_num,
+ int phys_port_cnt)
+{
+ u8 hop_ptr, hop_cnt;
+
+ hop_ptr = smp->hop_ptr;
+ hop_cnt = smp->hop_cnt;
+
+ /* See section 14.2.2.2, Vol 1 IB spec */
+ if (!ib_get_smp_direction(smp)) {
+ /* C14-9:1 -- sender should have incremented hop_ptr */
+ if (hop_cnt && hop_ptr == 0)
+ return 0;
+
+ /* C14-9:2 -- intermediate hop */
+ if (hop_ptr && hop_ptr < hop_cnt) {
+ if (node_type != IB_NODE_SWITCH)
+ return 0;
+
+ smp->return_path[hop_ptr] = port_num;
+ /* smp->hop_ptr updated when sending */
+ return (smp->initial_path[hop_ptr+1] <= phys_port_cnt);
+ }
+
+ /* C14-9:3 -- We're at the end of the DR segment of path */
+ if (hop_ptr == hop_cnt) {
+ if (hop_cnt)
+ smp->return_path[hop_ptr] = port_num;
+ /* smp->hop_ptr updated when sending */
+
+ return (node_type == IB_NODE_SWITCH ||
+ smp->dr_dlid == IB_LID_PERMISSIVE);
+ }
+
+ /* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
+ /* C14-9:5 -- fail unreasonable hop pointer */
+ return (hop_ptr == hop_cnt + 1);
+
+ } else {
+
+ /* C14-13:1 */
+ if (hop_cnt && hop_ptr == hop_cnt + 1) {
+ smp->hop_ptr--;
+ return (smp->return_path[smp->hop_ptr] ==
+ port_num);
+ }
+
+ /* C14-13:2 */
+ if (2 <= hop_ptr && hop_ptr <= hop_cnt) {
+ if (node_type != IB_NODE_SWITCH)
+ return 0;
+
+ /* smp->hop_ptr updated when sending */
+ return (smp->return_path[hop_ptr-1] <= phys_port_cnt);
+ }
+
+ /* C14-13:3 -- We're at the end of the DR segment of path */
+ if (hop_ptr == 1) {
+ if (smp->dr_slid == IB_LID_PERMISSIVE) {
+ /* giving SMP to SM - update hop_ptr */
+ smp->hop_ptr--;
+ return 1;
+ }
+ /* smp->hop_ptr updated when sending */
+ return (node_type == IB_NODE_SWITCH);
+ }
+
+ /* C14-13:4 -- hop_ptr = 0 -> give to SM */
+ /* C14-13:5 -- Check for unreasonable hop pointer */
+ return (hop_ptr == 0);
+ }
+}
+
+/*
+ * Return 1 if the received DR SMP should be forwarded to the send queue
+ * Return 0 if the SMP should be completed up the stack
+ */
+int smi_check_forward_dr_smp(struct ib_smp *smp)
+{
+ u8 hop_ptr, hop_cnt;
+
+ hop_ptr = smp->hop_ptr;
+ hop_cnt = smp->hop_cnt;
+
+ if (!ib_get_smp_direction(smp)) {
+ /* C14-9:2 -- intermediate hop */
+ if (hop_ptr && hop_ptr < hop_cnt)
+ return 1;
+
+ /* C14-9:3 -- at the end of the DR segment of path */
+ if (hop_ptr == hop_cnt)
+ return (smp->dr_dlid == IB_LID_PERMISSIVE);
+
+ /* C14-9:4 -- hop_ptr = hop_cnt + 1 -> give to SMA/SM */
+ if (hop_ptr == hop_cnt + 1)
+ return 1;
+ } else {
+ /* C14-13:2 */
+ if (2 <= hop_ptr && hop_ptr <= hop_cnt)
+ return 1;
+
+ /* C14-13:3 -- at the end of the DR segment of path */
+ if (hop_ptr == 1)
+ return (smp->dr_slid != IB_LID_PERMISSIVE);
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: smi.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#ifndef __SMI_H_
+#define __SMI_H_
+
+int smi_handle_dr_smp_recv(struct ib_smp *smp,
+ u8 node_type,
+ int port_num,
+ int phys_port_cnt);
+extern int smi_check_forward_dr_smp(struct ib_smp *smp);
+extern int smi_handle_dr_smp_send(struct ib_smp *smp,
+ u8 node_type,
+ int port_num);
+extern int smi_check_local_dr_smp(struct ib_smp *smp,
+ struct ib_device *device,
+ int port_num);
+
+/*
+ * Return 1 if the SMP should be handled by the local SMA/SM via process_mad
+ */
+static inline int smi_check_local_smp(struct ib_mad_agent *mad_agent,
+ struct ib_smp *smp)
+{
+ /* C14-9:3 -- We're at the end of the DR segment of path */
+ /* C14-9:4 -- Hop Pointer = Hop Count + 1 -> give to SMA/SM */
+ return ((mad_agent->device->process_mad &&
+ !ib_get_smp_direction(smp) &&
+ (smp->hop_ptr == smp->hop_cnt + 1)));
+}
+
+#endif /* __SMI_H_ */
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: sysfs.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include "core_priv.h"
+
+#include <ib_mad.h>
+
+struct ib_port {
+ struct kobject kobj;
+ struct ib_device *ibdev;
+ struct attribute_group gid_group;
+ struct attribute **gid_attr;
+ struct attribute_group pkey_group;
+ struct attribute **pkey_attr;
+ u8 port_num;
+};
+
+struct port_attribute {
+ struct attribute attr;
+ ssize_t (*show)(struct ib_port *, struct port_attribute *, char *buf);
+ ssize_t (*store)(struct ib_port *, struct port_attribute *,
+ const char *buf, size_t count);
+};
+
+#define PORT_ATTR(_name, _mode, _show, _store) \
+struct port_attribute port_attr_##_name = __ATTR(_name, _mode, _show, _store)
+
+#define PORT_ATTR_RO(_name) \
+struct port_attribute port_attr_##_name = __ATTR_RO(_name)
+
+struct port_table_attribute {
+ struct port_attribute attr;
+ int index;
+};
+
+static ssize_t port_attr_show(struct kobject *kobj,
+ struct attribute *attr, char *buf)
+{
+ struct port_attribute *port_attr =
+ container_of(attr, struct port_attribute, attr);
+ struct ib_port *p = container_of(kobj, struct ib_port, kobj);
+
+ if (!port_attr->show)
+ return 0;
+
+ return port_attr->show(p, port_attr, buf);
+}
+
+static struct sysfs_ops port_sysfs_ops = {
+ .show = port_attr_show
+};
+
+static ssize_t state_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ static const char *state_name[] = {
+ [IB_PORT_NOP] = "NOP",
+ [IB_PORT_DOWN] = "DOWN",
+ [IB_PORT_INIT] = "INIT",
+ [IB_PORT_ARMED] = "ARMED",
+ [IB_PORT_ACTIVE] = "ACTIVE",
+ [IB_PORT_ACTIVE_DEFER] = "ACTIVE_DEFER"
+ };
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%d: %s\n", attr.state,
+	       attr.state >= 0 && attr.state < ARRAY_SIZE(state_name) ?
+ state_name[attr.state] : "UNKNOWN");
+}
+
+static ssize_t lid_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "0x%x\n", attr.lid);
+}
+
+static ssize_t lid_mask_count_show(struct ib_port *p,
+ struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%d\n", attr.lmc);
+}
+
+static ssize_t sm_lid_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "0x%x\n", attr.sm_lid);
+}
+
+static ssize_t sm_sl_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%d\n", attr.sm_sl);
+}
+
+static ssize_t cap_mask_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "0x%08x\n", attr.port_cap_flags);
+}
+
+static ssize_t rate_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+ char *speed = "";
+ int rate;
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ switch (attr.active_speed) {
+ case 2: speed = " DDR"; break;
+ case 4: speed = " QDR"; break;
+ }
+
+ rate = 25 * ib_width_enum_to_int(attr.active_width) * attr.active_speed;
+ if (rate < 0)
+ return -EINVAL;
+
+ return sprintf(buf, "%d%s Gb/sec (%dX%s)\n",
+ rate / 10, rate % 10 ? ".5" : "",
+ ib_width_enum_to_int(attr.active_width), speed);
+}
+
+static ssize_t phys_state_show(struct ib_port *p, struct port_attribute *unused,
+ char *buf)
+{
+ struct ib_port_attr attr;
+
+ ssize_t ret;
+
+ ret = ib_query_port(p->ibdev, p->port_num, &attr);
+ if (ret)
+ return ret;
+
+ switch (attr.phys_state) {
+ case 1: return sprintf(buf, "1: Sleep\n");
+ case 2: return sprintf(buf, "2: Polling\n");
+ case 3: return sprintf(buf, "3: Disabled\n");
+ case 4: return sprintf(buf, "4: PortConfigurationTraining\n");
+ case 5: return sprintf(buf, "5: LinkUp\n");
+ case 6: return sprintf(buf, "6: LinkErrorRecovery\n");
+ case 7: return sprintf(buf, "7: Phy Test\n");
+ default: return sprintf(buf, "%d: <unknown>\n", attr.phys_state);
+ }
+}
+
+static PORT_ATTR_RO(state);
+static PORT_ATTR_RO(lid);
+static PORT_ATTR_RO(lid_mask_count);
+static PORT_ATTR_RO(sm_lid);
+static PORT_ATTR_RO(sm_sl);
+static PORT_ATTR_RO(cap_mask);
+static PORT_ATTR_RO(rate);
+static PORT_ATTR_RO(phys_state);
+
+static struct attribute *port_default_attrs[] = {
+ &port_attr_state.attr,
+ &port_attr_lid.attr,
+ &port_attr_lid_mask_count.attr,
+ &port_attr_sm_lid.attr,
+ &port_attr_sm_sl.attr,
+ &port_attr_cap_mask.attr,
+ &port_attr_rate.attr,
+ &port_attr_phys_state.attr,
+ NULL
+};
+
+static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr,
+ char *buf)
+{
+ struct port_table_attribute *tab_attr =
+ container_of(attr, struct port_table_attribute, attr);
+ union ib_gid gid;
+ ssize_t ret;
+
+ ret = ib_query_gid(p->ibdev, p->port_num, tab_attr->index, &gid);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ be16_to_cpu(((u16 *) gid.raw)[0]),
+ be16_to_cpu(((u16 *) gid.raw)[1]),
+ be16_to_cpu(((u16 *) gid.raw)[2]),
+ be16_to_cpu(((u16 *) gid.raw)[3]),
+ be16_to_cpu(((u16 *) gid.raw)[4]),
+ be16_to_cpu(((u16 *) gid.raw)[5]),
+ be16_to_cpu(((u16 *) gid.raw)[6]),
+ be16_to_cpu(((u16 *) gid.raw)[7]));
+}
+
+static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr,
+ char *buf)
+{
+ struct port_table_attribute *tab_attr =
+ container_of(attr, struct port_table_attribute, attr);
+ u16 pkey;
+ ssize_t ret;
+
+ ret = ib_query_pkey(p->ibdev, p->port_num, tab_attr->index, &pkey);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "0x%04x\n", pkey);
+}
+
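+/*
+ * PORT_PMA_ATTR packs the PortCounters field layout into .index:
+ * bit offset in bits 15:0, field width in bits 23:16 and the counter
+ * number in bits 31:24; show_pma_counter() unpacks it again.
+ */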
+#define PORT_PMA_ATTR(_name, _counter, _width, _offset) \
+struct port_table_attribute port_pma_attr_##_name = { \
+ .attr = __ATTR(_name, S_IRUGO, show_pma_counter, NULL), \
+ .index = (_offset) | ((_width) << 16) | ((_counter) << 24) \
+}
+
+static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
+ char *buf)
+{
+ struct port_table_attribute *tab_attr =
+ container_of(attr, struct port_table_attribute, attr);
+ int offset = tab_attr->index & 0xffff;
+ int width = (tab_attr->index >> 16) & 0xff;
+ struct ib_mad *in_mad = NULL;
+ struct ib_mad *out_mad = NULL;
+ ssize_t ret;
+
+ if (!p->ibdev->process_mad)
+ return sprintf(buf, "N/A (no PMA)\n");
+
+ in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
+	out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
+ if (!in_mad || !out_mad) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->mad_hdr.base_version = 1;
+ in_mad->mad_hdr.mgmt_class = IB_MGMT_CLASS_PERF_MGMT;
+ in_mad->mad_hdr.class_version = 1;
+ in_mad->mad_hdr.method = IB_MGMT_METHOD_GET;
+ in_mad->mad_hdr.attr_id = cpu_to_be16(0x12); /* PortCounters */
+
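+	/* Counter fields start at byte 64 of the MAD, i.e. offset 40 into
+	 * data[], since the 24-byte MAD header is not part of data[]. */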
+ in_mad->data[41] = p->port_num; /* PortSelect field */
+
+ if ((p->ibdev->process_mad(p->ibdev, IB_MAD_IGNORE_MKEY,
+ p->port_num, NULL, NULL, in_mad, out_mad) &
+ (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) !=
+ (IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ switch (width) {
+ case 4:
+ ret = sprintf(buf, "%u\n", (out_mad->data[40 + offset / 8] >>
+					    (offset % 8)) & 0xf);
+ break;
+ case 8:
+ ret = sprintf(buf, "%u\n", out_mad->data[40 + offset / 8]);
+ break;
+ case 16:
+ ret = sprintf(buf, "%u\n",
+ be16_to_cpup((u16 *)(out_mad->data + 40 + offset / 8)));
+ break;
+ case 32:
+ ret = sprintf(buf, "%u\n",
+ be32_to_cpup((u32 *)(out_mad->data + 40 + offset / 8)));
+ break;
+ default:
+ ret = 0;
+ }
+
+out:
+ kfree(in_mad);
+ kfree(out_mad);
+
+ return ret;
+}
+
+static PORT_PMA_ATTR(symbol_error , 0, 16, 32);
+static PORT_PMA_ATTR(link_error_recovery , 1, 8, 48);
+static PORT_PMA_ATTR(link_downed , 2, 8, 56);
+static PORT_PMA_ATTR(port_rcv_errors , 3, 16, 64);
+static PORT_PMA_ATTR(port_rcv_remote_physical_errors, 4, 16, 80);
+static PORT_PMA_ATTR(port_rcv_switch_relay_errors , 5, 16, 96);
+static PORT_PMA_ATTR(port_xmit_discards , 6, 16, 112);
+static PORT_PMA_ATTR(port_xmit_constraint_errors , 7, 8, 128);
+static PORT_PMA_ATTR(port_rcv_constraint_errors , 8, 8, 136);
+static PORT_PMA_ATTR(local_link_integrity_errors , 9, 4, 152);
+static PORT_PMA_ATTR(excessive_buffer_overrun_errors, 10, 4, 156);
+static PORT_PMA_ATTR(VL15_dropped , 11, 16, 176);
+static PORT_PMA_ATTR(port_xmit_data , 12, 32, 192);
+static PORT_PMA_ATTR(port_rcv_data , 13, 32, 224);
+static PORT_PMA_ATTR(port_xmit_packets , 14, 32, 256);
+static PORT_PMA_ATTR(port_rcv_packets , 15, 32, 288);
+
+static struct attribute *pma_attrs[] = {
+ &port_pma_attr_symbol_error.attr.attr,
+ &port_pma_attr_link_error_recovery.attr.attr,
+ &port_pma_attr_link_downed.attr.attr,
+ &port_pma_attr_port_rcv_errors.attr.attr,
+ &port_pma_attr_port_rcv_remote_physical_errors.attr.attr,
+ &port_pma_attr_port_rcv_switch_relay_errors.attr.attr,
+ &port_pma_attr_port_xmit_discards.attr.attr,
+ &port_pma_attr_port_xmit_constraint_errors.attr.attr,
+ &port_pma_attr_port_rcv_constraint_errors.attr.attr,
+ &port_pma_attr_local_link_integrity_errors.attr.attr,
+ &port_pma_attr_excessive_buffer_overrun_errors.attr.attr,
+ &port_pma_attr_VL15_dropped.attr.attr,
+ &port_pma_attr_port_xmit_data.attr.attr,
+ &port_pma_attr_port_rcv_data.attr.attr,
+ &port_pma_attr_port_xmit_packets.attr.attr,
+ &port_pma_attr_port_rcv_packets.attr.attr,
+ NULL
+};
+
+static struct attribute_group pma_group = {
+ .name = "counters",
+ .attrs = pma_attrs
+};
+
+static void ib_port_release(struct kobject *kobj)
+{
+ struct ib_port *p = container_of(kobj, struct ib_port, kobj);
+ struct attribute *a;
+ int i;
+
+ for (i = 0; (a = p->gid_attr[i]); ++i) {
+ kfree(a->name);
+ kfree(a);
+ }
+
+ for (i = 0; (a = p->pkey_attr[i]); ++i) {
+ kfree(a->name);
+ kfree(a);
+ }
+
+	kfree(p->gid_attr);
+	kfree(p->pkey_attr);
+	kfree(p);
+}
+
+static struct kobj_type port_type = {
+ .release = ib_port_release,
+ .sysfs_ops = &port_sysfs_ops,
+ .default_attrs = port_default_attrs
+};
+
+static void ib_device_release(struct class_device *cdev)
+{
+ struct ib_device *dev = container_of(cdev, struct ib_device, class_dev);
+
+ kfree(dev);
+}
+
+static int ib_device_hotplug(struct class_device *cdev, char **envp,
+ int num_envp, char *buf, int size)
+{
+ struct ib_device *dev = container_of(cdev, struct ib_device, class_dev);
+ int i = 0, len = 0;
+
+ if (add_hotplug_env_var(envp, num_envp, &i, buf, size, &len,
+ "NAME=%s", dev->name))
+ return -ENOMEM;
+
+ /*
+ * It might be nice to pass the node GUID to hotplug, but
+ * right now the only way to get it is to query the device
+ * provider, and this can crash during device removal because
+	 * we will be running after driver removal has started.
+ * We could add a node_guid field to struct ib_device, or we
+ * could just let the hotplug script read the node GUID from
+ * sysfs when devices are added.
+ */
+
+ envp[i] = NULL;
+ return 0;
+}
+
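+/*
+ * Allocate a NULL-terminated array of per-index attributes named "0",
+ * "1", ... that all share the given show() method; used to build the
+ * per-port "gids" and "pkeys" groups.
+ */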
+static int alloc_group(struct attribute ***attr,
+ ssize_t (*show)(struct ib_port *,
+ struct port_attribute *, char *buf),
+ int len)
+{
+ struct port_table_attribute ***tab_attr =
+ (struct port_table_attribute ***) attr;
+ int i;
+ int ret;
+
+ *tab_attr = kmalloc((1 + len) * sizeof *tab_attr, GFP_KERNEL);
+ if (!*tab_attr)
+ return -ENOMEM;
+
+ memset(*tab_attr, 0, (1 + len) * sizeof *tab_attr);
+
+ for (i = 0; i < len; ++i) {
+ (*tab_attr)[i] = kmalloc(sizeof *(*tab_attr)[i], GFP_KERNEL);
+ if (!(*tab_attr)[i]) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ memset((*tab_attr)[i], 0, sizeof *(*tab_attr)[i]);
+ (*tab_attr)[i]->attr.attr.name = kmalloc(8, GFP_KERNEL);
+ if (!(*tab_attr)[i]->attr.attr.name) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ if (snprintf((*tab_attr)[i]->attr.attr.name, 8, "%d", i) >= 8) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ (*tab_attr)[i]->attr.attr.mode = S_IRUGO;
+ (*tab_attr)[i]->attr.attr.owner = THIS_MODULE;
+ (*tab_attr)[i]->attr.show = show;
+ (*tab_attr)[i]->index = i;
+ }
+
+ return 0;
+
+err:
+ for (i = 0; i < len; ++i) {
+ if ((*tab_attr)[i])
+ kfree((*tab_attr)[i]->attr.attr.name);
+ kfree((*tab_attr)[i]);
+ }
+
+ kfree(*tab_attr);
+
+ return ret;
+}
+
+static int add_port(struct ib_device *device, int port_num)
+{
+ struct ib_port *p;
+ struct ib_port_attr attr;
+ int i;
+ int ret;
+
+ ret = ib_query_port(device, port_num, &attr);
+ if (ret)
+ return ret;
+
+ p = kmalloc(sizeof *p, GFP_KERNEL);
+ if (!p)
+ return -ENOMEM;
+ memset(p, 0, sizeof *p);
+
+ p->ibdev = device;
+ p->port_num = port_num;
+ p->kobj.ktype = &port_type;
+
+ p->kobj.parent = kobject_get(&device->ports_parent);
+ if (!p->kobj.parent) {
+ ret = -EBUSY;
+ goto err;
+ }
+
+ ret = kobject_set_name(&p->kobj, "%d", port_num);
+ if (ret)
+ goto err_put;
+
+ ret = kobject_register(&p->kobj);
+ if (ret)
+ goto err_put;
+
+ ret = sysfs_create_group(&p->kobj, &pma_group);
+ if (ret)
+ goto err_put;
+
+ ret = alloc_group(&p->gid_attr, show_port_gid, attr.gid_tbl_len);
+ if (ret)
+ goto err_remove_pma;
+
+ p->gid_group.name = "gids";
+ p->gid_group.attrs = p->gid_attr;
+
+ ret = sysfs_create_group(&p->kobj, &p->gid_group);
+ if (ret)
+ goto err_free_gid;
+
+ ret = alloc_group(&p->pkey_attr, show_port_pkey, attr.pkey_tbl_len);
+ if (ret)
+ goto err_remove_gid;
+
+ p->pkey_group.name = "pkeys";
+ p->pkey_group.attrs = p->pkey_attr;
+
+ ret = sysfs_create_group(&p->kobj, &p->pkey_group);
+ if (ret)
+ goto err_free_pkey;
+
+ list_add_tail(&p->kobj.entry, &device->port_list);
+
+ return 0;
+
+err_free_pkey:
+ for (i = 0; i < attr.pkey_tbl_len; ++i) {
+ kfree(p->pkey_attr[i]->name);
+ kfree(p->pkey_attr[i]);
+ }
+
+ kfree(p->pkey_attr);
+
+err_remove_gid:
+ sysfs_remove_group(&p->kobj, &p->gid_group);
+
+err_free_gid:
+ for (i = 0; i < attr.gid_tbl_len; ++i) {
+ kfree(p->gid_attr[i]->name);
+ kfree(p->gid_attr[i]);
+ }
+
+ kfree(p->gid_attr);
+
+err_remove_pma:
+ sysfs_remove_group(&p->kobj, &pma_group);
+
+err_put:
+ kobject_put(&device->ports_parent);
+
+err:
+ kfree(p);
+ return ret;
+}
+
+static ssize_t show_node_type(struct class_device *cdev, char *buf)
+{
+ struct ib_device *dev = container_of(cdev, struct ib_device, class_dev);
+
+ switch (dev->node_type) {
+ case IB_NODE_CA: return sprintf(buf, "%d: CA\n", dev->node_type);
+ case IB_NODE_SWITCH: return sprintf(buf, "%d: switch\n", dev->node_type);
+ case IB_NODE_ROUTER: return sprintf(buf, "%d: router\n", dev->node_type);
+ default: return sprintf(buf, "%d: <unknown>\n", dev->node_type);
+ }
+}
+
+static ssize_t show_sys_image_guid(struct class_device *cdev, char *buf)
+{
+ struct ib_device *dev = container_of(cdev, struct ib_device, class_dev);
+ struct ib_device_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_device(dev, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%04x:%04x:%04x:%04x\n",
+ be16_to_cpu(((u16 *) &attr.sys_image_guid)[0]),
+ be16_to_cpu(((u16 *) &attr.sys_image_guid)[1]),
+ be16_to_cpu(((u16 *) &attr.sys_image_guid)[2]),
+ be16_to_cpu(((u16 *) &attr.sys_image_guid)[3]));
+}
+
+static ssize_t show_node_guid(struct class_device *cdev, char *buf)
+{
+ struct ib_device *dev = container_of(cdev, struct ib_device, class_dev);
+ struct ib_device_attr attr;
+ ssize_t ret;
+
+ ret = ib_query_device(dev, &attr);
+ if (ret)
+ return ret;
+
+ return sprintf(buf, "%04x:%04x:%04x:%04x\n",
+ be16_to_cpu(((u16 *) &attr.node_guid)[0]),
+ be16_to_cpu(((u16 *) &attr.node_guid)[1]),
+ be16_to_cpu(((u16 *) &attr.node_guid)[2]),
+ be16_to_cpu(((u16 *) &attr.node_guid)[3]));
+}
+
+static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);
+static CLASS_DEVICE_ATTR(sys_image_guid, S_IRUGO, show_sys_image_guid, NULL);
+static CLASS_DEVICE_ATTR(node_guid, S_IRUGO, show_node_guid, NULL);
+
+static struct class_device_attribute *ib_class_attributes[] = {
+ &class_device_attr_node_type,
+ &class_device_attr_sys_image_guid,
+ &class_device_attr_node_guid
+};
+
+static struct class ib_class = {
+ .name = "infiniband",
+ .release = ib_device_release,
+ .hotplug = ib_device_hotplug,
+};
+
+int ib_device_register_sysfs(struct ib_device *device)
+{
+ struct class_device *class_dev = &device->class_dev;
+ int ret;
+ int i;
+
+ class_dev->class = &ib_class;
+ class_dev->class_data = device;
+ strlcpy(class_dev->class_id, device->name, BUS_ID_SIZE);
+
+ INIT_LIST_HEAD(&device->port_list);
+
+ ret = class_device_register(class_dev);
+ if (ret)
+ goto err;
+
+ for (i = 0; i < ARRAY_SIZE(ib_class_attributes); ++i) {
+ ret = class_device_create_file(class_dev, ib_class_attributes[i]);
+ if (ret)
+ goto err_unregister;
+ }
+
+ device->ports_parent.parent = kobject_get(&class_dev->kobj);
+ if (!device->ports_parent.parent) {
+ ret = -EBUSY;
+ goto err_unregister;
+ }
+ ret = kobject_set_name(&device->ports_parent, "ports");
+ if (ret)
+ goto err_put;
+ ret = kobject_register(&device->ports_parent);
+ if (ret)
+ goto err_put;
+
+ if (device->node_type == IB_NODE_SWITCH) {
+ ret = add_port(device, 0);
+ if (ret)
+ goto err_put;
+ } else {
+ int i;
+
+ for (i = 1; i <= device->phys_port_cnt; ++i) {
+ ret = add_port(device, i);
+ if (ret)
+ goto err_put;
+ }
+ }
+
+ return 0;
+
+err_put:
+ {
+ struct kobject *p, *t;
+ struct ib_port *port;
+
+ list_for_each_entry_safe(p, t, &device->port_list, entry) {
+ list_del(&p->entry);
+ port = container_of(p, struct ib_port, kobj);
+ sysfs_remove_group(p, &pma_group);
+ sysfs_remove_group(p, &port->pkey_group);
+ sysfs_remove_group(p, &port->gid_group);
+ kobject_unregister(p);
+ }
+ }
+
+ kobject_put(&class_dev->kobj);
+
+err_unregister:
+ class_device_unregister(class_dev);
+
+err:
+ return ret;
+}
+
+void ib_device_unregister_sysfs(struct ib_device *device)
+{
+ struct kobject *p, *t;
+ struct ib_port *port;
+
+ list_for_each_entry_safe(p, t, &device->port_list, entry) {
+ list_del(&p->entry);
+ port = container_of(p, struct ib_port, kobj);
+ sysfs_remove_group(p, &pma_group);
+ sysfs_remove_group(p, &port->pkey_group);
+ sysfs_remove_group(p, &port->gid_group);
+ kobject_unregister(p);
+ }
+
+ kobject_unregister(&device->ports_parent);
+ class_device_unregister(&device->class_dev);
+}
+
+int ib_sysfs_setup(void)
+{
+ return class_register(&ib_class);
+}
+
+void ib_sysfs_cleanup(void)
+{
+ class_unregister(&ib_class);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: user_mad.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/poll.h>
+#include <linux/rwsem.h>
+#include <linux/kref.h>
+
+#include <asm/uaccess.h>
+#include <asm/semaphore.h>
+
+#include <ib_mad.h>
+#include <ib_user_mad.h>
+
+MODULE_AUTHOR("Roland Dreier");
+MODULE_DESCRIPTION("InfiniBand userspace MAD packet access");
+MODULE_LICENSE("Dual BSD/GPL");
+
+enum {
+ IB_UMAD_MAX_PORTS = 64,
+ IB_UMAD_MAX_AGENTS = 32,
+
+ IB_UMAD_MAJOR = 231,
+ IB_UMAD_MINOR_BASE = 0
+};
+
+struct ib_umad_port {
+ int devnum;
+ struct cdev dev;
+ struct class_device class_dev;
+
+ int sm_devnum;
+ struct cdev sm_dev;
+ struct class_device sm_class_dev;
+ struct semaphore sm_sem;
+
+ struct ib_device *ib_dev;
+ struct ib_umad_device *umad_dev;
+ u8 port_num;
+};
+
+struct ib_umad_device {
+ int start_port, end_port;
+ struct kref ref;
+ struct ib_umad_port port[0];
+};
+
+struct ib_umad_file {
+ struct ib_umad_port *port;
+ spinlock_t recv_lock;
+ struct list_head recv_list;
+ wait_queue_head_t recv_wait;
+ struct rw_semaphore agent_mutex;
+ struct ib_mad_agent *agent[IB_UMAD_MAX_AGENTS];
+ struct ib_mr *mr[IB_UMAD_MAX_AGENTS];
+};
+
+struct ib_umad_packet {
+ struct ib_user_mad mad;
+ struct ib_ah *ah;
+ struct list_head list;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+};
+
+static const dev_t base_dev = MKDEV(IB_UMAD_MAJOR, IB_UMAD_MINOR_BASE);
+static spinlock_t map_lock;
+static DECLARE_BITMAP(dev_map, IB_UMAD_MAX_PORTS * 2);
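+/* dev_map: the first IB_UMAD_MAX_PORTS bits track umadN minors, the second half issmN minors */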
+
+static void ib_umad_add_one(struct ib_device *device);
+static void ib_umad_remove_one(struct ib_device *device);
+
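+/*
+ * Add a packet to a file's receive list if the MAD agent is still
+ * registered with that file; returns nonzero if no matching agent was
+ * found, in which case the caller frees the packet.
+ */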
+static int queue_packet(struct ib_umad_file *file,
+ struct ib_mad_agent *agent,
+ struct ib_umad_packet *packet)
+{
+ int ret = 1;
+
+ down_read(&file->agent_mutex);
+ for (packet->mad.id = 0;
+ packet->mad.id < IB_UMAD_MAX_AGENTS;
+ packet->mad.id++)
+ if (agent == file->agent[packet->mad.id]) {
+ spin_lock_irq(&file->recv_lock);
+ list_add_tail(&packet->list, &file->recv_list);
+ spin_unlock_irq(&file->recv_lock);
+ wake_up_interruptible(&file->recv_wait);
+ ret = 0;
+ break;
+ }
+
+ up_read(&file->agent_mutex);
+
+ return ret;
+}
+
+static void send_handler(struct ib_mad_agent *agent,
+ struct ib_mad_send_wc *send_wc)
+{
+ struct ib_umad_file *file = agent->context;
+ struct ib_umad_packet *packet =
+ (void *) (unsigned long) send_wc->wr_id;
+
+ dma_unmap_single(agent->device->dma_device,
+ pci_unmap_addr(packet, mapping),
+ sizeof packet->mad.data,
+ DMA_TO_DEVICE);
+ ib_destroy_ah(packet->ah);
+
+ if (send_wc->status == IB_WC_RESP_TIMEOUT_ERR) {
+ packet->mad.status = ETIMEDOUT;
+
+ if (!queue_packet(file, agent, packet))
+ return;
+ }
+
+ kfree(packet);
+}
+
+static void recv_handler(struct ib_mad_agent *agent,
+ struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_umad_file *file = agent->context;
+ struct ib_umad_packet *packet;
+
+ if (mad_recv_wc->wc->status != IB_WC_SUCCESS)
+ goto out;
+
+ packet = kmalloc(sizeof *packet, GFP_KERNEL);
+ if (!packet)
+ goto out;
+
+ memset(packet, 0, sizeof *packet);
+
+ memcpy(packet->mad.data, mad_recv_wc->recv_buf.mad, sizeof packet->mad.data);
+ packet->mad.status = 0;
+ packet->mad.qpn = cpu_to_be32(mad_recv_wc->wc->src_qp);
+ packet->mad.lid = cpu_to_be16(mad_recv_wc->wc->slid);
+ packet->mad.sl = mad_recv_wc->wc->sl;
+ packet->mad.path_bits = mad_recv_wc->wc->dlid_path_bits;
+ packet->mad.grh_present = !!(mad_recv_wc->wc->wc_flags & IB_WC_GRH);
+ if (packet->mad.grh_present) {
+ /* XXX parse GRH */
+ packet->mad.gid_index = 0;
+ packet->mad.hop_limit = 0;
+ packet->mad.traffic_class = 0;
+ memset(packet->mad.gid, 0, 16);
+ packet->mad.flow_label = 0;
+ }
+
+ if (queue_packet(file, agent, packet))
+ kfree(packet);
+
+out:
+ ib_free_recv_mad(mad_recv_wc);
+}
+
+static ssize_t ib_umad_read(struct file *filp, char __user *buf,
+ size_t count, loff_t *pos)
+{
+ struct ib_umad_file *file = filp->private_data;
+ struct ib_umad_packet *packet;
+ ssize_t ret;
+
+ if (count < sizeof (struct ib_user_mad))
+ return -EINVAL;
+
+ spin_lock_irq(&file->recv_lock);
+
+ while (list_empty(&file->recv_list)) {
+ spin_unlock_irq(&file->recv_lock);
+
+ if (filp->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+
+ if (wait_event_interruptible(file->recv_wait,
+ !list_empty(&file->recv_list)))
+ return -ERESTARTSYS;
+
+ spin_lock_irq(&file->recv_lock);
+ }
+
+ packet = list_entry(file->recv_list.next, struct ib_umad_packet, list);
+ list_del(&packet->list);
+
+ spin_unlock_irq(&file->recv_lock);
+
+ if (copy_to_user(buf, &packet->mad, sizeof packet->mad))
+ ret = -EFAULT;
+ else
+ ret = sizeof packet->mad;
+
+ kfree(packet);
+ return ret;
+}
+
+static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ size_t count, loff_t *pos)
+{
+ struct ib_umad_file *file = filp->private_data;
+ struct ib_umad_packet *packet;
+ struct ib_mad_agent *agent;
+ struct ib_ah_attr ah_attr;
+ struct ib_sge gather_list;
+ struct ib_send_wr *bad_wr, wr = {
+ .opcode = IB_WR_SEND,
+ .sg_list = &gather_list,
+ .num_sge = 1,
+ .send_flags = IB_SEND_SIGNALED,
+ };
+ u8 method;
+ u64 *tid;
+ int ret;
+
+ if (count < sizeof (struct ib_user_mad))
+ return -EINVAL;
+
+ packet = kmalloc(sizeof *packet, GFP_KERNEL);
+ if (!packet)
+ return -ENOMEM;
+
+ if (copy_from_user(&packet->mad, buf, sizeof packet->mad)) {
+ kfree(packet);
+ return -EFAULT;
+ }
+
+ if (packet->mad.id < 0 || packet->mad.id >= IB_UMAD_MAX_AGENTS) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ down_read(&file->agent_mutex);
+
+ agent = file->agent[packet->mad.id];
+ if (!agent) {
+ ret = -EINVAL;
+ goto err_up;
+ }
+
+ /*
+ * If userspace is generating a request that will generate a
+ * response, we need to make sure the high-order part of the
+ * transaction ID matches the agent being used to send the
+ * MAD.
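+	 * Concretely, bits 63:32 of the TID are set to agent->hi_tid and
+	 * the low 32 bits chosen by userspace are preserved.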
+ */
+ method = ((struct ib_mad_hdr *) packet->mad.data)->method;
+
+ if (!(method & IB_MGMT_METHOD_RESP) &&
+ method != IB_MGMT_METHOD_TRAP_REPRESS &&
+ method != IB_MGMT_METHOD_SEND) {
+ tid = &((struct ib_mad_hdr *) packet->mad.data)->tid;
+ *tid = cpu_to_be64(((u64) agent->hi_tid) << 32 |
+ (be64_to_cpup(tid) & 0xffffffff));
+ }
+
+ memset(&ah_attr, 0, sizeof ah_attr);
+ ah_attr.dlid = be16_to_cpu(packet->mad.lid);
+ ah_attr.sl = packet->mad.sl;
+ ah_attr.src_path_bits = packet->mad.path_bits;
+ ah_attr.port_num = file->port->port_num;
+ if (packet->mad.grh_present) {
+ ah_attr.ah_flags = IB_AH_GRH;
+ memcpy(ah_attr.grh.dgid.raw, packet->mad.gid, 16);
+ ah_attr.grh.flow_label = packet->mad.flow_label;
+ ah_attr.grh.hop_limit = packet->mad.hop_limit;
+ ah_attr.grh.traffic_class = packet->mad.traffic_class;
+ }
+
+ packet->ah = ib_create_ah(agent->qp->pd, &ah_attr);
+ if (IS_ERR(packet->ah)) {
+ ret = PTR_ERR(packet->ah);
+ goto err_up;
+ }
+
+ gather_list.addr = dma_map_single(agent->device->dma_device,
+ packet->mad.data,
+ sizeof packet->mad.data,
+ DMA_TO_DEVICE);
+ gather_list.length = sizeof packet->mad.data;
+ gather_list.lkey = file->mr[packet->mad.id]->lkey;
+ pci_unmap_addr_set(packet, mapping, gather_list.addr);
+
+ wr.wr.ud.mad_hdr = (struct ib_mad_hdr *) packet->mad.data;
+ wr.wr.ud.ah = packet->ah;
+ wr.wr.ud.remote_qpn = be32_to_cpu(packet->mad.qpn);
+ wr.wr.ud.remote_qkey = be32_to_cpu(packet->mad.qkey);
+ wr.wr.ud.timeout_ms = packet->mad.timeout_ms;
+
+ wr.wr_id = (unsigned long) packet;
+
+ ret = ib_post_send_mad(agent, &wr, &bad_wr);
+ if (ret) {
+ dma_unmap_single(agent->device->dma_device,
+ pci_unmap_addr(packet, mapping),
+ sizeof packet->mad.data,
+ DMA_TO_DEVICE);
+ goto err_up;
+ }
+
+ up_read(&file->agent_mutex);
+
+ return sizeof packet->mad;
+
+err_up:
+ up_read(&file->agent_mutex);
+
+err:
+ kfree(packet);
+ return ret;
+}
+
+static unsigned int ib_umad_poll(struct file *filp, struct poll_table_struct *wait)
+{
+ struct ib_umad_file *file = filp->private_data;
+
+ /* we will always be able to post a MAD send */
+ unsigned int mask = POLLOUT | POLLWRNORM;
+
+ poll_wait(filp, &file->recv_wait, wait);
+
+ if (!list_empty(&file->recv_list))
+ mask |= POLLIN | POLLRDNORM;
+
+ return mask;
+}
+
+static int ib_umad_reg_agent(struct ib_umad_file *file, unsigned long arg)
+{
+ struct ib_user_mad_reg_req ureq;
+ struct ib_mad_reg_req req;
+ struct ib_mad_agent *agent;
+ int agent_id;
+ int ret;
+
+ down_write(&file->agent_mutex);
+
+ if (copy_from_user(&ureq, (void __user *) arg, sizeof ureq)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ if (ureq.qpn != 0 && ureq.qpn != 1) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ for (agent_id = 0; agent_id < IB_UMAD_MAX_AGENTS; ++agent_id)
+ if (!file->agent[agent_id])
+ goto found;
+
+ ret = -ENOMEM;
+ goto out;
+
+found:
+ req.mgmt_class = ureq.mgmt_class;
+ req.mgmt_class_version = ureq.mgmt_class_version;
+ memcpy(req.method_mask, ureq.method_mask, sizeof req.method_mask);
+ memcpy(req.oui, ureq.oui, sizeof req.oui);
+
+ agent = ib_register_mad_agent(file->port->ib_dev, file->port->port_num,
+ ureq.qpn ? IB_QPT_GSI : IB_QPT_SMI,
+ &req, 0, send_handler, recv_handler,
+ file);
+ if (IS_ERR(agent)) {
+ ret = PTR_ERR(agent);
+ goto out;
+ }
+
+ file->agent[agent_id] = agent;
+
+ file->mr[agent_id] = ib_get_dma_mr(agent->qp->pd, IB_ACCESS_LOCAL_WRITE);
+ if (IS_ERR(file->mr[agent_id])) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ if (put_user(agent_id,
+ (u32 __user *) (arg + offsetof(struct ib_user_mad_reg_req, id)))) {
+ ret = -EFAULT;
+ goto err_mr;
+ }
+
+ ret = 0;
+ goto out;
+
+err_mr:
+ ib_dereg_mr(file->mr[agent_id]);
+
+err:
+ file->agent[agent_id] = NULL;
+ ib_unregister_mad_agent(agent);
+
+out:
+ up_write(&file->agent_mutex);
+ return ret;
+}
+
+static int ib_umad_unreg_agent(struct ib_umad_file *file, unsigned long arg)
+{
+ u32 id;
+ int ret = 0;
+
+ down_write(&file->agent_mutex);
+
+ if (get_user(id, (u32 __user *) arg)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ if (id < 0 || id >= IB_UMAD_MAX_AGENTS || !file->agent[id]) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ib_dereg_mr(file->mr[id]);
+ ib_unregister_mad_agent(file->agent[id]);
+ file->agent[id] = NULL;
+
+out:
+ up_write(&file->agent_mutex);
+ return ret;
+}
+
+static long ib_umad_ioctl(struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case IB_USER_MAD_REGISTER_AGENT:
+ return ib_umad_reg_agent(filp->private_data, arg);
+ case IB_USER_MAD_UNREGISTER_AGENT:
+ return ib_umad_unreg_agent(filp->private_data, arg);
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+static int ib_umad_open(struct inode *inode, struct file *filp)
+{
+ struct ib_umad_port *port =
+ container_of(inode->i_cdev, struct ib_umad_port, dev);
+ struct ib_umad_file *file;
+
+ file = kmalloc(sizeof *file, GFP_KERNEL);
+ if (!file)
+ return -ENOMEM;
+
+ memset(file, 0, sizeof *file);
+
+ spin_lock_init(&file->recv_lock);
+ init_rwsem(&file->agent_mutex);
+ INIT_LIST_HEAD(&file->recv_list);
+ init_waitqueue_head(&file->recv_wait);
+
+ file->port = port;
+ filp->private_data = file;
+
+ return 0;
+}
+
+static int ib_umad_close(struct inode *inode, struct file *filp)
+{
+ struct ib_umad_file *file = filp->private_data;
+ int i;
+
+ for (i = 0; i < IB_UMAD_MAX_AGENTS; ++i)
+ if (file->agent[i]) {
+ ib_dereg_mr(file->mr[i]);
+ ib_unregister_mad_agent(file->agent[i]);
+ }
+
+ kfree(file);
+
+ return 0;
+}
+
+static struct file_operations umad_fops = {
+ .owner = THIS_MODULE,
+ .read = ib_umad_read,
+ .write = ib_umad_write,
+ .poll = ib_umad_poll,
+ .unlocked_ioctl = ib_umad_ioctl,
+ .compat_ioctl = ib_umad_ioctl,
+ .open = ib_umad_open,
+ .release = ib_umad_close
+};
+
+static int ib_umad_sm_open(struct inode *inode, struct file *filp)
+{
+ struct ib_umad_port *port =
+ container_of(inode->i_cdev, struct ib_umad_port, sm_dev);
+ struct ib_port_modify props = {
+ .set_port_cap_mask = IB_PORT_SM
+ };
+ int ret;
+
+ if (filp->f_flags & O_NONBLOCK) {
+ if (down_trylock(&port->sm_sem))
+ return -EAGAIN;
+ } else {
+ if (down_interruptible(&port->sm_sem))
+ return -ERESTARTSYS;
+ }
+
+ ret = ib_modify_port(port->ib_dev, port->port_num, 0, &props);
+ if (ret) {
+ up(&port->sm_sem);
+ return ret;
+ }
+
+ filp->private_data = port;
+
+ return 0;
+}
+
+static int ib_umad_sm_close(struct inode *inode, struct file *filp)
+{
+ struct ib_umad_port *port = filp->private_data;
+ struct ib_port_modify props = {
+ .clr_port_cap_mask = IB_PORT_SM
+ };
+ int ret;
+
+ ret = ib_modify_port(port->ib_dev, port->port_num, 0, &props);
+ up(&port->sm_sem);
+
+ return ret;
+}
+
+static struct file_operations umad_sm_fops = {
+ .owner = THIS_MODULE,
+ .open = ib_umad_sm_open,
+ .release = ib_umad_sm_close
+};
+
+static struct ib_client umad_client = {
+ .name = "umad",
+ .add = ib_umad_add_one,
+ .remove = ib_umad_remove_one
+};
+
+static ssize_t show_dev(struct class_device *class_dev, char *buf)
+{
+ struct ib_umad_port *port = class_get_devdata(class_dev);
+
+ if (class_dev == &port->class_dev)
+ return print_dev_t(buf, port->dev.dev);
+ else
+ return print_dev_t(buf, port->sm_dev.dev);
+}
+static CLASS_DEVICE_ATTR(dev, S_IRUGO, show_dev, NULL);
+
+static ssize_t show_ibdev(struct class_device *class_dev, char *buf)
+{
+ struct ib_umad_port *port = class_get_devdata(class_dev);
+
+ return sprintf(buf, "%s\n", port->ib_dev->name);
+}
+static CLASS_DEVICE_ATTR(ibdev, S_IRUGO, show_ibdev, NULL);
+
+static ssize_t show_port(struct class_device *class_dev, char *buf)
+{
+ struct ib_umad_port *port = class_get_devdata(class_dev);
+
+ return sprintf(buf, "%d\n", port->port_num);
+}
+static CLASS_DEVICE_ATTR(port, S_IRUGO, show_port, NULL);
+
+static void ib_umad_release_dev(struct kref *ref)
+{
+ struct ib_umad_device *dev =
+ container_of(ref, struct ib_umad_device, ref);
+
+ kfree(dev);
+}
+
+static void ib_umad_release_port(struct class_device *class_dev)
+{
+ struct ib_umad_port *port = class_get_devdata(class_dev);
+
+ if (class_dev == &port->class_dev) {
+ cdev_del(&port->dev);
+ clear_bit(port->devnum, dev_map);
+ } else {
+ cdev_del(&port->sm_dev);
+ clear_bit(port->sm_devnum, dev_map);
+ }
+
+ kref_put(&port->umad_dev->ref, ib_umad_release_dev);
+}
+
+static struct class umad_class = {
+ .name = "infiniband_mad",
+ .release = ib_umad_release_port
+};
+
+static ssize_t show_abi_version(struct class *class, char *buf)
+{
+ return sprintf(buf, "%d\n", IB_USER_MAD_ABI_VERSION);
+}
+static CLASS_ATTR(abi_version, S_IRUGO, show_abi_version, NULL);
+
+static int ib_umad_init_port(struct ib_device *device, int port_num,
+ struct ib_umad_port *port)
+{
+ spin_lock(&map_lock);
+ port->devnum = find_first_zero_bit(dev_map, IB_UMAD_MAX_PORTS);
+ if (port->devnum >= IB_UMAD_MAX_PORTS) {
+ spin_unlock(&map_lock);
+ return -1;
+ }
+ port->sm_devnum = find_next_zero_bit(dev_map, IB_UMAD_MAX_PORTS * 2, IB_UMAD_MAX_PORTS);
+ if (port->sm_devnum >= IB_UMAD_MAX_PORTS * 2) {
+ spin_unlock(&map_lock);
+ return -1;
+ }
+ set_bit(port->devnum, dev_map);
+ set_bit(port->sm_devnum, dev_map);
+ spin_unlock(&map_lock);
+
+ port->ib_dev = device;
+ port->port_num = port_num;
+ init_MUTEX(&port->sm_sem);
+
+ cdev_init(&port->dev, &umad_fops);
+ port->dev.owner = THIS_MODULE;
+ kobject_set_name(&port->dev.kobj, "umad%d", port->devnum);
+ if (cdev_add(&port->dev, base_dev + port->devnum, 1))
+ return -1;
+
+ port->class_dev.class = &umad_class;
+ port->class_dev.dev = device->dma_device;
+
+ snprintf(port->class_dev.class_id, BUS_ID_SIZE, "umad%d", port->devnum);
+
+ if (class_device_register(&port->class_dev))
+ goto err_cdev;
+
+ class_set_devdata(&port->class_dev, port);
+ kref_get(&port->umad_dev->ref);
+
+ if (class_device_create_file(&port->class_dev, &class_device_attr_dev))
+ goto err_class;
+ if (class_device_create_file(&port->class_dev, &class_device_attr_ibdev))
+ goto err_class;
+ if (class_device_create_file(&port->class_dev, &class_device_attr_port))
+ goto err_class;
+
+ cdev_init(&port->sm_dev, &umad_sm_fops);
+ port->sm_dev.owner = THIS_MODULE;
+	kobject_set_name(&port->sm_dev.kobj, "issm%d", port->sm_devnum - IB_UMAD_MAX_PORTS);
+ if (cdev_add(&port->sm_dev, base_dev + port->sm_devnum, 1))
+ return -1;
+
+ port->sm_class_dev.class = &umad_class;
+ port->sm_class_dev.dev = device->dma_device;
+
+ snprintf(port->sm_class_dev.class_id, BUS_ID_SIZE, "issm%d", port->sm_devnum - IB_UMAD_MAX_PORTS);
+
+ if (class_device_register(&port->sm_class_dev))
+ goto err_sm_cdev;
+
+ class_set_devdata(&port->sm_class_dev, port);
+ kref_get(&port->umad_dev->ref);
+
+ if (class_device_create_file(&port->sm_class_dev, &class_device_attr_dev))
+ goto err_sm_class;
+ if (class_device_create_file(&port->sm_class_dev, &class_device_attr_ibdev))
+ goto err_sm_class;
+ if (class_device_create_file(&port->sm_class_dev, &class_device_attr_port))
+ goto err_sm_class;
+
+ return 0;
+
+err_sm_class:
+ class_device_unregister(&port->sm_class_dev);
+
+err_sm_cdev:
+ cdev_del(&port->sm_dev);
+
+err_class:
+ class_device_unregister(&port->class_dev);
+
+err_cdev:
+ cdev_del(&port->dev);
+ clear_bit(port->devnum, dev_map);
+
+ return -1;
+}
+
+static void ib_umad_add_one(struct ib_device *device)
+{
+ struct ib_umad_device *umad_dev;
+ int s, e, i;
+
+ if (device->node_type == IB_NODE_SWITCH)
+ s = e = 0;
+ else {
+ s = 1;
+ e = device->phys_port_cnt;
+ }
+
+ umad_dev = kmalloc(sizeof *umad_dev +
+ (e - s + 1) * sizeof (struct ib_umad_port),
+ GFP_KERNEL);
+ if (!umad_dev)
+ return;
+
+ memset(umad_dev, 0, sizeof *umad_dev +
+ (e - s + 1) * sizeof (struct ib_umad_port));
+
+ kref_init(&umad_dev->ref);
+
+ umad_dev->start_port = s;
+ umad_dev->end_port = e;
+
+ for (i = s; i <= e; ++i) {
+ umad_dev->port[i - s].umad_dev = umad_dev;
+
+ if (ib_umad_init_port(device, i, &umad_dev->port[i - s]))
+ goto err;
+ }
+
+ ib_set_client_data(device, &umad_client, umad_dev);
+
+ return;
+
+err:
+ while (--i >= s) {
+ class_device_unregister(&umad_dev->port[i - s].class_dev);
+ class_device_unregister(&umad_dev->port[i - s].sm_class_dev);
+ }
+
+ kref_put(&umad_dev->ref, ib_umad_release_dev);
+}
+
+static void ib_umad_remove_one(struct ib_device *device)
+{
+ struct ib_umad_device *umad_dev = ib_get_client_data(device, &umad_client);
+ int i;
+
+ if (!umad_dev)
+ return;
+
+ for (i = 0; i <= umad_dev->end_port - umad_dev->start_port; ++i) {
+ class_device_unregister(&umad_dev->port[i].class_dev);
+ class_device_unregister(&umad_dev->port[i].sm_class_dev);
+ }
+
+ kref_put(&umad_dev->ref, ib_umad_release_dev);
+}
+
+static int __init ib_umad_init(void)
+{
+ int ret;
+
+ spin_lock_init(&map_lock);
+
+ ret = register_chrdev_region(base_dev, IB_UMAD_MAX_PORTS * 2,
+ "infiniband_mad");
+ if (ret) {
+ printk(KERN_ERR "user_mad: couldn't register device number\n");
+ goto out;
+ }
+
+ ret = class_register(&umad_class);
+ if (ret) {
+ printk(KERN_ERR "user_mad: couldn't create class infiniband_mad\n");
+ goto out_chrdev;
+ }
+
+ ret = class_create_file(&umad_class, &class_attr_abi_version);
+ if (ret) {
+ printk(KERN_ERR "user_mad: couldn't create abi_version attribute\n");
+ goto out_class;
+ }
+
+ ret = ib_register_client(&umad_client);
+ if (ret) {
+ printk(KERN_ERR "user_mad: couldn't register ib_umad client\n");
+ goto out_class;
+ }
+
+ return 0;
+
+out_class:
+ class_unregister(&umad_class);
+
+out_chrdev:
+ unregister_chrdev_region(base_dev, IB_UMAD_MAX_PORTS * 2);
+
+out:
+ return ret;
+}
+
+static void __exit ib_umad_cleanup(void)
+{
+ ib_unregister_client(&umad_client);
+ class_unregister(&umad_class);
+ unregister_chrdev_region(base_dev, IB_UMAD_MAX_PORTS * 2);
+}
+
+module_init(ib_umad_init);
+module_exit(ib_umad_cleanup);
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: verbs.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+
+#include <ib_verbs.h>
+
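+/*
+ * Note on reference counting: each verbs object takes a reference on its
+ * parent (PD, CQs, SRQ) via the usecnt atomic, and a parent whose usecnt
+ * is still nonzero cannot be freed (-EBUSY).
+ */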
+/* Protection domains */
+
+struct ib_pd *ib_alloc_pd(struct ib_device *device)
+{
+ struct ib_pd *pd;
+
+ pd = device->alloc_pd(device);
+
+ if (!IS_ERR(pd)) {
+ pd->device = device;
+ atomic_set(&pd->usecnt, 0);
+ }
+
+ return pd;
+}
+EXPORT_SYMBOL(ib_alloc_pd);
+
+int ib_dealloc_pd(struct ib_pd *pd)
+{
+ if (atomic_read(&pd->usecnt))
+ return -EBUSY;
+
+ return pd->device->dealloc_pd(pd);
+}
+EXPORT_SYMBOL(ib_dealloc_pd);
+
+/* Address handles */
+
+struct ib_ah *ib_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr)
+{
+ struct ib_ah *ah;
+
+ ah = pd->device->create_ah(pd, ah_attr);
+
+ if (!IS_ERR(ah)) {
+ ah->device = pd->device;
+ ah->pd = pd;
+ atomic_inc(&pd->usecnt);
+ }
+
+ return ah;
+}
+EXPORT_SYMBOL(ib_create_ah);
+
+int ib_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
+{
+ return ah->device->modify_ah ?
+ ah->device->modify_ah(ah, ah_attr) :
+ -ENOSYS;
+}
+EXPORT_SYMBOL(ib_modify_ah);
+
+int ib_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr)
+{
+ return ah->device->query_ah ?
+ ah->device->query_ah(ah, ah_attr) :
+ -ENOSYS;
+}
+EXPORT_SYMBOL(ib_query_ah);
+
+int ib_destroy_ah(struct ib_ah *ah)
+{
+ struct ib_pd *pd;
+ int ret;
+
+ pd = ah->pd;
+ ret = ah->device->destroy_ah(ah);
+ if (!ret)
+ atomic_dec(&pd->usecnt);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_destroy_ah);
+
+/* Queue pairs */
+
+struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ struct ib_qp_init_attr *qp_init_attr)
+{
+ struct ib_qp *qp;
+
+ qp = pd->device->create_qp(pd, qp_init_attr);
+
+ if (!IS_ERR(qp)) {
+ qp->device = pd->device;
+ qp->pd = pd;
+ qp->send_cq = qp_init_attr->send_cq;
+ qp->recv_cq = qp_init_attr->recv_cq;
+ qp->srq = qp_init_attr->srq;
+ qp->event_handler = qp_init_attr->event_handler;
+ qp->qp_context = qp_init_attr->qp_context;
+ qp->qp_type = qp_init_attr->qp_type;
+ atomic_inc(&pd->usecnt);
+ atomic_inc(&qp_init_attr->send_cq->usecnt);
+ atomic_inc(&qp_init_attr->recv_cq->usecnt);
+ if (qp_init_attr->srq)
+ atomic_inc(&qp_init_attr->srq->usecnt);
+ }
+
+ return qp;
+}
+EXPORT_SYMBOL(ib_create_qp);
+
+int ib_modify_qp(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask)
+{
+ return qp->device->modify_qp(qp, qp_attr, qp_attr_mask);
+}
+EXPORT_SYMBOL(ib_modify_qp);
+
+int ib_query_qp(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask,
+ struct ib_qp_init_attr *qp_init_attr)
+{
+ return qp->device->query_qp ?
+ qp->device->query_qp(qp, qp_attr, qp_attr_mask, qp_init_attr) :
+ -ENOSYS;
+}
+EXPORT_SYMBOL(ib_query_qp);
+
+int ib_destroy_qp(struct ib_qp *qp)
+{
+ struct ib_pd *pd;
+ struct ib_cq *scq, *rcq;
+ struct ib_srq *srq;
+ int ret;
+
+ pd = qp->pd;
+ scq = qp->send_cq;
+ rcq = qp->recv_cq;
+ srq = qp->srq;
+
+ ret = qp->device->destroy_qp(qp);
+ if (!ret) {
+ atomic_dec(&pd->usecnt);
+ atomic_dec(&scq->usecnt);
+ atomic_dec(&rcq->usecnt);
+ if (srq)
+ atomic_dec(&srq->usecnt);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_destroy_qp);
+
+/* Completion queues */
+
+struct ib_cq *ib_create_cq(struct ib_device *device,
+ ib_comp_handler comp_handler,
+ void (*event_handler)(struct ib_event *, void *),
+ void *cq_context, int cqe)
+{
+ struct ib_cq *cq;
+
+ cq = device->create_cq(device, cqe);
+
+ if (!IS_ERR(cq)) {
+ cq->device = device;
+ cq->comp_handler = comp_handler;
+ cq->event_handler = event_handler;
+ cq->cq_context = cq_context;
+ atomic_set(&cq->usecnt, 0);
+ }
+
+ return cq;
+}
+EXPORT_SYMBOL(ib_create_cq);
+
+int ib_destroy_cq(struct ib_cq *cq)
+{
+ if (atomic_read(&cq->usecnt))
+ return -EBUSY;
+
+ return cq->device->destroy_cq(cq);
+}
+EXPORT_SYMBOL(ib_destroy_cq);
+
+int ib_resize_cq(struct ib_cq *cq,
+ int cqe)
+{
+ int ret;
+
+ if (!cq->device->resize_cq)
+ return -ENOSYS;
+
+ ret = cq->device->resize_cq(cq, &cqe);
+ if (!ret)
+ cq->cqe = cqe;
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_resize_cq);
+
+/* Memory regions */
+
+struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags)
+{
+ struct ib_mr *mr;
+
+ mr = pd->device->get_dma_mr(pd, mr_access_flags);
+
+ if (!IS_ERR(mr)) {
+ mr->device = pd->device;
+ mr->pd = pd;
+ atomic_inc(&pd->usecnt);
+ atomic_set(&mr->usecnt, 0);
+ }
+
+ return mr;
+}
+EXPORT_SYMBOL(ib_get_dma_mr);
+
+struct ib_mr *ib_reg_phys_mr(struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start)
+{
+ struct ib_mr *mr;
+
+ mr = pd->device->reg_phys_mr(pd, phys_buf_array, num_phys_buf,
+ mr_access_flags, iova_start);
+
+ if (!IS_ERR(mr)) {
+ mr->device = pd->device;
+ mr->pd = pd;
+ atomic_inc(&pd->usecnt);
+ atomic_set(&mr->usecnt, 0);
+ }
+
+ return mr;
+}
+EXPORT_SYMBOL(ib_reg_phys_mr);
+
+int ib_rereg_phys_mr(struct ib_mr *mr,
+ int mr_rereg_mask,
+ struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start)
+{
+ struct ib_pd *old_pd;
+ int ret;
+
+ if (!mr->device->rereg_phys_mr)
+ return -ENOSYS;
+
+ if (atomic_read(&mr->usecnt))
+ return -EBUSY;
+
+ old_pd = mr->pd;
+
+ ret = mr->device->rereg_phys_mr(mr, mr_rereg_mask, pd,
+ phys_buf_array, num_phys_buf,
+ mr_access_flags, iova_start);
+
+ if (!ret && (mr_rereg_mask & IB_MR_REREG_PD)) {
+ atomic_dec(&old_pd->usecnt);
+ atomic_inc(&pd->usecnt);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_rereg_phys_mr);
+
+int ib_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr)
+{
+ return mr->device->query_mr ?
+ mr->device->query_mr(mr, mr_attr) : -ENOSYS;
+}
+EXPORT_SYMBOL(ib_query_mr);
+
+int ib_dereg_mr(struct ib_mr *mr)
+{
+ struct ib_pd *pd;
+ int ret;
+
+ if (atomic_read(&mr->usecnt))
+ return -EBUSY;
+
+ pd = mr->pd;
+ ret = mr->device->dereg_mr(mr);
+ if (!ret)
+ atomic_dec(&pd->usecnt);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_dereg_mr);
+
+/* Memory windows */
+
+struct ib_mw *ib_alloc_mw(struct ib_pd *pd)
+{
+ struct ib_mw *mw;
+
+ if (!pd->device->alloc_mw)
+ return ERR_PTR(-ENOSYS);
+
+ mw = pd->device->alloc_mw(pd);
+ if (!IS_ERR(mw)) {
+ mw->device = pd->device;
+ mw->pd = pd;
+ atomic_inc(&pd->usecnt);
+ }
+
+ return mw;
+}
+EXPORT_SYMBOL(ib_alloc_mw);
+
+int ib_dealloc_mw(struct ib_mw *mw)
+{
+ struct ib_pd *pd;
+ int ret;
+
+ pd = mw->pd;
+ ret = mw->device->dealloc_mw(mw);
+ if (!ret)
+ atomic_dec(&pd->usecnt);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_dealloc_mw);
+
+/* "Fast" memory regions */
+
+struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd,
+ int mr_access_flags,
+ struct ib_fmr_attr *fmr_attr)
+{
+ struct ib_fmr *fmr;
+
+ if (!pd->device->alloc_fmr)
+ return ERR_PTR(-ENOSYS);
+
+ fmr = pd->device->alloc_fmr(pd, mr_access_flags, fmr_attr);
+ if (!IS_ERR(fmr)) {
+ fmr->device = pd->device;
+ fmr->pd = pd;
+ atomic_inc(&pd->usecnt);
+ }
+
+ return fmr;
+}
+EXPORT_SYMBOL(ib_alloc_fmr);
+
+int ib_unmap_fmr(struct list_head *fmr_list)
+{
+ struct ib_fmr *fmr;
+
+ if (list_empty(fmr_list))
+ return 0;
+
+ fmr = list_entry(fmr_list->next, struct ib_fmr, list);
+ return fmr->device->unmap_fmr(fmr_list);
+}
+EXPORT_SYMBOL(ib_unmap_fmr);
+
+int ib_dealloc_fmr(struct ib_fmr *fmr)
+{
+ struct ib_pd *pd;
+ int ret;
+
+ pd = fmr->pd;
+ ret = fmr->device->dealloc_fmr(fmr);
+ if (!ret)
+ atomic_dec(&pd->usecnt);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_dealloc_fmr);
+
+/* Multicast groups */
+
+int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
+{
+ return qp->device->attach_mcast ?
+ qp->device->attach_mcast(qp, gid, lid) :
+ -ENOSYS;
+}
+EXPORT_SYMBOL(ib_attach_mcast);
+
+int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid)
+{
+ return qp->device->detach_mcast ?
+ qp->device->detach_mcast(qp, gid, lid) :
+ -ENOSYS;
+}
+EXPORT_SYMBOL(ib_detach_mcast);
--- /dev/null
+config INFINIBAND_MTHCA
+ tristate "Mellanox HCA support"
+ depends on PCI && INFINIBAND
+ ---help---
+ This is a low-level driver for Mellanox InfiniHost host
+ channel adapters (HCAs), including the MT23108 PCI-X HCA
+ ("Tavor") and the MT25208 PCI Express HCA ("Arbel").
+
+config INFINIBAND_MTHCA_DEBUG
+ bool "Verbose debugging output"
+ depends on INFINIBAND_MTHCA
+ default n
+ ---help---
+	  This option causes the mthca driver to produce a bunch of debug
+	  messages.  Select this if you are developing the driver or
+	  trying to diagnose a problem.
--- /dev/null
+EXTRA_CFLAGS += -Idrivers/infiniband/include
+
+ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
+EXTRA_CFLAGS += -DDEBUG
+endif
+
+obj-$(CONFIG_INFINIBAND_MTHCA) += ib_mthca.o
+
+ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \
+ mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \
+ mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \
+ mthca_provider.o mthca_memfree.o
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_allocator.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/bitmap.h>
+
+#include "mthca_dev.h"
+
+/* Trivial bitmap-based allocator */
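+/*
+ * Returned object numbers are the bitmap index OR'd with 'top'; 'top'
+ * advances by 'max' (within 'mask') whenever the search wraps or an
+ * object is freed, which helps avoid immediately reusing the same full
+ * object number.
+ */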
+u32 mthca_alloc(struct mthca_alloc *alloc)
+{
+ u32 obj;
+
+ spin_lock(&alloc->lock);
+ obj = find_next_zero_bit(alloc->table, alloc->max, alloc->last);
+ if (obj >= alloc->max) {
+ alloc->top = (alloc->top + alloc->max) & alloc->mask;
+ obj = find_first_zero_bit(alloc->table, alloc->max);
+ }
+
+ if (obj < alloc->max) {
+ set_bit(obj, alloc->table);
+ obj |= alloc->top;
+ } else
+ obj = -1;
+
+ spin_unlock(&alloc->lock);
+
+ return obj;
+}
+
+void mthca_free(struct mthca_alloc *alloc, u32 obj)
+{
+ obj &= alloc->max - 1;
+ spin_lock(&alloc->lock);
+ clear_bit(obj, alloc->table);
+ alloc->last = min(alloc->last, obj);
+ alloc->top = (alloc->top + alloc->max) & alloc->mask;
+ spin_unlock(&alloc->lock);
+}
+
+int mthca_alloc_init(struct mthca_alloc *alloc, u32 num, u32 mask,
+ u32 reserved)
+{
+ int i;
+
+ /* num must be a power of 2 */
+ if (num != 1 << (ffs(num) - 1))
+ return -EINVAL;
+
+ alloc->last = 0;
+ alloc->top = 0;
+ alloc->max = num;
+ alloc->mask = mask;
+ spin_lock_init(&alloc->lock);
+ alloc->table = kmalloc(BITS_TO_LONGS(num) * sizeof (long),
+ GFP_KERNEL);
+ if (!alloc->table)
+ return -ENOMEM;
+
+ bitmap_zero(alloc->table, num);
+ for (i = 0; i < reserved; ++i)
+ set_bit(i, alloc->table);
+
+ return 0;
+}
+
+void mthca_alloc_cleanup(struct mthca_alloc *alloc)
+{
+ kfree(alloc->table);
+}
+
+/*
+ * Array of pointers with lazy allocation of leaf pages. Callers of
+ * _get, _set and _clear methods must use a lock or otherwise
+ * serialize access to the array.
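+ *
+ * Each leaf page holds PAGE_SIZE / sizeof (void *) pointers.  A leaf is
+ * allocated the first time _set touches it and freed again when its
+ * use count drops back to zero in _clear.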
+ */
+
+void *mthca_array_get(struct mthca_array *array, int index)
+{
+ int p = (index * sizeof (void *)) >> PAGE_SHIFT;
+
+ if (array->page_list[p].page) {
+ int i = index & (PAGE_SIZE / sizeof (void *) - 1);
+ return array->page_list[p].page[i];
+ } else
+ return NULL;
+}
+
+int mthca_array_set(struct mthca_array *array, int index, void *value)
+{
+ int p = (index * sizeof (void *)) >> PAGE_SHIFT;
+
+ /* Allocate with GFP_ATOMIC because we'll be called with locks held. */
+ if (!array->page_list[p].page)
+ array->page_list[p].page = (void **) get_zeroed_page(GFP_ATOMIC);
+
+ if (!array->page_list[p].page)
+ return -ENOMEM;
+
+ array->page_list[p].page[index & (PAGE_SIZE / sizeof (void *) - 1)] =
+ value;
+ ++array->page_list[p].used;
+
+ return 0;
+}
+
+void mthca_array_clear(struct mthca_array *array, int index)
+{
+ int p = (index * sizeof (void *)) >> PAGE_SHIFT;
+
+ if (--array->page_list[p].used == 0) {
+ free_page((unsigned long) array->page_list[p].page);
+ array->page_list[p].page = NULL;
+ }
+
+ if (array->page_list[p].used < 0)
+ pr_debug("Array %p index %d page %d with ref count %d < 0\n",
+ array, index, p, array->page_list[p].used);
+}
+
+int mthca_array_init(struct mthca_array *array, int nent)
+{
+ int npage = (nent * sizeof (void *) + PAGE_SIZE - 1) / PAGE_SIZE;
+ int i;
+
+ array->page_list = kmalloc(npage * sizeof *array->page_list, GFP_KERNEL);
+ if (!array->page_list)
+ return -ENOMEM;
+
+ for (i = 0; i < npage; ++i) {
+ array->page_list[i].page = NULL;
+ array->page_list[i].used = 0;
+ }
+
+ return 0;
+}
+
+void mthca_array_cleanup(struct mthca_array *array, int nent)
+{
+ int i;
+
+ for (i = 0; i < (nent * sizeof (void *) + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
+ free_page((unsigned long) array->page_list[i].page);
+
+ kfree(array->page_list);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_av.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/init.h>
+
+#include <ib_verbs.h>
+#include <ib_cache.h>
+
+#include "mthca_dev.h"
+
+struct mthca_av {
+ u32 port_pd;
+ u8 reserved1;
+ u8 g_slid;
+ u16 dlid;
+ u8 reserved2;
+ u8 gid_index;
+ u8 msg_sr;
+ u8 hop_limit;
+ u32 sl_tclass_flowlabel;
+ u32 dgid[4];
+};
+
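+/*
+ * An address vector is written into the HCA's attached (DDR) memory
+ * when that memory is visible, a slot is free and no special QPs use
+ * this PD; otherwise it is allocated from a host-memory PCI pool.
+ */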
+int mthca_create_ah(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct ib_ah_attr *ah_attr,
+ struct mthca_ah *ah)
+{
+ u32 index = -1;
+ struct mthca_av *av = NULL;
+
+ ah->on_hca = 0;
+
+ if (!atomic_read(&pd->sqp_count) &&
+ !(dev->mthca_flags & MTHCA_FLAG_DDR_HIDDEN)) {
+ index = mthca_alloc(&dev->av_table.alloc);
+
+		/* fall back to allocating in host memory */
+ if (index == -1)
+ goto host_alloc;
+
+ av = kmalloc(sizeof *av, GFP_KERNEL);
+ if (!av)
+ goto host_alloc;
+
+ ah->on_hca = 1;
+ ah->avdma = dev->av_table.ddr_av_base +
+ index * MTHCA_AV_SIZE;
+ }
+
+ host_alloc:
+ if (!ah->on_hca) {
+ ah->av = pci_pool_alloc(dev->av_table.pool,
+ SLAB_KERNEL, &ah->avdma);
+ if (!ah->av)
+ return -ENOMEM;
+
+ av = ah->av;
+ }
+
+ ah->key = pd->ntmr.ibmr.lkey;
+
+ memset(av, 0, MTHCA_AV_SIZE);
+
+ av->port_pd = cpu_to_be32(pd->pd_num | (ah_attr->port_num << 24));
+ av->g_slid = ah_attr->src_path_bits;
+ av->dlid = cpu_to_be16(ah_attr->dlid);
+ av->msg_sr = (3 << 4) | /* 2K message */
+ ah_attr->static_rate;
+ av->sl_tclass_flowlabel = cpu_to_be32(ah_attr->sl << 28);
+ if (ah_attr->ah_flags & IB_AH_GRH) {
+ av->g_slid |= 0x80;
+ av->gid_index = (ah_attr->port_num - 1) * dev->limits.gid_table_len +
+ ah_attr->grh.sgid_index;
+ av->hop_limit = ah_attr->grh.hop_limit;
+ av->sl_tclass_flowlabel |=
+ cpu_to_be32((ah_attr->grh.traffic_class << 20) |
+ ah_attr->grh.flow_label);
+ memcpy(av->dgid, ah_attr->grh.dgid.raw, 16);
+ } else {
+ /* Arbel workaround -- low byte of GID must be 2 */
+ av->dgid[3] = cpu_to_be32(2);
+ }
+
+ if (0) {
+ int j;
+
+ mthca_dbg(dev, "Created UDAV at %p/%08lx:\n",
+ av, (unsigned long) ah->avdma);
+ for (j = 0; j < 8; ++j)
+ printk(KERN_DEBUG " [%2x] %08x\n",
+ j * 4, be32_to_cpu(((u32 *) av)[j]));
+ }
+
+ if (ah->on_hca) {
+ memcpy_toio(dev->av_table.av_map + index * MTHCA_AV_SIZE,
+ av, MTHCA_AV_SIZE);
+ kfree(av);
+ }
+
+ return 0;
+}
+
+int mthca_destroy_ah(struct mthca_dev *dev, struct mthca_ah *ah)
+{
+ if (ah->on_hca)
+ mthca_free(&dev->av_table.alloc,
+ (ah->avdma - dev->av_table.ddr_av_base) /
+ MTHCA_AV_SIZE);
+ else
+ pci_pool_free(dev->av_table.pool, ah->av, ah->avdma);
+
+ return 0;
+}
+
+int mthca_read_ah(struct mthca_dev *dev, struct mthca_ah *ah,
+ struct ib_ud_header *header)
+{
+ if (ah->on_hca)
+ return -EINVAL;
+
+ header->lrh.service_level = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28;
+ header->lrh.destination_lid = ah->av->dlid;
+ header->lrh.source_lid = ah->av->g_slid & 0x7f;
+ if (ah->av->g_slid & 0x80) {
+ header->grh_present = 1;
+ header->grh.traffic_class =
+ (be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 20) & 0xff;
+ header->grh.flow_label =
+ ah->av->sl_tclass_flowlabel & cpu_to_be32(0xfffff);
+ ib_get_cached_gid(&dev->ib_dev,
+ be32_to_cpu(ah->av->port_pd) >> 24,
+ ah->av->gid_index,
+ &header->grh.source_gid);
+ memcpy(header->grh.destination_gid.raw,
+ ah->av->dgid, 16);
+ } else {
+ header->grh_present = 0;
+ }
+
+ return 0;
+}
+
+int __devinit mthca_init_av_table(struct mthca_dev *dev)
+{
+ int err;
+
+ err = mthca_alloc_init(&dev->av_table.alloc,
+ dev->av_table.num_ddr_avs,
+ dev->av_table.num_ddr_avs - 1,
+ 0);
+ if (err)
+ return err;
+
+ dev->av_table.pool = pci_pool_create("mthca_av", dev->pdev,
+ MTHCA_AV_SIZE,
+ MTHCA_AV_SIZE, 0);
+ if (!dev->av_table.pool)
+ goto out_free_alloc;
+
+ if (!(dev->mthca_flags & MTHCA_FLAG_DDR_HIDDEN)) {
+ dev->av_table.av_map = ioremap(pci_resource_start(dev->pdev, 4) +
+ dev->av_table.ddr_av_base -
+ dev->ddr_start,
+ dev->av_table.num_ddr_avs *
+ MTHCA_AV_SIZE);
+ if (!dev->av_table.av_map)
+ goto out_free_pool;
+ } else
+ dev->av_table.av_map = NULL;
+
+ return 0;
+
+ out_free_pool:
+ pci_pool_destroy(dev->av_table.pool);
+
+ out_free_alloc:
+ mthca_alloc_cleanup(&dev->av_table.alloc);
+ return -ENOMEM;
+}
+
+void __devexit mthca_cleanup_av_table(struct mthca_dev *dev)
+{
+ if (dev->av_table.av_map)
+ iounmap(dev->av_table.av_map);
+ pci_pool_destroy(dev->av_table.pool);
+ mthca_alloc_cleanup(&dev->av_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_cmd.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <asm/io.h>
+#include <ib_mad.h>
+
+#include "mthca_dev.h"
+#include "mthca_config_reg.h"
+#include "mthca_cmd.h"
+#include "mthca_memfree.h"
+
+#define CMD_POLL_TOKEN 0xffff
+
+enum {
+ HCR_IN_PARAM_OFFSET = 0x00,
+ HCR_IN_MODIFIER_OFFSET = 0x08,
+ HCR_OUT_PARAM_OFFSET = 0x0c,
+ HCR_TOKEN_OFFSET = 0x14,
+ HCR_STATUS_OFFSET = 0x18,
+
+ HCR_OPMOD_SHIFT = 12,
+ HCA_E_BIT = 22,
+ HCR_GO_BIT = 23
+};
+
+enum {
+ /* initialization and general commands */
+ CMD_SYS_EN = 0x1,
+ CMD_SYS_DIS = 0x2,
+ CMD_MAP_FA = 0xfff,
+ CMD_UNMAP_FA = 0xffe,
+ CMD_RUN_FW = 0xff6,
+ CMD_MOD_STAT_CFG = 0x34,
+ CMD_QUERY_DEV_LIM = 0x3,
+ CMD_QUERY_FW = 0x4,
+ CMD_ENABLE_LAM = 0xff8,
+ CMD_DISABLE_LAM = 0xff7,
+ CMD_QUERY_DDR = 0x5,
+ CMD_QUERY_ADAPTER = 0x6,
+ CMD_INIT_HCA = 0x7,
+ CMD_CLOSE_HCA = 0x8,
+ CMD_INIT_IB = 0x9,
+ CMD_CLOSE_IB = 0xa,
+ CMD_QUERY_HCA = 0xb,
+ CMD_SET_IB = 0xc,
+ CMD_ACCESS_DDR = 0x2e,
+ CMD_MAP_ICM = 0xffa,
+ CMD_UNMAP_ICM = 0xff9,
+ CMD_MAP_ICM_AUX = 0xffc,
+ CMD_UNMAP_ICM_AUX = 0xffb,
+ CMD_SET_ICM_SIZE = 0xffd,
+
+ /* TPT commands */
+ CMD_SW2HW_MPT = 0xd,
+ CMD_QUERY_MPT = 0xe,
+ CMD_HW2SW_MPT = 0xf,
+ CMD_READ_MTT = 0x10,
+ CMD_WRITE_MTT = 0x11,
+ CMD_SYNC_TPT = 0x2f,
+
+ /* EQ commands */
+ CMD_MAP_EQ = 0x12,
+ CMD_SW2HW_EQ = 0x13,
+ CMD_HW2SW_EQ = 0x14,
+ CMD_QUERY_EQ = 0x15,
+
+ /* CQ commands */
+ CMD_SW2HW_CQ = 0x16,
+ CMD_HW2SW_CQ = 0x17,
+ CMD_QUERY_CQ = 0x18,
+ CMD_RESIZE_CQ = 0x2c,
+
+ /* SRQ commands */
+ CMD_SW2HW_SRQ = 0x35,
+ CMD_HW2SW_SRQ = 0x36,
+ CMD_QUERY_SRQ = 0x37,
+
+ /* QP/EE commands */
+ CMD_RST2INIT_QPEE = 0x19,
+ CMD_INIT2RTR_QPEE = 0x1a,
+ CMD_RTR2RTS_QPEE = 0x1b,
+ CMD_RTS2RTS_QPEE = 0x1c,
+ CMD_SQERR2RTS_QPEE = 0x1d,
+ CMD_2ERR_QPEE = 0x1e,
+ CMD_RTS2SQD_QPEE = 0x1f,
+ CMD_SQD2SQD_QPEE = 0x38,
+ CMD_SQD2RTS_QPEE = 0x20,
+ CMD_ERR2RST_QPEE = 0x21,
+ CMD_QUERY_QPEE = 0x22,
+ CMD_INIT2INIT_QPEE = 0x2d,
+ CMD_SUSPEND_QPEE = 0x32,
+ CMD_UNSUSPEND_QPEE = 0x33,
+ /* special QPs and management commands */
+ CMD_CONF_SPECIAL_QP = 0x23,
+ CMD_MAD_IFC = 0x24,
+
+ /* multicast commands */
+ CMD_READ_MGM = 0x25,
+ CMD_WRITE_MGM = 0x26,
+ CMD_MGID_HASH = 0x27,
+
+ /* miscellaneous commands */
+ CMD_DIAG_RPRT = 0x30,
+ CMD_NOP = 0x31,
+
+ /* debug commands */
+ CMD_QUERY_DEBUG_MSG = 0x2a,
+ CMD_SET_DEBUG_MSG = 0x2b,
+};
+
+/*
+ * According to Mellanox code, FW may be starved and never complete
+ * commands. So we can't use strict timeouts described in PRM -- we
+ * just arbitrarily select 60 seconds for now.
+ */
+#if 0
+/*
+ * Round up and add 1 to make sure we get the full wait time (since we
+ * will be starting in the middle of a jiffy)
+ */
+enum {
+ CMD_TIME_CLASS_A = (HZ + 999) / 1000 + 1,
+ CMD_TIME_CLASS_B = (HZ + 99) / 100 + 1,
+ CMD_TIME_CLASS_C = (HZ + 9) / 10 + 1
+};
+#else
+enum {
+ CMD_TIME_CLASS_A = 60 * HZ,
+ CMD_TIME_CLASS_B = 60 * HZ,
+ CMD_TIME_CLASS_C = 60 * HZ
+};
+#endif
+
+enum {
+ GO_BIT_TIMEOUT = HZ * 10
+};
+
+struct mthca_cmd_context {
+ struct completion done;
+ struct timer_list timer;
+ int result;
+ int next;
+ u64 out_param;
+ u16 token;
+ u8 status;
+};
+
+static inline int go_bit(struct mthca_dev *dev)
+{
+ return readl(dev->hcr + HCR_STATUS_OFFSET) &
+ swab32(1 << HCR_GO_BIT);
+}
+
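+/*
+ * Post one command to the HCR: six parameter words (input/output
+ * parameters, modifier and token), then a final word that carries the
+ * opcode, opcode modifier and GO bit.  The caller then either polls
+ * the GO bit or waits for a command completion event.
+ */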
+static int mthca_cmd_post(struct mthca_dev *dev,
+ u64 in_param,
+ u64 out_param,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ u16 token,
+ int event)
+{
+ int err = 0;
+
+ if (down_interruptible(&dev->cmd.hcr_sem))
+ return -EINTR;
+
+ if (event) {
+ unsigned long end = jiffies + GO_BIT_TIMEOUT;
+
+ while (go_bit(dev) && time_before(jiffies, end)) {
+ set_current_state(TASK_RUNNING);
+ schedule();
+ }
+ }
+
+ if (go_bit(dev)) {
+ err = -EAGAIN;
+ goto out;
+ }
+
+ /*
+ * We use writel (instead of something like memcpy_toio)
+ * because writes of less than 32 bits to the HCR don't work
+ * (and some architectures such as ia64 implement memcpy_toio
+ * in terms of writeb).
+ */
+ __raw_writel(cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4);
+ __raw_writel(cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4);
+ __raw_writel(cpu_to_be32(in_modifier), dev->hcr + 2 * 4);
+ __raw_writel(cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4);
+ __raw_writel(cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
+ __raw_writel(cpu_to_be32(token << 16), dev->hcr + 5 * 4);
+
+ /* __raw_writel may not order writes. */
+ wmb();
+
+ __raw_writel(cpu_to_be32((1 << HCR_GO_BIT) |
+ (event ? (1 << HCA_E_BIT) : 0) |
+ (op_modifier << HCR_OPMOD_SHIFT) |
+ op), dev->hcr + 6 * 4);
+
+out:
+ up(&dev->cmd.hcr_sem);
+ return err;
+}
+
+static int mthca_cmd_poll(struct mthca_dev *dev,
+ u64 in_param,
+ u64 *out_param,
+ int out_is_imm,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ unsigned long timeout,
+ u8 *status)
+{
+ int err = 0;
+ unsigned long end;
+
+ if (down_interruptible(&dev->cmd.poll_sem))
+ return -EINTR;
+
+ err = mthca_cmd_post(dev, in_param,
+ out_param ? *out_param : 0,
+ in_modifier, op_modifier,
+ op, CMD_POLL_TOKEN, 0);
+ if (err)
+ goto out;
+
+ end = timeout + jiffies;
+ while (go_bit(dev) && time_before(jiffies, end)) {
+ set_current_state(TASK_RUNNING);
+ schedule();
+ }
+
+ if (go_bit(dev)) {
+ err = -EBUSY;
+ goto out;
+ }
+
+ if (out_is_imm) {
+ memcpy_fromio(out_param, dev->hcr + HCR_OUT_PARAM_OFFSET, sizeof (u64));
+ be64_to_cpus(out_param);
+ }
+
+ *status = be32_to_cpu(__raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
+
+out:
+ up(&dev->cmd.poll_sem);
+ return err;
+}
+
+void mthca_cmd_event(struct mthca_dev *dev,
+ u16 token,
+ u8 status,
+ u64 out_param)
+{
+ struct mthca_cmd_context *context =
+ &dev->cmd.context[token & dev->cmd.token_mask];
+
+ /* previously timed out command completing at long last */
+ if (token != context->token)
+ return;
+
+ context->result = 0;
+ context->status = status;
+ context->out_param = out_param;
+
+ context->token += dev->cmd.token_mask + 1;
+
+ complete(&context->done);
+}
+
+static void event_timeout(unsigned long context_ptr)
+{
+ struct mthca_cmd_context *context =
+ (struct mthca_cmd_context *) context_ptr;
+
+ context->result = -EBUSY;
+ complete(&context->done);
+}
+
+static int mthca_cmd_wait(struct mthca_dev *dev,
+ u64 in_param,
+ u64 *out_param,
+ int out_is_imm,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ unsigned long timeout,
+ u8 *status)
+{
+ int err = 0;
+ struct mthca_cmd_context *context;
+
+ if (down_interruptible(&dev->cmd.event_sem))
+ return -EINTR;
+
+ spin_lock(&dev->cmd.context_lock);
+ BUG_ON(dev->cmd.free_head < 0);
+ context = &dev->cmd.context[dev->cmd.free_head];
+ dev->cmd.free_head = context->next;
+ spin_unlock(&dev->cmd.context_lock);
+
+ init_completion(&context->done);
+
+ err = mthca_cmd_post(dev, in_param,
+ out_param ? *out_param : 0,
+ in_modifier, op_modifier,
+ op, context->token, 1);
+ if (err)
+ goto out;
+
+ context->timer.expires = jiffies + timeout;
+ add_timer(&context->timer);
+
+ wait_for_completion(&context->done);
+ del_timer_sync(&context->timer);
+
+ err = context->result;
+ if (err)
+ goto out;
+
+ *status = context->status;
+ if (*status)
+ mthca_dbg(dev, "Command %02x completed with status %02x\n",
+ op, *status);
+
+ if (out_is_imm)
+ *out_param = context->out_param;
+
+out:
+ spin_lock(&dev->cmd.context_lock);
+ context->next = dev->cmd.free_head;
+ dev->cmd.free_head = context - dev->cmd.context;
+ spin_unlock(&dev->cmd.context_lock);
+
+ up(&dev->cmd.event_sem);
+ return err;
+}
+
+/* Invoke a command with an output mailbox */
+static int mthca_cmd_box(struct mthca_dev *dev,
+ u64 in_param,
+ u64 out_param,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ unsigned long timeout,
+ u8 *status)
+{
+ if (dev->cmd.use_events)
+ return mthca_cmd_wait(dev, in_param, &out_param, 0,
+ in_modifier, op_modifier, op,
+ timeout, status);
+ else
+ return mthca_cmd_poll(dev, in_param, &out_param, 0,
+ in_modifier, op_modifier, op,
+ timeout, status);
+}
+
+/* Invoke a command with no output parameter */
+static int mthca_cmd(struct mthca_dev *dev,
+ u64 in_param,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ unsigned long timeout,
+ u8 *status)
+{
+ return mthca_cmd_box(dev, in_param, 0, in_modifier,
+ op_modifier, op, timeout, status);
+}
+
+/*
+ * Invoke a command with an immediate output parameter (and copy the
+ * output into the caller's out_param pointer after the command
+ * executes).
+ */
+static int mthca_cmd_imm(struct mthca_dev *dev,
+ u64 in_param,
+ u64 *out_param,
+ u32 in_modifier,
+ u8 op_modifier,
+ u16 op,
+ unsigned long timeout,
+ u8 *status)
+{
+ if (dev->cmd.use_events)
+ return mthca_cmd_wait(dev, in_param, out_param, 1,
+ in_modifier, op_modifier, op,
+ timeout, status);
+ else
+ return mthca_cmd_poll(dev, in_param, out_param, 1,
+ in_modifier, op_modifier, op,
+ timeout, status);
+}
+
+/*
+ * Switch to using events to issue FW commands (should be called after
+ * event queue to command events has been initialized).
+ */
+int mthca_cmd_use_events(struct mthca_dev *dev)
+{
+ int i;
+
+ dev->cmd.context = kmalloc(dev->cmd.max_cmds *
+ sizeof (struct mthca_cmd_context),
+ GFP_KERNEL);
+ if (!dev->cmd.context)
+ return -ENOMEM;
+
+ for (i = 0; i < dev->cmd.max_cmds; ++i) {
+ dev->cmd.context[i].token = i;
+ dev->cmd.context[i].next = i + 1;
+ init_timer(&dev->cmd.context[i].timer);
+ dev->cmd.context[i].timer.data =
+ (unsigned long) &dev->cmd.context[i];
+ dev->cmd.context[i].timer.function = event_timeout;
+ }
+
+ dev->cmd.context[dev->cmd.max_cmds - 1].next = -1;
+ dev->cmd.free_head = 0;
+
+ sema_init(&dev->cmd.event_sem, dev->cmd.max_cmds);
+ spin_lock_init(&dev->cmd.context_lock);
+
+ for (dev->cmd.token_mask = 1;
+ dev->cmd.token_mask < dev->cmd.max_cmds;
+ dev->cmd.token_mask <<= 1)
+ ; /* nothing */
+ --dev->cmd.token_mask;
+
+ dev->cmd.use_events = 1;
+ down(&dev->cmd.poll_sem);
+
+ return 0;
+}
+
+/*
+ * Switch back to polling (used when shutting down the device)
+ */
+void mthca_cmd_use_polling(struct mthca_dev *dev)
+{
+ int i;
+
+ dev->cmd.use_events = 0;
+
+ for (i = 0; i < dev->cmd.max_cmds; ++i)
+ down(&dev->cmd.event_sem);
+
+ kfree(dev->cmd.context);
+
+ up(&dev->cmd.poll_sem);
+}
+
+int mthca_SYS_EN(struct mthca_dev *dev, u8 *status)
+{
+ u64 out;
+ int ret;
+
+ ret = mthca_cmd_imm(dev, 0, &out, 0, 0, CMD_SYS_EN, HZ, status);
+
+ if (*status == MTHCA_CMD_STAT_DDR_MEM_ERR)
+ mthca_warn(dev, "SYS_EN DDR error: syn=%x, sock=%d, "
+ "sladdr=%d, SPD source=%s\n",
+ (int) (out >> 6) & 0xf, (int) (out >> 4) & 3,
+ (int) (out >> 1) & 7, (int) out & 1 ? "NVMEM" : "DIMM");
+
+ return ret;
+}
+
+int mthca_SYS_DIS(struct mthca_dev *dev, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0, 0, CMD_SYS_DIS, HZ, status);
+}
+
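+/*
+ * Common helper for MAP_FA, MAP_ICM and MAP_ICM_AUX: walk the ICM
+ * chunk list, split each chunk into naturally aligned power-of-two
+ * pieces, and post them to the firmware in batches of at most
+ * PAGE_SIZE / 16 mailbox entries.
+ */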
+static int mthca_map_cmd(struct mthca_dev *dev, u16 op, struct mthca_icm *icm,
+ u64 virt, u8 *status)
+{
+ u32 *inbox;
+ dma_addr_t indma;
+ struct mthca_icm_iter iter;
+ int lg;
+ int nent = 0;
+ int i;
+ int err = 0;
+ int ts = 0, tc = 0;
+
+ inbox = pci_alloc_consistent(dev->pdev, PAGE_SIZE, &indma);
+ if (!inbox)
+ return -ENOMEM;
+
+ memset(inbox, 0, PAGE_SIZE);
+
+ for (mthca_icm_first(icm, &iter);
+ !mthca_icm_last(&iter);
+ mthca_icm_next(&iter)) {
+ /*
+ * We have to pass pages that are aligned to their
+ * size, so find the least significant 1 in the
+ * address or size and use that as our log2 size.
+ */
+ lg = ffs(mthca_icm_addr(&iter) | mthca_icm_size(&iter)) - 1;
+ if (lg < 12) {
+ mthca_warn(dev, "Got FW area not aligned to 4K (%llx/%lx).\n",
+ (unsigned long long) mthca_icm_addr(&iter),
+ mthca_icm_size(&iter));
+ err = -EINVAL;
+ goto out;
+ }
+ for (i = 0; i < mthca_icm_size(&iter) / (1 << lg); ++i, ++nent) {
+ if (virt != -1) {
+ *((__be64 *) (inbox + nent * 4)) =
+ cpu_to_be64(virt);
+ virt += 1 << lg;
+ }
+
+ *((__be64 *) (inbox + nent * 4 + 2)) =
+ cpu_to_be64((mthca_icm_addr(&iter) +
+ (i << lg)) | (lg - 12));
+ ts += 1 << (lg - 10);
+ ++tc;
+
+ if (nent == PAGE_SIZE / 16) {
+ err = mthca_cmd(dev, indma, nent, 0, op,
+ CMD_TIME_CLASS_B, status);
+ if (err || *status)
+ goto out;
+ nent = 0;
+ }
+ }
+ }
+
+ if (nent)
+ err = mthca_cmd(dev, indma, nent, 0, op,
+ CMD_TIME_CLASS_B, status);
+
+ switch (op) {
+ case CMD_MAP_FA:
+ mthca_dbg(dev, "Mapped %d chunks/%d KB for FW.\n", tc, ts);
+ break;
+ case CMD_MAP_ICM_AUX:
+ mthca_dbg(dev, "Mapped %d chunks/%d KB for ICM aux.\n", tc, ts);
+ break;
+ case CMD_MAP_ICM:
+ mthca_dbg(dev, "Mapped %d chunks/%d KB at %llx for ICM.\n",
+ tc, ts, (unsigned long long) virt - (ts << 10));
+ break;
+ }
+
+out:
+ pci_free_consistent(dev->pdev, PAGE_SIZE, inbox, indma);
+ return err;
+}
+
+int mthca_MAP_FA(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status)
+{
+ return mthca_map_cmd(dev, CMD_MAP_FA, icm, -1, status);
+}
+
+int mthca_UNMAP_FA(struct mthca_dev *dev, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0, 0, CMD_UNMAP_FA, CMD_TIME_CLASS_B, status);
+}
+
+int mthca_RUN_FW(struct mthca_dev *dev, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0, 0, CMD_RUN_FW, CMD_TIME_CLASS_A, status);
+}
+
+int mthca_QUERY_FW(struct mthca_dev *dev, u8 *status)
+{
+ u32 *outbox;
+ dma_addr_t outdma;
+ int err = 0;
+ u8 lg;
+
+#define QUERY_FW_OUT_SIZE 0x100
+#define QUERY_FW_VER_OFFSET 0x00
+#define QUERY_FW_MAX_CMD_OFFSET 0x0f
+#define QUERY_FW_ERR_START_OFFSET 0x30
+#define QUERY_FW_ERR_SIZE_OFFSET 0x38
+
+#define QUERY_FW_START_OFFSET 0x20
+#define QUERY_FW_END_OFFSET 0x28
+
+#define QUERY_FW_SIZE_OFFSET 0x00
+#define QUERY_FW_CLR_INT_BASE_OFFSET 0x20
+#define QUERY_FW_EQ_ARM_BASE_OFFSET 0x40
+#define QUERY_FW_EQ_SET_CI_BASE_OFFSET 0x48
+
+ outbox = pci_alloc_consistent(dev->pdev, QUERY_FW_OUT_SIZE, &outdma);
+	if (!outbox)
+		return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, 0, 0, CMD_QUERY_FW,
+ CMD_TIME_CLASS_A, status);
+
+ if (err)
+ goto out;
+
+ MTHCA_GET(dev->fw_ver, outbox, QUERY_FW_VER_OFFSET);
+ /*
+	 * FW subminor version is at more significant bits than minor
+ * version, so swap here.
+ */
+ dev->fw_ver = (dev->fw_ver & 0xffff00000000ull) |
+ ((dev->fw_ver & 0xffff0000ull) >> 16) |
+ ((dev->fw_ver & 0x0000ffffull) << 16);
+
+ MTHCA_GET(lg, outbox, QUERY_FW_MAX_CMD_OFFSET);
+ dev->cmd.max_cmds = 1 << lg;
+
+ mthca_dbg(dev, "FW version %012llx, max commands %d\n",
+ (unsigned long long) dev->fw_ver, dev->cmd.max_cmds);
+
+ if (dev->hca_type == ARBEL_NATIVE) {
+ MTHCA_GET(dev->fw.arbel.fw_pages, outbox, QUERY_FW_SIZE_OFFSET);
+ MTHCA_GET(dev->fw.arbel.clr_int_base, outbox, QUERY_FW_CLR_INT_BASE_OFFSET);
+ MTHCA_GET(dev->fw.arbel.eq_arm_base, outbox, QUERY_FW_EQ_ARM_BASE_OFFSET);
+ MTHCA_GET(dev->fw.arbel.eq_set_ci_base, outbox, QUERY_FW_EQ_SET_CI_BASE_OFFSET);
+ mthca_dbg(dev, "FW size %d KB\n", dev->fw.arbel.fw_pages << 2);
+
+ /*
+ * Arbel page size is always 4 KB; round up number of
+ * system pages needed.
+ */
+ dev->fw.arbel.fw_pages =
+ (dev->fw.arbel.fw_pages + (1 << (PAGE_SHIFT - 12)) - 1) >>
+ (PAGE_SHIFT - 12);
+
+ mthca_dbg(dev, "Clear int @ %llx, EQ arm @ %llx, EQ set CI @ %llx\n",
+ (unsigned long long) dev->fw.arbel.clr_int_base,
+ (unsigned long long) dev->fw.arbel.eq_arm_base,
+ (unsigned long long) dev->fw.arbel.eq_set_ci_base);
+ } else {
+ MTHCA_GET(dev->fw.tavor.fw_start, outbox, QUERY_FW_START_OFFSET);
+ MTHCA_GET(dev->fw.tavor.fw_end, outbox, QUERY_FW_END_OFFSET);
+
+ mthca_dbg(dev, "FW size %d KB (start %llx, end %llx)\n",
+ (int) ((dev->fw.tavor.fw_end - dev->fw.tavor.fw_start) >> 10),
+ (unsigned long long) dev->fw.tavor.fw_start,
+ (unsigned long long) dev->fw.tavor.fw_end);
+ }
+
+out:
+ pci_free_consistent(dev->pdev, QUERY_FW_OUT_SIZE, outbox, outdma);
+ return err;
+}
+
+int mthca_ENABLE_LAM(struct mthca_dev *dev, u8 *status)
+{
+ u8 info;
+ u32 *outbox;
+ dma_addr_t outdma;
+ int err = 0;
+
+#define ENABLE_LAM_OUT_SIZE 0x100
+#define ENABLE_LAM_START_OFFSET 0x00
+#define ENABLE_LAM_END_OFFSET 0x08
+#define ENABLE_LAM_INFO_OFFSET 0x13
+
+#define ENABLE_LAM_INFO_HIDDEN_FLAG (1 << 4)
+#define ENABLE_LAM_INFO_ECC_MASK 0x3
+
+ outbox = pci_alloc_consistent(dev->pdev, ENABLE_LAM_OUT_SIZE, &outdma);
+ if (!outbox)
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, 0, 0, CMD_ENABLE_LAM,
+ CMD_TIME_CLASS_C, status);
+
+ if (err)
+ goto out;
+
+ if (*status == MTHCA_CMD_STAT_LAM_NOT_PRE)
+ goto out;
+
+ MTHCA_GET(dev->ddr_start, outbox, ENABLE_LAM_START_OFFSET);
+ MTHCA_GET(dev->ddr_end, outbox, ENABLE_LAM_END_OFFSET);
+ MTHCA_GET(info, outbox, ENABLE_LAM_INFO_OFFSET);
+
+ if (!!(info & ENABLE_LAM_INFO_HIDDEN_FLAG) !=
+ !!(dev->mthca_flags & MTHCA_FLAG_DDR_HIDDEN)) {
+ mthca_info(dev, "FW reports that HCA-attached memory "
+ "is %s hidden; does not match PCI config\n",
+ (info & ENABLE_LAM_INFO_HIDDEN_FLAG) ?
+ "" : "not");
+ }
+ if (info & ENABLE_LAM_INFO_HIDDEN_FLAG)
+ mthca_dbg(dev, "HCA-attached memory is hidden.\n");
+
+ mthca_dbg(dev, "HCA memory size %d KB (start %llx, end %llx)\n",
+ (int) ((dev->ddr_end - dev->ddr_start) >> 10),
+ (unsigned long long) dev->ddr_start,
+ (unsigned long long) dev->ddr_end);
+
+out:
+ pci_free_consistent(dev->pdev, ENABLE_LAM_OUT_SIZE, outbox, outdma);
+ return err;
+}
+
+int mthca_DISABLE_LAM(struct mthca_dev *dev, u8 *status)
+{
+	return mthca_cmd(dev, 0, 0, 0, CMD_DISABLE_LAM, CMD_TIME_CLASS_C, status);
+}
+
+int mthca_QUERY_DDR(struct mthca_dev *dev, u8 *status)
+{
+ u8 info;
+ u32 *outbox;
+ dma_addr_t outdma;
+ int err = 0;
+
+#define QUERY_DDR_OUT_SIZE 0x100
+#define QUERY_DDR_START_OFFSET 0x00
+#define QUERY_DDR_END_OFFSET 0x08
+#define QUERY_DDR_INFO_OFFSET 0x13
+
+#define QUERY_DDR_INFO_HIDDEN_FLAG (1 << 4)
+#define QUERY_DDR_INFO_ECC_MASK 0x3
+
+ outbox = pci_alloc_consistent(dev->pdev, QUERY_DDR_OUT_SIZE, &outdma);
+ if (!outbox)
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, 0, 0, CMD_QUERY_DDR,
+ CMD_TIME_CLASS_A, status);
+
+ if (err)
+ goto out;
+
+ MTHCA_GET(dev->ddr_start, outbox, QUERY_DDR_START_OFFSET);
+ MTHCA_GET(dev->ddr_end, outbox, QUERY_DDR_END_OFFSET);
+ MTHCA_GET(info, outbox, QUERY_DDR_INFO_OFFSET);
+
+ if (!!(info & QUERY_DDR_INFO_HIDDEN_FLAG) !=
+ !!(dev->mthca_flags & MTHCA_FLAG_DDR_HIDDEN)) {
+ mthca_info(dev, "FW reports that HCA-attached memory "
+ "is %s hidden; does not match PCI config\n",
+ (info & QUERY_DDR_INFO_HIDDEN_FLAG) ?
+ "" : "not");
+ }
+ if (info & QUERY_DDR_INFO_HIDDEN_FLAG)
+ mthca_dbg(dev, "HCA-attached memory is hidden.\n");
+
+ mthca_dbg(dev, "HCA memory size %d KB (start %llx, end %llx)\n",
+ (int) ((dev->ddr_end - dev->ddr_start) >> 10),
+ (unsigned long long) dev->ddr_start,
+ (unsigned long long) dev->ddr_end);
+
+out:
+ pci_free_consistent(dev->pdev, QUERY_DDR_OUT_SIZE, outbox, outdma);
+ return err;
+}
+
+int mthca_QUERY_DEV_LIM(struct mthca_dev *dev,
+ struct mthca_dev_lim *dev_lim, u8 *status)
+{
+ u32 *outbox;
+ dma_addr_t outdma;
+ u8 field;
+ u16 size;
+ int err;
+
+#define QUERY_DEV_LIM_OUT_SIZE 0x100
+#define QUERY_DEV_LIM_MAX_SRQ_SZ_OFFSET 0x10
+#define QUERY_DEV_LIM_MAX_QP_SZ_OFFSET 0x11
+#define QUERY_DEV_LIM_RSVD_QP_OFFSET 0x12
+#define QUERY_DEV_LIM_MAX_QP_OFFSET 0x13
+#define QUERY_DEV_LIM_RSVD_SRQ_OFFSET 0x14
+#define QUERY_DEV_LIM_MAX_SRQ_OFFSET 0x15
+#define QUERY_DEV_LIM_RSVD_EEC_OFFSET 0x16
+#define QUERY_DEV_LIM_MAX_EEC_OFFSET 0x17
+#define QUERY_DEV_LIM_MAX_CQ_SZ_OFFSET 0x19
+#define QUERY_DEV_LIM_RSVD_CQ_OFFSET 0x1a
+#define QUERY_DEV_LIM_MAX_CQ_OFFSET 0x1b
+#define QUERY_DEV_LIM_MAX_MPT_OFFSET 0x1d
+#define QUERY_DEV_LIM_RSVD_EQ_OFFSET 0x1e
+#define QUERY_DEV_LIM_MAX_EQ_OFFSET 0x1f
+#define QUERY_DEV_LIM_RSVD_MTT_OFFSET 0x20
+#define QUERY_DEV_LIM_MAX_MRW_SZ_OFFSET 0x21
+#define QUERY_DEV_LIM_RSVD_MRW_OFFSET 0x22
+#define QUERY_DEV_LIM_MAX_MTT_SEG_OFFSET 0x23
+#define QUERY_DEV_LIM_MAX_AV_OFFSET 0x27
+#define QUERY_DEV_LIM_MAX_REQ_QP_OFFSET 0x29
+#define QUERY_DEV_LIM_MAX_RES_QP_OFFSET 0x2b
+#define QUERY_DEV_LIM_MAX_RDMA_OFFSET 0x2f
+#define QUERY_DEV_LIM_RSZ_SRQ_OFFSET 0x33
+#define QUERY_DEV_LIM_ACK_DELAY_OFFSET 0x35
+#define QUERY_DEV_LIM_MTU_WIDTH_OFFSET 0x36
+#define QUERY_DEV_LIM_VL_PORT_OFFSET 0x37
+#define QUERY_DEV_LIM_MAX_GID_OFFSET 0x3b
+#define QUERY_DEV_LIM_MAX_PKEY_OFFSET 0x3f
+#define QUERY_DEV_LIM_FLAGS_OFFSET 0x44
+#define QUERY_DEV_LIM_RSVD_UAR_OFFSET 0x48
+#define QUERY_DEV_LIM_UAR_SZ_OFFSET 0x49
+#define QUERY_DEV_LIM_PAGE_SZ_OFFSET 0x4b
+#define QUERY_DEV_LIM_MAX_SG_OFFSET 0x51
+#define QUERY_DEV_LIM_MAX_DESC_SZ_OFFSET 0x52
+#define QUERY_DEV_LIM_MAX_SG_RQ_OFFSET 0x55
+#define QUERY_DEV_LIM_MAX_DESC_SZ_RQ_OFFSET 0x56
+#define QUERY_DEV_LIM_MAX_QP_MCG_OFFSET 0x61
+#define QUERY_DEV_LIM_RSVD_MCG_OFFSET 0x62
+#define QUERY_DEV_LIM_MAX_MCG_OFFSET 0x63
+#define QUERY_DEV_LIM_RSVD_PD_OFFSET 0x64
+#define QUERY_DEV_LIM_MAX_PD_OFFSET 0x65
+#define QUERY_DEV_LIM_RSVD_RDD_OFFSET 0x66
+#define QUERY_DEV_LIM_MAX_RDD_OFFSET 0x67
+#define QUERY_DEV_LIM_EEC_ENTRY_SZ_OFFSET 0x80
+#define QUERY_DEV_LIM_QPC_ENTRY_SZ_OFFSET 0x82
+#define QUERY_DEV_LIM_EEEC_ENTRY_SZ_OFFSET 0x84
+#define QUERY_DEV_LIM_EQPC_ENTRY_SZ_OFFSET 0x86
+#define QUERY_DEV_LIM_EQC_ENTRY_SZ_OFFSET 0x88
+#define QUERY_DEV_LIM_CQC_ENTRY_SZ_OFFSET 0x8a
+#define QUERY_DEV_LIM_SRQ_ENTRY_SZ_OFFSET 0x8c
+#define QUERY_DEV_LIM_UAR_ENTRY_SZ_OFFSET 0x8e
+#define QUERY_DEV_LIM_MTT_ENTRY_SZ_OFFSET 0x90
+#define QUERY_DEV_LIM_MPT_ENTRY_SZ_OFFSET 0x92
+#define QUERY_DEV_LIM_PBL_SZ_OFFSET 0x96
+#define QUERY_DEV_LIM_BMME_FLAGS_OFFSET 0x97
+#define QUERY_DEV_LIM_RSVD_LKEY_OFFSET 0x98
+#define QUERY_DEV_LIM_LAMR_OFFSET 0x9f
+#define QUERY_DEV_LIM_MAX_ICM_SZ_OFFSET 0xa0
+
+ outbox = pci_alloc_consistent(dev->pdev, QUERY_DEV_LIM_OUT_SIZE, &outdma);
+ if (!outbox)
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, 0, 0, CMD_QUERY_DEV_LIM,
+ CMD_TIME_CLASS_A, status);
+
+ if (err)
+ goto out;
+
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_SRQ_SZ_OFFSET);
+ dev_lim->max_srq_sz = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_QP_SZ_OFFSET);
+ dev_lim->max_qp_sz = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_QP_OFFSET);
+ dev_lim->reserved_qps = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_QP_OFFSET);
+ dev_lim->max_qps = 1 << (field & 0x1f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_SRQ_OFFSET);
+ dev_lim->reserved_srqs = 1 << (field >> 4);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_SRQ_OFFSET);
+ dev_lim->max_srqs = 1 << (field & 0x1f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_EEC_OFFSET);
+ dev_lim->reserved_eecs = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_EEC_OFFSET);
+ dev_lim->max_eecs = 1 << (field & 0x1f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_CQ_SZ_OFFSET);
+ dev_lim->max_cq_sz = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_CQ_OFFSET);
+ dev_lim->reserved_cqs = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_CQ_OFFSET);
+ dev_lim->max_cqs = 1 << (field & 0x1f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_MPT_OFFSET);
+ dev_lim->max_mpts = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_EQ_OFFSET);
+ dev_lim->reserved_eqs = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_EQ_OFFSET);
+ dev_lim->max_eqs = 1 << (field & 0x7);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_MTT_OFFSET);
+ dev_lim->reserved_mtts = 1 << (field >> 4);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_MRW_SZ_OFFSET);
+ dev_lim->max_mrw_sz = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_MRW_OFFSET);
+ dev_lim->reserved_mrws = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_MTT_SEG_OFFSET);
+ dev_lim->max_mtt_seg = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_REQ_QP_OFFSET);
+ dev_lim->max_requester_per_qp = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_RES_QP_OFFSET);
+ dev_lim->max_responder_per_qp = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_RDMA_OFFSET);
+ dev_lim->max_rdma_global = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_ACK_DELAY_OFFSET);
+ dev_lim->local_ca_ack_delay = field & 0x1f;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MTU_WIDTH_OFFSET);
+ dev_lim->max_mtu = field >> 4;
+ dev_lim->max_port_width = field & 0xf;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_VL_PORT_OFFSET);
+ dev_lim->max_vl = field >> 4;
+ dev_lim->num_ports = field & 0xf;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_GID_OFFSET);
+ dev_lim->max_gids = 1 << (field & 0xf);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_PKEY_OFFSET);
+ dev_lim->max_pkeys = 1 << (field & 0xf);
+ MTHCA_GET(dev_lim->flags, outbox, QUERY_DEV_LIM_FLAGS_OFFSET);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_UAR_OFFSET);
+ dev_lim->reserved_uars = field >> 4;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_UAR_SZ_OFFSET);
+ dev_lim->uar_size = 1 << ((field & 0x3f) + 20);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_PAGE_SZ_OFFSET);
+ dev_lim->min_page_sz = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_SG_OFFSET);
+ dev_lim->max_sg = field;
+
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_MAX_DESC_SZ_OFFSET);
+ dev_lim->max_desc_sz = size;
+
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_QP_MCG_OFFSET);
+ dev_lim->max_qp_per_mcg = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_MCG_OFFSET);
+ dev_lim->reserved_mgms = field & 0xf;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_MCG_OFFSET);
+ dev_lim->max_mcgs = 1 << field;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_PD_OFFSET);
+ dev_lim->reserved_pds = field >> 4;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_PD_OFFSET);
+ dev_lim->max_pds = 1 << (field & 0x3f);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSVD_RDD_OFFSET);
+ dev_lim->reserved_rdds = field >> 4;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_RDD_OFFSET);
+ dev_lim->max_rdds = 1 << (field & 0x3f);
+
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_EEC_ENTRY_SZ_OFFSET);
+ dev_lim->eec_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_QPC_ENTRY_SZ_OFFSET);
+ dev_lim->qpc_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_EEEC_ENTRY_SZ_OFFSET);
+ dev_lim->eeec_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_EQPC_ENTRY_SZ_OFFSET);
+ dev_lim->eqpc_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_EQC_ENTRY_SZ_OFFSET);
+ dev_lim->eqc_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_CQC_ENTRY_SZ_OFFSET);
+ dev_lim->cqc_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_SRQ_ENTRY_SZ_OFFSET);
+ dev_lim->srq_entry_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_UAR_ENTRY_SZ_OFFSET);
+ dev_lim->uar_scratch_entry_sz = size;
+
+ mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n",
+ dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz);
+ mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n",
+ dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz);
+ mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n",
+ dev_lim->max_eqs, dev_lim->reserved_eqs, dev_lim->eqc_entry_sz);
+ mthca_dbg(dev, "reserved MPTs: %d, reserved MTTs: %d\n",
+ dev_lim->reserved_mrws, dev_lim->reserved_mtts);
+ mthca_dbg(dev, "Max PDs: %d, reserved PDs: %d, reserved UARs: %d\n",
+ dev_lim->max_pds, dev_lim->reserved_pds, dev_lim->reserved_uars);
+ mthca_dbg(dev, "Max QP/MCG: %d, reserved MGMs: %d\n",
+		  dev_lim->max_qp_per_mcg, dev_lim->reserved_mgms);
+
+ mthca_dbg(dev, "Flags: %08x\n", dev_lim->flags);
+
+ if (dev->hca_type == ARBEL_NATIVE) {
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_RSZ_SRQ_OFFSET);
+ dev_lim->hca.arbel.resize_srq = field & 1;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_MTT_ENTRY_SZ_OFFSET);
+ dev_lim->mtt_seg_sz = size;
+ MTHCA_GET(size, outbox, QUERY_DEV_LIM_MPT_ENTRY_SZ_OFFSET);
+ dev_lim->mpt_entry_sz = size;
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_PBL_SZ_OFFSET);
+ dev_lim->hca.arbel.max_pbl_sz = 1 << (field & 0x3f);
+ MTHCA_GET(dev_lim->hca.arbel.bmme_flags, outbox,
+ QUERY_DEV_LIM_BMME_FLAGS_OFFSET);
+ MTHCA_GET(dev_lim->hca.arbel.reserved_lkey, outbox,
+ QUERY_DEV_LIM_RSVD_LKEY_OFFSET);
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_LAMR_OFFSET);
+ dev_lim->hca.arbel.lam_required = field & 1;
+ MTHCA_GET(dev_lim->hca.arbel.max_icm_sz, outbox,
+ QUERY_DEV_LIM_MAX_ICM_SZ_OFFSET);
+
+ if (dev_lim->hca.arbel.bmme_flags & 1)
+ mthca_dbg(dev, "Base MM extensions: yes "
+ "(flags %d, max PBL %d, rsvd L_Key %08x)\n",
+ dev_lim->hca.arbel.bmme_flags,
+ dev_lim->hca.arbel.max_pbl_sz,
+ dev_lim->hca.arbel.reserved_lkey);
+ else
+ mthca_dbg(dev, "Base MM extensions: no\n");
+
+ mthca_dbg(dev, "Max ICM size %lld MB\n",
+ (unsigned long long) dev_lim->hca.arbel.max_icm_sz >> 20);
+ } else {
+ MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_AV_OFFSET);
+ dev_lim->hca.tavor.max_avs = 1 << (field & 0x3f);
+ dev_lim->mtt_seg_sz = MTHCA_MTT_SEG_SIZE;
+ dev_lim->mpt_entry_sz = MTHCA_MPT_ENTRY_SIZE;
+ }
+
+out:
+ pci_free_consistent(dev->pdev, QUERY_DEV_LIM_OUT_SIZE, outbox, outdma);
+ return err;
+}
+
+int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
+ struct mthca_adapter *adapter, u8 *status)
+{
+ u32 *outbox;
+ dma_addr_t outdma;
+ int err;
+
+#define QUERY_ADAPTER_OUT_SIZE 0x100
+#define QUERY_ADAPTER_VENDOR_ID_OFFSET 0x00
+#define QUERY_ADAPTER_DEVICE_ID_OFFSET 0x04
+#define QUERY_ADAPTER_REVISION_ID_OFFSET 0x08
+#define QUERY_ADAPTER_INTA_PIN_OFFSET 0x10
+
+ outbox = pci_alloc_consistent(dev->pdev, QUERY_ADAPTER_OUT_SIZE, &outdma);
+ if (!outbox)
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, 0, 0, CMD_QUERY_ADAPTER,
+ CMD_TIME_CLASS_A, status);
+
+ if (err)
+ goto out;
+
+ MTHCA_GET(adapter->vendor_id, outbox, QUERY_ADAPTER_VENDOR_ID_OFFSET);
+ MTHCA_GET(adapter->device_id, outbox, QUERY_ADAPTER_DEVICE_ID_OFFSET);
+ MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET);
+ MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET);
+
+out:
+	pci_free_consistent(dev->pdev, QUERY_ADAPTER_OUT_SIZE, outbox, outdma);
+ return err;
+}
+
+int mthca_INIT_HCA(struct mthca_dev *dev,
+ struct mthca_init_hca_param *param,
+ u8 *status)
+{
+ u32 *inbox;
+ dma_addr_t indma;
+ int err;
+
+#define INIT_HCA_IN_SIZE 0x200
+#define INIT_HCA_FLAGS_OFFSET 0x014
+#define INIT_HCA_QPC_OFFSET 0x020
+#define INIT_HCA_QPC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x10)
+#define INIT_HCA_LOG_QP_OFFSET (INIT_HCA_QPC_OFFSET + 0x17)
+#define INIT_HCA_EEC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x20)
+#define INIT_HCA_LOG_EEC_OFFSET (INIT_HCA_QPC_OFFSET + 0x27)
+#define INIT_HCA_SRQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x28)
+#define INIT_HCA_LOG_SRQ_OFFSET (INIT_HCA_QPC_OFFSET + 0x2f)
+#define INIT_HCA_CQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x30)
+#define INIT_HCA_LOG_CQ_OFFSET (INIT_HCA_QPC_OFFSET + 0x37)
+#define INIT_HCA_EQPC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x40)
+#define INIT_HCA_EEEC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x50)
+#define INIT_HCA_EQC_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x60)
+#define INIT_HCA_LOG_EQ_OFFSET (INIT_HCA_QPC_OFFSET + 0x67)
+#define INIT_HCA_RDB_BASE_OFFSET (INIT_HCA_QPC_OFFSET + 0x70)
+#define INIT_HCA_UDAV_OFFSET 0x0b0
+#define INIT_HCA_UDAV_LKEY_OFFSET (INIT_HCA_UDAV_OFFSET + 0x0)
+#define INIT_HCA_UDAV_PD_OFFSET (INIT_HCA_UDAV_OFFSET + 0x4)
+#define INIT_HCA_MCAST_OFFSET 0x0c0
+#define INIT_HCA_MC_BASE_OFFSET (INIT_HCA_MCAST_OFFSET + 0x00)
+#define INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET (INIT_HCA_MCAST_OFFSET + 0x12)
+#define INIT_HCA_MC_HASH_SZ_OFFSET (INIT_HCA_MCAST_OFFSET + 0x16)
+#define INIT_HCA_LOG_MC_TABLE_SZ_OFFSET (INIT_HCA_MCAST_OFFSET + 0x1b)
+#define INIT_HCA_TPT_OFFSET 0x0f0
+#define INIT_HCA_MPT_BASE_OFFSET (INIT_HCA_TPT_OFFSET + 0x00)
+#define INIT_HCA_MTT_SEG_SZ_OFFSET (INIT_HCA_TPT_OFFSET + 0x09)
+#define INIT_HCA_LOG_MPT_SZ_OFFSET (INIT_HCA_TPT_OFFSET + 0x0b)
+#define INIT_HCA_MTT_BASE_OFFSET (INIT_HCA_TPT_OFFSET + 0x10)
+#define INIT_HCA_UAR_OFFSET 0x120
+#define INIT_HCA_UAR_BASE_OFFSET (INIT_HCA_UAR_OFFSET + 0x00)
+#define INIT_HCA_UARC_SZ_OFFSET (INIT_HCA_UAR_OFFSET + 0x09)
+#define INIT_HCA_LOG_UAR_SZ_OFFSET (INIT_HCA_UAR_OFFSET + 0x0a)
+#define INIT_HCA_UAR_PAGE_SZ_OFFSET (INIT_HCA_UAR_OFFSET + 0x0b)
+#define INIT_HCA_UAR_SCATCH_BASE_OFFSET (INIT_HCA_UAR_OFFSET + 0x10)
+#define INIT_HCA_UAR_CTX_BASE_OFFSET (INIT_HCA_UAR_OFFSET + 0x18)
+
+ inbox = pci_alloc_consistent(dev->pdev, INIT_HCA_IN_SIZE, &indma);
+ if (!inbox)
+ return -ENOMEM;
+
+ memset(inbox, 0, INIT_HCA_IN_SIZE);
+
+#if defined(__LITTLE_ENDIAN)
+ *(inbox + INIT_HCA_FLAGS_OFFSET / 4) &= ~cpu_to_be32(1 << 1);
+#elif defined(__BIG_ENDIAN)
+ *(inbox + INIT_HCA_FLAGS_OFFSET / 4) |= cpu_to_be32(1 << 1);
+#else
+#error Host endianness not defined
+#endif
+ /* Check port for UD address vector: */
+ *(inbox + INIT_HCA_FLAGS_OFFSET / 4) |= cpu_to_be32(1);
+
+ /* We leave wqe_quota, responder_exu, etc as 0 (default) */
+
+ /* QPC/EEC/CQC/EQC/RDB attributes */
+
+ MTHCA_PUT(inbox, param->qpc_base, INIT_HCA_QPC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_num_qps, INIT_HCA_LOG_QP_OFFSET);
+ MTHCA_PUT(inbox, param->eec_base, INIT_HCA_EEC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_num_eecs, INIT_HCA_LOG_EEC_OFFSET);
+ MTHCA_PUT(inbox, param->srqc_base, INIT_HCA_SRQC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_num_srqs, INIT_HCA_LOG_SRQ_OFFSET);
+ MTHCA_PUT(inbox, param->cqc_base, INIT_HCA_CQC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_num_cqs, INIT_HCA_LOG_CQ_OFFSET);
+ MTHCA_PUT(inbox, param->eqpc_base, INIT_HCA_EQPC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->eeec_base, INIT_HCA_EEEC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->eqc_base, INIT_HCA_EQC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_num_eqs, INIT_HCA_LOG_EQ_OFFSET);
+ MTHCA_PUT(inbox, param->rdb_base, INIT_HCA_RDB_BASE_OFFSET);
+
+ /* UD AV attributes */
+
+ /* multicast attributes */
+
+ MTHCA_PUT(inbox, param->mc_base, INIT_HCA_MC_BASE_OFFSET);
+ MTHCA_PUT(inbox, param->log_mc_entry_sz, INIT_HCA_LOG_MC_ENTRY_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->mc_hash_sz, INIT_HCA_MC_HASH_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->log_mc_table_sz, INIT_HCA_LOG_MC_TABLE_SZ_OFFSET);
+
+ /* TPT attributes */
+
+ MTHCA_PUT(inbox, param->mpt_base, INIT_HCA_MPT_BASE_OFFSET);
+ if (dev->hca_type != ARBEL_NATIVE)
+ MTHCA_PUT(inbox, param->mtt_seg_sz, INIT_HCA_MTT_SEG_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->log_mpt_sz, INIT_HCA_LOG_MPT_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->mtt_base, INIT_HCA_MTT_BASE_OFFSET);
+
+ /* UAR attributes */
+ {
+ u8 uar_page_sz = PAGE_SHIFT - 12;
+ MTHCA_PUT(inbox, uar_page_sz, INIT_HCA_UAR_PAGE_SZ_OFFSET);
+ }
+
+ MTHCA_PUT(inbox, param->uar_scratch_base, INIT_HCA_UAR_SCATCH_BASE_OFFSET);
+
+ if (dev->hca_type == ARBEL_NATIVE) {
+ MTHCA_PUT(inbox, param->log_uarc_sz, INIT_HCA_UARC_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->log_uar_sz, INIT_HCA_LOG_UAR_SZ_OFFSET);
+ MTHCA_PUT(inbox, param->uarc_base, INIT_HCA_UAR_CTX_BASE_OFFSET);
+ }
+
+ err = mthca_cmd(dev, indma, 0, 0, CMD_INIT_HCA,
+ HZ, status);
+
+ pci_free_consistent(dev->pdev, INIT_HCA_IN_SIZE, inbox, indma);
+ return err;
+}
+
+int mthca_INIT_IB(struct mthca_dev *dev,
+ struct mthca_init_ib_param *param,
+ int port, u8 *status)
+{
+ u32 *inbox;
+ dma_addr_t indma;
+ int err;
+ u32 flags;
+
+#define INIT_IB_IN_SIZE 56
+#define INIT_IB_FLAGS_OFFSET 0x00
+#define INIT_IB_FLAG_SIG (1 << 18)
+#define INIT_IB_FLAG_NG (1 << 17)
+#define INIT_IB_FLAG_G0 (1 << 16)
+#define INIT_IB_FLAG_1X (1 << 8)
+#define INIT_IB_FLAG_4X (1 << 9)
+#define INIT_IB_FLAG_12X (1 << 11)
+#define INIT_IB_VL_SHIFT 4
+#define INIT_IB_MTU_SHIFT 12
+#define INIT_IB_MAX_GID_OFFSET 0x06
+#define INIT_IB_MAX_PKEY_OFFSET 0x0a
+#define INIT_IB_GUID0_OFFSET 0x10
+#define INIT_IB_NODE_GUID_OFFSET 0x18
+#define INIT_IB_SI_GUID_OFFSET 0x20
+
+ inbox = pci_alloc_consistent(dev->pdev, INIT_IB_IN_SIZE, &indma);
+ if (!inbox)
+ return -ENOMEM;
+
+ memset(inbox, 0, INIT_IB_IN_SIZE);
+
+ flags = 0;
+ flags |= param->enable_1x ? INIT_IB_FLAG_1X : 0;
+ flags |= param->enable_4x ? INIT_IB_FLAG_4X : 0;
+ flags |= param->set_guid0 ? INIT_IB_FLAG_G0 : 0;
+ flags |= param->set_node_guid ? INIT_IB_FLAG_NG : 0;
+ flags |= param->set_si_guid ? INIT_IB_FLAG_SIG : 0;
+ flags |= param->vl_cap << INIT_IB_VL_SHIFT;
+ flags |= param->mtu_cap << INIT_IB_MTU_SHIFT;
+ MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET);
+
+ MTHCA_PUT(inbox, param->gid_cap, INIT_IB_MAX_GID_OFFSET);
+ MTHCA_PUT(inbox, param->pkey_cap, INIT_IB_MAX_PKEY_OFFSET);
+ MTHCA_PUT(inbox, param->guid0, INIT_IB_GUID0_OFFSET);
+ MTHCA_PUT(inbox, param->node_guid, INIT_IB_NODE_GUID_OFFSET);
+ MTHCA_PUT(inbox, param->si_guid, INIT_IB_SI_GUID_OFFSET);
+
+ err = mthca_cmd(dev, indma, port, 0, CMD_INIT_IB,
+ CMD_TIME_CLASS_A, status);
+
+	pci_free_consistent(dev->pdev, INIT_IB_IN_SIZE, inbox, indma);
+ return err;
+}
+
+int mthca_CLOSE_IB(struct mthca_dev *dev, int port, u8 *status)
+{
+ return mthca_cmd(dev, 0, port, 0, CMD_CLOSE_IB, HZ, status);
+}
+
+int mthca_CLOSE_HCA(struct mthca_dev *dev, int panic, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0, panic, CMD_CLOSE_HCA, HZ, status);
+}
+
+int mthca_SET_IB(struct mthca_dev *dev, struct mthca_set_ib_param *param,
+ int port, u8 *status)
+{
+ u32 *inbox;
+ dma_addr_t indma;
+ int err;
+ u32 flags = 0;
+
+#define SET_IB_IN_SIZE 0x40
+#define SET_IB_FLAGS_OFFSET 0x00
+#define SET_IB_FLAG_SIG (1 << 18)
+#define SET_IB_FLAG_RQK (1 << 0)
+#define SET_IB_CAP_MASK_OFFSET 0x04
+#define SET_IB_SI_GUID_OFFSET 0x08
+
+ inbox = pci_alloc_consistent(dev->pdev, SET_IB_IN_SIZE, &indma);
+ if (!inbox)
+ return -ENOMEM;
+
+ memset(inbox, 0, SET_IB_IN_SIZE);
+
+ flags |= param->set_si_guid ? SET_IB_FLAG_SIG : 0;
+ flags |= param->reset_qkey_viol ? SET_IB_FLAG_RQK : 0;
+ MTHCA_PUT(inbox, flags, SET_IB_FLAGS_OFFSET);
+
+ MTHCA_PUT(inbox, param->cap_mask, SET_IB_CAP_MASK_OFFSET);
+ MTHCA_PUT(inbox, param->si_guid, SET_IB_SI_GUID_OFFSET);
+
+ err = mthca_cmd(dev, indma, port, 0, CMD_SET_IB,
+ CMD_TIME_CLASS_B, status);
+
+	pci_free_consistent(dev->pdev, SET_IB_IN_SIZE, inbox, indma);
+ return err;
+}
+
+int mthca_MAP_ICM(struct mthca_dev *dev, struct mthca_icm *icm, u64 virt, u8 *status)
+{
+ return mthca_map_cmd(dev, CMD_MAP_ICM, icm, virt, status);
+}
+
+int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status)
+{
+ u64 *inbox;
+ dma_addr_t indma;
+ int err;
+
+ inbox = pci_alloc_consistent(dev->pdev, 16, &indma);
+ if (!inbox)
+ return -ENOMEM;
+
+ inbox[0] = cpu_to_be64(virt);
+ inbox[1] = cpu_to_be64(dma_addr | (PAGE_SHIFT - 12));
+
+ err = mthca_cmd(dev, indma, 1, 0, CMD_MAP_ICM, CMD_TIME_CLASS_B, status);
+
+ pci_free_consistent(dev->pdev, 16, inbox, indma);
+
+ if (!err)
+ mthca_dbg(dev, "Mapped page at %llx for ICM.\n",
+ (unsigned long long) virt);
+
+ return err;
+}
+
+int mthca_UNMAP_ICM(struct mthca_dev *dev, u64 virt, u32 page_count, u8 *status)
+{
+ return mthca_cmd(dev, virt, page_count, 0, CMD_UNMAP_ICM, CMD_TIME_CLASS_B, status);
+}
+
+int mthca_MAP_ICM_AUX(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status)
+{
+ return mthca_map_cmd(dev, CMD_MAP_ICM_AUX, icm, -1, status);
+}
+
+int mthca_UNMAP_ICM_AUX(struct mthca_dev *dev, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0, 0, CMD_UNMAP_ICM_AUX, CMD_TIME_CLASS_B, status);
+}
+
+int mthca_SET_ICM_SIZE(struct mthca_dev *dev, u64 icm_size, u64 *aux_pages,
+ u8 *status)
+{
+ int ret = mthca_cmd_imm(dev, icm_size, aux_pages, 0, 0, CMD_SET_ICM_SIZE,
+ CMD_TIME_CLASS_A, status);
+
+	if (ret || *status)
+ return ret;
+
+ /*
+ * Arbel page size is always 4 KB; round up number of system
+ * pages needed.
+ */
+ *aux_pages = (*aux_pages + (1 << (PAGE_SHIFT - 12)) - 1) >> (PAGE_SHIFT - 12);
+
+ return 0;
+}
+
+int mthca_SW2HW_MPT(struct mthca_dev *dev, void *mpt_entry,
+ int mpt_index, u8 *status)
+{
+ dma_addr_t indma;
+ int err;
+
+ indma = pci_map_single(dev->pdev, mpt_entry,
+ MTHCA_MPT_ENTRY_SIZE,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd(dev, indma, mpt_index, 0, CMD_SW2HW_MPT,
+ CMD_TIME_CLASS_B, status);
+
+ pci_unmap_single(dev->pdev, indma,
+ MTHCA_MPT_ENTRY_SIZE, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_HW2SW_MPT(struct mthca_dev *dev, void *mpt_entry,
+ int mpt_index, u8 *status)
+{
+ dma_addr_t outdma = 0;
+ int err;
+
+ if (mpt_entry) {
+ outdma = pci_map_single(dev->pdev, mpt_entry,
+ MTHCA_MPT_ENTRY_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (pci_dma_mapping_error(outdma))
+ return -ENOMEM;
+ }
+
+ err = mthca_cmd_box(dev, 0, outdma, mpt_index, !mpt_entry,
+ CMD_HW2SW_MPT,
+ CMD_TIME_CLASS_B, status);
+
+ if (mpt_entry)
+ pci_unmap_single(dev->pdev, outdma,
+ MTHCA_MPT_ENTRY_SIZE,
+ PCI_DMA_FROMDEVICE);
+ return err;
+}
+
+int mthca_WRITE_MTT(struct mthca_dev *dev, u64 *mtt_entry,
+ int num_mtt, u8 *status)
+{
+ dma_addr_t indma;
+ int err;
+
+ indma = pci_map_single(dev->pdev, mtt_entry,
+ (num_mtt + 2) * 8,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd(dev, indma, num_mtt, 0, CMD_WRITE_MTT,
+ CMD_TIME_CLASS_B, status);
+
+ pci_unmap_single(dev->pdev, indma,
+ (num_mtt + 2) * 8, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_MAP_EQ(struct mthca_dev *dev, u64 event_mask, int unmap,
+ int eq_num, u8 *status)
+{
+ mthca_dbg(dev, "%s mask %016llx for eqn %d\n",
+ unmap ? "Clearing" : "Setting",
+ (unsigned long long) event_mask, eq_num);
+ return mthca_cmd(dev, event_mask, (unmap << 31) | eq_num,
+ 0, CMD_MAP_EQ, CMD_TIME_CLASS_B, status);
+}
+
+int mthca_SW2HW_EQ(struct mthca_dev *dev, void *eq_context,
+ int eq_num, u8 *status)
+{
+ dma_addr_t indma;
+ int err;
+
+ indma = pci_map_single(dev->pdev, eq_context,
+ MTHCA_EQ_CONTEXT_SIZE,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd(dev, indma, eq_num, 0, CMD_SW2HW_EQ,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, indma,
+ MTHCA_EQ_CONTEXT_SIZE, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_HW2SW_EQ(struct mthca_dev *dev, void *eq_context,
+ int eq_num, u8 *status)
+{
+ dma_addr_t outdma = 0;
+ int err;
+
+ outdma = pci_map_single(dev->pdev, eq_context,
+ MTHCA_EQ_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (pci_dma_mapping_error(outdma))
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, eq_num, 0,
+ CMD_HW2SW_EQ,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, outdma,
+ MTHCA_EQ_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ return err;
+}
+
+int mthca_SW2HW_CQ(struct mthca_dev *dev, void *cq_context,
+ int cq_num, u8 *status)
+{
+ dma_addr_t indma;
+ int err;
+
+ indma = pci_map_single(dev->pdev, cq_context,
+ MTHCA_CQ_CONTEXT_SIZE,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd(dev, indma, cq_num, 0, CMD_SW2HW_CQ,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, indma,
+ MTHCA_CQ_CONTEXT_SIZE, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_HW2SW_CQ(struct mthca_dev *dev, void *cq_context,
+ int cq_num, u8 *status)
+{
+ dma_addr_t outdma = 0;
+ int err;
+
+ outdma = pci_map_single(dev->pdev, cq_context,
+ MTHCA_CQ_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (pci_dma_mapping_error(outdma))
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, cq_num, 0,
+ CMD_HW2SW_CQ,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, outdma,
+ MTHCA_CQ_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ return err;
+}
+
+int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
+ int is_ee, void *qp_context, u32 optmask,
+ u8 *status)
+{
+ static const u16 op[] = {
+ [MTHCA_TRANS_RST2INIT] = CMD_RST2INIT_QPEE,
+ [MTHCA_TRANS_INIT2INIT] = CMD_INIT2INIT_QPEE,
+ [MTHCA_TRANS_INIT2RTR] = CMD_INIT2RTR_QPEE,
+ [MTHCA_TRANS_RTR2RTS] = CMD_RTR2RTS_QPEE,
+ [MTHCA_TRANS_RTS2RTS] = CMD_RTS2RTS_QPEE,
+ [MTHCA_TRANS_SQERR2RTS] = CMD_SQERR2RTS_QPEE,
+ [MTHCA_TRANS_ANY2ERR] = CMD_2ERR_QPEE,
+ [MTHCA_TRANS_RTS2SQD] = CMD_RTS2SQD_QPEE,
+ [MTHCA_TRANS_SQD2SQD] = CMD_SQD2SQD_QPEE,
+ [MTHCA_TRANS_SQD2RTS] = CMD_SQD2RTS_QPEE,
+ [MTHCA_TRANS_ANY2RST] = CMD_ERR2RST_QPEE
+ };
+ u8 op_mod = 0;
+
+ dma_addr_t indma;
+ int err;
+
+ if (trans < 0 || trans >= ARRAY_SIZE(op))
+ return -EINVAL;
+
+ if (trans == MTHCA_TRANS_ANY2RST) {
+ indma = 0;
+ op_mod = 3; /* don't write outbox, any->reset */
+
+ /* For debugging */
+ qp_context = pci_alloc_consistent(dev->pdev, MTHCA_QP_CONTEXT_SIZE,
+ &indma);
+ op_mod = 2; /* write outbox, any->reset */
+ } else {
+ indma = pci_map_single(dev->pdev, qp_context,
+ MTHCA_QP_CONTEXT_SIZE,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ if (0) {
+ int i;
+ mthca_dbg(dev, "Dumping QP context:\n");
+ printk(" %08x\n", be32_to_cpup(qp_context));
+ for (i = 0; i < 0x100 / 4; ++i) {
+ if (i % 8 == 0)
+ printk("[%02x] ", i * 4);
+ printk(" %08x", be32_to_cpu(((u32 *) qp_context)[i + 2]));
+ if ((i + 1) % 8 == 0)
+ printk("\n");
+ }
+ }
+ }
+
+ if (trans == MTHCA_TRANS_ANY2RST) {
+ err = mthca_cmd_box(dev, 0, indma, (!!is_ee << 24) | num,
+ op_mod, op[trans], CMD_TIME_CLASS_C, status);
+
+ if (0) {
+ int i;
+ mthca_dbg(dev, "Dumping QP context:\n");
+ printk(" %08x\n", be32_to_cpup(qp_context));
+ for (i = 0; i < 0x100 / 4; ++i) {
+ if (i % 8 == 0)
+ printk("[%02x] ", i * 4);
+ printk(" %08x", be32_to_cpu(((u32 *) qp_context)[i + 2]));
+ if ((i + 1) % 8 == 0)
+ printk("\n");
+ }
+ }
+
+ } else
+ err = mthca_cmd(dev, indma, (!!is_ee << 24) | num,
+ op_mod, op[trans], CMD_TIME_CLASS_C, status);
+
+ if (trans != MTHCA_TRANS_ANY2RST)
+ pci_unmap_single(dev->pdev, indma,
+ MTHCA_QP_CONTEXT_SIZE, PCI_DMA_TODEVICE);
+ else
+ pci_free_consistent(dev->pdev, MTHCA_QP_CONTEXT_SIZE,
+ qp_context, indma);
+ return err;
+}
+
+int mthca_QUERY_QP(struct mthca_dev *dev, u32 num, int is_ee,
+ void *qp_context, u8 *status)
+{
+ dma_addr_t outdma = 0;
+ int err;
+
+ outdma = pci_map_single(dev->pdev, qp_context,
+ MTHCA_QP_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (pci_dma_mapping_error(outdma))
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, (!!is_ee << 24) | num, 0,
+ CMD_QUERY_QPEE,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, outdma,
+ MTHCA_QP_CONTEXT_SIZE,
+ PCI_DMA_FROMDEVICE);
+ return err;
+}
+
+int mthca_CONF_SPECIAL_QP(struct mthca_dev *dev, int type, u32 qpn,
+ u8 *status)
+{
+ u8 op_mod;
+
+ switch (type) {
+ case IB_QPT_SMI:
+ op_mod = 0;
+ break;
+ case IB_QPT_GSI:
+ op_mod = 1;
+ break;
+ case IB_QPT_RAW_IPV6:
+ op_mod = 2;
+ break;
+ case IB_QPT_RAW_ETY:
+ op_mod = 3;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return mthca_cmd(dev, 0, qpn, op_mod, CMD_CONF_SPECIAL_QP,
+ CMD_TIME_CLASS_B, status);
+}
+
+int mthca_MAD_IFC(struct mthca_dev *dev, int ignore_mkey, int ignore_bkey,
+ int port, struct ib_wc* in_wc, struct ib_grh* in_grh,
+ void *in_mad, void *response_mad, u8 *status)
+{
+ void *box;
+ dma_addr_t dma;
+ int err;
+ u32 in_modifier = port;
+ u8 op_modifier = 0;
+
+#define MAD_IFC_BOX_SIZE 0x400
+#define MAD_IFC_MY_QPN_OFFSET 0x100
+#define MAD_IFC_RQPN_OFFSET 0x104
+#define MAD_IFC_SL_OFFSET 0x108
+#define MAD_IFC_G_PATH_OFFSET 0x109
+#define MAD_IFC_RLID_OFFSET 0x10a
+#define MAD_IFC_PKEY_OFFSET 0x10e
+#define MAD_IFC_GRH_OFFSET 0x140
+
+ box = pci_alloc_consistent(dev->pdev, MAD_IFC_BOX_SIZE, &dma);
+ if (!box)
+ return -ENOMEM;
+
+ memcpy(box, in_mad, 256);
+
+ /*
+ * Key check traps can't be generated unless we have in_wc to
+ * tell us where to send the trap.
+ */
+ if (ignore_mkey || !in_wc)
+ op_modifier |= 0x1;
+ if (ignore_bkey || !in_wc)
+ op_modifier |= 0x2;
+
+ if (in_wc) {
+ u8 val;
+
+ memset(box + 256, 0, 256);
+
+ MTHCA_PUT(box, in_wc->qp_num, MAD_IFC_MY_QPN_OFFSET);
+ MTHCA_PUT(box, in_wc->src_qp, MAD_IFC_RQPN_OFFSET);
+
+ val = in_wc->sl << 4;
+ MTHCA_PUT(box, val, MAD_IFC_SL_OFFSET);
+
+ val = in_wc->dlid_path_bits |
+ (in_wc->wc_flags & IB_WC_GRH ? 0x80 : 0);
+		MTHCA_PUT(box, val, MAD_IFC_G_PATH_OFFSET);
+
+ MTHCA_PUT(box, in_wc->slid, MAD_IFC_RLID_OFFSET);
+ MTHCA_PUT(box, in_wc->pkey_index, MAD_IFC_PKEY_OFFSET);
+
+ if (in_grh)
+ memcpy((u8 *) box + MAD_IFC_GRH_OFFSET, in_grh, 40);
+
+ op_modifier |= 0x10;
+
+ in_modifier |= in_wc->slid << 16;
+ }
+
+ err = mthca_cmd_box(dev, dma, dma + 512, in_modifier, op_modifier,
+ CMD_MAD_IFC, CMD_TIME_CLASS_C, status);
+
+ if (!err && !*status)
+ memcpy(response_mad, box + 512, 256);
+
+ pci_free_consistent(dev->pdev, MAD_IFC_BOX_SIZE, box, dma);
+ return err;
+}
+
+int mthca_READ_MGM(struct mthca_dev *dev, int index, void *mgm,
+ u8 *status)
+{
+ dma_addr_t outdma = 0;
+ int err;
+
+ outdma = pci_map_single(dev->pdev, mgm,
+ MTHCA_MGM_ENTRY_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (pci_dma_mapping_error(outdma))
+ return -ENOMEM;
+
+ err = mthca_cmd_box(dev, 0, outdma, index, 0,
+ CMD_READ_MGM,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, outdma,
+ MTHCA_MGM_ENTRY_SIZE,
+ PCI_DMA_FROMDEVICE);
+ return err;
+}
+
+int mthca_WRITE_MGM(struct mthca_dev *dev, int index, void *mgm,
+ u8 *status)
+{
+ dma_addr_t indma;
+ int err;
+
+ indma = pci_map_single(dev->pdev, mgm,
+ MTHCA_MGM_ENTRY_SIZE,
+ PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd(dev, indma, index, 0, CMD_WRITE_MGM,
+ CMD_TIME_CLASS_A, status);
+
+ pci_unmap_single(dev->pdev, indma,
+ MTHCA_MGM_ENTRY_SIZE, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_MGID_HASH(struct mthca_dev *dev, void *gid, u16 *hash,
+ u8 *status)
+{
+ dma_addr_t indma;
+ u64 imm;
+ int err;
+
+ indma = pci_map_single(dev->pdev, gid, 16, PCI_DMA_TODEVICE);
+ if (pci_dma_mapping_error(indma))
+ return -ENOMEM;
+
+ err = mthca_cmd_imm(dev, indma, &imm, 0, 0, CMD_MGID_HASH,
+ CMD_TIME_CLASS_A, status);
+ *hash = imm;
+
+ pci_unmap_single(dev->pdev, indma, 16, PCI_DMA_TODEVICE);
+ return err;
+}
+
+int mthca_NOP(struct mthca_dev *dev, u8 *status)
+{
+ return mthca_cmd(dev, 0, 0x1f, 0, CMD_NOP, msecs_to_jiffies(100), status);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_cmd.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#ifndef MTHCA_CMD_H
+#define MTHCA_CMD_H
+
+#include <ib_verbs.h>
+
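+/*
+ * Command mailboxes are kmalloc()ed with MTHCA_CMD_MAILBOX_EXTRA spare
+ * bytes and then passed through MAILBOX_ALIGN() (defined at the end of
+ * this file) to get a pointer aligned to MTHCA_CMD_MAILBOX_ALIGN.
+ */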
+#define MTHCA_CMD_MAILBOX_ALIGN 16UL
+#define MTHCA_CMD_MAILBOX_EXTRA (MTHCA_CMD_MAILBOX_ALIGN - 1)
+
+enum {
+ /* command completed successfully: */
+ MTHCA_CMD_STAT_OK = 0x00,
+ /* Internal error (such as a bus error) occurred while processing command: */
+ MTHCA_CMD_STAT_INTERNAL_ERR = 0x01,
+ /* Operation/command not supported or opcode modifier not supported: */
+ MTHCA_CMD_STAT_BAD_OP = 0x02,
+ /* Parameter not supported or parameter out of range: */
+ MTHCA_CMD_STAT_BAD_PARAM = 0x03,
+ /* System not enabled or bad system state: */
+ MTHCA_CMD_STAT_BAD_SYS_STATE = 0x04,
+	/* Attempt to access reserved or unallocated resource: */
+ MTHCA_CMD_STAT_BAD_RESOURCE = 0x05,
+ /* Requested resource is currently executing a command, or is otherwise busy: */
+ MTHCA_CMD_STAT_RESOURCE_BUSY = 0x06,
+	/* DDR memory error: */
+ MTHCA_CMD_STAT_DDR_MEM_ERR = 0x07,
+ /* Required capability exceeds device limits: */
+ MTHCA_CMD_STAT_EXCEED_LIM = 0x08,
+ /* Resource is not in the appropriate state or ownership: */
+ MTHCA_CMD_STAT_BAD_RES_STATE = 0x09,
+ /* Index out of range: */
+ MTHCA_CMD_STAT_BAD_INDEX = 0x0a,
+ /* FW image corrupted: */
+ MTHCA_CMD_STAT_BAD_NVMEM = 0x0b,
+ /* Attempt to modify a QP/EE which is not in the presumed state: */
+ MTHCA_CMD_STAT_BAD_QPEE_STATE = 0x10,
+ /* Bad segment parameters (Address/Size): */
+ MTHCA_CMD_STAT_BAD_SEG_PARAM = 0x20,
+	/* Memory Region has Memory Windows bound to it: */
+ MTHCA_CMD_STAT_REG_BOUND = 0x21,
+ /* HCA local attached memory not present: */
+ MTHCA_CMD_STAT_LAM_NOT_PRE = 0x22,
+ /* Bad management packet (silently discarded): */
+ MTHCA_CMD_STAT_BAD_PKT = 0x30,
+ /* More outstanding CQEs in CQ than new CQ size: */
+ MTHCA_CMD_STAT_BAD_SIZE = 0x40
+};
+
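+/* QP/EE state transitions, used as the "trans" argument to mthca_MODIFY_QP(). */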
+enum {
+ MTHCA_TRANS_INVALID = 0,
+ MTHCA_TRANS_RST2INIT,
+ MTHCA_TRANS_INIT2INIT,
+ MTHCA_TRANS_INIT2RTR,
+ MTHCA_TRANS_RTR2RTS,
+ MTHCA_TRANS_RTS2RTS,
+ MTHCA_TRANS_SQERR2RTS,
+ MTHCA_TRANS_ANY2ERR,
+ MTHCA_TRANS_RTS2SQD,
+ MTHCA_TRANS_SQD2SQD,
+ MTHCA_TRANS_SQD2RTS,
+ MTHCA_TRANS_ANY2RST,
+};
+
+enum {
+ DEV_LIM_FLAG_SRQ = 1 << 6
+};
+
+struct mthca_dev_lim {
+ int max_srq_sz;
+ int max_qp_sz;
+ int reserved_qps;
+ int max_qps;
+ int reserved_srqs;
+ int max_srqs;
+ int reserved_eecs;
+ int max_eecs;
+ int max_cq_sz;
+ int reserved_cqs;
+ int max_cqs;
+ int max_mpts;
+ int reserved_eqs;
+ int max_eqs;
+ int reserved_mtts;
+ int max_mrw_sz;
+ int reserved_mrws;
+ int max_mtt_seg;
+ int max_requester_per_qp;
+ int max_responder_per_qp;
+ int max_rdma_global;
+ int local_ca_ack_delay;
+ int max_mtu;
+ int max_port_width;
+ int max_vl;
+ int num_ports;
+ int max_gids;
+ int max_pkeys;
+ u32 flags;
+ int reserved_uars;
+ int uar_size;
+ int min_page_sz;
+ int max_sg;
+ int max_desc_sz;
+ int max_qp_per_mcg;
+ int reserved_mgms;
+ int max_mcgs;
+ int reserved_pds;
+ int max_pds;
+ int reserved_rdds;
+ int max_rdds;
+ int eec_entry_sz;
+ int qpc_entry_sz;
+ int eeec_entry_sz;
+ int eqpc_entry_sz;
+ int eqc_entry_sz;
+ int cqc_entry_sz;
+ int srq_entry_sz;
+ int uar_scratch_entry_sz;
+ int mtt_seg_sz;
+ int mpt_entry_sz;
+ union {
+ struct {
+ int max_avs;
+ } tavor;
+ struct {
+ int resize_srq;
+ int max_pbl_sz;
+ u8 bmme_flags;
+ u32 reserved_lkey;
+ int lam_required;
+ u64 max_icm_sz;
+ } arbel;
+ } hca;
+};
+
+struct mthca_adapter {
+ u32 vendor_id;
+ u32 device_id;
+ u32 revision_id;
+ u8 inta_pin;
+};
+
+struct mthca_init_hca_param {
+ u64 qpc_base;
+ u64 eec_base;
+ u64 srqc_base;
+ u64 cqc_base;
+ u64 eqpc_base;
+ u64 eeec_base;
+ u64 eqc_base;
+ u64 rdb_base;
+ u64 mc_base;
+ u64 mpt_base;
+ u64 mtt_base;
+ u64 uar_scratch_base;
+ u64 uarc_base;
+ u16 log_mc_entry_sz;
+ u16 mc_hash_sz;
+ u8 log_num_qps;
+ u8 log_num_eecs;
+ u8 log_num_srqs;
+ u8 log_num_cqs;
+ u8 log_num_eqs;
+ u8 log_mc_table_sz;
+ u8 mtt_seg_sz;
+ u8 log_mpt_sz;
+ u8 log_uar_sz;
+ u8 log_uarc_sz;
+};
+
+struct mthca_init_ib_param {
+ int enable_1x;
+ int enable_4x;
+ int vl_cap;
+ int mtu_cap;
+ u16 gid_cap;
+ u16 pkey_cap;
+ int set_guid0;
+ u64 guid0;
+ int set_node_guid;
+ u64 node_guid;
+ int set_si_guid;
+ u64 si_guid;
+};
+
+struct mthca_set_ib_param {
+ int set_si_guid;
+ int reset_qkey_viol;
+ u64 si_guid;
+ u32 cap_mask;
+};
+
+int mthca_cmd_use_events(struct mthca_dev *dev);
+void mthca_cmd_use_polling(struct mthca_dev *dev);
+void mthca_cmd_event(struct mthca_dev *dev, u16 token,
+ u8 status, u64 out_param);
+
+int mthca_SYS_EN(struct mthca_dev *dev, u8 *status);
+int mthca_SYS_DIS(struct mthca_dev *dev, u8 *status);
+int mthca_MAP_FA(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status);
+int mthca_UNMAP_FA(struct mthca_dev *dev, u8 *status);
+int mthca_RUN_FW(struct mthca_dev *dev, u8 *status);
+int mthca_QUERY_FW(struct mthca_dev *dev, u8 *status);
+int mthca_ENABLE_LAM(struct mthca_dev *dev, u8 *status);
+int mthca_DISABLE_LAM(struct mthca_dev *dev, u8 *status);
+int mthca_QUERY_DDR(struct mthca_dev *dev, u8 *status);
+int mthca_QUERY_DEV_LIM(struct mthca_dev *dev,
+ struct mthca_dev_lim *dev_lim, u8 *status);
+int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
+ struct mthca_adapter *adapter, u8 *status);
+int mthca_INIT_HCA(struct mthca_dev *dev,
+ struct mthca_init_hca_param *param,
+ u8 *status);
+int mthca_INIT_IB(struct mthca_dev *dev,
+ struct mthca_init_ib_param *param,
+ int port, u8 *status);
+int mthca_CLOSE_IB(struct mthca_dev *dev, int port, u8 *status);
+int mthca_CLOSE_HCA(struct mthca_dev *dev, int panic, u8 *status);
+int mthca_SET_IB(struct mthca_dev *dev, struct mthca_set_ib_param *param,
+ int port, u8 *status);
+int mthca_MAP_ICM(struct mthca_dev *dev, struct mthca_icm *icm, u64 virt, u8 *status);
+int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status);
+int mthca_UNMAP_ICM(struct mthca_dev *dev, u64 virt, u32 page_count, u8 *status);
+int mthca_MAP_ICM_AUX(struct mthca_dev *dev, struct mthca_icm *icm, u8 *status);
+int mthca_UNMAP_ICM_AUX(struct mthca_dev *dev, u8 *status);
+int mthca_SET_ICM_SIZE(struct mthca_dev *dev, u64 icm_size, u64 *aux_pages,
+ u8 *status);
+int mthca_SW2HW_MPT(struct mthca_dev *dev, void *mpt_entry,
+ int mpt_index, u8 *status);
+int mthca_HW2SW_MPT(struct mthca_dev *dev, void *mpt_entry,
+ int mpt_index, u8 *status);
+int mthca_WRITE_MTT(struct mthca_dev *dev, u64 *mtt_entry,
+ int num_mtt, u8 *status);
+int mthca_MAP_EQ(struct mthca_dev *dev, u64 event_mask, int unmap,
+ int eq_num, u8 *status);
+int mthca_SW2HW_EQ(struct mthca_dev *dev, void *eq_context,
+ int eq_num, u8 *status);
+int mthca_HW2SW_EQ(struct mthca_dev *dev, void *eq_context,
+ int eq_num, u8 *status);
+int mthca_SW2HW_CQ(struct mthca_dev *dev, void *cq_context,
+ int cq_num, u8 *status);
+int mthca_HW2SW_CQ(struct mthca_dev *dev, void *cq_context,
+ int cq_num, u8 *status);
+int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
+ int is_ee, void *qp_context, u32 optmask,
+ u8 *status);
+int mthca_QUERY_QP(struct mthca_dev *dev, u32 num, int is_ee,
+ void *qp_context, u8 *status);
+int mthca_CONF_SPECIAL_QP(struct mthca_dev *dev, int type, u32 qpn,
+ u8 *status);
+int mthca_MAD_IFC(struct mthca_dev *dev, int ignore_mkey, int ignore_bkey,
+ int port, struct ib_wc* in_wc, struct ib_grh* in_grh,
+ void *in_mad, void *response_mad, u8 *status);
+int mthca_READ_MGM(struct mthca_dev *dev, int index, void *mgm,
+ u8 *status);
+int mthca_WRITE_MGM(struct mthca_dev *dev, int index, void *mgm,
+ u8 *status);
+int mthca_MGID_HASH(struct mthca_dev *dev, void *gid, u16 *hash,
+ u8 *status);
+int mthca_NOP(struct mthca_dev *dev, u8 *status);
+
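+/* Round a mailbox pointer up to the next MTHCA_CMD_MAILBOX_ALIGN boundary. */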
+#define MAILBOX_ALIGN(x) ((void *) ALIGN((unsigned long) (x), MTHCA_CMD_MAILBOX_ALIGN))
+
+#endif /* MTHCA_CMD_H */
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_cq.c 1369 2004-12-20 16:17:07Z roland $
+ */
+
+#include <linux/init.h>
+
+#include <ib_pack.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+enum {
+ MTHCA_MAX_DIRECT_CQ_SIZE = 4 * PAGE_SIZE
+};
+
+enum {
+ MTHCA_CQ_ENTRY_SIZE = 0x20
+};
+
+/*
+ * Must be packed because start is 64 bits but only aligned to 32 bits.
+ */
+struct mthca_cq_context {
+ u32 flags;
+ u64 start;
+ u32 logsize_usrpage;
+ u32 error_eqn;
+ u32 comp_eqn;
+ u32 pd;
+ u32 lkey;
+ u32 last_notified_index;
+ u32 solicit_producer_index;
+ u32 consumer_index;
+ u32 producer_index;
+ u32 cqn;
+ u32 reserved[3];
+} __attribute__((packed));
+
+#define MTHCA_CQ_STATUS_OK ( 0 << 28)
+#define MTHCA_CQ_STATUS_OVERFLOW ( 9 << 28)
+#define MTHCA_CQ_STATUS_WRITE_FAIL (10 << 28)
+#define MTHCA_CQ_FLAG_TR ( 1 << 18)
+#define MTHCA_CQ_FLAG_OI ( 1 << 17)
+#define MTHCA_CQ_STATE_DISARMED ( 0 << 8)
+#define MTHCA_CQ_STATE_ARMED ( 1 << 8)
+#define MTHCA_CQ_STATE_ARMED_SOL ( 4 << 8)
+#define MTHCA_EQ_STATE_FIRED (10 << 8)
+
+enum {
+ MTHCA_ERROR_CQE_OPCODE_MASK = 0xfe
+};
+
+enum {
+ SYNDROME_LOCAL_LENGTH_ERR = 0x01,
+ SYNDROME_LOCAL_QP_OP_ERR = 0x02,
+ SYNDROME_LOCAL_EEC_OP_ERR = 0x03,
+ SYNDROME_LOCAL_PROT_ERR = 0x04,
+ SYNDROME_WR_FLUSH_ERR = 0x05,
+ SYNDROME_MW_BIND_ERR = 0x06,
+ SYNDROME_BAD_RESP_ERR = 0x10,
+ SYNDROME_LOCAL_ACCESS_ERR = 0x11,
+ SYNDROME_REMOTE_INVAL_REQ_ERR = 0x12,
+ SYNDROME_REMOTE_ACCESS_ERR = 0x13,
+ SYNDROME_REMOTE_OP_ERR = 0x14,
+ SYNDROME_RETRY_EXC_ERR = 0x15,
+ SYNDROME_RNR_RETRY_EXC_ERR = 0x16,
+ SYNDROME_LOCAL_RDD_VIOL_ERR = 0x20,
+ SYNDROME_REMOTE_INVAL_RD_REQ_ERR = 0x21,
+ SYNDROME_REMOTE_ABORTED_ERR = 0x22,
+ SYNDROME_INVAL_EECN_ERR = 0x23,
+ SYNDROME_INVAL_EEC_STATE_ERR = 0x24
+};
+
+struct mthca_cqe {
+ u32 my_qpn;
+ u32 my_ee;
+ u32 rqpn;
+ u16 sl_g_mlpath;
+ u16 rlid;
+ u32 imm_etype_pkey_eec;
+ u32 byte_cnt;
+ u32 wqe;
+ u8 opcode;
+ u8 is_send;
+ u8 reserved;
+ u8 owner;
+};
+
+struct mthca_err_cqe {
+ u32 my_qpn;
+ u32 reserved1[3];
+ u8 syndrome;
+ u8 reserved2;
+ u16 db_cnt;
+ u32 reserved3;
+ u32 wqe;
+ u8 opcode;
+ u8 reserved4[2];
+ u8 owner;
+};
+
+#define MTHCA_CQ_ENTRY_OWNER_SW (0 << 7)
+#define MTHCA_CQ_ENTRY_OWNER_HW (1 << 7)
+
+#define MTHCA_CQ_DB_INC_CI (1 << 24)
+#define MTHCA_CQ_DB_REQ_NOT (2 << 24)
+#define MTHCA_CQ_DB_REQ_NOT_SOL (3 << 24)
+#define MTHCA_CQ_DB_SET_CI (4 << 24)
+#define MTHCA_CQ_DB_REQ_NOT_MULT (5 << 24)
+
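+/*
+ * A CQ buffer is either one contiguous allocation (is_direct) or a
+ * list of pages; get_cqe() returns the address of the given entry in
+ * either case.
+ */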
+static inline struct mthca_cqe *get_cqe(struct mthca_cq *cq, int entry)
+{
+ if (cq->is_direct)
+ return cq->queue.direct.buf + (entry * MTHCA_CQ_ENTRY_SIZE);
+ else
+ return cq->queue.page_list[entry * MTHCA_CQ_ENTRY_SIZE / PAGE_SIZE].buf
+ + (entry * MTHCA_CQ_ENTRY_SIZE) % PAGE_SIZE;
+}
+
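+/*
+ * The owner bit says who owns a CQE: cqe_sw() is nonzero while
+ * software owns the entry, and set_cqe_hw() hands it back to the HCA.
+ */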
+static inline int cqe_sw(struct mthca_cq *cq, int i)
+{
+ return !(MTHCA_CQ_ENTRY_OWNER_HW &
+ get_cqe(cq, i)->owner);
+}
+
+static inline int next_cqe_sw(struct mthca_cq *cq)
+{
+ return cqe_sw(cq, cq->cons_index);
+}
+
+static inline void set_cqe_hw(struct mthca_cq *cq, int entry)
+{
+ get_cqe(cq, entry)->owner = MTHCA_CQ_ENTRY_OWNER_HW;
+}
+
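+/* Ring the CQ doorbell to advance the consumer index by nent entries. */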
+static inline void inc_cons_index(struct mthca_dev *dev, struct mthca_cq *cq,
+ int nent)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32(MTHCA_CQ_DB_INC_CI | cq->cqn);
+ doorbell[1] = cpu_to_be32(nent - 1);
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_CQ_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+}
+
+void mthca_cq_event(struct mthca_dev *dev, u32 cqn)
+{
+ struct mthca_cq *cq;
+
+ spin_lock(&dev->cq_table.lock);
+ cq = mthca_array_get(&dev->cq_table.cq, cqn & (dev->limits.num_cqs - 1));
+ if (cq)
+ atomic_inc(&cq->refcount);
+ spin_unlock(&dev->cq_table.lock);
+
+ if (!cq) {
+ mthca_warn(dev, "Completion event for bogus CQ %08x\n", cqn);
+ return;
+ }
+
+ cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+
+ if (atomic_dec_and_test(&cq->refcount))
+ wake_up(&cq->wait);
+}
+
+void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn)
+{
+ struct mthca_cq *cq;
+ struct mthca_cqe *cqe;
+ int prod_index;
+ int nfreed = 0;
+
+ spin_lock_irq(&dev->cq_table.lock);
+ cq = mthca_array_get(&dev->cq_table.cq, cqn & (dev->limits.num_cqs - 1));
+ if (cq)
+ atomic_inc(&cq->refcount);
+ spin_unlock_irq(&dev->cq_table.lock);
+
+ if (!cq)
+ return;
+
+ spin_lock_irq(&cq->lock);
+
+ /*
+ * First we need to find the current producer index, so we
+ * know where to start cleaning from. It doesn't matter if HW
+ * adds new entries after this loop -- the QP we're worried
+ * about is already in RESET, so the new entries won't come
+ * from our QP and therefore don't need to be checked.
+ */
+ for (prod_index = cq->cons_index;
+ cqe_sw(cq, prod_index & cq->ibcq.cqe);
+ ++prod_index)
+ if (prod_index == cq->cons_index + cq->ibcq.cqe)
+ break;
+
+ if (0)
+ mthca_dbg(dev, "Cleaning QPN %06x from CQN %06x; ci %d, pi %d\n",
+ qpn, cqn, cq->cons_index, prod_index);
+
+ /*
+ * Now sweep backwards through the CQ, removing CQ entries
+ * that match our QP by copying older entries on top of them.
+ */
+ while (prod_index > cq->cons_index) {
+ cqe = get_cqe(cq, (prod_index - 1) & cq->ibcq.cqe);
+ if (cqe->my_qpn == cpu_to_be32(qpn))
+ ++nfreed;
+ else if (nfreed)
+ memcpy(get_cqe(cq, (prod_index - 1 + nfreed) &
+ cq->ibcq.cqe),
+ cqe,
+ MTHCA_CQ_ENTRY_SIZE);
+ --prod_index;
+ }
+
+ if (nfreed) {
+ wmb();
+ inc_cons_index(dev, cq, nfreed);
+ cq->cons_index = (cq->cons_index + nfreed) & cq->ibcq.cqe;
+ }
+
+ spin_unlock_irq(&cq->lock);
+ if (atomic_dec_and_test(&cq->refcount))
+ wake_up(&cq->wait);
+}
+
+static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
+ struct mthca_qp *qp, int wqe_index, int is_send,
+ struct mthca_err_cqe *cqe,
+ struct ib_wc *entry, int *free_cqe)
+{
+ int err;
+ int dbd;
+ u32 new_wqe;
+
+ if (1 && cqe->syndrome != SYNDROME_WR_FLUSH_ERR) {
+ int j;
+
+ mthca_dbg(dev, "%x/%d: error CQE -> QPN %06x, WQE @ %08x\n",
+ cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn),
+ be32_to_cpu(cqe->wqe));
+
+ for (j = 0; j < 8; ++j)
+ printk(KERN_DEBUG " [%2x] %08x\n",
+ j * 4, be32_to_cpu(((u32 *) cqe)[j]));
+ }
+
+	/*
+	 * For completions in error, only the work request ID, status
+	 * (and freed resource count for RD) have to be set.
+	 */
+ switch (cqe->syndrome) {
+ case SYNDROME_LOCAL_LENGTH_ERR:
+ entry->status = IB_WC_LOC_LEN_ERR;
+ break;
+ case SYNDROME_LOCAL_QP_OP_ERR:
+ entry->status = IB_WC_LOC_QP_OP_ERR;
+ break;
+ case SYNDROME_LOCAL_EEC_OP_ERR:
+ entry->status = IB_WC_LOC_EEC_OP_ERR;
+ break;
+ case SYNDROME_LOCAL_PROT_ERR:
+ entry->status = IB_WC_LOC_PROT_ERR;
+ break;
+ case SYNDROME_WR_FLUSH_ERR:
+ entry->status = IB_WC_WR_FLUSH_ERR;
+ break;
+ case SYNDROME_MW_BIND_ERR:
+ entry->status = IB_WC_MW_BIND_ERR;
+ break;
+ case SYNDROME_BAD_RESP_ERR:
+ entry->status = IB_WC_BAD_RESP_ERR;
+ break;
+ case SYNDROME_LOCAL_ACCESS_ERR:
+ entry->status = IB_WC_LOC_ACCESS_ERR;
+ break;
+ case SYNDROME_REMOTE_INVAL_REQ_ERR:
+ entry->status = IB_WC_REM_INV_REQ_ERR;
+ break;
+ case SYNDROME_REMOTE_ACCESS_ERR:
+ entry->status = IB_WC_REM_ACCESS_ERR;
+ break;
+ case SYNDROME_REMOTE_OP_ERR:
+ entry->status = IB_WC_REM_OP_ERR;
+ break;
+ case SYNDROME_RETRY_EXC_ERR:
+ entry->status = IB_WC_RETRY_EXC_ERR;
+ break;
+ case SYNDROME_RNR_RETRY_EXC_ERR:
+ entry->status = IB_WC_RNR_RETRY_EXC_ERR;
+ break;
+ case SYNDROME_LOCAL_RDD_VIOL_ERR:
+ entry->status = IB_WC_LOC_RDD_VIOL_ERR;
+ break;
+ case SYNDROME_REMOTE_INVAL_RD_REQ_ERR:
+ entry->status = IB_WC_REM_INV_RD_REQ_ERR;
+ break;
+ case SYNDROME_REMOTE_ABORTED_ERR:
+ entry->status = IB_WC_REM_ABORT_ERR;
+ break;
+ case SYNDROME_INVAL_EECN_ERR:
+ entry->status = IB_WC_INV_EECN_ERR;
+ break;
+ case SYNDROME_INVAL_EEC_STATE_ERR:
+ entry->status = IB_WC_INV_EEC_STATE_ERR;
+ break;
+ default:
+ entry->status = IB_WC_GENERAL_ERR;
+ break;
+ }
+
+ err = mthca_free_err_wqe(qp, is_send, wqe_index, &dbd, &new_wqe);
+ if (err)
+ return err;
+
+ /*
+ * If we're at the end of the WQE chain, or we've used up our
+ * doorbell count, free the CQE. Otherwise just update it for
+ * the next poll operation.
+ */
+ if (!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
+ return 0;
+
+ cqe->db_cnt = cpu_to_be16(be16_to_cpu(cqe->db_cnt) - dbd);
+ cqe->wqe = new_wqe;
+ cqe->syndrome = SYNDROME_WR_FLUSH_ERR;
+
+ *free_cqe = 0;
+
+ return 0;
+}
+
+static void dump_cqe(struct mthca_cqe *cqe)
+{
+ int j;
+
+ for (j = 0; j < 8; ++j)
+ printk(KERN_DEBUG " [%2x] %08x\n",
+ j * 4, be32_to_cpu(((u32 *) cqe)[j]));
+}
+
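+/*
+ * Poll one CQE: look up the QP it belongs to, fill in the work
+ * completion entry, and give the CQE back to hardware unless the
+ * error path decides to reuse it for further flushed WQEs.
+ */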
+static inline int mthca_poll_one(struct mthca_dev *dev,
+ struct mthca_cq *cq,
+ struct mthca_qp **cur_qp,
+ int *freed,
+ struct ib_wc *entry)
+{
+ struct mthca_wq *wq;
+ struct mthca_cqe *cqe;
+ int wqe_index;
+ int is_error = 0;
+ int is_send;
+ int free_cqe = 1;
+ int err = 0;
+
+ if (!next_cqe_sw(cq))
+ return -EAGAIN;
+
+ /*
+ * Make sure we read CQ entry contents after we've checked the
+ * ownership bit.
+ */
+ rmb();
+
+ cqe = get_cqe(cq, cq->cons_index);
+
+ if (0) {
+ mthca_dbg(dev, "%x/%d: CQE -> QPN %06x, WQE @ %08x\n",
+ cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn),
+ be32_to_cpu(cqe->wqe));
+
+ dump_cqe(cqe);
+ }
+
+ if ((cqe->opcode & MTHCA_ERROR_CQE_OPCODE_MASK) ==
+ MTHCA_ERROR_CQE_OPCODE_MASK) {
+ is_error = 1;
+ is_send = cqe->opcode & 1;
+ } else
+ is_send = cqe->is_send & 0x80;
+
+ if (!*cur_qp || be32_to_cpu(cqe->my_qpn) != (*cur_qp)->qpn) {
+ if (*cur_qp) {
+ if (*freed) {
+ wmb();
+ inc_cons_index(dev, cq, *freed);
+ *freed = 0;
+ }
+ spin_unlock(&(*cur_qp)->lock);
+ }
+
+ spin_lock(&dev->qp_table.lock);
+ *cur_qp = mthca_array_get(&dev->qp_table.qp,
+ be32_to_cpu(cqe->my_qpn) &
+ (dev->limits.num_qps - 1));
+ if (*cur_qp)
+ atomic_inc(&(*cur_qp)->refcount);
+ spin_unlock(&dev->qp_table.lock);
+
+ if (!*cur_qp) {
+ mthca_warn(dev, "CQ entry for unknown QP %06x\n",
+ be32_to_cpu(cqe->my_qpn) & 0xffffff);
+ err = -EINVAL;
+ goto out;
+ }
+
+ spin_lock(&(*cur_qp)->lock);
+ }
+
+ entry->qp_num = (*cur_qp)->qpn;
+
+ if (is_send) {
+ wq = &(*cur_qp)->sq;
+ wqe_index = ((be32_to_cpu(cqe->wqe) - (*cur_qp)->send_wqe_offset)
+ >> wq->wqe_shift);
+ entry->wr_id = (*cur_qp)->wrid[wqe_index +
+ (*cur_qp)->rq.max];
+ } else {
+ wq = &(*cur_qp)->rq;
+ wqe_index = be32_to_cpu(cqe->wqe) >> wq->wqe_shift;
+ entry->wr_id = (*cur_qp)->wrid[wqe_index];
+ }
+
+ if (wq->last_comp < wqe_index)
+ wq->cur -= wqe_index - wq->last_comp;
+ else
+ wq->cur -= wq->max - wq->last_comp + wqe_index;
+
+ wq->last_comp = wqe_index;
+
+ if (0)
+ mthca_dbg(dev, "%s completion for QP %06x, index %d (nr %d)\n",
+ is_send ? "Send" : "Receive",
+ (*cur_qp)->qpn, wqe_index, wq->max);
+
+ if (is_error) {
+ err = handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send,
+ (struct mthca_err_cqe *) cqe,
+ entry, &free_cqe);
+ goto out;
+ }
+
+ if (is_send) {
+ entry->opcode = IB_WC_SEND; /* XXX */
+ } else {
+ entry->byte_len = be32_to_cpu(cqe->byte_cnt);
+ switch (cqe->opcode & 0x1f) {
+ case IB_OPCODE_SEND_LAST_WITH_IMMEDIATE:
+ case IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE:
+ entry->wc_flags = IB_WC_WITH_IMM;
+ entry->imm_data = cqe->imm_etype_pkey_eec;
+ entry->opcode = IB_WC_RECV;
+ break;
+ case IB_OPCODE_RDMA_WRITE_LAST_WITH_IMMEDIATE:
+ case IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE:
+ entry->wc_flags = IB_WC_WITH_IMM;
+ entry->imm_data = cqe->imm_etype_pkey_eec;
+ entry->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+ break;
+ default:
+ entry->wc_flags = 0;
+ entry->opcode = IB_WC_RECV;
+ break;
+ }
+ entry->slid = be16_to_cpu(cqe->rlid);
+ entry->sl = be16_to_cpu(cqe->sl_g_mlpath) >> 12;
+ entry->src_qp = be32_to_cpu(cqe->rqpn) & 0xffffff;
+ entry->dlid_path_bits = be16_to_cpu(cqe->sl_g_mlpath) & 0x7f;
+ entry->pkey_index = be32_to_cpu(cqe->imm_etype_pkey_eec) >> 16;
+ entry->wc_flags |= be16_to_cpu(cqe->sl_g_mlpath) & 0x80 ?
+ IB_WC_GRH : 0;
+ }
+
+ entry->status = IB_WC_SUCCESS;
+
+ out:
+ if (free_cqe) {
+ set_cqe_hw(cq, cq->cons_index);
+ ++(*freed);
+ cq->cons_index = (cq->cons_index + 1) & cq->ibcq.cqe;
+ }
+
+ return err;
+}
+
+int mthca_poll_cq(struct ib_cq *ibcq, int num_entries,
+ struct ib_wc *entry)
+{
+ struct mthca_dev *dev = to_mdev(ibcq->device);
+ struct mthca_cq *cq = to_mcq(ibcq);
+ struct mthca_qp *qp = NULL;
+ unsigned long flags;
+ int err = 0;
+ int freed = 0;
+ int npolled;
+
+ spin_lock_irqsave(&cq->lock, flags);
+
+ for (npolled = 0; npolled < num_entries; ++npolled) {
+ err = mthca_poll_one(dev, cq, &qp,
+ &freed, entry + npolled);
+ if (err)
+ break;
+ }
+
+ if (freed) {
+ wmb();
+ inc_cons_index(dev, cq, freed);
+ }
+
+ if (qp) {
+ spin_unlock(&qp->lock);
+ if (atomic_dec_and_test(&qp->refcount))
+ wake_up(&qp->wait);
+ }
+
+ spin_unlock_irqrestore(&cq->lock, flags);
+
+ return err == 0 || err == -EAGAIN ? npolled : err;
+}
+
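+/* Arm the CQ for the next (optionally solicited-only) completion event. */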
+void mthca_arm_cq(struct mthca_dev *dev, struct mthca_cq *cq,
+ int solicited)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32((solicited ?
+ MTHCA_CQ_DB_REQ_NOT_SOL :
+ MTHCA_CQ_DB_REQ_NOT) |
+ cq->cqn);
+ doorbell[1] = 0xffffffff;
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_CQ_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+}
+
+int mthca_init_cq(struct mthca_dev *dev, int nent,
+ struct mthca_cq *cq)
+{
+ int size = nent * MTHCA_CQ_ENTRY_SIZE;
+ dma_addr_t t;
+ void *mailbox = NULL;
+ int npages, shift;
+ u64 *dma_list = NULL;
+ struct mthca_cq_context *cq_context;
+ int err = -ENOMEM;
+ u8 status;
+ int i;
+
+ might_sleep();
+
+ mailbox = kmalloc(sizeof (struct mthca_cq_context) + MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox)
+ goto err_out;
+
+ cq_context = MAILBOX_ALIGN(mailbox);
+
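+	/*
+	 * Small CQs fit in a single contiguous DMA buffer; bigger ones
+	 * are built from individual pages.  Either way the buffer is
+	 * registered as a memory region further down.
+	 */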
+ if (size <= MTHCA_MAX_DIRECT_CQ_SIZE) {
+ if (0)
+ mthca_dbg(dev, "Creating direct CQ of size %d\n", size);
+
+ cq->is_direct = 1;
+ npages = 1;
+ shift = get_order(size) + PAGE_SHIFT;
+
+ cq->queue.direct.buf = pci_alloc_consistent(dev->pdev,
+ size, &t);
+ if (!cq->queue.direct.buf)
+ goto err_out;
+
+ pci_unmap_addr_set(&cq->queue.direct, mapping, t);
+
+ memset(cq->queue.direct.buf, 0, size);
+
+ while (t & ((1 << shift) - 1)) {
+ --shift;
+ npages *= 2;
+ }
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_out_free;
+
+ for (i = 0; i < npages; ++i)
+ dma_list[i] = t + i * (1 << shift);
+ } else {
+ cq->is_direct = 0;
+ npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ shift = PAGE_SHIFT;
+
+ if (0)
+ mthca_dbg(dev, "Creating indirect CQ with %d pages\n", npages);
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_out;
+
+ cq->queue.page_list = kmalloc(npages * sizeof *cq->queue.page_list,
+ GFP_KERNEL);
+ if (!cq->queue.page_list)
+ goto err_out;
+
+ for (i = 0; i < npages; ++i)
+ cq->queue.page_list[i].buf = NULL;
+
+ for (i = 0; i < npages; ++i) {
+ cq->queue.page_list[i].buf =
+ pci_alloc_consistent(dev->pdev, PAGE_SIZE, &t);
+ if (!cq->queue.page_list[i].buf)
+ goto err_out_free;
+
+ dma_list[i] = t;
+ pci_unmap_addr_set(&cq->queue.page_list[i], mapping, t);
+
+ memset(cq->queue.page_list[i].buf, 0, PAGE_SIZE);
+ }
+ }
+
+ for (i = 0; i < nent; ++i)
+ set_cqe_hw(cq, i);
+
+ cq->cqn = mthca_alloc(&dev->cq_table.alloc);
+ if (cq->cqn == -1)
+ goto err_out_free;
+
+ err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num,
+ dma_list, shift, npages,
+ 0, size,
+ MTHCA_MPT_FLAG_LOCAL_WRITE |
+ MTHCA_MPT_FLAG_LOCAL_READ,
+ &cq->mr);
+ if (err)
+ goto err_out_free_cq;
+
+ spin_lock_init(&cq->lock);
+ atomic_set(&cq->refcount, 1);
+ init_waitqueue_head(&cq->wait);
+
+ memset(cq_context, 0, sizeof *cq_context);
+ cq_context->flags = cpu_to_be32(MTHCA_CQ_STATUS_OK |
+ MTHCA_CQ_STATE_DISARMED |
+ MTHCA_CQ_FLAG_TR);
+ cq_context->start = cpu_to_be64(0);
+ cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24 |
+ MTHCA_KAR_PAGE);
+ cq_context->error_eqn = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn);
+ cq_context->comp_eqn = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_COMP].eqn);
+ cq_context->pd = cpu_to_be32(dev->driver_pd.pd_num);
+ cq_context->lkey = cpu_to_be32(cq->mr.ibmr.lkey);
+ cq_context->cqn = cpu_to_be32(cq->cqn);
+
+ err = mthca_SW2HW_CQ(dev, cq_context, cq->cqn, &status);
+ if (err) {
+ mthca_warn(dev, "SW2HW_CQ failed (%d)\n", err);
+ goto err_out_free_mr;
+ }
+
+ if (status) {
+ mthca_warn(dev, "SW2HW_CQ returned status 0x%02x\n",
+ status);
+ err = -EINVAL;
+ goto err_out_free_mr;
+ }
+
+ spin_lock_irq(&dev->cq_table.lock);
+ if (mthca_array_set(&dev->cq_table.cq,
+ cq->cqn & (dev->limits.num_cqs - 1),
+ cq)) {
+ spin_unlock_irq(&dev->cq_table.lock);
+ goto err_out_free_mr;
+ }
+ spin_unlock_irq(&dev->cq_table.lock);
+
+ cq->cons_index = 0;
+
+ kfree(dma_list);
+ kfree(mailbox);
+
+ return 0;
+
+ err_out_free_mr:
+ mthca_free_mr(dev, &cq->mr);
+
+ err_out_free_cq:
+ mthca_free(&dev->cq_table.alloc, cq->cqn);
+
+ err_out_free:
+ if (cq->is_direct)
+ pci_free_consistent(dev->pdev, size,
+ cq->queue.direct.buf,
+ pci_unmap_addr(&cq->queue.direct, mapping));
+ else {
+ for (i = 0; i < npages; ++i)
+ if (cq->queue.page_list[i].buf)
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ cq->queue.page_list[i].buf,
+ pci_unmap_addr(&cq->queue.page_list[i],
+ mapping));
+
+ kfree(cq->queue.page_list);
+ }
+
+ err_out:
+ kfree(dma_list);
+ kfree(mailbox);
+
+ return err;
+}
+
+void mthca_free_cq(struct mthca_dev *dev,
+ struct mthca_cq *cq)
+{
+ void *mailbox;
+ int err;
+ u8 status;
+
+ might_sleep();
+
+ mailbox = kmalloc(sizeof (struct mthca_cq_context) + MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox) {
+ mthca_warn(dev, "No memory for mailbox to free CQ.\n");
+ return;
+ }
+
+ err = mthca_HW2SW_CQ(dev, MAILBOX_ALIGN(mailbox), cq->cqn, &status);
+ if (err)
+ mthca_warn(dev, "HW2SW_CQ failed (%d)\n", err);
+ else if (status)
+ mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n",
+ status);
+
+ if (0) {
+ u32 *ctx = MAILBOX_ALIGN(mailbox);
+ int j;
+
+ printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n",
+ cq->cqn, cq->cons_index, next_cqe_sw(cq));
+ for (j = 0; j < 16; ++j)
+ printk(KERN_ERR "[%2x] %08x\n", j * 4, be32_to_cpu(ctx[j]));
+ }
+
+ spin_lock_irq(&dev->cq_table.lock);
+ mthca_array_clear(&dev->cq_table.cq,
+ cq->cqn & (dev->limits.num_cqs - 1));
+ spin_unlock_irq(&dev->cq_table.lock);
+
+ atomic_dec(&cq->refcount);
+ wait_event(cq->wait, !atomic_read(&cq->refcount));
+
+ mthca_free_mr(dev, &cq->mr);
+
+ if (cq->is_direct)
+ pci_free_consistent(dev->pdev,
+ (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
+ cq->queue.direct.buf,
+ pci_unmap_addr(&cq->queue.direct,
+ mapping));
+ else {
+ int i;
+
+ for (i = 0;
+ i < ((cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE + PAGE_SIZE - 1) /
+ PAGE_SIZE;
+ ++i)
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ cq->queue.page_list[i].buf,
+ pci_unmap_addr(&cq->queue.page_list[i],
+ mapping));
+
+ kfree(cq->queue.page_list);
+ }
+
+ mthca_free(&dev->cq_table.alloc, cq->cqn);
+ kfree(mailbox);
+}
+
+int __devinit mthca_init_cq_table(struct mthca_dev *dev)
+{
+ int err;
+
+ spin_lock_init(&dev->cq_table.lock);
+
+ err = mthca_alloc_init(&dev->cq_table.alloc,
+ dev->limits.num_cqs,
+ (1 << 24) - 1,
+ dev->limits.reserved_cqs);
+ if (err)
+ return err;
+
+ err = mthca_array_init(&dev->cq_table.cq,
+ dev->limits.num_cqs);
+ if (err)
+ mthca_alloc_cleanup(&dev->cq_table.alloc);
+
+ return err;
+}
+
+void __devexit mthca_cleanup_cq_table(struct mthca_dev *dev)
+{
+ mthca_array_cleanup(&dev->cq_table.cq, dev->limits.num_cqs);
+ mthca_alloc_cleanup(&dev->cq_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_dev.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#ifndef MTHCA_DEV_H
+#define MTHCA_DEV_H
+
+#include <linux/spinlock.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <asm/semaphore.h>
+
+#include "mthca_provider.h"
+#include "mthca_doorbell.h"
+
+#define DRV_NAME "ib_mthca"
+#define PFX DRV_NAME ": "
+#define DRV_VERSION "0.06-pre"
+#define DRV_RELDATE "November 8, 2004"
+
+/* Types of supported HCA */
+enum {
+ TAVOR, /* MT23108 */
+ ARBEL_COMPAT, /* MT25208 in Tavor compat mode */
+ ARBEL_NATIVE /* MT25208 with extended features */
+};
+
+enum {
+ MTHCA_FLAG_DDR_HIDDEN = 1 << 1,
+ MTHCA_FLAG_SRQ = 1 << 2,
+ MTHCA_FLAG_MSI = 1 << 3,
+ MTHCA_FLAG_MSI_X = 1 << 4,
+ MTHCA_FLAG_NO_LAM = 1 << 5
+};
+
+enum {
+ MTHCA_KAR_PAGE = 1,
+ MTHCA_MAX_PORTS = 2
+};
+
+enum {
+ MTHCA_EQ_CONTEXT_SIZE = 0x40,
+ MTHCA_CQ_CONTEXT_SIZE = 0x40,
+ MTHCA_QP_CONTEXT_SIZE = 0x200,
+ MTHCA_RDB_ENTRY_SIZE = 0x20,
+ MTHCA_AV_SIZE = 0x20,
+ MTHCA_MGM_ENTRY_SIZE = 0x40,
+
+ /* Arbel FW gives us these, but we need them for Tavor */
+ MTHCA_MPT_ENTRY_SIZE = 0x40,
+ MTHCA_MTT_SEG_SIZE = 0x40,
+};
+
+enum {
+ MTHCA_EQ_CMD,
+ MTHCA_EQ_ASYNC,
+ MTHCA_EQ_COMP,
+ MTHCA_NUM_EQ
+};
+
+struct mthca_cmd {
+ int use_events;
+ struct semaphore hcr_sem;
+ struct semaphore poll_sem;
+ struct semaphore event_sem;
+ int max_cmds;
+ spinlock_t context_lock;
+ int free_head;
+ struct mthca_cmd_context *context;
+ u16 token_mask;
+};
+
+struct mthca_limits {
+ int num_ports;
+ int vl_cap;
+ int mtu_cap;
+ int gid_table_len;
+ int pkey_table_len;
+ int local_ca_ack_delay;
+ int max_sg;
+ int num_qps;
+ int reserved_qps;
+ int num_srqs;
+ int reserved_srqs;
+ int num_eecs;
+ int reserved_eecs;
+ int num_cqs;
+ int reserved_cqs;
+ int num_eqs;
+ int reserved_eqs;
+ int num_mpts;
+ int num_mtt_segs;
+ int mtt_seg_size;
+ int reserved_mtts;
+ int reserved_mrws;
+ int reserved_uars;
+ int num_mgms;
+ int num_amgms;
+ int reserved_mcgs;
+ int num_pds;
+ int reserved_pds;
+};
+
+struct mthca_alloc {
+ u32 last;
+ u32 top;
+ u32 max;
+ u32 mask;
+ spinlock_t lock;
+ unsigned long *table;
+};
+
+struct mthca_array {
+ struct {
+ void **page;
+ int used;
+ } *page_list;
+};
+
+struct mthca_pd_table {
+ struct mthca_alloc alloc;
+};
+
+struct mthca_mr_table {
+ struct mthca_alloc mpt_alloc;
+ int max_mtt_order;
+ unsigned long **mtt_buddy;
+ u64 mtt_base;
+ struct mthca_icm_table *mtt_table;
+ struct mthca_icm_table *mpt_table;
+};
+
+struct mthca_eq_table {
+ struct mthca_alloc alloc;
+ void __iomem *clr_int;
+ u32 clr_mask;
+ struct mthca_eq eq[MTHCA_NUM_EQ];
+ u64 icm_virt;
+ struct page *icm_page;
+ dma_addr_t icm_dma;
+ int have_irq;
+ u8 inta_pin;
+};
+
+struct mthca_cq_table {
+ struct mthca_alloc alloc;
+ spinlock_t lock;
+ struct mthca_array cq;
+ struct mthca_icm_table *table;
+};
+
+struct mthca_qp_table {
+ struct mthca_alloc alloc;
+ u32 rdb_base;
+ int rdb_shift;
+ int sqp_start;
+ spinlock_t lock;
+ struct mthca_array qp;
+ struct mthca_icm_table *qp_table;
+ struct mthca_icm_table *eqp_table;
+};
+
+struct mthca_av_table {
+ struct pci_pool *pool;
+ int num_ddr_avs;
+ u64 ddr_av_base;
+ void __iomem *av_map;
+ struct mthca_alloc alloc;
+};
+
+struct mthca_mcg_table {
+ struct semaphore sem;
+ struct mthca_alloc alloc;
+};
+
+struct mthca_dev {
+ struct ib_device ib_dev;
+ struct pci_dev *pdev;
+
+ int hca_type;
+ unsigned long mthca_flags;
+
+ u32 rev_id;
+
+ /* firmware info */
+ u64 fw_ver;
+ union {
+ struct {
+ u64 fw_start;
+ u64 fw_end;
+ } tavor;
+ struct {
+ u64 clr_int_base;
+ u64 eq_arm_base;
+ u64 eq_set_ci_base;
+ struct mthca_icm *fw_icm;
+ struct mthca_icm *aux_icm;
+ u16 fw_pages;
+ } arbel;
+ } fw;
+
+ u64 ddr_start;
+ u64 ddr_end;
+
+ MTHCA_DECLARE_DOORBELL_LOCK(doorbell_lock)
+ struct semaphore cap_mask_mutex;
+
+ void __iomem *hcr;
+ void __iomem *ecr_base;
+ void __iomem *clr_base;
+ void __iomem *kar;
+
+ struct mthca_cmd cmd;
+ struct mthca_limits limits;
+
+ struct mthca_pd_table pd_table;
+ struct mthca_mr_table mr_table;
+ struct mthca_eq_table eq_table;
+ struct mthca_cq_table cq_table;
+ struct mthca_qp_table qp_table;
+ struct mthca_av_table av_table;
+ struct mthca_mcg_table mcg_table;
+
+ struct mthca_pd driver_pd;
+ struct mthca_mr driver_mr;
+
+ struct ib_mad_agent *send_agent[MTHCA_MAX_PORTS][2];
+ struct ib_ah *sm_ah[MTHCA_MAX_PORTS];
+ spinlock_t sm_lock;
+};
+
+#define mthca_dbg(mdev, format, arg...) \
+ dev_dbg(&mdev->pdev->dev, format, ## arg)
+#define mthca_err(mdev, format, arg...) \
+ dev_err(&mdev->pdev->dev, format, ## arg)
+#define mthca_info(mdev, format, arg...) \
+ dev_info(&mdev->pdev->dev, format, ## arg)
+#define mthca_warn(mdev, format, arg...) \
+ dev_warn(&mdev->pdev->dev, format, ## arg)
+
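+/*
+ * MTHCA_GET() and MTHCA_PUT() access big-endian fields in command
+ * mailboxes, byte-swapping according to the field size.  The
+ * __buggy_use_of_*() helpers are never defined, so an unsupported
+ * field size fails at link time.
+ */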
+extern void __buggy_use_of_MTHCA_GET(void);
+extern void __buggy_use_of_MTHCA_PUT(void);
+
+#define MTHCA_GET(dest, source, offset) \
+ do { \
+ void *__p = (char *) (source) + (offset); \
+ switch (sizeof (dest)) { \
+ case 1: (dest) = *(u8 *) __p; break; \
+ case 2: (dest) = be16_to_cpup(__p); break; \
+ case 4: (dest) = be32_to_cpup(__p); break; \
+ case 8: (dest) = be64_to_cpup(__p); break; \
+ default: __buggy_use_of_MTHCA_GET(); \
+ } \
+ } while (0)
+
+#define MTHCA_PUT(dest, source, offset) \
+ do { \
+ __typeof__(source) *__p = \
+ (__typeof__(source) *) ((char *) (dest) + (offset)); \
+ switch (sizeof(source)) { \
+ case 1: *__p = (source); break; \
+ case 2: *__p = cpu_to_be16(source); break; \
+ case 4: *__p = cpu_to_be32(source); break; \
+ case 8: *__p = cpu_to_be64(source); break; \
+ default: __buggy_use_of_MTHCA_PUT(); \
+ } \
+ } while (0)
+
+int mthca_reset(struct mthca_dev *mdev);
+
+u32 mthca_alloc(struct mthca_alloc *alloc);
+void mthca_free(struct mthca_alloc *alloc, u32 obj);
+int mthca_alloc_init(struct mthca_alloc *alloc, u32 num, u32 mask,
+ u32 reserved);
+void mthca_alloc_cleanup(struct mthca_alloc *alloc);
+void *mthca_array_get(struct mthca_array *array, int index);
+int mthca_array_set(struct mthca_array *array, int index, void *value);
+void mthca_array_clear(struct mthca_array *array, int index);
+int mthca_array_init(struct mthca_array *array, int nent);
+void mthca_array_cleanup(struct mthca_array *array, int nent);
+
+int mthca_init_pd_table(struct mthca_dev *dev);
+int mthca_init_mr_table(struct mthca_dev *dev);
+int mthca_init_eq_table(struct mthca_dev *dev);
+int mthca_init_cq_table(struct mthca_dev *dev);
+int mthca_init_qp_table(struct mthca_dev *dev);
+int mthca_init_av_table(struct mthca_dev *dev);
+int mthca_init_mcg_table(struct mthca_dev *dev);
+
+void mthca_cleanup_pd_table(struct mthca_dev *dev);
+void mthca_cleanup_mr_table(struct mthca_dev *dev);
+void mthca_cleanup_eq_table(struct mthca_dev *dev);
+void mthca_cleanup_cq_table(struct mthca_dev *dev);
+void mthca_cleanup_qp_table(struct mthca_dev *dev);
+void mthca_cleanup_av_table(struct mthca_dev *dev);
+void mthca_cleanup_mcg_table(struct mthca_dev *dev);
+
+int mthca_register_device(struct mthca_dev *dev);
+void mthca_unregister_device(struct mthca_dev *dev);
+
+int mthca_pd_alloc(struct mthca_dev *dev, struct mthca_pd *pd);
+void mthca_pd_free(struct mthca_dev *dev, struct mthca_pd *pd);
+
+int mthca_mr_alloc_notrans(struct mthca_dev *dev, u32 pd,
+ u32 access, struct mthca_mr *mr);
+int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd,
+ u64 *buffer_list, int buffer_size_shift,
+ int list_len, u64 iova, u64 total_size,
+ u32 access, struct mthca_mr *mr);
+void mthca_free_mr(struct mthca_dev *dev, struct mthca_mr *mr);
+
+int mthca_map_eq_icm(struct mthca_dev *dev, u64 icm_virt);
+void mthca_unmap_eq_icm(struct mthca_dev *dev);
+
+int mthca_poll_cq(struct ib_cq *ibcq, int num_entries,
+ struct ib_wc *entry);
+void mthca_arm_cq(struct mthca_dev *dev, struct mthca_cq *cq,
+ int solicited);
+int mthca_init_cq(struct mthca_dev *dev, int nent,
+ struct mthca_cq *cq);
+void mthca_free_cq(struct mthca_dev *dev,
+ struct mthca_cq *cq);
+void mthca_cq_event(struct mthca_dev *dev, u32 cqn);
+void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn);
+
+void mthca_qp_event(struct mthca_dev *dev, u32 qpn,
+ enum ib_event_type event_type);
+int mthca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask);
+int mthca_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ struct ib_send_wr **bad_wr);
+int mthca_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
+ struct ib_recv_wr **bad_wr);
+int mthca_free_err_wqe(struct mthca_qp *qp, int is_send,
+ int index, int *dbd, u32 *new_wqe);
+int mthca_alloc_qp(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_cq *send_cq,
+ struct mthca_cq *recv_cq,
+ enum ib_qp_type type,
+ enum ib_sig_type send_policy,
+ enum ib_sig_type recv_policy,
+ struct mthca_qp *qp);
+int mthca_alloc_sqp(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_cq *send_cq,
+ struct mthca_cq *recv_cq,
+ enum ib_sig_type send_policy,
+ enum ib_sig_type recv_policy,
+ int qpn,
+ int port,
+ struct mthca_sqp *sqp);
+void mthca_free_qp(struct mthca_dev *dev, struct mthca_qp *qp);
+int mthca_create_ah(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct ib_ah_attr *ah_attr,
+ struct mthca_ah *ah);
+int mthca_destroy_ah(struct mthca_dev *dev, struct mthca_ah *ah);
+int mthca_read_ah(struct mthca_dev *dev, struct mthca_ah *ah,
+ struct ib_ud_header *header);
+
+int mthca_multicast_attach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid);
+int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid);
+
+int mthca_process_mad(struct ib_device *ibdev,
+ int mad_flags,
+ u8 port_num,
+ struct ib_wc *in_wc,
+ struct ib_grh *in_grh,
+ struct ib_mad *in_mad,
+ struct ib_mad *out_mad);
+int mthca_create_agents(struct mthca_dev *dev);
+void mthca_free_agents(struct mthca_dev *dev);
+
+static inline struct mthca_dev *to_mdev(struct ib_device *ibdev)
+{
+ return container_of(ibdev, struct mthca_dev, ib_dev);
+}
+
+#endif /* MTHCA_DEV_H */
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_eq.c 1382 2004-12-24 02:21:02Z roland $
+ */
+
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+#include "mthca_config_reg.h"
+
+enum {
+ MTHCA_NUM_ASYNC_EQE = 0x80,
+ MTHCA_NUM_CMD_EQE = 0x80,
+ MTHCA_EQ_ENTRY_SIZE = 0x20
+};
+
+/*
+ * Must be packed because start is 64 bits but only aligned to 32 bits.
+ */
+struct mthca_eq_context {
+ u32 flags;
+ u64 start;
+ u32 logsize_usrpage;
+ u32 pd;
+ u8 reserved1[3];
+ u8 intr;
+ u32 lost_count;
+ u32 lkey;
+ u32 reserved2[2];
+ u32 consumer_index;
+ u32 producer_index;
+ u32 reserved3[4];
+} __attribute__((packed));
+
+#define MTHCA_EQ_STATUS_OK ( 0 << 28)
+#define MTHCA_EQ_STATUS_OVERFLOW ( 9 << 28)
+#define MTHCA_EQ_STATUS_WRITE_FAIL (10 << 28)
+#define MTHCA_EQ_OWNER_SW ( 0 << 24)
+#define MTHCA_EQ_OWNER_HW ( 1 << 24)
+#define MTHCA_EQ_FLAG_TR ( 1 << 18)
+#define MTHCA_EQ_FLAG_OI ( 1 << 17)
+#define MTHCA_EQ_STATE_ARMED ( 1 << 8)
+#define MTHCA_EQ_STATE_FIRED ( 2 << 8)
+#define MTHCA_EQ_STATE_ALWAYS_ARMED ( 3 << 8)
+
+enum {
+ MTHCA_EVENT_TYPE_COMP = 0x00,
+ MTHCA_EVENT_TYPE_PATH_MIG = 0x01,
+ MTHCA_EVENT_TYPE_COMM_EST = 0x02,
+ MTHCA_EVENT_TYPE_SQ_DRAINED = 0x03,
+ MTHCA_EVENT_TYPE_SRQ_LAST_WQE = 0x13,
+ MTHCA_EVENT_TYPE_CQ_ERROR = 0x04,
+ MTHCA_EVENT_TYPE_WQ_CATAS_ERROR = 0x05,
+ MTHCA_EVENT_TYPE_EEC_CATAS_ERROR = 0x06,
+ MTHCA_EVENT_TYPE_PATH_MIG_FAILED = 0x07,
+ MTHCA_EVENT_TYPE_WQ_INVAL_REQ_ERROR = 0x10,
+ MTHCA_EVENT_TYPE_WQ_ACCESS_ERROR = 0x11,
+ MTHCA_EVENT_TYPE_SRQ_CATAS_ERROR = 0x12,
+ MTHCA_EVENT_TYPE_LOCAL_CATAS_ERROR = 0x08,
+ MTHCA_EVENT_TYPE_PORT_CHANGE = 0x09,
+ MTHCA_EVENT_TYPE_EQ_OVERFLOW = 0x0f,
+ MTHCA_EVENT_TYPE_ECC_DETECT = 0x0e,
+ MTHCA_EVENT_TYPE_CMD = 0x0a
+};
+
+#define MTHCA_ASYNC_EVENT_MASK ((1ULL << MTHCA_EVENT_TYPE_PATH_MIG) | \
+ (1ULL << MTHCA_EVENT_TYPE_COMM_EST) | \
+ (1ULL << MTHCA_EVENT_TYPE_SQ_DRAINED) | \
+ (1ULL << MTHCA_EVENT_TYPE_CQ_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_WQ_CATAS_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_EEC_CATAS_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_PATH_MIG_FAILED) | \
+ (1ULL << MTHCA_EVENT_TYPE_WQ_INVAL_REQ_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_WQ_ACCESS_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_LOCAL_CATAS_ERROR) | \
+ (1ULL << MTHCA_EVENT_TYPE_PORT_CHANGE) | \
+ (1ULL << MTHCA_EVENT_TYPE_ECC_DETECT))
+#define MTHCA_SRQ_EVENT_MASK    ((1ULL << MTHCA_EVENT_TYPE_SRQ_CATAS_ERROR)    | \
+                                 (1ULL << MTHCA_EVENT_TYPE_SRQ_LAST_WQE))
+#define MTHCA_CMD_EVENT_MASK (1ULL << MTHCA_EVENT_TYPE_CMD)
+
+#define MTHCA_EQ_DB_INC_CI (1 << 24)
+#define MTHCA_EQ_DB_REQ_NOT (2 << 24)
+#define MTHCA_EQ_DB_DISARM_CQ (3 << 24)
+#define MTHCA_EQ_DB_SET_CI (4 << 24)
+#define MTHCA_EQ_DB_ALWAYS_ARM (5 << 24)
+
+struct mthca_eqe {
+ u8 reserved1;
+ u8 type;
+ u8 reserved2;
+ u8 subtype;
+ union {
+ u32 raw[6];
+ struct {
+ u32 cqn;
+ } __attribute__((packed)) comp;
+ struct {
+ u16 reserved1;
+ u16 token;
+ u32 reserved2;
+ u8 reserved3[3];
+ u8 status;
+ u64 out_param;
+ } __attribute__((packed)) cmd;
+ struct {
+ u32 qpn;
+ } __attribute__((packed)) qp;
+ struct {
+ u32 cqn;
+ u32 reserved1;
+ u8 reserved2[3];
+ u8 syndrome;
+ } __attribute__((packed)) cq_err;
+ struct {
+ u32 reserved1[2];
+ u32 port;
+ } __attribute__((packed)) port_change;
+ } event;
+ u8 reserved3[3];
+ u8 owner;
+} __attribute__((packed));
+
+#define MTHCA_EQ_ENTRY_OWNER_SW (0 << 7)
+#define MTHCA_EQ_ENTRY_OWNER_HW (1 << 7)
+
+static inline u64 async_mask(struct mthca_dev *dev)
+{
+ return dev->mthca_flags & MTHCA_FLAG_SRQ ?
+ MTHCA_ASYNC_EVENT_MASK | MTHCA_SRQ_EVENT_MASK :
+ MTHCA_ASYNC_EVENT_MASK;
+}
+
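+/*
+ * EQ doorbells: set_eq_ci() updates the consumer index, eq_req_not()
+ * re-arms the EQ to request the next event, and disarm_cq() asks the
+ * HCA to disarm the given CQ.
+ */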
+static inline void set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn);
+ doorbell[1] = cpu_to_be32(ci & (eq->nent - 1));
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_EQ_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+}
+
+static inline void eq_req_not(struct mthca_dev *dev, int eqn)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_REQ_NOT | eqn);
+ doorbell[1] = 0;
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_EQ_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+}
+
+static inline void disarm_cq(struct mthca_dev *dev, int eqn, int cqn)
+{
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_DISARM_CQ | eqn);
+ doorbell[1] = cpu_to_be32(cqn);
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_EQ_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+}
+
+static inline struct mthca_eqe *get_eqe(struct mthca_eq *eq, u32 entry)
+{
+ unsigned long off = (entry & (eq->nent - 1)) * MTHCA_EQ_ENTRY_SIZE;
+ return eq->page_list[off / PAGE_SIZE].buf + off % PAGE_SIZE;
+}
+
+static inline struct mthca_eqe* next_eqe_sw(struct mthca_eq *eq)
+{
+ struct mthca_eqe* eqe;
+ eqe = get_eqe(eq, eq->cons_index);
+ return (MTHCA_EQ_ENTRY_OWNER_HW & eqe->owner) ? NULL : eqe;
+}
+
+static inline void set_eqe_hw(struct mthca_eqe *eqe)
+{
+ eqe->owner = MTHCA_EQ_ENTRY_OWNER_HW;
+}
+
+static void port_change(struct mthca_dev *dev, int port, int active)
+{
+ struct ib_event record;
+
+ mthca_dbg(dev, "Port change to %s for port %d\n",
+ active ? "active" : "down", port);
+
+ record.device = &dev->ib_dev;
+ record.event = active ? IB_EVENT_PORT_ACTIVE : IB_EVENT_PORT_ERR;
+ record.element.port_num = port;
+
+ ib_dispatch_event(&record);
+}
+
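+/*
+ * Drain one EQ: dispatch each event to the CQ/QP/command handlers,
+ * hand the EQE back to hardware, then update the consumer index and
+ * re-arm the EQ.
+ */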
+static void mthca_eq_int(struct mthca_dev *dev, struct mthca_eq *eq)
+{
+ struct mthca_eqe *eqe;
+ int disarm_cqn;
+ int eqes_found = 0;
+
+ while ((eqe = next_eqe_sw(eq))) {
+ int set_ci = 0;
+
+ /*
+ * Make sure we read EQ entry contents after we've
+ * checked the ownership bit.
+ */
+ rmb();
+
+ switch (eqe->type) {
+ case MTHCA_EVENT_TYPE_COMP:
+ disarm_cqn = be32_to_cpu(eqe->event.comp.cqn) & 0xffffff;
+ disarm_cq(dev, eq->eqn, disarm_cqn);
+ mthca_cq_event(dev, disarm_cqn);
+ break;
+
+ case MTHCA_EVENT_TYPE_PATH_MIG:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_PATH_MIG);
+ break;
+
+ case MTHCA_EVENT_TYPE_COMM_EST:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_COMM_EST);
+ break;
+
+ case MTHCA_EVENT_TYPE_SQ_DRAINED:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_SQ_DRAINED);
+ break;
+
+ case MTHCA_EVENT_TYPE_WQ_CATAS_ERROR:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_QP_FATAL);
+ break;
+
+ case MTHCA_EVENT_TYPE_PATH_MIG_FAILED:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_PATH_MIG_ERR);
+ break;
+
+ case MTHCA_EVENT_TYPE_WQ_INVAL_REQ_ERROR:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_QP_REQ_ERR);
+ break;
+
+ case MTHCA_EVENT_TYPE_WQ_ACCESS_ERROR:
+ mthca_qp_event(dev, be32_to_cpu(eqe->event.qp.qpn) & 0xffffff,
+ IB_EVENT_QP_ACCESS_ERR);
+ break;
+
+ case MTHCA_EVENT_TYPE_CMD:
+ mthca_cmd_event(dev,
+ be16_to_cpu(eqe->event.cmd.token),
+ eqe->event.cmd.status,
+ be64_to_cpu(eqe->event.cmd.out_param));
+ /*
+ * cmd_event() may add more commands.
+ * The card will think the queue has overflowed if
+ * we don't tell it we've been processing events.
+ */
+ set_ci = 1;
+ break;
+
+ case MTHCA_EVENT_TYPE_PORT_CHANGE:
+ port_change(dev,
+ (be32_to_cpu(eqe->event.port_change.port) >> 28) & 3,
+ eqe->subtype == 0x4);
+ break;
+
+ case MTHCA_EVENT_TYPE_CQ_ERROR:
+ mthca_warn(dev, "CQ %s on CQN %08x\n",
+ eqe->event.cq_err.syndrome == 1 ?
+ "overrun" : "access violation",
+ be32_to_cpu(eqe->event.cq_err.cqn));
+ break;
+
+ case MTHCA_EVENT_TYPE_EQ_OVERFLOW:
+ mthca_warn(dev, "EQ overrun on EQN %d\n", eq->eqn);
+ break;
+
+ case MTHCA_EVENT_TYPE_EEC_CATAS_ERROR:
+ case MTHCA_EVENT_TYPE_SRQ_CATAS_ERROR:
+ case MTHCA_EVENT_TYPE_LOCAL_CATAS_ERROR:
+ case MTHCA_EVENT_TYPE_ECC_DETECT:
+ default:
+ mthca_warn(dev, "Unhandled event %02x(%02x) on EQ %d\n",
+ eqe->type, eqe->subtype, eq->eqn);
+ break;
+		}
+
+ set_eqe_hw(eqe);
+ ++eq->cons_index;
+ eqes_found = 1;
+
+ if (set_ci) {
+ wmb(); /* see comment below */
+ set_eq_ci(dev, eq, eq->cons_index);
+ set_ci = 0;
+ }
+ }
+
+ /*
+ * This barrier makes sure that all updates to
+ * ownership bits done by set_eqe_hw() hit memory
+ * before the consumer index is updated. set_eq_ci()
+ * allows the HCA to possibly write more EQ entries,
+ * and we want to avoid the exceedingly unlikely
+ * possibility of the HCA writing an entry and then
+ * having set_eqe_hw() overwrite the owner field.
+ */
+ if (likely(eqes_found)) {
+ wmb();
+ set_eq_ci(dev, eq, eq->cons_index);
+ }
+ eq_req_not(dev, eq->eqn);
+}
+
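+/*
+ * INTx interrupt handler: clear the interrupt, read the ECR to find
+ * which EQs have pending events, and service each one.
+ */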
+static irqreturn_t mthca_interrupt(int irq, void *dev_ptr, struct pt_regs *regs)
+{
+ struct mthca_dev *dev = dev_ptr;
+ u32 ecr;
+ int work = 0;
+ int i;
+
+ if (dev->eq_table.clr_mask)
+ writel(dev->eq_table.clr_mask, dev->eq_table.clr_int);
+
+ if ((ecr = readl(dev->ecr_base + 4)) != 0) {
+ work = 1;
+
+ writel(ecr, dev->ecr_base +
+ MTHCA_ECR_CLR_BASE - MTHCA_ECR_BASE + 4);
+
+ for (i = 0; i < MTHCA_NUM_EQ; ++i)
+ if (ecr & dev->eq_table.eq[i].ecr_mask)
+ mthca_eq_int(dev, &dev->eq_table.eq[i]);
+ }
+
+ return IRQ_RETVAL(work);
+}
+
+static irqreturn_t mthca_msi_x_interrupt(int irq, void *eq_ptr,
+ struct pt_regs *regs)
+{
+ struct mthca_eq *eq = eq_ptr;
+ struct mthca_dev *dev = eq->dev;
+
+ mthca_eq_int(dev, eq);
+
+ /* MSI-X vectors always belong to us */
+ return IRQ_HANDLED;
+}
+
+static int __devinit mthca_create_eq(struct mthca_dev *dev,
+ int nent,
+ u8 intr,
+ struct mthca_eq *eq)
+{
+ int npages = (nent * MTHCA_EQ_ENTRY_SIZE + PAGE_SIZE - 1) /
+ PAGE_SIZE;
+ u64 *dma_list = NULL;
+ dma_addr_t t;
+ void *mailbox = NULL;
+ struct mthca_eq_context *eq_context;
+ int err = -ENOMEM;
+ int i;
+ u8 status;
+
+ /* Make sure EQ size is aligned to a power of 2 size. */
+ for (i = 1; i < nent; i <<= 1)
+ ; /* nothing */
+ nent = i;
+
+ eq->dev = dev;
+
+ eq->page_list = kmalloc(npages * sizeof *eq->page_list,
+ GFP_KERNEL);
+ if (!eq->page_list)
+ goto err_out;
+
+ for (i = 0; i < npages; ++i)
+ eq->page_list[i].buf = NULL;
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_out_free;
+
+ mailbox = kmalloc(sizeof *eq_context + MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox)
+ goto err_out_free;
+ eq_context = MAILBOX_ALIGN(mailbox);
+
+ for (i = 0; i < npages; ++i) {
+ eq->page_list[i].buf = pci_alloc_consistent(dev->pdev,
+ PAGE_SIZE, &t);
+ if (!eq->page_list[i].buf)
+ goto err_out_free;
+
+ dma_list[i] = t;
+ pci_unmap_addr_set(&eq->page_list[i], mapping, t);
+
+ memset(eq->page_list[i].buf, 0, PAGE_SIZE);
+ }
+
+ for (i = 0; i < nent; ++i)
+ set_eqe_hw(get_eqe(eq, i));
+
+ eq->eqn = mthca_alloc(&dev->eq_table.alloc);
+ if (eq->eqn == -1)
+ goto err_out_free;
+
+ err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num,
+ dma_list, PAGE_SHIFT, npages,
+ 0, npages * PAGE_SIZE,
+ MTHCA_MPT_FLAG_LOCAL_WRITE |
+ MTHCA_MPT_FLAG_LOCAL_READ,
+ &eq->mr);
+ if (err)
+ goto err_out_free_eq;
+
+ eq->nent = nent;
+
+ memset(eq_context, 0, sizeof *eq_context);
+ eq_context->flags = cpu_to_be32(MTHCA_EQ_STATUS_OK |
+ MTHCA_EQ_OWNER_HW |
+ MTHCA_EQ_STATE_ARMED |
+ MTHCA_EQ_FLAG_TR);
+ eq_context->start = cpu_to_be64(0);
+ eq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24 |
+ MTHCA_KAR_PAGE);
+ eq_context->pd = cpu_to_be32(dev->driver_pd.pd_num);
+ eq_context->intr = intr;
+ eq_context->lkey = cpu_to_be32(eq->mr.ibmr.lkey);
+
+ err = mthca_SW2HW_EQ(dev, eq_context, eq->eqn, &status);
+ if (err) {
+ mthca_warn(dev, "SW2HW_EQ failed (%d)\n", err);
+ goto err_out_free_mr;
+ }
+ if (status) {
+ mthca_warn(dev, "SW2HW_EQ returned status 0x%02x\n",
+ status);
+ err = -EINVAL;
+ goto err_out_free_mr;
+ }
+
+ kfree(dma_list);
+ kfree(mailbox);
+
+ eq->ecr_mask = swab32(1 << eq->eqn);
+ eq->cons_index = 0;
+
+ eq_req_not(dev, eq->eqn);
+
+ mthca_dbg(dev, "Allocated EQ %d with %d entries\n",
+ eq->eqn, nent);
+
+ return err;
+
+ err_out_free_mr:
+ mthca_free_mr(dev, &eq->mr);
+
+ err_out_free_eq:
+ mthca_free(&dev->eq_table.alloc, eq->eqn);
+
+ err_out_free:
+ for (i = 0; i < npages; ++i)
+ if (eq->page_list[i].buf)
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ eq->page_list[i].buf,
+ pci_unmap_addr(&eq->page_list[i],
+ mapping));
+
+ kfree(eq->page_list);
+ kfree(dma_list);
+ kfree(mailbox);
+
+ err_out:
+ return err;
+}
+
+static void mthca_free_eq(struct mthca_dev *dev,
+ struct mthca_eq *eq)
+{
+ void *mailbox = NULL;
+ int err;
+ u8 status;
+ int npages = (eq->nent * MTHCA_EQ_ENTRY_SIZE + PAGE_SIZE - 1) /
+ PAGE_SIZE;
+ int i;
+
+ mailbox = kmalloc(sizeof (struct mthca_eq_context) + MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox)
+ return;
+
+ err = mthca_HW2SW_EQ(dev, MAILBOX_ALIGN(mailbox),
+ eq->eqn, &status);
+ if (err)
+ mthca_warn(dev, "HW2SW_EQ failed (%d)\n", err);
+ if (status)
+ mthca_warn(dev, "HW2SW_EQ returned status 0x%02x\n",
+ status);
+
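+	/*
+	 * Debugging aid: change to "if (1)" to dump the EQ context
+	 * returned by HW2SW_EQ.
+	 */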
+ if (0) {
+ mthca_dbg(dev, "Dumping EQ context %02x:\n", eq->eqn);
+ for (i = 0; i < sizeof (struct mthca_eq_context) / 4; ++i) {
+ if (i % 4 == 0)
+ printk("[%02x] ", i * 4);
+ printk(" %08x", be32_to_cpup(MAILBOX_ALIGN(mailbox) + i * 4));
+ if ((i + 1) % 4 == 0)
+ printk("\n");
+ }
+ }
+
+ mthca_free_mr(dev, &eq->mr);
+ for (i = 0; i < npages; ++i)
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ eq->page_list[i].buf,
+ pci_unmap_addr(&eq->page_list[i], mapping));
+
+ kfree(eq->page_list);
+ kfree(mailbox);
+}
+
+static void mthca_free_irqs(struct mthca_dev *dev)
+{
+ int i;
+
+ if (dev->eq_table.have_irq)
+ free_irq(dev->pdev->irq, dev);
+ for (i = 0; i < MTHCA_NUM_EQ; ++i)
+ if (dev->eq_table.eq[i].have_irq)
+ free_irq(dev->eq_table.eq[i].msi_x_vector,
+ dev->eq_table.eq + i);
+}
+
+int __devinit mthca_map_eq_icm(struct mthca_dev *dev, u64 icm_virt)
+{
+ int ret;
+ u8 status;
+
+ /*
+ * We assume that mapping one page is enough for the whole EQ
+ * context table. This is fine with all current HCAs, because
+ * we only use 32 EQs and each EQ uses 32 bytes of context
+ * memory, or 1 KB total.
+ */
+ dev->eq_table.icm_virt = icm_virt;
+ dev->eq_table.icm_page = alloc_page(GFP_HIGHUSER);
+ if (!dev->eq_table.icm_page)
+ return -ENOMEM;
+ dev->eq_table.icm_dma = pci_map_page(dev->pdev, dev->eq_table.icm_page, 0,
+ PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev->eq_table.icm_dma)) {
+ __free_page(dev->eq_table.icm_page);
+ return -ENOMEM;
+ }
+
+ ret = mthca_MAP_ICM_page(dev, dev->eq_table.icm_dma, icm_virt, &status);
+ if (!ret && status)
+ ret = -EINVAL;
+ if (ret) {
+ pci_unmap_page(dev->pdev, dev->eq_table.icm_dma, PAGE_SIZE,
+ PCI_DMA_BIDIRECTIONAL);
+ __free_page(dev->eq_table.icm_page);
+ }
+
+ return ret;
+}
+
+void __devexit mthca_unmap_eq_icm(struct mthca_dev *dev)
+{
+ u8 status;
+
+ mthca_UNMAP_ICM(dev, dev->eq_table.icm_virt, PAGE_SIZE / 4096, &status);
+ pci_unmap_page(dev->pdev, dev->eq_table.icm_dma, PAGE_SIZE,
+ PCI_DMA_BIDIRECTIONAL);
+ __free_page(dev->eq_table.icm_page);
+}
+
+int __devinit mthca_init_eq_table(struct mthca_dev *dev)
+{
+ int err;
+ u8 status;
+ u8 intr;
+ int i;
+
+ err = mthca_alloc_init(&dev->eq_table.alloc,
+ dev->limits.num_eqs,
+ dev->limits.num_eqs - 1,
+ dev->limits.reserved_eqs);
+ if (err)
+ return err;
+
+ if (dev->mthca_flags & MTHCA_FLAG_MSI ||
+ dev->mthca_flags & MTHCA_FLAG_MSI_X) {
+ dev->eq_table.clr_mask = 0;
+ } else {
+ dev->eq_table.clr_mask =
+ swab32(1 << (dev->eq_table.inta_pin & 31));
+ dev->eq_table.clr_int = dev->clr_base +
+			(dev->eq_table.inta_pin < 32 ? 4 : 0);
+ }
+
+ intr = (dev->mthca_flags & MTHCA_FLAG_MSI) ?
+ 128 : dev->eq_table.inta_pin;
+
+ err = mthca_create_eq(dev, dev->limits.num_cqs,
+ (dev->mthca_flags & MTHCA_FLAG_MSI_X) ? 128 : intr,
+ &dev->eq_table.eq[MTHCA_EQ_COMP]);
+ if (err)
+ goto err_out_free;
+
+ err = mthca_create_eq(dev, MTHCA_NUM_ASYNC_EQE,
+ (dev->mthca_flags & MTHCA_FLAG_MSI_X) ? 129 : intr,
+ &dev->eq_table.eq[MTHCA_EQ_ASYNC]);
+ if (err)
+ goto err_out_comp;
+
+ err = mthca_create_eq(dev, MTHCA_NUM_CMD_EQE,
+ (dev->mthca_flags & MTHCA_FLAG_MSI_X) ? 130 : intr,
+ &dev->eq_table.eq[MTHCA_EQ_CMD]);
+ if (err)
+ goto err_out_async;
+
+ if (dev->mthca_flags & MTHCA_FLAG_MSI_X) {
+ static const char *eq_name[] = {
+ [MTHCA_EQ_COMP] = DRV_NAME " (comp)",
+ [MTHCA_EQ_ASYNC] = DRV_NAME " (async)",
+ [MTHCA_EQ_CMD] = DRV_NAME " (cmd)"
+ };
+
+ for (i = 0; i < MTHCA_NUM_EQ; ++i) {
+ err = request_irq(dev->eq_table.eq[i].msi_x_vector,
+ mthca_msi_x_interrupt, 0,
+ eq_name[i], dev->eq_table.eq + i);
+ if (err)
+ goto err_out_cmd;
+ dev->eq_table.eq[i].have_irq = 1;
+ }
+ } else {
+ err = request_irq(dev->pdev->irq, mthca_interrupt, SA_SHIRQ,
+ DRV_NAME, dev);
+ if (err)
+ goto err_out_cmd;
+ dev->eq_table.have_irq = 1;
+ }
+
+ err = mthca_MAP_EQ(dev, async_mask(dev),
+ 0, dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn, &status);
+ if (err)
+ mthca_warn(dev, "MAP_EQ for async EQ %d failed (%d)\n",
+ dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn, err);
+ if (status)
+ mthca_warn(dev, "MAP_EQ for async EQ %d returned status 0x%02x\n",
+ dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn, status);
+
+ err = mthca_MAP_EQ(dev, MTHCA_CMD_EVENT_MASK,
+ 0, dev->eq_table.eq[MTHCA_EQ_CMD].eqn, &status);
+ if (err)
+ mthca_warn(dev, "MAP_EQ for cmd EQ %d failed (%d)\n",
+ dev->eq_table.eq[MTHCA_EQ_CMD].eqn, err);
+ if (status)
+ mthca_warn(dev, "MAP_EQ for cmd EQ %d returned status 0x%02x\n",
+ dev->eq_table.eq[MTHCA_EQ_CMD].eqn, status);
+
+ return 0;
+
+err_out_cmd:
+ mthca_free_irqs(dev);
+ mthca_free_eq(dev, &dev->eq_table.eq[MTHCA_EQ_CMD]);
+
+err_out_async:
+ mthca_free_eq(dev, &dev->eq_table.eq[MTHCA_EQ_ASYNC]);
+
+err_out_comp:
+ mthca_free_eq(dev, &dev->eq_table.eq[MTHCA_EQ_COMP]);
+
+err_out_free:
+ mthca_alloc_cleanup(&dev->eq_table.alloc);
+ return err;
+}
+
+void __devexit mthca_cleanup_eq_table(struct mthca_dev *dev)
+{
+ u8 status;
+ int i;
+
+ mthca_free_irqs(dev);
+
+ mthca_MAP_EQ(dev, async_mask(dev),
+ 1, dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn, &status);
+ mthca_MAP_EQ(dev, MTHCA_CMD_EVENT_MASK,
+ 1, dev->eq_table.eq[MTHCA_EQ_CMD].eqn, &status);
+
+ for (i = 0; i < MTHCA_NUM_EQ; ++i)
+ mthca_free_eq(dev, &dev->eq_table.eq[i]);
+
+ mthca_alloc_cleanup(&dev->eq_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_mad.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <ib_verbs.h>
+#include <ib_mad.h>
+#include <ib_smi.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+enum {
+ MTHCA_VENDOR_CLASS1 = 0x9,
+ MTHCA_VENDOR_CLASS2 = 0xa
+};
+
+struct mthca_trap_mad {
+ struct ib_mad *mad;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+};
+
+static void update_sm_ah(struct mthca_dev *dev,
+ u8 port_num, u16 lid, u8 sl)
+{
+ struct ib_ah *new_ah;
+ struct ib_ah_attr ah_attr;
+ unsigned long flags;
+
+ if (!dev->send_agent[port_num - 1][0])
+ return;
+
+ memset(&ah_attr, 0, sizeof ah_attr);
+ ah_attr.dlid = lid;
+ ah_attr.sl = sl;
+ ah_attr.port_num = port_num;
+
+ new_ah = ib_create_ah(dev->send_agent[port_num - 1][0]->qp->pd,
+ &ah_attr);
+ if (IS_ERR(new_ah))
+ return;
+
+ spin_lock_irqsave(&dev->sm_lock, flags);
+ if (dev->sm_ah[port_num - 1])
+ ib_destroy_ah(dev->sm_ah[port_num - 1]);
+ dev->sm_ah[port_num - 1] = new_ah;
+ spin_unlock_irqrestore(&dev->sm_lock, flags);
+}
+
+/*
+ * Snoop SM MADs for port info and P_Key table sets, so we can
+ * synthesize LID change and P_Key change events.
+ */
+static void smp_snoop(struct ib_device *ibdev,
+ u8 port_num,
+ struct ib_mad *mad)
+{
+ struct ib_event event;
+
+ if ((mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
+ mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) &&
+ mad->mad_hdr.method == IB_MGMT_METHOD_SET) {
+ if (mad->mad_hdr.attr_id == IB_SMP_ATTR_PORT_INFO) {
+ update_sm_ah(to_mdev(ibdev), port_num,
+ be16_to_cpup((__be16 *) (mad->data + 58)),
+ (*(u8 *) (mad->data + 76)) & 0xf);
+
+ event.device = ibdev;
+ event.event = IB_EVENT_LID_CHANGE;
+ event.element.port_num = port_num;
+ ib_dispatch_event(&event);
+ }
+
+ if (mad->mad_hdr.attr_id == IB_SMP_ATTR_PKEY_TABLE) {
+ event.device = ibdev;
+ event.event = IB_EVENT_PKEY_CHANGE;
+ event.element.port_num = port_num;
+ ib_dispatch_event(&event);
+ }
+ }
+}
+
+static void forward_trap(struct mthca_dev *dev,
+ u8 port_num,
+ struct ib_mad *mad)
+{
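+	/* LID-routed SM MADs are sent on QP0; everything else goes to QP1. */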
+ int qpn = mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ struct mthca_trap_mad *tmad;
+ struct ib_sge gather_list;
+ struct ib_send_wr *bad_wr, wr = {
+ .opcode = IB_WR_SEND,
+ .sg_list = &gather_list,
+ .num_sge = 1,
+ .send_flags = IB_SEND_SIGNALED,
+ .wr = {
+ .ud = {
+ .remote_qpn = qpn,
+ .remote_qkey = qpn ? IB_QP1_QKEY : 0,
+ .timeout_ms = 0
+ }
+ }
+ };
+ struct ib_mad_agent *agent = dev->send_agent[port_num - 1][qpn];
+ int ret;
+ unsigned long flags;
+
+ if (agent) {
+ tmad = kmalloc(sizeof *tmad, GFP_KERNEL);
+ if (!tmad)
+ return;
+
+ tmad->mad = kmalloc(sizeof *tmad->mad, GFP_KERNEL);
+ if (!tmad->mad) {
+ kfree(tmad);
+ return;
+ }
+
+ memcpy(tmad->mad, mad, sizeof *mad);
+
+ wr.wr.ud.mad_hdr = &tmad->mad->mad_hdr;
+ wr.wr_id = (unsigned long) tmad;
+
+ gather_list.addr = dma_map_single(agent->device->dma_device,
+ tmad->mad,
+ sizeof *tmad->mad,
+ DMA_TO_DEVICE);
+ gather_list.length = sizeof *tmad->mad;
+ gather_list.lkey = to_mpd(agent->qp->pd)->ntmr.ibmr.lkey;
+ pci_unmap_addr_set(tmad, mapping, gather_list.addr);
+
+ /*
+ * We rely here on the fact that MLX QPs don't use the
+		 * address handle after the send is posted (strictly
+		 * speaking this violates the IB spec, but we know it's
+		 * OK for our devices).
+ */
+ spin_lock_irqsave(&dev->sm_lock, flags);
+ wr.wr.ud.ah = dev->sm_ah[port_num - 1];
+ if (wr.wr.ud.ah)
+ ret = ib_post_send_mad(agent, &wr, &bad_wr);
+ else
+ ret = -EINVAL;
+ spin_unlock_irqrestore(&dev->sm_lock, flags);
+
+ if (ret) {
+ dma_unmap_single(agent->device->dma_device,
+ pci_unmap_addr(tmad, mapping),
+ sizeof *tmad->mad,
+ DMA_TO_DEVICE);
+ kfree(tmad->mad);
+ kfree(tmad);
+ }
+ }
+}
+
+int mthca_process_mad(struct ib_device *ibdev,
+ int mad_flags,
+ u8 port_num,
+ struct ib_wc *in_wc,
+ struct ib_grh *in_grh,
+ struct ib_mad *in_mad,
+ struct ib_mad *out_mad)
+{
+ int err;
+ u8 status;
+ u16 slid = in_wc ? in_wc->slid : IB_LID_PERMISSIVE;
+
+ /* Forward locally generated traps to the SM */
+ if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP &&
+ slid == 0) {
+ forward_trap(to_mdev(ibdev), port_num, in_mad);
+ return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED;
+ }
+
+ /*
+	 * Only handle SM gets, sets, and trap represses for the SM class.
+	 *
+	 * For all other classes, only handle PMA and Mellanox
+	 * vendor-specific gets and sets.
+ */
+ if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_LID_ROUTED ||
+ in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
+ if (in_mad->mad_hdr.method != IB_MGMT_METHOD_GET &&
+ in_mad->mad_hdr.method != IB_MGMT_METHOD_SET &&
+ in_mad->mad_hdr.method != IB_MGMT_METHOD_TRAP_REPRESS)
+ return IB_MAD_RESULT_SUCCESS;
+
+ /*
+ * Don't process SMInfo queries or vendor-specific
+ * MADs -- the SMA can't handle them.
+ */
+ if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO ||
+ ((in_mad->mad_hdr.attr_id & IB_SMP_ATTR_VENDOR_MASK) ==
+ IB_SMP_ATTR_VENDOR_MASK))
+ return IB_MAD_RESULT_SUCCESS;
+ } else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT ||
+ in_mad->mad_hdr.mgmt_class == MTHCA_VENDOR_CLASS1 ||
+ in_mad->mad_hdr.mgmt_class == MTHCA_VENDOR_CLASS2) {
+ if (in_mad->mad_hdr.method != IB_MGMT_METHOD_GET &&
+ in_mad->mad_hdr.method != IB_MGMT_METHOD_SET)
+ return IB_MAD_RESULT_SUCCESS;
+ } else
+ return IB_MAD_RESULT_SUCCESS;
+
+ err = mthca_MAD_IFC(to_mdev(ibdev),
+ mad_flags & IB_MAD_IGNORE_MKEY,
+ mad_flags & IB_MAD_IGNORE_BKEY,
+ port_num, in_wc, in_grh, in_mad, out_mad,
+ &status);
+ if (err) {
+ mthca_err(to_mdev(ibdev), "MAD_IFC failed\n");
+ return IB_MAD_RESULT_FAILURE;
+ }
+ if (status == MTHCA_CMD_STAT_BAD_PKT)
+ return IB_MAD_RESULT_SUCCESS;
+ if (status) {
+ mthca_err(to_mdev(ibdev), "MAD_IFC returned status %02x\n",
+ status);
+ return IB_MAD_RESULT_FAILURE;
+ }
+
+ if (!out_mad->mad_hdr.status)
+ smp_snoop(ibdev, port_num, in_mad);
+
+ /* set return bit in status of directed route responses */
+ if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+ out_mad->mad_hdr.status |= cpu_to_be16(1 << 15);
+
+ if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP_REPRESS)
+ /* no response for trap repress */
+ return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_CONSUMED;
+
+ return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
+}
+
+static void send_handler(struct ib_mad_agent *agent,
+ struct ib_mad_send_wc *mad_send_wc)
+{
+ struct mthca_trap_mad *tmad =
+ (void *) (unsigned long) mad_send_wc->wr_id;
+
+ dma_unmap_single(agent->device->dma_device,
+ pci_unmap_addr(tmad, mapping),
+ sizeof *tmad->mad,
+ DMA_TO_DEVICE);
+ kfree(tmad->mad);
+ kfree(tmad);
+}
+
+int mthca_create_agents(struct mthca_dev *dev)
+{
+ struct ib_mad_agent *agent;
+ int p, q;
+
+ spin_lock_init(&dev->sm_lock);
+
+ for (p = 0; p < dev->limits.num_ports; ++p)
+ for (q = 0; q <= 1; ++q) {
+ agent = ib_register_mad_agent(&dev->ib_dev, p + 1,
+ q ? IB_QPT_GSI : IB_QPT_SMI,
+ NULL, 0, send_handler,
+ NULL, NULL);
+ if (IS_ERR(agent))
+ goto err;
+ dev->send_agent[p][q] = agent;
+ }
+
+ return 0;
+
+err:
+ for (p = 0; p < dev->limits.num_ports; ++p)
+ for (q = 0; q <= 1; ++q)
+ if (dev->send_agent[p][q])
+ ib_unregister_mad_agent(dev->send_agent[p][q]);
+
+ return PTR_ERR(agent);
+}
+
+void mthca_free_agents(struct mthca_dev *dev)
+{
+ struct ib_mad_agent *agent;
+ int p, q;
+
+ for (p = 0; p < dev->limits.num_ports; ++p) {
+ for (q = 0; q <= 1; ++q) {
+ agent = dev->send_agent[p][q];
+ dev->send_agent[p][q] = NULL;
+ ib_unregister_mad_agent(agent);
+ }
+
+ if (dev->sm_ah[p])
+ ib_destroy_ah(dev->sm_ah[p]);
+ }
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_main.c 1396 2004-12-28 04:10:27Z roland $
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include "mthca_dev.h"
+#include "mthca_config_reg.h"
+#include "mthca_cmd.h"
+#include "mthca_profile.h"
+#include "mthca_memfree.h"
+
+MODULE_AUTHOR("Roland Dreier");
+MODULE_DESCRIPTION("Mellanox InfiniBand HCA low-level driver");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_VERSION(DRV_VERSION);
+
+#ifdef CONFIG_PCI_MSI
+
+static int msi_x = 0;
+module_param(msi_x, int, 0444);
+MODULE_PARM_DESC(msi_x, "attempt to use MSI-X if nonzero");
+
+static int msi = 0;
+module_param(msi, int, 0444);
+MODULE_PARM_DESC(msi, "attempt to use MSI if nonzero");
+
+#else /* CONFIG_PCI_MSI */
+
+#define msi_x (0)
+#define msi (0)
+
+#endif /* CONFIG_PCI_MSI */
+
+static const char mthca_version[] __devinitdata =
+ "ib_mthca: Mellanox InfiniBand HCA driver v"
+ DRV_VERSION " (" DRV_RELDATE ")\n";
+
+static struct mthca_profile default_profile = {
+ .num_qp = 1 << 16,
+ .rdb_per_qp = 4,
+ .num_cq = 1 << 16,
+ .num_mcg = 1 << 13,
+ .num_mpt = 1 << 17,
+ .num_mtt = 1 << 20,
+ .num_udav = 1 << 15, /* Tavor only */
+ .uarc_size = 1 << 18, /* Arbel only */
+};
+
+static int __devinit mthca_tune_pci(struct mthca_dev *mdev)
+{
+ int cap;
+ u16 val;
+
+ /* First try to max out Read Byte Count */
+ cap = pci_find_capability(mdev->pdev, PCI_CAP_ID_PCIX);
+ if (cap) {
+ if (pci_read_config_word(mdev->pdev, cap + PCI_X_CMD, &val)) {
+ mthca_err(mdev, "Couldn't read PCI-X command register, "
+ "aborting.\n");
+ return -ENODEV;
+ }
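+		/* Max Read Byte Count field value 3 selects 4096-byte reads. */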
+ val = (val & ~PCI_X_CMD_MAX_READ) | (3 << 2);
+ if (pci_write_config_word(mdev->pdev, cap + PCI_X_CMD, val)) {
+ mthca_err(mdev, "Couldn't write PCI-X command register, "
+ "aborting.\n");
+ return -ENODEV;
+ }
+ } else if (mdev->hca_type == TAVOR)
+ mthca_info(mdev, "No PCI-X capability, not setting RBC.\n");
+
+ cap = pci_find_capability(mdev->pdev, PCI_CAP_ID_EXP);
+ if (cap) {
+ if (pci_read_config_word(mdev->pdev, cap + PCI_EXP_DEVCTL, &val)) {
+ mthca_err(mdev, "Couldn't read PCI Express device control "
+ "register, aborting.\n");
+ return -ENODEV;
+ }
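+		/* Max Read Request Size field value 5 selects 4096-byte requests. */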
+ val = (val & ~PCI_EXP_DEVCTL_READRQ) | (5 << 12);
+ if (pci_write_config_word(mdev->pdev, cap + PCI_EXP_DEVCTL, val)) {
+ mthca_err(mdev, "Couldn't write PCI Express device control "
+ "register, aborting.\n");
+ return -ENODEV;
+ }
+ } else if (mdev->hca_type == ARBEL_NATIVE ||
+ mdev->hca_type == ARBEL_COMPAT)
+ mthca_info(mdev, "No PCI Express capability, "
+ "not setting Max Read Request Size.\n");
+
+ return 0;
+}
+
+static int __devinit mthca_dev_lim(struct mthca_dev *mdev, struct mthca_dev_lim *dev_lim)
+{
+ int err;
+ u8 status;
+
+ err = mthca_QUERY_DEV_LIM(mdev, dev_lim, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_DEV_LIM command failed, aborting.\n");
+ return err;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_DEV_LIM returned status 0x%02x, "
+ "aborting.\n", status);
+ return -EINVAL;
+ }
+ if (dev_lim->min_page_sz > PAGE_SIZE) {
+ mthca_err(mdev, "HCA minimum page size of %d bigger than "
+ "kernel PAGE_SIZE of %ld, aborting.\n",
+ dev_lim->min_page_sz, PAGE_SIZE);
+ return -ENODEV;
+ }
+ if (dev_lim->num_ports > MTHCA_MAX_PORTS) {
+ mthca_err(mdev, "HCA has %d ports, but we only support %d, "
+ "aborting.\n",
+ dev_lim->num_ports, MTHCA_MAX_PORTS);
+ return -ENODEV;
+ }
+
+ mdev->limits.num_ports = dev_lim->num_ports;
+ mdev->limits.vl_cap = dev_lim->max_vl;
+ mdev->limits.mtu_cap = dev_lim->max_mtu;
+ mdev->limits.gid_table_len = dev_lim->max_gids;
+ mdev->limits.pkey_table_len = dev_lim->max_pkeys;
+ mdev->limits.local_ca_ack_delay = dev_lim->local_ca_ack_delay;
+ mdev->limits.max_sg = dev_lim->max_sg;
+ mdev->limits.reserved_qps = dev_lim->reserved_qps;
+ mdev->limits.reserved_srqs = dev_lim->reserved_srqs;
+ mdev->limits.reserved_eecs = dev_lim->reserved_eecs;
+ mdev->limits.reserved_cqs = dev_lim->reserved_cqs;
+ mdev->limits.reserved_eqs = dev_lim->reserved_eqs;
+ mdev->limits.reserved_mtts = dev_lim->reserved_mtts;
+ mdev->limits.reserved_mrws = dev_lim->reserved_mrws;
+ mdev->limits.reserved_uars = dev_lim->reserved_uars;
+ mdev->limits.reserved_pds = dev_lim->reserved_pds;
+
+ if (dev_lim->flags & DEV_LIM_FLAG_SRQ)
+ mdev->mthca_flags |= MTHCA_FLAG_SRQ;
+
+ return 0;
+}
+
+static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
+{
+ u8 status;
+ int err;
+ struct mthca_dev_lim dev_lim;
+ struct mthca_profile profile;
+ struct mthca_init_hca_param init_hca;
+ struct mthca_adapter adapter;
+
+ err = mthca_SYS_EN(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "SYS_EN command failed, aborting.\n");
+ return err;
+ }
+ if (status) {
+ mthca_err(mdev, "SYS_EN returned status 0x%02x, "
+ "aborting.\n", status);
+ return -EINVAL;
+ }
+
+ err = mthca_QUERY_FW(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_FW command failed, aborting.\n");
+ goto err_disable;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_FW returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_disable;
+ }
+ err = mthca_QUERY_DDR(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_DDR command failed, aborting.\n");
+ goto err_disable;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_DDR returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_disable;
+ }
+
+	err = mthca_dev_lim(mdev, &dev_lim);
+	if (err) {
+		mthca_err(mdev, "QUERY_DEV_LIM command failed, aborting.\n");
+		goto err_disable;
+	}
+
+ profile = default_profile;
+ profile.num_uar = dev_lim.uar_size / PAGE_SIZE;
+ profile.uarc_size = 0;
+
+ err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
+ if (err < 0)
+ goto err_disable;
+
+ err = mthca_INIT_HCA(mdev, &init_hca, &status);
+ if (err) {
+ mthca_err(mdev, "INIT_HCA command failed, aborting.\n");
+ goto err_disable;
+ }
+ if (status) {
+ mthca_err(mdev, "INIT_HCA returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_disable;
+ }
+
+ err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
+ goto err_close;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_close;
+ }
+
+ mdev->eq_table.inta_pin = adapter.inta_pin;
+ mdev->rev_id = adapter.revision_id;
+
+ return 0;
+
+err_close:
+ mthca_CLOSE_HCA(mdev, 0, &status);
+
+err_disable:
+ mthca_SYS_DIS(mdev, &status);
+
+ return err;
+}
+
+static int __devinit mthca_load_fw(struct mthca_dev *mdev)
+{
+ u8 status;
+ int err;
+
+ /* FIXME: use HCA-attached memory for FW if present */
+
+ mdev->fw.arbel.fw_icm =
+ mthca_alloc_icm(mdev, mdev->fw.arbel.fw_pages,
+ GFP_HIGHUSER | __GFP_NOWARN);
+ if (!mdev->fw.arbel.fw_icm) {
+ mthca_err(mdev, "Couldn't allocate FW area, aborting.\n");
+ return -ENOMEM;
+ }
+
+ err = mthca_MAP_FA(mdev, mdev->fw.arbel.fw_icm, &status);
+ if (err) {
+ mthca_err(mdev, "MAP_FA command failed, aborting.\n");
+ goto err_free;
+ }
+ if (status) {
+ mthca_err(mdev, "MAP_FA returned status 0x%02x, aborting.\n", status);
+ err = -EINVAL;
+ goto err_free;
+ }
+ err = mthca_RUN_FW(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "RUN_FW command failed, aborting.\n");
+ goto err_unmap_fa;
+ }
+ if (status) {
+ mthca_err(mdev, "RUN_FW returned status 0x%02x, aborting.\n", status);
+ err = -EINVAL;
+ goto err_unmap_fa;
+ }
+
+ return 0;
+
+err_unmap_fa:
+ mthca_UNMAP_FA(mdev, &status);
+
+err_free:
+ mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
+ return err;
+}
+
+static int __devinit mthca_init_icm(struct mthca_dev *mdev,
+ struct mthca_dev_lim *dev_lim,
+ struct mthca_init_hca_param *init_hca,
+ u64 icm_size)
+{
+ u64 aux_pages;
+ u8 status;
+ int err;
+
+ err = mthca_SET_ICM_SIZE(mdev, icm_size, &aux_pages, &status);
+ if (err) {
+ mthca_err(mdev, "SET_ICM_SIZE command failed, aborting.\n");
+ return err;
+ }
+ if (status) {
+ mthca_err(mdev, "SET_ICM_SIZE returned status 0x%02x, "
+ "aborting.\n", status);
+ return -EINVAL;
+ }
+
+ mthca_dbg(mdev, "%lld KB of HCA context requires %lld KB aux memory.\n",
+ (unsigned long long) icm_size >> 10,
+ (unsigned long long) aux_pages << 2);
+
+ mdev->fw.arbel.aux_icm = mthca_alloc_icm(mdev, aux_pages,
+ GFP_HIGHUSER | __GFP_NOWARN);
+ if (!mdev->fw.arbel.aux_icm) {
+ mthca_err(mdev, "Couldn't allocate aux memory, aborting.\n");
+ return -ENOMEM;
+ }
+
+ err = mthca_MAP_ICM_AUX(mdev, mdev->fw.arbel.aux_icm, &status);
+ if (err) {
+ mthca_err(mdev, "MAP_ICM_AUX command failed, aborting.\n");
+ goto err_free_aux;
+ }
+ if (status) {
+ mthca_err(mdev, "MAP_ICM_AUX returned status 0x%02x, aborting.\n", status);
+ err = -EINVAL;
+ goto err_free_aux;
+ }
+
+ err = mthca_map_eq_icm(mdev, init_hca->eqc_base);
+ if (err) {
+ mthca_err(mdev, "Failed to map EQ context memory, aborting.\n");
+ goto err_unmap_aux;
+ }
+
+ mdev->mr_table.mtt_table = mthca_alloc_icm_table(mdev, init_hca->mtt_base,
+ mdev->limits.num_mtt_segs *
+ init_hca->mtt_seg_sz,
+ mdev->limits.reserved_mtts *
+ init_hca->mtt_seg_sz, 1);
+ if (!mdev->mr_table.mtt_table) {
+ mthca_err(mdev, "Failed to map MTT context memory, aborting.\n");
+ err = -ENOMEM;
+ goto err_unmap_eq;
+ }
+
+ mdev->mr_table.mpt_table = mthca_alloc_icm_table(mdev, init_hca->mpt_base,
+ mdev->limits.num_mpts *
+ dev_lim->mpt_entry_sz,
+ mdev->limits.reserved_mrws *
+ dev_lim->mpt_entry_sz, 1);
+ if (!mdev->mr_table.mpt_table) {
+ mthca_err(mdev, "Failed to map MPT context memory, aborting.\n");
+ err = -ENOMEM;
+ goto err_unmap_mtt;
+ }
+
+ mdev->qp_table.qp_table = mthca_alloc_icm_table(mdev, init_hca->qpc_base,
+ mdev->limits.num_qps *
+ dev_lim->qpc_entry_sz,
+ mdev->limits.reserved_qps *
+ dev_lim->qpc_entry_sz, 1);
+ if (!mdev->qp_table.qp_table) {
+ mthca_err(mdev, "Failed to map QP context memory, aborting.\n");
+ err = -ENOMEM;
+ goto err_unmap_mpt;
+ }
+
+ mdev->qp_table.eqp_table = mthca_alloc_icm_table(mdev, init_hca->eqpc_base,
+ mdev->limits.num_qps *
+ dev_lim->eqpc_entry_sz,
+ mdev->limits.reserved_qps *
+ dev_lim->eqpc_entry_sz, 1);
+ if (!mdev->qp_table.eqp_table) {
+ mthca_err(mdev, "Failed to map EQP context memory, aborting.\n");
+ err = -ENOMEM;
+ goto err_unmap_qp;
+ }
+
+ mdev->cq_table.table = mthca_alloc_icm_table(mdev, init_hca->cqc_base,
+ mdev->limits.num_cqs *
+ dev_lim->cqc_entry_sz,
+ mdev->limits.reserved_cqs *
+ dev_lim->cqc_entry_sz, 1);
+ if (!mdev->cq_table.table) {
+ mthca_err(mdev, "Failed to map CQ context memory, aborting.\n");
+ err = -ENOMEM;
+ goto err_unmap_eqp;
+ }
+
+ return 0;
+
+err_unmap_eqp:
+ mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
+
+err_unmap_qp:
+ mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
+
+err_unmap_mpt:
+ mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
+
+err_unmap_mtt:
+ mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
+
+err_unmap_eq:
+ mthca_unmap_eq_icm(mdev);
+
+err_unmap_aux:
+ mthca_UNMAP_ICM_AUX(mdev, &status);
+
+err_free_aux:
+ mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
+
+ return err;
+}
+
+static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
+{
+ struct mthca_dev_lim dev_lim;
+ struct mthca_profile profile;
+ struct mthca_init_hca_param init_hca;
+ struct mthca_adapter adapter;
+ u64 icm_size;
+ u8 status;
+ int err;
+
+ err = mthca_QUERY_FW(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_FW command failed, aborting.\n");
+ return err;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_FW returned status 0x%02x, "
+ "aborting.\n", status);
+ return -EINVAL;
+ }
+
+ err = mthca_ENABLE_LAM(mdev, &status);
+ if (err) {
+ mthca_err(mdev, "ENABLE_LAM command failed, aborting.\n");
+ return err;
+ }
+ if (status == MTHCA_CMD_STAT_LAM_NOT_PRE) {
+ mthca_dbg(mdev, "No HCA-attached memory (running in MemFree mode)\n");
+ mdev->mthca_flags |= MTHCA_FLAG_NO_LAM;
+ } else if (status) {
+ mthca_err(mdev, "ENABLE_LAM returned status 0x%02x, "
+ "aborting.\n", status);
+ return -EINVAL;
+ }
+
+ err = mthca_load_fw(mdev);
+ if (err) {
+ mthca_err(mdev, "Failed to start FW, aborting.\n");
+ goto err_disable;
+ }
+
+ err = mthca_dev_lim(mdev, &dev_lim);
+ if (err) {
+ mthca_err(mdev, "QUERY_DEV_LIM command failed, aborting.\n");
+ goto err_stop_fw;
+ }
+
+ profile = default_profile;
+ profile.num_uar = dev_lim.uar_size / PAGE_SIZE;
+ profile.num_udav = 0;
+
+ icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
+ if ((int) icm_size < 0) {
+ err = icm_size;
+ goto err_stop_fw;
+ }
+
+ err = mthca_init_icm(mdev, &dev_lim, &init_hca, icm_size);
+ if (err)
+ goto err_stop_fw;
+
+ err = mthca_INIT_HCA(mdev, &init_hca, &status);
+ if (err) {
+ mthca_err(mdev, "INIT_HCA command failed, aborting.\n");
+ goto err_free_icm;
+ }
+ if (status) {
+ mthca_err(mdev, "INIT_HCA returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_free_icm;
+ }
+
+ err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
+ if (err) {
+ mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
+ goto err_free_icm;
+ }
+ if (status) {
+ mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
+ "aborting.\n", status);
+ err = -EINVAL;
+ goto err_free_icm;
+ }
+
+ mdev->eq_table.inta_pin = adapter.inta_pin;
+ mdev->rev_id = adapter.revision_id;
+
+ return 0;
+
+err_free_icm:
+ mthca_free_icm_table(mdev, mdev->cq_table.table);
+ mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
+ mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
+ mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
+ mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
+ mthca_unmap_eq_icm(mdev);
+
+ mthca_UNMAP_ICM_AUX(mdev, &status);
+ mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
+
+err_stop_fw:
+ mthca_UNMAP_FA(mdev, &status);
+ mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
+
+err_disable:
+ if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
+ mthca_DISABLE_LAM(mdev, &status);
+
+ return err;
+}
+
+static int __devinit mthca_init_hca(struct mthca_dev *mdev)
+{
+ if (mdev->hca_type == ARBEL_NATIVE)
+ return mthca_init_arbel(mdev);
+ else
+ return mthca_init_tavor(mdev);
+}
+
+static int __devinit mthca_setup_hca(struct mthca_dev *dev)
+{
+ int err;
+ u8 status;
+
+ MTHCA_INIT_DOORBELL_LOCK(&dev->doorbell_lock);
+
+ err = mthca_init_pd_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "protection domain table, aborting.\n");
+ return err;
+ }
+
+ err = mthca_init_mr_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "memory region table, aborting.\n");
+ goto err_pd_table_free;
+ }
+
+ err = mthca_pd_alloc(dev, &dev->driver_pd);
+ if (err) {
+ mthca_err(dev, "Failed to create driver PD, "
+ "aborting.\n");
+ goto err_mr_table_free;
+ }
+
+ if (dev->hca_type == ARBEL_NATIVE) {
+ mthca_warn(dev, "Sorry, native MT25208 mode support is not done, "
+ "aborting.\n");
+ err = -ENODEV;
+ goto err_pd_free;
+ }
+
+ err = mthca_init_eq_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "event queue table, aborting.\n");
+ goto err_pd_free;
+ }
+
+ err = mthca_cmd_use_events(dev);
+ if (err) {
+ mthca_err(dev, "Failed to switch to event-driven "
+ "firmware commands, aborting.\n");
+ goto err_eq_table_free;
+ }
+
+ err = mthca_NOP(dev, &status);
+ if (err || status) {
+ mthca_err(dev, "NOP command failed to generate interrupt, aborting.\n");
+ if (dev->mthca_flags & (MTHCA_FLAG_MSI | MTHCA_FLAG_MSI_X))
+ mthca_err(dev, "Try again with MSI/MSI-X disabled.\n");
+ else
+ mthca_err(dev, "BIOS or ACPI interrupt routing problem?\n");
+
+ goto err_cmd_poll;
+ } else
+ mthca_dbg(dev, "NOP command IRQ test passed\n");
+
+ err = mthca_init_cq_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "completion queue table, aborting.\n");
+ goto err_cmd_poll;
+ }
+
+ err = mthca_init_qp_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "queue pair table, aborting.\n");
+ goto err_cq_table_free;
+ }
+
+ err = mthca_init_av_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "address vector table, aborting.\n");
+ goto err_qp_table_free;
+ }
+
+ err = mthca_init_mcg_table(dev);
+ if (err) {
+ mthca_err(dev, "Failed to initialize "
+ "multicast group table, aborting.\n");
+ goto err_av_table_free;
+ }
+
+ return 0;
+
+err_av_table_free:
+ mthca_cleanup_av_table(dev);
+
+err_qp_table_free:
+ mthca_cleanup_qp_table(dev);
+
+err_cq_table_free:
+ mthca_cleanup_cq_table(dev);
+
+err_cmd_poll:
+ mthca_cmd_use_polling(dev);
+
+err_eq_table_free:
+ mthca_cleanup_eq_table(dev);
+
+err_pd_free:
+ mthca_pd_free(dev, &dev->driver_pd);
+
+err_mr_table_free:
+ mthca_cleanup_mr_table(dev);
+
+err_pd_table_free:
+ mthca_cleanup_pd_table(dev);
+ return err;
+}
+
+static int __devinit mthca_request_regions(struct pci_dev *pdev,
+ int ddr_hidden)
+{
+ int err;
+
+ /*
+ * We request our first BAR in two chunks, since the MSI-X
+ * vector table is right in the middle.
+ *
+ * This is why we can't just use pci_request_regions() -- if
+ * we did then setting up MSI-X would fail, since the PCI core
+ * wants to do request_mem_region on the MSI-X vector table.
+ */
+ if (!request_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_HCR_BASE,
+ MTHCA_HCR_SIZE,
+ DRV_NAME)) {
+ err = -EBUSY;
+ goto err_hcr_failed;
+ }
+
+ if (!request_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_ECR_BASE,
+ MTHCA_MAP_ECR_SIZE,
+ DRV_NAME)) {
+ err = -EBUSY;
+ goto err_ecr_failed;
+ }
+
+ if (!request_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_CLR_INT_BASE,
+ MTHCA_CLR_INT_SIZE,
+ DRV_NAME)) {
+ err = -EBUSY;
+ goto err_int_failed;
+ }
+
+ err = pci_request_region(pdev, 2, DRV_NAME);
+ if (err)
+ goto err_bar2_failed;
+
+ if (!ddr_hidden) {
+ err = pci_request_region(pdev, 4, DRV_NAME);
+ if (err)
+ goto err_bar4_failed;
+ }
+
+ return 0;
+
+err_bar4_failed:
+	pci_release_region(pdev, 2);
+
+err_bar2_failed:
+	release_mem_region(pci_resource_start(pdev, 0) +
+			   MTHCA_CLR_INT_BASE,
+			   MTHCA_CLR_INT_SIZE);
+
+err_int_failed:
+	release_mem_region(pci_resource_start(pdev, 0) +
+			   MTHCA_ECR_BASE,
+			   MTHCA_MAP_ECR_SIZE);
+
+err_ecr_failed:
+	release_mem_region(pci_resource_start(pdev, 0) +
+			   MTHCA_HCR_BASE,
+			   MTHCA_HCR_SIZE);
+
+err_hcr_failed:
+	return err;
+}
+
+static void mthca_release_regions(struct pci_dev *pdev,
+ int ddr_hidden)
+{
+ if (!ddr_hidden)
+ pci_release_region(pdev, 4);
+
+ pci_release_region(pdev, 2);
+
+ release_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_CLR_INT_BASE,
+ MTHCA_CLR_INT_SIZE);
+
+ release_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_ECR_BASE,
+ MTHCA_MAP_ECR_SIZE);
+
+ release_mem_region(pci_resource_start(pdev, 0) +
+ MTHCA_HCR_BASE,
+ MTHCA_HCR_SIZE);
+}
+
+static int __devinit mthca_enable_msi_x(struct mthca_dev *mdev)
+{
+ struct msix_entry entries[3];
+ int err;
+
+ entries[0].entry = 0;
+ entries[1].entry = 1;
+ entries[2].entry = 2;
+
+ err = pci_enable_msix(mdev->pdev, entries, ARRAY_SIZE(entries));
+ if (err) {
+ if (err > 0)
+ mthca_info(mdev, "Only %d MSI-X vectors available, "
+ "not using MSI-X\n", err);
+ return err;
+ }
+
+ mdev->eq_table.eq[MTHCA_EQ_COMP ].msi_x_vector = entries[0].vector;
+ mdev->eq_table.eq[MTHCA_EQ_ASYNC].msi_x_vector = entries[1].vector;
+ mdev->eq_table.eq[MTHCA_EQ_CMD ].msi_x_vector = entries[2].vector;
+
+ return 0;
+}
+
+static void mthca_close_hca(struct mthca_dev *mdev)
+{
+ u8 status;
+
+ mthca_CLOSE_HCA(mdev, 0, &status);
+
+ if (mdev->hca_type == ARBEL_NATIVE) {
+ mthca_free_icm_table(mdev, mdev->cq_table.table);
+ mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
+ mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
+ mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
+ mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
+ mthca_unmap_eq_icm(mdev);
+
+ mthca_UNMAP_ICM_AUX(mdev, &status);
+ mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
+
+ mthca_UNMAP_FA(mdev, &status);
+ mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
+
+ if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
+ mthca_DISABLE_LAM(mdev, &status);
+ } else
+ mthca_SYS_DIS(mdev, &status);
+}
+
+static int __devinit mthca_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ static int mthca_version_printed = 0;
+ int ddr_hidden = 0;
+ int err;
+ unsigned long mthca_base;
+ struct mthca_dev *mdev;
+
+ if (!mthca_version_printed) {
+ printk(KERN_INFO "%s", mthca_version);
+ ++mthca_version_printed;
+ }
+
+ printk(KERN_INFO PFX "Initializing %s (%s)\n",
+ pci_pretty_name(pdev), pci_name(pdev));
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "Cannot enable PCI device, "
+ "aborting.\n");
+ return err;
+ }
+
+ /*
+ * Check for BARs. We expect 0: 1MB, 2: 8MB, 4: DDR (may not
+ * be present)
+ */
+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) ||
+ pci_resource_len(pdev, 0) != 1 << 20) {
+		dev_err(&pdev->dev, "Missing DCS, aborting.\n");
+ err = -ENODEV;
+ goto err_disable_pdev;
+ }
+ if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM) ||
+ pci_resource_len(pdev, 2) != 1 << 23) {
+		dev_err(&pdev->dev, "Missing UAR, aborting.\n");
+ err = -ENODEV;
+ goto err_disable_pdev;
+ }
+ if (!(pci_resource_flags(pdev, 4) & IORESOURCE_MEM))
+ ddr_hidden = 1;
+
+ err = mthca_request_regions(pdev, ddr_hidden);
+ if (err) {
+ dev_err(&pdev->dev, "Cannot obtain PCI resources, "
+ "aborting.\n");
+ goto err_disable_pdev;
+ }
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_64BIT_MASK);
+ if (err) {
+ dev_warn(&pdev->dev, "Warning: couldn't set 64-bit PCI DMA mask.\n");
+ err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (err) {
+ dev_err(&pdev->dev, "Can't set PCI DMA mask, aborting.\n");
+ goto err_free_res;
+ }
+ }
+ err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
+ if (err) {
+ dev_warn(&pdev->dev, "Warning: couldn't set 64-bit "
+ "consistent PCI DMA mask.\n");
+ err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
+ if (err) {
+ dev_err(&pdev->dev, "Can't set consistent PCI DMA mask, "
+ "aborting.\n");
+ goto err_free_res;
+ }
+ }
+
+ mdev = (struct mthca_dev *) ib_alloc_device(sizeof *mdev);
+ if (!mdev) {
+ dev_err(&pdev->dev, "Device struct alloc failed, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_free_res;
+ }
+
+ mdev->pdev = pdev;
+ mdev->hca_type = id->driver_data;
+
+ if (ddr_hidden)
+ mdev->mthca_flags |= MTHCA_FLAG_DDR_HIDDEN;
+
+ /*
+ * Now reset the HCA before we touch the PCI capabilities or
+ * attempt a firmware command, since a boot ROM may have left
+ * the HCA in an undefined state.
+ */
+ err = mthca_reset(mdev);
+ if (err) {
+ mthca_err(mdev, "Failed to reset HCA, aborting.\n");
+ goto err_free_dev;
+ }
+
+ if (msi_x && !mthca_enable_msi_x(mdev))
+ mdev->mthca_flags |= MTHCA_FLAG_MSI_X;
+ if (msi && !(mdev->mthca_flags & MTHCA_FLAG_MSI_X) &&
+ !pci_enable_msi(pdev))
+ mdev->mthca_flags |= MTHCA_FLAG_MSI;
+
+ sema_init(&mdev->cmd.hcr_sem, 1);
+ sema_init(&mdev->cmd.poll_sem, 1);
+ mdev->cmd.use_events = 0;
+
+ mthca_base = pci_resource_start(pdev, 0);
+ mdev->hcr = ioremap(mthca_base + MTHCA_HCR_BASE, MTHCA_HCR_SIZE);
+ if (!mdev->hcr) {
+ mthca_err(mdev, "Couldn't map command register, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_free_dev;
+ }
+
+ mdev->clr_base = ioremap(mthca_base + MTHCA_CLR_INT_BASE,
+ MTHCA_CLR_INT_SIZE);
+ if (!mdev->clr_base) {
+ mthca_err(mdev, "Couldn't map interrupt clear register, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_iounmap;
+ }
+
+ mdev->ecr_base = ioremap(mthca_base + MTHCA_ECR_BASE,
+ MTHCA_ECR_SIZE + MTHCA_ECR_CLR_SIZE);
+ if (!mdev->ecr_base) {
+ mthca_err(mdev, "Couldn't map ecr register, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_iounmap_clr;
+ }
+
+ mthca_base = pci_resource_start(pdev, 2);
+ mdev->kar = ioremap(mthca_base + PAGE_SIZE * MTHCA_KAR_PAGE, PAGE_SIZE);
+ if (!mdev->kar) {
+ mthca_err(mdev, "Couldn't map kernel access region, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_iounmap_ecr;
+ }
+
+ err = mthca_tune_pci(mdev);
+ if (err)
+ goto err_iounmap_kar;
+
+ err = mthca_init_hca(mdev);
+ if (err)
+ goto err_iounmap_kar;
+
+ err = mthca_setup_hca(mdev);
+ if (err)
+ goto err_close;
+
+ err = mthca_register_device(mdev);
+ if (err)
+ goto err_cleanup;
+
+ err = mthca_create_agents(mdev);
+ if (err)
+ goto err_unregister;
+
+ pci_set_drvdata(pdev, mdev);
+
+ return 0;
+
+err_unregister:
+ mthca_unregister_device(mdev);
+
+err_cleanup:
+ mthca_cleanup_mcg_table(mdev);
+ mthca_cleanup_av_table(mdev);
+ mthca_cleanup_qp_table(mdev);
+ mthca_cleanup_cq_table(mdev);
+ mthca_cmd_use_polling(mdev);
+ mthca_cleanup_eq_table(mdev);
+
+ mthca_pd_free(mdev, &mdev->driver_pd);
+
+ mthca_cleanup_mr_table(mdev);
+ mthca_cleanup_pd_table(mdev);
+
+err_close:
+ mthca_close_hca(mdev);
+
+err_iounmap_kar:
+ iounmap(mdev->kar);
+
+err_iounmap_ecr:
+ iounmap(mdev->ecr_base);
+
+err_iounmap_clr:
+ iounmap(mdev->clr_base);
+
+err_iounmap:
+ iounmap(mdev->hcr);
+
+err_free_dev:
+ if (mdev->mthca_flags & MTHCA_FLAG_MSI_X)
+ pci_disable_msix(pdev);
+ if (mdev->mthca_flags & MTHCA_FLAG_MSI)
+ pci_disable_msi(pdev);
+
+ ib_dealloc_device(&mdev->ib_dev);
+
+err_free_res:
+ mthca_release_regions(pdev, ddr_hidden);
+
+err_disable_pdev:
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ return err;
+}
+
+static void __devexit mthca_remove_one(struct pci_dev *pdev)
+{
+ struct mthca_dev *mdev = pci_get_drvdata(pdev);
+ u8 status;
+ int p;
+
+ if (mdev) {
+ mthca_free_agents(mdev);
+ mthca_unregister_device(mdev);
+
+ for (p = 1; p <= mdev->limits.num_ports; ++p)
+ mthca_CLOSE_IB(mdev, p, &status);
+
+ mthca_cleanup_mcg_table(mdev);
+ mthca_cleanup_av_table(mdev);
+ mthca_cleanup_qp_table(mdev);
+ mthca_cleanup_cq_table(mdev);
+ mthca_cmd_use_polling(mdev);
+ mthca_cleanup_eq_table(mdev);
+
+ mthca_pd_free(mdev, &mdev->driver_pd);
+
+ mthca_cleanup_mr_table(mdev);
+ mthca_cleanup_pd_table(mdev);
+
+ mthca_close_hca(mdev);
+
+ iounmap(mdev->hcr);
+ iounmap(mdev->ecr_base);
+ iounmap(mdev->clr_base);
+
+ if (mdev->mthca_flags & MTHCA_FLAG_MSI_X)
+ pci_disable_msix(pdev);
+ if (mdev->mthca_flags & MTHCA_FLAG_MSI)
+ pci_disable_msi(pdev);
+
+ ib_dealloc_device(&mdev->ib_dev);
+ mthca_release_regions(pdev, mdev->mthca_flags &
+ MTHCA_FLAG_DDR_HIDDEN);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ }
+}
+
+static struct pci_device_id mthca_pci_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_TAVOR),
+ .driver_data = TAVOR },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOPSPIN, PCI_DEVICE_ID_MELLANOX_TAVOR),
+ .driver_data = TAVOR },
+ { PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT),
+ .driver_data = ARBEL_COMPAT },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOPSPIN, PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT),
+ .driver_data = ARBEL_COMPAT },
+ { PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, PCI_DEVICE_ID_MELLANOX_ARBEL),
+ .driver_data = ARBEL_NATIVE },
+ { PCI_DEVICE(PCI_VENDOR_ID_TOPSPIN, PCI_DEVICE_ID_MELLANOX_ARBEL),
+ .driver_data = ARBEL_NATIVE },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, mthca_pci_table);
+
+static struct pci_driver mthca_driver = {
+ .name = "ib_mthca",
+ .id_table = mthca_pci_table,
+ .probe = mthca_init_one,
+ .remove = __devexit_p(mthca_remove_one)
+};
+
+static int __init mthca_init(void)
+{
+ int ret;
+
+ ret = pci_register_driver(&mthca_driver);
+ return ret < 0 ? ret : 0;
+}
+
+static void __exit mthca_cleanup(void)
+{
+ pci_unregister_driver(&mthca_driver);
+}
+
+module_init(mthca_init);
+module_exit(mthca_cleanup);
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_mcg.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/init.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
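+/*
+ * Each MGM entry is MTHCA_MGM_ENTRY_SIZE bytes: a 16-byte header
+ * line, a 16-byte GID, and then four 4-byte QP slots per remaining
+ * 16-byte line.
+ */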
+enum {
+ MTHCA_QP_PER_MGM = 4 * (MTHCA_MGM_ENTRY_SIZE / 16 - 2)
+};
+
+struct mthca_mgm {
+ u32 next_gid_index;
+ u32 reserved[3];
+ u8 gid[16];
+ u32 qp[MTHCA_QP_PER_MGM];
+};
+
+static const u8 zero_gid[16]; /* automatically initialized to 0 */
+
+/*
+ * Caller must hold MCG table semaphore. gid and mgm parameters must
+ * be properly aligned for command interface.
+ *
+ * Returns 0 unless a firmware command error occurs.
+ *
+ * If GID is found in MGM or MGM is empty, *index = *hash, *prev = -1
+ * and *mgm holds MGM entry.
+ *
+ * If GID is found in AMGM, *index = index in AMGM, *prev = index of
+ * previous entry in hash chain and *mgm holds AMGM entry.
+ *
+ * If no AMGM exists for the given GID, *index = -1, *prev = index of last
+ * entry in hash chain and *mgm holds end of hash chain.
+ */
+static int find_mgm(struct mthca_dev *dev,
+ u8 *gid, struct mthca_mgm *mgm,
+ u16 *hash, int *prev, int *index)
+{
+ void *mailbox;
+ u8 *mgid;
+ int err;
+ u8 status;
+
+ mailbox = kmalloc(16 + MTHCA_CMD_MAILBOX_EXTRA, GFP_KERNEL);
+ if (!mailbox)
+ return -ENOMEM;
+ mgid = MAILBOX_ALIGN(mailbox);
+
+ memcpy(mgid, gid, 16);
+
+ err = mthca_MGID_HASH(dev, mgid, hash, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "MGID_HASH returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+
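+	/* Debugging aid: enable to print the MGID hash computed by firmware. */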
+ if (0)
+ mthca_dbg(dev, "Hash for %04x:%04x:%04x:%04x:"
+ "%04x:%04x:%04x:%04x is %04x\n",
+ be16_to_cpu(((u16 *) gid)[0]), be16_to_cpu(((u16 *) gid)[1]),
+ be16_to_cpu(((u16 *) gid)[2]), be16_to_cpu(((u16 *) gid)[3]),
+ be16_to_cpu(((u16 *) gid)[4]), be16_to_cpu(((u16 *) gid)[5]),
+ be16_to_cpu(((u16 *) gid)[6]), be16_to_cpu(((u16 *) gid)[7]),
+ *hash);
+
+ *index = *hash;
+ *prev = -1;
+
+ do {
+ err = mthca_READ_MGM(dev, *index, mgm, &status);
+ if (err)
+ goto out;
+		if (status) {
+			mthca_err(dev, "READ_MGM returned status %02x\n", status);
+			err = -EINVAL;
+			goto out;
+		}
+
+ if (!memcmp(mgm->gid, zero_gid, 16)) {
+ if (*index != *hash) {
+ mthca_err(dev, "Found zero MGID in AMGM.\n");
+ err = -EINVAL;
+ }
+ goto out;
+ }
+
+ if (!memcmp(mgm->gid, gid, 16))
+ goto out;
+
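+		/*
+		 * Step to the next entry in the hash chain:
+		 * next_gid_index holds the next AMGM index shifted
+		 * left by 5 bits.
+		 */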
+ *prev = *index;
+ *index = be32_to_cpu(mgm->next_gid_index) >> 5;
+ } while (*index);
+
+ *index = -1;
+
+ out:
+ kfree(mailbox);
+ return err;
+}
+
+int mthca_multicast_attach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
+{
+ struct mthca_dev *dev = to_mdev(ibqp->device);
+ void *mailbox;
+ struct mthca_mgm *mgm;
+ u16 hash;
+ int index, prev;
+ int link = 0;
+ int i;
+ int err;
+ u8 status;
+
+ mailbox = kmalloc(sizeof *mgm + MTHCA_CMD_MAILBOX_EXTRA, GFP_KERNEL);
+ if (!mailbox)
+ return -ENOMEM;
+ mgm = MAILBOX_ALIGN(mailbox);
+
+	if (down_interruptible(&dev->mcg_table.sem)) {
+		kfree(mailbox);
+		return -EINTR;
+	}
+
+ err = find_mgm(dev, gid->raw, mgm, &hash, &prev, &index);
+ if (err)
+ goto out;
+
+ if (index != -1) {
+ if (!memcmp(mgm->gid, zero_gid, 16))
+ memcpy(mgm->gid, gid->raw, 16);
+ } else {
+ link = 1;
+
+ index = mthca_alloc(&dev->mcg_table.alloc);
+ if (index == -1) {
+ mthca_err(dev, "No AMGM entries left\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = mthca_READ_MGM(dev, index, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "READ_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+
+ memcpy(mgm->gid, gid->raw, 16);
+ mgm->next_gid_index = 0;
+ }
+
+ for (i = 0; i < MTHCA_QP_PER_MGM; ++i)
+ if (!(mgm->qp[i] & cpu_to_be32(1 << 31))) {
+ mgm->qp[i] = cpu_to_be32(ibqp->qp_num | (1 << 31));
+ break;
+ }
+
+ if (i == MTHCA_QP_PER_MGM) {
+ mthca_err(dev, "MGM at index %x is full.\n", index);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = mthca_WRITE_MGM(dev, index, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "WRITE_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ }
+
+ if (!link)
+ goto out;
+
+ err = mthca_READ_MGM(dev, prev, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "READ_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+
+ mgm->next_gid_index = cpu_to_be32(index << 5);
+
+ err = mthca_WRITE_MGM(dev, prev, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "WRITE_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ }
+
+ out:
+ up(&dev->mcg_table.sem);
+ kfree(mailbox);
+ return err;
+}
+
+int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
+{
+ struct mthca_dev *dev = to_mdev(ibqp->device);
+ void *mailbox;
+ struct mthca_mgm *mgm;
+ u16 hash;
+ int prev, index;
+ int i, loc;
+ int err;
+ u8 status;
+
+ mailbox = kmalloc(sizeof *mgm + MTHCA_CMD_MAILBOX_EXTRA, GFP_KERNEL);
+ if (!mailbox)
+ return -ENOMEM;
+ mgm = MAILBOX_ALIGN(mailbox);
+
+	if (down_interruptible(&dev->mcg_table.sem)) {
+		kfree(mailbox);
+		return -EINTR;
+	}
+
+ err = find_mgm(dev, gid->raw, mgm, &hash, &prev, &index);
+ if (err)
+ goto out;
+
+ if (index == -1) {
+ mthca_err(dev, "MGID %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x "
+ "not found\n",
+ be16_to_cpu(((u16 *) gid->raw)[0]),
+ be16_to_cpu(((u16 *) gid->raw)[1]),
+ be16_to_cpu(((u16 *) gid->raw)[2]),
+ be16_to_cpu(((u16 *) gid->raw)[3]),
+ be16_to_cpu(((u16 *) gid->raw)[4]),
+ be16_to_cpu(((u16 *) gid->raw)[5]),
+ be16_to_cpu(((u16 *) gid->raw)[6]),
+ be16_to_cpu(((u16 *) gid->raw)[7]));
+ err = -EINVAL;
+ goto out;
+ }
+
+ for (loc = -1, i = 0; i < MTHCA_QP_PER_MGM; ++i) {
+ if (mgm->qp[i] == cpu_to_be32(ibqp->qp_num | (1 << 31)))
+ loc = i;
+ if (!(mgm->qp[i] & cpu_to_be32(1 << 31)))
+ break;
+ }
+
+ if (loc == -1) {
+ mthca_err(dev, "QP %06x not found in MGM\n", ibqp->qp_num);
+ err = -EINVAL;
+ goto out;
+ }
+
+ mgm->qp[loc] = mgm->qp[i - 1];
+ mgm->qp[i - 1] = 0;
+
+ err = mthca_WRITE_MGM(dev, index, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "WRITE_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+
+	if (i != 1)
+		goto out;
+
+ if (prev == -1) {
+ /* Remove entry from MGM */
+ if (be32_to_cpu(mgm->next_gid_index) >> 5) {
+ err = mthca_READ_MGM(dev,
+ be32_to_cpu(mgm->next_gid_index) >> 5,
+ mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "READ_MGM returned status %02x\n",
+ status);
+ err = -EINVAL;
+ goto out;
+ }
+ } else
+ memset(mgm->gid, 0, 16);
+
+ err = mthca_WRITE_MGM(dev, index, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "WRITE_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+ } else {
+ /* Remove entry from AMGM */
+ index = be32_to_cpu(mgm->next_gid_index) >> 5;
+ err = mthca_READ_MGM(dev, prev, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "READ_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+
+ mgm->next_gid_index = cpu_to_be32(index << 5);
+
+ err = mthca_WRITE_MGM(dev, prev, mgm, &status);
+ if (err)
+ goto out;
+ if (status) {
+ mthca_err(dev, "WRITE_MGM returned status %02x\n", status);
+ err = -EINVAL;
+ goto out;
+ }
+ }
+
+ out:
+ up(&dev->mcg_table.sem);
+ kfree(mailbox);
+ return err;
+}
+
+int __devinit mthca_init_mcg_table(struct mthca_dev *dev)
+{
+ int err;
+
+ err = mthca_alloc_init(&dev->mcg_table.alloc,
+ dev->limits.num_amgms,
+ dev->limits.num_amgms - 1,
+ 0);
+ if (err)
+ return err;
+
+ init_MUTEX(&dev->mcg_table.sem);
+
+ return 0;
+}
+
+void __devexit mthca_cleanup_mcg_table(struct mthca_dev *dev)
+{
+ mthca_alloc_cleanup(&dev->mcg_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id$
+ */
+
+#include "mthca_memfree.h"
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+/*
+ * We allocate in as big chunks as we can, up to a maximum of 256 KB
+ * per chunk.
+ */
+enum {
+ MTHCA_ICM_ALLOC_SIZE = 1 << 18,
+ MTHCA_TABLE_CHUNK_SIZE = 1 << 18
+};
+
+void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm)
+{
+ struct mthca_icm_chunk *chunk, *tmp;
+ int i;
+
+ if (!icm)
+ return;
+
+ list_for_each_entry_safe(chunk, tmp, &icm->chunk_list, list) {
+ if (chunk->nsg > 0)
+ pci_unmap_sg(dev->pdev, chunk->mem, chunk->npages,
+ PCI_DMA_BIDIRECTIONAL);
+
+ for (i = 0; i < chunk->npages; ++i)
+ __free_pages(chunk->mem[i].page,
+ get_order(chunk->mem[i].length));
+
+ kfree(chunk);
+ }
+
+ kfree(icm);
+}
+
+struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages,
+ unsigned int gfp_mask)
+{
+ struct mthca_icm *icm;
+ struct mthca_icm_chunk *chunk = NULL;
+ int cur_order;
+
+ icm = kmalloc(sizeof *icm, gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
+ if (!icm)
+ return icm;
+
+ INIT_LIST_HEAD(&icm->chunk_list);
+
+ cur_order = get_order(MTHCA_ICM_ALLOC_SIZE);
+
+ while (npages > 0) {
+ if (!chunk) {
+ chunk = kmalloc(sizeof *chunk,
+ gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
+ if (!chunk)
+ goto fail;
+
+ chunk->npages = 0;
+ chunk->nsg = 0;
+ list_add_tail(&chunk->list, &icm->chunk_list);
+ }
+
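+		/* Don't allocate a bigger chunk than we still need. */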
+ while (1 << cur_order > npages)
+ --cur_order;
+
+ chunk->mem[chunk->npages].page = alloc_pages(gfp_mask, cur_order);
+ if (chunk->mem[chunk->npages].page) {
+ chunk->mem[chunk->npages].length = PAGE_SIZE << cur_order;
+ chunk->mem[chunk->npages].offset = 0;
+
+ if (++chunk->npages == MTHCA_ICM_CHUNK_LEN) {
+ chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
+ chunk->npages,
+ PCI_DMA_BIDIRECTIONAL);
+
+ if (chunk->nsg <= 0)
+ goto fail;
+
+ chunk = NULL;
+ }
+
+ npages -= 1 << cur_order;
+ } else {
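+			/* Allocation failed; retry with a smaller order. */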
+ --cur_order;
+ if (cur_order < 0)
+ goto fail;
+ }
+ }
+
+ if (chunk) {
+ chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
+ chunk->npages,
+ PCI_DMA_BIDIRECTIONAL);
+
+ if (chunk->nsg <= 0)
+ goto fail;
+ }
+
+ return icm;
+
+fail:
+ mthca_free_icm(dev, icm);
+ return NULL;
+}
+
+struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
+ u64 virt, unsigned size,
+ unsigned reserved,
+ int use_lowmem)
+{
+ struct mthca_icm_table *table;
+ int num_icm;
+ int i;
+ u8 status;
+
+ num_icm = size / MTHCA_TABLE_CHUNK_SIZE;
+
+ table = kmalloc(sizeof *table + num_icm * sizeof *table->icm, GFP_KERNEL);
+ if (!table)
+ return NULL;
+
+ table->virt = virt;
+ table->num_icm = num_icm;
+ init_MUTEX(&table->sem);
+
+ for (i = 0; i < num_icm; ++i)
+ table->icm[i] = NULL;
+
+ for (i = 0; i < (reserved + MTHCA_TABLE_CHUNK_SIZE - 1) / MTHCA_TABLE_CHUNK_SIZE; ++i) {
+ table->icm[i] = mthca_alloc_icm(dev, MTHCA_TABLE_CHUNK_SIZE >> PAGE_SHIFT,
+ (use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) |
+ __GFP_NOWARN);
+ if (!table->icm[i])
+ goto err;
+ if (mthca_MAP_ICM(dev, table->icm[i], virt + i * MTHCA_TABLE_CHUNK_SIZE,
+ &status) || status) {
+ mthca_free_icm(dev, table->icm[i]);
+ table->icm[i] = NULL;
+ goto err;
+ }
+ }
+
+ return table;
+
+err:
+ for (i = 0; i < num_icm; ++i)
+ if (table->icm[i]) {
+ mthca_UNMAP_ICM(dev, virt + i * MTHCA_TABLE_CHUNK_SIZE,
+ MTHCA_TABLE_CHUNK_SIZE >> 12, &status);
+ mthca_free_icm(dev, table->icm[i]);
+ }
+
+ kfree(table);
+
+ return NULL;
+}
+
+void mthca_free_icm_table(struct mthca_dev *dev, struct mthca_icm_table *table)
+{
+ int i;
+ u8 status;
+
+ for (i = 0; i < table->num_icm; ++i)
+ if (table->icm[i]) {
+ mthca_UNMAP_ICM(dev, table->virt + i * MTHCA_TABLE_CHUNK_SIZE,
+ MTHCA_TABLE_CHUNK_SIZE >> 12, &status);
+ mthca_free_icm(dev, table->icm[i]);
+ }
+
+ kfree(table);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id$
+ */
+
+#ifndef MTHCA_MEMFREE_H
+#define MTHCA_MEMFREE_H
+
+#include <linux/list.h>
+#include <linux/pci.h>
+
+#include <asm/semaphore.h>
+
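+/*
+ * Chosen so that a struct mthca_icm_chunk (list head, two ints and the
+ * scatterlist array below) fits in roughly 256 bytes.
+ */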
+#define MTHCA_ICM_CHUNK_LEN \
+ ((256 - sizeof (struct list_head) - 2 * sizeof (int)) / \
+ (sizeof (struct scatterlist)))
+
+struct mthca_icm_chunk {
+ struct list_head list;
+ int npages;
+ int nsg;
+ struct scatterlist mem[MTHCA_ICM_CHUNK_LEN];
+};
+
+struct mthca_icm {
+ struct list_head chunk_list;
+};
+
+struct mthca_icm_table {
+ u64 virt;
+ int num_icm;
+ struct semaphore sem;
+ struct mthca_icm *icm[0];
+};
+
+struct mthca_icm_iter {
+ struct mthca_icm *icm;
+ struct mthca_icm_chunk *chunk;
+ int page_idx;
+};
+
+struct mthca_dev;
+
+struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages,
+ unsigned int gfp_mask);
+void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm);
+
+struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
+ u64 virt, unsigned size,
+ unsigned reserved,
+ int use_lowmem);
+void mthca_free_icm_table(struct mthca_dev *dev, struct mthca_icm_table *table);
+
+static inline void mthca_icm_first(struct mthca_icm *icm,
+ struct mthca_icm_iter *iter)
+{
+ iter->icm = icm;
+ iter->chunk = list_empty(&icm->chunk_list) ?
+ NULL : list_entry(icm->chunk_list.next,
+ struct mthca_icm_chunk, list);
+ iter->page_idx = 0;
+}
+
+static inline int mthca_icm_last(struct mthca_icm_iter *iter)
+{
+ return !iter->chunk;
+}
+
+static inline void mthca_icm_next(struct mthca_icm_iter *iter)
+{
+ if (++iter->page_idx >= iter->chunk->nsg) {
+ if (iter->chunk->list.next == &iter->icm->chunk_list) {
+ iter->chunk = NULL;
+ return;
+ }
+
+ iter->chunk = list_entry(iter->chunk->list.next,
+ struct mthca_icm_chunk, list);
+ iter->page_idx = 0;
+ }
+}
+
+static inline dma_addr_t mthca_icm_addr(struct mthca_icm_iter *iter)
+{
+ return sg_dma_address(&iter->chunk->mem[iter->page_idx]);
+}
+
+static inline unsigned long mthca_icm_size(struct mthca_icm_iter *iter)
+{
+ return sg_dma_len(&iter->chunk->mem[iter->page_idx]);
+}
+
+#endif /* MTHCA_MEMFREE_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_mr.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+/*
+ * Must be packed because mtt_seg is 64 bits but only aligned to 32 bits.
+ */
+struct mthca_mpt_entry {
+ u32 flags;
+ u32 page_size;
+ u32 key;
+ u32 pd;
+ u64 start;
+ u64 length;
+ u32 lkey;
+ u32 window_count;
+ u32 window_count_limit;
+ u64 mtt_seg;
+ u32 reserved[3];
+} __attribute__((packed));
+
+#define MTHCA_MPT_FLAG_SW_OWNS (0xfUL << 28)
+#define MTHCA_MPT_FLAG_MIO (1 << 17)
+#define MTHCA_MPT_FLAG_BIND_ENABLE (1 << 15)
+#define MTHCA_MPT_FLAG_PHYSICAL (1 << 9)
+#define MTHCA_MPT_FLAG_REGION (1 << 8)
+
+#define MTHCA_MTT_FLAG_PRESENT 1
+
+/*
+ * Buddy allocator for MTT segments (currently not very efficient
+ * since it doesn't keep a free list and just searches linearly
+ * through the bitmaps)
+ */
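+/*
+ * Illustrative example (not from the original source): with
+ * max_mtt_order = 2 and only the single order-2 block free, an order-0
+ * allocation clears bit 0 of mtt_buddy[2], then splits twice on the way
+ * down, marking the order-1 buddy (bit 1 of mtt_buddy[1]) and the
+ * order-0 buddy (bit 1 of mtt_buddy[0]) free; mthca_free_mtt() later
+ * merges buddies back up by testing the seg ^ 1 bit at each order.
+ */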
+
+static u32 mthca_alloc_mtt(struct mthca_dev *dev, int order)
+{
+ int o;
+ int m;
+ u32 seg;
+
+ spin_lock(&dev->mr_table.mpt_alloc.lock);
+
+ for (o = order; o <= dev->mr_table.max_mtt_order; ++o) {
+ m = 1 << (dev->mr_table.max_mtt_order - o);
+ seg = find_first_bit(dev->mr_table.mtt_buddy[o], m);
+ if (seg < m)
+ goto found;
+ }
+
+ spin_unlock(&dev->mr_table.mpt_alloc.lock);
+ return -1;
+
+ found:
+ clear_bit(seg, dev->mr_table.mtt_buddy[o]);
+
+ while (o > order) {
+ --o;
+ seg <<= 1;
+ set_bit(seg ^ 1, dev->mr_table.mtt_buddy[o]);
+ }
+
+ spin_unlock(&dev->mr_table.mpt_alloc.lock);
+
+ seg <<= order;
+
+ return seg;
+}
+
+static void mthca_free_mtt(struct mthca_dev *dev, u32 seg, int order)
+{
+ seg >>= order;
+
+ spin_lock(&dev->mr_table.mpt_alloc.lock);
+
+ while (test_bit(seg ^ 1, dev->mr_table.mtt_buddy[order])) {
+ clear_bit(seg ^ 1, dev->mr_table.mtt_buddy[order]);
+ seg >>= 1;
+ ++order;
+ }
+
+ set_bit(seg, dev->mr_table.mtt_buddy[order]);
+
+ spin_unlock(&dev->mr_table.mpt_alloc.lock);
+}
+
+int mthca_mr_alloc_notrans(struct mthca_dev *dev, u32 pd,
+ u32 access, struct mthca_mr *mr)
+{
+ void *mailbox;
+ struct mthca_mpt_entry *mpt_entry;
+ int err;
+ u8 status;
+
+ might_sleep();
+
+ mr->order = -1;
+ mr->ibmr.lkey = mthca_alloc(&dev->mr_table.mpt_alloc);
+ if (mr->ibmr.lkey == -1)
+ return -ENOMEM;
+ mr->ibmr.rkey = mr->ibmr.lkey;
+
+ mailbox = kmalloc(sizeof *mpt_entry + MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox) {
+ mthca_free(&dev->mr_table.mpt_alloc, mr->ibmr.lkey);
+ return -ENOMEM;
+ }
+ mpt_entry = MAILBOX_ALIGN(mailbox);
+
+ mpt_entry->flags = cpu_to_be32(MTHCA_MPT_FLAG_SW_OWNS |
+ MTHCA_MPT_FLAG_MIO |
+ MTHCA_MPT_FLAG_PHYSICAL |
+ MTHCA_MPT_FLAG_REGION |
+ access);
+ mpt_entry->page_size = 0;
+ mpt_entry->key = cpu_to_be32(mr->ibmr.lkey);
+ mpt_entry->pd = cpu_to_be32(pd);
+ mpt_entry->start = 0;
+ mpt_entry->length = ~0ULL;
+
+ memset(&mpt_entry->lkey, 0,
+ sizeof *mpt_entry - offsetof(struct mthca_mpt_entry, lkey));
+
+ err = mthca_SW2HW_MPT(dev, mpt_entry,
+ mr->ibmr.lkey & (dev->limits.num_mpts - 1),
+ &status);
+ if (err)
+ mthca_warn(dev, "SW2HW_MPT failed (%d)\n", err);
+ else if (status) {
+ mthca_warn(dev, "SW2HW_MPT returned status 0x%02x\n",
+ status);
+ err = -EINVAL;
+ }
+
+ kfree(mailbox);
+ return err;
+}
+
+int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd,
+ u64 *buffer_list, int buffer_size_shift,
+ int list_len, u64 iova, u64 total_size,
+ u32 access, struct mthca_mr *mr)
+{
+ void *mailbox;
+ u64 *mtt_entry;
+ struct mthca_mpt_entry *mpt_entry;
+ int err = -ENOMEM;
+ u8 status;
+ int i;
+
+ might_sleep();
+ WARN_ON(buffer_size_shift >= 32);
+
+ mr->ibmr.lkey = mthca_alloc(&dev->mr_table.mpt_alloc);
+ if (mr->ibmr.lkey == -1)
+ return -ENOMEM;
+ mr->ibmr.rkey = mr->ibmr.lkey;
+
+ for (i = dev->limits.mtt_seg_size / 8, mr->order = 0;
+ i < list_len;
+ i <<= 1, ++mr->order)
+ ; /* nothing */
+
+ mr->first_seg = mthca_alloc_mtt(dev, mr->order);
+ if (mr->first_seg == -1)
+ goto err_out_mpt_free;
+
+ /*
+ * If list_len is odd, we add one more dummy entry for
+ * firmware efficiency.
+ */
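+	/*
+	 * Mailbox layout: entry 0 holds the MTT base address for this
+	 * region, entry 1 is reserved, and the buffer addresses follow.
+	 */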
+ mailbox = kmalloc(max(sizeof *mpt_entry,
+ (size_t) 8 * (list_len + (list_len & 1) + 2)) +
+ MTHCA_CMD_MAILBOX_EXTRA,
+ GFP_KERNEL);
+ if (!mailbox)
+ goto err_out_free_mtt;
+
+ mtt_entry = MAILBOX_ALIGN(mailbox);
+
+ mtt_entry[0] = cpu_to_be64(dev->mr_table.mtt_base +
+ mr->first_seg * dev->limits.mtt_seg_size);
+ mtt_entry[1] = 0;
+ for (i = 0; i < list_len; ++i)
+ mtt_entry[i + 2] = cpu_to_be64(buffer_list[i] |
+ MTHCA_MTT_FLAG_PRESENT);
+ if (list_len & 1) {
+ mtt_entry[i + 2] = 0;
+ ++list_len;
+ }
+
+ if (0) {
+ mthca_dbg(dev, "Dumping MPT entry\n");
+ for (i = 0; i < list_len + 2; ++i)
+ printk(KERN_ERR "[%2d] %016llx\n",
+ i, (unsigned long long) be64_to_cpu(mtt_entry[i]));
+ }
+
+ err = mthca_WRITE_MTT(dev, mtt_entry, list_len, &status);
+ if (err) {
+ mthca_warn(dev, "WRITE_MTT failed (%d)\n", err);
+ goto err_out_mailbox_free;
+ }
+ if (status) {
+ mthca_warn(dev, "WRITE_MTT returned status 0x%02x\n",
+ status);
+ err = -EINVAL;
+ goto err_out_mailbox_free;
+ }
+
+ mpt_entry = MAILBOX_ALIGN(mailbox);
+
+ mpt_entry->flags = cpu_to_be32(MTHCA_MPT_FLAG_SW_OWNS |
+ MTHCA_MPT_FLAG_MIO |
+ MTHCA_MPT_FLAG_REGION |
+ access);
+
+ mpt_entry->page_size = cpu_to_be32(buffer_size_shift - 12);
+ mpt_entry->key = cpu_to_be32(mr->ibmr.lkey);
+ mpt_entry->pd = cpu_to_be32(pd);
+ mpt_entry->start = cpu_to_be64(iova);
+ mpt_entry->length = cpu_to_be64(total_size);
+ memset(&mpt_entry->lkey, 0,
+ sizeof *mpt_entry - offsetof(struct mthca_mpt_entry, lkey));
+ mpt_entry->mtt_seg = cpu_to_be64(dev->mr_table.mtt_base +
+ mr->first_seg * dev->limits.mtt_seg_size);
+
+ if (0) {
+ mthca_dbg(dev, "Dumping MPT entry %08x:\n", mr->ibmr.lkey);
+ for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
+ if (i % 4 == 0)
+ printk("[%02x] ", i * 4);
+ printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i]));
+ if ((i + 1) % 4 == 0)
+ printk("\n");
+ }
+ }
+
+ err = mthca_SW2HW_MPT(dev, mpt_entry,
+ mr->ibmr.lkey & (dev->limits.num_mpts - 1),
+ &status);
+ if (err)
+ mthca_warn(dev, "SW2HW_MPT failed (%d)\n", err);
+ else if (status) {
+ mthca_warn(dev, "SW2HW_MPT returned status 0x%02x\n",
+ status);
+ err = -EINVAL;
+ }
+
+ kfree(mailbox);
+ return err;
+
+ err_out_mailbox_free:
+ kfree(mailbox);
+
+ err_out_free_mtt:
+ mthca_free_mtt(dev, mr->first_seg, mr->order);
+
+ err_out_mpt_free:
+ mthca_free(&dev->mr_table.mpt_alloc, mr->ibmr.lkey);
+ return err;
+}
+
+void mthca_free_mr(struct mthca_dev *dev, struct mthca_mr *mr)
+{
+ int err;
+ u8 status;
+
+ might_sleep();
+
+ err = mthca_HW2SW_MPT(dev, NULL,
+ mr->ibmr.lkey & (dev->limits.num_mpts - 1),
+ &status);
+ if (err)
+ mthca_warn(dev, "HW2SW_MPT failed (%d)\n", err);
+ else if (status)
+ mthca_warn(dev, "HW2SW_MPT returned status 0x%02x\n",
+ status);
+
+ if (mr->order >= 0)
+ mthca_free_mtt(dev, mr->first_seg, mr->order);
+
+ mthca_free(&dev->mr_table.mpt_alloc, mr->ibmr.lkey);
+}
+
+int __devinit mthca_init_mr_table(struct mthca_dev *dev)
+{
+ int err;
+ int i, s;
+
+ err = mthca_alloc_init(&dev->mr_table.mpt_alloc,
+ dev->limits.num_mpts,
+ ~0, dev->limits.reserved_mrws);
+ if (err)
+ return err;
+
+ err = -ENOMEM;
+
+ for (i = 1, dev->mr_table.max_mtt_order = 0;
+ i < dev->limits.num_mtt_segs;
+ i <<= 1, ++dev->mr_table.max_mtt_order)
+ ; /* nothing */
+
+ dev->mr_table.mtt_buddy = kmalloc((dev->mr_table.max_mtt_order + 1) *
+ sizeof (long *),
+ GFP_KERNEL);
+ if (!dev->mr_table.mtt_buddy)
+ goto err_out;
+
+ for (i = 0; i <= dev->mr_table.max_mtt_order; ++i)
+ dev->mr_table.mtt_buddy[i] = NULL;
+
+ for (i = 0; i <= dev->mr_table.max_mtt_order; ++i) {
+ s = BITS_TO_LONGS(1 << (dev->mr_table.max_mtt_order - i));
+ dev->mr_table.mtt_buddy[i] = kmalloc(s * sizeof (long),
+ GFP_KERNEL);
+ if (!dev->mr_table.mtt_buddy[i])
+ goto err_out_free;
+ bitmap_zero(dev->mr_table.mtt_buddy[i],
+ 1 << (dev->mr_table.max_mtt_order - i));
+ }
+
+ set_bit(0, dev->mr_table.mtt_buddy[dev->mr_table.max_mtt_order]);
+
+ for (i = 0; i < dev->mr_table.max_mtt_order; ++i)
+ if (1 << i >= dev->limits.reserved_mtts)
+ break;
+
+ if (i == dev->mr_table.max_mtt_order) {
+ mthca_err(dev, "MTT table of order %d is "
+ "too small.\n", i);
+ goto err_out_free;
+ }
+
+ (void) mthca_alloc_mtt(dev, i);
+
+ return 0;
+
+ err_out_free:
+ for (i = 0; i <= dev->mr_table.max_mtt_order; ++i)
+ kfree(dev->mr_table.mtt_buddy[i]);
+
+ err_out:
+ mthca_alloc_cleanup(&dev->mr_table.mpt_alloc);
+
+ return err;
+}
+
+void __devexit mthca_cleanup_mr_table(struct mthca_dev *dev)
+{
+ int i;
+
+ /* XXX check if any MRs are still allocated? */
+ for (i = 0; i <= dev->mr_table.max_mtt_order; ++i)
+ kfree(dev->mr_table.mtt_buddy[i]);
+ kfree(dev->mr_table.mtt_buddy);
+ mthca_alloc_cleanup(&dev->mr_table.mpt_alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_pd.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/init.h>
+#include <linux/errno.h>
+
+#include "mthca_dev.h"
+
+int mthca_pd_alloc(struct mthca_dev *dev, struct mthca_pd *pd)
+{
+ int err;
+
+ might_sleep();
+
+ atomic_set(&pd->sqp_count, 0);
+ pd->pd_num = mthca_alloc(&dev->pd_table.alloc);
+ if (pd->pd_num == -1)
+ return -ENOMEM;
+
+ err = mthca_mr_alloc_notrans(dev, pd->pd_num,
+ MTHCA_MPT_FLAG_LOCAL_READ |
+ MTHCA_MPT_FLAG_LOCAL_WRITE,
+ &pd->ntmr);
+ if (err)
+ mthca_free(&dev->pd_table.alloc, pd->pd_num);
+
+ return err;
+}
+
+void mthca_pd_free(struct mthca_dev *dev, struct mthca_pd *pd)
+{
+ might_sleep();
+ mthca_free_mr(dev, &pd->ntmr);
+ mthca_free(&dev->pd_table.alloc, pd->pd_num);
+}
+
+int __devinit mthca_init_pd_table(struct mthca_dev *dev)
+{
+ return mthca_alloc_init(&dev->pd_table.alloc,
+ dev->limits.num_pds,
+ (1 << 24) - 1,
+ dev->limits.reserved_pds);
+}
+
+void __devexit mthca_cleanup_pd_table(struct mthca_dev *dev)
+{
+ /* XXX check if any PDs are still allocated? */
+ mthca_alloc_cleanup(&dev->pd_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_profile.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include "mthca_profile.h"
+
+enum {
+ MTHCA_RES_QP,
+ MTHCA_RES_EEC,
+ MTHCA_RES_SRQ,
+ MTHCA_RES_CQ,
+ MTHCA_RES_EQP,
+ MTHCA_RES_EEEC,
+ MTHCA_RES_EQ,
+ MTHCA_RES_RDB,
+ MTHCA_RES_MCG,
+ MTHCA_RES_MPT,
+ MTHCA_RES_MTT,
+ MTHCA_RES_UAR,
+ MTHCA_RES_UDAV,
+ MTHCA_RES_UARC,
+ MTHCA_RES_NUM
+};
+
+enum {
+ MTHCA_NUM_EQS = 32,
+ MTHCA_NUM_PDS = 1 << 15
+};
+
+u64 mthca_make_profile(struct mthca_dev *dev,
+ struct mthca_profile *request,
+ struct mthca_dev_lim *dev_lim,
+ struct mthca_init_hca_param *init_hca)
+{
+ struct mthca_resource {
+ u64 size;
+ u64 start;
+ int type;
+ int num;
+ int log_num;
+ };
+
+ u64 mem_base, mem_avail;
+ u64 total_size = 0;
+ struct mthca_resource *profile;
+ struct mthca_resource tmp;
+ int i, j;
+
+ profile = kmalloc(MTHCA_RES_NUM * sizeof *profile, GFP_KERNEL);
+ if (!profile)
+ return -ENOMEM;
+
+ memset(profile, 0, MTHCA_RES_NUM * sizeof *profile);
+
+ profile[MTHCA_RES_QP].size = dev_lim->qpc_entry_sz;
+ profile[MTHCA_RES_EEC].size = dev_lim->eec_entry_sz;
+ profile[MTHCA_RES_SRQ].size = dev_lim->srq_entry_sz;
+ profile[MTHCA_RES_CQ].size = dev_lim->cqc_entry_sz;
+ profile[MTHCA_RES_EQP].size = dev_lim->eqpc_entry_sz;
+ profile[MTHCA_RES_EEEC].size = dev_lim->eeec_entry_sz;
+ profile[MTHCA_RES_EQ].size = dev_lim->eqc_entry_sz;
+ profile[MTHCA_RES_RDB].size = MTHCA_RDB_ENTRY_SIZE;
+ profile[MTHCA_RES_MCG].size = MTHCA_MGM_ENTRY_SIZE;
+ profile[MTHCA_RES_MPT].size = dev_lim->mpt_entry_sz;
+ profile[MTHCA_RES_MTT].size = dev_lim->mtt_seg_sz;
+ profile[MTHCA_RES_UAR].size = dev_lim->uar_scratch_entry_sz;
+ profile[MTHCA_RES_UDAV].size = MTHCA_AV_SIZE;
+ profile[MTHCA_RES_UARC].size = request->uarc_size;
+
+ profile[MTHCA_RES_QP].num = request->num_qp;
+ profile[MTHCA_RES_EQP].num = request->num_qp;
+ profile[MTHCA_RES_RDB].num = request->num_qp * request->rdb_per_qp;
+ profile[MTHCA_RES_CQ].num = request->num_cq;
+ profile[MTHCA_RES_EQ].num = MTHCA_NUM_EQS;
+ profile[MTHCA_RES_MCG].num = request->num_mcg;
+ profile[MTHCA_RES_MPT].num = request->num_mpt;
+ profile[MTHCA_RES_MTT].num = request->num_mtt;
+ profile[MTHCA_RES_UAR].num = request->num_uar;
+ profile[MTHCA_RES_UARC].num = request->num_uar;
+ profile[MTHCA_RES_UDAV].num = request->num_udav;
+
+ for (i = 0; i < MTHCA_RES_NUM; ++i) {
+ profile[i].type = i;
+ profile[i].log_num = max(ffs(profile[i].num) - 1, 0);
+ profile[i].size *= profile[i].num;
+ if (dev->hca_type == ARBEL_NATIVE)
+ profile[i].size = max(profile[i].size, (u64) PAGE_SIZE);
+ }
+
+ if (dev->hca_type == ARBEL_NATIVE) {
+ mem_base = 0;
+ mem_avail = dev_lim->hca.arbel.max_icm_sz;
+ } else {
+ mem_base = dev->ddr_start;
+ mem_avail = dev->fw.tavor.fw_start - dev->ddr_start;
+ }
+
+ /*
+ * Sort the resources in decreasing order of size. Since they
+ * all have sizes that are powers of 2, we'll be able to keep
+ * resources aligned to their size and pack them without gaps
+ * using the sorted order.
+ */
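+	/*
+	 * For example (illustrative only): chunks of 16 KB, 4 KB and 4 KB
+	 * are laid out as 16 KB @ base + 0, 4 KB @ base + 16 KB and
+	 * 4 KB @ base + 20 KB, so each stays aligned to its own size.
+	 */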
+ for (i = MTHCA_RES_NUM; i > 0; --i)
+ for (j = 1; j < i; ++j) {
+ if (profile[j].size > profile[j - 1].size) {
+ tmp = profile[j];
+ profile[j] = profile[j - 1];
+ profile[j - 1] = tmp;
+ }
+ }
+
+ for (i = 0; i < MTHCA_RES_NUM; ++i) {
+ if (profile[i].size) {
+ profile[i].start = mem_base + total_size;
+ total_size += profile[i].size;
+ }
+ if (total_size > mem_avail) {
+ mthca_err(dev, "Profile requires 0x%llx bytes; "
+			  "won't fit in 0x%llx bytes of context memory.\n",
+ (unsigned long long) total_size,
+ (unsigned long long) mem_avail);
+ kfree(profile);
+ return -ENOMEM;
+ }
+
+ if (profile[i].size)
+ mthca_dbg(dev, "profile[%2d]--%2d/%2d @ 0x%16llx "
+ "(size 0x%8llx)\n",
+ i, profile[i].type, profile[i].log_num,
+ (unsigned long long) profile[i].start,
+ (unsigned long long) profile[i].size);
+ }
+
+ if (dev->hca_type == ARBEL_NATIVE)
+ mthca_dbg(dev, "HCA context memory: reserving %d KB\n",
+ (int) (total_size >> 10));
+ else
+ mthca_dbg(dev, "HCA memory: allocated %d KB/%d KB (%d KB free)\n",
+ (int) (total_size >> 10), (int) (mem_avail >> 10),
+ (int) ((mem_avail - total_size) >> 10));
+
+ for (i = 0; i < MTHCA_RES_NUM; ++i) {
+ switch (profile[i].type) {
+ case MTHCA_RES_QP:
+ dev->limits.num_qps = profile[i].num;
+ init_hca->qpc_base = profile[i].start;
+ init_hca->log_num_qps = profile[i].log_num;
+ break;
+ case MTHCA_RES_EEC:
+ dev->limits.num_eecs = profile[i].num;
+ init_hca->eec_base = profile[i].start;
+ init_hca->log_num_eecs = profile[i].log_num;
+ break;
+ case MTHCA_RES_SRQ:
+ dev->limits.num_srqs = profile[i].num;
+ init_hca->srqc_base = profile[i].start;
+ init_hca->log_num_srqs = profile[i].log_num;
+ break;
+ case MTHCA_RES_CQ:
+ dev->limits.num_cqs = profile[i].num;
+ init_hca->cqc_base = profile[i].start;
+ init_hca->log_num_cqs = profile[i].log_num;
+ break;
+ case MTHCA_RES_EQP:
+ init_hca->eqpc_base = profile[i].start;
+ break;
+ case MTHCA_RES_EEEC:
+ init_hca->eeec_base = profile[i].start;
+ break;
+ case MTHCA_RES_EQ:
+ dev->limits.num_eqs = profile[i].num;
+ init_hca->eqc_base = profile[i].start;
+ init_hca->log_num_eqs = profile[i].log_num;
+ break;
+ case MTHCA_RES_RDB:
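+			/*
+			 * rdb_shift ends up as the smallest shift with
+			 * num_qp << shift >= number of RDBs, i.e. roughly
+			 * log2(RDBs per QP).
+			 */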
+ for (dev->qp_table.rdb_shift = 0;
+ profile[MTHCA_RES_QP].num << dev->qp_table.rdb_shift <
+ profile[i].num;
+ ++dev->qp_table.rdb_shift)
+ ; /* nothing */
+ dev->qp_table.rdb_base = (u32) profile[i].start;
+ init_hca->rdb_base = profile[i].start;
+ break;
+ case MTHCA_RES_MCG:
+ dev->limits.num_mgms = profile[i].num >> 1;
+ dev->limits.num_amgms = profile[i].num >> 1;
+ init_hca->mc_base = profile[i].start;
+ init_hca->log_mc_entry_sz = ffs(MTHCA_MGM_ENTRY_SIZE) - 1;
+ init_hca->log_mc_table_sz = profile[i].log_num;
+ init_hca->mc_hash_sz = 1 << (profile[i].log_num - 1);
+ break;
+ case MTHCA_RES_MPT:
+ dev->limits.num_mpts = profile[i].num;
+ init_hca->mpt_base = profile[i].start;
+ init_hca->log_mpt_sz = profile[i].log_num;
+ break;
+ case MTHCA_RES_MTT:
+ dev->limits.num_mtt_segs = profile[i].num;
+ dev->limits.mtt_seg_size = dev_lim->mtt_seg_sz;
+ dev->mr_table.mtt_base = profile[i].start;
+ init_hca->mtt_base = profile[i].start;
+ init_hca->mtt_seg_sz = ffs(dev_lim->mtt_seg_sz) - 7;
+ break;
+ case MTHCA_RES_UAR:
+ init_hca->uar_scratch_base = profile[i].start;
+ break;
+ case MTHCA_RES_UDAV:
+ dev->av_table.ddr_av_base = profile[i].start;
+		dev->av_table.num_ddr_avs = profile[i].num;
+		break;
+	case MTHCA_RES_UARC:
+		init_hca->uarc_base   = profile[i].start;
+		init_hca->log_uarc_sz = ffs(request->uarc_size) - 13;
+		init_hca->log_uar_sz  = ffs(request->num_uar) - 1;
+		break;
+ default:
+ break;
+ }
+ }
+
+ /*
+ * PDs don't take any HCA memory, but we assign them as part
+ * of the HCA profile anyway.
+ */
+ dev->limits.num_pds = MTHCA_NUM_PDS;
+
+ kfree(profile);
+ return total_size;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_provider.c 1397 2004-12-28 05:09:00Z roland $
+ */
+
+#include <ib_smi.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+static int mthca_query_device(struct ib_device *ibdev,
+ struct ib_device_attr *props)
+{
+ struct ib_smp *in_mad = NULL;
+ struct ib_smp *out_mad = NULL;
+ int err = -ENOMEM;
+ u8 status;
+
+ in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
+ out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
+ if (!in_mad || !out_mad)
+ goto out;
+
+ props->fw_ver = to_mdev(ibdev)->fw_ver;
+
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->base_version = 1;
+ in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ in_mad->class_version = 1;
+ in_mad->method = IB_MGMT_METHOD_GET;
+ in_mad->attr_id = IB_SMP_ATTR_NODE_INFO;
+
+ err = mthca_MAD_IFC(to_mdev(ibdev), 1, 1,
+ 1, NULL, NULL, in_mad, out_mad,
+ &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ props->vendor_id = be32_to_cpup((u32 *) (out_mad->data + 36)) &
+ 0xffffff;
+ props->vendor_part_id = be16_to_cpup((u16 *) (out_mad->data + 30));
+ props->hw_ver = be16_to_cpup((u16 *) (out_mad->data + 32));
+ memcpy(&props->sys_image_guid, out_mad->data + 4, 8);
+ memcpy(&props->node_guid, out_mad->data + 12, 8);
+
+ err = 0;
+ out:
+ kfree(in_mad);
+ kfree(out_mad);
+ return err;
+}
+
+static int mthca_query_port(struct ib_device *ibdev,
+ u8 port, struct ib_port_attr *props)
+{
+ struct ib_smp *in_mad = NULL;
+ struct ib_smp *out_mad = NULL;
+ int err = -ENOMEM;
+ u8 status;
+
+ in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
+ out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
+ if (!in_mad || !out_mad)
+ goto out;
+
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->base_version = 1;
+ in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ in_mad->class_version = 1;
+ in_mad->method = IB_MGMT_METHOD_GET;
+ in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
+ in_mad->attr_mod = cpu_to_be32(port);
+
+ err = mthca_MAD_IFC(to_mdev(ibdev), 1, 1,
+ port, NULL, NULL, in_mad, out_mad,
+ &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ props->lid = be16_to_cpup((u16 *) (out_mad->data + 16));
+ props->lmc = out_mad->data[34] & 0x7;
+ props->sm_lid = be16_to_cpup((u16 *) (out_mad->data + 18));
+ props->sm_sl = out_mad->data[36] & 0xf;
+ props->state = out_mad->data[32] & 0xf;
+ props->phys_state = out_mad->data[33] >> 4;
+ props->port_cap_flags = be32_to_cpup((u32 *) (out_mad->data + 20));
+ props->gid_tbl_len = to_mdev(ibdev)->limits.gid_table_len;
+ props->pkey_tbl_len = to_mdev(ibdev)->limits.pkey_table_len;
+ props->qkey_viol_cntr = be16_to_cpup((u16 *) (out_mad->data + 48));
+ props->active_width = out_mad->data[31] & 0xf;
+ props->active_speed = out_mad->data[35] >> 4;
+
+ out:
+ kfree(in_mad);
+ kfree(out_mad);
+ return err;
+}
+
+static int mthca_modify_port(struct ib_device *ibdev,
+ u8 port, int port_modify_mask,
+ struct ib_port_modify *props)
+{
+ struct mthca_set_ib_param set_ib;
+ struct ib_port_attr attr;
+ int err;
+ u8 status;
+
+ if (down_interruptible(&to_mdev(ibdev)->cap_mask_mutex))
+ return -ERESTARTSYS;
+
+ err = mthca_query_port(ibdev, port, &attr);
+ if (err)
+ goto out;
+
+ set_ib.set_si_guid = 0;
+ set_ib.reset_qkey_viol = !!(port_modify_mask & IB_PORT_RESET_QKEY_CNTR);
+
+ set_ib.cap_mask = (attr.port_cap_flags | props->set_port_cap_mask) &
+ ~props->clr_port_cap_mask;
+
+ err = mthca_SET_IB(to_mdev(ibdev), &set_ib, port, &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+out:
+ up(&to_mdev(ibdev)->cap_mask_mutex);
+ return err;
+}
+
+static int mthca_query_pkey(struct ib_device *ibdev,
+ u8 port, u16 index, u16 *pkey)
+{
+ struct ib_smp *in_mad = NULL;
+ struct ib_smp *out_mad = NULL;
+ int err = -ENOMEM;
+ u8 status;
+
+ in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
+ out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
+ if (!in_mad || !out_mad)
+ goto out;
+
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->base_version = 1;
+ in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ in_mad->class_version = 1;
+ in_mad->method = IB_MGMT_METHOD_GET;
+ in_mad->attr_id = IB_SMP_ATTR_PKEY_TABLE;
+ in_mad->attr_mod = cpu_to_be32(index / 32);
+
+ err = mthca_MAD_IFC(to_mdev(ibdev), 1, 1,
+ port, NULL, NULL, in_mad, out_mad,
+ &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ *pkey = be16_to_cpu(((u16 *) out_mad->data)[index % 32]);
+
+ out:
+ kfree(in_mad);
+ kfree(out_mad);
+ return err;
+}
+
+static int mthca_query_gid(struct ib_device *ibdev, u8 port,
+ int index, union ib_gid *gid)
+{
+ struct ib_smp *in_mad = NULL;
+ struct ib_smp *out_mad = NULL;
+ int err = -ENOMEM;
+ u8 status;
+
+ in_mad = kmalloc(sizeof *in_mad, GFP_KERNEL);
+ out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
+ if (!in_mad || !out_mad)
+ goto out;
+
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->base_version = 1;
+ in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ in_mad->class_version = 1;
+ in_mad->method = IB_MGMT_METHOD_GET;
+ in_mad->attr_id = IB_SMP_ATTR_PORT_INFO;
+ in_mad->attr_mod = cpu_to_be32(port);
+
+ err = mthca_MAD_IFC(to_mdev(ibdev), 1, 1,
+ port, NULL, NULL, in_mad, out_mad,
+ &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memcpy(gid->raw, out_mad->data + 8, 8);
+
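+	/* The first query gave us the GID prefix; now fetch the GUID half */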
+ memset(in_mad, 0, sizeof *in_mad);
+ in_mad->base_version = 1;
+ in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
+ in_mad->class_version = 1;
+ in_mad->method = IB_MGMT_METHOD_GET;
+ in_mad->attr_id = IB_SMP_ATTR_GUID_INFO;
+ in_mad->attr_mod = cpu_to_be32(index / 8);
+
+ err = mthca_MAD_IFC(to_mdev(ibdev), 1, 1,
+ port, NULL, NULL, in_mad, out_mad,
+ &status);
+ if (err)
+ goto out;
+ if (status) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ memcpy(gid->raw + 8, out_mad->data + (index % 8) * 16, 8);
+
+ out:
+ kfree(in_mad);
+ kfree(out_mad);
+ return err;
+}
+
+static struct ib_pd *mthca_alloc_pd(struct ib_device *ibdev)
+{
+ struct mthca_pd *pd;
+ int err;
+
+ pd = kmalloc(sizeof *pd, GFP_KERNEL);
+ if (!pd)
+ return ERR_PTR(-ENOMEM);
+
+ err = mthca_pd_alloc(to_mdev(ibdev), pd);
+ if (err) {
+ kfree(pd);
+ return ERR_PTR(err);
+ }
+
+ return &pd->ibpd;
+}
+
+static int mthca_dealloc_pd(struct ib_pd *pd)
+{
+ mthca_pd_free(to_mdev(pd->device), to_mpd(pd));
+ kfree(pd);
+
+ return 0;
+}
+
+static struct ib_ah *mthca_ah_create(struct ib_pd *pd,
+ struct ib_ah_attr *ah_attr)
+{
+ int err;
+ struct mthca_ah *ah;
+
+ ah = kmalloc(sizeof *ah, GFP_KERNEL);
+ if (!ah)
+ return ERR_PTR(-ENOMEM);
+
+ err = mthca_create_ah(to_mdev(pd->device), to_mpd(pd), ah_attr, ah);
+ if (err) {
+ kfree(ah);
+ return ERR_PTR(err);
+ }
+
+ return &ah->ibah;
+}
+
+static int mthca_ah_destroy(struct ib_ah *ah)
+{
+ mthca_destroy_ah(to_mdev(ah->device), to_mah(ah));
+ kfree(ah);
+
+ return 0;
+}
+
+static struct ib_qp *mthca_create_qp(struct ib_pd *pd,
+ struct ib_qp_init_attr *init_attr)
+{
+ struct mthca_qp *qp;
+ int err;
+
+ switch (init_attr->qp_type) {
+ case IB_QPT_RC:
+ case IB_QPT_UC:
+ case IB_QPT_UD:
+ {
+ qp = kmalloc(sizeof *qp, GFP_KERNEL);
+ if (!qp)
+ return ERR_PTR(-ENOMEM);
+
+ qp->sq.max = init_attr->cap.max_send_wr;
+ qp->rq.max = init_attr->cap.max_recv_wr;
+ qp->sq.max_gs = init_attr->cap.max_send_sge;
+ qp->rq.max_gs = init_attr->cap.max_recv_sge;
+
+ err = mthca_alloc_qp(to_mdev(pd->device), to_mpd(pd),
+ to_mcq(init_attr->send_cq),
+ to_mcq(init_attr->recv_cq),
+ init_attr->qp_type, init_attr->sq_sig_type,
+ init_attr->rq_sig_type, qp);
+ qp->ibqp.qp_num = qp->qpn;
+ break;
+ }
+ case IB_QPT_SMI:
+ case IB_QPT_GSI:
+ {
+ qp = kmalloc(sizeof (struct mthca_sqp), GFP_KERNEL);
+ if (!qp)
+ return ERR_PTR(-ENOMEM);
+
+ qp->sq.max = init_attr->cap.max_send_wr;
+ qp->rq.max = init_attr->cap.max_recv_wr;
+ qp->sq.max_gs = init_attr->cap.max_send_sge;
+ qp->rq.max_gs = init_attr->cap.max_recv_sge;
+
+ qp->ibqp.qp_num = init_attr->qp_type == IB_QPT_SMI ? 0 : 1;
+
+ err = mthca_alloc_sqp(to_mdev(pd->device), to_mpd(pd),
+ to_mcq(init_attr->send_cq),
+ to_mcq(init_attr->recv_cq),
+ init_attr->sq_sig_type, init_attr->rq_sig_type,
+ qp->ibqp.qp_num, init_attr->port_num,
+ to_msqp(qp));
+ break;
+ }
+ default:
+ /* Don't support raw QPs */
+ return ERR_PTR(-ENOSYS);
+ }
+
+ if (err) {
+ kfree(qp);
+ return ERR_PTR(err);
+ }
+
+ init_attr->cap.max_inline_data = 0;
+
+ return &qp->ibqp;
+}
+
+static int mthca_destroy_qp(struct ib_qp *qp)
+{
+ mthca_free_qp(to_mdev(qp->device), to_mqp(qp));
+ kfree(qp);
+ return 0;
+}
+
+static struct ib_cq *mthca_create_cq(struct ib_device *ibdev, int entries)
+{
+ struct mthca_cq *cq;
+ int nent;
+ int err;
+
+ cq = kmalloc(sizeof *cq, GFP_KERNEL);
+ if (!cq)
+ return ERR_PTR(-ENOMEM);
+
+ for (nent = 1; nent <= entries; nent <<= 1)
+ ; /* nothing */
+
+ err = mthca_init_cq(to_mdev(ibdev), nent, cq);
+ if (err) {
+ kfree(cq);
+ cq = ERR_PTR(err);
+ } else
+ cq->ibcq.cqe = nent - 1;
+
+ return &cq->ibcq;
+}
+
+static int mthca_destroy_cq(struct ib_cq *cq)
+{
+ mthca_free_cq(to_mdev(cq->device), to_mcq(cq));
+ kfree(cq);
+
+ return 0;
+}
+
+static int mthca_req_notify_cq(struct ib_cq *cq, enum ib_cq_notify notify)
+{
+ mthca_arm_cq(to_mdev(cq->device), to_mcq(cq),
+ notify == IB_CQ_SOLICITED);
+ return 0;
+}
+
+static inline u32 convert_access(int acc)
+{
+ return (acc & IB_ACCESS_REMOTE_ATOMIC ? MTHCA_MPT_FLAG_ATOMIC : 0) |
+ (acc & IB_ACCESS_REMOTE_WRITE ? MTHCA_MPT_FLAG_REMOTE_WRITE : 0) |
+ (acc & IB_ACCESS_REMOTE_READ ? MTHCA_MPT_FLAG_REMOTE_READ : 0) |
+ (acc & IB_ACCESS_LOCAL_WRITE ? MTHCA_MPT_FLAG_LOCAL_WRITE : 0) |
+ MTHCA_MPT_FLAG_LOCAL_READ;
+}
+
+static struct ib_mr *mthca_get_dma_mr(struct ib_pd *pd, int acc)
+{
+ struct mthca_mr *mr;
+ int err;
+
+ mr = kmalloc(sizeof *mr, GFP_KERNEL);
+ if (!mr)
+ return ERR_PTR(-ENOMEM);
+
+ err = mthca_mr_alloc_notrans(to_mdev(pd->device),
+ to_mpd(pd)->pd_num,
+ convert_access(acc), mr);
+
+ if (err) {
+ kfree(mr);
+ return ERR_PTR(err);
+ }
+
+ return &mr->ibmr;
+}
+
+static struct ib_mr *mthca_reg_phys_mr(struct ib_pd *pd,
+ struct ib_phys_buf *buffer_list,
+ int num_phys_buf,
+ int acc,
+ u64 *iova_start)
+{
+ struct mthca_mr *mr;
+ u64 *page_list;
+ u64 total_size;
+ u64 mask;
+ int shift;
+ int npages;
+ int err;
+ int i, j, n;
+
+ /* First check that we have enough alignment */
+ if ((*iova_start & ~PAGE_MASK) != (buffer_list[0].addr & ~PAGE_MASK))
+ return ERR_PTR(-EINVAL);
+
+ if (num_phys_buf > 1 &&
+ ((buffer_list[0].addr + buffer_list[0].size) & ~PAGE_MASK))
+ return ERR_PTR(-EINVAL);
+
+ mask = 0;
+ total_size = 0;
+ for (i = 0; i < num_phys_buf; ++i) {
+ if (buffer_list[i].addr & ~PAGE_MASK)
+ return ERR_PTR(-EINVAL);
+ if (i != 0 && i != num_phys_buf - 1 &&
+ (buffer_list[i].size & ~PAGE_MASK))
+ return ERR_PTR(-EINVAL);
+
+ total_size += buffer_list[i].size;
+ if (i > 0)
+ mask |= buffer_list[i].addr;
+ }
+
+ /* Find largest page shift we can use to cover buffers */
+ for (shift = PAGE_SHIFT; shift < 31; ++shift)
+ if (num_phys_buf > 1) {
+ if ((1ULL << shift) & mask)
+ break;
+ } else {
+ if (1ULL << shift >=
+ buffer_list[0].size +
+ (buffer_list[0].addr & ((1ULL << shift) - 1)))
+ break;
+ }
+
+ buffer_list[0].size += buffer_list[0].addr & ((1ULL << shift) - 1);
+ buffer_list[0].addr &= ~0ull << shift;
+
+ mr = kmalloc(sizeof *mr, GFP_KERNEL);
+ if (!mr)
+ return ERR_PTR(-ENOMEM);
+
+ npages = 0;
+ for (i = 0; i < num_phys_buf; ++i)
+ npages += (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
+
+ if (!npages)
+ return &mr->ibmr;
+
+ page_list = kmalloc(npages * sizeof *page_list, GFP_KERNEL);
+ if (!page_list) {
+ kfree(mr);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ n = 0;
+ for (i = 0; i < num_phys_buf; ++i)
+ for (j = 0;
+ j < (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
+ ++j)
+ page_list[n++] = buffer_list[i].addr + ((u64) j << shift);
+
+ mthca_dbg(to_mdev(pd->device), "Registering memory at %llx (iova %llx) "
+ "in PD %x; shift %d, npages %d.\n",
+ (unsigned long long) buffer_list[0].addr,
+ (unsigned long long) *iova_start,
+ to_mpd(pd)->pd_num,
+ shift, npages);
+
+ err = mthca_mr_alloc_phys(to_mdev(pd->device),
+ to_mpd(pd)->pd_num,
+ page_list, shift, npages,
+ *iova_start, total_size,
+ convert_access(acc), mr);
+
+	if (err) {
+		kfree(page_list);
+		kfree(mr);
+		return ERR_PTR(err);
+	}
+
+ kfree(page_list);
+ return &mr->ibmr;
+}
+
+static int mthca_dereg_mr(struct ib_mr *mr)
+{
+ mthca_free_mr(to_mdev(mr->device), to_mmr(mr));
+ kfree(mr);
+ return 0;
+}
+
+static ssize_t show_rev(struct class_device *cdev, char *buf)
+{
+ struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev);
+ return sprintf(buf, "%x\n", dev->rev_id);
+}
+
+static ssize_t show_fw_ver(struct class_device *cdev, char *buf)
+{
+ struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev);
+ return sprintf(buf, "%x.%x.%x\n", (int) (dev->fw_ver >> 32),
+ (int) (dev->fw_ver >> 16) & 0xffff,
+ (int) dev->fw_ver & 0xffff);
+}
+
+static ssize_t show_hca(struct class_device *cdev, char *buf)
+{
+ struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev);
+ switch (dev->hca_type) {
+ case TAVOR: return sprintf(buf, "MT23108\n");
+ case ARBEL_COMPAT: return sprintf(buf, "MT25208 (MT23108 compat mode)\n");
+ case ARBEL_NATIVE: return sprintf(buf, "MT25208\n");
+ default: return sprintf(buf, "unknown\n");
+ }
+}
+
+static CLASS_DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL);
+static CLASS_DEVICE_ATTR(fw_ver, S_IRUGO, show_fw_ver, NULL);
+static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca, NULL);
+
+static struct class_device_attribute *mthca_class_attributes[] = {
+ &class_device_attr_hw_rev,
+ &class_device_attr_fw_ver,
+ &class_device_attr_hca_type
+};
+
+int mthca_register_device(struct mthca_dev *dev)
+{
+ int ret;
+ int i;
+
+ strlcpy(dev->ib_dev.name, "mthca%d", IB_DEVICE_NAME_MAX);
+ dev->ib_dev.node_type = IB_NODE_CA;
+ dev->ib_dev.phys_port_cnt = dev->limits.num_ports;
+ dev->ib_dev.dma_device = &dev->pdev->dev;
+ dev->ib_dev.class_dev.dev = &dev->pdev->dev;
+ dev->ib_dev.query_device = mthca_query_device;
+ dev->ib_dev.query_port = mthca_query_port;
+ dev->ib_dev.modify_port = mthca_modify_port;
+ dev->ib_dev.query_pkey = mthca_query_pkey;
+ dev->ib_dev.query_gid = mthca_query_gid;
+ dev->ib_dev.alloc_pd = mthca_alloc_pd;
+ dev->ib_dev.dealloc_pd = mthca_dealloc_pd;
+ dev->ib_dev.create_ah = mthca_ah_create;
+ dev->ib_dev.destroy_ah = mthca_ah_destroy;
+ dev->ib_dev.create_qp = mthca_create_qp;
+ dev->ib_dev.modify_qp = mthca_modify_qp;
+ dev->ib_dev.destroy_qp = mthca_destroy_qp;
+ dev->ib_dev.post_send = mthca_post_send;
+ dev->ib_dev.post_recv = mthca_post_receive;
+ dev->ib_dev.create_cq = mthca_create_cq;
+ dev->ib_dev.destroy_cq = mthca_destroy_cq;
+ dev->ib_dev.poll_cq = mthca_poll_cq;
+ dev->ib_dev.req_notify_cq = mthca_req_notify_cq;
+ dev->ib_dev.get_dma_mr = mthca_get_dma_mr;
+ dev->ib_dev.reg_phys_mr = mthca_reg_phys_mr;
+ dev->ib_dev.dereg_mr = mthca_dereg_mr;
+ dev->ib_dev.attach_mcast = mthca_multicast_attach;
+ dev->ib_dev.detach_mcast = mthca_multicast_detach;
+ dev->ib_dev.process_mad = mthca_process_mad;
+
+ init_MUTEX(&dev->cap_mask_mutex);
+
+ ret = ib_register_device(&dev->ib_dev);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < ARRAY_SIZE(mthca_class_attributes); ++i) {
+ ret = class_device_create_file(&dev->ib_dev.class_dev,
+ mthca_class_attributes[i]);
+ if (ret) {
+ ib_unregister_device(&dev->ib_dev);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+void mthca_unregister_device(struct mthca_dev *dev)
+{
+ ib_unregister_device(&dev->ib_dev);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_provider.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#ifndef MTHCA_PROVIDER_H
+#define MTHCA_PROVIDER_H
+
+#include <ib_verbs.h>
+#include <ib_pack.h>
+
+#define MTHCA_MPT_FLAG_ATOMIC (1 << 14)
+#define MTHCA_MPT_FLAG_REMOTE_WRITE (1 << 13)
+#define MTHCA_MPT_FLAG_REMOTE_READ (1 << 12)
+#define MTHCA_MPT_FLAG_LOCAL_WRITE (1 << 11)
+#define MTHCA_MPT_FLAG_LOCAL_READ (1 << 10)
+
+struct mthca_buf_list {
+ void *buf;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+};
+
+struct mthca_mr {
+ struct ib_mr ibmr;
+ int order;
+ u32 first_seg;
+};
+
+struct mthca_pd {
+ struct ib_pd ibpd;
+ u32 pd_num;
+ atomic_t sqp_count;
+ struct mthca_mr ntmr;
+};
+
+struct mthca_eq {
+ struct mthca_dev *dev;
+ int eqn;
+ u32 ecr_mask;
+ u32 cons_index;
+ u16 msi_x_vector;
+ u16 msi_x_entry;
+ int have_irq;
+ int nent;
+ struct mthca_buf_list *page_list;
+ struct mthca_mr mr;
+};
+
+struct mthca_av;
+
+struct mthca_ah {
+ struct ib_ah ibah;
+ int on_hca;
+ u32 key;
+ struct mthca_av *av;
+ dma_addr_t avdma;
+};
+
+/*
+ * Quick description of our CQ/QP locking scheme:
+ *
+ * We have one global lock that protects dev->cq/qp_table. Each
+ * struct mthca_cq/qp also has its own lock. An individual qp lock
+ * may be taken inside of an individual cq lock. Both cqs attached to
+ * a qp may be locked, with the send cq locked first. No other
+ * nesting should be done.
+ *
+ * Each struct mthca_cq/qp also has an atomic_t ref count. The
+ * pointer from the cq/qp_table to the struct counts as one reference.
+ * This reference also is good for access through the consumer API, so
+ * modifying the CQ/QP etc doesn't need to take another reference.
+ * Access because of a completion being polled does need a reference.
+ *
+ * Finally, each struct mthca_cq/qp has a wait_queue_head_t for the
+ * destroy function to sleep on.
+ *
+ * This means that access from the consumer API requires nothing but
+ * taking the struct's lock.
+ *
+ * Access because of a completion event should go as follows:
+ * - lock cq/qp_table and look up struct
+ * - increment ref count in struct
+ * - drop cq/qp_table lock
+ * - lock struct, do your thing, and unlock struct
+ * - decrement ref count; if zero, wake up waiters
+ *
+ * To destroy a CQ/QP, we can do the following:
+ * - lock cq/qp_table, remove pointer, unlock cq/qp_table lock
+ * - decrement ref count
+ * - wait_event until ref count is zero
+ *
+ * It is the consumer's responsibility to make sure that no QP
+ * operations (WQE posting or state modification) are pending when the
+ * QP is destroyed. Also, the consumer must make sure that calls to
+ * qp_modify are serialized.
+ *
+ * Possible optimizations (wait for profile data to see if/where we
+ * have locks bouncing between CPUs):
+ * - split cq/qp table lock into n separate (cache-aligned) locks,
+ * indexed (say) by the page in the table
+ * - split QP struct lock into three (one for common info, one for the
+ * send queue and one for the receive queue)
+ */
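+/*
+ * Minimal sketch of the completion-event steps listed above (purely
+ * illustrative; compare with the lookup done in mthca_qp_event()):
+ *
+ *	spin_lock(&dev->qp_table.lock);
+ *	qp = mthca_array_get(&dev->qp_table.qp, qpn);
+ *	if (qp)
+ *		atomic_inc(&qp->refcount);
+ *	spin_unlock(&dev->qp_table.lock);
+ *
+ *	spin_lock(&qp->lock);
+ *	... do your thing ...
+ *	spin_unlock(&qp->lock);
+ *
+ *	if (atomic_dec_and_test(&qp->refcount))
+ *		wake_up(&qp->wait);
+ */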
+
+struct mthca_cq {
+ struct ib_cq ibcq;
+ spinlock_t lock;
+ atomic_t refcount;
+ int cqn;
+ int cons_index;
+ int is_direct;
+ union {
+ struct mthca_buf_list direct;
+ struct mthca_buf_list *page_list;
+ } queue;
+ struct mthca_mr mr;
+ wait_queue_head_t wait;
+};
+
+struct mthca_wq {
+ int max;
+ int cur;
+ int next;
+ int last_comp;
+ void *last;
+ int max_gs;
+ int wqe_shift;
+ enum ib_sig_type policy;
+};
+
+struct mthca_qp {
+ struct ib_qp ibqp;
+ spinlock_t lock;
+ atomic_t refcount;
+ u32 qpn;
+ int is_direct;
+ u8 transport;
+ u8 state;
+ u8 atomic_rd_en;
+ u8 resp_depth;
+
+ struct mthca_mr mr;
+
+ struct mthca_wq rq;
+ struct mthca_wq sq;
+ int send_wqe_offset;
+
+ u64 *wrid;
+ union {
+ struct mthca_buf_list direct;
+ struct mthca_buf_list *page_list;
+ } queue;
+
+ wait_queue_head_t wait;
+};
+
+struct mthca_sqp {
+ struct mthca_qp qp;
+ int port;
+ int pkey_index;
+ u32 qkey;
+ u32 send_psn;
+ struct ib_ud_header ud_header;
+ int header_buf_size;
+ void *header_buf;
+ dma_addr_t header_dma;
+};
+
+static inline struct mthca_mr *to_mmr(struct ib_mr *ibmr)
+{
+ return container_of(ibmr, struct mthca_mr, ibmr);
+}
+
+static inline struct mthca_pd *to_mpd(struct ib_pd *ibpd)
+{
+ return container_of(ibpd, struct mthca_pd, ibpd);
+}
+
+static inline struct mthca_ah *to_mah(struct ib_ah *ibah)
+{
+ return container_of(ibah, struct mthca_ah, ibah);
+}
+
+static inline struct mthca_cq *to_mcq(struct ib_cq *ibcq)
+{
+ return container_of(ibcq, struct mthca_cq, ibcq);
+}
+
+static inline struct mthca_qp *to_mqp(struct ib_qp *ibqp)
+{
+ return container_of(ibqp, struct mthca_qp, ibqp);
+}
+
+static inline struct mthca_sqp *to_msqp(struct mthca_qp *qp)
+{
+ return container_of(qp, struct mthca_sqp, qp);
+}
+
+#endif /* MTHCA_PROVIDER_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_qp.c 1355 2004-12-17 15:23:43Z roland $
+ */
+
+#include <linux/init.h>
+
+#include <ib_verbs.h>
+#include <ib_cache.h>
+#include <ib_pack.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+enum {
+ MTHCA_MAX_DIRECT_QP_SIZE = 4 * PAGE_SIZE,
+ MTHCA_ACK_REQ_FREQ = 10,
+ MTHCA_FLIGHT_LIMIT = 9,
+ MTHCA_UD_HEADER_SIZE = 72 /* largest UD header possible */
+};
+
+enum {
+ MTHCA_QP_STATE_RST = 0,
+ MTHCA_QP_STATE_INIT = 1,
+ MTHCA_QP_STATE_RTR = 2,
+ MTHCA_QP_STATE_RTS = 3,
+ MTHCA_QP_STATE_SQE = 4,
+ MTHCA_QP_STATE_SQD = 5,
+ MTHCA_QP_STATE_ERR = 6,
+ MTHCA_QP_STATE_DRAINING = 7
+};
+
+enum {
+ MTHCA_QP_ST_RC = 0x0,
+ MTHCA_QP_ST_UC = 0x1,
+ MTHCA_QP_ST_RD = 0x2,
+ MTHCA_QP_ST_UD = 0x3,
+ MTHCA_QP_ST_MLX = 0x7
+};
+
+enum {
+ MTHCA_QP_PM_MIGRATED = 0x3,
+ MTHCA_QP_PM_ARMED = 0x0,
+ MTHCA_QP_PM_REARM = 0x1
+};
+
+enum {
+ /* qp_context flags */
+ MTHCA_QP_BIT_DE = 1 << 8,
+ /* params1 */
+ MTHCA_QP_BIT_SRE = 1 << 15,
+ MTHCA_QP_BIT_SWE = 1 << 14,
+ MTHCA_QP_BIT_SAE = 1 << 13,
+ MTHCA_QP_BIT_SIC = 1 << 4,
+ MTHCA_QP_BIT_SSC = 1 << 3,
+ /* params2 */
+ MTHCA_QP_BIT_RRE = 1 << 15,
+ MTHCA_QP_BIT_RWE = 1 << 14,
+ MTHCA_QP_BIT_RAE = 1 << 13,
+ MTHCA_QP_BIT_RIC = 1 << 4,
+ MTHCA_QP_BIT_RSC = 1 << 3
+};
+
+struct mthca_qp_path {
+ u32 port_pkey;
+ u8 rnr_retry;
+ u8 g_mylmc;
+ u16 rlid;
+ u8 ackto;
+ u8 mgid_index;
+ u8 static_rate;
+ u8 hop_limit;
+ u32 sl_tclass_flowlabel;
+ u8 rgid[16];
+} __attribute__((packed));
+
+struct mthca_qp_context {
+ u32 flags;
+ u32 sched_queue;
+ u32 mtu_msgmax;
+ u32 usr_page;
+ u32 local_qpn;
+ u32 remote_qpn;
+ u32 reserved1[2];
+ struct mthca_qp_path pri_path;
+ struct mthca_qp_path alt_path;
+ u32 rdd;
+ u32 pd;
+ u32 wqe_base;
+ u32 wqe_lkey;
+ u32 params1;
+ u32 reserved2;
+ u32 next_send_psn;
+ u32 cqn_snd;
+ u32 next_snd_wqe[2];
+ u32 last_acked_psn;
+ u32 ssn;
+ u32 params2;
+ u32 rnr_nextrecvpsn;
+ u32 ra_buff_indx;
+ u32 cqn_rcv;
+ u32 next_rcv_wqe[2];
+ u32 qkey;
+ u32 srqn;
+ u32 rmsn;
+ u32 reserved3[19];
+} __attribute__((packed));
+
+struct mthca_qp_param {
+ u32 opt_param_mask;
+ u32 reserved1;
+ struct mthca_qp_context context;
+ u32 reserved2[62];
+} __attribute__((packed));
+
+enum {
+ MTHCA_QP_OPTPAR_ALT_ADDR_PATH = 1 << 0,
+ MTHCA_QP_OPTPAR_RRE = 1 << 1,
+ MTHCA_QP_OPTPAR_RAE = 1 << 2,
+ MTHCA_QP_OPTPAR_RWE = 1 << 3,
+ MTHCA_QP_OPTPAR_PKEY_INDEX = 1 << 4,
+ MTHCA_QP_OPTPAR_Q_KEY = 1 << 5,
+ MTHCA_QP_OPTPAR_RNR_TIMEOUT = 1 << 6,
+ MTHCA_QP_OPTPAR_PRIMARY_ADDR_PATH = 1 << 7,
+ MTHCA_QP_OPTPAR_SRA_MAX = 1 << 8,
+ MTHCA_QP_OPTPAR_RRA_MAX = 1 << 9,
+ MTHCA_QP_OPTPAR_PM_STATE = 1 << 10,
+ MTHCA_QP_OPTPAR_PORT_NUM = 1 << 11,
+ MTHCA_QP_OPTPAR_RETRY_COUNT = 1 << 12,
+ MTHCA_QP_OPTPAR_ALT_RNR_RETRY = 1 << 13,
+ MTHCA_QP_OPTPAR_ACK_TIMEOUT = 1 << 14,
+ MTHCA_QP_OPTPAR_RNR_RETRY = 1 << 15,
+ MTHCA_QP_OPTPAR_SCHED_QUEUE = 1 << 16
+};
+
+enum {
+ MTHCA_OPCODE_NOP = 0x00,
+ MTHCA_OPCODE_RDMA_WRITE = 0x08,
+ MTHCA_OPCODE_RDMA_WRITE_IMM = 0x09,
+ MTHCA_OPCODE_SEND = 0x0a,
+ MTHCA_OPCODE_SEND_IMM = 0x0b,
+ MTHCA_OPCODE_RDMA_READ = 0x10,
+ MTHCA_OPCODE_ATOMIC_CS = 0x11,
+ MTHCA_OPCODE_ATOMIC_FA = 0x12,
+ MTHCA_OPCODE_BIND_MW = 0x18,
+ MTHCA_OPCODE_INVALID = 0xff
+};
+
+enum {
+ MTHCA_NEXT_DBD = 1 << 7,
+ MTHCA_NEXT_FENCE = 1 << 6,
+ MTHCA_NEXT_CQ_UPDATE = 1 << 3,
+ MTHCA_NEXT_EVENT_GEN = 1 << 2,
+ MTHCA_NEXT_SOLICIT = 1 << 1,
+
+ MTHCA_MLX_VL15 = 1 << 17,
+ MTHCA_MLX_SLR = 1 << 16
+};
+
+struct mthca_next_seg {
+ u32 nda_op; /* [31:6] next WQE [4:0] next opcode */
+ u32 ee_nds; /* [31:8] next EE [7] DBD [6] F [5:0] next WQE size */
+ u32 flags; /* [3] CQ [2] Event [1] Solicit */
+ u32 imm; /* immediate data */
+};
+
+struct mthca_ud_seg {
+ u32 reserved1;
+ u32 lkey;
+ u64 av_addr;
+ u32 reserved2[4];
+ u32 dqpn;
+ u32 qkey;
+ u32 reserved3[2];
+};
+
+struct mthca_bind_seg {
+ u32 flags; /* [31] Atomic [30] rem write [29] rem read */
+ u32 reserved;
+ u32 new_rkey;
+ u32 lkey;
+ u64 addr;
+ u64 length;
+};
+
+struct mthca_raddr_seg {
+ u64 raddr;
+ u32 rkey;
+ u32 reserved;
+};
+
+struct mthca_atomic_seg {
+ u64 swap_add;
+ u64 compare;
+};
+
+struct mthca_data_seg {
+ u32 byte_count;
+ u32 lkey;
+ u64 addr;
+};
+
+struct mthca_mlx_seg {
+ u32 nda_op;
+ u32 nds;
+ u32 flags; /* [17] VL15 [16] SLR [14:12] static rate
+ [11:8] SL [3] C [2] E */
+ u16 rlid;
+ u16 vcrc;
+};
+
+static int is_sqp(struct mthca_dev *dev, struct mthca_qp *qp)
+{
+ return qp->qpn >= dev->qp_table.sqp_start &&
+ qp->qpn <= dev->qp_table.sqp_start + 3;
+}
+
+static int is_qp0(struct mthca_dev *dev, struct mthca_qp *qp)
+{
+ return qp->qpn >= dev->qp_table.sqp_start &&
+ qp->qpn <= dev->qp_table.sqp_start + 1;
+}
+
+static void *get_recv_wqe(struct mthca_qp *qp, int n)
+{
+ if (qp->is_direct)
+ return qp->queue.direct.buf + (n << qp->rq.wqe_shift);
+ else
+ return qp->queue.page_list[(n << qp->rq.wqe_shift) >> PAGE_SHIFT].buf +
+ ((n << qp->rq.wqe_shift) & (PAGE_SIZE - 1));
+}
+
+static void *get_send_wqe(struct mthca_qp *qp, int n)
+{
+ if (qp->is_direct)
+ return qp->queue.direct.buf + qp->send_wqe_offset +
+ (n << qp->sq.wqe_shift);
+ else
+ return qp->queue.page_list[(qp->send_wqe_offset +
+ (n << qp->sq.wqe_shift)) >>
+ PAGE_SHIFT].buf +
+ ((qp->send_wqe_offset + (n << qp->sq.wqe_shift)) &
+ (PAGE_SIZE - 1));
+}
+
+void mthca_qp_event(struct mthca_dev *dev, u32 qpn,
+ enum ib_event_type event_type)
+{
+ struct mthca_qp *qp;
+ struct ib_event event;
+
+ spin_lock(&dev->qp_table.lock);
+ qp = mthca_array_get(&dev->qp_table.qp, qpn & (dev->limits.num_qps - 1));
+ if (qp)
+ atomic_inc(&qp->refcount);
+ spin_unlock(&dev->qp_table.lock);
+
+ if (!qp) {
+ mthca_warn(dev, "Async event for bogus QP %08x\n", qpn);
+ return;
+ }
+
+ event.device = &dev->ib_dev;
+ event.event = event_type;
+ event.element.qp = &qp->ibqp;
+ if (qp->ibqp.event_handler)
+ qp->ibqp.event_handler(&event, qp->ibqp.qp_context);
+
+ if (atomic_dec_and_test(&qp->refcount))
+ wake_up(&qp->wait);
+}
+
+static int to_mthca_state(enum ib_qp_state ib_state)
+{
+ switch (ib_state) {
+ case IB_QPS_RESET: return MTHCA_QP_STATE_RST;
+ case IB_QPS_INIT: return MTHCA_QP_STATE_INIT;
+ case IB_QPS_RTR: return MTHCA_QP_STATE_RTR;
+ case IB_QPS_RTS: return MTHCA_QP_STATE_RTS;
+ case IB_QPS_SQD: return MTHCA_QP_STATE_SQD;
+ case IB_QPS_SQE: return MTHCA_QP_STATE_SQE;
+ case IB_QPS_ERR: return MTHCA_QP_STATE_ERR;
+ default: return -1;
+ }
+}
+
+enum { RC, UC, UD, RD, RDEE, MLX, NUM_TRANS };
+
+static int to_mthca_st(int transport)
+{
+ switch (transport) {
+ case RC: return MTHCA_QP_ST_RC;
+ case UC: return MTHCA_QP_ST_UC;
+ case UD: return MTHCA_QP_ST_UD;
+ case RD: return MTHCA_QP_ST_RD;
+ case MLX: return MTHCA_QP_ST_MLX;
+ default: return -1;
+ }
+}
+
+static const struct {
+ int trans;
+ u32 req_param[NUM_TRANS];
+ u32 opt_param[NUM_TRANS];
+} state_table[IB_QPS_ERR + 1][IB_QPS_ERR + 1] = {
+ [IB_QPS_RESET] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_INIT] = {
+ .trans = MTHCA_TRANS_RST2INIT,
+ .req_param = {
+ [UD] = (IB_QP_PKEY_INDEX |
+ IB_QP_PORT |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_PKEY_INDEX |
+ IB_QP_PORT |
+ IB_QP_ACCESS_FLAGS),
+ [MLX] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ },
+ /* bug-for-bug compatibility with VAPI: */
+ .opt_param = {
+ [MLX] = IB_QP_PORT
+ }
+ },
+ },
+ [IB_QPS_INIT] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_INIT] = {
+ .trans = MTHCA_TRANS_INIT2INIT,
+ .opt_param = {
+ [UD] = (IB_QP_PKEY_INDEX |
+ IB_QP_PORT |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_PKEY_INDEX |
+ IB_QP_PORT |
+ IB_QP_ACCESS_FLAGS),
+ [MLX] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ }
+ },
+ [IB_QPS_RTR] = {
+ .trans = MTHCA_TRANS_INIT2RTR,
+ .req_param = {
+ [RC] = (IB_QP_AV |
+ IB_QP_PATH_MTU |
+ IB_QP_DEST_QPN |
+ IB_QP_RQ_PSN |
+ IB_QP_MAX_DEST_RD_ATOMIC |
+ IB_QP_MIN_RNR_TIMER),
+ },
+ .opt_param = {
+ [UD] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_ALT_PATH |
+ IB_QP_ACCESS_FLAGS |
+ IB_QP_PKEY_INDEX),
+ [MLX] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ }
+ }
+ },
+ [IB_QPS_RTR] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_RTS] = {
+ .trans = MTHCA_TRANS_RTR2RTS,
+ .req_param = {
+ [UD] = IB_QP_SQ_PSN,
+ [RC] = (IB_QP_TIMEOUT |
+ IB_QP_RETRY_CNT |
+ IB_QP_RNR_RETRY |
+ IB_QP_SQ_PSN |
+ IB_QP_MAX_QP_RD_ATOMIC),
+ [MLX] = IB_QP_SQ_PSN,
+ },
+ .opt_param = {
+ [UD] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_CUR_STATE |
+ IB_QP_ALT_PATH |
+ IB_QP_ACCESS_FLAGS |
+ IB_QP_PKEY_INDEX |
+ IB_QP_MIN_RNR_TIMER |
+ IB_QP_PATH_MIG_STATE),
+ [MLX] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ }
+ }
+ },
+ [IB_QPS_RTS] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_RTS] = {
+ .trans = MTHCA_TRANS_RTS2RTS,
+ .opt_param = {
+ [UD] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_ACCESS_FLAGS |
+ IB_QP_ALT_PATH |
+ IB_QP_PATH_MIG_STATE |
+ IB_QP_MIN_RNR_TIMER),
+ [MLX] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ }
+ },
+ [IB_QPS_SQD] = {
+ .trans = MTHCA_TRANS_RTS2SQD,
+ },
+ },
+ [IB_QPS_SQD] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_RTS] = {
+ .trans = MTHCA_TRANS_SQD2RTS,
+ .opt_param = {
+ [UD] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_CUR_STATE |
+ IB_QP_ALT_PATH |
+ IB_QP_ACCESS_FLAGS |
+ IB_QP_MIN_RNR_TIMER |
+ IB_QP_PATH_MIG_STATE),
+ [MLX] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ }
+ },
+ [IB_QPS_SQD] = {
+ .trans = MTHCA_TRANS_SQD2SQD,
+ .opt_param = {
+ [UD] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_AV |
+ IB_QP_TIMEOUT |
+ IB_QP_RETRY_CNT |
+ IB_QP_RNR_RETRY |
+ IB_QP_MAX_QP_RD_ATOMIC |
+ IB_QP_MAX_DEST_RD_ATOMIC |
+ IB_QP_CUR_STATE |
+ IB_QP_ALT_PATH |
+ IB_QP_ACCESS_FLAGS |
+ IB_QP_PKEY_INDEX |
+ IB_QP_MIN_RNR_TIMER |
+ IB_QP_PATH_MIG_STATE),
+ [MLX] = (IB_QP_PKEY_INDEX |
+ IB_QP_QKEY),
+ }
+ }
+ },
+ [IB_QPS_SQE] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR },
+ [IB_QPS_RTS] = {
+ .trans = MTHCA_TRANS_SQERR2RTS,
+ .opt_param = {
+ [UD] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ [RC] = (IB_QP_CUR_STATE |
+ IB_QP_MIN_RNR_TIMER),
+ [MLX] = (IB_QP_CUR_STATE |
+ IB_QP_QKEY),
+ }
+ }
+ },
+ [IB_QPS_ERR] = {
+ [IB_QPS_RESET] = { .trans = MTHCA_TRANS_ANY2RST },
+ [IB_QPS_ERR] = { .trans = MTHCA_TRANS_ANY2ERR }
+ }
+};
+
+static void store_attrs(struct mthca_sqp *sqp, struct ib_qp_attr *attr,
+ int attr_mask)
+{
+ if (attr_mask & IB_QP_PKEY_INDEX)
+ sqp->pkey_index = attr->pkey_index;
+ if (attr_mask & IB_QP_QKEY)
+ sqp->qkey = attr->qkey;
+ if (attr_mask & IB_QP_SQ_PSN)
+ sqp->send_psn = attr->sq_psn;
+}
+
+static void init_port(struct mthca_dev *dev, int port)
+{
+ int err;
+ u8 status;
+ struct mthca_init_ib_param param;
+
+	memset(&param, 0, sizeof param);
+
+ param.enable_1x = 1;
+ param.enable_4x = 1;
+ param.vl_cap = dev->limits.vl_cap;
+ param.mtu_cap = dev->limits.mtu_cap;
+ param.gid_cap = dev->limits.gid_table_len;
+ param.pkey_cap = dev->limits.pkey_table_len;
+
+	err = mthca_INIT_IB(dev, &param, port, &status);
+ if (err)
+ mthca_warn(dev, "INIT_IB failed, return code %d.\n", err);
+ if (status)
+ mthca_warn(dev, "INIT_IB returned status %02x.\n", status);
+}
+
+int mthca_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask)
+{
+ struct mthca_dev *dev = to_mdev(ibqp->device);
+ struct mthca_qp *qp = to_mqp(ibqp);
+ enum ib_qp_state cur_state, new_state;
+ void *mailbox = NULL;
+ struct mthca_qp_param *qp_param;
+ struct mthca_qp_context *qp_context;
+ u32 req_param, opt_param;
+ u8 status;
+ int err;
+
+ if (attr_mask & IB_QP_CUR_STATE) {
+ if (attr->cur_qp_state != IB_QPS_RTR &&
+ attr->cur_qp_state != IB_QPS_RTS &&
+ attr->cur_qp_state != IB_QPS_SQD &&
+ attr->cur_qp_state != IB_QPS_SQE)
+ return -EINVAL;
+ else
+ cur_state = attr->cur_qp_state;
+ } else {
+ spin_lock_irq(&qp->lock);
+ cur_state = qp->state;
+ spin_unlock_irq(&qp->lock);
+ }
+
+ if (attr_mask & IB_QP_STATE) {
+ if (attr->qp_state < 0 || attr->qp_state > IB_QPS_ERR)
+ return -EINVAL;
+ new_state = attr->qp_state;
+ } else
+ new_state = cur_state;
+
+ if (state_table[cur_state][new_state].trans == MTHCA_TRANS_INVALID) {
+ mthca_dbg(dev, "Illegal QP transition "
+ "%d->%d\n", cur_state, new_state);
+ return -EINVAL;
+ }
+
+ req_param = state_table[cur_state][new_state].req_param[qp->transport];
+ opt_param = state_table[cur_state][new_state].opt_param[qp->transport];
+
+ if ((req_param & attr_mask) != req_param) {
+ mthca_dbg(dev, "QP transition "
+ "%d->%d missing req attr 0x%08x\n",
+ cur_state, new_state,
+ req_param & ~attr_mask);
+ return -EINVAL;
+ }
+
+ if (attr_mask & ~(req_param | opt_param | IB_QP_STATE)) {
+ mthca_dbg(dev, "QP transition (transport %d) "
+ "%d->%d has extra attr 0x%08x\n",
+ qp->transport,
+ cur_state, new_state,
+ attr_mask & ~(req_param | opt_param |
+ IB_QP_STATE));
+ return -EINVAL;
+ }
+
+ mailbox = kmalloc(sizeof (*qp_param) + MTHCA_CMD_MAILBOX_EXTRA, GFP_KERNEL);
+ if (!mailbox)
+ return -ENOMEM;
+ qp_param = MAILBOX_ALIGN(mailbox);
+ qp_context = &qp_param->context;
+ memset(qp_param, 0, sizeof *qp_param);
+
+ qp_context->flags = cpu_to_be32((to_mthca_state(new_state) << 28) |
+ (to_mthca_st(qp->transport) << 16));
+ qp_context->flags |= cpu_to_be32(MTHCA_QP_BIT_DE);
+ if (!(attr_mask & IB_QP_PATH_MIG_STATE))
+ qp_context->flags |= cpu_to_be32(MTHCA_QP_PM_MIGRATED << 11);
+ else {
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_PM_STATE);
+ switch (attr->path_mig_state) {
+ case IB_MIG_MIGRATED:
+ qp_context->flags |= cpu_to_be32(MTHCA_QP_PM_MIGRATED << 11);
+ break;
+ case IB_MIG_REARM:
+ qp_context->flags |= cpu_to_be32(MTHCA_QP_PM_REARM << 11);
+ break;
+ case IB_MIG_ARMED:
+ qp_context->flags |= cpu_to_be32(MTHCA_QP_PM_ARMED << 11);
+ break;
+ }
+ }
+ /* leave sched_queue as 0 */
+ if (qp->transport == MLX || qp->transport == UD)
+ qp_context->mtu_msgmax = cpu_to_be32((IB_MTU_2048 << 29) |
+ (11 << 24));
+ else if (attr_mask & IB_QP_PATH_MTU) {
+ qp_context->mtu_msgmax = cpu_to_be32((attr->path_mtu << 29) |
+ (31 << 24));
+ }
+ qp_context->usr_page = cpu_to_be32(MTHCA_KAR_PAGE);
+ qp_context->local_qpn = cpu_to_be32(qp->qpn);
+ if (attr_mask & IB_QP_DEST_QPN) {
+ qp_context->remote_qpn = cpu_to_be32(attr->dest_qp_num);
+ }
+
+ if (qp->transport == MLX)
+ qp_context->pri_path.port_pkey |=
+ cpu_to_be32(to_msqp(qp)->port << 24);
+ else {
+ if (attr_mask & IB_QP_PORT) {
+ qp_context->pri_path.port_pkey |=
+ cpu_to_be32(attr->port_num << 24);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_PORT_NUM);
+ }
+ }
+
+ if (attr_mask & IB_QP_PKEY_INDEX) {
+ qp_context->pri_path.port_pkey |=
+ cpu_to_be32(attr->pkey_index);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_PKEY_INDEX);
+ }
+
+ if (attr_mask & IB_QP_RNR_RETRY) {
+ qp_context->pri_path.rnr_retry = attr->rnr_retry << 5;
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RNR_RETRY);
+ }
+
+ if (attr_mask & IB_QP_AV) {
+ qp_context->pri_path.g_mylmc = attr->ah_attr.src_path_bits & 0x7f;
+ qp_context->pri_path.rlid = cpu_to_be16(attr->ah_attr.dlid);
+ qp_context->pri_path.static_rate = (!!attr->ah_attr.static_rate) << 3;
+ if (attr->ah_attr.ah_flags & IB_AH_GRH) {
+ qp_context->pri_path.g_mylmc |= 1 << 7;
+ qp_context->pri_path.mgid_index = attr->ah_attr.grh.sgid_index;
+ qp_context->pri_path.hop_limit = attr->ah_attr.grh.hop_limit;
+ qp_context->pri_path.sl_tclass_flowlabel =
+ cpu_to_be32((attr->ah_attr.sl << 28) |
+ (attr->ah_attr.grh.traffic_class << 20) |
+ (attr->ah_attr.grh.flow_label));
+ memcpy(qp_context->pri_path.rgid,
+ attr->ah_attr.grh.dgid.raw, 16);
+ } else {
+ qp_context->pri_path.sl_tclass_flowlabel =
+ cpu_to_be32(attr->ah_attr.sl << 28);
+ }
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_PRIMARY_ADDR_PATH);
+ }
+
+ if (attr_mask & IB_QP_TIMEOUT) {
+ qp_context->pri_path.ackto = attr->timeout;
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_ACK_TIMEOUT);
+ }
+
+ /* XXX alt_path */
+
+ /* leave rdd as 0 */
+ qp_context->pd = cpu_to_be32(to_mpd(ibqp->pd)->pd_num);
+ /* leave wqe_base as 0 (we always create an MR based at 0 for WQs) */
+ qp_context->wqe_lkey = cpu_to_be32(qp->mr.ibmr.lkey);
+ qp_context->params1 = cpu_to_be32((MTHCA_ACK_REQ_FREQ << 28) |
+ (MTHCA_FLIGHT_LIMIT << 24) |
+ MTHCA_QP_BIT_SRE |
+ MTHCA_QP_BIT_SWE |
+ MTHCA_QP_BIT_SAE);
+ if (qp->sq.policy == IB_SIGNAL_ALL_WR)
+ qp_context->params1 |= cpu_to_be32(MTHCA_QP_BIT_SSC);
+ if (attr_mask & IB_QP_RETRY_CNT) {
+ qp_context->params1 |= cpu_to_be32(attr->retry_cnt << 16);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RETRY_COUNT);
+ }
+
+ if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) {
+ qp_context->params1 |= cpu_to_be32(min(attr->max_dest_rd_atomic ?
+ ffs(attr->max_dest_rd_atomic) - 1 : 0,
+ 7) << 21);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_SRA_MAX);
+ }
+
+ if (attr_mask & IB_QP_SQ_PSN)
+ qp_context->next_send_psn = cpu_to_be32(attr->sq_psn);
+ qp_context->cqn_snd = cpu_to_be32(to_mcq(ibqp->send_cq)->cqn);
+
+ if (attr_mask & IB_QP_ACCESS_FLAGS) {
+ /*
+ * Only enable RDMA/atomics if we have responder
+ * resources set to a non-zero value.
+ */
+ if (qp->resp_depth) {
+ qp_context->params2 |=
+ cpu_to_be32(attr->qp_access_flags & IB_ACCESS_REMOTE_WRITE ?
+ MTHCA_QP_BIT_RWE : 0);
+ qp_context->params2 |=
+ cpu_to_be32(attr->qp_access_flags & IB_ACCESS_REMOTE_READ ?
+ MTHCA_QP_BIT_RRE : 0);
+ qp_context->params2 |=
+ cpu_to_be32(attr->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC ?
+ MTHCA_QP_BIT_RAE : 0);
+ }
+
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RWE |
+ MTHCA_QP_OPTPAR_RRE |
+ MTHCA_QP_OPTPAR_RAE);
+
+ qp->atomic_rd_en = attr->qp_access_flags;
+ }
+
+ if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC) {
+ u8 rra_max;
+
+ if (qp->resp_depth && !attr->max_rd_atomic) {
+ /*
+ * Lowering our responder resources to zero.
+ * Turn off RDMA/atomics as responder.
+ * (RWE/RRE/RAE in params2 already zero)
+ */
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RWE |
+ MTHCA_QP_OPTPAR_RRE |
+ MTHCA_QP_OPTPAR_RAE);
+ }
+
+ if (!qp->resp_depth && attr->max_rd_atomic) {
+ /*
+ * Increasing our responder resources from
+ * zero. Turn on RDMA/atomics as appropriate.
+ */
+ qp_context->params2 |=
+ cpu_to_be32(qp->atomic_rd_en & IB_ACCESS_REMOTE_WRITE ?
+ MTHCA_QP_BIT_RWE : 0);
+ qp_context->params2 |=
+ cpu_to_be32(qp->atomic_rd_en & IB_ACCESS_REMOTE_READ ?
+ MTHCA_QP_BIT_RRE : 0);
+ qp_context->params2 |=
+ cpu_to_be32(qp->atomic_rd_en & IB_ACCESS_REMOTE_ATOMIC ?
+ MTHCA_QP_BIT_RAE : 0);
+
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RWE |
+ MTHCA_QP_OPTPAR_RRE |
+ MTHCA_QP_OPTPAR_RAE);
+ }
+
+ for (rra_max = 0;
+ 1 << rra_max < attr->max_rd_atomic &&
+ rra_max < dev->qp_table.rdb_shift;
+ ++rra_max)
+ ; /* nothing */
+
+ qp_context->params2 |= cpu_to_be32(rra_max << 21);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RRA_MAX);
+
+ qp->resp_depth = attr->max_rd_atomic;
+ }
+
+ if (qp->rq.policy == IB_SIGNAL_ALL_WR)
+ qp_context->params2 |= cpu_to_be32(MTHCA_QP_BIT_RSC);
+ if (attr_mask & IB_QP_MIN_RNR_TIMER) {
+ qp_context->rnr_nextrecvpsn |= cpu_to_be32(attr->min_rnr_timer << 24);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_RNR_TIMEOUT);
+ }
+ if (attr_mask & IB_QP_RQ_PSN)
+ qp_context->rnr_nextrecvpsn |= cpu_to_be32(attr->rq_psn);
+
+ qp_context->ra_buff_indx = dev->qp_table.rdb_base +
+ ((qp->qpn & (dev->limits.num_qps - 1)) * MTHCA_RDB_ENTRY_SIZE <<
+ dev->qp_table.rdb_shift);
+
+ qp_context->cqn_rcv = cpu_to_be32(to_mcq(ibqp->recv_cq)->cqn);
+
+ if (attr_mask & IB_QP_QKEY) {
+ qp_context->qkey = cpu_to_be32(attr->qkey);
+ qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_Q_KEY);
+ }
+
+ err = mthca_MODIFY_QP(dev, state_table[cur_state][new_state].trans,
+ qp->qpn, 0, qp_param, 0, &status);
+ if (status) {
+ mthca_warn(dev, "modify QP %d returned status %02x.\n",
+ state_table[cur_state][new_state].trans, status);
+ err = -EINVAL;
+ }
+
+ if (!err)
+ qp->state = new_state;
+
+ kfree(mailbox);
+
+ if (is_sqp(dev, qp))
+ store_attrs(to_msqp(qp), attr, attr_mask);
+
+ /*
+ * If we are moving QP0 to RTR, bring the IB link up; if we
+ * are moving QP0 to RESET or ERROR, bring the link back down.
+ */
+ if (is_qp0(dev, qp)) {
+ if (cur_state != IB_QPS_RTR &&
+ new_state == IB_QPS_RTR)
+ init_port(dev, to_msqp(qp)->port);
+
+ if (cur_state != IB_QPS_RESET &&
+ cur_state != IB_QPS_ERR &&
+ (new_state == IB_QPS_RESET ||
+ new_state == IB_QPS_ERR))
+ mthca_CLOSE_IB(dev, to_msqp(qp)->port, &status);
+ }
+
+ return err;
+}
+
+/*
+ * Allocate and register buffer for WQEs. qp->rq.max, sq.max,
+ * rq.max_gs and sq.max_gs must all be assigned.
+ * mthca_alloc_wqe_buf will calculate rq.wqe_shift and
+ * sq.wqe_shift (as well as send_wqe_offset, is_direct, and
+ * queue)
+ */
+static int mthca_alloc_wqe_buf(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_qp *qp)
+{
+ int size;
+ int i;
+ int npages, shift;
+ dma_addr_t t;
+ u64 *dma_list = NULL;
+ int err = -ENOMEM;
+
+ size = sizeof (struct mthca_next_seg) +
+ qp->rq.max_gs * sizeof (struct mthca_data_seg);
+
+ for (qp->rq.wqe_shift = 6; 1 << qp->rq.wqe_shift < size;
+ qp->rq.wqe_shift++)
+ ; /* nothing */
+
+ size = sizeof (struct mthca_next_seg) +
+ qp->sq.max_gs * sizeof (struct mthca_data_seg);
+ if (qp->transport == MLX)
+ size += 2 * sizeof (struct mthca_data_seg);
+ else if (qp->transport == UD)
+ size += sizeof (struct mthca_ud_seg);
+ else /* bind seg is as big as atomic + raddr segs */
+ size += sizeof (struct mthca_bind_seg);
+
+ for (qp->sq.wqe_shift = 6; 1 << qp->sq.wqe_shift < size;
+ qp->sq.wqe_shift++)
+ ; /* nothing */
+
+ qp->send_wqe_offset = ALIGN(qp->rq.max << qp->rq.wqe_shift,
+ 1 << qp->sq.wqe_shift);
+ size = PAGE_ALIGN(qp->send_wqe_offset +
+ (qp->sq.max << qp->sq.wqe_shift));
+
+ qp->wrid = kmalloc((qp->rq.max + qp->sq.max) * sizeof (u64),
+ GFP_KERNEL);
+ if (!qp->wrid)
+ goto err_out;
+
+ if (size <= MTHCA_MAX_DIRECT_QP_SIZE) {
+ qp->is_direct = 1;
+ npages = 1;
+ shift = get_order(size) + PAGE_SHIFT;
+
+ if (0)
+ mthca_dbg(dev, "Creating direct QP of size %d (shift %d)\n",
+ size, shift);
+
+ qp->queue.direct.buf = pci_alloc_consistent(dev->pdev, size, &t);
+ if (!qp->queue.direct.buf)
+ goto err_out;
+
+ pci_unmap_addr_set(&qp->queue.direct, mapping, t);
+
+ memset(qp->queue.direct.buf, 0, size);
+
+ while (t & ((1 << shift) - 1)) {
+ --shift;
+ npages *= 2;
+ }
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_out_free;
+
+ for (i = 0; i < npages; ++i)
+ dma_list[i] = t + i * (1 << shift);
+ } else {
+ qp->is_direct = 0;
+ npages = size / PAGE_SIZE;
+ shift = PAGE_SHIFT;
+
+ if (0)
+ mthca_dbg(dev, "Creating indirect QP with %d pages\n", npages);
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_out;
+
+ qp->queue.page_list = kmalloc(npages *
+ sizeof *qp->queue.page_list,
+ GFP_KERNEL);
+ if (!qp->queue.page_list)
+ goto err_out;
+
+ for (i = 0; i < npages; ++i) {
+ qp->queue.page_list[i].buf =
+ pci_alloc_consistent(dev->pdev, PAGE_SIZE, &t);
+ if (!qp->queue.page_list[i].buf)
+ goto err_out_free;
+
+ memset(qp->queue.page_list[i].buf, 0, PAGE_SIZE);
+
+ pci_unmap_addr_set(&qp->queue.page_list[i], mapping, t);
+ dma_list[i] = t;
+ }
+ }
+
+ err = mthca_mr_alloc_phys(dev, pd->pd_num, dma_list, shift,
+ npages, 0, size,
+ MTHCA_MPT_FLAG_LOCAL_WRITE |
+ MTHCA_MPT_FLAG_LOCAL_READ,
+ &qp->mr);
+ if (err)
+ goto err_out_free;
+
+ kfree(dma_list);
+ return 0;
+
+ err_out_free:
+ if (qp->is_direct) {
+ pci_free_consistent(dev->pdev, size,
+ qp->queue.direct.buf,
+ pci_unmap_addr(&qp->queue.direct, mapping));
+ } else
+ for (i = 0; i < npages; ++i) {
+ if (qp->queue.page_list[i].buf)
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ qp->queue.page_list[i].buf,
+ pci_unmap_addr(&qp->queue.page_list[i],
+ mapping));
+
+ }
+
+ err_out:
+ kfree(qp->wrid);
+ kfree(dma_list);
+ return err;
+}
+
+static int mthca_alloc_qp_common(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_cq *send_cq,
+ struct mthca_cq *recv_cq,
+ enum ib_sig_type send_policy,
+ enum ib_sig_type recv_policy,
+ struct mthca_qp *qp)
+{
+ int err;
+
+ spin_lock_init(&qp->lock);
+ atomic_set(&qp->refcount, 1);
+ qp->state = IB_QPS_RESET;
+ qp->atomic_rd_en = 0;
+ qp->resp_depth = 0;
+ qp->sq.policy = send_policy;
+ qp->rq.policy = recv_policy;
+ qp->rq.cur = 0;
+ qp->sq.cur = 0;
+ qp->rq.next = 0;
+ qp->sq.next = 0;
+ qp->rq.last_comp = qp->rq.max - 1;
+ qp->sq.last_comp = qp->sq.max - 1;
+ qp->rq.last = NULL;
+ qp->sq.last = NULL;
+
+ err = mthca_alloc_wqe_buf(dev, pd, qp);
+ return err;
+}
+
+int mthca_alloc_qp(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_cq *send_cq,
+ struct mthca_cq *recv_cq,
+ enum ib_qp_type type,
+ enum ib_sig_type send_policy,
+ enum ib_sig_type recv_policy,
+ struct mthca_qp *qp)
+{
+ int err;
+
+ switch (type) {
+ case IB_QPT_RC: qp->transport = RC; break;
+ case IB_QPT_UC: qp->transport = UC; break;
+ case IB_QPT_UD: qp->transport = UD; break;
+ default: return -EINVAL;
+ }
+
+ qp->qpn = mthca_alloc(&dev->qp_table.alloc);
+ if (qp->qpn == -1)
+ return -ENOMEM;
+
+ err = mthca_alloc_qp_common(dev, pd, send_cq, recv_cq,
+ send_policy, recv_policy, qp);
+ if (err) {
+ mthca_free(&dev->qp_table.alloc, qp->qpn);
+ return err;
+ }
+
+ spin_lock_irq(&dev->qp_table.lock);
+ mthca_array_set(&dev->qp_table.qp,
+ qp->qpn & (dev->limits.num_qps - 1), qp);
+ spin_unlock_irq(&dev->qp_table.lock);
+
+ return 0;
+}
+
+int mthca_alloc_sqp(struct mthca_dev *dev,
+ struct mthca_pd *pd,
+ struct mthca_cq *send_cq,
+ struct mthca_cq *recv_cq,
+ enum ib_sig_type send_policy,
+ enum ib_sig_type recv_policy,
+ int qpn,
+ int port,
+ struct mthca_sqp *sqp)
+{
+ int err = 0;
+ u32 mqpn = qpn * 2 + dev->qp_table.sqp_start + port - 1;
+
+ sqp->header_buf_size = sqp->qp.sq.max * MTHCA_UD_HEADER_SIZE;
+ sqp->header_buf = dma_alloc_coherent(&dev->pdev->dev, sqp->header_buf_size,
+ &sqp->header_dma, GFP_KERNEL);
+ if (!sqp->header_buf)
+ return -ENOMEM;
+
+ spin_lock_irq(&dev->qp_table.lock);
+ if (mthca_array_get(&dev->qp_table.qp, mqpn))
+ err = -EBUSY;
+ else
+ mthca_array_set(&dev->qp_table.qp, mqpn, sqp);
+ spin_unlock_irq(&dev->qp_table.lock);
+
+ if (err)
+ goto err_out;
+
+ sqp->port = port;
+ sqp->qp.qpn = mqpn;
+ sqp->qp.transport = MLX;
+
+ err = mthca_alloc_qp_common(dev, pd, send_cq, recv_cq,
+ send_policy, recv_policy,
+ &sqp->qp);
+ if (err)
+ goto err_out_free;
+
+ atomic_inc(&pd->sqp_count);
+
+ return 0;
+
+ err_out_free:
+ spin_lock_irq(&dev->qp_table.lock);
+ mthca_array_clear(&dev->qp_table.qp, mqpn);
+ spin_unlock_irq(&dev->qp_table.lock);
+
+ err_out:
+ dma_free_coherent(&dev->pdev->dev, sqp->header_buf_size,
+ sqp->header_buf, sqp->header_dma);
+
+ return err;
+}
+
+void mthca_free_qp(struct mthca_dev *dev,
+ struct mthca_qp *qp)
+{
+ u8 status;
+ int size;
+ int i;
+
+ spin_lock_irq(&dev->qp_table.lock);
+ mthca_array_clear(&dev->qp_table.qp,
+ qp->qpn & (dev->limits.num_qps - 1));
+ spin_unlock_irq(&dev->qp_table.lock);
+
+ atomic_dec(&qp->refcount);
+ wait_event(qp->wait, !atomic_read(&qp->refcount));
+
+ if (qp->state != IB_QPS_RESET)
+ mthca_MODIFY_QP(dev, MTHCA_TRANS_ANY2RST, qp->qpn, 0, NULL, 0, &status);
+
+ mthca_cq_clean(dev, to_mcq(qp->ibqp.send_cq)->cqn, qp->qpn);
+ if (qp->ibqp.send_cq != qp->ibqp.recv_cq)
+ mthca_cq_clean(dev, to_mcq(qp->ibqp.recv_cq)->cqn, qp->qpn);
+
+ mthca_free_mr(dev, &qp->mr);
+
+ size = PAGE_ALIGN(qp->send_wqe_offset +
+ (qp->sq.max << qp->sq.wqe_shift));
+
+ if (qp->is_direct) {
+ pci_free_consistent(dev->pdev, size,
+ qp->queue.direct.buf,
+ pci_unmap_addr(&qp->queue.direct, mapping));
+ } else {
+ for (i = 0; i < size / PAGE_SIZE; ++i) {
+ pci_free_consistent(dev->pdev, PAGE_SIZE,
+ qp->queue.page_list[i].buf,
+ pci_unmap_addr(&qp->queue.page_list[i],
+ mapping));
+ }
+ }
+
+ kfree(qp->wrid);
+
+ if (is_sqp(dev, qp)) {
+ atomic_dec(&(to_mpd(qp->ibqp.pd)->sqp_count));
+ dma_free_coherent(&dev->pdev->dev,
+ to_msqp(qp)->header_buf_size,
+ to_msqp(qp)->header_buf,
+ to_msqp(qp)->header_dma);
+ }
+ else
+ mthca_free(&dev->qp_table.alloc, qp->qpn);
+}
+
+/* Create UD header for an MLX send and build a data segment for it */
+static int build_mlx_header(struct mthca_dev *dev, struct mthca_sqp *sqp,
+ int ind, struct ib_send_wr *wr,
+ struct mthca_mlx_seg *mlx,
+ struct mthca_data_seg *data)
+{
+ int header_size;
+ int err;
+
+ ib_ud_header_init(256, /* assume a MAD */
+ sqp->ud_header.grh_present,
+ &sqp->ud_header);
+
+ err = mthca_read_ah(dev, to_mah(wr->wr.ud.ah), &sqp->ud_header);
+ if (err)
+ return err;
+ mlx->flags &= ~cpu_to_be32(MTHCA_NEXT_SOLICIT | 1);
+ mlx->flags |= cpu_to_be32((!sqp->qp.ibqp.qp_num ? MTHCA_MLX_VL15 : 0) |
+ (sqp->ud_header.lrh.destination_lid == 0xffff ?
+ MTHCA_MLX_SLR : 0) |
+ (sqp->ud_header.lrh.service_level << 8));
+ mlx->rlid = sqp->ud_header.lrh.destination_lid;
+ mlx->vcrc = 0;
+
+ switch (wr->opcode) {
+ case IB_WR_SEND:
+ sqp->ud_header.bth.opcode = IB_OPCODE_UD_SEND_ONLY;
+ sqp->ud_header.immediate_present = 0;
+ break;
+ case IB_WR_SEND_WITH_IMM:
+ sqp->ud_header.bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
+ sqp->ud_header.immediate_present = 1;
+ sqp->ud_header.immediate_data = wr->imm_data;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ sqp->ud_header.lrh.virtual_lane = !sqp->qp.ibqp.qp_num ? 15 : 0;
+ if (sqp->ud_header.lrh.destination_lid == 0xffff)
+ sqp->ud_header.lrh.source_lid = 0xffff;
+ sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED);
+ if (!sqp->qp.ibqp.qp_num)
+ ib_get_cached_pkey(&dev->ib_dev, sqp->port,
+ sqp->pkey_index,
+ &sqp->ud_header.bth.pkey);
+ else
+ ib_get_cached_pkey(&dev->ib_dev, sqp->port,
+ wr->wr.ud.pkey_index,
+ &sqp->ud_header.bth.pkey);
+ cpu_to_be16s(&sqp->ud_header.bth.pkey);
+ sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn);
+ sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1));
+ sqp->ud_header.deth.qkey = cpu_to_be32(wr->wr.ud.remote_qkey & 0x80000000 ?
+ sqp->qkey : wr->wr.ud.remote_qkey);
+ sqp->ud_header.deth.source_qpn = cpu_to_be32(sqp->qp.ibqp.qp_num);
+
+ header_size = ib_ud_header_pack(&sqp->ud_header,
+ sqp->header_buf +
+ ind * MTHCA_UD_HEADER_SIZE);
+
+ data->byte_count = cpu_to_be32(header_size);
+ data->lkey = cpu_to_be32(to_mpd(sqp->qp.ibqp.pd)->ntmr.ibmr.lkey);
+ data->addr = cpu_to_be64(sqp->header_dma +
+ ind * MTHCA_UD_HEADER_SIZE);
+
+ return 0;
+}
+
+int mthca_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ struct ib_send_wr **bad_wr)
+{
+ struct mthca_dev *dev = to_mdev(ibqp->device);
+ struct mthca_qp *qp = to_mqp(ibqp);
+ void *wqe;
+ void *prev_wqe;
+ unsigned long flags;
+ int err = 0;
+ int nreq;
+ int i;
+ int size;
+ int size0 = 0;
+ u32 f0 = 0;
+ int ind;
+ u8 op0 = 0;
+
+ static const u8 opcode[] = {
+ [IB_WR_SEND] = MTHCA_OPCODE_SEND,
+ [IB_WR_SEND_WITH_IMM] = MTHCA_OPCODE_SEND_IMM,
+ [IB_WR_RDMA_WRITE] = MTHCA_OPCODE_RDMA_WRITE,
+ [IB_WR_RDMA_WRITE_WITH_IMM] = MTHCA_OPCODE_RDMA_WRITE_IMM,
+ [IB_WR_RDMA_READ] = MTHCA_OPCODE_RDMA_READ,
+ [IB_WR_ATOMIC_CMP_AND_SWP] = MTHCA_OPCODE_ATOMIC_CS,
+ [IB_WR_ATOMIC_FETCH_AND_ADD] = MTHCA_OPCODE_ATOMIC_FA,
+ };
+
+ spin_lock_irqsave(&qp->lock, flags);
+
+ /* XXX check that state is OK to post send */
+
+ ind = qp->sq.next;
+
+ for (nreq = 0; wr; ++nreq, wr = wr->next) {
+ if (qp->sq.cur + nreq >= qp->sq.max) {
+ mthca_err(dev, "SQ full (%d posted, %d max, %d nreq)\n",
+ qp->sq.cur, qp->sq.max, nreq);
+ err = -ENOMEM;
+ *bad_wr = wr;
+ goto out;
+ }
+
+ wqe = get_send_wqe(qp, ind);
+ prev_wqe = qp->sq.last;
+ qp->sq.last = wqe;
+
+ ((struct mthca_next_seg *) wqe)->nda_op = 0;
+ ((struct mthca_next_seg *) wqe)->ee_nds = 0;
+ ((struct mthca_next_seg *) wqe)->flags =
+ ((wr->send_flags & IB_SEND_SIGNALED) ?
+ cpu_to_be32(MTHCA_NEXT_CQ_UPDATE) : 0) |
+ ((wr->send_flags & IB_SEND_SOLICITED) ?
+ cpu_to_be32(MTHCA_NEXT_SOLICIT) : 0) |
+ cpu_to_be32(1);
+ if (wr->opcode == IB_WR_SEND_WITH_IMM ||
+ wr->opcode == IB_WR_RDMA_WRITE_WITH_IMM)
+ ((struct mthca_next_seg *) wqe)->flags = wr->imm_data;
+
+ wqe += sizeof (struct mthca_next_seg);
+ size = sizeof (struct mthca_next_seg) / 16;
+
+ switch (qp->transport) {
+ case RC:
+ switch (wr->opcode) {
+ case IB_WR_ATOMIC_CMP_AND_SWP:
+ case IB_WR_ATOMIC_FETCH_AND_ADD:
+ ((struct mthca_raddr_seg *) wqe)->raddr =
+ cpu_to_be64(wr->wr.atomic.remote_addr);
+ ((struct mthca_raddr_seg *) wqe)->rkey =
+ cpu_to_be32(wr->wr.atomic.rkey);
+ ((struct mthca_raddr_seg *) wqe)->reserved = 0;
+
+ wqe += sizeof (struct mthca_raddr_seg);
+
+ if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP) {
+ ((struct mthca_atomic_seg *) wqe)->swap_add =
+ cpu_to_be64(wr->wr.atomic.swap);
+ ((struct mthca_atomic_seg *) wqe)->compare =
+ cpu_to_be64(wr->wr.atomic.compare_add);
+ } else {
+ ((struct mthca_atomic_seg *) wqe)->swap_add =
+ cpu_to_be64(wr->wr.atomic.compare_add);
+ ((struct mthca_atomic_seg *) wqe)->compare = 0;
+ }
+
+ wqe += sizeof (struct mthca_atomic_seg);
+ size += sizeof (struct mthca_raddr_seg) / 16 +
+ sizeof (struct mthca_atomic_seg);
+ break;
+
+ case IB_WR_RDMA_WRITE:
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+ case IB_WR_RDMA_READ:
+ ((struct mthca_raddr_seg *) wqe)->raddr =
+ cpu_to_be64(wr->wr.rdma.remote_addr);
+ ((struct mthca_raddr_seg *) wqe)->rkey =
+ cpu_to_be32(wr->wr.rdma.rkey);
+ ((struct mthca_raddr_seg *) wqe)->reserved = 0;
+ wqe += sizeof (struct mthca_raddr_seg);
+ size += sizeof (struct mthca_raddr_seg) / 16;
+ break;
+
+ default:
+ /* No extra segments required for sends */
+ break;
+ }
+
+ break;
+
+ case UD:
+ ((struct mthca_ud_seg *) wqe)->lkey =
+ cpu_to_be32(to_mah(wr->wr.ud.ah)->key);
+ ((struct mthca_ud_seg *) wqe)->av_addr =
+ cpu_to_be64(to_mah(wr->wr.ud.ah)->avdma);
+ ((struct mthca_ud_seg *) wqe)->dqpn =
+ cpu_to_be32(wr->wr.ud.remote_qpn);
+ ((struct mthca_ud_seg *) wqe)->qkey =
+ cpu_to_be32(wr->wr.ud.remote_qkey);
+
+ wqe += sizeof (struct mthca_ud_seg);
+ size += sizeof (struct mthca_ud_seg) / 16;
+ break;
+
+ case MLX:
+ err = build_mlx_header(dev, to_msqp(qp), ind, wr,
+ wqe - sizeof (struct mthca_next_seg),
+ wqe);
+ if (err) {
+ *bad_wr = wr;
+ goto out;
+ }
+ wqe += sizeof (struct mthca_data_seg);
+ size += sizeof (struct mthca_data_seg) / 16;
+ break;
+ }
+
+ if (wr->num_sge > qp->sq.max_gs) {
+ mthca_err(dev, "too many gathers\n");
+ err = -EINVAL;
+ *bad_wr = wr;
+ goto out;
+ }
+
+ for (i = 0; i < wr->num_sge; ++i) {
+ ((struct mthca_data_seg *) wqe)->byte_count =
+ cpu_to_be32(wr->sg_list[i].length);
+ ((struct mthca_data_seg *) wqe)->lkey =
+ cpu_to_be32(wr->sg_list[i].lkey);
+ ((struct mthca_data_seg *) wqe)->addr =
+ cpu_to_be64(wr->sg_list[i].addr);
+ wqe += sizeof (struct mthca_data_seg);
+ size += sizeof (struct mthca_data_seg) / 16;
+ }
+
+ /* Add one more inline data segment for ICRC */
+ if (qp->transport == MLX) {
+ ((struct mthca_data_seg *) wqe)->byte_count =
+ cpu_to_be32((1 << 31) | 4);
+ ((u32 *) wqe)[1] = 0;
+ wqe += sizeof (struct mthca_data_seg);
+ size += sizeof (struct mthca_data_seg) / 16;
+ }
+
+ qp->wrid[ind + qp->rq.max] = wr->wr_id;
+
+ if (wr->opcode >= ARRAY_SIZE(opcode)) {
+ mthca_err(dev, "opcode invalid\n");
+ err = -EINVAL;
+ *bad_wr = wr;
+ goto out;
+ }
+
+ if (prev_wqe) {
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32(((ind << qp->sq.wqe_shift) +
+ qp->send_wqe_offset) |
+ opcode[wr->opcode]);
+ smp_wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32((size0 ? 0 : MTHCA_NEXT_DBD) | size);
+ }
+
+ if (!size0) {
+ size0 = size;
+ op0 = opcode[wr->opcode];
+ }
+
+ ++ind;
+ if (unlikely(ind >= qp->sq.max))
+ ind -= qp->sq.max;
+ }
+
+out:
+ if (nreq) {
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32(((qp->sq.next << qp->sq.wqe_shift) +
+ qp->send_wqe_offset) | f0 | op0);
+ doorbell[1] = cpu_to_be32((qp->qpn << 8) | size0);
+
+ wmb();
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_SEND_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+ }
+
+ qp->sq.cur += nreq;
+ qp->sq.next = ind;
+
+ spin_unlock_irqrestore(&qp->lock, flags);
+ return err;
+}
+
+int mthca_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
+ struct ib_recv_wr **bad_wr)
+{
+ struct mthca_dev *dev = to_mdev(ibqp->device);
+ struct mthca_qp *qp = to_mqp(ibqp);
+ unsigned long flags;
+ int err = 0;
+ int nreq;
+ int i;
+ int size;
+ int size0 = 0;
+ int ind;
+ void *wqe;
+ void *prev_wqe;
+
+ spin_lock_irqsave(&qp->lock, flags);
+
+ /* XXX check that state is OK to post receive */
+
+ ind = qp->rq.next;
+
+ for (nreq = 0; wr; ++nreq, wr = wr->next) {
+ if (qp->rq.cur + nreq >= qp->rq.max) {
+ mthca_err(dev, "RQ %06x full\n", qp->qpn);
+ err = -ENOMEM;
+ *bad_wr = wr;
+ goto out;
+ }
+
+ wqe = get_recv_wqe(qp, ind);
+ prev_wqe = qp->rq.last;
+ qp->rq.last = wqe;
+
+ ((struct mthca_next_seg *) wqe)->nda_op = 0;
+ ((struct mthca_next_seg *) wqe)->ee_nds =
+ cpu_to_be32(MTHCA_NEXT_DBD);
+ ((struct mthca_next_seg *) wqe)->flags =
+ (wr->recv_flags & IB_RECV_SIGNALED) ?
+ cpu_to_be32(MTHCA_NEXT_CQ_UPDATE) : 0;
+
+ wqe += sizeof (struct mthca_next_seg);
+ size = sizeof (struct mthca_next_seg) / 16;
+
+ if (wr->num_sge > qp->rq.max_gs) {
+ err = -EINVAL;
+ *bad_wr = wr;
+ goto out;
+ }
+
+ for (i = 0; i < wr->num_sge; ++i) {
+ ((struct mthca_data_seg *) wqe)->byte_count =
+ cpu_to_be32(wr->sg_list[i].length);
+ ((struct mthca_data_seg *) wqe)->lkey =
+ cpu_to_be32(wr->sg_list[i].lkey);
+ ((struct mthca_data_seg *) wqe)->addr =
+ cpu_to_be64(wr->sg_list[i].addr);
+ wqe += sizeof (struct mthca_data_seg);
+ size += sizeof (struct mthca_data_seg) / 16;
+ }
+
+ qp->wrid[ind] = wr->wr_id;
+
+ if (prev_wqe) {
+ ((struct mthca_next_seg *) prev_wqe)->nda_op =
+ cpu_to_be32((ind << qp->rq.wqe_shift) | 1);
+ smp_wmb();
+ ((struct mthca_next_seg *) prev_wqe)->ee_nds =
+ cpu_to_be32(MTHCA_NEXT_DBD | size);
+ }
+
+ if (!size0)
+ size0 = size;
+
+ ++ind;
+ if (unlikely(ind >= qp->rq.max))
+ ind -= qp->rq.max;
+ }
+
+out:
+ if (nreq) {
+ u32 doorbell[2];
+
+ doorbell[0] = cpu_to_be32((qp->rq.next << qp->rq.wqe_shift) | size0);
+ doorbell[1] = cpu_to_be32((qp->qpn << 8) | nreq);
+
+ wmb();
+
+ mthca_write64(doorbell,
+ dev->kar + MTHCA_RECEIVE_DOORBELL,
+ MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
+ }
+
+ qp->rq.cur += nreq;
+ qp->rq.next = ind;
+
+ spin_unlock_irqrestore(&qp->lock, flags);
+ return err;
+}
+
+int mthca_free_err_wqe(struct mthca_qp *qp, int is_send,
+ int index, int *dbd, u32 *new_wqe)
+{
+ struct mthca_next_seg *next;
+
+ if (is_send)
+ next = get_send_wqe(qp, index);
+ else
+ next = get_recv_wqe(qp, index);
+
+ *dbd = !!(next->ee_nds & cpu_to_be32(MTHCA_NEXT_DBD));
+ if (next->ee_nds & cpu_to_be32(0x3f))
+ *new_wqe = (next->nda_op & cpu_to_be32(~0x3f)) |
+ (next->ee_nds & cpu_to_be32(0x3f));
+ else
+ *new_wqe = 0;
+
+ return 0;
+}
+
+int __devinit mthca_init_qp_table(struct mthca_dev *dev)
+{
+ int err;
+ u8 status;
+ int i;
+
+ spin_lock_init(&dev->qp_table.lock);
+
+ /*
+ * We reserve 2 extra QPs per port for the special QPs. The
+ * special QP for port 1 has to be even, so round up.
+ */
+ dev->qp_table.sqp_start = (dev->limits.reserved_qps + 1) & ~1UL;
+ err = mthca_alloc_init(&dev->qp_table.alloc,
+ dev->limits.num_qps,
+ (1 << 24) - 1,
+ dev->qp_table.sqp_start +
+ MTHCA_MAX_PORTS * 2);
+ if (err)
+ return err;
+
+ err = mthca_array_init(&dev->qp_table.qp,
+ dev->limits.num_qps);
+ if (err) {
+ mthca_alloc_cleanup(&dev->qp_table.alloc);
+ return err;
+ }
+
+ for (i = 0; i < 2; ++i) {
+ err = mthca_CONF_SPECIAL_QP(dev, i ? IB_QPT_GSI : IB_QPT_SMI,
+ dev->qp_table.sqp_start + i * 2,
+ &status);
+ if (err)
+ goto err_out;
+ if (status) {
+ mthca_warn(dev, "CONF_SPECIAL_QP returned "
+ "status %02x, aborting.\n",
+ status);
+ err = -EINVAL;
+ goto err_out;
+ }
+ }
+ return 0;
+
+ err_out:
+ for (i = 0; i < 2; ++i)
+ mthca_CONF_SPECIAL_QP(dev, i, 0, &status);
+
+ mthca_array_cleanup(&dev->qp_table.qp, dev->limits.num_qps);
+ mthca_alloc_cleanup(&dev->qp_table.alloc);
+
+ return err;
+}
+
+void __devexit mthca_cleanup_qp_table(struct mthca_dev *dev)
+{
+ int i;
+ u8 status;
+
+ for (i = 0; i < 2; ++i)
+ mthca_CONF_SPECIAL_QP(dev, i, 0, &status);
+
+ mthca_alloc_cleanup(&dev->qp_table.alloc);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: mthca_reset.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+
+#include "mthca_dev.h"
+#include "mthca_cmd.h"
+
+int mthca_reset(struct mthca_dev *mdev)
+{
+ int i;
+ int err = 0;
+ u32 *hca_header = NULL;
+ u32 *bridge_header = NULL;
+ struct pci_dev *bridge = NULL;
+
+#define MTHCA_RESET_OFFSET 0xf0010
+#define MTHCA_RESET_VALUE cpu_to_be32(1)
+
+ /*
+ * Reset the chip. This is somewhat ugly because we have to
+ * save off the PCI header before reset and then restore it
+ * after the chip reboots. We skip config space offsets 22
+ * and 23 since those have a special meaning.
+ *
+ * To make matters worse, for Tavor (PCI-X HCA) we have to
+ * find the associated bridge device and save off its PCI
+ * header as well.
+ */
+
+ if (mdev->hca_type == TAVOR) {
+ /* Look for the bridge -- its device ID will be 2 more
+ than HCA's device ID. */
+ while ((bridge = pci_get_device(mdev->pdev->vendor,
+ mdev->pdev->device + 2,
+ bridge)) != NULL) {
+ if (bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
+ bridge->subordinate == mdev->pdev->bus) {
+ mthca_dbg(mdev, "Found bridge: %s (%s)\n",
+ pci_pretty_name(bridge), pci_name(bridge));
+ break;
+ }
+ }
+
+ if (!bridge) {
+ /*
+ * Didn't find a bridge for a Tavor device --
+ * assume we're in no-bridge mode and hope for
+ * the best.
+ */
+ mthca_warn(mdev, "No bridge found for %s (%s)\n",
+ pci_pretty_name(mdev->pdev), pci_name(mdev->pdev));
+ }
+
+ }
+
+ /* For Arbel do we need to save off the full 4K PCI Express header?? */
+ hca_header = kmalloc(256, GFP_KERNEL);
+ if (!hca_header) {
+ err = -ENOMEM;
+ mthca_err(mdev, "Couldn't allocate memory to save HCA "
+ "PCI header, aborting.\n");
+ goto out;
+ }
+
+ for (i = 0; i < 64; ++i) {
+ if (i == 22 || i == 23)
+ continue;
+ if (pci_read_config_dword(mdev->pdev, i * 4, hca_header + i)) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't save HCA "
+ "PCI header, aborting.\n");
+ goto out;
+ }
+ }
+
+ if (bridge) {
+ bridge_header = kmalloc(256, GFP_KERNEL);
+ if (!bridge_header) {
+ err = -ENOMEM;
+ mthca_err(mdev, "Couldn't allocate memory to save HCA "
+ "bridge PCI header, aborting.\n");
+ goto out;
+ }
+
+ for (i = 0; i < 64; ++i) {
+ if (i == 22 || i == 23)
+ continue;
+ if (pci_read_config_dword(bridge, i * 4, bridge_header + i)) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't save HCA bridge "
+ "PCI header, aborting.\n");
+ goto out;
+ }
+ }
+ }
+
+ /* actually hit reset */
+ {
+ void __iomem *reset = ioremap(pci_resource_start(mdev->pdev, 0) +
+ MTHCA_RESET_OFFSET, 4);
+
+ if (!reset) {
+ err = -ENOMEM;
+ mthca_err(mdev, "Couldn't map HCA reset register, "
+ "aborting.\n");
+ goto out;
+ }
+
+ writel(MTHCA_RESET_VALUE, reset);
+ iounmap(reset);
+ }
+
+ /* Docs say to wait one second before accessing device */
+ msleep(1000);
+
+ /* Now wait for PCI device to start responding again */
+ {
+ u32 v;
+ int c = 0;
+
+ for (c = 0; c < 100; ++c) {
+ if (pci_read_config_dword(bridge ? bridge : mdev->pdev, 0, &v)) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't access HCA after reset, "
+ "aborting.\n");
+ goto out;
+ }
+
+ if (v != 0xffffffff)
+ goto good;
+
+ msleep(100);
+ }
+
+ err = -ENODEV;
+ mthca_err(mdev, "PCI device did not come back after reset, "
+ "aborting.\n");
+ goto out;
+ }
+
+good:
+ /* Now restore the PCI headers */
+ if (bridge) {
+ /*
+ * Bridge control register is at 0x3e, so we'll
+ * naturally restore it last in this loop.
+ */
+ for (i = 0; i < 16; ++i) {
+ if (i * 4 == PCI_COMMAND)
+ continue;
+
+ if (pci_write_config_dword(bridge, i * 4, bridge_header[i])) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't restore HCA bridge reg %x, "
+ "aborting.\n", i);
+ goto out;
+ }
+ }
+
+ if (pci_write_config_dword(bridge, PCI_COMMAND,
+ bridge_header[PCI_COMMAND / 4])) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't restore HCA bridge COMMAND, "
+ "aborting.\n");
+ goto out;
+ }
+ }
+
+ for (i = 0; i < 16; ++i) {
+ if (i * 4 == PCI_COMMAND)
+ continue;
+
+ if (pci_write_config_dword(mdev->pdev, i * 4, hca_header[i])) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't restore HCA reg %x, "
+ "aborting.\n", i);
+ goto out;
+ }
+ }
+
+ if (pci_write_config_dword(mdev->pdev, PCI_COMMAND,
+ hca_header[PCI_COMMAND / 4])) {
+ err = -ENODEV;
+ mthca_err(mdev, "Couldn't restore HCA COMMAND, "
+ "aborting.\n");
+ goto out;
+ }
+
+out:
+ if (bridge)
+ pci_dev_put(bridge);
+ kfree(bridge_header);
+ kfree(hca_header);
+
+ return err;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_cache.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#ifndef _IB_CACHE_H
+#define _IB_CACHE_H
+
+#include <ib_verbs.h>
+
+/**
+ * ib_get_cached_gid - Returns a cached GID table entry
+ * @device: The device to query.
+ * @port_num: The port number of the device to query.
+ * @index: The index into the cached GID table to query.
+ * @gid: The GID value found at the specified index.
+ *
+ * ib_get_cached_gid() fetches the specified GID table entry stored in
+ * the local software cache.
+ */
+int ib_get_cached_gid(struct ib_device *device,
+ u8 port_num,
+ int index,
+ union ib_gid *gid);
+
+/**
+ * ib_find_cached_gid - Returns the port number and GID table index where
+ * a specified GID value occurs.
+ * @device: The device to query.
+ * @gid: The GID value to search for.
+ * @port_num: The port number of the device where the GID value was found.
+ * @index: The index into the cached GID table where the GID was found. This
+ * parameter may be NULL.
+ *
+ * ib_find_cached_gid() searches for the specified GID value in
+ * the local software cache.
+ */
+int ib_find_cached_gid(struct ib_device *device,
+ union ib_gid *gid,
+ u8 *port_num,
+ u16 *index);
+
+/**
+ * ib_get_cached_pkey - Returns a cached PKey table entry
+ * @device: The device to query.
+ * @port_num: The port number of the device to query.
+ * @index: The index into the cached PKey table to query.
+ * @pkey: The PKey value found at the specified index.
+ *
+ * ib_get_cached_pkey() fetches the specified PKey table entry stored in
+ * the local software cache.
+ */
+int ib_get_cached_pkey(struct ib_device *device_handle,
+ u8 port_num,
+ int index,
+ u16 *pkey);
+
+/**
+ * ib_find_cached_pkey - Returns the PKey table index where a specified
+ * PKey value occurs.
+ * @device: The device to query.
+ * @port_num: The port number of the device to search for the PKey.
+ * @pkey: The PKey value to search for.
+ * @index: The index into the cached PKey table where the PKey was found.
+ *
+ * ib_find_cached_pkey() searches the specified PKey table in
+ * the local software cache.
+ */
+int ib_find_cached_pkey(struct ib_device *device,
+ u8 port_num,
+ u16 pkey,
+ u16 *index);
+
+#endif /* _IB_CACHE_H */
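
The kernel-doc comments above describe the GID/P_Key cache interface that consumers such as the mthca MLX header builder rely on (it fetches the P_Key for QP0/QP1 sends the same way). The following is a rough, hypothetical caller sketch, assuming only the declarations in this header plus an already-registered device; the function name and the use of table index 0 are illustrative, not taken from this patch.

#include <linux/kernel.h>
#include <ib_verbs.h>
#include <ib_cache.h>

/* Illustrative helper (not part of the patch): read P_Key and GID table
 * entry 0 for one port from the local software cache. */
static int example_query_cache(struct ib_device *device, u8 port)
{
	union ib_gid gid;
	u16 pkey;
	int err;

	/* P_Key entry 0 from the cached copy of the port's P_Key table. */
	err = ib_get_cached_pkey(device, port, 0, &pkey);
	if (err)
		return err;

	/* GID entry 0 from the cached copy of the port's GID table. */
	err = ib_get_cached_gid(device, port, 0, &gid);
	if (err)
		return err;

	printk(KERN_INFO "port %d: pkey[0] = 0x%04x\n", port, pkey);
	return 0;
}

Because both calls read the local software cache rather than issuing a subnet query, they are cheap enough to use on a send path, which is exactly how the MLX header construction above uses the P_Key cache.
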
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_fmr_pool.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#if !defined(IB_FMR_POOL_H)
+#define IB_FMR_POOL_H
+
+#include <ib_verbs.h>
+
+struct ib_fmr_pool;
+
+/**
+ * struct ib_fmr_pool_param - Parameters for creating FMR pool
+ * @max_pages_per_fmr:Maximum number of pages per map request.
+ * @access:Access flags for FMRs in pool.
+ * @pool_size:Number of FMRs to allocate for pool.
+ * @dirty_watermark:Flush is triggered when @dirty_watermark dirty
+ * FMRs are present.
+ * @flush_function:Callback called when unmapped FMRs are flushed and
+ * more FMRs are possibly available for mapping
+ * @flush_arg:Context passed to user's flush function.
+ * @cache:If set, FMRs may be reused after unmapping for identical map
+ * requests.
+ */
+struct ib_fmr_pool_param {
+ int max_pages_per_fmr;
+ enum ib_access_flags access;
+ int pool_size;
+ int dirty_watermark;
+ void (*flush_function)(struct ib_fmr_pool *pool,
+ void * arg);
+ void *flush_arg;
+ unsigned cache:1;
+};
+
+struct ib_pool_fmr {
+ struct ib_fmr *fmr;
+ struct ib_fmr_pool *pool;
+ struct list_head list;
+ struct hlist_node cache_node;
+ int ref_count;
+ int remap_count;
+ u64 io_virtual_address;
+ int page_list_len;
+ u64 page_list[0];
+};
+
+struct ib_fmr_pool *ib_create_fmr_pool(struct ib_pd *pd,
+ struct ib_fmr_pool_param *params);
+
+int ib_destroy_fmr_pool(struct ib_fmr_pool *pool);
+
+int ib_flush_fmr_pool(struct ib_fmr_pool *pool);
+
+struct ib_pool_fmr *ib_fmr_pool_map_phys(struct ib_fmr_pool *pool_handle,
+ u64 *page_list,
+ int list_len,
+ u64 *io_virtual_address);
+
+int ib_fmr_pool_unmap(struct ib_pool_fmr *fmr);
+
+#endif /* IB_FMR_POOL_H */
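
The declarations above only define the pool interface; a minimal usage sketch follows. Everything in it is illustrative: the pool sizes, the access flags, the placeholder page list, and the assumption that creation and mapping failures are reported as ERR_PTR values are the editor's, not code from this patch.

#include <linux/err.h>
#include <ib_verbs.h>
#include <ib_fmr_pool.h>

/* Illustrative sketch (not part of the patch): create a small cached FMR
 * pool on an existing PD, map a caller-supplied page list through it, then
 * tear everything down. */
static int example_fmr_pool_usage(struct ib_pd *pd, u64 *page_list, int npages)
{
	struct ib_fmr_pool_param params = {
		.max_pages_per_fmr = 64,
		.access            = IB_ACCESS_LOCAL_WRITE |
				     IB_ACCESS_REMOTE_READ |
				     IB_ACCESS_REMOTE_WRITE,
		.pool_size         = 32,
		.dirty_watermark   = 8,
		.cache             = 1,	/* allow remapping of identical requests */
	};
	struct ib_fmr_pool *pool;
	struct ib_pool_fmr *fmr;
	u64 io_addr = 0;	/* requested I/O virtual address (assumption) */

	pool = ib_create_fmr_pool(pd, &params);
	if (IS_ERR(pool))	/* ERR_PTR error convention assumed */
		return PTR_ERR(pool);

	fmr = ib_fmr_pool_map_phys(pool, page_list, npages, &io_addr);
	if (IS_ERR(fmr)) {
		ib_destroy_fmr_pool(pool);
		return PTR_ERR(fmr);
	}

	/* ... post work requests referencing the mapped region here ... */

	ib_fmr_pool_unmap(fmr);
	ib_destroy_fmr_pool(pool);
	return 0;
}

The dirty watermark and flush_function exist so that unmapping can be deferred and batched; ib_flush_fmr_pool() can be called explicitly when a consumer needs all unmapped FMRs invalidated immediately.
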
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_mad.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#if !defined( IB_MAD_H )
+#define IB_MAD_H
+
+#include <ib_verbs.h>
+
+/* Management base version */
+#define IB_MGMT_BASE_VERSION 1
+
+/* Management classes */
+#define IB_MGMT_CLASS_SUBN_LID_ROUTED 0x01
+#define IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE 0x81
+#define IB_MGMT_CLASS_SUBN_ADM 0x03
+#define IB_MGMT_CLASS_PERF_MGMT 0x04
+#define IB_MGMT_CLASS_BM 0x05
+#define IB_MGMT_CLASS_DEVICE_MGMT 0x06
+#define IB_MGMT_CLASS_CM 0x07
+#define IB_MGMT_CLASS_SNMP 0x08
+#define IB_MGMT_CLASS_VENDOR_RANGE2_START 0x30
+#define IB_MGMT_CLASS_VENDOR_RANGE2_END 0x4F
+
+/* Management methods */
+#define IB_MGMT_METHOD_GET 0x01
+#define IB_MGMT_METHOD_SET 0x02
+#define IB_MGMT_METHOD_GET_RESP 0x81
+#define IB_MGMT_METHOD_SEND 0x03
+#define IB_MGMT_METHOD_TRAP 0x05
+#define IB_MGMT_METHOD_REPORT 0x06
+#define IB_MGMT_METHOD_REPORT_RESP 0x86
+#define IB_MGMT_METHOD_TRAP_REPRESS 0x07
+
+#define IB_MGMT_METHOD_RESP 0x80
+
+#define IB_MGMT_MAX_METHODS 128
+
+#define IB_QP0 0
+#define IB_QP1 __constant_htonl(1)
+#define IB_QP1_QKEY 0x80010000
+
+struct ib_grh {
+ u32 version_tclass_flow;
+ u16 paylen;
+ u8 next_hdr;
+ u8 hop_limit;
+ union ib_gid sgid;
+ union ib_gid dgid;
+} __attribute__ ((packed));
+
+struct ib_mad_hdr {
+ u8 base_version;
+ u8 mgmt_class;
+ u8 class_version;
+ u8 method;
+ u16 status;
+ u16 class_specific;
+ u64 tid;
+ u16 attr_id;
+ u16 resv;
+ u32 attr_mod;
+} __attribute__ ((packed));
+
+struct ib_rmpp_hdr {
+ u8 rmpp_version;
+ u8 rmpp_type;
+ u8 rmpp_rtime_flags;
+ u8 rmpp_status;
+ u32 seg_num;
+ u32 paylen_newwin;
+} __attribute__ ((packed));
+
+struct ib_mad {
+ struct ib_mad_hdr mad_hdr;
+ u8 data[232];
+} __attribute__ ((packed));
+
+struct ib_rmpp_mad {
+ struct ib_mad_hdr mad_hdr;
+ struct ib_rmpp_hdr rmpp_hdr;
+ u8 data[220];
+} __attribute__ ((packed));
+
+struct ib_vendor_mad {
+ struct ib_mad_hdr mad_hdr;
+ struct ib_rmpp_hdr rmpp_hdr;
+ u8 reserved;
+ u8 oui[3];
+ u8 data[216];
+} __attribute__ ((packed));
+
+struct ib_mad_agent;
+struct ib_mad_send_wc;
+struct ib_mad_recv_wc;
+
+/**
+ * ib_mad_send_handler - callback handler for a sent MAD.
+ * @mad_agent: MAD agent that sent the MAD.
+ * @mad_send_wc: Send work completion information on the sent MAD.
+ */
+typedef void (*ib_mad_send_handler)(struct ib_mad_agent *mad_agent,
+ struct ib_mad_send_wc *mad_send_wc);
+
+/**
+ * ib_mad_snoop_handler - Callback handler for snooping sent MADs.
+ * @mad_agent: MAD agent that snooped the MAD.
+ * @send_wr: Work request information on the sent MAD.
+ * @mad_send_wc: Work completion information on the sent MAD. Valid
+ * only for snooping that occurs on a send completion.
+ *
+ * Clients snooping MADs should not modify data referenced by the @send_wr
+ * or @mad_send_wc.
+ */
+typedef void (*ib_mad_snoop_handler)(struct ib_mad_agent *mad_agent,
+ struct ib_send_wr *send_wr,
+ struct ib_mad_send_wc *mad_send_wc);
+
+/**
+ * ib_mad_recv_handler - callback handler for a received MAD.
+ * @mad_agent: MAD agent requesting the received MAD.
+ * @mad_recv_wc: Received work completion information on the received MAD.
+ *
+ * MADs received in response to a send request operation will be handed to
+ * the user after the send operation completes. All data buffers given
+ * to registered agents through this routine are owned by the receiving
+ * client, except for snooping agents. Clients snooping MADs should not
+ * modify the data referenced by @mad_recv_wc.
+ */
+typedef void (*ib_mad_recv_handler)(struct ib_mad_agent *mad_agent,
+ struct ib_mad_recv_wc *mad_recv_wc);
+
+/**
+ * ib_mad_agent - Used to track MAD registration with the access layer.
+ * @device: Reference to device registration is on.
+ * @qp: Reference to QP used for sending and receiving MADs.
+ * @recv_handler: Callback handler for a received MAD.
+ * @send_handler: Callback handler for a sent MAD.
+ * @snoop_handler: Callback handler for snooped sent MADs.
+ * @context: User-specified context associated with this registration.
+ * @hi_tid: Access layer assigned transaction ID for this client.
+ * Unsolicited MADs sent by this client will have the upper 32-bits
+ * of their TID set to this value.
+ * @port_num: Port number on which QP is registered
+ */
+struct ib_mad_agent {
+ struct ib_device *device;
+ struct ib_qp *qp;
+ ib_mad_recv_handler recv_handler;
+ ib_mad_send_handler send_handler;
+ ib_mad_snoop_handler snoop_handler;
+ void *context;
+ u32 hi_tid;
+ u8 port_num;
+};
+
+/**
+ * ib_mad_send_wc - MAD send completion information.
+ * @wr_id: Work request identifier associated with the send MAD request.
+ * @status: Completion status.
+ * @vendor_err: Optional vendor error information returned with a failed
+ * request.
+ */
+struct ib_mad_send_wc {
+ u64 wr_id;
+ enum ib_wc_status status;
+ u32 vendor_err;
+};
+
+/**
+ * ib_mad_recv_buf - received MAD buffer information.
+ * @list: Reference to next data buffer for a received RMPP MAD.
+ * @grh: References a data buffer containing the global route header.
+ * The data referenced by this buffer is only valid if the GRH is
+ * valid.
+ * @mad: References the start of the received MAD.
+ */
+struct ib_mad_recv_buf {
+ struct list_head list;
+ struct ib_grh *grh;
+ struct ib_mad *mad;
+};
+
+/**
+ * ib_mad_recv_wc - received MAD information.
+ * @wc: Completion information for the received data.
+ * @recv_buf: Specifies the location of the received data buffer(s).
+ * @mad_len: The length of the received MAD, without duplicated headers.
+ *
+ * For a received response, the wr_id field of the wc is set to the wr_id
+ * for the corresponding send request.
+ */
+struct ib_mad_recv_wc {
+ struct ib_wc *wc;
+ struct ib_mad_recv_buf recv_buf;
+ int mad_len;
+};
+
+/**
+ * ib_mad_reg_req - MAD registration request
+ * @mgmt_class: Indicates which management class of MADs should be received
+ * by the caller. This field is only required if the user wishes to
+ * receive unsolicited MADs, otherwise it should be 0.
+ * @mgmt_class_version: Indicates which version of MADs for the given
+ * management class to receive.
+ * @oui: Indicates IEEE OUI when mgmt_class is a vendor class
+ * in the range from 0x30 to 0x4f. Otherwise not used.
+ * @method_mask: The caller will receive unsolicited MADs for any method
+ * where @method_mask = 1.
+ */
+struct ib_mad_reg_req {
+ u8 mgmt_class;
+ u8 mgmt_class_version;
+ u8 oui[3];
+ DECLARE_BITMAP(method_mask, IB_MGMT_MAX_METHODS);
+};
+
+/**
+ * ib_register_mad_agent - Register to send/receive MADs.
+ * @device: The device to register with.
+ * @port_num: The port on the specified device to use.
+ * @qp_type: Specifies which QP to access. Must be either
+ * IB_QPT_SMI or IB_QPT_GSI.
+ * @mad_reg_req: Specifies which unsolicited MADs should be received
+ * by the caller. This parameter may be NULL if the caller only
+ * wishes to receive solicited responses.
+ * @rmpp_version: If set, indicates that the client will send
+ * and receive MADs that contain the RMPP header for the given version.
+ * If set to 0, indicates that RMPP is not used by this client.
+ * @send_handler: The completion callback routine invoked after a send
+ * request has completed.
+ * @recv_handler: The completion callback routine invoked for a received
+ * MAD.
+ * @context: User specified context associated with the registration.
+ */
+struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
+ u8 port_num,
+ enum ib_qp_type qp_type,
+ struct ib_mad_reg_req *mad_reg_req,
+ u8 rmpp_version,
+ ib_mad_send_handler send_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context);
+
+enum ib_mad_snoop_flags {
+ /*IB_MAD_SNOOP_POSTED_SENDS = 1,*/
+ /*IB_MAD_SNOOP_RMPP_SENDS = (1<<1),*/
+ IB_MAD_SNOOP_SEND_COMPLETIONS = (1<<2),
+ /*IB_MAD_SNOOP_RMPP_SEND_COMPLETIONS = (1<<3),*/
+ IB_MAD_SNOOP_RECVS = (1<<4)
+ /*IB_MAD_SNOOP_RMPP_RECVS = (1<<5),*/
+ /*IB_MAD_SNOOP_REDIRECTED_QPS = (1<<6)*/
+};
+
+/**
+ * ib_register_mad_snoop - Register to snoop sent and received MADs.
+ * @device: The device to register with.
+ * @port_num: The port on the specified device to use.
+ * @qp_type: Specifies which QP traffic to snoop. Must be either
+ * IB_QPT_SMI or IB_QPT_GSI.
+ * @mad_snoop_flags: Specifies information where snooping occurs.
+ * @send_handler: The callback routine invoked for a snooped send.
+ * @recv_handler: The callback routine invoked for a snooped receive.
+ * @context: User specified context associated with the registration.
+ */
+struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
+ u8 port_num,
+ enum ib_qp_type qp_type,
+ int mad_snoop_flags,
+ ib_mad_snoop_handler snoop_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context);
+
+/**
+ * ib_unregister_mad_agent - Unregisters a client from using MAD services.
+ * @mad_agent: Corresponding MAD registration request to deregister.
+ *
+ * After invoking this routine, MAD services are no longer usable by the
+ * client on the associated QP.
+ */
+int ib_unregister_mad_agent(struct ib_mad_agent *mad_agent);
+
+/**
+ * ib_post_send_mad - Posts MAD(s) to the send queue of the QP associated
+ * with the registered client.
+ * @mad_agent: Specifies the associated registration to post the send to.
+ * @send_wr: Specifies the information needed to send the MAD(s).
+ * @bad_send_wr: Specifies the MAD on which an error was encountered.
+ *
+ * Sent MADs are not guaranteed to complete in the order that they were posted.
+ */
+int ib_post_send_mad(struct ib_mad_agent *mad_agent,
+ struct ib_send_wr *send_wr,
+ struct ib_send_wr **bad_send_wr);
+
+/**
+ * ib_coalesce_recv_mad - Coalesces received MAD data into a single buffer.
+ * @mad_recv_wc: Work completion information for a received MAD.
+ * @buf: User-provided data buffer to receive the coalesced buffers. The
+ * referenced buffer should be at least the size of the mad_len specified
+ * by @mad_recv_wc.
+ *
+ * This call copies a chain of received RMPP MADs into a single data buffer,
+ * removing duplicated headers.
+ */
+void ib_coalesce_recv_mad(struct ib_mad_recv_wc *mad_recv_wc,
+ void *buf);
+
+/**
+ * ib_free_recv_mad - Returns data buffers used to receive a MAD to the
+ * access layer.
+ * @mad_recv_wc: Work completion information for a received MAD.
+ *
+ * Clients receiving MADs through their ib_mad_recv_handler must call this
+ * routine to return the work completion buffers to the access layer.
+ */
+void ib_free_recv_mad(struct ib_mad_recv_wc *mad_recv_wc);
+
+/**
+ * ib_cancel_mad - Cancels an outstanding send MAD operation.
+ * @mad_agent: Specifies the registration associated with sent MAD.
+ * @wr_id: Indicates the work request identifier of the MAD to cancel.
+ *
+ * MADs will be returned to the user through the corresponding
+ * ib_mad_send_handler.
+ */
+void ib_cancel_mad(struct ib_mad_agent *mad_agent,
+ u64 wr_id);
+
+/**
+ * ib_redirect_mad_qp - Registers a QP for MAD services.
+ * @qp: Reference to a QP that requires MAD services.
+ * @rmpp_version: If set, indicates that the client will send
+ * and receive MADs that contain the RMPP header for the given version.
+ * If set to 0, indicates that RMPP is not used by this client.
+ * @send_handler: The completion callback routine invoked after a send
+ * request has completed.
+ * @recv_handler: The completion callback routine invoked for a received
+ * MAD.
+ * @context: User specified context associated with the registration.
+ *
+ * Use of this call allows clients to use MAD services, such as RMPP,
+ * on user-owned QPs. After calling this routine, users may send
+ * MADs on the specified QP by calling ib_mad_post_send.
+ */
+struct ib_mad_agent *ib_redirect_mad_qp(struct ib_qp *qp,
+ u8 rmpp_version,
+ ib_mad_send_handler send_handler,
+ ib_mad_recv_handler recv_handler,
+ void *context);
+
+/**
+ * ib_process_mad_wc - Processes a work completion associated with a
+ * MAD sent or received on a redirected QP.
+ * @mad_agent: Specifies the registered MAD service using the redirected QP.
+ * @wc: References a work completion associated with a sent or received
+ * MAD segment.
+ *
+ * This routine is used to complete or continue processing on a MAD request.
+ * If the work completion is associated with a send operation, calling
+ * this routine is required to continue an RMPP transfer or to wait for a
+ * corresponding response, if it is a request. If the work completion is
+ * associated with a receive operation, calling this routine is required to
+ * process an inbound or outbound RMPP transfer, or to match a response MAD
+ * with its corresponding request.
+ */
+int ib_process_mad_wc(struct ib_mad_agent *mad_agent,
+ struct ib_wc *wc);
+
+#endif /* IB_MAD_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_pack.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#ifndef IB_PACK_H
+#define IB_PACK_H
+
+#include <ib_verbs.h>
+
+enum {
+ IB_LRH_BYTES = 8,
+ IB_GRH_BYTES = 40,
+ IB_BTH_BYTES = 12,
+ IB_DETH_BYTES = 8
+};
+
+struct ib_field {
+ size_t struct_offset_bytes;
+ size_t struct_size_bytes;
+ int offset_words;
+ int offset_bits;
+ int size_bits;
+ char *field_name;
+};
+
+#define RESERVED \
+ .field_name = "reserved"
+
+/*
+ * This macro cleans up the definitions of constants for BTH opcodes.
+ * It is used to define constants such as IB_OPCODE_UD_SEND_ONLY,
+ * which becomes IB_OPCODE_UD + IB_OPCODE_SEND_ONLY, and this gives
+ * the correct value.
+ *
+ * In short, user code should use the constants defined using the
+ * macro rather than worrying about adding together other constants.
+ */
+#define IB_OPCODE(transport, op) \
+ IB_OPCODE_ ## transport ## _ ## op = \
+ IB_OPCODE_ ## transport + IB_OPCODE_ ## op
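+
+/*
+ * For example, IB_OPCODE(UD, SEND_ONLY) expands to
+ *
+ *	IB_OPCODE_UD_SEND_ONLY = IB_OPCODE_UD + IB_OPCODE_SEND_ONLY
+ *			       = 0x60 + 0x04
+ *			       = 0x64
+ *
+ * which is the BTH opcode carried on the wire for a UD send-only packet.
+ */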
+
+enum {
+ /* transport types -- just used to define real constants */
+ IB_OPCODE_RC = 0x00,
+ IB_OPCODE_UC = 0x20,
+ IB_OPCODE_RD = 0x40,
+ IB_OPCODE_UD = 0x60,
+
+ /* operations -- just used to define real constants */
+ IB_OPCODE_SEND_FIRST = 0x00,
+ IB_OPCODE_SEND_MIDDLE = 0x01,
+ IB_OPCODE_SEND_LAST = 0x02,
+ IB_OPCODE_SEND_LAST_WITH_IMMEDIATE = 0x03,
+ IB_OPCODE_SEND_ONLY = 0x04,
+ IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE = 0x05,
+ IB_OPCODE_RDMA_WRITE_FIRST = 0x06,
+ IB_OPCODE_RDMA_WRITE_MIDDLE = 0x07,
+ IB_OPCODE_RDMA_WRITE_LAST = 0x08,
+ IB_OPCODE_RDMA_WRITE_LAST_WITH_IMMEDIATE = 0x09,
+ IB_OPCODE_RDMA_WRITE_ONLY = 0x0a,
+ IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE = 0x0b,
+ IB_OPCODE_RDMA_READ_REQUEST = 0x0c,
+ IB_OPCODE_RDMA_READ_RESPONSE_FIRST = 0x0d,
+ IB_OPCODE_RDMA_READ_RESPONSE_MIDDLE = 0x0e,
+ IB_OPCODE_RDMA_READ_RESPONSE_LAST = 0x0f,
+ IB_OPCODE_RDMA_READ_RESPONSE_ONLY = 0x10,
+ IB_OPCODE_ACKNOWLEDGE = 0x11,
+ IB_OPCODE_ATOMIC_ACKNOWLEDGE = 0x12,
+ IB_OPCODE_COMPARE_SWAP = 0x13,
+ IB_OPCODE_FETCH_ADD = 0x14,
+
+	/* real constants follow -- see the comment above the IB_OPCODE()
+	   macro for more details */
+
+ /* RC */
+ IB_OPCODE(RC, SEND_FIRST),
+ IB_OPCODE(RC, SEND_MIDDLE),
+ IB_OPCODE(RC, SEND_LAST),
+ IB_OPCODE(RC, SEND_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(RC, SEND_ONLY),
+ IB_OPCODE(RC, SEND_ONLY_WITH_IMMEDIATE),
+ IB_OPCODE(RC, RDMA_WRITE_FIRST),
+ IB_OPCODE(RC, RDMA_WRITE_MIDDLE),
+ IB_OPCODE(RC, RDMA_WRITE_LAST),
+ IB_OPCODE(RC, RDMA_WRITE_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(RC, RDMA_WRITE_ONLY),
+ IB_OPCODE(RC, RDMA_WRITE_ONLY_WITH_IMMEDIATE),
+ IB_OPCODE(RC, RDMA_READ_REQUEST),
+ IB_OPCODE(RC, RDMA_READ_RESPONSE_FIRST),
+ IB_OPCODE(RC, RDMA_READ_RESPONSE_MIDDLE),
+ IB_OPCODE(RC, RDMA_READ_RESPONSE_LAST),
+ IB_OPCODE(RC, RDMA_READ_RESPONSE_ONLY),
+ IB_OPCODE(RC, ACKNOWLEDGE),
+ IB_OPCODE(RC, ATOMIC_ACKNOWLEDGE),
+ IB_OPCODE(RC, COMPARE_SWAP),
+ IB_OPCODE(RC, FETCH_ADD),
+
+ /* UC */
+ IB_OPCODE(UC, SEND_FIRST),
+ IB_OPCODE(UC, SEND_MIDDLE),
+ IB_OPCODE(UC, SEND_LAST),
+ IB_OPCODE(UC, SEND_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(UC, SEND_ONLY),
+ IB_OPCODE(UC, SEND_ONLY_WITH_IMMEDIATE),
+ IB_OPCODE(UC, RDMA_WRITE_FIRST),
+ IB_OPCODE(UC, RDMA_WRITE_MIDDLE),
+ IB_OPCODE(UC, RDMA_WRITE_LAST),
+ IB_OPCODE(UC, RDMA_WRITE_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(UC, RDMA_WRITE_ONLY),
+ IB_OPCODE(UC, RDMA_WRITE_ONLY_WITH_IMMEDIATE),
+
+ /* RD */
+ IB_OPCODE(RD, SEND_FIRST),
+ IB_OPCODE(RD, SEND_MIDDLE),
+ IB_OPCODE(RD, SEND_LAST),
+ IB_OPCODE(RD, SEND_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(RD, SEND_ONLY),
+ IB_OPCODE(RD, SEND_ONLY_WITH_IMMEDIATE),
+ IB_OPCODE(RD, RDMA_WRITE_FIRST),
+ IB_OPCODE(RD, RDMA_WRITE_MIDDLE),
+ IB_OPCODE(RD, RDMA_WRITE_LAST),
+ IB_OPCODE(RD, RDMA_WRITE_LAST_WITH_IMMEDIATE),
+ IB_OPCODE(RD, RDMA_WRITE_ONLY),
+ IB_OPCODE(RD, RDMA_WRITE_ONLY_WITH_IMMEDIATE),
+ IB_OPCODE(RD, RDMA_READ_REQUEST),
+ IB_OPCODE(RD, RDMA_READ_RESPONSE_FIRST),
+ IB_OPCODE(RD, RDMA_READ_RESPONSE_MIDDLE),
+ IB_OPCODE(RD, RDMA_READ_RESPONSE_LAST),
+ IB_OPCODE(RD, RDMA_READ_RESPONSE_ONLY),
+ IB_OPCODE(RD, ACKNOWLEDGE),
+ IB_OPCODE(RD, ATOMIC_ACKNOWLEDGE),
+ IB_OPCODE(RD, COMPARE_SWAP),
+ IB_OPCODE(RD, FETCH_ADD),
+
+ /* UD */
+ IB_OPCODE(UD, SEND_ONLY),
+ IB_OPCODE(UD, SEND_ONLY_WITH_IMMEDIATE)
+};
+
+enum {
+ IB_LNH_RAW = 0,
+ IB_LNH_IP = 1,
+ IB_LNH_IBA_LOCAL = 2,
+ IB_LNH_IBA_GLOBAL = 3
+};
+
+struct ib_unpacked_lrh {
+ u8 virtual_lane;
+ u8 link_version;
+ u8 service_level;
+ u8 link_next_header;
+ __be16 destination_lid;
+ __be16 packet_length;
+ __be16 source_lid;
+};
+
+struct ib_unpacked_grh {
+ u8 ip_version;
+ u8 traffic_class;
+ __be32 flow_label;
+ __be16 payload_length;
+ u8 next_header;
+ u8 hop_limit;
+ union ib_gid source_gid;
+ union ib_gid destination_gid;
+};
+
+struct ib_unpacked_bth {
+ u8 opcode;
+ u8 solicited_event;
+ u8 mig_req;
+ u8 pad_count;
+ u8 transport_header_version;
+ __be16 pkey;
+ __be32 destination_qpn;
+ u8 ack_req;
+ __be32 psn;
+};
+
+struct ib_unpacked_deth {
+ __be32 qkey;
+ __be32 source_qpn;
+};
+
+struct ib_ud_header {
+ struct ib_unpacked_lrh lrh;
+ int grh_present;
+ struct ib_unpacked_grh grh;
+ struct ib_unpacked_bth bth;
+ struct ib_unpacked_deth deth;
+ int immediate_present;
+ __be32 immediate_data;
+};
+
+void ib_pack(const struct ib_field *desc,
+ int desc_len,
+ void *structure,
+ void *buf);
+
+void ib_unpack(const struct ib_field *desc,
+ int desc_len,
+ void *buf,
+ void *structure);
+
+void ib_ud_header_init(int payload_bytes,
+ int grh_present,
+ struct ib_ud_header *header);
+
+int ib_ud_header_pack(struct ib_ud_header *header,
+ void *buf);
+
+int ib_ud_header_unpack(void *buf,
+ struct ib_ud_header *header);
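+
+/*
+ * Illustrative sketch of building a UD packet header: fill in the
+ * unpacked fields and pack them into wire format. The dlid, qkey,
+ * payload_len and buf values are placeholders supplied by the caller:
+ *
+ *	struct ib_ud_header header;
+ *	int len;
+ *
+ *	ib_ud_header_init(payload_len, 0, &header);
+ *	header.lrh.destination_lid = cpu_to_be16(dlid);
+ *	header.bth.opcode          = IB_OPCODE_UD_SEND_ONLY;
+ *	header.deth.qkey           = cpu_to_be32(qkey);
+ *	len = ib_ud_header_pack(&header, buf);
+ */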
+
+#endif /* IB_PACK_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_sa.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#ifndef IB_SA_H
+#define IB_SA_H
+
+#include <linux/compiler.h>
+
+#include <ib_verbs.h>
+#include <ib_mad.h>
+
+enum {
+ IB_SA_CLASS_VERSION = 2, /* IB spec version 1.1/1.2 */
+
+ IB_SA_METHOD_DELETE = 0x15
+};
+
+enum ib_sa_selector {
+ IB_SA_GTE = 0,
+ IB_SA_LTE = 1,
+ IB_SA_EQ = 2,
+ /*
+ * The meaning of "best" depends on the attribute: for
+ * example, for MTU best will return the largest available
+ * MTU, while for packet life time, best will return the
+ * smallest available life time.
+ */
+ IB_SA_BEST = 3
+};
+
+enum ib_sa_rate {
+ IB_SA_RATE_2_5_GBPS = 2,
+ IB_SA_RATE_5_GBPS = 5,
+ IB_SA_RATE_10_GBPS = 3,
+ IB_SA_RATE_20_GBPS = 6,
+ IB_SA_RATE_30_GBPS = 4,
+ IB_SA_RATE_40_GBPS = 7,
+ IB_SA_RATE_60_GBPS = 8,
+ IB_SA_RATE_80_GBPS = 9,
+ IB_SA_RATE_120_GBPS = 10
+};
+
+static inline int ib_sa_rate_enum_to_int(enum ib_sa_rate rate)
+{
+ switch (rate) {
+ case IB_SA_RATE_2_5_GBPS: return 1;
+ case IB_SA_RATE_5_GBPS: return 2;
+ case IB_SA_RATE_10_GBPS: return 4;
+ case IB_SA_RATE_20_GBPS: return 8;
+ case IB_SA_RATE_30_GBPS: return 12;
+ case IB_SA_RATE_40_GBPS: return 16;
+ case IB_SA_RATE_60_GBPS: return 24;
+ case IB_SA_RATE_80_GBPS: return 32;
+ case IB_SA_RATE_120_GBPS: return 48;
+ default: return -1;
+ }
+}
+
+typedef u64 __bitwise ib_sa_comp_mask;
+
+#define IB_SA_COMP_MASK(n) ((__force ib_sa_comp_mask) cpu_to_be64(1ull << n))
+
+/*
+ * Structures for SA records are named "struct ib_sa_xxx_rec." No
+ * attempt is made to pack structures to match the physical layout of
+ * SA records in SA MADs; all packing and unpacking is handled by the
+ * SA query code.
+ *
+ * For a record with structure ib_sa_xxx_rec, the naming convention
+ * for the component mask value for field yyy is IB_SA_XXX_REC_YYY (we
+ * never use different abbreviations or otherwise change the spelling
+ * of xxx/yyy between ib_sa_xxx_rec.yyy and IB_SA_XXX_REC_YYY).
+ *
+ * Reserved rows are indicated with comments to help maintainability.
+ */
+
+/* reserved: 0 */
+/* reserved: 1 */
+#define IB_SA_PATH_REC_DGID IB_SA_COMP_MASK( 2)
+#define IB_SA_PATH_REC_SGID IB_SA_COMP_MASK( 3)
+#define IB_SA_PATH_REC_DLID IB_SA_COMP_MASK( 4)
+#define IB_SA_PATH_REC_SLID IB_SA_COMP_MASK( 5)
+#define IB_SA_PATH_REC_RAW_TRAFFIC IB_SA_COMP_MASK( 6)
+/* reserved: 7 */
+#define IB_SA_PATH_REC_FLOW_LABEL IB_SA_COMP_MASK( 8)
+#define IB_SA_PATH_REC_HOP_LIMIT IB_SA_COMP_MASK( 9)
+#define IB_SA_PATH_REC_TRAFFIC_CLASS IB_SA_COMP_MASK(10)
+#define IB_SA_PATH_REC_REVERSIBLE IB_SA_COMP_MASK(11)
+#define IB_SA_PATH_REC_NUMB_PATH IB_SA_COMP_MASK(12)
+#define IB_SA_PATH_REC_PKEY IB_SA_COMP_MASK(13)
+/* reserved: 14 */
+#define IB_SA_PATH_REC_SL IB_SA_COMP_MASK(15)
+#define IB_SA_PATH_REC_MTU_SELECTOR IB_SA_COMP_MASK(16)
+#define IB_SA_PATH_REC_MTU IB_SA_COMP_MASK(17)
+#define IB_SA_PATH_REC_RATE_SELECTOR IB_SA_COMP_MASK(18)
+#define IB_SA_PATH_REC_RATE IB_SA_COMP_MASK(19)
+#define IB_SA_PATH_REC_PACKET_LIFE_TIME_SELECTOR IB_SA_COMP_MASK(20)
+#define IB_SA_PATH_REC_PACKET_LIFE_TIME IB_SA_COMP_MASK(21)
+#define IB_SA_PATH_REC_PREFERENCE IB_SA_COMP_MASK(22)
+
+struct ib_sa_path_rec {
+ /* reserved */
+ /* reserved */
+ union ib_gid dgid;
+ union ib_gid sgid;
+ u16 dlid;
+ u16 slid;
+ int raw_traffic;
+ /* reserved */
+ u32 flow_label;
+ u8 hop_limit;
+ u8 traffic_class;
+ int reversible;
+ u8 numb_path;
+ u16 pkey;
+ /* reserved */
+ u8 sl;
+ u8 mtu_selector;
+ enum ib_mtu mtu;
+ u8 rate_selector;
+ u8 rate;
+ u8 packet_life_time_selector;
+ u8 packet_life_time;
+ u8 preference;
+};
+
+#define IB_SA_MCMEMBER_REC_MGID IB_SA_COMP_MASK( 0)
+#define IB_SA_MCMEMBER_REC_PORT_GID IB_SA_COMP_MASK( 1)
+#define IB_SA_MCMEMBER_REC_QKEY IB_SA_COMP_MASK( 2)
+#define IB_SA_MCMEMBER_REC_MLID IB_SA_COMP_MASK( 3)
+#define IB_SA_MCMEMBER_REC_MTU_SELECTOR IB_SA_COMP_MASK( 4)
+#define IB_SA_MCMEMBER_REC_MTU IB_SA_COMP_MASK( 5)
+#define IB_SA_MCMEMBER_REC_TRAFFIC_CLASS IB_SA_COMP_MASK( 6)
+#define IB_SA_MCMEMBER_REC_PKEY IB_SA_COMP_MASK( 7)
+#define IB_SA_MCMEMBER_REC_RATE_SELECTOR IB_SA_COMP_MASK( 8)
+#define IB_SA_MCMEMBER_REC_RATE IB_SA_COMP_MASK( 9)
+#define IB_SA_MCMEMBER_REC_PACKET_LIFE_TIME_SELECTOR IB_SA_COMP_MASK(10)
+#define IB_SA_MCMEMBER_REC_PACKET_LIFE_TIME IB_SA_COMP_MASK(11)
+#define IB_SA_MCMEMBER_REC_SL IB_SA_COMP_MASK(12)
+#define IB_SA_MCMEMBER_REC_FLOW_LABEL IB_SA_COMP_MASK(13)
+#define IB_SA_MCMEMBER_REC_HOP_LIMIT IB_SA_COMP_MASK(14)
+#define IB_SA_MCMEMBER_REC_SCOPE IB_SA_COMP_MASK(15)
+#define IB_SA_MCMEMBER_REC_JOIN_STATE IB_SA_COMP_MASK(16)
+#define IB_SA_MCMEMBER_REC_PROXY_JOIN IB_SA_COMP_MASK(17)
+
+struct ib_sa_mcmember_rec {
+ union ib_gid mgid;
+ union ib_gid port_gid;
+ u32 qkey;
+ u16 mlid;
+ u8 mtu_selector;
+ enum ib_mtu mtu;
+ u8 traffic_class;
+ u16 pkey;
+ u8 rate_selector;
+ u8 rate;
+ u8 packet_life_time_selector;
+ u8 packet_life_time;
+ u8 sl;
+ u32 flow_label;
+ u8 hop_limit;
+ u8 scope;
+ u8 join_state;
+ int proxy_join;
+};
+
+struct ib_sa_query;
+
+void ib_sa_cancel_query(int id, struct ib_sa_query *query);
+
+int ib_sa_path_rec_get(struct ib_device *device, u8 port_num,
+ struct ib_sa_path_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_path_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **query);
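+
+/*
+ * Illustrative sketch: querying the SA for a path record between two
+ * GIDs. path_callback, my_context and the GID values are placeholders,
+ * and it is assumed that, as with the MCMember helpers below, a negative
+ * return value is an error code while a non-negative value is a query ID
+ * that can be passed to ib_sa_cancel_query():
+ *
+ *	struct ib_sa_path_rec rec = {
+ *		.sgid      = my_sgid,
+ *		.dgid      = remote_dgid,
+ *		.numb_path = 1,
+ *	};
+ *	struct ib_sa_query *query;
+ *	int id;
+ *
+ *	id = ib_sa_path_rec_get(device, port_num, &rec,
+ *				IB_SA_PATH_REC_SGID | IB_SA_PATH_REC_DGID |
+ *				IB_SA_PATH_REC_NUMB_PATH,
+ *				1000, GFP_KERNEL, path_callback, my_context,
+ *				&query);
+ */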
+
+int ib_sa_mcmember_rec_query(struct ib_device *device, u8 port_num,
+ u8 method,
+ struct ib_sa_mcmember_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_mcmember_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **query);
+
+/**
+ * ib_sa_mcmember_rec_set - Start an MCMember set query
+ * @device:device to send query on
+ * @port_num: port number to send query on
+ * @rec:MCMember Record to send in query
+ * @comp_mask:component mask to send in query
+ * @timeout_ms:time to wait for response
+ * @gfp_mask:GFP mask to use for internal allocations
+ * @callback:function called when query completes, times out or is
+ * canceled
+ * @context:opaque user context passed to callback
+ * @sa_query:query context, used to cancel query
+ *
+ * Send an MCMember Set query to the SA (e.g. to join a multicast
+ * group). The callback function will be called when the query
+ * completes (or fails); status is 0 for a successful response, -EINTR
+ * if the query is canceled, -ETIMEDOUT if the query timed out, or
+ * -EIO if an error occurred sending the query. The resp parameter of
+ * the callback is only valid if status is 0.
+ *
+ * If the return value of ib_sa_mcmember_rec_set() is negative, it is
+ * an error code. Otherwise it is a query ID that can be used to
+ * cancel the query.
+ */
+static inline int
+ib_sa_mcmember_rec_set(struct ib_device *device, u8 port_num,
+ struct ib_sa_mcmember_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_mcmember_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **query)
+{
+ return ib_sa_mcmember_rec_query(device, port_num,
+ IB_MGMT_METHOD_SET,
+ rec, comp_mask,
+ timeout_ms, gfp_mask, callback,
+ context, query);
+}
+
+/**
+ * ib_sa_mcmember_rec_delete - Start an MCMember delete query
+ * @device:device to send query on
+ * @port_num: port number to send query on
+ * @rec:MCMember Record to send in query
+ * @comp_mask:component mask to send in query
+ * @timeout_ms:time to wait for response
+ * @gfp_mask:GFP mask to use for internal allocations
+ * @callback:function called when query completes, times out or is
+ * canceled
+ * @context:opaque user context passed to callback
+ * @sa_query:query context, used to cancel query
+ *
+ * Send an MCMember Delete query to the SA (e.g. to leave a multicast
+ * group). The callback function will be called when the query
+ * completes (or fails); status is 0 for a successful response, -EINTR
+ * if the query is canceled, -ETIMEDOUT if the query timed out, or
+ * -EIO if an error occurred sending the query. The resp parameter of
+ * the callback is only valid if status is 0.
+ *
+ * If the return value of ib_sa_mcmember_rec_delete() is negative, it
+ * is an error code. Otherwise it is a query ID that can be used to
+ * cancel the query.
+ */
+static inline int
+ib_sa_mcmember_rec_delete(struct ib_device *device, u8 port_num,
+ struct ib_sa_mcmember_rec *rec,
+ ib_sa_comp_mask comp_mask,
+ int timeout_ms, int gfp_mask,
+ void (*callback)(int status,
+ struct ib_sa_mcmember_rec *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **query)
+{
+ return ib_sa_mcmember_rec_query(device, port_num,
+ IB_SA_METHOD_DELETE,
+ rec, comp_mask,
+ timeout_ms, gfp_mask, callback,
+ context, query);
+}
+
+
+#endif /* IB_SA_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_smi.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#if !defined( IB_SMI_H )
+#define IB_SMI_H
+
+#include <ib_mad.h>
+
+#define IB_LID_PERMISSIVE 0xFFFF
+
+#define IB_SMP_DATA_SIZE 64
+#define IB_SMP_MAX_PATH_HOPS 64
+
+struct ib_smp {
+ u8 base_version;
+ u8 mgmt_class;
+ u8 class_version;
+ u8 method;
+ u16 status;
+ u8 hop_ptr;
+ u8 hop_cnt;
+ u64 tid;
+ u16 attr_id;
+ u16 resv;
+ u32 attr_mod;
+ u64 mkey;
+ u16 dr_slid;
+ u16 dr_dlid;
+ u8 reserved[28];
+ u8 data[IB_SMP_DATA_SIZE];
+ u8 initial_path[IB_SMP_MAX_PATH_HOPS];
+ u8 return_path[IB_SMP_MAX_PATH_HOPS];
+} __attribute__ ((packed));
+
+#define IB_SMP_DIRECTION __constant_htons(0x8000)
+
+/* Subnet management attributes */
+#define IB_SMP_ATTR_NOTICE __constant_htons(0x0002)
+#define IB_SMP_ATTR_NODE_DESC __constant_htons(0x0010)
+#define IB_SMP_ATTR_NODE_INFO __constant_htons(0x0011)
+#define IB_SMP_ATTR_SWITCH_INFO __constant_htons(0x0012)
+#define IB_SMP_ATTR_GUID_INFO __constant_htons(0x0014)
+#define IB_SMP_ATTR_PORT_INFO __constant_htons(0x0015)
+#define IB_SMP_ATTR_PKEY_TABLE __constant_htons(0x0016)
+#define IB_SMP_ATTR_SL_TO_VL_TABLE __constant_htons(0x0017)
+#define IB_SMP_ATTR_VL_ARB_TABLE __constant_htons(0x0018)
+#define IB_SMP_ATTR_LINEAR_FORWARD_TABLE __constant_htons(0x0019)
+#define IB_SMP_ATTR_RANDOM_FORWARD_TABLE __constant_htons(0x001A)
+#define IB_SMP_ATTR_MCAST_FORWARD_TABLE __constant_htons(0x001B)
+#define IB_SMP_ATTR_SM_INFO __constant_htons(0x0020)
+#define IB_SMP_ATTR_VENDOR_DIAG __constant_htons(0x0030)
+#define IB_SMP_ATTR_LED_INFO __constant_htons(0x0031)
+#define IB_SMP_ATTR_VENDOR_MASK __constant_htons(0xFF00)
+
+static inline u8
+ib_get_smp_direction(struct ib_smp *smp)
+{
+ return ((smp->status & IB_SMP_DIRECTION) == IB_SMP_DIRECTION);
+}
+
+#endif /* IB_SMI_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_user_mad.h 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#ifndef IB_USER_MAD_H
+#define IB_USER_MAD_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+/*
+ * Increment this value if any changes that break userspace ABI
+ * compatibility are made.
+ */
+#define IB_USER_MAD_ABI_VERSION 2
+
+/*
+ * Make sure that all structs defined in this file remain laid out so
+ * that they pack the same way on 32-bit and 64-bit architectures (to
+ * avoid incompatibility between 32-bit userspace and 64-bit kernels).
+ */
+
+/**
+ * ib_user_mad - MAD packet
+ * @data - Contents of MAD
+ * @id - ID of agent MAD received with/to be sent with
+ * @status - 0 on successful receive; ETIMEDOUT if no response was
+ * received (in which case the transaction ID in data[] is set to the
+ * TID of the original request). Ignored on send.
+ * @timeout_ms - Milliseconds to wait for response (unset on receive)
+ * @qpn - Remote QP number received from/to be sent to
+ * @qkey - Remote Q_Key to be sent with (unset on receive)
+ * @lid - Remote lid received from/to be sent to
+ * @sl - Service level received with/to be sent with
+ * @path_bits - Local path bits received with/to be sent with
+ * @grh_present - If set, GRH was received/should be sent
+ * @gid_index - Local GID index to send with (unset on receive)
+ * @hop_limit - Hop limit in GRH
+ * @traffic_class - Traffic class in GRH
+ * @gid - Remote GID in GRH
+ * @flow_label - Flow label in GRH
+ *
+ * All multi-byte quantities are stored in network (big endian) byte order.
+ */
+struct ib_user_mad {
+ __u8 data[256];
+ __u32 id;
+ __u32 status;
+ __u32 timeout_ms;
+ __u32 qpn;
+ __u32 qkey;
+ __u16 lid;
+ __u8 sl;
+ __u8 path_bits;
+ __u8 grh_present;
+ __u8 gid_index;
+ __u8 hop_limit;
+ __u8 traffic_class;
+ __u8 gid[16];
+ __u32 flow_label;
+};
+
+/**
+ * ib_user_mad_reg_req - MAD registration request
+ * @id - Set by the kernel; used to identify agent in future requests.
+ * @qpn - Queue pair number; must be 0 or 1.
+ * @method_mask - The caller will receive unsolicited MADs for any method
+ * where @method_mask = 1.
+ * @mgmt_class - Indicates which management class of MADs should be received
+ * by the caller. This field is only required if the user wishes to
+ * receive unsolicited MADs, otherwise it should be 0.
+ * @mgmt_class_version - Indicates which version of MADs for the given
+ * management class to receive.
+ * @oui - Indicates IEEE OUI when mgmt_class is a vendor class
+ * in the range from 0x30 to 0x4f. Otherwise not used.
+ */
+struct ib_user_mad_reg_req {
+ __u32 id;
+ __u32 method_mask[4];
+ __u8 qpn;
+ __u8 mgmt_class;
+ __u8 mgmt_class_version;
+ __u8 oui[3];
+};
+
+#define IB_IOCTL_MAGIC 0x1b
+
+#define IB_USER_MAD_REGISTER_AGENT _IOWR(IB_IOCTL_MAGIC, 1, \
+ struct ib_user_mad_reg_req)
+
+#define IB_USER_MAD_UNREGISTER_AGENT _IOW(IB_IOCTL_MAGIC, 2, __u32)
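+
+/*
+ * Illustrative userspace sketch; the device node path is an assumption
+ * about how the umad module is typically exposed, the names are
+ * placeholders, and error handling is omitted:
+ *
+ *	struct ib_user_mad_reg_req req = { 0 };
+ *	int fd = open("/dev/infiniband/umad0", O_RDWR);
+ *
+ *	req.qpn                = 1;
+ *	req.mgmt_class         = mgmt_class;
+ *	req.mgmt_class_version = 1;
+ *	if (ioctl(fd, IB_USER_MAD_REGISTER_AGENT, &req) == 0)
+ *		agent_id = req.id;
+ *
+ * qpn 1 selects the GSI; on success the kernel fills in req.id. MADs are
+ * then exchanged as struct ib_user_mad via read() and write() on the
+ * same file descriptor.
+ */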
+
+#endif /* IB_USER_MAD_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ib_verbs.h 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#if !defined(IB_VERBS_H)
+#define IB_VERBS_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <asm/atomic.h>
+
+union ib_gid {
+ u8 raw[16];
+ struct {
+ u64 subnet_prefix;
+ u64 interface_id;
+ } global;
+};
+
+enum ib_node_type {
+ IB_NODE_CA = 1,
+ IB_NODE_SWITCH,
+ IB_NODE_ROUTER
+};
+
+enum ib_device_cap_flags {
+ IB_DEVICE_RESIZE_MAX_WR = 1,
+ IB_DEVICE_BAD_PKEY_CNTR = (1<<1),
+ IB_DEVICE_BAD_QKEY_CNTR = (1<<2),
+ IB_DEVICE_RAW_MULTI = (1<<3),
+ IB_DEVICE_AUTO_PATH_MIG = (1<<4),
+ IB_DEVICE_CHANGE_PHY_PORT = (1<<5),
+ IB_DEVICE_UD_AV_PORT_ENFORCE = (1<<6),
+ IB_DEVICE_CURR_QP_STATE_MOD = (1<<7),
+ IB_DEVICE_SHUTDOWN_PORT = (1<<8),
+ IB_DEVICE_INIT_TYPE = (1<<9),
+ IB_DEVICE_PORT_ACTIVE_EVENT = (1<<10),
+ IB_DEVICE_SYS_IMAGE_GUID = (1<<11),
+ IB_DEVICE_RC_RNR_NAK_GEN = (1<<12),
+ IB_DEVICE_SRQ_RESIZE = (1<<13),
+ IB_DEVICE_N_NOTIFY_CQ = (1<<14),
+ IB_DEVICE_RQ_SIG_TYPE = (1<<15)
+};
+
+enum ib_atomic_cap {
+ IB_ATOMIC_NONE,
+ IB_ATOMIC_HCA,
+ IB_ATOMIC_GLOB
+};
+
+struct ib_device_attr {
+ u64 fw_ver;
+ u64 node_guid;
+ u64 sys_image_guid;
+ u64 max_mr_size;
+ u64 page_size_cap;
+ u32 vendor_id;
+ u32 vendor_part_id;
+ u32 hw_ver;
+ int max_qp;
+ int max_qp_wr;
+ int device_cap_flags;
+ int max_sge;
+ int max_sge_rd;
+ int max_cq;
+ int max_cqe;
+ int max_mr;
+ int max_pd;
+ int max_qp_rd_atom;
+ int max_ee_rd_atom;
+ int max_res_rd_atom;
+ int max_qp_init_rd_atom;
+ int max_ee_init_rd_atom;
+ enum ib_atomic_cap atomic_cap;
+ int max_ee;
+ int max_rdd;
+ int max_mw;
+ int max_raw_ipv6_qp;
+ int max_raw_ethy_qp;
+ int max_mcast_grp;
+ int max_mcast_qp_attach;
+ int max_total_mcast_qp_attach;
+ int max_ah;
+ int max_fmr;
+ int max_map_per_fmr;
+ int max_srq;
+ int max_srq_wr;
+ int max_srq_sge;
+ u16 max_pkeys;
+ u8 local_ca_ack_delay;
+};
+
+enum ib_mtu {
+ IB_MTU_256 = 1,
+ IB_MTU_512 = 2,
+ IB_MTU_1024 = 3,
+ IB_MTU_2048 = 4,
+ IB_MTU_4096 = 5
+};
+
+static inline int ib_mtu_enum_to_int(enum ib_mtu mtu)
+{
+ switch (mtu) {
+ case IB_MTU_256: return 256;
+ case IB_MTU_512: return 512;
+ case IB_MTU_1024: return 1024;
+ case IB_MTU_2048: return 2048;
+ case IB_MTU_4096: return 4096;
+ default: return -1;
+ }
+}
+
+enum ib_port_state {
+ IB_PORT_NOP = 0,
+ IB_PORT_DOWN = 1,
+ IB_PORT_INIT = 2,
+ IB_PORT_ARMED = 3,
+ IB_PORT_ACTIVE = 4,
+ IB_PORT_ACTIVE_DEFER = 5
+};
+
+enum ib_port_cap_flags {
+ IB_PORT_SM = 1 << 1,
+ IB_PORT_NOTICE_SUP = 1 << 2,
+ IB_PORT_TRAP_SUP = 1 << 3,
+ IB_PORT_OPT_IPD_SUP = 1 << 4,
+ IB_PORT_AUTO_MIGR_SUP = 1 << 5,
+ IB_PORT_SL_MAP_SUP = 1 << 6,
+ IB_PORT_MKEY_NVRAM = 1 << 7,
+ IB_PORT_PKEY_NVRAM = 1 << 8,
+ IB_PORT_LED_INFO_SUP = 1 << 9,
+ IB_PORT_SM_DISABLED = 1 << 10,
+ IB_PORT_SYS_IMAGE_GUID_SUP = 1 << 11,
+ IB_PORT_PKEY_SW_EXT_PORT_TRAP_SUP = 1 << 12,
+ IB_PORT_CM_SUP = 1 << 16,
+ IB_PORT_SNMP_TUNNEL_SUP = 1 << 17,
+ IB_PORT_REINIT_SUP = 1 << 18,
+ IB_PORT_DEVICE_MGMT_SUP = 1 << 19,
+ IB_PORT_VENDOR_CLASS_SUP = 1 << 20,
+ IB_PORT_DR_NOTICE_SUP = 1 << 21,
+ IB_PORT_CAP_MASK_NOTICE_SUP = 1 << 22,
+ IB_PORT_BOOT_MGMT_SUP = 1 << 23,
+ IB_PORT_LINK_LATENCY_SUP = 1 << 24,
+ IB_PORT_CLIENT_REG_SUP = 1 << 25
+};
+
+enum ib_port_width {
+ IB_WIDTH_1X = 1,
+ IB_WIDTH_4X = 2,
+ IB_WIDTH_8X = 4,
+ IB_WIDTH_12X = 8
+};
+
+static inline int ib_width_enum_to_int(enum ib_port_width width)
+{
+ switch (width) {
+ case IB_WIDTH_1X: return 1;
+ case IB_WIDTH_4X: return 4;
+ case IB_WIDTH_8X: return 8;
+ case IB_WIDTH_12X: return 12;
+ default: return -1;
+ }
+}
+
+struct ib_port_attr {
+ enum ib_port_state state;
+ enum ib_mtu max_mtu;
+ enum ib_mtu active_mtu;
+ int gid_tbl_len;
+ u32 port_cap_flags;
+ u32 max_msg_sz;
+ u32 bad_pkey_cntr;
+ u32 qkey_viol_cntr;
+ u16 pkey_tbl_len;
+ u16 lid;
+ u16 sm_lid;
+ u8 lmc;
+ u8 max_vl_num;
+ u8 sm_sl;
+ u8 subnet_timeout;
+ u8 init_type_reply;
+ u8 active_width;
+ u8 active_speed;
+ u8 phys_state;
+};
+
+enum ib_device_modify_flags {
+ IB_DEVICE_MODIFY_SYS_IMAGE_GUID = 1
+};
+
+struct ib_device_modify {
+ u64 sys_image_guid;
+};
+
+enum ib_port_modify_flags {
+ IB_PORT_SHUTDOWN = 1,
+ IB_PORT_INIT_TYPE = (1<<2),
+ IB_PORT_RESET_QKEY_CNTR = (1<<3)
+};
+
+struct ib_port_modify {
+ u32 set_port_cap_mask;
+ u32 clr_port_cap_mask;
+ u8 init_type;
+};
+
+enum ib_event_type {
+ IB_EVENT_CQ_ERR,
+ IB_EVENT_QP_FATAL,
+ IB_EVENT_QP_REQ_ERR,
+ IB_EVENT_QP_ACCESS_ERR,
+ IB_EVENT_COMM_EST,
+ IB_EVENT_SQ_DRAINED,
+ IB_EVENT_PATH_MIG,
+ IB_EVENT_PATH_MIG_ERR,
+ IB_EVENT_DEVICE_FATAL,
+ IB_EVENT_PORT_ACTIVE,
+ IB_EVENT_PORT_ERR,
+ IB_EVENT_LID_CHANGE,
+ IB_EVENT_PKEY_CHANGE,
+ IB_EVENT_SM_CHANGE
+};
+
+struct ib_event {
+ struct ib_device *device;
+ union {
+ struct ib_cq *cq;
+ struct ib_qp *qp;
+ u8 port_num;
+ } element;
+ enum ib_event_type event;
+};
+
+struct ib_event_handler {
+ struct ib_device *device;
+ void (*handler)(struct ib_event_handler *, struct ib_event *);
+ struct list_head list;
+};
+
+#define INIT_IB_EVENT_HANDLER(_ptr, _device, _handler) \
+ do { \
+ (_ptr)->device = _device; \
+ (_ptr)->handler = _handler; \
+ INIT_LIST_HEAD(&(_ptr)->list); \
+ } while (0)
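+
+/*
+ * Illustrative sketch: a client declares a handler matching the
+ * ->handler signature above, initializes it with INIT_IB_EVENT_HANDLER
+ * and registers it with ib_register_event_handler(), declared later in
+ * this header. my_event_handler and handle_port_up are placeholders:
+ *
+ *	static void my_event_handler(struct ib_event_handler *handler,
+ *				     struct ib_event *event)
+ *	{
+ *		if (event->event == IB_EVENT_PORT_ACTIVE)
+ *			handle_port_up(event->device, event->element.port_num);
+ *	}
+ *
+ *	struct ib_event_handler eh;
+ *
+ *	INIT_IB_EVENT_HANDLER(&eh, device, my_event_handler);
+ *	ib_register_event_handler(&eh);
+ */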
+
+struct ib_global_route {
+ union ib_gid dgid;
+ u32 flow_label;
+ u8 sgid_index;
+ u8 hop_limit;
+ u8 traffic_class;
+};
+
+enum {
+ IB_MULTICAST_QPN = 0xffffff
+};
+
+enum ib_ah_flags {
+ IB_AH_GRH = 1
+};
+
+struct ib_ah_attr {
+ struct ib_global_route grh;
+ u16 dlid;
+ u8 sl;
+ u8 src_path_bits;
+ u8 static_rate;
+ u8 ah_flags;
+ u8 port_num;
+};
+
+enum ib_wc_status {
+ IB_WC_SUCCESS,
+ IB_WC_LOC_LEN_ERR,
+ IB_WC_LOC_QP_OP_ERR,
+ IB_WC_LOC_EEC_OP_ERR,
+ IB_WC_LOC_PROT_ERR,
+ IB_WC_WR_FLUSH_ERR,
+ IB_WC_MW_BIND_ERR,
+ IB_WC_BAD_RESP_ERR,
+ IB_WC_LOC_ACCESS_ERR,
+ IB_WC_REM_INV_REQ_ERR,
+ IB_WC_REM_ACCESS_ERR,
+ IB_WC_REM_OP_ERR,
+ IB_WC_RETRY_EXC_ERR,
+ IB_WC_RNR_RETRY_EXC_ERR,
+ IB_WC_LOC_RDD_VIOL_ERR,
+ IB_WC_REM_INV_RD_REQ_ERR,
+ IB_WC_REM_ABORT_ERR,
+ IB_WC_INV_EECN_ERR,
+ IB_WC_INV_EEC_STATE_ERR,
+ IB_WC_FATAL_ERR,
+ IB_WC_RESP_TIMEOUT_ERR,
+ IB_WC_GENERAL_ERR
+};
+
+enum ib_wc_opcode {
+ IB_WC_SEND,
+ IB_WC_RDMA_WRITE,
+ IB_WC_RDMA_READ,
+ IB_WC_COMP_SWAP,
+ IB_WC_FETCH_ADD,
+ IB_WC_BIND_MW,
+/*
+ * Set value of IB_WC_RECV so consumers can test if a completion is a
+ * receive by testing (opcode & IB_WC_RECV).
+ */
+ IB_WC_RECV = 1 << 7,
+ IB_WC_RECV_RDMA_WITH_IMM
+};
+
+enum ib_wc_flags {
+ IB_WC_GRH = 1,
+ IB_WC_WITH_IMM = (1<<1)
+};
+
+struct ib_wc {
+ u64 wr_id;
+ enum ib_wc_status status;
+ enum ib_wc_opcode opcode;
+ u32 vendor_err;
+ u32 byte_len;
+ __be32 imm_data;
+ u32 qp_num;
+ u32 src_qp;
+ int wc_flags;
+ u16 pkey_index;
+ u16 slid;
+ u8 sl;
+ u8 dlid_path_bits;
+ u8 port_num; /* valid only for DR SMPs on switches */
+};
+
+enum ib_cq_notify {
+ IB_CQ_SOLICITED,
+ IB_CQ_NEXT_COMP
+};
+
+struct ib_qp_cap {
+ u32 max_send_wr;
+ u32 max_recv_wr;
+ u32 max_send_sge;
+ u32 max_recv_sge;
+ u32 max_inline_data;
+};
+
+enum ib_sig_type {
+ IB_SIGNAL_ALL_WR,
+ IB_SIGNAL_REQ_WR
+};
+
+enum ib_qp_type {
+ /*
+ * IB_QPT_SMI and IB_QPT_GSI have to be the first two entries
+ * here (and in that order) since the MAD layer uses them as
+ * indices into a 2-entry table.
+ */
+ IB_QPT_SMI,
+ IB_QPT_GSI,
+
+ IB_QPT_RC,
+ IB_QPT_UC,
+ IB_QPT_UD,
+ IB_QPT_RAW_IPV6,
+ IB_QPT_RAW_ETY
+};
+
+struct ib_qp_init_attr {
+ void (*event_handler)(struct ib_event *, void *);
+ void *qp_context;
+ struct ib_cq *send_cq;
+ struct ib_cq *recv_cq;
+ struct ib_srq *srq;
+ struct ib_qp_cap cap;
+ enum ib_sig_type sq_sig_type;
+ enum ib_sig_type rq_sig_type;
+ enum ib_qp_type qp_type;
+ u8 port_num; /* special QP types only */
+};
+
+enum ib_rnr_timeout {
+ IB_RNR_TIMER_655_36 = 0,
+ IB_RNR_TIMER_000_01 = 1,
+ IB_RNR_TIMER_000_02 = 2,
+ IB_RNR_TIMER_000_03 = 3,
+ IB_RNR_TIMER_000_04 = 4,
+ IB_RNR_TIMER_000_06 = 5,
+ IB_RNR_TIMER_000_08 = 6,
+ IB_RNR_TIMER_000_12 = 7,
+ IB_RNR_TIMER_000_16 = 8,
+ IB_RNR_TIMER_000_24 = 9,
+ IB_RNR_TIMER_000_32 = 10,
+ IB_RNR_TIMER_000_48 = 11,
+ IB_RNR_TIMER_000_64 = 12,
+ IB_RNR_TIMER_000_96 = 13,
+ IB_RNR_TIMER_001_28 = 14,
+ IB_RNR_TIMER_001_92 = 15,
+ IB_RNR_TIMER_002_56 = 16,
+ IB_RNR_TIMER_003_84 = 17,
+ IB_RNR_TIMER_005_12 = 18,
+ IB_RNR_TIMER_007_68 = 19,
+ IB_RNR_TIMER_010_24 = 20,
+ IB_RNR_TIMER_015_36 = 21,
+ IB_RNR_TIMER_020_48 = 22,
+ IB_RNR_TIMER_030_72 = 23,
+ IB_RNR_TIMER_040_96 = 24,
+ IB_RNR_TIMER_061_44 = 25,
+ IB_RNR_TIMER_081_92 = 26,
+ IB_RNR_TIMER_122_88 = 27,
+ IB_RNR_TIMER_163_84 = 28,
+ IB_RNR_TIMER_245_76 = 29,
+ IB_RNR_TIMER_327_68 = 30,
+ IB_RNR_TIMER_491_52 = 31
+};
+
+enum ib_qp_attr_mask {
+ IB_QP_STATE = 1,
+ IB_QP_CUR_STATE = (1<<1),
+ IB_QP_EN_SQD_ASYNC_NOTIFY = (1<<2),
+ IB_QP_ACCESS_FLAGS = (1<<3),
+ IB_QP_PKEY_INDEX = (1<<4),
+ IB_QP_PORT = (1<<5),
+ IB_QP_QKEY = (1<<6),
+ IB_QP_AV = (1<<7),
+ IB_QP_PATH_MTU = (1<<8),
+ IB_QP_TIMEOUT = (1<<9),
+ IB_QP_RETRY_CNT = (1<<10),
+ IB_QP_RNR_RETRY = (1<<11),
+ IB_QP_RQ_PSN = (1<<12),
+ IB_QP_MAX_QP_RD_ATOMIC = (1<<13),
+ IB_QP_ALT_PATH = (1<<14),
+ IB_QP_MIN_RNR_TIMER = (1<<15),
+ IB_QP_SQ_PSN = (1<<16),
+ IB_QP_MAX_DEST_RD_ATOMIC = (1<<17),
+ IB_QP_PATH_MIG_STATE = (1<<18),
+ IB_QP_CAP = (1<<19),
+ IB_QP_DEST_QPN = (1<<20)
+};
+
+enum ib_qp_state {
+ IB_QPS_RESET,
+ IB_QPS_INIT,
+ IB_QPS_RTR,
+ IB_QPS_RTS,
+ IB_QPS_SQD,
+ IB_QPS_SQE,
+ IB_QPS_ERR
+};
+
+enum ib_mig_state {
+ IB_MIG_MIGRATED,
+ IB_MIG_REARM,
+ IB_MIG_ARMED
+};
+
+struct ib_qp_attr {
+ enum ib_qp_state qp_state;
+ enum ib_qp_state cur_qp_state;
+ enum ib_mtu path_mtu;
+ enum ib_mig_state path_mig_state;
+ u32 qkey;
+ u32 rq_psn;
+ u32 sq_psn;
+ u32 dest_qp_num;
+ int qp_access_flags;
+ struct ib_qp_cap cap;
+ struct ib_ah_attr ah_attr;
+ struct ib_ah_attr alt_ah_attr;
+ u16 pkey_index;
+ u16 alt_pkey_index;
+ u8 en_sqd_async_notify;
+ u8 sq_draining;
+ u8 max_rd_atomic;
+ u8 max_dest_rd_atomic;
+ u8 min_rnr_timer;
+ u8 port_num;
+ u8 timeout;
+ u8 retry_cnt;
+ u8 rnr_retry;
+ u8 alt_port_num;
+ u8 alt_timeout;
+};
+
+enum ib_wr_opcode {
+ IB_WR_RDMA_WRITE,
+ IB_WR_RDMA_WRITE_WITH_IMM,
+ IB_WR_SEND,
+ IB_WR_SEND_WITH_IMM,
+ IB_WR_RDMA_READ,
+ IB_WR_ATOMIC_CMP_AND_SWP,
+ IB_WR_ATOMIC_FETCH_AND_ADD
+};
+
+enum ib_send_flags {
+ IB_SEND_FENCE = 1,
+ IB_SEND_SIGNALED = (1<<1),
+ IB_SEND_SOLICITED = (1<<2),
+ IB_SEND_INLINE = (1<<3)
+};
+
+enum ib_recv_flags {
+ IB_RECV_SIGNALED = 1
+};
+
+struct ib_sge {
+ u64 addr;
+ u32 length;
+ u32 lkey;
+};
+
+struct ib_send_wr {
+ struct ib_send_wr *next;
+ u64 wr_id;
+ struct ib_sge *sg_list;
+ int num_sge;
+ enum ib_wr_opcode opcode;
+ int send_flags;
+ u32 imm_data;
+ union {
+ struct {
+ u64 remote_addr;
+ u32 rkey;
+ } rdma;
+ struct {
+ u64 remote_addr;
+ u64 compare_add;
+ u64 swap;
+ u32 rkey;
+ } atomic;
+ struct {
+ struct ib_ah *ah;
+ struct ib_mad_hdr *mad_hdr;
+ u32 remote_qpn;
+ u32 remote_qkey;
+ int timeout_ms; /* valid for MADs only */
+ u16 pkey_index; /* valid for GSI only */
+ u8 port_num; /* valid for DR SMPs on switch only */
+ } ud;
+ } wr;
+};
+
+struct ib_recv_wr {
+ struct ib_recv_wr *next;
+ u64 wr_id;
+ struct ib_sge *sg_list;
+ int num_sge;
+ int recv_flags;
+};
+
+enum ib_access_flags {
+ IB_ACCESS_LOCAL_WRITE = 1,
+ IB_ACCESS_REMOTE_WRITE = (1<<1),
+ IB_ACCESS_REMOTE_READ = (1<<2),
+ IB_ACCESS_REMOTE_ATOMIC = (1<<3),
+ IB_ACCESS_MW_BIND = (1<<4)
+};
+
+struct ib_phys_buf {
+ u64 addr;
+ u64 size;
+};
+
+struct ib_mr_attr {
+ struct ib_pd *pd;
+ u64 device_virt_addr;
+ u64 size;
+ int mr_access_flags;
+ u32 lkey;
+ u32 rkey;
+};
+
+enum ib_mr_rereg_flags {
+ IB_MR_REREG_TRANS = 1,
+ IB_MR_REREG_PD = (1<<1),
+ IB_MR_REREG_ACCESS = (1<<2)
+};
+
+struct ib_mw_bind {
+ struct ib_mr *mr;
+ u64 wr_id;
+ u64 addr;
+ u32 length;
+ int send_flags;
+ int mw_access_flags;
+};
+
+struct ib_fmr_attr {
+ int max_pages;
+ int max_maps;
+ u8 page_size;
+};
+
+struct ib_pd {
+ struct ib_device *device;
+ atomic_t usecnt; /* count all resources */
+};
+
+struct ib_ah {
+ struct ib_device *device;
+ struct ib_pd *pd;
+};
+
+typedef void (*ib_comp_handler)(struct ib_cq *cq, void *cq_context);
+
+struct ib_cq {
+ struct ib_device *device;
+ ib_comp_handler comp_handler;
+ void (*event_handler)(struct ib_event *, void *);
+ void * cq_context;
+ int cqe;
+ atomic_t usecnt; /* count number of work queues */
+};
+
+struct ib_srq {
+ struct ib_device *device;
+ struct ib_pd *pd;
+ void *srq_context;
+ atomic_t usecnt;
+};
+
+struct ib_qp {
+ struct ib_device *device;
+ struct ib_pd *pd;
+ struct ib_cq *send_cq;
+ struct ib_cq *recv_cq;
+ struct ib_srq *srq;
+ void (*event_handler)(struct ib_event *, void *);
+ void *qp_context;
+ u32 qp_num;
+ enum ib_qp_type qp_type;
+};
+
+struct ib_mr {
+ struct ib_device *device;
+ struct ib_pd *pd;
+ u32 lkey;
+ u32 rkey;
+ atomic_t usecnt; /* count number of MWs */
+};
+
+struct ib_mw {
+ struct ib_device *device;
+ struct ib_pd *pd;
+ u32 rkey;
+};
+
+struct ib_fmr {
+ struct ib_device *device;
+ struct ib_pd *pd;
+ struct list_head list;
+ u32 lkey;
+ u32 rkey;
+};
+
+struct ib_mad;
+struct ib_grh;
+
+enum ib_process_mad_flags {
+ IB_MAD_IGNORE_MKEY = 1,
+ IB_MAD_IGNORE_BKEY = 2,
+ IB_MAD_IGNORE_ALL = IB_MAD_IGNORE_MKEY | IB_MAD_IGNORE_BKEY
+};
+
+enum ib_mad_result {
+ IB_MAD_RESULT_FAILURE = 0, /* (!SUCCESS is the important flag) */
+ IB_MAD_RESULT_SUCCESS = 1 << 0, /* MAD was successfully processed */
+ IB_MAD_RESULT_REPLY = 1 << 1, /* Reply packet needs to be sent */
+ IB_MAD_RESULT_CONSUMED = 1 << 2 /* Packet consumed: stop processing */
+};
+
+#define IB_DEVICE_NAME_MAX 64
+
+struct ib_cache {
+ rwlock_t lock;
+ struct ib_event_handler event_handler;
+ struct ib_pkey_cache **pkey_cache;
+ struct ib_gid_cache **gid_cache;
+};
+
+struct ib_device {
+ struct device *dma_device;
+
+ char name[IB_DEVICE_NAME_MAX];
+
+ struct list_head event_handler_list;
+ spinlock_t event_handler_lock;
+
+ struct list_head core_list;
+ struct list_head client_data_list;
+ spinlock_t client_data_lock;
+
+ struct ib_cache cache;
+
+ u32 flags;
+
+ int (*query_device)(struct ib_device *device,
+ struct ib_device_attr *device_attr);
+ int (*query_port)(struct ib_device *device,
+ u8 port_num,
+ struct ib_port_attr *port_attr);
+ int (*query_gid)(struct ib_device *device,
+ u8 port_num, int index,
+ union ib_gid *gid);
+ int (*query_pkey)(struct ib_device *device,
+ u8 port_num, u16 index, u16 *pkey);
+ int (*modify_device)(struct ib_device *device,
+ int device_modify_mask,
+ struct ib_device_modify *device_modify);
+ int (*modify_port)(struct ib_device *device,
+ u8 port_num, int port_modify_mask,
+ struct ib_port_modify *port_modify);
+ struct ib_pd * (*alloc_pd)(struct ib_device *device);
+ int (*dealloc_pd)(struct ib_pd *pd);
+ struct ib_ah * (*create_ah)(struct ib_pd *pd,
+ struct ib_ah_attr *ah_attr);
+ int (*modify_ah)(struct ib_ah *ah,
+ struct ib_ah_attr *ah_attr);
+ int (*query_ah)(struct ib_ah *ah,
+ struct ib_ah_attr *ah_attr);
+ int (*destroy_ah)(struct ib_ah *ah);
+ struct ib_qp * (*create_qp)(struct ib_pd *pd,
+ struct ib_qp_init_attr *qp_init_attr);
+ int (*modify_qp)(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask);
+ int (*query_qp)(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask,
+ struct ib_qp_init_attr *qp_init_attr);
+ int (*destroy_qp)(struct ib_qp *qp);
+ int (*post_send)(struct ib_qp *qp,
+ struct ib_send_wr *send_wr,
+ struct ib_send_wr **bad_send_wr);
+ int (*post_recv)(struct ib_qp *qp,
+ struct ib_recv_wr *recv_wr,
+ struct ib_recv_wr **bad_recv_wr);
+ struct ib_cq * (*create_cq)(struct ib_device *device,
+ int cqe);
+ int (*destroy_cq)(struct ib_cq *cq);
+ int (*resize_cq)(struct ib_cq *cq, int *cqe);
+ int (*poll_cq)(struct ib_cq *cq, int num_entries,
+ struct ib_wc *wc);
+ int (*peek_cq)(struct ib_cq *cq, int wc_cnt);
+ int (*req_notify_cq)(struct ib_cq *cq,
+ enum ib_cq_notify cq_notify);
+ int (*req_ncomp_notif)(struct ib_cq *cq,
+ int wc_cnt);
+ struct ib_mr * (*get_dma_mr)(struct ib_pd *pd,
+ int mr_access_flags);
+ struct ib_mr * (*reg_phys_mr)(struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start);
+ int (*query_mr)(struct ib_mr *mr,
+ struct ib_mr_attr *mr_attr);
+ int (*dereg_mr)(struct ib_mr *mr);
+ int (*rereg_phys_mr)(struct ib_mr *mr,
+ int mr_rereg_mask,
+ struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start);
+ struct ib_mw * (*alloc_mw)(struct ib_pd *pd);
+ int (*bind_mw)(struct ib_qp *qp,
+ struct ib_mw *mw,
+ struct ib_mw_bind *mw_bind);
+ int (*dealloc_mw)(struct ib_mw *mw);
+ struct ib_fmr * (*alloc_fmr)(struct ib_pd *pd,
+ int mr_access_flags,
+ struct ib_fmr_attr *fmr_attr);
+ int (*map_phys_fmr)(struct ib_fmr *fmr,
+ u64 *page_list, int list_len,
+ u64 iova);
+ int (*unmap_fmr)(struct list_head *fmr_list);
+ int (*dealloc_fmr)(struct ib_fmr *fmr);
+ int (*attach_mcast)(struct ib_qp *qp,
+ union ib_gid *gid,
+ u16 lid);
+ int (*detach_mcast)(struct ib_qp *qp,
+ union ib_gid *gid,
+ u16 lid);
+ int (*process_mad)(struct ib_device *device,
+ int process_mad_flags,
+ u8 port_num,
+ struct ib_wc *in_wc,
+ struct ib_grh *in_grh,
+ struct ib_mad *in_mad,
+ struct ib_mad *out_mad);
+
+ struct class_device class_dev;
+ struct kobject ports_parent;
+ struct list_head port_list;
+
+ enum {
+ IB_DEV_UNINITIALIZED,
+ IB_DEV_REGISTERED,
+ IB_DEV_UNREGISTERED
+ } reg_state;
+
+ u8 node_type;
+ u8 phys_port_cnt;
+};
+
+struct ib_client {
+ char *name;
+ void (*add) (struct ib_device *);
+ void (*remove)(struct ib_device *);
+
+ struct list_head list;
+};
+
+struct ib_device *ib_alloc_device(size_t size);
+void ib_dealloc_device(struct ib_device *device);
+
+int ib_register_device (struct ib_device *device);
+void ib_unregister_device(struct ib_device *device);
+
+int ib_register_client (struct ib_client *client);
+void ib_unregister_client(struct ib_client *client);
+
+void *ib_get_client_data(struct ib_device *device, struct ib_client *client);
+void ib_set_client_data(struct ib_device *device, struct ib_client *client,
+ void *data);
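+
+/*
+ * Illustrative sketch: an upper-layer module registers an ib_client so
+ * its add/remove callbacks run for each IB device, and keeps per-device
+ * state via the client data hooks. The names and alloc_my_state() helper
+ * are placeholders:
+ *
+ *	static struct ib_client my_client;
+ *
+ *	static void my_add_one(struct ib_device *device)
+ *	{
+ *		struct my_state *state = alloc_my_state(device);
+ *
+ *		ib_set_client_data(device, &my_client, state);
+ *	}
+ *
+ *	static struct ib_client my_client = {
+ *		.name   = "my_client",
+ *		.add    = my_add_one,
+ *		.remove = my_remove_one,
+ *	};
+ *
+ *	ib_register_client(&my_client);
+ */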
+
+int ib_register_event_handler (struct ib_event_handler *event_handler);
+int ib_unregister_event_handler(struct ib_event_handler *event_handler);
+void ib_dispatch_event(struct ib_event *event);
+
+int ib_query_device(struct ib_device *device,
+ struct ib_device_attr *device_attr);
+
+int ib_query_port(struct ib_device *device,
+ u8 port_num, struct ib_port_attr *port_attr);
+
+int ib_query_gid(struct ib_device *device,
+ u8 port_num, int index, union ib_gid *gid);
+
+int ib_query_pkey(struct ib_device *device,
+ u8 port_num, u16 index, u16 *pkey);
+
+int ib_modify_device(struct ib_device *device,
+ int device_modify_mask,
+ struct ib_device_modify *device_modify);
+
+int ib_modify_port(struct ib_device *device,
+ u8 port_num, int port_modify_mask,
+ struct ib_port_modify *port_modify);
+
+/**
+ * ib_alloc_pd - Allocates an unused protection domain.
+ * @device: The device on which to allocate the protection domain.
+ *
+ * A protection domain object provides an association between QPs, shared
+ * receive queues, address handles, memory regions, and memory windows.
+ */
+struct ib_pd *ib_alloc_pd(struct ib_device *device);
+
+/**
+ * ib_dealloc_pd - Deallocates a protection domain.
+ * @pd: The protection domain to deallocate.
+ */
+int ib_dealloc_pd(struct ib_pd *pd);
+
+/**
+ * ib_create_ah - Creates an address handle for the given address vector.
+ * @pd: The protection domain associated with the address handle.
+ * @ah_attr: The attributes of the address vector.
+ *
+ * The address handle is used to reference a local or global destination
+ * in all UD QP post sends.
+ */
+struct ib_ah *ib_create_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr);
+
+/**
+ * ib_modify_ah - Modifies the address vector associated with an address
+ * handle.
+ * @ah: The address handle to modify.
+ * @ah_attr: The new address vector attributes to associate with the
+ * address handle.
+ */
+int ib_modify_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+
+/**
+ * ib_query_ah - Queries the address vector associated with an address
+ * handle.
+ * @ah: The address handle to query.
+ * @ah_attr: The address vector attributes associated with the address
+ * handle.
+ */
+int ib_query_ah(struct ib_ah *ah, struct ib_ah_attr *ah_attr);
+
+/**
+ * ib_destroy_ah - Destroys an address handle.
+ * @ah: The address handle to destroy.
+ */
+int ib_destroy_ah(struct ib_ah *ah);
+
+/**
+ * ib_create_qp - Creates a QP associated with the specified protection
+ * domain.
+ * @pd: The protection domain associated with the QP.
+ * @qp_init_attr: A list of initial attributes required to create the QP.
+ */
+struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ struct ib_qp_init_attr *qp_init_attr);
+
+/**
+ * ib_modify_qp - Modifies the attributes for the specified QP and then
+ * transitions the QP to the given state.
+ * @qp: The QP to modify.
+ * @qp_attr: On input, specifies the QP attributes to modify. On output,
+ * the current values of selected QP attributes are returned.
+ * @qp_attr_mask: A bit-mask used to specify which attributes of the QP
+ * are being modified.
+ */
+int ib_modify_qp(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask);
+
+/**
+ * ib_query_qp - Returns the attribute list and current values for the
+ * specified QP.
+ * @qp: The QP to query.
+ * @qp_attr: The attributes of the specified QP.
+ * @qp_attr_mask: A bit-mask used to select specific attributes to query.
+ * @qp_init_attr: Additional attributes of the selected QP.
+ *
+ * The qp_attr_mask may be used to limit the query to gathering only the
+ * selected attributes.
+ */
+int ib_query_qp(struct ib_qp *qp,
+ struct ib_qp_attr *qp_attr,
+ int qp_attr_mask,
+ struct ib_qp_init_attr *qp_init_attr);
+
+/**
+ * ib_destroy_qp - Destroys the specified QP.
+ * @qp: The QP to destroy.
+ */
+int ib_destroy_qp(struct ib_qp *qp);
+
+/**
+ * ib_post_send - Posts a list of work requests to the send queue of
+ * the specified QP.
+ * @qp: The QP to post the work request on.
+ * @send_wr: A list of work requests to post on the send queue.
+ * @bad_send_wr: On an immediate failure, this parameter will reference
+ * the work request that failed to be posted on the QP.
+ */
+static inline int ib_post_send(struct ib_qp *qp,
+ struct ib_send_wr *send_wr,
+ struct ib_send_wr **bad_send_wr)
+{
+ return qp->device->post_send(qp, send_wr, bad_send_wr);
+}
+
+/**
+ * ib_post_recv - Posts a list of work requests to the receive queue of
+ * the specified QP.
+ * @qp: The QP to post the work request on.
+ * @recv_wr: A list of work requests to post on the receive queue.
+ * @bad_recv_wr: On an immediate failure, this parameter will reference
+ * the work request that failed to be posted on the QP.
+ */
+static inline int ib_post_recv(struct ib_qp *qp,
+ struct ib_recv_wr *recv_wr,
+ struct ib_recv_wr **bad_recv_wr)
+{
+ return qp->device->post_recv(qp, recv_wr, bad_recv_wr);
+}
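+
+/*
+ * Illustrative sketch: posting a single signaled send with one gather
+ * entry. dma_addr, len, my_cookie and the QP are placeholders, and the
+ * lkey is assumed to come from a previously registered memory region:
+ *
+ *	struct ib_sge sge = {
+ *		.addr   = dma_addr,
+ *		.length = len,
+ *		.lkey   = mr->lkey,
+ *	};
+ *	struct ib_send_wr wr = {
+ *		.wr_id      = my_cookie,
+ *		.sg_list    = &sge,
+ *		.num_sge    = 1,
+ *		.opcode     = IB_WR_SEND,
+ *		.send_flags = IB_SEND_SIGNALED,
+ *	};
+ *	struct ib_send_wr *bad_wr;
+ *	int ret;
+ *
+ *	ret = ib_post_send(qp, &wr, &bad_wr);
+ */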
+
+/**
+ * ib_create_cq - Creates a CQ on the specified device.
+ * @device: The device on which to create the CQ.
+ * @comp_handler: A user-specified callback that is invoked when a
+ * completion event occurs on the CQ.
+ * @event_handler: A user-specified callback that is invoked when an
+ * asynchronous event not associated with a completion occurs on the CQ.
+ * @cq_context: Context associated with the CQ returned to the user via
+ * the associated completion and event handlers.
+ * @cqe: The minimum size of the CQ.
+ *
+ * Users can examine the cq structure to determine the actual CQ size.
+ */
+struct ib_cq *ib_create_cq(struct ib_device *device,
+ ib_comp_handler comp_handler,
+ void (*event_handler)(struct ib_event *, void *),
+ void *cq_context, int cqe);
+
+/**
+ * ib_resize_cq - Modifies the capacity of the CQ.
+ * @cq: The CQ to resize.
+ * @cqe: The minimum size of the CQ.
+ *
+ * Users can examine the cq structure to determine the actual CQ size.
+ */
+int ib_resize_cq(struct ib_cq *cq, int cqe);
+
+/**
+ * ib_destroy_cq - Destroys the specified CQ.
+ * @cq: The CQ to destroy.
+ */
+int ib_destroy_cq(struct ib_cq *cq);
+
+/**
+ * ib_poll_cq - poll a CQ for completion(s)
+ * @cq:the CQ being polled
+ * @num_entries:maximum number of completions to return
+ * @wc:array of at least @num_entries &struct ib_wc where completions
+ * will be returned
+ *
+ * Poll a CQ for (possibly multiple) completions. If the return value
+ * is < 0, an error occurred. If the return value is >= 0, it is the
+ * number of completions returned. If the return value is
+ * non-negative and < num_entries, then the CQ was emptied.
+ */
+static inline int ib_poll_cq(struct ib_cq *cq, int num_entries,
+ struct ib_wc *wc)
+{
+ return cq->device->poll_cq(cq, num_entries, wc);
+}
+
+/**
+ * ib_peek_cq - Returns the number of unreaped completions currently
+ * on the specified CQ.
+ * @cq: The CQ to peek.
+ * @wc_cnt: A minimum number of unreaped completions to check for.
+ *
+ * If the number of unreaped completions is greater than or equal to wc_cnt,
+ * this function returns wc_cnt, otherwise, it returns the actual number of
+ * unreaped completions.
+ */
+int ib_peek_cq(struct ib_cq *cq, int wc_cnt);
+
+/**
+ * ib_req_notify_cq - Request completion notification on a CQ.
+ * @cq: The CQ to generate an event for.
+ * @cq_notify: If set to %IB_CQ_SOLICITED, completion notification will
+ * occur on the next solicited event. If set to %IB_CQ_NEXT_COMP,
+ * notification will occur on the next completion.
+ */
+static inline int ib_req_notify_cq(struct ib_cq *cq,
+ enum ib_cq_notify cq_notify)
+{
+ return cq->device->req_notify_cq(cq, cq_notify);
+}
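+
+/*
+ * Illustrative sketch: a completion handler that re-arms the CQ before
+ * draining it, so that a completion arriving after the final poll still
+ * generates a new event. handle_completion() is a placeholder:
+ *
+ *	static void my_comp_handler(struct ib_cq *cq, void *cq_context)
+ *	{
+ *		struct ib_wc wc;
+ *
+ *		ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
+ *		while (ib_poll_cq(cq, 1, &wc) > 0)
+ *			handle_completion(cq_context, &wc);
+ *	}
+ */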
+
+/**
+ * ib_req_ncomp_notif - Request completion notification when there are
+ * at least the specified number of unreaped completions on the CQ.
+ * @cq: The CQ to generate an event for.
+ * @wc_cnt: The number of unreaped completions that should be on the
+ * CQ before an event is generated.
+ */
+static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
+{
+ return cq->device->req_ncomp_notif ?
+ cq->device->req_ncomp_notif(cq, wc_cnt) :
+ -ENOSYS;
+}
+
+/**
+ * ib_get_dma_mr - Returns a memory region for system memory that is
+ * usable for DMA.
+ * @pd: The protection domain associated with the memory region.
+ * @mr_access_flags: Specifies the memory access rights.
+ */
+struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
+
+/**
+ * ib_reg_phys_mr - Prepares a virtually addressed memory region for use
+ * by an HCA.
+ * @pd: The protection domain assigned to the registered region.
+ * @phys_buf_array: Specifies a list of physical buffers to use in the
+ * memory region.
+ * @num_phys_buf: Specifies the size of the phys_buf_array.
+ * @mr_access_flags: Specifies the memory access rights.
+ * @iova_start: The offset of the region's starting I/O virtual address.
+ */
+struct ib_mr *ib_reg_phys_mr(struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start);
+
+/**
+ * ib_rereg_phys_mr - Modifies the attributes of an existing memory region.
+ * Conceptually, this call deregisters the memory region and then
+ * registers a new physical memory region in its place. Where possible,
+ * resources are reused instead of deallocated and reallocated.
+ * @mr: The memory region to modify.
+ * @mr_rereg_mask: A bit-mask used to indicate which of the following
+ * properties of the memory region are being modified.
+ * @pd: If %IB_MR_REREG_PD is set in mr_rereg_mask, this field specifies
+ * the new protection domain to associated with the memory region,
+ * otherwise, this parameter is ignored.
+ * @phys_buf_array: If %IB_MR_REREG_TRANS is set in mr_rereg_mask, this
+ * field specifies a list of physical buffers to use in the new
+ * translation, otherwise, this parameter is ignored.
+ * @num_phys_buf: If %IB_MR_REREG_TRANS is set in mr_rereg_mask, this
+ * field specifies the size of the phys_buf_array, otherwise, this
+ * parameter is ignored.
+ * @mr_access_flags: If %IB_MR_REREG_ACCESS is set in mr_rereg_mask, this
+ * field specifies the new memory access rights, otherwise, this
+ * parameter is ignored.
+ * @iova_start: The offset of the region's starting I/O virtual address.
+ */
+int ib_rereg_phys_mr(struct ib_mr *mr,
+ int mr_rereg_mask,
+ struct ib_pd *pd,
+ struct ib_phys_buf *phys_buf_array,
+ int num_phys_buf,
+ int mr_access_flags,
+ u64 *iova_start);
+
+/**
+ * ib_query_mr - Retrieves information about a specific memory region.
+ * @mr: The memory region to retrieve information about.
+ * @mr_attr: The attributes of the specified memory region.
+ */
+int ib_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr);
+
+/**
+ * ib_dereg_mr - Deregisters a memory region and removes it from the
+ * HCA translation table.
+ * @mr: The memory region to deregister.
+ */
+int ib_dereg_mr(struct ib_mr *mr);
+
+/**
+ * ib_alloc_mw - Allocates a memory window.
+ * @pd: The protection domain associated with the memory window.
+ */
+struct ib_mw *ib_alloc_mw(struct ib_pd *pd);
+
+/**
+ * ib_bind_mw - Posts a work request to the send queue of the specified
+ * QP, which binds the memory window to the given address range and
+ * remote access attributes.
+ * @qp: QP to post the bind work request on.
+ * @mw: The memory window to bind.
+ * @mw_bind: Specifies information about the memory window, including
+ * its address range, remote access rights, and associated memory region.
+ */
+static inline int ib_bind_mw(struct ib_qp *qp,
+ struct ib_mw *mw,
+ struct ib_mw_bind *mw_bind)
+{
+ /* XXX reference counting in corresponding MR? */
+ return mw->device->bind_mw ?
+ mw->device->bind_mw(qp, mw, mw_bind) :
+ -ENOSYS;
+}
+
+/**
+ * ib_dealloc_mw - Deallocates a memory window.
+ * @mw: The memory window to deallocate.
+ */
+int ib_dealloc_mw(struct ib_mw *mw);
+
+/**
+ * ib_alloc_fmr - Allocates an unmapped fast memory region.
+ * @pd: The protection domain associated with the unmapped region.
+ * @mr_access_flags: Specifies the memory access rights.
+ * @fmr_attr: Attributes of the unmapped region.
+ *
+ * A fast memory region must be mapped before it can be used as part of
+ * a work request.
+ */
+struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd,
+ int mr_access_flags,
+ struct ib_fmr_attr *fmr_attr);
+
+/**
+ * ib_map_phys_fmr - Maps a list of physical pages to a fast memory region.
+ * @fmr: The fast memory region to associate with the pages.
+ * @page_list: An array of physical pages to map to the fast memory region.
+ * @list_len: The number of pages in page_list.
+ * @iova: The I/O virtual address to use with the mapped region.
+ */
+static inline int ib_map_phys_fmr(struct ib_fmr *fmr,
+ u64 *page_list, int list_len,
+ u64 iova)
+{
+ return fmr->device->map_phys_fmr(fmr, page_list, list_len, iova);
+}
+
+/**
+ * ib_unmap_fmr - Removes the mapping from a list of fast memory regions.
+ * @fmr_list: A linked list of fast memory regions to unmap.
+ */
+int ib_unmap_fmr(struct list_head *fmr_list);
+
+/**
+ * ib_dealloc_fmr - Deallocates a fast memory region.
+ * @fmr: The fast memory region to deallocate.
+ */
+int ib_dealloc_fmr(struct ib_fmr *fmr);
+
+/**
+ * ib_attach_mcast - Attaches the specified QP to a multicast group.
+ * @qp: QP to attach to the multicast group. The QP must be type
+ * IB_QPT_UD.
+ * @gid: Multicast group GID.
+ * @lid: Multicast group LID in host byte order.
+ *
+ * In order to send and receive multicast packets, subnet
+ * administration must have created the multicast group and configured
+ * the fabric appropriately. The port associated with the specified
+ * QP must also be a member of the multicast group.
+ */
+int ib_attach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+
+/**
+ * ib_detach_mcast - Detaches the specified QP from a multicast group.
+ * @qp: QP to detach from the multicast group.
+ * @gid: Multicast group GID.
+ * @lid: Multicast group LID in host byte order.
+ */
+int ib_detach_mcast(struct ib_qp *qp, union ib_gid *gid, u16 lid);
+
+#endif /* IB_VERBS_H */
--- /dev/null
+config INFINIBAND_IPOIB
+ tristate "IP-over-InfiniBand"
+ depends on INFINIBAND && NETDEVICES && INET
+ ---help---
+ Support for the IP-over-InfiniBand protocol (IPoIB). This
+ transports IP packets over InfiniBand so you can use your IB
+ device as a fancy NIC.
+
+ The IPoIB protocol is defined by the IETF ipoib working
+ group: <http://www.ietf.org/html.charters/ipoib-charter.html>.
+
+config INFINIBAND_IPOIB_DEBUG
+ bool "IP-over-InfiniBand debugging"
+ depends on INFINIBAND_IPOIB
+ ---help---
+ This option causes debugging code to be compiled into the
+ IPoIB driver. The output can be turned on via the
+ debug_level and mcast_debug_level module parameters (which
+ can also be set after the driver is loaded through sysfs).
+
+	  This option also creates an "ipoib_debugfs" filesystem, which
+	  can be mounted to expose debugging information about the IB
+	  multicast groups used by the IPoIB driver.
+
+config INFINIBAND_IPOIB_DEBUG_DATA
+ bool "IP-over-InfiniBand data path debugging"
+ depends on INFINIBAND_IPOIB_DEBUG
+ ---help---
+	  This option compiles debugging code into the data path
+ of the IPoIB driver. The output can be turned on via the
+ data_debug_level module parameter; however, even with output
+ turned off, this debugging code will have some performance
+ impact.
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib.h 1358 2004-12-17 22:00:11Z roland $
+ */
+
+#ifndef _IPOIB_H
+#define _IPOIB_H
+
+#include <linux/list.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+#include <linux/config.h>
+#include <linux/kref.h>
+#include <linux/if_infiniband.h>
+
+#include <net/neighbour.h>
+
+#include <asm/atomic.h>
+#include <asm/semaphore.h>
+
+#include <ib_verbs.h>
+#include <ib_pack.h>
+#include <ib_sa.h>
+
+/* constants */
+
+enum {
+ IPOIB_PACKET_SIZE = 2048,
+ IPOIB_BUF_SIZE = IPOIB_PACKET_SIZE + IB_GRH_BYTES,
+
+ IPOIB_ENCAP_LEN = 4,
+
+ IPOIB_RX_RING_SIZE = 128,
+ IPOIB_TX_RING_SIZE = 64,
+
+ IPOIB_NUM_WC = 4,
+
+ IPOIB_MAX_PATH_REC_QUEUE = 3,
+ IPOIB_MAX_MCAST_QUEUE = 3,
+
+ IPOIB_FLAG_OPER_UP = 0,
+ IPOIB_FLAG_ADMIN_UP = 1,
+ IPOIB_PKEY_ASSIGNED = 2,
+ IPOIB_PKEY_STOP = 3,
+ IPOIB_FLAG_SUBINTERFACE = 4,
+ IPOIB_MCAST_RUN = 5,
+ IPOIB_STOP_REAPER = 6,
+
+ IPOIB_MAX_BACKOFF_SECONDS = 16,
+
+ IPOIB_MCAST_FLAG_FOUND = 0, /* used in set_multicast_list */
+ IPOIB_MCAST_FLAG_SENDONLY = 1,
+ IPOIB_MCAST_FLAG_BUSY = 2, /* joining or already joined */
+ IPOIB_MCAST_FLAG_ATTACHED = 3,
+};
+
+/* structs */
+
+struct ipoib_header {
+ u16 proto;
+ u16 reserved;
+};
+
+struct ipoib_pseudoheader {
+ u8 hwaddr[INFINIBAND_ALEN];
+};
+
+struct ipoib_mcast;
+
+struct ipoib_buf {
+ struct sk_buff *skb;
+ DECLARE_PCI_UNMAP_ADDR(mapping)
+};
+
+/*
+ * Device private locking: tx_lock protects members used in TX fast
+ * path (and we use LLTX so upper layers don't do extra locking).
+ * lock protects everything else. lock nests inside of tx_lock (i.e.
+ * tx_lock must be acquired first if needed).
+ */
+struct ipoib_dev_priv {
+ spinlock_t lock;
+
+ struct net_device *dev;
+
+ unsigned long flags;
+
+ struct semaphore mcast_mutex;
+ struct semaphore vlan_mutex;
+
+ struct rb_root path_tree;
+ struct list_head path_list;
+
+ struct ipoib_mcast *broadcast;
+ struct list_head multicast_list;
+ struct rb_root multicast_tree;
+
+ struct work_struct pkey_task;
+ struct work_struct mcast_task;
+ struct work_struct flush_task;
+ struct work_struct restart_task;
+ struct work_struct ah_reap_task;
+
+ struct ib_device *ca;
+ u8 port;
+ u16 pkey;
+ struct ib_pd *pd;
+ struct ib_mr *mr;
+ struct ib_cq *cq;
+ struct ib_qp *qp;
+ u32 qkey;
+
+ union ib_gid local_gid;
+ u16 local_lid;
+ u8 local_rate;
+
+ unsigned int admin_mtu;
+ unsigned int mcast_mtu;
+
+ struct ipoib_buf *rx_ring;
+
+ spinlock_t tx_lock;
+ struct ipoib_buf *tx_ring;
+ unsigned tx_head;
+ unsigned tx_tail;
+ struct ib_sge tx_sge;
+ struct ib_send_wr tx_wr;
+
+ struct ib_wc ibwc[IPOIB_NUM_WC];
+
+ struct list_head dead_ahs;
+
+ struct ib_event_handler event_handler;
+
+ struct net_device_stats stats;
+
+ struct net_device *parent;
+ struct list_head child_intfs;
+ struct list_head list;
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
+ struct list_head fs_list;
+ struct dentry *mcg_dentry;
+#endif
+};
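
The locking comment above struct ipoib_dev_priv implies the following nesting
whenever both locks are needed; the send path added later in this patch
(ipoib_start_xmit takes tx_lock, then neigh_add_path takes lock) uses exactly
this order. Sketch only:

	spin_lock_irqsave(&priv->tx_lock, flags);	/* outer: TX fast-path lock */
	spin_lock(&priv->lock);				/* inner: everything else */
	/* ... touch state covered by both locks ... */
	spin_unlock(&priv->lock);
	spin_unlock_irqrestore(&priv->tx_lock, flags);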
+
+struct ipoib_ah {
+ struct net_device *dev;
+ struct ib_ah *ah;
+ struct list_head list;
+ struct kref ref;
+ unsigned last_send;
+};
+
+struct ipoib_path {
+ struct net_device *dev;
+ struct ib_sa_path_rec pathrec;
+ struct ipoib_ah *ah;
+ struct sk_buff_head queue;
+
+ struct list_head neigh_list;
+
+ int query_id;
+ struct ib_sa_query *query;
+ struct completion done;
+
+ struct rb_node rb_node;
+ struct list_head list;
+};
+
+struct ipoib_neigh {
+ struct ipoib_ah *ah;
+ struct sk_buff_head queue;
+
+ struct neighbour *neighbour;
+
+ struct list_head list;
+};
+
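+/*
+ * The IPoIB hardware address is only INFINIBAND_ALEN (20) bytes, so the
+ * tail of neigh->ha is unused; a pointer to the corresponding ipoib_neigh
+ * is stashed there.  The offsetof() arithmetic below picks an 8-byte
+ * aligned slot inside ha for that pointer.
+ */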
+static inline struct ipoib_neigh **to_ipoib_neigh(struct neighbour *neigh)
+{
+ return (struct ipoib_neigh **) (neigh->ha + 24 -
+ (offsetof(struct neighbour, ha) & 4));
+}
+
+extern struct workqueue_struct *ipoib_workqueue;
+
+/* functions */
+
+void ipoib_ib_completion(struct ib_cq *cq, void *dev_ptr);
+
+struct ipoib_ah *ipoib_create_ah(struct net_device *dev,
+ struct ib_pd *pd, struct ib_ah_attr *attr);
+void ipoib_free_ah(struct kref *kref);
+static inline void ipoib_put_ah(struct ipoib_ah *ah)
+{
+ kref_put(&ah->ref, ipoib_free_ah);
+}
+
+int ipoib_add_pkey_attr(struct net_device *dev);
+
+void ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ struct ipoib_ah *address, u32 qpn);
+void ipoib_reap_ah(void *dev_ptr);
+
+void ipoib_flush_paths(struct net_device *dev);
+struct ipoib_dev_priv *ipoib_intf_alloc(const char *format);
+
+int ipoib_ib_dev_init(struct net_device *dev, struct ib_device *ca, int port);
+void ipoib_ib_dev_flush(void *dev);
+void ipoib_ib_dev_cleanup(struct net_device *dev);
+
+int ipoib_ib_dev_open(struct net_device *dev);
+int ipoib_ib_dev_up(struct net_device *dev);
+int ipoib_ib_dev_down(struct net_device *dev);
+int ipoib_ib_dev_stop(struct net_device *dev);
+
+int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port);
+void ipoib_dev_cleanup(struct net_device *dev);
+
+void ipoib_mcast_join_task(void *dev_ptr);
+void ipoib_mcast_send(struct net_device *dev, union ib_gid *mgid,
+ struct sk_buff *skb);
+
+void ipoib_mcast_restart_task(void *dev_ptr);
+int ipoib_mcast_start_thread(struct net_device *dev);
+int ipoib_mcast_stop_thread(struct net_device *dev);
+
+void ipoib_mcast_dev_down(struct net_device *dev);
+void ipoib_mcast_dev_flush(struct net_device *dev);
+
+struct ipoib_mcast_iter *ipoib_mcast_iter_init(struct net_device *dev);
+void ipoib_mcast_iter_free(struct ipoib_mcast_iter *iter);
+int ipoib_mcast_iter_next(struct ipoib_mcast_iter *iter);
+void ipoib_mcast_iter_read(struct ipoib_mcast_iter *iter,
+ union ib_gid *gid,
+ unsigned long *created,
+ unsigned int *queuelen,
+ unsigned int *complete,
+ unsigned int *send_only);
+
+int ipoib_mcast_attach(struct net_device *dev, u16 mlid,
+ union ib_gid *mgid);
+int ipoib_mcast_detach(struct net_device *dev, u16 mlid,
+ union ib_gid *mgid);
+
+int ipoib_qp_create(struct net_device *dev);
+int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca);
+void ipoib_transport_dev_cleanup(struct net_device *dev);
+
+void ipoib_event(struct ib_event_handler *handler,
+ struct ib_event *record);
+
+int ipoib_vlan_add(struct net_device *pdev, unsigned short pkey);
+int ipoib_vlan_delete(struct net_device *pdev, unsigned short pkey);
+
+void ipoib_pkey_poll(void *dev);
+int ipoib_pkey_dev_delay_open(struct net_device *dev);
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
+int ipoib_create_debug_file(struct net_device *dev);
+void ipoib_delete_debug_file(struct net_device *dev);
+int ipoib_register_debugfs(void);
+void ipoib_unregister_debugfs(void);
+#else
+static inline int ipoib_create_debug_file(struct net_device *dev) { return 0; }
+static inline void ipoib_delete_debug_file(struct net_device *dev) { }
+static inline int ipoib_register_debugfs(void) { return 0; }
+static inline void ipoib_unregister_debugfs(void) { }
+#endif
+
+
+#define ipoib_printk(level, priv, format, arg...) \
+ printk(level "%s: " format, ((struct ipoib_dev_priv *) priv)->dev->name , ## arg)
+#define ipoib_warn(priv, format, arg...) \
+ ipoib_printk(KERN_WARNING, priv, format , ## arg)
+
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
+extern int debug_level;
+
+#define ipoib_dbg(priv, format, arg...) \
+ do { \
+ if (debug_level > 0) \
+ ipoib_printk(KERN_DEBUG, priv, format , ## arg); \
+ } while (0)
+#define ipoib_dbg_mcast(priv, format, arg...) \
+ do { \
+ if (mcast_debug_level > 0) \
+ ipoib_printk(KERN_DEBUG, priv, format , ## arg); \
+ } while (0)
+#else /* CONFIG_INFINIBAND_IPOIB_DEBUG */
+#define ipoib_dbg(priv, format, arg...) \
+ do { (void) (priv); } while (0)
+#define ipoib_dbg_mcast(priv, format, arg...) \
+ do { (void) (priv); } while (0)
+#endif /* CONFIG_INFINIBAND_IPOIB_DEBUG */
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG_DATA
+#define ipoib_dbg_data(priv, format, arg...) \
+ do { \
+ if (data_debug_level > 0) \
+ ipoib_printk(KERN_DEBUG, priv, format , ## arg); \
+ } while (0)
+#else /* CONFIG_INFINIBAND_IPOIB_DEBUG_DATA */
+#define ipoib_dbg_data(priv, format, arg...) \
+ do { (void) (priv); } while (0)
+#endif /* CONFIG_INFINIBAND_IPOIB_DEBUG_DATA */
+
+
+#define IPOIB_GID_FMT "%x:%x:%x:%x:%x:%x:%x:%x"
+
+#define IPOIB_GID_ARG(gid) be16_to_cpup((__be16 *) ((gid).raw + 0)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 2)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 4)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 6)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 8)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 10)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 12)), \
+ be16_to_cpup((__be16 *) ((gid).raw + 14))
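
These two macros are meant to be used together in one format string, for
example (as the path record code later in this patch does):

	ipoib_dbg(priv, "GID " IPOIB_GID_FMT "\n", IPOIB_GID_ARG(path->pathrec.dgid));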
+
+#endif /* _IPOIB_H */
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_fs.c 1389 2004-12-27 22:56:47Z roland $
+ */
+
+#include <linux/pagemap.h>
+#include <linux/seq_file.h>
+
+#include "ipoib.h"
+
+enum {
+ IPOIB_MAGIC = 0x49504942 /* "IPIB" */
+};
+
+static DECLARE_MUTEX(ipoib_fs_mutex);
+static struct dentry *ipoib_root;
+static struct super_block *ipoib_sb;
+static LIST_HEAD(ipoib_device_list);
+
+static void *ipoib_mcg_seq_start(struct seq_file *file, loff_t *pos)
+{
+ struct ipoib_mcast_iter *iter;
+ loff_t n = *pos;
+
+ iter = ipoib_mcast_iter_init(file->private);
+ if (!iter)
+ return NULL;
+
+ while (n--) {
+ if (ipoib_mcast_iter_next(iter)) {
+ ipoib_mcast_iter_free(iter);
+ return NULL;
+ }
+ }
+
+ return iter;
+}
+
+static void *ipoib_mcg_seq_next(struct seq_file *file, void *iter_ptr,
+ loff_t *pos)
+{
+ struct ipoib_mcast_iter *iter = iter_ptr;
+
+ (*pos)++;
+
+ if (ipoib_mcast_iter_next(iter)) {
+ ipoib_mcast_iter_free(iter);
+ return NULL;
+ }
+
+ return iter;
+}
+
+static void ipoib_mcg_seq_stop(struct seq_file *file, void *iter_ptr)
+{
+ /* nothing for now */
+}
+
+static int ipoib_mcg_seq_show(struct seq_file *file, void *iter_ptr)
+{
+ struct ipoib_mcast_iter *iter = iter_ptr;
+ char gid_buf[sizeof "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff"];
+ union ib_gid mgid;
+ int i, n;
+ unsigned long created;
+ unsigned int queuelen, complete, send_only;
+
+ if (iter) {
+ ipoib_mcast_iter_read(iter, &mgid, &created, &queuelen,
+ &complete, &send_only);
+
+ for (n = 0, i = 0; i < sizeof mgid / 2; ++i) {
+ n += sprintf(gid_buf + n, "%x",
+ be16_to_cpu(((u16 *)mgid.raw)[i]));
+ if (i < sizeof mgid / 2 - 1)
+ gid_buf[n++] = ':';
+ }
+ }
+
+ seq_printf(file, "GID: %*s", -(1 + (int) sizeof gid_buf), gid_buf);
+
+ seq_printf(file,
+ " created: %10ld queuelen: %4d complete: %d send_only: %d\n",
+ created, queuelen, complete, send_only);
+
+ return 0;
+}
+
+static struct seq_operations ipoib_seq_ops = {
+ .start = ipoib_mcg_seq_start,
+ .next = ipoib_mcg_seq_next,
+ .stop = ipoib_mcg_seq_stop,
+ .show = ipoib_mcg_seq_show,
+};
+
+static int ipoib_mcg_open(struct inode *inode, struct file *file)
+{
+ struct seq_file *seq;
+ int ret;
+
+ ret = seq_open(file, &ipoib_seq_ops);
+ if (ret)
+ return ret;
+
+ seq = file->private_data;
+ seq->private = inode->u.generic_ip;
+
+ return 0;
+}
+
+static struct file_operations ipoib_fops = {
+ .owner = THIS_MODULE,
+ .open = ipoib_mcg_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release
+};
+
+static struct inode *ipoib_get_inode(void)
+{
+ struct inode *inode = new_inode(ipoib_sb);
+
+ if (inode) {
+ inode->i_mode = S_IFREG | S_IRUGO;
+ inode->i_uid = 0;
+ inode->i_gid = 0;
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_blocks = 0;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_fop = &ipoib_fops;
+ }
+
+ return inode;
+}
+
+static int __ipoib_create_debug_file(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct dentry *dentry;
+ struct inode *inode;
+ char name[IFNAMSIZ + sizeof "_mcg"];
+
+ snprintf(name, sizeof name, "%s_mcg", dev->name);
+
+ dentry = d_alloc_name(ipoib_root, name);
+ if (!dentry)
+ return -ENOMEM;
+
+ inode = ipoib_get_inode();
+ if (!inode) {
+ dput(dentry);
+ return -ENOMEM;
+ }
+
+ inode->u.generic_ip = dev;
+ priv->mcg_dentry = dentry;
+
+ d_add(dentry, inode);
+
+ return 0;
+}
+
+int ipoib_create_debug_file(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ down(&ipoib_fs_mutex);
+
+ list_add_tail(&priv->fs_list, &ipoib_device_list);
+
+ if (!ipoib_sb) {
+ up(&ipoib_fs_mutex);
+ return 0;
+ }
+
+ up(&ipoib_fs_mutex);
+
+ return __ipoib_create_debug_file(dev);
+}
+
+void ipoib_delete_debug_file(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ down(&ipoib_fs_mutex);
+ list_del(&priv->fs_list);
+ if (!ipoib_sb) {
+ up(&ipoib_fs_mutex);
+ return;
+ }
+ up(&ipoib_fs_mutex);
+
+ if (priv->mcg_dentry) {
+ d_drop(priv->mcg_dentry);
+ simple_unlink(ipoib_root->d_inode, priv->mcg_dentry);
+ }
+}
+
+static int ipoib_fill_super(struct super_block *sb, void *data, int silent)
+{
+ static struct tree_descr ipoib_files[] = {
+ { "" }
+ };
+ struct ipoib_dev_priv *priv;
+ int ret;
+
+ ret = simple_fill_super(sb, IPOIB_MAGIC, ipoib_files);
+ if (ret)
+ return ret;
+
+ ipoib_root = sb->s_root;
+
+ down(&ipoib_fs_mutex);
+
+ ipoib_sb = sb;
+
+ list_for_each_entry(priv, &ipoib_device_list, fs_list) {
+ ret = __ipoib_create_debug_file(priv->dev);
+ if (ret)
+ break;
+ }
+
+ up(&ipoib_fs_mutex);
+
+ return ret;
+}
+
+static struct super_block *ipoib_get_sb(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *data)
+{
+ return get_sb_single(fs_type, flags, data, ipoib_fill_super);
+}
+
+static void ipoib_kill_sb(struct super_block *sb)
+{
+ down(&ipoib_fs_mutex);
+ ipoib_sb = NULL;
+ up(&ipoib_fs_mutex);
+
+ kill_litter_super(sb);
+}
+
+static struct file_system_type ipoib_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "ipoib_debugfs",
+ .get_sb = ipoib_get_sb,
+ .kill_sb = ipoib_kill_sb,
+};
+
+int ipoib_register_debugfs(void)
+{
+ return register_filesystem(&ipoib_fs_type);
+}
+
+void ipoib_unregister_debugfs(void)
+{
+ unregister_filesystem(&ipoib_fs_type);
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_ib.c 1386 2004-12-27 16:23:17Z roland $
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+
+#include <ib_cache.h>
+
+#include "ipoib.h"
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG_DATA
+int data_debug_level;
+
+module_param(data_debug_level, int, 0644);
+MODULE_PARM_DESC(data_debug_level,
+ "Enable data path debug tracing if > 0");
+#endif
+
+#define IPOIB_OP_RECV (1ul << 31)
+
+static DECLARE_MUTEX(pkey_sem);
+
+struct ipoib_ah *ipoib_create_ah(struct net_device *dev,
+ struct ib_pd *pd, struct ib_ah_attr *attr)
+{
+ struct ipoib_ah *ah;
+
+ ah = kmalloc(sizeof *ah, GFP_KERNEL);
+ if (!ah)
+ return NULL;
+
+ ah->dev = dev;
+ ah->last_send = 0;
+ kref_init(&ah->ref);
+
+ ah->ah = ib_create_ah(pd, attr);
+ if (IS_ERR(ah->ah)) {
+ kfree(ah);
+ ah = NULL;
+ } else
+ ipoib_dbg(netdev_priv(dev), "Created ah %p\n", ah->ah);
+
+ return ah;
+}
+
+void ipoib_free_ah(struct kref *kref)
+{
+ struct ipoib_ah *ah = container_of(kref, struct ipoib_ah, ref);
+ struct ipoib_dev_priv *priv = netdev_priv(ah->dev);
+
+ unsigned long flags;
+
+ if (ah->last_send <= priv->tx_tail) {
+ ipoib_dbg(priv, "Freeing ah %p\n", ah->ah);
+ ib_destroy_ah(ah->ah);
+ kfree(ah);
+ } else {
+ spin_lock_irqsave(&priv->lock, flags);
+ list_add_tail(&ah->list, &priv->dead_ahs);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+}
+
+static inline int ipoib_ib_receive(struct ipoib_dev_priv *priv,
+ unsigned int wr_id,
+ dma_addr_t addr)
+{
+ struct ib_sge list = {
+ .addr = addr,
+ .length = IPOIB_BUF_SIZE,
+ .lkey = priv->mr->lkey,
+ };
+ struct ib_recv_wr param = {
+ .wr_id = wr_id | IPOIB_OP_RECV,
+ .sg_list = &list,
+ .num_sge = 1,
+ .recv_flags = IB_RECV_SIGNALED
+ };
+ struct ib_recv_wr *bad_wr;
+
+	return ib_post_recv(priv->qp, &param, &bad_wr);
+}
+
+static int ipoib_ib_post_receive(struct net_device *dev, int id)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct sk_buff *skb;
+ dma_addr_t addr;
+ int ret;
+
+ skb = dev_alloc_skb(IPOIB_BUF_SIZE + 4);
+ if (!skb) {
+ ipoib_warn(priv, "failed to allocate receive buffer\n");
+
+ priv->rx_ring[id].skb = NULL;
+ return -ENOMEM;
+ }
+ skb_reserve(skb, 4); /* 16 byte align IP header */
+ priv->rx_ring[id].skb = skb;
+ addr = dma_map_single(priv->ca->dma_device,
+ skb->data, IPOIB_BUF_SIZE,
+ DMA_FROM_DEVICE);
+ pci_unmap_addr_set(&priv->rx_ring[id], mapping, addr);
+
+ ret = ipoib_ib_receive(priv, id, addr);
+ if (ret) {
+ ipoib_warn(priv, "ipoib_ib_receive failed for buf %d (%d)\n",
+ id, ret);
+ priv->rx_ring[id].skb = NULL;
+ }
+
+ return ret;
+}
+
+static int ipoib_ib_post_receives(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int i;
+
+ for (i = 0; i < IPOIB_RX_RING_SIZE; ++i) {
+ if (ipoib_ib_post_receive(dev, i)) {
+ ipoib_warn(priv, "ipoib_ib_post_receive failed for buf %d\n", i);
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
+
+static void ipoib_ib_handle_wc(struct net_device *dev,
+ struct ib_wc *wc)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ unsigned int wr_id = wc->wr_id;
+
+ ipoib_dbg_data(priv, "called: id %d, op %d, status: %d\n",
+ wr_id, wc->opcode, wc->status);
+
+ if (wr_id & IPOIB_OP_RECV) {
+ wr_id &= ~IPOIB_OP_RECV;
+
+ if (wr_id < IPOIB_RX_RING_SIZE) {
+ struct sk_buff *skb = priv->rx_ring[wr_id].skb;
+
+ priv->rx_ring[wr_id].skb = NULL;
+
+ dma_unmap_single(priv->ca->dma_device,
+ pci_unmap_addr(&priv->rx_ring[wr_id],
+ mapping),
+ IPOIB_BUF_SIZE,
+ DMA_FROM_DEVICE);
+
+ if (wc->status != IB_WC_SUCCESS) {
+ if (wc->status != IB_WC_WR_FLUSH_ERR)
+ ipoib_warn(priv, "failed recv event "
+ "(status=%d, wrid=%d vend_err %x)\n",
+ wc->status, wr_id, wc->vendor_err);
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ ipoib_dbg_data(priv, "received %d bytes, SLID 0x%04x\n",
+ wc->byte_len, wc->slid);
+
+ skb_put(skb, wc->byte_len);
+ skb_pull(skb, IB_GRH_BYTES);
+
+ if (wc->slid != priv->local_lid ||
+ wc->src_qp != priv->qp->qp_num) {
+ skb->protocol = ((struct ipoib_header *) skb->data)->proto;
+
+ skb_pull(skb, IPOIB_ENCAP_LEN);
+
+ dev->last_rx = jiffies;
+ ++priv->stats.rx_packets;
+ priv->stats.rx_bytes += skb->len;
+
+ skb->dev = dev;
+ /* XXX get correct PACKET_ type here */
+ skb->pkt_type = PACKET_HOST;
+ netif_rx_ni(skb);
+ } else {
+ ipoib_dbg_data(priv, "dropping loopback packet\n");
+ dev_kfree_skb_any(skb);
+ }
+
+ /* repost receive */
+ if (ipoib_ib_post_receive(dev, wr_id))
+ ipoib_warn(priv, "ipoib_ib_post_receive failed "
+ "for buf %d\n", wr_id);
+ } else
+ ipoib_warn(priv, "completion event with wrid %d\n",
+ wr_id);
+
+ } else {
+ struct ipoib_buf *tx_req;
+ unsigned long flags;
+
+ if (wr_id >= IPOIB_TX_RING_SIZE) {
+ ipoib_warn(priv, "completion event with wrid %d (> %d)\n",
+ wr_id, IPOIB_TX_RING_SIZE);
+ return;
+ }
+
+ ipoib_dbg_data(priv, "send complete, wrid %d\n", wr_id);
+
+ tx_req = &priv->tx_ring[wr_id];
+
+ dma_unmap_single(priv->ca->dma_device,
+ pci_unmap_addr(tx_req, mapping),
+ tx_req->skb->len,
+ DMA_TO_DEVICE);
+
+ ++priv->stats.tx_packets;
+ priv->stats.tx_bytes += tx_req->skb->len;
+
+ dev_kfree_skb_any(tx_req->skb);
+
+ spin_lock_irqsave(&priv->tx_lock, flags);
+ ++priv->tx_tail;
+ if (netif_queue_stopped(dev) &&
+ priv->tx_head - priv->tx_tail <= IPOIB_TX_RING_SIZE / 2)
+ netif_wake_queue(dev);
+ spin_unlock_irqrestore(&priv->tx_lock, flags);
+
+ if (wc->status != IB_WC_SUCCESS &&
+ wc->status != IB_WC_WR_FLUSH_ERR)
+ ipoib_warn(priv, "failed send event "
+ "(status=%d, wrid=%d vend_err %x)\n",
+ wc->status, wr_id, wc->vendor_err);
+ }
+}
+
+void ipoib_ib_completion(struct ib_cq *cq, void *dev_ptr)
+{
+ struct net_device *dev = (struct net_device *) dev_ptr;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int n, i;
+
+ ib_req_notify_cq(cq, IB_CQ_NEXT_COMP);
+ do {
+ n = ib_poll_cq(cq, IPOIB_NUM_WC, priv->ibwc);
+ for (i = 0; i < n; ++i)
+ ipoib_ib_handle_wc(dev, priv->ibwc + i);
+ } while (n == IPOIB_NUM_WC);
+}
+
+static inline int post_send(struct ipoib_dev_priv *priv,
+ unsigned int wr_id,
+ struct ib_ah *address, u32 qpn,
+ dma_addr_t addr, int len)
+{
+ struct ib_send_wr *bad_wr;
+
+ priv->tx_sge.addr = addr;
+ priv->tx_sge.length = len;
+
+ priv->tx_wr.wr_id = wr_id;
+ priv->tx_wr.wr.ud.remote_qpn = qpn;
+ priv->tx_wr.wr.ud.ah = address;
+
+ return ib_post_send(priv->qp, &priv->tx_wr, &bad_wr);
+}
+
+void ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ struct ipoib_ah *address, u32 qpn)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_buf *tx_req;
+ dma_addr_t addr;
+
+ if (skb->len > dev->mtu + INFINIBAND_ALEN) {
+ ipoib_warn(priv, "packet len %d (> %d) too long to send, dropping\n",
+ skb->len, dev->mtu + INFINIBAND_ALEN);
+ ++priv->stats.tx_dropped;
+ ++priv->stats.tx_errors;
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ ipoib_dbg_data(priv, "sending packet, length=%d address=%p qpn=0x%06x\n",
+ skb->len, address, qpn);
+
+ /*
+ * We put the skb into the tx_ring _before_ we call post_send()
+ * because it's entirely possible that the completion handler will
+ * run before we execute anything after the post_send(). That
+ * means we have to make sure everything is properly recorded and
+ * our state is consistent before we call post_send().
+ */
+ tx_req = &priv->tx_ring[priv->tx_head & (IPOIB_TX_RING_SIZE - 1)];
+ tx_req->skb = skb;
+ addr = dma_map_single(priv->ca->dma_device, skb->data, skb->len,
+ DMA_TO_DEVICE);
+ pci_unmap_addr_set(tx_req, mapping, addr);
+
+ if (unlikely(post_send(priv, priv->tx_head & (IPOIB_TX_RING_SIZE - 1),
+ address->ah, qpn, addr, skb->len))) {
+ ipoib_warn(priv, "post_send failed\n");
+ ++priv->stats.tx_errors;
+ dma_unmap_single(priv->ca->dma_device, addr, skb->len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb_any(skb);
+ } else {
+ dev->trans_start = jiffies;
+
+ address->last_send = priv->tx_head;
+ ++priv->tx_head;
+
+ if (priv->tx_head - priv->tx_tail == IPOIB_TX_RING_SIZE) {
+ ipoib_dbg(priv, "TX ring full, stopping kernel net queue\n");
+ netif_stop_queue(dev);
+ }
+ }
+}
+
+static void __ipoib_reap_ah(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_ah *ah, *tah;
+ LIST_HEAD(remove_list);
+
+ spin_lock_irq(&priv->lock);
+ list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list)
+ if (ah->last_send <= priv->tx_tail) {
+ list_del(&ah->list);
+ list_add_tail(&ah->list, &remove_list);
+ }
+ spin_unlock_irq(&priv->lock);
+
+ list_for_each_entry_safe(ah, tah, &remove_list, list) {
+ ipoib_dbg(priv, "Reaping ah %p\n", ah->ah);
+ ib_destroy_ah(ah->ah);
+ kfree(ah);
+ }
+}
+
+void ipoib_reap_ah(void *dev_ptr)
+{
+ struct net_device *dev = dev_ptr;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ __ipoib_reap_ah(dev);
+
+ if (!test_bit(IPOIB_STOP_REAPER, &priv->flags))
+ queue_delayed_work(ipoib_workqueue, &priv->ah_reap_task, HZ);
+}
+
+int ipoib_ib_dev_open(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int ret;
+
+ ret = ipoib_qp_create(dev);
+ if (ret) {
+ ipoib_warn(priv, "ipoib_qp_create returned %d\n", ret);
+ return -1;
+ }
+
+ ret = ipoib_ib_post_receives(dev);
+ if (ret) {
+ ipoib_warn(priv, "ipoib_ib_post_receives returned %d\n", ret);
+ return -1;
+ }
+
+ clear_bit(IPOIB_STOP_REAPER, &priv->flags);
+ queue_delayed_work(ipoib_workqueue, &priv->ah_reap_task, HZ);
+
+ return 0;
+}
+
+int ipoib_ib_dev_up(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ set_bit(IPOIB_FLAG_OPER_UP, &priv->flags);
+
+ return ipoib_mcast_start_thread(dev);
+}
+
+int ipoib_ib_dev_down(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg(priv, "downing ib_dev\n");
+
+ clear_bit(IPOIB_FLAG_OPER_UP, &priv->flags);
+ netif_carrier_off(dev);
+
+ /* Shutdown the P_Key thread if still active */
+ if (!test_bit(IPOIB_PKEY_ASSIGNED, &priv->flags)) {
+ down(&pkey_sem);
+ set_bit(IPOIB_PKEY_STOP, &priv->flags);
+ cancel_delayed_work(&priv->pkey_task);
+ up(&pkey_sem);
+ flush_workqueue(ipoib_workqueue);
+ }
+
+ ipoib_mcast_stop_thread(dev);
+
+ /*
+ * Flush the multicast groups first so we stop any multicast joins. The
+ * completion thread may have already died and we may deadlock waiting
+ * for the completion thread to finish some multicast joins.
+ */
+ ipoib_mcast_dev_flush(dev);
+
+ /* Delete broadcast and local addresses since they will be recreated */
+ ipoib_mcast_dev_down(dev);
+
+ ipoib_flush_paths(dev);
+
+ return 0;
+}
+
+static int recvs_pending(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int pending = 0;
+ int i;
+
+ for (i = 0; i < IPOIB_RX_RING_SIZE; ++i)
+ if (priv->rx_ring[i].skb)
+ ++pending;
+
+ return pending;
+}
+
+int ipoib_ib_dev_stop(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_qp_attr qp_attr;
+ int attr_mask;
+ unsigned long begin;
+ struct ipoib_buf *tx_req;
+ int i;
+
+ /* Kill the existing QP and allocate a new one */
+ qp_attr.qp_state = IB_QPS_ERR;
+ attr_mask = IB_QP_STATE;
+ if (ib_modify_qp(priv->qp, &qp_attr, attr_mask))
+ ipoib_warn(priv, "Failed to modify QP to ERROR state\n");
+
+ /* Wait for all sends and receives to complete */
+ begin = jiffies;
+
+ while (priv->tx_head != priv->tx_tail || recvs_pending(dev)) {
+ if (time_after(jiffies, begin + 5 * HZ)) {
+ ipoib_warn(priv, "timing out; %d sends %d receives not completed\n",
+ priv->tx_head - priv->tx_tail, recvs_pending(dev));
+
+ /*
+ * assume the HW is wedged and just free up
+ * all our pending work requests.
+ */
+ while (priv->tx_tail < priv->tx_head) {
+ tx_req = &priv->tx_ring[priv->tx_tail &
+ (IPOIB_TX_RING_SIZE - 1)];
+ dma_unmap_single(priv->ca->dma_device,
+ pci_unmap_addr(tx_req, mapping),
+ tx_req->skb->len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb_any(tx_req->skb);
+ ++priv->tx_tail;
+ }
+
+ for (i = 0; i < IPOIB_RX_RING_SIZE; ++i)
+ if (priv->rx_ring[i].skb) {
+ dma_unmap_single(priv->ca->dma_device,
+ pci_unmap_addr(&priv->rx_ring[i],
+ mapping),
+ IPOIB_BUF_SIZE,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb_any(priv->rx_ring[i].skb);
+ priv->rx_ring[i].skb = NULL;
+ }
+
+ goto timeout;
+ }
+
+ msleep(1);
+ }
+
+ ipoib_dbg(priv, "All sends and receives done.\n");
+
+timeout:
+ qp_attr.qp_state = IB_QPS_RESET;
+ attr_mask = IB_QP_STATE;
+ if (ib_modify_qp(priv->qp, &qp_attr, attr_mask))
+ ipoib_warn(priv, "Failed to modify QP to RESET state\n");
+
+ /* Wait for all AHs to be reaped */
+ set_bit(IPOIB_STOP_REAPER, &priv->flags);
+ cancel_delayed_work(&priv->ah_reap_task);
+ flush_workqueue(ipoib_workqueue);
+
+ begin = jiffies;
+
+ while (!list_empty(&priv->dead_ahs)) {
+ __ipoib_reap_ah(dev);
+
+ if (time_after(jiffies, begin + HZ)) {
+ ipoib_warn(priv, "timing out; will leak address handles\n");
+ break;
+ }
+
+ msleep(1);
+ }
+
+ return 0;
+}
+
+int ipoib_ib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ priv->ca = ca;
+ priv->port = port;
+ priv->qp = NULL;
+
+ if (ipoib_transport_dev_init(dev, ca)) {
+ printk(KERN_WARNING "%s: ipoib_transport_dev_init failed\n", ca->name);
+ return -ENODEV;
+ }
+
+ if (dev->flags & IFF_UP) {
+ if (ipoib_ib_dev_open(dev)) {
+ ipoib_transport_dev_cleanup(dev);
+ return -ENODEV;
+ }
+ }
+
+ return 0;
+}
+
+void ipoib_ib_dev_flush(void *_dev)
+{
+ struct net_device *dev = (struct net_device *)_dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev), *cpriv;
+
+ if (!test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))
+ return;
+
+ ipoib_dbg(priv, "flushing\n");
+
+ ipoib_ib_dev_down(dev);
+
+ /*
+ * The device could have been brought down between the start and when
+ * we get here, don't bring it back up if it's not configured up
+ */
+ if (test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))
+ ipoib_ib_dev_up(dev);
+
+ /* Flush any child interfaces too */
+ list_for_each_entry(cpriv, &priv->child_intfs, list)
+ ipoib_ib_dev_flush(&cpriv->dev);
+}
+
+void ipoib_ib_dev_cleanup(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg(priv, "cleaning up ib_dev\n");
+
+ ipoib_mcast_stop_thread(dev);
+
+ /* Delete the broadcast address and the local address */
+ ipoib_mcast_dev_down(dev);
+
+ ipoib_transport_dev_cleanup(dev);
+}
+
+/*
+ * Delayed P_Key Assignment Interim Support
+ *
+ * The following is an initial implementation of the delayed P_Key
+ * assignment mechanism.  It uses the same approach implemented for the
+ * multicast group join.  The single goal of this implementation is to
+ * quickly address Bug #2507.  This implementation will probably be removed
+ * when the P_Key change async notification is available.
+ */
+int ipoib_open(struct net_device *dev);
+
+static void ipoib_pkey_dev_check_presence(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ u16 pkey_index = 0;
+
+ if (ib_find_cached_pkey(priv->ca, priv->port, priv->pkey, &pkey_index))
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+ else
+ set_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+}
+
+void ipoib_pkey_poll(void *dev_ptr)
+{
+ struct net_device *dev = dev_ptr;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_pkey_dev_check_presence(dev);
+
+ if (test_bit(IPOIB_PKEY_ASSIGNED, &priv->flags))
+ ipoib_open(dev);
+ else {
+ down(&pkey_sem);
+ if (!test_bit(IPOIB_PKEY_STOP, &priv->flags))
+ queue_delayed_work(ipoib_workqueue,
+ &priv->pkey_task,
+ HZ);
+ up(&pkey_sem);
+ }
+}
+
+int ipoib_pkey_dev_delay_open(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ /* Look for the interface pkey value in the IB Port P_Key table and */
+	/* set the interface pkey assignment flag */
+ ipoib_pkey_dev_check_presence(dev);
+
+ /* P_Key value not assigned yet - start polling */
+ if (!test_bit(IPOIB_PKEY_ASSIGNED, &priv->flags)) {
+ down(&pkey_sem);
+ clear_bit(IPOIB_PKEY_STOP, &priv->flags);
+ queue_delayed_work(ipoib_workqueue,
+ &priv->pkey_task,
+ HZ);
+ up(&pkey_sem);
+ return 1;
+ }
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_main.c 1377 2004-12-23 19:57:12Z roland $
+ */
+
+#include "ipoib.h"
+
+#include <linux/version.h>
+#include <linux/module.h>
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#include <linux/if_arp.h> /* For ARPHRD_xxx */
+
+#include <linux/ip.h>
+#include <linux/in.h>
+
+MODULE_AUTHOR("Roland Dreier");
+MODULE_DESCRIPTION("IP-over-InfiniBand net driver");
+MODULE_LICENSE("Dual BSD/GPL");
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
+int debug_level;
+
+module_param(debug_level, int, 0644);
+MODULE_PARM_DESC(debug_level, "Enable debug tracing if > 0");
+#endif
+
+static const u8 ipv4_bcast_addr[] = {
+ 0x00, 0xff, 0xff, 0xff,
+ 0xff, 0x12, 0x40, 0x1b, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff
+};
+
+struct workqueue_struct *ipoib_workqueue;
+
+static void ipoib_add_one(struct ib_device *device);
+static void ipoib_remove_one(struct ib_device *device);
+
+static struct ib_client ipoib_client = {
+ .name = "ipoib",
+ .add = ipoib_add_one,
+ .remove = ipoib_remove_one
+};
+
+int ipoib_open(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg(priv, "bringing up interface\n");
+
+ set_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags);
+
+ if (ipoib_pkey_dev_delay_open(dev))
+ return 0;
+
+ if (ipoib_ib_dev_open(dev))
+ return -EINVAL;
+
+ if (ipoib_ib_dev_up(dev))
+ return -EINVAL;
+
+ if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
+ struct ipoib_dev_priv *cpriv;
+
+ /* Bring up any child interfaces too */
+ down(&priv->vlan_mutex);
+ list_for_each_entry(cpriv, &priv->child_intfs, list) {
+ int flags;
+
+ flags = cpriv->dev->flags;
+ if (flags & IFF_UP)
+ continue;
+
+ dev_change_flags(cpriv->dev, flags | IFF_UP);
+ }
+ up(&priv->vlan_mutex);
+ }
+
+ netif_start_queue(dev);
+
+ return 0;
+}
+
+static int ipoib_stop(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg(priv, "stopping interface\n");
+
+ clear_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags);
+
+ netif_stop_queue(dev);
+
+ ipoib_ib_dev_down(dev);
+ ipoib_ib_dev_stop(dev);
+
+ if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
+ struct ipoib_dev_priv *cpriv;
+
+ /* Bring down any child interfaces too */
+ down(&priv->vlan_mutex);
+ list_for_each_entry(cpriv, &priv->child_intfs, list) {
+ int flags;
+
+ flags = cpriv->dev->flags;
+ if (!(flags & IFF_UP))
+ continue;
+
+ dev_change_flags(cpriv->dev, flags & ~IFF_UP);
+ }
+ up(&priv->vlan_mutex);
+ }
+
+ return 0;
+}
+
+static int ipoib_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ if (new_mtu > IPOIB_PACKET_SIZE - IPOIB_ENCAP_LEN)
+ return -EINVAL;
+
+ priv->admin_mtu = new_mtu;
+
+ dev->mtu = min(priv->mcast_mtu, priv->admin_mtu);
+
+ return 0;
+}
+
+static struct ipoib_path *__path_find(struct net_device *dev,
+ union ib_gid *gid)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct rb_node *n = priv->path_tree.rb_node;
+ struct ipoib_path *path;
+ int ret;
+
+ while (n) {
+ path = rb_entry(n, struct ipoib_path, rb_node);
+
+ ret = memcmp(gid->raw, path->pathrec.dgid.raw,
+ sizeof (union ib_gid));
+
+ if (ret < 0)
+ n = n->rb_left;
+ else if (ret > 0)
+ n = n->rb_right;
+ else
+ return path;
+ }
+
+ return NULL;
+}
+
+static int __path_add(struct net_device *dev, struct ipoib_path *path)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct rb_node **n = &priv->path_tree.rb_node;
+ struct rb_node *pn = NULL;
+ struct ipoib_path *tpath;
+ int ret;
+
+ while (*n) {
+ pn = *n;
+ tpath = rb_entry(pn, struct ipoib_path, rb_node);
+
+ ret = memcmp(path->pathrec.dgid.raw, tpath->pathrec.dgid.raw,
+ sizeof (union ib_gid));
+ if (ret < 0)
+ n = &pn->rb_left;
+ else if (ret > 0)
+ n = &pn->rb_right;
+ else
+ return -EEXIST;
+ }
+
+ rb_link_node(&path->rb_node, pn, n);
+ rb_insert_color(&path->rb_node, &priv->path_tree);
+
+ list_add_tail(&path->list, &priv->path_list);
+
+ return 0;
+}
+
+static void __path_free(struct net_device *dev, struct ipoib_path *path)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_neigh *neigh, *tn;
+ struct sk_buff *skb;
+
+ while ((skb = __skb_dequeue(&path->queue)))
+ dev_kfree_skb_irq(skb);
+
+ list_for_each_entry_safe(neigh, tn, &path->neigh_list, list) {
+ if (neigh->ah)
+ ipoib_put_ah(neigh->ah);
+ *to_ipoib_neigh(neigh->neighbour) = NULL;
+ neigh->neighbour->ops->destructor = NULL;
+ kfree(neigh);
+ }
+
+ if (path->ah)
+ ipoib_put_ah(path->ah);
+
+ rb_erase(&path->rb_node, &priv->path_tree);
+ list_del(&path->list);
+ kfree(path);
+}
+
+void ipoib_flush_paths(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_path *path, *tp;
+ LIST_HEAD(remove_list);
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ list_splice(&priv->path_list, &remove_list);
+ INIT_LIST_HEAD(&priv->path_list);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ list_for_each_entry_safe(path, tp, &remove_list, list) {
+ if (path->query)
+ ib_sa_cancel_query(path->query_id, path->query);
+ wait_for_completion(&path->done);
+ __path_free(dev, path);
+ }
+}
+
+static void path_rec_completion(int status,
+ struct ib_sa_path_rec *pathrec,
+ void *path_ptr)
+{
+ struct ipoib_path *path = path_ptr;
+ struct net_device *dev = path->dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_ah *ah = NULL;
+ struct ipoib_neigh *neigh;
+ struct sk_buff_head skqueue;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ if (pathrec)
+ ipoib_dbg(priv, "PathRec LID 0x%04x for GID " IPOIB_GID_FMT "\n",
+ be16_to_cpu(pathrec->dlid), IPOIB_GID_ARG(pathrec->dgid));
+ else
+ ipoib_dbg(priv, "PathRec status %d for GID " IPOIB_GID_FMT "\n",
+ status, IPOIB_GID_ARG(path->pathrec.dgid));
+
+ skb_queue_head_init(&skqueue);
+
+ if (!status) {
+ struct ib_ah_attr av = {
+ .dlid = be16_to_cpu(pathrec->dlid),
+ .sl = pathrec->sl,
+ .port_num = priv->port
+ };
+
+ if (ib_sa_rate_enum_to_int(pathrec->rate) > 0)
+ av.static_rate = (2 * priv->local_rate -
+ ib_sa_rate_enum_to_int(pathrec->rate) - 1) /
+ (priv->local_rate ? priv->local_rate : 1);
+
+ ipoib_dbg(priv, "static_rate %d for local port %dX, path %dX\n",
+ av.static_rate, priv->local_rate,
+ ib_sa_rate_enum_to_int(pathrec->rate));
+
+ ah = ipoib_create_ah(dev, priv->pd, &av);
+ }
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ path->ah = ah;
+
+ if (ah) {
+ path->pathrec = *pathrec;
+
+ ipoib_dbg(priv, "created address handle %p for LID 0x%04x, SL %d\n",
+ ah, be16_to_cpu(pathrec->dlid), pathrec->sl);
+
+ while ((skb = __skb_dequeue(&path->queue)))
+ __skb_queue_tail(&skqueue, skb);
+
+ list_for_each_entry(neigh, &path->neigh_list, list) {
+ kref_get(&path->ah->ref);
+ neigh->ah = path->ah;
+
+ while ((skb = __skb_dequeue(&neigh->queue)))
+ __skb_queue_tail(&skqueue, skb);
+ }
+ } else
+ path->query = NULL;
+
+ complete(&path->done);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ while ((skb = __skb_dequeue(&skqueue))) {
+ skb->dev = dev;
+ if (dev_queue_xmit(skb))
+ ipoib_warn(priv, "dev_queue_xmit failed "
+ "to requeue packet\n");
+ }
+}
+
+static struct ipoib_path *path_rec_create(struct net_device *dev,
+ union ib_gid *gid)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_path *path;
+
+ path = kmalloc(sizeof *path, GFP_ATOMIC);
+ if (!path)
+ return NULL;
+
+ path->dev = dev;
+ path->pathrec.dlid = 0;
+
+ skb_queue_head_init(&path->queue);
+
+ INIT_LIST_HEAD(&path->neigh_list);
+ path->query = NULL;
+ init_completion(&path->done);
+
+ memcpy(path->pathrec.dgid.raw, gid->raw, sizeof (union ib_gid));
+ path->pathrec.sgid = priv->local_gid;
+ path->pathrec.pkey = cpu_to_be16(priv->pkey);
+ path->pathrec.numb_path = 1;
+
+ __path_add(dev, path);
+
+ return path;
+}
+
+static int path_rec_start(struct net_device *dev,
+ struct ipoib_path *path)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg(priv, "Start path record lookup for " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(path->pathrec.dgid));
+
+ path->query_id =
+ ib_sa_path_rec_get(priv->ca, priv->port,
+ &path->pathrec,
+ IB_SA_PATH_REC_DGID |
+ IB_SA_PATH_REC_SGID |
+ IB_SA_PATH_REC_NUMB_PATH |
+ IB_SA_PATH_REC_PKEY,
+ 1000, GFP_ATOMIC,
+ path_rec_completion,
+ path, &path->query);
+ if (path->query_id < 0) {
+ ipoib_warn(priv, "ib_sa_path_rec_get failed\n");
+ path->query = NULL;
+ return path->query_id;
+ }
+
+ return 0;
+}
+
+static void neigh_add_path(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_path *path;
+ struct ipoib_neigh *neigh;
+
+ neigh = kmalloc(sizeof *neigh, GFP_ATOMIC);
+ if (!neigh) {
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ skb_queue_head_init(&neigh->queue);
+ neigh->neighbour = skb->dst->neighbour;
+ *to_ipoib_neigh(skb->dst->neighbour) = neigh;
+
+ /*
+ * We can only be called from ipoib_start_xmit, so we're
+ * inside tx_lock -- no need to save/restore flags.
+ */
+ spin_lock(&priv->lock);
+
+ path = __path_find(dev, (union ib_gid *) (skb->dst->neighbour->ha + 4));
+ if (!path) {
+ path = path_rec_create(dev,
+ (union ib_gid *) (skb->dst->neighbour->ha + 4));
+ if (!path)
+ goto err;
+ }
+
+ list_add_tail(&neigh->list, &path->neigh_list);
+
+ if (path->pathrec.dlid) {
+ kref_get(&path->ah->ref);
+ neigh->ah = path->ah;
+
+ ipoib_send(dev, skb, path->ah,
+ be32_to_cpup((__be32 *) skb->dst->neighbour->ha));
+ } else {
+ neigh->ah = NULL;
+ if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+ __skb_queue_tail(&neigh->queue, skb);
+ } else {
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+ }
+
+ if (!path->query && path_rec_start(dev, path))
+ goto err;
+ }
+
+ spin_unlock(&priv->lock);
+ return;
+
+err:
+ *to_ipoib_neigh(skb->dst->neighbour) = NULL;
+ list_del(&neigh->list);
+	neigh->neighbour->ops->destructor = NULL;
+	kfree(neigh);
+
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+
+ spin_unlock(&priv->lock);
+}
+
+static void path_lookup(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(skb->dev);
+
+ /* Look up path record for unicasts */
+ if (skb->dst->neighbour->ha[4] != 0xff) {
+ neigh_add_path(skb, dev);
+ return;
+ }
+
+ /* Add in the P_Key for multicasts */
+ skb->dst->neighbour->ha[8] = (priv->pkey >> 8) & 0xff;
+ skb->dst->neighbour->ha[9] = priv->pkey & 0xff;
+ ipoib_mcast_send(dev, (union ib_gid *) (skb->dst->neighbour->ha + 4), skb);
+}
+
+static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+ struct ipoib_pseudoheader *phdr)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_path *path;
+
+ /*
+ * We can only be called from ipoib_start_xmit, so we're
+ * inside tx_lock -- no need to save/restore flags.
+ */
+ spin_lock(&priv->lock);
+
+ path = __path_find(dev, (union ib_gid *) (phdr->hwaddr + 4));
+ if (!path) {
+ path = path_rec_create(dev,
+ (union ib_gid *) (phdr->hwaddr + 4));
+ if (path) {
+ /* put pseudoheader back on for next time */
+ skb_push(skb, sizeof *phdr);
+ __skb_queue_tail(&path->queue, skb);
+
+ if (path_rec_start(dev, path))
+ __path_free(dev, path);
+ } else {
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock(&priv->lock);
+ return;
+ }
+
+ if (path->pathrec.dlid) {
+ ipoib_dbg(priv, "Send unicast ARP to %04x\n",
+ be16_to_cpu(path->pathrec.dlid));
+
+ ipoib_send(dev, skb, path->ah,
+ be32_to_cpup((__be32 *) phdr->hwaddr));
+ } else if ((path->query || !path_rec_start(dev, path)) &&
+ skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+ /* put pseudoheader back on for next time */
+ skb_push(skb, sizeof *phdr);
+ __skb_queue_tail(&path->queue, skb);
+ } else {
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock(&priv->lock);
+}
+
+static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_neigh *neigh;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (!spin_trylock(&priv->tx_lock)) {
+ local_irq_restore(flags);
+ return NETDEV_TX_LOCKED;
+ }
+
+ /*
+ * Check if our queue is stopped. Since we have the LLTX bit
+ * set, we can't rely on netif_stop_queue() preventing our
+ * xmit function from being called with a full queue.
+ */
+ if (unlikely(netif_queue_stopped(dev))) {
+ spin_unlock_irqrestore(&priv->tx_lock, flags);
+ return NETDEV_TX_BUSY;
+ }
+
+ if (skb->dst && skb->dst->neighbour) {
+ if (unlikely(!*to_ipoib_neigh(skb->dst->neighbour))) {
+ path_lookup(skb, dev);
+ goto out;
+ }
+
+ neigh = *to_ipoib_neigh(skb->dst->neighbour);
+
+ if (likely(neigh->ah)) {
+ ipoib_send(dev, skb, neigh->ah,
+ be32_to_cpup((__be32 *) skb->dst->neighbour->ha));
+ goto out;
+ }
+
+ if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+ spin_lock(&priv->lock);
+ __skb_queue_tail(&neigh->queue, skb);
+ spin_unlock(&priv->lock);
+ } else {
+ ++priv->stats.tx_dropped;
+ dev_kfree_skb_any(skb);
+ }
+ } else {
+ struct ipoib_pseudoheader *phdr =
+ (struct ipoib_pseudoheader *) skb->data;
+ skb_pull(skb, sizeof *phdr);
+
+		if (phdr->hwaddr[4] == 0xff) {
+			/* Add in the P_Key for multicast */
+ /* Add in the P_Key for multicast*/
+ phdr->hwaddr[8] = (priv->pkey >> 8) & 0xff;
+ phdr->hwaddr[9] = priv->pkey & 0xff;
+
+ ipoib_mcast_send(dev, (union ib_gid *) (phdr->hwaddr + 4), skb);
+ } else {
+ /* unicast GID -- should be ARP reply */
+
+ if (be16_to_cpup((u16 *) skb->data) != ETH_P_ARP) {
+ ipoib_warn(priv, "Unicast, no %s: type %04x, QPN %06x "
+ IPOIB_GID_FMT "\n",
+ skb->dst ? "neigh" : "dst",
+ be16_to_cpup((u16 *) skb->data),
+ be32_to_cpup((u32 *) phdr->hwaddr),
+ IPOIB_GID_ARG(*(union ib_gid *) (phdr->hwaddr + 4)));
+ dev_kfree_skb_any(skb);
+ ++priv->stats.tx_dropped;
+ goto out;
+ }
+
+ unicast_arp_send(skb, dev, phdr);
+ }
+ }
+
+out:
+ spin_unlock_irqrestore(&priv->tx_lock, flags);
+
+ return NETDEV_TX_OK;
+}
+
+static struct net_device_stats *ipoib_get_stats(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ return &priv->stats;
+}
+
+static void ipoib_timeout(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_warn(priv, "transmit timeout: latency %ld\n",
+ jiffies - dev->trans_start);
+ /* XXX reset QP, etc. */
+}
+
+static int ipoib_hard_header(struct sk_buff *skb,
+ struct net_device *dev,
+ unsigned short type,
+ void *daddr, void *saddr, unsigned len)
+{
+ struct ipoib_header *header;
+
+ header = (struct ipoib_header *) skb_push(skb, sizeof *header);
+
+ header->proto = htons(type);
+ header->reserved = 0;
+
+ /*
+ * If we don't have a neighbour structure, stuff the
+ * destination address onto the front of the skb so we can
+ * figure out where to send the packet later.
+ */
+ if (!skb->dst || !skb->dst->neighbour) {
+ struct ipoib_pseudoheader *phdr =
+ (struct ipoib_pseudoheader *) skb_push(skb, sizeof *phdr);
+ memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
+ }
+
+ return 0;
+}
+
+static void ipoib_set_mcast_list(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ schedule_work(&priv->restart_task);
+}
+
+static void ipoib_neigh_destructor(struct neighbour *n)
+{
+ struct ipoib_neigh *neigh = *to_ipoib_neigh(n);
+ struct ipoib_dev_priv *priv = netdev_priv(n->dev);
+ unsigned long flags;
+
+ ipoib_dbg(priv,
+ "neigh_destructor for %06x " IPOIB_GID_FMT "\n",
+ be32_to_cpup((__be32 *) n->ha),
+ IPOIB_GID_ARG(*((union ib_gid *) (n->ha + 4))));
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (neigh) {
+ if (neigh->ah)
+ ipoib_put_ah(neigh->ah);
+ list_del(&neigh->list);
+ *to_ipoib_neigh(n) = NULL;
+ kfree(neigh);
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+static int ipoib_neigh_setup(struct neighbour *neigh)
+{
+ /*
+ * Is this kosher? I can't find anybody in the kernel that
+ * sets neigh->destructor, so we should be able to set it here
+ * without trouble.
+ */
+ neigh->ops->destructor = ipoib_neigh_destructor;
+
+ return 0;
+}
+
+static int ipoib_neigh_setup_dev(struct net_device *dev, struct neigh_parms *parms)
+{
+ parms->neigh_setup = ipoib_neigh_setup;
+
+ return 0;
+}
+
+int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ /* Allocate RX/TX "rings" to hold queued skbs */
+
+ priv->rx_ring = kmalloc(IPOIB_RX_RING_SIZE * sizeof (struct ipoib_buf),
+ GFP_KERNEL);
+ if (!priv->rx_ring) {
+ printk(KERN_WARNING "%s: failed to allocate RX ring (%d entries)\n",
+ ca->name, IPOIB_RX_RING_SIZE);
+ goto out;
+ }
+ memset(priv->rx_ring, 0,
+ IPOIB_RX_RING_SIZE * sizeof (struct ipoib_buf));
+
+ priv->tx_ring = kmalloc(IPOIB_TX_RING_SIZE * sizeof (struct ipoib_buf),
+ GFP_KERNEL);
+ if (!priv->tx_ring) {
+ printk(KERN_WARNING "%s: failed to allocate TX ring (%d entries)\n",
+ ca->name, IPOIB_TX_RING_SIZE);
+ goto out_rx_ring_cleanup;
+ }
+ memset(priv->tx_ring, 0,
+ IPOIB_TX_RING_SIZE * sizeof (struct ipoib_buf));
+
+ /* priv->tx_head & tx_tail are already 0 */
+
+ if (ipoib_ib_dev_init(dev, ca, port))
+ goto out_tx_ring_cleanup;
+
+ return 0;
+
+out_tx_ring_cleanup:
+ kfree(priv->tx_ring);
+
+out_rx_ring_cleanup:
+ kfree(priv->rx_ring);
+
+out:
+ return -ENOMEM;
+}
+
+void ipoib_dev_cleanup(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev), *cpriv, *tcpriv;
+
+ ipoib_delete_debug_file(dev);
+
+ /* Delete any child interfaces first */
+ list_for_each_entry_safe(cpriv, tcpriv, &priv->child_intfs, list) {
+ unregister_netdev(cpriv->dev);
+ ipoib_dev_cleanup(cpriv->dev);
+ free_netdev(cpriv->dev);
+ }
+
+ ipoib_ib_dev_cleanup(dev);
+
+ if (priv->rx_ring) {
+ kfree(priv->rx_ring);
+ priv->rx_ring = NULL;
+ }
+
+ if (priv->tx_ring) {
+ kfree(priv->tx_ring);
+ priv->tx_ring = NULL;
+ }
+}
+
+static void ipoib_setup(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ dev->open = ipoib_open;
+ dev->stop = ipoib_stop;
+ dev->change_mtu = ipoib_change_mtu;
+ dev->hard_start_xmit = ipoib_start_xmit;
+ dev->get_stats = ipoib_get_stats;
+ dev->tx_timeout = ipoib_timeout;
+ dev->hard_header = ipoib_hard_header;
+ dev->set_multicast_list = ipoib_set_mcast_list;
+ dev->neigh_setup = ipoib_neigh_setup_dev;
+
+ dev->watchdog_timeo = HZ;
+
+ dev->rebuild_header = NULL;
+ dev->set_mac_address = NULL;
+ dev->header_cache_update = NULL;
+
+ dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
+
+ /*
+ * We add in INFINIBAND_ALEN to allow for the destination
+ * address "pseudoheader" for skbs without neighbour struct.
+ */
+ dev->hard_header_len = IPOIB_ENCAP_LEN + INFINIBAND_ALEN;
+ dev->addr_len = INFINIBAND_ALEN;
+ dev->type = ARPHRD_INFINIBAND;
+ dev->tx_queue_len = IPOIB_TX_RING_SIZE * 2;
+ dev->features = NETIF_F_VLAN_CHALLENGED | NETIF_F_LLTX;
+
+ /* MTU will be reset when mcast join happens */
+ dev->mtu = IPOIB_PACKET_SIZE - IPOIB_ENCAP_LEN;
+ priv->mcast_mtu = priv->admin_mtu = dev->mtu;
+
+ memcpy(dev->broadcast, ipv4_bcast_addr, INFINIBAND_ALEN);
+
+ netif_carrier_off(dev);
+
+ SET_MODULE_OWNER(dev);
+
+ priv->dev = dev;
+
+ spin_lock_init(&priv->lock);
+ spin_lock_init(&priv->tx_lock);
+
+ init_MUTEX(&priv->mcast_mutex);
+ init_MUTEX(&priv->vlan_mutex);
+
+ INIT_LIST_HEAD(&priv->path_list);
+ INIT_LIST_HEAD(&priv->child_intfs);
+ INIT_LIST_HEAD(&priv->dead_ahs);
+ INIT_LIST_HEAD(&priv->multicast_list);
+
+ INIT_WORK(&priv->pkey_task, ipoib_pkey_poll, priv->dev);
+ INIT_WORK(&priv->mcast_task, ipoib_mcast_join_task, priv->dev);
+ INIT_WORK(&priv->flush_task, ipoib_ib_dev_flush, priv->dev);
+ INIT_WORK(&priv->restart_task, ipoib_mcast_restart_task, priv->dev);
+ INIT_WORK(&priv->ah_reap_task, ipoib_reap_ah, priv->dev);
+}
+
+struct ipoib_dev_priv *ipoib_intf_alloc(const char *name)
+{
+ struct net_device *dev;
+
+ dev = alloc_netdev((int) sizeof (struct ipoib_dev_priv), name,
+ ipoib_setup);
+ if (!dev)
+ return NULL;
+
+ return netdev_priv(dev);
+}
+
+static ssize_t show_pkey(struct class_device *cdev, char *buf)
+{
+ struct ipoib_dev_priv *priv =
+ netdev_priv(container_of(cdev, struct net_device, class_dev));
+
+ return sprintf(buf, "0x%04x\n", priv->pkey);
+}
+static CLASS_DEVICE_ATTR(pkey, S_IRUGO, show_pkey, NULL);
+
+static ssize_t create_child(struct class_device *cdev,
+ const char *buf, size_t count)
+{
+ int pkey;
+ int ret;
+
+ if (sscanf(buf, "%i", &pkey) != 1)
+ return -EINVAL;
+
+ if (pkey < 0 || pkey > 0xffff)
+ return -EINVAL;
+
+ ret = ipoib_vlan_add(container_of(cdev, struct net_device, class_dev),
+ pkey);
+
+ return ret ? ret : count;
+}
+static CLASS_DEVICE_ATTR(create_child, S_IWUGO, NULL, create_child);
+
+static ssize_t delete_child(struct class_device *cdev,
+ const char *buf, size_t count)
+{
+ int pkey;
+ int ret;
+
+ if (sscanf(buf, "%i", &pkey) != 1)
+ return -EINVAL;
+
+ if (pkey < 0 || pkey > 0xffff)
+ return -EINVAL;
+
+ ret = ipoib_vlan_delete(container_of(cdev, struct net_device, class_dev),
+ pkey);
+
+ return ret ? ret : count;
+
+}
+static CLASS_DEVICE_ATTR(delete_child, S_IWUGO, NULL, delete_child);
+
+int ipoib_add_pkey_attr(struct net_device *dev)
+{
+ return class_device_create_file(&dev->class_dev,
+ &class_device_attr_pkey);
+}
+
+static struct net_device *ipoib_add_port(const char *format,
+ struct ib_device *hca, u8 port)
+{
+ struct ipoib_dev_priv *priv;
+ int result = -ENOMEM;
+
+ priv = ipoib_intf_alloc(format);
+ if (!priv)
+ goto alloc_mem_failed;
+
+ SET_NETDEV_DEV(priv->dev, hca->dma_device);
+
+ result = ib_query_pkey(hca, port, 0, &priv->pkey);
+ if (result) {
+ printk(KERN_WARNING "%s: ib_query_pkey port %d failed (ret = %d)\n",
+ hca->name, port, result);
+		goto device_init_failed;
+ }
+
+ priv->dev->broadcast[8] = priv->pkey >> 8;
+ priv->dev->broadcast[9] = priv->pkey & 0xff;
+
+ result = ib_query_gid(hca, port, 0, &priv->local_gid);
+ if (result) {
+ printk(KERN_WARNING "%s: ib_query_gid port %d failed (ret = %d)\n",
+ hca->name, port, result);
+		goto device_init_failed;
+ } else
+ memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
+
+ result = ipoib_dev_init(priv->dev, hca, port);
+ if (result < 0) {
+ printk(KERN_WARNING "%s: failed to initialize port %d (ret = %d)\n",
+ hca->name, port, result);
+ goto device_init_failed;
+ }
+
+ INIT_IB_EVENT_HANDLER(&priv->event_handler,
+ priv->ca, ipoib_event);
+ result = ib_register_event_handler(&priv->event_handler);
+ if (result < 0) {
+ printk(KERN_WARNING "%s: ib_register_event_handler failed for "
+ "port %d (ret = %d)\n",
+ hca->name, port, result);
+ goto event_failed;
+ }
+
+ result = register_netdev(priv->dev);
+ if (result) {
+ printk(KERN_WARNING "%s: couldn't register ipoib port %d; error %d\n",
+ hca->name, port, result);
+ goto register_failed;
+ }
+
+ if (ipoib_create_debug_file(priv->dev))
+ goto debug_failed;
+
+ if (ipoib_add_pkey_attr(priv->dev))
+ goto sysfs_failed;
+ if (class_device_create_file(&priv->dev->class_dev,
+ &class_device_attr_create_child))
+ goto sysfs_failed;
+ if (class_device_create_file(&priv->dev->class_dev,
+ &class_device_attr_delete_child))
+ goto sysfs_failed;
+
+ return priv->dev;
+
+sysfs_failed:
+ ipoib_delete_debug_file(priv->dev);
+
+debug_failed:
+ unregister_netdev(priv->dev);
+
+register_failed:
+ ib_unregister_event_handler(&priv->event_handler);
+
+event_failed:
+ ipoib_dev_cleanup(priv->dev);
+
+device_init_failed:
+ free_netdev(priv->dev);
+
+alloc_mem_failed:
+ return ERR_PTR(result);
+}
+
+static void ipoib_add_one(struct ib_device *device)
+{
+ struct list_head *dev_list;
+ struct net_device *dev;
+ struct ipoib_dev_priv *priv;
+ int s, e, p;
+
+ dev_list = kmalloc(sizeof *dev_list, GFP_KERNEL);
+ if (!dev_list)
+ return;
+
+ INIT_LIST_HEAD(dev_list);
+
+ if (device->node_type == IB_NODE_SWITCH) {
+ s = 0;
+ e = 0;
+ } else {
+ s = 1;
+ e = device->phys_port_cnt;
+ }
+
+ for (p = s; p <= e; ++p) {
+ dev = ipoib_add_port("ib%d", device, p);
+ if (!IS_ERR(dev)) {
+ priv = netdev_priv(dev);
+ list_add_tail(&priv->list, dev_list);
+ }
+ }
+
+ ib_set_client_data(device, &ipoib_client, dev_list);
+}
+
+static void ipoib_remove_one(struct ib_device *device)
+{
+ struct ipoib_dev_priv *priv, *tmp;
+ struct list_head *dev_list;
+
+ dev_list = ib_get_client_data(device, &ipoib_client);
+
+ list_for_each_entry_safe(priv, tmp, dev_list, list) {
+ ib_unregister_event_handler(&priv->event_handler);
+
+ unregister_netdev(priv->dev);
+ ipoib_dev_cleanup(priv->dev);
+ free_netdev(priv->dev);
+ }
+}
+
+static int __init ipoib_init_module(void)
+{
+ int ret;
+
+ ret = ipoib_register_debugfs();
+ if (ret)
+ return ret;
+
+ /*
+ * We create our own workqueue mainly because we want to be
+ * able to flush it when devices are being removed. We can't
+ * use schedule_work()/flush_scheduled_work() because both
+ * unregister_netdev() and linkwatch_event take the rtnl lock,
+ * so flush_scheduled_work() can deadlock during device
+ * removal.
+ */
+ ipoib_workqueue = create_singlethread_workqueue("ipoib");
+ if (!ipoib_workqueue) {
+ ret = -ENOMEM;
+ goto err_fs;
+ }
+
+ ret = ib_register_client(&ipoib_client);
+ if (ret)
+ goto err_wq;
+
+ return 0;
+
+err_wq:
+	destroy_workqueue(ipoib_workqueue);
+
+err_fs:
+	ipoib_unregister_debugfs();
+
+	return ret;
+}
+
+static void __exit ipoib_cleanup_module(void)
+{
+ ipoib_unregister_debugfs();
+ ib_unregister_client(&ipoib_client);
+ destroy_workqueue(ipoib_workqueue);
+}
+
+module_init(ipoib_init_module);
+module_exit(ipoib_cleanup_module);
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_multicast.c 1362 2004-12-18 15:56:29Z roland $
+ */
+
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/ip.h>
+#include <linux/in.h>
+#include <linux/igmp.h>
+#include <linux/inetdevice.h>
+#include <linux/delay.h>
+#include <linux/completion.h>
+
+#include "ipoib.h"
+
+#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
+static int mcast_debug_level;
+
+module_param(mcast_debug_level, int, 0644);
+MODULE_PARM_DESC(mcast_debug_level,
+ "Enable multicast debug tracing if > 0");
+#endif
+
+static DECLARE_MUTEX(mcast_mutex);
+
+/* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */
+struct ipoib_mcast {
+ struct ib_sa_mcmember_rec mcmember;
+ struct ipoib_ah *ah;
+
+ struct rb_node rb_node;
+ struct list_head list;
+ struct completion done;
+
+ int query_id;
+ struct ib_sa_query *query;
+
+ unsigned long created;
+ unsigned long backoff;
+
+ unsigned long flags;
+ unsigned char logcount;
+
+ struct list_head neigh_list;
+
+ struct sk_buff_head pkt_queue;
+
+ struct net_device *dev;
+};
+
+struct ipoib_mcast_iter {
+ struct net_device *dev;
+ union ib_gid mgid;
+ unsigned long created;
+ unsigned int queuelen;
+ unsigned int complete;
+ unsigned int send_only;
+};
+
+static void ipoib_mcast_free(struct ipoib_mcast *mcast)
+{
+ struct net_device *dev = mcast->dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_neigh *neigh, *tmp;
+ unsigned long flags;
+
+ ipoib_dbg_mcast(netdev_priv(dev),
+ "deleting multicast group " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ list_for_each_entry_safe(neigh, tmp, &mcast->neigh_list, list) {
+ ipoib_put_ah(neigh->ah);
+ *to_ipoib_neigh(neigh->neighbour) = NULL;
+ neigh->neighbour->ops->destructor = NULL;
+ kfree(neigh);
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ if (mcast->ah)
+ ipoib_put_ah(mcast->ah);
+
+ while (!skb_queue_empty(&mcast->pkt_queue)) {
+ struct sk_buff *skb = skb_dequeue(&mcast->pkt_queue);
+
+ skb->dev = dev;
+ dev_kfree_skb_any(skb);
+ }
+
+ kfree(mcast);
+}
+
+static struct ipoib_mcast *ipoib_mcast_alloc(struct net_device *dev,
+ int can_sleep)
+{
+ struct ipoib_mcast *mcast;
+
+ mcast = kmalloc(sizeof (*mcast), can_sleep ? GFP_KERNEL : GFP_ATOMIC);
+ if (!mcast)
+ return NULL;
+
+ memset(mcast, 0, sizeof (*mcast));
+
+ init_completion(&mcast->done);
+
+ mcast->dev = dev;
+ mcast->created = jiffies;
+ mcast->backoff = HZ;
+ mcast->logcount = 0;
+
+ INIT_LIST_HEAD(&mcast->list);
+ INIT_LIST_HEAD(&mcast->neigh_list);
+ skb_queue_head_init(&mcast->pkt_queue);
+
+ mcast->ah = NULL;
+ mcast->query = NULL;
+
+ return mcast;
+}
+
+static struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, union ib_gid *mgid)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct rb_node *n = priv->multicast_tree.rb_node;
+
+ while (n) {
+ struct ipoib_mcast *mcast;
+ int ret;
+
+ mcast = rb_entry(n, struct ipoib_mcast, rb_node);
+
+ ret = memcmp(mgid->raw, mcast->mcmember.mgid.raw,
+ sizeof (union ib_gid));
+ if (ret < 0)
+ n = n->rb_left;
+ else if (ret > 0)
+ n = n->rb_right;
+ else
+ return mcast;
+ }
+
+ return NULL;
+}
+
+static int __ipoib_mcast_add(struct net_device *dev, struct ipoib_mcast *mcast)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct rb_node **n = &priv->multicast_tree.rb_node, *pn = NULL;
+
+ while (*n) {
+ struct ipoib_mcast *tmcast;
+ int ret;
+
+ pn = *n;
+ tmcast = rb_entry(pn, struct ipoib_mcast, rb_node);
+
+ ret = memcmp(mcast->mcmember.mgid.raw, tmcast->mcmember.mgid.raw,
+ sizeof (union ib_gid));
+ if (ret < 0)
+ n = &pn->rb_left;
+ else if (ret > 0)
+ n = &pn->rb_right;
+ else
+ return -EEXIST;
+ }
+
+ rb_link_node(&mcast->rb_node, pn, n);
+ rb_insert_color(&mcast->rb_node, &priv->multicast_tree);
+
+ return 0;
+}
+
+static int ipoib_mcast_join_finish(struct ipoib_mcast *mcast,
+ struct ib_sa_mcmember_rec *mcmember)
+{
+ struct net_device *dev = mcast->dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int ret;
+
+ mcast->mcmember = *mcmember;
+
+ /* Set the cached Q_Key before we attach if it's the broadcast group */
+ if (!memcmp(mcast->mcmember.mgid.raw, priv->dev->broadcast + 4,
+ sizeof (union ib_gid))) {
+ priv->qkey = be32_to_cpu(priv->broadcast->mcmember.qkey);
+ priv->tx_wr.wr.ud.remote_qkey = priv->qkey;
+ }
+
+ if (!test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+ if (test_and_set_bit(IPOIB_MCAST_FLAG_ATTACHED, &mcast->flags)) {
+ ipoib_warn(priv, "multicast group " IPOIB_GID_FMT
+ " already attached\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ return 0;
+ }
+
+ ret = ipoib_mcast_attach(dev, be16_to_cpu(mcast->mcmember.mlid),
+ &mcast->mcmember.mgid);
+ if (ret < 0) {
+ ipoib_warn(priv, "couldn't attach QP to multicast group "
+ IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ clear_bit(IPOIB_MCAST_FLAG_ATTACHED, &mcast->flags);
+ return ret;
+ }
+ }
+
+ {
+ struct ib_ah_attr av = {
+ .dlid = be16_to_cpu(mcast->mcmember.mlid),
+ .port_num = priv->port,
+ .sl = mcast->mcmember.sl,
+ .ah_flags = IB_AH_GRH,
+ .grh = {
+ .flow_label = be32_to_cpu(mcast->mcmember.flow_label),
+ .hop_limit = mcast->mcmember.hop_limit,
+ .sgid_index = 0,
+ .traffic_class = mcast->mcmember.traffic_class
+ }
+ };
+
+ av.grh.dgid = mcast->mcmember.mgid;
+
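+		/*
+		 * Pick a static rate (inter-packet delay) to throttle
+		 * sends when the group runs slower than the local port.
+		 */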
+ if (ib_sa_rate_enum_to_int(mcast->mcmember.rate) > 0)
+ av.static_rate = (2 * priv->local_rate -
+ ib_sa_rate_enum_to_int(mcast->mcmember.rate) - 1) /
+ (priv->local_rate ? priv->local_rate : 1);
+
+ ipoib_dbg_mcast(priv, "static_rate %d for local port %dX, mcmember %dX\n",
+ av.static_rate, priv->local_rate,
+ ib_sa_rate_enum_to_int(mcast->mcmember.rate));
+
+ mcast->ah = ipoib_create_ah(dev, priv->pd, &av);
+ if (!mcast->ah) {
+			ipoib_warn(priv, "ipoib_create_ah failed\n");
+ } else {
+ ipoib_dbg_mcast(priv, "MGID " IPOIB_GID_FMT
+ " AV %p, LID 0x%04x, SL %d\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid),
+ mcast->ah->ah,
+ be16_to_cpu(mcast->mcmember.mlid),
+ mcast->mcmember.sl);
+ }
+ }
+
+ /* actually send any queued packets */
+ while (!skb_queue_empty(&mcast->pkt_queue)) {
+ struct sk_buff *skb = skb_dequeue(&mcast->pkt_queue);
+
+ skb->dev = dev;
+
+ if (!skb->dst || !skb->dst->neighbour) {
+ /* put pseudoheader back on for next time */
+ skb_push(skb, sizeof (struct ipoib_pseudoheader));
+ }
+
+ if (dev_queue_xmit(skb))
+ ipoib_warn(priv, "dev_queue_xmit failed to requeue packet\n");
+ }
+
+ return 0;
+}
+
+static void
+ipoib_mcast_sendonly_join_complete(int status,
+ struct ib_sa_mcmember_rec *mcmember,
+ void *mcast_ptr)
+{
+ struct ipoib_mcast *mcast = mcast_ptr;
+ struct net_device *dev = mcast->dev;
+
+ if (!status)
+ ipoib_mcast_join_finish(mcast, mcmember);
+ else {
+ if (mcast->logcount++ < 20)
+ ipoib_dbg_mcast(netdev_priv(dev), "multicast join failed for "
+ IPOIB_GID_FMT ", status %d\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid), status);
+
+ /* Flush out any queued packets */
+ while (!skb_queue_empty(&mcast->pkt_queue)) {
+ struct sk_buff *skb = skb_dequeue(&mcast->pkt_queue);
+
+ skb->dev = dev;
+
+ dev_kfree_skb_any(skb);
+ }
+
+ /* Clear the busy flag so we try again */
+ clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+ }
+
+ complete(&mcast->done);
+}
+
+static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
+{
+ struct net_device *dev = mcast->dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_sa_mcmember_rec rec = {
+#if 0 /* Some SMs don't support send-only yet */
+ .join_state = 4
+#else
+ .join_state = 1
+#endif
+ };
+ int ret = 0;
+
+ if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags)) {
+ ipoib_dbg_mcast(priv, "device shutting down, no multicast joins\n");
+ return -ENODEV;
+ }
+
+ if (test_and_set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags)) {
+ ipoib_dbg_mcast(priv, "multicast entry busy, skipping\n");
+ return -EBUSY;
+ }
+
+ rec.mgid = mcast->mcmember.mgid;
+ rec.port_gid = priv->local_gid;
+ rec.pkey = be16_to_cpu(priv->pkey);
+
+ ret = ib_sa_mcmember_rec_set(priv->ca, priv->port, &rec,
+ IB_SA_MCMEMBER_REC_MGID |
+ IB_SA_MCMEMBER_REC_PORT_GID |
+ IB_SA_MCMEMBER_REC_PKEY |
+ IB_SA_MCMEMBER_REC_JOIN_STATE,
+ 1000, GFP_ATOMIC,
+ ipoib_mcast_sendonly_join_complete,
+ mcast, &mcast->query);
+ if (ret < 0) {
+ ipoib_warn(priv, "ib_sa_mcmember_rec_set failed (ret = %d)\n",
+ ret);
+ } else {
+ ipoib_dbg_mcast(priv, "no multicast record for " IPOIB_GID_FMT
+ ", starting join\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ mcast->query_id = ret;
+ }
+
+ return ret;
+}
+
+static void ipoib_mcast_join_complete(int status,
+ struct ib_sa_mcmember_rec *mcmember,
+ void *mcast_ptr)
+{
+ struct ipoib_mcast *mcast = mcast_ptr;
+ struct net_device *dev = mcast->dev;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg_mcast(priv, "join completion for " IPOIB_GID_FMT
+ " (status %d)\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid), status);
+
+ if (!status && !ipoib_mcast_join_finish(mcast, mcmember)) {
+ mcast->backoff = HZ;
+ down(&mcast_mutex);
+ if (test_bit(IPOIB_MCAST_RUN, &priv->flags))
+ queue_work(ipoib_workqueue, &priv->mcast_task);
+ up(&mcast_mutex);
+ complete(&mcast->done);
+ return;
+ }
+
+ if (status == -EINTR) {
+ complete(&mcast->done);
+ return;
+ }
+
+ if (status && mcast->logcount++ < 20) {
+ if (status == -ETIMEDOUT || status == -EINTR) {
+ ipoib_dbg_mcast(priv, "multicast join failed for " IPOIB_GID_FMT
+ ", status %d\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid),
+ status);
+ } else {
+ ipoib_warn(priv, "multicast join failed for "
+ IPOIB_GID_FMT ", status %d\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid),
+ status);
+ }
+ }
+
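+	/* Join failed: back off exponentially (capped) before retrying */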
+ mcast->backoff *= 2;
+ if (mcast->backoff > IPOIB_MAX_BACKOFF_SECONDS)
+ mcast->backoff = IPOIB_MAX_BACKOFF_SECONDS;
+
+ mcast->query = NULL;
+
+ down(&mcast_mutex);
+ if (test_bit(IPOIB_MCAST_RUN, &priv->flags)) {
+ if (status == -ETIMEDOUT)
+ queue_work(ipoib_workqueue, &priv->mcast_task);
+ else
+ queue_delayed_work(ipoib_workqueue, &priv->mcast_task,
+ mcast->backoff * HZ);
+ } else
+ complete(&mcast->done);
+ up(&mcast_mutex);
+
+ return;
+}
+
+static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast,
+ int create)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_sa_mcmember_rec rec = {
+ .join_state = 1
+ };
+ ib_sa_comp_mask comp_mask;
+ int ret = 0;
+
+ ipoib_dbg_mcast(priv, "joining MGID " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ rec.mgid = mcast->mcmember.mgid;
+ rec.port_gid = priv->local_gid;
+ rec.pkey = be16_to_cpu(priv->pkey);
+
+ comp_mask =
+ IB_SA_MCMEMBER_REC_MGID |
+ IB_SA_MCMEMBER_REC_PORT_GID |
+ IB_SA_MCMEMBER_REC_PKEY |
+ IB_SA_MCMEMBER_REC_JOIN_STATE;
+
+ if (create) {
+ comp_mask |=
+ IB_SA_MCMEMBER_REC_QKEY |
+ IB_SA_MCMEMBER_REC_SL |
+ IB_SA_MCMEMBER_REC_FLOW_LABEL |
+ IB_SA_MCMEMBER_REC_TRAFFIC_CLASS;
+
+ rec.qkey = priv->broadcast->mcmember.qkey;
+ rec.sl = priv->broadcast->mcmember.sl;
+ rec.flow_label = priv->broadcast->mcmember.flow_label;
+ rec.traffic_class = priv->broadcast->mcmember.traffic_class;
+ }
+
+ ret = ib_sa_mcmember_rec_set(priv->ca, priv->port, &rec, comp_mask,
+ mcast->backoff * 1000, GFP_ATOMIC,
+ ipoib_mcast_join_complete,
+ mcast, &mcast->query);
+
+ if (ret < 0) {
+ ipoib_warn(priv, "ib_sa_mcmember_rec_set failed, status %d\n", ret);
+
+ mcast->backoff *= 2;
+ if (mcast->backoff > IPOIB_MAX_BACKOFF_SECONDS)
+ mcast->backoff = IPOIB_MAX_BACKOFF_SECONDS;
+
+ down(&mcast_mutex);
+ if (test_bit(IPOIB_MCAST_RUN, &priv->flags))
+ queue_delayed_work(ipoib_workqueue,
+ &priv->mcast_task,
+ mcast->backoff);
+ up(&mcast_mutex);
+ } else
+ mcast->query_id = ret;
+}
+
+void ipoib_mcast_join_task(void *dev_ptr)
+{
+ struct net_device *dev = dev_ptr;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ if (!test_bit(IPOIB_MCAST_RUN, &priv->flags))
+ return;
+
+ if (ib_query_gid(priv->ca, priv->port, 0, &priv->local_gid))
+		ipoib_warn(priv, "ib_query_gid() failed\n");
+ else
+ memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
+
+ {
+ struct ib_port_attr attr;
+
+ if (!ib_query_port(priv->ca, priv->port, &attr)) {
+ priv->local_lid = attr.lid;
+ priv->local_rate = attr.active_speed *
+ ib_width_enum_to_int(attr.active_width);
+ } else
+ ipoib_warn(priv, "ib_query_port failed\n");
+ }
+
+ if (!priv->broadcast) {
+ priv->broadcast = ipoib_mcast_alloc(dev, 1);
+ if (!priv->broadcast) {
+ ipoib_warn(priv, "failed to allocate broadcast group\n");
+ down(&mcast_mutex);
+ if (test_bit(IPOIB_MCAST_RUN, &priv->flags))
+ queue_delayed_work(ipoib_workqueue,
+ &priv->mcast_task, HZ);
+ up(&mcast_mutex);
+ return;
+ }
+
+ memcpy(priv->broadcast->mcmember.mgid.raw, priv->dev->broadcast + 4,
+ sizeof (union ib_gid));
+
+ spin_lock_irq(&priv->lock);
+ __ipoib_mcast_add(dev, priv->broadcast);
+ spin_unlock_irq(&priv->lock);
+ }
+
+ if (!test_bit(IPOIB_MCAST_FLAG_ATTACHED, &priv->broadcast->flags)) {
+ ipoib_mcast_join(dev, priv->broadcast, 0);
+ return;
+ }
+
+ while (1) {
+ struct ipoib_mcast *mcast = NULL;
+
+ spin_lock_irq(&priv->lock);
+ list_for_each_entry(mcast, &priv->multicast_list, list) {
+ if (!test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)
+ && !test_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags)
+ && !test_bit(IPOIB_MCAST_FLAG_ATTACHED, &mcast->flags)) {
+ /* Found the next unjoined group */
+ break;
+ }
+ }
+ spin_unlock_irq(&priv->lock);
+
+ if (&mcast->list == &priv->multicast_list) {
+ /* All done */
+ break;
+ }
+
+ ipoib_mcast_join(dev, mcast, 1);
+ return;
+ }
+
+ priv->mcast_mtu = ib_mtu_enum_to_int(priv->broadcast->mcmember.mtu) -
+ IPOIB_ENCAP_LEN;
+ dev->mtu = min(priv->mcast_mtu, priv->admin_mtu);
+
+ ipoib_dbg_mcast(priv, "successfully joined all multicast groups\n");
+
+ clear_bit(IPOIB_MCAST_RUN, &priv->flags);
+ netif_carrier_on(dev);
+}
+
+int ipoib_mcast_start_thread(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ ipoib_dbg_mcast(priv, "starting multicast thread\n");
+
+ down(&mcast_mutex);
+ if (!test_and_set_bit(IPOIB_MCAST_RUN, &priv->flags))
+ queue_work(ipoib_workqueue, &priv->mcast_task);
+ up(&mcast_mutex);
+
+ return 0;
+}
+
+int ipoib_mcast_stop_thread(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_mcast *mcast;
+
+ ipoib_dbg_mcast(priv, "stopping multicast thread\n");
+
+ down(&mcast_mutex);
+ clear_bit(IPOIB_MCAST_RUN, &priv->flags);
+ cancel_delayed_work(&priv->mcast_task);
+ up(&mcast_mutex);
+
+ flush_workqueue(ipoib_workqueue);
+
+ if (priv->broadcast && priv->broadcast->query) {
+ ib_sa_cancel_query(priv->broadcast->query_id, priv->broadcast->query);
+ priv->broadcast->query = NULL;
+ ipoib_dbg_mcast(priv, "waiting for bcast\n");
+ wait_for_completion(&priv->broadcast->done);
+ }
+
+ list_for_each_entry(mcast, &priv->multicast_list, list) {
+ if (mcast->query) {
+ ib_sa_cancel_query(mcast->query_id, mcast->query);
+ mcast->query = NULL;
+ ipoib_dbg_mcast(priv, "waiting for MGID " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+ wait_for_completion(&mcast->done);
+ }
+ }
+
+ return 0;
+}
+
+static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_sa_mcmember_rec rec = {
+ .join_state = 1
+ };
+ int ret = 0;
+
+ if (!test_and_clear_bit(IPOIB_MCAST_FLAG_ATTACHED, &mcast->flags))
+ return 0;
+
+ ipoib_dbg_mcast(priv, "leaving MGID " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ rec.mgid = mcast->mcmember.mgid;
+ rec.port_gid = priv->local_gid;
+ rec.pkey = be16_to_cpu(priv->pkey);
+
+ /* Remove ourselves from the multicast group */
+ ret = ipoib_mcast_detach(dev, be16_to_cpu(mcast->mcmember.mlid),
+ &mcast->mcmember.mgid);
+ if (ret)
+ ipoib_warn(priv, "ipoib_mcast_detach failed (result = %d)\n", ret);
+
+ /*
+ * Just make one shot at leaving and don't wait for a reply;
+ * if we fail, too bad.
+ */
+ ret = ib_sa_mcmember_rec_delete(priv->ca, priv->port, &rec,
+ IB_SA_MCMEMBER_REC_MGID |
+ IB_SA_MCMEMBER_REC_PORT_GID |
+ IB_SA_MCMEMBER_REC_PKEY |
+ IB_SA_MCMEMBER_REC_JOIN_STATE,
+ 0, GFP_ATOMIC, NULL,
+ mcast, &mcast->query);
+ if (ret < 0)
+ ipoib_warn(priv, "ib_sa_mcmember_rec_delete failed "
+ "for leave (result = %d)\n", ret);
+
+ return 0;
+}
+
+void ipoib_mcast_send(struct net_device *dev, union ib_gid *mgid,
+ struct sk_buff *skb)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ipoib_mcast *mcast;
+
+ /*
+ * We can only be called from ipoib_start_xmit, so we're
+ * inside tx_lock -- no need to save/restore flags.
+ */
+ spin_lock(&priv->lock);
+
+ mcast = __ipoib_mcast_find(dev, mgid);
+ if (!mcast) {
+ /* Let's create a new send only group now */
+ ipoib_dbg_mcast(priv, "setting up send only multicast group for "
+ IPOIB_GID_FMT "\n", IPOIB_GID_ARG(*mgid));
+
+ mcast = ipoib_mcast_alloc(dev, 0);
+ if (!mcast) {
+ ipoib_warn(priv, "unable to allocate memory for "
+ "multicast structure\n");
+ dev_kfree_skb_any(skb);
+ goto out;
+ }
+
+ set_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags);
+ mcast->mcmember.mgid = *mgid;
+ __ipoib_mcast_add(dev, mcast);
+ list_add_tail(&mcast->list, &priv->multicast_list);
+ }
+
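+	/* No address handle yet (join not complete): queue or drop the packet */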
+ if (!mcast->ah) {
+ if (skb_queue_len(&mcast->pkt_queue) < IPOIB_MAX_MCAST_QUEUE)
+ skb_queue_tail(&mcast->pkt_queue, skb);
+ else
+ dev_kfree_skb_any(skb);
+
+ if (mcast->query)
+ ipoib_dbg_mcast(priv, "no address vector, "
+ "but multicast join already started\n");
+ else if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
+ ipoib_mcast_sendonly_join(mcast);
+
+ /*
+		 * If the lookup completes between here and out:, we
+		 * don't want to send the packet twice.
+ */
+ mcast = NULL;
+ }
+
+out:
+ if (mcast && mcast->ah) {
+ if (skb->dst &&
+ skb->dst->neighbour &&
+ !*to_ipoib_neigh(skb->dst->neighbour)) {
+ struct ipoib_neigh *neigh = kmalloc(sizeof *neigh, GFP_ATOMIC);
+
+ if (neigh) {
+ kref_get(&mcast->ah->ref);
+ neigh->ah = mcast->ah;
+ neigh->neighbour = skb->dst->neighbour;
+ *to_ipoib_neigh(skb->dst->neighbour) = neigh;
+ list_add_tail(&neigh->list, &mcast->neigh_list);
+ }
+ }
+
+ ipoib_send(dev, skb, mcast->ah, IB_MULTICAST_QPN);
+ }
+
+ spin_unlock(&priv->lock);
+}
+
+void ipoib_mcast_dev_flush(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ LIST_HEAD(remove_list);
+ struct ipoib_mcast *mcast, *tmcast, *nmcast;
+ unsigned long flags;
+
+ ipoib_dbg_mcast(priv, "flushing multicast list\n");
+
+ spin_lock_irqsave(&priv->lock, flags);
+ list_for_each_entry_safe(mcast, tmcast, &priv->multicast_list, list) {
+ nmcast = ipoib_mcast_alloc(dev, 0);
+ if (nmcast) {
+ nmcast->flags =
+ mcast->flags & (1 << IPOIB_MCAST_FLAG_SENDONLY);
+
+ nmcast->mcmember.mgid = mcast->mcmember.mgid;
+
+ /* Add the new group in before the to-be-destroyed group */
+ list_add_tail(&nmcast->list, &mcast->list);
+ list_del_init(&mcast->list);
+
+ rb_replace_node(&mcast->rb_node, &nmcast->rb_node,
+ &priv->multicast_tree);
+
+ list_add_tail(&mcast->list, &remove_list);
+ } else {
+ ipoib_warn(priv, "could not reallocate multicast group "
+ IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+ }
+ }
+
+ if (priv->broadcast) {
+ nmcast = ipoib_mcast_alloc(dev, 0);
+ if (nmcast) {
+ nmcast->mcmember.mgid = priv->broadcast->mcmember.mgid;
+
+ rb_replace_node(&priv->broadcast->rb_node,
+ &nmcast->rb_node,
+ &priv->multicast_tree);
+
+ list_add_tail(&priv->broadcast->list, &remove_list);
+ }
+
+ priv->broadcast = nmcast;
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ list_for_each_entry(mcast, &remove_list, list) {
+ ipoib_mcast_leave(dev, mcast);
+ ipoib_mcast_free(mcast);
+ }
+}
+
+void ipoib_mcast_dev_down(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ unsigned long flags;
+
+ /* Delete broadcast since it will be recreated */
+ if (priv->broadcast) {
+ ipoib_dbg_mcast(priv, "deleting broadcast group\n");
+
+ spin_lock_irqsave(&priv->lock, flags);
+ rb_erase(&priv->broadcast->rb_node, &priv->multicast_tree);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ ipoib_mcast_leave(dev, priv->broadcast);
+ ipoib_mcast_free(priv->broadcast);
+ priv->broadcast = NULL;
+ }
+}
+
+void ipoib_mcast_restart_task(void *dev_ptr)
+{
+ struct net_device *dev = dev_ptr;
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct dev_mc_list *mclist;
+ struct ipoib_mcast *mcast, *tmcast;
+ LIST_HEAD(remove_list);
+ unsigned long flags;
+
+ ipoib_dbg_mcast(priv, "restarting multicast task\n");
+
+ ipoib_mcast_stop_thread(dev);
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ /*
+ * Unfortunately, the networking core only gives us a list of all of
+ * the multicast hardware addresses. We need to figure out which ones
+	 * are new and which ones have been removed.
+ */
+
+ /* Clear out the found flag */
+ list_for_each_entry(mcast, &priv->multicast_list, list)
+ clear_bit(IPOIB_MCAST_FLAG_FOUND, &mcast->flags);
+
+ /* Mark all of the entries that are found or don't exist */
+ for (mclist = dev->mc_list; mclist; mclist = mclist->next) {
+ union ib_gid mgid;
+
+ memcpy(mgid.raw, mclist->dmi_addr + 4, sizeof mgid);
+
+ /* Add in the P_Key */
+ mgid.raw[4] = (priv->pkey >> 8) & 0xff;
+ mgid.raw[5] = priv->pkey & 0xff;
+
+ mcast = __ipoib_mcast_find(dev, &mgid);
+ if (!mcast || test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+ struct ipoib_mcast *nmcast;
+
+ /* Not found or send-only group, let's add a new entry */
+ ipoib_dbg_mcast(priv, "adding multicast entry for mgid "
+ IPOIB_GID_FMT "\n", IPOIB_GID_ARG(mgid));
+
+ nmcast = ipoib_mcast_alloc(dev, 0);
+ if (!nmcast) {
+ ipoib_warn(priv, "unable to allocate memory for multicast structure\n");
+ continue;
+ }
+
+ set_bit(IPOIB_MCAST_FLAG_FOUND, &nmcast->flags);
+
+ nmcast->mcmember.mgid = mgid;
+
+ if (mcast) {
+ /* Destroy the send only entry */
+ list_del(&mcast->list);
+ list_add_tail(&mcast->list, &remove_list);
+
+ rb_replace_node(&mcast->rb_node,
+ &nmcast->rb_node,
+ &priv->multicast_tree);
+ } else
+ __ipoib_mcast_add(dev, nmcast);
+
+ list_add_tail(&nmcast->list, &priv->multicast_list);
+ }
+
+ if (mcast)
+ set_bit(IPOIB_MCAST_FLAG_FOUND, &mcast->flags);
+ }
+
+	/* Remove all of the entries that don't exist anymore */
+ list_for_each_entry_safe(mcast, tmcast, &priv->multicast_list, list) {
+ if (!test_bit(IPOIB_MCAST_FLAG_FOUND, &mcast->flags) &&
+ !test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+ ipoib_dbg_mcast(priv, "deleting multicast group " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+
+ rb_erase(&mcast->rb_node, &priv->multicast_tree);
+
+ /* Move to the remove list */
+ list_del(&mcast->list);
+ list_add_tail(&mcast->list, &remove_list);
+ }
+ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ /* We have to cancel outside of the spinlock */
+ list_for_each_entry(mcast, &remove_list, list) {
+ ipoib_mcast_leave(mcast->dev, mcast);
+ ipoib_mcast_free(mcast);
+ }
+
+ if (test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))
+ ipoib_mcast_start_thread(dev);
+}
+
+struct ipoib_mcast_iter *ipoib_mcast_iter_init(struct net_device *dev)
+{
+ struct ipoib_mcast_iter *iter;
+
+ iter = kmalloc(sizeof *iter, GFP_KERNEL);
+ if (!iter)
+ return NULL;
+
+ iter->dev = dev;
+ memset(iter->mgid.raw, 0, sizeof iter->mgid);
+
+ if (ipoib_mcast_iter_next(iter)) {
+ ipoib_mcast_iter_free(iter);
+ return NULL;
+ }
+
+ return iter;
+}
+
+void ipoib_mcast_iter_free(struct ipoib_mcast_iter *iter)
+{
+ kfree(iter);
+}
+
+int ipoib_mcast_iter_next(struct ipoib_mcast_iter *iter)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(iter->dev);
+ struct rb_node *n;
+ struct ipoib_mcast *mcast;
+ int ret = 1;
+
+ spin_lock_irq(&priv->lock);
+
+ n = rb_first(&priv->multicast_tree);
+
+ while (n) {
+ mcast = rb_entry(n, struct ipoib_mcast, rb_node);
+
+ if (memcmp(iter->mgid.raw, mcast->mcmember.mgid.raw,
+ sizeof (union ib_gid)) < 0) {
+ iter->mgid = mcast->mcmember.mgid;
+ iter->created = mcast->created;
+ iter->queuelen = skb_queue_len(&mcast->pkt_queue);
+ iter->complete = !!mcast->ah;
+ iter->send_only = !!(mcast->flags & (1 << IPOIB_MCAST_FLAG_SENDONLY));
+
+ ret = 0;
+
+ break;
+ }
+
+ n = rb_next(n);
+ }
+
+ spin_unlock_irq(&priv->lock);
+
+ return ret;
+}
+
+void ipoib_mcast_iter_read(struct ipoib_mcast_iter *iter,
+ union ib_gid *mgid,
+ unsigned long *created,
+ unsigned int *queuelen,
+ unsigned int *complete,
+ unsigned int *send_only)
+{
+ *mgid = iter->mgid;
+ *created = iter->created;
+ *queuelen = iter->queuelen;
+ *complete = iter->complete;
+ *send_only = iter->send_only;
+}
--- /dev/null
+/*
+ * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_verbs.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <ib_cache.h>
+
+#include "ipoib.h"
+
+int ipoib_mcast_attach(struct net_device *dev, u16 mlid, union ib_gid *mgid)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_qp_attr *qp_attr;
+ int attr_mask;
+ int ret;
+ u16 pkey_index;
+
+ ret = -ENOMEM;
+ qp_attr = kmalloc(sizeof *qp_attr, GFP_KERNEL);
+ if (!qp_attr)
+ goto out;
+
+ if (ib_find_cached_pkey(priv->ca, priv->port, priv->pkey, &pkey_index)) {
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+ ret = -ENXIO;
+ goto out;
+ }
+ set_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+
+ /* set correct QKey for QP */
+ qp_attr->qkey = priv->qkey;
+ attr_mask = IB_QP_QKEY;
+ ret = ib_modify_qp(priv->qp, qp_attr, attr_mask);
+ if (ret) {
+ ipoib_warn(priv, "failed to modify QP, ret = %d\n", ret);
+ goto out;
+ }
+
+ /* attach QP to multicast group */
+ down(&priv->mcast_mutex);
+ ret = ib_attach_mcast(priv->qp, mgid, mlid);
+ up(&priv->mcast_mutex);
+ if (ret)
+ ipoib_warn(priv, "failed to attach to multicast group, ret = %d\n", ret);
+
+out:
+ kfree(qp_attr);
+ return ret;
+}
+
+int ipoib_mcast_detach(struct net_device *dev, u16 mlid, union ib_gid *mgid)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int ret;
+
+ down(&priv->mcast_mutex);
+ ret = ib_detach_mcast(priv->qp, mgid, mlid);
+ up(&priv->mcast_mutex);
+ if (ret)
+ ipoib_warn(priv, "ib_detach_mcast failed (result = %d)\n", ret);
+
+ return ret;
+}
+
+int ipoib_qp_create(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ int ret;
+ u16 pkey_index;
+ struct ib_qp_attr qp_attr;
+ int attr_mask;
+
+ /*
+ * Search through the port P_Key table for the requested pkey value.
+ * The port has to be assigned to the respective IB partition in
+ * advance.
+ */
+ ret = ib_find_cached_pkey(priv->ca, priv->port, priv->pkey, &pkey_index);
+ if (ret) {
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+ return ret;
+ }
+ set_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+
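+	/* Walk the UD QP through the INIT -> RTR -> RTS transitions */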
+ qp_attr.qp_state = IB_QPS_INIT;
+ qp_attr.qkey = 0;
+ qp_attr.port_num = priv->port;
+ qp_attr.pkey_index = pkey_index;
+ attr_mask =
+ IB_QP_QKEY |
+ IB_QP_PORT |
+ IB_QP_PKEY_INDEX |
+ IB_QP_STATE;
+ ret = ib_modify_qp(priv->qp, &qp_attr, attr_mask);
+ if (ret) {
+ ipoib_warn(priv, "failed to modify QP to init, ret = %d\n", ret);
+ goto out_fail;
+ }
+
+ qp_attr.qp_state = IB_QPS_RTR;
+	/* Can't set this in an INIT->RTR transition */
+ attr_mask &= ~IB_QP_PORT;
+ ret = ib_modify_qp(priv->qp, &qp_attr, attr_mask);
+ if (ret) {
+ ipoib_warn(priv, "failed to modify QP to RTR, ret = %d\n", ret);
+ goto out_fail;
+ }
+
+ qp_attr.qp_state = IB_QPS_RTS;
+ qp_attr.sq_psn = 0;
+ attr_mask |= IB_QP_SQ_PSN;
+ attr_mask &= ~IB_QP_PKEY_INDEX;
+ ret = ib_modify_qp(priv->qp, &qp_attr, attr_mask);
+ if (ret) {
+ ipoib_warn(priv, "failed to modify QP to RTS, ret = %d\n", ret);
+ goto out_fail;
+ }
+
+ return 0;
+
+out_fail:
+ ib_destroy_qp(priv->qp);
+ priv->qp = NULL;
+
+ return -EINVAL;
+}
+
+int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct ib_qp_init_attr init_attr = {
+ .cap = {
+ .max_send_wr = IPOIB_TX_RING_SIZE,
+ .max_recv_wr = IPOIB_RX_RING_SIZE,
+ .max_send_sge = 1,
+ .max_recv_sge = 1
+ },
+ .sq_sig_type = IB_SIGNAL_ALL_WR,
+ .rq_sig_type = IB_SIGNAL_ALL_WR,
+ .qp_type = IB_QPT_UD
+ };
+
+ priv->pd = ib_alloc_pd(priv->ca);
+ if (IS_ERR(priv->pd)) {
+ printk(KERN_WARNING "%s: failed to allocate PD\n", ca->name);
+ return -ENODEV;
+ }
+
+ priv->cq = ib_create_cq(priv->ca, ipoib_ib_completion, NULL, dev,
+ IPOIB_TX_RING_SIZE + IPOIB_RX_RING_SIZE + 1);
+ if (IS_ERR(priv->cq)) {
+ printk(KERN_WARNING "%s: failed to create CQ\n", ca->name);
+ goto out_free_pd;
+ }
+
+ if (ib_req_notify_cq(priv->cq, IB_CQ_NEXT_COMP))
+ goto out_free_cq;
+
+ priv->mr = ib_get_dma_mr(priv->pd, IB_ACCESS_LOCAL_WRITE);
+ if (IS_ERR(priv->mr)) {
+ printk(KERN_WARNING "%s: ib_get_dma_mr failed\n", ca->name);
+ goto out_free_cq;
+ }
+
+ init_attr.send_cq = priv->cq;
+	init_attr.recv_cq = priv->cq;
+
+ priv->qp = ib_create_qp(priv->pd, &init_attr);
+ if (IS_ERR(priv->qp)) {
+ printk(KERN_WARNING "%s: failed to create QP\n", ca->name);
+ goto out_free_mr;
+ }
+
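+	/*
+	 * Bytes 1-3 of the 20-byte hardware address carry the local QPN;
+	 * the port GID fills bytes 4-19.
+	 */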
+ priv->dev->dev_addr[1] = (priv->qp->qp_num >> 16) & 0xff;
+ priv->dev->dev_addr[2] = (priv->qp->qp_num >> 8) & 0xff;
+ priv->dev->dev_addr[3] = (priv->qp->qp_num ) & 0xff;
+
+ priv->tx_sge.lkey = priv->mr->lkey;
+
+ priv->tx_wr.opcode = IB_WR_SEND;
+ priv->tx_wr.sg_list = &priv->tx_sge;
+ priv->tx_wr.num_sge = 1;
+ priv->tx_wr.send_flags = IB_SEND_SIGNALED;
+
+ return 0;
+
+out_free_mr:
+ ib_dereg_mr(priv->mr);
+
+out_free_cq:
+ ib_destroy_cq(priv->cq);
+
+out_free_pd:
+ ib_dealloc_pd(priv->pd);
+ return -ENODEV;
+}
+
+void ipoib_transport_dev_cleanup(struct net_device *dev)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ if (priv->qp) {
+ if (ib_destroy_qp(priv->qp))
+ ipoib_warn(priv, "ib_qp_destroy failed\n");
+
+ priv->qp = NULL;
+ clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags);
+ }
+
+ if (ib_dereg_mr(priv->mr))
+ ipoib_warn(priv, "ib_dereg_mr failed\n");
+
+ if (ib_destroy_cq(priv->cq))
+ ipoib_warn(priv, "ib_cq_destroy failed\n");
+
+ if (ib_dealloc_pd(priv->pd))
+ ipoib_warn(priv, "ib_dealloc_pd failed\n");
+}
+
+void ipoib_event(struct ib_event_handler *handler,
+ struct ib_event *record)
+{
+ struct ipoib_dev_priv *priv =
+ container_of(handler, struct ipoib_dev_priv, event_handler);
+
+ if (record->event == IB_EVENT_PORT_ACTIVE ||
+ record->event == IB_EVENT_LID_CHANGE ||
+ record->event == IB_EVENT_SM_CHANGE) {
+ ipoib_dbg(priv, "Port active event\n");
+ schedule_work(&priv->flush_task);
+ }
+}
--- /dev/null
+/*
+ * Copyright (c) 2004 Topspin Communications. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * $Id: ipoib_vlan.c 1349 2004-12-16 21:09:43Z roland $
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+
+#include <asm/uaccess.h>
+
+#include "ipoib.h"
+
+static ssize_t show_parent(struct class_device *class_dev, char *buf)
+{
+ struct net_device *dev =
+ container_of(class_dev, struct net_device, class_dev);
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+
+ return sprintf(buf, "%s\n", priv->parent->name);
+}
+static CLASS_DEVICE_ATTR(parent, S_IRUGO, show_parent, NULL);
+
+int ipoib_vlan_add(struct net_device *pdev, unsigned short pkey)
+{
+ struct ipoib_dev_priv *ppriv, *priv;
+ char intf_name[IFNAMSIZ];
+ int result;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ ppriv = netdev_priv(pdev);
+
+ down(&ppriv->vlan_mutex);
+
+ /*
+ * First ensure this isn't a duplicate. We check the parent device and
+ * then all of the child interfaces to make sure the Pkey doesn't match.
+ */
+ if (ppriv->pkey == pkey) {
+ result = -ENOTUNIQ;
+ goto err;
+ }
+
+ list_for_each_entry(priv, &ppriv->child_intfs, list) {
+ if (priv->pkey == pkey) {
+ result = -ENOTUNIQ;
+ goto err;
+ }
+ }
+
+ snprintf(intf_name, sizeof intf_name, "%s.%04x",
+ ppriv->dev->name, pkey);
+ priv = ipoib_intf_alloc(intf_name);
+ if (!priv) {
+ result = -ENOMEM;
+ goto err;
+ }
+
+ set_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags);
+
+ priv->pkey = pkey;
+
+ memcpy(priv->dev->dev_addr, ppriv->dev->dev_addr, INFINIBAND_ALEN);
+ priv->dev->broadcast[8] = pkey >> 8;
+ priv->dev->broadcast[9] = pkey & 0xff;
+
+ result = ipoib_dev_init(priv->dev, ppriv->ca, ppriv->port);
+ if (result < 0) {
+ ipoib_warn(ppriv, "failed to initialize subinterface: "
+ "device %s, port %d",
+ ppriv->ca->name, ppriv->port);
+ goto device_init_failed;
+ }
+
+ result = register_netdev(priv->dev);
+ if (result) {
+ ipoib_warn(priv, "failed to initialize; error %i", result);
+ goto register_failed;
+ }
+
+ priv->parent = ppriv->dev;
+
+ if (ipoib_create_debug_file(priv->dev))
+ goto debug_failed;
+
+ if (ipoib_add_pkey_attr(priv->dev))
+ goto sysfs_failed;
+
+ if (class_device_create_file(&priv->dev->class_dev,
+ &class_device_attr_parent))
+ goto sysfs_failed;
+
+ list_add_tail(&priv->list, &ppriv->child_intfs);
+
+ up(&ppriv->vlan_mutex);
+
+ return 0;
+
+sysfs_failed:
+ ipoib_delete_debug_file(priv->dev);
+
+debug_failed:
+ unregister_netdev(priv->dev);
+
+register_failed:
+ ipoib_dev_cleanup(priv->dev);
+
+device_init_failed:
+ free_netdev(priv->dev);
+
+err:
+ up(&ppriv->vlan_mutex);
+ return result;
+}
+
+int ipoib_vlan_delete(struct net_device *pdev, unsigned short pkey)
+{
+ struct ipoib_dev_priv *ppriv, *priv, *tpriv;
+ int ret = -ENOENT;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ ppriv = netdev_priv(pdev);
+
+ down(&ppriv->vlan_mutex);
+ list_for_each_entry_safe(priv, tpriv, &ppriv->child_intfs, list) {
+ if (priv->pkey == pkey) {
+ unregister_netdev(priv->dev);
+ ipoib_dev_cleanup(priv->dev);
+
+ list_del(&priv->list);
+
+ kfree(priv);
+
+ ret = 0;
+ break;
+ }
+ }
+ up(&ppriv->vlan_mutex);
+
+ return ret;
+}
*/
static unsigned long ba0_addr;
-static unsigned int *ba0;
+static unsigned int __iomem *ba0;
static char phys[32];
static char name[] = "CS416x Gameport";
static unsigned long ba1_addr;
static union ba1_t {
struct {
- unsigned int *data0;
- unsigned int *data1;
- unsigned int *pmem;
- unsigned int *reg;
+ unsigned int __iomem *data0;
+ unsigned int __iomem *data1;
+ unsigned int __iomem *pmem;
+ unsigned int __iomem *reg;
} name;
- unsigned int *idx[4];
+ unsigned int __iomem *idx[4];
} ba1;
static void cs461x_poke(unsigned long reg, unsigned int val)
{
- ba1.idx[(reg >> 16) & 3][(reg >> 2) & 0x3fff] = val;
+ writel(val, &ba1.idx[(reg >> 16) & 3][(reg >> 2) & 0x3fff]);
}
static unsigned int cs461x_peek(unsigned long reg)
{
- return ba1.idx[(reg >> 16) & 3][(reg >> 2) & 0x3fff];
+ return readl(&ba1.idx[(reg >> 16) & 3][(reg >> 2) & 0x3fff]);
}
#endif
static void cs461x_pokeBA0(unsigned long reg, unsigned int val)
{
- ba0[reg >> 2] = val;
+ writel(val, &ba0[reg >> 2]);
}
static unsigned int cs461x_peekBA0(unsigned long reg)
{
- return ba0[reg >> 2];
+ return readl(&ba0[reg >> 2]);
}
static int cs461x_free(struct pci_dev *pdev)
static int gc_status_bit[] = { 0x40, 0x80, 0x20, 0x10, 0x08 };
static char *gc_names[] = { NULL, "SNES pad", "NES pad", "NES FourPort", "Multisystem joystick",
- "Multisystem 2-button joystick", "N64 controller", "PSX controller"
+ "Multisystem 2-button joystick", "N64 controller", "PSX controller",
"PSX DDR controller" };
/*
* N64 support.
udelay(gc_psx_delay);
read = parport_read_status(gc->pd->port) ^ 0x80;
for (j = 0; j < 5; j++)
- data[j] |= (read & gc_status_bit[j] & gc->pads[GC_PSX]) ? (1 << i) : 0;
+ data[j] |= (read & gc_status_bit[j] & (gc->pads[GC_PSX] | gc->pads[GC_DDR])) ? (1 << i) : 0;
parport_write_data(gc->pd->port, cmd | GC_PSX_CLOCK | GC_PSX_POWER);
udelay(gc_psx_delay);
}
gc_psx_command(gc, 0, data2); /* Dump status */
for (i =0; i < 5; i++) /* Find the longest pad */
- if((gc_status_bit[i] & gc->pads[GC_PSX]) && (GC_PSX_LEN(id[i]) > max_len))
+ if((gc_status_bit[i] & (gc->pads[GC_PSX] | gc->pads[GC_DDR])) && (GC_PSX_LEN(id[i]) > max_len))
max_len = GC_PSX_LEN(id[i]);
for (i = 0; i < max_len * 2; i++) { /* Read in all the data */
connected to your serial (COM) port.
You will need an additional utility called inputattach, see
- Documentation/input/joystick.txt and ff.txt.
+ <file:Documentation/input/joystick.txt>
+ and <file:Documentation/input/ff.txt>.
* Input device fields.
*/
- iforce->dev.id.bustype = BUS_USB;
+ switch (iforce->bus) {
+#ifdef CONFIG_JOYSTICK_IFORCE_USB
+ case IFORCE_USB:
+ iforce->dev.id.bustype = BUS_USB;
+ iforce->dev.dev = &iforce->usbdev->dev;
+ break;
+#endif
+#ifdef CONFIG_JOYSTICK_IFORCE_232
+ case IFORCE_232:
+ iforce->dev.id.bustype = BUS_RS232;
+ iforce->dev.dev = &iforce->serio->dev;
+ break;
+#endif
+ }
+
iforce->dev.private = iforce;
iforce->dev.name = "Unknown I-Force device";
iforce->dev.open = iforce_open;
magellan->dev.id.vendor = SERIO_MAGELLAN;
magellan->dev.id.product = 0x0001;
magellan->dev.id.version = 0x0100;
+ magellan->dev.dev = &serio->dev;
serio->private = magellan;
spaceball->dev.id.vendor = SERIO_SPACEBALL;
spaceball->dev.id.product = id;
spaceball->dev.id.version = 0x0100;
+ spaceball->dev.dev = &serio->dev;
serio->private = spaceball;
spaceorb->dev.id.vendor = SERIO_SPACEORB;
spaceorb->dev.id.product = 0x0001;
spaceorb->dev.id.version = 0x0100;
+ spaceorb->dev.dev = &serio->dev;
serio->private = spaceorb;
stinger->dev.id.vendor = SERIO_STINGER;
stinger->dev.id.product = 0x0001;
stinger->dev.id.version = 0x0100;
+ stinger->dev.dev = &serio->dev;
for (i = 0; i < 2; i++) {
stinger->dev.absmax[ABS_X+i] = 64;
twidjoy->dev.id.vendor = SERIO_TWIDJOY;
twidjoy->dev.id.product = 0x0001;
twidjoy->dev.id.version = 0x0100;
+ twidjoy->dev.dev = &serio->dev;
twidjoy->dev.evbit[0] = BIT(EV_KEY) | BIT(EV_ABS);
warrior->dev.id.vendor = SERIO_WARRIOR;
warrior->dev.id.product = 0x0001;
warrior->dev.id.version = 0x0100;
+ warrior->dev.dev = &serio->dev;
for (i = 0; i < 2; i++) {
warrior->dev.absmax[ABS_X+i] = -64;
* DISCLAIMER: This works for _me_. If you break anything by using the
* information given below, I will _not_ be liable!
*
- * RJ11 pinout: To DB9: Or DB25:
+ * RJ11 pinout: To DE9: Or DB25:
* 1 - RxD <----> Pin 3 (TxD) <-> Pin 2 (TxD)
* 2 - GND <----> Pin 5 (GND) <-> Pin 7 (GND)
* 4 - TxD <----> Pin 2 (RxD) <-> Pin 3 (RxD)
- * 3 - +12V (from HDD drive connector), DON'T connect to DB9 or DB25!!!
+ * 3 - +12V (from HDD drive connector), DON'T connect to DE9 or DB25!!!
*
- * Pin numbers for DB9 and DB25 are noted on the plug (quite small:). For
+ * Pin numbers for DE9 and DB25 are noted on the plug (quite small:). For
* RJ11, it's like this:
*
* __=__ Hold the plug in front of you, cable downwards,
lk->dev.id.vendor = SERIO_LKKBD;
lk->dev.id.product = 0;
lk->dev.id.version = 0x0100;
+ lk->dev.dev = &serio->dev;
input_register_device (&lk->dev);
nkbd->dev.id.vendor = SERIO_NEWTON;
nkbd->dev.id.product = 0x0001;
nkbd->dev.id.version = 0x0100;
+ nkbd->dev.dev = &serio->dev;
input_register_device(&nkbd->dev);
sunkbd->dev.id.vendor = SERIO_SUNKBD;
sunkbd->dev.id.product = sunkbd->type;
sunkbd->dev.id.version = 0x0100;
+ sunkbd->dev.dev = &serio->dev;
input_register_device(&sunkbd->dev);
xtkbd->dev.id.vendor = 0x0001;
xtkbd->dev.id.product = 0x0001;
xtkbd->dev.id.version = 0x0100;
+ xtkbd->dev.dev = &serio->dev;
input_register_device(&xtkbd->dev);
static char *sparcspkr_phys = "sparc/input0";
static struct input_dev sparcspkr_dev;
-spinlock_t beep_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(beep_lock);
static void __init init_sparcspkr_struct(void)
{
--- /dev/null
+/*
+ * ALPS touchpad PS/2 mouse driver
+ *
+ * Copyright (c) 2003 Neil Brown <neilb@cse.unsw.edu.au>
+ * Copyright (c) 2003 Peter Osterlund <petero2@telia.com>
+ * Copyright (c) 2004 Dmitry Torokhov <dtor@mail.ru>
+ *
+ * ALPS detection, tap switching and status querying info is taken from
+ * tpconfig utility (by C. Scott Ananian and Bruce Kall).
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/input.h>
+#include <linux/serio.h>
+#include <linux/libps2.h>
+
+#include "psmouse.h"
+#include "alps.h"
+
+#undef DEBUG
+#ifdef DEBUG
+#define dbg(format, arg...) printk(KERN_INFO "alps.c: " format "\n", ## arg)
+#else
+#define dbg(format, arg...) do {} while (0)
+#endif
+
+#define ALPS_MODEL_GLIDEPOINT 1
+#define ALPS_MODEL_DUALPOINT 2
+
+struct alps_model_info {
+ unsigned char signature[3];
+ unsigned char model;
+} alps_model_data[] = {
+/* { { 0x33, 0x02, 0x0a }, ALPS_MODEL_GLIDEPOINT }, */
+ { { 0x53, 0x02, 0x0a }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x53, 0x02, 0x14 }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x63, 0x02, 0x0a }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x63, 0x02, 0x14 }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x73, 0x02, 0x0a }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x73, 0x02, 0x14 }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x63, 0x02, 0x28 }, ALPS_MODEL_GLIDEPOINT },
+/* { { 0x63, 0x02, 0x3c }, ALPS_MODEL_GLIDEPOINT }, */
+/* { { 0x63, 0x02, 0x50 }, ALPS_MODEL_GLIDEPOINT }, */
+ { { 0x63, 0x02, 0x64 }, ALPS_MODEL_GLIDEPOINT },
+ { { 0x20, 0x02, 0x0e }, ALPS_MODEL_DUALPOINT },
+ { { 0x22, 0x02, 0x0a }, ALPS_MODEL_DUALPOINT },
+ { { 0x22, 0x02, 0x14 }, ALPS_MODEL_DUALPOINT },
+ { { 0x63, 0x03, 0xc8 }, ALPS_MODEL_DUALPOINT },
+};
+
+/*
+ * ALPS absolute Mode
+ * byte 0: 1 1 1 1 1 mid0 rig0 lef0
+ * byte 1: 0 x6 x5 x4 x3 x2 x1 x0
+ * byte 2: 0 x10 x9 x8 x7 up1 fin ges
+ * byte 3: 0 y9 y8 y7 1 mid1 rig1 lef1
+ * byte 4: 0 y6 y5 y4 y3 y2 y1 y0
+ * byte 5: 0 z6 z5 z4 z3 z2 z1 z0
+ *
+ * On a dualpoint, {mid,rig,lef}0 are the stick, 1 are the pad.
+ * We just 'or' them together for now.
+ *
+ * We used to send 'ges'tures as BTN_TOUCH but this made it impossible
+ * to disable tap events in the synaptics driver since the driver
+ * was unable to distinguish a gesture tap from an actual button click.
+ * A tap gesture now creates an emulated touch that the synaptics
+ * driver can interpret as a tap event; if MaxTapTime=0 and
+ * MaxTapMove=0, the driver will ignore taps.
+ *
+ * The touchpad on an 'Acer Aspire' has 4 buttons:
+ * left,right,up,down.
+ * This device always sets {mid,rig,lef}0 to 1 and
+ * reflects left,right,down,up in lef1,rig1,mid1,up1.
+ */
+
+static void alps_process_packet(struct psmouse *psmouse, struct pt_regs *regs)
+{
+ unsigned char *packet = psmouse->packet;
+ struct input_dev *dev = &psmouse->dev;
+ int x, y, z;
+ int left = 0, right = 0, middle = 0;
+
+ input_regs(dev, regs);
+
+ if ((packet[0] & 0xc8) == 0x08) { /* 3-byte PS/2 packet */
+ x = packet[1];
+ if (packet[0] & 0x10)
+ x = x - 256;
+ y = packet[2];
+ if (packet[0] & 0x20)
+ y = y - 256;
+ left = (packet[0] ) & 1;
+ right = (packet[0] >> 1) & 1;
+
+ input_report_rel(dev, REL_X, x);
+ input_report_rel(dev, REL_Y, -y);
+ input_report_key(dev, BTN_A, left);
+ input_report_key(dev, BTN_B, right);
+ input_sync(dev);
+ return;
+ }
+
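+	/*
+	 * 6-byte absolute packet: assemble 11-bit X, 10-bit Y and the
+	 * 7-bit pressure Z (see the format table above).
+	 */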
+ x = (packet[1] & 0x7f) | ((packet[2] & 0x78)<<(7-3));
+ y = (packet[4] & 0x7f) | ((packet[3] & 0x70)<<(7-4));
+ z = packet[5];
+
+ if (z == 127) { /* DualPoint stick is relative, not absolute */
+ if (x > 383)
+ x = x - 768;
+ if (y > 255)
+ y = y - 512;
+ left = packet[3] & 1;
+ right = (packet[3] >> 1) & 1;
+
+ input_report_rel(dev, REL_X, x);
+ input_report_rel(dev, REL_Y, -y);
+ input_report_key(dev, BTN_LEFT, left);
+ input_report_key(dev, BTN_RIGHT, right);
+ input_sync(dev);
+ return;
+ }
+
+ if (z > 30) input_report_key(dev, BTN_TOUCH, 1);
+ if (z < 25) input_report_key(dev, BTN_TOUCH, 0);
+
+ if (z > 0) {
+ input_report_abs(dev, ABS_X, x);
+ input_report_abs(dev, ABS_Y, y);
+ }
+ input_report_abs(dev, ABS_PRESSURE, z);
+ input_report_key(dev, BTN_TOOL_FINGER, z > 0);
+
+ left |= (packet[2] ) & 1;
+ left |= (packet[3] ) & 1;
+ right |= (packet[3] >> 1) & 1;
+ if (packet[0] == 0xff) {
+ int back = (packet[3] >> 2) & 1;
+ int forward = (packet[2] >> 2) & 1;
+ if (back && forward) {
+ middle = 1;
+ back = 0;
+ forward = 0;
+ }
+ input_report_key(dev, BTN_BACK, back);
+ input_report_key(dev, BTN_FORWARD, forward);
+ } else {
+ left |= (packet[0] ) & 1;
+ right |= (packet[0] >> 1) & 1;
+ middle |= (packet[0] >> 2) & 1;
+ middle |= (packet[3] >> 2) & 1;
+ }
+
+ input_report_key(dev, BTN_LEFT, left);
+ input_report_key(dev, BTN_RIGHT, right);
+ input_report_key(dev, BTN_MIDDLE, middle);
+
+ input_sync(dev);
+}
+
+static psmouse_ret_t alps_process_byte(struct psmouse *psmouse, struct pt_regs *regs)
+{
+ if ((psmouse->packet[0] & 0xc8) == 0x08) { /* PS/2 packet */
+ if (psmouse->pktcnt == 3) {
+ alps_process_packet(psmouse, regs);
+ return PSMOUSE_FULL_PACKET;
+ }
+ return PSMOUSE_GOOD_DATA;
+ }
+
+ /* ALPS absolute mode packets start with 0b11111mrl */
+ if ((psmouse->packet[0] & 0xf8) != 0xf8)
+ return PSMOUSE_BAD_DATA;
+
+ /* Bytes 2 - 6 should have 0 in the highest bit */
+ if (psmouse->pktcnt >= 2 && psmouse->pktcnt <= 6 &&
+ (psmouse->packet[psmouse->pktcnt-1] & 0x80))
+ return PSMOUSE_BAD_DATA;
+
+ if (psmouse->pktcnt == 6) {
+ alps_process_packet(psmouse, regs);
+ return PSMOUSE_FULL_PACKET;
+ }
+
+ return PSMOUSE_GOOD_DATA;
+}
+
+int alps_get_model(struct psmouse *psmouse)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+ unsigned char param[4];
+ int i;
+
+ /*
+ * First try "E6 report".
+ * ALPS should return 0x00,0x00,0x0a or 0x00,0x00,0x64
+ */
+ param[0] = 0;
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11))
+ return -1;
+
+ param[0] = param[1] = param[2] = 0xff;
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO))
+ return -1;
+
+ dbg("E6 report: %2.2x %2.2x %2.2x", param[0], param[1], param[2]);
+
+ if (param[0] != 0x00 || param[1] != 0x00 || (param[2] != 0x0a && param[2] != 0x64))
+ return -1;
+
+ /* Now try "E7 report". ALPS should return 0x33 in byte 1 */
+ param[0] = 0;
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE21))
+ return -1;
+
+ param[0] = param[1] = param[2] = 0xff;
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO))
+ return -1;
+
+ dbg("E7 report: %2.2x %2.2x %2.2x", param[0], param[1], param[2]);
+
+ for (i = 0; i < ARRAY_SIZE(alps_model_data); i++)
+ if (!memcmp(param, alps_model_data[i].signature, sizeof(alps_model_data[i].signature)))
+ return alps_model_data[i].model;
+
+ return -1;
+}
+
+/*
+ * For DualPoint devices select the device that should respond to
+ * subsequent commands. It looks like the glidepad is behind the
+ * stickpointer; I'd have thought it would be the other way around...
+ */
+static int alps_passthrough_mode(struct psmouse *psmouse, int enable)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+ unsigned char param[3];
+ int cmd = enable ? PSMOUSE_CMD_SETSCALE21 : PSMOUSE_CMD_SETSCALE11;
+
+ if (ps2_command(ps2dev, NULL, cmd) ||
+ ps2_command(ps2dev, NULL, cmd) ||
+ ps2_command(ps2dev, NULL, cmd) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE))
+ return -1;
+
+ /* we may get 3 more bytes, just ignore them */
+ ps2_command(ps2dev, param, 0x0300);
+
+ return 0;
+}
+
+static int alps_absolute_mode(struct psmouse *psmouse)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ /* Try ALPS magic knock - 4 disables before enable */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_ENABLE))
+ return -1;
+
+ /*
+ * Switch mouse to poll (remote) mode so motion data will not
+ * get in our way
+ */
+ return ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETPOLL);
+}
+
+static int alps_get_status(struct psmouse *psmouse, char *param)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+ /* Get status: 0xF5 0xF5 0xF5 0xE9 */
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO))
+ return -1;
+
+ dbg("Status: %2.2x %2.2x %2.2x", param[0], param[1], param[2]);
+
+ return 0;
+}
+
+/*
+ * Turn touchpad tapping on or off. The sequences are:
+ * 0xE9 0xF5 0xF5 0xF3 0x0A to enable,
+ * 0xE9 0xF5 0xF5 0xE8 0x00 to disable.
+ * My guess is that 0xE9 (GetInfo) is here as a sync point.
+ * For models that also have a stickpointer (DualPoints), its tapping
+ * is controlled separately (0xE6 0xE6 0xE6 0xF3 0x14|0x0A), but
+ * we don't fiddle with it.
+ */
+static int alps_tap_mode(struct psmouse *psmouse, int enable)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+ int cmd = enable ? PSMOUSE_CMD_SETRATE : PSMOUSE_CMD_SETRES;
+ unsigned char tap_arg = enable ? 0x0A : 0x00;
+ unsigned char param[4];
+
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_DISABLE) ||
+ ps2_command(ps2dev, &tap_arg, cmd))
+ return -1;
+
+ if (alps_get_status(psmouse, param))
+ return -1;
+
+ return 0;
+}
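
/*
 * For illustration only: the PSMOUSE_CMD_* words used above encode not
 * just the raw PS/2 command byte but also how many argument bytes are
 * sent and how many response bytes are read back (see the send/receive
 * nibble extraction in the old psmouse_command() that this patch
 * replaces with ps2_command()).  A minimal sketch decoding two of the
 * commands behind the tap-enable sequence quoted in the comment above:
 */
#include <stdio.h>

static void decode_cmd(int command)
{
	int send    = (command >> 12) & 0xf;	/* argument bytes written */
	int receive = (command >> 8) & 0xf;	/* response bytes read */

	printf("cmd 0x%02x: send %d, receive %d\n", command & 0xff, send, receive);
}

int main(void)
{
	decode_cmd(0x03e9);	/* PSMOUSE_CMD_GETINFO: 0xE9, three bytes back */
	decode_cmd(0x10f3);	/* PSMOUSE_CMD_SETRATE: 0xF3 plus one argument */
	return 0;
}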
+
+static int alps_reconnect(struct psmouse *psmouse)
+{
+ int model;
+ unsigned char param[4];
+
+ if ((model = alps_get_model(psmouse)) < 0)
+ return -1;
+
+ if (model == ALPS_MODEL_DUALPOINT && alps_passthrough_mode(psmouse, 1))
+ return -1;
+
+ if (alps_get_status(psmouse, param))
+ return -1;
+
+ if (param[0] & 0x04)
+ alps_tap_mode(psmouse, 0);
+
+ if (alps_absolute_mode(psmouse)) {
+ printk(KERN_ERR "alps.c: Failed to enable absolute mode\n");
+ return -1;
+ }
+
+ if (model == ALPS_MODEL_DUALPOINT && alps_passthrough_mode(psmouse, 0))
+ return -1;
+
+ return 0;
+}
+
+static void alps_disconnect(struct psmouse *psmouse)
+{
+ psmouse_reset(psmouse);
+}
+
+int alps_init(struct psmouse *psmouse)
+{
+ unsigned char param[4];
+ int model;
+
+ if ((model = alps_get_model(psmouse)) < 0)
+ return -1;
+
+ printk(KERN_INFO "ALPS Touchpad (%s) detected\n",
+ model == ALPS_MODEL_GLIDEPOINT ? "Glidepoint" : "Dualpoint");
+
+ if (model == ALPS_MODEL_DUALPOINT && alps_passthrough_mode(psmouse, 1))
+ return -1;
+
+ if (alps_get_status(psmouse, param)) {
+ printk(KERN_ERR "alps.c: touchpad status report request failed\n");
+ return -1;
+ }
+
+ if (param[0] & 0x04) {
+ printk(KERN_INFO " Disabling hardware tapping\n");
+ if (alps_tap_mode(psmouse, 0))
+ printk(KERN_WARNING "alps.c: Failed to disable hardware tapping\n");
+ }
+
+ if (alps_absolute_mode(psmouse)) {
+ printk(KERN_ERR "alps.c: Failed to enable absolute mode\n");
+ return -1;
+ }
+
+ if (model == ALPS_MODEL_DUALPOINT && alps_passthrough_mode(psmouse, 0))
+ return -1;
+
+ psmouse->dev.evbit[LONG(EV_REL)] |= BIT(EV_REL);
+ psmouse->dev.relbit[LONG(REL_X)] |= BIT(REL_X);
+ psmouse->dev.relbit[LONG(REL_Y)] |= BIT(REL_Y);
+ psmouse->dev.keybit[LONG(BTN_A)] |= BIT(BTN_A);
+ psmouse->dev.keybit[LONG(BTN_B)] |= BIT(BTN_B);
+
+ psmouse->dev.evbit[LONG(EV_ABS)] |= BIT(EV_ABS);
+ input_set_abs_params(&psmouse->dev, ABS_X, 0, 1023, 0, 0);
+ input_set_abs_params(&psmouse->dev, ABS_Y, 0, 1023, 0, 0);
+ input_set_abs_params(&psmouse->dev, ABS_PRESSURE, 0, 127, 0, 0);
+
+ psmouse->dev.keybit[LONG(BTN_TOUCH)] |= BIT(BTN_TOUCH);
+ psmouse->dev.keybit[LONG(BTN_TOOL_FINGER)] |= BIT(BTN_TOOL_FINGER);
+ psmouse->dev.keybit[LONG(BTN_FORWARD)] |= BIT(BTN_FORWARD);
+ psmouse->dev.keybit[LONG(BTN_BACK)] |= BIT(BTN_BACK);
+
+ psmouse->protocol_handler = alps_process_byte;
+ psmouse->disconnect = alps_disconnect;
+ psmouse->reconnect = alps_reconnect;
+ psmouse->pktsize = 6;
+
+ return 0;
+}
+
+int alps_detect(struct psmouse *psmouse, int set_properties)
+{
+ if (alps_get_model(psmouse) < 0)
+ return -1;
+
+ if (set_properties) {
+ psmouse->vendor = "ALPS";
+ psmouse->name = "TouchPad";
+ }
+ return 0;
+}
+
--- /dev/null
+/*
+ * ALPS touchpad PS/2 mouse driver
+ *
+ * Copyright (c) 2003 Peter Osterlund <petero2@telia.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef _ALPS_H
+#define _ALPS_H
+
+int alps_detect(struct psmouse *psmouse, int set_properties);
+int alps_init(struct psmouse *psmouse);
+
+#endif
#include <linux/input.h>
#include <linux/serio.h>
+#include <linux/libps2.h>
#include "psmouse.h"
#include "logips2pp.h"
* Process a PS2++ or PS2T++ packet.
*/
-void ps2pp_process_packet(struct psmouse *psmouse)
+static psmouse_ret_t ps2pp_process_byte(struct psmouse *psmouse, struct pt_regs *regs)
{
struct input_dev *dev = &psmouse->dev;
- unsigned char *packet = psmouse->packet;
+ unsigned char *packet = psmouse->packet;
+
+ if (psmouse->pktcnt < 3)
+ return PSMOUSE_GOOD_DATA;
+
+/*
+ * Full packet accumulated, process it
+ */
+
+ input_regs(dev, regs);
if ((packet[0] & 0x48) == 0x48 && (packet[1] & 0x02) == 0x02) {
+ /* Logitech extended packet */
switch ((packet[1] >> 4) | (packet[0] & 0x30)) {
case 0x0d: /* Mouse extra info */
(packet[1] >> 4) | (packet[0] & 0x30));
#endif
}
-
- packet[0] &= 0x0f;
- packet[1] = 0;
- packet[2] = 0;
+ } else {
+ /* Standard PS/2 motion data */
+ input_report_rel(dev, REL_X, packet[1] ? (int) packet[1] - (int) ((packet[0] << 4) & 0x100) : 0);
+ input_report_rel(dev, REL_Y, packet[2] ? (int) ((packet[0] << 3) & 0x100) - (int) packet[2] : 0);
}
+
+ input_report_key(dev, BTN_LEFT, packet[0] & 1);
+ input_report_key(dev, BTN_MIDDLE, (packet[0] >> 2) & 1);
+ input_report_key(dev, BTN_RIGHT, (packet[0] >> 1) & 1);
+
+ input_sync(dev);
+
+ return PSMOUSE_FULL_PACKET;
+
}
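
/*
 * For illustration only: standard PS/2 motion deltas are 9-bit signed
 * values whose sign bits live in byte 0 (bit 4 for X, bit 5 for Y), and
 * the expressions above fold that sign bit back into the 8-bit delta.
 * A minimal sketch with a made-up packet (left button held):
 */
#include <stdio.h>

int main(void)
{
	unsigned char packet[3] = { 0x19, 0xfe, 0x02 };
	int dx, dy;

	dx = packet[1] ? (int) packet[1] - (int) ((packet[0] << 4) & 0x100) : 0;
	dy = packet[2] ? (int) ((packet[0] << 3) & 0x100) - (int) packet[2] : 0;

	printf("dx=%d dy=%d\n", dx, dy);	/* dx=-2 dy=-2 */
	return 0;
}
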
/*
if (psmouse_sliced_command(psmouse, command))
return -1;
- if (psmouse_command(psmouse, param, PSMOUSE_CMD_POLL))
+ if (ps2_command(&psmouse->ps2dev, param, PSMOUSE_CMD_POLL))
return -1;
return 0;
* enabled if we do nothing to it. Of course I put this in because I want it
* disabled :P
* 1 - enabled (if previously disabled, also default)
- * 0/2 - disabled
+ * 0 - disabled
*/
-static void ps2pp_set_smartscroll(struct psmouse *psmouse)
+static void ps2pp_set_smartscroll(struct psmouse *psmouse, unsigned int smartscroll)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[4];
+ if (smartscroll > 1)
+ smartscroll = 1;
+
ps2pp_cmd(psmouse, param, 0x32);
param[0] = 0;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
-
- if (psmouse_smartscroll < 2) {
- /* 0 - disabled, 1 - enabled */
- param[0] = psmouse_smartscroll;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- }
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+
+ param[0] = smartscroll;
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
}
+static ssize_t psmouse_attr_show_smartscroll(struct psmouse *psmouse, char *buf)
+{
+ return sprintf(buf, "%d\n", psmouse->smartscroll ? 1 : 0);
+}
+
+static ssize_t psmouse_attr_set_smartscroll(struct psmouse *psmouse, const char *buf, size_t count)
+{
+ unsigned long value;
+ char *rest;
+
+ value = simple_strtoul(buf, &rest, 10);
+ if (*rest || value > 1)
+ return -EINVAL;
+
+ ps2pp_set_smartscroll(psmouse, value);
+ psmouse->smartscroll = value;
+ return count;
+}
+
+PSMOUSE_DEFINE_ATTR(smartscroll);
+
/*
* Support 800 dpi resolution _only_ if the user wants it (there are good
* reasons to not use it even if the mouse supports it, and of course there are
* also good reasons to use it, let the user decide).
*/
-void ps2pp_set_800dpi(struct psmouse *psmouse)
+static void ps2pp_set_resolution(struct psmouse *psmouse, unsigned int resolution)
{
- unsigned char param = 3;
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, &param, PSMOUSE_CMD_SETRES);
+ if (resolution > 400) {
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+ unsigned char param = 3;
+
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, &param, PSMOUSE_CMD_SETRES);
+ psmouse->resolution = 800;
+ } else
+ psmouse_set_resolution(psmouse, resolution);
+}
+
+static void ps2pp_disconnect(struct psmouse *psmouse)
+{
+ device_remove_file(&psmouse->ps2dev.serio->dev, &psmouse_attr_smartscroll);
}
static struct ps2pp_info *get_model_info(unsigned char model)
* Set up input device's properties based on the detected mouse model.
*/
-static void ps2pp_set_model_properties(struct psmouse *psmouse, struct ps2pp_info *model_info)
+static void ps2pp_set_model_properties(struct psmouse *psmouse, struct ps2pp_info *model_info,
+ int using_ps2pp)
{
if (model_info->features & PS2PP_SIDE_BTN)
set_bit(BTN_SIDE, psmouse->dev.keybit);
case PS2PP_KIND_TP3:
psmouse->name = "TouchPad 3";
break;
+
+ default:
+ /*
+ * Set name to "Mouse" only when using PS2++,
+ * otherwise let other protocols define suitable
+ * name
+ */
+ if (using_ps2pp)
+ psmouse->name = "Mouse";
+ break;
}
}
int ps2pp_init(struct psmouse *psmouse, int set_properties)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[4];
- unsigned char protocol = PSMOUSE_PS2;
unsigned char model, buttons;
struct ps2pp_info *model_info;
+ int use_ps2pp = 0;
param[0] = 0;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
param[1] = 0;
- psmouse_command(psmouse, param, PSMOUSE_CMD_GETINFO);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO);
- if (param[1] != 0) {
- model = ((param[0] >> 4) & 0x07) | ((param[0] << 3) & 0x78);
- buttons = param[1];
- model_info = get_model_info(model);
+ if (!param[1])
+ return -1;
+
+ model = ((param[0] >> 4) & 0x07) | ((param[0] << 3) & 0x78);
+ buttons = param[1];
+
+ if ((model_info = get_model_info(model)) != NULL) {
/*
* Do Logitech PS2++ / PS2T++ magic init.
/* Unprotect RAM */
param[0] = 0x11; param[1] = 0x04; param[2] = 0x68;
- psmouse_command(psmouse, param, 0x30d1);
+ ps2_command(ps2dev, param, 0x30d1);
/* Enable features */
param[0] = 0x11; param[1] = 0x05; param[2] = 0x0b;
- psmouse_command(psmouse, param, 0x30d1);
+ ps2_command(ps2dev, param, 0x30d1);
/* Enable PS2++ */
param[0] = 0x11; param[1] = 0x09; param[2] = 0xc3;
- psmouse_command(psmouse, param, 0x30d1);
+ ps2_command(ps2dev, param, 0x30d1);
param[0] = 0;
- if (!psmouse_command(psmouse, param, 0x13d1) &&
+ if (!ps2_command(ps2dev, param, 0x13d1) &&
param[0] == 0x06 && param[1] == 0x00 && param[2] == 0x14) {
- protocol = PSMOUSE_PS2TPP;
+ use_ps2pp = 1;
}
- } else if (model_info != NULL) {
+ } else {
param[0] = param[1] = param[2] = 0;
ps2pp_cmd(psmouse, param, 0x39); /* Magic knock */
if ((param[0] & 0x78) == 0x48 &&
(param[1] & 0xf3) == 0xc2 &&
(param[2] & 0x03) == ((param[1] >> 2) & 3)) {
- ps2pp_set_smartscroll(psmouse);
- protocol = PSMOUSE_PS2PP;
+ ps2pp_set_smartscroll(psmouse, psmouse->smartscroll);
+ use_ps2pp = 1;
}
}
+ }
- if (set_properties) {
- psmouse->vendor = "Logitech";
- psmouse->model = model;
+ if (set_properties) {
+ psmouse->vendor = "Logitech";
+ psmouse->model = model;
- if (buttons < 3)
- clear_bit(BTN_MIDDLE, psmouse->dev.keybit);
- if (buttons < 2)
- clear_bit(BTN_RIGHT, psmouse->dev.keybit);
+ if (use_ps2pp) {
+ psmouse->protocol_handler = ps2pp_process_byte;
+ psmouse->pktsize = 3;
- if (model_info)
- ps2pp_set_model_properties(psmouse, model_info);
+ if (model_info->kind != PS2PP_KIND_TP3) {
+ psmouse->set_resolution = ps2pp_set_resolution;
+ psmouse->disconnect = ps2pp_disconnect;
+
+ device_create_file(&psmouse->ps2dev.serio->dev, &psmouse_attr_smartscroll);
+ }
}
+
+ if (buttons < 3)
+ clear_bit(BTN_MIDDLE, psmouse->dev.keybit);
+ if (buttons < 2)
+ clear_bit(BTN_RIGHT, psmouse->dev.keybit);
+
+ if (model_info)
+ ps2pp_set_model_properties(psmouse, model_info, use_ps2pp);
}
- return protocol;
+ return use_ps2pp ? 0 : -1;
}
#ifndef _LOGIPS2PP_H
#define _LOGIPS2PP_H
-void ps2pp_process_packet(struct psmouse *psmouse);
-void ps2pp_set_800dpi(struct psmouse *psmouse);
int ps2pp_init(struct psmouse *psmouse, int set_properties);
#endif
* PS/2 mouse driver
*
* Copyright (c) 1999-2002 Vojtech Pavlik
+ * Copyright (c) 2003-2004 Dmitry Torokhov
*/
/*
#include <linux/input.h>
#include <linux/serio.h>
#include <linux/init.h>
+#include <linux/libps2.h>
#include "psmouse.h"
#include "synaptics.h"
#include "logips2pp.h"
+#include "alps.h"
#define DRIVER_DESC "PS/2 mouse driver"
module_param_named(proto, psmouse_proto, charp, 0);
MODULE_PARM_DESC(proto, "Highest protocol extension to probe (bare, imps, exps). Useful for KVM switches.");
-int psmouse_resolution = 200;
+static unsigned int psmouse_resolution = 200;
module_param_named(resolution, psmouse_resolution, uint, 0);
MODULE_PARM_DESC(resolution, "Resolution, in dpi.");
-unsigned int psmouse_rate = 100;
+static unsigned int psmouse_rate = 100;
module_param_named(rate, psmouse_rate, uint, 0);
MODULE_PARM_DESC(rate, "Report rate, in reports per second.");
-int psmouse_smartscroll = 1;
+static unsigned int psmouse_smartscroll = 1;
module_param_named(smartscroll, psmouse_smartscroll, bool, 0);
MODULE_PARM_DESC(smartscroll, "Logitech Smartscroll autorepeat, 1 = enabled (default), 0 = disabled.");
module_param_named(resetafter, psmouse_resetafter, uint, 0);
MODULE_PARM_DESC(resetafter, "Reset device after so many bad packets (0 = never).");
+PSMOUSE_DEFINE_ATTR(rate);
+PSMOUSE_DEFINE_ATTR(resolution);
+PSMOUSE_DEFINE_ATTR(resetafter);
+
__obsolete_setup("psmouse_noext");
__obsolete_setup("psmouse_resolution=");
__obsolete_setup("psmouse_smartscroll=");
__obsolete_setup("psmouse_resetafter=");
__obsolete_setup("psmouse_rate=");
-static char *psmouse_protocols[] = { "None", "PS/2", "PS2++", "PS2T++", "GenPS/2", "ImPS/2", "ImExPS/2", "SynPS/2"};
+static char *psmouse_protocols[] = { "None", "PS/2", "PS2++", "ThinkPS/2", "GenPS/2", "ImPS/2", "ImExPS/2", "SynPS/2", "AlpsPS/2" };
/*
* psmouse_process_byte() analyzes the PS/2 data stream and reports
struct input_dev *dev = &psmouse->dev;
unsigned char *packet = psmouse->packet;
- if (psmouse->pktcnt < 3 + (psmouse->type >= PSMOUSE_GENPS))
+ if (psmouse->pktcnt < psmouse->pktsize)
return PSMOUSE_GOOD_DATA;
/*
input_regs(dev, regs);
-/*
- * The PS2++ protocol is a little bit complex
- */
-
- if (psmouse->type == PSMOUSE_PS2PP || psmouse->type == PSMOUSE_PS2TPP)
- ps2pp_process_packet(psmouse);
-
/*
* Scroll wheel on IntelliMice, scroll buttons on NetMice
*/
input_report_key(dev, BTN_EXTRA, (packet[0] >> 7) & 1);
}
+/*
+ * Extra button on ThinkingMouse
+ */
+ if (psmouse->type == PSMOUSE_THINKPS) {
+ input_report_key(dev, BTN_EXTRA, (packet[0] >> 3) & 1);
+ /* Without this bit of weirdness moving up gives wildly high Y changes. */
+ packet[1] |= (packet[0] & 0x40) << 1;
+ }
+
/*
* Generic PS/2 Mouse
*/
printk(KERN_WARNING "psmouse.c: bad data from KBC -%s%s\n",
flags & SERIO_TIMEOUT ? " timeout" : "",
flags & SERIO_PARITY ? " bad parity" : "");
- psmouse->nak = 1;
- clear_bit(PSMOUSE_FLAG_ACK, &psmouse->flags);
- clear_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- wake_up_interruptible(&psmouse->wait);
+ ps2_cmd_aborted(&psmouse->ps2dev);
goto out;
}
- if (test_bit(PSMOUSE_FLAG_ACK, &psmouse->flags)) {
- switch (data) {
- case PSMOUSE_RET_ACK:
- psmouse->nak = 0;
- break;
-
- case PSMOUSE_RET_NAK:
- psmouse->nak = 1;
- break;
-
- /*
- * Workaround for mice which don't ACK the Get ID command.
- * These are valid mouse IDs that we recognize.
- */
- case 0x00:
- case 0x03:
- case 0x04:
- if (test_bit(PSMOUSE_FLAG_WAITID, &psmouse->flags)) {
- psmouse->nak = 0;
- break;
- }
- /* Fall through */
- default:
- goto out;
- }
-
- if (!psmouse->nak && psmouse->cmdcnt) {
- set_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- set_bit(PSMOUSE_FLAG_CMD1, &psmouse->flags);
- }
- clear_bit(PSMOUSE_FLAG_ACK, &psmouse->flags);
- wake_up_interruptible(&psmouse->wait);
-
- if (data == PSMOUSE_RET_ACK || data == PSMOUSE_RET_NAK)
+ if (unlikely(psmouse->ps2dev.flags & PS2_FLAG_ACK))
+ if (ps2_handle_ack(&psmouse->ps2dev, data))
goto out;
- }
-
- if (test_bit(PSMOUSE_FLAG_CMD, &psmouse->flags)) {
- if (psmouse->cmdcnt)
- psmouse->cmdbuf[--psmouse->cmdcnt] = data;
- if (test_and_clear_bit(PSMOUSE_FLAG_CMD1, &psmouse->flags) && psmouse->cmdcnt)
- wake_up_interruptible(&psmouse->wait);
-
- if (!psmouse->cmdcnt) {
- clear_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- wake_up_interruptible(&psmouse->wait);
- }
- goto out;
- }
+ if (unlikely(psmouse->ps2dev.flags & PS2_FLAG_CMD))
+ if (ps2_handle_response(&psmouse->ps2dev, data))
+ goto out;
if (psmouse->state == PSMOUSE_INITIALIZING)
goto out;
psmouse->name, psmouse->phys, psmouse->pktcnt);
psmouse->pktcnt = 0;
- if (++psmouse->out_of_sync == psmouse_resetafter) {
+ if (++psmouse->out_of_sync == psmouse->resetafter) {
psmouse->state = PSMOUSE_IGNORE;
printk(KERN_NOTICE "psmouse.c: issuing reconnect request\n");
- serio_reconnect(psmouse->serio);
+ serio_reconnect(psmouse->ps2dev.serio);
}
break;
return IRQ_HANDLED;
}
-/*
- * psmouse_sendbyte() sends a byte to the mouse, and waits for acknowledge.
- * It doesn't handle retransmission, though it could - because when there would
- * be need for retransmissions, the mouse has to be replaced anyway.
- *
- * psmouse_sendbyte() can only be called from a process context
- */
-
-static int psmouse_sendbyte(struct psmouse *psmouse, unsigned char byte)
-{
- psmouse->nak = 1;
- set_bit(PSMOUSE_FLAG_ACK, &psmouse->flags);
-
- if (serio_write(psmouse->serio, byte) == 0)
- wait_event_interruptible_timeout(psmouse->wait,
- !test_bit(PSMOUSE_FLAG_ACK, &psmouse->flags),
- msecs_to_jiffies(200));
-
- clear_bit(PSMOUSE_FLAG_ACK, &psmouse->flags);
- return -psmouse->nak;
-}
-
-/*
- * psmouse_command() sends a command and its parameters to the mouse,
- * then waits for the response and puts it in the param array.
- *
- * psmouse_command() can only be called from a process context
- */
-
-int psmouse_command(struct psmouse *psmouse, unsigned char *param, int command)
-{
- int timeout;
- int send = (command >> 12) & 0xf;
- int receive = (command >> 8) & 0xf;
- int rc = -1;
- int i;
-
- timeout = msecs_to_jiffies(command == PSMOUSE_CMD_RESET_BAT ? 4000 : 500);
-
- clear_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- if (command == PSMOUSE_CMD_GETID)
- set_bit(PSMOUSE_FLAG_WAITID, &psmouse->flags);
-
- if (receive && param)
- for (i = 0; i < receive; i++)
- psmouse->cmdbuf[(receive - 1) - i] = param[i];
-
- psmouse->cmdcnt = receive;
-
- if (command & 0xff)
- if (psmouse_sendbyte(psmouse, command & 0xff))
- goto out;
-
- for (i = 0; i < send; i++)
- if (psmouse_sendbyte(psmouse, param[i]))
- goto out;
-
- timeout = wait_event_interruptible_timeout(psmouse->wait,
- !test_bit(PSMOUSE_FLAG_CMD1, &psmouse->flags), timeout);
-
- if (psmouse->cmdcnt && timeout > 0) {
- if (command == PSMOUSE_CMD_RESET_BAT && jiffies_to_msecs(timeout) > 100)
- timeout = msecs_to_jiffies(100);
-
- if (command == PSMOUSE_CMD_GETID &&
- psmouse->cmdbuf[receive - 1] != 0xab && psmouse->cmdbuf[receive - 1] != 0xac) {
- /*
- * Device behind the port is not a keyboard
- * so we don't need to wait for the 2nd byte
- * of ID response.
- */
- clear_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- psmouse->cmdcnt = 0;
- }
-
- wait_event_interruptible_timeout(psmouse->wait,
- !test_bit(PSMOUSE_FLAG_CMD, &psmouse->flags), timeout);
- }
-
- if (param)
- for (i = 0; i < receive; i++)
- param[i] = psmouse->cmdbuf[(receive - 1) - i];
-
- if (psmouse->cmdcnt && (command != PSMOUSE_CMD_RESET_BAT || psmouse->cmdcnt != 1))
- goto out;
-
- rc = 0;
-
-out:
- clear_bit(PSMOUSE_FLAG_CMD, &psmouse->flags);
- clear_bit(PSMOUSE_FLAG_CMD1, &psmouse->flags);
- clear_bit(PSMOUSE_FLAG_WAITID, &psmouse->flags);
- return rc;
-}
/*
* psmouse_sliced_command() sends an extended PS/2 command to the mouse
{
int i;
- if (psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11))
+ if (ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11))
return -1;
for (i = 6; i >= 0; i -= 2) {
unsigned char d = (command >> i) & 3;
- if (psmouse_command(psmouse, &d, PSMOUSE_CMD_SETRES))
+ if (ps2_command(&psmouse->ps2dev, &d, PSMOUSE_CMD_SETRES))
return -1;
}
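
/*
 * For illustration only: a sliced command ships one extended command
 * byte to the mouse two bits at a time, most significant pair first,
 * each pair passed as the argument of a SETRES command (after an
 * initial SETSCALE11), exactly as in the loop above.  A minimal sketch
 * with a made-up command value:
 */
#include <stdio.h>

int main(void)
{
	unsigned char command = 0x28;	/* hypothetical extended command byte */
	int i;

	for (i = 6; i >= 0; i -= 2)
		printf("SETRES %d\n", (command >> i) & 3);	/* prints 0, 2, 2, 0 */
	return 0;
}
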
{
unsigned char param[2];
- if (psmouse_command(psmouse, param, PSMOUSE_CMD_RESET_BAT))
+ if (ps2_command(&psmouse->ps2dev, param, PSMOUSE_CMD_RESET_BAT))
return -1;
if (param[0] != PSMOUSE_RET_BAT && param[1] != PSMOUSE_RET_ID)
/*
* Genius NetMouse magic init.
*/
-static int genius_detect(struct psmouse *psmouse)
+static int genius_detect(struct psmouse *psmouse, int set_properties)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[4];
param[0] = 3;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- psmouse_command(psmouse, param, PSMOUSE_CMD_GETINFO);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO);
+
+ if (param[0] != 0x00 || param[1] != 0x33 || param[2] != 0x55)
+ return -1;
+
+ if (set_properties) {
+ set_bit(BTN_EXTRA, psmouse->dev.keybit);
+ set_bit(BTN_SIDE, psmouse->dev.keybit);
+ set_bit(REL_WHEEL, psmouse->dev.relbit);
- return param[0] == 0x00 && param[1] == 0x33 && param[2] == 0x55;
+ psmouse->vendor = "Genius";
+ psmouse->name = "Wheel Mouse";
+ psmouse->pktsize = 4;
+ }
+
+ return 0;
}
/*
* IntelliMouse magic init.
*/
-static int intellimouse_detect(struct psmouse *psmouse)
+static int intellimouse_detect(struct psmouse *psmouse, int set_properties)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[2];
param[0] = 200;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
param[0] = 100;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
param[0] = 80;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
- psmouse_command(psmouse, param, PSMOUSE_CMD_GETID);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETID);
+
+ if (param[0] != 3)
+ return -1;
+
+ if (set_properties) {
+ set_bit(REL_WHEEL, psmouse->dev.relbit);
+
+ if (!psmouse->vendor) psmouse->vendor = "Generic";
+ if (!psmouse->name) psmouse->name = "Wheel Mouse";
+ psmouse->pktsize = 4;
+ }
- return param[0] == 3;
+ return 0;
}
/*
* Try IntelliMouse/Explorer magic init.
*/
-static int im_explorer_detect(struct psmouse *psmouse)
+static int im_explorer_detect(struct psmouse *psmouse, int set_properties)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[2];
- intellimouse_detect(psmouse);
+ intellimouse_detect(psmouse, 0);
param[0] = 200;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
param[0] = 200;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
param[0] = 80;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE);
- psmouse_command(psmouse, param, PSMOUSE_CMD_GETID);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETID);
+
+ if (param[0] != 4)
+ return -1;
+
+ if (set_properties) {
+ set_bit(REL_WHEEL, psmouse->dev.relbit);
+ set_bit(BTN_SIDE, psmouse->dev.keybit);
+ set_bit(BTN_EXTRA, psmouse->dev.keybit);
- return param[0] == 4;
+ if (!psmouse->vendor) psmouse->vendor = "Generic";
+ if (!psmouse->name) psmouse->name = "Explorer Mouse";
+ psmouse->pktsize = 4;
+ }
+
+ return 0;
+}
+
+/*
+ * Kensington ThinkingMouse / ExpertMouse magic init.
+ */
+static int thinking_detect(struct psmouse *psmouse, int set_properties)
+{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
+ unsigned char param[2];
+ unsigned char seq[] = { 20, 60, 40, 20, 20, 60, 40, 20, 20, 0 };
+ int i;
+
+ param[0] = 10;
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRATE);
+ param[0] = 0;
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ for (i = 0; seq[i]; i++)
+ ps2_command(ps2dev, seq + i, PSMOUSE_CMD_SETRATE);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETID);
+
+ if (param[0] != 2)
+ return -1;
+
+ if (set_properties) {
+ set_bit(BTN_EXTRA, psmouse->dev.keybit);
+
+ psmouse->vendor = "Kensington";
+ psmouse->name = "ThinkingMouse";
+ }
+
+ return 0;
+}
+
+/*
+ * Bare PS/2 protocol "detection". Always succeeds.
+ */
+static int ps2bare_detect(struct psmouse *psmouse, int set_properties)
+{
+ if (!psmouse->vendor) psmouse->vendor = "Generic";
+ if (!psmouse->name) psmouse->name = "Mouse";
+
+ return 0;
}
/*
{
int synaptics_hardware = 0;
+/*
+ * Try Kensington ThinkingMouse (we try it first, because the Synaptics
+ * probe upsets the ThinkingMouse).
+ */
+
+ if (max_proto > PSMOUSE_PS2 && thinking_detect(psmouse, set_properties) == 0)
+ return PSMOUSE_THINKPS;
+
/*
* Try Synaptics TouchPad
*/
- if (max_proto > PSMOUSE_PS2 && synaptics_detect(psmouse)) {
+ if (max_proto > PSMOUSE_PS2 && synaptics_detect(psmouse, set_properties) == 0) {
synaptics_hardware = 1;
- if (set_properties) {
- psmouse->vendor = "Synaptics";
- psmouse->name = "TouchPad";
- }
-
if (max_proto > PSMOUSE_IMEX) {
if (!set_properties || synaptics_init(psmouse) == 0)
return PSMOUSE_SYNAPTICS;
synaptics_reset(psmouse);
}
- if (max_proto > PSMOUSE_IMEX && genius_detect(psmouse)) {
-
- if (set_properties) {
- set_bit(BTN_EXTRA, psmouse->dev.keybit);
- set_bit(BTN_SIDE, psmouse->dev.keybit);
- set_bit(REL_WHEEL, psmouse->dev.relbit);
- psmouse->vendor = "Genius";
- psmouse->name = "Wheel Mouse";
+/*
+ * Try ALPS TouchPad
+ */
+ if (max_proto > PSMOUSE_IMEX) {
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_RESET_DIS);
+ if (alps_detect(psmouse, set_properties) == 0) {
+ if (!set_properties || alps_init(psmouse) == 0)
+ return PSMOUSE_ALPS;
+/*
+ * Init failed, try basic relative protocols
+ */
+ max_proto = PSMOUSE_IMEX;
}
-
- return PSMOUSE_GENPS;
}
- if (max_proto > PSMOUSE_IMEX) {
- int type = ps2pp_init(psmouse, set_properties);
- if (type > PSMOUSE_PS2)
- return type;
- }
+ if (max_proto > PSMOUSE_IMEX && genius_detect(psmouse, set_properties) == 0)
+ return PSMOUSE_GENPS;
- if (max_proto >= PSMOUSE_IMEX && im_explorer_detect(psmouse)) {
+ if (max_proto > PSMOUSE_IMEX && ps2pp_init(psmouse, set_properties) == 0)
+ return PSMOUSE_PS2PP;
- if (set_properties) {
- set_bit(REL_WHEEL, psmouse->dev.relbit);
- set_bit(BTN_SIDE, psmouse->dev.keybit);
- set_bit(BTN_EXTRA, psmouse->dev.keybit);
- if (!psmouse->name)
- psmouse->name = "Explorer Mouse";
- }
+/*
+ * Reset to defaults in case the device got confused by extended
+ * protocol probes.
+ */
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_RESET_DIS);
+ if (max_proto >= PSMOUSE_IMEX && im_explorer_detect(psmouse, set_properties) == 0)
return PSMOUSE_IMEX;
- }
-
- if (max_proto >= PSMOUSE_IMPS && intellimouse_detect(psmouse)) {
-
- if (set_properties) {
- set_bit(REL_WHEEL, psmouse->dev.relbit);
- if (!psmouse->name)
- psmouse->name = "Wheel Mouse";
- }
+ if (max_proto >= PSMOUSE_IMPS && intellimouse_detect(psmouse, set_properties) == 0)
return PSMOUSE_IMPS;
- }
/*
* Okay, all failed, we have a standard mouse here. The number of the buttons
* is still a question, though. We assume 3.
*/
+ ps2bare_detect(psmouse, set_properties);
+
if (synaptics_hardware) {
/*
* We detected Synaptics hardware but it did not respond to IMPS/2 probes.
* extensions.
*/
psmouse_reset(psmouse);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_RESET_DIS);
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_RESET_DIS);
}
return PSMOUSE_PS2;
static int psmouse_probe(struct psmouse *psmouse)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[2];
/*
*/
param[0] = 0xa5;
-
- if (psmouse_command(psmouse, param, PSMOUSE_CMD_GETID))
+ if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETID))
return -1;
if (param[0] != 0x00 && param[0] != 0x03 && param[0] != 0x04)
* Then we reset and disable the mouse so that it doesn't generate events.
*/
- if (psmouse_command(psmouse, NULL, PSMOUSE_CMD_RESET_DIS))
- printk(KERN_WARNING "psmouse.c: Failed to reset mouse on %s\n", psmouse->serio->phys);
+ if (ps2_command(ps2dev, NULL, PSMOUSE_CMD_RESET_DIS))
+ printk(KERN_WARNING "psmouse.c: Failed to reset mouse on %s\n", ps2dev->serio->phys);
return 0;
}
* Here we set the mouse resolution.
*/
-static void psmouse_set_resolution(struct psmouse *psmouse)
+void psmouse_set_resolution(struct psmouse *psmouse, unsigned int resolution)
{
- unsigned char param[1];
-
- if (psmouse->type == PSMOUSE_PS2PP && psmouse_resolution > 400) {
- ps2pp_set_800dpi(psmouse);
- return;
- }
+ unsigned char params[] = { 0, 1, 2, 2, 3 };
- if (!psmouse_resolution || psmouse_resolution >= 200)
- param[0] = 3;
- else if (psmouse_resolution >= 100)
- param[0] = 2;
- else if (psmouse_resolution >= 50)
- param[0] = 1;
- else if (psmouse_resolution)
- param[0] = 0;
+ if (resolution == 0 || resolution > 200)
+ resolution = 200;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
+ ps2_command(&psmouse->ps2dev, &params[resolution / 50], PSMOUSE_CMD_SETRES);
+ psmouse->resolution = 25 << params[resolution / 50];
}
/*
* Here we set the mouse report rate.
*/
-static void psmouse_set_rate(struct psmouse *psmouse)
+static void psmouse_set_rate(struct psmouse *psmouse, unsigned int rate)
{
unsigned char rates[] = { 200, 100, 80, 60, 40, 20, 10, 0 };
int i = 0;
- while (rates[i] > psmouse_rate) i++;
- psmouse_command(psmouse, rates + i, PSMOUSE_CMD_SETRATE);
+ while (rates[i] > rate) i++;
+ ps2_command(&psmouse->ps2dev, &rates[i], PSMOUSE_CMD_SETRATE);
+ psmouse->rate = rates[i];
}
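
/*
 * For illustration only: a minimal sketch of the two mappings above.
 * The requested resolution is quantized to one of the four SETRES codes
 * (25, 50, 100 or 200 dpi), and the requested report rate is rounded
 * down to the nearest entry of the standard PS/2 rate table.
 */
#include <stdio.h>

int main(void)
{
	unsigned char params[] = { 0, 1, 2, 2, 3 };
	unsigned char rates[] = { 200, 100, 80, 60, 40, 20, 10, 0 };
	unsigned int resolution = 150, rate = 70;
	int i = 0;

	if (resolution == 0 || resolution > 200)
		resolution = 200;
	printf("resolution: %d dpi\n", 25 << params[resolution / 50]);	/* 100 dpi */

	while (rates[i] > rate)
		i++;
	printf("rate: %d reports/sec\n", rates[i]);	/* 60 reports/sec */
	return 0;
}
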
/*
static void psmouse_initialize(struct psmouse *psmouse)
{
- unsigned char param[2];
-
/*
- * We set the mouse report rate, resolution and scaling.
+ * We set the mouse into streaming mode.
*/
- if (psmouse_max_proto != PSMOUSE_PS2) {
- psmouse_set_rate(psmouse);
- psmouse_set_resolution(psmouse);
- psmouse_command(psmouse, NULL, PSMOUSE_CMD_SETSCALE11);
- }
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSTREAM);
/*
- * We set the mouse into streaming mode.
+ * We set the mouse report rate, resolution and scaling.
*/
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETSTREAM);
+ if (psmouse_max_proto != PSMOUSE_PS2) {
+ psmouse->set_rate(psmouse, psmouse->rate);
+ psmouse->set_resolution(psmouse, psmouse->resolution);
+ ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+ }
}
/*
static void psmouse_set_state(struct psmouse *psmouse, enum psmouse_state new_state)
{
- serio_pause_rx(psmouse->serio);
+ serio_pause_rx(psmouse->ps2dev.serio);
psmouse->state = new_state;
- psmouse->pktcnt = psmouse->cmdcnt = psmouse->out_of_sync = 0;
- psmouse->flags = 0;
- serio_continue_rx(psmouse->serio);
+ psmouse->pktcnt = psmouse->out_of_sync = 0;
+ psmouse->ps2dev.flags = 0;
+ serio_continue_rx(psmouse->ps2dev.serio);
}
/*
static void psmouse_activate(struct psmouse *psmouse)
{
- if (psmouse_command(psmouse, NULL, PSMOUSE_CMD_ENABLE))
- printk(KERN_WARNING "psmouse.c: Failed to enable mouse on %s\n", psmouse->serio->phys);
+ if (ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_ENABLE))
+ printk(KERN_WARNING "psmouse.c: Failed to enable mouse on %s\n",
+ psmouse->ps2dev.serio->phys);
psmouse_set_state(psmouse, PSMOUSE_ACTIVATED);
}
static void psmouse_deactivate(struct psmouse *psmouse)
{
- if (psmouse_command(psmouse, NULL, PSMOUSE_CMD_DISABLE))
- printk(KERN_WARNING "psmouse.c: Failed to deactivate mouse on %s\n", psmouse->serio->phys);
+ if (ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_DISABLE))
+ printk(KERN_WARNING "psmouse.c: Failed to deactivate mouse on %s\n",
+ psmouse->ps2dev.serio->phys);
psmouse_set_state(psmouse, PSMOUSE_CMD_MODE);
}
{
struct psmouse *psmouse, *parent;
+ device_remove_file(&serio->dev, &psmouse_attr_rate);
+ device_remove_file(&serio->dev, &psmouse_attr_resolution);
+ device_remove_file(&serio->dev, &psmouse_attr_resetafter);
+
psmouse = serio->private;
psmouse_set_state(psmouse, PSMOUSE_CMD_MODE);
memset(psmouse, 0, sizeof(struct psmouse));
- init_waitqueue_head(&psmouse->wait);
- init_input_dev(&psmouse->dev);
+ ps2_init(&psmouse->ps2dev, serio);
psmouse->dev.evbit[0] = BIT(EV_KEY) | BIT(EV_REL);
psmouse->dev.keybit[LONG(BTN_MOUSE)] = BIT(BTN_LEFT) | BIT(BTN_MIDDLE) | BIT(BTN_RIGHT);
psmouse->dev.relbit[0] = BIT(REL_X) | BIT(REL_Y);
- psmouse->serio = serio;
psmouse->dev.private = psmouse;
+ psmouse->dev.dev = &serio->dev;
psmouse_set_state(psmouse, PSMOUSE_INITIALIZING);
serio->private = psmouse;
goto out;
}
+ psmouse->rate = psmouse_rate;
+ psmouse->resolution = psmouse_resolution;
+ psmouse->resetafter = psmouse_resetafter;
+ psmouse->smartscroll = psmouse_smartscroll;
+ psmouse->set_rate = psmouse_set_rate;
+ psmouse->set_resolution = psmouse_set_resolution;
+ psmouse->protocol_handler = psmouse_process_byte;
+ psmouse->pktsize = 3;
+
psmouse->type = psmouse_extensions(psmouse, psmouse_max_proto, 1);
- if (!psmouse->vendor)
- psmouse->vendor = "Generic";
- if (!psmouse->name)
- psmouse->name = "Mouse";
- if (!psmouse->protocol_handler)
- psmouse->protocol_handler = psmouse_process_byte;
sprintf(psmouse->devname, "%s %s %s",
psmouse_protocols[psmouse->type], psmouse->vendor, psmouse->name);
if (parent && parent->pt_activate)
parent->pt_activate(parent);
+ device_create_file(&serio->dev, &psmouse_attr_rate);
+ device_create_file(&serio->dev, &psmouse_attr_resolution);
+ device_create_file(&serio->dev, &psmouse_attr_resetafter);
+
if (serio->child) {
/*
* Nothing to be done here, serio core will detect that
psmouse_set_state(psmouse, PSMOUSE_INITIALIZING);
if (psmouse->reconnect) {
- if (psmouse->reconnect(psmouse))
+ if (psmouse->reconnect(psmouse))
goto out;
} else if (psmouse_probe(psmouse) < 0 ||
psmouse->type != psmouse_extensions(psmouse, psmouse_max_proto, 0))
.cleanup = psmouse_cleanup,
};
+ssize_t psmouse_attr_show_helper(struct device *dev, char *buf,
+ ssize_t (*handler)(struct psmouse *, char *))
+{
+ struct serio *serio = to_serio_port(dev);
+ int retval;
+
+ retval = serio_pin_driver(serio);
+ if (retval)
+ return retval;
+
+ if (serio->drv != &psmouse_drv) {
+ retval = -ENODEV;
+ goto out;
+ }
+
+ retval = handler(serio->private, buf);
+
+out:
+ serio_unpin_driver(serio);
+ return retval;
+}
+
+ssize_t psmouse_attr_set_helper(struct device *dev, const char *buf, size_t count,
+ ssize_t (*handler)(struct psmouse *, const char *, size_t))
+{
+ struct serio *serio = to_serio_port(dev);
+ struct psmouse *psmouse = serio->private, *parent = NULL;
+ int retval;
+
+ retval = serio_pin_driver(serio);
+ if (retval)
+ return retval;
+
+ if (serio->drv != &psmouse_drv) {
+ retval = -ENODEV;
+ goto out;
+ }
+
+ if (serio->parent && (serio->type & SERIO_TYPE) == SERIO_PS_PSTHRU) {
+ parent = serio->parent->private;
+ psmouse_deactivate(parent);
+ }
+ psmouse_deactivate(psmouse);
+
+ retval = handler(psmouse, buf, count);
+
+ psmouse_activate(psmouse);
+ if (parent)
+ psmouse_activate(parent);
+
+out:
+ serio_unpin_driver(serio);
+ return retval;
+}
+
+static ssize_t psmouse_attr_show_rate(struct psmouse *psmouse, char *buf)
+{
+ return sprintf(buf, "%d\n", psmouse->rate);
+}
+
+static ssize_t psmouse_attr_set_rate(struct psmouse *psmouse, const char *buf, size_t count)
+{
+ unsigned long value;
+ char *rest;
+
+ value = simple_strtoul(buf, &rest, 10);
+ if (*rest)
+ return -EINVAL;
+
+ psmouse->set_rate(psmouse, value);
+ return count;
+}
+
+static ssize_t psmouse_attr_show_resolution(struct psmouse *psmouse, char *buf)
+{
+ return sprintf(buf, "%d\n", psmouse->resolution);
+}
+
+static ssize_t psmouse_attr_set_resolution(struct psmouse *psmouse, const char *buf, size_t count)
+{
+ unsigned long value;
+ char *rest;
+
+ value = simple_strtoul(buf, &rest, 10);
+ if (*rest)
+ return -EINVAL;
+
+ psmouse->set_resolution(psmouse, value);
+ return count;
+}
+
+static ssize_t psmouse_attr_show_resetafter(struct psmouse *psmouse, char *buf)
+{
+ return sprintf(buf, "%d\n", psmouse->resetafter);
+}
+
+static ssize_t psmouse_attr_set_resetafter(struct psmouse *psmouse, const char *buf, size_t count)
+{
+ unsigned long value;
+ char *rest;
+
+ value = simple_strtoul(buf, &rest, 10);
+ if (*rest)
+ return -EINVAL;
+
+ psmouse->resetafter = value;
+ return count;
+}
+
static inline void psmouse_parse_proto(void)
{
if (psmouse_proto) {
#define _PSMOUSE_H
#define PSMOUSE_CMD_SETSCALE11 0x00e6
+#define PSMOUSE_CMD_SETSCALE21 0x00e7
#define PSMOUSE_CMD_SETRES 0x10e8
#define PSMOUSE_CMD_GETINFO 0x03e9
#define PSMOUSE_CMD_SETSTREAM 0x00ea
+#define PSMOUSE_CMD_SETPOLL 0x00f0
#define PSMOUSE_CMD_POLL 0x03eb
#define PSMOUSE_CMD_GETID 0x02f2
#define PSMOUSE_CMD_SETRATE 0x10f3
#define PSMOUSE_RET_ACK 0xfa
#define PSMOUSE_RET_NAK 0xfe
-#define PSMOUSE_FLAG_ACK 0 /* Waiting for ACK/NAK */
-#define PSMOUSE_FLAG_CMD 1 /* Waiting for command to finish */
-#define PSMOUSE_FLAG_CMD1 2 /* Waiting for the first byte of command response */
-#define PSMOUSE_FLAG_WAITID 3 /* Command execiting is GET ID */
-
enum psmouse_state {
PSMOUSE_IGNORE,
PSMOUSE_INITIALIZING,
struct psmouse {
void *private;
struct input_dev dev;
- struct serio *serio;
+ struct ps2dev ps2dev;
char *vendor;
char *name;
- unsigned char cmdbuf[8];
unsigned char packet[8];
- unsigned char cmdcnt;
unsigned char pktcnt;
+ unsigned char pktsize;
unsigned char type;
unsigned char model;
unsigned long last;
unsigned long out_of_sync;
enum psmouse_state state;
- unsigned char nak;
- char error;
char devname[64];
char phys[32];
- unsigned long flags;
- /* Used to signal completion from interrupt handler */
- wait_queue_head_t wait;
+ unsigned int rate;
+ unsigned int resolution;
+ unsigned int resetafter;
+ unsigned int smartscroll; /* Logitech only */
psmouse_ret_t (*protocol_handler)(struct psmouse *psmouse, struct pt_regs *regs);
+ void (*set_rate)(struct psmouse *psmouse, unsigned int rate);
+ void (*set_resolution)(struct psmouse *psmouse, unsigned int resolution);
+
int (*reconnect)(struct psmouse *psmouse);
void (*disconnect)(struct psmouse *psmouse);
void (*pt_deactivate)(struct psmouse *psmouse);
};
-#define PSMOUSE_PS2 1
-#define PSMOUSE_PS2PP 2
-#define PSMOUSE_PS2TPP 3
-#define PSMOUSE_GENPS 4
-#define PSMOUSE_IMPS 5
-#define PSMOUSE_IMEX 6
-#define PSMOUSE_SYNAPTICS 7
+enum psmouse_type {
+ PSMOUSE_NONE,
+ PSMOUSE_PS2,
+ PSMOUSE_PS2PP,
+ PSMOUSE_THINKPS,
+ PSMOUSE_GENPS,
+ PSMOUSE_IMPS,
+ PSMOUSE_IMEX,
+ PSMOUSE_SYNAPTICS,
+ PSMOUSE_ALPS,
+};
-int psmouse_command(struct psmouse *psmouse, unsigned char *param, int command);
int psmouse_sliced_command(struct psmouse *psmouse, unsigned char command);
int psmouse_reset(struct psmouse *psmouse);
+void psmouse_set_resolution(struct psmouse *psmouse, unsigned int resolution);
+
+ssize_t psmouse_attr_show_helper(struct device *dev, char *buf,
+ ssize_t (*handler)(struct psmouse *, char *));
+ssize_t psmouse_attr_set_helper(struct device *dev, const char *buf, size_t count,
+ ssize_t (*handler)(struct psmouse *, const char *, size_t));
-extern int psmouse_smartscroll;
-extern unsigned int psmouse_rate;
+#define PSMOUSE_DEFINE_ATTR(_name) \
+static ssize_t psmouse_attr_show_##_name(struct psmouse *, char *); \
+static ssize_t psmouse_attr_set_##_name(struct psmouse *, const char *, size_t);\
+static ssize_t psmouse_do_show_##_name(struct device *d, char *b) \
+{ \
+ return psmouse_attr_show_helper(d, b, psmouse_attr_show_##_name); \
+} \
+static ssize_t psmouse_do_set_##_name(struct device *d, const char *b, size_t s)\
+{ \
+ return psmouse_attr_set_helper(d, b, s, psmouse_attr_set_##_name); \
+} \
+static struct device_attribute psmouse_attr_##_name = \
+ __ATTR(_name, S_IWUSR | S_IRUGO, \
+ psmouse_do_show_##_name, psmouse_do_set_##_name);
#endif /* _PSMOUSE_H */
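
/*
 * For illustration only: a minimal sketch of how a protocol driver would
 * hook a new sysfs attribute in with PSMOUSE_DEFINE_ATTR, mirroring what
 * this patch does for "smartscroll", "rate", "resolution" and
 * "resetafter".  The attribute name "example" is hypothetical and its
 * handlers reuse the resetafter field purely as a stand-in.
 */
static ssize_t psmouse_attr_show_example(struct psmouse *psmouse, char *buf)
{
	return sprintf(buf, "%d\n", psmouse->resetafter);
}

static ssize_t psmouse_attr_set_example(struct psmouse *psmouse, const char *buf, size_t count)
{
	unsigned long value;
	char *rest;

	value = simple_strtoul(buf, &rest, 10);
	if (*rest)
		return -EINVAL;

	psmouse->resetafter = value;
	return count;
}

PSMOUSE_DEFINE_ATTR(example);

/*
 * The attribute is then registered against the serio device, typically
 * from the protocol's init code, and removed again on disconnect:
 *
 *	device_create_file(&psmouse->ps2dev.serio->dev, &psmouse_attr_example);
 *	device_remove_file(&psmouse->ps2dev.serio->dev, &psmouse_attr_example);
 */
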
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
+ * the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
- *
+ *
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
+ *
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
+ *
* Should you need to contact me, the author, you can do so either by
* e-mail - mail your message to <vojtech@ucw.cz>, or by paper mail:
* Vojtech Pavlik, Simunkova 1594, Prague 8, 182 00 Czech Republic
case 0:
if ((data & 0xf8) != 0x80) return;
- input_report_key(dev, BTN_LEFT, !(data & 4));
+ input_report_key(dev, BTN_LEFT, !(data & 4));
input_report_key(dev, BTN_RIGHT, !(data & 1));
input_report_key(dev, BTN_MIDDLE, !(data & 2));
break;
- case 1:
- case 3:
+ case 1:
+ case 3:
input_report_rel(dev, REL_X, data / 2);
input_report_rel(dev, REL_Y, -buf[1]);
buf[0] = data - data / 2;
break;
- case 2:
+ case 2:
case 4:
input_report_rel(dev, REL_X, buf[0]);
input_report_rel(dev, REL_Y, buf[1] - data);
case 3:
switch (sermouse->type) {
-
+
case SERIO_MS:
sermouse->type = SERIO_MP;
input_report_rel(dev, REL_WHEEL, (data & 8) - (data & 7));
break;
}
-
+
break;
case 4:
{
struct sermouse *sermouse;
unsigned char c;
-
+
if ((serio->type & SERIO_TYPE) != SERIO_RS232)
return;
sermouse->dev.id.vendor = sermouse->type;
sermouse->dev.id.product = c;
sermouse->dev.id.version = 0x0100;
+ sermouse->dev.dev = &serio->dev;
if (serio_open(serio, drv)) {
kfree(sermouse);
}
input_register_device(&sermouse->dev);
-
+
printk(KERN_INFO "input: %s on %s\n", sermouse_protocols[sermouse->type], serio->phys);
}
#include <linux/module.h>
#include <linux/input.h>
#include <linux/serio.h>
+#include <linux/libps2.h>
#include "psmouse.h"
#include "synaptics.h"
{
if (psmouse_sliced_command(psmouse, c))
return -1;
- if (psmouse_command(psmouse, param, PSMOUSE_CMD_GETINFO))
+ if (ps2_command(&psmouse->ps2dev, param, PSMOUSE_CMD_GETINFO))
return -1;
return 0;
}
if (psmouse_sliced_command(psmouse, mode))
return -1;
param[0] = SYN_PS_SET_MODE2;
- if (psmouse_command(psmouse, param, PSMOUSE_CMD_SETRATE))
+ if (ps2_command(&psmouse->ps2dev, param, PSMOUSE_CMD_SETRATE))
return -1;
return 0;
}
return 0;
}
-static int synaptics_set_mode(struct psmouse *psmouse, int mode)
+static int synaptics_set_absolute_mode(struct psmouse *psmouse)
{
struct synaptics_data *priv = psmouse->private;
- mode |= SYN_BIT_ABSOLUTE_MODE;
- if (psmouse_rate >= 80)
- mode |= SYN_BIT_HIGH_RATE;
+ priv->mode = SYN_BIT_ABSOLUTE_MODE;
if (SYN_ID_MAJOR(priv->identity) >= 4)
- mode |= SYN_BIT_DISABLE_GESTURE;
+ priv->mode |= SYN_BIT_DISABLE_GESTURE;
if (SYN_CAP_EXTENDED(priv->capabilities))
- mode |= SYN_BIT_W_MODE;
- if (synaptics_mode_cmd(psmouse, mode))
+ priv->mode |= SYN_BIT_W_MODE;
+
+ if (synaptics_mode_cmd(psmouse, priv->mode))
return -1;
return 0;
}
+static void synaptics_set_rate(struct psmouse *psmouse, unsigned int rate)
+{
+ struct synaptics_data *priv = psmouse->private;
+
+ if (rate >= 80) {
+ priv->mode |= SYN_BIT_HIGH_RATE;
+ psmouse->rate = 80;
+ } else {
+ priv->mode &= ~SYN_BIT_HIGH_RATE;
+ psmouse->rate = 40;
+ }
+
+ synaptics_mode_cmd(psmouse, priv->mode);
+}
+
/*****************************************************************************
* Synaptics pass-through PS/2 port support
****************************************************************************/
if (psmouse_sliced_command(parent, c))
return -1;
- if (psmouse_command(parent, &rate_param, PSMOUSE_CMD_SETRATE))
+ if (ps2_command(&parent->ps2dev, &rate_param, PSMOUSE_CMD_SETRATE))
return -1;
return 0;
}
static void synaptics_pt_activate(struct psmouse *psmouse)
{
- struct psmouse *child = psmouse->serio->child->private;
+ struct psmouse *child = psmouse->ps2dev.serio->child->private;
+ struct synaptics_data *priv = psmouse->private;
/* adjust the touchpad to child's choice of protocol */
- if (child && child->type >= PSMOUSE_GENPS) {
- if (synaptics_set_mode(psmouse, SYN_BIT_FOUR_BYTE_CLIENT))
- printk(KERN_INFO "synaptics: failed to enable 4-byte guest protocol\n");
+ if (child) {
+ if (child->type >= PSMOUSE_GENPS)
+ priv->mode |= SYN_BIT_FOUR_BYTE_CLIENT;
+ else
+ priv->mode &= ~SYN_BIT_FOUR_BYTE_CLIENT;
+
+ if (synaptics_mode_cmd(psmouse, priv->mode))
+ printk(KERN_INFO "synaptics: failed to switch guest protocol\n");
}
}
strlcpy(serio->name, "Synaptics pass-through", sizeof(serio->name));
strlcpy(serio->phys, "synaptics-pt/serio0", sizeof(serio->name));
serio->write = synaptics_pt_write;
- serio->parent = psmouse->serio;
+ serio->parent = psmouse->ps2dev.serio;
psmouse->pt_activate = synaptics_pt_activate;
- psmouse->serio->child = serio;
+ psmouse->ps2dev.serio->child = serio;
}
/*****************************************************************************
priv->pkt_type = synaptics_detect_pkt_type(psmouse);
if (SYN_CAP_PASS_THROUGH(priv->capabilities) && synaptics_is_pt_packet(psmouse->packet)) {
- if (psmouse->serio->child)
- synaptics_pass_pt_packet(psmouse->serio->child, psmouse->packet);
+ if (psmouse->ps2dev.serio->child)
+ synaptics_pass_pt_packet(psmouse->ps2dev.serio->child, psmouse->packet);
} else
synaptics_process_packet(psmouse);
struct synaptics_data *priv = psmouse->private;
struct synaptics_data old_priv = *priv;
- if (!synaptics_detect(psmouse))
+ if (synaptics_detect(psmouse, 0))
return -1;
if (synaptics_query_hardware(psmouse)) {
old_priv.ext_cap != priv->ext_cap)
return -1;
- if (synaptics_set_mode(psmouse, 0)) {
+ if (synaptics_set_absolute_mode(psmouse)) {
printk(KERN_ERR "Unable to initialize Synaptics hardware.\n");
return -1;
}
return 0;
}
-int synaptics_detect(struct psmouse *psmouse)
+int synaptics_detect(struct psmouse *psmouse, int set_properties)
{
+ struct ps2dev *ps2dev = &psmouse->ps2dev;
unsigned char param[4];
param[0] = 0;
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_SETRES);
- psmouse_command(psmouse, param, PSMOUSE_CMD_GETINFO);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES);
+ ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO);
+
+ if (param[1] != 0x47)
+ return -1;
+
+ if (set_properties) {
+ psmouse->vendor = "Synaptics";
+ psmouse->name = "TouchPad";
+ }
- return param[1] == 0x47;
+ return 0;
}
int synaptics_init(struct psmouse *psmouse)
goto init_fail;
}
- if (synaptics_set_mode(psmouse, 0)) {
+ if (synaptics_set_absolute_mode(psmouse)) {
printk(KERN_ERR "Unable to initialize Synaptics hardware.\n");
goto init_fail;
}
set_input_params(&psmouse->dev, priv);
psmouse->protocol_handler = synaptics_process_byte;
+ psmouse->set_rate = synaptics_set_rate;
psmouse->disconnect = synaptics_disconnect;
psmouse->reconnect = synaptics_reconnect;
+ psmouse->pktsize = 6;
return 0;
#ifndef _SYNAPTICS_H
#define _SYNAPTICS_H
-extern int synaptics_detect(struct psmouse *psmouse);
+extern int synaptics_detect(struct psmouse *psmouse, int set_properties);
extern int synaptics_init(struct psmouse *psmouse);
extern void synaptics_reset(struct psmouse *psmouse);
unsigned long int ext_cap; /* Extended Capabilities */
unsigned long int identity; /* Identification */
- /* Data for normal processing */
- int old_w; /* Previous w value */
unsigned char pkt_type; /* packet type - old, new, etc */
+ unsigned char mode; /* current mode byte */
};
#endif /* _SYNAPTICS_H */
/*
- * DEC VSXXX-AA and VSXXX-GA mouse driver.
+ * Driver for DEC VSXXX-AA mouse (hockey-puck mouse, ball or two rollers)
+ * DEC VSXXX-GA mouse (rectangular mouse, with ball)
+ * DEC VSXXX-AB tablet (digitizer with hair cross or stylus)
*
* Copyright (C) 2003-2004 by Jan-Benedict Glaw <jbglaw@lug-owl.de>
*
- * The packet format was taken from a patch to GPM which is (C) 2001
+ * The packet format was initially taken from a patch to GPM which is (C) 2001
* by Karsten Merker <merker@linuxtag.org>
* and Maciej W. Rozycki <macro@ds2.pg.gda.pl>
+ * Later on, I had access to the device's documentation (referenced below).
*/
/*
*/
/*
- * Building an adaptor to DB9 / DB25 RS232
+ * Building an adaptor to DE9 / DB25 RS232
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* DISCLAIMER: Use this description AT YOUR OWN RISK! I'll not pay for
* | 4 --- 3 |
* \ 2 1 /
* -------
- *
- * DEC socket DB9 DB25 Note
+ *
+ * DEC socket DE9 DB25 Note
* 1 (GND) 5 7 -
* 2 (RxD) 2 3 -
* 3 (TxD) 3 2 -
#include <linux/serio.h>
#include <linux/init.h>
-#define DRIVER_DESC "Serial DEC VSXXX-AA/GA mouse / DEC tablet driver"
+#define DRIVER_DESC "Driver for DEC VSXXX-AA and -GA mice and VSXXX-AB tablet"
MODULE_AUTHOR ("Jan-Benedict Glaw <jbglaw@lug-owl.de>");
MODULE_DESCRIPTION (DRIVER_DESC);
#define VSXXXAA_PACKET_REL 0x80
#define VSXXXAA_PACKET_ABS 0xc0
#define VSXXXAA_PACKET_POR 0xa0
-#define MATCH_PACKET_TYPE(data, type) (((data) & VSXXXAA_PACKET_MASK) == type)
+#define MATCH_PACKET_TYPE(data, type) (((data) & VSXXXAA_PACKET_MASK) == (type))
{
switch (mouse->type) {
case 0x02:
- sprintf (mouse->name, "DEC VSXXX-AA/GA mouse");
+ sprintf (mouse->name, "DEC VSXXX-AA/-GA mouse");
break;
case 0x04:
break;
default:
- sprintf (mouse->name, "unknown DEC pointer device");
+ sprintf (mouse->name, "unknown DEC pointer device "
+ "(type = 0x%02x)", mouse->type);
break;
}
*
* M: manufacturer location code
* R: revision code
- * E: Error code. I'm not sure about these, but gpm's sources,
- * which support this mouse, too, tell about them:
- * E = [0x00 .. 0x1f]: no error, byte #3 is button state
- * E = 0x3d: button error, byte #3 tells which one.
- * E = <else>: other error
+ * E: Error code. If it's in the range of 0x00..0x1f, only some
+ * minor problem occurred. Errors >= 0x20 are considered bad
+ * and the device may not work properly...
* D: <0010> == mouse, <0100> == tablet
- *
*/
mouse->version = buf[0] & 0x0f;
vsxxxaa_detection_done (mouse);
if (error <= 0x1f) {
- /* No error. Report buttons */
+ /* No (serious) error. Report buttons */
input_regs (dev, regs);
input_report_key (dev, BTN_LEFT, left);
input_report_key (dev, BTN_MIDDLE, middle);
input_report_key (dev, BTN_RIGHT, right);
input_report_key (dev, BTN_TOUCH, 0);
input_sync (dev);
- } else {
- printk (KERN_ERR "Your %s on %s reports an undefined error, "
- "please check it...\n", mouse->name,
- mouse->phys);
+
+ if (error != 0)
+ printk (KERN_INFO "Your %s on %s reports error=0x%02x\n",
+ mouse->name, mouse->phys, error);
+
}
/*
* If the mouse was hot-plugged, we need to force differential mode
 * now... However, give it a second to recover from its reset.
*/
- printk (KERN_NOTICE "%s on %s: Forceing standard packet format and "
- "streaming mode\n", mouse->name, mouse->phys);
- mouse->serio->write (mouse->serio, 'S');
+ printk (KERN_NOTICE "%s on %s: Forcing standard packet format, "
+ "incremental streaming mode and 72 samples/sec\n",
+ mouse->name, mouse->phys);
+ mouse->serio->write (mouse->serio, 'S'); /* Standard format */
+ mdelay (50);
+ mouse->serio->write (mouse->serio, 'R'); /* Incremental */
mdelay (50);
- mouse->serio->write (mouse->serio, 'R');
+ mouse->serio->write (mouse->serio, 'L'); /* 72 samples/sec */
}
static void
mouse->dev.private = mouse;
serio->private = mouse;
- sprintf (mouse->name, "DEC VSXXX-AA/GA mouse or VSXXX-AB digitizer");
+ sprintf (mouse->name, "DEC VSXXX-AA/-GA mouse or VSXXX-AB digitizer");
sprintf (mouse->phys, "%s/input0", serio->phys);
mouse->dev.name = mouse->name;
mouse->dev.phys = mouse->phys;
mouse->dev.id.bustype = BUS_RS232;
+ mouse->dev.dev = &serio->dev;
mouse->serio = serio;
if (serio_open (serio, drv)) {
obj-$(CONFIG_SERIO_GSCPS2) += gscps2.o
obj-$(CONFIG_SERIO_PCIPS2) += pcips2.o
obj-$(CONFIG_SERIO_MACEPS2) += maceps2.o
+obj-$(CONFIG_SERIO_LIBPS2) += libps2.o
obj-$(CONFIG_SERIO_RAW) += serio_raw.o
gscps2_enable(ps2port, DISABLE);
}
-static struct serio gscps2_serio_port =
-{
- .name = "GSC PS/2",
- .idbus = BUS_GSC,
- .idvendor = PCI_VENDOR_ID_HP,
- .idproduct = 0x0001,
- .idversion = 0x0010,
- .type = SERIO_8042,
- .write = gscps2_write,
- .open = gscps2_open,
- .close = gscps2_close,
-};
-
/**
* gscps2_probe() - Probes PS2 devices
* @return: success/error report
--- /dev/null
+#ifndef _I8042_X86IA64IO_H
+#define _I8042_X86IA64IO_H
+
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+/*
+ * Names.
+ */
+
+#define I8042_KBD_PHYS_DESC "isa0060/serio0"
+#define I8042_AUX_PHYS_DESC "isa0060/serio1"
+#define I8042_MUX_PHYS_DESC "isa0060/serio%d"
+
+/*
+ * IRQs.
+ */
+
+#if defined(__ia64__)
+# define I8042_MAP_IRQ(x) isa_irq_to_vector((x))
+#else
+# define I8042_MAP_IRQ(x) (x)
+#endif
+
+#define I8042_KBD_IRQ i8042_kbd_irq
+#define I8042_AUX_IRQ i8042_aux_irq
+
+static int i8042_kbd_irq;
+static int i8042_aux_irq;
+
+/*
+ * Register numbers.
+ */
+
+#define I8042_COMMAND_REG i8042_command_reg
+#define I8042_STATUS_REG i8042_command_reg
+#define I8042_DATA_REG i8042_data_reg
+
+static int i8042_command_reg = 0x64;
+static int i8042_data_reg = 0x60;
+
+
+static inline int i8042_read_data(void)
+{
+ return inb(I8042_DATA_REG);
+}
+
+static inline int i8042_read_status(void)
+{
+ return inb(I8042_STATUS_REG);
+}
+
+static inline void i8042_write_data(int val)
+{
+ outb(val, I8042_DATA_REG);
+}
+
+static inline void i8042_write_command(int val)
+{
+ outb(val, I8042_COMMAND_REG);
+}
+
+#if defined(__i386__)
+
+#include <linux/dmi.h>
+
+static struct dmi_system_id __initdata i8042_dmi_table[] = {
+ {
+ .ident = "Compaq Proliant 8500",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+ DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "8500"),
+ },
+ },
+ {
+ .ident = "Compaq Proliant DL760",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+ DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "DL760"),
+ },
+ },
+ { }
+};
+#endif
+
+#if defined(__ia64__) && defined(CONFIG_ACPI)
+#include <linux/acpi.h>
+#include <acpi/acpi_bus.h>
+
+struct i8042_acpi_resources {
+ unsigned int port1;
+ unsigned int port2;
+ unsigned int irq;
+};
+
+static int i8042_acpi_kbd_registered;
+static int i8042_acpi_aux_registered;
+
+static acpi_status i8042_acpi_parse_resource(struct acpi_resource *res, void *data)
+{
+ struct i8042_acpi_resources *i8042_res = data;
+ struct acpi_resource_io *io;
+ struct acpi_resource_fixed_io *fixed_io;
+ struct acpi_resource_irq *irq;
+ struct acpi_resource_ext_irq *ext_irq;
+
+ switch (res->id) {
+ case ACPI_RSTYPE_IO:
+ io = &res->data.io;
+ if (io->range_length) {
+ if (!i8042_res->port1)
+ i8042_res->port1 = io->min_base_address;
+ else
+ i8042_res->port2 = io->min_base_address;
+ }
+ break;
+
+ case ACPI_RSTYPE_FIXED_IO:
+ fixed_io = &res->data.fixed_io;
+ if (fixed_io->range_length) {
+ if (!i8042_res->port1)
+ i8042_res->port1 = fixed_io->base_address;
+ else
+ i8042_res->port2 = fixed_io->base_address;
+ }
+ break;
+
+ case ACPI_RSTYPE_IRQ:
+ irq = &res->data.irq;
+ if (irq->number_of_interrupts > 0)
+ i8042_res->irq =
+ acpi_register_gsi(irq->interrupts[0],
+ irq->edge_level,
+ irq->active_high_low);
+ break;
+
+ case ACPI_RSTYPE_EXT_IRQ:
+ ext_irq = &res->data.extended_irq;
+ if (ext_irq->number_of_interrupts > 0)
+ i8042_res->irq =
+ acpi_register_gsi(ext_irq->interrupts[0],
+ ext_irq->edge_level,
+ ext_irq->active_high_low);
+ break;
+ }
+ return AE_OK;
+}
+
+static int i8042_acpi_kbd_add(struct acpi_device *device)
+{
+ struct i8042_acpi_resources kbd_res;
+ acpi_status status;
+
+ memset(&kbd_res, 0, sizeof(kbd_res));
+ status = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+ i8042_acpi_parse_resource, &kbd_res);
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+
+ if (kbd_res.port1)
+ i8042_data_reg = kbd_res.port1;
+ else
+ printk(KERN_WARNING "ACPI: [%s] has no data port; default is 0x%x\n",
+ acpi_device_bid(device), i8042_data_reg);
+
+ if (kbd_res.port2)
+ i8042_command_reg = kbd_res.port2;
+ else
+ printk(KERN_WARNING "ACPI: [%s] has no command port; default is 0x%x\n",
+ acpi_device_bid(device), i8042_command_reg);
+
+ if (kbd_res.irq)
+ i8042_kbd_irq = kbd_res.irq;
+ else
+ printk(KERN_WARNING "ACPI: [%s] has no IRQ; default is %d\n",
+ acpi_device_bid(device), i8042_kbd_irq);
+
+ strncpy(acpi_device_name(device), "PS/2 Keyboard Controller",
+ sizeof(acpi_device_name(device)));
+ printk("ACPI: %s [%s] at I/O 0x%x, 0x%x, irq %d\n",
+ acpi_device_name(device), acpi_device_bid(device),
+ i8042_data_reg, i8042_command_reg, i8042_kbd_irq);
+
+ return 0;
+}
+
+static int i8042_acpi_aux_add(struct acpi_device *device)
+{
+ struct i8042_acpi_resources aux_res;
+ acpi_status status;
+
+ memset(&aux_res, 0, sizeof(aux_res));
+ status = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+ i8042_acpi_parse_resource, &aux_res);
+ if (ACPI_FAILURE(status))
+ return -ENODEV;
+
+ if (aux_res.irq)
+ i8042_aux_irq = aux_res.irq;
+ else
+ printk(KERN_WARNING "ACPI: [%s] has no IRQ; default is %d\n",
+ acpi_device_bid(device), i8042_aux_irq);
+
+ strncpy(acpi_device_name(device), "PS/2 Mouse Controller",
+ sizeof(acpi_device_name(device)));
+ printk("ACPI: %s [%s] at irq %d\n",
+ acpi_device_name(device), acpi_device_bid(device), i8042_aux_irq);
+
+ return 0;
+}
+
+static struct acpi_driver i8042_acpi_kbd_driver = {
+ .name = "i8042",
+ .ids = "PNP0303,PNP030B",
+ .ops = {
+ .add = i8042_acpi_kbd_add,
+ },
+};
+
+static struct acpi_driver i8042_acpi_aux_driver = {
+ .name = "i8042",
+ .ids = "PNP0F03,PNP0F0B,PNP0F0E,PNP0F12,PNP0F13,SYN0801",
+ .ops = {
+ .add = i8042_acpi_aux_add,
+ },
+};
+
+static int i8042_acpi_init(void)
+{
+ int result;
+
+ if (acpi_disabled || i8042_noacpi) {
+ printk("i8042: ACPI detection disabled\n");
+ return 0;
+ }
+
+ result = acpi_bus_register_driver(&i8042_acpi_kbd_driver);
+ if (result < 0)
+ return result;
+
+ if (result == 0) {
+ acpi_bus_unregister_driver(&i8042_acpi_kbd_driver);
+ return -ENODEV;
+ }
+ i8042_acpi_kbd_registered = 1;
+
+ result = acpi_bus_register_driver(&i8042_acpi_aux_driver);
+ if (result >= 0)
+ i8042_acpi_aux_registered = 1;
+ if (result == 0)
+ i8042_noaux = 1;
+
+ return 0;
+}
+
+static void i8042_acpi_exit(void)
+{
+ if (i8042_acpi_kbd_registered)
+ acpi_bus_unregister_driver(&i8042_acpi_kbd_driver);
+
+ if (i8042_acpi_aux_registered)
+ acpi_bus_unregister_driver(&i8042_acpi_aux_driver);
+}
+#endif
+
+static inline int i8042_platform_init(void)
+{
+/*
+ * On ix86 platforms touching the i8042 data register region can do really
+ * bad things. Because of this the region is always reserved on ix86 boxes.
+ *
+ * if (!request_region(I8042_DATA_REG, 16, "i8042"))
+ * return -1;
+ */
+
+ i8042_kbd_irq = I8042_MAP_IRQ(1);
+ i8042_aux_irq = I8042_MAP_IRQ(12);
+
+#if defined(__ia64__) && defined(CONFIG_ACPI)
+ if (i8042_acpi_init())
+ return -1;
+#endif
+
+#if defined(__ia64__)
+ i8042_reset = 1;
+#endif
+
+#if defined(__i386__)
+ if (dmi_check_system(i8042_dmi_table))
+ i8042_noloop = 1;
+#endif
+
+ return 0;
+}
+
+static inline void i8042_platform_exit(void)
+{
+#if defined(__ia64__) && defined(CONFIG_ACPI)
+ i8042_acpi_exit();
+#endif
+}
+
+#endif /* _I8042_X86IA64IO_H */
* Arch-dependent inline functions and defines.
*/
-#if defined(CONFIG_MIPS_JAZZ)
+#if defined(CONFIG_MACH_JAZZ)
#include "i8042-jazzio.h"
#elif defined(CONFIG_SGI_IP22)
#include "i8042-ip22io.h"
#include "i8042-ppcio.h"
#elif defined(CONFIG_SPARC32) || defined(CONFIG_SPARC64)
#include "i8042-sparcio.h"
+#elif defined(CONFIG_X86) || defined(CONFIG_IA64)
+#include "i8042-x86ia64io.h"
#else
#include "i8042-io.h"
#endif
#ifdef DEBUG
static unsigned long i8042_start;
#define dbg_init() do { i8042_start = jiffies; } while (0)
-#define dbg(format, arg...) printk(KERN_DEBUG __FILE__ ": " format " [%d]\n" ,\
- ## arg, (int) (jiffies - i8042_start))
+#define dbg(format, arg...) \
+ do { \
+ if (i8042_debug) \
+ printk(KERN_DEBUG __FILE__ ": " format " [%d]\n" , \
+ ## arg, (int) (jiffies - i8042_start)); \
+ } while (0)
#else
#define dbg_init() do { } while (0)
#define dbg(format, arg...) do {} while (0)
--- /dev/null
+/*
+ * PS/2 driver library
+ *
+ * Copyright (c) 1999-2002 Vojtech Pavlik
+ * Copyright (c) 2004 Dmitry Torokhov
+ */
+
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/input.h>
+#include <linux/serio.h>
+#include <linux/init.h>
+#include <linux/libps2.h>
+
+#define DRIVER_DESC "PS/2 driver library"
+
+MODULE_AUTHOR("Dmitry Torokhov <dtor@mail.ru>");
+MODULE_DESCRIPTION("PS/2 driver library");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(ps2_init);
+EXPORT_SYMBOL(ps2_sendbyte);
+EXPORT_SYMBOL(ps2_command);
+EXPORT_SYMBOL(ps2_schedule_command);
+EXPORT_SYMBOL(ps2_handle_ack);
+EXPORT_SYMBOL(ps2_handle_response);
+EXPORT_SYMBOL(ps2_cmd_aborted);
+
+/* Work structure to schedule execution of a command */
+struct ps2work {
+ struct work_struct work;
+ struct ps2dev *ps2dev;
+ int command;
+ unsigned char param[0];
+};
+
+
+/*
+ * ps2_sendbyte() sends a byte to the mouse and waits for acknowledgement.
+ * It doesn't handle retransmission, though it could; if retransmissions were
+ * ever needed, the mouse would have to be replaced anyway.
+ *
+ * ps2_sendbyte() can only be called from a process context
+ */
+
+int ps2_sendbyte(struct ps2dev *ps2dev, unsigned char byte, int timeout)
+{
+ serio_pause_rx(ps2dev->serio);
+ ps2dev->nak = 1;
+ ps2dev->flags |= PS2_FLAG_ACK;
+ serio_continue_rx(ps2dev->serio);
+
+ if (serio_write(ps2dev->serio, byte) == 0)
+ wait_event_timeout(ps2dev->wait,
+ !(ps2dev->flags & PS2_FLAG_ACK),
+ msecs_to_jiffies(timeout));
+
+ serio_pause_rx(ps2dev->serio);
+ ps2dev->flags &= ~PS2_FLAG_ACK;
+ serio_continue_rx(ps2dev->serio);
+
+ return -ps2dev->nak;
+}
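
/*
 * Usage sketch (hypothetical caller, not taken from an existing driver):
 * enabling data reporting on a device. 0xf4 is the standard PS/2 "enable
 * data reporting" byte; 200 ms matches the per-byte timeout ps2_command()
 * uses for ordinary command bytes.
 */
static int example_enable_reporting(struct ps2dev *ps2dev)
{
	if (ps2_sendbyte(ps2dev, 0xf4, 200))
		return -EIO;	/* device NAKed the byte or it timed out */
	return 0;
}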
+
+/*
+ * ps2_command() sends a command and its parameters to the mouse,
+ * then waits for the response and puts it in the param array.
+ *
+ * ps2_command() can only be called from a process context
+ */
+
+int ps2_command(struct ps2dev *ps2dev, unsigned char *param, int command)
+{
+ int timeout;
+ int send = (command >> 12) & 0xf;
+ int receive = (command >> 8) & 0xf;
+ int rc = -1;
+ int i;
+
+ down(&ps2dev->cmd_sem);
+
+ serio_pause_rx(ps2dev->serio);
+ ps2dev->flags = command == PS2_CMD_GETID ? PS2_FLAG_WAITID : 0;
+ ps2dev->cmdcnt = receive;
+ if (receive && param)
+ for (i = 0; i < receive; i++)
+ ps2dev->cmdbuf[(receive - 1) - i] = param[i];
+ serio_continue_rx(ps2dev->serio);
+
+ /*
+ * Some devices (Synaptics) perform the reset before
+ * ACKing the reset command, and so it can take a long
+ * time before the ACK arrives.
+ */
+ if (command & 0xff)
+ if (ps2_sendbyte(ps2dev, command & 0xff,
+ command == PS2_CMD_RESET_BAT ? 1000 : 200))
+ goto out;
+
+ for (i = 0; i < send; i++)
+ if (ps2_sendbyte(ps2dev, param[i], 200))
+ goto out;
+
+ /*
+ * The reset command takes a long time to execute.
+ */
+ timeout = msecs_to_jiffies(command == PS2_CMD_RESET_BAT ? 4000 : 500);
+
+ timeout = wait_event_timeout(ps2dev->wait,
+ !(ps2dev->flags & PS2_FLAG_CMD1), timeout);
+
+ if (ps2dev->cmdcnt && timeout > 0) {
+
+ if (command == PS2_CMD_RESET_BAT && timeout > msecs_to_jiffies(100)) {
+ /*
+ * Device has sent the first response byte
+ * after a reset command, reset is thus done,
+ * shorten the timeout. The next byte will come
+ * soon (keyboard) or not at all (mouse).
+ */
+ timeout = msecs_to_jiffies(100);
+ }
+
+ if (command == PS2_CMD_GETID &&
+ ps2dev->cmdbuf[receive - 1] != 0xab && /* Regular keyboards */
+ ps2dev->cmdbuf[receive - 1] != 0xac && /* NCD Sun keyboard */
+ ps2dev->cmdbuf[receive - 1] != 0x2b && /* Trust keyboard, translated */
+ ps2dev->cmdbuf[receive - 1] != 0x5d && /* Trust keyboard */
+ ps2dev->cmdbuf[receive - 1] != 0x60 && /* NMB SGI keyboard, translated */
+ ps2dev->cmdbuf[receive - 1] != 0x47) { /* NMB SGI keyboard */
+ /*
+ * Device behind the port is not a keyboard
+ * so we don't need to wait for the 2nd byte
+ * of ID response.
+ */
+ serio_pause_rx(ps2dev->serio);
+ ps2dev->flags = ps2dev->cmdcnt = 0;
+ serio_continue_rx(ps2dev->serio);
+ }
+
+ wait_event_timeout(ps2dev->wait,
+ !(ps2dev->flags & PS2_FLAG_CMD), timeout);
+ }
+
+ if (param)
+ for (i = 0; i < receive; i++)
+ param[i] = ps2dev->cmdbuf[(receive - 1) - i];
+
+ if (ps2dev->cmdcnt && (command != PS2_CMD_RESET_BAT || ps2dev->cmdcnt != 1))
+ goto out;
+
+ rc = 0;
+
+out:
+ serio_pause_rx(ps2dev->serio);
+ ps2dev->flags = 0;
+ serio_continue_rx(ps2dev->serio);
+
+ up(&ps2dev->cmd_sem);
+ return rc;
+}
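
/*
 * Usage sketch (hypothetical caller, not from an existing driver): the
 * command word packs the number of parameter bytes to send in bits 12..15
 * and the number of response bytes to expect in bits 8..11, with the
 * command byte itself in bits 0..7. PS2_CMD_GETID (0x02f2), for example,
 * sends command byte 0xf2 and expects a two-byte ID in return.
 */
static int example_read_device_id(struct ps2dev *ps2dev, unsigned char *id)
{
	unsigned char param[2];

	if (ps2_command(ps2dev, param, PS2_CMD_GETID))
		return -EIO;

	*id = param[0];		/* first response byte carries the device ID */
	return 0;
}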
+
+/*
+ * ps2_execute_scheduled_command() sends a command, previously scheduled by
+ * ps2_schedule_command(), to a PS/2 device (keyboard, mouse, etc.)
+ */
+
+static void ps2_execute_scheduled_command(void *data)
+{
+ struct ps2work *ps2work = data;
+
+ ps2_command(ps2work->ps2dev, ps2work->param, ps2work->command);
+ kfree(ps2work);
+}
+
+/*
+ * ps2_schedule_command() schedules delayed execution of a PS/2 command
+ * and can be used to issue a command from an interrupt or softirq
+ * context.
+ */
+
+int ps2_schedule_command(struct ps2dev *ps2dev, unsigned char *param, int command)
+{
+ struct ps2work *ps2work;
+ int send = (command >> 12) & 0xf;
+ int receive = (command >> 8) & 0xf;
+
+ if (!(ps2work = kmalloc(sizeof(struct ps2work) + max(send, receive), GFP_ATOMIC)))
+ return -1;
+
+ memset(ps2work, 0, sizeof(struct ps2work));
+ ps2work->ps2dev = ps2dev;
+ ps2work->command = command;
+ memcpy(ps2work->param, param, send);
+ INIT_WORK(&ps2work->work, ps2_execute_scheduled_command, ps2work);
+
+ if (!schedule_work(&ps2work->work)) {
+ kfree(ps2work);
+ return -1;
+ }
+
+ return 0;
+}
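
/*
 * Sketch (illustrative command word, hypothetical caller): issuing a command
 * from atomic context, where ps2_command() cannot be used because it sleeps.
 * 0x10f3 encodes one parameter byte to send and no response bytes, with
 * command byte 0xf3 ("set sample rate" on PS/2 mice). The parameter is
 * copied before the work is scheduled, so a stack variable is safe here.
 */
static void example_set_rate_atomic(struct ps2dev *ps2dev, unsigned char rate)
{
	if (ps2_schedule_command(ps2dev, &rate, 0x10f3))
		printk(KERN_WARNING "could not schedule PS/2 command\n");
}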
+
+/*
+ * ps2_init() initializes the ps2dev structure
+ */
+
+void ps2_init(struct ps2dev *ps2dev, struct serio *serio)
+{
+ init_MUTEX(&ps2dev->cmd_sem);
+ init_waitqueue_head(&ps2dev->wait);
+ ps2dev->serio = serio;
+}
+
+/*
+ * ps2_handle_ack() is supposed to be used in an interrupt handler
+ * to properly process the ACK/NAK of a command from a PS/2 device.
+ */
+
+int ps2_handle_ack(struct ps2dev *ps2dev, unsigned char data)
+{
+ switch (data) {
+ case PS2_RET_ACK:
+ ps2dev->nak = 0;
+ break;
+
+ case PS2_RET_NAK:
+ ps2dev->nak = 1;
+ break;
+
+ /*
+ * Workaround for mice which don't ACK the Get ID command.
+ * These are valid mouse IDs that we recognize.
+ */
+ case 0x00:
+ case 0x03:
+ case 0x04:
+ if (ps2dev->flags & PS2_FLAG_WAITID) {
+ ps2dev->nak = 0;
+ break;
+ }
+ /* Fall through */
+ default:
+ return 0;
+ }
+
+
+ if (!ps2dev->nak && ps2dev->cmdcnt)
+ ps2dev->flags |= PS2_FLAG_CMD | PS2_FLAG_CMD1;
+
+ ps2dev->flags &= ~PS2_FLAG_ACK;
+ wake_up(&ps2dev->wait);
+
+ if (data != PS2_RET_ACK)
+ ps2_handle_response(ps2dev, data);
+
+ return 1;
+}
+
+/*
+ * ps2_handle_response() is supposed to be used in an interrupt handler
+ * to properly store the device's response to a command and notify the
+ * process waiting for completion of the command.
+ */
+
+int ps2_handle_response(struct ps2dev *ps2dev, unsigned char data)
+{
+ if (ps2dev->cmdcnt)
+ ps2dev->cmdbuf[--ps2dev->cmdcnt] = data;
+
+ if (ps2dev->flags & PS2_FLAG_CMD1) {
+ ps2dev->flags &= ~PS2_FLAG_CMD1;
+ if (ps2dev->cmdcnt)
+ wake_up(&ps2dev->wait);
+ }
+
+ if (!ps2dev->cmdcnt) {
+ ps2dev->flags &= ~PS2_FLAG_CMD;
+ wake_up(&ps2dev->wait);
+ }
+
+ return 1;
+}
+
+void ps2_cmd_aborted(struct ps2dev *ps2dev)
+{
+ if (ps2dev->flags & PS2_FLAG_ACK)
+ ps2dev->nak = 1;
+
+ if (ps2dev->flags & (PS2_FLAG_ACK | PS2_FLAG_CMD))
+ wake_up(&ps2dev->wait);
+
+ ps2dev->flags = 0;
+}
+
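/*
 * Sketch of how a serio driver's interrupt routine might feed received bytes
 * into the library (the driver and its use of serio->private are assumed for
 * illustration): bytes that acknowledge or answer a pending command are
 * consumed by ps2_handle_ack()/ps2_handle_response(), everything else is
 * ordinary report data.
 */
static irqreturn_t example_interrupt(struct serio *serio, unsigned char data,
				     unsigned int flags, struct pt_regs *regs)
{
	struct ps2dev *ps2dev = serio->private;

	if (ps2dev->flags & PS2_FLAG_ACK)
		if (ps2_handle_ack(ps2dev, data))
			return IRQ_HANDLED;

	if (ps2dev->flags & PS2_FLAG_CMD)
		if (ps2_handle_response(ps2dev, data))
			return IRQ_HANDLED;

	/* otherwise handle 'data' as an ordinary report byte */
	return IRQ_HANDLED;
}
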
MODULE_DESCRIPTION("Parallel port to Keyboard port adapter driver");
MODULE_LICENSE("GPL");
-MODULE_PARM(parkbd, "1i");
-MODULE_PARM(parkbd_mode, "1i");
+static unsigned int parkbd_pp_no;
+module_param_named(port, parkbd_pp_no, int, 0);
+MODULE_PARM_DESC(port, "Parallel port the adapter is connected to (default is 0)");
+
+static unsigned int parkbd_mode = SERIO_8042;
+module_param_named(mode, parkbd_mode, uint, 0);
+MODULE_PARM_DESC(mode, "Mode of operation: XT = 0/AT = 1 (default)");
#define PARKBD_CLOCK 0x01 /* Strobe & Ack */
#define PARKBD_DATA 0x02 /* AutoFd & Busy */
-static int parkbd;
-static int parkbd_mode = SERIO_8042;
-
static int parkbd_buffer;
static int parkbd_counter;
static unsigned long parkbd_last;
{
struct parport *pp;
- if (parkbd < 0) {
- printk(KERN_ERR "parkbd: no port specified\n");
- return -ENODEV;
- }
-
- pp = parport_find_number(parkbd);
+ pp = parport_find_number(parkbd_pp_no);
if (pp == NULL) {
printk(KERN_ERR "parkbd: no such parport\n");
MODULE_DESCRIPTION("Q40 PS/2 keyboard controller driver");
MODULE_LICENSE("GPL");
-spinlock_t q40kbd_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(q40kbd_lock);
static struct serio *q40kbd_port;
static struct platform_device *q40kbd_device;
#include <linux/completion.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>
-#include <linux/suspend.h>
#include <linux/slab.h>
MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>");
SERIO_UNREGISTER_PORT,
};
-static spinlock_t serio_event_lock = SPIN_LOCK_UNLOCKED; /* protects serio_event_list */
+static DEFINE_SPINLOCK(serio_event_lock); /* protects serio_event_list */
static LIST_HEAD(serio_event_list);
static DECLARE_WAIT_QUEUE_HEAD(serio_wait);
static DECLARE_COMPLETION(serio_exited);
do {
serio_handle_events();
wait_event_interruptible(serio_wait, !list_empty(&serio_event_list));
- if (current->flags & PF_FREEZE)
- refrigerator(PF_FREEZE);
+ try_to_freeze(PF_FREEZE);
} while (!signal_pending(current));
printk(KERN_DEBUG "serio: kseriod exiting\n");
try_module_get(THIS_MODULE);
spin_lock_init(&serio->lock);
+ init_MUTEX(&serio->drv_sem);
list_add_tail(&serio->node, &serio_list);
snprintf(serio->dev.bus_id, sizeof(serio->dev.bus_id), "serio%d", serio_no++);
serio->dev.bus = &serio_bus;
up(&serio_sem);
}
-/* called from serio_driver->connect/disconnect methods under serio_sem */
-int serio_open(struct serio *serio, struct serio_driver *drv)
+static void serio_set_drv(struct serio *serio, struct serio_driver *drv)
{
+ down(&serio->drv_sem);
serio_pause_rx(serio);
serio->drv = drv;
serio_continue_rx(serio);
+ up(&serio->drv_sem);
+}
+
+/* called from serio_driver->connect/disconnect methods under serio_sem */
+int serio_open(struct serio *serio, struct serio_driver *drv)
+{
+ serio_set_drv(serio, drv);
if (serio->open && serio->open(serio)) {
- serio_pause_rx(serio);
- serio->drv = NULL;
- serio_continue_rx(serio);
+ serio_set_drv(serio, NULL);
return -1;
}
return 0;
if (serio->close)
serio->close(serio);
- serio_pause_rx(serio);
- serio->drv = NULL;
- serio_continue_rx(serio);
+ serio_set_drv(serio, NULL);
}
irqreturn_t serio_interrupt(struct serio *serio,
*/
struct h3600_dev {
struct input_dev dev;
+ struct pm_dev *pm_dev;
struct serio *serio;
unsigned char event; /* event ID from packet */
unsigned char chksum;
//h3600_flite_control(1, 25); /* default brightness */
#ifdef CONFIG_PM
- ts->dev.pm_dev = pm_register(PM_ILLUMINATION_DEV, PM_SYS_LIGHT,
- h3600ts_pm_callback);
+ ts->pm_dev = pm_register(PM_ILLUMINATION_DEV, PM_SYS_LIGHT,
+ h3600ts_pm_callback);
printk("registered pm callback\n");
#endif
input_register_device(&ts->dev);
s->s_blocksize_bits = 10;
s->s_magic = CAPIFS_SUPER_MAGIC;
s->s_op = &capifs_sops;
+ s->s_time_gran = 1;
inode = new_inode(s);
if (!inode)
ulong if_used = 0; /* number of interface users */
static struct divert_info *divert_info_head = NULL; /* head of queue */
static struct divert_info *divert_info_tail = NULL; /* pointer to last entry */
-static spinlock_t divert_info_lock = SPIN_LOCK_UNLOCKED;/* lock for queue */
+static DEFINE_SPINLOCK(divert_info_lock);/* lock for queue */
static wait_queue_head_t rd_queue;
/*********************************/
static struct deflect_struc *table_tail = NULL;
static unsigned char extern_wait_max = 4; /* maximum wait in s for external process */
-spinlock_t divert_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(divert_lock);
/***************************/
/* timer callback function */
char msgbuf[128]; /* capimsg msg part */
char databuf[2048]; /* capimsg data part */
- void *mbase;
+ void __iomem *mbase;
volatile u32 csr;
avmcard_dmainfo *dma;
*/
#ifndef __DIVA_DIDD_DADAPTER_INC__
#define __DIVA_DIDD_DADAPTER_INC__
+
void diva_didd_load_time_init (void);
void diva_didd_load_time_finit (void);
-int diva_didd_add_descriptor (DESCRIPTOR* d);
-int diva_didd_remove_descriptor (IDI_CALL request);
-int diva_didd_read_adapter_array (DESCRIPTOR* buffer, int length);
-#define OLD_MAX_DESCRIPTORS 16
+
#define NEW_MAX_DESCRIPTORS 64
+
#endif
/*------------------------------------------------------------------*/
void pr_out(ADAPTER * a);
byte pr_dpc(ADAPTER * a);
-byte pr_test_int(ADAPTER * a);
-void pr_clear_int(ADAPTER * a);
-void scom_out(ADAPTER * a);
-byte scom_dpc(ADAPTER * a);
byte scom_test_int(ADAPTER * a);
void scom_clear_int(ADAPTER * a);
-void quadro_clear_int(ADAPTER * a);
/*------------------------------------------------------------------*/
/* OS specific functions used by IDI common code */
/*------------------------------------------------------------------*/
-/* $Id: diva_didd.c,v 1.13.6.1 2004/08/28 20:03:53 armin Exp $
+/* $Id: diva_didd.c,v 1.13.6.4 2005/02/11 19:40:25 armin Exp $
*
* DIDD Interface module for Eicon active cards.
*
#include "divasync.h"
#include "did_vers.h"
-static char *main_revision = "$Revision: 1.13.6.1 $";
+static char *main_revision = "$Revision: 1.13.6.4 $";
static char *DRIVERNAME =
"Eicon DIVA - DIDD table (http://www.melware.net)";
return (ret);
}
-void DIVA_EXIT_FUNCTION divadidd_exit(void)
+static void DIVA_EXIT_FUNCTION divadidd_exit(void)
{
diddfunc_finit();
remove_proc();
extern ADAPTER * adapter[MAX_ADAPTER];
extern PISDN_ADAPTER IoAdapters[MAX_ADAPTER];
void request (PISDN_ADAPTER, ENTITY *);
-void pcm_req (PISDN_ADAPTER, ENTITY *);
+static void pcm_req (PISDN_ADAPTER, ENTITY *);
/* --------------------------------------------------------------------------
local functions
-------------------------------------------------------------------------- */
&IoAdapter->Name[0]))
}
/*****************************************************************************/
-char *(ExceptionCauseTable[]) =
+#if defined(XDI_USE_XLOG)
+static char *(ExceptionCauseTable[]) =
{
"Interrupt",
"TLB mod /IBOUND",
"Reserved 30",
"VCED"
} ;
+#endif
void
dump_trap_frame (PISDN_ADAPTER IoAdapter, byte __iomem *exceptionFrame)
{
if (pI->descriptor_number >= 0) {
dword dma_magic;
void* local_addr;
-#if 0
- DBG_TRC(("A(%d) dma_alloc(%d)",
- IoAdapter->ANum, pI->descriptor_number))
-#endif
diva_get_dma_map_entry (\
(struct _diva_dma_map_entry*)IoAdapter->dma_map,
pI->descriptor_number,
}
} else if ((pI->operation == IDI_SYNC_REQ_DMA_DESCRIPTOR_FREE) &&
(pI->descriptor_number >= 0)) {
-#if 0
- DBG_TRC(("A(%d) dma_free(%d)", IoAdapter->ANum, pI->descriptor_number))
-#endif
diva_free_dma_map_entry((struct _diva_dma_map_entry*)IoAdapter->dma_map,
pI->descriptor_number);
pI->descriptor_number = -1;
}
if ( IoAdapter )
{
-#if 0
- DBG_FTL(("xdi: unknown Req 0 / Rc %d !", e->Rc))
-#endif
return ;
}
}
/* --------------------------------------------------------------------------
XLOG interface
-------------------------------------------------------------------------- */
-void
+static void
pcm_req (PISDN_ADAPTER IoAdapter, ENTITY *e)
{
diva_os_spin_lock_magic_t OldIrql ;
if ( e && e->callback )
e->callback (e) ;
}
-/* --------------------------------------------------------------------------
- routines for aligned reading and writing on RISC
- -------------------------------------------------------------------------- */
-void outp_words_from_buffer (word __iomem * adr, byte* P, dword len)
-{
- dword i = 0;
- word w;
- while (i < (len & 0xfffffffe)) {
- w = P[i++];
- w += (P[i++])<<8;
- outppw (adr, w);
- }
-}
-void inp_words_to_buffer (word __iomem * adr, byte* P, dword len)
-{
- dword i = 0;
- word w;
- while (i < (len & 0xfffffffe)) {
- w = inppw (adr);
- P[i++] = (byte)(w);
- P[i++] = (byte)(w>>8);
- }
-}
};
#define PR_RAM ((struct pr_ram *)0)
#define RAM ((struct dual *)0)
-/* ---------------------------------------------------------------------
- Functions for port io
- --------------------------------------------------------------------- */
-void outp_words_from_buffer (word __iomem * adr, byte* P, dword len);
-void inp_words_to_buffer (word __iomem * adr, byte* P, dword len);
/* ---------------------------------------------------------------------
platform specific conversions
--------------------------------------------------------------------- */
-/* $Id: platform.h,v 1.37.4.2 2004/08/28 20:03:53 armin Exp $
+/* $Id: platform.h,v 1.37.4.6 2005/01/31 12:22:20 armin Exp $
*
* platform.h
*
}
static __inline__ void diva_os_free (unsigned long flags, void* ptr)
{
- if (ptr) {
- vfree(ptr);
- }
+ vfree(ptr);
}
/*
return (1) ;
}
-#if !defined(DIVA_USER_MODE_CARD_CONFIG) /* { */
-/* --------------------------------------------------------------------------
- Download protocol code to the adapter
- -------------------------------------------------------------------------- */
-
-static int qBri_protocol_load (PISDN_ADAPTER BaseIoAdapter, PISDN_ADAPTER IoAdapter) {
- PISDN_ADAPTER HighIoAdapter;
-
- byte *p;
- dword FileLength ;
- dword *sharedRam, *File;
- dword Addr, ProtOffset, SharedRamOffset, i;
- dword tasks = BaseIoAdapter->tasks ;
- int factor = (tasks == 1) ? 1 : 2;
-
- if (!(File = (dword *)xdiLoadArchive (IoAdapter, &FileLength, 0))) {
- return (0) ;
- }
-
- IoAdapter->features = diva_get_protocol_file_features ((byte*)File,
- OFFS_PROTOCOL_ID_STRING,
- IoAdapter->ProtocolIdString,
- sizeof(IoAdapter->ProtocolIdString)) ;
- IoAdapter->a.protocol_capabilities = IoAdapter->features ;
-
- DBG_LOG(("Loading %s", IoAdapter->ProtocolIdString))
-
- ProtOffset = IoAdapter->ControllerNumber * (IoAdapter->MemorySize >> factor);
- SharedRamOffset = (IoAdapter->MemorySize >> factor) - MQ_SHARED_RAM_SIZE;
- Addr = ((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR]))
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 1])) << 8)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 2])) << 16)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 3])) << 24) ;
- if ( Addr != 0 )
- {
- IoAdapter->DspCodeBaseAddr = (Addr + 3) & (~3) ;
- IoAdapter->MaxDspCodeSize = (MQ_UNCACHED_ADDR (ProtOffset + SharedRamOffset) -
- IoAdapter->DspCodeBaseAddr) & ((IoAdapter->MemorySize >> factor) - 1);
-
- i = 0 ;
- while ( BaseIoAdapter->QuadroList->QuadroAdapter[i]->ControllerNumber != tasks - 1 )
- i++ ;
- HighIoAdapter = BaseIoAdapter->QuadroList->QuadroAdapter[i] ;
- Addr = HighIoAdapter->DspCodeBaseAddr ;
-
- if (tasks == 1) {
- ((byte *) File)[OFFS_DIVA_INIT_TASK_COUNT] =(byte)1;
- ((byte *) File)[OFFS_DIVA_INIT_TASK_COUNT+1] = (byte)BaseIoAdapter->cardType;
- }
-
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR] = (byte) Addr ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 1] = (byte)(Addr >> 8) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 2] = (byte)(Addr >> 16) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 3] = (byte)(Addr >> 24) ;
- IoAdapter->InitialDspInfo = 0x80 ;
- }
- else
- {
- if ( IoAdapter->features & PROTCAP_VOIP )
- {
- IoAdapter->DspCodeBaseAddr = MQ_CACHED_ADDR (ProtOffset + SharedRamOffset - MQ_VOIP_MAX_DSP_CODE_SIZE) ;
-
- IoAdapter->MaxDspCodeSize = MQ_VOIP_MAX_DSP_CODE_SIZE ;
-
- }
- else if ( IoAdapter->features & PROTCAP_V90D )
- {
- IoAdapter->DspCodeBaseAddr = MQ_CACHED_ADDR (ProtOffset + SharedRamOffset - MQ_V90D_MAX_DSP_CODE_SIZE) ;
-
- IoAdapter->MaxDspCodeSize = (IoAdapter->ControllerNumber == tasks - 1) ? MQ_V90D_MAX_DSP_CODE_SIZE : 0 ;
-
- }
- else
- {
- IoAdapter->DspCodeBaseAddr = MQ_CACHED_ADDR (ProtOffset + SharedRamOffset - MQ_ORG_MAX_DSP_CODE_SIZE) ;
-
- IoAdapter->MaxDspCodeSize = (IoAdapter->ControllerNumber == tasks - 1) ? MQ_ORG_MAX_DSP_CODE_SIZE : 0 ;
-
- }
- IoAdapter->InitialDspInfo = (MQ_CACHED_ADDR (ProtOffset + SharedRamOffset -
- MQ_ORG_MAX_DSP_CODE_SIZE) - IoAdapter->DspCodeBaseAddr) >> 14 ;
-
- }
- DBG_LOG(("%d: DSP code base 0x%08lx, max size 0x%08lx (%08lx,%02x)",
- IoAdapter->ControllerNumber,
- IoAdapter->DspCodeBaseAddr, IoAdapter->MaxDspCodeSize,
- Addr, IoAdapter->InitialDspInfo))
-
- if (FileLength > ((IoAdapter->DspCodeBaseAddr - MQ_CACHED_ADDR (ProtOffset)) & (IoAdapter->MemorySize - 1)) )
- {
- xdiFreeFile (File) ;
- DBG_FTL(("Protocol code '%s' too long (%ld)",
- &IoAdapter->Protocol[0], FileLength))
- return (0) ;
- }
- IoAdapter->downloadAddr = 0 ;
- p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- sharedRam = (dword *)&p[IoAdapter->downloadAddr & (IoAdapter->MemorySize - 1)];
- memcpy (sharedRam, File, FileLength) ;
-
- DBG_TRC(("Download addr 0x%08x len %ld - virtual 0x%08x",
- IoAdapter->downloadAddr, FileLength, sharedRam))
-
- if ( memcmp (sharedRam, File, FileLength) )
- {
- DBG_FTL(("%s: Memory test failed!", IoAdapter->Properties.Name))
-
- DBG_FTL(("File=0x%x, sharedRam=0x%x", File, sharedRam))
- DBG_BLK(( (char *)File, 256))
- DBG_BLK(( (char *)sharedRam, 256))
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
-
- xdiFreeFile (File) ;
- return (0) ;
- }
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- xdiFreeFile (File) ;
-
- return (1) ;
-}
-
-/* --------------------------------------------------------------------------
- DSP Code download
- -------------------------------------------------------------------------- */
-static long qBri_download_buffer (OsFileHandle *fp, long length, void **addr) {
- PISDN_ADAPTER BaseIoAdapter = (PISDN_ADAPTER)fp->sysLoadDesc ;
- PISDN_ADAPTER IoAdapter;
- word i ;
- dword *sharedRam ;
- byte *p;
-
- i = 0 ;
-
- do
- {
- IoAdapter = BaseIoAdapter->QuadroList->QuadroAdapter[i++] ;
- } while ( (i < BaseIoAdapter->tasks)
- && (((dword) length) > IoAdapter->DspCodeBaseAddr +
- IoAdapter->MaxDspCodeSize - IoAdapter->downloadAddr) );
-
- *addr = (void *)IoAdapter->downloadAddr ;
- if ( ((dword) length) > IoAdapter->DspCodeBaseAddr +
- IoAdapter->MaxDspCodeSize - IoAdapter->downloadAddr )
- {
- DBG_FTL(("%s: out of card memory during DSP download (0x%X)",
- IoAdapter->Properties.Name,
- IoAdapter->downloadAddr + length))
- return (-1) ;
- }
- p = DIVA_OS_MEM_ATTACH_RAM(BaseIoAdapter);
- sharedRam = (dword*)&p[IoAdapter->downloadAddr & (IoAdapter->MemorySize - 1)];
-
- if ( fp->sysFileRead (fp, sharedRam, length) != length ) {
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
- return (-1) ;
- }
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
-
- IoAdapter->downloadAddr += length ;
- IoAdapter->downloadAddr = (IoAdapter->downloadAddr + 3) & (~3) ;
-
- return (0) ;
-}
-
-/******************************************************************************/
-
-static dword qBri_telindus_load (PISDN_ADAPTER BaseIoAdapter) {
- PISDN_ADAPTER IoAdapter = 0;
- PISDN_ADAPTER HighIoAdapter = NULL ;
- char *error ;
- OsFileHandle *fp ;
- t_dsp_portable_desc download_table[DSP_MAX_DOWNLOAD_COUNT] ;
- word download_count, i ;
- dword *sharedRam ;
- dword FileLength ;
- byte *p;
-
- if ( !(fp = OsOpenFile (DSP_TELINDUS_FILE)) ) {
- DBG_FTL(("qBri_telindus_load: %s not found!", DSP_TELINDUS_FILE))
- return (0) ;
- }
-
-
- for ( i = 0 ; i < BaseIoAdapter->tasks ; ++i )
- {
- IoAdapter = BaseIoAdapter->QuadroList->QuadroAdapter[i] ;
- IoAdapter->downloadAddr = IoAdapter->DspCodeBaseAddr ;
- if ( IoAdapter->ControllerNumber == BaseIoAdapter->tasks - 1 )
- {
- HighIoAdapter = IoAdapter ;
- HighIoAdapter->downloadAddr = (HighIoAdapter->downloadAddr
- + sizeof(dword) + sizeof(download_table) + 3) & (~3) ;
- }
- }
-
-
- FileLength = fp->sysFileSize ;
- fp->sysLoadDesc = (void *)BaseIoAdapter ;
- fp->sysCardLoad = qBri_download_buffer ;
-
- download_count = DSP_MAX_DOWNLOAD_COUNT ;
- memset (&download_table[0], '\0', sizeof(download_table)) ;
-/*
- * set start address for download
- */
- error = dsp_read_file (fp, (word)(IoAdapter->cardType),
- &download_count, NULL, &download_table[0]) ;
- if ( error )
- {
- DBG_FTL(("download file error: %s", error))
- OsCloseFile (fp) ;
- return (0) ;
- }
- OsCloseFile (fp) ;
-
-
- /*
- * store # of download files extracted from the archive and download table
- */
- HighIoAdapter->downloadAddr = HighIoAdapter->DspCodeBaseAddr ;
- p = DIVA_OS_MEM_ATTACH_RAM(BaseIoAdapter);
- sharedRam = (dword *)&p[HighIoAdapter->downloadAddr & (IoAdapter->MemorySize - 1)];
- WRITE_DWORD(&(sharedRam[0]), (dword)download_count);
- memcpy (&sharedRam[1], &download_table[0], sizeof(download_table)) ;
-
-
- /* memory check */
- if ( memcmp (&sharedRam[1], &download_table, download_count) ) {
- DBG_FTL(("%s: Dsp Memory test failed!", IoAdapter->Properties.Name))
- }
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
-
- return (FileLength) ;
-}
-
-/*
- Load SDP tasks to the card
- Return start address of image on succesful load
- Return zero in case of problem
-
- INPUT:
- task -> name of the image containing this task
- link_addr -> pointer to start of previous task
- */
-static byte* qBri_sdp_load (PISDN_ADAPTER BaseIoAdapter,
- char* task,
- byte* link_addr) {
- OsFileHandle *fp;
- dword FileLength;
- byte tmp[sizeof(dword)];
- dword gp_addr;
- dword entry_addr;
- dword start_addr = 0;
- dword phys_start_addr;
- dword end_addr;
- byte* sharedRam = 0;
- byte *p;
-
- if (task) {
- if (!(fp = OsOpenFile (task))) {
- DBG_ERR(("Can't open [%s] image", task))
- return (0);
- }
- if ((FileLength = fp->sysFileSize) < DIVA_MIPS_TASK_IMAGE_ID_STRING_OFFS) {
- OsCloseFile (fp) ;
- DBG_ERR(("Image [%s] too short", task))
- return (0);
- }
-
- fp->sysFileSeek (fp, DIVA_MIPS_TASK_IMAGE_GP_OFFS, OS_SEEK_SET);
- if (fp->sysFileRead (fp, tmp, sizeof(dword)) != sizeof(dword)) {
- OsCloseFile (fp) ;
- DBG_ERR(("Can't read image [%s]", task))
- return (0);
- }
- gp_addr = ((dword)tmp[0]) |
- (((dword)tmp[1]) << 8) |
- (((dword)tmp[2]) << 16) |
- (((dword)tmp[3]) << 24);
- DBG_TRC(("Image [%s] GP = %08lx", task, gp_addr))
-
- fp->sysFileSeek (fp, DIVA_MIPS_TASK_IMAGE_ENTRY_OFFS, OS_SEEK_SET);
- if (fp->sysFileRead (fp, tmp, sizeof(dword)) != sizeof(dword)) {
- OsCloseFile (fp) ;
- DBG_ERR(("Can't read image [%s]", task))
- return (0);
- }
- entry_addr = ((dword)tmp[0]) |
- (((dword)tmp[1]) << 8) |
- (((dword)tmp[2]) << 16) |
- (((dword)tmp[3]) << 24);
- DBG_TRC(("Image [%s] entry = %08lx", task, entry_addr))
-
- fp->sysFileSeek (fp, DIVA_MIPS_TASK_IMAGE_LOAD_ADDR_OFFS, OS_SEEK_SET);
- if (fp->sysFileRead (fp, tmp, sizeof(dword)) != sizeof(dword)) {
- OsCloseFile (fp) ;
- DBG_ERR(("Can't read image [%s]", task))
- return (0);
- }
- start_addr = ((dword)tmp[0]) |
- (((dword)tmp[1]) << 8) |
- (((dword)tmp[2]) << 16) |
- (((dword)tmp[3]) << 24);
- DBG_TRC(("Image [%s] start = %08lx", task, start_addr))
-
- fp->sysFileSeek (fp, DIVA_MIPS_TASK_IMAGE_END_ADDR_OFFS, OS_SEEK_SET);
- if (fp->sysFileRead (fp, tmp, sizeof(dword)) != sizeof(dword)) {
- OsCloseFile (fp) ;
- DBG_ERR(("Can't read image [%s]", task))
- return (0);
- }
- end_addr = ((dword)tmp[0]) |
- (((dword)tmp[1]) << 8) |
- (((dword)tmp[2]) << 16) |
- (((dword)tmp[3]) << 24);
- DBG_TRC(("Image [%s] end = %08lx", task, end_addr))
-
- phys_start_addr = start_addr & 0x1fffffff;
-
- if ((phys_start_addr + FileLength) >= BaseIoAdapter->MemorySize) {
- OsCloseFile (fp) ;
- DBG_ERR(("Image [%s] too long", task))
- return (0);
- }
-
- fp->sysFileSeek (fp, 0, OS_SEEK_SET);
- p = DIVA_OS_MEM_ATTACH_RAM(BaseIoAdapter);
- sharedRam = &p[phys_start_addr];
- if ((dword)fp->sysFileRead (fp, sharedRam, FileLength) != FileLength) {
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
- OsCloseFile (fp) ;
- DBG_ERR(("Can't read image [%s]", task))
- return (0);
- }
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
-
- OsCloseFile (fp) ;
- }
-
- p = DIVA_OS_MEM_ATTACH_RAM(BaseIoAdapter);
- if (!link_addr) {
- link_addr = &p[OFFS_DSP_CODE_BASE_ADDR];
- }
-
- DBG_TRC(("Write task [%s] link %08lx at %08lx",
- task ? task : "none",
- start_addr,
- link_addr - (byte*)&BaseIoAdapter->ram[0]))
-
- link_addr[0] = (byte)(start_addr & 0xff);
- link_addr[1] = (byte)((start_addr >> 8) & 0xff);
- link_addr[2] = (byte)((start_addr >> 16) & 0xff);
- link_addr[3] = (byte)((start_addr >> 24) & 0xff);
-
- DIVA_OS_MEM_DETACH_RAM(BaseIoAdapter, p);
-
- return (task ? &sharedRam[DIVA_MIPS_TASK_IMAGE_LINK_OFFS] : 0);
-}
-
-/* --------------------------------------------------------------------------
- Load Card
- -------------------------------------------------------------------------- */
-static int load_qBri_hardware (PISDN_ADAPTER IoAdapter) {
- dword i, offset, controller ;
- word *signature ;
- int factor = (IoAdapter->tasks == 1) ? 1 : 2;
- byte *p;
-
- PISDN_ADAPTER Slave ;
-
-
- if (
-
- !IoAdapter->QuadroList
-
- || ( (IoAdapter->cardType != CARDTYPE_DIVASRV_Q_8M_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_VOICE_Q_8M_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_Q_8M_V2_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_B_2M_V2_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI)
- && (IoAdapter->cardType != CARDTYPE_DIVASRV_B_2F_PCI) ) )
- {
- return (0) ;
- }
-
-/*
- * Check for first instance
- */
- if ( IoAdapter->ControllerNumber > 0 )
- return (1) ;
-
-/*
- * first initialize the onboard FPGA
- */
- if ( !qBri_FPGA_download (IoAdapter) )
- return (0) ;
-
-
- for ( i = 0; i < IoAdapter->tasks; i++ )
- {
- Slave = IoAdapter->QuadroList->QuadroAdapter[i] ;
- Slave->fpga_features = IoAdapter->fpga_features ;
- }
-
-
-/*
- * download protocol code for all instances
- */
-
- controller = IoAdapter->tasks;
- do
- {
- controller-- ;
- i = 0 ;
- while ( IoAdapter->QuadroList->QuadroAdapter[i]->ControllerNumber != controller )
- i++ ;
-/*
- * calculate base address for instance
- */
- Slave = IoAdapter->QuadroList->QuadroAdapter[i] ;
- offset = Slave->ControllerNumber * (IoAdapter->MemorySize >> factor) ;
- Slave->Address = &IoAdapter->Address[offset] ;
- Slave->ram = &IoAdapter->ram[offset] ;
- Slave->reset = IoAdapter->reset ;
- Slave->ctlReg = IoAdapter->ctlReg ;
- Slave->prom = IoAdapter->prom ;
- Slave->Config = IoAdapter->Config ;
- Slave->Control = IoAdapter->Control ;
-
- if ( !qBri_protocol_load (IoAdapter, Slave) )
- return (0) ;
-
- } while (controller != 0) ;
-
-
-/*
- * download only one copy of the DSP code
- */
- if (IoAdapter->cardType != CARDTYPE_DIVASRV_B_2F_PCI) {
- if ( !qBri_telindus_load (IoAdapter) )
- return (0) ;
- } else {
- byte* link_addr = 0;
- link_addr = qBri_sdp_load (IoAdapter, DIVA_BRI2F_SDP_1_NAME, link_addr);
- link_addr = qBri_sdp_load (IoAdapter, DIVA_BRI2F_SDP_2_NAME, link_addr);
- if (!link_addr) {
- qBri_sdp_load (IoAdapter, 0, link_addr);
- }
- }
-
-/*
- * copy configuration parameters
- */
-
- for ( i = 0 ; i < IoAdapter->tasks ; ++i )
- {
- Slave = IoAdapter->QuadroList->QuadroAdapter[i] ;
- Slave->ram += (IoAdapter->MemorySize >> factor) - MQ_SHARED_RAM_SIZE ;
- p = DIVA_OS_MEM_ATTACH_RAM(Slave);
- DBG_TRC(("Configure instance %d shared memory @ 0x%08lx",
- Slave->ControllerNumber, p))
- memset (p, '\0', 256) ;
- DIVA_OS_MEM_DETACH_RAM(Slave, p);
- diva_configure_protocol (Slave);
- }
-
-/*
- * start adapter
- */
- start_qBri_hardware (IoAdapter) ;
- p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- signature = (word *)(&p[0x1E]) ;
-/*
- * wait for signature in shared memory (max. 3 seconds)
- */
- for ( i = 0 ; i < 300 ; ++i )
- {
- diva_os_wait (10) ;
-
- if ( signature[0] == 0x4447 )
- {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- DBG_TRC(("Protocol startup time %d.%02d seconds",
- (i / 100), (i % 100) ))
-
- return (1) ;
- }
- }
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- DBG_FTL(("%s: Adapter selftest failed (0x%04X)!",
- IoAdapter->Properties.Name, signature[0] >> 16))
- qBri_cpu_trapped (IoAdapter) ;
- return (FALSE) ;
-}
-#else /* } { */
static int load_qBri_hardware (PISDN_ADAPTER IoAdapter) {
return (0);
}
-#endif /* } */
/* --------------------------------------------------------------------------
Card ISR
outpp (p, 0x00) ; /* clear int, halt cpu */
DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p);
}
-#if !defined(DIVA_USER_MODE_CARD_CONFIG) /* { */
-/* ---------------------------------------------------------------------
- Load protocol on the card
- --------------------------------------------------------------------- */
-static dword bri_protocol_load (PISDN_ADAPTER IoAdapter) {
- dword FileLength ;
- word test, *File = NULL ;
- byte* addrHi, *addrLo, *ioaddr ;
- char *FileName = &IoAdapter->Protocol[0] ;
- dword Addr, i ;
- byte *Port;
- /* -------------------------------------------------------------------
- Try to load protocol code. 'File' points to memory location
- that does contain entire protocol code
- ------------------------------------------------------------------- */
- if ( !(File = (word *)xdiLoadArchive (IoAdapter, &FileLength, 0)) )
- return (0) ;
- /* -------------------------------------------------------------------
- Get protocol features and calculate load addresses
- ------------------------------------------------------------------- */
- IoAdapter->features = diva_get_protocol_file_features ((byte*)File,
- OFFS_PROTOCOL_ID_STRING,
- IoAdapter->ProtocolIdString,
- sizeof(IoAdapter->ProtocolIdString));
- IoAdapter->a.protocol_capabilities = IoAdapter->features ;
- DBG_LOG(("Loading %s", IoAdapter->ProtocolIdString))
- Addr = ((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR]))
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 1])) << 8)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 2])) << 16)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 3])) << 24) ;
- if ( Addr != 0 )
- {
- IoAdapter->DspCodeBaseAddr = (Addr + 3) & (~3) ;
- IoAdapter->MaxDspCodeSize = (BRI_UNCACHED_ADDR (IoAdapter->MemoryBase + IoAdapter->MemorySize -
- BRI_SHARED_RAM_SIZE)
- - IoAdapter->DspCodeBaseAddr) & (IoAdapter->MemorySize - 1) ;
- Addr = IoAdapter->DspCodeBaseAddr ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR] = (byte) Addr ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 1] = (byte)(Addr >> 8) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 2] = (byte)(Addr >> 16) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 3] = (byte)(Addr >> 24) ;
- IoAdapter->InitialDspInfo = 0x80 ;
- }
- else
- {
- if ( IoAdapter->features & PROTCAP_V90D )
- IoAdapter->MaxDspCodeSize = BRI_V90D_MAX_DSP_CODE_SIZE ;
- else
- IoAdapter->MaxDspCodeSize = BRI_ORG_MAX_DSP_CODE_SIZE ;
- IoAdapter->DspCodeBaseAddr = BRI_CACHED_ADDR (IoAdapter->MemoryBase + IoAdapter->MemorySize -
- BRI_SHARED_RAM_SIZE - IoAdapter->MaxDspCodeSize);
- IoAdapter->InitialDspInfo = (IoAdapter->MaxDspCodeSize - BRI_ORG_MAX_DSP_CODE_SIZE) >> 14 ;
- }
- DBG_LOG(("DSP code base 0x%08lx, max size 0x%08lx (%08lx,%02x)",
- IoAdapter->DspCodeBaseAddr, IoAdapter->MaxDspCodeSize,
- Addr, IoAdapter->InitialDspInfo))
- if ( FileLength > ((IoAdapter->DspCodeBaseAddr -
- BRI_CACHED_ADDR (IoAdapter->MemoryBase)) & (IoAdapter->MemorySize - 1)) )
- {
- xdiFreeFile (File);
- DBG_FTL(("Protocol code '%s' too big (%ld)", FileName, FileLength))
- return (0) ;
- }
- Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter);
- addrHi = Port + ((IoAdapter->Properties.Bus == BUS_PCI) ? M_PCI_ADDRH : ADDRH) ;
- addrLo = Port + ADDR ;
- ioaddr = Port + DATA ;
-/*
- * set start address for download (use autoincrement mode !)
- */
- outpp (addrHi, 0) ;
- outppw (addrLo, 0) ;
- for ( i = 0 ; i < FileLength ; i += 2 )
- {
- if ( (i & 0x0000FFFF) == 0 )
- {
- outpp (addrHi, (byte)(i >> 16)) ;
- }
- outppw (ioaddr, File[i/2]) ;
- }
-/*
- * memory test without second load of file
- */
- outpp (addrHi, 0) ;
- outppw (addrLo, 0) ;
- for ( i = 0 ; i < FileLength ; i += 2 )
- {
- if ( (i & 0x0000FFFF) == 0 )
- {
- outpp (addrHi, (byte)(i >> 16)) ;
- }
- test = inppw (ioaddr) ;
- if ( test != File[i/2] )
- {
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- DBG_FTL(("%s: Memory test failed! (%d - 0x%04X/0x%04X)",
- IoAdapter->Properties.Name, i, test, File[i/2]))
- xdiFreeFile (File);
- return (0) ;
- }
- }
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- xdiFreeFile (File);
- return (FileLength) ;
-}
-/******************************************************************************/
-typedef struct
-{
- PISDN_ADAPTER IoAdapter ;
- byte* AddrLo ;
- byte* AddrHi ;
- word* Data ;
- dword DownloadPos ;
-} bri_download_info ;
-static long bri_download_buffer (OsFileHandle *fp, long length, void **addr) {
- int buffer_size = 2048*sizeof(word);
- word *buffer = (word*)diva_os_malloc (0, buffer_size);
- bri_download_info *info ;
- word test ;
- long i, len, page ;
- if (!buffer) {
- DBG_ERR(("A: out of memory, s_bri at %d", __LINE__))
- return (-1);
- }
- info = (bri_download_info *)fp->sysLoadDesc ;
- *addr = (void *)info->DownloadPos ;
- if ( ((dword) length) > info->IoAdapter->DspCodeBaseAddr +
- info->IoAdapter->MaxDspCodeSize - info->DownloadPos )
- {
- DBG_FTL(("%s: out of card memory during DSP download (0x%X)",
- info->IoAdapter->Properties.Name,
- info->DownloadPos + length))
- diva_os_free (0, buffer);
- return (-1) ;
- }
- for ( len = 0 ; length > 0 ; length -= len )
- {
- len = (length > buffer_size ? buffer_size : length) ;
- page = ((long)(info->DownloadPos) + len) & 0xFFFF0000 ;
- if ( page != (long)(info->DownloadPos & 0xFFFF0000) )
- {
- len = 0x00010000 - (((long)info->DownloadPos) & 0x0000FFFF) ;
- }
- if ( fp->sysFileRead (fp, &buffer[0], len) != len ) {
- diva_os_free (0, buffer);
- return (-1) ;
- }
- outpp (info->AddrHi, (byte)(info->DownloadPos >> 16)) ;
- outppw (info->AddrLo, (word)info->DownloadPos) ;
- outppw_buffer (info->Data, &buffer[0], (len + 1)) ;
-/*
- * memory test without second load of file
- */
- outpp (info->AddrHi, (byte)(info->DownloadPos >> 16)) ;
- outppw (info->AddrLo, (word)info->DownloadPos) ;
- for ( i = 0 ; i < len ; i += 2 )
- {
- if ( (test = inppw (info->Data)) != buffer[i/2] )
- {
- DBG_FTL(("%s: Memory test failed! (0x%lX - 0x%04X/0x%04X)",
- info->IoAdapter->Properties.Name,
- info->DownloadPos + i, test, buffer[i/2]))
- diva_os_free (0, buffer);
- return (-2) ;
- }
- }
- info->DownloadPos += len ;
- }
- info->DownloadPos = (info->DownloadPos + 3) & (~3) ;
- diva_os_free (0, buffer);
- return (0) ;
-}
-/******************************************************************************/
-static dword bri_telindus_load (PISDN_ADAPTER IoAdapter, char *DspTelindusFile)
-{
- bri_download_info *pinfo =\
- (bri_download_info*)diva_os_malloc(0, sizeof(*pinfo));
- char *error ;
- OsFileHandle *fp ;
- t_dsp_portable_desc download_table[DSP_MAX_DOWNLOAD_COUNT] ;
- word download_count ;
- dword FileLength ;
- byte *Port;
- if (!pinfo) {
- DBG_ERR (("A: out of memory s_bri at %d", __LINE__))
- return (0);
- }
- if (!(fp = OsOpenFile (DspTelindusFile))) {
- diva_os_free (0, pinfo);
- return (0) ;
- }
- FileLength = fp->sysFileSize ;
- Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter);
- pinfo->IoAdapter = IoAdapter ;
- pinfo->AddrLo = Port + ADDR ;
- pinfo->AddrHi = Port + (IoAdapter->Properties.Bus == BUS_PCI ? M_PCI_ADDRH : ADDRH);
- pinfo->Data = (word*)(Port + DATA) ;
- pinfo->DownloadPos = (IoAdapter->DspCodeBaseAddr +\
- sizeof(dword) + sizeof(download_table) + 3) & (~3) ;
- fp->sysLoadDesc = (void *)pinfo;
- fp->sysCardLoad = bri_download_buffer ;
- download_count = DSP_MAX_DOWNLOAD_COUNT ;
- memset (&download_table[0], '\0', sizeof(download_table)) ;
-/*
- * set start address for download (use autoincrement mode !)
- */
- error = dsp_read_file (fp, (word)(IoAdapter->cardType),
- &download_count, NULL, &download_table[0]) ;
- if ( error )
- {
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- DBG_FTL(("download file error: %s", error))
- OsCloseFile (fp) ;
- diva_os_free (0, pinfo);
- return (0) ;
- }
- OsCloseFile (fp) ;
-/*
- * store # of separate download files extracted from archive
- */
- pinfo->DownloadPos = IoAdapter->DspCodeBaseAddr ;
- outpp (pinfo->AddrHi, (byte)(pinfo->DownloadPos >> 16)) ;
- outppw (pinfo->AddrLo, (word)pinfo->DownloadPos) ;
- outppw (pinfo->Data, (word)download_count) ;
- outppw (pinfo->Data, (word)0) ;
-/*
- * copy download table to board
- */
- outppw_buffer (pinfo->Data, &download_table[0], sizeof(download_table)) ;
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- diva_os_free (0, pinfo);
- return (FileLength) ;
-}
-/******************************************************************************/
-static int load_bri_hardware (PISDN_ADAPTER IoAdapter) {
- dword i ;
- byte* addrHi, *addrLo, *ioaddr, *p ;
- dword test ;
- byte *Port;
- if ( IoAdapter->Properties.Card != CARD_MAE )
- {
- return (FALSE) ;
- }
- reset_bri_hardware (IoAdapter) ;
- Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter);
- addrHi = Port + ((IoAdapter->Properties.Bus==BUS_PCI) ? M_PCI_ADDRH : ADDRH);
- addrLo = Port + ADDR ;
- ioaddr = Port + DATA ;
- diva_os_wait (100);
-/*
- * recover
- */
- outpp (addrHi, (byte) 0) ;
- outppw (addrLo, (word) 0) ;
- outppw (ioaddr, (word) 0) ;
-/*
- * clear shared memory
- */
- outpp (addrHi, (byte)((BRI_UNCACHED_ADDR (IoAdapter->MemoryBase + \
- IoAdapter->MemorySize - BRI_SHARED_RAM_SIZE)) >> 16)) ;
- outppw (addrLo, 0) ;
- for ( i = 0 ; i < 0x8000 ; outppw (ioaddr, 0), ++i ) ;
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- diva_os_wait (100) ;
-/*
- * download protocol and dsp files
- */
- switch ( IoAdapter->protocol_id ) {
- default:
- if ( !xdiSetProtocol (IoAdapter, IoAdapter->ProtocolSuffix) )
- return (FALSE) ;
- if ( !bri_protocol_load (IoAdapter) )
- return (FALSE) ;
- if ( !bri_telindus_load (IoAdapter, DSP_TELINDUS_FILE) )
- return (FALSE) ;
- break ;
- case PROTTYPE_QSIG:
- case PROTTYPE_CORNETN:
- if ( !xdiSetProtocol (IoAdapter, IoAdapter->ProtocolSuffix) )
- return (FALSE) ;
- if (IoAdapter->ProtocolSuffix && *IoAdapter->ProtocolSuffix) {
- sprintf (&IoAdapter->Protocol[0],
- "TE_QSIG.%s", IoAdapter->ProtocolSuffix) ;
- }
- DBG_TRC(("xdiSetProtocol: %s firmware '%s' archive '%s'",
- IoAdapter->Properties.Name,
- &IoAdapter->Protocol[0], &IoAdapter->Archive[0]))
- if ( !bri_protocol_load (IoAdapter) )
- return (FALSE) ;
- if ( !bri_telindus_load (IoAdapter, DSP_QSIG_TELINDUS_FILE) )
- return (FALSE) ;
- break ;
- }
-
- Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter);
- addrHi = Port + ((IoAdapter->Properties.Bus==BUS_PCI) ? M_PCI_ADDRH : ADDRH);
- addrLo = Port + ADDR ;
- ioaddr = Port + DATA ;
-/*
- * clear signature
- */
- outpp (addrHi, (byte)((BRI_UNCACHED_ADDR (IoAdapter->MemoryBase + \
- IoAdapter->MemorySize - BRI_SHARED_RAM_SIZE)) >> 16)) ;
- outppw (addrLo, 0x1e) ;
- outpp (ioaddr, 0) ;
- outpp (ioaddr, 0) ;
-/*
- * copy parameters
- */
- diva_configure_protocol (IoAdapter);
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
-/*
- * start the protocol code
- */
- p = DIVA_OS_MEM_ATTACH_CTLREG(IoAdapter);
- outpp (p, 0x08) ;
- DIVA_OS_MEM_DETACH_CTLREG(IoAdapter, p);
-/*
- * wait for signature (max. 3 seconds)
- */
- Port = DIVA_OS_MEM_ATTACH_PORT(IoAdapter);
- addrHi = Port + ((IoAdapter->Properties.Bus==BUS_PCI) ? M_PCI_ADDRH : ADDRH);
- addrLo = Port + ADDR ;
- ioaddr = Port + DATA ;
- for ( i = 0 ; i < 300 ; ++i )
- {
- diva_os_wait (10) ;
- outpp (addrHi, (byte)((BRI_UNCACHED_ADDR (IoAdapter->MemoryBase + \
- IoAdapter->MemorySize - BRI_SHARED_RAM_SIZE)) >> 16)) ;
- outppw (addrLo, 0x1e) ;
- test = (dword)inppw (ioaddr) ;
- if ( test == 0x4447 )
- {
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- DBG_TRC(("Protocol startup time %d.%02d seconds",
- (i / 100), (i % 100) ))
- return (TRUE) ;
- }
- }
- DIVA_OS_MEM_DETACH_PORT(IoAdapter, Port);
- DBG_FTL(("%s: Adapter selftest failed (0x%04X)!",
- IoAdapter->Properties.Name, test))
- bri_cpu_trapped (IoAdapter) ;
- return (FALSE) ;
-}
-#else /* } { */
static int load_bri_hardware (PISDN_ADAPTER IoAdapter) {
return (0);
}
-#endif /* } */
/******************************************************************************/
static int bri_ISR (struct _ISDN_ADAPTER* IoAdapter) {
byte __iomem *p;
WRITE_BYTE(p, _MP_RISC_RESET | _MP_LED1 | _MP_LED2);
DIVA_OS_MEM_DETACH_RESET(IoAdapter, p);
}
-#if !defined(DIVA_USER_MODE_CARD_CONFIG) /* { */
-/* -------------------------------------------------------------------------
- Load protocol code to the PRI Card
- ------------------------------------------------------------------------- */
-#define DOWNLOAD_ADDR(IoAdapter) (IoAdapter->downloadAddr & (IoAdapter->MemorySize - 1))
-static int pri_protocol_load (PISDN_ADAPTER IoAdapter) {
- dword FileLength ;
- dword *File ;
- dword *sharedRam ;
- dword Addr ;
- byte *p;
- if (!(File = (dword *)xdiLoadArchive (IoAdapter, &FileLength, 0))) {
- return (0) ;
- }
- IoAdapter->features = diva_get_protocol_file_features ((byte*)File,
- OFFS_PROTOCOL_ID_STRING,
- IoAdapter->ProtocolIdString,
- sizeof(IoAdapter->ProtocolIdString)) ;
- IoAdapter->a.protocol_capabilities = IoAdapter->features ;
- DBG_LOG(("Loading %s", IoAdapter->ProtocolIdString))
- Addr = ((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR]))
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 1])) << 8)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 2])) << 16)
- | (((dword)(((byte *) File)[OFFS_PROTOCOL_END_ADDR + 3])) << 24) ;
- if ( Addr != 0 )
- {
- IoAdapter->DspCodeBaseAddr = (Addr + 3) & (~3) ;
- IoAdapter->MaxDspCodeSize = (MP_UNCACHED_ADDR (IoAdapter->MemorySize)
- - IoAdapter->DspCodeBaseAddr) & (IoAdapter->MemorySize - 1) ;
- Addr = IoAdapter->DspCodeBaseAddr ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR] = (byte) Addr ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 1] = (byte)(Addr >> 8) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 2] = (byte)(Addr >> 16) ;
- ((byte *) File)[OFFS_DSP_CODE_BASE_ADDR + 3] = (byte)(Addr >> 24) ;
- IoAdapter->InitialDspInfo = 0x80 ;
- }
- else
- {
- if ( IoAdapter->features & PROTCAP_VOIP )
- IoAdapter->MaxDspCodeSize = MP_VOIP_MAX_DSP_CODE_SIZE ;
- else if ( IoAdapter->features & PROTCAP_V90D )
- IoAdapter->MaxDspCodeSize = MP_V90D_MAX_DSP_CODE_SIZE ;
- else
- IoAdapter->MaxDspCodeSize = MP_ORG_MAX_DSP_CODE_SIZE ;
- IoAdapter->DspCodeBaseAddr = MP_CACHED_ADDR (IoAdapter->MemorySize -
- IoAdapter->MaxDspCodeSize) ;
- IoAdapter->InitialDspInfo = (IoAdapter->MaxDspCodeSize
- - MP_ORG_MAX_DSP_CODE_SIZE) >> 14 ;
- }
- DBG_LOG(("DSP code base 0x%08lx, max size 0x%08lx (%08lx,%02x)",
- IoAdapter->DspCodeBaseAddr, IoAdapter->MaxDspCodeSize,
- Addr, IoAdapter->InitialDspInfo))
- if ( FileLength > ((IoAdapter->DspCodeBaseAddr -
- MP_CACHED_ADDR (MP_PROTOCOL_OFFSET)) & (IoAdapter->MemorySize - 1)) )
- {
- xdiFreeFile (File);
- DBG_FTL(("Protocol code '%s' too long (%ld)",
- &IoAdapter->Protocol[0], FileLength))
- return (0) ;
- }
- IoAdapter->downloadAddr = MP_UNCACHED_ADDR (MP_PROTOCOL_OFFSET) ;
- p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- sharedRam = (dword *)(&p[DOWNLOAD_ADDR(IoAdapter)]);
- memcpy (sharedRam, File, FileLength) ;
- if ( memcmp (sharedRam, File, FileLength) )
- {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- DBG_FTL(("%s: Memory test failed!", IoAdapter->Properties.Name))
- xdiFreeFile (File);
- return (0) ;
- }
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- xdiFreeFile (File);
- return (1) ;
-}
-/******************************************************************************/
-/*------------------------------------------------------------------
- Dsp related definitions
- ------------------------------------------------------------------ */
-#define DSP_SIGNATURE_PROBE_WORD 0x5a5a
-/*
-** Checks presence of DSP on board
-*/
-static int
-dsp_check_presence (volatile byte* addr, volatile byte* data, int dsp)
-{
- word pattern;
- *(volatile word*)addr = 0x4000;
- *(volatile word*)data = DSP_SIGNATURE_PROBE_WORD;
- *(volatile word*)addr = 0x4000;
- pattern = *(volatile word*)data;
- if (pattern != DSP_SIGNATURE_PROBE_WORD) {
- DBG_TRC(("W: DSP[%d] %04x(is) != %04x(should)",
- dsp, pattern, DSP_SIGNATURE_PROBE_WORD))
- return (-1);
- }
- *(volatile word*)addr = 0x4000;
- *(volatile word*)data = ~DSP_SIGNATURE_PROBE_WORD;
- *(volatile word*)addr = 0x4000;
- pattern = *(volatile word*)data;
- if (pattern != (word)~DSP_SIGNATURE_PROBE_WORD) {
- DBG_ERR(("A: DSP[%d] %04x(is) != %04x(should)",
- dsp, pattern, (word)~DSP_SIGNATURE_PROBE_WORD))
- return (-2);
- }
- DBG_TRC (("DSP[%d] present", dsp))
- return (0);
-}
-/*
-** Check if DSP's are present and operating
-** Information about detected DSP's is returned as bit mask
-** Bit 0 - DSP1
-** ...
-** ...
-** ...
-** Bit 29 - DSP30
-*/
-static dword
-diva_pri_detect_dsps (PISDN_ADAPTER IoAdapter)
-{
- byte* base;
- byte* p;
- dword ret = 0, DspCount = 0 ;
- dword row_offset[] = {
- 0x00000000,
- 0x00000800, /* 1 - ROW 1 */
- 0x00000840, /* 2 - ROW 2 */
- 0x00001000, /* 3 - ROW 3 */
- 0x00001040, /* 4 - ROW 4 */
- 0x00000000 /* 5 - ROW 0 */
- };
- byte *dsp_addr_port, *dsp_data_port, row_state;
- int dsp_row = 0, dsp_index, dsp_num;
- IoAdapter->InitialDspInfo &= 0xffff ;
- p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter);
- if (!p)
- {
- DIVA_OS_MEM_DETACH_RESET(IoAdapter, p);
- return (0);
- }
- *(volatile byte*)(p) = _MP_RISC_RESET | _MP_DSP_RESET;
- DIVA_OS_MEM_DETACH_RESET(IoAdapter, p);
- diva_os_wait (5) ;
-
- base = DIVA_OS_MEM_ATTACH_CONTROL(IoAdapter);
-
- for (dsp_num = 0; dsp_num < 30; dsp_num++) {
- dsp_row = dsp_num / 7 + 1;
- dsp_index = dsp_num % 7;
- dsp_data_port = base;
- dsp_addr_port = base;
- dsp_data_port += row_offset[dsp_row];
- dsp_addr_port += row_offset[dsp_row];
- dsp_data_port += (dsp_index * 8);
- dsp_addr_port += (dsp_index * 8) + 0x80;
- if (!dsp_check_presence (dsp_addr_port, dsp_data_port, dsp_num+1)) {
- ret |= (1 << dsp_num);
- DspCount++ ;
- }
- }
- DIVA_OS_MEM_DETACH_CONTROL(IoAdapter, base);
-
- p = DIVA_OS_MEM_ATTACH_RESET(IoAdapter);
- *(volatile byte*)(p) = _MP_RISC_RESET | _MP_LED1 | _MP_LED2;
- diva_os_wait (50) ;
- /*
- Verify modules
- */
- for (dsp_row = 0; dsp_row < 4; dsp_row++) {
- row_state = (byte)((ret >> (dsp_row*7)) & 0x7F);
- if (row_state && (row_state != 0x7F)) {
- for (dsp_index = 0; dsp_index < 7; dsp_index++) {
- if (!(row_state & (1 << dsp_index))) {
- DBG_ERR (("A: MODULE[%d]-DSP[%d] failed", dsp_row+1, dsp_index+1))
- }
- }
- }
- }
- if (!(ret & 0x10000000)) {
- DBG_ERR (("A: ON BOARD-DSP[1] failed"))
- }
- if (!(ret & 0x20000000)) {
- DBG_ERR (("A: ON BOARD-DSP[2] failed"))
- }
- /*
- Print module population now
- */
- DBG_LOG(("+-----------------------+"))
- DBG_LOG(("| DSP MODULE POPULATION |"))
- DBG_LOG(("+-----------------------+"))
- DBG_LOG(("| 1 | 2 | 3 | 4 |"))
- DBG_LOG(("+-----------------------+"))
- DBG_LOG(("| %s | %s | %s | %s |",
- ((ret >> (0*7)) & 0x7F) ? "Y" : "N",
- ((ret >> (1*7)) & 0x7F) ? "Y" : "N",
- ((ret >> (2*7)) & 0x7F) ? "Y" : "N",
- ((ret >> (3*7)) & 0x7F) ? "Y" : "N"))
- DBG_LOG(("+-----------------------+"))
- DBG_LOG(("DSP's(present-absent):%08x-%08x", ret, ~ret & 0x3fffffff))
- *(volatile byte*)(p) = 0 ;
- DIVA_OS_MEM_DETACH_RESET(IoAdapter, p);
- diva_os_wait (50) ;
- IoAdapter->InitialDspInfo |= DspCount << 16 ;
- return (ret);
-}
-/* -------------------------------------------------------------------------
- helper used to download dsp code toi PRI Card
- ------------------------------------------------------------------------- */
-static long pri_download_buffer (OsFileHandle *fp, long length, void **addr) {
- PISDN_ADAPTER IoAdapter = (PISDN_ADAPTER)fp->sysLoadDesc ;
- dword *sharedRam ;
- byte *p;
- *addr = (void *)IoAdapter->downloadAddr ;
- if ( ((dword) length) > IoAdapter->DspCodeBaseAddr +
- IoAdapter->MaxDspCodeSize - IoAdapter->downloadAddr )
- {
- DBG_FTL(("%s: out of card memory during DSP download (0x%X)",
- IoAdapter->Properties.Name,
- IoAdapter->downloadAddr + length))
- return (-1) ;
- }
- p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- sharedRam = (dword *)(&p[DOWNLOAD_ADDR(IoAdapter)]);
- if ( fp->sysFileRead (fp, sharedRam, length) != length ) {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- return (-1) ;
- }
- IoAdapter->downloadAddr += length ;
- IoAdapter->downloadAddr = (IoAdapter->downloadAddr + 3) & (~3) ;
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- return (0) ;
-}
-/* -------------------------------------------------------------------------
- Download DSP code to PRI Card
- ------------------------------------------------------------------------- */
-static dword pri_telindus_load (PISDN_ADAPTER IoAdapter) {
- char *error ;
- OsFileHandle *fp ;
- t_dsp_portable_desc download_table[DSP_MAX_DOWNLOAD_COUNT] ;
- word download_count ;
- dword *sharedRam ;
- dword FileLength ;
- byte *p;
- if ( !(fp = OsOpenFile (DSP_TELINDUS_FILE)) )
- return (0) ;
- IoAdapter->downloadAddr = (IoAdapter->DspCodeBaseAddr
- + sizeof(dword) + sizeof(download_table) + 3) & (~3) ;
- FileLength = fp->sysFileSize ;
- fp->sysLoadDesc = (void *)IoAdapter ;
- fp->sysCardLoad = pri_download_buffer ;
- download_count = DSP_MAX_DOWNLOAD_COUNT ;
- memset (&download_table[0], '\0', sizeof(download_table)) ;
-/*
- * set start address for download (use autoincrement mode !)
- */
- error = dsp_read_file (fp, (word)(IoAdapter->cardType),
- &download_count, NULL, &download_table[0]) ;
- if ( error )
- {
- DBG_FTL(("download file error: %s", error))
- OsCloseFile (fp) ;
- return (0) ;
- }
- OsCloseFile (fp) ;
-/*
- * store # of separate download files extracted from archive
- */
- IoAdapter->downloadAddr = IoAdapter->DspCodeBaseAddr ;
- p = DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- sharedRam = (dword *)(&p[DOWNLOAD_ADDR(IoAdapter)]);
- WRITE_DWORD(&(sharedRam[0]), (dword)download_count);
- memcpy (&sharedRam[1], &download_table[0], sizeof(download_table)) ;
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, p);
- return (FileLength) ;
-}
-/* -------------------------------------------------------------------------
- Download PRI Card
- ------------------------------------------------------------------------- */
-#define MIN_DSPS 0x30000000
-static int load_pri_hardware (PISDN_ADAPTER IoAdapter) {
- dword i ;
- struct mp_load *boot = (struct mp_load *)DIVA_OS_MEM_ATTACH_RAM(IoAdapter);
- if ( IoAdapter->Properties.Card != CARD_MAEP ) {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- return (0) ;
- }
- boot->err = 0 ;
-#if 0
- IoAdapter->rstFnc (IoAdapter) ;
-#else
- if ( MIN_DSPS != (MIN_DSPS & diva_pri_detect_dsps(IoAdapter)) ) { /* makes reset */
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- DBG_FTL(("%s: DSP error!", IoAdapter->Properties.Name))
- return (0) ;
- }
-#endif
-/*
- * check if CPU is alive
- */
- diva_os_wait (10) ;
- i = boot->live ;
- diva_os_wait (10) ;
- if ( i == boot->live )
- {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- DBG_FTL(("%s: CPU is not alive!", IoAdapter->Properties.Name))
- return (0) ;
- }
- if ( boot->err )
- {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- DBG_FTL(("%s: Board Selftest failed!", IoAdapter->Properties.Name))
- return (0) ;
- }
-/*
- * download protocol and dsp files
- */
- if ( !xdiSetProtocol (IoAdapter, IoAdapter->ProtocolSuffix) ) {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- return (0) ;
- }
- if ( !pri_protocol_load (IoAdapter) ) {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- return (0) ;
- }
- if ( !pri_telindus_load (IoAdapter) ) {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- return (0) ;
- }
-/*
- * copy configuration parameters
- */
- IoAdapter->ram += MP_SHARED_RAM_OFFSET ;
- memset (boot + MP_SHARED_RAM_OFFSET, '\0', 256) ;
- diva_configure_protocol (IoAdapter);
-/*
- * start adapter
- */
- boot->addr = MP_UNCACHED_ADDR (MP_PROTOCOL_OFFSET) ;
- boot->cmd = 3 ;
-/*
- * wait for signature in shared memory (max. 3 seconds)
- */
- for ( i = 0 ; i < 300 ; ++i )
- {
- diva_os_wait (10) ;
- if ( (boot->signature >> 16) == 0x4447 )
- {
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- DBG_TRC(("Protocol startup time %d.%02d seconds",
- (i / 100), (i % 100) ))
- return (1) ;
- }
- }
- DIVA_OS_MEM_DETACH_RAM(IoAdapter, boot);
- DBG_FTL(("%s: Adapter selftest failed (0x%04X)!",
- IoAdapter->Properties.Name, boot->signature >> 16))
- pri_cpu_trapped (IoAdapter) ;
- return (0) ;
-}
-#else /* } { */
static int load_pri_hardware (PISDN_ADAPTER IoAdapter) {
return (0);
}
-#endif /* } */
/* --------------------------------------------------------------------------
PRI Adapter interrupt Service Routine
-------------------------------------------------------------------------- */
-\r
-/*\r
- *\r
- Copyright (c) Eicon Networks, 2002.\r
- *\r
- This source file is supplied for the use with\r
- Eicon Networks range of DIVA Server Adapters.\r
- *\r
- Eicon File Revision : 2.1\r
- *\r
- This program is free software; you can redistribute it and/or modify\r
- it under the terms of the GNU General Public License as published by\r
- the Free Software Foundation; either version 2, or (at your option)\r
- any later version.\r
- *\r
- This program is distributed in the hope that it will be useful,\r
- but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY\r
- implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\r
- See the GNU General Public License for more details.\r
- *\r
- You should have received a copy of the GNU General Public License\r
- along with this program; if not, write to the Free Software\r
- Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.\r
- *\r
- */\r
-static char diva_xdi_common_code_build[] = "102-52"; \r
+
+/*
+ *
+ Copyright (c) Eicon Networks, 2002.
+ *
+ This source file is supplied for the use with
+ Eicon Networks range of DIVA Server Adapters.
+ *
+ Eicon File Revision : 2.1
+ *
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+ *
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY OF ANY KIND WHATSOEVER INCLUDING ANY
+ implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ See the GNU General Public License for more details.
+ *
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+static char diva_xdi_common_code_build[] = "102-52";
cs->irq = card->para[0];
- outb(cs->hw.avm.cfg_reg+ASL1_OFFSET, ASL1_W_ENABLE_S0);
+ byteout(cs->hw.avm.cfg_reg+ASL1_OFFSET, ASL1_W_ENABLE_S0);
byteout(cs->hw.avm.cfg_reg+ASL0_OFFSET,0x00);
HZDELAY(HZ / 5 + 1);
byteout(cs->hw.avm.cfg_reg+ASL0_OFFSET,ASL0_W_RESET);
*/
#ifdef PCMCIA_DEBUG
static int pc_debug = PCMCIA_DEBUG;
-MODULE_PARM(pc_debug, "i");
+module_param(pc_debug, int, 0);
#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args);
static char *version =
"avma1_cs.c 1.00 1998/01/23 10:00:00 (Carsten Paeth)";
/* Parameters that can be set with 'insmod' */
-static int default_irq_list[11] = { 15, 13, 12, 11, 10, 9, 7, 5, 4, 3, -1 };
-static int irq_list[11] = { -1 };
static int isdnprot = 2;
-MODULE_PARM(irq_list, "1-11i");
-MODULE_PARM(isdnprot, "1-4i");
+module_param(isdnprot, int, 0);
/*====================================================================*/
client_reg_t client_reg;
dev_link_t *link;
local_info_t *local;
- int ret, i;
+ int ret;
DEBUG(0, "avma1cs_attach()\n");
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING|IRQ_FIRST_SHARED;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
- if (irq_list[0] != -1) {
- for (i = 0; i < 10 && irq_list[i] > 0; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
- } else {
- for (i = 0; i < 10 && default_irq_list[i] > 0; i++)
- link->irq.IRQInfo2 |= 1 << default_irq_list[i];
- }
-
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
+
/* General socket configuration */
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.Vcc = 50;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void __exit exit_avma1_cs(void)
{
pcmcia_unregister_driver(&avma1cs_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL) {
- if (dev_list->state & DEV_CONFIG)
- avma1cs_release(dev_list);
- avma1cs_detach(dev_list);
- }
+ BUG_ON(dev_list != NULL);
}
module_init(init_avma1_cs);
#ifdef CONFIG_HISAX_DEBUG
static int debug = 0;
/* static int hdlcfifosize = 32; */
-MODULE_PARM(debug, "i");
-/* MODULE_PARM(hdlcfifosize, "i"); */
+module_param(debug, int, 0);
+/* module_param(hdlcfifosize, int, 0); */
#endif
MODULE_AUTHOR("Kai Germaschewski <kai.germaschewski@gmx.de>/Karsten Keil <kkeil@suse.de>");
#endif
static int protocol = 2; /* EURO-ISDN Default */
-MODULE_PARM(protocol, "i");
+module_param(protocol, int, 0);
MODULE_LICENSE("GPL");
// ----------------------------------------------------------------------
#ifdef CONFIG_HISAX_DEBUG
static int debug = 1;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
static char *ISACVer[] = {
"2086/2186 V1.1",
static char *ICCVer[] __initdata =
{"2070 A1/A3", "2070 B1", "2070 B2/B3", "2070 V2.4"};
-void
+void __init
ICCVersion(struct IsdnCardState *cs, char *s)
{
int val;
#define ICC_IND_AIL 0xE
#define ICC_IND_DC 0xF
-extern void ICCVersion(struct IsdnCardState *cs, char *s);
+extern void __init ICCVersion(struct IsdnCardState *cs, char *s);
extern void initicc(struct IsdnCardState *cs);
extern void icc_interrupt(struct IsdnCardState *cs, u_char val);
extern void clear_pending_icc_ints(struct IsdnCardState *cs);
#ifdef PCMCIA_DEBUG
static int pc_debug = PCMCIA_DEBUG;
-MODULE_PARM(pc_debug, "i");
+module_param(pc_debug, int, 0);
#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args);
static char *version =
"sedlbauer_cs.c 1.1a 2001/01/28 15:04:04 (M.Niemann)";
/* Parameters that can be set with 'insmod' */
-/* The old way: bit map of interrupts to choose from */
-/* This means pick from 15, 14, 12, 11, 10, 9, 7, 5, 4, and 3 */
-static u_int irq_mask = 0xdeb8;
-/* Newer, simpler way of listing specific interrupts */
-static int irq_list[4] = { -1 };
-
-MODULE_PARM(irq_mask, "i");
-MODULE_PARM(irq_list, "1-4i");
-
static int protocol = 2; /* EURO-ISDN Default */
-MODULE_PARM(protocol, "i");
+module_param(protocol, int, 0);
/*====================================================================*/
local_info_t *local;
dev_link_t *link;
client_reg_t client_reg;
- int ret, i;
+ int ret;
DEBUG(0, "sedlbauer_attach()\n");
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = NULL;
-
+
/*
General socket configuration defaults can go here. In this
client, we assume very little, and rely on the CIS for almost
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void __exit exit_sedlbauer_cs(void)
{
pcmcia_unregister_driver(&sedlbauer_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL) {
- if (dev_list->state & DEV_CONFIG)
- sedlbauer_release(dev_list);
- sedlbauer_detach(dev_list);
- }
+ BUG_ON(dev_list != NULL);
}
module_init(init_sedlbauer_cs);
MODULE_LICENSE("GPL");
static int protocol = 2; /* EURO-ISDN Default */
-MODULE_PARM(protocol, "i");
+module_param(protocol, int, 0);
static int number_of_leds = 2;	/* 2 LEDs on the adapter default */
-MODULE_PARM(number_of_leds, "i");
+module_param(number_of_leds, int, 0);
#ifdef CONFIG_HISAX_DEBUG
static int debug = 0x1;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
int st5481_debug;
#endif
int retval, i;
printk(KERN_INFO "st541: found adapter VendorId %04x, ProductId %04x, LEDs %d\n",
- dev->descriptor.idVendor, dev->descriptor.idProduct,
+ le16_to_cpu(dev->descriptor.idVendor),
+ le16_to_cpu(dev->descriptor.idProduct),
number_of_leds);
adapter = kmalloc(sizeof(struct st5481_adapter), GFP_KERNEL);
help
If you say Y here, the modem-emulator will support a subset of the
EIA Class 8 Voice commands. Using a getty with voice-support
- (mgetty+sendfax by gert@greenie.muc.de with an extension, available
+ (mgetty+sendfax by <gert@greenie.muc.de> with an extension, available
with the ISDN utility package for example), you will be able to use
your Linux box as an ISDN-answering machine. Of course, this must be
supported by the lowlevel driver also. Currently, the HiSax driver
typedef struct icn_dev {
spinlock_t devlock; /* spinlock to protect this struct */
unsigned long memaddr; /* Address of memory mapped buffers */
- icn_shmem *shmem; /* Pointer to memory-mapped-buffers */
+ icn_shmem __iomem *shmem; /* Pointer to memory-mapped-buffers */
int mvalid; /* IO-shmem has been requested */
int channel; /* Currently mapped channel */
struct icn_card *mcard; /* Currently mapped card */
struct pcbit_dev {
/* board */
- volatile unsigned char* sh_mem; /* RDP address */
+ volatile unsigned char __iomem *sh_mem; /* RDP address */
unsigned long ph_mem;
unsigned int irq;
unsigned int id;
u_char w_busy;
u_char r_busy;
- volatile unsigned char *readptr;
- volatile unsigned char *writeptr;
+ volatile unsigned char __iomem *readptr;
+ volatile unsigned char __iomem *writeptr;
ushort loadptr;
MODULE_DESCRIPTION("ISDN4Linux: Driver for Spellcaster card");
MODULE_AUTHOR("Spellcaster Telecommunications Inc.");
MODULE_LICENSE("GPL");
-MODULE_PARM( io, "1-" __MODULE_STRING(MAX_CARDS) "i");
-MODULE_PARM(irq, "1-" __MODULE_STRING(MAX_CARDS) "i");
-MODULE_PARM(ram, "1-" __MODULE_STRING(MAX_CARDS) "i");
-MODULE_PARM(do_reset, "i");
board *sc_adapter[MAX_CARDS];
int cinst;
static unsigned long ram[] = {0,0,0,0};
static int do_reset = 0;
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(ram, int, NULL, 0);
+module_param(do_reset, bool, 0);
+
static int sup_irq[] = { 11, 10, 9, 5, 12, 14, 7, 3, 4, 6 };
#define MAX_IRQS 10
MODULE_AUTHOR("Stelian Pop");
MODULE_LICENSE("GPL");
MODULE_PARM_DESC(id,"ID-String of the driver");
-MODULE_PARM(id,"s");
+module_param(id, charp, 0);
/*
* Finds a board by its driver ID.
memset((char *)card, 0, sizeof(tpam_card));
card->irq = dev->irq;
- card->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&card->lock);
sprintf(card->interface.id, "%s%d", id, cards_num);
/* request interrupt */
static int pending_devs[16];
static int pending_led_start=0;
static int pending_led_end=0;
-static spinlock_t leds_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(leds_lock);
static void leds_done(struct adb_request *req)
{
static volatile struct adb_regs __iomem *adb;
static struct adb_request *current_req, *last_req;
-static spinlock_t macio_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(macio_lock);
static int macio_probe(void);
static int macio_init(void);
#include <linux/init.h>
static volatile unsigned char __iomem *via;
-static spinlock_t cuda_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cuda_lock);
#ifdef CONFIG_MAC
#define CUDA_IRQ IRQ_MAC_ADB
/* Very few machines have more than one MCA bus. However, there are
* those that do (Voyager 35xx/5xxx), so we do it this way for future
* expansion. None that I know have more than 2 */
-struct mca_bus *mca_root_busses[MAX_MCA_BUSSES];
+static struct mca_bus *mca_root_busses[MAX_MCA_BUSSES];
#define MCA_DEVINFO(i,s) { .pos = i, .name = s }
}
EXPORT_SYMBOL(mca_set_adapter_name);
-/**
- * mca_get_adapter_name - get the adapter description
- * @slot: slot to query
- *
- * Return the adapter description if set. If it has not been
- * set or the slot is out range then return NULL.
- */
-
-char *mca_get_adapter_name(int slot)
-{
- struct mca_device *mca_dev = mca_find_device_by_slot(slot);
-
- if(!mca_dev)
- return NULL;
-
- return mca_device_get_name(mca_dev);
-}
-EXPORT_SYMBOL(mca_get_adapter_name);
-
/**
* mca_is_adapter_used - check if claimed by driver
* @slot: slot to check
}
EXPORT_SYMBOL(mca_mark_as_unused);
-/**
- * mca_isadapter - check if the slot holds an adapter
- * @slot: slot to query
- *
- * Returns zero if the slot does not hold an adapter, non zero if
- * it does.
- */
-
-int mca_isadapter(int slot)
-{
- struct mca_device *mca_dev = mca_find_device_by_slot(slot);
- enum MCA_AdapterStatus status;
-
- if(!mca_dev)
- return 0;
-
- status = mca_device_status(mca_dev);
-
- return status == MCA_ADAPTER_NORMAL
- || status == MCA_ADAPTER_DISABLED;
-}
-EXPORT_SYMBOL(mca_isadapter);
-
-/**
- * mca_isenabled - check if the slot holds an enabled adapter
- * @slot: slot to query
- *
- * Returns a non zero value if the slot holds an enabled adapter
- * and zero for any other case.
- */
-
-int mca_isenabled(int slot)
-{
- struct mca_device *mca_dev = mca_find_device_by_slot(slot);
-
- if(!mca_dev)
- return 0;
-
- return mca_device_status(mca_dev) == MCA_ADAPTER_NORMAL;
-}
return len;
}
-int get_mca_info(char *page, char **start, off_t off,
- int count, int *eof, void *data)
+static int get_mca_info(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
{
int i, len = 0;
struct bio *base_bio, unsigned int *bio_vec_idx)
{
struct bio *bio;
- unsigned int nr_iovecs = dm_div_up(size, PAGE_SIZE);
+ unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
int gfp_mask = GFP_NOIO | __GFP_HIGHMEM;
unsigned long flags = current->flags;
unsigned int i;
return 0;
}
-static inline void bs_bio_init(struct bio *bio)
-{
- bio->bi_next = NULL;
- bio->bi_flags = 1 << BIO_UPTODATE;
- bio->bi_rw = 0;
- bio->bi_vcnt = 0;
- bio->bi_idx = 0;
- bio->bi_phys_segments = 0;
- bio->bi_hw_segments = 0;
- bio->bi_size = 0;
- bio->bi_max_vecs = 0;
- bio->bi_end_io = NULL;
- atomic_set(&bio->bi_cnt, 1);
- bio->bi_private = NULL;
-}
-
static unsigned _bio_count = 0;
struct bio *bio_set_alloc(struct bio_set *bs, int gfp_mask, int nr_iovecs)
{
#include "dm-io.h"
static LIST_HEAD(_log_types);
-static spinlock_t _lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(_lock);
int dm_register_dirty_log_type(struct dirty_log_type *type)
{
- if (!try_module_get(type->module))
- return -EINVAL;
-
spin_lock(&_lock);
type->use_count = 0;
list_add(&type->list, &_log_types);
if (type->use_count)
DMWARN("Attempt to unregister a log type that is still in use");
- else {
+ else
list_del(&type->list);
- module_put(type->module);
- }
spin_unlock(&_lock);
spin_lock(&_lock);
list_for_each_entry (type, &_log_types, list)
if (!strcmp(type_name, type->name)) {
+ if (!type->use_count && !try_module_get(type->module)){
+ spin_unlock(&_lock);
+ return NULL;
+ }
type->use_count++;
spin_unlock(&_lock);
return type;
static void put_type(struct dirty_log_type *type)
{
spin_lock(&_lock);
- type->use_count--;
+ if (!--type->use_count)
+ module_put(type->module);
spin_unlock(&_lock);
}
struct log_c {
struct dm_target *ti;
int touched;
- sector_t region_size;
+ uint32_t region_size;
unsigned int region_count;
region_t sync_count;
enum sync sync = DEFAULTSYNC;
struct log_c *lc;
- sector_t region_size;
+ uint32_t region_size;
unsigned int region_count;
size_t bitset_size;
}
}
-	if (sscanf(argv[0], SECTOR_FORMAT, &region_size) != 1) {
+	if (sscanf(argv[0], "%u", &region_size) != 1) {
DMWARN("invalid region size string");
return -EINVAL;
}
- region_count = dm_div_up(ti->len, region_size);
+ region_count = dm_sector_div_up(ti->len, region_size);
lc = kmalloc(sizeof(*lc), GFP_KERNEL);
if (!lc) {
return write_header(lc);
}
-static sector_t core_get_region_size(struct dirty_log *log)
+static uint32_t core_get_region_size(struct dirty_log *log)
{
struct log_c *lc = (struct log_c *) log->context;
return lc->region_size;
break;
case STATUSTYPE_TABLE:
- DMEMIT("%s %u " SECTOR_FORMAT " ", log->type->name,
+ DMEMIT("%s %u %u ", log->type->name,
lc->sync == DEFAULTSYNC ? 1 : 2, lc->region_size);
DMEMIT_SYNC;
}
case STATUSTYPE_TABLE:
format_dev_t(buffer, lc->log_dev->bdev->bd_dev);
- DMEMIT("%s %u %s " SECTOR_FORMAT " ", log->type->name,
+ DMEMIT("%s %u %s %u ", log->type->name,
lc->sync == DEFAULTSYNC ? 2 : 3, buffer,
lc->region_size);
DMEMIT_SYNC;
* Retrieves the smallest size of region that the log can
* deal with.
*/
- sector_t (*get_region_size)(struct dirty_log *log);
+ uint32_t (*get_region_size)(struct dirty_log *log);
/*
* A predicate to say whether a region is clean or not.
struct mirror_set;
struct region_hash {
struct mirror_set *ms;
- sector_t region_size;
+ uint32_t region_size;
unsigned region_shift;
/* holds persistent region state */
#define MIN_REGIONS 64
#define MAX_RECOVERY 1
static int rh_init(struct region_hash *rh, struct mirror_set *ms,
- struct dirty_log *log, sector_t region_size,
+ struct dirty_log *log, uint32_t region_size,
region_t nr_regions)
{
unsigned int nr_buckets, max_buckets;
else {
__rh_insert(rh, nreg);
if (nreg->state == RH_CLEAN) {
- spin_lock_irq(&rh->region_lock);
+ spin_lock(&rh->region_lock);
list_add(&nreg->list, &rh->clean_regions);
- spin_unlock_irq(&rh->region_lock);
+ spin_unlock(&rh->region_lock);
}
reg = nreg;
}
* Target functions
*---------------------------------------------------------------*/
static struct mirror_set *alloc_context(unsigned int nr_mirrors,
- sector_t region_size,
+ uint32_t region_size,
struct dm_target *ti,
struct dirty_log *dl)
{
ms->ti = ti;
ms->nr_mirrors = nr_mirrors;
- ms->nr_regions = dm_div_up(ti->len, region_size);
+ ms->nr_regions = dm_sector_div_up(ti->len, region_size);
ms->in_sync = 0;
if (rh_init(&ms->rh, ms, dl, region_size, ms->nr_regions)) {
kfree(ms);
}
-static inline int _check_region_size(struct dm_target *ti, sector_t size)
+static inline int _check_region_size(struct dm_target *ti, uint32_t size)
{
return !(size % (PAGE_SIZE >> 9) || (size & (size - 1)) ||
size > ti->len);
return 0;
}
-static void mirror_suspend(struct dm_target *ti)
+static void mirror_postsuspend(struct dm_target *ti)
{
struct mirror_set *ms = (struct mirror_set *) ti->private;
struct dirty_log *log = ms->rh.log;
+
rh_stop_recovery(&ms->rh);
if (log->type->suspend && log->type->suspend(log))
/* FIXME: need better error handling */
.dtr = mirror_dtr,
.map = mirror_map,
.end_io = mirror_end_io,
- .suspend = mirror_suspend,
+ .postsuspend = mirror_postsuspend,
.resume = mirror_resume,
.status = mirror_status,
};
uint32_t stripes;
/* The size of this target / num. stripes */
- uint32_t stripe_width;
+ sector_t stripe_width;
/* stripe chunk size */
uint32_t chunk_shift;
struct stripe_c *sc = (struct stripe_c *) ti->private;
sector_t offset = bio->bi_sector - ti->begin;
- uint32_t chunk = (uint32_t) (offset >> sc->chunk_shift);
- uint32_t stripe = chunk % sc->stripes; /* 32bit modulus */
- chunk = chunk / sc->stripes;
+ sector_t chunk = offset >> sc->chunk_shift;
+ uint32_t stripe = sector_div(chunk, sc->stripes);
bio->bi_bdev = sc->stripe[stripe].dev->bdev;
bio->bi_sector = sc->stripe[stripe].physical_start +
static struct target_type stripe_target = {
.name = "striped",
- .version= {1, 0, 1},
+ .version= {1, 0, 2},
.module = THIS_MODULE,
.ctr = stripe_ctr,
.dtr = stripe_dtr,
*
* All three of these are protected by job_lock.
*/
-static spinlock_t _job_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(_job_lock);
static LIST_HEAD(_complete_jobs);
static LIST_HEAD(_io_jobs);
return -ENOMEM;
}
- kc->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&kc->lock);
kc->pages = NULL;
kc->nr_pages = kc->nr_free_pages = 0;
r = client_alloc_pages(kc, nr_pages);
extern const struct raid6_calls raid6_sse2x1;
extern const struct raid6_calls raid6_sse2x2;
extern const struct raid6_calls raid6_sse2x4;
+extern const struct raid6_calls raid6_altivec1;
+extern const struct raid6_calls raid6_altivec2;
+extern const struct raid6_calls raid6_altivec4;
+extern const struct raid6_calls raid6_altivec8;
const struct raid6_calls * const raid6_algos[] = {
&raid6_intx1,
&raid6_sse2x1,
&raid6_sse2x2,
&raid6_sse2x4,
+#endif
+#ifdef CONFIG_ALTIVEC
+ &raid6_altivec1,
+ &raid6_altivec2,
+ &raid6_altivec4,
+ &raid6_altivec8,
#endif
NULL
};
config VIDEO_IR
tristate
+config VIDEO_TVEEPROM
+ tristate
+
endmenu
/* disable all irqs */
saa7146_write(dev, IER, 0);
- /* shut down all dma transfers */
- saa7146_write(dev, MC1, 0x00ff0000);
+ /* shut down all dma transfers and rps tasks */
+ saa7146_write(dev, MC1, 0x30ff0000);
/* clear out any rps-signals pending */
saa7146_write(dev, MC2, 0xf8000000);
pci_set_drvdata(pci,dev);
init_MUTEX(&dev->lock);
- dev->int_slock = SPIN_LOCK_UNLOCKED;
- dev->slock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&dev->int_slock);
+ spin_lock_init(&dev->slock);
init_MUTEX(&dev->i2c_lock);
select DVB_STV0299
select DVB_MT352
select DVB_MT312
+ select DVB_NXT2002
help
Support for the Skystar2 PCI DVB card by Technisat, which
- is equipped with the FlexCopII chipset by B2C2.
+ is equipped with the FlexCopII chipset by B2C2, and
+ for the B2C2/BBTI Air2PC-ATSC card.
Say Y if you own such a device and want to use it.
select DVB_MT352
help
Support for the Air/Sky/Cable2PC USB DVB device by B2C2. Currently
- this does nothing, but providing basic function for the used usb
+	  this does nothing, but provides basic functions for the USB
protocol.
Say Y if you own such a device and want to use it.
}
static int debug;
-module_param(debug, int, 0x644);
+module_param(debug, int, 0644);
MODULE_PARM_DESC(debug, "set debugging level (1=info,ts=2,ctrl=4 (or-able)).");
#define deb_info(args...) dprintk(0x01,args)
/* request types */
typedef enum {
+
+/* something is wrong with this part
RTYPE_READ_DW = (1 << 6),
RTYPE_WRITE_DW_1 = (3 << 6),
+ RTYPE_READ_V8_MEMORY = (6 << 6),
+ RTYPE_WRITE_V8_MEMORY = (7 << 6),
+ RTYPE_WRITE_V8_FLASH = (8 << 6),
+ RTYPE_GENERIC = (9 << 6),
+*/
+ RTYPE_READ_DW = (3 << 6),
+ RTYPE_WRITE_DW_1 = (1 << 6),
+
RTYPE_READ_V8_MEMORY = (6 << 6),
RTYPE_WRITE_V8_MEMORY = (7 << 6),
RTYPE_WRITE_V8_FLASH = (8 << 6),
static int b2c2_init_usb(struct usb_b2c2_usb *b2c2)
{
- u16 frame_size = b2c2->uintf->cur_altsetting->endpoint[0].desc.wMaxPacketSize;
+ u16 frame_size = le16_to_cpu(b2c2->uintf->cur_altsetting->endpoint[0].desc.wMaxPacketSize);
int bufsize = B2C2_USB_NUM_ISO_URB * B2C2_USB_FRAMES_PER_ISO * frame_size,i,j,ret;
int buffer_offset = 0;
}
/* initialising and submitting iso urbs */
for (i = 0; i < B2C2_USB_NUM_ISO_URB; i++) {
- deb_info("initializing and submitting urb no. %d (buf_offset: %d).\n",i,buffer_offset);
int frame_offset = 0;
struct urb *urb = b2c2->iso_urb[i];
+ deb_info("initializing and submitting urb no. %d (buf_offset: %d).\n",i,buffer_offset);
urb->dev = b2c2->udev;
urb->context = b2c2;
depends on DVB_CORE && PCI && VIDEO_BT848
select DVB_MT352
select DVB_SP887X
+ select DVB_NXT6000
+ select DVB_CX24110
help
Support for PCI cards based on the Bt8xx PCI bridge. Examples are
the Nebula cards, the Pinnacle PCTV cards and Twinhan DST cards.
only compressed MPEG data over the PCI bus, so you need
an external software decoder to watch TV on your computer.
- If you have a Twinhan card, don't forget to select
- "Twinhan DST based DVB-S/-T frontend".
-
Say Y if you own such a device and want to use it.
*
*/
+#ifndef DVB_BT8XX_H
+#define DVB_BT8XX_H
+
#include <linux/i2c.h>
#include "dvbdev.h"
#include "dvb_net.h"
#include "sp887x.h"
#include "dst.h"
#include "nxt6000.h"
+#include "cx24110.h"
struct dvb_bt8xx_card {
struct semaphore lock;
struct dvb_frontend* fe;
};
+
+#endif /* DVB_BT8XX_H */
switch (cmd) {
case FE_GET_INFO:
- return copy_to_user((void*) arg, &cinergyt2_fe_info,
+ return copy_to_user((void __user*) arg, &cinergyt2_fe_info,
sizeof(struct dvb_frontend_info));
case FE_READ_STATUS:
if (stat->lock_bits & (1 << 1))
status |= FE_HAS_VITERBI;
- return copy_to_user((void *) arg, &status, sizeof(status));
+ return copy_to_user((void __user*) arg, &status, sizeof(status));
case FE_READ_BER:
return put_user(le32_to_cpu(stat->viterbi_error_rate),
if ((file->f_flags & O_ACCMODE) == O_RDONLY)
return -EPERM;
- if (copy_from_user(&p, (void *) arg, sizeof(p)))
+ if (copy_from_user(&p, (void __user*) arg, sizeof(p)))
return -EFAULT;
if (down_interruptible(&cinergyt2->sem))
* for now we only fill the status field. the parameters
* are trivial to fill as soon FE_GET_FRONTEND is done.
*/
- struct dvb_frontend_event *e = (void *) arg;
+ struct dvb_frontend_event __user *e = (void __user *) arg;
if (cinergyt2->pending_fe_events == 0) {
if (file->f_flags & O_NONBLOCK)
return -EWOULDBLOCK;
config DVB_DIBUSB
- tristate "DiBcom USB DVB-T devices (see help for device list)"
+ tristate "DiBcom USB DVB-T devices (see help for a complete device list)"
depends on DVB_CORE && USB
select FW_LOADER
select DVB_DIB3000MB
select DVB_DIB3000MC
+ select DVB_MT352
help
Support for USB 1.1 and 2.0 DVB-T devices based on reference designs made by
- DiBcom (http://www.dibcom.fr).
+ DiBcom (http://www.dibcom.fr) and C&E.
Devices supported by this driver:
TwinhanDTV Magic Box (VP7041e)
KWorld V-Stream XPERT DTV - DVB-T USB
Hama DVB-T USB-Box
- DiBcom reference device (non-public)
+ DiBcom reference devices (non-public)
Ultima Electronic/Artec T1 USB TVBOX
Compro Videomate DVB-U2000 - DVB-T USB
Grandtec DVB-T USB
Avermedia AverTV DVBT USB
- Yakumo DVB-T mobile USB2.0
+ Artec T1 USB1.1 and USB2.0 boxes
+ Yakumo/Typhoon DVB-T USB2.0
+ Hanftek UMT-010 USB2.0
The VP7041 seems to be identical to "CTS Portable" (Chinese
Television System).
0x0574:0x2235 (Artec T1 USB1.1, cold)
0x04b4:0x8613 (Artec T1 USB2.0, cold)
0x0574:0x1002 (Artec T1 USB2.0, warm)
+ 0x0574:0x2131 (aged DiBcom USB1.1 test device)
Say Y if your device has one of the mentioned IDs.
+dvb-dibusb-objs = dvb-dibusb-core.o \
+ dvb-dibusb-dvb.o \
+ dvb-dibusb-fe-i2c.o \
+ dvb-dibusb-firmware.o \
+ dvb-dibusb-remote.o \
+ dvb-dibusb-usb.o \
+ dvb-dibusb-pid.o
+
obj-$(CONFIG_DVB_DIBUSB) += dvb-dibusb.o
EXTRA_CFLAGS = -Idrivers/media/dvb/dvb-core/ -Idrivers/media/dvb/frontends/
--- /dev/null
+/*
+ * Driver for mobile USB Budget DVB-T devices based on reference
+ * design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * dvb-dibusb-core.c
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * based on GPL code from DiBcom, which has
+ * Copyright (C) 2004 Amaury Demol for DiBcom (ademol@dibcom.fr)
+ *
+ * Remote control code added by David Matthews (dm@prolingua.co.uk)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation, version 2.
+ *
+ * Acknowledgements
+ *
+ * Amaury Demol (ademol@dibcom.fr) from DiBcom for providing specs and driver
+ * sources, on which this driver (and the dib3000mb/mc/p frontends) are based.
+ *
+ * see Documentation/dvb/README.dibusb for more information
+ */
+#include "dvb-dibusb.h"
+
+#include <linux/moduleparam.h>
+
+/* debug */
+int dvb_dibusb_debug;
+module_param_named(debug, dvb_dibusb_debug, int, 0644);
+
+#ifdef CONFIG_DVB_DIBCOM_DEBUG
+#define DBSTATUS ""
+#else
+#define DBSTATUS " (debugging is not enabled)"
+#endif
+MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=alotmore,8=ts,16=err,32=rc (|-able))." DBSTATUS);
+#undef DBSTATUS
+
+static int pid_parse;
+module_param(pid_parse, int, 0644);
+MODULE_PARM_DESC(pid_parse, "enable pid parsing (filtering) when running at USB2.0");
+
+static int rc_query_interval;
+module_param(rc_query_interval, int, 0644);
+MODULE_PARM_DESC(rc_query_interval, "interval in msecs for remote control query (default: 100; min: 40)");
+
+/* Vendor IDs */
+#define USB_VID_ANCHOR 0x0547
+#define USB_VID_AVERMEDIA 0x14aa
+#define USB_VID_COMPRO 0x185b
+#define USB_VID_COMPRO_UNK 0x145f
+#define USB_VID_CYPRESS 0x04b4
+#define USB_VID_DIBCOM 0x10b8
+#define USB_VID_EMPIA 0xeb1a
+#define USB_VID_GRANDTEC 0x5032
+#define USB_VID_HYPER_PALTEK 0x1025
+#define USB_VID_HANFTEK 0x15f4
+#define USB_VID_IMC_NETWORKS 0x13d3
+#define USB_VID_TWINHAN 0x1822
+#define USB_VID_ULTIMA_ELECTRONIC 0x05d8
+
+/* Product IDs */
+#define USB_PID_AVERMEDIA_DVBT_USB_COLD 0x0001
+#define USB_PID_AVERMEDIA_DVBT_USB_WARM 0x0002
+#define USB_PID_COMPRO_DVBU2000_COLD 0xd000
+#define USB_PID_COMPRO_DVBU2000_WARM 0xd001
+#define USB_PID_COMPRO_DVBU2000_UNK_COLD 0x010c
+#define USB_PID_COMPRO_DVBU2000_UNK_WARM 0x010d
+#define USB_PID_DIBCOM_MOD3000_COLD 0x0bb8
+#define USB_PID_DIBCOM_MOD3000_WARM 0x0bb9
+#define USB_PID_DIBCOM_MOD3001_COLD 0x0bc6
+#define USB_PID_DIBCOM_MOD3001_WARM 0x0bc7
+#define USB_PID_DIBCOM_ANCHOR_2135_COLD 0x2131
+#define USB_PID_GRANDTEC_DVBT_USB_COLD 0x0fa0
+#define USB_PID_GRANDTEC_DVBT_USB_WARM 0x0fa1
+#define USB_PID_KWORLD_VSTREAM_COLD 0x17de
+#define USB_PID_KWORLD_VSTREAM_WARM 0x17df
+#define USB_PID_TWINHAN_VP7041_COLD 0x3201
+#define USB_PID_TWINHAN_VP7041_WARM 0x3202
+#define USB_PID_ULTIMA_TVBOX_COLD 0x8105
+#define USB_PID_ULTIMA_TVBOX_WARM 0x8106
+#define USB_PID_ULTIMA_TVBOX_AN2235_COLD 0x8107
+#define USB_PID_ULTIMA_TVBOX_AN2235_WARM 0x8108
+#define USB_PID_ULTIMA_TVBOX_ANCHOR_COLD 0x2235
+#define USB_PID_ULTIMA_TVBOX_USB2_COLD 0x8109
+#define USB_PID_ULTIMA_TVBOX_USB2_FX_COLD 0x8613
+#define USB_PID_ULTIMA_TVBOX_USB2_FX_WARM 0x1002
+#define USB_PID_UNK_HYPER_PALTEK_COLD 0x005e
+#define USB_PID_UNK_HYPER_PALTEK_WARM 0x005f
+#define USB_PID_HANFTEK_UMT_010_COLD 0x0001
+#define USB_PID_HANFTEK_UMT_010_WARM 0x0025
+#define USB_PID_YAKUMO_DTT200U_COLD 0x0201
+#define USB_PID_YAKUMO_DTT200U_WARM 0x0301
+
+/* USB Driver stuff
+ * table of devices that this driver is working with
+ *
+ * ATTENTION: Never ever change the order of this table, the particular
+ * devices depend on this order
+ *
+ * Each entry is used as a reference in the device_struct. Currently this is
+ * the only non-redundant way of assigning USB ids to actual devices I'm aware
+ * of, because there is only one place in the code where the assignment of
+ * vendor and product id is done, here.
+ */
+static struct usb_device_id dib_table [] = {
+/* 00 */ { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_DVBT_USB_COLD)},
+/* 01 */ { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_DVBT_USB_WARM)},
+/* 02 */ { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_YAKUMO_DTT200U_COLD) },
+
+/* the following device is actually not supported, but once the correct
+ * firmware is loaded (i.e. its usb ids change) everything works fine
+ */
+/* 03 */ { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_YAKUMO_DTT200U_WARM) },
+
+/* 04 */ { USB_DEVICE(USB_VID_COMPRO, USB_PID_COMPRO_DVBU2000_COLD) },
+/* 05 */ { USB_DEVICE(USB_VID_COMPRO, USB_PID_COMPRO_DVBU2000_WARM) },
+/* 06 */ { USB_DEVICE(USB_VID_COMPRO_UNK, USB_PID_COMPRO_DVBU2000_UNK_COLD) },
+/* 07 */ { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3000_COLD) },
+/* 08 */ { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3000_WARM) },
+/* 09 */ { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3001_COLD) },
+/* 10 */ { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3001_WARM) },
+/* 11 */ { USB_DEVICE(USB_VID_EMPIA, USB_PID_KWORLD_VSTREAM_COLD) },
+/* 12 */ { USB_DEVICE(USB_VID_EMPIA, USB_PID_KWORLD_VSTREAM_WARM) },
+/* 13 */ { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_GRANDTEC_DVBT_USB_COLD) },
+/* 14 */ { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_GRANDTEC_DVBT_USB_WARM) },
+/* 15 */ { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_DIBCOM_MOD3000_COLD) },
+/* 16 */ { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_DIBCOM_MOD3000_WARM) },
+/* 17 */ { USB_DEVICE(USB_VID_HYPER_PALTEK, USB_PID_UNK_HYPER_PALTEK_COLD) },
+/* 18 */ { USB_DEVICE(USB_VID_HYPER_PALTEK, USB_PID_UNK_HYPER_PALTEK_WARM) },
+/* 19 */ { USB_DEVICE(USB_VID_IMC_NETWORKS, USB_PID_TWINHAN_VP7041_COLD) },
+/* 20 */ { USB_DEVICE(USB_VID_IMC_NETWORKS, USB_PID_TWINHAN_VP7041_WARM) },
+/* 21 */ { USB_DEVICE(USB_VID_TWINHAN, USB_PID_TWINHAN_VP7041_COLD) },
+/* 22 */ { USB_DEVICE(USB_VID_TWINHAN, USB_PID_TWINHAN_VP7041_WARM) },
+/* 23 */ { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_COLD) },
+/* 24 */ { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_WARM) },
+/* 25 */ { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_AN2235_COLD) },
+/* 26 */ { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_AN2235_WARM) },
+/* 27 */ { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_USB2_COLD) },
+
+/* 28 */ { USB_DEVICE(USB_VID_HANFTEK, USB_PID_HANFTEK_UMT_010_COLD) },
+/* 29 */ { USB_DEVICE(USB_VID_HANFTEK, USB_PID_HANFTEK_UMT_010_WARM) },
+
+/*
+ * activate the following define when you have one of the devices and want to
+ * build it from build-2.6 in dvb-kernel
+ */
+// #define CONFIG_DVB_DIBUSB_MISDESIGNED_DEVICES
+#ifdef CONFIG_DVB_DIBUSB_MISDESIGNED_DEVICES
+/* 30 */ { USB_DEVICE(USB_VID_ANCHOR, USB_PID_ULTIMA_TVBOX_ANCHOR_COLD) },
+/* 31 */ { USB_DEVICE(USB_VID_CYPRESS, USB_PID_ULTIMA_TVBOX_USB2_FX_COLD) },
+/* 32 */ { USB_DEVICE(USB_VID_ANCHOR, USB_PID_ULTIMA_TVBOX_USB2_FX_WARM) },
+/* 33 */ { USB_DEVICE(USB_VID_ANCHOR, USB_PID_DIBCOM_ANCHOR_2135_COLD) },
+#endif
+ { } /* Terminating entry */
+};
+
+MODULE_DEVICE_TABLE (usb, dib_table);
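+
+/* Illustration of the index coupling described above: the TwinhanDTV entry in
+ * dibusb_devices[] below refers to &dib_table[19] and &dib_table[21] as its
+ * cold ids and to &dib_table[20] and &dib_table[22] as its warm ids, so
+ * inserting or reordering entries in dib_table[] would silently attach the
+ * wrong USB ids to that device description.
+ */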
+
+static struct dibusb_usb_controller dibusb_usb_ctrl[] = {
+ { .name = "Cypress AN2135", .cpu_cs_register = 0x7f92 },
+ { .name = "Cypress AN2235", .cpu_cs_register = 0x7f92 },
+ { .name = "Cypress FX2", .cpu_cs_register = 0xe600 },
+};
+
+struct dibusb_tuner dibusb_tuner[] = {
+ { DIBUSB_TUNER_CABLE_THOMSON,
+ 0x61
+ },
+ { DIBUSB_TUNER_COFDM_PANASONIC_ENV57H1XD5,
+ 0x60
+ },
+ { DIBUSB_TUNER_CABLE_LG_TDTP_E102P,
+ 0x61
+ },
+ { DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5,
+ 0x60
+ },
+};
+
+static struct dibusb_demod dibusb_demod[] = {
+ { DIBUSB_DIB3000MB,
+ 16,
+ { 0x8, 0 },
+ },
+ { DIBUSB_DIB3000MC,
+ 32,
+ { 0x9, 0xa, 0xb, 0xc },
+ },
+ { DIBUSB_MT352,
+ 254,
+ { 0xf, 0 },
+ },
+};
+
+static struct dibusb_device_class dibusb_device_classes[] = {
+ { .id = DIBUSB1_1, .usb_ctrl = &dibusb_usb_ctrl[0],
+ .firmware = "dvb-dibusb-5.0.0.11.fw",
+ .pipe_cmd = 0x01, .pipe_data = 0x02,
+ .urb_count = 3, .urb_buffer_size = 4096,
+ DIBUSB_RC_NEC_PROTOCOL,
+ &dibusb_demod[DIBUSB_DIB3000MB],
+ &dibusb_tuner[DIBUSB_TUNER_CABLE_THOMSON],
+ },
+ { DIBUSB1_1_AN2235, &dibusb_usb_ctrl[1],
+ "dvb-dibusb-an2235-1.fw",
+ 0x01, 0x02,
+ 3, 4096,
+ DIBUSB_RC_NEC_PROTOCOL,
+ &dibusb_demod[DIBUSB_DIB3000MB],
+ &dibusb_tuner[DIBUSB_TUNER_CABLE_THOMSON],
+ },
+ { DIBUSB2_0,&dibusb_usb_ctrl[2],
+ "dvb-dibusb-6.0.0.5.fw",
+ 0x01, 0x06,
+ 3, 188*210,
+ DIBUSB_RC_NEC_PROTOCOL,
+ &dibusb_demod[DIBUSB_DIB3000MC],
+ &dibusb_tuner[DIBUSB_TUNER_COFDM_PANASONIC_ENV57H1XD5],
+ },
+ { UMT2_0, &dibusb_usb_ctrl[2],
+ "dvb-dibusb-umt-1.fw",
+ 0x01, 0x02,
+ 15, 188*21,
+ DIBUSB_RC_NO,
+ &dibusb_demod[DIBUSB_MT352],
+// &dibusb_tuner[DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5],
+ &dibusb_tuner[DIBUSB_TUNER_CABLE_LG_TDTP_E102P],
+ },
+};
+
+static struct dibusb_usb_device dibusb_devices[] = {
+ { "TwinhanDTV USB1.1 / Magic Box / HAMA USB1.1 DVB-T device",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[19], &dib_table[21], NULL},
+ { &dib_table[20], &dib_table[22], NULL},
+ },
+ { "KWorld V-Stream XPERT DTV - DVB-T USB1.1",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[11], NULL },
+ { &dib_table[12], NULL },
+ },
+ { "Grandtec USB1.1 DVB-T",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[13], &dib_table[15], NULL },
+ { &dib_table[14], &dib_table[16], NULL },
+ },
+ { "DiBcom USB1.1 DVB-T reference design (MOD3000)",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[7], NULL },
+ { &dib_table[8], NULL },
+ },
+ { "Artec T1 USB1.1 TVBOX with AN2135",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[23], NULL },
+ { &dib_table[24], NULL },
+ },
+ { "Artec T1 USB1.1 TVBOX with AN2235",
+ &dibusb_device_classes[DIBUSB1_1_AN2235],
+ { &dib_table[25], NULL },
+ { &dib_table[26], NULL },
+ },
+ { "Avermedia AverTV DVBT USB1.1",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[0], NULL },
+ { &dib_table[1], NULL },
+ },
+ { "Compro Videomate DVB-U2000 - DVB-T USB1.1 (please confirm to linux-dvb)",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[4], &dib_table[6], NULL},
+ { &dib_table[5], NULL },
+ },
+	{ "Unknown USB1.1 DVB-T device ???? please report the name to the author",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[17], NULL },
+ { &dib_table[18], NULL },
+ },
+ { "DiBcom USB2.0 DVB-T reference design (MOD3000P)",
+ &dibusb_device_classes[DIBUSB2_0],
+ { &dib_table[9], NULL },
+ { &dib_table[10], NULL },
+ },
+ { "Artec T1 USB2.0 TVBOX (please report the warm ID)",
+ &dibusb_device_classes[DIBUSB2_0],
+ { &dib_table[27], NULL },
+ { NULL },
+ },
+ { "AVermedia/Yakumo/Hama/Typhoon DVB-T USB2.0",
+ &dibusb_device_classes[UMT2_0],
+ { &dib_table[2], NULL },
+ { NULL },
+ },
+ { "Hanftek UMT-010 DVB-T USB2.0",
+ &dibusb_device_classes[UMT2_0],
+ { &dib_table[28], NULL },
+ { &dib_table[29], NULL },
+ },
+#ifdef CONFIG_DVB_DIBUSB_MISDESIGNED_DEVICES
+ { "Artec T1 USB1.1 TVBOX with AN2235 (misdesigned)",
+ &dibusb_device_classes[DIBUSB1_1_AN2235],
+ { &dib_table[30], NULL },
+ { NULL },
+ },
+ { "Artec T1 USB2.0 TVBOX with FX2 IDs (misdesigned, please report the warm ID)",
+ &dibusb_device_classes[DIBUSB2_0],
+ { &dib_table[31], NULL },
+ { &dib_table[32], NULL }, /* undefined, it could be that the device will get another USB ID in warm state */
+ },
+ { "DiBcom USB1.1 DVB-T reference design (MOD3000) with AN2135 default IDs",
+ &dibusb_device_classes[DIBUSB1_1],
+ { &dib_table[33], NULL },
+ { NULL },
+ },
+#endif
+};
+
+static int dibusb_exit(struct usb_dibusb *dib)
+{
+ deb_info("init_state before exiting everything: %x\n",dib->init_state);
+ dibusb_remote_exit(dib);
+ dibusb_fe_exit(dib);
+ dibusb_i2c_exit(dib);
+ dibusb_pid_list_exit(dib);
+ dibusb_dvb_exit(dib);
+ dibusb_urb_exit(dib);
+ deb_info("init_state should be zero now: %x\n",dib->init_state);
+ dib->init_state = DIBUSB_STATE_INIT;
+ kfree(dib);
+ return 0;
+}
+
+static int dibusb_init(struct usb_dibusb *dib)
+{
+ int ret = 0;
+ sema_init(&dib->usb_sem, 1);
+ sema_init(&dib->i2c_sem, 1);
+
+ dib->init_state = DIBUSB_STATE_INIT;
+
+ if ((ret = dibusb_urb_init(dib)) ||
+ (ret = dibusb_dvb_init(dib)) ||
+ (ret = dibusb_pid_list_init(dib)) ||
+ (ret = dibusb_i2c_init(dib))) {
+ dibusb_exit(dib);
+ return ret;
+ }
+
+ if ((ret = dibusb_fe_init(dib)))
+ err("could not initialize a frontend.");
+
+ if ((ret = dibusb_remote_init(dib)))
+ err("could not initialize remote control.");
+
+ return 0;
+}
+
+static struct dibusb_usb_device * dibusb_find_device (struct usb_device *udev,int *cold)
+{
+ int i,j;
+ *cold = -1;
+ for (i = 0; i < sizeof(dibusb_devices)/sizeof(struct dibusb_usb_device); i++) {
+ for (j = 0; j < DIBUSB_ID_MAX_NUM && dibusb_devices[i].cold_ids[j] != NULL; j++) {
+ deb_info("check for cold %x %x\n",dibusb_devices[i].cold_ids[j]->idVendor, dibusb_devices[i].cold_ids[j]->idProduct);
+ if (dibusb_devices[i].cold_ids[j]->idVendor == le16_to_cpu(udev->descriptor.idVendor) &&
+ dibusb_devices[i].cold_ids[j]->idProduct == le16_to_cpu(udev->descriptor.idProduct)) {
+ *cold = 1;
+ return &dibusb_devices[i];
+ }
+ }
+
+ for (j = 0; j < DIBUSB_ID_MAX_NUM && dibusb_devices[i].warm_ids[j] != NULL; j++) {
+ deb_info("check for warm %x %x\n",dibusb_devices[i].warm_ids[j]->idVendor, dibusb_devices[i].warm_ids[j]->idProduct);
+ if (dibusb_devices[i].warm_ids[j]->idVendor == le16_to_cpu(udev->descriptor.idVendor) &&
+ dibusb_devices[i].warm_ids[j]->idProduct == le16_to_cpu(udev->descriptor.idProduct)) {
+ *cold = 0;
+ return &dibusb_devices[i];
+ }
+ }
+ }
+ return NULL;
+}
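+
+/* Note on cold/warm matching: a "cold" id is the one a box reports before its
+ * firmware has been loaded, a "warm" id is the one it re-enumerates with
+ * afterwards; dibusb_probe() below loads the firmware for cold devices and
+ * only fully initializes the driver for warm ones.
+ */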
+
+/*
+ * USB
+ */
+static int dibusb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct usb_dibusb *dib = NULL;
+ struct dibusb_usb_device *dibdev = NULL;
+
+ int ret = -ENOMEM,cold=0;
+
+ if ((dibdev = dibusb_find_device(udev,&cold)) == NULL) {
+ err("something went very wrong, "
+ "unknown product ID: %.4x",le16_to_cpu(udev->descriptor.idProduct));
+ return -ENODEV;
+ }
+
+ if (cold == 1) {
+ info("found a '%s' in cold state, will try to load a firmware",dibdev->name);
+ ret = dibusb_loadfirmware(udev,dibdev);
+ } else {
+ info("found a '%s' in warm state.",dibdev->name);
+ dib = kmalloc(sizeof(struct usb_dibusb),GFP_KERNEL);
+ if (dib == NULL) {
+ err("no memory");
+ return ret;
+ }
+ memset(dib,0,sizeof(struct usb_dibusb));
+
+ dib->udev = udev;
+ dib->dibdev = dibdev;
+
+ /* store parameters to structures */
+ dib->rc_query_interval = rc_query_interval;
+ dib->pid_parse = pid_parse;
+
+ usb_set_intfdata(intf, dib);
+
+ ret = dibusb_init(dib);
+ }
+
+ if (ret == 0)
+ info("%s successfully initialized and connected.",dibdev->name);
+ else
+ info("%s error while loading driver (%d)",dibdev->name,ret);
+ return ret;
+}
+
+static void dibusb_disconnect(struct usb_interface *intf)
+{
+ struct usb_dibusb *dib = usb_get_intfdata(intf);
+ const char *name = DRIVER_DESC;
+
+ usb_set_intfdata(intf,NULL);
+ if (dib != NULL && dib->dibdev != NULL) {
+ name = dib->dibdev->name;
+ dibusb_exit(dib);
+ }
+ info("%s successfully deinitialized and disconnected.",name);
+
+}
+
+/* usb specific object needed to register this driver with the usb subsystem */
+struct usb_driver dibusb_driver = {
+ .owner = THIS_MODULE,
+ .name = DRIVER_DESC,
+ .probe = dibusb_probe,
+ .disconnect = dibusb_disconnect,
+ .id_table = dib_table,
+};
+
+/* module stuff */
+static int __init usb_dibusb_init(void)
+{
+ int result;
+ if ((result = usb_register(&dibusb_driver))) {
+ err("usb_register failed. Error number %d",result);
+ return result;
+ }
+
+ return 0;
+}
+
+static void __exit usb_dibusb_exit(void)
+{
+ /* deregister this driver from the USB subsystem */
+ usb_deregister(&dibusb_driver);
+}
+
+module_init (usb_dibusb_init);
+module_exit (usb_dibusb_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * dvb-dibusb-dvb.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for initializing and handling the
+ * linux-dvb API.
+ */
+#include "dvb-dibusb.h"
+
+#include <linux/usb.h>
+#include <linux/version.h>
+
+static u32 urb_compl_count;
+
+/*
+ * MPEG2 TS DVB stuff
+ */
+void dibusb_urb_complete(struct urb *urb, struct pt_regs *ptregs)
+{
+ struct usb_dibusb *dib = urb->context;
+ int ret;
+
+ deb_ts("urb complete feedcount: %d, status: %d, length: %d\n",dib->feedcount,urb->status,
+ urb->actual_length);
+
+ urb_compl_count++;
+ if (urb_compl_count % 500 == 0)
+ deb_info("%d urbs completed so far.\n",urb_compl_count);
+
+ switch (urb->status) {
+ case 0: /* success */
+ case -ETIMEDOUT: /* NAK */
+ break;
+ case -ECONNRESET: /* unlink */
+ case -ENOENT:
+ case -ESHUTDOWN:
+ return;
+ default: /* error */
+		warn("urb completion error %d.", urb->status);
+ }
+
+ if (dib->feedcount > 0) {
+ deb_ts("URB return len: %d\n",urb->actual_length);
+ if (urb->actual_length % 188)
+ deb_ts("TS Packets: %d, %d\n", urb->actual_length/188,urb->actual_length % 188);
+
+		/* Francois recommends dropping packets that are not completely filled,
+		 * even if they may contain valid TS packets, at least for USB1.1
+ *
+ * if (urb->actual_length == dib->dibdev->parm->default_size && dib->dvb_is_ready) */
+ if (dib->init_state & DIBUSB_STATE_DVB)
+ dvb_dmx_swfilter(&dib->demux, (u8*) urb->transfer_buffer,urb->actual_length);
+ else
+ deb_ts("URB dropped because of the "
+ "actual_length or !dvb_is_ready (%d).\n",dib->init_state & DIBUSB_STATE_DVB);
+ } else
+ deb_ts("URB dropped because of feedcount.\n");
+
+ ret = usb_submit_urb(urb,GFP_ATOMIC);
+ deb_ts("urb resubmitted, (%d)\n",ret);
+}
+
+static int dibusb_ctrl_feed(struct dvb_demux_feed *dvbdmxfeed, int onoff)
+{
+ struct usb_dibusb *dib = dvbdmxfeed->demux->priv;
+ int newfeedcount;
+
+ if (dib == NULL)
+ return -ENODEV;
+
+ newfeedcount = dib->feedcount + (onoff ? 1 : -1);
+
+ /*
+ * stop feed before setting a new pid if there will be no pid anymore
+ */
+// if ((dib->dibdev->parm->firmware_bug && dib->feedcount) ||
+ if (newfeedcount == 0) {
+ deb_ts("stop feeding\n");
+ if (dib->xfer_ops.fifo_ctrl != NULL) {
+ if (dib->xfer_ops.fifo_ctrl(dib->fe,0)) {
+ err("error while inhibiting fifo.");
+ return -ENODEV;
+ }
+ }
+ }
+
+ dib->feedcount = newfeedcount;
+
+ /* get a free pid from the list and activate it on the device
+ * specific pid_filter
+ */
+ if (dib->pid_parse)
+ dibusb_ctrl_pid(dib,dvbdmxfeed,onoff);
+
+ /*
+	 * start the feed, either if the firmware bug is present or if this
+	 * was the first pid to be set and there is still a pid left for
+	 * reception.
+ */
+
+// if ((dib->dibdev->parm->firmware_bug)
+ if (dib->feedcount == onoff && dib->feedcount > 0) {
+
+ deb_ts("controlling pid parser\n");
+ if (dib->xfer_ops.pid_parse != NULL) {
+ if (dib->xfer_ops.pid_parse(dib->fe,dib->pid_parse) < 0) {
+ err("could not handle pid_parser");
+ }
+ }
+
+ deb_ts("start feeding\n");
+ if (dib->xfer_ops.fifo_ctrl != NULL) {
+ if (dib->xfer_ops.fifo_ctrl(dib->fe,1)) {
+ err("error while enabling fifo.");
+ return -ENODEV;
+ }
+ }
+ dibusb_streaming(dib,1);
+ }
+ return 0;
+}
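+
+/* Feed accounting sketch: the first successful start_feed takes feedcount
+ * from 0 to 1 (feedcount == onoff), which is when the pid parser is
+ * programmed, the fifo is enabled and dibusb_streaming() is started; further
+ * feeds only touch the pid filter, and the last stop_feed (feedcount back to
+ * 0) inhibits the fifo again.
+ */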
+
+static int dibusb_start_feed(struct dvb_demux_feed *dvbdmxfeed)
+{
+ deb_ts("start pid: 0x%04x, feedtype: %d\n", dvbdmxfeed->pid,dvbdmxfeed->type);
+ return dibusb_ctrl_feed(dvbdmxfeed,1);
+}
+
+static int dibusb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
+{
+ deb_ts("stop pid: 0x%04x, feedtype: %d\n", dvbdmxfeed->pid, dvbdmxfeed->type);
+ return dibusb_ctrl_feed(dvbdmxfeed,0);
+}
+
+int dibusb_dvb_init(struct usb_dibusb *dib)
+{
+ int ret;
+
+ urb_compl_count = 0;
+
+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,4)
+ if ((ret = dvb_register_adapter(&dib->adapter, DRIVER_DESC)) < 0) {
+#else
+ if ((ret = dvb_register_adapter(&dib->adapter, DRIVER_DESC ,
+ THIS_MODULE)) < 0) {
+#endif
+ deb_info("dvb_register_adapter failed: error %d", ret);
+ goto err;
+ }
+ dib->adapter->priv = dib;
+
+/* i2c is done in dibusb_i2c_init */
+
+ dib->demux.dmx.capabilities = DMX_TS_FILTERING | DMX_SECTION_FILTERING;
+
+ dib->demux.priv = (void *)dib;
+ /* get pidcount from demod */
+ dib->demux.feednum = dib->demux.filternum = 255;
+ dib->demux.start_feed = dibusb_start_feed;
+ dib->demux.stop_feed = dibusb_stop_feed;
+ dib->demux.write_to_decoder = NULL;
+ if ((ret = dvb_dmx_init(&dib->demux)) < 0) {
+ err("dvb_dmx_init failed: error %d",ret);
+ goto err_dmx;
+ }
+
+ dib->dmxdev.filternum = dib->demux.filternum;
+ dib->dmxdev.demux = &dib->demux.dmx;
+ dib->dmxdev.capabilities = 0;
+ if ((ret = dvb_dmxdev_init(&dib->dmxdev, dib->adapter)) < 0) {
+ err("dvb_dmxdev_init failed: error %d",ret);
+ goto err_dmx_dev;
+ }
+
+ dvb_net_init(dib->adapter, &dib->dvb_net, &dib->demux.dmx);
+
+ goto success;
+err_dmx_dev:
+ dvb_dmx_release(&dib->demux);
+err_dmx:
+ dvb_unregister_adapter(dib->adapter);
+err:
+ return ret;
+success:
+ dib->init_state |= DIBUSB_STATE_DVB;
+ return 0;
+}
+
+int dibusb_dvb_exit(struct usb_dibusb *dib)
+{
+ if (dib->init_state & DIBUSB_STATE_DVB) {
+ dib->init_state &= ~DIBUSB_STATE_DVB;
+ deb_info("unregistering DVB part\n");
+ dvb_net_release(&dib->dvb_net);
+ dib->demux.dmx.close(&dib->demux.dmx);
+ dvb_dmxdev_release(&dib->dmxdev);
+ dvb_dmx_release(&dib->demux);
+ dvb_unregister_adapter(dib->adapter);
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * dvb-dibusb-fe-i2c.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for attaching, initializing of an appropriate
+ * demodulator/frontend. I2C-stuff is also located here.
+ *
+ */
+#include "dvb-dibusb.h"
+
+#include <linux/usb.h>
+
+int dibusb_i2c_msg(struct usb_dibusb *dib, u8 addr,
+ u8 *wbuf, u16 wlen, u8 *rbuf, u16 rlen)
+{
+ u8 sndbuf[wlen+4]; /* lead(1) devaddr,direction(1) addr(2) data(wlen) (len(2) (when reading)) */
+ /* write only ? */
+ int wo = (rbuf == NULL || rlen == 0),
+ len = 2 + wlen + (wo ? 0 : 2);
+
+ sndbuf[0] = wo ? DIBUSB_REQ_I2C_WRITE : DIBUSB_REQ_I2C_READ;
+ sndbuf[1] = (addr << 1) | (wo ? 0 : 1);
+
+ memcpy(&sndbuf[2],wbuf,wlen);
+
+ if (!wo) {
+ sndbuf[wlen+2] = (rlen >> 8) & 0xff;
+ sndbuf[wlen+3] = rlen & 0xff;
+ }
+
+ return dibusb_readwrite_usb(dib,sndbuf,len,rbuf,rlen);
+}
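+
+/* Hypothetical example of the resulting message: reading two bytes from
+ * register 0x00 of a device at i2c address 0x1e (wbuf = { 0x00 }, wlen = 1,
+ * rlen = 2) gives len = 5 and
+ * sndbuf = { DIBUSB_REQ_I2C_READ, 0x3d, 0x00, 0x00, 0x02 },
+ * i.e. the request byte, (0x1e << 1) | 1, the register, and rlen big endian.
+ */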
+
+/*
+ * I2C master xfer function
+ */
+static int dibusb_i2c_xfer(struct i2c_adapter *adap,struct i2c_msg msg[],int num)
+{
+ struct usb_dibusb *dib = i2c_get_adapdata(adap);
+ int i;
+
+ if (down_interruptible(&dib->i2c_sem) < 0)
+ return -EAGAIN;
+
+ if (num > 2)
+ warn("more than 2 i2c messages at a time is not handled yet. TODO.");
+
+ for (i = 0; i < num; i++) {
+ /* write/read request */
+ if (i+1 < num && (msg[i+1].flags & I2C_M_RD)) {
+ if (dibusb_i2c_msg(dib, msg[i].addr, msg[i].buf,msg[i].len,
+ msg[i+1].buf,msg[i+1].len) < 0)
+ break;
+ i++;
+ } else
+ if (dibusb_i2c_msg(dib, msg[i].addr, msg[i].buf,msg[i].len,NULL,0) < 0)
+ break;
+ }
+
+ up(&dib->i2c_sem);
+ return i;
+}
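+
+/* A typical register read therefore arrives as a write/read pair: msg[0]
+ * carries the register address, msg[1] has I2C_M_RD set and receives the
+ * data; the loop above folds such a pair into a single dibusb_i2c_msg()
+ * transaction instead of issuing two separate USB requests.
+ */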
+
+static u32 dibusb_i2c_func(struct i2c_adapter *adapter)
+{
+ return I2C_FUNC_I2C;
+}
+
+static struct i2c_algorithm dibusb_algo = {
+ .name = "DiBcom USB i2c algorithm",
+ .id = I2C_ALGO_BIT,
+ .master_xfer = dibusb_i2c_xfer,
+ .functionality = dibusb_i2c_func,
+};
+
+static int dibusb_general_demod_init(struct dvb_frontend *fe);
+static u8 dibusb_general_pll_addr(struct dvb_frontend *fe);
+static int dibusb_general_pll_init(struct dvb_frontend *fe, u8 pll_buf[5]);
+static int dibusb_general_pll_set(struct dvb_frontend *fe,
+ struct dvb_frontend_parameters* params, u8 pll_buf[5]);
+
+static struct mt352_config mt352_hanftek_umt_010_config = {
+ .demod_address = 0x1e,
+ .demod_init = dibusb_general_demod_init,
+ .pll_set = dibusb_general_pll_set,
+};
+
+static int dibusb_tuner_quirk(struct usb_dibusb *dib)
+{
+ switch (dib->dibdev->dev_cl->id) {
+	case DIBUSB1_1: /* some of these devices have the ENV77H11D5 and some the THOMSON CABLE */
+	case DIBUSB1_1_AN2235: { /* actually it's this device, but in the warm state they are indistinguishable */
+ struct dibusb_tuner *t;
+ u8 b[2] = { 0,0 } ,b2[1];
+ struct i2c_msg msg[2] = {
+ { .flags = 0, .buf = b, .len = 2 },
+ { .flags = I2C_M_RD, .buf = b2, .len = 1},
+ };
+
+ t = &dibusb_tuner[DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5];
+
+ msg[0].addr = msg[1].addr = t->pll_addr;
+
+ if (dib->xfer_ops.tuner_pass_ctrl != NULL)
+ dib->xfer_ops.tuner_pass_ctrl(dib->fe,1,t->pll_addr);
+ dibusb_i2c_xfer(&dib->i2c_adap,msg,2);
+ if (dib->xfer_ops.tuner_pass_ctrl != NULL)
+ dib->xfer_ops.tuner_pass_ctrl(dib->fe,0,t->pll_addr);
+
+ if (b2[0] == 0xfe)
+				info("this device has the Thomson Cable onboard, which is the default.");
+ else {
+ dib->tuner = t;
+ info("this device has the Panasonic ENV77H11D5 onboard.");
+ }
+ break;
+ }
+ default:
+ break;
+ }
+ return 0;
+}
+
+/* There is an ugly pid_filter in the firmware of the umt devices; it is accessible
+ * at i2c address 0x8. It is not known how to deactivate it or how many rows it has.
+ */
+static int dibusb_umt_pid_control(struct dvb_frontend *fe, int index, int pid, int onoff)
+{
+ struct usb_dibusb *dib = fe->dvb->priv;
+ u8 b[3];
+ b[0] = index;
+ if (onoff) {
+ b[1] = (pid >> 8) & 0xff;
+ b[2] = pid & 0xff;
+ } else {
+ b[1] = 0;
+ b[2] = 0;
+ }
+ dibusb_i2c_msg(dib, 0x8, b, 3, NULL,0);
+ dibusb_set_streaming_mode(dib,0);
+ dibusb_set_streaming_mode(dib,1);
+ return 0;
+}
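+
+/*
+ * For example, enabling pid 0x100 in row 2 of that firmware filter writes
+ * { 0x02, 0x01, 0x00 } to i2c address 0x8 and then restarts streaming.
+ */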
+
+int dibusb_fe_init(struct usb_dibusb* dib)
+{
+ struct dib3000_config demod_cfg;
+ int i;
+
+ if (dib->init_state & DIBUSB_STATE_I2C) {
+ for (i = 0; i < sizeof(dib->dibdev->dev_cl->demod->i2c_addrs) / sizeof(unsigned char) &&
+ dib->dibdev->dev_cl->demod->i2c_addrs[i] != 0; i++) {
+
+ demod_cfg.demod_address = dib->dibdev->dev_cl->demod->i2c_addrs[i];
+ demod_cfg.pll_addr = dibusb_general_pll_addr;
+ demod_cfg.pll_set = dibusb_general_pll_set;
+ demod_cfg.pll_init = dibusb_general_pll_init;
+
+ switch (dib->dibdev->dev_cl->demod->id) {
+ case DIBUSB_DIB3000MB:
+ dib->fe = dib3000mb_attach(&demod_cfg,&dib->i2c_adap,&dib->xfer_ops);
+ break;
+ case DIBUSB_DIB3000MC:
+ dib->fe = dib3000mc_attach(&demod_cfg,&dib->i2c_adap,&dib->xfer_ops);
+ break;
+ case DIBUSB_MT352:
+ mt352_hanftek_umt_010_config.demod_address = dib->dibdev->dev_cl->demod->i2c_addrs[i];
+ dib->fe = mt352_attach(&mt352_hanftek_umt_010_config, &dib->i2c_adap);
+ dib->xfer_ops.pid_ctrl = dibusb_umt_pid_control;
+ break;
+ }
+ if (dib->fe != NULL) {
+ info("found demodulator at i2c address 0x%x",dib->dibdev->dev_cl->demod->i2c_addrs[i]);
+ break;
+ }
+ }
+		if (dib->fe != NULL) {
+			/* only wrap the ops when a frontend was actually attached */
+			if (dib->fe->ops->sleep != NULL)
+				dib->fe_sleep = dib->fe->ops->sleep;
+			dib->fe->ops->sleep = dibusb_hw_sleep;
+
+			if (dib->fe->ops->init != NULL)
+				dib->fe_init = dib->fe->ops->init;
+			dib->fe->ops->init = dibusb_hw_wakeup;
+
+			/* set the default tuner */
+			dib->tuner = dib->dibdev->dev_cl->tuner;
+
+			/* check which tuner is mounted on this device, in case this is unsure */
+			dibusb_tuner_quirk(dib);
+		}
+ }
+ if (dib->fe == NULL) {
+ err("A frontend driver was not found for device '%s'.",
+ dib->dibdev->name);
+ return -ENODEV;
+ } else {
+ if (dvb_register_frontend(dib->adapter, dib->fe)) {
+ err("Frontend registration failed.");
+ if (dib->fe->ops->release)
+ dib->fe->ops->release(dib->fe);
+ dib->fe = NULL;
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+int dibusb_fe_exit(struct usb_dibusb *dib)
+{
+ if (dib->fe != NULL)
+ dvb_unregister_frontend(dib->fe);
+ return 0;
+}
+
+int dibusb_i2c_init(struct usb_dibusb *dib)
+{
+ int ret = 0;
+
+ dib->adapter->priv = dib;
+
+ strncpy(dib->i2c_adap.name,dib->dibdev->name,I2C_NAME_SIZE);
+#ifdef I2C_ADAP_CLASS_TV_DIGITAL
+	dib->i2c_adap.class = I2C_ADAP_CLASS_TV_DIGITAL;
+#else
+	dib->i2c_adap.class = I2C_CLASS_TV_DIGITAL;
+#endif
+ dib->i2c_adap.algo = &dibusb_algo;
+ dib->i2c_adap.algo_data = NULL;
+ dib->i2c_adap.id = I2C_ALGO_BIT;
+
+ i2c_set_adapdata(&dib->i2c_adap, dib);
+
+ if ((ret = i2c_add_adapter(&dib->i2c_adap)) < 0)
+ err("could not add i2c adapter");
+
+ dib->init_state |= DIBUSB_STATE_I2C;
+
+ return ret;
+}
+
+int dibusb_i2c_exit(struct usb_dibusb *dib)
+{
+ if (dib->init_state & DIBUSB_STATE_I2C)
+ i2c_del_adapter(&dib->i2c_adap);
+ dib->init_state &= ~DIBUSB_STATE_I2C;
+ return 0;
+}
+
+
+/* PLL stuff, may be removed soon (thanks to Gerd/Andrew in advance) */
+static int thomson_cable_eu_pll_set(struct dvb_frontend_parameters *fep, u8 pllbuf[4])
+{
+ u32 tfreq = (fep->frequency + 36125000) / 62500;
+ int vu,p0,p1,p2;
+
+ if (fep->frequency > 403250000)
+ vu = 1, p2 = 1, p1 = 0, p0 = 1;
+ else if (fep->frequency > 115750000)
+ vu = 0, p2 = 1, p1 = 1, p0 = 0;
+ else if (fep->frequency > 44250000)
+ vu = 0, p2 = 0, p1 = 1, p0 = 1;
+ else
+ return -EINVAL;
+
+ pllbuf[0] = (tfreq >> 8) & 0x7f;
+ pllbuf[1] = tfreq & 0xff;
+ pllbuf[2] = 0x8e;
+ pllbuf[3] = (vu << 7) | (p2 << 2) | (p1 << 1) | p0;
+ return 0;
+}
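+
+/*
+ * Worked example (assumed channel, for illustration): at 482 MHz,
+ * tfreq = (482000000 + 36125000) / 62500 = 8290 = 0x2062, and since the
+ * frequency is above 403.25 MHz vu=1, p2=1, p1=0, p0=1, so
+ * pllbuf = { 0x20, 0x62, 0x8e, 0x85 }.
+ */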
+
+static int panasonic_cofdm_env57h1xd5_pll_set(struct dvb_frontend_parameters *fep, u8 pllbuf[4])
+{
+ u32 freq = fep->frequency;
+ u32 tfreq = ((freq + 36125000)*6 + 500000) / 1000000;
+ u8 TA, T210, R210, ctrl1, cp210, p4321;
+ if (freq > 858000000) {
+ err("frequency cannot be larger than 858 MHz.");
+ return -EINVAL;
+ }
+
+	// control data 1 : 1 | T/A=1 | T2,T1,T0 = 0,0,0 | R2,R1,R0 = 0,1,0
+ TA = 1;
+ T210 = 0;
+ R210 = 0x2;
+ ctrl1 = (1 << 7) | (TA << 6) | (T210 << 3) | R210;
+
+// ******** CHARGE PUMP CONFIG vs RF FREQUENCIES *****************
+ if (freq < 470000000)
+ cp210 = 2; // VHF Low and High band ch E12 to E4 to E12
+ else if (freq < 526000000)
+ cp210 = 4; // UHF band Ch E21 to E27
+ else // if (freq < 862000000)
+ cp210 = 5; // UHF band ch E28 to E69
+
+//********************* BW select *******************************
+ if (freq < 153000000)
+ p4321 = 1; // BW selected for VHF low
+ else if (freq < 470000000)
+ p4321 = 2; // BW selected for VHF high E5 to E12
+ else // if (freq < 862000000)
+ p4321 = 4; // BW selection for UHF E21 to E69
+
+ pllbuf[0] = (tfreq >> 8) & 0xff;
+ pllbuf[1] = (tfreq >> 0) & 0xff;
+ pllbuf[2] = 0xff & ctrl1;
+ pllbuf[3] = (cp210 << 5) | (p4321);
+
+ return 0;
+}
+
+/*
+ * 7 6 5 4 3 2 1 0
+ * Address Byte 1 1 0 0 0 MA1 MA0 R/~W=0
+ *
+ * Program divider byte 1 0 n14 n13 n12 n11 n10 n9 n8
+ * Program divider byte 2 n7 n6 n5 n4 n3 n2 n1 n0
+ *
+ * Control byte 1 1 T/A=1 T2 T1 T0 R2 R1 R0
+ * 1 T/A=0 0 0 ATC AL2 AL1 AL0
+ *
+ * Control byte 2 CP2 CP1 CP0 BS5 BS4 BS3 BS2 BS1
+ *
+ * MA0/1 = programmable address bits
+ * R/~W = read/write bit (0 for writing)
+ * N14-0 = programmable LO frequency
+ *
+ * T/A = test AGC bit (0 = next 6 bits AGC setting,
+ * 1 = next 6 bits test and reference divider ratio settings)
+ * T2-0 = test bits
+ * R2-0 = reference divider ratio and programmable frequency step
+ * ATC = AGC current setting and time constant
+ * ATC = 0: AGC current = 220nA, AGC time constant = 2s
+ * ATC = 1: AGC current = 9uA, AGC time constant = 50ms
+ * AL2-0 = AGC take-over point bits
+ * CP2-0 = charge pump current
+ * BS5-1 = PMOS ports control bits;
+ * BSn = 0 corresponding port is off, high-impedance state (at power-on)
+ * BSn = 1 corresponding port is on
+ */
+
+
+static int panasonic_cofdm_env77h11d5_tda6650_init(struct dvb_frontend *fe, u8 pllbuf[4])
+{
+ pllbuf[0] = 0x0b;
+ pllbuf[1] = 0xf5;
+ pllbuf[2] = 0x85;
+ pllbuf[3] = 0xab;
+ return 0;
+}
+
+static int panasonic_cofdm_env77h11d5_tda6650_set (struct dvb_frontend_parameters *fep,u8 pllbuf[4])
+{
+ int tuner_frequency = 0;
+ u8 band, cp, filter;
+
+ // determine charge pump
+ tuner_frequency = fep->frequency + 36166000;
+ if (tuner_frequency < 87000000)
+ return -EINVAL;
+ else if (tuner_frequency < 130000000)
+ cp = 3;
+ else if (tuner_frequency < 160000000)
+ cp = 5;
+ else if (tuner_frequency < 200000000)
+ cp = 6;
+ else if (tuner_frequency < 290000000)
+ cp = 3;
+ else if (tuner_frequency < 420000000)
+ cp = 5;
+ else if (tuner_frequency < 480000000)
+ cp = 6;
+ else if (tuner_frequency < 620000000)
+ cp = 3;
+ else if (tuner_frequency < 830000000)
+ cp = 5;
+ else if (tuner_frequency < 895000000)
+ cp = 7;
+ else
+ return -EINVAL;
+
+ // determine band
+ if (fep->frequency < 49000000)
+ return -EINVAL;
+ else if (fep->frequency < 161000000)
+ band = 1;
+ else if (fep->frequency < 444000000)
+ band = 2;
+ else if (fep->frequency < 861000000)
+ band = 4;
+ else
+ return -EINVAL;
+
+ // setup PLL filter
+ switch (fep->u.ofdm.bandwidth) {
+ case BANDWIDTH_6_MHZ:
+ case BANDWIDTH_7_MHZ:
+ filter = 0;
+ break;
+ case BANDWIDTH_8_MHZ:
+ filter = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ // calculate divisor
+ // ((36166000+((1000000/6)/2)) + Finput)/(1000000/6)
+ tuner_frequency = (((fep->frequency / 1000) * 6) + 217496) / 1000;
+
+ // setup tuner buffer
+ pllbuf[0] = (tuner_frequency >> 8) & 0x7f;
+ pllbuf[1] = tuner_frequency & 0xff;
+ pllbuf[2] = 0xca;
+ pllbuf[3] = (cp << 5) | (filter << 3) | band;
+ return 0;
+}
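+
+/*
+ * Worked example (assumed channel, for illustration): at 578 MHz with an
+ * 8 MHz bandwidth, cp = 3 (578 MHz + 36.166 MHz is below 620 MHz),
+ * band = 4, filter = 1 and the divisor is
+ * (578000 * 6 + 217496) / 1000 = 3685 = 0x0e65, so
+ * pllbuf = { 0x0e, 0x65, 0xca, 0x6c }.
+ */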
+
+/*
+ * 7 6 5 4 3 2 1 0
+ * Address Byte 1 1 0 0 0 MA1 MA0 R/~W=0
+ *
+ * Program divider byte 1 0 n14 n13 n12 n11 n10 n9 n8
+ * Program divider byte 2 n7 n6 n5 n4 n3 n2 n1 n0
+ *
+ * Control byte 1 CP T2 T1 T0 RSA RSB OS
+ *
+ * Band Switch byte X X X P4 P3 P2 P1 P0
+ *
+ * Auxiliary byte ATC AL2 AL1 AL0 0 0 0 0
+ *
+ * Address: MA1 MA0 Address
+ * 0 0 c0
+ * 0 1 c2 (always valid)
+ * 1 0 c4
+ * 1 1 c6
+ *
+ *
+ *
+ */
+
+static int lg_tdtp_e102p_tua6034(struct dvb_frontend_parameters* fep, u8 pllbuf[4])
+{
+ u32 div;
+ u8 p3210, p4;
+
+#define TUNER_MUL 62500
+
+ div = (fep->frequency + 36125000 + TUNER_MUL / 2) / TUNER_MUL;
+
+ if (fep->frequency < 174500000)
+ p3210 = 1; // not supported by the tdtp_e102p
+ else if (fep->frequency < 230000000) // VHF
+ p3210 = 2;
+ else
+ p3210 = 4;
+
+ if (fep->u.ofdm.bandwidth == BANDWIDTH_7_MHZ)
+ p4 = 0;
+ else
+ p4 = 1;
+
+ pllbuf[0] = (div >> 8) & 0x7f;
+ pllbuf[1] = div & 0xff;
+ pllbuf[2] = 0xce;
+ pllbuf[3] = (p4 << 4) | p3210;
+
+ return 0;
+}
+
+static int lg_tdtp_e102p_mt352_demod_init(struct dvb_frontend *fe)
+{
+ static u8 mt352_clock_config[] = { 0x89, 0xb0, 0x2d };
+ static u8 mt352_reset[] = { 0x50, 0x80 };
+ static u8 mt352_mclk_ratio[] = { 0x8b, 0x00 };
+ static u8 mt352_adc_ctl_1_cfg[] = { 0x8E, 0x40 };
+ static u8 mt352_agc_cfg[] = { 0x67, 0x14, 0x22 };
+ static u8 mt352_sec_agc_cfg[] = { 0x69, 0x00, 0xff, 0xff, 0x00, 0xff, 0x00, 0x40, 0x40 };
+
+ static u8 mt352_unk [] = { 0xb5, 0x7a };
+
+ static u8 mt352_acq_ctl[] = { 0x53, 0x5f };
+ static u8 mt352_input_freq_1[] = { 0x56, 0xf1, 0x05 };
+
+// static u8 mt352_capt_range_cfg[] = { 0x75, 0x32 };
+
+ mt352_write(fe, mt352_clock_config, sizeof(mt352_clock_config));
+ udelay(2000);
+ mt352_write(fe, mt352_reset, sizeof(mt352_reset));
+ mt352_write(fe, mt352_mclk_ratio, sizeof(mt352_mclk_ratio));
+
+ mt352_write(fe, mt352_adc_ctl_1_cfg, sizeof(mt352_adc_ctl_1_cfg));
+ mt352_write(fe, mt352_agc_cfg, sizeof(mt352_agc_cfg));
+
+ mt352_write(fe, mt352_sec_agc_cfg, sizeof(mt352_sec_agc_cfg));
+
+ mt352_write(fe, mt352_unk, sizeof(mt352_unk));
+
+ mt352_write(fe, mt352_acq_ctl, sizeof(mt352_acq_ctl));
+ mt352_write(fe, mt352_input_freq_1, sizeof(mt352_input_freq_1));
+
+// mt352_write(fe, mt352_capt_range_cfg, sizeof(mt352_capt_range_cfg));
+
+ return 0;
+}
+
+static int dibusb_general_demod_init(struct dvb_frontend *fe)
+{
+ struct usb_dibusb* dib = (struct usb_dibusb*) fe->dvb->priv;
+ switch (dib->dibdev->dev_cl->id) {
+ case UMT2_0:
+ return lg_tdtp_e102p_mt352_demod_init(fe);
+ default: /* other device classes do not have device specific demod inits */
+ break;
+ }
+ return 0;
+}
+
+static u8 dibusb_general_pll_addr(struct dvb_frontend *fe)
+{
+ struct usb_dibusb* dib = (struct usb_dibusb*) fe->dvb->priv;
+ return dib->tuner->pll_addr;
+}
+
+static int dibusb_pll_i2c_helper(struct usb_dibusb *dib, u8 pll_buf[5], u8 buf[4])
+{
+ if (pll_buf == NULL) {
+ struct i2c_msg msg = {
+ .addr = dib->tuner->pll_addr,
+ .flags = 0,
+ .buf = buf,
+			.len = 4 /* buf decays to a pointer here, so sizeof(buf) would be wrong */
+ };
+ if (i2c_transfer (&dib->i2c_adap, &msg, 1) != 1)
+ return -EIO;
+ msleep(1);
+ } else {
+ pll_buf[0] = dib->tuner->pll_addr << 1;
+ memcpy(&pll_buf[1],buf,4);
+ }
+
+ return 0;
+}
+
+static int dibusb_general_pll_init(struct dvb_frontend *fe,
+ u8 pll_buf[5])
+{
+ struct usb_dibusb* dib = (struct usb_dibusb*) fe->dvb->priv;
+ u8 buf[4];
+ int ret=0;
+ switch (dib->tuner->id) {
+ case DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5:
+ ret = panasonic_cofdm_env77h11d5_tda6650_init(fe,buf);
+ break;
+ default:
+ break;
+ }
+
+ if (ret)
+ return ret;
+
+ return dibusb_pll_i2c_helper(dib,pll_buf,buf);
+}
+
+static int dibusb_general_pll_set(struct dvb_frontend *fe,
+ struct dvb_frontend_parameters *fep, u8 pll_buf[5])
+{
+ struct usb_dibusb* dib = (struct usb_dibusb*) fe->dvb->priv;
+ u8 buf[4];
+ int ret=0;
+
+ switch (dib->tuner->id) {
+ case DIBUSB_TUNER_CABLE_THOMSON:
+ ret = thomson_cable_eu_pll_set(fep, buf);
+ break;
+ case DIBUSB_TUNER_COFDM_PANASONIC_ENV57H1XD5:
+ ret = panasonic_cofdm_env57h1xd5_pll_set(fep, buf);
+ break;
+ case DIBUSB_TUNER_CABLE_LG_TDTP_E102P:
+ ret = lg_tdtp_e102p_tua6034(fep, buf);
+ break;
+ case DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5:
+ ret = panasonic_cofdm_env77h11d5_tda6650_set(fep,buf);
+ break;
+ default:
+			warn("no pll programming routine found for tuner %d.",dib->tuner->id);
+ ret = -ENODEV;
+ break;
+ }
+
+ if (ret)
+ return ret;
+
+ return dibusb_pll_i2c_helper(dib,pll_buf,buf);
+}
--- /dev/null
+/*
+ * dvb-dibusb-firmware.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for downloading the firmware to the device.
+ */
+#include "dvb-dibusb.h"
+
+#include <linux/firmware.h>
+#include <linux/usb.h>
+
+/*
+ * load a firmware packet to the device
+ */
+static int dibusb_writemem(struct usb_device *udev,u16 addr,u8 *data, u8 len)
+{
+ return usb_control_msg(udev, usb_sndctrlpipe(udev,0),
+ 0xa0, USB_TYPE_VENDOR, addr, 0x00, data, len, 5*HZ);
+}
+
+int dibusb_loadfirmware(struct usb_device *udev, struct dibusb_usb_device *dibdev)
+{
+ const struct firmware *fw = NULL;
+ u16 addr;
+ u8 *b,*p;
+ int ret = 0,i;
+
+ if ((ret = request_firmware(&fw, dibdev->dev_cl->firmware, &udev->dev)) != 0) {
+ err("did not find a valid firmware file. (%s) "
+ "Please see linux/Documentation/dvb/ for more details on firmware-problems.",
+ dibdev->dev_cl->firmware);
+ return ret;
+ }
+
+ p = kmalloc(fw->size,GFP_KERNEL);
+ if (p != NULL) {
+ u8 reset;
+ /*
+		 * fw->data cannot be used as the buffer for
+		 * usb_control_msg; a new buffer has to be
+		 * created
+ */
+ memcpy(p,fw->data,fw->size);
+
+ /* stop the CPU */
+ reset = 1;
+ if ((ret = dibusb_writemem(udev,dibdev->dev_cl->usb_ctrl->cpu_cs_register,&reset,1)) != 1)
+ err("could not stop the USB controller CPU.");
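+		/*
+		 * The image appears to consist of Cypress-style hex records:
+		 * length(1) address(2) type(1) data(length) checksum(1); a
+		 * record with a non-zero type byte ends the download.
+		 */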
+		for (i = 0; i < fw->size && p[i+3] == 0; ) {
+ b = (u8 *) &p[i];
+ addr = *((u16 *) &b[1]);
+
+ ret = dibusb_writemem(udev,addr,&b[4],b[0]);
+
+ if (ret != b[0]) {
+ err("error while transferring firmware "
+ "(transferred size: %d, block size: %d)",
+ ret,b[0]);
+ ret = -EINVAL;
+ break;
+ }
+ i += 5 + b[0];
+ }
+ /* length in ret */
+ if (ret > 0)
+ ret = 0;
+ /* restart the CPU */
+ reset = 0;
+ if (ret || dibusb_writemem(udev,dibdev->dev_cl->usb_ctrl->cpu_cs_register,&reset,1) != 1) {
+ err("could not restart the USB controller CPU.");
+ ret = -EINVAL;
+ }
+
+ kfree(p);
+ } else {
+ ret = -ENOMEM;
+ }
+ release_firmware(fw);
+
+ return ret;
+}
--- /dev/null
+/*
+ * dvb-dibusb-pid.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for initializing and handling the internal
+ * pid-list. This pid-list mirrors the information currently stored in the
+ * devices pid-list.
+ */
+#include "dvb-dibusb.h"
+
+int dibusb_pid_list_init(struct usb_dibusb *dib)
+{
+ int i;
+ dib->pid_list = kmalloc(sizeof(struct dibusb_pid) * dib->dibdev->dev_cl->demod->pid_filter_count,GFP_KERNEL);
+ if (dib->pid_list == NULL)
+ return -ENOMEM;
+
+ deb_xfer("initializing %d pids for the pid_list.\n",dib->dibdev->dev_cl->demod->pid_filter_count);
+
+ dib->pid_list_lock = SPIN_LOCK_UNLOCKED;
+ memset(dib->pid_list,0,dib->dibdev->dev_cl->demod->pid_filter_count*(sizeof(struct dibusb_pid)));
+ for (i=0; i < dib->dibdev->dev_cl->demod->pid_filter_count; i++) {
+ dib->pid_list[i].index = i;
+ dib->pid_list[i].pid = 0;
+ dib->pid_list[i].active = 0;
+ }
+
+ dib->init_state |= DIBUSB_STATE_PIDLIST;
+ return 0;
+}
+
+void dibusb_pid_list_exit(struct usb_dibusb *dib)
+{
+ if (dib->init_state & DIBUSB_STATE_PIDLIST)
+ kfree(dib->pid_list);
+ dib->init_state &= ~DIBUSB_STATE_PIDLIST;
+}
+
+/* fetch a pid from pid_list and set it on or off */
+int dibusb_ctrl_pid(struct usb_dibusb *dib, struct dvb_demux_feed *dvbdmxfeed, int onoff)
+{
+ int i,ret = -1;
+ unsigned long flags;
+ u16 pid = dvbdmxfeed->pid;
+
+	if (onoff) {
+		spin_lock_irqsave(&dib->pid_list_lock,flags);
+		for (i=0; i < dib->dibdev->dev_cl->demod->pid_filter_count; i++)
+			if (!dib->pid_list[i].active) {
+				dib->pid_list[i].pid = pid;
+				dib->pid_list[i].active = 1;
+				ret = i;
+				break;
+			}
+		/* guard against a full filter list (ret stays -1 in that case) */
+		if (ret >= 0)
+			dvbdmxfeed->priv = &dib->pid_list[ret];
+		spin_unlock_irqrestore(&dib->pid_list_lock,flags);
+
+		if (ret >= 0 && dib->xfer_ops.pid_ctrl != NULL)
+			dib->xfer_ops.pid_ctrl(dib->fe,dib->pid_list[ret].index,dib->pid_list[ret].pid,1);
+ } else {
+ struct dibusb_pid *dpid = dvbdmxfeed->priv;
+
+ if (dib->xfer_ops.pid_ctrl != NULL)
+ dib->xfer_ops.pid_ctrl(dib->fe,dpid->index,0,0);
+
+ dpid->pid = 0;
+ dpid->active = 0;
+ ret = dpid->index;
+ }
+
+	/* ret holds the pid_list index that was used (or freed) */
+ deb_info("setting pid: %5d %04x at index %d '%s'\n",pid,pid,ret,onoff ? "on" : "off");
+
+ return ret;
+}
+
--- /dev/null
+/*
+ * dvb-dibusb-remote.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for handling the event device on the software
+ * side and the remote control on the hardware side.
+ */
+#include "dvb-dibusb.h"
+
+/* Table to map raw key codes to key events. This should not be hard-wired
+ into the kernel. */
+static const struct { u8 c0, c1, c2; uint32_t key; } rc_keys [] =
+{
+	/* Key codes for the little Artec T1/Twinhan/HAMA remote. */
+ { 0x00, 0xff, 0x16, KEY_POWER },
+ { 0x00, 0xff, 0x10, KEY_MUTE },
+ { 0x00, 0xff, 0x03, KEY_1 },
+ { 0x00, 0xff, 0x01, KEY_2 },
+ { 0x00, 0xff, 0x06, KEY_3 },
+ { 0x00, 0xff, 0x09, KEY_4 },
+ { 0x00, 0xff, 0x1d, KEY_5 },
+ { 0x00, 0xff, 0x1f, KEY_6 },
+ { 0x00, 0xff, 0x0d, KEY_7 },
+ { 0x00, 0xff, 0x19, KEY_8 },
+ { 0x00, 0xff, 0x1b, KEY_9 },
+ { 0x00, 0xff, 0x15, KEY_0 },
+ { 0x00, 0xff, 0x05, KEY_CHANNELUP },
+ { 0x00, 0xff, 0x02, KEY_CHANNELDOWN },
+ { 0x00, 0xff, 0x1e, KEY_VOLUMEUP },
+ { 0x00, 0xff, 0x0a, KEY_VOLUMEDOWN },
+ { 0x00, 0xff, 0x11, KEY_RECORD },
+ { 0x00, 0xff, 0x17, KEY_FAVORITES }, /* Heart symbol - Channel list. */
+ { 0x00, 0xff, 0x14, KEY_PLAY },
+ { 0x00, 0xff, 0x1a, KEY_STOP },
+ { 0x00, 0xff, 0x40, KEY_REWIND },
+ { 0x00, 0xff, 0x12, KEY_FASTFORWARD },
+ { 0x00, 0xff, 0x0e, KEY_PREVIOUS }, /* Recall - Previous channel. */
+ { 0x00, 0xff, 0x4c, KEY_PAUSE },
+ { 0x00, 0xff, 0x4d, KEY_SCREEN }, /* Full screen mode. */
+ { 0x00, 0xff, 0x54, KEY_AUDIO }, /* MTS - Switch to secondary audio. */
+	/* additional keys of the TwinHan VisionPlus, which the Artec seemingly does not have */
+ { 0x00, 0xff, 0x0c, KEY_CANCEL }, /* Cancel */
+ { 0x00, 0xff, 0x1c, KEY_EPG }, /* EPG */
+ { 0x00, 0xff, 0x00, KEY_TAB }, /* Tab */
+ { 0x00, 0xff, 0x48, KEY_INFO }, /* Preview */
+ { 0x00, 0xff, 0x04, KEY_LIST }, /* RecordList */
+ { 0x00, 0xff, 0x0f, KEY_TEXT }, /* Teletext */
+ /* Key codes for the KWorld/ADSTech/JetWay remote. */
+ { 0x86, 0x6b, 0x12, KEY_POWER },
+ { 0x86, 0x6b, 0x0f, KEY_SELECT }, /* source */
+ { 0x86, 0x6b, 0x0c, KEY_UNKNOWN }, /* scan */
+ { 0x86, 0x6b, 0x0b, KEY_EPG },
+ { 0x86, 0x6b, 0x10, KEY_MUTE },
+ { 0x86, 0x6b, 0x01, KEY_1 },
+ { 0x86, 0x6b, 0x02, KEY_2 },
+ { 0x86, 0x6b, 0x03, KEY_3 },
+ { 0x86, 0x6b, 0x04, KEY_4 },
+ { 0x86, 0x6b, 0x05, KEY_5 },
+ { 0x86, 0x6b, 0x06, KEY_6 },
+ { 0x86, 0x6b, 0x07, KEY_7 },
+ { 0x86, 0x6b, 0x08, KEY_8 },
+ { 0x86, 0x6b, 0x09, KEY_9 },
+ { 0x86, 0x6b, 0x0a, KEY_0 },
+ { 0x86, 0x6b, 0x18, KEY_ZOOM },
+ { 0x86, 0x6b, 0x1c, KEY_UNKNOWN }, /* preview */
+ { 0x86, 0x6b, 0x13, KEY_UNKNOWN }, /* snap */
+ { 0x86, 0x6b, 0x00, KEY_UNDO },
+ { 0x86, 0x6b, 0x1d, KEY_RECORD },
+ { 0x86, 0x6b, 0x0d, KEY_STOP },
+ { 0x86, 0x6b, 0x0e, KEY_PAUSE },
+ { 0x86, 0x6b, 0x16, KEY_PLAY },
+ { 0x86, 0x6b, 0x11, KEY_BACK },
+ { 0x86, 0x6b, 0x19, KEY_FORWARD },
+ { 0x86, 0x6b, 0x14, KEY_UNKNOWN }, /* pip */
+ { 0x86, 0x6b, 0x15, KEY_ESC },
+ { 0x86, 0x6b, 0x1a, KEY_UP },
+ { 0x86, 0x6b, 0x1e, KEY_DOWN },
+ { 0x86, 0x6b, 0x1f, KEY_LEFT },
+ { 0x86, 0x6b, 0x1b, KEY_RIGHT },
+};
+
+/*
+ * Read the remote control and feed the appropriate event.
+ * The NEC protocol is used by these remote controls.
+ */
+static int dibusb_read_remote_control(struct usb_dibusb *dib)
+{
+ u8 b[1] = { DIBUSB_REQ_POLL_REMOTE }, rb[5];
+ int ret;
+ int i;
+ if ((ret = dibusb_readwrite_usb(dib,b,1,rb,5)))
+ return ret;
+
+ switch (rb[0]) {
+ case DIBUSB_RC_NEC_KEY_PRESSED:
+ /* rb[1-3] is the actual key, rb[4] is a checksum */
+ deb_rc("raw key code 0x%02x, 0x%02x, 0x%02x, 0x%02x\n",
+ rb[1], rb[2], rb[3], rb[4]);
+
+ if ((0xff - rb[3]) != rb[4]) {
+ deb_rc("remote control checksum failed.\n");
+ break;
+ }
+
+ /* See if we can match the raw key code. */
+ for (i = 0; i < sizeof(rc_keys)/sizeof(rc_keys[0]); i++) {
+ if (rc_keys[i].c0 == rb[1] &&
+ rc_keys[i].c1 == rb[2] &&
+ rc_keys[i].c2 == rb[3]) {
+ dib->rc_input_event = rc_keys[i].key;
+ deb_rc("Translated key 0x%04x\n", dib->rc_input_event);
+ /* Signal down and up events for this key. */
+ input_report_key(&dib->rc_input_dev, dib->rc_input_event, 1);
+ input_report_key(&dib->rc_input_dev, dib->rc_input_event, 0);
+ input_sync(&dib->rc_input_dev);
+ break;
+ }
+ }
+ break;
+ case DIBUSB_RC_NEC_EMPTY: /* No (more) remote control keys. */
+ break;
+ case DIBUSB_RC_NEC_KEY_REPEATED:
+ /* rb[1]..rb[4] are always zero.*/
+ /* Repeats often seem to occur so for the moment just ignore this. */
+ deb_rc("Key repeat\n");
+ break;
+ default:
+ break;
+ }
+ return 0;
+}
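+
+/*
+ * Example poll response (assumed values): pressing MUTE on the Artec/Twinhan
+ * remote would return rb = { 0x01, 0x00, 0xff, 0x10, 0xef }, i.e. "key
+ * pressed", the three raw code bytes matching the table above and the
+ * checksum 0xff - 0x10 = 0xef.
+ */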
+
+/* Remote-control poll function - called every dib->rc_query_interval ms to see
+ whether the remote control has received anything. */
+static void dibusb_remote_query(void *data)
+{
+ struct usb_dibusb *dib = (struct usb_dibusb *) data;
+ /* TODO: need a lock here. We can simply skip checking for the remote control
+ if we're busy. */
+ dibusb_read_remote_control(dib);
+ schedule_delayed_work(&dib->rc_query_work,
+ msecs_to_jiffies(dib->rc_query_interval));
+}
+
+int dibusb_remote_init(struct usb_dibusb *dib)
+{
+ int i;
+
+ if (dib->dibdev->dev_cl->remote_type == DIBUSB_RC_NO)
+ return 0;
+
+ /* Initialise the remote-control structures.*/
+ init_input_dev(&dib->rc_input_dev);
+
+ dib->rc_input_dev.evbit[0] = BIT(EV_KEY);
+ dib->rc_input_dev.keycodesize = sizeof(unsigned char);
+ dib->rc_input_dev.keycodemax = KEY_MAX;
+ dib->rc_input_dev.name = DRIVER_DESC " remote control";
+
+ for (i=0; i<sizeof(rc_keys)/sizeof(rc_keys[0]); i++)
+ set_bit(rc_keys[i].key, dib->rc_input_dev.keybit);
+
+ input_register_device(&dib->rc_input_dev);
+
+ dib->rc_input_event = KEY_MAX;
+
+ INIT_WORK(&dib->rc_query_work, dibusb_remote_query, dib);
+
+ /* Start the remote-control polling. */
+ if (dib->rc_query_interval < 40)
+ dib->rc_query_interval = 100; /* default */
+
+	info("scheduling remote query interval to %d msecs.",dib->rc_query_interval);
+ schedule_delayed_work(&dib->rc_query_work,msecs_to_jiffies(dib->rc_query_interval));
+
+ dib->init_state |= DIBUSB_STATE_REMOTE;
+
+ return 0;
+}
+
+int dibusb_remote_exit(struct usb_dibusb *dib)
+{
+ if (dib->dibdev->dev_cl->remote_type == DIBUSB_RC_NO)
+ return 0;
+
+ if (dib->init_state & DIBUSB_STATE_REMOTE) {
+ cancel_delayed_work(&dib->rc_query_work);
+ flush_scheduled_work();
+ input_unregister_device(&dib->rc_input_dev);
+ }
+ dib->init_state &= ~DIBUSB_STATE_REMOTE;
+ return 0;
+}
--- /dev/null
+/*
+ * dvb-dibusb-usb.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for initializing and handling the
+ * usb specific stuff.
+ */
+#include "dvb-dibusb.h"
+
+#include <linux/version.h>
+#include <linux/pci.h>
+
+int dibusb_readwrite_usb(struct usb_dibusb *dib, u8 *wbuf, u16 wlen, u8 *rbuf,
+ u16 rlen)
+{
+ int actlen,ret = -ENOMEM;
+
+ if (wbuf == NULL || wlen == 0)
+ return -EINVAL;
+
+ if ((ret = down_interruptible(&dib->usb_sem)))
+ return ret;
+
+ if (dib->feedcount &&
+ wbuf[0] == DIBUSB_REQ_I2C_WRITE &&
+ dib->dibdev->dev_cl->id == DIBUSB1_1)
+		deb_err("BUG: writing to i2c while TS-streaming destroys the stream. "
+				"(%x reg: %x %x)\n", wbuf[0],wbuf[2],wbuf[3]);
+
+ debug_dump(wbuf,wlen);
+
+ ret = usb_bulk_msg(dib->udev,usb_sndbulkpipe(dib->udev,
+ dib->dibdev->dev_cl->pipe_cmd), wbuf,wlen,&actlen,
+ DIBUSB_I2C_TIMEOUT);
+
+ if (ret)
+ err("bulk message failed: %d (%d/%d)",ret,wlen,actlen);
+ else
+ ret = actlen != wlen ? -1 : 0;
+
+	/* an answer is expected and no error occurred before */
+ if (!ret && rbuf && rlen) {
+ ret = usb_bulk_msg(dib->udev,usb_rcvbulkpipe(dib->udev,
+ dib->dibdev->dev_cl->pipe_cmd),rbuf,rlen,&actlen,
+ DIBUSB_I2C_TIMEOUT);
+
+ if (ret)
+ err("recv bulk message failed: %d",ret);
+ else {
+ deb_alot("rlen: %d\n",rlen);
+ debug_dump(rbuf,actlen);
+ }
+ }
+
+ up(&dib->usb_sem);
+ return ret;
+}
+
+/*
+ * Cypress controls
+ */
+
+#if 0
+/*
+ * #if 0'ing the following functions as they are not in use _now_,
+ * but probably will be sometime.
+ */
+
+/*
+ * do not use this, just a workaround for a bug,
+ * which will hopefully never occur :).
+ */
+int dibusb_interrupt_read_loop(struct usb_dibusb *dib)
+{
+ u8 b[1] = { DIBUSB_REQ_INTR_READ };
+ return dibusb_write_usb(dib,b,1);
+}
+
+#endif
+static int dibusb_write_usb(struct usb_dibusb *dib, u8 *buf, u16 len)
+{
+ return dibusb_readwrite_usb(dib,buf,len,NULL,0);
+}
+
+/*
+ * ioctl for the firmware
+ */
+static int dibusb_ioctl_cmd(struct usb_dibusb *dib, u8 cmd, u8 *param, int plen)
+{
+ u8 b[34];
+ int size = plen > 32 ? 32 : plen;
+ memset(b,0,34);
+ b[0] = DIBUSB_REQ_SET_IOCTL;
+ b[1] = cmd;
+
+ if (size > 0)
+ memcpy(&b[2],param,size);
+
+	return dibusb_write_usb(dib,b,34); /* could also be 2+size */
+}
+
+/*
+ * ioctl for power control
+ */
+int dibusb_hw_wakeup(struct dvb_frontend *fe)
+{
+ struct usb_dibusb *dib = (struct usb_dibusb *) fe->dvb->priv;
+ u8 b[1] = { DIBUSB_IOCTL_POWER_WAKEUP };
+ deb_info("dibusb-device is getting up.\n");
+ dibusb_ioctl_cmd(dib,DIBUSB_IOCTL_CMD_POWER_MODE, b,1);
+
+ if (dib->fe_init)
+ return dib->fe_init(fe);
+
+ return 0;
+}
+
+int dibusb_hw_sleep(struct dvb_frontend *fe)
+{
+ struct usb_dibusb *dib = (struct usb_dibusb *) fe->dvb->priv;
+ u8 b[1] = { DIBUSB_IOCTL_POWER_SLEEP };
+ deb_info("dibusb-device is going to bed.\n");
+ dibusb_ioctl_cmd(dib,DIBUSB_IOCTL_CMD_POWER_MODE, b,1);
+
+ if (dib->fe_sleep)
+ return dib->fe_sleep(fe);
+
+ return 0;
+}
+
+int dibusb_set_streaming_mode(struct usb_dibusb *dib,u8 mode)
+{
+ u8 b[2] = { DIBUSB_REQ_SET_STREAMING_MODE, mode };
+ return dibusb_readwrite_usb(dib,b,2,NULL,0);
+}
+
+int dibusb_streaming(struct usb_dibusb *dib,int onoff)
+{
+ switch (dib->dibdev->dev_cl->id) {
+ case DIBUSB2_0:
+ if (onoff)
+ return dibusb_ioctl_cmd(dib,DIBUSB_IOCTL_CMD_ENABLE_STREAM,NULL,0);
+ else
+ return dibusb_ioctl_cmd(dib,DIBUSB_IOCTL_CMD_DISABLE_STREAM,NULL,0);
+ break;
+ case UMT2_0:
+ return dibusb_set_streaming_mode(dib,onoff);
+ break;
+ default:
+ break;
+ }
+ return 0;
+}
+
+int dibusb_urb_init(struct usb_dibusb *dib)
+{
+ int ret,i,bufsize,def_pid_parse = 1;
+
+ /*
+	 * when reloading the driver without replugging the device,
+	 * a timeout occurs; this helps
+ */
+ usb_clear_halt(dib->udev,usb_sndbulkpipe(dib->udev,dib->dibdev->dev_cl->pipe_cmd));
+ usb_clear_halt(dib->udev,usb_rcvbulkpipe(dib->udev,dib->dibdev->dev_cl->pipe_cmd));
+ usb_clear_halt(dib->udev,usb_rcvbulkpipe(dib->udev,dib->dibdev->dev_cl->pipe_data));
+
+ /* allocate the array for the data transfer URBs */
+ dib->urb_list = kmalloc(dib->dibdev->dev_cl->urb_count*sizeof(struct urb *),GFP_KERNEL);
+ if (dib->urb_list == NULL)
+ return -ENOMEM;
+ memset(dib->urb_list,0,dib->dibdev->dev_cl->urb_count*sizeof(struct urb *));
+
+ dib->init_state |= DIBUSB_STATE_URB_LIST;
+
+ bufsize = dib->dibdev->dev_cl->urb_count*dib->dibdev->dev_cl->urb_buffer_size;
+	deb_info("allocating %d bytes as buffer size for all URBs\n",bufsize);
+ /* allocate the actual buffer for the URBs */
+ if ((dib->buffer = pci_alloc_consistent(NULL,bufsize,&dib->dma_handle)) == NULL) {
+ deb_info("not enough memory.\n");
+ return -ENOMEM;
+ }
+ deb_info("allocation complete\n");
+ memset(dib->buffer,0,bufsize);
+
+ dib->init_state |= DIBUSB_STATE_URB_BUF;
+
+ /* allocate and submit the URBs */
+ for (i = 0; i < dib->dibdev->dev_cl->urb_count; i++) {
+ if (!(dib->urb_list[i] = usb_alloc_urb(0,GFP_ATOMIC))) {
+ return -ENOMEM;
+ }
+ deb_info("submitting URB no. %d\n",i);
+
+ usb_fill_bulk_urb( dib->urb_list[i], dib->udev,
+ usb_rcvbulkpipe(dib->udev,dib->dibdev->dev_cl->pipe_data),
+ &dib->buffer[i*dib->dibdev->dev_cl->urb_buffer_size],
+ dib->dibdev->dev_cl->urb_buffer_size,
+ dibusb_urb_complete, dib);
+
+ dib->urb_list[i]->transfer_flags = 0;
+
+ if ((ret = usb_submit_urb(dib->urb_list[i],GFP_ATOMIC))) {
+			err("could not submit buffer urb no. %d",i);
+ return ret;
+ }
+ dib->init_state |= DIBUSB_STATE_URB_SUBMIT;
+ }
+
+	/* at this point dib->pid_parse contains the value of the module parameter */
+	/* decide whether pid parsing can be deactivated:
+	 * it must be possible (given the USB speed) and wanted (by the user)
+	 */
+ switch (dib->dibdev->dev_cl->id) {
+ case DIBUSB2_0:
+ if (dib->udev->speed == USB_SPEED_HIGH && !dib->pid_parse) {
+ def_pid_parse = 0;
+ info("running at HIGH speed, will deliver the complete TS.");
+ } else
+ info("will use pid_parsing.");
+ break;
+ default:
+ break;
+ }
+	/* from here on dib->pid_parse contains the combined device and user decision */
+ dib->pid_parse = def_pid_parse;
+
+ return 0;
+}
+
+int dibusb_urb_exit(struct usb_dibusb *dib)
+{
+ int i;
+ if (dib->init_state & DIBUSB_STATE_URB_LIST) {
+ for (i = 0; i < dib->dibdev->dev_cl->urb_count; i++) {
+ if (dib->urb_list[i] != NULL) {
+ deb_info("killing URB no. %d.\n",i);
+
+ /* stop the URBs */
+ usb_kill_urb(dib->urb_list[i]);
+
+ deb_info("freeing URB no. %d.\n",i);
+ /* free the URBs */
+ usb_free_urb(dib->urb_list[i]);
+ }
+ }
+ /* free the urb array */
+ kfree(dib->urb_list);
+ dib->init_state &= ~DIBUSB_STATE_URB_SUBMIT;
+ dib->init_state &= ~DIBUSB_STATE_URB_LIST;
+ }
+
+ if (dib->init_state & DIBUSB_STATE_URB_BUF)
+ pci_free_consistent(NULL,
+ dib->dibdev->dev_cl->urb_buffer_size*dib->dibdev->dev_cl->urb_count,
+ dib->buffer,dib->dma_handle);
+
+ dib->init_state &= ~DIBUSB_STATE_URB_BUF;
+ return 0;
+}
/*
* dvb-dibusb.h
*
- * Copyright (C) 2004 Patrick Boettcher (patrick.boettcher@desy.de)
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2.
*
- * for more information see dvb-dibusb.c .
+ * for more information see dvb-dibusb-core.c .
*/
-
#ifndef __DVB_DIBUSB_H__
#define __DVB_DIBUSB_H__
+#include <linux/input.h>
+#include <linux/config.h>
+#include <linux/usb.h>
+
+#include "dvb_frontend.h"
+#include "dvb_demux.h"
+#include "dvb_net.h"
+#include "dmxdev.h"
+
#include "dib3000.h"
+#include "mt352.h"
+
+/* debug */
+#ifdef CONFIG_DVB_DIBCOM_DEBUG
+#define dprintk(level,args...) \
+ do { if ((dvb_dibusb_debug & level)) { printk(args); } } while (0)
+
+#define debug_dump(b,l) {\
+ int i; \
+ for (i = 0; i < l; i++) deb_xfer("%02x ", b[i]); \
+ deb_xfer("\n");\
+}
+
+#else
+#define dprintk(args...)
+#define debug_dump(b,l)
+#endif
+
+extern int dvb_dibusb_debug;
+
+/* Version information */
+#define DRIVER_VERSION "0.3"
+#define DRIVER_DESC "Driver for DiBcom based USB Budget DVB-T device"
+#define DRIVER_AUTHOR "Patrick Boettcher, patrick.boettcher@desy.de"
+
+#define deb_info(args...) dprintk(0x01,args)
+#define deb_xfer(args...) dprintk(0x02,args)
+#define deb_alot(args...) dprintk(0x04,args)
+#define deb_ts(args...) dprintk(0x08,args)
+#define deb_err(args...) dprintk(0x10,args)
+#define deb_rc(args...) dprintk(0x20,args)
+
+/* generic log methods - taken from usb.h */
+#define err(format, arg...) printk(KERN_ERR "%s: " format "\n" , __FILE__ , ## arg)
+#define info(format, arg...) printk(KERN_INFO "%s: " format "\n" , __FILE__ , ## arg)
+#define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n" , __FILE__ , ## arg)
+
+struct dibusb_usb_controller {
+ const char *name; /* name of the usb controller */
+	u16 cpu_cs_register; /* CPU control register; the CPU needs to be restarted when the firmware has been downloaded. */
+};
typedef enum {
DIBUSB1_1 = 0,
- DIBUSB2_0,
DIBUSB1_1_AN2235,
-} dibusb_type;
+ DIBUSB2_0,
+ UMT2_0,
+} dibusb_class_t;
-static const char * dibusb_fw_filenames1_1[] = {
- "dvb-dibusb-5.0.0.11.fw"
-};
+typedef enum {
+ DIBUSB_TUNER_CABLE_THOMSON = 0,
+ DIBUSB_TUNER_COFDM_PANASONIC_ENV57H1XD5,
+ DIBUSB_TUNER_CABLE_LG_TDTP_E102P,
+ DIBUSB_TUNER_COFDM_PANASONIC_ENV77H11D5,
+} dibusb_tuner_t;
-static const char * dibusb_fw_filenames1_1_an2235[] = {
- "dvb-dibusb-an2235-1.fw"
-};
+typedef enum {
+ DIBUSB_DIB3000MB = 0,
+ DIBUSB_DIB3000MC,
+ DIBUSB_MT352,
+} dibusb_demodulator_t;
-static const char * dibusb_fw_filenames2_0[] = {
- "dvb-dibusb-6.0.0.5.fw"
-};
+typedef enum {
+ DIBUSB_RC_NO = 0,
+ DIBUSB_RC_NEC_PROTOCOL = 1,
+} dibusb_remote_t;
-struct dibusb_device_parameter {
- dibusb_type type;
- u8 demod_addr;
- const char **fw_filenames;
- const char *usb_controller;
- u16 usb_cpu_csreg;
-
- int num_urbs;
- int urb_buf_size;
- int default_size;
- int firmware_bug;
-
- int cmd_pipe;
- int result_pipe;
- int data_pipe;
-};
+struct dibusb_tuner {
+ dibusb_tuner_t id;
-static struct dibusb_device_parameter dibusb_dev_parm[3] = {
- { .type = DIBUSB1_1,
- .demod_addr = 0x10,
- .fw_filenames = dibusb_fw_filenames1_1,
- .usb_controller = "Cypress AN2135",
- .usb_cpu_csreg = 0x7f92,
-
- .num_urbs = 3,
- .urb_buf_size = 4096,
- .default_size = 188*21,
- .firmware_bug = 1,
-
- .cmd_pipe = 0x01,
- .result_pipe = 0x81,
- .data_pipe = 0x82,
- },
- { .type = DIBUSB2_0,
- .demod_addr = 0x18,
- .fw_filenames = dibusb_fw_filenames2_0,
- .usb_controller = "Cypress FX2",
- .usb_cpu_csreg = 0xe600,
-
- .num_urbs = 3,
- .urb_buf_size = 40960,
- .default_size = 188*210,
- .firmware_bug = 0,
-
- .cmd_pipe = 0x01,
- .result_pipe = 0x81,
- .data_pipe = 0x86,
- },
- { .type = DIBUSB1_1_AN2235,
- .demod_addr = 0x10,
- .fw_filenames = dibusb_fw_filenames1_1_an2235,
- .usb_controller = "Cypress CY7C64613 (AN2235)",
- .usb_cpu_csreg = 0x7f92,
-
- .num_urbs = 3,
- .urb_buf_size = 4096,
- .default_size = 188*21,
- .firmware_bug = 1,
-
- .cmd_pipe = 0x01,
- .result_pipe = 0x81,
- .data_pipe = 0x82,
- }
+ u8 pll_addr; /* tuner i2c address */
};
+extern struct dibusb_tuner dibusb_tuner[];
-struct dibusb_device {
- const char *name;
- u16 cold_product_id;
- u16 warm_product_id;
- struct dibusb_device_parameter *parm;
-};
+#define DIBUSB_POSSIBLE_I2C_ADDR_NUM 4
+struct dibusb_demod {
+ dibusb_demodulator_t id;
-/* Vendor IDs */
-#define USB_VID_ANCHOR 0x0547
-#define USB_VID_AVERMEDIA 0x14aa
-#define USB_VID_COMPRO 0x185b
-#define USB_VID_COMPRO_UNK 0x145f
-#define USB_VID_CYPRESS 0x04b4
-#define USB_VID_DIBCOM 0x10b8
-#define USB_VID_EMPIA 0xeb1a
-#define USB_VID_GRANDTEC 0x5032
-#define USB_VID_HYPER_PALTEK 0x1025
-#define USB_VID_IMC_NETWORKS 0x13d3
-#define USB_VID_TWINHAN 0x1822
-#define USB_VID_ULTIMA_ELECTRONIC 0x05d8
-
-/* Product IDs */
-#define USB_PID_AVERMEDIA_DVBT_USB_COLD 0x0001
-#define USB_PID_AVERMEDIA_DVBT_USB_WARM 0x0002
-#define USB_PID_COMPRO_DVBU2000_COLD 0xd000
-#define USB_PID_COMPRO_DVBU2000_WARM 0xd001
-#define USB_PID_COMPRO_DVBU2000_UNK_COLD 0x010c
-#define USB_PID_COMPRO_DVBU2000_UNK_WARM 0x010d
-#define USB_PID_DIBCOM_MOD3000_COLD 0x0bb8
-#define USB_PID_DIBCOM_MOD3000_WARM 0x0bb9
-#define USB_PID_DIBCOM_MOD3001_COLD 0x0bc6
-#define USB_PID_DIBCOM_MOD3001_WARM 0x0bc7
-#define USB_PID_GRANDTEC_DVBT_USB_COLD 0x0fa0
-#define USB_PID_GRANDTEC_DVBT_USB_WARM 0x0fa1
-#define USB_PID_KWORLD_VSTREAM_COLD 0x17de
-#define USB_PID_KWORLD_VSTREAM_WARM 0x17df
-#define USB_PID_TWINHAN_VP7041_COLD 0x3201
-#define USB_PID_TWINHAN_VP7041_WARM 0x3202
-#define USB_PID_ULTIMA_TVBOX_COLD 0x8105
-#define USB_PID_ULTIMA_TVBOX_WARM 0x8106
-#define USB_PID_ULTIMA_TVBOX_AN2235_COLD 0x8107
-#define USB_PID_ULTIMA_TVBOX_AN2235_WARM 0x8108
-#define USB_PID_ULTIMA_TVBOX_ANCHOR_COLD 0x2235
-#define USB_PID_ULTIMA_TVBOX_USB2_COLD 0x8109
-#define USB_PID_ULTIMA_TVBOX_USB2_FX_COLD 0x8613
-#define USB_PID_ULTIMA_TVBOX_USB2_FX_WARM 0x1002
-#define USB_PID_UNK_HYPER_PALTEK_COLD 0x005e
-#define USB_PID_UNK_HYPER_PALTEK_WARM 0x005f
-#define USB_PID_YAKUMO_DTT200U_COLD 0x0201
-#define USB_PID_YAKUMO_DTT200U_WARM 0x0301
-
-#define DIBUSB_SUPPORTED_DEVICES 15
-
-/* USB Driver stuff */
-static struct dibusb_device dibusb_devices[DIBUSB_SUPPORTED_DEVICES] = {
- { .name = "TwinhanDTV USB1.1 / Magic Box / HAMA USB1.1 DVB-T device",
- .cold_product_id = USB_PID_TWINHAN_VP7041_COLD,
- .warm_product_id = USB_PID_TWINHAN_VP7041_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "KWorld V-Stream XPERT DTV - DVB-T USB1.1",
- .cold_product_id = USB_PID_KWORLD_VSTREAM_COLD,
- .warm_product_id = USB_PID_KWORLD_VSTREAM_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Grandtec USB1.1 DVB-T/DiBcom USB1.1 DVB-T reference design (MOD3000)",
- .cold_product_id = USB_PID_DIBCOM_MOD3000_COLD,
- .warm_product_id = USB_PID_DIBCOM_MOD3000_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Artec T1 USB1.1 TVBOX with AN2135",
- .cold_product_id = USB_PID_ULTIMA_TVBOX_COLD,
- .warm_product_id = USB_PID_ULTIMA_TVBOX_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Artec T1 USB1.1 TVBOX with AN2235",
- .cold_product_id = USB_PID_ULTIMA_TVBOX_AN2235_COLD,
- .warm_product_id = USB_PID_ULTIMA_TVBOX_AN2235_WARM,
- .parm = &dibusb_dev_parm[2],
- },
- { .name = "Artec T1 USB1.1 TVBOX with AN2235 (misdesigned)",
- .cold_product_id = USB_PID_ULTIMA_TVBOX_ANCHOR_COLD,
- .warm_product_id = 0, /* undefined, this design becomes USB_PID_DIBCOM_MOD3000_WARM in warm state */
- .parm = &dibusb_dev_parm[2],
- },
- { .name = "Artec T1 USB2.0 TVBOX (please report the warm ID)",
- .cold_product_id = USB_PID_ULTIMA_TVBOX_USB2_COLD,
- .warm_product_id = 0, /* don't know, it is most likely that the device will get another USB ID in warm state */
- .parm = &dibusb_dev_parm[1],
- },
- { .name = "Artec T1 USB2.0 TVBOX with FX2 IDs (misdesigned, please report the warm ID)",
- .cold_product_id = USB_PID_ULTIMA_TVBOX_USB2_FX_COLD,
- .warm_product_id = USB_PID_ULTIMA_TVBOX_USB2_FX_WARM, /* undefined, it could be that the device will get another USB ID in warm state */
- .parm = &dibusb_dev_parm[1],
- },
- { .name = "Compro Videomate DVB-U2000 - DVB-T USB1.1",
- .cold_product_id = USB_PID_COMPRO_DVBU2000_COLD,
- .warm_product_id = USB_PID_COMPRO_DVBU2000_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Compro Videomate DVB-U2000 - DVB-T USB1.1 (really ?? please report the name!)",
- .cold_product_id = USB_PID_COMPRO_DVBU2000_UNK_COLD,
- .warm_product_id = USB_PID_COMPRO_DVBU2000_UNK_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Unkown USB1.1 DVB-T device ???? please report the name to the author",
- .cold_product_id = USB_PID_UNK_HYPER_PALTEK_COLD,
- .warm_product_id = USB_PID_UNK_HYPER_PALTEK_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "DiBcom USB2.0 DVB-T reference design (MOD3000P)",
- .cold_product_id = USB_PID_DIBCOM_MOD3001_COLD,
- .warm_product_id = USB_PID_DIBCOM_MOD3001_WARM,
- .parm = &dibusb_dev_parm[1],
- },
- { .name = "Grandtec DVB-T USB1.1",
- .cold_product_id = USB_PID_GRANDTEC_DVBT_USB_COLD,
- .warm_product_id = USB_PID_GRANDTEC_DVBT_USB_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Avermedia AverTV DVBT USB1.1",
- .cold_product_id = USB_PID_AVERMEDIA_DVBT_USB_COLD,
- .warm_product_id = USB_PID_AVERMEDIA_DVBT_USB_WARM,
- .parm = &dibusb_dev_parm[0],
- },
- { .name = "Yakumo DVB-T mobile USB2.0",
- .cold_product_id = USB_PID_YAKUMO_DTT200U_COLD,
- .warm_product_id = USB_PID_YAKUMO_DTT200U_WARM,
- .parm = &dibusb_dev_parm[1],
- }
+	int pid_filter_count; /* number of entries in the internal pid_filter */
+ u8 i2c_addrs[DIBUSB_POSSIBLE_I2C_ADDR_NUM]; /* list of possible i2c addresses of the demod */
};
-/* USB Driver stuff */
-/* table of devices that work with this driver */
-static struct usb_device_id dibusb_table [] = {
- { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_DVBT_USB_COLD)},
- { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_DVBT_USB_WARM)},
- { USB_DEVICE(USB_VID_COMPRO, USB_PID_COMPRO_DVBU2000_COLD) },
- { USB_DEVICE(USB_VID_COMPRO, USB_PID_COMPRO_DVBU2000_WARM) },
- { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3000_COLD) },
- { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3000_WARM) },
- { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3001_COLD) },
- { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_MOD3001_WARM) },
- { USB_DEVICE(USB_VID_EMPIA, USB_PID_KWORLD_VSTREAM_COLD) },
- { USB_DEVICE(USB_VID_EMPIA, USB_PID_KWORLD_VSTREAM_WARM) },
- { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_GRANDTEC_DVBT_USB_COLD) },
- { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_GRANDTEC_DVBT_USB_WARM) },
- { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_DIBCOM_MOD3000_COLD) },
- { USB_DEVICE(USB_VID_GRANDTEC, USB_PID_DIBCOM_MOD3000_WARM) },
- { USB_DEVICE(USB_VID_HYPER_PALTEK, USB_PID_UNK_HYPER_PALTEK_COLD) },
- { USB_DEVICE(USB_VID_HYPER_PALTEK, USB_PID_UNK_HYPER_PALTEK_WARM) },
- { USB_DEVICE(USB_VID_IMC_NETWORKS, USB_PID_TWINHAN_VP7041_COLD) },
- { USB_DEVICE(USB_VID_IMC_NETWORKS, USB_PID_TWINHAN_VP7041_WARM) },
- { USB_DEVICE(USB_VID_TWINHAN, USB_PID_TWINHAN_VP7041_COLD) },
- { USB_DEVICE(USB_VID_TWINHAN, USB_PID_TWINHAN_VP7041_WARM) },
- { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_COLD) },
- { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_WARM) },
- { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_AN2235_COLD) },
- { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_AN2235_WARM) },
- { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_YAKUMO_DTT200U_COLD) },
- { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_YAKUMO_DTT200U_WARM) },
- { USB_DEVICE(USB_PID_COMPRO_DVBU2000_UNK_COLD, USB_VID_COMPRO_UNK) },
- { USB_DEVICE(USB_VID_ULTIMA_ELECTRONIC, USB_PID_ULTIMA_TVBOX_USB2_COLD) },
+#define DIBUSB_MAX_TUNER_NUM 2
+struct dibusb_device_class {
+ dibusb_class_t id;
-/*
- * activate the following define when you have one of the devices and want to
- * build it from build-2.6 in dvb-kernel
- */
-// #define CONFIG_DVB_DIBUSB_MISDESIGNED_DEVICES
-#ifdef CONFIG_DVB_DIBUSB_MISDESIGNED_DEVICES
- { USB_DEVICE(USB_VID_ANCHOR, USB_PID_ULTIMA_TVBOX_ANCHOR_COLD) },
- { USB_DEVICE(USB_VID_CYPRESS, USB_PID_ULTIMA_TVBOX_USB2_FX_COLD) },
- { USB_DEVICE(USB_VID_ANCHOR, USB_PID_ULTIMA_TVBOX_USB2_FX_WARM) },
-#endif
- { } /* Terminating entry */
+ const struct dibusb_usb_controller *usb_ctrl; /* usb controller */
+	const char *firmware; /* valid firmware filename */
+
+ int pipe_cmd; /* command pipe (read/write) */
+ int pipe_data; /* data pipe */
+
+ int urb_count; /* number of data URBs to be submitted */
+ int urb_buffer_size; /* the size of the buffer for each URB */
+
+	dibusb_remote_t remote_type;		/* does this device have an IR receiver */
+
+	struct dibusb_demod *demod;			/* which demodulator is mounted */
+ struct dibusb_tuner *tuner; /* which tuner can be found here */
};
-MODULE_DEVICE_TABLE (usb, dibusb_table);
+#define DIBUSB_ID_MAX_NUM 15
+struct dibusb_usb_device {
+ const char *name; /* real name of the box */
+	struct dibusb_device_class *dev_cl;	/* which dibusb_device_class this device is part of */
-#define DIBUSB_I2C_TIMEOUT HZ*5
+	struct usb_device_id *cold_ids[DIBUSB_ID_MAX_NUM];	/* list of USB ids when this device is in the pre-firmware (cold) state */
+	struct usb_device_id *warm_ids[DIBUSB_ID_MAX_NUM];	/* list of USB ids when this device is in the post-firmware (warm) state */
+};
+
+/* a PID for the pid_filter list, when in use */
+struct dibusb_pid
+{
+ int index;
+ u16 pid;
+ int active;
+};
struct usb_dibusb {
/* usb */
struct usb_device * udev;
- struct dibusb_device * dibdev;
+ struct dibusb_usb_device * dibdev;
+
+#define DIBUSB_STATE_INIT 0x000
+#define DIBUSB_STATE_URB_LIST 0x001
+#define DIBUSB_STATE_URB_BUF 0x002
+#define DIBUSB_STATE_URB_SUBMIT 0x004
+#define DIBUSB_STATE_DVB 0x008
+#define DIBUSB_STATE_I2C 0x010
+#define DIBUSB_STATE_REMOTE 0x020
+#define DIBUSB_STATE_PIDLIST 0x040
+ int init_state;
int feedcount;
- int pid_parse;
- struct dib3000_xfer_ops xfer_ops;
+ struct dib_fe_xfer_ops xfer_ops;
+
+ struct dibusb_tuner *tuner;
struct urb **urb_list;
u8 *buffer;
/* I2C */
struct i2c_adapter i2c_adap;
- struct i2c_client i2c_client;
/* locking */
struct semaphore usb_sem;
struct semaphore i2c_sem;
+ /* pid filtering */
+ spinlock_t pid_list_lock;
+ struct dibusb_pid *pid_list;
+
/* dvb */
- int dvb_is_ready;
struct dvb_adapter *adapter;
struct dmxdev dmxdev;
struct dvb_demux demux;
struct dvb_net dvb_net;
struct dvb_frontend* fe;
+ int (*fe_sleep) (struct dvb_frontend *);
+ int (*fe_init) (struct dvb_frontend *);
+
/* remote control */
struct input_dev rc_input_dev;
struct work_struct rc_query_work;
int rc_input_event;
+
+ /* module parameters */
+ int pid_parse;
+ int rc_query_interval;
};
+/* commonly used functions in the separated files */
+
+/* dvb-dibusb-firmware.c */
+int dibusb_loadfirmware(struct usb_device *udev, struct dibusb_usb_device *dibdev);
+
+/* dvb-dibusb-remote.c */
+int dibusb_remote_exit(struct usb_dibusb *dib);
+int dibusb_remote_init(struct usb_dibusb *dib);
+
+/* dvb-dibusb-fe-i2c.c */
+int dibusb_i2c_msg(struct usb_dibusb *dib, u8 addr,
+ u8 *wbuf, u16 wlen, u8 *rbuf, u16 rlen);
+int dibusb_fe_init(struct usb_dibusb* dib);
+int dibusb_fe_exit(struct usb_dibusb *dib);
+int dibusb_i2c_init(struct usb_dibusb *dib);
+int dibusb_i2c_exit(struct usb_dibusb *dib);
+
+/* dvb-dibusb-dvb.c */
+void dibusb_urb_complete(struct urb *urb, struct pt_regs *ptregs);
+int dibusb_dvb_init(struct usb_dibusb *dib);
+int dibusb_dvb_exit(struct usb_dibusb *dib);
-/* types of first byte of each buffer */
+/* dvb-dibusb-usb.c */
+int dibusb_readwrite_usb(struct usb_dibusb *dib, u8 *wbuf, u16 wlen, u8 *rbuf,
+ u16 rlen);
+
+int dibusb_hw_wakeup(struct dvb_frontend *);
+int dibusb_hw_sleep(struct dvb_frontend *);
+int dibusb_set_streaming_mode(struct usb_dibusb *,u8);
+int dibusb_streaming(struct usb_dibusb *,int);
+
+int dibusb_urb_init(struct usb_dibusb *);
+int dibusb_urb_exit(struct usb_dibusb *);
+
+/* dvb-dibusb-pid.c */
+int dibusb_pid_list_init(struct usb_dibusb *dib);
+void dibusb_pid_list_exit(struct usb_dibusb *dib);
+int dibusb_ctrl_pid(struct usb_dibusb *dib, struct dvb_demux_feed *dvbdmxfeed, int onoff);
+
+/* i2c and transfer stuff */
+#define DIBUSB_I2C_TIMEOUT HZ*5
+
+/*
+ * protocol of all dibusb related devices
+ */
+
+/*
+ * bulk msg to/from endpoint 0x01
+ *
+ * general structure:
+ * request_byte parameter_bytes
+ */
#define DIBUSB_REQ_START_READ 0x00
#define DIBUSB_REQ_START_DEMOD 0x01
+
+/*
+ * i2c read
+ * bulk write: 0x02 ((7bit i2c_addr << 1) | 0x01) register_bytes length_word
+ * bulk read: byte_buffer (length_word bytes)
+ */
#define DIBUSB_REQ_I2C_READ 0x02
+
+/*
+ * i2c write
+ * bulk write: 0x03 (7bit i2c_addr << 1) register_bytes value_bytes
+ */
#define DIBUSB_REQ_I2C_WRITE 0x03
-/* prefix for reading the current RC key */
+/*
+ * polling the value of the remote control
+ * bulk write: 0x04
+ * bulk read: byte_buffer (5 bytes)
+ *
+ * first byte of byte_buffer shows the status (0x00, 0x01, 0x02)
+ */
#define DIBUSB_REQ_POLL_REMOTE 0x04
#define DIBUSB_RC_NEC_EMPTY 0x00
#define DIBUSB_RC_NEC_KEY_PRESSED 0x01
#define DIBUSB_RC_NEC_KEY_REPEATED 0x02
-/* 0x05 0xXX */
+/* streaming mode:
+ * bulk write: 0x05 mode_byte
+ *
+ * mode_byte is mostly 0x00
+ */
#define DIBUSB_REQ_SET_STREAMING_MODE 0x05
/* interrupt the internal read loop, when blocking */
#define DIBUSB_REQ_INTR_READ 0x06
-/* IO control
- * 0x07 <cmd 1 byte> <param 32 bytes>
+/* io control
+ * 0x07 cmd_byte param_bytes
+ *
+ * param_bytes can be up to 32 bytes
+ *
+ * cmd_byte function parameter name
+ * 0x00 power mode
+ * 0x00 sleep
+ * 0x01 wakeup
+ *
+ * 0x01 enable streaming
+ * 0x02 disable streaming
+ *
+ *
*/
#define DIBUSB_REQ_SET_IOCTL 0x07
#define DIBUSB_IOCTL_POWER_SLEEP 0x00
#define DIBUSB_IOCTL_POWER_WAKEUP 0x01
+/* modify streaming of the FX2 */
+#define DIBUSB_IOCTL_CMD_ENABLE_STREAM 0x01
+#define DIBUSB_IOCTL_CMD_DISABLE_STREAM 0x02
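+
+/*
+ * Example (assuming the power-mode cmd_byte is 0x00, as the table above
+ * suggests): waking the device up sends the 34 byte buffer
+ * { 0x07 (DIBUSB_REQ_SET_IOCTL), 0x00, 0x01 (DIBUSB_IOCTL_POWER_WAKEUP),
+ *   0x00, ... }, i.e. request byte, cmd byte, parameter, zero padding.
+ */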
+
#endif
enum dmx_ts_pes pes_type;
int cc;
+ int pusi_seen; /* prevents feeding of garbage from previous section */
u16 peslen;
static DECLARE_MUTEX(frontend_mutex);
+struct dvb_frontend_private {
+
+ struct dvb_device *dvbdev;
+ struct dvb_frontend_parameters parameters;
+ struct dvb_fe_events events;
+ struct semaphore sem;
+ struct list_head list_head;
+ wait_queue_head_t wait_queue;
+ pid_t thread_pid;
+ unsigned long release_jiffies;
+ int state;
+ int bending;
+ int lnb_drift;
+ int inversion;
+ int auto_step;
+ int auto_sub_step;
+ int started_auto_step;
+ int min_delay;
+ int max_drift;
+ int step_size;
+ int exit;
+ int wakeup;
+ fe_status_t status;
+};
+
+
static void dvb_frontend_add_event(struct dvb_frontend *fe, fe_status_t status)
{
- struct dvb_fe_events *events = &fe->events;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+ struct dvb_fe_events *events = &fepriv->events;
struct dvb_frontend_event *e;
int wp;
e = &events->events[events->eventw];
- memcpy (&e->parameters, &fe->parameters,
+ memcpy (&e->parameters, &fepriv->parameters,
sizeof (struct dvb_frontend_parameters));
if (status & FE_HAS_LOCK)
static int dvb_frontend_get_event(struct dvb_frontend *fe,
struct dvb_frontend_event *event, int flags)
{
- struct dvb_fe_events *events = &fe->events;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+ struct dvb_fe_events *events = &fepriv->events;
dprintk ("%s\n", __FUNCTION__);
if (flags & O_NONBLOCK)
return -EWOULDBLOCK;
- up(&fe->sem);
+ up(&fepriv->sem);
ret = wait_event_interruptible (events->wait_queue,
events->eventw != events->eventr);
- if (down_interruptible (&fe->sem))
+ if (down_interruptible (&fepriv->sem))
return -ERESTARTSYS;
if (ret < 0)
{
int autoinversion;
int ready = 0;
- int original_inversion = fe->parameters.inversion;
- u32 original_frequency = fe->parameters.frequency;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+ int original_inversion = fepriv->parameters.inversion;
+ u32 original_frequency = fepriv->parameters.frequency;
/* are we using autoinversion? */
autoinversion = ((!(fe->ops->info.caps & FE_CAN_INVERSION_AUTO)) &&
- (fe->parameters.inversion == INVERSION_AUTO));
+ (fepriv->parameters.inversion == INVERSION_AUTO));
/* setup parameters correctly */
while(!ready) {
/* calculate the lnb_drift */
- fe->lnb_drift = fe->auto_step * fe->step_size;
+ fepriv->lnb_drift = fepriv->auto_step * fepriv->step_size;
/* wrap the auto_step if we've exceeded the maximum drift */
- if (fe->lnb_drift > fe->max_drift) {
- fe->auto_step = 0;
- fe->auto_sub_step = 0;
- fe->lnb_drift = 0;
+ if (fepriv->lnb_drift > fepriv->max_drift) {
+ fepriv->auto_step = 0;
+ fepriv->auto_sub_step = 0;
+ fepriv->lnb_drift = 0;
}
/* perform inversion and +/- zigzag */
- switch(fe->auto_sub_step) {
+ switch(fepriv->auto_sub_step) {
case 0:
/* try with the current inversion and current drift setting */
ready = 1;
case 1:
if (!autoinversion) break;
- fe->inversion = (fe->inversion == INVERSION_OFF) ? INVERSION_ON : INVERSION_OFF;
+ fepriv->inversion = (fepriv->inversion == INVERSION_OFF) ? INVERSION_ON : INVERSION_OFF;
ready = 1;
break;
case 2:
- if (fe->lnb_drift == 0) break;
+ if (fepriv->lnb_drift == 0) break;
- fe->lnb_drift = -fe->lnb_drift;
+ fepriv->lnb_drift = -fepriv->lnb_drift;
ready = 1;
break;
case 3:
- if (fe->lnb_drift == 0) break;
+ if (fepriv->lnb_drift == 0) break;
if (!autoinversion) break;
- fe->inversion = (fe->inversion == INVERSION_OFF) ? INVERSION_ON : INVERSION_OFF;
- fe->lnb_drift = -fe->lnb_drift;
+ fepriv->inversion = (fepriv->inversion == INVERSION_OFF) ? INVERSION_ON : INVERSION_OFF;
+ fepriv->lnb_drift = -fepriv->lnb_drift;
ready = 1;
break;
default:
- fe->auto_step++;
- fe->auto_sub_step = -1; /* it'll be incremented to 0 in a moment */
+ fepriv->auto_step++;
+ fepriv->auto_sub_step = -1; /* it'll be incremented to 0 in a moment */
break;
}
- if (!ready) fe->auto_sub_step++;
+ if (!ready) fepriv->auto_sub_step++;
}
/* if this attempt would hit where we started, indicate a complete
* iteration has occurred */
- if ((fe->auto_step == fe->started_auto_step) &&
- (fe->auto_sub_step == 0) && check_wrapped) {
+ if ((fepriv->auto_step == fepriv->started_auto_step) &&
+ (fepriv->auto_sub_step == 0) && check_wrapped) {
return 1;
}
dprintk("%s: drift:%i inversion:%i auto_step:%i "
"auto_sub_step:%i started_auto_step:%i\n",
- __FUNCTION__, fe->lnb_drift, fe->inversion,
- fe->auto_step, fe->auto_sub_step, fe->started_auto_step);
+ __FUNCTION__, fepriv->lnb_drift, fepriv->inversion,
+ fepriv->auto_step, fepriv->auto_sub_step, fepriv->started_auto_step);
/* set the frontend itself */
- fe->parameters.frequency += fe->lnb_drift;
+ fepriv->parameters.frequency += fepriv->lnb_drift;
if (autoinversion)
- fe->parameters.inversion = fe->inversion;
+ fepriv->parameters.inversion = fepriv->inversion;
if (fe->ops->set_frontend)
- fe->ops->set_frontend(fe, &fe->parameters);
+ fe->ops->set_frontend(fe, &fepriv->parameters);
- fe->parameters.frequency = original_frequency;
- fe->parameters.inversion = original_inversion;
+ fepriv->parameters.frequency = original_frequency;
+ fepriv->parameters.inversion = original_inversion;
- fe->auto_sub_step++;
+ fepriv->auto_sub_step++;
return 0;
}
static int dvb_frontend_is_exiting(struct dvb_frontend *fe)
{
- if (fe->exit)
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+
+ if (fepriv->exit)
return 1;
- if (fe->dvbdev->writers == 1)
- if (jiffies - fe->release_jiffies > dvb_shutdown_timeout * HZ)
+ if (fepriv->dvbdev->writers == 1)
+ if (jiffies - fepriv->release_jiffies > dvb_shutdown_timeout * HZ)
return 1;
return 0;
static int dvb_frontend_should_wakeup(struct dvb_frontend *fe)
{
- if (fe->wakeup) {
- fe->wakeup = 0;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+
+ if (fepriv->wakeup) {
+ fepriv->wakeup = 0;
return 1;
}
return dvb_frontend_is_exiting(fe);
static void dvb_frontend_wakeup(struct dvb_frontend *fe)
{
- fe->wakeup = 1;
- wake_up_interruptible(&fe->wait_queue);
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+
+ fepriv->wakeup = 1;
+ wake_up_interruptible(&fepriv->wait_queue);
}
/*
static int dvb_frontend_thread (void *data)
{
struct dvb_frontend *fe = (struct dvb_frontend *) data;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
unsigned long timeout;
char name [15];
int quality = 0, delay = 3*HZ;
 	sigfillset (&current->blocked);
unlock_kernel ();
- fe->status = 0;
+ fepriv->status = 0;
dvb_frontend_init (fe);
- fe->wakeup = 0;
+ fepriv->wakeup = 0;
while (1) {
- up (&fe->sem); /* is locked when we enter the thread... */
+ up(&fepriv->sem); /* is locked when we enter the thread... */
- timeout = wait_event_interruptible_timeout(fe->wait_queue,
+ timeout = wait_event_interruptible_timeout(fepriv->wait_queue,
dvb_frontend_should_wakeup(fe),
delay);
if (0 != dvb_frontend_is_exiting (fe)) {
if (current->flags & PF_FREEZE)
refrigerator(PF_FREEZE);
- if (down_interruptible (&fe->sem))
+ if (down_interruptible(&fepriv->sem))
break;
/* if we've got no parameters, just keep idling */
- if (fe->state & FESTATE_IDLE) {
+ if (fepriv->state & FESTATE_IDLE) {
delay = 3*HZ;
quality = 0;
continue;
}
-retune:
/* get the frontend status */
- if (fe->state & FESTATE_RETUNE) {
+ if (fepriv->state & FESTATE_RETUNE) {
s = 0;
} else {
if (fe->ops->read_status)
fe->ops->read_status(fe, &s);
- if (s != fe->status) {
+ if (s != fepriv->status) {
dvb_frontend_add_event (fe, s);
- fe->status = s;
+ fepriv->status = s;
}
}
/* if we're not tuned, and we have a lock, move to the TUNED state */
- if ((fe->state & FESTATE_WAITFORLOCK) && (s & FE_HAS_LOCK)) {
- update_delay(&quality, &delay, fe->min_delay, s & FE_HAS_LOCK);
- fe->state = FESTATE_TUNED;
+ if ((fepriv->state & FESTATE_WAITFORLOCK) && (s & FE_HAS_LOCK)) {
+ update_delay(&quality, &delay, fepriv->min_delay, s & FE_HAS_LOCK);
+ fepriv->state = FESTATE_TUNED;
/* if we're tuned, then we have determined the correct inversion */
if ((!(fe->ops->info.caps & FE_CAN_INVERSION_AUTO)) &&
- (fe->parameters.inversion == INVERSION_AUTO)) {
- fe->parameters.inversion = fe->inversion;
+ (fepriv->parameters.inversion == INVERSION_AUTO)) {
+ fepriv->parameters.inversion = fepriv->inversion;
}
continue;
}
/* if we are tuned already, check we're still locked */
- if (fe->state & FESTATE_TUNED) {
- update_delay(&quality, &delay, fe->min_delay, s & FE_HAS_LOCK);
+ if (fepriv->state & FESTATE_TUNED) {
+ update_delay(&quality, &delay, fepriv->min_delay, s & FE_HAS_LOCK);
/* we're tuned, and the lock is still good... */
if (s & FE_HAS_LOCK)
else {
/* if we _WERE_ tuned, but now don't have a lock,
* need to zigzag */
- fe->state = FESTATE_ZIGZAG_FAST;
- fe->started_auto_step = fe->auto_step;
+ fepriv->state = FESTATE_ZIGZAG_FAST;
+ fepriv->started_auto_step = fepriv->auto_step;
check_wrapped = 0;
}
}
/* don't actually do anything if we're in the LOSTLOCK state,
* the frontend is set to FE_CAN_RECOVER, and the max_drift is 0 */
- if ((fe->state & FESTATE_LOSTLOCK) &&
- (fe->ops->info.caps & FE_CAN_RECOVER) && (fe->max_drift == 0)) {
- update_delay(&quality, &delay, fe->min_delay, s & FE_HAS_LOCK);
+ if ((fepriv->state & FESTATE_LOSTLOCK) &&
+ (fe->ops->info.caps & FE_CAN_RECOVER) && (fepriv->max_drift == 0)) {
+ update_delay(&quality, &delay, fepriv->min_delay, s & FE_HAS_LOCK);
continue;
}
/* don't do anything if we're in the DISEQC state, since this
* might be someone with a motorized dish controlled by DISEQC.
* If it's actually a re-tune, there will be a SET_FRONTEND soon enough. */
- if (fe->state & FESTATE_DISEQC) {
- update_delay(&quality, &delay, fe->min_delay, s & FE_HAS_LOCK);
+ if (fepriv->state & FESTATE_DISEQC) {
+ update_delay(&quality, &delay, fepriv->min_delay, s & FE_HAS_LOCK);
continue;
}
/* if we're in the RETUNE state, set everything up for a brand
* new scan, keeping the current inversion setting, as the next
* tune is _very_ likely to require the same */
- if (fe->state & FESTATE_RETUNE) {
- fe->lnb_drift = 0;
- fe->auto_step = 0;
- fe->auto_sub_step = 0;
- fe->started_auto_step = 0;
+ if (fepriv->state & FESTATE_RETUNE) {
+ fepriv->lnb_drift = 0;
+ fepriv->auto_step = 0;
+ fepriv->auto_sub_step = 0;
+ fepriv->started_auto_step = 0;
check_wrapped = 0;
}
/* fast zigzag. */
- if ((fe->state & FESTATE_SEARCHING_FAST) || (fe->state & FESTATE_RETUNE)) {
- delay = fe->min_delay;
+ if ((fepriv->state & FESTATE_SEARCHING_FAST) || (fepriv->state & FESTATE_RETUNE)) {
+ delay = fepriv->min_delay;
/* perform a tune */
if (dvb_frontend_autotune(fe, check_wrapped)) {
/* OK, if we've run out of trials at the fast speed.
* Drop back to slow for the _next_ attempt */
- fe->state = FESTATE_SEARCHING_SLOW;
- fe->started_auto_step = fe->auto_step;
+ fepriv->state = FESTATE_SEARCHING_SLOW;
+ fepriv->started_auto_step = fepriv->auto_step;
continue;
}
check_wrapped = 1;
* This ensures we cannot return from an
* FE_SET_FRONTEND ioctl before the first frontend tune
* occurs */
- if (fe->state & FESTATE_RETUNE) {
- fe->state = FESTATE_TUNING_FAST;
- goto retune;
+ if (fepriv->state & FESTATE_RETUNE) {
+ fepriv->state = FESTATE_TUNING_FAST;
}
}
/* slow zigzag */
- if (fe->state & FESTATE_SEARCHING_SLOW) {
- update_delay(&quality, &delay, fe->min_delay, s & FE_HAS_LOCK);
+ if (fepriv->state & FESTATE_SEARCHING_SLOW) {
+ update_delay(&quality, &delay, fepriv->min_delay, s & FE_HAS_LOCK);
/* Note: don't bother checking for wrapping; we stay in this
* state until we get a lock */
fe->ops->sleep(fe);
}
- fe->thread_pid = 0;
+ fepriv->thread_pid = 0;
mb();
dvb_frontend_wakeup(fe);
static void dvb_frontend_stop(struct dvb_frontend *fe)
{
unsigned long ret;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
dprintk ("%s\n", __FUNCTION__);
- fe->exit = 1;
+ fepriv->exit = 1;
mb();
- if (!fe->thread_pid)
+ if (!fepriv->thread_pid)
return;
/* check if the thread is really alive */
- if (kill_proc(fe->thread_pid, 0, 1) == -ESRCH) {
+ if (kill_proc(fepriv->thread_pid, 0, 1) == -ESRCH) {
printk("dvb_frontend_stop: thread PID %d already died\n",
- fe->thread_pid);
+ fepriv->thread_pid);
/* make sure the mutex was not held by the thread */
- init_MUTEX (&fe->sem);
+ init_MUTEX (&fepriv->sem);
return;
}
dvb_frontend_wakeup(fe);
/* wait until the frontend thread has exited */
- ret = wait_event_interruptible(fe->wait_queue,0 == fe->thread_pid);
+ ret = wait_event_interruptible(fepriv->wait_queue,0 == fepriv->thread_pid);
if (-ERESTARTSYS != ret) {
- fe->state = FESTATE_IDLE;
+ fepriv->state = FESTATE_IDLE;
return;
}
- fe->state = FESTATE_IDLE;
+ fepriv->state = FESTATE_IDLE;
/* paranoia check in case a signal arrived */
- if (fe->thread_pid)
+ if (fepriv->thread_pid)
printk("dvb_frontend_stop: warning: thread PID %d won't exit\n",
- fe->thread_pid);
+ fepriv->thread_pid);
}
static int dvb_frontend_start(struct dvb_frontend *fe)
{
int ret;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
dprintk ("%s\n", __FUNCTION__);
- if (fe->thread_pid) {
- if (!fe->exit)
+ if (fepriv->thread_pid) {
+ if (!fepriv->exit)
return 0;
else
dvb_frontend_stop (fe);
if (signal_pending(current))
return -EINTR;
- if (down_interruptible (&fe->sem))
+ if (down_interruptible (&fepriv->sem))
return -EINTR;
- fe->state = FESTATE_IDLE;
- fe->exit = 0;
- fe->thread_pid = 0;
+ fepriv->state = FESTATE_IDLE;
+ fepriv->exit = 0;
+ fepriv->thread_pid = 0;
mb();
ret = kernel_thread (dvb_frontend_thread, fe, 0);
+
if (ret < 0) {
printk("dvb_frontend_start: failed to start kernel_thread (%d)\n", ret);
- up(&fe->sem);
+ up(&fepriv->sem);
return ret;
}
- fe->thread_pid = ret;
+ fepriv->thread_pid = ret;
return 0;
}
{
struct dvb_device *dvbdev = file->private_data;
struct dvb_frontend *fe = dvbdev->priv;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
int err = -EOPNOTSUPP;
dprintk ("%s\n", __FUNCTION__);
- if (!fe || fe->exit)
+ if (!fe || fepriv->exit)
return -ENODEV;
if ((file->f_flags & O_ACCMODE) == O_RDONLY &&
cmd == FE_DISEQC_RECV_SLAVE_REPLY))
return -EPERM;
- if (down_interruptible (&fe->sem))
+ if (down_interruptible (&fepriv->sem))
return -ERESTARTSYS;
switch (cmd) {
case FE_DISEQC_RESET_OVERLOAD:
if (fe->ops->diseqc_reset_overload) {
err = fe->ops->diseqc_reset_overload(fe);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
case FE_DISEQC_SEND_MASTER_CMD:
if (fe->ops->diseqc_send_master_cmd) {
err = fe->ops->diseqc_send_master_cmd(fe, (struct dvb_diseqc_master_cmd*) parg);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
case FE_DISEQC_SEND_BURST:
if (fe->ops->diseqc_send_burst) {
err = fe->ops->diseqc_send_burst(fe, (fe_sec_mini_cmd_t) parg);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
case FE_SET_TONE:
if (fe->ops->set_tone) {
err = fe->ops->set_tone(fe, (fe_sec_tone_mode_t) parg);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
case FE_SET_VOLTAGE:
if (fe->ops->set_voltage) {
err = fe->ops->set_voltage(fe, (fe_sec_voltage_t) parg);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
case FE_DISHNETWORK_SEND_LEGACY_CMD:
if (fe->ops->dishnetwork_send_legacy_command) {
err = fe->ops->dishnetwork_send_legacy_command(fe, (unsigned int) parg);
- fe->state = FESTATE_DISEQC;
- fe->status = 0;
+ fepriv->state = FESTATE_DISEQC;
+ fepriv->status = 0;
}
break;
break;
case FE_ENABLE_HIGH_LNB_VOLTAGE:
- if (fe->ops->enable_high_lnb_voltage);
+ if (fe->ops->enable_high_lnb_voltage)
err = fe->ops->enable_high_lnb_voltage(fe, (int) parg);
break;
case FE_SET_FRONTEND: {
struct dvb_frontend_tune_settings fetunesettings;
- memcpy (&fe->parameters, parg,
+ memcpy (&fepriv->parameters, parg,
sizeof (struct dvb_frontend_parameters));
memset(&fetunesettings, 0, sizeof(struct dvb_frontend_tune_settings));
/* force auto frequency inversion if requested */
if (dvb_force_auto_inversion) {
- fe->parameters.inversion = INVERSION_AUTO;
+ fepriv->parameters.inversion = INVERSION_AUTO;
fetunesettings.parameters.inversion = INVERSION_AUTO;
}
if (fe->ops->info.type == FE_OFDM) {
/* without hierarchical coding code_rate_LP is irrelevant,
* so we tolerate the otherwise invalid FEC_NONE setting */
- if (fe->parameters.u.ofdm.hierarchy_information == HIERARCHY_NONE &&
- fe->parameters.u.ofdm.code_rate_LP == FEC_NONE)
- fe->parameters.u.ofdm.code_rate_LP = FEC_AUTO;
+ if (fepriv->parameters.u.ofdm.hierarchy_information == HIERARCHY_NONE &&
+ fepriv->parameters.u.ofdm.code_rate_LP == FEC_NONE)
+ fepriv->parameters.u.ofdm.code_rate_LP = FEC_AUTO;
}
/* get frontend-specific tuning settings */
if (fe->ops->get_tune_settings && (fe->ops->get_tune_settings(fe, &fetunesettings) == 0)) {
- fe->min_delay = (fetunesettings.min_delay_ms * HZ) / 1000;
- fe->max_drift = fetunesettings.max_drift;
- fe->step_size = fetunesettings.step_size;
+ fepriv->min_delay = (fetunesettings.min_delay_ms * HZ) / 1000;
+ fepriv->max_drift = fetunesettings.max_drift;
+ fepriv->step_size = fetunesettings.step_size;
} else {
/* default values */
switch(fe->ops->info.type) {
case FE_QPSK:
- fe->min_delay = HZ/20;
- fe->step_size = fe->parameters.u.qpsk.symbol_rate / 16000;
- fe->max_drift = fe->parameters.u.qpsk.symbol_rate / 2000;
+ fepriv->min_delay = HZ/20;
+ fepriv->step_size = fepriv->parameters.u.qpsk.symbol_rate / 16000;
+ fepriv->max_drift = fepriv->parameters.u.qpsk.symbol_rate / 2000;
break;
case FE_QAM:
- fe->min_delay = HZ/20;
- fe->step_size = 0; /* no zigzag */
- fe->max_drift = 0;
+ fepriv->min_delay = HZ/20;
+ fepriv->step_size = 0; /* no zigzag */
+ fepriv->max_drift = 0;
break;
case FE_OFDM:
- fe->min_delay = HZ/20;
- fe->step_size = fe->ops->info.frequency_stepsize * 2;
- fe->max_drift = (fe->ops->info.frequency_stepsize * 2) + 1;
+ fepriv->min_delay = HZ/20;
+ fepriv->step_size = fe->ops->info.frequency_stepsize * 2;
+ fepriv->max_drift = (fe->ops->info.frequency_stepsize * 2) + 1;
break;
case FE_ATSC:
printk("dvb-core: FE_ATSC not handled yet.\n");
}
}
if (dvb_override_tune_delay > 0)
- fe->min_delay = (dvb_override_tune_delay * HZ) / 1000;
+ fepriv->min_delay = (dvb_override_tune_delay * HZ) / 1000;
- fe->state = FESTATE_RETUNE;
+ fepriv->state = FESTATE_RETUNE;
dvb_frontend_wakeup(fe);
dvb_frontend_add_event (fe, 0);
- fe->status = 0;
+ fepriv->status = 0;
err = 0;
break;
}
case FE_GET_FRONTEND:
if (fe->ops->get_frontend) {
- memcpy (parg, &fe->parameters, sizeof (struct dvb_frontend_parameters));
+ memcpy (parg, &fepriv->parameters, sizeof (struct dvb_frontend_parameters));
err = fe->ops->get_frontend(fe, (struct dvb_frontend_parameters*) parg);
}
break;
};
- up (&fe->sem);
+ up (&fepriv->sem);
return err;
}
{
struct dvb_device *dvbdev = file->private_data;
struct dvb_frontend *fe = dvbdev->priv;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
dprintk ("%s\n", __FUNCTION__);
- poll_wait (file, &fe->events.wait_queue, wait);
+ poll_wait (file, &fepriv->events.wait_queue, wait);
- if (fe->events.eventw != fe->events.eventr)
+ if (fepriv->events.eventw != fepriv->events.eventr)
return (POLLIN | POLLRDNORM | POLLPRI);
return 0;
{
struct dvb_device *dvbdev = file->private_data;
struct dvb_frontend *fe = dvbdev->priv;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
int ret;
dprintk ("%s\n", __FUNCTION__);
dvb_generic_release (inode, file);
/* empty event queue */
- fe->events.eventr = fe->events.eventw = 0;
+ fepriv->events.eventr = fepriv->events.eventw = 0;
}
return ret;
{
struct dvb_device *dvbdev = file->private_data;
struct dvb_frontend *fe = dvbdev->priv;
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
dprintk ("%s\n", __FUNCTION__);
if ((file->f_flags & O_ACCMODE) != O_RDONLY)
- fe->release_jiffies = jiffies;
+ fepriv->release_jiffies = jiffies;
return dvb_generic_release (inode, file);
}
int dvb_register_frontend(struct dvb_adapter* dvb,
struct dvb_frontend* fe)
{
+ struct dvb_frontend_private *fepriv;
static const struct dvb_device dvbdev_template = {
.users = ~0,
.writers = 1,
if (down_interruptible (&frontend_mutex))
return -ERESTARTSYS;
- init_MUTEX (&fe->sem);
- init_waitqueue_head (&fe->wait_queue);
- init_waitqueue_head (&fe->events.wait_queue);
- init_MUTEX (&fe->events.sem);
- fe->events.eventw = fe->events.eventr = 0;
- fe->events.overflow = 0;
+ fe->frontend_priv = kmalloc(sizeof(struct dvb_frontend_private), GFP_KERNEL);
+ if (fe->frontend_priv == NULL) {
+ up(&frontend_mutex);
+ return -ENOMEM;
+ }
+ fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
+ memset(fe->frontend_priv, 0, sizeof(struct dvb_frontend_private));
+
+ init_MUTEX (&fepriv->sem);
+ init_waitqueue_head (&fepriv->wait_queue);
+ init_waitqueue_head (&fepriv->events.wait_queue);
+ init_MUTEX (&fepriv->events.sem);
fe->dvb = dvb;
- fe->inversion = INVERSION_OFF;
+ fepriv->inversion = INVERSION_OFF;
printk ("DVB: registering frontend %i (%s)...\n",
fe->dvb->num,
fe->ops->info.name);
- dvb_register_device (fe->dvb, &fe->dvbdev, &dvbdev_template,
+ dvb_register_device (fe->dvb, &fepriv->dvbdev, &dvbdev_template,
fe, DVB_DEVICE_FRONTEND);
up (&frontend_mutex);
int dvb_unregister_frontend(struct dvb_frontend* fe)
{
+ struct dvb_frontend_private *fepriv = (struct dvb_frontend_private*) fe->frontend_priv;
dprintk ("%s\n", __FUNCTION__);
down (&frontend_mutex);
- dvb_unregister_device (fe->dvbdev);
+ dvb_unregister_device (fepriv->dvbdev);
dvb_frontend_stop (fe);
if (fe->ops->release)
fe->ops->release(fe);
else
printk("dvb_frontend: Demodulator (%s) does not have a release callback!\n", fe->ops->info.name);
+ /* fe is invalid now */
+ if (fepriv)
+ kfree(fepriv);
up (&frontend_mutex);
return 0;
}
struct dvb_frontend_ops* ops;
struct dvb_adapter *dvb;
void* demodulator_priv;
-
- struct dvb_device *dvbdev;
- struct dvb_frontend_parameters parameters;
- struct dvb_fe_events events;
- struct semaphore sem;
- struct list_head list_head;
- wait_queue_head_t wait_queue;
- pid_t thread_pid;
- unsigned long release_jiffies;
- int state;
- int bending;
- int lnb_drift;
- int inversion;
- int auto_step;
- int auto_sub_step;
- int started_auto_step;
- int min_delay;
- int max_drift;
- int step_size;
- int exit;
- int wakeup;
- fe_status_t status;
+ void* frontend_priv;
};
extern int dvb_register_frontend(struct dvb_adapter* dvb,
obj-$(CONFIG_DVB_TDA80XX) += tda80xx.o
obj-$(CONFIG_DVB_TDA10021) += tda10021.o
obj-$(CONFIG_DVB_STV0297) += stv0297.o
+obj-$(CONFIG_DVB_NXT2002) += nxt2002.o
+
-#/*
+/*
Conexant cx22700 DVB OFDM demodulator driver
Copyright (C) 2001-2002 Convergence Integrated Media GmbH
#ifdef CONFIG_DVB_DIBCOM_DEBUG
static int debug;
-module_param(debug, int, 0x644);
+module_param(debug, int, 0644);
MODULE_PARM_DESC(debug, "set debugging level (1=info,2=i2c,4=srch (|-able)).");
#endif
#define deb_info(args...) dprintk(0x01,args)
return i2c_transfer(state->i2c,msg, 1) != 1 ? -EREMOTEIO : 0;
}
-int dib3000_init_pid_list(struct dib3000_state *state, int num)
-{
- int i;
- if (state != NULL) {
- state->pid_list = kmalloc(sizeof(struct dib3000_pid) * num,GFP_KERNEL);
- if (state->pid_list == NULL)
- return -ENOMEM;
-
- deb_info("initializing %d pids for the pid_list.\n",num);
- state->pid_list_lock = SPIN_LOCK_UNLOCKED;
- memset(state->pid_list,0,num*(sizeof(struct dib3000_pid)));
- for (i=0; i < num; i++) {
- state->pid_list[i].pid = 0;
- state->pid_list[i].active = 0;
- }
- state->feedcount = 0;
- } else
- return -EINVAL;
-
- return 0;
-}
-
-void dib3000_dealloc_pid_list(struct dib3000_state *state)
-{
- if (state != NULL && state->pid_list != NULL)
- kfree(state->pid_list);
-}
-
-/* fetch a pid from pid_list */
-int dib3000_get_pid_index(struct dib3000_pid pid_list[], int num_pids, int pid,
- spinlock_t *pid_list_lock,int onoff)
-{
- int i,ret = -1;
- unsigned long flags;
-
- spin_lock_irqsave(pid_list_lock,flags);
- for (i=0; i < num_pids; i++)
- if (onoff) {
- if (!pid_list[i].active) {
- pid_list[i].pid = pid;
- pid_list[i].active = 1;
- ret = i;
- break;
- }
- } else {
- if (pid_list[i].active && pid_list[i].pid == pid) {
- pid_list[i].pid = 0;
- pid_list[i].active = 0;
- ret = i;
- break;
- }
- }
-
- deb_info("setting pid: %5d %04x at index %d '%s'\n",pid,pid,ret,onoff ? "on" : "off");
-
- spin_unlock_irqrestore(pid_list_lock,flags);
- return ret;
-}
-
int dib3000_search_status(u16 irq,u16 lock)
{
if (irq & 0x02) {
EXPORT_SYMBOL(dib3000_read_reg);
EXPORT_SYMBOL(dib3000_write_reg);
-EXPORT_SYMBOL(dib3000_init_pid_list);
-EXPORT_SYMBOL(dib3000_dealloc_pid_list);
-EXPORT_SYMBOL(dib3000_get_pid_index);
EXPORT_SYMBOL(dib3000_search_status);
*
* DiBcom (http://www.dibcom.fr/)
*
- * Copyright (C) 2004 Patrick Boettcher (patrick.boettcher@desy.de)
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
*
* based on GPL code from DibCom, which has
*
#include "dvb_frontend.h"
#include "dib3000.h"
-/* info and err, taken from usb.h, if there is anything available like by default,
- * please change !
- */
-#define err(format, arg...) printk(KERN_ERR "%s: " format "\n" , __FILE__ , ## arg)
-#define info(format, arg...) printk(KERN_INFO "%s: " format "\n" , __FILE__ , ## arg)
-#define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n" , __FILE__ , ## arg)
-
-/* a PID for the pid_filter list, when in use */
-struct dib3000_pid
-{
- u16 pid;
- int active;
-};
+/* info and err, taken from usb.h, if there is anything available like by default. */
+#define err(format, arg...) printk(KERN_ERR "dib3000mX: " format "\n" , ## arg)
+#define info(format, arg...) printk(KERN_INFO "dib3000mX: " format "\n" , ## arg)
+#define warn(format, arg...) printk(KERN_WARNING "dib3000mX: " format "\n" , ## arg)
/* frontend state */
struct dib3000_state {
/* configuration settings */
struct dib3000_config config;
- spinlock_t pid_list_lock;
- struct dib3000_pid *pid_list;
-
- int feedcount;
-
struct dvb_frontend frontend;
int timing_offset;
int timing_offset_comp_done;
+
+ fe_bandwidth_t last_tuned_bw;
+ u32 last_tuned_freq;
};
/* commonly used methods by the dib3000mb/mc/p frontend */
extern int dib3000_read_reg(struct dib3000_state *state, u16 reg);
extern int dib3000_write_reg(struct dib3000_state *state, u16 reg, u16 val);
-extern int dib3000_init_pid_list(struct dib3000_state *state, int num);
-extern void dib3000_dealloc_pid_list(struct dib3000_state *state);
-extern int dib3000_get_pid_index(struct dib3000_pid pid_list[], int num_pids,
- int pid, spinlock_t *pid_list_lock,int onoff);
-
extern int dib3000_search_status(u16 irq,u16 lock);
/* handy shortcuts */
#define wr_foreach(a,v) { int i; \
if (sizeof(a) != sizeof(v)) \
- err("sizeof: %d %d is different",sizeof(a),sizeof(v));\
+ err("sizeof: %zu %zu is different",sizeof(a),sizeof(v));\
for (i=0; i < sizeof(a)/sizeof(u16); i++) \
wr(a[i],v[i]); \
}
#define DIB3000_DDS_INVERSION_OFF ( 0)
#define DIB3000_DDS_INVERSION_ON ( 1)
-#define DIB3000_TUNER_WRITE_ENABLE(a) (0xffff & (a << 7))
-#define DIB3000_TUNER_WRITE_DISABLE(a) (0xffff & ((a << 7) | (1 << 7)))
+#define DIB3000_TUNER_WRITE_ENABLE(a) (0xffff & (a << 8))
+#define DIB3000_TUNER_WRITE_DISABLE(a) (0xffff & ((a << 8) | (1 << 7)))
/* for auto search */
extern u16 dib3000_seq[2][2][2];
* public header file of the frontend drivers for mobile DVB-T demodulators
* DiBcom 3000-MB and DiBcom 3000-MC/P (http://www.dibcom.fr/)
*
- * Copyright (C) 2004 Patrick Boettcher (patrick.boettcher@desy.de)
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
*
* based on GPL code from DibCom, which has
*
/* the demodulator's i2c address */
u8 demod_address;
- /* The i2c address of the PLL */
- u8 pll_addr;
-
- /* PLL maintenance */
- int (*pll_init)(struct dvb_frontend *fe);
- int (*pll_set)(struct dvb_frontend *fe, struct dvb_frontend_parameters* params);
+ /* PLL maintenance and the i2c address of the PLL */
+ u8 (*pll_addr)(struct dvb_frontend *fe);
+ int (*pll_init)(struct dvb_frontend *fe, u8 pll_buf[5]);
+ int (*pll_set)(struct dvb_frontend *fe, struct dvb_frontend_parameters* params, u8 pll_buf[5]);
};
-struct dib3000_xfer_ops
+struct dib_fe_xfer_ops
{
/* pid and transfer handling is done in the demodulator */
int (*pid_parse)(struct dvb_frontend *fe, int onoff);
int (*fifo_ctrl)(struct dvb_frontend *fe, int onoff);
- int (*pid_ctrl)(struct dvb_frontend *fe, int pid, int onoff);
+ int (*pid_ctrl)(struct dvb_frontend *fe, int index, int pid, int onoff);
+ int (*tuner_pass_ctrl)(struct dvb_frontend *fe, int onoff, u8 pll_ctrl);
};
extern struct dvb_frontend* dib3000mb_attach(const struct dib3000_config* config,
- struct i2c_adapter* i2c, struct dib3000_xfer_ops *xfer_ops);
+ struct i2c_adapter* i2c, struct dib_fe_xfer_ops *xfer_ops);
extern struct dvb_frontend* dib3000mc_attach(const struct dib3000_config* config,
- struct i2c_adapter* i2c, struct dib3000_xfer_ops *xfer_ops);
+ struct i2c_adapter* i2c, struct dib_fe_xfer_ops *xfer_ops);
#endif // DIB3000_H
* Frontend driver for mobile DVB-T demodulator DiBcom 3000-MB
* DiBcom (http://www.dibcom.fr/)
*
- * Copyright (C) 2004 Patrick Boettcher (patrick.boettcher@desy.de)
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
*
* based on GPL code from DibCom, which has
*
#include <linux/init.h>
#include <linux/delay.h>
-#include "dvb_frontend.h"
#include "dib3000-common.h"
#include "dib3000mb_priv.h"
#include "dib3000.h"
#ifdef CONFIG_DVB_DIBCOM_DEBUG
static int debug;
-module_param(debug, int, 0x644);
+module_param(debug, int, 0644);
MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=setfe,8=getfe (|-able)).");
#endif
#define deb_info(args...) dprintk(0x01,args)
#define deb_setf(args...) dprintk(0x04,args)
#define deb_getf(args...) dprintk(0x08,args)
+static int dib3000mb_tuner_pass_ctrl(struct dvb_frontend *fe, int onoff, u8 pll_addr);
+
static int dib3000mb_get_frontend(struct dvb_frontend* fe,
struct dvb_frontend_parameters *fep);
int search_state,seq;
if (tuner) {
- wr(DIB3000MB_REG_TUNER,
- DIB3000_TUNER_WRITE_ENABLE(state->config.pll_addr));
- state->config.pll_set(fe, fep);
- wr(DIB3000MB_REG_TUNER,
- DIB3000_TUNER_WRITE_DISABLE(state->config.pll_addr));
+ dib3000mb_tuner_pass_ctrl(fe,1,state->config.pll_addr(fe));
+ state->config.pll_set(fe, fep, NULL);
+ dib3000mb_tuner_pass_ctrl(fe,0,state->config.pll_addr(fe));
deb_setf("bandwidth: ");
switch (ofdm->bandwidth) {
wr(DIB3000MB_REG_DATA_IN_DIVERSITY,DIB3000MB_DATA_DIVERSITY_IN_OFF);
if (state->config.pll_init) {
- wr(DIB3000MB_REG_TUNER,
- DIB3000_TUNER_WRITE_ENABLE(state->config.pll_addr));
- state->config.pll_init(fe);
- wr(DIB3000MB_REG_TUNER,
- DIB3000_TUNER_WRITE_DISABLE(state->config.pll_addr));
+ dib3000mb_tuner_pass_ctrl(fe,1,state->config.pll_addr(fe));
+ state->config.pll_init(fe,NULL);
+ dib3000mb_tuner_pass_ctrl(fe,0,state->config.pll_addr(fe));
}
return 0;
return 0;
dds_val = ((rd(DIB3000MB_REG_DDS_VALUE_MSB) & 0xff) << 16) + rd(DIB3000MB_REG_DDS_VALUE_LSB);
+ deb_getf("DDS_VAL: %x %x %x",dds_val, rd(DIB3000MB_REG_DDS_VALUE_MSB), rd(DIB3000MB_REG_DDS_VALUE_LSB));
if (dds_val < threshold)
inv_test1 = 0;
else if (dds_val == threshold)
inv_test1 = 2;
dds_val = ((rd(DIB3000MB_REG_DDS_FREQ_MSB) & 0xff) << 16) + rd(DIB3000MB_REG_DDS_FREQ_LSB);
+ deb_getf("DDS_FREQ: %x %x %x",dds_val, rd(DIB3000MB_REG_DDS_FREQ_MSB), rd(DIB3000MB_REG_DDS_FREQ_LSB));
if (dds_val < threshold)
inv_test2 = 0;
else if (dds_val == threshold)
}
/* pid filter and transfer stuff */
-static int dib3000mb_pid_control(struct dvb_frontend *fe,int pid,int onoff)
+static int dib3000mb_pid_control(struct dvb_frontend *fe,int index, int pid,int onoff)
{
struct dib3000_state *state = fe->demodulator_priv;
- int index = dib3000_get_pid_index(state->pid_list, DIB3000MB_NUM_PIDS, pid, &state->pid_list_lock,onoff);
pid = (onoff ? pid | DIB3000_ACTIVATE_PID_FILTERING : 0);
-
- if (index >= 0) {
wr(index+DIB3000MB_REG_FIRST_PID,pid);
- } else {
- err("no more pids for filtering.");
- return -ENOMEM;
- }
return 0;
}
return 0;
}
+static int dib3000mb_tuner_pass_ctrl(struct dvb_frontend *fe, int onoff, u8 pll_addr)
+{
+ struct dib3000_state *state = (struct dib3000_state*) fe->demodulator_priv;
+ if (onoff) {
+ wr(DIB3000MB_REG_TUNER, DIB3000_TUNER_WRITE_ENABLE(pll_addr));
+ } else {
+ wr(DIB3000MB_REG_TUNER, DIB3000_TUNER_WRITE_DISABLE(pll_addr));
+ }
+ return 0;
+}
+
static struct dvb_frontend_ops dib3000mb_ops;
struct dvb_frontend* dib3000mb_attach(const struct dib3000_config* config,
- struct i2c_adapter* i2c, struct dib3000_xfer_ops *xfer_ops)
+ struct i2c_adapter* i2c, struct dib_fe_xfer_ops *xfer_ops)
{
struct dib3000_state* state = NULL;
if (rd(DIB3000_REG_DEVICE_ID) != DIB3000MB_DEVICE_ID)
goto error;
- if (dib3000_init_pid_list(state,DIB3000MB_NUM_PIDS))
- goto error;
-
/* create dvb_frontend */
state->frontend.ops = &state->ops;
state->frontend.demodulator_priv = state;
xfer_ops->pid_parse = dib3000mb_pid_parse;
xfer_ops->fifo_ctrl = dib3000mb_fifo_control;
xfer_ops->pid_ctrl = dib3000mb_pid_control;
+ xfer_ops->tuner_pass_ctrl = dib3000mb_tuner_pass_ctrl;
return &state->frontend;
FE_CAN_QPSK | FE_CAN_QAM_16 | FE_CAN_QAM_64 | FE_CAN_QAM_AUTO |
FE_CAN_TRANSMISSION_MODE_AUTO |
FE_CAN_GUARD_INTERVAL_AUTO |
+ FE_CAN_RECOVER |
FE_CAN_HIERARCHY_AUTO,
},
* Frontend driver for mobile DVB-T demodulator DiBcom 3000-MC/P
* DiBcom (http://www.dibcom.fr/)
*
- * Copyright (C) 2004 Patrick Boettcher (patrick.boettcher@desy.de)
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
*
- * based on GPL code from DibCom, which has
+ * based on GPL code from DiBCom, which has
*
* Copyright (C) 2004 Amaury Demol for DiBcom (ademol@dibcom.fr)
*
#include <linux/init.h>
#include <linux/delay.h>
-#include "dvb_frontend.h"
#include "dib3000-common.h"
#include "dib3000mc_priv.h"
#include "dib3000.h"
#ifdef CONFIG_DVB_DIBCOM_DEBUG
static int debug;
-module_param(debug, int, 0x644);
-MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=setfe,8=getfe (|-able)).");
+module_param(debug, int, 0644);
+MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=setfe,8=getfe,16=stat (|-able)).");
#endif
#define deb_info(args...) dprintk(0x01,args)
#define deb_xfer(args...) dprintk(0x02,args)
#define deb_setf(args...) dprintk(0x04,args)
#define deb_getf(args...) dprintk(0x08,args)
+#define deb_stat(args...) dprintk(0x10,args)
+static int dib3000mc_tuner_pass_ctrl(struct dvb_frontend *fe, int onoff, u8 pll_addr);
static int dib3000mc_set_impulse_noise(struct dib3000_state * state, int mode,
fe_transmit_mode_t transmission_mode, fe_bandwidth_t bandwidth)
return 0;
}
-static int dib3000mc_get_frontend(struct dvb_frontend* fe,
- struct dvb_frontend_parameters *fep);
+static int dib3000mc_set_adp_cfg(struct dib3000_state *state, fe_modulation_t con)
+{
+ switch (con) {
+ case QAM_64:
+ wr_foreach(dib3000mc_reg_adp_cfg,dib3000mc_adp_cfg[2]);
+ break;
+ case QAM_16:
+ wr_foreach(dib3000mc_reg_adp_cfg,dib3000mc_adp_cfg[1]);
+ break;
+ case QPSK:
+ wr_foreach(dib3000mc_reg_adp_cfg,dib3000mc_adp_cfg[0]);
+ break;
+ case QAM_AUTO:
+ break;
+ default:
+ warn("unkown constellation.");
+ break;
+ }
+ return 0;
+}
-static int dib3000mc_set_frontend(struct dvb_frontend* fe,
- struct dvb_frontend_parameters *fep, int tuner)
+static int dib3000mc_set_general_cfg(struct dib3000_state *state, struct dvb_frontend_parameters *fep, int *auto_val)
{
- struct dib3000_state* state = (struct dib3000_state*) fe->demodulator_priv;
struct dvb_ofdm_parameters *ofdm = &fep->u.ofdm;
fe_code_rate_t fe_cr = FEC_NONE;
- int search_state, seq;
- u16 val;
u8 fft=0, guard=0, qam=0, alpha=0, sel_hp=0, cr=0, hrch=0;
-
- if (tuner) {
- wr(DIB3000MC_REG_TUNER,
- DIB3000_TUNER_WRITE_ENABLE(state->config.pll_addr));
- state->config.pll_set(fe, fep);
- wr(DIB3000MC_REG_TUNER,
- DIB3000_TUNER_WRITE_DISABLE(state->config.pll_addr));
- }
-
- dib3000mc_set_timing(state,0,ofdm->transmission_mode,ofdm->bandwidth);
- dib3000mc_init_auto_scan(state, ofdm->bandwidth, 0);
-
- wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_AGC);
- wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_OFF);
-
-/* Default cfg isi offset adp */
- wr_foreach(dib3000mc_reg_offset,dib3000mc_offset[0]);
-
- wr(DIB3000MC_REG_ISI,DIB3000MC_ISI_DEFAULT | DIB3000MC_ISI_INHIBIT);
- wr_foreach(dib3000mc_reg_adp_cfg,dib3000mc_adp_cfg[1]);
- wr(DIB3000MC_REG_UNK_133,DIB3000MC_UNK_133);
-
- wr_foreach(dib3000mc_reg_bandwidth_general,dib3000mc_bandwidth_general);
- if (ofdm->bandwidth == BANDWIDTH_8_MHZ) {
- wr_foreach(dib3000mc_reg_bw,dib3000mc_bw[3]);
- } else {
- wr_foreach(dib3000mc_reg_bw,dib3000mc_bw[0]);
- }
+ int seq;
switch (ofdm->transmission_mode) {
case TRANSMISSION_MODE_2K: fft = DIB3000_TRANSMISSION_MODE_2K; break;
case INVERSION_OFF:
wr(DIB3000MC_REG_SET_DDS_FREQ_MSB,DIB3000MC_DDS_FREQ_MSB_INV_OFF);
break;
- case INVERSION_AUTO:
- break;
+ case INVERSION_AUTO: /* fall through */
case INVERSION_ON:
wr(DIB3000MC_REG_SET_DDS_FREQ_MSB,DIB3000MC_DDS_FREQ_MSB_INV_ON);
break;
deb_setf("seq? %d\n", seq);
wr(DIB3000MC_REG_SEQ_TPS,DIB3000MC_SEQ_TPS(seq,1));
-
- dib3000mc_set_impulse_noise(state,0,ofdm->constellation,ofdm->bandwidth);
-
- val = rd(DIB3000MC_REG_DEMOD_PARM);
- wr(DIB3000MC_REG_DEMOD_PARM,val|DIB3000MC_DEMOD_RST_DEMOD_ON);
- wr(DIB3000MC_REG_DEMOD_PARM,val);
-
- msleep(70);
-
- wr_foreach(dib3000mc_reg_agc_bandwidth, dib3000mc_agc_bandwidth);
-
- /* something has to be auto searched */
- if (ofdm->constellation == QAM_AUTO ||
+ *auto_val = ofdm->constellation == QAM_AUTO ||
ofdm->hierarchy_information == HIERARCHY_AUTO ||
ofdm->guard_interval == GUARD_INTERVAL_AUTO ||
ofdm->transmission_mode == TRANSMISSION_MODE_AUTO ||
fe_cr == FEC_AUTO ||
- fep->inversion == INVERSION_AUTO
- ) {
- int as_count=0;
-
- deb_setf("autosearch enabled.\n");
-
- val = rd(DIB3000MC_REG_DEMOD_PARM);
- wr(DIB3000MC_REG_DEMOD_PARM,val | DIB3000MC_DEMOD_RST_AUTO_SRCH_ON);
- wr(DIB3000MC_REG_DEMOD_PARM,val);
-
- while ((search_state = dib3000_search_status(
- rd(DIB3000MC_REG_AS_IRQ),1)) < 0 && as_count++ < 100)
- msleep(10);
-
- deb_info("search_state after autosearch %d after %d checks\n",search_state,as_count);
-
- if (search_state == 1) {
- struct dvb_frontend_parameters feps;
- feps.u.ofdm.bandwidth = ofdm->bandwidth; /* bw is not auto searched */;
- if (dib3000mc_get_frontend(fe, &feps) == 0) {
- deb_setf("reading tuning data from frontend succeeded.\n");
- return dib3000mc_set_frontend(fe, &feps, 0);
- }
- }
- } else {
- wr(DIB3000MC_REG_ISI,DIB3000MC_ISI_DEFAULT|DIB3000MC_ISI_ACTIVATE);
- wr_foreach(dib3000mc_reg_adp_cfg,dib3000mc_adp_cfg[qam]);
- /* set_offset_cfg */
- wr_foreach(dib3000mc_reg_offset,
- dib3000mc_offset[(ofdm->transmission_mode == TRANSMISSION_MODE_8K)+1]);
-
-// dib3000mc_set_timing(1,ofdm->transmission_mode,ofdm->bandwidth);
-
-// wr(DIB3000MC_REG_LOCK_MASK,DIB3000MC_ACTIVATE_LOCK_MASK); /* activates some locks if needed */
-
-/* set_or(DIB3000MC_REG_DEMOD_PARM,DIB3000MC_DEMOD_RST_AUTO_SRCH_ON);
- set_or(DIB3000MC_REG_DEMOD_PARM,DIB3000MC_DEMOD_RST_AUTO_SRCH_OFF);
- wr(DIB3000MC_REG_RESTART_VIT,DIB3000MC_RESTART_VIT_ON);
- wr(DIB3000MC_REG_RESTART_VIT,DIB3000MC_RESTART_VIT_OFF);*/
- }
-
- return 0;
-}
-
-
-static int dib3000mc_fe_init(struct dvb_frontend* fe, int mobile_mode)
-{
- struct dib3000_state* state = (struct dib3000_state*) fe->demodulator_priv;
-
- state->timing_offset = 0;
- state->timing_offset_comp_done = 0;
-
- wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_DIV_OUT_ON);
- wr(DIB3000MC_REG_OUTMODE,DIB3000MC_OM_PAR_CONT_CLK);
- wr(DIB3000MC_REG_RST_I2C_ADDR,
- DIB3000MC_DEMOD_ADDR(state->config.demod_address) |
- DIB3000MC_DEMOD_ADDR_ON);
-
- wr(DIB3000MC_REG_RST_I2C_ADDR,
- DIB3000MC_DEMOD_ADDR(state->config.demod_address));
-
- wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_CONFIG);
- wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_OFF);
-
- wr(DIB3000MC_REG_CLK_CFG_1,DIB3000MC_CLK_CFG_1_POWER_UP);
- wr(DIB3000MC_REG_CLK_CFG_2,DIB3000MC_CLK_CFG_2_PUP_MOBILE);
- wr(DIB3000MC_REG_CLK_CFG_3,DIB3000MC_CLK_CFG_3_POWER_UP);
- wr(DIB3000MC_REG_CLK_CFG_7,DIB3000MC_CLK_CFG_7_INIT);
-
- wr(DIB3000MC_REG_RST_UNC,DIB3000MC_RST_UNC_OFF);
- wr(DIB3000MC_REG_UNK_19,DIB3000MC_UNK_19);
-
- wr(33,5);
- wr(36,81);
- wr(DIB3000MC_REG_UNK_88,DIB3000MC_UNK_88);
-
- wr(DIB3000MC_REG_UNK_99,DIB3000MC_UNK_99);
- wr(DIB3000MC_REG_UNK_111,DIB3000MC_UNK_111_PH_N_MODE_0); /* phase noise algo off */
-
- /* mobile mode - portable reception */
- wr_foreach(dib3000mc_reg_mobile_mode,dib3000mc_mobile_mode[1]);
-
-/* TUNER_PANASONIC_ENV57H12D5: */
- wr_foreach(dib3000mc_reg_agc_bandwidth,dib3000mc_agc_bandwidth);
- wr_foreach(dib3000mc_reg_agc_bandwidth_general,dib3000mc_agc_bandwidth_general);
- wr_foreach(dib3000mc_reg_agc,dib3000mc_agc_tuner[1]);
-
- wr(DIB3000MC_REG_UNK_110,DIB3000MC_UNK_110);
- wr(26,0x6680);
- wr(DIB3000MC_REG_UNK_1,DIB3000MC_UNK_1);
- wr(DIB3000MC_REG_UNK_2,DIB3000MC_UNK_2);
- wr(DIB3000MC_REG_UNK_3,DIB3000MC_UNK_3);
- wr(DIB3000MC_REG_SEQ_TPS,DIB3000MC_SEQ_TPS_DEFAULT);
-
- wr_foreach(dib3000mc_reg_bandwidth_general,dib3000mc_bandwidth_general);
- wr_foreach(dib3000mc_reg_bandwidth,dib3000mc_bandwidth_8mhz);
-
- wr(DIB3000MC_REG_UNK_4,DIB3000MC_UNK_4);
-
- wr(DIB3000MC_REG_SET_DDS_FREQ_MSB,DIB3000MC_DDS_FREQ_MSB_INV_OFF);
- wr(DIB3000MC_REG_SET_DDS_FREQ_LSB,DIB3000MC_DDS_FREQ_LSB);
-
- dib3000mc_set_timing(state,0,TRANSMISSION_MODE_2K,BANDWIDTH_8_MHZ);
-// wr_foreach(dib3000mc_reg_timing_freq,dib3000mc_timing_freq[3]);
-
- wr(DIB3000MC_REG_UNK_120,DIB3000MC_UNK_120);
- wr(DIB3000MC_REG_UNK_134,DIB3000MC_UNK_134);
- wr(DIB3000MC_REG_FEC_CFG,DIB3000MC_FEC_CFG);
-
- dib3000mc_set_impulse_noise(state,0,TRANSMISSION_MODE_8K,BANDWIDTH_8_MHZ);
-
-/* output mode control, just the MPEG2_SLAVE */
- set_or(DIB3000MC_REG_OUTMODE,DIB3000MC_OM_SLAVE);
- wr(DIB3000MC_REG_SMO_MODE,DIB3000MC_SMO_MODE_SLAVE);
- wr(DIB3000MC_REG_FIFO_THRESHOLD,DIB3000MC_FIFO_THRESHOLD_SLAVE);
- wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_SLAVE);
-
-/* MPEG2_PARALLEL_CONTINUOUS_CLOCK
- wr(DIB3000MC_REG_OUTMODE,
- DIB3000MC_SET_OUTMODE(DIB3000MC_OM_PAR_CONT_CLK,
- rd(DIB3000MC_REG_OUTMODE)));
-
- wr(DIB3000MC_REG_SMO_MODE,
- DIB3000MC_SMO_MODE_DEFAULT |
- DIB3000MC_SMO_MODE_188);
-
- wr(DIB3000MC_REG_FIFO_THRESHOLD,DIB3000MC_FIFO_THRESHOLD_DEFAULT);
- wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_DIV_OUT_ON);
-*/
-/* diversity */
- wr(DIB3000MC_REG_DIVERSITY1,DIB3000MC_DIVERSITY1_DEFAULT);
- wr(DIB3000MC_REG_DIVERSITY2,DIB3000MC_DIVERSITY2_DEFAULT);
-
- wr(DIB3000MC_REG_DIVERSITY3,DIB3000MC_DIVERSITY3_IN_OFF);
-
- set_or(DIB3000MC_REG_CLK_CFG_7,DIB3000MC_CLK_CFG_7_DIV_IN_OFF);
-
-
-/* if (state->config->pll_init) {
- wr(DIB3000MC_REG_TUNER,
- DIB3000_TUNER_WRITE_ENABLE(state->config->pll_addr));
- state->config->pll_init(fe);
- wr(DIB3000MC_REG_TUNER,
- DIB3000_TUNER_WRITE_DISABLE(state->config->pll_addr));
- }*/
+ fep->inversion == INVERSION_AUTO;
return 0;
}
if (!(rd(DIB3000MC_REG_LOCK_507) & DIB3000MC_LOCK_507))
return 0;
- dds_val = ((rd(DIB3000MC_REG_DDS_FREQ_MSB) & 0xff) << 16) + rd(DIB3000MC_REG_DDS_FREQ_LSB);
+ dds_val = (rd(DIB3000MC_REG_DDS_FREQ_MSB) << 16) + rd(DIB3000MC_REG_DDS_FREQ_LSB);
+ deb_getf("DDS_FREQ: %6x\n",dds_val);
if (dds_val < threshold)
inv_test1 = 0;
else if (dds_val == threshold)
else
inv_test1 = 2;
- dds_val = ((rd(DIB3000MC_REG_SET_DDS_FREQ_MSB) & 0xff) << 16) + rd(DIB3000MC_REG_SET_DDS_FREQ_LSB);
+ dds_val = (rd(DIB3000MC_REG_SET_DDS_FREQ_MSB) << 16) + rd(DIB3000MC_REG_SET_DDS_FREQ_LSB);
+ deb_getf("DDS_SET_FREQ: %6x\n",dds_val);
if (dds_val < threshold)
inv_test2 = 0;
else if (dds_val == threshold)
deb_getf("inversion %d %d, %d\n", inv_test2, inv_test1, fep->inversion);
+ fep->frequency = state->last_tuned_freq;
+ fep->u.ofdm.bandwidth= state->last_tuned_bw;
+
tps_val = rd(DIB3000MC_REG_TUNING_PARM);
switch (DIB3000MC_TP_QAM(tps_val)) {
default:
err("unexpected transmission mode return by TPS (%d)", tps_val);
break;
+ }
+ deb_getf("\n");
+
+ return 0;
+}
+
+static int dib3000mc_set_frontend(struct dvb_frontend* fe,
+ struct dvb_frontend_parameters *fep, int tuner)
+{
+ struct dib3000_state* state = (struct dib3000_state*) fe->demodulator_priv;
+ struct dvb_ofdm_parameters *ofdm = &fep->u.ofdm;
+ int search_state,auto_val;
+ u16 val;
+
+ if (tuner) { /* initial call from dvb */
+ dib3000mc_tuner_pass_ctrl(fe,1,state->config.pll_addr(fe));
+ state->config.pll_set(fe,fep,NULL);
+ dib3000mc_tuner_pass_ctrl(fe,0,state->config.pll_addr(fe));
+
+ state->last_tuned_freq = fep->frequency;
+ // if (!scanboost) {
+ dib3000mc_set_timing(state,0,ofdm->transmission_mode,ofdm->bandwidth);
+ dib3000mc_init_auto_scan(state, ofdm->bandwidth, 0);
+ state->last_tuned_bw = ofdm->bandwidth;
+
+ wr_foreach(dib3000mc_reg_agc_bandwidth,dib3000mc_agc_bandwidth);
+ wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_AGC);
+ wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_OFF);
+
+ /* Default cfg isi offset adp */
+ wr_foreach(dib3000mc_reg_offset,dib3000mc_offset[0]);
+
+ wr(DIB3000MC_REG_ISI,DIB3000MC_ISI_DEFAULT | DIB3000MC_ISI_INHIBIT);
+ dib3000mc_set_adp_cfg(state,ofdm->constellation);
+ wr(DIB3000MC_REG_UNK_133,DIB3000MC_UNK_133);
+
+ wr_foreach(dib3000mc_reg_bandwidth_general,dib3000mc_bandwidth_general);
+ /* power smoothing */
+ if (ofdm->bandwidth != BANDWIDTH_8_MHZ) {
+ wr_foreach(dib3000mc_reg_bw,dib3000mc_bw[0]);
+ } else {
+ wr_foreach(dib3000mc_reg_bw,dib3000mc_bw[3]);
+ }
+ auto_val = 0;
+ dib3000mc_set_general_cfg(state,fep,&auto_val);
+ dib3000mc_set_impulse_noise(state,0,ofdm->constellation,ofdm->bandwidth);
+
+ val = rd(DIB3000MC_REG_DEMOD_PARM);
+ wr(DIB3000MC_REG_DEMOD_PARM,val|DIB3000MC_DEMOD_RST_DEMOD_ON);
+ wr(DIB3000MC_REG_DEMOD_PARM,val);
+ // }
+ msleep(70);
+
+ /* something has to be auto searched */
+ if (auto_val) {
+ int as_count=0;
+
+ deb_setf("autosearch enabled.\n");
+
+ val = rd(DIB3000MC_REG_DEMOD_PARM);
+ wr(DIB3000MC_REG_DEMOD_PARM,val | DIB3000MC_DEMOD_RST_AUTO_SRCH_ON);
+ wr(DIB3000MC_REG_DEMOD_PARM,val);
+
+ while ((search_state = dib3000_search_status(
+ rd(DIB3000MC_REG_AS_IRQ),1)) < 0 && as_count++ < 100)
+ msleep(10);
+
+ deb_info("search_state after autosearch %d after %d checks\n",search_state,as_count);
+
+ if (search_state == 1) {
+ struct dvb_frontend_parameters feps;
+ if (dib3000mc_get_frontend(fe, &feps) == 0) {
+ deb_setf("reading tuning data from frontend succeeded.\n");
+ return dib3000mc_set_frontend(fe, &feps, 0);
+ }
+ }
+ } else {
+ dib3000mc_set_impulse_noise(state,0,ofdm->transmission_mode,ofdm->bandwidth);
+ wr(DIB3000MC_REG_ISI,DIB3000MC_ISI_DEFAULT|DIB3000MC_ISI_ACTIVATE);
+ dib3000mc_set_adp_cfg(state,ofdm->constellation);
+
+ /* set_offset_cfg */
+ wr_foreach(dib3000mc_reg_offset,
+ dib3000mc_offset[(ofdm->transmission_mode == TRANSMISSION_MODE_8K)+1]);
+ }
+ } else { /* second call, after autosearch (fka: set_WithKnownParams) */
+// dib3000mc_set_timing(state,1,ofdm->transmission_mode,ofdm->bandwidth);
+
+ auto_val = 0;
+ dib3000mc_set_general_cfg(state,fep,&auto_val);
+ if (auto_val)
+ deb_info("auto_val is true, even though an auto search was already performed.\n");
+
+ dib3000mc_set_impulse_noise(state,0,ofdm->constellation,ofdm->bandwidth);
+
+ val = rd(DIB3000MC_REG_DEMOD_PARM);
+ wr(DIB3000MC_REG_DEMOD_PARM,val | DIB3000MC_DEMOD_RST_AUTO_SRCH_ON);
+ wr(DIB3000MC_REG_DEMOD_PARM,val);
+
+ msleep(30);
+
+ wr(DIB3000MC_REG_ISI,DIB3000MC_ISI_DEFAULT|DIB3000MC_ISI_ACTIVATE);
+ dib3000mc_set_adp_cfg(state,ofdm->constellation);
+ wr_foreach(dib3000mc_reg_offset,
+ dib3000mc_offset[(ofdm->transmission_mode == TRANSMISSION_MODE_8K)+1]);
+
+
}
return 0;
}
+static int dib3000mc_fe_init(struct dvb_frontend* fe, int mobile_mode)
+{
+ struct dib3000_state *state;
+
+ deb_info("init start\n");
+
+ state = fe->demodulator_priv;
+ state->timing_offset = 0;
+ state->timing_offset_comp_done = 0;
+
+ wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_CONFIG);
+ wr(DIB3000MC_REG_RESTART,DIB3000MC_RESTART_OFF);
+ wr(DIB3000MC_REG_CLK_CFG_1,DIB3000MC_CLK_CFG_1_POWER_UP);
+ wr(DIB3000MC_REG_CLK_CFG_2,DIB3000MC_CLK_CFG_2_PUP_MOBILE);
+ wr(DIB3000MC_REG_CLK_CFG_3,DIB3000MC_CLK_CFG_3_POWER_UP);
+ wr(DIB3000MC_REG_CLK_CFG_7,DIB3000MC_CLK_CFG_7_INIT);
+
+ wr(DIB3000MC_REG_RST_UNC,DIB3000MC_RST_UNC_OFF);
+ wr(DIB3000MC_REG_UNK_19,DIB3000MC_UNK_19);
+
+ wr(33,5);
+ wr(36,81);
+ wr(DIB3000MC_REG_UNK_88,DIB3000MC_UNK_88);
+
+ wr(DIB3000MC_REG_UNK_99,DIB3000MC_UNK_99);
+ wr(DIB3000MC_REG_UNK_111,DIB3000MC_UNK_111_PH_N_MODE_0); /* phase noise algo off */
+
+ /* mobile mode - portable reception */
+ wr_foreach(dib3000mc_reg_mobile_mode,dib3000mc_mobile_mode[1]);
+
+/* TUNER_PANASONIC_ENV57H12D5: */
+ wr_foreach(dib3000mc_reg_agc_bandwidth,dib3000mc_agc_bandwidth);
+ wr_foreach(dib3000mc_reg_agc_bandwidth_general,dib3000mc_agc_bandwidth_general);
+ wr_foreach(dib3000mc_reg_agc,dib3000mc_agc_tuner[1]);
+
+ wr(DIB3000MC_REG_UNK_110,DIB3000MC_UNK_110);
+ wr(26,0x6680);
+ wr(DIB3000MC_REG_UNK_1,DIB3000MC_UNK_1);
+ wr(DIB3000MC_REG_UNK_2,DIB3000MC_UNK_2);
+ wr(DIB3000MC_REG_UNK_3,DIB3000MC_UNK_3);
+ wr(DIB3000MC_REG_SEQ_TPS,DIB3000MC_SEQ_TPS_DEFAULT);
+
+ wr_foreach(dib3000mc_reg_bandwidth,dib3000mc_bandwidth_8mhz);
+ wr_foreach(dib3000mc_reg_bandwidth_general,dib3000mc_bandwidth_general);
+
+ wr(DIB3000MC_REG_UNK_4,DIB3000MC_UNK_4);
+
+ wr(DIB3000MC_REG_SET_DDS_FREQ_MSB,DIB3000MC_DDS_FREQ_MSB_INV_OFF);
+ wr(DIB3000MC_REG_SET_DDS_FREQ_LSB,DIB3000MC_DDS_FREQ_LSB);
+
+ dib3000mc_set_timing(state,0,TRANSMISSION_MODE_8K,BANDWIDTH_8_MHZ);
+// wr_foreach(dib3000mc_reg_timing_freq,dib3000mc_timing_freq[3]);
+
+ wr(DIB3000MC_REG_UNK_120,DIB3000MC_UNK_120);
+ wr(DIB3000MC_REG_UNK_134,DIB3000MC_UNK_134);
+ wr(DIB3000MC_REG_FEC_CFG,DIB3000MC_FEC_CFG);
+
+ wr(DIB3000MC_REG_DIVERSITY3,DIB3000MC_DIVERSITY3_IN_OFF);
+
+ dib3000mc_set_impulse_noise(state,0,TRANSMISSION_MODE_8K,BANDWIDTH_8_MHZ);
+
+/* output mode control, just the MPEG2_SLAVE */
+// set_or(DIB3000MC_REG_OUTMODE,DIB3000MC_OM_SLAVE);
+ wr(DIB3000MC_REG_OUTMODE,DIB3000MC_OM_SLAVE);
+ wr(DIB3000MC_REG_SMO_MODE,DIB3000MC_SMO_MODE_SLAVE);
+ wr(DIB3000MC_REG_FIFO_THRESHOLD,DIB3000MC_FIFO_THRESHOLD_SLAVE);
+ wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_SLAVE);
+
+/* MPEG2_PARALLEL_CONTINUOUS_CLOCK
+ wr(DIB3000MC_REG_OUTMODE,
+ DIB3000MC_SET_OUTMODE(DIB3000MC_OM_PAR_CONT_CLK,
+ rd(DIB3000MC_REG_OUTMODE)));
+
+ wr(DIB3000MC_REG_SMO_MODE,
+ DIB3000MC_SMO_MODE_DEFAULT |
+ DIB3000MC_SMO_MODE_188);
+
+ wr(DIB3000MC_REG_FIFO_THRESHOLD,DIB3000MC_FIFO_THRESHOLD_DEFAULT);
+ wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_DIV_OUT_ON);
+*/
+
+/* diversity */
+ wr(DIB3000MC_REG_DIVERSITY1,DIB3000MC_DIVERSITY1_DEFAULT);
+ wr(DIB3000MC_REG_DIVERSITY2,DIB3000MC_DIVERSITY2_DEFAULT);
+
+ set_and(DIB3000MC_REG_DIVERSITY3,DIB3000MC_DIVERSITY3_IN_OFF);
+
+ set_or(DIB3000MC_REG_CLK_CFG_7,DIB3000MC_CLK_CFG_7_DIV_IN_OFF);
+
+/* if (state->config->pll_init) {
+ dib3000mc_tuner_pass_ctrl(fe,1,state->config.pll_addr(fe));
+ state->config->pll_init(fe,NULL);
+ dib3000mc_tuner_pass_ctrl(fe,0,state->config.pll_addr(fe));
+ }*/
+ deb_info("init end\n");
+ return 0;
+}
static int dib3000mc_read_status(struct dvb_frontend* fe, fe_status_t *stat)
{
struct dib3000_state* state = (struct dib3000_state*) fe->demodulator_priv;
*stat |= FE_HAS_SIGNAL;
if (DIB3000MC_CARRIER_LOCK(lock))
*stat |= FE_HAS_CARRIER;
- if (DIB3000MC_TPS_LOCK(lock)) /* VIT_LOCK ? */
+ if (DIB3000MC_TPS_LOCK(lock))
*stat |= FE_HAS_VITERBI;
if (DIB3000MC_MPEG_SYNC_LOCK(lock))
*stat |= (FE_HAS_SYNC | FE_HAS_LOCK);
- deb_info("actual status is %2x\n",*stat);
+ deb_stat("actual status is %2x fifo_level: %x,244: %x, 206: %x, 207: %x, 1040: %x\n",*stat,rd(510),rd(244),rd(206),rd(207),rd(1040));
return 0;
}
u16 val = rd(DIB3000MC_REG_SIGNAL_NOISE_LSB);
*strength = (((val >> 6) & 0xff) << 8) + (val & 0x3f);
- deb_info("signal: mantisse = %d, exponent = %d\n",(*strength >> 8) & 0xff, *strength & 0xff);
+ deb_stat("signal: mantisse = %d, exponent = %d\n",(*strength >> 8) & 0xff, *strength & 0xff);
return 0;
}
static int dib3000mc_read_snr(struct dvb_frontend* fe, u16 *snr)
{
struct dib3000_state* state = (struct dib3000_state*) fe->demodulator_priv;
-
- u16 val = rd(DIB3000MC_REG_SIGNAL_NOISE_MSB),
- val2 = rd(DIB3000MC_REG_SIGNAL_NOISE_LSB);
+ u16 val = rd(DIB3000MC_REG_SIGNAL_NOISE_LSB),
+ val2 = rd(DIB3000MC_REG_SIGNAL_NOISE_MSB);
u16 sig,noise;
sig = (((val >> 6) & 0xff) << 8) + (val & 0x3f);
else
*snr = (u16) sig/noise;
- deb_info("signal: mantisse = %d, exponent = %d\n",(sig >> 8) & 0xff, sig & 0xff);
- deb_info("noise: mantisse = %d, exponent = %d\n",(noise >> 8) & 0xff, noise & 0xff);
- deb_info("snr: %d\n",*snr);
+ deb_stat("signal: mantisse = %d, exponent = %d\n",(sig >> 8) & 0xff, sig & 0xff);
+ deb_stat("noise: mantisse = %d, exponent = %d\n",(noise >> 8) & 0xff, noise & 0xff);
+ deb_stat("snr: %d\n",*snr);
return 0;
}
static int dib3000mc_fe_get_tune_settings(struct dvb_frontend* fe, struct dvb_frontend_tune_settings *tune)
{
- tune->min_delay_ms = 800;
+ tune->min_delay_ms = 2000;
tune->step_size = 166667;
tune->max_drift = 166667 * 2;
static void dib3000mc_release(struct dvb_frontend* fe)
{
struct dib3000_state *state = (struct dib3000_state*) fe->demodulator_priv;
- dib3000_dealloc_pid_list(state);
kfree(state);
}
/* pid filter and transfer stuff */
-static int dib3000mc_pid_control(struct dvb_frontend *fe,int pid,int onoff)
+static int dib3000mc_pid_control(struct dvb_frontend *fe,int index, int pid,int onoff)
{
struct dib3000_state *state = fe->demodulator_priv;
- int index = dib3000_get_pid_index(state->pid_list, DIB3000MC_NUM_PIDS, pid, &state->pid_list_lock,onoff);
pid = (onoff ? pid | DIB3000_ACTIVATE_PID_FILTERING : 0);
-
- if (index >= 0) {
wr(index+DIB3000MC_REG_FIRST_PID,pid);
- } else {
- err("no more pids for filtering.");
- return -ENOMEM;
- }
return 0;
}
{
struct dib3000_state *state = (struct dib3000_state*) fe->demodulator_priv;
u16 tmp = rd(DIB3000MC_REG_SMO_MODE);
- deb_xfer("%s fifo",onoff ? "enabling" : "disabling");
+
+ deb_xfer("%s fifo\n",onoff ? "enabling" : "disabling");
+
if (onoff) {
+ deb_xfer("%d %x\n",tmp & DIB3000MC_SMO_MODE_FIFO_UNFLUSH,tmp & DIB3000MC_SMO_MODE_FIFO_UNFLUSH);
wr(DIB3000MC_REG_SMO_MODE,tmp & DIB3000MC_SMO_MODE_FIFO_UNFLUSH);
} else {
+ deb_xfer("%d %x\n",tmp | DIB3000MC_SMO_MODE_FIFO_FLUSH,tmp | DIB3000MC_SMO_MODE_FIFO_FLUSH);
wr(DIB3000MC_REG_SMO_MODE,tmp | DIB3000MC_SMO_MODE_FIFO_FLUSH);
}
return 0;
{
struct dib3000_state *state = fe->demodulator_priv;
u16 tmp = rd(DIB3000MC_REG_SMO_MODE);
- deb_xfer("%s pid parsing",onoff ? "enabling" : "disabling");
+
+ deb_xfer("%s pid parsing\n",onoff ? "enabling" : "disabling");
+
if (onoff) {
+ deb_xfer("%d %x\n",tmp | DIB3000MC_SMO_MODE_PID_PARSE,tmp | DIB3000MC_SMO_MODE_PID_PARSE);
wr(DIB3000MC_REG_SMO_MODE,tmp | DIB3000MC_SMO_MODE_PID_PARSE);
} else {
+ deb_xfer("%d %x\n",tmp & DIB3000MC_SMO_MODE_NO_PID_PARSE,tmp & DIB3000MC_SMO_MODE_NO_PID_PARSE);
wr(DIB3000MC_REG_SMO_MODE,tmp & DIB3000MC_SMO_MODE_NO_PID_PARSE);
}
return 0;
}
+static int dib3000mc_tuner_pass_ctrl(struct dvb_frontend *fe, int onoff, u8 pll_addr)
+{
+ struct dib3000_state *state = (struct dib3000_state*) fe->demodulator_priv;
+ if (onoff) {
+ wr(DIB3000MC_REG_TUNER, DIB3000_TUNER_WRITE_ENABLE(pll_addr));
+ } else {
+ wr(DIB3000MC_REG_TUNER, DIB3000_TUNER_WRITE_DISABLE(pll_addr));
+ }
+ return 0;
+}
+
+static int dib3000mc_demod_init(struct dib3000_state *state)
+{
+ u16 default_addr = 0x0a;
+ /* first init */
+ if (state->config.demod_address != default_addr) {
+ deb_info("initializing the demod the first time. Setting demod addr to 0x%x\n",default_addr);
+ wr(DIB3000MC_REG_ELEC_OUT,DIB3000MC_ELEC_OUT_DIV_OUT_ON);
+ wr(DIB3000MC_REG_OUTMODE,DIB3000MC_OM_PAR_CONT_CLK);
+
+ wr(DIB3000MC_REG_RST_I2C_ADDR,
+ DIB3000MC_DEMOD_ADDR(default_addr) |
+ DIB3000MC_DEMOD_ADDR_ON);
+
+ state->config.demod_address = default_addr;
+
+ wr(DIB3000MC_REG_RST_I2C_ADDR,
+ DIB3000MC_DEMOD_ADDR(default_addr));
+ } else
+ deb_info("demod is already initialized. Demod addr: 0x%x\n",state->config.demod_address);
+ return 0;
+}
+
+
static struct dvb_frontend_ops dib3000mc_ops;
struct dvb_frontend* dib3000mc_attach(const struct dib3000_config* config,
- struct i2c_adapter* i2c, struct dib3000_xfer_ops *xfer_ops)
+ struct i2c_adapter* i2c, struct dib_fe_xfer_ops *xfer_ops)
{
struct dib3000_state* state = NULL;
u16 devid;
if (devid != DIB3000MC_DEVICE_ID && devid != DIB3000P_DEVICE_ID)
goto error;
-
switch (devid) {
case DIB3000MC_DEVICE_ID:
- info("Found a DiBcom 3000-MC.");
+ info("Found a DiBcom 3000-MC, interesting...");
break;
case DIB3000P_DEVICE_ID:
info("Found a DiBcom 3000-P.");
break;
}
- if (dib3000_init_pid_list(state,DIB3000MC_NUM_PIDS))
- goto error;
-
/* create dvb_frontend */
state->frontend.ops = &state->ops;
state->frontend.demodulator_priv = state;
xfer_ops->pid_parse = dib3000mc_pid_parse;
xfer_ops->fifo_ctrl = dib3000mc_fifo_control;
xfer_ops->pid_ctrl = dib3000mc_pid_control;
+ xfer_ops->tuner_pass_ctrl = dib3000mc_tuner_pass_ctrl;
+
+ dib3000mc_demod_init(state);
return &state->frontend;
FE_CAN_QPSK | FE_CAN_QAM_16 | FE_CAN_QAM_64 | FE_CAN_QAM_AUTO |
FE_CAN_TRANSMISSION_MODE_AUTO |
FE_CAN_GUARD_INTERVAL_AUTO |
+ FE_CAN_RECOVER |
FE_CAN_HIERARCHY_AUTO,
},
#ifndef __DIB3000MC_PRIV_H__
#define __DIB3000MC_PRIV_H__
-/* info and err, taken from usb.h, if there is anything available like by default,
- * please change !
- */
-#define err(format, arg...) printk(KERN_ERR "%s: " format "\n" , __FILE__ , ## arg)
-#define info(format, arg...) printk(KERN_INFO "%s: " format "\n" , __FILE__ , ## arg)
-#define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n" , __FILE__ , ## arg)
-
-// defines the phase noise algorithm to be used (O:Inhib, 1:CPE on)
-#define DEF_PHASE_NOISE_MODE 0
-
-// define Mobille algorithms
-#define DEF_MOBILE_MODE Auto_Reception
-
-// defines the tuner type
-#define DEF_TUNER_TYPE TUNER_PANASONIC_ENV57H13D5
-
-// defines the impule noise algorithm to be used
-#define DEF_IMPULSE_NOISE_MODE 0
-
-// defines the MPEG2 data output format
-#define DEF_MPEG2_OUTPUT_188 0
-
-// defines the MPEG2 data output format
-#define DEF_OUTPUT_MODE MPEG2_PARALLEL_CONTINUOUS_CLOCK
-
/*
* Demodulator parameters
* reg: 0 1 1 1 11 11 111
{ 0x1c, 0xfba5, 0x60, 0x9c25, 0x1e3, 0x0cb7, 0x1, 0xb0d0 };
static u16 dib3000mc_bandwidth_8mhz[] =
- { 0x19, 0x5c30, 0x54, 0x88a0, 0x1a6, 0xab20, 0x1, 0xb0b0 };
+ { 0x19, 0x5c30, 0x54, 0x88a0, 0x1a6, 0xab20, 0x1, 0xb0d0 };
static u16 dib3000mc_reg_bandwidth_general[] = { 12,13,14,15 };
static u16 dib3000mc_bandwidth_general[] = { 0x0000, 0x03e8, 0x0000, 0x03f2 };
static u16 dib3000mc_reg_imp_noise_ctl[] = { 34,35 };
static u16 dib3000mc_imp_noise_ctl[][2] = {
- { 0x1294, 0xfff8 }, /* mode 0 */
- { 0x1294, 0xfff8 }, /* mode 1 */
- { 0x1294, 0xfff8 }, /* mode 2 */
- { 0x1294, 0xfff8 }, /* mode 3 */
- { 0x1294, 0xfff8 }, /* mode 4 */
+ { 0x1294, 0x1ff8 }, /* mode 0 */
+ { 0x1294, 0x1ff8 }, /* mode 1 */
+ { 0x1294, 0x1ff8 }, /* mode 2 */
+ { 0x1294, 0x1ff8 }, /* mode 3 */
+ { 0x1294, 0x1ff8 }, /* mode 4 */
};
/* AGC registers */
#define DIB3000MC_REG_FEC_CFG ( 195)
#define DIB3000MC_FEC_CFG ( 0x10)
+/*
+ * reg 206, output mode
+ * 1111 1111
+ * |||| ||||
+ * |||| |||+- unk
+ * |||| ||+-- unk
+ * |||| |+--- unk (on by default)
+ * |||| +---- fifo_ctrl (1 = inhibit (flushed), 0 = active (unflushed))
+ * |||+------ pid_parse (1 = enabled, 0 = disabled)
+ * ||+------- outp_188 (1 = TS packet size 188, 0 = packet size 204)
+ * |+-------- unk
+ * +--------- unk
+ */
+
#define DIB3000MC_REG_SMO_MODE ( 206)
#define DIB3000MC_SMO_MODE_DEFAULT (1 << 2)
#define DIB3000MC_SMO_MODE_FIFO_FLUSH (1 << 3)
-#define DIB3000MC_SMO_MODE_FIFO_UNFLUSH ~DIB3000MC_SMO_MODE_FIFO_FLUSH
+#define DIB3000MC_SMO_MODE_FIFO_UNFLUSH (0xfff7)
#define DIB3000MC_SMO_MODE_PID_PARSE (1 << 4)
-#define DIB3000MC_SMO_MODE_NO_PID_PARSE ~DIB3000MC_SMO_MODE_PID_PARSE
+#define DIB3000MC_SMO_MODE_NO_PID_PARSE (0xffef)
#define DIB3000MC_SMO_MODE_188 (1 << 5)
#define DIB3000MC_SMO_MODE_SLAVE (DIB3000MC_SMO_MODE_DEFAULT | \
DIB3000MC_SMO_MODE_188 | DIB3000MC_SMO_MODE_PID_PARSE | (1<<1))
#define DIB3000MC_REG_RST_I2C_ADDR ( 1024)
#define DIB3000MC_DEMOD_ADDR_ON ( 1)
-#define DIB3000MC_DEMOD_ADDR(a) ((a << 3) & 0x03F0)
+#define DIB3000MC_DEMOD_ADDR(a) ((a << 4) & 0x03F0)
#define DIB3000MC_REG_RESTART ( 1027)
#define DIB3000MC_RESTART_OFF (0x0000)
if (debug) printk(KERN_DEBUG "mt352: " args); \
} while (0)
-int mt352_write(struct dvb_frontend* fe, u8* ibuf, int ilen)
+static int mt352_single_write(struct dvb_frontend *fe, u8 reg, u8 val)
{
struct mt352_state* state = (struct mt352_state*) fe->demodulator_priv;
+ u8 buf[2] = { reg, val };
struct i2c_msg msg = { .addr = state->config->demod_address, .flags = 0,
- .buf = ibuf, .len = ilen };
+ .buf = buf, .len = 2 };
int err = i2c_transfer(state->i2c, &msg, 1);
if (err != 1) {
- dprintk("mt352_write() failed (err = %d)!\n", err);
+ dprintk("mt352_write() to reg %x failed (err = %d)!\n", reg, err);
return err;
}
+ return 0;
+}
+
+int mt352_write(struct dvb_frontend* fe, u8* ibuf, int ilen)
+{
+ int err,i;
+ for (i=0; i < ilen-1; i++)
+ if ((err = mt352_single_write(fe,ibuf[0]+i,ibuf[i+1])))
+ return err;
return 0;
}
return b1[0];
}
-
-
-
+u8 mt352_read(struct dvb_frontend *fe, u8 reg)
+{
+ return mt352_read_register(fe->demodulator_priv,reg);
+}
EXPORT_SYMBOL(mt352_attach);
EXPORT_SYMBOL(mt352_write);
+EXPORT_SYMBOL(mt352_read);
struct i2c_adapter* i2c);
extern int mt352_write(struct dvb_frontend* fe, u8* ibuf, int ilen);
+extern u8 mt352_read(struct dvb_frontend *fe, u8 reg);
#endif // MT352_H
--- /dev/null
+/*
+ Support for B2C2/BBTI Technisat Air2PC - ATSC
+
+ Copyright (C) 2004 Taylor Jacob <rtjacob@earthlink.net>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+*/
+
+/*
+ * This driver needs external firmware. Please use the command
+ * "<kerneldir>/Documentation/dvb/get_dvb_firmware nxt2002" to
+ * download/extract it, and then copy it to /usr/lib/hotplug/firmware.
+ */
+#define NXT2002_DEFAULT_FIRMWARE "dvb-fe-nxt2002.fw"
+#define CRC_CCIT_MASK 0x1021
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/device.h>
+#include <linux/firmware.h>
+
+#include "dvb_frontend.h"
+#include "nxt2002.h"
+
+struct nxt2002_state {
+
+ struct i2c_adapter* i2c;
+ struct dvb_frontend_ops ops;
+ const struct nxt2002_config* config;
+ struct dvb_frontend frontend;
+
+ /* demodulator private data */
+ u8 initialised:1;
+};
+
+static int debug;
+#define dprintk(args...) \
+ do { \
+ if (debug) printk(KERN_DEBUG "nxt2002: " args); \
+ } while (0)
+
+static int i2c_writebytes (struct nxt2002_state* state, u8 reg, u8 *buf, u8 len)
+{
+ /* probably a much better way of doing this */
+ u8 buf2 [256],x;
+ int err;
+ struct i2c_msg msg = { .addr = state->config->demod_address, .flags = 0, .buf = buf2, .len = len + 1 };
+
+ buf2[0] = reg;
+ for (x = 0 ; x < len ; x++)
+ buf2[x+1] = buf[x];
+
+ if ((err = i2c_transfer (state->i2c, &msg, 1)) != 1) {
+ printk ("%s: i2c write error (addr %02x, err == %i)\n",
+ __FUNCTION__, state->config->demod_address, err);
+ return -EREMOTEIO;
+ }
+
+ return 0;
+}
+
+static u8 i2c_readbytes (struct nxt2002_state* state, u8 reg, u8* buf, u8 len)
+{
+ u8 reg2 [] = { reg };
+
+ struct i2c_msg msg [] = { { .addr = state->config->demod_address, .flags = 0, .buf = reg2, .len = 1 },
+ { .addr = state->config->demod_address, .flags = I2C_M_RD, .buf = buf, .len = len } };
+
+ int err;
+
+ if ((err = i2c_transfer (state->i2c, msg, 2)) != 2) {
+ printk ("%s: i2c read error (addr %02x, err == %i)\n",
+ __FUNCTION__, state->config->demod_address, err);
+ return -EREMOTEIO;
+ }
+
+ return 0;
+}
+
+static u16 nxt2002_crc(u16 crc, u8 c)
+{
+
+ u8 i;
+ u16 input = (u16) c & 0xFF;
+
+ input<<=8;
+ for(i=0 ;i<8 ;i++) {
+ if((crc ^ input) & 0x8000)
+ crc=(crc<<1)^CRC_CCIT_MASK;
+ else
+ crc<<=1;
+ input<<=1;
+ }
+ return crc;
+}
+
+static int nxt2002_writereg_multibyte (struct nxt2002_state* state, u8 reg, u8* data, u8 len)
+{
+ u8 buf;
+ dprintk("%s\n", __FUNCTION__);
+
+ /* set multi register length */
+ i2c_writebytes(state,0x34,&len,1);
+
+ /* set multi register register */
+ i2c_writebytes(state,0x35,&reg,1);
+
+ /* send the actual data */
+ i2c_writebytes(state,0x36,data,len);
+
+ /* toggle the multireg write bit*/
+ buf = 0x02;
+ i2c_writebytes(state,0x21,&buf,1);
+
+ i2c_readbytes(state,0x21,&buf,1);
+
+ if ((buf & 0x02) == 0)
+ return 0;
+
+ dprintk("Error writing multireg register %02X\n",reg);
+
+ return 0;
+}
+
+static int nxt2002_readreg_multibyte (struct nxt2002_state* state, u8 reg, u8* data, u8 len)
+{
+ u8 len2;
+ dprintk("%s\n", __FUNCTION__);
+
+ /* set multi register length */
+ len2 = len & 0x80;
+ i2c_writebytes(state,0x34,&len2,1);
+
+	/* set multi register address */
+	i2c_writebytes(state,0x35,&reg,1);
+
+	/* read the actual data */
+ i2c_readbytes(state,reg,data,len);
+
+ return 0;
+}
+
+static void nxt2002_microcontroller_stop (struct nxt2002_state* state)
+{
+ u8 buf[2],counter = 0;
+ dprintk("%s\n", __FUNCTION__);
+
+ buf[0] = 0x80;
+ i2c_writebytes(state,0x22,buf,1);
+
+ while (counter < 20) {
+ i2c_readbytes(state,0x31,buf,1);
+ if (buf[0] & 0x40)
+ return;
+ msleep(10);
+ counter++;
+ }
+
+ dprintk("Timeout waiting for micro to stop.. This is ok after firmware upload\n");
+ return;
+}
+
+static void nxt2002_microcontroller_start (struct nxt2002_state* state)
+{
+ u8 buf;
+ dprintk("%s\n", __FUNCTION__);
+
+ buf = 0x00;
+ i2c_writebytes(state,0x22,&buf,1);
+}
+
+static int nxt2002_writetuner (struct nxt2002_state* state, u8* data)
+{
+ u8 buf,count = 0;
+
+ dprintk("Tuner Bytes: %02X %02X %02X %02X\n",data[0],data[1],data[2],data[3]);
+
+ dprintk("%s\n", __FUNCTION__);
+ /* stop the micro first */
+ nxt2002_microcontroller_stop(state);
+
+ /* set the i2c transfer speed to the tuner */
+ buf = 0x03;
+ i2c_writebytes(state,0x20,&buf,1);
+
+ /* setup to transfer 4 bytes via i2c */
+ buf = 0x04;
+ i2c_writebytes(state,0x34,&buf,1);
+
+ /* write actual tuner bytes */
+ i2c_writebytes(state,0x36,data,4);
+
+ /* set tuner i2c address */
+ buf = 0xC2;
+ i2c_writebytes(state,0x35,&buf,1);
+
+ /* write UC Opmode to begin transfer */
+ buf = 0x80;
+ i2c_writebytes(state,0x21,&buf,1);
+
+ while (count < 20) {
+ i2c_readbytes(state,0x21,&buf,1);
+ if ((buf & 0x80)== 0x00)
+ return 0;
+ msleep(100);
+ count++;
+ }
+
+ printk("nxt2002: timeout error writing tuner\n");
+ return 0;
+}
+
+static void nxt2002_agc_reset(struct nxt2002_state* state)
+{
+ u8 buf;
+ dprintk("%s\n", __FUNCTION__);
+
+ buf = 0x08;
+ i2c_writebytes(state,0x08,&buf,1);
+
+ buf = 0x00;
+ i2c_writebytes(state,0x08,&buf,1);
+
+ return;
+}
+
+static int nxt2002_load_firmware (struct dvb_frontend* fe, const struct firmware *fw)
+{
+
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 buf[256],written = 0,chunkpos = 0;
+ u16 rambase,position,crc = 0;
+
+ dprintk("%s\n", __FUNCTION__);
+ dprintk("Firmware is %zu bytes\n",fw->size);
+
+ /* Get the RAM base for this nxt2002 */
+ i2c_readbytes(state,0x10,buf,1);
+
+
+ if (buf[0] & 0x10)
+ rambase = 0x1000;
+ else
+ rambase = 0x0000;
+
+ dprintk("rambase on this nxt2002 is %04X\n",rambase);
+
+ /* Hold the micro in reset while loading firmware */
+ buf[0] = 0x80;
+ i2c_writebytes(state,0x2B,buf,1);
+
+
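+	/* the firmware is streamed in chunks of up to 255 bytes: each chunk is
+	   prefixed with its target RAM address and followed by a 16-bit CRC */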
+ for (position = 0; position < fw->size ; position++) {
+ if (written == 0) {
+ crc = 0;
+ chunkpos = 0x28;
+ buf[0] = ((rambase + position) >> 8);
+ buf[1] = (rambase + position) & 0xFF;
+ buf[2] = 0x81;
+ /* write starting address */
+ i2c_writebytes(state,0x29,buf,3);
+ }
+ written++;
+ chunkpos++;
+
+ if ((written % 4) == 0)
+ i2c_writebytes(state,chunkpos,&fw->data[position-3],4);
+
+ crc = nxt2002_crc(crc,fw->data[position]);
+
+
+ if ((written == 255) || (position+1 == fw->size)) {
+ /* write remaining bytes of firmware */
+ i2c_writebytes(state, chunkpos+4-(written %4),
+ &fw->data[position-(written %4) + 1],
+ written %4);
+ buf[0] = crc << 8;
+ buf[1] = crc & 0xFF;
+
+ /* write crc */
+ i2c_writebytes(state,0x2C,buf,2);
+
+ /* do a read to stop things */
+ i2c_readbytes(state,0x2A,buf,1);
+
+ /* set transfer mode to complete */
+ buf[0] = 0x80;
+ i2c_writebytes(state,0x2B,buf,1);
+
+ written = 0;
+ }
+ }
+
+ printk ("done.\n");
+ return 0;
+}
+
+
+static int nxt2002_setup_frontend_parameters (struct dvb_frontend* fe,
+ struct dvb_frontend_parameters *p)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u32 freq = 0;
+ u16 tunerfreq = 0;
+ u8 buf[4];
+
+ freq = 44000 + ( p->frequency / 1000 );
+
+ dprintk("freq = %d p->frequency = %d\n",freq,p->frequency);
+
+ tunerfreq = freq * 24/4000;
+
+ buf[0] = (tunerfreq >> 8) & 0x7F;
+ buf[1] = (tunerfreq & 0xFF);
+
+ if (p->frequency <= 214000000) {
+ buf[2] = 0x84 + (0x06 << 3);
+ buf[3] = (p->frequency <= 172000000) ? 0x01 : 0x02;
+ } else if (p->frequency <= 721000000) {
+ buf[2] = 0x84 + (0x07 << 3);
+ buf[3] = (p->frequency <= 467000000) ? 0x02 : 0x08;
+ } else if (p->frequency <= 841000000) {
+ buf[2] = 0x84 + (0x0E << 3);
+ buf[3] = 0x08;
+ } else {
+ buf[2] = 0x84 + (0x0F << 3);
+ buf[3] = 0x02;
+ }
+
+ /* write frequency information */
+ nxt2002_writetuner(state,buf);
+
+ /* reset the agc now that tuning has been completed */
+ nxt2002_agc_reset(state);
+
+ /* set target power level */
+ buf[0] = 0x70;
+ i2c_writebytes(state,0x42,buf,1);
+
+ /* configure sdm */
+ buf[0] = 0x87;
+ i2c_writebytes(state,0x57,buf,1);
+
+ /* write sdm1 input */
+ buf[0] = 0x10;
+ buf[1] = 0x00;
+ nxt2002_writereg_multibyte(state,0x58,buf,2);
+
+ /* write sdmx input */
+ buf[0] = 0x60;
+ buf[1] = 0x00;
+ nxt2002_writereg_multibyte(state,0x5C,buf,2);
+
+ /* write adc power lpf fc */
+ buf[0] = 0x05;
+ i2c_writebytes(state,0x43,buf,1);
+
+ /* write adc power lpf fc */
+ buf[0] = 0x05;
+ i2c_writebytes(state,0x43,buf,1);
+
+ /* write accumulator2 input */
+ buf[0] = 0x80;
+ buf[1] = 0x00;
+ nxt2002_writereg_multibyte(state,0x4B,buf,2);
+
+ /* write kg1 */
+ buf[0] = 0x00;
+ i2c_writebytes(state,0x4D,buf,1);
+
+ /* write sdm12 lpf fc */
+ buf[0] = 0x44;
+ i2c_writebytes(state,0x55,buf,1);
+
+ /* write agc control reg */
+ buf[0] = 0x04;
+ i2c_writebytes(state,0x41,buf,1);
+
+ /* write agc ucgp0 */
+ buf[0] = 0x00;
+ i2c_writebytes(state,0x30,buf,1);
+
+ /* write agc control reg */
+ buf[0] = 0x00;
+ i2c_writebytes(state,0x41,buf,1);
+
+ /* write accumulator2 input */
+ buf[0] = 0x80;
+ buf[1] = 0x00;
+ nxt2002_writereg_multibyte(state,0x49,buf,2);
+ nxt2002_writereg_multibyte(state,0x4B,buf,2);
+
+ /* write agc control reg */
+ buf[0] = 0x04;
+ i2c_writebytes(state,0x41,buf,1);
+
+ nxt2002_microcontroller_start(state);
+
+ /* adjacent channel detection should be done here, but I don't
+ have any stations with this need so I cannot test it */
+
+ return 0;
+}
+
+static int nxt2002_read_status(struct dvb_frontend* fe, fe_status_t* status)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 lock;
+ i2c_readbytes(state,0x31,&lock,1);
+
+ *status = 0;
+ if (lock & 0x20) {
+ *status |= FE_HAS_SIGNAL;
+ *status |= FE_HAS_CARRIER;
+ *status |= FE_HAS_VITERBI;
+ *status |= FE_HAS_SYNC;
+ *status |= FE_HAS_LOCK;
+ }
+ return 0;
+}
+
+static int nxt2002_read_ber(struct dvb_frontend* fe, u32* ber)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 b[3];
+
+ nxt2002_readreg_multibyte(state,0xE6,b,3);
+
+ *ber = ((b[0] << 8) + b[1]) * 8;
+
+ return 0;
+}
+
+static int nxt2002_read_signal_strength(struct dvb_frontend* fe, u16* strength)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 b[2];
+ u16 temp = 0;
+
+ /* setup to read cluster variance */
+ b[0] = 0x00;
+ i2c_writebytes(state,0xA1,b,1);
+
+ /* get multreg val */
+ nxt2002_readreg_multibyte(state,0xA6,b,2);
+
+ temp = (b[0] << 8) | b[1];
+ *strength = ((0x7FFF - temp) & 0x0FFF) * 16;
+
+ return 0;
+}
+
+static int nxt2002_read_snr(struct dvb_frontend* fe, u16* snr)
+{
+
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 b[2];
+ u16 temp = 0, temp2;
+ u32 snrdb = 0;
+
+ /* setup to read cluster variance */
+ b[0] = 0x00;
+ i2c_writebytes(state,0xA1,b,1);
+
+ /* get multreg val from 0xA6 */
+ nxt2002_readreg_multibyte(state,0xA6,b,2);
+
+ temp = (b[0] << 8) | b[1];
+ temp2 = 0x7FFF - temp;
+
+ /* snr will be in db */
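+	/* piecewise-linear mapping of the cluster variance to dB; snrdb is in 1/1000 dB */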
+ if (temp2 > 0x7F00)
+ snrdb = 1000*24 + ( 1000*(30-24) * ( temp2 - 0x7F00 ) / ( 0x7FFF - 0x7F00 ) );
+ else if (temp2 > 0x7EC0)
+ snrdb = 1000*18 + ( 1000*(24-18) * ( temp2 - 0x7EC0 ) / ( 0x7F00 - 0x7EC0 ) );
+ else if (temp2 > 0x7C00)
+ snrdb = 1000*12 + ( 1000*(18-12) * ( temp2 - 0x7C00 ) / ( 0x7EC0 - 0x7C00 ) );
+ else
+ snrdb = 1000*0 + ( 1000*(12-0) * ( temp2 - 0 ) / ( 0x7C00 - 0 ) );
+
+ /* the value reported back from the frontend will be FFFF=32db 0000=0db */
+
+ *snr = snrdb * (0xFFFF/32000);
+
+ return 0;
+}
+
+static int nxt2002_read_ucblocks(struct dvb_frontend* fe, u32* ucblocks)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ u8 b[3];
+
+ nxt2002_readreg_multibyte(state,0xE6,b,3);
+
+ *ucblocks = b[2];
+
+ return 0;
+}
+
+static int nxt2002_sleep(struct dvb_frontend* fe)
+{
+ return 0;
+}
+
+static int nxt2002_init(struct dvb_frontend* fe)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ const struct firmware *fw;
+ int ret;
+ u8 buf[2];
+
+ if (!state->initialised) {
+ /* request the firmware, this will block until someone uploads it */
+ printk("nxt2002: Waiting for firmware upload...\n");
+ ret = state->config->request_firmware(fe, &fw, NXT2002_DEFAULT_FIRMWARE);
+ printk("nxt2002: Waiting for firmware upload(2)...\n");
+ if (ret) {
+ printk("nxt2002: no firmware upload (timeout or file not found?)\n");
+ return ret;
+ }
+
+ ret = nxt2002_load_firmware(fe, fw);
+ if (ret) {
+ printk("nxt2002: writing firmware to device failed\n");
+ release_firmware(fw);
+ return ret;
+ }
+
+ /* Put the micro into reset */
+ nxt2002_microcontroller_stop(state);
+
+ /* ensure transfer is complete */
+ buf[0]=0;
+ i2c_writebytes(state,0x2B,buf,1);
+
+ /* Put the micro into reset for real this time */
+ nxt2002_microcontroller_stop(state);
+
+ /* soft reset everything (agc,frontend,eq,fec)*/
+ buf[0] = 0x0F;
+ i2c_writebytes(state,0x08,buf,1);
+ buf[0] = 0x00;
+ i2c_writebytes(state,0x08,buf,1);
+
+ /* write agc sdm configure */
+ buf[0] = 0xF1;
+ i2c_writebytes(state,0x57,buf,1);
+
+ /* write mod output format */
+ buf[0] = 0x20;
+ i2c_writebytes(state,0x09,buf,1);
+
+ /* write fec mpeg mode */
+ buf[0] = 0x7E;
+ buf[1] = 0x00;
+ i2c_writebytes(state,0xE9,buf,2);
+
+ /* write mux selection */
+ buf[0] = 0x00;
+ i2c_writebytes(state,0xCC,buf,1);
+
+ state->initialised = 1;
+ }
+
+ return 0;
+}
+
+static int nxt2002_get_tune_settings(struct dvb_frontend* fe, struct dvb_frontend_tune_settings* fesettings)
+{
+ fesettings->min_delay_ms = 500;
+ fesettings->step_size = 0;
+ fesettings->max_drift = 0;
+ return 0;
+}
+
+static void nxt2002_release(struct dvb_frontend* fe)
+{
+ struct nxt2002_state* state = (struct nxt2002_state*) fe->demodulator_priv;
+ kfree(state);
+}
+
+static struct dvb_frontend_ops nxt2002_ops;
+
+struct dvb_frontend* nxt2002_attach(const struct nxt2002_config* config,
+ struct i2c_adapter* i2c)
+{
+ struct nxt2002_state* state = NULL;
+ u8 buf [] = {0,0,0,0,0};
+
+ /* allocate memory for the internal state */
+ state = (struct nxt2002_state*) kmalloc(sizeof(struct nxt2002_state), GFP_KERNEL);
+ if (state == NULL) goto error;
+
+ /* setup the state */
+ state->config = config;
+ state->i2c = i2c;
+ memcpy(&state->ops, &nxt2002_ops, sizeof(struct dvb_frontend_ops));
+ state->initialised = 0;
+
+	/* Check the first 5 registers to ensure this is a revision we can handle */
+
+ i2c_readbytes(state, 0x00, buf, 5);
+ if (buf[0] != 0x04) goto error; /* device id */
+ if (buf[1] != 0x02) goto error; /* fab id */
+ if (buf[2] != 0x11) goto error; /* month */
+ if (buf[3] != 0x20) goto error; /* year msb */
+ if (buf[4] != 0x00) goto error; /* year lsb */
+
+ /* create dvb_frontend */
+ state->frontend.ops = &state->ops;
+ state->frontend.demodulator_priv = state;
+ return &state->frontend;
+
+error:
+ if (state) kfree(state);
+ return NULL;
+}
+
+static struct dvb_frontend_ops nxt2002_ops = {
+
+ .info = {
+ .name = "Nextwave nxt2002 VSB/QAM frontend",
+ .type = FE_ATSC,
+ .frequency_min = 54000000,
+ .frequency_max = 803000000,
+ /* stepsize is just a guess */
+ .frequency_stepsize = 166666,
+ .caps = FE_CAN_FEC_1_2 | FE_CAN_FEC_2_3 | FE_CAN_FEC_3_4 |
+ FE_CAN_FEC_5_6 | FE_CAN_FEC_7_8 | FE_CAN_FEC_AUTO |
+ FE_CAN_8VSB
+ },
+
+ .release = nxt2002_release,
+
+ .init = nxt2002_init,
+ .sleep = nxt2002_sleep,
+
+ .set_frontend = nxt2002_setup_frontend_parameters,
+ .get_tune_settings = nxt2002_get_tune_settings,
+
+ .read_status = nxt2002_read_status,
+ .read_ber = nxt2002_read_ber,
+ .read_signal_strength = nxt2002_read_signal_strength,
+ .read_snr = nxt2002_read_snr,
+ .read_ucblocks = nxt2002_read_ucblocks,
+
+};
+
+module_param(debug, int, 0644);
+MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+
+MODULE_DESCRIPTION("NXT2002 ATSC (8VSB & ITU J83 AnnexB FEC QAM64/256) demodulator driver");
+MODULE_AUTHOR("Taylor Jacob");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(nxt2002_attach);
--- /dev/null
+/*
+ Driver for the Nxt2002 demodulator
+*/
+
+#ifndef NXT2002_H
+#define NXT2002_H
+
+#include <linux/dvb/frontend.h>
+#include <linux/firmware.h>
+
+struct nxt2002_config
+{
+ /* the demodulator's i2c address */
+ u8 demod_address;
+
+ /* request firmware for device */
+ int (*request_firmware)(struct dvb_frontend* fe, const struct firmware **fw, char* name);
+};
+
+extern struct dvb_frontend* nxt2002_attach(const struct nxt2002_config* config,
+ struct i2c_adapter* i2c);
+
+#endif // NXT2002_H
/* request the firmware, this will block until someone uploads it */
- printk("sp8870: waiting for firmware upload...\n");
+ printk("sp8870: waiting for firmware upload (%s)...\n", SP8870_DEFAULT_FIRMWARE);
if (state->config->request_firmware(fe, &fw, SP8870_DEFAULT_FIRMWARE)) {
printk("sp8870: no firmware upload (timeout or file not found?)\n");
release_firmware(fw);
release_firmware(fw);
return -EIO;
}
+ printk("sp8870: firmware upload complete\n");
/* enable TS output and interface pins */
sp8870_writereg(state, 0xc18, 0x00d);
if (!state->initialised) {
/* request the firmware, this will block until someone uploads it */
- printk("sp887x: waiting for firmware upload...\n");
+ printk("sp887x: waiting for firmware upload (%s)...\n", SP887X_DEFAULT_FIRMWARE);
ret = state->config->request_firmware(fe, &fw, SP887X_DEFAULT_FIRMWARE);
if (ret) {
printk("sp887x: no firmware upload (timeout or file not found?)\n");
release_firmware(fw);
return ret;
}
+ printk("sp887x: firmware upload complete\n");
state->initialised = 1;
}
struct dvb_frontend frontend;
- int freq_off;
-
unsigned long base_freq;
-
u8 pwm;
};
int ret;
u8 b0[] = { reg };
u8 b1[] = { 0 };
- struct i2c_msg msg [] = { { .addr = state->config->demod_address, .flags = 0, .buf = b0, .len = 1 },
- { .addr = state->config->demod_address, .flags = I2C_M_RD, .buf = b1, .len = 1 } };
+ struct i2c_msg msg[] = { {.addr = state->config->demod_address,.flags = 0,.buf = b0,.len =
+ 1},
+ {.addr = state->config->demod_address,.flags = I2C_M_RD,.buf = b1,.len = 1}
+ };
// this device needs a STOP between the register and data
if ((ret = i2c_transfer (state->i2c, &msg[0], 1)) != 1) {
static int stv0297_readregs (struct stv0297_state* state, u8 reg1, u8 *b, u8 len)
{
int ret;
-	struct i2c_msg msg [] = { { .addr = state->config->demod_address, .flags = 0, .buf = &reg1, .len = 1 },
- { .addr = state->config->demod_address, .flags = I2C_M_RD, .buf = b, .len = len } };
+ struct i2c_msg msg[] = { {.addr = state->config->demod_address,.flags = 0,.buf =
+				  &reg1,.len = 1},
+ {.addr = state->config->demod_address,.flags = I2C_M_RD,.buf = b,.len = len}
+ };
// this device needs a STOP between the register and data
if ((ret = i2c_transfer (state->i2c, &msg[0], 1)) != 1) {
return 0;
}
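+/* symbol rate in kBaud = 32-bit register value (0x55..0x58) * STV0297_CLOCK_KHZ / 2^32 */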
+static u32 stv0297_get_symbolrate(struct stv0297_state *state)
+{
+ u64 tmp;
+
+ tmp = stv0297_readreg(state, 0x55);
+ tmp |= stv0297_readreg(state, 0x56) << 8;
+ tmp |= stv0297_readreg(state, 0x57) << 16;
+ tmp |= stv0297_readreg(state, 0x58) << 24;
+
+ tmp *= STV0297_CLOCK_KHZ;
+ tmp >>= 32;
+
+ return (u32) tmp;
+}
+
static void stv0297_set_symbolrate(struct stv0297_state *state, u32 srate)
{
long tmp;
stv0297_writereg_mask(state, 0x69, 0x0F, (tmp >> 24) & 0x0f);
}
+/*
static long stv0297_get_carrieroffset(struct stv0297_state* state)
{
- s32 raw;
- long tmp;
+ s64 tmp;
stv0297_writereg(state,0x6B, 0x00);
- raw = stv0297_readreg(state,0x66);
- raw |= (stv0297_readreg(state,0x67) << 8);
- raw |= (stv0297_readreg(state,0x68) << 16);
- raw |= (stv0297_readreg(state,0x69) & 0x0F) << 24;
+ tmp = stv0297_readreg(state, 0x66);
+ tmp |= (stv0297_readreg(state, 0x67) << 8);
+ tmp |= (stv0297_readreg(state, 0x68) << 16);
+ tmp |= (stv0297_readreg(state, 0x69) & 0x0F) << 24;
- tmp = raw;
- tmp /= 26844L;
+ tmp *= stv0297_get_symbolrate(state);
+ tmp >>= 28;
- return tmp;
+ return (s32) tmp;
}
+*/
static void stv0297_set_initialdemodfreq(struct stv0297_state* state, long freq)
{
-/*
- s64 tmp;
+ s32 tmp;
- if (freq > 10000) freq -= STV0297_CLOCK_KHZ;
+ if (freq > 10000)
+ freq -= STV0297_CLOCK_KHZ;
- tmp = freq << 16;
- do_div(tmp, STV0297_CLOCK_KHZ);
- if (tmp > 0xffff) tmp = 0xffff; // check this calculation
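+	/* compute freq * 2^16 / master clock in two 32-bit steps, avoiding a 64-bit divide */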
+ tmp = (STV0297_CLOCK_KHZ * 1000) / (1 << 16);
+ tmp = (freq * 1000) / tmp;
+ if (tmp > 0xffff)
+ tmp = 0xffff;
stv0297_writereg_mask(state, 0x25, 0x80, 0x80);
stv0297_writereg(state, 0x21, tmp >> 8);
stv0297_writereg(state, 0x20, tmp);
-*/
}
static int stv0297_set_qam(struct stv0297_state* state, fe_modulation_t modulation)
return 0;
}
+static int stv0297_sleep(struct dvb_frontend *fe)
+{
+ struct stv0297_state *state = (struct stv0297_state *) fe->demodulator_priv;
+
+ stv0297_writereg_mask(state, 0x80, 1, 1);
+
+ return 0;
+}
+
static int stv0297_read_status(struct dvb_frontend* fe, fe_status_t* status)
{
struct stv0297_state* state = (struct stv0297_state*) fe->demodulator_priv;
int carrieroffset;
unsigned long starttime;
unsigned long timeout;
+ fe_spectral_inversion_t inversion;
switch(p->u.qam.modulation) {
case QAM_16:
}
// determine inversion dependant parameters
+ inversion = p->inversion;
+ if (state->config->invert)
+ inversion = (inversion == INVERSION_ON) ? INVERSION_OFF : INVERSION_ON;
carrieroffset = -330;
- switch(p->inversion) {
+ switch (inversion) {
case INVERSION_OFF:
break;
return -EINVAL;
}
+ stv0297_init(fe);
state->config->pll_set(fe, p);
/* clear software interrupts */
stv0297_writereg(state, 0x82, 0x0);
/* set initial demodulation frequency */
- stv0297_set_initialdemodfreq(state, state->freq_off + 7250);
+ stv0297_set_initialdemodfreq(state, 7250);
/* setup AGC */
stv0297_writereg_mask(state, 0x43, 0x10, 0x00);
stv0297_set_symbolrate(state, p->u.qam.symbol_rate/1000);
stv0297_set_sweeprate(state, sweeprate, p->u.qam.symbol_rate / 1000);
stv0297_set_carrieroffset(state, carrieroffset);
- stv0297_set_inversion(state, p->inversion);
+ stv0297_set_inversion(state, inversion);
/* kick off lock */
stv0297_writereg_mask(state, 0x88, 0x08, 0x08);
/* success!! */
stv0297_writereg_mask(state, 0x5a, 0x40, 0x00);
- state->freq_off = stv0297_get_carrieroffset(state);
state->base_freq = p->frequency;
return 0;
reg_00 = stv0297_readreg(state, 0x00);
reg_83 = stv0297_readreg(state, 0x83);
- p->frequency = state->base_freq + state->freq_off;
+ p->frequency = state->base_freq;
p->inversion = (reg_83 & 0x08) ? INVERSION_ON : INVERSION_OFF;
- p->u.qam.symbol_rate = 0;
- p->u.qam.fec_inner = 0;
+ if (state->config->invert)
+ p->inversion = (p->inversion == INVERSION_ON) ? INVERSION_OFF : INVERSION_ON;
+ p->u.qam.symbol_rate = stv0297_get_symbolrate(state) * 1000;
+ p->u.qam.fec_inner = FEC_NONE;
switch((reg_00 >> 4) & 0x7) {
case 0:
state->config = config;
state->i2c = i2c;
memcpy(&state->ops, &stv0297_ops, sizeof(struct dvb_frontend_ops));
- state->freq_off = 0;
state->base_freq = 0;
state->pwm = pwm;
.release = stv0297_release,
.init = stv0297_init,
+ .sleep = stv0297_sleep,
.set_frontend = stv0297_set_frontend,
.get_frontend = stv0297_get_frontend,
/* the demodulator's i2c address */
u8 demod_address;
+	/* does the reported "inversion" need to be inverted? */
+ u8 invert:1;
+
/* PLL maintenance */
int (*pll_init)(struct dvb_frontend* fe);
int (*pll_set)(struct dvb_frontend* fe, struct dvb_frontend_parameters* params);
Copyright (C) 1999 Convergence Integrated Media GmbH <ralph@convergence.de>
Copyright (C) 2004 Markus Schulz <msc@antzsystem.de>
- Suppport for TDA10021
+ Support for TDA10021
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
Copyright (C) 1999 Convergence Integrated Media GmbH <ralph@convergence.de>
Copyright (C) 2004 Markus Schulz <msc@antzsystem.de>
- Suppport for TDA10021
+ Support for TDA10021
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
#include <linux/spinlock.h>
#include <linux/threads.h>
#include <linux/interrupt.h>
-#include <linux/irq.h>
+#include <asm/irq.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
int bmpp;
int bmplen;
- int bmp_state;
+ volatile int bmp_state;
#define BMP_NONE 0
#define BMP_LOADING 1
#define BMP_LOADINGS 2
struct dmx_frontend hw_frontend;
struct dmx_frontend mem_frontend;
+ /* for budget mode demux1 */
+ struct dmxdev dmxdev1;
+ struct dvb_demux demux1;
+ struct dvb_net dvb_net1;
+ spinlock_t feedlock1;
+ int feeding1;
+ u8 tsf;
+ u32 ttbp;
+ unsigned char *grabbing;
+ struct saa7146_pgtable pt;
+ struct tasklet_struct vpe_tasklet;
+
int fe_synced;
struct semaphore pid_mutex;
if (ves1820_writereg(dev, 0x09, 0x0f, 0x20))
dprintk(1, "setting band in demodulator failed.\n");
} else if (av7110->analog_tuner_flags & ANALOG_TUNER_STV0297) {
- saa7146_setgpio(dev, 1, SAA7146_GPIO_OUTHI); // TDA9198 pin9(STD)
+ saa7146_setgpio(dev, 1, SAA7146_GPIO_OUTLO); // TDA9198 pin9(STD)
saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTLO); // TDA9198 pin30(VIF)
}
}
if (ves1820_writereg(av7110->dev, 0x09, 0x0f, 0x20))
dprintk(1, "setting band in demodulator failed.\n");
} else if (av7110->analog_tuner_flags & ANALOG_TUNER_STV0297) {
- saa7146_setgpio(av7110->dev, 1, SAA7146_GPIO_OUTHI); // TDA9198 pin9(STD)
+ saa7146_setgpio(av7110->dev, 1, SAA7146_GPIO_OUTLO); // TDA9198 pin9(STD)
saa7146_setgpio(av7110->dev, 3, SAA7146_GPIO_OUTLO); // TDA9198 pin30(VIF)
}
* Pitch: 188, NumBytes3: 188, NumLines3: 1024
*/
- if (budget->card->type == BUDGET_FS_ACTIVY) {
+ switch(budget->card->type) {
+ case BUDGET_FS_ACTIVY:
saa7146_write(dev, DD1_INIT, 0x04000000);
saa7146_write(dev, MC2, (MASK_09 | MASK_25));
saa7146_write(dev, BRS_CTRL, 0x00000000);
- } else {
+ break;
+ case BUDGET_PATCH:
+ saa7146_write(dev, DD1_INIT, 0x00000200);
+ saa7146_write(dev, MC2, (MASK_10 | MASK_26));
+ saa7146_write(dev, BRS_CTRL, 0x60000000);
+ break;
+ default:
if (budget->video_port == BUDGET_VIDEO_PORTA) {
saa7146_write(dev, DD1_INIT, 0x06000200);
saa7146_write(dev, MC2, (MASK_09 | MASK_25 | MASK_10 | MASK_26));
}
saa7146_write(dev, MC2, (MASK_04 | MASK_20));
- saa7146_write(dev, MC1, (MASK_04 | MASK_20)); // DMA3 on
- SAA7146_IER_ENABLE(budget->dev, MASK_10); // VPE
+ SAA7146_ISR_CLEAR(budget->dev, MASK_10); /* VPE */
+ SAA7146_IER_ENABLE(budget->dev, MASK_10); /* VPE */
+ saa7146_write(dev, MC1, (MASK_04 | MASK_20)); /* DMA3 on */
return ++budget->feeding;
}
return -EINVAL;
spin_lock(&budget->feedlock);
+ feed->pusi_seen = 0; /* have a clean section start */
status = start_ts_capture (budget);
spin_unlock(&budget->feedlock);
return status;
static struct saa7146_extension budget_extension;
-MAKE_BUDGET_INFO(fs_1_3,"Siemens/Technotrend/Hauppauge PCI rev1.3+Budget_Patch", BUDGET_PATCH);
+MAKE_BUDGET_INFO(ttbp, "TT-Budget/Patch DVB-S 1.x PCI", BUDGET_PATCH);
+//MAKE_BUDGET_INFO(satel,"TT-Budget/Patch SATELCO PCI", BUDGET_TT_HW_DISEQC);
static struct pci_device_id pci_tbl[] = {
- MAKE_EXTENSION_PCI(fs_1_3,0x13c2, 0x0000),
+ MAKE_EXTENSION_PCI(ttbp,0x13c2, 0x0000),
+// MAKE_EXTENSION_PCI(satel, 0x13c2, 0x1013),
{
.vendor = 0,
}
};
-static int budget_wdebi(struct budget_patch *budget, u32 config, int addr, u32 val, int count)
+/* These lines allow the budget-patch code to be tried on a true budget
+** card, to observe the behaviour of the VSYNC generated by RPS1.
+** This code was shamelessly copy/pasted from budget.c
+*/
+static void gpio_Set22K (struct budget *budget, int state)
+{
+ struct saa7146_dev *dev=budget->dev;
+ dprintk(2, "budget: %p\n", budget);
+ saa7146_setgpio(dev, 3, (state ? SAA7146_GPIO_OUTHI : SAA7146_GPIO_OUTLO));
+}
+
+/* Diseqc functions only for TT Budget card */
+/* taken from the Skyvision DVB driver by
+ Ralph Metzler <rjkm@metzlerbros.de> */
+
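+/* one DiSEqC bit cell on GPIO3: a '1' is 0.5 ms high then 1.0 ms low, a '0' is 1.0 ms high then 0.5 ms low */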
+static void DiseqcSendBit (struct budget *budget, int data)
+{
+ struct saa7146_dev *dev=budget->dev;
+ dprintk(2, "budget: %p\n", budget);
+
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTHI);
+ udelay(data ? 500 : 1000);
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTLO);
+ udelay(data ? 1000 : 500);
+}
+
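+/* send the 8 data bits MSB first, followed by an odd-parity bit */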
+static void DiseqcSendByte (struct budget *budget, int data)
+{
+ int i, par=1, d;
+
+ dprintk(2, "budget: %p\n", budget);
+
+ for (i=7; i>=0; i--) {
+ d = (data>>i)&1;
+ par ^= d;
+ DiseqcSendBit(budget, d);
+ }
+
+ DiseqcSendBit(budget, par);
+}
+
+static int SendDiSEqCMsg (struct budget *budget, int len, u8 *msg, unsigned long burst)
{
struct saa7146_dev *dev=budget->dev;
+ int i;
dprintk(2, "budget: %p\n", budget);
- if (count <= 0 || count > 4)
- return -1;
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTLO);
+ mdelay(16);
- saa7146_write(dev, DEBI_CONFIG, config);
+ for (i=0; i<len; i++)
+ DiseqcSendByte(budget, msg[i]);
- saa7146_write(dev, DEBI_AD, val );
- saa7146_write(dev, DEBI_COMMAND, (count << 17) | (addr & 0xffff));
- saa7146_write(dev, MC2, (2 << 16) | 2);
- mdelay(5);
+ mdelay(16);
+
+ if (burst!=-1) {
+ if (burst)
+ DiseqcSendByte(budget, 0xff);
+ else {
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTHI);
+ udelay(12500);
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTLO);
+ }
+ msleep(20);
+ }
+
+ return 0;
+}
+
+/* shamelessly copy/pasted from budget.c
+*/
+static int budget_set_tone(struct dvb_frontend* fe, fe_sec_tone_mode_t tone)
+{
+ struct budget* budget = (struct budget*) fe->dvb->priv;
+
+ switch (tone) {
+ case SEC_TONE_ON:
+ gpio_Set22K (budget, 1);
+ break;
+
+ case SEC_TONE_OFF:
+ gpio_Set22K (budget, 0);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int budget_diseqc_send_master_cmd(struct dvb_frontend* fe, struct dvb_diseqc_master_cmd* cmd)
+{
+ struct budget* budget = (struct budget*) fe->dvb->priv;
+
+ SendDiSEqCMsg (budget, cmd->msg_len, cmd->msg, 0);
return 0;
}
+static int budget_diseqc_send_burst(struct dvb_frontend* fe, fe_sec_mini_cmd_t minicmd)
+{
+ struct budget* budget = (struct budget*) fe->dvb->priv;
+
+ SendDiSEqCMsg (budget, 0, NULL, minicmd);
+
+ return 0;
+}
static int budget_av7110_send_fw_cmd(struct budget_patch *budget, u16* buf, int length)
{
dprintk(2, "budget: %p\n", budget);
for (i = 2; i < length; i++)
- budget_wdebi(budget, DEBINOSWAP, COMMAND + 2*i, (u32) buf[i], 2);
-
+ {
+ ttpci_budget_debiwrite(budget, DEBINOSWAP, COMMAND + 2*i, 2, (u32) buf[i], 0,0);
+ msleep(5);
+ }
if (length)
- budget_wdebi(budget, DEBINOSWAP, COMMAND + 2, (u32) buf[1], 2);
+ ttpci_budget_debiwrite(budget, DEBINOSWAP, COMMAND + 2, 2, (u32) buf[1], 0,0);
else
- budget_wdebi(budget, DEBINOSWAP, COMMAND + 2, 0, 2);
-
- budget_wdebi(budget, DEBINOSWAP, COMMAND, (u32) buf[0], 2);
+ ttpci_budget_debiwrite(budget, DEBINOSWAP, COMMAND + 2, 2, 0, 0,0);
+ msleep(5);
+ ttpci_budget_debiwrite(budget, DEBINOSWAP, COMMAND, 2, (u32) buf[0], 0,0);
+ msleep(5);
return 0;
}
{
switch(budget->dev->pci->subsystem_device) {
case 0x0000: // Hauppauge/TT WinTV DVB-S rev1.X
+ case 0x1013: // SATELCO Multimedia PCI
// try the ALPS BSRV2 first of all
budget->dvb_frontend = ves1x93_attach(&alps_bsrv2_config, &budget->i2c_adap);
// try the ALPS BSRU6 now
budget->dvb_frontend = stv0299_attach(&alps_bsru6_config, &budget->i2c_adap);
if (budget->dvb_frontend) {
- budget->dvb_frontend->ops->diseqc_send_master_cmd = budget_patch_diseqc_send_master_cmd;
- budget->dvb_frontend->ops->diseqc_send_burst = budget_patch_diseqc_send_burst;
- budget->dvb_frontend->ops->set_tone = budget_patch_set_tone;
+ budget->dvb_frontend->ops->diseqc_send_master_cmd = budget_diseqc_send_master_cmd;
+ budget->dvb_frontend->ops->diseqc_send_burst = budget_diseqc_send_burst;
+ budget->dvb_frontend->ops->set_tone = budget_set_tone;
break;
}
// Try the grundig 29504-451
budget->dvb_frontend = tda8083_attach(&grundig_29504_451_config, &budget->i2c_adap);
if (budget->dvb_frontend) {
- budget->dvb_frontend->ops->diseqc_send_master_cmd = budget_patch_diseqc_send_master_cmd;
- budget->dvb_frontend->ops->diseqc_send_burst = budget_patch_diseqc_send_burst;
- budget->dvb_frontend->ops->set_tone = budget_patch_set_tone;
+ budget->dvb_frontend->ops->diseqc_send_master_cmd = budget_diseqc_send_master_cmd;
+ budget->dvb_frontend->ops->diseqc_send_burst = budget_diseqc_send_burst;
+ budget->dvb_frontend->ops->set_tone = budget_set_tone;
break;
}
break;
}
}
+/* written by Emard */
static int budget_patch_attach (struct saa7146_dev* dev, struct saa7146_pci_extension_data *info)
{
struct budget_patch *budget;
int err;
int count = 0;
+ int detected = 0;
+
+#define PATCH_RESET 0
+#define RPS_IRQ 0
+#define HPS_SETUP 0
+#if PATCH_RESET
+ saa7146_write(dev, MC1, MASK_31);
+ msleep(40);
+#endif
+#if HPS_SETUP
+ // initialize registers. Better to have it like this
+ // than leaving something unconfigured
+ saa7146_write(dev, DD1_STREAM_B, 0);
+ // port B VSYNC at rising edge
+ saa7146_write(dev, DD1_INIT, 0x00000200); // have this in budget-core too!
+ saa7146_write(dev, BRS_CTRL, 0x00000000); // VBI
+
+ // debi config
+ // saa7146_write(dev, DEBI_CONFIG, MASK_30|MASK_28|MASK_18);
+
+ // zero all HPS registers
+ saa7146_write(dev, HPS_H_PRESCALE, 0); // r68
+ saa7146_write(dev, HPS_H_SCALE, 0); // r6c
+ saa7146_write(dev, BCS_CTRL, 0); // r70
+ saa7146_write(dev, HPS_V_SCALE, 0); // r60
+ saa7146_write(dev, HPS_V_GAIN, 0); // r64
+ saa7146_write(dev, CHROMA_KEY_RANGE, 0); // r74
+ saa7146_write(dev, CLIP_FORMAT_CTRL, 0); // r78
+ // Set HPS prescaler for port B input
+ saa7146_write(dev, HPS_CTRL, (1<<30) | (0<<29) | (1<<28) | (0<<12) );
+ saa7146_write(dev, MC2,
+ 0 * (MASK_08 | MASK_24) | // BRS control
+ 0 * (MASK_09 | MASK_25) | // a
+ 0 * (MASK_10 | MASK_26) | // b
+ 1 * (MASK_06 | MASK_22) | // HPS_CTRL1
+ 1 * (MASK_05 | MASK_21) | // HPS_CTRL2
+ 0 * (MASK_01 | MASK_15) // DEBI
+ );
+#endif
+ // Disable RPS1 and RPS0
+ saa7146_write(dev, MC1, ( MASK_29 | MASK_28));
+ // RPS1 timeout disable
+ saa7146_write(dev, RPS_TOV1, 0);
+
+	// Autodetection code: wait for a VBI_B event (vertical blank at port B)
+	// and reset GPIO3 after VBI_B is detected.
+	// (GPIO3 should be raised high by the CPU to test whether GPIO3
+	// generates the vertical blank signal; in budget-patch hardware
+	// GPIO3 is connected to VSYNC_B.)
+ count = 0;
+#if 0
+ WRITE_RPS1(cpu_to_le32(CMD_UPLOAD |
+ MASK_10 | MASK_09 | MASK_08 | MASK_06 | MASK_05 | MASK_04 | MASK_03 | MASK_02 ));
+#endif
+ WRITE_RPS1(cpu_to_le32(CMD_PAUSE | EVT_VBI_B));
+ WRITE_RPS1(cpu_to_le32(CMD_WR_REG_MASK | (GPIO_CTRL>>2)));
+ WRITE_RPS1(cpu_to_le32(GPIO3_MSK));
+ WRITE_RPS1(cpu_to_le32(SAA7146_GPIO_OUTLO<<24));
+#if RPS_IRQ
+ // issue RPS1 interrupt to increment counter
+ WRITE_RPS1(cpu_to_le32(CMD_INTERRUPT));
+	// at least a NOP is needed between two interrupts
+ WRITE_RPS1(cpu_to_le32(CMD_NOP));
+ // interrupt again
+ WRITE_RPS1(cpu_to_le32(CMD_INTERRUPT));
+#endif
+ WRITE_RPS1(cpu_to_le32(CMD_STOP));
+
+#if RPS_IRQ
+ // set event counter 1 source as RPS1 interrupt (0x03) (rE4 p53)
+	// use 0x03 to track RPS1 interrupts - increases by 1 every time gpio3 is toggled
+	// use 0x15 to track VPE interrupts - increases by 1 every time vpeirq() is called
+ saa7146_write(dev, EC1SSR, (0x03<<2) | 3 );
+	// set event counter 1 threshold to maximum allowed value (rEC p55)
+ saa7146_write(dev, ECT1R, 0x3fff );
+#endif
+ // Fix VSYNC level
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTLO);
+ // Set RPS1 Address register to point to RPS code (r108 p42)
+ saa7146_write(dev, RPS_ADDR1, dev->d_rps1.dma_handle);
+ // Enable RPS1, (rFC p33)
+ saa7146_write(dev, MC1, (MASK_13 | MASK_29 ));
- if (!(budget = kmalloc (sizeof(struct budget_patch), GFP_KERNEL)))
- return -ENOMEM;
- dprintk(2, "budget: %p\n", budget);
+ mdelay(50);
+ saa7146_setgpio(dev, 3, SAA7146_GPIO_OUTHI);
+ mdelay(150);
- if ((err = ttpci_budget_init (budget, dev, info, THIS_MODULE))) {
- kfree (budget);
- return err;
- }
+
+ if( (saa7146_read(dev, GPIO_CTRL) & 0x10000000) == 0)
+ detected = 1;
-/*
+#if RPS_IRQ
+ printk("Event Counter 1 0x%04x\n", saa7146_read(dev, EC1R) & 0x3fff );
+#endif
+ // Disable RPS1
+ saa7146_write(dev, MC1, ( MASK_29 ));
+
+ if(detected == 0)
+		printk("budget-patch not detected or saa7146 in non-default state.\n"
+		       "try enabling resetting of the 7146 with MASK_31 in the MC1 register\n");
+
+ else
+ printk("BUDGET-PATCH DETECTED.\n");
+
+
+/* OLD (Original design by Roberto Deza):
** This code will setup the SAA7146_RPS1 to generate a square
** wave on GPIO3, changing when a field (TS_HEIGHT/2 "lines" of
** TS_WIDTH packets) has been acquired on SAA7146_D1B video port;
** which seems that can be done perfectly without this :-)).
*/
+/* New design (By Emard)
+** this rps1 code will copy the internal HS event to the GPIO3 pin.
+** GPIO3 is connected in budget-patch hardware to port B VSYNC
+
+** HS is an internal event of the 7146, accessible with RPS
+** and temporarily raised high every n lines
+** (n is defined in the RPS_THRESH1 counter threshold)
+** I think HS is raised high at the beginning of the n-th line
+** and remains high until the n-th line that triggered
+** it is completely received. When the reception of the n-th line
+** ends, HS is lowered.
+
+** To transmit data over DMA, the 7146 needs a change of state at
+** the port B VSYNC pin. Any change of port B VSYNC will
+** cause some DMA data transfer, with more or less packet loss.
+** It depends on the phase and frequency of VSYNC and
+** the way the 7146 is instructed to trigger on port B (defined
+** in the DD1_INIT register, 3rd nibble from the right; valid
+** numbers are 0-7, see the datasheet).
+**
+** The correct triggering can minimize packet loss,
+** dvbtraffic should give this stable bandwidths:
+** 22k transponder = 33814 kbit/s
+** 27.5k transponder = 38045 kbit/s
+** by experiment it is found that the best results
+** (stable bandwidths and almost no packet loss)
+** are obtained using DD1_INIT triggering number 2
+** (Va at rising edge of VS, Fa = HS x VS-falling forced toggle)
+** and a VSYNC phase that occurs in the middle of DMA transfer
+** (about byte 188*512=96256 in the DMA window).
+**
+** How to control the phase of HS is still not clear to me;
+** it just happens to be so. It can be seen if one enables
+** RPS_IRQ and prints Event Counter 1 in vpeirq(). Every
+** time RPS_INTERRUPT is called, Event Counter 1 will
+** increment. That's how the 7146 is programmed to do event
+** counting in this budget-patch.c.
+** I *think* the HPS setting has something to do with the phase
+** of HS but I can't be 100% sure of that.
+
+** hardware debug note: a working budget card (including budget patch)
+** with vpeirq() interrupt setup in mode "0x90" (every 64K) will
+** generate 3 interrupts per 25-Hz DMA frame of 2*188*512 bytes
+** and that means 3*25=75 Hz of interrupt frequency, as seen by
+** watch cat /proc/interrupts
+**
+** If this frequency is 3x lower (and the data received in the DMA
+** buffer doesn't start with 0x47, but in the middle of packets,
+** whose lengths appear to be like 188 292 188 104 etc.),
+** this means the VSYNC line is not connected in the hardware
+** (check the soldering of the PCB and pins).
+** The same behaviour of missing VSYNC can be duplicated on budget
+** cards, by setting DD1_INIT trigger mode 7 in the 3rd nibble.
+*/
+
// Setup RPS1 "program" (p35)
+ count = 0;
+
- // Wait reset Source Line Counter Threshold (p36)
- WRITE_RPS1(cpu_to_le32(CMD_PAUSE | RPS_INV | EVT_HS));
// Wait Source Line Counter Threshold (p36)
WRITE_RPS1(cpu_to_le32(CMD_PAUSE | EVT_HS));
// Set GPIO3=1 (p42)
WRITE_RPS1(cpu_to_le32(CMD_WR_REG_MASK | (GPIO_CTRL>>2)));
WRITE_RPS1(cpu_to_le32(GPIO3_MSK));
WRITE_RPS1(cpu_to_le32(SAA7146_GPIO_OUTHI<<24));
+#if RPS_IRQ
+ // issue RPS1 interrupt
+ WRITE_RPS1(cpu_to_le32(CMD_INTERRUPT));
+#endif
// Wait reset Source Line Counter Threshold (p36)
WRITE_RPS1(cpu_to_le32(CMD_PAUSE | RPS_INV | EVT_HS));
- // Wait Source Line Counter Threshold
- WRITE_RPS1(cpu_to_le32(CMD_PAUSE | EVT_HS));
// Set GPIO3=0 (p42)
WRITE_RPS1(cpu_to_le32(CMD_WR_REG_MASK | (GPIO_CTRL>>2)));
WRITE_RPS1(cpu_to_le32(GPIO3_MSK));
WRITE_RPS1(cpu_to_le32(SAA7146_GPIO_OUTLO<<24));
+#if RPS_IRQ
+ // issue RPS1 interrupt
+ WRITE_RPS1(cpu_to_le32(CMD_INTERRUPT));
+#endif
// Jump to begin of RPS program (p37)
WRITE_RPS1(cpu_to_le32(CMD_JUMP));
WRITE_RPS1(cpu_to_le32(dev->d_rps1.dma_handle));
// Set RPS1 Address register to point to RPS code (r108 p42)
saa7146_write(dev, RPS_ADDR1, dev->d_rps1.dma_handle);
// Set Source Line Counter Threshold, using BRS (rCC p43)
- saa7146_write(dev, RPS_THRESH1, ((TS_HEIGHT/2) | MASK_12));
+	// It generates an HS event every TS_HEIGHT lines.
+	// This is related to TS_WIDTH set in register
+	// NUM_LINE_BYTE3 in budget-core.c. If the low 16 bits of
+	// NUM_LINE_BYTE are set to TS_WIDTH bytes (TS_WIDTH=2*188),
+	// then RPS_THRESH1 should be set to trigger every
+	// TS_HEIGHT (512) lines.
+	//
+ saa7146_write(dev, RPS_THRESH1, (TS_HEIGHT*1) | MASK_12 );
+
+ // saa7146_write(dev, RPS_THRESH0, ((TS_HEIGHT/2)<<16) |MASK_28| (TS_HEIGHT/2) |MASK_12 );
// Enable RPS1 (rFC p33)
saa7146_write(dev, MC1, (MASK_13 | MASK_29));
+
+ if (!(budget = kmalloc (sizeof(struct budget_patch), GFP_KERNEL)))
+ return -ENOMEM;
+
+ dprintk(2, "budget: %p\n", budget);
+
+ if ((err = ttpci_budget_init (budget, dev, info, THIS_MODULE))) {
+ kfree (budget);
+ return err;
+ }
+
+
dev->ext_priv = budget;
budget->dvb_adapter->priv = budget;
{
switch(budget->dev->pci->subsystem_device) {
case 0x1003: // Hauppauge/TT Nova budget (stv0299/ALPS BSRU6(tsa5059) OR ves1893/ALPS BSRV2(sp5659))
-
+ case 0x1013:
// try the ALPS BSRV2 first of all
budget->dvb_frontend = ves1x93_attach(&alps_bsrv2_config, &budget->i2c_adap);
if (budget->dvb_frontend) {
MAKE_BUDGET_INFO(ttbs, "TT-Budget/WinTV-NOVA-S PCI", BUDGET_TT);
MAKE_BUDGET_INFO(ttbc, "TT-Budget/WinTV-NOVA-C PCI", BUDGET_TT);
MAKE_BUDGET_INFO(ttbt, "TT-Budget/WinTV-NOVA-T PCI", BUDGET_TT);
-/* MAKE_BUDGET_INFO(satel, "SATELCO Multimedia PCI", BUDGET_TT_HW_DISEQC); UNDEFINED HARDWARE - mail linuxtv.org list */
+MAKE_BUDGET_INFO(satel, "SATELCO Multimedia PCI", BUDGET_TT_HW_DISEQC);
MAKE_BUDGET_INFO(fsacs, "Fujitsu Siemens Activy Budget-S PCI", BUDGET_FS_ACTIVY);
static struct pci_device_id pci_tbl[] = {
MAKE_EXTENSION_PCI(ttbs, 0x13c2, 0x1003),
MAKE_EXTENSION_PCI(ttbc, 0x13c2, 0x1004),
MAKE_EXTENSION_PCI(ttbt, 0x13c2, 0x1005),
-/* MAKE_EXTENSION_PCI(satel, 0x13c2, 0x1013), UNDEFINED HARDWARE */
+ MAKE_EXTENSION_PCI(satel, 0x13c2, 0x1013),
MAKE_EXTENSION_PCI(fsacs, 0x1131, 0x4f61),
{
.vendor = 0,
* and bit 4 (+16) is to keep the signal strength meter enabled
*/
-void send_0_byte(int port, struct rt_device *dev)
+static void send_0_byte(int port, struct rt_device *dev)
{
if ((dev->curvol == 0) || (dev->muted)) {
outb_p(128+64+16+ 1, port); /* wr-enable + data low */
sleep_delay(1000);
}
-void send_1_byte(int port, struct rt_device *dev)
+static void send_1_byte(int port, struct rt_device *dev)
{
if ((dev->curvol == 0) || (dev->muted)) {
outb_p(128+64+16+4 +1, port); /* wr-enable+data high */
MODULE_DESCRIPTION("A driver for the RadioTrack/RadioReveal radio card.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the RadioTrack card (0x20f or 0x30f)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit cleanup_rtrack_module(void)
{
MODULE_DESCRIPTION("A driver for the Aztech radio card.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
-MODULE_PARM(radio_nr, "i");
+module_param(io, int, 0);
+module_param(radio_nr, int, 0);
MODULE_PARM_DESC(io, "I/O address of the Aztech card (0x350 or 0x358)");
static void __exit aztech_cleanup(void)
MODULE_DEVICE_TABLE( pci, gemtek_pci_id );
-static u8 mx = 1;
+static int mx = 1;
static struct file_operations gemtek_pci_fops = {
.owner = THIS_MODULE,
MODULE_DESCRIPTION( "The video4linux driver for the Gemtek PCI Radio Card" );
MODULE_LICENSE("GPL");
-MODULE_PARM( mx, "b" );
+module_param(mx, bool, 0);
MODULE_PARM_DESC( mx, "single digit: 1 - turn off the turner upon module exit (default), 0 - do not" );
-MODULE_PARM( nr_radio, "i");
+module_param(nr_radio, int, 0);
MODULE_PARM_DESC( nr_radio, "video4linux device number to use");
module_init( gemtek_pci_init_module );
#define BITS2FREQ(x) ((x) * FREQ_STEP - FREQ_IF)
static int radio_nr = -1;
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static int radio_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg);
MODULE_DESCRIPTION("Radio driver for the Maestro PCI sound card radio.");
MODULE_LICENSE("GPL");
-void __exit maestro_radio_exit(void)
+static void __exit maestro_radio_exit(void)
{
video_unregister_device(&maestro_radio);
}
-int __init maestro_radio_init(void)
+static int __init maestro_radio_init(void)
{
register __u16 found=0;
struct pci_dev *pcidev = NULL;
static const int clk = 1, data = 2, wren = 4, mo_st = 8, power = 16 ;
static int radio_nr = -1;
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
#define FREQ_LO 50*16000
.remove = __devexit_p(maxiradio_remove_one),
};
-int __init maxiradio_radio_init(void)
+static int __init maxiradio_radio_init(void)
{
return pci_module_init(&maxiradio_driver);
}
-void __exit maxiradio_radio_exit(void)
+static void __exit maxiradio_radio_exit(void)
{
pci_unregister_driver(&maxiradio_driver);
}
MODULE_DESCRIPTION("A driver for the RadioTrack II radio card.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the RadioTrack card (0x20c or 0x30c)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit rtrack2_cleanup_module(void)
{
MODULE_DESCRIPTION("A driver for the SF16MI radio.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the SF16MI card (0x284 or 0x384)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit fmi_cleanup_module(void)
{
MODULE_DESCRIPTION("A driver for the SF16FMR2 radio.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the SF16FMR2 card (should be 0x384, if do not work try 0x284)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit fmr2_cleanup_module(void)
{
return 0;
}
-int tt_getsigstr(struct tt_device *dev) /* TODO */
+static int tt_getsigstr(struct tt_device *dev) /* TODO */
{
if (inb(io) & 2) /* bit set = no signal present */
return 0;
MODULE_AUTHOR("R.OFFERMANNS & others");
MODULE_DESCRIPTION("A driver for the TerraTec ActiveRadio Standalone radio card.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the TerraTec ActiveRadio card (0x590 or 0x591)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit terratec_cleanup_module(void)
{
MODULE_DESCRIPTION("A driver for the Trust FM Radio card.");
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O address of the Trust FM Radio card (0x350 or 0x358)");
-MODULE_PARM(radio_nr, "i");
+module_param(radio_nr, int, 0);
static void __exit cleanup_trust_module(void)
{
#include <linux/video_encoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
u8 block_data[32];
msg.addr = client->addr;
- msg.flags = client->flags;
+ msg.flags = 0;
while (len >= 2) {
msg.buf = (char *) block_data;
msg.len = 0;
#include <linux/video_encoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
u8 block_data[32];
msg.addr = client->addr;
- msg.flags = client->flags;
+ msg.flags = 0;
while (len >= 2) {
msg.buf = (char *) block_data;
msg.len = 0;
*/
static int ar_initialize(struct video_device *dev)
{
- struct ar_device *ar = (struct ar_device *)dev->priv;
+ struct ar_device *ar = dev->priv;
unsigned long cr = 0;
int i,found=0;
#include <linux/video_decoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
};
/* for values, see the bt819 datasheet */
-struct timing timing_data[] = {
+static struct timing timing_data[] = {
{864 - 24, 20, 625 - 2, 1, 0x0504, 0x0000},
{858 - 24, 20, 525 - 2, 1, 0x00f8, 0x0000},
};
u8 block_data[32];
msg.addr = client->addr;
- msg.flags = client->flags;
+ msg.flags = 0;
while (len >= 2) {
msg.buf = (char *) block_data;
msg.len = 0;
#include <linux/video_encoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
/*
- $Id: btcx-risc.c,v 1.4 2004/11/07 13:17:14 kraxel Exp $
+ $Id: btcx-risc.c,v 1.5 2004/12/10 12:33:39 kraxel Exp $
btcx-risc.c
*/
#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
/*
- $Id: bttv-if.c,v 1.3 2004/10/13 10:39:00 kraxel Exp $
+ $Id: bttv-if.c,v 1.4 2004/11/17 18:47:47 kraxel Exp $
bttv-if.c -- old gpio interface to other kernel modules
don't use in new code, will go away in 2.7
int bttv_get_cardinfo(unsigned int card, int *type, unsigned *cardid)
{
+ printk("The bttv_* interface is obsolete and will go away,\n"
+ "please use the new, sysfs based interface instead.\n");
if (card >= bttv_num) {
return -1;
}
int bttv_get_id(unsigned int card)
{
- printk("bttv_get_id is obsolete, use bttv_get_cardinfo instead\n");
+ printk("The bttv_* interface is obsolete and will go away,\n"
+ "please use the new, sysfs based interface instead.\n");
if (card >= bttv_num) {
return -1;
}
return &btv->gpioq;
}
+void bttv_i2c_call(unsigned int card, unsigned int cmd, void *arg)
+{
+ if (card >= bttv_num)
+ return;
+ bttv_call_i2c_clients(&bttvs[card], cmd, arg);
+}
+
/*
* Local variables:
* c-basic-offset: 8
MODULE_DESCRIPTION("Parallel port driver for Vision CPiA based cameras");
MODULE_LICENSE("GPL");
-MODULE_PARM(parport, "1-" __MODULE_STRING(PARPORT_MAX) "s");
+module_param_array(parport, charp, NULL, 0);
MODULE_PARM_DESC(parport, "'auto' or a list of parallel port numbers. Just like lp.");
#else
static int parport_nr[PARPORT_MAX] __initdata =
/* initialize driver struct */
init_MUTEX(&dev->lock);
- dev->slock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&dev->slock);
/* init dma queue */
INIT_LIST_HEAD(&dev->mpegq.active);
/* these are the available audio sources, which can switched
to the line- and cd-output individually */
-struct v4l2_audio mxb_audios[MXB_AUDIOS] = {
+static struct v4l2_audio mxb_audios[MXB_AUDIOS] = {
{
.index = 0,
.name = "Tuner",
#include <linux/video_decoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
#define I2C_SAA7110 0x9C /* or 0x9E */
+#define SAA7110_NR_REG 0x35
+
struct saa7110 {
- unsigned char reg[54];
+ u8 reg[SAA7110_NR_REG];
int norm;
int input;
unsigned int len)
{
int ret = -1;
- u8 reg = *data++;
+ u8 reg = *data; /* first register to write to */
- len--;
+ /* Sanity check */
+ if (reg + (len - 1) > SAA7110_NR_REG)
+ return ret;
/* the saa7110 has an autoincrement function, use it if
* the adapter understands raw I2C */
if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
struct saa7110 *decoder = i2c_get_clientdata(client);
struct i2c_msg msg;
- u8 block_data[54];
- msg.len = 0;
- msg.buf = (char *) block_data;
+ msg.len = len;
+ msg.buf = (char *) data;
msg.addr = client->addr;
- msg.flags = client->flags;
- while (len >= 1) {
- msg.len = 0;
- block_data[msg.len++] = reg;
- while (len-- >= 1 && msg.len < 54)
- block_data[msg.len++] =
- decoder->reg[reg++] = *data++;
- ret = i2c_transfer(client->adapter, &msg, 1);
- }
+ msg.flags = 0;
+ ret = i2c_transfer(client->adapter, &msg, 1);
+
+ /* Cache the written data */
+ memcpy(decoder->reg + reg, data + 1, len - 1);
} else {
- while (len-- >= 1) {
+ for (++data, --len; len; len--) {
if ((ret = saa7110_write(client, reg++,
*data++)) < 0)
break;
return 0;
}
-static const unsigned char initseq[] = {
+static const unsigned char initseq[1 + SAA7110_NR_REG] = {
0, 0x4C, 0x3C, 0x0D, 0xEF, 0xBD, 0xF2, 0x03, 0x00,
/* 0x08 */ 0xF8, 0xF8, 0x60, 0x60, 0x00, 0x86, 0x18, 0x90,
/* 0x10 */ 0x00, 0x59, 0x40, 0x46, 0x42, 0x1A, 0xFF, 0xDA,
#include <linux/video_decoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
u8 block_data[32];
msg.addr = client->addr;
- msg.flags = client->flags;
+ msg.flags = 0;
while (len >= 2) {
msg.buf = (char *) block_data;
msg.len = 0;
#include <linux/video_encoder.h>
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
u8 block_data[32];
msg.addr = client->addr;
- msg.flags = client->flags;
+ msg.flags = 0;
while (len >= 2) {
msg.buf = (char *) block_data;
msg.len = 0;
.name = "SAB3036",
};
-int __init
+static int __init
tuner3036_init(void)
{
i2c_add_driver(&i2c_driver_tuner);
return 0;
}
-void __exit
+static void __exit
tuner3036_exit(void)
{
i2c_del_driver(&i2c_driver_tuner);
MODULE_AUTHOR("Philip Blundell <philb@gnu.org>");
MODULE_LICENSE("GPL");
-MODULE_PARM(debug,"i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug,"Enable debugging output");
module_init(tuner3036_init);
#define I2C_TDA985x_H 0xb6
#define I2C_TDA9874 0xb0 /* also used by 9875 */
-#define I2C_TEA6300 0x80
+#define I2C_TEA6300 0x80 /* also used by 6320 */
#define I2C_TEA6420 0x98
#define I2C_PIC16C54 0x96 /* PV951 */
--- /dev/null
+/*
+ * tveeprom - eeprom decoder for tvcard configuration eeproms
+ *
+ * Data and decoding routines shamelessly borrowed from bttv-cards.c
+ * eeprom access routine shamelessly borrowed from bttv-if.c
+ * which are:
+
+ Copyright (C) 1996,97,98 Ralph Metzler (rjkm@thp.uni-koeln.de)
+ & Marcus Metzler (mocm@thp.uni-koeln.de)
+ (c) 1999-2001 Gerd Knorr <kraxel@goldbach.in-berlin.de>
+
+ * Adjustments to fit a more general model and all bugs:
+
+ Copyright (C) 2003 John Klar <linpvr at projectplasma.com>
+
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/i2c.h>
+
+#include <media/tuner.h>
+#include <media/tveeprom.h>
+
+MODULE_DESCRIPTION("i2c Hauppauge eeprom decoder driver");
+MODULE_AUTHOR("John Klar");
+MODULE_LICENSE("GPL");
+
+static int debug = 0;
+module_param(debug, int, 0644);
+MODULE_PARM_DESC(debug, "Debug level (0-2)");
+
+#define STRM(array,i) (i < sizeof(array)/sizeof(char*) ? array[i] : "unknown")
+
+#define dprintk(num, args...) \
+ do { \
+ if (debug >= num) \
+ printk(KERN_INFO "tveeprom: " args); \
+ } while (0)
+
+#define TVEEPROM_KERN_ERR(args...) printk(KERN_ERR "tveeprom: " args);
+#define TVEEPROM_KERN_INFO(args...) printk(KERN_INFO "tveeprom: " args);
+
+/* ----------------------------------------------------------------------- */
+/* some hauppauge specific stuff */
+
+static struct HAUPPAUGE_TUNER_FMT
+{
+ int id;
+ char *name;
+}
+hauppauge_tuner_fmt[] =
+{
+ { 0x00000000, "unknown1" },
+ { 0x00000000, "unknown2" },
+ { 0x00000007, "PAL(B/G)" },
+ { 0x00001000, "NTSC(M)" },
+ { 0x00000010, "PAL(I)" },
+	{ 0x00400000, "SECAM(L/L')" },
+ { 0x00000e00, "PAL(D/K)" },
+ { 0x03000000, "ATSC Digital" },
+};
+
+/* This is the full list of possible tuners. Many thanks to Hauppauge for
+   supplying this information. Note that many tuners were only used for
+ testing and never made it to the outside world. So you will only see
+ a subset in actual produced cards. */
+static struct HAUPPAUGE_TUNER
+{
+ int id;
+ char *name;
+}
+hauppauge_tuner[] =
+{
+ /* 0-9 */
+ { TUNER_ABSENT, "None" },
+ { TUNER_ABSENT, "External" },
+ { TUNER_ABSENT, "Unspecified" },
+ { TUNER_PHILIPS_PAL, "Philips FI1216" },
+ { TUNER_PHILIPS_SECAM, "Philips FI1216MF" },
+ { TUNER_PHILIPS_NTSC, "Philips FI1236" },
+ { TUNER_PHILIPS_PAL_I, "Philips FI1246" },
+ { TUNER_PHILIPS_PAL_DK,"Philips FI1256" },
+ { TUNER_PHILIPS_PAL, "Philips FI1216 MK2" },
+ { TUNER_PHILIPS_SECAM, "Philips FI1216MF MK2" },
+ /* 10-19 */
+ { TUNER_PHILIPS_NTSC, "Philips FI1236 MK2" },
+ { TUNER_PHILIPS_PAL_I, "Philips FI1246 MK2" },
+ { TUNER_PHILIPS_PAL_DK,"Philips FI1256 MK2" },
+ { TUNER_TEMIC_NTSC, "Temic 4032FY5" },
+ { TUNER_TEMIC_PAL, "Temic 4002FH5" },
+ { TUNER_TEMIC_PAL_I, "Temic 4062FY5" },
+ { TUNER_PHILIPS_PAL, "Philips FR1216 MK2" },
+ { TUNER_PHILIPS_SECAM, "Philips FR1216MF MK2" },
+ { TUNER_PHILIPS_NTSC, "Philips FR1236 MK2" },
+ { TUNER_PHILIPS_PAL_I, "Philips FR1246 MK2" },
+ /* 20-29 */
+ { TUNER_PHILIPS_PAL_DK,"Philips FR1256 MK2" },
+ { TUNER_PHILIPS_PAL, "Philips FM1216" },
+ { TUNER_PHILIPS_SECAM, "Philips FM1216MF" },
+ { TUNER_PHILIPS_NTSC, "Philips FM1236" },
+ { TUNER_PHILIPS_PAL_I, "Philips FM1246" },
+ { TUNER_PHILIPS_PAL_DK,"Philips FM1256" },
+ { TUNER_TEMIC_4036FY5_NTSC, "Temic 4036FY5" },
+ { TUNER_ABSENT, "Samsung TCPN9082D" },
+ { TUNER_ABSENT, "Samsung TCPM9092P" },
+ { TUNER_TEMIC_4006FH5_PAL, "Temic 4006FH5" },
+ /* 30-39 */
+ { TUNER_ABSENT, "Samsung TCPN9085D" },
+ { TUNER_ABSENT, "Samsung TCPB9085P" },
+ { TUNER_ABSENT, "Samsung TCPL9091P" },
+ { TUNER_TEMIC_4039FR5_NTSC, "Temic 4039FR5" },
+ { TUNER_PHILIPS_FQ1216ME, "Philips FQ1216 ME" },
+ { TUNER_TEMIC_4066FY5_PAL_I, "Temic 4066FY5" },
+ { TUNER_PHILIPS_NTSC, "Philips TD1536" },
+ { TUNER_PHILIPS_NTSC, "Philips TD1536D" },
+ { TUNER_PHILIPS_NTSC, "Philips FMR1236" }, /* mono radio */
+ { TUNER_ABSENT, "Philips FI1256MP" },
+ /* 40-49 */
+ { TUNER_ABSENT, "Samsung TCPQ9091P" },
+ { TUNER_TEMIC_4006FN5_MULTI_PAL, "Temic 4006FN5" },
+ { TUNER_TEMIC_4009FR5_PAL, "Temic 4009FR5" },
+ { TUNER_TEMIC_4046FM5, "Temic 4046FM5" },
+ { TUNER_TEMIC_4009FN5_MULTI_PAL_FM, "Temic 4009FN5" },
+ { TUNER_ABSENT, "Philips TD1536D FH 44"},
+ { TUNER_LG_NTSC_FM, "LG TP18NSR01F"},
+ { TUNER_LG_PAL_FM, "LG TP18PSB01D"},
+ { TUNER_LG_PAL, "LG TP18PSB11D"},
+ { TUNER_LG_PAL_I_FM, "LG TAPC-I001D"},
+ /* 50-59 */
+ { TUNER_LG_PAL_I, "LG TAPC-I701D"},
+ { TUNER_ABSENT, "Temic 4042FI5"},
+ { TUNER_MICROTUNE_4049FM5, "Microtune 4049 FM5"},
+ { TUNER_ABSENT, "LG TPI8NSR11F"},
+ { TUNER_ABSENT, "Microtune 4049 FM5 Alt I2C"},
+ { TUNER_ABSENT, "Philips FQ1216ME MK3"},
+ { TUNER_ABSENT, "Philips FI1236 MK3"},
+ { TUNER_PHILIPS_FM1216ME_MK3, "Philips FM1216 ME MK3"},
+ { TUNER_ABSENT, "Philips FM1236 MK3"},
+ { TUNER_ABSENT, "Philips FM1216MP MK3"},
+ /* 60-69 */
+ { TUNER_ABSENT, "LG S001D MK3"},
+ { TUNER_ABSENT, "LG M001D MK3"},
+ { TUNER_ABSENT, "LG S701D MK3"},
+ { TUNER_ABSENT, "LG M701D MK3"},
+ { TUNER_ABSENT, "Temic 4146FM5"},
+ { TUNER_ABSENT, "Temic 4136FY5"},
+ { TUNER_ABSENT, "Temic 4106FH5"},
+ { TUNER_ABSENT, "Philips FQ1216LMP MK3"},
+ { TUNER_LG_NTSC_TAPE, "LG TAPE H001F MK3"},
+ { TUNER_ABSENT, "LG TAPE H701F MK3"},
+ /* 70-79 */
+ { TUNER_ABSENT, "LG TALN H200T"},
+ { TUNER_ABSENT, "LG TALN H250T"},
+ { TUNER_ABSENT, "LG TALN M200T"},
+ { TUNER_ABSENT, "LG TALN Z200T"},
+ { TUNER_ABSENT, "LG TALN S200T"},
+ { TUNER_ABSENT, "Thompson DTT7595"},
+ { TUNER_ABSENT, "Thompson DTT7592"},
+ { TUNER_ABSENT, "Silicon TDA8275C1 8290"},
+ { TUNER_ABSENT, "Silicon TDA8275C1 8290 FM"},
+ { TUNER_ABSENT, "Thompson DTT757"},
+ /* 80-89 */
+ { TUNER_ABSENT, "Philips FQ1216LME MK3"},
+ { TUNER_ABSENT, "LG TAPC G701D"},
+ { TUNER_LG_NTSC_NEW_TAPC, "LG TAPC H791F"},
+ { TUNER_ABSENT, "TCL 2002MB 3"},
+ { TUNER_ABSENT, "TCL 2002MI 3"},
+ { TUNER_TCL_2002N, "TCL 2002N 6A"},
+ { TUNER_ABSENT, "Philips FQ1236 MK3"},
+ { TUNER_ABSENT, "Samsung TCPN 2121P30A"},
+ { TUNER_ABSENT, "Samsung TCPE 4121P30A"},
+ { TUNER_ABSENT, "TCL MFPE05 2"},
+ /* 90-99 */
+ { TUNER_ABSENT, "LG TALN H202T"},
+ { TUNER_ABSENT, "Philips FQ1216AME MK4"},
+ { TUNER_ABSENT, "Philips FQ1236A MK4"},
+ { TUNER_ABSENT, "Philips FQ1286A MK4"},
+ { TUNER_ABSENT, "Philips FQ1216ME MK5"},
+ { TUNER_ABSENT, "Philips FQ1236 MK5"},
+};
+
+static char *sndtype[] = {
+ "None", "TEA6300", "TEA6320", "TDA9850", "MSP3400C", "MSP3410D",
+ "MSP3415", "MSP3430", "MSP3438", "CS5331", "MSP3435", "MSP3440",
+ "MSP3445", "MSP3411", "MSP3416", "MSP3425",
+
+ "Type 0x10","Type 0x11","Type 0x12","Type 0x13",
+ "Type 0x14","Type 0x15","Type 0x16","Type 0x17",
+ "Type 0x18","MSP4418","Type 0x1a","MSP4448",
+ "Type 0x1c","Type 0x1d","Type 0x1e","Type 0x1f",
+};
+
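+/* Return 1 when the Hauppauge eeprom tuner index refers to a tuner model
+   that also contains an FM radio section (see the PNPEnv_TUNER_* names). */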
+static int hasRadioTuner(int tunerType)
+{
+ switch (tunerType) {
+ case 18: //PNPEnv_TUNER_FR1236_MK2:
+ case 23: //PNPEnv_TUNER_FM1236:
+ case 38: //PNPEnv_TUNER_FMR1236:
+ case 16: //PNPEnv_TUNER_FR1216_MK2:
+ case 19: //PNPEnv_TUNER_FR1246_MK2:
+ case 21: //PNPEnv_TUNER_FM1216:
+ case 24: //PNPEnv_TUNER_FM1246:
+ case 17: //PNPEnv_TUNER_FR1216MF_MK2:
+ case 22: //PNPEnv_TUNER_FM1216MF:
+ case 20: //PNPEnv_TUNER_FR1256_MK2:
+ case 25: //PNPEnv_TUNER_FM1256:
+ case 33: //PNPEnv_TUNER_4039FR5:
+ case 42: //PNPEnv_TUNER_4009FR5:
+ case 52: //PNPEnv_TUNER_4049FM5:
+ case 54: //PNPEnv_TUNER_4049FM5_AltI2C:
+ case 44: //PNPEnv_TUNER_4009FN5:
+ case 31: //PNPEnv_TUNER_TCPB9085P:
+ case 30: //PNPEnv_TUNER_TCPN9085D:
+ case 46: //PNPEnv_TUNER_TP18NSR01F:
+ case 47: //PNPEnv_TUNER_TP18PSB01D:
+ case 49: //PNPEnv_TUNER_TAPC_I001D:
+ case 60: //PNPEnv_TUNER_TAPE_S001D_MK3:
+ case 57: //PNPEnv_TUNER_FM1216ME_MK3:
+ case 59: //PNPEnv_TUNER_FM1216MP_MK3:
+ case 58: //PNPEnv_TUNER_FM1236_MK3:
+ case 68: //PNPEnv_TUNER_TAPE_H001F_MK3:
+ case 61: //PNPEnv_TUNER_TAPE_M001D_MK3:
+ case 78: //PNPEnv_TUNER_TDA8275C1_8290_FM:
+ case 89: //PNPEnv_TUNER_TCL_MFPE05_2:
+ return 1;
+ }
+ return 0;
+}
+
+void tveeprom_hauppauge_analog(struct tveeprom *tvee, unsigned char *eeprom_data)
+{
+ /* ----------------------------------------------
+ ** The hauppauge eeprom format is tagged
+ **
+ ** if packet[0] == 0x84, then packet[0..1] == length
+ ** else length = packet[0] & 3f;
+ ** if packet[0] & f8 == f8, then EOD and packet[1] == checksum
+ **
+ ** In our (ivtv) case we're interested in the following:
+ ** tuner type: tag [00].05 or [0a].01 (index into hauppauge_tuners)
+ ** tuner fmts: tag [00].04 or [0a].00 (bitmask index into hauppauge_fmts)
+ ** radio: tag [00].{last} or [0e].00 (bitmask. bit2=FM)
+ ** audio proc: tag [02].01 or [05].00 (lower nibble indexes lut?)
+
+ ** Fun info:
+ ** model: tag [00].07-08 or [06].00-01
+ ** revision: tag [00].09-0b or [06].04-06
+ ** serial#: tag [01].05-07 or [04].04-06
+
+ ** # of inputs/outputs ???
+ */
+
+ int i, j, len, done, tag, tuner = 0, t_format = 0;
+ char *t_name = NULL, *t_fmt_name = NULL;
+
+ dprintk(1, "%s\n",__FUNCTION__);
+ tvee->revision = done = len = 0;
+ for (i = 0; !done && i < 256; i += len) {
+ dprintk(2, "processing pos = %02x (%02x, %02x)\n",
+ i, eeprom_data[i], eeprom_data[i + 1]);
+
+ if (eeprom_data[i] == 0x84) {
+ len = eeprom_data[i + 1] + (eeprom_data[i + 2] << 8);
+ i+=3;
+ } else if ((eeprom_data[i] & 0xf0) == 0x70) {
+ if ((eeprom_data[i] & 0x08)) {
+ /* verify checksum! */
+ done = 1;
+ break;
+ }
+ len = eeprom_data[i] & 0x07;
+ ++i;
+ } else {
+ TVEEPROM_KERN_ERR("Encountered bad packet header [%02x]. "
+ "Corrupt or not a Hauppauge eeprom.\n", eeprom_data[i]);
+ return;
+ }
+
+ dprintk(1, "%3d [%02x] ", len, eeprom_data[i]);
+ for(j = 1; j < len; j++) {
+ dprintk(1, "%02x ", eeprom_data[i + j]);
+ }
+ dprintk(1, "\n");
+
+ /* process by tag */
+ tag = eeprom_data[i];
+ switch (tag) {
+ case 0x00:
+ tuner = eeprom_data[i+6];
+ t_format = eeprom_data[i+5];
+ tvee->has_radio = eeprom_data[i+len-1];
+ tvee->model =
+ eeprom_data[i+8] +
+ (eeprom_data[i+9] << 8);
+ tvee->revision = eeprom_data[i+10] +
+ (eeprom_data[i+11] << 8) +
+ (eeprom_data[i+12] << 16);
+ break;
+ case 0x01:
+ tvee->serial_number =
+ eeprom_data[i+6] +
+ (eeprom_data[i+7] << 8) +
+ (eeprom_data[i+8] << 16);
+ break;
+ case 0x02:
+ tvee->audio_processor = eeprom_data[i+2] & 0x0f;
+ break;
+ case 0x04:
+ tvee->serial_number =
+ eeprom_data[i+5] +
+ (eeprom_data[i+6] << 8) +
+ (eeprom_data[i+7] << 16);
+ break;
+ case 0x05:
+ tvee->audio_processor = eeprom_data[i+1] & 0x0f;
+ break;
+ case 0x06:
+ tvee->model =
+ eeprom_data[i+1] +
+ (eeprom_data[i+2] << 8);
+ tvee->revision = eeprom_data[i+5] +
+ (eeprom_data[i+6] << 8) +
+ (eeprom_data[i+7] << 16);
+ break;
+ case 0x0a:
+ tuner = eeprom_data[i+2];
+ t_format = eeprom_data[i+1];
+ break;
+ case 0x0e:
+ tvee->has_radio = eeprom_data[i+1];
+ break;
+ default:
+ dprintk(1, "Not sure what to do with tag [%02x]\n", tag);
+ /* dump the rest of the packet? */
+ }
+
+ }
+
+ if (!done) {
+ TVEEPROM_KERN_ERR("Ran out of data!\n");
+ return;
+ }
+
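+	/* The 24-bit revision packs four 6-bit characters; adding 32 maps
+	 * each of them into the printable ASCII range. */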
+ if (tvee->revision != 0) {
+ tvee->rev_str[0] = 32 + ((tvee->revision >> 18) & 0x3f);
+ tvee->rev_str[1] = 32 + ((tvee->revision >> 12) & 0x3f);
+ tvee->rev_str[2] = 32 + ((tvee->revision >> 6) & 0x3f);
+ tvee->rev_str[3] = 32 + ( tvee->revision & 0x3f);
+ tvee->rev_str[4] = 0;
+ }
+
+ if (hasRadioTuner(tuner) && !tvee->has_radio) {
+ TVEEPROM_KERN_INFO("The eeprom says no radio is present, but the tuner type\n");
+ TVEEPROM_KERN_INFO("indicates otherwise. I will assume that radio is present.\n");
+ tvee->has_radio = 1;
+ }
+
+ if (tuner < sizeof(hauppauge_tuner)/sizeof(struct HAUPPAUGE_TUNER)) {
+ tvee->tuner_type = hauppauge_tuner[tuner].id;
+ t_name = hauppauge_tuner[tuner].name;
+ } else {
+ t_name = "<unknown>";
+ }
+
+ tvee->tuner_formats = 0;
+ t_fmt_name = "<none>";
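+	/* t_format is a bitmask of supported standards: OR the matching
+	 * V4L2 format ids together (the name kept is just the last match). */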
+ for (i = 0; i < 8; i++) {
+ if (t_format & (1<<i)) {
+ tvee->tuner_formats |= hauppauge_tuner_fmt[i].id;
+ /* yuck */
+ t_fmt_name = hauppauge_tuner_fmt[i].name;
+ }
+ }
+
+#if 0
+ if (t_format < sizeof(hauppauge_tuner_fmt)/sizeof(struct HAUPPAUGE_TUNER_FMT)) {
+ tvee->tuner_formats = hauppauge_tuner_fmt[t_format].id;
+ t_fmt_name = hauppauge_tuner_fmt[t_format].name;
+ } else {
+ t_fmt_name = "<unknown>";
+ }
+#endif
+
+ TVEEPROM_KERN_INFO("Hauppauge: model = %d, rev = %s, serial# = %d\n",
+ tvee->model,
+ tvee->rev_str,
+ tvee->serial_number);
+ TVEEPROM_KERN_INFO("tuner = %s (idx = %d, type = %d)\n",
+ t_name,
+ tuner,
+ tvee->tuner_type);
+ TVEEPROM_KERN_INFO("tuner fmt = %s (eeprom = 0x%02x, v4l2 = 0x%08x)\n",
+ t_fmt_name,
+ t_format,
+ tvee->tuner_formats);
+
+ TVEEPROM_KERN_INFO("audio_processor = %s (type = %d)\n",
+ STRM(sndtype,tvee->audio_processor),
+ tvee->audio_processor);
+
+}
+EXPORT_SYMBOL(tveeprom_hauppauge_analog);
+
+/* ----------------------------------------------------------------------- */
+/* generic helper functions */
+
+int tveeprom_read(struct i2c_client *c, unsigned char *eedata, int len)
+{
+ unsigned char buf;
+ int err;
+
+ dprintk(1, "%s\n",__FUNCTION__);
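+	/* Writing a single zero byte first positions the eeprom's internal
+	 * address pointer at offset 0 before reading the contents back. */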
+ buf = 0;
+ if (1 != (err = i2c_master_send(c,&buf,1))) {
+ printk(KERN_INFO "tveeprom(%s): Huh, no eeprom present (err=%d)?\n",
+ c->name,err);
+ return -1;
+ }
+ if (len != (err = i2c_master_recv(c,eedata,len))) {
+ printk(KERN_WARNING "tveeprom(%s): i2c eeprom read error (err=%d)\n",
+ c->name,err);
+ return -1;
+ }
+ return 0;
+}
+EXPORT_SYMBOL(tveeprom_read);
+
+int tveeprom_dump(unsigned char *eedata, int len)
+{
+ int i;
+
+ dprintk(1, "%s\n",__FUNCTION__);
+ for (i = 0; i < len; i++) {
+ if (0 == (i % 16))
+ printk(KERN_INFO "tveeprom: %02x:",i);
+ printk(" %02x",eedata[i]);
+ if (15 == (i % 16))
+ printk("\n");
+ }
+ return 0;
+}
+EXPORT_SYMBOL(tveeprom_dump);
+
+/* ----------------------------------------------------------------------- */
+/* needed for ivtv.sf.net at the moment. Should go away in the long */
+/* run, just call the exported tveeprom_* directly, there is no point in */
+/* using the indirect way via i2c_driver->command() */
+
+#ifndef I2C_DRIVERID_TVEEPROM
+# define I2C_DRIVERID_TVEEPROM I2C_DRIVERID_EXP2
+#endif
+
+static unsigned short normal_i2c[] = {
+ 0xa0 >> 1,
+ I2C_CLIENT_END,
+};
+static unsigned short normal_i2c_range[] = { I2C_CLIENT_END };
+I2C_CLIENT_INSMOD;
+
+struct i2c_driver i2c_driver_tveeprom;
+
+static int
+tveeprom_command(struct i2c_client *client,
+ unsigned int cmd,
+ void *arg)
+{
+ struct tveeprom eeprom;
+ u32 *eeprom_props = arg;
+ u8 *buf;
+
+ switch (cmd) {
+ case 0:
+ buf = kmalloc(256,GFP_KERNEL);
+ memset(buf,0,256);
+ tveeprom_read(client,buf,256);
+ tveeprom_hauppauge_analog(&eeprom,buf);
+ kfree(buf);
+ eeprom_props[0] = eeprom.tuner_type;
+ eeprom_props[1] = eeprom.tuner_formats;
+ eeprom_props[2] = eeprom.model;
+ eeprom_props[3] = eeprom.revision;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int
+tveeprom_detect_client(struct i2c_adapter *adapter,
+ int address,
+ int kind)
+{
+ struct i2c_client *client;
+
+ dprintk(1,"%s: id 0x%x @ 0x%x\n",__FUNCTION__,
+ adapter->id, address << 1);
+ client = kmalloc(sizeof(struct i2c_client), GFP_KERNEL);
+ if (NULL == client)
+ return -ENOMEM;
+ memset(client, 0, sizeof(struct i2c_client));
+ client->addr = address;
+ client->adapter = adapter;
+ client->driver = &i2c_driver_tveeprom;
+ client->flags = I2C_CLIENT_ALLOW_USE;
+ snprintf(client->name, sizeof(client->name), "tveeprom");
+ i2c_attach_client(client);
+ return 0;
+}
+
+static int
+tveeprom_attach_adapter (struct i2c_adapter *adapter)
+{
+ dprintk(1,"%s: id 0x%x\n",__FUNCTION__,adapter->id);
+ if (adapter->id != (I2C_ALGO_BIT | I2C_HW_B_BT848))
+ return 0;
+ return i2c_probe(adapter, &addr_data, tveeprom_detect_client);
+}
+
+static int
+tveeprom_detach_client (struct i2c_client *client)
+{
+ int err;
+
+ err = i2c_detach_client(client);
+ if (err < 0)
+ return err;
+ kfree(client);
+ return 0;
+}
+
+struct i2c_driver i2c_driver_tveeprom = {
+ .owner = THIS_MODULE,
+ .name = "tveeprom",
+ .id = I2C_DRIVERID_TVEEPROM,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = tveeprom_attach_adapter,
+ .detach_client = tveeprom_detach_client,
+ .command = tveeprom_command,
+};
+
+static int __init tveeprom_init(void)
+{
+ return i2c_add_driver(&i2c_driver_tveeprom);
+}
+
+static void __exit tveeprom_exit(void)
+{
+ i2c_del_driver(&i2c_driver_tveeprom);
+}
+
+module_init(tveeprom_init);
+module_exit(tveeprom_exit);
+
+/*
+ * Local variables:
+ * c-basic-offset: 8
+ * End:
+ */
/*
- * $Id: video-buf-dvb.c,v 1.5 2004/11/07 13:17:15 kraxel Exp $
+ * $Id: video-buf-dvb.c,v 1.7 2004/12/09 12:51:35 kraxel Exp $
*
* some helper function for simple DVB cards which simply DMA the
* complete transport stream and let the computer sort everything else
MODULE_PARM_DESC(debug,"enable debug messages");
#define dprintk(fmt, arg...) if (debug) \
- printk(KERN_DEBUG "%s/dvb: " fmt, dvb->name, ## arg)
+ printk(KERN_DEBUG "%s/dvb: " fmt, dvb->name , ## arg)
/* ------------------------------------------------------------------ */
/* ------------------------------------------------------------------ */
-int videobuf_dvb_register(struct videobuf_dvb *dvb)
+int videobuf_dvb_register(struct videobuf_dvb *dvb,
+ struct module *module,
+ void *adapter_priv)
{
int result;
init_MUTEX(&dvb->lock);
/* register adapter */
- result = dvb_register_adapter(&dvb->adapter, dvb->name, THIS_MODULE);
+ result = dvb_register_adapter(&dvb->adapter, dvb->name, module);
if (result < 0) {
printk(KERN_WARNING "%s: dvb_register_adapter failed (errno = %d)\n",
dvb->name, result);
goto fail_adapter;
}
+ dvb->adapter->priv = adapter_priv;
/* register frontend */
result = dvb_register_frontend(dvb->adapter, dvb->frontend);
u16 Wt, Wa, HStart, HSyncStart, Ht, Ha, VStart;
};
+struct jpeg_com_marker {
+ int len; /* number of usable bytes in data */
+ char data[60];
+};
+
+struct jpeg_app_marker {
+ int appn; /* number app segment */
+ int len; /* number of usable bytes in data */
+ char data[60];
+};
+
struct videocodec {
struct module *owner;
/* -- filled in by slave device during register -- */
-/* $Id: vino.c,v 1.5 1999/10/09 00:01:14 ralf Exp $
- * drivers/char/vino.c
+/*
+ * (incomplete) Driver for the VINO (Video In No Out) system found in SGI Indys.
*
- * (incomplete) Driver for the Vino Video input system found in SGI Indys.
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License version 2 as published by the Free Software Foundation.
*
- * Copyright (C) 1999 Ulf Carlsson (ulfc@bun.falkenberg.se)
- *
- * This isn't complete yet, please don't expect any video until I've written
- * some more code.
+ * Copyright (C) 2003 Ladislav Michl <ladis@linux-mips.org>
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/wrapper.h>
#include <linux/errno.h>
+#include <linux/irq.h>
+#include <linux/delay.h>
#include <linux/videodev.h>
+#include <linux/i2c.h>
+#include <linux/i2c-algo-sgi.h>
#include <asm/addrspace.h>
#include <asm/system.h>
+#include <asm/bootinfo.h>
+#include <asm/pgtable.h>
+#include <asm/paccess.h>
+#include <asm/io.h>
+#include <asm/sgi/ip22.h>
+#include <asm/sgi/hpc3.h>
+#include <asm/sgi/mc.h>
#include "vino.h"
+/* debugging? */
+#if 1
+#define DEBUG(x...)	printk(x)
+#else
+#define DEBUG(x...)
+#endif
+
+
+/* VINO ASIC registers */
+struct sgi_vino *vino;
+
+static const char *vinostr = "VINO IndyCam/TV";
+static int threshold_a = 512;
+static int threshold_b = 512;
+
struct vino_device {
struct video_device vdev;
+#define VINO_CHAN_A 1
+#define VINO_CHAN_B 2
+ int chan;
+};
- unsigned long chan;
-#define VINO_CHAN_A 0
-#define VINO_CHAN_B 1
-
- unsigned long flags;
-#define VINO_DMA_ACTIVE (1<<0)
+struct vino_client {
+ struct i2c_client *driver;
+ int owner;
};
-/* We can actually receive TV and IndyCam input at the same time. Believe it or
- * not..
- */
-static struct vino_device vino[2];
+struct vino_video {
+ struct vino_device chA;
+ struct vino_device chB;
-/* Those registers have to be accessed by either *one* 64 bit write or *one* 64
- * bit read. We need some asm to fix this. We can't use mips3 as standard
- * because we just save 32 bits at context switch.
- */
+ struct vino_client decoder;
+ struct vino_client camera;
-static __inline__ unsigned long long vino_reg_read(unsigned long addr)
-{
- unsigned long long ret __attribute__ ((aligned (64)));
- unsigned long virt_addr = KSEG1ADDR(addr + VINO_BASE);
- unsigned long flags;
-
- save_and_cli(flags);
- __asm__ __volatile__(
- ".set\tmips3\n\t"
- ".set\tnoat\n\t"
- "ld\t$1,(%0)\n\t"
- "sd\t$1,(%1)\n\t"
- ".set\tat\n\t"
- ".set\tmips0"
- :
- :"r" (virt_addr),
- "r" (&ret)
- :"$1");
- restore_flags(flags);
+ struct semaphore input_lock;
- return ret;
+ /* Loaded into VINO descriptors to clear End Of Descriptors table
+	 * interrupt condition */
+ unsigned long dummy_page;
+ unsigned int dummy_buf[4] __attribute__((aligned(8)));
+};
+
+static struct vino_video *Vino;
+
+unsigned i2c_vino_getctrl(void *data)
+{
+ return vino->i2c_control;
}
-static __inline__ void vino_reg_write(unsigned long long value,
- unsigned long addr)
+void i2c_vino_setctrl(void *data, unsigned val)
{
- unsigned long virt_addr = KSEG1ADDR(addr + VINO_BASE);
- unsigned long flags;
-
- /* we might lose the upper parts of the registers which are not saved
- * if there comes an interrupt in our way, play safe */
-
- save_and_cli(flags);
- __asm__ __volatile__(
- ".set\tmips3\n\t"
- ".set\tnoat\n\t"
- "ld\t$1,(%0)\n\t"
- "sd\t$1,(%1)\n\t"
- ".set\tat\n\t"
- ".set\tmips0"
- :
- :"r" (&value),
- "r" (virt_addr)
- :"$1");
- restore_flags(flags);
+ vino->i2c_control = val;
}
-static __inline__ void vino_reg_and(unsigned long long value,
- unsigned long addr)
+unsigned i2c_vino_rdata(void *data)
{
- unsigned long virt_addr = KSEG1ADDR(addr + VINO_BASE);
- unsigned long flags;
-
- save_and_cli(flags);
- __asm__ __volatile__(
- ".set\tmips3\n\t"
- ".set\tnoat\n\t"
- "ld\t$1,(%0)\n\t"
- "ld\t$2,(%1)\n\t"
- "and\t$1,$1,$2\n\t"
- "sd\t$1,(%0)\n\t"
- ".set\tat\n\t"
- ".set\tmips0"
- :
- :"r" (virt_addr),
- "r" (&value)
- :"$1","$2");
- restore_flags(flags);
+ return vino->i2c_data;
}
-static __inline__ void vino_reg_or(unsigned long long value,
- unsigned long addr)
+void i2c_vino_wdata(void *data, unsigned val)
{
- unsigned long virt_addr = KSEG1ADDR(addr + VINO_BASE);
- unsigned long flags;
-
- save_and_cli(flags);
- __asm__ __volatile__(
- ".set\tmips3\n\t"
- ".set\tnoat\n\t"
- "ld\t$1,(%0)\n\t"
- "ld\t$2,(%1)\n\t"
- "or\t$1,$1,$2\n\t"
- "sd\t$1,(%0)\n\t"
- ".set\tat\n\t"
- ".set\tmips0"
- :
- :"r" (virt_addr),
- "r" (&value)
- :"$1","$2");
- restore_flags(flags);
+ vino->i2c_data = val;
}
-static int vino_dma_setup(void)
+static struct i2c_algo_sgi_data i2c_sgi_vino_data =
{
- return 0;
+ .getctrl = &i2c_vino_getctrl,
+ .setctrl = &i2c_vino_setctrl,
+ .rdata = &i2c_vino_rdata,
+ .wdata = &i2c_vino_wdata,
+ .xfer_timeout = 200,
+ .ack_timeout = 1000,
+};
+
+/*
+ * There are two possible clients on the VINO I2C bus, so we limit usage only
+ * to them.
+ */
+static int i2c_vino_client_reg(struct i2c_client *client)
+{
+ int res = 0;
+
+ down(&Vino->input_lock);
+ switch (client->driver->id) {
+ case I2C_DRIVERID_SAA7191:
+ if (Vino->decoder.driver)
+ res = -EBUSY;
+ else
+ Vino->decoder.driver = client;
+ break;
+ case I2C_DRIVERID_INDYCAM:
+ if (Vino->camera.driver)
+ res = -EBUSY;
+ else
+ Vino->camera.driver = client;
+ break;
+ default:
+ res = -ENODEV;
+ }
+ up(&Vino->input_lock);
+
+ return res;
}
-static void vino_dma_stop(void)
+static int i2c_vino_client_unreg(struct i2c_client *client)
{
+ int res = 0;
+
+ down(&Vino->input_lock);
+ if (client == Vino->decoder.driver) {
+ if (Vino->decoder.owner)
+ res = -EBUSY;
+ else
+ Vino->decoder.driver = NULL;
+ } else if (client == Vino->camera.driver) {
+ if (Vino->camera.owner)
+ res = -EBUSY;
+ else
+ Vino->camera.driver = NULL;
+ }
+ up(&Vino->input_lock);
+ return res;
}
-static int vino_init(void)
+static struct i2c_adapter vino_i2c_adapter =
{
- unsigned long ret;
- unsigned short rev, id;
- unsigned long long foo;
- unsigned long *bar;
-
- bar = (unsigned long *) &foo;
-
- ret = vino_reg_read(VINO_REVID);
-
- rev = (ret & VINO_REVID_REV_MASK);
- id = (ret & VINO_REVID_ID_MASK) >> 4;
-
- printk("Vino: ID:%02hx Rev:%02hx\n", id, rev);
-
- foo = vino_reg_read(VINO_A_DESC_DATA0);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_A_DESC_DATA1);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_A_DESC_DATA2);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_A_DESC_DATA3);
- printk("0x%lx", bar[0]);
- printk("%lx\n", bar[1]);
- foo = vino_reg_read(VINO_B_DESC_DATA0);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_B_DESC_DATA1);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_B_DESC_DATA2);
- printk("0x%lx", bar[0]);
- printk("%lx ", bar[1]);
- foo = vino_reg_read(VINO_B_DESC_DATA3);
- printk("0x%lx", bar[0]);
- printk("%lx\n", bar[1]);
+ .name = "VINO I2C bus",
+ .id = I2C_HW_SGI_VINO,
+ .algo_data = &i2c_sgi_vino_data,
+ .client_register = &i2c_vino_client_reg,
+ .client_unregister = &i2c_vino_client_unreg,
+};
- return 0;
+static int vino_i2c_add_bus(void)
+{
+ return i2c_sgi_add_bus(&vino_i2c_adapter);
}
-static void vino_dma_go(struct vino_device *v)
+static int vino_i2c_del_bus(void)
{
-
+ return i2c_sgi_del_bus(&vino_i2c_adapter);
}
-/* Reset the vino back to default state */
-static void vino_setup(struct vino_device *v)
+static void vino_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
-
}
static int vino_open(struct video_device *dev, int flags)
{
+ struct vino_device *videv = (struct vino_device *)dev;
+
return 0;
}
static void vino_close(struct video_device *dev)
{
+ struct vino_device *videv = (struct vino_device *)dev;
}
-static int vino_ioctl(struct video_device *dev, unsigned int cmd, void *arg)
+static int vino_mmap(struct video_device *dev, const char *adr,
+ unsigned long size)
{
- return 0;
+ struct vino_device *videv = (struct vino_device *)dev;
+
+ return -EINVAL;
}
-static int vino_mmap(struct video_device *dev, const char *adr,
- unsigned long size)
+static int vino_ioctl(struct video_device *dev, unsigned int cmd, void *arg)
{
- return 0;
+ struct vino_device *videv = (struct vino_device *)dev;
+
+ return -EINVAL;
}
-static struct video_device vino_dev = {
+static const struct video_device vino_device = {
.owner = THIS_MODULE,
- .name = "Vino IndyCam/TV",
- .type = VID_TYPE_CAPTURE,
+ .type = VID_TYPE_CAPTURE | VID_TYPE_SUBCAPTURE,
.hardware = VID_HARDWARE_VINO,
+ .name = "VINO",
.open = vino_open,
.close = vino_close,
.ioctl = vino_ioctl,
.mmap = vino_mmap,
};
-int __init init_vino(struct video_device *dev)
+static int __init vino_init(void)
{
- int err;
+ unsigned long rev;
+ int i, ret = 0;
- err = vino_init();
- if (err)
- return err;
+	/* VINO is an Indy-specific beast */
+ if (ip22_is_fullhouse())
+ return -ENODEV;
-#if 0
- if (video_register_device(&vinodev, VFL_TYPE_GRABBER) == -1) {
+ /*
+ * VINO is in the EISA address space, so the sysid register will tell
+ * us if the EISA_PRESENT pin on MC has been pulled low.
+ *
+	 * If EISA_PRESENT is not set we definitely don't have a VINO-equipped
+ * system.
+ */
+ if (!(sgimc->systemid & SGIMC_SYSID_EPRESENT)) {
+ printk(KERN_ERR "VINO not found\n");
return -ENODEV;
}
-#endif
- return 0;
-}
+ vino = (struct sgi_vino *)ioremap(VINO_BASE, sizeof(struct sgi_vino));
+ if (!vino)
+ return -EIO;
-#ifdef MODULE
-int init_module(void)
-{
- int err;
+ /* Okay, once we know that VINO is present we'll read its revision
+	 * the safe way. One never knows... */
+ if (get_dbe(rev, &(vino->rev_id))) {
+ printk(KERN_ERR "VINO: failed to read revision register\n");
+ ret = -ENODEV;
+ goto out_unmap;
+ }
+ if (VINO_ID_VALUE(rev) != VINO_CHIP_ID) {
+ printk(KERN_ERR "VINO is not VINO (Rev/ID: 0x%04lx)\n", rev);
+ ret = -ENODEV;
+ goto out_unmap;
+ }
+ printk(KERN_INFO "VINO Rev: 0x%02lx\n", VINO_REV_NUM(rev));
- err = vino_init();
- if (err)
- return err;
+ Vino = (struct vino_video *)
+ kmalloc(sizeof(struct vino_video), GFP_KERNEL);
+ if (!Vino) {
+ ret = -ENOMEM;
+ goto out_unmap;
+ }
+
+ Vino->dummy_page = get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ if (!Vino->dummy_page) {
+ ret = -ENOMEM;
+ goto out_free_vino;
+ }
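+	/* Point all four dummy descriptors at the dummy page so the VINO
+	 * DMA engine always has a harmless place to write. */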
+ for (i = 0; i < 4; i++)
+ Vino->dummy_buf[i] = PHYSADDR(Vino->dummy_page);
+
+ vino->control = 0;
+ /* prevent VINO from throwing spurious interrupts */
+ vino->a.next_4_desc = PHYSADDR(Vino->dummy_buf);
+ vino->b.next_4_desc = PHYSADDR(Vino->dummy_buf);
+ udelay(5);
+ vino->intr_status = 0;
+ /* set threshold level */
+ vino->a.fifo_thres = threshold_a;
+ vino->b.fifo_thres = threshold_b;
+
+ init_MUTEX(&Vino->input_lock);
+
+ if (request_irq(SGI_VINO_IRQ, vino_interrupt, 0, vinostr, NULL)) {
+ printk(KERN_ERR "VINO: irq%02d registration failed\n",
+ SGI_VINO_IRQ);
+ ret = -EAGAIN;
+ goto out_free_page;
+ }
+
+ ret = vino_i2c_add_bus();
+ if (ret) {
+ printk(KERN_ERR "VINO: I2C bus registration failed\n");
+ goto out_free_irq;
+ }
+
+ if (video_register_device(&Vino->chA.vdev, VFL_TYPE_GRABBER, -1) < 0) {
+ printk("%s, chnl %d: device registration failed.\n",
+ Vino->chA.vdev.name, Vino->chA.chan);
+ ret = -EINVAL;
+ goto out_i2c_del_bus;
+ }
+ if (video_register_device(&Vino->chB.vdev, VFL_TYPE_GRABBER, -1) < 0) {
+ printk("%s, chnl %d: device registration failed.\n",
+ Vino->chB.vdev.name, Vino->chB.chan);
+ ret = -EINVAL;
+ goto out_unregister_vdev;
+ }
return 0;
+
+out_unregister_vdev:
+ video_unregister_device(&Vino->chA.vdev);
+out_i2c_del_bus:
+ vino_i2c_del_bus();
+out_free_irq:
+ free_irq(SGI_VINO_IRQ, NULL);
+out_free_page:
+ free_page(Vino->dummy_page);
+out_free_vino:
+ kfree(Vino);
+out_unmap:
+ iounmap(vino);
+
+ return ret;
}
-void cleanup_module(void)
+static void __exit vino_exit(void)
{
+ video_unregister_device(&Vino->chA.vdev);
+ video_unregister_device(&Vino->chB.vdev);
+ vino_i2c_del_bus();
+ free_irq(SGI_VINO_IRQ, NULL);
+ free_page(Vino->dummy_page);
+ kfree(Vino);
+ iounmap(vino);
}
-#endif
+
+module_init(vino_init);
+module_exit(vino_exit);
+
+MODULE_DESCRIPTION("Video4Linux driver for SGI Indy VINO (IndyCam)");
+MODULE_LICENSE("GPL");
/*
- * Copyright (C) 1999 Ulf Carlsson (ulfc@bun.falkenberg.se)
- * Copyright (C) 2001 Ralf Baechle (ralf@gnu.org)
+ * Copyright (C) 1999 Ulf Karlsson <ulfc@bun.falkenberg.se>
+ * Copyright (C) 2003 Ladislav Michl <ladis@linux-mips.org>
*/
-#define VINO_BASE 0x00080000 /* In EISA address space */
-
-#define VINO_REVID 0x0000
-#define VINO_CTRL 0x0008
-#define VINO_INTSTAT 0x0010 /* Interrupt status */
-#define VINO_I2C_CTRL 0x0018
-#define VINO_I2C_DATA 0x0020
-#define VINO_A_ALPHA 0x0028 /* Channel A ... */
-#define VINO_A_CLIPS 0x0030 /* Clipping start */
-#define VINO_A_CLIPE 0x0038 /* Clipping end */
-#define VINO_A_FRAMERT 0x0040 /* Framerate */
-#define VINO_A_FLDCNT 0x0048 /* Field counter */
-#define VINO_A_LNSZ 0x0050
-#define VINO_A_LNCNT 0x0058
-#define VINO_A_PGIX 0x0060 /* Page index */
-#define VINO_A_DESC_PTR 0x0068 /* Ptr to next four descriptors */
-#define VINO_A_DESC_TLB_PTR 0x0070 /* Ptr to start of descriptor table */
-#define VINO_A_DESC_DATA0 0x0078 /* Descriptor data 0 */
-#define VINO_A_DESC_DATA1 0x0080 /* ... */
-#define VINO_A_DESC_DATA2 0x0088
-#define VINO_A_DESC_DATA3 0x0090
-#define VINO_A_FIFO_THRESHOLD 0x0098 /* FIFO threshold */
-#define VINO_A_FIFO_RP 0x00a0
-#define VINO_A_FIFO_WP 0x00a8
-#define VINO_B_ALPHA 0x00b0 /* Channel B ... */
-#define VINO_B_CLIPS 0x00b8
-#define VINO_B_CLIPE 0x00c0
-#define VINO_B_FRAMERT 0x00c8
-#define VINO_B_FLDCNT 0x00d0
-#define VINO_B_LNSZ 0x00d8
-#define VINO_B_LNCNT 0x00e0
-#define VINO_B_PGIX 0x00e8
-#define VINO_B_DESC_PTR 0x00f0
-#define VINO_B_DESC_TLB_PTR 0x00f8
-#define VINO_B_DESC_DATA0 0x0100
-#define VINO_B_DESC_DATA1 0x0108
-#define VINO_B_DESC_DATA2 0x0110
-#define VINO_B_DESC_DATA3 0x0118
-#define VINO_B_FIFO_THRESHOLD 0x0120
-#define VINO_B_FIFO_RP 0x0128
-#define VINO_B_FIFO_WP 0x0130
-
-/* Bits in the VINO_REVID register */
-
-#define VINO_REVID_REV_MASK 0x000f /* bits 0:3 */
-#define VINO_REVID_ID_MASK 0x00f0 /* bits 4:7 */
-
-/* Bits in the VINO_CTRL register */
+#ifndef VINO_H
+#define VINO_H
+
+#define VINO_BASE 0x00080000 /* Vino is in the EISA address space,
+ * but it is not an EISA bus card */
+
+struct sgi_vino_channel {
+ u32 _pad_alpha;
+ volatile u32 alpha;
+
+#define VINO_CLIP_X(x) ((x) & 0x3ff) /* bits 0:9 */
+#define VINO_CLIP_ODD(x) (((x) & 0x1ff) << 10) /* bits 10:18 */
+#define VINO_CLIP_EVEN(x) (((x) & 0x1ff) << 19) /* bits 19:27 */
+ u32 _pad_clip_start;
+ volatile u32 clip_start;
+ u32 _pad_clip_end;
+ volatile u32 clip_end;
+
+#define VINO_FRAMERT_PAL (1<<0) /* 0=NTSC 1=PAL */
+#define VINO_FRAMERT_RT(x) (((x) & 0x1fff) << 1) /* bits 1:12 */
+ u32 _pad_frame_rate;
+ volatile u32 frame_rate;
+
+ u32 _pad_field_counter;
+ volatile u32 field_counter;
+ u32 _pad_line_size;
+ volatile u32 line_size;
+ u32 _pad_line_count;
+ volatile u32 line_count;
+ u32 _pad_page_index;
+ volatile u32 page_index;
+ u32 _pad_next_4_desc;
+ volatile u32 next_4_desc;
+ u32 _pad_start_desc_tbl;
+ volatile u32 start_desc_tbl;
+
+#define VINO_DESC_JUMP (1<<30)
+#define VINO_DESC_STOP (1<<31)
+#define VINO_DESC_VALID (1<<32)
+ u32 _pad_desc_0;
+ volatile u32 desc_0;
+ u32 _pad_desc_1;
+ volatile u32 desc_1;
+ u32 _pad_desc_2;
+ volatile u32 desc_2;
+ u32 _pad_Bdesc_3;
+ volatile u32 desc_3;
+
+ u32 _pad_fifo_thres;
+ volatile u32 fifo_thres;
+ u32 _pad_fifo_read;
+ volatile u32 fifo_read;
+ u32 _pad_fifo_write;
+ volatile u32 fifo_write;
+};
+
+struct sgi_vino {
+#define VINO_CHIP_ID 0xb
+#define VINO_REV_NUM(x) ((x) & 0x0f)
+#define VINO_ID_VALUE(x) (((x) & 0xf0) >> 4)
+ u32 _pad_rev_id;
+ volatile u32 rev_id;
#define VINO_CTRL_LITTLE_ENDIAN (1<<0)
#define VINO_CTRL_A_FIELD_TRANS_INT (1<<1) /* Field transferred int */
#define VINO_CTRL_A_FIFO_OF_INT (1<<2) /* FIFO overflow int */
#define VINO_CTRL_A_END_DESC_TBL_INT (1<<3) /* End of desc table int */
+#define VINO_CTRL_A_INT (VINO_CTRL_A_FIELD_TRANS_INT | \
+ VINO_CTRL_A_FIFO_OF_INT | \
+ VINO_CTRL_A_END_DESC_TBL_INT)
#define VINO_CTRL_B_FIELD_TRANS_INT (1<<4) /* Field transferred int */
#define VINO_CTRL_B_FIFO_OF_INT (1<<5) /* FIFO overflow int */
-#define VINO_CTRL_B_END_DESC_TLB_INT (1<<6) /* End of desc table int */
+#define VINO_CTRL_B_END_DESC_TBL_INT (1<<6) /* End of desc table int */
+#define VINO_CTRL_B_INT (VINO_CTRL_B_FIELD_TRANS_INT | \
+ VINO_CTRL_B_FIFO_OF_INT | \
+ VINO_CTRL_B_END_DESC_TBL_INT)
#define VINO_CTRL_A_DMA_ENBL (1<<7)
#define VINO_CTRL_A_INTERLEAVE_ENBL (1<<8)
#define VINO_CTRL_A_SYNC_ENBL (1<<9)
#define VINO_CTRL_A_LUMA_ONLY (1<<12)
#define VINO_CTRL_A_DEC_ENBL (1<<13) /* Decimation */
#define VINO_CTRL_A_DEC_SCALE_MASK 0x1c000 /* bits 14:17 */
+#define VINO_CTRL_A_DEC_SCALE_SHIFT (14)
#define VINO_CTRL_A_DEC_HOR_ONLY (1<<17) /* Horizontal only */
#define VINO_CTRL_A_DITHER (1<<18) /* 24 -> 8 bit dither */
#define VINO_CTRL_B_DMA_ENBL (1<<19)
#define VINO_CTRL_B_INTERLEAVE_ENBL (1<<20)
#define VINO_CTRL_B_SYNC_ENBL (1<<21)
#define VINO_CTRL_B_SELECT (1<<22) /* 1=D1 0=Philips */
-#define VINO_CTRL_B_RGB (1<<22) /* 1=RGB 0=YUV */
-#define VINO_CTRL_B_LUMA_ONLY (1<<23)
-#define VINO_CTRL_B_DEC_ENBL (1<<24) /* Decimation */
-#define VINO_CTRL_B_DEC_SCALE_MASK 0x1c000000 /* bits 25:28 */
+#define VINO_CTRL_B_RGB (1<<23) /* 1=RGB 0=YUV */
+#define VINO_CTRL_B_LUMA_ONLY (1<<24)
+#define VINO_CTRL_B_DEC_ENBL (1<<25) /* Decimation */
+#define VINO_CTRL_B_DEC_SCALE_MASK 0x1c000000 /* bits 26:28 */
+#define VINO_CTRL_B_DEC_SCALE_SHIFT (26)
#define VINO_CTRL_B_DEC_HOR_ONLY (1<<29) /* Decimation horizontal only */
#define VINO_CTRL_B_DITHER (1<<30) /* ChanB 24 -> 8 bit dither */
-
-/* Bits in the Interrupt and Status register */
+ u32 _pad_control;
+ volatile u32 control;
#define VINO_INTSTAT_A_FIELD_TRANS (1<<0) /* Field transferred int */
#define VINO_INTSTAT_A_FIFO_OF (1<<1) /* FIFO overflow int */
#define VINO_INTSTAT_A_END_DESC_TBL (1<<2) /* End of desc table int */
+#define VINO_INTSTAT_A (VINO_INTSTAT_A_FIELD_TRANS | \
+ VINO_INTSTAT_A_FIFO_OF | \
+ VINO_INTSTAT_A_END_DESC_TBL)
#define VINO_INTSTAT_B_FIELD_TRANS (1<<3) /* Field transferred int */
#define VINO_INTSTAT_B_FIFO_OF (1<<4) /* FIFO overflow int */
#define VINO_INTSTAT_B_END_DESC_TBL (1<<5) /* End of desc table int */
-
-/* Bits in the Clipping Start register */
-
-#define VINO_CLIPS_START 0x3ff /* bits 0:9 */
-#define VINO_CLIPS_ODD_MASK 0x7fc00 /* bits 10:18 */
-#define VINO_CLIPS_EVEN_MASK 0xff80000 /* bits 19:27 */
-
-/* Bits in the Clipping End register */
-
-#define VINO_CLIPE_END 0x3ff /* bits 0:9 */
-#define VINO_CLIPE_ODD_MASK 0x7fc00 /* bits 10:18 */
-#define VINO_CLIPE_EVEN_MASK 0xff80000 /* bits 19:27 */
-
-/* Bits in the Frame Rate register */
-
-#define VINO_FRAMERT_PAL (1<<0) /* 0=NTSC 1=PAL */
-#define VINO_FRAMERT_RT_MASK 0x1ffe /* bits 1:12 */
-
-/* Bits in the VINO_I2C_CTRL */
-
-#define VINO_CTRL_I2C_IDLE (1<<0) /* write: 0=force idle
- * read: 0=idle 1=not idle */
-#define VINO_CTRL_I2C_DIR (1<<1) /* 0=read 1=write */
-#define VINO_CTRL_I2C_MORE_BYTES (1<<2) /* 0=last byte 1=more bytes */
-#define VINO_CTRL_I2C_TRANS_BUSY (1<<4) /* 0=trans done 1=trans busy */
-#define VINO_CTRL_I2C_ACK (1<<5) /* 0=ack received 1=ack not */
-#define VINO_CTRL_I2C_BUS_ERROR (1<<7) /* 0=no bus err 1=bus err */
+#define VINO_INTSTAT_B (VINO_INTSTAT_B_FIELD_TRANS | \
+ VINO_INTSTAT_B_FIFO_OF | \
+ VINO_INTSTAT_B_END_DESC_TBL)
+ u32 _pad_intr_status;
+ volatile u32 intr_status;
+
+ u32 _pad_i2c_control;
+ volatile u32 i2c_control;
+ u32 _pad_i2c_data;
+ volatile u32 i2c_data;
+
+ struct sgi_vino_channel a;
+ struct sgi_vino_channel b;
+};
+
+#endif
#define VPX3220_DEBUG KERN_DEBUG "vpx3220: "
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-1)");
#define dprintk(num, format, args...) \
#define I2C_NAME(x) (x)->name
extern const struct zoran_format zoran_formats[];
-extern const int zoran_num_formats;
static int card[BUZ_MAX] = { -1, -1, -1, -1 };
-MODULE_PARM(card, "1-" __stringify(BUZ_MAX) "i");
+module_param_array(card, int, NULL, 0);
MODULE_PARM_DESC(card, "The type of card");
static int encoder[BUZ_MAX] = { -1, -1, -1, -1 };
-MODULE_PARM(encoder, "1-" __stringify(BUZ_MAX) "i");
+module_param_array(encoder, int, NULL, 0);
MODULE_PARM_DESC(encoder, "i2c TV encoder");
static int decoder[BUZ_MAX] = { -1, -1, -1, -1 };
-MODULE_PARM(decoder, "1-" __stringify(BUZ_MAX) "i");
+module_param_array(decoder, int, NULL, 0);
MODULE_PARM_DESC(decoder, "i2c TV decoder");
/*
*/
static unsigned long vidmem = 0; /* Video memory base address */
-MODULE_PARM(vidmem, "i");
+module_param(vidmem, ulong, 0);
/*
Default input and video norm at startup of the driver.
*/
static int default_input = 0; /* 0=Composite, 1=S-Video */
-MODULE_PARM(default_input, "i");
+module_param(default_input, int, 0);
MODULE_PARM_DESC(default_input,
"Default input (0=Composite, 1=S-Video, 2=Internal)");
static int default_norm = 0; /* 0=PAL, 1=NTSC 2=SECAM */
-MODULE_PARM(default_norm, "i");
+module_param(default_norm, int, 0);
MODULE_PARM_DESC(default_norm, "Default norm (0=PAL, 1=NTSC, 2=SECAM)");
static int video_nr = -1; /* /dev/videoN, -1 for autodetect */
-MODULE_PARM(video_nr, "i");
+module_param(video_nr, int, 0);
MODULE_PARM_DESC(video_nr, "video device number");
/*
int v4l_nbufs = 2;
int v4l_bufsize = 128; /* Everybody should be able to work with this setting */
-MODULE_PARM(v4l_nbufs, "i");
+module_param(v4l_nbufs, int, 0);
MODULE_PARM_DESC(v4l_nbufs, "Maximum number of V4L buffers to use");
-MODULE_PARM(v4l_bufsize, "i");
+module_param(v4l_bufsize, int, 0);
MODULE_PARM_DESC(v4l_bufsize, "Maximum size per V4L buffer (in kB)");
int jpg_nbufs = 32;
int jpg_bufsize = 512; /* max size for 100% quality full-PAL frame */
-MODULE_PARM(jpg_nbufs, "i");
+module_param(jpg_nbufs, int, 0);
MODULE_PARM_DESC(jpg_nbufs, "Maximum number of JPG buffers to use");
-MODULE_PARM(jpg_bufsize, "i");
+module_param(jpg_bufsize, int, 0);
MODULE_PARM_DESC(jpg_bufsize, "Maximum size per JPG buffer (in kB)");
int pass_through = 0; /* 1=Pass through TV signal when device is not used */
/* 0=Show color bar when device is not used (LML33: only if lml33dpath=1) */
-MODULE_PARM(pass_through, "i");
+module_param(pass_through, int, 0);
MODULE_PARM_DESC(pass_through,
"Pass TV signal through to TV-out when idling");
static int debug = 1;
int *zr_debug = &debug;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-4)");
MODULE_DESCRIPTION("Zoran-36057/36067 JPEG codec driver");
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
+#include <linux/byteorder/generic.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
ZR36057_ISR_JPEGRepIRQ )
extern const struct zoran_format zoran_formats[];
-extern const int zoran_num_formats;
extern int *zr_debug;
* load on Bt819 input, there will be
* some image imperfections */
-MODULE_PARM(lml33dpath, "i");
+module_param(lml33dpath, bool, 0);
MODULE_PARM_DESC(lml33dpath,
"Use digital path capture mode (on LML33 cards)");
+static void
+zr36057_init_vfe (struct zoran *zr);
+
/*
* General Purpose I/O and Guest bus access
*/
zr->jpg_buffers.buffer[i].state = BUZ_STATE_USER; /* nothing going on */
}
for (i = 0; i < BUZ_NUM_STAT_COM; i++) {
- zr->stat_com[i] = 1; /* mark as unavailable to zr36057 */
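+		/* stat_com entries are shared with the zr36057 in little-endian
+		 * form, hence the cpu_to_le32()/le32_to_cpu() conversions. */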
+ zr->stat_com[i] = cpu_to_le32(1); /* mark as unavailable to zr36057 */
}
}
switch (mode) {
- case BUZ_MODE_MOTION_COMPRESS:
+ case BUZ_MODE_MOTION_COMPRESS: {
+ struct jpeg_app_marker app;
+ struct jpeg_com_marker com;
+
/* In motion compress mode, the decoder output must be enabled, and
* the video bus direction set to input.
*/
/* Take the JPEG codec and the VFE out of sleep */
jpeg_codec_sleep(zr, 0);
+
+ /* set JPEG app/com marker */
+ app.appn = zr->jpg_settings.jpg_comp.APPn;
+ app.len = zr->jpg_settings.jpg_comp.APP_len;
+ memcpy(app.data, zr->jpg_settings.jpg_comp.APP_data, 60);
+ zr->codec->control(zr->codec, CODEC_S_JPEG_APP_DATA,
+ sizeof(struct jpeg_app_marker), &app);
+
+ com.len = zr->jpg_settings.jpg_comp.COM_len;
+ memcpy(com.data, zr->jpg_settings.jpg_comp.COM_data, 60);
+ zr->codec->control(zr->codec, CODEC_S_JPEG_COM_DATA,
+ sizeof(struct jpeg_com_marker), &com);
+
/* Setup the JPEG codec */
zr->codec->control(zr->codec, CODEC_S_JPEG_TDS_BYTE,
sizeof(int), &field_size);
dprintk(2, KERN_INFO "%s: enable_jpg(MOTION_COMPRESS)\n",
ZR_DEVNAME(zr));
break;
+ }
case BUZ_MODE_MOTION_DECOMPRESS:
/* In motion decompression mode, the decoder output must be disabled, and
/* fill 1 stat_com entry */
i = (zr->jpg_dma_head -
zr->jpg_err_shift) & BUZ_MASK_STAT_COM;
- if (!(zr->stat_com[i] & 1))
+ if (!(zr->stat_com[i] & cpu_to_le32(1)))
break;
zr->stat_com[i] =
- zr->jpg_buffers.buffer[frame].frag_tab_bus;
+ cpu_to_le32(zr->jpg_buffers.buffer[frame].frag_tab_bus);
} else {
/* fill 2 stat_com entries */
i = ((zr->jpg_dma_head -
zr->jpg_err_shift) & 1) * 2;
- if (!(zr->stat_com[i] & 1))
+ if (!(zr->stat_com[i] & cpu_to_le32(1)))
break;
zr->stat_com[i] =
- zr->jpg_buffers.buffer[frame].frag_tab_bus;
+ cpu_to_le32(zr->jpg_buffers.buffer[frame].frag_tab_bus);
zr->stat_com[i + 1] =
- zr->jpg_buffers.buffer[frame].frag_tab_bus;
+ cpu_to_le32(zr->jpg_buffers.buffer[frame].frag_tab_bus);
}
zr->jpg_buffers.buffer[frame].state = BUZ_STATE_DMA;
zr->jpg_dma_head++;
i = ((zr->jpg_dma_tail -
zr->jpg_err_shift) & 1) * 2 + 1;
- stat_com = zr->stat_com[i];
+ stat_com = le32_to_cpu(zr->stat_com[i]);
if ((stat_com & 1) == 0) {
return;
for (i = 0;
i < zr->jpg_buffers.num_buffers;
i++) {
- if (zr->stat_com[j] ==
+ if (le32_to_cpu(zr->stat_com[j]) ==
zr->jpg_buffers.
buffer[i].
frag_tab_bus) {
printk("\n");
}
}
-
/* Find an entry in stat_com and rotate contents */
{
int i;
zr->jpg_err_shift) & 1) * 2;
if (zr->codec_mode == BUZ_MODE_MOTION_DECOMPRESS) {
/* Mimic zr36067 operation */
- zr->stat_com[i] |= 1;
+ zr->stat_com[i] |= cpu_to_le32(1);
if (zr->jpg_settings.TmpDcm != 1)
- zr->stat_com[i + 1] |= 1;
+ zr->stat_com[i + 1] |= cpu_to_le32(1);
/* Refill */
zoran_reap_stat_com(zr);
zoran_feed_stat_com(zr);
int j;
u32 bus_addr[BUZ_NUM_STAT_COM];
+ /* Here we are copying the stat_com array, which
+ * is already in little endian format, so
+ * no endian conversions here
+ */
memcpy(bus_addr, zr->stat_com,
sizeof(bus_addr));
for (j = 0; j < BUZ_NUM_STAT_COM; j++) {
zr->stat_com[j] =
bus_addr[(i + j) &
BUZ_MASK_STAT_COM];
+
}
zr->jpg_err_shift += i;
zr->jpg_err_shift &= BUZ_MASK_STAT_COM;
int i;
strcpy(sv, sc);
for (i = 0; i < 4; i++) {
- if (zr->stat_com[i] & 1)
+ if (le32_to_cpu(zr->stat_com[i]) & 1)
sv[i] = '1';
}
sv[4] = 0;
ZR_DEVNAME(zr), zr->jpg_seq_num);
for (i = 0; i < 4; i++) {
printk(" %08x",
- zr->stat_com[i]);
+ le32_to_cpu(zr->stat_com[i]));
}
printk("\n");
}
* initialize video front end
*/
-void
+static void
zr36057_init_vfe (struct zoran *zr)
{
u32 reg;
int set_master);
extern void zoran_init_hardware(struct zoran *zr);
extern void zr36057_restart(struct zoran *zr);
-extern void zr36057_init_vfe(struct zoran *zr);
/* i2c */
extern int decoder_command(struct zoran *zr,
/* debugging is available via module parameter */
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-4)");
#define dprintk(num, format, args...) \
/* debugging is available via module parameter */
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-4)");
#define dprintk(num, format, args...) \
0xF9, 0xFA
};
-static const char zr36050_app[0x40] = {
- 0xff, 0xe0, //Marker: APP0
- 0x00, 0x3e, //Length: 60+2
- ' ', 'A', 'V', 'I', '1', 0, 0, 0, // 'AVI' field
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0
-};
-
-static const char zr36050_com[0x40] = {
- 0xff, 0xfe, //Marker: COM
- 0x00, 0x3e, //Length: 60+2
- ' ', 'C', 'O', 'M', 0, 0, 0, 0, // 'COM' field
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0
-};
-
/* jpeg baseline setup, this is just fixed in this driver (YUV pictures) */
#define NO_OF_COMPONENTS 0x3 //Y,U,V
#define BASELINE_PRECISION 0x8 //MCU size (?)
sizeof(zr36050_dqt), zr36050_dqt);
sum += zr36050_pushit(ptr, ZR050_DHT_IDX,
sizeof(zr36050_dht), zr36050_dht);
- sum += zr36050_pushit(ptr, ZR050_APP_IDX,
- sizeof(zr36050_app), zr36050_app);
- sum += zr36050_pushit(ptr, ZR050_COM_IDX,
- sizeof(zr36050_com), zr36050_com);
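+	/* Build the APPn and COM marker headers by hand (0xff, the marker
+	 * byte, and a 16-bit length of len + 2), then push the fixed 60-byte
+	 * payload; the "+ 4" accounts for the four header bytes. */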
+ zr36050_write(ptr, ZR050_APP_IDX, 0xff);
+ zr36050_write(ptr, ZR050_APP_IDX + 1, 0xe0 + ptr->app.appn);
+ zr36050_write(ptr, ZR050_APP_IDX + 2, 0x00);
+ zr36050_write(ptr, ZR050_APP_IDX + 3, ptr->app.len + 2);
+ sum += zr36050_pushit(ptr, ZR050_APP_IDX + 4, 60,
+ ptr->app.data) + 4;
+ zr36050_write(ptr, ZR050_COM_IDX, 0xff);
+ zr36050_write(ptr, ZR050_COM_IDX + 1, 0xfe);
+ zr36050_write(ptr, ZR050_COM_IDX + 2, 0x00);
+ zr36050_write(ptr, ZR050_COM_IDX + 3, ptr->com.len + 2);
+ sum += zr36050_pushit(ptr, ZR050_COM_IDX + 4, 60,
+ ptr->com.data) + 4;
/* do the internal huffman table preload */
zr36050_write(ptr, ZR050_MARKERS_EN, ZR050_ME_DHTI);
	/* these headers seem to deliver "valid AVI" jpeg frames */
zr36050_write(ptr, ZR050_MARKERS_EN,
- ZR050_ME_APP | ZR050_ME_DQT | ZR050_ME_DHT |
- ZR050_ME_COM);
+ ZR050_ME_DQT | ZR050_ME_DHT |
+ ((ptr->app.len > 0) ? ZR050_ME_APP : 0) |
+ ((ptr->com.len > 0) ? ZR050_ME_COM : 0));
} else {
dprintk(2, "%s: EXPANSION SETUP\n", ptr->name);
return -EFAULT;
ptr->scalefact = *ival;
break;
+
+ case CODEC_G_JPEG_APP_DATA: { /* get appn marker data */
+ struct jpeg_app_marker *app = data;
+
+ if (size != sizeof(struct jpeg_app_marker))
+ return -EFAULT;
+
+ *app = ptr->app;
+ break;
+ }
+
+ case CODEC_S_JPEG_APP_DATA: { /* set appn marker data */
+ struct jpeg_app_marker *app = data;
+
+ if (size != sizeof(struct jpeg_app_marker))
+ return -EFAULT;
+
+ ptr->app = *app;
+ break;
+ }
+
+ case CODEC_G_JPEG_COM_DATA: { /* get comment marker data */
+ struct jpeg_com_marker *com = data;
+
+ if (size != sizeof(struct jpeg_com_marker))
+ return -EFAULT;
+
+ *com = ptr->com;
+ break;
+ }
+
+ case CODEC_S_JPEG_COM_DATA: { /* set comment marker data */
+ struct jpeg_com_marker *com = data;
+
+ if (size != sizeof(struct jpeg_com_marker))
+ return -EFAULT;
+
+ ptr->com = *com;
+ break;
+ }
+
default:
return -EINVAL;
}
ptr->max_block_vol = 240;
ptr->scalefact = 0x100;
ptr->dri = 1;
+
+ /* no app/com marker by default */
+ ptr->app.appn = 0;
+ ptr->app.len = 0;
+ ptr->com.len = 0;
+
zr36050_init(ptr);
dprintk(1, KERN_INFO "%s: codec attached and running\n",
#ifndef ZR36050_H
#define ZR36050_H
+#include "videocodec.h"
+
/* data stored for each zoran jpeg codec chip */
struct zr36050 {
char name[32];
__u8 v_samp_ratio[8];
__u16 scalefact;
__u16 dri;
+
+ /* com/app marker */
+ struct jpeg_com_marker com;
+ struct jpeg_app_marker app;
};
/* zr36050 register addresses */
static int zr36060_codecs = 0;
static int low_bitrate = 0;
-MODULE_PARM(low_bitrate, "i");
+module_param(low_bitrate, bool, 0);
MODULE_PARM_DESC(low_bitrate, "Buz compatibility option, halves bitrate");
/* debugging is available via module parameter */
static int debug = 0;
-MODULE_PARM(debug, "i");
+module_param(debug, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0-4)");
#define dprintk(num, format, args...) \
0xF9, 0xFA
};
-static const char zr36060_app[0x40] = {
- 0xff, 0xe0, //Marker: APP0
- 0x00, 0x07, //Length: 7
- ' ', 'A', 'V', 'I', '1', 0, 0, 0, // 'AVI' field
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0
-};
-
-static const char zr36060_com[0x40] = {
- 0xff, 0xfe, //Marker: COM
- 0x00, 0x06, //Length: 6
- ' ', 'C', 'O', 'M', 0, 0, 0, 0, // 'COM' field
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0
-};
-
/* jpeg baseline setup, this is just fixed in this driver (YUV pictures) */
#define NO_OF_COMPONENTS 0x3 //Y,U,V
#define BASELINE_PRECISION 0x8 //MCU size (?)
sum +=
zr36060_pushit(ptr, ZR060_DHT_IDX, sizeof(zr36060_dht),
zr36060_dht);
- sum +=
- zr36060_pushit(ptr, ZR060_APP_IDX, sizeof(zr36060_app),
- zr36060_app);
- sum +=
- zr36060_pushit(ptr, ZR060_COM_IDX, sizeof(zr36060_com),
- zr36060_com);
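+	/* Write the APPn and COM marker headers by hand (0xff, the marker
+	 * byte, and a 16-bit length of len + 2), then push the fixed 60-byte
+	 * payload; the "+ 4" accounts for the four header bytes. */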
+ zr36060_write(ptr, ZR060_APP_IDX, 0xff);
+ zr36060_write(ptr, ZR060_APP_IDX + 1, 0xe0 + ptr->app.appn);
+ zr36060_write(ptr, ZR060_APP_IDX + 2, 0x00);
+ zr36060_write(ptr, ZR060_APP_IDX + 3, ptr->app.len + 2);
+ sum += zr36060_pushit(ptr, ZR060_APP_IDX + 4, 60,
+ ptr->app.data) + 4;
+ zr36060_write(ptr, ZR060_COM_IDX, 0xff);
+ zr36060_write(ptr, ZR060_COM_IDX + 1, 0xfe);
+ zr36060_write(ptr, ZR060_COM_IDX + 2, 0x00);
+ zr36060_write(ptr, ZR060_COM_IDX + 3, ptr->com.len + 2);
+ sum += zr36060_pushit(ptr, ZR060_COM_IDX + 4, 60,
+ ptr->com.data) + 4;
/* setup misc. data for compression (target code sizes) */
/* JPEG markers to be included in the compressed stream */
zr36060_write(ptr, ZR060_MER,
- ZR060_MER_App | ZR060_MER_Com | ZR060_MER_DQT
- | ZR060_MER_DHT);
+ ZR060_MER_DQT | ZR060_MER_DHT |
+ ((ptr->com.len > 0) ? ZR060_MER_Com : 0) |
+ ((ptr->app.len > 0) ? ZR060_MER_App : 0));
/* Setup the Video Frontend */
/* Limit pixel range to 16..235 as per CCIR-601 */
return -EFAULT;
ptr->scalefact = *ival;
break;
+
+ case CODEC_G_JPEG_APP_DATA: { /* get appn marker data */
+ struct jpeg_app_marker *app = data;
+
+ if (size != sizeof(struct jpeg_app_marker))
+ return -EFAULT;
+
+ *app = ptr->app;
+ break;
+ }
+
+ case CODEC_S_JPEG_APP_DATA: { /* set appn marker data */
+ struct jpeg_app_marker *app = data;
+
+ if (size != sizeof(struct jpeg_app_marker))
+ return -EFAULT;
+
+ ptr->app = *app;
+ break;
+ }
+
+ case CODEC_G_JPEG_COM_DATA: { /* get comment marker data */
+ struct jpeg_com_marker *com = data;
+
+ if (size != sizeof(struct jpeg_com_marker))
+ return -EFAULT;
+
+ *com = ptr->com;
+ break;
+ }
+
+ case CODEC_S_JPEG_COM_DATA: { /* set comment marker data */
+ struct jpeg_com_marker *com = data;
+
+ if (size != sizeof(struct jpeg_com_marker))
+ return -EFAULT;
+
+ ptr->com = *com;
+ break;
+ }
+
default:
return -EINVAL;
}
ptr->max_block_vol = 240; /* CHECKME, was 120 is 240 */
ptr->scalefact = 0x100;
ptr->dri = 1; /* CHECKME, was 8 is 1 */
+
+ /* by default, no COM or APP markers - app should set those */
+ ptr->com.len = 0;
+ ptr->app.appn = 0;
+ ptr->app.len = 0;
+
zr36060_init(ptr);
dprintk(1, KERN_INFO "%s: codec attached and running\n",
#ifndef ZR36060_H
#define ZR36060_H
+#include "videocodec.h"
+
/* data stored for each zoran jpeg codec chip */
struct zr36060 {
char name[32];
__u8 v_samp_ratio[8];
__u16 scalefact;
__u16 dri;
+
+ /* app/com marker data */
+ struct jpeg_app_marker app;
+ struct jpeg_com_marker com;
};
/* ZR36060 register addresses */
rc = i2o_device_issue_claim(dev, I2O_CMD_UTIL_CLAIM, I2O_CLAIM_PRIMARY);
if (!rc)
- pr_debug("claim of device %d succeded\n", dev->lct_data.tid);
+		pr_debug("i2o: claim of device %d succeeded\n",
+ dev->lct_data.tid);
else
- pr_debug("claim of device %d failed %d\n", dev->lct_data.tid,
- rc);
+ pr_debug("i2o: claim of device %d failed %d\n",
+ dev->lct_data.tid, rc);
up(&dev->lock);
}
if (!rc)
- pr_debug("claim release of device %d succeded\n",
+		pr_debug("i2o: claim release of device %d succeeded\n",
dev->lct_data.tid);
else
- pr_debug("claim release of device %d failed %d\n",
+ pr_debug("i2o: claim release of device %d failed %d\n",
dev->lct_data.tid, rc);
up(&dev->lock);
{
struct i2o_device *i2o_dev = to_i2o_device(dev);
- pr_debug("Release I2O device %s\n", dev->bus_id);
+ pr_debug("i2o: device %s released\n", dev->bus_id);
kfree(i2o_dev);
};
i2o_driver_notify_device_add_all(dev);
- pr_debug("I2O device %s added\n", dev->device.bus_id);
+ pr_debug("i2o: device %s added\n", dev->device.bus_id);
return dev;
};
max = (lct->table_size - 3) / 9;
- pr_debug("LCT has %d entries (LCT size: %d)\n", max, lct->table_size);
+ pr_debug("%s: LCT has %d entries (LCT size: %d)\n", c->name, max,
+ lct->table_size);
/* remove devices, which are not in the LCT anymore */
list_for_each_entry_safe(dev, tmp, &c->devices, list) {
int rc = 0;
unsigned long flags;
- pr_debug("Register driver %s\n", drv->name);
+ pr_debug("i2o: Register driver %s\n", drv->name);
if (drv->event) {
drv->event_queue = create_workqueue(drv->name);
"for driver %s\n", drv->name);
return -EFAULT;
}
- pr_debug("Event queue initialized for driver %s\n", drv->name);
+ pr_debug("i2o: Event queue initialized for driver %s\n",
+ drv->name);
} else
drv->event_queue = NULL;
spin_unlock_irqrestore(&i2o_drivers_lock, flags);
- pr_debug("driver %s gets context id %d\n", drv->name, drv->context);
+ pr_debug("i2o: driver %s gets context id %d\n", drv->name,
+ drv->context);
list_for_each_entry(c, &i2o_controllers, list) {
struct i2o_device *i2o_dev;
struct i2o_controller *c;
unsigned long flags;
- pr_debug("unregister driver %s\n", drv->name);
+ pr_debug("i2o: unregister driver %s\n", drv->name);
driver_unregister(&drv->driver);
if (drv->event_queue) {
destroy_workqueue(drv->event_queue);
drv->event_queue = NULL;
- pr_debug("event queue removed for %s\n", drv->name);
+ pr_debug("i2o: event queue removed for %s\n", drv->name);
}
};
spin_unlock(&i2o_drivers_lock);
if (unlikely(!drv)) {
- printk(KERN_WARNING "i2o: Spurious reply to unknown "
- "driver %d\n", context);
+ printk(KERN_WARNING "%s: Spurious reply to unknown "
+ "driver %d\n", c->name, context);
return -EIO;
}
" defined!\n", c->name, drv->name);
return -EIO;
} else
- printk(KERN_WARNING "i2o: Spurious reply to unknown driver "
- "%d\n", readl(&msg->u.s.icntxt));
+ printk(KERN_WARNING "%s: Spurious reply to unknown driver "
+ "%d\n", c->name, readl(&msg->u.s.icntxt));
return -EIO;
}
">=2 and <= 64 and a power of 2\n", i2o_max_drivers);
i2o_max_drivers = I2O_MAX_DRIVERS;
}
- printk(KERN_INFO "i2o: max_drivers=%d\n", i2o_max_drivers);
+ printk(KERN_INFO "i2o: max drivers = %d\n", i2o_max_drivers);
i2o_drivers =
kmalloc(i2o_max_drivers * sizeof(*i2o_drivers), GFP_KERNEL);
#include <linux/i2o.h>
#include <linux/delay.h>
+#define OSM_NAME "exec-osm"
+
struct i2o_driver i2o_exec_driver;
static int i2o_exec_lct_notify(struct i2o_controller *c, u32 change_ind);
dev = &c->pdev->dev;
- pr_debug("timedout reply received!\n");
+ pr_debug("%s: timedout reply received!\n",
+ c->name);
i2o_dma_free(dev, &wait->dma);
i2o_exec_wait_free(wait);
rc = -1;
spin_unlock(&lock);
- pr_debug("i2o: Bogus reply in POST WAIT (tr-context: %08x)!\n",
+ pr_debug("%s: Bogus reply in POST WAIT (tr-context: %08x)!\n", c->name,
context);
return -1;
*/
static void i2o_exec_event(struct i2o_event *evt)
{
- printk(KERN_INFO "Event received from device: %d\n",
- evt->i2o_dev->lct_data.tid);
+ osm_info("Event received from device: %d\n",
+ evt->i2o_dev->lct_data.tid);
kfree(evt);
};
/* Exec OSM driver struct */
struct i2o_driver i2o_exec_driver = {
- .name = "exec-osm",
+ .name = OSM_NAME,
.reply = i2o_exec_reply,
.event = i2o_exec_event,
.classes = i2o_exec_class_id,
#include <linux/i2o.h>
#include <linux/delay.h>
+#define OSM_VERSION "$Rev$"
+#define OSM_DESCRIPTION "I2O subsystem"
+
/* global I2O controller list */
LIST_HEAD(i2o_controllers);
unsigned long flags;
if (!ptr)
- printk(KERN_ERR "NULL pointer found!\n");
+ printk(KERN_ERR "%s: couldn't add NULL pointer to context list!"
+ "\n", c->name);
entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry) {
- printk(KERN_ERR "i2o: Could not allocate memory for context "
- "list element\n");
+ printk(KERN_ERR "%s: Could not allocate memory for context "
+ "list element\n", c->name);
return 0;
}
spin_unlock_irqrestore(&c->context_list_lock, flags);
- pr_debug("Add context to list %p -> %d\n", ptr, context);
+ pr_debug("%s: Add context to list %p -> %d\n", c->name, ptr, context);
return entry->context;
};
spin_unlock_irqrestore(&c->context_list_lock, flags);
if (!context)
- printk(KERN_WARNING "i2o: Could not remove nonexistent ptr "
- "%p\n", ptr);
+ printk(KERN_WARNING "%s: Could not remove nonexistent ptr "
+ "%p\n", c->name, ptr);
- pr_debug("remove ptr from context list %d -> %p\n", context, ptr);
+ pr_debug("%s: remove ptr from context list %d -> %p\n", c->name,
+ context, ptr);
return context;
};
spin_unlock_irqrestore(&c->context_list_lock, flags);
if (!ptr)
- printk(KERN_WARNING "i2o: context id %d not found\n", context);
+ printk(KERN_WARNING "%s: context id %d not found\n", c->name,
+ context);
- pr_debug("get ptr from context list %d -> %p\n", context, ptr);
+ pr_debug("%s: get ptr from context list %d -> %p\n", c->name, context,
+ ptr);
return ptr;
};
spin_unlock_irqrestore(&c->context_list_lock, flags);
if (!context)
- printk(KERN_WARNING "i2o: Could not find nonexistent ptr "
- "%p\n", ptr);
+ printk(KERN_WARNING "%s: Could not find nonexistent ptr "
+ "%p\n", c->name, ptr);
- pr_debug("get context id from context list %p -> %d\n", ptr, context);
+ pr_debug("%s: get context id from context list %p -> %d\n", c->name,
+ ptr, context);
return context;
};
i2o_status_block *sb = c->status_block.virt;
int rc = 0;
- pr_debug("Resetting controller\n");
+ pr_debug("%s: Resetting controller\n", c->name);
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_MESSAGE_GET);
if (m == I2O_QUEUE_EMPTY)
timeout = jiffies + I2O_TIMEOUT_RESET * HZ;
while (!*status) {
if (time_after(jiffies, timeout)) {
- printk(KERN_ERR "IOP reset timeout.\n");
+ printk(KERN_ERR "%s: IOP reset timeout.\n", c->name);
rc = -ETIMEDOUT;
goto exit;
}
m = i2o_msg_get_wait(c, &msg, I2O_TIMEOUT_RESET);
while (m == I2O_QUEUE_EMPTY) {
if (time_after(jiffies, timeout)) {
- printk(KERN_ERR "IOP reset timeout.\n");
+ printk(KERN_ERR "%s: IOP reset timeout.\n",
+ c->name);
rc = -ETIMEDOUT;
goto exit;
}
rc = i2o_status_get(c);
if (rc) {
- printk(KERN_INFO "Unable to obtain status of %s, "
+ printk(KERN_INFO "%s: Unable to obtain status, "
"attempting a reset.\n", c->name);
if (i2o_iop_reset(c))
return rc;
}
if (sb->i2o_version > I2OVER15) {
- printk(KERN_ERR "%s: Not running vrs. 1.5. of the I2O "
+ printk(KERN_ERR "%s: Not running version 1.5 of the I2O "
"Specification.\n", c->name);
return -ENODEV;
}
case ADAPTER_STATE_OPERATIONAL:
case ADAPTER_STATE_HOLD:
case ADAPTER_STATE_FAILED:
- pr_debug("already running, trying to reset...\n");
+ pr_debug("%s: already running, trying to reset...\n", c->name);
if (i2o_iop_reset(c))
return -ENODEV;
}
c->name);
root = pci_find_parent_resource(c->pdev, res);
if (root == NULL)
- printk(KERN_WARNING "Can't find parent resource!\n");
+ printk(KERN_WARNING "%s: Can't find parent resource!\n",
+ c->name);
if (root && allocate_resource(root, res, sb->desired_mem_size, sb->desired_mem_size, sb->desired_mem_size, 1 << 20, /* Unspecified, so use 1Mb and play safe */
NULL, NULL) >= 0) {
c->mem_alloc = 1;
sb->current_mem_size = 1 + res->end - res->start;
sb->current_mem_base = res->start;
- printk(KERN_INFO
- "%s: allocated %ld bytes of PCI memory at 0x%08lX.\n",
- c->name, 1 + res->end - res->start, res->start);
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI memory"
+ " at 0x%08lX.\n", c->name,
+ 1 + res->end - res->start, res->start);
}
}
c->name);
root = pci_find_parent_resource(c->pdev, res);
if (root == NULL)
- printk(KERN_WARNING "Can't find parent resource!\n");
+ printk(KERN_WARNING "%s: Can't find parent resource!\n",
+ c->name);
if (root && allocate_resource(root, res, sb->desired_io_size, sb->desired_io_size, sb->desired_io_size, 1 << 20, /* Unspecified, so use 1Mb and play safe */
NULL, NULL) >= 0) {
c->io_alloc = 1;
sb->current_io_size = 1 + res->end - res->start;
sb->current_mem_base = res->start;
- printk(KERN_INFO
- "%s: allocated %ld bytes of PCI I/O at 0x%08lX.\n",
- c->name, 1 + res->end - res->start, res->start);
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI I/O at"
+ " 0x%08lX.\n", c->name,
+ 1 + res->end - res->start, res->start);
}
}
{
struct i2o_device *dev, *tmp;
- pr_debug("Deleting controller %s\n", c->name);
+ pr_debug("%s: deleting controller\n", c->name);
i2o_driver_notify_controller_remove_all(c);
c = kmalloc(sizeof(*c), GFP_KERNEL);
if (!c) {
- printk(KERN_ERR "i2o: Insufficient memory to allocate the "
+		printk(KERN_ERR "i2o: Insufficient memory to allocate an I2O "
"controller.\n");
return ERR_PTR(-ENOMEM);
}
"devices\n", c->name);
if ((rc = i2o_iop_activate(c))) {
- printk(KERN_ERR "%s: controller could not activated\n",
+ printk(KERN_ERR "%s: could not activate controller\n",
c->name);
i2o_iop_reset(c);
return rc;
}
- pr_debug("building sys table %s...\n", c->name);
+ pr_debug("%s: building sys table...\n", c->name);
if ((rc = i2o_systab_build())) {
i2o_iop_reset(c);
return rc;
}
- pr_debug("online controller %s...\n", c->name);
+ pr_debug("%s: online controller...\n", c->name);
if ((rc = i2o_iop_online(c))) {
i2o_iop_reset(c);
return rc;
}
- pr_debug("getting LCT %s...\n", c->name);
+ pr_debug("%s: getting LCT...\n", c->name);
if ((rc = i2o_exec_lct_get(c))) {
i2o_iop_reset(c);
{
int rc = 0;
- printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
+ printk(KERN_INFO OSM_DESCRIPTION " v" OSM_VERSION "\n");
rc = i2o_device_init();
if (rc)
module_exit(i2o_iop_exit);
MODULE_AUTHOR("Red Hat Software");
-MODULE_DESCRIPTION("I2O Core");
MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION(OSM_DESCRIPTION);
+MODULE_VERSION(OSM_VERSION);
#if BITS_PER_LONG == 64
EXPORT_SYMBOL(i2o_cntxt_list_add);
config MMC_WBSD
tristate "Winbond W83L51xD SD/MMC Card Interface support"
- depends on MMC
+ depends on MMC && ISA
help
This selects the Winbond(R) W83L51xD Secure digital and
Multimedia card Interface.
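
Note that, with the added ISA dependency, a kernel configuration has to enable
both MMC and ISA before this driver can be selected. A minimal sketch of the
relevant .config fragment (other MMC options omitted):

    CONFIG_ISA=y
    CONFIG_MMC=y
    CONFIG_MMC_WBSD=m
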
brq.cmd.arg = req->sector << 9;
brq.cmd.flags = MMC_RSP_R1;
- brq.data.req = req;
brq.data.timeout_ns = card->csd.tacc_ns * 10;
brq.data.timeout_clks = card->csd.tacc_clks * 10;
brq.data.blksz_bits = md->block_bits;
md->disk->queue = md->queue.queue;
md->disk->driverfs_dev = &card->dev;
+ /*
+ * As discussed on lkml, GENHD_FL_REMOVABLE should:
+ *
+ * - be set for removable media with permanent block devices
+ * - be unset for removable block devices with permanent media
+ *
+ * Since MMC block devices clearly fall under the second
+ * case, we do not set GENHD_FL_REMOVABLE. Userspace
+ * should use the block device creation/destruction hotplug
+ * messages to tell when the card is present.
+ */
+
sprintf(md->disk->disk_name, "mmcblk%d", devidx);
sprintf(md->disk->devfs_name, "mmc/blk%d", devidx);
#include <linux/ioport.h>
#include <linux/device.h>
#include <linux/interrupt.h>
-#include <linux/blkdev.h>
#include <linux/delay.h>
#include <linux/err.h>
+#include <linux/highmem.h>
#include <linux/mmc/host.h>
#include <linux/mmc/protocol.h>
#include <asm/io.h>
#include <asm/irq.h>
+#include <asm/scatterlist.h>
#include <asm/hardware/amba.h>
#include <asm/hardware/clock.h>
#include <asm/mach/mmc.h>
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
-#include <linux/blkdev.h>
#include <linux/dma-mapping.h>
#include <linux/mmc/host.h>
#include <linux/mmc/protocol.h>
#include <asm/dma.h>
#include <asm/io.h>
#include <asm/irq.h>
+#include <asm/scatterlist.h>
#include <asm/sizes.h>
#include <asm/arch/pxa-regs.h>
struct mmc_host *mmc;
spinlock_t lock;
struct resource *res;
- void *base;
+ void __iomem *base;
int irq;
int dma;
unsigned int clkrt;
/*
- * linux/drivers/mmc/wbsd.c
+ * linux/drivers/mmc/wbsd.c - Winbond W83L51xD SD/MMC driver
*
- * Copyright (C) 2004 Pierre Ossman, All Rights Reserved.
+ * Copyright (C) 2004-2005 Pierre Ossman, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
+ *
+ *
+ * Warning!
+ *
+ * Changes to the FIFO system should be done with extreme care since
+ * the hardware is full of bugs related to the FIFO. Known issues are:
+ *
+ * - FIFO size field in FSR is always zero.
+ *
+ * - FIFO interrupts tend not to work as they should. Interrupts are
+ * triggered only for full/empty events, not for threshold values.
+ *
+ * - On APIC systems the FIFO empty interrupt is sometimes lost.
*/
#include <linux/config.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
-#include <linux/blkdev.h>
-
+#include <linux/highmem.h>
#include <linux/mmc/host.h>
#include <linux/mmc/protocol.h>
#include <asm/io.h>
#include <asm/dma.h>
+#include <asm/scatterlist.h>
#include "wbsd.h"
#define DRIVER_NAME "wbsd"
-#define DRIVER_VERSION "1.0"
+#define DRIVER_VERSION "1.1"
#ifdef CONFIG_MMC_DEBUG
#define DBG(x...) \
printk(KERN_DEBUG DRIVER_NAME ": " x)
#define DBGF(f, x...) \
- printk(KERN_DEBUG DRIVER_NAME " [%s()]: " f, __func__, ##x)
+ printk(KERN_DEBUG DRIVER_NAME " [%s()]: " f, __func__ , ##x)
#else
#define DBG(x...) do { } while (0)
#define DBGF(x...) do { } while (0)
static inline void wbsd_init_sg(struct wbsd_host* host, struct mmc_data* data)
{
- struct request* req = data->req;
-
/*
* Get info. about SG list from data structure.
*/
static inline char* wbsd_kmap_sg(struct wbsd_host* host)
{
- return kmap_atomic(host->cur_sg->page, KM_BIO_SRC_IRQ) +
+ host->mapped_sg = kmap_atomic(host->cur_sg->page, KM_BIO_SRC_IRQ) +
host->cur_sg->offset;
+ return host->mapped_sg;
}
static inline void wbsd_kunmap_sg(struct wbsd_host* host)
{
- kunmap_atomic(host->cur_sg->page, KM_BIO_SRC_IRQ);
+ kunmap_atomic(host->mapped_sg, KM_BIO_SRC_IRQ);
}
static inline void wbsd_sg_to_dma(struct wbsd_host* host, struct mmc_data* data)
memcpy(dmabuf, sgbuf, size);
else
memcpy(dmabuf, sgbuf, sg[i].length);
- kunmap_atomic(sg[i].page, KM_BIO_SRC_IRQ);
+ kunmap_atomic(sgbuf, KM_BIO_SRC_IRQ);
dmabuf += sg[i].length;
if (size < sg[i].length)
memcpy(sgbuf, dmabuf, size);
else
memcpy(sgbuf, dmabuf, sg[i].length);
- kunmap_atomic(sg[i].page, KM_BIO_SRC_IRQ);
+ kunmap_atomic(sgbuf, KM_BIO_SRC_IRQ);
dmabuf += sg[i].length;
if (size < sg[i].length)
static void wbsd_send_command(struct wbsd_host* host, struct mmc_command* cmd)
{
int i;
- u8 status, eir, isr;
+ u8 status, isr;
DBGF("Sending cmd (%x)\n", cmd->opcode);
/*
- * Disable interrupts as the interrupt routine
- * will destroy the contents of ISR.
+ * Clear accumulated ISR. The interrupt routine
+ * will fill this one with events that occur during
+ * transfer.
*/
- eir = inb(host->base + WBSD_EIR);
- outb(0, host->base + WBSD_EIR);
+ host->isr = 0;
/*
* Send the command (CRC calculated by host).
/*
* Read back status.
*/
- isr = inb(host->base + WBSD_ISR);
+ isr = host->isr;
/* Card removed? */
if (isr & WBSD_INT_CARD)
}
}
- /*
- * Restore interrupt mask to previous value.
- */
- outb(eir, host->base + WBSD_EIR);
-
- /*
- * Call the interrupt routine to jump start
- * interrupts.
- */
- wbsd_irq(0, host, NULL);
-
DBGF("Sent cmd (%x), res %d\n", cmd->opcode, cmd->error);
}
{
struct mmc_data* data = host->mrq->cmd->data;
char* buffer;
+ int i, fsr, fifo;
/*
* Handle excessive data.
* Drain the fifo. This has a tendency to loop longer
* than the FIFO length (usually one block).
*/
- while (!(inb(host->base + WBSD_FSR) & WBSD_FIFO_EMPTY))
+ while (!((fsr = inb(host->base + WBSD_FSR)) & WBSD_FIFO_EMPTY))
{
- *buffer = inb(host->base + WBSD_DFR);
- buffer++;
- host->offset++;
- host->remain--;
-
- data->bytes_xfered++;
-
/*
- * Transfer done?
- */
- if (data->bytes_xfered == host->size)
- {
- wbsd_kunmap_sg(host);
- return;
- }
+ * The size field in the FSR is broken so we have to
+ * do some guessing.
+ */
+ if (fsr & WBSD_FIFO_FULL)
+ fifo = 16;
+ else if (fsr & WBSD_FIFO_FUTHRE)
+ fifo = 8;
+ else
+ fifo = 1;
- /*
- * End of scatter list entry?
- */
- if (host->remain == 0)
+ for (i = 0;i < fifo;i++)
{
- wbsd_kunmap_sg(host);
+ *buffer = inb(host->base + WBSD_DFR);
+ buffer++;
+ host->offset++;
+ host->remain--;
+
+ data->bytes_xfered++;
+
+ /*
+ * Transfer done?
+ */
+ if (data->bytes_xfered == host->size)
+ {
+ wbsd_kunmap_sg(host);
+ return;
+ }
/*
- * Get next entry. Check if last.
+ * End of scatter list entry?
*/
- if (!wbsd_next_sg(host))
+ if (host->remain == 0)
{
+ wbsd_kunmap_sg(host);
+
/*
- * We should never reach this point.
- * It means that we're trying to
- * transfer more blocks than can fit
- * into the scatter list.
+ * Get next entry. Check if last.
*/
- BUG_ON(1);
-
- host->size = data->bytes_xfered;
+ if (!wbsd_next_sg(host))
+ {
+ /*
+ * We should never reach this point.
+ * It means that we're trying to
+ * transfer more blocks than can fit
+ * into the scatter list.
+ */
+ BUG_ON(1);
+
+ host->size = data->bytes_xfered;
+
+ return;
+ }
- return;
+ buffer = wbsd_kmap_sg(host);
}
-
- buffer = wbsd_kmap_sg(host);
}
}
wbsd_kunmap_sg(host);
+
+ /*
+ * This is a very dirty hack to solve a
+ * hardware problem. The chip doesn't trigger
+ * FIFO threshold interrupts properly.
+ */
+ if ((host->size - data->bytes_xfered) < 16)
+ tasklet_schedule(&host->fifo_tasklet);
}
static void wbsd_fill_fifo(struct wbsd_host* host)
{
struct mmc_data* data = host->mrq->cmd->data;
char* buffer;
+ int i, fsr, fifo;
/*
* Check that we aren't being called after the
* Fill the fifo. This has a tendency to loop longer
* than the FIFO length (usually one block).
*/
- while (!(inb(host->base + WBSD_FSR) & WBSD_FIFO_FULL))
+ while (!((fsr = inb(host->base + WBSD_FSR)) & WBSD_FIFO_FULL))
{
- outb(*buffer, host->base + WBSD_DFR);
- buffer++;
- host->offset++;
- host->remain--;
-
- data->bytes_xfered++;
-
/*
- * Transfer done?
- */
- if (data->bytes_xfered == host->size)
- {
- wbsd_kunmap_sg(host);
- return;
- }
+ * The size field in the FSR is broken so we have to
+ * do some guessing.
+ */
+ if (fsr & WBSD_FIFO_EMPTY)
+ fifo = 0;
+ else if (fsr & WBSD_FIFO_EMTHRE)
+ fifo = 8;
+ else
+ fifo = 15;
- /*
- * End of scatter list entry?
- */
- if (host->remain == 0)
+ for (i = 16;i > fifo;i--)
{
- wbsd_kunmap_sg(host);
+ outb(*buffer, host->base + WBSD_DFR);
+ buffer++;
+ host->offset++;
+ host->remain--;
+
+ data->bytes_xfered++;
/*
- * Get next entry. Check if last.
+ * Transfer done?
+ */
+ if (data->bytes_xfered == host->size)
+ {
+ wbsd_kunmap_sg(host);
+ return;
+ }
+
+ /*
+ * End of scatter list entry?
*/
- if (!wbsd_next_sg(host))
+ if (host->remain == 0)
{
+ wbsd_kunmap_sg(host);
+
/*
- * We should never reach this point.
- * It means that we're trying to
- * transfer more blocks than can fit
- * into the scatter list.
+ * Get next entry. Check if last.
*/
- BUG_ON(1);
-
- host->size = data->bytes_xfered;
+ if (!wbsd_next_sg(host))
+ {
+ /*
+ * We should never reach this point.
+ * It means that we're trying to
+ * transfer more blocks than can fit
+ * into the scatter list.
+ */
+ BUG_ON(1);
+
+ host->size = data->bytes_xfered;
+
+ return;
+ }
- return;
+ buffer = wbsd_kmap_sg(host);
}
-
- buffer = wbsd_kmap_sg(host);
}
}
disable_dma(host->dma);
clear_dma_ff(host->dma);
if (data->flags & MMC_DATA_READ)
- set_dma_mode(host->dma, DMA_MODE_READ);
+ set_dma_mode(host->dma, DMA_MODE_READ & ~0x40);
else
- set_dma_mode(host->dma, DMA_MODE_WRITE);
+ set_dma_mode(host->dma, DMA_MODE_WRITE & ~0x40);
set_dma_addr(host->dma, host->dma_addr);
set_dma_count(host->dma, host->size);
/*
* Enable DMA on the host.
*/
- wbsd_write_index(host, WBSD_IDX_DMA,
- WBSD_DMA_SINGLE | WBSD_DMA_ENABLE);
+ wbsd_write_index(host, WBSD_IDX_DMA, WBSD_DMA_ENABLE);
}
else
{
{
unsigned long dmaflags;
int count;
+ u8 status;
WARN_ON(host->mrq == NULL);
*/
if (data->stop)
wbsd_send_command(host, data->stop);
+
+ /*
+ * Wait for the controller to leave data
+ * transfer state.
+ */
+ do
+ {
+ status = wbsd_read_index(host, WBSD_IDX_STATUS);
+ } while (status & (WBSD_BLOCK_READ | WBSD_BLOCK_WRITE));
/*
* DMA transfer?
 * transferred.
*/
if (cmd->data && (cmd->error == MMC_ERR_NONE))
- {
+ {
+ /*
+ * Dirty fix for hardware bug.
+ */
+ if (host->dma == -1)
+ tasklet_schedule(&host->fifo_tasklet);
+
spin_unlock_bh(&host->lock);
return;
spin_lock(&host->lock);
- WARN_ON(!host->mrq);
if (!host->mrq)
goto end;
spin_lock(&host->lock);
- WARN_ON(!host->mrq);
if (!host->mrq)
goto end;
*/
if (isr == 0xff || isr == 0x00)
return IRQ_NONE;
+
+ host->isr |= isr;
/*
* Schedule tasklets as needed.
if (isr & WBSD_INT_CARD)
tasklet_schedule(&host->card_tasklet);
if (isr & WBSD_INT_FIFO_THRE)
- tasklet_hi_schedule(&host->fifo_tasklet);
+ tasklet_schedule(&host->fifo_tasklet);
if (isr & WBSD_INT_CRC)
tasklet_hi_schedule(&host->crc_tasklet);
if (isr & WBSD_INT_TIMEOUT)
* Maximum number of segments. Worst case is one sector per segment
* so this will be 64kB/512.
*/
- mmc->max_hw_segs = NR_SG;
- mmc->max_phys_segs = NR_SG;
+ mmc->max_hw_segs = 128;
+ mmc->max_phys_segs = 128;
/*
* Maximum number of sectors in one transfer. Also limited by 64kB
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Winbond W83L51xD SD/MMC card interface driver");
+MODULE_VERSION(DRIVER_VERSION);
MODULE_PARM_DESC(io, "I/O base to allocate. Must be 8 byte aligned. (default 0x248)");
MODULE_PARM_DESC(irq, "IRQ to allocate. (default 6)");
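
As a usage sketch only, with values that simply restate the documented
defaults, the driver could be loaded with explicit resources like so:

    modprobe wbsd io=0x248 irq=6
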
/*
- * linux/drivers/mmc/wbsd.h
+ * linux/drivers/mmc/wbsd.h - Winbond W83L51xD SD/MMC driver
*
- * Copyright (C) 2004 Pierre Ossman, All Rights Reserved.
+ * Copyright (C) 2004-2005 Pierre Ossman, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
#define WBSD_FIFOEN_FULL 0x10
#define WBSD_FIFO_THREMASK 0x0F
+#define WBSD_BLOCK_READ 0x80
+#define WBSD_BLOCK_WRITE 0x40
#define WBSD_BUSY 0x20
#define WBSD_CARDTRAFFIC 0x04
#define WBSD_SENDCMD 0x02
#define WBSD_CRC_FAIL 0x0B /* S101E (01011) */
-/* 64kB / 512 */
-#define NR_SG 128
-
struct wbsd_host
{
struct mmc_host* mmc; /* MMC structure */
struct mmc_request* mrq; /* Current request */
- struct scatterlist sg[NR_SG]; /* SG list */
+ u8 isr; /* Accumulated ISR */
+
struct scatterlist* cur_sg; /* Current SG entry */
unsigned int num_sg; /* Number of entries left */
+ void* mapped_sg; /* vaddr of mapped sg */
unsigned int offset; /* Offset into current entry */
 unsigned int remain; /* Data left in current entry */
--- /dev/null
+/*
+ * $Id: block2mtd.c,v 1.23 2005/01/05 17:05:46 dwmw2 Exp $
+ *
+ * block2mtd.c - create an mtd from a block device
+ *
+ * Copyright (C) 2001,2002 Simon Evans <spse@secret.org.uk>
+ * Copyright (C) 2004 Gareth Bult <Gareth@Encryptec.net>
+ * Copyright (C) 2004,2005 Jörn Engel <joern@wh.fh-wedel.de>
+ *
+ * Licence: GPL
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/blkdev.h>
+#include <linux/bio.h>
+#include <linux/pagemap.h>
+#include <linux/list.h>
+#include <linux/init.h>
+#include <linux/mtd/mtd.h>
+#include <linux/buffer_head.h>
+
+#define VERSION "$Revision: 1.23 $"
+
+
+#define ERROR(fmt, args...) printk(KERN_ERR "block2mtd: " fmt "\n" , ## args)
+#define INFO(fmt, args...) printk(KERN_INFO "block2mtd: " fmt "\n" , ## args)
+
+
+/* Info for the block device */
+struct block2mtd_dev {
+ struct list_head list;
+ struct block_device *blkdev;
+ struct mtd_info mtd;
+ struct semaphore write_mutex;
+};
+
+
+/* Static info about the MTD, used in cleanup_module */
+static LIST_HEAD(blkmtd_device_list);
+
+
+#define PAGE_READAHEAD 64
+void cache_readahead(struct address_space *mapping, int index)
+{
+ filler_t *filler = (filler_t*)mapping->a_ops->readpage;
+ int i, pagei;
+ unsigned ret = 0;
+ unsigned long end_index;
+ struct page *page;
+ LIST_HEAD(page_pool);
+ struct inode *inode = mapping->host;
+ loff_t isize = i_size_read(inode);
+
+ if (!isize) {
+ INFO("iSize=0 in cache_readahead\n");
+ return;
+ }
+
+ end_index = ((isize - 1) >> PAGE_CACHE_SHIFT);
+
+ spin_lock_irq(&mapping->tree_lock);
+ for (i = 0; i < PAGE_READAHEAD; i++) {
+ pagei = index + i;
+ if (pagei > end_index) {
+ INFO("Overrun end of disk in cache readahead\n");
+ break;
+ }
+ page = radix_tree_lookup(&mapping->page_tree, pagei);
+ if (page && (!i))
+ break;
+ if (page)
+ continue;
+ spin_unlock_irq(&mapping->tree_lock);
+ page = page_cache_alloc_cold(mapping);
+ spin_lock_irq(&mapping->tree_lock);
+ if (!page)
+ break;
+ page->index = pagei;
+ list_add(&page->lru, &page_pool);
+ ret++;
+ }
+ spin_unlock_irq(&mapping->tree_lock);
+ if (ret)
+ read_cache_pages(mapping, &page_pool, filler, NULL);
+}
+
+
+static struct page* page_readahead(struct address_space *mapping, int index)
+{
+ filler_t *filler = (filler_t*)mapping->a_ops->readpage;
+ //do_page_cache_readahead(mapping, index, XXX, 64);
+ cache_readahead(mapping, index);
+ return read_cache_page(mapping, index, filler, NULL);
+}
+
+
+/* erase a specified part of the device */
+static int _block2mtd_erase(struct block2mtd_dev *dev, loff_t to, size_t len)
+{
+ struct address_space *mapping = dev->blkdev->bd_inode->i_mapping;
+ struct page *page;
+ int index = to >> PAGE_SHIFT; // page index
+ int pages = len >> PAGE_SHIFT;
+ u_long *p;
+ u_long *max;
+
+ while (pages) {
+ page = page_readahead(mapping, index);
+ if (!page)
+ return -ENOMEM;
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ max = (u_long*)(page_address(page) + PAGE_SIZE);
+ for (p=(u_long*)page_address(page); p<max; p++)
+ if (*p != -1UL) {
+ lock_page(page);
+ memset(page_address(page), 0xff, PAGE_SIZE);
+ set_page_dirty(page);
+ unlock_page(page);
+ break;
+ }
+
+ page_cache_release(page);
+ pages--;
+ index++;
+ }
+ return 0;
+}
+static int block2mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
+{
+ struct block2mtd_dev *dev = mtd->priv;
+ size_t from = instr->addr;
+ size_t len = instr->len;
+ int err;
+
+ instr->state = MTD_ERASING;
+ down(&dev->write_mutex);
+ err = _block2mtd_erase(dev, from, len);
+ up(&dev->write_mutex);
+ if (err) {
+ ERROR("erase failed err = %d", err);
+ instr->state = MTD_ERASE_FAILED;
+ } else
+ instr->state = MTD_ERASE_DONE;
+
+ instr->state = MTD_ERASE_DONE;
+ mtd_erase_callback(instr);
+ return err;
+}
+
+
+static int block2mtd_read(struct mtd_info *mtd, loff_t from, size_t len,
+ size_t *retlen, u_char *buf)
+{
+ struct block2mtd_dev *dev = mtd->priv;
+ struct page *page;
+ int index = from >> PAGE_SHIFT;
+ int offset = from & (PAGE_SHIFT-1);
+ int cpylen;
+
+ if (from > mtd->size)
+ return -EINVAL;
+ if (from + len > mtd->size)
+ len = mtd->size - from;
+
+ if (retlen)
+ *retlen = 0;
+
+ while (len) {
+ if ((offset + len) > PAGE_SIZE)
+ cpylen = PAGE_SIZE - offset; // multiple pages
+ else
+ cpylen = len; // this page
+ len = len - cpylen;
+
+ // Get page
+ page = page_readahead(dev->blkdev->bd_inode->i_mapping, index);
+ if (!page)
+ return -ENOMEM;
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ memcpy(buf, page_address(page) + offset, cpylen);
+ page_cache_release(page);
+
+ if (retlen)
+ *retlen += cpylen;
+ buf += cpylen;
+ offset = 0;
+ index++;
+ }
+ return 0;
+}
+
+
+/* write data to the underlying device */
+static int _block2mtd_write(struct block2mtd_dev *dev, const u_char *buf,
+ loff_t to, size_t len, size_t *retlen)
+{
+ struct page *page;
+ struct address_space *mapping = dev->blkdev->bd_inode->i_mapping;
+ int index = to >> PAGE_SHIFT; // page index
+ int offset = to & ~PAGE_MASK; // page offset
+ int cpylen;
+
+ if (retlen)
+ *retlen = 0;
+ while (len) {
+ if ((offset+len) > PAGE_SIZE)
+ cpylen = PAGE_SIZE - offset; // multiple pages
+ else
+ cpylen = len; // this page
+ len = len - cpylen;
+
+ // Get page
+ page = page_readahead(mapping, index);
+ if (!page)
+ return -ENOMEM;
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ if (memcmp(page_address(page)+offset, buf, cpylen)) {
+ lock_page(page);
+ memcpy(page_address(page) + offset, buf, cpylen);
+ set_page_dirty(page);
+ unlock_page(page);
+ }
+ page_cache_release(page);
+
+ if (retlen)
+ *retlen += cpylen;
+
+ buf += cpylen;
+ offset = 0;
+ index++;
+ }
+ return 0;
+}
+static int block2mtd_write(struct mtd_info *mtd, loff_t to, size_t len,
+ size_t *retlen, const u_char *buf)
+{
+ struct block2mtd_dev *dev = mtd->priv;
+ int err;
+
+ if (!len)
+ return 0;
+ if (to >= mtd->size)
+ return -ENOSPC;
+ if (to + len > mtd->size)
+ len = mtd->size - to;
+
+ down(&dev->write_mutex);
+ err = _block2mtd_write(dev, buf, to, len, retlen);
+ up(&dev->write_mutex);
+ if (err > 0)
+ err = 0;
+ return err;
+}
+
+
+/* sync the device - wait until the write queue is empty */
+static void block2mtd_sync(struct mtd_info *mtd)
+{
+ struct block2mtd_dev *dev = mtd->priv;
+ sync_blockdev(dev->blkdev);
+ return;
+}
+
+
+static void block2mtd_free_device(struct block2mtd_dev *dev)
+{
+ if (!dev)
+ return;
+
+ kfree(dev->mtd.name);
+
+ if (dev->blkdev) {
+ invalidate_inode_pages(dev->blkdev->bd_inode->i_mapping);
+ close_bdev_excl(dev->blkdev);
+ }
+
+ kfree(dev);
+}
+
+
+/* FIXME: ensure that mtd->size % erase_size == 0 */
+static struct block2mtd_dev *add_device(char *devname, int erase_size)
+{
+ struct block_device *bdev;
+ struct block2mtd_dev *dev;
+
+ if (!devname)
+ return NULL;
+
+ dev = kmalloc(sizeof(struct block2mtd_dev), GFP_KERNEL);
+ if (!dev)
+ return NULL;
+ memset(dev, 0, sizeof(*dev));
+
+ /* Get a handle on the device */
+ bdev = open_bdev_excl(devname, O_RDWR, NULL);
+ if (IS_ERR(bdev)) {
+ ERROR("error: cannot open device %s", devname);
+ goto devinit_err;
+ }
+ dev->blkdev = bdev;
+
+ if (MAJOR(bdev->bd_dev) == MTD_BLOCK_MAJOR) {
+ ERROR("attempting to use an MTD device as a block device");
+ goto devinit_err;
+ }
+
+ init_MUTEX(&dev->write_mutex);
+
+ /* Setup the MTD structure */
+ /* make the name contain the block device in */
+ dev->mtd.name = kmalloc(sizeof("block2mtd: ") + strlen(devname),
+ GFP_KERNEL);
+ if (!dev->mtd.name)
+ goto devinit_err;
+
+ sprintf(dev->mtd.name, "block2mtd: %s", devname);
+
+ dev->mtd.size = dev->blkdev->bd_inode->i_size & PAGE_MASK;
+ dev->mtd.erasesize = erase_size;
+ dev->mtd.type = MTD_RAM;
+ dev->mtd.flags = MTD_CAP_RAM;
+ dev->mtd.erase = block2mtd_erase;
+ dev->mtd.write = block2mtd_write;
+ dev->mtd.writev = default_mtd_writev;
+ dev->mtd.sync = block2mtd_sync;
+ dev->mtd.read = block2mtd_read;
+ dev->mtd.readv = default_mtd_readv;
+ dev->mtd.priv = dev;
+ dev->mtd.owner = THIS_MODULE;
+
+ if (add_mtd_device(&dev->mtd)) {
+ /* Device didn't get added, so free the entry */
+ goto devinit_err;
+ }
+ list_add(&dev->list, &blkmtd_device_list);
+ INFO("mtd%d: [%s] erase_size = %dKiB [%d]", dev->mtd.index,
+ dev->mtd.name + strlen("blkmtd: "),
+ dev->mtd.erasesize >> 10, dev->mtd.erasesize);
+ return dev;
+
+devinit_err:
+ block2mtd_free_device(dev);
+ return NULL;
+}
+
+
+static int ustrtoul(const char *cp, char **endp, unsigned int base)
+{
+ unsigned long result = simple_strtoul(cp, endp, base);
+ switch (**endp) {
+ case 'G' :
+ result *= 1024;
+ case 'M':
+ result *= 1024;
+ case 'k':
+ result *= 1024;
+ /* By dwmw2 editorial decree, "ki", "Mi" or "Gi" are to be used. */
+ if ((*endp)[1] == 'i')
+ (*endp) += 2;
+ }
+ return result;
+}
+
+
+static int parse_num32(u32 *num32, const char *token)
+{
+ char *endp;
+ unsigned long n;
+
+ n = ustrtoul(token, &endp, 0);
+ if (*endp)
+ return -EINVAL;
+
+ *num32 = n;
+ return 0;
+}
+
+
+static int parse_name(char **pname, const char *token, size_t limit)
+{
+ size_t len;
+ char *name;
+
+ len = strlen(token) + 1;
+ if (len > limit)
+ return -ENOSPC;
+
+ name = kmalloc(len, GFP_KERNEL);
+ if (!name)
+ return -ENOMEM;
+
+ strcpy(name, token);
+
+ *pname = name;
+ return 0;
+}
+
+
+static inline void kill_final_newline(char *str)
+{
+ char *newline = strrchr(str, '\n');
+ if (newline && !newline[1])
+ *newline = 0;
+}
+
+
+#define parse_err(fmt, args...) do { \
+ ERROR("block2mtd: " fmt "\n", ## args); \
+ return 0; \
+} while (0)
+
+static int block2mtd_setup(const char *val, struct kernel_param *kp)
+{
+ char buf[80+12], *str=buf; /* 80 for device, 12 for erase size */
+ char *token[2];
+ char *name;
+ u32 erase_size = PAGE_SIZE;
+ int i, ret;
+
+ if (strnlen(val, sizeof(buf)) >= sizeof(buf))
+ parse_err("parameter too long");
+
+ strcpy(str, val);
+ kill_final_newline(str);
+
+ for (i=0; i<2; i++)
+ token[i] = strsep(&str, ",");
+
+ if (str)
+ parse_err("too many arguments");
+
+ if (!token[0])
+ parse_err("no argument");
+
+ ret = parse_name(&name, token[0], 80);
+ if (ret == -ENOMEM)
+ parse_err("out of memory");
+ if (ret == -ENOSPC)
+ parse_err("name too long");
+ if (ret)
+ return 0;
+
+ if (token[1]) {
+ ret = parse_num32(&erase_size, token[1]);
+ if (ret)
+ parse_err("illegal erase size");
+ }
+
+ add_device(name, erase_size);
+
+ return 0;
+}
+
+
+module_param_call(block2mtd, block2mtd_setup, NULL, NULL, 0200);
+MODULE_PARM_DESC(block2mtd, "Device to use. \"block2mtd=<dev>[,<erasesize>]\"");
+
+static int __init block2mtd_init(void)
+{
+ INFO("version " VERSION);
+ return 0;
+}
+
+
+static void __devexit block2mtd_exit(void)
+{
+ struct list_head *pos, *next;
+
+ /* Remove the MTD devices */
+ list_for_each_safe(pos, next, &blkmtd_device_list) {
+ struct block2mtd_dev *dev = list_entry(pos, typeof(*dev), list);
+ block2mtd_sync(&dev->mtd);
+ del_mtd_device(&dev->mtd);
+ INFO("mtd%d: [%s] removed", dev->mtd.index,
+ dev->mtd.name + strlen("blkmtd: "));
+ list_del(&dev->list);
+ block2mtd_free_device(dev);
+ }
+}
+
+
+module_init(block2mtd_init);
+module_exit(block2mtd_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Simon Evans <spse@secret.org.uk> and others");
+MODULE_DESCRIPTION("Emulate an MTD using a block device");
/*
* mtdram - a test mtd device
- * $Id: mtdram.c,v 1.34 2004/11/16 18:29:01 dwmw2 Exp $
+ * $Id: mtdram.c,v 1.35 2005/01/05 18:05:12 dwmw2 Exp $
* Author: Alexander Larsson <alex@cendio.se>
*
* Copyright (c) 1999 Alexander Larsson <alex@cendio.se>
#ifdef MODULE
static unsigned long total_size = CONFIG_MTDRAM_TOTAL_SIZE;
static unsigned long erase_size = CONFIG_MTDRAM_ERASE_SIZE;
-MODULE_PARM(total_size,"l");
+module_param(total_size,ulong,0);
MODULE_PARM_DESC(total_size, "Total device size in KiB");
-MODULE_PARM(erase_size,"l");
+module_param(erase_size,ulong,0);
MODULE_PARM_DESC(erase_size, "Device erase block size in KiB");
#define MTDRAM_TOTAL_SIZE (total_size * 1024)
#define MTDRAM_ERASE_SIZE (erase_size * 1024)
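
With the switch to module_param() the sizes are still interpreted in KiB, so a
hypothetical invocation creating an 8 MiB RAM-backed MTD with 128 KiB erase
blocks would be:

    modprobe mtdram total_size=8192 erase_size=128
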
void *addr;
int err;
/* Allocate some memory */
- mtd_info = (struct mtd_info *)kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
+ mtd_info = kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
if (!mtd_info)
return -ENOMEM;
void *addr;
int err;
/* Allocate some memory */
- mtd_info = (struct mtd_info *)kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
+ mtd_info = kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
if (!mtd_info)
return -ENOMEM;
/**
+ * $Id: phram.c,v 1.11 2005/01/05 18:05:13 dwmw2 Exp $
*
- * $Id: phram.c,v 1.3 2004/11/16 18:29:01 dwmw2 Exp $
- *
- * Copyright (c) Jochen Schaeuble <psionic@psionic.de>
- * 07/2003 rewritten by Joern Engel <joern@wh.fh-wedel.de>
- *
- * DISCLAIMER: This driver makes use of Rusty's excellent module code,
- * so it will not work for 2.4 without changes and it wont work for 2.4
- * as a module without major changes. Oh well!
+ * Copyright (c) ???? Jochen Schäuble <psionic@psionic.de>
+ * Copyright (c) 2003-2004 Jörn Engel <joern@wh.fh-wedel.de>
*
* Usage:
*
* phram=<name>,<start>,<len>
* <name> may be up to 63 characters.
* <start> and <len> can be octal, decimal or hexadecimal. If followed
- * by "k", "M" or "G", the numbers will be interpreted as kilo, mega or
+ * by "ki", "Mi" or "Gi", the numbers will be interpreted as kilo, mega or
* gigabytes.
*
+ * Example:
+ * phram=swap,64Mi,128Mi phram=test,900Mi,1Mi
+ *
*/
#include <asm/io.h>
#define ERROR(fmt, args...) printk(KERN_ERR "phram: " fmt , ## args)
struct phram_mtd_list {
+ struct mtd_info mtd;
struct list_head list;
- struct mtd_info *mtdinfo;
};
static LIST_HEAD(phram_list);
static int phram_erase(struct mtd_info *mtd, struct erase_info *instr)
{
- u_char *start = (u_char *)mtd->priv;
+ u_char *start = mtd->priv;
if (instr->addr + instr->len > mtd->size)
return -EINVAL;
static int phram_point(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char **mtdbuf)
{
- u_char *start = (u_char *)mtd->priv;
+ u_char *start = mtd->priv;
if (from + len > mtd->size)
return -EINVAL;
static int phram_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
- u_char *start = (u_char *)mtd->priv;
+ u_char *start = mtd->priv;
if (from + len > mtd->size)
return -EINVAL;
static int phram_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
- u_char *start = (u_char *)mtd->priv;
+ u_char *start = mtd->priv;
if (to + len > mtd->size)
return -EINVAL;
struct phram_mtd_list *this;
list_for_each_entry(this, &phram_list, list) {
- del_mtd_device(this->mtdinfo);
- iounmap(this->mtdinfo->priv);
- kfree(this->mtdinfo);
+ del_mtd_device(&this->mtd);
+ iounmap(this->mtd.priv);
kfree(this);
}
}
if (!new)
goto out0;
- new->mtdinfo = kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
- if (!new->mtdinfo)
- goto out1;
-
- memset(new->mtdinfo, 0, sizeof(struct mtd_info));
+ memset(new, 0, sizeof(*new));
ret = -EIO;
- new->mtdinfo->priv = ioremap(start, len);
- if (!new->mtdinfo->priv) {
+ new->mtd.priv = ioremap(start, len);
+ if (!new->mtd.priv) {
ERROR("ioremap failed\n");
- goto out2;
+ goto out1;
}
- new->mtdinfo->name = name;
- new->mtdinfo->size = len;
- new->mtdinfo->flags = MTD_CAP_RAM | MTD_ERASEABLE | MTD_VOLATILE;
- new->mtdinfo->erase = phram_erase;
- new->mtdinfo->point = phram_point;
- new->mtdinfo->unpoint = phram_unpoint;
- new->mtdinfo->read = phram_read;
- new->mtdinfo->write = phram_write;
- new->mtdinfo->owner = THIS_MODULE;
- new->mtdinfo->type = MTD_RAM;
- new->mtdinfo->erasesize = 0x0;
+ new->mtd.name = name;
+ new->mtd.size = len;
+ new->mtd.flags = MTD_CAP_RAM | MTD_ERASEABLE | MTD_VOLATILE;
+ new->mtd.erase = phram_erase;
+ new->mtd.point = phram_point;
+ new->mtd.unpoint = phram_unpoint;
+ new->mtd.read = phram_read;
+ new->mtd.write = phram_write;
+ new->mtd.owner = THIS_MODULE;
+ new->mtd.type = MTD_RAM;
+ new->mtd.erasesize = 0;
ret = -EAGAIN;
- if (add_mtd_device(new->mtdinfo)) {
+ if (add_mtd_device(&new->mtd)) {
ERROR("Failed to register new device\n");
- goto out3;
+ goto out2;
}
list_add_tail(&new->list, &phram_list);
return 0;
-out3:
- iounmap(new->mtdinfo->priv);
out2:
- kfree(new->mtdinfo);
+ iounmap(new->mtd.priv);
out1:
kfree(new);
out0:
result *= 1024;
case 'k':
result *= 1024;
- endp++;
+ /* By dwmw2 editorial decree, "ki", "Mi" or "Gi" are to be used. */
+ if ((*endp)[1] == 'i')
+ (*endp) += 2;
}
return result;
}
uint32_t len;
int i, ret;
- if (strnlen(val, sizeof(str)) >= sizeof(str))
+ if (strnlen(val, sizeof(buf)) >= sizeof(buf))
parse_err("parameter too long\n");
strcpy(str, val);
}
module_param_call(phram, phram_setup, NULL, NULL, 000);
-MODULE_PARM_DESC(phram, "Memory region to map. \"map=<name>,<start><length>\"");
-
-/*
- * Just for compatibility with slram, this is horrible and should go someday.
- */
-static int __init slram_setup(const char *val, struct kernel_param *kp)
-{
- char buf[256], *str = buf;
-
- if (!val || !val[0])
- parse_err("no arguments to \"slram=\"\n");
-
- if (strnlen(val, sizeof(str)) >= sizeof(str))
- parse_err("parameter too long\n");
-
- strcpy(str, val);
-
- while (str) {
- char *token[3];
- char *name;
- uint32_t start;
- uint32_t len;
- int i, ret;
-
- for (i=0; i<3; i++) {
- token[i] = strsep(&str, ",");
- if (token[i])
- continue;
- parse_err("wrong number of arguments to \"slram=\"\n");
- }
-
- /* name */
- ret = parse_name(&name, token[0]);
- if (ret == -ENOMEM)
- parse_err("of memory\n");
- if (ret == -ENOSPC)
- parse_err("too long\n");
- if (ret)
- return 1;
-
- /* start */
- ret = parse_num32(&start, token[1]);
- if (ret)
- parse_err("illegal start address\n");
-
- /* len */
- if (token[2][0] == '+')
- ret = parse_num32(&len, token[2] + 1);
- else
- ret = parse_num32(&len, token[2]);
-
- if (ret)
- parse_err("illegal device length\n");
-
- if (token[2][0] != '+') {
- if (len < start)
- parse_err("end < start\n");
- len -= start;
- }
-
- register_device(name, start, len);
- }
- return 1;
-}
-
-module_param_call(slram, slram_setup, NULL, NULL, 000);
-MODULE_PARM_DESC(slram, "List of memory regions to map. \"map=<name>,<start><length/end>\"");
+MODULE_PARM_DESC(phram,"Memory region to map. \"map=<name>,<start>,<length>\"");
static int __init init_phram(void)
{
- printk(KERN_ERR "phram loaded\n");
return 0;
}
/*======================================================================
- $Id: slram.c,v 1.32 2004/11/16 18:29:01 dwmw2 Exp $
+ $Id: slram.c,v 1.33 2005/01/05 18:05:13 dwmw2 Exp $
This driver provides a method to access memory not used by the kernel
itself (i.e. if the kernel commandline mem=xxx is used). To actually
#ifdef MODULE
static char *map[SLRAM_MAX_DEVICES_PARAMS];
+
+module_param_array(map, charp, NULL, 0);
+MODULE_PARM_DESC(map, "List of memory regions to map. \"map=<name>, <start>, <length / end>\"");
#else
static char *map;
#endif
-MODULE_PARM(map, "3-" __MODULE_STRING(SLRAM_MAX_DEVICES_PARAMS) "s");
-MODULE_PARM_DESC(map, "List of memory regions to map. \"map=<name>, <start>, <length / end>\"");
-
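
For comparison, a hypothetical slram load using the map= syntax above, and
assuming the same +<length>/end convention as the compatibility parser just
removed from phram, might look like:

    modprobe slram map=ramdev,0x10000000,+0x100000
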
static slram_mtd_list_t *slram_mtdlist = NULL;
static int slram_erase(struct mtd_info *, struct erase_info *);
static int slram_point(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char **mtdbuf)
{
- slram_priv_t *priv = (slram_priv_t *)mtd->priv;
+ slram_priv_t *priv = mtd->priv;
*mtdbuf = priv->start + from;
*retlen = len;
static int slram_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
- slram_priv_t *priv = (slram_priv_t *)mtd->priv;
+ slram_priv_t *priv = mtd->priv;
memcpy(buf, priv->start + from, len);
static int slram_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
- slram_priv_t *priv = (slram_priv_t *)mtd->priv;
+ slram_priv_t *priv = mtd->priv;
memcpy(priv->start + to, buf, len);
if ((*curmtd)->mtdinfo) {
memset((char *)(*curmtd)->mtdinfo, 0, sizeof(struct mtd_info));
(*curmtd)->mtdinfo->priv =
- (void *)kmalloc(sizeof(slram_priv_t), GFP_KERNEL);
+ kmalloc(sizeof(slram_priv_t), GFP_KERNEL);
if (!(*curmtd)->mtdinfo->priv) {
kfree((*curmtd)->mtdinfo);
--- /dev/null
+/*
+ * drivers/mtd/maps/chestnut.c
+ *
+ * $Id: chestnut.c,v 1.1 2005/01/05 16:59:50 dwmw2 Exp $
+ *
+ * Flash map driver for IBM Chestnut (750FXGX Eval)
+ *
+ * We chose not to enable the 8-bit flash, as it contains the firmware and
+ * board info. Thus only the 32-bit flash is supported.
+ *
+ * Author: <source@mvista.com>
+ *
+ * 2004 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <platforms/chestnut.h>
+
+static struct map_info chestnut32_map = {
+ .name = "User FS",
+ .size = CHESTNUT_32BIT_SIZE,
+ .bankwidth = 4,
+ .phys = CHESTNUT_32BIT_BASE,
+};
+
+static struct mtd_partition chestnut32_partitions[] = {
+ {
+ .name = "User FS",
+ .offset = 0,
+ .size = CHESTNUT_32BIT_SIZE,
+ }
+};
+
+static struct mtd_info *flash32;
+
+int __init init_chestnut(void)
+{
+ /* 32-bit FLASH */
+
+ chestnut32_map.virt = ioremap(chestnut32_map.phys, chestnut32_map.size);
+
+ if (!chestnut32_map.virt) {
+ printk(KERN_NOTICE "Failed to ioremap 32-bit flash\n");
+ return -EIO;
+ }
+
+ simple_map_init(&chestnut32_map);
+
+ flash32 = do_map_probe("cfi_probe", &chestnut32_map);
+ if (flash32) {
+ flash32->owner = THIS_MODULE;
+ add_mtd_partitions(flash32, chestnut32_partitions,
+ ARRAY_SIZE(chestnut32_partitions));
+ } else {
+ printk(KERN_NOTICE "map probe failed for 32-bit flash\n");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static void __exit
+cleanup_chestnut(void)
+{
+ if (flash32) {
+ del_mtd_partitions(flash32);
+ map_destroy(flash32);
+ }
+
+ if (chestnut32_map.virt) {
+ iounmap((void *)chestnut32_map.virt);
+ chestnut32_map.virt = 0;
+ }
+}
+
+module_init(init_chestnut);
+module_exit(cleanup_chestnut);
+
+MODULE_DESCRIPTION("MTD map and partitions for IBM Chestnut (750fxgx Eval)");
+MODULE_AUTHOR("<source@mvista.com>");
+MODULE_LICENSE("GPL");
* ichxrom.c
*
* Normal mappings of chips in physical memory
- * $Id: ichxrom.c,v 1.15 2004/11/16 18:29:02 dwmw2 Exp $
+ * $Id: ichxrom.c,v 1.16 2004/11/28 09:40:39 dwmw2 Exp $
*/
#include <linux/module.h>
};
#endif
-static spinlock_t ipaq_vpp_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ipaq_vpp_lock);
static void h3xxx_set_vpp(struct map_info *map, int vpp)
{
--- /dev/null
+/*
+ * sharpsl-flash.c
+ *
+ * Copyright (C) 2001 Lineo Japan, Inc.
+ * Copyright (C) 2002 SHARP
+ *
+ * $Id: sharpsl-flash.c,v 1.2 2004/11/24 20:38:06 rpurdie Exp $
+ *
+ * based on rpxlite.c,v 1.15 2001/10/02 15:05:14 dwmw2 Exp
+ * Handle mapping of the flash on the RPX Lite and CLLF boards
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#define WINDOW_ADDR 0x00000000
+#define WINDOW_SIZE 0x01000000
+#define BANK_WIDTH 2
+
+static struct mtd_info *mymtd;
+
+struct map_info sharpsl_map = {
+ .name = "sharpsl-flash",
+ .size = WINDOW_SIZE,
+ .bankwidth = BANK_WIDTH,
+ .phys = WINDOW_ADDR
+};
+
+static struct mtd_partition sharpsl_partitions[1] = {
+ {
+ name: "Filesystem",
+ size: 0x006d0000,
+ offset: 0x00120000
+ }
+};
+
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+int __init init_sharpsl(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+ char *part_type = "static";
+
+ printk(KERN_NOTICE "Sharp SL series flash device: %x at %x\n", WINDOW_SIZE, WINDOW_ADDR);
+ sharpsl_map.virt = ioremap(WINDOW_ADDR, WINDOW_SIZE);
+ if (!sharpsl_map.virt) {
+ printk("Failed to ioremap\n");
+ return -EIO;
+ }
+ mymtd = do_map_probe("map_rom", &sharpsl_map);
+ if (!mymtd) {
+ iounmap(sharpsl_map.virt);
+ return -ENXIO;
+ }
+
+ mymtd->owner = THIS_MODULE;
+
+ parts = sharpsl_partitions;
+ nb_parts = NB_OF(sharpsl_partitions);
+
+ printk(KERN_NOTICE "Using %s partision definition\n", part_type);
+ add_mtd_partitions(mymtd, parts, nb_parts);
+
+ return 0;
+}
+
+static void __exit cleanup_sharpsl(void)
+{
+ if (mymtd) {
+ del_mtd_partitions(mymtd);
+ map_destroy(mymtd);
+ }
+ if (sharpsl_map.virt) {
+ iounmap(sharpsl_map.virt);
+ sharpsl_map.virt = 0;
+ }
+}
+
+module_init(init_sharpsl);
+module_exit(cleanup_sharpsl);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("SHARP (Original: Arnold Christensen <AKC@pel.dk>)");
+MODULE_DESCRIPTION("MTD map driver for SHARP SL series");
* - If you have created your own jffs file system and the bios overwrites
* it during boot, try disabling Drive A: and B: in the boot order.
*
- * $Id: ts5500_flash.c,v 1.1 2004/09/20 15:33:26 sean Exp $
+ * $Id: ts5500_flash.c,v 1.2 2004/11/28 09:40:40 dwmw2 Exp $
*/
#include <linux/config.h>
--- /dev/null
+/*
+ * $Id: walnut.c,v 1.2 2004/12/10 12:07:42 holindho Exp $
+ *
+ * Mapping for Walnut flash
+ * (used ebony.c as a "framework")
+ *
+ * Heikki Lindholm <holindho@infradead.org>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/config.h>
+#include <linux/version.h>
+#include <asm/io.h>
+#include <asm/ibm4xx.h>
+#include <platforms/4xx/walnut.h>
+
+/* these should be in platforms/4xx/walnut.h ? */
+#define WALNUT_FLASH_ONBD_N(x) (x & 0x02)
+#define WALNUT_FLASH_SRAM_SEL(x) (x & 0x01)
+#define WALNUT_FLASH_LOW 0xFFF00000
+#define WALNUT_FLASH_HIGH 0xFFF80000
+#define WALNUT_FLASH_SIZE 0x80000
+
+static struct mtd_info *flash;
+
+static struct map_info walnut_map = {
+ .name = "Walnut flash",
+ .size = WALNUT_FLASH_SIZE,
+ .bankwidth = 1,
+};
+
+/* Actually, OpenBIOS is the last 128 KiB of the flash - better
+ * partitioning could be made */
+static struct mtd_partition walnut_partitions[] = {
+ {
+ .name = "OpenBIOS",
+ .offset = 0x0,
+ .size = WALNUT_FLASH_SIZE,
+ /*.mask_flags = MTD_WRITEABLE, */ /* force read-only */
+ }
+};
+
+int __init init_walnut(void)
+{
+ u8 fpga_brds1;
+ void *fpga_brds1_adr;
+ void *fpga_status_adr;
+ unsigned long flash_base;
+
+ /* this should already be mapped (platform/4xx/walnut.c) */
+ fpga_status_adr = ioremap(WALNUT_FPGA_BASE, 8);
+ if (!fpga_status_adr)
+ return -ENOMEM;
+
+ fpga_brds1_adr = fpga_status_adr+5;
+ fpga_brds1 = readb(fpga_brds1_adr);
+ /* iounmap(fpga_status_adr); */
+
+ if (WALNUT_FLASH_ONBD_N(fpga_brds1)) {
+ printk("The on-board flash is disabled (U79 sw 5)!");
+ return -EIO;
+ }
+ if (WALNUT_FLASH_SRAM_SEL(fpga_brds1))
+ flash_base = WALNUT_FLASH_LOW;
+ else
+ flash_base = WALNUT_FLASH_HIGH;
+
+ walnut_map.phys = flash_base;
+ walnut_map.virt =
+ (void __iomem *)ioremap(flash_base, walnut_map.size);
+
+ if (!walnut_map.virt) {
+ printk("Failed to ioremap flash.\n");
+ return -EIO;
+ }
+
+ simple_map_init(&walnut_map);
+
+ flash = do_map_probe("jedec_probe", &walnut_map);
+ if (flash) {
+ flash->owner = THIS_MODULE;
+ add_mtd_partitions(flash, walnut_partitions,
+ ARRAY_SIZE(walnut_partitions));
+ } else {
+ printk("map probe failed for flash\n");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static void __exit cleanup_walnut(void)
+{
+ if (flash) {
+ del_mtd_partitions(flash);
+ map_destroy(flash);
+ }
+
+ if (walnut_map.virt) {
+ iounmap((void *)walnut_map.virt);
+ walnut_map.virt = 0;
+ }
+}
+
+module_init(init_walnut);
+module_exit(cleanup_walnut);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Heikki Lindholm <holindho@infradead.org>");
+MODULE_DESCRIPTION("MTD map and partitions for IBM 405GP Walnut boards");
*
* Interface to generic NAND code for M-Systems DiskOnChip devices
*
- * $Id: diskonchip.c,v 1.42 2004/11/16 18:29:03 dwmw2 Exp $
+ * $Id: diskonchip.c,v 1.45 2005/01/05 18:05:14 dwmw2 Exp $
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/delay.h>
#include <linux/rslib.h>
+#include <linux/moduleparam.h>
#include <asm/io.h>
#include <linux/mtd/mtd.h>
static void doc2000_write_byte(struct mtd_info *mtd, u_char datum)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
if(debug)printk("write_byte %02x\n", datum);
static u_char doc2000_read_byte(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
u_char ret;
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
if (debug)printk("writebuf of %d bytes: ", len);
u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
uint16_t ret;
doc200x_select_chip(mtd, nr);
static void __init doc2000_count_chips(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
uint16_t mfrid;
int i;
static int doc200x_wait(struct mtd_info *mtd, struct nand_chip *this, int state)
{
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
int status;
static void doc2001_write_byte(struct mtd_info *mtd, u_char datum)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
WriteDOC(datum, docptr, CDSNSlowIO);
static u_char doc2001_read_byte(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
//ReadDOC(docptr, CDSNSlowIO);
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
static u_char doc2001plus_read_byte(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
u_char ret;
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
static void doc2001plus_select_chip(struct mtd_info *mtd, int chip)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int floor = 0;
static void doc200x_select_chip(struct mtd_info *mtd, int chip)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int floor = 0;
static void doc200x_hwcontrol(struct mtd_info *mtd, int cmd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
switch(cmd) {
static void doc2001plus_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
/*
static int doc200x_dev_ready(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
if (DoC_is_MillenniumPlus(doc)) {
static void doc200x_enable_hwecc(struct mtd_info *mtd, int mode)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
/* Prime the ECC engine */
static void doc2001plus_enable_hwecc(struct mtd_info *mtd, int mode)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
/* Prime the ECC engine */
unsigned char *ecc_code)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
int i;
int emptymatch = 1;
{
int i, ret = 0;
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
void __iomem *docptr = doc->virtadr;
volatile u_char dummy;
int emptymatch = 1;
const char *id, int findmirror)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
unsigned offs, end = (MAX_MEDIAHEADER_SCAN << this->phys_erase_shift);
int ret;
size_t retlen;
struct mtd_partition *parts)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
int ret = 0;
u_char *buf;
struct NFTLMediaHeader *mh;
struct mtd_partition *parts)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
int ret = 0;
u_char *buf;
struct INFTLMediaHeader *mh;
{
int ret, numparts;
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
struct mtd_partition parts[2];
memset((char *) parts, 0, sizeof(parts));
{
int ret, numparts;
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
struct mtd_partition parts[5];
if (this->numchips > doc->chips_per_floor) {
static inline int __init doc2000_init(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
this->write_byte = doc2000_write_byte;
this->read_byte = doc2000_read_byte;
static inline int __init doc2001_init(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
this->write_byte = doc2001_write_byte;
this->read_byte = doc2001_read_byte;
static inline int __init doc2001plus_init(struct mtd_info *mtd)
{
struct nand_chip *this = mtd->priv;
- struct doc_priv *doc = (void *)this->priv;
+ struct doc_priv *doc = this->priv;
this->write_byte = NULL;
this->read_byte = doc2001plus_read_byte;
unsigned char oldval;
unsigned char newval;
nand = mtd->priv;
- doc = (void *)nand->priv;
+ doc = nand->priv;
/* Use the alias resolution register to determine if this is
in fact the same DOC aliased to a new address. If writes
to one chip's alias resolution register change the value on
nand->bbt_td = (struct nand_bbt_descr *) (doc + 1);
nand->bbt_md = nand->bbt_td + 1;
- mtd->priv = (void *) nand;
+ mtd->priv = nand;
mtd->owner = THIS_MODULE;
- nand->priv = (void *) doc;
+ nand->priv = doc;
nand->select_chip = doc200x_select_chip;
nand->hwcontrol = doc200x_hwcontrol;
nand->dev_ready = doc200x_dev_ready;
actually a DiskOnChip. */
WriteDOC(save_control, virtadr, DOCControl);
fail:
- iounmap((void *)virtadr);
+ iounmap(virtadr);
return ret;
}
for (mtd = doclist; mtd; mtd = nextmtd) {
nand = mtd->priv;
- doc = (void *)nand->priv;
+ doc = nand->priv;
nextmtd = doc->nextdoc;
nand_release(mtd);
- iounmap((void *)doc->virtadr);
+ iounmap(doc->virtadr);
kfree(mtd);
}
}
* The AG-AND chips have nice features for speed improvement,
* which are not supported yet. Read / program 4 pages in one go.
*
- * $Id: nand_base.c,v 1.121 2004/10/06 19:53:11 gleixner Exp $
+ * $Id: nand_base.c,v 1.126 2004/12/13 11:22:25 lavinen Exp $
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
u_char *oob_buf, struct nand_oobinfo *oobsel, int cached)
{
int i, status;
- u_char ecc_code[8];
+ u_char ecc_code[32];
int eccmode = oobsel->useecc ? this->eccmode : NAND_ECC_NONE;
int *oob_config = oobsel->eccpos;
int datidx = 0, eccidx = 0, eccsteps = this->eccsteps;
}
this->write_buf(mtd, this->data_poi, mtd->oobblock);
break;
-
- /* Hardware ecc 8 byte / 512 byte data */
- case NAND_ECC_HW8_512:
- eccbytes += 2;
- /* Hardware ecc 6 byte / 512 byte data */
- case NAND_ECC_HW6_512:
- eccbytes += 3;
- /* Hardware ecc 3 byte / 256 data */
- /* Hardware ecc 3 byte / 512 byte data */
- case NAND_ECC_HW3_256:
- case NAND_ECC_HW3_512:
- eccbytes += 3;
+ default:
+ eccbytes = this->eccbytes;
for (; eccsteps; eccsteps--) {
/* enable hardware ecc logic for write */
this->enable_hwecc(mtd, NAND_ECC_WRITE);
* the data bytes (words) */
if (this->options & NAND_HWECC_SYNDROME)
this->write_buf(mtd, ecc_code, eccbytes);
-
datidx += this->eccsize;
}
break;
-
- default:
- printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
- BUG();
}
/* Write out OOB data */
int eccmode, eccsteps;
int *oob_config, datidx;
int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
- int eccbytes = 3;
+ int eccbytes;
int compareecc = 1;
int oobreadlen;
end = mtd->oobblock;
ecc = this->eccsize;
- switch (eccmode) {
- case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
- eccbytes = 6;
- break;
- case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
- eccbytes = 8;
- break;
- case NAND_ECC_NONE:
- compareecc = 0;
- break;
- }
-
- if (this->options & NAND_HWECC_SYNDROME)
+ eccbytes = this->eccbytes;
+
+ if ((eccmode == NAND_ECC_NONE) || (this->options & NAND_HWECC_SYNDROME))
compareecc = 0;
oobreadlen = mtd->oobsize;
for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=3, datidx += ecc)
this->calculate_ecc(mtd, &data_poi[datidx], &ecc_calc[i]);
break;
-
- case NAND_ECC_HW3_256: /* Hardware ECC 3 byte /256 byte data */
- case NAND_ECC_HW3_512: /* Hardware ECC 3 byte /512 byte data */
- case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
- case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
+
+ default:
for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=eccbytes, datidx += ecc) {
- this->enable_hwecc(mtd, NAND_ECC_READ);
+ this->enable_hwecc(mtd, NAND_ECC_READ);
this->read_buf(mtd, &data_poi[datidx], ecc);
/* HW ecc with syndrome calculation must read the
}
}
break;
-
- default:
- printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
- BUG();
}
/* read oobdata */
* fallback to software ECC
*/
this->eccsize = 256; /* set default eccsize */
+ this->eccbytes = 3;
switch (this->eccmode) {
+ case NAND_ECC_HW12_2048:
+ if (mtd->oobblock < 2048) {
+ printk(KERN_WARNING "2048 byte HW ECC not possible on %d byte page size, fallback to SW ECC\n",
+ mtd->oobblock);
+ this->eccmode = NAND_ECC_SOFT;
+ this->calculate_ecc = nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ } else
+ this->eccsize = 2048;
+ break;
case NAND_ECC_HW3_512:
case NAND_ECC_HW6_512:
this->eccmode = NAND_ECC_SOFT;
this->calculate_ecc = nand_calculate_ecc;
this->correct_data = nand_correct_data;
- break;
} else
- this->eccsize = 512; /* set eccsize to 512 and fall through for function check */
-
+ this->eccsize = 512; /* set eccsize to 512 */
+ break;
+
case NAND_ECC_HW3_256:
- if (this->calculate_ecc && this->correct_data && this->enable_hwecc)
- break;
- printk (KERN_WARNING "No ECC functions supplied, Hardware ECC not possible\n");
- BUG();
-
+ break;
+
case NAND_ECC_NONE:
printk (KERN_WARNING "NAND_ECC_NONE selected by board driver. This is not recommended !!\n");
this->eccmode = NAND_ECC_NONE;
printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
BUG();
}
-
+
+ /* Check hardware ecc function availability and adjust number of ecc bytes per
+ * calculation step
+ */
+ switch (this->eccmode) {
+ case NAND_ECC_HW12_2048:
+ this->eccbytes += 4;
+ case NAND_ECC_HW8_512:
+ this->eccbytes += 2;
+ case NAND_ECC_HW6_512:
+ this->eccbytes += 3;
+ case NAND_ECC_HW3_512:
+ case NAND_ECC_HW3_256:
+ if (this->calculate_ecc && this->correct_data && this->enable_hwecc)
+ break;
+ printk (KERN_WARNING "No ECC functions supplied, Hardware ECC not possible\n");
+ BUG();
+ }
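
Because eccbytes is initialised to 3 earlier in the same function and the
cases above intentionally fall through, the per-step ECC byte counts work
out as:

    NAND_ECC_HW12_2048:   3 + 4 + 2 + 3 = 12
    NAND_ECC_HW8_512:     3 + 2 + 3     =  8
    NAND_ECC_HW6_512:     3 + 3         =  6
    NAND_ECC_HW3_512/256: 3             =  3
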
+
mtd->eccsize = this->eccsize;
/* Set the number of read / write steps for one page to ensure ECC generation */
switch (this->eccmode) {
+ case NAND_ECC_HW12_2048:
+ this->eccsteps = mtd->oobblock / 2048;
+ break;
case NAND_ECC_HW3_512:
case NAND_ECC_HW6_512:
case NAND_ECC_HW8_512:
*
* Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)
*
- * $Id: nand_bbt.c,v 1.26 2004/10/05 13:50:20 gleixner Exp $
+ * $Id: nand_bbt.c,v 1.28 2004/11/13 10:19:09 gleixner Exp $
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
return nand_scan_bbt (mtd, &agand_flashbased);
}
+
/* Is a flash based bad block table requested ? */
if (this->options & NAND_USE_FLASH_BBT) {
/* Use the default pattern descriptors */
if (!this->bbt_td) {
this->bbt_td = &bbt_main_descr;
this->bbt_md = &bbt_mirror_descr;
- }
- if (mtd->oobblock > 512)
- return nand_scan_bbt (mtd, &largepage_flashbased);
- else
- return nand_scan_bbt (mtd, &smallpage_flashbased);
+ }
+ if (!this->badblock_pattern) {
+ this->badblock_pattern = (mtd->oobblock > 512) ?
+ &largepage_flashbased : &smallpage_flashbased;
+ }
} else {
this->bbt_td = NULL;
this->bbt_md = NULL;
- if (mtd->oobblock > 512)
- return nand_scan_bbt (mtd, &largepage_memorybased);
- else
- return nand_scan_bbt (mtd, &smallpage_memorybased);
+ if (!this->badblock_pattern) {
+ this->badblock_pattern = (mtd->oobblock > 512) ?
+ &largepage_memorybased : &smallpage_memorybased;
+ }
}
+ return nand_scan_bbt (mtd, this->badblock_pattern);
}
/**
--- /dev/null
+/*
+ * NAND flash simulator.
+ *
+ * Author: Artem B. Bityuckiy <dedekind@oktetlabs.ru>, <dedekind@infradead.org>
+ *
+ * Copyright (C) 2004 Nokia Corporation
+ *
+ * Note: NS means "NAND Simulator".
+ * Note: Input means input TO flash chip, output means output FROM chip.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2, or (at your option) any later
+ * version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
+ *
+ * $Id: nandsim.c,v 1.7 2004/12/06 11:53:06 dedekind Exp $
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/delay.h>
+#ifdef CONFIG_NS_ABS_POS
+#include <asm/io.h>
+#endif
+
+
+/* Default simulator parameters values */
+#if !defined(CONFIG_NANDSIM_FIRST_ID_BYTE) || \
+ !defined(CONFIG_NANDSIM_SECOND_ID_BYTE) || \
+ !defined(CONFIG_NANDSIM_THIRD_ID_BYTE) || \
+ !defined(CONFIG_NANDSIM_FOURTH_ID_BYTE)
+#define CONFIG_NANDSIM_FIRST_ID_BYTE 0x98
+#define CONFIG_NANDSIM_SECOND_ID_BYTE 0x39
+#define CONFIG_NANDSIM_THIRD_ID_BYTE 0xFF /* No byte */
+#define CONFIG_NANDSIM_FOURTH_ID_BYTE 0xFF /* No byte */
+#endif
+
+#ifndef CONFIG_NANDSIM_ACCESS_DELAY
+#define CONFIG_NANDSIM_ACCESS_DELAY 25
+#endif
+#ifndef CONFIG_NANDSIM_PROGRAMM_DELAY
+#define CONFIG_NANDSIM_PROGRAMM_DELAY 200
+#endif
+#ifndef CONFIG_NANDSIM_ERASE_DELAY
+#define CONFIG_NANDSIM_ERASE_DELAY 2
+#endif
+#ifndef CONFIG_NANDSIM_OUTPUT_CYCLE
+#define CONFIG_NANDSIM_OUTPUT_CYCLE 40
+#endif
+#ifndef CONFIG_NANDSIM_INPUT_CYCLE
+#define CONFIG_NANDSIM_INPUT_CYCLE 50
+#endif
+#ifndef CONFIG_NANDSIM_BUS_WIDTH
+#define CONFIG_NANDSIM_BUS_WIDTH 8
+#endif
+#ifndef CONFIG_NANDSIM_DO_DELAYS
+#define CONFIG_NANDSIM_DO_DELAYS 0
+#endif
+#ifndef CONFIG_NANDSIM_LOG
+#define CONFIG_NANDSIM_LOG 0
+#endif
+#ifndef CONFIG_NANDSIM_DBG
+#define CONFIG_NANDSIM_DBG 0
+#endif
+
+static uint first_id_byte = CONFIG_NANDSIM_FIRST_ID_BYTE;
+static uint second_id_byte = CONFIG_NANDSIM_SECOND_ID_BYTE;
+static uint third_id_byte = CONFIG_NANDSIM_THIRD_ID_BYTE;
+static uint fourth_id_byte = CONFIG_NANDSIM_FOURTH_ID_BYTE;
+static uint access_delay = CONFIG_NANDSIM_ACCESS_DELAY;
+static uint programm_delay = CONFIG_NANDSIM_PROGRAMM_DELAY;
+static uint erase_delay = CONFIG_NANDSIM_ERASE_DELAY;
+static uint output_cycle = CONFIG_NANDSIM_OUTPUT_CYCLE;
+static uint input_cycle = CONFIG_NANDSIM_INPUT_CYCLE;
+static uint bus_width = CONFIG_NANDSIM_BUS_WIDTH;
+static uint do_delays = CONFIG_NANDSIM_DO_DELAYS;
+static uint log = CONFIG_NANDSIM_LOG;
+static uint dbg = CONFIG_NANDSIM_DBG;
+
+module_param(first_id_byte, uint, 0400);
+module_param(second_id_byte, uint, 0400);
+module_param(third_id_byte, uint, 0400);
+module_param(fourth_id_byte, uint, 0400);
+module_param(access_delay, uint, 0400);
+module_param(programm_delay, uint, 0400);
+module_param(erase_delay, uint, 0400);
+module_param(output_cycle, uint, 0400);
+module_param(input_cycle, uint, 0400);
+module_param(bus_width, uint, 0400);
+module_param(do_delays, uint, 0400);
+module_param(log, uint, 0400);
+module_param(dbg, uint, 0400);
+
+MODULE_PARM_DESC(first_id_byte, "The first byte returned by NAND Flash 'read ID' command (manufacturer ID)");
+MODULE_PARM_DESC(second_id_byte, "The second byte returned by NAND Flash 'read ID' command (chip ID)");
+MODULE_PARM_DESC(third_id_byte, "The third byte returned by NAND Flash 'read ID' command");
+MODULE_PARM_DESC(fourth_id_byte, "The fourth byte returned by NAND Flash 'read ID' command");
+MODULE_PARM_DESC(access_delay, "Initial page access delay (microseconds)");
+MODULE_PARM_DESC(programm_delay, "Page program delay (microseconds)");
+MODULE_PARM_DESC(erase_delay, "Sector erase delay (milliseconds)");
+MODULE_PARM_DESC(output_cycle, "Word output (from flash) time (nanoseconds)");
+MODULE_PARM_DESC(input_cycle, "Word input (to flash) time (nanoseconds)");
+MODULE_PARM_DESC(bus_width, "Chip's bus width (8- or 16-bit)");
+MODULE_PARM_DESC(do_delays, "Simulate NAND delays using busy-waits if not zero");
+MODULE_PARM_DESC(log, "Perform logging if not zero");
+MODULE_PARM_DESC(dbg, "Output debug information if not zero");
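+
+/*
+ * All of the parameters above are optional. As a usage sketch (assuming the
+ * module is built as nandsim), loading it with
+ *
+ *   modprobe nandsim first_id_byte=0x98 second_id_byte=0x39 log=1
+ *
+ * simply repeats the built-in default ID bytes and enables operation logging.
+ */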
+
+/* The largest possible page size */
+#define NS_LARGEST_PAGE_SIZE 2048
+
+/* The prefix for simulator output */
+#define NS_OUTPUT_PREFIX "[nandsim]"
+
+/* Simulator's output macros (logging, debugging, warning, error) */
+#define NS_LOG(args...) \
+ do { if (log) printk(KERN_DEBUG NS_OUTPUT_PREFIX " log: " args); } while(0)
+#define NS_DBG(args...) \
+ do { if (dbg) printk(KERN_DEBUG NS_OUTPUT_PREFIX " debug: " args); } while(0)
+#define NS_WARN(args...) \
+	do { printk(KERN_WARNING NS_OUTPUT_PREFIX " warning: " args); } while(0)
+#define NS_ERR(args...) \
+	do { printk(KERN_ERR NS_OUTPUT_PREFIX " error: " args); } while(0)
+
+/* Busy-wait delay macros (microseconds, milliseconds) */
+#define NS_UDELAY(us) \
+ do { if (do_delays) udelay(us); } while(0)
+#define NS_MDELAY(us) \
+ do { if (do_delays) mdelay(us); } while(0)
+
+/* Is the nandsim structure initialized ? */
+#define NS_IS_INITIALIZED(ns) ((ns)->geom.totsz != 0)
+
+/* Good operation completion status */
+#define NS_STATUS_OK(ns) (NAND_STATUS_READY | (NAND_STATUS_WP * ((ns)->lines.wp == 0)))
+
+/* Operation failed completion status */
+#define NS_STATUS_FAILED(ns) (NAND_STATUS_FAIL | NS_STATUS_OK(ns))
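+
+/*
+ * With these definitions a successful operation reports NAND_STATUS_READY,
+ * plus NAND_STATUS_WP while the write-protect line is not asserted; a failed
+ * operation additionally sets NAND_STATUS_FAIL.
+ */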
+
+/* Calculate the page offset in flash RAM image by (row, column) address */
+#define NS_RAW_OFFSET(ns) \
+ (((ns)->regs.row << (ns)->geom.pgshift) + ((ns)->regs.row * (ns)->geom.oobsz) + (ns)->regs.column)
+
+/* Calculate the OOB offset in flash RAM image by (row, column) address */
+#define NS_RAW_OFFSET_OOB(ns) (NS_RAW_OFFSET(ns) + ns->geom.pgsz)
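+
+/*
+ * Example: for a 512-byte page with a 16-byte OOB area (pgshift = 9),
+ * row = 3 and column = 10 give NS_RAW_OFFSET = (3 << 9) + 3 * 16 + 10 = 1594,
+ * the byte offset of column 10 of page 3 in the raw page + OOB image.
+ */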
+
+/* After a command is input, the simulator goes to one of the following states */
+#define STATE_CMD_READ0 0x00000001 /* read data from the beginning of page */
+#define STATE_CMD_READ1 0x00000002 /* read data from the second half of page */
+#define STATE_CMD_READSTART 0x00000003 /* read data second command (large page devices) */
+#define STATE_CMD_PAGEPROG	0x00000004 /* start page program */
+#define STATE_CMD_READOOB 0x00000005 /* read OOB area */
+#define STATE_CMD_ERASE1 0x00000006 /* sector erase first command */
+#define STATE_CMD_STATUS 0x00000007 /* read status */
+#define STATE_CMD_STATUS_M 0x00000008 /* read multi-plane status (isn't implemented) */
+#define STATE_CMD_SEQIN	0x00000009 /* sequential data input */
+#define STATE_CMD_READID 0x0000000A /* read ID */
+#define STATE_CMD_ERASE2 0x0000000B /* sector erase second command */
+#define STATE_CMD_RESET 0x0000000C /* reset */
+#define STATE_CMD_MASK 0x0000000F /* command states mask */
+
+/* After an address is input, the simulator goes to one of these states */
+#define STATE_ADDR_PAGE 0x00000010 /* full (row, column) address is accepted */
+#define STATE_ADDR_SEC 0x00000020 /* sector address was accepted */
+#define STATE_ADDR_ZERO 0x00000030 /* one byte zero address was accepted */
+#define STATE_ADDR_MASK 0x00000030 /* address states mask */
+
+/* During data input/output the simulator is in these states */
+#define STATE_DATAIN 0x00000100 /* waiting for data input */
+#define STATE_DATAIN_MASK 0x00000100 /* data input states mask */
+
+#define STATE_DATAOUT 0x00001000 /* waiting for page data output */
+#define STATE_DATAOUT_ID 0x00002000 /* waiting for ID bytes output */
+#define STATE_DATAOUT_STATUS 0x00003000 /* waiting for status output */
+#define STATE_DATAOUT_STATUS_M 0x00004000 /* waiting for multi-plane status output */
+#define STATE_DATAOUT_MASK 0x00007000 /* data output states mask */
+
+/* Previous operation is done, ready to accept new requests */
+#define STATE_READY 0x00000000
+
+/* This state is used to mark that the next state isn't known yet */
+#define STATE_UNKNOWN 0x10000000
+
+/* Simulator's actions bit masks */
+#define ACTION_CPY 0x00100000 /* copy page/OOB to the internal buffer */
+#define ACTION_PRGPAGE   0x00200000	/* program the internal buffer to flash */
+#define ACTION_SECERASE 0x00300000 /* erase sector */
+#define ACTION_ZEROOFF 0x00400000 /* don't add any offset to address */
+#define ACTION_HALFOFF 0x00500000 /* add to address half of page */
+#define ACTION_OOBOFF 0x00600000 /* add to address OOB offset */
+#define ACTION_MASK 0x00700000 /* action mask */
+
+#define NS_OPER_NUM 12 /* Number of operations supported by the simulator */
+#define NS_OPER_STATES 6 /* Maximum number of states in operation */
+
+#define OPT_ANY 0xFFFFFFFF /* any chip supports this operation */
+#define OPT_PAGE256 0x00000001 /* 256-byte page chips */
+#define OPT_PAGE512 0x00000002 /* 512-byte page chips */
+#define OPT_PAGE2048 0x00000008 /* 2048-byte page chips */
+#define OPT_SMARTMEDIA 0x00000010 /* SmartMedia technology chips */
+#define OPT_AUTOINCR     0x00000020	/* page number auto increment is possible */
+#define OPT_PAGE512_8BIT 0x00000040 /* 512-byte page chips with 8-bit bus width */
+#define OPT_LARGEPAGE (OPT_PAGE2048) /* 2048-byte page chips */
+#define OPT_SMALLPAGE (OPT_PAGE256 | OPT_PAGE512) /* 256 and 512-byte page chips */
+
+/* Remove action bits from state */
+#define NS_STATE(x) ((x) & ~ACTION_MASK)
+
+/*
+ * Maximum number of previous states which need to be saved. Currently saving
+ * is only needed for a page program operation preceded by a read command
+ * (which is only valid for 512-byte pages).
+ */
+#define NS_MAX_PREVSTATES 1
+
+/*
+ * The structure which describes all the internal simulator data.
+ */
+struct nandsim {
+ struct mtd_partition part;
+
+ uint busw; /* flash chip bus width (8 or 16) */
+ u_char ids[4]; /* chip's ID bytes */
+ uint32_t options; /* chip's characteristic bits */
+ uint32_t state; /* current chip state */
+ uint32_t nxstate; /* next expected state */
+
+	uint32_t *op;			/* current operation, NULL if the operation isn't known yet */
+ uint32_t pstates[NS_MAX_PREVSTATES]; /* previous states */
+ uint16_t npstates; /* number of previous states saved */
+ uint16_t stateidx; /* current state index */
+
+ /* The simulated NAND flash image */
+ union flash_media {
+ u_char *byte;
+ uint16_t *word;
+ } mem;
+
+ /* Internal buffer of page + OOB size bytes */
+ union internal_buffer {
+ u_char *byte; /* for byte access */
+ uint16_t *word; /* for 16-bit word access */
+ } buf;
+
+ /* NAND flash "geometry" */
+	struct nandsim_geometry {
+ uint32_t totsz; /* total flash size, bytes */
+ uint32_t secsz; /* flash sector (erase block) size, bytes */
+ uint pgsz; /* NAND flash page size, bytes */
+ uint oobsz; /* page OOB area size, bytes */
+ uint32_t totszoob; /* total flash size including OOB, bytes */
+		uint pgszoob;		/* page size including OOB, bytes */
+ uint secszoob; /* sector size including OOB, bytes */
+ uint pgnum; /* total number of pages */
+ uint pgsec; /* number of pages per sector */
+ uint secshift; /* bits number in sector size */
+ uint pgshift; /* bits number in page size */
+ uint oobshift; /* bits number in OOB size */
+ uint pgaddrbytes; /* bytes per page address */
+ uint secaddrbytes; /* bytes per sector address */
+		uint idbytes;		/* the number of ID bytes that this chip outputs */
+ } geom;
+
+ /* NAND flash internal registers */
+ struct nandsim_regs {
+ unsigned command; /* the command register */
+ u_char status; /* the status register */
+ uint row; /* the page number */
+ uint column; /* the offset within page */
+ uint count; /* internal counter */
+ uint num; /* number of bytes which must be processed */
+ uint off; /* fixed page offset */
+ } regs;
+
+ /* NAND flash lines state */
+ struct ns_lines_status {
+ int ce; /* chip Enable */
+ int cle; /* command Latch Enable */
+ int ale; /* address Latch Enable */
+ int wp; /* write Protect */
+ } lines;
+};
+
+/*
+ * Operations array. To perform any operation the simulator must pass
+ * through the corresponding states chain.
+ */
+static struct nandsim_operations {
+ uint32_t reqopts; /* options which are required to perform the operation */
+ uint32_t states[NS_OPER_STATES]; /* operation's states */
+} ops[NS_OPER_NUM] = {
+ /* Read page + OOB from the beginning */
+ {OPT_SMALLPAGE, {STATE_CMD_READ0 | ACTION_ZEROOFF, STATE_ADDR_PAGE | ACTION_CPY,
+ STATE_DATAOUT, STATE_READY}},
+ /* Read page + OOB from the second half */
+ {OPT_PAGE512_8BIT, {STATE_CMD_READ1 | ACTION_HALFOFF, STATE_ADDR_PAGE | ACTION_CPY,
+ STATE_DATAOUT, STATE_READY}},
+ /* Read OOB */
+ {OPT_SMALLPAGE, {STATE_CMD_READOOB | ACTION_OOBOFF, STATE_ADDR_PAGE | ACTION_CPY,
+ STATE_DATAOUT, STATE_READY}},
+	/* Program page starting from the beginning */
+ {OPT_ANY, {STATE_CMD_SEQIN, STATE_ADDR_PAGE, STATE_DATAIN,
+ STATE_CMD_PAGEPROG | ACTION_PRGPAGE, STATE_READY}},
+	/* Program page starting from the beginning (preceded by a read command) */
+ {OPT_SMALLPAGE, {STATE_CMD_READ0, STATE_CMD_SEQIN | ACTION_ZEROOFF, STATE_ADDR_PAGE,
+ STATE_DATAIN, STATE_CMD_PAGEPROG | ACTION_PRGPAGE, STATE_READY}},
+	/* Program page starting from the second half */
+ {OPT_PAGE512, {STATE_CMD_READ1, STATE_CMD_SEQIN | ACTION_HALFOFF, STATE_ADDR_PAGE,
+ STATE_DATAIN, STATE_CMD_PAGEPROG | ACTION_PRGPAGE, STATE_READY}},
+	/* Program OOB */
+ {OPT_SMALLPAGE, {STATE_CMD_READOOB, STATE_CMD_SEQIN | ACTION_OOBOFF, STATE_ADDR_PAGE,
+ STATE_DATAIN, STATE_CMD_PAGEPROG | ACTION_PRGPAGE, STATE_READY}},
+ /* Erase sector */
+ {OPT_ANY, {STATE_CMD_ERASE1, STATE_ADDR_SEC, STATE_CMD_ERASE2 | ACTION_SECERASE, STATE_READY}},
+ /* Read status */
+ {OPT_ANY, {STATE_CMD_STATUS, STATE_DATAOUT_STATUS, STATE_READY}},
+ /* Read multi-plane status */
+ {OPT_SMARTMEDIA, {STATE_CMD_STATUS_M, STATE_DATAOUT_STATUS_M, STATE_READY}},
+ /* Read ID */
+ {OPT_ANY, {STATE_CMD_READID, STATE_ADDR_ZERO, STATE_DATAOUT_ID, STATE_READY}},
+ /* Large page devices read page */
+ {OPT_LARGEPAGE, {STATE_CMD_READ0, STATE_ADDR_PAGE, STATE_CMD_READSTART | ACTION_CPY,
+ STATE_DATAOUT, STATE_READY}}
+};
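+
+/*
+ * For instance, a small-page read follows the first entry above:
+ * STATE_CMD_READ0 -> STATE_ADDR_PAGE (ACTION_CPY copies page + OOB into the
+ * internal buffer) -> STATE_DATAOUT -> STATE_READY.
+ */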
+
+/* MTD structure for NAND controller */
+static struct mtd_info *nsmtd;
+
+static u_char ns_verify_buf[NS_LARGEST_PAGE_SIZE];
+
+/*
+ * Initialize the nandsim structure.
+ *
+ * RETURNS: 0 if success, -ERRNO if failure.
+ */
+static int
+init_nandsim(struct mtd_info *mtd)
+{
+ struct nand_chip *chip = (struct nand_chip *)mtd->priv;
+ struct nandsim *ns = (struct nandsim *)(chip->priv);
+ int i;
+
+ if (NS_IS_INITIALIZED(ns)) {
+ NS_ERR("init_nandsim: nandsim is already initialized\n");
+ return -EIO;
+ }
+
+ /* Force mtd to not do delays */
+ chip->chip_delay = 0;
+
+ /* Initialize the NAND flash parameters */
+ ns->busw = chip->options & NAND_BUSWIDTH_16 ? 16 : 8;
+ ns->geom.totsz = mtd->size;
+ ns->geom.pgsz = mtd->oobblock;
+ ns->geom.oobsz = mtd->oobsize;
+ ns->geom.secsz = mtd->erasesize;
+ ns->geom.pgszoob = ns->geom.pgsz + ns->geom.oobsz;
+ ns->geom.pgnum = ns->geom.totsz / ns->geom.pgsz;
+ ns->geom.totszoob = ns->geom.totsz + ns->geom.pgnum * ns->geom.oobsz;
+ ns->geom.secshift = ffs(ns->geom.secsz) - 1;
+ ns->geom.pgshift = chip->page_shift;
+ ns->geom.oobshift = ffs(ns->geom.oobsz) - 1;
+ ns->geom.pgsec = ns->geom.secsz / ns->geom.pgsz;
+ ns->geom.secszoob = ns->geom.secsz + ns->geom.oobsz * ns->geom.pgsec;
+ ns->options = 0;
+
+ if (ns->geom.pgsz == 256) {
+ ns->options |= OPT_PAGE256;
+ }
+ else if (ns->geom.pgsz == 512) {
+ ns->options |= (OPT_PAGE512 | OPT_AUTOINCR);
+ if (ns->busw == 8)
+ ns->options |= OPT_PAGE512_8BIT;
+ } else if (ns->geom.pgsz == 2048) {
+ ns->options |= OPT_PAGE2048;
+ } else {
+ NS_ERR("init_nandsim: unknown page size %u\n", ns->geom.pgsz);
+ return -EIO;
+ }
+
+ if (ns->options & OPT_SMALLPAGE) {
+ if (ns->geom.totsz < (64 << 20)) {
+ ns->geom.pgaddrbytes = 3;
+ ns->geom.secaddrbytes = 2;
+ } else {
+ ns->geom.pgaddrbytes = 4;
+ ns->geom.secaddrbytes = 3;
+ }
+ } else {
+ if (ns->geom.totsz <= (128 << 20)) {
+ ns->geom.pgaddrbytes = 5;
+ ns->geom.secaddrbytes = 2;
+ } else {
+ ns->geom.pgaddrbytes = 5;
+ ns->geom.secaddrbytes = 3;
+ }
+ }
+
+ /* Detect how many ID bytes the NAND chip outputs */
+ for (i = 0; nand_flash_ids[i].name != NULL; i++) {
+ if (second_id_byte != nand_flash_ids[i].id)
+ continue;
+ if (!(nand_flash_ids[i].options & NAND_NO_AUTOINCR))
+ ns->options |= OPT_AUTOINCR;
+ }
+
+ if (ns->busw == 16)
+		NS_WARN("16-bit flash support hasn't been tested\n");
+
+ printk("flash size: %u MiB\n", ns->geom.totsz >> 20);
+ printk("page size: %u bytes\n", ns->geom.pgsz);
+ printk("OOB area size: %u bytes\n", ns->geom.oobsz);
+ printk("sector size: %u KiB\n", ns->geom.secsz >> 10);
+ printk("pages number: %u\n", ns->geom.pgnum);
+ printk("pages per sector: %u\n", ns->geom.pgsec);
+ printk("bus width: %u\n", ns->busw);
+ printk("bits in sector size: %u\n", ns->geom.secshift);
+ printk("bits in page size: %u\n", ns->geom.pgshift);
+ printk("bits in OOB size: %u\n", ns->geom.oobshift);
+ printk("flash size with OOB: %u KiB\n", ns->geom.totszoob >> 10);
+ printk("page address bytes: %u\n", ns->geom.pgaddrbytes);
+ printk("sector address bytes: %u\n", ns->geom.secaddrbytes);
+ printk("options: %#x\n", ns->options);
+
+ /* Map / allocate and initialize the flash image */
+#ifdef CONFIG_NS_ABS_POS
+ ns->mem.byte = ioremap(CONFIG_NS_ABS_POS, ns->geom.totszoob);
+ if (!ns->mem.byte) {
+ NS_ERR("init_nandsim: failed to map the NAND flash image at address %p\n",
+ (void *)CONFIG_NS_ABS_POS);
+ return -ENOMEM;
+ }
+#else
+ ns->mem.byte = vmalloc(ns->geom.totszoob);
+ if (!ns->mem.byte) {
+ NS_ERR("init_nandsim: unable to allocate %u bytes for flash image\n",
+ ns->geom.totszoob);
+ return -ENOMEM;
+ }
+ memset(ns->mem.byte, 0xFF, ns->geom.totszoob);
+#endif
+
+ /* Allocate / initialize the internal buffer */
+ ns->buf.byte = kmalloc(ns->geom.pgszoob, GFP_KERNEL);
+ if (!ns->buf.byte) {
+ NS_ERR("init_nandsim: unable to allocate %u bytes for the internal buffer\n",
+ ns->geom.pgszoob);
+ goto error;
+ }
+ memset(ns->buf.byte, 0xFF, ns->geom.pgszoob);
+
+ /* Fill the partition_info structure */
+ ns->part.name = "NAND simulator partition";
+ ns->part.offset = 0;
+ ns->part.size = ns->geom.totsz;
+
+ return 0;
+
+error:
+#ifdef CONFIG_NS_ABS_POS
+ iounmap(ns->mem.byte);
+#else
+ vfree(ns->mem.byte);
+#endif
+
+ return -ENOMEM;
+}
+
+/*
+ * Free the nandsim structure.
+ */
+static void
+free_nandsim(struct nandsim *ns)
+{
+ kfree(ns->buf.byte);
+
+#ifdef CONFIG_NS_ABS_POS
+ iounmap(ns->mem.byte);
+#else
+ vfree(ns->mem.byte);
+#endif
+
+ return;
+}
+
+/*
+ * Returns the string representation of 'state' state.
+ */
+static char *
+get_state_name(uint32_t state)
+{
+ switch (NS_STATE(state)) {
+ case STATE_CMD_READ0:
+ return "STATE_CMD_READ0";
+ case STATE_CMD_READ1:
+ return "STATE_CMD_READ1";
+ case STATE_CMD_PAGEPROG:
+ return "STATE_CMD_PAGEPROG";
+ case STATE_CMD_READOOB:
+ return "STATE_CMD_READOOB";
+ case STATE_CMD_READSTART:
+ return "STATE_CMD_READSTART";
+ case STATE_CMD_ERASE1:
+ return "STATE_CMD_ERASE1";
+ case STATE_CMD_STATUS:
+ return "STATE_CMD_STATUS";
+ case STATE_CMD_STATUS_M:
+ return "STATE_CMD_STATUS_M";
+ case STATE_CMD_SEQIN:
+ return "STATE_CMD_SEQIN";
+ case STATE_CMD_READID:
+ return "STATE_CMD_READID";
+ case STATE_CMD_ERASE2:
+ return "STATE_CMD_ERASE2";
+ case STATE_CMD_RESET:
+ return "STATE_CMD_RESET";
+ case STATE_ADDR_PAGE:
+ return "STATE_ADDR_PAGE";
+ case STATE_ADDR_SEC:
+ return "STATE_ADDR_SEC";
+ case STATE_ADDR_ZERO:
+ return "STATE_ADDR_ZERO";
+ case STATE_DATAIN:
+ return "STATE_DATAIN";
+ case STATE_DATAOUT:
+ return "STATE_DATAOUT";
+ case STATE_DATAOUT_ID:
+ return "STATE_DATAOUT_ID";
+ case STATE_DATAOUT_STATUS:
+ return "STATE_DATAOUT_STATUS";
+ case STATE_DATAOUT_STATUS_M:
+ return "STATE_DATAOUT_STATUS_M";
+ case STATE_READY:
+ return "STATE_READY";
+ case STATE_UNKNOWN:
+ return "STATE_UNKNOWN";
+ }
+
+ NS_ERR("get_state_name: unknown state, BUG\n");
+ return NULL;
+}
+
+/*
+ * Check if command is valid.
+ *
+ * RETURNS: 1 if wrong command, 0 if right.
+ */
+static int
+check_command(int cmd)
+{
+ switch (cmd) {
+
+ case NAND_CMD_READ0:
+ case NAND_CMD_READSTART:
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_READOOB:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_STATUS:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_READID:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_RESET:
+ case NAND_CMD_READ1:
+ return 0;
+
+ case NAND_CMD_STATUS_MULTI:
+ default:
+ return 1;
+ }
+}
+
+/*
+ * Return the state the simulator enters after the given command is accepted.
+ */
+static uint32_t
+get_state_by_command(unsigned command)
+{
+ switch (command) {
+ case NAND_CMD_READ0:
+ return STATE_CMD_READ0;
+ case NAND_CMD_READ1:
+ return STATE_CMD_READ1;
+ case NAND_CMD_PAGEPROG:
+ return STATE_CMD_PAGEPROG;
+ case NAND_CMD_READSTART:
+ return STATE_CMD_READSTART;
+ case NAND_CMD_READOOB:
+ return STATE_CMD_READOOB;
+ case NAND_CMD_ERASE1:
+ return STATE_CMD_ERASE1;
+ case NAND_CMD_STATUS:
+ return STATE_CMD_STATUS;
+ case NAND_CMD_STATUS_MULTI:
+ return STATE_CMD_STATUS_M;
+ case NAND_CMD_SEQIN:
+ return STATE_CMD_SEQIN;
+ case NAND_CMD_READID:
+ return STATE_CMD_READID;
+ case NAND_CMD_ERASE2:
+ return STATE_CMD_ERASE2;
+ case NAND_CMD_RESET:
+ return STATE_CMD_RESET;
+ }
+
+ NS_ERR("get_state_by_command: unknown command, BUG\n");
+ return 0;
+}
+
+/*
+ * Move an address byte to the corresponding internal register.
+ */
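+/*
+ * For example, on a small-page chip below 64 MiB (pgaddrbytes = 3,
+ * secaddrbytes = 2) the first address byte fills the column register and the
+ * remaining two bytes fill the row (page number) register, LSB first.
+ */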
+static inline void
+accept_addr_byte(struct nandsim *ns, u_char bt)
+{
+ uint byte = (uint)bt;
+
+ if (ns->regs.count < (ns->geom.pgaddrbytes - ns->geom.secaddrbytes))
+ ns->regs.column |= (byte << 8 * ns->regs.count);
+ else {
+ ns->regs.row |= (byte << 8 * (ns->regs.count -
+ ns->geom.pgaddrbytes +
+ ns->geom.secaddrbytes));
+ }
+
+ return;
+}
+
+/*
+ * Switch to STATE_READY state.
+ */
+static inline void
+switch_to_ready_state(struct nandsim *ns, u_char status)
+{
+ NS_DBG("switch_to_ready_state: switch to %s state\n", get_state_name(STATE_READY));
+
+ ns->state = STATE_READY;
+ ns->nxstate = STATE_UNKNOWN;
+ ns->op = NULL;
+ ns->npstates = 0;
+ ns->stateidx = 0;
+ ns->regs.num = 0;
+ ns->regs.count = 0;
+ ns->regs.off = 0;
+ ns->regs.row = 0;
+ ns->regs.column = 0;
+ ns->regs.status = status;
+}
+
+/*
+ * If the operation isn't known yet, try to find it in the global array
+ * of supported operations.
+ *
+ * The operation can be unknown for the following reasons:
+ * 1. A new command was accepted and this is the first call to find the
+ *    corresponding states chain. In this case ns->npstates = 0;
+ * 2. There are several operations which begin with the same command(s)
+ * (for example program from the second half and read from the
+ * second half operations both begin with the READ1 command). In this
+ * case the ns->pstates[] array contains previous states.
+ *
+ * Thus, the function tries to find an operation containing the following
+ * states (if the 'flag' parameter is 0):
+ * ns->pstates[0], ... ns->pstates[ns->npstates], ns->state
+ *
+ * If exactly one matching operation is found, it is accepted
+ * (ns->op, ns->state, ns->nxstate are initialized, ns->npstates is
+ * zeroed).
+ *
+ * If there are several matches, the current state is pushed to the
+ * ns->pstates.
+ *
+ * The operation can be unknown only while commands are input to the chip.
+ * As soon as an address is accepted, the operation must be known.
+ * In that situation the function is called with 'flag' != 0, and the
+ * operation is searched using the following pattern:
+ * ns->pstates[0], ... ns->pstates[ns->npstates], <address input>
+ *
+ * It is supposed that this pattern must either match one operation or
+ * none. There can't be ambiguity in that case.
+ *
+ * If no match is found, the function does the following:
+ * 1. if there are saved states present, try to ignore them and search
+ * again only using the last command. If nothing was found, switch
+ * to the STATE_READY state.
+ * 2. if there are no saved states, switch to the STATE_READY state.
+ *
+ * RETURNS: -2 - no matched operations found.
+ * -1 - several matches.
+ * 0 - operation is found.
+ */
+static int
+find_operation(struct nandsim *ns, uint32_t flag)
+{
+ int opsfound = 0;
+ int i, j, idx = 0;
+
+ for (i = 0; i < NS_OPER_NUM; i++) {
+
+ int found = 1;
+
+ if (!(ns->options & ops[i].reqopts))
+ /* Ignore operations we can't perform */
+ continue;
+
+ if (flag) {
+ if (!(ops[i].states[ns->npstates] & STATE_ADDR_MASK))
+ continue;
+ } else {
+ if (NS_STATE(ns->state) != NS_STATE(ops[i].states[ns->npstates]))
+ continue;
+ }
+
+ for (j = 0; j < ns->npstates; j++)
+ if (NS_STATE(ops[i].states[j]) != NS_STATE(ns->pstates[j])
+ && (ns->options & ops[idx].reqopts)) {
+ found = 0;
+ break;
+ }
+
+ if (found) {
+ idx = i;
+ opsfound += 1;
+ }
+ }
+
+ if (opsfound == 1) {
+ /* Exact match */
+ ns->op = &ops[idx].states[0];
+ if (flag) {
+ /*
+			 * In this case find_operation() was called when the
+			 * address input has just begun. The address isn't
+			 * fully input yet, so the current state must not be
+			 * one of STATE_ADDR_*; instead, the STATE_ADDR_*
+			 * state must be the next state (ns->nxstate).
+ */
+ ns->stateidx = ns->npstates - 1;
+ } else {
+ ns->stateidx = ns->npstates;
+ }
+ ns->npstates = 0;
+ ns->state = ns->op[ns->stateidx];
+ ns->nxstate = ns->op[ns->stateidx + 1];
+ NS_DBG("find_operation: operation found, index: %d, state: %s, nxstate %s\n",
+ idx, get_state_name(ns->state), get_state_name(ns->nxstate));
+ return 0;
+ }
+
+ if (opsfound == 0) {
+ /* Nothing was found. Try to ignore previous commands (if any) and search again */
+ if (ns->npstates != 0) {
+ NS_DBG("find_operation: no operation found, try again with state %s\n",
+ get_state_name(ns->state));
+ ns->npstates = 0;
+ return find_operation(ns, 0);
+
+ }
+ NS_DBG("find_operation: no operations found\n");
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return -2;
+ }
+
+ if (flag) {
+ /* This shouldn't happen */
+ NS_DBG("find_operation: BUG, operation must be known if address is input\n");
+ return -2;
+ }
+
+ NS_DBG("find_operation: there is still ambiguity\n");
+
+ ns->pstates[ns->npstates++] = ns->state;
+
+ return -1;
+}
+
+/*
+ * If state has any action bit, perform this action.
+ *
+ * RETURNS: 0 if success, -1 if error.
+ */
+static int
+do_state_action(struct nandsim *ns, uint32_t action)
+{
+ int i, num;
+ int busdiv = ns->busw == 8 ? 1 : 2;
+
+ action &= ACTION_MASK;
+
+ /* Check that page address input is correct */
+ if (action != ACTION_SECERASE && ns->regs.row >= ns->geom.pgnum) {
+ NS_WARN("do_state_action: wrong page number (%#x)\n", ns->regs.row);
+ return -1;
+ }
+
+ switch (action) {
+
+ case ACTION_CPY:
+ /*
+ * Copy page data to the internal buffer.
+ */
+
+		/* The column must lie within the page + OOB area */
+ if (ns->regs.column >= (ns->geom.pgszoob - ns->regs.off)) {
+ NS_ERR("do_state_action: column number is too large\n");
+ break;
+ }
+ num = ns->geom.pgszoob - ns->regs.off - ns->regs.column;
+ memcpy(ns->buf.byte, ns->mem.byte + NS_RAW_OFFSET(ns) + ns->regs.off, num);
+
+ NS_DBG("do_state_action: (ACTION_CPY:) copy %d bytes to int buf, raw offset %d\n",
+ num, NS_RAW_OFFSET(ns) + ns->regs.off);
+
+ if (ns->regs.off == 0)
+ NS_LOG("read page %d\n", ns->regs.row);
+ else if (ns->regs.off < ns->geom.pgsz)
+ NS_LOG("read page %d (second half)\n", ns->regs.row);
+ else
+ NS_LOG("read OOB of page %d\n", ns->regs.row);
+
+ NS_UDELAY(access_delay);
+ NS_UDELAY(input_cycle * ns->geom.pgsz / 1000 / busdiv);
+
+ break;
+
+ case ACTION_SECERASE:
+ /*
+ * Erase sector.
+ */
+
+ if (ns->lines.wp) {
+ NS_ERR("do_state_action: device is write-protected, ignore sector erase\n");
+ return -1;
+ }
+
+ if (ns->regs.row >= ns->geom.pgnum - ns->geom.pgsec
+ || (ns->regs.row & ~(ns->geom.secsz - 1))) {
+ NS_ERR("do_state_action: wrong sector address (%#x)\n", ns->regs.row);
+ return -1;
+ }
+
+ ns->regs.row = (ns->regs.row <<
+ 8 * (ns->geom.pgaddrbytes - ns->geom.secaddrbytes)) | ns->regs.column;
+ ns->regs.column = 0;
+
+ NS_DBG("do_state_action: erase sector at address %#x, off = %d\n",
+ ns->regs.row, NS_RAW_OFFSET(ns));
+ NS_LOG("erase sector %d\n", ns->regs.row >> (ns->geom.secshift - ns->geom.pgshift));
+
+ memset(ns->mem.byte + NS_RAW_OFFSET(ns), 0xFF, ns->geom.secszoob);
+
+ NS_MDELAY(erase_delay);
+
+ break;
+
+ case ACTION_PRGPAGE:
+ /*
+		 * Program page - move internal buffer data to the page.
+ */
+
+ if (ns->lines.wp) {
+			NS_WARN("do_state_action: device is write-protected, ignore page program\n");
+ return -1;
+ }
+
+ num = ns->geom.pgszoob - ns->regs.off - ns->regs.column;
+ if (num != ns->regs.count) {
+ NS_ERR("do_state_action: too few bytes were input (%d instead of %d)\n",
+ ns->regs.count, num);
+ return -1;
+ }
+
+ for (i = 0; i < num; i++)
+ ns->mem.byte[NS_RAW_OFFSET(ns) + ns->regs.off + i] &= ns->buf.byte[i];
+
+ NS_DBG("do_state_action: copy %d bytes from int buf to (%#x, %#x), raw off = %d\n",
+ num, ns->regs.row, ns->regs.column, NS_RAW_OFFSET(ns) + ns->regs.off);
+		NS_LOG("program page %d\n", ns->regs.row);
+
+ NS_UDELAY(programm_delay);
+ NS_UDELAY(output_cycle * ns->geom.pgsz / 1000 / busdiv);
+
+ break;
+
+ case ACTION_ZEROOFF:
+ NS_DBG("do_state_action: set internal offset to 0\n");
+ ns->regs.off = 0;
+ break;
+
+ case ACTION_HALFOFF:
+ if (!(ns->options & OPT_PAGE512_8BIT)) {
+			NS_ERR("do_state_action: BUG! can't skip half of page for non-512"
+				" byte page size 8x chips\n");
+ return -1;
+ }
+ NS_DBG("do_state_action: set internal offset to %d\n", ns->geom.pgsz/2);
+ ns->regs.off = ns->geom.pgsz/2;
+ break;
+
+ case ACTION_OOBOFF:
+ NS_DBG("do_state_action: set internal offset to %d\n", ns->geom.pgsz);
+ ns->regs.off = ns->geom.pgsz;
+ break;
+
+ default:
+ NS_DBG("do_state_action: BUG! unknown action\n");
+ }
+
+ return 0;
+}
+
+/*
+ * Switch simulator's state.
+ */
+static void
+switch_state(struct nandsim *ns)
+{
+ if (ns->op) {
+ /*
+		 * The current operation has already been identified.
+ * Just follow the states chain.
+ */
+
+ ns->stateidx += 1;
+ ns->state = ns->nxstate;
+ ns->nxstate = ns->op[ns->stateidx + 1];
+
+ NS_DBG("switch_state: operation is known, switch to the next state, "
+ "state: %s, nxstate: %s\n",
+ get_state_name(ns->state), get_state_name(ns->nxstate));
+
+		/* See whether we need to do some action */
+ if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) {
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ } else {
+ /*
+ * We don't yet know which operation we perform.
+ * Try to identify it.
+ */
+
+ /*
+		 * The only event causing the switch_state function to
+		 * be called with a yet unknown operation is a new command.
+ */
+ ns->state = get_state_by_command(ns->regs.command);
+
+ NS_DBG("switch_state: operation is unknown, try to find it\n");
+
+ if (find_operation(ns, 0) != 0)
+ return;
+
+ if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) {
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+ }
+
+ /* For 16x devices column means the page offset in words */
+ if ((ns->nxstate & STATE_ADDR_MASK) && ns->busw == 16) {
+ NS_DBG("switch_state: double the column number for 16x device\n");
+ ns->regs.column <<= 1;
+ }
+
+ if (NS_STATE(ns->nxstate) == STATE_READY) {
+ /*
+ * The current state is the last. Return to STATE_READY
+ */
+
+ u_char status = NS_STATUS_OK(ns);
+
+ /* In case of data states, see if all bytes were input/output */
+ if ((ns->state & (STATE_DATAIN_MASK | STATE_DATAOUT_MASK))
+ && ns->regs.count != ns->regs.num) {
+ NS_WARN("switch_state: not all bytes were processed, %d left\n",
+ ns->regs.num - ns->regs.count);
+ status = NS_STATUS_FAILED(ns);
+ }
+
+ NS_DBG("switch_state: operation complete, switch to STATE_READY state\n");
+
+ switch_to_ready_state(ns, status);
+
+ return;
+ } else if (ns->nxstate & (STATE_DATAIN_MASK | STATE_DATAOUT_MASK)) {
+ /*
+ * If the next state is data input/output, switch to it now
+ */
+
+ ns->state = ns->nxstate;
+ ns->nxstate = ns->op[++ns->stateidx + 1];
+ ns->regs.num = ns->regs.count = 0;
+
+ NS_DBG("switch_state: the next state is data I/O, switch, "
+ "state: %s, nxstate: %s\n",
+ get_state_name(ns->state), get_state_name(ns->nxstate));
+
+ /*
+ * Set the internal register to the count of bytes which
+ * are expected to be input or output
+ */
+ switch (NS_STATE(ns->state)) {
+ case STATE_DATAIN:
+ case STATE_DATAOUT:
+ ns->regs.num = ns->geom.pgszoob - ns->regs.off - ns->regs.column;
+ break;
+
+ case STATE_DATAOUT_ID:
+ ns->regs.num = ns->geom.idbytes;
+ break;
+
+ case STATE_DATAOUT_STATUS:
+ case STATE_DATAOUT_STATUS_M:
+ ns->regs.count = ns->regs.num = 0;
+ break;
+
+ default:
+ NS_ERR("switch_state: BUG! unknown data state\n");
+ }
+
+ } else if (ns->nxstate & STATE_ADDR_MASK) {
+ /*
+ * If the next state is address input, set the internal
+ * register to the number of expected address bytes
+ */
+
+ ns->regs.count = 0;
+
+ switch (NS_STATE(ns->nxstate)) {
+ case STATE_ADDR_PAGE:
+			ns->regs.num = ns->geom.pgaddrbytes;
+			break;
+ case STATE_ADDR_SEC:
+ ns->regs.num = ns->geom.secaddrbytes;
+ break;
+
+ case STATE_ADDR_ZERO:
+ ns->regs.num = 1;
+ break;
+
+ default:
+ NS_ERR("switch_state: BUG! unknown address state\n");
+ }
+ } else {
+ /*
+ * Just reset internal counters.
+ */
+
+ ns->regs.num = 0;
+ ns->regs.count = 0;
+ }
+}
+
+static void
+ns_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv;
+
+ switch (cmd) {
+
+ /* set CLE line high */
+ case NAND_CTL_SETCLE:
+ NS_DBG("ns_hwcontrol: start command latch cycles\n");
+ ns->lines.cle = 1;
+ break;
+
+ /* set CLE line low */
+ case NAND_CTL_CLRCLE:
+ NS_DBG("ns_hwcontrol: stop command latch cycles\n");
+ ns->lines.cle = 0;
+ break;
+
+ /* set ALE line high */
+ case NAND_CTL_SETALE:
+ NS_DBG("ns_hwcontrol: start address latch cycles\n");
+ ns->lines.ale = 1;
+ break;
+
+ /* set ALE line low */
+ case NAND_CTL_CLRALE:
+ NS_DBG("ns_hwcontrol: stop address latch cycles\n");
+ ns->lines.ale = 0;
+ break;
+
+ /* set WP line high */
+ case NAND_CTL_SETWP:
+ NS_DBG("ns_hwcontrol: enable write protection\n");
+ ns->lines.wp = 1;
+ break;
+
+ /* set WP line low */
+ case NAND_CTL_CLRWP:
+ NS_DBG("ns_hwcontrol: disable write protection\n");
+ ns->lines.wp = 0;
+ break;
+
+ /* set CE line low */
+ case NAND_CTL_SETNCE:
+ NS_DBG("ns_hwcontrol: enable chip\n");
+ ns->lines.ce = 1;
+ break;
+
+ /* set CE line high */
+ case NAND_CTL_CLRNCE:
+ NS_DBG("ns_hwcontrol: disable chip\n");
+ ns->lines.ce = 0;
+ break;
+
+ default:
+ NS_ERR("hwcontrol: unknown command\n");
+ }
+
+ return;
+}
+
+static u_char
+ns_nand_read_byte(struct mtd_info *mtd)
+{
+ struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv;
+ u_char outb = 0x00;
+
+ /* Sanity and correctness checks */
+ if (!ns->lines.ce) {
+ NS_ERR("read_byte: chip is disabled, return %#x\n", (uint)outb);
+ return outb;
+ }
+ if (ns->lines.ale || ns->lines.cle) {
+ NS_ERR("read_byte: ALE or CLE pin is high, return %#x\n", (uint)outb);
+ return outb;
+ }
+ if (!(ns->state & STATE_DATAOUT_MASK)) {
+ NS_WARN("read_byte: unexpected data output cycle, state is %s "
+ "return %#x\n", get_state_name(ns->state), (uint)outb);
+ return outb;
+ }
+
+ /* Status register may be read as many times as it is wanted */
+ if (NS_STATE(ns->state) == STATE_DATAOUT_STATUS) {
+ NS_DBG("read_byte: return %#x status\n", ns->regs.status);
+ return ns->regs.status;
+ }
+
+ /* Check if there is any data in the internal buffer which may be read */
+ if (ns->regs.count == ns->regs.num) {
+ NS_WARN("read_byte: no more data to output, return %#x\n", (uint)outb);
+ return outb;
+ }
+
+ switch (NS_STATE(ns->state)) {
+ case STATE_DATAOUT:
+ if (ns->busw == 8) {
+ outb = ns->buf.byte[ns->regs.count];
+ ns->regs.count += 1;
+ } else {
+ outb = (u_char)cpu_to_le16(ns->buf.word[ns->regs.count >> 1]);
+ ns->regs.count += 2;
+ }
+ break;
+ case STATE_DATAOUT_ID:
+ NS_DBG("read_byte: read ID byte %d, total = %d\n", ns->regs.count, ns->regs.num);
+ outb = ns->ids[ns->regs.count];
+ ns->regs.count += 1;
+ break;
+ default:
+ BUG();
+ }
+
+ if (ns->regs.count == ns->regs.num) {
+ NS_DBG("read_byte: all bytes were read\n");
+
+ /*
+		 * OPT_AUTOINCR allows reading the next consecutive pages without
+		 * a new read operation cycle.
+ */
+ if ((ns->options & OPT_AUTOINCR) && NS_STATE(ns->state) == STATE_DATAOUT) {
+ ns->regs.count = 0;
+ if (ns->regs.row + 1 < ns->geom.pgnum)
+ ns->regs.row += 1;
+ NS_DBG("read_byte: switch to the next page (%#x)\n", ns->regs.row);
+ do_state_action(ns, ACTION_CPY);
+ }
+ else if (NS_STATE(ns->nxstate) == STATE_READY)
+ switch_state(ns);
+
+ }
+
+ return outb;
+}
+
+static void
+ns_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv;
+
+ /* Sanity and correctness checks */
+ if (!ns->lines.ce) {
+ NS_ERR("write_byte: chip is disabled, ignore write\n");
+ return;
+ }
+ if (ns->lines.ale && ns->lines.cle) {
+ NS_ERR("write_byte: ALE and CLE pins are high simultaneously, ignore write\n");
+ return;
+ }
+
+ if (ns->lines.cle == 1) {
+ /*
+ * The byte written is a command.
+ */
+
+ if (byte == NAND_CMD_RESET) {
+ NS_LOG("reset chip\n");
+ switch_to_ready_state(ns, NS_STATUS_OK(ns));
+ return;
+ }
+
+ /*
+ * Chip might still be in STATE_DATAOUT
+ * (if OPT_AUTOINCR feature is supported), STATE_DATAOUT_STATUS or
+ * STATE_DATAOUT_STATUS_M state. If so, switch state.
+ */
+ if (NS_STATE(ns->state) == STATE_DATAOUT_STATUS
+ || NS_STATE(ns->state) == STATE_DATAOUT_STATUS_M
+ || ((ns->options & OPT_AUTOINCR) && NS_STATE(ns->state) == STATE_DATAOUT))
+ switch_state(ns);
+
+ /* Check if chip is expecting command */
+ if (NS_STATE(ns->nxstate) != STATE_UNKNOWN && !(ns->nxstate & STATE_CMD_MASK)) {
+ /*
+			 * We are in a situation where something other than a command
+			 * was expected but a command was input. In this case ignore
+ * previous command(s)/state(s) and accept the last one.
+ */
+ NS_WARN("write_byte: command (%#x) wasn't expected, expected state is %s, "
+ "ignore previous states\n", (uint)byte, get_state_name(ns->nxstate));
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ }
+
+ /* Check that the command byte is correct */
+ if (check_command(byte)) {
+ NS_ERR("write_byte: unknown command %#x\n", (uint)byte);
+ return;
+ }
+
+ NS_DBG("command byte corresponding to %s state accepted\n",
+ get_state_name(get_state_by_command(byte)));
+ ns->regs.command = byte;
+ switch_state(ns);
+
+ } else if (ns->lines.ale == 1) {
+ /*
+ * The byte written is an address.
+ */
+
+ if (NS_STATE(ns->nxstate) == STATE_UNKNOWN) {
+
+ NS_DBG("write_byte: operation isn't known yet, identify it\n");
+
+ if (find_operation(ns, 1) < 0)
+ return;
+
+ if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) {
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ ns->regs.count = 0;
+ switch (NS_STATE(ns->nxstate)) {
+ case STATE_ADDR_PAGE:
+ ns->regs.num = ns->geom.pgaddrbytes;
+ break;
+ case STATE_ADDR_SEC:
+ ns->regs.num = ns->geom.secaddrbytes;
+ break;
+ case STATE_ADDR_ZERO:
+ ns->regs.num = 1;
+ break;
+ default:
+ BUG();
+ }
+ }
+
+ /* Check that chip is expecting address */
+ if (!(ns->nxstate & STATE_ADDR_MASK)) {
+ NS_ERR("write_byte: address (%#x) isn't expected, expected state is %s, "
+ "switch to STATE_READY\n", (uint)byte, get_state_name(ns->nxstate));
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ /* Check if this is expected byte */
+ if (ns->regs.count == ns->regs.num) {
+ NS_ERR("write_byte: no more address bytes expected\n");
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ accept_addr_byte(ns, byte);
+
+ ns->regs.count += 1;
+
+ NS_DBG("write_byte: address byte %#x was accepted (%d bytes input, %d expected)\n",
+ (uint)byte, ns->regs.count, ns->regs.num);
+
+ if (ns->regs.count == ns->regs.num) {
+ NS_DBG("address (%#x, %#x) is accepted\n", ns->regs.row, ns->regs.column);
+ switch_state(ns);
+ }
+
+ } else {
+ /*
+		 * The byte written is input data.
+ */
+
+ /* Check that chip is expecting data input */
+ if (!(ns->state & STATE_DATAIN_MASK)) {
+ NS_ERR("write_byte: data input (%#x) isn't expected, state is %s, "
+ "switch to %s\n", (uint)byte,
+ get_state_name(ns->state), get_state_name(STATE_READY));
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ /* Check if this is expected byte */
+ if (ns->regs.count == ns->regs.num) {
+			NS_WARN("write_byte: %u input bytes have already been accepted, ignore write\n",
+ ns->regs.num);
+ return;
+ }
+
+ if (ns->busw == 8) {
+ ns->buf.byte[ns->regs.count] = byte;
+ ns->regs.count += 1;
+ } else {
+ ns->buf.word[ns->regs.count >> 1] = cpu_to_le16((uint16_t)byte);
+ ns->regs.count += 2;
+ }
+ }
+
+ return;
+}
+
+static int
+ns_device_ready(struct mtd_info *mtd)
+{
+ NS_DBG("device_ready\n");
+ return 1;
+}
+
+static uint16_t
+ns_nand_read_word(struct mtd_info *mtd)
+{
+ struct nand_chip *chip = (struct nand_chip *)mtd->priv;
+
+ NS_DBG("read_word\n");
+
+ return chip->read_byte(mtd) | (chip->read_byte(mtd) << 8);
+}
+
+static void
+ns_nand_write_word(struct mtd_info *mtd, uint16_t word)
+{
+ struct nand_chip *chip = (struct nand_chip *)mtd->priv;
+
+ NS_DBG("write_word\n");
+
+ chip->write_byte(mtd, word & 0xFF);
+ chip->write_byte(mtd, word >> 8);
+}
+
+static void
+ns_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv;
+
+ /* Check that chip is expecting data input */
+ if (!(ns->state & STATE_DATAIN_MASK)) {
+ NS_ERR("write_buf: data input isn't expected, state is %s, "
+ "switch to STATE_READY\n", get_state_name(ns->state));
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ /* Check if these are expected bytes */
+ if (ns->regs.count + len > ns->regs.num) {
+ NS_ERR("write_buf: too many input bytes\n");
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ memcpy(ns->buf.byte + ns->regs.count, buf, len);
+ ns->regs.count += len;
+
+ if (ns->regs.count == ns->regs.num) {
+ NS_DBG("write_buf: %d bytes were written\n", ns->regs.count);
+ }
+}
+
+static void
+ns_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ struct nandsim *ns = (struct nandsim *)((struct nand_chip *)mtd->priv)->priv;
+
+ /* Sanity and correctness checks */
+ if (!ns->lines.ce) {
+ NS_ERR("read_buf: chip is disabled\n");
+ return;
+ }
+ if (ns->lines.ale || ns->lines.cle) {
+ NS_ERR("read_buf: ALE or CLE pin is high\n");
+ return;
+ }
+ if (!(ns->state & STATE_DATAOUT_MASK)) {
+ NS_WARN("read_buf: unexpected data output cycle, current state is %s\n",
+ get_state_name(ns->state));
+ return;
+ }
+
+ if (NS_STATE(ns->state) != STATE_DATAOUT) {
+ int i;
+
+ for (i = 0; i < len; i++)
+ buf[i] = ((struct nand_chip *)mtd->priv)->read_byte(mtd);
+
+ return;
+ }
+
+ /* Check if these are expected bytes */
+ if (ns->regs.count + len > ns->regs.num) {
+ NS_ERR("read_buf: too many bytes to read\n");
+ switch_to_ready_state(ns, NS_STATUS_FAILED(ns));
+ return;
+ }
+
+ memcpy(buf, ns->buf.byte + ns->regs.count, len);
+ ns->regs.count += len;
+
+ if (ns->regs.count == ns->regs.num) {
+ if ((ns->options & OPT_AUTOINCR) && NS_STATE(ns->state) == STATE_DATAOUT) {
+ ns->regs.count = 0;
+ if (ns->regs.row + 1 < ns->geom.pgnum)
+ ns->regs.row += 1;
+ NS_DBG("read_buf: switch to the next page (%#x)\n", ns->regs.row);
+ do_state_action(ns, ACTION_CPY);
+ }
+ else if (NS_STATE(ns->nxstate) == STATE_READY)
+ switch_state(ns);
+ }
+
+ return;
+}
+
+static int
+ns_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ ns_nand_read_buf(mtd, (u_char *)&ns_verify_buf[0], len);
+
+ if (!memcmp(buf, &ns_verify_buf[0], len)) {
+ NS_DBG("verify_buf: the buffer is OK\n");
+ return 0;
+ } else {
+ NS_DBG("verify_buf: the buffer is wrong\n");
+ return -EFAULT;
+ }
+}
+
+/*
+ * Having only NAND chip IDs we call nand_scan which detects NAND flash
+ * parameters and then calls scan_bbt in order to scan/find/build the
+ * NAND flash bad block table. But since at that moment the NAND flash
+ * image isn't allocated in the simulator, errors arise. To avoid this
+ * we redefine the scan_bbt callback and initialize the nandsim structure
+ * before the flash media scanning.
+ */
+int ns_scan_bbt(struct mtd_info *mtd)
+{
+ struct nand_chip *chip = (struct nand_chip *)mtd->priv;
+ struct nandsim *ns = (struct nandsim *)(chip->priv);
+ int retval;
+
+ if (!NS_IS_INITIALIZED(ns))
+ if ((retval = init_nandsim(mtd)) != 0) {
+ NS_ERR("scan_bbt: can't initialize the nandsim structure\n");
+ return retval;
+ }
+ if ((retval = nand_default_bbt(mtd)) != 0) {
+ free_nandsim(ns);
+ return retval;
+ }
+
+ return 0;
+}
+
+/*
+ * Module initialization function
+ */
+int __init ns_init_module(void)
+{
+ struct nand_chip *chip;
+ struct nandsim *nand;
+ int retval = -ENOMEM;
+
+ if (bus_width != 8 && bus_width != 16) {
+ NS_ERR("wrong bus width (%d), use only 8 or 16\n", bus_width);
+ return -EINVAL;
+ }
+
+ /* Allocate and initialize mtd_info, nand_chip and nandsim structures */
+ nsmtd = kmalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip)
+ + sizeof(struct nandsim), GFP_KERNEL);
+ if (!nsmtd) {
+ NS_ERR("unable to allocate core structures.\n");
+ return -ENOMEM;
+ }
+ memset(nsmtd, 0, sizeof(struct mtd_info) + sizeof(struct nand_chip) +
+ sizeof(struct nandsim));
+ chip = (struct nand_chip *)(nsmtd + 1);
+ nsmtd->priv = (void *)chip;
+ nand = (struct nandsim *)(chip + 1);
+ chip->priv = (void *)nand;
+
+ /*
+ * Register simulator's callbacks.
+ */
+ chip->hwcontrol = ns_hwcontrol;
+ chip->read_byte = ns_nand_read_byte;
+ chip->dev_ready = ns_device_ready;
+ chip->scan_bbt = ns_scan_bbt;
+ chip->write_byte = ns_nand_write_byte;
+ chip->write_buf = ns_nand_write_buf;
+ chip->read_buf = ns_nand_read_buf;
+ chip->verify_buf = ns_nand_verify_buf;
+ chip->write_word = ns_nand_write_word;
+ chip->read_word = ns_nand_read_word;
+ chip->eccmode = NAND_ECC_SOFT;
+
+ /*
+ * Perform minimum nandsim structure initialization to handle
+ * the initial ID read command correctly
+ */
+ if (third_id_byte != 0xFF || fourth_id_byte != 0xFF)
+ nand->geom.idbytes = 4;
+ else
+ nand->geom.idbytes = 2;
+ nand->regs.status = NS_STATUS_OK(nand);
+ nand->nxstate = STATE_UNKNOWN;
+ nand->options |= OPT_PAGE256; /* temporary value */
+ nand->ids[0] = first_id_byte;
+ nand->ids[1] = second_id_byte;
+ nand->ids[2] = third_id_byte;
+ nand->ids[3] = fourth_id_byte;
+ if (bus_width == 16) {
+ nand->busw = 16;
+ chip->options |= NAND_BUSWIDTH_16;
+ }
+
+ if ((retval = nand_scan(nsmtd, 1)) != 0) {
+ NS_ERR("can't register NAND Simulator\n");
+ if (retval > 0)
+ retval = -ENXIO;
+ goto error;
+ }
+
+ /* Register NAND as one big partition */
+ add_mtd_partitions(nsmtd, &nand->part, 1);
+
+ return 0;
+
+error:
+ kfree(nsmtd);
+
+ return retval;
+}
+
+module_init(ns_init_module);
+
+/*
+ * Module clean-up function
+ */
+static void __exit ns_cleanup_module(void)
+{
+ struct nandsim *ns = (struct nandsim *)(((struct nand_chip *)nsmtd->priv)->priv);
+
+ free_nandsim(ns); /* Free nandsim private resources */
+	nand_release(nsmtd);	/* Unregister the device */
+ kfree(nsmtd); /* Free other structures */
+}
+
+module_exit(ns_cleanup_module);
+
+MODULE_LICENSE ("GPL");
+MODULE_AUTHOR ("Artem B. Bityuckiy");
+MODULE_DESCRIPTION ("The NAND flash simulator");
+
* 28-Sep-2004 BJD Fixed ECC placement for Hardware mode
* 12-Oct-2004 BJD Fixed errors in use of platform data
*
- * $Id: s3c2410.c,v 1.5 2004/10/12 10:10:15 bjd Exp $
+ * $Id: s3c2410.c,v 1.7 2005/01/05 18:05:14 dwmw2 Exp $
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
static struct s3c2410_nand_info *to_nand_info(struct device *dev)
{
- return (struct s3c2410_nand_info *)dev_get_drvdata(dev);
+ return dev_get_drvdata(dev);
}
static struct s3c2410_platform_nand *to_nand_plat(struct device *dev)
{
- return (struct s3c2410_platform_nand *)dev->platform_data;
+ return dev->platform_data;
}
/* timing calculations */
if (plat != NULL) {
tacls = s3c2410_nand_calc_rate(plat->tacls, clkrate, 8);
twrph0 = s3c2410_nand_calc_rate(plat->twrph0, clkrate, 8);
- twrph1 = s3c2410_nand_calc_rate(plat->twrph0, clkrate, 8);
+ twrph1 = s3c2410_nand_calc_rate(plat->twrph1, clkrate, 8);
} else {
/* default timings */
tacls = 8;
struct nand_chip *this = mtd->priv;
unsigned long cur;
- nmtd = (struct s3c2410_nand_mtd *)this->priv;
+ nmtd = this->priv;
info = nmtd->info;
cur = readl(info->regs + S3C2410_NFCONF);
static void s3c2410_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
{
- struct nand_chip *this = (struct nand_chip *)mtd->priv;
+ struct nand_chip *this = mtd->priv;
readsb(this->IO_ADDR_R, buf, len);
}
static void s3c2410_nand_write_buf(struct mtd_info *mtd,
const u_char *buf, int len)
{
- struct nand_chip *this = (struct nand_chip *)mtd->priv;
+ struct nand_chip *this = mtd->priv;
writesb(this->IO_ADDR_W, buf, len);
}
--- /dev/null
+/*
+ * drivers/mtd/nand/sharpsl.c
+ *
+ * Copyright (C) 2004 Richard Purdie
+ *
+ * $Id: sharpsl.c,v 1.3 2005/01/03 14:53:50 rpurdie Exp $
+ *
+ * Based on Sharp's NAND driver sharp_sl.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/genhd.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/partitions.h>
+#include <linux/interrupt.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+
+static void __iomem *sharpsl_io_base;
+static int sharpsl_phys_base = 0x0C000000;
+
+/* register offset */
+#define ECCLPLB sharpsl_io_base+0x00 /* line parity 7 - 0 bit */
+#define ECCLPUB sharpsl_io_base+0x04 /* line parity 15 - 8 bit */
+#define ECCCP sharpsl_io_base+0x08 /* column parity 5 - 0 bit */
+#define ECCCNTR sharpsl_io_base+0x0C /* ECC byte counter */
+#define ECCCLRR		sharpsl_io_base+0x10 /* clear ECC */
+#define FLASHIO sharpsl_io_base+0x14 /* Flash I/O */
+#define FLASHCTL sharpsl_io_base+0x18 /* Flash Control */
+
+/* Flash control bit */
+#define FLRYBY (1 << 5)
+#define FLCE1 (1 << 4)
+#define FLWP (1 << 3)
+#define FLALE (1 << 2)
+#define FLCLE (1 << 1)
+#define FLCE0 (1 << 0)
+
+
+/*
+ * MTD structure for SharpSL
+ */
+static struct mtd_info *sharpsl_mtd = NULL;
+
+/*
+ * Define partitions for flash device
+ */
+#define DEFAULT_NUM_PARTITIONS 3
+
+static int nr_partitions;
+static struct mtd_partition sharpsl_nand_default_partition_info[] = {
+ {
+ .name = "System Area",
+ .offset = 0,
+ .size = 7 * 1024 * 1024,
+ },
+ {
+ .name = "Root Filesystem",
+ .offset = 7 * 1024 * 1024,
+ .size = 30 * 1024 * 1024,
+ },
+ {
+ .name = "Home Filesystem",
+ .offset = MTDPART_OFS_APPEND ,
+ .size = MTDPART_SIZ_FULL ,
+ },
+};
+
+/*
+ * hardware specific access to control-lines
+ */
+static void
+sharpsl_nand_hwcontrol(struct mtd_info* mtd, int cmd)
+{
+ switch (cmd) {
+ case NAND_CTL_SETCLE:
+ writeb(readb(FLASHCTL) | FLCLE, FLASHCTL);
+ break;
+ case NAND_CTL_CLRCLE:
+ writeb(readb(FLASHCTL) & ~FLCLE, FLASHCTL);
+ break;
+
+ case NAND_CTL_SETALE:
+ writeb(readb(FLASHCTL) | FLALE, FLASHCTL);
+ break;
+ case NAND_CTL_CLRALE:
+ writeb(readb(FLASHCTL) & ~FLALE, FLASHCTL);
+ break;
+
+ case NAND_CTL_SETNCE:
+ writeb(readb(FLASHCTL) & ~(FLCE0|FLCE1), FLASHCTL);
+ break;
+ case NAND_CTL_CLRNCE:
+ writeb(readb(FLASHCTL) | (FLCE0|FLCE1), FLASHCTL);
+ break;
+ }
+}
+
+static uint8_t scan_ff_pattern[] = { 0xff, 0xff };
+
+static struct nand_bbt_descr sharpsl_bbt = {
+ .options = 0,
+ .offs = 4,
+ .len = 2,
+ .pattern = scan_ff_pattern
+};
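+
+/*
+ * Roughly speaking, this descriptor makes the bad block scan treat a block as
+ * bad unless OOB bytes 4 and 5 of its first page both read back as 0xff.
+ */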
+
+static int
+sharpsl_nand_dev_ready(struct mtd_info* mtd)
+{
+ return !((readb(FLASHCTL) & FLRYBY) == 0);
+}
+
+static void
+sharpsl_nand_enable_hwecc(struct mtd_info* mtd, int mode)
+{
+	writeb(0, ECCCLRR);
+}
+
+static int
+sharpsl_nand_calculate_ecc(struct mtd_info* mtd, const u_char* dat,
+ u_char* ecc_code)
+{
+ ecc_code[0] = ~readb(ECCLPUB);
+ ecc_code[1] = ~readb(ECCLPLB);
+ ecc_code[2] = (~readb(ECCCP) << 2) | 0x03;
+ return readb(ECCCNTR) != 0;
+}
+
+
+#ifdef CONFIG_MTD_PARTITIONS
+const char *part_probes[] = { "cmdlinepart", NULL };
+#endif
+
+
+/*
+ * Main initialization routine
+ */
+int __init
+sharpsl_nand_init(void)
+{
+ struct nand_chip *this;
+ struct mtd_partition* sharpsl_partition_info;
+ int err = 0;
+
+ /* Allocate memory for MTD device structure and private data */
+ sharpsl_mtd = kmalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip),
+ GFP_KERNEL);
+ if (!sharpsl_mtd) {
+ printk ("Unable to allocate SharpSL NAND MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+	/* map physical address */
+	sharpsl_io_base = ioremap(sharpsl_phys_base, 0x1000);
+	if (!sharpsl_io_base) {
+ printk("ioremap to access Sharp SL NAND chip failed\n");
+ kfree(sharpsl_mtd);
+ return -EIO;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&sharpsl_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) sharpsl_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ sharpsl_mtd->priv = this;
+
+ /*
+ * PXA initialize
+ */
+ writeb(readb(FLASHCTL) | FLWP, FLASHCTL);
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = FLASHIO;
+ this->IO_ADDR_W = FLASHIO;
+ /* Set address of hardware control function */
+ this->hwcontrol = sharpsl_nand_hwcontrol;
+ this->dev_ready = sharpsl_nand_dev_ready;
+ /* 15 us command delay time */
+ this->chip_delay = 15;
+ /* set eccmode using hardware ECC */
+ this->eccmode = NAND_ECC_HW3_256;
+ this->enable_hwecc = sharpsl_nand_enable_hwecc;
+ this->calculate_ecc = sharpsl_nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ this->badblock_pattern = &sharpsl_bbt;
+
+ /* Scan to find existence of the device */
+	err = nand_scan(sharpsl_mtd, 1);
+ if (err) {
+ iounmap(sharpsl_io_base);
+ kfree(sharpsl_mtd);
+ return err;
+ }
+
+ /* Register the partitions */
+ sharpsl_mtd->name = "sharpsl-nand";
+ nr_partitions = parse_mtd_partitions(sharpsl_mtd, part_probes,
+ &sharpsl_partition_info, 0);
+
+ if (nr_partitions <= 0) {
+ nr_partitions = DEFAULT_NUM_PARTITIONS;
+ sharpsl_partition_info = sharpsl_nand_default_partition_info;
+		if (machine_is_poodle()) {
+			sharpsl_partition_info[1].size = 22 * 1024 * 1024;
+		} else if (machine_is_corgi() || machine_is_shepherd()) {
+			sharpsl_partition_info[1].size = 25 * 1024 * 1024;
+		} else if (machine_is_husky()) {
+			sharpsl_partition_info[1].size = 53 * 1024 * 1024;
+		}
+ }
+
+ if (machine_is_husky()) {
+ /* Need to use small eraseblock size for backward compatibility */
+ sharpsl_mtd->flags |= MTD_NO_VIRTBLOCKS;
+ }
+
+ add_mtd_partitions(sharpsl_mtd, sharpsl_partition_info, nr_partitions);
+
+ /* Return happy */
+ return 0;
+}
+module_init(sharpsl_nand_init);
+
+/*
+ * Clean up routine
+ */
+#ifdef MODULE
+static void __exit sharpsl_nand_cleanup(void)
+{
+ struct nand_chip *this = (struct nand_chip *) &sharpsl_mtd[1];
+
+ /* Release resources, unregister device */
+ nand_release(sharpsl_mtd);
+
+ iounmap(sharpsl_io_base);
+
+ /* Free the MTD device structure */
+ kfree(sharpsl_mtd);
+}
+module_exit(sharpsl_nand_cleanup);
+#endif
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Richard Purdie <rpurdie@rpsys.net>");
+MODULE_DESCRIPTION("Device specific logic for NAND flash on Sharp SL-C7xx Series");
/* "Knobs" that adjust features and parameters. */
/* Set the copy breakpoint for the copy-only-tiny-frames scheme.
Setting to > 1512 effectively disables this feature. */
-static const int rx_copybreak = 200;
+static int rx_copybreak = 200;
/* Allow setting MTU to a larger size, bypassing the normal ethernet setup. */
static const int mtu = 1500;
MODULE_DESCRIPTION("3Com 3c515 Corkscrew driver");
MODULE_LICENSE("GPL");
-MODULE_PARM(debug, "i");
-MODULE_PARM(options, "1-" __MODULE_STRING(MAX_UNITS) "i");
-MODULE_PARM(rx_copybreak, "i");
-MODULE_PARM(max_interrupt_work, "i");
-MODULE_PARM_DESC(debug, "3c515 debug level (0-6)");
-MODULE_PARM_DESC(options, "3c515: Bits 0-2: media type, bit 3: full duplex, bit 4: bus mastering");
-MODULE_PARM_DESC(rx_copybreak, "3c515 copy breakpoint for copy-only-tiny-frames");
-MODULE_PARM_DESC(max_interrupt_work, "3c515 maximum events handled per interrupt");
-
/* "Knobs" for adjusting internal parameters. */
/* Put out somewhat more debugging messages. (0 - no msg, 1 minimal msgs). */
#define DRIVER_DEBUG 1
#ifdef MODULE
static int debug = -1;
+
+module_param(debug, int, 0);
+module_param_array(options, int, NULL, 0);
+module_param(rx_copybreak, int, 0);
+module_param(max_interrupt_work, int, 0);
+MODULE_PARM_DESC(debug, "3c515 debug level (0-6)");
+MODULE_PARM_DESC(options, "3c515: Bits 0-2: media type, bit 3: full duplex, bit 4: bus mastering");
+MODULE_PARM_DESC(rx_copybreak, "3c515 copy breakpoint for copy-only-tiny-frames");
+MODULE_PARM_DESC(max_interrupt_work, "3c515 maximum events handled per interrupt");
+
/* A list of all installed Vortex devices, for removing the driver module. */
/* we will need locking (and refcounting) if we ever use it for more */
static LIST_HEAD(root_corkscrew_dev);
static struct pci_device_id cp_pci_tbl[] = {
{ PCI_VENDOR_ID_REALTEK, PCI_DEVICE_ID_REALTEK_8139,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+ { PCI_VENDOR_ID_TTTECH, PCI_DEVICE_ID_TTTECH_MC322,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
{ },
};
MODULE_DEVICE_TABLE(pci, cp_pci_tbl);
static void cp_set_d3_state (struct cp_private *cp)
{
pci_enable_wake (cp->pdev, 0, 1); /* Enable PME# generation */
- pci_set_power_state (cp->pdev, 3);
+ pci_set_power_state (cp->pdev, PCI_D3hot);
}
static int cp_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
BUG();
unregister_netdev(dev);
iounmap(cp->regs);
- if (cp->wol_enabled) pci_set_power_state (pdev, 0);
+ if (cp->wol_enabled) pci_set_power_state (pdev, PCI_D0);
pci_release_regions(pdev);
pci_clear_mwi(pdev);
pci_disable_device(pdev);
netif_device_attach (dev);
if (cp->pdev && cp->wol_enabled) {
- pci_set_power_state (cp->pdev, 0);
+ pci_set_power_state (cp->pdev, PCI_D0);
pci_restore_state (cp->pdev);
}
Paul Gortmaker : Separate out Tx timeout code from Tx path.
Paul Gortmaker : Remove old unused single Tx buffer code.
Hayato Fujiwara : Add m32r support.
+ Paul Gortmaker : use skb_padto() instead of stack scratch area
Sources:
The National Semiconductor LAN Databook, and the 3Com 3c503 databook.
{
long e8390_base = dev->base_addr;
struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
- int length, send_length, output_page;
+ int send_length = skb->len, output_page;
unsigned long flags;
- char scratch[ETH_ZLEN];
- length = skb->len;
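+	/* Pad undersized frames to the minimum Ethernet frame length before transmission */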
+ if (skb->len < ETH_ZLEN) {
+ skb = skb_padto(skb, ETH_ZLEN);
+ if (skb == NULL)
+ return 0;
+ send_length = ETH_ZLEN;
+ }
/* Mask interrupts from the ethercard.
SMP: We have to grab the lock here otherwise the IRQ handler
ei_local->irqlock = 1;
- send_length = ETH_ZLEN < length ? length : ETH_ZLEN;
-
/*
* We have two Tx slots available for use. Find the first free
* slot, and then perform some sanity checks. With two Tx bufs,
* trigger the send later, upon receiving a Tx done interrupt.
*/
- if (length == send_length)
- ei_block_output(dev, length, skb->data, output_page);
- else {
- memset(scratch, 0, ETH_ZLEN);
- memcpy(scratch, skb->data, skb->len);
- ei_block_output(dev, ETH_ZLEN, scratch, output_page);
- }
+ ei_block_output(dev, send_length, skb->data, output_page);
if (! ei_local->txing)
{
#ifdef CONFIG_NET_POLL_CONTROLLER
EXPORT_SYMBOL(ei_poll);
#endif
-EXPORT_SYMBOL(ei_tx_timeout);
EXPORT_SYMBOL(NS8390_init);
EXPORT_SYMBOL(__alloc_ei_netdev);
MODULE_AUTHOR("Advanced Micro Devices, Inc.");
MODULE_DESCRIPTION ("AMD8111 based 10/100 Ethernet Controller. Driver Version 3.0.3");
MODULE_LICENSE("GPL");
-MODULE_PARM(speed_duplex, "1-" __MODULE_STRING (MAX_UNITS) "i");
+module_param_array(speed_duplex, int, NULL, 0);
MODULE_PARM_DESC(speed_duplex, "Set device speed and duplex modes, 0: Auto Negotiate, 1: 10Mbps Half Duplex, 2: 10Mbps Full Duplex, 3: 100Mbps Half Duplex, 4: 100Mbps Full Duplex");
-MODULE_PARM(coalesce, "1-" __MODULE_STRING(MAX_UNITS) "i");
+module_param_array(coalesce, bool, NULL, 0);
MODULE_PARM_DESC(coalesce, "Enable or Disable interrupt coalescing, 1: Enable, 0: Disable");
-MODULE_PARM(dynamic_ipg, "1-" __MODULE_STRING(MAX_UNITS) "i");
+module_param_array(dynamic_ipg, bool, NULL, 0);
MODULE_PARM_DESC(dynamic_ipg, "Enable or Disable dynamic IPG, 1: Enable, 0: Disable");
static struct pci_device_id amd8111e_pci_tbl[] = {
if(amd8111e_restart(dev)){
spin_unlock_irq(&lp->lock);
+ if (dev->irq)
+ free_irq(dev->irq, dev);
return -ENOMEM;
}
/* Start ipg timer */
if(lp->options & OPTION_WAKE_PHY_ENABLE)
amd8111e_enable_link_change(lp);
- pci_enable_wake(pci_dev, 3, 1);
- pci_enable_wake(pci_dev, 4, 1); /* D3 cold */
+ pci_enable_wake(pci_dev, PCI_D3hot, 1);
+ pci_enable_wake(pci_dev, PCI_D3cold, 1);
}
else{
- pci_enable_wake(pci_dev, 3, 0);
- pci_enable_wake(pci_dev, 4, 0); /* 4 == D3 cold */
+ pci_enable_wake(pci_dev, PCI_D3hot, 0);
+ pci_enable_wake(pci_dev, PCI_D3cold, 0);
}
pci_save_state(pci_dev);
- pci_set_power_state(pci_dev, 3);
+ pci_set_power_state(pci_dev, PCI_D3hot);
return 0;
}
if (!netif_running(dev))
return 0;
- pci_set_power_state(pci_dev, 0);
+ pci_set_power_state(pci_dev, PCI_D0);
pci_restore_state(pci_dev);
- pci_enable_wake(pci_dev, 3, 0);
- pci_enable_wake(pci_dev, 4, 0); /* D3 cold */
+ pci_enable_wake(pci_dev, PCI_D3hot, 0);
+	pci_enable_wake(pci_dev, PCI_D3cold, 0);
netif_device_attach(dev);
static struct net_device *cops_dev;
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
-MODULE_PARM(irq, "i");
-MODULE_PARM(board_type, "i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(board_type, int, 0);
int init_module(void)
{
static struct net_device *dev_ipddp;
MODULE_LICENSE("GPL");
-MODULE_PARM(ipddp_mode, "i");
+module_param(ipddp_mode, int, 0);
static int __init ipddp_init_module(void)
{
to work unless talking to a copy of the same Linux arcnet driver,
but perhaps marginally faster in that case.
+config ARCNET_CAP
+ tristate "Enable CAP mode packet interface"
+ depends on ARCNET
+ help
+	  ARCnet "CAP mode" packet encapsulation. Used to get the hardware
+	  acknowledge back to userspace. After the initial protocol byte, every
+	  packet is stuffed with an extra 4-byte "cookie" which doesn't
+	  actually appear on the network. After transmit the driver will send
+	  back a packet with protocol byte 0 containing the status of the
+	  transmission:
+	     0=no hardware acknowledge
+	     1=excessive NAK
+	     2=transmission accepted by the receiver hardware
+
+	  Received packets are also stuffed with the extra 4 bytes, but they
+	  will contain random data.
+
+	  CAP mode only listens to protocols 1-8.
+
config ARCNET_COM90xx
tristate "ARCnet COM90xx (normal) chipset driver"
depends on ARCNET
This is yet another chipset driver for the COM90xx cards, but this
time only using memory-mapped mode, and no IO ports at all. This
driver is completely untested, so if you have one of these cards,
- please mail dwmw2@infradead.org, especially if it works!
+ please mail <dwmw2@infradead.org>, especially if it works!
To compile this driver as a module, choose M here and read
<file:Documentation/networking/net-modules.txt>. The module will
obj-$(CONFIG_ARCNET_1201) += rfc1201.o
obj-$(CONFIG_ARCNET_1051) += rfc1051.o
obj-$(CONFIG_ARCNET_RAW) += arc-rawmode.o
+obj-$(CONFIG_ARCNET_CAP) += capmode.o
obj-$(CONFIG_ARCNET_COM90xx) += com90xx.o
obj-$(CONFIG_ARCNET_COM90xxIO) += com90io.o
obj-$(CONFIG_ARCNET_RIM_I) += arc-rimi.o
static int prepare_tx(struct net_device *dev, struct archdr *pkt, int length,
int bufnum);
-
struct ArcProto rawmode_proto =
{
.suffix = 'r',
.rx = rx,
.build_header = build_header,
.prepare_tx = prepare_tx,
+ .continue_tx = NULL,
+ .ack_tx = NULL
};
BUGLVL(D_SKB) arcnet_dump_skb(dev, skb, "rx");
- skb->protocol = 0;
+ skb->protocol = __constant_htons(ETH_P_ARCNET);
netif_rx(skb);
dev->last_rx = jiffies;
}
} else
hard->offset[0] = ofs = 256 - length;
+ BUGMSG(D_DURING, "prepare_tx: length=%d ofs=%d\n",
+ length,ofs);
+
lp->hw.copy_to_card(dev, bufnum, 0, hard, ARC_HDR_SIZE);
lp->hw.copy_to_card(dev, bufnum, ofs, &pkt->soft, length);
#include <asm/io.h>
-
#define VERSION "arcnet: COM20020 ISA support (by David Woodhouse et al.)\n"
lp->config = 0x21 | (lp->timeout << 3) | (lp->backplane << 2);
/* set node ID to 0x42 (but transmitter is disabled, so it's okay) */
SETCONF;
- outb(0x42, ioaddr + 7);
+ outb(0x42, ioaddr + BUS_ALIGN*7);
status = ASTATUS();
/* Enable TX */
outb(0x39, _CONFIG);
- outb(inb(ioaddr + 8), ioaddr + 7);
+ outb(inb(ioaddr + BUS_ALIGN*8), ioaddr + BUS_ALIGN*7);
ACOMMAND(CFLAGScmd | RESETclear | CONFIGclear);
dev->set_multicast_list = com20020_set_mc_list;
if (!dev->dev_addr[0])
- dev->dev_addr[0] = inb(ioaddr + 8); /* FIXME: do this some other way! */
+ dev->dev_addr[0] = inb(ioaddr + BUS_ALIGN*8); /* FIXME: do this some other way! */
SET_SUBADR(SUB_SETUP1);
outb(lp->setup, _XREG);
outb(0x18, _COMMAND);
}
-
lp->config = 0x20 | (lp->timeout << 3) | (lp->backplane << 2) | 1;
/* Default 0x38 + register: Node ID */
SETCONF;
static int com20020_reset(struct net_device *dev, int really_reset)
{
struct arcnet_local *lp = (struct arcnet_local *) dev->priv;
- short ioaddr = dev->base_addr;
+ u_int ioaddr = dev->base_addr;
u_char inbyte;
+ BUGMSG(D_DEBUG, "%s: %d: %s: dev: %p, lp: %p, dev->name: %s\n",
+ __FILE__,__LINE__,__FUNCTION__,dev,lp,dev->name);
BUGMSG(D_INIT, "Resetting %s (status=%02Xh)\n",
dev->name, ASTATUS());
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
lp->config = TXENcfg | (lp->timeout << 3) | (lp->backplane << 2);
/* power-up defaults */
SETCONF;
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
if (really_reset) {
/* reset the card */
mdelay(RESETtime * 2); /* COM20020 seems to be slower sometimes */
}
/* clear flags & end reset */
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
ACOMMAND(CFLAGScmd | RESETclear | CONFIGclear);
/* verify that the ARCnet signature byte is present */
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
com20020_copy_from_card(dev, 0, 0, &inbyte, 1);
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
if (inbyte != TESTvalue) {
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
BUGMSG(D_NORMAL, "reset failed: TESTvalue not present.\n");
return 1;
}
/* enable extended (512-byte) packets */
ACOMMAND(CONFIGcmd | EXTconf);
+ BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__);
/* done! return success. */
return 0;
static void com20020_setmask(struct net_device *dev, int mask)
{
- short ioaddr = dev->base_addr;
+ u_int ioaddr = dev->base_addr;
+ BUGMSG(D_DURING, "Setting mask to %x at %x\n",mask,ioaddr);
AINTMASK(mask);
}
static void com20020_command(struct net_device *dev, int cmd)
{
- short ioaddr = dev->base_addr;
+ u_int ioaddr = dev->base_addr;
ACOMMAND(cmd);
}
static int com20020_status(struct net_device *dev)
{
- short ioaddr = dev->base_addr;
- return ASTATUS();
+ u_int ioaddr = dev->base_addr;
+
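+	/* Low byte: status register; high byte: diagnostic status register */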
+ return ASTATUS() + (ADIAGSTATUS()<<8);
}
static void com20020_close(struct net_device *dev)
{
.suffix = 's',
.mtu = XMTU - RFC1051_HDR_SIZE,
+ .is_ip = 1,
.rx = rx,
.build_header = build_header,
.prepare_tx = prepare_tx,
+ .continue_tx = NULL,
+ .ack_tx = NULL
};
{
.suffix = 'a',
.mtu = 1500, /* could be more, but some receivers can't handle it... */
+ .is_ip = 1, /* This is for sending IP and ARP packages */
.rx = rx,
.build_header = build_header,
.prepare_tx = prepare_tx,
.continue_tx = continue_tx,
+ .ack_tx = NULL
};
MODULE_DESCRIPTION("RealTek RTL8002/8012 parallel port Ethernet driver");
MODULE_LICENSE("GPL");
-MODULE_PARM(max_interrupt_work, "i");
-MODULE_PARM(debug, "i");
-MODULE_PARM(io, "1-" __MODULE_STRING(NUM_UNITS) "i");
-MODULE_PARM(irq, "1-" __MODULE_STRING(NUM_UNITS) "i");
-MODULE_PARM(xcvr, "1-" __MODULE_STRING(NUM_UNITS) "i");
+module_param(max_interrupt_work, int, 0);
+module_param(debug, int, 0);
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(xcvr, int, NULL, 0);
MODULE_PARM_DESC(max_interrupt_work, "ATP maximum events handled per interrupt");
MODULE_PARM_DESC(debug, "ATP debug level (0-7)");
MODULE_PARM_DESC(io, "ATP I/O base address(es)");
i++, mclist = mclist->next)
{
int filterbit = ether_crc_le(ETH_ALEN, mclist->dmi_addr) & 0x3f;
- mc_filter[filterbit >> 5] |= cpu_to_le32(1 << (filterbit & 31));
+ mc_filter[filterbit >> 5] |= 1 << (filterbit & 31);
}
new_mode = CMR2h_Normal;
}
struct bmac_data {
/* volatile struct bmac *bmac; */
struct sk_buff_head *queue;
- volatile struct dbdma_regs *tx_dma;
+ volatile struct dbdma_regs __iomem *tx_dma;
int tx_dma_intr;
- volatile struct dbdma_regs *rx_dma;
+ volatile struct dbdma_regs __iomem *rx_dma;
int rx_dma_intr;
volatile struct dbdma_cmd *tx_cmds; /* xmit dma command list */
volatile struct dbdma_cmd *rx_cmds; /* recv dma command list */
#define DBDMA_CLEAR(x) ( (x) << 16)
static inline void
-dbdma_st32(volatile unsigned long *a, unsigned long x)
+dbdma_st32(volatile __u32 __iomem *a, unsigned long x)
{
__asm__ volatile( "stwbrx %0,0,%1" : : "r" (x), "r" (a) : "memory");
return;
}
static inline unsigned long
-dbdma_ld32(volatile unsigned long *a)
+dbdma_ld32(volatile __u32 __iomem *a)
{
- unsigned long swap;
+ __u32 swap;
__asm__ volatile ("lwbrx %0,0,%1" : "=r" (swap) : "r" (a));
return swap;
}
static void
-dbdma_continue(volatile struct dbdma_regs *dmap)
+dbdma_continue(volatile struct dbdma_regs __iomem *dmap)
{
- dbdma_st32((volatile unsigned long *)&dmap->control,
+ dbdma_st32(&dmap->control,
DBDMA_SET(RUN|WAKE) | DBDMA_CLEAR(PAUSE|DEAD));
eieio();
}
static void
-dbdma_reset(volatile struct dbdma_regs *dmap)
+dbdma_reset(volatile struct dbdma_regs __iomem *dmap)
{
- dbdma_st32((volatile unsigned long *)&dmap->control,
+ dbdma_st32(&dmap->control,
DBDMA_CLEAR(ACTIVE|DEAD|WAKE|FLUSH|PAUSE|RUN));
eieio();
- while (dbdma_ld32((volatile unsigned long *)&dmap->status) & RUN)
+ while (dbdma_ld32(&dmap->status) & RUN)
eieio();
}
static inline
void bmwrite(struct net_device *dev, unsigned long reg_offset, unsigned data )
{
- out_le16((void *)dev->base_addr + reg_offset, data);
+ out_le16((void __iomem *)dev->base_addr + reg_offset, data);
}
static inline
volatile unsigned short bmread(struct net_device *dev, unsigned long reg_offset )
{
- return in_le16((void *)dev->base_addr + reg_offset);
+ return in_le16((void __iomem *)dev->base_addr + reg_offset);
}
static void
bmac_enable_and_reset_chip(struct net_device *dev)
{
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *rd = bp->rx_dma;
- volatile struct dbdma_regs *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
if (rd)
dbdma_reset(rd);
bmac_start_chip(struct net_device *dev)
{
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
unsigned short oldConfig;
/* enable rx dma channel */
bp->sleeping = 1;
spin_unlock_irqrestore(&bp->lock, flags);
if (bp->opened) {
- volatile struct dbdma_regs *rd = bp->rx_dma;
- volatile struct dbdma_regs *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
config = bmread(dev, RXCFG);
bmwrite(dev, RXCFG, (config & ~RxMACEnable));
static void
bmac_init_tx_ring(struct bmac_data *bp)
{
- volatile struct dbdma_regs *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
memset((char *)bp->tx_cmds, 0, (N_TX_RING+1) * sizeof(struct dbdma_cmd));
static int
bmac_init_rx_ring(struct bmac_data *bp)
{
- volatile struct dbdma_regs *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
int i;
struct sk_buff *skb;
static int bmac_transmit_packet(struct sk_buff *skb, struct net_device *dev)
{
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
int i;
/* see if there's a free slot in the tx ring */
{
struct net_device *dev = (struct net_device *) dev_id;
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
volatile struct dbdma_cmd *cp;
int i, nb, stat;
struct sk_buff *skb;
goto err_out_iounmap;
bp->is_bmac_plus = is_bmac_plus;
- bp->tx_dma = (volatile struct dbdma_regs *)
- ioremap(macio_resource_start(mdev, 1), macio_resource_len(mdev, 1));
+ bp->tx_dma = ioremap(macio_resource_start(mdev, 1), macio_resource_len(mdev, 1));
if (!bp->tx_dma)
goto err_out_iounmap;
bp->tx_dma_intr = macio_irq(mdev, 1);
- bp->rx_dma = (volatile struct dbdma_regs *)
- ioremap(macio_resource_start(mdev, 2), macio_resource_len(mdev, 2));
+ bp->rx_dma = ioremap(macio_resource_start(mdev, 2), macio_resource_len(mdev, 2));
if (!bp->rx_dma)
goto err_out_iounmap_tx;
bp->rx_dma_intr = macio_irq(mdev, 2);
err_out_irq0:
free_irq(dev->irq, dev);
err_out_iounmap_rx:
- iounmap((void *)bp->rx_dma);
+ iounmap(bp->rx_dma);
err_out_iounmap_tx:
- iounmap((void *)bp->tx_dma);
+ iounmap(bp->tx_dma);
err_out_iounmap:
- iounmap((void *)dev->base_addr);
+ iounmap((void __iomem *)dev->base_addr);
out_release:
macio_release_resources(mdev);
out_free:
static int bmac_close(struct net_device *dev)
{
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *rd = bp->rx_dma;
- volatile struct dbdma_regs *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
unsigned short config;
int i;
{
struct net_device *dev = (struct net_device *) data;
struct bmac_data *bp = netdev_priv(dev);
- volatile struct dbdma_regs *td = bp->tx_dma;
- volatile struct dbdma_regs *rd = bp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = bp->tx_dma;
+ volatile struct dbdma_regs __iomem *rd = bp->rx_dma;
volatile struct dbdma_cmd *cp;
unsigned long flags;
unsigned short config, oldConfig;
free_irq(bp->tx_dma_intr, dev);
free_irq(bp->rx_dma_intr, dev);
- iounmap((void *)dev->base_addr);
- iounmap((void *)bp->tx_dma);
- iounmap((void *)bp->rx_dma);
+ iounmap((void __iomem *)dev->base_addr);
+ iounmap(bp->tx_dma);
+ iounmap(bp->rx_dma);
macio_release_resources(mdev);
static u16 __get_link_speed(struct port *port);
static u8 __get_duplex(struct port *port);
static inline void __initialize_port_locks(struct port *port);
-static inline void __deinitialize_port_locks(struct port *port);
//conversions
static void __ntohs_lacpdu(struct lacpdu *lacpdu);
static u16 __ad_timer_to_ticks(u16 timer_type, u16 Par);
spin_lock_init(&(SLAVE_AD_INFO(port->slave).rx_machine_lock));
}
-/**
- * __deinitialize_port_locks - deinitialize a port's RX machine spinlock
- * @port: the port we're looking at
- *
- */
-static inline void __deinitialize_port_locks(struct port *port)
-{
-}
-
//conversions
/**
* __ntohs_lacpdu - convert the contents of a LACPDU to host byte order
#include "de600.h"
static unsigned int de600_debug = DE600_DEBUG;
-MODULE_PARM(de600_debug, "i");
+module_param(de600_debug, int, 0);
MODULE_PARM_DESC(de600_debug, "DE-600 debug level (0-2)");
static unsigned int check_lost = 1;
-MODULE_PARM(check_lost, "i");
+module_param(check_lost, bool, 0);
MODULE_PARM_DESC(check_lost, "If set then check for unplugged de600");
static unsigned int delay_time = 10;
-MODULE_PARM(delay_time, "i");
+module_param(delay_time, int, 0);
MODULE_PARM_DESC(delay_time, "DE-600 delay on I/O in microseconds");
static volatile int tx_fifo_out;
static volatile int free_tx_pages = TX_PAGES;
static int was_down;
-static spinlock_t de600_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(de600_lock);
static inline u8 de600_read_status(struct net_device *dev)
{
static spinlock_t de620_lock;
-MODULE_PARM(bnc, "i");
-MODULE_PARM(utp, "i");
-MODULE_PARM(io, "i");
-MODULE_PARM(irq, "i");
-MODULE_PARM(clone, "i");
-MODULE_PARM(de620_debug, "i");
+module_param(bnc, int, 0);
+module_param(utp, int, 0);
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(clone, int, 0);
+module_param(de620_debug, int, 0);
MODULE_PARM_DESC(bnc, "DE-620 set BNC medium (0-1)");
MODULE_PARM_DESC(utp, "DE-620 set UTP medium (0-1)");
MODULE_PARM_DESC(io, "DE-620 I/O base address,required");
} depca_bus; /* type of bus */
struct depca_init init_block; /* Shadow Initialization block */
/* CPU address space fields */
- struct depca_rx_desc *rx_ring; /* Pointer to start of RX descriptor ring */
- struct depca_tx_desc *tx_ring; /* Pointer to start of TX descriptor ring */
- void *rx_buff[NUM_RX_DESC]; /* CPU virt address of sh'd memory buffs */
- void *tx_buff[NUM_TX_DESC]; /* CPU virt address of sh'd memory buffs */
- void *sh_mem; /* CPU mapped virt address of device RAM */
+ struct depca_rx_desc __iomem *rx_ring; /* Pointer to start of RX descriptor ring */
+ struct depca_tx_desc __iomem *tx_ring; /* Pointer to start of TX descriptor ring */
+ void __iomem *rx_buff[NUM_RX_DESC]; /* CPU virt address of sh'd memory buffs */
+ void __iomem *tx_buff[NUM_TX_DESC]; /* CPU virt address of sh'd memory buffs */
+ void __iomem *sh_mem; /* CPU mapped virt address of device RAM */
u_long mem_start; /* Bus address of device RAM (before remap) */
u_long mem_len; /* device memory size */
/* Device address space fields */
/* Tx & Rx descriptors (aligned to a quadword boundary) */
offset = (offset + DEPCA_ALIGN) & ~DEPCA_ALIGN;
- lp->rx_ring = (struct depca_rx_desc *) (lp->sh_mem + offset);
+ lp->rx_ring = (struct depca_rx_desc __iomem *) (lp->sh_mem + offset);
lp->rx_ring_offset = offset;
offset += (sizeof(struct depca_rx_desc) * NUM_RX_DESC);
- lp->tx_ring = (struct depca_tx_desc *) (lp->sh_mem + offset);
+ lp->tx_ring = (struct depca_tx_desc __iomem *) (lp->sh_mem + offset);
lp->tx_ring_offset = offset;
offset += (sizeof(struct depca_tx_desc) * NUM_TX_DESC);
static int __init DepcaSignature(char *name, u_long base_addr)
{
u_int i, j, k;
- void *ptr;
+ void __iomem *ptr;
char tmpstr[16];
u_long prom_addr = base_addr + 0xc000;
u_long mem_addr = base_addr + 0x8000; /* 32KB */
printk("Descriptor addresses (CPU):\nRX: ");
for (i = 0; i < lp->rxRingMask; i++) {
if (i < 3) {
- printk("0x%8.8lx ", (long) &lp->rx_ring[i].base);
+ printk("%p ", &lp->rx_ring[i].base);
}
}
- printk("...0x%8.8lx\n", (long) &lp->rx_ring[i].base);
+ printk("...%p\n", &lp->rx_ring[i].base);
printk("TX: ");
for (i = 0; i < lp->txRingMask; i++) {
if (i < 3) {
- printk("0x%8.8lx ", (long) &lp->tx_ring[i].base);
+ printk("%p ", &lp->tx_ring[i].base);
}
}
- printk("...0x%8.8lx\n", (long) &lp->tx_ring[i].base);
+ printk("...%p\n", &lp->tx_ring[i].base);
printk("\nDescriptor buffers (Device):\nRX: ");
for (i = 0; i < lp->rxRingMask; i++) {
if (i < 3) {
MODULE_AUTHOR ("Edward Peng");
MODULE_DESCRIPTION ("D-Link DL2000-based Gigabit Ethernet Adapter");
MODULE_LICENSE("GPL");
-MODULE_PARM (mtu, "1-" __MODULE_STRING (MAX_UNITS) "i");
-MODULE_PARM (media, "1-" __MODULE_STRING (MAX_UNITS) "s");
-MODULE_PARM (vlan, "1-" __MODULE_STRING (MAX_UNITS) "i");
-MODULE_PARM (jumbo, "1-" __MODULE_STRING (MAX_UNITS) "i");
-MODULE_PARM (tx_flow, "i");
-MODULE_PARM (rx_flow, "i");
-MODULE_PARM (copy_thresh, "i");
-MODULE_PARM (rx_coalesce, "i"); /* Rx frame count each interrupt */
-MODULE_PARM (rx_timeout, "i"); /* Rx DMA wait time in 64ns increments */
-MODULE_PARM (tx_coalesce, "i"); /* HW xmit count each TxDMAComplete */
+module_param_array(mtu, int, NULL, 0);
+module_param_array(media, charp, NULL, 0);
+module_param_array(vlan, int, NULL, 0);
+module_param_array(jumbo, int, NULL, 0);
+module_param(tx_flow, int, 0);
+module_param(rx_flow, int, 0);
+module_param(copy_thresh, int, 0);
+module_param(rx_coalesce, int, 0); /* Rx frame count each interrupt */
+module_param(rx_timeout, int, 0); /* Rx DMA wait time in 64ns increments */
+module_param(tx_coalesce, int, 0); /* HW xmit count each TxDMAComplete */
/* Enable the default interrupts */
#include <linux/sched.h>
#ifndef msec_delay
-#define msec_delay(x) do { if(in_interrupt()) { \
- /* Don't mdelay in interrupt context! */ \
- BUG(); \
- } else { \
- set_current_state(TASK_UNINTERRUPTIBLE); \
- schedule_timeout((x * HZ)/1000 + 2); \
- } } while(0)
+#define msec_delay(x) msleep(x)
+
/* Some workarounds require millisecond delays and are run during interrupt
* context. Most notably, when establishing link, the phy may need tweaking
* but cannot process phy register reads/writes faster than millisecond
static void rx_net_packet(struct fc_info *fi, u_char *buff_addr, int payload_size);
static void rx_net_mfs_packet(struct fc_info *fi, struct sk_buff *skb);
-unsigned short fc_type_trans(struct sk_buff *skb, struct net_device *dev);
static int tx_ip_packet(struct sk_buff *skb, unsigned long len, struct fc_info *fi);
static int tx_arp_packet(char *data, unsigned long len, struct fc_info *fi);
#endif
/*************************************************/
/* XXX both FECs use the MII interface of FEC1 */
-static spinlock_t fec_mii_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(fec_mii_lock);
#define FEC_MII_LOOPS 10000
* kernel's AX.25 protocol layers.
*
* Authors: Andreas Könsgen <ajk@iehk.rwth-aachen.de>
- * Ralf Baechle DO1GRB <ralf@linux-mips.org>
+ * Ralf Baechle DL5RB <ralf@linux-mips.org>
*
* Quite a lot of stuff "stolen" by Joerg Reuter from slip.c, written by
*
unsigned char status1;
unsigned char status2;
unsigned char tx_enable;
- unsigned char tnc_ok;
+ unsigned char tnc_state;
struct timer_list tx_t;
struct timer_list resync_t;
-
atomic_t refcnt;
struct semaphore dead_sem;
spinlock_t lock;
static void sp_start_tx_timer(struct sixpack *);
static void sixpack_decode(struct sixpack *, unsigned char[], int);
static int encode_sixpack(unsigned char *, unsigned char *, int, unsigned char);
-static int sixpack_init(struct net_device *dev);
/*
* perform the persistence/slottime algorithm for CSMA access. If the
goto out_drop;
}
+ if (len > sp->mtu) { /* sp->mtu = AX25_MTU = max. PACLEN = 256 */
+ msg = "oversized transmit packet!";
+ goto out_drop;
+ }
+
if (p[0] > 5) {
msg = "invalid KISS command";
goto out_drop;
out_drop:
sp->stats.tx_dropped++;
netif_start_queue(sp->dev);
- printk(KERN_DEBUG "%s: %s - dropped.\n", sp->dev->name, msg);
- return;
+ if (net_ratelimit())
+ printk(KERN_DEBUG "%s: %s - dropped.\n", sp->dev->name, msg);
}
/* Encapsulate an IP datagram and kick it into a TTY queue. */
return &sp->stats;
}
-static int sp_set_dev_mac_address(struct net_device *dev, void *addr)
+static int sp_set_mac_address(struct net_device *dev, void *addr)
{
- struct sockaddr *sa = addr;
- memcpy(dev->dev_addr, sa->sa_data, AX25_ADDR_LEN);
+ struct sockaddr_ax25 *sa = addr;
+
+ if (sa->sax25_family != AF_AX25)
+ return -EINVAL;
+
+ if (!sa->sax25_ndigis)
+ return -EINVAL;
+
+ spin_lock_irq(&dev->xmit_lock);
+ memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN);
+ spin_unlock_irq(&dev->xmit_lock);
+
return 0;
}
{'L'<<1,'I'<<1,'N'<<1,'U'<<1,'X'<<1,' '<<1,'1'<<1};
/* Finish setting up the DEVICE info. */
- dev->init = sixpack_init;
dev->mtu = SIXP_MTU;
dev->hard_start_xmit = sp_xmit;
dev->open = sp_open_dev;
dev->stop = sp_close;
dev->hard_header = sp_header;
dev->get_stats = sp_get_stats;
- dev->set_mac_address = sp_set_dev_mac_address;
+ dev->set_mac_address = sp_set_mac_address;
dev->hard_header_len = AX25_MAX_HEADER_LEN;
dev->addr_len = AX25_ADDR_LEN;
dev->type = ARPHRD_AX25;
SET_MODULE_OWNER(dev);
- /* New-style flags. */
dev->flags = 0;
}
-/* Find a free 6pack channel, and link in this `tty' line. */
-static inline struct sixpack *sp_alloc(void)
-{
- struct sixpack *sp = NULL;
- struct net_device *dev = NULL;
-
- dev = alloc_netdev(sizeof(struct sixpack), "sp%d", sp_setup);
- if (!dev)
- return NULL;
-
- sp = netdev_priv(dev);
- sp->dev = dev;
-
- spin_lock_init(&sp->lock);
-
- if (register_netdev(dev))
- goto out_free;
-
- return sp;
-
-out_free:
- printk(KERN_WARNING "sp_alloc() - register_netdev() failure.\n");
-
- free_netdev(dev);
-
- return NULL;
-}
-
-/* Free a 6pack channel. */
-static inline void sp_free(struct sixpack *sp)
-{
- void * tmp;
-
- /* Free all 6pack frame buffers. */
- if ((tmp = xchg(&sp->rbuff, NULL)) != NULL)
- kfree(tmp);
- if ((tmp = xchg(&sp->xbuff, NULL)) != NULL)
- kfree(tmp);
-}
-
-
/* Send one completely decapsulated IP datagram to the IP layer. */
/*
* best way to fix this is to use a rwlock in the tty struct, but for now we
* use a single global rwlock for all ttys in ppp line discipline.
*/
-static rwlock_t disc_data_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(disc_data_lock);
static struct sixpack *sp_get(struct tty_struct *tty)
{
struct sixpack *sp = sp_get(tty);
int actual;
+ if (!sp)
+ return;
if (sp->xleft <= 0) {
/* Now serial buffer is almost free & we can start
* transmission of another packet */
goto out;
}
- if (sp->tx_enable == 1) {
+ if (sp->tx_enable) {
actual = tty->driver->write(tty, sp->xhead, sp->xleft);
sp->xleft -= actual;
sp->xhead += actual;
/* ----------------------------------------------------------------------- */
-/* Open the low-level part of the 6pack channel. */
-static int sp_open(struct net_device *dev)
-{
- struct sixpack *sp = netdev_priv(dev);
- char *rbuff, *xbuff = NULL;
- int err = -ENOBUFS;
- unsigned long len;
-
- /* !!! length of the buffers. MTU is IP MTU, not PACLEN! */
-
- len = dev->mtu * 2;
-
- rbuff = kmalloc(len + 4, GFP_KERNEL);
- if (rbuff == NULL)
- goto err_exit;
-
- xbuff = kmalloc(len + 4, GFP_KERNEL);
- if (xbuff == NULL)
- goto err_exit;
-
- spin_lock_bh(&sp->lock);
-
- if (sp->tty == NULL)
- return -ENODEV;
-
- /*
- * Allocate the 6pack frame buffers:
- *
- * rbuff Receive buffer.
- * xbuff Transmit buffer.
- */
-
- rbuff = xchg(&sp->rbuff, rbuff);
- xbuff = xchg(&sp->xbuff, xbuff);
-
- sp->mtu = AX25_MTU + 73;
- sp->buffsize = len;
- sp->rcount = 0;
- sp->rx_count = 0;
- sp->rx_count_cooked = 0;
- sp->xleft = 0;
-
- sp->flags = 0; /* Clear ESCAPE & ERROR flags */
-
- sp->duplex = 0;
- sp->tx_delay = SIXP_TXDELAY;
- sp->persistence = SIXP_PERSIST;
- sp->slottime = SIXP_SLOTTIME;
- sp->led_state = 0x60;
- sp->status = 1;
- sp->status1 = 1;
- sp->status2 = 0;
- sp->tnc_ok = 0;
- sp->tx_enable = 0;
-
- netif_start_queue(dev);
-
- init_timer(&sp->tx_t);
- init_timer(&sp->resync_t);
-
- spin_unlock_bh(&sp->lock);
-
- err = 0;
-
-err_exit:
- if (xbuff)
- kfree(xbuff);
- if (rbuff)
- kfree(rbuff);
-
- return err;
-}
-
-
static int sixpack_receive_room(struct tty_struct *tty)
{
return 65536; /* We can handle an infinite amount of data. :-) */
* decode_prio_command
*/
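+/* TNC synchronization states kept in sp->tnc_state */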
+#define TNC_UNINITIALIZED 0
+#define TNC_UNSYNC_STARTUP 1
+#define TNC_UNSYNCED 2
+#define TNC_IN_SYNC 3
+
+static void __tnc_set_sync_state(struct sixpack *sp, int new_tnc_state)
+{
+ char *msg;
+
+ switch (new_tnc_state) {
+	default:		/* keeps gcc happy about an otherwise uninitialized msg */
+ case TNC_UNSYNC_STARTUP:
+ msg = "Synchronizing with TNC";
+ break;
+ case TNC_UNSYNCED:
+		msg = "Lost synchronization with TNC";
+ break;
+ case TNC_IN_SYNC:
+ msg = "Found TNC";
+ break;
+ }
+
+ sp->tnc_state = new_tnc_state;
+ printk(KERN_INFO "%s: %s\n", sp->dev->name, msg);
+}
+
+static inline void tnc_set_sync_state(struct sixpack *sp, int new_tnc_state)
+{
+ int old_tnc_state = sp->tnc_state;
+
+ if (old_tnc_state != new_tnc_state)
+ __tnc_set_sync_state(sp, new_tnc_state);
+}
+
static void resync_tnc(unsigned long channel)
{
struct sixpack *sp = (struct sixpack *) channel;
- struct net_device *dev = sp->dev;
static char resync_cmd = 0xe8;
- printk(KERN_INFO "%s: resyncing TNC\n", dev->name);
-
/* clear any data that might have been received */
sp->rx_count = 0;
sp->status = 1;
sp->status1 = 1;
sp->status2 = 0;
- sp->tnc_ok = 0;
/* resync the TNC */
/* Start resync timer again -- the TNC might be still absent */
del_timer(&sp->resync_t);
- sp->resync_t.data = (unsigned long) sp;
- sp->resync_t.function = resync_tnc;
- sp->resync_t.expires = jiffies + SIXP_RESYNC_TIMEOUT;
+ sp->resync_t.data = (unsigned long) sp;
+ sp->resync_t.function = resync_tnc;
+ sp->resync_t.expires = jiffies + SIXP_RESYNC_TIMEOUT;
add_timer(&sp->resync_t);
}
{
unsigned char inbyte = 0xe8;
+ tnc_set_sync_state(sp, TNC_UNSYNC_STARTUP);
+
sp->tty->driver->write(sp->tty, &inbyte, 1);
del_timer(&sp->resync_t);
*/
static int sixpack_open(struct tty_struct *tty)
{
+ char *rbuff = NULL, *xbuff = NULL;
+ struct net_device *dev;
struct sixpack *sp;
+ unsigned long len;
int err = 0;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
- sp = sp_alloc();
- if (!sp) {
+ dev = alloc_netdev(sizeof(struct sixpack), "sp%d", sp_setup);
+ if (!dev) {
err = -ENOMEM;
goto out;
}
- sp->tty = tty;
+ sp = netdev_priv(dev);
+ sp->dev = dev;
+
+ spin_lock_init(&sp->lock);
atomic_set(&sp->refcnt, 1);
init_MUTEX_LOCKED(&sp->dead_sem);
- /* Perform the low-level 6pack initialization. */
- if ((err = sp_open(sp->dev)))
- goto out;
+ /* !!! length of the buffers. MTU is IP MTU, not PACLEN! */
+
+ len = dev->mtu * 2;
+
+ rbuff = kmalloc(len + 4, GFP_KERNEL);
+ xbuff = kmalloc(len + 4, GFP_KERNEL);
+
+ if (rbuff == NULL || xbuff == NULL) {
+ err = -ENOBUFS;
+ goto out_free;
+ }
+
+ spin_lock_bh(&sp->lock);
+
+ sp->tty = tty;
+
+ sp->rbuff = rbuff;
+ sp->xbuff = xbuff;
+
+ sp->mtu = AX25_MTU + 73;
+ sp->buffsize = len;
+ sp->rcount = 0;
+ sp->rx_count = 0;
+ sp->rx_count_cooked = 0;
+ sp->xleft = 0;
+
+ sp->flags = 0; /* Clear ESCAPE & ERROR flags */
+
+ sp->duplex = 0;
+ sp->tx_delay = SIXP_TXDELAY;
+ sp->persistence = SIXP_PERSIST;
+ sp->slottime = SIXP_SLOTTIME;
+ sp->led_state = 0x60;
+ sp->status = 1;
+ sp->status1 = 1;
+ sp->status2 = 0;
+ sp->tx_enable = 0;
+
+ netif_start_queue(dev);
+
+ init_timer(&sp->tx_t);
+ init_timer(&sp->resync_t);
+
+ spin_unlock_bh(&sp->lock);
/* Done. We have linked the TTY line to a channel. */
tty->disc_data = sp;
+ /* Now we're ready to register. */
+ if (register_netdev(dev))
+ goto out_free;
+
tnc_init(sp);
+ return 0;
+
+out_free:
+ kfree(xbuff);
+ kfree(rbuff);
+
+ if (dev)
+ free_netdev(dev);
+
out:
return err;
}
*/
static void sixpack_close(struct tty_struct *tty)
{
- struct sixpack *sp = (struct sixpack *) tty->disc_data;
+ struct sixpack *sp;
write_lock(&disc_data_lock);
sp = tty->disc_data;
if (!atomic_dec_and_test(&sp->refcnt))
down(&sp->dead_sem);
+ unregister_netdev(sp->dev);
+
del_timer(&sp->tx_t);
del_timer(&sp->resync_t);
- sp_free(sp);
- unregister_netdev(sp->dev);
-}
-
-static int sp_set_mac_address(struct net_device *dev, void __user *addr)
-{
- return copy_from_user(dev->dev_addr, addr, AX25_ADDR_LEN) ? -EFAULT : 0;
+ /* Free all 6pack frame buffers. */
+ kfree(sp->rbuff);
+ kfree(sp->xbuff);
}
/* Perform I/O control on an active 6pack channel. */
unsigned int cmd, unsigned long arg)
{
struct sixpack *sp = sp_get(tty);
+ struct net_device *dev = sp->dev;
unsigned int tmp, err;
if (!sp)
switch(cmd) {
case SIOCGIFNAME:
- err = copy_to_user((void __user *) arg, sp->dev->name,
- strlen(sp->dev->name) + 1) ? -EFAULT : 0;
+		err = copy_to_user((void __user *) arg, dev->name,
+ strlen(dev->name) + 1) ? -EFAULT : 0;
break;
case SIOCGIFENCAP:
}
sp->mode = tmp;
- sp->dev->addr_len = AX25_ADDR_LEN; /* sizeof an AX.25 addr */
- sp->dev->hard_header_len = AX25_KISS_HEADER_LEN + AX25_MAX_HEADER_LEN + 3;
- sp->dev->type = ARPHRD_AX25;
+ dev->addr_len = AX25_ADDR_LEN;
+ dev->hard_header_len = AX25_KISS_HEADER_LEN +
+ AX25_MAX_HEADER_LEN + 3;
+ dev->type = ARPHRD_AX25;
err = 0;
break;
- case SIOCSIFHWADDR:
- err = sp_set_mac_address(sp->dev, (void __user *) arg);
+ case SIOCSIFHWADDR: {
+ char addr[AX25_ADDR_LEN];
+
+ if (copy_from_user(&addr,
+ (void __user *) arg, AX25_ADDR_LEN)) {
+ err = -EFAULT;
+ break;
+ }
+
+ spin_lock_irq(&dev->xmit_lock);
+ memcpy(dev->dev_addr, &addr, AX25_ADDR_LEN);
+ spin_unlock_irq(&dev->xmit_lock);
+
+ err = 0;
break;
+ }
/* Allow stty to read, but not set, the serial port */
case TCGETS:
break;
default:
- return -ENOIOCTLCMD;
+ err = -ENOIOCTLCMD;
}
sp_put(sp);
return err;
}
-/* Fill in our line protocol discipline */
static struct tty_ldisc sp_ldisc = {
.owner = THIS_MODULE,
.magic = TTY_LDISC_MAGIC,
/* Initialize 6pack control device -- register 6pack line discipline */
-static char msg_banner[] __initdata = KERN_INFO "AX.25: 6pack driver, " SIXPACK_VERSION "\n";
-static char msg_regfail[] __initdata = KERN_ERR "6pack: can't register line discipline (err = %d)\n";
+static char msg_banner[] __initdata = KERN_INFO \
+ "AX.25: 6pack driver, " SIXPACK_VERSION "\n";
+static char msg_regfail[] __initdata = KERN_ERR \
+ "6pack: can't register line discipline (err = %d)\n";
static int __init sixpack_init_driver(void)
{
return status;
}
-static const char msg_unregfail[] __exitdata = KERN_ERR "6pack: can't unregister line discipline (err = %d)\n";
+static const char msg_unregfail[] __exitdata = KERN_ERR \
+ "6pack: can't unregister line discipline (err = %d)\n";
static void __exit sixpack_exit_driver(void)
{
printk(msg_unregfail, ret);
}
-/* Initialize the 6pack driver. Called by DDI. */
-static int sixpack_init(struct net_device *dev)
-{
- struct sixpack *sp = netdev_priv(dev);
-
- if (sp == NULL) /* Allocation failed ?? */
- return -ENODEV;
-
- /* Set up the "6pack Control Block". (And clear statistics) */
-
- memset(sp, 0, sizeof (struct sixpack));
- sp->dev = dev;
-
- return 0;
-}
-
/* encode an AX.25 packet into 6pack */
static int encode_sixpack(unsigned char *tx_buf, unsigned char *tx_buf_raw,
/* decode 4 sixpack-encoded bytes into 3 data bytes */
-static void decode_data(unsigned char inbyte, struct sixpack *sp)
+static void decode_data(struct sixpack *sp, unsigned char inbyte)
{
unsigned char *buf;
/* identify and execute a 6pack priority command byte */
-static void decode_prio_command(unsigned char cmd, struct sixpack *sp)
+static void decode_prio_command(struct sixpack *sp, unsigned char cmd)
{
unsigned char channel;
int actual;
/* if the state byte has been received, the TNC is present,
so the resync timer can be reset. */
- if (sp->tnc_ok == 1) {
+ if (sp->tnc_state == TNC_IN_SYNC) {
del_timer(&sp->resync_t);
- sp->resync_t.data = (unsigned long) sp;
- sp->resync_t.function = resync_tnc;
- sp->resync_t.expires = jiffies + SIXP_INIT_RESYNC_TIMEOUT;
+ sp->resync_t.data = (unsigned long) sp;
+ sp->resync_t.function = resync_tnc;
+ sp->resync_t.expires = jiffies + SIXP_INIT_RESYNC_TIMEOUT;
add_timer(&sp->resync_t);
}
/* identify and execute a standard 6pack command byte */
-static void decode_std_command(unsigned char cmd, struct sixpack *sp)
+static void decode_std_command(struct sixpack *sp, unsigned char cmd)
{
unsigned char checksum = 0, rest = 0, channel;
short i;
rest = sp->rx_count;
if (rest != 0)
for (i = rest; i <= 3; i++)
- decode_data(0, sp);
+ decode_data(sp, 0);
if (rest == 2)
sp->rx_count_cooked -= 2;
else if (rest == 3)
/* decode a 6pack packet */
static void
-sixpack_decode(struct sixpack *sp, unsigned char pre_rbuff[], int count)
+sixpack_decode(struct sixpack *sp, unsigned char *pre_rbuff, int count)
{
unsigned char inbyte;
int count1;
for (count1 = 0; count1 < count; count1++) {
inbyte = pre_rbuff[count1];
if (inbyte == SIXP_FOUND_TNC) {
- printk(KERN_INFO "6pack: TNC found.\n");
- sp->tnc_ok = 1;
+ tnc_set_sync_state(sp, TNC_IN_SYNC);
del_timer(&sp->resync_t);
}
if ((inbyte & SIXP_PRIO_CMD_MASK) != 0)
- decode_prio_command(inbyte, sp);
+ decode_prio_command(sp, inbyte);
else if ((inbyte & SIXP_STD_CMD_MASK) != 0)
- decode_std_command(inbyte, sp);
+ decode_std_command(sp, inbyte);
else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK)
- decode_data(inbyte, sp);
+ decode_data(sp, inbyte);
}
}
static const char *mode[NR_PORTS] = { "picpar", };
static int iobase[NR_PORTS] = { 0x378, };
-MODULE_PARM(mode, "1-" __MODULE_STRING(NR_PORTS) "s");
+module_param_array(mode, charp, NULL, 0);
MODULE_PARM_DESC(mode, "baycom operating mode; eg. par96 or picpar");
-MODULE_PARM(iobase, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(iobase, int, NULL, 0);
MODULE_PARM_DESC(iobase, "baycom io base address");
MODULE_AUTHOR("Thomas M. Sailer, sailer@ife.ee.ethz.ch, hb9jnx@hb9w.che.eu");
static int irq[NR_PORTS] = { 4, };
static int baud[NR_PORTS] = { [0 ... NR_PORTS-1] = 1200 };
-MODULE_PARM(mode, "1-" __MODULE_STRING(NR_PORTS) "s");
+module_param_array(mode, charp, NULL, 0);
MODULE_PARM_DESC(mode, "baycom operating mode; * for software DCD");
-MODULE_PARM(iobase, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(iobase, int, NULL, 0);
MODULE_PARM_DESC(iobase, "baycom io base address");
-MODULE_PARM(irq, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(irq, int, NULL, 0);
MODULE_PARM_DESC(irq, "baycom irq number");
-MODULE_PARM(baud, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(baud, int, NULL, 0);
MODULE_PARM_DESC(baud, "baycom baud rate (300 to 4800)");
MODULE_AUTHOR("Thomas M. Sailer, sailer@ife.ee.ethz.ch, hb9jnx@hb9w.che.eu");
static int iobase[NR_PORTS] = { 0x3f8, };
static int irq[NR_PORTS] = { 4, };
-MODULE_PARM(mode, "1-" __MODULE_STRING(NR_PORTS) "s");
+module_param_array(mode, charp, NULL, 0);
MODULE_PARM_DESC(mode, "baycom operating mode; * for software DCD");
-MODULE_PARM(iobase, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(iobase, int, NULL, 0);
MODULE_PARM_DESC(iobase, "baycom io base address");
-MODULE_PARM(irq, "1-" __MODULE_STRING(NR_PORTS) "i");
+module_param_array(irq, int, NULL, 0);
MODULE_PARM_DESC(irq, "baycom irq number");
MODULE_AUTHOR("Thomas M. Sailer, sailer@ife.ee.ethz.ch, hb9jnx@hb9w.che.eu");
/* These provide interrupt save 2-step access to the Z8530 registers */
-static spinlock_t iolock = SPIN_LOCK_UNLOCKED; /* Guards paired accesses */
+static DEFINE_SPINLOCK(iolock); /* Guards paired accesses */
static inline unsigned char InReg(io_port port, unsigned char reg)
{
#endif /* CONFIG_IBM_EMAC4 */
#define EMAC_M1_BASE (EMAC_M1_TX_FIFO_2K | \
EMAC_M1_APP | \
- EMAC_M1_TR)
+ EMAC_M1_TR | EMAC_M1_VLE)
/* Transmit Mode Register 0 */
#define EMAC_TMR0_GNP0 0x80000000
out_be32(&emacp->em0stacr, stacr);
- while (((stacr = in_be32(&emacp->em0stacr) & EMAC_STACR_OC) == 0)
- && (count++ < 5000))
+ count = 0;
+ while ((((stacr = in_be32(&emacp->em0stacr)) & EMAC_STACR_OC) == 0)
+ && (count++ < MDIO_DELAY))
udelay(1);
MDIO_DEBUG((" (count was %d)\n", count));
PKT_DEBUG(("emac_start_xmit() stopping queue\n"));
netif_stop_queue(dev);
spin_unlock_irqrestore(&fep->lock, flags);
- restore_flags(flags);
return -EBUSY;
}
/* Format the receive descriptor ring. */
ep->rx_slot = 0;
/* Default is MTU=1500 + Ethernet overhead */
- ep->rx_buffer_size = ENET_DEF_BUF_SIZE;
+ ep->rx_buffer_size = dev->mtu + ENET_HEADER_SIZE + ENET_FCS_SIZE;
emac_rx_fill(dev, 0);
if (ep->rx_slot != 0) {
printk(KERN_ERR
/* set frame gap */
out_be32(&emacp->em0ipgvr, CONFIG_IBM_EMAC_FGAP);
+
+ /* set VLAN Tag Protocol Identifier */
+ out_be32(&emacp->em0vtpid, 0x8100);
/* Init ring buffers */
emac_init_rings(fep->ndev);
.rxde = &emac_rxde_dev,
};
+#ifdef CONFIG_NET_POLL_CONTROLLER
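+/* Handle RX/TX completions directly, without interrupts (netpoll/netconsole path) */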
+static int emac_netpoll(struct net_device *ndev)
+{
+ emac_rxeob_dev((void *)ndev, 0);
+ emac_txeob_dev((void *)ndev, 0);
+ return 0;
+}
+#endif
+
static int emac_init_device(struct ocp_device *ocpdev, struct ibm_ocp_mal *mal)
{
int deferred_init = 0;
SET_ETHTOOL_OPS(ndev, &emac_ethtool_ops);
if (emacdata->tah_idx >= 0)
ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG;
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ ndev->poll_controller = emac_netpoll;
+#endif
SET_MODULE_OWNER(ndev);
#define ENET_HEADER_SIZE 14
#define ENET_FCS_SIZE 4
-#define ENET_DEF_MTU_SIZE 1500
-#define ENET_DEF_BUF_SIZE (ENET_DEF_MTU_SIZE + ENET_HEADER_SIZE + ENET_FCS_SIZE)
#define EMAC_MIN_FRAME 64
#define EMAC_MAX_FRAME 9018
#define EMAC_MIN_MTU (EMAC_MIN_FRAME - ENET_HEADER_SIZE - ENET_FCS_SIZE)
/* This lock protects the commac list. On today UP implementations, it's
* really only used as IRQ protection in mal_{register,unregister}_commac()
*/
-static rwlock_t mal_list_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(mal_list_lock);
int mal_register_commac(struct ibm_ocp_mal *mal, struct mal_commac *commac)
{
u16 lpa;
if (phy->autoneg) {
- lpa = phy_read(phy, MII_LPA);
+ lpa = phy_read(phy, MII_LPA) & phy_read(phy, MII_ADVERTISE);
- if (lpa & (LPA_10FULL | LPA_100FULL))
- phy->duplex = DUPLEX_FULL;
- else
- phy->duplex = DUPLEX_HALF;
- if (lpa & (LPA_100FULL | LPA_100HALF))
- phy->speed = SPEED_100;
- else
- phy->speed = SPEED_10;
+ phy->speed = SPEED_10;
+ phy->duplex = DUPLEX_HALF;
phy->pause = 0;
+
+ if (lpa & (LPA_100FULL | LPA_100HALF)) {
+ phy->speed = SPEED_100;
+ if (lpa & LPA_100FULL)
+ phy->duplex = DUPLEX_FULL;
+ } else if (lpa & LPA_10FULL)
+ phy->duplex = DUPLEX_FULL;
}
/* On non-aneg, we assume what we put in BMCR is the speed,
* though magic-aneg shouldn't prevent this case from occurring
static BCSR * const bcsr = (BCSR *)0xAE000000;
#endif
-static spinlock_t ir_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ir_lock);
/*
* IrDA peripheral bug. You have to read the register
#define TX_DMA_ERR 0x80
struct mace_data {
- volatile struct mace *mace;
- volatile struct dbdma_regs *tx_dma;
+ volatile struct mace __iomem *mace;
+ volatile struct dbdma_regs __iomem *tx_dma;
int tx_dma_intr;
- volatile struct dbdma_regs *rx_dma;
+ volatile struct dbdma_regs __iomem *rx_dma;
int rx_dma_intr;
volatile struct dbdma_cmd *tx_cmds; /* xmit dma command list */
volatile struct dbdma_cmd *rx_cmds; /* recv dma command list */
static irqreturn_t mace_rxdma_intr(int irq, void *dev_id, struct pt_regs *regs);
static void mace_set_timeout(struct net_device *dev);
static void mace_tx_timeout(unsigned long data);
-static inline void dbdma_reset(volatile struct dbdma_regs *dma);
+static inline void dbdma_reset(volatile struct dbdma_regs __iomem *dma);
static inline void mace_clean_rings(struct mace_data *mp);
static void __mace_set_address(struct net_device *dev, void *addr);
macio_set_drvdata(mdev, dev);
dev->base_addr = macio_resource_start(mdev, 0);
- mp->mace = (volatile struct mace *)ioremap(dev->base_addr, 0x1000);
+ mp->mace = ioremap(dev->base_addr, 0x1000);
if (mp->mace == NULL) {
printk(KERN_ERR "MACE: can't map IO resources !\n");
rc = -ENOMEM;
mp = (struct mace_data *) dev->priv;
mp->maccc = ENXMT | ENRCV;
- mp->tx_dma = (volatile struct dbdma_regs *)
- ioremap(macio_resource_start(mdev, 1), 0x1000);
+ mp->tx_dma = ioremap(macio_resource_start(mdev, 1), 0x1000);
if (mp->tx_dma == NULL) {
printk(KERN_ERR "MACE: can't map TX DMA resources !\n");
rc = -ENOMEM;
}
mp->tx_dma_intr = macio_irq(mdev, 1);
- mp->rx_dma = (volatile struct dbdma_regs *)
- ioremap(macio_resource_start(mdev, 2), 0x1000);
+ mp->rx_dma = ioremap(macio_resource_start(mdev, 2), 0x1000);
if (mp->rx_dma == NULL) {
printk(KERN_ERR "MACE: can't map RX DMA resources !\n");
rc = -ENOMEM;
err_free_irq:
free_irq(macio_irq(mdev, 0), dev);
err_unmap_rx_dma:
- iounmap((void*)mp->rx_dma);
+ iounmap(mp->rx_dma);
err_unmap_tx_dma:
- iounmap((void*)mp->tx_dma);
+ iounmap(mp->tx_dma);
err_unmap_io:
- iounmap((void*)mp->mace);
+ iounmap(mp->mace);
err_free:
free_netdev(dev);
err_release:
free_irq(mp->tx_dma_intr, dev);
free_irq(mp->rx_dma_intr, dev);
- iounmap((void*)mp->rx_dma);
- iounmap((void*)mp->tx_dma);
- iounmap((void*)mp->mace);
+ iounmap(mp->rx_dma);
+ iounmap(mp->tx_dma);
+ iounmap(mp->mace);
free_netdev(dev);
return 0;
}
-static void dbdma_reset(volatile struct dbdma_regs *dma)
+static void dbdma_reset(volatile struct dbdma_regs __iomem *dma)
{
int i;
static void mace_reset(struct net_device *dev)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
+ volatile struct mace __iomem *mb = mp->mace;
int i;
/* soft-reset the chip */
static void __mace_set_address(struct net_device *dev, void *addr)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
+ volatile struct mace __iomem *mb = mp->mace;
unsigned char *p = addr;
int i;
static int mace_set_address(struct net_device *dev, void *addr)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
+ volatile struct mace __iomem *mb = mp->mace;
unsigned long flags;
spin_lock_irqsave(&mp->lock, flags);
static int mace_open(struct net_device *dev)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
- volatile struct dbdma_regs *rd = mp->rx_dma;
- volatile struct dbdma_regs *td = mp->tx_dma;
+ volatile struct mace __iomem *mb = mp->mace;
+ volatile struct dbdma_regs __iomem *rd = mp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = mp->tx_dma;
volatile struct dbdma_cmd *cp;
int i;
struct sk_buff *skb;
static int mace_close(struct net_device *dev)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
- volatile struct dbdma_regs *rd = mp->rx_dma;
- volatile struct dbdma_regs *td = mp->tx_dma;
+ volatile struct mace __iomem *mb = mp->mace;
+ volatile struct dbdma_regs __iomem *rd = mp->rx_dma;
+ volatile struct dbdma_regs __iomem *td = mp->tx_dma;
/* disable rx and tx */
out_8(&mb->maccc, 0);
static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct dbdma_regs *td = mp->tx_dma;
+ volatile struct dbdma_regs __iomem *td = mp->tx_dma;
volatile struct dbdma_cmd *cp, *np;
unsigned long flags;
int fill, next, len;
static void mace_set_multicast(struct net_device *dev)
{
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
+ volatile struct mace __iomem *mb = mp->mace;
int i, j;
u32 crc;
unsigned long flags;
static void mace_handle_misc_intrs(struct mace_data *mp, int intr)
{
- volatile struct mace *mb = mp->mace;
+ volatile struct mace __iomem *mb = mp->mace;
static int mace_babbles, mace_jabbers;
if (intr & MPCO)
{
struct net_device *dev = (struct net_device *) dev_id;
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
- volatile struct dbdma_regs *td = mp->tx_dma;
+ volatile struct mace __iomem *mb = mp->mace;
+ volatile struct dbdma_regs __iomem *td = mp->tx_dma;
volatile struct dbdma_cmd *cp;
int intr, fs, i, stat, x;
int xcount, dstat;
{
struct net_device *dev = (struct net_device *) data;
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct mace *mb = mp->mace;
- volatile struct dbdma_regs *td = mp->tx_dma;
- volatile struct dbdma_regs *rd = mp->rx_dma;
+ volatile struct mace __iomem *mb = mp->mace;
+ volatile struct dbdma_regs __iomem *td = mp->tx_dma;
+ volatile struct dbdma_regs __iomem *rd = mp->rx_dma;
volatile struct dbdma_cmd *cp;
unsigned long flags;
int i;
{
struct net_device *dev = (struct net_device *) dev_id;
struct mace_data *mp = (struct mace_data *) dev->priv;
- volatile struct dbdma_regs *rd = mp->rx_dma;
+ volatile struct dbdma_regs __iomem *rd = mp->rx_dma;
volatile struct dbdma_cmd *cp, *np;
int i, nb, stat, next;
struct sk_buff *skb;
static inline void bang_the_chip(struct myri_eth *mp)
{
- struct myri_shmem *shmem = mp->shmem;
+ struct myri_shmem __iomem *shmem = mp->shmem;
void __iomem *cregs = mp->cregs;
sbus_writel(1, &shmem->send);
static int myri_do_handshake(struct myri_eth *mp)
{
- struct myri_shmem *shmem = mp->shmem;
+ struct myri_shmem __iomem *shmem = mp->shmem;
void __iomem *cregs = mp->cregs;
- struct myri_channel *chan = &shmem->channel;
+ struct myri_channel __iomem *chan = &shmem->channel;
int tick = 0;
DET(("myri_do_handshake: "));
u32 csum = sbus_readl(&rxdack->csum);
int len = sbus_readl(&rxdack->myri_scatters[0].len);
int index = sbus_readl(&rxdack->ctx);
- struct myri_rxd __iomem *rxd = &rq->myri_rxd[rq->tail];
+ struct myri_rxd __iomem *rxd = &rq->myri_rxd[sbus_readl(&rq->tail)];
struct sk_buff *skb = mp->rx_skbs[index];
/* Ack it. */
struct net_device *dev = (struct net_device *) dev_id;
struct myri_eth *mp = (struct myri_eth *) dev->priv;
void __iomem *lregs = mp->lregs;
- struct myri_channel *chan = &mp->shmem->channel;
+ struct myri_channel __iomem *chan = &mp->shmem->channel;
unsigned long flags;
u32 status;
int handled = 0;
#ifdef MODULE
static struct net_device *dev_ni65;
-MODULE_PARM(irq, "i");
-MODULE_PARM(io, "i");
-MODULE_PARM(dma, "i");
+module_param(irq, int, 0);
+module_param(io, int, 0);
+module_param(dma, int, 0);
MODULE_PARM_DESC(irq, "ni6510 IRQ number (ignored for some cards)");
MODULE_PARM_DESC(io, "ni6510 I/O base address");
MODULE_PARM_DESC(dma, "ni6510 ISA DMA channel (ignored for some cards)");
#
menu "PCMCIA network device support"
- depends on NETDEVICES && HOTPLUG && PCMCIA!=n
+ depends on NETDEVICES && PCMCIA!=n
config NET_PCMCIA
bool "PCMCIA network device support"
u64 tda_err_alarm;
u64 pcc_err_reg;
+#define PCC_FB_ECC_DB_ERR vBIT(0xFF, 16, 8)
+
u64 pcc_err_mask;
u64 pcc_err_alarm;
#define RX_PA_CFG_IGNORE_FRM_ERR BIT(1)
#define RX_PA_CFG_IGNORE_SNAP_OUI BIT(2)
#define RX_PA_CFG_IGNORE_LLC_CTRL BIT(3)
+#define RX_PA_CFG_IGNORE_L2_ERR BIT(6)
u8 unused12[0x700 - 0x1D8];
static struct net_device *dev_seeq;
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "i");
-MODULE_PARM(irq, "i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
MODULE_PARM_DESC(io, "SEEQ 8005 I/O base address");
MODULE_PARM_DESC(irq, "SEEQ 8005 IRQ number");
static int shapers = 1;
#ifdef MODULE
-MODULE_PARM(shapers, "i");
+module_param(shapers, int, 0);
MODULE_PARM_DESC(shapers, "Traffic shaper: maximum number of shapers");
#else /* MODULE */
obj-$(CONFIG_SK98LIN) += sk98lin.o
sk98lin-objs := \
skge.o \
+ skethtool.o \
skdim.o \
skaddr.o \
skgehwt.o \
/* 64-bit hash values with all bits set. */
-SK_U16 OnesHash[4] = {0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF};
+static const SK_U16 OnesHash[4] = {0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF};
/* local variables ************************************************************/
--- /dev/null
+/******************************************************************************
+ *
+ * Name: skethtool.c
+ * Project: GEnesis, PCI Gigabit Ethernet Adapter
+ * Version: $Revision: 1.7 $
+ * Date: $Date: 2004/09/29 13:32:07 $
+ * Purpose: All functions regarding ethtool handling
+ *
+ ******************************************************************************/
+
+/******************************************************************************
+ *
+ * (C)Copyright 1998-2002 SysKonnect GmbH.
+ * (C)Copyright 2002-2004 Marvell.
+ *
+ * Driver for Marvell Yukon/2 chipset and SysKonnect Gigabit Ethernet
+ * Server Adapters.
+ *
+ * Author: Ralph Roesler (rroesler@syskonnect.de)
+ * Mirko Lindner (mlindner@syskonnect.de)
+ *
+ * Address all question to: linux@syskonnect.de
+ *
+ * The technical manual for the adapters is available from SysKonnect's
+ * web pages: www.syskonnect.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * The information in this file is provided "AS IS" without warranty.
+ *
+ *****************************************************************************/
+
+#include "h/skdrv1st.h"
+#include "h/skdrv2nd.h"
+#include "h/skversion.h"
+
+#include <linux/ethtool.h>
+#include <linux/timer.h>
+#include <linux/delay.h>
+
+/******************************************************************************
+ *
+ * Defines
+ *
+ *****************************************************************************/
+
+#define SUPP_COPPER_ALL (SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | \
+ SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | \
+ SUPPORTED_1000baseT_Half| SUPPORTED_1000baseT_Full| \
+ SUPPORTED_TP)
+
+#define ADV_COPPER_ALL (ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full | \
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full | \
+ ADVERTISED_1000baseT_Half| ADVERTISED_1000baseT_Full| \
+ ADVERTISED_TP)
+
+#define SUPP_FIBRE_ALL (SUPPORTED_1000baseT_Full | \
+ SUPPORTED_FIBRE | \
+ SUPPORTED_Autoneg)
+
+#define ADV_FIBRE_ALL (ADVERTISED_1000baseT_Full | \
+ ADVERTISED_FIBRE | \
+ ADVERTISED_Autoneg)
+
+
+/******************************************************************************
+ *
+ * Local Functions
+ *
+ *****************************************************************************/
+
+/*****************************************************************************
+ *
+ * getSettings - retrieves the current settings of the selected adapter
+ *
+ * Description:
+ * The current configuration of the selected adapter is returned.
+ * This configuration involves a)speed, b)duplex and c)autoneg plus
+ * a number of other variables.
+ *
+ * Returns: always 0
+ *
+ */
+static int getSettings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ const DEV_NET *pNet = netdev_priv(dev);
+ int port = pNet->PortNr;
+ const SK_AC *pAC = pNet->pAC;
+ const SK_GEPORT *pPort = &pAC->GIni.GP[port];
+
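+	/* Tables mapping the adapter's link mode and speed status
+	 * onto the corresponding ethtool duplex/autoneg/speed values. */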
+ static int DuplexAutoNegConfMap[9][3]= {
+ { -1 , -1 , -1 },
+ { 0 , -1 , -1 },
+ { SK_LMODE_HALF , DUPLEX_HALF, AUTONEG_DISABLE },
+ { SK_LMODE_FULL , DUPLEX_FULL, AUTONEG_DISABLE },
+ { SK_LMODE_AUTOHALF , DUPLEX_HALF, AUTONEG_ENABLE },
+ { SK_LMODE_AUTOFULL , DUPLEX_FULL, AUTONEG_ENABLE },
+ { SK_LMODE_AUTOBOTH , DUPLEX_FULL, AUTONEG_ENABLE },
+ { SK_LMODE_AUTOSENSE , -1 , -1 },
+ { SK_LMODE_INDETERMINATED, -1 , -1 }
+ };
+ static int SpeedConfMap[6][2] = {
+ { 0 , -1 },
+ { SK_LSPEED_AUTO , -1 },
+ { SK_LSPEED_10MBPS , SPEED_10 },
+ { SK_LSPEED_100MBPS , SPEED_100 },
+ { SK_LSPEED_1000MBPS , SPEED_1000 },
+ { SK_LSPEED_INDETERMINATED, -1 }
+ };
+ static int AdvSpeedMap[6][2] = {
+ { 0 , -1 },
+ { SK_LSPEED_AUTO , -1 },
+ { SK_LSPEED_10MBPS , ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full },
+ { SK_LSPEED_100MBPS , ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full },
+ { SK_LSPEED_1000MBPS , ADVERTISED_1000baseT_Half | ADVERTISED_1000baseT_Full},
+ { SK_LSPEED_INDETERMINATED, -1 }
+ };
+
+ ecmd->phy_address = port;
+ ecmd->speed = SpeedConfMap[pPort->PLinkSpeedUsed][1];
+ ecmd->duplex = DuplexAutoNegConfMap[pPort->PLinkModeStatus][1];
+ ecmd->autoneg = DuplexAutoNegConfMap[pPort->PLinkModeStatus][2];
+ ecmd->transceiver = XCVR_INTERNAL;
+
+ if (pAC->GIni.GICopperType) {
+ ecmd->port = PORT_TP;
+ ecmd->supported = (SUPP_COPPER_ALL|SUPPORTED_Autoneg);
+ if (pAC->GIni.GIGenesis) {
+ ecmd->supported &= ~(SUPPORTED_10baseT_Half);
+ ecmd->supported &= ~(SUPPORTED_10baseT_Full);
+ ecmd->supported &= ~(SUPPORTED_100baseT_Half);
+ ecmd->supported &= ~(SUPPORTED_100baseT_Full);
+ } else {
+ if (pAC->GIni.GIChipId == CHIP_ID_YUKON) {
+ ecmd->supported &= ~(SUPPORTED_1000baseT_Half);
+ }
+#ifdef CHIP_ID_YUKON_FE
+ if (pAC->GIni.GIChipId == CHIP_ID_YUKON_FE) {
+ ecmd->supported &= ~(SUPPORTED_1000baseT_Half);
+ ecmd->supported &= ~(SUPPORTED_1000baseT_Full);
+ }
+#endif
+ }
+ if (pAC->GIni.GP[0].PLinkSpeed != SK_LSPEED_AUTO) {
+ ecmd->advertising = AdvSpeedMap[pPort->PLinkSpeed][1];
+ if (pAC->GIni.GIChipId == CHIP_ID_YUKON) {
+ ecmd->advertising &= ~(SUPPORTED_1000baseT_Half);
+ }
+ } else {
+ ecmd->advertising = ecmd->supported;
+ }
+
+ if (ecmd->autoneg == AUTONEG_ENABLE)
+ ecmd->advertising |= ADVERTISED_Autoneg;
+ } else {
+ ecmd->port = PORT_FIBRE;
+ ecmd->supported = SUPP_FIBRE_ALL;
+ ecmd->advertising = ADV_FIBRE_ALL;
+ }
+ return 0;
+}
+
+/*
+ * MIB infrastructure uses instance value starting at 1
+ * based on board and port.
+ */
+static inline u32 pnmiInstance(const DEV_NET *pNet)
+{
+ return 1 + (pNet->pAC->RlmtNets == 2) + pNet->PortNr;
+}
+
+/*****************************************************************************
+ *
+ * setSettings - configures the settings of a selected adapter
+ *
+ * Description:
+ * Possible settings that may be altered are a)speed, b)duplex or
+ * c)autonegotiation.
+ *
+ * Returns:
+ * 0: everything fine, no error
+ * <0: the return value is the error code of the failure
+ */
+static int setSettings(struct net_device *dev, struct ethtool_cmd *ecmd)
+{
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+ u32 instance;
+ char buf[4];
+ int len = 1;
+
+ if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100
+ && ecmd->speed != SPEED_1000)
+ return -EINVAL;
+
+ if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
+ return -EINVAL;
+
+ if (ecmd->autoneg != AUTONEG_DISABLE && ecmd->autoneg != AUTONEG_ENABLE)
+ return -EINVAL;
+
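+	/* Translate the requested duplex/autoneg pair into the adapter's link mode value. */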
+ if (ecmd->autoneg == AUTONEG_DISABLE)
+ *buf = (ecmd->duplex == DUPLEX_FULL)
+ ? SK_LMODE_FULL : SK_LMODE_HALF;
+ else
+ *buf = (ecmd->duplex == DUPLEX_FULL)
+ ? SK_LMODE_AUTOFULL : SK_LMODE_AUTOHALF;
+
+ instance = pnmiInstance(pNet);
+ if (SkPnmiSetVar(pAC, pAC->IoBase, OID_SKGE_LINK_MODE,
+ &buf, &len, instance, pNet->NetNr) != SK_PNMI_ERR_OK)
+ return -EINVAL;
+
+ switch(ecmd->speed) {
+ case SPEED_1000:
+ *buf = SK_LSPEED_1000MBPS;
+ break;
+ case SPEED_100:
+ *buf = SK_LSPEED_100MBPS;
+ break;
+ case SPEED_10:
+ *buf = SK_LSPEED_10MBPS;
+ }
+
+ if (SkPnmiSetVar(pAC, pAC->IoBase, OID_SKGE_SPEED_MODE,
+ &buf, &len, instance, pNet->NetNr) != SK_PNMI_ERR_OK)
+ return -EINVAL;
+
+ return 0;
+}
+
+/*****************************************************************************
+ *
+ * getDriverInfo - returns generic driver and adapter information
+ *
+ * Description:
+ * Generic driver information is returned via this function, such as
+ *	the name of the driver, its version and firmware version.
+ * In addition to this, the location of the selected adapter is
+ * returned as a bus info string (e.g. '01:05.0').
+ *
+ * Returns: N/A
+ *
+ */
+static void getDriverInfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ const DEV_NET *pNet = netdev_priv(dev);
+ const SK_AC *pAC = pNet->pAC;
+ char vers[32];
+
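+	/* Build the version string with the PCI hardware revision appended. */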
+ snprintf(vers, sizeof(vers)-1, VER_STRING "(v%d.%d)",
+ (pAC->GIni.GIPciHwRev >> 4) & 0xf, pAC->GIni.GIPciHwRev & 0xf);
+
+ strlcpy(info->driver, DRIVER_FILE_NAME, sizeof(info->driver));
+ strcpy(info->version, vers);
+ strcpy(info->fw_version, "N/A");
+ strlcpy(info->bus_info, pAC->PciDev->slot_name, ETHTOOL_BUSINFO_LEN);
+}
+
+/*
+ * Ethtool statistics support.
+ */
+static const char StringsStats[][ETH_GSTRING_LEN] = {
+ "rx_packets", "tx_packets",
+ "rx_bytes", "tx_bytes",
+ "rx_errors", "tx_errors",
+ "rx_dropped", "tx_dropped",
+ "multicasts", "collisions",
+ "rx_length_errors", "rx_buffer_overflow_errors",
+ "rx_crc_errors", "rx_frame_errors",
+ "rx_too_short_errors", "rx_too_long_errors",
+ "rx_carrier_extension_errors", "rx_symbol_errors",
+ "rx_llc_mac_size_errors", "rx_carrier_errors",
+ "rx_jabber_errors", "rx_missed_errors",
+ "tx_abort_collision_errors", "tx_carrier_errors",
+ "tx_buffer_underrun_errors", "tx_heartbeat_errors",
+ "tx_window_errors",
+};
+
+static int getStatsCount(struct net_device *dev)
+{
+ return ARRAY_SIZE(StringsStats);
+}
+
+static void getStrings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ switch(stringset) {
+ case ETH_SS_STATS:
+ memcpy(data, *StringsStats, sizeof(StringsStats));
+ break;
+ }
+}
+
+static void getEthtoolStats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ const DEV_NET *pNet = netdev_priv(dev);
+ const SK_AC *pAC = pNet->pAC;
+ const SK_PNMI_STRUCT_DATA *pPnmiStruct = &pAC->PnmiStruct;
+
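+	/* The values below must be written in the same order as StringsStats[]. */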
+ *data++ = pPnmiStruct->Stat[0].StatRxOkCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxOkCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxOctetsOkCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxOctetsOkCts;
+ *data++ = pPnmiStruct->InErrorsCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxSingleCollisionCts;
+ *data++ = pPnmiStruct->RxNoBufCts;
+ *data++ = pPnmiStruct->TxNoBufCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxMulticastOkCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxSingleCollisionCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxRuntCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxFifoOverflowCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxFcsCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxFramingCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxShortsCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxTooLongCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxCextCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxSymbolCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxIRLengthCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxCarrierCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxJabberCts;
+ *data++ = pPnmiStruct->Stat[0].StatRxMissedCts;
+ *data++ = pAC->stats.tx_aborted_errors;
+ *data++ = pPnmiStruct->Stat[0].StatTxCarrierCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxFifoUnderrunCts;
+ *data++ = pPnmiStruct->Stat[0].StatTxCarrierCts;
+ *data++ = pAC->stats.tx_window_errors;
+}
+
+
+/*****************************************************************************
+ *
+ * toggleLeds - Changes the LED state of an adapter
+ *
+ * Description:
+ * This function changes the current state of all LEDs of an adapter so
+ * that it can be located by a user.
+ *
+ * Returns: N/A
+ *
+ */
+static void toggleLeds(DEV_NET *pNet, int on)
+{
+ SK_AC *pAC = pNet->pAC;
+ int port = pNet->PortNr;
+ void __iomem *io = pAC->IoBase;
+
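+	/* Genesis: drive the MAC/PHY LED registers directly; Yukon: use the Marvell PHY LED override register. */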
+ if (pAC->GIni.GIGenesis) {
+ SK_OUT8(io, MR_ADDR(port,LNK_LED_REG),
+ on ? SK_LNK_ON : SK_LNK_OFF);
+ SkGeYellowLED(pAC, io,
+ on ? (LED_ON >> 1) : (LED_OFF >> 1));
+ SkGeXmitLED(pAC, io, MR_ADDR(port,RX_LED_INI),
+ on ? SK_LED_TST : SK_LED_DIS);
+
+ if (pAC->GIni.GP[port].PhyType == SK_PHY_BCOM)
+ SkXmPhyWrite(pAC, io, port, PHY_BCOM_P_EXT_CTRL,
+ on ? PHY_B_PEC_LED_ON : PHY_B_PEC_LED_OFF);
+ else if (pAC->GIni.GP[port].PhyType == SK_PHY_LONE)
+ SkXmPhyWrite(pAC, io, port, PHY_LONE_LED_CFG,
+ on ? 0x0800 : PHY_L_LC_LEDT);
+ else
+ SkGeXmitLED(pAC, io, MR_ADDR(port,TX_LED_INI),
+ on ? SK_LED_TST : SK_LED_DIS);
+ } else {
+ const u16 YukLedOn = (PHY_M_LED_MO_DUP(MO_LED_ON) |
+ PHY_M_LED_MO_10(MO_LED_ON) |
+ PHY_M_LED_MO_100(MO_LED_ON) |
+ PHY_M_LED_MO_1000(MO_LED_ON) |
+ PHY_M_LED_MO_RX(MO_LED_ON));
+ const u16 YukLedOff = (PHY_M_LED_MO_DUP(MO_LED_OFF) |
+ PHY_M_LED_MO_10(MO_LED_OFF) |
+ PHY_M_LED_MO_100(MO_LED_OFF) |
+ PHY_M_LED_MO_1000(MO_LED_OFF) |
+ PHY_M_LED_MO_RX(MO_LED_OFF));
+
+
+ SkGmPhyWrite(pAC,io,port,PHY_MARV_LED_CTRL,0);
+ SkGmPhyWrite(pAC,io,port,PHY_MARV_LED_OVER,
+ on ? YukLedOn : YukLedOff);
+ }
+}
+
+/*****************************************************************************
+ *
+ *	SkGeBlinkTimer - toggles the LED state of an adapter
+ *
+ * Description:
+ *	This timer callback toggles all LEDs of the adapter so that it can
+ *	be located by a user, then reschedules itself. The blinking stops
+ *	when the locate NIC test ends and the timer is deleted.
+ *
+ * Returns: N/A
+ *
+ */
+void SkGeBlinkTimer(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *) data;
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+
+ toggleLeds(pNet, pAC->LedsOn);
+
+ pAC->LedsOn = !pAC->LedsOn;
+ mod_timer(&pAC->BlinkTimer, jiffies + HZ/4);
+}
+
+/*****************************************************************************
+ *
+ *	locateDevice - starts the locate NIC feature of the selected adapter
+ *
+ * Description:
+ *	This function is used if the user wants to locate a particular NIC.
+ * All LEDs are regularly switched on and off, so the NIC can easily
+ * be identified.
+ *
+ * Returns:
+ * ==0: everything fine, no error, locateNIC test was started
+ *	!=0: a locateNIC test is already running
+ *
+ */
+static int locateDevice(struct net_device *dev, u32 data)
+{
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+
+ if(!data || data > (u32)(MAX_SCHEDULE_TIMEOUT / HZ))
+ data = (u32)(MAX_SCHEDULE_TIMEOUT / HZ);
+
+ /* start blinking */
+ pAC->LedsOn = 0;
+ mod_timer(&pAC->BlinkTimer, jiffies);
+ msleep_interruptible(data * 1000);
+ del_timer_sync(&pAC->BlinkTimer);
+ toggleLeds(pNet, 0);
+
+ return 0;
+}
+
+/*****************************************************************************
+ *
+ * getPauseParams - retrieves the pause parameters
+ *
+ * Description:
+ * All current pause parameters of a selected adapter are placed
+ * in the passed ethtool_pauseparam structure and are returned.
+ *
+ * Returns: N/A
+ *
+ */
+static void getPauseParams(struct net_device *dev, struct ethtool_pauseparam *epause)
+{
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+ SK_GEPORT *pPort = &pAC->GIni.GP[pNet->PortNr];
+
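+	/* Derive the ethtool rx/tx pause flags from the adapter's flow control mode. */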
+ epause->rx_pause = (pPort->PFlowCtrlMode == SK_FLOW_MODE_SYMMETRIC) ||
+ (pPort->PFlowCtrlMode == SK_FLOW_MODE_SYM_OR_REM);
+
+ epause->tx_pause = epause->rx_pause || (pPort->PFlowCtrlMode == SK_FLOW_MODE_LOC_SEND);
+ epause->autoneg = epause->rx_pause || epause->tx_pause;
+}
+
+/*****************************************************************************
+ *
+ * setPauseParams - configures the pause parameters of an adapter
+ *
+ * Description:
+ * This function sets the Rx or Tx pause parameters
+ *
+ * Returns:
+ * ==0: everything fine, no error
+ * !=0: the return value is the error code of the failure
+ */
+static int setPauseParams(struct net_device *dev , struct ethtool_pauseparam *epause)
+{
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+ SK_GEPORT *pPort = &pAC->GIni.GP[pNet->PortNr];
+ u32 instance = pnmiInstance(pNet);
+ struct ethtool_pauseparam old;
+ u8 oldspeed = pPort->PLinkSpeedUsed;
+ char buf[4];
+ int len = 1;
+ int ret;
+
+ /*
+ ** we have to determine the current settings to see if
+ ** the operator requested any modification of the flow
+ ** control parameters...
+ */
+ getPauseParams(dev, &old);
+
+ /*
+ ** perform modifications regarding the changes
+ ** requested by the operator
+ */
+ if (epause->autoneg != old.autoneg)
+ *buf = epause->autoneg ? SK_FLOW_MODE_NONE : SK_FLOW_MODE_SYMMETRIC;
+ else {
+ if (epause->rx_pause && epause->tx_pause)
+ *buf = SK_FLOW_MODE_SYMMETRIC;
+ else if (epause->rx_pause && !epause->tx_pause)
+ *buf = SK_FLOW_MODE_SYM_OR_REM;
+ else if (!epause->rx_pause && epause->tx_pause)
+ *buf = SK_FLOW_MODE_LOC_SEND;
+ else
+ *buf = SK_FLOW_MODE_NONE;
+ }
+
+ ret = SkPnmiSetVar(pAC, pAC->IoBase, OID_SKGE_FLOWCTRL_MODE,
+ &buf, &len, instance, pNet->NetNr);
+
+ if (ret != SK_PNMI_ERR_OK) {
+ SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_CTRL,
+ ("ethtool (sk98lin): error changing rx/tx pause (%i)\n", ret));
+ goto err;
+ }
+
+ /*
+ ** It may be that autoneg has been disabled! Therefore
+ ** set the speed to the previously used value...
+ */
+ if (!epause->autoneg) {
+ len = 1;
+ ret = SkPnmiSetVar(pAC, pAC->IoBase, OID_SKGE_SPEED_MODE,
+ &oldspeed, &len, instance, pNet->NetNr);
+ if (ret != SK_PNMI_ERR_OK)
+ SK_DBG_MSG(NULL, SK_DBGMOD_DRV, SK_DBGCAT_CTRL,
+ ("ethtool (sk98lin): error setting speed (%i)\n", ret));
+ }
+ err:
+ return ret ? -EIO : 0;
+}
+
+struct ethtool_ops SkGeEthtoolOps = {
+ .get_settings = getSettings,
+ .set_settings = setSettings,
+ .get_drvinfo = getDriverInfo,
+ .get_strings = getStrings,
+ .get_stats_count = getStatsCount,
+ .get_ethtool_stats = getEthtoolStats,
+ .phys_id = locateDevice,
+ .get_pauseparam = getPauseParams,
+ .set_pauseparam = setPauseParams,
+};
#include "h/skdrv2nd.h"
#include "h/skversion.h"
-extern struct SK_NET_DEVICE *SkGeRootDev;
-static int sk_proc_print(void *writePtr, char *format, ...);
-static void sk_gen_browse(void *buffer);
-int len;
-
static int sk_seq_show(struct seq_file *seq, void *v);
static int sk_proc_open(struct inode *inode, struct file *file);
+
struct file_operations sk_proc_fops = {
.owner = THIS_MODULE,
.open = sk_proc_open,
.llseek = seq_lseek,
.release = single_release,
};
-struct net_device *currDev = NULL;
+
/*****************************************************************************
*
- * sk_gen_browse -generic print "summaries" entry
+ * sk_seq_show - show proc information of a particular adapter
*
* Description:
* This function fills the proc entry with statistic data about
- * the ethernet device.
+ *	the ethernet device, printing all statistic items one by one.
*
- * Returns: -
- *
+ * Returns: 0
+ *
*/
-static void sk_gen_browse(void *buffer)
+static int sk_seq_show(struct seq_file *seq, void *v)
{
- struct SK_NET_DEVICE *SkgeProcDev = SkGeRootDev;
- struct SK_NET_DEVICE *next;
- SK_PNMI_STRUCT_DATA *pPnmiStruct;
- SK_PNMI_STAT *pPnmiStat;
+ struct net_device *dev = seq->private;
+ DEV_NET *pNet = netdev_priv(dev);
+ SK_AC *pAC = pNet->pAC;
+ SK_PNMI_STRUCT_DATA *pPnmiStruct = &pAC->PnmiStruct;
unsigned long Flags;
unsigned int Size;
- DEV_NET *pNet;
- SK_AC *pAC;
char sens_msg[50];
- int MaxSecurityCount = 0;
int t;
int i;
- while (SkgeProcDev) {
- MaxSecurityCount++;
- if (MaxSecurityCount > 100) {
- printk("Max limit for sk_proc_read security counter!\n");
- return;
- }
- pNet = (DEV_NET*) SkgeProcDev->priv;
- pAC = pNet->pAC;
- next = pAC->Next;
- pPnmiStruct = &pAC->PnmiStruct;
- /* NetIndex in GetStruct is now required, zero is only dummy */
-
- for (t=pAC->GIni.GIMacsFound; t > 0; t--) {
- if ((pAC->GIni.GIMacsFound == 2) && pAC->RlmtNets == 1)
- t--;
+ /* NetIndex in GetStruct is now required, zero is only dummy */
+ for (t=pAC->GIni.GIMacsFound; t > 0; t--) {
+ if ((pAC->GIni.GIMacsFound == 2) && pAC->RlmtNets == 1)
+ t--;
- spin_lock_irqsave(&pAC->SlowPathLock, Flags);
- Size = SK_PNMI_STRUCT_SIZE;
+ spin_lock_irqsave(&pAC->SlowPathLock, Flags);
+ Size = SK_PNMI_STRUCT_SIZE;
#ifdef SK_DIAG_SUPPORT
- if (pAC->BoardLevel == SK_INIT_DATA) {
- SK_MEMCPY(&(pAC->PnmiStruct), &(pAC->PnmiBackup), sizeof(SK_PNMI_STRUCT_DATA));
- if (pAC->DiagModeActive == DIAG_NOTACTIVE) {
- pAC->Pnmi.DiagAttached = SK_DIAG_IDLE;
- }
- } else {
- SkPnmiGetStruct(pAC, pAC->IoBase, pPnmiStruct, &Size, t-1);
+ if (pAC->BoardLevel == SK_INIT_DATA) {
+ SK_MEMCPY(&(pAC->PnmiStruct), &(pAC->PnmiBackup), sizeof(SK_PNMI_STRUCT_DATA));
+ if (pAC->DiagModeActive == DIAG_NOTACTIVE) {
+ pAC->Pnmi.DiagAttached = SK_DIAG_IDLE;
}
+ } else {
+ SkPnmiGetStruct(pAC, pAC->IoBase, pPnmiStruct, &Size, t-1);
+ }
#else
- SkPnmiGetStruct(pAC, pAC->IoBase,
+ SkPnmiGetStruct(pAC, pAC->IoBase,
pPnmiStruct, &Size, t-1);
#endif
- spin_unlock_irqrestore(&pAC->SlowPathLock, Flags);
+ spin_unlock_irqrestore(&pAC->SlowPathLock, Flags);
- if (strcmp(pAC->dev[t-1]->name, currDev->name) == 0) {
- pPnmiStat = &pPnmiStruct->Stat[0];
- len = sk_proc_print(buffer,
- "\nDetailed statistic for device %s\n",
- pAC->dev[t-1]->name);
- len += sk_proc_print(buffer,
- "=======================================\n");
+ if (pAC->dev[t-1] == dev) {
+ SK_PNMI_STAT *pPnmiStat = &pPnmiStruct->Stat[0];
+
+ seq_printf(seq, "\nDetailed statistic for device %s\n",
+ pAC->dev[t-1]->name);
+ seq_printf(seq, "=======================================\n");
- /* Board statistics */
- len += sk_proc_print(buffer,
- "\nBoard statistics\n\n");
- len += sk_proc_print(buffer,
- "Active Port %c\n",
- 'A' + pAC->Rlmt.Net[t-1].Port[pAC->Rlmt.
- Net[t-1].PrefPort]->PortNumber);
- len += sk_proc_print(buffer,
- "Preferred Port %c\n",
- 'A' + pAC->Rlmt.Net[t-1].Port[pAC->Rlmt.
- Net[t-1].PrefPort]->PortNumber);
+ /* Board statistics */
+ seq_printf(seq, "\nBoard statistics\n\n");
+ seq_printf(seq, "Active Port %c\n",
+ 'A' + pAC->Rlmt.Net[t-1].Port[pAC->Rlmt.
+ Net[t-1].PrefPort]->PortNumber);
+ seq_printf(seq, "Preferred Port %c\n",
+ 'A' + pAC->Rlmt.Net[t-1].Port[pAC->Rlmt.
+ Net[t-1].PrefPort]->PortNumber);
- len += sk_proc_print(buffer,
- "Bus speed (MHz) %d\n",
- pPnmiStruct->BusSpeed);
+ seq_printf(seq, "Bus speed (MHz) %d\n",
+ pPnmiStruct->BusSpeed);
- len += sk_proc_print(buffer,
- "Bus width (Bit) %d\n",
- pPnmiStruct->BusWidth);
- len += sk_proc_print(buffer,
- "Driver version %s\n",
- VER_STRING);
- len += sk_proc_print(buffer,
- "Hardware revision v%d.%d\n",
- (pAC->GIni.GIPciHwRev >> 4) & 0x0F,
- pAC->GIni.GIPciHwRev & 0x0F);
+ seq_printf(seq, "Bus width (Bit) %d\n",
+ pPnmiStruct->BusWidth);
+ seq_printf(seq, "Driver version %s\n",
+ VER_STRING);
+ seq_printf(seq, "Hardware revision v%d.%d\n",
+ (pAC->GIni.GIPciHwRev >> 4) & 0x0F,
+ pAC->GIni.GIPciHwRev & 0x0F);
- /* Print sensor informations */
- for (i=0; i < pAC->I2c.MaxSens; i ++) {
- /* Check type */
- switch (pAC->I2c.SenTable[i].SenType) {
- case 1:
- strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
- strcat(sens_msg, " (C)");
- len += sk_proc_print(buffer,
- "%-25s %d.%02d\n",
- sens_msg,
- pAC->I2c.SenTable[i].SenValue / 10,
- pAC->I2c.SenTable[i].SenValue % 10);
+			/* Print sensor information */
+ for (i=0; i < pAC->I2c.MaxSens; i ++) {
+ /* Check type */
+ switch (pAC->I2c.SenTable[i].SenType) {
+ case 1:
+ strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
+ strcat(sens_msg, " (C)");
+ seq_printf(seq, "%-25s %d.%02d\n",
+ sens_msg,
+ pAC->I2c.SenTable[i].SenValue / 10,
+ pAC->I2c.SenTable[i].SenValue % 10);
- strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
- strcat(sens_msg, " (F)");
- len += sk_proc_print(buffer,
- "%-25s %d.%02d\n",
- sens_msg,
- ((((pAC->I2c.SenTable[i].SenValue)
- *10)*9)/5 + 3200)/100,
- ((((pAC->I2c.SenTable[i].SenValue)
- *10)*9)/5 + 3200) % 10);
- break;
- case 2:
- strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
- strcat(sens_msg, " (V)");
- len += sk_proc_print(buffer,
- "%-25s %d.%03d\n",
- sens_msg,
- pAC->I2c.SenTable[i].SenValue / 1000,
- pAC->I2c.SenTable[i].SenValue % 1000);
- break;
- case 3:
- strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
- strcat(sens_msg, " (rpm)");
- len += sk_proc_print(buffer,
- "%-25s %d\n",
- sens_msg,
- pAC->I2c.SenTable[i].SenValue);
- break;
- default:
- break;
- }
+ strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
+ strcat(sens_msg, " (F)");
+ seq_printf(seq, "%-25s %d.%02d\n",
+ sens_msg,
+ ((((pAC->I2c.SenTable[i].SenValue)
+ *10)*9)/5 + 3200)/100,
+ ((((pAC->I2c.SenTable[i].SenValue)
+ *10)*9)/5 + 3200) % 10);
+ break;
+ case 2:
+ strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
+ strcat(sens_msg, " (V)");
+ seq_printf(seq, "%-25s %d.%03d\n",
+ sens_msg,
+ pAC->I2c.SenTable[i].SenValue / 1000,
+ pAC->I2c.SenTable[i].SenValue % 1000);
+ break;
+ case 3:
+ strcpy(sens_msg, pAC->I2c.SenTable[i].SenDesc);
+ strcat(sens_msg, " (rpm)");
+ seq_printf(seq, "%-25s %d\n",
+ sens_msg,
+ pAC->I2c.SenTable[i].SenValue);
+ break;
+ default:
+ break;
}
+ }
- /*Receive statistics */
- len += sk_proc_print(buffer,
- "\nReceive statistics\n\n");
+ /*Receive statistics */
+ seq_printf(seq, "\nReceive statistics\n\n");
- len += sk_proc_print(buffer,
- "Received bytes %Lu\n",
- (unsigned long long) pPnmiStat->StatRxOctetsOkCts);
- len += sk_proc_print(buffer,
- "Received packets %Lu\n",
- (unsigned long long) pPnmiStat->StatRxOkCts);
+ seq_printf(seq, "Received bytes %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxOctetsOkCts);
+ seq_printf(seq, "Received packets %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxOkCts);
#if 0
- if (pAC->GIni.GP[0].PhyType == SK_PHY_XMAC &&
- pAC->HWRevision < 12) {
- pPnmiStruct->InErrorsCts = pPnmiStruct->InErrorsCts -
- pPnmiStat->StatRxShortsCts;
- pPnmiStat->StatRxShortsCts = 0;
- }
+ if (pAC->GIni.GP[0].PhyType == SK_PHY_XMAC &&
+ pAC->HWRevision < 12) {
+ pPnmiStruct->InErrorsCts = pPnmiStruct->InErrorsCts -
+ pPnmiStat->StatRxShortsCts;
+ pPnmiStat->StatRxShortsCts = 0;
+ }
#endif
- if (pNet->Mtu > 1500)
- pPnmiStruct->InErrorsCts = pPnmiStruct->InErrorsCts -
- pPnmiStat->StatRxTooLongCts;
+ if (dev->mtu > 1500)
+ pPnmiStruct->InErrorsCts = pPnmiStruct->InErrorsCts -
+ pPnmiStat->StatRxTooLongCts;
- len += sk_proc_print(buffer,
- "Receive errors %Lu\n",
- (unsigned long long) pPnmiStruct->InErrorsCts);
- len += sk_proc_print(buffer,
- "Receive dropped %Lu\n",
- (unsigned long long) pPnmiStruct->RxNoBufCts);
- len += sk_proc_print(buffer,
- "Received multicast %Lu\n",
- (unsigned long long) pPnmiStat->StatRxMulticastOkCts);
- len += sk_proc_print(buffer,
- "Receive error types\n");
- len += sk_proc_print(buffer,
- " length %Lu\n",
- (unsigned long long) pPnmiStat->StatRxRuntCts);
- len += sk_proc_print(buffer,
- " buffer overflow %Lu\n",
- (unsigned long long) pPnmiStat->StatRxFifoOverflowCts);
- len += sk_proc_print(buffer,
- " bad crc %Lu\n",
- (unsigned long long) pPnmiStat->StatRxFcsCts);
- len += sk_proc_print(buffer,
- " framing %Lu\n",
- (unsigned long long) pPnmiStat->StatRxFramingCts);
- len += sk_proc_print(buffer,
- " missed frames %Lu\n",
- (unsigned long long) pPnmiStat->StatRxMissedCts);
+ seq_printf(seq, "Receive errors %Lu\n",
+ (unsigned long long) pPnmiStruct->InErrorsCts);
+ seq_printf(seq, "Receive dropped %Lu\n",
+ (unsigned long long) pPnmiStruct->RxNoBufCts);
+ seq_printf(seq, "Received multicast %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxMulticastOkCts);
+ seq_printf(seq, "Receive error types\n");
+ seq_printf(seq, " length %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxRuntCts);
+ seq_printf(seq, " buffer overflow %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxFifoOverflowCts);
+ seq_printf(seq, " bad crc %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxFcsCts);
+ seq_printf(seq, " framing %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxFramingCts);
+ seq_printf(seq, " missed frames %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxMissedCts);
- if (pNet->Mtu > 1500)
- pPnmiStat->StatRxTooLongCts = 0;
+ if (dev->mtu > 1500)
+ pPnmiStat->StatRxTooLongCts = 0;
- len += sk_proc_print(buffer,
- " too long %Lu\n",
- (unsigned long long) pPnmiStat->StatRxTooLongCts);
- len += sk_proc_print(buffer,
- " carrier extension %Lu\n",
- (unsigned long long) pPnmiStat->StatRxCextCts);
- len += sk_proc_print(buffer,
- " too short %Lu\n",
- (unsigned long long) pPnmiStat->StatRxShortsCts);
- len += sk_proc_print(buffer,
- " symbol %Lu\n",
- (unsigned long long) pPnmiStat->StatRxSymbolCts);
- len += sk_proc_print(buffer,
- " LLC MAC size %Lu\n",
- (unsigned long long) pPnmiStat->StatRxIRLengthCts);
- len += sk_proc_print(buffer,
- " carrier event %Lu\n",
- (unsigned long long) pPnmiStat->StatRxCarrierCts);
- len += sk_proc_print(buffer,
- " jabber %Lu\n",
- (unsigned long long) pPnmiStat->StatRxJabberCts);
+ seq_printf(seq, " too long %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxTooLongCts);
+ seq_printf(seq, " carrier extension %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxCextCts);
+ seq_printf(seq, " too short %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxShortsCts);
+ seq_printf(seq, " symbol %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxSymbolCts);
+ seq_printf(seq, " LLC MAC size %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxIRLengthCts);
+ seq_printf(seq, " carrier event %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxCarrierCts);
+ seq_printf(seq, " jabber %Lu\n",
+ (unsigned long long) pPnmiStat->StatRxJabberCts);
- /*Transmit statistics */
- len += sk_proc_print(buffer,
- "\nTransmit statistics\n\n");
+ /*Transmit statistics */
+ seq_printf(seq, "\nTransmit statistics\n\n");
- len += sk_proc_print(buffer,
- "Transmited bytes %Lu\n",
- (unsigned long long) pPnmiStat->StatTxOctetsOkCts);
- len += sk_proc_print(buffer,
- "Transmited packets %Lu\n",
- (unsigned long long) pPnmiStat->StatTxOkCts);
- len += sk_proc_print(buffer,
- "Transmit errors %Lu\n",
- (unsigned long long) pPnmiStat->StatTxSingleCollisionCts);
- len += sk_proc_print(buffer,
- "Transmit dropped %Lu\n",
- (unsigned long long) pPnmiStruct->TxNoBufCts);
- len += sk_proc_print(buffer,
- "Transmit collisions %Lu\n",
- (unsigned long long) pPnmiStat->StatTxSingleCollisionCts);
- len += sk_proc_print(buffer,
- "Transmit error types\n");
- len += sk_proc_print(buffer,
- " excessive collision %ld\n",
- pAC->stats.tx_aborted_errors);
- len += sk_proc_print(buffer,
- " carrier %Lu\n",
- (unsigned long long) pPnmiStat->StatTxCarrierCts);
- len += sk_proc_print(buffer,
- " fifo underrun %Lu\n",
- (unsigned long long) pPnmiStat->StatTxFifoUnderrunCts);
- len += sk_proc_print(buffer,
- " heartbeat %Lu\n",
- (unsigned long long) pPnmiStat->StatTxCarrierCts);
- len += sk_proc_print(buffer,
- " window %ld\n",
- pAC->stats.tx_window_errors);
+ seq_printf(seq, "Transmited bytes %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxOctetsOkCts);
+ seq_printf(seq, "Transmited packets %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxOkCts);
+ seq_printf(seq, "Transmit errors %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxSingleCollisionCts);
+ seq_printf(seq, "Transmit dropped %Lu\n",
+ (unsigned long long) pPnmiStruct->TxNoBufCts);
+ seq_printf(seq, "Transmit collisions %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxSingleCollisionCts);
+ seq_printf(seq, "Transmit error types\n");
+ seq_printf(seq, " excessive collision %ld\n",
+ pAC->stats.tx_aborted_errors);
+ seq_printf(seq, " carrier %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxCarrierCts);
+ seq_printf(seq, " fifo underrun %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxFifoUnderrunCts);
+ seq_printf(seq, " heartbeat %Lu\n",
+ (unsigned long long) pPnmiStat->StatTxCarrierCts);
+ seq_printf(seq, " window %ld\n",
+ pAC->stats.tx_window_errors);
- } /* if (strcmp(pACname, currDeviceName) == 0) */
}
- SkgeProcDev = next;
}
-}
-
-/*****************************************************************************
- *
- * sk_proc_print -generic line print
- *
- * Description:
- * This function fills the proc entry with statistic data about
- * the ethernet device.
- *
- * Returns: number of bytes written
- *
- */
-static int sk_proc_print(void *writePtr, char *format, ...)
-{
-#define MAX_LEN_SINGLE_LINE 256
- char str[MAX_LEN_SINGLE_LINE];
- va_list a_start;
- int lenght = 0;
-
- struct seq_file *seq = (struct seq_file *) writePtr;
-
- SK_MEMSET(str, 0, MAX_LEN_SINGLE_LINE);
-
- va_start(a_start, format);
- vsprintf(str, format, a_start);
- va_end(a_start);
-
- lenght = strlen(str);
-
- seq_printf(seq, str);
- return lenght;
-}
-
-/*****************************************************************************
- *
- * sk_seq_show - show proc information of a particular adapter
- *
- * Description:
- * This function fills the proc entry with statistic data about
- * the ethernet device. It invokes the generic sk_gen_browse() to
- * print out all items one per one.
- *
- * Returns: number of bytes written
- *
- */
-static int sk_seq_show(struct seq_file *seq, void *v)
-{
- void *castedBuffer = (void *) seq;
- currDev = seq->private;
- sk_gen_browse(castedBuffer);
- return 0;
+ return 0;
}
/*****************************************************************************
/* static variables */
static SK_RAM *board; /* pointer to our memory mapped board components */
-static spinlock_t SK_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(SK_lock);
/* Macros */
#define _FPLUS_
#ifndef HW_PTR
-#ifdef MEM_MAPPED_IO
-#define HW_PTR u_long
-#else
-#define HW_PTR u_short
-#endif
+#define HW_PTR void __iomem *
#endif
/*
#endif
#ifndef HW_PTR
-#ifdef MEM_MAPPED_IO
-#define HW_PTR u_long
-#else
-#define HW_PTR u_short
-#endif
+#define HW_PTR void __iomem *
#endif
#ifdef MULT_OEM
#define _far
#endif
-#ifndef MEM_MAPPED_IO // "normal" IO
-#define inp(p) inb(p)
-#define inpw(p) inw(p)
-#define inpd(p) inl(p)
-#define outp(p,c) outb(c,p)
-#define outpw(p,s) outw(s,p)
-#define outpd(p,l) outl(l,p)
-#else // memory mapped io
-#define inp(a) readb(a)
-#define inpw(a) readw(a)
-#define inpd(a) readl(a)
-#define outp(a,v) writeb(v, a)
-#define outpw(a,v) writew(v, a)
-#define outpd(a,v) writel(v, a)
-#endif
+#define inp(p) ioread8(p)
+#define inpw(p) ioread16(p)
+#define inpd(p) ioread32(p)
+#define outp(p,c) iowrite8(c,p)
+#define outpw(p,s) iowrite16(s,p)
+#define outpd(p,l) iowrite32(l,p)
#endif /* _TYPES_ */
return IRQ_HANDLED;
}
+#ifdef CONFIG_NET_POLL_CONTROLLER
+/*
+ * Polling receive - used by netconsole and other diagnostic tools
+ * to allow network i/o with interrupts disabled.
+ */
+static void smc_poll_controller(struct net_device *dev)
+{
+ disable_irq(dev->irq);
+ smc_interrupt(dev->irq, dev, NULL);
+ enable_irq(dev->irq);
+}
+#endif
+
/* Our watchdog timed out. Called by the networking layer */
static void smc_timeout(struct net_device *dev)
{
dev->get_stats = smc_query_statistics;
dev->set_multicast_list = smc_set_multicast_list;
dev->ethtool_ops = &smc_ethtool_ops;
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = smc_poll_controller;
+#endif
tasklet_init(&lp->tx_task, smc_hardware_send_pkt, (unsigned long)dev);
INIT_WORK(&lp->phy_configure, smc_phy_configure, dev);
struct lance_private {
void __iomem *lregs; /* Lance RAP/RDP regs. */
void __iomem *dregs; /* DMA controller regs. */
- struct lance_init_block *init_block;
+ struct lance_init_block __iomem *init_block_iomem;
+ struct lance_init_block *init_block_mem;
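+	/* Only one of the two is used: _iomem for PIO buffers, _mem for DVMA. */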
spinlock_t lock;
static void lance_init_ring_dvma(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block *ib = lp->init_block_mem;
dma_addr_t aib = lp->init_block_dvma;
__u32 leptr;
int i;
static void lance_init_ring_pio(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
u32 leptr;
int i;
static void lance_rx_dvma(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block *ib = lp->init_block_mem;
struct lance_rx_desc *rd;
u8 bits;
int len, entry = lp->rx_new;
static void lance_tx_dvma(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block *ib = lp->init_block_mem;
int i, j;
spin_lock(&lp->lock);
static void lance_rx_pio(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
- struct lance_rx_desc *rd;
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
+ struct lance_rx_desc __iomem *rd;
unsigned char bits;
int len, entry;
struct sk_buff *skb;
static void lance_tx_pio(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
int i, j;
spin_lock(&lp->lock);
j = lp->tx_old;
for (i = j; i != lp->tx_new; i = j) {
- struct lance_tx_desc *td = &ib->btx_ring [i];
+ struct lance_tx_desc __iomem *td = &ib->btx_ring [i];
u8 bits = sbus_readb(&td->tmd1_bits);
/* If we hit a packet not owned by us, stop */
static void build_fake_packet(struct lance_private *lp)
{
struct net_device *dev = lp->dev;
- struct lance_init_block *ib = lp->init_block;
- u16 *packet;
- struct ethhdr *eth;
int i, entry;
entry = lp->tx_new & TX_RING_MOD_MASK;
- packet = (u16 *) &(ib->tx_buf[entry][0]);
- eth = (struct ethhdr *) packet;
if (lp->pio_buffer) {
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
+ u16 __iomem *packet = (u16 __iomem *) &(ib->tx_buf[entry][0]);
+ struct ethhdr __iomem *eth = (struct ethhdr __iomem *) packet;
for (i = 0; i < (ETH_ZLEN / sizeof(u16)); i++)
sbus_writew(0, &packet[i]);
for (i = 0; i < 6; i++) {
sbus_writew(0, &ib->btx_ring[entry].misc);
sbus_writeb(LE_T1_POK|LE_T1_OWN, &ib->btx_ring[entry].tmd1_bits);
} else {
+ struct lance_init_block *ib = lp->init_block_mem;
+ u16 *packet = (u16 *) &(ib->tx_buf[entry][0]);
+ struct ethhdr *eth = (struct ethhdr *) packet;
memset(packet, 0, ETH_ZLEN);
for (i = 0; i < 6; i++) {
eth->h_dest[i] = dev->dev_addr[i];
static int lance_open(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
int status = 0;
last_dev = dev;
* BTW it is common bug in all lance drivers! --ANK
*/
if (lp->pio_buffer) {
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
sbus_writew(0, &ib->mode);
sbus_writel(0, &ib->filter[0]);
sbus_writel(0, &ib->filter[1]);
} else {
+ struct lance_init_block *ib = lp->init_block_mem;
ib->mode = 0;
ib->filter [0] = 0;
ib->filter [1] = 0;
static int lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
int entry, skblen, len;
skblen = skb->len;
entry = lp->tx_new & TX_RING_MOD_MASK;
if (lp->pio_buffer) {
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
sbus_writew((-len) | 0xf000, &ib->btx_ring[entry].length);
sbus_writew(0, &ib->btx_ring[entry].misc);
lance_piocopy_from_skb(&ib->tx_buf[entry][0], skb->data, skblen);
lance_piozero(&ib->tx_buf[entry][skblen], len - skblen);
sbus_writeb(LE_T1_POK | LE_T1_OWN, &ib->btx_ring[entry].tmd1_bits);
} else {
+ struct lance_init_block *ib = lp->init_block_mem;
ib->btx_ring [entry].length = (-len) | 0xf000;
ib->btx_ring [entry].misc = 0;
memcpy((char *)&ib->tx_buf [entry][0], skb->data, skblen);
static void lance_load_multicast(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
- u16 *mcast_table = (u16 *) &ib->filter;
struct dev_mc_list *dmi = dev->mc_list;
char *addrs;
int i;
u32 crc;
+ u32 val;
/* set all multicast bits */
- if (dev->flags & IFF_ALLMULTI) {
- if (lp->pio_buffer) {
- sbus_writel(0xffffffff, &ib->filter[0]);
- sbus_writel(0xffffffff, &ib->filter[1]);
- } else {
- ib->filter [0] = 0xffffffff;
- ib->filter [1] = 0xffffffff;
- }
- return;
- }
- /* clear the multicast filter */
+ if (dev->flags & IFF_ALLMULTI)
+ val = ~0;
+ else
+ val = 0;
+
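+	/* Preset the filter: all ones for ALLMULTI, zero before adding addresses. */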
if (lp->pio_buffer) {
- sbus_writel(0, &ib->filter[0]);
- sbus_writel(0, &ib->filter[1]);
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
+ sbus_writel(val, &ib->filter[0]);
+ sbus_writel(val, &ib->filter[1]);
} else {
- ib->filter [0] = 0;
- ib->filter [1] = 0;
+ struct lance_init_block *ib = lp->init_block_mem;
+ ib->filter [0] = val;
+ ib->filter [1] = val;
}
+ if (dev->flags & IFF_ALLMULTI)
+ return;
+
/* Add addresses */
for (i = 0; i < dev->mc_count; i++) {
addrs = dmi->dmi_addr;
crc = ether_crc_le(6, addrs);
crc = crc >> 26;
if (lp->pio_buffer) {
+ struct lance_init_block __iomem *ib = lp->init_block_iomem;
+ u16 __iomem *mcast_table = (u16 __iomem *) &ib->filter;
u16 tmp = sbus_readw(&mcast_table[crc>>4]);
tmp |= 1 << (crc & 0xf);
sbus_writew(tmp, &mcast_table[crc>>4]);
} else {
+ struct lance_init_block *ib = lp->init_block_mem;
+ u16 *mcast_table = (u16 *) &ib->filter;
mcast_table [crc >> 4] |= 1 << (crc & 0xf);
}
}
static void lance_set_multicast(struct net_device *dev)
{
struct lance_private *lp = netdev_priv(dev);
- struct lance_init_block *ib = lp->init_block;
+ struct lance_init_block *ib_mem = lp->init_block_mem;
+ struct lance_init_block __iomem *ib_iomem = lp->init_block_iomem;
u16 mode;
if (!netif_running(dev))
lp->init_ring(dev);
if (lp->pio_buffer)
- mode = sbus_readw(&ib->mode);
+ mode = sbus_readw(&ib_iomem->mode);
else
- mode = ib->mode;
+ mode = ib_mem->mode;
if (dev->flags & IFF_PROMISC) {
mode |= LE_MO_PROM;
if (lp->pio_buffer)
- sbus_writew(mode, &ib->mode);
+ sbus_writew(mode, &ib_iomem->mode);
else
- ib->mode = mode;
+ ib_mem->mode = mode;
} else {
mode &= ~LE_MO_PROM;
if (lp->pio_buffer)
- sbus_writew(mode, &ib->mode);
+ sbus_writew(mode, &ib_iomem->mode);
else
- ib->mode = mode;
+ ib_mem->mode = mode;
lance_load_multicast(dev);
}
load_csrs(lp);
{
if (lp->lregs)
sbus_iounmap(lp->lregs, LANCE_REG_SIZE);
- if (lp->init_block != NULL) {
- if (lp->pio_buffer) {
- sbus_iounmap(lp->init_block,
- sizeof(struct lance_init_block));
- } else {
- sbus_free_consistent(lp->sdev,
- sizeof(struct lance_init_block),
- lp->init_block,
- lp->init_block_dvma);
- }
+ if (lp->init_block_iomem) {
+ sbus_iounmap(lp->init_block_iomem,
+ sizeof(struct lance_init_block));
+ } else if (lp->init_block_mem) {
+ sbus_free_consistent(lp->sdev,
+ sizeof(struct lance_init_block),
+ lp->init_block_mem,
+ lp->init_block_dvma);
}
}
return -ENOMEM;
lp = netdev_priv(dev);
+ memset(lp, 0, sizeof(*lp));
if (sparc_lance_debug && version_printed++ == 0)
printk (KERN_INFO "%s", version);
/* Get the IO region */
lp->lregs = sbus_ioremap(&sdev->resource[0], 0,
LANCE_REG_SIZE, lancestr);
- if (lp->lregs == 0UL) {
+ if (!lp->lregs) {
printk(KERN_ERR "SunLance: Cannot map registers.\n");
goto fail;
}
lp->sdev = sdev;
if (lebuffer) {
- lp->init_block =
+ /* sanity check */
+ if (lebuffer->resource[0].start & 7) {
+ printk(KERN_ERR "SunLance: ERROR: Rx and Tx rings not on even boundary.\n");
+ goto fail;
+ }
+ lp->init_block_iomem =
sbus_ioremap(&lebuffer->resource[0], 0,
sizeof(struct lance_init_block), "lebuffer");
- if (lp->init_block == NULL) {
+ if (!lp->init_block_iomem) {
printk(KERN_ERR "SunLance: Cannot map PIO buffer.\n");
goto fail;
}
lp->rx = lance_rx_pio;
lp->tx = lance_tx_pio;
} else {
- lp->init_block =
+ lp->init_block_mem =
sbus_alloc_consistent(sdev, sizeof(struct lance_init_block),
&lp->init_block_dvma);
- if (lp->init_block == NULL ||
- lp->init_block_dvma == 0) {
+ if (!lp->init_block_mem || lp->init_block_dvma == 0) {
printk(KERN_ERR "SunLance: Cannot allocate consistent DMA memory.\n");
goto fail;
}
udelay(200);
sbus_writel(csr & ~DMA_RST_ENET, lp->dregs + DMA_CSR);
} else
- lp->dregs = 0;
-
- /* This should never happen. */
- if ((unsigned long)(lp->init_block->brx_ring) & 0x07) {
- printk(KERN_ERR "SunLance: ERROR: Rx and Tx rings not on even boundary.\n");
- goto fail;
- }
+ lp->dregs = NULL;
lp->dev = dev;
SET_MODULE_OWNER(dev);
return 0;
fail:
- if (lp != NULL)
- lance_free_hwresources(lp);
+ lance_free_hwresources(lp);
free_netdev(dev);
return -ENODEV;
}
memset(&sdev, 0, sizeof(sdev));
sdev.reg_addrs[0].phys_addr = sun4_eth_physaddr;
sdev.irqs[0] = 6;
- return sparc_lance_init(&sdev, 0, 0);
+ return sparc_lance_init(&sdev, NULL, NULL);
}
return -ENODEV;
}
/* Routines to access internal registers. */
-inline u8 TLan_DioRead8(u16 base_addr, u16 internal_addr)
+static inline u8 TLan_DioRead8(u16 base_addr, u16 internal_addr)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
return (inb((base_addr + TLAN_DIO_DATA) + (internal_addr & 0x3)));
-inline u16 TLan_DioRead16(u16 base_addr, u16 internal_addr)
+static inline u16 TLan_DioRead16(u16 base_addr, u16 internal_addr)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
return (inw((base_addr + TLAN_DIO_DATA) + (internal_addr & 0x2)));
-inline u32 TLan_DioRead32(u16 base_addr, u16 internal_addr)
+static inline u32 TLan_DioRead32(u16 base_addr, u16 internal_addr)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
return (inl(base_addr + TLAN_DIO_DATA));
-inline void TLan_DioWrite8(u16 base_addr, u16 internal_addr, u8 data)
+static inline void TLan_DioWrite8(u16 base_addr, u16 internal_addr, u8 data)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
outb(data, base_addr + TLAN_DIO_DATA + (internal_addr & 0x3));
-inline void TLan_DioWrite16(u16 base_addr, u16 internal_addr, u16 data)
+static inline void TLan_DioWrite16(u16 base_addr, u16 internal_addr, u16 data)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
outw(data, base_addr + TLAN_DIO_DATA + (internal_addr & 0x2));
-inline void TLan_DioWrite32(u16 base_addr, u16 internal_addr, u32 data)
+static inline void TLan_DioWrite32(u16 base_addr, u16 internal_addr, u32 data)
{
outw(internal_addr, base_addr + TLAN_DIO_ADR);
outl(data, base_addr + TLAN_DIO_DATA + (internal_addr & 0x2));
}
-
-
-#if 0
-inline void TLan_ClearBit(u8 bit, u16 port)
-{
- outb_p(inb_p(port) & ~bit, port);
-}
-
-
-
-
-inline int TLan_GetBit(u8 bit, u16 port)
-{
- return ((int) (inb_p(port) & bit));
-}
-
-
-
-
-inline void TLan_SetBit(u8 bit, u16 port)
-{
- outb_p(inb_p(port) | bit, port);
-}
-#endif
-
#define TLan_ClearBit( bit, port ) outb_p(inb_p(port) & ~bit, port)
#define TLan_GetBit( bit, port ) ((int) (inb_p(port) & bit))
#define TLan_SetBit( bit, port ) outb_p(inb_p(port) | bit, port)
-#ifdef I_LIKE_A_FAST_HASH_FUNCTION
-/* given 6 bytes, view them as 8 6-bit numbers and return the XOR of those */
-/* the code below is about seven times as fast as the original code */
-inline u32 TLan_HashFunc( u8 *a )
+/*
+ * given 6 bytes, view them as 8 6-bit numbers and return the XOR of those
+ * the code below is about seven times as fast as the original code
+ *
+ * The original code was:
+ *
+ * u32 xor( u32 a, u32 b ) { return ( ( a && ! b ) || ( ! a && b ) ); }
+ *
+ * #define XOR8( a, b, c, d, e, f, g, h ) \
+ * xor( a, xor( b, xor( c, xor( d, xor( e, xor( f, xor( g, h ) ) ) ) ) ) )
+ * #define DA( a, bit ) ( ( (u8) a[bit/8] ) & ( (u8) ( 1 << bit%8 ) ) )
+ *
+ * hash = XOR8( DA(a,0), DA(a, 6), DA(a,12), DA(a,18), DA(a,24), DA(a,30), DA(a,36), DA(a,42) );
+ * hash |= XOR8( DA(a,1), DA(a, 7), DA(a,13), DA(a,19), DA(a,25), DA(a,31), DA(a,37), DA(a,43) ) << 1;
+ * hash |= XOR8( DA(a,2), DA(a, 8), DA(a,14), DA(a,20), DA(a,26), DA(a,32), DA(a,38), DA(a,44) ) << 2;
+ * hash |= XOR8( DA(a,3), DA(a, 9), DA(a,15), DA(a,21), DA(a,27), DA(a,33), DA(a,39), DA(a,45) ) << 3;
+ * hash |= XOR8( DA(a,4), DA(a,10), DA(a,16), DA(a,22), DA(a,28), DA(a,34), DA(a,40), DA(a,46) ) << 4;
+ * hash |= XOR8( DA(a,5), DA(a,11), DA(a,17), DA(a,23), DA(a,29), DA(a,35), DA(a,41), DA(a,47) ) << 5;
+ *
+ */
+static inline u32 TLan_HashFunc( const u8 *a )
{
u8 hash;
return (hash & 077);
}
-
-#else /* original code */
-
-inline u32 xor( u32 a, u32 b )
-{
- return ( ( a && ! b ) || ( ! a && b ) );
-}
-#define XOR8( a, b, c, d, e, f, g, h ) xor( a, xor( b, xor( c, xor( d, xor( e, xor( f, xor( g, h ) ) ) ) ) ) )
-#define DA( a, bit ) ( ( (u8) a[bit/8] ) & ( (u8) ( 1 << bit%8 ) ) )
-
-inline u32 TLan_HashFunc( u8 *a )
-{
- u32 hash;
-
- hash = XOR8( DA(a,0), DA(a, 6), DA(a,12), DA(a,18), DA(a,24), DA(a,30), DA(a,36), DA(a,42) );
- hash |= XOR8( DA(a,1), DA(a, 7), DA(a,13), DA(a,19), DA(a,25), DA(a,31), DA(a,37), DA(a,43) ) << 1;
- hash |= XOR8( DA(a,2), DA(a, 8), DA(a,14), DA(a,20), DA(a,26), DA(a,32), DA(a,38), DA(a,44) ) << 2;
- hash |= XOR8( DA(a,3), DA(a, 9), DA(a,15), DA(a,21), DA(a,27), DA(a,33), DA(a,39), DA(a,45) ) << 3;
- hash |= XOR8( DA(a,4), DA(a,10), DA(a,16), DA(a,22), DA(a,28), DA(a,34), DA(a,40), DA(a,46) ) << 4;
- hash |= XOR8( DA(a,5), DA(a,11), DA(a,17), DA(a,23), DA(a,29), DA(a,35), DA(a,41), DA(a,47) ) << 5;
-
- return hash;
-
-}
-
-#endif /* I_LIKE_A_FAST_HASH_FUNCTION */
#endif
static int ringspeed[XL_MAX_ADAPTERS] = {0,} ;
-MODULE_PARM(ringspeed, "1-" __MODULE_STRING(XL_MAX_ADAPTERS) "i");
+module_param_array(ringspeed, int, NULL, 0);
MODULE_PARM_DESC(ringspeed,"3c359: Ringspeed selection - 4,16 or 0") ;
/* Packet buffer size */
static int pkt_buf_sz[XL_MAX_ADAPTERS] = {0,} ;
-MODULE_PARM(pkt_buf_sz, "1-" __MODULE_STRING(XL_MAX_ADAPTERS) "i") ;
+module_param_array(pkt_buf_sz, int, NULL, 0) ;
MODULE_PARM_DESC(pkt_buf_sz,"3c359: Initial buffer size") ;
/* Message Level */
static int message_level[XL_MAX_ADAPTERS] = {0,} ;
-MODULE_PARM(message_level, "1-" __MODULE_STRING(XL_MAX_ADAPTERS) "i") ;
+module_param_array(message_level, int, NULL, 0) ;
MODULE_PARM_DESC(message_level, "3c359: Level of reported messages \n") ;
/*
* This is a real nasty way of doing this, but otherwise you
struct xl_private *xl_priv = (struct xl_private *)dev->priv ;
struct xl_tx_desc *txd ;
- u8 *xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
int i ;
printk("tx_ring_head: %d, tx_ring_tail: %d, free_ent: %d \n",xl_priv->tx_ring_head,
struct xl_private *xl_priv = (struct xl_private *)dev->priv ;
struct xl_rx_desc *rxd ;
- u8 *xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
int i ;
printk("rx_ring_tail: %d \n", xl_priv->rx_ring_tail) ;
static u16 xl_ee_read(struct net_device *dev, int ee_addr)
{
struct xl_private *xl_priv = (struct xl_private *)dev->priv ;
- u8 *xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
/* Wait for EEProm to not be busy */
writel(IO_WORD_READ | EECONTROL, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
static void xl_ee_write(struct net_device *dev, int ee_addr, u16 ee_value)
{
struct xl_private *xl_priv = (struct xl_private *)dev->priv ;
- u8 *xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
/* Wait for EEProm to not be busy */
writel(IO_WORD_READ | EECONTROL, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
static int xl_hw_reset(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *)dev->priv ;
- u8 *xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
unsigned long t ;
u16 i ;
u16 result_16 ;
static int xl_open(struct net_device *dev)
{
struct xl_private *xl_priv=(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
u8 i ;
u16 hwaddr[3] ; /* Should be u8[6] but we get word return values */
int open_err ;
static int xl_open_hw(struct net_device *dev)
{
struct xl_private *xl_priv=(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
u16 vsoff ;
char ver_str[33];
int open_err ;
static void xl_rx(struct net_device *dev)
{
struct xl_private *xl_priv=(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
struct sk_buff *skb, *skb2 ;
int frame_length = 0, copy_len = 0 ;
int temp_ring_loc ;
static void xl_reset(struct net_device *dev)
{
struct xl_private *xl_priv=(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
unsigned long t;
writew( GLOBAL_RESET, xl_mmio + MMIO_COMMAND ) ;
{
struct net_device *dev = (struct net_device *)dev_id;
struct xl_private *xl_priv =(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
u16 intstatus, macstatus ;
if (!dev) {
static void xl_dn_comp(struct net_device *dev)
{
struct xl_private *xl_priv=(struct xl_private *)dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
struct xl_tx_desc *txd ;
static int xl_close(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
unsigned long t ;
netif_stop_queue(dev) ;
static void xl_srb_bh(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
u8 srb_cmd, ret_code ;
int i ;
static void xl_arb_cmd(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
u8 arb_cmd ;
u16 lan_status, lan_status_diff ;
static void xl_asb_cmd(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
if (xl_priv->asb_queued == 1)
writel(ACK_INTERRUPT | LATCH_ACK | ASBFACK, xl_mmio + MMIO_COMMAND) ;
static void xl_asb_bh(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
u8 ret_code ;
writel(MMIO_BYTE_READ | 0xd0000 | xl_priv->asb | 2, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
static void xl_srb_cmd(struct net_device *dev, int srb_cmd)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
switch (srb_cmd) {
case READ_LOG:
static void xl_wait_misr_flags(struct net_device *dev)
{
struct xl_private *xl_priv = (struct xl_private *) dev->priv ;
- u8 * xl_mmio = xl_priv->xl_mmio ;
+ u8 __iomem * xl_mmio = xl_priv->xl_mmio ;
int i ;
u16 arb;
u16 asb;
- u8 *xl_mmio;
+ u8 __iomem *xl_mmio;
char *xl_card_name;
struct pci_dev *pdev ;
struct streamer_private *next;
struct pci_dev *pci_dev;
- __u8 *streamer_mmio;
+ __u8 __iomem *streamer_mmio;
char *streamer_card_name;
spinlock_t streamer_lock;
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
-MODULE_PARM(irq, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
-MODULE_PARM(dma, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(dma, int, NULL, 0);
static struct net_device *proteon_dev[ISATR_MAX_ADAPTERS];
MODULE_LICENSE("GPL");
-MODULE_PARM(io, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
-MODULE_PARM(irq, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
-MODULE_PARM(dma, "1-" __MODULE_STRING(ISATR_MAX_ADAPTERS) "i");
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(dma, int, NULL, 0);
static struct net_device *sk_isa_dev[ISATR_MAX_ADAPTERS];
MODULE_DESCRIPTION("VIA Networking Velocity Family Gigabit Ethernet Adapter Driver");
#define VELOCITY_PARAM(N,D) \
- static const int N[MAX_UNITS]=OPTION_DEFAULT;\
- MODULE_PARM(N, "1-" __MODULE_STRING(MAX_UNITS) "i");\
+ static int N[MAX_UNITS]=OPTION_DEFAULT;\
+ module_param_array(N, int, NULL, 0); \
MODULE_PARM_DESC(N, D);
#define RX_DESC_MIN 64
VELOCITY_PARAM(int_works, "Number of packets per interrupt services");
static int rx_copybreak = 200;
-MODULE_PARM(rx_copybreak, "i");
+module_param(rx_copybreak, int, 0644);
MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
static void velocity_init_info(struct pci_dev *pdev, struct velocity_info *vptr, struct velocity_info_tbl *info);
.notifier_call = velocity_netdev_event,
};
-static spinlock_t velocity_dev_list_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(velocity_dev_list_lock);
static LIST_HEAD(velocity_dev_list);
static void velocity_register_notifier(void)
/* and leave the chip powered down */
- pci_set_power_state(pdev, 3);
+ pci_set_power_state(pdev, PCI_D3hot);
#ifdef CONFIG_PM
{
unsigned long flags;
goto err_free_rd_ring;
/* Ensure chip is running */
- pci_set_power_state(vptr->pdev, 0);
+ pci_set_power_state(vptr->pdev, PCI_D0);
velocity_init_registers(vptr, VELOCITY_INIT_COLD);
dev->name, dev);
if (ret < 0) {
/* Power down the chip */
- pci_set_power_state(vptr->pdev, 3);
+ pci_set_power_state(vptr->pdev, PCI_D3hot);
goto err_free_td_ring;
}
free_irq(dev->irq, dev);
/* Power down the chip */
- pci_set_power_state(vptr->pdev, 3);
+ pci_set_power_state(vptr->pdev, PCI_D3hot);
/* Free the resources */
velocity_free_td_ring(vptr);
/* If we are asked for information and the device is power
saving then we need to bring the device back up to talk to it */
- if(!netif_running(dev))
- pci_set_power_state(vptr->pdev, 0);
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, PCI_D0);
switch (cmd) {
case SIOCGMIIPHY: /* Get address of MII PHY in use. */
default:
ret = -EOPNOTSUPP;
}
- if(!netif_running(dev))
- pci_set_power_state(vptr->pdev, 3);
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, PCI_D3hot);
return ret;
static int velocity_ethtool_up(struct net_device *dev)
{
struct velocity_info *vptr = dev->priv;
- if(!netif_running(dev))
- pci_set_power_state(vptr->pdev, 0);
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, PCI_D0);
return 0;
}
static void velocity_ethtool_down(struct net_device *dev)
{
struct velocity_info *vptr = dev->priv;
- if(!netif_running(dev))
- pci_set_power_state(vptr->pdev, 3);
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, PCI_D3hot);
}
static int velocity_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
velocity_shutdown(vptr);
velocity_set_wol(vptr);
pci_enable_wake(pdev, 3, 1);
- pci_set_power_state(pdev, 3);
+ pci_set_power_state(pdev, PCI_D3hot);
} else {
velocity_save_context(vptr, &vptr->context);
velocity_shutdown(vptr);
pci_disable_device(pdev);
- pci_set_power_state(pdev, state);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
}
#else
- pci_set_power_state(pdev, state);
+ pci_set_power_state(pdev, pci_choose_state(pdev, state));
#endif
spin_unlock_irqrestore(&vptr->lock, flags);
return 0;
if(!netif_running(vptr->dev))
return 0;
- pci_set_power_state(pdev, 0);
+ pci_set_power_state(pdev, PCI_D0);
pci_enable_wake(pdev, 0, 0);
pci_restore_state(pdev);
dma_addr_t buf_dma;
};
-enum {
+enum velocity_owner {
OWNED_BY_HOST = 0,
OWNED_BY_NIC = 1
-} velocity_owner;
+};
/*
MODULE_AUTHOR("Arnaldo Carvalho de Melo");
MODULE_DESCRIPTION("Cyclom 2X Sync Card Driver.");
MODULE_LICENSE("GPL");
-MODULE_PARM(cycx_debug, "i");
+module_param(cycx_debug, int, 0);
MODULE_PARM_DESC(cycx_debug, "cyclomx debug level");
/* Defines & Macros */
pc300_t *card = (pc300_t *) chan->card;
struct net_device_stats *stats = hdlc_stats(dev);
int ch = chan->channel;
- uclong flags;
+ unsigned long flags;
ucchar ilar;
stats->tx_errors++;
pc300_t *card = (pc300_t *) chan->card;
struct net_device_stats *stats = hdlc_stats(dev);
int ch = chan->channel;
- uclong flags;
+ unsigned long flags;
#ifdef PC300_DEBUG_TX
int i;
#endif
{
ucchar ilar;
void __iomem *scabase = card->hw.scabase;
- uclong flags;
+ unsigned long flags;
tx_dma_buf_check(card, ch);
rx_dma_buf_check(card, ch);
{
pc300ch_t *chan = &card->chan[ch];
falc_t *pfalc = (falc_t *) & chan->falc;
- uclong flags;
+ unsigned long flags;
CPC_LOCK(card, flags);
printk("CH%d: %s %s %d channels\n",
pc300dev_t *d = (pc300dev_t *) dev->priv;
pc300ch_t *chan = (pc300ch_t *) d->chan;
pc300_t *card = (pc300_t *) chan->card;
- uclong flags;
+ unsigned long flags;
#ifdef PC300_DEBUG_OTHER
printk("pc300: cpc_close");
static int irq = irqUNKNOWN;
static int txScrambled = 1;
static int mdebug;
-#endif
-MODULE_PARM(irq, "i");
-MODULE_PARM(mem, "i");
-MODULE_PARM(arlan_debug, "i");
-MODULE_PARM(testMemory, "i");
-MODULE_PARM(spreadingCode, "i");
-MODULE_PARM(channelNumber, "i");
-MODULE_PARM(channelSet, "i");
-MODULE_PARM(systemId, "i");
-MODULE_PARM(registrationMode, "i");
-MODULE_PARM(radioNodeId, "i");
-MODULE_PARM(SID, "i");
-MODULE_PARM(txScrambled, "i");
-MODULE_PARM(keyStart, "i");
-MODULE_PARM(mdebug, "i");
-MODULE_PARM(tx_delay_ms, "i");
-MODULE_PARM(retries, "i");
-MODULE_PARM(async, "i");
-MODULE_PARM(tx_queue_len, "i");
-MODULE_PARM(arlan_entry_debug, "i");
-MODULE_PARM(arlan_exit_debug, "i");
-MODULE_PARM(arlan_entry_and_exit_debug, "i");
-MODULE_PARM(arlan_EEPROM_bad, "i");
+module_param(irq, int, 0);
+module_param(mdebug, int, 0);
+module_param(testMemory, int, 0);
+module_param(arlan_entry_debug, int, 0);
+module_param(arlan_exit_debug, int, 0);
+module_param(txScrambled, int, 0);
MODULE_PARM_DESC(irq, "(unused)");
-MODULE_PARM_DESC(mem, "Arlan memory address for single device probing");
-MODULE_PARM_DESC(arlan_debug, "Arlan debug enable (0-1)");
MODULE_PARM_DESC(testMemory, "(unused)");
MODULE_PARM_DESC(mdebug, "Arlan multicast debugging (0-1)");
+#endif
+
+module_param(arlan_debug, int, 0);
+module_param(spreadingCode, int, 0);
+module_param(channelNumber, int, 0);
+module_param(channelSet, int, 0);
+module_param(systemId, int, 0);
+module_param(registrationMode, int, 0);
+module_param(radioNodeId, int, 0);
+module_param(SID, int, 0);
+module_param(keyStart, int, 0);
+module_param(tx_delay_ms, int, 0);
+module_param(retries, int, 0);
+module_param(tx_queue_len, int, 0);
+module_param(arlan_EEPROM_bad, int, 0);
+MODULE_PARM_DESC(arlan_debug, "Arlan debug enable (0-1)");
MODULE_PARM_DESC(retries, "Arlan maximum packet retransmissions");
#ifdef ARLAN_ENTRY_EXIT_DEBUGGING
MODULE_PARM_DESC(arlan_entry_debug, "Arlan driver function entry debugging");
static inline int arlan_drop_tx(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
priv->stats.tx_errors++;
if (priv->Conf->tx_delay_ms)
int arlan_command(struct net_device *dev, int command_p)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
int udelayed = 0;
int i = 0;
if (!registrationBad(dev))
{
setInterruptEnable(dev);
- memset_io((void *) arlan->commandParameter, 0, 0xf);
+ memset_io(arlan->commandParameter, 0, 0xf);
WRITESHMB(arlan->commandByte, ARLAN_COM_INT | ARLAN_COM_RX_ENABLE);
WRITESHMB(arlan->commandParameter[0], conf->rxParameter);
arlan_interrupt_lancpu(dev);
priv->last_rx_int_ack_time + us2ticks(conf->rx_tweak2)))
{
setInterruptEnable(dev);
- memset_io((void *) arlan->commandParameter, 0, 0xf);
+ memset_io(arlan->commandParameter, 0, 0xf);
WRITESHMB(arlan->commandByte, ARLAN_COM_TX_ENABLE | ARLAN_COM_INT);
- memcpy_toio((void *) arlan->commandParameter, &TXLAST(dev), 14);
+ memcpy_toio(arlan->commandParameter, &TXLAST(dev), 14);
// for ( i=1 ; i < 15 ; i++) printk("%02x:",READSHMB(arlan->commandParameter[i]));
priv->tx_last_sent = jiffies;
arlan_interrupt_lancpu(dev);
static inline void arlan_command_process(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
int times = 0;
while (priv->waiting_command_mask && times < 8)
static inline void arlan_retransmit_now(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("arlan_retransmit_now");
static void arlan_registration_timer(unsigned long data)
{
struct net_device *dev = (struct net_device *) data;
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
int bh_mark_needed = 0;
int next_tick = 1;
long lostTime = ((long)jiffies - (long)priv->registrationLastSeen)
static void arlan_print_registers(struct net_device *dev, int line)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
volatile struct arlan_shmem *arlan = priv->card;
u_char hostcpuLock, lancpuLock, controlRegister, cntrlRegImage,
{
int i;
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
int tailStarts = 0x800;
ARLAN_DEBUG_ENTRY("arlan_hw_tx");
if (TXHEAD(dev).offset)
- headEnds = (((TXHEAD(dev).offset + TXHEAD(dev).length - (((int) arlan->txBuffer) - ((int) arlan))) / 64) + 1) * 64;
+ headEnds = (((TXHEAD(dev).offset + TXHEAD(dev).length - offsetof(struct arlan_shmem, txBuffer)) / 64) + 1) * 64;
if (TXTAIL(dev).offset)
- tailStarts = 0x800 - (((TXTAIL(dev).offset - (((int) arlan->txBuffer) - ((int) arlan))) / 64) + 2) * 64;
+ tailStarts = 0x800 - (((TXTAIL(dev).offset - offsetof(struct arlan_shmem, txBuffer)) / 64) + 2) * 64;
if (!TXHEAD(dev).offset && length < tailStarts)
printk(KERN_ERR "TXHEAD insert, tailStart %d\n", tailStarts);
TXHEAD(dev).offset =
- (((int) arlan->txBuffer) - ((int) arlan));
+ offsetof(struct arlan_shmem, txBuffer);
TXHEAD(dev).length = length - ARLAN_FAKE_HDR_LEN;
for (i = 0; i < 6; i++)
TXHEAD(dev).dest[i] = buf[i];
TXHEAD(dev).retries = conf->txRetries; /* 0 is use default */
TXHEAD(dev).routing = conf->txRouting;
TXHEAD(dev).scrambled = conf->txScrambled;
- memcpy_toio(((char *) arlan + TXHEAD(dev).offset), buf + ARLAN_FAKE_HDR_LEN, TXHEAD(dev).length);
+ memcpy_toio((char __iomem *)arlan + TXHEAD(dev).offset, buf + ARLAN_FAKE_HDR_LEN, TXHEAD(dev).length);
}
else if (!TXTAIL(dev).offset && length < (0x800 - headEnds))
{
printk(KERN_ERR "TXTAIL insert, headEnd %d\n", headEnds);
TXTAIL(dev).offset =
- (((int) arlan->txBuffer) - ((int) arlan)) + 0x800 - (length / 64 + 2) * 64;
+ offsetof(struct arlan_shmem, txBuffer) + 0x800 - (length / 64 + 2) * 64;
TXTAIL(dev).length = length - ARLAN_FAKE_HDR_LEN;
for (i = 0; i < 6; i++)
TXTAIL(dev).dest[i] = buf[i];
TXTAIL(dev).retries = conf->txRetries;
TXTAIL(dev).routing = conf->txRouting;
TXTAIL(dev).scrambled = conf->txScrambled;
- memcpy_toio(((char *) arlan + TXTAIL(dev).offset), buf + ARLAN_FAKE_HDR_LEN, TXTAIL(dev).length);
+ memcpy_toio(((char __iomem *)arlan + TXTAIL(dev).offset), buf + ARLAN_FAKE_HDR_LEN, TXTAIL(dev).length);
}
else
{
static int arlan_hw_config(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
ARLAN_DEBUG_ENTRY("arlan_hw_config");
static int arlan_read_card_configuration(struct net_device *dev)
{
u_char tlx415;
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
ARLAN_DEBUG_ENTRY("arlan_read_card_configuration");
static int __init arlan_check_fingerprint(unsigned long memaddr)
{
static const char probeText[] = "TELESYSTEM SLW INC. ARLAN \0";
- volatile struct arlan_shmem *arlan = (struct arlan_shmem *) memaddr;
+ volatile struct arlan_shmem __iomem *arlan = (struct arlan_shmem *) memaddr;
unsigned long paddr = virt_to_phys((void *) memaddr);
char tempBuf[49];
static int arlan_change_mtu(struct net_device *dev, int new_mtu)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
struct arlan_conf_stru *conf = priv->Conf;
ARLAN_DEBUG_ENTRY("arlan_change_mtu");
static int __init arlan_setup_device(struct net_device *dev, int num)
{
- struct arlan_private *ap = dev->priv;
+ struct arlan_private *ap = netdev_priv(dev);
int err;
ARLAN_DEBUG_ENTRY("arlan_setup_device");
static int __init arlan_probe_here(struct net_device *dev,
unsigned long memaddr)
{
- struct arlan_private *ap = dev->priv;
+ struct arlan_private *ap = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("arlan_probe_here");
static int arlan_open(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
int ret = 0;
ARLAN_DEBUG_ENTRY("arlan_open");
static inline int DoNotReTransmitCrap(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
if (TXLAST(dev).length < priv->Conf->ReTransmitPacketMaxSize)
return 1;
static inline int DoNotWaitReTransmitCrap(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
if (TXLAST(dev).length < priv->Conf->waitReTransmitPacketMaxSize)
return 1;
static inline void arlan_queue_retransmit(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("arlan_queue_retransmit");
static inline void RetryOrFail(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("RetryOrFail");
static void arlan_tx_done_interrupt(struct net_device *dev, int status)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("arlan_tx_done_interrupt");
char *skbtmp;
int i = 0;
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
skb->dev = dev;
skbtmp = skb_put(skb, pkt_len);
- memcpy_fromio(skbtmp + ARLAN_FAKE_HDR_LEN, ((char *) arlan) + rxOffset, pkt_len - ARLAN_FAKE_HDR_LEN);
+ memcpy_fromio(skbtmp + ARLAN_FAKE_HDR_LEN, ((char __iomem *) arlan) + rxOffset, pkt_len - ARLAN_FAKE_HDR_LEN);
memcpy_fromio(skbtmp, arlan->ultimateDestAddress, 6);
memcpy_fromio(skbtmp + 6, arlan->rxSrc, 6);
WRITESHMB(arlan->rxStatus, 0x00);
static void arlan_process_interrupt(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
u_char rxStatus = READSHMB(arlan->rxStatus);
u_char txStatus = READSHMB(arlan->txStatus);
u_short rxOffset = READSHMS(arlan->rxOffset);
static irqreturn_t arlan_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
struct net_device *dev = dev_id;
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
u_char rxStatus = READSHMB(arlan->rxStatus);
u_char txStatus = READSHMB(arlan->txStatus);
static int arlan_close(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
+ struct arlan_private *priv = netdev_priv(dev);
ARLAN_DEBUG_ENTRY("arlan_close");
static struct net_device_stats *arlan_statistics(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
ARLAN_DEBUG_ENTRY("arlan_statistics");
static void arlan_set_multicast(struct net_device *dev)
{
- struct arlan_private *priv = dev->priv;
- volatile struct arlan_shmem *arlan = priv->card;
+ struct arlan_private *priv = netdev_priv(dev);
+ volatile struct arlan_shmem __iomem *arlan = priv->card;
struct arlan_conf_stru *conf = priv->Conf;
int board_conf_needed = 0;
/* Information that need to be kept for each board. */
struct arlan_private {
struct net_device_stats stats;
- struct arlan_shmem * card;
+ struct arlan_shmem __iomem * card;
struct arlan_shmem * conf;
struct arlan_conf_stru * Conf;
#define ARLAN_COM_INT 0x80
-#define TXLAST(dev) (((struct arlan_private *)dev->priv)->txRing[((struct arlan_private *)dev->priv)->txLast])
-#define TXHEAD(dev) (((struct arlan_private *)dev->priv)->txRing[0])
-#define TXTAIL(dev) (((struct arlan_private *)dev->priv)->txRing[1])
+#define TXLAST(dev) (((struct arlan_private *)netdev_priv(dev))->txRing[((struct arlan_private *)netdev_priv(dev))->txLast])
+#define TXHEAD(dev) (((struct arlan_private *)netdev_priv(dev))->txRing[0])
+#define TXTAIL(dev) (((struct arlan_private *)netdev_priv(dev))->txRing[1])
-#define TXBuffStart(dev) \
- ((int)(((struct arlan_private *)dev->priv)->card)->txBuffer) - ((int)(((struct arlan_private *)dev->priv)->card) )
-#define TXBuffEnd(dev) \
- ((int)(((struct arlan_private *)dev->priv)->card)->rxBuffer) - ((int)(((struct arlan_private *)dev->priv)->card)
+#define TXBuffStart(dev) offsetof(struct arlan_shmem, txBuffer)
+#define TXBuffEnd(dev) offsetof(struct arlan_shmem, rxBuffer)
#define READSHM(to,from,atype) {\
atype tmp;\
#define registrationBad(dev)\
- ( ( READSHMB(((struct arlan_private *)dev->priv)->card->registrationMode) > 0) && \
- ( READSHMB(((struct arlan_private *)dev->priv)->card->registrationStatus) == 0) )
+ ( ( READSHMB(((struct arlan_private *)netdev_priv(dev))->card->registrationMode) > 0) && \
+ ( READSHMB(((struct arlan_private *)netdev_priv(dev))->card->registrationStatus) == 0) )
#define readControlRegister(dev)\
- READSHMB(((struct arlan_private *)dev->priv)->card->cntrlRegImage)
+ READSHMB(((struct arlan_private *)netdev_priv(dev))->card->cntrlRegImage)
#define writeControlRegister(dev, v){\
- WRITESHMB(((struct arlan_private *)dev->priv)->card->cntrlRegImage ,((v) &0xF) );\
- WRITESHMB(((struct arlan_private *)dev->priv)->card->controlRegister ,(v) );}
+ WRITESHMB(((struct arlan_private *)netdev_priv(dev))->card->cntrlRegImage ,((v) &0xF) );\
+ WRITESHMB(((struct arlan_private *)netdev_priv(dev))->card->controlRegister ,(v) );}
#define arlan_interrupt_lancpu(dev) {\
#ifdef PCMCIA_DEBUG
static int pc_debug = PCMCIA_DEBUG;
-MODULE_PARM(pc_debug, "i");
+module_param(pc_debug, int, 0);
#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args)
static char *version =
"netwave_cs.c 0.3.0 Thu Jul 17 14:36:02 1997 (John Markus Bjørndalen)\n";
*/
static int mem_speed;
-/* Bit map of interrupts to choose from */
-/* This means pick from 15, 14, 12, 11, 10, 9, 7, 5, 4, and 3 */
-static u_int irq_mask = 0xdeb8;
-static int irq_list[4] = { -1 };
-
-MODULE_PARM(domain, "i");
-MODULE_PARM(scramble_key, "i");
-MODULE_PARM(mem_speed, "i");
-MODULE_PARM(irq_mask, "i");
-MODULE_PARM(irq_list, "1-4i");
+module_param(domain, int, 0);
+module_param(scramble_key, int, 0);
+module_param(mem_speed, int, 0);
/*====================================================================*/
static void netwave_detach(dev_link_t *); /* Destroy instance */
/* Hardware configuration */
-static void netwave_doreset(ioaddr_t iobase, u_char* ramBase);
+static void netwave_doreset(kio_addr_t iobase, u_char __iomem *ramBase);
static void netwave_reset(struct net_device *dev);
/* Misc device stuff */
dev_link_t link;
spinlock_t spinlock; /* Serialize access to the hardware (SMP) */
dev_node_t node;
- u_char *ramBase;
+ u_char __iomem *ramBase;
int timeoutCounter;
int lastExec;
struct timer_list watchdog; /* To avoid blocking state */
* The Netwave card is little-endian, so won't work for big endian
* systems.
*/
-static inline unsigned short get_uint16(u_char* staddr)
+static inline unsigned short get_uint16(u_char __iomem *staddr)
{
return readw(staddr); /* Return only 16 bits */
}
-static inline short get_int16(u_char* staddr)
+static inline short get_int16(u_char __iomem * staddr)
{
return readw(staddr);
}
}
#ifdef WIRELESS_EXT
-static void netwave_snapshot(netwave_private *priv, u_char *ramBase,
- ioaddr_t iobase) {
+static void netwave_snapshot(netwave_private *priv, u_char __iomem *ramBase,
+ kio_addr_t iobase) {
u_short resultBuffer;
/* if time since last snapshot is > 1 sec. (100 jiffies?) then take
static struct iw_statistics *netwave_get_wireless_stats(struct net_device *dev)
{
unsigned long flags;
- ioaddr_t iobase = dev->base_addr;
- netwave_private *priv = (netwave_private *) dev->priv;
- u_char *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
struct iw_statistics* wstats;
wstats = &priv->iw_stats;
dev_link_t *link;
struct net_device *dev;
netwave_private *priv;
- int i, ret;
+ int ret;
DEBUG(0, "netwave_attach()\n");
dev = alloc_etherdev(sizeof(netwave_private));
if (!dev)
return NULL;
- priv = dev->priv;
+ priv = netdev_priv(dev);
link = &priv->link;
link->priv = dev;
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = &netwave_interrupt;
/* General socket configuration */
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
char *extra)
{
unsigned long flags;
- ioaddr_t iobase = dev->base_addr;
- netwave_private *priv = (netwave_private *) dev->priv;
- u_char *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
/* Disable interrupts & save flags */
spin_lock_irqsave(&priv->spinlock, flags);
char *key)
{
unsigned long flags;
- ioaddr_t iobase = dev->base_addr;
- netwave_private *priv = (netwave_private *) dev->priv;
- u_char *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
/* Disable interrupts & save flags */
spin_lock_irqsave(&priv->spinlock, flags);
char *extra)
{
unsigned long flags;
- ioaddr_t iobase = dev->base_addr;
- netwave_private *priv = (netwave_private *) dev->priv;
- u_char *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
/* Disable interrupts & save flags */
spin_lock_irqsave(&priv->spinlock, flags);
static void netwave_pcmcia_config(dev_link_t *link) {
client_handle_t handle = link->handle;
struct net_device *dev = link->priv;
- netwave_private *priv = dev->priv;
+ netwave_private *priv = netdev_priv(dev);
tuple_t tuple;
cisparse_t parse;
int i, j, last_ret, last_fn;
u_char buf[64];
win_req_t req;
memreq_t mem;
- u_char *ramBase = NULL;
+ u_char __iomem *ramBase = NULL;
DEBUG(0, "netwave_pcmcia_config(0x%p)\n", link);
/* Store base address of the common window frame */
ramBase = ioremap(req.Base, 0x8000);
- ((netwave_private*)dev->priv)->ramBase = ramBase;
+ priv->ramBase = ramBase;
dev->irq = link->irq.AssignedIRQ;
dev->base_addr = link->io.BasePort1;
+ SET_NETDEV_DEV(dev, &handle_to_dev(handle));
+
if (register_netdev(dev) != 0) {
printk(KERN_DEBUG "netwave_cs: register_netdev() failed\n");
goto failed;
static void netwave_release(dev_link_t *link)
{
struct net_device *dev = link->priv;
- netwave_private *priv = dev->priv;
+ netwave_private *priv = netdev_priv(dev);
DEBUG(0, "netwave_release(0x%p)\n", link);
*
*/
static int netwave_event(event_t event, int priority,
- event_callback_args_t *args) {
+ event_callback_args_t *args)
+{
dev_link_t *link = args->client_data;
struct net_device *dev = link->priv;
*
* Proper hardware reset of the card.
*/
-static void netwave_doreset(ioaddr_t ioBase, u_char* ramBase) {
+static void netwave_doreset(kio_addr_t ioBase, u_char __iomem *ramBase)
+{
/* Reset card */
wait_WOC(ioBase);
outb(0x80, ioBase + NETWAVE_REG_PMR);
*/
static void netwave_reset(struct net_device *dev) {
/* u_char state; */
- netwave_private *priv = (netwave_private*) dev->priv;
- u_char *ramBase = priv->ramBase;
- ioaddr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
DEBUG(0, "netwave_reset: Done with hardware reset\n");
DataOffset;
int tmpcount;
- netwave_private *priv = (netwave_private *) dev->priv;
- u_char* ramBase = priv->ramBase;
- ioaddr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem * ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
/* Disable interrupts & save flags */
spin_lock_irqsave(&priv->spinlock, flags);
* ready to transmit another packet.
* 3. A command has completed execution.
*/
-static irqreturn_t netwave_interrupt(int irq, void* dev_id, struct pt_regs *regs) {
- ioaddr_t iobase;
- u_char *ramBase;
+static irqreturn_t netwave_interrupt(int irq, void* dev_id, struct pt_regs *regs)
+{
+ kio_addr_t iobase;
+ u_char __iomem *ramBase;
struct net_device *dev = (struct net_device *)dev_id;
- struct netwave_private *priv = dev->priv;
+ struct netwave_private *priv = netdev_priv(dev);
dev_link_t *link = &priv->link;
int i;
} /* netwave_watchdog */
static struct net_device_stats *netwave_get_stats(struct net_device *dev) {
- netwave_private *priv = (netwave_private*)dev->priv;
+ netwave_private *priv = netdev_priv(dev);
update_stats(dev);
static void update_stats(struct net_device *dev) {
//unsigned long flags;
-/* netwave_private *priv = (netwave_private*) dev->priv; */
+/* netwave_private *priv = netdev_priv(dev); */
//spin_lock_irqsave(&priv->spinlock, flags);
//spin_unlock_irqrestore(&priv->spinlock, flags);
}
-static int netwave_rx(struct net_device *dev) {
- netwave_private *priv = (netwave_private*)(dev->priv);
- u_char *ramBase = priv->ramBase;
- ioaddr_t iobase = dev->base_addr;
+static int netwave_rx(struct net_device *dev)
+{
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem *ramBase = priv->ramBase;
+ kio_addr_t iobase = dev->base_addr;
u_char rxStatus;
struct sk_buff *skb = NULL;
unsigned int curBuffer,
}
static int netwave_open(struct net_device *dev) {
- netwave_private *priv = dev->priv;
+ netwave_private *priv = netdev_priv(dev);
dev_link_t *link = &priv->link;
DEBUG(1, "netwave_open: starting.\n");
}
static int netwave_close(struct net_device *dev) {
- netwave_private *priv = (netwave_private *)dev->priv;
+ netwave_private *priv = netdev_priv(dev);
dev_link_t *link = &priv->link;
DEBUG(1, "netwave_close: finishing.\n");
static void __exit exit_netwave_cs(void)
{
pcmcia_unregister_driver(&netwave_driver);
-
- if (dev_list != NULL) /* Critical situation */
- printk("netwave_cs: devices remaining when removing module\n");
+ BUG_ON(dev_list != NULL);
}
module_init(init_netwave_cs);
*/
static void set_multicast_list(struct net_device *dev)
{
- ioaddr_t iobase = dev->base_addr;
- u_char* ramBase = ((netwave_private*) dev->priv)->ramBase;
+ kio_addr_t iobase = dev->base_addr;
+ netwave_private *priv = netdev_priv(dev);
+ u_char __iomem * ramBase = priv->ramBase;
u_char rcvMode = 0;
#ifdef PCMCIA_DEBUG
/* Module parameters */
-/* The old way: bit map of interrupts to choose from */
-/* This means pick from 15, 14, 12, 11, 10, 9, 7, 5, 4, and 3 */
-static uint irq_mask = 0xdeb8;
-/* Newer, simpler way of listing specific interrupts */
-static int irq_list[4] = { -1 };
-
/* Some D-Link cards have buggy CIS. They do work at 5v properly, but
 * don't have any CIS entry for it. This works around it... */
static int ignore_cis_vcc; /* = 0 */
-MODULE_PARM(irq_mask, "i");
-MODULE_PARM(irq_list, "1-4i");
-MODULE_PARM(ignore_cis_vcc, "i");
+module_param(ignore_cis_vcc, int, 0);
/********************************************************************/
/* Magic constants */
struct orinoco_pccard *card;
dev_link_t *link;
client_reg_t client_reg;
- int ret, i;
+ int ret;
dev = alloc_orinocodev(sizeof(*card), orinoco_cs_hard_reset);
if (! dev)
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = NULL;
/* General socket configuration defaults can go here. In this
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
* the irq structure is initialized.
*/
if (link->conf.Attributes & CONF_ENABLE_IRQ) {
- int i;
-
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i=0; i<4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
-
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = orinoco_interrupt;
link->irq.Instance = dev;
/* register_netdev will give us an ethX name */
dev->name[0] = '\0';
+ SET_NETDEV_DEV(dev, &handle_to_dev(handle));
/* Tell the stack we exist */
if (register_netdev(dev) != 0) {
printk(KERN_ERR PFX "register_netdev() failed\n");
exit_orinoco_cs(void)
{
pcmcia_unregister_driver(&orinoco_driver);
-
- if (dev_list)
- DEBUG(0, PFX "Removing leftover devices.\n");
- while (dev_list != NULL) {
- if (dev_list->state & DEV_CONFIG)
- orinoco_cs_release(dev_list);
- orinoco_cs_detach(dev_list);
- }
+ BUG_ON(dev_list != NULL);
}
module_init(init_orinoco_cs);
#include <linux/workqueue.h>
#include <linux/compiler.h>
-#if !defined(CONFIG_FW_LOADER) && !defined(CONFIG_FW_LOADER_MODULE)
-#error Firmware Loading is not configured in the kernel !
+#ifndef __iomem
+#define __iomem
#endif
-#define prism54_synchronize_irq(irq) synchronize_irq(irq)
-
#define PRISM_FW_PDEV &priv->pdev->dev
#endif /* _PRISM_COMPAT_H */
struct net_device *dev = alloc_etherdev(sizeof(net_local));
if (!dev)
break;
- memcpy(dev->name, name[i], IFNAMSIZ); /* Copy name */
+ if (name[i])
+ strcpy(dev->name, name[i]); /* Copy name */
dev->base_addr = io[i];
dev->irq = irq[i];
/* Parameters set by insmod */
static int io[4];
static int irq[4];
-static char name[4][IFNAMSIZ];
-MODULE_PARM(io, "1-4i");
-MODULE_PARM(irq, "1-4i");
-MODULE_PARM(name, "1-4c" __MODULE_STRING(IFNAMSIZ));
+static char *name[4];
+module_param_array(io, int, NULL, 0);
+module_param_array(irq, int, NULL, 0);
+module_param_array(name, charp, NULL, 0);
+
MODULE_PARM_DESC(io, "WaveLAN I/O base address(es), required");
MODULE_PARM_DESC(irq, "WaveLAN IRQ number(s)");
MODULE_PARM_DESC(name, "WaveLAN interface name(s)");
u_char * b, /* buffer to fill */
int n) /* size to read */
{
- u_char * ptr = ((u_char *)dev->mem_start) + PSA_ADDR + (o << 1);
+ net_local *lp = netdev_priv(dev);
+ u_char __iomem *ptr = lp->mem + PSA_ADDR + (o << 1);
while(n-- > 0)
{
u_char * b, /* Buffer in memory */
int n) /* Length of buffer */
{
- u_char * ptr = ((u_char *) dev->mem_start) + PSA_ADDR + (o << 1);
+ net_local *lp = netdev_priv(dev);
+ u_char __iomem *ptr = lp->mem + PSA_ADDR + (o << 1);
int count = 0;
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
/* As there seem to have no flag PSA_BUSY as in the ISA model, we are
* oblige to verify this address to know when the PSA is ready... */
- volatile u_char * verify = ((u_char *) dev->mem_start) + PSA_ADDR +
+ volatile u_char __iomem *verify = lp->mem + PSA_ADDR +
(psaoff(0, psa_comp_number) << 1);
	/* Authorize writing to PSA */
/* Perform a handover to a new WavePoint */
void wv_roam_handover(wavepoint_history *wavepoint, net_local *lp)
{
- ioaddr_t base = lp->dev->base_addr;
+ kio_addr_t base = lp->dev->base_addr;
mm_t m;
unsigned long flags;
int cmd,
int result)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
int status;
int wait_completed;
long spin;
OP0_DIAGNOSE, SR0_DIAGNOSE_PASSED))
ret = TRUE;
-#ifdef DEBUG_CONFIG_ERROR
+#ifdef DEBUG_CONFIG_ERRORS
printk(KERN_INFO "wavelan_cs: i82593 Self Test failed!\n");
#endif
return(ret);
char * buf,
int len)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
int ring_ptr = addr;
int chunk_len;
char * buf_ptr = buf;
static void
wv_mmc_show(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
mmr_t m;
static inline void
wv_init_info(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
psa_t psa;
int i;
#ifdef DEBUG_BASIC_SHOW
/* Now, let's go for the basic stuff */
- printk(KERN_NOTICE "%s: WaveLAN: port %#x, irq %d, hw_addr",
+ printk(KERN_NOTICE "%s: WaveLAN: port %#lx, irq %d, hw_addr",
dev->name, base, dev->irq);
for(i = 0; i < WAVELAN_ADDR_SIZE; i++)
printk("%s%02X", (i == 0) ? " " : ":", dev->dev_addr[i]);
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
psa_t psa;
mm_t m;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
unsigned long flags;
int ret;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
psa_t psa;
unsigned long flags;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
psa_t psa;
unsigned long flags;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
unsigned long flags;
psa_t psa;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
psa_t psa;
unsigned long flags;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
struct iw_range *range = (struct iw_range *) extra;
unsigned long flags;
union iwreq_data *wrqu,
char *extra)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local *lp = netdev_priv(dev);
psa_t psa;
unsigned long flags;
static iw_stats *
wavelan_get_wireless_stats(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
mmr_t m;
iw_stats * wstats;
int rfp, /* end of frame */
int wrap) /* start of buffer */
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
int rp;
int len;
static inline void
wv_packet_rcv(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
int newrfp;
int rp;
short length)
{
net_local * lp = netdev_priv(dev);
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
unsigned long flags;
int clen = length;
register u_short xmtdata_base = TX_BASE;
static inline int
wv_mmc_init(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
psa_t psa;
mmw_t m;
int configured;
static int
wv_ru_stop(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
unsigned long flags;
int status;
/* If there was a problem */
if(spin <= 0)
{
-#ifdef DEBUG_CONFIG_ERROR
+#ifdef DEBUG_CONFIG_ERRORS
printk(KERN_INFO "%s: wv_ru_stop(): The chip doesn't want to stop...\n",
dev->name);
#endif
static int
wv_ru_start(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
unsigned long flags;
static int
wv_82593_config(struct net_device * dev)
{
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
net_local * lp = netdev_priv(dev);
struct i82593_conf_block cfblk;
int ret = TRUE;
wv_hw_config(struct net_device * dev)
{
net_local * lp = netdev_priv(dev);
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
unsigned long flags;
int ret = FALSE;
static inline int
wv_pcmcia_config(dev_link_t * link)
{
- client_handle_t handle;
+ client_handle_t handle = link->handle;
tuple_t tuple;
cisparse_t parse;
- struct net_device * dev;
+ struct net_device * dev = (struct net_device *) link->priv;
int i;
u_char buf[64];
win_req_t req;
memreq_t mem;
+ net_local * lp = netdev_priv(dev);
- handle = link->handle;
- dev = (struct net_device *) link->priv;
#ifdef DEBUG_CONFIG_TRACE
printk(KERN_DEBUG "->wv_pcmcia_config(0x%p)\n", link);
break;
}
- dev->mem_start = (u_long)ioremap(req.Base, req.Size);
+ lp->mem = ioremap(req.Base, req.Size);
+ dev->mem_start = (u_long)lp->mem;
dev->mem_end = dev->mem_start + req.Size;
mem.CardOffset = 0; mem.Page = 0;
netif_start_queue(dev);
#ifdef DEBUG_CONFIG_INFO
- printk(KERN_DEBUG "wv_pcmcia_config: MEMSTART 0x%x IRQ %d IOPORT 0x%x\n",
- (u_int) dev->mem_start, dev->irq, (u_int) dev->base_addr);
+ printk(KERN_DEBUG "wv_pcmcia_config: MEMSTART %p IRQ %d IOPORT 0x%x\n",
+ lp->mem, dev->irq, (u_int) dev->base_addr);
#endif
+ SET_NETDEV_DEV(dev, &handle_to_dev(handle));
i = register_netdev(dev);
if(i != 0)
{
wv_pcmcia_release(dev_link_t *link)
{
struct net_device * dev = (struct net_device *) link->priv;
+ net_local * lp = netdev_priv(dev);
#ifdef DEBUG_CONFIG_TRACE
printk(KERN_DEBUG "%s: -> wv_pcmcia_release(0x%p)\n", dev->name, link);
#endif
/* Don't bother checking to see if these succeed or not */
- iounmap((u_char *)dev->mem_start);
+ iounmap(lp->mem);
pcmcia_release_window(link->win);
pcmcia_release_configuration(link->handle);
pcmcia_release_io(link->handle, &link->io);
{
struct net_device * dev;
net_local * lp;
- ioaddr_t base;
+ kio_addr_t base;
int status0;
u_int tx_status;
wavelan_watchdog(struct net_device * dev)
{
net_local * lp = netdev_priv(dev);
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
unsigned long flags;
int aborted = FALSE;
{
net_local * lp = netdev_priv(dev);
dev_link_t * link = lp->link;
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
#ifdef DEBUG_CALLBACK_TRACE
printk(KERN_DEBUG "%s: ->wavelan_open(dev=0x%x)\n", dev->name,
wavelan_close(struct net_device * dev)
{
dev_link_t * link = ((net_local *)netdev_priv(dev))->link;
- ioaddr_t base = dev->base_addr;
+ kio_addr_t base = dev->base_addr;
#ifdef DEBUG_CALLBACK_TRACE
printk(KERN_DEBUG "%s: ->wavelan_close(dev=0x%x)\n", dev->name,
dev_link_t * link; /* Info for cardmgr */
struct net_device * dev; /* Interface generic data */
net_local * lp; /* Interface specific data */
- int i, ret;
+ int ret;
#ifdef DEBUG_CALLBACK_TRACE
printk(KERN_DEBUG "-> wavelan_attach()\n");
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = wavelan_interrupt;
/* General socket configuration */
/* Register with Card Services */
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_REGISTRATION_COMPLETE |
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
int cell_search; /* Searching for new cell? */
struct timer_list cell_timer; /* Garbage collection */
#endif /* WAVELAN_ROAMING */
+ void __iomem *mem;
};
/**************************** PROTOTYPES ****************************/
* The exact syntax is 'insmod wavelan_cs.o <var>=<value>'
*/
-/* Bit map of interrupts to choose from */
-/* This means pick from 15, 14, 12, 11, 10, 9, 7, 5, 4 and 3 */
-static int irq_mask = 0xdeb8;
-static int irq_list[4] = { -1 };
-
/* Shared memory speed, in ns */
static int mem_speed = 0;
/* New module interface */
-MODULE_PARM(irq_mask, "i");
-MODULE_PARM(irq_list, "1-4i");
-MODULE_PARM(mem_speed, "i");
+module_param(mem_speed, int, 0);
#ifdef WAVELAN_ROAMING /* Conditional compile, see above in options */
/* Enable roaming mode ? No ! Please keep this to 0 */
static int do_roaming = 0;
-MODULE_PARM(do_roaming, "i");
+module_param(do_roaming, bool, 0);
#endif /* WAVELAN_ROAMING */
MODULE_LICENSE("GPL");
#define PCMCIA_DEBUG 0
#ifdef PCMCIA_DEBUG
static int pc_debug = PCMCIA_DEBUG;
-MODULE_PARM(pc_debug, "i");
+module_param(pc_debug, int, 0);
#define dprintk(n, format, args...) \
{ if (pc_debug > (n)) \
printk(KERN_INFO "%s: " format "\n", __FUNCTION__ , ##args); }
#define WL3501_RESUME 0
#define WL3501_SUSPEND 1
-/* Parameters that can be set with 'insmod' */
-/* Bit map of interrupts to choose from */
-/* This means pick from 15, 14, 12, 11, 10, 9, 7, 5, 4, and 3 */
-static unsigned long wl3501_irq_mask = 0xdeb8;
-static int wl3501_irq_list[4] = { -1 };
-
/*
* The event() function is this driver's Card Services event handler. It will
* be called by Card Services when an appropriate card status event is
client_reg_t client_reg;
dev_link_t *link;
struct net_device *dev;
- int ret, i;
+ int ret;
/* Initialize the dev_link_t structure */
link = kmalloc(sizeof(*link), GFP_KERNEL);
/* Interrupt setup */
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- link->irq.IRQInfo2 = wl3501_irq_mask;
- if (wl3501_irq_list[0] != -1)
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << wl3501_irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = wl3501_interrupt;
/* General socket configuration */
link->next = wl3501_dev_list;
wl3501_dev_list = link;
client_reg.dev_info = &wl3501_dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask = CS_EVENT_CARD_INSERTION |
CS_EVENT_RESET_PHYSICAL |
CS_EVENT_CARD_RESET |
dev->irq = link->irq.AssignedIRQ;
dev->base_addr = link->io.BasePort1;
+ SET_NETDEV_DEV(dev, &handle_to_dev(handle));
if (register_netdev(dev)) {
printk(KERN_NOTICE "wl3501_cs: register_netdev() failed\n");
goto failed;
{
dprintk(0, ": unloading");
pcmcia_unregister_driver(&wl3501_driver);
- while (wl3501_dev_list) {
- /* Mark the device as non-existing to minimize calls to card */
- wl3501_dev_list->state &= ~DEV_PRESENT;
- if (wl3501_dev_list->state & DEV_CONFIG)
- wl3501_release(wl3501_dev_list);
- wl3501_detach(wl3501_dev_list);
- }
+ BUG_ON(wl3501_dev_list != NULL);
}
module_init(wl3501_init_module);
module_exit(wl3501_exit_module);
-MODULE_PARM(wl3501_irq_mask, "i");
-MODULE_PARM(wl3501_irq_list, "1-4i");
MODULE_AUTHOR("Fox Chen <mhchen@golf.ccl.itri.org.tw>, "
"Arnaldo Carvalho de Melo <acme@conectiva.com.br>,"
"Gustavo Niemeyer <niemeyer@conectiva.com>");
#define ZNET_DEBUG 1
#endif
static unsigned int znet_debug = ZNET_DEBUG;
-MODULE_PARM (znet_debug, "i");
+module_param (znet_debug, int, 0);
MODULE_PARM_DESC (znet_debug, "ZNet debug level");
MODULE_LICENSE("GPL");
unsigned long buffer_size;
struct task_struct * last_task;
int last_is_kernel;
+ int tracing;
struct op_sample * buffer;
unsigned long sample_received;
unsigned long sample_lost_overflow;
+ unsigned long backtrace_aborted;
int cpu;
struct work_struct work;
} ____cacheline_aligned;
void cpu_buffer_reset(struct oprofile_cpu_buffer * cpu_buf);
+/* transient events for the CPU buffer -> event buffer */
+#define CPU_IS_KERNEL 1
+#define CPU_TRACE_BEGIN 2
+
#endif /* OPROFILE_CPU_BUFFER_H */
#define KERNEL_EXIT_SWITCH_CODE 5
#define MODULE_LOADED_CODE 6
#define CTX_TGID_CODE 7
+#define TRACE_BEGIN_CODE 8
+#define TRACE_END_CODE 9
/* add data to the event buffer */
void add_event_entry(unsigned long data);
#include "buffer_sync.h"
#include "oprofile_stats.h"
-struct oprofile_operations * oprofile_ops;
+struct oprofile_operations oprofile_ops;
+
unsigned long oprofile_started;
+unsigned long backtrace_depth;
static unsigned long is_setup;
static DECLARE_MUTEX(start_sem);
if ((err = alloc_event_buffer()))
goto out1;
- if (oprofile_ops->setup && (err = oprofile_ops->setup()))
+ if (oprofile_ops.setup && (err = oprofile_ops.setup()))
goto out2;
/* Note even though this starts part of the
return 0;
out3:
- if (oprofile_ops->shutdown)
- oprofile_ops->shutdown();
+ if (oprofile_ops.shutdown)
+ oprofile_ops.shutdown();
out2:
free_event_buffer();
out1:
oprofile_reset_stats();
- if ((err = oprofile_ops->start()))
+ if ((err = oprofile_ops.start()))
goto out;
oprofile_started = 1;
down(&start_sem);
if (!oprofile_started)
goto out;
- oprofile_ops->stop();
+ oprofile_ops.stop();
oprofile_started = 0;
/* wake up the daemon to read what remains */
wake_up_buffer_waiter();
{
down(&start_sem);
sync_stop();
- if (oprofile_ops->shutdown)
- oprofile_ops->shutdown();
+ if (oprofile_ops.shutdown)
+ oprofile_ops.shutdown();
is_setup = 0;
free_event_buffer();
free_cpu_buffers();
}
-extern void timer_init(struct oprofile_operations ** ops);
+int oprofile_set_backtrace(unsigned long val)
+{
+ int err = 0;
+ down(&start_sem);
-static int __init oprofile_init(void)
-{
- /* Architecture must fill in the interrupt ops and the
- * logical CPU type, or we can fall back to the timer
- * interrupt profiler.
- */
- int err = oprofile_arch_init(&oprofile_ops);
+ if (oprofile_started) {
+ err = -EBUSY;
+ goto out;
+ }
- if (err == -ENODEV || timer) {
- timer_init(&oprofile_ops);
- err = 0;
- } else if (err) {
+ if (!oprofile_ops.backtrace) {
+ err = -EINVAL;
goto out;
}
- if (!oprofile_ops->cpu_type) {
- printk(KERN_ERR "oprofile: cpu_type not set !\n");
- err = -EFAULT;
- } else {
- err = oprofilefs_register();
+ backtrace_depth = val;
+
+out:
+ up(&start_sem);
+ return err;
+}
+
+static int __init oprofile_init(void)
+{
+ int err;
+
+ err = oprofile_arch_init(&oprofile_ops);
+
+ if (err < 0 || timer) {
+ printk(KERN_INFO "oprofile: using timer interrupt.\n");
+ oprofile_timer_init(&oprofile_ops);
}
-
+
+ err = oprofilefs_register();
if (err)
- goto out_exit;
-out:
+ oprofile_arch_exit();
+
return err;
-out_exit:
- oprofile_arch_exit();
- goto out;
}
extern unsigned long fs_buffer_size;
extern unsigned long fs_cpu_buffer_size;
extern unsigned long fs_buffer_watershed;
-extern struct oprofile_operations * oprofile_ops;
+extern struct oprofile_operations oprofile_ops;
extern unsigned long oprofile_started;
+extern unsigned long backtrace_depth;
struct super_block;
struct dentry;
void oprofile_create_files(struct super_block * sb, struct dentry * root);
+void oprofile_timer_init(struct oprofile_operations * ops);
+
+int oprofile_set_backtrace(unsigned long depth);
#endif /* OPROF_H */
&cpu_buf->sample_received);
oprofilefs_create_ro_ulong(sb, cpudir, "sample_lost_overflow",
&cpu_buf->sample_lost_overflow);
+ oprofilefs_create_ro_ulong(sb, cpudir, "backtrace_aborted",
+ &cpu_buf->backtrace_aborted);
}
oprofilefs_create_ro_atomic(sb, dir, "sample_lost_no_mm",
&oprofile_stats.sample_lost_no_mapping);
oprofilefs_create_ro_atomic(sb, dir, "event_lost_overflow",
&oprofile_stats.event_lost_overflow);
+ oprofilefs_create_ro_atomic(sb, dir, "bt_lost_no_mapping",
+ &oprofile_stats.bt_lost_no_mapping);
}
struct oprofile_stat_struct {
atomic_t sample_lost_no_mm;
atomic_t sample_lost_no_mapping;
+ atomic_t bt_lost_no_mapping;
atomic_t event_lost_overflow;
};
#include <linux/profile.h>
#include <linux/init.h>
#include <asm/ptrace.h>
-
+
+#include "oprof.h"
+
static int timer_notify(struct pt_regs *regs)
{
- int cpu = smp_processor_id();
- unsigned long eip = profile_pc(regs);
-
- oprofile_add_sample(eip, !user_mode(regs), 0, cpu);
+ oprofile_add_sample(regs, 0);
return 0;
}
}
-static struct oprofile_operations timer_ops = {
- .start = timer_start,
- .stop = timer_stop,
- .cpu_type = "timer"
-};
-
-
-void __init timer_init(struct oprofile_operations ** ops)
+void __init oprofile_timer_init(struct oprofile_operations * ops)
{
- *ops = &timer_ops;
- printk(KERN_INFO "oprofile: using timer interrupt.\n");
+ ops->create_files = NULL;
+ ops->setup = NULL;
+ ops->shutdown = NULL;
+ ops->start = timer_start;
+ ops->stop = timer_stop;
+ ops->cpu_type = "timer";
}
# Makefile for most of the non-PCI devices in PA-RISC machines
#
-obj-y :=
-obj-m :=
-obj-n :=
-obj- :=
-
# I/O SAPIC is also on IA64 platforms.
# The two could be merged into a common source some day.
obj-$(CONFIG_IOSAPIC) += iosapic.o
# obj-$(CONFIG_IOMMU_CCIO) += ccio-rm-dma.o
obj-$(CONFIG_IOMMU_CCIO) += ccio-dma.o
-obj-y += gsc.o
+obj-$(CONFIG_GSC) += gsc.o
obj-$(CONFIG_HPPB) += hppb.o
obj-$(CONFIG_GSC_DINO) += dino.o
#include <linux/errno.h>
#include <linux/init.h>
-#include <linux/irq.h>
+#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/types.h>
#define VIPER_INT_WORD 0xFFFBF088 /* addr of viper interrupt word */
-static int asp_choose_irq(struct parisc_device *dev)
+static void asp_choose_irq(struct parisc_device *dev, void *ctrl)
{
- int irq = -1;
+ int irq;
switch (dev->id.sversion) {
- case 0x71: irq = 22; break; /* SCSI */
- case 0x72: irq = 23; break; /* LAN */
- case 0x73: irq = 30; break; /* HIL */
- case 0x74: irq = 24; break; /* Centronics */
- case 0x75: irq = (dev->hw_path == 4) ? 26 : 25; break; /* RS232 */
- case 0x76: irq = 21; break; /* EISA BA */
- case 0x77: irq = 20; break; /* Graphics1 */
- case 0x7a: irq = 18; break; /* Audio (Bushmaster) */
- case 0x7b: irq = 18; break; /* Audio (Scorpio) */
- case 0x7c: irq = 28; break; /* FW SCSI */
- case 0x7d: irq = 27; break; /* FDDI */
- case 0x7f: irq = 18; break; /* Audio (Outfield) */
+ case 0x71: irq = 9; break; /* SCSI */
+ case 0x72: irq = 8; break; /* LAN */
+ case 0x73: irq = 1; break; /* HIL */
+ case 0x74: irq = 7; break; /* Centronics */
+ case 0x75: irq = (dev->hw_path == 4) ? 5 : 6; break; /* RS232 */
+ case 0x76: irq = 10; break; /* EISA BA */
+ case 0x77: irq = 11; break; /* Graphics1 */
+ case 0x7a: irq = 13; break; /* Audio (Bushmaster) */
+ case 0x7b: irq = 13; break; /* Audio (Scorpio) */
+ case 0x7c: irq = 3; break; /* FW SCSI */
+ case 0x7d: irq = 4; break; /* FDDI */
+ case 0x7f: irq = 13; break; /* Audio (Outfield) */
+ default: return; /* Unknown */
}
- return irq;
+
+ gsc_asic_assign_irq(ctrl, irq, &dev->irq);
}
/* There are two register ranges we're interested in. Interrupt /
int __init
asp_init_chip(struct parisc_device *dev)
{
- struct busdevice *asp;
+ struct gsc_asic *asp;
struct gsc_irq gsc_irq;
- int irq, ret;
+ int ret;
- asp = kmalloc(sizeof(struct busdevice), GFP_KERNEL);
+ asp = kmalloc(sizeof(*asp), GFP_KERNEL);
if(!asp)
return -ENOMEM;
/* the IRQ ASP should use */
ret = -EBUSY;
- irq = gsc_claim_irq(&gsc_irq, ASP_GSC_IRQ);
- if (irq < 0) {
+ dev->irq = gsc_claim_irq(&gsc_irq, ASP_GSC_IRQ);
+ if (dev->irq < 0) {
printk(KERN_ERR "%s(): cannot get GSC irq\n", __FUNCTION__);
goto out;
}
- ret = request_irq(gsc_irq.irq, busdev_barked, 0, "asp", asp);
+ asp->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
+
+ ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "asp", asp);
if (ret < 0)
goto out;
- /* Save this for debugging later */
- asp->parent_irq = gsc_irq.irq;
- asp->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
-
/* Program VIPER to interrupt on the ASP irq */
gsc_writel((1 << (31 - ASP_GSC_IRQ)),VIPER_INT_WORD);
/* Done init'ing, register this driver */
- ret = gsc_common_irqsetup(dev, asp);
+ ret = gsc_common_setup(dev, asp);
if (ret)
goto out;
- fixup_child_irqs(dev, asp->busdev_region->data.irqbase, asp_choose_irq);
+ gsc_fixup_irqs(dev, asp, asp_choose_irq);
/* Mongoose is a sibling of Asp, not a child... */
- fixup_child_irqs(dev->parent, asp->busdev_region->data.irqbase,
- asp_choose_irq);
+ gsc_fixup_irqs(parisc_parent(dev), asp, asp_choose_irq);
/* initialize the chassis LEDs */
#ifdef CONFIG_CHASSIS_LCD_LED
register_led_driver(DISPLAY_MODEL_OLD_ASP, LED_CMD_REG_NONE,
- (char *)ASP_LED_ADDR);
+ ASP_LED_ADDR);
#endif
return 0;
#include <asm/page.h>
#include <asm/system.h>
#include <asm/io.h>
-#include <asm/irq.h>
#include <asm/hardware.h>
#include "gsc.h"
spinlock_t dinosaur_pen;
unsigned long txn_addr; /* EIR addr to generate interrupt */
u32 txn_data; /* EIR data assign to each dino */
- int irq; /* Virtual IRQ dino uses */
- struct irq_region *dino_region; /* region for this Dino */
-
- u32 imr; /* IRQ's which are enabled */
+ u32 imr; /* IRQ's which are enabled */
+ int global_irq[12]; /* map IMR bit to global irq */
#ifdef DINO_DEBUG
- unsigned int dino_irr0; /* save most recent IRQ line stat */
+ unsigned int dino_irr0; /* save most recent IRQ line stat */
#endif
};
struct dino_device *d = DINO_DEV(parisc_walk_tree(bus->bridge));
u32 local_bus = (bus->parent == NULL) ? 0 : bus->secondary;
u32 v = DINO_CFG_TOK(local_bus, devfn, where & ~3);
- unsigned long base_addr = d->hba.base_addr;
+ void __iomem *base_addr = d->hba.base_addr;
unsigned long flags;
spin_lock_irqsave(&d->dinosaur_pen, flags);
/* tell HW which CFG address */
- gsc_writel(v, base_addr + DINO_PCI_ADDR);
+ __raw_writel(v, base_addr + DINO_PCI_ADDR);
/* generate cfg read cycle */
if (size == 1) {
- *val = gsc_readb(base_addr + DINO_CONFIG_DATA + (where & 3));
+ *val = readb(base_addr + DINO_CONFIG_DATA + (where & 3));
} else if (size == 2) {
- *val = le16_to_cpu(gsc_readw(base_addr +
- DINO_CONFIG_DATA + (where & 2)));
+ *val = readw(base_addr + DINO_CONFIG_DATA + (where & 2));
} else if (size == 4) {
- *val = le32_to_cpu(gsc_readl(base_addr + DINO_CONFIG_DATA));
+ *val = readl(base_addr + DINO_CONFIG_DATA);
}
spin_unlock_irqrestore(&d->dinosaur_pen, flags);
struct dino_device *d = DINO_DEV(parisc_walk_tree(bus->bridge));
u32 local_bus = (bus->parent == NULL) ? 0 : bus->secondary;
u32 v = DINO_CFG_TOK(local_bus, devfn, where & ~3);
- unsigned long base_addr = d->hba.base_addr;
+ void __iomem *base_addr = d->hba.base_addr;
unsigned long flags;
spin_lock_irqsave(&d->dinosaur_pen, flags);
/* avoid address stepping feature */
- gsc_writel(v & 0xffffff00, base_addr + DINO_PCI_ADDR);
- gsc_readl(base_addr + DINO_CONFIG_DATA);
+ __raw_writel(v & 0xffffff00, base_addr + DINO_PCI_ADDR);
+ __raw_readl(base_addr + DINO_CONFIG_DATA);
/* tell HW which CFG address */
- gsc_writel(v, base_addr + DINO_PCI_ADDR);
+ __raw_writel(v, base_addr + DINO_PCI_ADDR);
	/* generate cfg write cycle */
if (size == 1) {
- gsc_writeb(val, base_addr + DINO_CONFIG_DATA + (where & 3));
+ writeb(val, base_addr + DINO_CONFIG_DATA + (where & 3));
} else if (size == 2) {
- gsc_writew(cpu_to_le16(val),
- base_addr + DINO_CONFIG_DATA + (where & 2));
+ writew(val, base_addr + DINO_CONFIG_DATA + (where & 2));
} else if (size == 4) {
- gsc_writel(cpu_to_le32(val), base_addr + DINO_CONFIG_DATA);
+ writel(val, base_addr + DINO_CONFIG_DATA);
}
spin_unlock_irqrestore(&d->dinosaur_pen, flags);
* I/O port instead of MMIO.
*/
-#define cpu_to_le8(x) (x)
-#define le8_to_cpu(x) (x)
-
#define DINO_PORT_IN(type, size, mask) \
static u##size dino_in##size (struct pci_hba_data *d, u16 addr) \
{ \
unsigned long flags; \
spin_lock_irqsave(&(DINO_DEV(d)->dinosaur_pen), flags); \
/* tell HW which IO Port address */ \
- gsc_writel((u32) addr, d->base_addr + DINO_PCI_ADDR); \
+ __raw_writel((u32) addr, d->base_addr + DINO_PCI_ADDR); \
/* generate I/O PORT read cycle */ \
- v = gsc_read##type(d->base_addr+DINO_IO_DATA+(addr&mask)); \
+ v = read##type(d->base_addr+DINO_IO_DATA+(addr&mask)); \
spin_unlock_irqrestore(&(DINO_DEV(d)->dinosaur_pen), flags); \
- return le##size##_to_cpu(v); \
+ return v; \
}
DINO_PORT_IN(b, 8, 3)
unsigned long flags; \
spin_lock_irqsave(&(DINO_DEV(d)->dinosaur_pen), flags); \
/* tell HW which IO port address */ \
- gsc_writel((u32) addr, d->base_addr + DINO_PCI_ADDR); \
+ __raw_writel((u32) addr, d->base_addr + DINO_PCI_ADDR); \
/* generate cfg write cycle */ \
- gsc_write##type(cpu_to_le##size(val), d->base_addr+DINO_IO_DATA+(addr&mask)); \
+ write##type(val, d->base_addr+DINO_IO_DATA+(addr&mask)); \
spin_unlock_irqrestore(&(DINO_DEV(d)->dinosaur_pen), flags); \
}
.outl = dino_out32
};
-static void
-dino_mask_irq(void *irq_dev, int irq)
+static void dino_disable_irq(unsigned int irq)
{
- struct dino_device *dino_dev = DINO_DEV(irq_dev);
+ struct dino_device *dino_dev = irq_desc[irq].handler_data;
+ int local_irq = gsc_find_local_irq(irq, dino_dev->global_irq, irq);
DBG(KERN_WARNING "%s(0x%p, %d)\n", __FUNCTION__, irq_dev, irq);
- if (NULL == irq_dev || irq > DINO_IRQS || irq < 0) {
- printk(KERN_WARNING "%s(0x%lx, %d) - not a dino irq?\n",
- __FUNCTION__, (long) irq_dev, irq);
- BUG();
- } else {
- /*
- ** Clear the matching bit in the IMR register
- */
- dino_dev->imr &= ~(DINO_MASK_IRQ(irq));
- gsc_writel(dino_dev->imr, dino_dev->hba.base_addr+DINO_IMR);
- }
+ /* Clear the matching bit in the IMR register */
+ dino_dev->imr &= ~(DINO_MASK_IRQ(local_irq));
+ __raw_writel(dino_dev->imr, dino_dev->hba.base_addr+DINO_IMR);
}
-
-static void
-dino_unmask_irq(void *irq_dev, int irq)
+static void dino_enable_irq(unsigned int irq)
{
- struct dino_device *dino_dev = DINO_DEV(irq_dev);
+ struct dino_device *dino_dev = irq_desc[irq].handler_data;
+ int local_irq = gsc_find_local_irq(irq, dino_dev->global_irq, irq);
u32 tmp;
DBG(KERN_WARNING "%s(0x%p, %d)\n", __FUNCTION__, irq_dev, irq);
- if (NULL == irq_dev || irq > DINO_IRQS) {
- printk(KERN_WARNING "%s(): %d not a dino irq?\n",
- __FUNCTION__, irq);
- BUG();
- return;
- }
+ /*
+ ** clear pending IRQ bits
+ **
+ ** This does NOT change ILR state!
+ ** See comment below for ILR usage.
+ */
+ __raw_readl(dino_dev->hba.base_addr+DINO_IPR);
/* set the matching bit in the IMR register */
- dino_dev->imr |= DINO_MASK_IRQ(irq); /* used in dino_isr() */
- gsc_writel( dino_dev->imr, dino_dev->hba.base_addr+DINO_IMR);
+ dino_dev->imr |= DINO_MASK_IRQ(local_irq); /* used in dino_isr() */
+ __raw_writel( dino_dev->imr, dino_dev->hba.base_addr+DINO_IMR);
/* Emulate "Level Triggered" Interrupt
** Basically, a driver is blowing it if the IRQ line is asserted
** dino_isr() will read IPR and find nothing. But then catch this
** when it also checks ILR.
*/
- tmp = gsc_readl(dino_dev->hba.base_addr+DINO_ILR);
- if (tmp & DINO_MASK_IRQ(irq)) {
+ tmp = __raw_readl(dino_dev->hba.base_addr+DINO_ILR);
+ if (tmp & DINO_MASK_IRQ(local_irq)) {
DBG(KERN_WARNING "%s(): IRQ asserted! (ILR 0x%x)\n",
__FUNCTION__, tmp);
gsc_writel(dino_dev->txn_data, dino_dev->txn_addr);
}
}
-
-
-static void
-dino_enable_irq(void *irq_dev, int irq)
+static unsigned int dino_startup_irq(unsigned int irq)
{
- struct dino_device *dino_dev = DINO_DEV(irq_dev);
-
- /*
- ** clear pending IRQ bits
- **
- ** This does NOT change ILR state!
- ** See comments in dino_unmask_irq() for ILR usage.
- */
- gsc_readl(dino_dev->hba.base_addr+DINO_IPR);
-
- dino_unmask_irq(irq_dev, irq);
+ dino_enable_irq(irq);
+ return 0;
}
-
-static struct irq_region_ops dino_irq_ops = {
- .disable_irq = dino_mask_irq, /* ??? */
- .enable_irq = dino_enable_irq,
- .mask_irq = dino_mask_irq,
- .unmask_irq = dino_unmask_irq
+static struct hw_interrupt_type dino_interrupt_type = {
+ .typename = "GSC-PCI",
+ .startup = dino_startup_irq,
+ .shutdown = dino_disable_irq,
+ .enable = dino_enable_irq,
+ .disable = dino_disable_irq,
+ .ack = no_ack_irq,
+ .end = no_end_irq,
};
static irqreturn_t
dino_isr(int irq, void *intr_dev, struct pt_regs *regs)
{
- struct dino_device *dino_dev = DINO_DEV(intr_dev);
+ struct dino_device *dino_dev = intr_dev;
u32 mask;
int ilr_loop = 100;
- extern void do_irq(struct irqaction *a, int i, struct pt_regs *p);
-
/* read and acknowledge pending interrupts */
#ifdef DINO_DEBUG
dino_dev->dino_irr0 =
#endif
- mask = gsc_readl(dino_dev->hba.base_addr+DINO_IRR0) & DINO_IRR_MASK;
-
-ilr_again:
- while (mask)
- {
- int irq;
-
- irq = __ffs(mask);
+ mask = __raw_readl(dino_dev->hba.base_addr+DINO_IRR0) & DINO_IRR_MASK;
- mask &= ~(1<<irq);
+ if (mask == 0)
+ return IRQ_NONE;
- DBG(KERN_WARNING "%s(%x, %p) mask %0x\n",
+ilr_again:
+ do {
+ int local_irq = __ffs(mask);
+ int irq = dino_dev->global_irq[local_irq];
+ DBG(KERN_DEBUG "%s(%d, %p) mask 0x%x\n",
__FUNCTION__, irq, intr_dev, mask);
- do_irq(&dino_dev->dino_region->action[irq],
- dino_dev->dino_region->data.irqbase + irq,
- regs);
-
- }
+ __do_IRQ(irq, regs);
+ mask &= ~(1 << local_irq);
+ } while (mask);
/* Support for level triggered IRQ lines.
**
** device drivers may assume lines are level triggered (and not
** edge triggered like EISA/ISA can be).
*/
- mask = gsc_readl(dino_dev->hba.base_addr+DINO_ILR) & dino_dev->imr;
+ mask = __raw_readl(dino_dev->hba.base_addr+DINO_ILR) & dino_dev->imr;
if (mask) {
if (--ilr_loop > 0)
goto ilr_again;
- printk(KERN_ERR "Dino %lx: stuck interrupt %d\n", dino_dev->hba.base_addr, mask);
+ printk(KERN_ERR "Dino 0x%p: stuck interrupt %d\n",
+ dino_dev->hba.base_addr, mask);
return IRQ_NONE;
}
return IRQ_HANDLED;
}
-static int dino_choose_irq(struct parisc_device *dev)
+static void dino_assign_irq(struct dino_device *dino, int local_irq, int *irqp)
{
- int irq = -1;
+ int irq = gsc_assign_irq(&dino_interrupt_type, dino);
+ if (irq == NO_IRQ)
+ return;
+
+ *irqp = irq;
+ dino->global_irq[local_irq] = irq;
+}
+
+static void dino_choose_irq(struct parisc_device *dev, void *ctrl)
+{
+ int irq;
+ struct dino_device *dino = ctrl;
switch (dev->id.sversion) {
case 0x00084: irq = 8; break; /* PS/2 */
case 0x0008c: irq = 10; break; /* RS232 */
case 0x00096: irq = 8; break; /* PS/2 */
+ default: return; /* Unknown */
}
- return irq;
+ dino_assign_irq(dino, irq, &dev->irq);
}
static void __init
*/
#define _8MB 0x00800000UL
static void __init
-dino_card_setup(struct pci_bus *bus, unsigned long base_addr)
+dino_card_setup(struct pci_bus *bus, void __iomem *base_addr)
{
int i;
struct dino_device *dino_dev = DINO_DEV(parisc_walk_tree(bus->bridge));
res = &dino_dev->hba.lmmio_space;
res->flags = IORESOURCE_MEM;
- size = scnprintf(name, sizeof(name), "Dino LMMIO (%s)", bus->bridge->bus_id);
+ size = scnprintf(name, sizeof(name), "Dino LMMIO (%s)",
+ bus->bridge->bus_id);
res->name = kmalloc(size+1, GFP_KERNEL);
if(res->name)
strcpy((char *)res->name, name);
}
DBG("DINO GSC WRITE i=%d, start=%lx, dino addr = %lx\n",
i, res->start, base_addr + DINO_IO_ADDR_EN);
- gsc_writel(1 << i, base_addr + DINO_IO_ADDR_EN);
+ __raw_writel(1 << i, base_addr + DINO_IO_ADDR_EN);
}
static void __init
** Set Latency Timer to 0xff (not a shared bus)
** Set CACHELINE_SIZE.
*/
- dino_cfg_write(dev->bus, dev->devfn, PCI_CACHE_LINE_SIZE, 2, 0xff00 | L1_CACHE_BYTES/4);
+ dino_cfg_write(dev->bus, dev->devfn,
+ PCI_CACHE_LINE_SIZE, 2, 0xff00 | L1_CACHE_BYTES/4);
/*
** Program INT_LINE for card-mode devices.
struct dino_device *dino_dev = DINO_DEV(parisc_walk_tree(bus->bridge));
int port_base = HBA_PORT_BASE(dino_dev->hba.hba_num);
- DBG(KERN_WARNING "%s(0x%p) bus %d sysdata 0x%p\n",
- __FUNCTION__, bus, bus->secondary, bus->bridge->platform_data);
+ DBG(KERN_WARNING "%s(0x%p) bus %d platform_data 0x%p\n",
+ __FUNCTION__, bus, bus->secondary,
+ bus->bridge->platform_data);
/* Firmware doesn't set up card-mode dino, so we have to */
if (is_card_dino(&dino_dev->hba.dev->id)) {
for(i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) {
- if((bus->self->resource[i].flags & (IORESOURCE_IO | IORESOURCE_MEM)) == 0)
+ if((bus->self->resource[i].flags &
+ (IORESOURCE_IO | IORESOURCE_MEM)) == 0)
continue;
if(bus->self->resource[i].flags & IORESOURCE_MEM) {
u32 irq_pin;
- dino_cfg_read(dev->bus, dev->devfn, PCI_INTERRUPT_PIN, 1, &irq_pin);
- dev->irq = (irq_pin + PCI_SLOT(dev->devfn) - 1) % 4 ;
- dino_cfg_write(dev->bus, dev->devfn, PCI_INTERRUPT_LINE, 1, dev->irq);
- dev->irq += dino_dev->dino_region->data.irqbase;
- printk(KERN_WARNING "Device %s has undefined IRQ, setting to %d\n", dev->slot_name, irq_pin);
+ dino_cfg_read(dev->bus, dev->devfn,
+ PCI_INTERRUPT_PIN, 1, &irq_pin);
+ irq_pin = (irq_pin + PCI_SLOT(dev->devfn) - 1) % 4 ;
+ printk(KERN_WARNING "Device %s has undefined IRQ, "
+ "setting to %d\n", dev->slot_name,
+ irq_pin);
+ dino_cfg_write(dev->bus, dev->devfn,
+ PCI_INTERRUPT_LINE, 1, irq_pin);
+ dino_assign_irq(dino_dev, irq_pin, &dev->irq);
#else
dev->irq = 65535;
printk(KERN_WARNING "Device %s has unassigned IRQ\n", dev->slot_name);
} else {
/* Adjust INT_LINE for that busses region */
- dev->irq += dino_dev->dino_region->data.irqbase;
+ dino_assign_irq(dino_dev, dev->irq, &dev->irq);
}
}
}
{
u32 brdg_feat = 0x00784e05;
- gsc_writel(0x00000000, dino_dev->hba.base_addr+DINO_GMASK);
- gsc_writel(0x00000001, dino_dev->hba.base_addr+DINO_IO_FBB_EN);
- gsc_writel(0x00000000, dino_dev->hba.base_addr+DINO_ICR);
+ __raw_writel(0x00000000, dino_dev->hba.base_addr+DINO_GMASK);
+ __raw_writel(0x00000001, dino_dev->hba.base_addr+DINO_IO_FBB_EN);
+ __raw_writel(0x00000000, dino_dev->hba.base_addr+DINO_ICR);
#if 1
/* REVISIT - should be a runtime check (eg if (CPU_IS_PCX_L) ...) */
*/
brdg_feat &= ~0x4; /* UXQL */
#endif
- gsc_writel( brdg_feat, dino_dev->hba.base_addr+DINO_BRDG_FEAT);
+ __raw_writel( brdg_feat, dino_dev->hba.base_addr+DINO_BRDG_FEAT);
/*
** Don't enable address decoding until we know which I/O range
** currently is available from the host. Only affects MMIO
** and not I/O port space.
*/
- gsc_writel(0x00000000, dino_dev->hba.base_addr+DINO_IO_ADDR_EN);
+ __raw_writel(0x00000000, dino_dev->hba.base_addr+DINO_IO_ADDR_EN);
- gsc_writel(0x00000000, dino_dev->hba.base_addr+DINO_DAMODE);
- gsc_writel(0x00222222, dino_dev->hba.base_addr+DINO_PCIROR);
- gsc_writel(0x00222222, dino_dev->hba.base_addr+DINO_PCIWOR);
+ __raw_writel(0x00000000, dino_dev->hba.base_addr+DINO_DAMODE);
+ __raw_writel(0x00222222, dino_dev->hba.base_addr+DINO_PCIROR);
+ __raw_writel(0x00222222, dino_dev->hba.base_addr+DINO_PCIWOR);
- gsc_writel(0x00000040, dino_dev->hba.base_addr+DINO_MLTIM);
- gsc_writel(0x00000080, dino_dev->hba.base_addr+DINO_IO_CONTROL);
- gsc_writel(0x0000008c, dino_dev->hba.base_addr+DINO_TLTIM);
+ __raw_writel(0x00000040, dino_dev->hba.base_addr+DINO_MLTIM);
+ __raw_writel(0x00000080, dino_dev->hba.base_addr+DINO_IO_CONTROL);
+ __raw_writel(0x0000008c, dino_dev->hba.base_addr+DINO_TLTIM);
/* Disable PAMR before writing PAPR */
- gsc_writel(0x0000007e, dino_dev->hba.base_addr+DINO_PAMR);
- gsc_writel(0x0000007f, dino_dev->hba.base_addr+DINO_PAPR);
- gsc_writel(0x00000000, dino_dev->hba.base_addr+DINO_PAMR);
+ __raw_writel(0x0000007e, dino_dev->hba.base_addr+DINO_PAMR);
+ __raw_writel(0x0000007f, dino_dev->hba.base_addr+DINO_PAPR);
+ __raw_writel(0x00000000, dino_dev->hba.base_addr+DINO_PAMR);
/*
** Dino ERS encourages enabling FBB (0x6f).
** We can't until we know *all* devices below us can support it.
** (Something in device configuration header tells us).
*/
- gsc_writel(0x0000004f, dino_dev->hba.base_addr+DINO_PCICMD);
+ __raw_writel(0x0000004f, dino_dev->hba.base_addr+DINO_PCICMD);
/* Somewhere, the PCI spec says give devices 1 second
** to recover from the #RESET being de-asserted.
* since PDC has already initialized this.
*/
- io_addr = gsc_readl(dino_dev->hba.base_addr + DINO_IO_ADDR_EN);
+ io_addr = __raw_readl(dino_dev->hba.base_addr + DINO_IO_ADDR_EN);
if (io_addr == 0) {
printk(KERN_WARNING "%s: No PCI devices enabled.\n", name);
return -ENODEV;
** still only has 11 IRQ input lines - just map some of them
** to a different processor.
*/
- dino_dev->irq = gsc_alloc_irq(&gsc_irq);
+ dev->irq = gsc_alloc_irq(&gsc_irq);
dino_dev->txn_addr = gsc_irq.txn_addr;
dino_dev->txn_data = gsc_irq.txn_data;
eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
** Dino needs a PA "IRQ" to get a processor's attention.
** arch/parisc/kernel/irq.c returns an EIRR bit.
*/
- if (dino_dev->irq < 0) {
+ if (dev->irq < 0) {
printk(KERN_WARNING "%s: gsc_alloc_irq() failed\n", name);
return 1;
}
- status = request_irq(dino_dev->irq, dino_isr, 0, name, dino_dev);
+ status = request_irq(dev->irq, dino_isr, 0, name, dino_dev);
if (status) {
printk(KERN_WARNING "%s: request_irq() failed with %d\n",
name, status);
return 1;
}
- /*
- ** Tell generic interrupt support we have 11 bits which need
- ** be checked in the interrupt handler.
- */
- dino_dev->dino_region = alloc_irq_region(DINO_IRQS, &dino_irq_ops,
- name, dino_dev);
-
- if (NULL == dino_dev->dino_region) {
- printk(KERN_WARNING "%s: alloc_irq_region() failed\n", name);
- return 1;
- }
-
/* Support the serial port which is sometimes attached on built-in
* Dino / Cujo chips.
*/
- fixup_child_irqs(dev, dino_dev->dino_region->data.irqbase,
- dino_choose_irq);
+ gsc_fixup_irqs(dev, dino_dev, dino_choose_irq);
/*
** This enables DINO to generate interrupts when it sees
** any of its inputs *change*. Just asserting an IRQ
** before it's enabled (ie unmasked) isn't good enough.
*/
- gsc_writel(eim, dino_dev->hba.base_addr+DINO_IAR0);
+ __raw_writel(eim, dino_dev->hba.base_addr+DINO_IAR0);
/*
** Some platforms don't clear Dino's IRR0 register at boot time.
** Reading will clear it now.
*/
- gsc_readl(dino_dev->hba.base_addr+DINO_IRR0);
+ __raw_readl(dino_dev->hba.base_addr+DINO_IRR0);
/* allocate I/O Port resource region */
res = &dino_dev->hba.io_space;
res->end = res->start + (HBA_PORT_SPACE_SIZE - 1);
res->flags = IORESOURCE_IO; /* do not mark it busy ! */
if (request_resource(&ioport_resource, res) < 0) {
- printk(KERN_ERR "%s: request I/O Port region failed 0x%lx/%lx (hpa 0x%lx)\n",
- name, res->start, res->end, dino_dev->hba.base_addr);
+ printk(KERN_ERR "%s: request I/O Port region failed "
+ "0x%lx/%lx (hpa 0x%p)\n",
+ name, res->start, res->end, dino_dev->hba.base_addr);
return 1;
}
{
struct dino_device *dino_dev; // Dino specific control struct
const char *version = "unknown";
- const int name_len = 32;
- char hw_path[64];
char *name;
int is_cujo = 0;
struct pci_bus *bus;
- name = kmalloc(name_len, GFP_KERNEL);
- if(name) {
- print_pa_hwpath(dev, hw_path);
- snprintf(name, name_len, "Dino [%s]", hw_path);
- }
- else
- name = "Dino";
-
+ name = "Dino";
if (is_card_dino(&dev->id)) {
version = "3.x (card mode)";
} else {
#ifdef CONFIG_IOMMU_CCIO
printk(KERN_WARNING "Enabling Cujo 2.0 bug workaround\n");
if (dev->hpa == (unsigned long)CUJO_RAVEN_ADDR) {
- ccio_cujo20_fixup(dev->parent, CUJO_RAVEN_BADPAGE);
+ ccio_cujo20_fixup(dev, CUJO_RAVEN_BADPAGE);
} else if (dev->hpa == (unsigned long)CUJO_FIREHAWK_ADDR) {
- ccio_cujo20_fixup(dev->parent, CUJO_FIREHAWK_BADPAGE);
+ ccio_cujo20_fixup(dev, CUJO_FIREHAWK_BADPAGE);
} else {
printk("Don't recognise Cujo at address 0x%lx, not enabling workaround\n", dev->hpa);
}
memset(dino_dev, 0, sizeof(struct dino_device));
dino_dev->hba.dev = dev;
- dino_dev->hba.base_addr = dev->hpa; /* faster access */
+ dino_dev->hba.base_addr = ioremap(dev->hpa, 4096); /* faster access */
dino_dev->hba.lmmio_space_offset = 0; /* CPU addrs == bus addrs */
- dino_dev->dinosaur_pen = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&dino_dev->dinosaur_pen);
dino_dev->hba.iommu = ccio_get_iommu(dev);
if (is_card_dino(&dev->id)) {
#include <linux/init.h>
#include <linux/ioport.h>
-#include <linux/irq.h>
+#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#define SNAKES_EEPROM_BASE_ADDR 0xF0810400
#define MIRAGE_EEPROM_BASE_ADDR 0xF00C0400
-static spinlock_t eisa_irq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(eisa_irq_lock);
/* We can only have one EISA adapter in the system because neither
* implementation can be flexed.
/* called by free irq */
-static void eisa_disable_irq(void *irq_dev, int irq)
+static void eisa_disable_irq(unsigned int irq)
{
unsigned long flags;
}
/* called by request irq */
-static void eisa_enable_irq(void *irq_dev, int irq)
+static void eisa_enable_irq(unsigned int irq)
{
unsigned long flags;
EISA_DBG("enable irq %d\n", irq);
EISA_DBG("pic1 mask %02x\n", eisa_in8(0xa1));
}
-static void eisa_mask_irq(void *irq_dev, int irq)
+static unsigned int eisa_startup_irq(unsigned int irq)
{
- unsigned long flags;
- EISA_DBG("mask irq %d\n", irq);
-
- /* mask irq */
- spin_lock_irqsave(&eisa_irq_lock, flags);
- if (irq & 8) {
- slave_mask |= (1 << (irq&7));
- eisa_out8(slave_mask, 0xa1);
- } else {
- master_mask |= (1 << (irq&7));
- eisa_out8(master_mask, 0x21);
- }
- spin_unlock_irqrestore(&eisa_irq_lock, flags);
-}
-
-static void eisa_unmask_irq(void *irq_dev, int irq)
-{
- unsigned long flags;
- EISA_DBG("unmask irq %d\n", irq);
-
- /* unmask */
- spin_lock_irqsave(&eisa_irq_lock, flags);
- if (irq & 8) {
- slave_mask &= ~(1 << (irq&7));
- eisa_out8(slave_mask, 0xa1);
- } else {
- master_mask &= ~(1 << (irq&7));
- eisa_out8(master_mask, 0x21);
- }
- spin_unlock_irqrestore(&eisa_irq_lock, flags);
+ eisa_enable_irq(irq);
+ return 0;
}
-static struct irqaction action[IRQ_PER_REGION];
-
-/* EISA needs to be fixed at IRQ region #0 (EISA_IRQ_REGION) */
-static struct irq_region eisa_irq_region = {
- .ops = { eisa_disable_irq, eisa_enable_irq, eisa_mask_irq, eisa_unmask_irq },
- .data = { .name = "EISA", .irqbase = 0 },
- .action = action,
+static struct hw_interrupt_type eisa_interrupt_type = {
+ .typename = "EISA",
+ .startup = eisa_startup_irq,
+ .shutdown = eisa_disable_irq,
+ .enable = eisa_enable_irq,
+ .disable = eisa_disable_irq,
+ .ack = no_ack_irq,
+ .end = no_end_irq,
};
-static irqreturn_t eisa_irq(int _, void *intr_dev, struct pt_regs *regs)
+static irqreturn_t eisa_irq(int wax_irq, void *intr_dev, struct pt_regs *regs)
{
- extern void do_irq(struct irqaction *a, int i, struct pt_regs *p);
int irq = gsc_readb(0xfc01f000); /* EISA supports 16 irqs */
unsigned long flags;
}
spin_unlock_irqrestore(&eisa_irq_lock, flags);
-
- do_irq(&eisa_irq_region.action[irq], EISA_IRQ_REGION + irq, regs);
+ __do_IRQ(irq, regs);
spin_lock_irqsave(&eisa_irq_lock, flags);
/* unmask */
return IRQ_HANDLED;
}
+static struct irqaction irq2_action = {
+ .handler = dummy_irq2_handler,
+ .name = "cascade",
+};
+
static void init_eisa_pic(void)
{
unsigned long flags;
static int __devinit eisa_probe(struct parisc_device *dev)
{
- int result;
+ int i, result;
char *name = is_mongoose(dev) ? "Mongoose" : "Wax";
}
pcibios_register_hba(&eisa_dev.hba);
- result = request_irq(dev->irq, eisa_irq, SA_SHIRQ, "EISA", NULL);
+ result = request_irq(dev->irq, eisa_irq, SA_SHIRQ, "EISA", &eisa_dev);
if (result) {
printk(KERN_ERR "EISA: request_irq failed!\n");
return result;
}
/* Reserve IRQ2 */
- action[2].handler = dummy_irq2_handler;
- action[2].name = "cascade";
+ irq_desc[2].action = &irq2_action;
- eisa_irq_region.data.dev = dev;
- irq_region[0] = &eisa_irq_region;
+ for (i = 0; i < 16; i++) {
+ irq_desc[i].handler = &eisa_interrupt_type;
+ }
EISA_bus = 1;
if (dev->num_addrs) {
#include <asm/hardware.h>
#include <asm/io.h>
-#include <asm/irq.h>
#include "gsc.h"
-/* This sets the vmerge boundary and size, it's here because it has to
- * be available on all platforms (zero means no-virtual merging) */
-unsigned long parisc_vmerge_boundary = 0;
-unsigned long parisc_vmerge_max_size = 0;
-
#undef DEBUG
#ifdef DEBUG
{
int c = irq;
- irq += IRQ_FROM_REGION(CPU_IRQ_REGION); /* virtualize the IRQ first */
+ irq += CPU_IRQ_BASE; /* virtualize the IRQ first */
irq = txn_claim_irq(irq);
if (irq < 0) {
EXPORT_SYMBOL(gsc_alloc_irq);
EXPORT_SYMBOL(gsc_claim_irq);
-/* IRQ bits must be numbered from Most Significant Bit */
-#define GSC_FIX_IRQ(x) (31-(x))
-#define GSC_MASK_IRQ(x) (1<<(GSC_FIX_IRQ(x)))
-
/* Common interrupt demultiplexer used by Asp, Lasi & Wax. */
-irqreturn_t busdev_barked(int busdev_irq, void *dev, struct pt_regs *regs)
+irqreturn_t gsc_asic_intr(int gsc_asic_irq, void *dev, struct pt_regs *regs)
{
- unsigned long irq;
- struct busdevice *busdev = (struct busdevice *) dev;
-
- /*
- Don't need to protect OFFSET_IRR with spinlock since this is
- the only place it's touched.
- Protect busdev_region by disabling this region's interrupts,
- modifying the region, and then re-enabling the region.
- */
-
- irq = gsc_readl(busdev->hpa+OFFSET_IRR);
- if (irq == 0) {
- printk(KERN_ERR "%s: barking without apparent reason.\n", busdev->name);
- } else {
- DEBPRINTK ("%s (0x%x) barked, mask=0x%x, irq=%d\n",
- busdev->name, busdev->busdev_region->data.irqbase,
- irq, GSC_FIX_IRQ(ffs(irq))+1 );
-
- do_irq_mask(irq, busdev->busdev_region, regs);
- }
+ unsigned long irr;
+ struct gsc_asic *gsc_asic = dev;
+
+ irr = gsc_readl(gsc_asic->hpa + OFFSET_IRR);
+ if (irr == 0)
+ return IRQ_NONE;
+
+ DEBPRINTK("%s intr, mask=0x%x\n", gsc_asic->name, irr);
+
+ do {
+ int local_irq = __ffs(irr);
+ unsigned int irq = gsc_asic->global_irq[local_irq];
+ __do_IRQ(irq, regs);
+ irr &= ~(1 << local_irq);
+ } while (irr);
+
return IRQ_HANDLED;
}
-static void
-busdev_disable_irq(void *irq_dev, int irq)
+int gsc_find_local_irq(unsigned int irq, int *global_irqs, int limit)
{
- /* Disable the IRQ line by clearing the bit in the IMR */
- u32 imr = gsc_readl(BUSDEV_DEV(irq_dev)->hpa+OFFSET_IMR);
- imr &= ~(GSC_MASK_IRQ(irq));
+ int local_irq;
- DEBPRINTK( KERN_WARNING "%s(%p, %d) %s: IMR 0x%x\n",
- __FUNCTION__, irq_dev, irq, BUSDEV_DEV(irq_dev)->name, imr);
+ for (local_irq = 0; local_irq < limit; local_irq++) {
+ if (global_irqs[local_irq] == irq)
+ return local_irq;
+ }
- gsc_writel(imr, BUSDEV_DEV(irq_dev)->hpa+OFFSET_IMR);
+ return NO_IRQ;
}
+static void gsc_asic_disable_irq(unsigned int irq)
+{
+ struct gsc_asic *irq_dev = irq_desc[irq].handler_data;
+ int local_irq = gsc_find_local_irq(irq, irq_dev->global_irq, 32);
+ u32 imr;
+
-static void
-busdev_enable_irq(void *irq_dev, int irq)
+	/* Disable the IRQ line by clearing the bit in the IMR */
+	imr = gsc_readl(irq_dev->hpa + OFFSET_IMR);
+
+	DEBPRINTK(KERN_DEBUG "%s(%d) %s: IMR 0x%x\n", __FUNCTION__, irq,
+		irq_dev->name, imr);
+	imr &= ~(1 << local_irq);
+	gsc_writel(imr, irq_dev->hpa + OFFSET_IMR);
+}
+
+static void gsc_asic_enable_irq(unsigned int irq)
{
- /* Enable the IRQ line by setting the bit in the IMR */
- unsigned long addr = BUSDEV_DEV(irq_dev)->hpa + OFFSET_IMR;
- u32 imr = gsc_readl(addr);
- imr |= GSC_MASK_IRQ(irq);
+ struct gsc_asic *irq_dev = irq_desc[irq].handler_data;
+ int local_irq = gsc_find_local_irq(irq, irq_dev->global_irq, 32);
+ u32 imr;
-	DEBPRINTK (KERN_WARNING "%s(%p, %d) %s: IMR 0x%x\n",
-		__FUNCTION__, irq_dev, irq, BUSDEV_DEV(irq_dev)->name, imr);
-	gsc_writel(imr, addr);
-// gsc_writel(~0L, addr);
+	/* Enable the IRQ line by setting the bit in the IMR */
+	imr = gsc_readl(irq_dev->hpa + OFFSET_IMR);
+
+	DEBPRINTK(KERN_DEBUG "%s(%d) %s: IMR 0x%x\n", __FUNCTION__, irq,
+		irq_dev->name, imr);
+	imr |= 1 << local_irq;
+	gsc_writel(imr, irq_dev->hpa + OFFSET_IMR);
+ /*
+ * FIXME: read IPR to make sure the IRQ isn't already pending.
+ * If so, we need to read IRR and manually call do_irq().
+ */
+}
-/* FIXME: read IPR to make sure the IRQ isn't already pending.
-** If so, we need to read IRR and manually call do_irq_mask().
-** This code should be shared with busdev_unmask_irq().
-*/
+static unsigned int gsc_asic_startup_irq(unsigned int irq)
+{
+ gsc_asic_enable_irq(irq);
+ return 0;
}
-static void
-busdev_mask_irq(void *irq_dev, int irq)
+static struct hw_interrupt_type gsc_asic_interrupt_type = {
+ .typename = "GSC-ASIC",
+ .startup = gsc_asic_startup_irq,
+ .shutdown = gsc_asic_disable_irq,
+ .enable = gsc_asic_enable_irq,
+ .disable = gsc_asic_disable_irq,
+ .ack = no_ack_irq,
+ .end = no_end_irq,
+};
+
+int gsc_assign_irq(struct hw_interrupt_type *type, void *data)
{
-/* FIXME: Clear the IMR bit in busdev for that IRQ */
+ static int irq = GSC_IRQ_BASE;
+
+ if (irq > GSC_IRQ_MAX)
+ return NO_IRQ;
+
+ irq_desc[irq].handler = type;
+ irq_desc[irq].handler_data = data;
+ return irq++;
}
-static void
-busdev_unmask_irq(void *irq_dev, int irq)
+void gsc_asic_assign_irq(struct gsc_asic *asic, int local_irq, int *irqp)
{
-/* FIXME: Read IPR. Set the IMR bit in busdev for that IRQ.
- call do_irq_mask() if IPR is non-zero
-*/
+ int irq = gsc_assign_irq(&gsc_asic_interrupt_type, asic);
+ if (irq == NO_IRQ)
+ return;
+
+ *irqp = irq;
+ asic->global_irq[local_irq] = irq;
}
-struct irq_region_ops busdev_irq_ops = {
- .disable_irq = busdev_disable_irq,
- .enable_irq = busdev_enable_irq,
- .mask_irq = busdev_mask_irq,
- .unmask_irq = busdev_unmask_irq
-};
+void gsc_fixup_irqs(struct parisc_device *parent, void *ctrl,
+ void (*choose_irq)(struct parisc_device *, void *))
+{
+ struct device *dev;
+ list_for_each_entry(dev, &parent->dev.children, node) {
+ struct parisc_device *padev = to_parisc_device(dev);
-int gsc_common_irqsetup(struct parisc_device *parent, struct busdevice *busdev)
+ /* work-around for 715/64 and others which have parent
+ at path [5] and children at path [5/0/x] */
+ if (padev->id.hw_type == HPHW_FAULTY)
+ return gsc_fixup_irqs(padev, ctrl, choose_irq);
+ choose_irq(padev, ctrl);
+ }
+}
+
+int gsc_common_setup(struct parisc_device *parent, struct gsc_asic *gsc_asic)
{
struct resource *res;
- busdev->gsc = parent;
-
- /* the IRQs we simulate */
- busdev->busdev_region = alloc_irq_region(32, &busdev_irq_ops,
- busdev->name, busdev);
- if (!busdev->busdev_region)
- return -ENOMEM;
+ gsc_asic->gsc = parent;
/* allocate resource region */
- res = request_mem_region(busdev->hpa, 0x100000, busdev->name);
+ res = request_mem_region(gsc_asic->hpa, 0x100000, gsc_asic->name);
if (res) {
res->flags = IORESOURCE_MEM; /* do not mark it busy ! */
}
#if 0
- printk(KERN_WARNING "%s IRQ %d EIM 0x%x", busdev->name,
- busdev->parent_irq, busdev->eim);
- if (gsc_readl(busdev->hpa + OFFSET_IMR))
+ printk(KERN_WARNING "%s IRQ %d EIM 0x%x", gsc_asic->name,
+ parent->irq, gsc_asic->eim);
+ if (gsc_readl(gsc_asic->hpa + OFFSET_IMR))
printk(" IMR is non-zero! (0x%x)",
- gsc_readl(busdev->hpa + OFFSET_IMR));
+ gsc_readl(gsc_asic->hpa + OFFSET_IMR));
printk("\n");
#endif
int irq; /* virtual IRQ */
};
-struct busdevice {
+struct gsc_asic {
struct parisc_device *gsc;
unsigned long hpa;
char *name;
int version;
int type;
- int parent_irq;
int eim;
- struct irq_region *busdev_region;
+ int global_irq[32];
};
-/* short cut to keep the compiler happy */
-#define BUSDEV_DEV(x) ((struct busdevice *) (x))
+int gsc_common_setup(struct parisc_device *parent, struct gsc_asic *gsc_asic);
+int gsc_alloc_irq(struct gsc_irq *dev); /* dev needs an irq */
+int gsc_claim_irq(struct gsc_irq *dev, int irq); /* dev needs this irq */
+int gsc_assign_irq(struct hw_interrupt_type *type, void *data);
+int gsc_find_local_irq(unsigned int irq, int *global_irq, int limit);
+void gsc_fixup_irqs(struct parisc_device *parent, void *ctrl,
+ void (*choose)(struct parisc_device *child, void *ctrl));
+void gsc_asic_assign_irq(struct gsc_asic *asic, int local_irq, int *irqp);
-int gsc_common_irqsetup(struct parisc_device *parent, struct busdevice *busdev);
-extern int gsc_alloc_irq(struct gsc_irq *dev); /* dev needs an irq */
-extern int gsc_claim_irq(struct gsc_irq *dev, int irq); /* dev needs this irq */
-
-irqreturn_t busdev_barked(int busdev_irq, void *dev, struct pt_regs *regs);
+irqreturn_t gsc_asic_intr(int irq, void *dev, struct pt_regs *regs);
** iosapic_register().
**
**
-** IRQ region notes
-** ----------------
-** The data passed to iosapic_interrupt() is per IRQ line.
-** Each IRQ line will get one txn_addr/data pair. Thus each IRQ region,
-** will have several txn_addr/data pairs (up to 7 for current I/O SAPIC
-** implementations). The IRQ region "sysdata" will NOT be directly passed
-** to the interrupt handler like GSCtoPCI (dino.c).
-**
-** iosapic interrupt handler will NOT call do_irq_mask().
-** It doesn't need to read a bit mask to determine which IRQ line was pulled
-** since it already knows based on vector_info passed to iosapic_interrupt().
-**
-** One IRQ number represents both an IRQ line and a driver ISR.
-** The I/O sapic driver can't manage shared IRQ lines because
-** additional data besides the IRQ number must be passed via
-** irq_region_ops. do_irq() and request_irq() must manage
-** a sharing a bit in the mask.
-**
-** iosapic_interrupt() replaces do_irq_mask() and calls do_irq().
-** Which IRQ line was asserted is already known since each
-** line has unique data associated with it. We could omit
-** iosapic_interrupt() from the calling path if it did NOT need
-** to write EOI. For unshared lines, it really doesn't.
-**
-** Unfortunately, can't optimize out EOI if IRQ line isn't "shared".
-** N-class console "device" and some sort of heartbeat actually share
-** one line though only one driver is registered...<sigh>...this was
-** true for HP-UX at least. May not be true for parisc-linux.
-**
+** IRQ handling notes
+** ------------------
+** The IO-SAPIC can indicate to the CPU which interrupt was asserted.
+** So, unlike the GSC-ASIC and Dino, we allocate one CPU interrupt per
+** IO-SAPIC interrupt and call the device driver's handler directly.
+** The IO-SAPIC driver hijacks the CPU interrupt handler so it can
+** issue the End Of Interrupt command to the IO-SAPIC.
**
** Overview of exported iosapic functions
** --------------------------------------
** o locate vector_info (needs: isi, intr_line)
** o allocate processor "irq" and get txn_addr/data
** o request_irq(processor_irq, iosapic_interrupt, vector_info,...)
-** o pcidev->irq = isi->isi_region...base + intr_line;
-**
-** iosapic_interrupt:
-** o call do_irq(vector->isi->irq_region, vector->irq_line, regs)
-** o assume level triggered and write EOI
**
** iosapic_enable_irq:
** o clear any pending IRQ on that line
**
** iosapic_disable_irq:
** o disable IRdT - call disable_irq(vector[line]->processor_irq)
-**
-** FIXME: mask/unmask
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>
-#include <linux/pci.h> /* pci cfg accessor functions */
+#include <linux/pci.h>
#include <linux/init.h>
#include <linux/slab.h>
-#include <linux/smp_lock.h>
-#include <linux/interrupt.h> /* irqaction */
-#include <linux/irq.h> /* irq_region support */
+#include <linux/interrupt.h>
#include <asm/byteorder.h> /* get in-line asm for swab */
#include <asm/pdc.h>
#include <asm/pdcpat.h>
#include <asm/page.h>
-#include <asm/segment.h>
#include <asm/system.h>
#include <asm/io.h> /* read/write functions */
#ifdef CONFIG_SUPERIO
#define IOSAPIC_IRDT_ID_EID_SHIFT 0x10
-static struct iosapic_info *iosapic_list;
-static spinlock_t iosapic_lock;
-static int iosapic_count;
+static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED;
+static inline void iosapic_eoi(void __iomem *addr, unsigned int data)
+{
+ __raw_writel(data, addr);
+}
/*
** REVISIT: future platforms may have more than one IRT.
unsigned long cell = 0;
/* init global data */
- iosapic_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&iosapic_lock);
iosapic_list = (struct iosapic_info *) NULL;
iosapic_count = 0;
return irt_find_irqline(isi, intr_slot, intr_pin);
}
-
-static irqreturn_t
-iosapic_interrupt(int irq, void *dev_id, struct pt_regs * regs)
-{
- struct vector_info *vi = (struct vector_info *) dev_id;
- extern void do_irq(struct irqaction *a, int i, struct pt_regs *p);
- int irq_num = vi->iosapic->isi_region->data.irqbase + vi->irqline;
-
- DBG("iosapic_interrupt(): irq %d line %d eoi 0x%p 0x%x\n",
- irq, vi->irqline, vi->eoi_addr, vi->eoi_data);
-
- /* Do NOT need to mask/unmask IRQ. processor is already masked. */
-
- do_irq(&vi->iosapic->isi_region->action[vi->irqline], irq_num, regs);
-
- /*
- ** PARISC only supports PCI devices below I/O SAPIC.
- ** PCI only supports level triggered in order to share IRQ lines.
- ** ergo I/O SAPIC must always issue EOI on parisc.
- **
- ** i386/ia64 support ISA devices and have to deal with
- ** edge-triggered interrupts too.
- */
- __raw_writel(vi->eoi_data, vi->eoi_addr);
- return IRQ_HANDLED;
-}
-
-
-int
-iosapic_fixup_irq(void *isi_obj, struct pci_dev *pcidev)
-{
- struct iosapic_info *isi = (struct iosapic_info *)isi_obj;
- struct irt_entry *irte = NULL; /* only used if PAT PDC */
- struct vector_info *vi;
- int isi_line; /* line used by device */
- int tmp;
-
- if (NULL == isi) {
- printk(KERN_WARNING MODULE_NAME ": hpa not registered for %s\n",
- pci_name(pcidev));
- return(-1);
- }
-
-#ifdef CONFIG_SUPERIO
- /*
- * HACK ALERT! (non-compliant PCI device support)
- *
- * All SuckyIO interrupts are routed through the PIC's on function 1.
- * But SuckyIO OHCI USB controller gets an IRT entry anyway because
- * it advertises INT D for INT_PIN. Use that IRT entry to get the
- * SuckyIO interrupt routing for PICs on function 1 (*BLEECCHH*).
- */
- if (is_superio_device(pcidev)) {
- /* We must call superio_fixup_irq() to register the pdev */
- pcidev->irq = superio_fixup_irq(pcidev);
-
- /* Don't return if need to program the IOSAPIC's IRT... */
- if (PCI_FUNC(pcidev->devfn) != SUPERIO_USB_FN)
- return pcidev->irq;
- }
-#endif /* CONFIG_SUPERIO */
-
- /* lookup IRT entry for isi/slot/pin set */
- irte = iosapic_xlate_pin(isi, pcidev);
- if (NULL == irte) {
- printk("iosapic: no IRTE for %s (IRQ not connected?)\n",
- pci_name(pcidev));
- return(-1);
- }
- DBG_IRT("iosapic_fixup_irq(): irte %p %x %x %x %x %x %x %x %x\n",
- irte,
- irte->entry_type,
- irte->entry_length,
- irte->polarity_trigger,
- irte->src_bus_irq_devno,
- irte->src_bus_id,
- irte->src_seg_id,
- irte->dest_iosapic_intin,
- (u32) irte->dest_iosapic_addr);
- isi_line = irte->dest_iosapic_intin;
- pcidev->irq = isi->isi_region->data.irqbase + isi_line;
-
- /* get vector info for this input line */
- ASSERT(NULL != isi->isi_vector);
- vi = &(isi->isi_vector[isi_line]);
- DBG_IRT("iosapic_fixup_irq: line %d vi 0x%p\n", isi_line, vi);
-
- /* If this IRQ line has already been setup, skip it */
- if (vi->irte)
- return pcidev->irq;
-
- vi->irte = irte;
-
- /* Allocate processor IRQ */
- vi->txn_irq = txn_alloc_irq();
-
-/* XXX/FIXME The txn_alloc_irq() code and related code should be moved
-** to enable_irq(). That way we only allocate processor IRQ bits
-** for devices that actually have drivers claiming them.
-** Right now we assign an IRQ to every PCI device present regardless
-** of whether it's used or not.
-*/
- if (vi->txn_irq < 0)
- panic("I/O sapic: couldn't get TXN IRQ\n");
-
- /* enable_irq() will use txn_* to program IRdT */
- vi->txn_addr = txn_alloc_addr(vi->txn_irq);
- vi->txn_data = txn_alloc_data(vi->txn_irq, 8);
- ASSERT(vi->txn_data < 256); /* matches 8 above */
-
- tmp = request_irq(vi->txn_irq, iosapic_interrupt, 0,
- vi->name, vi);
- ASSERT(tmp == 0);
-
- vi->eoi_addr = (u32 *) (isi->isi_hpa + IOSAPIC_REG_EOI);
- vi->eoi_data = cpu_to_le32(vi->txn_data);
- ASSERT(NULL != isi->isi_region);
-
- DBG_IRT("iosapic_fixup_irq() %d:%d %x %x line %d irq %d\n",
- PCI_SLOT(pcidev->devfn), PCI_FUNC(pcidev->irq),
- pcidev->vendor, pcidev->device, isi_line, pcidev->irq);
-
- return pcidev->irq;
-}
-
-
-static void
-iosapic_rd_irt_entry(struct vector_info *vi , u32 *dp0, u32 *dp1)
+static void iosapic_rd_irt_entry(struct vector_info *vi , u32 *dp0, u32 *dp1)
{
struct iosapic_info *isp = vi->iosapic;
u8 idx = vi->irqline;
- *dp0 = iosapic_read(isp->isi_hpa, IOSAPIC_IRDT_ENTRY(idx));
- *dp1 = iosapic_read(isp->isi_hpa, IOSAPIC_IRDT_ENTRY_HI(idx));
+ *dp0 = iosapic_read(isp->addr, IOSAPIC_IRDT_ENTRY(idx));
+ *dp1 = iosapic_read(isp->addr, IOSAPIC_IRDT_ENTRY_HI(idx));
}
-static void
-iosapic_wr_irt_entry(struct vector_info *vi, u32 dp0, u32 dp1)
+static void iosapic_wr_irt_entry(struct vector_info *vi, u32 dp0, u32 dp1)
{
struct iosapic_info *isp = vi->iosapic;
- ASSERT(NULL != isp);
- ASSERT(0 != isp->isi_hpa);
- DBG_IRT("iosapic_wr_irt_entry(): irq %d hpa %p 0x%x 0x%x\n",
- vi->irqline,
- isp->isi_hpa,
- dp0, dp1);
+ DBG_IRT("iosapic_wr_irt_entry(): irq %d hpa %lx 0x%x 0x%x\n",
+ vi->irqline, isp->isi_hpa, dp0, dp1);
- iosapic_write(isp->isi_hpa, IOSAPIC_IRDT_ENTRY(vi->irqline), dp0);
+ iosapic_write(isp->addr, IOSAPIC_IRDT_ENTRY(vi->irqline), dp0);
/* Read the window register to flush the writes down to HW */
- dp0 = readl(isp->isi_hpa+IOSAPIC_REG_WINDOW);
+ dp0 = readl(isp->addr+IOSAPIC_REG_WINDOW);
- iosapic_write(isp->isi_hpa, IOSAPIC_IRDT_ENTRY_HI(vi->irqline), dp1);
+ iosapic_write(isp->addr, IOSAPIC_IRDT_ENTRY_HI(vi->irqline), dp1);
/* Read the window register to flush the writes down to HW */
- dp1 = readl(isp->isi_hpa+IOSAPIC_REG_WINDOW);
+ dp1 = readl(isp->addr+IOSAPIC_REG_WINDOW);
}
-
/*
** set_irt prepares the data (dp0, dp1) according to the vector_info
** and target cpu (id_eid). dp0/dp1 are then used to program I/O SAPIC
}
-static void
-iosapic_disable_irq(void *irq_dev, int irq)
+static struct vector_info *iosapic_get_vector(unsigned int irq)
{
- ulong irqflags;
- struct vector_info *vi = &(((struct vector_info *) irq_dev)[irq]);
- u32 d0, d1;
-
- ASSERT(NULL != vi);
-
- IOSAPIC_LOCK(&iosapic_lock);
-
-#ifdef REVISIT_DESIGN_ISSUE
-/*
-** XXX/FIXME
-
-disable_irq()/enable_irq(): drawback of using IRQ as a "handle"
-
-Current disable_irq interface only allows the irq_region support routines
-to manage sharing of "irq" objects. The problem is the disable_irq()
-interface specifies which IRQ line needs to be disabled but does not
-identify the particular ISR which needs to be disabled. IO sapic
-(and similar code in Dino) can only support one handler per IRQ
-since they don't further encode the meaning of the IRQ number.
-irq_region support has to hide it's implementation of "shared IRQ"
-behind a function call.
-
-Encoding the IRQ would be possible by I/O SAPIC but makes life really
-complicated for the IRQ handler and not help performance.
+ return irq_desc[irq].handler_data;
+}
-Need more info on how Linux supports shared IRQ lines on a PC.
-*/
-#endif /* REVISIT_DESIGN_ISSUE */
+static void iosapic_disable_irq(unsigned int irq)
+{
+ unsigned long flags;
+ struct vector_info *vi = iosapic_get_vector(irq);
+ u32 d0, d1;
+ spin_lock_irqsave(&iosapic_lock, flags);
iosapic_rd_irt_entry(vi, &d0, &d1);
d0 |= IOSAPIC_IRDT_ENABLE;
iosapic_wr_irt_entry(vi, d0, d1);
-
- IOSAPIC_UNLOCK(&iosapic_lock);
-
- /* disable ISR for parent */
- disable_irq(vi->txn_irq);
+ spin_unlock_irqrestore(&iosapic_lock, flags);
}
-
-static void
-iosapic_enable_irq(void *dev, int irq)
+static void iosapic_enable_irq(unsigned int irq)
{
- struct vector_info *vi = &(((struct vector_info *) dev)[irq]);
+ struct vector_info *vi = iosapic_get_vector(irq);
u32 d0, d1;
- ASSERT(NULL != vi);
- ASSERT(NULL != vi->irte);
-
/* data is initialized by fixup_irq */
- ASSERT(0 < vi->txn_irq);
- ASSERT(0UL != vi->txn_data);
+ WARN_ON(vi->txn_irq == 0);
iosapic_set_irt_data(vi, &d0, &d1);
iosapic_wr_irt_entry(vi, d0, d1);
struct iosapic_info *isp = vi->iosapic;
for (d0=0x10; d0<0x1e; d0++) {
- d1 = iosapic_read(isp->isi_hpa, d0);
+ d1 = iosapic_read(isp->addr, d0);
printk(" %x", d1);
}
}
#endif
/*
- ** Issueing I/O SAPIC an EOI causes an interrupt IFF IRQ line is
- ** asserted. IRQ generally should not be asserted when a driver
- ** enables their IRQ. It can lead to "interesting" race conditions
- ** in the driver initialization sequence.
- */
- __raw_writel(vi->eoi_data, vi->eoi_addr);
+ * Issuing I/O SAPIC an EOI causes an interrupt IFF IRQ line is
+ * asserted. IRQ generally should not be asserted when a driver
+ * enables their IRQ. It can lead to "interesting" race conditions
+ * in the driver initialization sequence.
+ */
+ DBG(KERN_DEBUG "enable_irq(%d): eoi(%p, 0x%x)\n", irq,
+ vi->eoi_addr, vi->eoi_data);
+ iosapic_eoi(vi->eoi_addr, vi->eoi_data);
}
+/*
+ * PARISC only supports PCI devices below I/O SAPIC.
+ * PCI only supports level triggered in order to share IRQ lines.
+ * ergo I/O SAPIC must always issue EOI on parisc.
+ *
+ * i386/ia64 support ISA devices and have to deal with
+ * edge-triggered interrupts too.
+ */
+static void iosapic_end_irq(unsigned int irq)
+{
+ struct vector_info *vi = iosapic_get_vector(irq);
+ DBG(KERN_DEBUG "end_irq(%d): eoi(%p, 0x%x)\n", irq,
+ vi->eoi_addr, vi->eoi_data);
+ iosapic_eoi(vi->eoi_addr, vi->eoi_data);
+}
-static void
-iosapic_mask_irq(void *dev, int irq)
+static unsigned int iosapic_startup_irq(unsigned int irq)
{
- BUG();
+ iosapic_enable_irq(irq);
+ return 0;
}
+static struct hw_interrupt_type iosapic_interrupt_type = {
+ .typename = "IO-SAPIC-level",
+ .startup = iosapic_startup_irq,
+ .shutdown = iosapic_disable_irq,
+ .enable = iosapic_enable_irq,
+ .disable = iosapic_disable_irq,
+ .ack = no_ack_irq,
+ .end = iosapic_end_irq,
+// .set_affinity = iosapic_set_affinity_irq,
+};
-static void
-iosapic_unmask_irq(void *dev, int irq)
+int iosapic_fixup_irq(void *isi_obj, struct pci_dev *pcidev)
{
- BUG();
-}
+ struct iosapic_info *isi = isi_obj;
+ struct irt_entry *irte = NULL; /* only used if PAT PDC */
+ struct vector_info *vi;
+ int isi_line; /* line used by device */
+ if (!isi) {
+ printk(KERN_WARNING MODULE_NAME ": hpa not registered for %s\n",
+ pci_name(pcidev));
+ return -1;
+ }
-static struct irq_region_ops iosapic_irq_ops = {
- .disable_irq = iosapic_disable_irq,
- .enable_irq = iosapic_enable_irq,
- .mask_irq = iosapic_mask_irq,
- .unmask_irq = iosapic_unmask_irq
-};
+#ifdef CONFIG_SUPERIO
+ /*
+ * HACK ALERT! (non-compliant PCI device support)
+ *
+	 * All SuckyIO interrupts are routed through the PICs on function 1.
+ * But SuckyIO OHCI USB controller gets an IRT entry anyway because
+ * it advertises INT D for INT_PIN. Use that IRT entry to get the
+ * SuckyIO interrupt routing for PICs on function 1 (*BLEECCHH*).
+ */
+ if (is_superio_device(pcidev)) {
+ /* We must call superio_fixup_irq() to register the pdev */
+ pcidev->irq = superio_fixup_irq(pcidev);
+
+		/* Don't return if we need to program the IOSAPIC's IRT... */
+ if (PCI_FUNC(pcidev->devfn) != SUPERIO_USB_FN)
+ return pcidev->irq;
+ }
+#endif /* CONFIG_SUPERIO */
+
+ /* lookup IRT entry for isi/slot/pin set */
+ irte = iosapic_xlate_pin(isi, pcidev);
+ if (!irte) {
+ printk("iosapic: no IRTE for %s (IRQ not connected?)\n",
+ pci_name(pcidev));
+ return -1;
+ }
+ DBG_IRT("iosapic_fixup_irq(): irte %p %x %x %x %x %x %x %x %x\n",
+ irte,
+ irte->entry_type,
+ irte->entry_length,
+ irte->polarity_trigger,
+ irte->src_bus_irq_devno,
+ irte->src_bus_id,
+ irte->src_seg_id,
+ irte->dest_iosapic_intin,
+ (u32) irte->dest_iosapic_addr);
+ isi_line = irte->dest_iosapic_intin;
+
+ /* get vector info for this input line */
+ vi = isi->isi_vector + isi_line;
+ DBG_IRT("iosapic_fixup_irq: line %d vi 0x%p\n", isi_line, vi);
+
+ /* If this IRQ line has already been setup, skip it */
+ if (vi->irte)
+ goto out;
+
+ vi->irte = irte;
+
+ /* Allocate processor IRQ */
+ vi->txn_irq = txn_alloc_irq();
+
+ /*
+ * XXX/FIXME The txn_alloc_irq() code and related code should be
+ * moved to enable_irq(). That way we only allocate processor IRQ
+ * bits for devices that actually have drivers claiming them.
+ * Right now we assign an IRQ to every PCI device present,
+ * regardless of whether it's used or not.
+ */
+ if (vi->txn_irq < 0)
+ panic("I/O sapic: couldn't get TXN IRQ\n");
+
+ /* enable_irq() will use txn_* to program IRdT */
+ vi->txn_addr = txn_alloc_addr(vi->txn_irq);
+ vi->txn_data = txn_alloc_data(vi->txn_irq, 8);
+
+ vi->eoi_addr = isi->addr + IOSAPIC_REG_EOI;
+ vi->eoi_data = cpu_to_le32(vi->txn_data);
+
+ cpu_claim_irq(vi->txn_irq, &iosapic_interrupt_type, vi);
+
+ out:
+ pcidev->irq = vi->txn_irq;
+
+ DBG_IRT("iosapic_fixup_irq() %d:%d %x %x line %d irq %d\n",
+ PCI_SLOT(pcidev->devfn), PCI_FUNC(pcidev->devfn),
+ pcidev->vendor, pcidev->device, isi_line, pcidev->irq);
+
+ return pcidev->irq;
+}
/*
** o read iosapic version and squirrel that away
** o read size of IRdT.
** o allocate and initialize isi_vector[]
-** o allocate isi_region (registers region handlers)
+** o allocate irq region
*/
-void *
-iosapic_register(unsigned long hpa)
+void *iosapic_register(unsigned long hpa)
{
struct iosapic_info *isi = NULL;
struct irt_entry *irte = irt_cell;
if (vip == NULL) {
IOSAPIC_FREE(isi, struct iosapic_info, 1);
- return (NULL);
+ return NULL;
}
memset(vip, 0, sizeof(struct vector_info) * isi->isi_num_vectors);
- sprintf(isi->isi_name, "IO-SAPIC%02d", iosapic_count++);
- /*
- ** Initialize vector array
- */
for (cnt=0; cnt < isi->isi_num_vectors; cnt++, vip++) {
vip->irqline = (unsigned char) cnt;
vip->iosapic = isi;
- sprintf(vip->name, "%s-L%d", isi->isi_name, cnt);
}
-
- isi->isi_region = alloc_irq_region(isi->isi_num_vectors,
- &iosapic_irq_ops, isi->isi_name,
- (void *) isi->isi_vector);
-
- ASSERT(NULL != isi->isi_region);
- return ((void *) isi);
+ return isi;
}
u32 eoi_data; /* IA64: ? PA: swapped txn_data */
int txn_irq; /* virtual IRQ number for processor */
ulong txn_addr; /* IA64: id_eid PA: partial HPA */
- ulong txn_data; /* IA64: vector PA: EIR bit */
+ u32 txn_data; /* CPU interrupt bit */
u8 status; /* status/flags */
u8 irqline; /* INTINn(IRQ) */
- char name[32]; /* user visible identity */
};
struct iosapic_info {
- struct iosapic_info *isi_next; /* list of I/O SAPIC */
- unsigned long isi_hpa; /* physical base address */
- struct irq_region *isi_region; /* each I/O SAPIC is one region */
- struct vector_info *isi_vector; /* IRdT (IRQ line) array */
- int isi_num_vectors; /* size of IRdT array */
- int isi_status; /* status/flags */
- unsigned int isi_version; /* DEBUG: data fr version reg */
- /* round up to next cacheline */
- char isi_name[20]; /* identify region for users */
+ struct iosapic_info * isi_next; /* list of I/O SAPIC */
+ void __iomem * addr; /* remapped address */
+ unsigned long isi_hpa; /* physical base address */
+ struct vector_info * isi_vector; /* IRdT (IRQ line) array */
+ int isi_num_vectors; /* size of IRdT array */
+ int isi_status; /* status/flags */
+ unsigned int isi_version; /* DEBUG: data fr version reg */
};
#include <linux/errno.h>
#include <linux/init.h>
-#include <linux/irq.h>
+#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/pm.h>
#define LASI_IO_CONF 0x7FFFE /* LASI primary configuration register */
#define LASI_IO_CONF2 0x7FFFF /* LASI secondary configuration register */
-static int lasi_choose_irq(struct parisc_device *dev)
+static void lasi_choose_irq(struct parisc_device *dev, void *ctrl)
{
int irq;
- /*
- ** "irq" bits below are numbered relative to most significant bit.
- */
switch (dev->id.sversion) {
- case 0x74: irq = 24; break; /* Centronics */
- case 0x7B: irq = 18; break; /* Audio */
- case 0x81: irq = 17; break; /* Lasi itself */
- case 0x82: irq = 22; break; /* SCSI */
- case 0x83: irq = 11; break; /* Floppy */
- case 0x84: irq = 5; break; /* PS/2 Keyboard */
- case 0x87: irq = 13; break; /* ISDN */
- case 0x8A: irq = 23; break; /* LAN */
- case 0x8C: irq = 26; break; /* RS232 */
- case 0x8D: irq = (dev->hw_path == 13) ? 15 : 14;
- break; /* Telephone */
- default: irq = -1; break; /* unknown */
+ case 0x74: irq = 7; break; /* Centronics */
+ case 0x7B: irq = 13; break; /* Audio */
+ case 0x81: irq = 14; break; /* Lasi itself */
+ case 0x82: irq = 9; break; /* SCSI */
+ case 0x83: irq = 20; break; /* Floppy */
+ case 0x84: irq = 26; break; /* PS/2 Keyboard */
+ case 0x87: irq = 18; break; /* ISDN */
+ case 0x8A: irq = 8; break; /* LAN */
+ case 0x8C: irq = 5; break; /* RS232 */
+ case 0x8D: irq = (dev->hw_path == 13) ? 16 : 17; break;
+ /* Telephone */
+ default: return; /* unknown */
}
- return irq;
+ gsc_asic_assign_irq(ctrl, irq, &dev->irq);
}
static void __init
-lasi_init_irq(struct busdevice *this_lasi)
+lasi_init_irq(struct gsc_asic *this_lasi)
{
unsigned long lasi_base = this_lasi->hpa;
int __init
lasi_init_chip(struct parisc_device *dev)
{
- struct busdevice *lasi;
+ struct gsc_asic *lasi;
struct gsc_irq gsc_irq;
- int irq, ret;
+ int ret;
- lasi = kmalloc(sizeof(struct busdevice), GFP_KERNEL);
+ lasi = kmalloc(sizeof(*lasi), GFP_KERNEL);
if (!lasi)
return -ENOMEM;
lasi_init_irq(lasi);
/* the IRQ lasi should use */
- irq = gsc_alloc_irq(&gsc_irq);
- if (irq < 0) {
+ dev->irq = gsc_alloc_irq(&gsc_irq);
+ if (dev->irq < 0) {
printk(KERN_ERR "%s(): cannot get GSC irq\n",
__FUNCTION__);
kfree(lasi);
return -EBUSY;
}
- ret = request_irq(gsc_irq.irq, busdev_barked, 0, "lasi", lasi);
+ lasi->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
+
+ ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
if (ret < 0) {
kfree(lasi);
return ret;
}
- /* Save this for debugging later */
- lasi->parent_irq = gsc_irq.irq;
- lasi->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
-
/* enable IRQ's for devices below LASI */
gsc_writel(lasi->eim, lasi->hpa + OFFSET_IAR);
/* Done init'ing, register this driver */
- ret = gsc_common_irqsetup(dev, lasi);
+ ret = gsc_common_setup(dev, lasi);
if (ret) {
kfree(lasi);
return ret;
}
- fixup_child_irqs(dev, lasi->busdev_region->data.irqbase,
- lasi_choose_irq);
+ gsc_fixup_irqs(dev, lasi, lasi_choose_irq);
/* initialize the power off function */
/* FIXME: Record the LASI HPA for the power off function. This should
#include <linux/smp_lock.h>
#include <asm/byteorder.h>
-#include <asm/irq.h> /* for struct irq_region support */
#include <asm/pdc.h>
#include <asm/pdcpat.h>
#include <asm/page.h>
-#include <asm/segment.h>
#include <asm/system.h>
#include <asm/hardware.h> /* for register_parisc_driver() stuff */
{
pci_bios = &lba_bios_ops;
pcibios_register_hba(HBA_DATA(lba_dev));
- lba_dev->lba_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&lba_dev->lba_lock);
/*
** Set flags which depend on hw_rev
#include <linux/init.h>
#include <linux/types.h>
#include <linux/ioport.h>
-#include <linux/version.h>
+#include <linux/utsname.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
static int led_diskio = 1;
static int led_lanrxtx = 1;
static char lcd_text[32];
-static char lcd_text_default[] = "Linux " UTS_RELEASE;
+static char lcd_text_default[32];
#if 0
#define DPRINTK(x) printk x
struct pdc_chassis_info chassis_info;
int ret;
+ snprintf(lcd_text_default, sizeof(lcd_text_default),
+ "Linux %s", system_utsname.release);
+
/* Work around the buggy PDC of KittyHawk-machines */
switch (CPU_HVERSION) {
case 0x580: /* KittyHawk DC2-100 (K100) */
__FUNCTION__, i, res_size, sba_dev->ioc[i].res_map);
}
- sba_dev->sba_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&sba_dev->sba_lock);
ioc_needs_fdc = boot_cpu_data.pdc.capabilities & PDC_MODEL_IOPDIR_FDC;
#ifdef DEBUG_SBA_INIT
config PARPORT_PC_PCMCIA
tristate "Support for PCMCIA management for PC-style ports"
- depends on PARPORT!=n && HOTPLUG && (PCMCIA!=n && PARPORT_PC=m && PARPORT_PC || PARPORT_PC=y && PCMCIA)
+ depends on PARPORT!=n && (PCMCIA!=n && PARPORT_PC=m && PARPORT_PC || PARPORT_PC=y && PCMCIA)
help
Say Y here if you need PCMCIA support for your PC-style parallel
ports. If unsure, say N.
MODULE_DESCRIPTION("PCMCIA parallel port card driver");
MODULE_LICENSE("Dual MPL/GPL");
-#define INT_MODULE_PARM(n, v) static int n = v; MODULE_PARM(n, "i")
-
-/* Bit map of interrupts to choose from */
-INT_MODULE_PARM(irq_mask, 0xdeb8);
-static int irq_list[4] = { -1 };
-MODULE_PARM(irq_list, "1-4i");
+#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0)
INT_MODULE_PARM(epp_mode, 1);
parport_info_t *info;
dev_link_t *link;
client_reg_t client_reg;
- int i, ret;
+ int ret;
DEBUG(0, "parport_attach()\n");
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.Attributes2 = IO_DATA_PATH_WIDTH_8;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID|IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.Vcc = 50;
link->conf.IntType = INT_MEMORY_AND_IO;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void __exit exit_parport_cs(void)
{
pcmcia_unregister_driver(&parport_cs_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- parport_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(init_parport_cs);
* configuration space.
*/
-static spinlock_t pci_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(pci_lock);
/*
* Wrappers for all PCI configuration access functions. They just check
#
menu "PCI Hotplug Support"
- depends on HOTPLUG
config HOTPLUG_PCI
tristate "Support for PCI Hotplug (EXPERIMENTAL)"
depends on PCI && EXPERIMENTAL
+ select HOTPLUG
---help---
Say Y here if you have a motherboard with a PCI Hotplug controller.
This allows you to add and remove PCI cards while the machine is
When in doubt, say N.
-config HOTPLUG_PCI_PCIE
- tristate "PCI Express Hotplug driver"
- depends on HOTPLUG_PCI
- help
- Say Y here if you have a motherboard that supports PCI Express Native
- Hotplug
-
- To compile this driver as a module, choose M here: the
- module will be called pciehp.
-
- When in doubt, say N.
-
-config HOTPLUG_PCI_PCIE_POLL_EVENT_MODE
- bool "Use polling mechanism for hot-plug events (for testing purpose)"
- depends on HOTPLUG_PCI_PCIE
- help
- Say Y here if you want to use the polling mechanism for hot-plug
- events for early platform testing.
-
- When in doubt, say N.
-
config HOTPLUG_PCI_SHPC
tristate "SHPC PCI Hotplug driver"
depends on HOTPLUG_PCI
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_LICENSE("GPL");
MODULE_VERSION(DRIVER_VERSION);
-module_param(debug, bool, 644);
+module_param(debug, bool, 0644);
MODULE_PARM_DESC(debug, " Debugging mode enabled or not");
#define MY_NAME "acpiphp_ibm"
#include "pci_hotplug.h"
#include "../pci.h"
-#if !defined(CONFIG_HOTPLUG_PCI_FAKE_MODULE)
+#if !defined(MODULE)
#define MY_NAME "fakephp"
#else
#define MY_NAME THIS_MODULE->name
cleanup_count = 6;
goto error;
}
- newfunc = (struct pci_func *) kmalloc (sizeof (struct pci_func), GFP_KERNEL);
+ newfunc = kmalloc(sizeof(*newfunc), GFP_KERNEL);
if (!newfunc) {
err ("out of system memory\n");
return -ENOMEM;
flag = FALSE;
for (i = 0; i < 32; i++) {
if (func->devices[i]) {
- newfunc = (struct pci_func *) kmalloc (sizeof (struct pci_func), GFP_KERNEL);
+ newfunc = kmalloc(sizeof(*newfunc), GFP_KERNEL);
if (!newfunc) {
err ("out of system memory\n");
return -ENOMEM;
}
}
- newfunc = (struct pci_func *) kmalloc (sizeof (struct pci_func), GFP_KERNEL);
+ newfunc = kmalloc(sizeof(*newfunc), GFP_KERNEL);
if (!newfunc) {
err ("out of system memory\n");
return -ENOMEM;
for (i = 0; i < 32; i++) {
if (func->devices[i]) {
debug ("inside for loop, device is %x\n", i);
- newfunc = (struct pci_func *) kmalloc (sizeof (struct pci_func), GFP_KERNEL);
+ newfunc = kmalloc(sizeof(*newfunc), GFP_KERNEL);
if (!newfunc) {
err (" out of system memory\n");
return -ENOMEM;
memset (io[count], 0, sizeof (struct resource_node));
io[count]->type = IO;
io[count]->busno = func->busno;
- io[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ io[count]->devfunc = PCI_DEVFN(func->device, func->function);
io[count]->len = len[count];
if (ibmphp_check_resource(io[count], 0) == 0) {
ibmphp_add_resource (io[count]);
memset (pfmem[count], 0, sizeof (struct resource_node));
pfmem[count]->type = PFMEM;
pfmem[count]->busno = func->busno;
- pfmem[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ pfmem[count]->devfunc = PCI_DEVFN(func->device,
+ func->function);
pfmem[count]->len = len[count];
pfmem[count]->fromMem = FALSE;
if (ibmphp_check_resource (pfmem[count], 0) == 0) {
ibmphp_add_resource (pfmem[count]);
func->pfmem[count] = pfmem[count];
} else {
- mem_tmp = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ mem_tmp = kmalloc(sizeof(*mem_tmp), GFP_KERNEL);
if (!mem_tmp) {
err ("out of system memory\n");
kfree (pfmem[count]);
memset (mem[count], 0, sizeof (struct resource_node));
mem[count]->type = MEM;
mem[count]->busno = func->busno;
- mem[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ mem[count]->devfunc = PCI_DEVFN(func->device,
+ func->function);
mem[count]->len = len[count];
if (ibmphp_check_resource (mem[count], 0) == 0) {
ibmphp_add_resource (mem[count]);
memset (bus_io[count], 0, sizeof (struct resource_node));
bus_io[count]->type = IO;
bus_io[count]->busno = func->busno;
- bus_io[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ bus_io[count]->devfunc = PCI_DEVFN(func->device,
+ func->function);
bus_io[count]->len = len[count];
if (ibmphp_check_resource (bus_io[count], 0) == 0) {
ibmphp_add_resource (bus_io[count]);
memset (bus_pfmem[count], 0, sizeof (struct resource_node));
bus_pfmem[count]->type = PFMEM;
bus_pfmem[count]->busno = func->busno;
- bus_pfmem[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ bus_pfmem[count]->devfunc = PCI_DEVFN(func->device,
+ func->function);
bus_pfmem[count]->len = len[count];
bus_pfmem[count]->fromMem = FALSE;
if (ibmphp_check_resource (bus_pfmem[count], 0) == 0) {
ibmphp_add_resource (bus_pfmem[count]);
func->pfmem[count] = bus_pfmem[count];
} else {
- mem_tmp = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ mem_tmp = kmalloc(sizeof(*mem_tmp), GFP_KERNEL);
if (!mem_tmp) {
err ("out of system memory\n");
retval = -ENOMEM;
memset (bus_mem[count], 0, sizeof (struct resource_node));
bus_mem[count]->type = MEM;
bus_mem[count]->busno = func->busno;
- bus_mem[count]->devfunc = ((func->device << 3) | (func->function & 0x7));
+ bus_mem[count]->devfunc = PCI_DEVFN(func->device,
+ func->function);
bus_mem[count]->len = len[count];
if (ibmphp_check_resource (bus_mem[count], 0) == 0) {
ibmphp_add_resource (bus_mem[count]);
flag_io = TRUE;
} else {
debug ("it wants %x IO behind the bridge\n", amount_needed->io);
- io = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ io = kmalloc(sizeof(*io), GFP_KERNEL);
if (!io) {
err ("out of system memory\n");
memset (io, 0, sizeof (struct resource_node));
io->type = IO;
io->busno = func->busno;
- io->devfunc = ((func->device << 3) | (func->function & 0x7));
+ io->devfunc = PCI_DEVFN(func->device, func->function);
io->len = amount_needed->io;
if (ibmphp_check_resource (io, 1) == 0) {
debug ("were we able to add io\n");
flag_mem = TRUE;
} else {
debug ("it wants %x memory behind the bridge\n", amount_needed->mem);
- mem = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ mem = kmalloc(sizeof(*mem), GFP_KERNEL);
if (!mem) {
err ("out of system memory\n");
retval = -ENOMEM;
memset (mem, 0, sizeof (struct resource_node));
mem->type = MEM;
mem->busno = func->busno;
- mem->devfunc = ((func->device << 3) | (func->function & 0x7));
+ mem->devfunc = PCI_DEVFN(func->device, func->function);
mem->len = amount_needed->mem;
if (ibmphp_check_resource (mem, 1) == 0) {
ibmphp_add_resource (mem);
flag_pfmem = TRUE;
} else {
debug ("it wants %x pfmemory behind the bridge\n", amount_needed->pfmem);
- pfmem = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ pfmem = kmalloc(sizeof(*pfmem), GFP_KERNEL);
if (!pfmem) {
err ("out of system memory\n");
retval = -ENOMEM;
memset (pfmem, 0, sizeof (struct resource_node));
pfmem->type = PFMEM;
pfmem->busno = func->busno;
- pfmem->devfunc = ((func->device << 3) | (func->function & 0x7));
+ pfmem->devfunc = PCI_DEVFN(func->device, func->function);
pfmem->len = amount_needed->pfmem;
pfmem->fromMem = FALSE;
if (ibmphp_check_resource (pfmem, 1) == 0) {
ibmphp_add_resource (pfmem);
flag_pfmem = TRUE;
} else {
- mem_tmp = kmalloc (sizeof (struct resource_node), GFP_KERNEL);
+ mem_tmp = kmalloc(sizeof(*mem_tmp), GFP_KERNEL);
if (!mem_tmp) {
err ("out of system memory\n");
retval = -ENOMEM;
*/
bus = ibmphp_find_res_bus (sec_number);
if (!bus) {
- bus = kmalloc (sizeof (struct bus_node), GFP_KERNEL);
+ bus = kmalloc(sizeof(*bus), GFP_KERNEL);
if (!bus) {
err ("out of system memory\n");
retval = -ENOMEM;
}
error:
- if (amount_needed)
- kfree (amount_needed);
+ kfree(amount_needed);
if (pfmem)
ibmphp_remove_resource (pfmem);
if (io)
};
struct res_needed *amount;
- amount = kmalloc (sizeof (struct res_needed), GFP_KERNEL);
+ amount = kmalloc(sizeof(*amount), GFP_KERNEL);
if (amount == NULL)
return NULL;
memset (amount, 0, sizeof (struct res_needed));
list_add (&bus->bus_list, &cur_bus->bus_list);
}
if (io) {
- io_range = kmalloc (sizeof (struct range_node), GFP_KERNEL);
+ io_range = kmalloc(sizeof(*io_range), GFP_KERNEL);
if (!io_range) {
err ("out of system memory\n");
return -ENOMEM;
bus->rangeIO = io_range;
}
if (mem) {
- mem_range = kmalloc (sizeof (struct range_node), GFP_KERNEL);
+ mem_range = kmalloc(sizeof(*mem_range), GFP_KERNEL);
if (!mem_range) {
err ("out of system memory\n");
return -ENOMEM;
bus->rangeMem = mem_range;
}
if (pfmem) {
- pfmem_range = kmalloc (sizeof (struct range_node), GFP_KERNEL);
+ pfmem_range = kmalloc(sizeof(*pfmem_range), GFP_KERNEL);
if (!pfmem_range) {
err ("out of system memory\n");
return -ENOMEM;
--- /dev/null
+#
+# PCI Express Port Bus Configuration
+#
+config PCIEPORTBUS
+ bool "PCI Express support"
+ depends on PCI_GOMMCONFIG || PCI_GOANY
+ default n
+
+ ---help---
+ This automatically enables PCI Express Port Bus support. Users can
+ choose Native Hot-Plug support, Advanced Error Reporting support,
+ Power Management Event support and Virtual Channel support to run
+ on PCI Express Ports (Root or Switch).
+
+#
+# Include service Kconfig here
+#
+config HOTPLUG_PCI_PCIE
+ tristate "PCI Express Hotplug driver"
+ depends on HOTPLUG_PCI && PCIEPORTBUS
+ help
+ Say Y here if you have a motherboard that supports PCI Express Native
+ Hotplug
+
+ To compile this driver as a module, choose M here: the
+ module will be called pciehp.
+
+ When in doubt, say N.
+
+config HOTPLUG_PCI_PCIE_POLL_EVENT_MODE
+ bool "Use polling mechanism for hot-plug events (for testing purpose)"
+ depends on HOTPLUG_PCI_PCIE
+ help
+ Say Y here if you want to use the polling mechanism for hot-plug
+ events for early platform testing.
+
+ When in doubt, say N.
+
--- /dev/null
+#
+# Makefile for PCI-Express PORT Driver
+#
+
+pcieportdrv-y := portdrv_core.o portdrv_pci.o portdrv_bus.o
+
+obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
--- /dev/null
+/*
+ * File: portdrv.h
+ * Purpose: PCI Express Port Bus Driver's Internal Data Structures
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#ifndef _PORTDRV_H_
+#define _PORTDRV_H_
+
+#if !defined(PCI_CAP_ID_PME)
+#define PCI_CAP_ID_PME 1
+#endif
+
+#if !defined(PCI_CAP_ID_EXP)
+#define PCI_CAP_ID_EXP 0x10
+#endif
+
+#define PORT_TYPE_MASK 0xf
+#define PORT_TO_SLOT_MASK 0x100
+#define SLOT_HP_CAPABLE_MASK 0x40
+#define PCIE_CAPABILITIES_REG 0x2
+#define PCIE_SLOT_CAPABILITIES_REG 0x14
+#define PCIE_PORT_DEVICE_MAXSERVICES 4
+#define PCI_CFG_SPACE_SIZE 256
+
+#define get_descriptor_id(type, service) (((type - 4) << 4) | service)
+
+extern struct bus_type pcie_port_bus_type;
+extern int pcie_port_device_probe(struct pci_dev *dev);
+extern int pcie_port_device_register(struct pci_dev *dev);
+#ifdef CONFIG_PM
+extern int pcie_port_device_suspend(struct pci_dev *dev, u32 state);
+extern int pcie_port_device_resume(struct pci_dev *dev);
+#endif
+extern void pcie_port_device_remove(struct pci_dev *dev);
+extern void pcie_port_bus_register(void);
+extern void pcie_port_bus_unregister(void);
+
+#endif /* _PORTDRV_H_ */
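The get_descriptor_id() macro above is what later gives each child service
device its name: pcie_device_init() in portdrv_core.c prints the result into
the "pcieXX" bus_id. A standalone, user-space sketch of the arithmetic
(illustrative only; reading port type 4 as a Root Port is inferred from the
`type - 4` offset in the macro):

	#include <stdio.h>

	#define get_descriptor_id(type, service) (((type - 4) << 4) | service)

	int main(void)
	{
		int root_port = 4;	/* lowest port type handled, per (type - 4) */
		int service = 0;	/* index of the first of the four port services */

		/* The first service on a Root Port ends up as "pcie00". */
		printf("pcie%02x\n", get_descriptor_id(root_port, service));
		return 0;
	}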
--- /dev/null
+/*
+ * File: portdrv_bus.c
+ * Purpose: PCI Express Port Bus Driver's Bus Overloading Functions
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/pm.h>
+
+#include <linux/pcieport_if.h>
+
+static int pcie_port_bus_match(struct device *dev, struct device_driver *drv);
+static int pcie_port_bus_suspend(struct device *dev, u32 state);
+static int pcie_port_bus_resume(struct device *dev);
+
+struct bus_type pcie_port_bus_type = {
+ .name = "pci_express",
+ .match = pcie_port_bus_match,
+ .suspend = pcie_port_bus_suspend,
+ .resume = pcie_port_bus_resume,
+};
+
+static int pcie_port_bus_match(struct device *dev, struct device_driver *drv)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (drv->bus != &pcie_port_bus_type || dev->bus != &pcie_port_bus_type)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(drv);
+ if ( (driver->id_table->vendor != PCI_ANY_ID &&
+ driver->id_table->vendor != pciedev->id.vendor) ||
+ (driver->id_table->device != PCI_ANY_ID &&
+ driver->id_table->device != pciedev->id.device) ||
+ driver->id_table->port_type != pciedev->id.port_type ||
+ driver->id_table->service_type != pciedev->id.service_type )
+ return 0;
+
+ return 1;
+}
+
+static int pcie_port_bus_suspend(struct device *dev, u32 state)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (!dev || !dev->driver)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(dev->driver);
+ if (driver && driver->suspend)
+ driver->suspend(pciedev, state);
+ return 0;
+}
+
+static int pcie_port_bus_resume(struct device *dev)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (!dev || !dev->driver)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(dev->driver);
+ if (driver && driver->resume)
+ driver->resume(pciedev);
+ return 0;
+}
--- /dev/null
+/*
+ * File: portdrv_core.c
+ * Purpose: PCI Express Port Bus Driver's Core Functions
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/pm.h>
+#include <linux/pcieport_if.h>
+
+#include "portdrv.h"
+
+extern int pcie_mch_quirk; /* MSI-quirk Indicator */
+
+static int pcie_port_probe_service(struct device *dev)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+ int status = -ENODEV;
+
+ if (!dev || !dev->driver)
+ return status;
+
+ driver = to_service_driver(dev->driver);
+ if (!driver || !driver->probe)
+ return status;
+
+ pciedev = to_pcie_device(dev);
+ status = driver->probe(pciedev, driver->id_table);
+ if (!status) {
+ printk(KERN_DEBUG "Load service driver %s on pcie device %s\n",
+ driver->name, dev->bus_id);
+ get_device(dev);
+ }
+ return status;
+}
+
+static int pcie_port_remove_service(struct device *dev)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (!dev || !dev->driver)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(dev->driver);
+ if (driver && driver->remove) {
+ printk(KERN_DEBUG "Unload service driver %s on pcie device %s\n",
+ driver->name, dev->bus_id);
+ driver->remove(pciedev);
+ put_device(dev);
+ }
+ return 0;
+}
+
+static void pcie_port_shutdown_service(struct device *dev) {}
+
+static int pcie_port_suspend_service(struct device *dev, u32 state, u32 level)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (!dev || !dev->driver)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(dev->driver);
+ if (driver && driver->suspend)
+ driver->suspend(pciedev, state);
+ return 0;
+}
+
+static int pcie_port_resume_service(struct device *dev, u32 state)
+{
+ struct pcie_device *pciedev;
+ struct pcie_port_service_driver *driver;
+
+ if (!dev || !dev->driver)
+ return 0;
+
+ pciedev = to_pcie_device(dev);
+ driver = to_service_driver(dev->driver);
+
+ if (driver && driver->resume)
+ driver->resume(pciedev);
+ return 0;
+}
+
+/*
+ * release_pcie_device
+ *
+ * Invoked automatically when the device is being removed in response
+ * to a device_unregister(dev) call.
+ * Releases all resources claimed by the device.
+ */
+static void release_pcie_device(struct device *dev)
+{
+ printk(KERN_DEBUG "Free Port Service[%s]\n", dev->bus_id);
+ kfree(to_pcie_device(dev));
+}
+
+static int is_msi_quirked(struct pci_dev *dev)
+{
+ int port_type, quirk = 0;
+ u16 reg16;
+
+ pci_read_config_word(dev,
+ pci_find_capability(dev, PCI_CAP_ID_EXP) +
+ PCIE_CAPABILITIES_REG, &reg16);
+ port_type = (reg16 >> 4) & PORT_TYPE_MASK;
+ switch(port_type) {
+ case PCIE_RC_PORT:
+ if (pcie_mch_quirk == 1)
+ quirk = 1;
+ break;
+ case PCIE_SW_UPSTREAM_PORT:
+ case PCIE_SW_DOWNSTREAM_PORT:
+ default:
+ break;
+ }
+ return quirk;
+}
+
+static int assign_interrupt_mode(struct pci_dev *dev, int *vectors, int mask)
+{
+ int i, pos, nvec, status = -EINVAL;
+ int interrupt_mode = PCIE_PORT_INTx_MODE;
+
+ /* Set INTx as default */
+ for (i = 0, nvec = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
+ if (mask & (1 << i))
+ nvec++;
+ vectors[i] = dev->irq;
+ }
+
+ /* Check MSI quirk */
+ if (is_msi_quirked(dev))
+ return interrupt_mode;
+
+ /* Select MSI-X over MSI if supported */
+ pos = pci_find_capability(dev, PCI_CAP_ID_MSIX);
+ if (pos) {
+ struct msix_entry msix_entries[PCIE_PORT_DEVICE_MAXSERVICES] =
+ {{0, 0}, {0, 1}, {0, 2}, {0, 3}};
+ printk("%s Found MSIX capability\n", __FUNCTION__);
+ status = pci_enable_msix(dev, msix_entries, nvec);
+ if (!status) {
+ int j = 0;
+
+ interrupt_mode = PCIE_PORT_MSIX_MODE;
+ for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
+ if (mask & (1 << i))
+ vectors[i] = msix_entries[j++].vector;
+ }
+ }
+ }
+ if (status) {
+ pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
+ if (pos) {
+ printk("%s Found MSI capability\n", __FUNCTION__);
+ status = pci_enable_msi(dev);
+ if (!status) {
+ interrupt_mode = PCIE_PORT_MSI_MODE;
+ for (i = 0;i < PCIE_PORT_DEVICE_MAXSERVICES;i++)
+ vectors[i] = dev->irq;
+ }
+ }
+ }
+ return interrupt_mode;
+}
+
+static int get_port_device_capability(struct pci_dev *dev)
+{
+ int services = 0, pos;
+ u16 reg16;
+ u32 reg32;
+
+ pos = pci_find_capability(dev, PCI_CAP_ID_EXP);
+ pci_read_config_word(dev, pos + PCIE_CAPABILITIES_REG, &reg16);
+ /* Hot-Plug Capable */
+ if (reg16 & PORT_TO_SLOT_MASK) {
+ pci_read_config_dword(dev,
+ pos + PCIE_SLOT_CAPABILITIES_REG, &reg32);
+ if (reg32 & SLOT_HP_CAPABLE_MASK)
+ services |= PCIE_PORT_SERVICE_HP;
+ }
+ /* PME Capable */
+ pos = pci_find_capability(dev, PCI_CAP_ID_PME);
+ if (pos)
+ services |= PCIE_PORT_SERVICE_PME;
+
+ pos = PCI_CFG_SPACE_SIZE;
+ while (pos) {
+ pci_read_config_dword(dev, pos, &reg32);
+ switch (reg32 & 0xffff) {
+ case PCI_EXT_CAP_ID_ERR:
+ services |= PCIE_PORT_SERVICE_AER;
+ pos = reg32 >> 20;
+ break;
+ case PCI_EXT_CAP_ID_VC:
+ services |= PCIE_PORT_SERVICE_VC;
+ pos = reg32 >> 20;
+ break;
+ default:
+ pos = 0;
+ break;
+ }
+ }
+
+ return services;
+}
+
+static void pcie_device_init(struct pci_dev *parent, struct pcie_device *dev,
+ int port_type, int service_type, int irq, int irq_mode)
+{
+ struct device *device;
+
+ dev->port = parent;
+ dev->interrupt_mode = irq_mode;
+ dev->irq = irq;
+ dev->id.vendor = parent->vendor;
+ dev->id.device = parent->device;
+ dev->id.port_type = port_type;
+ dev->id.service_type = (1 << service_type);
+
+ /* Initialize generic device interface */
+ device = &dev->device;
+ memset(device, 0, sizeof(struct device));
+ INIT_LIST_HEAD(&device->node);
+ INIT_LIST_HEAD(&device->children);
+ INIT_LIST_HEAD(&device->bus_list);
+ device->bus = &pcie_port_bus_type;
+ device->driver = NULL;
+ device->driver_data = NULL;
+ device->release = release_pcie_device; /* callback to free pcie dev */
+ sprintf(&device->bus_id[0], "pcie%02x",
+ get_descriptor_id(port_type, service_type));
+ device->parent = &parent->dev;
+}
+
+static struct pcie_device* alloc_pcie_device(struct pci_dev *parent,
+ int port_type, int service_type, int irq, int irq_mode)
+{
+ struct pcie_device *device;
+
+ device = kmalloc(sizeof(struct pcie_device), GFP_KERNEL);
+ if (!device)
+ return NULL;
+
+ memset(device, 0, sizeof(struct pcie_device));
+ pcie_device_init(parent, device, port_type, service_type, irq,irq_mode);
+ printk(KERN_DEBUG "Allocate Port Service[%s]\n", device->device.bus_id);
+ return device;
+}
+
+int pcie_port_device_probe(struct pci_dev *dev)
+{
+ int pos, type;
+ u16 reg;
+
+ if (!(pos = pci_find_capability(dev, PCI_CAP_ID_EXP)))
+ return -ENODEV;
+
+ pci_read_config_word(dev, pos + PCIE_CAPABILITIES_REG, &reg);
+ type = (reg >> 4) & PORT_TYPE_MASK;
+ if ( type == PCIE_RC_PORT || type == PCIE_SW_UPSTREAM_PORT ||
+ type == PCIE_SW_DOWNSTREAM_PORT )
+ return 0;
+
+ return -ENODEV;
+}
+
+int pcie_port_device_register(struct pci_dev *dev)
+{
+ int status, type, capabilities, irq_mode, i;
+ int vectors[PCIE_PORT_DEVICE_MAXSERVICES];
+ u16 reg16;
+
+ /* Get port type */
+ pci_read_config_word(dev,
+ pci_find_capability(dev, PCI_CAP_ID_EXP) +
+ PCIE_CAPABILITIES_REG, &reg16);
+ type = (reg16 >> 4) & PORT_TYPE_MASK;
+
+ /* Now get port services */
+ capabilities = get_port_device_capability(dev);
+ irq_mode = assign_interrupt_mode(dev, vectors, capabilities);
+
+ /* Allocate child services if any */
+ for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
+ struct pcie_device *child;
+
+ if (capabilities & (1 << i)) {
+ child = alloc_pcie_device(
+ dev, /* parent */
+ type, /* port type */
+ i, /* service type */
+ vectors[i], /* irq */
+ irq_mode /* interrupt mode */);
+ if (child) {
+ status = device_register(&child->device);
+ if (status) {
+ kfree(child);
+ continue;
+ }
+ get_device(&child->device);
+ }
+ }
+ }
+ return 0;
+}
+
+#ifdef CONFIG_PM
+int pcie_port_device_suspend(struct pci_dev *dev, u32 state)
+{
+ struct list_head *head, *tmp;
+ struct device *parent, *child;
+ struct device_driver *driver;
+ struct pcie_port_service_driver *service_driver;
+
+ parent = &dev->dev;
+ head = &parent->children;
+ tmp = head->next;
+ while (head != tmp) {
+ child = container_of(tmp, struct device, node);
+ tmp = tmp->next;
+ if (child->bus != &pcie_port_bus_type)
+ continue;
+ driver = child->driver;
+ if (!driver)
+ continue;
+ service_driver = to_service_driver(driver);
+ if (service_driver->suspend)
+ service_driver->suspend(to_pcie_device(child), state);
+ }
+ return 0;
+}
+
+int pcie_port_device_resume(struct pci_dev *dev)
+{
+ struct list_head *head, *tmp;
+ struct device *parent, *child;
+ struct device_driver *driver;
+ struct pcie_port_service_driver *service_driver;
+
+ parent = &dev->dev;
+ head = &parent->children;
+ tmp = head->next;
+ while (head != tmp) {
+ child = container_of(tmp, struct device, node);
+ tmp = tmp->next;
+ if (child->bus != &pcie_port_bus_type)
+ continue;
+ driver = child->driver;
+ if (!driver)
+ continue;
+ service_driver = to_service_driver(driver);
+ if (service_driver->resume)
+ service_driver->resume(to_pcie_device(child));
+ }
+ return 0;
+
+}
+#endif
+
+void pcie_port_device_remove(struct pci_dev *dev)
+{
+ struct list_head *head, *tmp;
+ struct device *parent, *child;
+ struct device_driver *driver;
+ struct pcie_port_service_driver *service_driver;
+ int interrupt_mode = PCIE_PORT_INTx_MODE;
+
+ parent = &dev->dev;
+ head = &parent->children;
+ tmp = head->next;
+ while (head != tmp) {
+ child = container_of(tmp, struct device, node);
+ tmp = tmp->next;
+ if (child->bus != &pcie_port_bus_type)
+ continue;
+ driver = child->driver;
+ if (driver) {
+ service_driver = to_service_driver(driver);
+ if (service_driver->remove)
+ service_driver->remove(to_pcie_device(child));
+ }
+ interrupt_mode = (to_pcie_device(child))->interrupt_mode;
+ put_device(child);
+ device_unregister(child);
+ }
+ /* Switch to INTx by default if MSI enabled */
+ if (interrupt_mode == PCIE_PORT_MSIX_MODE)
+ pci_disable_msix(dev);
+ else if (interrupt_mode == PCIE_PORT_MSI_MODE)
+ pci_disable_msi(dev);
+}
+
+void pcie_port_bus_register(void)
+{
+ bus_register(&pcie_port_bus_type);
+}
+
+void pcie_port_bus_unregister(void)
+{
+ bus_unregister(&pcie_port_bus_type);
+}
+
+int pcie_port_service_register(struct pcie_port_service_driver *new)
+{
+ new->driver.name = (char *)new->name;
+ new->driver.bus = &pcie_port_bus_type;
+ new->driver.probe = pcie_port_probe_service;
+ new->driver.remove = pcie_port_remove_service;
+ new->driver.shutdown = pcie_port_shutdown_service;
+ new->driver.suspend = pcie_port_suspend_service;
+ new->driver.resume = pcie_port_resume_service;
+
+ return driver_register(&new->driver);
+}
+
+void pcie_port_service_unregister(struct pcie_port_service_driver *new)
+{
+ driver_unregister(&new->driver);
+}
+
+EXPORT_SYMBOL(pcie_port_service_register);
+EXPORT_SYMBOL(pcie_port_service_unregister);
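To see how the registration interface exported above is meant to be used, here
is a minimal, hypothetical port service driver. It is a sketch only: the
struct pcie_port_service_driver fields and the id-table fields (vendor, device,
port_type, service_type) are exactly the ones pcie_port_bus_match() and
pcie_port_probe_service() dereference above, while the probe prototype and the
struct pcie_port_service_id type name are assumed to match
linux/pcieport_if.h.

	#include <linux/module.h>
	#include <linux/pci.h>
	#include <linux/pcieport_if.h>

	/* Bind to the Native Hot-Plug service on Root Ports only. */
	static struct pcie_port_service_id example_id = {
		.vendor		= PCI_ANY_ID,
		.device		= PCI_ANY_ID,
		.port_type	= PCIE_RC_PORT,
		.service_type	= PCIE_PORT_SERVICE_HP,
	};

	/* Prototype assumed from the driver->probe(pciedev, driver->id_table)
	 * call in pcie_port_probe_service() above. */
	static int example_probe(struct pcie_device *dev,
				 const struct pcie_port_service_id *id)
	{
		/* dev->port is the parent pci_dev, dev->irq the assigned vector. */
		printk(KERN_DEBUG "example service bound to %s\n",
		       dev->device.bus_id);
		return 0;
	}

	static void example_remove(struct pcie_device *dev)
	{
		printk(KERN_DEBUG "example service unbound\n");
	}

	static struct pcie_port_service_driver example_service = {
		.name		= "example_service",
		.id_table	= &example_id,
		.probe		= example_probe,
		.remove		= example_remove,
	};

	static int __init example_init(void)
	{
		return pcie_port_service_register(&example_service);
	}

	static void __exit example_exit(void)
	{
		pcie_port_service_unregister(&example_service);
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");

Because pcie_port_service_register() fills in the probe/remove hooks of the
embedded struct device_driver itself, the service driver only supplies its
name, id table and callbacks; matching against the child "pcieXX" devices
created by pcie_port_device_register() is done entirely by
pcie_port_bus_match().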
--- /dev/null
+/*
+ * File: portdrv_pci.c
+ * Purpose: PCI Express Port Bus Driver
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/pm.h>
+#include <linux/init.h>
+#include <linux/pcieport_if.h>
+
+#include "portdrv.h"
+
+/*
+ * Version Information
+ */
+#define DRIVER_VERSION "v1.0"
+#define DRIVER_AUTHOR "tom.l.nguyen@intel.com"
+#define DRIVER_DESC "PCIE Port Bus Driver"
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+
+/* global data */
+static const char device_name[] = "pcieport-driver";
+
+/*
+ * pcie_portdrv_probe - Probe PCI-Express port devices
+ * @dev: PCI-Express port device being probed
+ *
+ * If a PCI Express port is detected, invokes the
+ * pcie_port_device_register() method for this port device.
+ *
+ */
+static int __devinit pcie_portdrv_probe (struct pci_dev *dev,
+ const struct pci_device_id *id )
+{
+ int status;
+
+ status = pcie_port_device_probe(dev);
+ if (status)
+ return status;
+
+ if (pci_enable_device(dev) < 0)
+ return -ENODEV;
+
+ pci_set_master(dev);
+ if (!dev->irq) {
+ printk(KERN_WARNING
+ "%s->Dev[%04x:%04x] has invalid IRQ. Check vendor BIOS\n",
+ __FUNCTION__, dev->device, dev->vendor);
+ }
+ if (pcie_port_device_register(dev))
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void pcie_portdrv_remove (struct pci_dev *dev)
+{
+ pcie_port_device_remove(dev);
+}
+
+#ifdef CONFIG_PM
+static int pcie_portdrv_suspend (struct pci_dev *dev, u32 state)
+{
+ return pcie_port_device_suspend(dev, state);
+}
+
+static int pcie_portdrv_resume (struct pci_dev *dev)
+{
+ return pcie_port_device_resume(dev);
+}
+#endif
+
+/*
+ * LINUX Device Driver Model
+ */
+static const struct pci_device_id port_pci_ids[] = { {
+ /* handle any PCI-Express port */
+ PCI_DEVICE_CLASS(((PCI_CLASS_BRIDGE_PCI << 8) | 0x00), ~0),
+ }, { /* end: all zeroes */ }
+};
+MODULE_DEVICE_TABLE(pci, port_pci_ids);
+
+static struct pci_driver pcie_portdrv = {
+ .name = (char *)device_name,
+ .id_table = &port_pci_ids[0],
+
+ .probe = pcie_portdrv_probe,
+ .remove = pcie_portdrv_remove,
+
+#ifdef CONFIG_PM
+ .suspend = pcie_portdrv_suspend,
+ .resume = pcie_portdrv_resume,
+#endif /* PM */
+};
+
+static int __init pcie_portdrv_init(void)
+{
+ int retval = 0;
+
+ pcie_port_bus_register();
+ retval = pci_module_init(&pcie_portdrv);
+ if (retval)
+ pcie_port_bus_unregister();
+ return retval;
+}
+
+static void __exit pcie_portdrv_exit(void)
+{
+ pci_unregister_driver(&pcie_portdrv);
+ pcie_port_bus_unregister();
+}
+
+module_init(pcie_portdrv_init);
+module_exit(pcie_portdrv_exit);
}
EXPORT_SYMBOL(pci_remove_device_safe);
-void pci_remove_bus(struct pci_bus *b)
+void pci_remove_bus(struct pci_bus *pci_bus)
{
- pci_proc_detach_bus(b);
+ pci_proc_detach_bus(pci_bus);
spin_lock(&pci_bus_lock);
- list_del(&b->node);
+ list_del(&pci_bus->node);
spin_unlock(&pci_bus_lock);
-
- class_device_unregister(&b->class_dev);
+ pci_remove_legacy_files(pci_bus);
+ class_device_remove_file(&pci_bus->class_dev,
+ &class_device_attr_cpuaffinity);
+ sysfs_remove_link(&pci_bus->class_dev.kobj, "bridge");
+ class_device_unregister(&pci_bus->class_dev);
}
EXPORT_SYMBOL(pci_remove_bus);
* (C) Copyright 2004 Silicon Graphics, Inc. Jesse Barnes <jbarnes@sgi.com>
*
* PCI ROM access routines
- *
*/
-
-
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/pci.h>
* between the ROM and other resources, so enabling it may disable access
* to MMIO registers or other card memory.
*/
-static void
-pci_enable_rom(struct pci_dev *pdev)
+static void pci_enable_rom(struct pci_dev *pdev)
{
u32 rom_addr;
-
+
pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
rom_addr |= PCI_ROM_ADDRESS_ENABLE;
pci_write_config_dword(pdev, pdev->rom_base_reg, rom_addr);
* Disable ROM decoding on a PCI device by turning off the last bit in the
* ROM BAR.
*/
-static void
-pci_disable_rom(struct pci_dev *pdev)
+static void pci_disable_rom(struct pci_dev *pdev)
{
u32 rom_addr;
pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
* @return: kernel virtual pointer to image of ROM
*
* Map a PCI ROM into kernel space. If ROM is boot video ROM,
- * the shadow BIOS copy will be returned instead of the
+ * the shadow BIOS copy will be returned instead of the
* actual ROM.
*/
void __iomem *pci_map_rom(struct pci_dev *pdev, size_t *size)
void __iomem *rom;
void __iomem *image;
int last_image;
-
- if (res->flags & IORESOURCE_ROM_SHADOW) { /* IORESOURCE_ROM_SHADOW only set on x86 */
- start = (loff_t)0xC0000; /* primary video rom always starts here */
- *size = 0x20000; /* cover C000:0 through E000:0 */
+
+ /* IORESOURCE_ROM_SHADOW only set on x86 */
+ if (res->flags & IORESOURCE_ROM_SHADOW) {
+ /* primary video rom always starts here */
+ start = (loff_t)0xC0000;
+ *size = 0x20000; /* cover C000:0 through E000:0 */
} else {
if (res->flags & IORESOURCE_ROM_COPY) {
*size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
/* assign the ROM an address if it doesn't have one */
if (res->parent == NULL)
pci_assign_resource(pdev, PCI_ROM_RESOURCE);
-
+
start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
*size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
if (*size == 0)
return NULL;
-
+
/* Enable ROM space decodes */
pci_enable_rom(pdev);
}
}
-
+
rom = ioremap(start, *size);
if (!rom) {
/* restore enable if ioremap fails */
- if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW | IORESOURCE_ROM_COPY)))
+ if (!(res->flags & (IORESOURCE_ROM_ENABLE |
+ IORESOURCE_ROM_SHADOW |
+ IORESOURCE_ROM_COPY)))
pci_disable_rom(pdev);
return NULL;
- }
+ }
- /* Try to find the true size of the ROM since sometimes the PCI window */
- /* size is much larger than the actual size of the ROM. */
- /* True size is important if the ROM is going to be copied. */
+ /*
+ * Try to find the true size of the ROM since sometimes the PCI window
+ * size is much larger than the actual size of the ROM.
+ * True size is important if the ROM is going to be copied.
+ */
image = rom;
do {
void __iomem *pds;
* @return: kernel virtual pointer to image of ROM
*
* Map a PCI ROM into kernel space. If ROM is boot video ROM,
- * the shadow BIOS copy will be returned instead of the
+ * the shadow BIOS copy will be returned instead of the
* actual ROM.
*/
void __iomem *pci_map_rom_copy(struct pci_dev *pdev, size_t *size)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
void __iomem *rom;
-
+
rom = pci_map_rom(pdev, size);
if (!rom)
return NULL;
-
+
if (res->flags & (IORESOURCE_ROM_COPY | IORESOURCE_ROM_SHADOW))
return rom;
-
+
res->start = (unsigned long)kmalloc(*size, GFP_KERNEL);
- if (!res->start)
+ if (!res->start)
return rom;
- res->end = res->start + *size;
+ res->end = res->start + *size;
memcpy_fromio((void*)res->start, rom, *size);
pci_unmap_rom(pdev, rom);
res->flags |= IORESOURCE_ROM_COPY;
-
+
return (void __iomem *)res->start;
}
*
* Remove a mapping of a previously mapped ROM
*/
-void
-pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom)
+void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
if (res->flags & IORESOURCE_ROM_COPY)
return;
-
+
iounmap(rom);
-
+
/* Disable again before continuing, leave enabled if pci=rom */
if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW)))
pci_disable_rom(pdev);
* pci_remove_rom - disable the ROM and remove its sysfs attribute
* @dev: pointer to pci device struct
*
+ * Remove the rom file in sysfs and disable ROM decoding.
*/
-void
-pci_remove_rom(struct pci_dev *pdev)
+void pci_remove_rom(struct pci_dev *pdev)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
-
+
if (pci_resource_len(pdev, PCI_ROM_RESOURCE))
sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr);
- if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW | IORESOURCE_ROM_COPY)))
+ if (!(res->flags & (IORESOURCE_ROM_ENABLE |
+ IORESOURCE_ROM_SHADOW |
+ IORESOURCE_ROM_COPY)))
pci_disable_rom(pdev);
}
/**
- * pci_cleanup_rom - internal routine for freeing the ROM copy created
+ * pci_cleanup_rom - internal routine for freeing the ROM copy created
* by pci_map_rom_copy called from remove.c
* @dev: pointer to pci device struct
*
+ * Free the copied ROM if we allocated one.
*/
-void
-pci_cleanup_rom(struct pci_dev *pdev)
+void pci_cleanup_rom(struct pci_dev *pdev)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
if (res->flags & IORESOURCE_ROM_COPY) {
list_for_each_entry(dev, &bus->devices, bus_list) {
u16 class = dev->class >> 8;
- if (class == PCI_CLASS_DISPLAY_VGA
- || class == PCI_CLASS_NOT_DEFINED_VGA)
+ /* Don't touch classless devices and host bridges. */
+ if (class == PCI_CLASS_NOT_DEFINED ||
+ class == PCI_CLASS_BRIDGE_HOST)
+ continue;
+
+ if (class == PCI_CLASS_DISPLAY_VGA ||
+ class == PCI_CLASS_NOT_DEFINED_VGA)
bus->bridge_ctl |= PCI_BRIDGE_CTL_VGA;
pdev_sort_resources(dev, &head);
irq = 0;
dev->irq = irq;
- DBGC((KERN_ERR "PCI fixup irq: (%s) got %d\n", dev->dev.name, dev->irq));
+ DBGC((KERN_ERR "PCI fixup irq: (%s) got %d\n",
+ dev->dev.kobj.name, dev->irq));
/* Always tell the device, so the driver knows what is
the real IRQ to use; the device does not use it. */
--- /dev/null
+/*
+ *
+ * Alchemy Semi Db1x00 boards specific pcmcia routines.
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ * Author: MontaVista Software, Inc.
+ * ppopov@mvista.com or source@mvista.com
+ *
+ * Copyright 2004 Pete Popov, updated the driver to 2.6.
+ * Followed the sa11xx API and largely copied many of the hardware
+ * independent functions.
+ *
+ * ########################################################################
+ *
+ * This program is free software; you can distribute it and/or modify it
+ * under the terms of the GNU General Public License (Version 2) as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * ########################################################################
+ *
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/init.h>
+
+#include <asm/irq.h>
+#include <asm/signal.h>
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-db1x00/db1x00.h>
+
+#include "au1000_generic.h"
+
+#if 0
+#define debug(x,args...) printk(KERN_DEBUG "%s: " x, __func__ , ##args)
+#else
+#define debug(x,args...)
+#endif
+
+static BCSR * const bcsr = (BCSR *)BCSR_KSEG1_ADDR;
+
+struct au1000_pcmcia_socket au1000_pcmcia_socket[PCMCIA_NUM_SOCKS];
+extern int au1x00_pcmcia_socket_probe(struct device *, struct pcmcia_low_level *, int, int);
+
+static int db1x00_pcmcia_hw_init(struct au1000_pcmcia_socket *skt)
+{
+#ifdef CONFIG_MIPS_DB1550
+ skt->irq = skt->nr ? AU1000_GPIO_5 : AU1000_GPIO_3;
+#else
+ skt->irq = skt->nr ? AU1000_GPIO_5 : AU1000_GPIO_2;
+#endif
+ return 0;
+}
+
+static void db1x00_pcmcia_shutdown(struct au1000_pcmcia_socket *skt)
+{
+ bcsr->pcmcia = 0; /* turn off power */
+ au_sync_delay(2);
+}
+
+static void
+db1x00_pcmcia_socket_state(struct au1000_pcmcia_socket *skt, struct pcmcia_state *state)
+{
+ u32 inserted;
+ unsigned char vs;
+
+ state->ready = 0;
+ state->vs_Xv = 0;
+ state->vs_3v = 0;
+ state->detect = 0;
+
+ switch (skt->nr) {
+ case 0:
+ vs = bcsr->status & 0x3;
+ inserted = !(bcsr->status & (1<<4));
+ break;
+ case 1:
+ vs = (bcsr->status & 0xC)>>2;
+ inserted = !(bcsr->status & (1<<5));
+ break;
+ default:/* should never happen */
+ return;
+ }
+
+ if (inserted)
+ debug("db1x00 socket %d: inserted %d, vs %d pcmcia %x\n",
+ skt->nr, inserted, vs, bcsr->pcmcia);
+
+ if (inserted) {
+ switch (vs) {
+ case 0:
+ case 2:
+ state->vs_3v=1;
+ break;
+ case 3: /* 5V */
+ break;
+ default:
+ /* return without setting 'detect' */
+ printk(KERN_ERR "db1x00 bad VS (%d)\n",
+ vs);
+ }
+ state->detect = 1;
+ state->ready = 1;
+ }
+ else {
+ /* if the card was previously inserted and then ejected,
+ * we should turn off power to it
+ */
+ if ((skt->nr == 0) && (bcsr->pcmcia & BCSR_PCMCIA_PC0RST)) {
+ bcsr->pcmcia &= ~(BCSR_PCMCIA_PC0RST |
+ BCSR_PCMCIA_PC0DRVEN |
+ BCSR_PCMCIA_PC0VPP |
+ BCSR_PCMCIA_PC0VCC);
+ au_sync_delay(10);
+ }
+ else if ((skt->nr == 1) && bcsr->pcmcia & BCSR_PCMCIA_PC1RST) {
+ bcsr->pcmcia &= ~(BCSR_PCMCIA_PC1RST |
+ BCSR_PCMCIA_PC1DRVEN |
+ BCSR_PCMCIA_PC1VPP |
+ BCSR_PCMCIA_PC1VCC);
+ au_sync_delay(10);
+ }
+ }
+
+ state->bvd1=1;
+ state->bvd2=1;
+ state->wrprot=0;
+}
+
+static int
+db1x00_pcmcia_configure_socket(struct au1000_pcmcia_socket *skt, struct socket_state_t *state)
+{
+ u16 pwr;
+ int sock = skt->nr;
+
+ debug("config_skt %d Vcc %dV Vpp %dV, reset %d\n",
+ sock, state->Vcc, state->Vpp,
+ state->flags & SS_RESET);
+
+ /* pcmcia reg was set to zero at init time. Be careful when
+ * initializing a socket not to wipe out the settings of the
+ * other socket.
+ */
+ pwr = bcsr->pcmcia;
+ pwr &= ~(0xf << sock*8); /* clear voltage settings */
+
+ state->Vpp = 0;
+ switch(state->Vcc){
+ case 0: /* Vcc 0 */
+ pwr |= SET_VCC_VPP(0,0,sock);
+ break;
+ case 50: /* Vcc 5V */
+ switch(state->Vpp) {
+ case 0:
+ pwr |= SET_VCC_VPP(2,0,sock);
+ break;
+ case 50:
+ pwr |= SET_VCC_VPP(2,1,sock);
+ break;
+ case 12:
+ pwr |= SET_VCC_VPP(2,2,sock);
+ break;
+ case 33:
+ default:
+ pwr |= SET_VCC_VPP(0,0,sock);
+ printk("%s: bad Vcc/Vpp (%d:%d)\n",
+ __FUNCTION__,
+ state->Vcc,
+ state->Vpp);
+ break;
+ }
+ break;
+ case 33: /* Vcc 3.3V */
+ switch(state->Vpp) {
+ case 0:
+ pwr |= SET_VCC_VPP(1,0,sock);
+ break;
+ case 12:
+ pwr |= SET_VCC_VPP(1,2,sock);
+ break;
+ case 33:
+ pwr |= SET_VCC_VPP(1,1,sock);
+ break;
+ case 50:
+ default:
+ pwr |= SET_VCC_VPP(0,0,sock);
+ printk("%s: bad Vcc/Vpp (%d:%d)\n",
+ __FUNCTION__,
+ state->Vcc,
+ state->Vpp);
+ break;
+ }
+ break;
+ default: /* what's this ? */
+ pwr |= SET_VCC_VPP(0,0,sock);
+ printk(KERN_ERR "%s: bad Vcc %d\n",
+ __FUNCTION__, state->Vcc);
+ break;
+ }
+
+ bcsr->pcmcia = pwr;
+ au_sync_delay(300);
+
+ if (sock == 0) {
+ if (!(state->flags & SS_RESET)) {
+ pwr |= BCSR_PCMCIA_PC0DRVEN;
+ bcsr->pcmcia = pwr;
+ au_sync_delay(300);
+ pwr |= BCSR_PCMCIA_PC0RST;
+ bcsr->pcmcia = pwr;
+ au_sync_delay(100);
+ }
+ else {
+ pwr &= ~(BCSR_PCMCIA_PC0RST | BCSR_PCMCIA_PC0DRVEN);
+ bcsr->pcmcia = pwr;
+ au_sync_delay(100);
+ }
+ }
+ else {
+ if (!(state->flags & SS_RESET)) {
+ pwr |= BCSR_PCMCIA_PC1DRVEN;
+ bcsr->pcmcia = pwr;
+ au_sync_delay(300);
+ pwr |= BCSR_PCMCIA_PC1RST;
+ bcsr->pcmcia = pwr;
+ au_sync_delay(100);
+ }
+ else {
+ pwr &= ~(BCSR_PCMCIA_PC1RST | BCSR_PCMCIA_PC1DRVEN);
+ bcsr->pcmcia = pwr;
+ au_sync_delay(100);
+ }
+ }
+ return 0;
+}
+
+/*
+ * Enable card status IRQs on (re-)initialisation. This can
+ * be called at initialisation, power management event, or
+ * pcmcia event.
+ */
+void db1x00_socket_init(struct au1000_pcmcia_socket *skt)
+{
+ /* nothing to do for now */
+}
+
+/*
+ * Disable card status IRQs and PCMCIA bus on suspend.
+ */
+void db1x00_socket_suspend(struct au1000_pcmcia_socket *skt)
+{
+ /* nothing to do for now */
+}
+
+struct pcmcia_low_level db1x00_pcmcia_ops = {
+ .owner = THIS_MODULE,
+
+ .hw_init = db1x00_pcmcia_hw_init,
+ .hw_shutdown = db1x00_pcmcia_shutdown,
+
+ .socket_state = db1x00_pcmcia_socket_state,
+ .configure_socket = db1x00_pcmcia_configure_socket,
+
+ .socket_init = db1x00_socket_init,
+ .socket_suspend = db1x00_socket_suspend
+};
+
+int __init au1x_board_init(struct device *dev)
+{
+ int ret = -ENODEV;
+ bcsr->pcmcia = 0; /* turn off power, if it's not already off */
+ au_sync_delay(2);
+ ret = au1x00_pcmcia_socket_probe(dev, &db1x00_pcmcia_ops, 0, 2);
+ return ret;
+}
*
* Alchemy Semi Au1000 pcmcia driver
*
- * Copyright 2001 MontaVista Software Inc.
+ * Copyright 2001-2003 MontaVista Software Inc.
* Author: MontaVista Software, Inc.
- * ppopov@mvista.com or source@mvista.com
+ * ppopov@embeddedalley.com or source@mvista.com
+ *
+ * Copyright 2004 Pete Popov, Embedded Alley Solutions, Inc.
+ * Updated the driver to 2.6. Followed the sa11xx API and largely
+ * copied many of the hardware independent functions.
*
* ########################################################################
*
*
*
*/
+
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/config.h>
-#include <linux/delay.h>
+#include <linux/cpufreq.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
-#include <linux/tqueue.h>
#include <linux/timer.h>
#include <linux/mm.h>
-#include <linux/proc_fs.h>
-#include <linux/types.h>
-#include <linux/vmalloc.h>
-
-#include <pcmcia/version.h>
-#include <pcmcia/cs_types.h>
-#include <pcmcia/cs.h>
-#include <pcmcia/ss.h>
-#include <pcmcia/bulkmem.h>
-#include <pcmcia/cistpl.h>
-#include <pcmcia/bus_ops.h>
-#include "cs_internal.h"
+#include <linux/notifier.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/system.h>
-#include <asm/au1000.h>
-#include <asm/au1000_pcmcia.h>
-
-#ifdef DEBUG
-static int pc_debug;
-
-module_param(pc_debug, int, 0644);
-
-#define debug(lvl,fmt) do { \
- if (pc_debug > (lvl)) \
- printk(KERN_DEBUG fmt); \
-} while (0)
-#else
-#define debug(lvl,fmt) do { } while (0)
-#endif
+#include <asm/mach-au1x00/au1000.h>
+#include "au1000_generic.h"
MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Pete Popov, MontaVista Software <ppopov@mvista.com>");
+MODULE_AUTHOR("Pete Popov <ppopov@embeddedalley.com>");
MODULE_DESCRIPTION("Linux PCMCIA Card Services: Au1x00 Socket Controller");
-#define MAP_SIZE 0x1000000
-
-/* This structure maintains housekeeping state for each socket, such
- * as the last known values of the card detect pins, or the Card Services
- * callback value associated with the socket:
- */
-static struct au1000_pcmcia_socket *pcmcia_socket;
-static int socket_count;
-
-
-/* Returned by the low-level PCMCIA interface: */
-static struct pcmcia_low_level *pcmcia_low_level;
-
-/* Event poll timer structure */
-static struct timer_list poll_timer;
-
-
-/* Prototypes for routines which are used internally: */
-
-static int au1000_pcmcia_driver_init(void);
-static void au1000_pcmcia_driver_shutdown(void);
-static void au1000_pcmcia_task_handler(void *data);
-static void au1000_pcmcia_poll_event(unsigned long data);
-static void au1000_pcmcia_interrupt(int irq, void *dev, struct pt_regs *regs);
-static struct tq_struct au1000_pcmcia_task;
-
-#ifdef CONFIG_PROC_FS
-static int au1000_pcmcia_proc_status(char *buf, char **start,
- off_t pos, int count, int *eof, void *data);
+#if 0
+#define debug(x,args...) printk(KERN_DEBUG "%s: " x, __func__ , ##args)
+#else
+#define debug(x,args...)
#endif
+#define MAP_SIZE 0x100000
+extern struct au1000_pcmcia_socket au1000_pcmcia_socket[];
+#define PCMCIA_SOCKET(x) (au1000_pcmcia_socket + (x))
+#define to_au1000_socket(x) container_of(x, struct au1000_pcmcia_socket, socket)
-/* Prototypes for operations which are exported to the
- * new-and-impr^H^H^H^H^H^H^H^H^H^H in-kernel PCMCIA core:
+/* Some boards like to support CF cards as IDE root devices, so they
+ * grab pcmcia sockets directly.
*/
+u32 *pcmcia_base_vaddrs[2];
+extern const unsigned long mips_io_port_base;
-static int au1000_pcmcia_init(u32 sock);
-static int au1000_pcmcia_suspend(u32 sock);
-static int au1000_pcmcia_register_callback(u32 sock,
- void (*handler)(void *, u32), void *info);
-static int au1000_pcmcia_inquire_socket(u32 sock, socket_cap_t *cap);
-static int au1000_pcmcia_get_status(u32 sock, u_int *value);
-static int au1000_pcmcia_get_socket(u32 sock, socket_state_t *state);
-static int au1000_pcmcia_set_socket(u32 sock, socket_state_t *state);
-static int au1000_pcmcia_get_io_map(u32 sock, struct pccard_io_map *io);
-static int au1000_pcmcia_set_io_map(u32 sock, struct pccard_io_map *io);
-static int au1000_pcmcia_get_mem_map(u32 sock, struct pccard_mem_map *mem);
-static int au1000_pcmcia_set_mem_map(u32 sock, struct pccard_mem_map *mem);
-#ifdef CONFIG_PROC_FS
-static void au1000_pcmcia_proc_setup(u32 sock, struct proc_dir_entry *base);
-#endif
+DECLARE_MUTEX(pcmcia_sockets_lock);
-static struct pccard_operations au1000_pcmcia_operations = {
- au1000_pcmcia_init,
- au1000_pcmcia_suspend,
- au1000_pcmcia_register_callback,
- au1000_pcmcia_inquire_socket,
- au1000_pcmcia_get_status,
- au1000_pcmcia_get_socket,
- au1000_pcmcia_set_socket,
- au1000_pcmcia_get_io_map,
- au1000_pcmcia_set_io_map,
- au1000_pcmcia_get_mem_map,
- au1000_pcmcia_set_mem_map,
-#ifdef CONFIG_PROC_FS
- au1000_pcmcia_proc_setup
-#endif
+static int (*au1x00_pcmcia_hw_init[])(struct device *dev) = {
+ au1x_board_init,
};
-static spinlock_t pcmcia_lock = SPIN_LOCK_UNLOCKED;
-
-static int __init au1000_pcmcia_driver_init(void)
+static int
+au1x00_pcmcia_skt_state(struct au1000_pcmcia_socket *skt)
{
- struct pcmcia_init pcmcia_init;
struct pcmcia_state state;
- unsigned int i;
+ unsigned int stat;
- printk("\nAu1x00 PCMCIA\n");
+ memset(&state, 0, sizeof(struct pcmcia_state));
-#ifndef CONFIG_64BIT_PHYS_ADDR
- printk(KERN_ERR "Au1x00 PCMCIA 36 bit IO support not enabled\n");
- return -1;
-#endif
-
-#if defined(CONFIG_MIPS_PB1000) || defined(CONFIG_MIPS_PB1100) || defined(CONFIG_MIPS_PB1500)
- pcmcia_low_level=&pb1x00_pcmcia_ops;
-#else
-#error Unsupported AU1000 board.
-#endif
-
- pcmcia_init.handler=au1000_pcmcia_interrupt;
- if((socket_count=pcmcia_low_level->init(&pcmcia_init))<0) {
- printk(KERN_ERR "Unable to initialize PCMCIA service.\n");
- return -EIO;
- }
+ skt->ops->socket_state(skt, &state);
- /* NOTE: the chip select must already be setup */
+ stat = state.detect ? SS_DETECT : 0;
+ stat |= state.ready ? SS_READY : 0;
+ stat |= state.wrprot ? SS_WRPROT : 0;
+ stat |= state.vs_3v ? SS_3VCARD : 0;
+ stat |= state.vs_Xv ? SS_XVCARD : 0;
+ stat |= skt->cs_state.Vcc ? SS_POWERON : 0;
- pcmcia_socket =
- kmalloc(sizeof(struct au1000_pcmcia_socket) * socket_count,
- GFP_KERNEL);
- if (!pcmcia_socket) {
- printk(KERN_ERR "Card Services can't get memory \n");
- return -1;
+ if (skt->cs_state.flags & SS_IOCARD)
+ stat |= state.bvd1 ? SS_STSCHG : 0;
+ else {
+ if (state.bvd1 == 0)
+ stat |= SS_BATDEAD;
+ else if (state.bvd2 == 0)
+ stat |= SS_BATWARN;
}
- memset(pcmcia_socket, 0,
- sizeof(struct au1000_pcmcia_socket) * socket_count);
-
- /*
- * Assuming max of 2 sockets, which the Au1000 supports.
- * WARNING: the Pb1000 has two sockets, and both work, but you
- * can't use them both at the same time due to glue logic conflicts.
- */
- for(i=0; i < socket_count; i++) {
+ return stat;
+}
- if(pcmcia_low_level->socket_state(i, &state)<0){
- printk(KERN_ERR "Unable to get PCMCIA status\n");
- return -EIO;
- }
- pcmcia_socket[i].k_state=state;
- pcmcia_socket[i].cs_state.csc_mask=SS_DETECT;
-
- if (i == 0) {
- pcmcia_socket[i].virt_io =
- (u32)ioremap((ioaddr_t)0xF00000000, 0x1000);
- pcmcia_socket[i].phys_attr = (memaddr_t)0xF40000000;
- pcmcia_socket[i].phys_mem = (memaddr_t)0xF80000000;
- }
- else {
- pcmcia_socket[i].virt_io =
- (u32)ioremap((ioaddr_t)0xF08000000, 0x1000);
- pcmcia_socket[i].phys_attr = (memaddr_t)0xF48000000;
- pcmcia_socket[i].phys_mem = (memaddr_t)0xF88000000;
- }
- }
+/*
+ * au1x00_pcmcia_config_skt
+ *
+ * Convert PCMCIA socket state to our socket configure structure.
+ */
+static int
+au1x00_pcmcia_config_skt(struct au1000_pcmcia_socket *skt, socket_state_t *state)
+{
+ int ret;
- /* Only advertise as many sockets as we can detect: */
- if(register_ss_entry(socket_count, &au1000_pcmcia_operations)<0){
- printk(KERN_ERR "Unable to register socket service routine\n");
- return -ENXIO;
+ ret = skt->ops->configure_socket(skt, state);
+ if (ret == 0) {
+ skt->cs_state = *state;
}
- /* Start the event poll timer.
- * It will reschedule by itself afterwards.
- */
- au1000_pcmcia_poll_event(0);
-
- debug(1, "au1000: initialization complete\n");
- return 0;
+ if (ret < 0)
+ debug("unable to configure socket %d\n", skt->nr);
-} /* au1000_pcmcia_driver_init() */
-
-module_init(au1000_pcmcia_driver_init);
-
-static void __exit au1000_pcmcia_driver_shutdown(void)
-{
- int i;
-
- del_timer_sync(&poll_timer);
- unregister_ss_entry(&au1000_pcmcia_operations);
- pcmcia_low_level->shutdown();
- flush_scheduled_tasks();
- for(i=0; i < socket_count; i++) {
- if (pcmcia_socket[i].virt_io)
- iounmap((void *)pcmcia_socket[i].virt_io);
- }
- debug(1, "au1000: shutdown complete\n");
+ return ret;
}
-module_exit(au1000_pcmcia_driver_shutdown);
+/* au1x00_pcmcia_sock_init()
+ *
+ * (Re-)Initialise the socket, turning on status interrupts
+ * and PCMCIA bus. This must wait for power to stabilise
+ * so that the card status signals report correctly.
+ *
+ * Returns: 0
+ */
+static int au1x00_pcmcia_sock_init(struct pcmcia_socket *sock)
+{
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
-static int au1000_pcmcia_init(unsigned int sock) { return 0; }
+ debug("initializing socket %u\n", skt->nr);
-static int au1000_pcmcia_suspend(unsigned int sock)
-{
+ skt->ops->socket_init(skt);
return 0;
}
-
-static inline unsigned
-au1000_pcmcia_events(struct pcmcia_state *state,
- struct pcmcia_state *prev_state,
- unsigned int mask, unsigned int flags)
+/*
+ * au1x00_pcmcia_suspend()
+ *
+ * Remove power on the socket, disable IRQs from the card.
+ * Turn off status interrupts, and disable the PCMCIA bus.
+ *
+ * Returns: 0
+ */
+static int au1x00_pcmcia_suspend(struct pcmcia_socket *sock)
{
- unsigned int events=0;
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
+ int ret;
- if(state->detect!=prev_state->detect){
- debug(2, "%s(): card detect value %u\n",
- __FUNCTION__, state->detect);
- events |= mask&SS_DETECT;
- }
-
-
- if(state->ready!=prev_state->ready){
- debug(2, "%s(): card ready value %u\n",
- __FUNCTION__, state->ready);
- events |= mask&((flags&SS_IOCARD)?0:SS_READY);
- }
+ debug("suspending socket %u\n", skt->nr);
- *prev_state=*state;
- return events;
+ ret = au1x00_pcmcia_config_skt(skt, &dead_socket);
+ if (ret == 0)
+ skt->ops->socket_suspend(skt);
-} /* au1000_pcmcia_events() */
+ return ret;
+}
+static DEFINE_SPINLOCK(status_lock);
-/*
- * Au1000_pcmcia_task_handler()
- * Processes socket events.
+/*
+ * au1x00_check_status()
*/
-static void au1000_pcmcia_task_handler(void *data)
+static void au1x00_check_status(struct au1000_pcmcia_socket *skt)
{
- struct pcmcia_state state;
- int i, events, irq_status;
-
- for(i=0; i<socket_count; i++) {
- if((irq_status = pcmcia_low_level->socket_state(i, &state))<0)
- printk(KERN_ERR "low-level PCMCIA error\n");
-
- events = au1000_pcmcia_events(&state,
- &pcmcia_socket[i].k_state,
- pcmcia_socket[i].cs_state.csc_mask,
- pcmcia_socket[i].cs_state.flags);
- if(pcmcia_socket[i].handler!=NULL) {
- pcmcia_socket[i].handler(pcmcia_socket[i].handler_info,
- events);
- }
- }
+ unsigned int events;
-} /* au1000_pcmcia_task_handler() */
+ debug("entering PCMCIA monitoring thread\n");
-static struct tq_struct au1000_pcmcia_task = {
- routine: au1000_pcmcia_task_handler
-};
+ do {
+ unsigned int status;
+ unsigned long flags;
+ status = au1x00_pcmcia_skt_state(skt);
-static void au1000_pcmcia_poll_event(unsigned long dummy)
-{
- poll_timer.function = au1000_pcmcia_poll_event;
- poll_timer.expires = jiffies + AU1000_PCMCIA_POLL_PERIOD;
- add_timer(&poll_timer);
- schedule_task(&au1000_pcmcia_task);
-}
+ spin_lock_irqsave(&status_lock, flags);
+ events = (status ^ skt->status) & skt->cs_state.csc_mask;
+ skt->status = status;
+ spin_unlock_irqrestore(&status_lock, flags);
+ debug("events: %s%s%s%s%s%s\n",
+ events == 0 ? "<NONE>" : "",
+ events & SS_DETECT ? "DETECT " : "",
+ events & SS_READY ? "READY " : "",
+ events & SS_BATDEAD ? "BATDEAD " : "",
+ events & SS_BATWARN ? "BATWARN " : "",
+ events & SS_STSCHG ? "STSCHG " : "");
+
+ if (events)
+ pcmcia_parse_events(&skt->socket, events);
+ } while (events);
+}
/*
- * au1000_pcmcia_interrupt()
- * The actual interrupt work is performed by au1000_pcmcia_task(),
- * because the Card Services event handling code performs scheduling
- * operations which cannot be executed from within an interrupt context.
+ * au1x00_pcmcia_poll_event()
+ * Let's poll for events in addition to IRQs, since IRQs alone are unreliable...
*/
-static void
-au1000_pcmcia_interrupt(int irq, void *dev, struct pt_regs *regs)
+static void au1x00_pcmcia_poll_event(unsigned long dummy)
{
- schedule_task(&au1000_pcmcia_task);
-}
+ struct au1000_pcmcia_socket *skt = (struct au1000_pcmcia_socket *)dummy;
+ debug("polling for events\n");
+ mod_timer(&skt->poll_timer, jiffies + AU1000_PCMCIA_POLL_PERIOD);
-static int
-au1000_pcmcia_register_callback(unsigned int sock,
- void (*handler)(void *, unsigned int), void *info)
-{
- if(handler==NULL){
- pcmcia_socket[sock].handler=NULL;
- MOD_DEC_USE_COUNT;
- } else {
- MOD_INC_USE_COUNT;
- pcmcia_socket[sock].handler=handler;
- pcmcia_socket[sock].handler_info=info;
- }
- return 0;
+ au1x00_check_status(skt);
}
-
-/* au1000_pcmcia_inquire_socket()
- *
- * From the sa1100 socket driver :
+/* au1x00_pcmcia_get_status()
*
- * Implements the inquire_socket() operation for the in-kernel PCMCIA
- * service (formerly SS_InquireSocket in Card Services). We set
- * SS_CAP_STATIC_MAP, which disables the memory resource database check.
- * (Mapped memory is set up within the socket driver itself.)
+ * From the sa11xx_core.c:
+ * Implements the get_status() operation for the in-kernel PCMCIA
+ * service (formerly SS_GetStatus in Card Services). Essentially just
+ * fills in bits in `status' according to internal driver state or
+ * the value of the voltage detect chipselect register.
*
- * In conjunction with the STATIC_MAP capability is a new field,
- * `io_offset', recommended by David Hinds. Rather than go through
- * the SetIOMap interface (which is not quite suited for communicating
- * window locations up from the socket driver), we just pass up
- * an offset which is applied to client-requested base I/O addresses
- * in alloc_io_space().
+ * As a debugging note, during card startup, the PCMCIA core issues
+ * three set_socket() commands in a row: the first with RESET deasserted,
+ * the second with RESET asserted, and the last with RESET deasserted
+ * again. Following the third set_socket(), a get_status() command will
+ * be issued. The kernel is looking for the SS_READY flag (see
+ * setup_socket(), reset_socket(), and unreset_socket() in cs.c).
*
- * Returns: 0 on success, -1 if no pin has been configured for `sock'
+ * Returns: 0
*/
-static int au1000_pcmcia_inquire_socket(unsigned int sock, socket_cap_t *cap)
+static int
+au1x00_pcmcia_get_status(struct pcmcia_socket *sock, unsigned int *status)
{
- struct pcmcia_irq_info irq_info;
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
- if(sock > socket_count){
- printk(KERN_ERR "au1000: socket %u not configured\n", sock);
- return -1;
- }
-
- /* from the sa1100_generic driver: */
-
- /* SS_CAP_PAGE_REGS: used by setup_cis_mem() in cistpl.c to set the
- * force_low argument to validate_mem() in rsrc_mgr.c -- since in
- * general, the mapped * addresses of the PCMCIA memory regions
- * will not be within 0xffff, setting force_low would be
- * undesirable.
- *
- * SS_CAP_STATIC_MAP: don't bother with the (user-configured) memory
- * resource database; we instead pass up physical address ranges
- * and allow other parts of Card Services to deal with remapping.
- *
- * SS_CAP_PCCARD: we can deal with 16-bit PCMCIA & CF cards, but
- * not 32-bit CardBus devices.
- */
- cap->features=(SS_CAP_PAGE_REGS | SS_CAP_STATIC_MAP | SS_CAP_PCCARD);
-
- irq_info.sock=sock;
- irq_info.irq=-1;
-
- if(pcmcia_low_level->get_irq_info(&irq_info)<0){
- printk(KERN_ERR "Error obtaining IRQ info socket %u\n", sock);
- return -1;
- }
-
- cap->irq_mask=0;
- cap->map_size=MAP_SIZE;
- cap->pci_irq=irq_info.irq;
- cap->io_offset=pcmcia_socket[sock].virt_io;
+ skt->status = au1x00_pcmcia_skt_state(skt);
+ *status = skt->status;
return 0;
+}
-} /* au1000_pcmcia_inquire_socket() */
-
-
-static int
-au1000_pcmcia_get_status(unsigned int sock, unsigned int *status)
+/* au1x00_pcmcia_get_socket()
+ * Implements the get_socket() operation for the in-kernel PCMCIA
+ * service (formerly SS_GetSocket in Card Services). Not a very
+ * exciting routine.
+ *
+ * Returns: 0
+ */
+static int
+au1x00_pcmcia_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
{
- struct pcmcia_state state;
-
-
- if((pcmcia_low_level->socket_state(sock, &state))<0){
- printk(KERN_ERR "Unable to get PCMCIA status from kernel.\n");
- return -1;
- }
-
- pcmcia_socket[sock].k_state = state;
-
- *status = state.detect?SS_DETECT:0;
-
- *status |= state.ready?SS_READY:0;
-
- *status |= pcmcia_socket[sock].cs_state.Vcc?SS_POWERON:0;
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
- if(pcmcia_socket[sock].cs_state.flags&SS_IOCARD)
- *status |= state.bvd1?SS_STSCHG:0;
- else {
- if(state.bvd1==0)
- *status |= SS_BATDEAD;
- else if(state.bvd2 == 0)
- *status |= SS_BATWARN;
- }
-
- *status|=state.vs_3v?SS_3VCARD:0;
-
- *status|=state.vs_Xv?SS_XVCARD:0;
-
- debug(2, "\tstatus: %s%s%s%s%s%s%s%s\n",
- (*status&SS_DETECT)?"DETECT ":"",
- (*status&SS_READY)?"READY ":"",
- (*status&SS_BATDEAD)?"BATDEAD ":"",
- (*status&SS_BATWARN)?"BATWARN ":"",
- (*status&SS_POWERON)?"POWERON ":"",
- (*status&SS_STSCHG)?"STSCHG ":"",
- (*status&SS_3VCARD)?"3VCARD ":"",
- (*status&SS_XVCARD)?"XVCARD ":"");
-
- return 0;
-
-} /* au1000_pcmcia_get_status() */
-
-
-static int
-au1000_pcmcia_get_socket(unsigned int sock, socket_state_t *state)
-{
- *state = pcmcia_socket[sock].cs_state;
- return 0;
+ debug("for sock %u\n", skt->nr);
+ *state = skt->cs_state;
+ return 0;
}
-
-static int
-au1000_pcmcia_set_socket(unsigned int sock, socket_state_t *state)
+/* au1x00_pcmcia_set_socket()
+ * Implements the set_socket() operation for the in-kernel PCMCIA
+ * service (formerly SS_SetSocket in Card Services). We more or
+ * less punt all of this work and let the kernel handle the details
+ * of power configuration, reset, &c. We also record the value of
+ * `state' in order to regurgitate it to the PCMCIA core later.
+ *
+ * Returns: 0
+ */
+static int
+au1x00_pcmcia_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
{
- struct pcmcia_configure configure;
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
- debug(2, "\tmask: %s%s%s%s%s%s\n\tflags: %s%s%s%s%s%s\n"
- "\tVcc %d Vpp %d irq %d\n",
+ debug("for sock %u\n", skt->nr);
+
+ debug("\tmask: %s%s%s%s%s%s\n\tflags: %s%s%s%s%s%s\n",
(state->csc_mask==0)?"<NONE>":"",
(state->csc_mask&SS_DETECT)?"DETECT ":"",
(state->csc_mask&SS_READY)?"READY ":"",
(state->flags&SS_IOCARD)?"IOCARD ":"",
(state->flags&SS_RESET)?"RESET ":"",
(state->flags&SS_SPKR_ENA)?"SPKR_ENA ":"",
- (state->flags&SS_OUTPUT_ENA)?"OUTPUT_ENA ":"",
+ (state->flags&SS_OUTPUT_ENA)?"OUTPUT_ENA ":"");
+ debug("\tVcc %d Vpp %d irq %d\n",
state->Vcc, state->Vpp, state->io_irq);
- configure.sock=sock;
- configure.vcc=state->Vcc;
- configure.vpp=state->Vpp;
- configure.output=(state->flags&SS_OUTPUT_ENA)?1:0;
- configure.speaker=(state->flags&SS_SPKR_ENA)?1:0;
- configure.reset=(state->flags&SS_RESET)?1:0;
-
- if(pcmcia_low_level->configure_socket(&configure)<0){
- printk(KERN_ERR "Unable to configure socket %u\n", sock);
- return -1;
- }
-
- pcmcia_socket[sock].cs_state = *state;
- return 0;
-
-} /* au1000_pcmcia_set_socket() */
-
-
-static int
-au1000_pcmcia_get_io_map(unsigned int sock, struct pccard_io_map *map)
-{
- debug(1, "au1000_pcmcia_get_io_map: sock %d\n", sock);
- if(map->map>=MAX_IO_WIN){
- printk(KERN_ERR "%s(): map (%d) out of range\n",
- __FUNCTION__, map->map);
- return -1;
- }
- *map=pcmcia_socket[sock].io_map[map->map];
- return 0;
+ return au1x00_pcmcia_config_skt(skt, state);
}
-
int
-au1000_pcmcia_set_io_map(unsigned int sock, struct pccard_io_map *map)
+au1x00_pcmcia_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *map)
{
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
unsigned int speed;
- unsigned long start;
if(map->map>=MAX_IO_WIN){
- printk(KERN_ERR "%s(): map (%d) out of range\n",
- __FUNCTION__, map->map);
+ debug("map (%d) out of range\n", map->map);
return -1;
}
if(map->flags&MAP_ACTIVE){
speed=(map->speed>0)?map->speed:AU1000_PCMCIA_IO_SPEED;
- pcmcia_socket[sock].speed_io=speed;
+ skt->spd_io[map->map] = speed;
}
- start=map->start;
-
- if(map->stop==1) {
- map->stop=PAGE_SIZE-1;
- }
-
- map->start=pcmcia_socket[sock].virt_io;
- map->stop=map->start+(map->stop-start);
- pcmcia_socket[sock].io_map[map->map]=*map;
- debug(3, "set_io_map %d start %x stop %x\n",
- map->map, map->start, map->stop);
+ map->start=(ioaddr_t)(u32)skt->virt_io;
+ map->stop=map->start+MAP_SIZE;
return 0;
-} /* au1000_pcmcia_set_io_map() */
+} /* au1x00_pcmcia_set_io_map() */
static int
-au1000_pcmcia_get_mem_map(unsigned int sock, struct pccard_mem_map *map)
+au1x00_pcmcia_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *map)
{
+ struct au1000_pcmcia_socket *skt = to_au1000_socket(sock);
+ unsigned short speed = map->speed;
- if(map->map>=MAX_WIN) {
- printk(KERN_ERR "%s(): map (%d) out of range\n",
- __FUNCTION__, map->map);
+ if(map->map>=MAX_WIN){
+ debug("map (%d) out of range\n", map->map);
return -1;
}
- *map=pcmcia_socket[sock].mem_map[map->map];
+
+ if (map->flags & MAP_ATTRIB) {
+ skt->spd_attr[map->map] = speed;
+ skt->spd_mem[map->map] = 0;
+ } else {
+ skt->spd_attr[map->map] = 0;
+ skt->spd_mem[map->map] = speed;
+ }
+
+ if (map->flags & MAP_ATTRIB) {
+ map->static_start = skt->phys_attr + map->card_start;
+ }
+ else {
+ map->static_start = skt->phys_mem + map->card_start;
+ }
+
+ debug("set_mem_map %d start %08lx card_start %08x\n",
+ map->map, map->static_start, map->card_start);
return 0;
-}
+} /* au1x00_pcmcia_set_mem_map() */
-static int
-au1000_pcmcia_set_mem_map(unsigned int sock, struct pccard_mem_map *map)
+static struct pccard_operations au1x00_pcmcia_operations = {
+ .init = au1x00_pcmcia_sock_init,
+ .suspend = au1x00_pcmcia_suspend,
+ .get_status = au1x00_pcmcia_get_status,
+ .get_socket = au1x00_pcmcia_get_socket,
+ .set_socket = au1x00_pcmcia_set_socket,
+ .set_io_map = au1x00_pcmcia_set_io_map,
+ .set_mem_map = au1x00_pcmcia_set_mem_map,
+};
+
+static const char *skt_names[] = {
+ "PCMCIA socket 0",
+ "PCMCIA socket 1",
+};
+
+struct skt_dev_info {
+ int nskt;
+};
+
+int au1x00_pcmcia_socket_probe(struct device *dev, struct pcmcia_low_level *ops, int first, int nr)
{
- unsigned int speed;
- u_long flags;
+ struct skt_dev_info *sinfo;
+ int ret, i;
- if(map->map>=MAX_WIN){
- printk(KERN_ERR "%s(): map (%d) out of range\n",
- __FUNCTION__, map->map);
- return -1;
+ sinfo = kmalloc(sizeof(struct skt_dev_info), GFP_KERNEL);
+ if (!sinfo) {
+ ret = -ENOMEM;
+ goto out;
}
- if(map->flags&MAP_ACTIVE){
- speed=(map->speed>0)?map->speed:AU1000_PCMCIA_MEM_SPEED;
-
- /* TBD */
- if(map->flags&MAP_ATTRIB){
- pcmcia_socket[sock].speed_attr=speed;
- }
- else {
- pcmcia_socket[sock].speed_mem=speed;
+ memset(sinfo, 0, sizeof(struct skt_dev_info));
+ sinfo->nskt = nr;
+
+ /*
+ * Initialise the per-socket structure.
+ */
+ for (i = 0; i < nr; i++) {
+ struct au1000_pcmcia_socket *skt = PCMCIA_SOCKET(i);
+ memset(skt, 0, sizeof(*skt));
+
+ skt->socket.ops = &au1x00_pcmcia_operations;
+ skt->socket.owner = ops->owner;
+ skt->socket.dev.dev = dev;
+
+ init_timer(&skt->poll_timer);
+ skt->poll_timer.function = au1x00_pcmcia_poll_event;
+ skt->poll_timer.data = (unsigned long)skt;
+ skt->poll_timer.expires = jiffies + AU1000_PCMCIA_POLL_PERIOD;
+
+ skt->nr = first + i;
+ skt->irq = 255;
+ skt->dev = dev;
+ skt->ops = ops;
+
+ skt->res_skt.name = skt_names[skt->nr];
+ skt->res_io.name = "io";
+ skt->res_io.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ skt->res_mem.name = "memory";
+ skt->res_mem.flags = IORESOURCE_MEM;
+ skt->res_attr.name = "attribute";
+ skt->res_attr.flags = IORESOURCE_MEM;
+
+ /*
+ * PCMCIA client drivers use the inb/outb macros to access the
+ * IO registers. Since mips_io_port_base is added to the
+ * access address of the mips implementation of inb/outb,
+ * we need to subtract it here because we want to access the
+ * I/O or MEM address directly, without going through this
+ * "mips_io_port_base" mechanism.
+ */
+ if (i == 0) {
+ skt->virt_io = (void *)
+ (ioremap((phys_t)AU1X_SOCK0_IO, 0x1000) -
+ (u32)mips_io_port_base);
+ skt->phys_attr = AU1X_SOCK0_PSEUDO_PHYS_ATTR;
+ skt->phys_mem = AU1X_SOCK0_PSEUDO_PHYS_MEM;
}
- }
+#ifndef CONFIG_MIPS_XXS1500
+ else {
+ skt->virt_io = (void *)
+ (ioremap((phys_t)AU1X_SOCK1_IO, 0x1000) -
+ (u32)mips_io_port_base);
+ skt->phys_attr = AU1X_SOCK1_PSEUDO_PHYS_ATTR;
+ skt->phys_mem = AU1X_SOCK1_PSEUDO_PHYS_MEM;
+ }
+#endif
+ pcmcia_base_vaddrs[i] = (u32 *)skt->virt_io;
+ ret = ops->hw_init(skt);
- spin_lock_irqsave(&pcmcia_lock, flags);
- if (map->flags & MAP_ATTRIB) {
- map->static_start = pcmcia_socket[sock].phys_attr +
- map->card_start;
- }
- else {
- map->static_start = pcmcia_socket[sock].phys_mem +
- map->card_start;
+ skt->socket.features = SS_CAP_STATIC_MAP|SS_CAP_PCCARD;
+ skt->socket.irq_mask = 0;
+ skt->socket.map_size = MAP_SIZE;
+ skt->socket.pci_irq = skt->irq;
+ skt->socket.io_offset = (unsigned long)skt->virt_io;
+
+ skt->status = au1x00_pcmcia_skt_state(skt);
+
+ ret = pcmcia_register_socket(&skt->socket);
+ if (ret)
+ goto out_err;
+
+ WARN_ON(skt->socket.sock != i);
+
+ add_timer(&skt->poll_timer);
}
- pcmcia_socket[sock].mem_map[map->map]=*map;
- spin_unlock_irqrestore(&pcmcia_lock, flags);
- debug(3, "set_mem_map %d start %x card_start %x\n",
- map->map, map->static_start,
- map->card_start);
+ dev_set_drvdata(dev, sinfo);
return 0;
-} /* au1000_pcmcia_set_mem_map() */
+ do {
+ struct au1000_pcmcia_socket *skt = PCMCIA_SOCKET(i);
+ del_timer_sync(&skt->poll_timer);
+ pcmcia_unregister_socket(&skt->socket);
+out_err:
+ flush_scheduled_work();
+ ops->hw_shutdown(skt);
-#if defined(CONFIG_PROC_FS)
+ i--;
+ } while (i > 0);
+ kfree(sinfo);
+out:
+ return ret;
+}
-static void
-au1000_pcmcia_proc_setup(unsigned int sock, struct proc_dir_entry *base)
+int au1x00_drv_pcmcia_remove(struct device *dev)
{
- struct proc_dir_entry *entry;
+ struct skt_dev_info *sinfo = dev_get_drvdata(dev);
+ int i;
- if((entry=create_proc_entry("status", 0, base))==NULL){
- printk(KERN_ERR "Unable to install \"status\" procfs entry\n");
- return;
+ down(&pcmcia_sockets_lock);
+ dev_set_drvdata(dev, NULL);
+
+ for (i = 0; i < sinfo->nskt; i++) {
+ struct au1000_pcmcia_socket *skt = PCMCIA_SOCKET(i);
+
+ del_timer_sync(&skt->poll_timer);
+ pcmcia_unregister_socket(&skt->socket);
+ flush_scheduled_work();
+ skt->ops->hw_shutdown(skt);
+ au1x00_pcmcia_config_skt(skt, &dead_socket);
+ iounmap(skt->virt_io);
+ skt->virt_io = NULL;
}
- entry->read_proc=au1000_pcmcia_proc_status;
- entry->data=(void *)sock;
+ kfree(sinfo);
+ up(&pcmcia_sockets_lock);
+ return 0;
}
-/* au1000_pcmcia_proc_status()
- * Implements the /proc/bus/pccard/??/status file.
+/*
+ * PCMCIA "Driver" API
+ */
+
+static int au1x00_drv_pcmcia_probe(struct device *dev)
+{
+ int i, ret = -ENODEV;
+
+ down(&pcmcia_sockets_lock);
+ for (i=0; i < ARRAY_SIZE(au1x00_pcmcia_hw_init); i++) {
+ ret = au1x00_pcmcia_hw_init[i](dev);
+ if (ret == 0)
+ break;
+ }
+ up(&pcmcia_sockets_lock);
+ return ret;
+}
+
+
+static int au1x00_drv_pcmcia_suspend(struct device *dev, u32 state, u32 level)
+{
+ int ret = 0;
+ if (level == SUSPEND_SAVE_STATE)
+ ret = pcmcia_socket_dev_suspend(dev, state);
+ return ret;
+}
+
+static int au1x00_drv_pcmcia_resume(struct device *dev, u32 level)
+{
+ int ret = 0;
+ if (level == RESUME_RESTORE_STATE)
+ ret = pcmcia_socket_dev_resume(dev);
+ return ret;
+}
+
+
+static struct device_driver au1x00_pcmcia_driver = {
+ .probe = au1x00_drv_pcmcia_probe,
+ .remove = au1x00_drv_pcmcia_remove,
+ .name = "au1x00-pcmcia",
+ .bus = &platform_bus_type,
+ .suspend = au1x00_drv_pcmcia_suspend,
+ .resume = au1x00_drv_pcmcia_resume
+};
+
+static struct platform_device au1x00_device = {
+ .name = "au1x00-pcmcia",
+ .id = 0,
+};
+
+/* au1x00_pcmcia_init()
+ *
+ * This routine performs low-level PCMCIA initialization and then
+ * registers this socket driver with Card Services.
*
- * Returns: the number of characters added to the buffer
+ * Returns: 0 on success, -ve error code on failure
*/
-static int
-au1000_pcmcia_proc_status(char *buf, char **start, off_t pos,
- int count, int *eof, void *data)
+static int __init au1x00_pcmcia_init(void)
{
- char *p=buf;
- unsigned int sock=(unsigned int)data;
-
- p+=sprintf(p, "k_flags : %s%s%s%s%s%s%s\n",
- pcmcia_socket[sock].k_state.detect?"detect ":"",
- pcmcia_socket[sock].k_state.ready?"ready ":"",
- pcmcia_socket[sock].k_state.bvd1?"bvd1 ":"",
- pcmcia_socket[sock].k_state.bvd2?"bvd2 ":"",
- pcmcia_socket[sock].k_state.wrprot?"wrprot ":"",
- pcmcia_socket[sock].k_state.vs_3v?"vs_3v ":"",
- pcmcia_socket[sock].k_state.vs_Xv?"vs_Xv ":"");
-
- p+=sprintf(p, "status : %s%s%s%s%s%s%s%s%s\n",
- pcmcia_socket[sock].k_state.detect?"SS_DETECT ":"",
- pcmcia_socket[sock].k_state.ready?"SS_READY ":"",
- pcmcia_socket[sock].cs_state.Vcc?"SS_POWERON ":"",
- pcmcia_socket[sock].cs_state.flags&SS_IOCARD?\
- "SS_IOCARD ":"",
- (pcmcia_socket[sock].cs_state.flags&SS_IOCARD &&
- pcmcia_socket[sock].k_state.bvd1)?"SS_STSCHG ":"",
- ((pcmcia_socket[sock].cs_state.flags&SS_IOCARD)==0 &&
- (pcmcia_socket[sock].k_state.bvd1==0))?"SS_BATDEAD ":"",
- ((pcmcia_socket[sock].cs_state.flags&SS_IOCARD)==0 &&
- (pcmcia_socket[sock].k_state.bvd2==0))?"SS_BATWARN ":"",
- pcmcia_socket[sock].k_state.vs_3v?"SS_3VCARD ":"",
- pcmcia_socket[sock].k_state.vs_Xv?"SS_XVCARD ":"");
-
- p+=sprintf(p, "mask : %s%s%s%s%s\n",
- pcmcia_socket[sock].cs_state.csc_mask&SS_DETECT?\
- "SS_DETECT ":"",
- pcmcia_socket[sock].cs_state.csc_mask&SS_READY?\
- "SS_READY ":"",
- pcmcia_socket[sock].cs_state.csc_mask&SS_BATDEAD?\
- "SS_BATDEAD ":"",
- pcmcia_socket[sock].cs_state.csc_mask&SS_BATWARN?\
- "SS_BATWARN ":"",
- pcmcia_socket[sock].cs_state.csc_mask&SS_STSCHG?\
- "SS_STSCHG ":"");
-
- p+=sprintf(p, "cs_flags : %s%s%s%s%s\n",
- pcmcia_socket[sock].cs_state.flags&SS_PWR_AUTO?\
- "SS_PWR_AUTO ":"",
- pcmcia_socket[sock].cs_state.flags&SS_IOCARD?\
- "SS_IOCARD ":"",
- pcmcia_socket[sock].cs_state.flags&SS_RESET?\
- "SS_RESET ":"",
- pcmcia_socket[sock].cs_state.flags&SS_SPKR_ENA?\
- "SS_SPKR_ENA ":"",
- pcmcia_socket[sock].cs_state.flags&SS_OUTPUT_ENA?\
- "SS_OUTPUT_ENA ":"");
-
- p+=sprintf(p, "Vcc : %d\n", pcmcia_socket[sock].cs_state.Vcc);
- p+=sprintf(p, "Vpp : %d\n", pcmcia_socket[sock].cs_state.Vpp);
- p+=sprintf(p, "irq : %d\n", pcmcia_socket[sock].cs_state.io_irq);
- p+=sprintf(p, "I/O : %u\n", pcmcia_socket[sock].speed_io);
- p+=sprintf(p, "attribute: %u\n", pcmcia_socket[sock].speed_attr);
- p+=sprintf(p, "common : %u\n", pcmcia_socket[sock].speed_mem);
- return p-buf;
+ int error = 0;
+ if ((error = driver_register(&au1x00_pcmcia_driver)))
+ return error;
+ platform_device_register(&au1x00_device);
+ return error;
}
+/* au1x00_pcmcia_exit()
+ * Unregisters the driver and platform device; the driver core then runs
+ * the remove hook, which frees the IRQs associated with this socket
+ * controller and shuts the low-level hardware down.
+ */
+static void __exit au1x00_pcmcia_exit(void)
+{
+ driver_unregister(&au1x00_pcmcia_driver);
+ platform_device_unregister(&au1x00_device);
+}
-#endif /* defined(CONFIG_PROC_FS) */
+module_init(au1x00_pcmcia_init);
+module_exit(au1x00_pcmcia_exit);
--- /dev/null
+/*
+ * Alchemy Semi Au1000 pcmcia driver include file
+ *
+ * Copyright 2001 MontaVista Software Inc.
+ * Author: MontaVista Software, Inc.
+ * ppopov@mvista.com or source@mvista.com
+ *
+ * This program is free software; you can distribute it and/or modify it
+ * under the terms of the GNU General Public License (Version 2) as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ */
+#ifndef __ASM_AU1000_PCMCIA_H
+#define __ASM_AU1000_PCMCIA_H
+
+/* include the world */
+#include <pcmcia/version.h>
+#include <pcmcia/cs_types.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/bulkmem.h>
+#include <pcmcia/cistpl.h>
+#include "cs_internal.h"
+
+#define AU1000_PCMCIA_POLL_PERIOD (2*HZ)
+#define AU1000_PCMCIA_IO_SPEED (255)
+#define AU1000_PCMCIA_MEM_SPEED (300)
+
+#define AU1X_SOCK0_IO 0xF00000000
+#define AU1X_SOCK0_PHYS_ATTR 0xF40000000
+#define AU1X_SOCK0_PHYS_MEM 0xF80000000
+/* pseudo 32 bit phys addresses, which get fixed up to the
+ * real 36 bit address in fixup_bigphys_addr() */
+#define AU1X_SOCK0_PSEUDO_PHYS_ATTR 0xF4000000
+#define AU1X_SOCK0_PSEUDO_PHYS_MEM 0xF8000000
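+/* e.g. the pseudo address 0xF4000000 is fixed up to the 36-bit bus
+ * address 0xF40000000 before it is actually used. */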
+
+/* pcmcia socket 1 needs external glue logic so the memory map
+ * differs from board to board.
+ */
+#if defined(CONFIG_MIPS_PB1000) || defined(CONFIG_MIPS_PB1100) || defined(CONFIG_MIPS_PB1500) || defined(CONFIG_MIPS_PB1550)
+#define AU1X_SOCK1_IO 0xF08000000
+#define AU1X_SOCK1_PHYS_ATTR 0xF48000000
+#define AU1X_SOCK1_PHYS_MEM 0xF88000000
+#define AU1X_SOCK1_PSEUDO_PHYS_ATTR 0xF4800000
+#define AU1X_SOCK1_PSEUDO_PHYS_MEM 0xF8800000
+#elif defined(CONFIG_MIPS_DB1000) || defined(CONFIG_MIPS_DB1100) || defined(CONFIG_MIPS_DB1500) || defined(CONFIG_MIPS_DB1550)
+#define AU1X_SOCK1_IO 0xF04000000
+#define AU1X_SOCK1_PHYS_ATTR 0xF44000000
+#define AU1X_SOCK1_PHYS_MEM 0xF84000000
+#define AU1X_SOCK1_PSEUDO_PHYS_ATTR 0xF4400000
+#define AU1X_SOCK1_PSEUDO_PHYS_MEM 0xF8400000
+#endif
+
+struct pcmcia_state {
+ unsigned detect: 1,
+ ready: 1,
+ wrprot: 1,
+ bvd1: 1,
+ bvd2: 1,
+ vs_3v: 1,
+ vs_Xv: 1;
+};
+
+struct pcmcia_configure {
+ unsigned sock: 8,
+ vcc: 8,
+ vpp: 8,
+ output: 1,
+ speaker: 1,
+ reset: 1;
+};
+
+struct pcmcia_irqs {
+ int sock;
+ int irq;
+ const char *str;
+};
+
+
+struct au1000_pcmcia_socket {
+ struct pcmcia_socket socket;
+
+ /*
+ * Info from low level handler
+ */
+ struct device *dev;
+ unsigned int nr;
+ unsigned int irq;
+
+ /*
+ * Core PCMCIA state
+ */
+ struct pcmcia_low_level *ops;
+
+ unsigned int status;
+ socket_state_t cs_state;
+
+ unsigned short spd_io[MAX_IO_WIN];
+ unsigned short spd_mem[MAX_WIN];
+ unsigned short spd_attr[MAX_WIN];
+
+ struct resource res_skt;
+ struct resource res_io;
+ struct resource res_mem;
+ struct resource res_attr;
+
+ void * virt_io;
+ ioaddr_t phys_io;
+ unsigned int phys_attr;
+ unsigned int phys_mem;
+ unsigned short speed_io, speed_attr, speed_mem;
+
+ unsigned int irq_state;
+
+ struct timer_list poll_timer;
+};
+
+struct pcmcia_low_level {
+ struct module *owner;
+
+ int (*hw_init)(struct au1000_pcmcia_socket *);
+ void (*hw_shutdown)(struct au1000_pcmcia_socket *);
+
+ void (*socket_state)(struct au1000_pcmcia_socket *, struct pcmcia_state *);
+ int (*configure_socket)(struct au1000_pcmcia_socket *, struct socket_state_t *);
+
+ /*
+ * Enable card status IRQs on (re-)initialisation. This can
+ * be called at initialisation, power management event, or
+ * pcmcia event.
+ */
+ void (*socket_init)(struct au1000_pcmcia_socket *);
+
+ /*
+ * Disable card status IRQs and PCMCIA bus on suspend.
+ */
+ void (*socket_suspend)(struct au1000_pcmcia_socket *);
+};
+
+extern int au1x_board_init(struct device *dev);
+
+#endif /* __ASM_AU1000_PCMCIA_H */
#include <linux/timer.h>
#include <linux/mm.h>
#include <linux/proc_fs.h>
+#include <linux/version.h>
#include <linux/types.h>
#include <pcmcia/version.h>
#define PCMCIA_IRQ AU1000_GPIO_15
#elif defined (CONFIG_MIPS_PB1500)
#include <asm/pb1500.h>
-#define PCMCIA_IRQ AU1000_GPIO_11 /* fixme */
+#define PCMCIA_IRQ AU1500_GPIO_203
#elif defined (CONFIG_MIPS_PB1100)
#include <asm/pb1100.h>
#define PCMCIA_IRQ AU1000_GPIO_11
#else /* fixme -- take care of the Pb1500 at some point */
u16 pcr;
- pcr = au_readw(PB1100_MEM_PCMCIA) & ~0xf; /* turn off power */
- pcr &= ~(PB1100_PC_DEASSERT_RST | PB1100_PC_DRV_EN);
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ pcr = au_readw(PCMCIA_BOARD_REG) & ~0xf; /* turn off power */
+ pcr &= ~(PC_DEASSERT_RST | PC_DRV_EN);
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(500);
return PCMCIA_NUM_SOCKS;
#endif
return 0;
#else
u16 pcr;
- pcr = au_readw(PB1100_MEM_PCMCIA) & ~0xf; /* turn off power */
- pcr &= ~(PB1100_PC_DEASSERT_RST | PB1100_PC_DRV_EN);
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ pcr = au_readw(PCMCIA_BOARD_REG) & ~0xf; /* turn off power */
+ pcr &= ~(PC_DEASSERT_RST | PC_DRV_EN);
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(2);
return 0;
#endif
vs0 = (vs0 >> 4) & 0x3;
vs1 = (vs1 >> 12) & 0x3;
#else
- vs0 = (au_readw(PB1100_BOARD_STATUS) >> 4) & 0x3;
+ vs0 = (au_readw(BOARD_STATUS_REG) >> 4) & 0x3;
+#ifdef CONFIG_MIPS_PB1500
+ inserted0 = !((au_readl(GPIO2_PINSTATE) >> 1) & 0x1); /* gpio 201 */
+#else /* Pb1100 */
inserted0 = !((au_readl(SYS_PINSTATERD) >> 9) & 0x1); /* gpio 9 */
#endif
+ inserted1 = 0;
+#endif
state->ready = 0;
state->vs_Xv = 0;
/* return without setting 'detect' */
printk(KERN_ERR "pb1x00 bad VS (%d)\n",
vs0);
- return;
+ return 0;
}
state->detect = 1;
}
/* return without setting 'detect' */
printk(KERN_ERR "pb1x00 bad VS (%d)\n",
vs1);
- return;
+ return 0;
}
state->detect = 1;
}
#else
- pcr = au_readw(PB1100_MEM_PCMCIA) & ~0xf;
+ pcr = au_readw(PCMCIA_BOARD_REG) & ~0xf;
debug("Vcc %dV Vpp %dV, pcr %x, reset %d\n",
configure->vcc, configure->vpp, pcr, configure->reset);
break;
}
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(300);
if (!configure->reset) {
- pcr |= PB1100_PC_DRV_EN;
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ pcr |= PC_DRV_EN;
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(100);
- pcr |= PB1100_PC_DEASSERT_RST;
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ pcr |= PC_DEASSERT_RST;
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(100);
}
else {
- pcr &= ~(PB1100_PC_DEASSERT_RST | PB1100_PC_DRV_EN);
- au_writew(pcr, PB1100_MEM_PCMCIA);
+ pcr &= ~(PC_DEASSERT_RST | PC_DRV_EN);
+ au_writew(pcr, PCMCIA_BOARD_REG);
au_sync_delay(100);
}
#endif
return 0;
}
+
struct pcmcia_low_level pb1x00_pcmcia_ops = {
pb1x00_pcmcia_init,
pb1x00_pcmcia_shutdown,
struct pccard_io_map *sio;
pgprot_t prot;
- DPRINTK("hs_set_io_map(sock=%d, map=%d, flags=0x%x, speed=%dns, start=0x%04x, stop=0x%04x)\n",
+ DPRINTK("hs_set_io_map(sock=%d, map=%d, flags=0x%x, speed=%dns, start=%#lx, stop=%#lx)\n",
sock, map, io->flags, io->speed, io->start, io->stop);
if (map >= MAX_IO_WIN)
return -EINVAL;
hs_set_voltages(&hs_sockets[i], 0, 0);
hs_sockets[i].socket.features |= SS_CAP_PCCARD | SS_CAP_STATIC_MAP; /* mappings are fixed in host memory */
+ hs_sockets[i].socket.resource_ops = &pccard_static_ops;
hs_sockets[i].socket.irq_mask = 0xffde;/*0xffff*/ /* IRQs mapped in s/w so can do any, really */
hs_sockets[i].socket.map_size = HD64465_PCC_WINDOW; /* 16MB fixed window size */
u_short type, flags;
struct pcmcia_socket socket;
unsigned int number;
- ioaddr_t ioaddr;
+ kio_addr_t ioaddr;
u_long mapaddr;
u_long base; /* PCC register base */
u_char cs_irq1, cs_irq2, intr;
static unsigned int pcc_get(u_short, unsigned int);
static void pcc_set(u_short, unsigned int , unsigned int );
-static spinlock_t pcc_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(pcc_lock);
#if !defined(CONFIG_PLAT_USRV)
static inline u_long pcc_port2addr(unsigned long port, int size) {
/*====================================================================*/
+#define IS_REGISTERED 0x2000
#define IS_ALIVE 0x8000
typedef struct pcc_t {
return 0;
}
-static void add_pcc_socket(ulong base, int irq, ulong mapaddr, ioaddr_t ioaddr)
+static void add_pcc_socket(ulong base, int irq, ulong mapaddr, kio_addr_t ioaddr)
{
pcc_socket_t *t = &socket[pcc_sockets];
u_char map;
debug(3, "m32r_cfc: SetIOMap(%d, %d, %#2.2x, %d ns, "
- "%#4.4x-%#4.4x)\n", sock, io->map, io->flags,
+ "%#lx-%#lx)\n", sock, io->map, io->flags,
io->speed, io->start, io->stop);
map = io->map;
pcc_socket_t *t = &socket[sock];
debug(3, "m32r_cfc: SetMemMap(%d, %d, %#2.2x, %d ns, "
- "%#5.5lx, %#5.5x)\n", sock, map, mem->flags,
+ "%#lx, %#x)\n", sock, map, mem->flags,
mem->speed, mem->static_start, mem->card_start);
/*
#else /* CONFIG_PLAT_USRV */
{
ulong base, mapaddr;
- ioaddr_t ioaddr;
+ kio_addr_t ioaddr;
for (i = 0 ; i < M32R_MAX_PCC ; i++) {
base = (ulong)PLD_CFRSTCR;
for (i = 0 ; i < pcc_sockets ; i++) {
socket[i].socket.dev.dev = &pcc_device.dev;
socket[i].socket.ops = &pcc_operations;
+ socket[i].socket.resource_ops = &pccard_static_ops;
socket[i].socket.owner = THIS_MODULE;
socket[i].number = i;
ret = pcmcia_register_socket(&socket[i].socket);
- if (ret && i--) {
- for (; i>= 0; i--)
- pcmcia_unregister_socket(&socket[i].socket);
- break;
- }
+ if (!ret)
+ socket[i].flags |= IS_REGISTERED;
+
#if 0 /* driver model ordering issue */
class_device_create_file(&socket[i].socket.dev,
&class_device_attr_info);
int i;
for (i = 0; i < pcc_sockets; i++)
- pcmcia_unregister_socket(&socket[i].socket);
+ if (socket[i].flags & IS_REGISTERED)
+ pcmcia_unregister_socket(&socket[i].socket);
platform_device_unregister(&pcc_device);
if (poll_interval != 0)
u_short type, flags;
struct pcmcia_socket socket;
unsigned int number;
- ioaddr_t ioaddr;
+ kio_addr_t ioaddr;
u_long mapaddr;
u_long base; /* PCC register base */
u_char cs_irq, intr;
static unsigned int pcc_get(u_short, unsigned int);
static void pcc_set(u_short, unsigned int , unsigned int );
-static spinlock_t pcc_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(pcc_lock);
void pcc_iorw(int sock, unsigned long port, void *buf, size_t size, size_t nmemb, int wr, int flag)
{
/*====================================================================*/
+#define IS_REGISTERED 0x2000
#define IS_ALIVE 0x8000
typedef struct pcc_t {
return 0;
}
-static void add_pcc_socket(ulong base, int irq, ulong mapaddr, ioaddr_t ioaddr)
+static void add_pcc_socket(ulong base, int irq, ulong mapaddr, kio_addr_t ioaddr)
{
pcc_socket_t *t = &socket[pcc_sockets];
u_char map;
debug(3, "m32r-pcc: SetIOMap(%d, %d, %#2.2x, %d ns, "
- "%#4.4x-%#4.4x)\n", sock, io->map, io->flags,
+ "%#lx-%#lx)\n", sock, io->map, io->flags,
io->speed, io->start, io->stop);
map = io->map;
#endif
debug(3, "m32r-pcc: SetMemMap(%d, %d, %#2.2x, %d ns, "
- "%#5.5lx, %#5.5x)\n", sock, map, mem->flags,
+ "%#lx, %#x)\n", sock, map, mem->flags,
mem->speed, mem->static_start, mem->card_start);
/*
for (i = 0 ; i < pcc_sockets ; i++) {
socket[i].socket.dev.dev = &pcc_device.dev;
socket[i].socket.ops = &pcc_operations;
+ socket[i].socket.resource_ops = &pccard_static_ops;
socket[i].socket.owner = THIS_MODULE;
socket[i].number = i;
ret = pcmcia_register_socket(&socket[i].socket);
- if (ret && i--) {
- for (; i>= 0; i--)
- pcmcia_unregister_socket(&socket[i].socket);
- break;
- }
+ if (!ret)
+ socket[i].flags |= IS_REGISTERED;
+
#if 0 /* driver model ordering issue */
class_device_create_file(&socket[i].socket.dev,
&class_device_attr_info);
int i;
for (i = 0; i < pcc_sockets; i++)
- pcmcia_unregister_socket(&socket[i].socket);
+ if (socket[i].flags & IS_REGISTERED)
+ pcmcia_unregister_socket(&socket[i].socket);
platform_device_unregister(&pcc_device);
if (poll_interval != 0)
}
EXPORT_SYMBOL(pcmcia_access_configuration_register);
-#ifdef CONFIG_PCMCIA_OBSOLETE
-
-int pcmcia_get_first_window(window_handle_t *win, win_req_t *req)
-{
- if ((win == NULL) || ((*win)->magic != WINDOW_MAGIC))
- return CS_BAD_HANDLE;
-
- return pcmcia_get_window(((client_handle_t)*win)->Socket, win, 0, req);
-}
-EXPORT_SYMBOL(pcmcia_get_first_window);
-
-int pcmcia_get_next_window(window_handle_t *win, win_req_t *req)
-{
- if ((win == NULL) || ((*win)->magic != WINDOW_MAGIC))
- return CS_BAD_HANDLE;
- return pcmcia_get_window((*win)->sock, win, (*win)->index+1, req);
-}
-EXPORT_SYMBOL(pcmcia_get_next_window);
-
-#endif
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Driver for the Cirrus PD6729 PCI-PCMCIA bridge");
-MODULE_AUTHOR("Jun Komuro <komurojun@mbn.nifty.com>");
+MODULE_AUTHOR("Jun Komuro <komurojun-mbn@nifty.com>");
#define MAX_SOCKETS 2
*/
#define to_cycles(ns) ((ns)/120)
-static spinlock_t port_lock = SPIN_LOCK_UNLOCKED;
+#ifndef NO_IRQ
+#define NO_IRQ ((unsigned int)(0))
+#endif
+
+/*
+ * PARAMETERS
+ * irq_mode=n
+ * Specifies the interrupt delivery mode. The default (1) is to use PCI
+ * interrupts; a value of 0 selects ISA interrupts. This must be set for
+ * correct operation of PCI card readers.
+ *
+ * irq_list=i,j,...
+ * This list limits the set of interrupts that can be used by PCMCIA
+ * cards.
+ * The default list is 3,4,5,7,9,10,11.
+ *      (the irq_list parameter is not used if irq_mode = 1)
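+ *
+ *      Example (illustrative values only):
+ *         modprobe pd6729 irq_mode=0 irq_list=5,9,11
+ *      falls back to ISA interrupt delivery and limits card interrupts
+ *      to IRQs 5, 9 and 11.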
+ */
+
+static int irq_mode = 1; /* 0 = ISA interrupt, 1 = PCI interrupt */
+static int irq_list[16];
+static int irq_list_count = 0;
+
+module_param(irq_mode, int, 0444);
+module_param_array(irq_list, int, &irq_list_count, 0444);
+MODULE_PARM_DESC(irq_mode,
+ "interrupt delivery mode. 0 = ISA, 1 = PCI. default is 1");
+MODULE_PARM_DESC(irq_list, "interrupts that can be used by PCMCIA cards");
+
+static DEFINE_SPINLOCK(port_lock);
/* basic value read/write functions */
-static unsigned char indirect_read(struct pd6729_socket *socket, unsigned short reg)
+static unsigned char indirect_read(struct pd6729_socket *socket,
+ unsigned short reg)
{
unsigned long port;
unsigned char val;
return val;
}
-static unsigned short indirect_read16(struct pd6729_socket *socket, unsigned short reg)
+static unsigned short indirect_read16(struct pd6729_socket *socket,
+ unsigned short reg)
{
unsigned long port;
unsigned short tmp;
return tmp;
}
-static void indirect_write(struct pd6729_socket *socket, unsigned short reg, unsigned char value)
+static void indirect_write(struct pd6729_socket *socket, unsigned short reg,
+ unsigned char value)
{
unsigned long port;
unsigned long flags;
spin_unlock_irqrestore(&port_lock, flags);
}
-static void indirect_setbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+static void indirect_setbit(struct pd6729_socket *socket, unsigned short reg,
+ unsigned char mask)
{
unsigned long port;
unsigned char val;
spin_unlock_irqrestore(&port_lock, flags);
}
-static void indirect_resetbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+static void indirect_resetbit(struct pd6729_socket *socket, unsigned short reg,
+ unsigned char mask)
{
unsigned long port;
unsigned char val;
spin_unlock_irqrestore(&port_lock, flags);
}
-static void indirect_write16(struct pd6729_socket *socket, unsigned short reg, unsigned short value)
+static void indirect_write16(struct pd6729_socket *socket, unsigned short reg,
+ unsigned short value)
{
unsigned long port;
unsigned char val;
while (1) {
loopcount++;
if (loopcount > 20) {
- printk(KERN_ERR "pd6729: infinite eventloop in interrupt\n");
+ printk(KERN_ERR "pd6729: infinite eventloop "
+ "in interrupt\n");
break;
}
dprintk("Card detected in socket %i!\n", i);
}
- if (indirect_read(&socket[i], I365_INTCTL) & I365_PC_IOCARD) {
+ if (indirect_read(&socket[i], I365_INTCTL)
+ & I365_PC_IOCARD) {
/* For IO/CARDS, bit 0 means "read the card" */
- events |= (csc & I365_CSC_STSCHG) ? SS_STSCHG : 0;
+ events |= (csc & I365_CSC_STSCHG)
+ ? SS_STSCHG : 0;
} else {
/* Check for battery/ready events */
- events |= (csc & I365_CSC_BVD1) ? SS_BATDEAD : 0;
- events |= (csc & I365_CSC_BVD2) ? SS_BATWARN : 0;
- events |= (csc & I365_CSC_READY) ? SS_READY : 0;
+ events |= (csc & I365_CSC_BVD1)
+ ? SS_BATDEAD : 0;
+ events |= (csc & I365_CSC_BVD2)
+ ? SS_BATWARN : 0;
+ events |= (csc & I365_CSC_READY)
+ ? SS_READY : 0;
}
if (events) {
/* socket functions */
-static void set_bridge_state(struct pd6729_socket *socket)
+static void pd6729_interrupt_wrapper(unsigned long data)
{
- indirect_write(socket, I365_GBLCTL, 0x00);
- indirect_write(socket, I365_GENCTL, 0x00);
+ struct pd6729_socket *socket = (struct pd6729_socket *) data;
- indirect_setbit(socket, I365_INTCTL, 0x08);
+ pd6729_interrupt(0, (void *)socket, NULL);
+ mod_timer(&socket->poll_timer, jiffies + HZ);
}
static int pd6729_get_status(struct pcmcia_socket *sock, u_int *value)
{
- struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ struct pd6729_socket *socket
+ = container_of(sock, struct pd6729_socket, socket);
unsigned int status;
unsigned int data;
struct pd6729_socket *t;
static int pd6729_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
{
- struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ struct pd6729_socket *socket
+ = container_of(sock, struct pd6729_socket, socket);
unsigned char reg, vcc, vpp;
state->flags = 0;
state->io_irq = 0;
state->csc_mask = 0;
- /*
- * First the power status of the socket
- * PCTRL - Power Control Register
- */
+ /* First the power status of the socket */
reg = indirect_read(socket, I365_POWER);
if (reg & I365_PWR_AUTO)
state->Vpp = 120;
}
- /*
- * Now the IO card, RESET flags and IO interrupt
- * IGENC, Interrupt and General Control
- */
+ /* Now the IO card, RESET flags and IO interrupt */
reg = indirect_read(socket, I365_INTCTL);
if ((reg & I365_PC_RESET) == 0)
state->flags |= SS_IOCARD; /* This is an IO card */
/* Set the IRQ number */
- state->io_irq = socket->socket.pci_irq;
+ state->io_irq = socket->card_irq;
- /*
- * Card status change
- * CSCICR, Card Status Change Interrupt Configuration
- */
+ /* Card status change */
reg = indirect_read(socket, I365_CSCINT);
if (reg & I365_CSC_DETECT)
static int pd6729_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
{
- struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
- unsigned char reg;
+ struct pd6729_socket *socket
+ = container_of(sock, struct pd6729_socket, socket);
+ unsigned char reg, data;
/* First, set the global controller options */
-
- set_bridge_state(socket);
+ indirect_write(socket, I365_GBLCTL, 0x00);
+ indirect_write(socket, I365_GENCTL, 0x00);
/* Values for the IGENC register */
+ socket->card_irq = state->io_irq;
reg = 0;
/* The reset bit has "inverse" logic */
if (!(state->flags & SS_RESET))
- reg = reg | I365_PC_RESET;
+ reg |= I365_PC_RESET;
if (state->flags & SS_IOCARD)
- reg = reg | I365_PC_IOCARD;
+ reg |= I365_PC_IOCARD;
/* IGENC, Interrupt and General Control Register */
indirect_write(socket, I365_INTCTL, reg);
indirect_resetbit(socket, PD67_MISC_CTL_1, PD67_MC1_VCC_3V);
break;
default:
- dprintk("pd6729: pd6729_set_socket called with invalid VCC power value: %i\n",
+ dprintk("pd6729: pd6729_set_socket called with "
+ "invalid VCC power value: %i\n",
state->Vcc);
return -EINVAL;
}
if (reg != indirect_read(socket, I365_POWER))
indirect_write(socket, I365_POWER, reg);
- /* Now, specifiy that all interrupts are to be done as PCI interrupts */
+ if (irq_mode == 1) {
+ /* all interrupts are to be done as PCI interrupts */
+ data = PD67_EC1_INV_MGMT_IRQ | PD67_EC1_INV_CARD_IRQ;
+ } else
+ data = 0;
+
indirect_write(socket, PD67_EXT_INDEX, PD67_EXT_CTL_1);
- indirect_write(socket, PD67_EXT_DATA, PD67_EC1_INV_MGMT_IRQ | PD67_EC1_INV_CARD_IRQ);
+ indirect_write(socket, PD67_EXT_DATA, data);
/* Enable specific interrupt events */
if (state->csc_mask & SS_READY)
reg |= I365_CSC_READY;
}
- reg |= 0x30; /* management IRQ: PCI INTA# = "irq 3" */
+ if (irq_mode == 1)
+ reg |= 0x30; /* management IRQ: PCI INTA# = "irq 3" */
indirect_write(socket, I365_CSCINT, reg);
reg = indirect_read(socket, I365_INTCTL);
- reg |= 0x03; /* card IRQ: PCI INTA# = "irq 3" */
+ if (irq_mode == 1)
+ reg |= 0x03; /* card IRQ: PCI INTA# = "irq 3" */
+ else
+ reg |= socket->card_irq;
indirect_write(socket, I365_INTCTL, reg);
/* now clear the (probably bogus) pending stuff by doing a dummy read */
return 0;
}
-static int pd6729_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *io)
+static int pd6729_set_io_map(struct pcmcia_socket *sock,
+ struct pccard_io_map *io)
{
- struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ struct pd6729_socket *socket
+ = container_of(sock, struct pd6729_socket, socket);
unsigned char map, ioctl;
map = io->map;
if (indirect_read(socket, I365_ADDRWIN) & I365_ENA_IO(map))
indirect_resetbit(socket, I365_ADDRWIN, I365_ENA_IO(map));
-/* dprintk("set_io_map: Setting range to %x - %x\n", io->start, io->stop);*/
+ /* dprintk("set_io_map: Setting range to %x - %x\n",
+ io->start, io->stop);*/
/* write the new values */
indirect_write16(socket, I365_IO(map)+I365_W_START, io->start);
return 0;
}
-static int pd6729_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *mem)
+static int pd6729_set_mem_map(struct pcmcia_socket *sock,
+ struct pccard_mem_map *mem)
{
- struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ struct pd6729_socket *socket
+ = container_of(sock, struct pd6729_socket, socket);
unsigned short base, i;
unsigned char map;
if ((mem->res->start > mem->res->end) || (mem->speed > 1000)) {
printk("pd6729_set_mem_map: invalid address / speed");
- /* printk("invalid mem map for socket %i : %lx to %lx with a start of %x\n",
- sock, mem->res->start, mem->res->end, mem->card_start); */
return -EINVAL;
}
if (mem->flags & MAP_WRPROT)
i |= I365_MEM_WRPROT;
if (mem->flags & MAP_ATTRIB) {
-/* dprintk("requesting attribute memory for socket %i\n",
+ /* dprintk("requesting attribute memory for socket %i\n",
socket->number);*/
i |= I365_MEM_REG;
} else {
-/* dprintk("requesting normal memory for socket %i\n",
+ /* dprintk("requesting normal memory for socket %i\n",
socket->number);*/
}
indirect_write16(socket, base + I365_W_OFF, i);
.set_mem_map = pd6729_set_mem_map,
};
-static int __devinit pd6729_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+static irqreturn_t pd6729_test(int irq, void *dev, struct pt_regs *regs)
+{
+ dprintk("-> hit on irq %d\n", irq);
+ return IRQ_HANDLED;
+}
+
+static int pd6729_check_irq(int irq, int flags)
+{
+ if (request_irq(irq, pd6729_test, flags, "x", pd6729_test) != 0)
+ return -1;
+ free_irq(irq, pd6729_test);
+ return 0;
+}
+
+static u_int __init pd6729_isa_scan(void)
+{
+ u_int mask0, mask = 0;
+ int i;
+
+ if (irq_mode == 1) {
+ printk(KERN_INFO "pd6729: PCI card interrupts, "
+ "PCI status changes\n");
+ return 0;
+ }
+
+ if (irq_list_count == 0)
+ mask0 = 0xffff;
+ else
+ for (i = mask0 = 0; i < irq_list_count; i++)
+ mask0 |= (1<<irq_list[i]);
+
+ mask0 &= PD67_MASK;
+
+ /* just find interrupts that aren't in use */
+ for (i = 0; i < 16; i++)
+ if ((mask0 & (1 << i)) && (pd6729_check_irq(i, 0) == 0))
+ mask |= (1 << i);
+
+ printk(KERN_INFO "pd6729: ISA irqs = ");
+ for (i = 0; i < 16; i++)
+ if (mask & (1<<i))
+ printk("%s%d", ((mask & ((1<<i)-1)) ? "," : ""), i);
+
+ if (mask == 0) printk("none!");
+
+ printk(" polling status changes.\n");
+
+ return mask;
+}
+
+static int __devinit pd6729_pci_probe(struct pci_dev *dev,
+ const struct pci_device_id *id)
{
int i, j, ret;
+ u_int mask;
char configbyte;
struct pd6729_socket *socket;
- socket = kmalloc(sizeof(struct pd6729_socket) * MAX_SOCKETS, GFP_KERNEL);
+ socket = kmalloc(sizeof(struct pd6729_socket) * MAX_SOCKETS,
+ GFP_KERNEL);
if (!socket)
return -ENOMEM;
if ((ret = pci_enable_device(dev)))
goto err_out_free_mem;
- printk(KERN_INFO "pd6729: Cirrus PD6729 PCI to PCMCIA Bridge at 0x%lx on irq %d\n",
- pci_resource_start(dev, 0), dev->irq);
- printk(KERN_INFO "pd6729: configured as a %d socket device.\n", MAX_SOCKETS);
+ printk(KERN_INFO "pd6729: Cirrus PD6729 PCI to PCMCIA Bridge "
+ "at 0x%lx on irq %d\n", pci_resource_start(dev, 0), dev->irq);
/*
- * Since we have no memory BARs some firmware we may not
- * have had PCI_COMMAND_MEM enabled, yet the device needs
- * it.
+ * Since we have no memory BARs some firmware may not
+ * have had PCI_COMMAND_MEMORY enabled, yet the device needs it.
*/
pci_read_config_byte(dev, PCI_COMMAND, &configbyte);
if (!(configbyte & PCI_COMMAND_MEMORY)) {
goto err_out_disable;
}
+ if (dev->irq == NO_IRQ)
+ irq_mode = 0; /* fall back to ISA interrupt mode */
+
+ mask = pd6729_isa_scan();
+ if (irq_mode == 0 && mask == 0) {
+ printk(KERN_INFO "pd6729: no ISA interrupt is available.\n");
+ goto err_out_free_res;
+ }
+
for (i = 0; i < MAX_SOCKETS; i++) {
socket[i].io_base = pci_resource_start(dev, 0);
socket[i].socket.features |= SS_CAP_PCCARD;
socket[i].socket.map_size = 0x1000;
- socket[i].socket.irq_mask = 0;
+ socket[i].socket.irq_mask = mask;
socket[i].socket.pci_irq = dev->irq;
socket[i].socket.owner = THIS_MODULE;
socket[i].number = i;
socket[i].socket.ops = &pd6729_operations;
+ socket[i].socket.resource_ops = &pccard_nonstatic_ops;
socket[i].socket.dev.dev = &dev->dev;
socket[i].socket.driver_data = &socket[i];
}
pci_set_drvdata(dev, socket);
-
- /* Register the interrupt handler */
- if ((ret = request_irq(dev->irq, pd6729_interrupt, SA_SHIRQ, "pd6729", socket))) {
- printk(KERN_ERR "pd6729: Failed to register irq %d, aborting\n", dev->irq);
- goto err_out_free_res;
+ if (irq_mode == 1) {
+ /* Register the interrupt handler */
+ if ((ret = request_irq(dev->irq, pd6729_interrupt, SA_SHIRQ,
+ "pd6729", socket))) {
+ printk(KERN_ERR "pd6729: Failed to register irq %d, "
+ "aborting\n", dev->irq);
+ goto err_out_free_res;
+ }
+ } else {
+ /* poll Card status change */
+ init_timer(&socket->poll_timer);
+ socket->poll_timer.function = pd6729_interrupt_wrapper;
+ socket->poll_timer.data = (unsigned long)socket;
+ socket->poll_timer.expires = jiffies + HZ;
+ add_timer(&socket->poll_timer);
}
for (i = 0; i < MAX_SOCKETS; i++) {
ret = pcmcia_register_socket(&socket[i].socket);
if (ret) {
- printk(KERN_INFO "pd6729: pcmcia_register_socket failed.\n");
+ printk(KERN_INFO "pd6729: pcmcia_register_socket "
+ "failed.\n");
for (j = 0; j < i ; j++)
pcmcia_unregister_socket(&socket[j].socket);
goto err_out_free_res2;
return 0;
err_out_free_res2:
- free_irq(dev->irq, socket);
+ if (irq_mode == 1)
+ free_irq(dev->irq, socket);
+ else
+ del_timer_sync(&socket->poll_timer);
err_out_free_res:
pci_release_regions(dev);
err_out_disable:
int i;
struct pd6729_socket *socket = pci_get_drvdata(dev);
- for (i = 0; i < MAX_SOCKETS; i++)
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ /* Turn off all interrupt sources */
+ indirect_write(&socket[i], I365_CSCINT, 0);
+ indirect_write(&socket[i], I365_INTCTL, 0);
+
pcmcia_unregister_socket(&socket[i].socket);
+ }
- free_irq(dev->irq, socket);
+ if (irq_mode == 1)
+ free_irq(dev->irq, socket);
+ else
+ del_timer_sync(&socket->poll_timer);
pci_release_regions(dev);
pci_disable_device(dev);
--- /dev/null
+/*
+ * Sharp SL-C7xx Series PCMCIA routines
+ *
+ * Copyright (c) 2004-2005 Richard Purdie
+ *
+ * Based on Sharp's 2.4 kernel patches and pxa2xx_mainstone.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+
+#include <asm/hardware/scoop.h>
+#include <asm/arch/corgi.h>
+#include <asm/arch/pxa-regs.h>
+
+#include "soc_common.h"
+
+#define NO_KEEP_VS 0x0001
+
+static unsigned char keep_vs;
+static unsigned char keep_rd;
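+
+/*
+ * keep_vs caches the VS1/VS2 voltage-sense bits (SCOOP_CSR bits 6-7) read
+ * at power-on so that later status reads keep reporting the same card
+ * voltage; NO_KEEP_VS marks the cache as invalid (it is set on card eject
+ * and reset).
+ */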
+
+static struct pcmcia_irqs irqs[] = {
+ { 0, CORGI_IRQ_GPIO_CF_CD, "PCMCIA0 CD"},
+};
+
+static void sharpsl_pcmcia_init_reset(void)
+{
+ reset_scoop();
+ keep_vs = NO_KEEP_VS;
+ keep_rd = 0;
+}
+
+static int sharpsl_pcmcia_hw_init(struct soc_pcmcia_socket *skt)
+{
+ int ret;
+
+ /*
+	 * Set up the default state of the GPIO outputs
+ * before we enable them as outputs.
+ */
+ GPSR(GPIO48_nPOE) =
+ GPIO_bit(GPIO48_nPOE) |
+ GPIO_bit(GPIO49_nPWE) |
+ GPIO_bit(GPIO50_nPIOR) |
+ GPIO_bit(GPIO51_nPIOW) |
+ GPIO_bit(GPIO52_nPCE_1) |
+ GPIO_bit(GPIO53_nPCE_2);
+
+ pxa_gpio_mode(GPIO48_nPOE_MD);
+ pxa_gpio_mode(GPIO49_nPWE_MD);
+ pxa_gpio_mode(GPIO50_nPIOR_MD);
+ pxa_gpio_mode(GPIO51_nPIOW_MD);
+ pxa_gpio_mode(GPIO52_nPCE_1_MD);
+ pxa_gpio_mode(GPIO53_nPCE_2_MD);
+ pxa_gpio_mode(GPIO54_pSKTSEL_MD);
+ pxa_gpio_mode(GPIO55_nPREG_MD);
+ pxa_gpio_mode(GPIO56_nPWAIT_MD);
+ pxa_gpio_mode(GPIO57_nIOIS16_MD);
+
+ /* Register interrupts */
+ ret = soc_pcmcia_request_irqs(skt, irqs, ARRAY_SIZE(irqs));
+
+ if (ret) {
+ printk(KERN_ERR "Request for Compact Flash IRQ failed\n");
+ return ret;
+ }
+
+ /* Enable interrupt */
+ write_scoop_reg(SCOOP_IMR, 0x00C0);
+ write_scoop_reg(SCOOP_MCR, 0x0101);
+ keep_vs = NO_KEEP_VS;
+
+ skt->irq = CORGI_IRQ_GPIO_CF_IRQ;
+
+ return 0;
+}
+
+static void sharpsl_pcmcia_hw_shutdown(struct soc_pcmcia_socket *skt)
+{
+ soc_pcmcia_free_irqs(skt, irqs, ARRAY_SIZE(irqs));
+
+ /* CF_BUS_OFF */
+ sharpsl_pcmcia_init_reset();
+}
+
+
+static void sharpsl_pcmcia_socket_state(struct soc_pcmcia_socket *skt,
+ struct pcmcia_state *state)
+{
+ unsigned short cpr, csr;
+
+ cpr = read_scoop_reg(SCOOP_CPR);
+
+ write_scoop_reg(SCOOP_IRM, 0x00FF);
+ write_scoop_reg(SCOOP_ISR, 0x0000);
+ write_scoop_reg(SCOOP_IRM, 0x0000);
+ csr = read_scoop_reg(SCOOP_CSR);
+ if (csr & 0x0004) {
+ /* card eject */
+ write_scoop_reg(SCOOP_CDR, 0x0000);
+ keep_vs = NO_KEEP_VS;
+ }
+ else if (!(keep_vs & NO_KEEP_VS)) {
+ /* keep vs1,vs2 */
+ write_scoop_reg(SCOOP_CDR, 0x0000);
+ csr |= keep_vs;
+ }
+ else if (cpr & 0x0003) {
+ /* power on */
+ write_scoop_reg(SCOOP_CDR, 0x0000);
+ keep_vs = (csr & 0x00C0);
+ }
+ else {
+ /* card detect */
+ write_scoop_reg(SCOOP_CDR, 0x0002);
+ }
+
+ state->detect = (csr & 0x0004) ? 0 : 1;
+ state->ready = (csr & 0x0002) ? 1 : 0;
+ state->bvd1 = (csr & 0x0010) ? 1 : 0;
+ state->bvd2 = (csr & 0x0020) ? 1 : 0;
+ state->wrprot = (csr & 0x0008) ? 1 : 0;
+ state->vs_3v = (csr & 0x0040) ? 0 : 1;
+ state->vs_Xv = (csr & 0x0080) ? 0 : 1;
+
+ if ((cpr & 0x0080) && ((cpr & 0x8040) != 0x8040)) {
+ printk(KERN_ERR "sharpsl_pcmcia_socket_state(): CPR=%04X, Low voltage!\n", cpr);
+ }
+
+}
+
+
+static int sharpsl_pcmcia_configure_socket(struct soc_pcmcia_socket *skt,
+ const socket_state_t *state)
+{
+ unsigned long flags;
+
+ unsigned short cpr, ncpr, ccr, nccr, mcr, nmcr, imr, nimr;
+
+ switch (state->Vcc) {
+ case 0: break;
+ case 33: break;
+ case 50: break;
+ default:
+ printk(KERN_ERR "sharpsl_pcmcia_configure_socket(): bad Vcc %u\n", state->Vcc);
+ return -1;
+ }
+
+ if ((state->Vpp!=state->Vcc) && (state->Vpp!=0)) {
+ printk(KERN_ERR "CF slot cannot support Vpp %u\n", state->Vpp);
+ return -1;
+ }
+
+ local_irq_save(flags);
+
+ nmcr = (mcr = read_scoop_reg(SCOOP_MCR)) & ~0x0010;
+ ncpr = (cpr = read_scoop_reg(SCOOP_CPR)) & ~0x0083;
+ nccr = (ccr = read_scoop_reg(SCOOP_CCR)) & ~0x0080;
+ nimr = (imr = read_scoop_reg(SCOOP_IMR)) & ~0x003E;
+
+ ncpr |= (state->Vcc == 33) ? 0x0001 :
+ (state->Vcc == 50) ? 0x0002 : 0;
+ nmcr |= (state->flags&SS_IOCARD) ? 0x0010 : 0;
+ ncpr |= (state->flags&SS_OUTPUT_ENA) ? 0x0080 : 0;
+ nccr |= (state->flags&SS_RESET)? 0x0080: 0;
+ nimr |= ((skt->status&SS_DETECT) ? 0x0004 : 0)|
+ ((skt->status&SS_READY) ? 0x0002 : 0)|
+ ((skt->status&SS_BATDEAD)? 0x0010 : 0)|
+ ((skt->status&SS_BATWARN)? 0x0020 : 0)|
+ ((skt->status&SS_STSCHG) ? 0x0010 : 0)|
+ ((skt->status&SS_WRPROT) ? 0x0008 : 0);
+
+ if (!(ncpr & 0x0003)) {
+ keep_rd = 0;
+ } else if (!keep_rd) {
+ if (nccr & 0x0080)
+ keep_rd = 1;
+ else
+ nccr |= 0x0080;
+ }
+
+ if (mcr != nmcr)
+ write_scoop_reg(SCOOP_MCR, nmcr);
+ if (cpr != ncpr)
+ write_scoop_reg(SCOOP_CPR, ncpr);
+ if (ccr != nccr)
+ write_scoop_reg(SCOOP_CCR, nccr);
+ if (imr != nimr)
+ write_scoop_reg(SCOOP_IMR, nimr);
+
+ local_irq_restore(flags);
+
+ return 0;
+}
+
+static void sharpsl_pcmcia_socket_init(struct soc_pcmcia_socket *skt)
+{
+}
+
+static void sharpsl_pcmcia_socket_suspend(struct soc_pcmcia_socket *skt)
+{
+}
+
+static struct pcmcia_low_level sharpsl_pcmcia_ops = {
+ .owner = THIS_MODULE,
+ .hw_init = sharpsl_pcmcia_hw_init,
+ .hw_shutdown = sharpsl_pcmcia_hw_shutdown,
+ .socket_state = sharpsl_pcmcia_socket_state,
+ .configure_socket = sharpsl_pcmcia_configure_socket,
+ .socket_init = sharpsl_pcmcia_socket_init,
+ .socket_suspend = sharpsl_pcmcia_socket_suspend,
+ .first = 0,
+ .nr = 1,
+};
+
+static struct platform_device *sharpsl_pcmcia_device;
+
+static int __init sharpsl_pcmcia_init(void)
+{
+ int ret;
+
+ sharpsl_pcmcia_device = kmalloc(sizeof(*sharpsl_pcmcia_device), GFP_KERNEL);
+ if (!sharpsl_pcmcia_device)
+ return -ENOMEM;
+ memset(sharpsl_pcmcia_device, 0, sizeof(*sharpsl_pcmcia_device));
+ sharpsl_pcmcia_device->name = "pxa2xx-pcmcia";
+ sharpsl_pcmcia_device->dev.platform_data = &sharpsl_pcmcia_ops;
+
+ ret = platform_device_register(sharpsl_pcmcia_device);
+ if (ret)
+ kfree(sharpsl_pcmcia_device);
+
+ return ret;
+}
+
+static void __exit sharpsl_pcmcia_exit(void)
+{
+ /*
+ * This call is supposed to free our sharpsl_pcmcia_device.
+	 * Unfortunately platform_device doesn't have a free method, and
+	 * we can't assume it's free of any references at this point, so we
+ * can't free it either.
+ */
+ platform_device_unregister(sharpsl_pcmcia_device);
+}
+
+module_init(sharpsl_pcmcia_init);
+module_exit(sharpsl_pcmcia_exit);
+
+MODULE_DESCRIPTION("Sharp SL Series PCMCIA Support");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * rsrc_nonstatic.c -- Resource management routines for !SS_CAP_STATIC_MAP sockets
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * The initial developer of the original code is David A. Hinds
+ * <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
+ * are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
+ *
+ * (C) 1999 David A. Hinds
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/ioport.h>
+#include <linux/timer.h>
+#include <linux/pci.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/bulkmem.h>
+#include <pcmcia/cistpl.h>
+#include "cs_internal.h"
+
+MODULE_AUTHOR("David A. Hinds, Dominik Brodowski");
+MODULE_LICENSE("GPL");
+
+/* Parameters that can be set with 'insmod' */
+
+#define INT_MODULE_PARM(n, v) static int n = v; module_param(n, int, 0444)
+
+INT_MODULE_PARM(probe_mem, 1); /* memory probe? */
+#ifdef CONFIG_PCMCIA_PROBE
+INT_MODULE_PARM(probe_io, 1); /* IO port probe? */
+INT_MODULE_PARM(mem_limit, 0x10000);
+#endif
+
+/* for io_db and mem_db */
+struct resource_map {
+ u_long base, num;
+ struct resource_map *next;
+};
+
+struct socket_data {
+ struct resource_map mem_db;
+ struct resource_map io_db;
+ unsigned int rsrc_mem_probe;
+};
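+
+/*
+ * mem_db and io_db are the heads of circular, address-ordered singly
+ * linked lists of available ranges; an empty list points back at its
+ * head (see nonstatic_init() below).
+ */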
+
+static DECLARE_MUTEX(rsrc_sem);
+#define MEM_PROBE_LOW (1 << 0)
+#define MEM_PROBE_HIGH (1 << 1)
+
+
+/*======================================================================
+
+ Linux resource management extensions
+
+======================================================================*/
+
+static struct resource *
+make_resource(unsigned long b, unsigned long n, int flags, char *name)
+{
+ struct resource *res = kmalloc(sizeof(*res), GFP_KERNEL);
+
+ if (res) {
+ memset(res, 0, sizeof(*res));
+ res->name = name;
+ res->start = b;
+ res->end = b + n - 1;
+ res->flags = flags;
+ }
+ return res;
+}
+
+static struct resource *
+claim_region(struct pcmcia_socket *s, unsigned long base, unsigned long size,
+ int type, char *name)
+{
+ struct resource *res, *parent;
+
+ parent = type & IORESOURCE_MEM ? &iomem_resource : &ioport_resource;
+ res = make_resource(base, size, type | IORESOURCE_BUSY, name);
+
+ if (res) {
+#ifdef CONFIG_PCI
+ if (s && s->cb_dev)
+ parent = pci_find_parent_resource(s->cb_dev, res);
+#endif
+ if (!parent || request_resource(parent, res)) {
+ kfree(res);
+ res = NULL;
+ }
+ }
+ return res;
+}
+
+static void free_region(struct resource *res)
+{
+ if (res) {
+ release_resource(res);
+ kfree(res);
+ }
+}
+
+/*======================================================================
+
+ These manage the internal databases of available resources.
+
+======================================================================*/
+
+static int add_interval(struct resource_map *map, u_long base, u_long num)
+{
+ struct resource_map *p, *q;
+
+ for (p = map; ; p = p->next) {
+ if ((p != map) && (p->base+p->num-1 >= base))
+ return -1;
+ if ((p->next == map) || (p->next->base > base+num-1))
+ break;
+ }
+ q = kmalloc(sizeof(struct resource_map), GFP_KERNEL);
+ if (!q) return CS_OUT_OF_RESOURCE;
+ q->base = base; q->num = num;
+ q->next = p->next; p->next = q;
+ return CS_SUCCESS;
+}
+
+/*====================================================================*/
+
+static int sub_interval(struct resource_map *map, u_long base, u_long num)
+{
+ struct resource_map *p, *q;
+
+ for (p = map; ; p = q) {
+ q = p->next;
+ if (q == map)
+ break;
+ if ((q->base+q->num > base) && (base+num > q->base)) {
+ if (q->base >= base) {
+ if (q->base+q->num <= base+num) {
+ /* Delete whole block */
+ p->next = q->next;
+ kfree(q);
+ /* don't advance the pointer yet */
+ q = p;
+ } else {
+ /* Cut off bit from the front */
+ q->num = q->base + q->num - base - num;
+ q->base = base + num;
+ }
+ } else if (q->base+q->num <= base+num) {
+ /* Cut off bit from the end */
+ q->num = base - q->base;
+ } else {
+ /* Split the block into two pieces */
+ p = kmalloc(sizeof(struct resource_map), GFP_KERNEL);
+ if (!p) return CS_OUT_OF_RESOURCE;
+ p->base = base+num;
+ p->num = q->base+q->num - p->base;
+ q->num = base - q->base;
+ p->next = q->next ; q->next = p;
+ }
+ }
+ }
+ return CS_SUCCESS;
+}
+
+/*======================================================================
+
+ These routines examine a region of IO or memory addresses to
+ determine what ranges might be genuinely available.
+
+======================================================================*/
+
+#ifdef CONFIG_PCMCIA_PROBE
+static void do_io_probe(struct pcmcia_socket *s, kio_addr_t base, kio_addr_t num)
+{
+ struct resource *res;
+ struct socket_data *s_data = s->resource_data;
+ kio_addr_t i, j, bad;
+ int any;
+ u_char *b, hole, most;
+
+ printk(KERN_INFO "cs: IO port probe %#lx-%#lx:",
+ base, base+num-1);
+
+ /* First, what does a floating port look like? */
+ b = kmalloc(256, GFP_KERNEL);
+ if (!b) {
+		printk(KERN_ERR "do_io_probe: unable to kmalloc 256 bytes\n");
+ return;
+ }
+ memset(b, 0, 256);
+ for (i = base, most = 0; i < base+num; i += 8) {
+ res = claim_region(NULL, i, 8, IORESOURCE_IO, "PCMCIA IO probe");
+ if (!res)
+ continue;
+ hole = inb(i);
+ for (j = 1; j < 8; j++)
+ if (inb(i+j) != hole) break;
+ free_region(res);
+ if ((j == 8) && (++b[hole] > b[most]))
+ most = hole;
+ if (b[most] == 127) break;
+ }
+ kfree(b);
+
+ bad = any = 0;
+ for (i = base; i < base+num; i += 8) {
+ res = claim_region(NULL, i, 8, IORESOURCE_IO, "PCMCIA IO probe");
+ if (!res)
+ continue;
+ for (j = 0; j < 8; j++)
+ if (inb(i+j) != most) break;
+ free_region(res);
+ if (j < 8) {
+ if (!any)
+ printk(" excluding");
+ if (!bad)
+ bad = any = i;
+ } else {
+ if (bad) {
+ sub_interval(&s_data->io_db, bad, i-bad);
+ printk(" %#lx-%#lx", bad, i-1);
+ bad = 0;
+ }
+ }
+ }
+ if (bad) {
+ if ((num > 16) && (bad == base) && (i == base+num)) {
+ printk(" nothing: probe failed.\n");
+ return;
+ } else {
+ sub_interval(&s_data->io_db, bad, i-bad);
+ printk(" %#lx-%#lx", bad, i-1);
+ }
+ }
+
+ printk(any ? "\n" : " clean.\n");
+}
+#endif
+
+/*======================================================================
+
+ This is tricky... when we set up CIS memory, we try to validate
+ the memory window space allocations.
+
+======================================================================*/
+
+/* Validation function for cards with a valid CIS */
+static int readable(struct pcmcia_socket *s, struct resource *res, cisinfo_t *info)
+{
+ int ret = -1;
+
+ s->cis_mem.res = res;
+ s->cis_virt = ioremap(res->start, s->map_size);
+ if (s->cis_virt) {
+ ret = pccard_validate_cis(s, BIND_FN_ALL, info);
+ /* invalidate mapping and CIS cache */
+ iounmap(s->cis_virt);
+ s->cis_virt = NULL;
+ destroy_cis_cache(s);
+ }
+ s->cis_mem.res = NULL;
+ if ((ret != 0) || (info->Chains == 0))
+ return 0;
+ return 1;
+}
+
+/* Validation function for simple memory cards */
+static int checksum(struct pcmcia_socket *s, struct resource *res)
+{
+ pccard_mem_map map;
+ int i, a = 0, b = -1, d;
+ void __iomem *virt;
+
+ virt = ioremap(res->start, s->map_size);
+ if (virt) {
+ map.map = 0;
+ map.flags = MAP_ACTIVE;
+ map.speed = 0;
+ map.res = res;
+ map.card_start = 0;
+ s->ops->set_mem_map(s, &map);
+
+ /* Don't bother checking every word... */
+ for (i = 0; i < s->map_size; i += 44) {
+ d = readl(virt+i);
+ a += d;
+ b &= d;
+ }
+
+ map.flags = 0;
+ s->ops->set_mem_map(s, &map);
+
+ iounmap(virt);
+ }
+
+ return (b == -1) ? -1 : (a>>1);
+}
+
+static int
+cis_readable(struct pcmcia_socket *s, unsigned long base, unsigned long size)
+{
+ struct resource *res1, *res2;
+ cisinfo_t info1, info2;
+ int ret = 0;
+
+ res1 = claim_region(s, base, size/2, IORESOURCE_MEM, "cs memory probe");
+ res2 = claim_region(s, base + size/2, size/2, IORESOURCE_MEM, "cs memory probe");
+
+ if (res1 && res2) {
+ ret = readable(s, res1, &info1);
+ ret += readable(s, res2, &info2);
+ }
+
+ free_region(res2);
+ free_region(res1);
+
+ return (ret == 2) && (info1.Chains == info2.Chains);
+}
+
+static int
+checksum_match(struct pcmcia_socket *s, unsigned long base, unsigned long size)
+{
+ struct resource *res1, *res2;
+ int a = -1, b = -1;
+
+ res1 = claim_region(s, base, size/2, IORESOURCE_MEM, "cs memory probe");
+ res2 = claim_region(s, base + size/2, size/2, IORESOURCE_MEM, "cs memory probe");
+
+ if (res1 && res2) {
+ a = checksum(s, res1);
+ b = checksum(s, res2);
+ }
+
+ free_region(res2);
+ free_region(res1);
+
+ return (a == b) && (a >= 0);
+}
+
+/*======================================================================
+
+ The memory probe. If the memory list includes a 64K-aligned block
+ below 1MB, we probe in 64K chunks, and as soon as we accumulate at
+ least mem_limit free space, we quit.
+
+======================================================================*/
+
+static int do_mem_probe(u_long base, u_long num, struct pcmcia_socket *s)
+{
+ struct socket_data *s_data = s->resource_data;
+ u_long i, j, bad, fail, step;
+
+ printk(KERN_INFO "cs: memory probe 0x%06lx-0x%06lx:",
+ base, base+num-1);
+ bad = fail = 0;
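+	/* probe in 8K steps for windows under 128K, otherwise in steps of
+	 * roughly num/16 rounded down to an 8K multiple */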
+ step = (num < 0x20000) ? 0x2000 : ((num>>4) & ~0x1fff);
+ /* cis_readable wants to map 2x map_size */
+ if (step < 2 * s->map_size)
+ step = 2 * s->map_size;
+ for (i = j = base; i < base+num; i = j + step) {
+ if (!fail) {
+ for (j = i; j < base+num; j += step) {
+ if (cis_readable(s, j, step))
+ break;
+ }
+ fail = ((i == base) && (j == base+num));
+ }
+ if (fail) {
+ for (j = i; j < base+num; j += 2*step)
+ if (checksum_match(s, j, step) &&
+ checksum_match(s, j + step, step))
+ break;
+ }
+ if (i != j) {
+ if (!bad) printk(" excluding");
+ printk(" %#05lx-%#05lx", i, j-1);
+ sub_interval(&s_data->mem_db, i, j-i);
+ bad += j-i;
+ }
+ }
+ printk(bad ? "\n" : " clean.\n");
+ return (num - bad);
+}
+
+#ifdef CONFIG_PCMCIA_PROBE
+
+static u_long inv_probe(struct resource_map *m, struct pcmcia_socket *s)
+{
+ struct socket_data *s_data = s->resource_data;
+ u_long ok;
+ if (m == &s_data->mem_db)
+ return 0;
+ ok = inv_probe(m->next, s);
+ if (ok) {
+ if (m->base >= 0x100000)
+ sub_interval(&s_data->mem_db, m->base, m->num);
+ return ok;
+ }
+ if (m->base < 0x100000)
+ return 0;
+ return do_mem_probe(m->base, m->num, s);
+}
+
+static void validate_mem(struct pcmcia_socket *s, unsigned int probe_mask)
+{
+ struct resource_map *m, mm;
+ static u_char order[] = { 0xd0, 0xe0, 0xc0, 0xf0 };
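+	/* i.e. try the 64K blocks at 0xd0000, 0xe0000, 0xc0000 and 0xf0000,
+	 * in that order (b = order[i] << 12 below) */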
+ u_long b, i, ok = 0;
+ struct socket_data *s_data = s->resource_data;
+
+ /* We do up to four passes through the list */
+ if (probe_mask & MEM_PROBE_HIGH) {
+ if (inv_probe(s_data->mem_db.next, s) > 0)
+ return;
+ printk(KERN_NOTICE "cs: warning: no high memory space "
+ "available!\n");
+ }
+ if ((probe_mask & MEM_PROBE_LOW) == 0)
+ return;
+ for (m = s_data->mem_db.next; m != &s_data->mem_db; m = mm.next) {
+ mm = *m;
+ /* Only probe < 1 MB */
+ if (mm.base >= 0x100000) continue;
+ if ((mm.base | mm.num) & 0xffff) {
+ ok += do_mem_probe(mm.base, mm.num, s);
+ continue;
+ }
+ /* Special probe for 64K-aligned block */
+ for (i = 0; i < 4; i++) {
+ b = order[i] << 12;
+ if ((b >= mm.base) && (b+0x10000 <= mm.base+mm.num)) {
+ if (ok >= mem_limit)
+ sub_interval(&s_data->mem_db, b, 0x10000);
+ else
+ ok += do_mem_probe(b, 0x10000, s);
+ }
+ }
+ }
+}
+
+#else /* CONFIG_PCMCIA_PROBE */
+
+static void validate_mem(struct pcmcia_socket *s, unsigned int probe_mask)
+{
+ struct resource_map *m, mm;
+ struct socket_data *s_data = s->resource_data;
+
+ for (m = s_data->mem_db.next; m != &s_data->mem_db; m = mm.next) {
+ mm = *m;
+ if (do_mem_probe(mm.base, mm.num, s))
+ break;
+ }
+}
+
+#endif /* CONFIG_PCMCIA_PROBE */
+
+
+/*
+ * Locking note: this is the only place where we take
+ * both rsrc_sem and skt_sem.
+ */
+static void pcmcia_nonstatic_validate_mem(struct pcmcia_socket *s)
+{
+ struct socket_data *s_data = s->resource_data;
+ if (probe_mem) {
+ unsigned int probe_mask;
+
+ down(&rsrc_sem);
+
+ probe_mask = MEM_PROBE_LOW;
+ if (s->features & SS_CAP_PAGE_REGS)
+ probe_mask = MEM_PROBE_HIGH;
+
+ if (probe_mask & ~s_data->rsrc_mem_probe) {
+ s_data->rsrc_mem_probe |= probe_mask;
+
+ down(&s->skt_sem);
+
+ if (s->state & SOCKET_PRESENT)
+ validate_mem(s, probe_mask);
+
+ up(&s->skt_sem);
+ }
+
+ up(&rsrc_sem);
+ }
+}
+
+struct pcmcia_align_data {
+ unsigned long mask;
+ unsigned long offset;
+ struct resource_map *map;
+};
+
+static void
+pcmcia_common_align(void *align_data, struct resource *res,
+ unsigned long size, unsigned long align)
+{
+ struct pcmcia_align_data *data = align_data;
+ unsigned long start;
+ /*
+ * Ensure that we have the correct start address
+ */
+ start = (res->start & ~data->mask) + data->offset;
+ if (start < res->start)
+ start += data->mask + 1;
+ res->start = start;
+}
+
+static void
+pcmcia_align(void *align_data, struct resource *res,
+ unsigned long size, unsigned long align)
+{
+ struct pcmcia_align_data *data = align_data;
+ struct resource_map *m;
+
+ pcmcia_common_align(data, res, size, align);
+
+ for (m = data->map->next; m != data->map; m = m->next) {
+ unsigned long start = m->base;
+ unsigned long end = m->base + m->num - 1;
+
+ /*
+ * If the lower resources are not available, try aligning
+ * to this entry of the resource database to see if it'll
+ * fit here.
+ */
+ if (res->start < start) {
+ res->start = start;
+ pcmcia_common_align(data, res, size, align);
+ }
+
+ /*
+ * If we're above the area which was passed in, there's
+ * no point proceeding.
+ */
+ if (res->start >= res->end)
+ break;
+
+ if ((res->start + size - 1) <= end)
+ break;
+ }
+
+ /*
+ * If we failed to find something suitable, ensure we fail.
+ */
+ if (m == data->map)
+ res->start = res->end;
+}
+
+/*
+ * Adjust an existing IO region allocation, making sure that we don't
+ * encroach outside the resources which the user supplied.
+ */
+static int nonstatic_adjust_io_region(struct resource *res, unsigned long r_start,
+ unsigned long r_end, struct pcmcia_socket *s)
+{
+ struct resource_map *m;
+ struct socket_data *s_data = s->resource_data;
+ int ret = -ENOMEM;
+
+ down(&rsrc_sem);
+ for (m = s_data->io_db.next; m != &s_data->io_db; m = m->next) {
+ unsigned long start = m->base;
+ unsigned long end = m->base + m->num - 1;
+
+ if (start > r_start || r_end > end)
+ continue;
+
+ ret = adjust_resource(res, r_start, r_end - r_start + 1);
+ break;
+ }
+ up(&rsrc_sem);
+
+ return ret;
+}
+
+/*======================================================================
+
+ These find ranges of I/O ports or memory addresses that are not
+ currently allocated by other devices.
+
+ The 'align' field should reflect the number of bits of address
+ that need to be preserved from the initial value of *base. It
+ should be a power of two, greater than or equal to 'num'. A value
+ of 0 means that all bits of *base are significant. *base should
+ also be strictly less than 'align'.
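+
+    For example (illustrative only), base = 0x300 with align = 0x400
+    requests a window whose start is congruent to 0x300 modulo 0x400,
+    i.e. one of 0x300, 0x700, 0xb00, ...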
+
+======================================================================*/
+
+struct resource *nonstatic_find_io_region(unsigned long base, int num,
+ unsigned long align, struct pcmcia_socket *s)
+{
+ struct resource *res = make_resource(0, num, IORESOURCE_IO, s->dev.class_id);
+ struct socket_data *s_data = s->resource_data;
+ struct pcmcia_align_data data;
+ unsigned long min = base;
+ int ret;
+
+ if (align == 0)
+ align = 0x10000;
+
+ data.mask = align - 1;
+ data.offset = base & data.mask;
+ data.map = &s_data->io_db;
+
+ down(&rsrc_sem);
+#ifdef CONFIG_PCI
+ if (s->cb_dev) {
+ ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num, 1,
+ min, 0, pcmcia_align, &data);
+ } else
+#endif
+ ret = allocate_resource(&ioport_resource, res, num, min, ~0UL,
+ 1, pcmcia_align, &data);
+ up(&rsrc_sem);
+
+ if (ret != 0) {
+ kfree(res);
+ res = NULL;
+ }
+ return res;
+}
+
+struct resource * nonstatic_find_mem_region(u_long base, u_long num, u_long align,
+ int low, struct pcmcia_socket *s)
+{
+ struct resource *res = make_resource(0, num, IORESOURCE_MEM, s->dev.class_id);
+ struct socket_data *s_data = s->resource_data;
+ struct pcmcia_align_data data;
+ unsigned long min, max;
+ int ret, i;
+
+ low = low || !(s->features & SS_CAP_PAGE_REGS);
+
+ data.mask = align - 1;
+ data.offset = base & data.mask;
+ data.map = &s_data->mem_db;
+
+ for (i = 0; i < 2; i++) {
+ if (low) {
+ max = 0x100000UL;
+ min = base < max ? base : 0;
+ } else {
+ max = ~0UL;
+ min = 0x100000UL + base;
+ }
+
+ down(&rsrc_sem);
+#ifdef CONFIG_PCI
+ if (s->cb_dev) {
+ ret = pci_bus_alloc_resource(s->cb_dev->bus, res, num,
+ 1, min, 0,
+ pcmcia_align, &data);
+ } else
+#endif
+ ret = allocate_resource(&iomem_resource, res, num, min,
+ max, 1, pcmcia_align, &data);
+ up(&rsrc_sem);
+ if (ret == 0 || low)
+ break;
+ low = 1;
+ }
+
+ if (ret != 0) {
+ kfree(res);
+ res = NULL;
+ }
+ return res;
+}
+
+
+static int adjust_memory(struct pcmcia_socket *s, adjust_t *adj)
+{
+ u_long base, num;
+ struct socket_data *data = s->resource_data;
+ int ret;
+
+ base = adj->resource.memory.Base;
+ num = adj->resource.memory.Size;
+ if ((num == 0) || (base+num-1 < base))
+ return CS_BAD_SIZE;
+
+ ret = CS_SUCCESS;
+
+ down(&rsrc_sem);
+ switch (adj->Action) {
+ case ADD_MANAGED_RESOURCE:
+ ret = add_interval(&data->mem_db, base, num);
+ break;
+ case REMOVE_MANAGED_RESOURCE:
+ ret = sub_interval(&data->mem_db, base, num);
+ if (ret == CS_SUCCESS) {
+ struct pcmcia_socket *socket;
+ down_read(&pcmcia_socket_list_rwsem);
+ list_for_each_entry(socket, &pcmcia_socket_list, socket_list)
+ release_cis_mem(socket);
+ up_read(&pcmcia_socket_list_rwsem);
+ }
+ break;
+ default:
+ ret = CS_UNSUPPORTED_FUNCTION;
+ }
+ up(&rsrc_sem);
+
+ return ret;
+}
+
+
+static int adjust_io(struct pcmcia_socket *s, adjust_t *adj)
+{
+ struct socket_data *data = s->resource_data;
+ kio_addr_t base, num;
+ int ret = CS_SUCCESS;
+
+ base = adj->resource.io.BasePort;
+ num = adj->resource.io.NumPorts;
+ if ((base < 0) || (base > 0xffff))
+ return CS_BAD_BASE;
+ if ((num <= 0) || (base+num > 0x10000) || (base+num <= base))
+ return CS_BAD_SIZE;
+
+ down(&rsrc_sem);
+ switch (adj->Action) {
+ case ADD_MANAGED_RESOURCE:
+ if (add_interval(&data->io_db, base, num) != 0) {
+ ret = CS_IN_USE;
+ break;
+ }
+#ifdef CONFIG_PCMCIA_PROBE
+ if (probe_io)
+ do_io_probe(s, base, num);
+#endif
+ break;
+ case REMOVE_MANAGED_RESOURCE:
+ sub_interval(&data->io_db, base, num);
+ break;
+ default:
+ ret = CS_UNSUPPORTED_FUNCTION;
+ break;
+ }
+ up(&rsrc_sem);
+
+ return ret;
+}
+
+
+static int nonstatic_adjust_resource_info(struct pcmcia_socket *s, adjust_t *adj)
+{
+ switch (adj->Resource) {
+ case RES_MEMORY_RANGE:
+ return adjust_memory(s, adj);
+ case RES_IO_RANGE:
+ return adjust_io(s, adj);
+ }
+ return CS_UNSUPPORTED_FUNCTION;
+}
+
+static int nonstatic_init(struct pcmcia_socket *s)
+{
+ struct socket_data *data;
+
+ data = kmalloc(sizeof(struct socket_data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ data->mem_db.next = &data->mem_db;
+ data->io_db.next = &data->io_db;
+
+ s->resource_data = (void *) data;
+
+ return 0;
+}
+
+static void nonstatic_release_resource_db(struct pcmcia_socket *s)
+{
+ struct socket_data *data = s->resource_data;
+ struct resource_map *p, *q;
+
+ down(&rsrc_sem);
+ for (p = data->mem_db.next; p != &data->mem_db; p = q) {
+ q = p->next;
+ kfree(p);
+ }
+ for (p = data->io_db.next; p != &data->io_db; p = q) {
+ q = p->next;
+ kfree(p);
+ }
+ up(&rsrc_sem);
+}
+
+
+struct pccard_resource_ops pccard_nonstatic_ops = {
+ .validate_mem = pcmcia_nonstatic_validate_mem,
+ .adjust_io_region = nonstatic_adjust_io_region,
+ .find_io = nonstatic_find_io_region,
+ .find_mem = nonstatic_find_mem_region,
+ .adjust_resource = nonstatic_adjust_resource_info,
+ .init = nonstatic_init,
+ .exit = nonstatic_release_resource_db,
+};
+EXPORT_SYMBOL(pccard_nonstatic_ops);
return ret;
}
-static spinlock_t status_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(status_lock);
static void soc_common_check_status(struct soc_pcmcia_socket *skt)
{
map->stop = PAGE_SIZE-1;
map->stop -= map->start;
- map->stop += (unsigned long)skt->virt_io;
- map->start = (unsigned long)skt->virt_io;
+ map->stop += skt->socket.io_offset;
+ map->start = skt->socket.io_offset;
return 0;
}
goto out_err_6;
skt->socket.features = SS_CAP_STATIC_MAP|SS_CAP_PCCARD;
+ skt->socket.resource_ops = &pccard_static_ops;
skt->socket.irq_mask = 0;
skt->socket.map_size = PAGE_SIZE;
skt->socket.pci_irq = skt->irq;
#include <linux/pm.h>
#include <linux/pci.h>
#include <linux/device.h>
-#include <linux/suspend.h>
#include <asm/system.h>
#include <asm/irq.h>
static CLASS_DEVICE_ATTR(card_eject, 0200, NULL, pccard_store_eject);
+static ssize_t pccard_show_irq_mask(struct class_device *dev, char *buf)
+{
+ struct pcmcia_socket *s = to_socket(dev);
+ return sprintf(buf, "0x%04x\n", s->irq_mask);
+}
+
+static ssize_t pccard_store_irq_mask(struct class_device *dev, const char *buf, size_t count)
+{
+ ssize_t ret;
+ struct pcmcia_socket *s = to_socket(dev);
+ u32 mask;
+
+ if (!count)
+ return -EINVAL;
+
+ ret = sscanf (buf, "0x%x\n", &mask);
+
+ if (ret == 1) {
+ s->irq_mask &= mask;
+ ret = 0;
+ }
+
+ return ret ? ret : count;
+}
+static CLASS_DEVICE_ATTR(card_irq_mask, 0600, pccard_show_irq_mask, pccard_store_irq_mask);
+
+
static struct class_device_attribute *pccard_socket_attributes[] = {
&class_device_attr_card_type,
&class_device_attr_card_voltage,
&class_device_attr_card_vcc,
&class_device_attr_card_insert,
&class_device_attr_card_eject,
+ &class_device_attr_card_irq_mask,
NULL,
};
static int ti12xx_override(struct yenta_socket *socket)
{
- u32 val;
+ u32 val, val_orig;
/* make sure that memory burst is active */
- val = config_readl(socket, TI113X_SYSTEM_CONTROL);
+ val_orig = val = config_readl(socket, TI113X_SYSTEM_CONTROL);
+ if (disable_clkrun && PCI_FUNC(socket->dev->devfn) == 0) {
+ printk(KERN_INFO "Yenta: Disabling CLKRUN feature\n");
+ val |= TI113X_SCR_KEEPCLK;
+ }
if (!(val & TI122X_SCR_MRBURSTUP)) {
printk(KERN_INFO "Yenta: Enabling burst memory read transactions\n");
val |= TI122X_SCR_MRBURSTUP;
- config_writel(socket, TI113X_SYSTEM_CONTROL, val);
}
+ if (val_orig != val)
+ config_writel(socket, TI113X_SYSTEM_CONTROL, val);
/*
* for EnE bridges only: clear testbit TLTEnable. this makes the
--- /dev/null
+/*
+ * vrc4171_card.c, NEC VRC4171 Card Controller driver for Socket Services.
+ *
+ * Copyright (C) 2003 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+#include <asm/vr41xx/vrc4171.h>
+
+#include <pcmcia/ss.h>
+
+#include "i82365.h"
+
+MODULE_DESCRIPTION("NEC VRC4171 Card Controllers driver for Socket Services");
+MODULE_AUTHOR("Yoichi Yuasa <yuasa@hh.iij4u.or.jp>");
+MODULE_LICENSE("GPL");
+
+#define CARD_MAX_SLOTS 2
+#define CARD_SLOTA 0
+#define CARD_SLOTB 1
+#define CARD_SLOTB_OFFSET 0x40
+
+#define CARD_MEM_START 0x10000000
+#define CARD_MEM_END 0x13ffffff
+#define CARD_MAX_MEM_OFFSET 0x3ffffff
+#define CARD_MAX_MEM_SPEED 1000
+
+#define CARD_CONTROLLER_INDEX 0x03e0
+#define CARD_CONTROLLER_DATA 0x03e1
+#define CARD_CONTROLLER_SIZE 2
+ /* Power register */
+ #define VPP_GET_VCC 0x01
+ #define POWER_ENABLE 0x10
+ #define CARD_VOLTAGE_SENSE 0x1f
+ #define VCC_3VORXV_CAPABLE 0x00
+ #define VCC_XV_ONLY 0x01
+ #define VCC_3V_CAPABLE 0x02
+ #define VCC_5V_ONLY 0x03
+ #define CARD_VOLTAGE_SELECT 0x2f
+ #define VCC_3V 0x01
+ #define VCC_5V 0x00
+ #define VCC_XV 0x02
+ #define VCC_STATUS_3V 0x02
+ #define VCC_STATUS_5V 0x01
+ #define VCC_STATUS_XV 0x03
+ #define GLOBAL_CONTROL 0x1e
+ #define EXWRBK 0x04
+ #define IRQPM_EN 0x08
+ #define CLRPMIRQ 0x10
+
+#define IO_MAX_MAPS 2
+#define MEM_MAX_MAPS 5
+
+enum {
+ SLOT_PROBE = 0,
+ SLOT_NOPROBE_IO,
+ SLOT_NOPROBE_MEM,
+ SLOT_NOPROBE_ALL
+};
+
+typedef struct vrc4171_socket {
+ int noprobe;
+ struct pcmcia_socket pcmcia_socket;
+ char name[24];
+ int csc_irq;
+ int io_irq;
+} vrc4171_socket_t;
+
+static vrc4171_socket_t vrc4171_sockets[CARD_MAX_SLOTS];
+static int vrc4171_slotb = SLOTB_IS_NONE;
+static unsigned int vrc4171_irq;
+static uint16_t vrc4171_irq_mask = 0xdeb8;
+
+static inline uint8_t exca_read_byte(int slot, uint8_t index)
+{
+ if (slot == CARD_SLOTB)
+ index += CARD_SLOTB_OFFSET;
+
+ outb(index, CARD_CONTROLLER_INDEX);
+ return inb(CARD_CONTROLLER_DATA);
+}
+
+static inline uint16_t exca_read_word(int slot, uint8_t index)
+{
+ uint16_t data;
+
+ if (slot == CARD_SLOTB)
+ index += CARD_SLOTB_OFFSET;
+
+ outb(index++, CARD_CONTROLLER_INDEX);
+ data = inb(CARD_CONTROLLER_DATA);
+
+ outb(index, CARD_CONTROLLER_INDEX);
+ data |= ((uint16_t)inb(CARD_CONTROLLER_DATA)) << 8;
+
+ return data;
+}
+
+static inline uint8_t exca_write_byte(int slot, uint8_t index, uint8_t data)
+{
+ if (slot == CARD_SLOTB)
+ index += CARD_SLOTB_OFFSET;
+
+ outb(index, CARD_CONTROLLER_INDEX);
+ outb(data, CARD_CONTROLLER_DATA);
+
+ return data;
+}
+
+static inline uint16_t exca_write_word(int slot, uint8_t index, uint16_t data)
+{
+ if (slot == CARD_SLOTB)
+ index += CARD_SLOTB_OFFSET;
+
+ outb(index++, CARD_CONTROLLER_INDEX);
+ outb(data, CARD_CONTROLLER_DATA);
+
+ outb(index, CARD_CONTROLLER_INDEX);
+ outb((uint8_t)(data >> 8), CARD_CONTROLLER_DATA);
+
+ return data;
+}
+
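+/* Find an unused IRQ in vrc4171_irq_mask, mark it as in use, and return it (-1 if none). */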
+static inline int search_nonuse_irq(void)
+{
+ int i;
+
+ for (i = 0; i < 16; i++) {
+ if (vrc4171_irq_mask & (1 << i)) {
+ vrc4171_irq_mask &= ~(1 << i);
+ return i;
+ }
+ }
+
+ return -1;
+}
+
+static int pccard_init(struct pcmcia_socket *sock)
+{
+ vrc4171_socket_t *socket;
+ unsigned int slot;
+
+ sock->features |= SS_CAP_PCCARD | SS_CAP_PAGE_REGS;
+ sock->irq_mask = 0;
+ sock->map_size = 0x1000;
+ sock->pci_irq = vrc4171_irq;
+
+ slot = sock->sock;
+ socket = &vrc4171_sockets[slot];
+ socket->csc_irq = search_nonuse_irq();
+ socket->io_irq = search_nonuse_irq();
+
+ return 0;
+}
+
+static int pccard_suspend(struct pcmcia_socket *sock)
+{
+ return -EINVAL;
+}
+
+static int pccard_get_status(struct pcmcia_socket *sock, u_int *value)
+{
+ unsigned int slot;
+ uint8_t status, sense;
+ u_int val = 0;
+
+ if (sock == NULL || sock->sock >= CARD_MAX_SLOTS || value == NULL)
+ return -EINVAL;
+
+ slot = sock->sock;
+
+ status = exca_read_byte(slot, I365_STATUS);
+ if (exca_read_byte(slot, I365_INTCTL) & I365_PC_IOCARD) {
+ if (status & I365_CS_STSCHG)
+ val |= SS_STSCHG;
+ } else {
+ if (!(status & I365_CS_BVD1))
+ val |= SS_BATDEAD;
+ else if ((status & (I365_CS_BVD1 | I365_CS_BVD2)) == I365_CS_BVD1)
+ val |= SS_BATWARN;
+ }
+ if ((status & I365_CS_DETECT) == I365_CS_DETECT)
+ val |= SS_DETECT;
+ if (status & I365_CS_WRPROT)
+ val |= SS_WRPROT;
+ if (status & I365_CS_READY)
+ val |= SS_READY;
+ if (status & I365_CS_POWERON)
+ val |= SS_POWERON;
+
+ sense = exca_read_byte(slot, CARD_VOLTAGE_SENSE);
+ switch (sense) {
+ case VCC_3VORXV_CAPABLE:
+ val |= SS_3VCARD | SS_XVCARD;
+ break;
+ case VCC_XV_ONLY:
+ val |= SS_XVCARD;
+ break;
+ case VCC_3V_CAPABLE:
+ val |= SS_3VCARD;
+ break;
+ default:
+ /* 5V only */
+ break;
+ }
+
+ *value = val;
+
+ return 0;
+}
+
+static inline u_char get_Vcc_value(uint8_t voltage)
+{
+ switch (voltage) {
+ case VCC_STATUS_3V:
+ return 33;
+ case VCC_STATUS_5V:
+ return 50;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static inline u_char get_Vpp_value(uint8_t power, u_char Vcc)
+{
+ if ((power & 0x03) == 0x01 || (power & 0x03) == 0x02)
+ return Vcc;
+
+ return 0;
+}
+
+static int pccard_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ unsigned int slot;
+ uint8_t power, voltage, control, cscint;
+
+ if (sock == NULL || sock->sock >= CARD_MAX_SLOTS || state == NULL)
+ return -EINVAL;
+
+ slot = sock->sock;
+
+ power = exca_read_byte(slot, I365_POWER);
+ voltage = exca_read_byte(slot, CARD_VOLTAGE_SELECT);
+
+ state->Vcc = get_Vcc_value(voltage);
+ state->Vpp = get_Vpp_value(power, state->Vcc);
+
+ state->flags = 0;
+ if (power & POWER_ENABLE)
+ state->flags |= SS_PWR_AUTO;
+ if (power & I365_PWR_OUT)
+ state->flags |= SS_OUTPUT_ENA;
+
+ control = exca_read_byte(slot, I365_INTCTL);
+ if (control & I365_PC_IOCARD)
+ state->flags |= SS_IOCARD;
+ if (!(control & I365_PC_RESET))
+ state->flags |= SS_RESET;
+
+ cscint = exca_read_byte(slot, I365_CSCINT);
+ state->csc_mask = 0;
+ if (state->flags & SS_IOCARD) {
+ if (cscint & I365_CSC_STSCHG)
+ state->flags |= SS_STSCHG;
+ } else {
+ if (cscint & I365_CSC_BVD1)
+ state->csc_mask |= SS_BATDEAD;
+ if (cscint & I365_CSC_BVD2)
+ state->csc_mask |= SS_BATWARN;
+ }
+ if (cscint & I365_CSC_READY)
+ state->csc_mask |= SS_READY;
+ if (cscint & I365_CSC_DETECT)
+ state->csc_mask |= SS_DETECT;
+
+ return 0;
+}
+
+static inline uint8_t set_Vcc_value(u_char Vcc)
+{
+ switch (Vcc) {
+ case 33:
+ return VCC_3V;
+ case 50:
+ return VCC_5V;
+ }
+
+	/* The lower voltage is chosen for safety. */
+ return VCC_3V;
+}
+
+static int pccard_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ vrc4171_socket_t *socket;
+ unsigned int slot;
+ uint8_t voltage, power, control, cscint;
+
+ if (sock == NULL || sock->sock >= CARD_MAX_SLOTS ||
+ (state->Vpp != state->Vcc && state->Vpp != 0) ||
+ (state->Vcc != 50 && state->Vcc != 33 && state->Vcc != 0))
+ return -EINVAL;
+
+ slot = sock->sock;
+ socket = &vrc4171_sockets[slot];
+
+ spin_lock_irq(&sock->lock);
+
+ voltage = set_Vcc_value(state->Vcc);
+ exca_write_byte(slot, CARD_VOLTAGE_SELECT, voltage);
+
+ power = POWER_ENABLE;
+ if (state->Vpp == state->Vcc)
+ power |= VPP_GET_VCC;
+ if (state->flags & SS_OUTPUT_ENA)
+ power |= I365_PWR_OUT;
+ exca_write_byte(slot, I365_POWER, power);
+
+ control = 0;
+ if (state->io_irq != 0)
+ control |= socket->io_irq;
+ if (state->flags & SS_IOCARD)
+ control |= I365_PC_IOCARD;
+ if (state->flags & SS_RESET)
+ control &= ~I365_PC_RESET;
+ else
+ control |= I365_PC_RESET;
+ exca_write_byte(slot, I365_INTCTL, control);
+
+ cscint = 0;
+ exca_write_byte(slot, I365_CSCINT, cscint);
+ exca_read_byte(slot, I365_CSC); /* clear CardStatus change */
+ if (state->csc_mask != 0)
+ cscint |= socket->csc_irq << 8;
+ if (state->flags & SS_IOCARD) {
+ if (state->csc_mask & SS_STSCHG)
+ cscint |= I365_CSC_STSCHG;
+ } else {
+ if (state->csc_mask & SS_BATDEAD)
+ cscint |= I365_CSC_BVD1;
+ if (state->csc_mask & SS_BATWARN)
+ cscint |= I365_CSC_BVD2;
+ }
+ if (state->csc_mask & SS_READY)
+ cscint |= I365_CSC_READY;
+ if (state->csc_mask & SS_DETECT)
+ cscint |= I365_CSC_DETECT;
+ exca_write_byte(slot, I365_CSCINT, cscint);
+
+ spin_unlock_irq(&sock->lock);
+
+ return 0;
+}
+
+static int pccard_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *io)
+{
+ unsigned int slot;
+ uint8_t ioctl, addrwin;
+ u_char map;
+
+ if (sock == NULL || sock->sock >= CARD_MAX_SLOTS ||
+ io == NULL || io->map >= IO_MAX_MAPS ||
+ io->start > 0xffff || io->stop > 0xffff || io->start > io->stop)
+ return -EINVAL;
+
+ slot = sock->sock;
+ map = io->map;
+
+ addrwin = exca_read_byte(slot, I365_ADDRWIN);
+ if (addrwin & I365_ENA_IO(map)) {
+ addrwin &= ~I365_ENA_IO(map);
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ }
+
+ exca_write_word(slot, I365_IO(map)+I365_W_START, io->start);
+ exca_write_word(slot, I365_IO(map)+I365_W_STOP, io->stop);
+
+ ioctl = 0;
+ if (io->speed > 0)
+ ioctl |= I365_IOCTL_WAIT(map);
+ if (io->flags & MAP_16BIT)
+ ioctl |= I365_IOCTL_16BIT(map);
+ if (io->flags & MAP_AUTOSZ)
+ ioctl |= I365_IOCTL_IOCS16(map);
+ if (io->flags & MAP_0WS)
+ ioctl |= I365_IOCTL_0WS(map);
+ exca_write_byte(slot, I365_IOCTL, ioctl);
+
+ if (io->flags & MAP_ACTIVE) {
+ addrwin |= I365_ENA_IO(map);
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ }
+
+ return 0;
+}
+
+static int pccard_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *mem)
+{
+ unsigned int slot;
+ uint16_t start, stop, offset;
+ uint8_t addrwin;
+ u_char map;
+
+ if (sock == NULL || sock->sock >= CARD_MAX_SLOTS ||
+ mem == NULL || mem->map >= MEM_MAX_MAPS ||
+ mem->sys_start < CARD_MEM_START || mem->sys_start > CARD_MEM_END ||
+ mem->sys_stop < CARD_MEM_START || mem->sys_stop > CARD_MEM_END ||
+ mem->sys_start > mem->sys_stop ||
+ mem->card_start > CARD_MAX_MEM_OFFSET ||
+ mem->speed > CARD_MAX_MEM_SPEED)
+ return -EINVAL;
+
+ slot = sock->sock;
+ map = mem->map;
+
+ addrwin = exca_read_byte(slot, I365_ADDRWIN);
+ if (addrwin & I365_ENA_MEM(map)) {
+ addrwin &= ~I365_ENA_MEM(map);
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ }
+
+ start = (mem->sys_start >> 12) & 0x3fff;
+ if (mem->flags & MAP_16BIT)
+ start |= I365_MEM_16BIT;
+ exca_write_word(slot, I365_MEM(map)+I365_W_START, start);
+
+ stop = (mem->sys_stop >> 12) & 0x3fff;
+ switch (mem->speed) {
+ case 0:
+ break;
+ case 1:
+ stop |= I365_MEM_WS0;
+ break;
+ case 2:
+ stop |= I365_MEM_WS1;
+ break;
+ default:
+ stop |= I365_MEM_WS0 | I365_MEM_WS1;
+ break;
+ }
+ exca_write_word(slot, I365_MEM(map)+I365_W_STOP, stop);
+
+ offset = (mem->card_start >> 12) & 0x3fff;
+ if (mem->flags & MAP_ATTRIB)
+ offset |= I365_MEM_REG;
+ if (mem->flags & MAP_WRPROT)
+ offset |= I365_MEM_WRPROT;
+ exca_write_word(slot, I365_MEM(map)+I365_W_OFF, offset);
+
+ if (mem->flags & MAP_ACTIVE) {
+ addrwin |= I365_ENA_MEM(map);
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ }
+
+ return 0;
+}
+
+static struct pccard_operations vrc4171_pccard_operations = {
+ .init = pccard_init,
+ .suspend = pccard_suspend,
+ .get_status = pccard_get_status,
+ .get_socket = pccard_get_socket,
+ .set_socket = pccard_set_socket,
+ .set_io_map = pccard_set_io_map,
+ .set_mem_map = pccard_set_mem_map,
+};
+
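+/* Collect card status-change events for a slot and translate them into SS_* event flags. */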
+static inline unsigned int get_events(int slot)
+{
+ unsigned int events = 0;
+ uint8_t status, csc;
+
+ status = exca_read_byte(slot, I365_STATUS);
+ csc = exca_read_byte(slot, I365_CSC);
+
+ if (exca_read_byte(slot, I365_INTCTL) & I365_PC_IOCARD) {
+ if ((csc & I365_CSC_STSCHG) && (status & I365_CS_STSCHG))
+ events |= SS_STSCHG;
+ } else {
+ if (csc & (I365_CSC_BVD1 | I365_CSC_BVD2)) {
+ if (!(status & I365_CS_BVD1))
+ events |= SS_BATDEAD;
+ else if ((status & (I365_CS_BVD1 | I365_CS_BVD2)) == I365_CS_BVD1)
+ events |= SS_BATWARN;
+ }
+ }
+ if ((csc & I365_CSC_READY) && (status & I365_CS_READY))
+ events |= SS_READY;
+ if ((csc & I365_CSC_DETECT) && ((status & I365_CS_DETECT) == I365_CS_DETECT))
+ events |= SS_DETECT;
+
+ return events;
+}
+
+static irqreturn_t pccard_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ vrc4171_socket_t *socket;
+ unsigned int events;
+ irqreturn_t retval = IRQ_NONE;
+ uint16_t status;
+
+ status = vrc4171_get_irq_status();
+ if (status & IRQ_A) {
+ socket = &vrc4171_sockets[CARD_SLOTA];
+ if (socket->noprobe == SLOT_PROBE) {
+ if (status & (1 << socket->csc_irq)) {
+ events = get_events(CARD_SLOTA);
+ if (events != 0) {
+ pcmcia_parse_events(&socket->pcmcia_socket, events);
+ retval = IRQ_HANDLED;
+ }
+ }
+ }
+ }
+
+ if (status & IRQ_B) {
+ socket = &vrc4171_sockets[CARD_SLOTB];
+ if (socket->noprobe == SLOT_PROBE) {
+ if (status & (1 << socket->csc_irq)) {
+ events = get_events(CARD_SLOTB);
+ if (events != 0) {
+ pcmcia_parse_events(&socket->pcmcia_socket, events);
+ retval = IRQ_HANDLED;
+ }
+ }
+ }
+ }
+
+ return retval;
+}
+
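+/* Reserve the I/O and status-change IRQs already programmed for a slot so they are not reallocated. */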
+static inline void reserve_using_irq(int slot)
+{
+ unsigned int irq;
+
+ irq = exca_read_byte(slot, I365_INTCTL);
+ irq &= 0x0f;
+ vrc4171_irq_mask &= ~(1 << irq);
+
+ irq = exca_read_byte(slot, I365_CSCINT);
+ irq = (irq & 0xf0) >> 4;
+ vrc4171_irq_mask &= ~(1 << irq);
+}
+
+static int __devinit vrc4171_add_socket(int slot)
+{
+ vrc4171_socket_t *socket;
+ int retval;
+
+ if (slot >= CARD_MAX_SLOTS)
+ return -EINVAL;
+
+ socket = &vrc4171_sockets[slot];
+ if (socket->noprobe != SLOT_PROBE) {
+ uint8_t addrwin;
+
+ switch (socket->noprobe) {
+ case SLOT_NOPROBE_MEM:
+ addrwin = exca_read_byte(slot, I365_ADDRWIN);
+ addrwin &= 0x1f;
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ break;
+ case SLOT_NOPROBE_IO:
+ addrwin = exca_read_byte(slot, I365_ADDRWIN);
+ addrwin &= 0xc0;
+ exca_write_byte(slot, I365_ADDRWIN, addrwin);
+ break;
+ default:
+ break;
+ }
+
+ reserve_using_irq(slot);
+
+ return 0;
+ }
+
+ sprintf(socket->name, "NEC VRC4171 Card Slot %1c", 'A' + slot);
+
+ socket->pcmcia_socket.ops = &vrc4171_pccard_operations;
+
+ retval = pcmcia_register_socket(&socket->pcmcia_socket);
+ if (retval != 0)
+ return retval;
+
+ exca_write_byte(slot, I365_ADDRWIN, 0);
+
+ exca_write_byte(slot, GLOBAL_CONTROL, 0);
+
+ return 0;
+}
+
+static void vrc4171_remove_socket(int slot)
+{
+ vrc4171_socket_t *socket;
+
+ if (slot >= CARD_MAX_SLOTS)
+ return;
+
+ socket = &vrc4171_sockets[slot];
+
+ pcmcia_unregister_socket(&socket->pcmcia_socket);
+}
+
+static int __devinit vrc4171_card_setup(char *options)
+{
+ if (options == NULL || *options == '\0')
+ return 0;
+
+ if (strncmp(options, "irq:", 4) == 0) {
+ int irq;
+ options += 4;
+ irq = simple_strtoul(options, &options, 0);
+ if (irq >= 0 && irq < NR_IRQS)
+ vrc4171_irq = irq;
+
+ if (*options != ',')
+ return 0;
+ options++;
+ }
+
+ if (strncmp(options, "slota:", 6) == 0) {
+ options += 6;
+ if (*options != '\0') {
+ if (strncmp(options, "memnoprobe", 10) == 0) {
+ vrc4171_sockets[CARD_SLOTA].noprobe = SLOT_NOPROBE_MEM;
+ options += 10;
+ } else if (strncmp(options, "ionoprobe", 9) == 0) {
+ vrc4171_sockets[CARD_SLOTA].noprobe = SLOT_NOPROBE_IO;
+ options += 9;
+ } else if ( strncmp(options, "noprobe", 7) == 0) {
+ vrc4171_sockets[CARD_SLOTA].noprobe = SLOT_NOPROBE_ALL;
+ options += 7;
+ }
+
+ if (*options != ',')
+ return 0;
+ options++;
+ } else
+ return 0;
+
+ }
+
+ if (strncmp(options, "slotb:", 6) == 0) {
+ options += 6;
+ if (*options != '\0') {
+ if (strncmp(options, "pccard", 6) == 0) {
+ vrc4171_slotb = SLOTB_IS_PCCARD;
+ options += 6;
+ } else if (strncmp(options, "cf", 2) == 0) {
+ vrc4171_slotb = SLOTB_IS_CF;
+ options += 2;
+ } else if (strncmp(options, "flashrom", 8) == 0) {
+ vrc4171_slotb = SLOTB_IS_FLASHROM;
+ options += 8;
+ } else if (strncmp(options, "none", 4) == 0) {
+ vrc4171_slotb = SLOTB_IS_NONE;
+ options += 4;
+ }
+
+ if (*options != ',')
+ return 0;
+ options++;
+
+ if (strncmp(options, "memnoprobe", 10) == 0)
+ vrc4171_sockets[CARD_SLOTB].noprobe = SLOT_NOPROBE_MEM;
+ if (strncmp(options, "ionoprobe", 9) == 0)
+ vrc4171_sockets[CARD_SLOTB].noprobe = SLOT_NOPROBE_IO;
+ if (strncmp(options, "noprobe", 7) == 0)
+ vrc4171_sockets[CARD_SLOTB].noprobe = SLOT_NOPROBE_ALL;
+ }
+ }
+
+ return 0;
+}
+
+__setup("vrc4171_card=", vrc4171_card_setup);
+
+static int __devinit vrc4171_card_init(void)
+{
+ int retval, slot;
+
+ vrc4171_set_multifunction_pin(vrc4171_slotb);
+
+ if (request_region(CARD_CONTROLLER_INDEX, CARD_CONTROLLER_SIZE,
+ "NEC VRC4171 Card Controller") == NULL)
+ return -EBUSY;
+
+ for (slot = 0; slot < CARD_MAX_SLOTS; slot++) {
+ if (slot == CARD_SLOTB && vrc4171_slotb == SLOTB_IS_NONE)
+ break;
+
+ retval = vrc4171_add_socket(slot);
+ if (retval != 0)
+ return retval;
+ }
+
+ retval = request_irq(vrc4171_irq, pccard_interrupt, SA_SHIRQ,
+ "NEC VRC4171 Card Controller", vrc4171_sockets);
+ if (retval < 0) {
+ for (slot = 0; slot < CARD_MAX_SLOTS; slot++)
+ vrc4171_remove_socket(slot);
+
+ return retval;
+ }
+
+ printk(KERN_INFO "NEC VRC4171 Card Controller, connected to IRQ %d\n", vrc4171_irq);
+
+ return 0;
+}
+
+static void __devexit vrc4171_card_exit(void)
+{
+ int slot;
+
+ for (slot = 0; slot < CARD_MAX_SLOTS; slot++)
+ vrc4171_remove_socket(slot);
+
+ release_region(CARD_CONTROLLER_INDEX, CARD_CONTROLLER_SIZE);
+}
+
+module_init(vrc4171_card_init);
+module_exit(vrc4171_card_exit);
--- /dev/null
+/*
+ * FILE NAME
+ * drivers/pcmcia/vrc4173_cardu.c
+ *
+ * BRIEF MODULE DESCRIPTION
+ * NEC VRC4173 CARDU driver for Socket Services
+ * (This device doesn't support CardBus; it supports only 16-bit PC Cards.)
+ *
+ * Copyright 2002,2003 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+ * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+
+#include <pcmcia/ss.h>
+
+#include "vrc4173_cardu.h"
+
+MODULE_DESCRIPTION("NEC VRC4173 CARDU driver for Socket Services");
+MODULE_AUTHOR("Yoichi Yuasa <yuasa@hh.iij4u.or.jp>");
+MODULE_LICENSE("GPL");
+
+static int vrc4173_cardu_slots;
+
+static vrc4173_socket_t cardu_sockets[CARDU_MAX_SOCKETS];
+
+extern struct socket_info_t *pcmcia_register_socket (int slot,
+ struct pccard_operations *vtable,
+ int use_bus_pm);
+extern void pcmcia_unregister_socket(struct socket_info_t *s);
+
+static inline uint8_t exca_readb(vrc4173_socket_t *socket, uint16_t offset)
+{
+ return readb(socket->base + EXCA_REGS_BASE + offset);
+}
+
+static inline uint16_t exca_readw(vrc4173_socket_t *socket, uint16_t offset)
+{
+ uint16_t val;
+
+ val = readb(socket->base + EXCA_REGS_BASE + offset);
+ val |= (u16)readb(socket->base + EXCA_REGS_BASE + offset + 1) << 8;
+
+ return val;
+}
+
+static inline void exca_writeb(vrc4173_socket_t *socket, uint16_t offset, uint8_t val)
+{
+ writeb(val, socket->base + EXCA_REGS_BASE + offset);
+}
+
+static inline void exca_writew(vrc4173_socket_t *socket, uint8_t offset, uint16_t val)
+{
+ writeb((u8)val, socket->base + EXCA_REGS_BASE + offset);
+ writeb((u8)(val >> 8), socket->base + EXCA_REGS_BASE + offset + 1);
+}
+
+static inline uint32_t cardbus_socket_readl(vrc4173_socket_t *socket, u16 offset)
+{
+ return readl(socket->base + CARDBUS_SOCKET_REGS_BASE + offset);
+}
+
+static inline void cardbus_socket_writel(vrc4173_socket_t *socket, u16 offset, uint32_t val)
+{
+ writel(val, socket->base + CARDBUS_SOCKET_REGS_BASE + offset);
+}
+
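+/* Program the CARDU bridge PCI configuration registers (windows, interrupt routing, chip control). */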
+static void cardu_pciregs_init(struct pci_dev *dev)
+{
+ u32 syscnt;
+ u16 brgcnt;
+ u8 devcnt;
+
+ pci_write_config_dword(dev, 0x1c, 0x10000000);
+ pci_write_config_dword(dev, 0x20, 0x17fff000);
+ pci_write_config_dword(dev, 0x2c, 0);
+ pci_write_config_dword(dev, 0x30, 0xfffc);
+
+ pci_read_config_word(dev, BRGCNT, &brgcnt);
+ brgcnt &= ~IREQ_INT;
+ pci_write_config_word(dev, BRGCNT, brgcnt);
+
+ pci_read_config_dword(dev, SYSCNT, &syscnt);
+ syscnt &= ~(BAD_VCC_REQ_DISB|PCPCI_EN|CH_ASSIGN_MASK|SUB_ID_WR_EN|PCI_CLK_RIN);
+ syscnt |= (CH_ASSIGN_NODMA|ASYN_INT_MODE);
+ pci_write_config_dword(dev, SYSCNT, syscnt);
+
+ pci_read_config_byte(dev, DEVCNT, &devcnt);
+ devcnt &= ~(ZOOM_VIDEO_EN|SR_PCI_INT_SEL_MASK|PCI_INT_MODE|IRQ_MODE);
+ devcnt |= (SR_PCI_INT_SEL_NONE|IFG);
+ pci_write_config_byte(dev, DEVCNT, devcnt);
+
+ pci_write_config_byte(dev, CHIPCNT, S_PREF_DISB);
+
+ pci_write_config_byte(dev, SERRDIS, 0);
+}
+
+static int cardu_init(unsigned int slot)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[slot];
+
+ cardu_pciregs_init(socket->dev);
+
+ /* CARD_SC bits are cleared by reading CARD_SC. */
+ exca_writeb(socket, GLO_CNT, 0);
+
+ socket->cap.features |= SS_CAP_PCCARD | SS_CAP_PAGE_REGS;
+ socket->cap.irq_mask = 0;
+ socket->cap.map_size = 0x1000;
+ socket->cap.pci_irq = socket->dev->irq;
+ socket->events = 0;
+	spin_lock_init(&socket->event_lock);
+
+ /* Enable PC Card status interrupts */
+ exca_writeb(socket, CARD_SCI, CARD_DT_EN|RDY_EN|BAT_WAR_EN|BAT_DEAD_EN);
+
+ return 0;
+}
+
+static int cardu_suspend(unsigned int slot)
+{
+ return -EINVAL;
+}
+
+static int cardu_register_callback(unsigned int sock,
+ void (*handler)(void *, unsigned int),
+ void * info)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+
+ socket->handler = handler;
+ socket->info = info;
+
+ return 0;
+}
+
+static int cardu_inquire_socket(unsigned int sock, socket_cap_t *cap)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+
+ *cap = socket->cap;
+
+ return 0;
+}
+
+static int cardu_get_status(unsigned int sock, u_int *value)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint32_t state;
+ uint8_t status;
+ u_int val = 0;
+
+ status = exca_readb(socket, IF_STATUS);
+ if (status & CARD_PWR) val |= SS_POWERON;
+ if (status & READY) val |= SS_READY;
+ if (status & CARD_WP) val |= SS_WRPROT;
+ if ((status & (CARD_DETECT1|CARD_DETECT2)) == (CARD_DETECT1|CARD_DETECT2))
+ val |= SS_DETECT;
+ if (exca_readb(socket, INT_GEN_CNT) & CARD_TYPE_IO) {
+ if (status & STSCHG) val |= SS_STSCHG;
+ } else {
+ status &= BV_DETECT_MASK;
+ if (status != BV_DETECT_GOOD) {
+ if (status == BV_DETECT_WARN) val |= SS_BATWARN;
+ else val |= SS_BATDEAD;
+ }
+ }
+
+ state = cardbus_socket_readl(socket, SKT_PRE_STATE);
+ if (state & VOL_3V_CARD_DT) val |= SS_3VCARD;
+ if (state & VOL_XV_CARD_DT) val |= SS_XVCARD;
+ if (state & CB_CARD_DT) val |= SS_CARDBUS;
+ if (!(state &
+ (VOL_YV_CARD_DT|VOL_XV_CARD_DT|VOL_3V_CARD_DT|VOL_5V_CARD_DT|CCD20|CCD10)))
+ val |= SS_PENDING;
+
+ *value = val;
+
+ return 0;
+}
+
+static inline u_char get_Vcc_value(uint8_t val)
+{
+ switch (val & VCC_MASK) {
+ case VCC_3V:
+ return 33;
+ case VCC_5V:
+ return 50;
+ }
+
+ return 0;
+}
+
+static inline u_char get_Vpp_value(uint8_t val)
+{
+ switch (val & VPP_MASK) {
+ case VPP_12V:
+ return 120;
+ case VPP_VCC:
+ return get_Vcc_value(val);
+ }
+
+ return 0;
+}
+
+static int cardu_get_socket(unsigned int sock, socket_state_t *state)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint8_t val;
+
+ val = exca_readb(socket, PWR_CNT);
+ state->Vcc = get_Vcc_value(val);
+ state->Vpp = get_Vpp_value(val);
+ state->flags = 0;
+ if (val & CARD_OUT_EN) state->flags |= SS_OUTPUT_ENA;
+
+ val = exca_readb(socket, INT_GEN_CNT);
+ if (!(val & CARD_REST0)) state->flags |= SS_RESET;
+ if (val & CARD_TYPE_IO) state->flags |= SS_IOCARD;
+
+ return 0;
+}
+
+static inline uint8_t set_Vcc_value(u_char Vcc)
+{
+ switch (Vcc) {
+ case 33:
+ return VCC_3V;
+ case 50:
+ return VCC_5V;
+ }
+
+ return VCC_0V;
+}
+
+static inline uint8_t set_Vpp_value(u_char Vpp)
+{
+ switch (Vpp) {
+ case 33:
+ case 50:
+ return VPP_VCC;
+ case 120:
+ return VPP_12V;
+ }
+
+ return VPP_0V;
+}
+
+static int cardu_set_socket(unsigned int sock, socket_state_t *state)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint8_t val;
+
+ if (((state->Vpp == 33) || (state->Vpp == 50)) && (state->Vpp != state->Vcc))
+ return -EINVAL;
+
+ val = set_Vcc_value(state->Vcc);
+ val |= set_Vpp_value(state->Vpp);
+ if (state->flags & SS_OUTPUT_ENA) val |= CARD_OUT_EN;
+ exca_writeb(socket, PWR_CNT, val);
+
+ val = exca_readb(socket, INT_GEN_CNT) & CARD_REST0;
+ if (state->flags & SS_RESET) val &= ~CARD_REST0;
+ else val |= CARD_REST0;
+ if (state->flags & SS_IOCARD) val |= CARD_TYPE_IO;
+ exca_writeb(socket, INT_GEN_CNT, val);
+
+ return 0;
+}
+
+static int cardu_get_io_map(unsigned int sock, struct pccard_io_map *io)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint8_t ioctl, window;
+ u_char map;
+
+ map = io->map;
+ if (map > 1)
+ return -EINVAL;
+
+ io->start = exca_readw(socket, IO_WIN_SA(map));
+ io->stop = exca_readw(socket, IO_WIN_EA(map));
+
+ ioctl = exca_readb(socket, IO_WIN_CNT);
+ window = exca_readb(socket, ADR_WIN_EN);
+ io->flags = (window & IO_WIN_EN(map)) ? MAP_ACTIVE : 0;
+ if (ioctl & IO_WIN_DATA_AUTOSZ(map))
+ io->flags |= MAP_AUTOSZ;
+ else if (ioctl & IO_WIN_DATA_16BIT(map))
+ io->flags |= MAP_16BIT;
+
+ return 0;
+}
+
+static int cardu_set_io_map(unsigned int sock, struct pccard_io_map *io)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint16_t ioctl;
+ uint8_t window, enable;
+ u_char map;
+
+ map = io->map;
+ if (map > 1)
+ return -EINVAL;
+
+ window = exca_readb(socket, ADR_WIN_EN);
+ enable = IO_WIN_EN(map);
+
+ if (window & enable) {
+ window &= ~enable;
+ exca_writeb(socket, ADR_WIN_EN, window);
+ }
+
+ exca_writew(socket, IO_WIN_SA(map), io->start);
+ exca_writew(socket, IO_WIN_EA(map), io->stop);
+
+ ioctl = exca_readb(socket, IO_WIN_CNT) & ~IO_WIN_CNT_MASK(map);
+ if (io->flags & MAP_AUTOSZ) ioctl |= IO_WIN_DATA_AUTOSZ(map);
+ else if (io->flags & MAP_16BIT) ioctl |= IO_WIN_DATA_16BIT(map);
+ exca_writeb(socket, IO_WIN_CNT, ioctl);
+
+ if (io->flags & MAP_ACTIVE)
+ exca_writeb(socket, ADR_WIN_EN, window | enable);
+
+ return 0;
+}
+
+static int cardu_get_mem_map(unsigned int sock, struct pccard_mem_map *mem)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint32_t start, stop, offset, page;
+ uint8_t window;
+ u_char map;
+
+ map = mem->map;
+ if (map > 4)
+ return -EINVAL;
+
+ window = exca_readb(socket, ADR_WIN_EN);
+ mem->flags = (window & MEM_WIN_EN(map)) ? MAP_ACTIVE : 0;
+
+ start = exca_readw(socket, MEM_WIN_SA(map));
+ mem->flags |= (start & MEM_WIN_DSIZE) ? MAP_16BIT : 0;
+ start = (start & 0x0fff) << 12;
+
+ stop = exca_readw(socket, MEM_WIN_EA(map));
+ stop = ((stop & 0x0fff) << 12) + 0x0fff;
+
+ offset = exca_readw(socket, MEM_WIN_OA(map));
+ mem->flags |= (offset & MEM_WIN_WP) ? MAP_WRPROT : 0;
+ mem->flags |= (offset & MEM_WIN_REGSET) ? MAP_ATTRIB : 0;
+ offset = ((offset & 0x3fff) << 12) + start;
+ mem->card_start = offset & 0x03ffffff;
+
+ page = exca_readb(socket, MEM_WIN_SAU(map)) << 24;
+ mem->sys_start = start + page;
+	mem->sys_stop = stop + page;
+
+ return 0;
+}
+
+static int cardu_set_mem_map(unsigned int sock, struct pccard_mem_map *mem)
+{
+ vrc4173_socket_t *socket = &cardu_sockets[sock];
+ uint16_t value;
+ uint8_t window, enable;
+ u_long sys_start, sys_stop, card_start;
+ u_char map;
+
+ map = mem->map;
+ sys_start = mem->sys_start;
+ sys_stop = mem->sys_stop;
+ card_start = mem->card_start;
+
+ if (map > 4 || sys_start > sys_stop || ((sys_start ^ sys_stop) >> 24) ||
+ (card_start >> 26))
+ return -EINVAL;
+
+ window = exca_readb(socket, ADR_WIN_EN);
+ enable = MEM_WIN_EN(map);
+ if (window & enable) {
+ window &= ~enable;
+ exca_writeb(socket, ADR_WIN_EN, window);
+ }
+
+ exca_writeb(socket, MEM_WIN_SAU(map), sys_start >> 24);
+
+ value = (sys_start >> 12) & 0x0fff;
+ if (mem->flags & MAP_16BIT) value |= MEM_WIN_DSIZE;
+ exca_writew(socket, MEM_WIN_SA(map), value);
+
+ value = (sys_stop >> 12) & 0x0fff;
+ exca_writew(socket, MEM_WIN_EA(map), value);
+
+ value = ((card_start - sys_start) >> 12) & 0x3fff;
+ if (mem->flags & MAP_WRPROT) value |= MEM_WIN_WP;
+ if (mem->flags & MAP_ATTRIB) value |= MEM_WIN_REGSET;
+ exca_writew(socket, MEM_WIN_OA(map), value);
+
+ if (mem->flags & MAP_ACTIVE)
+ exca_writeb(socket, ADR_WIN_EN, window | enable);
+
+ return 0;
+}
+
+static void cardu_proc_setup(unsigned int sock, struct proc_dir_entry *base)
+{
+}
+
+static struct pccard_operations cardu_operations = {
+ .init = cardu_init,
+ .suspend = cardu_suspend,
+ .register_callback = cardu_register_callback,
+ .inquire_socket = cardu_inquire_socket,
+ .get_status = cardu_get_status,
+ .get_socket = cardu_get_socket,
+ .set_socket = cardu_set_socket,
+ .get_io_map = cardu_get_io_map,
+ .set_io_map = cardu_set_io_map,
+ .get_mem_map = cardu_get_mem_map,
+ .set_mem_map = cardu_set_mem_map,
+ .proc_setup = cardu_proc_setup,
+};
+
+static void cardu_bh(void *data)
+{
+ vrc4173_socket_t *socket = (vrc4173_socket_t *)data;
+ uint16_t events;
+
+ spin_lock_irq(&socket->event_lock);
+ events = socket->events;
+ socket->events = 0;
+ spin_unlock_irq(&socket->event_lock);
+
+ if (socket->handler)
+ socket->handler(socket->info, events);
+}
+
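+/* Derive SS_* event flags from the CARDU interface-status and status-change registers. */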
+static uint16_t get_events(vrc4173_socket_t *socket)
+{
+ uint16_t events = 0;
+ uint8_t csc, status;
+
+ status = exca_readb(socket, IF_STATUS);
+ csc = exca_readb(socket, CARD_SC);
+ if ((csc & CARD_DT_CHG) &&
+ ((status & (CARD_DETECT1|CARD_DETECT2)) == (CARD_DETECT1|CARD_DETECT2)))
+ events |= SS_DETECT;
+
+ if ((csc & RDY_CHG) && (status & READY))
+ events |= SS_READY;
+
+ if (exca_readb(socket, INT_GEN_CNT) & CARD_TYPE_IO) {
+ if ((csc & BAT_DEAD_ST_CHG) && (status & STSCHG))
+ events |= SS_STSCHG;
+ } else {
+ if (csc & (BAT_WAR_CHG|BAT_DEAD_ST_CHG)) {
+ if ((status & BV_DETECT_MASK) != BV_DETECT_GOOD) {
+ if (status == BV_DETECT_WARN) events |= SS_BATWARN;
+ else events |= SS_BATDEAD;
+ }
+ }
+ }
+
+ return events;
+}
+
+static void cardu_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ vrc4173_socket_t *socket = (vrc4173_socket_t *)dev_id;
+ uint16_t events;
+
+ INIT_WORK(&socket->tq_work, cardu_bh, socket);
+
+ events = get_events(socket);
+ if (events) {
+ spin_lock(&socket->event_lock);
+ socket->events |= events;
+ spin_unlock(&socket->event_lock);
+ schedule_work(&socket->tq_work);
+ }
+}
+
+static int __devinit vrc4173_cardu_probe(struct pci_dev *dev,
+ const struct pci_device_id *ent)
+{
+ vrc4173_socket_t *socket;
+ unsigned long start, len, flags;
+ int slot, err;
+
+ slot = vrc4173_cardu_slots++;
+ socket = &cardu_sockets[slot];
+ if (socket->noprobe != 0)
+ return -EBUSY;
+
+ sprintf(socket->name, "NEC VRC4173 CARDU%1d", slot+1);
+
+ if ((err = pci_enable_device(dev)) < 0)
+ return err;
+
+ start = pci_resource_start(dev, 0);
+ if (start == 0)
+ return -ENODEV;
+
+ len = pci_resource_len(dev, 0);
+ if (len == 0)
+ return -ENODEV;
+
+ if (((flags = pci_resource_flags(dev, 0)) & IORESOURCE_MEM) == 0)
+ return -EBUSY;
+
+ if ((err = pci_request_regions(dev, socket->name)) < 0)
+ return err;
+
+ socket->base = ioremap(start, len);
+ if (socket->base == NULL)
+ return -ENODEV;
+
+ socket->dev = dev;
+
+ socket->pcmcia_socket = pcmcia_register_socket(slot, &cardu_operations, 1);
+ if (socket->pcmcia_socket == NULL) {
+ iounmap(socket->base);
+ socket->base = NULL;
+ return -ENOMEM;
+ }
+
+ if (request_irq(dev->irq, cardu_interrupt, SA_SHIRQ, socket->name, socket) < 0) {
+ pcmcia_unregister_socket(socket->pcmcia_socket);
+ socket->pcmcia_socket = NULL;
+ iounmap(socket->base);
+ socket->base = NULL;
+ return -EBUSY;
+ }
+
+ printk(KERN_INFO "%s at %#08lx, IRQ %d\n", socket->name, start, dev->irq);
+
+ return 0;
+}
+
+static int __devinit vrc4173_cardu_setup(char *options)
+{
+ if (options == NULL || *options == '\0')
+ return 0;
+
+ if (strncmp(options, "cardu1:", 7) == 0) {
+ options += 7;
+ if (*options != '\0') {
+ if (strncmp(options, "noprobe", 7) == 0) {
+ cardu_sockets[CARDU1].noprobe = 1;
+ options += 7;
+ }
+
+ if (*options != ',')
+ return 0;
+ } else
+ return 0;
+ }
+
+ if (strncmp(options, "cardu2:", 7) == 0) {
+ options += 7;
+ if ((*options != '\0') && (strncmp(options, "noprobe", 7) == 0))
+ cardu_sockets[CARDU2].noprobe = 1;
+ }
+
+ return 0;
+}
+
+__setup("vrc4173_cardu=", vrc4173_cardu_setup);
+
+static struct pci_device_id vrc4173_cardu_id_table[] __devinitdata = {
+ { .vendor = PCI_VENDOR_ID_NEC,
+ .device = PCI_DEVICE_ID_NEC_NAPCCARD,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID, },
+ {0, }
+};
+
+static struct pci_driver vrc4173_cardu_driver = {
+ .name = "NEC VRC4173 CARDU",
+ .probe = vrc4173_cardu_probe,
+ .id_table = vrc4173_cardu_id_table,
+};
+
+static int __devinit vrc4173_cardu_init(void)
+{
+ vrc4173_cardu_slots = 0;
+
+ return pci_module_init(&vrc4173_cardu_driver);
+}
+
+static void __devexit vrc4173_cardu_exit(void)
+{
+ pci_unregister_driver(&vrc4173_cardu_driver);
+}
+
+module_init(vrc4173_cardu_init);
+module_exit(vrc4173_cardu_exit);
LIST_HEAD(pnp_protocols);
LIST_HEAD(pnp_global);
-spinlock_t pnp_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(pnp_lock);
void *pnp_alloc(long size)
{
if (!acpi_bus_get_device(handle, &device))
pnpacpi_add_device(device);
+ else
+ return AE_CTRL_DEPTH;
return AE_OK;
}
+int pnpacpi_disabled __initdata;
int __init pnpacpi_init(void)
{
- if (acpi_disabled) {
- pnp_info("PnP ACPI: ACPI disable");
+ if (acpi_disabled || pnpacpi_disabled) {
+ pnp_info("PnP ACPI: disabled");
return 0;
}
pnp_info("PnP ACPI init");
pnp_register_protocol(&pnpacpi_protocol);
- acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
- ACPI_UINT32_MAX, pnpacpi_add_device_handler,
- NULL, NULL);
+ acpi_get_devices(NULL, pnpacpi_add_device_handler, NULL, NULL);
pnp_info("PnP ACPI: found %d devices", num);
return 0;
}
subsys_initcall(pnpacpi_init);
+static int __init pnpacpi_setup(char *str)
+{
+ if (str == NULL)
+ return 1;
+ if (!strncmp(str, "off", 3))
+ pnpacpi_disabled = 1;
+ return 1;
+}
+__setup("pnpacpi=", pnpacpi_setup);
+
EXPORT_SYMBOL(pnpacpi_protocol);
#include <linux/ctype.h>
#include <linux/pnp.h>
#include <linux/pnpbios.h>
+
+#ifdef CONFIG_PCI
#include <linux/pci.h>
+#else
+inline void pcibios_penalize_isa_irq(int irq) {}
+#endif /* CONFIG_PCI */
#include "pnpbios.h"
* Bugreports.to..: <Linux390@de.ibm.com>
* (C) IBM Corporation, IBM Deutschland Entwicklung GmbH, 1999,2000
*
- * $Revision: 1.8 $
+ * $Revision: 1.9 $
*/
#ifndef DASD_ECKD_H
#define DASD_ECKD_H
-/*******************************************************************************
+/*****************************************************************************
* SECTION: CCW Definitions
- ******************************************************************************/
+ ****************************************************************************/
#define DASD_ECKD_CCW_WRITE 0x05
#define DASD_ECKD_CCW_READ 0x06
#define DASD_ECKD_CCW_WRITE_HOME_ADDRESS 0x09
*/
#define PSF_ORDER_PRSSD 0x18
-/*******************************************************************************
+/*****************************************************************************
* SECTION: Type Definitions
- ******************************************************************************/
+ ****************************************************************************/
struct eckd_count {
__u16 cyl;
__u8 ga_extended; /* Global Attributes Extended */
struct ch_t beg_ext;
struct ch_t end_ext;
- unsigned long long ep_sys_time; /* Extended Parameter - System Time Stamp */
+ unsigned long long ep_sys_time; /* Ext Parameter - System Time Stamp */
__u8 ep_format; /* Extended Parameter format byte */
__u8 ep_prio; /* Extended Parameter priority I/O byte */
__u8 ep_reserved[6]; /* Extended Parameter Reserved */
fdata->stop_unit, fdata->blksize, fdata->intensity);
/* Since dasdfmt keeps the device open after it was disabled,
- * there still exists an inode for this device. We must update i_blkbits,
- * otherwise we might get errors when enabling the device later.
+ * there still exists an inode for this device.
+ * We must update i_blkbits, otherwise we might get errors when
+ * enabling the device later.
*/
if (fdata->start_unit == 0) {
struct block_device *bdev = bdget_disk(device->gdp, 0);
*
* /proc interface for the dasd driver.
*
- * $Revision: 1.27 $
+ * $Revision: 1.30 $
*/
#include <linux/config.h>
if (user_len > 65536)
user_len = 65536;
buffer = dasd_get_user_string(user_buf, user_len);
- MESSAGE(KERN_INFO, "/proc/dasd/statictics: '%s'", buffer);
+ if (IS_ERR(buffer))
+ return PTR_ERR(buffer);
+	MESSAGE_LOG(KERN_INFO, "/proc/dasd/statistics: '%s'", buffer);
/* check for valid verbs */
for (str = buffer; isspace(*str); str++);
if (strcmp(str, "on") == 0) {
/* switch on statistics profiling */
dasd_profile_level = DASD_PROFILE_ON;
- MESSAGE(KERN_INFO, "%s", "Statictics switched on");
+ MESSAGE(KERN_INFO, "%s", "Statistics switched on");
} else if (strcmp(str, "off") == 0) {
/* switch off and reset statistics profiling */
memset(&dasd_global_profile,
0, sizeof (struct dasd_profile_info_t));
dasd_profile_level = DASD_PROFILE_OFF;
- MESSAGE(KERN_INFO, "%s", "Statictics switched off");
+ MESSAGE(KERN_INFO, "%s", "Statistics switched off");
} else
goto out_error;
} else if (strncmp(str, "reset", 5) == 0) {
/* reset the statistics */
memset(&dasd_global_profile, 0,
sizeof (struct dasd_profile_info_t));
- MESSAGE(KERN_INFO, "%s", "Statictics reset");
+ MESSAGE(KERN_INFO, "%s", "Statistics reset");
} else
goto out_error;
kfree(buffer);
/* array of 3215 devices structures */
static struct raw3215_info *raw3215[NR_3215];
/* spinlock to protect the raw3215 array */
-static spinlock_t raw3215_device_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(raw3215_device_lock);
/* list of free request structures */
static struct raw3215_req *raw3215_freelist;
/* spinlock to protect free list */
con3270_set_timer(struct con3270 *cp, int expires)
{
if (expires == 0) {
- del_timer(&cp->timer);
+ if (timer_pending(&cp->timer))
+ del_timer(&cp->timer);
return;
}
- if (mod_timer(&cp->timer, jiffies + expires))
+ if (timer_pending(&cp->timer) &&
+ mod_timer(&cp->timer, jiffies + expires))
return;
cp->timer.function = (void (*)(unsigned long)) con3270_update;
cp->timer.data = (unsigned long) cp;
sclp_cmdw_t command; /* sclp command to execute */
void *sccb; /* pointer to the sccb to execute */
char status; /* status of this request */
+ int start_count; /* number of SVCs done for this req */
/* Callback that is called after reaching final status. */
void (*callback)(struct sclp_req *, void *data);
void *callback_data;
};
/* externals from sclp.c */
-void sclp_add_request(struct sclp_req *req);
+int sclp_add_request(struct sclp_req *req);
void sclp_sync_wait(void);
int sclp_register(struct sclp_register *reg);
void sclp_unregister(struct sclp_register *reg);
-char *sclp_error_message(u16 response_code);
int sclp_remove_processed(struct sccb_header *sccb);
+int sclp_deactivate(void);
+int sclp_reactivate(void);
/* useful inlines */
sclp_conbuf_callback(struct sclp_buffer *buffer, int rc)
{
unsigned long flags;
- struct sclp_buffer *next;
void *page;
- /* Ignore return code - because console-writes aren't critical,
- we do without a sophisticated error recovery mechanism. */
- page = sclp_unmake_buffer(buffer);
- spin_lock_irqsave(&sclp_con_lock, flags);
- /* Remove buffer from outqueue */
- list_del(&buffer->list);
- sclp_con_buffer_count--;
- list_add_tail((struct list_head *) page, &sclp_con_pages);
- /* Check if there is a pending buffer on the out queue. */
- next = NULL;
- if (!list_empty(&sclp_con_outqueue))
- next = list_entry(sclp_con_outqueue.next,
- struct sclp_buffer, list);
- spin_unlock_irqrestore(&sclp_con_lock, flags);
- if (next != NULL)
- sclp_emit_buffer(next, sclp_conbuf_callback);
+ do {
+ page = sclp_unmake_buffer(buffer);
+ spin_lock_irqsave(&sclp_con_lock, flags);
+ /* Remove buffer from outqueue */
+ list_del(&buffer->list);
+ sclp_con_buffer_count--;
+ list_add_tail((struct list_head *) page, &sclp_con_pages);
+ /* Check if there is a pending buffer on the out queue. */
+ buffer = NULL;
+ if (!list_empty(&sclp_con_outqueue))
+ buffer = list_entry(sclp_con_outqueue.next,
+ struct sclp_buffer, list);
+ spin_unlock_irqrestore(&sclp_con_lock, flags);
+ } while (buffer && sclp_emit_buffer(buffer, sclp_conbuf_callback));
}
static inline void
struct sclp_buffer* buffer;
unsigned long flags;
int count;
+ int rc;
spin_lock_irqsave(&sclp_con_lock, flags);
buffer = sclp_conbuf;
list_add_tail(&buffer->list, &sclp_con_outqueue);
count = sclp_con_buffer_count++;
spin_unlock_irqrestore(&sclp_con_lock, flags);
- if (count == 0)
- sclp_emit_buffer(buffer, sclp_conbuf_callback);
+ if (count)
+ return;
+ rc = sclp_emit_buffer(buffer, sclp_conbuf_callback);
+ if (rc)
+ sclp_conbuf_callback(buffer, rc);
}
/*
rc = sclp_register(&sclp_cpi_event);
if (rc) {
/* could not register sclp event. Die. */
- printk("cpi: could not register to hardware console.\n");
+ printk(KERN_WARNING "cpi: could not register to hardware "
+ "console.\n");
return -EINVAL;
}
if (!(sclp_cpi_event.sclp_send_mask & EvTyp_CtlProgIdent_Mask)) {
- printk("cpi: no control program identification support\n");
+ printk(KERN_WARNING "cpi: no control program identification "
+ "support\n");
sclp_unregister(&sclp_cpi_event);
return -ENOTSUPP;
}
req = cpi_prepare_req();
if (IS_ERR(req)) {
- printk("cpi: couldn't allocate request\n");
+ printk(KERN_WARNING "cpi: couldn't allocate request\n");
sclp_unregister(&sclp_cpi_event);
return PTR_ERR(req);
}
sema_init(&sem, 0);
req->callback_data = &sem;
/* Add request to sclp queue */
- sclp_add_request(req);
+ rc = sclp_add_request(req);
+ if (rc) {
+ printk(KERN_WARNING "cpi: could not start request\n");
+ cpi_free_req(req);
+ sclp_unregister(&sclp_cpi_event);
+ return rc;
+ }
/* make "insmod" sleep until callback arrives */
down(&sem);
rc = ((struct cpi_sccb *) req->sccb)->header.response_code;
if (rc != 0x0020) {
- printk("cpi: failed with response code 0x%x\n", rc);
+ printk(KERN_WARNING "cpi: failed with response code 0x%x\n",
+ rc);
rc = -ECOMM;
} else
rc = 0;
--- /dev/null
+/*
+ * drivers/s390/char/sclp_quiesce.c
+ * signal quiesce handler
+ *
+ * (C) Copyright IBM Corp. 1999,2004
+ * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ * Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/cpumask.h>
+#include <linux/smp.h>
+#include <linux/init.h>
+#include <asm/atomic.h>
+#include <asm/ptrace.h>
+#include <asm/sigp.h>
+
+#include "sclp.h"
+
+
+#ifdef CONFIG_SMP
+/* Signal completion of shutdown process. All CPUs except the first to enter
+ * this function: go to stopped state. First CPU: wait until all other
+ * CPUs are in stopped or check stop state. Afterwards, load special PSW
+ * to indicate completion. */
+static void
+do_load_quiesce_psw(void * __unused)
+{
+ static atomic_t cpuid = ATOMIC_INIT(-1);
+ psw_t quiesce_psw;
+ __u32 status;
+ int i;
+
+ if (atomic_compare_and_swap(-1, smp_processor_id(), &cpuid))
+ signal_processor(smp_processor_id(), sigp_stop);
+ /* Wait for all other cpus to enter stopped state */
+ i = 1;
+ while (i < NR_CPUS) {
+ if (!cpu_online(i)) {
+ i++;
+ continue;
+ }
+ switch (signal_processor_ps(&status, 0, i, sigp_sense)) {
+ case sigp_order_code_accepted:
+ case sigp_status_stored:
+ /* Check for stopped and check stop state */
+ if (status & 0x50)
+ i++;
+ break;
+ case sigp_busy:
+ break;
+ case sigp_not_operational:
+ i++;
+ break;
+ }
+ }
+ /* Quiesce the last cpu with the special psw */
+ quiesce_psw.mask = PSW_BASE_BITS | PSW_MASK_WAIT;
+ quiesce_psw.addr = 0xfff;
+ __load_psw(quiesce_psw);
+}
+
+/* Shutdown handler. Perform shutdown function on all CPUs. */
+static void
+do_machine_quiesce(void)
+{
+ on_each_cpu(do_load_quiesce_psw, NULL, 0, 0);
+}
+#else
+/* Shutdown handler. Signal completion of shutdown by loading special PSW. */
+static void
+do_machine_quiesce(void)
+{
+ psw_t quiesce_psw;
+
+ quiesce_psw.mask = PSW_BASE_BITS | PSW_MASK_WAIT;
+ quiesce_psw.addr = 0xfff;
+ __load_psw(quiesce_psw);
+}
+#endif
+
+extern void ctrl_alt_del(void);
+
+/* Handler for quiesce event. Start shutdown procedure. */
+static void
+sclp_quiesce_handler(struct evbuf_header *evbuf)
+{
+ _machine_restart = (void *) do_machine_quiesce;
+ _machine_halt = do_machine_quiesce;
+ _machine_power_off = do_machine_quiesce;
+ ctrl_alt_del();
+}
+
+static struct sclp_register sclp_quiesce_event = {
+ .receive_mask = EvTyp_SigQuiesce_Mask,
+ .receiver_fn = sclp_quiesce_handler
+};
+
+/* Initialize quiesce driver. */
+static int __init
+sclp_quiesce_init(void)
+{
+ int rc;
+
+ rc = sclp_register(&sclp_quiesce_event);
+ if (rc)
+ printk(KERN_WARNING "sclp: could not register quiesce handler "
+ "(rc=%d)\n", rc);
+ return rc;
+}
+
+module_init(sclp_quiesce_init);
buffer = ((struct sclp_buffer *) ((addr_t) sccb + PAGE_SIZE)) - 1;
buffer->sccb = sccb;
buffer->retry_count = 0;
- init_timer(&buffer->retry_timer);
buffer->mto_number = 0;
buffer->mto_char_sum = 0;
buffer->current_line = NULL;
return rc;
}
-static void
-sclp_buffer_retry(unsigned long data)
-{
- struct sclp_buffer *buffer = (struct sclp_buffer *) data;
- buffer->request.status = SCLP_REQ_FILLED;
- buffer->sccb->header.response_code = 0x0000;
- sclp_add_request(&buffer->request);
-}
-
-#define SCLP_BUFFER_MAX_RETRY 5
-#define SCLP_BUFFER_RETRY_INTERVAL 2
+#define SCLP_BUFFER_MAX_RETRY 1
/*
* second half of Write Event Data-function that has to be done after
break;
case 0x0340: /* Contained SCLP equipment check */
- if (buffer->retry_count++ > SCLP_BUFFER_MAX_RETRY) {
+ if (++buffer->retry_count > SCLP_BUFFER_MAX_RETRY) {
rc = -EIO;
break;
}
/* not all buffers were processed */
sccb->header.response_code = 0x0000;
buffer->request.status = SCLP_REQ_FILLED;
- sclp_add_request(request);
- return;
- }
- rc = 0;
+ rc = sclp_add_request(request);
+ if (rc == 0)
+ return;
+ } else
+ rc = 0;
break;
case 0x0040: /* SCLP equipment check */
case 0x05f0: /* Target resource in improper state */
- if (buffer->retry_count++ > SCLP_BUFFER_MAX_RETRY) {
+ if (++buffer->retry_count > SCLP_BUFFER_MAX_RETRY) {
rc = -EIO;
break;
}
- /* wait some time, then retry request */
- buffer->retry_timer.function = sclp_buffer_retry;
- buffer->retry_timer.data = (unsigned long) buffer;
- buffer->retry_timer.expires = jiffies +
- SCLP_BUFFER_RETRY_INTERVAL*HZ;
- add_timer(&buffer->retry_timer);
- return;
-
+ /* retry request */
+ sccb->header.response_code = 0x0000;
+ buffer->request.status = SCLP_REQ_FILLED;
+ rc = sclp_add_request(request);
+ if (rc == 0)
+ return;
+ break;
default:
if (sccb->header.response_code == 0x71f0)
rc = -ENOMEM;
/*
* Setup the request structure in the struct sclp_buffer to do SCLP Write
- * Event Data and pass the request to the core SCLP loop.
+ * Event Data and pass the request to the core SCLP loop. Return zero on
+ * success, non-zero otherwise.
*/
-void
+int
sclp_emit_buffer(struct sclp_buffer *buffer,
void (*callback)(struct sclp_buffer *, int))
{
sclp_finalize_mto(buffer);
/* Are there messages in the output buffer ? */
- if (buffer->mto_number == 0) {
- if (callback != NULL)
- callback(buffer, 0);
- return;
- }
+ if (buffer->mto_number == 0)
+ return -EIO;
sccb = buffer->sccb;
if (sclp_rw_event.sclp_send_mask & EvTyp_Msg_Mask)
else if (sclp_rw_event.sclp_send_mask & EvTyp_PMsgCmd_Mask)
/* Use write priority message */
sccb->msg_buf.header.type = EvTyp_PMsgCmd;
- else {
- if (callback != NULL)
- callback(buffer, -ENOSYS);
- return;
- }
+ else
+ return -ENOSYS;
buffer->request.command = SCLP_CMDW_WRITEDATA;
buffer->request.status = SCLP_REQ_FILLED;
buffer->request.callback = sclp_writedata_callback;
buffer->request.callback_data = buffer;
buffer->request.sccb = sccb;
buffer->callback = callback;
- sclp_add_request(&buffer->request);
+ return sclp_add_request(&buffer->request);
}
#define __SCLP_RW_H__
#include <linux/list.h>
-#include <linux/timer.h>
struct mto {
u16 length;
char *current_line;
int current_length;
int retry_count;
- struct timer_list retry_timer;
/* output format settings */
unsigned short columns;
unsigned short htab;
void *sclp_unmake_buffer(struct sclp_buffer *);
int sclp_buffer_space(struct sclp_buffer *);
int sclp_write(struct sclp_buffer *buffer, const unsigned char *, int);
-void sclp_emit_buffer(struct sclp_buffer *,void (*)(struct sclp_buffer *,int));
+int sclp_emit_buffer(struct sclp_buffer *,void (*)(struct sclp_buffer *,int));
void sclp_set_columns(struct sclp_buffer *, unsigned short);
void sclp_set_htab(struct sclp_buffer *, unsigned short);
int sclp_chars_in_buffer(struct sclp_buffer *);
sclp_ttybuf_callback(struct sclp_buffer *buffer, int rc)
{
unsigned long flags;
- struct sclp_buffer *next;
void *page;
- /* Ignore return code - because tty-writes aren't critical,
- we do without a sophisticated error recovery mechanism. */
- page = sclp_unmake_buffer(buffer);
- spin_lock_irqsave(&sclp_tty_lock, flags);
- /* Remove buffer from outqueue */
- list_del(&buffer->list);
- sclp_tty_buffer_count--;
- list_add_tail((struct list_head *) page, &sclp_tty_pages);
- /* Check if there is a pending buffer on the out queue. */
- next = NULL;
- if (!list_empty(&sclp_tty_outqueue))
- next = list_entry(sclp_tty_outqueue.next,
- struct sclp_buffer, list);
- spin_unlock_irqrestore(&sclp_tty_lock, flags);
- if (next != NULL)
- sclp_emit_buffer(next, sclp_ttybuf_callback);
+ do {
+ page = sclp_unmake_buffer(buffer);
+ spin_lock_irqsave(&sclp_tty_lock, flags);
+ /* Remove buffer from outqueue */
+ list_del(&buffer->list);
+ sclp_tty_buffer_count--;
+ list_add_tail((struct list_head *) page, &sclp_tty_pages);
+ /* Check if there is a pending buffer on the out queue. */
+ buffer = NULL;
+ if (!list_empty(&sclp_tty_outqueue))
+ buffer = list_entry(sclp_tty_outqueue.next,
+ struct sclp_buffer, list);
+ spin_unlock_irqrestore(&sclp_tty_lock, flags);
+ } while (buffer && sclp_emit_buffer(buffer, sclp_ttybuf_callback));
wake_up(&sclp_tty_waitq);
/* check if the tty needs a wake up call */
if (sclp_tty != NULL) {
{
unsigned long flags;
int count;
+ int rc;
spin_lock_irqsave(&sclp_tty_lock, flags);
list_add_tail(&buffer->list, &sclp_tty_outqueue);
count = sclp_tty_buffer_count++;
spin_unlock_irqrestore(&sclp_tty_lock, flags);
-
- if (count == 0)
- sclp_emit_buffer(buffer, sclp_ttybuf_callback);
+ if (count)
+ return;
+ rc = sclp_emit_buffer(buffer, sclp_ttybuf_callback);
+ if (rc)
+ sclp_ttybuf_callback(buffer, rc);
}
/*
struct list_head list;
struct sclp_req sclp_req;
int retry_count;
- struct timer_list retry_timer;
};
/* VT220 SCCB */
static int sclp_vt220_flush_later;
static void sclp_vt220_receiver_fn(struct evbuf_header *evbuf);
-static void __sclp_vt220_emit(struct sclp_vt220_request *request);
+static int __sclp_vt220_emit(struct sclp_vt220_request *request);
static void sclp_vt220_emit_current(void);
/* Registration structure for our interest in SCLP event buffers */
sclp_vt220_process_queue(struct sclp_vt220_request *request)
{
unsigned long flags;
- struct sclp_vt220_request *next;
void *page;
- /* Put buffer back to list of empty buffers */
- page = request->sclp_req.sccb;
- spin_lock_irqsave(&sclp_vt220_lock, flags);
- /* Move request from outqueue to empty queue */
- list_del(&request->list);
- sclp_vt220_outqueue_count--;
- list_add_tail((struct list_head *) page, &sclp_vt220_empty);
- /* Check if there is a pending buffer on the out queue. */
- next = NULL;
- if (!list_empty(&sclp_vt220_outqueue))
- next = list_entry(sclp_vt220_outqueue.next,
- struct sclp_vt220_request, list);
- spin_unlock_irqrestore(&sclp_vt220_lock, flags);
- if (next != NULL)
- __sclp_vt220_emit(next);
- else if (sclp_vt220_flush_later)
+ do {
+ /* Put buffer back to list of empty buffers */
+ page = request->sclp_req.sccb;
+ spin_lock_irqsave(&sclp_vt220_lock, flags);
+ /* Move request from outqueue to empty queue */
+ list_del(&request->list);
+ sclp_vt220_outqueue_count--;
+ list_add_tail((struct list_head *) page, &sclp_vt220_empty);
+ /* Check if there is a pending buffer on the out queue. */
+ request = NULL;
+ if (!list_empty(&sclp_vt220_outqueue))
+ request = list_entry(sclp_vt220_outqueue.next,
+ struct sclp_vt220_request, list);
+ spin_unlock_irqrestore(&sclp_vt220_lock, flags);
+ } while (request && __sclp_vt220_emit(request));
+ if (request == NULL && sclp_vt220_flush_later)
sclp_vt220_emit_current();
wake_up(&sclp_vt220_waitq);
/* Check if the tty needs a wake up call */
}
}
-/*
- * Retry sclp write request after waiting some time for an sclp equipment
- * check to pass.
- */
-static void
-sclp_vt220_retry(unsigned long data)
-{
- struct sclp_vt220_request *request;
- struct sclp_vt220_sccb *sccb;
-
- request = (struct sclp_vt220_request *) data;
- request->sclp_req.status = SCLP_REQ_FILLED;
- sccb = (struct sclp_vt220_sccb *) request->sclp_req.sccb;
- sccb->header.response_code = 0x0000;
- sclp_add_request(&request->sclp_req);
-}
-
-#define SCLP_BUFFER_MAX_RETRY 5
-#define SCLP_BUFFER_RETRY_INTERVAL 2
+#define SCLP_BUFFER_MAX_RETRY 1
/*
* Callback through which the result of a write request is reported by the
break;
case 0x0340: /* Contained SCLP equipment check */
- if (vt220_request->retry_count++ > SCLP_BUFFER_MAX_RETRY)
+ if (++vt220_request->retry_count > SCLP_BUFFER_MAX_RETRY)
break;
/* Remove processed buffers and requeue rest */
if (sclp_remove_processed((struct sccb_header *) sccb) > 0) {
/* Not all buffers were processed */
sccb->header.response_code = 0x0000;
vt220_request->sclp_req.status = SCLP_REQ_FILLED;
- sclp_add_request(request);
- return;
+ if (sclp_add_request(request) == 0)
+ return;
}
break;
case 0x0040: /* SCLP equipment check */
- if (vt220_request->retry_count++ > SCLP_BUFFER_MAX_RETRY)
+ if (++vt220_request->retry_count > SCLP_BUFFER_MAX_RETRY)
break;
- /* Wait some time, then retry request */
- vt220_request->retry_timer.function = sclp_vt220_retry;
- vt220_request->retry_timer.data =
- (unsigned long) vt220_request;
- vt220_request->retry_timer.expires =
- jiffies + SCLP_BUFFER_RETRY_INTERVAL*HZ;
- add_timer(&vt220_request->retry_timer);
- return;
+ sccb->header.response_code = 0x0000;
+ vt220_request->sclp_req.status = SCLP_REQ_FILLED;
+ if (sclp_add_request(request) == 0)
+ return;
+ break;
default:
break;
}
/*
- * Emit vt220 request buffer to SCLP.
+ * Emit vt220 request buffer to SCLP. Return zero on success, non-zero
+ * otherwise.
*/
-static void
+static int
__sclp_vt220_emit(struct sclp_vt220_request *request)
{
if (!(sclp_vt220_register.sclp_send_mask & EvTyp_VT220Msg_Mask)) {
request->sclp_req.status = SCLP_REQ_FAILED;
- sclp_vt220_callback(&request->sclp_req, (void *) request);
- return;
+ return -EIO;
}
request->sclp_req.command = SCLP_CMDW_WRITEDATA;
request->sclp_req.status = SCLP_REQ_FILLED;
request->sclp_req.callback = sclp_vt220_callback;
request->sclp_req.callback_data = (void *) request;
- sclp_add_request(&request->sclp_req);
+ return sclp_add_request(&request->sclp_req);
}
/*
spin_unlock_irqrestore(&sclp_vt220_lock, flags);
/* Emit only the first buffer immediately - callback takes care of
* the rest */
- if (count == 0)
- __sclp_vt220_emit(request);
+ if (count == 0 && __sclp_vt220_emit(request))
+ sclp_vt220_process_queue(request);
}
/*
- * Queue and emit current request.
+ * Queue and emit current request. Return zero on success, non-zero otherwise.
*/
static void
sclp_vt220_emit_current(void)
/* Place request structure at end of page */
request = ((struct sclp_vt220_request *)
((addr_t) page + PAGE_SIZE)) - 1;
- init_timer(&request->retry_timer);
request->retry_count = 0;
request->sclp_req.sccb = page;
/* SCCB goes at start of page */
* The list is protected by the rwlock
*/
static struct list_head tape_device_list = LIST_HEAD_INIT(tape_device_list);
-static rwlock_t tape_device_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(tape_device_lock);
/*
* Pointer to debug area.
/* Let the discipline have a go at the device. */
device->discipline = discipline;
+ if (!try_module_get(discipline->owner)) {
+ PRINT_ERR("Cannot get module. Module gone.\n");
+ return -EINVAL;
+ }
+
rc = discipline->setup_device(device);
if (rc)
goto out;
out_minor:
tape_remove_minor(device);
out:
+ module_put(discipline->owner);
return rc;
}
tapeblock_cleanup_device(device);
tapechar_cleanup_device(device);
device->discipline->cleanup_device(device);
+ module_put(device->discipline->owner);
tape_remove_minor(device);
tape_med_state_set(device, MS_UNKNOWN);
}
#ifdef DBF_LIKE_HELL
debug_set_level(TAPE_DBF_AREA, 6);
#endif
- DBF_EVENT(3, "tape init: ($Revision: 1.50 $)\n");
+ DBF_EVENT(3, "tape init: ($Revision: 1.51 $)\n");
tape_proc_init();
tapechar_init ();
tapeblock_init ();
MODULE_AUTHOR("(C) 2001 IBM Deutschland Entwicklung GmbH by Carsten Otte and "
"Michael Holzheu (cotte@de.ibm.com,holzheu@de.ibm.com)");
MODULE_DESCRIPTION("Linux on zSeries channel attached "
- "tape device driver ($Revision: 1.50 $)");
+ "tape device driver ($Revision: 1.51 $)");
MODULE_LICENSE("GPL");
module_init(tape_init);
goto not_connected;
}
- return 0;
+ return nonseekable_open(inode, filp);
not_connected:
iucv_unregister_program(logptr->iucv_handle);
extern int cio_resume (struct subchannel *);
extern int cio_halt (struct subchannel *);
extern int cio_start (struct subchannel *, struct ccw1 *, __u8);
+extern int cio_start_key (struct subchannel *, struct ccw1 *, __u8, __u8);
extern int cio_cancel (struct subchannel *);
extern int cio_set_options (struct subchannel *, int);
extern int cio_get_options (struct subchannel *);
static iucv_handle_t smsg_handle;
static unsigned short smsg_pathid;
-static spinlock_t smsg_list_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(smsg_list_lock);
static struct list_head smsg_list = LIST_HEAD_INIT(smsg_list);
static void
static int jsfd_init(void)
{
- static spinlock_t lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(lock);
struct jsflash *jsf;
struct jsfd_part *jdp;
int err;
/* External functions */
-extern void esp_cmd(struct NCR_ESP *esp, struct ESP_regs *eregs, unchar cmd);
+extern void esp_bootup_reset(struct NCR_ESP *esp, struct ESP_regs *eregs);
extern struct NCR_ESP *esp_allocate(Scsi_Host_Template *, void *);
extern void esp_deallocate(struct NCR_ESP *);
extern void esp_release(void);
extern irqreturn_t esp_intr(int, void *, struct pt_regs *);
extern const char *esp_info(struct Scsi_Host *);
extern int esp_queue(Scsi_Cmnd *, void (*done)(Scsi_Cmnd *));
-extern int esp_command(Scsi_Cmnd *);
extern int esp_abort(Scsi_Cmnd *);
extern int esp_reset(Scsi_Cmnd *);
extern int esp_proc_info(struct Scsi_Host *shost, char *buffer, char **start, off_t offset, int length,
static void ahci_host_stop(struct ata_host_set *host_set);
static void ahci_qc_prep(struct ata_queued_cmd *qc);
static u8 ahci_check_status(struct ata_port *ap);
+static u8 ahci_check_err(struct ata_port *ap);
static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc);
static Scsi_Host_Template ahci_sht = {
.port_disable = ata_port_disable,
.check_status = ahci_check_status,
+ .check_altstatus = ahci_check_status,
+ .check_err = ahci_check_err,
.dev_select = ata_noop_dev_select,
.phy_reset = ahci_phy_reset,
static struct pci_device_id ahci_pci_tbl[] = {
{ PCI_VENDOR_ID_INTEL, 0x2652, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- board_ahci },
+ board_ahci }, /* ICH6 */
{ PCI_VENDOR_ID_INTEL, 0x2653, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- board_ahci },
+ board_ahci }, /* ICH6M */
+ { PCI_VENDOR_ID_INTEL, 0x27c1, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7 */
+ { PCI_VENDOR_ID_INTEL, 0x27c5, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7M */
+ { PCI_VENDOR_ID_INTEL, 0x27c2, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7R */
+ { PCI_VENDOR_ID_INTEL, 0x27c3, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7R */
+ { PCI_VENDOR_ID_AL, 0x5288, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ULi M5288 */
{ } /* terminate list */
};
return readl(mmio + PORT_TFDATA) & 0xFF;
}
+static u8 ahci_check_err(struct ata_port *ap)
+{
+ void *mmio = (void *) ap->ioaddr.cmd_addr;
+
+ return (readl(mmio + PORT_TFDATA) >> 8) & 0xFF;
+}
+
static void ahci_fill_sg(struct ata_queued_cmd *qc)
{
struct ahci_port_priv *pp = qc->ap->private_data;
ahci_fill_sg(qc);
}
-static inline void ahci_dma_complete (struct ata_port *ap,
- struct ata_queued_cmd *qc,
- int have_err)
-{
- /* get drive status; clear intr; complete txn */
- ata_qc_complete(ata_qc_from_tag(ap, ap->active_tag),
- have_err ? ATA_ERR : 0);
-}
-
static void ahci_intr_error(struct ata_port *ap, u32 irq_stat)
{
void *mmio = ap->host_set->mmio_base;
unsigned long base;
void *mmio_base;
unsigned int board_idx = (unsigned int) ent->driver_data;
+ int pci_dev_busy = 0;
int rc;
VPRINTK("ENTER\n");
return rc;
rc = pci_request_regions(pdev, DRV_NAME);
- if (rc)
+ if (rc) {
+ pci_dev_busy = 1;
goto err_out;
+ }
pci_enable_intx(pdev);
err_out_regions:
pci_release_regions(pdev);
err_out:
- pci_disable_device(pdev);
+ if (!pci_dev_busy)
+ pci_disable_device(pdev);
return rc;
}
ahc->chip |= AHC_PCI;
ahc->description = entry->name;
- ahc_power_state_change(ahc, AHC_POWER_STATE_D0);
+ pci_set_power_state(ahc->dev_softc, AHC_POWER_STATE_D0);
error = ahc_pci_map_registers(ahc);
if (error != 0)
ahc_pci_resume(struct ahc_softc *ahc)
{
- ahc_power_state_change(ahc, AHC_POWER_STATE_D0);
+ pci_set_power_state(ahc->dev_softc, AHC_POWER_STATE_D0);
/*
* We assume that the OS has restored our register
-------------------
begin : Thu Sep 7 2000
copyright : (C) 2001 by Adaptec
- email : deanna_bonds@adaptec.com
See Documentation/scsi/dpti.txt for history, notes, license info
and credits
--- /dev/null
+
+
+#ifndef IRQ_HANDLED
+typedef void irqreturn_t;
+#define IRQ_NONE
+#define IRQ_HANDLED
+#endif
+
+#ifndef MODULE_LICENSE
+#define MODULE_LICENSE(x)
+#endif
+
+#ifndef SERVICE_ACTION_IN
+#define SERVICE_ACTION_IN 0x9e
+#endif
+#ifndef READ_16
+#define READ_16 0x88
+#endif
+#ifndef WRITE_16
+#define WRITE_16 0x8a
+#endif
union viosrp_iu iu;
void (*cmnd_done) (struct scsi_cmnd *);
struct completion comp;
+ union viosrp_iu *sync_srp;
};
/* a pool of event structs for use */
static unsigned int ipr_log_level = IPR_DEFAULT_LOG_LEVEL;
static unsigned int ipr_max_speed = 1;
static int ipr_testmode = 0;
-static spinlock_t ipr_driver_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ipr_driver_lock);
/* This table describes the differences between DMA controller chips */
static const struct ipr_chip_cfg_t ipr_chip_cfg[] = {
"9041: Array protection temporarily suspended"},
{0x066B0200, 0, 1,
"9030: Array no longer protected due to missing or failed disk unit"},
+ {0x066B8200, 0, 1,
+ "9042: Corrupt array parity detected on specified device"},
{0x07270000, 0, 0,
"Failure due to other device"},
{0x07278000, 0, 1,
return -EIO;
}
- if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
&ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
return -EIO;
int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
if (pcix_cmd_reg) {
- if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg + PCI_X_CMD,
ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
dev_err(&ioa_cfg->pdev->dev, "Failed to setup PCI-X command register\n");
return -EIO;
#endif
/**
- * ipr_store_queue_depth - Change the device's queue depth
- * @dev: device struct
- * @buf: buffer
+ * ipr_change_queue_depth - Change the device's queue depth
+ * @sdev: scsi device struct
+ * @qdepth: depth to set
*
* Return value:
- * number of bytes printed to buffer
+ * actual depth set
**/
-static ssize_t ipr_store_queue_depth(struct device *dev,
- const char *buf, size_t count)
+static int ipr_change_queue_depth(struct scsi_device *sdev, int qdepth)
{
- struct scsi_device *sdev = to_scsi_device(dev);
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
struct ipr_resource_entry *res;
- int qdepth = simple_strtoul(buf, NULL, 10);
int tagged = 0;
unsigned long lock_flags = 0;
- ssize_t len = -ENXIO;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
res = (struct ipr_resource_entry *)sdev->hostdata;
if (ipr_is_gscsi(res) && res->tcq_active)
tagged = MSG_ORDERED_TAG;
-
- len = strlen(buf);
}
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
scsi_adjust_queue_depth(sdev, tagged, qdepth);
- return len;
+ return qdepth;
}
-static struct device_attribute ipr_queue_depth_attr = {
- .attr = {
- .name = "queue_depth",
- .mode = S_IRUSR | S_IWUSR,
- },
- .store = ipr_store_queue_depth
-};
-
/**
* ipr_show_tcq_enable - Show if the device is enabled for tcqing
* @dev: device struct
};
static struct device_attribute *ipr_dev_attrs[] = {
- &ipr_queue_depth_attr,
&ipr_tcqing_attr,
&ipr_adapter_handle_attr,
NULL,
ioarcb->cmd_pkt.flags_lo |= ipr_get_task_attributes(scsi_cmd);
}
- if (!ipr_is_gscsi(res) && scsi_cmd->cmnd[0] >= 0xC0)
+ if (scsi_cmd->cmnd[0] >= 0xC0 &&
+ (!ipr_is_gscsi(res) || scsi_cmd->cmnd[0] == IPR_QUERY_RSRC_STATE))
ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
if (ipr_is_ioa_resource(res) && scsi_cmd->cmnd[0] == MODE_SELECT)
.slave_alloc = ipr_slave_alloc,
.slave_configure = ipr_slave_configure,
.slave_destroy = ipr_slave_destroy,
+ .change_queue_depth = ipr_change_queue_depth,
.bios_param = ipr_biosparam,
.can_queue = IPR_MAX_COMMANDS,
.this_id = -1,
#include <linux/kref.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
-#ifdef CONFIG_KDB
-#include <linux/kdb.h>
-#endif
/*
* Literals
*/
-#define IPR_DRIVER_VERSION "2.0.11"
-#define IPR_DRIVER_DATE "(August 3, 2004)"
+#define IPR_DRIVER_VERSION "2.0.12"
+#define IPR_DRIVER_DATE "(December 14, 2004)"
/*
* IPR_DBG_TRACE: Setting this to 1 will turn on some general function tracing
/*
* Adapter Commands
*/
+#define IPR_QUERY_RSRC_STATE 0xC2
#define IPR_RESET_DEVICE 0xC3
#define IPR_RESET_TYPE_SELECT 0x80
#define IPR_LUN_RESET 0x40
};
struct ipr_software_inq_lid_info {
- u32 load_id;
- u32 timestamp[3];
+ u32 load_id;
+ u32 timestamp[3];
}__attribute__((packed, aligned (4)));
struct ipr_ucode_image_header {
- u32 header_length;
- u32 lid_table_offset;
- u8 major_release;
- u8 card_type;
- u8 minor_release[2];
- u8 reserved[20];
- char eyecatcher[16];
- u32 num_lids;
- struct ipr_software_inq_lid_info lid[1];
+ u32 header_length;
+ u32 lid_table_offset;
+ u8 major_release;
+ u8 card_type;
+ u8 minor_release[2];
+ u8 reserved[20];
+ char eyecatcher[16];
+ u32 num_lids;
+ struct ipr_software_inq_lid_info lid[1];
}__attribute__((packed, aligned (4)));
/*
#define IPR_DBG_CMD(CMD)
#endif
-#define ipr_breakpoint_data KERN_ERR IPR_NAME\
-": %s: %s: Line: %d ioa_cfg: %p\n", __FILE__, \
-__FUNCTION__, __LINE__, ioa_cfg
-
-#if defined(CONFIG_KDB) && !defined(CONFIG_PPC_ISERIES)
-#define ipr_breakpoint {printk(ipr_breakpoint_data); KDB_ENTER();}
-#define ipr_breakpoint_or_die {printk(ipr_breakpoint_data); KDB_ENTER();}
-#else
-#define ipr_breakpoint
-#define ipr_breakpoint_or_die panic(ipr_breakpoint_data)
-#endif
-
#ifdef CONFIG_SCSI_IPR_TRACE
#define ipr_create_trace_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
#define ipr_remove_trace_file(kobj, attr) sysfs_remove_bin_file(kobj, attr)
#define MRAID_IS_LOGICAL(adp, scp) \
(SCP2CHANNEL(scp) == (adp)->max_channel) ? 1 : 0
+#define MRAID_IS_LOGICAL_SDEV(adp, sdev) \
+ (sdev->channel == (adp)->max_channel) ? 1 : 0
+
#define MRAID_GET_DEVICE_MAP(adp, scp, p_chan, target, islogical) \
/* \
* Is the request coming for the virtual channel \
int mraid_mm_register_adp(mraid_mmadp_t *);
int mraid_mm_unregister_adp(uint32_t);
+uint32_t mraid_mm_adapter_app_handle(uint32_t);
#endif /* _MEGARAID_IOCTL_H_ */
* 2 of the License, or (at your option) any later version.
*
* FILE : megaraid_mbox.c
- * Version : v2.20.4.1 (Nov 04 2004)
+ * Version : v2.20.4.5 (Feb 03 2005)
*
* Authors:
* Atul Mukker <Atul.Mukker@lsil.com>
* INTEL RAID Controller SROMBU42E 1000 0408 8086 3499
* INTEL RAID Controller SRCU51L 1000 1960 8086 0520
*
- *
* FSC MegaRAID PCI Express ROMB 1000 0408 1734 1065
*
- *
* ACER MegaRAID ROMB-2E 1000 0408 1025 004D
*
+ * NEC MegaRAID PCI Express ROMB 1000 0408 1033 8287
*
* For history of changes, see Documentation/ChangeLog.megaraid
*/
static int megaraid_mbox_setup_dma_pools(adapter_t *);
static void megaraid_mbox_teardown_dma_pools(adapter_t *);
+static int megaraid_sysfs_alloc_resources(adapter_t *);
+static void megaraid_sysfs_free_resources(adapter_t *);
+
static int megaraid_abort_handler(struct scsi_cmnd *);
static int megaraid_reset_handler(struct scsi_cmnd *);
static void megaraid_mbox_dpc(unsigned long);
+static ssize_t megaraid_sysfs_show_app_hndl(struct class_device *, char *);
+static ssize_t megaraid_sysfs_show_ldnum(struct device *, char *);
+
static int megaraid_cmm_register(adapter_t *);
static int megaraid_cmm_unregister(adapter_t *);
static int megaraid_mbox_mm_handler(unsigned long, uioc_t *, uint32_t);
* ### global data ###
*/
static uint8_t megaraid_mbox_version[8] =
- { 0x02, 0x20, 0x04, 0x00, 9, 27, 20, 4 };
+ { 0x02, 0x20, 0x04, 0x05, 2, 3, 20, 5 };
/*
PCI_VENDOR_ID_AMI,
PCI_SUBSYS_ID_PERC3_SC,
},
+ {
+ PCI_VENDOR_ID_AMI,
+ PCI_DEVICE_ID_AMI_MEGARAID3,
+ PCI_VENDOR_ID_AMI,
+ PCI_SUBSYS_ID_PERC3_DC,
+ },
{
PCI_VENDOR_ID_LSI_LOGIC,
PCI_DEVICE_ID_MEGARAID_SCSI_320_0,
PCI_VENDOR_ID_AI,
PCI_SUBSYS_ID_MEGARAID_ACER_ROMB_2E,
},
+ {
+ PCI_VENDOR_ID_LSI_LOGIC,
+ PCI_DEVICE_ID_MEGARAID_NEC_ROMB_2E,
+ PCI_VENDOR_ID_NEC,
+ PCI_SUBSYS_ID_MEGARAID_NEC_ROMB_2E,
+ },
{0} /* Terminating entry */
};
MODULE_DEVICE_TABLE(pci, pci_id_table_g);
};
+
+// definitions for the device attributes for exporting logical drive number
+// for a scsi address (Host, Channel, Id, Lun)
+
+CLASS_DEVICE_ATTR(megaraid_mbox_app_hndl, S_IRUSR, megaraid_sysfs_show_app_hndl,
+ NULL);
+
+// Host template initializer for megaraid mbox sysfs device attributes
+static struct class_device_attribute *megaraid_shost_attrs[] = {
+ &class_device_attr_megaraid_mbox_app_hndl,
+ NULL,
+};
+
+
+DEVICE_ATTR(megaraid_mbox_ld, S_IRUSR, megaraid_sysfs_show_ldnum, NULL);
+
+// Host template initializer for megaraid mbox sysfs device attributes
+static struct device_attribute *megaraid_sdev_attrs[] = {
+ &dev_attr_megaraid_mbox_ld,
+ NULL,
+};
+
+
/*
* Scsi host template for megaraid unified driver
*/
.eh_bus_reset_handler = megaraid_reset_handler,
.eh_host_reset_handler = megaraid_reset_handler,
.use_clustering = ENABLE_CLUSTERING,
+ .sdev_attrs = megaraid_sdev_attrs,
+ .shost_attrs = megaraid_shost_attrs,
};
}
adapter->device_ids[adapter->max_channel][adapter->init_id] =
0xFF;
+
+ raid_dev->random_del_supported = 1;
}
/*
*/
adapter->cmd_per_lun = megaraid_cmd_per_lun;
+ /*
+ * Allocate resources required to issue FW calls, when sysfs is
+ * accessed
+ */
+ if (megaraid_sysfs_alloc_resources(adapter) != 0) {
+ goto out_alloc_cmds;
+ }
+
// Set the DMA mask to 64-bit. All supported controllers as capable of
// DMA in this range
if (pci_set_dma_mask(adapter->pdev, 0xFFFFFFFFFFFFFFFFULL) != 0) {
con_log(CL_ANN, (KERN_WARNING
"megaraid: could not set DMA mask for 64-bit.\n"));
- goto out_alloc_cmds;
+ goto out_free_sysfs_res;
}
// setup tasklet for DPC
return 0;
+out_free_sysfs_res:
+ megaraid_sysfs_free_resources(adapter);
out_alloc_cmds:
megaraid_free_cmd_packets(adapter);
out_free_irq:
tasklet_kill(&adapter->dpc_h);
+ megaraid_sysfs_free_resources(adapter);
+
megaraid_free_cmd_packets(adapter);
free_irq(adapter->irq, adapter);
if (scb->dma_direction == PCI_DMA_TODEVICE) {
if (!scb->scp->use_sg) { // sg list not used
- pci_dma_sync_single_for_device(adapter->pdev, ccb->buf_dma_h,
+ pci_dma_sync_single_for_device(adapter->pdev,
+ ccb->buf_dma_h,
scb->scp->request_bufflen,
PCI_DMA_TODEVICE);
}
else {
- pci_dma_sync_sg_for_device(adapter->pdev, scb->scp->request_buffer,
+ pci_dma_sync_sg_for_device(adapter->pdev,
+ scb->scp->request_buffer,
scb->scp->use_sg, PCI_DMA_TODEVICE);
}
}
scp->scsi_done = done;
scp->result = 0;
- ASSERT(spin_is_locked(adapter->host_lock));
+ assert_spin_locked(adapter->host_lock);
spin_unlock(adapter->host_lock);
while (!list_empty(&adapter->pend_list)) {
- ASSERT(spin_is_locked(PENDING_LIST_LOCK(adapter)));
+ assert_spin_locked(PENDING_LIST_LOCK(adapter));
scb = list_entry(adapter->pend_list.next, scb_t, list);
channel = scb->dev_channel;
target = scb->dev_target;
- pthru->timeout = 1; // 0=6sec, 1=60sec, 2=10min, 3=3hrs
+ // 0=6sec, 1=60sec, 2=10min, 3=3hrs, 4=NO timeout
+ pthru->timeout = 4;
pthru->ars = 1;
pthru->islogical = 0;
pthru->channel = 0;
channel = scb->dev_channel;
target = scb->dev_target;
- epthru->timeout = 1; // 0=6sec, 1=60sec, 2=10min, 3=3hrs
+ // 0=6sec, 1=60sec, 2=10min, 3=3hrs, 4=NO timeout
+ epthru->timeout = 4;
epthru->ars = 1;
epthru->islogical = 0;
epthru->channel = 0;
adapter = SCP2ADAPTER(scp);
raid_dev = ADAP2RAIDDEV(adapter);
- ASSERT(spin_is_locked(adapter->host_lock));
+ assert_spin_locked(adapter->host_lock);
con_log(CL_ANN, (KERN_WARNING
"megaraid: aborting-%ld cmd=%x <c=%d t=%d l=%d>\n",
adapter = SCP2ADAPTER(scp);
raid_dev = ADAP2RAIDDEV(adapter);
- ASSERT(spin_is_locked(adapter->host_lock));
+ assert_spin_locked(adapter->host_lock);
con_log(CL_ANN, (KERN_WARNING "megaraid: reseting the host...\n"));
memset((caddr_t)raw_mbox, 0, sizeof(mbox_t));
raw_mbox[0] = FC_DEL_LOGDRV;
- raw_mbox[0] = OP_SUP_DEL_LOGDRV;
+ raw_mbox[2] = OP_SUP_DEL_LOGDRV;
// Issue the command
rval = 0;
spin_unlock_irqrestore(USER_FREE_LIST_LOCK(adapter), flags);
- scb->state = SCB_ACTIVE;
- scb->dma_type = MRAID_DMA_NONE;
+ scb->state = SCB_ACTIVE;
+ scb->dma_type = MRAID_DMA_NONE;
+ scb->dma_direction = PCI_DMA_NONE;
ccb = (mbox_ccb_t *)scb->ccb;
mbox64 = (mbox64_t *)(unsigned long)kioc->cmdbuf;
*/
+
+/**
+ * megaraid_sysfs_alloc_resources - allocate sysfs related resources
+ *
+ * Allocate packets required to issue FW calls whenever the sysfs attributes
+ * are read. These attributes require up-to-date information from the FW.
+ * Also set up the semaphore used to serialize access to these resources,
+ * and the wait queue.
+ *
+ * @param adapter : controller's soft state
+ *
+ * @return 0 on success
+ * @return -ERROR_CODE on failure
+ */
+static int
+megaraid_sysfs_alloc_resources(adapter_t *adapter)
+{
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+ int rval = 0;
+
+ raid_dev->sysfs_uioc = kmalloc(sizeof(uioc_t), GFP_KERNEL);
+
+ raid_dev->sysfs_mbox64 = kmalloc(sizeof(mbox64_t), GFP_KERNEL);
+
+ raid_dev->sysfs_buffer = pci_alloc_consistent(adapter->pdev,
+ PAGE_SIZE, &raid_dev->sysfs_buffer_dma);
+
+ if (!raid_dev->sysfs_uioc || !raid_dev->sysfs_mbox64 ||
+ !raid_dev->sysfs_buffer) {
+
+ con_log(CL_ANN, (KERN_WARNING
+ "megaraid: out of memory, %s %d\n", __FUNCTION__,
+ __LINE__));
+
+ rval = -ENOMEM;
+
+ megaraid_sysfs_free_resources(adapter);
+ }
+
+ sema_init(&raid_dev->sysfs_sem, 1);
+
+ init_waitqueue_head(&raid_dev->sysfs_wait_q);
+
+ return rval;
+}
+
+
+/**
+ * megaraid_sysfs_free_resources - free sysfs related resources
+ *
+ * Free packets allocated for sysfs FW commands
+ *
+ * @param adapter : controller's soft state
+ */
+static void
+megaraid_sysfs_free_resources(adapter_t *adapter)
+{
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+
+ if (raid_dev->sysfs_uioc) kfree(raid_dev->sysfs_uioc);
+
+ if (raid_dev->sysfs_mbox64) kfree(raid_dev->sysfs_mbox64);
+
+ if (raid_dev->sysfs_buffer) {
+ pci_free_consistent(adapter->pdev, PAGE_SIZE,
+ raid_dev->sysfs_buffer, raid_dev->sysfs_buffer_dma);
+ }
+}
+
+
+/**
+ * megaraid_sysfs_get_ldmap_done - callback for get ldmap
+ *
+ * Callback routine called in the ISR/tasklet context for the get ldmap call
+ *
+ * @param uioc : completed packet
+ */
+static void
+megaraid_sysfs_get_ldmap_done(uioc_t *uioc)
+{
+ adapter_t *adapter = (adapter_t *)uioc->buf_vaddr;
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+
+ uioc->status = 0;
+
+ wake_up(&raid_dev->sysfs_wait_q);
+}
+
+
+/**
+ * megaraid_sysfs_get_ldmap_timeout - timeout handling for get ldmap
+ *
+ * Timeout routine to recover and return to the application in case the
+ * adapter has stopped responding. A timeout of 60 seconds for this command
+ * seems like a good value.
+ *
+ * @param uioc : timed out packet
+ */
+static void
+megaraid_sysfs_get_ldmap_timeout(unsigned long data)
+{
+ uioc_t *uioc = (uioc_t *)data;
+ adapter_t *adapter = (adapter_t *)uioc->buf_vaddr;
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+
+ uioc->status = -ETIME;
+
+ wake_up(&raid_dev->sysfs_wait_q);
+}
+
+
+/**
+ * megaraid_sysfs_get_ldmap - get updated logical drive map
+ *
+ * This routine is called whenever the user reads the logical drive
+ * attributes, to get the current logical drive mapping table from the
+ * firmware. We use the management APIs to issue commands to the controller.
+ *
+ * NOTE: The command issuance functionality is not generalized and is
+ * implemented in the context of the "get ld map" command only. If required,
+ * the command issuance logic can be trivially pulled out and implemented as
+ * a standalone library. For now, this should suffice since there is no other
+ * user of this interface.
+ *
+ * @param adapter : controller's soft state
+ *
+ * @return 0 on success
+ * @return -1 on failure
+ */
+static int
+megaraid_sysfs_get_ldmap(adapter_t *adapter)
+{
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+ uioc_t *uioc;
+ mbox64_t *mbox64;
+ mbox_t *mbox;
+ char *raw_mbox;
+ struct timer_list sysfs_timer;
+ struct timer_list *timerp;
+ caddr_t ldmap;
+ int rval = 0;
+
+ /*
+ * Allow only one read at a time to go through the sysfs attributes
+ */
+ down(&raid_dev->sysfs_sem);
+
+ uioc = raid_dev->sysfs_uioc;
+ mbox64 = raid_dev->sysfs_mbox64;
+ ldmap = raid_dev->sysfs_buffer;
+
+ memset(uioc, 0, sizeof(uioc_t));
+ memset(mbox64, 0, sizeof(mbox64_t));
+ memset(ldmap, 0, sizeof(raid_dev->curr_ldmap));
+
+ mbox = &mbox64->mbox32;
+ raw_mbox = (char *)mbox;
+ uioc->cmdbuf = (uint64_t)(unsigned long)mbox64;
+ uioc->buf_vaddr = (caddr_t)adapter;
+ uioc->status = -ENODATA;
+ uioc->done = megaraid_sysfs_get_ldmap_done;
+
+ /*
+ * Prepare the mailbox packet to get the current logical drive mapping
+ * table
+ */
+ mbox->xferaddr = (uint32_t)raid_dev->sysfs_buffer_dma;
+
+ raw_mbox[0] = FC_DEL_LOGDRV;
+ raw_mbox[2] = OP_GET_LDID_MAP;
+
+ /*
+ * Setup a timer to recover from a non-responding controller
+ */
+ timerp = &sysfs_timer;
+ init_timer(timerp);
+
+ timerp->function = megaraid_sysfs_get_ldmap_timeout;
+ timerp->data = (unsigned long)uioc;
+ timerp->expires = jiffies + 60 * HZ;
+
+ add_timer(timerp);
+
+ /*
+ * Send the command to the firmware
+ */
+ rval = megaraid_mbox_mm_command(adapter, uioc);
+
+ if (rval == 0) { // command successfully issued
+ wait_event(raid_dev->sysfs_wait_q, (uioc->status != -ENODATA));
+
+ /*
+ * Check if the command timed out
+ */
+ if (uioc->status == -ETIME) {
+ con_log(CL_ANN, (KERN_NOTICE
+ "megaraid: sysfs get ld map timed out\n"));
+
+ rval = -ETIME;
+ }
+ else {
+ rval = mbox->status;
+ }
+
+ if (rval == 0) {
+ memcpy(raid_dev->curr_ldmap, ldmap,
+ sizeof(raid_dev->curr_ldmap));
+ }
+ else {
+ con_log(CL_ANN, (KERN_NOTICE
+ "megaraid: get ld map failed with %x\n", rval));
+ }
+ }
+ else {
+ con_log(CL_ANN, (KERN_NOTICE
+ "megaraid: could not issue ldmap command:%x\n", rval));
+ }
+
+
+ del_timer_sync(timerp);
+
+ up(&raid_dev->sysfs_sem);
+
+ return rval;
+}
+
+
+/**
+ * megaraid_sysfs_show_app_hndl - display application handle for this adapter
+ *
+ * Display the handle used by the applications while executing management
+ * tasks on the adapter. We invoke a management module API to get the adapter
+ * handle, since we do not interface with applications directly.
+ *
+ * @param cdev : class device object representation for the host
+ * @param buf : buffer to send data to
+ */
+static ssize_t
+megaraid_sysfs_show_app_hndl(struct class_device *cdev, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ adapter_t *adapter = (adapter_t *)SCSIHOST2ADAP(shost);
+ uint32_t app_hndl;
+
+ app_hndl = mraid_mm_adapter_app_handle(adapter->unique_id);
+
+ return snprintf(buf, 8, "%u\n", app_hndl);
+}
+
+
+/**
+ * megaraid_sysfs_show_ldnum - display the logical drive number for this device
+ *
+ * Display the logical drive number for the device in question, if it is a
+ * valid logical drive. For physical devices, "-1" is returned.
+ * The logical drive number is displayed in the following format:
+ *
+ * <SCSI ID> <LD NUM> <LD STICKY ID> <APP ADAPTER HANDLE>
+ * <int> <int> <int> <int>
+ *
+ * @param dev : device object representation for the scsi device
+ * @param buf : buffer to send data to
+ */
+static ssize_t
+megaraid_sysfs_show_ldnum(struct device *dev, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ adapter_t *adapter = (adapter_t *)SCSIHOST2ADAP(sdev->host);
+ mraid_device_t *raid_dev = ADAP2RAIDDEV(adapter);
+ int scsi_id = -1;
+ int logical_drv = -1;
+ int ldid_map = -1;
+ uint32_t app_hndl = 0;
+ int mapped_sdev_id;
+ int rval;
+ int i;
+
+ if (raid_dev->random_del_supported &&
+ MRAID_IS_LOGICAL_SDEV(adapter, sdev)) {
+
+ rval = megaraid_sysfs_get_ldmap(adapter);
+ if (rval == 0) {
+
+ for (i = 0; i < MAX_LOGICAL_DRIVES_40LD; i++) {
+
+ mapped_sdev_id = sdev->id;
+
+ if (sdev->id > adapter->init_id) {
+ mapped_sdev_id -= 1;
+ }
+
+ if (raid_dev->curr_ldmap[i] == mapped_sdev_id) {
+
+ scsi_id = sdev->id;
+
+ logical_drv = i;
+
+ ldid_map = raid_dev->curr_ldmap[i];
+
+ app_hndl = mraid_mm_adapter_app_handle(
+ adapter->unique_id);
+
+ break;
+ }
+ }
+ }
+ else {
+ con_log(CL_ANN, (KERN_NOTICE
+ "megaraid: sysfs get ld map failed: %x\n",
+ rval));
+ }
+ }
+
+ return snprintf(buf, 36, "%d %d %d %d\n", scsi_id, logical_drv,
+ ldid_map, app_hndl);
+}
+
+
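
Both attributes appear as ordinary sysfs files: megaraid_mbox_app_hndl under the SCSI host object and megaraid_mbox_ld under each SCSI device. As a usage illustration only, the small reader below parses the per-device attribute; the path is a hypothetical example (the real one depends on the SCSI address and the sysfs mount point), while the four-integer "<SCSI ID> <LD NUM> <LD STICKY ID> <APP ADAPTER HANDLE>" layout is the one documented above.

/* cc -Wall -o read_ldnum read_ldnum.c && ./read_ldnum [sysfs-file] */
#include <stdio.h>

int main(int argc, char **argv)
{
	/* hypothetical example path for host 2, channel 0, id 0, lun 0 */
	const char *path = (argc > 1) ? argv[1] :
		"/sys/class/scsi_device/2:0:0:0/device/megaraid_mbox_ld";
	int scsi_id, ld_num, ld_sticky, app_hndl;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%d %d %d %d", &scsi_id, &ld_num, &ld_sticky,
		   &app_hndl) != 4) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("scsi id %d -> logical drive %d (sticky id %d, app handle %d)\n",
	       scsi_id, ld_num, ld_sticky, app_hndl);
	return 0;
}
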
/*
* END: Mailbox Low Level Driver
*/
#include "megaraid_ioctl.h"
-#define MEGARAID_VERSION "2.20.4.1"
-#define MEGARAID_EXT_VERSION "(Release Date: Thu Nov 4 17:44:59 EST 2004)"
+#define MEGARAID_VERSION "2.20.4.5"
+#define MEGARAID_EXT_VERSION "(Release Date: Thu Feb 03 12:27:22 EST 2005)"
/*
#define PCI_SUBSYS_ID_PERC3_DC 0x0493
#define PCI_SUBSYS_ID_PERC3_SC 0x0475
+#define PCI_DEVICE_ID_MEGARAID_NEC_ROMB_2E 0x0408
+#define PCI_SUBSYS_ID_MEGARAID_NEC_ROMB_2E 0x8287
+
#ifndef PCI_SUBSYS_ID_FSC
#define PCI_SUBSYS_ID_FSC 0x1734
#endif
* @param hw_error : set if FW not responding
* @param fast_load : If set, skip physical device scanning
* @channel_class : channel class, RAID or SCSI
+ * @sysfs_sem : semaphore to serialize access to sysfs res.
+ * @sysfs_uioc : management packet to issue FW calls from sysfs
+ * @sysfs_mbox64 : mailbox packet to issue FW calls from sysfs
+ * @sysfs_buffer : data buffer for FW commands issued from sysfs
+ * @sysfs_buffer_dma : DMA buffer for FW commands issued from sysfs
+ * @sysfs_wait_q : wait queue for sysfs operations
+ * @random_del_supported : set if the random deletion is supported
+ * @curr_ldmap : current LDID map
*
* Initialization structure for mailbox controllers: memory based and IO based
* All the fields in this structure are LLD specific and may be discovered at
*
* NOTE: The fields of this structures are placed to minimize cache misses
*/
+#define MAX_LD_EXTENDED64 64
typedef struct {
mbox64_t *una_mbox64;
dma_addr_t una_mbox64_dma;
int hw_error;
int fast_load;
uint8_t channel_class;
+ struct semaphore sysfs_sem;
+ uioc_t *sysfs_uioc;
+ mbox64_t *sysfs_mbox64;
+ caddr_t sysfs_buffer;
+ dma_addr_t sysfs_buffer_dma;
+ wait_queue_head_t sysfs_wait_q;
+ int random_del_supported;
+ uint16_t curr_ldmap[MAX_LD_EXTENDED64];
} mraid_device_t;
// route to raid device from adapter
* 2 of the License, or (at your option) any later version.
*
* FILE : megaraid_mm.c
- * Version : v2.20.2.3 (Dec 09 2004)
+ * Version : v2.20.2.5 (Jan 21 2005)
*
* Common management module
*/
EXPORT_SYMBOL(mraid_mm_register_adp);
EXPORT_SYMBOL(mraid_mm_unregister_adp);
+EXPORT_SYMBOL(mraid_mm_adapter_app_handle);
static int majorno;
static uint32_t drvr_ver = 0x02200201;
static int adapters_count_g;
static struct list_head adapters_list_g;
-wait_queue_head_t wait_q;
+static wait_queue_head_t wait_q;
static struct file_operations lsi_fops = {
.open = mraid_mm_open,
return rval;
}
+
+/**
+ * mraid_mm_adapter_app_handle - return the application handle for this adapter
+ *
+ * For the given driver data, locate the adapter in our global list and
+ * return the corresponding handle, which is also used by applications to
+ * uniquely identify an adapter.
+ *
+ * @param unique_id : adapter unique identifier
+ *
+ * @return adapter handle if found in the list
+ * @return 0 if adapter could not be located, should never happen though
+ */
+uint32_t
+mraid_mm_adapter_app_handle(uint32_t unique_id)
+{
+ mraid_mmadp_t *adapter;
+ mraid_mmadp_t *tmp;
+ int index = 0;
+
+ list_for_each_entry_safe(adapter, tmp, &adapters_list_g, list) {
+
+ if (adapter->unique_id == unique_id) {
+
+ return MKADAP(index);
+ }
+
+ index++;
+ }
+
+ return 0;
+}
+
+
/**
* mraid_mm_setup_dma_pools - Set up dma buffer pools per adapter
*
#include "megaraid_ioctl.h"
-#define LSI_COMMON_MOD_VERSION "2.20.2.3"
+#define LSI_COMMON_MOD_VERSION "2.20.2.5"
#define LSI_COMMON_MOD_EXT_VERSION \
- "(Release Date: Thu Dec 9 19:02:14 EST 2004)"
+ "(Release Date: Fri Jan 21 00:01:03 EST 2005)"
+
#define LSI_DBGLVL dbglevel
/*
- * $Header: /cvsroot/osst/Driver/osst.h,v 1.14 2003/12/14 14:34:38 wriede Exp $
+ * $Header: /cvsroot/osst/Driver/osst.h,v 1.16 2005/01/01 21:13:35 wriede Exp $
*/
#include <asm/byteorder.h>
#define BLOCK_SIZE_PAGE_LENGTH 4
#define BUFFER_FILLING_PAGE 0x33
-#define BUFFER_FILLING_PAGE_LENGTH
+#define BUFFER_FILLING_PAGE_LENGTH 4
#define VENDOR_IDENT_PAGE 0x36
#define VENDOR_IDENT_PAGE_LENGTH 8
//#define OSST_MAX_SG 2
/* The OnStream tape buffer descriptor. */
-typedef struct {
+struct osst_buffer {
unsigned char in_use;
unsigned char dma; /* DMA-able buffer */
int buffer_size;
int writing;
int midlevel_result;
int syscall_result;
- Scsi_Request *last_SRpnt;
+ struct scsi_request *last_SRpnt;
unsigned char *b_data;
os_aux_t *aux; /* onstream AUX structure at end of each block */
unsigned short use_sg; /* zero or number of s/g segments for this adapter */
unsigned short sg_segs; /* number of segments in s/g list */
unsigned short orig_sg_segs; /* number of segments allocated at first try */
struct scatterlist sg[1]; /* MUST BE last item */
-} OSST_buffer;
+} ;
/* The OnStream tape drive descriptor */
-typedef struct {
+struct osst_tape {
struct scsi_driver *driver;
unsigned capacity;
- Scsi_Device* device;
+ struct scsi_device *device;
struct semaphore lock; /* for serialization */
struct completion wait; /* for SCSI commands */
- OSST_buffer * buffer;
+ struct osst_buffer * buffer;
/* Drive characteristics */
unsigned char omit_blklims;
int min_block;
int max_block;
int recover_count; /* from tape opening */
+ int abort_count;
int write_count;
int read_count;
int recover_erreg; /* from last status call */
unsigned char last_sense[16];
#endif
struct gendisk *drive;
-} OS_Scsi_Tape;
+} ;
/* Values of write_type */
#define OS_WRITE_DATA 0
/* ================================================================== */
-/* Parameters that can be set with 'insmod' */
-
-/* Bit map of interrupts to choose from */
-static unsigned int irq_mask = 0xdeb8; /* 3-5, 7, 9-12, 14, 15 */
-static int irq_list[4] = { -1 };
-
-module_param(irq_mask, int, 0);
-MODULE_PARM_DESC(irq_mask, "IRQ mask bits (default: 0xdeb8)");
-module_param_array(irq_list, int, NULL, 0);
-MODULE_PARM_DESC(irq_list, "Comma-separated list of up to 4 IRQs to try (default: auto select).");
-
-/* ================================================================== */
-
#define SYNC_MODE 0 /* Synchronous transfer mode */
/* Default configuration */
struct scsi_info_t *info;
client_reg_t client_reg;
dev_link_t *link;
- int i, ret;
+ int ret;
DEBUG(0, "SYM53C500_attach()\n");
link->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
link->io.IOAddrLines = 10;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (irq_list[0] == -1)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->conf.Attributes = CONF_ENABLE_IRQ;
link->conf.Vcc = 50;
link->conf.IntType = INT_MEMORY_AND_IO;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.event_handler = &SYM53C500_event;
client_reg.EventMask = CS_EVENT_RESET_REQUEST | CS_EVENT_CARD_RESET |
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
depends on SCSI_QLA2XXX
select SCSI_FC_ATTRS
---help---
- This driver supports the QLogic 2300 (ISP2300, and ISP2312) host
+ This driver supports the QLogic 2300 (ISP2300 and ISP2312) host
adapter family.
config SCSI_QLA2322
This driver supports the QLogic 2322 (ISP2322) host adapter family.
config SCSI_QLA6312
- tristate "QLogic ISP6312 host adapter family support"
+ tristate "QLogic ISP63xx host adapter family support"
depends on SCSI_QLA2XXX
select SCSI_FC_ATTRS
---help---
- This driver supports the QLogic 6312 (ISP6312) host adapter family.
-
-config SCSI_QLA6322
- tristate "QLogic ISP6322 host adapter family support"
- depends on SCSI_QLA2XXX
- select SCSI_FC_ATTRS
- ---help---
- This driver supports the QLogic 6322 (ISP6322) host adapter family.
+ This driver supports the QLogic 63xx (ISP6312 and ISP6322) host
+ adapter family.
-#define QLA_MODEL_NAMES 0x21
+#define QLA_MODEL_NAMES 0x32
/*
* Adapter model names.
*/
-char *qla2x00_model_name[QLA_MODEL_NAMES] = {
+static char *qla2x00_model_name[QLA_MODEL_NAMES] = {
"QLA2340", /* 0x100 */
"QLA2342", /* 0x101 */
"QLA2344", /* 0x102 */
"QLA2350", /* 0x10c */
"QLA2352", /* 0x10d */
"QLA2352", /* 0x10e */
- "HPQSVS ", /* 0x10f */
- "HPQSVS ", /* 0x110 */
- "QLA4010", /* 0x111 */
- "QLA4010", /* 0x112 */
- "QLA4010C", /* 0x113 */
- "QLA4010C", /* 0x114 */
+ "HPQ SVS", /* 0x10f */
+ "HPQ SVS", /* 0x110 */
+ " ", /* 0x111 */
+ " ", /* 0x112 */
+ " ", /* 0x113 */
+ " ", /* 0x114 */
"QLA2360", /* 0x115 */
"QLA2362", /* 0x116 */
- " ", /* 0x117 */
- " ", /* 0x118 */
+ "QLE2360", /* 0x117 */
+ "QLE2362", /* 0x118 */
"QLA200", /* 0x119 */
- "QLA200C" /* 0x11A */
- "QLA200P" /* 0x11B */
- "QLA200P" /* 0x11C */
- "QLA4040" /* 0x11D */
- "QLA4040" /* 0x11E */
- "QLA4040C" /* 0x11F */
- "QLA4040C" /* 0x120 */
+ "QLA200C", /* 0x11a */
+ "QLA200P", /* 0x11b */
+ "QLA200P", /* 0x11c */
+ " ", /* 0x11d */
+ " ", /* 0x11e */
+ " ", /* 0x11f */
+ " ", /* 0x120 */
+ " ", /* 0x121 */
+ " ", /* 0x122 */
+ " ", /* 0x123 */
+ " ", /* 0x124 */
+ " ", /* 0x125 */
+ " ", /* 0x126 */
+ " ", /* 0x127 */
+ " ", /* 0x128 */
+ " ", /* 0x129 */
+ " ", /* 0x12a */
+ " ", /* 0x12b */
+ " ", /* 0x12c */
+ " ", /* 0x12d */
+ " ", /* 0x12e */
+ "QLA210", /* 0x12f */
+ "EMC 250", /* 0x130 */
+ "HP A7538A" /* 0x131 */
};
-char *qla2x00_model_desc[QLA_MODEL_NAMES] = {
+static char *qla2x00_model_desc[QLA_MODEL_NAMES] = {
"133MHz PCI-X to 2Gb FC, Single Channel", /* 0x100 */
"133MHz PCI-X to 2Gb FC, Dual Channel", /* 0x101 */
"133MHz PCI-X to 2Gb FC, Quad Channel", /* 0x102 */
" ", /* 0x10e */
"HPQ SVS HBA- Initiator device", /* 0x10f */
"HPQ SVS HBA- Target device", /* 0x110 */
- "Optical- 133MHz to 1Gb iSCSI- networking", /* 0x111 */
- "Optical- 133MHz to 1Gb iSCSI- storage", /* 0x112 */
- "Copper- 133MHz to 1Gb iSCSI- networking", /* 0x113 */
- "Copper- 133MHz to 1Gb iSCSI- storage", /* 0x114 */
+ " ", /* 0x111 */
+ " ", /* 0x112 */
+ " ", /* 0x113 */
+ " ", /* 0x114 */
"133MHz PCI-X to 2Gb FC Single Channel", /* 0x115 */
"133MHz PCI-X to 2Gb FC Dual Channel", /* 0x116 */
- " ", /* 0x117 */
- " ", /* 0x118 */
+ "PCI-Express to 2Gb FC, Single Channel", /* 0x117 */
+ "PCI-Express to 2Gb FC, Dual Channel", /* 0x118 */
"133MHz PCI-X to 2Gb FC Optical", /* 0x119 */
- "133MHz PCI-X to 2Gb FC Copper" /* 0x11A */
- "133MHz PCI-X to 2Gb FC SFP" /* 0x11B */
- "133MHz PCI-X to 2Gb FC SFP" /* 0x11C */
- "Optical- 133MHz to 1Gb NIC with IPSEC", /* 0x11D */
- "Optical- 133MHz to 1Gb iSCSI with IPSEC", /* 0x11E */
- "Copper- 133MHz to 1Gb NIC with IPSEC", /* 0x11F */
- "Copper- 133MHz to 1Gb iSCSI with IPSEC", /* 0x120 */
+ "133MHz PCI-X to 2Gb FC Copper", /* 0x11a */
+ "133MHz PCI-X to 2Gb FC SFP", /* 0x11b */
+ "133MHz PCI-X to 2Gb FC SFP", /* 0x11c */
+ " ", /* 0x11d */
+ " ", /* 0x11e */
+ " ", /* 0x11f */
+ " ", /* 0x120 */
+ " ", /* 0x121 */
+ " ", /* 0x122 */
+ " ", /* 0x123 */
+ " ", /* 0x124 */
+ " ", /* 0x125 */
+ " ", /* 0x126 */
+ " ", /* 0x127 */
+ " ", /* 0x128 */
+ " ", /* 0x129 */
+ " ", /* 0x12a */
+ " ", /* 0x12b */
+ " ", /* 0x12c */
+ " ", /* 0x12d */
+ " ", /* 0x12e */
+ "133MHz PCI-X to 2Gb FC SFF", /* 0x12f */
+ "133MHz PCI-X to 2Gb FC SFF", /* 0x130 */
+ "HP 1p2g QLA2340" /* 0x131 */
};
* If you do not delete the provisions above, a recipient may use your
* version of this file under either the OSL or the GPL.
*
+ * 0.06
+ * - Added generic SATA support by using a pci_device_id that filters on
+ * the IDE storage class code.
+ *
* 0.03
* - Fixed a bug where the hotplug handlers for non-CK804/MCP04 were using
* mmio_base, which is only set for the CK804/MCP04 case.
#include <linux/libata.h>
#define DRV_NAME "sata_nv"
-#define DRV_VERSION "0.5"
+#define DRV_VERSION "0.6"
#define NV_PORTS 2
#define NV_PIO_MASK 0x1f
enum nv_host_type
{
+ GENERIC,
NFORCE2,
NFORCE3,
CK804
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
+ PCI_ANY_ID, PCI_ANY_ID,
+ PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC },
{ 0, } /* terminate list */
};
struct nv_host_desc
{
enum nv_host_type host_type;
- unsigned long host_flags;
void (*enable_hotplug)(struct ata_probe_ent *probe_ent);
void (*disable_hotplug)(struct ata_host_set *host_set);
void (*check_hotplug)(struct ata_host_set *host_set);
};
static struct nv_host_desc nv_device_tbl[] = {
+ {
+ .host_type = GENERIC,
+ .enable_hotplug = NULL,
+ .disable_hotplug= NULL,
+ .check_hotplug = NULL,
+ },
{
.host_type = NFORCE2,
- .host_flags = 0x00000000,
.enable_hotplug = nv_enable_hotplug,
.disable_hotplug= nv_disable_hotplug,
.check_hotplug = nv_check_hotplug,
},
{
.host_type = NFORCE3,
- .host_flags = 0x00000000,
.enable_hotplug = nv_enable_hotplug,
.disable_hotplug= nv_disable_hotplug,
.check_hotplug = nv_check_hotplug,
},
{ .host_type = CK804,
- .host_flags = NV_HOST_FLAGS_SCR_MMIO,
.enable_hotplug = nv_enable_hotplug_ck804,
.disable_hotplug= nv_disable_hotplug_ck804,
.check_hotplug = nv_check_hotplug_ck804,
struct nv_host
{
struct nv_host_desc *host_desc;
+ unsigned long host_flags;
};
static struct pci_driver nv_pci_driver = {
.phy_reset = sata_phy_reset,
.bmdma_setup = ata_bmdma_setup,
.bmdma_start = ata_bmdma_start,
+ .bmdma_stop = ata_bmdma_stop,
+ .bmdma_status = ata_bmdma_status,
.qc_prep = ata_qc_prep,
.qc_issue = ata_qc_issue_prot,
.eng_timeout = ata_eng_timeout,
if (sc_reg > SCR_CONTROL)
return 0xffffffffU;
- if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
- return readl(ap->ioaddr.scr_addr + (sc_reg * 4));
+ if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ return readl((void*)ap->ioaddr.scr_addr + (sc_reg * 4));
else
return inl(ap->ioaddr.scr_addr + (sc_reg * 4));
}
if (sc_reg > SCR_CONTROL)
return;
- if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
- writel(val, ap->ioaddr.scr_addr + (sc_reg * 4));
+ if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ writel(val, (void*)ap->ioaddr.scr_addr + (sc_reg * 4));
else
outl(val, ap->ioaddr.scr_addr + (sc_reg * 4));
}
struct nv_host *host;
struct ata_port_info *ppi;
struct ata_probe_ent *probe_ent;
+ int pci_dev_busy = 0;
int rc;
+ u32 bar;
+
+ // Make sure this is a SATA controller by counting the number of bars
+ // (NVIDIA SATA controllers will always have six bars). Otherwise,
+ // it's an IDE controller and we ignore it.
+ for (bar=0; bar<6; bar++)
+ if (pci_resource_start(pdev, bar) == 0)
+ return -ENODEV;
if (!printed_version++)
printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
goto err_out;
rc = pci_request_regions(pdev, DRV_NAME);
- if (rc)
+ if (rc) {
+ pci_dev_busy = 1;
goto err_out_disable;
+ }
rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
if (rc)
if (!host)
goto err_out_free_ent;
+ memset(host, 0, sizeof(struct nv_host));
host->host_desc = &nv_device_tbl[ent->driver_data];
probe_ent->private_data = host;
- if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO) {
+ if (pci_resource_flags(pdev, 5) & IORESOURCE_MEM)
+ host->host_flags |= NV_HOST_FLAGS_SCR_MMIO;
+
+ if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO) {
unsigned long base;
probe_ent->mmio_base = ioremap(pci_resource_start(pdev, 5),
return 0;
err_out_iounmap:
- if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ if (host->host_flags & NV_HOST_FLAGS_SCR_MMIO)
iounmap(probe_ent->mmio_base);
err_out_free_host:
kfree(host);
err_out_regions:
pci_release_regions(pdev);
err_out_disable:
- pci_disable_device(pdev);
+ if (!pci_dev_busy)
+ pci_disable_device(pdev);
err_out:
return rc;
}
--- /dev/null
+/*
+ * sata_qstor.c - Pacific Digital Corporation QStor SATA
+ *
+ * Maintained by: Mark Lord <mlord@pobox.com>
+ *
+ * Copyright 2005 Pacific Digital Corporation.
+ * (OSL/GPL code release authorized by Jalil Fadavi).
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <asm/io.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_qstor"
+#define DRV_VERSION "0.03"
+
+enum {
+ QS_PORTS = 4,
+ QS_MAX_PRD = LIBATA_MAX_PRD,
+ QS_CPB_ORDER = 6,
+ QS_CPB_BYTES = (1 << QS_CPB_ORDER),
+ QS_PRD_BYTES = QS_MAX_PRD * 16,
+ QS_PKT_BYTES = QS_CPB_BYTES + QS_PRD_BYTES,
+
+ QS_DMA_BOUNDARY = ~0UL,
+
+ /* global register offsets */
+ QS_HCF_CNFG3 = 0x0003, /* host configuration offset */
+ QS_HID_HPHY = 0x0004, /* host physical interface info */
+ QS_HCT_CTRL = 0x00e4, /* global interrupt mask offset */
+ QS_HST_SFF = 0x0100, /* host status fifo offset */
+ QS_HVS_SERD3 = 0x0393, /* PHY enable offset */
+
+ /* global control bits */
+ QS_HPHY_64BIT = (1 << 1), /* 64-bit bus detected */
+ QS_CNFG3_GSRST = 0x01, /* global chip reset */
+	QS_SERD3_PHY_ENA	= 0xf0, /* PHY detection ENAble */
+
+ /* per-channel register offsets */
+ QS_CCF_CPBA = 0x0710, /* chan CPB base address */
+ QS_CCF_CSEP = 0x0718, /* chan CPB separation factor */
+ QS_CFC_HUFT = 0x0800, /* host upstream fifo threshold */
+ QS_CFC_HDFT = 0x0804, /* host downstream fifo threshold */
+ QS_CFC_DUFT = 0x0808, /* dev upstream fifo threshold */
+ QS_CFC_DDFT = 0x080c, /* dev downstream fifo threshold */
+ QS_CCT_CTR0 = 0x0900, /* chan control-0 offset */
+ QS_CCT_CTR1 = 0x0901, /* chan control-1 offset */
+ QS_CCT_CFF = 0x0a00, /* chan command fifo offset */
+
+ /* channel control bits */
+ QS_CTR0_REG = (1 << 1), /* register mode (vs. pkt mode) */
+ QS_CTR0_CLER = (1 << 2), /* clear channel errors */
+ QS_CTR1_RDEV = (1 << 1), /* sata phy/comms reset */
+ QS_CTR1_RCHN = (1 << 4), /* reset channel logic */
+ QS_CCF_RUN_PKT = 0x107, /* RUN a new dma PKT */
+
+ /* pkt sub-field headers */
+ QS_HCB_HDR = 0x01, /* Host Control Block header */
+ QS_DCB_HDR = 0x02, /* Device Control Block header */
+
+ /* pkt HCB flag bits */
+ QS_HF_DIRO = (1 << 0), /* data DIRection Out */
+ QS_HF_DAT = (1 << 3), /* DATa pkt */
+ QS_HF_IEN = (1 << 4), /* Interrupt ENable */
+ QS_HF_VLD = (1 << 5), /* VaLiD pkt */
+
+ /* pkt DCB flag bits */
+ QS_DF_PORD = (1 << 2), /* Pio OR Dma */
+ QS_DF_ELBA = (1 << 3), /* Extended LBA (lba48) */
+
+ /* PCI device IDs */
+ board_2068_idx = 0, /* QStor 4-port SATA/RAID */
+};
+
+typedef enum { qs_state_idle, qs_state_pkt, qs_state_mmio } qs_state_t;
+
+struct qs_port_priv {
+ u8 *pkt;
+ dma_addr_t pkt_dma;
+ qs_state_t state;
+};
+
+static u32 qs_scr_read (struct ata_port *ap, unsigned int sc_reg);
+static void qs_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+static int qs_ata_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static irqreturn_t qs_intr (int irq, void *dev_instance, struct pt_regs *regs);
+static int qs_port_start(struct ata_port *ap);
+static void qs_host_stop(struct ata_host_set *host_set);
+static void qs_port_stop(struct ata_port *ap);
+static void qs_phy_reset(struct ata_port *ap);
+static void qs_qc_prep(struct ata_queued_cmd *qc);
+static int qs_qc_issue(struct ata_queued_cmd *qc);
+static int qs_check_atapi_dma(struct ata_queued_cmd *qc);
+static void qs_bmdma_stop(struct ata_port *ap);
+static u8 qs_bmdma_status(struct ata_port *ap);
+static void qs_irq_clear(struct ata_port *ap);
+
+static Scsi_Host_Template qs_ata_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .ioctl = ata_scsi_ioctl,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = QS_MAX_PRD,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ //FIXME .use_clustering = ATA_SHT_USE_CLUSTERING,
+ .use_clustering = ENABLE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = QS_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations qs_ata_ops = {
+ .port_disable = ata_port_disable,
+ .tf_load = ata_tf_load,
+ .tf_read = ata_tf_read,
+ .check_status = ata_check_status,
+ .check_atapi_dma = qs_check_atapi_dma,
+ .exec_command = ata_exec_command,
+ .dev_select = ata_std_dev_select,
+ .phy_reset = qs_phy_reset,
+ .qc_prep = qs_qc_prep,
+ .qc_issue = qs_qc_issue,
+ .eng_timeout = ata_eng_timeout,
+ .irq_handler = qs_intr,
+ .irq_clear = qs_irq_clear,
+ .scr_read = qs_scr_read,
+ .scr_write = qs_scr_write,
+ .port_start = qs_port_start,
+ .port_stop = qs_port_stop,
+ .host_stop = qs_host_stop,
+ .bmdma_stop = qs_bmdma_stop,
+ .bmdma_status = qs_bmdma_status,
+};
+
+static struct ata_port_info qs_port_info[] = {
+ /* board_2068_idx */
+ {
+ .sht = &qs_ata_sht,
+ .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ ATA_FLAG_SATA_RESET |
+ //FIXME ATA_FLAG_SRST |
+ ATA_FLAG_MMIO,
+ .pio_mask = 0x10, /* pio4 */
+ .udma_mask = 0x7f, /* udma0-6 */
+ .port_ops = &qs_ata_ops,
+ },
+};
+
+static struct pci_device_id qs_ata_pci_tbl[] = {
+ { PCI_VENDOR_ID_PDC, 0x2068, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_2068_idx },
+
+ { } /* terminate list */
+};
+
+static struct pci_driver qs_ata_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = qs_ata_pci_tbl,
+ .probe = qs_ata_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+static int qs_check_atapi_dma(struct ata_queued_cmd *qc)
+{
+ return 1; /* ATAPI DMA not supported */
+}
+
+static void qs_bmdma_stop(struct ata_port *ap)
+{
+ /* nothing */
+}
+
+static u8 qs_bmdma_status(struct ata_port *ap)
+{
+ return 0;
+}
+
+static void qs_irq_clear(struct ata_port *ap)
+{
+ /* nothing */
+}
+
+static void qs_enter_reg_mode(struct ata_port *ap)
+{
+ u8 __iomem *chan = ap->host_set->mmio_base + (ap->port_no * 0x4000);
+
+ writeb(QS_CTR0_REG, chan + QS_CCT_CTR0);
+ readb(chan + QS_CCT_CTR0); /* flush */
+}
+
+static void qs_phy_reset(struct ata_port *ap)
+{
+ u8 __iomem *chan = ap->host_set->mmio_base + (ap->port_no * 0x4000);
+ struct qs_port_priv *pp = ap->private_data;
+
+ pp->state = qs_state_idle;
+ writeb(QS_CTR1_RCHN, chan + QS_CCT_CTR1);
+ qs_enter_reg_mode(ap);
+ sata_phy_reset(ap);
+}
+
+static u32 qs_scr_read (struct ata_port *ap, unsigned int sc_reg)
+{
+ if (sc_reg > SCR_CONTROL)
+ return ~0U;
+ return readl((void __iomem *)(ap->ioaddr.scr_addr + (sc_reg * 8)));
+}
+
+static void qs_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
+{
+ if (sc_reg > SCR_CONTROL)
+ return;
+ writel(val, (void __iomem *)(ap->ioaddr.scr_addr + (sc_reg * 8)));
+}
+
+static void qs_fill_sg(struct ata_queued_cmd *qc)
+{
+ struct scatterlist *sg = qc->sg;
+ struct ata_port *ap = qc->ap;
+ struct qs_port_priv *pp = ap->private_data;
+ unsigned int nelem;
+ u8 *prd = pp->pkt + QS_CPB_BYTES;
+
+ assert(sg != NULL);
+ assert(qc->n_elem > 0);
+
+ for (nelem = 0; nelem < qc->n_elem; nelem++,sg++) {
+ u64 addr;
+ u32 len;
+
+ addr = sg_dma_address(sg);
+ *(u64 *)prd = cpu_to_le64(addr);
+ prd += sizeof(u64);
+
+ len = sg_dma_len(sg);
+ *(u32 *)prd = cpu_to_le32(len);
+ prd += sizeof(u64);
+
+ VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", nelem,
+ (unsigned long long)addr, len);
+ }
+}
+
+static void qs_qc_prep(struct ata_queued_cmd *qc)
+{
+ struct qs_port_priv *pp = qc->ap->private_data;
+ u8 dflags = QS_DF_PORD, *buf = pp->pkt;
+ u8 hflags = QS_HF_DAT | QS_HF_IEN | QS_HF_VLD;
+ u64 addr;
+
+ VPRINTK("ENTER\n");
+
+ qs_enter_reg_mode(qc->ap);
+ if (qc->tf.protocol != ATA_PROT_DMA) {
+ ata_qc_prep(qc);
+ return;
+ }
+
+ qs_fill_sg(qc);
+
+ if ((qc->tf.flags & ATA_TFLAG_WRITE))
+ hflags |= QS_HF_DIRO;
+ if ((qc->tf.flags & ATA_TFLAG_LBA48))
+ dflags |= QS_DF_ELBA;
+
+ /* host control block (HCB) */
+ buf[ 0] = QS_HCB_HDR;
+ buf[ 1] = hflags;
+ *(u32 *)(&buf[ 4]) = cpu_to_le32(qc->nsect * ATA_SECT_SIZE);
+ *(u32 *)(&buf[ 8]) = cpu_to_le32(qc->n_elem);
+ addr = ((u64)pp->pkt_dma) + QS_CPB_BYTES;
+ *(u64 *)(&buf[16]) = cpu_to_le64(addr);
+
+ /* device control block (DCB) */
+ buf[24] = QS_DCB_HDR;
+ buf[28] = dflags;
+
+ /* frame information structure (FIS) */
+ ata_tf_to_fis(&qc->tf, &buf[32], 0);
+}
+
+static inline void qs_packet_start(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ u8 __iomem *chan = ap->host_set->mmio_base + (ap->port_no * 0x4000);
+
+ VPRINTK("ENTER, ap %p\n", ap);
+
+ writeb(QS_CTR0_CLER, chan + QS_CCT_CTR0);
+ wmb(); /* flush PRDs and pkt to memory */
+ writel(QS_CCF_RUN_PKT, chan + QS_CCT_CFF);
+ readl(chan + QS_CCT_CFF); /* flush */
+}
+
+static int qs_qc_issue(struct ata_queued_cmd *qc)
+{
+ struct qs_port_priv *pp = qc->ap->private_data;
+
+ switch (qc->tf.protocol) {
+ case ATA_PROT_DMA:
+
+ pp->state = qs_state_pkt;
+ qs_packet_start(qc);
+ return 0;
+
+ case ATA_PROT_ATAPI_DMA:
+ BUG();
+ break;
+
+ default:
+ break;
+ }
+
+ pp->state = qs_state_mmio;
+ return ata_qc_issue_prot(qc);
+}
+
+static inline unsigned int qs_intr_pkt(struct ata_host_set *host_set)
+{
+ unsigned int handled = 0;
+ u8 sFFE;
+ u8 __iomem *mmio_base = host_set->mmio_base;
+
+ do {
+ u32 sff0 = readl(mmio_base + QS_HST_SFF);
+ u32 sff1 = readl(mmio_base + QS_HST_SFF + 4);
+ u8 sEVLD = (sff1 >> 30) & 0x01; /* valid flag */
+ sFFE = sff1 >> 31; /* empty flag */
+
+ if (sEVLD) {
+ u8 sDST = sff0 >> 16; /* dev status */
+ u8 sHST = sff1 & 0x3f; /* host status */
+ unsigned int port_no = (sff1 >> 8) & 0x03;
+ struct ata_port *ap = host_set->ports[port_no];
+
+ DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n",
+ sff1, sff0, port_no, sHST, sDST);
+ handled = 1;
+ if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) {
+ struct ata_queued_cmd *qc;
+ struct qs_port_priv *pp = ap->private_data;
+ if (!pp || pp->state != qs_state_pkt)
+ continue;
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (qc && (!(qc->tf.ctl & ATA_NIEN))) {
+ switch (sHST) {
+					case 0: /* successful CPB */
+ case 3: /* device error */
+ pp->state = qs_state_idle;
+ qs_enter_reg_mode(qc->ap);
+ ata_qc_complete(qc, sDST);
+ break;
+ default:
+ break;
+ }
+ }
+ }
+ }
+ } while (!sFFE);
+ return handled;
+}
+
+static inline unsigned int qs_intr_mmio(struct ata_host_set *host_set)
+{
+ unsigned int handled = 0, port_no;
+
+ for (port_no = 0; port_no < host_set->n_ports; ++port_no) {
+ struct ata_port *ap;
+ ap = host_set->ports[port_no];
+ if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) {
+ struct ata_queued_cmd *qc;
+ struct qs_port_priv *pp = ap->private_data;
+ if (!pp || pp->state != qs_state_mmio)
+ continue;
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (qc && (!(qc->tf.ctl & ATA_NIEN))) {
+
+ /* check main status, clearing INTRQ */
+ u8 status = ata_chk_status(ap);
+ if ((status & ATA_BUSY))
+ continue;
+ DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n",
+ ap->id, qc->tf.protocol, status);
+
+ /* complete taskfile transaction */
+ pp->state = qs_state_idle;
+ ata_qc_complete(qc, status);
+ handled = 1;
+ }
+ }
+ }
+ return handled;
+}
+
+static irqreturn_t qs_intr(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ata_host_set *host_set = dev_instance;
+ unsigned int handled = 0;
+
+ VPRINTK("ENTER\n");
+
+ spin_lock(&host_set->lock);
+ handled = qs_intr_pkt(host_set) | qs_intr_mmio(host_set);
+ spin_unlock(&host_set->lock);
+
+ VPRINTK("EXIT\n");
+
+ return IRQ_RETVAL(handled);
+}
+
+static void qs_ata_setup_port(struct ata_ioports *port, unsigned long base)
+{
+ port->cmd_addr =
+ port->data_addr = base + 0x400;
+ port->error_addr =
+ port->feature_addr = base + 0x408; /* hob_feature = 0x409 */
+ port->nsect_addr = base + 0x410; /* hob_nsect = 0x411 */
+ port->lbal_addr = base + 0x418; /* hob_lbal = 0x419 */
+ port->lbam_addr = base + 0x420; /* hob_lbam = 0x421 */
+ port->lbah_addr = base + 0x428; /* hob_lbah = 0x429 */
+ port->device_addr = base + 0x430;
+ port->status_addr =
+ port->command_addr = base + 0x438;
+ port->altstatus_addr =
+ port->ctl_addr = base + 0x440;
+ port->scr_addr = base + 0xc00;
+}
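+
+/* Each QStor channel has its own 0x4000-byte window of MMIO space. */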
+
+static int qs_port_start(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct qs_port_priv *pp;
+ void __iomem *mmio_base = ap->host_set->mmio_base;
+ void __iomem *chan = mmio_base + (ap->port_no * 0x4000);
+ u64 addr;
+ int rc;
+
+ rc = ata_port_start(ap);
+ if (rc)
+ return rc;
+ qs_enter_reg_mode(ap);
+ pp = kcalloc(1, sizeof(*pp), GFP_KERNEL);
+ if (!pp) {
+ rc = -ENOMEM;
+ goto err_out;
+ }
+ pp->pkt = dma_alloc_coherent(dev, QS_PKT_BYTES, &pp->pkt_dma,
+ GFP_KERNEL);
+ if (!pp->pkt) {
+ rc = -ENOMEM;
+ goto err_out_kfree;
+ }
+ memset(pp->pkt, 0, QS_PKT_BYTES);
+ ap->private_data = pp;
+
+ addr = (u64)pp->pkt_dma;
+ writel((u32) addr, chan + QS_CCF_CPBA);
+ writel((u32)(addr >> 32), chan + QS_CCF_CPBA + 4);
+ return 0;
+
+err_out_kfree:
+ kfree(pp);
+err_out:
+ ata_port_stop(ap);
+ return rc;
+}
+
+static void qs_port_stop(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct qs_port_priv *pp = ap->private_data;
+
+ if (pp != NULL) {
+ ap->private_data = NULL;
+ if (pp->pkt != NULL)
+ dma_free_coherent(dev, QS_PKT_BYTES, pp->pkt,
+ pp->pkt_dma);
+ kfree(pp);
+ }
+ ata_port_stop(ap);
+}
+
+static void qs_host_stop(struct ata_host_set *host_set)
+{
+ void __iomem *mmio_base = host_set->mmio_base;
+
+ writeb(0, mmio_base + QS_HCT_CTRL); /* disable host interrupts */
+ writeb(QS_CNFG3_GSRST, mmio_base + QS_HCF_CNFG3); /* global reset */
+}
+
+static void qs_host_init(unsigned int chip_id, struct ata_probe_ent *pe)
+{
+ void __iomem *mmio_base = pe->mmio_base;
+ unsigned int port_no;
+
+ writeb(0, mmio_base + QS_HCT_CTRL); /* disable host interrupts */
+ writeb(QS_CNFG3_GSRST, mmio_base + QS_HCF_CNFG3); /* global reset */
+
+ /* reset each channel in turn */
+ for (port_no = 0; port_no < pe->n_ports; ++port_no) {
+ u8 __iomem *chan = mmio_base + (port_no * 0x4000);
+ writeb(QS_CTR1_RDEV|QS_CTR1_RCHN, chan + QS_CCT_CTR1);
+ writeb(QS_CTR0_REG, chan + QS_CCT_CTR0);
+ readb(chan + QS_CCT_CTR0); /* flush */
+ }
+ writeb(QS_SERD3_PHY_ENA, mmio_base + QS_HVS_SERD3); /* enable phy */
+
+ for (port_no = 0; port_no < pe->n_ports; ++port_no) {
+ u8 __iomem *chan = mmio_base + (port_no * 0x4000);
+ /* set FIFO depths to same settings as Windows driver */
+ writew(32, chan + QS_CFC_HUFT);
+ writew(32, chan + QS_CFC_HDFT);
+ writew(10, chan + QS_CFC_DUFT);
+ writew( 8, chan + QS_CFC_DDFT);
+ /* set CPB size in bytes, as a power of two */
+ writeb(QS_CPB_ORDER, chan + QS_CCF_CSEP);
+ }
+ writeb(1, mmio_base + QS_HCT_CTRL); /* enable host interrupts */
+}
+
+/*
+ * The QStor understands 64-bit buses, and uses 64-bit fields
+ * for DMA pointers regardless of bus width. We just have to
+ * make sure our DMA masks are set appropriately for whatever
+ * bridge lies between us and the QStor, and then the DMA mapping
+ * code will ensure we only ever "see" appropriate buffer addresses.
+ * If we're 32-bit limited somewhere, then our 64-bit fields will
+ * just end up with zeros in the upper 32-bits, without any special
+ * logic required outside of this routine (below).
+ */
+static int qs_set_dma_masks(struct pci_dev *pdev, void __iomem *mmio_base)
+{
+ u32 bus_info = readl(mmio_base + QS_HID_HPHY);
+ int rc, have_64bit_bus = (bus_info & QS_HPHY_64BIT);
+
+ if (have_64bit_bus &&
+ !pci_set_dma_mask(pdev, 0xffffffffffffffffULL)) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (rc) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME
+ "(%s): 64-bit DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ }
+ } else {
+ rc = pci_set_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME
+ "(%s): 32-bit DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME
+ "(%s): 32-bit consistent DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ }
+ return 0;
+}
+
+static int qs_ata_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ static int printed_version;
+ struct ata_probe_ent *probe_ent = NULL;
+ void __iomem *mmio_base;
+ unsigned int board_idx = (unsigned int) ent->driver_data;
+ int rc, port_no;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+ if ((pci_resource_flags(pdev, 4) & IORESOURCE_MEM) == 0) {
+ rc = -ENODEV;
+ goto err_out_regions;
+ }
+
+ mmio_base = ioremap(pci_resource_start(pdev, 4),
+ pci_resource_len(pdev, 4));
+ if (mmio_base == NULL) {
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ rc = qs_set_dma_masks(pdev, mmio_base);
+ if (rc)
+ goto err_out_iounmap;
+
+ probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
+ if (probe_ent == NULL) {
+ rc = -ENOMEM;
+ goto err_out_iounmap;
+ }
+
+ memset(probe_ent, 0, sizeof(*probe_ent));
+ probe_ent->dev = pci_dev_to_dev(pdev);
+ INIT_LIST_HEAD(&probe_ent->node);
+
+ probe_ent->sht = qs_port_info[board_idx].sht;
+ probe_ent->host_flags = qs_port_info[board_idx].host_flags;
+ probe_ent->pio_mask = qs_port_info[board_idx].pio_mask;
+ probe_ent->mwdma_mask = qs_port_info[board_idx].mwdma_mask;
+ probe_ent->udma_mask = qs_port_info[board_idx].udma_mask;
+ probe_ent->port_ops = qs_port_info[board_idx].port_ops;
+
+ probe_ent->irq = pdev->irq;
+ probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->mmio_base = mmio_base;
+ probe_ent->n_ports = QS_PORTS;
+
+ for (port_no = 0; port_no < probe_ent->n_ports; ++port_no) {
+ unsigned long chan = (unsigned long)mmio_base +
+ (port_no * 0x4000);
+ qs_ata_setup_port(&probe_ent->port[port_no], chan);
+ }
+
+ pci_set_master(pdev);
+
+ /* initialize adapter */
+ qs_host_init(board_idx, probe_ent);
+
+ rc = ata_device_add(probe_ent);
+ kfree(probe_ent);
+ if (rc != QS_PORTS)
+ goto err_out_iounmap;
+ return 0;
+
+err_out_iounmap:
+ iounmap(mmio_base);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
+static int __init qs_ata_init(void)
+{
+ return pci_module_init(&qs_ata_pci_driver);
+}
+
+static void __exit qs_ata_exit(void)
+{
+ pci_unregister_driver(&qs_ata_pci_driver);
+}
+
+MODULE_AUTHOR("Mark Lord");
+MODULE_DESCRIPTION("Pacific Digital Corporation QStor SATA low-level driver");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, qs_ata_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+module_init(qs_ata_init);
+module_exit(qs_ata_exit);
void *mmio_base, *dimm_mmio = NULL;
struct pdc_host_priv *hpriv = NULL;
unsigned int board_idx = (unsigned int) ent->driver_data;
+ int pci_dev_busy = 0;
int rc;
if (!printed_version++)
return rc;
rc = pci_request_regions(pdev, DRV_NAME);
- if (rc)
+ if (rc) {
+ pci_dev_busy = 1;
goto err_out;
+ }
rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
if (rc)
err_out_regions:
pci_release_regions(pdev);
err_out:
- pci_disable_device(pdev);
+ if (!pci_dev_busy)
+ pci_disable_device(pdev);
return rc;
}
#include <linux/libata.h>
#define DRV_NAME "sata_uli"
-#define DRV_VERSION "0.2"
+#define DRV_VERSION "0.5"
enum {
uli_5289 = 0,
uli_5287 = 1,
+ uli_5281 = 2,
/* PCI configuration registers */
- ULI_SCR_BASE = 0x90, /* sata0 phy SCR registers */
- ULI_SATA1_OFS = 0x10, /* offset from sata0->sata1 phy regs */
-
+ ULI5287_BASE = 0x90, /* sata0 phy SCR registers */
+ ULI5287_OFFS = 0x10, /* offset from sata0->sata1 phy regs */
+ ULI5281_BASE = 0x60, /* sata0 phy SCR registers */
+ ULI5281_OFFS = 0x60, /* offset from sata0->sata1 phy regs */
};
static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
static struct pci_device_id uli_pci_tbl[] = {
{ PCI_VENDOR_ID_AL, 0x5289, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5289 },
{ PCI_VENDOR_ID_AL, 0x5287, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5287 },
+ { PCI_VENDOR_ID_AL, 0x5281, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5281 },
{ } /* terminate list */
};
.bmdma_setup = ata_bmdma_setup,
.bmdma_start = ata_bmdma_start,
+ .bmdma_stop = ata_bmdma_stop,
+ .bmdma_status = ata_bmdma_status,
.qc_prep = ata_qc_prep,
.qc_issue = ata_qc_issue_prot,
MODULE_DEVICE_TABLE(pci, uli_pci_tbl);
MODULE_VERSION(DRV_VERSION);
-static unsigned int get_scr_cfg_addr(unsigned int port_no, unsigned int sc_reg)
+static unsigned int get_scr_cfg_addr(struct ata_port *ap, unsigned int sc_reg)
{
- unsigned int addr = ULI_SCR_BASE + (4 * sc_reg);
-
- switch (port_no) {
- case 0:
- break;
- case 1:
- addr += ULI_SATA1_OFS;
- break;
- case 2:
- addr += ULI_SATA1_OFS*4;
- break;
- case 3:
- addr += ULI_SATA1_OFS*5;
- break;
- default:
- BUG();
- break;
- }
- return addr;
+ return ap->ioaddr.scr_addr + (4 * sc_reg);
}
static u32 uli_scr_cfg_read (struct ata_port *ap, unsigned int sc_reg)
{
struct pci_dev *pdev = to_pci_dev(ap->host_set->dev);
- unsigned int cfg_addr = get_scr_cfg_addr(ap->port_no, sc_reg);
+ unsigned int cfg_addr = get_scr_cfg_addr(ap, sc_reg);
u32 val;
pci_read_config_dword(pdev, cfg_addr, &val);
static void uli_scr_cfg_write (struct ata_port *ap, unsigned int scr, u32 val)
{
struct pci_dev *pdev = to_pci_dev(ap->host_set->dev);
- unsigned int cfg_addr = get_scr_cfg_addr(ap->port_no, scr);
+ unsigned int cfg_addr = get_scr_cfg_addr(ap, scr);
pci_write_config_dword(pdev, cfg_addr, val);
}
struct ata_port_info *ppi;
int rc;
unsigned int board_idx = (unsigned int) ent->driver_data;
+ int pci_dev_busy = 0;
rc = pci_enable_device(pdev);
if (rc)
return rc;
rc = pci_request_regions(pdev, DRV_NAME);
- if (rc)
+ if (rc) {
+ pci_dev_busy = 1;
goto err_out;
+ }
rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
if (rc)
rc = -ENOMEM;
goto err_out_regions;
}
-
+
switch (board_idx) {
case uli_5287:
+ probe_ent->port[0].scr_addr = ULI5287_BASE;
+ probe_ent->port[1].scr_addr = ULI5287_BASE + ULI5287_OFFS;
probe_ent->n_ports = 4;
probe_ent->port[2].cmd_addr = pci_resource_start(pdev, 0) + 8;
probe_ent->port[2].ctl_addr =
(pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS) + 4;
probe_ent->port[2].bmdma_addr = pci_resource_start(pdev, 4) + 16;
+ probe_ent->port[2].scr_addr = ULI5287_BASE + ULI5287_OFFS*4;
probe_ent->port[3].cmd_addr = pci_resource_start(pdev, 2) + 8;
probe_ent->port[3].altstatus_addr =
probe_ent->port[3].ctl_addr =
(pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS) + 4;
probe_ent->port[3].bmdma_addr = pci_resource_start(pdev, 4) + 24;
+ probe_ent->port[3].scr_addr = ULI5287_BASE + ULI5287_OFFS*5;
ata_std_ports(&probe_ent->port[2]);
ata_std_ports(&probe_ent->port[3]);
break;
case uli_5289:
- /* do nothing; ata_pci_init_native_mode did it all */
+ probe_ent->port[0].scr_addr = ULI5287_BASE;
+ probe_ent->port[1].scr_addr = ULI5287_BASE + ULI5287_OFFS;
+ break;
+
+ case uli_5281:
+ probe_ent->port[0].scr_addr = ULI5281_BASE;
+ probe_ent->port[1].scr_addr = ULI5281_BASE + ULI5281_OFFS;
break;
default:
pci_release_regions(pdev);
err_out:
- pci_disable_device(pdev);
+ if (!pci_dev_busy)
+ pci_disable_device(pdev);
return rc;
}
--- /dev/null
+/*
+ * iSCSI transport class definitions
+ *
+ * Copyright (C) IBM Corporation, 2004
+ * Copyright (C) Mike Christie, 2004
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+#include <linux/module.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_transport_iscsi.h>
+
+#define ISCSI_SESSION_ATTRS 20
+#define ISCSI_HOST_ATTRS 2
+
+struct iscsi_internal {
+ struct scsi_transport_template t;
+ struct iscsi_function_template *fnt;
+ /*
+ * We do not have any private or other attrs.
+ */
+ struct class_device_attribute *session_attrs[ISCSI_SESSION_ATTRS + 1];
+ struct class_device_attribute *host_attrs[ISCSI_HOST_ATTRS + 1];
+};
+
+#define to_iscsi_internal(tmpl) container_of(tmpl, struct iscsi_internal, t)
+
+static DECLARE_TRANSPORT_CLASS(iscsi_transport_class,
+ "iscsi_transport",
+ NULL,
+ NULL,
+ NULL);
+
+static DECLARE_TRANSPORT_CLASS(iscsi_host_class,
+ "iscsi_host",
+ NULL,
+ NULL,
+ NULL);
+/*
+ * iSCSI target and session attrs
+ */
+#define iscsi_session_show_fn(field, format) \
+ \
+static ssize_t \
+show_session_##field(struct class_device *cdev, char *buf) \
+{ \
+ struct scsi_target *starget = transport_class_to_starget(cdev); \
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt); \
+ \
+ if (i->fnt->get_##field) \
+ i->fnt->get_##field(starget); \
+ return snprintf(buf, 20, format"\n", iscsi_##field(starget)); \
+}
+
+#define iscsi_session_rd_attr(field, format) \
+ iscsi_session_show_fn(field, format) \
+static CLASS_DEVICE_ATTR(field, S_IRUGO, show_session_##field, NULL);
+
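+/*
+ * For example, iscsi_session_rd_attr(tpgt, "%hu") below expands (roughly)
+ * to a show_session_tpgt() that invokes the LLD's get_tpgt() hook, if
+ * provided, and then prints the cached iscsi_tpgt(starget) value, plus a
+ * read-only class_device_attr_tpgt that SETUP_SESSION_RD_ATTR() registers
+ * at transport-attach time when the LLD sets fnt->show_tpgt.
+ */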
+iscsi_session_rd_attr(tpgt, "%hu");
+iscsi_session_rd_attr(tsih, "%2x");
+iscsi_session_rd_attr(max_recv_data_segment_len, "%u");
+iscsi_session_rd_attr(max_burst_len, "%u");
+iscsi_session_rd_attr(first_burst_len, "%u");
+iscsi_session_rd_attr(def_time2wait, "%hu");
+iscsi_session_rd_attr(def_time2retain, "%hu");
+iscsi_session_rd_attr(max_outstanding_r2t, "%hu");
+iscsi_session_rd_attr(erl, "%d");
+
+
+#define iscsi_session_show_bool_fn(field) \
+ \
+static ssize_t \
+show_session_bool_##field(struct class_device *cdev, char *buf) \
+{ \
+ struct scsi_target *starget = transport_class_to_starget(cdev); \
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt); \
+ \
+ if (i->fnt->get_##field) \
+ i->fnt->get_##field(starget); \
+ \
+ if (iscsi_##field(starget)) \
+ return sprintf(buf, "Yes\n"); \
+ return sprintf(buf, "No\n"); \
+}
+
+#define iscsi_session_rd_bool_attr(field) \
+ iscsi_session_show_bool_fn(field) \
+static CLASS_DEVICE_ATTR(field, S_IRUGO, show_session_bool_##field, NULL);
+
+iscsi_session_rd_bool_attr(initial_r2t);
+iscsi_session_rd_bool_attr(immediate_data);
+iscsi_session_rd_bool_attr(data_pdu_in_order);
+iscsi_session_rd_bool_attr(data_sequence_in_order);
+
+#define iscsi_session_show_digest_fn(field) \
+ \
+static ssize_t \
+show_##field(struct class_device *cdev, char *buf) \
+{ \
+ struct scsi_target *starget = transport_class_to_starget(cdev); \
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt); \
+ \
+ if (i->fnt->get_##field) \
+ i->fnt->get_##field(starget); \
+ \
+ if (iscsi_##field(starget)) \
+ return sprintf(buf, "CRC32C\n"); \
+ return sprintf(buf, "None\n"); \
+}
+
+#define iscsi_session_rd_digest_attr(field) \
+ iscsi_session_show_digest_fn(field) \
+static CLASS_DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+iscsi_session_rd_digest_attr(header_digest);
+iscsi_session_rd_digest_attr(data_digest);
+
+static ssize_t
+show_port(struct class_device *cdev, char *buf)
+{
+ struct scsi_target *starget = transport_class_to_starget(cdev);
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt);
+
+ if (i->fnt->get_port)
+ i->fnt->get_port(starget);
+
+ return snprintf(buf, 20, "%hu\n", ntohs(iscsi_port(starget)));
+}
+static CLASS_DEVICE_ATTR(port, S_IRUGO, show_port, NULL);
+
+static ssize_t
+show_ip_address(struct class_device *cdev, char *buf)
+{
+ struct scsi_target *starget = transport_class_to_starget(cdev);
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt);
+
+ if (i->fnt->get_ip_address)
+ i->fnt->get_ip_address(starget);
+
+ if (iscsi_addr_type(starget) == AF_INET)
+ return sprintf(buf, "%u.%u.%u.%u\n",
+ NIPQUAD(iscsi_sin_addr(starget)));
+	else if (iscsi_addr_type(starget) == AF_INET6)
+ return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ NIP6(iscsi_sin6_addr(starget)));
+ return -EINVAL;
+}
+static CLASS_DEVICE_ATTR(ip_address, S_IRUGO, show_ip_address, NULL);
+
+static ssize_t
+show_isid(struct class_device *cdev, char *buf)
+{
+ struct scsi_target *starget = transport_class_to_starget(cdev);
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt);
+
+ if (i->fnt->get_isid)
+ i->fnt->get_isid(starget);
+
+ return sprintf(buf, "%02x%02x%02x%02x%02x%02x\n",
+ iscsi_isid(starget)[0], iscsi_isid(starget)[1],
+ iscsi_isid(starget)[2], iscsi_isid(starget)[3],
+ iscsi_isid(starget)[4], iscsi_isid(starget)[5]);
+}
+static CLASS_DEVICE_ATTR(isid, S_IRUGO, show_isid, NULL);
+
+/*
+ * This is used for iSCSI names. Normally, we follow
+ * the transport class convention of having the lld
+ * set the field, but in these cases the value is
+ * too large.
+ */
+#define iscsi_session_show_str_fn(field) \
+ \
+static ssize_t \
+show_session_str_##field(struct class_device *cdev, char *buf) \
+{ \
+ ssize_t ret = 0; \
+ struct scsi_target *starget = transport_class_to_starget(cdev); \
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); \
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt); \
+ \
+ if (i->fnt->get_##field) \
+ ret = i->fnt->get_##field(starget, buf, PAGE_SIZE); \
+ return ret; \
+}
+
+#define iscsi_session_rd_str_attr(field) \
+ iscsi_session_show_str_fn(field) \
+static CLASS_DEVICE_ATTR(field, S_IRUGO, show_session_str_##field, NULL);
+
+iscsi_session_rd_str_attr(target_name);
+iscsi_session_rd_str_attr(target_alias);
+
+/*
+ * iSCSI host attrs
+ */
+
+/*
+ * Again, this is used for iSCSI names. Normally, we follow
+ * the transport class convention of having the lld set
+ * the field, but in these cases the value is too large.
+ */
+#define iscsi_host_show_str_fn(field) \
+ \
+static ssize_t \
+show_host_str_##field(struct class_device *cdev, char *buf) \
+{ \
+ int ret = 0; \
+ struct Scsi_Host *shost = transport_class_to_shost(cdev); \
+ struct iscsi_internal *i = to_iscsi_internal(shost->transportt); \
+ \
+ if (i->fnt->get_##field) \
+ ret = i->fnt->get_##field(shost, buf, PAGE_SIZE); \
+ return ret; \
+}
+
+#define iscsi_host_rd_str_attr(field) \
+ iscsi_host_show_str_fn(field) \
+static CLASS_DEVICE_ATTR(field, S_IRUGO, show_host_str_##field, NULL);
+
+iscsi_host_rd_str_attr(initiator_name);
+iscsi_host_rd_str_attr(initiator_alias);
+
+#define SETUP_SESSION_RD_ATTR(field) \
+ if (i->fnt->show_##field) { \
+ i->session_attrs[count] = &class_device_attr_##field; \
+ count++; \
+ }
+
+#define SETUP_HOST_RD_ATTR(field) \
+ if (i->fnt->show_##field) { \
+ i->host_attrs[count] = &class_device_attr_##field; \
+ count++; \
+ }
+
+static int iscsi_host_match(struct attribute_container *cont,
+ struct device *dev)
+{
+ struct Scsi_Host *shost;
+ struct iscsi_internal *i;
+
+ if (!scsi_is_host_device(dev))
+ return 0;
+
+ shost = dev_to_shost(dev);
+ if (!shost->transportt || shost->transportt->host_attrs.class
+ != &iscsi_host_class.class)
+ return 0;
+
+ i = to_iscsi_internal(shost->transportt);
+
+ return &i->t.host_attrs == cont;
+}
+
+static int iscsi_target_match(struct attribute_container *cont,
+ struct device *dev)
+{
+ struct Scsi_Host *shost;
+ struct iscsi_internal *i;
+
+ if (!scsi_is_target_device(dev))
+ return 0;
+
+ shost = dev_to_shost(dev->parent);
+ if (!shost->transportt || shost->transportt->host_attrs.class
+ != &iscsi_host_class.class)
+ return 0;
+
+ i = to_iscsi_internal(shost->transportt);
+
+ return &i->t.target_attrs == cont;
+}
+
+struct scsi_transport_template *
+iscsi_attach_transport(struct iscsi_function_template *fnt)
+{
+ struct iscsi_internal *i = kmalloc(sizeof(struct iscsi_internal),
+ GFP_KERNEL);
+ int count = 0;
+
+ if (unlikely(!i))
+ return NULL;
+
+ memset(i, 0, sizeof(struct iscsi_internal));
+ i->fnt = fnt;
+
+ i->t.target_attrs.attrs = &i->session_attrs[0];
+ i->t.target_attrs.class = &iscsi_transport_class.class;
+ i->t.target_attrs.match = iscsi_target_match;
+ attribute_container_register(&i->t.target_attrs);
+ i->t.target_size = sizeof(struct iscsi_class_session);
+
+ SETUP_SESSION_RD_ATTR(tsih);
+ SETUP_SESSION_RD_ATTR(isid);
+ SETUP_SESSION_RD_ATTR(header_digest);
+ SETUP_SESSION_RD_ATTR(data_digest);
+ SETUP_SESSION_RD_ATTR(target_name);
+ SETUP_SESSION_RD_ATTR(target_alias);
+ SETUP_SESSION_RD_ATTR(port);
+ SETUP_SESSION_RD_ATTR(tpgt);
+ SETUP_SESSION_RD_ATTR(ip_address);
+ SETUP_SESSION_RD_ATTR(initial_r2t);
+ SETUP_SESSION_RD_ATTR(immediate_data);
+ SETUP_SESSION_RD_ATTR(max_recv_data_segment_len);
+ SETUP_SESSION_RD_ATTR(max_burst_len);
+ SETUP_SESSION_RD_ATTR(first_burst_len);
+ SETUP_SESSION_RD_ATTR(def_time2wait);
+ SETUP_SESSION_RD_ATTR(def_time2retain);
+ SETUP_SESSION_RD_ATTR(max_outstanding_r2t);
+ SETUP_SESSION_RD_ATTR(data_pdu_in_order);
+ SETUP_SESSION_RD_ATTR(data_sequence_in_order);
+ SETUP_SESSION_RD_ATTR(erl);
+
+ BUG_ON(count > ISCSI_SESSION_ATTRS);
+ i->session_attrs[count] = NULL;
+
+ i->t.host_attrs.attrs = &i->host_attrs[0];
+ i->t.host_attrs.class = &iscsi_host_class.class;
+ i->t.host_attrs.match = iscsi_host_match;
+ attribute_container_register(&i->t.host_attrs);
+ i->t.host_size = 0;
+
+ count = 0;
+ SETUP_HOST_RD_ATTR(initiator_name);
+ SETUP_HOST_RD_ATTR(initiator_alias);
+
+ BUG_ON(count > ISCSI_HOST_ATTRS);
+ i->host_attrs[count] = NULL;
+
+ return &i->t;
+}
+
+EXPORT_SYMBOL(iscsi_attach_transport);
+
+void iscsi_release_transport(struct scsi_transport_template *t)
+{
+ struct iscsi_internal *i = to_iscsi_internal(t);
+
+ attribute_container_unregister(&i->t.target_attrs);
+ attribute_container_unregister(&i->t.host_attrs);
+
+ kfree(i);
+}
+
+EXPORT_SYMBOL(iscsi_release_transport);
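+
+/*
+ * Expected use from an iSCSI LLD (a rough sketch; the my_* names are
+ * illustrative, the rest follows the hooks and helpers defined above):
+ *
+ *	static struct iscsi_function_template my_fnt = {
+ *		.get_tpgt	= my_get_tpgt,
+ *		.show_tpgt	= 1,
+ *		...
+ *	};
+ *
+ *	shost->transportt = iscsi_attach_transport(&my_fnt);
+ *	...
+ *	iscsi_release_transport(shost->transportt);
+ */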
+
+static __init int iscsi_transport_init(void)
+{
+ int err = transport_class_register(&iscsi_transport_class);
+
+ if (err)
+ return err;
+ return transport_class_register(&iscsi_host_class);
+}
+
+static void __exit iscsi_transport_exit(void)
+{
+ transport_class_unregister(&iscsi_host_class);
+ transport_class_unregister(&iscsi_transport_class);
+}
+
+module_init(iscsi_transport_init);
+module_exit(iscsi_transport_exit);
+
+MODULE_AUTHOR("Mike Christie");
+MODULE_DESCRIPTION("iSCSI Transport Attributes");
+MODULE_LICENSE("GPL");
*/
/* #define SYM_CONF_IARB_SUPPORT */
-/*
- * Number of lists for the optimization of the IO timeout handling.
- * Not used under FreeBSD and Linux.
- */
-#ifndef SYM_CONF_TIMEOUT_ORDER_MAX
-#define SYM_CONF_TIMEOUT_ORDER_MAX (8)
-#endif
-
/*
* Only relevant if IARB support configured.
* - Max number of successive settings of IARB hints.
#ifndef SYM_DEFS_H
#define SYM_DEFS_H
-#define SYM_VERSION "2.1.18m"
+#define SYM_VERSION "2.1.18n"
#define SYM_DRIVER_NAME "sym-" SYM_VERSION
-/*
- * PCI device identifier of SYMBIOS chips.
- */
-#define PCI_ID_SYM53C810 PCI_DEVICE_ID_NCR_53C810
-#define PCI_ID_SYM53C810AP PCI_DEVICE_ID_LSI_53C810AP
-#define PCI_ID_SYM53C815 PCI_DEVICE_ID_NCR_53C815
-#define PCI_ID_SYM53C820 PCI_DEVICE_ID_NCR_53C820
-#define PCI_ID_SYM53C825 PCI_DEVICE_ID_NCR_53C825
-#define PCI_ID_SYM53C860 PCI_DEVICE_ID_NCR_53C860
-#define PCI_ID_SYM53C875 PCI_DEVICE_ID_NCR_53C875
-#define PCI_ID_SYM53C875_2 PCI_DEVICE_ID_NCR_53C875J
-#define PCI_ID_SYM53C885 PCI_DEVICE_ID_NCR_53C885
-#define PCI_ID_SYM53C895 PCI_DEVICE_ID_NCR_53C895
-#define PCI_ID_SYM53C896 PCI_DEVICE_ID_NCR_53C896
-#define PCI_ID_SYM53C895A PCI_DEVICE_ID_LSI_53C895A
-#define PCI_ID_SYM53C875A PCI_DEVICE_ID_LSI_53C875A
-#define PCI_ID_LSI53C1010_33 PCI_DEVICE_ID_LSI_53C1010_33
-#define PCI_ID_LSI53C1010_66 PCI_DEVICE_ID_LSI_53C1010_66
-#define PCI_ID_LSI53C1510D PCI_DEVICE_ID_LSI_53C1510
-
/*
* SYM53C8XX device features descriptor.
*/
#define M_RESTORE_DP RESTORE_POINTERS
#define M_DISCONNECT DISCONNECT
#define M_ID_ERROR INITIATOR_ERROR
-#define M_ABORT ABORT
+#define M_ABORT ABORT_TASK_SET
#define M_REJECT MESSAGE_REJECT
#define M_NOOP NOP
#define M_PARITY MSG_PARITY_ERROR
#define M_LCOMPLETE LINKED_CMD_COMPLETE
#define M_FCOMPLETE LINKED_FLG_CMD_COMPLETE
-#define M_RESET BUS_DEVICE_RESET
-#define M_ABORT_TAG (0x0d)
-#define M_CLEAR_QUEUE (0x0e)
+#define M_RESET TARGET_RESET
+#define M_ABORT_TAG ABORT_TASK
+#define M_CLEAR_QUEUE CLEAR_TASK_SET
#define M_INIT_REC INITIATE_RECOVERY
#define M_REL_REC RELEASE_RECOVERY
#define M_TERMINATE (0x11)
#define M_SIMPLE_TAG SIMPLE_QUEUE_TAG
#define M_HEAD_TAG HEAD_OF_QUEUE_TAG
#define M_ORDERED_TAG ORDERED_QUEUE_TAG
-#define M_IGN_RESIDUE (0x23)
+#define M_IGN_RESIDUE IGNORE_WIDE_RESIDUE
#define M_X_MODIFY_DP EXTENDED_MODIFY_DATA_POINTER
#define M_X_SYNC_REQ EXTENDED_SDTR
#define M_X_WIDE_REQ EXTENDED_WDTR
-#define M_X_PPR_REQ (0x04)
+#define M_X_PPR_REQ EXTENDED_PPR
/*
* PPR protocol options
* Remove a couple of work-arounds specific to C1010 if
* they are not desirable. See `sym_fw2.h' for more details.
*/
- if (!(np->device_id == PCI_ID_LSI53C1010_66 &&
+ if (!(np->device_id == PCI_DEVICE_ID_LSI_53C1010_66 &&
np->revision_id < 0x1 &&
np->pciclk_khz < 60000)) {
scripta0->datao_phase[0] = cpu_to_scr(SCR_NO_OP);
scripta0->datao_phase[1] = cpu_to_scr(0);
}
- if (!(np->device_id == PCI_ID_LSI53C1010_33 &&
+ if (!(np->device_id == PCI_DEVICE_ID_LSI_53C1010_33 &&
/* np->revision_id < 0xff */ 1)) {
scripta0->sel_done[0] = cpu_to_scr(SCR_NO_OP);
scripta0->sel_done[1] = cpu_to_scr(0);
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
-#ifndef bzero
-#define bzero(d, n) memset((d), 0, (n))
-#endif
-
-/*
- * General driver includes.
- */
#include "sym_conf.h"
#include "sym_defs.h"
#include "sym_misc.h"
typedef struct sym_tcb *tcb_p;
typedef struct sym_lcb *lcb_p;
typedef struct sym_ccb *ccb_p;
-typedef struct sym_hcb *hcb_p;
-
-/*
- * Define a reference to the O/S dependent IO request.
- */
-typedef struct scsi_cmnd *cam_ccb_p; /* Generic */
-typedef struct scsi_cmnd *cam_scsiio_p;/* SCSI I/O */
-
/*
* IO functions definition for big/little endian CPU support.
/*
* Async handler for negotiations.
*/
-void sym_xpt_async_nego_wide(hcb_p np, int target);
+void sym_xpt_async_nego_wide(struct sym_hcb *np, int target);
#define sym_xpt_async_nego_sync(np, target) \
sym_announce_transfer_rate(np, target)
#define sym_xpt_async_nego_ppr(np, target) \
/*
* Build CAM result for a successful IO and for a failed IO.
*/
-static __inline void sym_set_cam_result_ok(hcb_p np, ccb_p cp, int resid)
+static __inline void sym_set_cam_result_ok(struct sym_hcb *np, ccb_p cp, int resid)
{
struct scsi_cmnd *cmd = cp->cam_ccb;
cmd->resid = resid;
cmd->result = (((DID_OK) << 16) + ((cp->ssss_status) & 0x7f));
}
-void sym_set_cam_result_error(hcb_p np, ccb_p cp, int resid);
+void sym_set_cam_result_error(struct sym_hcb *np, ccb_p cp, int resid);
/*
* Other O/S specific methods.
#define sym_cam_target_id(ccb) (ccb)->target
#define sym_cam_target_lun(ccb) (ccb)->lun
#define sym_freeze_cam_ccb(ccb) do { ; } while (0)
-void sym_xpt_done(hcb_p np, cam_ccb_p ccb);
-void sym_xpt_done2(hcb_p np, cam_ccb_p ccb, int cam_status);
+void sym_xpt_done(struct sym_hcb *np, struct scsi_cmnd *ccb);
void sym_print_addr (ccb_p cp);
-void sym_xpt_async_bus_reset(hcb_p np);
-void sym_xpt_async_sent_bdr(hcb_p np, int target);
-int sym_setup_data_and_start (hcb_p np, cam_scsiio_p csio, ccb_p cp);
-void sym_log_bus_error(hcb_p np);
-void sym_sniff_inquiry(hcb_p np, struct scsi_cmnd *cmd, int resid);
+void sym_xpt_async_bus_reset(struct sym_hcb *np);
+void sym_xpt_async_sent_bdr(struct sym_hcb *np, int target);
+int sym_setup_data_and_start (struct sym_hcb *np, struct scsi_cmnd *csio, ccb_p cp);
+void sym_log_bus_error(struct sym_hcb *np);
+void sym_sniff_inquiry(struct sym_hcb *np, struct scsi_cmnd *cmd, int resid);
#endif /* SYM_GLUE_H */
/*
* Needed function prototypes.
*/
-static void sym_int_ma (hcb_p np);
-static void sym_int_sir (hcb_p np);
-static ccb_p sym_alloc_ccb(hcb_p np);
-static ccb_p sym_ccb_from_dsa(hcb_p np, u32 dsa);
-static void sym_alloc_lcb_tags (hcb_p np, u_char tn, u_char ln);
-static void sym_complete_error (hcb_p np, ccb_p cp);
-static void sym_complete_ok (hcb_p np, ccb_p cp);
-static int sym_compute_residual(hcb_p np, ccb_p cp);
+static void sym_int_ma (struct sym_hcb *np);
+static void sym_int_sir (struct sym_hcb *np);
+static ccb_p sym_alloc_ccb(struct sym_hcb *np);
+static ccb_p sym_ccb_from_dsa(struct sym_hcb *np, u32 dsa);
+static void sym_alloc_lcb_tags (struct sym_hcb *np, u_char tn, u_char ln);
+static void sym_complete_error (struct sym_hcb *np, ccb_p cp);
+static void sym_complete_ok (struct sym_hcb *np, ccb_p cp);
+static int sym_compute_residual(struct sym_hcb *np, ccb_p cp);
/*
* Returns the name of this driver.
 * Print something that allows retrieving the controller type,
 * unit, target and lun concerned by a kernel message.
*/
-static void sym_print_target (hcb_p np, int target)
+static void sym_print_target (struct sym_hcb *np, int target)
{
printf ("%s:%d:", sym_name(np), target);
}
-static void sym_print_lun(hcb_p np, int target, int lun)
+static void sym_print_lun(struct sym_hcb *np, int target, int lun)
{
printf ("%s:%d:%d:", sym_name(np), target, lun);
}
printf (".\n");
}
-static void sym_print_nego_msg (hcb_p np, int target, char *label, u_char *msg)
+static void sym_print_nego_msg (struct sym_hcb *np, int target, char *label, u_char *msg)
{
PRINT_TARGET(np, target);
if (label)
* On the other hand, LVD devices need some delay
* to settle and report actual BUS mode in STEST4.
*/
-static void sym_chip_reset (hcb_p np)
+static void sym_chip_reset (struct sym_hcb *np)
{
OUTB (nc_istat, SRST);
UDELAY (10);
* So, we need to abort the current operation prior to
* soft resetting the chip.
*/
-static void sym_soft_reset (hcb_p np)
+static void sym_soft_reset (struct sym_hcb *np)
{
u_char istat = 0;
int i;
*
* The interrupt handler will reinitialize the chip.
*/
-static void sym_start_reset(hcb_p np)
+static void sym_start_reset(struct sym_hcb *np)
{
(void) sym_reset_scsi_bus(np, 1);
}
-int sym_reset_scsi_bus(hcb_p np, int enab_int)
+int sym_reset_scsi_bus(struct sym_hcb *np, int enab_int)
{
u32 term;
int retv = 0;
/*
* Select SCSI clock frequency
*/
-static void sym_selectclock(hcb_p np, u_char scntl3)
+static void sym_selectclock(struct sym_hcb *np, u_char scntl3)
{
/*
* If multiplier not present or not selected, leave here.
/*
* calculate SCSI clock frequency (in KHz)
*/
-static unsigned getfreq (hcb_p np, int gen)
+static unsigned getfreq (struct sym_hcb *np, int gen)
{
unsigned int ms = 0;
unsigned int f;
return f;
}
-static unsigned sym_getfreq (hcb_p np)
+static unsigned sym_getfreq (struct sym_hcb *np)
{
u_int f1, f2;
int gen = 8;
/*
* Get/probe chip SCSI clock frequency
*/
-static void sym_getclock (hcb_p np, int mult)
+static void sym_getclock (struct sym_hcb *np, int mult)
{
unsigned char scntl3 = np->sv_scntl3;
unsigned char stest1 = np->sv_stest1;
/*
* Get/probe PCI clock frequency
*/
-static int sym_getpciclock (hcb_p np)
+static int sym_getpciclock (struct sym_hcb *np)
{
int f = 0;
* synchronous factor period.
*/
static int
-sym_getsync(hcb_p np, u_char dt, u_char sfac, u_char *divp, u_char *fakp)
+sym_getsync(struct sym_hcb *np, u_char dt, u_char sfac, u_char *divp, u_char *fakp)
{
u32 clk = np->clock_khz; /* SCSI clock frequency in kHz */
int div = np->clock_divn; /* Number of divisors supported */
/*
* Set initial io register bits from burst code.
*/
-static __inline void sym_init_burst(hcb_p np, u_char bc)
+static __inline void sym_init_burst(struct sym_hcb *np, u_char bc)
{
np->rv_ctest4 &= ~0x80;
np->rv_dmode &= ~(0x3 << 6);
/*
* Print out the list of targets that have some flag disabled by user.
*/
-static void sym_print_targets_flag(hcb_p np, int mask, char *msg)
+static void sym_print_targets_flag(struct sym_hcb *np, int mask, char *msg)
{
int cnt;
int i;
* is not safe on paper, but it seems to work quite
* well. :)
*/
-static void sym_save_initial_setting (hcb_p np)
+static void sym_save_initial_setting (struct sym_hcb *np)
{
np->sv_scntl0 = INB(nc_scntl0) & 0x0a;
np->sv_scntl3 = INB(nc_scntl3) & 0x07;
np->sv_ctest5 = INB(nc_ctest5) & 0x24;
}
-#ifdef CONFIG_PARISC
-static u32 parisc_setup_hcb(hcb_p np, u32 period)
-{
- unsigned long pdc_period;
- char scsi_mode;
- struct hardware_path hwpath;
-
- /* Host firmware (PDC) keeps a table for crippling SCSI capabilities.
- * Many newer machines export one channel of 53c896 chip
- * as SE, 50-pin HD. Also used for Multi-initiator SCSI clusters
- * to set the SCSI Initiator ID.
- */
- get_pci_node_path(np->s.device, &hwpath);
- if (!pdc_get_initiator(&hwpath, &np->myaddr, &pdc_period,
- &np->maxwide, &scsi_mode))
- return period;
-
- if (scsi_mode >= 0) {
- /* C3000 PDC reports period/mode */
- SYM_SETUP_SCSI_DIFF = 0;
- switch(scsi_mode) {
- case 0: np->scsi_mode = SMODE_SE; break;
- case 1: np->scsi_mode = SMODE_HVD; break;
- case 2: np->scsi_mode = SMODE_LVD; break;
- default: break;
- }
- }
-
- return (u32) pdc_period;
-}
-#else
-static inline int parisc_setup_hcb(hcb_p np, u32 period) { return period; }
-#endif
/*
* Prepare io register values used by sym_start_up()
* according to selected and supported features.
*/
-static int sym_prepare_setting(hcb_p np, struct sym_nvram *nvram)
+static int sym_prepare_setting(struct sym_hcb *np, struct sym_nvram *nvram)
{
u_char burst_max;
u32 period;
*/
period = (4 * div_10M[0] + np->clock_khz - 1) / np->clock_khz;
- period = parisc_setup_hcb(np, period);
-
if (period <= 250) np->minsync = 10;
else if (period <= 303) np->minsync = 11;
else if (period <= 500) np->minsync = 12;
* In dual channel mode, contention occurs if internal cycles
* are used. Disable internal cycles.
*/
- if (np->device_id == PCI_ID_LSI53C1010_33 &&
+ if (np->device_id == PCI_DEVICE_ID_LSI_53C1010_33 &&
np->revision_id < 0x1)
np->rv_ccntl0 |= DILS;
* this driver. The generic ncr driver that does not use
* LOAD/STORE instructions does not need this work-around.
*/
- if ((np->device_id == PCI_ID_SYM53C810 &&
+ if ((np->device_id == PCI_DEVICE_ID_NCR_53C810 &&
np->revision_id >= 0x10 && np->revision_id <= 0x11) ||
- (np->device_id == PCI_ID_SYM53C860 &&
+ (np->device_id == PCI_DEVICE_ID_NCR_53C860 &&
np->revision_id <= 0x1))
np->features &= ~(FE_WRIE|FE_ERL|FE_ERMP);
if ((SYM_SETUP_SCSI_LED ||
(nvram->type == SYM_SYMBIOS_NVRAM ||
(nvram->type == SYM_TEKRAM_NVRAM &&
- np->device_id == PCI_ID_SYM53C895))) &&
+ np->device_id == PCI_DEVICE_ID_NCR_53C895))) &&
!(np->features & FE_LEDC) && !(np->sv_gpcntl & 0x01))
np->features |= FE_LED0;
* Has to be called with interrupts disabled.
*/
#ifndef SYM_CONF_IOMAPPED
-static int sym_regtest (hcb_p np)
+static int sym_regtest (struct sym_hcb *np)
{
register volatile u32 data;
/*
}
#endif
-static int sym_snooptest (hcb_p np)
+static int sym_snooptest (struct sym_hcb *np)
{
u32 sym_rd, sym_wr, sym_bk, host_rd, host_wr, pc, dstat;
int i, err=0;
* First 24 register of the chip:
* r0..rf
*/
-static void sym_log_hard_error(hcb_p np, u_short sist, u_char dstat)
+static void sym_log_hard_error(struct sym_hcb *np, u_short sist, u_char dstat)
{
u32 dsp;
int script_ofs;
}
static struct sym_pci_chip sym_pci_dev_table[] = {
- {PCI_ID_SYM53C810, 0x0f, "810", 4, 8, 4, 64,
+ {PCI_DEVICE_ID_NCR_53C810, 0x0f, "810", 4, 8, 4, 64,
FE_ERL}
,
#ifdef SYM_DEBUG_GENERIC_SUPPORT
- {PCI_ID_SYM53C810, 0xff, "810a", 4, 8, 4, 1,
+ {PCI_DEVICE_ID_NCR_53C810, 0xff, "810a", 4, 8, 4, 1,
FE_BOF}
,
#else
- {PCI_ID_SYM53C810, 0xff, "810a", 4, 8, 4, 1,
+ {PCI_DEVICE_ID_NCR_53C810, 0xff, "810a", 4, 8, 4, 1,
FE_CACHE_SET|FE_LDSTR|FE_PFEN|FE_BOF}
,
#endif
- {PCI_ID_SYM53C815, 0xff, "815", 4, 8, 4, 64,
+ {PCI_DEVICE_ID_NCR_53C815, 0xff, "815", 4, 8, 4, 64,
FE_BOF|FE_ERL}
,
- {PCI_ID_SYM53C825, 0x0f, "825", 6, 8, 4, 64,
+ {PCI_DEVICE_ID_NCR_53C825, 0x0f, "825", 6, 8, 4, 64,
FE_WIDE|FE_BOF|FE_ERL|FE_DIFF}
,
- {PCI_ID_SYM53C825, 0xff, "825a", 6, 8, 4, 2,
+ {PCI_DEVICE_ID_NCR_53C825, 0xff, "825a", 6, 8, 4, 2,
FE_WIDE|FE_CACHE0_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|FE_RAM|FE_DIFF}
,
- {PCI_ID_SYM53C860, 0xff, "860", 4, 8, 5, 1,
+ {PCI_DEVICE_ID_NCR_53C860, 0xff, "860", 4, 8, 5, 1,
FE_ULTRA|FE_CACHE_SET|FE_BOF|FE_LDSTR|FE_PFEN}
,
- {PCI_ID_SYM53C875, 0x01, "875", 6, 16, 5, 2,
+ {PCI_DEVICE_ID_NCR_53C875, 0x01, "875", 6, 16, 5, 2,
FE_WIDE|FE_ULTRA|FE_CACHE0_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_DIFF|FE_VARCLK}
,
- {PCI_ID_SYM53C875, 0xff, "875", 6, 16, 5, 2,
+ {PCI_DEVICE_ID_NCR_53C875, 0xff, "875", 6, 16, 5, 2,
FE_WIDE|FE_ULTRA|FE_DBLR|FE_CACHE0_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_DIFF|FE_VARCLK}
,
- {PCI_ID_SYM53C875_2, 0xff, "875", 6, 16, 5, 2,
+ {PCI_DEVICE_ID_NCR_53C875J, 0xff, "875J", 6, 16, 5, 2,
FE_WIDE|FE_ULTRA|FE_DBLR|FE_CACHE0_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_DIFF|FE_VARCLK}
,
- {PCI_ID_SYM53C885, 0xff, "885", 6, 16, 5, 2,
+ {PCI_DEVICE_ID_NCR_53C885, 0xff, "885", 6, 16, 5, 2,
FE_WIDE|FE_ULTRA|FE_DBLR|FE_CACHE0_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_DIFF|FE_VARCLK}
,
#ifdef SYM_DEBUG_GENERIC_SUPPORT
- {PCI_ID_SYM53C895, 0xff, "895", 6, 31, 7, 2,
+ {PCI_DEVICE_ID_NCR_53C895, 0xff, "895", 6, 31, 7, 2,
FE_WIDE|FE_ULTRA2|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|
FE_RAM|FE_LCKFRQ}
,
#else
- {PCI_ID_SYM53C895, 0xff, "895", 6, 31, 7, 2,
+ {PCI_DEVICE_ID_NCR_53C895, 0xff, "895", 6, 31, 7, 2,
FE_WIDE|FE_ULTRA2|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_LCKFRQ}
,
#endif
- {PCI_ID_SYM53C896, 0xff, "896", 6, 31, 7, 4,
+ {PCI_DEVICE_ID_NCR_53C896, 0xff, "896", 6, 31, 7, 4,
FE_WIDE|FE_ULTRA2|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_RAM8K|FE_64BIT|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_LCKFRQ}
,
- {PCI_ID_SYM53C895A, 0xff, "895a", 6, 31, 7, 4,
+ {PCI_DEVICE_ID_LSI_53C895A, 0xff, "895a", 6, 31, 7, 4,
FE_WIDE|FE_ULTRA2|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_RAM8K|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_LCKFRQ}
,
- {PCI_ID_SYM53C875A, 0xff, "875a", 6, 31, 7, 4,
+ {PCI_DEVICE_ID_LSI_53C875A, 0xff, "875a", 6, 31, 7, 4,
FE_WIDE|FE_ULTRA|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_LCKFRQ}
,
- {PCI_ID_LSI53C1010_33, 0x00, "1010-33", 6, 31, 7, 8,
+ {PCI_DEVICE_ID_LSI_53C1010_33, 0x00, "1010-33", 6, 31, 7, 8,
FE_WIDE|FE_ULTRA3|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFBC|FE_LDSTR|FE_PFEN|
FE_RAM|FE_RAM8K|FE_64BIT|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_CRC|
FE_C10}
,
- {PCI_ID_LSI53C1010_33, 0xff, "1010-33", 6, 31, 7, 8,
+ {PCI_DEVICE_ID_LSI_53C1010_33, 0xff, "1010-33", 6, 31, 7, 8,
FE_WIDE|FE_ULTRA3|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFBC|FE_LDSTR|FE_PFEN|
FE_RAM|FE_RAM8K|FE_64BIT|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_CRC|
FE_C10|FE_U3EN}
,
- {PCI_ID_LSI53C1010_66, 0xff, "1010-66", 6, 31, 7, 8,
+ {PCI_DEVICE_ID_LSI_53C1010_66, 0xff, "1010-66", 6, 31, 7, 8,
FE_WIDE|FE_ULTRA3|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFBC|FE_LDSTR|FE_PFEN|
FE_RAM|FE_RAM8K|FE_64BIT|FE_DAC|FE_IO256|FE_NOPM|FE_LEDC|FE_66MHZ|FE_CRC|
FE_C10|FE_U3EN}
,
- {PCI_ID_LSI53C1510D, 0xff, "1510d", 6, 31, 7, 4,
+ {PCI_DEVICE_ID_LSI_53C1510, 0xff, "1510d", 6, 31, 7, 4,
FE_WIDE|FE_ULTRA2|FE_QUAD|FE_CACHE_SET|FE_BOF|FE_DFS|FE_LDSTR|FE_PFEN|
FE_RAM|FE_IO256|FE_LEDC}
};
* This is only used if the direct mapping
* has been unsuccessful.
*/
-int sym_lookup_dmap(hcb_p np, u32 h, int s)
+int sym_lookup_dmap(struct sym_hcb *np, u32 h, int s)
{
int i;
* Update IO registers scratch C..R so they will be
* in sync. with queued CCB expectations.
*/
-static void sym_update_dmap_regs(hcb_p np)
+static void sym_update_dmap_regs(struct sym_hcb *np)
{
int o, i;
}
#endif
+/* Enforce all the fiddly SPI rules and the chip limitations */
static void sym_check_goals(struct scsi_device *sdev)
{
struct sym_hcb *np = ((struct host_data *)sdev->host->hostdata)->ncb;
struct sym_trans *st = &np->target[sdev->id].tinfo.goal;
- /* here we enforce all the fiddly SPI rules */
-
if (!scsi_device_wide(sdev))
st->width = 0;
st->offset = 0;
return;
}
-
+
if (scsi_device_dt(sdev)) {
if (scsi_device_dt_only(sdev))
st->options |= PPR_OPT_DT;
st->options &= ~PPR_OPT_DT;
}
- if (!(np->features & FE_ULTRA3))
+ /* Some targets fail to properly negotiate DT in SE mode */
+ if ((np->scsi_mode != SMODE_LVD) || !(np->features & FE_U3EN))
st->options &= ~PPR_OPT_DT;
if (st->options & PPR_OPT_DT) {
* negotiation and the nego_status field of the CCB.
* Returns the size of the message in bytes.
*/
-static int sym_prepare_nego(hcb_p np, ccb_p cp, int nego, u_char *msgptr)
+static int sym_prepare_nego(struct sym_hcb *np, ccb_p cp, u_char *msgptr)
{
tcb_p tp = &np->target[cp->target];
- int msglen = 0;
struct scsi_device *sdev = tp->sdev;
+ struct sym_trans *goal = &tp->tinfo.goal;
+ struct sym_trans *curr = &tp->tinfo.curr;
+ int msglen = 0;
+ int nego;
if (likely(sdev))
sym_check_goals(sdev);
/*
- * Early C1010 chips need a work-around for DT
- * data transfer to work.
+ * Many devices implement PPR in a buggy way, so only use it if we
+ * really want to.
*/
- if (!(np->features & FE_U3EN))
- tp->tinfo.goal.options = 0;
- /*
- * negotiate using PPR ?
- */
- if (scsi_device_dt(sdev)) {
+ if ((goal->options & PPR_OPT_MASK) || (goal->period < 0xa)) {
nego = NS_PPR;
+ } else if (curr->width != goal->width) {
+ nego = NS_WIDE;
+ } else if (curr->period != goal->period ||
+ curr->offset != goal->offset) {
+ nego = NS_SYNC;
} else {
- /*
- * negotiate wide transfers ?
- */
- if (tp->tinfo.curr.width != tp->tinfo.goal.width)
- nego = NS_WIDE;
- /*
- * negotiate synchronous transfers?
- */
- else if (tp->tinfo.curr.period != tp->tinfo.goal.period ||
- tp->tinfo.curr.offset != tp->tinfo.goal.offset)
- nego = NS_SYNC;
+ nego = 0;
}
switch (nego) {
msgptr[msglen++] = M_EXTENDED;
msgptr[msglen++] = 3;
msgptr[msglen++] = M_X_SYNC_REQ;
- msgptr[msglen++] = tp->tinfo.goal.period;
- msgptr[msglen++] = tp->tinfo.goal.offset;
+ msgptr[msglen++] = goal->period;
+ msgptr[msglen++] = goal->offset;
break;
case NS_WIDE:
msgptr[msglen++] = M_EXTENDED;
msgptr[msglen++] = 2;
msgptr[msglen++] = M_X_WIDE_REQ;
- msgptr[msglen++] = tp->tinfo.goal.width;
+ msgptr[msglen++] = goal->width;
break;
case NS_PPR:
msgptr[msglen++] = M_EXTENDED;
msgptr[msglen++] = 6;
msgptr[msglen++] = M_X_PPR_REQ;
- msgptr[msglen++] = tp->tinfo.goal.period;
+ msgptr[msglen++] = goal->period;
msgptr[msglen++] = 0;
- msgptr[msglen++] = tp->tinfo.goal.offset;
- msgptr[msglen++] = tp->tinfo.goal.width;
- msgptr[msglen++] = tp->tinfo.goal.options & PPR_OPT_MASK;
+ msgptr[msglen++] = goal->offset;
+ msgptr[msglen++] = goal->width;
+ msgptr[msglen++] = goal->options & PPR_OPT_MASK;
break;
};
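+
+	/*
+	 * The messages built above are the standard SPI extended messages:
+	 *	SDTR: M_EXTENDED, 3, M_X_SYNC_REQ, period, offset
+	 *	WDTR: M_EXTENDED, 2, M_X_WIDE_REQ, width
+	 *	PPR:  M_EXTENDED, 6, M_X_PPR_REQ, period, reserved, offset,
+	 *	      width, protocol options
+	 */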
/*
* Insert a job into the start queue.
*/
-void sym_put_start_queue(hcb_p np, ccb_p cp)
+void sym_put_start_queue(struct sym_hcb *np, ccb_p cp)
{
u_short qidx;
cp->host_xflags |= HX_DMAP_DIRTY;
#endif
- /*
- * Optionnaly, set the IO timeout condition.
- */
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- sym_timeout_ccb(np, cp, sym_cam_timeout(cp->cam_ccb));
-#endif
-
/*
* Insert first the idle task and then our job.
* The MBs should ensure proper ordering.
/*
* Start next ready-to-start CCBs.
*/
-void sym_start_next_ccbs(hcb_p np, lcb_p lp, int maxn)
+void sym_start_next_ccbs(struct sym_hcb *np, lcb_p lp, int maxn)
{
SYM_QUEHEAD *qp;
ccb_p cp;
* prevent out of order LOADs by the CPU from having
* prefetched stale data prior to DMA having occurred.
*/
-static int sym_wakeup_done (hcb_p np)
+static int sym_wakeup_done (struct sym_hcb *np)
{
ccb_p cp;
int i, n;
return n;
}
+/*
+ * Complete all CCBs queued to the COMP queue.
+ *
+ * These CCBs are assumed:
+ * - Not to be referenced either by devices or
+ *    SCRIPTS-related queues and data.
+ * - To have to be completed with an error condition
+ * or requeued.
+ *
+ * The device queue freeze count is incremented
+ * for each CCB that does not prevent this.
+ * This function is called when all CCBs involved
+ * in error handling/recovery have been reaped.
+ */
+static void sym_flush_comp_queue(struct sym_hcb *np, int cam_status)
+{
+ SYM_QUEHEAD *qp;
+ ccb_p cp;
+
+ while ((qp = sym_remque_head(&np->comp_ccbq)) != 0) {
+ struct scsi_cmnd *ccb;
+ cp = sym_que_entry(qp, struct sym_ccb, link_ccbq);
+ sym_insque_tail(&cp->link_ccbq, &np->busy_ccbq);
+ /* Leave quiet CCBs waiting for resources */
+ if (cp->host_status == HS_WAIT)
+ continue;
+ ccb = cp->cam_ccb;
+ if (cam_status)
+ sym_set_cam_status(ccb, cam_status);
+#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
+ if (sym_get_cam_status(ccb) == CAM_REQUEUE_REQ) {
+ tcb_p tp = &np->target[cp->target];
+ lcb_p lp = sym_lp(np, tp, cp->lun);
+ if (lp) {
+ sym_remque(&cp->link2_ccbq);
+ sym_insque_tail(&cp->link2_ccbq,
+ &lp->waiting_ccbq);
+ if (cp->started) {
+ if (cp->tag != NO_TAG)
+ --lp->started_tags;
+ else
+ --lp->started_no_tag;
+ }
+ }
+ cp->started = 0;
+ continue;
+ }
+#endif
+ sym_free_ccb(np, cp);
+ sym_freeze_cam_ccb(ccb);
+ sym_xpt_done(np, ccb);
+ }
+}
+
/*
* Complete all active CCBs with error.
* Used on CHIP/SCSI RESET.
*/
-static void sym_flush_busy_queue (hcb_p np, int cam_status)
+static void sym_flush_busy_queue (struct sym_hcb *np, int cam_status)
{
/*
* Move all active CCBs to the COMP queue
* 1: SCSI BUS RESET delivered or received.
* 2: SCSI BUS MODE changed.
*/
-void sym_start_up (hcb_p np, int reason)
+void sym_start_up (struct sym_hcb *np, int reason)
{
int i;
u32 phys;
/*
* For now, disable AIP generation on C1010-66.
*/
- if (np->device_id == PCI_ID_LSI53C1010_66)
+ if (np->device_id == PCI_DEVICE_ID_LSI_53C1010_66)
OUTB (nc_aipcntl1, DISAIP);
/*
* that from SCRIPTS for each selection/reselection, but
* I just don't want. :)
*/
- if (np->device_id == PCI_ID_LSI53C1010_33 &&
+ if (np->device_id == PCI_DEVICE_ID_LSI_53C1010_33 &&
np->revision_id < 1)
OUTB (nc_stest1, INB(nc_stest1) | 0x30);
* Disable overlapped arbitration for some dual function devices,
* regardless revision id (kind of post-chip-design feature. ;-))
*/
- if (np->device_id == PCI_ID_SYM53C875)
+ if (np->device_id == PCI_DEVICE_ID_NCR_53C875)
OUTB (nc_ctest0, (1<<5));
- else if (np->device_id == PCI_ID_SYM53C896)
+ else if (np->device_id == PCI_DEVICE_ID_NCR_53C896)
np->rv_ccntl0 |= DPR;
/*
/*
* Switch trans mode for current job and it's target.
*/
-static void sym_settrans(hcb_p np, int target, u_char opts, u_char ofs,
+static void sym_settrans(struct sym_hcb *np, int target, u_char opts, u_char ofs,
u_char per, u_char wide, u_char div, u_char fak)
{
SYM_QUEHEAD *qp;
* We received a WDTR.
* Let everything be aware of the changes.
*/
-static void sym_setwide(hcb_p np, int target, u_char wide)
+static void sym_setwide(struct sym_hcb *np, int target, u_char wide)
{
tcb_p tp = &np->target[target];
* Let everything be aware of the changes.
*/
static void
-sym_setsync(hcb_p np, int target,
+sym_setsync(struct sym_hcb *np, int target,
u_char ofs, u_char per, u_char div, u_char fak)
{
tcb_p tp = &np->target[target];
* Let everything be aware of the changes.
*/
static void
-sym_setpprot(hcb_p np, int target, u_char opts, u_char ofs,
+sym_setpprot(struct sym_hcb *np, int target, u_char opts, u_char ofs,
u_char per, u_char wide, u_char div, u_char fak)
{
tcb_p tp = &np->target[target];
* pushes a DSA into a queue, we can trust it when it
* points to a CCB.
*/
-static void sym_recover_scsi_int (hcb_p np, u_char hsts)
+static void sym_recover_scsi_int (struct sym_hcb *np, u_char hsts)
{
u32 dsp = INL (nc_dsp);
u32 dsa = INL (nc_dsa);
/*
* chip exception handler for selection timeout
*/
-static void sym_int_sto (hcb_p np)
+static void sym_int_sto (struct sym_hcb *np)
{
u32 dsp = INL (nc_dsp);
/*
* chip exception handler for unexpected disconnect
*/
-static void sym_int_udc (hcb_p np)
+static void sym_int_udc (struct sym_hcb *np)
{
printf ("%s: unexpected disconnect\n", sym_name(np));
sym_recover_scsi_int(np, HS_UNEXPECTED);
* mode to eight bit asynchronous, etc...
* So, just reinitializing all except chip should be enough.
*/
-static void sym_int_sbmc (hcb_p np)
+static void sym_int_sbmc (struct sym_hcb *np)
{
u_char scsi_mode = INB (nc_stest4) & SMODE;
* The chip will load the DSP with the phase mismatch
* JUMP address and interrupt the host processor.
*/
-static void sym_int_par (hcb_p np, u_short sist)
+static void sym_int_par (struct sym_hcb *np, u_short sist)
{
u_char hsts = INB (HS_PRT);
u32 dsp = INL (nc_dsp);
* We have to construct a new transfer descriptor,
* to transfer the rest of the current block.
*/
-static void sym_int_ma (hcb_p np)
+static void sym_int_ma (struct sym_hcb *np)
{
u32 dbc;
u32 rest;
 * Use at your own discretion and risk.
*/
-void sym_interrupt (hcb_p np)
+void sym_interrupt (struct sym_hcb *np)
{
u_char istat, istatc;
u_char dstat;
* It is called with SCRIPTS not running.
*/
static int
-sym_dequeue_from_squeue(hcb_p np, int i, int target, int lun, int task)
+sym_dequeue_from_squeue(struct sym_hcb *np, int i, int target, int lun, int task)
{
int j;
ccb_p cp;
return (i - j) / 2;
}
-/*
- * Complete all CCBs queued to the COMP queue.
- *
- * These CCBs are assumed:
- * - Not to be referenced either by devices or
- * SCRIPTS-related queues and datas.
- * - To have to be completed with an error condition
- * or requeued.
- *
- * The device queue freeze count is incremented
- * for each CCB that does not prevent this.
- * This function is called when all CCBs involved
- * in error handling/recovery have been reaped.
- */
-void sym_flush_comp_queue(hcb_p np, int cam_status)
-{
- SYM_QUEHEAD *qp;
- ccb_p cp;
-
- while ((qp = sym_remque_head(&np->comp_ccbq)) != 0) {
- cam_ccb_p ccb;
- cp = sym_que_entry(qp, struct sym_ccb, link_ccbq);
- sym_insque_tail(&cp->link_ccbq, &np->busy_ccbq);
- /* Leave quiet CCBs waiting for resources */
- if (cp->host_status == HS_WAIT)
- continue;
- ccb = cp->cam_ccb;
- if (cam_status)
- sym_set_cam_status(ccb, cam_status);
-#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
- if (sym_get_cam_status(ccb) == CAM_REQUEUE_REQ) {
- tcb_p tp = &np->target[cp->target];
- lcb_p lp = sym_lp(np, tp, cp->lun);
- if (lp) {
- sym_remque(&cp->link2_ccbq);
- sym_insque_tail(&cp->link2_ccbq,
- &lp->waiting_ccbq);
- if (cp->started) {
- if (cp->tag != NO_TAG)
- --lp->started_tags;
- else
- --lp->started_no_tag;
- }
- }
- cp->started = 0;
- continue;
- }
-#endif
- sym_free_ccb(np, cp);
- sym_freeze_cam_ccb(ccb);
- sym_xpt_done(np, ccb);
- }
-}
-
/*
* chip handler for bad SCSI status condition
*
* SCRATCHA is assumed to have been loaded with STARTPOS
* before the SCRIPTS called the C code.
*/
-static void sym_sir_bad_scsi_status(hcb_p np, int num, ccb_p cp)
+static void sym_sir_bad_scsi_status(struct sym_hcb *np, int num, ccb_p cp)
{
tcb_p tp = &np->target[cp->target];
u32 startp;
u_char s_status = cp->ssss_status;
u_char h_flags = cp->host_flags;
int msglen;
- int nego;
int i;
/*
* cp->nego_status is filled by sym_prepare_nego().
*/
cp->nego_status = 0;
- nego = 0;
- if (tp->tinfo.curr.options & PPR_OPT_MASK)
- nego = NS_PPR;
- else if (tp->tinfo.curr.width != BUS_8_BIT)
- nego = NS_WIDE;
- else if (tp->tinfo.curr.offset != 0)
- nego = NS_SYNC;
- if (nego)
- msglen +=
- sym_prepare_nego (np,cp, nego, &cp->scsi_smsg2[msglen]);
+ msglen += sym_prepare_nego(np, cp, &cp->scsi_smsg2[msglen]);
/*
* Message table indirect structure.
*/
/*
* sense data
*/
- bzero(cp->sns_bbuf, SYM_SNS_BBUF_LEN);
+ memset(cp->sns_bbuf, 0, SYM_SNS_BBUF_LEN);
cp->phys.sense.addr = cpu_to_scr(vtobus(cp->sns_bbuf));
cp->phys.sense.size = cpu_to_scr(SYM_SNS_BBUF_LEN);
* - lun=-1 means any logical UNIT otherwise a given one.
* - task=-1 means any task, otherwise a given one.
*/
-int sym_clear_tasks(hcb_p np, int cam_status, int target, int lun, int task)
+int sym_clear_tasks(struct sym_hcb *np, int cam_status, int target, int lun, int task)
{
SYM_QUEHEAD qtmp, *qp;
int i = 0;
* the BUSY queue.
*/
while ((qp = sym_remque_head(&qtmp)) != 0) {
- cam_ccb_p ccb;
+ struct scsi_cmnd *ccb;
cp = sym_que_entry(qp, struct sym_ccb, link_ccbq);
ccb = cp->cam_ccb;
if (cp->host_status != HS_DISCONNECT ||
* all the CCBs that should have been aborted by the
* target according to our message.
*/
-static void sym_sir_task_recovery(hcb_p np, int num)
+static void sym_sir_task_recovery(struct sym_hcb *np, int num)
{
SYM_QUEHEAD *qp;
ccb_p cp;
* the corresponding values of dp_sg and dp_ofs.
*/
-static int sym_evaluate_dp(hcb_p np, ccb_p cp, u32 scr, int *ofs)
+static int sym_evaluate_dp(struct sym_hcb *np, ccb_p cp, u32 scr, int *ofs)
{
u32 dp_scr;
int dp_ofs, dp_sg, dp_sgmin;
* is equivalent to a MODIFY DATA POINTER (offset=-1).
*/
-static void sym_modify_dp(hcb_p np, tcb_p tp, ccb_p cp, int ofs)
+static void sym_modify_dp(struct sym_hcb *np, tcb_p tp, ccb_p cp, int ofs)
{
int dp_ofs = ofs;
u32 dp_scr = sym_get_script_dp (np, cp);
* a relevant information. :)
*/
-int sym_compute_residual(hcb_p np, ccb_p cp)
+int sym_compute_residual(struct sym_hcb *np, ccb_p cp)
{
int dp_sg, dp_sgmin, resid = 0;
int dp_ofs = 0;
* chip handler for SYNCHRONOUS DATA TRANSFER REQUEST (SDTR) message.
*/
static int
-sym_sync_nego_check(hcb_p np, int req, int target)
+sym_sync_nego_check(struct sym_hcb *np, int req, int target)
{
u_char chg, ofs, per, fak, div;
return -1;
}
-static void sym_sync_nego(hcb_p np, tcb_p tp, ccb_p cp)
+static void sym_sync_nego(struct sym_hcb *np, tcb_p tp, ccb_p cp)
{
int req = 1;
int result;
* chip handler for PARALLEL PROTOCOL REQUEST (PPR) message.
*/
static int
-sym_ppr_nego_check(hcb_p np, int req, int target)
+sym_ppr_nego_check(struct sym_hcb *np, int req, int target)
{
tcb_p tp = &np->target[target];
unsigned char fak, div;
if (ofs) {
unsigned char minsync = dt ? np->minsync_dt : np->minsync;
- if (per < np->minsync_dt) {
+ if (per < minsync) {
chg = 1;
per = minsync;
}
return -1;
}
-static void sym_ppr_nego(hcb_p np, tcb_p tp, ccb_p cp)
+static void sym_ppr_nego(struct sym_hcb *np, tcb_p tp, ccb_p cp)
{
int req = 1;
int result;
* chip handler for WIDE DATA TRANSFER REQUEST (WDTR) message.
*/
static int
-sym_wide_nego_check(hcb_p np, int req, int target)
+sym_wide_nego_check(struct sym_hcb *np, int req, int target)
{
u_char chg, wide;
return -1;
}
-static void sym_wide_nego(hcb_p np, tcb_p tp, ccb_p cp)
+static void sym_wide_nego(struct sym_hcb *np, tcb_p tp, ccb_p cp)
{
int req = 1;
int result;
* So, if a PPR makes problems, we may just want to
* try a legacy negotiation later.
*/
-static void sym_nego_default(hcb_p np, tcb_p tp, ccb_p cp)
+static void sym_nego_default(struct sym_hcb *np, tcb_p tp, ccb_p cp)
{
switch (cp->nego_status) {
case NS_PPR:
* chip handler for MESSAGE REJECT received in response to
* PPR, WIDE or SYNCHRONOUS negotiation.
*/
-static void sym_nego_rejected(hcb_p np, tcb_p tp, ccb_p cp)
+static void sym_nego_rejected(struct sym_hcb *np, tcb_p tp, ccb_p cp)
{
sym_nego_default(np, tp, cp);
OUTB (HS_PRT, HS_BUSY);
/*
* chip exception handler for programmed interrupts.
*/
-static void sym_int_sir (hcb_p np)
+static void sym_int_sir (struct sym_hcb *np)
{
u_char num = INB (nc_dsps);
u32 dsa = INL (nc_dsa);
/*
* Acquire a control block
*/
-ccb_p sym_get_ccb (hcb_p np, u_char tn, u_char ln, u_char tag_order)
+ccb_p sym_get_ccb (struct sym_hcb *np, u_char tn, u_char ln, u_char tag_order)
{
tcb_p tp = &np->target[tn];
lcb_p lp = sym_lp(np, tp, ln);
/*
* Release one control block
*/
-void sym_free_ccb (hcb_p np, ccb_p cp)
+void sym_free_ccb (struct sym_hcb *np, ccb_p cp)
{
tcb_p tp = &np->target[cp->target];
lcb_p lp = sym_lp(np, tp, cp->lun);
sym_remque(&cp->link_ccbq);
sym_insque_head(&cp->link_ccbq, &np->free_ccbq);
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- /*
- * Cancel any pending timeout condition.
- */
- sym_untimeout_ccb(np, cp);
-#endif
-
#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
if (lp) {
sym_remque(&cp->link2_ccbq);
/*
* Allocate a CCB from memory and initialize its fixed part.
*/
-static ccb_p sym_alloc_ccb(hcb_p np)
+static ccb_p sym_alloc_ccb(struct sym_hcb *np)
{
ccb_p cp = NULL;
int hcode;
/*
* Chain into optionnal lists.
*/
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- sym_insque_head(&cp->tmo_linkq, &np->tmo0_ccbq);
-#endif
#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
sym_insque_head(&cp->link2_ccbq, &np->dummy_ccbq);
#endif
/*
* Look up a CCB from a DSA value.
*/
-static ccb_p sym_ccb_from_dsa(hcb_p np, u32 dsa)
+static ccb_p sym_ccb_from_dsa(struct sym_hcb *np, u32 dsa)
{
int hcode;
ccb_p cp;
* Target control block initialisation.
* Nothing important to do at the moment.
*/
-static void sym_init_tcb (hcb_p np, u_char tn)
+static void sym_init_tcb (struct sym_hcb *np, u_char tn)
{
#if 0 /* Hmmm... this checking looks paranoid. */
/*
/*
* Lun control block allocation and initialization.
*/
-lcb_p sym_alloc_lcb (hcb_p np, u_char tn, u_char ln)
+lcb_p sym_alloc_lcb (struct sym_hcb *np, u_char tn, u_char ln)
{
tcb_p tp = &np->target[tn];
lcb_p lp = sym_lp(np, tp, ln);
/*
* Allocate LCB resources for tagged command queuing.
*/
-static void sym_alloc_lcb_tags (hcb_p np, u_char tn, u_char ln)
+static void sym_alloc_lcb_tags (struct sym_hcb *np, u_char tn, u_char ln)
{
tcb_p tp = &np->target[tn];
lcb_p lp = sym_lp(np, tp, ln);
/*
* Queue a SCSI IO to the controller.
*/
-int sym_queue_scsiio(hcb_p np, cam_scsiio_p csio, ccb_p cp)
+int sym_queue_scsiio(struct sym_hcb *np, struct scsi_cmnd *csio, ccb_p cp)
{
tcb_p tp;
lcb_p lp;
/*
* Keep track of the IO in our CCB.
*/
- cp->cam_ccb = (cam_ccb_p) csio;
+ cp->cam_ccb = csio;
/*
* Retrieve the target descriptor.
tp->tinfo.curr.offset != tp->tinfo.goal.offset ||
tp->tinfo.curr.options != tp->tinfo.goal.options) {
if (!tp->nego_cp && lp)
- msglen += sym_prepare_nego(np, cp, 0, msgptr + msglen);
+ msglen += sym_prepare_nego(np, cp, msgptr + msglen);
}
/*
/*
* Reset a SCSI target (all LUNs of this target).
*/
-int sym_reset_scsi_target(hcb_p np, int target)
+int sym_reset_scsi_target(struct sym_hcb *np, int target)
{
tcb_p tp;
/*
* Abort a SCSI IO.
*/
-int sym_abort_ccb(hcb_p np, ccb_p cp, int timed_out)
+int sym_abort_ccb(struct sym_hcb *np, ccb_p cp, int timed_out)
{
/*
* Check that the IO is active.
return 0;
}
-int sym_abort_scsiio(hcb_p np, cam_ccb_p ccb, int timed_out)
+int sym_abort_scsiio(struct sym_hcb *np, struct scsi_cmnd *ccb, int timed_out)
{
ccb_p cp;
SYM_QUEHEAD *qp;
* SCRATCHA is assumed to have been loaded with STARTPOS
* before the SCRIPTS called the C code.
*/
-void sym_complete_error (hcb_p np, ccb_p cp)
+void sym_complete_error (struct sym_hcb *np, ccb_p cp)
{
tcb_p tp;
lcb_p lp;
* The SCRIPTS processor is running while we are
* completing successful commands.
*/
-void sym_complete_ok (hcb_p np, ccb_p cp)
+void sym_complete_ok (struct sym_hcb *np, ccb_p cp)
{
tcb_p tp;
lcb_p lp;
- cam_ccb_p ccb;
+ struct scsi_cmnd *ccb;
int resid;
/*
/*
* Soft-attach the controller.
*/
-int sym_hcb_attach(hcb_p np, struct sym_fw *fw, struct sym_nvram *nvram)
+int sym_hcb_attach(struct sym_hcb *np, struct sym_fw *fw, struct sym_nvram *nvram)
{
int i;
sym_que_init(&np->comp_ccbq);
/*
- * Initializations for optional handling
- * of IO timeouts and device queueing.
+ * Initialization for optional handling
+ * of device queueing.
*/
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- sym_que_init(&np->tmo0_ccbq);
- np->tmo_ccbq =
- sym_calloc(2*SYM_CONF_TIMEOUT_ORDER_MAX*sizeof(SYM_QUEHEAD),
- "TMO_CCBQ");
- for (i = 0 ; i < 2*SYM_CONF_TIMEOUT_ORDER_MAX ; i++)
- sym_que_init(&np->tmo_ccbq[i]);
-#endif
#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
sym_que_init(&np->dummy_ccbq);
#endif
/*
* Free everything that has been allocated for this device.
*/
-void sym_hcb_free(hcb_p np)
+void sym_hcb_free(struct sym_hcb *np)
{
SYM_QUEHEAD *qp;
ccb_p cp;
sym_mfree_dma(np->scriptb0, np->scriptb_sz, "SCRIPTB0");
if (np->scripta0)
sym_mfree_dma(np->scripta0, np->scripta_sz, "SCRIPTA0");
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- if (np->tmo_ccbq)
- sym_mfree(np->tmo_ccbq,
- 2*SYM_CONF_TIMEOUT_ORDER_MAX*sizeof(SYM_QUEHEAD),
- "TMO_CCBQ");
-#endif
if (np->squeue)
sym_mfree_dma(np->squeue, sizeof(u32)*(MAX_QUEUE*2), "SQUEUE");
if (np->dqueue)
/*
* Pointer to CAM ccb and related stuff.
*/
- cam_ccb_p cam_ccb; /* CAM scsiio ccb */
+ struct scsi_cmnd *cam_ccb; /* CAM scsiio ccb */
u8 cdb_buf[16]; /* Copy of CDB */
u8 *sns_bbuf; /* Bounce buffer for sense data */
#ifndef SYM_SNS_BBUF_LEN
/*
* Other fields.
*/
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- SYM_QUEHEAD tmo_linkq; /* Optional timeout handling */
- u_int tmo_clock; /* (link and dealine value) */
-#endif
u32 ccb_ba; /* BUS address of this CCB */
u_short tag; /* Tag for this transfer */
/* NO_TAG means no tag */
struct sym_fwa_ba fwa_bas; /* Useful SCRIPTA bus addresses */
struct sym_fwb_ba fwb_bas; /* Useful SCRIPTB bus addresses */
struct sym_fwz_ba fwz_bas; /* Useful SCRIPTZ bus addresses */
- void (*fw_setup)(hcb_p np, struct sym_fw *fw);
- void (*fw_patch)(hcb_p np);
+ void (*fw_setup)(struct sym_hcb *np, struct sym_fw *fw);
+ void (*fw_patch)(struct sym_hcb *np);
char *fw_name;
/*
#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
SYM_QUEHEAD dummy_ccbq;
#endif
- /*
- * Optional handling of IO timeouts.
- */
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
- SYM_QUEHEAD tmo0_ccbq;
- SYM_QUEHEAD *tmo_ccbq; /* [2*SYM_TIMEOUT_ORDER_MAX] */
- u_int tmo_clock;
- u_int tmo_actq;
-#endif
/*
* IMMEDIATE ARBITRATION (IARB) control.
* FIRMWARES (sym_fw.c)
*/
struct sym_fw * sym_find_firmware(struct sym_pci_chip *chip);
-void sym_fw_bind_script (hcb_p np, u32 *start, int len);
+void sym_fw_bind_script (struct sym_hcb *np, u32 *start, int len);
/*
* Driver methods called from O/S specific code.
*/
char *sym_driver_name(void);
void sym_print_xerr(ccb_p cp, int x_status);
-int sym_reset_scsi_bus(hcb_p np, int enab_int);
+int sym_reset_scsi_bus(struct sym_hcb *np, int enab_int);
struct sym_pci_chip *
sym_lookup_pci_chip_table (u_short device_id, u_char revision);
-void sym_put_start_queue(hcb_p np, ccb_p cp);
+void sym_put_start_queue(struct sym_hcb *np, ccb_p cp);
#ifdef SYM_OPT_HANDLE_DEVICE_QUEUEING
-void sym_start_next_ccbs(hcb_p np, lcb_p lp, int maxn);
+void sym_start_next_ccbs(struct sym_hcb *np, lcb_p lp, int maxn);
#endif
-void sym_start_up (hcb_p np, int reason);
-void sym_interrupt (hcb_p np);
-void sym_flush_comp_queue(hcb_p np, int cam_status);
-int sym_clear_tasks(hcb_p np, int cam_status, int target, int lun, int task);
-ccb_p sym_get_ccb (hcb_p np, u_char tn, u_char ln, u_char tag_order);
-void sym_free_ccb (hcb_p np, ccb_p cp);
-lcb_p sym_alloc_lcb (hcb_p np, u_char tn, u_char ln);
-int sym_queue_scsiio(hcb_p np, cam_scsiio_p csio, ccb_p cp);
-int sym_abort_scsiio(hcb_p np, cam_ccb_p ccb, int timed_out);
-int sym_abort_ccb(hcb_p np, ccb_p cp, int timed_out);
-int sym_reset_scsi_target(hcb_p np, int target);
-void sym_hcb_free(hcb_p np);
-int sym_hcb_attach(hcb_p np, struct sym_fw *fw, struct sym_nvram *nvram);
-
-/*
- * Optionnaly, the driver may handle IO timeouts.
- */
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
-int sym_abort_ccb(hcb_p np, ccb_p cp, int timed_out);
-void sym_timeout_ccb(hcb_p np, ccb_p cp, u_int ticks);
-static void __inline sym_untimeout_ccb(hcb_p np, ccb_p cp)
-{
- sym_remque(&cp->tmo_linkq);
- sym_insque_head(&cp->tmo_linkq, &np->tmo0_ccbq);
-}
-void sym_clock(hcb_p np);
-#endif /* SYM_OPT_HANDLE_IO_TIMEOUT */
+void sym_start_up (struct sym_hcb *np, int reason);
+void sym_interrupt (struct sym_hcb *np);
+int sym_clear_tasks(struct sym_hcb *np, int cam_status, int target, int lun, int task);
+ccb_p sym_get_ccb (struct sym_hcb *np, u_char tn, u_char ln, u_char tag_order);
+void sym_free_ccb (struct sym_hcb *np, ccb_p cp);
+lcb_p sym_alloc_lcb (struct sym_hcb *np, u_char tn, u_char ln);
+int sym_queue_scsiio(struct sym_hcb *np, struct scsi_cmnd *csio, ccb_p cp);
+int sym_abort_scsiio(struct sym_hcb *np, struct scsi_cmnd *ccb, int timed_out);
+int sym_abort_ccb(struct sym_hcb *np, ccb_p cp, int timed_out);
+int sym_reset_scsi_target(struct sym_hcb *np, int target);
+void sym_hcb_free(struct sym_hcb *np);
+int sym_hcb_attach(struct sym_hcb *np, struct sym_fw *fw, struct sym_nvram *nvram);
/*
* Optionnaly, the driver may provide a function
* to announce transfer rate changes.
*/
#ifdef SYM_OPT_ANNOUNCE_TRANSFER_RATE
-void sym_announce_transfer_rate(hcb_p np, int target);
+void sym_announce_transfer_rate(struct sym_hcb *np, int target);
#endif
/*
(data)->size = cpu_to_scr((((badd) >> 8) & 0xff000000) + len); \
} while (0)
#elif SYM_CONF_DMA_ADDRESSING_MODE == 2
-int sym_lookup_dmap(hcb_p np, u32 h, int s);
+int sym_lookup_dmap(struct sym_hcb *np, u32 h, int s);
static __inline void
-sym_build_sge(hcb_p np, struct sym_tblmove *data, u64 badd, int len)
+sym_build_sge(struct sym_hcb *np, struct sym_tblmove *data, u64 badd, int len)
{
u32 h = (badd>>32);
int s = (h&SYM_DMAP_MASK);
}
if (p)
- bzero(p, size);
+ memset(p, 0, size);
else if (uflags & SYM_MEM_WARN)
printf ("__sym_calloc2: failed to allocate %s[%d]\n", name, size);
return p;
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#ifdef __FreeBSD__
-#include <dev/sym/sym_glue.h>
-#else
#include "sym_glue.h"
-#endif
-
-#ifdef SYM_OPT_HANDLE_IO_TIMEOUT
-/*
- * Optional CCB timeout handling.
- *
- * This code is useful for O/Ses that allow or expect
- * SIMs (low-level drivers) to handle SCSI IO timeouts.
- * It uses a power-of-two based algorithm of my own:)
- * that avoids scanning of lists, provided that:
- *
- * - The IO does complete in less than half the associated
- * timeout value.
- * - The greatest delay between the queuing of the IO and
- * its completion is less than
- * (1<<(SYM_CONF_TIMEOUT_ORDER_MAX-1))/2 ticks.
- *
- * For example, if tick is 1 second and the max order is 8,
- * any IO that is completed within less than 64 seconds will
- * just be put into some list at queuing and be removed
- * at completion without any additionnal overhead.
- */
-
-/*
- * Set a timeout condition on a CCB.
- */
-void sym_timeout_ccb(hcb_p np, ccb_p cp, u_int ticks)
-{
- sym_remque(&cp->tmo_linkq);
- cp->tmo_clock = np->tmo_clock + ticks;
- if (!ticks) {
- sym_insque_head(&cp->tmo_linkq, &np->tmo0_ccbq);
- }
- else {
- int i = SYM_CONF_TIMEOUT_ORDER_MAX - 1;
- while (i > 0) {
- if (ticks >= (1<<(i+1)))
- break;
- --i;
- }
- if (!(np->tmo_actq & (1<<i)))
- i += SYM_CONF_TIMEOUT_ORDER_MAX;
- sym_insque_head(&cp->tmo_linkq, &np->tmo_ccbq[i]);
- }
-}
-
-/*
- * Walk a list of CCB and handle timeout conditions.
- * Should never be called in normal situations.
- */
-static void sym_walk_ccb_tmo_list(hcb_p np, SYM_QUEHEAD *tmoq)
-{
- SYM_QUEHEAD qtmp, *qp;
- ccb_p cp;
-
- sym_que_move(tmoq, &qtmp);
- while ((qp = sym_remque_head(&qtmp)) != 0) {
- sym_insque_head(qp, &np->tmo0_ccbq);
- cp = sym_que_entry(qp, struct sym_ccb, tmo_linkq);
- if (cp->tmo_clock != np->tmo_clock &&
- cp->tmo_clock + 1 != np->tmo_clock)
- sym_timeout_ccb(np, cp, cp->tmo_clock - np->tmo_clock);
- else
- sym_abort_ccb(np, cp, 1);
- }
-}
-
-/*
- * Our clock handler called from the O/S specific side.
- */
-void sym_clock(hcb_p np)
-{
- int i, j;
- u_int tmp;
-
- tmp = np->tmo_clock;
- tmp ^= (++np->tmo_clock);
-
- for (i = 0; i < SYM_CONF_TIMEOUT_ORDER_MAX; i++, tmp >>= 1) {
- if (!(tmp & 1))
- continue;
- j = i;
- if (np->tmo_actq & (1<<i))
- j += SYM_CONF_TIMEOUT_ORDER_MAX;
-
- if (!sym_que_empty(&np->tmo_ccbq[j])) {
- sym_walk_ccb_tmo_list(np, &np->tmo_ccbq[j]);
- }
- np->tmo_actq ^= (1<<i);
- }
-}
-#endif /* SYM_OPT_HANDLE_IO_TIMEOUT */
-
#ifdef SYM_OPT_ANNOUNCE_TRANSFER_RATE
/*
* Announce transfer rate if anything changed since last announcement.
*/
-void sym_announce_transfer_rate(hcb_p np, int target)
+void sym_announce_transfer_rate(struct sym_hcb *np, int target)
{
tcb_p tp = &np->target[target];
case SYM_TEKRAM_NVRAM:
np->myaddr = nvram->data.Tekram.host_id & 0x0f;
break;
+#ifdef CONFIG_PARISC
+ case SYM_PARISC_PDC:
+ if (nvram->data.parisc.host_id != -1)
+ np->myaddr = nvram->data.parisc.host_id;
+ if (nvram->data.parisc.factor != -1)
+ np->minsync = nvram->data.parisc.factor;
+ if (nvram->data.parisc.width != -1)
+ np->maxwide = nvram->data.parisc.width;
+ switch (nvram->data.parisc.mode) {
+ case 0: np->scsi_mode = SMODE_SE; break;
+ case 1: np->scsi_mode = SMODE_HVD; break;
+ case 2: np->scsi_mode = SMODE_LVD; break;
+ default: break;
+ }
+#endif
default:
break;
}
return 0;
}
+#ifdef CONFIG_PARISC
+/*
+ * Host firmware (PDC) keeps a table for altering SCSI capabilities.
+ * Many newer machines export one channel of 53c896 chip as SE, 50-pin HD.
+ * Also used for Multi-initiator SCSI clusters to set the SCSI Initiator ID.
+ */
+static int sym_read_parisc_pdc(struct sym_device *np, struct pdc_initiator *pdc)
+{
+ struct hardware_path hwpath;
+ get_pci_node_path(np->pdev, &hwpath);
+ if (!pdc_get_initiator(&hwpath, pdc))
+ return 0;
+
+ return SYM_PARISC_PDC;
+}
+#else
+static int sym_read_parisc_pdc(struct sym_device *np, struct pdc_initiator *x)
+{
+ return 0;
+}
+#endif
+
/*
* Try reading Symbios or Tekram NVRAM
*/
nvp->type = SYM_TEKRAM_NVRAM;
sym_display_Tekram_nvram(np, &nvp->data.Tekram);
} else {
- nvp->type = 0;
+ nvp->type = sym_read_parisc_pdc(np, &nvp->data.parisc);
}
return nvp->type;
}
typedef struct Tekram_nvram Tekram_nvram;
typedef struct Tekram_target Tekram_target;
+#ifndef CONFIG_PARISC
+struct pdc_initiator { int dummy; };
+#endif
+
/*
* Union of supported NVRAM formats.
*/
int type;
#define SYM_SYMBIOS_NVRAM (1)
#define SYM_TEKRAM_NVRAM (2)
+#define SYM_PARISC_PDC (3)
#if SYM_CONF_NVRAM_SUPPORT
union {
Symbios_nvram Symbios;
Tekram_nvram Tekram;
+ struct pdc_initiator parisc;
} data;
#endif
};
* what we have here is a missing parent device, so tell
* the user what they're missing.
*/
- if (dev->parent->id.hw_type != HPHW_IOA) {
+ if (parisc_parent(dev)->id.hw_type != HPHW_IOA) {
printk(KERN_INFO "Serial: device 0x%lx not configured.\n"
"Enable support for Wax, Lasi, Asp or Dino.\n", dev->hpa);
}
printk(KERN_WARNING "serial8250_register_port returned error %d\n", err);
return err;
}
-
+
return 0;
}
MODULE_DEVICE_TABLE(parisc, serial_tbl);
static struct parisc_driver lasi_driver = {
- .name = "Lasi RS232",
+ .name = "serial_1",
.id_table = lasi_tbl,
.probe = serial_init_chip,
};
static struct parisc_driver serial_driver = {
- .name = "Serial RS232",
+ .name = "serial",
.id_table = serial_tbl,
.probe = serial_init_chip,
};
io->iop_pdird |= 0x00000002; /* Tx */
/* Wire BRG1 to SCC1 */
- cpm2_immr->im_cpmux.cmx_scr &= ~0x00ffffff;
+ cpm2_immr->im_cpmux.cmx_scr &= 0x00ffffff;
cpm2_immr->im_cpmux.cmx_scr |= 0x00000000;
pinfo->brg = 1;
}
io->iop_psorb |= 0x00880000;
io->iop_pdirb &= ~0x00030000;
io->iop_psorb &= ~0x00030000;
- cpm2_immr->im_cpmux.cmx_scr &= ~0xff00ffff;
+ cpm2_immr->im_cpmux.cmx_scr &= 0xff00ffff;
cpm2_immr->im_cpmux.cmx_scr |= 0x00090000;
pinfo->brg = 2;
}
io->iop_psorb |= 0x00880000;
io->iop_pdirb &= ~0x00030000;
io->iop_psorb &= ~0x00030000;
- cpm2_immr->im_cpmux.cmx_scr &= ~0xffff00ff;
+ cpm2_immr->im_cpmux.cmx_scr &= 0xffff00ff;
cpm2_immr->im_cpmux.cmx_scr |= 0x00001200;
pinfo->brg = 3;
}
io->iop_pdird &= ~0x00000200; /* Rx */
io->iop_pdird |= 0x00000400; /* Tx */
- cpm2_immr->im_cpmux.cmx_scr &= ~0xffffff00;
+ cpm2_immr->im_cpmux.cmx_scr &= 0xffffff00;
cpm2_immr->im_cpmux.cmx_scr |= 0x0000001b;
pinfo->brg = 4;
}
unsigned char cable_id;
unsigned char read_status_mask;
unsigned char ignore_status_mask;
- unsigned long int_reg;
- struct icom_regs *global_reg;
- struct func_dram *dram;
+ void __iomem * int_reg;
+ struct icom_regs __iomem *global_reg;
+ struct func_dram __iomem *dram;
int port;
struct statusArea *statStg;
dma_addr_t statStg_pci;
};
struct icom_adapter {
- unsigned long base_addr;
+ void __iomem * base_addr;
unsigned long base_addr_pci;
unsigned char irq_number;
struct pci_dev *pci_dev;
extern void iCom_sercons_init(void);
struct lookup_proc_table {
- u32 *global_control_reg;
+ u32 __iomem *global_control_reg;
unsigned long processor_id;
};
struct lookup_int_table {
- u32 *global_int_mask;
+ u32 __iomem *global_int_mask;
unsigned long processor_id;
};
#include "ip22zilog.h"
-int ip22serial_current_minor = 64;
-
void ip22_do_break(void);
/*
#define ZSDELAY_LONG() udelay(20)
#define ZS_WSYNC(channel) do { } while (0)
-#define NUM_IP22ZILOG 1
-#define NUM_CHANNELS (NUM_IP22ZILOG * 2)
+#define NUM_IP22ZILOG 1
+#define NUM_CHANNELS (NUM_IP22ZILOG * 2)
-#define ZS_CLOCK 4915200 /* Zilog input clock rate. */
+#define ZS_CLOCK 3672000 /* Zilog input clock rate. */
#define ZS_CLOCK_DIVISOR 16 /* Divisor this driver uses. */
/*
#define IP22ZILOG_FLAG_TX_STOPPED 0x00000080
#define IP22ZILOG_FLAG_TX_ACTIVE 0x00000100
- unsigned int cflag;
+ unsigned int cflag;
/* L1-A keyboard break state. */
int kbd_id;
}
}
-/* The port lock is not held. */
+/* The port lock is held and interrupts are disabled. */
static void ip22zilog_stop_rx(struct uart_port *port)
{
struct uart_ip22zilog_port *up = UART_ZILOG(port);
struct zilog_channel *channel;
- unsigned long flags;
if (ZS_IS_CONS(up))
return;
- spin_lock_irqsave(&port->lock, flags);
-
channel = ZILOG_CHANNEL_FROM_PORT(port);
/* Disable all RX interrupts. */
up->curregs[R1] &= ~RxINT_MASK;
ip22zilog_maybe_update_regs(up, channel);
-
- spin_unlock_irqrestore(&port->lock, flags);
}
-/* The port lock is not held. */
+/* The port lock is held. */
static void ip22zilog_enable_ms(struct uart_port *port)
{
struct uart_ip22zilog_port *up = (struct uart_ip22zilog_port *) port;
struct zilog_channel *channel = ZILOG_CHANNEL_FROM_PORT(port);
unsigned char new_reg;
- unsigned long flags;
-
- spin_lock_irqsave(&port->lock, flags);
new_reg = up->curregs[R15] | (DCDIE | SYNCIE | CTSIE);
if (new_reg != up->curregs[R15]) {
/* NOTE: Not subject to 'transmitter active' rule. */
write_zsreg(channel, R15, up->curregs[R15]);
}
-
- spin_unlock_irqrestore(&port->lock, flags);
}
/* The port lock is not held. */
up->curregs[R4] |= X16CLK;
up->curregs[R12] = brg & 0xff;
up->curregs[R13] = (brg >> 8) & 0xff;
- up->curregs[R14] = BRSRC | BRENAB;
+ up->curregs[R14] = BRENAB;
/* Character size, stop bits, and parity. */
up->curregs[3] &= ~RxN_MASK;
static struct uart_ip22zilog_port *ip22zilog_irq_chain;
static int zilog_irq = -1;
-static struct uart_driver ip22zilog_reg = {
- .owner = THIS_MODULE,
- .driver_name = "ttyS",
- .devfs_name = "tty/",
- .major = TTY_MAJOR,
-};
-
static void * __init alloc_one_table(unsigned long size)
{
void *ret;
}
/* Not probe-able, hard code it. */
- base = (unsigned long) &sgioc->serport;
+ base = (unsigned long) &sgioc->uart;
zilog_irq = SGI_SERIAL_IRQ;
request_mem_region(base, 8, "IP22-Zilog");
int parity = 'n';
int flow = 'n';
- if (!serial_console)
- return;
-
if (options)
uart_parse_options(options, &baud, &parity, &bits, &flow);
unsigned long flags;
int baud, brg;
- printk("Console: ttyS%d (IP22-Zilog)\n",
- (ip22zilog_reg.minor - 64) + con->index);
+ printk("Console: ttyS%d (IP22-Zilog)\n", con->index);
/* Get firmware console settings. */
ip22serial_console_termios(con, options);
return 0;
}
+static struct uart_driver ip22zilog_reg;
+
static struct console ip22zilog_console = {
.name = "ttyS",
.write = ip22zilog_console_write,
.index = -1,
.data = &ip22zilog_reg,
};
-#define IP22ZILOG_CONSOLE (&ip22zilog_console)
-
-static int __init ip22zilog_console_init(void)
-{
- int i;
-
- if (con_is_present())
- return 0;
-
- for (i = 0; i < NUM_CHANNELS; i++) {
- int this_minor = ip22zilog_reg.minor + i;
+#endif /* CONFIG_SERIAL_IP22_ZILOG_CONSOLE */
- if ((this_minor - 64) == (serial_console - 1))
- break;
- }
- if (i == NUM_CHANNELS)
- return 0;
-
- ip22zilog_console.index = i;
- register_console(&ip22zilog_console);
- return 0;
-}
-#else /* CONFIG_SERIAL_IP22_ZILOG_CONSOLE */
-#define IP22ZILOG_CONSOLE (NULL)
-#define ip22zilog_console_init() do { } while (0)
+static struct uart_driver ip22zilog_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial",
+ .devfs_name = "tts/",
+ .dev_name = "ttyS",
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .nr = NUM_CHANNELS,
+#ifdef CONFIG_SERIAL_IP22_ZILOG_CONSOLE
+ .cons = &ip22zilog_console,
#endif
+};
static void __init ip22zilog_prepare(void)
{
for (channel = 0; channel < NUM_CHANNELS; channel++)
spin_lock_init(&ip22zilog_port_table[channel].port.lock);
- ip22zilog_irq_chain = up = &ip22zilog_port_table[0];
- for (channel = 0; channel < NUM_CHANNELS - 1; channel++)
- up[channel].next = &up[channel + 1];
+ ip22zilog_irq_chain = &ip22zilog_port_table[NUM_CHANNELS - 1];
+ up = &ip22zilog_port_table[0];
+ for (channel = NUM_CHANNELS - 1 ; channel > 0; channel--)
+ up[channel].next = &up[channel - 1];
up[channel].next = NULL;
for (chip = 0; chip < NUM_IP22ZILOG; chip++) {
if (!ip22zilog_chip_regs[chip]) {
ip22zilog_chip_regs[chip] = rp = get_zs(chip);
- up[(chip * 2) + 0].port.membase = (char *) &rp->channelA;
- up[(chip * 2) + 1].port.membase = (char *) &rp->channelB;
+ up[(chip * 2) + 0].port.membase = (char *) &rp->channelB;
+ up[(chip * 2) + 1].port.membase = (char *) &rp->channelA;
+
+ /* In theory mapbase is the physical address ... */
+ up[(chip * 2) + 0].port.mapbase =
+ (unsigned long) ioremap((unsigned long) &rp->channelB, 8);
+ up[(chip * 2) + 1].port.mapbase =
+ (unsigned long) ioremap((unsigned long) &rp->channelA, 8);
}
/* Channel A */
up[(chip * 2) + 0].port.type = PORT_IP22ZILOG;
up[(chip * 2) + 0].port.flags = 0;
up[(chip * 2) + 0].port.line = (chip * 2) + 0;
- up[(chip * 2) + 0].flags |= IP22ZILOG_FLAG_IS_CHANNEL_A;
+ up[(chip * 2) + 0].flags = 0;
/* Channel B */
up[(chip * 2) + 1].port.iotype = UPIO_MEM;
up[(chip * 2) + 1].port.fifosize = 1;
up[(chip * 2) + 1].port.ops = &ip22zilog_pops;
up[(chip * 2) + 1].port.type = PORT_IP22ZILOG;
- up[(chip * 2) + 1].port.flags = 0;
+ up[(chip * 2) + 1].port.flags |= IP22ZILOG_FLAG_IS_CHANNEL_A;
up[(chip * 2) + 1].port.line = (chip * 2) + 1;
- up[(chip * 2) + 1].flags |= 0;
+ up[(chip * 2) + 1].flags = 0;
}
}
brg = BPS_TO_BRG(baud, ZS_CLOCK / ZS_CLOCK_DIVISOR);
up->curregs[R12] = (brg & 0xff);
up->curregs[R13] = (brg >> 8) & 0xff;
- up->curregs[R14] = BRSRC | BRENAB;
+ up->curregs[R14] = BRENAB;
__load_zsregs(channel, up->curregs);
+ /* set master interrupt enable */
+ write_zsreg(channel, R9, up->curregs[R9]);
spin_unlock_irqrestore(&up->port.lock, flags);
}
ip22zilog_init_hw();
- /* We can only init this once we have probed the Zilogs
- * in the system.
- */
- ip22zilog_reg.nr = NUM_CHANNELS;
- ip22zilog_reg.cons = IP22ZILOG_CONSOLE;
-
- ip22zilog_reg.minor = ip22serial_current_minor;
- ip22serial_current_minor += NUM_CHANNELS;
-
ret = uart_register_driver(&ip22zilog_reg);
if (ret == 0) {
int i;
static int __init ip22zilog_init(void)
{
/* IP22 Zilog setup is hard coded, no probing to do. */
-
ip22zilog_alloc_tables();
-
ip22zilog_ports_init();
- ip22zilog_console_init();
return 0;
}
* keep going. Perhaps one day the cflag settings for the
* console can be used instead.
*/
-#if defined(CONFIG_ARNEWSH) || defined(CONFIG_MOTOROLA) || defined(CONFIG_senTec)
+#if defined(CONFIG_ARNEWSH) || defined(CONFIG_MOTOROLA) || defined(CONFIG_senTec) || defined(CONFIG_SNEHA)
#define CONSOLE_BAUD_RATE 19200
#define DEFAULT_CBAUD B19200
#endif
+#if defined(CONFIG_HW_FEITH)
+ #define CONSOLE_BAUD_RATE 38400
+ #define DEFAULT_CBAUD B38400
+#endif
+
#ifndef CONSOLE_BAUD_RATE
#define CONSOLE_BAUD_RATE 9600
#define DEFAULT_CBAUD B9600
#undef SERIAL_DEBUG_OPEN
#undef SERIAL_DEBUG_FLOW
-#ifdef CONFIG_M5282
-#define IRQBASE 77
+#if defined(CONFIG_M527x) || defined(CONFIG_M528x)
+#define IRQBASE (MCFINT_VECBASE+MCFINT_UART0)
#else
#define IRQBASE 73
#endif
#endif
tty->flip.count++;
- if (status & MCFUART_USR_RXERR)
+ if (status & MCFUART_USR_RXERR) {
uartp[MCFUART_UCR] = MCFUART_UCR_CMDRESETERR;
- if (status & MCFUART_USR_RXBREAK) {
- info->stats.rxbreak++;
- *tty->flip.flag_buf_ptr++ = TTY_BREAK;
- } else if (status & MCFUART_USR_RXPARITY) {
- info->stats.rxparity++;
- *tty->flip.flag_buf_ptr++ = TTY_PARITY;
- } else if (status & MCFUART_USR_RXOVERRUN) {
- info->stats.rxoverrun++;
- *tty->flip.flag_buf_ptr++ = TTY_OVERRUN;
- } else if (status & MCFUART_USR_RXFRAMING) {
- info->stats.rxframing++;
- *tty->flip.flag_buf_ptr++ = TTY_FRAME;
+ if (status & MCFUART_USR_RXBREAK) {
+ info->stats.rxbreak++;
+ *tty->flip.flag_buf_ptr++ = TTY_BREAK;
+ } else if (status & MCFUART_USR_RXPARITY) {
+ info->stats.rxparity++;
+ *tty->flip.flag_buf_ptr++ = TTY_PARITY;
+ } else if (status & MCFUART_USR_RXOVERRUN) {
+ info->stats.rxoverrun++;
+ *tty->flip.flag_buf_ptr++ = TTY_OVERRUN;
+ } else if (status & MCFUART_USR_RXFRAMING) {
+ info->stats.rxframing++;
+ *tty->flip.flag_buf_ptr++ = TTY_FRAME;
+ } else {
+ /* This should never happen... */
+ *tty->flip.flag_buf_ptr++ = 0;
+ }
} else {
*tty->flip.flag_buf_ptr++ = 0;
}
if (serial_paranoia_check(info, tty->name, "mcfrs_flush_chars"))
return;
+ uartp = (volatile unsigned char *) info->addr;
+
+ /*
+ * re-enable receiver interrupt
+ */
+ local_irq_save(flags);
+ if ((!(info->imr & MCFUART_UIR_RXREADY)) &&
+ (info->flags & ASYNC_INITIALIZED) ) {
+ info->imr |= MCFUART_UIR_RXREADY;
+ uartp[MCFUART_UIMR] = info->imr;
+ }
+ local_irq_restore(flags);
+
if (info->xmit_cnt <= 0 || tty->stopped || tty->hw_stopped ||
!info->xmit_buf)
return;
/* Enable transmitter */
local_irq_save(flags);
- uartp = info->addr;
info->imr |= MCFUART_UIR_TXREADY;
uartp[MCFUART_UIMR] = info->imr;
local_irq_restore(flags);
local_irq_save(flags);
uartp[MCFUART_UCR] = MCFUART_UCR_CMDBREAKSTART;
- schedule_timeout(jiffies + duration);
+ schedule_timeout(duration);
uartp[MCFUART_UCR] = MCFUART_UCR_CMDBREAKSTOP;
local_irq_restore(flags);
}
*portp = (*portp & ~0x000000ff) | 0x00000055;
portp = (volatile unsigned long *) (MCF_MBAR + MCFSIM_PDCNT);
*portp = (*portp & ~0x000003fc) | 0x000002a8;
-#elif defined(CONFIG_M5282)
+#elif defined(CONFIG_M527x) || defined(CONFIG_M528x)
volatile unsigned char *icrp, *uartp;
volatile unsigned long *imrp;
imrp = (volatile unsigned long *) (MCF_MBAR + MCFICM_INTC0 +
MCFINTC_IMRL);
- *imrp &= ~((1 << (info->irq - 64)) | 1);
+ *imrp &= ~((1 << (info->irq - MCFINT_VECBASE)) | 1);
#else
volatile unsigned char *icrp, *uartp;
/* Basic port init. Needed since we use some uart_??? func before
* real init for early access */
- port->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&port->lock);
port->uartclk = __res.bi_ipbfreq / 2; /* Look at CTLR doc */
port->ops = &mpc52xx_uart_ops;
port->mapbase = MPC52xx_PSCx(co->index);
port = &mpc52xx_uart_ports[idx];
/* Init the port structure */
- port->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&port->lock);
port->mapbase = ocp->def->paddr;
port->irq = ocp->def->irq;
port->uartclk = __res.bi_ipbfreq / 2; /* Look at CTLR doc */
--- /dev/null
+/*
+ * drivers/serial/mpsc.c
+ *
+ * Generic driver for the MPSC (UART mode) on Marvell parts (e.g., GT64240,
+ * GT64260, MV64340, MV64360, GT96100, ... ).
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * Based on an old MPSC driver that was in the linuxppc tree. It appears to
+ * have been created by Chris Zankel (formerly of MontaVista) but there
+ * is no proper Copyright so I'm not sure. Apparently, parts were also
+ * taken from PPCBoot (now U-Boot). Also based on drivers/serial/8250.c
+ * by Russell King.
+ *
+ * 2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+/*
+ * The MPSC interface is much like a typical network controller's interface.
+ * That is, you set up separate rings of descriptors for transmitting and
+ * receiving data. There is also a pool of buffers (one buffer per
+ * descriptor) that incoming data are dma'd into or outgoing data are dma'd
+ * out of.
+ *
+ * The MPSC requires two other controllers to be able to work. The Baud Rate
+ * Generator (BRG) provides a clock at programmable frequencies which determines
+ * the baud rate. The Serial DMA Controller (SDMA) takes incoming data from the
+ * MPSC and DMA's it into memory or DMA's outgoing data and passes it to the
+ * MPSC. It is actually the SDMA interrupt that the driver uses to keep the
+ * transmit and receive "engines" going (i.e., indicate data has been
+ * transmitted or received).
+ *
+ * NOTES:
+ *
+ * 1) Some chips have an erratum where several regs cannot be
+ * read. To work around that, we keep a local copy of those regs in
+ * 'mpsc_port_info'.
+ *
+ * 2) Some chips have an erratum where the ctlr will hang when the SDMA ctlr
+ * accesses system mem with coherency enabled. For that reason, the driver
+ * assumes that coherency for that ctlr has been disabled. This means
+ * that when in a cache coherent system, the driver has to manually manage
+ * the data cache on the areas that it touches because the dma_* macros are
+ * basically no-ops.
+ *
+ * 3) There is an erratum (on PPC) where you can't use the instruction to do
+ * a DMA_TO_DEVICE/cache clean so DMA_BIDIRECTIONAL/flushes are used in places
+ * where a DMA_TO_DEVICE/clean would have [otherwise] sufficed.
+ *
+ * 4) AFAICT, hardware flow control isn't supported by the controller --MAG.
+ */
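
For orientation, the descriptor fields that the ring code below manipulates
(bufsize, bytecnt, shadow, cmdstat, link, buf_ptr) are defined in mpsc.h,
which is not shown here. The following is a rough sketch reconstructed only
from those accesses, an assumption rather than the actual header:

/* Assumed sketch of the descriptor layouts; the real definitions live in
 * mpsc.h. All fields are stored big-endian, hence the cpu_to_be*() and
 * be*_to_cpu() conversions used throughout the driver.
 */
struct mpsc_rx_desc {
	u16	bufsize;	/* capacity of the attached buffer         */
	u16	bytecnt;	/* bytes the SDMA actually received        */
	u32	cmdstat;	/* ownership bit + error/frame flags       */
	u32	link;		/* bus address of next descriptor (ring)   */
	u32	buf_ptr;	/* bus address of the data buffer          */
};

struct mpsc_tx_desc {
	u16	bytecnt;	/* bytes to transmit                       */
	u16	shadow;		/* shadow copy of bytecnt (see code below) */
	u32	cmdstat;	/* ownership + F/L/EI flags                */
	u32	link;		/* bus address of next descriptor (ring)   */
	u32	buf_ptr;	/* bus address of the data buffer          */
};
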
+
+#include "mpsc.h"
+
+/*
+ * Define how this driver is known to the outside (we've been assigned a
+ * range on the "Low-density serial ports" major).
+ */
+#define MPSC_MAJOR 204
+#define MPSC_MINOR_START 44
+#define MPSC_DRIVER_NAME "MPSC"
+#define MPSC_DEVFS_NAME "ttymm/"
+#define MPSC_DEV_NAME "ttyMM"
+#define MPSC_VERSION "1.00"
+
+static struct mpsc_port_info mpsc_ports[MPSC_NUM_CTLRS];
+static struct mpsc_shared_regs mpsc_shared_regs;
+
+/*
+ ******************************************************************************
+ *
+ * Baud Rate Generator Routines (BRG)
+ *
+ ******************************************************************************
+ */
+static void
+mpsc_brg_init(struct mpsc_port_info *pi, u32 clk_src)
+{
+ u32 v;
+
+ v = (pi->mirror_regs) ? pi->BRG_BCR_m : readl(pi->brg_base + BRG_BCR);
+ v = (v & ~(0xf << 18)) | ((clk_src & 0xf) << 18);
+
+ if (pi->brg_can_tune)
+ v &= ~(1 << 25);
+
+ if (pi->mirror_regs)
+ pi->BRG_BCR_m = v;
+ writel(v, pi->brg_base + BRG_BCR);
+
+ writel(readl(pi->brg_base + BRG_BTR) & 0xffff0000,
+ pi->brg_base + BRG_BTR);
+ return;
+}
+
+static void
+mpsc_brg_enable(struct mpsc_port_info *pi)
+{
+ u32 v;
+
+ v = (pi->mirror_regs) ? pi->BRG_BCR_m : readl(pi->brg_base + BRG_BCR);
+ v |= (1 << 16);
+
+ if (pi->mirror_regs)
+ pi->BRG_BCR_m = v;
+ writel(v, pi->brg_base + BRG_BCR);
+ return;
+}
+
+static void
+mpsc_brg_disable(struct mpsc_port_info *pi)
+{
+ u32 v;
+
+ v = (pi->mirror_regs) ? pi->BRG_BCR_m : readl(pi->brg_base + BRG_BCR);
+ v &= ~(1 << 16);
+
+ if (pi->mirror_regs)
+ pi->BRG_BCR_m = v;
+ writel(v, pi->brg_base + BRG_BCR);
+ return;
+}
+
+static inline void
+mpsc_set_baudrate(struct mpsc_port_info *pi, u32 baud)
+{
+ /*
+ * To set the baud, we adjust the CDV field in the BRG_BCR reg.
+ * From manual: Baud = clk / ((CDV+1)*2) ==> CDV = (clk / (baud*2)) - 1.
+ * However, the input clock is divided by 16 in the MPSC b/c of how
+ * 'MPSC_MMCRH' was set up so we have to divide the 'clk' used in our
+ * calculation by 16 to account for that. So the real calculation
+ * that accounts for the way the mpsc is set up is:
+ * CDV = (clk / (baud*2*16)) - 1 ==> CDV = (clk / (baud << 5)) - 1.
+ */
+ u32 cdv = (pi->port.uartclk / (baud << 5)) - 1;
+ u32 v;
+
+ mpsc_brg_disable(pi);
+ v = (pi->mirror_regs) ? pi->BRG_BCR_m : readl(pi->brg_base + BRG_BCR);
+ v = (v & 0xffff0000) | (cdv & 0xffff);
+
+ if (pi->mirror_regs)
+ pi->BRG_BCR_m = v;
+ writel(v, pi->brg_base + BRG_BCR);
+ mpsc_brg_enable(pi);
+
+ return;
+}
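
As a worked example of the CDV arithmetic in the comment above (the clock and
baud figures here are hypothetical, chosen only to illustrate the integer
math; u32 is the usual kernel type):

/* Hypothetical numbers, illustrating the CDV formula above. */
static u32 example_cdv(void)
{
	u32 clk  = 100000000;		/* assumed BRG input clock: 100 MHz */
	u32 baud = 115200;		/* requested baud rate              */
	u32 cdv  = (clk / (baud << 5)) - 1;	/* 100000000/3686400 - 1 = 26 */

	/* Resulting rate = clk / ((cdv + 1) * 2 * 16)
	 *               = 100000000 / 864 ~= 115740 baud (~0.5% high).
	 */
	return cdv;
}
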
+
+/*
+ ******************************************************************************
+ *
+ * Serial DMA Routines (SDMA)
+ *
+ ******************************************************************************
+ */
+
+static void
+mpsc_sdma_burstsize(struct mpsc_port_info *pi, u32 burst_size)
+{
+ u32 v;
+
+ pr_debug("mpsc_sdma_burstsize[%d]: burst_size: %d\n",
+ pi->port.line, burst_size);
+
+ burst_size >>= 3; /* Divide by 8 b/c reg values are 8-byte chunks */
+
+ if (burst_size < 2)
+ v = 0x0; /* 1 64-bit word */
+ else if (burst_size < 4)
+ v = 0x1; /* 2 64-bit words */
+ else if (burst_size < 8)
+ v = 0x2; /* 4 64-bit words */
+ else
+ v = 0x3; /* 8 64-bit words */
+
+ writel((readl(pi->sdma_base + SDMA_SDC) & (0x3 << 12)) | (v << 12),
+ pi->sdma_base + SDMA_SDC);
+ return;
+}
+
+static void
+mpsc_sdma_init(struct mpsc_port_info *pi, u32 burst_size)
+{
+ pr_debug("mpsc_sdma_init[%d]: burst_size: %d\n", pi->port.line,
+ burst_size);
+
+ writel((readl(pi->sdma_base + SDMA_SDC) & 0x3ff) | 0x03f,
+ pi->sdma_base + SDMA_SDC);
+ mpsc_sdma_burstsize(pi, burst_size);
+ return;
+}
+
+static inline u32
+mpsc_sdma_intr_mask(struct mpsc_port_info *pi, u32 mask)
+{
+ u32 old, v;
+
+ pr_debug("mpsc_sdma_intr_mask[%d]: mask: 0x%x\n", pi->port.line, mask);
+
+ old = v = (pi->mirror_regs) ? pi->shared_regs->SDMA_INTR_MASK_m :
+ readl(pi->shared_regs->sdma_intr_base + SDMA_INTR_MASK);
+
+ mask &= 0xf;
+ if (pi->port.line)
+ mask <<= 8;
+ v &= ~mask;
+
+ if (pi->mirror_regs)
+ pi->shared_regs->SDMA_INTR_MASK_m = v;
+ writel(v, pi->shared_regs->sdma_intr_base + SDMA_INTR_MASK);
+
+ if (pi->port.line)
+ old >>= 8;
+ return old & 0xf;
+}
+
+static inline void
+mpsc_sdma_intr_unmask(struct mpsc_port_info *pi, u32 mask)
+{
+ u32 v;
+
+ pr_debug("mpsc_sdma_intr_unmask[%d]: mask: 0x%x\n", pi->port.line,mask);
+
+ v = (pi->mirror_regs) ? pi->shared_regs->SDMA_INTR_MASK_m :
+ readl(pi->shared_regs->sdma_intr_base + SDMA_INTR_MASK);
+
+ mask &= 0xf;
+ if (pi->port.line)
+ mask <<= 8;
+ v |= mask;
+
+ if (pi->mirror_regs)
+ pi->shared_regs->SDMA_INTR_MASK_m = v;
+ writel(v, pi->shared_regs->sdma_intr_base + SDMA_INTR_MASK);
+ return;
+}
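
The mask/unmask helpers above pack one 4-bit interrupt field per MPSC port
into SDMA_INTR_MASK: bits 3..0 for port 0 and bits 11..8 for port 1 (the
"mask <<= 8" shift). A minimal sketch of that bit placement, with a made-up
helper name used purely for illustration:

/* Illustration only: where a port's 4 interrupt-mask bits live in
 * SDMA_INTR_MASK, mirroring the shift done in the helpers above.
 */
static u32 example_intr_field(int line, u32 mask4)
{
	mask4 &= 0xf;				/* 4 bits per port      */
	return line ? (mask4 << 8) : mask4;	/* port 1 -> bits 11..8 */
}
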
+
+static inline void
+mpsc_sdma_intr_ack(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_sdma_intr_ack[%d]: Acknowledging IRQ\n", pi->port.line);
+
+ if (pi->mirror_regs)
+ pi->shared_regs->SDMA_INTR_CAUSE_m = 0;
+ writel(0, pi->shared_regs->sdma_intr_base + SDMA_INTR_CAUSE);
+ return;
+}
+
+static inline void
+mpsc_sdma_set_rx_ring(struct mpsc_port_info *pi, struct mpsc_rx_desc *rxre_p)
+{
+ pr_debug("mpsc_sdma_set_rx_ring[%d]: rxre_p: 0x%x\n",
+ pi->port.line, (u32) rxre_p);
+
+ writel((u32)rxre_p, pi->sdma_base + SDMA_SCRDP);
+ return;
+}
+
+static inline void
+mpsc_sdma_set_tx_ring(struct mpsc_port_info *pi, struct mpsc_tx_desc *txre_p)
+{
+ writel((u32)txre_p, pi->sdma_base + SDMA_SFTDP);
+ writel((u32)txre_p, pi->sdma_base + SDMA_SCTDP);
+ return;
+}
+
+static inline void
+mpsc_sdma_cmd(struct mpsc_port_info *pi, u32 val)
+{
+ u32 v;
+
+ v = readl(pi->sdma_base + SDMA_SDCM);
+ if (val)
+ v |= val;
+ else
+ v = 0;
+ wmb();
+ writel(v, pi->sdma_base + SDMA_SDCM);
+ wmb();
+ return;
+}
+
+static inline uint
+mpsc_sdma_tx_active(struct mpsc_port_info *pi)
+{
+ return readl(pi->sdma_base + SDMA_SDCM) & SDMA_SDCM_TXD;
+}
+
+static inline void
+mpsc_sdma_start_tx(struct mpsc_port_info *pi)
+{
+ struct mpsc_tx_desc *txre, *txre_p;
+
+ /* If tx isn't running & there's a desc ready to go, start it */
+ if (!mpsc_sdma_tx_active(pi)) {
+ txre = (struct mpsc_tx_desc *)(pi->txr +
+ (pi->txr_tail * MPSC_TXRE_SIZE));
+ dma_cache_sync((void *) txre, MPSC_TXRE_SIZE, DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)txre,
+ (ulong)txre + MPSC_TXRE_SIZE);
+#endif
+
+ if (be32_to_cpu(txre->cmdstat) & SDMA_DESC_CMDSTAT_O) {
+ txre_p = (struct mpsc_tx_desc *)(pi->txr_p +
+ (pi->txr_tail *
+ MPSC_TXRE_SIZE));
+
+ mpsc_sdma_set_tx_ring(pi, txre_p);
+ mpsc_sdma_cmd(pi, SDMA_SDCM_STD | SDMA_SDCM_TXD);
+ }
+ }
+
+ return;
+}
+
+static inline void
+mpsc_sdma_stop(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_sdma_stop[%d]: Stopping SDMA\n", pi->port.line);
+
+ /* Abort any SDMA transfers */
+ mpsc_sdma_cmd(pi, 0);
+ mpsc_sdma_cmd(pi, SDMA_SDCM_AR | SDMA_SDCM_AT);
+
+ /* Clear the SDMA current and first TX and RX pointers */
+ mpsc_sdma_set_tx_ring(pi, 0);
+ mpsc_sdma_set_rx_ring(pi, 0);
+
+ /* Disable interrupts */
+ mpsc_sdma_intr_mask(pi, 0xf);
+ mpsc_sdma_intr_ack(pi);
+
+ return;
+}
+
+/*
+ ******************************************************************************
+ *
+ * Multi-Protocol Serial Controller Routines (MPSC)
+ *
+ ******************************************************************************
+ */
+
+static void
+mpsc_hw_init(struct mpsc_port_info *pi)
+{
+ u32 v;
+
+ pr_debug("mpsc_hw_init[%d]: Initializing hardware\n", pi->port.line);
+
+ /* Set up clock routing */
+ if (pi->mirror_regs) {
+ v = pi->shared_regs->MPSC_MRR_m;
+ v &= ~0x1c7;
+ pi->shared_regs->MPSC_MRR_m = v;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_MRR);
+
+ v = pi->shared_regs->MPSC_RCRR_m;
+ v = (v & ~0xf0f) | 0x100;
+ pi->shared_regs->MPSC_RCRR_m = v;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_RCRR);
+
+ v = pi->shared_regs->MPSC_TCRR_m;
+ v = (v & ~0xf0f) | 0x100;
+ pi->shared_regs->MPSC_TCRR_m = v;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_TCRR);
+ }
+ else {
+ v = readl(pi->shared_regs->mpsc_routing_base + MPSC_MRR);
+ v &= ~0x1c7;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_MRR);
+
+ v = readl(pi->shared_regs->mpsc_routing_base + MPSC_RCRR);
+ v = (v & ~0xf0f) | 0x100;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_RCRR);
+
+ v = readl(pi->shared_regs->mpsc_routing_base + MPSC_TCRR);
+ v = (v & ~0xf0f) | 0x100;
+ writel(v, pi->shared_regs->mpsc_routing_base + MPSC_TCRR);
+ }
+
+	/* Put MPSC in UART mode & enable Tx/Rx engines */
+ writel(0x000004c4, pi->mpsc_base + MPSC_MMCRL);
+
+ /* No preamble, 16x divider, low-latency, */
+ writel(0x04400400, pi->mpsc_base + MPSC_MMCRH);
+
+ if (pi->mirror_regs) {
+ pi->MPSC_CHR_1_m = 0;
+ pi->MPSC_CHR_2_m = 0;
+ }
+ writel(0, pi->mpsc_base + MPSC_CHR_1);
+ writel(0, pi->mpsc_base + MPSC_CHR_2);
+ writel(pi->mpsc_max_idle, pi->mpsc_base + MPSC_CHR_3);
+ writel(0, pi->mpsc_base + MPSC_CHR_4);
+ writel(0, pi->mpsc_base + MPSC_CHR_5);
+ writel(0, pi->mpsc_base + MPSC_CHR_6);
+ writel(0, pi->mpsc_base + MPSC_CHR_7);
+ writel(0, pi->mpsc_base + MPSC_CHR_8);
+ writel(0, pi->mpsc_base + MPSC_CHR_9);
+ writel(0, pi->mpsc_base + MPSC_CHR_10);
+
+ return;
+}
+
+static inline void
+mpsc_enter_hunt(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_enter_hunt[%d]: Hunting...\n", pi->port.line);
+
+ if (pi->mirror_regs) {
+ writel(pi->MPSC_CHR_2_m | MPSC_CHR_2_EH,
+ pi->mpsc_base + MPSC_CHR_2);
+ /* Erratum prevents reading CHR_2 so just delay for a while */
+ udelay(100);
+ }
+ else {
+ writel(readl(pi->mpsc_base + MPSC_CHR_2) | MPSC_CHR_2_EH,
+ pi->mpsc_base + MPSC_CHR_2);
+
+ while (readl(pi->mpsc_base + MPSC_CHR_2) & MPSC_CHR_2_EH)
+ udelay(10);
+ }
+
+ return;
+}
+
+static inline void
+mpsc_freeze(struct mpsc_port_info *pi)
+{
+ u32 v;
+
+ pr_debug("mpsc_freeze[%d]: Freezing\n", pi->port.line);
+
+ v = (pi->mirror_regs) ? pi->MPSC_MPCR_m :
+ readl(pi->mpsc_base + MPSC_MPCR);
+ v |= MPSC_MPCR_FRZ;
+
+ if (pi->mirror_regs)
+ pi->MPSC_MPCR_m = v;
+ writel(v, pi->mpsc_base + MPSC_MPCR);
+ return;
+}
+
+static inline void
+mpsc_unfreeze(struct mpsc_port_info *pi)
+{
+ u32 v;
+
+ v = (pi->mirror_regs) ? pi->MPSC_MPCR_m :
+ readl(pi->mpsc_base + MPSC_MPCR);
+ v &= ~MPSC_MPCR_FRZ;
+
+ if (pi->mirror_regs)
+ pi->MPSC_MPCR_m = v;
+ writel(v, pi->mpsc_base + MPSC_MPCR);
+
+ pr_debug("mpsc_unfreeze[%d]: Unfrozen\n", pi->port.line);
+ return;
+}
+
+static inline void
+mpsc_set_char_length(struct mpsc_port_info *pi, u32 len)
+{
+ u32 v;
+
+ pr_debug("mpsc_set_char_length[%d]: char len: %d\n", pi->port.line,len);
+
+ v = (pi->mirror_regs) ? pi->MPSC_MPCR_m :
+ readl(pi->mpsc_base + MPSC_MPCR);
+ v = (v & ~(0x3 << 12)) | ((len & 0x3) << 12);
+
+ if (pi->mirror_regs)
+ pi->MPSC_MPCR_m = v;
+ writel(v, pi->mpsc_base + MPSC_MPCR);
+ return;
+}
+
+static inline void
+mpsc_set_stop_bit_length(struct mpsc_port_info *pi, u32 len)
+{
+ u32 v;
+
+ pr_debug("mpsc_set_stop_bit_length[%d]: stop bits: %d\n",
+ pi->port.line, len);
+
+ v = (pi->mirror_regs) ? pi->MPSC_MPCR_m :
+ readl(pi->mpsc_base + MPSC_MPCR);
+
+ v = (v & ~(1 << 14)) | ((len & 0x1) << 14);
+
+ if (pi->mirror_regs)
+ pi->MPSC_MPCR_m = v;
+ writel(v, pi->mpsc_base + MPSC_MPCR);
+ return;
+}
+
+static inline void
+mpsc_set_parity(struct mpsc_port_info *pi, u32 p)
+{
+ u32 v;
+
+ pr_debug("mpsc_set_parity[%d]: parity bits: 0x%x\n", pi->port.line, p);
+
+ v = (pi->mirror_regs) ? pi->MPSC_CHR_2_m :
+ readl(pi->mpsc_base + MPSC_CHR_2);
+
+ p &= 0x3;
+ v = (v & ~0xc000c) | (p << 18) | (p << 2);
+
+ if (pi->mirror_regs)
+ pi->MPSC_CHR_2_m = v;
+ writel(v, pi->mpsc_base + MPSC_CHR_2);
+ return;
+}
+
+/*
+ ******************************************************************************
+ *
+ * Driver Init Routines
+ *
+ ******************************************************************************
+ */
+
+static void
+mpsc_init_hw(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_init_hw[%d]: Initializing\n", pi->port.line);
+
+ mpsc_brg_init(pi, pi->brg_clk_src);
+ mpsc_brg_enable(pi);
+ mpsc_sdma_init(pi, dma_get_cache_alignment()); /* burst a cacheline */
+ mpsc_sdma_stop(pi);
+ mpsc_hw_init(pi);
+
+ return;
+}
+
+static int
+mpsc_alloc_ring_mem(struct mpsc_port_info *pi)
+{
+ int rc = 0;
+ static void mpsc_free_ring_mem(struct mpsc_port_info *pi);
+
+ pr_debug("mpsc_alloc_ring_mem[%d]: Allocating ring mem\n",
+ pi->port.line);
+
+ if (!pi->dma_region) {
+ if (!dma_supported(pi->port.dev, 0xffffffff)) {
+ printk(KERN_ERR "MPSC: Inadequate DMA support\n");
+ rc = -ENXIO;
+ }
+ else if ((pi->dma_region = dma_alloc_noncoherent(pi->port.dev,
+ MPSC_DMA_ALLOC_SIZE, &pi->dma_region_p, GFP_KERNEL))
+ == NULL) {
+
+ printk(KERN_ERR "MPSC: Can't alloc Desc region\n");
+ rc = -ENOMEM;
+ }
+ }
+
+ return rc;
+}
+
+static void
+mpsc_free_ring_mem(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_free_ring_mem[%d]: Freeing ring mem\n", pi->port.line);
+
+ if (pi->dma_region) {
+ dma_free_noncoherent(pi->port.dev, MPSC_DMA_ALLOC_SIZE,
+ pi->dma_region, pi->dma_region_p);
+ pi->dma_region = NULL;
+ pi->dma_region_p = (dma_addr_t) NULL;
+ }
+
+ return;
+}
+
+static void
+mpsc_init_rings(struct mpsc_port_info *pi)
+{
+ struct mpsc_rx_desc *rxre;
+ struct mpsc_tx_desc *txre;
+ dma_addr_t dp, dp_p;
+ u8 *bp, *bp_p;
+ int i;
+
+ pr_debug("mpsc_init_rings[%d]: Initializing rings\n", pi->port.line);
+
+ BUG_ON(pi->dma_region == NULL);
+
+ memset(pi->dma_region, 0, MPSC_DMA_ALLOC_SIZE);
+
+ /*
+ * Descriptors & buffers are multiples of cacheline size and must be
+ * cacheline aligned.
+ */
+ dp = ALIGN((u32) pi->dma_region, dma_get_cache_alignment());
+ dp_p = ALIGN((u32) pi->dma_region_p, dma_get_cache_alignment());
+
+ /*
+ * Partition dma region into rx ring descriptor, rx buffers,
+ * tx ring descriptors, and tx buffers.
+ */
+ pi->rxr = dp;
+ pi->rxr_p = dp_p;
+ dp += MPSC_RXR_SIZE;
+ dp_p += MPSC_RXR_SIZE;
+
+ pi->rxb = (u8 *) dp;
+ pi->rxb_p = (u8 *) dp_p;
+ dp += MPSC_RXB_SIZE;
+ dp_p += MPSC_RXB_SIZE;
+
+ pi->rxr_posn = 0;
+
+ pi->txr = dp;
+ pi->txr_p = dp_p;
+ dp += MPSC_TXR_SIZE;
+ dp_p += MPSC_TXR_SIZE;
+
+ pi->txb = (u8 *) dp;
+ pi->txb_p = (u8 *) dp_p;
+
+ pi->txr_head = 0;
+ pi->txr_tail = 0;
+
+ /* Init rx ring descriptors */
+ dp = pi->rxr;
+ dp_p = pi->rxr_p;
+ bp = pi->rxb;
+ bp_p = pi->rxb_p;
+
+ for (i = 0; i < MPSC_RXR_ENTRIES; i++) {
+ rxre = (struct mpsc_rx_desc *)dp;
+
+ rxre->bufsize = cpu_to_be16(MPSC_RXBE_SIZE);
+ rxre->bytecnt = cpu_to_be16(0);
+ rxre->cmdstat = cpu_to_be32(SDMA_DESC_CMDSTAT_O |
+ SDMA_DESC_CMDSTAT_EI |
+ SDMA_DESC_CMDSTAT_F |
+ SDMA_DESC_CMDSTAT_L);
+ rxre->link = cpu_to_be32(dp_p + MPSC_RXRE_SIZE);
+ rxre->buf_ptr = cpu_to_be32(bp_p);
+
+ dp += MPSC_RXRE_SIZE;
+ dp_p += MPSC_RXRE_SIZE;
+ bp += MPSC_RXBE_SIZE;
+ bp_p += MPSC_RXBE_SIZE;
+ }
+ rxre->link = cpu_to_be32(pi->rxr_p); /* Wrap last back to first */
+
+ /* Init tx ring descriptors */
+ dp = pi->txr;
+ dp_p = pi->txr_p;
+ bp = pi->txb;
+ bp_p = pi->txb_p;
+
+ for (i = 0; i < MPSC_TXR_ENTRIES; i++) {
+ txre = (struct mpsc_tx_desc *)dp;
+
+ txre->link = cpu_to_be32(dp_p + MPSC_TXRE_SIZE);
+ txre->buf_ptr = cpu_to_be32(bp_p);
+
+ dp += MPSC_TXRE_SIZE;
+ dp_p += MPSC_TXRE_SIZE;
+ bp += MPSC_TXBE_SIZE;
+ bp_p += MPSC_TXBE_SIZE;
+ }
+ txre->link = cpu_to_be32(pi->txr_p); /* Wrap last back to first */
+
+ dma_cache_sync((void *) pi->dma_region, MPSC_DMA_ALLOC_SIZE,
+ DMA_BIDIRECTIONAL);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ flush_dcache_range((ulong)pi->dma_region,
+ (ulong)pi->dma_region + MPSC_DMA_ALLOC_SIZE);
+#endif
+
+ return;
+}
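
To summarize the partitioning performed by mpsc_init_rings() above, the
single dma_region allocation is carved up as follows (a layout sketch
assembled from the pointer arithmetic above; the MPSC_*_SIZE constants come
from mpsc.h):

/*
 * dma_region, cacheline aligned:
 *
 *   +-------------------+ <- pi->rxr / pi->rxr_p
 *   | Rx descriptors    |    MPSC_RXR_SIZE
 *   +-------------------+ <- pi->rxb / pi->rxb_p
 *   | Rx buffers        |    MPSC_RXB_SIZE
 *   +-------------------+ <- pi->txr / pi->txr_p
 *   | Tx descriptors    |    MPSC_TXR_SIZE
 *   +-------------------+ <- pi->txb / pi->txb_p
 *   | Tx buffers        |    MPSC_TXB_SIZE
 *   +-------------------+
 *
 * The last Rx and Tx descriptors link back to the first, so both rings wrap.
 */
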
+
+static void
+mpsc_uninit_rings(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_uninit_rings[%d]: Uninitializing rings\n",pi->port.line);
+
+ BUG_ON(pi->dma_region == NULL);
+
+ pi->rxr = 0;
+ pi->rxr_p = 0;
+ pi->rxb = NULL;
+ pi->rxb_p = NULL;
+ pi->rxr_posn = 0;
+
+ pi->txr = 0;
+ pi->txr_p = 0;
+ pi->txb = NULL;
+ pi->txb_p = NULL;
+ pi->txr_head = 0;
+ pi->txr_tail = 0;
+
+ return;
+}
+
+static int
+mpsc_make_ready(struct mpsc_port_info *pi)
+{
+ int rc;
+
+	pr_debug("mpsc_make_ready[%d]: Making ctlr ready\n", pi->port.line);
+
+ if (!pi->ready) {
+ mpsc_init_hw(pi);
+ if ((rc = mpsc_alloc_ring_mem(pi)))
+ return rc;
+ mpsc_init_rings(pi);
+ pi->ready = 1;
+ }
+
+ return 0;
+}
+
+/*
+ ******************************************************************************
+ *
+ * Interrupt Handling Routines
+ *
+ ******************************************************************************
+ */
+
+static inline int
+mpsc_rx_intr(struct mpsc_port_info *pi, struct pt_regs *regs)
+{
+ struct mpsc_rx_desc *rxre;
+ struct tty_struct *tty = pi->port.info->tty;
+ u32 cmdstat, bytes_in, i;
+ int rc = 0;
+ u8 *bp;
+ char flag = TTY_NORMAL;
+ static void mpsc_start_rx(struct mpsc_port_info *pi);
+
+ pr_debug("mpsc_rx_intr[%d]: Handling Rx intr\n", pi->port.line);
+
+ rxre = (struct mpsc_rx_desc *)(pi->rxr + (pi->rxr_posn*MPSC_RXRE_SIZE));
+
+ dma_cache_sync((void *)rxre, MPSC_RXRE_SIZE, DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)rxre,
+ (ulong)rxre + MPSC_RXRE_SIZE);
+#endif
+
+ /*
+ * Loop through Rx descriptors handling ones that have been completed.
+ */
+ while (!((cmdstat = be32_to_cpu(rxre->cmdstat)) & SDMA_DESC_CMDSTAT_O)){
+ bytes_in = be16_to_cpu(rxre->bytecnt);
+
+ /* Following use of tty struct directly is deprecated */
+ if (unlikely((tty->flip.count + bytes_in) >= TTY_FLIPBUF_SIZE)){
+ if (tty->low_latency)
+ tty_flip_buffer_push(tty);
+ /*
+			 * If this failed then we will throw away the bytes
+			 * but must do so to clear interrupts.
+ */
+ }
+
+ bp = pi->rxb + (pi->rxr_posn * MPSC_RXBE_SIZE);
+ dma_cache_sync((void *) bp, MPSC_RXBE_SIZE, DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)bp,
+ (ulong)bp + MPSC_RXBE_SIZE);
+#endif
+
+ /*
+ * Other than for parity error, the manual provides little
+ * info on what data will be in a frame flagged by any of
+ * these errors. For parity error, it is the last byte in
+ * the buffer that had the error. As for the rest, I guess
+ * we'll assume there is no data in the buffer.
+ * If there is...it gets lost.
+ */
+ if (unlikely(cmdstat & (SDMA_DESC_CMDSTAT_BR |
+ SDMA_DESC_CMDSTAT_FR | SDMA_DESC_CMDSTAT_OR))) {
+
+ pi->port.icount.rx++;
+
+ if (cmdstat & SDMA_DESC_CMDSTAT_BR) { /* Break */
+ pi->port.icount.brk++;
+
+ if (uart_handle_break(&pi->port))
+ goto next_frame;
+ }
+ else if (cmdstat & SDMA_DESC_CMDSTAT_FR)/* Framing */
+ pi->port.icount.frame++;
+ else if (cmdstat & SDMA_DESC_CMDSTAT_OR) /* Overrun */
+ pi->port.icount.overrun++;
+
+ cmdstat &= pi->port.read_status_mask;
+
+ if (cmdstat & SDMA_DESC_CMDSTAT_BR)
+ flag = TTY_BREAK;
+ else if (cmdstat & SDMA_DESC_CMDSTAT_FR)
+ flag = TTY_FRAME;
+ else if (cmdstat & SDMA_DESC_CMDSTAT_OR)
+ flag = TTY_OVERRUN;
+ else if (cmdstat & SDMA_DESC_CMDSTAT_PE)
+ flag = TTY_PARITY;
+ }
+
+ if (uart_handle_sysrq_char(&pi->port, *bp, regs)) {
+ bp++;
+ bytes_in--;
+ goto next_frame;
+ }
+
+ if ((unlikely(cmdstat & (SDMA_DESC_CMDSTAT_BR |
+ SDMA_DESC_CMDSTAT_FR | SDMA_DESC_CMDSTAT_OR))) &&
+ !(cmdstat & pi->port.ignore_status_mask))
+
+ tty_insert_flip_char(tty, *bp, flag);
+ else {
+ for (i=0; i<bytes_in; i++)
+ tty_insert_flip_char(tty, *bp++, TTY_NORMAL);
+
+ pi->port.icount.rx += bytes_in;
+ }
+
+next_frame:
+ rxre->bytecnt = cpu_to_be16(0);
+ wmb();
+ rxre->cmdstat = cpu_to_be32(SDMA_DESC_CMDSTAT_O |
+ SDMA_DESC_CMDSTAT_EI |
+ SDMA_DESC_CMDSTAT_F |
+ SDMA_DESC_CMDSTAT_L);
+ wmb();
+ dma_cache_sync((void *)rxre, MPSC_RXRE_SIZE, DMA_BIDIRECTIONAL);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ flush_dcache_range((ulong)rxre,
+ (ulong)rxre + MPSC_RXRE_SIZE);
+#endif
+
+ /* Advance to next descriptor */
+ pi->rxr_posn = (pi->rxr_posn + 1) & (MPSC_RXR_ENTRIES - 1);
+ rxre = (struct mpsc_rx_desc *)(pi->rxr +
+ (pi->rxr_posn * MPSC_RXRE_SIZE));
+ dma_cache_sync((void *)rxre, MPSC_RXRE_SIZE, DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)rxre,
+ (ulong)rxre + MPSC_RXRE_SIZE);
+#endif
+
+ rc = 1;
+ }
+
+ /* Restart rx engine, if its stopped */
+ if ((readl(pi->sdma_base + SDMA_SDCM) & SDMA_SDCM_ERD) == 0)
+ mpsc_start_rx(pi);
+
+ tty_flip_buffer_push(tty);
+ return rc;
+}
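
The receive loop above is driven entirely by the descriptor ownership bit; a
short summary of the handshake it implements (restating the code, not adding
hardware details beyond what the driver itself relies on):

/*
 * Rx ownership handshake as used in mpsc_rx_intr():
 *
 *  - SDMA_DESC_CMDSTAT_O set   -> descriptor belongs to the SDMA engine;
 *                                 the driver leaves it alone.
 *  - SDMA_DESC_CMDSTAT_O clear -> a frame is complete; the driver reads
 *                                 bytecnt/cmdstat, pushes the buffer to the
 *                                 tty layer, then rewrites cmdstat with
 *                                 O|EI|F|L to return the descriptor and
 *                                 advances rxr_posn around the ring.
 */
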
+
+static inline void
+mpsc_setup_tx_desc(struct mpsc_port_info *pi, u32 count, u32 intr)
+{
+ struct mpsc_tx_desc *txre;
+
+ txre = (struct mpsc_tx_desc *)(pi->txr +
+ (pi->txr_head * MPSC_TXRE_SIZE));
+
+ txre->bytecnt = cpu_to_be16(count);
+ txre->shadow = txre->bytecnt;
+ wmb(); /* ensure cmdstat is last field updated */
+ txre->cmdstat = cpu_to_be32(SDMA_DESC_CMDSTAT_O | SDMA_DESC_CMDSTAT_F |
+ SDMA_DESC_CMDSTAT_L | ((intr) ?
+ SDMA_DESC_CMDSTAT_EI
+ : 0));
+ wmb();
+ dma_cache_sync((void *) txre, MPSC_TXRE_SIZE, DMA_BIDIRECTIONAL);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ flush_dcache_range((ulong)txre,
+ (ulong)txre + MPSC_TXRE_SIZE);
+#endif
+
+ return;
+}
+
+static inline void
+mpsc_copy_tx_data(struct mpsc_port_info *pi)
+{
+ struct circ_buf *xmit = &pi->port.info->xmit;
+ u8 *bp;
+ u32 i;
+
+ /* Make sure the desc ring isn't full */
+ while (CIRC_CNT(pi->txr_head, pi->txr_tail, MPSC_TXR_ENTRIES) <
+ (MPSC_TXR_ENTRIES - 1)) {
+ if (pi->port.x_char) {
+ /*
+ * Ideally, we should use the TCS field in
+ * CHR_1 to put the x_char out immediately but
+			 * an erratum prevents us from being able to read
+			 * CHR_2 to know that it's safe to write to
+ * CHR_1. Instead, just put it in-band with
+ * all the other Tx data.
+ */
+ bp = pi->txb + (pi->txr_head * MPSC_TXBE_SIZE);
+ *bp = pi->port.x_char;
+ pi->port.x_char = 0;
+ i = 1;
+ }
+ else if (!uart_circ_empty(xmit) && !uart_tx_stopped(&pi->port)){
+ i = min((u32) MPSC_TXBE_SIZE,
+ (u32) uart_circ_chars_pending(xmit));
+ i = min(i, (u32) CIRC_CNT_TO_END(xmit->head, xmit->tail,
+ UART_XMIT_SIZE));
+ bp = pi->txb + (pi->txr_head * MPSC_TXBE_SIZE);
+ memcpy(bp, &xmit->buf[xmit->tail], i);
+ xmit->tail = (xmit->tail + i) & (UART_XMIT_SIZE - 1);
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&pi->port);
+ }
+ else /* All tx data copied into ring bufs */
+ return;
+
+ dma_cache_sync((void *) bp, MPSC_TXBE_SIZE, DMA_BIDIRECTIONAL);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ flush_dcache_range((ulong)bp,
+ (ulong)bp + MPSC_TXBE_SIZE);
+#endif
+ mpsc_setup_tx_desc(pi, i, 1);
+
+ /* Advance to next descriptor */
+ pi->txr_head = (pi->txr_head + 1) & (MPSC_TXR_ENTRIES - 1);
+ }
+
+ return;
+}
+
+static inline int
+mpsc_tx_intr(struct mpsc_port_info *pi)
+{
+ struct mpsc_tx_desc *txre;
+ int rc = 0;
+
+ if (!mpsc_sdma_tx_active(pi)) {
+ txre = (struct mpsc_tx_desc *)(pi->txr +
+ (pi->txr_tail * MPSC_TXRE_SIZE));
+
+ dma_cache_sync((void *) txre, MPSC_TXRE_SIZE, DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)txre,
+ (ulong)txre + MPSC_TXRE_SIZE);
+#endif
+
+ while (!(be32_to_cpu(txre->cmdstat) & SDMA_DESC_CMDSTAT_O)) {
+ rc = 1;
+ pi->port.icount.tx += be16_to_cpu(txre->bytecnt);
+ pi->txr_tail = (pi->txr_tail+1) & (MPSC_TXR_ENTRIES-1);
+
+ /* If no more data to tx, fall out of loop */
+ if (pi->txr_head == pi->txr_tail)
+ break;
+
+ txre = (struct mpsc_tx_desc *)(pi->txr +
+ (pi->txr_tail * MPSC_TXRE_SIZE));
+ dma_cache_sync((void *) txre, MPSC_TXRE_SIZE,
+ DMA_FROM_DEVICE);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ invalidate_dcache_range((ulong)txre,
+ (ulong)txre + MPSC_TXRE_SIZE);
+#endif
+ }
+
+ mpsc_copy_tx_data(pi);
+ mpsc_sdma_start_tx(pi); /* start next desc if ready */
+ }
+
+ return rc;
+}
+
+/*
+ * This is the driver's interrupt handler. To avoid a race, we first clear
+ * the interrupt, then handle any completed Rx/Tx descriptors. When done
+ * handling those descriptors, we restart the Rx/Tx engines if they're stopped.
+ */
+static irqreturn_t
+mpsc_sdma_intr(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct mpsc_port_info *pi = dev_id;
+ ulong iflags;
+ int rc = IRQ_NONE;
+
+ pr_debug("mpsc_sdma_intr[%d]: SDMA Interrupt Received\n",pi->port.line);
+
+ spin_lock_irqsave(&pi->port.lock, iflags);
+ mpsc_sdma_intr_ack(pi);
+ if (mpsc_rx_intr(pi, regs))
+ rc = IRQ_HANDLED;
+ if (mpsc_tx_intr(pi))
+ rc = IRQ_HANDLED;
+ spin_unlock_irqrestore(&pi->port.lock, iflags);
+
+ pr_debug("mpsc_sdma_intr[%d]: SDMA Interrupt Handled\n", pi->port.line);
+ return rc;
+}
+
+/*
+ ******************************************************************************
+ *
+ * serial_core.c Interface routines
+ *
+ ******************************************************************************
+ */
+static uint
+mpsc_tx_empty(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ ulong iflags;
+ uint rc;
+
+ spin_lock_irqsave(&pi->port.lock, iflags);
+ rc = mpsc_sdma_tx_active(pi) ? 0 : TIOCSER_TEMT;
+ spin_unlock_irqrestore(&pi->port.lock, iflags);
+
+ return rc;
+}
+
+static void
+mpsc_set_mctrl(struct uart_port *port, uint mctrl)
+{
+ /* Have no way to set modem control lines AFAICT */
+ return;
+}
+
+static uint
+mpsc_get_mctrl(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ u32 mflags, status;
+ ulong iflags;
+
+ spin_lock_irqsave(&pi->port.lock, iflags);
+ status = (pi->mirror_regs) ? pi->MPSC_CHR_10_m :
+ readl(pi->mpsc_base + MPSC_CHR_10);
+ spin_unlock_irqrestore(&pi->port.lock, iflags);
+
+ mflags = 0;
+ if (status & 0x1)
+ mflags |= TIOCM_CTS;
+ if (status & 0x2)
+ mflags |= TIOCM_CAR;
+
+ return mflags | TIOCM_DSR; /* No way to tell if DSR asserted */
+}
+
+static void
+mpsc_stop_tx(struct uart_port *port, uint tty_start)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+
+ pr_debug("mpsc_stop_tx[%d]: tty_start: %d\n", port->line, tty_start);
+
+ mpsc_freeze(pi);
+ return;
+}
+
+static void
+mpsc_start_tx(struct uart_port *port, uint tty_start)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+
+ mpsc_unfreeze(pi);
+ mpsc_copy_tx_data(pi);
+ mpsc_sdma_start_tx(pi);
+
+ pr_debug("mpsc_start_tx[%d]: tty_start: %d\n", port->line, tty_start);
+ return;
+}
+
+static void
+mpsc_start_rx(struct mpsc_port_info *pi)
+{
+ pr_debug("mpsc_start_rx[%d]: Starting...\n", pi->port.line);
+
+ if (pi->rcv_data) {
+ mpsc_enter_hunt(pi);
+ mpsc_sdma_cmd(pi, SDMA_SDCM_ERD);
+ }
+ return;
+}
+
+static void
+mpsc_stop_rx(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+
+ pr_debug("mpsc_stop_rx[%d]: Stopping...\n", port->line);
+
+ mpsc_sdma_cmd(pi, SDMA_SDCM_AR);
+ return;
+}
+
+static void
+mpsc_enable_ms(struct uart_port *port)
+{
+ return; /* Not supported */
+}
+
+static void
+mpsc_break_ctl(struct uart_port *port, int ctl)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ ulong flags;
+ u32 v;
+
+ v = ctl ? 0x00ff0000 : 0;
+
+ spin_lock_irqsave(&pi->port.lock, flags);
+ if (pi->mirror_regs)
+ pi->MPSC_CHR_1_m = v;
+ writel(v, pi->mpsc_base + MPSC_CHR_1);
+ spin_unlock_irqrestore(&pi->port.lock, flags);
+
+ return;
+}
+
+static int
+mpsc_startup(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ u32 flag = 0;
+ int rc;
+
+ pr_debug("mpsc_startup[%d]: Starting up MPSC, irq: %d\n",
+ port->line, pi->port.irq);
+
+ if ((rc = mpsc_make_ready(pi)) == 0) {
+ /* Setup IRQ handler */
+ mpsc_sdma_intr_ack(pi);
+
+ /* If irq's are shared, need to set flag */
+ if (mpsc_ports[0].port.irq == mpsc_ports[1].port.irq)
+ flag = SA_SHIRQ;
+
+ if (request_irq(pi->port.irq, mpsc_sdma_intr, flag,
+ "mpsc/sdma", pi))
+ printk(KERN_ERR "MPSC: Can't get SDMA IRQ %d\n",
+ pi->port.irq);
+
+ mpsc_sdma_intr_unmask(pi, 0xf);
+ mpsc_sdma_set_rx_ring(pi, (struct mpsc_rx_desc *)(pi->rxr_p +
+ (pi->rxr_posn * MPSC_RXRE_SIZE)));
+ }
+
+ return rc;
+}
+
+static void
+mpsc_shutdown(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+
+ pr_debug("mpsc_shutdown[%d]: Shutting down MPSC\n", port->line);
+
+ mpsc_sdma_stop(pi);
+ free_irq(pi->port.irq, pi);
+ return;
+}
+
+static void
+mpsc_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ u32 baud;
+ ulong flags;
+ u32 chr_bits, stop_bits, par;
+
+ pi->c_iflag = termios->c_iflag;
+ pi->c_cflag = termios->c_cflag;
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ chr_bits = MPSC_MPCR_CL_5;
+ break;
+ case CS6:
+ chr_bits = MPSC_MPCR_CL_6;
+ break;
+ case CS7:
+ chr_bits = MPSC_MPCR_CL_7;
+ break;
+ case CS8:
+ default:
+ chr_bits = MPSC_MPCR_CL_8;
+ break;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ stop_bits = MPSC_MPCR_SBL_2;
+ else
+ stop_bits = MPSC_MPCR_SBL_1;
+
+ par = MPSC_CHR_2_PAR_EVEN;
+ if (termios->c_cflag & PARENB)
+ if (termios->c_cflag & PARODD)
+ par = MPSC_CHR_2_PAR_ODD;
+#ifdef CMSPAR
+ if (termios->c_cflag & CMSPAR) {
+ if (termios->c_cflag & PARODD)
+ par = MPSC_CHR_2_PAR_MARK;
+ else
+ par = MPSC_CHR_2_PAR_SPACE;
+ }
+#endif
+
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk);
+
+ spin_lock_irqsave(&pi->port.lock, flags);
+
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ mpsc_set_char_length(pi, chr_bits);
+ mpsc_set_stop_bit_length(pi, stop_bits);
+ mpsc_set_parity(pi, par);
+ mpsc_set_baudrate(pi, baud);
+
+ /* Characters/events to read */
+ pi->rcv_data = 1;
+ pi->port.read_status_mask = SDMA_DESC_CMDSTAT_OR;
+
+ if (termios->c_iflag & INPCK)
+ pi->port.read_status_mask |= SDMA_DESC_CMDSTAT_PE |
+ SDMA_DESC_CMDSTAT_FR;
+
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ pi->port.read_status_mask |= SDMA_DESC_CMDSTAT_BR;
+
+ /* Characters/events to ignore */
+ pi->port.ignore_status_mask = 0;
+
+ if (termios->c_iflag & IGNPAR)
+ pi->port.ignore_status_mask |= SDMA_DESC_CMDSTAT_PE |
+ SDMA_DESC_CMDSTAT_FR;
+
+ if (termios->c_iflag & IGNBRK) {
+ pi->port.ignore_status_mask |= SDMA_DESC_CMDSTAT_BR;
+
+ if (termios->c_iflag & IGNPAR)
+ pi->port.ignore_status_mask |= SDMA_DESC_CMDSTAT_OR;
+ }
+
+ /* Ignore all chars if CREAD not set */
+ if (!(termios->c_cflag & CREAD))
+ pi->rcv_data = 0;
+ else
+ mpsc_start_rx(pi);
+
+ spin_unlock_irqrestore(&pi->port.lock, flags);
+ return;
+}
+
+static const char *
+mpsc_type(struct uart_port *port)
+{
+ pr_debug("mpsc_type[%d]: port type: %s\n", port->line,MPSC_DRIVER_NAME);
+ return MPSC_DRIVER_NAME;
+}
+
+static int
+mpsc_request_port(struct uart_port *port)
+{
+ /* Should make chip/platform specific call */
+ return 0;
+}
+
+static void
+mpsc_release_port(struct uart_port *port)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+
+ if (pi->ready) {
+ mpsc_uninit_rings(pi);
+ mpsc_free_ring_mem(pi);
+ pi->ready = 0;
+ }
+
+ return;
+}
+
+static void
+mpsc_config_port(struct uart_port *port, int flags)
+{
+ return;
+}
+
+static int
+mpsc_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ struct mpsc_port_info *pi = (struct mpsc_port_info *)port;
+ int rc = 0;
+
+ pr_debug("mpsc_verify_port[%d]: Verifying port data\n", pi->port.line);
+
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_MPSC)
+ rc = -EINVAL;
+ else if (pi->port.irq != ser->irq)
+ rc = -EINVAL;
+ else if (ser->io_type != SERIAL_IO_MEM)
+ rc = -EINVAL;
+ else if (pi->port.uartclk / 16 != ser->baud_base) /* Not sure */
+ rc = -EINVAL;
+ else if ((void *)pi->port.mapbase != ser->iomem_base)
+ rc = -EINVAL;
+ else if (pi->port.iobase != ser->port)
+ rc = -EINVAL;
+ else if (ser->hub6 != 0)
+ rc = -EINVAL;
+
+ return rc;
+}
+
+static struct uart_ops mpsc_pops = {
+ .tx_empty = mpsc_tx_empty,
+ .set_mctrl = mpsc_set_mctrl,
+ .get_mctrl = mpsc_get_mctrl,
+ .stop_tx = mpsc_stop_tx,
+ .start_tx = mpsc_start_tx,
+ .stop_rx = mpsc_stop_rx,
+ .enable_ms = mpsc_enable_ms,
+ .break_ctl = mpsc_break_ctl,
+ .startup = mpsc_startup,
+ .shutdown = mpsc_shutdown,
+ .set_termios = mpsc_set_termios,
+ .type = mpsc_type,
+ .release_port = mpsc_release_port,
+ .request_port = mpsc_request_port,
+ .config_port = mpsc_config_port,
+ .verify_port = mpsc_verify_port,
+};
+
+/*
+ ******************************************************************************
+ *
+ * Console Interface Routines
+ *
+ ******************************************************************************
+ */
+
+#ifdef CONFIG_SERIAL_MPSC_CONSOLE
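+/*
+ * Polled console write.  Roughly: copy the string, one buffer-sized chunk
+ * at a time, into the next Tx ring entry (inserting '\r' after each '\n'),
+ * hand the descriptor to the SDMA engine, and busy-wait for the transfer
+ * to complete before advancing to the next ring entry.
+ */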
+static void
+mpsc_console_write(struct console *co, const char *s, uint count)
+{
+ struct mpsc_port_info *pi = &mpsc_ports[co->index];
+ u8 *bp, *dp, add_cr = 0;
+ int i;
+
+ while (mpsc_sdma_tx_active(pi))
+ udelay(100);
+
+ while (count > 0) {
+ bp = dp = pi->txb + (pi->txr_head * MPSC_TXBE_SIZE);
+
+ for (i = 0; i < MPSC_TXBE_SIZE; i++) {
+ if (count == 0)
+ break;
+
+ if (add_cr) {
+ *(dp++) = '\r';
+ add_cr = 0;
+ }
+ else {
+ *(dp++) = *s;
+
+ if (*(s++) == '\n') { /* add '\r' after '\n' */
+ add_cr = 1;
+ count++;
+ }
+ }
+
+ count--;
+ }
+
+ dma_cache_sync((void *) bp, MPSC_TXBE_SIZE, DMA_BIDIRECTIONAL);
+#if defined(CONFIG_PPC32) && !defined(CONFIG_NOT_COHERENT_CACHE)
+ if (pi->cache_mgmt) /* GT642[46]0 Res #COMM-2 */
+ flush_dcache_range((ulong)bp,
+ (ulong)bp + MPSC_TXBE_SIZE);
+#endif
+ mpsc_setup_tx_desc(pi, i, 0);
+ pi->txr_head = (pi->txr_head + 1) & (MPSC_TXR_ENTRIES - 1);
+ mpsc_sdma_start_tx(pi);
+
+ while (mpsc_sdma_tx_active(pi))
+ udelay(100);
+
+ pi->txr_tail = (pi->txr_tail + 1) & (MPSC_TXR_ENTRIES - 1);
+ }
+
+ return;
+}
+
+static int __init
+mpsc_console_setup(struct console *co, char *options)
+{
+ struct mpsc_port_info *pi;
+ int baud, bits, parity, flow;
+
+ pr_debug("mpsc_console_setup[%d]: options: %s\n", co->index, options);
+
+ if (co->index >= MPSC_NUM_CTLRS)
+ co->index = 0;
+
+ pi = &mpsc_ports[co->index];
+
+ baud = pi->default_baud;
+ bits = pi->default_bits;
+ parity = pi->default_parity;
+ flow = pi->default_flow;
+
+ if (!pi->port.ops)
+ return -ENODEV;
+
+ spin_lock_init(&pi->port.lock); /* Temporary fix--copied from 8250.c */
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(&pi->port, co, baud, parity, bits, flow);
+}
+
+extern struct uart_driver mpsc_reg;
+static struct console mpsc_console = {
+ .name = MPSC_DEV_NAME,
+ .write = mpsc_console_write,
+ .device = uart_console_device,
+ .setup = mpsc_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &mpsc_reg,
+};
+
+static int __init
+mpsc_late_console_init(void)
+{
+ pr_debug("mpsc_late_console_init: Enter\n");
+
+ if (!(mpsc_console.flags & CON_ENABLED))
+ register_console(&mpsc_console);
+ return 0;
+}
+
+late_initcall(mpsc_late_console_init);
+
+#define MPSC_CONSOLE &mpsc_console
+#else
+#define MPSC_CONSOLE NULL
+#endif
+/*
+ ******************************************************************************
+ *
+ * Dummy Platform Driver to extract & map shared register regions
+ *
+ ******************************************************************************
+ */
+static void
+mpsc_resource_err(char *s)
+{
+ printk(KERN_WARNING "MPSC: Platform device resource error in %s\n", s);
+ return;
+}
+
+static int
+mpsc_shared_map_regs(struct platform_device *pd)
+{
+ struct resource *r;
+
+ if ((r = platform_get_resource(pd, IORESOURCE_MEM,
+ MPSC_ROUTING_BASE_ORDER)) && request_mem_region(r->start,
+ MPSC_ROUTING_REG_BLOCK_SIZE, "mpsc_routing_regs")) {
+
+ mpsc_shared_regs.mpsc_routing_base = ioremap(r->start,
+ MPSC_ROUTING_REG_BLOCK_SIZE);
+ mpsc_shared_regs.mpsc_routing_base_p = r->start;
+ }
+ else {
+ mpsc_resource_err("MPSC routing base");
+ return -ENOMEM;
+ }
+
+ if ((r = platform_get_resource(pd, IORESOURCE_MEM,
+ MPSC_SDMA_INTR_BASE_ORDER)) && request_mem_region(r->start,
+ MPSC_SDMA_INTR_REG_BLOCK_SIZE, "sdma_intr_regs")) {
+
+ mpsc_shared_regs.sdma_intr_base = ioremap(r->start,
+ MPSC_SDMA_INTR_REG_BLOCK_SIZE);
+ mpsc_shared_regs.sdma_intr_base_p = r->start;
+ }
+ else {
+ iounmap(mpsc_shared_regs.mpsc_routing_base);
+ release_mem_region(mpsc_shared_regs.mpsc_routing_base_p,
+ MPSC_ROUTING_REG_BLOCK_SIZE);
+ mpsc_resource_err("SDMA intr base");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void
+mpsc_shared_unmap_regs(void)
+{
+ if (mpsc_shared_regs.mpsc_routing_base) {
+ iounmap(mpsc_shared_regs.mpsc_routing_base);
+ release_mem_region(mpsc_shared_regs.mpsc_routing_base_p,
+ MPSC_ROUTING_REG_BLOCK_SIZE);
+ }
+ if (mpsc_shared_regs.sdma_intr_base) {
+ iounmap(mpsc_shared_regs.sdma_intr_base);
+ release_mem_region(mpsc_shared_regs.sdma_intr_base_p,
+ MPSC_SDMA_INTR_REG_BLOCK_SIZE);
+ }
+
+ mpsc_shared_regs.mpsc_routing_base = 0;
+ mpsc_shared_regs.sdma_intr_base = 0;
+
+ mpsc_shared_regs.mpsc_routing_base_p = 0;
+ mpsc_shared_regs.sdma_intr_base_p = 0;
+
+ return;
+}
+
+static int
+mpsc_shared_drv_probe(struct device *dev)
+{
+ struct platform_device *pd = to_platform_device(dev);
+ struct mpsc_shared_pdata *pdata;
+ int rc = -ENODEV;
+
+ if (pd->id == 0) {
+ if (!(rc = mpsc_shared_map_regs(pd))) {
+ pdata = (struct mpsc_shared_pdata *)dev->platform_data;
+
+ mpsc_shared_regs.MPSC_MRR_m = pdata->mrr_val;
+ mpsc_shared_regs.MPSC_RCRR_m= pdata->rcrr_val;
+ mpsc_shared_regs.MPSC_TCRR_m= pdata->tcrr_val;
+ mpsc_shared_regs.SDMA_INTR_CAUSE_m =
+ pdata->intr_cause_val;
+ mpsc_shared_regs.SDMA_INTR_MASK_m =
+ pdata->intr_mask_val;
+
+ rc = 0;
+ }
+ }
+
+ return rc;
+}
+
+static int
+mpsc_shared_drv_remove(struct device *dev)
+{
+ struct platform_device *pd = to_platform_device(dev);
+ int rc = -ENODEV;
+
+ if (pd->id == 0) {
+ mpsc_shared_unmap_regs();
+ mpsc_shared_regs.MPSC_MRR_m = 0;
+ mpsc_shared_regs.MPSC_RCRR_m = 0;
+ mpsc_shared_regs.MPSC_TCRR_m = 0;
+ mpsc_shared_regs.SDMA_INTR_CAUSE_m = 0;
+ mpsc_shared_regs.SDMA_INTR_MASK_m = 0;
+ rc = 0;
+ }
+
+ return rc;
+}
+
+static struct device_driver mpsc_shared_driver = {
+ .name = MPSC_SHARED_NAME,
+ .bus = &platform_bus_type,
+ .probe = mpsc_shared_drv_probe,
+ .remove = mpsc_shared_drv_remove,
+};
+
+/*
+ ******************************************************************************
+ *
+ * Driver Interface Routines
+ *
+ ******************************************************************************
+ */
+static struct uart_driver mpsc_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = MPSC_DRIVER_NAME,
+ .devfs_name = MPSC_DEVFS_NAME,
+ .dev_name = MPSC_DEV_NAME,
+ .major = MPSC_MAJOR,
+ .minor = MPSC_MINOR_START,
+ .nr = MPSC_NUM_CTLRS,
+ .cons = MPSC_CONSOLE,
+};
+
+static int
+mpsc_drv_map_regs(struct mpsc_port_info *pi, struct platform_device *pd)
+{
+ struct resource *r;
+
+ if ((r = platform_get_resource(pd, IORESOURCE_MEM, MPSC_BASE_ORDER)) &&
+ request_mem_region(r->start, MPSC_REG_BLOCK_SIZE, "mpsc_regs")){
+
+ pi->mpsc_base = ioremap(r->start, MPSC_REG_BLOCK_SIZE);
+ pi->mpsc_base_p = r->start;
+ }
+ else {
+ mpsc_resource_err("MPSC base");
+ return -ENOMEM;
+ }
+
+ if ((r = platform_get_resource(pd, IORESOURCE_MEM,
+ MPSC_SDMA_BASE_ORDER)) && request_mem_region(r->start,
+ MPSC_SDMA_REG_BLOCK_SIZE, "sdma_regs")) {
+
+ pi->sdma_base = ioremap(r->start,MPSC_SDMA_REG_BLOCK_SIZE);
+ pi->sdma_base_p = r->start;
+ }
+ else {
+ mpsc_resource_err("SDMA base");
+ return -ENOMEM;
+ }
+
+ if ((r = platform_get_resource(pd,IORESOURCE_MEM,MPSC_BRG_BASE_ORDER))
+ && request_mem_region(r->start, MPSC_BRG_REG_BLOCK_SIZE,
+ "brg_regs")) {
+
+ pi->brg_base = ioremap(r->start, MPSC_BRG_REG_BLOCK_SIZE);
+ pi->brg_base_p = r->start;
+ }
+ else {
+ mpsc_resource_err("BRG base");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void
+mpsc_drv_unmap_regs(struct mpsc_port_info *pi)
+{
+ if (pi->mpsc_base) {
+ iounmap(pi->mpsc_base);
+ release_mem_region(pi->mpsc_base_p, MPSC_REG_BLOCK_SIZE);
+ }
+ if (pi->sdma_base) {
+ iounmap(pi->sdma_base);
+ release_mem_region(pi->sdma_base_p, MPSC_SDMA_REG_BLOCK_SIZE);
+ }
+ if (pi->brg_base) {
+ iounmap(pi->brg_base);
+ release_mem_region(pi->brg_base_p, MPSC_BRG_REG_BLOCK_SIZE);
+ }
+
+ pi->mpsc_base = 0;
+ pi->sdma_base = 0;
+ pi->brg_base = 0;
+
+ pi->mpsc_base_p = 0;
+ pi->sdma_base_p = 0;
+ pi->brg_base_p = 0;
+
+ return;
+}
+
+static void
+mpsc_drv_get_platform_data(struct mpsc_port_info *pi,
+ struct platform_device *pd, int num)
+{
+ struct mpsc_pdata *pdata;
+
+ pdata = (struct mpsc_pdata *)pd->dev.platform_data;
+
+ pi->port.uartclk = pdata->brg_clk_freq;
+ pi->port.iotype = UPIO_MEM;
+ pi->port.line = num;
+ pi->port.type = PORT_MPSC;
+ pi->port.fifosize = MPSC_TXBE_SIZE;
+ pi->port.membase = pi->mpsc_base;
+ pi->port.mapbase = (ulong)pi->mpsc_base;
+ pi->port.ops = &mpsc_pops;
+
+ pi->mirror_regs = pdata->mirror_regs;
+ pi->cache_mgmt = pdata->cache_mgmt;
+ pi->brg_can_tune = pdata->brg_can_tune;
+ pi->brg_clk_src = pdata->brg_clk_src;
+ pi->mpsc_max_idle = pdata->max_idle;
+ pi->default_baud = pdata->default_baud;
+ pi->default_bits = pdata->default_bits;
+ pi->default_parity = pdata->default_parity;
+ pi->default_flow = pdata->default_flow;
+
+ /* Initial values of mirrored regs */
+ pi->MPSC_CHR_1_m = pdata->chr_1_val;
+ pi->MPSC_CHR_2_m = pdata->chr_2_val;
+ pi->MPSC_CHR_10_m = pdata->chr_10_val;
+ pi->MPSC_MPCR_m = pdata->mpcr_val;
+ pi->BRG_BCR_m = pdata->bcr_val;
+
+ pi->shared_regs = &mpsc_shared_regs;
+
+ pi->port.irq = platform_get_irq(pd, 0);
+
+ return;
+}
+
+static int
+mpsc_drv_probe(struct device *dev)
+{
+ struct platform_device *pd = to_platform_device(dev);
+ struct mpsc_port_info *pi;
+ int rc = -ENODEV;
+
+ pr_debug("mpsc_drv_probe: Adding MPSC %d\n", pd->id);
+
+ if (pd->id < MPSC_NUM_CTLRS) {
+ pi = &mpsc_ports[pd->id];
+
+ if (!(rc = mpsc_drv_map_regs(pi, pd))) {
+ mpsc_drv_get_platform_data(pi, pd, pd->id);
+
+ if (!(rc = mpsc_make_ready(pi)))
+ if (!(rc = uart_add_one_port(&mpsc_reg,
+ &pi->port)))
+ rc = 0;
+ else {
+ mpsc_release_port(
+ (struct uart_port *)pi);
+ mpsc_drv_unmap_regs(pi);
+ }
+ else
+ mpsc_drv_unmap_regs(pi);
+ }
+ }
+
+ return rc;
+}
+
+static int
+mpsc_drv_remove(struct device *dev)
+{
+ struct platform_device *pd = to_platform_device(dev);
+
+ pr_debug("mpsc_drv_exit: Removing MPSC %d\n", pd->id);
+
+ if (pd->id < MPSC_NUM_CTLRS) {
+ uart_remove_one_port(&mpsc_reg, &mpsc_ports[pd->id].port);
+ mpsc_release_port((struct uart_port *)&mpsc_ports[pd->id].port);
+ mpsc_drv_unmap_regs(&mpsc_ports[pd->id]);
+ return 0;
+ }
+ else
+ return -ENODEV;
+}
+
+static struct device_driver mpsc_driver = {
+ .name = MPSC_CTLR_NAME,
+ .bus = &platform_bus_type,
+ .probe = mpsc_drv_probe,
+ .remove = mpsc_drv_remove,
+};
+
+static int __init
+mpsc_drv_init(void)
+{
+ int rc;
+
+ printk(KERN_INFO "Serial: MPSC driver $Revision: 1.00 $\n");
+
+ memset(mpsc_ports, 0, sizeof(mpsc_ports));
+ memset(&mpsc_shared_regs, 0, sizeof(mpsc_shared_regs));
+
+ if (!(rc = uart_register_driver(&mpsc_reg))) {
+ if (!(rc = driver_register(&mpsc_shared_driver))) {
+ if ((rc = driver_register(&mpsc_driver))) {
+ driver_unregister(&mpsc_shared_driver);
+ uart_unregister_driver(&mpsc_reg);
+ }
+ }
+ else
+ uart_unregister_driver(&mpsc_reg);
+ }
+
+ return rc;
+}
+
+static void __exit
+mpsc_drv_exit(void)
+{
+ driver_unregister(&mpsc_driver);
+ driver_unregister(&mpsc_shared_driver);
+ uart_unregister_driver(&mpsc_reg);
+ memset(mpsc_ports, 0, sizeof(mpsc_ports));
+ memset(&mpsc_shared_regs, 0, sizeof(mpsc_shared_regs));
+ return;
+}
+
+module_init(mpsc_drv_init);
+module_exit(mpsc_drv_exit);
+
+MODULE_AUTHOR("Mark A. Greer <mgreer@mvista.com>");
+MODULE_DESCRIPTION("Generic Marvell MPSC serial/UART driver $Revision: 1.00 $");
+MODULE_VERSION(MPSC_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CHARDEV_MAJOR(MPSC_MAJOR);
--- /dev/null
+/*
+ * drivers/serial/mpsc.h
+ *
+ * Author: Mark A. Greer <mgreer@mvista.com>
+ *
+ * 2004 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __MPSC_H__
+#define __MPSC_H__
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/mv643xx.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#if defined(CONFIG_SERIAL_MPSC_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#define MPSC_NUM_CTLRS 2
+
+/*
+ * Descriptors and buffers must be cache line aligned.
+ * Buffers lengths must be multiple of cache line size.
+ * Number of Tx & Rx descriptors must be powers of 2.
+ */
+#define MPSC_RXR_ENTRIES 32
+#define MPSC_RXRE_SIZE dma_get_cache_alignment()
+#define MPSC_RXR_SIZE (MPSC_RXR_ENTRIES * MPSC_RXRE_SIZE)
+#define MPSC_RXBE_SIZE dma_get_cache_alignment()
+#define MPSC_RXB_SIZE (MPSC_RXR_ENTRIES * MPSC_RXBE_SIZE)
+
+#define MPSC_TXR_ENTRIES 32
+#define MPSC_TXRE_SIZE dma_get_cache_alignment()
+#define MPSC_TXR_SIZE (MPSC_TXR_ENTRIES * MPSC_TXRE_SIZE)
+#define MPSC_TXBE_SIZE dma_get_cache_alignment()
+#define MPSC_TXB_SIZE (MPSC_TXR_ENTRIES * MPSC_TXBE_SIZE)
+
+#define MPSC_DMA_ALLOC_SIZE (MPSC_RXR_SIZE + MPSC_RXB_SIZE + \
+ MPSC_TXR_SIZE + MPSC_TXB_SIZE + \
+ dma_get_cache_alignment() /* for alignment */)
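+
+/*
+ * For example, with a 32-byte cache line each ring/buffer area above is
+ * 32 entries * 32 bytes = 1024 bytes, so MPSC_DMA_ALLOC_SIZE works out to
+ * 4 * 1024 + 32 = 4128 bytes per port (the extra line is for alignment).
+ */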
+
+/* Rx and Tx Ring entry descriptors -- assume entry size is <= cacheline size */
+struct mpsc_rx_desc {
+ u16 bufsize;
+ u16 bytecnt;
+ u32 cmdstat;
+ u32 link;
+ u32 buf_ptr;
+} __attribute((packed));
+
+struct mpsc_tx_desc {
+ u16 bytecnt;
+ u16 shadow;
+ u32 cmdstat;
+ u32 link;
+ u32 buf_ptr;
+} __attribute((packed));
+
+/*
+ * Some regs affected by the "can't read them" erratum are shared
+ * between the two MPSC controllers. This struct contains those shared regs.
+ */
+struct mpsc_shared_regs {
+ phys_addr_t mpsc_routing_base_p;
+ phys_addr_t sdma_intr_base_p;
+
+ void *mpsc_routing_base;
+ void *sdma_intr_base;
+
+ u32 MPSC_MRR_m;
+ u32 MPSC_RCRR_m;
+ u32 MPSC_TCRR_m;
+ u32 SDMA_INTR_CAUSE_m;
+ u32 SDMA_INTR_MASK_m;
+};
+
+/* The main driver data structure */
+struct mpsc_port_info {
+ struct uart_port port; /* Overlay uart_port structure */
+
+ /* Internal driver state for this ctlr */
+ u8 ready;
+ u8 rcv_data;
+ tcflag_t c_iflag; /* save termios->c_iflag */
+ tcflag_t c_cflag; /* save termios->c_cflag */
+
+ /* Info passed in from platform */
+ u8 mirror_regs; /* Need to mirror regs? */
+ u8 cache_mgmt; /* Need manual cache mgmt? */
+ u8 brg_can_tune; /* BRG has baud tuning? */
+ u32 brg_clk_src;
+ u16 mpsc_max_idle;
+ int default_baud;
+ int default_bits;
+ int default_parity;
+ int default_flow;
+
+ /* Physical addresses of various blocks of registers (from platform) */
+ phys_addr_t mpsc_base_p;
+ phys_addr_t sdma_base_p;
+ phys_addr_t brg_base_p;
+
+ /* Virtual addresses of various blocks of registers (from platform) */
+ void *mpsc_base;
+ void *sdma_base;
+ void *brg_base;
+
+ /* Descriptor ring and buffer allocations */
+ void *dma_region;
+ dma_addr_t dma_region_p;
+
+ dma_addr_t rxr; /* Rx descriptor ring */
+ dma_addr_t rxr_p; /* Phys addr of rxr */
+ u8 *rxb; /* Rx Ring I/O buf */
+ u8 *rxb_p; /* Phys addr of rxb */
+ u32 rxr_posn; /* First desc w/ Rx data */
+
+ dma_addr_t txr; /* Tx descriptor ring */
+ dma_addr_t txr_p; /* Phys addr of txr */
+ u8 *txb; /* Tx Ring I/O buf */
+ u8 *txb_p; /* Phys addr of txb */
+ int txr_head; /* Where new data goes */
+ int txr_tail; /* Where sent data comes off */
+
+ /* Mirrored values of regs we can't read (if 'mirror_regs' set) */
+ u32 MPSC_MPCR_m;
+ u32 MPSC_CHR_1_m;
+ u32 MPSC_CHR_2_m;
+ u32 MPSC_CHR_10_m;
+ u32 BRG_BCR_m;
+ struct mpsc_shared_regs *shared_regs;
+};
+
+/* Hooks to platform-specific code */
+int mpsc_platform_register_driver(void);
+void mpsc_platform_unregister_driver(void);
+
+/* Hooks back in to mpsc common to be called by platform-specific code */
+struct mpsc_port_info *mpsc_device_probe(int index);
+struct mpsc_port_info *mpsc_device_remove(int index);
+
+/*
+ *****************************************************************************
+ *
+ * Multi-Protocol Serial Controller Interface Registers
+ *
+ *****************************************************************************
+ */
+
+/* Main Configuration Register Offsets */
+#define MPSC_MMCRL 0x0000
+#define MPSC_MMCRH 0x0004
+#define MPSC_MPCR 0x0008
+#define MPSC_CHR_1 0x000c
+#define MPSC_CHR_2 0x0010
+#define MPSC_CHR_3 0x0014
+#define MPSC_CHR_4 0x0018
+#define MPSC_CHR_5 0x001c
+#define MPSC_CHR_6 0x0020
+#define MPSC_CHR_7 0x0024
+#define MPSC_CHR_8 0x0028
+#define MPSC_CHR_9 0x002c
+#define MPSC_CHR_10 0x0030
+#define MPSC_CHR_11 0x0034
+
+#define MPSC_MPCR_FRZ (1 << 9)
+#define MPSC_MPCR_CL_5 0
+#define MPSC_MPCR_CL_6 1
+#define MPSC_MPCR_CL_7 2
+#define MPSC_MPCR_CL_8 3
+#define MPSC_MPCR_SBL_1 0
+#define MPSC_MPCR_SBL_2 1
+
+#define MPSC_CHR_2_TEV (1<<1)
+#define MPSC_CHR_2_TA (1<<7)
+#define MPSC_CHR_2_TTCS (1<<9)
+#define MPSC_CHR_2_REV (1<<17)
+#define MPSC_CHR_2_RA (1<<23)
+#define MPSC_CHR_2_CRD (1<<25)
+#define MPSC_CHR_2_EH (1<<31)
+#define MPSC_CHR_2_PAR_ODD 0
+#define MPSC_CHR_2_PAR_SPACE 1
+#define MPSC_CHR_2_PAR_EVEN 2
+#define MPSC_CHR_2_PAR_MARK 3
+
+/* MPSC Signal Routing */
+#define MPSC_MRR 0x0000
+#define MPSC_RCRR 0x0004
+#define MPSC_TCRR 0x0008
+
+/*
+ *****************************************************************************
+ *
+ * Serial DMA Controller Interface Registers
+ *
+ *****************************************************************************
+ */
+
+#define SDMA_SDC 0x0000
+#define SDMA_SDCM 0x0008
+#define SDMA_RX_DESC 0x0800
+#define SDMA_RX_BUF_PTR 0x0808
+#define SDMA_SCRDP 0x0810
+#define SDMA_TX_DESC 0x0c00
+#define SDMA_SCTDP 0x0c10
+#define SDMA_SFTDP 0x0c14
+
+#define SDMA_DESC_CMDSTAT_PE (1<<0)
+#define SDMA_DESC_CMDSTAT_CDL (1<<1)
+#define SDMA_DESC_CMDSTAT_FR (1<<3)
+#define SDMA_DESC_CMDSTAT_OR (1<<6)
+#define SDMA_DESC_CMDSTAT_BR (1<<9)
+#define SDMA_DESC_CMDSTAT_MI (1<<10)
+#define SDMA_DESC_CMDSTAT_A (1<<11)
+#define SDMA_DESC_CMDSTAT_AM (1<<12)
+#define SDMA_DESC_CMDSTAT_CT (1<<13)
+#define SDMA_DESC_CMDSTAT_C (1<<14)
+#define SDMA_DESC_CMDSTAT_ES (1<<15)
+#define SDMA_DESC_CMDSTAT_L (1<<16)
+#define SDMA_DESC_CMDSTAT_F (1<<17)
+#define SDMA_DESC_CMDSTAT_P (1<<18)
+#define SDMA_DESC_CMDSTAT_EI (1<<23)
+#define SDMA_DESC_CMDSTAT_O (1<<31)
+
+#define SDMA_DESC_DFLT (SDMA_DESC_CMDSTAT_O | \
+ SDMA_DESC_CMDSTAT_EI)
+
+#define SDMA_SDC_RFT (1<<0)
+#define SDMA_SDC_SFM (1<<1)
+#define SDMA_SDC_BLMR (1<<6)
+#define SDMA_SDC_BLMT (1<<7)
+#define SDMA_SDC_POVR (1<<8)
+#define SDMA_SDC_RIFB (1<<9)
+
+#define SDMA_SDCM_ERD (1<<7)
+#define SDMA_SDCM_AR (1<<15)
+#define SDMA_SDCM_STD (1<<16)
+#define SDMA_SDCM_TXD (1<<23)
+#define SDMA_SDCM_AT (1<<31)
+
+#define SDMA_0_CAUSE_RXBUF (1<<0)
+#define SDMA_0_CAUSE_RXERR (1<<1)
+#define SDMA_0_CAUSE_TXBUF (1<<2)
+#define SDMA_0_CAUSE_TXEND (1<<3)
+#define SDMA_1_CAUSE_RXBUF (1<<8)
+#define SDMA_1_CAUSE_RXERR (1<<9)
+#define SDMA_1_CAUSE_TXBUF (1<<10)
+#define SDMA_1_CAUSE_TXEND (1<<11)
+
+#define SDMA_CAUSE_RX_MASK (SDMA_0_CAUSE_RXBUF | SDMA_0_CAUSE_RXERR | \
+ SDMA_1_CAUSE_RXBUF | SDMA_1_CAUSE_RXERR)
+#define SDMA_CAUSE_TX_MASK (SDMA_0_CAUSE_TXBUF | SDMA_0_CAUSE_TXEND | \
+ SDMA_1_CAUSE_TXBUF | SDMA_1_CAUSE_TXEND)
+
+/* SDMA Interrupt registers */
+#define SDMA_INTR_CAUSE 0x0000
+#define SDMA_INTR_MASK 0x0080
+
+/*
+ *****************************************************************************
+ *
+ * Baud Rate Generator Interface Registers
+ *
+ *****************************************************************************
+ */
+
+#define BRG_BCR 0x0000
+#define BRG_BTR 0x0004
+
+#endif /* __MPSC_H__ */
unsigned long flags;
unsigned int baud, quot;
unsigned int ulcon;
+ unsigned int umcon;
/*
* We don't support modem control lines.
*/
- termios->c_cflag &= ~(HUPCL | CRTSCTS | CMSPAR);
+ termios->c_cflag &= ~(HUPCL | CMSPAR);
termios->c_cflag |= CLOCAL;
/*
if (termios->c_cflag & CSTOPB)
ulcon |= S3C2410_LCON_STOPB;
+ umcon = (termios->c_cflag & CRTSCTS) ? S3C2410_UMCOM_AFC : 0;
+
if (termios->c_cflag & PARENB) {
- if (!(termios->c_cflag & PARODD))
+ if (termios->c_cflag & PARODD)
ulcon |= S3C2410_LCON_PODD;
else
ulcon |= S3C2410_LCON_PEVEN;
wr_regl(port, S3C2410_ULCON, ulcon);
wr_regl(port, S3C2410_UBRDIV, quot);
+ wr_regl(port, S3C2410_UMCON, umcon);
dbg("uart: ulcon = 0x%08x, ucon = 0x%08x, ufcon = 0x%08x\n",
rd_regl(port, S3C2410_ULCON),
/* Parameters that can be set with 'insmod' */
-/* Bit map of interrupts to choose from */
-static u_int irq_mask = 0xdeb8;
-static int irq_list[4];
-static unsigned int irq_list_count;
-
/* Enable the speaker? */
static int do_sound = 1;
/* Skip strict UART tests? */
static int buggy_uart;
-module_param(irq_mask, uint, 0444);
-module_param_array(irq_list, int, &irq_list_count, 0444);
module_param(do_sound, int, 0444);
module_param(buggy_uart, int, 0444);
struct serial_info *info;
client_reg_t client_reg;
dev_link_t *link;
- int i, ret;
+ int ret;
DEBUG(0, "serial_attach()\n");
link->io.Attributes1 = IO_DATA_PATH_WIDTH_8;
link->io.NumPorts1 = 8;
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (irq_list_count == 0)
- link->irq.IRQInfo2 = irq_mask;
- else
- for (i = 0; i < irq_list_count; i++)
- link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->conf.Attributes = CONF_ENABLE_IRQ;
if (do_sound) {
link->conf.Attributes |= CONF_ENABLE_SPKR;
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
/*====================================================================*/
-static int setup_serial(struct serial_info * info, ioaddr_t iobase, int irq)
+static int setup_serial(struct serial_info * info, kio_addr_t iobase, int irq)
{
struct uart_port port;
int line;
static int simple_config(dev_link_t *link)
{
- static ioaddr_t base[5] = { 0x3f8, 0x2f8, 0x3e8, 0x2e8, 0x0 };
+ static kio_addr_t base[5] = { 0x3f8, 0x2f8, 0x3e8, 0x2e8, 0x0 };
static int size_table[2] = { 8, 16 };
client_handle_t handle = link->handle;
struct serial_info *info = link->priv;
/* If the card is already configured, look up the port and irq */
i = pcmcia_get_configuration_info(handle, &config);
if ((i == CS_SUCCESS) && (config.Attributes & CONF_VALID_CLIENT)) {
- ioaddr_t port = 0;
+ kio_addr_t port = 0;
if ((config.BasePort2 != 0) && (config.NumPorts2 == 8)) {
port = config.BasePort2;
info->slave = 1;
static void __exit exit_serial_cs(void)
{
pcmcia_unregister_driver(&serial_cs_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- serial_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(init_serial_cs);
--- /dev/null
+/*
+ * drivers/serial/serial_txx9.c
+ *
+ * Derived from many drivers using generic_serial interface,
+ * especially serial_tx3912.c by Steven J. Hill and r39xx_serial.c
+ * (was in Linux/VR tree) by Jim Pick.
+ *
+ * Copyright (C) 1999 Harald Koerfgen
+ * Copyright (C) 2000 Jim Pick <jim@jimpick.com>
+ * Copyright (C) 2001 Steven J. Hill (sjhill@realitydiluted.com)
+ * Copyright (C) 2000-2002 Toshiba Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Serial driver for TX3927/TX4927/TX4925/TX4938 internal SIO controller
+ *
+ * Revision History:
+ * 0.30 Initial revision. (Renamed from serial_txx927.c)
+ * 0.31 Use save_flags instead of local_irq_save.
+ * 0.32 Support SCLK.
+ * 0.33 Switch TXX9_TTY_NAME by CONFIG_SERIAL_TXX9_STDSERIAL.
+ * Support TIOCSERGETLSR.
+ * 0.34 Support slow baudrate.
+ * 0.40 Merge codes from mainstream kernel (2.4.22).
+ * 0.41 Fix console checking in rs_shutdown_port().
+ * Disable flow-control in serial_console_write().
+ * 0.42 Fix minor compiler warning.
+ * 1.00 Kernel 2.6. Converted to new serial core (based on 8250.c).
+ * 1.01 Set fifosize so that tx_empty is called properly.
+ * Use standard uart_get_divisor.
+ * 1.02 Cleanup. (import 8250.c changes)
+ */
+#include <linux/config.h>
+
+#if defined(CONFIG_SERIAL_TXX9_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/module.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/serial_core.h>
+#include <linux/serial.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+static char *serial_version = "1.02";
+static char *serial_name = "TX39/49 Serial driver";
+
+#define PASS_LIMIT 256
+
+#if !defined(CONFIG_SERIAL_TXX9_STDSERIAL)
+/* "ttyS" is used for standard serial driver */
+#define TXX9_TTY_NAME "ttyTX"
+#define TXX9_TTY_DEVFS_NAME "tttx/"
+#define TXX9_TTY_MINOR_START (64 + 64) /* ttyTX0(128), ttyTX1(129) */
+#else
+/* acts like standard serial driver */
+#define TXX9_TTY_NAME "ttyS"
+#define TXX9_TTY_DEVFS_NAME "tts/"
+#define TXX9_TTY_MINOR_START 64
+#endif
+#define TXX9_TTY_MAJOR TTY_MAJOR
+
+/* flag aliases */
+#define UPF_TXX9_HAVE_CTS_LINE UPF_BUGGY_UART
+#define UPF_TXX9_USE_SCLK UPF_MAGIC_MULTIPLIER
+
+#ifdef CONFIG_PCI
+/* support for Toshiba TC86C001 SIO */
+#define ENABLE_SERIAL_TXX9_PCI
+#endif
+
+/*
+ * Number of serial ports
+ */
+#ifdef ENABLE_SERIAL_TXX9_PCI
+#define NR_PCI_BOARDS 4
+#define UART_NR (2 + NR_PCI_BOARDS)
+#else
+#define UART_NR 2
+#endif
+
+struct uart_txx9_port {
+ struct uart_port port;
+
+ /*
+ * We provide a per-port pm hook.
+ */
+ void (*pm)(struct uart_port *port,
+ unsigned int state, unsigned int old);
+};
+
+#define TXX9_REGION_SIZE 0x24
+
+/* TXX9 Serial Registers */
+#define TXX9_SILCR 0x00
+#define TXX9_SIDICR 0x04
+#define TXX9_SIDISR 0x08
+#define TXX9_SICISR 0x0c
+#define TXX9_SIFCR 0x10
+#define TXX9_SIFLCR 0x14
+#define TXX9_SIBGR 0x18
+#define TXX9_SITFIFO 0x1c
+#define TXX9_SIRFIFO 0x20
+
+/* SILCR : Line Control */
+#define TXX9_SILCR_SCS_MASK 0x00000060
+#define TXX9_SILCR_SCS_IMCLK 0x00000000
+#define TXX9_SILCR_SCS_IMCLK_BG 0x00000020
+#define TXX9_SILCR_SCS_SCLK 0x00000040
+#define TXX9_SILCR_SCS_SCLK_BG 0x00000060
+#define TXX9_SILCR_UEPS 0x00000010
+#define TXX9_SILCR_UPEN 0x00000008
+#define TXX9_SILCR_USBL_MASK 0x00000004
+#define TXX9_SILCR_USBL_1BIT 0x00000000
+#define TXX9_SILCR_USBL_2BIT 0x00000004
+#define TXX9_SILCR_UMODE_MASK 0x00000003
+#define TXX9_SILCR_UMODE_8BIT 0x00000000
+#define TXX9_SILCR_UMODE_7BIT 0x00000001
+
+/* SIDICR : DMA/Int. Control */
+#define TXX9_SIDICR_TDE 0x00008000
+#define TXX9_SIDICR_RDE 0x00004000
+#define TXX9_SIDICR_TIE 0x00002000
+#define TXX9_SIDICR_RIE 0x00001000
+#define TXX9_SIDICR_SPIE 0x00000800
+#define TXX9_SIDICR_CTSAC 0x00000600
+#define TXX9_SIDICR_STIE_MASK 0x0000003f
+#define TXX9_SIDICR_STIE_OERS 0x00000020
+#define TXX9_SIDICR_STIE_CTSS 0x00000010
+#define TXX9_SIDICR_STIE_RBRKD 0x00000008
+#define TXX9_SIDICR_STIE_TRDY 0x00000004
+#define TXX9_SIDICR_STIE_TXALS 0x00000002
+#define TXX9_SIDICR_STIE_UBRKD 0x00000001
+
+/* SIDISR : DMA/Int. Status */
+#define TXX9_SIDISR_UBRK 0x00008000
+#define TXX9_SIDISR_UVALID 0x00004000
+#define TXX9_SIDISR_UFER 0x00002000
+#define TXX9_SIDISR_UPER 0x00001000
+#define TXX9_SIDISR_UOER 0x00000800
+#define TXX9_SIDISR_ERI 0x00000400
+#define TXX9_SIDISR_TOUT 0x00000200
+#define TXX9_SIDISR_TDIS 0x00000100
+#define TXX9_SIDISR_RDIS 0x00000080
+#define TXX9_SIDISR_STIS 0x00000040
+#define TXX9_SIDISR_RFDN_MASK 0x0000001f
+
+/* SICISR : Change Int. Status */
+#define TXX9_SICISR_OERS 0x00000020
+#define TXX9_SICISR_CTSS 0x00000010
+#define TXX9_SICISR_RBRKD 0x00000008
+#define TXX9_SICISR_TRDY 0x00000004
+#define TXX9_SICISR_TXALS 0x00000002
+#define TXX9_SICISR_UBRKD 0x00000001
+
+/* SIFCR : FIFO Control */
+#define TXX9_SIFCR_SWRST 0x00008000
+#define TXX9_SIFCR_RDIL_MASK 0x00000180
+#define TXX9_SIFCR_RDIL_1 0x00000000
+#define TXX9_SIFCR_RDIL_4 0x00000080
+#define TXX9_SIFCR_RDIL_8 0x00000100
+#define TXX9_SIFCR_RDIL_12 0x00000180
+#define TXX9_SIFCR_RDIL_MAX 0x00000180
+#define TXX9_SIFCR_TDIL_MASK 0x00000018
+#define TXX9_SIFCR_TDIL_1 0x00000000
+#define TXX9_SIFCR_TDIL_4 0x00000001
+#define TXX9_SIFCR_TDIL_8 0x00000010
+#define TXX9_SIFCR_TDIL_MAX 0x00000010
+#define TXX9_SIFCR_TFRST 0x00000004
+#define TXX9_SIFCR_RFRST 0x00000002
+#define TXX9_SIFCR_FRSTE 0x00000001
+#define TXX9_SIO_TX_FIFO 8
+#define TXX9_SIO_RX_FIFO 16
+
+/* SIFLCR : Flow Control */
+#define TXX9_SIFLCR_RCS 0x00001000
+#define TXX9_SIFLCR_TES 0x00000800
+#define TXX9_SIFLCR_RTSSC 0x00000200
+#define TXX9_SIFLCR_RSDE 0x00000100
+#define TXX9_SIFLCR_TSDE 0x00000080
+#define TXX9_SIFLCR_RTSTL_MASK 0x0000001e
+#define TXX9_SIFLCR_RTSTL_MAX 0x0000001e
+#define TXX9_SIFLCR_TBRK 0x00000001
+
+/* SIBGR : Baudrate Control */
+#define TXX9_SIBGR_BCLK_MASK 0x00000300
+#define TXX9_SIBGR_BCLK_T0 0x00000000
+#define TXX9_SIBGR_BCLK_T2 0x00000100
+#define TXX9_SIBGR_BCLK_T4 0x00000200
+#define TXX9_SIBGR_BCLK_T6 0x00000300
+#define TXX9_SIBGR_BRD_MASK 0x000000ff
+
+static inline unsigned int sio_in(struct uart_txx9_port *up, int offset)
+{
+ switch (up->port.iotype) {
+ default:
+ return *(volatile u32 *)(up->port.membase + offset);
+ case UPIO_PORT:
+ return inl(up->port.iobase + offset);
+ }
+}
+
+static inline void
+sio_out(struct uart_txx9_port *up, int offset, int value)
+{
+ switch (up->port.iotype) {
+ default:
+ *(volatile u32 *)(up->port.membase + offset) = value;
+ break;
+ case UPIO_PORT:
+ outl(value, up->port.iobase + offset);
+ break;
+ }
+}
+
+static inline void
+sio_mask(struct uart_txx9_port *up, int offset, unsigned int value)
+{
+ sio_out(up, offset, sio_in(up, offset) & ~value);
+}
+static inline void
+sio_set(struct uart_txx9_port *up, int offset, unsigned int value)
+{
+ sio_out(up, offset, sio_in(up, offset) | value);
+}
+
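+/*
+ * Program the baud rate generator: quot is halved, then scaled into the
+ * 8-bit BRD field by picking the smallest BCLK prescaler (T0/T2/T4/T6,
+ * i.e. a further shift of 0, 2, 4 or 6 bits) that lets the value fit,
+ * clamping to the maximum divisor otherwise.
+ */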
+static inline void
+sio_quot_set(struct uart_txx9_port *up, int quot)
+{
+ quot >>= 1;
+ if (quot < 256)
+ sio_out(up, TXX9_SIBGR, quot | TXX9_SIBGR_BCLK_T0);
+ else if (quot < (256 << 2))
+ sio_out(up, TXX9_SIBGR, (quot >> 2) | TXX9_SIBGR_BCLK_T2);
+ else if (quot < (256 << 4))
+ sio_out(up, TXX9_SIBGR, (quot >> 4) | TXX9_SIBGR_BCLK_T4);
+ else if (quot < (256 << 6))
+ sio_out(up, TXX9_SIBGR, (quot >> 6) | TXX9_SIBGR_BCLK_T6);
+ else
+ sio_out(up, TXX9_SIBGR, 0xff | TXX9_SIBGR_BCLK_T6);
+}
+
+static void serial_txx9_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ sio_mask(up, TXX9_SIDICR, TXX9_SIDICR_TIE);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void serial_txx9_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ sio_set(up, TXX9_SIDICR, TXX9_SIDICR_TIE);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void serial_txx9_stop_rx(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ up->port.read_status_mask &= ~TXX9_SIDISR_RDIS;
+#if 0
+ sio_mask(up, TXX9_SIDICR, TXX9_SIDICR_RIE);
+#endif
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void serial_txx9_enable_ms(struct uart_port *port)
+{
+ /* TXX9-SIO can not control DTR... */
+}
+
+static inline void
+receive_chars(struct uart_txx9_port *up, unsigned int *status, struct pt_regs *regs)
+{
+ struct tty_struct *tty = up->port.info->tty;
+ unsigned char ch;
+ unsigned int disr = *status;
+ int max_count = 256;
+ char flag;
+
+ do {
+ /* The following is not allowed by the tty layer and
+ unsafe. It should be fixed ASAP */
+ if (unlikely(tty->flip.count >= TTY_FLIPBUF_SIZE)) {
+ if(tty->low_latency)
+ tty_flip_buffer_push(tty);
+ /* If this failed then we will throw away the
+ bytes but must do so to clear interrupts */
+ }
+ ch = sio_in(up, TXX9_SIRFIFO);
+ flag = TTY_NORMAL;
+ up->port.icount.rx++;
+
+ if (unlikely(disr & (TXX9_SIDISR_UBRK | TXX9_SIDISR_UPER |
+ TXX9_SIDISR_UFER | TXX9_SIDISR_UOER))) {
+ /*
+ * For statistics only
+ */
+ if (disr & TXX9_SIDISR_UBRK) {
+ disr &= ~(TXX9_SIDISR_UFER | TXX9_SIDISR_UPER);
+ up->port.icount.brk++;
+ /*
+ * We do the SysRQ and SAK checking
+ * here because otherwise the break
+ * may get masked by ignore_status_mask
+ * or read_status_mask.
+ */
+ if (uart_handle_break(&up->port))
+ goto ignore_char;
+ } else if (disr & TXX9_SIDISR_UPER)
+ up->port.icount.parity++;
+ else if (disr & TXX9_SIDISR_UFER)
+ up->port.icount.frame++;
+ if (disr & TXX9_SIDISR_UOER)
+ up->port.icount.overrun++;
+
+ /*
+ * Mask off conditions which should be ignored.
+ */
+ disr &= up->port.read_status_mask;
+
+ if (disr & TXX9_SIDISR_UBRK) {
+ flag = TTY_BREAK;
+ } else if (disr & TXX9_SIDISR_UPER)
+ flag = TTY_PARITY;
+ else if (disr & TXX9_SIDISR_UFER)
+ flag = TTY_FRAME;
+ }
+ if (uart_handle_sysrq_char(&up->port, ch, regs))
+ goto ignore_char;
+ if ((disr & up->port.ignore_status_mask) == 0) {
+ tty_insert_flip_char(tty, ch, flag);
+ }
+ if ((disr & TXX9_SIDISR_UOER) &&
+ tty->flip.count < TTY_FLIPBUF_SIZE) {
+ /*
+ * Overrun is special, since it's reported
+ * immediately, and doesn't affect the current
+ * character.
+ */
+ tty_insert_flip_char(tty, 0, TTY_OVERRUN);
+ }
+ ignore_char:
+ disr = sio_in(up, TXX9_SIDISR);
+ } while (!(disr & TXX9_SIDISR_UVALID) && (max_count-- > 0));
+ tty_flip_buffer_push(tty);
+ *status = disr;
+}
+
+static inline void transmit_chars(struct uart_txx9_port *up)
+{
+ struct circ_buf *xmit = &up->port.info->xmit;
+ int count;
+
+ if (up->port.x_char) {
+ sio_out(up, TXX9_SITFIFO, up->port.x_char);
+ up->port.icount.tx++;
+ up->port.x_char = 0;
+ return;
+ }
+ if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
+ serial_txx9_stop_tx(&up->port, 0);
+ return;
+ }
+
+ count = TXX9_SIO_TX_FIFO;
+ do {
+ sio_out(up, TXX9_SITFIFO, xmit->buf[xmit->tail]);
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ up->port.icount.tx++;
+ if (uart_circ_empty(xmit))
+ break;
+ } while (--count > 0);
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&up->port);
+
+ if (uart_circ_empty(xmit))
+ serial_txx9_stop_tx(&up->port, 0);
+}
+
+static irqreturn_t serial_txx9_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int pass_counter = 0;
+ struct uart_txx9_port *up = dev_id;
+ unsigned int status;
+
+ while (1) {
+ spin_lock(&up->port.lock);
+ status = sio_in(up, TXX9_SIDISR);
+ if (!(sio_in(up, TXX9_SIDICR) & TXX9_SIDICR_TIE))
+ status &= ~TXX9_SIDISR_TDIS;
+ if (!(status & (TXX9_SIDISR_TDIS | TXX9_SIDISR_RDIS |
+ TXX9_SIDISR_TOUT))) {
+ spin_unlock(&up->port.lock);
+ break;
+ }
+
+ if (status & TXX9_SIDISR_RDIS)
+ receive_chars(up, &status, regs);
+ if (status & TXX9_SIDISR_TDIS)
+ transmit_chars(up);
+ /* Clear TX/RX Int. Status */
+ sio_mask(up, TXX9_SIDISR,
+ TXX9_SIDISR_TDIS | TXX9_SIDISR_RDIS |
+ TXX9_SIDISR_TOUT);
+ spin_unlock(&up->port.lock);
+
+ if (pass_counter++ > PASS_LIMIT)
+ break;
+ }
+
+ return pass_counter ? IRQ_HANDLED : IRQ_NONE;
+}
+
+static unsigned int serial_txx9_tx_empty(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+ unsigned int ret;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ ret = (sio_in(up, TXX9_SICISR) & TXX9_SICISR_TXALS) ? TIOCSER_TEMT : 0;
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ return ret;
+}
+
+static unsigned int serial_txx9_get_mctrl(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+ unsigned int ret;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ ret = ((sio_in(up, TXX9_SIFLCR) & TXX9_SIFLCR_RTSSC) ? 0 : TIOCM_RTS)
+ | ((sio_in(up, TXX9_SICISR) & TXX9_SICISR_CTSS) ? 0 : TIOCM_CTS);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ return ret;
+}
+
+static void serial_txx9_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ if (mctrl & TIOCM_RTS)
+ sio_mask(up, TXX9_SIFLCR, TXX9_SIFLCR_RTSSC);
+ else
+ sio_set(up, TXX9_SIFLCR, TXX9_SIFLCR_RTSSC);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void serial_txx9_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ if (break_state == -1)
+ sio_set(up, TXX9_SIFLCR, TXX9_SIFLCR_TBRK);
+ else
+ sio_mask(up, TXX9_SIFLCR, TXX9_SIFLCR_TBRK);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static int serial_txx9_startup(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+ int retval;
+
+ /*
+ * Clear the FIFO buffers and disable them.
+ * (they will be re-enabled in set_termios())
+ */
+ sio_set(up, TXX9_SIFCR,
+ TXX9_SIFCR_TFRST | TXX9_SIFCR_RFRST | TXX9_SIFCR_FRSTE);
+ /* clear reset */
+ sio_mask(up, TXX9_SIFCR,
+ TXX9_SIFCR_TFRST | TXX9_SIFCR_RFRST | TXX9_SIFCR_FRSTE);
+ sio_out(up, TXX9_SIDICR, 0);
+
+ /*
+ * Clear the interrupt registers.
+ */
+ sio_out(up, TXX9_SIDISR, 0);
+
+ retval = request_irq(up->port.irq, serial_txx9_interrupt,
+ SA_SHIRQ, "serial_txx9", up);
+ if (retval)
+ return retval;
+
+ /*
+ * Now, initialize the UART
+ */
+ spin_lock_irqsave(&up->port.lock, flags);
+ serial_txx9_set_mctrl(&up->port, up->port.mctrl);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /* Enable RX/TX */
+ sio_mask(up, TXX9_SIFLCR, TXX9_SIFLCR_RSDE | TXX9_SIFLCR_TSDE);
+
+ /*
+ * Finally, enable interrupts.
+ */
+ sio_set(up, TXX9_SIDICR, TXX9_SIDICR_RIE);
+
+ return 0;
+}
+
+static void serial_txx9_shutdown(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+
+ /*
+ * Disable interrupts from this port
+ */
+ sio_out(up, TXX9_SIDICR, 0); /* disable all intrs */
+
+ spin_lock_irqsave(&up->port.lock, flags);
+ serial_txx9_set_mctrl(&up->port, up->port.mctrl);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+
+ /*
+ * Disable break condition
+ */
+ sio_mask(up, TXX9_SIFLCR, TXX9_SIFLCR_TBRK);
+
+#ifdef CONFIG_SERIAL_TXX9_CONSOLE
+ if (up->port.cons && up->port.line == up->port.cons->index) {
+ free_irq(up->port.irq, up);
+ return;
+ }
+#endif
+ /* reset FIFOs */
+ sio_set(up, TXX9_SIFCR,
+ TXX9_SIFCR_TFRST | TXX9_SIFCR_RFRST | TXX9_SIFCR_FRSTE);
+ /* clear reset */
+ sio_mask(up, TXX9_SIFCR,
+ TXX9_SIFCR_TFRST | TXX9_SIFCR_RFRST | TXX9_SIFCR_FRSTE);
+
+ /* Disable RX/TX */
+ sio_set(up, TXX9_SIFLCR, TXX9_SIFLCR_RSDE | TXX9_SIFLCR_TSDE);
+
+ free_irq(up->port.irq, up);
+}
+
+static void
+serial_txx9_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned int cval, fcr = 0;
+ unsigned long flags;
+ unsigned int baud, quot;
+
+ cval = sio_in(up, TXX9_SILCR);
+ /* byte size and parity */
+ cval &= ~TXX9_SILCR_UMODE_MASK;
+ switch (termios->c_cflag & CSIZE) {
+ case CS7:
+ cval |= TXX9_SILCR_UMODE_7BIT;
+ break;
+ default:
+ case CS5: /* not supported */
+ case CS6: /* not supported */
+ case CS8:
+ cval |= TXX9_SILCR_UMODE_8BIT;
+ break;
+ }
+
+ cval &= ~TXX9_SILCR_USBL_MASK;
+ if (termios->c_cflag & CSTOPB)
+ cval |= TXX9_SILCR_USBL_2BIT;
+ else
+ cval |= TXX9_SILCR_USBL_1BIT;
+ cval &= ~(TXX9_SILCR_UPEN | TXX9_SILCR_UEPS);
+ if (termios->c_cflag & PARENB)
+ cval |= TXX9_SILCR_UPEN;
+ if (!(termios->c_cflag & PARODD))
+ cval |= TXX9_SILCR_UEPS;
+
+ /*
+ * Ask the core to calculate the divisor for us.
+ */
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16/2);
+ quot = uart_get_divisor(port, baud);
+
+ /* Set up FIFOs */
+ /* TX Int by FIFO Empty, RX Int by Receiving 1 char. */
+ fcr = TXX9_SIFCR_TDIL_MAX | TXX9_SIFCR_RDIL_1;
+
+ /*
+ * Ok, we're now changing the port state. Do it with
+ * interrupts disabled.
+ */
+ spin_lock_irqsave(&up->port.lock, flags);
+
+ /*
+ * Update the per-port timeout.
+ */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ up->port.read_status_mask = TXX9_SIDISR_UOER |
+ TXX9_SIDISR_TDIS | TXX9_SIDISR_RDIS;
+ if (termios->c_iflag & INPCK)
+ up->port.read_status_mask |= TXX9_SIDISR_UFER | TXX9_SIDISR_UPER;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ up->port.read_status_mask |= TXX9_SIDISR_UBRK;
+
+ /*
+ * Characters to ignore
+ */
+ up->port.ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ up->port.ignore_status_mask |= TXX9_SIDISR_UPER | TXX9_SIDISR_UFER;
+ if (termios->c_iflag & IGNBRK) {
+ up->port.ignore_status_mask |= TXX9_SIDISR_UBRK;
+ /*
+ * If we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ up->port.ignore_status_mask |= TXX9_SIDISR_UOER;
+ }
+
+ /*
+ * ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ up->port.ignore_status_mask |= TXX9_SIDISR_RDIS;
+
+ /* CTS flow control flag */
+ if ((termios->c_cflag & CRTSCTS) &&
+ (up->port.flags & UPF_TXX9_HAVE_CTS_LINE)) {
+ sio_set(up, TXX9_SIFLCR,
+ TXX9_SIFLCR_RCS | TXX9_SIFLCR_TES);
+ } else {
+ sio_mask(up, TXX9_SIFLCR,
+ TXX9_SIFLCR_RCS | TXX9_SIFLCR_TES);
+ }
+
+ sio_out(up, TXX9_SILCR, cval);
+ sio_quot_set(up, quot);
+ sio_out(up, TXX9_SIFCR, fcr);
+
+ serial_txx9_set_mctrl(&up->port, up->port.mctrl);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static void
+serial_txx9_pm(struct uart_port *port, unsigned int state,
+ unsigned int oldstate)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ if (state) {
+ /* sleep */
+
+ if (up->pm)
+ up->pm(port, state, oldstate);
+ } else {
+ /* wake */
+
+ if (up->pm)
+ up->pm(port, state, oldstate);
+ }
+}
+
+static int serial_txx9_request_resource(struct uart_txx9_port *up)
+{
+ unsigned int size = TXX9_REGION_SIZE;
+ int ret = 0;
+
+ switch (up->port.iotype) {
+ default:
+ if (!up->port.mapbase)
+ break;
+
+ if (!request_mem_region(up->port.mapbase, size, "serial_txx9")) {
+ ret = -EBUSY;
+ break;
+ }
+
+ if (up->port.flags & UPF_IOREMAP) {
+ up->port.membase = ioremap(up->port.mapbase, size);
+ if (!up->port.membase) {
+ release_mem_region(up->port.mapbase, size);
+ ret = -ENOMEM;
+ }
+ }
+ break;
+
+ case UPIO_PORT:
+ if (!request_region(up->port.iobase, size, "serial_txx9"))
+ ret = -EBUSY;
+ break;
+ }
+ return ret;
+}
+
+static void serial_txx9_release_resource(struct uart_txx9_port *up)
+{
+ unsigned int size = TXX9_REGION_SIZE;
+
+ switch (up->port.iotype) {
+ default:
+ if (!up->port.mapbase)
+ break;
+
+ if (up->port.flags & UPF_IOREMAP) {
+ iounmap(up->port.membase);
+ up->port.membase = NULL;
+ }
+
+ release_mem_region(up->port.mapbase, size);
+ break;
+
+ case UPIO_PORT:
+ release_region(up->port.iobase, size);
+ break;
+ }
+}
+
+static void serial_txx9_release_port(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ serial_txx9_release_resource(up);
+}
+
+static int serial_txx9_request_port(struct uart_port *port)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ return serial_txx9_request_resource(up);
+}
+
+static void serial_txx9_config_port(struct uart_port *port, int uflags)
+{
+ struct uart_txx9_port *up = (struct uart_txx9_port *)port;
+ unsigned long flags;
+ int ret;
+
+ /*
+ * Find the region that we can probe for. This in turn
+ * tells us whether we can probe for the type of port.
+ */
+ ret = serial_txx9_request_resource(up);
+ if (ret < 0)
+ return;
+ port->type = PORT_TXX9;
+ up->port.fifosize = TXX9_SIO_TX_FIFO;
+
+#ifdef CONFIG_SERIAL_TXX9_CONSOLE
+ if (up->port.line == up->port.cons->index)
+ return;
+#endif
+ spin_lock_irqsave(&up->port.lock, flags);
+ /*
+ * Reset the UART.
+ */
+ sio_out(up, TXX9_SIFCR, TXX9_SIFCR_SWRST);
+#ifdef CONFIG_CPU_TX49XX
+ /* TX4925 BUG WORKAROUND. Accessing SIOC register
+ * immediately after soft reset causes bus error. */
+ iob();
+ udelay(1);
+#endif
+ while (sio_in(up, TXX9_SIFCR) & TXX9_SIFCR_SWRST)
+ ;
+ /* TX Int by FIFO Empty, RX Int by Receiving 1 char. */
+ sio_set(up, TXX9_SIFCR,
+ TXX9_SIFCR_TDIL_MAX | TXX9_SIFCR_RDIL_1);
+ /* initial settings */
+ sio_out(up, TXX9_SILCR,
+ TXX9_SILCR_UMODE_8BIT | TXX9_SILCR_USBL_1BIT |
+ ((up->port.flags & UPF_TXX9_USE_SCLK) ?
+ TXX9_SILCR_SCS_SCLK_BG : TXX9_SILCR_SCS_IMCLK_BG));
+ sio_quot_set(up, uart_get_divisor(port, 9600));
+ sio_out(up, TXX9_SIFLCR, TXX9_SIFLCR_RTSTL_MAX /* 15 */);
+ spin_unlock_irqrestore(&up->port.lock, flags);
+}
+
+static int
+serial_txx9_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ if (ser->irq < 0 ||
+ ser->baud_base < 9600 || ser->type != PORT_TXX9)
+ return -EINVAL;
+ return 0;
+}
+
+static const char *
+serial_txx9_type(struct uart_port *port)
+{
+ return "txx9";
+}
+
+static struct uart_ops serial_txx9_pops = {
+ .tx_empty = serial_txx9_tx_empty,
+ .set_mctrl = serial_txx9_set_mctrl,
+ .get_mctrl = serial_txx9_get_mctrl,
+ .stop_tx = serial_txx9_stop_tx,
+ .start_tx = serial_txx9_start_tx,
+ .stop_rx = serial_txx9_stop_rx,
+ .enable_ms = serial_txx9_enable_ms,
+ .break_ctl = serial_txx9_break_ctl,
+ .startup = serial_txx9_startup,
+ .shutdown = serial_txx9_shutdown,
+ .set_termios = serial_txx9_set_termios,
+ .pm = serial_txx9_pm,
+ .type = serial_txx9_type,
+ .release_port = serial_txx9_release_port,
+ .request_port = serial_txx9_request_port,
+ .config_port = serial_txx9_config_port,
+ .verify_port = serial_txx9_verify_port,
+};
+
+static struct uart_txx9_port serial_txx9_ports[UART_NR];
+
+static void __init serial_txx9_register_ports(struct uart_driver *drv)
+{
+ int i;
+
+ for (i = 0; i < UART_NR; i++) {
+ struct uart_txx9_port *up = &serial_txx9_ports[i];
+
+ up->port.line = i;
+ up->port.ops = &serial_txx9_pops;
+ uart_add_one_port(drv, &up->port);
+ }
+}
+
+#ifdef CONFIG_SERIAL_TXX9_CONSOLE
+
+/*
+ * Wait for transmitter & holding register to empty
+ */
+static inline void wait_for_xmitr(struct uart_txx9_port *up)
+{
+ unsigned int tmout = 10000;
+
+ /* Wait up to 10ms for the character(s) to be sent. */
+ while (--tmout &&
+ !(sio_in(up, TXX9_SICISR) & TXX9_SICISR_TXALS))
+ udelay(1);
+
+ /* Wait up to 1s for flow control if necessary */
+ if (up->port.flags & UPF_CONS_FLOW) {
+ tmout = 1000000;
+ while (--tmout &&
+ (sio_in(up, TXX9_SICISR) & TXX9_SICISR_CTSS))
+ udelay(1);
+ }
+}
+
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * The console_lock must be held when we get here.
+ */
+static void
+serial_txx9_console_write(struct console *co, const char *s, unsigned int count)
+{
+ struct uart_txx9_port *up = &serial_txx9_ports[co->index];
+ unsigned int ier, flcr;
+ int i;
+
+ /*
+ * First save the UER then disable the interrupts
+ */
+ ier = sio_in(up, TXX9_SIDICR);
+ sio_out(up, TXX9_SIDICR, 0);
+ /*
+ * Disable flow-control if enabled (and unnecessary)
+ */
+ flcr = sio_in(up, TXX9_SIFLCR);
+ if (!(up->port.flags & UPF_CONS_FLOW) && (flcr & TXX9_SIFLCR_TES))
+ sio_out(up, TXX9_SIFLCR, flcr & ~TXX9_SIFLCR_TES);
+
+ /*
+ * Now, do each character
+ */
+ for (i = 0; i < count; i++, s++) {
+ wait_for_xmitr(up);
+
+ /*
+ * Send the character out.
+ * If a LF, also do CR...
+ */
+ sio_out(up, TXX9_SITFIFO, *s);
+ if (*s == 10) {
+ wait_for_xmitr(up);
+ sio_out(up, TXX9_SITFIFO, 13);
+ }
+ }
+
+ /*
+ * Finally, wait for transmitter to become empty
+ * and restore the IER
+ */
+ wait_for_xmitr(up);
+ sio_out(up, TXX9_SIFLCR, flcr);
+ sio_out(up, TXX9_SIDICR, ier);
+}
+
+static int serial_txx9_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ struct uart_txx9_port *up;
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ /*
+ * Check whether an invalid uart number has been specified, and
+ * if so, search for the first available port that does have
+ * console support.
+ */
+ if (co->index >= UART_NR)
+ co->index = 0;
+ up = &serial_txx9_ports[co->index];
+ port = &up->port;
+ if (!port->ops)
+ return -ENODEV;
+
+ /*
+ * Temporary fix.
+ */
+ spin_lock_init(&port->lock);
+
+ /*
+ * Disable UART interrupts, set DTR and RTS high
+ * and set speed.
+ */
+ sio_out(up, TXX9_SIDICR, 0);
+ /* initial settings */
+ sio_out(up, TXX9_SILCR,
+ TXX9_SILCR_UMODE_8BIT | TXX9_SILCR_USBL_1BIT |
+ ((port->flags & UPF_TXX9_USE_SCLK) ?
+ TXX9_SILCR_SCS_SCLK_BG : TXX9_SILCR_SCS_IMCLK_BG));
+ sio_out(up, TXX9_SIFLCR, TXX9_SIFLCR_RTSTL_MAX /* 15 */);
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(port, co, baud, parity, bits, flow);
+}
+
+static struct uart_driver serial_txx9_reg;
+static struct console serial_txx9_console = {
+ .name = TXX9_TTY_NAME,
+ .write = serial_txx9_console_write,
+ .device = uart_console_device,
+ .setup = serial_txx9_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &serial_txx9_reg,
+};
+
+static int __init serial_txx9_console_init(void)
+{
+ register_console(&serial_txx9_console);
+ return 0;
+}
+console_initcall(serial_txx9_console_init);
+
+static int __init serial_txx9_late_console_init(void)
+{
+ if (!(serial_txx9_console.flags & CON_ENABLED))
+ register_console(&serial_txx9_console);
+ return 0;
+}
+late_initcall(serial_txx9_late_console_init);
+
+#define SERIAL_TXX9_CONSOLE &serial_txx9_console
+#else
+#define SERIAL_TXX9_CONSOLE NULL
+#endif
+
+static struct uart_driver serial_txx9_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "serial_txx9",
+ .devfs_name = TXX9_TTY_DEVFS_NAME,
+ .dev_name = TXX9_TTY_NAME,
+ .major = TXX9_TTY_MAJOR,
+ .minor = TXX9_TTY_MINOR_START,
+ .nr = UART_NR,
+ .cons = SERIAL_TXX9_CONSOLE,
+};
+
+int __init early_serial_txx9_setup(struct uart_port *port)
+{
+ if (port->line >= ARRAY_SIZE(serial_txx9_ports))
+ return -ENODEV;
+
+ serial_txx9_ports[port->line].port = *port;
+ serial_txx9_ports[port->line].port.ops = &serial_txx9_pops;
+ serial_txx9_ports[port->line].port.flags |= UPF_BOOT_AUTOCONF;
+ return 0;
+}
+
+#ifdef ENABLE_SERIAL_TXX9_PCI
+/**
+ * serial_txx9_suspend_port - suspend one serial port
+ * @line: serial line number
+ *
+ * Suspend one serial port.
+ */
+static void serial_txx9_suspend_port(int line)
+{
+ uart_suspend_port(&serial_txx9_reg, &serial_txx9_ports[line].port);
+}
+
+/**
+ * serial_txx9_resume_port - resume one serial port
+ * @line: serial line number
+ *
+ * Resume one serial port.
+ */
+static void serial_txx9_resume_port(int line)
+{
+ uart_resume_port(&serial_txx9_reg, &serial_txx9_ports[line].port);
+}
+
+/*
+ * Probe one serial board. Unfortunately, there is no rhyme nor reason
+ * to the arrangement of serial ports on a PCI card.
+ */
+static int __devinit
+pciserial_txx9_init_one(struct pci_dev *dev, const struct pci_device_id *ent)
+{
+ struct uart_port port;
+ int line;
+ int rc;
+
+ rc = pci_enable_device(dev);
+ if (rc)
+ return rc;
+
+ memset(&port, 0, sizeof(port));
+ port.ops = &serial_txx9_pops;
+ port.flags |= UPF_BOOT_AUTOCONF; /* uart_ops.config_port will be called */
+ port.flags |= UPF_TXX9_HAVE_CTS_LINE;
+ port.uartclk = 66670000;
+ port.irq = dev->irq;
+ port.iotype = UPIO_PORT;
+ port.iobase = pci_resource_start(dev, 1);
+ line = uart_register_port(&serial_txx9_reg, &port);
+ if (line < 0) {
+ printk(KERN_WARNING "Couldn't register serial port %s: %d\n", pci_name(dev), line);
+ }
+ pci_set_drvdata(dev, (void *)(long)line);
+
+ return 0;
+}
+
+static void __devexit pciserial_txx9_remove_one(struct pci_dev *dev)
+{
+ int line = (int)(long)pci_get_drvdata(dev);
+
+ pci_set_drvdata(dev, NULL);
+
+ if (line) {
+ uart_unregister_port(&serial_txx9_reg, line);
+ pci_disable_device(dev);
+ }
+}
+
+static int pciserial_txx9_suspend_one(struct pci_dev *dev, u32 state)
+{
+ int line = (int)(long)pci_get_drvdata(dev);
+
+ if (line)
+ serial_txx9_suspend_port(line);
+ return 0;
+}
+
+static int pciserial_txx9_resume_one(struct pci_dev *dev)
+{
+ int line = (int)(long)pci_get_drvdata(dev);
+
+ if (line)
+ serial_txx9_resume_port(line);
+ return 0;
+}
+
+static struct pci_device_id serial_txx9_pci_tbl[] = {
+ { PCI_VENDOR_ID_TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_TC86C001_MISC,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, 0 },
+ { 0, }
+};
+
+static struct pci_driver serial_txx9_pci_driver = {
+ .name = "serial_txx9",
+ .probe = pciserial_txx9_init_one,
+ .remove = __devexit_p(pciserial_txx9_remove_one),
+ .suspend = pciserial_txx9_suspend_one,
+ .resume = pciserial_txx9_resume_one,
+ .id_table = serial_txx9_pci_tbl,
+};
+
+MODULE_DEVICE_TABLE(pci, serial_txx9_pci_tbl);
+#endif /* ENABLE_SERIAL_TXX9_PCI */
+
+static int __init serial_txx9_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO "%s version %s\n", serial_name, serial_version);
+
+ ret = uart_register_driver(&serial_txx9_reg);
+ if (ret >= 0) {
+ serial_txx9_register_ports(&serial_txx9_reg);
+
+#ifdef ENABLE_SERIAL_TXX9_PCI
+ ret = pci_module_init(&serial_txx9_pci_driver);
+#endif
+ }
+ return ret;
+}
+
+static void __exit serial_txx9_exit(void)
+{
+ int i;
+
+#ifdef ENABLE_SERIAL_TXX9_PCI
+ pci_unregister_driver(&serial_txx9_pci_driver);
+#endif
+ for (i = 0; i < UART_NR; i++)
+ uart_remove_one_port(&serial_txx9_reg, &serial_txx9_ports[i].port);
+
+ uart_unregister_driver(&serial_txx9_reg);
+}
+
+module_init(serial_txx9_init);
+module_exit(serial_txx9_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("TX39/49 serial driver");
+
+MODULE_ALIAS_CHARDEV_MAJOR(TXX9_TTY_MAJOR);
return -ENODEV;
}
- sal_console_port.sc_port.lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&sal_console_port.sc_port.lock);
/* Setup the port struct with the minimum needed */
sal_console_port.sc_port.membase = (char *)1; /* just needs to be non-zero */
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
+ * Copyright (C) 1999-2002 Harald Koerfgen <hkoerfg@web.de>
+ * Copyright (C) 2001, 2002, 2003, 2004 Maciej W. Rozycki
*/
+
+#include <linux/config.h>
+
#include <linux/errno.h>
+#include <linux/sched.h>
#include <linux/tty.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/kbd_ll.h>
-#include <asm/wbflush.h>
+#include <linux/kbd_kern.h>
+#include <linux/vt_kern.h>
+
+#include <asm/keyboard.h>
#include <asm/dec/tc.h>
#include <asm/dec/machtype.h>
+#include <asm/dec/serial.h>
-#include "zs.h"
#include "lk201.h"
+/*
+ * Only handle DECstations that have an LK201 interface.
+ * Maxine uses an LK501 on the Access.Bus and various DECsystems
+ * have no keyboard interface at all.
+ */
+#define LK_IFACE (mips_machtype == MACH_DS23100 || \
+ mips_machtype == MACH_DS5000_200 || \
+ mips_machtype == MACH_DS5000_1XX || \
+ mips_machtype == MACH_DS5000_2X0)
+/*
+ * These use the Z8530 SCC. Others use the DZ11.
+ */
+#define LK_IFACE_ZS (mips_machtype == MACH_DS5000_1XX || \
+ mips_machtype == MACH_DS5000_2X0)
+
/* Simple translation table for the SysRq keys */
#ifdef CONFIG_MAGIC_SYSRQ
*/
unsigned char lk201_sysrq_xlate[128];
unsigned char *kbd_sysrq_xlate = lk201_sysrq_xlate;
+
+unsigned char kbd_sysrq_key = -1;
#endif
#define KEYB_LINE 3
-static int __init lk201_init(struct dec_serial *);
-static void __init lk201_info(struct dec_serial *);
-static void lk201_kbd_rx_char(unsigned char, unsigned char);
+static int __init lk201_init(void *);
+static void __init lk201_info(void *);
+static void lk201_rx_char(unsigned char, unsigned char);
-struct zs_hook lk201_kbdhook = {
- .init_channel = lk201_init,
- .init_info = lk201_info,
- .cflags = B4800 | CS8 | CSTOPB | CLOCAL
+static struct dec_serial_hook lk201_hook = {
+ .init_channel = lk201_init,
+ .init_info = lk201_info,
+ .rx_char = NULL,
+ .poll_rx_char = NULL,
+ .poll_tx_char = NULL,
+ .cflags = B4800 | CS8 | CSTOPB | CLOCAL,
};
/*
* This is used during keyboard initialisation
*/
static unsigned char lk201_reset_string[] = {
- LK_CMD_LEDS_ON, LK_PARAM_LED_MASK(0xf), /* show we are resetting */
LK_CMD_SET_DEFAULTS,
LK_CMD_MODE(LK_MODE_RPT_DOWN, 1),
LK_CMD_MODE(LK_MODE_RPT_DOWN, 2),
LK_CMD_MODE(LK_MODE_RPT_DOWN, 12),
LK_CMD_MODE(LK_MODE_DOWN, 13),
LK_CMD_MODE(LK_MODE_RPT_DOWN, 14),
- LK_CMD_ENB_RPT,
LK_CMD_DIS_KEYCLK,
- LK_CMD_RESUME,
LK_CMD_ENB_BELL, LK_PARAM_VOLUME(4),
- LK_CMD_LEDS_OFF, LK_PARAM_LED_MASK(0xf)
};
-static int __init lk201_reset(struct dec_serial *info)
+static void *lk201_handle;
+
+static int lk201_send(unsigned char ch)
+{
+ if (lk201_hook.poll_tx_char(lk201_handle, ch)) {
+ printk(KERN_ERR "lk201: transmit timeout\n");
+ return -EIO;
+ }
+ return 0;
+}
+
+static inline int lk201_get_id(void)
+{
+ return lk201_send(LK_CMD_REQ_ID);
+}
+
+static int lk201_reset(void)
+{
+ int i, r;
+
+ for (i = 0; i < sizeof(lk201_reset_string); i++) {
+ r = lk201_send(lk201_reset_string[i]);
+ if (r < 0)
+ return r;
+ }
+ return 0;
+}
+
+static void lk201_report(unsigned char id[6])
+{
+ char *report = "lk201: keyboard attached, ";
+
+ switch (id[2]) {
+ case LK_STAT_PWRUP_OK:
+ printk(KERN_INFO "%sself-test OK\n", report);
+ break;
+ case LK_STAT_PWRUP_KDOWN:
+ /* The keyboard will resend the power-up ID
+ after all keys are released, so we don't
+ bother handling the error specially. Still
+ there may be a short-circuit inside.
+ */
+ printk(KERN_ERR "%skey down (stuck?), code: 0x%02x\n",
+ report, id[3]);
+ break;
+ case LK_STAT_PWRUP_ERROR:
+ printk(KERN_ERR "%sself-test failure\n", report);
+ break;
+ default:
+ printk(KERN_ERR "%sunknown error: 0x%02x\n",
+ report, id[2]);
+ }
+}
+
+static void lk201_id(unsigned char id[6])
+{
+ /*
+ * Report whether there is an LK201 or an LK401
+ * The LK401 has ALT keys...
+ */
+ switch (id[4]) {
+ case 1:
+ printk(KERN_INFO "lk201: LK201 detected\n");
+ break;
+ case 2:
+ printk(KERN_INFO "lk201: LK401 detected\n");
+ break;
+ case 3:
+ printk(KERN_INFO "lk201: LK443 detected\n");
+ break;
+ case 4:
+ printk(KERN_INFO "lk201: LK421 detected\n");
+ break;
+ default:
+ printk(KERN_WARNING
+ "lk201: unknown keyboard detected, ID %d\n", id[4]);
+ printk(KERN_WARNING "lk201: ... please report to "
+ "<linux-mips@linux-mips.org>\n");
+ }
+}
+
+#define DEFAULT_KEYB_REP_DELAY (250/5) /* [5ms] */
+#define DEFAULT_KEYB_REP_RATE 30 /* [cps] */
+
+static struct kbd_repeat kbdrate = {
+ DEFAULT_KEYB_REP_DELAY,
+ DEFAULT_KEYB_REP_RATE
+};
+
+static void parse_kbd_rate(struct kbd_repeat *r)
{
+ if (r->delay <= 0)
+ r->delay = kbdrate.delay;
+ if (r->rate <= 0)
+ r->rate = kbdrate.rate;
+
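+	/*
+	 * Clamp to the range the keyboard accepts: 5-630 ms delay and
+	 * 12-127 cps rate; 124 is substituted for 125, whose encoded
+	 * parameter byte would collide with the power-up command.
+	 */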
+ if (r->delay < 5)
+ r->delay = 5;
+ if (r->delay > 630)
+ r->delay = 630;
+ if (r->rate < 12)
+ r->rate = 12;
+ if (r->rate > 127)
+ r->rate = 127;
+ if (r->rate == 125)
+ r->rate = 124;
+}
+
+static int write_kbd_rate(struct kbd_repeat *rep)
+{
+ int delay, rate;
int i;
- for (i = 0; i < sizeof(lk201_reset_string); i++)
- if (info->hook->poll_tx_char(info, lk201_reset_string[i])) {
- printk("%s transmit timeout\n", __FUNCTION__);
- return -EIO;
- }
+ delay = rep->delay / 5;
+ rate = rep->rate;
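+	/*
+	 * Program the same delay and rate into all four repeat-rate
+	 * registers; the delay byte is in 5 ms units (hence the
+	 * division above).
+	 */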
+ for (i = 0; i < 4; i++) {
+ if (lk201_hook.poll_tx_char(lk201_handle,
+ LK_CMD_RPT_RATE(i)))
+ return 1;
+ if (lk201_hook.poll_tx_char(lk201_handle,
+ LK_PARAM_DELAY(delay)))
+ return 1;
+ if (lk201_hook.poll_tx_char(lk201_handle,
+ LK_PARAM_RATE(rate)))
+ return 1;
+ }
+ return 0;
+}
+
+static int lk201_kbd_rate(struct kbd_repeat *rep)
+{
+ if (rep == NULL)
+ return -EINVAL;
+
+ parse_kbd_rate(rep);
+
+ if (write_kbd_rate(rep)) {
+ memcpy(rep, &kbdrate, sizeof(struct kbd_repeat));
+ return -EIO;
+ }
+
+ memcpy(&kbdrate, rep, sizeof(struct kbd_repeat));
+
return 0;
}
+static void lk201_kd_mksound(unsigned int hz, unsigned int ticks)
+{
+ if (!ticks)
+ return;
+
+ /*
+ * Can't set frequency and we "approximate"
+ * duration by volume. ;-)
+ */
+ ticks /= HZ / 32;
+ if (ticks > 7)
+ ticks = 7;
+ ticks = 7 - ticks;
+
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_CMD_ENB_BELL))
+ return;
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_PARAM_VOLUME(ticks)))
+ return;
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_CMD_BELL))
+ return;
+}
+
void kbd_leds(unsigned char leds)
{
- return;
+ unsigned char l = 0;
+
+ if (!lk201_handle) /* FIXME */
+ return;
+
+ /* FIXME -- Only Hold and Lock LEDs for now. --macro */
+ if (leds & LED_SCR)
+ l |= LK_LED_HOLD;
+ if (leds & LED_CAP)
+ l |= LK_LED_LOCK;
+
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_CMD_LEDS_ON))
+ return;
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_PARAM_LED_MASK(l)))
+ return;
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_CMD_LEDS_OFF))
+ return;
+ if (lk201_hook.poll_tx_char(lk201_handle, LK_PARAM_LED_MASK(~l)))
+ return;
}
int kbd_setkeycode(unsigned int scancode, unsigned int keycode)
return 0x80;
}
-static void lk201_kbd_rx_char(unsigned char ch, unsigned char stat)
+static void lk201_rx_char(unsigned char ch, unsigned char fl)
{
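+	/*
+	 * id[]/id_i collect the 4-byte power-up report and, if the
+	 * self-test passed, the 2-byte keyboard ID that follows it.
+	 */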
+ static unsigned char id[6];
+ static int id_i;
+
static int shift_state = 0;
static int prev_scancode;
unsigned char c = scancodeRemap[ch];
- if (!stat || stat == 4) {
- switch (ch) {
- case LK_KEY_ACK:
- break;
- case LK_KEY_LOCK:
- shift_state ^= LK_LOCK;
- handle_scancode(c, shift_state && LK_LOCK ? 1 : 0);
- break;
- case LK_KEY_SHIFT:
- shift_state ^= LK_SHIFT;
- handle_scancode(c, shift_state && LK_SHIFT ? 1 : 0);
- break;
- case LK_KEY_CTRL:
- shift_state ^= LK_CTRL;
- handle_scancode(c, shift_state && LK_CTRL ? 1 : 0);
- break;
- case LK_KEY_COMP:
- shift_state ^= LK_COMP;
- handle_scancode(c, shift_state && LK_COMP ? 1 : 0);
- break;
- case LK_KEY_RELEASE:
- if (shift_state & LK_SHIFT)
- handle_scancode(scancodeRemap[LK_KEY_SHIFT], 0);
- if (shift_state & LK_CTRL)
- handle_scancode(scancodeRemap[LK_KEY_CTRL], 0);
- if (shift_state & LK_COMP)
- handle_scancode(scancodeRemap[LK_KEY_COMP], 0);
- if (shift_state & LK_LOCK)
- handle_scancode(scancodeRemap[LK_KEY_LOCK], 0);
- shift_state = 0;
- break;
- case LK_KEY_REPEAT:
- handle_scancode(prev_scancode, 1);
- break;
- default:
- prev_scancode = c;
- handle_scancode(c, 1);
- break;
+ if (fl != TTY_NORMAL && fl != TTY_OVERRUN) {
+ printk(KERN_ERR "lk201: keyboard receive error: 0x%02x\n", fl);
+ return;
+ }
+
+ /* Assume this is a power-up ID. */
+ if (ch == LK_STAT_PWRUP_ID && !id_i) {
+ id[id_i++] = ch;
+ return;
+ }
+
+ /* Handle the power-up sequence. */
+ if (id_i) {
+ id[id_i++] = ch;
+ if (id_i == 4) {
+ /* OK, the power-up concluded. */
+ lk201_report(id);
+ if (id[2] == LK_STAT_PWRUP_OK)
+ lk201_get_id();
+ else {
+ id_i = 0;
+ printk(KERN_ERR "lk201: keyboard power-up "
+ "error, skipping initialization\n");
+ }
+ } else if (id_i == 6) {
+ /* We got the ID; report it and start operation. */
+ id_i = 0;
+ lk201_id(id);
+ lk201_reset();
}
- } else
- printk("Error reading LKx01 keyboard: 0x%02x\n", stat);
+ return;
+ }
+
+ /* Everything else is a scancode/status response. */
+ id_i = 0;
+ switch (ch) {
+ case LK_STAT_RESUME_ERR:
+ case LK_STAT_ERROR:
+ case LK_STAT_INHIBIT_ACK:
+ case LK_STAT_TEST_ACK:
+ case LK_STAT_MODE_KEYDOWN:
+ case LK_STAT_MODE_ACK:
+ break;
+ case LK_KEY_LOCK:
+ shift_state ^= LK_LOCK;
+ handle_scancode(c, (shift_state & LK_LOCK) ? 1 : 0);
+ break;
+ case LK_KEY_SHIFT:
+ shift_state ^= LK_SHIFT;
+ handle_scancode(c, (shift_state & LK_SHIFT) ? 1 : 0);
+ break;
+ case LK_KEY_CTRL:
+ shift_state ^= LK_CTRL;
+ handle_scancode(c, (shift_state & LK_CTRL) ? 1 : 0);
+ break;
+ case LK_KEY_COMP:
+ shift_state ^= LK_COMP;
+ handle_scancode(c, (shift_state & LK_COMP) ? 1 : 0);
+ break;
+ case LK_KEY_RELEASE:
+ if (shift_state & LK_SHIFT)
+ handle_scancode(scancodeRemap[LK_KEY_SHIFT], 0);
+ if (shift_state & LK_CTRL)
+ handle_scancode(scancodeRemap[LK_KEY_CTRL], 0);
+ if (shift_state & LK_COMP)
+ handle_scancode(scancodeRemap[LK_KEY_COMP], 0);
+ if (shift_state & LK_LOCK)
+ handle_scancode(scancodeRemap[LK_KEY_LOCK], 0);
+ shift_state = 0;
+ break;
+ case LK_KEY_REPEAT:
+ handle_scancode(prev_scancode, 1);
+ break;
+ default:
+ prev_scancode = c;
+ handle_scancode(c, 1);
+ break;
+ }
+ tasklet_schedule(&keyboard_tasklet);
}
-static void __init lk201_info(struct dec_serial *info)
+static void __init lk201_info(void *handle)
{
}
-static int __init lk201_init(struct dec_serial *info)
+static int __init lk201_init(void *handle)
{
- unsigned int ch, id = 0;
- int result;
+ /* First install handlers. */
+ lk201_handle = handle;
+ kbd_rate = lk201_kbd_rate;
+ kd_mksound = lk201_kd_mksound;
- printk("DECstation LK keyboard driver v0.04... ");
+ lk201_hook.rx_char = lk201_rx_char;
- result = lk201_reset(info);
- if (result)
- return result;
- mdelay(10);
-
- /*
- * Detect whether there is an LK201 or an LK401
- * The LK401 has ALT keys...
- */
- info->hook->poll_tx_char(info, LK_CMD_REQ_ID);
- while ((ch = info->hook->poll_rx_char(info)) > 0)
- id = ch;
-
- switch (id) {
- case 1:
- printk("LK201 detected\n");
- break;
- case 2:
- printk("LK401 detected\n");
- break;
- default:
- printk("unknown keyboard, ID %d,\n", id);
- printk("... please report to <linux-mips@oss.sgi.com>\n");
- }
-
- /*
- * now we're ready
- */
- info->hook->rx_char = lk201_kbd_rx_char;
+ /* Then just issue a reset -- the handlers will do the rest. */
+ lk201_send(LK_CMD_POWER_UP);
return 0;
}
void __init kbd_init_hw(void)
{
- extern int register_zs_hook(unsigned int, struct zs_hook *);
- extern int unregister_zs_hook(unsigned int);
-
- if (TURBOCHANNEL) {
- if (mips_machtype != MACH_DS5000_XX) {
- /*
- * This is not a MAXINE, so:
- *
- * kbd_init_hw() is being called before
- * rs_init() so just register the kbd hook
- * and let zs_init do the rest :-)
- */
- if (mips_machtype == MACH_DS5000_200)
- printk("LK201 Support for DS5000/200 not yet ready ...\n");
- else
- if(!register_zs_hook(KEYB_LINE, &lk201_kbdhook))
- unregister_zs_hook(KEYB_LINE);
- }
+	/* Maxine uses an LK501 on the Access.Bus. */
+ if (!LK_IFACE)
+ return;
+
+ printk(KERN_INFO "lk201: DECstation LK keyboard driver v0.05.\n");
+
+ if (LK_IFACE_ZS) {
+ /*
+ * kbd_init_hw() is being called before
+ * rs_init() so just register the kbd hook
+ * and let zs_init do the rest :-)
+ */
+ if (!register_dec_serial_hook(KEYB_LINE, &lk201_hook))
+ unregister_dec_serial_hook(KEYB_LINE);
} else {
/*
* TODO: modify dz.c to allow similar hooks
* for LK201 handling on DS2100, DS3100, and DS5000/200
*/
- printk("LK201 Support for DS3100 not yet ready ...\n");
+ printk(KERN_ERR "lk201: support for DZ11 not yet ready.\n");
}
}
-
-
-
-
* Commands to the keyboard processor
*/
-#define LK_PARAM 0x80 /* start/end parameter list */
-
-#define LK_CMD_RESUME 0x8b
-#define LK_CMD_INHIBIT 0xb9
-#define LK_CMD_LEDS_ON 0x13 /* 1 param: led bitmask */
-#define LK_CMD_LEDS_OFF 0x11 /* 1 param: led bitmask */
-#define LK_CMD_DIS_KEYCLK 0x99
-#define LK_CMD_ENB_KEYCLK 0x1b /* 1 param: volume */
-#define LK_CMD_DIS_CTLCLK 0xb9
-#define LK_CMD_ENB_CTLCLK 0xbb
-#define LK_CMD_SOUND_CLK 0x9f
-#define LK_CMD_DIS_BELL 0xa1
-#define LK_CMD_ENB_BELL 0x23 /* 1 param: volume */
-#define LK_CMD_BELL 0xa7
-#define LK_CMD_TMP_NORPT 0xc1
-#define LK_CMD_ENB_RPT 0xe3
-#define LK_CMD_DIS_RPT 0xe1
-#define LK_CMD_RPT_TO_DOWN 0xd9
-#define LK_CMD_REQ_ID 0xab
-#define LK_CMD_POWER_UP 0xfd
-#define LK_CMD_TEST_MODE 0xcb
-#define LK_CMD_SET_DEFAULTS 0xd3
+#define LK_PARAM 0x80 /* start/end parameter list */
+
+#define LK_CMD_RESUME 0x8b /* resume transmission to the host */
+#define LK_CMD_INHIBIT 0x89 /* stop transmission to the host */
+#define LK_CMD_LEDS_ON 0x13 /* light LEDs */
+ /* 1st param: led bitmask */
+#define LK_CMD_LEDS_OFF 0x11 /* turn off LEDs */
+ /* 1st param: led bitmask */
+#define LK_CMD_DIS_KEYCLK 0x99 /* disable the keyclick */
+#define LK_CMD_ENB_KEYCLK 0x1b /* enable the keyclick */
+ /* 1st param: volume */
+#define LK_CMD_DIS_CTLCLK 0xb9 /* disable the Ctrl keyclick */
+#define LK_CMD_ENB_CTLCLK 0xbb /* enable the Ctrl keyclick */
+#define LK_CMD_SOUND_CLK 0x9f /* emit a keyclick */
+#define LK_CMD_DIS_BELL 0xa1 /* disable the bell */
+#define LK_CMD_ENB_BELL 0x23 /* enable the bell */
+ /* 1st param: volume */
+#define LK_CMD_BELL 0xa7 /* emit a bell */
+#define LK_CMD_TMP_NORPT 0xd1 /* disable typematic */
+ /* for the currently pressed key */
+#define LK_CMD_ENB_RPT 0xe3 /* enable typematic */
+ /* for RPT_DOWN groups */
+#define LK_CMD_DIS_RPT 0xe1 /* disable typematic */
+ /* for RPT_DOWN groups */
+#define LK_CMD_RPT_TO_DOWN 0xd9 /* set RPT_DOWN groups to DOWN */
+#define LK_CMD_REQ_ID 0xab /* request the keyboard ID */
+#define LK_CMD_POWER_UP 0xfd /* init power-up sequence */
+#define LK_CMD_TEST_MODE 0xcb /* enter the factory test mode */
+#define LK_CMD_TEST_EXIT 0x80 /* exit the factory test mode */
+#define LK_CMD_SET_DEFAULTS 0xd3 /* set power-up defaults */
+
+#define LK_CMD_MODE(m,div) (LK_PARAM|(((div)&0xf)<<3)|(((m)&0x3)<<1))
+ /* select the repeat mode */
+ /* for the selected key group */
+#define LK_CMD_MODE_AR(m,div) ((((div)&0xf)<<3)|(((m)&0x3)<<1))
+ /* select the repeat mode */
+ /* and the repeat register */
+ /* for the selected key group */
+ /* 1st param: register number */
+#define LK_CMD_RPT_RATE(r) (0x78|(((r)&0x3)<<1))
+ /* set the delay and repeat rate */
+ /* for the selected repeat register */
+ /* 1st param: initial delay */
+ /* 2nd param: repeat rate */
/* there are 4 leds, represent them in the low 4 bits of a byte */
-#define LK_PARAM_LED_MASK(ledbmap) (LK_PARAM|(ledbmap))
+#define LK_PARAM_LED_MASK(ledbmap) (LK_PARAM|((ledbmap)&0xf))
+#define LK_LED_WAIT 0x1 /* Wait LED */
+#define LK_LED_COMP 0x2 /* Compose LED */
+#define LK_LED_LOCK 0x4 /* Lock LED */
+#define LK_LED_HOLD 0x8 /* Hold Screen LED */
/* max volume is 0, lowest is 0x7 */
-#define LK_PARAM_VOLUME(v) (LK_PARAM|((v)&0x7))
+#define LK_PARAM_VOLUME(v) (LK_PARAM|((v)&0x7))
+
+/* mode set command details, div is a key group number */
+#define LK_MODE_DOWN 0x0 /* make only */
+#define LK_MODE_RPT_DOWN 0x1 /* make and typematic */
+#define LK_MODE_DOWN_UP 0x3 /* make and release */
-/* mode set command(s) details */
-#define LK_MODE_DOWN 0x0
-#define LK_MODE_RPT_DOWN 0x2
-#define LK_MODE_DOWN_UP 0x6
-#define LK_CMD_MODE(m,div) (LK_PARAM|(div<<3)|m)
+/* there are 4 repeat registers */
+#define LK_PARAM_AR(r)		(LK_PARAM|((r)&0x3))
+
+/*
+ * Mappings between key groups and keycodes are as follows:
+ *
+ * 1: 0xbf - 0xff -- alphanumeric,
+ * 2: 0x91 - 0xa5 -- numeric keypad,
+ * 3: 0xbc -- Backspace,
+ * 4: 0xbd - 0xbe -- Tab, Return,
+ * 5: 0xb0 - 0xb2 -- Lock, Compose Character,
+ * 6: 0xad - 0xaf -- Ctrl, Shift,
+ * 7: 0xa6 - 0xa8 -- Left Arrow, Right Arrow,
+ * 8: 0xa9 - 0xac -- Up Arrow, Down Arrow, Right Shift,
+ * 9: 0x88 - 0x90 -- editor keypad,
+ * 10: 0x56 - 0x62 -- F1 - F5,
+ * 11: 0x63 - 0x6e -- F6 - F10,
+ * 12: 0x6f - 0x7a -- F11 - F14,
+ * 13: 0x7b - 0x7d -- Help, Do,
+ * 14: 0x7e - 0x87 -- F17 - F20.
+ *
+ * Notes:
+ * 1. Codes in the 0x00 - 0x40 range are reserved.
+ * 2. The assignment of the 0x41 - 0x55 range is not known, probably group 10.
+ */
+
+/* delay is 5 - 630 ms; 0x00 and 0x7f are reserved */
+#define LK_PARAM_DELAY(t) ((t)&0x7f)
+
+/* rate is 12 - 127 Hz; 0x00 - 0x0b and 0x7d (power-up!) are reserved */
+#define LK_PARAM_RATE(r) (LK_PARAM|((r)&0x7f))
#define LK_SHIFT 1<<0
#define LK_CTRL 1<<1
#define LK_LOCK 1<<2
#define LK_COMP 1<<3
-#define LK_KEY_SHIFT 174
-#define LK_KEY_CTRL 175
-#define LK_KEY_LOCK 176
-#define LK_KEY_COMP 177
-#define LK_KEY_RELEASE 179
-#define LK_KEY_REPEAT 180
-#define LK_KEY_ACK 186
+#define LK_KEY_SHIFT 0xae
+#define LK_KEY_CTRL 0xaf
+#define LK_KEY_LOCK 0xb0
+#define LK_KEY_COMP 0xb1
+
+#define LK_KEY_RELEASE 0xb3 /* all keys released */
+#define LK_KEY_REPEAT 0xb4 /* repeat the last key */
+
+/* status responses */
+#define LK_STAT_RESUME_ERR 0xb5 /* keystrokes lost while inhibited */
+#define LK_STAT_ERROR 0xb6 /* an invalid command received */
+#define LK_STAT_INHIBIT_ACK 0xb7 /* transmission inhibited */
+#define LK_STAT_TEST_ACK 0xb8 /* the factory test mode entered */
+#define LK_STAT_MODE_KEYDOWN 0xb9 /* a key is down on a change */
+ /* to the DOWN_UP mode; */
+ /* the keycode follows */
+#define LK_STAT_MODE_ACK 0xba /* the mode command succeeded */
+
+#define LK_STAT_PWRUP_ID 0x01 /* the power-up response start mark */
+#define LK_STAT_PWRUP_OK 0x00 /* the power-up self test OK */
+#define LK_STAT_PWRUP_KDOWN 0x3d /* a key was down during the test */
+#define LK_STAT_PWRUP_ERROR 0x3e /* keyboard self test failure */
-extern unsigned char scancodeRemap[256];
\ No newline at end of file
+extern unsigned char scancodeRemap[256];
* for more details.
*
* Copyright (c) Harald Koerfgen, 1998
+ * Copyright (c) 2001, 2003 Maciej W. Rozycki
*/
#include <linux/string.h>
#include <linux/init.h>
#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
#include <asm/addrspace.h>
#include <asm/errno.h>
#include <asm/dec/machtype.h>
+#include <asm/dec/prom.h>
#include <asm/dec/tcinfo.h>
#include <asm/dec/tcmodule.h>
#include <asm/dec/interrupts.h>
-
+#include <asm/paccess.h>
#include <asm/ptrace.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
#define TC_DEBUG
MODULE_LICENSE("GPL");
slot_info tc_bus[MAX_SLOT];
-static int max_tcslot;
+static int num_tcslots;
static tcinfo *info;
unsigned long system_base;
-extern void (*dbe_board_handler)(struct pt_regs *regs);
-extern unsigned long *(*rex_slot_address)(int);
-extern void *(*rex_gettcinfo)(void);
-
/*
* Interface to the world. Read comment in include/asm-mips/tc.h.
*/
-int search_tc_card(char *name)
+int search_tc_card(const char *name)
{
int slot;
slot_info *sip;
- for (slot = 0; slot <= max_tcslot; slot++) {
+ for (slot = 0; slot < num_tcslots; slot++) {
sip = &tc_bus[slot];
- if ((sip->flags & FREE) && (strncmp(sip->name, name, strlen(name)) == 0)) {
+ if ((sip->flags & FREE) &&
+ (strncmp(sip->name, name, strlen(name)) == 0)) {
return slot;
}
}
void release_tc_card(int slot)
{
if (tc_bus[slot].flags & FREE) {
- printk("release_tc_card: attempting to release a card already free\n");
+ printk("release_tc_card: "
+ "attempting to release a card already free\n");
return;
}
tc_bus[slot].flags &= ~IN_USE;
/*
* Probing for TURBOchannel modules
*/
-static void __init my_dbe_handler(struct pt_regs *regs)
+static void __init tc_probe(unsigned long startaddr, unsigned long size,
+ int slots)
{
- regs->cp0_epc += 4;
-}
-
-static void __init tc_probe(unsigned long startaddr, unsigned long size, int max_slot)
-{
- int i, slot;
+ int i, slot, err;
long offset;
+ unsigned char pattern[4];
unsigned char *module;
- void (*old_be_handler)(struct pt_regs *regs);
-
- /* Install our exception handler temporarily */
- old_be_handler = dbe_board_handler;
- dbe_board_handler = my_dbe_handler;
- for (slot = 0; slot <= max_slot; slot++) {
+ for (slot = 0; slot < slots; slot++) {
module = (char *)(startaddr + slot * size);
- offset = -1;
- if (module[OLDCARD + TC_PATTERN0] == 0x55 && module[OLDCARD + TC_PATTERN1] == 0x00
- && module[OLDCARD + TC_PATTERN2] == 0xaa && module[OLDCARD + TC_PATTERN3] == 0xff)
- offset = OLDCARD;
- if (module[TC_PATTERN0] == 0x55 && module[TC_PATTERN1] == 0x00
- && module[TC_PATTERN2] == 0xaa && module[TC_PATTERN3] == 0xff)
- offset = 0;
-
- if (offset != -1) {
- tc_bus[slot].base_addr = (unsigned long)module;
- for(i = 0; i < 8; i++) {
- tc_bus[slot].firmware[i] = module[TC_FIRM_VER + offset + 4 * i];
- tc_bus[slot].vendor[i] = module[TC_VENDOR + offset + 4 * i];
- tc_bus[slot].name[i] = module[TC_MODULE + offset + 4 * i];
- }
- tc_bus[slot].firmware[8] = 0;
- tc_bus[slot].vendor[8] = 0;
- tc_bus[slot].name[8] = 0;
- /*
- * Looks unneccesary, but we may change
- * TC? in the future
- */
- switch (slot) {
- case 0:
- tc_bus[slot].interrupt = TC0;
- break;
- case 1:
- tc_bus[slot].interrupt = TC1;
- break;
- case 2:
- tc_bus[slot].interrupt = TC2;
- break;
- /*
- * Yuck! DS5000/200 onboard devices
- */
- case 5:
- tc_bus[slot].interrupt = SCSI_INT;
- break;
- case 6:
- tc_bus[slot].interrupt = ETHER;
- break;
- default:
- tc_bus[slot].interrupt = -1;
- break;
- }
+
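+		/*
+		 * Look for the module ROM signature (0x55 0x00 0xaa 0xff),
+		 * first at the old-style offset and then at the new one.
+		 * get_dbe() catches the bus error an empty slot would raise.
+		 */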
+ offset = OLDCARD;
+
+ err = 0;
+ err |= get_dbe(pattern[0], module + OLDCARD + TC_PATTERN0);
+ err |= get_dbe(pattern[1], module + OLDCARD + TC_PATTERN1);
+ err |= get_dbe(pattern[2], module + OLDCARD + TC_PATTERN2);
+ err |= get_dbe(pattern[3], module + OLDCARD + TC_PATTERN3);
+ if (err)
+ continue;
+
+ if (pattern[0] != 0x55 || pattern[1] != 0x00 ||
+ pattern[2] != 0xaa || pattern[3] != 0xff) {
+ offset = NEWCARD;
+
+ err = 0;
+ err |= get_dbe(pattern[0], module + TC_PATTERN0);
+ err |= get_dbe(pattern[1], module + TC_PATTERN1);
+ err |= get_dbe(pattern[2], module + TC_PATTERN2);
+ err |= get_dbe(pattern[3], module + TC_PATTERN3);
+ if (err)
+ continue;
}
- }
- dbe_board_handler = old_be_handler;
+ if (pattern[0] != 0x55 || pattern[1] != 0x00 ||
+ pattern[2] != 0xaa || pattern[3] != 0xff)
+ continue;
+
+ tc_bus[slot].base_addr = (unsigned long)module;
+ for(i = 0; i < 8; i++) {
+ tc_bus[slot].firmware[i] =
+ module[TC_FIRM_VER + offset + 4 * i];
+ tc_bus[slot].vendor[i] =
+ module[TC_VENDOR + offset + 4 * i];
+ tc_bus[slot].name[i] =
+ module[TC_MODULE + offset + 4 * i];
+ }
+ tc_bus[slot].firmware[8] = 0;
+ tc_bus[slot].vendor[8] = 0;
+ tc_bus[slot].name[8] = 0;
+ /*
+		 * Looks unnecessary, but we may change
+ * TC? in the future
+ */
+ switch (slot) {
+ case 0:
+ tc_bus[slot].interrupt = dec_interrupt[DEC_IRQ_TC0];
+ break;
+ case 1:
+ tc_bus[slot].interrupt = dec_interrupt[DEC_IRQ_TC1];
+ break;
+ case 2:
+ tc_bus[slot].interrupt = dec_interrupt[DEC_IRQ_TC2];
+ break;
+ /*
+ * Yuck! DS5000/200 onboard devices
+ */
+ case 5:
+ tc_bus[slot].interrupt = dec_interrupt[DEC_IRQ_TC5];
+ break;
+ case 6:
+ tc_bus[slot].interrupt = dec_interrupt[DEC_IRQ_TC6];
+ break;
+ default:
+ tc_bus[slot].interrupt = -1;
+ break;
+ }
+ }
}
/*
switch (mips_machtype) {
case MACH_DS5000_200:
- max_tcslot = 6;
+ num_tcslots = 7;
break;
case MACH_DS5000_1XX:
case MACH_DS5000_2X0:
- max_tcslot = 2;
+ case MACH_DS5900:
+ num_tcslots = 3;
break;
case MACH_DS5000_XX:
default:
- max_tcslot = 1;
+ num_tcslots = 2;
break;
}
slot_size = info->slot_size << 20;
- tc_probe(slot0addr, slot_size, max_tcslot);
+ tc_probe(slot0addr, slot_size, num_tcslots);
/*
* All TURBOchannel DECstations have the onboard devices
- * where the (max_tcslot + 1 or 2 on DS5k/xx) Option Module
+ * where the (num_tcslots + 0 or 1 on DS5k/xx) Option Module
* would be.
*/
if(mips_machtype == MACH_DS5000_XX)
- i = 2;
- else
i = 1;
-
- system_base = slot0addr + slot_size * (max_tcslot + i);
+ else
+ i = 0;
+
+ system_base = slot0addr + slot_size * (num_tcslots + i);
#ifdef TC_DEBUG
- for (i = 0; i <= max_tcslot; i++)
+ for (i = 0; i < num_tcslots; i++)
if (tc_bus[i].base_addr) {
printk(" slot %d: ", i);
printk("%s %s %s\n", tc_bus[i].vendor,
EXPORT_SYMBOL(get_tc_base_addr);
EXPORT_SYMBOL(get_tc_irq_nr);
EXPORT_SYMBOL(get_tc_speed);
-
+EXPORT_SYMBOL(system_base);
/*
- * macserial.h: Definitions for the Macintosh Z8530 serial driver.
+ * drivers/tc/zs.h: Definitions for the DECstation Z85C30 serial driver.
*
* Adapted from drivers/sbus/char/sunserial.h by Paul Mackerras.
+ * Adapted from drivers/macintosh/macserial.h by Harald Koerfgen.
*
* Copyright (C) 1996 Paul Mackerras (Paul.Mackerras@cs.anu.edu.au)
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
+ * Copyright (C) 2004 Maciej W. Rozycki
*/
#ifndef _DECSERIAL_H
#define _DECSERIAL_H
+#include <asm/dec/serial.h>
+
#define NUM_ZSREGS 16
struct serial_struct {
unsigned char curregs[NUM_ZSREGS];
};
-struct dec_serial;
-
-struct zs_hook {
- int (*init_channel)(struct dec_serial* info);
- void (*init_info)(struct dec_serial* info);
- void (*rx_char)(unsigned char ch, unsigned char stat);
- int (*poll_rx_char)(struct dec_serial* info);
- int (*poll_tx_char)(struct dec_serial* info,
- unsigned char ch);
- unsigned cflags;
-};
-
struct dec_serial {
- struct dec_serial *zs_next; /* For IRQ servicing chain */
- struct dec_zschannel *zs_channel; /* Channel registers */
- struct dec_zschannel *zs_chan_a; /* A side registers */
- unsigned char read_reg_zero;
-
- char soft_carrier; /* Use soft carrier on this channel */
- char break_abort; /* Is serial console in, so process brk/abrt */
- struct zs_hook *hook; /* Hook on this channel */
- char is_cons; /* Is this our console. */
- unsigned char tx_active; /* character is being xmitted */
- unsigned char tx_stopped; /* output is suspended */
-
- /* We need to know the current clock divisor
- * to read the bps rate the chip has currently
- * loaded.
+ struct dec_serial *zs_next; /* For IRQ servicing chain. */
+ struct dec_zschannel *zs_channel; /* Channel registers. */
+ struct dec_zschannel *zs_chan_a; /* A side registers. */
+ unsigned char read_reg_zero;
+
+ struct dec_serial_hook *hook; /* Hook on this channel. */
+ int tty_break; /* Set on BREAK condition. */
+ int is_cons; /* Is this our console. */
+ int tx_active; /* Char is being xmitted. */
+ int tx_stopped; /* Output is suspended. */
+
+ /*
+ * We need to know the current clock divisor
+ * to read the bps rate the chip has currently loaded.
*/
- unsigned char clk_divisor; /* May be 1, 16, 32, or 64 */
- int zs_baud;
+ int clk_divisor; /* May be 1, 16, 32, or 64. */
+ int zs_baud;
- char change_needed;
+ char change_needed;
int magic;
int baud_base;
int port;
int irq;
- int flags; /* defined in tty.h */
- int type; /* UART type */
+ int flags; /* Defined in tty.h. */
+ int type; /* UART type. */
struct tty_struct *tty;
int read_status_mask;
int ignore_status_mask;
int timeout;
int xmit_fifo_size;
int custom_divisor;
- int x_char; /* xon/xoff character */
+ int x_char; /* XON/XOFF character. */
int close_delay;
unsigned short closing_wait;
unsigned short closing_wait2;
unsigned long event;
unsigned long last_active;
int line;
- int count; /* # of fd on device */
- int blocked_open; /* # of blocked opens */
+ int count; /* # of fds on device. */
+ int blocked_open; /* # of blocked opens. */
unsigned char *xmit_buf;
int xmit_head;
int xmit_tail;
#define RxINT_DISAB 0 /* Rx Int Disable */
#define RxINT_FCERR 0x8 /* Rx Int on First Character Only or Error */
-#define INT_ALL_Rx 0x10 /* Int on all Rx Characters or error */
-#define INT_ERR_Rx 0x18 /* Int on error only */
+#define RxINT_ALL 0x10 /* Int on all Rx Characters or error */
+#define RxINT_ERR 0x18 /* Int on error only */
+#define RxINT_MASK 0x18
#define WT_RDY_RT 0x20 /* Wait/Ready on R/T */
#define WT_FN_RDYFN 0x40 /* Wait/FN/Ready FN */
link->next = dev_list;
dev_list = link;
client_reg.dev_info = &dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
CS_EVENT_RESET_PHYSICAL | CS_EVENT_CARD_RESET |
static void ixj_pcmcia_exit(void)
{
pcmcia_unregister_driver(&ixj_driver);
-
- /* XXX: this really needs to move into generic code.. */
- while (dev_list != NULL)
- ixj_detach(dev_list);
+ BUG_ON(dev_list != NULL);
}
module_init(ixj_pcmcia_init);
+To understand all of the Linux-USB framework, you'll use these resources:
+
+ * This source code. This is necessarily an evolving work, and
+ includes kerneldoc that should help you get a current overview.
+ ("make pdfdocs", and then look at "usb.pdf" for host side and
+ "gadget.pdf" for peripheral side.) Also, Documentation/usb has
+ more information.
+
+ * The USB 2.0 specification (from www.usb.org), with supplements
+ such as those for USB OTG and the various device classes.
+ The USB specification has a good overview chapter, and USB
+ peripherals conform to the widely known "Chapter 9".
+
+ * Chip specifications for USB controllers. Examples include
+ host controllers (on PCs, servers, and more); peripheral
+ controllers (in devices with Linux firmware, like printers or
+ cell phones); and hard-wired peripherals like Ethernet adapters.
+
+ * Specifications for other protocols implemented by USB peripheral
+ functions. Some are vendor-specific; others are vendor-neutral
+ but just standardized outside of the www.usb.org team.
+
Here is a list of what each subdirectory here is, and what is contained in
them.
core/ - This is for the core USB host code, including the
- usbfs files.
+ usbfs files and the hub class driver ("khubd").
-host/ - This is for all of the USB host drivers. This
- includes UHCI, OHCI, EHCI, and any others that might
- be created in the future.
+host/ - This is for USB host controller drivers. This
+ includes UHCI, OHCI, EHCI, and others that might
+ be used with more specialized "embedded" systems.
-gadget/ - This is for all of the USB device controller drivers.
+gadget/ - This is for USB peripheral controller drivers and
+ the various gadget drivers which talk to them.
Individual USB driver directories. A new driver should be added to the
#include "usb_atm.h"
-/*
-#define DEBUG
-#define VERBOSE_DEBUG
-*/
-
-#if !defined (DEBUG) && defined (CONFIG_USB_DEBUG)
-# define DEBUG
-#endif
-
-#include <linux/usb.h>
-
#if defined(CONFIG_FW_LOADER) || defined(CONFIG_FW_LOADER_MODULE)
# define USE_FW_LOADER
#endif
-#ifdef VERBOSE_DEBUG
-static int udsl_print_packet(const unsigned char *data, int len);
-#define PACKETDEBUG(arg...) udsl_print_packet (arg)
-#define vdbg(arg...) dbg (arg)
-#else
-#define PACKETDEBUG(arg...)
-#define vdbg(arg...)
-#endif
-
#define DRIVER_AUTHOR "Johan Verrept, Duncan Sands <duncan.sands@free.fr>"
#define DRIVER_VERSION "1.8"
#define DRIVER_DESC "Alcatel SpeedTouch USB driver version " DRIVER_VERSION
dbg("speedtch_upload_firmware: write BLOCK1 to modem failed (%d)!", ret);
goto fail_release;
}
- dbg("speedtch_upload_firmware: BLOCK1 uploaded (%d bytes)", fw1->size);
+ dbg("speedtch_upload_firmware: BLOCK1 uploaded (%zu bytes)", fw1->size);
}
/* USB led blinking green, ADSL led off */
goto fail_release;
}
}
- dbg("speedtch_upload_firmware: BLOCK3 uploaded (%d bytes)", fw2->size);
+ dbg("speedtch_upload_firmware: BLOCK3 uploaded (%zu bytes)", fw2->size);
/* USB led static green, ADSL led static red */
const struct firmware **fw_p)
{
char buf[24];
- const u16 bcdDevice = instance->u.usb_dev->descriptor.bcdDevice;
+ const u16 bcdDevice = le16_to_cpu(instance->u.usb_dev->descriptor.bcdDevice);
const u8 major_revision = bcdDevice >> 8;
const u8 minor_revision = bcdDevice & 0xff;
int ret, i;
char buf7[SIZE_7];
- dbg("speedtch_usb_probe: trying device with vendor=0x%x, product=0x%x, ifnum %d", dev->descriptor.idVendor, dev->descriptor.idProduct, ifnum);
+ dbg("speedtch_usb_probe: trying device with vendor=0x%x, product=0x%x, ifnum %d",
+ le16_to_cpu(dev->descriptor.idVendor),
+ le16_to_cpu(dev->descriptor.idProduct), ifnum);
- if ((dev->descriptor.bDeviceClass != USB_CLASS_VENDOR_SPEC) ||
- (dev->descriptor.idVendor != SPEEDTOUCH_VENDORID) ||
- (dev->descriptor.idProduct != SPEEDTOUCH_PRODUCTID) || (ifnum != 1))
+ if ((dev->descriptor.bDeviceClass != USB_CLASS_VENDOR_SPEC) ||
+ (ifnum != 1))
return -ENODEV;
dbg("speedtch_usb_probe: device accepted");
#include "usb_atm.h"
-/*
-#define DEBUG
-#define VERBOSE_DEBUG
-*/
-
-#if !defined (DEBUG) && defined (CONFIG_USB_DEBUG)
-# define DEBUG
-#endif
-
-#include <linux/usb.h>
-
-#ifdef DEBUG
-#define UDSL_ASSERT(x) BUG_ON(!(x))
-#else
-#define UDSL_ASSERT(x) do { if (!(x)) warn("failed assertion '" #x "' at line %d", __LINE__); } while(0)
-#endif
-
#ifdef VERBOSE_DEBUG
static int udsl_print_packet(const unsigned char *data, int len);
#define PACKETDEBUG(arg...) udsl_print_packet (arg)
*
******************************************************************************/
+#include <linux/config.h>
#include <linux/list.h>
-#include <linux/usb.h>
#include <linux/kref.h>
#include <linux/atm.h>
#include <linux/atmdev.h>
#include <asm/semaphore.h>
+/*
+#define DEBUG
+#define VERBOSE_DEBUG
+*/
+
+#if !defined (DEBUG) && defined (CONFIG_USB_DEBUG)
+# define DEBUG
+#endif
+
+#include <linux/usb.h>
+
+#ifdef DEBUG
+#define UDSL_ASSERT(x) BUG_ON(!(x))
+#else
+#define UDSL_ASSERT(x) do { if (!(x)) warn("failed assertion '" #x "' at line %d", __LINE__); } while(0)
+#endif
+
#define UDSL_MAX_RCV_URBS 4
#define UDSL_MAX_SND_URBS 4
#define UDSL_MAX_RCV_BUFS 8
#define CDC_DATA_INTERFACE_TYPE 0x0a
-
+/* constants describing various quirks and errors */
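+/* NO_UNION_NORMAL presumably marks devices that ship no union descriptor
+   and should be handled with the driver's default interface layout. */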
+#define NO_UNION_NORMAL 1
struct usb_midi_device {
char *deviceName;
- int idVendor;
- int idProduct;
+ u16 idVendor;
+ u16 idProduct;
int interface;
int altSetting; /* -1: auto detect */
return 1;
/* HNP test device is _never_ targeted (see OTG spec 6.6.6) */
- if (dev->descriptor.idVendor == 0x1a0a
- && dev->descriptor.idProduct == 0xbadd)
+ if ((le16_to_cpu(dev->descriptor.idVendor) == 0x1a0a &&
+ le16_to_cpu(dev->descriptor.idProduct) == 0xbadd))
return 0;
/* NOTE: can't use usb_match_id() since interface caches
*/
for (id = whitelist_table; id->match_flags; id++) {
if ((id->match_flags & USB_DEVICE_ID_MATCH_VENDOR) &&
- id->idVendor != dev->descriptor.idVendor)
+ id->idVendor != le16_to_cpu(dev->descriptor.idVendor))
continue;
if ((id->match_flags & USB_DEVICE_ID_MATCH_PRODUCT) &&
- id->idProduct != dev->descriptor.idProduct)
+ id->idProduct != le16_to_cpu(dev->descriptor.idProduct))
continue;
/* No need to test id->bcdDevice_lo != 0, since 0 is never
greater than any unsigned number. */
if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_LO) &&
- (id->bcdDevice_lo > dev->descriptor.bcdDevice))
+ (id->bcdDevice_lo > le16_to_cpu(dev->descriptor.bcdDevice)))
continue;
if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_HI) &&
- (id->bcdDevice_hi < dev->descriptor.bcdDevice))
+ (id->bcdDevice_hi < le16_to_cpu(dev->descriptor.bcdDevice)))
continue;
if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_CLASS) &&
/* OTG MESSAGE: report errors here, customize to match your product */
dev_err(&dev->dev, "device v%04x p%04x is not supported\n",
- dev->descriptor.idVendor,
- dev->descriptor.idProduct);
+ le16_to_cpu(dev->descriptor.idVendor),
+ le16_to_cpu(dev->descriptor.idProduct));
#ifdef CONFIG_USB_OTG_WHITELIST
return 0;
#else
show_version (struct device *dev, char *buf)
{
struct usb_device *udev;
+ u16 bcdUSB;
- udev = to_usb_device (dev);
- return sprintf (buf, "%2x.%02x\n", udev->descriptor.bcdUSB >> 8,
- udev->descriptor.bcdUSB & 0xff);
+ udev = to_usb_device(dev);
+ bcdUSB = le16_to_cpu(udev->descriptor.bcdUSB);
+ return sprintf(buf, "%2x.%02x\n", bcdUSB >> 8, bcdUSB & 0xff);
}
static DEVICE_ATTR(version, S_IRUGO, show_version, NULL);
static DEVICE_ATTR(maxchild, S_IRUGO, show_maxchild, NULL);
/* Descriptor fields */
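+/*
+ * 16-bit descriptor fields (idVendor, idProduct, bcdDevice) are stored
+ * little-endian in struct usb_device_descriptor, so convert them with
+ * le16_to_cpu() before formatting; they show up in sysfs as e.g.
+ * /sys/bus/usb/devices/<dev>/idVendor.
+ */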
+#define usb_descriptor_attr_le16(field, format_string) \
+static ssize_t \
+show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ \
+ udev = to_usb_device (dev); \
+ return sprintf (buf, format_string, \
+ le16_to_cpu(udev->descriptor.field)); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_descriptor_attr_le16(idVendor, "%04x\n")
+usb_descriptor_attr_le16(idProduct, "%04x\n")
+usb_descriptor_attr_le16(bcdDevice, "%04x\n")
+
#define usb_descriptor_attr(field, format_string) \
static ssize_t \
show_##field (struct device *dev, char *buf) \
} \
static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
-usb_descriptor_attr (idVendor, "%04x\n")
-usb_descriptor_attr (idProduct, "%04x\n")
-usb_descriptor_attr (bcdDevice, "%04x\n")
usb_descriptor_attr (bDeviceClass, "%02x\n")
usb_descriptor_attr (bDeviceSubClass, "%02x\n")
usb_descriptor_attr (bDeviceProtocol, "%02x\n")
/*
- * omap_udc.c -- for OMAP 1610 udc, with OTG support
+ * omap_udc.c -- for OMAP full speed udc; most chips support OTG.
*
* Copyright (C) 2004 Texas Instruments, Inc.
* Copyright (C) 2004 David Brownell
{
0x12, /* __u8 bLength; */
0x01, /* __u8 bDescriptorType; Device */
- 0x00, /* __u16 bcdUSB; v1.0 */
+ 0x00, /* __le16 bcdUSB; v1.0 */
0x01,
0x09, /* __u8 bDeviceClass; HUB_CLASSCODE */
0x00, /* __u8 bDeviceSubClass; */
0x00, /* __u8 bDeviceProtocol; */
0x08, /* __u8 bMaxPacketSize0; 8 Bytes */
- 0x00, /* __u16 idVendor; */
+ 0x00, /* __le16 idVendor; */
0x00,
- 0x00, /* __u16 idProduct; */
+ 0x00, /* __le16 idProduct; */
0x00,
- 0x00, /* __u16 bcdDevice; */
+ 0x00, /* __le16 bcdDevice; */
0x00,
0x00, /* __u8 iManufacturer; */
0x02, /* __u8 iProduct; */
{
0x09, /* __u8 bLength; */
0x02, /* __u8 bDescriptorType; Configuration */
- 0x19, /* __u16 wTotalLength; */
+ 0x19, /* __le16 wTotalLength; */
0x00,
0x01, /* __u8 bNumInterfaces; */
0x01, /* __u8 bConfigurationValue; */
0x05, /* __u8 ep_bDescriptorType; Endpoint */
0x81, /* __u8 ep_bEndpointAddress; IN Endpoint 1 */
0x03, /* __u8 ep_bmAttributes; Interrupt */
- 0x08, /* __u16 ep_wMaxPacketSize; 8 Bytes */
+ 0x08, /* __le16 ep_wMaxPacketSize; 8 Bytes */
0x00,
0xff /* __u8 ep_bInterval; 255 ms */
};
static int etrax_usb_submit_urb(struct urb *urb, int mem_flags);
static int etrax_usb_unlink_urb(struct urb *urb, int status);
static int etrax_usb_get_frame_number(struct usb_device *usb_dev);
-static int etrax_usb_allocate_dev(struct usb_device *usb_dev);
-static int etrax_usb_deallocate_dev(struct usb_device *usb_dev);
static irqreturn_t etrax_usb_tx_interrupt(int irq, void *vhc, struct pt_regs *regs);
static irqreturn_t etrax_usb_rx_interrupt(int irq, void *vhc, struct pt_regs *regs);
static struct usb_operations etrax_usb_device_operations =
{
- .allocate = etrax_usb_allocate_dev,
- .deallocate = etrax_usb_deallocate_dev,
.get_frame_number = etrax_usb_get_frame_number,
.submit_urb = etrax_usb_submit_urb,
.unlink_urb = etrax_usb_unlink_urb,
return (*R_USB_FM_NUMBER & 0x7ff);
}
-static int etrax_usb_allocate_dev(struct usb_device *usb_dev)
-{
- DBFENTER;
- DBFEXIT;
- return 0;
-}
-
-static int etrax_usb_deallocate_dev(struct usb_device *usb_dev)
-{
- DBFENTER;
- DBFEXIT;
- return 0;
-}
-
static irqreturn_t etrax_usb_tx_interrupt(int irq, void *vhc, struct pt_regs *regs)
{
DBFENTER;
usb_rh->speed = USB_SPEED_FULL;
usb_rh->devnum = 1;
hc->bus->devnum_next = 2;
- usb_rh->epmaxpacketin[0] = usb_rh->epmaxpacketout[0] = 64;
+ usb_rh->ep0.desc.wMaxPacketSize = __const_cpu_to_le16(64);
usb_get_device_descriptor(usb_rh, USB_DT_DEVICE_SIZE);
usb_new_device(usb_rh);
--- /dev/null
+/*
+ * OHCI HCD (Host Controller Driver) for USB.
+ *
+ * (C) Copyright 1999 Roman Weissgaerber <weissg@vienna.at>
+ * (C) Copyright 2000-2002 David Brownell <dbrownell@users.sourceforge.net>
+ * (C) Copyright 2002 Hewlett-Packard Company
+ *
+ * Bus Glue for AMD Alchemy Au1xxx
+ *
+ * Written by Christopher Hoover <ch@hpl.hp.com>
+ * Based on fragments of previous driver by Rusell King et al.
+ *
+ * Modified for LH7A404 from ohci-sa1111.c
+ * by Durgesh Pattamatta <pattamattad@sharpsec.com>
+ * Modified for AMD Alchemy Au1xxx
+ * by Matt Porter <mporter@kernel.crashing.org>
+ *
+ * This file is licenced under the GPL.
+ */
+
+#include <asm/mach-au1x00/au1000.h>
+
+#define USBH_ENABLE_BE (1<<0)
+#define USBH_ENABLE_C (1<<1)
+#define USBH_ENABLE_E (1<<2)
+#define USBH_ENABLE_CE (1<<3)
+#define USBH_ENABLE_RD (1<<4)
+
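+/*
+ * USB_HOST_CONFIG bits as used below: CE is written first to bring the
+ * block up, USBH_ENABLE_INIT is the normal operating value (with BE
+ * added only on big-endian builds), and RD is polled to detect reset
+ * completion; see au1xxx_start_hc().  Consult the Au1xxx manuals for
+ * the precise bit semantics.
+ */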
+#ifdef __LITTLE_ENDIAN
+#define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C)
+#elif __BIG_ENDIAN
+#define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C | USBH_ENABLE_BE)
+#else
+#error byte order not defined
+#endif
+
+extern int usb_disabled(void);
+
+/*-------------------------------------------------------------------------*/
+
+static void au1xxx_start_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": starting Au1xxx OHCI USB Controller\n");
+
+ /* enable host controller */
+ au_writel(USBH_ENABLE_CE, USB_HOST_CONFIG);
+ udelay(1000);
+ au_writel(USBH_ENABLE_INIT, USB_HOST_CONFIG);
+ udelay(1000);
+
+ /* wait for reset complete (read register twice; see au1500 errata) */
+ while (au_readl(USB_HOST_CONFIG),
+ !(au_readl(USB_HOST_CONFIG) & USBH_ENABLE_RD))
+ udelay(1000);
+
+ printk(KERN_DEBUG __FILE__
+	       ": Clock to USB host has been enabled\n");
+}
+
+static void au1xxx_stop_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": stopping Au1xxx OHCI USB Controller\n");
+
+ /* Disable clock */
+	au_writel(au_readl(USB_HOST_CONFIG) & ~USBH_ENABLE_CE, USB_HOST_CONFIG);
+}
+
+
+/*-------------------------------------------------------------------------*/
+
+
+static irqreturn_t usb_hcd_au1xxx_hcim_irq (int irq, void *__hcd,
+ struct pt_regs * r)
+{
+ struct usb_hcd *hcd = __hcd;
+
+ return usb_hcd_irq(irq, hcd, r);
+}
+
+/*-------------------------------------------------------------------------*/
+
+void usb_hcd_au1xxx_remove (struct usb_hcd *, struct platform_device *);
+
+/* configure so an HC device and id are always provided */
+/* always called with process context; sleeping is OK */
+
+
+/**
+ * usb_hcd_au1xxx_probe - initialize Au1xxx-based HCDs
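+ * @driver: host controller driver to bind to this HCD
+ * @hcd_out: on success, set to the newly created HCD
+ * @dev: platform device for this host controller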
+ * Context: !in_interrupt()
+ *
+ * Allocates basic resources for this USB host controller, and
+ * then invokes the start() method for the HCD associated with it
+ * through the hotplug entry's driver_data.
+ *
+ */
+int usb_hcd_au1xxx_probe (const struct hc_driver *driver,
+ struct usb_hcd **hcd_out,
+ struct platform_device *dev)
+{
+ int retval;
+	struct usb_hcd *hcd = NULL;
+
+ unsigned int *addr = NULL;
+
+ if (!request_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1, hcd_name)) {
+ pr_debug("request_mem_region failed");
+ return -EBUSY;
+ }
+
+ au1xxx_start_hc(dev);
+
+ addr = ioremap(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+ if (!addr) {
+ pr_debug("ioremap failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+ if(dev->resource[1].flags != IORESOURCE_IRQ) {
+ pr_debug ("resource[1] is not IORESOURCE_IRQ");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+ hcd = usb_create_hcd(driver);
+ if (hcd == NULL) {
+ pr_debug ("usb_create_hcd failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+ ohci_hcd_init(hcd_to_ohci(hcd));
+
+ hcd->irq = dev->resource[1].start;
+ hcd->regs = addr;
+ hcd->self.controller = &dev->dev;
+
+ retval = hcd_buffer_create (hcd);
+ if (retval != 0) {
+ pr_debug ("pool alloc fail");
+ goto err2;
+ }
+
+ retval = request_irq (hcd->irq, usb_hcd_au1xxx_hcim_irq, SA_INTERRUPT,
+ hcd->driver->description, hcd);
+ if (retval != 0) {
+ pr_debug("request_irq failed");
+ retval = -EBUSY;
+ goto err3;
+ }
+
+ pr_debug ("%s (Au1xxx) at 0x%p, irq %d",
+ hcd->driver->description, hcd->regs, hcd->irq);
+
+ hcd->self.bus_name = "au1xxx";
+
+ usb_register_bus (&hcd->self);
+
+ if ((retval = driver->start (hcd)) < 0)
+ {
+ usb_hcd_au1xxx_remove(hcd, dev);
+ printk("bad driver->start\n");
+ return retval;
+ }
+
+ *hcd_out = hcd;
+ return 0;
+
+ err3:
+ hcd_buffer_destroy (hcd);
+ err2:
+ usb_put_hcd(hcd);
+ err1:
+ au1xxx_stop_hc(dev);
+ release_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+ return retval;
+}
+
+
+/* may be called without controller electrically present */
+/* may be called with controller, bus, and devices active */
+
+/**
+ * usb_hcd_au1xxx_remove - shutdown processing for Au1xxx-based HCDs
+ * @hcd: USB Host Controller being removed
+ * @dev: platform device for this host controller
+ * Context: !in_interrupt()
+ *
+ * Reverses the effect of usb_hcd_au1xxx_probe(), first invoking
+ * the HCD's stop() method. It is always called from a thread
+ * context, normally "rmmod", "apmd", or something similar.
+ *
+ */
+void usb_hcd_au1xxx_remove (struct usb_hcd *hcd, struct platform_device *dev)
+{
+ pr_debug ("remove: %s, state %x", hcd->self.bus_name, hcd->state);
+
+ if (in_interrupt ())
+ BUG ();
+
+ hcd->state = USB_STATE_QUIESCING;
+
+ pr_debug ("%s: roothub graceful disconnect", hcd->self.bus_name);
+ usb_disconnect (&hcd->self.root_hub);
+
+ hcd->driver->stop (hcd);
+ hcd->state = USB_STATE_HALT;
+
+ free_irq (hcd->irq, hcd);
+ hcd_buffer_destroy (hcd);
+
+ usb_deregister_bus (&hcd->self);
+
+ au1xxx_stop_hc(dev);
+ release_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+}
+
+/*-------------------------------------------------------------------------*/
+
+static int __devinit
+ohci_au1xxx_start (struct usb_hcd *hcd)
+{
+ struct ohci_hcd *ohci = hcd_to_ohci (hcd);
+ int ret;
+
+ ohci_dbg (ohci, "ohci_au1xxx_start, ohci:%p", ohci);
+
+ if ((ret = ohci_init (ohci)) < 0)
+ return ret;
+
+ if ((ret = ohci_run (ohci)) < 0) {
+ err ("can't start %s", hcd->self.bus_name);
+ ohci_stop (hcd);
+ return ret;
+ }
+
+ return 0;
+}
+
+/*-------------------------------------------------------------------------*/
+
+static const struct hc_driver ohci_au1xxx_hc_driver = {
+ .description = hcd_name,
+ .product_desc = "Au1xxx OHCI",
+ .hcd_priv_size = sizeof(struct ohci_hcd),
+
+ /*
+ * generic hardware linkage
+ */
+ .irq = ohci_irq,
+ .flags = HCD_USB11,
+
+ /*
+ * basic lifecycle operations
+ */
+ .start = ohci_au1xxx_start,
+#ifdef CONFIG_PM
+ /* suspend: ohci_au1xxx_suspend, -- tbd */
+ /* resume: ohci_au1xxx_resume, -- tbd */
+#endif /*CONFIG_PM*/
+ .stop = ohci_stop,
+
+ /*
+ * managing i/o requests and associated device resources
+ */
+ .urb_enqueue = ohci_urb_enqueue,
+ .urb_dequeue = ohci_urb_dequeue,
+ .endpoint_disable = ohci_endpoint_disable,
+
+ /*
+ * scheduling support
+ */
+ .get_frame_number = ohci_get_frame,
+
+ /*
+ * root hub support
+ */
+ .hub_status_data = ohci_hub_status_data,
+ .hub_control = ohci_hub_control,
+};
+
+/*-------------------------------------------------------------------------*/
+
+static int ohci_hcd_au1xxx_drv_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = NULL;
+ int ret;
+
+ pr_debug ("In ohci_hcd_au1xxx_drv_probe");
+
+ if (usb_disabled())
+ return -ENODEV;
+
+ ret = usb_hcd_au1xxx_probe(&ohci_au1xxx_hc_driver, &hcd, pdev);
+
+ if (ret == 0)
+ dev_set_drvdata(dev, hcd);
+
+ return ret;
+}
+
+static int ohci_hcd_au1xxx_drv_remove(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ usb_hcd_au1xxx_remove(hcd, pdev);
+ dev_set_drvdata(dev, NULL);
+ return 0;
+}
+ /*TBD*/
+/*static int ohci_hcd_au1xxx_drv_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ return 0;
+}
+static int ohci_hcd_au1xxx_drv_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ return 0;
+}
+*/
+
+static struct device_driver ohci_hcd_au1xxx_driver = {
+ .name = "au1xxx-ohci",
+ .bus = &platform_bus_type,
+ .probe = ohci_hcd_au1xxx_drv_probe,
+ .remove = ohci_hcd_au1xxx_drv_remove,
+ /*.suspend = ohci_hcd_au1xxx_drv_suspend, */
+ /*.resume = ohci_hcd_au1xxx_drv_resume, */
+};
+
+static int __init ohci_hcd_au1xxx_init (void)
+{
+ pr_debug (DRIVER_INFO " (Au1xxx)");
+ pr_debug ("block sizes: ed %d td %d\n",
+ sizeof (struct ed), sizeof (struct td));
+
+ return driver_register(&ohci_hcd_au1xxx_driver);
+}
+
+static void __exit ohci_hcd_au1xxx_cleanup (void)
+{
+ driver_unregister(&ohci_hcd_au1xxx_driver);
+}
+
+module_init (ohci_hcd_au1xxx_init);
+module_exit (ohci_hcd_au1xxx_cleanup);
retval = -ENOMEM;
goto err1;
}
-
- hcd = driver->hcd_alloc ();
- if (hcd == NULL){
- pr_debug ("hcd_alloc failed");
+ if(dev->resource[1].flags != IORESOURCE_IRQ){
+ pr_debug ("resource[1] is not IORESOURCE_IRQ");
retval = -ENOMEM;
goto err1;
}
+
- if(dev->resource[1].flags != IORESOURCE_IRQ){
- pr_debug ("resource[1] is not IORESOURCE_IRQ");
+ hcd = usb_create_hcd (driver);
+ if (hcd == NULL){
+ pr_debug ("hcd_alloc failed");
retval = -ENOMEM;
goto err1;
}
+ ohci_hcd_init(hcd_to_ohci(hcd));
- hcd->driver = (struct hc_driver *) driver;
- hcd->description = driver->description;
hcd->irq = dev->resource[1].start;
hcd->regs = addr;
hcd->self.controller = &dev->dev;
retval = hcd_buffer_create (hcd);
if (retval != 0) {
pr_debug ("pool alloc fail");
- goto err1;
+ goto err2;
}
retval = request_irq (hcd->irq, usb_hcd_lh7a404_hcim_irq, SA_INTERRUPT,
- hcd->description, hcd);
+ hcd->driver->description, hcd);
if (retval != 0) {
pr_debug("request_irq failed");
retval = -EBUSY;
- goto err2;
+ goto err3;
}
pr_debug ("%s (LH7A404) at 0x%p, irq %d",
- hcd->description, hcd->regs, hcd->irq);
+ hcd->driver->description, hcd->regs, hcd->irq);
- usb_bus_init (&hcd->self);
- hcd->self.op = &usb_hcd_operations;
- hcd->self.release = &usb_hcd_release;
- hcd->self.hcpriv = (void *) hcd;
hcd->self.bus_name = "lh7a404";
- hcd->product_desc = "LH7A404 OHCI";
-
- INIT_LIST_HEAD (&hcd->dev_list);
-
usb_register_bus (&hcd->self);
if ((retval = driver->start (hcd)) < 0)
*hcd_out = hcd;
return 0;
- err2:
+ err3:
hcd_buffer_destroy (hcd);
+ err2:
+ usb_put_hcd(hcd);
err1:
- kfree(hcd);
lh7a404_stop_hc(dev);
release_mem_region(dev->resource[0].start,
dev->resource[0].end
return ret;
if ((ret = ohci_run (ohci)) < 0) {
- err ("can't start %s", ohci->hcd.self.bus_name);
+ err ("can't start %s", hcd->self.bus_name);
ohci_stop (hcd);
return ret;
}
static const struct hc_driver ohci_lh7a404_hc_driver = {
.description = hcd_name,
+ .product_desc = "LH7A404 OHCI",
+ .hcd_priv_size = sizeof(struct ohci_hcd),
/*
* generic hardware linkage
#endif /*CONFIG_PM*/
.stop = ohci_stop,
- /*
- * memory lifecycle (except per-request)
- */
- .hcd_alloc = ohci_hcd_alloc,
-
/*
* managing i/o requests and associated device resources
*/
goto err1;
}
- hcd = driver->hcd_alloc ();
- if (hcd == NULL){
- pr_debug ("hcd_alloc failed");
+ if(dev->resource[1].flags != IORESOURCE_IRQ){
+ pr_debug ("resource[1] is not IORESOURCE_IRQ");
retval = -ENOMEM;
goto err1;
}
- if(dev->resource[1].flags != IORESOURCE_IRQ){
- pr_debug ("resource[1] is not IORESOURCE_IRQ");
+ hcd = usb_create_hcd (driver);
+ if (hcd == NULL){
+ pr_debug ("hcd_alloc failed");
retval = -ENOMEM;
goto err1;
}
+ ohci_hcd_init(hcd_to_ohci(hcd));
- hcd->driver = (struct hc_driver *) driver;
- hcd->description = driver->description;
hcd->irq = dev->resource[1].start;
hcd->regs = addr;
hcd->self.controller = &dev->dev;
retval = hcd_buffer_create (hcd);
if (retval != 0) {
pr_debug ("pool alloc fail");
- goto err1;
+ goto err2;
}
retval = request_irq (hcd->irq, usb_hcd_irq, SA_INTERRUPT,
- hcd->description, hcd);
+ hcd->driver->description, hcd);
if (retval != 0) {
pr_debug("request_irq(%d) failed with retval %d\n",hcd->irq,retval);
retval = -EBUSY;
- goto err2;
+ goto err3;
}
pr_debug ("%s (pxa27x) at 0x%p, irq %d",
- hcd->description, hcd->regs, hcd->irq);
+ hcd->driver->description, hcd->regs, hcd->irq);
- usb_bus_init (&hcd->self);
- hcd->self.op = &usb_hcd_operations;
- hcd->self.release = &usb_hcd_release;
- hcd->self.hcpriv = (void *) hcd;
hcd->self.bus_name = "pxa27x";
- hcd->product_desc = "PXA27x OHCI";
-
- INIT_LIST_HEAD (&hcd->dev_list);
-
usb_register_bus (&hcd->self);
if ((retval = driver->start (hcd)) < 0) {
*hcd_out = hcd;
return 0;
- err2:
+ err3:
hcd_buffer_destroy (hcd);
+ err2:
+ usb_put_hcd(hcd);
err1:
- kfree(hcd);
pxa27x_stop_hc(dev);
release_mem_region(dev->resource[0].start,
dev->resource[0].end
return ret;
if ((ret = ohci_run (ohci)) < 0) {
- err ("can't start %s", ohci->hcd.self.bus_name);
+ err ("can't start %s", hcd->self.bus_name);
ohci_stop (hcd);
return ret;
}
static const struct hc_driver ohci_pxa27x_hc_driver = {
.description = hcd_name,
+ .product_desc = "PXA27x OHCI",
+ .hcd_priv_size = sizeof(struct ohci_hcd),
/*
* generic hardware linkage
.start = ohci_pxa27x_start,
.stop = ohci_stop,
- /*
- * memory lifecycle (except per-request)
- */
- .hcd_alloc = ohci_hcd_alloc,
-
/*
* managing i/o requests and associated device resources
*/
MODULE_DESCRIPTION("SL811HS USB Host Controller Driver");
MODULE_LICENSE("GPL");
-#define DRIVER_VERSION "06 Dec 2004"
+#define DRIVER_VERSION "15 Dec 2004"
#ifndef DEBUG
/*-------------------------------------------------------------------------*/
-static irqreturn_t sl811h_irq(int irq, void *_sl811, struct pt_regs *regs);
+static irqreturn_t sl811h_irq(int irq, void *_hcd, struct pt_regs *regs);
static void port_power(struct sl811 *sl811, int is_on)
{
+ struct usb_hcd *hcd = sl811_to_hcd(sl811);
+
/* hub is inactive unless the port is powered */
if (is_on) {
if (sl811->port1 & (1 << USB_PORT_FEAT_POWER))
sl811->port1 = (1 << USB_PORT_FEAT_POWER);
sl811->irq_enable = SL11H_INTMASK_INSRMV;
- sl811->hcd.self.controller->power.power_state = PM_SUSPEND_ON;
+ hcd->self.controller->power.power_state = PM_SUSPEND_ON;
} else {
sl811->port1 = 0;
sl811->irq_enable = 0;
- sl811->hcd.state = USB_STATE_HALT;
- sl811->hcd.self.controller->power.power_state = PM_SUSPEND_DISK;
+ hcd->state = USB_STATE_HALT;
+ hcd->self.controller->power.power_state = PM_SUSPEND_DISK;
}
sl811->ctrl1 = 0;
sl811_write(sl811, SL11H_IRQ_ENABLE, 0);
if (sl811->board && sl811->board->port_power) {
/* switch VBUS, at 500mA unless hub power budget gets set */
DBG("power %s\n", is_on ? "on" : "off");
- sl811->board->port_power(sl811->hcd.self.controller, is_on);
+ sl811->board->port_power(hcd->self.controller, is_on);
}
/* reset as thoroughly as we can */
if (sl811->board && sl811->board->reset)
- sl811->board->reset(sl811->hcd.self.controller);
+ sl811->board->reset(hcd->self.controller);
sl811_write(sl811, SL11H_IRQ_ENABLE, 0);
sl811_write(sl811, SL11H_CTLREG1, sl811->ctrl1);
static struct sl811h_ep *start(struct sl811 *sl811, u8 bank)
{
struct sl811h_ep *ep;
- struct sl811h_req *req;
struct urb *urb;
int fclock;
u8 control;
struct sl811h_ep, schedule);
}
- if (unlikely(list_empty(&ep->queue))) {
+ if (unlikely(list_empty(&ep->hep->urb_list))) {
DBG("empty %p queue?\n", ep);
return NULL;
}
- req = container_of(ep->queue.next, struct sl811h_req, queue);
- urb = req->urb;
+ urb = container_of(ep->hep->urb_list.next, struct urb, urb_list);
control = ep->defctrl;
/* if this frame doesn't have enough time left to transfer this
static void finish_request(
struct sl811 *sl811,
struct sl811h_ep *ep,
- struct sl811h_req *req,
+ struct urb *urb,
struct pt_regs *regs,
int status
) __releases(sl811->lock) __acquires(sl811->lock)
{
unsigned i;
- struct urb *urb = req->urb;
-
- list_del(&req->queue);
- kfree(req);
- urb->hcpriv = NULL;
if (usb_pipecontrol(urb->pipe))
ep->nextpid = USB_PID_SETUP;
spin_unlock(&urb->lock);
spin_unlock(&sl811->lock);
- usb_hcd_giveback_urb(&sl811->hcd, urb, regs);
+ usb_hcd_giveback_urb(sl811_to_hcd(sl811), urb, regs);
spin_lock(&sl811->lock);
/* leave active endpoints in the schedule */
- if (!list_empty(&ep->queue))
+ if (!list_empty(&ep->hep->urb_list))
return;
/* async deschedule? */
}
ep->branch = PERIODIC_SIZE;
sl811->periodic_count--;
- hcd_to_bus(&sl811->hcd)->bandwidth_allocated
+ sl811_to_hcd(sl811)->self.bandwidth_allocated
-= ep->load / ep->period;
if (ep == sl811->next_periodic)
sl811->next_periodic = ep->next;
done(struct sl811 *sl811, struct sl811h_ep *ep, u8 bank, struct pt_regs *regs)
{
u8 status;
- struct sl811h_req *req;
struct urb *urb;
int urbstat = -EINPROGRESS;
status = sl811_read(sl811, bank + SL11H_PKTSTATREG);
- req = container_of(ep->queue.next, struct sl811h_req, queue);
- urb = req->urb;
+ urb = container_of(ep->hep->urb_list.next, struct urb, urb_list);
/* we can safely ignore NAKs */
if (status & SL11H_STATMASK_NAK) {
urb->status = urbstat;
spin_unlock(&urb->lock);
- req = NULL;
+ urb = NULL;
ep->nextpid = USB_PID_ACK;
}
break;
bank, status, ep, urbstat);
}
- if ((urbstat != -EINPROGRESS || urb->status != -EINPROGRESS)
- && req)
- finish_request(sl811, ep, req, regs, urbstat);
+ if (urb && (urbstat != -EINPROGRESS || urb->status != -EINPROGRESS))
+ finish_request(sl811, ep, urb, regs, urbstat);
}
static inline u8 checkdone(struct sl811 *sl811)
ctl = sl811_read(sl811, SL811_EP_B(SL11H_HOSTCTLREG));
if (ctl & SL11H_HCTLMASK_ARM)
sl811_write(sl811, SL811_EP_B(SL11H_HOSTCTLREG), 0);
- DBG("%s DONE_B: ctrl %02x sts %02x\n", ctl,
+ DBG("%s DONE_B: ctrl %02x sts %02x\n",
(ctl & SL11H_HCTLMASK_ARM) ? "timeout" : "lost",
ctl,
sl811_read(sl811, SL811_EP_B(SL11H_PKTSTATREG)));
return irqstat;
}
-static irqreturn_t sl811h_irq(int irq, void *_sl811, struct pt_regs *regs)
+static irqreturn_t sl811h_irq(int irq, void *_hcd, struct pt_regs *regs)
{
- struct sl811 *sl811 = _sl811;
+ struct usb_hcd *hcd = _hcd;
+ struct sl811 *sl811 = hcd_to_sl811(hcd);
u8 irqstat;
irqreturn_t ret = IRQ_NONE;
unsigned retries = 5;
if (sl811->active_a) {
sl811_write(sl811, SL811_EP_A(SL11H_HOSTCTLREG), 0);
finish_request(sl811, sl811->active_a,
- container_of(sl811->active_a->queue.next,
- struct sl811h_req, queue),
+ container_of(sl811->active_a->hep->urb_list.next,
+ struct urb, urb_list),
NULL, -ESHUTDOWN);
sl811->active_a = NULL;
}
if (sl811->active_b) {
sl811_write(sl811, SL811_EP_B(SL11H_HOSTCTLREG), 0);
finish_request(sl811, sl811->active_b,
- container_of(sl811->active_b->queue.next,
- struct sl811h_req, queue),
+ container_of(sl811->active_b->hep->urb_list.next,
+ struct urb, urb_list),
NULL, -ESHUTDOWN);
sl811->active_b = NULL;
}
if (sl811->port1 & (1 << USB_PORT_FEAT_ENABLE))
start_transfer(sl811);
ret = IRQ_HANDLED;
- sl811->hcd.saw_irq = 1;
+ hcd->saw_irq = 1;
if (retries--)
goto retry;
}
/*-------------------------------------------------------------------------*/
static int sl811h_urb_enqueue(
- struct usb_hcd *hcd,
- struct urb *urb,
- int mem_flags
+ struct usb_hcd *hcd,
+ struct usb_host_endpoint *hep,
+ struct urb *urb,
+ int mem_flags
) {
struct sl811 *sl811 = hcd_to_sl811(hcd);
struct usb_device *udev = urb->dev;
- struct hcd_dev *hdev = (struct hcd_dev *) udev->hcpriv;
unsigned int pipe = urb->pipe;
int is_out = !usb_pipein(pipe);
int type = usb_pipetype(pipe);
int epnum = usb_pipeendpoint(pipe);
struct sl811h_ep *ep = NULL;
- struct sl811h_req *req;
unsigned long flags;
int i;
int retval = 0;
return -ENOSPC;
#endif
- /* avoid all allocations within spinlocks: request or endpoint */
- urb->hcpriv = req = kmalloc(sizeof *req, mem_flags);
- if (!req)
- return -ENOMEM;
- req->urb = urb;
-
- i = epnum << 1;
- if (i && is_out)
- i |= 1;
- if (!hdev->ep[i])
+ /* avoid all allocations within spinlocks */
+ if (!hep->hcpriv)
ep = kcalloc(1, sizeof *ep, mem_flags);
spin_lock_irqsave(&sl811->lock, flags);
/* don't submit to a dead or disabled port */
if (!(sl811->port1 & (1 << USB_PORT_FEAT_ENABLE))
- || !HCD_IS_RUNNING(sl811->hcd.state)) {
+ || !HCD_IS_RUNNING(hcd->state)) {
retval = -ENODEV;
goto fail;
}
- if (hdev->ep[i]) {
+ if (hep->hcpriv) {
kfree(ep);
- ep = hdev->ep[i];
+ ep = hep->hcpriv;
} else if (!ep) {
retval = -ENOMEM;
goto fail;
} else {
- INIT_LIST_HEAD(&ep->queue);
INIT_LIST_HEAD(&ep->schedule);
ep->udev = usb_get_dev(udev);
ep->epnum = epnum;
break;
}
- hdev->ep[i] = ep;
+ hep->hcpriv = ep;
}
/* maybe put endpoint into schedule */
sl811->load[i] += ep->load;
}
sl811->periodic_count++;
- hcd_to_bus(&sl811->hcd)->bandwidth_allocated
- += ep->load / ep->period;
+ hcd->self.bandwidth_allocated += ep->load / ep->period;
sofirq_on(sl811);
}
spin_lock(&urb->lock);
if (urb->status != -EINPROGRESS) {
spin_unlock(&urb->lock);
- finish_request(sl811, ep, req, NULL, 0);
- req = NULL;
+ finish_request(sl811, ep, urb, NULL, 0);
retval = 0;
goto fail;
}
- list_add_tail(&req->queue, &ep->queue);
+ urb->hcpriv = hep;
spin_unlock(&urb->lock);
start_transfer(sl811);
sl811_write(sl811, SL11H_IRQ_ENABLE, sl811->irq_enable);
fail:
spin_unlock_irqrestore(&sl811->lock, flags);
- if (retval)
- kfree(req);
return retval;
}
static int sl811h_urb_dequeue(struct usb_hcd *hcd, struct urb *urb)
{
struct sl811 *sl811 = hcd_to_sl811(hcd);
- struct usb_device *udev = urb->dev;
- struct hcd_dev *hdev = (struct hcd_dev *) udev->hcpriv;
- unsigned int pipe = urb->pipe;
- int is_out = !usb_pipein(pipe);
+ struct usb_host_endpoint *hep = urb->hcpriv;
unsigned long flags;
- unsigned i;
struct sl811h_ep *ep;
- struct sl811h_req *req = urb->hcpriv;
int retval = 0;
- i = usb_pipeendpoint(pipe) << 1;
- if (i && is_out)
- i |= 1;
+ if (!hep)
+ return -EINVAL;
spin_lock_irqsave(&sl811->lock, flags);
- ep = hdev->ep[i];
+ ep = hep->hcpriv;
if (ep) {
/* finish right away if this urb can't be active ...
* note that some drivers wrongly expect delays
*/
- if (ep->queue.next != &req->queue) {
+ if (ep->hep->urb_list.next != &urb->urb_list) {
/* not front of queue? never active */
/* for active transfers, we expect an IRQ */
0);
sl811->active_a = NULL;
} else
- req = NULL;
+ urb = NULL;
#ifdef USE_B
} else if (sl811->active_b == ep) {
if (time_before_eq(sl811->jiffies_a, jiffies)) {
0);
sl811->active_b = NULL;
} else
- req = NULL;
+ urb = NULL;
#endif
} else {
/* front of queue for inactive endpoint */
}
- if (req)
- finish_request(sl811, ep, req, NULL, 0);
+ if (urb)
+ finish_request(sl811, ep, urb, NULL, 0);
else
VDBG("dequeue, urb %p active %s; wait4irq\n", urb,
(sl811->active_a == ep) ? "A" : "B");
}
static void
-sl811h_endpoint_disable(struct usb_hcd *hcd, struct hcd_dev *hdev, int epnum)
+sl811h_endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *hep)
{
- struct sl811 *sl811 = hcd_to_sl811(hcd);
- struct sl811h_ep *ep;
- unsigned long flags;
- int i;
+ struct sl811h_ep *ep = hep->hcpriv;
- i = (epnum & 0xf) << 1;
- if (epnum && !(epnum & USB_DIR_IN))
- i |= 1;
-
- spin_lock_irqsave(&sl811->lock, flags);
- ep = hdev->ep[i];
- hdev->ep[i] = NULL;
- spin_unlock_irqrestore(&sl811->lock, flags);
+ if (!ep)
+ return;
- if (ep) {
- /* assume we'd just wait for the irq */
- if (!list_empty(&ep->queue))
- msleep(3);
- if (!list_empty(&ep->queue))
- WARN("ep %p not empty?\n", ep);
+ /* assume we'd just wait for the irq */
+ if (!list_empty(&hep->urb_list))
+ msleep(3);
+ if (!list_empty(&hep->urb_list))
+ WARN("ep %p not empty?\n", ep);
- usb_put_dev(ep->udev);
- kfree(ep);
- }
- return;
+ usb_put_dev(ep->udev);
+ kfree(ep);
+ hep->hcpriv = NULL;
}
static int
unsigned i;
seq_printf(s, "%s\n%s version %s\nportstatus[1] = %08x\n",
- sl811->hcd.product_desc,
+ sl811_to_hcd(sl811)->product_desc,
hcd_name, DRIVER_VERSION,
sl811->port1);
sl811_read(sl811, SL811_EP_B(SL11H_PKTSTATREG)));
seq_printf(s, "\n");
list_for_each_entry (ep, &sl811->async, schedule) {
- struct sl811h_req *req;
+ struct urb *urb;
seq_printf(s, "%s%sqh%p, ep%d%s, maxpacket %d"
" nak %d err %d\n",
}; s;}),
ep->maxpacket,
ep->nak_count, ep->error_count);
- list_for_each_entry (req, &ep->queue, queue) {
- seq_printf(s, " urb%p, %d/%d\n", req->urb,
- req->urb->actual_length,
- req->urb->transfer_buffer_length);
+ list_for_each_entry (urb, &ep->hep->urb_list, urb_list) {
+ seq_printf(s, " urb%p, %d/%d\n", urb,
+ urb->actual_length,
+ urb->transfer_buffer_length);
}
}
if (!list_empty(&sl811->async))
struct sl811 *sl811 = hcd_to_sl811(hcd);
unsigned long flags;
- del_timer_sync(&sl811->hcd.rh_timer);
+ del_timer_sync(&hcd->rh_timer);
spin_lock_irqsave(&sl811->lock, flags);
port_power(sl811, 0);
/* chip has been reset, VBUS power is off */
- udev = usb_alloc_dev(NULL, &sl811->hcd.self, 0);
+ udev = usb_alloc_dev(NULL, &hcd->self, 0);
if (!udev)
return -ENOMEM;
hcd->state = USB_STATE_RUNNING;
if (sl811->board)
- sl811->hcd.can_wakeup = sl811->board->can_wakeup;
+ hcd->can_wakeup = sl811->board->can_wakeup;
- if (hcd_register_root(udev, &sl811->hcd) != 0) {
+ if (hcd_register_root(udev, hcd) != 0) {
usb_put_dev(udev);
sl811h_stop(hcd);
return -ENODEV;
static struct hc_driver sl811h_hc_driver = {
.description = hcd_name,
+ .hcd_priv_size = sizeof(struct sl811),
/*
* generic hardware linkage
sl811h_remove(struct device *dev)
{
struct sl811 *sl811 = dev_get_drvdata(dev);
+ struct usb_hcd *hcd = sl811_to_hcd(sl811);
struct platform_device *pdev;
struct resource *res;
pdev = container_of(dev, struct platform_device, dev);
- if (HCD_IS_RUNNING(sl811->hcd.state))
- sl811->hcd.state = USB_STATE_QUIESCING;
+ if (HCD_IS_RUNNING(hcd->state))
+ hcd->state = USB_STATE_QUIESCING;
- usb_disconnect(&sl811->hcd.self.root_hub);
+ usb_disconnect(&hcd->self.root_hub);
remove_debug_file(sl811);
- sl811h_stop(&sl811->hcd);
+ sl811h_stop(hcd);
- if (!list_empty(&sl811->hcd.self.bus_list))
- usb_deregister_bus(&sl811->hcd.self);
+ usb_deregister_bus(&hcd->self);
- if (sl811->hcd.irq >= 0)
- free_irq(sl811->hcd.irq, sl811);
+ free_irq(hcd->irq, hcd);
- if (sl811->data_reg)
- iounmap(sl811->data_reg);
+ iounmap(sl811->data_reg);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
release_mem_region(res->start, 1);
- if (sl811->addr_reg)
- iounmap(sl811->addr_reg);
+ iounmap(sl811->addr_reg);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
release_mem_region(res->start, 1);
- kfree(sl811);
+ usb_put_hcd(hcd);
return 0;
}
static int __init
sl811h_probe(struct device *dev)
{
+ struct usb_hcd *hcd;
struct sl811 *sl811;
struct platform_device *pdev;
struct resource *addr, *data;
int irq;
- int status;
+ void __iomem *addr_reg;
+ void __iomem *data_reg;
+ int retval;
u8 tmp;
- unsigned long flags;
/* basic sanity checks first. board-specific init logic should
* have initialized these three resources and probably board
return -EINVAL;
}
- if (!request_mem_region(addr->start, 1, hcd_name))
- return -EBUSY;
+ if (!request_mem_region(addr->start, 1, hcd_name)) {
+ retval = -EBUSY;
+ goto err1;
+ }
+ addr_reg = ioremap(addr->start, resource_len(addr));
+ if (addr_reg == NULL) {
+ retval = -ENOMEM;
+ goto err2;
+ }
+
if (!request_mem_region(data->start, 1, hcd_name)) {
- release_mem_region(addr->start, 1);
- return -EBUSY;
+ retval = -EBUSY;
+ goto err3;
+ }
+ data_reg = ioremap(data->start, resource_len(addr));
+ if (data_reg == NULL) {
+ retval = -ENOMEM;
+ goto err4;
}
/* allocate and initialize hcd */
- sl811 = kcalloc(1, sizeof *sl811, GFP_KERNEL);
- if (!sl811)
- return 0;
+ hcd = usb_create_hcd(&sl811h_hc_driver);
+ if (!hcd) {
+ retval = -ENOMEM;
+ goto err5;
+ }
+ sl811 = hcd_to_sl811(hcd);
dev_set_drvdata(dev, sl811);
- usb_bus_init(&sl811->hcd.self);
- sl811->hcd.self.controller = dev;
- sl811->hcd.self.bus_name = dev->bus_id;
- sl811->hcd.self.op = &usb_hcd_operations;
- sl811->hcd.self.hcpriv = sl811;
-
- // NOTE: 2.6.11 starts to change the hcd glue layer some more,
- // eventually letting us eliminate struct sl811h_req and a
- // lot of the boilerplate code here
-
- INIT_LIST_HEAD(&sl811->hcd.dev_list);
- sl811->hcd.self.release = &usb_hcd_release;
-
- sl811->hcd.description = sl811h_hc_driver.description;
- init_timer(&sl811->hcd.rh_timer);
- sl811->hcd.driver = &sl811h_hc_driver;
- sl811->hcd.irq = -1;
- sl811->hcd.state = USB_STATE_HALT;
+ hcd->self.controller = dev;
+ hcd->self.bus_name = dev->bus_id;
+ hcd->irq = irq;
+ hcd->regs = addr_reg;
spin_lock_init(&sl811->lock);
INIT_LIST_HEAD(&sl811->async);
init_timer(&sl811->timer);
sl811->timer.function = sl811h_timer;
sl811->timer.data = (unsigned long) sl811;
+ sl811->addr_reg = addr_reg;
+ sl811->data_reg = data_reg;
- sl811->addr_reg = ioremap(addr->start, resource_len(addr));
- if (sl811->addr_reg == NULL) {
- status = -ENOMEM;
- goto fail;
- }
- sl811->data_reg = ioremap(data->start, resource_len(addr));
- if (sl811->data_reg == NULL) {
- status = -ENOMEM;
- goto fail;
- }
-
- spin_lock_irqsave(&sl811->lock, flags);
+ spin_lock_irq(&sl811->lock);
port_power(sl811, 0);
- spin_unlock_irqrestore(&sl811->lock, flags);
+ spin_unlock_irq(&sl811->lock);
msleep(200);
tmp = sl811_read(sl811, SL11H_HWREVREG);
switch (tmp >> 4) {
case 1:
- sl811->hcd.product_desc = "SL811HS v1.2";
+ hcd->product_desc = "SL811HS v1.2";
break;
case 2:
- sl811->hcd.product_desc = "SL811HS v1.5";
+ hcd->product_desc = "SL811HS v1.5";
break;
default:
/* reject case 0, SL11S is less functional */
DBG("chiprev %02x\n", tmp);
- status = -ENXIO;
- goto fail;
+ retval = -ENXIO;
+ goto err6;
}
/* sl811s would need a different handler for this irq */
/* Cypress docs say the IRQ is IRQT_HIGH ... */
set_irq_type(irq, IRQT_RISING);
#endif
- status = request_irq(irq, sl811h_irq, SA_INTERRUPT, hcd_name, sl811);
- if (status < 0)
- goto fail;
- sl811->hcd.irq = irq;
+ retval = request_irq(irq, sl811h_irq, SA_INTERRUPT,
+ hcd->driver->description, hcd);
+ if (retval != 0)
+ goto err6;
- INFO("%s, irq %d\n", sl811->hcd.product_desc, irq);
+ INFO("%s, irq %d\n", hcd->product_desc, irq);
- status = usb_register_bus(&sl811->hcd.self);
- if (status < 0)
- goto fail;
- status = sl811h_start(&sl811->hcd);
- if (status == 0) {
- create_debug_file(sl811);
- return 0;
- }
-fail:
- sl811h_remove(dev);
- DBG("init error, %d\n", status);
- return status;
+ retval = usb_register_bus(&hcd->self);
+ if (retval < 0)
+ goto err7;
+
+ retval = sl811h_start(hcd);
+ if (retval < 0)
+ goto err8;
+
+ create_debug_file(sl811);
+ return 0;
+
+ err8:
+ usb_deregister_bus(&hcd->self);
+ err7:
+ free_irq(hcd->irq, hcd);
+ err6:
+ usb_put_hcd(hcd);
+ err5:
+ iounmap(data_reg);
+ err4:
+ release_mem_region(data->start, 1);
+ err3:
+ iounmap(addr_reg);
+ err2:
+ release_mem_region(addr->start, 1);
+ err1:
+ DBG("init error, %d\n", retval);
+ return retval;
}
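
sl811h_probe() above claims its resources in a fixed order (addr region, addr ioremap, data region, data ioremap, usb_create_hcd, request_irq, usb_register_bus, sl811h_start) and the numbered err1..err8 labels release them in exactly the reverse order, so a failure at step N unwinds only steps 1..N-1. A minimal standalone sketch of the same unwind idiom (the names are illustrative, not driver code):

	/* sketch only: mirrors the goto-unwind pattern used by sl811h_probe() */
	#include <stdlib.h>

	int probe_sketch(void)
	{
		void *region, *mapping, *irq_cookie;
		int retval = -1;

		region = malloc(16);		/* step 1: claim the region */
		if (!region)
			goto err1;
		mapping = malloc(16);		/* step 2: map it */
		if (!mapping)
			goto err2;
		irq_cookie = malloc(16);	/* step 3: request the irq */
		if (!irq_cookie)
			goto err3;
		return 0;			/* success: resources kept for the device's lifetime */
	err3:
		free(mapping);			/* undo step 2 ... */
	err2:
		free(region);			/* ... then step 1 */
	err1:
		return retval;			/* report the first failure */
	}
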
#ifdef CONFIG_PM
return retval;
if (state <= PM_SUSPEND_MEM)
- retval = sl811h_hub_suspend(&sl811->hcd);
+ retval = sl811h_hub_suspend(sl811_to_hcd(sl811));
else
port_power(sl811, 0);
if (retval == 0)
* let's assume it'd only be powered to enable remote wakeup.
*/
if (dev->power.power_state > PM_SUSPEND_MEM
- || !sl811->hcd.can_wakeup) {
+ || !sl811_to_hcd(sl811)->can_wakeup) {
sl811->port1 = 0;
port_power(sl811, 1);
return 0;
}
dev->power.power_state = PM_SUSPEND_ON;
- return sl811h_hub_resume(&sl811->hcd);
+ return sl811h_hub_resume(sl811_to_hcd(sl811));
}
#else
#define PERIODIC_SIZE (1 << LOG2_PERIODIC_SIZE)
struct sl811 {
- struct usb_hcd hcd;
spinlock_t lock;
void __iomem *addr_reg;
void __iomem *data_reg;
static inline struct sl811 *hcd_to_sl811(struct usb_hcd *hcd)
{
- return container_of(hcd, struct sl811, hcd);
+ return (struct sl811 *) (hcd->hcd_priv);
+}
+
+static inline struct usb_hcd *sl811_to_hcd(struct sl811 *sl811)
+{
+ return container_of((void *) sl811, struct usb_hcd, hcd_priv);
}
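
These two helpers replace the old embedded "struct usb_hcd hcd" member: usb_create_hcd() now allocates the generic usb_hcd and the driver-private struct sl811 as a single block, with hcd->hcd_priv marking where the private part begins (its size comes from the hcd_priv_size field the driver sets in sl811h_hc_driver). hcd_to_sl811() is therefore a plain cast, and sl811_to_hcd() walks back with container_of(). A self-contained userspace sketch of the same embedding, using stand-in types (fake_hcd/fake_sl811 are illustrative, not usbcore types):

	#include <assert.h>
	#include <stddef.h>
	#include <stdlib.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct fake_hcd {			/* stands in for struct usb_hcd */
		int irq;
		unsigned long hcd_priv[];	/* driver-private area follows  */
	};

	struct fake_sl811 {			/* stands in for struct sl811   */
		int port1;
	};

	int main(void)
	{
		struct fake_hcd *hcd =
			calloc(1, sizeof(*hcd) + sizeof(struct fake_sl811));
		struct fake_sl811 *sl811 = (struct fake_sl811 *) hcd->hcd_priv;

		/* container_of() recovers the enclosing hcd from the private
		 * area, mirroring sl811_to_hcd() above. */
		assert(container_of((void *) sl811, struct fake_hcd, hcd_priv) == hcd);
		free(hcd);
		return 0;
	}
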
struct sl811h_ep {
- struct list_head queue;
+ struct usb_host_endpoint *hep;
struct usb_device *udev;
u8 defctrl;
struct list_head schedule;
};
-struct sl811h_req {
- /* FIXME usbcore should maintain endpoints' urb queues
- * directly in 'struct usb_host_endpoint'
- */
- struct urb *urb;
- struct list_head queue;
-};
-
/*-------------------------------------------------------------------------*/
/* These register utilities should work for the SL811S register API too
config USB_HPUSBSCSI
tristate "HP53xx USB scanner support"
- depends on USB && SCSI
+ depends on USB && SCSI && BROKEN
help
Say Y here if you want support for the HP 53xx series of scanners
and the Minolta Scan Dual.
* Vojtech Pavlik, Simunkova 1594, Prague 8, 182 00 Czech Republic
*/
+#include <linux/input.h>
+
struct hid_usage_entry {
unsigned page;
unsigned usage;
{0, 0x8b, "SystemMenuLeft"},
{0, 0x8c, "SystemMenuUp"},
{0, 0x8d, "SystemMenuDown"},
- {0, 0x90, "D-padUp"},
- {0, 0x91, "D-padDown"},
- {0, 0x92, "D-padRight"},
- {0, 0x93, "D-padLeft"},
+ {0, 0x90, "D-PadUp"},
+ {0, 0x91, "D-PadDown"},
+ {0, 0x92, "D-PadRight"},
+ {0, 0x93, "D-PadLeft"},
{ 7, 0, "Keyboard" },
{ 8, 0, "LED" },
+ {0, 0x01, "NumLock"},
+ {0, 0x02, "CapsLock"},
+ {0, 0x03, "ScrollLock"},
+ {0, 0x04, "Compose"},
+ {0, 0x05, "Kana"},
{ 9, 0, "Button" },
{ 10, 0, "Ordinal" },
- { 12, 0, "Hotkey" },
+ { 12, 0, "Consumer" },
+ {0, 0x238, "HorizontalWheel"},
{ 13, 0, "Digitizers" },
{0, 0x01, "Digitizer"},
{0, 0x02, "Pen"},
printk(".");
for (p = hid_usage_table; p->description; p++)
if (p->page == (usage >> 16)) {
- for(++p; p->description && p->page != 0; p++)
+ for(++p; p->description && p->usage != 0; p++)
if (p->usage == (usage & 0xffff)) {
printk("%s", p->description);
return;
resolv_usage(usage->hid);
printk(" = %d\n", value);
}
+
+
+static char *events[EV_MAX + 1] = {
+ [0 ... EV_MAX] = NULL,
+ [EV_SYN] = "Sync", [EV_KEY] = "Key",
+ [EV_REL] = "Relative", [EV_ABS] = "Absolute",
+ [EV_MSC] = "Misc", [EV_LED] = "LED",
+ [EV_SND] = "Sound", [EV_REP] = "Repeat",
+ [EV_FF] = "ForceFeedback", [EV_PWR] = "Power",
+ [EV_FF_STATUS] = "ForceFeedbackStatus",
+};
+
+static char *syncs[2] = {
+ [0 ... 1] = NULL,
+ [SYN_REPORT] = "Report", [SYN_CONFIG] = "Config",
+};
+static char *keys[KEY_MAX + 1] = {
+ [0 ... KEY_MAX] = NULL,
+ [KEY_RESERVED] = "Reserved", [KEY_ESC] = "Esc",
+ [KEY_1] = "1", [KEY_2] = "2",
+ [KEY_3] = "3", [KEY_4] = "4",
+ [KEY_5] = "5", [KEY_6] = "6",
+ [KEY_7] = "7", [KEY_8] = "8",
+ [KEY_9] = "9", [KEY_0] = "0",
+ [KEY_MINUS] = "Minus", [KEY_EQUAL] = "Equal",
+ [KEY_BACKSPACE] = "Backspace", [KEY_TAB] = "Tab",
+ [KEY_Q] = "Q", [KEY_W] = "W",
+ [KEY_E] = "E", [KEY_R] = "R",
+ [KEY_T] = "T", [KEY_Y] = "Y",
+ [KEY_U] = "U", [KEY_I] = "I",
+ [KEY_O] = "O", [KEY_P] = "P",
+ [KEY_LEFTBRACE] = "LeftBrace", [KEY_RIGHTBRACE] = "RightBrace",
+ [KEY_ENTER] = "Enter", [KEY_LEFTCTRL] = "LeftControl",
+ [KEY_A] = "A", [KEY_S] = "S",
+ [KEY_D] = "D", [KEY_F] = "F",
+ [KEY_G] = "G", [KEY_H] = "H",
+ [KEY_J] = "J", [KEY_K] = "K",
+ [KEY_L] = "L", [KEY_SEMICOLON] = "Semicolon",
+ [KEY_APOSTROPHE] = "Apostrophe", [KEY_GRAVE] = "Grave",
+ [KEY_LEFTSHIFT] = "LeftShift", [KEY_BACKSLASH] = "BackSlash",
+ [KEY_Z] = "Z", [KEY_X] = "X",
+ [KEY_C] = "C", [KEY_V] = "V",
+ [KEY_B] = "B", [KEY_N] = "N",
+ [KEY_M] = "M", [KEY_COMMA] = "Comma",
+ [KEY_DOT] = "Dot", [KEY_SLASH] = "Slash",
+ [KEY_RIGHTSHIFT] = "RightShift", [KEY_KPASTERISK] = "KPAsterisk",
+ [KEY_LEFTALT] = "LeftAlt", [KEY_SPACE] = "Space",
+ [KEY_CAPSLOCK] = "CapsLock", [KEY_F1] = "F1",
+ [KEY_F2] = "F2", [KEY_F3] = "F3",
+ [KEY_F4] = "F4", [KEY_F5] = "F5",
+ [KEY_F6] = "F6", [KEY_F7] = "F7",
+ [KEY_F8] = "F8", [KEY_F9] = "F9",
+ [KEY_F10] = "F10", [KEY_NUMLOCK] = "NumLock",
+ [KEY_SCROLLLOCK] = "ScrollLock", [KEY_KP7] = "KP7",
+ [KEY_KP8] = "KP8", [KEY_KP9] = "KP9",
+ [KEY_KPMINUS] = "KPMinus", [KEY_KP4] = "KP4",
+ [KEY_KP5] = "KP5", [KEY_KP6] = "KP6",
+ [KEY_KPPLUS] = "KPPlus", [KEY_KP1] = "KP1",
+ [KEY_KP2] = "KP2", [KEY_KP3] = "KP3",
+ [KEY_KP0] = "KP0", [KEY_KPDOT] = "KPDot",
+ [KEY_ZENKAKUHANKAKU] = "Zenkaku/Hankaku", [KEY_102ND] = "102nd",
+ [KEY_F11] = "F11", [KEY_F12] = "F12",
+ [KEY_RO] = "RO", [KEY_KATAKANA] = "Katakana",
+ [KEY_HIRAGANA] = "HIRAGANA", [KEY_HENKAN] = "Henkan",
+ [KEY_KATAKANAHIRAGANA] = "Katakana/Hiragana", [KEY_MUHENKAN] = "Muhenkan",
+ [KEY_KPJPCOMMA] = "KPJpComma", [KEY_KPENTER] = "KPEnter",
+ [KEY_RIGHTCTRL] = "RightCtrl", [KEY_KPSLASH] = "KPSlash",
+ [KEY_SYSRQ] = "SysRq", [KEY_RIGHTALT] = "RightAlt",
+ [KEY_LINEFEED] = "LineFeed", [KEY_HOME] = "Home",
+ [KEY_UP] = "Up", [KEY_PAGEUP] = "PageUp",
+ [KEY_LEFT] = "Left", [KEY_RIGHT] = "Right",
+ [KEY_END] = "End", [KEY_DOWN] = "Down",
+ [KEY_PAGEDOWN] = "PageDown", [KEY_INSERT] = "Insert",
+ [KEY_DELETE] = "Delete", [KEY_MACRO] = "Macro",
+ [KEY_MUTE] = "Mute", [KEY_VOLUMEDOWN] = "VolumeDown",
+ [KEY_VOLUMEUP] = "VolumeUp", [KEY_POWER] = "Power",
+ [KEY_KPEQUAL] = "KPEqual", [KEY_KPPLUSMINUS] = "KPPlusMinus",
+ [KEY_PAUSE] = "Pause", [KEY_KPCOMMA] = "KPComma",
+ [KEY_HANGUEL] = "Hanguel", [KEY_HANJA] = "Hanja",
+ [KEY_YEN] = "Yen", [KEY_LEFTMETA] = "LeftMeta",
+ [KEY_RIGHTMETA] = "RightMeta", [KEY_COMPOSE] = "Compose",
+ [KEY_STOP] = "Stop", [KEY_AGAIN] = "Again",
+ [KEY_PROPS] = "Props", [KEY_UNDO] = "Undo",
+ [KEY_FRONT] = "Front", [KEY_COPY] = "Copy",
+ [KEY_OPEN] = "Open", [KEY_PASTE] = "Paste",
+ [KEY_FIND] = "Find", [KEY_CUT] = "Cut",
+ [KEY_HELP] = "Help", [KEY_MENU] = "Menu",
+ [KEY_CALC] = "Calc", [KEY_SETUP] = "Setup",
+ [KEY_SLEEP] = "Sleep", [KEY_WAKEUP] = "WakeUp",
+ [KEY_FILE] = "File", [KEY_SENDFILE] = "SendFile",
+ [KEY_DELETEFILE] = "DeleteFile", [KEY_XFER] = "X-fer",
+ [KEY_PROG1] = "Prog1", [KEY_PROG2] = "Prog2",
+ [KEY_WWW] = "WWW", [KEY_MSDOS] = "MSDOS",
+ [KEY_COFFEE] = "Coffee", [KEY_DIRECTION] = "Direction",
+ [KEY_CYCLEWINDOWS] = "CycleWindows", [KEY_MAIL] = "Mail",
+ [KEY_BOOKMARKS] = "Bookmarks", [KEY_COMPUTER] = "Computer",
+ [KEY_BACK] = "Back", [KEY_FORWARD] = "Forward",
+ [KEY_CLOSECD] = "CloseCD", [KEY_EJECTCD] = "EjectCD",
+ [KEY_EJECTCLOSECD] = "EjectCloseCD", [KEY_NEXTSONG] = "NextSong",
+ [KEY_PLAYPAUSE] = "PlayPause", [KEY_PREVIOUSSONG] = "PreviousSong",
+ [KEY_STOPCD] = "StopCD", [KEY_RECORD] = "Record",
+ [KEY_REWIND] = "Rewind", [KEY_PHONE] = "Phone",
+ [KEY_ISO] = "ISOKey", [KEY_CONFIG] = "Config",
+ [KEY_HOMEPAGE] = "HomePage", [KEY_REFRESH] = "Refresh",
+ [KEY_EXIT] = "Exit", [KEY_MOVE] = "Move",
+ [KEY_EDIT] = "Edit", [KEY_SCROLLUP] = "ScrollUp",
+ [KEY_SCROLLDOWN] = "ScrollDown", [KEY_KPLEFTPAREN] = "KPLeftParenthesis",
+ [KEY_KPRIGHTPAREN] = "KPRightParenthesis", [KEY_F13] = "F13",
+ [KEY_F14] = "F14", [KEY_F15] = "F15",
+ [KEY_F16] = "F16", [KEY_F17] = "F17",
+ [KEY_F18] = "F18", [KEY_F19] = "F19",
+ [KEY_F20] = "F20", [KEY_F21] = "F21",
+ [KEY_F22] = "F22", [KEY_F23] = "F23",
+ [KEY_F24] = "F24", [KEY_PLAYCD] = "PlayCD",
+ [KEY_PAUSECD] = "PauseCD", [KEY_PROG3] = "Prog3",
+ [KEY_PROG4] = "Prog4", [KEY_SUSPEND] = "Suspend",
+ [KEY_CLOSE] = "Close", [KEY_PLAY] = "Play",
+ [KEY_FASTFORWARD] = "Fast Forward", [KEY_BASSBOOST] = "Bass Boost",
+ [KEY_PRINT] = "Print", [KEY_HP] = "HP",
+ [KEY_CAMERA] = "Camera", [KEY_SOUND] = "Sound",
+ [KEY_QUESTION] = "Question", [KEY_EMAIL] = "Email",
+ [KEY_CHAT] = "Chat", [KEY_SEARCH] = "Search",
+ [KEY_CONNECT] = "Connect", [KEY_FINANCE] = "Finance",
+ [KEY_SPORT] = "Sport", [KEY_SHOP] = "Shop",
+ [KEY_ALTERASE] = "Alternate Erase", [KEY_CANCEL] = "Cancel",
+ [KEY_BRIGHTNESSDOWN] = "Brightness down", [KEY_BRIGHTNESSUP] = "Brightness up",
+ [KEY_MEDIA] = "Media", [KEY_UNKNOWN] = "Unknown",
+ [BTN_0] = "Btn0", [BTN_1] = "Btn1",
+ [BTN_2] = "Btn2", [BTN_3] = "Btn3",
+ [BTN_4] = "Btn4", [BTN_5] = "Btn5",
+ [BTN_6] = "Btn6", [BTN_7] = "Btn7",
+ [BTN_8] = "Btn8", [BTN_9] = "Btn9",
+ [BTN_LEFT] = "LeftBtn", [BTN_RIGHT] = "RightBtn",
+ [BTN_MIDDLE] = "MiddleBtn", [BTN_SIDE] = "SideBtn",
+ [BTN_EXTRA] = "ExtraBtn", [BTN_FORWARD] = "ForwardBtn",
+ [BTN_BACK] = "BackBtn", [BTN_TASK] = "TaskBtn",
+ [BTN_TRIGGER] = "Trigger", [BTN_THUMB] = "ThumbBtn",
+ [BTN_THUMB2] = "ThumbBtn2", [BTN_TOP] = "TopBtn",
+ [BTN_TOP2] = "TopBtn2", [BTN_PINKIE] = "PinkieBtn",
+ [BTN_BASE] = "BaseBtn", [BTN_BASE2] = "BaseBtn2",
+ [BTN_BASE3] = "BaseBtn3", [BTN_BASE4] = "BaseBtn4",
+ [BTN_BASE5] = "BaseBtn5", [BTN_BASE6] = "BaseBtn6",
+ [BTN_DEAD] = "BtnDead", [BTN_A] = "BtnA",
+ [BTN_B] = "BtnB", [BTN_C] = "BtnC",
+ [BTN_X] = "BtnX", [BTN_Y] = "BtnY",
+ [BTN_Z] = "BtnZ", [BTN_TL] = "BtnTL",
+ [BTN_TR] = "BtnTR", [BTN_TL2] = "BtnTL2",
+ [BTN_TR2] = "BtnTR2", [BTN_SELECT] = "BtnSelect",
+ [BTN_START] = "BtnStart", [BTN_MODE] = "BtnMode",
+ [BTN_THUMBL] = "BtnThumbL", [BTN_THUMBR] = "BtnThumbR",
+ [BTN_TOOL_PEN] = "ToolPen", [BTN_TOOL_RUBBER] = "ToolRubber",
+ [BTN_TOOL_BRUSH] = "ToolBrush", [BTN_TOOL_PENCIL] = "ToolPencil",
+ [BTN_TOOL_AIRBRUSH] = "ToolAirbrush", [BTN_TOOL_FINGER] = "ToolFinger",
+ [BTN_TOOL_MOUSE] = "ToolMouse", [BTN_TOOL_LENS] = "ToolLens",
+ [BTN_TOUCH] = "Touch", [BTN_STYLUS] = "Stylus",
+ [BTN_STYLUS2] = "Stylus2", [BTN_TOOL_DOUBLETAP] = "Tool Doubletap",
+ [BTN_TOOL_TRIPLETAP] = "Tool Tripletap", [BTN_GEAR_DOWN] = "WheelBtn",
+ [BTN_GEAR_UP] = "Gear up", [KEY_OK] = "Ok",
+ [KEY_SELECT] = "Select", [KEY_GOTO] = "Goto",
+ [KEY_CLEAR] = "Clear", [KEY_POWER2] = "Power2",
+ [KEY_OPTION] = "Option", [KEY_INFO] = "Info",
+ [KEY_TIME] = "Time", [KEY_VENDOR] = "Vendor",
+ [KEY_ARCHIVE] = "Archive", [KEY_PROGRAM] = "Program",
+ [KEY_CHANNEL] = "Channel", [KEY_FAVORITES] = "Favorites",
+ [KEY_EPG] = "EPG", [KEY_PVR] = "PVR",
+ [KEY_MHP] = "MHP", [KEY_LANGUAGE] = "Language",
+ [KEY_TITLE] = "Title", [KEY_SUBTITLE] = "Subtitle",
+ [KEY_ANGLE] = "Angle", [KEY_ZOOM] = "Zoom",
+ [KEY_MODE] = "Mode", [KEY_KEYBOARD] = "Keyboard",
+ [KEY_SCREEN] = "Screen", [KEY_PC] = "PC",
+ [KEY_TV] = "TV", [KEY_TV2] = "TV2",
+ [KEY_VCR] = "VCR", [KEY_VCR2] = "VCR2",
+ [KEY_SAT] = "Sat", [KEY_SAT2] = "Sat2",
+ [KEY_CD] = "CD", [KEY_TAPE] = "Tape",
+ [KEY_RADIO] = "Radio", [KEY_TUNER] = "Tuner",
+ [KEY_PLAYER] = "Player", [KEY_TEXT] = "Text",
+ [KEY_DVD] = "DVD", [KEY_AUX] = "Aux",
+ [KEY_MP3] = "MP3", [KEY_AUDIO] = "Audio",
+ [KEY_VIDEO] = "Video", [KEY_DIRECTORY] = "Directory",
+ [KEY_LIST] = "List", [KEY_MEMO] = "Memo",
+ [KEY_CALENDAR] = "Calendar", [KEY_RED] = "Red",
+ [KEY_GREEN] = "Green", [KEY_YELLOW] = "Yellow",
+ [KEY_BLUE] = "Blue", [KEY_CHANNELUP] = "ChannelUp",
+ [KEY_CHANNELDOWN] = "ChannelDown", [KEY_FIRST] = "First",
+ [KEY_LAST] = "Last", [KEY_AB] = "AB",
+ [KEY_NEXT] = "Next", [KEY_RESTART] = "Restart",
+ [KEY_SLOW] = "Slow", [KEY_SHUFFLE] = "Shuffle",
+ [KEY_BREAK] = "Break", [KEY_PREVIOUS] = "Previous",
+ [KEY_DIGITS] = "Digits", [KEY_TEEN] = "TEEN",
+ [KEY_TWEN] = "TWEN", [KEY_DEL_EOL] = "DeleteEOL",
+ [KEY_DEL_EOS] = "DeleteEOS", [KEY_INS_LINE] = "InsertLine",
+ [KEY_DEL_LINE] = "DeleteLine",
+};
+
+static char *relatives[REL_MAX + 1] = {
+ [0 ... REL_MAX] = NULL,
+ [REL_X] = "X", [REL_Y] = "Y",
+ [REL_Z] = "Z", [REL_HWHEEL] = "HWheel",
+ [REL_DIAL] = "Dial", [REL_WHEEL] = "Wheel",
+ [REL_MISC] = "Misc",
+};
+
+static char *absolutes[ABS_MAX + 1] = {
+ [0 ... ABS_MAX] = NULL,
+ [ABS_X] = "X", [ABS_Y] = "Y",
+ [ABS_Z] = "Z", [ABS_RX] = "Rx",
+ [ABS_RY] = "Ry", [ABS_RZ] = "Rz",
+ [ABS_THROTTLE] = "Throttle", [ABS_RUDDER] = "Rudder",
+ [ABS_WHEEL] = "Wheel", [ABS_GAS] = "Gas",
+ [ABS_BRAKE] = "Brake", [ABS_HAT0X] = "Hat0X",
+ [ABS_HAT0Y] = "Hat0Y", [ABS_HAT1X] = "Hat1X",
+ [ABS_HAT1Y] = "Hat1Y", [ABS_HAT2X] = "Hat2X",
+ [ABS_HAT2Y] = "Hat2Y", [ABS_HAT3X] = "Hat3X",
+ [ABS_HAT3Y] = "Hat 3Y", [ABS_PRESSURE] = "Pressure",
+ [ABS_DISTANCE] = "Distance", [ABS_TILT_X] = "XTilt",
+ [ABS_TILT_Y] = "YTilt", [ABS_TOOL_WIDTH] = "Tool Width",
+ [ABS_VOLUME] = "Volume", [ABS_MISC] = "Misc",
+};
+
+static char *misc[MSC_MAX + 1] = {
+ [ 0 ... MSC_MAX] = NULL,
+ [MSC_SERIAL] = "Serial", [MSC_PULSELED] = "Pulseled",
+ [MSC_GESTURE] = "Gesture", [MSC_RAW] = "RawData"
+};
+
+static char *leds[LED_MAX + 1] = {
+ [0 ... LED_MAX] = NULL,
+ [LED_NUML] = "NumLock", [LED_CAPSL] = "CapsLock",
+ [LED_SCROLLL] = "ScrollLock", [LED_COMPOSE] = "Compose",
+ [LED_KANA] = "Kana", [LED_SLEEP] = "Sleep",
+ [LED_SUSPEND] = "Suspend", [LED_MUTE] = "Mute",
+ [LED_MISC] = "Misc",
+};
+
+static char *repeats[REP_MAX + 1] = {
+ [0 ... REP_MAX] = NULL,
+ [REP_DELAY] = "Delay", [REP_PERIOD] = "Period"
+};
+
+static char *sounds[SND_MAX + 1] = {
+ [0 ... SND_MAX] = NULL,
+ [SND_CLICK] = "Click", [SND_BELL] = "Bell",
+ [SND_TONE] = "Tone"
+};
+
+static char **names[EV_MAX + 1] = {
+ [0 ... EV_MAX] = NULL,
+ [EV_SYN] = syncs, [EV_KEY] = keys,
+ [EV_REL] = relatives, [EV_ABS] = absolutes,
+ [EV_MSC] = misc, [EV_LED] = leds,
+ [EV_SND] = sounds, [EV_REP] = repeats,
+};
+
+static void resolv_event(__u8 type, __u16 code) {
+
+ printk("%s.%s", events[type] ? events[type] : "?",
+ names[type] ? (names[type][code] ? names[type][code] : "?") : "?");
+}
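
resolv_event() joins the two lookups into a "<event type>.<code name>" string for the mapping debug output in hid-input.c (only compiled in when DEBUG is defined there); for example, with the tables above:

	resolv_event(EV_KEY, KEY_VOLUMEUP);	/* prints "Key.VolumeUp"    */
	resolv_event(EV_REL, REL_HWHEEL);	/* prints "Relative.HWheel" */
	resolv_event(EV_ABS, ABS_HAT0X);	/* prints "Absolute.Hat0X"  */
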
{
struct hid_ff_initializer *init;
- init = hid_get_ff_init(hid->dev->descriptor.idVendor,
- hid->dev->descriptor.idProduct);
+ init = hid_get_ff_init(le16_to_cpu(hid->dev->descriptor.idVendor),
+ le16_to_cpu(hid->dev->descriptor.idProduct));
if (!init) {
dbg("hid_ff_init could not find initializer");
#include <linux/input.h>
#include <linux/usb.h>
+#undef DEBUG
+
#include "hid.h"
#define unk KEY_UNKNOWN
__s32 y;
} hid_hat_to_axis[] = {{ 0, 0}, { 0,-1}, { 1,-1}, { 1, 0}, { 1, 1}, { 0, 1}, {-1, 1}, {-1, 0}, {-1,-1}};
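
hid_hat_to_axis[] converts a hat direction index (0 = centred, then 1..8 clockwise starting from up) into the X/Y values reported on the hat axes; the event handling added further down computes that index from the logical hat value as (value - hat_min) * 8 / (hat_max - hat_min + 1) + 1, or takes it straight from usage->hat_dir for D-pad buttons. A standalone illustration, assuming the common 0..7 logical range:

	#include <stdio.h>

	static const struct { int x, y; } hat_to_axis[] = {
		{ 0, 0}, { 0,-1}, { 1,-1}, { 1, 0}, { 1, 1},
		{ 0, 1}, {-1, 1}, {-1, 0}, {-1,-1}
	};

	int main(void)
	{
		int hat_min = 0, hat_max = 7;	/* assumed logical range */
		int value;

		for (value = hat_min; value <= hat_max; value++) {
			int dir = (value - hat_min) * 8
					/ (hat_max - hat_min + 1) + 1;

			printf("hat value %d -> x=%2d y=%2d\n", value,
			       hat_to_axis[dir].x, hat_to_axis[dir].y);
		}
		return 0;	/* value 0 prints x= 0 y=-1, i.e. "up" */
	}
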
-static struct input_dev *find_input(struct hid_device *hid, struct hid_field *field)
-{
- struct list_head *lh;
- struct hid_input *hidinput;
-
- list_for_each (lh, &hid->inputs) {
- int i;
-
- hidinput = list_entry(lh, struct hid_input, list);
-
- if (! hidinput->report)
- continue;
+#define map_abs(c) do { usage->code = c; usage->type = EV_ABS; bit = input->absbit; max = ABS_MAX; } while (0)
+#define map_rel(c) do { usage->code = c; usage->type = EV_REL; bit = input->relbit; max = REL_MAX; } while (0)
+#define map_key(c) do { usage->code = c; usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX; } while (0)
+#define map_led(c) do { usage->code = c; usage->type = EV_LED; bit = input->ledbit; max = LED_MAX; } while (0)
+#define map_ff(c) do { usage->code = c; usage->type = EV_FF; bit = input->ffbit; max = FF_MAX; } while (0)
- for (i = 0; i < hidinput->report->maxfield; i++)
- if (hidinput->report->field[i] == field)
- return &hidinput->input;
- }
-
- /* Assume we only have one input and use it */
- if (!list_empty(&hid->inputs)) {
- hidinput = list_entry(hid->inputs.next, struct hid_input, list);
- return &hidinput->input;
- }
-
- /* This is really a bug */
- return NULL;
-}
+#define map_abs_clear(c) do { map_abs(c); clear_bit(c, bit); } while (0)
+#define map_key_clear(c) do { map_key(c); clear_bit(c, bit); } while (0)
+#define map_ff_effect(c) do { set_bit(c, input->ffbit); } while (0)
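
Each map_*() helper stamps the usage with an input event type and code and points bit/max at the matching capability bitmap, which the clash-relocation loop near the end of hidinput_configure_usage() uses to move duplicate codes; the *_clear variants also clear that code's bit first so the usage can keep its preferred code. In context, map_key_clear(KEY_SLEEP) expands to roughly:

	usage->code = KEY_SLEEP;
	usage->type = EV_KEY;
	bit = input->keybit;		/* bitmap searched for a free code */
	max = KEY_MAX;			/* upper bound for the relocation  */
	clear_bit(KEY_SLEEP, bit);	/* release any default claim on it */
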
static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_field *field,
struct hid_usage *usage)
{
struct input_dev *input = &hidinput->input;
struct hid_device *device = hidinput->input.private;
- int max;
- int is_abs = 0;
+ int max, code;
unsigned long *bit;
+ field->hidinput = hidinput;
+
+#ifdef DEBUG
+ printk(KERN_DEBUG "Mapping: ");
+ resolv_usage(usage->hid);
+ printk(" ---> ");
+#endif
+
+ if (field->flags & HID_MAIN_ITEM_CONSTANT)
+ goto ignore;
+
switch (usage->hid & HID_USAGE_PAGE) {
+ case HID_UP_UNDEFINED:
+ goto ignore;
+
case HID_UP_KEYBOARD:
set_bit(EV_REP, input->evbit);
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
if ((usage->hid & HID_USAGE) < 256) {
- if (!(usage->code = hid_keyboard[usage->hid & HID_USAGE]))
- return;
- clear_bit(usage->code, bit);
+ if (!hid_keyboard[usage->hid & HID_USAGE]) goto ignore;
+ map_key_clear(hid_keyboard[usage->hid & HID_USAGE]);
} else
- usage->code = KEY_UNKNOWN;
+ map_key(KEY_UNKNOWN);
break;
case HID_UP_BUTTON:
- usage->code = ((usage->hid - 1) & 0xf) + 0x100;
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
+ code = ((usage->hid - 1) & 0xf);
switch (field->application) {
- case HID_GD_GAMEPAD: usage->code += 0x10;
- case HID_GD_JOYSTICK: usage->code += 0x10;
- case HID_GD_MOUSE: usage->code += 0x10; break;
+ case HID_GD_MOUSE:
+ case HID_GD_POINTER: code += 0x110; break;
+ case HID_GD_JOYSTICK: code += 0x120; break;
+ case HID_GD_GAMEPAD: code += 0x130; break;
default:
- if (field->physical == HID_GD_POINTER)
- usage->code += 0x10;
- break;
+ switch (field->physical) {
+ case HID_GD_MOUSE:
+ case HID_GD_POINTER: code += 0x110; break;
+ case HID_GD_JOYSTICK: code += 0x120; break;
+ case HID_GD_GAMEPAD: code += 0x130; break;
+ default: code += 0x100;
+ }
}
+
+ map_key(code);
break;
case HID_UP_GENDESK:
if ((usage->hid & 0xf0) == 0x80) { /* SystemControl */
switch (usage->hid & 0xf) {
- case 0x1: usage->code = KEY_POWER; break;
- case 0x2: usage->code = KEY_SLEEP; break;
- case 0x3: usage->code = KEY_WAKEUP; break;
- default: usage->code = KEY_UNKNOWN; break;
+ case 0x1: map_key_clear(KEY_POWER); break;
+ case 0x2: map_key_clear(KEY_SLEEP); break;
+ case 0x3: map_key_clear(KEY_WAKEUP); break;
+ default: goto unknown;
}
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
break;
}
- usage->code = usage->hid & 0xf;
-
- if (field->report_size == 1) {
- usage->code = BTN_MISC;
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
+ if ((usage->hid & 0xf0) == 0x90) { /* D-pad */
+ switch (usage->hid) {
+ case HID_GD_UP: usage->hat_dir = 1; break;
+ case HID_GD_DOWN: usage->hat_dir = 5; break;
+ case HID_GD_RIGHT: usage->hat_dir = 3; break;
+ case HID_GD_LEFT: usage->hat_dir = 7; break;
+ default: goto unknown;
+ }
+ if (field->dpad) {
+ map_abs(field->dpad);
+ goto ignore;
+ }
+ map_abs(ABS_HAT0X);
break;
}
- if (field->flags & HID_MAIN_ITEM_RELATIVE) {
- usage->type = EV_REL; bit = input->relbit; max = REL_MAX;
- break;
- }
+ switch (usage->hid) {
- usage->type = EV_ABS; bit = input->absbit; max = ABS_MAX;
+ /* These usage IDs map directly to the usage codes. */
+ case HID_GD_X: case HID_GD_Y: case HID_GD_Z:
+ case HID_GD_RX: case HID_GD_RY: case HID_GD_RZ:
+ case HID_GD_SLIDER: case HID_GD_DIAL: case HID_GD_WHEEL:
+ if (field->flags & HID_MAIN_ITEM_RELATIVE)
+ map_rel(usage->hid & 0xf);
+ else
+ map_abs(usage->hid & 0xf);
+ break;
- if (usage->hid == HID_GD_HATSWITCH) {
- usage->code = ABS_HAT0X;
- usage->hat_min = field->logical_minimum;
- usage->hat_max = field->logical_maximum;
+ case HID_GD_HATSWITCH:
+ usage->hat_min = field->logical_minimum;
+ usage->hat_max = field->logical_maximum;
+ map_abs(ABS_HAT0X);
+ break;
+
+ case HID_GD_START: map_key_clear(BTN_START); break;
+ case HID_GD_SELECT: map_key_clear(BTN_SELECT); break;
+
+ default: goto unknown;
}
+
break;
case HID_UP_LED:
-
- usage->code = (usage->hid - 1) & 0xf;
- usage->type = EV_LED; bit = input->ledbit; max = LED_MAX;
+ if (((usage->hid - 1) & 0xffff) >= LED_MAX)
+ goto ignore;
+ map_led((usage->hid - 1) & 0xffff);
break;
case HID_UP_DIGITIZER:
switch (usage->hid & 0xff) {
case 0x30: /* TipPressure */
-
if (!test_bit(BTN_TOUCH, input->keybit)) {
device->quirks |= HID_QUIRK_NOTOUCH;
set_bit(EV_KEY, input->evbit);
set_bit(BTN_TOUCH, input->keybit);
}
- usage->type = EV_ABS; bit = input->absbit; max = ABS_MAX;
- usage->code = ABS_PRESSURE;
- clear_bit(usage->code, bit);
+
+ map_abs_clear(ABS_PRESSURE);
break;
case 0x32: /* InRange */
-
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
switch (field->physical & 0xff) {
- case 0x21: usage->code = BTN_TOOL_MOUSE; break;
- case 0x22: usage->code = BTN_TOOL_FINGER; break;
- default: usage->code = BTN_TOOL_PEN; break;
+ case 0x21: map_key(BTN_TOOL_MOUSE); break;
+ case 0x22: map_key(BTN_TOOL_FINGER); break;
+ default: map_key(BTN_TOOL_PEN); break;
}
break;
case 0x3c: /* Invert */
-
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
- usage->code = BTN_TOOL_RUBBER;
- clear_bit(usage->code, bit);
+ map_key_clear(BTN_TOOL_RUBBER);
break;
case 0x33: /* Touch */
case 0x42: /* TipSwitch */
case 0x43: /* TipSwitch2 */
-
device->quirks &= ~HID_QUIRK_NOTOUCH;
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
- usage->code = BTN_TOUCH;
- clear_bit(usage->code, bit);
+ map_key_clear(BTN_TOUCH);
break;
case 0x44: /* BarrelSwitch */
-
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
- usage->code = BTN_STYLUS;
- clear_bit(usage->code, bit);
+ map_key_clear(BTN_STYLUS);
break;
default: goto unknown;
case HID_UP_CONSUMER: /* USB HUT v1.1, pages 56-62 */
- set_bit(EV_REP, input->evbit);
switch (usage->hid & HID_USAGE) {
- case 0x000: usage->code = 0; break;
- case 0x034: usage->code = KEY_SLEEP; break;
- case 0x036: usage->code = BTN_MISC; break;
- case 0x08a: usage->code = KEY_WWW; break;
- case 0x095: usage->code = KEY_HELP; break;
-
- case 0x0b0: usage->code = KEY_PLAY; break;
- case 0x0b1: usage->code = KEY_PAUSE; break;
- case 0x0b2: usage->code = KEY_RECORD; break;
- case 0x0b3: usage->code = KEY_FASTFORWARD; break;
- case 0x0b4: usage->code = KEY_REWIND; break;
- case 0x0b5: usage->code = KEY_NEXTSONG; break;
- case 0x0b6: usage->code = KEY_PREVIOUSSONG; break;
- case 0x0b7: usage->code = KEY_STOPCD; break;
- case 0x0b8: usage->code = KEY_EJECTCD; break;
- case 0x0cd: usage->code = KEY_PLAYPAUSE; break;
- case 0x0e0: is_abs = 1;
- usage->code = ABS_VOLUME;
- break;
- case 0x0e2: usage->code = KEY_MUTE; break;
- case 0x0e5: usage->code = KEY_BASSBOOST; break;
- case 0x0e9: usage->code = KEY_VOLUMEUP; break;
- case 0x0ea: usage->code = KEY_VOLUMEDOWN; break;
-
- case 0x183: usage->code = KEY_CONFIG; break;
- case 0x18a: usage->code = KEY_MAIL; break;
- case 0x192: usage->code = KEY_CALC; break;
- case 0x194: usage->code = KEY_FILE; break;
- case 0x21a: usage->code = KEY_UNDO; break;
- case 0x21b: usage->code = KEY_COPY; break;
- case 0x21c: usage->code = KEY_CUT; break;
- case 0x21d: usage->code = KEY_PASTE; break;
-
- case 0x221: usage->code = KEY_FIND; break;
- case 0x223: usage->code = KEY_HOMEPAGE; break;
- case 0x224: usage->code = KEY_BACK; break;
- case 0x225: usage->code = KEY_FORWARD; break;
- case 0x226: usage->code = KEY_STOP; break;
- case 0x227: usage->code = KEY_REFRESH; break;
- case 0x22a: usage->code = KEY_BOOKMARKS; break;
-
- default: usage->code = KEY_UNKNOWN; break;
- }
-
- if (is_abs) {
- usage->type = EV_ABS; bit = input->absbit; max = ABS_MAX;
- } else {
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
+ case 0x000: goto ignore;
+ case 0x034: map_key_clear(KEY_SLEEP); break;
+ case 0x036: map_key_clear(BTN_MISC); break;
+ case 0x08a: map_key_clear(KEY_WWW); break;
+ case 0x095: map_key_clear(KEY_HELP); break;
+ case 0x0b0: map_key_clear(KEY_PLAY); break;
+ case 0x0b1: map_key_clear(KEY_PAUSE); break;
+ case 0x0b2: map_key_clear(KEY_RECORD); break;
+ case 0x0b3: map_key_clear(KEY_FASTFORWARD); break;
+ case 0x0b4: map_key_clear(KEY_REWIND); break;
+ case 0x0b5: map_key_clear(KEY_NEXTSONG); break;
+ case 0x0b6: map_key_clear(KEY_PREVIOUSSONG); break;
+ case 0x0b7: map_key_clear(KEY_STOPCD); break;
+ case 0x0b8: map_key_clear(KEY_EJECTCD); break;
+ case 0x0cd: map_key_clear(KEY_PLAYPAUSE); break;
+ case 0x0e0: map_abs_clear(ABS_VOLUME); break;
+ case 0x0e2: map_key_clear(KEY_MUTE); break;
+ case 0x0e5: map_key_clear(KEY_BASSBOOST); break;
+ case 0x0e9: map_key_clear(KEY_VOLUMEUP); break;
+ case 0x0ea: map_key_clear(KEY_VOLUMEDOWN); break;
+ case 0x183: map_key_clear(KEY_CONFIG); break;
+ case 0x18a: map_key_clear(KEY_MAIL); break;
+ case 0x192: map_key_clear(KEY_CALC); break;
+ case 0x194: map_key_clear(KEY_FILE); break;
+ case 0x21a: map_key_clear(KEY_UNDO); break;
+ case 0x21b: map_key_clear(KEY_COPY); break;
+ case 0x21c: map_key_clear(KEY_CUT); break;
+ case 0x21d: map_key_clear(KEY_PASTE); break;
+ case 0x221: map_key_clear(KEY_FIND); break;
+ case 0x223: map_key_clear(KEY_HOMEPAGE); break;
+ case 0x224: map_key_clear(KEY_BACK); break;
+ case 0x225: map_key_clear(KEY_FORWARD); break;
+ case 0x226: map_key_clear(KEY_STOP); break;
+ case 0x227: map_key_clear(KEY_REFRESH); break;
+ case 0x22a: map_key_clear(KEY_BOOKMARKS); break;
+ case 0x238: map_rel(REL_HWHEEL); break;
+ default: goto unknown;
}
break;
set_bit(EV_REP, input->evbit);
switch (usage->hid & HID_USAGE) {
- case 0x021: usage->code = KEY_PRINT; break;
- case 0x070: usage->code = KEY_HP; break;
- case 0x071: usage->code = KEY_CAMERA; break;
- case 0x072: usage->code = KEY_SOUND; break;
- case 0x073: usage->code = KEY_QUESTION; break;
-
- case 0x080: usage->code = KEY_EMAIL; break;
- case 0x081: usage->code = KEY_CHAT; break;
- case 0x082: usage->code = KEY_SEARCH; break;
- case 0x083: usage->code = KEY_CONNECT; break;
- case 0x084: usage->code = KEY_FINANCE; break;
- case 0x085: usage->code = KEY_SPORT; break;
- case 0x086: usage->code = KEY_SHOP; break;
-
- default: usage->code = KEY_UNKNOWN; break;
-
+ case 0x021: map_key_clear(KEY_PRINT); break;
+ case 0x070: map_key_clear(KEY_HP); break;
+ case 0x071: map_key_clear(KEY_CAMERA); break;
+ case 0x072: map_key_clear(KEY_SOUND); break;
+ case 0x073: map_key_clear(KEY_QUESTION); break;
+ case 0x080: map_key_clear(KEY_EMAIL); break;
+ case 0x081: map_key_clear(KEY_CHAT); break;
+ case 0x082: map_key_clear(KEY_SEARCH); break;
+ case 0x083: map_key_clear(KEY_CONNECT); break;
+ case 0x084: map_key_clear(KEY_FINANCE); break;
+ case 0x085: map_key_clear(KEY_SPORT); break;
+ case 0x086: map_key_clear(KEY_SHOP); break;
+ default: goto ignore;
}
-
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
break;
+
+ case HID_UP_MSVENDOR:
+
+ goto ignore;
case HID_UP_PID:
- usage->type = EV_FF; bit = input->ffbit; max = FF_MAX;
-
+ set_bit(EV_FF, input->evbit);
switch(usage->hid & HID_USAGE) {
- case 0x26: set_bit(FF_CONSTANT, input->ffbit); break;
- case 0x27: set_bit(FF_RAMP, input->ffbit); break;
- case 0x28: set_bit(FF_CUSTOM, input->ffbit); break;
- case 0x30: set_bit(FF_SQUARE, input->ffbit);
- set_bit(FF_PERIODIC, input->ffbit); break;
- case 0x31: set_bit(FF_SINE, input->ffbit);
- set_bit(FF_PERIODIC, input->ffbit); break;
- case 0x32: set_bit(FF_TRIANGLE, input->ffbit);
- set_bit(FF_PERIODIC, input->ffbit); break;
- case 0x33: set_bit(FF_SAW_UP, input->ffbit);
- set_bit(FF_PERIODIC, input->ffbit); break;
- case 0x34: set_bit(FF_SAW_DOWN, input->ffbit);
- set_bit(FF_PERIODIC, input->ffbit); break;
- case 0x40: set_bit(FF_SPRING, input->ffbit); break;
- case 0x41: set_bit(FF_DAMPER, input->ffbit); break;
- case 0x42: set_bit(FF_INERTIA , input->ffbit); break;
- case 0x43: set_bit(FF_FRICTION, input->ffbit); break;
- case 0x7e: usage->code = FF_GAIN; break;
- case 0x83: /* Simultaneous Effects Max */
- input->ff_effects_max = (field->value[0]);
- dbg("Maximum Effects - %d",input->ff_effects_max);
- break;
- case 0x98: /* Device Control */
- usage->code = FF_AUTOCENTER; break;
- case 0xa4: /* Safety Switch */
- usage->code = BTN_DEAD;
- bit = input->keybit;
- usage->type = EV_KEY;
- max = KEY_MAX;
- dbg("Safety Switch Report\n");
- break;
- case 0x9f: /* Device Paused */
- case 0xa0: /* Actuators Enabled */
- dbg("Not telling the input API about ");
- resolv_usage(usage->hid);
- return;
+ case 0x26: map_ff_effect(FF_CONSTANT); goto ignore;
+ case 0x27: map_ff_effect(FF_RAMP); goto ignore;
+ case 0x28: map_ff_effect(FF_CUSTOM); goto ignore;
+ case 0x30: map_ff_effect(FF_SQUARE); map_ff_effect(FF_PERIODIC); goto ignore;
+ case 0x31: map_ff_effect(FF_SINE); map_ff_effect(FF_PERIODIC); goto ignore;
+ case 0x32: map_ff_effect(FF_TRIANGLE); map_ff_effect(FF_PERIODIC); goto ignore;
+ case 0x33: map_ff_effect(FF_SAW_UP); map_ff_effect(FF_PERIODIC); goto ignore;
+ case 0x34: map_ff_effect(FF_SAW_DOWN); map_ff_effect(FF_PERIODIC); goto ignore;
+ case 0x40: map_ff_effect(FF_SPRING); goto ignore;
+ case 0x41: map_ff_effect(FF_DAMPER); goto ignore;
+ case 0x42: map_ff_effect(FF_INERTIA); goto ignore;
+ case 0x43: map_ff_effect(FF_FRICTION); goto ignore;
+ case 0x7e: map_ff(FF_GAIN); break;
+ case 0x83: input->ff_effects_max = field->value[0]; goto ignore;
+ case 0x98: map_ff(FF_AUTOCENTER); break;
+ case 0xa4: map_key_clear(BTN_DEAD); break;
+ default: goto ignore;
}
break;
+
default:
unknown:
- resolv_usage(usage->hid);
-
if (field->report_size == 1) {
-
if (field->report->type == HID_OUTPUT_REPORT) {
- usage->code = LED_MISC;
- usage->type = EV_LED; bit = input->ledbit; max = LED_MAX;
+ map_led(LED_MISC);
break;
}
-
- usage->code = BTN_MISC;
- usage->type = EV_KEY; bit = input->keybit; max = KEY_MAX;
+ map_key(BTN_MISC);
break;
}
-
if (field->flags & HID_MAIN_ITEM_RELATIVE) {
- usage->code = REL_MISC;
- usage->type = EV_REL; bit = input->relbit; max = REL_MAX;
+ map_rel(REL_MISC);
break;
}
-
- usage->code = ABS_MISC;
- usage->type = EV_ABS; bit = input->absbit; max = ABS_MAX;
+ map_abs(ABS_MISC);
break;
}
set_bit(usage->type, input->evbit);
- if ((usage->type == EV_REL)
- && (device->quirks & (HID_QUIRK_2WHEEL_MOUSE_HACK_BACK
- | HID_QUIRK_2WHEEL_MOUSE_HACK_EXTRA))
- && (usage->code == REL_WHEEL)) {
- set_bit(REL_HWHEEL, bit);
- }
- while (usage->code <= max && test_and_set_bit(usage->code, bit)) {
+ while (usage->code <= max && test_and_set_bit(usage->code, bit))
usage->code = find_next_zero_bit(bit, max + 1, usage->code);
- }
if (usage->code > max)
- return;
+ goto ignore;
+
+ if ((device->quirks & (HID_QUIRK_2WHEEL_MOUSE_HACK_7 | HID_QUIRK_2WHEEL_MOUSE_HACK_5)) &&
+ (usage->type == EV_REL) && (usage->code == REL_WHEEL))
+ set_bit(REL_HWHEEL, bit);
+
+ if (((device->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_5) && (usage->hid == 0x00090005))
+ || ((device->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_7) && (usage->hid == 0x00090007)))
+ goto ignore;
if (usage->type == EV_ABS) {
+
int a = field->logical_minimum;
int b = field->logical_maximum;
b = field->logical_maximum = 255;
}
- input->absmin[usage->code] = a;
- input->absmax[usage->code] = b;
- input->absfuzz[usage->code] = 0;
- input->absflat[usage->code] = 0;
-
- if (field->application == HID_GD_GAMEPAD || field->application == HID_GD_JOYSTICK) {
- input->absfuzz[usage->code] = (b - a) >> 8;
- input->absflat[usage->code] = (b - a) >> 4;
- }
+ if (field->application == HID_GD_GAMEPAD || field->application == HID_GD_JOYSTICK)
+ input_set_abs_params(input, usage->code, a, b, (b - a) >> 8, (b - a) >> 4);
+ else input_set_abs_params(input, usage->code, a, b, 0, 0);
+
}
- if (usage->hat_min != usage->hat_max) {
+ if (usage->hat_min < usage->hat_max || usage->hat_dir) {
int i;
for (i = usage->code; i < usage->code + 2 && i <= max; i++) {
- input->absmax[i] = 1;
- input->absmin[i] = -1;
- input->absfuzz[i] = 0;
- input->absflat[i] = 0;
+ input_set_abs_params(input, i, -1, 1, 0, 0);
+ set_bit(i, input->absbit);
}
- set_bit(usage->code + 1, input->absbit);
+ if (usage->hat_dir && !field->dpad)
+ field->dpad = usage->code;
}
+
+#ifdef DEBUG
+ resolv_event(usage->type, usage->code);
+ printk("\n");
+#endif
+ return;
+
+ignore:
+#ifdef DEBUG
+ printk("IGNORED\n");
+#endif
+ return;
}
void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct hid_usage *usage, __s32 value, struct pt_regs *regs)
{
- struct input_dev *input = find_input(hid, field);
+ struct input_dev *input = &field->hidinput->input;
int *quirks = &hid->quirks;
if (!input)
return;
input_regs(input, regs);
+ input_event(input, EV_MSC, MSC_SCAN, usage->hid);
- if (((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_EXTRA) && (usage->code == BTN_EXTRA))
- || ((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_BACK) && (usage->code == BTN_BACK))) {
- if (value)
- hid->quirks |= HID_QUIRK_2WHEEL_MOUSE_HACK_ON;
- else
- hid->quirks &= ~HID_QUIRK_2WHEEL_MOUSE_HACK_ON;
+ if (!usage->type)
return;
- }
- if ((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_ON)
- && (usage->code == REL_WHEEL)) {
- input_event(input, usage->type, REL_HWHEEL, value);
+ if (((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_5) && (usage->hid == 0x00090005))
+ || ((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_7) && (usage->hid == 0x00090007))) {
+ if (value) hid->quirks |= HID_QUIRK_2WHEEL_MOUSE_HACK_ON;
+ else hid->quirks &= ~HID_QUIRK_2WHEEL_MOUSE_HACK_ON;
return;
}
- if (usage->hat_min != usage->hat_max ) { /* FIXME: hat_max can be 0 and hat_min 1 */
- value = (value - usage->hat_min) * 8 / (usage->hat_max - usage->hat_min + 1) + 1;
- if (value < 0 || value > 8) value = 0;
- input_event(input, usage->type, usage->code , hid_hat_to_axis[value].x);
- input_event(input, usage->type, usage->code + 1, hid_hat_to_axis[value].y);
+ if ((hid->quirks & HID_QUIRK_2WHEEL_MOUSE_HACK_ON) && (usage->code == REL_WHEEL)) {
+ input_event(input, usage->type, REL_HWHEEL, value);
return;
}
+ if (usage->hat_min < usage->hat_max || usage->hat_dir) {
+ int hat_dir = usage->hat_dir;
+ if (!hat_dir)
+ hat_dir = (value - usage->hat_min) * 8 / (usage->hat_max - usage->hat_min + 1) + 1;
+ if (hat_dir < 0 || hat_dir > 8) hat_dir = 0;
+ input_event(input, usage->type, usage->code , hid_hat_to_axis[hat_dir].x);
+ input_event(input, usage->type, usage->code + 1, hid_hat_to_axis[hat_dir].y);
+ return;
+ }
+
if (usage->hid == (HID_UP_DIGITIZER | 0x003c)) { /* Invert */
*quirks = value ? (*quirks | HID_QUIRK_INVERT) : (*quirks & ~HID_QUIRK_INVERT);
return;
dbg("Maximum Effects - %d",input->ff_effects_max);
return;
}
+
if (usage->hid == (HID_UP_PID | 0x7fUL)) {
dbg("PID Pool Report\n");
return;
if (type == EV_FF)
return hid_ff_event(hid, dev, type, code, value);
+ if (type != EV_LED)
+ return -1;
+
if ((offset = hid_find_field(hid, type, code, &field)) == -1) {
warn("event field not found");
return -1;
INIT_LIST_HEAD(&hid->inputs);
for (i = 0; i < hid->maxcollection; i++)
- if (hid->collection[i].type == HID_COLLECTION_APPLICATION &&
- IS_INPUT_APPLICATION(hid->collection[i].usage))
- break;
+ if (hid->collection[i].type == HID_COLLECTION_APPLICATION ||
+ hid->collection[i].type == HID_COLLECTION_PHYSICAL)
+ if (IS_INPUT_APPLICATION(hid->collection[i].usage))
+ break;
if (i == hid->maxcollection)
return -1;
hidinput->input.phys = hid->phys;
hidinput->input.uniq = hid->uniq;
hidinput->input.id.bustype = BUS_USB;
- hidinput->input.id.vendor = dev->descriptor.idVendor;
- hidinput->input.id.product = dev->descriptor.idProduct;
- hidinput->input.id.version = dev->descriptor.bcdDevice;
+ hidinput->input.id.vendor = le16_to_cpu(dev->descriptor.idVendor);
+ hidinput->input.id.product = le16_to_cpu(dev->descriptor.idProduct);
+ hidinput->input.id.version = le16_to_cpu(dev->descriptor.bcdDevice);
hidinput->input.dev = &hid->intf->dev;
+
+ set_bit(EV_MSC, hidinput->input.evbit);
+ set_bit(MSC_SCAN, hidinput->input.mscbit);
}
for (i = 0; i < report->maxfield; i++)
for (j = 0; j < report->field[i]->maxusage; j++)
hidinput_configure_usage(hidinput, report->field[i],
report->field[i]->usage + j);
-
+
if (hid->quirks & HID_QUIRK_MULTI_INPUT) {
/* This will leave hidinput NULL, so that it
* allocates another one if we have more inputs on
{
struct device_type* dev = devices;
signed short* ff;
- u16 idVendor = hid->dev->descriptor.idVendor;
- u16 idProduct = hid->dev->descriptor.idProduct;
+ u16 idVendor = le16_to_cpu(hid->dev->descriptor.idVendor);
+ u16 idProduct = le16_to_cpu(hid->dev->descriptor.idProduct);
struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
while (dev->idVendor && (idVendor != dev->idVendor || idProduct != dev->idProduct))
#define HID_USAGE_PAGE 0xffff0000
+#define HID_UP_UNDEFINED 0x00000000
#define HID_UP_GENDESK 0x00010000
#define HID_UP_KEYBOARD 0x00070000
#define HID_UP_LED 0x00080000
#define HID_UP_DIGITIZER 0x000d0000
#define HID_UP_PID 0x000f0000
#define HID_UP_HPVENDOR 0xff7f0000
+#define HID_UP_MSVENDOR 0xff000000
#define HID_USAGE 0x0000ffff
#define HID_GD_MOUSE 0x00010002
#define HID_GD_JOYSTICK 0x00010004
#define HID_GD_GAMEPAD 0x00010005
+#define HID_GD_KEYBOARD 0x00010006
+#define HID_GD_KEYPAD 0x00010007
+#define HID_GD_MULTIAXIS 0x00010008
+#define HID_GD_X 0x00010030
+#define HID_GD_Y 0x00010031
+#define HID_GD_Z 0x00010032
+#define HID_GD_RX 0x00010033
+#define HID_GD_RY 0x00010034
+#define HID_GD_RZ 0x00010035
+#define HID_GD_SLIDER 0x00010036
+#define HID_GD_DIAL 0x00010037
+#define HID_GD_WHEEL 0x00010038
#define HID_GD_HATSWITCH 0x00010039
+#define HID_GD_BUFFER 0x0001003a
+#define HID_GD_BYTECOUNT 0x0001003b
+#define HID_GD_MOTION 0x0001003c
+#define HID_GD_START 0x0001003d
+#define HID_GD_SELECT 0x0001003e
+#define HID_GD_VX 0x00010040
+#define HID_GD_VY 0x00010041
+#define HID_GD_VZ 0x00010042
+#define HID_GD_VBRX 0x00010043
+#define HID_GD_VBRY 0x00010044
+#define HID_GD_VBRZ 0x00010045
+#define HID_GD_VNO 0x00010046
+#define HID_GD_FEATURE 0x00010047
+#define HID_GD_UP 0x00010090
+#define HID_GD_DOWN 0x00010091
+#define HID_GD_RIGHT 0x00010092
+#define HID_GD_LEFT 0x00010093
/*
* HID report types --- Ouch! HID spec says 1 2 3!
#define HID_QUIRK_HIDDEV 0x010
#define HID_QUIRK_BADPAD 0x020
#define HID_QUIRK_MULTI_INPUT 0x040
-#define HID_QUIRK_2WHEEL_MOUSE_HACK_BACK 0x080
-#define HID_QUIRK_2WHEEL_MOUSE_HACK_EXTRA 0x100
+#define HID_QUIRK_2WHEEL_MOUSE_HACK_7 0x080
+#define HID_QUIRK_2WHEEL_MOUSE_HACK_5 0x100
#define HID_QUIRK_2WHEEL_MOUSE_HACK_ON 0x200
/*
struct hid_usage {
unsigned hid; /* hid usage code */
unsigned collection_index; /* index into collection array */
+ /* hidinput data */
__u16 code; /* input driver code */
__u8 type; /* input driver type */
__s8 hat_min; /* hat switch fun */
__s8 hat_max; /* ditto */
+ __s8 hat_dir; /* ditto */
};
+struct hid_input;
+
struct hid_field {
unsigned physical; /* physical usage for this field */
unsigned logical; /* logical usage for this field */
unsigned unit;
struct hid_report *report; /* associated report */
unsigned index; /* index into report->field[] */
+ /* hidinput data */
+ struct hid_input *hidinput; /* associated input structure */
+ __u16 dpad; /* dpad input code */
};
#define HID_MAX_FIELDS 64
#define hid_dump_device(c) do { } while (0)
#define hid_dump_field(a,b) do { } while (0)
#define resolv_usage(a) do { } while (0)
+#define resolv_event(a,b) do { } while (0)
#endif
#endif
#ifdef CONFIG_USB_HIDINPUT
/* Applications from HID Usage Tables 4/8/99 Version 1.1 */
/* We ignore a few input applications that are not widely used */
-#define IS_INPUT_APPLICATION(a) (((a >= 0x00010000) && (a <= 0x00010008)) || ( a == 0x00010080) || ( a == 0x000c0001))
+#define IS_INPUT_APPLICATION(a) (((a >= 0x00010000) && (a <= 0x00010008)) || (a == 0x00010080) || (a == 0x000c0001))
extern void hidinput_hid_event(struct hid_device *, struct hid_field *, struct hid_usage *, __s32, struct pt_regs *regs);
extern void hidinput_report_event(struct hid_device *hid, struct hid_report *report);
extern int hidinput_connect(struct hid_device *);
touchkit->input.name = touchkit->name;
touchkit->input.phys = touchkit->phys;
touchkit->input.id.bustype = BUS_USB;
- touchkit->input.id.vendor = udev->descriptor.idVendor;
- touchkit->input.id.product = udev->descriptor.idProduct;
- touchkit->input.id.version = udev->descriptor.bcdDevice;
+ touchkit->input.id.vendor = le16_to_cpu(udev->descriptor.idVendor);
+ touchkit->input.id.product = le16_to_cpu(udev->descriptor.idProduct);
+ touchkit->input.id.version = le16_to_cpu(udev->descriptor.bcdDevice);
touchkit->input.dev = &intf->dev;
touchkit->input.evbit[0] = BIT(EV_KEY) | BIT(EV_ABS);
kbd->dev.name = kbd->name;
kbd->dev.phys = kbd->phys;
kbd->dev.id.bustype = BUS_USB;
- kbd->dev.id.vendor = dev->descriptor.idVendor;
- kbd->dev.id.product = dev->descriptor.idProduct;
- kbd->dev.id.version = dev->descriptor.bcdDevice;
+ kbd->dev.id.vendor = le16_to_cpu(dev->descriptor.idVendor);
+ kbd->dev.id.product = le16_to_cpu(dev->descriptor.idProduct);
+ kbd->dev.id.version = le16_to_cpu(dev->descriptor.bcdDevice);
kbd->dev.dev = &iface->dev;
if (!(buf = kmalloc(63, GFP_KERNEL))) {
mouse->dev.name = mouse->name;
mouse->dev.phys = mouse->phys;
mouse->dev.id.bustype = BUS_USB;
- mouse->dev.id.vendor = dev->descriptor.idVendor;
- mouse->dev.id.product = dev->descriptor.idProduct;
- mouse->dev.id.version = dev->descriptor.bcdDevice;
+ mouse->dev.id.vendor = le16_to_cpu(dev->descriptor.idVendor);
+ mouse->dev.id.product = le16_to_cpu(dev->descriptor.idProduct);
+ mouse->dev.id.version = le16_to_cpu(dev->descriptor.bcdDevice);
mouse->dev.dev = &intf->dev;
if (!(buf = kmalloc(63, GFP_KERNEL))) {
int i;
for (i = 0; xpad_device[i].idVendor; i++) {
- if ((udev->descriptor.idVendor == xpad_device[i].idVendor) &&
- (udev->descriptor.idProduct == xpad_device[i].idProduct))
+ if ((le16_to_cpu(udev->descriptor.idVendor) == xpad_device[i].idVendor) &&
+ (le16_to_cpu(udev->descriptor.idProduct) == xpad_device[i].idProduct))
break;
}
xpad->udev = udev;
xpad->dev.id.bustype = BUS_USB;
- xpad->dev.id.vendor = udev->descriptor.idVendor;
- xpad->dev.id.product = udev->descriptor.idProduct;
- xpad->dev.id.version = udev->descriptor.bcdDevice;
+ xpad->dev.id.vendor = le16_to_cpu(udev->descriptor.idVendor);
+ xpad->dev.id.product = le16_to_cpu(udev->descriptor.idProduct);
+ xpad->dev.id.version = le16_to_cpu(udev->descriptor.bcdDevice);
xpad->dev.dev = &intf->dev;
xpad->dev.private = xpad;
xpad->dev.name = xpad_device[i].name;
/***************************************************************************
* V4L2 driver for SN9C10x PC Camera Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
#define SN9C102_DEBUG_LEVEL 2
#define SN9C102_MAX_DEVICES 64
#define SN9C102_PRESERVE_IMGSCALE 0
+#define SN9C102_FORCE_MUNMAP 0
#define SN9C102_MAX_FRAMES 32
#define SN9C102_URBS 2
#define SN9C102_ISO_PACKETS 7
#define SN9C102_ALTERNATE_SETTING 8
-#define SN9C102_URB_TIMEOUT msecs_to_jiffies(3)
-#define SN9C102_CTRL_TIMEOUT msecs_to_jiffies(100)
+#define SN9C102_URB_TIMEOUT msecs_to_jiffies(2 * SN9C102_ISO_PACKETS)
+#define SN9C102_CTRL_TIMEOUT msecs_to_jiffies(300)
/*****************************************************************************/
#define SN9C102_MODULE_AUTHOR "(C) 2004 Luca Risolia"
#define SN9C102_AUTHOR_EMAIL "<luca.risolia@studio.unibo.it>"
#define SN9C102_MODULE_LICENSE "GPL"
-#define SN9C102_MODULE_VERSION "1:1.19"
-#define SN9C102_MODULE_VERSION_CODE KERNEL_VERSION(1, 0, 19)
+#define SN9C102_MODULE_VERSION "1:1.22"
+#define SN9C102_MODULE_VERSION_CODE KERNEL_VERSION(1, 0, 22)
enum sn9c102_bridge {
BRIDGE_SN9C101 = 0x01,
STREAM_ON,
};
+typedef char sn9c102_sof_header_t[12];
+typedef char sn9c102_eof_header_t[4];
+
struct sn9c102_sysfs_attr {
u8 reg, i2c_reg;
+ sn9c102_sof_header_t frame_header;
+};
+
+struct sn9c102_module_param {
+ u8 force_munmap;
};
static DECLARE_MUTEX(sn9c102_sysfs_lock);
struct v4l2_jpegcompression compression;
struct sn9c102_sysfs_attr sysfs;
+ sn9c102_sof_header_t sof_header;
u16 reg[32];
+ struct sn9c102_module_param module_param;
+
enum sn9c102_dev_state state;
u8 users;
/***************************************************************************
* V4L2 driver for SN9C10x PC Camera Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/page-flags.h>
+#include <linux/byteorder/generic.h>
#include <asm/page.h>
#include <asm/uaccess.h>
"\none and for every other camera."
"\n");
+static short force_munmap[] = {[0 ... SN9C102_MAX_DEVICES-1] =
+ SN9C102_FORCE_MUNMAP};
+module_param_array(force_munmap, bool, NULL, 0444);
+MODULE_PARM_DESC(force_munmap,
+ "\n<0|1[,...]> Force the application to unmap previously "
+ "\nmapped buffer memory before calling any VIDIOC_S_CROP or "
+ "\nVIDIOC_S_FMT ioctl's. Not all the applications support "
+ "\nthis feature. This parameter is specific for each "
+ "\ndetected camera."
+ "\n 0 = do not force memory unmapping"
+ "\n 1 = force memory unmapping (save memory)"
+ "\nDefault value is "__MODULE_STRING(SN9C102_FORCE_MUNMAP)"."
+ "\n");
+
#ifdef SN9C102_DEBUG
static unsigned short debug = SN9C102_DEBUG_LEVEL;
module_param(debug, ushort, 0644);
/*****************************************************************************/
-typedef char sn9c102_sof_header_t[12];
-typedef char sn9c102_eof_header_t[4];
-
static sn9c102_sof_header_t sn9c102_sof_header[] = {
{0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x00},
{0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x01},
}
-static u32 sn9c102_request_buffers(struct sn9c102_device* cam, u32 count)
+static u32
+sn9c102_request_buffers(struct sn9c102_device* cam, u32 count,
+ enum sn9c102_io_method io)
{
struct v4l2_pix_format* p = &(cam->sensor->pix_format);
- const size_t imagesize = (p->width * p->height * p->priv)/8;
+ struct v4l2_rect* r = &(cam->sensor->cropcap.bounds);
+ const size_t imagesize = cam->module_param.force_munmap ||
+ io == IO_READ ?
+ (p->width * p->height * p->priv)/8 :
+ (r->width * r->height * p->priv)/8;
void* buff = NULL;
u32 i;
if (r & 0x04)
return 0;
if (sensor->frequency & SN9C102_I2C_400KHZ)
- udelay(5*8);
+ udelay(5*16);
else
- udelay(16*8);
+ udelay(16*16);
}
return -EBUSY;
}
int
-sn9c102_i2c_try_read(struct sn9c102_device* cam,
- struct sn9c102_sensor* sensor, u8 address)
+sn9c102_i2c_try_raw_read(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 data0, u8 data1,
+ u8 n, u8 buffer[])
{
struct usb_device* udev = cam->usbdev;
u8* data = cam->control_buffer;
int err = 0, res;
- /* Write cycle - address */
+ /* Write cycle */
data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) | 0x10;
- data[1] = sensor->slave_write_id;
- data[2] = address;
+ data[1] = data0; /* I2C slave id */
+ data[2] = data1; /* address */
data[7] = 0x10;
res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
err += sn9c102_i2c_wait(cam, sensor);
- /* Read cycle - 1 byte */
+ /* Read cycle - n bytes */
data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) |
- 0x10 | 0x02;
- data[1] = sensor->slave_read_id;
+ (n << 4) | 0x02;
+ data[1] = data0;
data[7] = 0x10;
res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
err += sn9c102_i2c_wait(cam, sensor);
- /* The read byte will be placed in data[4] */
+ /* The first read byte will be placed in data[4] */
res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x00, 0xc1,
0x0a, 0, data, 5, SN9C102_CTRL_TIMEOUT);
if (res < 0)
err += sn9c102_i2c_detect_read_error(cam, sensor);
- if (err)
+ PDBGG("I2C read: address 0x%02X, first read byte: 0x%02X", data1,
+ data[4])
+
+ if (err) {
DBG(3, "I2C read failed for %s image sensor", sensor->name)
+ return -1;
+ }
- PDBGG("I2C read: address 0x%02X, value: 0x%02X", address, data[4])
+ if (buffer)
+ memcpy(buffer, data, sizeof(buffer));
- return err ? -1 : (int)data[4];
+ return (int)data[4];
}
}
+int
+sn9c102_i2c_try_read(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 address)
+{
+ return sn9c102_i2c_try_raw_read(cam, sensor, sensor->i2c_slave_id,
+ address, 1, NULL);
+}
+
+
int
sn9c102_i2c_try_write(struct sn9c102_device* cam,
struct sn9c102_sensor* sensor, u8 address, u8 value)
{
return sn9c102_i2c_try_raw_write(cam, sensor, 3,
- sensor->slave_write_id, address,
+ sensor->i2c_slave_id, address,
value, 0, 0, 0);
}
for (i = 0; (len >= soflen) && (i <= len - soflen); i++)
for (j = 0; j < n; j++)
/* It's enough to compare 7 bytes */
- if (!memcmp(mem + i, sn9c102_sof_header[j], 7))
- /* Skips the header */
+ if (!memcmp(mem + i, sn9c102_sof_header[j], 7)) {
+ memcpy(cam->sof_header, mem + i, soflen);
+ /* Skip the header */
return mem + i + soflen;
+ }
return NULL;
}
(*f) = NULL;
spin_unlock_irqrestore(&cam->queue_lock
, lock_flags);
+ memcpy(cam->sysfs.frame_header,
+ cam->sof_header,
+ sizeof(sn9c102_sof_header_t));
DBG(3, "Video frame captured: "
"%lu bytes", (unsigned long)(b))
(cam->stream == STREAM_OFF) ||
(cam->state & DEV_DISCONNECTED),
SN9C102_URB_TIMEOUT);
- if (err) {
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ else if (err) {
cam->state |= DEV_MISCONFIGURED;
- DBG(1, "The camera is misconfigured. To use "
- "it, close and open /dev/video%d "
- "again.", cam->v4ldev->minor)
+ DBG(1, "The camera is misconfigured. To use it, close and "
+ "open /dev/video%d again.", cam->v4ldev->minor)
return err;
}
- if (cam->state & DEV_DISCONNECTED)
- return -ENODEV;
return 0;
}
up(&sn9c102_sysfs_lock);
return count;
-}
+}
static ssize_t
return -ENODEV;
}
- if (cam->sensor->slave_read_id == SN9C102_I2C_SLAVEID_UNAVAILABLE) {
+ if (!(cam->sensor->sysfs_ops & SN9C102_I2C_READ)) {
up(&sn9c102_sysfs_lock);
return -ENOSYS;
}
return -ENODEV;
}
+ if (!(cam->sensor->sysfs_ops & SN9C102_I2C_WRITE)) {
+ up(&sn9c102_sysfs_lock);
+ return -ENOSYS;
+ }
+
value = sn9c102_strtou8(buf, len, &count);
if (!count) {
up(&sn9c102_sysfs_lock);
}
+static ssize_t sn9c102_show_frame_header(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam)
+ return -ENODEV;
+
+ count = sizeof(cam->sysfs.frame_header);
+ memcpy(buf, cam->sysfs.frame_header, count);
+
+ DBG(3, "Frame header, read bytes: %zd", count)
+
+ return count;
+}
+
+
static CLASS_DEVICE_ATTR(reg, S_IRUGO | S_IWUSR,
sn9c102_show_reg, sn9c102_store_reg);
static CLASS_DEVICE_ATTR(val, S_IRUGO | S_IWUSR,
static CLASS_DEVICE_ATTR(green, S_IWUGO, NULL, sn9c102_store_green);
static CLASS_DEVICE_ATTR(blue, S_IWUGO, NULL, sn9c102_store_blue);
static CLASS_DEVICE_ATTR(red, S_IWUGO, NULL, sn9c102_store_red);
+static CLASS_DEVICE_ATTR(frame_header, S_IRUGO,
+ sn9c102_show_frame_header, NULL);
static void sn9c102_create_sysfs(struct sn9c102_device* cam)
video_device_create_file(v4ldev, &class_device_attr_reg);
video_device_create_file(v4ldev, &class_device_attr_val);
+ video_device_create_file(v4ldev, &class_device_attr_frame_header);
if (cam->bridge == BRIDGE_SN9C101 || cam->bridge == BRIDGE_SN9C102)
video_device_create_file(v4ldev, &class_device_attr_green);
else if (cam->bridge == BRIDGE_SN9C103) {
video_device_create_file(v4ldev, &class_device_attr_blue);
video_device_create_file(v4ldev, &class_device_attr_red);
}
- if (cam->sensor->slave_write_id != SN9C102_I2C_SLAVEID_UNAVAILABLE ||
- cam->sensor->slave_read_id != SN9C102_I2C_SLAVEID_UNAVAILABLE) {
+ if (cam->sensor->sysfs_ops) {
video_device_create_file(v4ldev, &class_device_attr_i2c_reg);
video_device_create_file(v4ldev, &class_device_attr_i2c_val);
}
/*****************************************************************************/
static int
-sn9c102_set_format(struct sn9c102_device* cam, struct v4l2_pix_format* fmt)
+sn9c102_set_pix_format(struct sn9c102_device* cam, struct v4l2_pix_format* pix)
{
int err = 0;
- if (fmt->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
err += sn9c102_write_reg(cam, cam->reg[0x18] | 0x80, 0x18);
else
err += sn9c102_write_reg(cam, cam->reg[0x18] & 0x7f, 0x18);
cam->compression.quality = cam->reg[0x17] & 0x01 ? 0 : 1;
else
err += sn9c102_set_compression(cam, &cam->compression);
- err += sn9c102_set_format(cam, &s->pix_format);
+ err += sn9c102_set_pix_format(cam, &s->pix_format);
+ if (s->set_pix_format)
+ err += s->set_pix_format(cam, &s->pix_format);
if (err)
return err;
}
if (cam->io == IO_NONE) {
- if (!sn9c102_request_buffers(cam, cam->nreadbuffers)) {
+ if (!sn9c102_request_buffers(cam,cam->nreadbuffers, IO_READ)) {
DBG(1, "read() failed, not enough memory")
up(&cam->fileop_sem);
return -ENOMEM;
}
if (cam->io == IO_NONE) {
- if (!sn9c102_request_buffers(cam, 2)) {
+ if (!sn9c102_request_buffers(cam, 2, IO_READ)) {
DBG(1, "poll() failed, not enough memory")
goto error;
}
return err;
}
+ case VIDIOC_S_CTRL_OLD:
case VIDIOC_S_CTRL:
{
struct sn9c102_sensor* s = cam->sensor;
if (crop.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
- for (i = 0; i < cam->nbuffers; i++)
- if (cam->frame[i].vma_use_count) {
- DBG(3, "VIDIOC_S_CROP failed. "
- "Unmap the buffers first.")
- return -EINVAL;
- }
+ if (cam->module_param.force_munmap)
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_CROP failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
/* Preserve R,G or B origin */
rect->left = (s->_rect.left & 1L) ?
return -EFAULT;
}
- sn9c102_release_buffers(cam);
+ if (cam->module_param.force_munmap)
+ sn9c102_release_buffers(cam);
err = sn9c102_set_crop(cam, rect);
if (s->set_crop)
s->pix_format.height = rect->height/scale;
memcpy(&(s->_rect), rect, sizeof(*rect));
- if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ if (cam->module_param.force_munmap &&
+ nbuffers != sn9c102_request_buffers(cam, nbuffers,
+ cam->io)) {
cam->state |= DEV_MISCONFIGURED;
DBG(1, "VIDIOC_S_CROP failed because of not enough "
"memory. To use the camera, close and open "
return 0;
}
- for (i = 0; i < cam->nbuffers; i++)
- if (cam->frame[i].vma_use_count) {
- DBG(3, "VIDIOC_S_FMT failed. "
- "Unmap the buffers first.")
- return -EINVAL;
- }
+ if (cam->module_param.force_munmap)
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_FMT failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
if (cam->stream == STREAM_ON)
if ((err = sn9c102_stream_interrupt(cam)))
return -EFAULT;
}
- sn9c102_release_buffers(cam);
+ if (cam->module_param.force_munmap)
+ sn9c102_release_buffers(cam);
- err += sn9c102_set_format(cam, pix);
+ err += sn9c102_set_pix_format(cam, pix);
err += sn9c102_set_crop(cam, &rect);
+ if (s->set_pix_format)
+ err += s->set_pix_format(cam, pix);
if (s->set_crop)
err += s->set_crop(cam, &rect);
err += sn9c102_set_scale(cam, scale);
memcpy(pfmt, pix, sizeof(*pix));
memcpy(&(s->_rect), &rect, sizeof(rect));
- if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ if (cam->module_param.force_munmap &&
+ nbuffers != sn9c102_request_buffers(cam, nbuffers,
+ cam->io)) {
cam->state |= DEV_MISCONFIGURED;
DBG(1, "VIDIOC_S_FMT failed because of not enough "
"memory. To use the camera, close and open "
sn9c102_release_buffers(cam);
if (rb.count)
- rb.count = sn9c102_request_buffers(cam, rb.count);
+ rb.count = sn9c102_request_buffers(cam, rb.count,
+ IO_MMAP);
if (copy_to_user(arg, &rb, sizeof(rb))) {
sn9c102_release_buffers(cam);
return 0;
}
+ case VIDIOC_S_PARM_OLD:
case VIDIOC_S_PARM:
{
struct v4l2_streamparm sp;
n = sizeof(sn9c102_id_table)/sizeof(sn9c102_id_table[0]);
for (i = 0; i < n-1; i++)
- if (udev->descriptor.idVendor==sn9c102_id_table[i].idVendor &&
- udev->descriptor.idProduct==sn9c102_id_table[i].idProduct)
+ if (le16_to_cpu(udev->descriptor.idVendor) ==
+ sn9c102_id_table[i].idVendor &&
+ le16_to_cpu(udev->descriptor.idProduct) ==
+ sn9c102_id_table[i].idProduct)
break;
if (i == n-1)
return -ENODEV;
DBG(2, "V4L2 device registered as /dev/video%d", cam->v4ldev->minor)
+ cam->module_param.force_munmap = force_munmap[dev_nr];
+
+ dev_nr = (dev_nr < SN9C102_MAX_DEVICES-1) ? dev_nr+1 : 0;
+
sn9c102_create_sysfs(cam);
+ DBG(2, "Optional device control through 'sysfs' interface ready")
usb_set_intfdata(intf, cam);
--- /dev/null
+/***************************************************************************
+ * Plug-in for HV7131D image sensor connected to the SN9C10x PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor hv7131d;
+
+
+static int hv7131d_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x60, 0x17);
+ err += sn9c102_write_reg(cam, 0x0e, 0x18);
+ err += sn9c102_write_reg(cam, 0xf2, 0x19);
+
+ err += sn9c102_i2c_write(cam, 0x01, 0x04);
+ err += sn9c102_i2c_write(cam, 0x02, 0x00);
+ err += sn9c102_i2c_write(cam, 0x28, 0x00);
+
+ return err;
+}
+
+
+static int hv7131d_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ {
+ int r1 = sn9c102_i2c_read(cam, 0x26),
+ r2 = sn9c102_i2c_read(cam, 0x27);
+ if (r1 < 0 || r2 < 0)
+ return -EIO;
+ ctrl->value = (r1 << 8) | (r2 & 0xff);
+ }
+ return 0;
+ case V4L2_CID_RED_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x31)) < 0)
+ return -EIO;
+ ctrl->value = 0x3f - (ctrl->value & 0x3f);
+ return 0;
+ case V4L2_CID_BLUE_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x33)) < 0)
+ return -EIO;
+ ctrl->value = 0x3f - (ctrl->value & 0x3f);
+ return 0;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x32)) < 0)
+ return -EIO;
+ ctrl->value = 0x3f - (ctrl->value & 0x3f);
+ return 0;
+ case SN9C102_V4L2_CID_RESET_LEVEL:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x30)) < 0)
+ return -EIO;
+ ctrl->value &= 0x3f;
+ return 0;
+ case SN9C102_V4L2_CID_PIXEL_BIAS_VOLTAGE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x34)) < 0)
+ return -EIO;
+ ctrl->value &= 0x07;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int hv7131d_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ err += sn9c102_i2c_write(cam, 0x26, ctrl->value >> 8);
+ err += sn9c102_i2c_write(cam, 0x27, ctrl->value & 0xff);
+ break;
+ case V4L2_CID_RED_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x31, 0x3f - ctrl->value);
+ break;
+ case V4L2_CID_BLUE_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x33, 0x3f - ctrl->value);
+ break;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x32, 0x3f - ctrl->value);
+ break;
+ case SN9C102_V4L2_CID_RESET_LEVEL:
+ err += sn9c102_i2c_write(cam, 0x30, ctrl->value);
+ break;
+ case SN9C102_V4L2_CID_PIXEL_BIAS_VOLTAGE:
+ err += sn9c102_i2c_write(cam, 0x34, ctrl->value);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return err ? -EIO : 0;
+}
+
+
+static int hv7131d_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &hv7131d;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 2,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 2;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static int hv7131d_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, 0x42, 0x19);
+ else
+ err += sn9c102_write_reg(cam, 0xf2, 0x19);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor hv7131d = {
+ .name = "HV7131D",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .sysfs_ops = SN9C102_I2C_READ | SN9C102_I2C_WRITE,
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_2WIRES,
+ .i2c_slave_id = 0x11,
+ .init = &hv7131d_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_EXPOSURE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "exposure",
+ .minimum = 0x0250,
+ .maximum = 0xffff,
+ .step = 0x0001,
+ .default_value = 0x0250,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_RED_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "red balance",
+ .minimum = 0x00,
+ .maximum = 0x3f,
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_BLUE_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "blue balance",
+ .minimum = 0x00,
+ .maximum = 0x3f,
+ .step = 0x01,
+ .default_value = 0x20,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_GREEN_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "green balance",
+ .minimum = 0x00,
+ .maximum = 0x3f,
+ .step = 0x01,
+ .default_value = 0x1e,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_RESET_LEVEL,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "reset level",
+ .minimum = 0x19,
+ .maximum = 0x3f,
+ .step = 0x01,
+ .default_value = 0x30,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_PIXEL_BIAS_VOLTAGE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "pixel bias voltage",
+ .minimum = 0x00,
+ .maximum = 0x07,
+ .step = 0x01,
+ .default_value = 0x02,
+ .flags = 0,
+ },
+ },
+ .get_ctrl = &hv7131d_get_ctrl,
+ .set_ctrl = &hv7131d_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ },
+ .set_crop = &hv7131d_set_crop,
+ .pix_format = {
+ .width = 640,
+ .height = 480,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ },
+ .set_pix_format = &hv7131d_set_pix_format
+};
+
+
+int sn9c102_probe_hv7131d(struct sn9c102_device* cam)
+{
+ int r0 = 0, r1 = 0, err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x00, 0x01);
+ err += sn9c102_write_reg(cam, 0x28, 0x17);
+ if (err)
+ return -EIO;
+
+ r0 = sn9c102_i2c_try_read(cam, &hv7131d, 0x00);
+ r1 = sn9c102_i2c_try_read(cam, &hv7131d, 0x01);
+ if (r0 < 0 || r1 < 0)
+ return -EIO;
+
+ if (r0 != 0x00 && r1 != 0x04)
+ return -ENODEV;
+
+ sn9c102_attach_sensor(cam, &hv7131d);
+
+ return 0;
+}
--- /dev/null
+/***************************************************************************
+ * Plug-in for MI-0343 image sensor connected to the SN9C10x PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor mi0343;
+static u8 mi0343_i2c_data[5+1];
+
+
+static int mi0343_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x0a, 0x14);
+ err += sn9c102_write_reg(cam, 0x40, 0x01);
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+ err += sn9c102_write_reg(cam, 0x07, 0x18);
+ err += sn9c102_write_reg(cam, 0xa0, 0x19);
+
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x0d, 0x00, 0x01, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x0d, 0x00, 0x00, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x03, 0x01, 0xe1, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x04, 0x02, 0x81, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x05, 0x00, 0x17, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x06, 0x00, 0x11, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4, mi0343.i2c_slave_id,
+ 0x62, 0x04, 0x9a, 0, 0);
+
+ return err;
+}
+
+
+static int mi0343_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x09, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ ctrl->value = mi0343_i2c_data[2];
+ return 0;
+ case V4L2_CID_GAIN:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x35, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ break;
+ case V4L2_CID_HFLIP:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x20, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ ctrl->value = mi0343_i2c_data[3] & 0x20 ? 1 : 0;
+ return 0;
+ case V4L2_CID_VFLIP:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x20, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ ctrl->value = mi0343_i2c_data[3] & 0x80 ? 1 : 0;
+ return 0;
+ case V4L2_CID_RED_BALANCE:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x2d, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ break;
+ case V4L2_CID_BLUE_BALANCE:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x2c, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ break;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id,
+ 0x2e, 2+1, mi0343_i2c_data) < 0)
+ return -EIO;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ case V4L2_CID_RED_BALANCE:
+ case V4L2_CID_BLUE_BALANCE:
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ ctrl->value = mi0343_i2c_data[3] | (mi0343_i2c_data[2] << 8);
+ if (ctrl->value >= 0x10 && ctrl->value <= 0x3f)
+ ctrl->value -= 0x10;
+ else if (ctrl->value >= 0x60 && ctrl->value <= 0x7f)
+ ctrl->value -= 0x60;
+ else if (ctrl->value >= 0xe0 && ctrl->value <= 0xff)
+ ctrl->value -= 0xe0;
+ }
+
+ return 0;
+}
+
+
+static int mi0343_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ u16 reg = 0;
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ case V4L2_CID_RED_BALANCE:
+ case V4L2_CID_BLUE_BALANCE:
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ if (ctrl->value <= (0x3f-0x10))
+ reg = 0x10 + ctrl->value;
+ else if (ctrl->value <= ((0x3f-0x10) + (0x7f-0x60)))
+ reg = 0x60 + (ctrl->value - (0x3f-0x10));
+ else
+ reg = 0xe0 + (ctrl->value - (0x3f-0x10) - (0x7f-0x60));
+ break;
+ }
+
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x09, ctrl->value, 0x00,
+ 0, 0);
+ break;
+ case V4L2_CID_GAIN:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x35, reg >> 8, reg & 0xff,
+ 0, 0);
+ break;
+ case V4L2_CID_HFLIP:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x20, ctrl->value ? 0x40:0x00,
+ ctrl->value ? 0x20:0x00,
+ 0, 0);
+ break;
+ case V4L2_CID_VFLIP:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x20, ctrl->value ? 0x80:0x00,
+ ctrl->value ? 0x80:0x00,
+ 0, 0);
+ break;
+ case V4L2_CID_RED_BALANCE:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x2d, reg >> 8, reg & 0xff,
+ 0, 0);
+ break;
+ case V4L2_CID_BLUE_BALANCE:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x2c, reg >> 8, reg & 0xff,
+ 0, 0);
+ break;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x2b, reg >> 8, reg & 0xff,
+ 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x2e, reg >> 8, reg & 0xff,
+ 0, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return err ? -EIO : 0;
+}
+
+
+static int mi0343_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &mi0343;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 0,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 2;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static int mi0343_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X) {
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x0a, 0x00, 0x03, 0, 0);
+ err += sn9c102_write_reg(cam, 0x20, 0x19);
+ } else {
+ err += sn9c102_i2c_try_raw_write(cam, &mi0343, 4,
+ mi0343.i2c_slave_id,
+ 0x0a, 0x00, 0x05, 0, 0);
+ err += sn9c102_write_reg(cam, 0xa0, 0x19);
+ }
+
+ return err;
+}
+
+
+static struct sn9c102_sensor mi0343 = {
+ .name = "MI-0343",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_2WIRES,
+ .i2c_slave_id = 0x5d,
+ .init = &mi0343_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_EXPOSURE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "exposure",
+ .minimum = 0x00,
+ .maximum = 0x0f,
+ .step = 0x01,
+ .default_value = 0x06,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_GAIN,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "global gain",
+ .minimum = 0x00,
+ .maximum = (0x3f-0x10)+(0x7f-0x60)+(0xff-0xe0),/*0x6d*/
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_HFLIP,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .name = "horizontal mirror",
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_VFLIP,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .name = "vertical mirror",
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_RED_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "red balance",
+ .minimum = 0x00,
+ .maximum = (0x3f-0x10)+(0x7f-0x60)+(0xff-0xe0),
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_BLUE_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "blue balance",
+ .minimum = 0x00,
+ .maximum = (0x3f-0x10)+(0x7f-0x60)+(0xff-0xe0),
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_GREEN_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "green balance",
+ .minimum = 0x00,
+ .maximum = ((0x3f-0x10)+(0x7f-0x60)+(0xff-0xe0)),
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ },
+ .get_ctrl = &mi0343_get_ctrl,
+ .set_ctrl = &mi0343_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ },
+ .set_crop = &mi0343_set_crop,
+ .pix_format = {
+ .width = 640,
+ .height = 480,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ },
+ .set_pix_format = &mi0343_set_pix_format
+};
+
+
+int sn9c102_probe_mi0343(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x00, 0x01);
+ err += sn9c102_write_reg(cam, 0x28, 0x17);
+ if (err)
+ return -EIO;
+
+ if (sn9c102_i2c_try_raw_read(cam, &mi0343, mi0343.i2c_slave_id, 0x00,
+ 2, mi0343_i2c_data) < 0)
+ return -EIO;
+
+ if (mi0343_i2c_data[4] != 0x32 && mi0343_i2c_data[3] != 0xe3)
+ return -ENODEV;
+
+ sn9c102_attach_sensor(cam, &mi0343);
+
+ return 0;
+}
* Plug-in for PAS106B image sensor connected to the SN9C10x PC Camera *
* Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
return -EIO;
ctrl->value &= 0xf8;
return 0;
- case SN9C102_V4L2_CID_DAC_SIGN:
- if ((ctrl->value = sn9c102_i2c_read(cam, 0x07)) < 0)
- return -EIO;
- ctrl->value &= 0x01;
- return 0;
default:
return -EINVAL;
}
case SN9C102_V4L2_CID_DAC_MAGNITUDE:
err += sn9c102_i2c_write(cam, 0x08, ctrl->value << 3);
break;
- case SN9C102_V4L2_CID_DAC_SIGN:
- {
- int r;
- err += (r = sn9c102_i2c_read(cam, 0x07)) < 0 ? r : 0;
- err += sn9c102_i2c_write(cam, 0x07, r | ctrl->value);
- }
- break;
default:
return -EINVAL;
}
}
+static int pas106b_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, 0x2c, 0x17);
+ else
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+
+ return err;
+}
+
+
static struct sn9c102_sensor pas106b = {
.name = "PAS106B",
.maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .sysfs_ops = SN9C102_I2C_READ | SN9C102_I2C_WRITE,
.frequency = SN9C102_I2C_400KHZ | SN9C102_I2C_100KHZ,
.interface = SN9C102_I2C_2WIRES,
- .slave_read_id = 0x40,
- .slave_write_id = 0x40,
+ .i2c_slave_id = 0x40,
.init = &pas106b_init,
.qctrl = {
{
.name = "exposure",
.minimum = 0x125,
.maximum = 0xfff,
- .step = 0x01,
+ .step = 0x001,
.default_value = 0x140,
.flags = 0,
},
.default_value = 0x01,
.flags = 0,
},
- {
- .id = SN9C102_V4L2_CID_DAC_SIGN,
- .type = V4L2_CTRL_TYPE_BOOLEAN,
- .name = "DAC sign",
- .minimum = 0x00,
- .maximum = 0x01,
- .step = 0x01,
- .default_value = 0x00,
- .flags = 0,
- },
},
.get_ctrl = &pas106b_get_ctrl,
.set_ctrl = &pas106b_set_ctrl,
.height = 288,
.pixelformat = V4L2_PIX_FMT_SBGGR8,
.priv = 8, /* we use this field as 'bits per pixel' */
- }
+ },
+ .set_pix_format = &pas106b_set_pix_format
};
* <medaglia@undl.org.br> *
* http://cadu.homelinux.com:8080/ *
* *
- * DAC Magnitude, DAC sign, exposure and green gain controls added by *
+ * DAC Magnitude, exposure and green gain controls added by *
* Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
if ((ctrl->value = sn9c102_i2c_read(cam, 0x0c)) < 0)
return -EIO;
return 0;
- case SN9C102_V4L2_CID_DAC_SIGN:
- if ((ctrl->value = sn9c102_i2c_read(cam, 0x0b)) < 0)
- return -EIO;
- ctrl->value &= 0x01;
- return 0;
default:
return -EINVAL;
}
}
+static int pas202bcb_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, 0x24, 0x17);
+ else
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+
+ return err;
+}
+
+
static int pas202bcb_set_ctrl(struct sn9c102_device* cam,
const struct v4l2_control* ctrl)
{
case SN9C102_V4L2_CID_DAC_MAGNITUDE:
err += sn9c102_i2c_write(cam, 0x0c, ctrl->value);
break;
- case SN9C102_V4L2_CID_DAC_SIGN:
- {
- int r;
- err += (r = sn9c102_i2c_read(cam, 0x0b)) < 0 ? r : 0;
- err += sn9c102_i2c_write(cam, 0x0b, r | ctrl->value);
- }
- break;
default:
return -EINVAL;
}
.name = "PAS202BCB",
.maintainer = "Carlos Eduardo Medaglia Dyonisio "
"<medaglia@undl.org.br>",
+ .sysfs_ops = SN9C102_I2C_READ | SN9C102_I2C_WRITE,
.frequency = SN9C102_I2C_400KHZ | SN9C102_I2C_100KHZ,
.interface = SN9C102_I2C_2WIRES,
- .slave_read_id = 0x40,
- .slave_write_id = 0x40,
+ .i2c_slave_id = 0x40,
.init = &pas202bcb_init,
.qctrl = {
{
.name = "exposure",
.minimum = 0x01e5,
.maximum = 0x3fff,
- .step = 0x01,
+ .step = 0x0001,
.default_value = 0x01e5,
.flags = 0,
},
.default_value = 0x04,
.flags = 0,
},
- {
- .id = SN9C102_V4L2_CID_DAC_SIGN,
- .type = V4L2_CTRL_TYPE_BOOLEAN,
- .name = "DAC sign",
- .minimum = 0x00,
- .maximum = 0x01,
- .step = 0x01,
- .default_value = 0x01,
- .flags = 0,
- },
},
.get_ctrl = &pas202bcb_get_ctrl,
.set_ctrl = &pas202bcb_set_ctrl,
.height = 480,
.pixelformat = V4L2_PIX_FMT_SBGGR8,
.priv = 8,
- }
+ },
+ .set_pix_format = &pas202bcb_set_pix_format
};
r0 = sn9c102_i2c_try_read(cam, &pas202bcb, 0x00);
r1 = sn9c102_i2c_try_read(cam, &pas202bcb, 0x01);
-
+
if (r0 < 0 || r1 < 0)
return -EIO;
/***************************************************************************
* API for image sensors connected to the SN9C10x PC Camera Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
ahead.
Functions must return 0 on success, the appropriate error otherwise.
*/
+extern int sn9c102_probe_hv7131d(struct sn9c102_device* cam);
+extern int sn9c102_probe_mi0343(struct sn9c102_device* cam);
extern int sn9c102_probe_pas106b(struct sn9c102_device* cam);
extern int sn9c102_probe_pas202bcb(struct sn9c102_device* cam);
extern int sn9c102_probe_tas5110c1b(struct sn9c102_device* cam);
*/
#define SN9C102_SENSOR_TABLE \
static int (*sn9c102_sensor_table[])(struct sn9c102_device*) = { \
+ &sn9c102_probe_mi0343, /* strong detection based on SENSOR ids */ \
&sn9c102_probe_pas106b, /* strong detection based on SENSOR ids */ \
&sn9c102_probe_pas202bcb, /* strong detection based on SENSOR ids */ \
+ &sn9c102_probe_hv7131d, /* strong detection based on SENSOR ids */ \
&sn9c102_probe_tas5110c1b, /* detection based on USB pid/vid */ \
&sn9c102_probe_tas5130d1b, /* detection based on USB pid/vid */ \
NULL, \
{ USB_DEVICE(0x0c45, 0x6025), }, /* TAS5130D1B and TAS5110C1B */ \
{ USB_DEVICE(0x0c45, 0x6028), }, /* PAS202BCB */ \
{ USB_DEVICE(0x0c45, 0x6029), }, /* PAS106B */ \
- { USB_DEVICE(0x0c45, 0x602a), }, /* HV7131[D|E1] */ \
+ { USB_DEVICE(0x0c45, 0x602a), }, /* HV7131D */ \
{ USB_DEVICE(0x0c45, 0x602b), }, /* MI-0343 */ \
{ USB_DEVICE(0x0c45, 0x602c), }, /* OV7620 */ \
{ USB_DEVICE(0x0c45, 0x6030), }, /* MI03x */ \
u8 address);
/*
- This must be used if and only if the sensor doesn't implement the standard
- I2C protocol. There a number of good reasons why you must use the
- single-byte versions of this function: do not abuse. It writes n bytes,
- from data0 to datan, (registers 0x09 - 0x09+n of SN9C10X chip).
+ These must be used if and only if the sensor doesn't implement the standard
+ I2C protocol. There are a number of good reasons why you must use the
+ single-byte versions of these functions: do not abuse. The first function
+ writes n bytes, from data0 to datan, to registers 0x09 - 0x09+n of SN9C10X
+ chip. The second one programs the registers 0x09 and 0x10 with data0 and
+ data1, and places the n bytes read from the sensor register table in the
+ buffer pointed to by 'buffer'. Both functions return -1 on error; the write
+ version returns 0 on success, while the read version returns the first read
+ byte.
*/
extern int sn9c102_i2c_try_raw_write(struct sn9c102_device* cam,
struct sn9c102_sensor* sensor, u8 n,
u8 data0, u8 data1, u8 data2, u8 data3,
u8 data4, u8 data5);
+extern int sn9c102_i2c_try_raw_read(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 data0,
+ u8 data1, u8 n, u8 buffer[]);
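
For illustration only, a minimal sketch (not part of this patch) of how a
hypothetical sensor plug-in might call the raw helpers declared above; the
register addresses, the value written and the probe logic are made-up
placeholders, only the calling conventions follow the declarations:

/* Hypothetical usage sketch of the raw I2C helpers (addresses are made up). */
static int example_sensor_probe(struct sn9c102_device* cam,
				struct sn9c102_sensor* sensor)
{
	u8 buf[8];

	/* read cycle: data0 = slave id, data1 = register address,
	   n bytes are placed in buf[] (the first one is also returned) */
	if (sn9c102_i2c_try_raw_read(cam, sensor, sensor->i2c_slave_id,
				     0x00, 2, buf) < 0)
		return -EIO;

	/* write cycle: n = 3 significant bytes (slave id, address, value) */
	if (sn9c102_i2c_try_raw_write(cam, sensor, 3,
				      sensor->i2c_slave_id, 0x0d, 0x01,
				      0, 0, 0) < 0)
		return -EIO;

	return 0;
}
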
/* To be used after the sensor struct has been attached to the camera struct */
extern int sn9c102_i2c_write(struct sn9c102_device*, u8 address, u8 value);
extern int sn9c102_pread_reg(struct sn9c102_device*, u16 index);
/*
- NOTE: there are no debugging functions here. To uniform the output you must
- use the dev_info()/dev_warn()/dev_err() macros defined in device.h, already
- included here, the argument being the struct device 'dev' of the sensor
- structure. Do NOT use these macros before the sensor is attached or the
- kernel will crash! However you should not need to notify the user about
+ NOTE: there are no exported debugging functions. To keep the output uniform
+ you must use the dev_info()/dev_warn()/dev_err() macros defined in device.h,
+ already included here, the argument being the struct device 'dev' of the
+ sensor structure. Do NOT use these macros before the sensor is attached or
+ the kernel will crash! However, you should not need to notify the user about
common errors or other messages, since this is done by the master module.
*/
/*****************************************************************************/
+enum sn9c102_i2c_sysfs_ops {
+ SN9C102_I2C_READ = 0x01,
+ SN9C102_I2C_WRITE = 0x02,
+};
+
enum sn9c102_i2c_frequency { /* sensors may support both the frequencies */
SN9C102_I2C_100KHZ = 0x01,
SN9C102_I2C_400KHZ = 0x02,
SN9C102_I2C_3WIRES,
};
-#define SN9C102_I2C_SLAVEID_FICTITIOUS 0xff
-#define SN9C102_I2C_SLAVEID_UNAVAILABLE 0x00
-
struct sn9c102_sensor {
char name[32], /* sensor name */
maintainer[64]; /* name of the maintainer <email> */
+ /* Supported operations through the 'sysfs' interface */
+ enum sn9c102_i2c_sysfs_ops sysfs_ops;
+
/*
These sensor capabilities must be provided if the SN9C10X controller
needs to communicate through the sensor serial interface by using
enum sn9c102_i2c_interface interface;
/*
- These identifiers must be provided if the image sensor implements
+ This identifier must be provided if the image sensor implements
the standard I2C protocol.
*/
- u8 slave_read_id, slave_write_id; /* reg. 0x09 */
+ u8 i2c_slave_id; /* reg. 0x09 */
/*
NOTE: Where not noted, most of the functions below are not mandatory.
int (*init)(struct sn9c102_device* cam);
/*
- This function is called after the sensor has been attached.
+ This function will be called after the sensor has been attached.
It should be used to initialize the sensor only, but may also
configure part of the SN9C10X chip if necessary. You don't need to
setup picture settings like brightness, contrast, etc.. here, if
matches the RGB bayer sequence (i.e. BGBGBG...GRGRGR).
*/
+ int (*set_pix_format)(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix);
+ /*
+ To be called on VIDIOC_S_FMT, when switching from the SBGGR8 to
+ SN9C10X pixel format or viceversa. On error return the corresponding
+ error code without rolling back.
+ */
+
const struct device* dev;
/*
This is the argument for dev_err(), dev_info() and dev_warn(). It
/* Private ioctl's for control settings supported by some image sensors */
#define SN9C102_V4L2_CID_DAC_MAGNITUDE V4L2_CID_PRIVATE_BASE
-#define SN9C102_V4L2_CID_DAC_SIGN V4L2_CID_PRIVATE_BASE + 1
-#define SN9C102_V4L2_CID_GREEN_BALANCE V4L2_CID_PRIVATE_BASE + 2
+#define SN9C102_V4L2_CID_GREEN_BALANCE V4L2_CID_PRIVATE_BASE + 1
+#define SN9C102_V4L2_CID_RESET_LEVEL V4L2_CID_PRIVATE_BASE + 2
+#define SN9C102_V4L2_CID_PIXEL_BIAS_VOLTAGE V4L2_CID_PRIVATE_BASE + 3
#endif /* _SN9C102_SENSOR_H_ */
* Plug-in for TAS5110C1B image sensor connected to the SN9C10x PC Camera *
* Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
/* Don't change ! */
err += sn9c102_write_reg(cam, 0x14, 0x1a);
err += sn9c102_write_reg(cam, 0x0a, 0x1b);
- err += sn9c102_write_reg(cam, 0xfb, 0x19);
+ err += sn9c102_write_reg(cam, sn9c102_pread_reg(cam, 0x19), 0x19);
+
+ return err;
+}
+
+
+static int tas5110c1b_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, 0x2b, 0x19);
+ else
+ err += sn9c102_write_reg(cam, 0xfb, 0x19);
return err;
}
static struct sn9c102_sensor tas5110c1b = {
.name = "TAS5110C1B",
.maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .sysfs_ops = SN9C102_I2C_WRITE,
.frequency = SN9C102_I2C_100KHZ,
.interface = SN9C102_I2C_3WIRES,
- .slave_read_id = SN9C102_I2C_SLAVEID_UNAVAILABLE,
- .slave_write_id = SN9C102_I2C_SLAVEID_FICTITIOUS,
.init = &tas5110c1b_init,
.qctrl = {
{
.height = 288,
.pixelformat = V4L2_PIX_FMT_SBGGR8,
.priv = 8,
- }
+ },
+ .set_pix_format = &tas5110c1b_set_pix_format
};
sn9c102_attach_sensor(cam, &tas5110c1b);
/* Sensor detection is based on USB pid/vid */
- if (tas5110c1b.usbdev->descriptor.idProduct != 0x6001 &&
- tas5110c1b.usbdev->descriptor.idProduct != 0x6005 &&
- tas5110c1b.usbdev->descriptor.idProduct != 0x60ab)
+ if (le16_to_cpu(tas5110c1b.usbdev->descriptor.idProduct) != 0x6001 &&
+ le16_to_cpu(tas5110c1b.usbdev->descriptor.idProduct) != 0x6005 &&
+ le16_to_cpu(tas5110c1b.usbdev->descriptor.idProduct) != 0x60ab)
return -ENODEV;
return 0;
* Plug-in for TAS5130D1B image sensor connected to the SN9C10x PC Camera *
* Controllers *
* *
- * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * Copyright (C) 2004-2005 by Luca Risolia <luca.risolia@studio.unibo.it> *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
/* Do NOT change! */
err += sn9c102_write_reg(cam, 0x1f, 0x1a);
err += sn9c102_write_reg(cam, 0x1a, 0x1b);
- err += sn9c102_write_reg(cam, 0xf3, 0x19);
+ err += sn9c102_write_reg(cam, sn9c102_pread_reg(cam, 0x19), 0x19);
+
+ return err;
+}
+
+
+static int tas5130d1b_set_pix_format(struct sn9c102_device* cam,
+ const struct v4l2_pix_format* pix)
+{
+ int err = 0;
+
+ if (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, 0x63, 0x19);
+ else
+ err += sn9c102_write_reg(cam, 0xf3, 0x19);
return err;
}
static struct sn9c102_sensor tas5130d1b = {
.name = "TAS5130D1B",
.maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .sysfs_ops = SN9C102_I2C_WRITE,
.frequency = SN9C102_I2C_100KHZ,
.interface = SN9C102_I2C_3WIRES,
- .slave_read_id = SN9C102_I2C_SLAVEID_UNAVAILABLE,
- .slave_write_id = SN9C102_I2C_SLAVEID_FICTITIOUS,
.init = &tas5130d1b_init,
.qctrl = {
{
.height = 480,
.pixelformat = V4L2_PIX_FMT_SBGGR8,
.priv = 8,
- }
+ },
+ .set_pix_format = &tas5130d1b_set_pix_format
};
sn9c102_attach_sensor(cam, &tas5130d1b);
/* Sensor detection is based on USB pid/vid */
- if (tas5130d1b.usbdev->descriptor.idProduct != 0x6025 &&
- tas5130d1b.usbdev->descriptor.idProduct != 0x60aa)
+ if (le16_to_cpu(tas5130d1b.usbdev->descriptor.idProduct) != 0x6025 &&
+ le16_to_cpu(tas5130d1b.usbdev->descriptor.idProduct) != 0x60aa)
return -ENODEV;
return 0;
--- /dev/null
+/* Siemens ID Mouse driver v0.5
+
+ This program is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License as
+ published by the Free Software Foundation; either version 2 of
+ the License, or (at your option) any later version.
+
+ Copyright (C) 2004-5 by Florian 'Floe' Echtler <echtler@fs.tum.de>
+ and Andreas 'ad' Deresch <aderesch@fs.tum.de>
+
+ Derived from the USB Skeleton driver 1.1,
+ Copyright (C) 2003 Greg Kroah-Hartman (greg@kroah.com)
+
+*/
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/smp_lock.h>
+#include <linux/completion.h>
+#include <asm/uaccess.h>
+#include <linux/usb.h>
+
+#define WIDTH 225
+#define HEIGHT 288
+#define HEADER "P5 225 288 255 "
+#define IMGSIZE ((WIDTH * HEIGHT) + sizeof(HEADER)-1)
+
+/* Version Information */
+#define DRIVER_VERSION "0.5"
+#define DRIVER_SHORT "idmouse"
+#define DRIVER_AUTHOR "Florian 'Floe' Echtler <echtler@fs.tum.de>"
+#define DRIVER_DESC "Siemens ID Mouse FingerTIP Sensor Driver"
+
+/* Siemens ID Mouse */
+#define USB_IDMOUSE_VENDOR_ID 0x0681
+#define USB_IDMOUSE_PRODUCT_ID 0x0005
+
+/* we still need a minor number */
+#define USB_IDMOUSE_MINOR_BASE 132
+
+static struct usb_device_id idmouse_table[] = {
+ {USB_DEVICE(USB_IDMOUSE_VENDOR_ID, USB_IDMOUSE_PRODUCT_ID)},
+ {} /* null entry at the end */
+};
+
+MODULE_DEVICE_TABLE(usb, idmouse_table);
+
+/* structure to hold all of our device specific stuff */
+struct usb_idmouse {
+
+ struct usb_device *udev; /* save off the usb device pointer */
+ struct usb_interface *interface; /* the interface for this device */
+
+ unsigned char *bulk_in_buffer; /* the buffer to receive data */
+ size_t bulk_in_size; /* the size of the receive buffer */
+ __u8 bulk_in_endpointAddr; /* the address of the bulk in endpoint */
+
+ int open; /* if the port is open or not */
+ int present; /* if the device is not disconnected */
+ struct semaphore sem; /* locks this structure */
+
+};
+
+/* local function prototypes */
+static ssize_t idmouse_read(struct file *file, char __user *buffer,
+ size_t count, loff_t * ppos);
+
+static int idmouse_open(struct inode *inode, struct file *file);
+static int idmouse_release(struct inode *inode, struct file *file);
+
+static int idmouse_probe(struct usb_interface *interface,
+ const struct usb_device_id *id);
+
+static void idmouse_disconnect(struct usb_interface *interface);
+
+/* file operation pointers */
+static struct file_operations idmouse_fops = {
+ .owner = THIS_MODULE,
+ .read = idmouse_read,
+ .open = idmouse_open,
+ .release = idmouse_release,
+};
+
+/* class driver information for devfs */
+static struct usb_class_driver idmouse_class = {
+ .name = "usb/idmouse%d",
+ .fops = &idmouse_fops,
+ .mode = S_IFCHR | S_IRUSR | S_IRGRP | S_IROTH, /* filemode (char, 444) */
+ .minor_base = USB_IDMOUSE_MINOR_BASE,
+};
+
+/* usb specific object needed to register this driver with the usb subsystem */
+static struct usb_driver idmouse_driver = {
+ .owner = THIS_MODULE,
+ .name = DRIVER_SHORT,
+ .probe = idmouse_probe,
+ .disconnect = idmouse_disconnect,
+ .id_table = idmouse_table,
+};
+
+// prevent races between open() and disconnect()
+static DECLARE_MUTEX(disconnect_sem);
+
+static int idmouse_create_image(struct usb_idmouse *dev)
+{
+ int bytes_read = 0;
+ int bulk_read = 0;
+ int result = 0;
+
+ if (dev->bulk_in_size < sizeof(HEADER))
+ return -ENOMEM;
+
+ memcpy(dev->bulk_in_buffer,HEADER,sizeof(HEADER)-1);
+ bytes_read += sizeof(HEADER)-1;
+
+	/* Dump the setup packets. Yes, they are undocumented, simply
+	   because they were sniffed under Windows using SnoopyPro.
+	   I _guess_ that 0x22 is a kind of reset command and 0x21
+	   means init.
+ */
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x21, 0x42, 0x0001, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x20, 0x42, 0x0001, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x22, 0x42, 0x0000, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x21, 0x42, 0x0001, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x20, 0x42, 0x0001, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x20, 0x42, 0x0000, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+
+ /* loop over a blocking bulk read to get data from the device */
+ while (bytes_read < IMGSIZE) {
+ result = usb_bulk_msg (dev->udev,
+ usb_rcvbulkpipe (dev->udev, dev->bulk_in_endpointAddr),
+ dev->bulk_in_buffer + bytes_read,
+ dev->bulk_in_size, &bulk_read, HZ * 5);
+ if (result < 0)
+ return result;
+ if (signal_pending(current))
+ return -EINTR;
+ bytes_read += bulk_read;
+ }
+
+ /* reset the device */
+ result = usb_control_msg (dev->udev, usb_sndctrlpipe (dev->udev, 0),
+ 0x22, 0x42, 0x0000, 0x0002, NULL, 0, HZ);
+ if (result < 0)
+ return result;
+
+ /* should be IMGSIZE == 64815 */
+ dbg("read %d bytes fingerprint data", bytes_read);
+ return 0;
+}
+
+static inline void idmouse_delete(struct usb_idmouse *dev)
+{
+ kfree(dev->bulk_in_buffer);
+ kfree(dev);
+}
+
+static int idmouse_open(struct inode *inode, struct file *file)
+{
+ struct usb_idmouse *dev = NULL;
+ struct usb_interface *interface;
+ int result = 0;
+
+ /* prevent disconnects */
+ down(&disconnect_sem);
+
+ /* get the interface from minor number and driver information */
+ interface = usb_find_interface (&idmouse_driver, iminor (inode));
+ if (!interface) {
+ up(&disconnect_sem);
+ return -ENODEV;
+ }
+ /* get the device information block from the interface */
+ dev = usb_get_intfdata(interface);
+ if (!dev) {
+ up(&disconnect_sem);
+ return -ENODEV;
+ }
+
+ /* lock this device */
+ down(&dev->sem);
+
+ /* check if already open */
+ if (dev->open) {
+
+ /* already open, so fail */
+ result = -EBUSY;
+
+ } else {
+
+ /* create a new image and check for success */
+ result = idmouse_create_image (dev);
+ if (result)
+ goto error;
+
+ /* increment our usage count for the driver */
+ ++dev->open;
+
+ /* save our object in the file's private structure */
+ file->private_data = dev;
+
+ }
+
+error:
+
+ /* unlock this device */
+ up(&dev->sem);
+
+ /* unlock the disconnect semaphore */
+ up(&disconnect_sem);
+ return result;
+}
+
+static int idmouse_release(struct inode *inode, struct file *file)
+{
+ struct usb_idmouse *dev;
+
+ /* prevent a race condition with open() */
+ down(&disconnect_sem);
+
+ dev = (struct usb_idmouse *) file->private_data;
+
+ if (dev == NULL) {
+ up(&disconnect_sem);
+ return -ENODEV;
+ }
+
+ /* lock our device */
+ down(&dev->sem);
+
+ /* are we really open? */
+ if (dev->open <= 0) {
+ up(&dev->sem);
+ up(&disconnect_sem);
+ return -ENODEV;
+ }
+
+ --dev->open;
+
+ if (!dev->present) {
+ /* the device was unplugged before the file was released */
+ up(&dev->sem);
+ idmouse_delete(dev);
+ up(&disconnect_sem);
+ return 0;
+ }
+
+ up(&dev->sem);
+ up(&disconnect_sem);
+ return 0;
+}
+
+static ssize_t idmouse_read(struct file *file, char __user *buffer, size_t count,
+ loff_t * ppos)
+{
+ struct usb_idmouse *dev;
+ int result = 0;
+
+ dev = (struct usb_idmouse *) file->private_data;
+
+ // lock this object
+ down (&dev->sem);
+
+ // verify that the device wasn't unplugged
+ if (!dev->present) {
+ up (&dev->sem);
+ return -ENODEV;
+ }
+
+ if (*ppos >= IMGSIZE) {
+ up (&dev->sem);
+ return 0;
+ }
+
+ if (count > IMGSIZE - *ppos)
+ count = IMGSIZE - *ppos;
+
+ if (copy_to_user (buffer, dev->bulk_in_buffer + *ppos, count)) {
+ result = -EFAULT;
+ } else {
+ result = count;
+ *ppos += count;
+ }
+
+ // unlock the device
+ up(&dev->sem);
+ return result;
+}
+
+static int idmouse_probe(struct usb_interface *interface,
+ const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(interface);
+ struct usb_idmouse *dev = NULL;
+ struct usb_host_interface *iface_desc;
+ struct usb_endpoint_descriptor *endpoint;
+ size_t buffer_size;
+ int result;
+
+ /* check if we have gotten the data or the hid interface */
+ iface_desc = &interface->altsetting[0];
+ if (iface_desc->desc.bInterfaceClass != 0x0A)
+ return -ENODEV;
+
+ /* allocate memory for our device state and initialize it */
+ dev = kmalloc(sizeof(*dev), GFP_KERNEL);
+ if (dev == NULL)
+ return -ENOMEM;
+ memset(dev, 0x00, sizeof(*dev));
+
+ init_MUTEX(&dev->sem);
+ dev->udev = udev;
+ dev->interface = interface;
+
+ /* set up the endpoint information - use only the first bulk-in endpoint */
+ endpoint = &iface_desc->endpoint[0].desc;
+ if (!dev->bulk_in_endpointAddr
+ && (endpoint->bEndpointAddress & USB_DIR_IN)
+ && ((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
+ USB_ENDPOINT_XFER_BULK)) {
+
+ /* we found a bulk in endpoint */
+ buffer_size = le16_to_cpu(endpoint->wMaxPacketSize);
+ dev->bulk_in_size = buffer_size;
+ dev->bulk_in_endpointAddr = endpoint->bEndpointAddress;
+ dev->bulk_in_buffer =
+ kmalloc(IMGSIZE + buffer_size, GFP_KERNEL);
+
+ if (!dev->bulk_in_buffer) {
+ err("Unable to allocate input buffer.");
+ idmouse_delete(dev);
+ return -ENOMEM;
+ }
+ }
+
+ if (!(dev->bulk_in_endpointAddr)) {
+ err("Unable to find bulk-in endpoint.");
+ idmouse_delete(dev);
+ return -ENODEV;
+ }
+ /* allow device read, write and ioctl */
+ dev->present = 1;
+
+ /* we can register the device now, as it is ready */
+ usb_set_intfdata(interface, dev);
+ result = usb_register_dev(interface, &idmouse_class);
+ if (result) {
+ /* something prevented us from registering this device */
+ err("Unble to allocate minor number.");
+ usb_set_intfdata(interface, NULL);
+ idmouse_delete(dev);
+ return result;
+ }
+
+ /* be noisy */
+ dev_info(&interface->dev,"%s now attached\n",DRIVER_DESC);
+
+ return 0;
+}
+
+static void idmouse_disconnect(struct usb_interface *interface)
+{
+ struct usb_idmouse *dev;
+
+ /* prevent races with open() */
+ down(&disconnect_sem);
+
+ /* get device structure */
+ dev = usb_get_intfdata(interface);
+ usb_set_intfdata(interface, NULL);
+
+ /* lock it */
+ down(&dev->sem);
+
+ /* give back our minor */
+ usb_deregister_dev(interface, &idmouse_class);
+
+ /* prevent device read, write and ioctl */
+ dev->present = 0;
+
+ /* unlock */
+ up(&dev->sem);
+
+ /* if the device is opened, idmouse_release will clean this up */
+ if (!dev->open)
+ idmouse_delete(dev);
+
+ up(&disconnect_sem);
+
+ info("%s disconnected", DRIVER_DESC);
+}
+
+static int __init usb_idmouse_init(void)
+{
+ int result;
+
+ info(DRIVER_DESC " " DRIVER_VERSION);
+
+ /* register this driver with the USB subsystem */
+ result = usb_register(&idmouse_driver);
+ if (result)
+ err("Unable to register device (error %d).", result);
+
+ return result;
+}
+
+static void __exit usb_idmouse_exit(void)
+{
+ /* deregister this driver with the USB subsystem */
+ usb_deregister(&idmouse_driver);
+}
+
+module_init(usb_idmouse_init);
+module_exit(usb_idmouse_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+
unsigned long arg)
{
struct lcd_usb_data *lcd = &lcd_instance;
- int i;
+ u16 bcdDevice;
char buf[30];
/* Sanity check to make sure lcd is connected, powered, etc */
switch (cmd) {
case IOCTL_GET_HARD_VERSION:
- i = (lcd->lcd_dev)->descriptor.bcdDevice;
- sprintf(buf,"%1d%1d.%1d%1d",(i & 0xF000)>>12,(i & 0xF00)>>8,
- (i & 0xF0)>>4,(i & 0xF));
+ bcdDevice = le16_to_cpu((lcd->lcd_dev)->descriptor.bcdDevice);
+ sprintf(buf,"%1d%1d.%1d%1d",
+ (bcdDevice & 0xF000)>>12,
+ (bcdDevice & 0xF00)>>8,
+ (bcdDevice & 0xF0)>>4,
+ (bcdDevice & 0xF));
if (copy_to_user((void __user *)arg,buf,strlen(buf))!=0)
return -EFAULT;
break;
int i;
int retval;
- if (dev->descriptor.idProduct != 0x0001 ) {
+ if (le16_to_cpu(dev->descriptor.idProduct) != 0x0001) {
warn(KERN_INFO "USBLCD model not supported.");
return -ENODEV;
}
return -ENODEV;
}
- i = dev->descriptor.bcdDevice;
+ i = le16_to_cpu(dev->descriptor.bcdDevice);
info("USBLCD Version %1d%1d.%1d%1d found at address %d",
(i & 0xF000)>>12,(i & 0xF00)>>8,(i & 0xF0)>>4,(i & 0xF),
}
/* The F5U011 has the same vendor/product as the netmate but a device version of 0x130 */
- if (usbdev->descriptor.idVendor == 0x0423 && usbdev->descriptor.idProduct == 0xa &&
- catc->usbdev->descriptor.bcdDevice == 0x0130 ) {
+ if (le16_to_cpu(usbdev->descriptor.idVendor) == 0x0423 &&
+ le16_to_cpu(usbdev->descriptor.idProduct) == 0xa &&
+ le16_to_cpu(catc->usbdev->descriptor.bcdDevice) == 0x0130) {
dbg("Testing for f5u011");
catc->is_f5u011 = 1;
atomic_set(&catc->recq_sz, 0);
*
*
* Lonnie Mendez <dignome@gmail.com>
+ * 12-15-2004
+ * Incorporated write buffering from pl2303 driver. Fixed bug with line
+ * handling so both lines are raised in cypress_open. (was dropping rts)
+ * Various code cleanups made as well along with other misc bug fixes.
+ *
+ * Lonnie Mendez <dignome@gmail.com>
* 04-10-2004
* Driver modified to support dynamic line settings. Various improvments
* and features.
* Long Term TODO:
* Improve transfer speeds - both read/write are somewhat slow
* at this point.
+ * Improve debugging. Show modem line status with debug output and
+ * implement filtering for certain data as a module parameter.
*/
-/* Neil Whelchel wrote the cypress m8 implementation */
+/* Thanks to Neil Whelchel for writing the first cypress m8 implementation for linux. */
/* Thanks to cypress for providing references for the hid reports. */
/* Thanks to Jiang Zhang for providing links and for general help. */
/* Code originates and was built up from ftdi_sio, belkin, pl2303 and others. */
#include <linux/usb.h>
#include <linux/serial.h>
+#include "usb-serial.h"
+#include "cypress_m8.h"
+
+
#ifdef CONFIG_USB_SERIAL_DEBUG
static int debug = 1;
#else
static int debug;
#endif
-
static int stats;
-#include "usb-serial.h"
-#include "cypress_m8.h"
-
/*
* Version Information
*/
-#define DRIVER_VERSION "v1.06"
+#define DRIVER_VERSION "v1.08"
#define DRIVER_AUTHOR "Lonnie Mendez <dignome@gmail.com>, Neil Whelchel <koyama@firstlight.net>"
#define DRIVER_DESC "Cypress USB to Serial Driver"
+/* write buffer size defines */
+#define CYPRESS_BUF_SIZE 1024
+#define CYPRESS_CLOSING_WAIT (30*HZ)
+
static struct usb_device_id id_table_earthmate [] = {
{ USB_DEVICE(VENDOR_ID_DELORME, PRODUCT_ID_EARTHMATEUSB) },
{ } /* Terminating entry */
int bytes_out; /* used for statistics */
int cmd_count; /* used for statistics */
int cmd_ctrl; /* always set this to 1 before issuing a command */
+ struct cypress_buf *buf; /* write buffer */
+ int write_urb_in_use; /* write urb in use indicator */
int termios_initialized;
__u8 line_control; /* holds dtr / rts value */
__u8 current_status; /* received from last read - info on dsr,cts,cd,ri,etc */
char prev_status, diff_status; /* used for TIOCMIWAIT */
/* we pass a pointer to this as the arguement sent to cypress_set_termios old_termios */
struct termios tmp_termios; /* stores the old termios settings */
- int write_interval; /* interrupt out write interval, as obtained from interrupt_out_urb */
- int writepipe; /* used for clear halt, if necessary */
+ char calledfromopen; /* used when issuing lines on open - fixes rts drop bug */
+};
+
+/* write buffer structure */
+struct cypress_buf {
+ unsigned int buf_size;
+ char *buf_buf;
+ char *buf_get;
+ char *buf_put;
};
/* function prototypes for the Cypress USB to serial device */
static int cypress_open (struct usb_serial_port *port, struct file *filp);
static void cypress_close (struct usb_serial_port *port, struct file *filp);
static int cypress_write (struct usb_serial_port *port, const unsigned char *buf, int count);
+static void cypress_send (struct usb_serial_port *port);
static int cypress_write_room (struct usb_serial_port *port);
static int cypress_ioctl (struct usb_serial_port *port, struct file * file, unsigned int cmd, unsigned long arg);
static void cypress_set_termios (struct usb_serial_port *port, struct termios * old);
static void cypress_unthrottle (struct usb_serial_port *port);
static void cypress_read_int_callback (struct urb *urb, struct pt_regs *regs);
static void cypress_write_int_callback (struct urb *urb, struct pt_regs *regs);
-static int mask_to_rate (unsigned mask);
-static unsigned rate_to_mask (int rate);
+/* baud helper functions */
+static int mask_to_rate (unsigned mask);
+static unsigned rate_to_mask (int rate);
+/* write buffer functions */
+static struct cypress_buf *cypress_buf_alloc(unsigned int size);
+static void cypress_buf_free(struct cypress_buf *cb);
+static void cypress_buf_clear(struct cypress_buf *cb);
+static unsigned int cypress_buf_data_avail(struct cypress_buf *cb);
+static unsigned int cypress_buf_space_avail(struct cypress_buf *cb);
+static unsigned int cypress_buf_put(struct cypress_buf *cb, const char *buf,
+ unsigned int count);
+static unsigned int cypress_buf_get(struct cypress_buf *cb, char *buf,
+ unsigned int count);
+
static struct usb_serial_device_type cypress_earthmate_device = {
.owner = THIS_MODULE,
memset(priv, 0x00, sizeof (struct cypress_private));
spin_lock_init(&priv->lock);
+ priv->buf = cypress_buf_alloc(CYPRESS_BUF_SIZE);
+ if (priv->buf == NULL) {
+ kfree(priv);
+ return -ENOMEM;
+ }
init_waitqueue_head(&priv->delta_msr_wait);
- priv->writepipe = serial->port[0]->interrupt_out_urb->pipe;
- /* free up interrupt_out buffer / urb allocated by usbserial
- * for this port as we use our own urbs for writing */
- if (serial->port[0]->interrupt_out_buffer) {
- kfree(serial->port[0]->interrupt_out_buffer);
- serial->port[0]->interrupt_out_buffer = NULL;
- }
- if (serial->port[0]->interrupt_out_urb) {
- priv->write_interval = serial->port[0]->interrupt_out_urb->interval;
- usb_free_urb(serial->port[0]->interrupt_out_urb);
- serial->port[0]->interrupt_out_urb = NULL;
- } else /* still need a write interval */
- priv->write_interval = 10;
-
+ usb_reset_configuration (serial->dev);
+
priv->cmd_ctrl = 0;
priv->line_control = 0;
priv->termios_initialized = 0;
+ priv->calledfromopen = 0;
priv->rx_flags = 0;
usb_set_serial_port_data(serial->port[0], priv);
priv = usb_get_serial_port_data(serial->port[0]);
if (priv) {
+ cypress_buf_free(priv->buf);
kfree(priv);
usb_set_serial_port_data(serial->port[0], NULL);
}
dbg("%s - port %d", __FUNCTION__, port->number);
+ /* clear halts before open */
+ usb_clear_halt(serial->dev, 0x00);
+ usb_clear_halt(serial->dev, 0x81);
+ usb_clear_halt(serial->dev, 0x02);
+
spin_lock_irqsave(&priv->lock, flags);
/* reset read/write statistics */
priv->bytes_in = 0;
priv->bytes_out = 0;
priv->cmd_count = 0;
-
- /* turn on dtr / rts since we are not flow controlling by default */
- priv->line_control = CONTROL_DTR | CONTROL_RTS; /* sent in status byte */
+ priv->rx_flags = 0;
spin_unlock_irqrestore(&priv->lock, flags);
- priv->cmd_ctrl = 1;
- result = cypress_write(port, NULL, 0);
-
+
+ /* setting to zero could cause data loss */
port->tty->low_latency = 1;
- /* termios defaults are set by usb_serial_init */
-
- cypress_set_termios(port, &priv->tmp_termios);
+ /* raise both lines and set termios */
+ spin_lock_irqsave(&priv->lock, flags);
+ priv->line_control = CONTROL_DTR | CONTROL_RTS;
+ priv->calledfromopen = 1;
+ priv->cmd_ctrl = 1;
+ spin_unlock_irqrestore(&priv->lock, flags);
+ result = cypress_write(port, NULL, 0);
if (result) {
dev_err(&port->dev, "%s - failed setting the control lines - error %d\n", __FUNCTION__, result);
return result;
} else
- dbg("%s - success setting the control lines", __FUNCTION__);
+ dbg("%s - success setting the control lines", __FUNCTION__);
- /* throttling off */
- spin_lock_irqsave(&priv->lock, flags);
- priv->rx_flags = 0;
- spin_unlock_irqrestore(&priv->lock, flags);
+ cypress_set_termios(port, &priv->tmp_termios);
- /* setup the port and
- * start reading from the device */
+ /* setup the port and start reading from the device */
if(!port->interrupt_in_urb){
err("%s - interrupt_in_urb is empty!", __FUNCTION__);
return(-1);
struct cypress_private *priv = usb_get_serial_port_data(port);
unsigned int c_cflag;
unsigned long flags;
+ int bps;
+ long timeout;
+ wait_queue_t wait;
dbg("%s - port %d", __FUNCTION__, port->number);
+ /* wait for data to drain from buffer */
+ spin_lock_irqsave(&priv->lock, flags);
+ timeout = CYPRESS_CLOSING_WAIT;
+ init_waitqueue_entry(&wait, current);
+ add_wait_queue(&port->tty->write_wait, &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (cypress_buf_data_avail(priv->buf) == 0
+ || timeout == 0 || signal_pending(current)
+ || !usb_get_intfdata(port->serial->interface))
+ break;
+ spin_unlock_irqrestore(&priv->lock, flags);
+ timeout = schedule_timeout(timeout);
+ spin_lock_irqsave(&priv->lock, flags);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&port->tty->write_wait, &wait);
+ /* clear out any remaining data in the buffer */
+ cypress_buf_clear(priv->buf);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ /* wait for characters to drain from device */
+ bps = tty_get_baud_rate(port->tty);
+ if (bps > 1200)
+ timeout = max((HZ*2560)/bps,HZ/10);
+ else
+ timeout = 2*HZ;
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(timeout);
+
+ dbg("%s - stopping urbs", __FUNCTION__);
+ usb_kill_urb (port->interrupt_in_urb);
+ usb_kill_urb (port->interrupt_out_urb);
+
if (port->tty) {
c_cflag = port->tty->termios->c_cflag;
if (c_cflag & HUPCL) {
}
}
- if (port->interrupt_in_urb) {
- dbg("%s - stopping read urb", __FUNCTION__);
- usb_kill_urb (port->interrupt_in_urb);
- }
-
if (stats)
dev_info (&port->dev, "Statistics: %d Bytes In | %d Bytes Out | %d Commands Issued\n",
priv->bytes_in, priv->bytes_out, priv->cmd_count);
{
struct cypress_private *priv = usb_get_serial_port_data(port);
unsigned long flags;
- struct urb *urb;
- int status, s_pos = 0;
- __u8 transfer_size = 0;
- __u8 *buffer;
-
- dbg("%s - port %d", __FUNCTION__, port->number);
+
+ dbg("%s - port %d, %d bytes", __FUNCTION__, port->number, count);
- spin_lock_irqsave(&priv->lock, flags);
- if (count == 0 && !priv->cmd_ctrl) {
- spin_unlock_irqrestore(&priv->lock, flags);
- dbg("%s - write request of 0 bytes", __FUNCTION__);
- return 0;
+ /* line control commands need to be executed immediately, so they
+ bypass the write buffer and are sent directly.
+ */
+ if (priv->cmd_ctrl) {
+ count = 0;
+ goto finish;
}
-
- if (priv->cmd_ctrl)
- ++priv->cmd_count;
- priv->cmd_ctrl = 0;
+
+ if (!count)
+ return count;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ count = cypress_buf_put(priv->buf, buf, count);
spin_unlock_irqrestore(&priv->lock, flags);
- dbg("%s - interrupt out size is %d", __FUNCTION__, port->interrupt_out_size);
- dbg("%s - count is %d", __FUNCTION__, count);
+finish:
+ cypress_send(port);
- /* Allocate buffer and urb */
- buffer = kmalloc (port->interrupt_out_size, GFP_ATOMIC);
- if (!buffer) {
- dev_err(&port->dev, "ran out of memory for buffer\n");
- return -ENOMEM;
- }
+ return count;
+} /* cypress_write */
- urb = usb_alloc_urb (0, GFP_ATOMIC);
- if (!urb) {
- dev_err(&port->dev, "failed allocating write urb\n");
- kfree (buffer);
- return -ENOMEM;
+
+static void cypress_send(struct usb_serial_port *port)
+{
+ int count = 0, result, offset, actual_size;
+ struct cypress_private *priv = usb_get_serial_port_data(port);
+ unsigned long flags;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+ dbg("%s - interrupt out size is %d", __FUNCTION__, port->interrupt_out_size);
+
+ spin_lock_irqsave(&priv->lock, flags);
+ if (priv->write_urb_in_use) {
+ dbg("%s - can't write, urb in use", __FUNCTION__);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return;
}
+ spin_unlock_irqrestore(&priv->lock, flags);
- memset(buffer, 0, port->interrupt_out_size); // test if this is needed... probably not since loop removed
+ /* clear buffer */
+ memset(port->interrupt_out_urb->transfer_buffer, 0, port->interrupt_out_size);
spin_lock_irqsave(&priv->lock, flags);
switch (port->interrupt_out_size) {
case 32:
// this is for the CY7C64013...
- transfer_size = min (count, 30);
- buffer[0] = priv->line_control;
- buffer[1] = transfer_size;
- s_pos = 2;
+ offset = 2;
+ port->interrupt_out_buffer[0] = priv->line_control;
break;
case 8:
// this is for the CY7C63743...
- transfer_size = min (count, 7);
- buffer[0] = priv->line_control | transfer_size;
- s_pos = 1;
+ offset = 1;
+ port->interrupt_out_buffer[0] = priv->line_control;
break;
default:
dbg("%s - wrong packet size", __FUNCTION__);
spin_unlock_irqrestore(&priv->lock, flags);
- kfree (buffer);
- usb_free_urb (urb);
- return -1;
+ return;
}
if (priv->line_control & CONTROL_RESET)
priv->line_control &= ~CONTROL_RESET;
- spin_unlock_irqrestore(&priv->lock, flags);
-
- /* copy data to offset position in urb transfer buffer */
- memcpy (&buffer[s_pos], buf, transfer_size);
- usb_serial_debug_data (debug, &port->dev, __FUNCTION__, port->interrupt_out_size, buffer);
+ if (priv->cmd_ctrl) {
+ priv->cmd_count++;
+ dbg("%s - line control command being issued", __FUNCTION__);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ goto send;
+ } else
+ spin_unlock_irqrestore(&priv->lock, flags);
- /* build up the urb */
- usb_fill_int_urb (urb, port->serial->dev,
- usb_sndintpipe(port->serial->dev, port->interrupt_out_endpointAddress),
- buffer, port->interrupt_out_size,
- cypress_write_int_callback, port, priv->write_interval);
+ count = cypress_buf_get(priv->buf, &port->interrupt_out_buffer[offset],
+ port->interrupt_out_size-offset);
- status = usb_submit_urb(urb, GFP_ATOMIC);
+ if (count == 0) {
+ return;
+ }
- if (status) {
- dev_err(&port->dev, "%s - usb_submit_urb (write interrupt) failed with status %d\n",
- __FUNCTION__, status);
- transfer_size = status;
- kfree (buffer);
- goto exit;
+ switch (port->interrupt_out_size) {
+ case 32:
+ port->interrupt_out_buffer[1] = count;
+ break;
+ case 8:
+ port->interrupt_out_buffer[0] |= count;
}
+ dbg("%s - count is %d", __FUNCTION__, count);
+
+send:
spin_lock_irqsave(&priv->lock, flags);
- priv->bytes_out += transfer_size;
+ priv->write_urb_in_use = 1;
spin_unlock_irqrestore(&priv->lock, flags);
-exit:
- /* buffer free'd in callback */
- usb_free_urb (urb);
+ if (priv->cmd_ctrl)
+ actual_size = 1;
+ else
+ actual_size = count + (port->interrupt_out_size == 32 ? 2 : 1);
+
+ usb_serial_debug_data(debug, &port->dev, __FUNCTION__, port->interrupt_out_size,
+ port->interrupt_out_urb->transfer_buffer);
- return transfer_size;
+ port->interrupt_out_urb->transfer_buffer_length = actual_size;
+ port->interrupt_out_urb->dev = port->serial->dev;
+ result = usb_submit_urb (port->interrupt_out_urb, GFP_ATOMIC);
+ if (result) {
+ dev_err(&port->dev, "%s - failed submitting write urb, error %d\n", __FUNCTION__,
+ result);
+ priv->write_urb_in_use = 0;
+ }
-} /* cypress_write */
+ spin_lock_irqsave(&priv->lock, flags);
+ if (priv->cmd_ctrl) {
+ priv->cmd_ctrl = 0;
+ }
+ priv->bytes_out += count; /* do not count the line control and size bytes */
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ schedule_work(&port->work);
+} /* cypress_send */
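+
+/*
+ * Packet layout sketch (illustration, derived from the switch above):
+ * on the 32-byte CY7C64013 a 5-byte write is submitted as
+ *   [line_control] [0x05] [d0 d1 d2 d3 d4]      (7 bytes)
+ * while on the 8-byte CY7C63743 the count is folded into byte 0:
+ *   [line_control | 0x05] [d0 d1 d2 d3 d4]      (6 bytes)
+ */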
+/* returns how much space is available in the soft buffer */
static int cypress_write_room(struct usb_serial_port *port)
{
+ struct cypress_private *priv = usb_get_serial_port_data(port);
+ int room = 0;
+ unsigned long flags;
+
dbg("%s - port %d", __FUNCTION__, port->number);
- /*
- * We really can take anything the user throw at us
- * but let's pick a nice big number to tell the tty
- * layer that we have lots of free space
- */
+ spin_lock_irqsave(&priv->lock, flags);
+ room = cypress_buf_space_avail(priv->buf);
+ spin_unlock_irqrestore(&priv->lock, flags);
- return 2048;
+ dbg("%s - returns %d", __FUNCTION__, room);
+ return room;
}
int data_bits, stop_bits, parity_type, parity_enable;
unsigned cflag, iflag, baud_mask;
unsigned long flags;
+ __u8 oldlines;
+ int linechange;
dbg("%s - port %d", __FUNCTION__, port->number);
data_bits = 3;
spin_lock_irqsave(&priv->lock, flags);
+ oldlines = priv->line_control;
if ((cflag & CBAUD) == B0) {
/* drop dtr and rts */
dbg("%s - dropping the lines, baud rate 0bps", __FUNCTION__);
}
priv->line_control |= CONTROL_DTR;
- /* this is probably not what I think it is... check into it */
- if (cflag & CRTSCTS)
- priv->line_control |= CONTROL_RTS;
- else
- priv->line_control &= ~CONTROL_RTS;
+ /* toggle CRTSCTS? - don't do this if being called from cypress_open */
+ if (!priv->calledfromopen) {
+ if (cflag & CRTSCTS)
+ priv->line_control |= CONTROL_RTS;
+ else
+ priv->line_control &= ~CONTROL_RTS;
+ }
}
spin_unlock_irqrestore(&priv->lock, flags);
/* Here we can define custom tty settings for devices
*
- * the main tty base comes from empeg.c
+ * the main tty termios flag base comes from empeg.c
*/
spin_lock_irqsave(&priv->lock, flags);
// Software app handling it for device...
- } else {
-
- /* do something here */
-
}
+ linechange = (priv->line_control != oldlines);
spin_unlock_irqrestore(&priv->lock, flags);
- /* set lines */
- priv->cmd_ctrl = 1;
- cypress_write(port, NULL, 0);
+ /* if necessary, set lines */
+ if (!priv->calledfromopen && linechange) {
+ priv->cmd_ctrl = 1;
+ cypress_write(port, NULL, 0);
+ }
+
+ if (priv->calledfromopen)
+ priv->calledfromopen = 0;
- return;
} /* cypress_set_termios */
-
+
+/* returns amount of data still left in soft buffer */
static int cypress_chars_in_buffer(struct usb_serial_port *port)
{
- dbg("%s - port %d", __FUNCTION__, port->number);
+ struct cypress_private *priv = usb_get_serial_port_data(port);
+ int chars = 0;
+ unsigned long flags;
- /*
- * We can't really account for how much data we
- * have sent out, but hasn't made it through to the
- * device, so just tell the tty layer that everything
- * is flushed.
- */
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ spin_lock_irqsave(&priv->lock, flags);
+ chars = cypress_buf_data_avail(priv->buf);
+ spin_unlock_irqrestore(&priv->lock, flags);
- return 0;
+ dbg("%s - returns %d", __FUNCTION__, chars);
+ return chars;
}
struct tty_struct *tty;
unsigned char *data = urb->transfer_buffer;
unsigned long flags;
- char tty_flag = TTY_NORMAL;
- int bytes=0;
+ char tty_flag = TTY_NORMAL;
+ int havedata = 0;
+ int bytes = 0;
int result;
- int i=0;
+ int i = 0;
dbg("%s - port %d", __FUNCTION__, port->number);
spin_lock_irqsave(&priv->lock, flags);
if (priv->rx_flags & THROTTLED) {
+ dbg("%s - now throttling", __FUNCTION__);
priv->rx_flags |= ACTUALLY_THROTTLED;
spin_unlock_irqrestore(&priv->lock, flags);
return;
return;
}
- usb_serial_debug_data (debug, &port->dev, __FUNCTION__, urb->actual_length, data);
-
spin_lock_irqsave(&priv->lock, flags);
switch(urb->actual_length) {
case 32:
priv->current_status = data[0] & 0xF8;
bytes = data[1]+2;
i=2;
+ if (bytes > 2)
+ havedata = 1;
break;
case 8:
// This is for the CY7C63743...
priv->current_status = data[0] & 0xF8;
bytes = (data[0] & 0x07)+1;
i=1;
+ if (bytes > 1)
+ havedata = 1;
break;
default:
dbg("%s - wrong packet size - received %d bytes", __FUNCTION__, urb->actual_length);
}
spin_unlock_irqrestore(&priv->lock, flags);
+ usb_serial_debug_data (debug, &port->dev, __FUNCTION__, urb->actual_length, data);
+
spin_lock_irqsave(&priv->lock, flags);
/* check to see if status has changed */
if (priv != NULL) {
/* process read if there is data other than line status */
if (tty && (bytes > i)) {
for (; i < bytes ; ++i) {
- dbg("pushing byte number %d - %d",i,data[i]);
+ dbg("pushing byte number %d - %d - %c",i,data[i],data[i]);
if(tty->flip.count >= TTY_FLIPBUF_SIZE) {
tty_flip_buffer_push(tty);
}
static void cypress_write_int_callback(struct urb *urb, struct pt_regs *regs)
{
struct usb_serial_port *port = (struct usb_serial_port *)urb->context;
-
- /* free up the transfer buffer, as usb_free_urb() does not do this */
- kfree (urb->transfer_buffer);
+ struct cypress_private *priv = usb_get_serial_port_data(port);
+ int result;
dbg("%s - port %d", __FUNCTION__, port->number);
- if (urb->status) {
- dbg("%s - nonzero write status received: %d", __FUNCTION__, urb->status);
- return;
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ /* this urb is terminated, clean up */
+ dbg("%s - urb shutting down with status: %d", __FUNCTION__, urb->status);
+ priv->write_urb_in_use = 0;
+ return;
+ default:
+ /* error in the urb, so we have to resubmit it */
+ dbg("%s - Overflow in write", __FUNCTION__);
+ dbg("%s - nonzero write bulk status received: %d", __FUNCTION__, urb->status);
+ port->interrupt_out_urb->transfer_buffer_length = 1;
+ port->interrupt_out_urb->dev = port->serial->dev;
+ result = usb_submit_urb(port->interrupt_out_urb, GFP_ATOMIC);
+ if (result)
+ dev_err(&urb->dev->dev, "%s - failed resubmitting write urb, error %d\n",
+ __FUNCTION__, result);
+ else
+ return;
}
+
+ priv->write_urb_in_use = 0;
+
+ /* send any buffered data */
+ cypress_send(port);
+}
- schedule_work(&port->work);
+
+/*****************************************************************************
+ * Write buffer functions - buffering code from pl2303 used
+ *****************************************************************************/
+
+/*
+ * cypress_buf_alloc
+ *
+ * Allocate a circular buffer and all associated memory.
+ */
+
+static struct cypress_buf *cypress_buf_alloc(unsigned int size)
+{
+
+ struct cypress_buf *cb;
+
+
+ if (size == 0)
+ return NULL;
+
+ cb = (struct cypress_buf *)kmalloc(sizeof(struct cypress_buf), GFP_KERNEL);
+ if (cb == NULL)
+ return NULL;
+
+ cb->buf_buf = kmalloc(size, GFP_KERNEL);
+ if (cb->buf_buf == NULL) {
+ kfree(cb);
+ return NULL;
+ }
+
+ cb->buf_size = size;
+ cb->buf_get = cb->buf_put = cb->buf_buf;
+
+ return cb;
+
+}
+
+
+/*
+ * cypress_buf_free
+ *
+ * Free the buffer and all associated memory.
+ */
+
+static void cypress_buf_free(struct cypress_buf *cb)
+{
+ if (cb != NULL) {
+ if (cb->buf_buf != NULL)
+ kfree(cb->buf_buf);
+ kfree(cb);
+ }
+}
+
+
+/*
+ * cypress_buf_clear
+ *
+ * Clear out all data in the circular buffer.
+ */
+
+static void cypress_buf_clear(struct cypress_buf *cb)
+{
+ if (cb != NULL)
+ cb->buf_get = cb->buf_put;
+ /* equivalent to a get of all data available */
}
+/*
+ * cypress_buf_data_avail
+ *
+ * Return the number of bytes of data available in the circular
+ * buffer.
+ */
+
+static unsigned int cypress_buf_data_avail(struct cypress_buf *cb)
+{
+ if (cb != NULL)
+ return ((cb->buf_size + cb->buf_put - cb->buf_get) % cb->buf_size);
+ else
+ return 0;
+}
+
+
+/*
+ * cypress_buf_space_avail
+ *
+ * Return the number of bytes of space available in the circular
+ * buffer.
+ */
+
+static unsigned int cypress_buf_space_avail(struct cypress_buf *cb)
+{
+ if (cb != NULL)
+ return ((cb->buf_size + cb->buf_get - cb->buf_put - 1) % cb->buf_size);
+ else
+ return 0;
+}
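+
+/*
+ * Worked example of the index arithmetic above (illustrative values):
+ * with buf_size = 8, buf_put at offset 5 and buf_get at offset 2,
+ *
+ *   data_avail  = (8 + 5 - 2) % 8     = 3 bytes queued
+ *   space_avail = (8 + 2 - 5 - 1) % 8 = 4 bytes free
+ *
+ * One slot is always left unused so that a full buffer can be told
+ * apart from an empty one (buf_put == buf_get means empty).
+ */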
+
+
+/*
+ * cypress_buf_put
+ *
+ * Copy data from a user buffer and put it into the circular buffer.
+ * Restrict to the amount of space available.
+ *
+ * Return the number of bytes copied.
+ */
+
+static unsigned int cypress_buf_put(struct cypress_buf *cb, const char *buf,
+ unsigned int count)
+{
+
+ unsigned int len;
+
+
+ if (cb == NULL)
+ return 0;
+
+ len = cypress_buf_space_avail(cb);
+ if (count > len)
+ count = len;
+
+ if (count == 0)
+ return 0;
+
+ len = cb->buf_buf + cb->buf_size - cb->buf_put;
+ if (count > len) {
+ memcpy(cb->buf_put, buf, len);
+ memcpy(cb->buf_buf, buf+len, count - len);
+ cb->buf_put = cb->buf_buf + count - len;
+ } else {
+ memcpy(cb->buf_put, buf, count);
+ if (count < len)
+ cb->buf_put += count;
+ else /* count == len */
+ cb->buf_put = cb->buf_buf;
+ }
+
+ return count;
+
+}
+
+
+/*
+ * cypress_buf_get
+ *
+ * Get data from the circular buffer and copy to the given buffer.
+ * Restrict to the amount of data available.
+ *
+ * Return the number of bytes copied.
+ */
+
+static unsigned int cypress_buf_get(struct cypress_buf *cb, char *buf,
+ unsigned int count)
+{
+
+ unsigned int len;
+
+
+ if (cb == NULL)
+ return 0;
+
+ len = cypress_buf_data_avail(cb);
+ if (count > len)
+ count = len;
+
+ if (count == 0)
+ return 0;
+
+ len = cb->buf_buf + cb->buf_size - cb->buf_get;
+ if (count > len) {
+ memcpy(buf, cb->buf_get, len);
+ memcpy(buf+len, cb->buf_buf, count - len);
+ cb->buf_get = cb->buf_buf + count - len;
+ } else {
+ memcpy(buf, cb->buf_get, count);
+ if (count < len)
+ cb->buf_get += count;
+ else /* count == len */
+ cb->buf_get = cb->buf_buf;
+ }
+
+ return count;
+
+}
+
/*****************************************************************************
* Module functions
*****************************************************************************/
MODULE_AUTHOR( DRIVER_AUTHOR );
MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_VERSION( DRIVER_VERSION );
MODULE_LICENSE("GPL");
module_param(debug, bool, S_IRUGO | S_IWUSR);
--- /dev/null
+/*
+ * Garmin GPS driver
+ *
+ * Copyright (C) 2004 Hermann Kneissel herkne@users.sourceforge.net
+ *
+ * The latest version of the driver can be found at
+ * http://sourceforge.net/projects/garmin-gps/
+ *
+ * This driver has been derived from v2.1 of the visor driver.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111 USA
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/timer.h>
+#include <linux/tty.h>
+#include <linux/tty_driver.h>
+#include <linux/tty_flip.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <asm/uaccess.h>
+#include <linux/usb.h>
+
+/* the mode to be set when the port is opened */
+static int initial_mode = 1;
+
+/* debug flag */
+static int debug = 0;
+
+#include "usb-serial.h"
+
+#define GARMIN_VENDOR_ID 0x091E
+
+/*
+ * Version Information
+ */
+
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 23
+
+#define _STR(s) #s
+#define _DRIVER_VERSION(a,b) "v" _STR(a) "." _STR(b)
+#define DRIVER_VERSION _DRIVER_VERSION(VERSION_MAJOR, VERSION_MINOR)
+#define DRIVER_AUTHOR "hermann kneissel"
+#define DRIVER_DESC "garmin gps driver"
+
+/* error codes returned by the driver */
+#define EINVPKT 1000 /* invalid packet structure */
+
+
+// size of the header of a packet using the usb protocol
+#define GARMIN_PKTHDR_LENGTH 12
+
+// max. possible size of a packet using the serial protocol
+#define MAX_SERIAL_PKT_SIZ (3+255+3)
+
+// max. possible size of a packet with worst case stuffing
+#define MAX_SERIAL_PKT_SIZ_STUFFED MAX_SERIAL_PKT_SIZ+256
+
+// size of a buffer able to hold a complete (no stuffing) packet
+// (the documented protocol does not contain packets with a larger
+// size, but in theory a packet may be 64k+12 bytes - if larger
+// packet sizes occur in later protocol versions, this value
+// should be increased accordingly, so the input buffer is always
+// large enough to store a complete packet including the header)
+#define GPS_IN_BUFSIZ (GARMIN_PKTHDR_LENGTH+MAX_SERIAL_PKT_SIZ)
+
+// size of a buffer able to hold a complete (incl. stuffing) packet
+#define GPS_OUT_BUFSIZ (GARMIN_PKTHDR_LENGTH+MAX_SERIAL_PKT_SIZ_STUFFED)
+
+// where to place the packet id of a serial packet, so we can
+// prepend the usb-packet header without the need to move the
+// packet's data
+#define GSP_INITIAL_OFFSET (GARMIN_PKTHDR_LENGTH-2)
+
+// max. size of incoming private packets (header+1 param)
+#define PRIVPKTSIZ (GARMIN_PKTHDR_LENGTH+4)
+
+#define GARMIN_LAYERID_TRANSPORT 0
+#define GARMIN_LAYERID_APPL 20
+// our own layer-id to use for some control mechanisms
+#define GARMIN_LAYERID_PRIVATE 0x01106E4B
+
+#define GARMIN_PKTID_PVT_DATA 51
+#define GARMIN_PKTID_L001_COMMAND_DATA 10
+
+#define CMND_ABORT_TRANSFER 0
+
+// packet ids used in private layer
+#define PRIV_PKTID_SET_DEBUG 1
+#define PRIV_PKTID_SET_MODE 2
+#define PRIV_PKTID_INFO_REQ 3
+#define PRIV_PKTID_INFO_RESP 4
+#define PRIV_PKTID_RESET_REQ 5
+#define PRIV_PKTID_SET_DEF_MODE 6
+
+
+#define ETX 0x03
+#define DLE 0x10
+#define ACK 0x06
+#define NAK 0x15
+
+/* structure used to queue incoming packets */
+struct garmin_packet {
+ struct list_head list;
+ int seq;
+ int size; // the real size of the data array, always > 0
+ __u8 data[1];
+};
+
+/* structure used to keep the current state of the driver */
+struct garmin_data {
+ __u8 state;
+ __u16 flags;
+ __u8 mode;
+ __u8 ignorePkts;
+ __u8 count;
+ __u8 pkt_id;
+ __u32 serial_num;
+ struct timer_list timer;
+ struct usb_serial_port *port;
+ int seq_counter;
+ int insize;
+ int outsize;
+ __u8 inbuffer [GPS_IN_BUFSIZ]; /* tty -> usb */
+ __u8 outbuffer[GPS_OUT_BUFSIZ]; /* usb -> tty */
+ __u8 privpkt[4*6];
+ spinlock_t lock;
+ struct list_head pktlist;
+};
+
+
+#define STATE_NEW 0
+#define STATE_INITIAL_DELAY 1
+#define STATE_TIMEOUT 2
+#define STATE_SESSION_REQ1 3
+#define STATE_SESSION_REQ2 4
+#define STATE_ACTIVE 5
+
+#define STATE_RESET 8
+#define STATE_DISCONNECTED 9
+#define STATE_WAIT_TTY_ACK 10
+#define STATE_GSP_WAIT_DATA 11
+
+#define MODE_NATIVE 0
+#define MODE_GARMIN_SERIAL 1
+
+// Flags used in garmin_data.flags:
+#define FLAGS_SESSION_REPLY_MASK 0x00C0
+#define FLAGS_SESSION_REPLY1_SEEN 0x0080
+#define FLAGS_SESSION_REPLY2_SEEN 0x0040
+#define FLAGS_BULK_IN_ACTIVE 0x0020
+#define FLAGS_THROTTLED 0x0010
+#define CLEAR_HALT_REQUIRED 0x0001
+
+#define FLAGS_QUEUING 0x0100
+#define FLAGS_APP_RESP_SEEN 0x0200
+#define FLAGS_APP_REQ_SEEN 0x0400
+#define FLAGS_DROP_DATA 0x0800
+
+#define FLAGS_GSP_SKIP 0x1000
+#define FLAGS_GSP_DLESEEN 0x2000
+
+
+
+
+
+
+/* function prototypes */
+static void gsp_next_packet(struct garmin_data * garmin_data_p);
+static int garmin_write_bulk(struct usb_serial_port *port,
+ const unsigned char *buf, int count);
+
+/* some special packets to be sent or received */
+static unsigned char const GARMIN_START_SESSION_REQ[]
+ = { 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0 };
+static unsigned char const GARMIN_START_SESSION_REQ2[]
+ = { 0, 0, 0, 0, 16, 0, 0, 0, 0, 0, 0, 0 };
+static unsigned char const GARMIN_START_SESSION_REPLY[]
+ = { 0, 0, 0, 0, 6, 0, 0, 0, 4, 0, 0, 0 };
+static unsigned char const GARMIN_SESSION_ACTIVE_REPLY[]
+ = { 0, 0, 0, 0, 17, 0, 0, 0, 4, 0, 0, 0, 0, 16, 0, 0 };
+static unsigned char const GARMIN_BULK_IN_AVAIL_REPLY[]
+ = { 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0 };
+static unsigned char const GARMIN_APP_LAYER_REPLY[]
+ = { 0x14, 0, 0, 0 };
+static unsigned char const GARMIN_START_PVT_REQ[]
+ = { 20, 0, 0, 0, 10, 0, 0, 0, 2, 0, 0, 0, 49, 0 };
+static unsigned char const GARMIN_STOP_PVT_REQ[]
+ = { 20, 0, 0, 0, 10, 0, 0, 0, 2, 0, 0, 0, 50, 0 };
+static unsigned char const GARMIN_STOP_TRANSFER_REQ[]
+ = { 20, 0, 0, 0, 10, 0, 0, 0, 2, 0, 0, 0, 0, 0 };
+static unsigned char const GARMIN_STOP_TRANSFER_REQ_V2[]
+ = { 20, 0, 0, 0, 10, 0, 0, 0, 1, 0, 0, 0, 0 };
+static unsigned char const PRIVATE_REQ[]
+ = { 0x4B, 0x6E, 0x10, 0x01, 0xFF, 0, 0, 0, 0xFF, 0, 0, 0 };
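+
+/*
+ * Note (illustration): these canned packets follow the usb packet layout
+ * decoded by the accessors below - little-endian 32-bit layer-id,
+ * packet-id and data-length, followed by the payload.  PRIVATE_REQ, for
+ * instance, starts with 0x4B 0x6E 0x10 0x01, the little-endian encoding
+ * of GARMIN_LAYERID_PRIVATE (0x01106E4B); GARMIN_APP_LAYER_REPLY is only
+ * the 4-byte layer-id prefix used for comparisons.
+ */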
+
+
+
+static struct usb_device_id id_table [] = {
+ /* the same device id seems to be used by all usb enabled gps devices */
+ { USB_DEVICE(GARMIN_VENDOR_ID, 3 ) },
+ { } /* Terminating entry */
+};
+
+MODULE_DEVICE_TABLE (usb, id_table);
+
+static struct usb_driver garmin_driver = {
+ .owner = THIS_MODULE,
+ .name = "garmin_gps",
+ .probe = usb_serial_probe,
+ .disconnect = usb_serial_disconnect,
+ .id_table = id_table,
+};
+
+
+static inline int noResponseFromAppLayer(struct garmin_data * garmin_data_p)
+{
+ return ((garmin_data_p->flags
+ & (FLAGS_APP_REQ_SEEN|FLAGS_APP_RESP_SEEN))
+ == FLAGS_APP_REQ_SEEN);
+}
+
+
+static inline int getLayerId(const __u8 *usbPacket)
+{
+ return __le32_to_cpup((__le32 *)(usbPacket));
+}
+
+static inline int getPacketId(const __u8 *usbPacket)
+{
+ return __le32_to_cpup((__le32 *)(usbPacket+4));
+}
+
+static inline int getDataLength(const __u8 *usbPacket)
+{
+ return __le32_to_cpup((__le32 *)(usbPacket+8));
+}
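+
+/*
+ * Worked example: applied to GARMIN_START_SESSION_REQ
+ * (0, 0, 0, 0,  5, 0, 0, 0,  0, 0, 0, 0) these accessors return
+ * layer-id 0 (GARMIN_LAYERID_TRANSPORT), packet-id 5 and
+ * data-length 0 - a 12-byte transport packet with no payload.
+ */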
+
+
+/*
+ * check if the usb-packet in buf contains an abort-transfer command.
+ * (if yes, all queued data will be dropped)
+ */
+static inline int isAbortTrfCmnd(const unsigned char *buf)
+{
+ if (0 == memcmp(buf, GARMIN_STOP_TRANSFER_REQ,
+ sizeof(GARMIN_STOP_TRANSFER_REQ)) ||
+ 0 == memcmp(buf, GARMIN_STOP_TRANSFER_REQ_V2,
+ sizeof(GARMIN_STOP_TRANSFER_REQ_V2)))
+ return 1;
+ else
+ return 0;
+}
+
+
+
+static void send_to_tty(struct usb_serial_port *port,
+ char *data, unsigned int actual_length)
+{
+ struct tty_struct *tty = port->tty;
+ int i;
+
+ if (tty && actual_length) {
+
+ usb_serial_debug_data(debug, &port->dev,
+ __FUNCTION__, actual_length, data);
+
+ for (i = 0; i < actual_length ; ++i) {
+ /* if we insert more than TTY_FLIPBUF_SIZE characters,
+ we drop them. */
+ if(tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ tty_flip_buffer_push(tty);
+ }
+ /* this doesn't actually push the data through unless
+ tty->low_latency is set */
+ tty_insert_flip_char(tty, data[i], 0);
+ }
+ tty_flip_buffer_push(tty);
+ }
+}
+
+
+/******************************************************************************
+ * packet queue handling
+ ******************************************************************************/
+
+/*
+ * queue a received (usb-)packet for later processing
+ */
+static int pkt_add(struct garmin_data * garmin_data_p,
+ unsigned char *data, unsigned int data_length)
+{
+ int result = 0;
+ unsigned long flags;
+ struct garmin_packet *pkt;
+
+ /* process only packets containing data ... */
+ if (data_length) {
+ garmin_data_p->flags |= FLAGS_QUEUING;
+ pkt = kmalloc(sizeof(struct garmin_packet)+data_length,
+ GFP_ATOMIC);
+ if (pkt == NULL) {
+ dev_err(&garmin_data_p->port->dev, "out of memory\n");
+ return 0;
+ }
+ pkt->size = data_length;
+ memcpy(pkt->data, data, data_length);
+
+ spin_lock_irqsave(&garmin_data_p->lock, flags);
+ result = list_empty(&garmin_data_p->pktlist);
+ pkt->seq = garmin_data_p->seq_counter++;
+ list_add_tail(&pkt->list, &garmin_data_p->pktlist);
+ spin_unlock_irqrestore(&garmin_data_p->lock, flags);
+
+ /* in serial mode, if someone is waiting for data from
+ the device, convert and send the next packet to the tty. */
+ if (result && (garmin_data_p->state == STATE_GSP_WAIT_DATA)) {
+ gsp_next_packet(garmin_data_p);
+ }
+ }
+ return result;
+}
+
+
+/* get the next pending packet */
+static struct garmin_packet *pkt_pop(struct garmin_data * garmin_data_p)
+{
+ unsigned long flags;
+ struct garmin_packet *result = NULL;
+
+ spin_lock_irqsave(&garmin_data_p->lock, flags);
+ if (!list_empty(&garmin_data_p->pktlist)) {
+ result = (struct garmin_packet *)garmin_data_p->pktlist.next;
+ list_del(&result->list);
+ }
+ spin_unlock_irqrestore(&garmin_data_p->lock, flags);
+ return result;
+}
+
+
+/* free up all queued data */
+static void pkt_clear(struct garmin_data * garmin_data_p)
+{
+ unsigned long flags;
+ struct garmin_packet *result = NULL;
+
+ dbg("%s", __FUNCTION__);
+
+ spin_lock_irqsave(&garmin_data_p->lock, flags);
+ while (!list_empty(&garmin_data_p->pktlist)) {
+ result = (struct garmin_packet *)garmin_data_p->pktlist.next;
+ list_del(&result->list);
+ kfree(result);
+ }
+ spin_unlock_irqrestore(&garmin_data_p->lock, flags);
+}
+
+
+/******************************************************************************
+ * garmin serial protocol handling
+ ******************************************************************************/
+
+/* send an ack packet back to the tty */
+static int gsp_send_ack(struct garmin_data * garmin_data_p, __u8 pkt_id)
+{
+ __u8 pkt[10];
+ __u8 cksum = 0;
+ __u8 *ptr = pkt;
+ unsigned l = 0;
+
+ dbg("%s - pkt-id: 0x%X.", __FUNCTION__, 0xFF & pkt_id);
+
+ *ptr++ = DLE;
+ *ptr++ = ACK;
+ cksum += ACK;
+
+ *ptr++ = 2;
+ cksum += 2;
+
+ *ptr++ = pkt_id;
+ cksum += pkt_id;
+
+ if (pkt_id == DLE) {
+ *ptr++ = DLE;
+ }
+
+ *ptr++ = 0;
+ *ptr++ = 0xFF & (-cksum);
+ *ptr++ = DLE;
+ *ptr++ = ETX;
+
+ l = ptr-pkt;
+
+ send_to_tty(garmin_data_p->port, pkt, l);
+ return 0;
+}
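+
+/*
+ * Example (worked from the code above): for pkt_id 0x0A the ack frame is
+ *   DLE ACK 0x02 0x0A 0x00 0xEE DLE ETX
+ * i.e. 0x10 0x06 0x02 0x0A 0x00 0xEE 0x10 0x03, where
+ * 0xEE == 0xFF & -(ACK + 2 + pkt_id).
+ */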
+
+
+
+/*
+ * called for a complete packet received from tty layer
+ *
+ * the complete packet (pkt-id ... cksum) is in garmin_data_p->inbuffer starting
+ * at GSP_INITIAL_OFFSET.
+ *
+ * count - number of bytes in the input buffer including space reserved for
+ * the usb header: GSP_INITIAL_OFFSET + number of bytes in packet
+ * (including pkt-id, data-length and cksum)
+ */
+static int gsp_rec_packet(struct garmin_data * garmin_data_p, int count)
+{
+ const __u8* recpkt = garmin_data_p->inbuffer+GSP_INITIAL_OFFSET;
+ __le32 *usbdata = (__le32 *) garmin_data_p->inbuffer;
+
+ int cksum = 0;
+ int n = 0;
+ int pktid = recpkt[0];
+ int size = recpkt[1];
+
+ usb_serial_debug_data(debug, &garmin_data_p->port->dev,
+ __FUNCTION__, count-GSP_INITIAL_OFFSET, recpkt);
+
+ if (size != (count-GSP_INITIAL_OFFSET-3)) {
+ dbg("%s - invalid size, expected %d bytes, got %d",
+ __FUNCTION__, size, (count-GSP_INITIAL_OFFSET-3));
+ return -EINVPKT;
+ }
+
+ cksum += *recpkt++;
+ cksum += *recpkt++;
+
+ // sanity check, remove after test ...
+ if ((__u8*)&(usbdata[3]) != recpkt) {
+ dbg("%s - ptr mismatch %p - %p",
+ __FUNCTION__, &(usbdata[4]), recpkt);
+ return -EINVPKT;
+ }
+
+ while (n < size) {
+ cksum += *recpkt++;
+ n++;
+ }
+
+ if ((0xff & (cksum + *recpkt)) != 0) {
+ dbg("%s - invalid checksum, expected %02x, got %02x",
+ __FUNCTION__, 0xff & -cksum, 0xff & *recpkt);
+ return -EINVPKT;
+ }
+
+ usbdata[0] = __cpu_to_le32(GARMIN_LAYERID_APPL);
+ usbdata[1] = __cpu_to_le32(pktid);
+ usbdata[2] = __cpu_to_le32(size);
+
+ garmin_write_bulk (garmin_data_p->port, garmin_data_p->inbuffer,
+ GARMIN_PKTHDR_LENGTH+size);
+
+ /* if this was an abort-transfer command, flush all
+ queued data. */
+ if (isAbortTrfCmnd(garmin_data_p->inbuffer)) {
+ garmin_data_p->flags |= FLAGS_DROP_DATA;
+ pkt_clear(garmin_data_p);
+ }
+
+ return count;
+}
+
+
+
+/*
+ * Called for data received from tty
+ *
+ * buf contains the data read, it may span more than one packet or even
+ * incomplete packets
+ *
+ * input record should be a serial-record, but it may not be complete.
+ * Copy it into our local buffer, until an etx is seen (or an error
+ * occurs).
+ * Once the record is complete, convert into a usb packet and send it
+ * to the bulk pipe, send an ack back to the tty.
+ *
+ * If the input is an ack, just send the last queued packet to the
+ * tty layer.
+ *
+ * if the input is an abort command, drop all queued data.
+ */
+
+static int gsp_receive(struct garmin_data * garmin_data_p,
+ const unsigned char *buf, int count)
+{
+ int offs = 0;
+ int ack_or_nak_seen = 0;
+ __u8 *dest = garmin_data_p->inbuffer;
+ int size = garmin_data_p->insize;
+ // dleSeen: set if last byte read was a DLE
+ int dleSeen = garmin_data_p->flags & FLAGS_GSP_DLESEEN;
+ // skip: if set, skip incoming data until possible start of
+ // new packet
+ int skip = garmin_data_p->flags & FLAGS_GSP_SKIP;
+ __u8 data;
+
+ dbg("%s - dle=%d skip=%d size=%d count=%d",
+ __FUNCTION__, dleSeen, skip, size, count);
+
+ if (size == 0) {
+ size = GSP_INITIAL_OFFSET;
+ }
+
+ while (offs < count) {
+
+ data = *(buf+offs);
+ offs ++;
+
+ if (data == DLE) {
+ if (skip) { /* start of a new pkt */
+ skip = 0;
+ size = GSP_INITIAL_OFFSET;
+ dleSeen = 1;
+ } else if (dleSeen) {
+ dest[size++] = data;
+ dleSeen = 0;
+ } else {
+ dleSeen = 1;
+ }
+ } else if (data == ETX) {
+ if (dleSeen) {
+ /* packet complete */
+
+ data = dest[GSP_INITIAL_OFFSET];
+
+ if (data == ACK) {
+ ack_or_nak_seen = ACK;
+ dbg("ACK packet complete.");
+ } else if (data == NAK) {
+ ack_or_nak_seen = NAK;
+ dbg("NAK packet complete.");
+ } else {
+ dbg("packet complete "
+ "- id=0x%X.",
+ 0xFF & data);
+ gsp_rec_packet(garmin_data_p, size);
+ }
+
+ skip = 1;
+ size = GSP_INITIAL_OFFSET;
+ dleSeen = 0;
+ } else {
+ dest[size++] = data;
+ }
+ } else if (!skip) {
+
+ if (dleSeen) {
+ dbg("non-masked DLE at %d - restarting", i);
+ size = GSP_INITIAL_OFFSET;
+ dleSeen = 0;
+ }
+
+ dest[size++] = data;
+ }
+
+ if (size >= GPS_IN_BUFSIZ) {
+ dbg("%s - packet too large.", __FUNCTION__);
+ skip = 1;
+ size = GSP_INITIAL_OFFSET;
+ dleSeen = 0;
+ }
+ }
+
+ garmin_data_p->insize = size;
+
+ // copy flags back to structure
+ if (skip)
+ garmin_data_p->flags |= FLAGS_GSP_SKIP;
+ else
+ garmin_data_p->flags &= ~FLAGS_GSP_SKIP;
+
+ if (dleSeen)
+ garmin_data_p->flags |= FLAGS_GSP_DLESEEN;
+ else
+ garmin_data_p->flags &= ~FLAGS_GSP_DLESEEN;
+
+ if (ack_or_nak_seen) {
+ garmin_data_p->state = STATE_GSP_WAIT_DATA;
+ gsp_next_packet(garmin_data_p);
+ }
+
+ return count;
+}
+
+
+
+
+/*
+ * Sends a usb packet to the tty
+ *
+ * Assumes that all packets end on a usb-packet boundary.
+ *
+ * return <0 on error, 0 if packet is incomplete or > 0 if packet was sent
+ */
+int gsp_send(struct garmin_data * garmin_data_p, const unsigned char *buf,
+ int count)
+{
+ const unsigned char *src;
+ unsigned char *dst;
+ int pktid = 0;
+ int datalen = 0;
+ int cksum = 0;
+ int i=0;
+ int k;
+
+ dbg("%s - state %d - %d bytes.", __FUNCTION__,
+ garmin_data_p->state, count);
+
+ k = garmin_data_p->outsize;
+ if ((k+count) > GPS_OUT_BUFSIZ) {
+ dbg("packet too large");
+ garmin_data_p->outsize = 0;
+ return -4;
+ }
+
+ memcpy(garmin_data_p->outbuffer+k, buf, count);
+ k += count;
+ garmin_data_p->outsize = k;
+
+ if (k >= GARMIN_PKTHDR_LENGTH) {
+ pktid = getPacketId(garmin_data_p->outbuffer);
+ datalen= getDataLength(garmin_data_p->outbuffer);
+ i = GARMIN_PKTHDR_LENGTH + datalen;
+ if (k < i)
+ return 0;
+ } else {
+ return 0;
+ }
+
+ dbg("%s - %d bytes in buffer, %d bytes in pkt.", __FUNCTION__,
+ k, i);
+
+ /* garmin_data_p->outbuffer now contains a complete packet */
+
+ usb_serial_debug_data(debug, &garmin_data_p->port->dev,
+ __FUNCTION__, k, garmin_data_p->outbuffer);
+
+ garmin_data_p->outsize = 0;
+
+ if (GARMIN_LAYERID_APPL != getLayerId(garmin_data_p->outbuffer)) {
+ dbg("not an application packet (%d)",
+ getLayerId(garmin_data_p->outbuffer));
+ return -1;
+ }
+
+ if (pktid > 255) {
+ dbg("packet-id %d too large", pktid);
+ return -2;
+ }
+
+ if (datalen > 255) {
+ dbg("packet-size %d too large", datalen);
+ return -3;
+ }
+
+ /* the serial protocol should be able to handle this packet */
+
+ k = 0;
+ src = garmin_data_p->outbuffer+GARMIN_PKTHDR_LENGTH;
+ for (i=0; i<datalen; i++) {
+ if (*src++ == DLE)
+ k++;
+ }
+
+ src = garmin_data_p->outbuffer+GARMIN_PKTHDR_LENGTH;
+ if (k > (GARMIN_PKTHDR_LENGTH-2)) {
+ /* can't add stuffing DLEs in place, move data to end
+ of buffer ... */
+ dst = garmin_data_p->outbuffer+GPS_OUT_BUFSIZ-datalen;
+ memcpy(dst, src, datalen);
+ src = dst;
+ }
+
+ dst = garmin_data_p->outbuffer;
+
+ *dst++ = DLE;
+ *dst++ = pktid;
+ cksum += pktid;
+ *dst++ = datalen;
+ cksum += datalen;
+ if (datalen == DLE)
+ *dst++ = DLE;
+
+ for (i=0; i<datalen; i++) {
+ __u8 c = *src++;
+ *dst++ = c;
+ cksum += c;
+ if (c == DLE)
+ *dst++ = DLE;
+ }
+
+ cksum = 0xFF & -cksum;
+ *dst++ = cksum;
+ if (cksum == DLE)
+ *dst++ = DLE;
+ *dst++ = DLE;
+ *dst++ = ETX;
+
+ i = dst-garmin_data_p->outbuffer;
+
+ send_to_tty(garmin_data_p->port, garmin_data_p->outbuffer, i);
+
+ garmin_data_p->pkt_id = pktid;
+ garmin_data_p->state = STATE_WAIT_TTY_ACK;
+
+ return i;
+}
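+
+/*
+ * DLE stuffing example for the conversion above (illustration): an
+ * application packet with pkt-id 0x0A and the single data byte 0x10
+ * leaves on the serial side as
+ *   DLE 0x0A 0x01 0x10 0x10 0xE5 DLE ETX
+ * the payload DLE is doubled, and the checksum 0xE5 == 0xFF & -(0x0A +
+ * 0x01 + 0x10) would be doubled too if it happened to equal DLE.
+ */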
+
+
+
+
+
+/*
+ * Process the next pending data packet - if there is one
+ */
+static void gsp_next_packet(struct garmin_data * garmin_data_p)
+{
+ struct garmin_packet *pkt = NULL;
+
+ while ((pkt = pkt_pop(garmin_data_p)) != NULL) {
+ dbg("%s - next pkt: %d", __FUNCTION__, pkt->seq);
+ if (gsp_send(garmin_data_p, pkt->data, pkt->size) > 0) {
+ kfree(pkt);
+ return;
+ }
+ kfree(pkt);
+ }
+}
+
+
+
+
+/******************************************************************************
+ * garmin native mode
+ ******************************************************************************/
+
+
+/*
+ * Called for data received from tty
+ *
+ * The input data is expected to be in garmin usb-packet format.
+ *
+ * buf contains the data read, it may span more than one packet
+ * or even incomplete packets
+ */
+static int nat_receive(struct garmin_data * garmin_data_p,
+ const unsigned char *buf, int count)
+{
+ __u8 * dest;
+ int offs = 0;
+ int result = count;
+ int len;
+
+ while (offs < count) {
+ // if buffer contains header, copy rest of data
+ if (garmin_data_p->insize >= GARMIN_PKTHDR_LENGTH)
+ len = GARMIN_PKTHDR_LENGTH
+ +getDataLength(garmin_data_p->inbuffer);
+ else
+ len = GARMIN_PKTHDR_LENGTH;
+
+ if (len >= GPS_IN_BUFSIZ) {
+ /* seems to be an invalid packet, ignore rest of input */
+ dbg("%s - packet size too large: %d",
+ __FUNCTION__, len);
+ garmin_data_p->insize = 0;
+ count = 0;
+ result = -EINVPKT;
+ } else {
+ len -= garmin_data_p->insize;
+ if (len > (count-offs))
+ len = (count-offs);
+ if (len > 0) {
+ dest = garmin_data_p->inbuffer
+ +garmin_data_p->insize;
+ memcpy(dest, buf+offs, len);
+ garmin_data_p->insize += len;
+ offs += len;
+ }
+ }
+
+ /* do we have a complete packet ? */
+ if (garmin_data_p->insize >= GARMIN_PKTHDR_LENGTH) {
+ len = GARMIN_PKTHDR_LENGTH+
+ getDataLength(garmin_data_p->inbuffer);
+ if (garmin_data_p->insize >= len) {
+ garmin_write_bulk (garmin_data_p->port,
+ garmin_data_p->inbuffer,
+ len);
+ garmin_data_p->insize = 0;
+
+ /* if this was an abort-transfer command,
+ flush all queued data. */
+ if (isAbortTrfCmnd(garmin_data_p->inbuffer)) {
+ garmin_data_p->flags |= FLAGS_DROP_DATA;
+ pkt_clear(garmin_data_p);
+ }
+ }
+ }
+ }
+ return result;
+}
+
+
+/******************************************************************************
+ * private packets
+ ******************************************************************************/
+
+static void priv_status_resp(struct usb_serial_port *port)
+{
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ __le32 *pkt = (__le32 *)garmin_data_p->privpkt;
+
+ pkt[0] = __cpu_to_le32(GARMIN_LAYERID_PRIVATE);
+ pkt[1] = __cpu_to_le32(PRIV_PKTID_INFO_RESP);
+ pkt[2] = __cpu_to_le32(12);
+ pkt[3] = __cpu_to_le32(VERSION_MAJOR << 16 | VERSION_MINOR);
+ pkt[4] = __cpu_to_le32(garmin_data_p->mode);
+ pkt[5] = __cpu_to_le32(garmin_data_p->serial_num);
+
+ send_to_tty(port, (__u8*)pkt, 6*4);
+}
+
+
+/******************************************************************************
+ * Garmin specific driver functions
+ ******************************************************************************/
+
+static int process_resetdev_request(struct usb_serial_port *port)
+{
+ int status;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ garmin_data_p->flags &= ~(CLEAR_HALT_REQUIRED);
+ garmin_data_p->state = STATE_RESET;
+ garmin_data_p->serial_num = 0;
+
+ usb_kill_urb (port->interrupt_in_urb);
+ dbg("%s - usb_reset_device", __FUNCTION__ );
+ status = usb_reset_device(port->serial->dev);
+ if (status)
+ dbg("%s - usb_reset_device failed: %d",
+ __FUNCTION__, status);
+ return status;
+}
+
+
+
+/*
+ * clear all cached data
+ */
+static int garmin_clear(struct garmin_data * garmin_data_p)
+{
+ int status = 0;
+
+ struct usb_serial_port *port = garmin_data_p->port;
+
+ if (port != NULL && garmin_data_p->flags & FLAGS_APP_RESP_SEEN) {
+ /* send a terminate command */
+ status = garmin_write_bulk(port, GARMIN_STOP_TRANSFER_REQ,
+ sizeof(GARMIN_STOP_TRANSFER_REQ));
+ }
+
+ /* flush all queued data */
+ pkt_clear(garmin_data_p);
+
+ garmin_data_p->insize = 0;
+ garmin_data_p->outsize = 0;
+
+ return status;
+}
+
+
+
+
+
+
+static int garmin_init_session(struct usb_serial_port *port)
+{
+ struct usb_serial *serial = port->serial;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ int status = 0;
+
+ if (status == 0) {
+ usb_kill_urb (port->interrupt_in_urb);
+
+ dbg("%s - adding interrupt input", __FUNCTION__);
+ port->interrupt_in_urb->dev = serial->dev;
+ status = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
+ if (status)
+ dev_err(&serial->dev->dev,
+ "%s - failed submitting interrupt urb,"
+ " error %d\n",
+ __FUNCTION__, status);
+ }
+
+ if (status == 0) {
+ dbg("%s - starting session ...", __FUNCTION__);
+ garmin_data_p->state = STATE_ACTIVE;
+ status = garmin_write_bulk(port, GARMIN_START_SESSION_REQ,
+ sizeof(GARMIN_START_SESSION_REQ));
+
+ if (status >= 0) {
+
+ garmin_data_p->ignorePkts++;
+
+ /* not needed, but the win32 driver does it too ... */
+ status = garmin_write_bulk(port,
+ GARMIN_START_SESSION_REQ2,
+ sizeof(GARMIN_START_SESSION_REQ2));
+ if (status >= 0) {
+ status = 0;
+ garmin_data_p->ignorePkts++;
+ }
+ }
+ }
+
+ return status;
+}
+
+
+
+
+
+static int garmin_open (struct usb_serial_port *port, struct file *filp)
+{
+ int status = 0;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ /*
+ * Force low_latency on so that our tty_push actually forces the data
+ * through, otherwise it is scheduled, and with high data rates (like
+ * with OHCI) data can get lost.
+ */
+ if (port->tty)
+ port->tty->low_latency = 1;
+
+ garmin_data_p->mode = initial_mode;
+ garmin_data_p->count = 0;
+ garmin_data_p->flags = 0;
+
+ /* shutdown any bulk reads that might be going on */
+ usb_kill_urb (port->write_urb);
+ usb_kill_urb (port->read_urb);
+
+ if (garmin_data_p->state == STATE_RESET) {
+ status = garmin_init_session(port);
+ }
+
+ garmin_data_p->state = STATE_ACTIVE;
+
+ return status;
+}
+
+
+static void garmin_close (struct usb_serial_port *port, struct file * filp)
+{
+ struct usb_serial *serial = port->serial;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ dbg("%s - port %d - mode=%d state=%d flags=0x%X", __FUNCTION__,
+ port->number, garmin_data_p->mode,
+ garmin_data_p->state, garmin_data_p->flags);
+
+ if (!serial)
+ return;
+
+ garmin_clear(garmin_data_p);
+
+ /* shutdown our urbs */
+ usb_kill_urb (port->read_urb);
+ usb_kill_urb (port->write_urb);
+
+ if (noResponseFromAppLayer(garmin_data_p) ||
+ ((garmin_data_p->flags & CLEAR_HALT_REQUIRED) != 0)) {
+ process_resetdev_request(port);
+ garmin_data_p->state = STATE_RESET;
+ } else {
+ garmin_data_p->state = STATE_DISCONNECTED;
+ }
+}
+
+
+static void garmin_write_bulk_callback (struct urb *urb, struct pt_regs *regs)
+{
+ struct usb_serial_port *port = (struct usb_serial_port *)urb->context;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ /* free up the transfer buffer, as usb_free_urb() does not do this */
+ kfree (urb->transfer_buffer);
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (urb->status) {
+ dbg("%s - nonzero write bulk status received: %d",
+ __FUNCTION__, urb->status);
+ garmin_data_p->flags |= CLEAR_HALT_REQUIRED;
+ }
+
+ schedule_work(&port->work);
+}
+
+
+static int garmin_write_bulk (struct usb_serial_port *port,
+ const unsigned char *buf, int count)
+{
+ struct usb_serial *serial = port->serial;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ struct urb *urb;
+ unsigned char *buffer;
+ int status;
+
+ dbg("%s - port %d, state %d", __FUNCTION__, port->number,
+ garmin_data_p->state);
+
+ garmin_data_p->flags &= ~FLAGS_DROP_DATA;
+
+ buffer = kmalloc (count, GFP_ATOMIC);
+ if (!buffer) {
+ dev_err(&port->dev, "out of memory\n");
+ return -ENOMEM;
+ }
+
+ urb = usb_alloc_urb(0, GFP_ATOMIC);
+ if (!urb) {
+ dev_err(&port->dev, "no more free urbs\n");
+ kfree (buffer);
+ return -ENOMEM;
+ }
+
+ memcpy (buffer, buf, count);
+
+ usb_serial_debug_data(debug, &port->dev, __FUNCTION__, count, buffer);
+
+ usb_fill_bulk_urb (urb, serial->dev,
+ usb_sndbulkpipe (serial->dev,
+ port->bulk_out_endpointAddress),
+ buffer, count,
+ garmin_write_bulk_callback, port);
+ urb->transfer_flags |= URB_ZERO_PACKET;
+
+ if (GARMIN_LAYERID_APPL == getLayerId(buffer)) {
+ garmin_data_p->flags |= FLAGS_APP_REQ_SEEN;
+ if (garmin_data_p->mode == MODE_GARMIN_SERIAL) {
+ pkt_clear(garmin_data_p);
+ garmin_data_p->state = STATE_GSP_WAIT_DATA;
+ }
+ }
+
+ /* send it down the pipe */
+ status = usb_submit_urb(urb, GFP_ATOMIC);
+ if (status) {
+ dev_err(&port->dev,
+ "%s - usb_submit_urb(write bulk) "
+ "failed with status = %d\n",
+ __FUNCTION__, status);
+ count = status;
+ } else {
+
+ if (GARMIN_LAYERID_APPL == getLayerId(buffer)
+ && (garmin_data_p->mode == MODE_GARMIN_SERIAL)) {
+
+ gsp_send_ack(garmin_data_p, buffer[4]);
+ }
+ }
+
+ /* we are done with this urb, so let the host driver
+ * really free it when it is finished with it */
+ usb_free_urb (urb);
+
+ return count;
+}
+
+
+
+static int garmin_write (struct usb_serial_port *port,
+ const unsigned char *buf, int count)
+{
+ int pktid, pktsiz, len;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ __le32 *privpkt = (__le32 *)garmin_data_p->privpkt;
+
+ usb_serial_debug_data(debug, &port->dev, __FUNCTION__, count, buf);
+
+ /* check for our private packets */
+ if (count >= GARMIN_PKTHDR_LENGTH) {
+
+ len = PRIVPKTSIZ;
+ if (count < len)
+ len = count;
+
+ memcpy(garmin_data_p->privpkt, buf, len);
+
+ pktsiz = getDataLength(garmin_data_p->privpkt);
+ pktid = getPacketId(garmin_data_p->privpkt);
+
+ if (count == (GARMIN_PKTHDR_LENGTH+pktsiz)
+ && GARMIN_LAYERID_PRIVATE == getLayerId(garmin_data_p->privpkt)) {
+
+ dbg("%s - processing private request %d",
+ __FUNCTION__, pktid);
+
+ // drop all unfinished transfers
+ garmin_clear(garmin_data_p);
+
+ switch(pktid) {
+
+ case PRIV_PKTID_SET_DEBUG:
+ if (pktsiz != 4)
+ return -EINVPKT;
+ debug = __le32_to_cpu(privpkt[3]);
+ dbg("%s - debug level set to 0x%X",
+ __FUNCTION__, debug);
+ break;
+
+ case PRIV_PKTID_SET_MODE:
+ if (pktsiz != 4)
+ return -EINVPKT;
+ garmin_data_p->mode = __le32_to_cpu(privpkt[3]);
+ dbg("%s - mode set to %d",
+ __FUNCTION__, garmin_data_p->mode);
+ break;
+
+ case PRIV_PKTID_INFO_REQ:
+ priv_status_resp(port);
+ break;
+
+ case PRIV_PKTID_RESET_REQ:
+ garmin_data_p->flags |= FLAGS_APP_REQ_SEEN;
+ break;
+
+ case PRIV_PKTID_SET_DEF_MODE:
+ if (pktsiz != 4)
+ return -EINVPKT;
+ initial_mode = __le32_to_cpu(privpkt[3]);
+ dbg("%s - initial_mode set to %d",
+ __FUNCTION__,
+ garmin_data_p->mode);
+ break;
+ }
+ return count;
+ }
+ }
+
+ if (garmin_data_p->mode == MODE_GARMIN_SERIAL) {
+ return gsp_receive(garmin_data_p, buf, count);
+ } else { /* MODE_NATIVE */
+ return nat_receive(garmin_data_p, buf, count);
+ }
+}
+
+
+static int garmin_write_room (struct usb_serial_port *port)
+{
+ /*
+ * Report back the bytes currently available in the output buffer.
+ */
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ return GPS_OUT_BUFSIZ-garmin_data_p->outsize;
+}
+
+
+static int garmin_chars_in_buffer (struct usb_serial_port *port)
+{
+ /*
+ * Report back the number of bytes currently in our input buffer.
+ * Could this lock up the driver?  The buffer may contain an
+ * incomplete packet which will not be written to the device
+ * until it has been completed.
+ */
+ //struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ //return garmin_data_p->insize;
+ return 0;
+}
+
+
+static void garmin_read_process(struct garmin_data * garmin_data_p,
+ unsigned char *data, unsigned data_length)
+{
+ if (garmin_data_p->flags & FLAGS_DROP_DATA) {
+ /* abort-transfer cmd is active */
+ dbg("%s - pkt dropped", __FUNCTION__);
+ } else if (garmin_data_p->state != STATE_DISCONNECTED &&
+ garmin_data_p->state != STATE_RESET ) {
+
+ /* remember any appl.layer packets, so we know
+ if a reset is required or not when closing
+ the device */
+ if (0 == memcmp(data, GARMIN_APP_LAYER_REPLY,
+ sizeof(GARMIN_APP_LAYER_REPLY)))
+ garmin_data_p->flags |= FLAGS_APP_RESP_SEEN;
+
+		/* if throttling is active or postprocessing is required,
+		   put the received data into the input queue, otherwise
+		   send it directly to the tty port */
+ if (garmin_data_p->flags & FLAGS_QUEUING) {
+ pkt_add(garmin_data_p, data, data_length);
+ } else if (garmin_data_p->mode == MODE_GARMIN_SERIAL) {
+ if (getLayerId(data) == GARMIN_LAYERID_APPL) {
+ pkt_add(garmin_data_p, data, data_length);
+ }
+ } else {
+ send_to_tty(garmin_data_p->port, data, data_length);
+ }
+ }
+}
+
+
+static void garmin_read_bulk_callback (struct urb *urb, struct pt_regs *regs)
+{
+ struct usb_serial_port *port = (struct usb_serial_port *)urb->context;
+ struct usb_serial *serial = port->serial;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ unsigned char *data = urb->transfer_buffer;
+ int status;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (!serial) {
+ dbg("%s - bad serial pointer, exiting", __FUNCTION__);
+ return;
+ }
+
+ if (urb->status) {
+ dbg("%s - nonzero read bulk status received: %d",
+ __FUNCTION__, urb->status);
+ return;
+ }
+
+ usb_serial_debug_data(debug, &port->dev,
+ __FUNCTION__, urb->actual_length, data);
+
+ garmin_read_process(garmin_data_p, data, urb->actual_length);
+
+ /* Continue trying to read until nothing more is received */
+ if (urb->actual_length > 0) {
+ usb_fill_bulk_urb (port->read_urb, serial->dev,
+ usb_rcvbulkpipe (serial->dev,
+ port->bulk_in_endpointAddress),
+ port->read_urb->transfer_buffer,
+ port->read_urb->transfer_buffer_length,
+ garmin_read_bulk_callback, port);
+ status = usb_submit_urb(port->read_urb, GFP_ATOMIC);
+ if (status)
+ dev_err(&port->dev,
+ "%s - failed resubmitting read urb, error %d\n",
+ __FUNCTION__, status);
+ }
+ return;
+}
+
+
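+/*
+ * Interrupt-in completion handler.  Recognizes the "bulk data available"
+ * announcement (and starts a bulk read), records the serial number from
+ * the start-of-session reply, processes or ignores the packet and then
+ * resubmits the interrupt urb.
+ */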
+static void garmin_read_int_callback (struct urb *urb, struct pt_regs *regs)
+{
+ int status;
+ struct usb_serial_port *port = (struct usb_serial_port *)urb->context;
+ struct usb_serial *serial = port->serial;
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+ unsigned char *data = urb->transfer_buffer;
+
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ /* this urb is terminated, clean up */
+ dbg("%s - urb shutting down with status: %d",
+ __FUNCTION__, urb->status);
+ return;
+ default:
+ dbg("%s - nonzero urb status received: %d",
+ __FUNCTION__, urb->status);
+ return;
+ }
+
+ usb_serial_debug_data(debug, &port->dev, __FUNCTION__,
+ urb->actual_length, urb->transfer_buffer);
+
+ if (urb->actual_length == sizeof(GARMIN_BULK_IN_AVAIL_REPLY) &&
+ 0 == memcmp(data, GARMIN_BULK_IN_AVAIL_REPLY,
+ sizeof(GARMIN_BULK_IN_AVAIL_REPLY))) {
+
+ dbg("%s - bulk data available.", __FUNCTION__);
+
+ /* bulk data available */
+ usb_fill_bulk_urb (port->read_urb, serial->dev,
+ usb_rcvbulkpipe (serial->dev,
+ port->bulk_in_endpointAddress),
+ port->read_urb->transfer_buffer,
+ port->read_urb->transfer_buffer_length,
+ garmin_read_bulk_callback, port);
+		/* completion handlers run in interrupt context, must not sleep */
+		status = usb_submit_urb(port->read_urb, GFP_ATOMIC);
+ if (status) {
+ dev_err(&port->dev,
+ "%s - failed submitting read urb, error %d\n",
+ __FUNCTION__, status);
+ }
+
+ } else if (urb->actual_length == (4+sizeof(GARMIN_START_SESSION_REPLY))
+ && 0 == memcmp(data, GARMIN_START_SESSION_REPLY,
+ sizeof(GARMIN_START_SESSION_REPLY))) {
+
+ garmin_data_p->flags |= FLAGS_SESSION_REPLY1_SEEN;
+
+ /* save the serial number */
+ garmin_data_p->serial_num
+ = __le32_to_cpup((__le32*)(data+GARMIN_PKTHDR_LENGTH));
+
+ dbg("%s - start-of-session reply seen - serial %u.",
+ __FUNCTION__, garmin_data_p->serial_num);
+ }
+
+ if (garmin_data_p->ignorePkts) {
+ /* this reply belongs to a request generated by the driver,
+ ignore it. */
+ dbg("%s - pkt ignored (%d)",
+ __FUNCTION__, garmin_data_p->ignorePkts);
+ garmin_data_p->ignorePkts--;
+ } else {
+ garmin_read_process(garmin_data_p, data, urb->actual_length);
+ }
+
+ port->interrupt_in_urb->dev = port->serial->dev;
+ status = usb_submit_urb (urb, GFP_ATOMIC);
+ if (status)
+ dev_err(&urb->dev->dev,
+ "%s - Error %d submitting interrupt urb\n",
+ __FUNCTION__, status);
+}
+
+
+/*
+ * Sends the next queued packet to the tty port (garmin native mode only)
+ * and then sets a timer to call itself again until all queued data
+ * is sent.
+ */
+static int garmin_flush_queue(struct garmin_data * garmin_data_p)
+{
+ struct garmin_packet *pkt;
+
+ if ((garmin_data_p->flags & FLAGS_THROTTLED) == 0) {
+ pkt = pkt_pop(garmin_data_p);
+ if (pkt != NULL) {
+
+ send_to_tty(garmin_data_p->port, pkt->data, pkt->size);
+ kfree(pkt);
+ mod_timer(&garmin_data_p->timer, (1)+jiffies);
+
+ } else {
+ garmin_data_p->flags &= ~FLAGS_QUEUING;
+ }
+ }
+ return 0;
+}
+
+
+static void garmin_throttle (struct usb_serial_port *port)
+{
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+ /* set flag, data received will be put into a queue
+ for later processing */
+ garmin_data_p->flags |= FLAGS_QUEUING|FLAGS_THROTTLED;
+}
+
+
+static void garmin_unthrottle (struct usb_serial_port *port)
+{
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+ garmin_data_p->flags &= ~FLAGS_THROTTLED;
+
+ /* in native mode send queued data to tty, in
+ serial mode nothing needs to be done here */
+ if (garmin_data_p->mode == MODE_NATIVE)
+ garmin_flush_queue(garmin_data_p);
+}
+
+
+
+/*
+ * The timer is currently only used to send queued packets to
+ * the tty in cases where the protocol provides no handshaking
+ * of its own to initiate the transfer.
+ */
+static void timeout_handler(unsigned long data)
+{
+ struct garmin_data *garmin_data_p = (struct garmin_data *) data;
+
+ /* send the next queued packet to the tty port */
+ if (garmin_data_p->mode == MODE_NATIVE)
+ if (garmin_data_p->flags & FLAGS_QUEUING)
+ garmin_flush_queue(garmin_data_p);
+}
+
+
+
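+/*
+ * Called at device attach time: allocate and initialize the per-port
+ * data (timer, spinlock, packet list) and start the first session
+ * with the device.
+ */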
+static int garmin_attach (struct usb_serial *serial)
+{
+ int status = 0;
+ struct usb_serial_port *port = serial->port[0];
+ struct garmin_data * garmin_data_p = NULL;
+
+ dbg("%s", __FUNCTION__);
+
+ garmin_data_p = kmalloc (sizeof(struct garmin_data), GFP_KERNEL);
+ if (garmin_data_p == NULL) {
+ dev_err(&port->dev, "%s - Out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ memset (garmin_data_p, 0, sizeof(struct garmin_data));
+ init_timer(&garmin_data_p->timer);
+ spin_lock_init(&garmin_data_p->lock);
+ INIT_LIST_HEAD(&garmin_data_p->pktlist);
+ //garmin_data_p->timer.expires = jiffies + session_timeout;
+ garmin_data_p->timer.data = (unsigned long)garmin_data_p;
+ garmin_data_p->timer.function = timeout_handler;
+ garmin_data_p->port = port;
+ garmin_data_p->state = 0;
+ garmin_data_p->count = 0;
+ usb_set_serial_port_data(port, garmin_data_p);
+
+ status = garmin_init_session(port);
+
+ return status;
+}
+
+
+static void garmin_shutdown (struct usb_serial *serial)
+{
+ struct usb_serial_port *port = serial->port[0];
+ struct garmin_data * garmin_data_p = usb_get_serial_port_data(port);
+
+ dbg("%s", __FUNCTION__);
+
+ usb_kill_urb (port->interrupt_in_urb);
+ del_timer_sync(&garmin_data_p->timer);
+ kfree (garmin_data_p);
+ usb_set_serial_port_data(port, NULL);
+}
+
+
+
+
+
+
+
+/* All of the device info needed */
+static struct usb_serial_device_type garmin_device = {
+ .owner = THIS_MODULE,
+ .name = "Garmin GPS usb/tty",
+ .short_name = "garmin_gps",
+ .id_table = id_table,
+ .num_interrupt_in = 1,
+ .num_bulk_in = 1,
+ .num_bulk_out = 1,
+ .num_ports = 1,
+ .open = garmin_open,
+ .close = garmin_close,
+ .throttle = garmin_throttle,
+ .unthrottle = garmin_unthrottle,
+ .attach = garmin_attach,
+ .shutdown = garmin_shutdown,
+ .write = garmin_write,
+ .write_room = garmin_write_room,
+ .chars_in_buffer = garmin_chars_in_buffer,
+ .write_bulk_callback = garmin_write_bulk_callback,
+ .read_bulk_callback = garmin_read_bulk_callback,
+ .read_int_callback = garmin_read_int_callback,
+};
+
+
+static int __init garmin_init (void)
+{
+ int retval;
+
+ retval = usb_serial_register(&garmin_device);
+ if (retval)
+ goto failed_garmin_register;
+ retval = usb_register(&garmin_driver);
+ if (retval)
+ goto failed_usb_register;
+ info(DRIVER_DESC " " DRIVER_VERSION);
+
+ return 0;
+failed_usb_register:
+ usb_serial_deregister(&garmin_device);
+failed_garmin_register:
+ return retval;
+}
+
+
+static void __exit garmin_exit (void)
+{
+ usb_deregister (&garmin_driver);
+ usb_serial_deregister (&garmin_device);
+}
+
+
+
+
+module_init(garmin_init);
+module_exit(garmin_exit);
+
+MODULE_AUTHOR( DRIVER_AUTHOR );
+MODULE_DESCRIPTION( DRIVER_DESC );
+MODULE_LICENSE("GPL");
+
+module_param(debug, bool, S_IWUSR | S_IRUGO);
+MODULE_PARM_DESC(debug, "Debug enabled or not");
+module_param(initial_mode, int, S_IRUGO);
+MODULE_PARM_DESC(initial_mode, "Initial mode");
+
--- /dev/null
+/* vi: ts=8 sw=8
+ *
+ * TI 3410/5052 USB Serial Driver
+ *
+ * Copyright (C) 2004 Texas Instruments
+ *
+ * This driver is based on the Linux io_ti driver, which is
+ * Copyright (C) 2000-2002 Inside Out Networks
+ * Copyright (C) 2001-2002 Greg Kroah-Hartman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * For questions or problems with this driver, contact Texas Instruments
+ * technical support, or Al Borchers <alborchers@steinerpoint.com>, or
+ * Peter Berger <pberger@brimson.com>.
+ *
+ * This driver needs this hotplug script in /etc/hotplug/usb/ti_usb_3410_5052
+ * or in /etc/hotplug.d/usb/ti_usb_3410_5052.hotplug to set the device
+ * configuration.
+ *
+ * #!/bin/bash
+ *
+ * BOOT_CONFIG=1
+ * ACTIVE_CONFIG=2
+ *
+ * if [[ "$ACTION" != "add" ]]
+ * then
+ * exit
+ * fi
+ *
+ * CONFIG_PATH=/sys${DEVPATH%/?*}/bConfigurationValue
+ *
+ * if [[ 0`cat $CONFIG_PATH` -ne $BOOT_CONFIG ]]
+ * then
+ * exit
+ * fi
+ *
+ * PRODUCT=${PRODUCT%/?*} # delete version
+ * VENDOR_ID=`printf "%d" 0x${PRODUCT%/?*}`
+ * PRODUCT_ID=`printf "%d" 0x${PRODUCT#*?/}`
+ *
+ * PARAM_PATH=/sys/module/ti_usb_3410_5052/parameters
+ *
+ * function scan() {
+ * s=$1
+ * shift
+ * for i
+ * do
+ * if [[ $s -eq $i ]]
+ * then
+ * return 0
+ * fi
+ * done
+ * return 1
+ * }
+ *
+ * IFS=$IFS,
+ *
+ * if (scan $VENDOR_ID 1105 `cat $PARAM_PATH/vendor_3410` &&
+ * scan $PRODUCT_ID 13328 `cat $PARAM_PATH/product_3410`) ||
+ * (scan $VENDOR_ID 1105 `cat $PARAM_PATH/vendor_5052` &&
+ * scan $PRODUCT_ID 20562 20818 20570 20575 `cat $PARAM_PATH/product_5052`)
+ * then
+ * echo $ACTIVE_CONFIG > $CONFIG_PATH
+ * fi
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/tty.h>
+#include <linux/tty_driver.h>
+#include <linux/tty_flip.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/ioctl.h>
+#include <linux/serial.h>
+#include <linux/circ_buf.h>
+#include <asm/uaccess.h>
+#include <asm/semaphore.h>
+#include <linux/usb.h>
+
+#include "usb-serial.h"
+#include "ti_usb_3410_5052.h"
+#include "ti_fw_3410.h" /* firmware image for 3410 */
+#include "ti_fw_5052.h" /* firmware image for 5052 */
+
+
+/* Defines */
+
+#define TI_DRIVER_VERSION "v0.9"
+#define TI_DRIVER_AUTHOR "Al Borchers <alborchers@steinerpoint.com>"
+#define TI_DRIVER_DESC "TI USB 3410/5052 Serial Driver"
+
+#define TI_FIRMWARE_BUF_SIZE 16284
+
+#define TI_WRITE_BUF_SIZE 1024
+
+#define TI_TRANSFER_TIMEOUT 2
+
+#define TI_DEFAULT_LOW_LATENCY 0
+#define TI_DEFAULT_CLOSING_WAIT 4000 /* in .01 secs */
+
+/* supported setserial flags */
+#define TI_SET_SERIAL_FLAGS (ASYNC_LOW_LATENCY)
+
+/* read urb states */
+#define TI_READ_URB_RUNNING 0
+#define TI_READ_URB_STOPPING 1
+#define TI_READ_URB_STOPPED 2
+
+#define TI_EXTRA_VID_PID_COUNT 5
+
+
+/* Structures */
+
+struct ti_port {
+ int tp_is_open;
+ __u8 tp_msr;
+ __u8 tp_lsr;
+ __u8 tp_shadow_mcr;
+ __u8 tp_uart_mode; /* 232 or 485 modes */
+ unsigned int tp_uart_base_addr;
+ int tp_flags;
+ int tp_closing_wait;/* in .01 secs */
+ struct async_icount tp_icount;
+ wait_queue_head_t tp_msr_wait; /* wait for msr change */
+ wait_queue_head_t tp_write_wait;
+ struct ti_device *tp_tdev;
+ struct usb_serial_port *tp_port;
+ spinlock_t tp_lock;
+ int tp_read_urb_state;
+ int tp_write_urb_in_use;
+ struct circ_buf *tp_write_buf;
+};
+
+struct ti_device {
+ struct semaphore td_open_close_sem;
+ int td_open_port_count;
+ struct usb_serial *td_serial;
+ int td_is_3410;
+ int td_urb_error;
+};
+
+
+/* Function Declarations */
+
+static int ti_startup(struct usb_serial *serial);
+static void ti_shutdown(struct usb_serial *serial);
+static int ti_open(struct usb_serial_port *port, struct file *file);
+static void ti_close(struct usb_serial_port *port, struct file *file);
+static int ti_write(struct usb_serial_port *port, const unsigned char *data,
+ int count);
+static int ti_write_room(struct usb_serial_port *port);
+static int ti_chars_in_buffer(struct usb_serial_port *port);
+static void ti_throttle(struct usb_serial_port *port);
+static void ti_unthrottle(struct usb_serial_port *port);
+static int ti_ioctl(struct usb_serial_port *port, struct file *file, unsigned int cmd, unsigned long arg);
+static void ti_set_termios(struct usb_serial_port *port,
+ struct termios *old_termios);
+static int ti_tiocmget(struct usb_serial_port *port, struct file *file);
+static int ti_tiocmset(struct usb_serial_port *port, struct file *file,
+ unsigned int set, unsigned int clear);
+static void ti_break(struct usb_serial_port *port, int break_state);
+static void ti_interrupt_callback(struct urb *urb, struct pt_regs *regs);
+static void ti_bulk_in_callback(struct urb *urb, struct pt_regs *regs);
+static void ti_bulk_out_callback(struct urb *urb, struct pt_regs *regs);
+
+static void ti_recv(struct device *dev, struct tty_struct *tty,
+ unsigned char *data, int length);
+static void ti_send(struct ti_port *tport);
+static int ti_set_mcr(struct ti_port *tport, unsigned int mcr);
+static int ti_get_lsr(struct ti_port *tport);
+static int ti_get_serial_info(struct ti_port *tport,
+ struct serial_struct __user *ret_arg);
+static int ti_set_serial_info(struct ti_port *tport,
+ struct serial_struct __user *new_arg);
+static void ti_handle_new_msr(struct ti_port *tport, __u8 msr);
+
+static void ti_drain(struct ti_port *tport, unsigned long timeout, int flush);
+
+static void ti_stop_read(struct ti_port *tport, struct tty_struct *tty);
+static int ti_restart_read(struct ti_port *tport, struct tty_struct *tty);
+
+static int ti_command_out_sync(struct ti_device *tdev, __u8 command,
+ __u16 moduleid, __u16 value, __u8 *data, int size);
+static int ti_command_in_sync(struct ti_device *tdev, __u8 command,
+ __u16 moduleid, __u16 value, __u8 *data, int size);
+
+static int ti_write_byte(struct ti_device *tdev, unsigned long addr,
+ __u8 mask, __u8 byte);
+
+static int ti_download_firmware(struct ti_device *tdev,
+ unsigned char *firmware, unsigned int firmware_size);
+
+/* circular buffer */
+static struct circ_buf *ti_buf_alloc(void);
+static void ti_buf_free(struct circ_buf *cb);
+static void ti_buf_clear(struct circ_buf *cb);
+static int ti_buf_data_avail(struct circ_buf *cb);
+static int ti_buf_space_avail(struct circ_buf *cb);
+static int ti_buf_put(struct circ_buf *cb, const char *buf, int count);
+static int ti_buf_get(struct circ_buf *cb, char *buf, int count);
+
+
+/* Data */
+
+/* module parameters */
+static int debug;
+static int low_latency = TI_DEFAULT_LOW_LATENCY;
+static int closing_wait = TI_DEFAULT_CLOSING_WAIT;
+static ushort vendor_3410[TI_EXTRA_VID_PID_COUNT];
+static int vendor_3410_count;
+static ushort product_3410[TI_EXTRA_VID_PID_COUNT];
+static int product_3410_count;
+static ushort vendor_5052[TI_EXTRA_VID_PID_COUNT];
+static int vendor_5052_count;
+static ushort product_5052[TI_EXTRA_VID_PID_COUNT];
+static int product_5052_count;
+
+/* supported devices */
+/* the array dimension is the number of default entries plus */
+/* TI_EXTRA_VID_PID_COUNT user defined entries plus 1 terminating */
+/* null entry */
+static struct usb_device_id ti_id_table_3410[1+TI_EXTRA_VID_PID_COUNT+1] = {
+ { USB_DEVICE(TI_VENDOR_ID, TI_3410_PRODUCT_ID) },
+};
+
+static struct usb_device_id ti_id_table_5052[4+TI_EXTRA_VID_PID_COUNT+1] = {
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_BOOT_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5152_BOOT_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_EEPROM_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_FIRMWARE_PRODUCT_ID) },
+};
+
+static struct usb_device_id ti_id_table_combined[] = {
+ { USB_DEVICE(TI_VENDOR_ID, TI_3410_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_BOOT_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5152_BOOT_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_EEPROM_PRODUCT_ID) },
+ { USB_DEVICE(TI_VENDOR_ID, TI_5052_FIRMWARE_PRODUCT_ID) },
+ { }
+};
+
+static struct usb_driver ti_usb_driver = {
+ .owner = THIS_MODULE,
+ .name = "ti_usb_3410_5052",
+ .probe = usb_serial_probe,
+ .disconnect = usb_serial_disconnect,
+ .id_table = ti_id_table_combined,
+};
+
+static struct usb_serial_device_type ti_1port_device = {
+ .owner = THIS_MODULE,
+ .name = "TI USB 3410 1 port adapter",
+ .id_table = ti_id_table_3410,
+ .num_interrupt_in = 1,
+ .num_bulk_in = 1,
+ .num_bulk_out = 1,
+ .num_ports = 1,
+ .attach = ti_startup,
+ .shutdown = ti_shutdown,
+ .open = ti_open,
+ .close = ti_close,
+ .write = ti_write,
+ .write_room = ti_write_room,
+ .chars_in_buffer = ti_chars_in_buffer,
+ .throttle = ti_throttle,
+ .unthrottle = ti_unthrottle,
+ .ioctl = ti_ioctl,
+ .set_termios = ti_set_termios,
+ .tiocmget = ti_tiocmget,
+ .tiocmset = ti_tiocmset,
+ .break_ctl = ti_break,
+ .read_int_callback = ti_interrupt_callback,
+ .read_bulk_callback = ti_bulk_in_callback,
+ .write_bulk_callback = ti_bulk_out_callback,
+};
+
+static struct usb_serial_device_type ti_2port_device = {
+ .owner = THIS_MODULE,
+ .name = "TI USB 5052 2 port adapter",
+ .id_table = ti_id_table_5052,
+ .num_interrupt_in = 1,
+ .num_bulk_in = 2,
+ .num_bulk_out = 2,
+ .num_ports = 2,
+ .attach = ti_startup,
+ .shutdown = ti_shutdown,
+ .open = ti_open,
+ .close = ti_close,
+ .write = ti_write,
+ .write_room = ti_write_room,
+ .chars_in_buffer = ti_chars_in_buffer,
+ .throttle = ti_throttle,
+ .unthrottle = ti_unthrottle,
+ .ioctl = ti_ioctl,
+ .set_termios = ti_set_termios,
+ .tiocmget = ti_tiocmget,
+ .tiocmset = ti_tiocmset,
+ .break_ctl = ti_break,
+ .read_int_callback = ti_interrupt_callback,
+ .read_bulk_callback = ti_bulk_in_callback,
+ .write_bulk_callback = ti_bulk_out_callback,
+};
+
+
+/* Module */
+
+MODULE_AUTHOR(TI_DRIVER_AUTHOR);
+MODULE_DESCRIPTION(TI_DRIVER_DESC);
+MODULE_VERSION(TI_DRIVER_VERSION);
+MODULE_LICENSE("GPL");
+
+module_param(debug, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug, "Enable debugging, 0=no, 1=yes");
+
+module_param(low_latency, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(low_latency, "TTY low_latency flag, 0=off, 1=on, default is off");
+
+module_param(closing_wait, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(closing_wait, "Maximum wait for data to drain in close, in .01 secs, default is 4000");
+
+module_param_array(vendor_3410, ushort, &vendor_3410_count, S_IRUGO);
+MODULE_PARM_DESC(vendor_3410, "Vendor ids for 3410 based devices, 1-5 short integers");
+module_param_array(product_3410, ushort, &product_3410_count, S_IRUGO);
+MODULE_PARM_DESC(product_3410, "Product ids for 3410 based devices, 1-5 short integers");
+module_param_array(vendor_5052, ushort, &vendor_5052_count, S_IRUGO);
+MODULE_PARM_DESC(vendor_5052, "Vendor ids for 5052 based devices, 1-5 short integers");
+module_param_array(product_5052, ushort, &product_5052_count, S_IRUGO);
+MODULE_PARM_DESC(product_5052, "Product ids for 5052 based devices, 1-5 short integers");
+
+MODULE_DEVICE_TABLE(usb, ti_id_table_combined);
+
+
+/* Functions */
+
+static int __init ti_init(void)
+{
+ int i,j;
+ int ret;
+
+
+ /* insert extra vendor and product ids */
+ j = sizeof(ti_id_table_3410)/sizeof(struct usb_device_id)
+ - TI_EXTRA_VID_PID_COUNT - 1;
+ for (i=0; i<min(vendor_3410_count,product_3410_count); i++,j++) {
+ ti_id_table_3410[j].idVendor = vendor_3410[i];
+ ti_id_table_3410[j].idProduct = product_3410[i];
+ ti_id_table_3410[j].match_flags = USB_DEVICE_ID_MATCH_DEVICE;
+ }
+ j = sizeof(ti_id_table_5052)/sizeof(struct usb_device_id)
+ - TI_EXTRA_VID_PID_COUNT - 1;
+ for (i=0; i<min(vendor_5052_count,product_5052_count); i++,j++) {
+ ti_id_table_5052[j].idVendor = vendor_5052[i];
+ ti_id_table_5052[j].idProduct = product_5052[i];
+ ti_id_table_5052[j].match_flags = USB_DEVICE_ID_MATCH_DEVICE;
+ }
+
+ ret = usb_serial_register(&ti_1port_device);
+ if (ret)
+ goto failed_1port;
+ ret = usb_serial_register(&ti_2port_device);
+ if (ret)
+ goto failed_2port;
+
+ ret = usb_register(&ti_usb_driver);
+ if (ret)
+ goto failed_usb;
+
+ info(TI_DRIVER_DESC " " TI_DRIVER_VERSION);
+
+ return 0;
+
+failed_usb:
+ usb_serial_deregister(&ti_2port_device);
+failed_2port:
+ usb_serial_deregister(&ti_1port_device);
+failed_1port:
+ return ret;
+}
+
+
+static void __exit ti_exit(void)
+{
+ usb_serial_deregister(&ti_1port_device);
+ usb_serial_deregister(&ti_2port_device);
+ usb_deregister(&ti_usb_driver);
+}
+
+
+module_init(ti_init);
+module_exit(ti_exit);
+
+
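+/*
+ * Device attach: if the device is still in its boot configuration
+ * (only one configuration), download the firmware and wait for
+ * re-enumeration; otherwise allocate the device and per-port
+ * structures, including the circular write buffers.
+ */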
+static int ti_startup(struct usb_serial *serial)
+{
+ struct ti_device *tdev;
+ struct ti_port *tport;
+ struct usb_device *dev = serial->dev;
+ int status;
+ int i;
+
+
+ dbg("%s - product 0x%4X, num configurations %d, configuration value %d",
+ __FUNCTION__, le16_to_cpu(dev->descriptor.idProduct),
+ dev->descriptor.bNumConfigurations,
+ dev->actconfig->desc.bConfigurationValue);
+
+ /* create device structure */
+ tdev = kmalloc(sizeof(struct ti_device), GFP_KERNEL);
+ if (tdev == NULL) {
+ dev_err(&dev->dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ memset(tdev, 0, sizeof(struct ti_device));
+ sema_init(&tdev->td_open_close_sem, 1);
+ tdev->td_serial = serial;
+ usb_set_serial_data(serial, tdev);
+
+ /* determine device type */
+ if (usb_match_id(serial->interface, ti_id_table_3410))
+ tdev->td_is_3410 = 1;
+ dbg("%s - device type is %s", __FUNCTION__, tdev->td_is_3410 ? "3410" : "5052");
+
+ /* if we have only 1 configuration, download firmware */
+ if (dev->descriptor.bNumConfigurations == 1) {
+
+ if (tdev->td_is_3410)
+ status = ti_download_firmware(tdev, ti_fw_3410,
+ sizeof(ti_fw_3410));
+ else
+ status = ti_download_firmware(tdev, ti_fw_5052,
+ sizeof(ti_fw_5052));
+ if (status)
+ goto free_tdev;
+
+ /* 3410 must be reset, 5052 resets itself */
+ if (tdev->td_is_3410) {
+ msleep_interruptible(100);
+ usb_reset_device(dev);
+ }
+
+ status = -ENODEV;
+ goto free_tdev;
+ }
+
+ /* the second configuration must be set (in sysfs by hotplug script) */
+ if (dev->actconfig->desc.bConfigurationValue == TI_BOOT_CONFIG) {
+ status = -ENODEV;
+ goto free_tdev;
+ }
+
+ /* set up port structures */
+ for (i = 0; i < serial->num_ports; ++i) {
+ tport = kmalloc(sizeof(struct ti_port), GFP_KERNEL);
+ if (tport == NULL) {
+ dev_err(&dev->dev, "%s - out of memory\n", __FUNCTION__);
+ status = -ENOMEM;
+ goto free_tports;
+ }
+ memset(tport, 0, sizeof(struct ti_port));
+ spin_lock_init(&tport->tp_lock);
+ tport->tp_uart_base_addr = (i == 0 ? TI_UART1_BASE_ADDR : TI_UART2_BASE_ADDR);
+ tport->tp_flags = low_latency ? ASYNC_LOW_LATENCY : 0;
+ tport->tp_closing_wait = closing_wait;
+ init_waitqueue_head(&tport->tp_msr_wait);
+ init_waitqueue_head(&tport->tp_write_wait);
+ tport->tp_write_buf = ti_buf_alloc();
+ if (tport->tp_write_buf == NULL) {
+ dev_err(&dev->dev, "%s - out of memory\n", __FUNCTION__);
+ kfree(tport);
+ status = -ENOMEM;
+ goto free_tports;
+ }
+ tport->tp_port = serial->port[i];
+ tport->tp_tdev = tdev;
+ usb_set_serial_port_data(serial->port[i], tport);
+ tport->tp_uart_mode = 0; /* default is RS232 */
+ }
+
+ return 0;
+
+free_tports:
+ for (--i; i>=0; --i) {
+ tport = usb_get_serial_port_data(serial->port[i]);
+ ti_buf_free(tport->tp_write_buf);
+ kfree(tport);
+ usb_set_serial_port_data(serial->port[i], NULL);
+ }
+free_tdev:
+ kfree(tdev);
+ usb_set_serial_data(serial, NULL);
+ return status;
+}
+
+
+static void ti_shutdown(struct usb_serial *serial)
+{
+ int i;
+ struct ti_device *tdev = usb_get_serial_data(serial);
+ struct ti_port *tport;
+
+ dbg("%s", __FUNCTION__);
+
+ for (i=0; i < serial->num_ports; ++i) {
+ tport = usb_get_serial_port_data(serial->port[i]);
+ if (tport) {
+ ti_buf_free(tport->tp_write_buf);
+ kfree(tport);
+ usb_set_serial_port_data(serial->port[i], NULL);
+ }
+ }
+
+ if (tdev)
+ kfree(tdev);
+ usb_set_serial_data(serial, NULL);
+}
+
+
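+/*
+ * Open a port: on the first open of the device submit the shared
+ * interrupt urb, then send the OPEN/START/PURGE port commands, clear
+ * any halted bulk endpoints and start the read urb.  Serialized
+ * against close by td_open_close_sem.
+ */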
+static int ti_open(struct usb_serial_port *port, struct file *file)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ struct ti_device *tdev;
+ struct usb_device *dev;
+ struct urb *urb;
+ int port_number;
+ int status;
+ __u16 open_settings = (__u8)(TI_PIPE_MODE_CONTINOUS |
+ TI_PIPE_TIMEOUT_ENABLE |
+ (TI_TRANSFER_TIMEOUT << 2));
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ dev = port->serial->dev;
+ tdev = tport->tp_tdev;
+
+ /* only one open on any port on a device at a time */
+ if (down_interruptible(&tdev->td_open_close_sem))
+ return -ERESTARTSYS;
+
+ if (port->tty)
+ port->tty->low_latency =
+ (tport->tp_flags & ASYNC_LOW_LATENCY) ? 1 : 0;
+
+ port_number = port->number - port->serial->minor;
+
+ memset(&(tport->tp_icount), 0x00, sizeof(tport->tp_icount));
+
+ tport->tp_msr = 0;
+ tport->tp_shadow_mcr |= (TI_MCR_RTS | TI_MCR_DTR);
+
+ /* start interrupt urb the first time a port is opened on this device */
+ if (tdev->td_open_port_count == 0) {
+ dbg("%s - start interrupt in urb", __FUNCTION__);
+ urb = tdev->td_serial->port[0]->interrupt_in_urb;
+ if (!urb) {
+ dev_err(&port->dev, "%s - no interrupt urb\n", __FUNCTION__);
+ status = -EINVAL;
+ goto up_sem;
+ }
+ urb->complete = ti_interrupt_callback;
+ urb->context = tdev;
+ urb->dev = dev;
+ status = usb_submit_urb(urb, GFP_KERNEL);
+ if (status) {
+ dev_err(&port->dev, "%s - submit interrupt urb failed, %d\n", __FUNCTION__, status);
+ goto up_sem;
+ }
+ }
+
+ ti_set_termios(port, NULL);
+
+ dbg("%s - sending TI_OPEN_PORT", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_OPEN_PORT,
+ (__u8)(TI_UART1_PORT + port_number), open_settings, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot send open command, %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ dbg("%s - sending TI_START_PORT", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_START_PORT,
+ (__u8)(TI_UART1_PORT + port_number), 0, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot send start command, %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ dbg("%s - sending TI_PURGE_PORT", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_PURGE_PORT,
+ (__u8)(TI_UART1_PORT + port_number), TI_PURGE_INPUT, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot clear input buffers, %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+ status = ti_command_out_sync(tdev, TI_PURGE_PORT,
+ (__u8)(TI_UART1_PORT + port_number), TI_PURGE_OUTPUT, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot clear output buffers, %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ /* reset the data toggle on the bulk endpoints to work around bug in
+	 * host controllers where things sometimes get out of sync */
+ usb_clear_halt(dev, port->write_urb->pipe);
+ usb_clear_halt(dev, port->read_urb->pipe);
+
+ ti_set_termios(port, NULL);
+
+ dbg("%s - sending TI_OPEN_PORT (2)", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_OPEN_PORT,
+ (__u8)(TI_UART1_PORT + port_number), open_settings, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot send open command (2), %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ dbg("%s - sending TI_START_PORT (2)", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_START_PORT,
+ (__u8)(TI_UART1_PORT + port_number), 0, NULL, 0);
+ if (status) {
+ dev_err(&port->dev, "%s - cannot send start command (2), %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ /* start read urb */
+ dbg("%s - start read urb", __FUNCTION__);
+ urb = port->read_urb;
+ if (!urb) {
+ dev_err(&port->dev, "%s - no read urb\n", __FUNCTION__);
+ status = -EINVAL;
+ goto unlink_int_urb;
+ }
+ tport->tp_read_urb_state = TI_READ_URB_RUNNING;
+ urb->complete = ti_bulk_in_callback;
+ urb->context = tport;
+ urb->dev = dev;
+ status = usb_submit_urb(urb, GFP_KERNEL);
+ if (status) {
+ dev_err(&port->dev, "%s - submit read urb failed, %d\n", __FUNCTION__, status);
+ goto unlink_int_urb;
+ }
+
+ tport->tp_is_open = 1;
+ ++tdev->td_open_port_count;
+
+ goto up_sem;
+
+unlink_int_urb:
+ if (tdev->td_open_port_count == 0)
+ usb_kill_urb(port->serial->port[0]->interrupt_in_urb);
+up_sem:
+ up(&tdev->td_open_close_sem);
+ dbg("%s - exit %d", __FUNCTION__, status);
+ return status;
+}
+
+
+static void ti_close(struct usb_serial_port *port, struct file *file)
+{
+ struct ti_device *tdev;
+ struct ti_port *tport;
+ int port_number;
+ int status;
+ int do_up;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ tdev = usb_get_serial_data(port->serial);
+ tport = usb_get_serial_port_data(port);
+ if (tdev == NULL || tport == NULL)
+ return;
+
+ tport->tp_is_open = 0;
+
+ ti_drain(tport, (tport->tp_closing_wait*HZ)/100, 1);
+
+ usb_kill_urb(port->read_urb);
+ usb_kill_urb(port->write_urb);
+ tport->tp_write_urb_in_use = 0;
+
+ port_number = port->number - port->serial->minor;
+
+ dbg("%s - sending TI_CLOSE_PORT", __FUNCTION__);
+ status = ti_command_out_sync(tdev, TI_CLOSE_PORT,
+ (__u8)(TI_UART1_PORT + port_number), 0, NULL, 0);
+ if (status)
+ dev_err(&port->dev, "%s - cannot send close port command, %d\n" , __FUNCTION__, status);
+
+ /* if down is interrupted, continue anyway */
+ do_up = !down_interruptible(&tdev->td_open_close_sem);
+ --tport->tp_tdev->td_open_port_count;
+ if (tport->tp_tdev->td_open_port_count <= 0) {
+ /* last port is closed, shut down interrupt urb */
+ usb_kill_urb(port->serial->port[0]->interrupt_in_urb);
+ tport->tp_tdev->td_open_port_count = 0;
+ }
+ if (do_up)
+ up(&tdev->td_open_close_sem);
+
+ dbg("%s - exit", __FUNCTION__);
+}
+
+
+static int ti_write(struct usb_serial_port *port, const unsigned char *data,
+ int count)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ unsigned long flags;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (count == 0) {
+ dbg("%s - write request of 0 bytes", __FUNCTION__);
+ return 0;
+ }
+
+ if (tport == NULL || !tport->tp_is_open)
+ return -ENODEV;
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ count = ti_buf_put(tport->tp_write_buf, data, count);
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ ti_send(tport);
+
+ return count;
+}
+
+
+static int ti_write_room(struct usb_serial_port *port)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ int room = 0;
+ unsigned long flags;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ room = ti_buf_space_avail(tport->tp_write_buf);
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ dbg("%s - returns %d", __FUNCTION__, room);
+ return room;
+}
+
+
+static int ti_chars_in_buffer(struct usb_serial_port *port)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ int chars = 0;
+ unsigned long flags;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ chars = ti_buf_data_avail(tport->tp_write_buf);
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ dbg("%s - returns %d", __FUNCTION__, chars);
+ return chars;
+}
+
+
+static void ti_throttle(struct usb_serial_port *port)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ struct tty_struct *tty;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return;
+
+ tty = port->tty;
+ if (!tty) {
+ dbg("%s - no tty", __FUNCTION__);
+ return;
+ }
+
+ if (I_IXOFF(tty) || C_CRTSCTS(tty))
+ ti_stop_read(tport, tty);
+
+}
+
+
+static void ti_unthrottle(struct usb_serial_port *port)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ struct tty_struct *tty;
+ int status;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return;
+
+ tty = port->tty;
+ if (!tty) {
+ dbg("%s - no tty", __FUNCTION__);
+ return;
+ }
+
+ if (I_IXOFF(tty) || C_CRTSCTS(tty)) {
+ status = ti_restart_read(tport, tty);
+ if (status)
+ dev_err(&port->dev, "%s - cannot restart read, %d\n", __FUNCTION__, status);
+ }
+}
+
+
+static int ti_ioctl(struct usb_serial_port *port, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ struct async_icount cnow;
+ struct async_icount cprev;
+
+ dbg("%s - port %d, cmd = 0x%04X", __FUNCTION__, port->number, cmd);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ switch (cmd) {
+ case TIOCGSERIAL:
+ dbg("%s - (%d) TIOCGSERIAL", __FUNCTION__, port->number);
+ return ti_get_serial_info(tport, (struct serial_struct __user *)arg);
+ break;
+
+ case TIOCSSERIAL:
+ dbg("%s - (%d) TIOCSSERIAL", __FUNCTION__, port->number);
+ return ti_set_serial_info(tport, (struct serial_struct __user *)arg);
+ break;
+
+ case TIOCMIWAIT:
+ dbg("%s - (%d) TIOCMIWAIT", __FUNCTION__, port->number);
+ cprev = tport->tp_icount;
+ while (1) {
+ interruptible_sleep_on(&tport->tp_msr_wait);
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+ cnow = tport->tp_icount;
+ if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
+ cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
+ return -EIO; /* no change => error */
+ if (((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
+ ((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
+ ((arg & TIOCM_CD) && (cnow.dcd != cprev.dcd)) ||
+ ((arg & TIOCM_CTS) && (cnow.cts != cprev.cts)) ) {
+ return 0;
+ }
+ cprev = cnow;
+ }
+ break;
+
+ case TIOCGICOUNT:
+ dbg("%s - (%d) TIOCGICOUNT RX=%d, TX=%d", __FUNCTION__, port->number, tport->tp_icount.rx, tport->tp_icount.tx);
+ if (copy_to_user((void __user *)arg, &tport->tp_icount, sizeof(tport->tp_icount)))
+ return -EFAULT;
+ return 0;
+ }
+
+ return -ENOIOCTLCMD;
+}
+
+
+static void ti_set_termios(struct usb_serial_port *port,
+ struct termios *old_termios)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ struct tty_struct *tty = port->tty;
+ struct ti_uart_config *config;
+ tcflag_t cflag,iflag;
+ int baud;
+ int status;
+ int port_number = port->number - port->serial->minor;
+ unsigned int mcr;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (!tty || !tty->termios) {
+ dbg("%s - no tty or termios", __FUNCTION__);
+ return;
+ }
+
+ cflag = tty->termios->c_cflag;
+ iflag = tty->termios->c_iflag;
+
+ if (old_termios && cflag == old_termios->c_cflag
+ && iflag == old_termios->c_iflag) {
+ dbg("%s - nothing to change", __FUNCTION__);
+ return;
+ }
+
+	dbg("%s - cflag %08x, iflag %08x", __FUNCTION__, cflag, iflag);
+
+ if (old_termios)
+		dbg("%s - old cflag %08x, old iflag %08x", __FUNCTION__, old_termios->c_cflag, old_termios->c_iflag);
+
+ if (tport == NULL)
+ return;
+
+ config = kmalloc(sizeof(*config), GFP_KERNEL);
+ if (!config) {
+ dev_err(&port->dev, "%s - out of memory\n", __FUNCTION__);
+ return;
+ }
+
+ config->wFlags = 0;
+
+ /* these flags must be set */
+ config->wFlags |= TI_UART_ENABLE_MS_INTS;
+ config->wFlags |= TI_UART_ENABLE_AUTO_START_DMA;
+ config->bUartMode = (__u8)(tport->tp_uart_mode);
+
+ switch (cflag & CSIZE) {
+ case CS5:
+ config->bDataBits = TI_UART_5_DATA_BITS;
+ break;
+ case CS6:
+ config->bDataBits = TI_UART_6_DATA_BITS;
+ break;
+ case CS7:
+ config->bDataBits = TI_UART_7_DATA_BITS;
+ break;
+ default:
+ case CS8:
+ config->bDataBits = TI_UART_8_DATA_BITS;
+ break;
+ }
+
+ if (cflag & PARENB) {
+ if (cflag & PARODD) {
+ config->wFlags |= TI_UART_ENABLE_PARITY_CHECKING;
+ config->bParity = TI_UART_ODD_PARITY;
+ } else {
+ config->wFlags |= TI_UART_ENABLE_PARITY_CHECKING;
+ config->bParity = TI_UART_EVEN_PARITY;
+ }
+ } else {
+ config->wFlags &= ~TI_UART_ENABLE_PARITY_CHECKING;
+ config->bParity = TI_UART_NO_PARITY;
+ }
+
+ if (cflag & CSTOPB)
+ config->bStopBits = TI_UART_2_STOP_BITS;
+ else
+ config->bStopBits = TI_UART_1_STOP_BITS;
+
+ if (cflag & CRTSCTS) {
+ /* RTS flow control must be off to drop RTS for baud rate B0 */
+ if ((cflag & CBAUD) != B0)
+ config->wFlags |= TI_UART_ENABLE_RTS_IN;
+ config->wFlags |= TI_UART_ENABLE_CTS_OUT;
+ } else {
+ tty->hw_stopped = 0;
+ ti_restart_read(tport, tty);
+ }
+
+ if (I_IXOFF(tty) || I_IXON(tty)) {
+ config->cXon = START_CHAR(tty);
+ config->cXoff = STOP_CHAR(tty);
+
+ if (I_IXOFF(tty))
+ config->wFlags |= TI_UART_ENABLE_X_IN;
+ else
+ ti_restart_read(tport, tty);
+
+ if (I_IXON(tty))
+ config->wFlags |= TI_UART_ENABLE_X_OUT;
+ }
+
+ baud = tty_get_baud_rate(tty);
+	if (!baud)
+		baud = 9600;
+ if (tport->tp_tdev->td_is_3410)
+ config->wBaudRate = (__u16)((923077 + baud/2) / baud);
+ else
+ config->wBaudRate = (__u16)((461538 + baud/2) / baud);
+
+ dbg("%s - BaudRate=%d, wBaudRate=%d, wFlags=0x%04X, bDataBits=%d, bParity=%d, bStopBits=%d, cXon=%d, cXoff=%d, bUartMode=%d",
+ __FUNCTION__, baud, config->wBaudRate, config->wFlags, config->bDataBits, config->bParity, config->bStopBits, config->cXon, config->cXoff, config->bUartMode);
+
+ cpu_to_be16s(&config->wBaudRate);
+ cpu_to_be16s(&config->wFlags);
+
+ status = ti_command_out_sync(tport->tp_tdev, TI_SET_CONFIG,
+ (__u8)(TI_UART1_PORT + port_number), 0, (__u8 *)config,
+ sizeof(*config));
+ if (status)
+ dev_err(&port->dev, "%s - cannot set config on port %d, %d\n", __FUNCTION__, port_number, status);
+
+ /* SET_CONFIG asserts RTS and DTR, reset them correctly */
+ mcr = tport->tp_shadow_mcr;
+ /* if baud rate is B0, clear RTS and DTR */
+ if ((cflag & CBAUD) == B0)
+ mcr &= ~(TI_MCR_DTR | TI_MCR_RTS);
+ status = ti_set_mcr(tport, mcr);
+ if (status)
+ dev_err(&port->dev, "%s - cannot set modem control on port %d, %d\n", __FUNCTION__, port_number, status);
+
+ kfree(config);
+}
+
+
+static int ti_tiocmget(struct usb_serial_port *port, struct file *file)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ unsigned int result;
+ unsigned int msr;
+ unsigned int mcr;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ msr = tport->tp_msr;
+ mcr = tport->tp_shadow_mcr;
+
+ result = ((mcr & TI_MCR_DTR) ? TIOCM_DTR : 0)
+ | ((mcr & TI_MCR_RTS) ? TIOCM_RTS : 0)
+ | ((mcr & TI_MCR_LOOP) ? TIOCM_LOOP : 0)
+ | ((msr & TI_MSR_CTS) ? TIOCM_CTS : 0)
+ | ((msr & TI_MSR_CD) ? TIOCM_CAR : 0)
+ | ((msr & TI_MSR_RI) ? TIOCM_RI : 0)
+ | ((msr & TI_MSR_DSR) ? TIOCM_DSR : 0);
+
+ dbg("%s - 0x%04X", __FUNCTION__, result);
+
+ return result;
+}
+
+
+static int ti_tiocmset(struct usb_serial_port *port, struct file *file,
+ unsigned int set, unsigned int clear)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ unsigned int mcr;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ if (tport == NULL)
+ return -ENODEV;
+
+ mcr = tport->tp_shadow_mcr;
+
+ if (set & TIOCM_RTS)
+ mcr |= TI_MCR_RTS;
+ if (set & TIOCM_DTR)
+ mcr |= TI_MCR_DTR;
+ if (set & TIOCM_LOOP)
+ mcr |= TI_MCR_LOOP;
+
+ if (clear & TIOCM_RTS)
+ mcr &= ~TI_MCR_RTS;
+ if (clear & TIOCM_DTR)
+ mcr &= ~TI_MCR_DTR;
+ if (clear & TIOCM_LOOP)
+ mcr &= ~TI_MCR_LOOP;
+
+ return ti_set_mcr(tport, mcr);
+}
+
+
+static void ti_break(struct usb_serial_port *port, int break_state)
+{
+ struct ti_port *tport = usb_get_serial_port_data(port);
+ int status;
+
+ dbg("%s - state = %d", __FUNCTION__, break_state);
+
+ if (tport == NULL)
+ return;
+
+ ti_drain(tport, (tport->tp_closing_wait*HZ)/100, 0);
+
+ status = ti_write_byte(tport->tp_tdev,
+ tport->tp_uart_base_addr + TI_UART_OFFSET_LCR,
+ TI_LCR_BREAK, break_state == -1 ? TI_LCR_BREAK : 0);
+
+ if (status)
+ dbg("%s - error setting break, %d", __FUNCTION__, status);
+}
+
+
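+/*
+ * Interrupt-in completion handler.  Each two byte message carries a port
+ * number and a function code; modem status changes are passed on to
+ * ti_handle_new_msr and the interrupt urb is resubmitted.
+ */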
+static void ti_interrupt_callback(struct urb *urb, struct pt_regs *regs)
+{
+ struct ti_device *tdev = (struct ti_device *)urb->context;
+ struct usb_serial_port *port;
+ struct usb_serial *serial = tdev->td_serial;
+ struct ti_port *tport;
+ struct device *dev = &urb->dev->dev;
+ unsigned char *data = urb->transfer_buffer;
+ int length = urb->actual_length;
+ int port_number;
+ int function;
+ int status;
+ __u8 msr;
+
+ dbg("%s", __FUNCTION__);
+
+ switch (urb->status) {
+ case 0:
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ dbg("%s - urb shutting down, %d", __FUNCTION__, urb->status);
+ tdev->td_urb_error = 1;
+ return;
+ default:
+ dev_err(dev, "%s - nonzero urb status, %d\n", __FUNCTION__, urb->status);
+ tdev->td_urb_error = 1;
+ goto exit;
+ }
+
+ if (length != 2) {
+ dbg("%s - bad packet size, %d", __FUNCTION__, length);
+ goto exit;
+ }
+
+ if (data[0] == TI_CODE_HARDWARE_ERROR) {
+ dev_err(dev, "%s - hardware error, %d\n", __FUNCTION__, data[1]);
+ goto exit;
+ }
+
+ port_number = TI_GET_PORT_FROM_CODE(data[0]);
+ function = TI_GET_FUNC_FROM_CODE(data[0]);
+
+ dbg("%s - port_number %d, function %d, data 0x%02X", __FUNCTION__, port_number, function, data[1]);
+
+ if (port_number >= serial->num_ports) {
+ dev_err(dev, "%s - bad port number, %d\n", __FUNCTION__, port_number);
+ goto exit;
+ }
+
+ port = serial->port[port_number];
+
+ tport = usb_get_serial_port_data(port);
+ if (!tport)
+ goto exit;
+
+ switch (function) {
+ case TI_CODE_DATA_ERROR:
+ dev_err(dev, "%s - DATA ERROR, port %d, data 0x%02X\n", __FUNCTION__, port_number, data[1]);
+ break;
+
+ case TI_CODE_MODEM_STATUS:
+ msr = data[1];
+ dbg("%s - port %d, msr 0x%02X", __FUNCTION__, port_number, msr);
+ ti_handle_new_msr(tport, msr);
+ break;
+
+ default:
+ dev_err(dev, "%s - unknown interrupt code, 0x%02X\n", __FUNCTION__, data[1]);
+ break;
+ }
+
+exit:
+ status = usb_submit_urb(urb, GFP_ATOMIC);
+ if (status)
+ dev_err(dev, "%s - resubmit interrupt urb failed, %d\n", __FUNCTION__, status);
+}
+
+
+static void ti_bulk_in_callback(struct urb *urb, struct pt_regs *regs)
+{
+ struct ti_port *tport = (struct ti_port *)urb->context;
+ struct usb_serial_port *port = tport->tp_port;
+ struct device *dev = &urb->dev->dev;
+ int status = 0;
+
+ dbg("%s", __FUNCTION__);
+
+ switch (urb->status) {
+ case 0:
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ dbg("%s - urb shutting down, %d", __FUNCTION__, urb->status);
+ tport->tp_tdev->td_urb_error = 1;
+ wake_up_interruptible(&tport->tp_write_wait);
+ return;
+ default:
+ dev_err(dev, "%s - nonzero urb status, %d\n", __FUNCTION__, urb->status );
+ tport->tp_tdev->td_urb_error = 1;
+ wake_up_interruptible(&tport->tp_write_wait);
+ }
+
+ if (urb->status == -EPIPE)
+ goto exit;
+
+ if (urb->status) {
+ dev_err(dev, "%s - stopping read!\n", __FUNCTION__);
+ return;
+ }
+
+ if (port->tty && urb->actual_length) {
+ usb_serial_debug_data(debug, dev, __FUNCTION__,
+ urb->actual_length, urb->transfer_buffer);
+
+ if (!tport->tp_is_open)
+ dbg("%s - port closed, dropping data", __FUNCTION__);
+ else
+ ti_recv(&urb->dev->dev, port->tty, urb->transfer_buffer,
+ urb->actual_length);
+
+ spin_lock(&tport->tp_lock);
+ tport->tp_icount.rx += urb->actual_length;
+ spin_unlock(&tport->tp_lock);
+ }
+
+exit:
+ /* continue to read unless stopping */
+ spin_lock(&tport->tp_lock);
+ if (tport->tp_read_urb_state == TI_READ_URB_RUNNING) {
+ urb->dev = port->serial->dev;
+ status = usb_submit_urb(urb, GFP_ATOMIC);
+ } else if (tport->tp_read_urb_state == TI_READ_URB_STOPPING) {
+ tport->tp_read_urb_state = TI_READ_URB_STOPPED;
+ }
+ spin_unlock(&tport->tp_lock);
+ if (status)
+ dev_err(dev, "%s - resubmit read urb failed, %d\n", __FUNCTION__, status);
+}
+
+
+static void ti_bulk_out_callback(struct urb *urb, struct pt_regs *regs)
+{
+ struct ti_port *tport = (struct ti_port *)urb->context;
+ struct usb_serial_port *port = tport->tp_port;
+ struct device *dev = &urb->dev->dev;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ tport->tp_write_urb_in_use = 0;
+
+ switch (urb->status) {
+ case 0:
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ dbg("%s - urb shutting down, %d", __FUNCTION__, urb->status);
+ tport->tp_tdev->td_urb_error = 1;
+ wake_up_interruptible(&tport->tp_write_wait);
+ return;
+ default:
+ dev_err(dev, "%s - nonzero urb status, %d\n", __FUNCTION__, urb->status);
+ tport->tp_tdev->td_urb_error = 1;
+ wake_up_interruptible(&tport->tp_write_wait);
+ }
+
+ /* send any buffered data */
+ ti_send(tport);
+}
+
+
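+/*
+ * Push received bytes into the tty flip buffer, flushing to the line
+ * discipline whenever the flip buffer fills up.
+ */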
+static void ti_recv(struct device *dev, struct tty_struct *tty,
+ unsigned char *data, int length)
+{
+ int cnt;
+
+ do {
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ tty_flip_buffer_push(tty);
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ dev_err(dev, "%s - dropping data, %d bytes lost\n", __FUNCTION__, length);
+ return;
+ }
+ }
+ cnt = min(length, TTY_FLIPBUF_SIZE - tty->flip.count);
+ memcpy(tty->flip.char_buf_ptr, data, cnt);
+ memset(tty->flip.flag_buf_ptr, 0, cnt);
+ tty->flip.char_buf_ptr += cnt;
+ tty->flip.flag_buf_ptr += cnt;
+ tty->flip.count += cnt;
+ data += cnt;
+ length -= cnt;
+ } while (length > 0);
+
+ tty_flip_buffer_push(tty);
+}
+
+
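+/*
+ * Move as much buffered write data as fits into the bulk out urb and
+ * submit it, then wake up any writers waiting for room.
+ */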
+static void ti_send(struct ti_port *tport)
+{
+ int count, result;
+ struct usb_serial_port *port = tport->tp_port;
+ struct tty_struct *tty = port->tty;
+ unsigned long flags;
+
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+
+ if (tport->tp_write_urb_in_use) {
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+ return;
+ }
+
+ count = ti_buf_get(tport->tp_write_buf,
+ port->write_urb->transfer_buffer,
+ port->bulk_out_size);
+
+ if (count == 0) {
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+ return;
+ }
+
+ tport->tp_write_urb_in_use = 1;
+
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ usb_serial_debug_data(debug, &port->dev, __FUNCTION__, count, port->write_urb->transfer_buffer);
+
+ usb_fill_bulk_urb(port->write_urb, port->serial->dev,
+ usb_sndbulkpipe(port->serial->dev,
+ port->bulk_out_endpointAddress),
+ port->write_urb->transfer_buffer, count,
+ ti_bulk_out_callback, tport);
+
+ result = usb_submit_urb(port->write_urb, GFP_ATOMIC);
+ if (result) {
+ dev_err(&port->dev, "%s - submit write urb failed, %d\n", __FUNCTION__, result);
+ tport->tp_write_urb_in_use = 0;
+ /* TODO: reschedule ti_send */
+ } else {
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ tport->tp_icount.tx += count;
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+ }
+
+ /* more room in the buffer for new writes, wakeup */
+ if (tty)
+ tty_wakeup(tty);
+ wake_up_interruptible(&tport->tp_write_wait);
+}
+
+
+static int ti_set_mcr(struct ti_port *tport, unsigned int mcr)
+{
+ int status;
+
+ status = ti_write_byte(tport->tp_tdev,
+ tport->tp_uart_base_addr + TI_UART_OFFSET_MCR,
+ TI_MCR_RTS | TI_MCR_DTR | TI_MCR_LOOP, mcr);
+
+ if (!status)
+ tport->tp_shadow_mcr = mcr;
+
+ return status;
+}
+
+
+static int ti_get_lsr(struct ti_port *tport)
+{
+ int size,status;
+ struct ti_device *tdev = tport->tp_tdev;
+ struct usb_serial_port *port = tport->tp_port;
+ int port_number = port->number - port->serial->minor;
+ struct ti_port_status *data;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ size = sizeof(struct ti_port_status);
+ data = kmalloc(size, GFP_KERNEL);
+ if (!data) {
+ dev_err(&port->dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ status = ti_command_in_sync(tdev, TI_GET_PORT_STATUS,
+ (__u8)(TI_UART1_PORT+port_number), 0, (__u8 *)data, size);
+ if (status) {
+ dev_err(&port->dev, "%s - get port status command failed, %d\n", __FUNCTION__, status);
+ goto free_data;
+ }
+
+ dbg("%s - lsr 0x%02X", __FUNCTION__, data->bLSR);
+
+ tport->tp_lsr = data->bLSR;
+
+free_data:
+ kfree(data);
+ return status;
+}
+
+
+static int ti_get_serial_info(struct ti_port *tport,
+ struct serial_struct __user *ret_arg)
+{
+ struct usb_serial_port *port = tport->tp_port;
+ struct serial_struct ret_serial;
+
+ if (!ret_arg)
+ return -EFAULT;
+
+ memset(&ret_serial, 0, sizeof(ret_serial));
+
+ ret_serial.type = PORT_16550A;
+ ret_serial.line = port->serial->minor;
+ ret_serial.port = port->number - port->serial->minor;
+ ret_serial.flags = tport->tp_flags;
+ ret_serial.xmit_fifo_size = TI_WRITE_BUF_SIZE;
+ ret_serial.baud_base = tport->tp_tdev->td_is_3410 ? 921600 : 460800;
+ ret_serial.closing_wait = tport->tp_closing_wait;
+
+ if (copy_to_user(ret_arg, &ret_serial, sizeof(*ret_arg)))
+ return -EFAULT;
+
+ return 0;
+}
+
+
+static int ti_set_serial_info(struct ti_port *tport,
+ struct serial_struct __user *new_arg)
+{
+ struct usb_serial_port *port = tport->tp_port;
+ struct serial_struct new_serial;
+
+ if (copy_from_user(&new_serial, new_arg, sizeof(new_serial)))
+ return -EFAULT;
+
+ tport->tp_flags = new_serial.flags & TI_SET_SERIAL_FLAGS;
+ if (port->tty)
+ port->tty->low_latency =
+ (tport->tp_flags & ASYNC_LOW_LATENCY) ? 1 : 0;
+ tport->tp_closing_wait = new_serial.closing_wait;
+
+ return 0;
+}
+
+
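+/*
+ * Record modem status changes in the interrupt counters, wake anyone
+ * waiting in TIOCMIWAIT, and start or stop the tty for CTS flow control.
+ */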
+static void ti_handle_new_msr(struct ti_port *tport, __u8 msr)
+{
+ struct async_icount *icount;
+ struct tty_struct *tty;
+ unsigned long flags;
+
+ dbg("%s - msr 0x%02X", __FUNCTION__, msr);
+
+ if (msr & TI_MSR_DELTA_MASK) {
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ icount = &tport->tp_icount;
+ if (msr & TI_MSR_DELTA_CTS)
+ icount->cts++;
+ if (msr & TI_MSR_DELTA_DSR)
+ icount->dsr++;
+ if (msr & TI_MSR_DELTA_CD)
+ icount->dcd++;
+ if (msr & TI_MSR_DELTA_RI)
+ icount->rng++;
+ wake_up_interruptible(&tport->tp_msr_wait);
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+ }
+
+ tport->tp_msr = msr & TI_MSR_MASK;
+
+ /* handle CTS flow control */
+ tty = tport->tp_port->tty;
+ if (tty && C_CRTSCTS(tty)) {
+ if (msr & TI_MSR_CTS) {
+ tty->hw_stopped = 0;
+ tty_wakeup(tty);
+ } else {
+ tty->hw_stopped = 1;
+ }
+ }
+}
+
+
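+/*
+ * Wait (up to the given timeout) for the write buffer to drain, optionally
+ * flushing whatever is left, then poll the line status register until the
+ * device's transmitter is empty.
+ */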
+static void ti_drain(struct ti_port *tport, unsigned long timeout, int flush)
+{
+ struct ti_device *tdev = tport->tp_tdev;
+ struct usb_serial_port *port = tport->tp_port;
+ wait_queue_t wait;
+ unsigned long flags;
+
+ dbg("%s - port %d", __FUNCTION__, port->number);
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+
+ /* wait for data to drain from the buffer */
+ tdev->td_urb_error = 0;
+ init_waitqueue_entry(&wait, current);
+ add_wait_queue(&tport->tp_write_wait, &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (ti_buf_data_avail(tport->tp_write_buf) == 0
+ || timeout == 0 || signal_pending(current)
+ || tdev->td_urb_error
+ || !usb_get_intfdata(port->serial->interface)) /* disconnect */
+ break;
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+ timeout = schedule_timeout(timeout);
+ spin_lock_irqsave(&tport->tp_lock, flags);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&tport->tp_write_wait, &wait);
+
+ /* flush any remaining data in the buffer */
+ if (flush)
+ ti_buf_clear(tport->tp_write_buf);
+
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ /* wait for data to drain from the device */
+ /* wait for empty tx register, plus 20 ms */
+ timeout += jiffies;
+ tport->tp_lsr &= ~TI_LSR_TX_EMPTY;
+ while ((long)(jiffies - timeout) < 0 && !signal_pending(current)
+ && !(tport->tp_lsr&TI_LSR_TX_EMPTY) && !tdev->td_urb_error
+ && usb_get_intfdata(port->serial->interface)) { /* not disconnected */
+ if (ti_get_lsr(tport))
+ break;
+ msleep_interruptible(20);
+ }
+}
+
+
+static void ti_stop_read(struct ti_port *tport, struct tty_struct *tty)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+
+ if (tport->tp_read_urb_state == TI_READ_URB_RUNNING)
+ tport->tp_read_urb_state = TI_READ_URB_STOPPING;
+
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+}
+
+
+static int ti_restart_read(struct ti_port *tport, struct tty_struct *tty)
+{
+ struct urb *urb;
+ int status = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tport->tp_lock, flags);
+
+ if (tport->tp_read_urb_state == TI_READ_URB_STOPPED) {
+ urb = tport->tp_port->read_urb;
+ urb->complete = ti_bulk_in_callback;
+ urb->context = tport;
+ urb->dev = tport->tp_port->serial->dev;
+ status = usb_submit_urb(urb, GFP_KERNEL);
+ }
+ tport->tp_read_urb_state = TI_READ_URB_RUNNING;
+
+ spin_unlock_irqrestore(&tport->tp_lock, flags);
+
+ return status;
+}
+
+
+static int ti_command_out_sync(struct ti_device *tdev, __u8 command,
+ __u16 moduleid, __u16 value, __u8 *data, int size)
+{
+ int status;
+
+ status = usb_control_msg(tdev->td_serial->dev,
+ usb_sndctrlpipe(tdev->td_serial->dev, 0), command,
+ (USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT),
+ value, moduleid, data, size, HZ);
+
+ if (status == size)
+ status = 0;
+
+ if (status > 0)
+ status = -ECOMM;
+
+ return status;
+}
+
+
+static int ti_command_in_sync(struct ti_device *tdev, __u8 command,
+ __u16 moduleid, __u16 value, __u8 *data, int size)
+{
+ int status;
+
+ status = usb_control_msg(tdev->td_serial->dev,
+ usb_rcvctrlpipe(tdev->td_serial->dev, 0), command,
+ (USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN),
+ value, moduleid, data, size, HZ);
+
+ if (status == size)
+ status = 0;
+
+ if (status > 0)
+ status = -ECOMM;
+
+ return status;
+}
+
+
+static int ti_write_byte(struct ti_device *tdev, unsigned long addr,
+ __u8 mask, __u8 byte)
+{
+ int status;
+ unsigned int size;
+ struct ti_write_data_bytes *data;
+ struct device *dev = &tdev->td_serial->dev->dev;
+
+ dbg("%s - addr 0x%08lX, mask 0x%02X, byte 0x%02X", __FUNCTION__, addr, mask, byte);
+
+ size = sizeof(struct ti_write_data_bytes) + 2;
+ data = kmalloc(size, GFP_KERNEL);
+ if (!data) {
+ dev_err(dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ data->bAddrType = TI_RW_DATA_ADDR_XDATA;
+ data->bDataType = TI_RW_DATA_BYTE;
+ data->bDataCounter = 1;
+ data->wBaseAddrHi = cpu_to_be16(addr>>16);
+ data->wBaseAddrLo = cpu_to_be16(addr);
+ data->bData[0] = mask;
+ data->bData[1] = byte;
+
+ status = ti_command_out_sync(tdev, TI_WRITE_DATA, TI_RAM_PORT, 0,
+ (__u8 *)data, size);
+
+ if (status < 0)
+ dev_err(dev, "%s - failed, %d\n", __FUNCTION__, status);
+
+ kfree(data);
+
+ return status;
+}
+
+
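+/*
+ * Pad the firmware image to the download buffer size, fill in the header
+ * with the payload length and an 8-bit checksum, and send the whole image
+ * to the device over the bulk out pipe in TI_DOWNLOAD_MAX_PACKET_SIZE
+ * chunks.
+ */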
+static int ti_download_firmware(struct ti_device *tdev,
+ unsigned char *firmware, unsigned int firmware_size)
+{
+ int status = 0;
+ int buffer_size;
+ int pos;
+ int len;
+ int done;
+ __u8 cs = 0;
+ __u8 *buffer;
+ struct usb_device *dev = tdev->td_serial->dev;
+ struct ti_firmware_header *header;
+ unsigned int pipe = usb_sndbulkpipe(dev,
+ tdev->td_serial->port[0]->bulk_out_endpointAddress);
+
+
+ buffer_size = TI_FIRMWARE_BUF_SIZE + sizeof(struct ti_firmware_header);
+ buffer = kmalloc(buffer_size, GFP_KERNEL);
+ if (!buffer) {
+ dev_err(&dev->dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ memcpy(buffer, firmware, firmware_size);
+ memset(buffer+firmware_size, 0xff, buffer_size-firmware_size);
+
+ for(pos = sizeof(struct ti_firmware_header); pos < buffer_size; pos++)
+ cs = (__u8)(cs + buffer[pos]);
+
+ header = (struct ti_firmware_header *)buffer;
+ header->wLength = cpu_to_le16((__u16)(buffer_size - sizeof(struct ti_firmware_header)));
+ header->bCheckSum = cs;
+
+ dbg("%s - downloading firmware", __FUNCTION__);
+ for (pos = 0; pos < buffer_size; pos += done) {
+ len = min(buffer_size - pos, TI_DOWNLOAD_MAX_PACKET_SIZE);
+ status = usb_bulk_msg(dev, pipe, buffer+pos, len, &done, HZ);
+ if (status)
+ break;
+ }
+
+ kfree(buffer);
+
+ if (status) {
+ dev_err(&dev->dev, "%s - error downloading firmware, %d\n", __FUNCTION__, status);
+ return status;
+ }
+
+ dbg("%s - download successful", __FUNCTION__);
+
+ return 0;
+}
+
+
+/* Circular Buffer Functions */
+
+/*
+ * ti_buf_alloc
+ *
+ * Allocate a circular buffer and all associated memory.
+ */
+
+static struct circ_buf *ti_buf_alloc(void)
+{
+ struct circ_buf *cb;
+
+ cb = (struct circ_buf *)kmalloc(sizeof(struct circ_buf), GFP_KERNEL);
+ if (cb == NULL)
+ return NULL;
+
+ cb->buf = kmalloc(TI_WRITE_BUF_SIZE, GFP_KERNEL);
+ if (cb->buf == NULL) {
+ kfree(cb);
+ return NULL;
+ }
+
+ ti_buf_clear(cb);
+
+ return cb;
+}
+
+
+/*
+ * ti_buf_free
+ *
+ * Free the buffer and all associated memory.
+ */
+
+static void ti_buf_free(struct circ_buf *cb)
+{
+ kfree(cb->buf);
+ kfree(cb);
+}
+
+
+/*
+ * ti_buf_clear
+ *
+ * Clear out all data in the circular buffer.
+ */
+
+static void ti_buf_clear(struct circ_buf *cb)
+{
+ cb->head = cb->tail = 0;
+}
+
+
+/*
+ * ti_buf_data_avail
+ *
+ * Return the number of bytes of data available in the circular
+ * buffer.
+ */
+
+static int ti_buf_data_avail(struct circ_buf *cb)
+{
+ return CIRC_CNT(cb->head,cb->tail,TI_WRITE_BUF_SIZE);
+}
+
+
+/*
+ * ti_buf_space_avail
+ *
+ * Return the number of bytes of space available in the circular
+ * buffer.
+ */
+
+static int ti_buf_space_avail(struct circ_buf *cb)
+{
+ return CIRC_SPACE(cb->head,cb->tail,TI_WRITE_BUF_SIZE);
+}
+
+
+/*
+ * ti_buf_put
+ *
+ * Copy data from a user buffer and put it into the circular buffer.
+ * Restrict to the amount of space available.
+ *
+ * Return the number of bytes copied.
+ */
+
+static int ti_buf_put(struct circ_buf *cb, const char *buf, int count)
+{
+ int c, ret = 0;
+
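+	/* CIRC_SPACE_TO_END reports only the contiguous free space up to the
+	 * end of the buffer, so this loop runs at most twice: once up to the
+	 * wrap point, then again from the start of the buffer. */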
+ while (1) {
+ c = CIRC_SPACE_TO_END(cb->head, cb->tail, TI_WRITE_BUF_SIZE);
+ if (count < c)
+ c = count;
+ if (c <= 0)
+ break;
+ memcpy(cb->buf + cb->head, buf, c);
+ cb->head = (cb->head + c) & (TI_WRITE_BUF_SIZE-1);
+ buf += c;
+ count -= c;
+ ret += c;
+ }
+
+ return ret;
+}
+
+
+/*
+ * ti_buf_get
+ *
+ * Get data from the circular buffer and copy to the given buffer.
+ * Restrict to the amount of data available.
+ *
+ * Return the number of bytes copied.
+ */
+
+static int ti_buf_get(struct circ_buf *cb, char *buf, int count)
+{
+ int c, ret = 0;
+
+ while (1) {
+ c = CIRC_CNT_TO_END(cb->head, cb->tail, TI_WRITE_BUF_SIZE);
+ if (count < c)
+ c = count;
+ if (c <= 0)
+ break;
+ memcpy(buf, cb->buf + cb->tail, c);
+ cb->tail = (cb->tail + c) & (TI_WRITE_BUF_SIZE-1);
+ buf += c;
+ count -= c;
+ ret += c;
+ }
+
+ return ret;
+}
--- /dev/null
+/* vi: ts=8 sw=8
+ *
+ * TI 3410/5052 USB Serial Driver Header
+ *
+ * Copyright (C) 2004 Texas Instruments
+ *
+ * This driver is based on the Linux io_ti driver, which is
+ * Copyright (C) 2000-2002 Inside Out Networks
+ * Copyright (C) 2001-2002 Greg Kroah-Hartman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * For questions or problems with this driver, contact Texas Instruments
+ * technical support, or Al Borchers <alborchers@steinerpoint.com>, or
+ * Peter Berger <pberger@brimson.com>.
+ */
+
+#ifndef _TI_3410_5052_H_
+#define _TI_3410_5052_H_
+
+/* Configuration ids */
+#define TI_BOOT_CONFIG 1
+#define TI_ACTIVE_CONFIG 2
+
+/* Vendor and product ids */
+#define TI_VENDOR_ID 0x0451
+#define TI_3410_PRODUCT_ID 0x3410
+#define TI_5052_BOOT_PRODUCT_ID 0x5052 /* no EEPROM, no firmware */
+#define TI_5152_BOOT_PRODUCT_ID 0x5152 /* no EEPROM, no firmware */
+#define TI_5052_EEPROM_PRODUCT_ID 0x505A /* EEPROM, no firmware */
+#define TI_5052_FIRMWARE_PRODUCT_ID 0x505F /* firmware is running */
+
+/* Commands */
+#define TI_GET_VERSION 0x01
+#define TI_GET_PORT_STATUS 0x02
+#define TI_GET_PORT_DEV_INFO 0x03
+#define TI_GET_CONFIG 0x04
+#define TI_SET_CONFIG 0x05
+#define TI_OPEN_PORT 0x06
+#define TI_CLOSE_PORT 0x07
+#define TI_START_PORT 0x08
+#define TI_STOP_PORT 0x09
+#define TI_TEST_PORT 0x0A
+#define TI_PURGE_PORT 0x0B
+#define TI_RESET_EXT_DEVICE 0x0C
+#define TI_WRITE_DATA 0x80
+#define TI_READ_DATA 0x81
+#define TI_REQ_TYPE_CLASS 0x82
+
+/* Module identifiers */
+#define TI_I2C_PORT 0x01
+#define TI_IEEE1284_PORT 0x02
+#define TI_UART1_PORT 0x03
+#define TI_UART2_PORT 0x04
+#define TI_RAM_PORT 0x05
+
+/* Modem status */
+#define TI_MSR_DELTA_CTS 0x01
+#define TI_MSR_DELTA_DSR 0x02
+#define TI_MSR_DELTA_RI 0x04
+#define TI_MSR_DELTA_CD 0x08
+#define TI_MSR_CTS 0x10
+#define TI_MSR_DSR 0x20
+#define TI_MSR_RI 0x40
+#define TI_MSR_CD 0x80
+#define TI_MSR_DELTA_MASK 0x0F
+#define TI_MSR_MASK 0xF0
+
+/* Line status */
+#define TI_LSR_OVERRUN_ERROR 0x01
+#define TI_LSR_PARITY_ERROR 0x02
+#define TI_LSR_FRAMING_ERROR 0x04
+#define TI_LSR_BREAK 0x08
+#define TI_LSR_ERROR 0x0F
+#define TI_LSR_RX_FULL 0x10
+#define TI_LSR_TX_EMPTY 0x20
+
+/* Line control */
+#define TI_LCR_BREAK 0x40
+
+/* Modem control */
+#define TI_MCR_LOOP 0x04
+#define TI_MCR_DTR 0x10
+#define TI_MCR_RTS 0x20
+
+/* Mask settings */
+#define TI_UART_ENABLE_RTS_IN 0x0001
+#define TI_UART_DISABLE_RTS 0x0002
+#define TI_UART_ENABLE_PARITY_CHECKING 0x0008
+#define TI_UART_ENABLE_DSR_OUT 0x0010
+#define TI_UART_ENABLE_CTS_OUT 0x0020
+#define TI_UART_ENABLE_X_OUT 0x0040
+#define TI_UART_ENABLE_XA_OUT 0x0080
+#define TI_UART_ENABLE_X_IN 0x0100
+#define TI_UART_ENABLE_DTR_IN 0x0800
+#define TI_UART_DISABLE_DTR 0x1000
+#define TI_UART_ENABLE_MS_INTS 0x2000
+#define TI_UART_ENABLE_AUTO_START_DMA 0x4000
+
+/* Parity */
+#define TI_UART_NO_PARITY 0x00
+#define TI_UART_ODD_PARITY 0x01
+#define TI_UART_EVEN_PARITY 0x02
+#define TI_UART_MARK_PARITY 0x03
+#define TI_UART_SPACE_PARITY 0x04
+
+/* Stop bits */
+#define TI_UART_1_STOP_BITS 0x00
+#define TI_UART_1_5_STOP_BITS 0x01
+#define TI_UART_2_STOP_BITS 0x02
+
+/* Bits per character */
+#define TI_UART_5_DATA_BITS 0x00
+#define TI_UART_6_DATA_BITS 0x01
+#define TI_UART_7_DATA_BITS 0x02
+#define TI_UART_8_DATA_BITS 0x03
+
+/* 232/485 modes */
+#define TI_UART_232 0x00
+#define TI_UART_485_RECEIVER_DISABLED 0x01
+#define TI_UART_485_RECEIVER_ENABLED 0x02
+
+/* Pipe transfer mode and timeout */
+#define TI_PIPE_MODE_CONTINOUS 0x01
+#define TI_PIPE_MODE_MASK 0x03
+#define TI_PIPE_TIMEOUT_MASK 0x7C
+#define TI_PIPE_TIMEOUT_ENABLE 0x80
+
+/* Config struct */
+struct ti_uart_config {
+ __u16 wBaudRate;
+ __u16 wFlags;
+ __u8 bDataBits;
+ __u8 bParity;
+ __u8 bStopBits;
+ char cXon;
+ char cXoff;
+ __u8 bUartMode;
+} __attribute__((packed));
+
+/* Get port status */
+struct ti_port_status {
+ __u8 bCmdCode;
+ __u8 bModuleId;
+ __u8 bErrorCode;
+ __u8 bMSR;
+ __u8 bLSR;
+} __attribute__((packed));
+
+/* Purge modes */
+#define TI_PURGE_OUTPUT 0x00
+#define TI_PURGE_INPUT 0x80
+
+/* Read/Write data */
+#define TI_RW_DATA_ADDR_SFR 0x10
+#define TI_RW_DATA_ADDR_IDATA 0x20
+#define TI_RW_DATA_ADDR_XDATA 0x30
+#define TI_RW_DATA_ADDR_CODE 0x40
+#define TI_RW_DATA_ADDR_GPIO 0x50
+#define TI_RW_DATA_ADDR_I2C 0x60
+#define TI_RW_DATA_ADDR_FLASH 0x70
+#define TI_RW_DATA_ADDR_DSP 0x80
+
+#define TI_RW_DATA_UNSPECIFIED 0x00
+#define TI_RW_DATA_BYTE 0x01
+#define TI_RW_DATA_WORD 0x02
+#define TI_RW_DATA_DOUBLE_WORD 0x04
+
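+/* Target addresses are sent as two big-endian 16-bit halves: wBaseAddrHi
+ * holds bits 31:16 and wBaseAddrLo bits 15:0 of the address. */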
+struct ti_write_data_bytes {
+ __u8 bAddrType;
+ __u8 bDataType;
+ __u8 bDataCounter;
+ __be16 wBaseAddrHi;
+ __be16 wBaseAddrLo;
+ __u8 bData[0];
+} __attribute__((packed));
+
+struct ti_read_data_request {
+ __u8 bAddrType;
+ __u8 bDataType;
+ __u8 bDataCounter;
+ __be16 wBaseAddrHi;
+ __be16 wBaseAddrLo;
+} __attribute__((packed));
+
+struct ti_read_data_bytes {
+ __u8 bCmdCode;
+ __u8 bModuleId;
+ __u8 bErrorCode;
+ __u8 bData[0];
+} __attribute__((packed));
+
+/* Interrupt struct */
+struct ti_interrupt {
+ __u8 bICode;
+ __u8 bIInfo;
+} __attribute__((packed));
+
+/* Interrupt codes */
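+/* The upper nibble of an interrupt code is the module id (TI_UART1_PORT is
+ * 0x03), so subtracting 3 yields a zero-based port number; the lower nibble
+ * identifies the reported event. */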
+#define TI_GET_PORT_FROM_CODE(c) (((c) >> 4) - 3)
+#define TI_GET_FUNC_FROM_CODE(c) ((c) & 0x0f)
+#define TI_CODE_HARDWARE_ERROR 0xFF
+#define TI_CODE_DATA_ERROR 0x03
+#define TI_CODE_MODEM_STATUS 0x04
+
+/* Download firmware max packet size */
+#define TI_DOWNLOAD_MAX_PACKET_SIZE 64
+
+/* Firmware image header */
+struct ti_firmware_header {
+ __le16 wLength;
+ __u8 bCheckSum;
+} __attribute__((packed));
+
+/* UART addresses */
+#define TI_UART1_BASE_ADDR 0xFFA0 /* UART 1 base address */
+#define TI_UART2_BASE_ADDR 0xFFB0 /* UART 2 base address */
+#define TI_UART_OFFSET_LCR		0x0002	/* UART LCR register offset */
+#define TI_UART_OFFSET_MCR 0x0004 /* UART MCR register offset */
+
+#endif /* _TI_3410_5052_H_ */
((endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK)
== USB_ENDPOINT_XFER_BULK)) {
/* we found a bulk in endpoint */
- buffer_size = endpoint->wMaxPacketSize;
+ buffer_size = le16_to_cpu(endpoint->wMaxPacketSize);
dev->bulk_in_size = buffer_size;
dev->bulk_in_endpointAddr = endpoint->bEndpointAddress;
dev->bulk_in_buffer = kmalloc(buffer_size, GFP_KERNEL);
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/mm.h>
#include <linux/fb.h>
#include <linux/init.h>
#include <linux/ioport.h>
return 0;
}
+static int clcdfb_mmap(struct fb_info *info, struct file *file,
+ struct vm_area_struct *vma)
+{
+ struct clcd_fb *fb = to_clcd(info);
+ unsigned long len, off = vma->vm_pgoff << PAGE_SHIFT;
+ int ret = -EINVAL;
+
+ len = info->fix.smem_len;
+
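+	/* Defer to the board's mmap hook only when the requested range lies
+	 * entirely within the framebuffer memory. */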
+ if (off <= len && vma->vm_end - vma->vm_start <= len - off &&
+ fb->board->mmap)
+ ret = fb->board->mmap(fb, vma);
+
+ return ret;
+}
+
static struct fb_ops clcdfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = clcdfb_check_var,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
.fb_cursor = soft_cursor,
+ .fb_mmap = clcdfb_mmap,
};
static int clcdfb_register(struct clcd_fb *fb)
atyfb-y := atyfb_base.o mach64_accel.o mach64_cursor.o
atyfb-$(CONFIG_FB_ATY_GX) += mach64_gx.o
atyfb-$(CONFIG_FB_ATY_CT) += mach64_ct.o
+atyfb-$(CONFIG_FB_ATY_XL_INIT) += xlinit.o
+
atyfb-objs := $(atyfb-y)
radeonfb-y := radeon_base.o radeon_pm.o radeon_monitor.o radeon_accel.o
* radeonfb
*/
+#define PCI_CHIP_RV380_3150 0x3150
+#define PCI_CHIP_RV380_3151 0x3151
+#define PCI_CHIP_RV380_3152 0x3152
+#define PCI_CHIP_RV380_3153 0x3153
+#define PCI_CHIP_RV380_3154 0x3154
+#define PCI_CHIP_RV380_3156 0x3156
+#define PCI_CHIP_RV380_3E50 0x3E50
+#define PCI_CHIP_RV380_3E51 0x3E51
+#define PCI_CHIP_RV380_3E52 0x3E52
+#define PCI_CHIP_RV380_3E53 0x3E53
+#define PCI_CHIP_RV380_3E54 0x3E54
+#define PCI_CHIP_RV380_3E56 0x3E56
#define PCI_CHIP_RS100_4136 0x4136
#define PCI_CHIP_RS200_4137 0x4137
#define PCI_CHIP_R300_AD 0x4144
#define PCI_CHIP_RV250_Ie 0x4965
#define PCI_CHIP_RV250_If 0x4966
#define PCI_CHIP_RV250_Ig 0x4967
+#define PCI_CHIP_R420_JH 0x4A48
+#define PCI_CHIP_R420_JI 0x4A49
+#define PCI_CHIP_R420_JJ 0x4A4A
+#define PCI_CHIP_R420_JK 0x4A4B
+#define PCI_CHIP_R420_JL 0x4A4C
+#define PCI_CHIP_R420_JM 0x4A4D
+#define PCI_CHIP_R420_JN 0x4A4E
+#define PCI_CHIP_R420_JP 0x4A50
#define PCI_CHIP_MACH64LB 0x4C42
#define PCI_CHIP_MACH64LD 0x4C44
#define PCI_CHIP_RAGE128LE 0x4C45
#define PCI_CHIP_RV250_Le 0x4C65
#define PCI_CHIP_RV250_Lf 0x4C66
#define PCI_CHIP_RV250_Lg 0x4C67
+#define PCI_CHIP_RV250_Ln 0x4C6E
#define PCI_CHIP_RAGE128MF 0x4D46
#define PCI_CHIP_RAGE128ML 0x4D4C
#define PCI_CHIP_R300_ND 0x4E44
#define PCI_CHIP_RAGE128TS 0x5453
#define PCI_CHIP_RAGE128TT 0x5454
#define PCI_CHIP_RAGE128TU 0x5455
+#define PCI_CHIP_RV370_5460 0x5460
+#define PCI_CHIP_RV370_5461 0x5461
+#define PCI_CHIP_RV370_5462 0x5462
+#define PCI_CHIP_RV370_5463 0x5463
+#define PCI_CHIP_RV370_5464 0x5464
+#define PCI_CHIP_RV370_5465 0x5465
+#define PCI_CHIP_RV370_5466 0x5466
+#define PCI_CHIP_RV370_5467 0x5467
+#define PCI_CHIP_R423_UH 0x5548
+#define PCI_CHIP_R423_UI 0x5549
+#define PCI_CHIP_R423_UJ 0x554A
+#define PCI_CHIP_R423_UK 0x554B
+#define PCI_CHIP_R423_UQ 0x5551
+#define PCI_CHIP_R423_UR 0x5552
+#define PCI_CHIP_R423_UT 0x5554
#define PCI_CHIP_MACH64VT 0x5654
#define PCI_CHIP_MACH64VU 0x5655
#define PCI_CHIP_MACH64VV 0x5656
#define PCI_CHIP_RS300_5835 0x5835
#define PCI_CHIP_RS300_5836 0x5836
#define PCI_CHIP_RS300_5837 0x5837
+#define PCI_CHIP_RV370_5B60 0x5B60
+#define PCI_CHIP_RV370_5B61 0x5B61
+#define PCI_CHIP_RV370_5B62 0x5B62
+#define PCI_CHIP_RV370_5B63 0x5B63
+#define PCI_CHIP_RV370_5B64 0x5B64
+#define PCI_CHIP_RV370_5B65 0x5B65
+#define PCI_CHIP_RV370_5B66 0x5B66
+#define PCI_CHIP_RV370_5B67 0x5B67
#define PCI_CHIP_RV280_5960 0x5960
#define PCI_CHIP_RV280_5961 0x5961
#define PCI_CHIP_RV280_5962 0x5962
-#define PCI_CHIP_RV280_5963 0x5963
#define PCI_CHIP_RV280_5964 0x5964
-#define PCI_CHIP_RV280_5968 0x5968
-#define PCI_CHIP_RV280_5969 0x5969
-#define PCI_CHIP_RV280_596A 0x596A
-#define PCI_CHIP_RV280_596B 0x596B
#define PCI_CHIP_RV280_5C61 0x5C61
#define PCI_CHIP_RV280_5C63 0x5C63
+#define PCI_CHIP_R423_5D57 0x5D57
+#define PCI_CHIP_RS350_7834 0x7834
+#define PCI_CHIP_RS350_7835 0x7835
+
#include <asm/io.h>
#ifdef CONFIG_PPC_PMAC
+#include <asm/pmac_feature.h>
#include <asm/prom.h>
#include <asm/pci-bridge.h>
#include "../macmodes.h"
static void aty128_remove(struct pci_dev *pdev);
static int aty128_pci_suspend(struct pci_dev *pdev, u32 state);
static int aty128_pci_resume(struct pci_dev *pdev);
+static int aty128_do_resume(struct pci_dev *pdev);
/* supported Rage128 chipsets */
static struct pci_device_id aty128_pci_tbl[] = {
static void __init aty128_get_pllinfo(struct aty128fb_par *par,
void __iomem *bios);
static void __init __iomem *aty128_map_ROM(struct pci_dev *pdev, const struct aty128fb_par *par);
-static void __init aty128_unmap_ROM(struct pci_dev *dev, void __iomem * rom);
#endif
static void aty128_timings(struct aty128fb_par *par);
static void aty128_init_engine(struct aty128fb_par *par);
#ifndef __sparc__
-static void __init aty128_unmap_ROM(struct pci_dev *dev, void __iomem * rom)
-{
- struct resource *r = &dev->resource[PCI_ROM_RESOURCE];
-
- iounmap(rom);
-
- /* Release the ROM resource if we used it in the first place */
- if (r->parent && r->flags & PCI_ROM_ADDRESS_ENABLE) {
- release_resource(r);
- r->flags &= ~PCI_ROM_ADDRESS_ENABLE;
- r->end -= r->start;
- r->start = 0;
- }
- /* This will disable and set address to unassigned */
- pci_write_config_dword(dev, dev->rom_base_reg, 0);
-}
-
-
static void __iomem * __init aty128_map_ROM(const struct aty128fb_par *par, struct pci_dev *dev)
{
- struct resource *r;
u16 dptr;
u8 rom_type;
void __iomem *bios;
+ size_t rom_size;
/* Fix from ATI for problem with Rage128 hardware not leaving ROM enabled */
unsigned int temp;
aty_st_le32(RAGE128_MPP_TB_CONFIG, temp);
temp = aty_ld_le32(RAGE128_MPP_TB_CONFIG);
- /* no need to search for the ROM, just ask the card where it is. */
- r = &dev->resource[PCI_ROM_RESOURCE];
+ bios = pci_map_rom(dev, &rom_size);
- /* assign the ROM an address if it doesn't have one */
- if (r->parent == NULL)
- pci_assign_resource(dev, PCI_ROM_RESOURCE);
-
- /* enable if needed */
- if (!(r->flags & PCI_ROM_ADDRESS_ENABLE)) {
- pci_write_config_dword(dev, dev->rom_base_reg,
- r->start | PCI_ROM_ADDRESS_ENABLE);
- r->flags |= PCI_ROM_ADDRESS_ENABLE;
- }
-
- bios = ioremap(r->start, r->end - r->start + 1);
if (!bios) {
printk(KERN_ERR "aty128fb: ROM failed to map\n");
return NULL;
}
-
+
/* Very simple test to make sure it appeared */
if (BIOS_IN16(0) != 0xaa55) {
printk(KERN_ERR "aty128fb: Invalid ROM signature %x should be 0xaa55\n",
return bios;
failed:
- aty128_unmap_ROM(dev, bios);
+ pci_unmap_rom(dev, bios);
return NULL;
}
* Initialisation
*/
+#ifdef CONFIG_PPC_PMAC
+static void aty128_early_resume(void *data)
+{
+ struct aty128fb_par *par = data;
+
+ if (try_acquire_console_sem())
+ return;
+ aty128_do_resume(par->pdev);
+ release_console_sem();
+}
+#endif /* CONFIG_PPC_PMAC */
+
static int __init aty128_init(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct fb_info *info = pci_get_drvdata(pdev);
var = default_var;
#ifdef CONFIG_PPC_PMAC
if (_machine == _MACH_Pmac) {
+ /* Indicate sleep capability */
+ if (par->chip_gen == rage_M3) {
+ pmac_call_feature(PMAC_FTR_DEVICE_CAN_WAKE, NULL, 0, 1);
+ pmac_set_early_video_resume(aty128_early_resume, par);
+ }
+
+ /* Find default mode */
if (mode_option) {
if (!mac_find_mode(&var, info, mode_option, 8))
var = default_var;
else {
printk(KERN_INFO "aty128fb: Rage128 BIOS located\n");
aty128_get_pllinfo(par, bios);
- aty128_unmap_ROM(pdev, bios);
+ pci_unmap_rom(pdev, bios);
}
#endif /* __sparc__ */
return 0;
}
-static int aty128_pci_resume(struct pci_dev *pdev)
+static int aty128_do_resume(struct pci_dev *pdev)
{
struct fb_info *info = pci_get_drvdata(pdev);
struct aty128fb_par *par = info->par;
if (pdev->dev.power.power_state == 0)
return 0;
- acquire_console_sem();
-
/* Wakeup chip */
if (pdev->dev.power.power_state == 2)
aty128_set_suspend(par, 0);
par->lock_blank = 0;
aty128fb_blank(0, info);
- release_console_sem();
-
pdev->dev.power.power_state = 0;
printk(KERN_DEBUG "aty128fb: resumed !\n");
return 0;
}
+static int aty128_pci_resume(struct pci_dev *pdev)
+{
+ int rc;
+
+ acquire_console_sem();
+ rc = aty128_do_resume(pdev);
+ release_console_sem();
+
+ return rc;
+}
+
+
int __init aty128fb_init(void)
{
#ifndef MODULE
MODULE_DESCRIPTION("FBDev driver for ATI Rage128 / Pro cards");
MODULE_LICENSE("GPL");
module_param(mode_option, charp, 0);
-MODULE_PARM_DESC(mode, "Specify resolution as \"<xres>x<yres>[-<bpp>][@<refresh>]\" ");
+MODULE_PARM_DESC(mode_option, "Specify resolution as \"<xres>x<yres>[-<bpp>][@<refresh>]\" ");
#ifdef CONFIG_MTRR
module_param_named(nomtrr, mtrr, invbool, 0);
-MODULE_PARM_DESC(mtrr, "bool: Disable MTRR support (0 or 1=disabled) (default=0)");
+MODULE_PARM_DESC(nomtrr, "bool: Disable MTRR support (0 or 1=disabled) (default=0)");
#endif
#endif
u8 pll_gen_cntl;
u8 mclk_fb_div;
	u8 mclk_fb_mult;	/* 2 or 4 */
-/* u8 sclk_fb_div;*/
+ u8 sclk_fb_div;
u8 pll_vclk_cntl;
u8 vclk_post_div;
u8 vclk_fb_div;
u8 pll_ext_cntl;
-/* u8 ext_vpll_cntl;
- u8 spll_cntl2;*/
+ u8 ext_vpll_cntl;
+ u8 spll_cntl2;
u32 dsp_config; /* Mach64 GTB DSP */
u32 dsp_on_off; /* Mach64 GTB DSP */
u32 dsp_loop_latency;
#define M64F_XL_DLL 0x00080000
#define M64F_MFB_FORCE_4 0x00100000
#define M64F_HW_TRIPLE 0x00200000
-
/*
* Register access
*/
#endif
}
+static inline void aty_st_le16(int regindex, u16 val,
+ const struct atyfb_par *par)
+{
+	/* Hack for block 1, should be cleanly optimized by compiler */
+ if (regindex >= 0x400)
+ regindex -= 0x800;
+#ifdef CONFIG_ATARI
+ out_le16((volatile u16 *)(par->ati_regbase + regindex), val);
+#else
+	writew(val, par->ati_regbase + regindex);
+#endif
+}
+
static inline u8 aty_ld_8(int regindex, const struct atyfb_par *par)
{
	/* Hack for block 1, should be cleanly optimized by compiler */
extern void aty_reset_engine(const struct atyfb_par *par);
extern void aty_init_engine(struct atyfb_par *par, struct fb_info *info);
-
+extern int atyfb_xl_init(struct fb_info *info);
+extern void aty_st_pll_ct(int offset, u8 val, const struct atyfb_par *par);
+extern u8 aty_ld_pll_ct(int offset, const struct atyfb_par *par);
{ 0x37, 0x00000000 }
};
-static inline u32 aty_ld_lcd(u8 lcd_reg, struct atyfb_par *par)
-{
- aty_st_8(LCD_INDEX, lcd_reg, par);
- return aty_ld_le32(LCD_DATA, par);
-}
-
-static inline void aty_st_lcd(u8 lcd_reg, u32 val,
- struct atyfb_par *par)
-{
- aty_st_8(LCD_INDEX, lcd_reg, par);
- aty_st_le32(LCD_DATA, val, par);
-}
-
static void reset_gui(struct atyfb_par *par)
{
aty_st_8(GEN_TEST_CNTL+1, 0x01, par);
// the MCLK, XCLK are 120MHz on victoria card
par->mclk_per = 1000000/120;
par->xclk_per = 1000000/120;
- par->features &= ~M64F_MFB_TIMES_4;
+ par->features &= ~M64F_MFB_FORCE_4;
}
/*
--- /dev/null
+/*
+ * BRIEF MODULE DESCRIPTION
+ * Au1100 LCD Driver.
+ *
+ * Copyright 2002 MontaVista Software
+ * Author: MontaVista Software, Inc.
+ * ppopov@mvista.com or source@mvista.com
+ *
+ * Copyright 2002 Alchemy Semiconductor
+ * Author: Alchemy Semiconductor
+ *
+ * Based on:
+ * linux/drivers/video/skeletonfb.c -- Skeleton for a frame buffer device
+ * Created 28 Dec 1997 by Geert Uytterhoeven
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/tty.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/au1000.h>
+#include <asm/pb1100.h>
+#include "au1100fb.h"
+
+#include <video/fbcon.h>
+#include <video/fbcon-mfb.h>
+#include <video/fbcon-cfb2.h>
+#include <video/fbcon-cfb4.h>
+#include <video/fbcon-cfb8.h>
+#include <video/fbcon-cfb16.h>
+
+/*
+ * Sanity check. If this is a new Au1100 based board, search for
+ * the PB1100 ifdefs to make sure you modify the code accordingly.
+ */
+#if defined(CONFIG_MIPS_PB1100) || defined(CONFIG_MIPS_DB1100) || defined(CONFIG_MIPS_HYDROGEN3)
+#else
+#error Unknown Au1100 board
+#endif
+
+#define CMAPSIZE 16
+
+static int my_lcd_index; /* default is zero */
+struct known_lcd_panels *p_lcd;
+AU1100_LCD *p_lcd_reg = (AU1100_LCD *)AU1100_LCD_ADDR;
+
+struct au1100fb_info {
+ struct fb_info_gen gen;
+ unsigned long fb_virt_start;
+ unsigned long fb_size;
+ unsigned long fb_phys;
+ int mmaped;
+ int nohwcursor;
+
+ struct { unsigned red, green, blue, pad; } palette[256];
+
+#if defined(FBCON_HAS_CFB16)
+ u16 fbcon_cmap16[16];
+#endif
+};
+
+
+struct au1100fb_par {
+ struct fb_var_screeninfo var;
+
+ int line_length; // in bytes
+ int cmap_len; // color-map length
+};
+
+
+static struct au1100fb_info fb_info;
+static struct au1100fb_par current_par;
+static struct display disp;
+
+int au1100fb_init(void);
+void au1100fb_setup(char *options, int *ints);
+static int au1100fb_mmap(struct fb_info *fb, struct file *file,
+ struct vm_area_struct *vma);
+static int au1100_blank(int blank_mode, struct fb_info_gen *info);
+static int au1100fb_ioctl(struct inode *inode, struct file *file, u_int cmd,
+ u_long arg, int con, struct fb_info *info);
+
+void au1100_nocursor(struct display *p, int mode, int xx, int yy){};
+
+static struct fb_ops au1100fb_ops = {
+ owner: THIS_MODULE,
+ fb_get_fix: fbgen_get_fix,
+ fb_get_var: fbgen_get_var,
+ fb_set_var: fbgen_set_var,
+ fb_get_cmap: fbgen_get_cmap,
+ fb_set_cmap: fbgen_set_cmap,
+ fb_pan_display: fbgen_pan_display,
+ fb_ioctl: au1100fb_ioctl,
+ fb_mmap: au1100fb_mmap,
+};
+
+static void au1100_detect(void)
+{
+ /*
+ * This function should detect the current video mode settings
+ * and store it as the default video mode
+ */
+
+ /*
+ * Yeh, well, we're not going to change any settings so we're
+ * always stuck with the default ...
+ */
+
+}
+
+static int au1100_encode_fix(struct fb_fix_screeninfo *fix,
+ const void *_par, struct fb_info_gen *_info)
+{
+ struct au1100fb_info *info = (struct au1100fb_info *) _info;
+ struct au1100fb_par *par = (struct au1100fb_par *) _par;
+ struct fb_var_screeninfo *var = &par->var;
+
+ memset(fix, 0, sizeof(struct fb_fix_screeninfo));
+
+ fix->smem_start = info->fb_phys;
+ fix->smem_len = info->fb_size;
+ fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->type_aux = 0;
+ fix->visual = (var->bits_per_pixel == 8) ?
+ FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
+ fix->ywrapstep = 0;
+ fix->xpanstep = 1;
+ fix->ypanstep = 1;
+ fix->line_length = current_par.line_length;
+ return 0;
+}
+
+static void set_color_bitfields(struct fb_var_screeninfo *var)
+{
+ switch (var->bits_per_pixel) {
+ case 8:
+ var->red.offset = 0;
+ var->red.length = 8;
+ var->green.offset = 0;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case 16: /* RGB 565 */
+ var->red.offset = 11;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 6;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ }
+
+ var->red.msb_right = 0;
+ var->green.msb_right = 0;
+ var->blue.msb_right = 0;
+ var->transp.msb_right = 0;
+}
+
+static int au1100_decode_var(const struct fb_var_screeninfo *var,
+ void *_par, struct fb_info_gen *_info)
+{
+
+ struct au1100fb_par *par = (struct au1100fb_par *)_par;
+
+	/*
+	 * Don't allow changing any of these yet: the panel resolution is
+	 * fixed, so xres and yres cannot be altered.
+	 */
+	if (var->xres != p_lcd->xres ||
+	    var->yres != p_lcd->yres) {
+		return -EINVAL;
+	}
+ if(var->bits_per_pixel != p_lcd->bpp) {
+ return -EINVAL;
+ }
+
+ memset(par, 0, sizeof(struct au1100fb_par));
+ par->var = *var;
+
+ /* FIXME */
+ switch (var->bits_per_pixel) {
+ case 8:
+ par->var.bits_per_pixel = 8;
+ break;
+ case 16:
+ par->var.bits_per_pixel = 16;
+ break;
+ default:
+ printk("color depth %d bpp not supported\n",
+ var->bits_per_pixel);
+ return -EINVAL;
+
+ }
+ set_color_bitfields(&par->var);
+ par->cmap_len = (par->var.bits_per_pixel == 8) ? 256 : 16;
+ return 0;
+}
+
+static int au1100_encode_var(struct fb_var_screeninfo *var,
+ const void *par, struct fb_info_gen *_info)
+{
+
+ *var = ((struct au1100fb_par *)par)->var;
+ return 0;
+}
+
+static void
+au1100_get_par(void *_par, struct fb_info_gen *_info)
+{
+ *(struct au1100fb_par *)_par = current_par;
+}
+
+static void au1100_set_par(const void *par, struct fb_info_gen *info)
+{
+ /* nothing to do: we don't change any settings */
+}
+
+static int au1100_getcolreg(unsigned regno, unsigned *red, unsigned *green,
+ unsigned *blue, unsigned *transp,
+ struct fb_info *info)
+{
+
+ struct au1100fb_info* i = (struct au1100fb_info*)info;
+
+ if (regno > 255)
+ return 1;
+
+ *red = i->palette[regno].red;
+ *green = i->palette[regno].green;
+ *blue = i->palette[regno].blue;
+ *transp = 0;
+
+ return 0;
+}
+
+static int au1100_setcolreg(unsigned regno, unsigned red, unsigned green,
+ unsigned blue, unsigned transp,
+ struct fb_info *info)
+{
+ struct au1100fb_info* i = (struct au1100fb_info *)info;
+ u32 rgbcol;
+
+ if (regno > 255)
+ return 1;
+
+ i->palette[regno].red = red;
+ i->palette[regno].green = green;
+ i->palette[regno].blue = blue;
+
+ switch(p_lcd->bpp) {
+#ifdef FBCON_HAS_CFB8
+ case 8:
+ red >>= 10;
+ green >>= 10;
+ blue >>= 10;
+ p_lcd_reg->lcd_pallettebase[regno] = (blue&0x1f) |
+ ((green&0x3f)<<5) | ((red&0x1f)<<11);
+ break;
+#endif
+#ifdef FBCON_HAS_CFB16
+ case 16:
+ i->fbcon_cmap16[regno] =
+ ((red & 0xf800) >> 0) |
+ ((green & 0xfc00) >> 5) |
+ ((blue & 0xf800) >> 11);
+ break;
+#endif
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+
+static int au1100_blank(int blank_mode, struct fb_info_gen *_info)
+{
+
+ switch (blank_mode) {
+ case VESA_NO_BLANKING:
+ /* turn on panel */
+ //printk("turn on panel\n");
+#ifdef CONFIG_MIPS_PB1100
+ p_lcd_reg->lcd_control |= LCD_CONTROL_GO;
+ au_writew(au_readw(PB1100_G_CONTROL) | p_lcd->mode_backlight,
+ PB1100_G_CONTROL);
+#endif
+#ifdef CONFIG_MIPS_HYDROGEN3
+ /* Turn controller & power supply on, GPIO213 */
+ au_writel(0x20002000, 0xB1700008);
+ au_writel(0x00040000, 0xB1900108);
+ au_writel(0x01000100, 0xB1700008);
+#endif
+ au_sync();
+ break;
+
+ case VESA_VSYNC_SUSPEND:
+ case VESA_HSYNC_SUSPEND:
+ case VESA_POWERDOWN:
+ /* turn off panel */
+ //printk("turn off panel\n");
+#ifdef CONFIG_MIPS_PB1100
+ au_writew(au_readw(PB1100_G_CONTROL) & ~p_lcd->mode_backlight,
+ PB1100_G_CONTROL);
+ p_lcd_reg->lcd_control &= ~LCD_CONTROL_GO;
+#endif
+ au_sync();
+ break;
+ default:
+ break;
+
+ }
+ return 0;
+}
+
+static void au1100_set_disp(const void *unused, struct display *disp,
+ struct fb_info_gen *info)
+{
+ disp->screen_base = (char *)fb_info.fb_virt_start;
+
+ switch (disp->var.bits_per_pixel) {
+#ifdef FBCON_HAS_CFB8
+ case 8:
+ disp->dispsw = &fbcon_cfb8;
+ if (fb_info.nohwcursor)
+ fbcon_cfb8.cursor = au1100_nocursor;
+ break;
+#endif
+#ifdef FBCON_HAS_CFB16
+ case 16:
+ disp->dispsw = &fbcon_cfb16;
+ disp->dispsw_data = fb_info.fbcon_cmap16;
+ if (fb_info.nohwcursor)
+ fbcon_cfb16.cursor = au1100_nocursor;
+ break;
+#endif
+ default:
+ disp->dispsw = &fbcon_dummy;
+ disp->dispsw_data = NULL;
+ break;
+ }
+}
+
+static int
+au1100fb_mmap(struct fb_info *_fb,
+ struct file *file,
+ struct vm_area_struct *vma)
+{
+ unsigned int len;
+ unsigned long start=0, off;
+ struct au1100fb_info *fb = (struct au1100fb_info *)_fb;
+
+ if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
+ return -EINVAL;
+ }
+
+ start = fb_info.fb_phys & PAGE_MASK;
+ len = PAGE_ALIGN((start & ~PAGE_MASK) + fb_info.fb_size);
+
+ off = vma->vm_pgoff << PAGE_SHIFT;
+
+ if ((vma->vm_end - vma->vm_start + off) > len) {
+ return -EINVAL;
+ }
+
+ off += start;
+ vma->vm_pgoff = off >> PAGE_SHIFT;
+
+ pgprot_val(vma->vm_page_prot) &= ~_CACHE_MASK;
+ //pgprot_val(vma->vm_page_prot) |= _CACHE_CACHABLE_NONCOHERENT;
+ pgprot_val(vma->vm_page_prot) |= (6 << 9); //CCA=6
+
+ /* This is an IO map - tell maydump to skip this VMA */
+ vma->vm_flags |= VM_IO;
+
+ if (io_remap_page_range(vma, vma->vm_start, off,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot)) {
+ return -EAGAIN;
+ }
+
+ fb->mmaped = 1;
+ return 0;
+}
+
+int au1100_pan_display(const struct fb_var_screeninfo *var,
+ struct fb_info_gen *info)
+{
+ return 0;
+}
+
+static int au1100fb_ioctl(struct inode *inode, struct file *file, u_int cmd,
+ u_long arg, int con, struct fb_info *info)
+{
+ /* nothing to do yet */
+ return -EINVAL;
+}
+
+static struct fbgen_hwswitch au1100_switch = {
+ au1100_detect,
+ au1100_encode_fix,
+ au1100_decode_var,
+ au1100_encode_var,
+ au1100_get_par,
+ au1100_set_par,
+ au1100_getcolreg,
+ au1100_setcolreg,
+ au1100_pan_display,
+ au1100_blank,
+ au1100_set_disp
+};
+
+
+int au1100_setmode(void)
+{
+ int words;
+
+	/* FIXME Need to accommodate swivel mode and 12bpp, <8bpp */
+ switch (p_lcd->mode_control & LCD_CONTROL_SM)
+ {
+ case LCD_CONTROL_SM_0:
+ case LCD_CONTROL_SM_180:
+ words = (p_lcd->xres * p_lcd->yres * p_lcd->bpp) / 32;
+ break;
+ case LCD_CONTROL_SM_90:
+ case LCD_CONTROL_SM_270:
+ /* is this correct? */
+ words = (p_lcd->xres * p_lcd->bpp) / 8;
+ break;
+ default:
+ printk("mode_control reg not initialized\n");
+ return -EINVAL;
+ }
+
+ /*
+ * Setup LCD controller
+ */
+
+ p_lcd_reg->lcd_control = p_lcd->mode_control;
+ p_lcd_reg->lcd_intstatus = 0;
+ p_lcd_reg->lcd_intenable = 0;
+ p_lcd_reg->lcd_horztiming = p_lcd->mode_horztiming;
+ p_lcd_reg->lcd_verttiming = p_lcd->mode_verttiming;
+ p_lcd_reg->lcd_clkcontrol = p_lcd->mode_clkcontrol;
+ p_lcd_reg->lcd_words = words - 1;
+ p_lcd_reg->lcd_dmaaddr0 = fb_info.fb_phys;
+
+ /* turn on panel */
+#ifdef CONFIG_MIPS_PB1100
+ au_writew(au_readw(PB1100_G_CONTROL) | p_lcd->mode_backlight,
+ PB1100_G_CONTROL);
+#endif
+#ifdef CONFIG_MIPS_HYDROGEN3
+ /* Turn controller & power supply on, GPIO213 */
+ au_writel(0x20002000, 0xB1700008);
+ au_writel(0x00040000, 0xB1900108);
+ au_writel(0x01000100, 0xB1700008);
+#endif
+
+ p_lcd_reg->lcd_control |= LCD_CONTROL_GO;
+
+ return 0;
+}
+
+
+int __init au1100fb_init(void)
+{
+ uint32 sys_clksrc;
+ unsigned long page;
+
+ /*
+ * Get the panel information/display mode and update the registry
+ */
+ p_lcd = &panels[my_lcd_index];
+
+ switch (p_lcd->mode_control & LCD_CONTROL_SM)
+ {
+ case LCD_CONTROL_SM_0:
+ case LCD_CONTROL_SM_180:
+ p_lcd->xres =
+ (p_lcd->mode_horztiming & LCD_HORZTIMING_PPL) + 1;
+ p_lcd->yres =
+ (p_lcd->mode_verttiming & LCD_VERTTIMING_LPP) + 1;
+ break;
+ case LCD_CONTROL_SM_90:
+ case LCD_CONTROL_SM_270:
+ p_lcd->yres =
+ (p_lcd->mode_horztiming & LCD_HORZTIMING_PPL) + 1;
+ p_lcd->xres =
+ (p_lcd->mode_verttiming & LCD_VERTTIMING_LPP) + 1;
+ break;
+ }
+
+ /*
+ * Panel dimensions x bpp must be divisible by 32
+ */
+ if (((p_lcd->yres * p_lcd->bpp) % 32) != 0)
+ printk("VERT %% 32\n");
+ if (((p_lcd->xres * p_lcd->bpp) % 32) != 0)
+ printk("HORZ %% 32\n");
+
+ /*
+ * Allocate LCD framebuffer from system memory
+ */
+ fb_info.fb_size = (p_lcd->xres * p_lcd->yres * p_lcd->bpp) / 8;
+
+ current_par.var.xres = p_lcd->xres;
+ current_par.var.xres_virtual = p_lcd->xres;
+ current_par.var.yres = p_lcd->yres;
+ current_par.var.yres_virtual = p_lcd->yres;
+ current_par.var.bits_per_pixel = p_lcd->bpp;
+
+ /* FIX!!! only works for 8/16 bpp */
+ current_par.line_length = p_lcd->xres * p_lcd->bpp / 8; /* in bytes */
+ fb_info.fb_virt_start = (unsigned long )
+ __get_free_pages(GFP_ATOMIC | GFP_DMA,
+ get_order(fb_info.fb_size + 0x1000));
+ if (!fb_info.fb_virt_start) {
+ printk("Unable to allocate fb memory\n");
+ return -ENOMEM;
+ }
+ fb_info.fb_phys = virt_to_bus((void *)fb_info.fb_virt_start);
+
+ /*
+ * Set page reserved so that mmap will work. This is necessary
+ * since we'll be remapping normal memory.
+ */
+ for (page = fb_info.fb_virt_start;
+ page < PAGE_ALIGN(fb_info.fb_virt_start + fb_info.fb_size);
+ page += PAGE_SIZE) {
+ SetPageReserved(virt_to_page(page));
+ }
+
+ memset((void *)fb_info.fb_virt_start, 0, fb_info.fb_size);
+
+ /* set freqctrl now to allow more time to stabilize */
+	/* zero out the LCD bits */
+ sys_clksrc = au_readl(SYS_CLKSRC) & ~0x000003e0;
+ sys_clksrc |= p_lcd->mode_toyclksrc;
+ au_writel(sys_clksrc, SYS_CLKSRC);
+
+ /* FIXME add check to make sure auxpll is what is expected! */
+ au1100_setmode();
+
+ fb_info.gen.parsize = sizeof(struct au1100fb_par);
+ fb_info.gen.fbhw = &au1100_switch;
+
+ strcpy(fb_info.gen.info.modename, "Au1100 LCD");
+ fb_info.gen.info.changevar = NULL;
+ fb_info.gen.info.node = -1;
+
+ fb_info.gen.info.fbops = &au1100fb_ops;
+ fb_info.gen.info.disp = &disp;
+ fb_info.gen.info.switch_con = &fbgen_switch;
+ fb_info.gen.info.updatevar = &fbgen_update_var;
+ fb_info.gen.info.blank = &fbgen_blank;
+ fb_info.gen.info.flags = FBINFO_FLAG_DEFAULT;
+
+ /* This should give a reasonable default video mode */
+ fbgen_get_var(&disp.var, -1, &fb_info.gen.info);
+ fbgen_do_set_var(&disp.var, 1, &fb_info.gen);
+ fbgen_set_disp(-1, &fb_info.gen);
+ fbgen_install_cmap(0, &fb_info.gen);
+ if (register_framebuffer(&fb_info.gen.info) < 0)
+ return -EINVAL;
+ printk(KERN_INFO "fb%d: %s frame buffer device\n",
+ GET_FB_IDX(fb_info.gen.info.node),
+ fb_info.gen.info.modename);
+
+ return 0;
+}
+
+
+void au1100fb_cleanup(struct fb_info *info)
+{
+ unregister_framebuffer(info);
+}
+
+
+void au1100fb_setup(char *options, int *ints)
+{
+ char* this_opt;
+ int i;
+ int num_panels = sizeof(panels)/sizeof(struct known_lcd_panels);
+
+
+ if (!options || !*options)
+ return;
+
+ for(this_opt=strtok(options, ","); this_opt;
+ this_opt=strtok(NULL, ",")) {
+ if (!strncmp(this_opt, "panel:", 6)) {
+#if defined(CONFIG_MIPS_PB1100) || defined(CONFIG_MIPS_DB1100)
+ /* Read Pb1100 Switch S10 ? */
+ if (!strncmp(this_opt+6, "s10", 3))
+ {
+ int panel;
+ panel = *(volatile int *)0xAE000008; /* BCSR SWITCHES */
+ panel >>= 8;
+ panel &= 0x0F;
+ if (panel >= num_panels) panel = 0;
+ my_lcd_index = panel;
+ }
+ else
+#endif
+			/* Get the panel name, everything else is fixed */
+ for (i=0; i<num_panels; i++) {
+ if (!strncmp(this_opt+6, panels[i].panel_name,
+ strlen(this_opt))) {
+ my_lcd_index = i;
+ break;
+ }
+ }
+ }
+ else if (!strncmp(this_opt, "nohwcursor", 10)) {
+ printk("nohwcursor\n");
+ fb_info.nohwcursor = 1;
+ }
+ }
+
+ printk("au1100fb: Panel %d %s\n", my_lcd_index,
+ panels[my_lcd_index].panel_name);
+}
+
+
+
+#ifdef MODULE
+MODULE_LICENSE("GPL");
+int init_module(void)
+{
+ return au1100fb_init();
+}
+
+void cleanup_module(void)
+{
+	au1100fb_cleanup(&fb_info.gen.info);
+}
+
+MODULE_AUTHOR("Pete Popov <ppopov@mvista.com>");
+MODULE_DESCRIPTION("Au1100 LCD framebuffer device driver");
+#endif /* MODULE */
--- /dev/null
+/*
+ * BRIEF MODULE DESCRIPTION
+ * Hardware definitions for the Au1100 LCD controller
+ *
+ * Copyright 2002 MontaVista Software
+ * Copyright 2002 Alchemy Semiconductor
+ * Author: Alchemy Semiconductor, MontaVista Software
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _AU1100LCD_H
+#define _AU1100LCD_H
+
+/********************************************************************/
+#define uint32 unsigned long
+typedef volatile struct
+{
+ uint32 lcd_control;
+ uint32 lcd_intstatus;
+ uint32 lcd_intenable;
+ uint32 lcd_horztiming;
+ uint32 lcd_verttiming;
+ uint32 lcd_clkcontrol;
+ uint32 lcd_dmaaddr0;
+ uint32 lcd_dmaaddr1;
+ uint32 lcd_words;
+ uint32 lcd_pwmdiv;
+ uint32 lcd_pwmhi;
+ uint32 reserved[(0x0400-0x002C)/4];
+ uint32 lcd_pallettebase[256];
+
+} AU1100_LCD;
+
+/********************************************************************/
+
+#define AU1100_LCD_ADDR 0xB5000000
+
+/*
+ * Register bit definitions
+ */
+
+/* lcd_control */
+#define LCD_CONTROL_SBPPF (7<<18)
+#define LCD_CONTROL_SBPPF_655 (0<<18)
+#define LCD_CONTROL_SBPPF_565 (1<<18)
+#define LCD_CONTROL_SBPPF_556 (2<<18)
+#define LCD_CONTROL_SBPPF_1555 (3<<18)
+#define LCD_CONTROL_SBPPF_5551 (4<<18)
+#define LCD_CONTROL_WP (1<<17)
+#define LCD_CONTROL_WD (1<<16)
+#define LCD_CONTROL_C (1<<15)
+#define LCD_CONTROL_SM (3<<13)
+#define LCD_CONTROL_SM_0 (0<<13)
+#define LCD_CONTROL_SM_90 (1<<13)
+#define LCD_CONTROL_SM_180 (2<<13)
+#define LCD_CONTROL_SM_270 (3<<13)
+#define LCD_CONTROL_DB (1<<12)
+#define LCD_CONTROL_CCO (1<<11)
+#define LCD_CONTROL_DP (1<<10)
+#define LCD_CONTROL_PO (3<<8)
+#define LCD_CONTROL_PO_00 (0<<8)
+#define LCD_CONTROL_PO_01 (1<<8)
+#define LCD_CONTROL_PO_10 (2<<8)
+#define LCD_CONTROL_PO_11 (3<<8)
+#define LCD_CONTROL_MPI (1<<7)
+#define LCD_CONTROL_PT (1<<6)
+#define LCD_CONTROL_PC (1<<5)
+#define LCD_CONTROL_BPP (7<<1)
+#define LCD_CONTROL_BPP_1 (0<<1)
+#define LCD_CONTROL_BPP_2 (1<<1)
+#define LCD_CONTROL_BPP_4 (2<<1)
+#define LCD_CONTROL_BPP_8 (3<<1)
+#define LCD_CONTROL_BPP_12 (4<<1)
+#define LCD_CONTROL_BPP_16 (5<<1)
+#define LCD_CONTROL_GO (1<<0)
+
+/* lcd_intstatus, lcd_intenable */
+#define LCD_INT_SD (1<<7)
+#define LCD_INT_OF (1<<6)
+#define LCD_INT_UF (1<<5)
+#define LCD_INT_SA (1<<3)
+#define LCD_INT_SS (1<<2)
+#define LCD_INT_S1 (1<<1)
+#define LCD_INT_S0 (1<<0)
+
+/* lcd_horztiming */
+#define LCD_HORZTIMING_HN2 (255<<24)
+#define LCD_HORZTIMING_HN2_N(N) (((N)-1)<<24)
+#define LCD_HORZTIMING_HN1 (255<<16)
+#define LCD_HORZTIMING_HN1_N(N) (((N)-1)<<16)
+#define LCD_HORZTIMING_HPW (63<<10)
+#define LCD_HORZTIMING_HPW_N(N) (((N)-1)<<10)
+#define LCD_HORZTIMING_PPL (1023<<0)
+#define LCD_HORZTIMING_PPL_N(N) (((N)-1)<<0)
+
+/* lcd_verttiming */
+#define LCD_VERTTIMING_VN2 (255<<24)
+#define LCD_VERTTIMING_VN2_N(N) (((N)-1)<<24)
+#define LCD_VERTTIMING_VN1 (255<<16)
+#define LCD_VERTTIMING_VN1_N(N) (((N)-1)<<16)
+#define LCD_VERTTIMING_VPW (63<<10)
+#define LCD_VERTTIMING_VPW_N(N) (((N)-1)<<10)
+#define LCD_VERTTIMING_LPP (1023<<0)
+#define LCD_VERTTIMING_LPP_N(N) (((N)-1)<<0)
+
+/* lcd_clkcontrol */
+#define LCD_CLKCONTROL_IB (1<<18)
+#define LCD_CLKCONTROL_IC (1<<17)
+#define LCD_CLKCONTROL_IH (1<<16)
+#define LCD_CLKCONTROL_IV (1<<15)
+#define LCD_CLKCONTROL_BF (31<<10)
+#define LCD_CLKCONTROL_BF_N(N) (((N)-1)<<10)
+#define LCD_CLKCONTROL_PCD (1023<<0)
+#define LCD_CLKCONTROL_PCD_N(N) ((N)<<0)
+
+/* lcd_pwmdiv */
+#define LCD_PWMDIV_EN (1<<12)
+#define LCD_PWMDIV_PWMDIV (2047<<0)
+#define LCD_PWMDIV_PWMDIV_N(N) (((N)-1)<<0)
+
+/* lcd_pwmhi */
+#define LCD_PWMHI_PWMHI1 (2047<<12)
+#define LCD_PWMHI_PWMHI1_N(N) ((N)<<12)
+#define LCD_PWMHI_PWMHI0 (2047<<0)
+#define LCD_PWMHI_PWMHI0_N(N) ((N)<<0)
+
+/* lcd_pallettebase - MONOCHROME */
+#define LCD_PALLETTE_MONO_MI (15<<0)
+#define LCD_PALLETTE_MONO_MI_N(N) ((N)<<0)
+
+/* lcd_pallettebase - COLOR */
+#define LCD_PALLETTE_COLOR_BI (15<<8)
+#define LCD_PALLETTE_COLOR_BI_N(N) ((N)<<8)
+#define LCD_PALLETTE_COLOR_GI (15<<4)
+#define LCD_PALLETTE_COLOR_GI_N(N) ((N)<<4)
+#define LCD_PALLETTE_COLOR_RI (15<<0)
+#define LCD_PALLETTE_COLOR_RI_N(N) ((N)<<0)
+
+/* lcd_pallettebase - COLOR TFT PALLETIZED */
+#define LCD_PALLETTE_TFT_DC (65535<<0)
+#define LCD_PALLETTE_TFT_DC_N(N) ((N)<<0)
+
+/********************************************************************/
+
+struct known_lcd_panels
+{
+ uint32 xres;
+ uint32 yres;
+ uint32 bpp;
+ unsigned char panel_name[256];
+ uint32 mode_control;
+ uint32 mode_horztiming;
+ uint32 mode_verttiming;
+ uint32 mode_clkcontrol;
+ uint32 mode_pwmdiv;
+ uint32 mode_pwmhi;
+ uint32 mode_toyclksrc;
+ uint32 mode_backlight;
+
+};
+
+#if defined(__BIG_ENDIAN)
+#define LCD_DEFAULT_PIX_FORMAT LCD_CONTROL_PO_11
+#else
+#define LCD_DEFAULT_PIX_FORMAT LCD_CONTROL_PO_00
+#endif
+
+/*
+ * The fb driver assumes that AUX PLL is at 48MHz. That can
+ * cover up to 800x600 resolution; if you need higher resolution,
+ * you should modify the driver as needed, not just this structure.
+ */
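+/*
+ * As a rough sanity check (assuming ordinary 60 Hz VESA timings): 800x600
+ * needs a pixel clock of about 40 MHz, within reach of a 48 MHz source,
+ * while 1024x768 already needs roughly 65 MHz.
+ */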
+struct known_lcd_panels panels[] =
+{
+ { /* 0: Pb1100 LCDA: Sharp 320x240 TFT panel */
+ 320, /* xres */
+ 240, /* yres */
+ 16, /* bpp */
+
+ "Sharp_320x240_16",
+ /* mode_control */
+ ( LCD_CONTROL_SBPPF_565
+ /*LCD_CONTROL_WP*/
+ /*LCD_CONTROL_WD*/
+ | LCD_CONTROL_C
+ | LCD_CONTROL_SM_0
+ /*LCD_CONTROL_DB*/
+ /*LCD_CONTROL_CCO*/
+ /*LCD_CONTROL_DP*/
+ | LCD_DEFAULT_PIX_FORMAT
+ /*LCD_CONTROL_MPI*/
+ | LCD_CONTROL_PT
+ | LCD_CONTROL_PC
+ | LCD_CONTROL_BPP_16 ),
+
+ /* mode_horztiming */
+ ( LCD_HORZTIMING_HN2_N(8)
+ | LCD_HORZTIMING_HN1_N(60)
+ | LCD_HORZTIMING_HPW_N(12)
+ | LCD_HORZTIMING_PPL_N(320) ),
+
+ /* mode_verttiming */
+ ( LCD_VERTTIMING_VN2_N(5)
+ | LCD_VERTTIMING_VN1_N(17)
+ | LCD_VERTTIMING_VPW_N(1)
+ | LCD_VERTTIMING_LPP_N(240) ),
+
+ /* mode_clkcontrol */
+ ( 0
+ /*LCD_CLKCONTROL_IB*/
+ /*LCD_CLKCONTROL_IC*/
+ /*LCD_CLKCONTROL_IH*/
+ /*LCD_CLKCONTROL_IV*/
+ | LCD_CLKCONTROL_PCD_N(1) ),
+
+ /* mode_pwmdiv */
+ 0,
+
+ /* mode_pwmhi */
+ 0,
+
+ /* mode_toyclksrc */
+ ((1<<7) | (1<<6) | (1<<5)),
+
+ /* mode_backlight */
+ 6
+ },
+
+ { /* 1: Pb1100 LCDC 640x480 TFT panel */
+ 640, /* xres */
+ 480, /* yres */
+ 16, /* bpp */
+
+ "Generic_640x480_16",
+
+ /* mode_control */
+ 0x004806a | LCD_DEFAULT_PIX_FORMAT,
+
+ /* mode_horztiming */
+ 0x3434d67f,
+
+ /* mode_verttiming */
+ 0x0e0e39df,
+
+ /* mode_clkcontrol */
+ ( 0
+ /*LCD_CLKCONTROL_IB*/
+ /*LCD_CLKCONTROL_IC*/
+ /*LCD_CLKCONTROL_IH*/
+ /*LCD_CLKCONTROL_IV*/
+ | LCD_CLKCONTROL_PCD_N(1) ),
+
+ /* mode_pwmdiv */
+ 0,
+
+ /* mode_pwmhi */
+ 0,
+
+ /* mode_toyclksrc */
+ ((1<<7) | (1<<6) | (0<<5)),
+
+ /* mode_backlight */
+ 7
+ },
+
+ { /* 2: Pb1100 LCDB 640x480 PrimeView TFT panel */
+ 640, /* xres */
+ 480, /* yres */
+ 16, /* bpp */
+
+ "PrimeView_640x480_16",
+
+ /* mode_control */
+ 0x0004886a | LCD_DEFAULT_PIX_FORMAT,
+
+ /* mode_horztiming */
+ 0x0e4bfe7f,
+
+ /* mode_verttiming */
+ 0x210805df,
+
+ /* mode_clkcontrol */
+ 0x00038001,
+
+ /* mode_pwmdiv */
+ 0,
+
+ /* mode_pwmhi */
+ 0,
+
+ /* mode_toyclksrc */
+ ((1<<7) | (1<<6) | (0<<5)),
+
+ /* mode_backlight */
+ 7
+ },
+
+ { /* 3: Pb1100 800x600x16bpp NEON CRT */
+ 800, /* xres */
+ 600, /* yres */
+ 16, /* bpp */
+
+ "NEON_800x600_16",
+
+ /* mode_control */
+ 0x0004886A | LCD_DEFAULT_PIX_FORMAT,
+
+ /* mode_horztiming */
+ 0x005AFF1F,
+
+ /* mode_verttiming */
+ 0x16000E57,
+
+ /* mode_clkcontrol */
+ 0x00020000,
+
+ /* mode_pwmdiv */
+ 0,
+
+ /* mode_pwmhi */
+ 0,
+
+ /* mode_toyclksrc */
+ ((1<<7) | (1<<6) | (0<<5)),
+
+ /* mode_backlight */
+ 7
+ },
+
+ { /* 4: Pb1100 640x480x16bpp NEON CRT */
+ 640, /* xres */
+ 480, /* yres */
+ 16, /* bpp */
+
+ "NEON_640x480_16",
+
+ /* mode_control */
+ 0x0004886A | LCD_DEFAULT_PIX_FORMAT,
+
+ /* mode_horztiming */
+ 0x0052E27F,
+
+ /* mode_verttiming */
+ 0x18000DDF,
+
+ /* mode_clkcontrol */
+ 0x00020000,
+
+ /* mode_pwmdiv */
+ 0,
+
+ /* mode_pwmhi */
+ 0,
+
+ /* mode_toyclksrc */
+ ((1<<7) | (1<<6) | (0<<5)),
+
+ /* mode_backlight */
+ 7
+ },
+};
+#endif /* _AU1100LCD_H */
--- /dev/null
+#
+# Backlight & LCD drivers configuration
+#
+
+menuconfig BACKLIGHT_LCD_SUPPORT
+ bool "Backlight & LCD device support"
+ help
+ Enable this to be able to choose the drivers for controlling the
+ backlight and the LCD panel on some platforms, for example on PDAs.
+
+config BACKLIGHT_CLASS_DEVICE
+ tristate "Lowlevel Backlight controls"
+ depends on BACKLIGHT_LCD_SUPPORT
+ default m
+ help
+ This framework adds support for low-level control of the LCD
+ backlight. This includes support for brightness and power.
+
+ To have support for your specific LCD panel you will have to
+ select the proper drivers which depend on this option.
+
+config BACKLIGHT_DEVICE
+ bool
+ depends on BACKLIGHT_CLASS_DEVICE
+ default y
+
+config LCD_CLASS_DEVICE
+ tristate "Lowlevel LCD controls"
+ depends on BACKLIGHT_LCD_SUPPORT
+ default m
+ help
+	  This framework adds support for low-level control of the LCD.
+	  Some framebuffer devices connect to platform-specific LCD modules
+	  in order to have a platform-specific way to control the flat panel:
+	  contrast and applying power to the LCD itself (not to the backlight).
+
+ To have support for your specific LCD panel you will have to
+ select the proper drivers which depend on this option.
+
+config LCD_DEVICE
+ bool
+ depends on LCD_CLASS_DEVICE
+ default y
+
+config BACKLIGHT_CORGI
+ tristate "Sharp Corgi Backlight Driver (SL-C7xx Series)"
+ depends on BACKLIGHT_DEVICE && PXA_SHARPSL
+ default y
+ help
+ If you have a Sharp Zaurus SL-C7xx, say y to enable the
+ backlight driver.
+
--- /dev/null
+# Backlight & LCD drivers
+
+obj-$(CONFIG_LCD_CLASS_DEVICE) += lcd.o
+obj-$(CONFIG_BACKLIGHT_CLASS_DEVICE) += backlight.o
+obj-$(CONFIG_BACKLIGHT_CORGI) += corgi_bl.o
--- /dev/null
+/*
+ * Backlight Lowlevel Control Abstraction
+ *
+ * Copyright (C) 2003,2004 Hewlett-Packard Company
+ *
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/backlight.h>
+#include <linux/notifier.h>
+#include <linux/ctype.h>
+#include <linux/err.h>
+#include <linux/fb.h>
+#include <asm/bug.h>
+
+static ssize_t backlight_show_power(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct backlight_device *bd = to_backlight_device(cdev);
+
+ down(&bd->sem);
+ if (likely(bd->props && bd->props->get_power))
+ rc = sprintf(buf, "%d\n", bd->props->get_power(bd));
+ else
+ rc = -ENXIO;
+ up(&bd->sem);
+
+ return rc;
+}
+
+static ssize_t backlight_store_power(struct class_device *cdev, const char *buf, size_t count)
+{
+ int rc, power;
+ char *endp;
+ struct backlight_device *bd = to_backlight_device(cdev);
+
+ power = simple_strtoul(buf, &endp, 0);
+ if (*endp && !isspace(*endp))
+ return -EINVAL;
+
+ down(&bd->sem);
+ if (likely(bd->props && bd->props->set_power)) {
+ pr_debug("backlight: set power to %d\n", power);
+ bd->props->set_power(bd, power);
+ rc = count;
+ } else
+ rc = -ENXIO;
+ up(&bd->sem);
+
+ return rc;
+}
+
+static ssize_t backlight_show_brightness(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct backlight_device *bd = to_backlight_device(cdev);
+
+ down(&bd->sem);
+ if (likely(bd->props && bd->props->get_brightness))
+ rc = sprintf(buf, "%d\n", bd->props->get_brightness(bd));
+ else
+ rc = -ENXIO;
+ up(&bd->sem);
+
+ return rc;
+}
+
+static ssize_t backlight_store_brightness(struct class_device *cdev, const char *buf, size_t count)
+{
+ int rc, brightness;
+ char *endp;
+ struct backlight_device *bd = to_backlight_device(cdev);
+
+ brightness = simple_strtoul(buf, &endp, 0);
+ if (*endp && !isspace(*endp))
+ return -EINVAL;
+
+ down(&bd->sem);
+ if (likely(bd->props && bd->props->set_brightness)) {
+ pr_debug("backlight: set brightness to %d\n", brightness);
+ bd->props->set_brightness(bd, brightness);
+ rc = count;
+ } else
+ rc = -ENXIO;
+ up(&bd->sem);
+
+ return rc;
+}
+
+static ssize_t backlight_show_max_brightness(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct backlight_device *bd = to_backlight_device(cdev);
+
+ down(&bd->sem);
+ if (likely(bd->props))
+ rc = sprintf(buf, "%d\n", bd->props->max_brightness);
+ else
+ rc = -ENXIO;
+ up(&bd->sem);
+
+ return rc;
+}
+
+static void backlight_class_release(struct class_device *dev)
+{
+ struct backlight_device *bd = to_backlight_device(dev);
+ kfree(bd);
+}
+
+struct class backlight_class = {
+ .name = "backlight",
+ .release = backlight_class_release,
+};
+
+#define DECLARE_ATTR(_name,_mode,_show,_store) \
+{ \
+ .attr = { .name = __stringify(_name), .mode = _mode, .owner = THIS_MODULE }, \
+ .show = _show, \
+ .store = _store, \
+}
+
+static struct class_device_attribute bl_class_device_attributes[] = {
+ DECLARE_ATTR(power, 0644, backlight_show_power, backlight_store_power),
+ DECLARE_ATTR(brightness, 0644, backlight_show_brightness, backlight_store_brightness),
+ DECLARE_ATTR(max_brightness, 0444, backlight_show_max_brightness, NULL),
+};
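+
+/* Once a device is registered, these appear as the power, brightness and
+ * max_brightness files under /sys/class/backlight/<name>/. */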
+
+/* This callback gets called when something important happens inside a
+ * framebuffer driver. We check whether that event is a blanking event and,
+ * if it is, switch the backlight power accordingly.
+ */
+static int fb_notifier_callback(struct notifier_block *self,
+ unsigned long event, void *data)
+{
+ struct backlight_device *bd;
+ struct fb_event *evdata =(struct fb_event *)data;
+
+ /* If we aren't interested in this event, skip it immediately ... */
+ if (event != FB_EVENT_BLANK)
+ return 0;
+
+ bd = container_of(self, struct backlight_device, fb_notif);
+ down(&bd->sem);
+ if (bd->props)
+ if (!bd->props->check_fb || bd->props->check_fb(evdata->info))
+ bd->props->set_power(bd, *(int *)evdata->data);
+ up(&bd->sem);
+ return 0;
+}
+
+/**
+ * backlight_device_register - create and register a new object of
+ * backlight_device class.
+ * @name: the name of the new object (must be the same as the name of the
+ * respective framebuffer device).
+ * @devdata: an optional pointer to be stored in the class_device. The
+ * methods may retrieve it by using class_get_devdata(&bd->class_dev).
+ * @bp: the backlight properties structure.
+ *
+ * Creates and registers new backlight class_device. Returns either an
+ * ERR_PTR() or a pointer to the newly allocated device.
+ */
+struct backlight_device *backlight_device_register(const char *name, void *devdata,
+ struct backlight_properties *bp)
+{
+ int i, rc;
+ struct backlight_device *new_bd;
+
+ pr_debug("backlight_device_alloc: name=%s\n", name);
+
+ new_bd = kmalloc(sizeof(struct backlight_device), GFP_KERNEL);
+ if (unlikely(!new_bd))
+		return ERR_PTR(-ENOMEM);
+
+ init_MUTEX(&new_bd->sem);
+ new_bd->props = bp;
+ memset(&new_bd->class_dev, 0, sizeof(new_bd->class_dev));
+ new_bd->class_dev.class = &backlight_class;
+ strlcpy(new_bd->class_dev.class_id, name, KOBJ_NAME_LEN);
+ class_set_devdata(&new_bd->class_dev, devdata);
+
+ rc = class_device_register(&new_bd->class_dev);
+ if (unlikely(rc)) {
+error: kfree(new_bd);
+ return ERR_PTR(rc);
+ }
+
+ memset(&new_bd->fb_notif, 0, sizeof(new_bd->fb_notif));
+ new_bd->fb_notif.notifier_call = fb_notifier_callback;
+
+ rc = fb_register_client(&new_bd->fb_notif);
+ if (unlikely(rc))
+ goto error;
+
+ for (i = 0; i < ARRAY_SIZE(bl_class_device_attributes); i++) {
+ rc = class_device_create_file(&new_bd->class_dev,
+ &bl_class_device_attributes[i]);
+ if (unlikely(rc)) {
+ while (--i >= 0)
+ class_device_remove_file(&new_bd->class_dev,
+ &bl_class_device_attributes[i]);
+ class_device_unregister(&new_bd->class_dev);
+ /* No need to kfree(new_bd) since release() method was called */
+ return ERR_PTR(rc);
+ }
+ }
+
+ return new_bd;
+}
+EXPORT_SYMBOL(backlight_device_register);
+
+/**
+ * backlight_device_unregister - unregisters a backlight device object.
+ * @bd: the backlight device object to be unregistered and freed.
+ *
+ * Unregisters an object previously registered via backlight_device_register.
+ */
+void backlight_device_unregister(struct backlight_device *bd)
+{
+ int i;
+
+ if (!bd)
+ return;
+
+ pr_debug("backlight_device_unregister: name=%s\n", bd->class_dev.class_id);
+
+ for (i = 0; i < ARRAY_SIZE(bl_class_device_attributes); i++)
+ class_device_remove_file(&bd->class_dev,
+ &bl_class_device_attributes[i]);
+
+ down(&bd->sem);
+ bd->props = NULL;
+ up(&bd->sem);
+
+ fb_unregister_client(&bd->fb_notif);
+
+ class_device_unregister(&bd->class_dev);
+}
+EXPORT_SYMBOL(backlight_device_unregister);
+
+static void __exit backlight_class_exit(void)
+{
+ class_unregister(&backlight_class);
+}
+
+static int __init backlight_class_init(void)
+{
+ return class_register(&backlight_class);
+}
+
+/*
+ * if this is compiled into the kernel, we need to ensure that the
+ * class is registered before users of the class try to register lcd's
+ */
+postcore_initcall(backlight_class_init);
+module_exit(backlight_class_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Jamey Hicks <jamey.hicks@hp.com>, Andrew Zabolotny <zap@homelink.ru>");
+MODULE_DESCRIPTION("Backlight Lowlevel Control Abstraction");
--- /dev/null
+/*
+ * Backlight Driver for Sharp Corgi
+ *
+ * Copyright (c) 2004-2005 Richard Purdie
+ *
+ * Based on Sharp's 2.4 Backlight Driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/fb.h>
+#include <linux/backlight.h>
+
+#include <asm/arch-pxa/corgi.h>
+#include <asm/hardware/scoop.h>
+
+#define CORGI_MAX_INTENSITY 0x3e
+#define CORGI_DEFAULT_INTENSITY 0x1f
+#define CORGI_LIMIT_MASK 0x0b
+
+static int corgibl_powermode = FB_BLANK_UNBLANK;
+static int current_intensity = 0;
+static int corgibl_limit = 0;
+static spinlock_t bl_lock = SPIN_LOCK_UNLOCKED;
+
+static void corgibl_send_intensity(int intensity)
+{
+ unsigned long flags;
+ void (*corgi_kick_batt)(void);
+
+ if (corgibl_powermode != FB_BLANK_UNBLANK) {
+ intensity = 0;
+ } else {
+ if (corgibl_limit)
+ intensity &= CORGI_LIMIT_MASK;
+ }
+
+ /* Skip 0x20 as it will blank the display */
+ if (intensity >= 0x20)
+ intensity++;
+
+ spin_lock_irqsave(&bl_lock, flags);
+ /* Bits 0-4 are accessed via the SSP interface */
+ corgi_ssp_blduty_set(intensity & 0x1f);
+ /* Bit 5 is via SCOOP */
+ if (intensity & 0x0020)
+ set_scoop_gpio(CORGI_SCP_BACKLIGHT_CONT);
+ else
+ reset_scoop_gpio(CORGI_SCP_BACKLIGHT_CONT);
+ spin_unlock_irqrestore(&bl_lock, flags);
+}
+
+static void corgibl_blank(int blank)
+{
+ switch(blank) {
+
+ case FB_BLANK_NORMAL:
+ case FB_BLANK_VSYNC_SUSPEND:
+ case FB_BLANK_HSYNC_SUSPEND:
+ case FB_BLANK_POWERDOWN:
+ if (corgibl_powermode == FB_BLANK_UNBLANK) {
+ corgibl_send_intensity(0);
+ corgibl_powermode = blank;
+ }
+ break;
+ case FB_BLANK_UNBLANK:
+ if (corgibl_powermode != FB_BLANK_UNBLANK) {
+ corgibl_powermode = blank;
+ corgibl_send_intensity(current_intensity);
+ }
+ break;
+ }
+}
+
+#ifdef CONFIG_PM
+static int corgibl_suspend(struct device *dev, u32 state, u32 level)
+{
+ if (level == SUSPEND_POWER_DOWN)
+ corgibl_blank(FB_BLANK_POWERDOWN);
+ return 0;
+}
+
+static int corgibl_resume(struct device *dev, u32 level)
+{
+ if (level == RESUME_POWER_ON)
+ corgibl_blank(FB_BLANK_UNBLANK);
+ return 0;
+}
+#else
+#define corgibl_suspend NULL
+#define corgibl_resume NULL
+#endif
+
+
+static int corgibl_set_power(struct backlight_device *bd, int state)
+{
+ corgibl_blank(state);
+ return 0;
+}
+
+static int corgibl_get_power(struct backlight_device *bd)
+{
+ return corgibl_powermode;
+}
+
+static int corgibl_set_intensity(struct backlight_device *bd, int intensity)
+{
+ if (intensity > CORGI_MAX_INTENSITY)
+ intensity = CORGI_MAX_INTENSITY;
+ corgibl_send_intensity(intensity);
+ current_intensity=intensity;
+ return 0;
+}
+
+static int corgibl_get_intensity(struct backlight_device *bd)
+{
+ return current_intensity;
+}
+
+/*
+ * Called when the battery is low to limit the backlight intensity.
+ * If limit==0 clear any limit, otherwise limit the intensity
+ */
+void corgibl_limit_intensity(int limit)
+{
+ corgibl_limit = (limit ? 1 : 0);
+ corgibl_send_intensity(current_intensity);
+}
+EXPORT_SYMBOL(corgibl_limit_intensity);
+
+
+static struct backlight_properties corgibl_data = {
+ .owner = THIS_MODULE,
+ .get_power = corgibl_get_power,
+ .set_power = corgibl_set_power,
+ .max_brightness = CORGI_MAX_INTENSITY,
+ .get_brightness = corgibl_get_intensity,
+ .set_brightness = corgibl_set_intensity,
+};
+
+static struct backlight_device *corgi_backlight_device;
+
+static int __init corgibl_probe(struct device *dev)
+{
+ corgi_backlight_device = backlight_device_register ("corgi-bl",
+ NULL, &corgibl_data);
+ if (IS_ERR (corgi_backlight_device))
+ return PTR_ERR (corgi_backlight_device);
+
+ corgibl_set_intensity(NULL, CORGI_DEFAULT_INTENSITY);
+
+ printk("Corgi Backlight Driver Initialized.\n");
+ return 0;
+}
+
+static int corgibl_remove(struct device *dev)
+{
+ backlight_device_unregister(corgi_backlight_device);
+
+ corgibl_set_intensity(NULL, 0);
+
+ printk("Corgi Backlight Driver Unloaded\n");
+ return 0;
+}
+
+static struct device_driver corgibl_driver = {
+ .name = "corgi-bl",
+ .bus = &platform_bus_type,
+ .probe = corgibl_probe,
+ .remove = corgibl_remove,
+ .suspend = corgibl_suspend,
+ .resume = corgibl_resume,
+};
+
+static int __init corgibl_init(void)
+{
+ return driver_register(&corgibl_driver);
+}
+
+static void __exit corgibl_exit(void)
+{
+ driver_unregister(&corgibl_driver);
+}
+
+module_init(corgibl_init);
+module_exit(corgibl_exit);
+
+MODULE_AUTHOR("Richard Purdie <rpurdie@rpsys.net>");
+MODULE_DESCRIPTION("Corgi Backlight Driver");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+/*
+ * LCD Lowlevel Control Abstraction
+ *
+ * Copyright (C) 2003,2004 Hewlett-Packard Company
+ *
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/lcd.h>
+#include <linux/notifier.h>
+#include <linux/ctype.h>
+#include <linux/err.h>
+#include <linux/fb.h>
+#include <asm/bug.h>
+
+static ssize_t lcd_show_power(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct lcd_device *ld = to_lcd_device(cdev);
+
+ down(&ld->sem);
+ if (likely(ld->props && ld->props->get_power))
+ rc = sprintf(buf, "%d\n", ld->props->get_power(ld));
+ else
+ rc = -ENXIO;
+ up(&ld->sem);
+
+ return rc;
+}
+
+static ssize_t lcd_store_power(struct class_device *cdev, const char *buf, size_t count)
+{
+ int rc, power;
+ char *endp;
+ struct lcd_device *ld = to_lcd_device(cdev);
+
+ power = simple_strtoul(buf, &endp, 0);
+ if (*endp && !isspace(*endp))
+ return -EINVAL;
+
+ down(&ld->sem);
+ if (likely(ld->props && ld->props->set_power)) {
+ pr_debug("lcd: set power to %d\n", power);
+ ld->props->set_power(ld, power);
+ rc = count;
+ } else
+ rc = -ENXIO;
+ up(&ld->sem);
+
+ return rc;
+}
+
+static ssize_t lcd_show_contrast(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct lcd_device *ld = to_lcd_device(cdev);
+
+ down(&ld->sem);
+ if (likely(ld->props && ld->props->get_contrast))
+ rc = sprintf(buf, "%d\n", ld->props->get_contrast(ld));
+ else
+ rc = -ENXIO;
+ up(&ld->sem);
+
+ return rc;
+}
+
+static ssize_t lcd_store_contrast(struct class_device *cdev, const char *buf, size_t count)
+{
+ int rc, contrast;
+ char *endp;
+ struct lcd_device *ld = to_lcd_device(cdev);
+
+ contrast = simple_strtoul(buf, &endp, 0);
+ if (*endp && !isspace(*endp))
+ return -EINVAL;
+
+ down(&ld->sem);
+ if (likely(ld->props && ld->props->set_contrast)) {
+ pr_debug("lcd: set contrast to %d\n", contrast);
+ ld->props->set_contrast(ld, contrast);
+ rc = count;
+ } else
+ rc = -ENXIO;
+ up(&ld->sem);
+
+ return rc;
+}
+
+static ssize_t lcd_show_max_contrast(struct class_device *cdev, char *buf)
+{
+ int rc;
+ struct lcd_device *ld = to_lcd_device(cdev);
+
+ down(&ld->sem);
+ if (likely(ld->props))
+ rc = sprintf(buf, "%d\n", ld->props->max_contrast);
+ else
+ rc = -ENXIO;
+ up(&ld->sem);
+
+ return rc;
+}
+
+static void lcd_class_release(struct class_device *dev)
+{
+ struct lcd_device *ld = to_lcd_device(dev);
+ kfree(ld);
+}
+
+struct class lcd_class = {
+ .name = "lcd",
+ .release = lcd_class_release,
+};
+
+#define DECLARE_ATTR(_name,_mode,_show,_store) \
+{ \
+ .attr = { .name = __stringify(_name), .mode = _mode, .owner = THIS_MODULE }, \
+ .show = _show, \
+ .store = _store, \
+}
+
+static struct class_device_attribute lcd_class_device_attributes[] = {
+ DECLARE_ATTR(power, 0644, lcd_show_power, lcd_store_power),
+ DECLARE_ATTR(contrast, 0644, lcd_show_contrast, lcd_store_contrast),
+ DECLARE_ATTR(max_contrast, 0444, lcd_show_max_contrast, NULL),
+};
+
+/* This callback gets called when something important happens inside a
+ * framebuffer driver. We check whether that event is a blanking event,
+ * and if it is, we switch the LCD power accordingly.
+ */
+static int fb_notifier_callback(struct notifier_block *self,
+ unsigned long event, void *data)
+{
+ struct lcd_device *ld;
+ struct fb_event *evdata =(struct fb_event *)data;
+
+ /* If we aren't interested in this event, skip it immediately ... */
+ if (event != FB_EVENT_BLANK)
+ return 0;
+
+ ld = container_of(self, struct lcd_device, fb_notif);
+ down(&ld->sem);
+ if (ld->props)
+ if (!ld->props->check_fb || ld->props->check_fb(evdata->info))
+ ld->props->set_power(ld, *(int *)evdata->data);
+ up(&ld->sem);
+ return 0;
+}
+
+/**
+ * lcd_device_register - register a new object of the lcd_device class.
+ * @name: the name of the new object (must be the same as the name of the
+ * respective framebuffer device).
+ * @devdata: an optional pointer to be stored in the class_device. The
+ * methods may retrieve it by using class_get_devdata(ld->class_dev).
+ * @lp: the lcd properties structure.
+ *
+ * Creates and registers a new lcd class_device. Returns either an ERR_PTR()
+ * or a pointer to the newly allocated device.
+ */
+struct lcd_device *lcd_device_register(const char *name, void *devdata,
+ struct lcd_properties *lp)
+{
+ int i, rc;
+ struct lcd_device *new_ld;
+
+ pr_debug("lcd_device_register: name=%s\n", name);
+
+ new_ld = kmalloc(sizeof(struct lcd_device), GFP_KERNEL);
+ if (unlikely(!new_ld))
+		return ERR_PTR(-ENOMEM);
+
+ init_MUTEX(&new_ld->sem);
+ new_ld->props = lp;
+ memset(&new_ld->class_dev, 0, sizeof(new_ld->class_dev));
+ new_ld->class_dev.class = &lcd_class;
+ strlcpy(new_ld->class_dev.class_id, name, KOBJ_NAME_LEN);
+ class_set_devdata(&new_ld->class_dev, devdata);
+
+ rc = class_device_register(&new_ld->class_dev);
+ if (unlikely(rc)) {
+error: kfree(new_ld);
+ return ERR_PTR(rc);
+ }
+
+ memset(&new_ld->fb_notif, 0, sizeof(new_ld->fb_notif));
+ new_ld->fb_notif.notifier_call = fb_notifier_callback;
+
+ rc = fb_register_client(&new_ld->fb_notif);
+ if (unlikely(rc))
+ goto error;
+
+ for (i = 0; i < ARRAY_SIZE(lcd_class_device_attributes); i++) {
+ rc = class_device_create_file(&new_ld->class_dev,
+ &lcd_class_device_attributes[i]);
+ if (unlikely(rc)) {
+ while (--i >= 0)
+ class_device_remove_file(&new_ld->class_dev,
+ &lcd_class_device_attributes[i]);
+ class_device_unregister(&new_ld->class_dev);
+ /* No need to kfree(new_ld) since release() method was called */
+ return ERR_PTR(rc);
+ }
+ }
+
+ return new_ld;
+}
+EXPORT_SYMBOL(lcd_device_register);
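
A minimal sketch of a panel driver registering against this class; the
"mypanel" names are hypothetical, and lcd_properties is assumed to carry an
owner field plus the get_power/set_power callbacks dereferenced above:

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/err.h>
    #include <linux/lcd.h>

    static int mypanel_power;               /* hypothetical panel state */

    static int mypanel_get_power(struct lcd_device *ld)
    {
            return mypanel_power;
    }

    static int mypanel_set_power(struct lcd_device *ld, int power)
    {
            mypanel_power = power;          /* program the real panel here */
            return 0;
    }

    static struct lcd_properties mypanel_props = {
            .owner     = THIS_MODULE,       /* assumed field, mirroring backlight_properties */
            .get_power = mypanel_get_power,
            .set_power = mypanel_set_power,
    };

    static struct lcd_device *mypanel_lcd;

    static int __init mypanel_init(void)
    {
            mypanel_lcd = lcd_device_register("mypanel", NULL, &mypanel_props);
            if (IS_ERR(mypanel_lcd))
                    return PTR_ERR(mypanel_lcd);
            return 0;
    }
    module_init(mypanel_init);
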
+
+/**
+ * lcd_device_unregister - unregisters an object of the lcd_device class.
+ * @ld: the lcd device object to be unregistered and freed.
+ *
+ * Unregisters an object previously registered via lcd_device_register.
+ */
+void lcd_device_unregister(struct lcd_device *ld)
+{
+ int i;
+
+ if (!ld)
+ return;
+
+ pr_debug("lcd_device_unregister: name=%s\n", ld->class_dev.class_id);
+
+ for (i = 0; i < ARRAY_SIZE(lcd_class_device_attributes); i++)
+ class_device_remove_file(&ld->class_dev,
+ &lcd_class_device_attributes[i]);
+
+ down(&ld->sem);
+ ld->props = NULL;
+ up(&ld->sem);
+
+ fb_unregister_client(&ld->fb_notif);
+
+ class_device_unregister(&ld->class_dev);
+}
+EXPORT_SYMBOL(lcd_device_unregister);
+
+static void __exit lcd_class_exit(void)
+{
+ class_unregister(&lcd_class);
+}
+
+static int __init lcd_class_init(void)
+{
+ return class_register(&lcd_class);
+}
+
+/*
+ * if this is compiled into the kernel, we need to ensure that the
+ * class is registered before users of the class try to register lcd's
+ */
+postcore_initcall(lcd_class_init);
+module_exit(lcd_class_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Jamey Hicks <jamey.hicks@hp.com>, Andrew Zabolotny <zap@homelink.ru>");
+MODULE_DESCRIPTION("LCD Lowlevel Control Abstraction");
int is_8mb, linebytes, i;
if (!sdev) {
- prom_getproperty(node, "address",
- (char *) &bases[0], sizeof(bases));
- if (!bases[0]) {
+ if (prom_getproperty(node, "address",
+ (char *) &bases[0], sizeof(bases)) <= 0
+ || !bases[0]) {
printk(KERN_ERR "cg14: Device is not mapped.\n");
return;
}
case FB_BLANK_HSYNC_SUSPEND: /* VESA blank (hsync off) */
case FB_BLANK_POWERDOWN: /* Poweroff */
	val = sbus_readb(&regs->control);
- val |= CG3_CR_ENABLE_VIDEO;
+ val &= ~CG3_CR_ENABLE_VIDEO;
	sbus_writeb(val, &regs->control);
par->flags |= CG3_FLAG_BLANKED;
break;
static struct sbus_mmap_map cg3_mmap_map[] = {
{
- .poff = CG3_MMAP_OFFSET,
- .voff = CG3_RAM_OFFSET,
+ .voff = CG3_MMAP_OFFSET,
+ .poff = CG3_RAM_OFFSET,
.size = SBUS_MMAP_FBSIZE(1)
},
{ .size = 0 }
all->par.physbase = sdev->reg_addrs[0].phys_addr;
sbusfb_fill_var(&all->info.var, sdev->prom_node, 8);
+ all->info.var.red.length = 8;
+ all->info.var.green.length = 8;
+ all->info.var.blue.length = 8;
if (!strcmp(sdev->prom_name, "cgRDI"))
all->par.flags |= CG3_FLAG_RDI;
if (all->par.flags & CG3_FLAG_RDI)
kfree(all);
return;
}
+ fb_set_cmap(&all->info.cmap, &all->info);
cg3_init_fix(&all->info, linebytes);
all->par.physbase = sdev->reg_addrs[0].phys_addr;
sbusfb_fill_var(&all->info.var, sdev->prom_node, 8);
+ all->info.var.red.length = 8;
+ all->info.var.green.length = 8;
+ all->info.var.blue.length = 8;
linebytes = prom_getintdefault(sdev->prom_node, "linebytes",
all->info.var.xres);
return;
}
+ fb_set_cmap(&all->info.cmap, &all->info);
cg6_init_fix(&all->info, linebytes);
if (register_framebuffer(&all->info) < 0) {
#endif
#define FBMON_FIX_HEADER 1
+#define FBMON_FIX_INPUT 2
#ifdef CONFIG_FB_MODE_HELPERS
struct broken_edid {
static struct broken_edid brokendb[] = {
/* DEC FR-PCXAV-YZ */
- { .manufacturer = "DEC",
- .model = 0x073a,
- .fix = FBMON_FIX_HEADER,
+ {
+ .manufacturer = "DEC",
+ .model = 0x073a,
+ .fix = FBMON_FIX_HEADER,
+ },
+ /* ViewSonic PF775a */
+ {
+ .manufacturer = "VSC",
+ .model = 0x5a44,
+ .fix = FBMON_FIX_INPUT,
},
};
while (i-- && (*--s == 0x20)) *s = 0;
}
-static void fix_broken_edid(unsigned char *edid)
+static int check_edid(unsigned char *edid)
{
unsigned char *block = edid + ID_MANUFACTURER_NAME, manufacturer[4];
- u32 model, i;
+ unsigned char *b;
+ u32 model;
+ int i, fix = 0, ret = 0;
manufacturer[0] = ((block[0] & 0x7c) >> 2) + '@';
manufacturer[1] = ((block[0] & 0x03) << 3) +
for (i = 0; i < ARRAY_SIZE(brokendb); i++) {
if (!strncmp(manufacturer, brokendb[i].manufacturer, 4) &&
brokendb[i].model == model) {
- switch (brokendb[i].fix) {
- case FBMON_FIX_HEADER:
- printk("fbmon: The EDID header of "
- "Manufacturer: %s Model: 0x%x is "
- "known to be broken,\n"
- "fbmon: trying a header "
- "reconstruct\n", manufacturer, model);
- memcpy(edid, edid_v1_header, 8);
- break;
- }
+ printk("fbmon: The EDID Block of "
+ "Manufacturer: %s Model: 0x%x is known to "
+ "be broken,\n", manufacturer, model);
+ fix = brokendb[i].fix;
+ break;
+ }
+ }
+
+ switch (fix) {
+ case FBMON_FIX_HEADER:
+ for (i = 0; i < 8; i++) {
+ if (edid[i] != edid_v1_header[i])
+ ret = fix;
}
+ break;
+ case FBMON_FIX_INPUT:
+ b = edid + EDID_STRUCT_DISPLAY;
+ /* Only if display is GTF capable will
+ the input type be reset to analog */
+ if (b[4] & 0x01 && b[0] & 0x80)
+ ret = fix;
+ break;
+ }
+
+ return ret;
+}
+
+static void fix_edid(unsigned char *edid, int fix)
+{
+ unsigned char *b;
+
+ switch (fix) {
+ case FBMON_FIX_HEADER:
+ printk("fbmon: trying a header reconstruct\n");
+ memcpy(edid, edid_v1_header, 8);
+ break;
+ case FBMON_FIX_INPUT:
+ printk("fbmon: trying to fix input type\n");
+ b = edid + EDID_STRUCT_DISPLAY;
+ b[0] &= ~0x80;
+ edid[127] += 0x80;
}
}
static int edid_checksum(unsigned char *edid)
{
unsigned char i, csum = 0, all_null = 0;
+ int err = 0, fix = check_edid(edid);
+
+ if (fix)
+ fix_edid(edid, fix);
for (i = 0; i < EDID_LENGTH; i++) {
csum += edid[i];
if (csum == 0x00 && all_null) {
/* checksum passed, everything's good */
- return 1;
+ err = 1;
}
- fix_broken_edid(edid);
- csum = all_null = 0;
- for (i = 0; i < EDID_LENGTH; i++) {
- csum += edid[i];
- all_null |= edid[i];
- }
- if (csum != 0x00 || !all_null) {
- printk("EDID checksum failed, aborting\n");
- return 0;
- }
- return 1;
+ return err;
}
static int edid_check_header(unsigned char *edid)
{
- int i, fix = 0;
+ int i, err = 1, fix = check_edid(edid);
- for (i = 0; i < 8; i++) {
- if (edid[i] != edid_v1_header[i])
- fix = 1;
- }
- if (!fix)
- return 1;
+ if (fix)
+ fix_edid(edid, fix);
- fix_broken_edid(edid);
for (i = 0; i < 8; i++) {
if (edid[i] != edid_v1_header[i])
- return 0;
+ err = 0;
}
- return 1;
+
+ return err;
}
static void parse_vendor_block(unsigned char *block, struct fb_monspecs *specs)
fb_get_monitor_limits(edid, specs);
- c = (block[0] & 0x80) >> 7;
+ c = block[0] & 0x80;
specs->input = 0;
if (c) {
specs->input |= FB_DISP_DDI;
DPRINTK("0.700V/0.000V");
specs->input |= FB_DISP_ANA_700_000;
break;
- default:
- DPRINTK("unknown");
- specs->input |= FB_DISP_UNKNOWN;
}
}
DPRINTK("\n Sync: ");
- c = (block[0] & 0x10) >> 4;
+ c = block[0] & 0x10;
if (c)
DPRINTK(" Configurable signal level\n");
c = block[0] & 0x0f;
*/
#include <linux/module.h>
+#include <linux/console.h>
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/errno.h>
static int gbe_revision;
-static struct fb_info fb_info;
static int ypan, ywrap;
static uint32_t pseudo_palette[256];
static int flat_panel_enabled = 0;
-static struct gbefb_par par_current;
-
static void gbe_reset(void)
{
/* Turn on dotclock PLL */
.fb_cursor = soft_cursor,
};
+/*
+ * sysfs
+ */
+
+static ssize_t gbefb_show_memsize(struct device *dev, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%d\n", gbe_mem_size);
+}
+
+static DEVICE_ATTR(size, S_IRUGO, gbefb_show_memsize, NULL);
+
+static ssize_t gbefb_show_rev(struct device *device, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%d\n", gbe_revision);
+}
+
+static DEVICE_ATTR(revision, S_IRUGO, gbefb_show_rev, NULL);
+
+static void __devexit gbefb_remove_sysfs(struct device *dev)
+{
+ device_remove_file(dev, &dev_attr_size);
+ device_remove_file(dev, &dev_attr_revision);
+}
+
+static void gbefb_create_sysfs(struct device *dev)
+{
+ device_create_file(dev, &dev_attr_size);
+ device_create_file(dev, &dev_attr_revision);
+}
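
Both attributes show up as plain text files under the platform device in
sysfs; a minimal userspace reader might look like this, assuming the device
lands under /sys/devices/platform/gbefb:

    #include <stdio.h>

    int main(void)
    {
            char buf[32];
            /* The path is an assumption about where the platform device lands. */
            FILE *f = fopen("/sys/devices/platform/gbefb/size", "r");

            if (f && fgets(buf, sizeof(buf), f))
                    printf("gbefb memory size: %s", buf);
            if (f)
                    fclose(f);
            return 0;
    }
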
+
/*
* Initialization
*/
return 0;
}
-int __init gbefb_init(void)
+static int __init gbefb_probe(struct device *dev)
{
int i, ret = 0;
-
+ struct fb_info *info;
+ struct gbefb_par *par;
+ struct platform_device *p_dev = to_platform_device(dev);
#ifndef MODULE
char *options = NULL;
+#endif
+ info = framebuffer_alloc(sizeof(struct gbefb_par), &p_dev->dev);
+ if (!info)
+ return -ENOMEM;
+
+#ifndef MODULE
if (fb_get_options("gbefb", &options))
return -ENODEV;
gbefb_setup(options);
if (!request_mem_region(GBE_BASE, sizeof(struct sgi_gbe), "GBE")) {
printk(KERN_ERR "gbefb: couldn't reserve mmio region\n");
- return -EBUSY;
+ ret = -EBUSY;
+ goto out_release_framebuffer;
}
gbe = (struct sgi_gbe *) ioremap(GBE_BASE, sizeof(struct sgi_gbe));
goto out_unmap;
}
-
if (gbe_mem_phys) {
/* memory was allocated at boot time */
gbe_mem = ioremap_nocache(gbe_mem_phys, gbe_mem_size);
for (i = 0; i < (gbe_mem_size >> TILE_SHIFT); i++)
gbe_tiles.cpu[i] = (gbe_mem_phys >> TILE_SHIFT) + i;
- fb_info.fbops = &gbefb_ops;
- fb_info.pseudo_palette = pseudo_palette;
- fb_info.flags = FBINFO_DEFAULT;
- fb_info.screen_base = gbe_mem;
- fb_alloc_cmap(&fb_info.cmap, 256, 0);
+ info->fbops = &gbefb_ops;
+ info->pseudo_palette = pseudo_palette;
+ info->flags = FBINFO_DEFAULT;
+ info->screen_base = gbe_mem;
+ fb_alloc_cmap(&info->cmap, 256, 0);
/* reset GBE */
gbe_reset();
+ par = info->par;
/* turn on default video mode */
- if (fb_find_mode(&par_current.var, &fb_info, mode_option, NULL, 0,
+ if (fb_find_mode(&par->var, info, mode_option, NULL, 0,
default_mode, 8) == 0)
- par_current.var = *default_var;
- fb_info.var = par_current.var;
- gbefb_check_var(&par_current.var, &fb_info);
- gbefb_encode_fix(&fb_info.fix, &fb_info.var);
- fb_info.par = &par_current;
+ par->var = *default_var;
+ info->var = par->var;
+ gbefb_check_var(&par->var, info);
+ gbefb_encode_fix(&info->fix, &info->var);
- if (register_framebuffer(&fb_info) < 0) {
- ret = -ENXIO;
+ if (register_framebuffer(info) < 0) {
printk(KERN_ERR "gbefb: couldn't register framebuffer\n");
+ ret = -ENXIO;
goto out_gbe_unmap;
}
+ dev_set_drvdata(&p_dev->dev, info);
+ gbefb_create_sysfs(dev);
+
printk(KERN_INFO "fb%d: %s rev %d @ 0x%08x using %dkB memory\n",
- fb_info.node, fb_info.fix.id, gbe_revision, (unsigned) GBE_BASE,
+ info->node, info->fix.id, gbe_revision, (unsigned) GBE_BASE,
gbe_mem_size >> 10);
return 0;
iounmap(gbe);
out_release_mem_region:
release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
+out_release_framebuffer:
+ framebuffer_release(info);
+
return ret;
}
-void __exit gbefb_exit(void)
+static int __devexit gbefb_remove(struct device* dev)
{
- unregister_framebuffer(&fb_info);
+ struct platform_device *p_dev = to_platform_device(dev);
+ struct fb_info *info = dev_get_drvdata(&p_dev->dev);
+
+ unregister_framebuffer(info);
gbe_turn_off();
if (gbe_dma_addr)
dma_free_coherent(NULL, gbe_mem_size, gbe_mem, gbe_mem_phys);
(void *)gbe_tiles.cpu, gbe_tiles.dma);
release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
iounmap(gbe);
+ gbefb_remove_sysfs(dev);
+ framebuffer_release(info);
+
+ return 0;
}
-module_init(gbefb_init);
+static struct device_driver gbefb_driver = {
+ .name = "gbefb",
+ .bus = &platform_bus_type,
+ .probe = gbefb_probe,
+ .remove = __devexit_p(gbefb_remove),
+};
+
+static struct platform_device gbefb_device = {
+ .name = "gbefb",
+};
-#ifdef MODULE
+int __init gbefb_init(void)
+{
+ int ret = driver_register(&gbefb_driver);
+ if (!ret) {
+ ret = platform_device_register(&gbefb_device);
+ if (ret)
+ driver_unregister(&gbefb_driver);
+ }
+ return ret;
+}
+
+void __exit gbefb_exit(void)
+{
+ driver_unregister(&gbefb_driver);
+}
+
+module_init(gbefb_init);
module_exit(gbefb_exit);
-#endif
MODULE_LICENSE("GPL");
end_iring(par);
}
-/**
- * mono_src_copy_blit - color expand from video memory to framebuffer
- * @dwidth: width of destination
- * @dheight: height of destination
- * @dpitch: pixels per line of the buffer
- * @qsize: size of bitmap in quad words
- * @dest: address of first byte of pixel;
- * @rop: raster operation
- * @blit_bpp: pixelformat to use which can be different from the
- * framebuffer's pixelformat
- * @src: address of image data
- * @bg: backgound color
- * @fg: forground color
- * @par: pointer to i810fb_par structure
- *
- * DESCRIPTION:
- * A color expand operation where the source data is in video memory.
- * Useful for drawing text.
- *
- * REQUIREMENT:
- * The end of a scanline must be padded to the next word.
- */
-static inline void mono_src_copy_blit(int dwidth, int dheight, int dpitch,
- int qsize, int blit_bpp, int rop,
- int dest, int src, int bg,
- int fg, struct fb_info *info)
-{
- struct i810fb_par *par = (struct i810fb_par *) info->par;
-
- if (begin_iring(info, 32 + IRING_PAD)) return;
-
- PUT_RING(BLIT | MONO_SOURCE_COPY_BLIT | 6);
- PUT_RING(DYN_COLOR_EN | blit_bpp | rop << 16 | dpitch | 1 << 27);
- PUT_RING(dheight << 16 | dwidth);
- PUT_RING(dest);
- PUT_RING(qsize - 1);
- PUT_RING(src);
- PUT_RING(bg);
- PUT_RING(fg);
-
- end_iring(par);
-}
-
static inline void load_front(int offset, struct fb_info *info)
{
struct i810fb_par *par = (struct i810fb_par *) info->par;
return pci_register_driver(&i810fb_driver);
}
-module_param(vram, int, 4);
+module_param(vram, int, 0);
MODULE_PARM_DESC(vram, "System RAM to allocate to framebuffer in MiB"
" (default=4)");
module_param(voffset, int, 0);
MODULE_PARM_DESC(voffset, "at what offset to place start of framebuffer "
"memory (0 to maximum aperture size), in MiB (default = 48)");
-module_param(bpp, int, 8);
+module_param(bpp, int, 0);
MODULE_PARM_DESC(bpp, "Color depth for display in bits per pixel"
" (default = 8)");
-module_param(xres, int, 640);
+module_param(xres, int, 0);
MODULE_PARM_DESC(xres, "Horizontal resolution in pixels (default = 640)");
-module_param(yres, int, 480);
+module_param(yres, int, 0);
MODULE_PARM_DESC(yres, "Vertical resolution in scanlines (default = 480)");
-module_param(vyres,int, 480);
+module_param(vyres,int, 0);
MODULE_PARM_DESC(vyres, "Virtual vertical resolution in scanlines"
" (default = 480)");
module_param(hsync1, int, 0);
static int
intelfb_set_par(struct fb_info *info)
{
- struct intelfb_hwstate hw;
-
+ struct intelfb_hwstate *hw;
struct intelfb_info *dinfo = GET_DINFO(info);
if (FIXED_MODE(dinfo)) {
return -EINVAL;
}
+ hw = kmalloc(sizeof(*hw), GFP_ATOMIC);
+ if (!hw)
+ return -ENOMEM;
+
DBG_MSG("intelfb_set_par (%dx%d-%d)\n", info->var.xres,
info->var.yres, info->var.bits_per_pixel);
if (dinfo->accel)
intelfbhw_2d_stop(dinfo);
- hw = dinfo->save_state;
- if (intelfbhw_mode_to_hw(dinfo, &hw, &info->var))
- return -EINVAL;
- if (intelfbhw_program_mode(dinfo, &hw, 0))
- return -EINVAL;
+ memcpy(hw, &dinfo->save_state, sizeof(*hw));
+ if (intelfbhw_mode_to_hw(dinfo, hw, &info->var))
+ goto invalid_mode;
+ if (intelfbhw_program_mode(dinfo, hw, 0))
+ goto invalid_mode;
#if REGDUMP > 0
- intelfbhw_read_hw_state(dinfo, &hw, 0);
- intelfbhw_print_hw_state(dinfo, &hw);
+ intelfbhw_read_hw_state(dinfo, hw, 0);
+ intelfbhw_print_hw_state(dinfo, hw);
#endif
update_dinfo(dinfo, &info->var);
} else {
info->flags = FBINFO_DEFAULT | FBINFO_HWACCEL_YPAN;
}
+ kfree(hw);
return 0;
+invalid_mode:
+ kfree(hw);
+ return -EINVAL;
}
static int
OUTREG(ADPA, tmp);
/* setup display plane */
+ if (dinfo->pdev->device == PCI_DEVICE_ID_INTEL_830M) {
+ /*
+ * i830M errata: the display plane must be enabled
+ * to allow writes to the other bits in the plane
+ * control register.
+ */
+ tmp = INREG(DSPACNTR);
+ if ((tmp & DISPPLANE_PLANE_ENABLE) != DISPPLANE_PLANE_ENABLE) {
+ tmp |= DISPPLANE_PLANE_ENABLE;
+ OUTREG(DSPACNTR, tmp);
+ OUTREG(DSPACNTR,
+ hw->disp_a_ctrl|DISPPLANE_PLANE_ENABLE);
+ mdelay(1);
+ }
+ }
+
OUTREG(DSPACNTR, hw->disp_a_ctrl & ~DISPPLANE_PLANE_ENABLE);
OUTREG(DSPASTRIDE, hw->disp_a_stride);
OUTREG(DSPABASE, hw->disp_a_base);
config LOGO_DEC_CLUT224
bool "224-color Digital Equipment Corporation Linux logo"
- depends on LOGO && DECSTATION
+ depends on LOGO && MACH_DECSTATION
default y
config LOGO_MAC_CLUT224
# Each configuration option enables a list of files.
-my-obj-$(CONFIG_FB_MATROX_G100) += g450_pll.o
-my-obj-$(CONFIG_FB_MATROX_G450) += matroxfb_g450.o matroxfb_crtc2.o
+my-obj-$(CONFIG_FB_MATROX_G) += g450_pll.o matroxfb_g450.o matroxfb_crtc2.o
obj-$(CONFIG_FB_MATROX) += matroxfb_base.o matroxfb_accel.o matroxfb_DAC1064.o matroxfb_Ti3026.o matroxfb_misc.o $(my-obj-y)
obj-$(CONFIG_FB_MATROX_I2C) += i2c-matroxfb.o
#ifdef CONFIG_FB_MATROX_MYSTIQUE
extern struct matrox_switch matrox_mystique;
#endif
-#ifdef CONFIG_FB_MATROX_G100
+#ifdef CONFIG_FB_MATROX_G
extern struct matrox_switch matrox_G100;
#endif
#ifdef NEED_DAC1064
#include "matroxfb_base.h"
-#ifdef CONFIG_FB_MATROX_G450
+#ifdef CONFIG_FB_MATROX_G
void matroxfb_g450_connect(WPMINFO2);
void matroxfb_g450_shutdown(WPMINFO2);
#else
static struct fb_info fb_info;
static struct fb_var_screeninfo maxinefb_defined = {
- .xres = 1024,
- .yres = 768,
- .xres_virtual = 1024,
- .yres_virtual = 768,
- .bits_per_pixel = 8,
- .red.length = 8,
- .green.length = 8,
- .blue.length = 8,
- .activate = FB_ACTIVATE_NOW,
- .height = -1,
- .width = -1,
- .vmode = FB_VMODE_NONINTERLACED,
+ .xres = 1024,
+ .yres = 768,
+ .xres_virtual = 1024,
+ .yres_virtual = 768,
+ .bits_per_pixel =8,
+ .activate = FB_ACTIVATE_NOW,
+ .height = -1,
+ .width = -1,
+ .vmode = FB_VMODE_NONINTERLACED,
};
static struct fb_fix_screeninfo maxinefb_fix = {
- .id = "Maxine onboard graphics 1024x768x8",
- .smem_len = (1024*768),
- .type = FB_TYPE_PACKED_PIXELS,
- .visual = FB_VISUAL_PSEUDOCOLOR,
- .line_length = 1024,
+ .id = "Maxine onboard graphics 1024x768x8",
+ .smem_len = (1024*768),
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_PSEUDOCOLOR,
+ .line_length = 1024,
};
-/* Reference to machine type set in arch/mips/dec/prom/identify.c, KM */
-extern unsigned long mips_machtype;
-
/* Handle the funny Inmos RamDAC/video controller ... */
void maxinefb_ims332_write_register(int regno, register unsigned int val)
/* value to be written into the palette reg. */
unsigned long hw_colorvalue = 0;
- red >>= 8; /* The cmap fields are 16 bits */
- green >>= 8; /* wide, but the harware colormap */
- blue >>= 8; /* registers are only 8 bits wide */
+ red >>= 8; /* The cmap fields are 16 bits */
+	green >>= 8;	/* wide, but the hardware colormap */
+ blue >>= 8; /* registers are only 8 bits wide */
hw_colorvalue = (blue << 16) + (green << 8) + (red);
-
+
maxinefb_ims332_write_register(IMS332_REG_COLOR_PALETTE + regno,
hw_colorvalue);
return 0;
static struct fb_ops maxinefb_ops = {
.owner = THIS_MODULE,
- .fb_setcolreg = maxinefb_setcolreg,
+ .fb_get_fix = gen_get_fix,
+ .fb_get_var = gen_get_var,
+ .fb_setcolreg = maxinefb_setcolreg,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
- .fb_imageblit = cfb_imageblit,
+ .fb_imageblit = cfb_imageblit,
.fb_cursor = soft_cursor,
};
int __init maxinefb_init(void)
{
- volatile unsigned char *fboff;
+ unsigned long fboff;
unsigned long fb_start;
int i;
/* Clear screen */
for (fboff = fb_start; fboff < fb_start + 0x1ffff; fboff++)
- *fboff = 0x0;
+ *(volatile unsigned char *)fboff = 0x0;
maxinefb_fix.smem_start = fb_start;
}
fb_info.fbops = &maxinefb_ops;
- fb_info.screen_base = (char *) maxinefb_fix.smem_start;
+ fb_info.screen_base = (char *)maxinefb_fix.smem_start;
fb_info.var = maxinefb_defined;
fb_info.fix = maxinefb_fix;
fb_info.flags = FBINFO_DEFAULT;
--- /dev/null
+/*
+ * linux/drivers/video/pmag-aa-fb.c
+ * Copyright 2002 Karsten Merker <merker@debian.org>
+ *
+ * PMAG-AA TurboChannel framebuffer card support ... derived from
+ * pmag-ba-fb.c, which is Copyright (C) 1999, 2000, 2001 by
+ * Michael Engel <engel@unix-ag.org>, Karsten Merker <merker@debian.org>
+ * and Harald Koerfgen <hkoerfg@web.de>, which itself is derived from
+ * "HP300 Topcat framebuffer support (derived from macfb of all things)
+ * Phil Blundell <philb@gnu.org> 1998"
+ *
+ * This file is subject to the terms and conditions of the GNU General
+ * Public License. See the file COPYING in the main directory of this
+ * archive for more details.
+ *
+ * 2002-09-28 Karsten Merker <merker@linuxtag.org>
+ * Version 0.01: First try to get a PMAG-AA running.
+ *
+ * 2003-02-24 Thiemo Seufer <seufer@csv.ica.uni-stuttgart.de>
+ * Version 0.02: Major code cleanup.
+ *
+ * 2003-09-21 Thiemo Seufer <seufer@csv.ica.uni-stuttgart.de>
+ * Hardware cursor support.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/mm.h>
+#include <linux/tty.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/fb.h>
+#include <linux/console.h>
+
+#include <asm/bootinfo.h>
+#include <asm/dec/machtype.h>
+#include <asm/dec/tc.h>
+
+#include <video/fbcon.h>
+#include <video/fbcon-cfb8.h>
+
+#include "bt455.h"
+#include "bt431.h"
+
+/* Version information */
+#define DRIVER_VERSION "0.02"
+#define DRIVER_AUTHOR "Karsten Merker <merker@linuxtag.org>"
+#define DRIVER_DESCRIPTION "PMAG-AA Framebuffer Driver"
+
+/* Prototypes */
+static int aafb_set_var(struct fb_var_screeninfo *var, int con,
+ struct fb_info *info);
+
+/*
+ * Bt455 RAM DAC register base offset (rel. to TC slot base address).
+ */
+#define PMAG_AA_BT455_OFFSET 0x100000
+
+/*
+ * Bt431 cursor generator offset (rel. to TC slot base address).
+ */
+#define PMAG_AA_BT431_OFFSET 0x180000
+
+/*
+ * Start of the PMAG-AA framebuffer memory relative to the TC slot address;
+ * resolution is 1280x1024x1 (8 bits deep, but only LSB is used).
+ */
+#define PMAG_AA_ONBOARD_FBMEM_OFFSET 0x200000
+
+struct aafb_cursor {
+ struct timer_list timer;
+ int enable;
+ int on;
+ int vbl_cnt;
+ int blink_rate;
+ u16 x, y, width, height;
+};
+
+#define CURSOR_TIMER_FREQ (HZ / 50)
+#define CURSOR_BLINK_RATE (20)
+#define CURSOR_DRAW_DELAY (2)
+
+struct aafb_info {
+ struct fb_info info;
+ struct display disp;
+ struct aafb_cursor cursor;
+ struct bt455_regs *bt455;
+ struct bt431_regs *bt431;
+ unsigned long fb_start;
+ unsigned long fb_size;
+ unsigned long fb_line_length;
+};
+
+/*
+ * Max 3 TURBOchannel slots -> max 3 PMAG-AA.
+ */
+static struct aafb_info my_fb_info[3];
+
+static struct aafb_par {
+} current_par;
+
+static int currcon = -1;
+
+static void aafb_set_cursor(struct aafb_info *info, int on)
+{
+ struct aafb_cursor *c = &info->cursor;
+
+ if (on) {
+ bt431_position_cursor(info->bt431, c->x, c->y);
+ bt431_enable_cursor(info->bt431);
+ } else
+ bt431_erase_cursor(info->bt431);
+}
+
+static void aafbcon_cursor(struct display *disp, int mode, int x, int y)
+{
+ struct aafb_info *info = (struct aafb_info *)disp->fb_info;
+ struct aafb_cursor *c = &info->cursor;
+
+ x *= fontwidth(disp);
+ y *= fontheight(disp);
+
+ if (c->x == x && c->y == y && (mode == CM_ERASE) == !c->enable)
+ return;
+
+ c->enable = 0;
+ if (c->on)
+ aafb_set_cursor(info, 0);
+ c->x = x - disp->var.xoffset;
+ c->y = y - disp->var.yoffset;
+
+ switch (mode) {
+ case CM_ERASE:
+ c->on = 0;
+ break;
+ case CM_DRAW:
+ case CM_MOVE:
+ if (c->on)
+ aafb_set_cursor(info, c->on);
+ else
+ c->vbl_cnt = CURSOR_DRAW_DELAY;
+ c->enable = 1;
+ break;
+ }
+}
+
+static int aafbcon_set_font(struct display *disp, int width, int height)
+{
+ struct aafb_info *info = (struct aafb_info *)disp->fb_info;
+ struct aafb_cursor *c = &info->cursor;
+ u8 fgc = ~attr_bgcol_ec(disp, disp->conp);
+
+ if (width > 64 || height > 64 || width < 0 || height < 0)
+ return -EINVAL;
+
+ c->height = height;
+ c->width = width;
+
+ bt431_set_font(info->bt431, fgc, width, height);
+
+ return 1;
+}
+
+static void aafb_cursor_timer_handler(unsigned long data)
+{
+ struct aafb_info *info = (struct aafb_info *)data;
+ struct aafb_cursor *c = &info->cursor;
+
+ if (!c->enable)
+ goto out;
+
+ if (c->vbl_cnt && --c->vbl_cnt == 0) {
+ c->on ^= 1;
+ aafb_set_cursor(info, c->on);
+ c->vbl_cnt = c->blink_rate;
+ }
+
+out:
+ c->timer.expires = jiffies + CURSOR_TIMER_FREQ;
+ add_timer(&c->timer);
+}
+
+static void __init aafb_cursor_init(struct aafb_info *info)
+{
+ struct aafb_cursor *c = &info->cursor;
+
+ c->enable = 1;
+ c->on = 1;
+ c->x = c->y = 0;
+ c->width = c->height = 0;
+ c->vbl_cnt = CURSOR_DRAW_DELAY;
+ c->blink_rate = CURSOR_BLINK_RATE;
+
+ init_timer(&c->timer);
+ c->timer.data = (unsigned long)info;
+ c->timer.function = aafb_cursor_timer_handler;
+ mod_timer(&c->timer, jiffies + CURSOR_TIMER_FREQ);
+}
+
+static void __exit aafb_cursor_exit(struct aafb_info *info)
+{
+ struct aafb_cursor *c = &info->cursor;
+
+ del_timer_sync(&c->timer);
+}
+
+static struct display_switch aafb_switch8 = {
+ .setup = fbcon_cfb8_setup,
+ .bmove = fbcon_cfb8_bmove,
+ .clear = fbcon_cfb8_clear,
+ .putc = fbcon_cfb8_putc,
+ .putcs = fbcon_cfb8_putcs,
+ .revc = fbcon_cfb8_revc,
+ .cursor = aafbcon_cursor,
+ .set_font = aafbcon_set_font,
+ .clear_margins = fbcon_cfb8_clear_margins,
+ .fontwidthmask = FONTWIDTH(4)|FONTWIDTH(8)|FONTWIDTH(12)|FONTWIDTH(16)
+};
+
+static void aafb_get_par(struct aafb_par *par)
+{
+ *par = current_par;
+}
+
+static int aafb_get_fix(struct fb_fix_screeninfo *fix, int con,
+ struct fb_info *info)
+{
+ struct aafb_info *ip = (struct aafb_info *)info;
+
+ memset(fix, 0, sizeof(struct fb_fix_screeninfo));
+ strcpy(fix->id, "PMAG-AA");
+ fix->smem_start = ip->fb_start;
+ fix->smem_len = ip->fb_size;
+ fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->ypanstep = 1;
+ fix->ywrapstep = 1;
+ fix->visual = FB_VISUAL_MONO10;
+ fix->line_length = 1280;
+ fix->accel = FB_ACCEL_NONE;
+
+ return 0;
+}
+
+static void aafb_set_disp(struct display *disp, int con,
+ struct aafb_info *info)
+{
+ struct fb_fix_screeninfo fix;
+
+ disp->fb_info = &info->info;
+ aafb_set_var(&disp->var, con, &info->info);
+ if (disp->conp && disp->conp->vc_sw && disp->conp->vc_sw->con_cursor)
+ disp->conp->vc_sw->con_cursor(disp->conp, CM_ERASE);
+ disp->dispsw = &aafb_switch8;
+ disp->dispsw_data = 0;
+
+ aafb_get_fix(&fix, con, &info->info);
+ disp->screen_base = (u8 *) fix.smem_start;
+ disp->visual = fix.visual;
+ disp->type = fix.type;
+ disp->type_aux = fix.type_aux;
+ disp->ypanstep = fix.ypanstep;
+ disp->ywrapstep = fix.ywrapstep;
+ disp->line_length = fix.line_length;
+ disp->next_line = 2048;
+ disp->can_soft_blank = 1;
+ disp->inverse = 0;
+ disp->scrollmode = SCROLL_YREDRAW;
+
+ aafbcon_set_font(disp, fontwidth(disp), fontheight(disp));
+}
+
+static int aafb_get_cmap(struct fb_cmap *cmap, int kspc, int con,
+ struct fb_info *info)
+{
+ static u16 color[2] = {0x0000, 0x000f};
+ static struct fb_cmap aafb_cmap = {0, 2, color, color, color, NULL};
+
+ fb_copy_cmap(&aafb_cmap, cmap, kspc ? 0 : 2);
+ return 0;
+}
+
+static int aafb_set_cmap(struct fb_cmap *cmap, int kspc, int con,
+ struct fb_info *info)
+{
+ u16 color[2] = {0x0000, 0x000f};
+
+ if (cmap->start == 0
+ && cmap->len == 2
+ && memcmp(cmap->red, color, sizeof(color)) == 0
+ && memcmp(cmap->green, color, sizeof(color)) == 0
+ && memcmp(cmap->blue, color, sizeof(color)) == 0
+ && cmap->transp == NULL)
+ return 0;
+ else
+ return -EINVAL;
+}
+
+static int aafb_ioctl(struct inode *inode, struct file *file, u32 cmd,
+ unsigned long arg, int con, struct fb_info *info)
+{
+ /* TODO: Not yet implemented */
+ return -ENOIOCTLCMD;
+}
+
+static int aafb_switch(int con, struct fb_info *info)
+{
+ struct aafb_info *ip = (struct aafb_info *)info;
+ struct display *old = (currcon < 0) ? &ip->disp : (fb_display + currcon);
+ struct display *new = (con < 0) ? &ip->disp : (fb_display + con);
+
+ if (old->conp && old->conp->vc_sw && old->conp->vc_sw->con_cursor)
+ old->conp->vc_sw->con_cursor(old->conp, CM_ERASE);
+
+ /* Set the current console. */
+ currcon = con;
+ aafb_set_disp(new, con, ip);
+
+ return 0;
+}
+
+static void aafb_encode_var(struct fb_var_screeninfo *var,
+ struct aafb_par *par)
+{
+ var->xres = 1280;
+ var->yres = 1024;
+ var->xres_virtual = 2048;
+ var->yres_virtual = 1024;
+ var->xoffset = 0;
+ var->yoffset = 0;
+ var->bits_per_pixel = 8;
+ var->grayscale = 1;
+ var->red.offset = 0;
+ var->red.length = 0;
+ var->red.msb_right = 0;
+ var->green.offset = 0;
+ var->green.length = 1;
+ var->green.msb_right = 0;
+ var->blue.offset = 0;
+ var->blue.length = 0;
+ var->blue.msb_right = 0;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ var->transp.msb_right = 0;
+ var->nonstd = 0;
+ var->activate &= ~FB_ACTIVATE_MASK & FB_ACTIVATE_NOW;
+ var->accel_flags = 0;
+ var->sync = FB_SYNC_ON_GREEN;
+ var->vmode &= ~FB_VMODE_MASK & FB_VMODE_NONINTERLACED;
+}
+
+static int aafb_get_var(struct fb_var_screeninfo *var, int con,
+ struct fb_info *info)
+{
+ if (con < 0) {
+ struct aafb_par par;
+
+ memset(var, 0, sizeof(struct fb_var_screeninfo));
+ aafb_get_par(&par);
+ aafb_encode_var(var, &par);
+ } else
+ *var = info->var;
+
+ return 0;
+}
+
+static int aafb_set_var(struct fb_var_screeninfo *var, int con,
+ struct fb_info *info)
+{
+ struct aafb_par par;
+
+ aafb_get_par(&par);
+ aafb_encode_var(var, &par);
+ info->var = *var;
+
+ return 0;
+}
+
+static int aafb_update_var(int con, struct fb_info *info)
+{
+ struct aafb_info *ip = (struct aafb_info *)info;
+ struct display *disp = (con < 0) ? &ip->disp : (fb_display + con);
+
+ if (con == currcon)
+ aafbcon_cursor(disp, CM_ERASE, ip->cursor.x, ip->cursor.y);
+
+ return 0;
+}
+
+/* 0 unblanks, any other blanks. */
+
+static void aafb_blank(int blank, struct fb_info *info)
+{
+ struct aafb_info *ip = (struct aafb_info *)info;
+ u8 val = blank ? 0x00 : 0x0f;
+
+ bt455_write_cmap_entry(ip->bt455, 1, val, val, val);
+ aafbcon_cursor(&ip->disp, CM_ERASE, ip->cursor.x, ip->cursor.y);
+}
+
+static struct fb_ops aafb_ops = {
+ .owner = THIS_MODULE,
+ .fb_get_fix = aafb_get_fix,
+ .fb_get_var = aafb_get_var,
+ .fb_set_var = aafb_set_var,
+ .fb_get_cmap = aafb_get_cmap,
+ .fb_set_cmap = aafb_set_cmap,
+ .fb_ioctl = aafb_ioctl
+};
+
+static int __init init_one(int slot)
+{
+ unsigned long base_addr = get_tc_base_addr(slot);
+ struct aafb_info *ip = &my_fb_info[slot];
+
+ memset(ip, 0, sizeof(struct aafb_info));
+
+ /*
+ * Framebuffer display memory base address and friends.
+ */
+ ip->bt455 = (struct bt455_regs *) (base_addr + PMAG_AA_BT455_OFFSET);
+ ip->bt431 = (struct bt431_regs *) (base_addr + PMAG_AA_BT431_OFFSET);
+ ip->fb_start = base_addr + PMAG_AA_ONBOARD_FBMEM_OFFSET;
+	ip->fb_size = 2048 * 1024;	/* fb_fix_screeninfo.smem_len
+ seems to be physical */
+ ip->fb_line_length = 2048;
+
+ /*
+ * Let there be consoles..
+ */
+ strcpy(ip->info.modename, "PMAG-AA");
+ ip->info.node = -1;
+ ip->info.flags = FBINFO_FLAG_DEFAULT;
+ ip->info.fbops = &aafb_ops;
+ ip->info.disp = &ip->disp;
+ ip->info.changevar = NULL;
+ ip->info.switch_con = &aafb_switch;
+ ip->info.updatevar = &aafb_update_var;
+ ip->info.blank = &aafb_blank;
+
+ aafb_set_disp(&ip->disp, currcon, ip);
+
+ /*
+ * Configure the RAM DACs.
+ */
+ bt455_erase_cursor(ip->bt455);
+
+ /* Init colormap. */
+ bt455_write_cmap_entry(ip->bt455, 0, 0x00, 0x00, 0x00);
+ bt455_write_cmap_entry(ip->bt455, 1, 0x0f, 0x0f, 0x0f);
+
+ /* Init hardware cursor. */
+ bt431_init_cursor(ip->bt431);
+ aafb_cursor_init(ip);
+
+ /* Clear the screen. */
+ memset ((void *)ip->fb_start, 0, ip->fb_size);
+
+ if (register_framebuffer(&ip->info) < 0)
+ return -EINVAL;
+
+ printk(KERN_INFO "fb%d: %s frame buffer in TC slot %d\n",
+ GET_FB_IDX(ip->info.node), ip->info.modename, slot);
+
+ return 0;
+}
+
+static int __exit exit_one(int slot)
+{
+ struct aafb_info *ip = &my_fb_info[slot];
+
+ if (unregister_framebuffer(&ip->info) < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+/*
+ * Initialise the framebuffer.
+ */
+int __init pmagaafb_init(void)
+{
+ int sid;
+ int found = 0;
+
+ while ((sid = search_tc_card("PMAG-AA")) >= 0) {
+ found = 1;
+ claim_tc_card(sid);
+ init_one(sid);
+ }
+
+ return found ? 0 : -ENXIO;
+}
+
+static void __exit pmagaafb_exit(void)
+{
+ int sid;
+
+ while ((sid = search_tc_card("PMAG-AA")) >= 0) {
+ exit_one(sid);
+ release_tc_card(sid);
+ }
+}
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
+MODULE_LICENSE("GPL");
+#ifdef MODULE
+module_init(pmagaafb_init);
+module_exit(pmagaafb_exit);
+#endif
static struct fb_info pmagba_fb_info[3];
static struct fb_var_screeninfo pmagbafb_defined = {
- .xres = 1024,
+ .xres = 1024,
.yres = 864,
- .xres_virtual = 1024,
- .yres_virtual = 864,
- .bits_per_pixel = 8,
+ .xres_virtual = 1024,
+ .yres_virtual = 864,
+ .bits_per_pixel = 8,
.red.length = 8,
.green.length = 8,
.blue.length = 8,
- .activate = FB_ACTIVATE_NOW,
- .height = 274,
- .width = 195,
- .accel = FB_ACCEL_NONE,
- .vmode = FB_VMODE_NONINTERLACED,
+ .activate = FB_ACTIVATE_NOW,
+ .height = 274,
+ .width = 195,
+ .accel = FB_ACCEL_NONE,
+ .vmode = FB_VMODE_NONINTERLACED,
};
static struct fb_fix_screeninfo pmagbafb_fix = {
- .id = "PMAG-BA",
- .smem_len = (1024 * 864),
- .type = FB_TYPE_PACKED_PIXELS,
- .visual = FB_VISUAL_PSEUDOCOLOR,
- .line_length = 1024,
+ .id = "PMAG-BA",
+ .smem_len = (1024 * 864),
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_PSEUDOCOLOR,
+ .line_length = 1024,
};
/*
* Set the palette.
*/
static int pmagbafb_setcolreg(unsigned regno, unsigned red, unsigned green,
- unsigned blue, unsigned transp,
- struct fb_info *info)
+ unsigned blue, unsigned transp,
+ struct fb_info *info)
{
- struct pmag_ba_ramdac_regs *bt459_regs = (struct pmag_ba_ramdac_regs *) info->par;
+ struct pmag_ba_ramdac_regs *bt459_regs = (struct pmag_ba_ramdac_regs *) info->par;
if (regno >= info->cmap.len)
return 1;
static struct fb_ops pmagbafb_ops = {
.owner = THIS_MODULE,
+ .fb_get_fix = gen_get_fix,
+ .fb_get_var = gen_get_var,
.fb_setcolreg = pmagbafb_setcolreg,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
{
unsigned long base_addr = get_tc_base_addr(slot);
struct fb_info *info = &pmagba_fb_info[slot];
+ struct display *disp = &pmagba_disp[slot];
printk("PMAG-BA framebuffer in slot %d\n", slot);
/*
info->flags = FBINFO_DEFAULT;
fb_alloc_cmap(&fb_info.cmap, 256, 0);
-
+
if (register_framebuffer(info) < 0)
return 1;
return 0;
static struct fb_info pmagbb_fb_info[3];
static struct fb_var_screeninfo pmagbbfb_defined = {
- .xres = 1280,
- .yres = 1024,
- .xres_virtual = 1280,
- .yres_virtual = 1024,
- .bits_per_pixel = 8,
+ .xres = 1280,
+ .yres = 1024,
+ .xres_virtual = 1280,
+ .yres_virtual = 1024,
+ .bits_per_pixel = 8,
.red.length = 8,
.green.length = 8,
.blue.length = 8,
- .activate = FB_ACTIVATE_NOW,
- .height = 274,
- .width = 195,
- .accel_flags = FB_ACCEL_NONE,
- .vmode = FB_VMODE_NONINTERLACED,
+ .activate = FB_ACTIVATE_NOW,
+ .height = 274,
+ .width = 195,
+ .accel_flags = FB_ACCEL_NONE,
+ .vmode = FB_VMODE_NONINTERLACED,
};
static struct fb_fix_screeninfo pmagbafb_fix = {
- .id = "PMAGB-BA",
- .smem_len = (1280 * 1024),
- .type = FB_TYPE_PACKED_PIXELS,
- .visual = FB_VISUAL_PSEUDOCOLOR,
- .line_length = 1280,
+ .id = "PMAGB-BA",
+ .smem_len = (1280 * 1024),
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_PSEUDOCOLOR,
+ .line_length = 1280,
};
/*
return 0;
}
+static int pxafb_mmap(struct fb_info *info, struct file *file,
+ struct vm_area_struct *vma)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+ unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
+
+ if (off < info->fix.smem_len) {
+ vma->vm_pgoff += 1;
+ return dma_mmap_writecombine(fbi->dev, vma, fbi->map_cpu,
+ fbi->map_dma, fbi->map_size);
+ }
+ return -EINVAL;
+}
+
static struct fb_ops pxafb_ops = {
.owner = THIS_MODULE,
.fb_check_var = pxafb_check_var,
.fb_imageblit = cfb_imageblit,
.fb_blank = pxafb_blank,
.fb_cursor = soft_cursor,
+ .fb_mmap = pxafb_mmap,
};
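
With fb_mmap wired up, userspace maps the framebuffer through the standard
fbdev interface; a minimal sketch of such a client (the device node is
assumed to be /dev/fb0):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fb.h>

    /* Map the whole framebuffer of /dev/fb0; returns NULL on failure. */
    static void *map_fb(size_t *len)
    {
            struct fb_fix_screeninfo fix;
            void *fb;
            int fd = open("/dev/fb0", O_RDWR);

            if (fd < 0)
                    return NULL;
            if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
                    close(fd);
                    return NULL;
            }
            fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
            close(fd);      /* the mapping stays valid after close */
            *len = fix.smem_len;
            return fb == MAP_FAILED ? NULL : fb;
    }
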
/*
DPRINTK("Disabling LCD controller\n");
- add_wait_queue(&fbi->ctrlr_wait, &wait);
set_current_state(TASK_UNINTERRUPTIBLE);
+ add_wait_queue(&fbi->ctrlr_wait, &wait);
LCSR = 0xffffffff; /* Clear LCD Status Register */
LCCR0 &= ~LCCR0_LDM; /* Enable LCD Disable Done Interrupt */
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/fb.h>
+#include <linux/jiffies.h>
#include <asm/io.h>
chan->algo.getsda = riva_gpio_getsda;
chan->algo.getscl = riva_gpio_getscl;
chan->algo.udelay = 40;
- chan->algo.mdelay = 5;
- chan->algo.timeout = 20;
+ chan->algo.timeout = msecs_to_jiffies(2);
chan->algo.data = chan;
i2c_set_adapdata(&chan->adapter, chan);
void riva_create_i2c_busses(struct riva_par *par)
{
+ par->bus = 3;
+
par->chan[0].par = par;
par->chan[1].par = par;
par->chan[2].par = par;
- par->bus = 0;
-
- switch ((par->pdev->device >> 4) & 0xff) {
- case 0x17:
- case 0x18:
- case 0x25:
- case 0x28:
- case 0x30:
- case 0x31:
- case 0x32:
- case 0x33:
- case 0x34:
- par->chan[2].ddc_base = 0x50;
- par->bus++;
- riva_setup_i2c_bus(&par->chan[2], "BUS3");
- case 0x04:
- case 0x05:
- case 0x10:
- case 0x11:
- case 0x15:
- case 0x20:
- par->chan[1].ddc_base = 0x36;
- par->bus++;
- riva_setup_i2c_bus(&par->chan[1], "BUS2");
- case 0x03:
- par->chan[0].ddc_base = 0x3e;
- par->bus++;
- riva_setup_i2c_bus(&par->chan[0], "BUS1");
- }
+ par->chan[0].ddc_base = 0x3e;
+ par->chan[1].ddc_base = 0x36;
+ par->chan[2].ddc_base = 0x50;
+ riva_setup_i2c_bus(&par->chan[0], "BUS1");
+ riva_setup_i2c_bus(&par->chan[1], "BUS2");
+ riva_setup_i2c_bus(&par->chan[2], "BUS3");
}
void riva_delete_i2c_busses(struct riva_par *par)
i2c_bit_del_bus(&par->chan[1].adapter);
par->chan[1].par = NULL;
+ if (par->chan[2].par)
+ i2c_bit_del_bus(&par->chan[2].adapter);
+ par->chan[2].par = NULL;
}
static u8 *riva_do_probe_i2c_edid(struct riva_i2c_chan *chan)
udelay(20);
rc = add_bus(&chan->adapter);
+
if (rc == 0)
dev_dbg(&chan->par->pcidev->dev,
"I2C bus %s registered.\n", name);
else
dev_warn(&chan->par->pcidev->dev,
"Failed to register I2C bus %s.\n", name);
+
+ symbol_put(i2c_bit_add_bus);
} else
chan->par = NULL;
int (*del_bus)(struct i2c_adapter *) =
symbol_get(i2c_bit_del_bus);
- if (del_bus && par->chan.par)
+ if (del_bus && par->chan.par) {
del_bus(&par->chan.adapter);
+ symbol_put(i2c_bit_del_bus);
+ }
par->chan.par = NULL;
}
if (transfer && chan->par) {
buf = kmalloc(EDID_LENGTH, GFP_KERNEL);
+
if (buf) {
msgs[1].buf = buf;
buf = NULL;
}
}
+
+ symbol_put(i2c_transfer);
}
return buf;
struct fbcmap __user *c = (struct fbcmap __user *) arg;
struct fb_cmap cmap;
u16 red, green, blue;
+ u8 red8, green8, blue8;
unsigned char __user *ured;
unsigned char __user *ugreen;
unsigned char __user *ublue;
for (i = 0; i < count; i++) {
int err;
- if (get_user(red, &ured[i]) ||
- get_user(green, &ugreen[i]) ||
- get_user(blue, &ublue[i]))
+ if (get_user(red8, &ured[i]) ||
+ get_user(green8, &ugreen[i]) ||
+ get_user(blue8, &ublue[i]))
return -EFAULT;
+ red = red8 << 8;
+ green = green8 << 8;
+ blue = blue8 << 8;
+
cmap.start = index + i;
err = fb_set_cmap(&cmap, info);
if (err)
unsigned char __user *ublue;
struct fb_cmap *cmap = &info->cmap;
int index, count, i;
+ u8 red, green, blue;
if (get_user(index, &c->index) ||
__get_user(count, &c->count) ||
return -EINVAL;
for (i = 0; i < count; i++) {
- if (put_user(cmap->red[index + i], &ured[i]) ||
- put_user(cmap->green[index + i], &ugreen[i]) ||
- put_user(cmap->blue[index + i], &ublue[i]))
+ red = cmap->red[index + i] >> 8;
+ green = cmap->green[index + i] >> 8;
+ blue = cmap->blue[index + i] >> 8;
+ if (put_user(red, &ured[i]) ||
+ put_user(green, &ugreen[i]) ||
+ put_user(blue, &ublue[i]))
return -EFAULT;
}
return 0;
*
* Framebuffer for LCD controller in TMPR3912/05 and PR31700 processors
*/
-#include <linux/config.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
static struct fb_fix_screeninfo tx3912fb_fix __initdata = {
.id = "tx3912fb",
-#ifdef CONFIG_NINO_16MB
- .smem_len = (240 * 320),
-#else
.smem_len = ((240 * 320)/2),
-#endif
.type = FB_TYPE_PACKED_PIXELS,
.visual = FB_VISUAL_TRUECOLOR,
.xpanstep = 1,
.yres = 320,
.xres_virtual = 240,
.yres_virtual = 320,
-#ifdef CONFIG_NINO_16MB
- .bits_per_pixel =8,
- .red = { 5, 3, 0 }, /* RGB 332 */
- .green = { 2, 3, 0 },
- .blue = { 0, 2, 0 },
-#else
.bits_per_pixel =4,
.red = { 0, 4, 0 }, /* ??? */
.green = { 0, 4, 0 },
.blue = { 0, 4, 0 },
-#endif
.activate = FB_ACTIVATE_NOW,
.width = -1,
.height = -1,
--- /dev/null
+/*
+ * linux/drivers/video/w100fb.c
+ *
+ * Frame Buffer Device for ATI Imageon w100 (Wallaby)
+ *
+ * Copyright (C) 2002, ATI Corp.
+ * Copyright (C) 2004-2005 Richard Purdie
+ *
+ * Rewritten for 2.6 by Richard Purdie <rpurdie@rpsys.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/delay.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/string.h>
+#include <linux/proc_fs.h>
+#include <linux/vmalloc.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <video/w100fb.h>
+#include "w100fb.h"
+
+/*
+ * Prototypes
+ */
+static void w100fb_save_buffer(void);
+static void w100fb_clear_buffer(void);
+static void w100fb_restore_buffer(void);
+static void w100fb_clear_screen(u32 mode, long int offset);
+static void w100_resume(void);
+static void w100_suspend(u32 mode);
+static void w100_init_qvga_rotation(u16 deg);
+static void w100_init_vga_rotation(u16 deg);
+static void w100_vsync(void);
+static void w100_init_sharp_lcd(u32 mode);
+static void w100_pwm_setup(void);
+static void w100_InitExtMem(u32 mode);
+static void w100_hw_init(void);
+static u16 w100_set_fastsysclk(u16 Freq);
+
+static void lcdtg_hw_init(u32 mode);
+static void lcdtg_lcd_change(u32 mode);
+static void lcdtg_resume(void);
+static void lcdtg_suspend(void);
+
+
+/* Register offsets & lengths */
+#define REMAPPED_FB_LEN 0x15ffff
+
+#define BITS_PER_PIXEL 16
+
+/* Pseudo palette size */
+#define MAX_PALETTES 16
+
+/* for resolution change */
+#define LCD_MODE_INIT (-1)
+#define LCD_MODE_480 0
+#define LCD_MODE_320 1
+#define LCD_MODE_240 2
+#define LCD_MODE_640 3
+
+#define LCD_SHARP_QVGA 0
+#define LCD_SHARP_VGA 1
+
+#define LCD_MODE_PORTRAIT 0
+#define LCD_MODE_LANDSCAPE 1
+
+#define W100_SUSPEND_EXTMEM 0
+#define W100_SUSPEND_ALL 1
+
+/* General frame buffer data structures */
+struct w100fb_par {
+ u32 xres;
+ u32 yres;
+ int fastsysclk_mode;
+ int lcdMode;
+ int rotation_flag;
+ int blanking_flag;
+ int comadj;
+ int phadadj;
+};
+
+static struct w100fb_par *current_par;
+
+static u16 *gSaveImagePtr = NULL;
+
+/* Remapped addresses for base cfg, memmapped regs and the frame buffer itself */
+static void *remapped_base;
+static void *remapped_regs;
+static void *remapped_fbuf;
+
+/* External Function */
+static void(*w100fb_ssp_send)(u8 adrs, u8 data);
+
+/*
+ * Sysfs functions
+ */
+
+static ssize_t rotation_show(struct device *dev, char *buf)
+{
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ return sprintf(buf, "%d\n",par->rotation_flag);
+}
+
+static ssize_t rotation_store(struct device *dev, const char *buf, size_t count)
+{
+ unsigned int rotate;
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ rotate = simple_strtoul(buf, NULL, 10);
+
+ if (rotate > 0) par->rotation_flag = 1;
+ else par->rotation_flag = 0;
+
+ if (par->lcdMode == LCD_MODE_320)
+ w100_init_qvga_rotation(par->rotation_flag ? 270 : 90);
+ else if (par->lcdMode == LCD_MODE_240)
+ w100_init_qvga_rotation(par->rotation_flag ? 180 : 0);
+ else if (par->lcdMode == LCD_MODE_640)
+ w100_init_vga_rotation(par->rotation_flag ? 270 : 90);
+ else if (par->lcdMode == LCD_MODE_480)
+ w100_init_vga_rotation(par->rotation_flag ? 180 : 0);
+
+ return count;
+}
+
+static DEVICE_ATTR(rotation, 0644, rotation_show, rotation_store);
+
+static ssize_t w100fb_reg_read(struct device *dev, const char *buf, size_t count)
+{
+ unsigned long param;
+ unsigned long regs;
+ regs = simple_strtoul(buf, NULL, 16);
+ param = readl(remapped_regs + regs);
+ printk("Read Register 0x%08lX: 0x%08lX\n", regs, param);
+ return count;
+}
+
+static DEVICE_ATTR(reg_read, 0200, NULL, w100fb_reg_read);
+
+static ssize_t w100fb_reg_write(struct device *dev, const char *buf, size_t count)
+{
+ unsigned long regs;
+ unsigned long param;
+	sscanf(buf, "%lx %lx", &regs, &param);
+
+ if (regs <= 0x2000) {
+ printk("Write Register 0x%08lX: 0x%08lX\n", regs, param);
+ writel(param, remapped_regs + regs);
+ }
+
+ return count;
+}
+
+static DEVICE_ATTR(reg_write, 0200, NULL, w100fb_reg_write);
+
+
+static ssize_t fastsysclk_show(struct device *dev, char *buf)
+{
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ return sprintf(buf, "%d\n",par->fastsysclk_mode);
+}
+
+static ssize_t fastsysclk_store(struct device *dev, const char *buf, size_t count)
+{
+ int param;
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ param = simple_strtoul(buf, NULL, 10);
+
+ if (param == 75) {
+ printk("Set fastsysclk %d\n", param);
+ par->fastsysclk_mode = param;
+ w100_set_fastsysclk(par->fastsysclk_mode);
+ } else if (param == 100) {
+ printk("Set fastsysclk %d\n", param);
+ par->fastsysclk_mode = param;
+ w100_set_fastsysclk(par->fastsysclk_mode);
+ }
+ return count;
+}
+
+static DEVICE_ATTR(fastsysclk, 0644, fastsysclk_show, fastsysclk_store);
+
+/*
+ * The touchscreen on this device needs certain information
+ * from the video driver to function correctly. We export it here.
+ */
+int w100fb_get_xres(void) {
+ return current_par->xres;
+}
+
+int w100fb_get_blanking(void) {
+ return current_par->blanking_flag;
+}
+
+int w100fb_get_fastsysclk(void) {
+ return current_par->fastsysclk_mode;
+}
+EXPORT_SYMBOL(w100fb_get_xres);
+EXPORT_SYMBOL(w100fb_get_blanking);
+EXPORT_SYMBOL(w100fb_get_fastsysclk);
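
A minimal sketch of how the touchscreen driver mentioned above might consume
these exports; the helper name and ADC range are hypothetical:

    extern int w100fb_get_xres(void);
    extern int w100fb_get_blanking(void);

    /* Hypothetical helper: drop samples while the panel is blanked and
     * scale raw X against the current panel width. */
    static int ts_scale_x(int raw, int adc_max)
    {
            if (w100fb_get_blanking())
                    return -1;
            return raw * w100fb_get_xres() / adc_max;
    }
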
+
+
+/*
+ * Set a palette value from rgb components
+ */
+static int w100fb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int trans, struct fb_info *info)
+{
+ unsigned int val;
+ int ret = 1;
+
+ /*
+ * If greyscale is true, then we convert the RGB value
+ * to greyscale no matter what visual we are using.
+ */
+ if (info->var.grayscale)
+ red = green = blue = (19595 * red + 38470 * green + 7471 * blue) >> 16;
+
+ /*
+ * 16-bit True Colour. We encode the RGB value
+ * according to the RGB bitfield information.
+ */
+ if (regno < MAX_PALETTES) {
+
+ u32 *pal = info->pseudo_palette;
+
+ val = (red & 0xf800) | ((green & 0xfc00) >> 5) | ((blue & 0xf800) >> 11);
+ pal[regno] = val;
+ ret = 0;
+ }
+ return ret;
+}
+
+
+/*
+ * Blank the display based on value in blank_mode
+ */
+static int w100fb_blank(int blank_mode, struct fb_info *info)
+{
+ struct w100fb_par *par;
+ par=info->par;
+
+ switch(blank_mode) {
+
+ case FB_BLANK_NORMAL: /* Normal blanking */
+ case FB_BLANK_VSYNC_SUSPEND: /* VESA blank (vsync off) */
+ case FB_BLANK_HSYNC_SUSPEND: /* VESA blank (hsync off) */
+ case FB_BLANK_POWERDOWN: /* Poweroff */
+ if (par->blanking_flag == 0) {
+ w100fb_save_buffer();
+ lcdtg_suspend();
+ par->blanking_flag = 1;
+ }
+ break;
+
+ case FB_BLANK_UNBLANK: /* Unblanking */
+ if (par->blanking_flag != 0) {
+ w100fb_restore_buffer();
+ lcdtg_resume();
+ par->blanking_flag = 0;
+ }
+ break;
+ }
+ return 0;
+}
+
+/*
+ * Change the resolution by calling the appropriate hardware functions
+ */
+static void w100fb_changeres(int rotate_mode, u32 mode)
+{
+ u16 rotation=0;
+
+ switch(rotate_mode) {
+ case LCD_MODE_LANDSCAPE:
+ rotation=(current_par->rotation_flag ? 270 : 90);
+ break;
+ case LCD_MODE_PORTRAIT:
+ rotation=(current_par->rotation_flag ? 180 : 0);
+ break;
+ }
+
+ w100_pwm_setup();
+ switch(mode) {
+ case LCD_SHARP_QVGA:
+ w100_vsync();
+ w100_suspend(W100_SUSPEND_EXTMEM);
+ w100_init_sharp_lcd(LCD_SHARP_QVGA);
+ w100_init_qvga_rotation(rotation);
+ w100_InitExtMem(LCD_SHARP_QVGA);
+ w100fb_clear_screen(LCD_SHARP_QVGA, 0);
+ lcdtg_lcd_change(LCD_SHARP_QVGA);
+ break;
+ case LCD_SHARP_VGA:
+ w100fb_clear_screen(LCD_SHARP_QVGA, 0);
+ writel(0xBFFFA000, remapped_regs + mmMC_EXT_MEM_LOCATION);
+ w100_InitExtMem(LCD_SHARP_VGA);
+ w100fb_clear_screen(LCD_SHARP_VGA, 0x200000);
+ w100_vsync();
+ w100_init_sharp_lcd(LCD_SHARP_VGA);
+ if (rotation != 0)
+ w100_init_vga_rotation(rotation);
+ lcdtg_lcd_change(LCD_SHARP_VGA);
+ break;
+ }
+}
+
+/*
+ * Set up the display for the fb subsystem
+ */
+static void w100fb_activate_var(struct fb_info *info)
+{
+ u32 temp32;
+ struct w100fb_par *par=info->par;
+ struct fb_var_screeninfo *var = &info->var;
+
+ /* Set the hardware to 565 */
+ temp32 = readl(remapped_regs + mmDISP_DEBUG2);
+ temp32 &= 0xff7fffff;
+ temp32 |= 0x00800000;
+ writel(temp32, remapped_regs + mmDISP_DEBUG2);
+
+ if (par->lcdMode == LCD_MODE_INIT) {
+ w100_init_sharp_lcd(LCD_SHARP_VGA);
+ w100_init_vga_rotation(par->rotation_flag ? 270 : 90);
+ par->lcdMode = LCD_MODE_640;
+ lcdtg_hw_init(LCD_SHARP_VGA);
+ } else if (var->xres == 320 && var->yres == 240) {
+ if (par->lcdMode != LCD_MODE_320) {
+ w100fb_changeres(LCD_MODE_LANDSCAPE, LCD_SHARP_QVGA);
+ par->lcdMode = LCD_MODE_320;
+ }
+ } else if (var->xres == 240 && var->yres == 320) {
+ if (par->lcdMode != LCD_MODE_240) {
+ w100fb_changeres(LCD_MODE_PORTRAIT, LCD_SHARP_QVGA);
+ par->lcdMode = LCD_MODE_240;
+ }
+ } else if (var->xres == 640 && var->yres == 480) {
+ if (par->lcdMode != LCD_MODE_640) {
+ w100fb_changeres(LCD_MODE_LANDSCAPE, LCD_SHARP_VGA);
+ par->lcdMode = LCD_MODE_640;
+ }
+ } else if (var->xres == 480 && var->yres == 640) {
+ if (par->lcdMode != LCD_MODE_480) {
+ w100fb_changeres(LCD_MODE_PORTRAIT, LCD_SHARP_VGA);
+ par->lcdMode = LCD_MODE_480;
+ }
+ } else printk(KERN_ERR "W100FB: Resolution error!\n");
+}
+
+
+/*
+ * w100fb_check_var():
+ *  Get the video params out of 'var'. If a value doesn't fit, round it up;
+ * if it's too big, return -EINVAL.
+ *
+ */
+static int w100fb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+ if (var->xres < var->yres) { /* Portrait mode */
+ if ((var->xres > 480) || (var->yres > 640)) {
+ return -EINVAL;
+ } else if ((var->xres > 240) || (var->yres > 320)) {
+ var->xres = 480;
+ var->yres = 640;
+ } else {
+ var->xres = 240;
+ var->yres = 320;
+ }
+ } else { /* Landscape mode */
+ if ((var->xres > 640) || (var->yres > 480)) {
+ return -EINVAL;
+ } else if ((var->xres > 320) || (var->yres > 240)) {
+ var->xres = 640;
+ var->yres = 480;
+ } else {
+ var->xres = 320;
+ var->yres = 240;
+ }
+ }
+
+ var->xres_virtual = max(var->xres_virtual, var->xres);
+ var->yres_virtual = max(var->yres_virtual, var->yres);
+
+ if (var->bits_per_pixel > BITS_PER_PIXEL)
+ return -EINVAL;
+ else
+ var->bits_per_pixel = BITS_PER_PIXEL;
+
+ var->red.offset = 11;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 6;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = var->transp.length = 0;
+
+ var->nonstd = 0;
+
+ var->height = -1;
+ var->width = -1;
+ var->vmode = FB_VMODE_NONINTERLACED;
+
+ var->sync = 0;
+ var->pixclock = 0x04; /* 171521; */
+
+ return 0;
+}
+
+
+/*
+ * w100fb_set_par():
+ * Set the user defined part of the display for the specified console
+ * by looking at the values in info.var
+ */
+static int w100fb_set_par(struct fb_info *info)
+{
+ struct w100fb_par *par=info->par;
+
+ par->xres = info->var.xres;
+ par->yres = info->var.yres;
+
+ info->fix.visual = FB_VISUAL_TRUECOLOR;
+
+ info->fix.ypanstep = 0;
+ info->fix.ywrapstep = 0;
+
+ if (par->blanking_flag)
+ w100fb_clear_buffer();
+
+ w100fb_activate_var(info);
+
+ if (par->lcdMode == LCD_MODE_480) {
+ info->fix.line_length = (480 * BITS_PER_PIXEL) / 8;
+ info->fix.smem_len = 0x200000;
+ } else if (par->lcdMode == LCD_MODE_320) {
+ info->fix.line_length = (320 * BITS_PER_PIXEL) / 8;
+ info->fix.smem_len = 0x60000;
+ } else if (par->lcdMode == LCD_MODE_240) {
+ info->fix.line_length = (240 * BITS_PER_PIXEL) / 8;
+ info->fix.smem_len = 0x60000;
+ } else if (par->lcdMode == LCD_MODE_INIT || par->lcdMode == LCD_MODE_640) {
+ info->fix.line_length = (640 * BITS_PER_PIXEL) / 8;
+ info->fix.smem_len = 0x200000;
+ }
+
+ return 0;
+}
+
+
+/*
+ * Frame buffer operations
+ */
+static struct fb_ops w100fb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = w100fb_check_var,
+ .fb_set_par = w100fb_set_par,
+ .fb_setcolreg = w100fb_setcolreg,
+ .fb_blank = w100fb_blank,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_cursor = soft_cursor,
+};
+
+
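+/*
+ * Fill the visible framebuffer at 'offset' with white (0xffff in RGB565)
+ * for the given LCD mode.
+ */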
+static void w100fb_clear_screen(u32 mode, long int offset)
+{
+ int i, numPix = 0;
+
+ if (mode == LCD_SHARP_VGA)
+ numPix = 640 * 480;
+ else if (mode == LCD_SHARP_QVGA)
+ numPix = 320 * 240;
+
+ for (i = 0; i < numPix; i++)
+ writew(0xffff, remapped_fbuf + offset + (2*i));
+}
+
+
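+/*
+ * Copy the current framebuffer contents into a vmalloc'd buffer so they
+ * can be restored after a suspend/resume cycle.
+ */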
+static void w100fb_save_buffer(void)
+{
+ int i;
+
+ if (gSaveImagePtr != NULL) {
+ vfree(gSaveImagePtr);
+ gSaveImagePtr = NULL;
+ }
+ gSaveImagePtr = vmalloc(current_par->xres * current_par->yres * BITS_PER_PIXEL / 8);
+ if (gSaveImagePtr != NULL) {
+ for (i = 0; i < (current_par->xres * current_par->yres); i++)
+ *(gSaveImagePtr + i) = readw(remapped_fbuf + (2*i));
+ } else {
+ printk(KERN_WARNING "can't alloc pre-off image buffer\n");
+ }
+}
+
+
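+/*
+ * Write the saved image back to the framebuffer and free the save buffer.
+ */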
+static void w100fb_restore_buffer(void)
+{
+ int i;
+
+ if (gSaveImagePtr != NULL) {
+ for (i = 0; i < (current_par->xres * current_par->yres); i++) {
+ writew(*(gSaveImagePtr + i),remapped_fbuf + (2*i));
+ }
+ vfree(gSaveImagePtr);
+ gSaveImagePtr = NULL;
+ }
+}
+
+static void w100fb_clear_buffer(void)
+{
+ if (gSaveImagePtr != NULL) {
+ vfree(gSaveImagePtr);
+ gSaveImagePtr = NULL;
+ }
+}
+
+
+#ifdef CONFIG_PM
+static int w100fb_suspend(struct device *dev, u32 state, u32 level)
+{
+ if (level == SUSPEND_POWER_DOWN) {
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ w100fb_save_buffer();
+ lcdtg_suspend();
+ w100_suspend(W100_SUSPEND_ALL);
+ par->blanking_flag = 1;
+ }
+ return 0;
+}
+
+static int w100fb_resume(struct device *dev, u32 level)
+{
+ if (level == RESUME_POWER_ON) {
+ struct fb_info *info = dev_get_drvdata(dev);
+ struct w100fb_par *par=info->par;
+
+ w100_resume();
+ w100fb_restore_buffer();
+ lcdtg_resume();
+ par->blanking_flag = 0;
+ }
+ return 0;
+}
+#else
+#define w100fb_suspend NULL
+#define w100fb_resume NULL
+#endif
+
+
+int __init w100fb_probe(struct device *dev)
+{
+ struct w100fb_mach_info *inf;
+ struct fb_info *info;
+ struct w100fb_par *par;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
+ if (!mem)
+ return -EINVAL;
+
+ /* remap the areas we're going to use */
+ remapped_base = ioremap_nocache(mem->start+W100_CFG_BASE, W100_CFG_LEN);
+ if (remapped_base == NULL)
+ return -EIO;
+
+ remapped_regs = ioremap_nocache(mem->start+W100_REG_BASE, W100_REG_LEN);
+ if (remapped_regs == NULL) {
+ iounmap(remapped_base);
+ return -EIO;
+ }
+
+ remapped_fbuf = ioremap_nocache(mem->start+MEM_EXT_BASE_VALUE, REMAPPED_FB_LEN);
+ if (remapped_fbuf == NULL) {
+ iounmap(remapped_base);
+ iounmap(remapped_regs);
+ return -EIO;
+ }
+
+ info=framebuffer_alloc(sizeof(struct w100fb_par), dev);
+ if (!info) {
+ iounmap(remapped_base);
+ iounmap(remapped_regs);
+ iounmap(remapped_fbuf);
+ return -ENOMEM;
+ }
+
+ info->device=dev;
+ par = info->par;
+ current_par=info->par;
+ dev_set_drvdata(dev, info);
+
+ inf = dev->platform_data;
+ par->phadadj = inf->phadadj;
+ par->comadj = inf->comadj;
+ par->fastsysclk_mode = 75;
+ par->lcdMode = LCD_MODE_INIT;
+ par->rotation_flag=0;
+ par->blanking_flag=0;
+ w100fb_ssp_send = inf->w100fb_ssp_send;
+
+ w100_hw_init();
+ w100_pwm_setup();
+
+ info->pseudo_palette = kmalloc(sizeof (u32) * MAX_PALETTES, GFP_KERNEL);
+ if (!info->pseudo_palette) {
+ iounmap(remapped_base);
+ iounmap(remapped_regs);
+ iounmap(remapped_fbuf);
+ return -ENOMEM;
+ }
+
+ info->fbops = &w100fb_ops;
+ info->flags = FBINFO_DEFAULT;
+ info->node = -1;
+ info->screen_base = remapped_fbuf;
+ info->screen_size = REMAPPED_FB_LEN;
+
+ info->var.xres = 640;
+ info->var.xres_virtual = info->var.xres;
+ info->var.yres = 480;
+ info->var.yres_virtual = info->var.yres;
+ info->var.pixclock = 0x04; /* 171521; */
+ info->var.sync = 0;
+ info->var.grayscale = 0;
+ info->var.xoffset = info->var.yoffset = 0;
+ info->var.accel_flags = 0;
+ info->var.activate = FB_ACTIVATE_NOW;
+
+ strcpy(info->fix.id, "w100fb");
+ info->fix.type = FB_TYPE_PACKED_PIXELS;
+ info->fix.type_aux = 0;
+ info->fix.accel = FB_ACCEL_NONE;
+ info->fix.smem_start = mem->start+MEM_EXT_BASE_VALUE;
+ info->fix.mmio_start = mem->start+W100_REG_BASE;
+ info->fix.mmio_len = W100_REG_LEN;
+
+ w100fb_check_var(&info->var, info);
+ w100fb_set_par(info);
+
+ if (register_framebuffer(info) < 0) {
+ kfree(info->pseudo_palette);
+ iounmap(remapped_base);
+ iounmap(remapped_regs);
+ iounmap(remapped_fbuf);
+ return -EINVAL;
+ }
+
+ device_create_file(dev, &dev_attr_fastsysclk);
+ device_create_file(dev, &dev_attr_reg_read);
+ device_create_file(dev, &dev_attr_reg_write);
+ device_create_file(dev, &dev_attr_rotation);
+
+ printk(KERN_INFO "fb%d: %s frame buffer device\n", info->node, info->fix.id);
+ return 0;
+}
+
+
+static int w100fb_remove(struct device *dev)
+{
+ struct fb_info *info = dev_get_drvdata(dev);
+
+ device_remove_file(dev, &dev_attr_fastsysclk);
+ device_remove_file(dev, &dev_attr_reg_read);
+ device_remove_file(dev, &dev_attr_reg_write);
+ device_remove_file(dev, &dev_attr_rotation);
+
+ unregister_framebuffer(info);
+
+ w100fb_clear_buffer();
+ kfree(info->pseudo_palette);
+
+ iounmap(remapped_base);
+ iounmap(remapped_regs);
+ iounmap(remapped_fbuf);
+
+ framebuffer_release(info);
+
+ return 0;
+}
+
+
+/* ------------------- chipset specific functions -------------------------- */
+
+
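+/*
+ * Pulse the soft reset bit in the configuration status register.
+ */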
+static void w100_soft_reset(void)
+{
+ u16 val = readw((u16 *) remapped_base + cfgSTATUS);
+ writew(val | 0x08, (u16 *) remapped_base + cfgSTATUS);
+ udelay(100);
+ writew(0x00, (u16 *) remapped_base + cfgSTATUS);
+ udelay(100);
+}
+
+/*
+ * Initialization of critical w100 hardware
+ */
+static void w100_hw_init(void)
+{
+ u32 temp32;
+ union cif_cntl_u cif_cntl;
+ union intf_cntl_u intf_cntl;
+ union cfgreg_base_u cfgreg_base;
+ union wrap_top_dir_u wrap_top_dir;
+ union cif_read_dbg_u cif_read_dbg;
+ union cpu_defaults_u cpu_default;
+ union cif_write_dbg_u cif_write_dbg;
+ union wrap_start_dir_u wrap_start_dir;
+ union mc_ext_mem_location_u mc_ext_mem_loc;
+ union cif_io_u cif_io;
+
+ w100_soft_reset();
+
+ /* This is what the fpga_init code does on reset. May be wrong
+ but there is little info available */
+ writel(0x31, remapped_regs + mmSCRATCH_UMSK);
+ for (temp32 = 0; temp32 < 10000; temp32++)
+ readl(remapped_regs + mmSCRATCH_UMSK);
+ writel(0x30, remapped_regs + mmSCRATCH_UMSK);
+
+ /* Set up CIF */
+ cif_io.val = defCIF_IO;
+ writel((u32)(cif_io.val), remapped_regs + mmCIF_IO);
+
+ cif_write_dbg.val = readl(remapped_regs + mmCIF_WRITE_DBG);
+ cif_write_dbg.f.dis_packer_ful_during_rbbm_timeout = 0;
+ cif_write_dbg.f.en_dword_split_to_rbbm = 1;
+ cif_write_dbg.f.dis_timeout_during_rbbm = 1;
+ writel((u32) (cif_write_dbg.val), remapped_regs + mmCIF_WRITE_DBG);
+
+ cif_read_dbg.val = readl(remapped_regs + mmCIF_READ_DBG);
+ cif_read_dbg.f.dis_rd_same_byte_to_trig_fetch = 1;
+ writel((u32) (cif_read_dbg.val), remapped_regs + mmCIF_READ_DBG);
+
+ cif_cntl.val = readl(remapped_regs + mmCIF_CNTL);
+ cif_cntl.f.dis_system_bits = 1;
+ cif_cntl.f.dis_mr = 1;
+ cif_cntl.f.en_wait_to_compensate_dq_prop_dly = 0;
+ cif_cntl.f.intb_oe = 1;
+ cif_cntl.f.interrupt_active_high = 1;
+ writel((u32) (cif_cntl.val), remapped_regs + mmCIF_CNTL);
+
+ /* Setup cfgINTF_CNTL and cfgCPU defaults */
+ intf_cntl.val = defINTF_CNTL;
+ intf_cntl.f.ad_inc_a = 1;
+ intf_cntl.f.ad_inc_b = 1;
+ intf_cntl.f.rd_data_rdy_a = 0;
+ intf_cntl.f.rd_data_rdy_b = 0;
+ writeb((u8) (intf_cntl.val), remapped_base + cfgINTF_CNTL);
+
+ cpu_default.val = defCPU_DEFAULTS;
+ cpu_default.f.access_ind_addr_a = 1;
+ cpu_default.f.access_ind_addr_b = 1;
+ cpu_default.f.access_scratch_reg = 1;
+ cpu_default.f.transition_size = 0;
+ writeb((u8) (cpu_default.val), remapped_base + cfgCPU_DEFAULTS);
+
+ /* set up the apertures */
+ writeb((u8) (W100_REG_BASE >> 16), remapped_base + cfgREG_BASE);
+
+ cfgreg_base.val = defCFGREG_BASE;
+ cfgreg_base.f.cfgreg_base = W100_CFG_BASE;
+ writel((u32) (cfgreg_base.val), remapped_regs + mmCFGREG_BASE);
+
+ /* This location is relative to internal w100 addresses */
+ writel(0x15FF1000, remapped_regs + mmMC_FB_LOCATION);
+
+ mc_ext_mem_loc.val = defMC_EXT_MEM_LOCATION;
+ mc_ext_mem_loc.f.mc_ext_mem_start = MEM_EXT_BASE_VALUE >> 8;
+ mc_ext_mem_loc.f.mc_ext_mem_top = MEM_EXT_TOP_VALUE >> 8;
+ writel((u32) (mc_ext_mem_loc.val), remapped_regs + mmMC_EXT_MEM_LOCATION);
+
+ if ((current_par->lcdMode == LCD_MODE_240) || (current_par->lcdMode == LCD_MODE_320))
+ w100_InitExtMem(LCD_SHARP_QVGA);
+ else
+ w100_InitExtMem(LCD_SHARP_VGA);
+
+ wrap_start_dir.val = defWRAP_START_DIR;
+ wrap_start_dir.f.start_addr = WRAP_BUF_BASE_VALUE >> 1;
+ writel((u32) (wrap_start_dir.val), remapped_regs + mmWRAP_START_DIR);
+
+ wrap_top_dir.val = defWRAP_TOP_DIR;
+ wrap_top_dir.f.top_addr = WRAP_BUF_TOP_VALUE >> 1;
+ writel((u32) (wrap_top_dir.val), remapped_regs + mmWRAP_TOP_DIR);
+
+ writel((u32) 0x2440, remapped_regs + mmRBBM_CNTL);
+}
+
+
+/*
+ * Types
+ */
+
+struct pll_parm {
+ u16 freq; /* desired Fout for PLL */
+ u8 M;
+ u8 N_int;
+ u8 N_fac;
+ u8 tfgoal;
+ u8 lock_time;
+};
+
+struct power_state {
+ union clk_pin_cntl_u clk_pin_cntl;
+ union pll_ref_fb_div_u pll_ref_fb_div;
+ union pll_cntl_u pll_cntl;
+ union sclk_cntl_u sclk_cntl;
+ union pclk_cntl_u pclk_cntl;
+ union clk_test_cntl_u clk_test_cntl;
+ union pwrmgt_cntl_u pwrmgt_cntl;
+ u32 freq; /* Fout for PLL calibration */
+ u8 tf100; /* for pll calibration */
+ u8 tf80; /* for pll calibration */
+ u8 tf20; /* for pll calibration */
+ u8 M; /* for pll calibration */
+ u8 N_int; /* for pll calibration */
+ u8 N_fac; /* for pll calibration */
+ u8 lock_time; /* for pll calibration */
+ u8 tfgoal; /* for pll calibration */
+ u8 auto_mode; /* hardware auto switch? */
+ u8 pwm_mode; /* 0 fast, 1 normal/slow */
+ u16 fast_sclk; /* fast clk freq */
+ u16 norm_sclk; /* slow clk freq */
+};
+
+
+/*
+ * Global state variables
+ */
+
+static struct power_state w100_pwr_state;
+
+/* This table is specific for 12.5MHz ref crystal. */
+static struct pll_parm gPLLTable[] = {
+ /*freq M N_int N_fac tfgoal lock_time */
+ { 50, 0, 1, 0, 0xE0, 56}, /* 50.00 MHz */
+ { 75, 0, 5, 0, 0xDE, 37}, /* 75.00 MHz */
+ {100, 0, 7, 0, 0xE0, 28}, /* 100.00 MHz */
+ {125, 0, 9, 0, 0xE0, 22}, /* 125.00 MHz */
+ {150, 0, 11, 0, 0xE0, 17}, /* 150.00 MHz */
+ { 0, 0, 0, 0, 0, 0} /* Terminator */
+};
+
+
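+/*
+ * Measure the selected test clock: reset the test counter, run a
+ * frequency check for ~20us and return the resulting count.
+ */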
+static u8 w100_pll_get_testcount(u8 testclk_sel)
+{
+ udelay(5);
+
+ w100_pwr_state.clk_test_cntl.f.start_check_freq = 0x0;
+ w100_pwr_state.clk_test_cntl.f.testclk_sel = testclk_sel;
+ w100_pwr_state.clk_test_cntl.f.tstcount_rst = 0x1; /*reset test count */
+ writel((u32) (w100_pwr_state.clk_test_cntl.val), remapped_regs + mmCLK_TEST_CNTL);
+ w100_pwr_state.clk_test_cntl.f.tstcount_rst = 0x0;
+ writel((u32) (w100_pwr_state.clk_test_cntl.val), remapped_regs + mmCLK_TEST_CNTL);
+
+ w100_pwr_state.clk_test_cntl.f.start_check_freq = 0x1;
+ writel((u32) (w100_pwr_state.clk_test_cntl.val), remapped_regs + mmCLK_TEST_CNTL);
+
+ udelay(20);
+
+ w100_pwr_state.clk_test_cntl.val = readl(remapped_regs + mmCLK_TEST_CNTL);
+ w100_pwr_state.clk_test_cntl.f.start_check_freq = 0x0;
+ writel((u32) (w100_pwr_state.clk_test_cntl.val), remapped_regs + mmCLK_TEST_CNTL);
+
+ return w100_pwr_state.clk_test_cntl.f.test_count;
+}
+
+
+static u8 w100_pll_adjust(void)
+{
+ do {
+		/* Note from Wai Ming: 80 percent of VDD (1.3V) gives 1.04V, but the
+		 * minimum operating voltage is 1.08V, so the following lines were
+		 * commented out and tf80 is used where tf100 was intended.
+		 * Set VCO input = 0.8 * VDD
+		 */
+ w100_pwr_state.pll_cntl.f.pll_dactal = 0xd;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ w100_pwr_state.tf80 = w100_pll_get_testcount(0x1); /* PLLCLK */
+ if (w100_pwr_state.tf80 >= (w100_pwr_state.tfgoal)) {
+ /* set VCO input = 0.2 * VDD */
+ w100_pwr_state.pll_cntl.f.pll_dactal = 0x7;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ w100_pwr_state.tf20 = w100_pll_get_testcount(0x1); /* PLLCLK */
+ if (w100_pwr_state.tf20 <= (w100_pwr_state.tfgoal))
+				return 1; /* success */
+
+ if ((w100_pwr_state.pll_cntl.f.pll_vcofr == 0x0) &&
+ ((w100_pwr_state.pll_cntl.f.pll_pvg == 0x7) ||
+ (w100_pwr_state.pll_cntl.f.pll_ioffset == 0x0))) {
+ /* slow VCO config */
+ w100_pwr_state.pll_cntl.f.pll_vcofr = 0x1;
+ w100_pwr_state.pll_cntl.f.pll_pvg = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_ioffset = 0x0;
+ writel((u32) (w100_pwr_state.pll_cntl.val),
+ remapped_regs + mmPLL_CNTL);
+ continue;
+ }
+ }
+ if ((w100_pwr_state.pll_cntl.f.pll_ioffset) < 0x3) {
+ w100_pwr_state.pll_cntl.f.pll_ioffset += 0x1;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+ continue;
+ }
+ if ((w100_pwr_state.pll_cntl.f.pll_pvg) < 0x7) {
+ w100_pwr_state.pll_cntl.f.pll_ioffset = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_pvg += 0x1;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+ continue;
+ }
+		return 0; /* error */
+ } while(1);
+}
+
+
+/*
+ * w100_pll_calibration
+ * freq = target frequency of the PLL
+ * (note: crystal = 12.5MHz)
+ */
+static u8 w100_pll_calibration(u32 freq)
+{
+	u8 status = 0;
+
+ /* initial setting */
+ w100_pwr_state.pll_cntl.f.pll_pwdn = 0x0; /* power down */
+ w100_pwr_state.pll_cntl.f.pll_reset = 0x0; /* not reset */
+ w100_pwr_state.pll_cntl.f.pll_tcpoff = 0x1; /* Hi-Z */
+ w100_pwr_state.pll_cntl.f.pll_pvg = 0x0; /* VCO gain = 0 */
+ w100_pwr_state.pll_cntl.f.pll_vcofr = 0x0; /* VCO frequency range control = off */
+ w100_pwr_state.pll_cntl.f.pll_ioffset = 0x0; /* current offset inside VCO = 0 */
+ w100_pwr_state.pll_cntl.f.pll_ring_off = 0x0;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+	/* check for (tf80 >= tfgoal) && (tf20 <= tfgoal) */
+ if ((w100_pwr_state.tf80 < w100_pwr_state.tfgoal) || (w100_pwr_state.tf20 > w100_pwr_state.tfgoal)) {
+ status=w100_pll_adjust();
+ }
+ /* PLL Reset And Lock */
+
+ /* set VCO input = 0.5 * VDD */
+ w100_pwr_state.pll_cntl.f.pll_dactal = 0xa;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ /* reset time */
+ udelay(1);
+
+ /* enable charge pump */
+ w100_pwr_state.pll_cntl.f.pll_tcpoff = 0x0; /* normal */
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ /* set VCO input = Hi-Z */
+ /* disable DAC */
+ w100_pwr_state.pll_cntl.f.pll_dactal = 0x0;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ /* lock time */
+ udelay(400); /* delay 400 us */
+
+ /* PLL locked */
+
+ w100_pwr_state.sclk_cntl.f.sclk_src_sel = 0x1; /* PLL clock */
+ writel((u32) (w100_pwr_state.sclk_cntl.val), remapped_regs + mmSCLK_CNTL);
+
+ w100_pwr_state.tf100 = w100_pll_get_testcount(0x1); /* PLLCLK */
+
+ return status;
+}
+
+
+static u8 w100_pll_set_clk(void)
+{
+ u8 status;
+
+ if (w100_pwr_state.auto_mode == 1) /* auto mode */
+ {
+ w100_pwr_state.pwrmgt_cntl.f.pwm_fast_noml_hw_en = 0x0; /* disable fast to normal */
+ w100_pwr_state.pwrmgt_cntl.f.pwm_noml_fast_hw_en = 0x0; /* disable normal to fast */
+ writel((u32) (w100_pwr_state.pwrmgt_cntl.val), remapped_regs + mmPWRMGT_CNTL);
+ }
+
+ w100_pwr_state.sclk_cntl.f.sclk_src_sel = 0x0; /* crystal clock */
+ writel((u32) (w100_pwr_state.sclk_cntl.val), remapped_regs + mmSCLK_CNTL);
+
+ w100_pwr_state.pll_ref_fb_div.f.pll_ref_div = w100_pwr_state.M;
+ w100_pwr_state.pll_ref_fb_div.f.pll_fb_div_int = w100_pwr_state.N_int;
+ w100_pwr_state.pll_ref_fb_div.f.pll_fb_div_frac = w100_pwr_state.N_fac;
+ w100_pwr_state.pll_ref_fb_div.f.pll_lock_time = w100_pwr_state.lock_time;
+ writel((u32) (w100_pwr_state.pll_ref_fb_div.val), remapped_regs + mmPLL_REF_FB_DIV);
+
+ w100_pwr_state.pwrmgt_cntl.f.pwm_mode_req = 0;
+ writel((u32) (w100_pwr_state.pwrmgt_cntl.val), remapped_regs + mmPWRMGT_CNTL);
+
+ status = w100_pll_calibration (w100_pwr_state.freq);
+
+ if (w100_pwr_state.auto_mode == 1) /* auto mode */
+ {
+ w100_pwr_state.pwrmgt_cntl.f.pwm_fast_noml_hw_en = 0x1; /* reenable fast to normal */
+ w100_pwr_state.pwrmgt_cntl.f.pwm_noml_fast_hw_en = 0x1; /* reenable normal to fast */
+ writel((u32) (w100_pwr_state.pwrmgt_cntl.val), remapped_regs + mmPWRMGT_CNTL);
+ }
+ return status;
+}
+
+
+/* assume reference crystal clk is 12.5MHz,
+ * and that doubling is not enabled.
+ *
+ * Freq = 12 == 12.5MHz.
+ */
+static u16 w100_set_slowsysclk(u16 freq)
+{
+ if (w100_pwr_state.norm_sclk == freq)
+ return freq;
+
+ if (w100_pwr_state.auto_mode == 1) /* auto mode */
+ return 0;
+
+ if (freq == 12) {
+ w100_pwr_state.norm_sclk = freq;
+ w100_pwr_state.sclk_cntl.f.sclk_post_div_slow = 0x0; /* Pslow = 1 */
+ w100_pwr_state.sclk_cntl.f.sclk_src_sel = 0x0; /* crystal src */
+
+ writel((u32) (w100_pwr_state.sclk_cntl.val), remapped_regs + mmSCLK_CNTL);
+
+ w100_pwr_state.clk_pin_cntl.f.xtalin_pm_en = 0x1;
+ writel((u32) (w100_pwr_state.clk_pin_cntl.val), remapped_regs + mmCLK_PIN_CNTL);
+
+ w100_pwr_state.pwrmgt_cntl.f.pwm_enable = 0x1;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_mode_req = 0x1;
+ writel((u32) (w100_pwr_state.pwrmgt_cntl.val), remapped_regs + mmPWRMGT_CNTL);
+ w100_pwr_state.pwm_mode = 1; /* normal mode */
+ return freq;
+ } else
+ return 0;
+}
+
+
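+/*
+ * Program the fast system clock. The PLL target is looked up in gPLLTable;
+ * if no entry matches, the fast post divider is reduced and the search is
+ * retried. Returns the frequency on success, 0 on failure.
+ */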
+static u16 w100_set_fastsysclk(u16 freq)
+{
+ u16 pll_freq;
+ int i;
+
+ while(1) {
+ pll_freq = (u16) (freq * (w100_pwr_state.sclk_cntl.f.sclk_post_div_fast + 1));
+ i = 0;
+ do {
+ if (pll_freq == gPLLTable[i].freq) {
+ w100_pwr_state.freq = gPLLTable[i].freq * 1000000;
+ w100_pwr_state.M = gPLLTable[i].M;
+ w100_pwr_state.N_int = gPLLTable[i].N_int;
+ w100_pwr_state.N_fac = gPLLTable[i].N_fac;
+ w100_pwr_state.tfgoal = gPLLTable[i].tfgoal;
+ w100_pwr_state.lock_time = gPLLTable[i].lock_time;
+ w100_pwr_state.tf20 = 0xff; /* set highest */
+ w100_pwr_state.tf80 = 0x00; /* set lowest */
+
+ w100_pll_set_clk();
+ w100_pwr_state.pwm_mode = 0; /* fast mode */
+ w100_pwr_state.fast_sclk = freq;
+ return freq;
+ }
+ i++;
+ } while(gPLLTable[i].freq);
+
+ if (w100_pwr_state.auto_mode == 1)
+ break;
+
+ if (w100_pwr_state.sclk_cntl.f.sclk_post_div_fast == 0)
+ break;
+
+ w100_pwr_state.sclk_cntl.f.sclk_post_div_fast -= 1;
+ writel((u32) (w100_pwr_state.sclk_cntl.val), remapped_regs + mmSCLK_CNTL);
+ }
+ return 0;
+}
+
+
+/* Set up an initial state. Some values/fields set
+ here will be overwritten. */
+static void w100_pwm_setup(void)
+{
+ w100_pwr_state.clk_pin_cntl.f.osc_en = 0x1;
+ w100_pwr_state.clk_pin_cntl.f.osc_gain = 0x1f;
+ w100_pwr_state.clk_pin_cntl.f.dont_use_xtalin = 0x0;
+ w100_pwr_state.clk_pin_cntl.f.xtalin_pm_en = 0x0;
+ w100_pwr_state.clk_pin_cntl.f.xtalin_dbl_en = 0x0; /* no freq doubling */
+ w100_pwr_state.clk_pin_cntl.f.cg_debug = 0x0;
+ writel((u32) (w100_pwr_state.clk_pin_cntl.val), remapped_regs + mmCLK_PIN_CNTL);
+
+ w100_pwr_state.sclk_cntl.f.sclk_src_sel = 0x0; /* Crystal Clk */
+ w100_pwr_state.sclk_cntl.f.sclk_post_div_fast = 0x0; /* Pfast = 1 */
+ w100_pwr_state.sclk_cntl.f.sclk_clkon_hys = 0x3;
+ w100_pwr_state.sclk_cntl.f.sclk_post_div_slow = 0x0; /* Pslow = 1 */
+ w100_pwr_state.sclk_cntl.f.disp_cg_ok2switch_en = 0x0;
+ w100_pwr_state.sclk_cntl.f.sclk_force_reg = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_disp = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_mc = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_extmc = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_cp = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_e2 = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_e3 = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_idct = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.sclk_force_bist = 0x0; /* Dynamic */
+ w100_pwr_state.sclk_cntl.f.busy_extend_cp = 0x0;
+ w100_pwr_state.sclk_cntl.f.busy_extend_e2 = 0x0;
+ w100_pwr_state.sclk_cntl.f.busy_extend_e3 = 0x0;
+ w100_pwr_state.sclk_cntl.f.busy_extend_idct = 0x0;
+ writel((u32) (w100_pwr_state.sclk_cntl.val), remapped_regs + mmSCLK_CNTL);
+
+ w100_pwr_state.pclk_cntl.f.pclk_src_sel = 0x0; /* Crystal Clk */
+ w100_pwr_state.pclk_cntl.f.pclk_post_div = 0x1; /* P = 2 */
+ w100_pwr_state.pclk_cntl.f.pclk_force_disp = 0x0; /* Dynamic */
+ writel((u32) (w100_pwr_state.pclk_cntl.val), remapped_regs + mmPCLK_CNTL);
+
+ w100_pwr_state.pll_ref_fb_div.f.pll_ref_div = 0x0; /* M = 1 */
+ w100_pwr_state.pll_ref_fb_div.f.pll_fb_div_int = 0x0; /* N = 1.0 */
+ w100_pwr_state.pll_ref_fb_div.f.pll_fb_div_frac = 0x0;
+ w100_pwr_state.pll_ref_fb_div.f.pll_reset_time = 0x5;
+ w100_pwr_state.pll_ref_fb_div.f.pll_lock_time = 0xff;
+ writel((u32) (w100_pwr_state.pll_ref_fb_div.val), remapped_regs + mmPLL_REF_FB_DIV);
+
+ w100_pwr_state.pll_cntl.f.pll_pwdn = 0x1;
+ w100_pwr_state.pll_cntl.f.pll_reset = 0x1;
+ w100_pwr_state.pll_cntl.f.pll_pm_en = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_mode = 0x0; /* uses VCO clock */
+ w100_pwr_state.pll_cntl.f.pll_refclk_sel = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_fbclk_sel = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_tcpoff = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_pcp = 0x4;
+ w100_pwr_state.pll_cntl.f.pll_pvg = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_vcofr = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_ioffset = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_pecc_mode = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_pecc_scon = 0x0;
+ w100_pwr_state.pll_cntl.f.pll_dactal = 0x0; /* Hi-Z */
+ w100_pwr_state.pll_cntl.f.pll_cp_clip = 0x3;
+ w100_pwr_state.pll_cntl.f.pll_conf = 0x2;
+ w100_pwr_state.pll_cntl.f.pll_mbctrl = 0x2;
+ w100_pwr_state.pll_cntl.f.pll_ring_off = 0x0;
+ writel((u32) (w100_pwr_state.pll_cntl.val), remapped_regs + mmPLL_CNTL);
+
+ w100_pwr_state.clk_test_cntl.f.testclk_sel = 0x1; /* PLLCLK (for testing) */
+ w100_pwr_state.clk_test_cntl.f.start_check_freq = 0x0;
+ w100_pwr_state.clk_test_cntl.f.tstcount_rst = 0x0;
+ writel((u32) (w100_pwr_state.clk_test_cntl.val), remapped_regs + mmCLK_TEST_CNTL);
+
+ w100_pwr_state.pwrmgt_cntl.f.pwm_enable = 0x0;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_mode_req = 0x1; /* normal mode (0, 1, 3) */
+ w100_pwr_state.pwrmgt_cntl.f.pwm_wakeup_cond = 0x0;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_fast_noml_hw_en = 0x0;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_noml_fast_hw_en = 0x0;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_fast_noml_cond = 0x1; /* PM4,ENG */
+ w100_pwr_state.pwrmgt_cntl.f.pwm_noml_fast_cond = 0x1; /* PM4,ENG */
+ w100_pwr_state.pwrmgt_cntl.f.pwm_idle_timer = 0xFF;
+ w100_pwr_state.pwrmgt_cntl.f.pwm_busy_timer = 0xFF;
+ writel((u32) (w100_pwr_state.pwrmgt_cntl.val), remapped_regs + mmPWRMGT_CNTL);
+
+ w100_pwr_state.auto_mode = 0; /* manual mode */
+ w100_pwr_state.pwm_mode = 1; /* normal mode (0, 1, 2) */
+ w100_pwr_state.freq = 50000000; /* 50 MHz */
+ w100_pwr_state.M = 3; /* M = 4 */
+ w100_pwr_state.N_int = 6; /* N = 7.0 */
+ w100_pwr_state.N_fac = 0;
+ w100_pwr_state.tfgoal = 0xE0;
+ w100_pwr_state.lock_time = 56;
+ w100_pwr_state.tf20 = 0xff; /* set highest */
+ w100_pwr_state.tf80 = 0x00; /* set lowest */
+ w100_pwr_state.tf100 = 0x00; /* set lowest */
+ w100_pwr_state.fast_sclk = 50; /* 50.0 MHz */
+ w100_pwr_state.norm_sclk = 12; /* 12.5 MHz */
+}
+
+
+static void w100_init_sharp_lcd(u32 mode)
+{
+ u32 temp32;
+ union disp_db_buf_cntl_wr_u disp_db_buf_wr_cntl;
+
+ /* Prevent display updates */
+ disp_db_buf_wr_cntl.f.db_buf_cntl = 0x1e;
+ disp_db_buf_wr_cntl.f.update_db_buf = 0;
+ disp_db_buf_wr_cntl.f.en_db_buf = 0;
+ writel((u32) (disp_db_buf_wr_cntl.val), remapped_regs + mmDISP_DB_BUF_CNTL);
+
+ switch(mode) {
+ case LCD_SHARP_QVGA:
+ w100_set_slowsysclk(12); /* use crystal -- 12.5MHz */
+		/* do not use the PLL */
+
+ writel(0x7FFF8000, remapped_regs + mmMC_EXT_MEM_LOCATION);
+ writel(0x85FF8000, remapped_regs + mmMC_FB_LOCATION);
+ writel(0x00000003, remapped_regs + mmLCD_FORMAT);
+ writel(0x00CF1C06, remapped_regs + mmGRAPHIC_CTRL);
+ writel(0x01410145, remapped_regs + mmCRTC_TOTAL);
+ writel(0x01170027, remapped_regs + mmACTIVE_H_DISP);
+ writel(0x01410001, remapped_regs + mmACTIVE_V_DISP);
+ writel(0x01170027, remapped_regs + mmGRAPHIC_H_DISP);
+ writel(0x01410001, remapped_regs + mmGRAPHIC_V_DISP);
+ writel(0x81170027, remapped_regs + mmCRTC_SS);
+ writel(0xA0140000, remapped_regs + mmCRTC_LS);
+ writel(0x00400008, remapped_regs + mmCRTC_REV);
+ writel(0xA0000000, remapped_regs + mmCRTC_DCLK);
+ writel(0xC0140014, remapped_regs + mmCRTC_GS);
+ writel(0x00010141, remapped_regs + mmCRTC_VPOS_GS);
+ writel(0x8015010F, remapped_regs + mmCRTC_GCLK);
+ writel(0x80100110, remapped_regs + mmCRTC_GOE);
+ writel(0x00000000, remapped_regs + mmCRTC_FRAME);
+ writel(0x00000000, remapped_regs + mmCRTC_FRAME_VPOS);
+ writel(0x01CC0000, remapped_regs + mmLCDD_CNTL1);
+ writel(0x0003FFFF, remapped_regs + mmLCDD_CNTL2);
+ writel(0x00FFFF0D, remapped_regs + mmGENLCD_CNTL1);
+ writel(0x003F3003, remapped_regs + mmGENLCD_CNTL2);
+ writel(0x00000000, remapped_regs + mmCRTC_DEFAULT_COUNT);
+ writel(0x0000FF00, remapped_regs + mmLCD_BACKGROUND_COLOR);
+ writel(0x000102aa, remapped_regs + mmGENLCD_CNTL3);
+ writel(0x00800000, remapped_regs + mmGRAPHIC_OFFSET);
+ writel(0x000001e0, remapped_regs + mmGRAPHIC_PITCH);
+ writel(0x000000bf, remapped_regs + mmGPIO_DATA);
+ writel(0x03c0feff, remapped_regs + mmGPIO_CNTL2);
+ writel(0x00000000, remapped_regs + mmGPIO_CNTL1);
+ writel(0x41060010, remapped_regs + mmCRTC_PS1_ACTIVE);
+ break;
+ case LCD_SHARP_VGA:
+ w100_set_slowsysclk(12); /* use crystal -- 12.5MHz */
+ w100_set_fastsysclk(current_par->fastsysclk_mode); /* use PLL -- 75.0MHz */
+ w100_pwr_state.pclk_cntl.f.pclk_src_sel = 0x1;
+ w100_pwr_state.pclk_cntl.f.pclk_post_div = 0x2;
+ writel((u32) (w100_pwr_state.pclk_cntl.val), remapped_regs + mmPCLK_CNTL);
+ writel(0x15FF1000, remapped_regs + mmMC_FB_LOCATION);
+ writel(0x9FFF8000, remapped_regs + mmMC_EXT_MEM_LOCATION);
+ writel(0x00000003, remapped_regs + mmLCD_FORMAT);
+ writel(0x00DE1D66, remapped_regs + mmGRAPHIC_CTRL);
+
+ writel(0x0283028B, remapped_regs + mmCRTC_TOTAL);
+ writel(0x02360056, remapped_regs + mmACTIVE_H_DISP);
+ writel(0x02830003, remapped_regs + mmACTIVE_V_DISP);
+ writel(0x02360056, remapped_regs + mmGRAPHIC_H_DISP);
+ writel(0x02830003, remapped_regs + mmGRAPHIC_V_DISP);
+ writel(0x82360056, remapped_regs + mmCRTC_SS);
+ writel(0xA0280000, remapped_regs + mmCRTC_LS);
+ writel(0x00400008, remapped_regs + mmCRTC_REV);
+ writel(0xA0000000, remapped_regs + mmCRTC_DCLK);
+ writel(0x80280028, remapped_regs + mmCRTC_GS);
+ writel(0x02830002, remapped_regs + mmCRTC_VPOS_GS);
+ writel(0x8015010F, remapped_regs + mmCRTC_GCLK);
+ writel(0x80100110, remapped_regs + mmCRTC_GOE);
+ writel(0x00000000, remapped_regs + mmCRTC_FRAME);
+ writel(0x00000000, remapped_regs + mmCRTC_FRAME_VPOS);
+ writel(0x01CC0000, remapped_regs + mmLCDD_CNTL1);
+ writel(0x0003FFFF, remapped_regs + mmLCDD_CNTL2);
+ writel(0x00FFFF0D, remapped_regs + mmGENLCD_CNTL1);
+ writel(0x003F3003, remapped_regs + mmGENLCD_CNTL2);
+ writel(0x00000000, remapped_regs + mmCRTC_DEFAULT_COUNT);
+ writel(0x0000FF00, remapped_regs + mmLCD_BACKGROUND_COLOR);
+ writel(0x000102aa, remapped_regs + mmGENLCD_CNTL3);
+ writel(0x00800000, remapped_regs + mmGRAPHIC_OFFSET);
+ writel(0x000003C0, remapped_regs + mmGRAPHIC_PITCH);
+ writel(0x000000bf, remapped_regs + mmGPIO_DATA);
+ writel(0x03c0feff, remapped_regs + mmGPIO_CNTL2);
+ writel(0x00000000, remapped_regs + mmGPIO_CNTL1);
+ writel(0x41060010, remapped_regs + mmCRTC_PS1_ACTIVE);
+ break;
+ default:
+ break;
+ }
+
+ /* Hack for overlay in ext memory */
+ temp32 = readl(remapped_regs + mmDISP_DEBUG2);
+ temp32 |= 0xc0000000;
+ writel(temp32, remapped_regs + mmDISP_DEBUG2);
+
+ /* Re-enable display updates */
+ disp_db_buf_wr_cntl.f.db_buf_cntl = 0x1e;
+ disp_db_buf_wr_cntl.f.update_db_buf = 1;
+ disp_db_buf_wr_cntl.f.en_db_buf = 1;
+ writel((u32) (disp_db_buf_wr_cntl.val), remapped_regs + mmDISP_DB_BUF_CNTL);
+}
+
+
+static void w100_set_vga_rotation_regs(u16 divider, unsigned long ctrl, unsigned long offset, unsigned long pitch)
+{
+ w100_pwr_state.pclk_cntl.f.pclk_src_sel = 0x1;
+ w100_pwr_state.pclk_cntl.f.pclk_post_div = divider;
+ writel((u32) (w100_pwr_state.pclk_cntl.val), remapped_regs + mmPCLK_CNTL);
+
+ writel(ctrl, remapped_regs + mmGRAPHIC_CTRL);
+ writel(offset, remapped_regs + mmGRAPHIC_OFFSET);
+ writel(pitch, remapped_regs + mmGRAPHIC_PITCH);
+
+ /* Re-enable display updates */
+ writel(0x0000007b, remapped_regs + mmDISP_DB_BUF_CNTL);
+}
+
+
+static void w100_init_vga_rotation(u16 deg)
+{
+ switch(deg) {
+ case 0:
+ w100_set_vga_rotation_regs(0x02, 0x00DE1D66, 0x00800000, 0x000003c0);
+ break;
+ case 90:
+ w100_set_vga_rotation_regs(0x06, 0x00DE1D0e, 0x00895b00, 0x00000500);
+ break;
+ case 180:
+ w100_set_vga_rotation_regs(0x02, 0x00DE1D7e, 0x00895ffc, 0x000003c0);
+ break;
+ case 270:
+ w100_set_vga_rotation_regs(0x06, 0x00DE1D16, 0x008004fc, 0x00000500);
+ break;
+ default:
+		/* not supported */
+ break;
+ }
+}
+
+
+static void w100_set_qvga_rotation_regs(unsigned long ctrl, unsigned long offset, unsigned long pitch)
+{
+ writel(ctrl, remapped_regs + mmGRAPHIC_CTRL);
+ writel(offset, remapped_regs + mmGRAPHIC_OFFSET);
+ writel(pitch, remapped_regs + mmGRAPHIC_PITCH);
+
+ /* Re-enable display updates */
+ writel(0x0000007b, remapped_regs + mmDISP_DB_BUF_CNTL);
+}
+
+
+static void w100_init_qvga_rotation(u16 deg)
+{
+ switch(deg) {
+ case 0:
+ w100_set_qvga_rotation_regs(0x00d41c06, 0x00800000, 0x000001e0);
+ break;
+ case 90:
+ w100_set_qvga_rotation_regs(0x00d41c0E, 0x00825580, 0x00000280);
+ break;
+ case 180:
+ w100_set_qvga_rotation_regs(0x00d41c1e, 0x008257fc, 0x000001e0);
+ break;
+ case 270:
+ w100_set_qvga_rotation_regs(0x00d41c16, 0x0080027c, 0x00000280);
+ break;
+ default:
+		/* not supported */
+ break;
+ }
+}
+
+
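+/*
+ * Put the chip into a low power state. W100_SUSPEND_EXTMEM only tri-states
+ * CKE and stops the external memory clock; any other mode also gates the
+ * system clocks and puts the clock generator into its power management state.
+ */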
+static void w100_suspend(u32 mode)
+{
+ u32 val;
+
+ writel(0x7FFF8000, remapped_regs + mmMC_EXT_MEM_LOCATION);
+ writel(0x00FF0000, remapped_regs + mmMC_PERF_MON_CNTL);
+
+ val = readl(remapped_regs + mmMEM_EXT_TIMING_CNTL);
+ val &= ~(0x00100000); /* bit20=0 */
+ val |= 0xFF000000; /* bit31:24=0xff */
+ writel(val, remapped_regs + mmMEM_EXT_TIMING_CNTL);
+
+ val = readl(remapped_regs + mmMEM_EXT_CNTL);
+ val &= ~(0x00040000); /* bit18=0 */
+ val |= 0x00080000; /* bit19=1 */
+ writel(val, remapped_regs + mmMEM_EXT_CNTL);
+
+ udelay(1); /* wait 1us */
+
+ if (mode == W100_SUSPEND_EXTMEM) {
+
+ /* CKE: Tri-State */
+ val = readl(remapped_regs + mmMEM_EXT_CNTL);
+ val |= 0x40000000; /* bit30=1 */
+ writel(val, remapped_regs + mmMEM_EXT_CNTL);
+
+ /* CLK: Stop */
+ val = readl(remapped_regs + mmMEM_EXT_CNTL);
+ val &= ~(0x00000001); /* bit0=0 */
+ writel(val, remapped_regs + mmMEM_EXT_CNTL);
+ } else {
+
+ writel(0x00000000, remapped_regs + mmSCLK_CNTL);
+ writel(0x000000BF, remapped_regs + mmCLK_PIN_CNTL);
+ writel(0x00000015, remapped_regs + mmPWRMGT_CNTL);
+
+ udelay(5);
+
+ val = readl(remapped_regs + mmPLL_CNTL);
+ val |= 0x00000004; /* bit2=1 */
+ writel(val, remapped_regs + mmPLL_CNTL);
+ writel(0x0000001d, remapped_regs + mmPWRMGT_CNTL);
+ }
+}
+
+
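+/*
+ * Bring the chip back up after suspend: re-run the core and clock setup,
+ * then reprogram the panel for the mode that was active before suspend.
+ */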
+static void w100_resume(void)
+{
+ u32 temp32;
+
+ w100_hw_init();
+ w100_pwm_setup();
+
+ temp32 = readl(remapped_regs + mmDISP_DEBUG2);
+ temp32 &= 0xff7fffff;
+ temp32 |= 0x00800000;
+ writel(temp32, remapped_regs + mmDISP_DEBUG2);
+
+ if (current_par->lcdMode == LCD_MODE_480 || current_par->lcdMode == LCD_MODE_640) {
+ w100_init_sharp_lcd(LCD_SHARP_VGA);
+ if (current_par->lcdMode == LCD_MODE_640) {
+ w100_init_vga_rotation(current_par->rotation_flag ? 270 : 90);
+ }
+ } else {
+ w100_init_sharp_lcd(LCD_SHARP_QVGA);
+ if (current_par->lcdMode == LCD_MODE_320) {
+ w100_init_qvga_rotation(current_par->rotation_flag ? 270 : 90);
+ }
+ }
+}
+
+
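+/*
+ * Wait for the next vertical sync by arming the vline interrupt and
+ * polling its status bit, with a 30ms timeout.
+ */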
+static void w100_vsync(void)
+{
+ u32 tmp;
+ int timeout = 30000; /* VSync timeout = 30[ms] > 16.8[ms] */
+
+ tmp = readl(remapped_regs + mmACTIVE_V_DISP);
+
+ /* set vline pos */
+ writel((tmp >> 16) & 0x3ff, remapped_regs + mmDISP_INT_CNTL);
+
+ /* disable vline irq */
+ tmp = readl(remapped_regs + mmGEN_INT_CNTL);
+
+ tmp &= ~0x00000002;
+ writel(tmp, remapped_regs + mmGEN_INT_CNTL);
+
+ /* clear vline irq status */
+ writel(0x00000002, remapped_regs + mmGEN_INT_STATUS);
+
+ /* enable vline irq */
+ writel((tmp | 0x00000002), remapped_regs + mmGEN_INT_CNTL);
+
+ /* clear vline irq status */
+ writel(0x00000002, remapped_regs + mmGEN_INT_STATUS);
+
+ while(timeout > 0) {
+ if (readl(remapped_regs + mmGEN_INT_STATUS) & 0x00000002)
+ break;
+ udelay(1);
+ timeout--;
+ }
+
+ /* disable vline irq */
+ writel(tmp, remapped_regs + mmGEN_INT_CNTL);
+
+ /* clear vline irq status */
+ writel(0x00000002, remapped_regs + mmGEN_INT_STATUS);
+}
+
+
+static void w100_InitExtMem(u32 mode)
+{
+ switch(mode) {
+ case LCD_SHARP_QVGA:
+		/* QVGA doesn't use external memory;
+		   nothing to do, really. */
+ break;
+ case LCD_SHARP_VGA:
+ writel(0x00007800, remapped_regs + mmMC_BIST_CTRL);
+ writel(0x00040003, remapped_regs + mmMEM_EXT_CNTL);
+ writel(0x00200021, remapped_regs + mmMEM_SDRAM_MODE_REG);
+ udelay(100);
+ writel(0x80200021, remapped_regs + mmMEM_SDRAM_MODE_REG);
+ udelay(100);
+ writel(0x00650021, remapped_regs + mmMEM_SDRAM_MODE_REG);
+ udelay(100);
+ writel(0x10002a4a, remapped_regs + mmMEM_EXT_TIMING_CNTL);
+ writel(0x7ff87012, remapped_regs + mmMEM_IO_CNTL);
+ break;
+ default:
+ break;
+ }
+}
+
+
+#define RESCTL_ADRS 0x00
+#define PHACTRL_ADRS 0x01
+#define DUTYCTRL_ADRS 0x02
+#define POWERREG0_ADRS 0x03
+#define POWERREG1_ADRS 0x04
+#define GPOR3_ADRS 0x05
+#define PICTRL_ADRS 0x06
+#define POLCTRL_ADRS 0x07
+
+#define RESCTL_QVGA 0x01
+#define RESCTL_VGA 0x00
+
+#define POWER1_VW_ON 0x01 /* VW Supply FET ON */
+#define POWER1_GVSS_ON 0x02 /* GVSS(-8V) Power Supply ON */
+#define POWER1_VDD_ON 0x04 /* VDD(8V),SVSS(-4V) Power Supply ON */
+
+#define POWER1_VW_OFF 0x00 /* VW Supply FET OFF */
+#define POWER1_GVSS_OFF 0x00 /* GVSS(-8V) Power Supply OFF */
+#define POWER1_VDD_OFF 0x00 /* VDD(8V),SVSS(-4V) Power Supply OFF */
+
+#define POWER0_COM_DCLK 0x01 /* COM Voltage DC Bias DAC Serial Data Clock */
+#define POWER0_COM_DOUT 0x02 /* COM Voltage DC Bias DAC Serial Data Out */
+#define POWER0_DAC_ON 0x04 /* DAC Power Supply ON */
+#define POWER0_COM_ON 0x08 /* COM Power Supply ON */
+#define POWER0_VCC5_ON 0x10 /* VCC5 Power Supply ON */
+
+#define POWER0_DAC_OFF 0x00 /* DAC Power Supply OFF */
+#define POWER0_COM_OFF 0x00 /* COM Power Supply OFF */
+#define POWER0_VCC5_OFF 0x00 /* VCC5 Power Supply OFF */
+
+#define PICTRL_INIT_STATE 0x01
+#define PICTRL_INIOFF 0x02
+#define PICTRL_POWER_DOWN 0x04
+#define PICTRL_COM_SIGNAL_OFF 0x08
+#define PICTRL_DAC_SIGNAL_OFF 0x10
+
+#define PICTRL_POWER_ACTIVE (0)
+
+#define POLCTRL_SYNC_POL_FALL 0x01
+#define POLCTRL_EN_POL_FALL 0x02
+#define POLCTRL_DATA_POL_FALL 0x04
+#define POLCTRL_SYNC_ACT_H 0x08
+#define POLCTRL_EN_ACT_L 0x10
+
+#define POLCTRL_SYNC_POL_RISE 0x00
+#define POLCTRL_EN_POL_RISE 0x00
+#define POLCTRL_DATA_POL_RISE 0x00
+#define POLCTRL_SYNC_ACT_L 0x00
+#define POLCTRL_EN_ACT_H 0x00
+
+#define PHACTRL_PHASE_MANUAL 0x01
+
+#define PHAD_QVGA_DEFAULT_VAL (9)
+#define COMADJ_DEFAULT (125)
+
+static void lcdtg_ssp_send(u8 adrs, u8 data)
+{
+ w100fb_ssp_send(adrs,data);
+}
+
+/*
+ * This is only a pseudo I2C interface. We can't use the standard kernel
+ * routines as the interface is write only. We just assume the data is acked...
+ */
+static void lcdtg_ssp_i2c_send(u8 data)
+{
+ lcdtg_ssp_send(POWERREG0_ADRS, data);
+ udelay(10);
+}
+
+static void lcdtg_i2c_send_bit(u8 data)
+{
+ lcdtg_ssp_i2c_send(data);
+ lcdtg_ssp_i2c_send(data | POWER0_COM_DCLK);
+ lcdtg_ssp_i2c_send(data);
+}
+
+static void lcdtg_i2c_send_start(u8 base)
+{
+ lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK | POWER0_COM_DOUT);
+ lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK);
+ lcdtg_ssp_i2c_send(base);
+}
+
+static void lcdtg_i2c_send_stop(u8 base)
+{
+ lcdtg_ssp_i2c_send(base);
+ lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK);
+ lcdtg_ssp_i2c_send(base | POWER0_COM_DCLK | POWER0_COM_DOUT);
+}
+
+static void lcdtg_i2c_send_byte(u8 base, u8 data)
+{
+ int i;
+ for (i = 0; i < 8; i++) {
+ if (data & 0x80)
+ lcdtg_i2c_send_bit(base | POWER0_COM_DOUT);
+ else
+ lcdtg_i2c_send_bit(base);
+ data <<= 1;
+ }
+}
+
+static void lcdtg_i2c_wait_ack(u8 base)
+{
+ lcdtg_i2c_send_bit(base);
+}
+
+static void lcdtg_set_common_voltage(u8 base_data, u8 data)
+{
+ /* Set Common Voltage to M62332FP via I2C */
+ lcdtg_i2c_send_start(base_data);
+ lcdtg_i2c_send_byte(base_data, 0x9c);
+ lcdtg_i2c_wait_ack(base_data);
+ lcdtg_i2c_send_byte(base_data, 0x00);
+ lcdtg_i2c_wait_ack(base_data);
+ lcdtg_i2c_send_byte(base_data, data);
+ lcdtg_i2c_wait_ack(base_data);
+ lcdtg_i2c_send_stop(base_data);
+}
+
+static struct lcdtg_register_setting {
+ u8 adrs;
+ u8 data;
+ u32 wait;
+} lcdtg_power_on_table[] = {
+
+ /* Initialize Internal Logic & Port */
+ { PICTRL_ADRS,
+ PICTRL_POWER_DOWN | PICTRL_INIOFF | PICTRL_INIT_STATE |
+ PICTRL_COM_SIGNAL_OFF | PICTRL_DAC_SIGNAL_OFF,
+ 0 },
+
+ { POWERREG0_ADRS,
+ POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_OFF | POWER0_COM_OFF |
+ POWER0_VCC5_OFF,
+ 0 },
+
+ { POWERREG1_ADRS,
+ POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_OFF,
+ 0 },
+
+ /* VDD(+8V),SVSS(-4V) ON */
+ { POWERREG1_ADRS,
+ POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_ON /* VDD ON */,
+ 3000 },
+
+ /* DAC ON */
+ { POWERREG0_ADRS,
+ POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON /* DAC ON */ |
+ POWER0_COM_OFF | POWER0_VCC5_OFF,
+ 0 },
+
+ /* INIB = H, INI = L */
+ { PICTRL_ADRS,
+ /* PICTL[0] = H , PICTL[1] = PICTL[2] = PICTL[4] = L */
+ PICTRL_INIT_STATE | PICTRL_COM_SIGNAL_OFF,
+ 0 },
+
+ /* Set Common Voltage */
+ { 0xfe, 0, 0 },
+
+ /* VCC5 ON */
+ { POWERREG0_ADRS,
+ POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON /* DAC ON */ |
+ POWER0_COM_OFF | POWER0_VCC5_ON /* VCC5 ON */,
+ 0 },
+
+ /* GVSS(-8V) ON */
+ { POWERREG1_ADRS,
+ POWER1_VW_OFF | POWER1_GVSS_ON /* GVSS ON */ |
+ POWER1_VDD_ON /* VDD ON */,
+ 2000 },
+
+ /* COM SIGNAL ON (PICTL[3] = L) */
+ { PICTRL_ADRS,
+ PICTRL_INIT_STATE,
+ 0 },
+
+ /* COM ON */
+ { POWERREG0_ADRS,
+ POWER0_COM_DCLK | POWER0_COM_DOUT | POWER0_DAC_ON /* DAC ON */ |
+ POWER0_COM_ON /* COM ON */ | POWER0_VCC5_ON /* VCC5_ON */,
+ 0 },
+
+ /* VW ON */
+ { POWERREG1_ADRS,
+ POWER1_VW_ON /* VW ON */ | POWER1_GVSS_ON /* GVSS ON */ |
+ POWER1_VDD_ON /* VDD ON */,
+ 0 /* Wait 100ms */ },
+
+ /* Signals output enable */
+ { PICTRL_ADRS,
+ 0 /* Signals output enable */,
+ 0 },
+
+ { PHACTRL_ADRS,
+ PHACTRL_PHASE_MANUAL,
+ 0 },
+
+ /* Initialize for Input Signals from ATI */
+ { POLCTRL_ADRS,
+ POLCTRL_SYNC_POL_RISE | POLCTRL_EN_POL_RISE | POLCTRL_DATA_POL_RISE |
+ POLCTRL_SYNC_ACT_L | POLCTRL_EN_ACT_H,
+ 1000 /*100000*/ /* Wait 100ms */ },
+
+ /* end mark */
+ { 0xff, 0, 0 }
+};
+
+static void lcdtg_resume(void)
+{
+ if (current_par->lcdMode == LCD_MODE_480 || current_par->lcdMode == LCD_MODE_640) {
+ lcdtg_hw_init(LCD_SHARP_VGA);
+ } else {
+ lcdtg_hw_init(LCD_SHARP_QVGA);
+ }
+}
+
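+/*
+ * Power the panel down in sequence: fill the framebuffer with white, wait
+ * two frames, then switch off VW, COM, GVSS, VCC5, the DAC and finally VDD.
+ */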
+static void lcdtg_suspend(void)
+{
+ int i;
+
+ for (i = 0; i < (current_par->xres * current_par->yres); i++) {
+ writew(0xffff, remapped_fbuf + (2*i));
+ }
+
+ /* 60Hz x 2 frame = 16.7msec x 2 = 33.4 msec */
+ mdelay(34);
+
+ /* (1)VW OFF */
+ lcdtg_ssp_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_ON | POWER1_VDD_ON);
+
+ /* (2)COM OFF */
+ lcdtg_ssp_send(PICTRL_ADRS, PICTRL_COM_SIGNAL_OFF);
+ lcdtg_ssp_send(POWERREG0_ADRS, POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_ON);
+
+ /* (3)Set Common Voltage Bias 0V */
+ lcdtg_set_common_voltage(POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_ON, 0);
+
+ /* (4)GVSS OFF */
+ lcdtg_ssp_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_ON);
+
+ /* (5)VCC5 OFF */
+ lcdtg_ssp_send(POWERREG0_ADRS, POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_OFF);
+
+ /* (6)Set PDWN, INIOFF, DACOFF */
+ lcdtg_ssp_send(PICTRL_ADRS, PICTRL_INIOFF | PICTRL_DAC_SIGNAL_OFF |
+ PICTRL_POWER_DOWN | PICTRL_COM_SIGNAL_OFF);
+
+ /* (7)DAC OFF */
+ lcdtg_ssp_send(POWERREG0_ADRS, POWER0_DAC_OFF | POWER0_COM_OFF | POWER0_VCC5_OFF);
+
+ /* (8)VDD OFF */
+ lcdtg_ssp_send(POWERREG1_ADRS, POWER1_VW_OFF | POWER1_GVSS_OFF | POWER1_VDD_OFF);
+
+}
+
+static void lcdtg_set_phadadj(u32 mode)
+{
+ int adj;
+
+ if (mode == LCD_SHARP_VGA) {
+ /* Setting for VGA */
+ adj = current_par->phadadj;
+ if (adj < 0) {
+ adj = PHACTRL_PHASE_MANUAL;
+ } else {
+ adj = ((adj & 0x0f) << 1) | PHACTRL_PHASE_MANUAL;
+ }
+ } else {
+ /* Setting for QVGA */
+ adj = (PHAD_QVGA_DEFAULT_VAL << 1) | PHACTRL_PHASE_MANUAL;
+ }
+ lcdtg_ssp_send(PHACTRL_ADRS, adj);
+}
+
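+/*
+ * Walk the power-on table, sending each register write to the LCD timing
+ * generator over SSP (the special entries handle common voltage and phase
+ * adjust), then program the panel resolution.
+ */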
+static void lcdtg_hw_init(u32 mode)
+{
+ int i;
+ int comadj;
+
+ i = 0;
+ while(lcdtg_power_on_table[i].adrs != 0xff) {
+ if (lcdtg_power_on_table[i].adrs == 0xfe) {
+ /* Set Common Voltage */
+ comadj = current_par->comadj;
+ if (comadj < 0) {
+ comadj = COMADJ_DEFAULT;
+ }
+ lcdtg_set_common_voltage((POWER0_DAC_ON | POWER0_COM_OFF | POWER0_VCC5_OFF), comadj);
+ } else if (lcdtg_power_on_table[i].adrs == PHACTRL_ADRS) {
+			/* Set Phase Adjust */
+ lcdtg_set_phadadj(mode);
+ } else {
+ /* Other */
+ lcdtg_ssp_send(lcdtg_power_on_table[i].adrs, lcdtg_power_on_table[i].data);
+ }
+ if (lcdtg_power_on_table[i].wait != 0)
+ udelay(lcdtg_power_on_table[i].wait);
+ i++;
+ }
+
+ switch(mode) {
+ case LCD_SHARP_QVGA:
+ /* Set Lcd Resolution (QVGA) */
+ lcdtg_ssp_send(RESCTL_ADRS, RESCTL_QVGA);
+ break;
+ case LCD_SHARP_VGA:
+ /* Set Lcd Resolution (VGA) */
+ lcdtg_ssp_send(RESCTL_ADRS, RESCTL_VGA);
+ break;
+ default:
+ break;
+ }
+}
+
+static void lcdtg_lcd_change(u32 mode)
+{
+	/* Set Phase Adjust */
+ lcdtg_set_phadadj(mode);
+
+ if (mode == LCD_SHARP_VGA)
+ /* Set Lcd Resolution (VGA) */
+ lcdtg_ssp_send(RESCTL_ADRS, RESCTL_VGA);
+ else if (mode == LCD_SHARP_QVGA)
+ /* Set Lcd Resolution (QVGA) */
+ lcdtg_ssp_send(RESCTL_ADRS, RESCTL_QVGA);
+}
+
+
+static struct device_driver w100fb_driver = {
+ .name = "w100fb",
+ .bus = &platform_bus_type,
+ .probe = w100fb_probe,
+ .remove = w100fb_remove,
+ .suspend = w100fb_suspend,
+ .resume = w100fb_resume,
+};
+
+int __devinit w100fb_init(void)
+{
+ return driver_register(&w100fb_driver);
+}
+
+void __exit w100fb_cleanup(void)
+{
+ driver_unregister(&w100fb_driver);
+}
+
+module_init(w100fb_init);
+module_exit(w100fb_cleanup);
+
+MODULE_DESCRIPTION("ATI Imageon w100 framebuffer driver");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+/*
+ * linux/drivers/video/w100fb.h
+ *
+ * Frame Buffer Device for ATI w100 (Wallaby)
+ *
+ * Copyright (C) 2002, ATI Corp.
+ * Copyright (C) 2004-2005 Richard Purdie
+ *
+ * Modified to work with 2.6 by Richard Purdie <rpurdie@rpsys.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#if !defined (_W100FB_H)
+#define _W100FB_H
+
+/* Block CIF Start: */
+#define mmCHIP_ID 0x0000
+#define mmREVISION_ID 0x0004
+#define mmWRAP_BUF_A 0x0008
+#define mmWRAP_BUF_B 0x000C
+#define mmWRAP_TOP_DIR 0x0010
+#define mmWRAP_START_DIR 0x0014
+#define mmCIF_CNTL 0x0018
+#define mmCFGREG_BASE 0x001C
+#define mmCIF_IO 0x0020
+#define mmCIF_READ_DBG 0x0024
+#define mmCIF_WRITE_DBG 0x0028
+#define cfgIND_ADDR_A_0 0x0000
+#define cfgIND_ADDR_A_1 0x0001
+#define cfgIND_ADDR_A_2 0x0002
+#define cfgIND_DATA_A 0x0003
+#define cfgREG_BASE 0x0004
+#define cfgINTF_CNTL 0x0005
+#define cfgSTATUS 0x0006
+#define cfgCPU_DEFAULTS 0x0007
+#define cfgIND_ADDR_B_0 0x0008
+#define cfgIND_ADDR_B_1 0x0009
+#define cfgIND_ADDR_B_2 0x000A
+#define cfgIND_DATA_B 0x000B
+#define cfgPM4_RPTR 0x000C
+#define cfgSCRATCH 0x000D
+#define cfgPM4_WRPTR_0 0x000E
+#define cfgPM4_WRPTR_1 0x000F
+/* Block CIF End: */
+
+/* Block CP Start: */
+#define mmSCRATCH_UMSK 0x0280
+#define mmSCRATCH_ADDR 0x0284
+#define mmGEN_INT_CNTL 0x0200
+#define mmGEN_INT_STATUS 0x0204
+/* Block CP End: */
+
+/* Block DISPLAY Start: */
+#define mmLCD_FORMAT 0x0410
+#define mmGRAPHIC_CTRL 0x0414
+#define mmGRAPHIC_OFFSET 0x0418
+#define mmGRAPHIC_PITCH 0x041C
+#define mmCRTC_TOTAL 0x0420
+#define mmACTIVE_H_DISP 0x0424
+#define mmACTIVE_V_DISP 0x0428
+#define mmGRAPHIC_H_DISP 0x042C
+#define mmGRAPHIC_V_DISP 0x0430
+#define mmVIDEO_CTRL 0x0434
+#define mmGRAPHIC_KEY 0x0438
+#define mmBRIGHTNESS_CNTL 0x045C
+#define mmDISP_INT_CNTL 0x0488
+#define mmCRTC_SS 0x048C
+#define mmCRTC_LS 0x0490
+#define mmCRTC_REV 0x0494
+#define mmCRTC_DCLK 0x049C
+#define mmCRTC_GS 0x04A0
+#define mmCRTC_VPOS_GS 0x04A4
+#define mmCRTC_GCLK 0x04A8
+#define mmCRTC_GOE 0x04AC
+#define mmCRTC_FRAME 0x04B0
+#define mmCRTC_FRAME_VPOS 0x04B4
+#define mmGPIO_DATA 0x04B8
+#define mmGPIO_CNTL1 0x04BC
+#define mmGPIO_CNTL2 0x04C0
+#define mmLCDD_CNTL1 0x04C4
+#define mmLCDD_CNTL2 0x04C8
+#define mmGENLCD_CNTL1 0x04CC
+#define mmGENLCD_CNTL2 0x04D0
+#define mmDISP_DEBUG 0x04D4
+#define mmDISP_DB_BUF_CNTL 0x04D8
+#define mmDISP_CRC_SIG 0x04DC
+#define mmCRTC_DEFAULT_COUNT 0x04E0
+#define mmLCD_BACKGROUND_COLOR 0x04E4
+#define mmCRTC_PS2 0x04E8
+#define mmCRTC_PS2_VPOS 0x04EC
+#define mmCRTC_PS1_ACTIVE 0x04F0
+#define mmCRTC_PS1_NACTIVE 0x04F4
+#define mmCRTC_GCLK_EXT 0x04F8
+#define mmCRTC_ALW 0x04FC
+#define mmCRTC_ALW_VPOS 0x0500
+#define mmCRTC_PSK 0x0504
+#define mmCRTC_PSK_HPOS 0x0508
+#define mmCRTC_CV4_START 0x050C
+#define mmCRTC_CV4_END 0x0510
+#define mmCRTC_CV4_HPOS 0x0514
+#define mmCRTC_ECK 0x051C
+#define mmREFRESH_CNTL 0x0520
+#define mmGENLCD_CNTL3 0x0524
+#define mmGPIO_DATA2 0x0528
+#define mmGPIO_CNTL3 0x052C
+#define mmGPIO_CNTL4 0x0530
+#define mmCHIP_STRAP 0x0534
+#define mmDISP_DEBUG2 0x0538
+#define mmDEBUG_BUS_CNTL 0x053C
+#define mmGAMMA_VALUE1 0x0540
+#define mmGAMMA_VALUE2 0x0544
+#define mmGAMMA_SLOPE 0x0548
+#define mmGEN_STATUS 0x054C
+#define mmHW_INT 0x0550
+/* Block DISPLAY End: */
+
+/* Block GFX Start: */
+#define mmBRUSH_OFFSET 0x108C
+#define mmBRUSH_Y_X 0x1074
+#define mmDEFAULT_PITCH_OFFSET 0x10A0
+#define mmDEFAULT_SC_BOTTOM_RIGHT 0x10A8
+#define mmDEFAULT2_SC_BOTTOM_RIGHT 0x10AC
+#define mmGLOBAL_ALPHA 0x1210
+#define mmFILTER_COEF 0x1214
+#define mmMVC_CNTL_START 0x11E0
+#define mmE2_ARITHMETIC_CNTL 0x1220
+#define mmENG_CNTL 0x13E8
+#define mmENG_PERF_CNT 0x13F0
+/* Block GFX End: */
+
+/* Block IDCT Start: */
+#define mmIDCT_RUNS 0x0C00
+#define mmIDCT_LEVELS 0x0C04
+#define mmIDCT_CONTROL 0x0C3C
+#define mmIDCT_AUTH_CONTROL 0x0C08
+#define mmIDCT_AUTH 0x0C0C
+/* Block IDCT End: */
+
+/* Block MC Start: */
+#define mmMEM_CNTL 0x0180
+#define mmMEM_ARB 0x0184
+#define mmMC_FB_LOCATION 0x0188
+#define mmMEM_EXT_CNTL 0x018C
+#define mmMC_EXT_MEM_LOCATION 0x0190
+#define mmMEM_EXT_TIMING_CNTL 0x0194
+#define mmMEM_SDRAM_MODE_REG 0x0198
+#define mmMEM_IO_CNTL 0x019C
+#define mmMC_DEBUG 0x01A0
+#define mmMC_BIST_CTRL 0x01A4
+#define mmMC_BIST_COLLAR_READ 0x01A8
+#define mmTC_MISMATCH 0x01AC
+#define mmMC_PERF_MON_CNTL 0x01B0
+#define mmMC_PERF_COUNTERS 0x01B4
+/* Block MC End: */
+
+/* Block RBBM Start: */
+#define mmWAIT_UNTIL 0x1400
+#define mmISYNC_CNTL 0x1404
+#define mmRBBM_CNTL 0x0144
+#define mmNQWAIT_UNTIL 0x0150
+/* Block RBBM End: */
+
+/* Block CG Start: */
+#define mmCLK_PIN_CNTL 0x0080
+#define mmPLL_REF_FB_DIV 0x0084
+#define mmPLL_CNTL 0x0088
+#define mmSCLK_CNTL 0x008C
+#define mmPCLK_CNTL 0x0090
+#define mmCLK_TEST_CNTL 0x0094
+#define mmPWRMGT_CNTL 0x0098
+#define mmPWRMGT_STATUS 0x009C
+/* Block CG End: */
+
+/* default value definitions */
+#define defWRAP_TOP_DIR 0x00000000
+#define defWRAP_START_DIR 0x00000000
+#define defCFGREG_BASE 0x00000000
+#define defCIF_IO 0x000C0902
+#define defINTF_CNTL 0x00000011
+#define defCPU_DEFAULTS 0x00000006
+#define defHW_INT 0x00000000
+#define defMC_EXT_MEM_LOCATION 0x07ff0000
+#define defTC_MISMATCH 0x00000000
+
+#define W100_CFG_BASE 0x0
+#define W100_CFG_LEN 0x10
+#define W100_REG_BASE 0x10000
+#define W100_REG_LEN 0x2000
+#define MEM_INT_BASE_VALUE 0x100000
+#define MEM_INT_TOP_VALUE_W100 0x15ffff
+#define MEM_EXT_BASE_VALUE 0x800000
+#define MEM_EXT_TOP_VALUE 0x9fffff
+#define WRAP_BUF_BASE_VALUE 0x80000
+#define WRAP_BUF_TOP_VALUE 0xbffff
+
+
+/* data structure definitions */
+
+struct wrap_top_dir_t {
+ unsigned long top_addr : 23;
+ unsigned long : 9;
+} __attribute__((packed));
+
+union wrap_top_dir_u {
+ unsigned long val : 32;
+ struct wrap_top_dir_t f;
+} __attribute__((packed));
+
+struct wrap_start_dir_t {
+ unsigned long start_addr : 23;
+ unsigned long : 9;
+} __attribute__((packed));
+
+union wrap_start_dir_u {
+ unsigned long val : 32;
+ struct wrap_start_dir_t f;
+} __attribute__((packed));
+
+struct cif_cntl_t {
+ unsigned long swap_reg : 2;
+ unsigned long swap_fbuf_1 : 2;
+ unsigned long swap_fbuf_2 : 2;
+ unsigned long swap_fbuf_3 : 2;
+ unsigned long pmi_int_disable : 1;
+ unsigned long pmi_schmen_disable : 1;
+ unsigned long intb_oe : 1;
+ unsigned long en_wait_to_compensate_dq_prop_dly : 1;
+ unsigned long compensate_wait_rd_size : 2;
+ unsigned long wait_asserted_timeout_val : 2;
+ unsigned long wait_masked_val : 2;
+ unsigned long en_wait_timeout : 1;
+ unsigned long en_one_clk_setup_before_wait : 1;
+ unsigned long interrupt_active_high : 1;
+ unsigned long en_overwrite_straps : 1;
+ unsigned long strap_wait_active_hi : 1;
+ unsigned long lat_busy_count : 2;
+ unsigned long lat_rd_pm4_sclk_busy : 1;
+ unsigned long dis_system_bits : 1;
+ unsigned long dis_mr : 1;
+ unsigned long cif_spare_1 : 4;
+} __attribute__((packed));
+
+union cif_cntl_u {
+ unsigned long val : 32;
+ struct cif_cntl_t f;
+} __attribute__((packed));
+
+struct cfgreg_base_t {
+ unsigned long cfgreg_base : 24;
+ unsigned long : 8;
+} __attribute__((packed));
+
+union cfgreg_base_u {
+ unsigned long val : 32;
+ struct cfgreg_base_t f;
+} __attribute__((packed));
+
+struct cif_io_t {
+ unsigned long dq_srp : 1;
+ unsigned long dq_srn : 1;
+ unsigned long dq_sp : 4;
+ unsigned long dq_sn : 4;
+ unsigned long waitb_srp : 1;
+ unsigned long waitb_srn : 1;
+ unsigned long waitb_sp : 4;
+ unsigned long waitb_sn : 4;
+ unsigned long intb_srp : 1;
+ unsigned long intb_srn : 1;
+ unsigned long intb_sp : 4;
+ unsigned long intb_sn : 4;
+ unsigned long : 2;
+} __attribute__((packed));
+
+union cif_io_u {
+ unsigned long val : 32;
+ struct cif_io_t f;
+} __attribute__((packed));
+
+struct cif_read_dbg_t {
+ unsigned long unpacker_pre_fetch_trig_gen : 2;
+ unsigned long dly_second_rd_fetch_trig : 1;
+ unsigned long rst_rd_burst_id : 1;
+ unsigned long dis_rd_burst_id : 1;
+ unsigned long en_block_rd_when_packer_is_not_emp : 1;
+ unsigned long dis_pre_fetch_cntl_sm : 1;
+ unsigned long rbbm_chrncy_dis : 1;
+ unsigned long rbbm_rd_after_wr_lat : 2;
+ unsigned long dis_be_during_rd : 1;
+ unsigned long one_clk_invalidate_pulse : 1;
+ unsigned long dis_chnl_priority : 1;
+ unsigned long rst_read_path_a_pls : 1;
+ unsigned long rst_read_path_b_pls : 1;
+ unsigned long dis_reg_rd_fetch_trig : 1;
+ unsigned long dis_rd_fetch_trig_from_ind_addr : 1;
+ unsigned long dis_rd_same_byte_to_trig_fetch : 1;
+ unsigned long dis_dir_wrap : 1;
+ unsigned long dis_ring_buf_to_force_dec : 1;
+ unsigned long dis_addr_comp_in_16bit : 1;
+ unsigned long clr_w : 1;
+ unsigned long err_rd_tag_is_3 : 1;
+ unsigned long err_load_when_ful_a : 1;
+ unsigned long err_load_when_ful_b : 1;
+ unsigned long : 7;
+} __attribute__((packed));
+
+union cif_read_dbg_u {
+ unsigned long val : 32;
+ struct cif_read_dbg_t f;
+} __attribute__((packed));
+
+struct cif_write_dbg_t {
+ unsigned long packer_timeout_count : 2;
+ unsigned long en_upper_load_cond : 1;
+ unsigned long en_chnl_change_cond : 1;
+ unsigned long dis_addr_comp_cond : 1;
+ unsigned long dis_load_same_byte_addr_cond : 1;
+ unsigned long dis_timeout_cond : 1;
+ unsigned long dis_timeout_during_rbbm : 1;
+ unsigned long dis_packer_ful_during_rbbm_timeout : 1;
+ unsigned long en_dword_split_to_rbbm : 1;
+ unsigned long en_dummy_val : 1;
+ unsigned long dummy_val_sel : 1;
+ unsigned long mask_pm4_wrptr_dec : 1;
+ unsigned long dis_mc_clean_cond : 1;
+ unsigned long err_two_reqi_during_ful : 1;
+ unsigned long err_reqi_during_idle_clk : 1;
+ unsigned long err_global : 1;
+ unsigned long en_wr_buf_dbg_load : 1;
+ unsigned long en_wr_buf_dbg_path : 1;
+ unsigned long sel_wr_buf_byte : 3;
+ unsigned long dis_rd_flush_wr : 1;
+ unsigned long dis_packer_ful_cond : 1;
+ unsigned long dis_invalidate_by_ops_chnl : 1;
+ unsigned long en_halt_when_reqi_err : 1;
+ unsigned long cif_spare_2 : 5;
+ unsigned long : 1;
+} __attribute__((packed));
+
+union cif_write_dbg_u {
+ unsigned long val : 32;
+ struct cif_write_dbg_t f;
+} __attribute__((packed));
+
+
+struct intf_cntl_t {
+ unsigned char ad_inc_a : 1;
+ unsigned char ring_buf_a : 1;
+ unsigned char rd_fetch_trigger_a : 1;
+ unsigned char rd_data_rdy_a : 1;
+ unsigned char ad_inc_b : 1;
+ unsigned char ring_buf_b : 1;
+ unsigned char rd_fetch_trigger_b : 1;
+ unsigned char rd_data_rdy_b : 1;
+} __attribute__((packed));
+
+union intf_cntl_u {
+ unsigned char val : 8;
+ struct intf_cntl_t f;
+} __attribute__((packed));
+
+struct cpu_defaults_t {
+ unsigned char unpack_rd_data : 1;
+ unsigned char access_ind_addr_a: 1;
+ unsigned char access_ind_addr_b: 1;
+ unsigned char access_scratch_reg : 1;
+ unsigned char pack_wr_data : 1;
+ unsigned char transition_size : 1;
+ unsigned char en_read_buf_mode : 1;
+ unsigned char rd_fetch_scratch : 1;
+} __attribute__((packed));
+
+union cpu_defaults_u {
+ unsigned char val : 8;
+ struct cpu_defaults_t f;
+} __attribute__((packed));
+
+struct video_ctrl_t {
+ unsigned long video_mode : 1;
+ unsigned long keyer_en : 1;
+ unsigned long en_video_req : 1;
+ unsigned long en_graphic_req_video : 1;
+ unsigned long en_video_crtc : 1;
+ unsigned long video_hor_exp : 2;
+ unsigned long video_ver_exp : 2;
+ unsigned long uv_combine : 1;
+ unsigned long total_req_video : 9;
+ unsigned long video_ch_sel : 1;
+ unsigned long video_portrait : 2;
+ unsigned long yuv2rgb_en : 1;
+ unsigned long yuv2rgb_option : 1;
+ unsigned long video_inv_hor : 1;
+ unsigned long video_inv_ver : 1;
+ unsigned long gamma_sel : 2;
+ unsigned long dis_limit : 1;
+ unsigned long en_uv_hblend : 1;
+ unsigned long rgb_gamma_sel : 2;
+} __attribute__((packed));
+
+union video_ctrl_u {
+ unsigned long val : 32;
+ struct video_ctrl_t f;
+} __attribute__((packed));
+
+struct disp_db_buf_cntl_rd_t {
+ unsigned long en_db_buf : 1;
+ unsigned long update_db_buf_done : 1;
+ unsigned long db_buf_cntl : 6;
+ unsigned long : 24;
+} __attribute__((packed));
+
+union disp_db_buf_cntl_rd_u {
+ unsigned long val : 32;
+ struct disp_db_buf_cntl_rd_t f;
+} __attribute__((packed));
+
+struct disp_db_buf_cntl_wr_t {
+ unsigned long en_db_buf : 1;
+ unsigned long update_db_buf : 1;
+ unsigned long db_buf_cntl : 6;
+ unsigned long : 24;
+} __attribute__((packed));
+
+union disp_db_buf_cntl_wr_u {
+ unsigned long val : 32;
+ struct disp_db_buf_cntl_wr_t f;
+} __attribute__((packed));
+
+struct gamma_value1_t {
+ unsigned long gamma1 : 8;
+ unsigned long gamma2 : 8;
+ unsigned long gamma3 : 8;
+ unsigned long gamma4 : 8;
+} __attribute__((packed));
+
+union gamma_value1_u {
+ unsigned long val : 32;
+ struct gamma_value1_t f;
+} __attribute__((packed));
+
+struct gamma_value2_t {
+ unsigned long gamma5 : 8;
+ unsigned long gamma6 : 8;
+ unsigned long gamma7 : 8;
+ unsigned long gamma8 : 8;
+} __attribute__((packed));
+
+union gamma_value2_u {
+ unsigned long val : 32;
+ struct gamma_value2_t f;
+} __attribute__((packed));
+
+struct gamma_slope_t {
+ unsigned long slope1 : 3;
+ unsigned long slope2 : 3;
+ unsigned long slope3 : 3;
+ unsigned long slope4 : 3;
+ unsigned long slope5 : 3;
+ unsigned long slope6 : 3;
+ unsigned long slope7 : 3;
+ unsigned long slope8 : 3;
+ unsigned long : 8;
+} __attribute__((packed));
+
+union gamma_slope_u {
+ unsigned long val : 32;
+ struct gamma_slope_t f;
+} __attribute__((packed));
+
+struct mc_ext_mem_location_t {
+ unsigned long mc_ext_mem_start : 16;
+ unsigned long mc_ext_mem_top : 16;
+} __attribute__((packed));
+
+union mc_ext_mem_location_u {
+ unsigned long val : 32;
+ struct mc_ext_mem_location_t f;
+} __attribute__((packed));
+
+struct clk_pin_cntl_t {
+ unsigned long osc_en : 1;
+ unsigned long osc_gain : 5;
+ unsigned long dont_use_xtalin : 1;
+ unsigned long xtalin_pm_en : 1;
+ unsigned long xtalin_dbl_en : 1;
+ unsigned long : 7;
+ unsigned long cg_debug : 16;
+} __attribute__((packed));
+
+union clk_pin_cntl_u {
+ unsigned long val : 32;
+ struct clk_pin_cntl_t f;
+} __attribute__((packed));
+
+struct pll_ref_fb_div_t {
+ unsigned long pll_ref_div : 4;
+ unsigned long : 4;
+ unsigned long pll_fb_div_int : 6;
+ unsigned long : 2;
+ unsigned long pll_fb_div_frac : 3;
+ unsigned long : 1;
+ unsigned long pll_reset_time : 4;
+ unsigned long pll_lock_time : 8;
+} __attribute__((packed));
+
+union pll_ref_fb_div_u {
+ unsigned long val : 32;
+ struct pll_ref_fb_div_t f;
+} __attribute__((packed));
+
+struct pll_cntl_t {
+ unsigned long pll_pwdn : 1;
+ unsigned long pll_reset : 1;
+ unsigned long pll_pm_en : 1;
+ unsigned long pll_mode : 1;
+ unsigned long pll_refclk_sel : 1;
+ unsigned long pll_fbclk_sel : 1;
+ unsigned long pll_tcpoff : 1;
+ unsigned long pll_pcp : 3;
+ unsigned long pll_pvg : 3;
+ unsigned long pll_vcofr : 1;
+ unsigned long pll_ioffset : 2;
+ unsigned long pll_pecc_mode : 2;
+ unsigned long pll_pecc_scon : 2;
+ unsigned long pll_dactal : 4;
+ unsigned long pll_cp_clip : 2;
+ unsigned long pll_conf : 3;
+ unsigned long pll_mbctrl : 2;
+ unsigned long pll_ring_off : 1;
+} __attribute__((packed));
+
+union pll_cntl_u {
+ unsigned long val : 32;
+ struct pll_cntl_t f;
+} __attribute__((packed));
+
+struct sclk_cntl_t {
+ unsigned long sclk_src_sel : 2;
+ unsigned long : 2;
+ unsigned long sclk_post_div_fast : 4;
+ unsigned long sclk_clkon_hys : 3;
+ unsigned long sclk_post_div_slow : 4;
+ unsigned long disp_cg_ok2switch_en : 1;
+ unsigned long sclk_force_reg : 1;
+ unsigned long sclk_force_disp : 1;
+ unsigned long sclk_force_mc : 1;
+ unsigned long sclk_force_extmc : 1;
+ unsigned long sclk_force_cp : 1;
+ unsigned long sclk_force_e2 : 1;
+ unsigned long sclk_force_e3 : 1;
+ unsigned long sclk_force_idct : 1;
+ unsigned long sclk_force_bist : 1;
+ unsigned long busy_extend_cp : 1;
+ unsigned long busy_extend_e2 : 1;
+ unsigned long busy_extend_e3 : 1;
+ unsigned long busy_extend_idct : 1;
+ unsigned long : 3;
+} __attribute__((packed));
+
+union sclk_cntl_u {
+ unsigned long val : 32;
+ struct sclk_cntl_t f;
+} __attribute__((packed));
+
+struct pclk_cntl_t {
+ unsigned long pclk_src_sel : 2;
+ unsigned long : 2;
+ unsigned long pclk_post_div : 4;
+ unsigned long : 8;
+ unsigned long pclk_force_disp : 1;
+ unsigned long : 15;
+} __attribute__((packed));
+
+union pclk_cntl_u {
+ unsigned long val : 32;
+ struct pclk_cntl_t f;
+} __attribute__((packed));
+
+struct clk_test_cntl_t {
+ unsigned long testclk_sel : 4;
+ unsigned long : 3;
+ unsigned long start_check_freq : 1;
+ unsigned long tstcount_rst : 1;
+ unsigned long : 15;
+ unsigned long test_count : 8;
+} __attribute__((packed));
+
+union clk_test_cntl_u {
+ unsigned long val : 32;
+ struct clk_test_cntl_t f;
+} __attribute__((packed));
+
+struct pwrmgt_cntl_t {
+ unsigned long pwm_enable : 1;
+ unsigned long : 1;
+ unsigned long pwm_mode_req : 2;
+ unsigned long pwm_wakeup_cond : 2;
+ unsigned long pwm_fast_noml_hw_en : 1;
+ unsigned long pwm_noml_fast_hw_en : 1;
+ unsigned long pwm_fast_noml_cond : 4;
+ unsigned long pwm_noml_fast_cond : 4;
+ unsigned long pwm_idle_timer : 8;
+ unsigned long pwm_busy_timer : 8;
+} __attribute__((packed));
+
+union pwrmgt_cntl_u {
+ unsigned long val : 32;
+ struct pwrmgt_cntl_t f;
+} __attribute__((packed));
+
+#endif
+
ds_dev->ep[i+1] = endpoint->bEndpointAddress;
printk("%d: addr=%x, size=%d, dir=%s, type=%x\n",
- i, endpoint->bEndpointAddress, endpoint->wMaxPacketSize,
+ i, endpoint->bEndpointAddress, le16_to_cpu(endpoint->wMaxPacketSize),
(endpoint->bEndpointAddress & USB_DIR_IN)?"IN":"OUT",
endpoint->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK);
}
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/sched.h>
-#include <linux/suspend.h>
#include "w1.h"
#include "w1_io.h"
module_param_named(max_slave_count, w1_max_slave_count, int, 0);
module_param_named(slave_ttl, w1_max_slave_ttl, int, 0);
-spinlock_t w1_mlock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(w1_mlock);
LIST_HEAD(w1_masters);
static pid_t control_thread;
w1_netlink_send(sl->master, &msg);
}
-static void w1_search(struct w1_master *dev)
+static struct w1_master *w1_search_master(unsigned long data)
{
- u64 last, rn, tmp;
- int i, count = 0, slave_count;
- int last_family_desc, last_zero, last_device;
- int search_bit, id_bit, comp_bit, desc_bit;
- struct list_head *ent;
+ struct w1_master *dev;
+ int found = 0;
+
+ spin_lock_irq(&w1_mlock);
+ list_for_each_entry(dev, &w1_masters, w1_master_entry) {
+ if (dev->bus_master->data == data) {
+ found = 1;
+ atomic_inc(&dev->refcnt);
+ break;
+ }
+ }
+ spin_unlock_irq(&w1_mlock);
+
+ return (found)?dev:NULL;
+}
+
+void w1_slave_found(unsigned long data, u64 rn)
+{
+ int slave_count;
struct w1_slave *sl;
+ struct list_head *ent;
+ struct w1_reg_num *tmp;
int family_found = 0;
+ struct w1_master *dev;
+
+ dev = w1_search_master(data);
+ if (!dev) {
+		printk(KERN_ERR "Failed to find w1 master device for data %08lx; this should not happen.\n",
+				data);
+ return;
+ }
+
+ tmp = (struct w1_reg_num *) &rn;
+
+ slave_count = 0;
+ list_for_each(ent, &dev->slist) {
+
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
- dev->attempts++;
+ if (sl->reg_num.family == tmp->family &&
+ sl->reg_num.id == tmp->id &&
+ sl->reg_num.crc == tmp->crc) {
+ set_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
+ break;
+ }
+ else if (sl->reg_num.family == tmp->family) {
+ family_found = 1;
+ break;
+ }
+
+ slave_count++;
+ }
+
+ if (slave_count == dev->slave_count &&
+ rn && ((rn >> 56) & 0xff) == w1_calc_crc8((u8 *)&rn, 7)) {
+ w1_attach_slave_device(dev, (struct w1_reg_num *) &rn);
+ }
+
+ atomic_dec(&dev->refcnt);
+}
+
+void w1_search(struct w1_master *dev)
+{
+ u64 last, rn, tmp;
+ int i, count = 0;
+ int last_family_desc, last_zero, last_device;
+ int search_bit, id_bit, comp_bit, desc_bit;
search_bit = id_bit = comp_bit = 0;
rn = tmp = last = 0;
last_device = 1;
desc_bit = last_zero;
-
- slave_count = 0;
- list_for_each(ent, &dev->slist) {
- struct w1_reg_num *tmp;
-
- tmp = (struct w1_reg_num *) &rn;
-
- sl = list_entry(ent, struct w1_slave, w1_slave_entry);
-
- if (sl->reg_num.family == tmp->family &&
- sl->reg_num.id == tmp->id &&
- sl->reg_num.crc == tmp->crc) {
- set_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
- break;
- }
- else if (sl->reg_num.family == tmp->family) {
- family_found = 1;
- break;
- }
-
- slave_count++;
- }
-
- if (slave_count == dev->slave_count &&
- rn && ((rn >> 56) & 0xff) == w1_calc_crc8((u8 *)&rn, 7)) {
- w1_attach_slave_device(dev, (struct w1_reg_num *) &rn);
- }
+
+ w1_slave_found(dev->bus_master->data, rn);
}
}
timeout = w1_timeout*HZ;
do {
timeout = interruptible_sleep_on_timeout(&w1_control_wait, timeout);
- if (current->flags & PF_FREEZE)
- refrigerator(PF_FREEZE);
+ try_to_freeze(PF_FREEZE);
} while (!signal_pending(current) && (timeout > 0));
if (signal_pending(current))
timeout = w1_timeout*HZ;
do {
timeout = interruptible_sleep_on_timeout(&dev->kwait, timeout);
- if (current->flags & PF_FREEZE)
- refrigerator(PF_FREEZE);
+ try_to_freeze(PF_FREEZE);
} while (!signal_pending(current) && (timeout > 0));
if (signal_pending(current))
clear_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
}
- w1_search(dev);
-
+ w1_search_devices(dev, w1_slave_found);
+
list_for_each_safe(ent, n, &dev->slist) {
sl = list_entry(ent, struct w1_slave, w1_slave_entry);
struct device_attribute attr_name, attr_val;
};
+typedef void (* w1_slave_found_callback)(unsigned long, u64);
+
struct w1_bus_master
{
unsigned long data;
u8 (*touch_bit)(unsigned long, u8);
u8 (*reset_bus)(unsigned long);
+
+ void (*search)(unsigned long, w1_slave_found_callback);
};
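
The new search hook above lets a bus master whose hardware can enumerate ROM ids itself bypass the bit-by-bit software search. A minimal driver-side sketch follows (the example_* names are hypothetical; only the w1_slave_found_callback typedef and the search member come from this patch):

#include "w1.h"

/* Hypothetical hardware search: hand every ROM id the controller has
 * discovered back to the w1 core through the supplied callback.  If the
 * hook is left NULL, w1_search_devices() falls back to the generic
 * software w1_search(). */
static void example_hw_search(unsigned long data, w1_slave_found_callback cb)
{
	u64 rn = 0x2800000012345610ULL;	/* stand-in for a discovered ROM id */

	cb(data, rn);
}

static struct w1_bus_master example_bus_master = {
	.data   = 0,			/* controller handle, filled in at probe time */
	.search = example_hw_search,	/* optional; a real driver must also set
					 * .touch_bit and .reset_bus */
};
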
struct w1_master
int w1_create_master_attributes(struct w1_master *);
void w1_destroy_master_attributes(struct w1_master *);
+void w1_search(struct w1_master *dev);
#endif /* __KERNEL__ */
#include "w1_family.h"
-spinlock_t w1_flock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(w1_flock);
static LIST_HEAD(w1_families);
static int w1_check_family(struct w1_family *f)
return crc;
}
+void w1_search_devices(struct w1_master *dev, w1_slave_found_callback cb)
+{
+ dev->attempts++;
+ if (dev->bus_master->search)
+ dev->bus_master->search(dev->bus_master->data, cb);
+ else
+ w1_search(dev);
+}
+
EXPORT_SYMBOL(w1_write_bit);
EXPORT_SYMBOL(w1_write_8);
EXPORT_SYMBOL(w1_read_bit);
EXPORT_SYMBOL(w1_delay);
EXPORT_SYMBOL(w1_read_block);
EXPORT_SYMBOL(w1_write_block);
+EXPORT_SYMBOL(w1_search_devices);
u8 w1_calc_crc8(u8 *, int);
void w1_write_block(struct w1_master *, u8 *, int);
u8 w1_read_block(struct w1_master *, u8 *, int);
+void w1_search_devices(struct w1_master *dev, w1_slave_found_callback cb);
#endif /* __W1_IO_H */
ld.so (check the file <file:Documentation/Changes> for location and
latest version).
+config BINFMT_ELF_FDPIC
+ bool "Kernel support for FDPIC ELF binaries"
+ default y
+ depends on FRV
+ help
+ ELF FDPIC binaries are based on ELF, but allow the individual load
+ segments of a binary to be located in memory independently of each
+ other. This makes this format ideal for use in environments where no
+ MMU is available as it still permits text segments to be shared,
+ even if data segments are not.
+
+	  It is also possible to run FDPIC ELF binaries on MMU Linux.
+
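
As a rough illustration of what independently located load segments mean at run time (a sketch, not part of the patch): each PT_LOAD segment records where it was actually mapped, and image addresses are translated through the per-process load map. The elf32_fdpic_loadmap/elf32_fdpic_loadseg structures come from <linux/elf-fdpic.h>; the helper name is made up.

#include <linux/elf-fdpic.h>

/* Sketch: translate a link-time address (the p_vaddr space of the ELF
 * image) into the address the containing segment was mapped at.
 * Returns 0 if no segment in the load map covers the address. */
static unsigned long example_fdpic_translate(struct elf32_fdpic_loadmap *map,
					     unsigned long vaddr)
{
	struct elf32_fdpic_loadseg *seg = map->segs;
	int loop;

	for (loop = 0; loop < map->nsegs; loop++, seg++)
		if (vaddr >= seg->p_vaddr &&
		    vaddr < seg->p_vaddr + seg->p_memsz)
			return vaddr - seg->p_vaddr + seg->addr;

	return 0;
}
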
config BINFMT_FLAT
tristate "Kernel support for flat binaries"
depends on !MMU || SUPERH
/*
* For future. This should probably be per-directory.
*/
-static rwlock_t adfs_dir_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(adfs_dir_lock);
static int
adfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
return;
cur_time:
- *tv = CURRENT_TIME;
+ *tv = CURRENT_TIME_SEC;
return;
too_early:
/*
* For the future...
*/
-static rwlock_t adfs_map_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(adfs_map_lock);
/*
* This is fun. We need to load up to 19 bits from the map at an
--- /dev/null
+#include <linux/types.h>
+#include <linux/fs.h>
+#include <linux/buffer_head.h>
+#include <linux/affs_fs.h>
+#include <linux/amigaffs.h>
+
+/* AmigaOS allows file names of up to 30 characters in length.
+ * Names longer than that will be silently truncated. If you
+ * want to disallow this, uncomment the following #define.
+ * Creating filesystem objects with longer names will then
+ * result in an error (ENAMETOOLONG).
+ */
+/*#define AFFS_NO_TRUNCATE */
+
+/* Ugly macros make the code more pretty. */
+
+#define GET_END_PTR(st,p,sz) ((st *)((char *)(p)+((sz)-sizeof(st))))
+#define AFFS_GET_HASHENTRY(data,hashkey) be32_to_cpu(((struct dir_front *)data)->hashtable[hashkey])
+#define AFFS_BLOCK(sb, bh, blk) (AFFS_HEAD(bh)->table[AFFS_SB(sb)->s_hashsize-1-(blk)])
+
+#ifdef __LITTLE_ENDIAN
+#define BO_EXBITS 0x18UL
+#elif defined(__BIG_ENDIAN)
+#define BO_EXBITS 0x00UL
+#else
+#error Endianness must be known for affs to work.
+#endif
+
+#define AFFS_HEAD(bh) ((struct affs_head *)(bh)->b_data)
+#define AFFS_TAIL(sb, bh) ((struct affs_tail *)((bh)->b_data+(sb)->s_blocksize-sizeof(struct affs_tail)))
+#define AFFS_ROOT_HEAD(bh) ((struct affs_root_head *)(bh)->b_data)
+#define AFFS_ROOT_TAIL(sb, bh) ((struct affs_root_tail *)((bh)->b_data+(sb)->s_blocksize-sizeof(struct affs_root_tail)))
+#define AFFS_DATA_HEAD(bh) ((struct affs_data_head *)(bh)->b_data)
+#define AFFS_DATA(bh) (((struct affs_data_head *)(bh)->b_data)->data)
+
+#define AFFS_CACHE_SIZE PAGE_SIZE
+
+#define AFFS_MAX_PREALLOC 32
+#define AFFS_LC_SIZE (AFFS_CACHE_SIZE/sizeof(u32)/2)
+#define AFFS_AC_SIZE (AFFS_CACHE_SIZE/sizeof(struct affs_ext_key)/2)
+#define AFFS_AC_MASK (AFFS_AC_SIZE-1)
+
+struct affs_ext_key {
+ u32 ext; /* idx of the extended block */
+ u32 key; /* block number */
+};
+
+/*
+ * affs fs inode data in memory
+ */
+struct affs_inode_info {
+ u32 i_opencnt;
+ struct semaphore i_link_lock; /* Protects internal inode access. */
+ struct semaphore i_ext_lock; /* Protects internal inode access. */
+#define i_hash_lock i_ext_lock
+ u32 i_blkcnt; /* block count */
+ u32 i_extcnt; /* extended block count */
+ u32 *i_lc; /* linear cache of extended blocks */
+ u32 i_lc_size;
+ u32 i_lc_shift;
+ u32 i_lc_mask;
+ struct affs_ext_key *i_ac; /* associative cache of extended blocks */
+ u32 i_ext_last; /* last accessed extended block */
+ struct buffer_head *i_ext_bh; /* bh of last extended block */
+ loff_t mmu_private;
+ u32 i_protect; /* unused attribute bits */
+ u32 i_lastalloc; /* last allocated block */
+ int i_pa_cnt; /* number of preallocated blocks */
+ struct inode vfs_inode;
+};
+
+/* short cut to get to the affs specific inode data */
+static inline struct affs_inode_info *AFFS_I(struct inode *inode)
+{
+ return list_entry(inode, struct affs_inode_info, vfs_inode);
+}
+
+/*
+ * super-block data in memory
+ *
+ * Block numbers are adjusted for their actual size
+ *
+ */
+
+struct affs_bm_info {
+ u32 bm_key; /* Disk block number */
+ u32 bm_free; /* Free blocks in here */
+};
+
+struct affs_sb_info {
+ int s_partition_size; /* Partition size in blocks. */
+ int s_reserved; /* Number of reserved blocks. */
+ //u32 s_blksize; /* Initial device blksize */
+ u32 s_data_blksize; /* size of the data block w/o header */
+ u32 s_root_block; /* FFS root block number. */
+ int s_hashsize; /* Size of hash table. */
+ unsigned long s_flags; /* See below. */
+ uid_t s_uid; /* uid to override */
+ gid_t s_gid; /* gid to override */
+ umode_t s_mode; /* mode to override */
+ struct buffer_head *s_root_bh; /* Cached root block. */
+ struct semaphore s_bmlock; /* Protects bitmap access. */
+ struct affs_bm_info *s_bitmap; /* Bitmap infos. */
+ u32 s_bmap_count; /* # of bitmap blocks. */
+	u32 s_bmap_bits;		/* # of bits in one bitmap block */
+ u32 s_last_bmap;
+ struct buffer_head *s_bmap_bh;
+ char *s_prefix; /* Prefix for volumes and assigns. */
+ int s_prefix_len; /* Length of prefix. */
+ char s_volume[32]; /* Volume prefix for absolute symlinks. */
+};
+
+#define SF_INTL 0x0001 /* International filesystem. */
+#define SF_BM_VALID 0x0002 /* Bitmap is valid. */
+#define SF_IMMUTABLE 0x0004 /* Protection bits cannot be changed */
+#define SF_QUIET 0x0008 /* chmod errors will be not reported */
+#define SF_SETUID 0x0010 /* Ignore Amiga uid */
+#define SF_SETGID 0x0020 /* Ignore Amiga gid */
+#define SF_SETMODE 0x0040 /* Ignore Amiga protection bits */
+#define SF_MUFS 0x0100 /* Use MUFS uid/gid mapping */
+#define SF_OFS 0x0200 /* Old filesystem */
+#define SF_PREFIX 0x0400 /* Buffer for prefix is allocated */
+#define SF_VERBOSE 0x0800 /* Talk about fs when mounting */
+
+/* short cut to get to the affs specific sb data */
+static inline struct affs_sb_info *AFFS_SB(struct super_block *sb)
+{
+ return sb->s_fs_info;
+}
+
+/* amigaffs.c */
+
+extern int affs_insert_hash(struct inode *inode, struct buffer_head *bh);
+extern int affs_remove_hash(struct inode *dir, struct buffer_head *rem_bh);
+extern int affs_remove_header(struct dentry *dentry);
+extern u32 affs_checksum_block(struct super_block *sb, struct buffer_head *bh);
+extern void affs_fix_checksum(struct super_block *sb, struct buffer_head *bh);
+extern void secs_to_datestamp(time_t secs, struct affs_date *ds);
+extern mode_t prot_to_mode(u32 prot);
+extern void mode_to_prot(struct inode *inode);
+extern void affs_error(struct super_block *sb, const char *function, const char *fmt, ...);
+extern void affs_warning(struct super_block *sb, const char *function, const char *fmt, ...);
+extern int affs_check_name(const unsigned char *name, int len);
+extern int affs_copy_name(unsigned char *bstr, struct dentry *dentry);
+
+/* bitmap.c */
+
+extern u32 affs_count_free_blocks(struct super_block *s);
+extern void affs_free_block(struct super_block *sb, u32 block);
+extern u32 affs_alloc_block(struct inode *inode, u32 goal);
+extern int affs_init_bitmap(struct super_block *sb, int *flags);
+extern void affs_free_bitmap(struct super_block *sb);
+
+/* namei.c */
+
+extern int affs_hash_name(struct super_block *sb, const u8 *name, unsigned int len);
+extern struct dentry *affs_lookup(struct inode *dir, struct dentry *dentry, struct nameidata *);
+extern int affs_unlink(struct inode *dir, struct dentry *dentry);
+extern int affs_create(struct inode *dir, struct dentry *dentry, int mode, struct nameidata *);
+extern int affs_mkdir(struct inode *dir, struct dentry *dentry, int mode);
+extern int affs_rmdir(struct inode *dir, struct dentry *dentry);
+extern int affs_link(struct dentry *olddentry, struct inode *dir,
+ struct dentry *dentry);
+extern int affs_symlink(struct inode *dir, struct dentry *dentry,
+ const char *symname);
+extern int affs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ struct inode *new_dir, struct dentry *new_dentry);
+
+/* inode.c */
+
+extern unsigned long affs_parent_ino(struct inode *dir);
+extern struct inode *affs_new_inode(struct inode *dir);
+extern int affs_notify_change(struct dentry *dentry, struct iattr *attr);
+extern void affs_put_inode(struct inode *inode);
+extern void affs_delete_inode(struct inode *inode);
+extern void affs_clear_inode(struct inode *inode);
+extern void affs_read_inode(struct inode *inode);
+extern int affs_write_inode(struct inode *inode, int);
+extern int affs_add_entry(struct inode *dir, struct inode *inode, struct dentry *dentry, s32 type);
+
+/* file.c */
+
+void affs_free_prealloc(struct inode *inode);
+extern void affs_truncate(struct inode *);
+
+/* dir.c */
+
+extern void affs_dir_truncate(struct inode *);
+
+/* jump tables */
+
+extern struct inode_operations affs_file_inode_operations;
+extern struct inode_operations affs_dir_inode_operations;
+extern struct inode_operations affs_symlink_inode_operations;
+extern struct file_operations affs_file_operations;
+extern struct file_operations affs_file_operations_ofs;
+extern struct file_operations affs_dir_operations;
+extern struct address_space_operations affs_symlink_aops;
+extern struct address_space_operations affs_aops;
+extern struct address_space_operations affs_aops_ofs;
+
+extern struct dentry_operations affs_dentry_operations;
+extern struct dentry_operations affs_dentry_operations_intl;
+
+static inline void
+affs_set_blocksize(struct super_block *sb, int size)
+{
+ sb_set_blocksize(sb, size);
+}
+static inline struct buffer_head *
+affs_bread(struct super_block *sb, int block)
+{
+ pr_debug("affs_bread: %d\n", block);
+ if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size)
+ return sb_bread(sb, block);
+ return NULL;
+}
+static inline struct buffer_head *
+affs_getblk(struct super_block *sb, int block)
+{
+ pr_debug("affs_getblk: %d\n", block);
+ if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size)
+ return sb_getblk(sb, block);
+ return NULL;
+}
+static inline struct buffer_head *
+affs_getzeroblk(struct super_block *sb, int block)
+{
+ struct buffer_head *bh;
+ pr_debug("affs_getzeroblk: %d\n", block);
+ if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) {
+ bh = sb_getblk(sb, block);
+ lock_buffer(bh);
+ memset(bh->b_data, 0 , sb->s_blocksize);
+ set_buffer_uptodate(bh);
+ unlock_buffer(bh);
+ return bh;
+ }
+ return NULL;
+}
+static inline struct buffer_head *
+affs_getemptyblk(struct super_block *sb, int block)
+{
+ struct buffer_head *bh;
+ pr_debug("affs_getemptyblk: %d\n", block);
+ if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) {
+ bh = sb_getblk(sb, block);
+ wait_on_buffer(bh);
+ set_buffer_uptodate(bh);
+ return bh;
+ }
+ return NULL;
+}
+static inline void
+affs_brelse(struct buffer_head *bh)
+{
+ if (bh)
+ pr_debug("affs_brelse: %lld\n", (long long) bh->b_blocknr);
+ brelse(bh);
+}
+
+static inline void
+affs_adjust_checksum(struct buffer_head *bh, u32 val)
+{
+ u32 tmp = be32_to_cpu(((__be32 *)bh->b_data)[5]);
+ ((__be32 *)bh->b_data)[5] = cpu_to_be32(tmp - val);
+}
+static inline void
+affs_adjust_bitmapchecksum(struct buffer_head *bh, u32 val)
+{
+ u32 tmp = be32_to_cpu(((__be32 *)bh->b_data)[0]);
+ ((__be32 *)bh->b_data)[0] = cpu_to_be32(tmp - val);
+}
+
+static inline void
+affs_lock_link(struct inode *inode)
+{
+ down(&AFFS_I(inode)->i_link_lock);
+}
+static inline void
+affs_unlock_link(struct inode *inode)
+{
+ up(&AFFS_I(inode)->i_link_lock);
+}
+static inline void
+affs_lock_dir(struct inode *inode)
+{
+ down(&AFFS_I(inode)->i_hash_lock);
+}
+static inline void
+affs_unlock_dir(struct inode *inode)
+{
+ up(&AFFS_I(inode)->i_hash_lock);
+}
+static inline void
+affs_lock_ext(struct inode *inode)
+{
+ down(&AFFS_I(inode)->i_ext_lock);
+}
+static inline void
+affs_unlock_ext(struct inode *inode)
+{
+ up(&AFFS_I(inode)->i_ext_lock);
+}
*
*/
-#include <asm/uaccess.h>
-#include <linux/errno.h>
-#include <linux/fs.h>
-#include <linux/kernel.h>
-#include <linux/affs_fs.h>
-#include <linux/stat.h>
-#include <linux/string.h>
-#include <linux/mm.h>
-#include <linux/amigaffs.h>
-#include <linux/smp_lock.h>
-#include <linux/buffer_head.h>
+#include "affs.h"
static int affs_readdir(struct file *, void *, filldir_t);
* affs regular file handling primitives
*/
-#include <asm/div64.h>
-#include <asm/uaccess.h>
-#include <asm/system.h>
-#include <linux/time.h>
-#include <linux/affs_fs.h>
-#include <linux/fcntl.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/stat.h>
-#include <linux/smp_lock.h>
-#include <linux/dirent.h>
-#include <linux/fs.h>
-#include <linux/amigaffs.h>
-#include <linux/mm.h>
-#include <linux/highmem.h>
-#include <linux/pagemap.h>
-#include <linux/buffer_head.h>
+#include "affs.h"
#if PAGE_SIZE < 4096
#error PAGE_SIZE must be at least 4096
retval = generic_file_write (file, buf, count, ppos);
if (retval >0) {
struct inode *inode = file->f_dentry->d_inode;
- inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
}
return retval;
* (C) 1991 Linus Torvalds - minix filesystem
*/
-#include <linux/time.h>
-#include <linux/affs_fs.h>
-#include <linux/kernel.h>
-#include <linux/string.h>
-#include <linux/stat.h>
-#include <linux/fcntl.h>
-#include <linux/amigaffs.h>
-#include <linux/smp_lock.h>
-#include <linux/buffer_head.h>
-#include <asm/uaccess.h>
-
-#include <linux/errno.h>
+#include "affs.h"
typedef int (*toupper_t)(int);
-extern struct inode_operations affs_symlink_inode_operations;
-
static int affs_toupper(int ch);
static int affs_hash_dentry(struct dentry *, struct qstr *);
static int affs_compare_dentry(struct dentry *, struct qstr *, struct qstr *);
LIST_HEAD(afs_proc_cells);
static struct list_head afs_cells = LIST_HEAD_INIT(afs_cells);
-static rwlock_t afs_cells_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(afs_cells_lock);
static DECLARE_RWSEM(afs_cells_sem); /* add/remove serialisation */
static struct afs_cell *afs_cell_root;
static LIST_HEAD(kafsasyncd_async_attnq);
static LIST_HEAD(kafsasyncd_async_busyq);
-static spinlock_t kafsasyncd_async_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(kafsasyncd_async_lock);
static void kafsasyncd_null_call_attn_func(struct rxrpc_call *call)
{
static int kafstimod_die;
static LIST_HEAD(kafstimod_list);
-static spinlock_t kafstimod_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(kafstimod_lock);
static int kafstimod(void *arg);
};
struct list_head afs_cb_hash_tbl[AFS_CB_HASH_COUNT];
-spinlock_t afs_cb_hash_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(afs_cb_hash_lock);
#ifdef AFS_CACHING_SUPPORT
static struct cachefs_netfs_operations afs_cache_ops = {
#include "kafstimod.h"
#include "internal.h"
-spinlock_t afs_server_peer_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(afs_server_peer_lock);
#define FS_SERVICE_ID 1 /* AFS Volume Location Service ID */
#define VL_SERVICE_ID 52 /* AFS Volume Location Service ID */
AFSVL_BACKVOL, /* backup volume */
} __attribute__((packed)) afs_voltype_t;
-extern const char *afs_voltypes[];
-
typedef enum {
AFS_FTYPE_INVALID = 0,
AFS_FTYPE_FILE = 1,
#include "vlclient.h"
#include "internal.h"
-const char *afs_voltypes[] = { "R/W", "R/O", "BAK" };
+#ifdef __KDEBUG
+static const char *afs_voltypes[] = { "R/W", "R/O", "BAK" };
+#endif
#ifdef AFS_CACHING_SUPPORT
static cachefs_match_val_t afs_volume_cache_match(void *target,
s->s_blocksize_bits = 10;
s->s_magic = AUTOFS_SUPER_MAGIC;
s->s_op = &autofs_sops;
+ s->s_time_gran = 1;
root_inode = iget(s, AUTOFS_ROOT_INO);
root = d_alloc_root(root_inode);
}
inode->i_size = 0;
- inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC;
lock_kernel();
mark_inode_dirty(inode);
block = (ino - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
--- /dev/null
+/* binfmt_elf_fdpic.c: FDPIC ELF binary format
+ *
+ * Copyright (C) 2003, 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * Derived from binfmt_elf.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+
+#include <linux/fs.h>
+#include <linux/stat.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/errno.h>
+#include <linux/signal.h>
+#include <linux/binfmts.h>
+#include <linux/string.h>
+#include <linux/file.h>
+#include <linux/fcntl.h>
+#include <linux/slab.h>
+#include <linux/highmem.h>
+#include <linux/personality.h>
+#include <linux/ptrace.h>
+#include <linux/init.h>
+#include <linux/smp_lock.h>
+#include <linux/elf.h>
+#include <linux/elf-fdpic.h>
+#include <linux/elfcore.h>
+
+#include <asm/uaccess.h>
+#include <asm/param.h>
+#include <asm/pgalloc.h>
+
+typedef char *elf_caddr_t;
+#ifndef elf_addr_t
+#define elf_addr_t unsigned long
+#endif
+
+#if 0
+#define kdebug(fmt, ...) printk("FDPIC "fmt"\n" ,##__VA_ARGS__ )
+#else
+#define kdebug(fmt, ...) do {} while(0)
+#endif
+
+MODULE_LICENSE("GPL");
+
+static int load_elf_fdpic_binary(struct linux_binprm *bprm, struct pt_regs *regs);
+//static int load_elf_fdpic_library(struct file *);
+static int elf_fdpic_fetch_phdrs(struct elf_fdpic_params *params, struct file *file);
+static int elf_fdpic_map_file(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm,
+ const char *what);
+
+static int create_elf_fdpic_tables(struct linux_binprm *bprm,
+ struct mm_struct *mm,
+ struct elf_fdpic_params *exec_params,
+ struct elf_fdpic_params *interp_params);
+
+#ifndef CONFIG_MMU
+static int elf_fdpic_transfer_args_to_stack(struct linux_binprm *bprm, unsigned long *_sp);
+static int elf_fdpic_map_file_constdisp_on_uclinux(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm);
+#endif
+
+static int elf_fdpic_map_file_by_direct_mmap(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm);
+
+static struct linux_binfmt elf_fdpic_format = {
+ .module = THIS_MODULE,
+ .load_binary = load_elf_fdpic_binary,
+// .load_shlib = load_elf_fdpic_library,
+// .core_dump = elf_fdpic_core_dump,
+ .min_coredump = ELF_EXEC_PAGESIZE,
+};
+
+static int __init init_elf_fdpic_binfmt(void) { return register_binfmt(&elf_fdpic_format); }
+static void __exit exit_elf_fdpic_binfmt(void) { unregister_binfmt(&elf_fdpic_format); }
+
+module_init(init_elf_fdpic_binfmt)
+module_exit(exit_elf_fdpic_binfmt)
+
+static int is_elf_fdpic(struct elfhdr *hdr, struct file *file)
+{
+ if (memcmp(hdr->e_ident, ELFMAG, SELFMAG) != 0)
+ return 0;
+ if (hdr->e_type != ET_EXEC && hdr->e_type != ET_DYN)
+ return 0;
+ if (!elf_check_arch(hdr) || !elf_check_fdpic(hdr))
+ return 0;
+ if (!file->f_op || !file->f_op->mmap)
+ return 0;
+ return 1;
+}
+
+/*****************************************************************************/
+/*
+ * read the program headers table into memory
+ */
+static int elf_fdpic_fetch_phdrs(struct elf_fdpic_params *params, struct file *file)
+{
+ struct elf32_phdr *phdr;
+ unsigned long size;
+ int retval, loop;
+
+ if (params->hdr.e_phentsize != sizeof(struct elf_phdr))
+ return -ENOMEM;
+ if (params->hdr.e_phnum > 65536U / sizeof(struct elf_phdr))
+ return -ENOMEM;
+
+ size = params->hdr.e_phnum * sizeof(struct elf_phdr);
+ params->phdrs = kmalloc(size, GFP_KERNEL);
+ if (!params->phdrs)
+ return -ENOMEM;
+
+ retval = kernel_read(file, params->hdr.e_phoff, (char *) params->phdrs, size);
+ if (retval < 0)
+ return retval;
+
+ /* determine stack size for this binary */
+ phdr = params->phdrs;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ if (phdr->p_type != PT_GNU_STACK)
+ continue;
+
+ if (phdr->p_flags & PF_X)
+ params->flags |= ELF_FDPIC_FLAG_EXEC_STACK;
+ else
+ params->flags |= ELF_FDPIC_FLAG_NOEXEC_STACK;
+
+ params->stack_size = phdr->p_memsz;
+ break;
+ }
+
+ return 0;
+} /* end elf_fdpic_fetch_phdrs() */
+
+/*****************************************************************************/
+/*
+ * load an fdpic binary into various bits of memory
+ */
+static int load_elf_fdpic_binary(struct linux_binprm *bprm, struct pt_regs *regs)
+{
+ struct elf_fdpic_params exec_params, interp_params;
+ struct elf_phdr *phdr;
+ unsigned long stack_size;
+ struct file *interpreter = NULL; /* to shut gcc up */
+ char *interpreter_name = NULL;
+ int executable_stack;
+ int retval, i;
+
+ memset(&exec_params, 0, sizeof(exec_params));
+ memset(&interp_params, 0, sizeof(interp_params));
+
+ exec_params.hdr = *(struct elfhdr *) bprm->buf;
+ exec_params.flags = ELF_FDPIC_FLAG_PRESENT | ELF_FDPIC_FLAG_EXECUTABLE;
+
+ /* check that this is a binary we know how to deal with */
+ retval = -ENOEXEC;
+ if (!is_elf_fdpic(&exec_params.hdr, bprm->file))
+ goto error;
+
+ /* read the program header table */
+ retval = elf_fdpic_fetch_phdrs(&exec_params, bprm->file);
+ if (retval < 0)
+ goto error;
+
+ /* scan for a program header that specifies an interpreter */
+ phdr = exec_params.phdrs;
+
+ for (i = 0; i < exec_params.hdr.e_phnum; i++, phdr++) {
+ switch (phdr->p_type) {
+ case PT_INTERP:
+ retval = -ENOMEM;
+ if (phdr->p_filesz > PATH_MAX)
+ goto error;
+ retval = -ENOENT;
+ if (phdr->p_filesz < 2)
+ goto error;
+
+ /* read the name of the interpreter into memory */
+ interpreter_name = (char *) kmalloc(phdr->p_filesz, GFP_KERNEL);
+ if (!interpreter_name)
+ goto error;
+
+ retval = kernel_read(bprm->file,
+ phdr->p_offset,
+ interpreter_name,
+ phdr->p_filesz);
+ if (retval < 0)
+ goto error;
+
+ retval = -ENOENT;
+ if (interpreter_name[phdr->p_filesz - 1] != '\0')
+ goto error;
+
+ kdebug("Using ELF interpreter %s", interpreter_name);
+
+ /* replace the program with the interpreter */
+ interpreter = open_exec(interpreter_name);
+ retval = PTR_ERR(interpreter);
+ if (IS_ERR(interpreter)) {
+ interpreter = NULL;
+ goto error;
+ }
+
+ retval = kernel_read(interpreter, 0, bprm->buf, BINPRM_BUF_SIZE);
+ if (retval < 0)
+ goto error;
+
+ interp_params.hdr = *((struct elfhdr *) bprm->buf);
+ break;
+
+ case PT_LOAD:
+#ifdef CONFIG_MMU
+ if (exec_params.load_addr == 0)
+ exec_params.load_addr = phdr->p_vaddr;
+#endif
+ break;
+ }
+
+ }
+
+ if (elf_check_const_displacement(&exec_params.hdr))
+ exec_params.flags |= ELF_FDPIC_FLAG_CONSTDISP;
+
+ /* perform insanity checks on the interpreter */
+ if (interpreter_name) {
+ retval = -ELIBBAD;
+ if (!is_elf_fdpic(&interp_params.hdr, interpreter))
+ goto error;
+
+ interp_params.flags = ELF_FDPIC_FLAG_PRESENT;
+
+ /* read the interpreter's program header table */
+ retval = elf_fdpic_fetch_phdrs(&interp_params, interpreter);
+ if (retval < 0)
+ goto error;
+ }
+
+ stack_size = exec_params.stack_size;
+ if (stack_size < interp_params.stack_size)
+ stack_size = interp_params.stack_size;
+
+ if (exec_params.flags & ELF_FDPIC_FLAG_EXEC_STACK)
+ executable_stack = EXSTACK_ENABLE_X;
+ else if (exec_params.flags & ELF_FDPIC_FLAG_NOEXEC_STACK)
+ executable_stack = EXSTACK_DISABLE_X;
+ else if (interp_params.flags & ELF_FDPIC_FLAG_EXEC_STACK)
+ executable_stack = EXSTACK_ENABLE_X;
+ else if (interp_params.flags & ELF_FDPIC_FLAG_NOEXEC_STACK)
+ executable_stack = EXSTACK_DISABLE_X;
+ else
+ executable_stack = EXSTACK_DEFAULT;
+
+ retval = -ENOEXEC;
+ if (stack_size == 0)
+ goto error;
+
+ if (elf_check_const_displacement(&interp_params.hdr))
+ interp_params.flags |= ELF_FDPIC_FLAG_CONSTDISP;
+
+ /* flush all traces of the currently running executable */
+ retval = flush_old_exec(bprm);
+ if (retval)
+ goto error;
+
+ /* there's now no turning back... the old userspace image is dead,
+ * defunct, deceased, etc. after this point we have to exit via
+ * error_kill */
+ set_personality(PER_LINUX_FDPIC);
+ set_binfmt(&elf_fdpic_format);
+
+ current->mm->start_code = 0;
+ current->mm->end_code = 0;
+ current->mm->start_stack = 0;
+ current->mm->start_data = 0;
+ current->mm->end_data = 0;
+ current->mm->context.exec_fdpic_loadmap = 0;
+ current->mm->context.interp_fdpic_loadmap = 0;
+
+ current->flags &= ~PF_FORKNOEXEC;
+
+#ifdef CONFIG_MMU
+ elf_fdpic_arch_lay_out_mm(&exec_params,
+ &interp_params,
+				  &current->mm->start_stack,
+				  &current->mm->start_brk);
+#endif
+
+ /* do this so that we can load the interpreter, if need be
+ * - we will change some of these later
+ */
+ current->mm->rss = 0;
+
+#ifdef CONFIG_MMU
+ retval = setup_arg_pages(bprm, current->mm->start_stack, executable_stack);
+ if (retval < 0) {
+ send_sig(SIGKILL, current, 0);
+ goto error_kill;
+ }
+#endif
+
+ /* load the executable and interpreter into memory */
+ retval = elf_fdpic_map_file(&exec_params, bprm->file, current->mm, "executable");
+ if (retval < 0)
+ goto error_kill;
+
+ if (interpreter_name) {
+ retval = elf_fdpic_map_file(&interp_params, interpreter,
+ current->mm, "interpreter");
+ if (retval < 0) {
+ printk(KERN_ERR "Unable to load interpreter\n");
+ goto error_kill;
+ }
+
+ allow_write_access(interpreter);
+ fput(interpreter);
+ interpreter = NULL;
+ }
+
+#ifdef CONFIG_MMU
+ if (!current->mm->start_brk)
+ current->mm->start_brk = current->mm->end_data;
+
+ current->mm->brk = current->mm->start_brk = PAGE_ALIGN(current->mm->start_brk);
+
+#else
+ /* create a stack and brk area big enough for everyone
+ * - the brk heap starts at the bottom and works up
+ * - the stack starts at the top and works down
+ */
+ stack_size = (stack_size + PAGE_SIZE - 1) & PAGE_MASK;
+ if (stack_size < PAGE_SIZE * 2)
+ stack_size = PAGE_SIZE * 2;
+
+	down_write(&current->mm->mmap_sem);
+ current->mm->start_brk = do_mmap(NULL,
+ 0,
+ stack_size,
+ PROT_READ | PROT_WRITE | PROT_EXEC,
+ MAP_PRIVATE | MAP_ANON | MAP_GROWSDOWN,
+ 0);
+
+ if (IS_ERR((void *) current->mm->start_brk)) {
+		up_write(&current->mm->mmap_sem);
+ retval = current->mm->start_brk;
+ current->mm->start_brk = 0;
+ goto error_kill;
+ }
+
+ if (do_mremap(current->mm->start_brk,
+ stack_size,
+ ksize((char *) current->mm->start_brk),
+ 0, 0
+ ) == current->mm->start_brk
+ )
+ stack_size = ksize((char *) current->mm->start_brk);
+	up_write(&current->mm->mmap_sem);
+
+ current->mm->brk = current->mm->start_brk;
+ current->mm->context.end_brk = current->mm->start_brk;
+ current->mm->context.end_brk += (stack_size > PAGE_SIZE) ? (stack_size - PAGE_SIZE) : 0;
+ current->mm->start_stack = current->mm->start_brk + stack_size;
+#endif
+
+ compute_creds(bprm);
+ current->flags &= ~PF_FORKNOEXEC;
+ if (create_elf_fdpic_tables(bprm, current->mm, &exec_params, &interp_params) < 0)
+ goto error_kill;
+
+ kdebug("- start_code %lx", (long) current->mm->start_code);
+ kdebug("- end_code %lx", (long) current->mm->end_code);
+ kdebug("- start_data %lx", (long) current->mm->start_data);
+ kdebug("- end_data %lx", (long) current->mm->end_data);
+ kdebug("- start_brk %lx", (long) current->mm->start_brk);
+ kdebug("- brk %lx", (long) current->mm->brk);
+ kdebug("- start_stack %lx", (long) current->mm->start_stack);
+
+#ifdef ELF_FDPIC_PLAT_INIT
+ /*
+ * The ABI may specify that certain registers be set up in special
+ * ways (on i386 %edx is the address of a DT_FINI function, for
+	 * example). This macro performs whatever initialization to
+ * the regs structure is required.
+ */
+ ELF_FDPIC_PLAT_INIT(regs,
+ exec_params.map_addr,
+ interp_params.map_addr,
+ interp_params.dynamic_addr ?: exec_params.dynamic_addr
+ );
+#endif
+
+ /* everything is now ready... get the userspace context ready to roll */
+ start_thread(regs,
+ interp_params.entry_addr ?: exec_params.entry_addr,
+ current->mm->start_stack);
+
+ if (unlikely(current->ptrace & PT_PTRACED)) {
+ if (current->ptrace & PT_TRACE_EXEC)
+ ptrace_notify ((PTRACE_EVENT_EXEC << 8) | SIGTRAP);
+ else
+ send_sig(SIGTRAP, current, 0);
+ }
+
+ retval = 0;
+
+error:
+ if (interpreter) {
+ allow_write_access(interpreter);
+ fput(interpreter);
+ }
+ if (interpreter_name)
+ kfree(interpreter_name);
+ if (exec_params.phdrs)
+ kfree(exec_params.phdrs);
+ if (exec_params.loadmap)
+ kfree(exec_params.loadmap);
+ if (interp_params.phdrs)
+ kfree(interp_params.phdrs);
+ if (interp_params.loadmap)
+ kfree(interp_params.loadmap);
+ return retval;
+
+ /* unrecoverable error - kill the process */
+ error_kill:
+ send_sig(SIGSEGV, current, 0);
+ goto error;
+
+} /* end load_elf_fdpic_binary() */
+
+/*****************************************************************************/
+/*
+ * present useful information to the program
+ */
+static int create_elf_fdpic_tables(struct linux_binprm *bprm,
+ struct mm_struct *mm,
+ struct elf_fdpic_params *exec_params,
+ struct elf_fdpic_params *interp_params)
+{
+ unsigned long sp, csp, nitems;
+ elf_caddr_t *argv, *envp;
+ size_t platform_len = 0, len;
+ char *k_platform, *u_platform, *p;
+ long hwcap;
+ int loop;
+
+ /* we're going to shovel a whole load of stuff onto the stack */
+#ifdef CONFIG_MMU
+ sp = bprm->p;
+#else
+ sp = mm->start_stack;
+
+ /* stack the program arguments and environment */
+ if (elf_fdpic_transfer_args_to_stack(bprm, &sp) < 0)
+ return -EFAULT;
+#endif
+
+ /* get hold of platform and hardware capabilities masks for the machine
+ * we are running on. In some cases (Sparc), this info is impossible
+ * to get, in others (i386) it is merely difficult.
+ */
+ hwcap = ELF_HWCAP;
+ k_platform = ELF_PLATFORM;
+
+	if (k_platform) {
+		platform_len = strlen(k_platform) + 1;
+		sp -= platform_len;
+		u_platform = (char *) sp;
+		if (__copy_to_user(u_platform, k_platform, platform_len) != 0)
+			return -EFAULT;
+	}
+	else {
+		u_platform = NULL;
+	}
+
+#if defined(__i386__) && defined(CONFIG_SMP)
+ /* in some cases (e.g. Hyper-Threading), we want to avoid L1 evictions
+ * by the processes running on the same package. One thing we can do
+ * is to shuffle the initial stack for them.
+ *
+ * the conditionals here are unneeded, but kept in to make the
+ * code behaviour the same as pre change unless we have hyperthreaded
+ * processors. This keeps Mr Marcelo Person happier but should be
+ * removed for 2.5
+ */
+ if (smp_num_siblings > 1)
+ sp = sp - ((current->pid % 64) << 7);
+#endif
+
+ sp &= ~7UL;
+
+ /* stack the load map(s) */
+ len = sizeof(struct elf32_fdpic_loadmap);
+ len += sizeof(struct elf32_fdpic_loadseg) * exec_params->loadmap->nsegs;
+ sp = (sp - len) & ~7UL;
+ exec_params->map_addr = sp;
+
+ if (copy_to_user((void *) sp, exec_params->loadmap, len) != 0)
+ return -EFAULT;
+
+ current->mm->context.exec_fdpic_loadmap = (unsigned long) sp;
+
+ if (interp_params->loadmap) {
+ len = sizeof(struct elf32_fdpic_loadmap);
+ len += sizeof(struct elf32_fdpic_loadseg) * interp_params->loadmap->nsegs;
+ sp = (sp - len) & ~7UL;
+ interp_params->map_addr = sp;
+
+ if (copy_to_user((void *) sp, interp_params->loadmap, len) != 0)
+ return -EFAULT;
+
+ current->mm->context.interp_fdpic_loadmap = (unsigned long) sp;
+ }
+
+ /* force 16 byte _final_ alignment here for generality */
+#define DLINFO_ITEMS 13
+
+ nitems = 1 + DLINFO_ITEMS + (k_platform ? 1 : 0);
+#ifdef DLINFO_ARCH_ITEMS
+ nitems += DLINFO_ARCH_ITEMS;
+#endif
+
+ csp = sp;
+ sp -= nitems * 2 * sizeof(unsigned long);
+ sp -= (bprm->envc + 1) * sizeof(char *); /* envv[] */
+ sp -= (bprm->argc + 1) * sizeof(char *); /* argv[] */
+ sp -= 1 * sizeof(unsigned long); /* argc */
+
+ csp -= sp & 15UL;
+ sp -= sp & 15UL;
+
+ /* put the ELF interpreter info on the stack */
+#define NEW_AUX_ENT(nr, id, val) \
+ do { \
+ struct { unsigned long _id, _val; } *ent = (void *) csp; \
+ __put_user((id), &ent[nr]._id); \
+ __put_user((val), &ent[nr]._val); \
+ } while (0)
+
+ csp -= 2 * sizeof(unsigned long);
+ NEW_AUX_ENT(0, AT_NULL, 0);
+ if (k_platform) {
+ csp -= 2 * sizeof(unsigned long);
+ NEW_AUX_ENT(0, AT_PLATFORM, (elf_addr_t)(unsigned long) u_platform);
+ }
+
+ csp -= DLINFO_ITEMS * 2 * sizeof(unsigned long);
+ NEW_AUX_ENT( 0, AT_HWCAP, hwcap);
+ NEW_AUX_ENT( 1, AT_PAGESZ, PAGE_SIZE);
+ NEW_AUX_ENT( 2, AT_CLKTCK, CLOCKS_PER_SEC);
+ NEW_AUX_ENT( 3, AT_PHDR, exec_params->ph_addr);
+ NEW_AUX_ENT( 4, AT_PHENT, sizeof(struct elf_phdr));
+ NEW_AUX_ENT( 5, AT_PHNUM, exec_params->hdr.e_phnum);
+ NEW_AUX_ENT( 6, AT_BASE, interp_params->elfhdr_addr);
+ NEW_AUX_ENT( 7, AT_FLAGS, 0);
+ NEW_AUX_ENT( 8, AT_ENTRY, exec_params->entry_addr);
+ NEW_AUX_ENT( 9, AT_UID, (elf_addr_t) current->uid);
+ NEW_AUX_ENT(10, AT_EUID, (elf_addr_t) current->euid);
+ NEW_AUX_ENT(11, AT_GID, (elf_addr_t) current->gid);
+ NEW_AUX_ENT(12, AT_EGID, (elf_addr_t) current->egid);
+
+#ifdef ARCH_DLINFO
+ /* ARCH_DLINFO must come last so platform specific code can enforce
+ * special alignment requirements on the AUXV if necessary (eg. PPC).
+ */
+ ARCH_DLINFO;
+#endif
+#undef NEW_AUX_ENT
+
+ /* allocate room for argv[] and envv[] */
+ csp -= (bprm->envc + 1) * sizeof(elf_caddr_t);
+ envp = (elf_caddr_t *) csp;
+ csp -= (bprm->argc + 1) * sizeof(elf_caddr_t);
+ argv = (elf_caddr_t *) csp;
+
+ /* stack argc */
+ csp -= sizeof(unsigned long);
+ __put_user(bprm->argc, (unsigned long *) csp);
+
+ if (csp != sp)
+ BUG();
+
+ /* fill in the argv[] array */
+#ifdef CONFIG_MMU
+ current->mm->arg_start = bprm->p;
+#else
+ current->mm->arg_start = current->mm->start_stack - (MAX_ARG_PAGES * PAGE_SIZE - bprm->p);
+#endif
+
+ p = (char *) current->mm->arg_start;
+ for (loop = bprm->argc; loop > 0; loop--) {
+ __put_user((elf_caddr_t) p, argv++);
+ len = strnlen_user(p, PAGE_SIZE * MAX_ARG_PAGES);
+ if (!len || len > PAGE_SIZE * MAX_ARG_PAGES)
+ return -EINVAL;
+ p += len;
+ }
+ __put_user(NULL, argv);
+ current->mm->arg_end = (unsigned long) p;
+
+ /* fill in the envv[] array */
+ current->mm->env_start = (unsigned long) p;
+ for (loop = bprm->envc; loop > 0; loop--) {
+ __put_user((elf_caddr_t)(unsigned long) p, envp++);
+ len = strnlen_user(p, PAGE_SIZE * MAX_ARG_PAGES);
+ if (!len || len > PAGE_SIZE * MAX_ARG_PAGES)
+ return -EINVAL;
+ p += len;
+ }
+ __put_user(NULL, envp);
+ current->mm->env_end = (unsigned long) p;
+
+ mm->start_stack = (unsigned long) sp;
+ return 0;
+} /* end create_elf_fdpic_tables() */
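
For orientation, the stack that create_elf_fdpic_tables() leaves behind ends up laid out roughly as sketched below (an approximation derived from the code above, not an ABI statement):

/* Rough layout of the new user stack, lowest address first:
 *
 *	sp ->	argc
 *		argv[0..argc-1] pointers, NULL terminator
 *		envp[0..envc-1] pointers, NULL terminator
 *		ELF auxiliary vector, ending with AT_NULL
 *		interpreter load map (only if an interpreter is used)
 *		executable load map
 *		platform string (if any), argument and environment strings
 */
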
+
+/*****************************************************************************/
+/*
+ * transfer the program arguments and environment from the holding pages onto
+ * the stack
+ */
+#ifndef CONFIG_MMU
+static int elf_fdpic_transfer_args_to_stack(struct linux_binprm *bprm, unsigned long *_sp)
+{
+ unsigned long index, stop, sp;
+ char *src;
+ int ret = 0;
+
+ stop = bprm->p >> PAGE_SHIFT;
+ sp = *_sp;
+
+ for (index = MAX_ARG_PAGES - 1; index >= stop; index--) {
+ src = kmap(bprm->page[index]);
+ sp -= PAGE_SIZE;
+ if (copy_to_user((void *) sp, src, PAGE_SIZE) != 0)
+ ret = -EFAULT;
+ kunmap(bprm->page[index]);
+ if (ret < 0)
+ goto out;
+ }
+
+ *_sp = (*_sp - (MAX_ARG_PAGES * PAGE_SIZE - bprm->p)) & ~15;
+
+ out:
+ return ret;
+} /* end elf_fdpic_transfer_args_to_stack() */
+#endif
+
+/*****************************************************************************/
+/*
+ * load the appropriate binary image (executable or interpreter) into memory
+ * - we assume no MMU is available
+ * - if no other PIC bits are set in params->hdr->e_flags
+ * - we assume that the LOADable segments in the binary are independently relocatable
+ * - we assume R/O executable segments are shareable
+ * - else
+ * - we assume the loadable parts of the image to require fixed displacement
+ * - the image is not shareable
+ */
+static int elf_fdpic_map_file(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm,
+ const char *what)
+{
+ struct elf32_fdpic_loadmap *loadmap;
+#ifdef CONFIG_MMU
+ struct elf32_fdpic_loadseg *mseg;
+#endif
+ struct elf32_fdpic_loadseg *seg;
+ struct elf32_phdr *phdr;
+ unsigned long load_addr, stop;
+ unsigned nloads, tmp;
+ size_t size;
+ int loop, ret;
+
+ /* allocate a load map table */
+ nloads = 0;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++)
+ if (params->phdrs[loop].p_type == PT_LOAD)
+ nloads++;
+
+ if (nloads == 0)
+ return -ELIBBAD;
+
+ size = sizeof(*loadmap) + nloads * sizeof(*seg);
+ loadmap = kmalloc(size, GFP_KERNEL);
+ if (!loadmap)
+ return -ENOMEM;
+
+ params->loadmap = loadmap;
+ memset(loadmap, 0, size);
+
+ loadmap->version = ELF32_FDPIC_LOADMAP_VERSION;
+ loadmap->nsegs = nloads;
+
+ load_addr = params->load_addr;
+ seg = loadmap->segs;
+
+ /* map the requested LOADs into the memory space */
+ switch (params->flags & ELF_FDPIC_FLAG_ARRANGEMENT) {
+ case ELF_FDPIC_FLAG_CONSTDISP:
+ case ELF_FDPIC_FLAG_CONTIGUOUS:
+#ifndef CONFIG_MMU
+ ret = elf_fdpic_map_file_constdisp_on_uclinux(params, file, mm);
+ if (ret < 0)
+ return ret;
+ break;
+#endif
+ default:
+ ret = elf_fdpic_map_file_by_direct_mmap(params, file, mm);
+ if (ret < 0)
+ return ret;
+ break;
+ }
+
+ /* map the entry point */
+ if (params->hdr.e_entry) {
+ seg = loadmap->segs;
+ for (loop = loadmap->nsegs; loop > 0; loop--, seg++) {
+ if (params->hdr.e_entry >= seg->p_vaddr &&
+ params->hdr.e_entry < seg->p_vaddr + seg->p_memsz
+ ) {
+ params->entry_addr =
+ (params->hdr.e_entry - seg->p_vaddr) + seg->addr;
+ break;
+ }
+ }
+ }
+
+ /* determine where the program header table has wound up if mapped */
+ stop = params->hdr.e_phoff + params->hdr.e_phnum * sizeof (struct elf_phdr);
+ phdr = params->phdrs;
+
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ if (phdr->p_type != PT_LOAD)
+ continue;
+
+ if (phdr->p_offset > params->hdr.e_phoff ||
+ phdr->p_offset + phdr->p_filesz < stop)
+ continue;
+
+ seg = loadmap->segs;
+ for (loop = loadmap->nsegs; loop > 0; loop--, seg++) {
+ if (phdr->p_vaddr >= seg->p_vaddr &&
+ phdr->p_vaddr + phdr->p_filesz <= seg->p_vaddr + seg->p_memsz
+ ) {
+ params->ph_addr = (phdr->p_vaddr - seg->p_vaddr) + seg->addr +
+ params->hdr.e_phoff - phdr->p_offset;
+ break;
+ }
+ }
+ break;
+ }
+
+ /* determine where the dynamic section has wound up if there is one */
+ phdr = params->phdrs;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ if (phdr->p_type != PT_DYNAMIC)
+ continue;
+
+ seg = loadmap->segs;
+ for (loop = loadmap->nsegs; loop > 0; loop--, seg++) {
+ if (phdr->p_vaddr >= seg->p_vaddr &&
+ phdr->p_vaddr + phdr->p_memsz <= seg->p_vaddr + seg->p_memsz
+ ) {
+ params->dynamic_addr = (phdr->p_vaddr - seg->p_vaddr) + seg->addr;
+
+ /* check the dynamic section contains at least one item, and that
+ * the last item is a NULL entry */
+ if (phdr->p_memsz == 0 ||
+ phdr->p_memsz % sizeof(Elf32_Dyn) != 0)
+ goto dynamic_error;
+
+ tmp = phdr->p_memsz / sizeof(Elf32_Dyn);
+ if (((Elf32_Dyn *) params->dynamic_addr)[tmp - 1].d_tag != 0)
+ goto dynamic_error;
+ break;
+ }
+ }
+ break;
+ }
+
+ /* now elide adjacent segments in the load map on MMU linux
+	 * - on uClinux the holes between them may actually be filled with system stuff or
+	 *   stuff from other processes
+ */
+#ifdef CONFIG_MMU
+ nloads = loadmap->nsegs;
+ mseg = loadmap->segs;
+ seg = mseg + 1;
+ for (loop = 1; loop < nloads; loop++) {
+ /* see if we have a candidate for merging */
+ if (seg->p_vaddr - mseg->p_vaddr == seg->addr - mseg->addr) {
+ load_addr = PAGE_ALIGN(mseg->addr + mseg->p_memsz);
+ if (load_addr == (seg->addr & PAGE_MASK)) {
+ mseg->p_memsz += load_addr - (mseg->addr + mseg->p_memsz);
+ mseg->p_memsz += seg->addr & ~PAGE_MASK;
+ mseg->p_memsz += seg->p_memsz;
+ loadmap->nsegs--;
+ continue;
+ }
+ }
+
+ mseg++;
+ if (mseg != seg)
+ *mseg = *seg;
+ }
+#endif
+
+ kdebug("Mapped Object [%s]:", what);
+ kdebug("- elfhdr : %lx", params->elfhdr_addr);
+ kdebug("- entry : %lx", params->entry_addr);
+ kdebug("- PHDR[] : %lx", params->ph_addr);
+ kdebug("- DYNAMIC[]: %lx", params->dynamic_addr);
+ seg = loadmap->segs;
+ for (loop = 0; loop < loadmap->nsegs; loop++, seg++)
+ kdebug("- LOAD[%d] : %08x-%08x [va=%x ms=%x]",
+ loop,
+ seg->addr, seg->addr + seg->p_memsz - 1,
+ seg->p_vaddr, seg->p_memsz);
+
+ return 0;
+
+ dynamic_error:
+ printk("ELF FDPIC %s with invalid DYNAMIC section (inode=%lu)\n",
+ what, file->f_dentry->d_inode->i_ino);
+ return -ELIBBAD;
+} /* end elf_fdpic_map_file() */
+
+/*****************************************************************************/
+/*
+ * map a file with constant displacement under uClinux
+ */
+#ifndef CONFIG_MMU
+static int elf_fdpic_map_file_constdisp_on_uclinux(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm)
+{
+ struct elf32_fdpic_loadseg *seg;
+ struct elf32_phdr *phdr;
+ unsigned long load_addr, base = ULONG_MAX, top = 0, maddr = 0, mflags;
+ loff_t fpos;
+ int loop, ret;
+
+ load_addr = params->load_addr;
+ seg = params->loadmap->segs;
+
+ /* determine the bounds of the contiguous overall allocation we must make */
+ phdr = params->phdrs;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ if (params->phdrs[loop].p_type != PT_LOAD)
+ continue;
+
+ if (base > phdr->p_vaddr)
+ base = phdr->p_vaddr;
+ if (top < phdr->p_vaddr + phdr->p_memsz)
+ top = phdr->p_vaddr + phdr->p_memsz;
+ }
+
+ /* allocate one big anon block for everything */
+ mflags = MAP_PRIVATE;
+ if (params->flags & ELF_FDPIC_FLAG_EXECUTABLE)
+ mflags |= MAP_EXECUTABLE;
+
+ down_write(&mm->mmap_sem);
+ maddr = do_mmap(NULL, load_addr, top - base,
+ PROT_READ | PROT_WRITE | PROT_EXEC, mflags, 0);
+ up_write(&mm->mmap_sem);
+ if (IS_ERR((void *) maddr))
+ return (int) maddr;
+
+ if (load_addr != 0)
+ load_addr += PAGE_ALIGN(top - base);
+
+ /* and then load the file segments into it */
+ phdr = params->phdrs;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ if (params->phdrs[loop].p_type != PT_LOAD)
+ continue;
+
+ fpos = phdr->p_offset;
+
+ seg->addr = maddr + (phdr->p_vaddr - base);
+ seg->p_vaddr = phdr->p_vaddr;
+ seg->p_memsz = phdr->p_memsz;
+
+ ret = file->f_op->read(file, (void *) seg->addr, phdr->p_filesz, &fpos);
+ if (ret < 0)
+ return ret;
+
+ /* map the ELF header address if in this segment */
+ if (phdr->p_offset == 0)
+ params->elfhdr_addr = seg->addr;
+
+ /* clear any space allocated but not loaded */
+ if (phdr->p_filesz < phdr->p_memsz)
+ clear_user((void *) (seg->addr + phdr->p_filesz),
+ phdr->p_memsz - phdr->p_filesz);
+
+ if (mm) {
+ if (phdr->p_flags & PF_X) {
+ mm->start_code = seg->addr;
+ mm->end_code = seg->addr + phdr->p_memsz;
+ }
+ else if (!mm->start_data) {
+ mm->start_data = seg->addr;
+#ifndef CONFIG_MMU
+ mm->end_data = seg->addr + phdr->p_memsz;
+#endif
+ }
+
+#ifdef CONFIG_MMU
+ if (seg->addr + phdr->p_memsz > mm->end_data)
+ mm->end_data = seg->addr + phdr->p_memsz;
+#endif
+ }
+
+ seg++;
+ }
+
+ return 0;
+} /* end elf_fdpic_map_file_constdisp_on_uclinux() */
+#endif
+
+/*****************************************************************************/
+/*
+ * map a binary by direct mmap() of the individual PT_LOAD segments
+ */
+static int elf_fdpic_map_file_by_direct_mmap(struct elf_fdpic_params *params,
+ struct file *file,
+ struct mm_struct *mm)
+{
+ struct elf32_fdpic_loadseg *seg;
+ struct elf32_phdr *phdr;
+ unsigned long load_addr, delta_vaddr;
+ int loop, dvset;
+
+ load_addr = params->load_addr;
+ delta_vaddr = 0;
+ dvset = 0;
+
+ seg = params->loadmap->segs;
+
+ /* deal with each load segment separately */
+ phdr = params->phdrs;
+ for (loop = 0; loop < params->hdr.e_phnum; loop++, phdr++) {
+ unsigned long maddr, disp, excess, excess1;
+ int prot = 0, flags;
+
+ if (phdr->p_type != PT_LOAD)
+ continue;
+
+ kdebug("[LOAD] va=%lx of=%lx fs=%lx ms=%lx",
+ (unsigned long) phdr->p_vaddr,
+ (unsigned long) phdr->p_offset,
+ (unsigned long) phdr->p_filesz,
+ (unsigned long) phdr->p_memsz);
+
+ /* determine the mapping parameters */
+ if (phdr->p_flags & PF_R) prot |= PROT_READ;
+ if (phdr->p_flags & PF_W) prot |= PROT_WRITE;
+ if (phdr->p_flags & PF_X) prot |= PROT_EXEC;
+
+ flags = MAP_PRIVATE | MAP_DENYWRITE;
+ if (params->flags & ELF_FDPIC_FLAG_EXECUTABLE)
+ flags |= MAP_EXECUTABLE;
+
+ maddr = 0;
+
+ switch (params->flags & ELF_FDPIC_FLAG_ARRANGEMENT) {
+ case ELF_FDPIC_FLAG_INDEPENDENT:
+ /* PT_LOADs are independently locatable */
+ break;
+
+ case ELF_FDPIC_FLAG_HONOURVADDR:
+ /* the specified virtual address must be honoured */
+ maddr = phdr->p_vaddr;
+ flags |= MAP_FIXED;
+ break;
+
+ case ELF_FDPIC_FLAG_CONSTDISP:
+ /* constant displacement
+ * - can be mapped anywhere, but must be mapped as a unit
+ */
+ if (!dvset) {
+ maddr = load_addr;
+ delta_vaddr = phdr->p_vaddr;
+ dvset = 1;
+ }
+ else {
+ maddr = load_addr + phdr->p_vaddr - delta_vaddr;
+ flags |= MAP_FIXED;
+ }
+ break;
+
+ case ELF_FDPIC_FLAG_CONTIGUOUS:
+ /* contiguity handled later */
+ break;
+
+ default:
+ BUG();
+ }
+
+ maddr &= PAGE_MASK;
+
+ /* create the mapping */
+ disp = phdr->p_vaddr & ~PAGE_MASK;
+ down_write(&mm->mmap_sem);
+ maddr = do_mmap(file, maddr, phdr->p_memsz + disp, prot, flags,
+ phdr->p_offset - disp);
+ up_write(&mm->mmap_sem);
+
+ kdebug("mmap[%d] <file> sz=%lx pr=%x fl=%x of=%lx --> %08lx",
+ loop, phdr->p_memsz + disp, prot, flags, phdr->p_offset - disp,
+ maddr);
+
+ if (IS_ERR((void *) maddr))
+ return (int) maddr;
+
+ if ((params->flags & ELF_FDPIC_FLAG_ARRANGEMENT) == ELF_FDPIC_FLAG_CONTIGUOUS)
+ load_addr += PAGE_ALIGN(phdr->p_memsz + disp);
+
+ seg->addr = maddr + disp;
+ seg->p_vaddr = phdr->p_vaddr;
+ seg->p_memsz = phdr->p_memsz;
+
+ /* map the ELF header address if in this segment */
+ if (phdr->p_offset == 0)
+ params->elfhdr_addr = seg->addr;
+
+ /* clear the bit between beginning of mapping and beginning of PT_LOAD */
+ if (prot & PROT_WRITE && disp > 0) {
+ kdebug("clear[%d] ad=%lx sz=%lx", loop, maddr, disp);
+ clear_user((void *) maddr, disp);
+ maddr += disp;
+ }
+
+ /* clear any space allocated but not loaded
+ * - on uClinux we can just clear the lot
+ * - on MMU linux we'll get a SIGBUS beyond the last page
+ * extant in the file
+ */
+ excess = phdr->p_memsz - phdr->p_filesz;
+ excess1 = PAGE_SIZE - ((maddr + phdr->p_filesz) & ~PAGE_MASK);
+
+#ifdef CONFIG_MMU
+
+ if (excess > excess1) {
+ unsigned long xaddr = maddr + phdr->p_filesz + excess1;
+ unsigned long xmaddr;
+
+ flags |= MAP_FIXED | MAP_ANONYMOUS;
+ down_write(&mm->mmap_sem);
+ xmaddr = do_mmap(NULL, xaddr, excess - excess1, prot, flags, 0);
+ up_write(&mm->mmap_sem);
+
+ kdebug("mmap[%d] <anon>"
+ " ad=%lx sz=%lx pr=%x fl=%x of=0 --> %08lx",
+ loop, xaddr, excess - excess1, prot, flags, xmaddr);
+
+ if (xmaddr != xaddr)
+ return -ENOMEM;
+ }
+
+ if (prot & PROT_WRITE && excess1 > 0) {
+ kdebug("clear[%d] ad=%lx sz=%lx",
+ loop, maddr + phdr->p_filesz, excess1);
+ clear_user((void *) maddr + phdr->p_filesz, excess1);
+ }
+
+#else
+ if (excess > 0) {
+ kdebug("clear[%d] ad=%lx sz=%lx",
+ loop, maddr + phdr->p_filesz, excess);
+ clear_user((void *) maddr + phdr->p_filesz, excess);
+ }
+#endif
+
+ if (mm) {
+ if (phdr->p_flags & PF_X) {
+ mm->start_code = maddr;
+ mm->end_code = maddr + phdr->p_memsz;
+ }
+ else if (!mm->start_data) {
+ mm->start_data = maddr;
+ mm->end_data = maddr + phdr->p_memsz;
+ }
+ }
+
+ seg++;
+ }
+
+ return 0;
+} /* end elf_fdpic_map_file_by_direct_mmap() */
/* dir inode-ops */
static int coda_create(struct inode *dir, struct dentry *new, int mode, struct nameidata *nd);
-static int coda_mknod(struct inode *dir, struct dentry *new, int mode, dev_t rdev);
static struct dentry *coda_lookup(struct inode *dir, struct dentry *target, struct nameidata *nd);
static int coda_link(struct dentry *old_dentry, struct inode *dir_inode,
struct dentry *entry);
void *dirent, struct dentry *dir);
int coda_fsync(struct file *, struct dentry *dentry, int datasync);
-int coda_hasmknod;
+/* same as fs/bad_inode.c */
+static int coda_return_EIO(void)
+{
+ return -EIO;
+}
+#define CODA_EIO_ERROR ((void *) (coda_return_EIO))
-struct dentry_operations coda_dentry_operations =
+static struct dentry_operations coda_dentry_operations =
{
.d_revalidate = coda_dentry_revalidate,
.d_delete = coda_dentry_delete,
.symlink = coda_symlink,
.mkdir = coda_mkdir,
.rmdir = coda_rmdir,
- .mknod = coda_mknod,
+ .mknod = CODA_EIO_ERROR,
.rename = coda_rename,
.permission = coda_permission,
.getattr = coda_getattr,
/* optimistically we can also act as if our nose bleeds. The
* granularity of the mtime is coarse anyways so we might actually be
* right most of the time. Note: we only do this for directories. */
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
#endif
if (link)
dir->i_nlink += link;
}
error = venus_create(dir->i_sb, coda_i2f(dir), name, length,
- 0, mode, 0, &newfid, &attrs);
+ 0, mode, &newfid, &attrs);
if ( error ) {
unlock_kernel();
return 0;
}
-static int coda_mknod(struct inode *dir, struct dentry *de, int mode, dev_t rdev)
-{
- int error=0;
- const char *name=de->d_name.name;
- int length=de->d_name.len;
- struct inode *inode;
- struct CodaFid newfid;
- struct coda_vattr attrs;
-
- if ( coda_hasmknod == 0 )
- return -EIO;
-
- if (!old_valid_dev(rdev))
- return -EINVAL;
-
- lock_kernel();
- coda_vfs_stat.create++;
-
- if (coda_isroot(dir) && coda_iscontrol(name, length)) {
- unlock_kernel();
- return -EPERM;
- }
-
- error = venus_create(dir->i_sb, coda_i2f(dir), name, length,
- 0, mode, rdev, &newfid, &attrs);
-
- if ( error ) {
- unlock_kernel();
- d_drop(de);
- return error;
- }
-
- inode = coda_iget(dir->i_sb, &newfid, &attrs);
- if ( IS_ERR(inode) ) {
- unlock_kernel();
- d_drop(de);
- return PTR_ERR(inode);
- }
-
- /* invalidate the directory cnode's attributes */
- coda_dir_changed(dir, 0);
- unlock_kernel();
- d_instantiate(de, inode);
- return 0;
-}
-
static int coda_mkdir(struct inode *dir, struct dentry *de, int mode)
{
struct inode *inode;
/* if CODA_STORE fails with EOPNOTSUPP, venus clearly doesn't support
* CODA_STORE/CODA_RELEASE and we fall back on using the CODA_CLOSE upcall */
-int use_coda_close;
+static int use_coda_close;
static ssize_t
coda_file_read(struct file *coda_file, char __user *buf, size_t count, loff_t *ppos)
coda_inode->i_size = host_inode->i_size;
coda_inode->i_blocks = (coda_inode->i_size + 511) >> 9;
- coda_inode->i_mtime = coda_inode->i_ctime = CURRENT_TIME;
+ coda_inode->i_mtime = coda_inode->i_ctime = CURRENT_TIME_SEC;
up(&coda_inode->i_sem);
return ret;
--- /dev/null
+/*
+ * file.c - part of debugfs, a tiny little debug file system
+ *
+ * Copyright (C) 2004 Greg Kroah-Hartman <greg@kroah.com>
+ * Copyright (C) 2004 IBM Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * debugfs is for people to use instead of /proc or /sys.
+ * See Documentation/DocBook/kernel-api for more details.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/pagemap.h>
+#include <linux/debugfs.h>
+
+static ssize_t default_read_file(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ return 0;
+}
+
+static ssize_t default_write_file(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ return count;
+}
+
+static int default_open(struct inode *inode, struct file *file)
+{
+ if (inode->u.generic_ip)
+ file->private_data = inode->u.generic_ip;
+
+ return 0;
+}
+
+struct file_operations debugfs_file_operations = {
+ .read = default_read_file,
+ .write = default_write_file,
+ .open = default_open,
+};
+
+#define simple_type(type, format, temptype, strtolfn) \
+static ssize_t read_file_##type(struct file *file, char __user *user_buf, \
+ size_t count, loff_t *ppos) \
+{ \
+ char buf[32]; \
+ type *val = file->private_data; \
+ \
+ snprintf(buf, sizeof(buf), format, *val); \
+ return simple_read_from_buffer(user_buf, count, ppos, buf, strlen(buf));\
+} \
+static ssize_t write_file_##type(struct file *file, const char __user *user_buf,\
+ size_t count, loff_t *ppos) \
+{ \
+ char *endp; \
+ char buf[32]; \
+ int buf_size; \
+ type *val = file->private_data; \
+ temptype tmp; \
+ \
+ memset(buf, 0x00, sizeof(buf)); \
+ buf_size = min(count, (sizeof(buf)-1)); \
+ if (copy_from_user(buf, user_buf, buf_size)) \
+ return -EFAULT; \
+ \
+ tmp = strtolfn(buf, &endp, 0); \
+ if ((endp == buf) || ((type)tmp != tmp)) \
+ return -EINVAL; \
+ *val = tmp; \
+ return count; \
+} \
+static struct file_operations fops_##type = { \
+ .read = read_file_##type, \
+ .write = write_file_##type, \
+ .open = default_open, \
+};
+simple_type(u8, "%c", unsigned long, simple_strtoul);
+simple_type(u16, "%hi", unsigned long, simple_strtoul);
+simple_type(u32, "%i", unsigned long, simple_strtoul);
+
+/**
+ * debugfs_create_u8 - create a file in the debugfs filesystem that is used to read and write an unsigned 8 bit value.
+ *
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read from and
+ *         write to.
+ *
+ * This function creates a file in debugfs with the given name that
+ * contains the value of the variable @value. If the @mode variable is so
+ * set, it can be read from, and written to.
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_u8(const char *name, mode_t mode,
+ struct dentry *parent, u8 *value)
+{
+ return debugfs_create_file(name, mode, parent, value, &fops_u8);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_u8);
+
+/**
+ * debugfs_create_u16 - create a file in the debugfs filesystem that is used to read and write an unsigned 16 bit value.
+ *
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read from and
+ *         write to.
+ *
+ * This function creates a file in debugfs with the given name that
+ * contains the value of the variable @value. If the @mode variable is so
+ * set, it can be read from, and written to.
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_u16(const char *name, mode_t mode,
+ struct dentry *parent, u16 *value)
+{
+ return debugfs_create_file(name, mode, parent, value, &fops_u16);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_u16);
+
+/**
+ * debugfs_create_u32 - create a file in the debugfs filesystem that is used to read and write an unsigned 32 bit value.
+ *
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read from and
+ *         write to.
+ *
+ * This function creates a file in debugfs with the given name that
+ * contains the value of the variable @value. If the @mode variable is so
+ * set, it can be read from, and written to.
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_u32(const char *name, mode_t mode,
+ struct dentry *parent, u32 *value)
+{
+ return debugfs_create_file(name, mode, parent, value, &fops_u32);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_u32);
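The three helpers above follow one pattern: pass a name, a mode, an optional
parent dentry and a pointer to the variable to expose, and keep the returned
dentry so it can be handed to debugfs_remove() later. As a minimal usage
sketch (illustrative only, not part of this patch; the module, file and
variable names are invented):

	#include <linux/module.h>
	#include <linux/init.h>
	#include <linux/debugfs.h>

	static u32 example_counter;		/* value exported via debugfs */
	static struct dentry *example_file;

	static int __init example_init(void)
	{
		/* NULL parent: create "counter" in the debugfs root */
		example_file = debugfs_create_u32("counter", 0644, NULL,
						  &example_counter);
		if (!example_file)
			return -ENODEV;
		return 0;
	}

	static void __exit example_exit(void)
	{
		/* no automatic cleanup on unload; drop the dentry ourselves */
		debugfs_remove(example_file);
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");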
+
+static ssize_t read_file_bool(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ char buf[3];
+ u32 *val = file->private_data;
+
+	if (*val)
+ buf[0] = 'Y';
+ else
+ buf[0] = 'N';
+ buf[1] = '\n';
+ buf[2] = 0x00;
+ return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
+}
+
+static ssize_t write_file_bool(struct file *file, const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ char buf[32];
+ int buf_size;
+ u32 *val = file->private_data;
+
+ buf_size = min(count, (sizeof(buf)-1));
+ if (copy_from_user(buf, user_buf, buf_size))
+ return -EFAULT;
+
+ switch (buf[0]) {
+ case 'y':
+ case 'Y':
+ case '1':
+ *val = 1;
+ break;
+ case 'n':
+ case 'N':
+ case '0':
+ *val = 0;
+ break;
+ }
+
+ return count;
+}
+
+static struct file_operations fops_bool = {
+ .read = read_file_bool,
+ .write = write_file_bool,
+ .open = default_open,
+};
+
+/**
+ * debugfs_create_bool - create a file in the debugfs filesystem that is used to read and write a boolean value.
+ *
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read from and
+ *         write to.
+ *
+ * This function creates a file in debugfs with the given name that
+ * contains the value of the variable @value. If the @mode variable is so
+ * set, it can be read from, and written to.
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_bool(const char *name, mode_t mode,
+ struct dentry *parent, u32 *value)
+{
+ return debugfs_create_file(name, mode, parent, value, &fops_bool);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_bool);
+
--- /dev/null
+/*
+ * file.c - part of debugfs, a tiny little debug file system
+ *
+ * Copyright (C) 2004 Greg Kroah-Hartman <greg@kroah.com>
+ * Copyright (C) 2004 IBM Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * debugfs is for people to use instead of /proc or /sys.
+ * See Documentation/DocBook/kernel-api for more details.
+ *
+ */
+
+/* uncomment to get debug messages from the debug filesystem, ah the irony. */
+/* #define DEBUG */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/pagemap.h>
+#include <linux/init.h>
+#include <linux/namei.h>
+#include <linux/debugfs.h>
+
+#define DEBUGFS_MAGIC 0x64626720
+
+/* declared over in file.c */
+extern struct file_operations debugfs_file_operations;
+
+static struct vfsmount *debugfs_mount;
+static int debugfs_mount_count;
+
+static struct inode *debugfs_get_inode(struct super_block *sb, int mode, dev_t dev)
+{
+ struct inode *inode = new_inode(sb);
+
+ if (inode) {
+ inode->i_mode = mode;
+ inode->i_uid = 0;
+ inode->i_gid = 0;
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_blocks = 0;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ switch (mode & S_IFMT) {
+ default:
+ init_special_inode(inode, mode, dev);
+ break;
+ case S_IFREG:
+ inode->i_fop = &debugfs_file_operations;
+ break;
+ case S_IFDIR:
+ inode->i_op = &simple_dir_inode_operations;
+ inode->i_fop = &simple_dir_operations;
+
+ /* directory inodes start off with i_nlink == 2 (for "." entry) */
+ inode->i_nlink++;
+ break;
+ }
+ }
+ return inode;
+}
+
+/* SMP-safe */
+static int debugfs_mknod(struct inode *dir, struct dentry *dentry,
+ int mode, dev_t dev)
+{
+ struct inode *inode = debugfs_get_inode(dir->i_sb, mode, dev);
+ int error = -EPERM;
+
+ if (dentry->d_inode)
+ return -EEXIST;
+
+ if (inode) {
+ d_instantiate(dentry, inode);
+ dget(dentry);
+ error = 0;
+ }
+ return error;
+}
+
+static int debugfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
+{
+ int res;
+
+ mode = (mode & (S_IRWXUGO | S_ISVTX)) | S_IFDIR;
+ res = debugfs_mknod(dir, dentry, mode, 0);
+ if (!res)
+ dir->i_nlink++;
+ return res;
+}
+
+static int debugfs_create(struct inode *dir, struct dentry *dentry, int mode)
+{
+ mode = (mode & S_IALLUGO) | S_IFREG;
+ return debugfs_mknod(dir, dentry, mode, 0);
+}
+
+static inline int debugfs_positive(struct dentry *dentry)
+{
+ return dentry->d_inode && !d_unhashed(dentry);
+}
+
+static int debug_fill_super(struct super_block *sb, void *data, int silent)
+{
+ static struct tree_descr debug_files[] = {{""}};
+
+ return simple_fill_super(sb, DEBUGFS_MAGIC, debug_files);
+}
+
+static struct dentry * get_dentry(struct dentry *parent, const char *name)
+{
+ struct qstr qstr;
+
+ qstr.name = name;
+ qstr.len = strlen(name);
+ qstr.hash = full_name_hash(name,qstr.len);
+ return lookup_hash(&qstr,parent);
+}
+
+static struct super_block *debug_get_sb(struct file_system_type *fs_type,
+ int flags, const char *dev_name,
+ void *data)
+{
+ return get_sb_single(fs_type, flags, data, debug_fill_super);
+}
+
+static struct file_system_type debug_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "debugfs",
+ .get_sb = debug_get_sb,
+ .kill_sb = kill_litter_super,
+};
+
+static int debugfs_create_by_name(const char *name, mode_t mode,
+ struct dentry *parent,
+ struct dentry **dentry)
+{
+ int error = 0;
+
+ /* If the parent is not specified, we create it in the root.
+ * We need the root dentry to do this, which is in the super
+ * block. A pointer to that is in the struct vfsmount that we
+ * have around.
+ */
+ if (!parent ) {
+ if (debugfs_mount && debugfs_mount->mnt_sb) {
+ parent = debugfs_mount->mnt_sb->s_root;
+ }
+ }
+ if (!parent) {
+ pr_debug("debugfs: Ah! can not find a parent!\n");
+ return -EFAULT;
+ }
+
+ *dentry = NULL;
+ down(&parent->d_inode->i_sem);
+ *dentry = get_dentry (parent, name);
+ if (!IS_ERR(dentry)) {
+ if ((mode & S_IFMT) == S_IFDIR)
+ error = debugfs_mkdir(parent->d_inode, *dentry, mode);
+ else
+ error = debugfs_create(parent->d_inode, *dentry, mode);
+ } else
+ error = PTR_ERR(dentry);
+ up(&parent->d_inode->i_sem);
+
+ return error;
+}
+
+/**
+ * debugfs_create_file - create a file in the debugfs filesystem
+ *
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * file will be created in the root of the debugfs filesystem.
+ * @data: a pointer to something that the caller will want to get to later
+ * on. The inode.u.generic_ip pointer will point to this value on
+ * the open() call.
+ * @fops: a pointer to a struct file_operations that should be used for
+ * this file.
+ *
+ * This is the basic "create a file" function for debugfs. It allows for a
+ * wide range of flexibility in creating a file, or a directory (if you
+ * want to create a directory, the debugfs_create_dir() function is
+ * recommended to be used instead.)
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_file(const char *name, mode_t mode,
+ struct dentry *parent, void *data,
+ struct file_operations *fops)
+{
+ struct dentry *dentry = NULL;
+ int error;
+
+ pr_debug("debugfs: creating file '%s'\n",name);
+
+ error = simple_pin_fs("debugfs", &debugfs_mount, &debugfs_mount_count);
+ if (error)
+ goto exit;
+
+ error = debugfs_create_by_name(name, mode, parent, &dentry);
+ if (error) {
+ dentry = NULL;
+ goto exit;
+ }
+
+ if (dentry->d_inode) {
+ if (data)
+ dentry->d_inode->u.generic_ip = data;
+ if (fops)
+ dentry->d_inode->i_fop = fops;
+ }
+exit:
+ return dentry;
+}
+EXPORT_SYMBOL_GPL(debugfs_create_file);
+
+/**
+ * debugfs_create_dir - create a directory in the debugfs filesystem
+ *
+ * @name: a pointer to a string containing the name of the directory to
+ * create.
+ * @parent: a pointer to the parent dentry for this file. This should be a
+ *            directory dentry if set. If this parameter is NULL, then the
+ * directory will be created in the root of the debugfs filesystem.
+ *
+ * This function creates a directory in debugfs with the given name.
+ *
+ * This function will return a pointer to a dentry if it succeeds. This
+ * pointer must be passed to the debugfs_remove() function when the file is
+ * to be removed (no automatic cleanup happens if your module is unloaded,
+ * you are responsible here.) If an error occurs, NULL will be returned.
+ *
+ * If debugfs is not enabled in the kernel, the value -ENODEV will be
+ * returned. It is not wise to check for this value, but rather, check for
+ * NULL or !NULL instead as to eliminate the need for #ifdef in the calling
+ * code.
+ */
+struct dentry *debugfs_create_dir(const char *name, struct dentry *parent)
+{
+ return debugfs_create_file(name,
+ S_IFDIR | S_IRWXU | S_IRUGO | S_IXUGO,
+ parent, NULL, NULL);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_dir);
+
+/**
+ * debugfs_remove - removes a file or directory from the debugfs filesystem
+ *
+ * @dentry: a pointer to the dentry of the file or directory to be
+ * removed.
+ *
+ * This function removes a file or directory in debugfs that was previously
+ * created with a call to another debugfs function (like
+ * debugfs_create_file() or variants thereof.)
+ *
+ * This function is required to be called in order for the file to be
+ * removed; no automatic cleanup of files will happen when a module is
+ * removed, so you are responsible here.
+ */
+void debugfs_remove(struct dentry *dentry)
+{
+ struct dentry *parent;
+
+ if (!dentry)
+ return;
+
+ parent = dentry->d_parent;
+ if (!parent || !parent->d_inode)
+ return;
+
+ down(&parent->d_inode->i_sem);
+ if (debugfs_positive(dentry)) {
+ if (dentry->d_inode) {
+ if (S_ISDIR(dentry->d_inode->i_mode))
+ simple_rmdir(parent->d_inode, dentry);
+ else
+ simple_unlink(parent->d_inode, dentry);
+ dput(dentry);
+ }
+ }
+ up(&parent->d_inode->i_sem);
+ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
+}
+EXPORT_SYMBOL_GPL(debugfs_remove);
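Tying debugfs_create_file(), the @data pointer and debugfs_remove() together:
a caller supplying its own file_operations passes the driver state as @data
and picks it up again in its open() routine from inode.u.generic_ip. A hedged
sketch (not part of this patch; the structure, function and file names are
invented):

	#include <linux/module.h>
	#include <linux/fs.h>
	#include <linux/debugfs.h>

	struct my_state {
		int level;			/* hypothetical driver state */
	};
	static struct my_state my_dev;
	static struct dentry *my_dir, *my_file;

	static int my_open(struct inode *inode, struct file *file)
	{
		/* the @data passed to debugfs_create_file() ends up here */
		file->private_data = inode->u.generic_ip;
		return 0;
	}

	static ssize_t my_read(struct file *file, char __user *buf,
			       size_t count, loff_t *ppos)
	{
		struct my_state *s = file->private_data;
		char tmp[16];

		snprintf(tmp, sizeof(tmp), "%d\n", s->level);
		return simple_read_from_buffer(buf, count, ppos, tmp, strlen(tmp));
	}

	static struct file_operations my_fops = {
		.open = my_open,
		.read = my_read,
	};

	static int my_debugfs_setup(void)
	{
		my_dir = debugfs_create_dir("mydrv", NULL);
		my_file = debugfs_create_file("level", 0444, my_dir,
					      &my_dev, &my_fops);
		return my_file ? 0 : -ENODEV;
	}

	static void my_debugfs_teardown(void)
	{
		/* remove files before their parent directory */
		debugfs_remove(my_file);
		debugfs_remove(my_dir);
	}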
+
+static decl_subsys(debug, NULL, NULL);
+
+static int __init debugfs_init(void)
+{
+ int retval;
+
+ kset_set_kset_s(&debug_subsys, kernel_subsys);
+ retval = subsystem_register(&debug_subsys);
+ if (retval)
+ return retval;
+
+ retval = register_filesystem(&debug_fs_type);
+ if (retval)
+ subsystem_unregister(&debug_subsys);
+ return retval;
+}
+
+static void __exit debugfs_exit(void)
+{
+ simple_release_fs(&debugfs_mount, &debugfs_mount_count);
+ unregister_filesystem(&debug_fs_type);
+ subsystem_unregister(&debug_subsys);
+}
+
+core_initcall(debugfs_init);
+module_exit(debugfs_exit);
+MODULE_LICENSE("GPL");
+
.fs_flags = FS_REQUIRES_DEV,
};
+static struct pt_types sgi_pt_types[] = {
+ {0x00, "SGI vh"},
+ {0x01, "SGI trkrepl"},
+ {0x02, "SGI secrepl"},
+ {0x03, "SGI raw"},
+ {0x04, "SGI bsd"},
+ {SGI_SYSV, "SGI sysv"},
+ {0x06, "SGI vol"},
+ {SGI_EFS, "SGI efs"},
+ {0x08, "SGI lv"},
+ {0x09, "SGI rlv"},
+ {0x0A, "SGI xfs"},
+ {0x0B, "SGI xfslog"},
+ {0x0C, "SGI xlv"},
+ {0x82, "Linux swap"},
+ {0x83, "Linux native"},
+ {0, NULL}
+};
+
+
static kmem_cache_t * efs_inode_cachep;
static struct inode *efs_alloc_inode(struct super_block *sb)
extern void ext2_delete_inode (struct inode *);
extern int ext2_sync_inode (struct inode *);
extern void ext2_discard_prealloc (struct inode *);
+extern int ext2_get_block(struct inode *, sector_t, struct buffer_head *, int);
extern void ext2_truncate (struct inode *);
extern int ext2_setattr (struct dentry *, struct iattr *);
extern void ext2_set_inode_flags(struct inode *inode);
#include <linux/xattr_acl.h>
#define EXT3_ACL_VERSION 0x0001
-#define EXT3_ACL_MAX_ENTRIES 32
typedef struct {
__le16 e_tag;
static inline int
ext3_init_acl(handle_t *handle, struct inode *inode, struct inode *dir)
{
- inode->i_mode &= ~current->fs->umask;
return 0;
}
#endif /* CONFIG_EXT3_FS_POSIX_ACL */
obj-$(CONFIG_FAT_FS) += fat.o
-fat-objs := cache.o dir.o file.o inode.o misc.o fatfs_syms.o
+fat-objs := cache.o dir.o file.o inode.o misc.o
spin_unlock(&MSDOS_I(inode)->cache_lru_lock);
}
-int __fat_access(struct super_block *sb, int nr, int new_value)
+static int __fat_access(struct super_block *sb, int nr, int new_value)
{
struct msdos_sb_info *sbi = MSDOS_SB(sb);
struct buffer_head *bh, *bh2, *c_bh, *c_bh2;
return next;
}
-/*
+/*
* Returns the this'th FAT entry, -1 if it is an end-of-file entry. If
* new_value is != -1, that FAT entry is replaced by it.
*/
int next;
next = -EIO;
- if (nr < 2 || MSDOS_SB(sb)->clusters + 2 <= nr) {
+ if (nr < FAT_START_ENT || MSDOS_SB(sb)->max_cluster <= nr) {
fat_fs_panic(sb, "invalid access to FAT (entry 0x%08x)", nr);
goto out;
}
int nr;
BUG_ON(MSDOS_I(inode)->i_start == 0);
-
+
*fclus = 0;
*dclus = MSDOS_I(inode)->i_start;
if (cluster == 0)
nr = fat_access(sb, *dclus, -1);
if (nr < 0)
- return nr;
+ return nr;
else if (nr == FAT_ENT_FREE) {
fat_fs_panic(sb, "%s: invalid cluster chain"
" (i_pos %lld)", __FUNCTION__,
cluster = fat_bmap_cluster(inode, cluster);
if (cluster < 0)
return cluster;
- else if (cluster) {
- *phys = ((sector_t)cluster - 2) * sbi->sec_per_clus
- + sbi->data_start + offset;
- }
+ else if (cluster)
+ *phys = fat_clus_to_blknr(sbi, cluster) + offset;
return 0;
}
-
-/* Free all clusters after the skip'th cluster. */
-int fat_free(struct inode *inode, int skip)
-{
- struct super_block *sb = inode->i_sb;
- int nr, ret, fclus, dclus;
-
- if (MSDOS_I(inode)->i_start == 0)
- return 0;
-
- if (skip) {
- ret = fat_get_cluster(inode, skip - 1, &fclus, &dclus);
- if (ret < 0)
- return ret;
- else if (ret == FAT_ENT_EOF)
- return 0;
-
- nr = fat_access(sb, dclus, -1);
- if (nr == FAT_ENT_EOF)
- return 0;
- else if (nr > 0) {
- /*
- * write a new EOF, and get the remaining cluster
- * chain for freeing.
- */
- nr = fat_access(sb, dclus, FAT_ENT_EOF);
- }
- if (nr < 0)
- return nr;
-
- fat_cache_inval_inode(inode);
- } else {
- fat_cache_inval_inode(inode);
-
- nr = MSDOS_I(inode)->i_start;
- MSDOS_I(inode)->i_start = 0;
- MSDOS_I(inode)->i_logstart = 0;
- mark_inode_dirty(inode);
- }
-
- lock_fat(sb);
- do {
- nr = fat_access(sb, nr, FAT_ENT_FREE);
- if (nr < 0)
- goto error;
- else if (nr == FAT_ENT_FREE) {
- fat_fs_panic(sb, "%s: deleting beyond EOF (i_pos %lld)",
- __FUNCTION__, MSDOS_I(inode)->i_pos);
- nr = -EIO;
- goto error;
- }
- if (MSDOS_SB(sb)->free_clusters != -1)
- MSDOS_SB(sb)->free_clusters++;
- inode->i_blocks -= MSDOS_SB(sb)->cluster_size >> 9;
- } while (nr != FAT_ENT_EOF);
- fat_clusters_flush(sb);
- nr = 0;
-error:
- unlock_fat(sb);
-
- return nr;
-}
* and date_dos2unix for date==0 by Igor Zhbanov(bsg@uniyar.ac.ru)
*/
+#include <linux/module.h>
#include <linux/fs.h>
#include <linux/msdos_fs.h>
#include <linux/buffer_head.h>
* fat_fs_panic reports a severe file system problem and sets the file system
* read-only. The file system can be made writable again by remounting it.
*/
-
-static char panic_msg[512];
-
void fat_fs_panic(struct super_block *s, const char *fmt, ...)
{
- int not_ro;
va_list args;
- va_start (args, fmt);
- vsnprintf (panic_msg, sizeof(panic_msg), fmt, args);
- va_end (args);
+ printk(KERN_ERR "FAT: Filesystem panic (dev %s)\n", s->s_id);
- not_ro = !(s->s_flags & MS_RDONLY);
- if (not_ro)
- s->s_flags |= MS_RDONLY;
+ printk(KERN_ERR " ");
+ va_start(args, fmt);
+ vprintk(fmt, args);
+ va_end(args);
+ printk("\n");
- printk(KERN_ERR "FAT: Filesystem panic (dev %s)\n"
- " %s\n", s->s_id, panic_msg);
- if (not_ro)
+ if (!(s->s_flags & MS_RDONLY)) {
+ s->s_flags |= MS_RDONLY;
printk(KERN_ERR " File system has been set read-only\n");
+ }
}
void lock_fat(struct super_block *sb)
int fat_add_cluster(struct inode *inode)
{
struct super_block *sb = inode->i_sb;
+ struct msdos_sb_info *sbi = MSDOS_SB(sb);
int ret, count, limit, new_dclus, new_fclus, last;
- int cluster_bits = MSDOS_SB(sb)->cluster_bits;
-
- /*
+ int cluster_bits = sbi->cluster_bits;
+
+ /*
* We must locate the last cluster of the file to add this new
* one (new_dclus) to the end of the link list (the FAT).
*
/* find free FAT entry */
lock_fat(sb);
-
- if (MSDOS_SB(sb)->free_clusters == 0) {
+
+ if (sbi->free_clusters == 0) {
unlock_fat(sb);
return -ENOSPC;
}
- limit = MSDOS_SB(sb)->clusters + 2;
- new_dclus = MSDOS_SB(sb)->prev_free + 1;
- for (count = 0; count < MSDOS_SB(sb)->clusters; count++, new_dclus++) {
+ limit = sbi->max_cluster;
+ new_dclus = sbi->prev_free + 1;
+ for (count = FAT_START_ENT; count < limit; count++, new_dclus++) {
new_dclus = new_dclus % limit;
- if (new_dclus < 2)
- new_dclus = 2;
+ if (new_dclus < FAT_START_ENT)
+ new_dclus = FAT_START_ENT;
ret = fat_access(sb, new_dclus, -1);
if (ret < 0) {
} else if (ret == FAT_ENT_FREE)
break;
}
- if (count >= MSDOS_SB(sb)->clusters) {
- MSDOS_SB(sb)->free_clusters = 0;
+ if (count >= limit) {
+ sbi->free_clusters = 0;
unlock_fat(sb);
return -ENOSPC;
}
return ret;
}
- MSDOS_SB(sb)->prev_free = new_dclus;
- if (MSDOS_SB(sb)->free_clusters != -1)
- MSDOS_SB(sb)->free_clusters--;
+ sbi->prev_free = new_dclus;
+ if (sbi->free_clusters != -1)
+ sbi->free_clusters--;
fat_clusters_flush(sb);
-
+
unlock_fat(sb);
/* add new one to the last of the cluster chain */
new_fclus, inode->i_blocks >> (cluster_bits - 9));
fat_cache_inval_inode(inode);
}
- inode->i_blocks += MSDOS_SB(sb)->cluster_size >> 9;
+ inode->i_blocks += sbi->cluster_size >> 9;
return new_dclus;
}
-struct buffer_head *fat_extend_dir(struct inode *inode)
-{
- struct super_block *sb = inode->i_sb;
- struct buffer_head *bh, *res = NULL;
- int nr, sec_per_clus = MSDOS_SB(sb)->sec_per_clus;
- sector_t sector, last_sector;
-
- if (MSDOS_SB(sb)->fat_bits != 32) {
- if (inode->i_ino == MSDOS_ROOT_INO)
- return ERR_PTR(-ENOSPC);
- }
-
- nr = fat_add_cluster(inode);
- if (nr < 0)
- return ERR_PTR(nr);
-
- sector = ((sector_t)nr - 2) * sec_per_clus + MSDOS_SB(sb)->data_start;
- last_sector = sector + sec_per_clus;
- for ( ; sector < last_sector; sector++) {
- if ((bh = sb_getblk(sb, sector))) {
- memset(bh->b_data, 0, sb->s_blocksize);
- set_buffer_uptodate(bh);
- mark_buffer_dirty(bh);
- if (!res)
- res = bh;
- else
- brelse(bh);
- }
- }
- if (res == NULL)
- res = ERR_PTR(-EIO);
- if (inode->i_size & (sb->s_blocksize - 1)) {
- fat_fs_panic(sb, "Odd directory size");
- inode->i_size = (inode->i_size + sb->s_blocksize)
- & ~((loff_t)sb->s_blocksize - 1);
- }
- inode->i_size += MSDOS_SB(sb)->cluster_size;
- MSDOS_I(inode)->mmu_private += MSDOS_SB(sb)->cluster_size;
-
- return res;
-}
-
-/* Linear day numbers of the respective 1sts in non-leap years. */
-
-static int day_n[] = { 0,31,59,90,120,151,181,212,243,273,304,334,0,0,0,0 };
- /* JanFebMarApr May Jun Jul Aug Sep Oct Nov Dec */
-
-
extern struct timezone sys_tz;
+/* Linear day numbers of the respective 1sts in non-leap years. */
+static int day_n[] = {
+ /* Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec */
+ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 0, 0, 0, 0
+};
/* Convert a MS-DOS time/date pair to a UNIX date (seconds since 1 1 70). */
-
-int date_dos2unix(unsigned short time,unsigned short date)
+int date_dos2unix(unsigned short time, unsigned short date)
{
- int month,year,secs;
+ int month, year, secs;
- /* first subtract and mask after that... Otherwise, if
- date == 0, bad things happen */
+ /*
+ * first subtract and mask after that... Otherwise, if
+ * date == 0, bad things happen
+ */
month = ((date >> 5) - 1) & 15;
year = date >> 9;
secs = (time & 31)*2+60*((time >> 5) & 63)+(time >> 11)*3600+86400*
return secs;
}
-
/* Convert linear UNIX date to a MS-DOS time/date pair. */
-
-void fat_date_unix2dos(int unix_date,__le16 *time, __le16 *date)
+void fat_date_unix2dos(int unix_date, __le16 *time, __le16 *date)
{
- int day,year,nl_day,month;
+ int day, year, nl_day, month;
unix_date -= sys_tz.tz_minuteswest*60;
(((unix_date/3600) % 24) << 11));
day = unix_date/86400-3652;
year = day/365;
- if ((year+3)/4+365*year > day) year--;
+ if ((year+3)/4+365*year > day)
+ year--;
day -= (year+3)/4+365*year;
if (day == 59 && !(year & 3)) {
nl_day = day;
month = 2;
- }
- else {
+ } else {
nl_day = (year & 3) || day <= 59 ? day : day-1;
- for (month = 0; month < 12; month++)
- if (day_n[month] > nl_day) break;
+ for (month = 0; month < 12; month++) {
+ if (day_n[month] > nl_day)
+ break;
+ }
}
*date = cpu_to_le16(nl_day-day_n[month-1]+1+(month << 5)+(year << 9));
}
+EXPORT_SYMBOL(fat_date_unix2dos);
/* Returns the inode number of the directory entry at offset pos. If bh is
non-NULL, it is brelse'd before. Pos is incremented. The buffer header is
AV. want the next entry (took one explicit de=NULL in vfat/namei.c).
AV. It's done in fat_get_entry() (inlined), here the slow case lives.
AV. Additionally, when we return -1 (i.e. reached the end of directory)
- AV. we make bh NULL.
+ AV. we make bh NULL.
*/
-int fat__get_entry(struct inode *dir, loff_t *pos,struct buffer_head **bh,
+int fat__get_entry(struct inode *dir, loff_t *pos, struct buffer_head **bh,
struct msdos_dir_entry **de, loff_t *i_pos)
{
struct super_block *sb = dir->i_sb;
return 0;
}
+
+EXPORT_SYMBOL(fat__get_entry);
goto err;
err:
- if (!PIPE_READERS(*inode) && !PIPE_WRITERS(*inode)) {
- struct pipe_inode_info *info = inode->i_pipe;
- inode->i_pipe = NULL;
- free_page((unsigned long)info->base);
- kfree(info);
- }
+ if (!PIPE_READERS(*inode) && !PIPE_WRITERS(*inode))
+ free_pipe_info(inode);
err_nocleanup:
up(PIPE_SEM(*inode));
*/
static struct file_system_type *file_systems;
-static rwlock_t file_systems_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(file_systems_lock);
/* WARNING: This can be used only if we _already_ own a reference */
void get_filesystem(struct file_system_type *fs)
inode->i_nlink--;
hfs_delete_inode(inode);
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
return res;
if (res)
return res;
inode->i_nlink = 0;
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
hfs_delete_inode(inode);
mark_inode_dirty(inode);
return 0;
retval = generic_file_write(file, buf, count, ppos);
if (retval > 0) {
struct inode *inode = file->f_dentry->d_inode;
- inode->i_mtime = CURRENT_TIME;
+ inode->i_mtime = CURRENT_TIME_SEC;
hpfs_i(inode)->i_dirty = 1;
}
return retval;
break;
}
retry = __flush_buffer(journal, jh, bhs, &batch_count, &drop_count);
+ if (cond_resched_lock(&journal->j_list_lock)) {
+ retry = 1;
+ break;
+ }
} while (jh != last_jh && !retry);
if (batch_count)
	/* Use trylock because of the ranking */
if (jbd_trylock_bh_state(jh2bh(jh)))
ret += __try_to_free_cp_buf(jh);
+ /*
+ * This function only frees up some memory
+	 * if possible so we don't have an obligation
+ * to finish processing. Bail out if preemption
+ * requested:
+ */
+ if (need_resched())
+ goto out;
} while (jh != last_jh);
}
} while (transaction != last_transaction);
struct buffer_head * obh;
struct buffer_head * nbh;
+ cond_resched(); /* We're under lock_kernel() */
+
/* If we already know where to stop the log traversal,
* check right now that we haven't gone past the end of
* the log. */
- $Id: README.Locking,v 1.4 2002/03/08 16:20:06 dwmw2 Exp $
+ $Id: README.Locking,v 1.9 2004/11/20 10:35:40 dwmw2 Exp $
JFFS2 LOCKING DOCUMENTATION
---------------------------
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.
-As the MTD API permits erase-completion callback functions to be
-called from bottom-half (timer) context, and these functions access
-the data structures protected by this lock, it must be locked with
-spin_lock_bh().
+As the MTD API no longer permits erase-completion callback functions
+to be called from bottom-half (timer) context (on the basis that nobody
+ever actually implemented such a thing), it's now sufficient to use
+a simple spin_lock() rather than spin_lock_bh().
Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
- node_free_sem
- -------------
+
+ inocache_lock spinlock
+ ----------------------
+
+This spinlock protects the hashed list (c->inocache_list) of the
+in-core jffs2_inode_cache objects (each inode in JFFS2 has the
+corresponding jffs2_inode_cache object). So, the inocache_lock
+has to be locked while walking the c->inocache_list hash buckets.
+
+Note that the f->sem guarantees that the corresponding jffs2_inode_cache
+will not be removed. So, it is allowed to access it without locking
+the inocache_lock spinlock.
+
+Ordering constraints:
+
+ If both erase_completion_lock and inocache_lock are needed, the
+	c->erase_completion_lock has to be acquired first.
+
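Expressed as code, the ordering constraint above amounts to the following
nesting whenever both locks are needed (a sketch for illustration only, not
lifted from the JFFS2 sources; c is the usual struct jffs2_sb_info pointer):

	spin_lock(&c->erase_completion_lock);	/* outer lock, taken first */
	spin_lock(&c->inocache_lock);		/* then the inocache hash lock */
	/* ... walk or modify the c->inocache_list hash buckets ... */
	spin_unlock(&c->inocache_lock);
	spin_unlock(&c->erase_completion_lock);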
+
+ erase_free_sem
+ --------------
This semaphore is only used by the erase code which frees obsolete
node references and the jffs2_garbage_collect_deletion_dirent()
collection code is looking at them.
Suggestions for alternative solutions to this problem would be welcomed.
+
+
+ wbuf_sem
+ --------
+
+This read/write semaphore protects against concurrent access to the
+write-behind buffer ('wbuf') used for flash chips where we must write
+in blocks. It protects both the contents of the wbuf and the metadata
+which indicates which flash region (if any) is currently covered by
+the buffer.
+
+Ordering constraints:
+	Lock wbuf_sem last, after the alloc_sem and/or f->sem.
*
* Copyright (C) 2001-2003 Red Hat, Inc.
*
- * Created by David Woodhouse <dwmw2@redhat.com>
+ * Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
- * $Id: dir.c,v 1.83 2004/10/19 07:48:44 havasi Exp $
+ * $Id: dir.c,v 1.84 2004/11/16 20:36:11 dwmw2 Exp $
*
*/
*
* Copyright (C) 2001, 2002 Red Hat, Inc.
*
- * Created by David Woodhouse <dwmw2@redhat.com>
+ * Created by David Woodhouse <dwmw2@infradead.org>
*
* For licensing information, see the file 'LICENCE' in this directory.
*
- * $Id: pushpull.h,v 1.9 2003/10/04 08:33:06 dwmw2 Exp $
+ * $Id: pushpull.h,v 1.10 2004/11/16 20:36:11 dwmw2 Exp $
*
*/
* Don't commit if inode has been committed since last being
* marked dirty, or if it has been deleted.
*/
- if (test_cflag(COMMIT_Nolink, inode) ||
- !test_cflag(COMMIT_Dirty, inode))
+ if (inode->i_nlink == 0 || !test_cflag(COMMIT_Dirty, inode))
return 0;
if (isReadOnly(inode)) {
tid = txBegin(inode->i_sb, COMMIT_INODE);
down(&JFS_IP(inode)->commit_sem);
- rc = txCommit(tid, 1, &inode, wait ? COMMIT_SYNC : 0);
+
+ /*
+ * Retest inode state after taking commit_sem
+ */
+ if (inode->i_nlink && test_cflag(COMMIT_Dirty, inode))
+ rc = txCommit(tid, 1, &inode, wait ? COMMIT_SYNC : 0);
+
txEnd(tid);
up(&JFS_IP(inode)->commit_sem);
return rc;
s->s_blocksize_bits = 10;
s->s_magic = magic;
s->s_op = ops ? ops : &default_ops;
+ s->s_time_gran = 1;
root = new_inode(s);
if (!root)
goto Enomem;
s->s_blocksize_bits = PAGE_CACHE_SHIFT;
s->s_magic = magic;
s->s_op = &s_ops;
+ s->s_time_gran = 1;
inode = new_inode(s);
if (!inode)
return -ENOMEM;
}
-static spinlock_t pin_fs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(pin_fs_lock);
int simple_pin_fs(char *name, struct vfsmount **mount, int *count)
{
char *simple_transaction_get(struct file *file, const char __user *buf, size_t size)
{
struct simple_transaction_argresp *ar;
- static spinlock_t simple_transaction_lock = SPIN_LOCK_UNLOCKED;
+ static DEFINE_SPINLOCK(simple_transaction_lock);
if (size > SIMPLE_TRANSACTION_LIMIT - 1)
return ERR_PTR(-EFBIG);
EXPORT_SYMBOL(dcache_dir_open);
EXPORT_SYMBOL(dcache_readdir);
EXPORT_SYMBOL(generic_read_dir);
+EXPORT_SYMBOL(get_sb_pseudo);
EXPORT_SYMBOL(simple_commit_write);
EXPORT_SYMBOL(simple_dir_inode_operations);
EXPORT_SYMBOL(simple_dir_operations);
inode->i_uid = current->fsuid;
inode->i_gid = (dir->i_mode & S_ISGID) ? dir->i_gid : current->fsgid;
inode->i_ino = j;
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
inode->i_blocks = inode->i_blksize = 0;
memset(&minix_i(inode)->u, 0, sizeof(minix_i(inode)->u));
insert_inode_hash(inode);
memset (de->name + namelen, 0, sbi->s_dirsize - namelen - 2);
de->inode = inode->i_ino;
err = dir_commit_chunk(page, from, to);
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
out_put:
dir_put_page(page);
unlock_page(page);
}
dir_put_page(page);
- inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
return err;
}
unlock_page(page);
}
dir_put_page(page);
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
}
struct buffer_head *bh;
} Indirect;
-static rwlock_t pointers_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(pointers_lock);
static inline void add_chain(Indirect *p, struct buffer_head *bh, block_t *v)
{
/* We are done with atomic stuff, now do the rest of housekeeping */
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
/* had we spliced it onto indirect block? */
if (where->bh)
}
first_whole++;
}
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
}
if (inode->i_nlink >= minix_sb(inode->i_sb)->s_link_max)
return -EMLINK;
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
inc_count(inode);
atomic_inc(&inode->i_count);
return add_nondir(dentry, inode);
goto out_dir;
inc_count(old_inode);
minix_set_link(new_de, new_page, old_inode);
- new_inode->i_ctime = CURRENT_TIME;
+ new_inode->i_ctime = CURRENT_TIME_SEC;
if (dir_de)
new_inode->i_nlink--;
dec_count(new_inode);
#include <linux/msdos_fs.h>
#include <linux/smp_lock.h>
-
/* MS-DOS "device special files" */
static const unsigned char *reserved_names[] = {
- "CON ","PRN ","NUL ","AUX ",
- "LPT1 ","LPT2 ","LPT3 ","LPT4 ",
- "COM1 ","COM2 ","COM3 ","COM4 ",
+ "CON ", "PRN ", "NUL ", "AUX ",
+ "LPT1 ", "LPT2 ", "LPT3 ", "LPT4 ",
+ "COM1 ", "COM2 ", "COM3 ", "COM4 ",
NULL
};
/* Characters that are undesirable in an MS-DOS file name */
static unsigned char bad_chars[] = "*?<>|\"";
static unsigned char bad_if_strict_pc[] = "+=,; ";
-static unsigned char bad_if_strict_atari[] = " "; /* GEMDOS is less restrictive */
-#define bad_if_strict(opts) ((opts)->atari ? bad_if_strict_atari : bad_if_strict_pc)
+/* GEMDOS is less restrictive */
+static unsigned char bad_if_strict_atari[] = " ";
+
+#define bad_if_strict(opts) \
+ ((opts)->atari ? bad_if_strict_atari : bad_if_strict_pc)
/***** Formats an MS-DOS file name. Rejects invalid names. */
static int msdos_format_name(const unsigned char *name, int len,
- unsigned char *res, struct fat_mount_options *opts)
- /* name is the proposed name, len is its length, res is
+ unsigned char *res, struct fat_mount_options *opts)
+ /*
+ * name is the proposed name, len is its length, res is
* the resulting name, opts->name_check is either (r)elaxed,
* (n)ormal or (s)trict, opts->dotsOK allows dots at the
* beginning of name (for hidden files)
unsigned char c;
int space;
- if (name[0] == '.') { /* dotfile because . and .. already done */
+ if (name[0] == '.') { /* dotfile because . and .. already done */
if (opts->dotsOK) {
/* Get rid of dot - test for it elsewhere */
- name++; len--;
- }
- else if (!opts->atari) return -EINVAL;
+ name++;
+ len--;
+ } else if (!opts->atari)
+ return -EINVAL;
}
- /* disallow names that _really_ start with a dot for MS-DOS, GEMDOS does
- * not care */
+ /*
+ * disallow names that _really_ start with a dot for MS-DOS,
+ * GEMDOS does not care
+ */
space = !opts->atari;
c = 0;
- for (walk = res; len && walk-res < 8; walk++) {
- c = *name++;
+ for (walk = res; len && walk - res < 8; walk++) {
+ c = *name++;
len--;
- if (opts->name_check != 'r' && strchr(bad_chars,c))
+ if (opts->name_check != 'r' && strchr(bad_chars, c))
return -EINVAL;
- if (opts->name_check == 's' && strchr(bad_if_strict(opts),c))
+ if (opts->name_check == 's' && strchr(bad_if_strict(opts), c))
return -EINVAL;
- if (c >= 'A' && c <= 'Z' && opts->name_check == 's')
+ if (c >= 'A' && c <= 'Z' && opts->name_check == 's')
return -EINVAL;
- if (c < ' ' || c == ':' || c == '\\') return -EINVAL;
-/* 0xE5 is legal as a first character, but we must substitute 0x05 */
-/* because 0xE5 marks deleted files. Yes, DOS really does this. */
-/* It seems that Microsoft hacked DOS to support non-US characters */
-/* after the 0xE5 character was already in use to mark deleted files. */
- if((res==walk) && (c==0xE5)) c=0x05;
- if (c == '.') break;
+ if (c < ' ' || c == ':' || c == '\\')
+ return -EINVAL;
+ /*
+ * 0xE5 is legal as a first character, but we must substitute
+ * 0x05 because 0xE5 marks deleted files. Yes, DOS really
+ * does this.
+ * It seems that Microsoft hacked DOS to support non-US
+ * characters after the 0xE5 character was already in use to
+ * mark deleted files.
+ */
+ if ((res == walk) && (c == 0xE5))
+ c = 0x05;
+ if (c == '.')
+ break;
space = (c == ' ');
- *walk = (!opts->nocase && c >= 'a' && c <= 'z') ? c-32 : c;
+ *walk = (!opts->nocase && c >= 'a' && c <= 'z') ? c - 32 : c;
}
- if (space) return -EINVAL;
+ if (space)
+ return -EINVAL;
if (opts->name_check == 's' && len && c != '.') {
c = *name++;
len--;
- if (c != '.') return -EINVAL;
+ if (c != '.')
+ return -EINVAL;
}
- while (c != '.' && len--) c = *name++;
+ while (c != '.' && len--)
+ c = *name++;
if (c == '.') {
- while (walk-res < 8) *walk++ = ' ';
- while (len > 0 && walk-res < MSDOS_NAME) {
+ while (walk - res < 8)
+ *walk++ = ' ';
+ while (len > 0 && walk - res < MSDOS_NAME) {
c = *name++;
len--;
- if (opts->name_check != 'r' && strchr(bad_chars,c))
+ if (opts->name_check != 'r' && strchr(bad_chars, c))
return -EINVAL;
if (opts->name_check == 's' &&
- strchr(bad_if_strict(opts),c))
+ strchr(bad_if_strict(opts), c))
return -EINVAL;
if (c < ' ' || c == ':' || c == '\\')
return -EINVAL;
if (c >= 'A' && c <= 'Z' && opts->name_check == 's')
return -EINVAL;
space = c == ' ';
- *walk++ = (!opts->nocase && c >= 'a' && c <= 'z') ? c-32 : c;
+ if (!opts->nocase && c >= 'a' && c <= 'z')
+ *walk++ = c - 32;
+ else
+ *walk++ = c;
}
- if (space) return -EINVAL;
- if (opts->name_check == 's' && len) return -EINVAL;
+ if (space)
+ return -EINVAL;
+ if (opts->name_check == 's' && len)
+ return -EINVAL;
}
- while (walk-res < MSDOS_NAME) *walk++ = ' ';
+ while (walk - res < MSDOS_NAME)
+ *walk++ = ' ';
if (!opts->atari)
/* GEMDOS is less stupid and has no reserved names */
for (reserved = reserved_names; *reserved; reserved++)
- if (!strncmp(res,*reserved,8)) return -EINVAL;
+ if (!strncmp(res, *reserved, 8))
+ return -EINVAL;
return 0;
}
int res;
dotsOK = MSDOS_SB(dir->i_sb)->options.dotsOK;
- res = msdos_format_name(name,len, msdos_name,&MSDOS_SB(dir->i_sb)->options);
+ res = msdos_format_name(name, len, msdos_name,
+ &MSDOS_SB(dir->i_sb)->options);
if (res < 0)
return -ENOENT;
res = fat_scan(dir, msdos_name, bh, de, i_pos);
if (!res && dotsOK) {
- if (name[0]=='.') {
+ if (name[0] == '.') {
if (!((*de)->attr & ATTR_HIDDEN))
res = -ENOENT;
} else {
*/
static int msdos_hash(struct dentry *dentry, struct qstr *qstr)
{
- struct fat_mount_options *options = & (MSDOS_SB(dentry->d_sb)->options);
+ struct fat_mount_options *options = &MSDOS_SB(dentry->d_sb)->options;
unsigned char msdos_name[MSDOS_NAME];
int error;
-
+
error = msdos_format_name(qstr->name, qstr->len, msdos_name, options);
if (!error)
qstr->hash = full_name_hash(msdos_name, MSDOS_NAME);
*/
static int msdos_cmp(struct dentry *dentry, struct qstr *a, struct qstr *b)
{
- struct fat_mount_options *options = & (MSDOS_SB(dentry->d_sb)->options);
+ struct fat_mount_options *options = &MSDOS_SB(dentry->d_sb)->options;
unsigned char a_msdos_name[MSDOS_NAME], b_msdos_name[MSDOS_NAME];
int error;
goto out;
}
-
static struct dentry_operations msdos_dentry_operations = {
.d_hash = msdos_hash,
.d_compare = msdos_cmp,
/***** Get inode using directory and name */
static struct dentry *msdos_lookup(struct inode *dir, struct dentry *dentry,
- struct nameidata *nd)
+ struct nameidata *nd)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
struct buffer_head *bh = NULL;
loff_t i_pos;
int res;
-
+
dentry->d_op = &msdos_dentry_operations;
lock_kernel();
/*
* XXX all times should be set by caller upon successful completion.
*/
- dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
memcpy((*de)->name, name, MSDOS_NAME);
/***** Create a file */
static int msdos_create(struct inode *dir, struct dentry *dentry, int mode,
- struct nameidata *nd)
+ struct nameidata *nd)
{
struct super_block *sb = dir->i_sb;
struct buffer_head *bh;
unsigned char msdos_name[MSDOS_NAME];
lock_kernel();
- res = msdos_format_name(dentry->d_name.name,dentry->d_name.len,
+ res = msdos_format_name(dentry->d_name.name, dentry->d_name.len,
msdos_name, &MSDOS_SB(sb)->options);
if (res < 0) {
unlock_kernel();
return res;
}
- is_hid = (dentry->d_name.name[0]=='.') && (msdos_name[0]!='.');
+ is_hid = (dentry->d_name.name[0] == '.') && (msdos_name[0] != '.');
/* Have to do it due to foo vs. .foo conflicts */
if (fat_scan(dir, msdos_name, &bh, &de, &i_pos) >= 0) {
brelse(bh);
unlock_kernel();
return -EINVAL;
- }
+ }
inode = NULL;
res = msdos_add_entry(dir, msdos_name, &bh, &de, &i_pos, 0, is_hid);
if (res) {
unlock_kernel();
return res;
}
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
d_instantiate(dentry, inode);
unlock_kernel();
mark_buffer_dirty(bh);
fat_detach(inode);
inode->i_nlink = 0;
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
dir->i_nlink--;
mark_inode_dirty(inode);
mark_inode_dirty(dir);
struct buffer_head *bh;
struct msdos_dir_entry *de;
struct inode *inode;
- int res,is_hid;
+ int res, is_hid;
unsigned char msdos_name[MSDOS_NAME];
loff_t i_pos;
lock_kernel();
- res = msdos_format_name(dentry->d_name.name,dentry->d_name.len,
+ res = msdos_format_name(dentry->d_name.name, dentry->d_name.len,
msdos_name, &MSDOS_SB(sb)->options);
if (res < 0) {
unlock_kernel();
return res;
}
- is_hid = (dentry->d_name.name[0]=='.') && (msdos_name[0]!='.');
+ is_hid = (dentry->d_name.name[0] == '.') && (msdos_name[0] != '.');
/* foo vs .foo situation */
if (fat_scan(dir, msdos_name, &bh, &de, &i_pos) >= 0)
goto out_exist;
res = 0;
dir->i_nlink++;
- inode->i_nlink = 2; /* no need to mark them dirty */
+ inode->i_nlink = 2; /* no need to mark them dirty */
res = fat_new_dir(inode, dir, 0);
if (res)
mkdir_error:
inode->i_nlink = 0;
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
dir->i_nlink--;
mark_inode_dirty(inode);
mark_inode_dirty(dir);
fat_detach(inode);
brelse(bh);
inode->i_nlink = 0;
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
mark_inode_dirty(dir);
res = 0;
}
static int do_msdos_rename(struct inode *old_dir, unsigned char *old_name,
- struct dentry *old_dentry,
- struct inode *new_dir, unsigned char *new_name, struct dentry *new_dentry,
- struct buffer_head *old_bh,
- struct msdos_dir_entry *old_de, loff_t old_i_pos, int is_hid)
+ struct dentry *old_dentry,
+ struct inode *new_dir, unsigned char *new_name,
+ struct dentry *new_dentry,
+ struct buffer_head *old_bh,
+ struct msdos_dir_entry *old_de, loff_t old_i_pos,
+ int is_hid)
{
- struct buffer_head *new_bh=NULL,*dotdot_bh=NULL;
- struct msdos_dir_entry *new_de,*dotdot_de;
- struct inode *old_inode,*new_inode;
+ struct buffer_head *new_bh = NULL, *dotdot_bh = NULL;
+ struct msdos_dir_entry *new_de, *dotdot_de;
+ struct inode *old_inode, *new_inode;
loff_t new_i_pos, dotdot_i_pos;
int error;
int is_dir;
new_inode = new_dentry->d_inode;
is_dir = S_ISDIR(old_inode->i_mode);
- if (fat_scan(new_dir, new_name, &new_bh, &new_de, &new_i_pos) >= 0
- && !new_inode)
+ if (fat_scan(new_dir, new_name, &new_bh, &new_de, &new_i_pos) >= 0 &&
+ !new_inode)
goto degenerate_case;
if (is_dir) {
if (new_inode) {
MSDOS_I(old_inode)->i_attrs &= ~ATTR_HIDDEN;
mark_inode_dirty(old_inode);
old_dir->i_version++;
- old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME;
+ old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(old_dir);
if (new_inode) {
new_inode->i_nlink--;
- new_inode->i_ctime = CURRENT_TIME;
+ new_inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(new_inode);
}
if (dotdot_bh) {
dotdot_de->start = cpu_to_le16(MSDOS_I(new_dir)->i_logstart);
- dotdot_de->starthi = cpu_to_le16((MSDOS_I(new_dir)->i_logstart) >> 16);
+ dotdot_de->starthi =
+ cpu_to_le16((MSDOS_I(new_dir)->i_logstart) >> 16);
mark_buffer_dirty(dotdot_bh);
old_dir->i_nlink--;
mark_inode_dirty(old_dir);
degenerate_case:
error = -EINVAL;
- if (new_de!=old_de)
+ if (new_de != old_de)
goto out;
if (is_hid)
MSDOS_I(old_inode)->i_attrs |= ATTR_HIDDEN;
MSDOS_I(old_inode)->i_attrs &= ~ATTR_HIDDEN;
mark_inode_dirty(old_inode);
old_dir->i_version++;
- old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME;
+ old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(old_dir);
return 0;
}
/***** Rename, a wrapper for rename_same_dir & rename_diff_dir */
static int msdos_rename(struct inode *old_dir, struct dentry *old_dentry,
- struct inode *new_dir, struct dentry *new_dentry)
+ struct inode *new_dir, struct dentry *new_dentry)
{
struct buffer_head *old_bh;
struct msdos_dir_entry *old_de;
lock_kernel();
error = msdos_format_name(old_dentry->d_name.name,
- old_dentry->d_name.len,old_msdos_name,
+ old_dentry->d_name.len, old_msdos_name,
&MSDOS_SB(old_dir->i_sb)->options);
if (error < 0)
goto rename_done;
error = msdos_format_name(new_dentry->d_name.name,
- new_dentry->d_name.len,new_msdos_name,
+ new_dentry->d_name.len, new_msdos_name,
&MSDOS_SB(new_dir->i_sb)->options);
if (error < 0)
goto rename_done;
- is_hid = (new_dentry->d_name.name[0]=='.') && (new_msdos_name[0]!='.');
- old_hid = (old_dentry->d_name.name[0]=='.') && (old_msdos_name[0]!='.');
+ is_hid =
+ (new_dentry->d_name.name[0] == '.') && (new_msdos_name[0] != '.');
+ old_hid =
+ (old_dentry->d_name.name[0] == '.') && (old_msdos_name[0] != '.');
+
error = fat_scan(old_dir, old_msdos_name, &old_bh, &old_de, &old_i_pos);
if (error < 0)
goto rename_done;
.setattr = fat_notify_change,
};
-static int msdos_fill_super(struct super_block *sb,void *data, int silent)
+static int msdos_fill_super(struct super_block *sb, void *data, int silent)
{
int res;
}
static struct super_block *msdos_get_sb(struct file_system_type *fs_type,
- int flags, const char *dev_name, void *data)
+ int flags, const char *dev_name,
+ void *data)
{
return get_sb_bdev(fs_type, flags, dev_name, data, msdos_fill_super);
}
dprintk("%s: call fsinfo\n", __FUNCTION__);
info->fattr->valid = 0;
status = rpc_call(server->client_sys, NFS3PROC_FSINFO, fhandle, info, 0);
- dprintk("%s: reply fsinfo %d\n", __FUNCTION__, status);
+ dprintk("%s: reply fsinfo: %d\n", __FUNCTION__, status);
if (!(info->fattr->valid & NFS_ATTR_FATTR)) {
status = rpc_call(server->client_sys, NFS3PROC_GETATTR, fhandle, info->fattr, 0);
- dprintk("%s: reply getattr %d\n", __FUNCTION__, status);
+ dprintk("%s: reply getattr: %d\n", __FUNCTION__, status);
}
return status;
}
fattr->valid = 0;
status = rpc_call(server->client, NFS3PROC_GETATTR,
fhandle, fattr, 0);
- dprintk("NFS reply getattr\n");
+ dprintk("NFS reply getattr: %d\n", status);
return status;
}
dprintk("NFS call setattr\n");
fattr->valid = 0;
status = rpc_call(NFS_CLIENT(inode), NFS3PROC_SETATTR, &arg, fattr, 0);
- dprintk("NFS reply setattr\n");
+ dprintk("NFS reply setattr: %d\n", status);
return status;
}
if (res.access & (NFS3_ACCESS_LOOKUP|NFS3_ACCESS_EXECUTE))
entry->mask |= MAY_EXEC;
}
- dprintk("NFS reply access, status = %d\n", status);
+ dprintk("NFS reply access: %d\n", status);
return status;
}
* For now, we don't implement O_EXCL.
*/
static struct inode *
-nfs3_proc_create(struct inode *dir, struct qstr *name, struct iattr *sattr,
+nfs3_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
int flags)
{
struct nfs_fh fhandle;
struct nfs_fattr dir_attr;
struct nfs3_createargs arg = {
.fh = NFS_FH(dir),
- .name = name->name,
- .len = name->len,
+ .name = dentry->d_name.name,
+ .len = dentry->d_name.len,
.sattr = sattr,
};
struct nfs3_diropres res = {
};
int status;
- dprintk("NFS call create %s\n", name->name);
+ dprintk("NFS call create %s\n", dentry->d_name.name);
arg.createmode = NFS3_CREATE_UNCHECKED;
if (flags & O_EXCL) {
arg.createmode = NFS3_CREATE_EXCLUSIVE;
if (status != 0)
goto out;
if (fhandle.size == 0 || !(fattr.valid & NFS_ATTR_FATTR)) {
- status = nfs3_proc_lookup(dir, name, &fhandle, &fattr);
+ status = nfs3_proc_lookup(dir, &dentry->d_name, &fhandle, &fattr);
if (status != 0)
goto out;
}
export.o auth.o lockd.o nfscache.o nfsxdr.o stats.o
nfsd-$(CONFIG_NFSD_V3) += nfs3proc.o nfs3xdr.o
nfsd-$(CONFIG_NFSD_V4) += nfs4proc.o nfs4xdr.o nfs4state.o nfs4idmap.o \
- nfs4acl.o
+ nfs4acl.o nfs4callback.o
nfsd-objs := $(nfsd-y)
--- /dev/null
+/*
+ * linux/fs/nfsd/nfs4callback.c
+ *
+ * Copyright (c) 2001 The Regents of the University of Michigan.
+ * All rights reserved.
+ *
+ * Kendrick Smith <kmsmith@umich.edu>
+ * Andy Adamson <andros@umich.edu>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/inet.h>
+#include <linux/errno.h>
+#include <linux/sunrpc/xdr.h>
+#include <linux/sunrpc/svc.h>
+#include <linux/sunrpc/clnt.h>
+#include <linux/nfsd/nfsd.h>
+#include <linux/nfsd/state.h>
+#include <linux/sunrpc/sched.h>
+#include <linux/nfs4.h>
+
+#define NFSDDBG_FACILITY NFSDDBG_PROC
+
+#define NFSPROC4_CB_NULL 0
+#define NFSPROC4_CB_COMPOUND 1
+
+/* declarations */
+static void nfs4_cb_null(struct rpc_task *task);
+extern spinlock_t recall_lock;
+
+/* Index of predefined Linux callback client operations */
+
+enum {
+ NFSPROC4_CLNT_CB_NULL = 0,
+ NFSPROC4_CLNT_CB_RECALL,
+};
+
+enum nfs_cb_opnum4 {
+ OP_CB_RECALL = 4,
+};
+
+#define NFS4_MAXTAGLEN 20
+
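+/* Maximum XDR encode/decode sizes for the callback calls, in 32-bit XDR words */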
+#define NFS4_enc_cb_null_sz 0
+#define NFS4_dec_cb_null_sz 0
+#define cb_compound_enc_hdr_sz 4
+#define cb_compound_dec_hdr_sz (3 + (NFS4_MAXTAGLEN >> 2))
+#define op_enc_sz 1
+#define op_dec_sz 2
+#define enc_nfs4_fh_sz (1 + (NFS4_FHSIZE >> 2))
+#define enc_stateid_sz 16
+#define NFS4_enc_cb_recall_sz (cb_compound_enc_hdr_sz + \
+ 1 + enc_stateid_sz + \
+ enc_nfs4_fh_sz)
+
+#define NFS4_dec_cb_recall_sz (cb_compound_dec_hdr_sz + \
+ op_dec_sz)
+
+/*
+ * Generic encode routines from fs/nfs/nfs4xdr.c
+ */
+static inline u32 *
+xdr_writemem(u32 *p, const void *ptr, int nbytes)
+{
+ int tmp = XDR_QUADLEN(nbytes);
+ if (!tmp)
+ return p;
+ p[tmp-1] = 0;
+ memcpy(p, ptr, nbytes);
+ return p + tmp;
+}
+
+#define WRITE32(n) *p++ = htonl(n)
+#define WRITEMEM(ptr,nbytes) do { \
+ p = xdr_writemem(p, ptr, nbytes); \
+} while (0)
+#define RESERVE_SPACE(nbytes) do { \
+ p = xdr_reserve_space(xdr, nbytes); \
+ if (!p) dprintk("NFSD: RESERVE_SPACE(%d) failed in function %s\n", (int) (nbytes), __FUNCTION__); \
+ BUG_ON(!p); \
+} while (0)
+
+/*
+ * Generic decode routines from fs/nfs/nfs4xdr.c
+ */
+#define DECODE_TAIL \
+ status = 0; \
+out: \
+ return status; \
+xdr_error: \
+ dprintk("NFSD: xdr error! (%s:%d)\n", __FILE__, __LINE__); \
+ status = -EIO; \
+ goto out
+
+#define READ32(x) (x) = ntohl(*p++)
+#define READ64(x) do { \
+ (x) = (u64)ntohl(*p++) << 32; \
+ (x) |= ntohl(*p++); \
+} while (0)
+#define READTIME(x) do { \
+ p++; \
+ (x.tv_sec) = ntohl(*p++); \
+ (x.tv_nsec) = ntohl(*p++); \
+} while (0)
+#define READ_BUF(nbytes) do { \
+ p = xdr_inline_decode(xdr, nbytes); \
+ if (!p) { \
+ dprintk("NFSD: %s: reply buffer overflowed in line %d.", \
+ __FUNCTION__, __LINE__); \
+ return -EIO; \
+ } \
+} while (0)
+
+struct nfs4_cb_compound_hdr {
+ int status;
+ u32 ident;
+ u32 nops;
+ u32 taglen;
+ char * tag;
+};
+
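+/* Map NFSv4 callback status codes onto local errno values */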
+static struct {
+	int stat;
+	int errno;
+} nfs_cb_errtbl[] = {
+ { NFS4_OK, 0 },
+ { NFS4ERR_PERM, EPERM },
+ { NFS4ERR_NOENT, ENOENT },
+ { NFS4ERR_IO, EIO },
+ { NFS4ERR_NXIO, ENXIO },
+ { NFS4ERR_ACCESS, EACCES },
+ { NFS4ERR_EXIST, EEXIST },
+ { NFS4ERR_XDEV, EXDEV },
+ { NFS4ERR_NOTDIR, ENOTDIR },
+ { NFS4ERR_ISDIR, EISDIR },
+ { NFS4ERR_INVAL, EINVAL },
+ { NFS4ERR_FBIG, EFBIG },
+ { NFS4ERR_NOSPC, ENOSPC },
+ { NFS4ERR_ROFS, EROFS },
+ { NFS4ERR_MLINK, EMLINK },
+ { NFS4ERR_NAMETOOLONG, ENAMETOOLONG },
+ { NFS4ERR_NOTEMPTY, ENOTEMPTY },
+ { NFS4ERR_DQUOT, EDQUOT },
+ { NFS4ERR_STALE, ESTALE },
+ { NFS4ERR_BADHANDLE, EBADHANDLE },
+ { NFS4ERR_BAD_COOKIE, EBADCOOKIE },
+ { NFS4ERR_NOTSUPP, ENOTSUPP },
+ { NFS4ERR_TOOSMALL, ETOOSMALL },
+ { NFS4ERR_SERVERFAULT, ESERVERFAULT },
+ { NFS4ERR_BADTYPE, EBADTYPE },
+ { NFS4ERR_LOCKED, EAGAIN },
+ { NFS4ERR_RESOURCE, EREMOTEIO },
+ { NFS4ERR_SYMLINK, ELOOP },
+ { NFS4ERR_OP_ILLEGAL, EOPNOTSUPP },
+ { NFS4ERR_DEADLOCK, EDEADLK },
+ { -1, EIO }
+};
+
+static int
+nfs_cb_stat_to_errno(int stat)
+{
+ int i;
+ for (i = 0; nfs_cb_errtbl[i].stat != -1; i++) {
+ if (nfs_cb_errtbl[i].stat == stat)
+ return nfs_cb_errtbl[i].errno;
+ }
+ /* If we cannot translate the error, the recovery routines should
+ * handle it.
+ * Note: remaining NFSv4 error codes have values > 10000, so should
+ * not conflict with native Linux error codes.
+ */
+ return stat;
+}
+
+/*
+ * XDR encode
+ */
+
+static int
+encode_cb_compound_hdr(struct xdr_stream *xdr, struct nfs4_cb_compound_hdr *hdr)
+{
+ u32 * p;
+
+ RESERVE_SPACE(16);
+ WRITE32(0); /* tag length is always 0 */
+ WRITE32(NFS4_MINOR_VERSION);
+ WRITE32(hdr->ident);
+ WRITE32(hdr->nops);
+ return 0;
+}
+
+static int
+encode_cb_recall(struct xdr_stream *xdr, struct nfs4_cb_recall *cb_rec)
+{
+ u32 *p;
+ int len = cb_rec->cbr_fhlen;
+
+ RESERVE_SPACE(12+sizeof(cb_rec->cbr_stateid) + len);
+ WRITE32(OP_CB_RECALL);
+ WRITEMEM(&cb_rec->cbr_stateid, sizeof(stateid_t));
+ WRITE32(cb_rec->cbr_trunc);
+ WRITE32(len);
+ WRITEMEM(cb_rec->cbr_fhval, len);
+ return 0;
+}
+
+static int
+nfs4_xdr_enc_cb_null(struct rpc_rqst *req, u32 *p)
+{
+ struct xdr_stream xdrs, *xdr = &xdrs;
+
+ xdr_init_encode(&xdrs, &req->rq_snd_buf, p);
+ RESERVE_SPACE(0);
+ return 0;
+}
+
+static int
+nfs4_xdr_enc_cb_recall(struct rpc_rqst *req, u32 *p, struct nfs4_cb_recall *args)
+{
+ struct xdr_stream xdr;
+ struct nfs4_cb_compound_hdr hdr = {
+ .nops = 1,
+ };
+
+ xdr_init_encode(&xdr, &req->rq_snd_buf, p);
+ encode_cb_compound_hdr(&xdr, &hdr);
+ return (encode_cb_recall(&xdr, args));
+}
+
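+/*
+ * XDR decode
+ */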
+
+static int
+decode_cb_compound_hdr(struct xdr_stream *xdr, struct nfs4_cb_compound_hdr *hdr)
+{
+ u32 *p;
+
+ READ_BUF(8);
+ READ32(hdr->status);
+ READ32(hdr->taglen);
+ READ_BUF(hdr->taglen + 4);
+ hdr->tag = (char *)p;
+ p += XDR_QUADLEN(hdr->taglen);
+ READ32(hdr->nops);
+ return 0;
+}
+
+static int
+decode_cb_op_hdr(struct xdr_stream *xdr, enum nfs_opnum4 expected)
+{
+ u32 *p;
+ u32 op;
+ int32_t nfserr;
+
+ READ_BUF(8);
+ READ32(op);
+ if (op != expected) {
+ dprintk("NFSD: decode_cb_op_hdr: Callback server returned "
+			"operation %d but we issued a request for %d\n",
+ op, expected);
+ return -EIO;
+ }
+ READ32(nfserr);
+ if (nfserr != NFS_OK)
+ return -nfs_cb_stat_to_errno(nfserr);
+ return 0;
+}
+
+static int
+nfs4_xdr_dec_cb_null(struct rpc_rqst *req, u32 *p)
+{
+ return 0;
+}
+
+static int
+nfs4_xdr_dec_cb_recall(struct rpc_rqst *rqstp, u32 *p)
+{
+ struct xdr_stream xdr;
+ struct nfs4_cb_compound_hdr hdr;
+ int status;
+
+ xdr_init_decode(&xdr, &rqstp->rq_rcv_buf, p);
+ status = decode_cb_compound_hdr(&xdr, &hdr);
+ if (status)
+ goto out;
+ status = decode_cb_op_hdr(&xdr, OP_CB_RECALL);
+out:
+ return status;
+}
+
+/*
+ * RPC procedure tables
+ */
+#ifndef MAX
+# define MAX(a, b) (((a) > (b))? (a) : (b))
+#endif
+
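+/*
+ * Map a client-side callback operation index onto its wire procedure number
+ * and its XDR encode/decode routines.
+ */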
+#define PROC(proc, call, argtype, restype) \
+[NFSPROC4_CLNT_##proc] = { \
+ .p_proc = NFSPROC4_CB_##call, \
+ .p_encode = (kxdrproc_t) nfs4_xdr_##argtype, \
+ .p_decode = (kxdrproc_t) nfs4_xdr_##restype, \
+ .p_bufsiz = MAX(NFS4_##argtype##_sz,NFS4_##restype##_sz) << 2, \
+}
+
+struct rpc_procinfo nfs4_cb_procedures[] = {
+ PROC(CB_NULL, NULL, enc_cb_null, dec_cb_null),
+ PROC(CB_RECALL, COMPOUND, enc_cb_recall, dec_cb_recall),
+};
+
+struct rpc_version nfs_cb_version4 = {
+ .number = 1,
+ .nrprocs = sizeof(nfs4_cb_procedures)/sizeof(nfs4_cb_procedures[0]),
+ .procs = nfs4_cb_procedures
+};
+
+static struct rpc_version * nfs_cb_version[] = {
+ NULL,
+ &nfs_cb_version4,
+};
+
+/*
+ * Use the SETCLIENTID credential
+ */
+struct rpc_cred *
+nfsd4_lookupcred(struct nfs4_client *clp, int taskflags)
+{
+ struct auth_cred acred;
+ struct rpc_clnt *clnt = clp->cl_callback.cb_client;
+ struct rpc_cred *ret = NULL;
+
+ if (!clnt)
+ goto out;
+ get_group_info(clp->cl_cred.cr_group_info);
+ acred.uid = clp->cl_cred.cr_uid;
+ acred.gid = clp->cl_cred.cr_gid;
+ acred.group_info = clp->cl_cred.cr_group_info;
+
+ dprintk("NFSD: looking up %s cred\n",
+ clnt->cl_auth->au_ops->au_name);
+ ret = rpcauth_lookup_credcache(clnt->cl_auth, &acred, taskflags);
+ put_group_info(clp->cl_cred.cr_group_info);
+out:
+ return ret;
+}
+
+/*
+ * Set up the callback client and put a NFSPROC4_CB_NULL on the wire...
+ */
+void
+nfsd4_probe_callback(struct nfs4_client *clp)
+{
+ struct sockaddr_in addr;
+ struct nfs4_callback *cb = &clp->cl_callback;
+ struct rpc_timeout timeparms;
+ struct rpc_xprt * xprt;
+ struct rpc_program * program = &cb->cb_program;
+ struct rpc_stat * stat = &cb->cb_stat;
+ struct rpc_clnt * clnt;
+ struct rpc_message msg = {
+ .rpc_proc = &nfs4_cb_procedures[NFSPROC4_CLNT_CB_NULL],
+ .rpc_argp = clp,
+ };
+ char hostname[32];
+ int status;
+
+ dprintk("NFSD: probe_callback. cb_parsed %d cb_set %d\n",
+ cb->cb_parsed, atomic_read(&cb->cb_set));
+ if (!cb->cb_parsed || atomic_read(&cb->cb_set))
+ return;
+
+ /* Initialize address */
+ memset(&addr, 0, sizeof(addr));
+ addr.sin_family = AF_INET;
+ addr.sin_port = htons(cb->cb_port);
+ addr.sin_addr.s_addr = htonl(cb->cb_addr);
+
+ /* Initialize timeout */
+ timeparms.to_initval = (NFSD_LEASE_TIME/4) * HZ;
+ timeparms.to_retries = 5;
+ timeparms.to_maxval = (NFSD_LEASE_TIME/2) * HZ;
+ timeparms.to_exponential = 1;
+
+ /* Create RPC transport */
+ if (!(xprt = xprt_create_proto(IPPROTO_TCP, &addr, &timeparms))) {
+ dprintk("NFSD: couldn't create callback transport!\n");
+ goto out_err;
+ }
+
+ /* Initialize rpc_program */
+ program->name = "nfs4_cb";
+ program->number = cb->cb_prog;
+ program->nrvers = sizeof(nfs_cb_version)/sizeof(nfs_cb_version[0]);
+ program->version = nfs_cb_version;
+ program->stats = stat;
+
+ /* Initialize rpc_stat */
+ memset(stat, 0, sizeof(struct rpc_stat));
+ stat->program = program;
+
+ /* Create RPC client
+ *
+ * XXX AUTH_UNIX only - need AUTH_GSS....
+ */
+ sprintf(hostname, "%u.%u.%u.%u", NIPQUAD(addr.sin_addr.s_addr));
+ if (!(clnt = rpc_create_client(xprt, hostname, program, 1, RPC_AUTH_UNIX))) {
+ dprintk("NFSD: couldn't create callback client\n");
+ goto out_xprt;
+ }
+ clnt->cl_intr = 1;
+ clnt->cl_softrtry = 1;
+ clnt->cl_chatty = 1;
+ cb->cb_client = clnt;
+
+ /* Kick rpciod, put the call on the wire. */
+
+ if (rpciod_up() != 0) {
+ dprintk("nfsd: couldn't start rpciod for callbacks!\n");
+ goto out_clnt;
+ }
+
+ /* the task holds a reference to the nfs4_client struct */
+ atomic_inc(&clp->cl_count);
+
+ msg.rpc_cred = nfsd4_lookupcred(clp,0);
+ status = rpc_call_async(clnt, &msg, RPC_TASK_ASYNC, nfs4_cb_null, NULL);
+
+ if (status != 0) {
+ dprintk("NFSD: asynchronous NFSPROC4_CB_NULL failed!\n");
+ goto out_rpciod;
+ }
+ return;
+
+out_rpciod:
+ rpciod_down();
+out_clnt:
+ rpc_shutdown_client(clnt);
+ goto out_err;
+out_xprt:
+ xprt_destroy(xprt);
+out_err:
+ dprintk("NFSD: warning: no callback path to client %.*s\n",
+ (int)clp->cl_name.len, clp->cl_name.data);
+ cb->cb_client = NULL;
+}
+
+static void
+nfs4_cb_null(struct rpc_task *task)
+{
+ struct nfs4_client *clp = (struct nfs4_client *)task->tk_msg.rpc_argp;
+ struct nfs4_callback *cb = &clp->cl_callback;
+ u32 addr = htonl(cb->cb_addr);
+
+ dprintk("NFSD: nfs4_cb_null task->tk_status %d\n", task->tk_status);
+
+ if (task->tk_status < 0) {
+ dprintk("NFSD: callback establishment to client %.*s failed\n",
+ (int)clp->cl_name.len, clp->cl_name.data);
+ goto out;
+ }
+ atomic_set(&cb->cb_set, 1);
+ dprintk("NFSD: callback set to client %u.%u.%u.%u\n", NIPQUAD(addr));
+out:
+ put_nfs4_client(clp);
+}
+
+/*
+ * Called with dp->dl_count incremented
+ */
+static void
+nfs4_cb_recall_done(struct rpc_task *task)
+{
+ struct nfs4_cb_recall *cbr = (struct nfs4_cb_recall *)task->tk_calldata;
+ struct nfs4_delegation *dp = cbr->cbr_dp;
+ int status;
+
+ /* all is well... */
+ if (task->tk_status == 0)
+ goto out;
+
+ /* network partition, retry nfsd4_cb_recall once. */
+ if (task->tk_status == -EIO) {
+ if (atomic_read(&dp->dl_recall_cnt) == 0)
+ goto retry;
+ else
+ /* callback channel no longer available */
+ atomic_set(&dp->dl_client->cl_callback.cb_set, 0);
+ }
+
+	/* Race: a recall occurred milliseconds after a delegation was granted.
+	 * The client may have received the recall prior to the delegation.
+	 * Retry the recall once.
+	 */
+ if ((task->tk_status == -EBADHANDLE) || (task->tk_status == -NFS4ERR_BAD_STATEID)){
+ if (atomic_read(&dp->dl_recall_cnt) == 0)
+ goto retry;
+ }
+ atomic_set(&dp->dl_state, NFS4_RECALL_COMPLETE);
+
+out:
+ if (atomic_dec_and_test(&dp->dl_count))
+ atomic_set(&dp->dl_state, NFS4_REAP_DELEG);
+ BUG_ON(atomic_read(&dp->dl_count) < 0);
+	dprintk("NFSD: nfs4_cb_recall_done: dp %p dl_flock %p dl_count %d\n",
+		dp, dp->dl_flock, atomic_read(&dp->dl_count));
+ return;
+
+retry:
+ atomic_inc(&dp->dl_recall_cnt);
+ /* sleep 2 seconds before retrying recall */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(2*HZ);
+ status = nfsd4_cb_recall(dp);
+	dprintk("NFSD: nfs4_cb_recall_done: retry status: %d dp %p dl_flock %p\n",
+		status, dp, dp->dl_flock);
+}
+
+/*
+ * Called with dp->dl_count incremented.
+ * nfs4_lock_state() may or may not have been called.
+ */
+int
+nfsd4_cb_recall(struct nfs4_delegation *dp)
+{
+ struct nfs4_client *clp;
+ struct rpc_clnt *clnt;
+ struct rpc_message msg = {
+ .rpc_proc = &nfs4_cb_procedures[NFSPROC4_CLNT_CB_RECALL],
+ };
+ struct nfs4_cb_recall *cbr = &dp->dl_recall;
+ int status;
+
+	dprintk("NFSD: nfsd4_cb_recall NFS4_enc_cb_recall_sz %d NFS4_dec_cb_recall_sz %d\n",
+		NFS4_enc_cb_recall_sz, NFS4_dec_cb_recall_sz);
+
+ clp = dp->dl_client;
+ clnt = clp->cl_callback.cb_client;
+ status = EIO;
+ if ((!atomic_read(&clp->cl_callback.cb_set)) || !clnt)
+ goto out_free;
+
+ msg.rpc_argp = cbr;
+ msg.rpc_resp = cbr;
+ msg.rpc_cred = nfsd4_lookupcred(clp,0);
+
+ cbr->cbr_trunc = 0; /* XXX need to implement truncate optimization */
+ cbr->cbr_dp = dp;
+
+ if ((status = rpc_call_async(clnt, &msg, RPC_TASK_SOFT,
+ nfs4_cb_recall_done, cbr ))) {
+ dprintk("NFSD: recall_delegation: rpc_call_async failed %d\n",
+ status);
+ goto out_fail;
+ }
+out:
+ return status;
+out_fail:
+ status = nfserrno(status);
+ out_free:
+ kfree(cbr);
+ goto out;
+}
* A cache entry is "single use" if c_state == RC_INPROG
 * Otherwise, when accessing _prev or _next, the lock must be held.
*/
-static spinlock_t cache_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(cache_lock);
void
nfsd_cache_init(void)
static struct svc_serv *nfsd_serv;
static atomic_t nfsd_busy;
static unsigned long nfsd_last_call;
-static spinlock_t nfsd_call_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nfsd_call_lock);
struct nfsd_list {
struct list_head list;
#include <linux/nls.h>
#include <linux/errno.h>
-static wchar_t charset2uni[128] = {
+static wchar_t charset2uni[256] = {
/* 0x00*/
0x0000, 0x0001, 0x0002, 0x0003,
0x0004, 0x0005, 0x0006, 0x0007,
0x007c, 0x007d, 0x007e, 0x007f,
};
-static unsigned char page00[128] = {
+static unsigned char page00[256] = {
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
};
-static unsigned char *page_uni2charset[128] = {
- page00, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+static unsigned char *page_uni2charset[256] = {
+ page00,
};
-static unsigned char charset2lower[128] = {
+static unsigned char charset2lower[256] = {
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
};
-static unsigned char charset2upper[128] = {
+static unsigned char charset2upper[256] = {
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
static struct nls_table default_table;
static struct nls_table *tables = &default_table;
-static spinlock_t nls_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nls_lock);
/*
* Sample implementation from Unicode home page.
0x201D,0xFF08,0xFF09,0x3014,0x3015,0xFF3B,0xFF3D,0xFF5B,/* 0x68-0x6F */
0xFF5D,0x3008,0x3009,0x300A,0x300B,0x300C,0x300D,0x300E,/* 0x70-0x77 */
0x300F,0x3010,0x3011,0xFF0B,0xFF0D,0x00B1,0x00D7,0x0000,/* 0x78-0x7F */
-
+
0x00F7,0xFF1D,0x2260,0xFF1C,0xFF1E,0x2266,0x2267,0x221E,/* 0x80-0x87 */
0x2234,0x2642,0x2640,0x00B0,0x2032,0x2033,0x2103,0xFFE5,/* 0x88-0x8F */
0xFF04,0xFFE0,0xFFE1,0xFF05,0xFF03,0xFF06,0xFF0A,0xFF20,/* 0x90-0x97 */
0xFF29,0xFF2A,0xFF2B,0xFF2C,0xFF2D,0xFF2E,0xFF2F,0xFF30,/* 0x68-0x6F */
0xFF31,0xFF32,0xFF33,0xFF34,0xFF35,0xFF36,0xFF37,0xFF38,/* 0x70-0x77 */
0xFF39,0xFF3A,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xFF41,0xFF42,0xFF43,0xFF44,0xFF45,0xFF46,0xFF47,/* 0x80-0x87 */
0xFF48,0xFF49,0xFF4A,0xFF4B,0xFF4C,0xFF4D,0xFF4E,0xFF4F,/* 0x88-0x8F */
0xFF50,0xFF51,0xFF52,0xFF53,0xFF54,0xFF55,0xFF56,0xFF57,/* 0x90-0x97 */
0x30C9,0x30CA,0x30CB,0x30CC,0x30CD,0x30CE,0x30CF,0x30D0,/* 0x68-0x6F */
0x30D1,0x30D2,0x30D3,0x30D4,0x30D5,0x30D6,0x30D7,0x30D8,/* 0x70-0x77 */
0x30D9,0x30DA,0x30DB,0x30DC,0x30DD,0x30DE,0x30DF,0x0000,/* 0x78-0x7F */
-
+
0x30E0,0x30E1,0x30E2,0x30E3,0x30E4,0x30E5,0x30E6,0x30E7,/* 0x80-0x87 */
0x30E8,0x30E9,0x30EA,0x30EB,0x30EC,0x30ED,0x30EE,0x30EF,/* 0x88-0x8F */
0x30F0,0x30F1,0x30F2,0x30F3,0x30F4,0x30F5,0x30F6,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0430,0x0431,0x0432,0x0433,0x0434,0x0435,0x0451,0x0436,/* 0x70-0x77 */
0x0437,0x0438,0x0439,0x043A,0x043B,0x043C,0x043D,0x0000,/* 0x78-0x7F */
-
+
0x043E,0x043F,0x0440,0x0441,0x0442,0x0443,0x0444,0x0445,/* 0x80-0x87 */
0x0446,0x0447,0x0448,0x0449,0x044A,0x044B,0x044C,0x044D,/* 0x88-0x8F */
0x044E,0x044F,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x3357,0x330D,0x3326,0x3323,0x332B,0x334A,0x333B,0x339C,/* 0x68-0x6F */
0x339D,0x339E,0x338E,0x338F,0x33C4,0x33A1,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x337B,0x0000,/* 0x78-0x7F */
-
+
0x301D,0x301F,0x2116,0x33CD,0x2121,0x32A4,0x32A5,0x32A6,/* 0x80-0x87 */
0x32A7,0x32A8,0x3231,0x3232,0x3239,0x337E,0x337D,0x337C,/* 0x88-0x8F */
0x2252,0x2261,0x222B,0x222E,0x2211,0x221A,0x22A5,0x2220,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6804,0x6C38,0x6CF3,0x6D29,0x745B,0x76C8,0x7A4E,0x9834,/* 0x68-0x6F */
0x82F1,0x885B,0x8A60,0x92ED,0x6DB2,0x75AB,0x76CA,0x99C5,/* 0x70-0x77 */
0x60A6,0x8B01,0x8D8A,0x95B2,0x698E,0x53AD,0x5186,0x0000,/* 0x78-0x7F */
-
+
0x5712,0x5830,0x5944,0x5BB4,0x5EF6,0x6028,0x63A9,0x63F4,/* 0x80-0x87 */
0x6CBF,0x6F14,0x708E,0x7114,0x7159,0x71D5,0x733F,0x7E01,/* 0x88-0x8F */
0x8276,0x82D1,0x8597,0x9060,0x925B,0x9D1B,0x5869,0x65BC,/* 0x90-0x97 */
0x64B9,0x683C,0x6838,0x6BBB,0x7372,0x78BA,0x7A6B,0x899A,/* 0x68-0x6F */
0x89D2,0x8D6B,0x8F03,0x90ED,0x95A3,0x9694,0x9769,0x5B66,/* 0x70-0x77 */
0x5CB3,0x697D,0x984D,0x984E,0x639B,0x7B20,0x6A2B,0x0000,/* 0x78-0x7F */
-
+
0x6A7F,0x68B6,0x9C0D,0x6F5F,0x5272,0x559D,0x6070,0x62EC,/* 0x80-0x87 */
0x6D3B,0x6E07,0x6ED1,0x845B,0x8910,0x8F44,0x4E14,0x9C39,/* 0x88-0x8F */
0x53F6,0x691B,0x6A3A,0x9784,0x682A,0x515C,0x7AC3,0x84B2,/* 0x90-0x97 */
0x5403,0x55AB,0x6854,0x6A58,0x8A70,0x7827,0x6775,0x9ECD,/* 0x68-0x6F */
0x5374,0x5BA2,0x811A,0x8650,0x9006,0x4E18,0x4E45,0x4EC7,/* 0x70-0x77 */
0x4F11,0x53CA,0x5438,0x5BAE,0x5F13,0x6025,0x6551,0x0000,/* 0x78-0x7F */
-
+
0x673D,0x6C42,0x6C72,0x6CE3,0x7078,0x7403,0x7A76,0x7AAE,/* 0x80-0x87 */
0x7B08,0x7D1A,0x7CFE,0x7D66,0x65E7,0x725B,0x53BB,0x5C45,/* 0x88-0x8F */
0x5DE8,0x62D2,0x62E0,0x6319,0x6E20,0x865A,0x8A31,0x8DDD,/* 0x90-0x97 */
0x656C,0x666F,0x6842,0x6E13,0x7566,0x7A3D,0x7CFB,0x7D4C,/* 0x68-0x6F */
0x7D99,0x7E4B,0x7F6B,0x830E,0x834A,0x86CD,0x8A08,0x8A63,/* 0x70-0x77 */
0x8B66,0x8EFD,0x981A,0x9D8F,0x82B8,0x8FCE,0x9BE8,0x0000,/* 0x78-0x7F */
-
+
0x5287,0x621F,0x6483,0x6FC0,0x9699,0x6841,0x5091,0x6B20,/* 0x80-0x87 */
0x6C7A,0x6F54,0x7A74,0x7D50,0x8840,0x8A23,0x6708,0x4EF6,/* 0x88-0x8F */
0x5039,0x5026,0x5065,0x517C,0x5238,0x5263,0x55A7,0x570F,/* 0x90-0x97 */
0x7D18,0x7D5E,0x7DB1,0x8015,0x8003,0x80AF,0x80B1,0x8154,/* 0x68-0x6F */
0x818F,0x822A,0x8352,0x884C,0x8861,0x8B1B,0x8CA2,0x8CFC,/* 0x70-0x77 */
0x90CA,0x9175,0x9271,0x783F,0x92FC,0x95A4,0x964D,0x0000,/* 0x78-0x7F */
-
+
0x9805,0x9999,0x9AD8,0x9D3B,0x525B,0x52AB,0x53F7,0x5408,/* 0x80-0x87 */
0x58D5,0x62F7,0x6FE0,0x8C6A,0x8F5F,0x9EB9,0x514B,0x523B,/* 0x88-0x8F */
0x544A,0x56FD,0x7A40,0x9177,0x9D60,0x9ED2,0x7344,0x6F09,/* 0x90-0x97 */
0x523A,0x53F8,0x53F2,0x55E3,0x56DB,0x58EB,0x59CB,0x59C9,/* 0x68-0x6F */
0x59FF,0x5B50,0x5C4D,0x5E02,0x5E2B,0x5FD7,0x601D,0x6307,/* 0x70-0x77 */
0x652F,0x5B5C,0x65AF,0x65BD,0x65E8,0x679D,0x6B62,0x0000,/* 0x78-0x7F */
-
+
0x6B7B,0x6C0F,0x7345,0x7949,0x79C1,0x7CF8,0x7D19,0x7D2B,/* 0x80-0x87 */
0x80A2,0x8102,0x81F3,0x8996,0x8A5E,0x8A69,0x8A66,0x8A8C,/* 0x88-0x8F */
0x8AEE,0x8CC7,0x8CDC,0x96CC,0x98FC,0x6B6F,0x4E8B,0x4F3C,/* 0x90-0x97 */
0x5BBF,0x6DD1,0x795D,0x7E2E,0x7C9B,0x587E,0x719F,0x51FA,/* 0x68-0x6F */
0x8853,0x8FF0,0x4FCA,0x5CFB,0x6625,0x77AC,0x7AE3,0x821C,/* 0x70-0x77 */
0x99FF,0x51C6,0x5FAA,0x65EC,0x696F,0x6B89,0x6DF3,0x0000,/* 0x78-0x7F */
-
+
0x6E96,0x6F64,0x76FE,0x7D14,0x5DE1,0x9075,0x9187,0x9806,/* 0x80-0x87 */
0x51E6,0x521D,0x6240,0x6691,0x66D9,0x6E1A,0x5EB6,0x7DD2,/* 0x88-0x8F */
0x7F72,0x66F8,0x85AF,0x85F7,0x8AF8,0x52A9,0x53D9,0x5973,/* 0x90-0x97 */
0x8F9B,0x9032,0x91DD,0x9707,0x4EBA,0x4EC1,0x5203,0x5875,/* 0x68-0x6F */
0x58EC,0x5C0B,0x751A,0x5C3D,0x814E,0x8A0A,0x8FC5,0x9663,/* 0x70-0x77 */
0x976D,0x7B25,0x8ACF,0x9808,0x9162,0x56F3,0x53A8,0x0000,/* 0x78-0x7F */
-
+
0x9017,0x5439,0x5782,0x5E25,0x63A8,0x6C34,0x708A,0x7761,/* 0x80-0x87 */
0x7C8B,0x7FE0,0x8870,0x9042,0x9154,0x9310,0x9318,0x968F,/* 0x88-0x8F */
0x745E,0x9AC4,0x5D07,0x5D69,0x6570,0x67A2,0x8DA8,0x96DB,/* 0x90-0x97 */
0x8607,0x8A34,0x963B,0x9061,0x9F20,0x50E7,0x5275,0x53CC,/* 0x68-0x6F */
0x53E2,0x5009,0x55AA,0x58EE,0x594F,0x723D,0x5B8B,0x5C64,/* 0x70-0x77 */
0x531D,0x60E3,0x60F3,0x635C,0x6383,0x633F,0x63BB,0x0000,/* 0x78-0x7F */
-
+
0x64CD,0x65E9,0x66F9,0x5DE3,0x69CD,0x69FD,0x6F15,0x71E5,/* 0x80-0x87 */
0x4E89,0x75E9,0x76F8,0x7A93,0x7CDF,0x7DCF,0x7D9C,0x8061,/* 0x88-0x8F */
0x8349,0x8358,0x846C,0x84BC,0x85FB,0x88C5,0x8D70,0x9001,/* 0x90-0x97 */
0x6A80,0x6BB5,0x7537,0x8AC7,0x5024,0x77E5,0x5730,0x5F1B,/* 0x68-0x6F */
0x6065,0x667A,0x6C60,0x75F4,0x7A1A,0x7F6E,0x81F4,0x8718,/* 0x70-0x77 */
0x9045,0x99B3,0x7BC9,0x755C,0x7AF9,0x7B51,0x84C4,0x0000,/* 0x78-0x7F */
-
+
0x9010,0x79E9,0x7A92,0x8336,0x5AE1,0x7740,0x4E2D,0x4EF2,/* 0x80-0x87 */
0x5B99,0x5FE0,0x62BD,0x663C,0x67F1,0x6CE8,0x866B,0x8877,/* 0x88-0x8F */
0x8A3B,0x914E,0x92F3,0x99D0,0x6A17,0x7026,0x732A,0x82E7,/* 0x90-0x97 */
0x5857,0x59AC,0x5C60,0x5F92,0x6597,0x675C,0x6E21,0x767B,/* 0x68-0x6F */
0x83DF,0x8CED,0x9014,0x90FD,0x934D,0x7825,0x783A,0x52AA,/* 0x70-0x77 */
0x5EA6,0x571F,0x5974,0x6012,0x5012,0x515A,0x51AC,0x0000,/* 0x78-0x7F */
-
+
0x51CD,0x5200,0x5510,0x5854,0x5858,0x5957,0x5B95,0x5CF6,/* 0x80-0x87 */
0x5D8B,0x60BC,0x6295,0x642D,0x6771,0x6843,0x68BC,0x68DF,/* 0x88-0x8F */
0x76D7,0x6DD8,0x6E6F,0x6D9B,0x706F,0x71C8,0x5F53,0x75D8,/* 0x90-0x97 */
0x6D3E,0x7436,0x7834,0x5A46,0x7F75,0x82AD,0x99AC,0x4FF3,/* 0x68-0x6F */
0x5EC3,0x62DD,0x6392,0x6557,0x676F,0x76C3,0x724C,0x80CC,/* 0x70-0x77 */
0x80BA,0x8F29,0x914D,0x500D,0x57F9,0x5A92,0x6885,0x0000,/* 0x78-0x7F */
-
+
0x6973,0x7164,0x72FD,0x8CB7,0x58F2,0x8CE0,0x966A,0x9019,/* 0x80-0x87 */
0x877F,0x79E4,0x77E7,0x8429,0x4F2F,0x5265,0x535A,0x62CD,/* 0x88-0x8F */
0x67CF,0x6CCA,0x767D,0x7B94,0x7C95,0x8236,0x8584,0x8FEB,/* 0x90-0x97 */
0x9C2D,0x54C1,0x5F6C,0x658C,0x6D5C,0x7015,0x8CA7,0x8CD3,/* 0x68-0x6F */
0x983B,0x654F,0x74F6,0x4E0D,0x4ED8,0x57E0,0x592B,0x5A66,/* 0x70-0x77 */
0x5BCC,0x51A8,0x5E03,0x5E9C,0x6016,0x6276,0x6577,0x0000,/* 0x78-0x7F */
-
+
0x65A7,0x666E,0x6D6E,0x7236,0x7B26,0x8150,0x819A,0x8299,/* 0x80-0x87 */
0x8B5C,0x8CA0,0x8CE6,0x8D74,0x961C,0x9644,0x4FAE,0x64AB,/* 0x88-0x8F */
0x6B66,0x821E,0x8461,0x856A,0x90E8,0x5C01,0x6953,0x98A8,/* 0x90-0x97 */
0x9632,0x5420,0x982C,0x5317,0x50D5,0x535C,0x58A8,0x64B2,/* 0x68-0x6F */
0x6734,0x7267,0x7766,0x7A46,0x91E6,0x52C3,0x6CA1,0x6B86,/* 0x70-0x77 */
0x5800,0x5E4C,0x5954,0x672C,0x7FFB,0x51E1,0x76C6,0x0000,/* 0x78-0x7F */
-
+
0x6469,0x78E8,0x9B54,0x9EBB,0x57CB,0x59B9,0x6627,0x679A,/* 0x80-0x87 */
0x6BCE,0x54E9,0x69D9,0x5E55,0x819C,0x6795,0x9BAA,0x67FE,/* 0x88-0x8F */
0x9C52,0x685D,0x4EA6,0x4FE3,0x53C8,0x62B9,0x672B,0x6CAB,/* 0x90-0x97 */
0x63FA,0x64C1,0x66DC,0x694A,0x69D8,0x6D0B,0x6EB6,0x7194,/* 0x68-0x6F */
0x7528,0x7AAF,0x7F8A,0x8000,0x8449,0x84C9,0x8981,0x8B21,/* 0x70-0x77 */
0x8E0A,0x9065,0x967D,0x990A,0x617E,0x6291,0x6B32,0x0000,/* 0x78-0x7F */
-
+
0x6C83,0x6D74,0x7FCC,0x7FFC,0x6DC0,0x7F85,0x87BA,0x88F8,/* 0x80-0x87 */
0x6765,0x83B1,0x983C,0x96F7,0x6D1B,0x7D61,0x843D,0x916A,/* 0x88-0x8F */
0x4E71,0x5375,0x5D50,0x6B04,0x6FEB,0x85CD,0x862D,0x89A7,/* 0x90-0x97 */
0x9DF2,0x4E99,0x4E98,0x9C10,0x8A6B,0x85C1,0x8568,0x6900,/* 0x68-0x6F */
0x6E7E,0x7897,0x8155,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5191,0x5193,0x5195,0x5196,0x51A4,0x51A6,0x51A2,0x51A9,/* 0x68-0x6F */
0x51AA,0x51AB,0x51B3,0x51B1,0x51B2,0x51B0,0x51B5,0x51BD,/* 0x70-0x77 */
0x51C5,0x51C9,0x51DB,0x51E0,0x8655,0x51E9,0x51ED,0x0000,/* 0x78-0x7F */
-
+
0x51F0,0x51F5,0x51FE,0x5204,0x520B,0x5214,0x520E,0x5227,/* 0x80-0x87 */
0x522A,0x522E,0x5233,0x5239,0x524F,0x5244,0x524B,0x524C,/* 0x88-0x8F */
0x525E,0x5254,0x526A,0x5274,0x5269,0x5273,0x527F,0x527D,/* 0x90-0x97 */
0x5587,0x55A8,0x55DA,0x55C5,0x55DF,0x55C4,0x55DC,0x55E4,/* 0x68-0x6F */
0x55D4,0x5614,0x55F7,0x5616,0x55FE,0x55FD,0x561B,0x55F9,/* 0x70-0x77 */
0x564E,0x5650,0x71DF,0x5634,0x5636,0x5632,0x5638,0x0000,/* 0x78-0x7F */
-
+
0x566B,0x5664,0x562F,0x566C,0x566A,0x5686,0x5680,0x568A,/* 0x80-0x87 */
0x56A0,0x5694,0x568F,0x56A5,0x56AE,0x56B6,0x56B4,0x56C2,/* 0x88-0x8F */
0x56BC,0x56C1,0x56C3,0x56C0,0x56C8,0x56CE,0x56D1,0x56D3,/* 0x90-0x97 */
0x5B0B,0x5B16,0x5B32,0x5AD0,0x5B2A,0x5B36,0x5B3E,0x5B43,/* 0x68-0x6F */
0x5B45,0x5B40,0x5B51,0x5B55,0x5B5A,0x5B5B,0x5B65,0x5B69,/* 0x70-0x77 */
0x5B70,0x5B73,0x5B75,0x5B78,0x6588,0x5B7A,0x5B80,0x0000,/* 0x78-0x7F */
-
+
0x5B83,0x5BA6,0x5BB8,0x5BC3,0x5BC7,0x5BC9,0x5BD4,0x5BD0,/* 0x80-0x87 */
0x5BE4,0x5BE6,0x5BE2,0x5BDE,0x5BE5,0x5BEB,0x5BF0,0x5BF6,/* 0x88-0x8F */
0x5BF3,0x5C05,0x5C07,0x5C08,0x5C0D,0x5C13,0x5C20,0x5C22,/* 0x90-0x97 */
0x5F82,0x5F7F,0x5F8A,0x5F88,0x5F91,0x5F87,0x5F9E,0x5F99,/* 0x68-0x6F */
0x5F98,0x5FA0,0x5FA8,0x5FAD,0x5FBC,0x5FD6,0x5FFB,0x5FE4,/* 0x70-0x77 */
0x5FF8,0x5FF1,0x5FDD,0x60B3,0x5FFF,0x6021,0x6060,0x0000,/* 0x78-0x7F */
-
+
0x6019,0x6010,0x6029,0x600E,0x6031,0x601B,0x6015,0x602B,/* 0x80-0x87 */
0x6026,0x600F,0x603A,0x605A,0x6041,0x606A,0x6077,0x605F,/* 0x88-0x8F */
0x604A,0x6046,0x604D,0x6063,0x6043,0x6064,0x6042,0x606C,/* 0x90-0x97 */
0x62EE,0x62F1,0x6327,0x6302,0x6308,0x62EF,0x62F5,0x6350,/* 0x68-0x6F */
0x633E,0x634D,0x641C,0x634F,0x6396,0x638E,0x6380,0x63AB,/* 0x70-0x77 */
0x6376,0x63A3,0x638F,0x6389,0x639F,0x63B5,0x636B,0x0000,/* 0x78-0x7F */
-
+
0x6369,0x63BE,0x63E9,0x63C0,0x63C6,0x63E3,0x63C9,0x63D2,/* 0x80-0x87 */
0x63F6,0x63C4,0x6416,0x6434,0x6406,0x6413,0x6426,0x6436,/* 0x88-0x8F */
0x651D,0x6417,0x6428,0x640F,0x6467,0x646F,0x6476,0x644E,/* 0x90-0x97 */
0x67EF,0x67B4,0x67EC,0x67B3,0x67E9,0x67B8,0x67E4,0x67DE,/* 0x68-0x6F */
0x67DD,0x67E2,0x67EE,0x67B9,0x67CE,0x67C6,0x67E7,0x6A9C,/* 0x70-0x77 */
0x681E,0x6846,0x6829,0x6840,0x684D,0x6832,0x684E,0x0000,/* 0x78-0x7F */
-
+
0x68B3,0x682B,0x6859,0x6863,0x6877,0x687F,0x689F,0x688F,/* 0x80-0x87 */
0x68AD,0x6894,0x689D,0x689B,0x6883,0x6AAE,0x68B9,0x6874,/* 0x88-0x8F */
0x68B5,0x68A0,0x68BA,0x690F,0x688D,0x687E,0x6901,0x68CA,/* 0x90-0x97 */
0x6B84,0x6B83,0x6B8D,0x6B98,0x6B95,0x6B9E,0x6BA4,0x6BAA,/* 0x68-0x6F */
0x6BAB,0x6BAF,0x6BB2,0x6BB1,0x6BB3,0x6BB7,0x6BBC,0x6BC6,/* 0x70-0x77 */
0x6BCB,0x6BD3,0x6BDF,0x6BEC,0x6BEB,0x6BF3,0x6BEF,0x0000,/* 0x78-0x7F */
-
+
0x9EBE,0x6C08,0x6C13,0x6C14,0x6C1B,0x6C24,0x6C23,0x6C5E,/* 0x80-0x87 */
0x6C55,0x6C62,0x6C6A,0x6C82,0x6C8D,0x6C9A,0x6C81,0x6C9B,/* 0x88-0x8F */
0x6C7E,0x6C68,0x6C73,0x6C92,0x6C90,0x6CC4,0x6CF1,0x6CD3,/* 0x90-0x97 */
0x6FFE,0x701B,0x701A,0x6F74,0x701D,0x7018,0x701F,0x7030,/* 0x68-0x6F */
0x703E,0x7032,0x7051,0x7063,0x7099,0x7092,0x70AF,0x70F1,/* 0x70-0x77 */
0x70AC,0x70B8,0x70B3,0x70AE,0x70DF,0x70CB,0x70DD,0x0000,/* 0x78-0x7F */
-
+
0x70D9,0x7109,0x70FD,0x711C,0x7119,0x7165,0x7155,0x7188,/* 0x80-0x87 */
0x7166,0x7162,0x714C,0x7156,0x716C,0x718F,0x71FB,0x7184,/* 0x88-0x8F */
0x7195,0x71A8,0x71AC,0x71D7,0x71B9,0x71BE,0x71D2,0x71C9,/* 0x90-0x97 */
0x7589,0x7582,0x7594,0x759A,0x759D,0x75A5,0x75A3,0x75C2,/* 0x68-0x6F */
0x75B3,0x75C3,0x75B5,0x75BD,0x75B8,0x75BC,0x75B1,0x75CD,/* 0x70-0x77 */
0x75CA,0x75D2,0x75D9,0x75E3,0x75DE,0x75FE,0x75FF,0x0000,/* 0x78-0x7F */
-
+
0x75FC,0x7601,0x75F0,0x75FA,0x75F2,0x75F3,0x760B,0x760D,/* 0x80-0x87 */
0x7609,0x761F,0x7627,0x7620,0x7621,0x7622,0x7624,0x7634,/* 0x88-0x8F */
0x7630,0x763B,0x7647,0x7648,0x7646,0x765C,0x7658,0x7661,/* 0x90-0x97 */
0x7980,0x7A31,0x7A3B,0x7A3E,0x7A37,0x7A43,0x7A57,0x7A49,/* 0x68-0x6F */
0x7A61,0x7A62,0x7A69,0x9F9D,0x7A70,0x7A79,0x7A7D,0x7A88,/* 0x70-0x77 */
0x7A97,0x7A95,0x7A98,0x7A96,0x7AA9,0x7AC8,0x7AB0,0x0000,/* 0x78-0x7F */
-
+
0x7AB6,0x7AC5,0x7AC4,0x7ABF,0x9083,0x7AC7,0x7ACA,0x7ACD,/* 0x80-0x87 */
0x7ACF,0x7AD5,0x7AD3,0x7AD9,0x7ADA,0x7ADD,0x7AE1,0x7AE2,/* 0x88-0x8F */
0x7AE6,0x7AED,0x7AF0,0x7B02,0x7B0F,0x7B0A,0x7B06,0x7B33,/* 0x90-0x97 */
0x7DDD,0x7DE4,0x7DDE,0x7DFB,0x7DF2,0x7DE1,0x7E05,0x7E0A,/* 0x68-0x6F */
0x7E23,0x7E21,0x7E12,0x7E31,0x7E1F,0x7E09,0x7E0B,0x7E22,/* 0x70-0x77 */
0x7E46,0x7E66,0x7E3B,0x7E35,0x7E39,0x7E43,0x7E37,0x0000,/* 0x78-0x7F */
-
+
0x7E32,0x7E3A,0x7E67,0x7E5D,0x7E56,0x7E5E,0x7E59,0x7E5A,/* 0x80-0x87 */
0x7E79,0x7E6A,0x7E69,0x7E7C,0x7E7B,0x7E83,0x7DD5,0x7E7D,/* 0x88-0x8F */
0x8FAE,0x7E7F,0x7E88,0x7E89,0x7E8C,0x7E92,0x7E90,0x7E93,/* 0x90-0x97 */
0x81E7,0x81FA,0x81FB,0x81FE,0x8201,0x8202,0x8205,0x8207,/* 0x68-0x6F */
0x820A,0x820D,0x8210,0x8216,0x8229,0x822B,0x8238,0x8233,/* 0x70-0x77 */
0x8240,0x8259,0x8258,0x825D,0x825A,0x825F,0x8264,0x0000,/* 0x78-0x7F */
-
+
0x8262,0x8268,0x826A,0x826B,0x822E,0x8271,0x8277,0x8278,/* 0x80-0x87 */
0x827E,0x828D,0x8292,0x82AB,0x829F,0x82BB,0x82AC,0x82E1,/* 0x88-0x8F */
0x82E3,0x82DF,0x82D2,0x82F4,0x82F3,0x82FA,0x8393,0x8303,/* 0x90-0x97 */
0x4E55,0x8654,0x865F,0x8667,0x8671,0x8693,0x86A3,0x86A9,/* 0x68-0x6F */
0x86AA,0x868B,0x868C,0x86B6,0x86AF,0x86C4,0x86C6,0x86B0,/* 0x70-0x77 */
0x86C9,0x8823,0x86AB,0x86D4,0x86DE,0x86E9,0x86EC,0x0000,/* 0x78-0x7F */
-
+
0x86DF,0x86DB,0x86EF,0x8712,0x8706,0x8708,0x8700,0x8703,/* 0x80-0x87 */
0x86FB,0x8711,0x8709,0x870D,0x86F9,0x870A,0x8734,0x873F,/* 0x88-0x8F */
0x8737,0x873B,0x8725,0x8729,0x871A,0x8760,0x875F,0x8778,/* 0x90-0x97 */
0x8A46,0x8A48,0x8A7C,0x8A6D,0x8A6C,0x8A62,0x8A85,0x8A82,/* 0x68-0x6F */
0x8A84,0x8AA8,0x8AA1,0x8A91,0x8AA5,0x8AA6,0x8A9A,0x8AA3,/* 0x70-0x77 */
0x8AC4,0x8ACD,0x8AC2,0x8ADA,0x8AEB,0x8AF3,0x8AE7,0x0000,/* 0x78-0x7F */
-
+
0x8AE4,0x8AF1,0x8B14,0x8AE0,0x8AE2,0x8AF7,0x8ADE,0x8ADB,/* 0x80-0x87 */
0x8B0C,0x8B07,0x8B1A,0x8AE1,0x8B16,0x8B10,0x8B17,0x8B20,/* 0x88-0x8F */
0x8B33,0x97AB,0x8B26,0x8B2B,0x8B3E,0x8B28,0x8B41,0x8B4C,/* 0x90-0x97 */
0x8F0A,0x8F05,0x8F15,0x8F12,0x8F19,0x8F13,0x8F1C,0x8F1F,/* 0x68-0x6F */
0x8F1B,0x8F0C,0x8F26,0x8F33,0x8F3B,0x8F39,0x8F45,0x8F42,/* 0x70-0x77 */
0x8F3E,0x8F4C,0x8F49,0x8F46,0x8F4E,0x8F57,0x8F5C,0x0000,/* 0x78-0x7F */
-
+
0x8F62,0x8F63,0x8F64,0x8F9C,0x8F9F,0x8FA3,0x8FAD,0x8FAF,/* 0x80-0x87 */
0x8FB7,0x8FDA,0x8FE5,0x8FE2,0x8FEA,0x8FEF,0x9087,0x8FF4,/* 0x88-0x8F */
0x9005,0x8FF9,0x8FFA,0x9011,0x9015,0x9021,0x900D,0x901E,/* 0x90-0x97 */
0x9444,0x945B,0x9460,0x9462,0x945E,0x946A,0x9229,0x9470,/* 0x68-0x6F */
0x9475,0x9477,0x947D,0x945A,0x947C,0x947E,0x9481,0x947F,/* 0x70-0x77 */
0x9582,0x9587,0x958A,0x9594,0x9596,0x9598,0x9599,0x0000,/* 0x78-0x7F */
-
+
0x95A0,0x95A8,0x95A7,0x95AD,0x95BC,0x95BB,0x95B9,0x95BE,/* 0x80-0x87 */
0x95CA,0x6FF6,0x95C3,0x95CD,0x95CC,0x95D5,0x95D4,0x95D6,/* 0x88-0x8F */
0x95DC,0x95E1,0x95E5,0x95E2,0x9621,0x9628,0x962E,0x962F,/* 0x90-0x97 */
0x99BC,0x99DF,0x99DB,0x99DD,0x99D8,0x99D1,0x99ED,0x99EE,/* 0x68-0x6F */
0x99F1,0x99F2,0x99FB,0x99F8,0x9A01,0x9A0F,0x9A05,0x99E2,/* 0x70-0x77 */
0x9A19,0x9A2B,0x9A37,0x9A45,0x9A42,0x9A40,0x9A43,0x0000,/* 0x78-0x7F */
-
+
0x9A3E,0x9A55,0x9A4D,0x9A5B,0x9A57,0x9A5F,0x9A62,0x9A65,/* 0x80-0x87 */
0x9A64,0x9A69,0x9A6B,0x9A6A,0x9AAD,0x9AB0,0x9ABC,0x9AC0,/* 0x88-0x8F */
0x9ACF,0x9AD1,0x9AD3,0x9AD4,0x9ADE,0x9ADF,0x9AE2,0x9AE3,/* 0x90-0x97 */
0x9E8C,0x9E92,0x9E95,0x9E91,0x9E9D,0x9EA5,0x9EA9,0x9EB8,/* 0x68-0x6F */
0x9EAA,0x9EAD,0x9761,0x9ECC,0x9ECE,0x9ECF,0x9ED0,0x9ED4,/* 0x70-0x77 */
0x9EDC,0x9EDE,0x9EDD,0x9EE0,0x9EE5,0x9EE8,0x9EEF,0x0000,/* 0x78-0x7F */
-
+
0x9EF4,0x9EF6,0x9EF7,0x9EF9,0x9EFB,0x9EFC,0x9EFD,0x9F07,/* 0x80-0x87 */
0x9F08,0x76B7,0x9F15,0x9F21,0x9F2C,0x9F3E,0x9F4A,0x9F52,/* 0x88-0x8F */
0x9F54,0x9F63,0x9F5F,0x9F60,0x9F61,0x9F66,0x9F67,0x9F6C,/* 0x90-0x97 */
0x9F6A,0x9F77,0x9F72,0x9F76,0x9F95,0x9F9C,0x9FA0,0x582F,/* 0x98-0x9F */
0x69C7,0x9059,0x7464,0x51DC,0x7199,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA8-0xAF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xB0-0xB7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xB8-0xBF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC0-0xC7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC8-0xCF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD0-0xD7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD8-0xDF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE0-0xE7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE8-0xEF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF0-0xF7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_ED[256] = {
0x529C,0x52A6,0x52C0,0x52DB,0x5300,0x5307,0x5324,0x5372,/* 0x68-0x6F */
0x5393,0x53B2,0x53DD,0xFA0E,0x549C,0x548A,0x54A9,0x54FF,/* 0x70-0x77 */
0x5586,0x5759,0x5765,0x57AC,0x57C8,0x57C7,0xFA0F,0x0000,/* 0x78-0x7F */
-
+
0xFA10,0x589E,0x58B2,0x590B,0x5953,0x595B,0x595D,0x5963,/* 0x80-0x87 */
0x59A4,0x59BA,0x5B56,0x5BC0,0x752F,0x5BD8,0x5BEC,0x5C1E,/* 0x88-0x8F */
0x5CA6,0x5CBA,0x5CF5,0x5D27,0x5D53,0xFA11,0x5D42,0x5D6D,/* 0x90-0x97 */
0x7AE7,0xFA1C,0x7AEB,0x7B9E,0xFA1D,0x7D48,0x7D5C,0x7DB7,/* 0x68-0x6F */
0x7DA0,0x7DD6,0x7E52,0x7F47,0x7FA1,0xFA1E,0x8301,0x8362,/* 0x70-0x77 */
0x837F,0x83C7,0x83F6,0x8448,0x84B4,0x8553,0x8559,0x0000,/* 0x78-0x7F */
-
+
0x856B,0xFA1F,0x85B0,0xFA20,0xFA21,0x8807,0x88F5,0x8A12,/* 0x80-0x87 */
0x8A37,0x8A79,0x8AA7,0x8ABE,0x8ADF,0xFA22,0x8AF6,0x8B53,/* 0x88-0x8F */
0x8B7F,0x8CF0,0x8CF4,0x8D12,0x8D76,0xFA23,0x8ECF,0xFA24,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x2170,0x2171,0x2172,0x2173,0x2174,0x2175,0x2176,0x2177,/* 0x40-0x47 */
0x2178,0x2179,0x2160,0x2161,0x2162,0x2163,0x2164,0x2165,/* 0x48-0x4F */
- 0x2166,0x2167,0x2168,0x2169,0x0000,0xFFE4,0xFF07,0xFF02,/* 0x50-0x57 */
+ 0x2166,0x2167,0x2168,0x2169,0xFFE2,0xFFE4,0xFF07,0xFF02,/* 0x50-0x57 */
0x3231,0x2116,0x2121,0x2235,0x7E8A,0x891C,0x9348,0x9288,/* 0x58-0x5F */
0x84DC,0x4FC9,0x70BB,0x6631,0x68C8,0x92F9,0x66FB,0x5F45,/* 0x60-0x67 */
0x4E28,0x4EE1,0x4EFC,0x4F00,0x4F03,0x4F39,0x4F56,0x4F92,/* 0x68-0x6F */
0x4F8A,0x4F9A,0x4F94,0x4FCD,0x5040,0x5022,0x4FFF,0x501E,/* 0x70-0x77 */
0x5046,0x5070,0x5042,0x5094,0x50F4,0x50D8,0x514A,0x0000,/* 0x78-0x7F */
-
+
0x5164,0x519D,0x51BE,0x51EC,0x5215,0x529C,0x52A6,0x52C0,/* 0x80-0x87 */
0x52DB,0x5300,0x5307,0x5324,0x5372,0x5393,0x53B2,0x53DD,/* 0x88-0x8F */
0xFA0E,0x549C,0x548A,0x54A9,0x54FF,0x5586,0x5759,0x5765,/* 0x90-0x97 */
0x742A,0x7429,0x742E,0x7462,0x7489,0x749F,0x7501,0x756F,/* 0x68-0x6F */
0x7682,0x769C,0x769E,0x769B,0x76A6,0xFA17,0x7746,0x52AF,/* 0x70-0x77 */
0x7821,0x784E,0x7864,0x787A,0x7930,0xFA18,0xFA19,0x0000,/* 0x78-0x7F */
-
+
0xFA1A,0x7994,0xFA1B,0x799B,0x7AD1,0x7AE7,0xFA1C,0x7AEB,/* 0x80-0x87 */
0x7B9E,0xFA1D,0x7D48,0x7D5C,0x7DB7,0x7DA0,0x7DD6,0x7E52,/* 0x88-0x8F */
0x7F47,0x7FA1,0xFA1E,0x8301,0x8362,0x837F,0x83C7,0x83F6,/* 0x90-0x97 */
0x4E63,0x4E64,0x4E65,0x4E67,0x4E68,0x4E6A,0x4E6B,0x4E6C,/* 0x60-0x67 */
0x4E6D,0x4E6E,0x4E6F,0x4E72,0x4E74,0x4E75,0x4E76,0x4E77,/* 0x68-0x6F */
0x4E78,0x4E79,0x4E7A,0x4E7B,0x4E7C,0x4E7D,0x4E7F,0x4E80,/* 0x70-0x77 */
- 0x4E81,0xF91B,0x4E83,0x4E84,0x4E85,0x4E87,0x4E8A,0x0000,/* 0x78-0x7F */
-
+ 0x4E81,0x4E82,0x4E83,0x4E84,0x4E85,0x4E87,0x4E8A,0x0000,/* 0x78-0x7F */
+
0x4E90,0x4E96,0x4E97,0x4E99,0x4E9C,0x4E9D,0x4E9E,0x4EA3,/* 0x80-0x87 */
0x4EAA,0x4EAF,0x4EB0,0x4EB1,0x4EB4,0x4EB6,0x4EB7,0x4EB8,/* 0x88-0x8F */
0x4EB9,0x4EBC,0x4EBD,0x4EBE,0x4EC8,0x4ECC,0x4ECF,0x4ED0,/* 0x90-0x97 */
0x4F47,0x4F48,0x4F49,0x4F4A,0x4F4B,0x4F4C,0x4F52,0x4F54,/* 0xD0-0xD7 */
0x4F56,0x4F61,0x4F62,0x4F66,0x4F68,0x4F6A,0x4F6B,0x4F6D,/* 0xD8-0xDF */
0x4F6E,0x4F71,0x4F72,0x4F75,0x4F77,0x4F78,0x4F79,0x4F7A,/* 0xE0-0xE7 */
- 0x4F7D,0x4F80,0x4F81,0x4F82,0x4F85,0xF92D,0x4F87,0x4F8A,/* 0xE8-0xEF */
+ 0x4F7D,0x4F80,0x4F81,0x4F82,0x4F85,0x4F86,0x4F87,0x4F8A,/* 0xE8-0xEF */
0x4F8C,0x4F8E,0x4F90,0x4F92,0x4F93,0x4F95,0x4F96,0x4F98,/* 0xF0-0xF7 */
0x4F99,0x4F9A,0x4F9C,0x4F9E,0x4F9F,0x4FA1,0x4FA2,0x0000,/* 0xF8-0xFF */
};
0x4FEC,0x4FF0,0x4FF2,0x4FF4,0x4FF5,0x4FF6,0x4FF7,0x4FF9,/* 0x68-0x6F */
0x4FFB,0x4FFC,0x4FFD,0x4FFF,0x5000,0x5001,0x5002,0x5003,/* 0x70-0x77 */
0x5004,0x5005,0x5006,0x5007,0x5008,0x5009,0x500A,0x0000,/* 0x78-0x7F */
-
+
0x500B,0x500E,0x5010,0x5011,0x5013,0x5015,0x5016,0x5017,/* 0x80-0x87 */
0x501B,0x501D,0x501E,0x5020,0x5022,0x5023,0x5024,0x5027,/* 0x88-0x8F */
- 0xF9D4,0x502F,0x5030,0x5031,0x5032,0x5033,0x5034,0x5035,/* 0x90-0x97 */
+ 0x502B,0x502F,0x5030,0x5031,0x5032,0x5033,0x5034,0x5035,/* 0x90-0x97 */
0x5036,0x5037,0x5038,0x5039,0x503B,0x503D,0x503F,0x5040,/* 0x98-0x9F */
0x5041,0x5042,0x5044,0x5045,0x5046,0x5049,0x504A,0x504B,/* 0xA0-0xA7 */
0x504D,0x5050,0x5051,0x5052,0x5053,0x5054,0x5056,0x5057,/* 0xA8-0xAF */
0x50EA,0x50EB,0x50EF,0x50F0,0x50F1,0x50F2,0x50F4,0x50F6,/* 0x68-0x6F */
0x50F7,0x50F8,0x50F9,0x50FA,0x50FC,0x50FD,0x50FE,0x50FF,/* 0x70-0x77 */
0x5100,0x5101,0x5102,0x5103,0x5104,0x5105,0x5108,0x0000,/* 0x78-0x7F */
-
+
0x5109,0x510A,0x510C,0x510D,0x510E,0x510F,0x5110,0x5111,/* 0x80-0x87 */
0x5113,0x5114,0x5115,0x5116,0x5117,0x5118,0x5119,0x511A,/* 0x88-0x8F */
0x511B,0x511C,0x511D,0x511E,0x511F,0x5120,0x5122,0x5123,/* 0x90-0x97 */
0x513C,0x513D,0x513E,0x5142,0x5147,0x514A,0x514C,0x514E,/* 0xB0-0xB7 */
0x514F,0x5150,0x5152,0x5153,0x5157,0x5158,0x5159,0x515B,/* 0xB8-0xBF */
0x515D,0x515E,0x515F,0x5160,0x5161,0x5163,0x5164,0x5166,/* 0xC0-0xC7 */
- 0x5167,0xF978,0x516A,0x516F,0x5172,0x517A,0x517E,0x517F,/* 0xC8-0xCF */
+ 0x5167,0x5169,0x516A,0x516F,0x5172,0x517A,0x517E,0x517F,/* 0xC8-0xCF */
0x5183,0x5184,0x5186,0x5187,0x518A,0x518B,0x518E,0x518F,/* 0xD0-0xD7 */
0x5190,0x5191,0x5193,0x5194,0x5198,0x519A,0x519D,0x519E,/* 0xD8-0xDF */
0x519F,0x51A1,0x51A3,0x51A6,0x51A7,0x51A8,0x51A9,0x51AA,/* 0xE0-0xE7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x51D8,0x51D9,0x51DA,0xF954,0xFA15,0x51DF,0x51E2,0x51E3,/* 0x40-0x47 */
+ 0x51D8,0x51D9,0x51DA,0x51DC,0x51DE,0x51DF,0x51E2,0x51E3,/* 0x40-0x47 */
0x51E5,0x51E6,0x51E7,0x51E8,0x51E9,0x51EA,0x51EC,0x51EE,/* 0x48-0x4F */
0x51F1,0x51F2,0x51F4,0x51F7,0x51FE,0x5204,0x5205,0x5209,/* 0x50-0x57 */
0x520B,0x520C,0x520F,0x5210,0x5213,0x5214,0x5215,0x521C,/* 0x58-0x5F */
0x522A,0x522C,0x522F,0x5231,0x5232,0x5234,0x5235,0x523C,/* 0x68-0x6F */
0x523E,0x5244,0x5245,0x5246,0x5247,0x5248,0x5249,0x524B,/* 0x70-0x77 */
0x524E,0x524F,0x5252,0x5253,0x5255,0x5257,0x5258,0x0000,/* 0x78-0x7F */
-
+
0x5259,0x525A,0x525B,0x525D,0x525F,0x5260,0x5262,0x5263,/* 0x80-0x87 */
0x5264,0x5266,0x5268,0x526B,0x526C,0x526D,0x526E,0x5270,/* 0x88-0x8F */
0x5271,0x5273,0x5274,0x5275,0x5276,0x5277,0x5278,0x5279,/* 0x90-0x97 */
0x527A,0x527B,0x527C,0x527E,0x5280,0x5283,0x5284,0x5285,/* 0x98-0x9F */
- 0x5286,0x5287,0xF9C7,0x528A,0x528B,0x528C,0x528D,0x528E,/* 0xA0-0xA7 */
+ 0x5286,0x5287,0x5289,0x528A,0x528B,0x528C,0x528D,0x528E,/* 0xA0-0xA7 */
0x528F,0x5291,0x5292,0x5294,0x5295,0x5296,0x5297,0x5298,/* 0xA8-0xAF */
0x5299,0x529A,0x529C,0x52A4,0x52A5,0x52A6,0x52A7,0x52AE,/* 0xB0-0xB7 */
0x52AF,0x52B0,0x52B4,0x52B5,0x52B6,0x52B7,0x52B8,0x52B9,/* 0xB8-0xBF */
0x52BA,0x52BB,0x52BC,0x52BD,0x52C0,0x52C1,0x52C2,0x52C4,/* 0xC0-0xC7 */
0x52C5,0x52C6,0x52C8,0x52CA,0x52CC,0x52CD,0x52CE,0x52CF,/* 0xC8-0xCF */
0x52D1,0x52D3,0x52D4,0x52D5,0x52D7,0x52D9,0x52DA,0x52DB,/* 0xD0-0xD7 */
- 0x52DC,0x52DD,0xF92F,0x52E0,0x52E1,0x52E2,0x52E3,0x52E5,/* 0xD8-0xDF */
+ 0x52DC,0x52DD,0x52DE,0x52E0,0x52E1,0x52E2,0x52E3,0x52E5,/* 0xD8-0xDF */
0x52E6,0x52E7,0x52E8,0x52E9,0x52EA,0x52EB,0x52EC,0x52ED,/* 0xE0-0xE7 */
- 0x52EE,0x52EF,0x52F1,0x52F2,0x52F3,0x52F4,0xF97F,0x52F6,/* 0xE8-0xEF */
+ 0x52EE,0x52EF,0x52F1,0x52F2,0x52F3,0x52F4,0x52F5,0x52F6,/* 0xE8-0xEF */
0x52F7,0x52F8,0x52FB,0x52FC,0x52FD,0x5301,0x5302,0x5303,/* 0xF0-0xF7 */
0x5304,0x5307,0x5309,0x530A,0x530B,0x530C,0x530E,0x0000,/* 0xF8-0xFF */
};
0x5359,0x535B,0x535D,0x5365,0x5368,0x536A,0x536C,0x536D,/* 0x68-0x6F */
0x5372,0x5376,0x5379,0x537B,0x537C,0x537D,0x537E,0x5380,/* 0x70-0x77 */
0x5381,0x5383,0x5387,0x5388,0x538A,0x538E,0x538F,0x0000,/* 0x78-0x7F */
-
+
0x5390,0x5391,0x5392,0x5393,0x5394,0x5396,0x5397,0x5399,/* 0x80-0x87 */
0x539B,0x539C,0x539E,0x53A0,0x53A1,0x53A4,0x53A7,0x53AA,/* 0x88-0x8F */
0x53AB,0x53AC,0x53AD,0x53AF,0x53B0,0x53B1,0x53B2,0x53B3,/* 0x90-0x97 */
0x53B4,0x53B5,0x53B7,0x53B8,0x53B9,0x53BA,0x53BC,0x53BD,/* 0x98-0x9F */
- 0x53BE,0x53C0,0xF96B,0x53C4,0x53C5,0x53C6,0x53C7,0x53CE,/* 0xA0-0xA7 */
+ 0x53BE,0x53C0,0x53C3,0x53C4,0x53C5,0x53C6,0x53C7,0x53CE,/* 0xA0-0xA7 */
0x53CF,0x53D0,0x53D2,0x53D3,0x53D5,0x53DA,0x53DC,0x53DD,/* 0xA8-0xAF */
0x53DE,0x53E1,0x53E2,0x53E7,0x53F4,0x53FA,0x53FE,0x53FF,/* 0xB0-0xB7 */
0x5400,0x5402,0x5405,0x5407,0x540B,0x5414,0x5418,0x5419,/* 0xB8-0xBF */
0x541A,0x541C,0x5422,0x5424,0x5425,0x542A,0x5430,0x5433,/* 0xC0-0xC7 */
- 0x5436,0x5437,0x543A,0x543D,0x543F,0x5441,0xF980,0x5444,/* 0xC8-0xCF */
+ 0x5436,0x5437,0x543A,0x543D,0x543F,0x5441,0x5442,0x5444,/* 0xC8-0xCF */
0x5445,0x5447,0x5449,0x544C,0x544D,0x544E,0x544F,0x5451,/* 0xD0-0xD7 */
0x545A,0x545D,0x545E,0x545F,0x5460,0x5461,0x5463,0x5465,/* 0xD8-0xDF */
0x5467,0x5469,0x546A,0x546B,0x546C,0x546D,0x546E,0x546F,/* 0xE0-0xE7 */
0x5504,0x5505,0x5508,0x550A,0x550B,0x550C,0x550D,0x550E,/* 0x68-0x6F */
0x5512,0x5513,0x5515,0x5516,0x5517,0x5518,0x5519,0x551A,/* 0x70-0x77 */
0x551C,0x551D,0x551E,0x551F,0x5521,0x5525,0x5526,0x0000,/* 0x78-0x7F */
-
+
0x5528,0x5529,0x552B,0x552D,0x5532,0x5534,0x5535,0x5536,/* 0x80-0x87 */
0x5538,0x5539,0x553A,0x553B,0x553D,0x5540,0x5542,0x5545,/* 0x88-0x8F */
0x5547,0x5548,0x554B,0x554C,0x554D,0x554E,0x554F,0x5551,/* 0x90-0x97 */
0x5643,0x5644,0x5645,0x5646,0x5647,0x5648,0x5649,0x564A,/* 0x68-0x6F */
0x564B,0x564F,0x5650,0x5651,0x5652,0x5653,0x5655,0x5656,/* 0x70-0x77 */
0x565A,0x565B,0x565D,0x565E,0x565F,0x5660,0x5661,0x0000,/* 0x78-0x7F */
-
+
0x5663,0x5665,0x5666,0x5667,0x566D,0x566E,0x566F,0x5670,/* 0x80-0x87 */
0x5672,0x5673,0x5674,0x5675,0x5677,0x5678,0x5679,0x567A,/* 0x88-0x8F */
0x567D,0x567E,0x567F,0x5680,0x5681,0x5682,0x5683,0x5684,/* 0x90-0x97 */
0x5754,0x5755,0x5756,0x5758,0x5759,0x5762,0x5763,0x5765,/* 0x68-0x6F */
0x5767,0x576C,0x576E,0x5770,0x5771,0x5772,0x5774,0x5775,/* 0x70-0x77 */
0x5778,0x5779,0x577A,0x577D,0x577E,0x577F,0x5780,0x0000,/* 0x78-0x7F */
-
+
0x5781,0x5787,0x5788,0x5789,0x578A,0x578D,0x578E,0x578F,/* 0x80-0x87 */
0x5790,0x5791,0x5794,0x5795,0x5796,0x5797,0x5798,0x5799,/* 0x88-0x8F */
0x579A,0x579C,0x579D,0x579E,0x579F,0x57A5,0x57A8,0x57AA,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x583E,0x583F,0x5840,0x5841,0x5842,0x5843,0x5845,0x5846,/* 0x40-0x47 */
0x5847,0x5848,0x5849,0x584A,0x584B,0x584E,0x584F,0x5850,/* 0x48-0x4F */
- 0x5852,0x5853,0x5855,0x5856,0x5857,0x5859,0xFA10,0x585B,/* 0x50-0x57 */
+ 0x5852,0x5853,0x5855,0x5856,0x5857,0x5859,0x585A,0x585B,/* 0x50-0x57 */
0x585C,0x585D,0x585F,0x5860,0x5861,0x5862,0x5863,0x5864,/* 0x58-0x5F */
0x5866,0x5867,0x5868,0x5869,0x586A,0x586D,0x586E,0x586F,/* 0x60-0x67 */
0x5870,0x5871,0x5872,0x5873,0x5874,0x5875,0x5876,0x5877,/* 0x68-0x6F */
0x5878,0x5879,0x587A,0x587B,0x587C,0x587D,0x587F,0x5882,/* 0x70-0x77 */
0x5884,0x5886,0x5887,0x5888,0x588A,0x588B,0x588C,0x0000,/* 0x78-0x7F */
-
+
0x588D,0x588E,0x588F,0x5890,0x5891,0x5894,0x5895,0x5896,/* 0x80-0x87 */
0x5897,0x5898,0x589B,0x589C,0x589D,0x58A0,0x58A1,0x58A2,/* 0x88-0x8F */
0x58A3,0x58A4,0x58A5,0x58A6,0x58A7,0x58AA,0x58AB,0x58AC,/* 0x90-0x97 */
0x58B5,0x58B6,0x58B7,0x58B8,0x58B9,0x58BA,0x58BB,0x58BD,/* 0xA0-0xA7 */
0x58BE,0x58BF,0x58C0,0x58C2,0x58C3,0x58C4,0x58C6,0x58C7,/* 0xA8-0xAF */
0x58C8,0x58C9,0x58CA,0x58CB,0x58CC,0x58CD,0x58CE,0x58CF,/* 0xB0-0xB7 */
- 0x58D0,0x58D2,0x58D3,0x58D4,0x58D6,0x58D7,0xF94A,0x58D9,/* 0xB8-0xBF */
- 0x58DA,0x58DB,0x58DC,0x58DD,0x58DE,0xF942,0x58E0,0x58E1,/* 0xC0-0xC7 */
+ 0x58D0,0x58D2,0x58D3,0x58D4,0x58D6,0x58D7,0x58D8,0x58D9,/* 0xB8-0xBF */
+ 0x58DA,0x58DB,0x58DC,0x58DD,0x58DE,0x58DF,0x58E0,0x58E1,/* 0xC0-0xC7 */
0x58E2,0x58E3,0x58E5,0x58E6,0x58E7,0x58E8,0x58E9,0x58EA,/* 0xC8-0xCF */
0x58ED,0x58EF,0x58F1,0x58F2,0x58F4,0x58F5,0x58F7,0x58F8,/* 0xD0-0xD7 */
0x58FA,0x58FB,0x58FC,0x58FD,0x58FE,0x58FF,0x5900,0x5901,/* 0xD8-0xDF */
0x597E,0x597F,0x5980,0x5985,0x5989,0x598B,0x598C,0x598E,/* 0x68-0x6F */
0x598F,0x5990,0x5991,0x5994,0x5995,0x5998,0x599A,0x599B,/* 0x70-0x77 */
0x599C,0x599D,0x599F,0x59A0,0x59A1,0x59A2,0x59A6,0x0000,/* 0x78-0x7F */
-
+
0x59A7,0x59AC,0x59AD,0x59B0,0x59B1,0x59B3,0x59B4,0x59B5,/* 0x80-0x87 */
0x59B6,0x59B7,0x59B8,0x59BA,0x59BC,0x59BD,0x59BF,0x59C0,/* 0x88-0x8F */
0x59C1,0x59C2,0x59C3,0x59C4,0x59C5,0x59C7,0x59C8,0x59C9,/* 0x90-0x97 */
0x5A93,0x5A94,0x5A95,0x5A96,0x5A97,0x5A98,0x5A99,0x5A9C,/* 0x68-0x6F */
0x5A9D,0x5A9E,0x5A9F,0x5AA0,0x5AA1,0x5AA2,0x5AA3,0x5AA4,/* 0x70-0x77 */
0x5AA5,0x5AA6,0x5AA7,0x5AA8,0x5AA9,0x5AAB,0x5AAC,0x0000,/* 0x78-0x7F */
-
+
0x5AAD,0x5AAE,0x5AAF,0x5AB0,0x5AB1,0x5AB4,0x5AB6,0x5AB7,/* 0x80-0x87 */
0x5AB9,0x5ABA,0x5ABB,0x5ABC,0x5ABD,0x5ABF,0x5AC0,0x5AC3,/* 0x88-0x8F */
0x5AC4,0x5AC5,0x5AC6,0x5AC7,0x5AC8,0x5ACA,0x5ACB,0x5ACD,/* 0x90-0x97 */
0x5BA7,0x5BA8,0x5BA9,0x5BAC,0x5BAD,0x5BAE,0x5BAF,0x5BB1,/* 0x68-0x6F */
0x5BB2,0x5BB7,0x5BBA,0x5BBB,0x5BBC,0x5BC0,0x5BC1,0x5BC3,/* 0x70-0x77 */
0x5BC8,0x5BC9,0x5BCA,0x5BCB,0x5BCD,0x5BCE,0x5BCF,0x0000,/* 0x78-0x7F */
-
+
0x5BD1,0x5BD4,0x5BD5,0x5BD6,0x5BD7,0x5BD8,0x5BD9,0x5BDA,/* 0x80-0x87 */
- 0x5BDB,0x5BDC,0x5BE0,0x5BE2,0x5BE3,0x5BE6,0xF9AA,0x5BE9,/* 0x88-0x8F */
+ 0x5BDB,0x5BDC,0x5BE0,0x5BE2,0x5BE3,0x5BE6,0x5BE7,0x5BE9,/* 0x88-0x8F */
0x5BEA,0x5BEB,0x5BEC,0x5BED,0x5BEF,0x5BF1,0x5BF2,0x5BF3,/* 0x90-0x97 */
0x5BF4,0x5BF5,0x5BF6,0x5BF7,0x5BFD,0x5BFE,0x5C00,0x5C02,/* 0x98-0x9F */
0x5C03,0x5C05,0x5C07,0x5C08,0x5C0B,0x5C0C,0x5C0D,0x5C0E,/* 0xA0-0xA7 */
0x5C2D,0x5C2E,0x5C2F,0x5C30,0x5C32,0x5C33,0x5C35,0x5C36,/* 0xB8-0xBF */
0x5C37,0x5C43,0x5C44,0x5C46,0x5C47,0x5C4C,0x5C4D,0x5C52,/* 0xC0-0xC7 */
0x5C53,0x5C54,0x5C56,0x5C57,0x5C58,0x5C5A,0x5C5B,0x5C5C,/* 0xC8-0xCF */
- 0x5C5D,0x5C5F,0xF94B,0x5C64,0x5C67,0x5C68,0x5C69,0x5C6A,/* 0xD0-0xD7 */
+ 0x5C5D,0x5C5F,0x5C62,0x5C64,0x5C67,0x5C68,0x5C69,0x5C6A,/* 0xD0-0xD7 */
0x5C6B,0x5C6C,0x5C6D,0x5C70,0x5C72,0x5C73,0x5C74,0x5C75,/* 0xD8-0xDF */
0x5C76,0x5C77,0x5C78,0x5C7B,0x5C7C,0x5C7D,0x5C7E,0x5C80,/* 0xE0-0xE7 */
0x5C83,0x5C84,0x5C85,0x5C86,0x5C87,0x5C89,0x5C8A,0x5C8B,/* 0xE8-0xEF */
0x5CE2,0x5CE3,0x5CE7,0x5CE9,0x5CEB,0x5CEC,0x5CEE,0x5CEF,/* 0x68-0x6F */
0x5CF1,0x5CF2,0x5CF3,0x5CF4,0x5CF5,0x5CF6,0x5CF7,0x5CF8,/* 0x70-0x77 */
0x5CF9,0x5CFA,0x5CFC,0x5CFD,0x5CFE,0x5CFF,0x5D00,0x0000,/* 0x78-0x7F */
-
+
0x5D01,0x5D04,0x5D05,0x5D08,0x5D09,0x5D0A,0x5D0B,0x5D0C,/* 0x80-0x87 */
0x5D0D,0x5D0F,0x5D10,0x5D11,0x5D12,0x5D13,0x5D15,0x5D17,/* 0x88-0x8F */
- 0x5D18,0xF9D5,0x5D1A,0x5D1C,0x5D1D,0x5D1F,0x5D20,0x5D21,/* 0x90-0x97 */
+ 0x5D18,0x5D19,0x5D1A,0x5D1C,0x5D1D,0x5D1F,0x5D20,0x5D21,/* 0x90-0x97 */
0x5D22,0x5D23,0x5D25,0x5D28,0x5D2A,0x5D2B,0x5D2C,0x5D2F,/* 0x98-0x9F */
0x5D30,0x5D31,0x5D32,0x5D33,0x5D35,0x5D36,0x5D37,0x5D38,/* 0xA0-0xA7 */
0x5D39,0x5D3A,0x5D3B,0x5D3C,0x5D3F,0x5D40,0x5D41,0x5D42,/* 0xA8-0xAF */
0x5D43,0x5D44,0x5D45,0x5D46,0x5D48,0x5D49,0x5D4D,0x5D4E,/* 0xB0-0xB7 */
- 0x5D4F,0xF921,0x5D51,0x5D52,0x5D53,0x5D54,0x5D55,0x5D56,/* 0xB8-0xBF */
+ 0x5D4F,0x5D50,0x5D51,0x5D52,0x5D53,0x5D54,0x5D55,0x5D56,/* 0xB8-0xBF */
0x5D57,0x5D59,0x5D5A,0x5D5C,0x5D5E,0x5D5F,0x5D60,0x5D61,/* 0xC0-0xC7 */
0x5D62,0x5D63,0x5D64,0x5D65,0x5D66,0x5D67,0x5D68,0x5D6A,/* 0xC8-0xCF */
0x5D6D,0x5D6E,0x5D70,0x5D71,0x5D72,0x5D73,0x5D75,0x5D76,/* 0xD0-0xD7 */
0x5DA1,0x5DA2,0x5DA3,0x5DA4,0x5DA5,0x5DA6,0x5DA7,0x5DA8,/* 0x40-0x47 */
0x5DA9,0x5DAA,0x5DAB,0x5DAC,0x5DAD,0x5DAE,0x5DAF,0x5DB0,/* 0x48-0x4F */
0x5DB1,0x5DB2,0x5DB3,0x5DB4,0x5DB5,0x5DB6,0x5DB8,0x5DB9,/* 0x50-0x57 */
- 0xF9AB,0x5DBB,0x5DBC,0x5DBD,0x5DBE,0x5DBF,0x5DC0,0x5DC1,/* 0x58-0x5F */
+ 0x5DBA,0x5DBB,0x5DBC,0x5DBD,0x5DBE,0x5DBF,0x5DC0,0x5DC1,/* 0x58-0x5F */
0x5DC2,0x5DC3,0x5DC4,0x5DC6,0x5DC7,0x5DC8,0x5DC9,0x5DCA,/* 0x60-0x67 */
0x5DCB,0x5DCC,0x5DCE,0x5DCF,0x5DD0,0x5DD1,0x5DD2,0x5DD3,/* 0x68-0x6F */
0x5DD4,0x5DD5,0x5DD6,0x5DD7,0x5DD8,0x5DD9,0x5DDA,0x5DDC,/* 0x70-0x77 */
0x5DDF,0x5DE0,0x5DE3,0x5DE4,0x5DEA,0x5DEC,0x5DED,0x0000,/* 0x78-0x7F */
-
+
0x5DF0,0x5DF5,0x5DF6,0x5DF8,0x5DF9,0x5DFA,0x5DFB,0x5DFC,/* 0x80-0x87 */
0x5DFF,0x5E00,0x5E04,0x5E07,0x5E09,0x5E0A,0x5E0B,0x5E0D,/* 0x88-0x8F */
0x5E0E,0x5E12,0x5E13,0x5E17,0x5E1E,0x5E1F,0x5E20,0x5E21,/* 0x90-0x97 */
0x5EC6,0x5EC7,0x5EC8,0x5ECB,0x5ECC,0x5ECD,0x5ECE,0x5ECF,/* 0x40-0x47 */
0x5ED0,0x5ED4,0x5ED5,0x5ED7,0x5ED8,0x5ED9,0x5EDA,0x5EDC,/* 0x48-0x4F */
0x5EDD,0x5EDE,0x5EDF,0x5EE0,0x5EE1,0x5EE2,0x5EE3,0x5EE4,/* 0x50-0x57 */
- 0x5EE5,0x5EE6,0x5EE7,0x5EE9,0x5EEB,0xF982,0x5EED,0x5EEE,/* 0x58-0x5F */
+ 0x5EE5,0x5EE6,0x5EE7,0x5EE9,0x5EEB,0x5EEC,0x5EED,0x5EEE,/* 0x58-0x5F */
0x5EEF,0x5EF0,0x5EF1,0x5EF2,0x5EF3,0x5EF5,0x5EF8,0x5EF9,/* 0x60-0x67 */
0x5EFB,0x5EFC,0x5EFD,0x5F05,0x5F06,0x5F07,0x5F09,0x5F0C,/* 0x68-0x6F */
0x5F0D,0x5F0E,0x5F10,0x5F12,0x5F14,0x5F16,0x5F19,0x5F1A,/* 0x70-0x77 */
0x5F1C,0x5F1D,0x5F1E,0x5F21,0x5F22,0x5F23,0x5F24,0x0000,/* 0x78-0x7F */
-
+
0x5F28,0x5F2B,0x5F2C,0x5F2E,0x5F30,0x5F32,0x5F33,0x5F34,/* 0x80-0x87 */
0x5F35,0x5F36,0x5F37,0x5F38,0x5F3B,0x5F3D,0x5F3E,0x5F3F,/* 0x88-0x8F */
0x5F41,0x5F42,0x5F43,0x5F44,0x5F45,0x5F46,0x5F47,0x5F48,/* 0x90-0x97 */
0x5F74,0x5F75,0x5F76,0x5F78,0x5F7A,0x5F7D,0x5F7E,0x5F7F,/* 0xB0-0xB7 */
0x5F83,0x5F86,0x5F8D,0x5F8E,0x5F8F,0x5F91,0x5F93,0x5F94,/* 0xB8-0xBF */
0x5F96,0x5F9A,0x5F9B,0x5F9D,0x5F9E,0x5F9F,0x5FA0,0x5FA2,/* 0xC0-0xC7 */
- 0x5FA3,0x5FA4,0x5FA5,0x5FA6,0x5FA7,0xF966,0x5FAB,0x5FAC,/* 0xC8-0xCF */
+ 0x5FA3,0x5FA4,0x5FA5,0x5FA6,0x5FA7,0x5FA9,0x5FAB,0x5FAC,/* 0xC8-0xCF */
0x5FAF,0x5FB0,0x5FB1,0x5FB2,0x5FB3,0x5FB4,0x5FB6,0x5FB8,/* 0xD0-0xD7 */
0x5FB9,0x5FBA,0x5FBB,0x5FBE,0x5FBF,0x5FC0,0x5FC1,0x5FC2,/* 0xD8-0xDF */
0x5FC7,0x5FC8,0x5FCA,0x5FCB,0x5FCE,0x5FD3,0x5FD4,0x5FD5,/* 0xE0-0xE7 */
0x604F,0x6051,0x6053,0x6054,0x6056,0x6057,0x6058,0x605B,/* 0x68-0x6F */
0x605C,0x605E,0x605F,0x6060,0x6061,0x6065,0x6066,0x606E,/* 0x70-0x77 */
0x6071,0x6072,0x6074,0x6075,0x6077,0x607E,0x6080,0x0000,/* 0x78-0x7F */
-
+
0x6081,0x6082,0x6085,0x6086,0x6087,0x6088,0x608A,0x608B,/* 0x80-0x87 */
0x608E,0x608F,0x6090,0x6091,0x6093,0x6095,0x6097,0x6098,/* 0x88-0x8F */
0x6099,0x609C,0x609E,0x60A1,0x60A2,0x60A4,0x60A5,0x60A7,/* 0x90-0x97 */
0x60B9,0x60BA,0x60BD,0x60BE,0x60BF,0x60C0,0x60C1,0x60C2,/* 0xA0-0xA7 */
0x60C3,0x60C4,0x60C7,0x60C8,0x60C9,0x60CC,0x60CD,0x60CE,/* 0xA8-0xAF */
0x60CF,0x60D0,0x60D2,0x60D3,0x60D4,0x60D6,0x60D7,0x60D9,/* 0xB0-0xB7 */
- 0x60DB,0x60DE,0xF9B9,0x60E2,0x60E3,0x60E4,0x60E5,0x60EA,/* 0xB8-0xBF */
+ 0x60DB,0x60DE,0x60E1,0x60E2,0x60E3,0x60E4,0x60E5,0x60EA,/* 0xB8-0xBF */
0x60F1,0x60F2,0x60F5,0x60F7,0x60F8,0x60FB,0x60FC,0x60FD,/* 0xC0-0xC7 */
0x60FE,0x60FF,0x6102,0x6103,0x6104,0x6105,0x6107,0x610A,/* 0xC8-0xCF */
0x610B,0x610C,0x6110,0x6111,0x6112,0x6113,0x6114,0x6116,/* 0xD0-0xD7 */
0x6122,0x6125,0x6128,0x6129,0x612A,0x612C,0x612D,0x612E,/* 0xE0-0xE7 */
0x612F,0x6130,0x6131,0x6132,0x6133,0x6134,0x6135,0x6136,/* 0xE8-0xEF */
0x6137,0x6138,0x6139,0x613A,0x613B,0x613C,0x613D,0x613E,/* 0xF0-0xF7 */
- 0x6140,0x6141,0x6142,0x6143,0xF9D9,0x6145,0x6146,0x0000,/* 0xF8-0xFF */
+ 0x6140,0x6141,0x6142,0x6143,0x6144,0x6145,0x6146,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_91[256] = {
0x6172,0x6173,0x6174,0x6176,0x6178,0x6179,0x617A,0x617B,/* 0x60-0x67 */
0x617C,0x617D,0x617E,0x617F,0x6180,0x6181,0x6182,0x6183,/* 0x68-0x6F */
0x6184,0x6185,0x6186,0x6187,0x6188,0x6189,0x618A,0x618C,/* 0x70-0x77 */
- 0x618D,0x618F,0xF98F,0x6191,0x6192,0x6193,0x6195,0x0000,/* 0x78-0x7F */
-
+ 0x618D,0x618F,0x6190,0x6191,0x6192,0x6193,0x6195,0x0000,/* 0x78-0x7F */
+
0x6196,0x6197,0x6198,0x6199,0x619A,0x619B,0x619C,0x619E,/* 0x80-0x87 */
0x619F,0x61A0,0x61A1,0x61A2,0x61A3,0x61A4,0x61A5,0x61A6,/* 0x88-0x8F */
0x61AA,0x61AB,0x61AD,0x61AE,0x61AF,0x61B0,0x61B1,0x61B2,/* 0x90-0x97 */
0x61DC,0x61DD,0x61DE,0x61DF,0x61E0,0x61E1,0x61E2,0x61E3,/* 0xB8-0xBF */
0x61E4,0x61E5,0x61E7,0x61E8,0x61E9,0x61EA,0x61EB,0x61EC,/* 0xC0-0xC7 */
0x61ED,0x61EE,0x61EF,0x61F0,0x61F1,0x61F2,0x61F3,0x61F4,/* 0xC8-0xCF */
- 0xF90D,0x61F7,0x61F8,0x61F9,0x61FA,0x61FB,0x61FC,0x61FD,/* 0xD0-0xD7 */
- 0x61FE,0xF990,0x6201,0x6202,0x6203,0x6204,0x6205,0x6207,/* 0xD8-0xDF */
+ 0x61F6,0x61F7,0x61F8,0x61F9,0x61FA,0x61FB,0x61FC,0x61FD,/* 0xD0-0xD7 */
+ 0x61FE,0x6200,0x6201,0x6202,0x6203,0x6204,0x6205,0x6207,/* 0xD8-0xDF */
0x6209,0x6213,0x6214,0x6219,0x621C,0x621D,0x621E,0x6220,/* 0xE0-0xE7 */
0x6223,0x6226,0x6227,0x6228,0x6229,0x622B,0x622D,0x622F,/* 0xE8-0xEF */
0x6230,0x6231,0x6232,0x6235,0x6236,0x6238,0x6239,0x623A,/* 0xF0-0xF7 */
0x6299,0x629C,0x629D,0x629E,0x62A3,0x62A6,0x62A7,0x62A9,/* 0x68-0x6F */
0x62AA,0x62AD,0x62AE,0x62AF,0x62B0,0x62B2,0x62B3,0x62B4,/* 0x70-0x77 */
0x62B6,0x62B7,0x62B8,0x62BA,0x62BE,0x62C0,0x62C1,0x0000,/* 0x78-0x7F */
-
- 0x62C3,0x62CB,0xF95B,0x62D1,0x62D5,0x62DD,0x62DE,0x62E0,/* 0x80-0x87 */
+
+ 0x62C3,0x62CB,0x62CF,0x62D1,0x62D5,0x62DD,0x62DE,0x62E0,/* 0x80-0x87 */
0x62E1,0x62E4,0x62EA,0x62EB,0x62F0,0x62F2,0x62F5,0x62F8,/* 0x88-0x8F */
0x62F9,0x62FA,0x62FB,0x6300,0x6303,0x6304,0x6305,0x6306,/* 0x90-0x97 */
0x630A,0x630B,0x630C,0x630D,0x630F,0x6310,0x6312,0x6313,/* 0x98-0x9F */
0x63FE,0x6403,0x6404,0x6406,0x6407,0x6408,0x6409,0x640A,/* 0x68-0x6F */
0x640D,0x640E,0x6411,0x6412,0x6415,0x6416,0x6417,0x6418,/* 0x70-0x77 */
0x6419,0x641A,0x641D,0x641F,0x6422,0x6423,0x6424,0x0000,/* 0x78-0x7F */
-
+
0x6425,0x6427,0x6428,0x6429,0x642B,0x642E,0x642F,0x6430,/* 0x80-0x87 */
0x6431,0x6432,0x6433,0x6435,0x6436,0x6437,0x6438,0x6439,/* 0x88-0x8F */
0x643B,0x643C,0x643E,0x6440,0x6442,0x6443,0x6449,0x644B,/* 0x90-0x97 */
0x6473,0x6474,0x6475,0x6476,0x6477,0x647B,0x647C,0x647D,/* 0xB8-0xBF */
0x647E,0x647F,0x6480,0x6481,0x6483,0x6486,0x6488,0x6489,/* 0xC0-0xC7 */
0x648A,0x648B,0x648C,0x648D,0x648E,0x648F,0x6490,0x6493,/* 0xC8-0xCF */
- 0x6494,0x6497,0x6498,0xF991,0x649B,0x649C,0x649D,0x649F,/* 0xD0-0xD7 */
+ 0x6494,0x6497,0x6498,0x649A,0x649B,0x649C,0x649D,0x649F,/* 0xD0-0xD7 */
0x64A0,0x64A1,0x64A2,0x64A3,0x64A5,0x64A6,0x64A7,0x64A8,/* 0xD8-0xDF */
0x64AA,0x64AB,0x64AF,0x64B1,0x64B2,0x64B3,0x64B4,0x64B6,/* 0xE0-0xE7 */
- 0x64B9,0x64BB,0x64BD,0x64BE,0x64BF,0x64C1,0x64C3,0xF930,/* 0xE8-0xEF */
+ 0x64B9,0x64BB,0x64BD,0x64BE,0x64BF,0x64C1,0x64C3,0x64C4,/* 0xE8-0xEF */
0x64C6,0x64C7,0x64C8,0x64C9,0x64CA,0x64CB,0x64CC,0x64CF,/* 0xF0-0xF7 */
0x64D1,0x64D3,0x64D4,0x64D5,0x64D6,0x64D9,0x64DA,0x0000,/* 0xF8-0xFF */
};
0x6508,0x650A,0x650B,0x650C,0x650D,0x650E,0x650F,0x6510,/* 0x68-0x6F */
0x6511,0x6513,0x6514,0x6515,0x6516,0x6517,0x6519,0x651A,/* 0x70-0x77 */
0x651B,0x651C,0x651D,0x651E,0x651F,0x6520,0x6521,0x0000,/* 0x78-0x7F */
-
+
0x6522,0x6523,0x6524,0x6526,0x6527,0x6528,0x6529,0x652A,/* 0x80-0x87 */
0x652C,0x652D,0x6530,0x6531,0x6532,0x6533,0x6537,0x653A,/* 0x88-0x8F */
0x653C,0x653D,0x6540,0x6541,0x6542,0x6543,0x6544,0x6546,/* 0x90-0x97 */
0x6547,0x654A,0x654B,0x654D,0x654E,0x6550,0x6552,0x6553,/* 0x98-0x9F */
0x6554,0x6557,0x6558,0x655A,0x655C,0x655F,0x6560,0x6561,/* 0xA0-0xA7 */
0x6564,0x6565,0x6567,0x6568,0x6569,0x656A,0x656D,0x656E,/* 0xA8-0xAF */
- 0x656F,0x6571,0x6573,0x6575,0x6576,0xF969,0x6579,0x657A,/* 0xB0-0xB7 */
+ 0x656F,0x6571,0x6573,0x6575,0x6576,0x6578,0x6579,0x657A,/* 0xB0-0xB7 */
0x657B,0x657C,0x657D,0x657E,0x657F,0x6580,0x6581,0x6582,/* 0xB8-0xBF */
0x6583,0x6584,0x6585,0x6586,0x6588,0x6589,0x658A,0x658D,/* 0xC0-0xC7 */
0x658E,0x658F,0x6592,0x6594,0x6595,0x6596,0x6598,0x659A,/* 0xC8-0xCF */
0x6632,0x6633,0x6637,0x6638,0x6639,0x663A,0x663B,0x663D,/* 0x68-0x6F */
0x663F,0x6640,0x6642,0x6644,0x6645,0x6646,0x6647,0x6648,/* 0x70-0x77 */
0x6649,0x664A,0x664D,0x664E,0x6650,0x6651,0x6658,0x0000,/* 0x78-0x7F */
-
+
0x6659,0x665B,0x665C,0x665D,0x665E,0x6660,0x6662,0x6663,/* 0x80-0x87 */
0x6665,0x6667,0x6669,0x666A,0x666B,0x666C,0x666D,0x6671,/* 0x88-0x8F */
0x6672,0x6673,0x6675,0x6678,0x6679,0x667B,0x667C,0x667D,/* 0x90-0x97 */
- 0x667F,0x6680,0x6681,0x6683,0x6685,0x6686,0xF9C5,0x6689,/* 0x98-0x9F */
+ 0x667F,0x6680,0x6681,0x6683,0x6685,0x6686,0x6688,0x6689,/* 0x98-0x9F */
0x668A,0x668B,0x668D,0x668E,0x668F,0x6690,0x6692,0x6693,/* 0xA0-0xA7 */
0x6694,0x6695,0x6698,0x6699,0x669A,0x669B,0x669C,0x669E,/* 0xA8-0xAF */
0x669F,0x66A0,0x66A1,0x66A2,0x66A3,0x66A4,0x66A5,0x66A6,/* 0xB0-0xB7 */
0x66A9,0x66AA,0x66AB,0x66AC,0x66AD,0x66AF,0x66B0,0x66B1,/* 0xB8-0xBF */
0x66B2,0x66B3,0x66B5,0x66B6,0x66B7,0x66B8,0x66BA,0x66BB,/* 0xC0-0xC7 */
0x66BC,0x66BD,0x66BF,0x66C0,0x66C1,0x66C2,0x66C3,0x66C4,/* 0xC8-0xCF */
- 0x66C5,0xF98B,0x66C7,0x66C8,0x66C9,0x66CA,0x66CB,0x66CC,/* 0xD0-0xD7 */
+ 0x66C5,0x66C6,0x66C7,0x66C8,0x66C9,0x66CA,0x66CB,0x66CC,/* 0xD0-0xD7 */
0x66CD,0x66CE,0x66CF,0x66D0,0x66D1,0x66D2,0x66D3,0x66D4,/* 0xD8-0xDF */
0x66D5,0x66D6,0x66D7,0x66D8,0x66DA,0x66DE,0x66DF,0x66E0,/* 0xE0-0xE7 */
0x66E1,0x66E2,0x66E3,0x66E4,0x66E5,0x66E7,0x66E8,0x66EA,/* 0xE8-0xEF */
0x674A,0x674B,0x674D,0x6752,0x6754,0x6755,0x6757,0x6758,/* 0x68-0x6F */
0x6759,0x675A,0x675B,0x675D,0x6762,0x6763,0x6764,0x6766,/* 0x70-0x77 */
0x6767,0x676B,0x676C,0x676E,0x6771,0x6774,0x6776,0x0000,/* 0x78-0x7F */
-
- 0x6778,0x6779,0x677A,0xF9C8,0x677D,0x6780,0x6782,0x6783,/* 0x80-0x87 */
+
+ 0x6778,0x6779,0x677A,0x677B,0x677D,0x6780,0x6782,0x6783,/* 0x80-0x87 */
0x6785,0x6786,0x6788,0x678A,0x678C,0x678D,0x678E,0x678F,/* 0x88-0x8F */
0x6791,0x6792,0x6793,0x6794,0x6796,0x6799,0x679B,0x679F,/* 0x90-0x97 */
0x67A0,0x67A1,0x67A4,0x67A6,0x67A9,0x67AC,0x67AE,0x67B1,/* 0x98-0x9F */
0x6899,0x689A,0x689B,0x689C,0x689D,0x689E,0x689F,0x68A0,/* 0x68-0x6F */
0x68A1,0x68A3,0x68A4,0x68A5,0x68A9,0x68AA,0x68AB,0x68AC,/* 0x70-0x77 */
0x68AE,0x68B1,0x68B2,0x68B4,0x68B6,0x68B7,0x68B8,0x0000,/* 0x78-0x7F */
-
+
0x68B9,0x68BA,0x68BB,0x68BC,0x68BD,0x68BE,0x68BF,0x68C1,/* 0x80-0x87 */
0x68C3,0x68C4,0x68C5,0x68C6,0x68C7,0x68C8,0x68CA,0x68CC,/* 0x88-0x8F */
0x68CE,0x68CF,0x68D0,0x68D1,0x68D3,0x68D4,0x68D6,0x68D7,/* 0x90-0x97 */
0x699F,0x69A0,0x69A1,0x69A2,0x69A3,0x69A4,0x69A5,0x69A6,/* 0x68-0x6F */
0x69A9,0x69AA,0x69AC,0x69AE,0x69AF,0x69B0,0x69B2,0x69B3,/* 0x70-0x77 */
0x69B5,0x69B6,0x69B8,0x69B9,0x69BA,0x69BC,0x69BD,0x0000,/* 0x78-0x7F */
-
+
0x69BE,0x69BF,0x69C0,0x69C2,0x69C3,0x69C4,0x69C5,0x69C6,/* 0x80-0x87 */
0x69C7,0x69C8,0x69C9,0x69CB,0x69CD,0x69CF,0x69D1,0x69D2,/* 0x88-0x8F */
0x69D3,0x69D5,0x69D6,0x69D7,0x69D8,0x69D9,0x69DA,0x69DC,/* 0x90-0x97 */
0x69DD,0x69DE,0x69E1,0x69E2,0x69E3,0x69E4,0x69E5,0x69E6,/* 0x98-0x9F */
0x69E7,0x69E8,0x69E9,0x69EA,0x69EB,0x69EC,0x69EE,0x69EF,/* 0xA0-0xA7 */
0x69F0,0x69F1,0x69F3,0x69F4,0x69F5,0x69F6,0x69F7,0x69F8,/* 0xA8-0xAF */
- 0x69F9,0x69FA,0x69FB,0x69FC,0x69FE,0x6A00,0x6A01,0xF9BF,/* 0xB0-0xB7 */
+ 0x69F9,0x69FA,0x69FB,0x69FC,0x69FE,0x6A00,0x6A01,0x6A02,/* 0xB0-0xB7 */
0x6A03,0x6A04,0x6A05,0x6A06,0x6A07,0x6A08,0x6A09,0x6A0B,/* 0xB8-0xBF */
- 0x6A0C,0x6A0D,0x6A0E,0x6A0F,0x6A10,0x6A11,0x6A12,0xF94C,/* 0xC0-0xC7 */
+ 0x6A0C,0x6A0D,0x6A0E,0x6A0F,0x6A10,0x6A11,0x6A12,0x6A13,/* 0xC0-0xC7 */
0x6A14,0x6A15,0x6A16,0x6A19,0x6A1A,0x6A1B,0x6A1C,0x6A1D,/* 0xC8-0xCF */
0x6A1E,0x6A20,0x6A22,0x6A23,0x6A24,0x6A25,0x6A26,0x6A27,/* 0xD0-0xD7 */
0x6A29,0x6A2B,0x6A2C,0x6A2D,0x6A2E,0x6A30,0x6A32,0x6A33,/* 0xD8-0xDF */
0x6A8B,0x6A8C,0x6A8D,0x6A8F,0x6A92,0x6A93,0x6A94,0x6A95,/* 0x68-0x6F */
0x6A96,0x6A98,0x6A99,0x6A9A,0x6A9B,0x6A9C,0x6A9D,0x6A9E,/* 0x70-0x77 */
0x6A9F,0x6AA1,0x6AA2,0x6AA3,0x6AA4,0x6AA5,0x6AA6,0x0000,/* 0x78-0x7F */
-
+
0x6AA7,0x6AA8,0x6AAA,0x6AAD,0x6AAE,0x6AAF,0x6AB0,0x6AB1,/* 0x80-0x87 */
0x6AB2,0x6AB3,0x6AB4,0x6AB5,0x6AB6,0x6AB7,0x6AB8,0x6AB9,/* 0x88-0x8F */
0x6ABA,0x6ABB,0x6ABC,0x6ABD,0x6ABE,0x6ABF,0x6AC0,0x6AC1,/* 0x90-0x97 */
0x6AC2,0x6AC3,0x6AC4,0x6AC5,0x6AC6,0x6AC7,0x6AC8,0x6AC9,/* 0x98-0x9F */
0x6ACA,0x6ACB,0x6ACC,0x6ACD,0x6ACE,0x6ACF,0x6AD0,0x6AD1,/* 0xA0-0xA7 */
- 0x6AD2,0xF931,0x6AD4,0x6AD5,0x6AD6,0x6AD7,0x6AD8,0x6AD9,/* 0xA8-0xAF */
+ 0x6AD2,0x6AD3,0x6AD4,0x6AD5,0x6AD6,0x6AD7,0x6AD8,0x6AD9,/* 0xA8-0xAF */
0x6ADA,0x6ADB,0x6ADC,0x6ADD,0x6ADE,0x6ADF,0x6AE0,0x6AE1,/* 0xB0-0xB7 */
0x6AE2,0x6AE3,0x6AE4,0x6AE5,0x6AE6,0x6AE7,0x6AE8,0x6AE9,/* 0xB8-0xBF */
0x6AEA,0x6AEB,0x6AEC,0x6AED,0x6AEE,0x6AEF,0x6AF0,0x6AF1,/* 0xC0-0xC7 */
0x6AF2,0x6AF3,0x6AF4,0x6AF5,0x6AF6,0x6AF7,0x6AF8,0x6AF9,/* 0xC8-0xCF */
0x6AFA,0x6AFB,0x6AFC,0x6AFD,0x6AFE,0x6AFF,0x6B00,0x6B01,/* 0xD0-0xD7 */
- 0x6B02,0x6B03,0xF91D,0x6B05,0x6B06,0x6B07,0x6B08,0x6B09,/* 0xD8-0xDF */
+ 0x6B02,0x6B03,0x6B04,0x6B05,0x6B06,0x6B07,0x6B08,0x6B09,/* 0xD8-0xDF */
0x6B0A,0x6B0B,0x6B0C,0x6B0D,0x6B0E,0x6B0F,0x6B10,0x6B11,/* 0xE0-0xE7 */
0x6B12,0x6B13,0x6B14,0x6B15,0x6B16,0x6B17,0x6B18,0x6B19,/* 0xE8-0xEF */
0x6B1A,0x6B1B,0x6B1C,0x6B1D,0x6B1E,0x6B1F,0x6B25,0x6B26,/* 0xF0-0xF7 */
0x6B51,0x6B52,0x6B53,0x6B54,0x6B55,0x6B56,0x6B57,0x6B58,/* 0x58-0x5F */
0x6B5A,0x6B5B,0x6B5C,0x6B5D,0x6B5E,0x6B5F,0x6B60,0x6B61,/* 0x60-0x67 */
0x6B68,0x6B69,0x6B6B,0x6B6C,0x6B6D,0x6B6E,0x6B6F,0x6B70,/* 0x68-0x6F */
- 0x6B71,0x6B72,0x6B73,0x6B74,0x6B75,0x6B76,0xF98C,0x6B78,/* 0x70-0x77 */
+ 0x6B71,0x6B72,0x6B73,0x6B74,0x6B75,0x6B76,0x6B77,0x6B78,/* 0x70-0x77 */
0x6B7A,0x6B7D,0x6B7E,0x6B7F,0x6B80,0x6B85,0x6B88,0x0000,/* 0x78-0x7F */
-
+
0x6B8C,0x6B8E,0x6B8F,0x6B90,0x6B91,0x6B94,0x6B95,0x6B97,/* 0x80-0x87 */
0x6B98,0x6B99,0x6B9C,0x6B9D,0x6B9E,0x6B9F,0x6BA0,0x6BA2,/* 0x88-0x8F */
0x6BA3,0x6BA4,0x6BA5,0x6BA6,0x6BA7,0x6BA8,0x6BA9,0x6BAB,/* 0x90-0x97 */
- 0x6BAC,0x6BAD,0xF9A5,0x6BAF,0x6BB0,0x6BB1,0x6BB2,0x6BB6,/* 0x98-0x9F */
- 0x6BB8,0x6BB9,0xF970,0x6BBB,0x6BBC,0x6BBD,0x6BBE,0x6BC0,/* 0xA0-0xA7 */
+ 0x6BAC,0x6BAD,0x6BAE,0x6BAF,0x6BB0,0x6BB1,0x6BB2,0x6BB6,/* 0x98-0x9F */
+ 0x6BB8,0x6BB9,0x6BBA,0x6BBB,0x6BBC,0x6BBD,0x6BBE,0x6BC0,/* 0xA0-0xA7 */
0x6BC3,0x6BC4,0x6BC6,0x6BC7,0x6BC8,0x6BC9,0x6BCA,0x6BCC,/* 0xA8-0xAF */
0x6BCE,0x6BD0,0x6BD1,0x6BD8,0x6BDA,0x6BDC,0x6BDD,0x6BDE,/* 0xB0-0xB7 */
0x6BDF,0x6BE0,0x6BE2,0x6BE3,0x6BE4,0x6BE5,0x6BE6,0x6BE7,/* 0xB8-0xBF */
0x6CA8,0x6CAC,0x6CAF,0x6CB0,0x6CB4,0x6CB5,0x6CB6,0x6CB7,/* 0x68-0x6F */
0x6CBA,0x6CC0,0x6CC1,0x6CC2,0x6CC3,0x6CC6,0x6CC7,0x6CC8,/* 0x70-0x77 */
0x6CCB,0x6CCD,0x6CCE,0x6CCF,0x6CD1,0x6CD2,0x6CD8,0x0000,/* 0x78-0x7F */
-
+
0x6CD9,0x6CDA,0x6CDC,0x6CDD,0x6CDF,0x6CE4,0x6CE6,0x6CE7,/* 0x80-0x87 */
0x6CE9,0x6CEC,0x6CED,0x6CF2,0x6CF4,0x6CF9,0x6CFF,0x6D00,/* 0x88-0x8F */
0x6D02,0x6D03,0x6D05,0x6D06,0x6D08,0x6D09,0x6D0A,0x6D0D,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6DCD,0x6DCE,0x6DCF,0x6DD0,0x6DD2,0x6DD3,0x6DD4,0x6DD5,/* 0x40-0x47 */
- 0x6DD7,0xF94D,0x6DDB,0x6DDC,0x6DDF,0x6DE2,0x6DE3,0x6DE5,/* 0x48-0x4F */
- 0x6DE7,0x6DE8,0x6DE9,0xF9D6,0x6DED,0x6DEF,0x6DF0,0x6DF2,/* 0x50-0x57 */
+ 0x6DD7,0x6DDA,0x6DDB,0x6DDC,0x6DDF,0x6DE2,0x6DE3,0x6DE5,/* 0x48-0x4F */
+ 0x6DE7,0x6DE8,0x6DE9,0x6DEA,0x6DED,0x6DEF,0x6DF0,0x6DF2,/* 0x50-0x57 */
0x6DF4,0x6DF5,0x6DF6,0x6DF8,0x6DFA,0x6DFD,0x6DFE,0x6DFF,/* 0x58-0x5F */
0x6E00,0x6E01,0x6E02,0x6E03,0x6E04,0x6E06,0x6E07,0x6E08,/* 0x60-0x67 */
0x6E09,0x6E0B,0x6E0F,0x6E12,0x6E13,0x6E15,0x6E18,0x6E19,/* 0x68-0x6F */
0x6E1B,0x6E1C,0x6E1E,0x6E1F,0x6E22,0x6E26,0x6E27,0x6E28,/* 0x70-0x77 */
0x6E2A,0x6E2C,0x6E2E,0x6E30,0x6E31,0x6E33,0x6E35,0x0000,/* 0x78-0x7F */
-
+
0x6E36,0x6E37,0x6E39,0x6E3B,0x6E3C,0x6E3D,0x6E3E,0x6E3F,/* 0x80-0x87 */
0x6E40,0x6E41,0x6E42,0x6E45,0x6E46,0x6E47,0x6E48,0x6E49,/* 0x88-0x8F */
0x6E4A,0x6E4B,0x6E4C,0x6E4F,0x6E50,0x6E51,0x6E52,0x6E55,/* 0x90-0x97 */
0x6F03,0x6F04,0x6F05,0x6F07,0x6F08,0x6F0A,0x6F0B,0x6F0C,/* 0x50-0x57 */
0x6F0D,0x6F0E,0x6F10,0x6F11,0x6F12,0x6F16,0x6F17,0x6F18,/* 0x58-0x5F */
0x6F19,0x6F1A,0x6F1B,0x6F1C,0x6F1D,0x6F1E,0x6F1F,0x6F21,/* 0x60-0x67 */
- 0x6F22,0xF992,0x6F25,0x6F26,0x6F27,0x6F28,0x6F2C,0x6F2E,/* 0x68-0x6F */
+ 0x6F22,0x6F23,0x6F25,0x6F26,0x6F27,0x6F28,0x6F2C,0x6F2E,/* 0x68-0x6F */
0x6F30,0x6F32,0x6F34,0x6F35,0x6F37,0x6F38,0x6F39,0x6F3A,/* 0x70-0x77 */
0x6F3B,0x6F3C,0x6F3D,0x6F3F,0x6F40,0x6F41,0x6F42,0x0000,/* 0x78-0x7F */
-
+
0x6F43,0x6F44,0x6F45,0x6F48,0x6F49,0x6F4A,0x6F4C,0x6F4E,/* 0x80-0x87 */
0x6F4F,0x6F50,0x6F51,0x6F52,0x6F53,0x6F54,0x6F55,0x6F56,/* 0x88-0x8F */
0x6F57,0x6F59,0x6F5A,0x6F5B,0x6F5D,0x6F5F,0x6F60,0x6F61,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x6FE6,0x6FE7,0x6FE8,0x6FE9,0x6FEA,0xF922,0x6FEC,0x6FED,/* 0x40-0x47 */
+ 0x6FE6,0x6FE7,0x6FE8,0x6FE9,0x6FEA,0x6FEB,0x6FEC,0x6FED,/* 0x40-0x47 */
0x6FF0,0x6FF1,0x6FF2,0x6FF3,0x6FF4,0x6FF5,0x6FF6,0x6FF7,/* 0x48-0x4F */
- 0x6FF8,0x6FF9,0x6FFA,0x6FFB,0x6FFC,0x6FFD,0xF984,0x6FFF,/* 0x50-0x57 */
+ 0x6FF8,0x6FF9,0x6FFA,0x6FFB,0x6FFC,0x6FFD,0x6FFE,0x6FFF,/* 0x50-0x57 */
0x7000,0x7001,0x7002,0x7003,0x7004,0x7005,0x7006,0x7007,/* 0x58-0x5F */
0x7008,0x7009,0x700A,0x700B,0x700C,0x700D,0x700E,0x700F,/* 0x60-0x67 */
0x7010,0x7012,0x7013,0x7014,0x7015,0x7016,0x7017,0x7018,/* 0x68-0x6F */
0x7019,0x701C,0x701D,0x701E,0x701F,0x7020,0x7021,0x7022,/* 0x70-0x77 */
0x7024,0x7025,0x7026,0x7027,0x7028,0x7029,0x702A,0x0000,/* 0x78-0x7F */
-
+
0x702B,0x702C,0x702D,0x702E,0x702F,0x7030,0x7031,0x7032,/* 0x80-0x87 */
0x7033,0x7034,0x7036,0x7037,0x7038,0x703A,0x703B,0x703C,/* 0x88-0x8F */
0x703D,0x703E,0x703F,0x7040,0x7041,0x7042,0x7043,0x7044,/* 0x90-0x97 */
0x7117,0x711B,0x711C,0x711D,0x711E,0x711F,0x7120,0x7121,/* 0x68-0x6F */
0x7122,0x7123,0x7124,0x7125,0x7127,0x7128,0x7129,0x712A,/* 0x70-0x77 */
0x712B,0x712C,0x712D,0x712E,0x7132,0x7133,0x7134,0x0000,/* 0x78-0x7F */
-
+
0x7135,0x7137,0x7138,0x7139,0x713A,0x713B,0x713C,0x713D,/* 0x80-0x87 */
0x713E,0x713F,0x7140,0x7141,0x7142,0x7143,0x7144,0x7146,/* 0x88-0x8F */
- 0x7147,0x7148,0xF993,0x714B,0x714D,0x714F,0x7150,0x7151,/* 0x90-0x97 */
+ 0x7147,0x7148,0x7149,0x714B,0x714D,0x714F,0x7150,0x7151,/* 0x90-0x97 */
0x7152,0x7153,0x7154,0x7155,0x7156,0x7157,0x7158,0x7159,/* 0x98-0x9F */
0x715A,0x715B,0x715D,0x715F,0x7160,0x7161,0x7162,0x7163,/* 0xA0-0xA7 */
0x7165,0x7169,0x716A,0x716B,0x716C,0x716D,0x716F,0x7170,/* 0xA8-0xAF */
0x71B0,0x71B1,0x71B2,0x71B4,0x71B6,0x71B7,0x71B8,0x71BA,/* 0xE0-0xE7 */
0x71BB,0x71BC,0x71BD,0x71BE,0x71BF,0x71C0,0x71C1,0x71C2,/* 0xE8-0xEF */
0x71C4,0x71C5,0x71C6,0x71C7,0x71C8,0x71C9,0x71CA,0x71CB,/* 0xF0-0xF7 */
- 0x71CC,0x71CD,0x71CF,0xF9EE,0x71D1,0x71D2,0x71D3,0x0000,/* 0xF8-0xFF */
+ 0x71CC,0x71CD,0x71CF,0x71D0,0x71D1,0x71D2,0x71D3,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_A0[256] = {
0x71F2,0x71F3,0x71F4,0x71F5,0x71F6,0x71F7,0x71F8,0x71FA,/* 0x58-0x5F */
0x71FB,0x71FC,0x71FD,0x71FE,0x71FF,0x7200,0x7201,0x7202,/* 0x60-0x67 */
0x7203,0x7204,0x7205,0x7207,0x7208,0x7209,0x720A,0x720B,/* 0x68-0x6F */
- 0x720C,0x720D,0x720E,0x720F,0xF932,0x7211,0x7212,0x7213,/* 0x70-0x77 */
+ 0x720C,0x720D,0x720E,0x720F,0x7210,0x7211,0x7212,0x7213,/* 0x70-0x77 */
0x7214,0x7215,0x7216,0x7217,0x7218,0x7219,0x721A,0x0000,/* 0x78-0x7F */
-
- 0xF91E,0x721C,0x721E,0x721F,0x7220,0x7221,0x7222,0x7223,/* 0x80-0x87 */
+
+ 0x721B,0x721C,0x721E,0x721F,0x7220,0x7221,0x7222,0x7223,/* 0x80-0x87 */
0x7224,0x7225,0x7226,0x7227,0x7229,0x722B,0x722D,0x722E,/* 0x88-0x8F */
0x722F,0x7232,0x7233,0x7234,0x723A,0x723C,0x723E,0x7240,/* 0x90-0x97 */
0x7241,0x7242,0x7243,0x7244,0x7245,0x7246,0x7249,0x724A,/* 0x98-0x9F */
0x7298,0x7299,0x729A,0x729B,0x729C,0x729D,0x729E,0x72A0,/* 0xD0-0xD7 */
0x72A1,0x72A2,0x72A3,0x72A4,0x72A5,0x72A6,0x72A7,0x72A8,/* 0xD8-0xDF */
0x72A9,0x72AA,0x72AB,0x72AE,0x72B1,0x72B2,0x72B3,0x72B5,/* 0xE0-0xE7 */
- 0x72BA,0x72BB,0x72BC,0x72BD,0x72BE,0x72BF,0xF9FA,0x72C5,/* 0xE8-0xEF */
+ 0x72BA,0x72BB,0x72BC,0x72BD,0x72BE,0x72BF,0x72C0,0x72C5,/* 0xE8-0xEF */
0x72C6,0x72C7,0x72C9,0x72CA,0x72CB,0x72CC,0x72CF,0x72D1,/* 0xF0-0xF7 */
0x72D3,0x72D4,0x72D5,0x72D6,0x72D8,0x72DA,0x72DB,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x222A,0x2229,0x2208,0x2237,0x221A,0x22A5,0x2225,0x2220,/* 0xC8-0xCF */
0x2312,0x2299,0x222B,0x222E,0x2261,0x224C,0x2248,0x223D,/* 0xD0-0xD7 */
0x221D,0x2260,0x226E,0x226F,0x2264,0x2265,0x221E,0x2235,/* 0xD8-0xDF */
- 0x2234,0x2642,0x2640,0x2218,0x2032,0x2033,0x2103,0xFF04,/* 0xE0-0xE7 */
+ 0x2234,0x2642,0x2640,0x00B0,0x2032,0x2033,0x2103,0xFF04,/* 0xE0-0xE7 */
0x00A4,0xFFE0,0xFFE1,0x2030,0x00A7,0x2116,0x2606,0x2605,/* 0xE8-0xEF */
0x25CB,0x25CF,0x25CE,0x25C7,0x25C6,0x25A1,0x25A0,0x25B3,/* 0xF0-0xF7 */
0x25B2,0x203B,0x2192,0x2190,0x2191,0x2193,0x3013,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x2564,0x2565,0x2566,0x2567,0x2568,0x2569,0x256A,0x256B,/* 0x68-0x6F */
0x256C,0x256D,0x256E,0x256F,0x2570,0x2571,0x2572,0x2573,/* 0x70-0x77 */
0x2581,0x2582,0x2583,0x2584,0x2585,0x2586,0x2587,0x0000,/* 0x78-0x7F */
-
+
0x2588,0x2589,0x258A,0x258B,0x258C,0x258D,0x258E,0x258F,/* 0x80-0x87 */
0x2593,0x2594,0x2595,0x25BC,0x25BD,0x25E2,0x25E3,0x25E4,/* 0x88-0x8F */
0x25E5,0x2609,0x2295,0x3012,0x301D,0x301E,0x0000,0x0000,/* 0x90-0x97 */
0xFE49,0xFE4A,0xFE4B,0xFE4C,0xFE4D,0xFE4E,0xFE4F,0xFE50,/* 0x68-0x6F */
0xFE51,0xFE52,0xFE54,0xFE55,0xFE56,0xFE57,0xFE59,0xFE5A,/* 0x70-0x77 */
0xFE5B,0xFE5C,0xFE5D,0xFE5E,0xFE5F,0xFE60,0xFE61,0x0000,/* 0x78-0x7F */
-
+
0xFE62,0xFE63,0xFE64,0xFE65,0xFE66,0xFE68,0xFE69,0xFE6A,/* 0x80-0x87 */
0xFE6B,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x3007,0x0000,/* 0x90-0x97 */
0x7326,0x7327,0x7328,0x732D,0x732F,0x7330,0x7332,0x7333,/* 0x68-0x6F */
0x7335,0x7336,0x733A,0x733B,0x733C,0x733D,0x7340,0x7341,/* 0x70-0x77 */
0x7342,0x7343,0x7344,0x7345,0x7346,0x7347,0x7348,0x0000,/* 0x78-0x7F */
-
+
0x7349,0x734A,0x734B,0x734C,0x734E,0x734F,0x7351,0x7353,/* 0x80-0x87 */
0x7354,0x7355,0x7356,0x7358,0x7359,0x735A,0x735B,0x735C,/* 0x88-0x8F */
0x735D,0x735E,0x735F,0x7361,0x7362,0x7363,0x7364,0x7365,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x7372,0x7373,0x7374,0xF9A7,0x7376,0x7377,0x7378,0x7379,/* 0x40-0x47 */
+ 0x7372,0x7373,0x7374,0x7375,0x7376,0x7377,0x7378,0x7379,/* 0x40-0x47 */
0x737A,0x737B,0x737C,0x737D,0x737F,0x7380,0x7381,0x7382,/* 0x48-0x4F */
0x7383,0x7385,0x7386,0x7388,0x738A,0x738C,0x738D,0x738F,/* 0x50-0x57 */
0x7390,0x7392,0x7393,0x7394,0x7395,0x7397,0x7398,0x7399,/* 0x58-0x5F */
0x73A5,0x73A6,0x73A7,0x73A8,0x73AA,0x73AC,0x73AD,0x73B1,/* 0x68-0x6F */
0x73B4,0x73B5,0x73B6,0x73B8,0x73B9,0x73BC,0x73BD,0x73BE,/* 0x70-0x77 */
0x73BF,0x73C1,0x73C3,0x73C4,0x73C5,0x73C6,0x73C7,0x0000,/* 0x78-0x7F */
-
+
0x73CB,0x73CC,0x73CE,0x73D2,0x73D3,0x73D4,0x73D5,0x73D6,/* 0x80-0x87 */
0x73D7,0x73D8,0x73DA,0x73DB,0x73DC,0x73DD,0x73DF,0x73E1,/* 0x88-0x8F */
0x73E2,0x73E3,0x73E4,0x73E6,0x73E8,0x73EA,0x73EB,0x73EC,/* 0x90-0x97 */
0x7431,0x7432,0x7437,0x7438,0x7439,0x743A,0x743B,0x743D,/* 0x68-0x6F */
0x743E,0x743F,0x7440,0x7442,0x7443,0x7444,0x7445,0x7446,/* 0x70-0x77 */
0x7447,0x7448,0x7449,0x744A,0x744B,0x744C,0x744D,0x0000,/* 0x78-0x7F */
-
+
0x744E,0x744F,0x7450,0x7451,0x7452,0x7453,0x7454,0x7456,/* 0x80-0x87 */
0x7458,0x745D,0x7460,0x7461,0x7462,0x7463,0x7464,0x7465,/* 0x88-0x8F */
- 0x7466,0x7467,0x7468,0xF9AE,0x746A,0x746B,0x746C,0x746E,/* 0x90-0x97 */
+ 0x7466,0x7467,0x7468,0x7469,0x746A,0x746B,0x746C,0x746E,/* 0x90-0x97 */
0x746F,0x7471,0x7472,0x7473,0x7474,0x7475,0x7478,0x7479,/* 0x98-0x9F */
0x747A,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x747B,0x747C,0x747D,0x747F,0x7482,0x7484,0x7485,0x7486,/* 0x40-0x47 */
- 0x7488,0xF994,0x748A,0x748C,0x748D,0x748F,0x7491,0x7492,/* 0x48-0x4F */
- 0x7493,0x7494,0x7495,0x7496,0x7497,0xF9EF,0x7499,0x749A,/* 0x50-0x57 */
+ 0x7488,0x7489,0x748A,0x748C,0x748D,0x748F,0x7491,0x7492,/* 0x48-0x4F */
+ 0x7493,0x7494,0x7495,0x7496,0x7497,0x7498,0x7499,0x749A,/* 0x50-0x57 */
0x749B,0x749D,0x749F,0x74A0,0x74A1,0x74A2,0x74A3,0x74A4,/* 0x58-0x5F */
0x74A5,0x74A6,0x74AA,0x74AB,0x74AC,0x74AD,0x74AE,0x74AF,/* 0x60-0x67 */
0x74B0,0x74B1,0x74B2,0x74B3,0x74B4,0x74B5,0x74B6,0x74B7,/* 0x68-0x6F */
0x74B8,0x74B9,0x74BB,0x74BC,0x74BD,0x74BE,0x74BF,0x74C0,/* 0x70-0x77 */
0x74C1,0x74C2,0x74C3,0x74C4,0x74C5,0x74C6,0x74C7,0x0000,/* 0x78-0x7F */
-
+
0x74C8,0x74C9,0x74CA,0x74CB,0x74CC,0x74CD,0x74CE,0x74CF,/* 0x80-0x87 */
0x74D0,0x74D1,0x74D3,0x74D4,0x74D5,0x74D6,0x74D7,0x74D8,/* 0x88-0x8F */
0x74D9,0x74DA,0x74DB,0x74DD,0x74DF,0x74E1,0x74E5,0x74E7,/* 0x90-0x97 */
0x7534,0x7536,0x7539,0x753C,0x753D,0x753F,0x7541,0x7542,/* 0x68-0x6F */
0x7543,0x7544,0x7546,0x7547,0x7549,0x754A,0x754D,0x7550,/* 0x70-0x77 */
0x7551,0x7552,0x7553,0x7555,0x7556,0x7557,0x7558,0x0000,/* 0x78-0x7F */
-
+
0x755D,0x755E,0x755F,0x7560,0x7561,0x7562,0x7563,0x7564,/* 0x80-0x87 */
0x7567,0x7568,0x7569,0x756B,0x756C,0x756D,0x756E,0x756F,/* 0x88-0x8F */
- 0xF962,0x7571,0x7573,0x7575,0x7576,0x7577,0x757A,0x757B,/* 0x90-0x97 */
+ 0x7570,0x7571,0x7573,0x7575,0x7576,0x7577,0x757A,0x757B,/* 0x90-0x97 */
0x757C,0x757D,0x757E,0x7580,0x7581,0x7582,0x7584,0x7585,/* 0x98-0x9F */
0x7587,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0x75DF,0x75E0,0x75E1,0x75E5,0x75E9,0x75EC,0x75ED,0x75EE,/* 0x68-0x6F */
0x75EF,0x75F2,0x75F3,0x75F5,0x75F6,0x75F7,0x75F8,0x75FA,/* 0x70-0x77 */
0x75FB,0x75FD,0x75FE,0x7602,0x7604,0x7606,0x7607,0x0000,/* 0x78-0x7F */
-
+
0x7608,0x7609,0x760B,0x760D,0x760E,0x760F,0x7611,0x7612,/* 0x80-0x87 */
0x7613,0x7614,0x7616,0x761A,0x761C,0x761D,0x761E,0x7621,/* 0x88-0x8F */
0x7623,0x7627,0x7628,0x762C,0x762E,0x762F,0x7631,0x7632,/* 0x90-0x97 */
- 0x7636,0x7637,0x7639,0x763A,0x763B,0x763D,0x7641,0xF9C1,/* 0x98-0x9F */
+ 0x7636,0x7637,0x7639,0x763A,0x763B,0x763D,0x7641,0x7642,/* 0x98-0x9F */
0x7644,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0x7645,0x7646,0x7647,0x7648,0x7649,0x764A,0x764B,0x764E,/* 0x40-0x47 */
0x764F,0x7650,0x7651,0x7652,0x7653,0x7655,0x7657,0x7658,/* 0x48-0x4F */
0x7659,0x765A,0x765B,0x765D,0x765F,0x7660,0x7661,0x7662,/* 0x50-0x57 */
- 0x7664,0x7665,0x7666,0x7667,0x7668,0xF90E,0x766A,0x766C,/* 0x58-0x5F */
+ 0x7664,0x7665,0x7666,0x7667,0x7668,0x7669,0x766A,0x766C,/* 0x58-0x5F */
0x766D,0x766E,0x7670,0x7671,0x7672,0x7673,0x7674,0x7675,/* 0x60-0x67 */
0x7676,0x7677,0x7679,0x767A,0x767C,0x767F,0x7680,0x7681,/* 0x68-0x6F */
0x7683,0x7685,0x7689,0x768A,0x768C,0x768D,0x768F,0x7690,/* 0x70-0x77 */
0x7692,0x7694,0x7695,0x7697,0x7698,0x769A,0x769B,0x0000,/* 0x78-0x7F */
-
+
0x769C,0x769D,0x769E,0x769F,0x76A0,0x76A1,0x76A2,0x76A3,/* 0x80-0x87 */
0x76A5,0x76A6,0x76A7,0x76A8,0x76A9,0x76AA,0x76AB,0x76AC,/* 0x88-0x8F */
0x76AD,0x76AF,0x76B0,0x76B3,0x76B5,0x76B6,0x76B7,0x76B8,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x76C4,0x76C7,0x76C9,0x76CB,0x76CC,0x76D3,0x76D5,0x76D9,/* 0x40-0x47 */
0x76DA,0x76DC,0x76DD,0x76DE,0x76E0,0x76E1,0x76E2,0x76E3,/* 0x48-0x4F */
- 0x76E4,0x76E6,0xF933,0x76E8,0x76E9,0x76EA,0x76EB,0x76EC,/* 0x50-0x57 */
+ 0x76E4,0x76E6,0x76E7,0x76E8,0x76E9,0x76EA,0x76EB,0x76EC,/* 0x50-0x57 */
0x76ED,0x76F0,0x76F3,0x76F5,0x76F6,0x76F7,0x76FA,0x76FB,/* 0x58-0x5F */
0x76FD,0x76FF,0x7700,0x7702,0x7703,0x7705,0x7706,0x770A,/* 0x60-0x67 */
0x770C,0x770E,0x770F,0x7710,0x7711,0x7712,0x7713,0x7714,/* 0x68-0x6F */
0x7715,0x7716,0x7717,0x7718,0x771B,0x771C,0x771D,0x771E,/* 0x70-0x77 */
0x7721,0x7723,0x7724,0x7725,0x7727,0x772A,0x772B,0x0000,/* 0x78-0x7F */
-
+
0x772C,0x772E,0x7730,0x7731,0x7732,0x7733,0x7734,0x7739,/* 0x80-0x87 */
0x773B,0x773D,0x773E,0x773F,0x7742,0x7744,0x7745,0x7746,/* 0x88-0x8F */
0x7748,0x7749,0x774A,0x774B,0x774C,0x774D,0x774E,0x774F,/* 0x90-0x97 */
0x7752,0x7753,0x7754,0x7755,0x7756,0x7757,0x7758,0x7759,/* 0x98-0x9F */
0x775C,0x8584,0x96F9,0x4FDD,0x5821,0x9971,0x5B9D,0x62B1,/* 0xA0-0xA7 */
- 0x62A5,0xFA06,0x8C79,0x9C8D,0x7206,0x676F,0x7891,0x60B2,/* 0xA8-0xAF */
- 0x5351,0xF963,0x8F88,0x80CC,0x8D1D,0x94A1,0x500D,0x72C8,/* 0xB0-0xB7 */
+ 0x62A5,0x66B4,0x8C79,0x9C8D,0x7206,0x676F,0x7891,0x60B2,/* 0xA8-0xAF */
+ 0x5351,0x5317,0x8F88,0x80CC,0x8D1D,0x94A1,0x500D,0x72C8,/* 0xB0-0xB7 */
0x5907,0x60EB,0x7119,0x88AB,0x5954,0x82EF,0x672C,0x7B28,/* 0xB8-0xBF */
0x5D29,0x7EF7,0x752D,0x6CF5,0x8E66,0x8FF8,0x903C,0x9F3B,/* 0xC0-0xC7 */
0x6BD4,0x9119,0x7B14,0x5F7C,0x78A7,0x84D6,0x853D,0x6BD5,/* 0xC8-0xCF */
0x6BD9,0x6BD6,0x5E01,0x5E87,0x75F9,0x95ED,0x655D,0x5F0A,/* 0xD0-0xD7 */
0x5FC5,0x8F9F,0x58C1,0x81C2,0x907F,0x965B,0x97AD,0x8FB9,/* 0xD8-0xDF */
- 0x7F16,0x8D2C,0x6241,0xF965,0x53D8,0x535E,0x8FA8,0x8FA9,/* 0xE0-0xE7 */
+ 0x7F16,0x8D2C,0x6241,0x4FBF,0x53D8,0x535E,0x8FA8,0x8FA9,/* 0xE0-0xE7 */
0x8FAB,0x904D,0x6807,0x5F6A,0x8198,0x8868,0x9CD6,0x618B,/* 0xE8-0xEF */
0x522B,0x762A,0x5F6C,0x658C,0x6FD2,0x6EE8,0x5BBE,0x6448,/* 0xF0-0xF7 */
0x5175,0x51B0,0x67C4,0x4E19,0x79C9,0x997C,0x70B3,0x0000,/* 0xF8-0xFF */
0x7799,0x779A,0x779B,0x779C,0x779D,0x779E,0x77A1,0x77A3,/* 0x68-0x6F */
0x77A4,0x77A6,0x77A8,0x77AB,0x77AD,0x77AE,0x77AF,0x77B1,/* 0x70-0x77 */
0x77B2,0x77B4,0x77B6,0x77B7,0x77B8,0x77B9,0x77BA,0x0000,/* 0x78-0x7F */
-
+
0x77BC,0x77BE,0x77C0,0x77C1,0x77C2,0x77C3,0x77C4,0x77C5,/* 0x80-0x87 */
0x77C6,0x77C7,0x77C8,0x77C9,0x77CA,0x77CB,0x77CC,0x77CE,/* 0x88-0x8F */
0x77CF,0x77D0,0x77D1,0x77D2,0x77D3,0x77D4,0x77D5,0x77D6,/* 0x90-0x97 */
0x77E4,0x75C5,0x5E76,0x73BB,0x83E0,0x64AD,0x62E8,0x94B5,/* 0xA0-0xA7 */
0x6CE2,0x535A,0x52C3,0x640F,0x94C2,0x7B94,0x4F2F,0x5E1B,/* 0xA8-0xAF */
0x8236,0x8116,0x818A,0x6E24,0x6CCA,0x9A73,0x6355,0x535C,/* 0xB0-0xB7 */
- 0x54FA,0x8865,0x57E0,0xF967,0x5E03,0x6B65,0x7C3F,0x90E8,/* 0xB8-0xBF */
+ 0x54FA,0x8865,0x57E0,0x4E0D,0x5E03,0x6B65,0x7C3F,0x90E8,/* 0xB8-0xBF */
0x6016,0x64E6,0x731C,0x88C1,0x6750,0x624D,0x8D22,0x776C,/* 0xC0-0xC7 */
0x8E29,0x91C7,0x5F69,0x83DC,0x8521,0x9910,0x53C2,0x8695,/* 0xC8-0xCF */
0x6B8B,0x60ED,0x60E8,0x707F,0x82CD,0x8231,0x4ED3,0x6CA7,/* 0xD0-0xD7 */
0x85CF,0x64CD,0x7CD9,0x69FD,0x66F9,0x8349,0x5395,0x7B56,/* 0xD8-0xDF */
0x4FA7,0x518C,0x6D4B,0x5C42,0x8E6D,0x63D2,0x53C9,0x832C,/* 0xE0-0xE7 */
- 0xF9FE,0x67E5,0x78B4,0x643D,0x5BDF,0x5C94,0x5DEE,0x8BE7,/* 0xE8-0xEF */
+ 0x8336,0x67E5,0x78B4,0x643D,0x5BDF,0x5C94,0x5DEE,0x8BE7,/* 0xE8-0xEF */
0x62C6,0x67F4,0x8C7A,0x6400,0x63BA,0x8749,0x998B,0x8C17,/* 0xF0-0xF7 */
0x7F20,0x94F2,0x4EA7,0x9610,0x98A4,0x660C,0x7316,0x0000,/* 0xF8-0xFF */
};
0x7832,0x7833,0x7835,0x7836,0x783D,0x783F,0x7841,0x7842,/* 0x68-0x6F */
0x7843,0x7844,0x7846,0x7848,0x7849,0x784A,0x784B,0x784D,/* 0x70-0x77 */
0x784F,0x7851,0x7853,0x7854,0x7858,0x7859,0x785A,0x0000,/* 0x78-0x7F */
-
+
0x785B,0x785C,0x785E,0x785F,0x7860,0x7861,0x7862,0x7863,/* 0x80-0x87 */
0x7864,0x7865,0x7866,0x7867,0x7868,0x7869,0x786F,0x7870,/* 0x88-0x8F */
0x7871,0x7872,0x7873,0x7874,0x7875,0x7876,0x7878,0x7879,/* 0x90-0x97 */
0x7883,0x573A,0x5C1D,0x5E38,0x957F,0x507F,0x80A0,0x5382,/* 0xA0-0xA7 */
0x655E,0x7545,0x5531,0x5021,0x8D85,0x6284,0x949E,0x671D,/* 0xA8-0xAF */
0x5632,0x6F6E,0x5DE2,0x5435,0x7092,0x8F66,0x626F,0x64A4,/* 0xB0-0xB7 */
- 0x63A3,0x5F7B,0x6F88,0x90F4,0x81E3,0xF971,0x5C18,0x6668,/* 0xB8-0xBF */
+ 0x63A3,0x5F7B,0x6F88,0x90F4,0x81E3,0x8FB0,0x5C18,0x6668,/* 0xB8-0xBF */
0x5FF1,0x6C89,0x9648,0x8D81,0x886C,0x6491,0x79F0,0x57CE,/* 0xC0-0xC7 */
0x6A59,0x6210,0x5448,0x4E58,0x7A0B,0x60E9,0x6F84,0x8BDA,/* 0xC8-0xCF */
0x627F,0x901E,0x9A8B,0x79E4,0x5403,0x75F4,0x6301,0x5319,/* 0xD0-0xD7 */
0x78C6,0x78C7,0x78C8,0x78CC,0x78CD,0x78CE,0x78CF,0x78D1,/* 0x68-0x6F */
0x78D2,0x78D3,0x78D6,0x78D7,0x78D8,0x78DA,0x78DB,0x78DC,/* 0x70-0x77 */
0x78DD,0x78DE,0x78DF,0x78E0,0x78E1,0x78E2,0x78E3,0x0000,/* 0x78-0x7F */
-
+
0x78E4,0x78E5,0x78E6,0x78E7,0x78E9,0x78EA,0x78EB,0x78ED,/* 0x80-0x87 */
0x78EE,0x78EF,0x78F0,0x78F1,0x78F3,0x78F5,0x78F6,0x78F8,/* 0x88-0x8F */
- 0x78F9,0xF964,0x78FC,0x78FD,0x78FE,0x78FF,0x7900,0x7902,/* 0x90-0x97 */
+ 0x78F9,0x78FB,0x78FC,0x78FD,0x78FE,0x78FF,0x7900,0x7902,/* 0x90-0x97 */
0x7903,0x7904,0x7906,0x7907,0x7908,0x7909,0x790A,0x790B,/* 0x98-0x9F */
0x790C,0x7840,0x50A8,0x77D7,0x6410,0x89E6,0x5904,0x63E3,/* 0xA0-0xA7 */
- 0x5DDD,0x7A7F,0x693D,0x4F20,0x8239,0x5598,0xF905,0x75AE,/* 0xA8-0xAF */
+ 0x5DDD,0x7A7F,0x693D,0x4F20,0x8239,0x5598,0x4E32,0x75AE,/* 0xA8-0xAF */
0x7A97,0x5E62,0x5E8A,0x95EF,0x521B,0x5439,0x708A,0x6376,/* 0xB0-0xB7 */
0x9524,0x5782,0x6625,0x693F,0x9187,0x5507,0x6DF3,0x7EAF,/* 0xB8-0xBF */
0x8822,0x6233,0x7EF0,0x75B5,0x8328,0x78C1,0x96CC,0x8F9E,/* 0xC0-0xC7 */
- 0x6148,0x74F7,0x8BCD,0x6B64,0xF9FF,0x8D50,0x6B21,0x806A,/* 0xC8-0xCF */
+ 0x6148,0x74F7,0x8BCD,0x6B64,0x523A,0x8D50,0x6B21,0x806A,/* 0xC8-0xCF */
0x8471,0x56F1,0x5306,0x4ECE,0x4E1B,0x51D1,0x7C97,0x918B,/* 0xD0-0xD7 */
0x7C07,0x4FC3,0x8E7F,0x7BE1,0x7A9C,0x6467,0x5D14,0x50AC,/* 0xD8-0xDF */
0x8106,0x7601,0x7CB9,0x6DEC,0x7FE0,0x6751,0x5B58,0x5BF8,/* 0xE0-0xE7 */
0x790D,0x790E,0x790F,0x7910,0x7911,0x7912,0x7914,0x7915,/* 0x40-0x47 */
0x7916,0x7917,0x7918,0x7919,0x791A,0x791B,0x791C,0x791D,/* 0x48-0x4F */
0x791F,0x7920,0x7921,0x7922,0x7923,0x7925,0x7926,0x7927,/* 0x50-0x57 */
- 0x7928,0x7929,0xF985,0x792B,0x792C,0x792D,0x792E,0x792F,/* 0x58-0x5F */
+ 0x7928,0x7929,0x792A,0x792B,0x792C,0x792D,0x792E,0x792F,/* 0x58-0x5F */
0x7930,0x7931,0x7932,0x7933,0x7935,0x7936,0x7937,0x7938,/* 0x60-0x67 */
0x7939,0x793D,0x793F,0x7942,0x7943,0x7944,0x7945,0x7947,/* 0x68-0x6F */
0x794A,0x794B,0x794C,0x794D,0x794E,0x794F,0x7950,0x7951,/* 0x70-0x77 */
0x7952,0x7954,0x7955,0x7958,0x7959,0x7961,0x7963,0x0000,/* 0x78-0x7F */
-
+
0x7964,0x7966,0x7969,0x796A,0x796B,0x796C,0x796E,0x7970,/* 0x80-0x87 */
0x7971,0x7972,0x7973,0x7974,0x7975,0x7976,0x7979,0x797B,/* 0x88-0x8F */
- 0x797C,0x797D,0x797E,0xF93C,0x7982,0x7983,0x7986,0x7987,/* 0x90-0x97 */
+ 0x797C,0x797D,0x797E,0x797F,0x7982,0x7983,0x7986,0x7987,/* 0x90-0x97 */
0x7988,0x7989,0x798B,0x798C,0x798D,0x798E,0x7990,0x7991,/* 0x98-0x9F */
- 0x7992,0x6020,0x803D,0x62C5,0xF95E,0x5355,0x90F8,0x63B8,/* 0xA0-0xA7 */
+ 0x7992,0x6020,0x803D,0x62C5,0x4E39,0x5355,0x90F8,0x63B8,/* 0xA0-0xA7 */
0x80C6,0x65E6,0x6C2E,0x4F46,0x60EE,0x6DE1,0x8BDE,0x5F39,/* 0xA8-0xAF */
0x86CB,0x5F53,0x6321,0x515A,0x8361,0x6863,0x5200,0x6363,/* 0xB0-0xB7 */
0x8E48,0x5012,0x5C9B,0x7977,0x5BFC,0x5230,0x7A3B,0x60BC,/* 0xB8-0xBF */
0x7993,0x7994,0x7995,0x7996,0x7997,0x7998,0x7999,0x799B,/* 0x40-0x47 */
0x799C,0x799D,0x799E,0x799F,0x79A0,0x79A1,0x79A2,0x79A3,/* 0x48-0x4F */
0x79A4,0x79A5,0x79A6,0x79A8,0x79A9,0x79AA,0x79AB,0x79AC,/* 0x50-0x57 */
- 0x79AD,0xF9B6,0x79AF,0x79B0,0x79B1,0x79B2,0x79B4,0x79B5,/* 0x58-0x5F */
+ 0x79AD,0x79AE,0x79AF,0x79B0,0x79B1,0x79B2,0x79B4,0x79B5,/* 0x58-0x5F */
0x79B6,0x79B7,0x79B8,0x79BC,0x79BF,0x79C2,0x79C4,0x79C5,/* 0x60-0x67 */
0x79C7,0x79C8,0x79CA,0x79CC,0x79CE,0x79CF,0x79D0,0x79D3,/* 0x68-0x6F */
0x79D4,0x79D6,0x79D7,0x79D9,0x79DA,0x79DB,0x79DC,0x79DD,/* 0x70-0x77 */
0x79DE,0x79E0,0x79E1,0x79E2,0x79E5,0x79E8,0x79EA,0x0000,/* 0x78-0x7F */
-
+
0x79EC,0x79EE,0x79F1,0x79F2,0x79F3,0x79F4,0x79F5,0x79F6,/* 0x80-0x87 */
0x79F7,0x79F9,0x79FA,0x79FC,0x79FE,0x79FF,0x7A01,0x7A04,/* 0x88-0x8F */
0x7A05,0x7A07,0x7A08,0x7A09,0x7A0A,0x7A0C,0x7A0F,0x7A10,/* 0x90-0x97 */
0x7A11,0x7A12,0x7A13,0x7A15,0x7A16,0x7A18,0x7A19,0x7A1B,/* 0x98-0x9F */
- 0xF956,0x4E01,0x76EF,0x53EE,0x9489,0x9876,0x9F0E,0x952D,/* 0xA0-0xA7 */
+ 0x7A1C,0x4E01,0x76EF,0x53EE,0x9489,0x9876,0x9F0E,0x952D,/* 0xA0-0xA7 */
0x5B9A,0x8BA2,0x4E22,0x4E1C,0x51AC,0x8463,0x61C2,0x52A8,/* 0xA8-0xAF */
- 0x680B,0x4F97,0x606B,0x51BB,0xFA05,0x515C,0x6296,0x6597,/* 0xB0-0xB7 */
- 0x9661,0x8C46,0x9017,0x75D8,0xFA26,0x7763,0x6BD2,0x728A,/* 0xB8-0xBF */
+ 0x680B,0x4F97,0x606B,0x51BB,0x6D1E,0x515C,0x6296,0x6597,/* 0xB0-0xB7 */
+ 0x9661,0x8C46,0x9017,0x75D8,0x90FD,0x7763,0x6BD2,0x728A,/* 0xB8-0xBF */
0x72EC,0x8BFB,0x5835,0x7779,0x8D4C,0x675C,0x9540,0x809A,/* 0xC0-0xC7 */
- 0xFA01,0x6E21,0x5992,0x7AEF,0x77ED,0x953B,0x6BB5,0x65AD,/* 0xC8-0xCF */
+ 0x5EA6,0x6E21,0x5992,0x7AEF,0x77ED,0x953B,0x6BB5,0x65AD,/* 0xC8-0xCF */
0x7F0E,0x5806,0x5151,0x961F,0x5BF9,0x58A9,0x5428,0x8E72,/* 0xD0-0xD7 */
0x6566,0x987F,0x56E4,0x949D,0x76FE,0x9041,0x6387,0x54C6,/* 0xD8-0xDF */
0x591A,0x593A,0x579B,0x8EB2,0x6735,0x8DFA,0x8235,0x5241,/* 0xE0-0xE7 */
0x7A50,0x7A52,0x7A53,0x7A54,0x7A55,0x7A56,0x7A58,0x7A59,/* 0x68-0x6F */
0x7A5A,0x7A5B,0x7A5C,0x7A5D,0x7A5E,0x7A5F,0x7A60,0x7A61,/* 0x70-0x77 */
0x7A62,0x7A63,0x7A64,0x7A65,0x7A66,0x7A67,0x7A68,0x0000,/* 0x78-0x7F */
-
+
0x7A69,0x7A6A,0x7A6B,0x7A6C,0x7A6D,0x7A6E,0x7A6F,0x7A71,/* 0x80-0x87 */
0x7A72,0x7A73,0x7A75,0x7A7B,0x7A7C,0x7A7D,0x7A7E,0x7A82,/* 0x88-0x8F */
0x7A85,0x7A87,0x7A89,0x7A8A,0x7A8B,0x7A8C,0x7A8E,0x7A8F,/* 0x90-0x97 */
0x7AD3,0x7AD4,0x7AD5,0x7AD7,0x7AD8,0x7ADA,0x7ADB,0x7ADC,/* 0x68-0x6F */
0x7ADD,0x7AE1,0x7AE2,0x7AE4,0x7AE7,0x7AE8,0x7AE9,0x7AEA,/* 0x70-0x77 */
0x7AEB,0x7AEC,0x7AEE,0x7AF0,0x7AF1,0x7AF2,0x7AF3,0x0000,/* 0x78-0x7F */
-
+
0x7AF4,0x7AF5,0x7AF6,0x7AF7,0x7AF8,0x7AFB,0x7AFC,0x7AFE,/* 0x80-0x87 */
0x7B00,0x7B01,0x7B02,0x7B05,0x7B07,0x7B09,0x7B0C,0x7B0D,/* 0x88-0x8F */
0x7B0E,0x7B10,0x7B12,0x7B13,0x7B16,0x7B17,0x7B18,0x7B1A,/* 0x90-0x97 */
0x7B1C,0x7B1D,0x7B1F,0x7B21,0x7B22,0x7B23,0x7B27,0x7B29,/* 0x98-0x9F */
- 0x7B2D,0x6D6E,0x6DAA,0xFA1B,0x88B1,0x5F17,0x752B,0x629A,/* 0xA0-0xA7 */
+ 0x7B2D,0x6D6E,0x6DAA,0x798F,0x88B1,0x5F17,0x752B,0x629A,/* 0xA0-0xA7 */
0x8F85,0x4FEF,0x91DC,0x65A7,0x812F,0x8151,0x5E9C,0x8150,/* 0xA8-0xAF */
0x8D74,0x526F,0x8986,0x8D4B,0x590D,0x5085,0x4ED8,0x961C,/* 0xB0-0xB7 */
0x7236,0x8179,0x8D1F,0x5BCC,0x8BA3,0x9644,0x5987,0x7F1A,/* 0xB8-0xBF */
0x818F,0x7F94,0x7CD5,0x641E,0x9550,0x7A3F,0x544A,0x54E5,/* 0xE0-0xE7 */
0x6B4C,0x6401,0x6208,0x9E3D,0x80F3,0x7599,0x5272,0x9769,/* 0xE8-0xEF */
0x845B,0x683C,0x86E4,0x9601,0x9694,0x94EC,0x4E2A,0x5404,/* 0xF0-0xF7 */
- 0x7ED9,0x6839,0x8DDF,0x8015,0xF901,0x5E9A,0x7FB9,0x0000,/* 0xF8-0xFF */
+ 0x7ED9,0x6839,0x8DDF,0x8015,0x66F4,0x5E9A,0x7FB9,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_B9[256] = {
0x7B6F,0x7B70,0x7B73,0x7B74,0x7B76,0x7B78,0x7B7A,0x7B7C,/* 0x68-0x6F */
0x7B7D,0x7B7F,0x7B81,0x7B82,0x7B83,0x7B84,0x7B86,0x7B87,/* 0x70-0x77 */
0x7B88,0x7B89,0x7B8A,0x7B8B,0x7B8C,0x7B8E,0x7B8F,0x0000,/* 0x78-0x7F */
-
+
0x7B91,0x7B92,0x7B93,0x7B96,0x7B98,0x7B99,0x7B9A,0x7B9B,/* 0x80-0x87 */
0x7B9E,0x7B9F,0x7BA0,0x7BA3,0x7BA4,0x7BA5,0x7BAE,0x7BAF,/* 0x88-0x8F */
0x7BB0,0x7BB2,0x7BB3,0x7BB5,0x7BB6,0x7BB7,0x7BB9,0x7BBA,/* 0x90-0x97 */
0x7BFD,0x7BFF,0x7C00,0x7C01,0x7C02,0x7C03,0x7C04,0x7C05,/* 0x68-0x6F */
0x7C06,0x7C08,0x7C09,0x7C0A,0x7C0D,0x7C0E,0x7C10,0x7C11,/* 0x70-0x77 */
0x7C12,0x7C13,0x7C14,0x7C15,0x7C17,0x7C18,0x7C19,0x0000,/* 0x78-0x7F */
-
+
0x7C1A,0x7C1B,0x7C1C,0x7C1D,0x7C1E,0x7C20,0x7C21,0x7C22,/* 0x80-0x87 */
0x7C23,0x7C24,0x7C25,0x7C28,0x7C29,0x7C2B,0x7C2C,0x7C2D,/* 0x88-0x8F */
0x7C2E,0x7C2F,0x7C30,0x7C31,0x7C32,0x7C33,0x7C34,0x7C35,/* 0x90-0x97 */
- 0x7C36,0x7C37,0x7C39,0x7C3A,0x7C3B,0x7C3C,0x7C3D,0xF9A6,/* 0x98-0x9F */
+ 0x7C36,0x7C37,0x7C39,0x7C3A,0x7C3B,0x7C3C,0x7C3D,0x7C3E,/* 0x98-0x9F */
0x7C42,0x9AB8,0x5B69,0x6D77,0x6C26,0x4EA5,0x5BB3,0x9A87,/* 0xA0-0xA7 */
0x9163,0x61A8,0x90AF,0x97E9,0x542B,0x6DB5,0x5BD2,0x51FD,/* 0xA8-0xAF */
0x558A,0x7F55,0x7FF0,0x64BC,0x634D,0x65F1,0x61BE,0x608D,/* 0xB0-0xB7 */
0x7C43,0x7C44,0x7C45,0x7C46,0x7C47,0x7C48,0x7C49,0x7C4A,/* 0x40-0x47 */
0x7C4B,0x7C4C,0x7C4E,0x7C4F,0x7C50,0x7C51,0x7C52,0x7C53,/* 0x48-0x4F */
0x7C54,0x7C55,0x7C56,0x7C57,0x7C58,0x7C59,0x7C5A,0x7C5B,/* 0x50-0x57 */
- 0x7C5C,0x7C5D,0x7C5E,0x7C5F,0xF944,0x7C61,0x7C62,0x7C63,/* 0x58-0x5F */
+ 0x7C5C,0x7C5D,0x7C5E,0x7C5F,0x7C60,0x7C61,0x7C62,0x7C63,/* 0x58-0x5F */
0x7C64,0x7C65,0x7C66,0x7C67,0x7C68,0x7C69,0x7C6A,0x7C6B,/* 0x60-0x67 */
0x7C6C,0x7C6D,0x7C6E,0x7C6F,0x7C70,0x7C71,0x7C72,0x7C75,/* 0x68-0x6F */
0x7C76,0x7C77,0x7C78,0x7C79,0x7C7A,0x7C7E,0x7C7F,0x7C80,/* 0x70-0x77 */
0x7C81,0x7C82,0x7C83,0x7C84,0x7C85,0x7C86,0x7C87,0x0000,/* 0x78-0x7F */
-
+
0x7C88,0x7C8A,0x7C8B,0x7C8C,0x7C8D,0x7C8E,0x7C8F,0x7C90,/* 0x80-0x87 */
0x7C93,0x7C94,0x7C96,0x7C99,0x7C9A,0x7C9B,0x7CA0,0x7CA1,/* 0x88-0x8F */
0x7CA3,0x7CA6,0x7CA7,0x7CA8,0x7CA9,0x7CAB,0x7CAC,0x7CAD,/* 0x90-0x97 */
0x7CAF,0x7CB0,0x7CB4,0x7CB5,0x7CB6,0x7CB7,0x7CB8,0x7CBA,/* 0x98-0x9F */
0x7CBB,0x5F27,0x864E,0x552C,0x62A4,0x4E92,0x6CAA,0x6237,/* 0xA0-0xA7 */
- 0x82B1,0x54D7,0x534E,0x733E,0xF904,0x753B,0x5212,0x5316,/* 0xA8-0xAF */
+ 0x82B1,0x54D7,0x534E,0x733E,0x6ED1,0x753B,0x5212,0x5316,/* 0xA8-0xAF */
0x8BDD,0x69D0,0x5F8A,0x6000,0x6DEE,0x574F,0x6B22,0x73AF,/* 0xB0-0xB7 */
0x6853,0x8FD8,0x7F13,0x6362,0x60A3,0x5524,0x75EA,0x8C62,/* 0xB8-0xBF */
0x7115,0x6DA3,0x5BA6,0x5E7B,0x8352,0x614C,0x9EC4,0x78FA,/* 0xC0-0xC7 */
0x7CBF,0x7CC0,0x7CC2,0x7CC3,0x7CC4,0x7CC6,0x7CC9,0x7CCB,/* 0x40-0x47 */
0x7CCE,0x7CCF,0x7CD0,0x7CD1,0x7CD2,0x7CD3,0x7CD4,0x7CD8,/* 0x48-0x4F */
0x7CDA,0x7CDB,0x7CDD,0x7CDE,0x7CE1,0x7CE2,0x7CE3,0x7CE4,/* 0x50-0x57 */
- 0x7CE5,0x7CE6,0xF97B,0x7CE9,0x7CEA,0x7CEB,0x7CEC,0x7CED,/* 0x58-0x5F */
+ 0x7CE5,0x7CE6,0x7CE7,0x7CE9,0x7CEA,0x7CEB,0x7CEC,0x7CED,/* 0x58-0x5F */
0x7CEE,0x7CF0,0x7CF1,0x7CF2,0x7CF3,0x7CF4,0x7CF5,0x7CF6,/* 0x60-0x67 */
0x7CF7,0x7CF9,0x7CFA,0x7CFC,0x7CFD,0x7CFE,0x7CFF,0x7D00,/* 0x68-0x6F */
0x7D01,0x7D02,0x7D03,0x7D04,0x7D05,0x7D06,0x7D07,0x7D08,/* 0x70-0x77 */
- 0x7D09,0x7D0B,0x7D0C,0x7D0D,0x7D0E,0x7D0F,0xF9CF,0x0000,/* 0x78-0x7F */
-
+ 0x7D09,0x7D0B,0x7D0C,0x7D0D,0x7D0E,0x7D0F,0x7D10,0x0000,/* 0x78-0x7F */
+
0x7D11,0x7D12,0x7D13,0x7D14,0x7D15,0x7D16,0x7D17,0x7D18,/* 0x80-0x87 */
0x7D19,0x7D1A,0x7D1B,0x7D1C,0x7D1D,0x7D1E,0x7D1F,0x7D21,/* 0x88-0x8F */
0x7D23,0x7D24,0x7D25,0x7D26,0x7D28,0x7D29,0x7D2A,0x7D2C,/* 0x90-0x97 */
0x7D5F,0x7D60,0x7D61,0x7D62,0x7D63,0x7D64,0x7D65,0x7D66,/* 0x68-0x6F */
0x7D67,0x7D68,0x7D69,0x7D6A,0x7D6B,0x7D6C,0x7D6D,0x7D6F,/* 0x70-0x77 */
0x7D70,0x7D71,0x7D72,0x7D73,0x7D74,0x7D75,0x7D76,0x0000,/* 0x78-0x7F */
-
+
0x7D78,0x7D79,0x7D7A,0x7D7B,0x7D7C,0x7D7D,0x7D7E,0x7D7F,/* 0x80-0x87 */
0x7D80,0x7D81,0x7D82,0x7D83,0x7D84,0x7D85,0x7D86,0x7D87,/* 0x88-0x8F */
0x7D88,0x7D89,0x7D8A,0x7D8B,0x7D8C,0x7D8D,0x7D8E,0x7D8F,/* 0x90-0x97 */
0x7D90,0x7D91,0x7D92,0x7D93,0x7D94,0x7D95,0x7D96,0x7D97,/* 0x98-0x9F */
0x7D98,0x5065,0x8230,0x5251,0x996F,0x6E10,0x6E85,0x6DA7,/* 0xA0-0xA7 */
0x5EFA,0x50F5,0x59DC,0x5C06,0x6D46,0x6C5F,0x7586,0x848B,/* 0xA8-0xAF */
- 0x6868,0x5956,0x8BB2,0x5320,0x9171,0xFA09,0x8549,0x6912,/* 0xB0-0xB7 */
+ 0x6868,0x5956,0x8BB2,0x5320,0x9171,0x964D,0x8549,0x6912,/* 0xB0-0xB7 */
0x7901,0x7126,0x80F6,0x4EA4,0x90CA,0x6D47,0x9A84,0x5A07,/* 0xB8-0xBF */
0x56BC,0x6405,0x94F0,0x77EB,0x4FA5,0x811A,0x72E1,0x89D2,/* 0xC0-0xC7 */
0x997A,0x7F34,0x7EDE,0x527F,0x6559,0x9175,0x8F7F,0x8F83,/* 0xC8-0xCF */
0x622A,0x52AB,0x8282,0x6854,0x6770,0x6377,0x776B,0x7AED,/* 0xD8-0xDF */
0x6D01,0x7ED3,0x89E3,0x59D0,0x6212,0x85C9,0x82A5,0x754C,/* 0xE0-0xE7 */
0x501F,0x4ECB,0x75A5,0x8BEB,0x5C4A,0x5DFE,0x7B4B,0x65A4,/* 0xE8-0xEF */
- 0xF90A,0x4ECA,0x6D25,0x895F,0x7D27,0x9526,0x4EC5,0x8C28,/* 0xF0-0xF7 */
+ 0x91D1,0x4ECA,0x6D25,0x895F,0x7D27,0x9526,0x4EC5,0x8C28,/* 0xF0-0xF7 */
0x8FDB,0x9773,0x664B,0x7981,0x8FD1,0x70EC,0x6D78,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x7D99,0x7D9A,0x7D9B,0x7D9C,0x7D9D,0x7D9E,0x7D9F,0xF93D,/* 0x40-0x47 */
+ 0x7D99,0x7D9A,0x7D9B,0x7D9C,0x7D9D,0x7D9E,0x7D9F,0x7DA0,/* 0x40-0x47 */
0x7DA1,0x7DA2,0x7DA3,0x7DA4,0x7DA5,0x7DA7,0x7DA8,0x7DA9,/* 0x48-0x4F */
0x7DAA,0x7DAB,0x7DAC,0x7DAD,0x7DAF,0x7DB0,0x7DB1,0x7DB2,/* 0x50-0x57 */
0x7DB3,0x7DB4,0x7DB5,0x7DB6,0x7DB7,0x7DB8,0x7DB9,0x7DBA,/* 0x58-0x5F */
- 0x7DBB,0x7DBC,0x7DBD,0xF957,0x7DBF,0x7DC0,0x7DC1,0x7DC2,/* 0x60-0x67 */
+ 0x7DBB,0x7DBC,0x7DBD,0x7DBE,0x7DBF,0x7DC0,0x7DC1,0x7DC2,/* 0x60-0x67 */
0x7DC3,0x7DC4,0x7DC5,0x7DC6,0x7DC7,0x7DC8,0x7DC9,0x7DCA,/* 0x68-0x6F */
0x7DCB,0x7DCC,0x7DCD,0x7DCE,0x7DCF,0x7DD0,0x7DD1,0x7DD2,/* 0x70-0x77 */
0x7DD3,0x7DD4,0x7DD5,0x7DD6,0x7DD7,0x7DD8,0x7DD9,0x0000,/* 0x78-0x7F */
-
+
0x7DDA,0x7DDB,0x7DDC,0x7DDD,0x7DDE,0x7DDF,0x7DE0,0x7DE1,/* 0x80-0x87 */
0x7DE2,0x7DE3,0x7DE4,0x7DE5,0x7DE6,0x7DE7,0x7DE8,0x7DE9,/* 0x88-0x8F */
0x7DEA,0x7DEB,0x7DEC,0x7DED,0x7DEE,0x7DEF,0x7DF0,0x7DF1,/* 0x90-0x97 */
- 0x7DF2,0x7DF3,0xF996,0x7DF5,0x7DF6,0x7DF7,0x7DF8,0x7DF9,/* 0x98-0x9F */
+ 0x7DF2,0x7DF3,0x7DF4,0x7DF5,0x7DF6,0x7DF7,0x7DF8,0x7DF9,/* 0x98-0x9F */
0x7DFA,0x5C3D,0x52B2,0x8346,0x5162,0x830E,0x775B,0x6676,/* 0xA0-0xA7 */
- 0x9CB8,0x4EAC,0x60CA,0xFA1D,0x7CB3,0x7ECF,0x4E95,0x8B66,/* 0xA8-0xAF */
+ 0x9CB8,0x4EAC,0x60CA,0x7CBE,0x7CB3,0x7ECF,0x4E95,0x8B66,/* 0xA8-0xAF */
0x666F,0x9888,0x9759,0x5883,0x656C,0x955C,0x5F84,0x75C9,/* 0xB0-0xB7 */
- 0xFA1C,0x7ADF,0x7ADE,0x51C0,0x70AF,0x7A98,0x63EA,0x7A76,/* 0xB8-0xBF */
+ 0x9756,0x7ADF,0x7ADE,0x51C0,0x70AF,0x7A98,0x63EA,0x7A76,/* 0xB8-0xBF */
0x7EA0,0x7396,0x97ED,0x4E45,0x7078,0x4E5D,0x9152,0x53A9,/* 0xC0-0xC7 */
0x6551,0x65E7,0x81FC,0x8205,0x548E,0x5C31,0x759A,0x97A0,/* 0xC8-0xCF */
0x62D8,0x72D9,0x75BD,0x5C45,0x9A79,0x83CA,0x5C40,0x5480,/* 0xD0-0xD7 */
0x77E9,0x4E3E,0x6CAE,0x805A,0x62D2,0x636E,0x5DE8,0x5177,/* 0xD8-0xDF */
- 0x8DDD,0x8E1E,0x952F,0x4FF1,0xF906,0x60E7,0x70AC,0x5267,/* 0xE0-0xE7 */
+ 0x8DDD,0x8E1E,0x952F,0x4FF1,0x53E5,0x60E7,0x70AC,0x5267,/* 0xE0-0xE7 */
0x6350,0x9E43,0x5A1F,0x5026,0x7737,0x5377,0x7EE2,0x6485,/* 0xE8-0xEF */
0x652B,0x6289,0x6398,0x5014,0x7235,0x89C9,0x51B3,0x8BC0,/* 0xF0-0xF7 */
0x7EDD,0x5747,0x83CC,0x94A7,0x519B,0x541B,0x5CFB,0x0000,/* 0xF8-0xFF */
0x7E1B,0x7E1C,0x7E1D,0x7E1E,0x7E1F,0x7E20,0x7E21,0x7E22,/* 0x60-0x67 */
0x7E23,0x7E24,0x7E25,0x7E26,0x7E27,0x7E28,0x7E29,0x7E2A,/* 0x68-0x6F */
0x7E2B,0x7E2C,0x7E2D,0x7E2E,0x7E2F,0x7E30,0x7E31,0x7E32,/* 0x70-0x77 */
- 0x7E33,0x7E34,0x7E35,0x7E36,0xF950,0x7E38,0x7E39,0x0000,/* 0x78-0x7F */
-
+ 0x7E33,0x7E34,0x7E35,0x7E36,0x7E37,0x7E38,0x7E39,0x0000,/* 0x78-0x7F */
+
0x7E3A,0x7E3C,0x7E3D,0x7E3E,0x7E3F,0x7E40,0x7E42,0x7E43,/* 0x80-0x87 */
0x7E44,0x7E45,0x7E46,0x7E48,0x7E49,0x7E4A,0x7E4B,0x7E4C,/* 0x88-0x8F */
0x7E4D,0x7E4E,0x7E4F,0x7E50,0x7E51,0x7E52,0x7E53,0x7E54,/* 0x90-0x97 */
0x7E87,0x7E88,0x7E89,0x7E8A,0x7E8B,0x7E8C,0x7E8D,0x7E8E,/* 0x68-0x6F */
0x7E8F,0x7E90,0x7E91,0x7E92,0x7E93,0x7E94,0x7E95,0x7E96,/* 0x70-0x77 */
0x7E97,0x7E98,0x7E99,0x7E9A,0x7E9C,0x7E9D,0x7E9E,0x0000,/* 0x78-0x7F */
-
+
0x7EAE,0x7EB4,0x7EBB,0x7EBC,0x7ED6,0x7EE4,0x7EEC,0x7EF9,/* 0x80-0x87 */
0x7F0A,0x7F10,0x7F1E,0x7F37,0x7F39,0x7F3B,0x7F3C,0x7F3D,/* 0x88-0x8F */
0x7F3E,0x7F3F,0x7F40,0x7F41,0x7F43,0x7F46,0x7F47,0x7F48,/* 0x90-0x97 */
0x7F49,0x7F4A,0x7F4B,0x7F4C,0x7F4D,0x7F4E,0x7F4F,0x7F52,/* 0x98-0x9F */
0x7F53,0x9988,0x6127,0x6E83,0x5764,0x6606,0x6346,0x56F0,/* 0xA0-0xA7 */
- 0x62EC,0x6269,0xFA0B,0x9614,0x5783,0xF925,0xF90B,0x8721,/* 0xA8-0xAF */
+ 0x62EC,0x6269,0x5ED3,0x9614,0x5783,0x62C9,0x5587,0x8721,/* 0xA8-0xAF */
0x814A,0x8FA3,0x5566,0x83B1,0x6765,0x8D56,0x84DD,0x5A6A,/* 0xB0-0xB7 */
0x680F,0x62E6,0x7BEE,0x9611,0x5170,0x6F9C,0x8C30,0x63FD,/* 0xB8-0xBF */
- 0x89C8,0x61D2,0x7F06,0x70C2,0x6EE5,0x7405,0x6994,0xF92B,/* 0xC0-0xC7 */
- 0xF928,0x90CE,0xF929,0xF92A,0x635E,0x52B3,0xF946,0xF934,/* 0xC8-0xCF */
- 0x4F6C,0x59E5,0xF919,0xF916,0x6D9D,0xF952,0x4E50,0xF949,/* 0xD0-0xD7 */
- 0x956D,0x857E,0xF947,0xF94F,0x5121,0x5792,0x64C2,0xF953,/* 0xD8-0xDF */
- 0x7C7B,0x6CEA,0x68F1,0x695E,0xF92E,0x5398,0xF9E2,0x7281,/* 0xE0-0xE7 */
- 0xF989,0x7BF1,0x72F8,0x79BB,0x6F13,0xF9E4,0xF9E1,0xF9E9,/* 0xE8-0xEF */
- 0x9CA4,0x793C,0x8389,0x8354,0xF9DE,0xF9DA,0x4E3D,0x5389,/* 0xF0-0xF7 */
- 0x52B1,0x783E,0x5386,0xF9DD,0x5088,0xF9B5,0x4FD0,0x0000,/* 0xF8-0xFF */
+ 0x89C8,0x61D2,0x7F06,0x70C2,0x6EE5,0x7405,0x6994,0x72FC,/* 0xC0-0xC7 */
+ 0x5ECA,0x90CE,0x6717,0x6D6A,0x635E,0x52B3,0x7262,0x8001,/* 0xC8-0xCF */
+ 0x4F6C,0x59E5,0x916A,0x70D9,0x6D9D,0x52D2,0x4E50,0x96F7,/* 0xD0-0xD7 */
+ 0x956D,0x857E,0x78CA,0x7D2F,0x5121,0x5792,0x64C2,0x808B,/* 0xD8-0xDF */
+ 0x7C7B,0x6CEA,0x68F1,0x695E,0x51B7,0x5398,0x68A8,0x7281,/* 0xE0-0xE7 */
+ 0x9ECE,0x7BF1,0x72F8,0x79BB,0x6F13,0x7406,0x674E,0x91CC,/* 0xE8-0xEF */
+ 0x9CA4,0x793C,0x8389,0x8354,0x540F,0x6817,0x4E3D,0x5389,/* 0xF0-0xF7 */
+ 0x52B1,0x783E,0x5386,0x5229,0x5088,0x4F8B,0x4FD0,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C1[256] = {
0x7F56,0x7F59,0x7F5B,0x7F5C,0x7F5D,0x7F5E,0x7F60,0x7F63,/* 0x40-0x47 */
0x7F64,0x7F65,0x7F66,0x7F67,0x7F6B,0x7F6C,0x7F6D,0x7F6F,/* 0x48-0x4F */
0x7F70,0x7F73,0x7F75,0x7F76,0x7F77,0x7F78,0x7F7A,0x7F7B,/* 0x50-0x57 */
- 0x7F7C,0x7F7D,0x7F7F,0x7F80,0x7F82,0x7F83,0x7F84,0xF90F,/* 0x58-0x5F */
+ 0x7F7C,0x7F7D,0x7F7F,0x7F80,0x7F82,0x7F83,0x7F84,0x7F85,/* 0x58-0x5F */
0x7F86,0x7F87,0x7F88,0x7F89,0x7F8B,0x7F8D,0x7F8F,0x7F90,/* 0x60-0x67 */
0x7F91,0x7F92,0x7F93,0x7F95,0x7F96,0x7F97,0x7F98,0x7F99,/* 0x68-0x6F */
0x7F9B,0x7F9C,0x7FA0,0x7FA2,0x7FA3,0x7FA5,0x7FA6,0x7FA8,/* 0x70-0x77 */
0x7FA9,0x7FAA,0x7FAB,0x7FAC,0x7FAD,0x7FAE,0x7FB1,0x0000,/* 0x78-0x7F */
-
+
0x7FB3,0x7FB4,0x7FB5,0x7FB6,0x7FB7,0x7FBA,0x7FBB,0x7FBE,/* 0x80-0x87 */
0x7FC0,0x7FC2,0x7FC3,0x7FC4,0x7FC6,0x7FC7,0x7FC8,0x7FC9,/* 0x88-0x8F */
0x7FCB,0x7FCD,0x7FCF,0x7FD0,0x7FD1,0x7FD2,0x7FD3,0x7FD6,/* 0x90-0x97 */
0x7FD7,0x7FD9,0x7FDA,0x7FDB,0x7FDC,0x7FDD,0x7FDE,0x7FE2,/* 0x98-0x9F */
- 0x7FE3,0xF9E5,0xF9F7,0xF9F9,0x6CA5,0x96B6,0xF98A,0x7483,/* 0xA0-0xA7 */
- 0x54E9,0x4FE9,0x8054,0x83B2,0x8FDE,0x9570,0xF9A2,0xF9AC,/* 0xA8-0xAF */
+ 0x7FE3,0x75E2,0x7ACB,0x7C92,0x6CA5,0x96B6,0x529B,0x7483,/* 0xA0-0xA7 */
+ 0x54E9,0x4FE9,0x8054,0x83B2,0x8FDE,0x9570,0x5EC9,0x601C,/* 0xA8-0xAF */
0x6D9F,0x5E18,0x655B,0x8138,0x94FE,0x604B,0x70BC,0x7EC3,/* 0xB0-0xB7 */
- 0x7CAE,0x51C9,0xF97A,0x7CB1,0xF97C,0x4E24,0x8F86,0xF97E,/* 0xB8-0xBF */
- 0x667E,0xF977,0x8C05,0x64A9,0x804A,0xF9BB,0x7597,0xF9C0,/* 0xC0-0xC7 */
- 0x5BE5,0x8FBD,0x6F66,0xF9BA,0x6482,0x9563,0x5ED6,0xF9BE,/* 0xC8-0xCF */
- 0xF99C,0xF9A0,0xF99F,0xF99D,0x730E,0x7433,0xF9F4,0x78F7,/* 0xD0-0xD7 */
- 0x9716,0x4E34,0x90BB,0x9CDE,0xF9F5,0x51DB,0x8D41,0xF9ED,/* 0xD8-0xDF */
- 0x62CE,0xF9AD,0xF958,0xF9B2,0x9F84,0x94C3,0x4F36,0xF9AF,/* 0xE0-0xE7 */
- 0xF955,0x7075,0xF959,0x5CAD,0x9886,0x53E6,0xF9A8,0xF9CB,/* 0xE8-0xEF */
- 0xF9CC,0x69B4,0xF9CE,0x998F,0xF9CD,0x5218,0x7624,0xF9CA,/* 0xF0-0xF7 */
- 0xF9C9,0xF9D1,0x9F99,0x804B,0x5499,0x7B3C,0x7ABF,0x0000,/* 0xF8-0xFF */
+ 0x7CAE,0x51C9,0x6881,0x7CB1,0x826F,0x4E24,0x8F86,0x91CF,/* 0xB8-0xBF */
+ 0x667E,0x4EAE,0x8C05,0x64A9,0x804A,0x50DA,0x7597,0x71CE,/* 0xC0-0xC7 */
+ 0x5BE5,0x8FBD,0x6F66,0x4E86,0x6482,0x9563,0x5ED6,0x6599,/* 0xC8-0xCF */
+ 0x5217,0x88C2,0x70C8,0x52A3,0x730E,0x7433,0x6797,0x78F7,/* 0xD0-0xD7 */
+ 0x9716,0x4E34,0x90BB,0x9CDE,0x6DCB,0x51DB,0x8D41,0x541D,/* 0xD8-0xDF */
+ 0x62CE,0x73B2,0x83F1,0x96F6,0x9F84,0x94C3,0x4F36,0x7F9A,/* 0xE0-0xE7 */
+ 0x51CC,0x7075,0x9675,0x5CAD,0x9886,0x53E6,0x4EE4,0x6E9C,/* 0xE8-0xEF */
+ 0x7409,0x69B4,0x786B,0x998F,0x7559,0x5218,0x7624,0x6D41,/* 0xF0-0xF7 */
+ 0x67F3,0x516D,0x9F99,0x804B,0x5499,0x7B3C,0x7ABF,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C2[256] = {
0x802F,0x8030,0x8032,0x8034,0x8039,0x803A,0x803C,0x803E,/* 0x68-0x6F */
0x8040,0x8041,0x8044,0x8045,0x8047,0x8048,0x8049,0x804E,/* 0x70-0x77 */
0x804F,0x8050,0x8051,0x8053,0x8055,0x8056,0x8057,0x0000,/* 0x78-0x7F */
-
+
0x8059,0x805B,0x805C,0x805D,0x805E,0x805F,0x8060,0x8061,/* 0x80-0x87 */
0x8062,0x8063,0x8064,0x8065,0x8066,0x8067,0x8068,0x806B,/* 0x88-0x8F */
- 0x806C,0x806D,0x806E,0xF997,0x8070,0x8072,0x8073,0x8074,/* 0x90-0x97 */
+ 0x806C,0x806D,0x806E,0x806F,0x8070,0x8072,0x8073,0x8074,/* 0x90-0x97 */
0x8075,0x8076,0x8077,0x8078,0x8079,0x807A,0x807B,0x807C,/* 0x98-0x9F */
- 0x807D,0xF9DC,0x5784,0x62E2,0x9647,0x697C,0x5A04,0x6402,/* 0xA0-0xA7 */
- 0x7BD3,0xF94E,0xF951,0x82A6,0x5362,0x9885,0x5E90,0x7089,/* 0xA8-0xAF */
- 0x63B3,0x5364,0x864F,0x9C81,0x9E93,0xF93B,0xF938,0xF937,/* 0xB0-0xB7 */
- 0x8D42,0xF940,0x6F5E,0x7984,0x5F55,0x9646,0xF9D2,0x9A74,/* 0xB8-0xBF */
- 0x5415,0x94DD,0x4FA3,0xF983,0xF9DF,0x5C61,0x7F15,0x8651,/* 0xC0-0xC7 */
- 0x6C2F,0xF9D8,0xF9DB,0x6EE4,0x7EFF,0x5CE6,0x631B,0x5B6A,/* 0xC8-0xCF */
- 0x6EE6,0xF91C,0x4E71,0xF975,0xF976,0x62A1,0x8F6E,0x4F26,/* 0xD0-0xD7 */
- 0x4ED1,0x6CA6,0x7EB6,0x8BBA,0x841D,0xF911,0x7F57,0x903B,/* 0xD8-0xDF */
- 0x9523,0x7BA9,0x9AA1,0xF912,0xF918,0xF915,0x9A86,0x7EDC,/* 0xE0-0xE7 */
+ 0x807D,0x9686,0x5784,0x62E2,0x9647,0x697C,0x5A04,0x6402,/* 0xA0-0xA7 */
+ 0x7BD3,0x6F0F,0x964B,0x82A6,0x5362,0x9885,0x5E90,0x7089,/* 0xA8-0xAF */
+ 0x63B3,0x5364,0x864F,0x9C81,0x9E93,0x788C,0x9732,0x8DEF,/* 0xB0-0xB7 */
+ 0x8D42,0x9E7F,0x6F5E,0x7984,0x5F55,0x9646,0x622E,0x9A74,/* 0xB8-0xBF */
+ 0x5415,0x94DD,0x4FA3,0x65C5,0x5C65,0x5C61,0x7F15,0x8651,/* 0xC0-0xC7 */
+ 0x6C2F,0x5F8B,0x7387,0x6EE4,0x7EFF,0x5CE6,0x631B,0x5B6A,/* 0xC8-0xCF */
+ 0x6EE6,0x5375,0x4E71,0x63A0,0x7565,0x62A1,0x8F6E,0x4F26,/* 0xD0-0xD7 */
+ 0x4ED1,0x6CA6,0x7EB6,0x8BBA,0x841D,0x87BA,0x7F57,0x903B,/* 0xD8-0xDF */
+ 0x9523,0x7BA9,0x9AA1,0x88F8,0x843D,0x6D1B,0x9A86,0x7EDC,/* 0xE0-0xE7 */
0x5988,0x9EBB,0x739B,0x7801,0x8682,0x9A6C,0x9A82,0x561B,/* 0xE8-0xEF */
0x5417,0x57CB,0x4E70,0x9EA6,0x5356,0x8FC8,0x8109,0x7792,/* 0xF0-0xF7 */
0x9992,0x86EE,0x6EE1,0x8513,0x66FC,0x6162,0x6F2B,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0xF945,0x8081,0x8082,0x8085,0x8088,0x808A,0x808D,0x808E,/* 0x40-0x47 */
+ 0x807E,0x8081,0x8082,0x8085,0x8088,0x808A,0x808D,0x808E,/* 0x40-0x47 */
0x808F,0x8090,0x8091,0x8092,0x8094,0x8095,0x8097,0x8099,/* 0x48-0x4F */
0x809E,0x80A3,0x80A6,0x80A7,0x80A8,0x80AC,0x80B0,0x80B3,/* 0x50-0x57 */
0x80B5,0x80B6,0x80B8,0x80B9,0x80BB,0x80C5,0x80C7,0x80C8,/* 0x58-0x5F */
0x80D4,0x80D5,0x80D8,0x80DF,0x80E0,0x80E2,0x80E3,0x80E6,/* 0x68-0x6F */
0x80EE,0x80F5,0x80F7,0x80F9,0x80FB,0x80FE,0x80FF,0x8100,/* 0x70-0x77 */
0x8101,0x8103,0x8104,0x8105,0x8107,0x8108,0x810B,0x0000,/* 0x78-0x7F */
-
+
0x810C,0x8115,0x8117,0x8119,0x811B,0x811C,0x811D,0x811F,/* 0x80-0x87 */
0x8120,0x8121,0x8122,0x8123,0x8124,0x8125,0x8126,0x8127,/* 0x88-0x8F */
0x8128,0x8129,0x812A,0x812B,0x812D,0x812E,0x8130,0x8133,/* 0x90-0x97 */
0x7F8E,0x6627,0x5BD0,0x59B9,0x5A9A,0x95E8,0x95F7,0x4EEC,/* 0xC0-0xC7 */
0x840C,0x8499,0x6AAC,0x76DF,0x9530,0x731B,0x68A6,0x5B5F,/* 0xC8-0xCF */
0x772F,0x919A,0x9761,0x7CDC,0x8FF7,0x8C1C,0x5F25,0x7C73,/* 0xD0-0xD7 */
- 0x79D8,0x89C5,0xF968,0x871C,0x5BC6,0x5E42,0x68C9,0x7720,/* 0xD8-0xDF */
+ 0x79D8,0x89C5,0x6CCC,0x871C,0x5BC6,0x5E42,0x68C9,0x7720,/* 0xD8-0xDF */
0x7EF5,0x5195,0x514D,0x52C9,0x5A29,0x7F05,0x9762,0x82D7,/* 0xE0-0xE7 */
0x63CF,0x7784,0x85D0,0x79D2,0x6E3A,0x5E99,0x5999,0x8511,/* 0xE8-0xEF */
0x706D,0x6C11,0x62BF,0x76BF,0x654F,0x60AF,0x95FD,0x660E,/* 0xF0-0xF7 */
0x8186,0x8187,0x8189,0x818B,0x818C,0x818D,0x818E,0x8190,/* 0x68-0x6F */
0x8192,0x8193,0x8194,0x8195,0x8196,0x8197,0x8199,0x819A,/* 0x70-0x77 */
0x819E,0x819F,0x81A0,0x81A1,0x81A2,0x81A4,0x81A5,0x0000,/* 0x78-0x7F */
-
+
0x81A7,0x81A9,0x81AB,0x81AC,0x81AD,0x81AE,0x81AF,0x81B0,/* 0x80-0x87 */
0x81B1,0x81B2,0x81B4,0x81B5,0x81B6,0x81B7,0x81B8,0x81B9,/* 0x88-0x8F */
0x81BC,0x81BD,0x81BE,0x81BF,0x81C4,0x81C5,0x81C7,0x81C8,/* 0x90-0x97 */
0x964C,0x8C0B,0x725F,0x67D0,0x62C7,0x7261,0x4EA9,0x59C6,/* 0xB0-0xB7 */
0x6BCD,0x5893,0x66AE,0x5E55,0x52DF,0x6155,0x6728,0x76EE,/* 0xB8-0xBF */
0x7766,0x7267,0x7A46,0x62FF,0x54EA,0x5450,0x94A0,0x90A3,/* 0xC0-0xC7 */
- 0x5A1C,0x7EB3,0x6C16,0x4E43,0x5976,0x8010,0xF90C,0x5357,/* 0xC8-0xCF */
+ 0x5A1C,0x7EB3,0x6C16,0x4E43,0x5976,0x8010,0x5948,0x5357,/* 0xC8-0xCF */
0x7537,0x96BE,0x56CA,0x6320,0x8111,0x607C,0x95F9,0x6DD6,/* 0xD0-0xD7 */
0x5462,0x9981,0x5185,0x5AE9,0x80FD,0x59AE,0x9713,0x502A,/* 0xD8-0xDF */
- 0xF9E3,0x5C3C,0x62DF,0x4F60,0xF9EB,0x817B,0x9006,0xF9EC,/* 0xE0-0xE7 */
- 0x852B,0x62C8,0xF98E,0x78BE,0x64B5,0xF9A4,0xF9A3,0x5A18,/* 0xE8-0xEF */
- 0x917F,0x9E1F,0xF9BD,0x634F,0x8042,0x5B7D,0x556E,0x954A,/* 0xF0-0xF7 */
+ 0x6CE5,0x5C3C,0x62DF,0x4F60,0x533F,0x817B,0x9006,0x6EBA,/* 0xE0-0xE7 */
+ 0x852B,0x62C8,0x5E74,0x78BE,0x64B5,0x637B,0x5FF5,0x5A18,/* 0xE8-0xEF */
+ 0x917F,0x9E1F,0x5C3F,0x634F,0x8042,0x5B7D,0x556E,0x954A,/* 0xF0-0xF7 */
0x954D,0x6D85,0x60A8,0x67E0,0x72DE,0x51DD,0x5B81,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x81D4,0x81D5,0x81D6,0x81D7,0xF926,0x81D9,0x81DA,0x81DB,/* 0x40-0x47 */
+ 0x81D4,0x81D5,0x81D6,0x81D7,0x81D8,0x81D9,0x81DA,0x81DB,/* 0x40-0x47 */
0x81DC,0x81DD,0x81DE,0x81DF,0x81E0,0x81E1,0x81E2,0x81E4,/* 0x48-0x4F */
- 0x81E5,0x81E6,0xF9F6,0x81E9,0x81EB,0x81EE,0x81EF,0x81F0,/* 0x50-0x57 */
+ 0x81E5,0x81E6,0x81E8,0x81E9,0x81EB,0x81EE,0x81EF,0x81F0,/* 0x50-0x57 */
0x81F1,0x81F2,0x81F5,0x81F6,0x81F7,0x81F8,0x81F9,0x81FA,/* 0x58-0x5F */
0x81FD,0x81FF,0x8203,0x8207,0x8208,0x8209,0x820A,0x820B,/* 0x60-0x67 */
0x820E,0x820F,0x8211,0x8213,0x8215,0x8216,0x8217,0x8218,/* 0x68-0x6F */
0x8219,0x821A,0x821D,0x8220,0x8224,0x8225,0x8226,0x8227,/* 0x70-0x77 */
0x8229,0x822E,0x8232,0x823A,0x823C,0x823D,0x823F,0x0000,/* 0x78-0x7F */
-
+
0x8240,0x8241,0x8242,0x8243,0x8245,0x8246,0x8248,0x824A,/* 0x80-0x87 */
0x824C,0x824D,0x824E,0x8250,0x8251,0x8252,0x8253,0x8254,/* 0x88-0x8F */
0x8255,0x8256,0x8257,0x8259,0x825B,0x825C,0x825D,0x825E,/* 0x90-0x97 */
0x8260,0x8261,0x8262,0x8263,0x8264,0x8265,0x8266,0x8267,/* 0x98-0x9F */
0x8269,0x62E7,0x6CDE,0x725B,0x626D,0x94AE,0x7EBD,0x8113,/* 0xA0-0xA7 */
- 0x6D53,0x519C,0xF943,0x5974,0x52AA,0xF960,0xF981,0x6696,/* 0xA8-0xAF */
+ 0x6D53,0x519C,0x5F04,0x5974,0x52AA,0x6012,0x5973,0x6696,/* 0xA8-0xAF */
0x8650,0x759F,0x632A,0x61E6,0x7CEF,0x8BFA,0x54E6,0x6B27,/* 0xB0-0xB7 */
0x9E25,0x6BB4,0x85D5,0x5455,0x5076,0x6CA4,0x556A,0x8DB4,/* 0xB8-0xBF */
0x722C,0x5E15,0x6015,0x7436,0x62CD,0x6392,0x724C,0x5F98,/* 0xC0-0xC7 */
0x82C3,0x82C5,0x82C6,0x82C9,0x82D0,0x82D6,0x82D9,0x82DA,/* 0x68-0x6F */
0x82DD,0x82E2,0x82E7,0x82E8,0x82E9,0x82EA,0x82EC,0x82ED,/* 0x70-0x77 */
0x82EE,0x82F0,0x82F2,0x82F3,0x82F5,0x82F6,0x82F8,0x0000,/* 0x78-0x7F */
-
+
0x82FA,0x82FC,0x82FD,0x82FE,0x82FF,0x8300,0x830A,0x830B,/* 0x80-0x87 */
0x830D,0x8310,0x8312,0x8313,0x8316,0x8318,0x8319,0x831D,/* 0x88-0x8F */
0x831E,0x831F,0x8320,0x8321,0x8322,0x8323,0x8324,0x8325,/* 0x90-0x97 */
0x66DD,0x7011,0x671F,0x6B3A,0x6816,0x621A,0x59BB,0x4E03,/* 0xD8-0xDF */
0x51C4,0x6F06,0x67D2,0x6C8F,0x5176,0x68CB,0x5947,0x6B67,/* 0xE0-0xE7 */
0x7566,0x5D0E,0x8110,0x9F50,0x65D7,0x7948,0x7941,0x9A91,/* 0xE8-0xEF */
- 0x8D77,0x5C82,0x4E5E,0x4F01,0x542F,0xF909,0x780C,0x5668,/* 0xF0-0xF7 */
+ 0x8D77,0x5C82,0x4E5E,0x4F01,0x542F,0x5951,0x780C,0x5668,/* 0xF0-0xF7 */
0x6C14,0x8FC4,0x5F03,0x6C7D,0x6CE3,0x8BAB,0x6390,0x0000,/* 0xF8-0xFF */
};
0x838C,0x838D,0x838F,0x8390,0x8391,0x8394,0x8395,0x8396,/* 0x68-0x6F */
0x8397,0x8399,0x839A,0x839D,0x839F,0x83A1,0x83A2,0x83A3,/* 0x70-0x77 */
0x83A4,0x83A5,0x83A6,0x83A7,0x83AC,0x83AD,0x83AE,0x0000,/* 0x78-0x7F */
-
+
0x83AF,0x83B5,0x83BB,0x83BE,0x83BF,0x83C2,0x83C3,0x83C4,/* 0x80-0x87 */
- 0x83C6,0x83C8,0xF93E,0x83CB,0x83CD,0x83CE,0x83D0,0x83D1,/* 0x88-0x8F */
+ 0x83C6,0x83C8,0x83C9,0x83CB,0x83CD,0x83CE,0x83D0,0x83D1,/* 0x88-0x8F */
0x83D2,0x83D3,0x83D5,0x83D7,0x83D9,0x83DA,0x83DB,0x83DE,/* 0x90-0x97 */
0x83E2,0x83E3,0x83E4,0x83E6,0x83E7,0x83E8,0x83EB,0x83EC,/* 0x98-0x9F */
0x83ED,0x6070,0x6D3D,0x7275,0x6266,0x948E,0x94C5,0x5343,/* 0xA0-0xA7 */
0x6B49,0x67AA,0x545B,0x8154,0x7F8C,0x5899,0x8537,0x5F3A,/* 0xB8-0xBF */
0x62A2,0x6A47,0x9539,0x6572,0x6084,0x6865,0x77A7,0x4E54,/* 0xC0-0xC7 */
0x4FA8,0x5DE7,0x9798,0x64AC,0x7FD8,0x5CED,0x4FCF,0x7A8D,/* 0xC8-0xCF */
- 0xFA00,0x8304,0x4E14,0x602F,0x7A83,0x94A6,0x4FB5,0x4EB2,/* 0xD0-0xD7 */
+ 0x5207,0x8304,0x4E14,0x602F,0x7A83,0x94A6,0x4FB5,0x4EB2,/* 0xD0-0xD7 */
0x79E6,0x7434,0x52E4,0x82B9,0x64D2,0x79BD,0x5BDD,0x6C81,/* 0xD8-0xDF */
- 0x9752,0x8F7B,0x6C22,0x503E,0x537F,0x6E05,0x64CE,0xFA12,/* 0xE0-0xE7 */
+ 0x9752,0x8F7B,0x6C22,0x503E,0x537F,0x6E05,0x64CE,0x6674,/* 0xE0-0xE7 */
0x6C30,0x60C5,0x9877,0x8BF7,0x5E86,0x743C,0x7A77,0x79CB,/* 0xE8-0xEF */
0x4E18,0x90B1,0x7403,0x6C42,0x56DA,0x914B,0x6CC5,0x8D8B,/* 0xF0-0xF7 */
0x533A,0x86C6,0x66F2,0x8EAF,0x5C48,0x9A71,0x6E20,0x0000,/* 0xF8-0xFF */
0x8421,0x8422,0x8423,0x8429,0x842A,0x842B,0x842C,0x842D,/* 0x60-0x67 */
0x842E,0x842F,0x8430,0x8432,0x8433,0x8434,0x8435,0x8436,/* 0x68-0x6F */
0x8437,0x8439,0x843A,0x843B,0x843E,0x843F,0x8440,0x8441,/* 0x70-0x77 */
- 0x8442,0x8443,0x8444,0x8445,0x8447,0x8448,0xF96E,0x0000,/* 0x78-0x7F */
-
+ 0x8442,0x8443,0x8444,0x8445,0x8447,0x8448,0x8449,0x0000,/* 0x78-0x7F */
+
0x844A,0x844B,0x844C,0x844D,0x844E,0x844F,0x8450,0x8452,/* 0x80-0x87 */
0x8453,0x8454,0x8455,0x8456,0x8458,0x845D,0x845E,0x845F,/* 0x88-0x8F */
0x8460,0x8462,0x8464,0x8465,0x8466,0x8467,0x8468,0x846A,/* 0x90-0x97 */
0x5203,0x598A,0x7EAB,0x6254,0x4ECD,0x65E5,0x620E,0x8338,/* 0xD0-0xD7 */
0x84C9,0x8363,0x878D,0x7194,0x6EB6,0x5BB9,0x7ED2,0x5197,/* 0xD8-0xDF */
0x63C9,0x67D4,0x8089,0x8339,0x8815,0x5112,0x5B7A,0x5982,/* 0xE0-0xE7 */
- 0x8FB1,0x4E73,0x6C5D,0x5165,0x8925,0x8F6F,0xF9C6,0x854A,/* 0xE8-0xEF */
- 0x745E,0x9510,0x95F0,0x6DA6,0xF974,0x5F31,0x6492,0x6D12,/* 0xF0-0xF7 */
- 0x8428,0x816E,0x9CC3,0xF96C,0x8D5B,0x4E09,0x53C1,0x0000,/* 0xF8-0xFF */
+ 0x8FB1,0x4E73,0x6C5D,0x5165,0x8925,0x8F6F,0x962E,0x854A,/* 0xE8-0xEF */
+ 0x745E,0x9510,0x95F0,0x6DA6,0x82E5,0x5F31,0x6492,0x6D12,/* 0xF0-0xF7 */
+ 0x8428,0x816E,0x9CC3,0x585E,0x8D5B,0x4E09,0x53C1,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C9[256] = {
0x84B1,0x84B3,0x84B5,0x84B6,0x84B7,0x84BB,0x84BC,0x84BE,/* 0x68-0x6F */
0x84C0,0x84C2,0x84C3,0x84C5,0x84C6,0x84C7,0x84C8,0x84CB,/* 0x70-0x77 */
0x84CC,0x84CE,0x84CF,0x84D2,0x84D4,0x84D5,0x84D7,0x0000,/* 0x78-0x7F */
-
+
0x84D8,0x84D9,0x84DA,0x84DB,0x84DC,0x84DE,0x84E1,0x84E2,/* 0x80-0x87 */
- 0x84E4,0x84E7,0x84E8,0x84E9,0x84EA,0x84EB,0x84ED,0xF999,/* 0x88-0x8F */
+ 0x84E4,0x84E7,0x84E8,0x84E9,0x84EA,0x84EB,0x84ED,0x84EE,/* 0x88-0x8F */
0x84EF,0x84F1,0x84F2,0x84F3,0x84F4,0x84F5,0x84F6,0x84F7,/* 0x90-0x97 */
0x84F8,0x84F9,0x84FA,0x84FB,0x84FD,0x84FE,0x8500,0x8501,/* 0x98-0x9F */
0x8502,0x4F1E,0x6563,0x6851,0x55D3,0x4E27,0x6414,0x9A9A,/* 0xA0-0xA7 */
0x97F6,0x5C11,0x54E8,0x90B5,0x7ECD,0x5962,0x8D4A,0x86C7,/* 0xD8-0xDF */
0x820C,0x820D,0x8D66,0x6444,0x5C04,0x6151,0x6D89,0x793E,/* 0xE0-0xE7 */
0x8BBE,0x7837,0x7533,0x547B,0x4F38,0x8EAB,0x6DF1,0x5A20,/* 0xE8-0xEF */
- 0x7EC5,0xFA19,0xF972,0x5BA1,0x5A76,0x751A,0x80BE,0x614E,/* 0xF0-0xF7 */
+ 0x7EC5,0x795E,0x6C88,0x5BA1,0x5A76,0x751A,0x80BE,0x614E,/* 0xF0-0xF7 */
0x6E17,0x58F0,0x751F,0x7525,0x7272,0x5347,0x7EF3,0x0000,/* 0xF8-0xFF */
};
0x8534,0x8535,0x8536,0x853E,0x853F,0x8540,0x8541,0x8542,/* 0x68-0x6F */
0x8544,0x8545,0x8546,0x8547,0x854B,0x854C,0x854D,0x854E,/* 0x70-0x77 */
0x854F,0x8550,0x8551,0x8552,0x8553,0x8554,0x8555,0x0000,/* 0x78-0x7F */
-
+
0x8557,0x8558,0x855A,0x855B,0x855C,0x855D,0x855F,0x8560,/* 0x80-0x87 */
0x8561,0x8562,0x8563,0x8565,0x8566,0x8567,0x8569,0x856A,/* 0x88-0x8F */
0x856B,0x856C,0x856D,0x856E,0x856F,0x8570,0x8571,0x8573,/* 0x90-0x97 */
0x8575,0x8576,0x8577,0x8578,0x857C,0x857D,0x857F,0x8580,/* 0x98-0x9F */
- 0x8581,0xF96D,0x76DB,0x5269,0x80DC,0x5723,0x5E08,0x5931,/* 0xA0-0xA7 */
+ 0x8581,0x7701,0x76DB,0x5269,0x80DC,0x5723,0x5E08,0x5931,/* 0xA0-0xA7 */
0x72EE,0x65BD,0x6E7F,0x8BD7,0x5C38,0x8671,0x5341,0x77F3,/* 0xA8-0xAF */
- 0xF973,0x65F6,0xF9FD,0x98DF,0x8680,0x5B9E,0x8BC6,0x53F2,/* 0xB0-0xB7 */
+ 0x62FE,0x65F6,0x4EC0,0x98DF,0x8680,0x5B9E,0x8BC6,0x53F2,/* 0xB0-0xB7 */
0x77E2,0x4F7F,0x5C4E,0x9A76,0x59CB,0x5F0F,0x793A,0x58EB,/* 0xB8-0xBF */
0x4E16,0x67FF,0x4E8B,0x62ED,0x8A93,0x901D,0x52BF,0x662F,/* 0xC0-0xC7 */
0x55DC,0x566C,0x9002,0x4ED5,0x4F8D,0x91CA,0x9970,0x6C0F,/* 0xC8-0xCF */
0x85AB,0x85AC,0x85AD,0x85B1,0x85B2,0x85B3,0x85B4,0x85B5,/* 0x60-0x67 */
0x85B6,0x85B8,0x85BA,0x85BB,0x85BC,0x85BD,0x85BE,0x85BF,/* 0x68-0x6F */
0x85C0,0x85C2,0x85C3,0x85C4,0x85C5,0x85C6,0x85C7,0x85C8,/* 0x70-0x77 */
- 0x85CA,0x85CB,0x85CC,0xF923,0x85CE,0x85D1,0x85D2,0x0000,/* 0x78-0x7F */
-
+ 0x85CA,0x85CB,0x85CC,0x85CD,0x85CE,0x85D1,0x85D2,0x0000,/* 0x78-0x7F */
+
0x85D4,0x85D6,0x85D7,0x85D8,0x85D9,0x85DA,0x85DB,0x85DD,/* 0x80-0x87 */
0x85DE,0x85DF,0x85E0,0x85E1,0x85E2,0x85E3,0x85E5,0x85E6,/* 0x88-0x8F */
0x85E7,0x85E8,0x85EA,0x85EB,0x85EC,0x85ED,0x85EE,0x85EF,/* 0x90-0x97 */
0x7D20,0x901F,0x7C9F,0x50F3,0x5851,0x6EAF,0x5BBF,0x8BC9,/* 0xD8-0xDF */
0x8083,0x9178,0x849C,0x7B97,0x867D,0x968B,0x968F,0x7EE5,/* 0xE0-0xE7 */
0x9AD3,0x788E,0x5C81,0x7A57,0x9042,0x96A7,0x795F,0x5B59,/* 0xE8-0xEF */
- 0x635F,0x7B0B,0x84D1,0x68AD,0x5506,0x7F29,0x7410,0xF96A,/* 0xF0-0xF7 */
+ 0x635F,0x7B0B,0x84D1,0x68AD,0x5506,0x7F29,0x7410,0x7D22,/* 0xF0-0xF7 */
0x9501,0x6240,0x584C,0x4ED6,0x5B83,0x5979,0x5854,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x85F9,0xF9F0,0x85FC,0x85FD,0x85FE,0x8600,0x8601,0x8602,/* 0x40-0x47 */
- 0x8603,0x8604,0xF935,0x8607,0x8608,0x8609,0x860A,0x860B,/* 0x48-0x4F */
+ 0x85F9,0x85FA,0x85FC,0x85FD,0x85FE,0x8600,0x8601,0x8602,/* 0x40-0x47 */
+ 0x8603,0x8604,0x8606,0x8607,0x8608,0x8609,0x860A,0x860B,/* 0x48-0x4F */
0x860C,0x860D,0x860E,0x860F,0x8610,0x8612,0x8613,0x8614,/* 0x50-0x57 */
0x8615,0x8617,0x8618,0x8619,0x861A,0x861B,0x861C,0x861D,/* 0x58-0x5F */
0x861E,0x861F,0x8620,0x8621,0x8622,0x8623,0x8624,0x8625,/* 0x60-0x67 */
- 0x8626,0x8628,0x862A,0x862B,0x862C,0xF91F,0x862E,0x862F,/* 0x68-0x6F */
+ 0x8626,0x8628,0x862A,0x862B,0x862C,0x862D,0x862E,0x862F,/* 0x68-0x6F */
0x8630,0x8631,0x8632,0x8633,0x8634,0x8635,0x8636,0x8637,/* 0x70-0x77 */
- 0x8639,0x863A,0x863B,0x863D,0x863E,0xF910,0x8640,0x0000,/* 0x78-0x7F */
-
+ 0x8639,0x863A,0x863B,0x863D,0x863E,0x863F,0x8640,0x0000,/* 0x78-0x7F */
+
0x8641,0x8642,0x8643,0x8644,0x8645,0x8646,0x8647,0x8648,/* 0x80-0x87 */
0x8649,0x864A,0x864B,0x864C,0x8652,0x8653,0x8655,0x8656,/* 0x88-0x8F */
- 0x8657,0x8658,0x8659,0x865B,0xF936,0x865D,0x865F,0x8660,/* 0x90-0x97 */
+ 0x8657,0x8658,0x8659,0x865B,0x865C,0x865D,0x865F,0x8660,/* 0x90-0x97 */
0x8661,0x8663,0x8664,0x8665,0x8666,0x8667,0x8668,0x8669,/* 0x98-0x9F */
0x866A,0x736D,0x631E,0x8E4B,0x8E0F,0x80CE,0x82D4,0x62AC,/* 0xA0-0xA7 */
0x53F0,0x6CF0,0x915E,0x592A,0x6001,0x6C70,0x574D,0x644A,/* 0xA8-0xAF */
0x8D2A,0x762B,0x6EE9,0x575B,0x6A80,0x75F0,0x6F6D,0x8C2D,/* 0xB0-0xB7 */
0x8C08,0x5766,0x6BEF,0x8892,0x78B3,0x63A2,0x53F9,0x70AD,/* 0xB8-0xBF */
- 0x6C64,0x5858,0x642A,0x5802,0x68E0,0x819B,0x5510,0xFA03,/* 0xC0-0xC7 */
+ 0x6C64,0x5858,0x642A,0x5802,0x68E0,0x819B,0x5510,0x7CD6,/* 0xC0-0xC7 */
0x5018,0x8EBA,0x6DCC,0x8D9F,0x70EB,0x638F,0x6D9B,0x6ED4,/* 0xC8-0xCF */
0x7EE6,0x8404,0x6843,0x9003,0x6DD8,0x9676,0x8BA8,0x5957,/* 0xD0-0xD7 */
0x7279,0x85E4,0x817E,0x75BC,0x8A8A,0x68AF,0x5254,0x8E22,/* 0xD8-0xDF */
0x86B3,0x86B7,0x86B8,0x86B9,0x86BB,0x86BC,0x86BD,0x86BE,/* 0x68-0x6F */
0x86BF,0x86C1,0x86C2,0x86C3,0x86C5,0x86C8,0x86CC,0x86CD,/* 0x70-0x77 */
0x86D2,0x86D3,0x86D5,0x86D6,0x86D7,0x86DA,0x86DC,0x0000,/* 0x78-0x7F */
-
+
0x86DD,0x86E0,0x86E1,0x86E2,0x86E3,0x86E5,0x86E6,0x86E7,/* 0x80-0x87 */
0x86E8,0x86EA,0x86EB,0x86EC,0x86EF,0x86F5,0x86F6,0x86F7,/* 0x88-0x8F */
0x86FA,0x86FB,0x86FC,0x86FD,0x86FF,0x8701,0x8704,0x8705,/* 0x90-0x97 */
0x5C60,0x571F,0x5410,0x5154,0x6E4D,0x56E2,0x63A8,0x9893,/* 0xC0-0xC7 */
0x817F,0x8715,0x892A,0x9000,0x541E,0x5C6F,0x81C0,0x62D6,/* 0xC8-0xCF */
0x6258,0x8131,0x9E35,0x9640,0x9A6E,0x9A7C,0x692D,0x59A5,/* 0xD0-0xD7 */
- 0xFA02,0x553E,0x6316,0x54C7,0x86D9,0x6D3C,0x5A03,0x74E6,/* 0xD8-0xDF */
+ 0x62D3,0x553E,0x6316,0x54C7,0x86D9,0x6D3C,0x5A03,0x74E6,/* 0xD8-0xDF */
0x889C,0x6B6A,0x5916,0x8C4C,0x5F2F,0x6E7E,0x73A9,0x987D,/* 0xE0-0xE7 */
0x4E38,0x70F7,0x5B8C,0x7897,0x633D,0x665A,0x7696,0x60CB,/* 0xE8-0xEF */
0x5B9B,0x5A49,0x4E07,0x8155,0x6C6A,0x738B,0x4EA1,0x6789,/* 0xF0-0xF7 */
0x8756,0x8758,0x875A,0x875B,0x875C,0x875D,0x875E,0x875F,/* 0x68-0x6F */
0x8761,0x8762,0x8766,0x8767,0x8768,0x8769,0x876A,0x876B,/* 0x70-0x77 */
0x876C,0x876D,0x876F,0x8771,0x8772,0x8773,0x8775,0x0000,/* 0x78-0x7F */
-
+
0x8777,0x8778,0x8779,0x877A,0x877F,0x8780,0x8781,0x8784,/* 0x80-0x87 */
0x8786,0x8787,0x8789,0x878A,0x878C,0x878E,0x878F,0x8790,/* 0x88-0x8F */
0x8791,0x8792,0x8794,0x8795,0x8796,0x8798,0x8799,0x879A,/* 0x90-0x97 */
0x87DE,0x87DF,0x87E1,0x87E2,0x87E3,0x87E4,0x87E6,0x87E7,/* 0x68-0x6F */
0x87E8,0x87E9,0x87EB,0x87EC,0x87ED,0x87EF,0x87F0,0x87F1,/* 0x70-0x77 */
0x87F2,0x87F3,0x87F4,0x87F5,0x87F6,0x87F7,0x87F8,0x0000,/* 0x78-0x7F */
-
+
0x87FA,0x87FB,0x87FC,0x87FD,0x87FF,0x8800,0x8801,0x8802,/* 0x80-0x87 */
0x8804,0x8805,0x8806,0x8807,0x8808,0x8809,0x880B,0x880C,/* 0x88-0x8F */
0x880D,0x880E,0x880F,0x8810,0x8811,0x8812,0x8814,0x8817,/* 0x90-0x97 */
- 0x8818,0x8819,0x881A,0x881C,0x881D,0x881E,0xF927,0x8820,/* 0x98-0x9F */
+ 0x8818,0x8819,0x881A,0x881C,0x881D,0x881E,0x881F,0x8820,/* 0x98-0x9F */
0x8823,0x7A00,0x606F,0x5E0C,0x6089,0x819D,0x5915,0x60DC,/* 0xA0-0xA7 */
0x7184,0x70EF,0x6EAA,0x6C50,0x7280,0x6A84,0x88AD,0x5E2D,/* 0xA8-0xAF */
0x4E60,0x5AB3,0x559C,0x94E3,0x6D17,0x7CFB,0x9699,0x620F,/* 0xB0-0xB7 */
0x95F2,0x6D8E,0x5F26,0x5ACC,0x663E,0x9669,0x73B0,0x732E,/* 0xD0-0xD7 */
0x53BF,0x817A,0x9985,0x7FA1,0x5BAA,0x9677,0x9650,0x7EBF,/* 0xD8-0xDF */
0x76F8,0x53A2,0x9576,0x9999,0x7BB1,0x8944,0x6E58,0x4E61,/* 0xE0-0xE7 */
- 0x7FD4,0xFA1A,0x8BE6,0x60F3,0x54CD,0x4EAB,0x9879,0x5DF7,/* 0xE8-0xEF */
+ 0x7FD4,0x7965,0x8BE6,0x60F3,0x54CD,0x4EAB,0x9879,0x5DF7,/* 0xE8-0xEF */
0x6A61,0x50CF,0x5411,0x8C61,0x8427,0x785D,0x9704,0x524A,/* 0xF0-0xF7 */
0x54EE,0x56A3,0x9500,0x6D88,0x5BB5,0x6DC6,0x6653,0x0000,/* 0xF8-0xFF */
};
0x8855,0x8856,0x8858,0x885A,0x885B,0x885C,0x885D,0x885E,/* 0x68-0x6F */
0x885F,0x8860,0x8866,0x8867,0x886A,0x886D,0x886F,0x8871,/* 0x70-0x77 */
0x8873,0x8874,0x8875,0x8876,0x8878,0x8879,0x887A,0x0000,/* 0x78-0x7F */
-
+
0x887B,0x887C,0x8880,0x8883,0x8886,0x8887,0x8889,0x888A,/* 0x80-0x87 */
0x888C,0x888E,0x888F,0x8890,0x8891,0x8893,0x8894,0x8895,/* 0x88-0x8F */
0x8897,0x8898,0x8899,0x889A,0x889B,0x889D,0x889E,0x889F,/* 0x90-0x97 */
0x61C8,0x6CC4,0x6CFB,0x8C22,0x5C51,0x85AA,0x82AF,0x950C,/* 0xB8-0xBF */
0x6B23,0x8F9B,0x65B0,0x5FFB,0x5FC3,0x4FE1,0x8845,0x661F,/* 0xC0-0xC7 */
0x8165,0x7329,0x60FA,0x5174,0x5211,0x578B,0x5F62,0x90A2,/* 0xC8-0xCF */
- 0xFA08,0x9192,0x5E78,0x674F,0x6027,0x59D3,0x5144,0x51F6,/* 0xD0-0xD7 */
+ 0x884C,0x9192,0x5E78,0x674F,0x6027,0x59D3,0x5144,0x51F6,/* 0xD0-0xD7 */
0x80F8,0x5308,0x6C79,0x96C4,0x718A,0x4F11,0x4FEE,0x7F9E,/* 0xD8-0xDF */
0x673D,0x55C5,0x9508,0x79C0,0x8896,0x7EE3,0x589F,0x620C,/* 0xE0-0xE7 */
0x9700,0x865A,0x5618,0x987B,0x5F90,0x8BB8,0x84C4,0x9157,/* 0xE8-0xEF */
0x88B6,0x88B8,0x88B9,0x88BA,0x88BB,0x88BD,0x88BE,0x88BF,/* 0x48-0x4F */
0x88C0,0x88C3,0x88C4,0x88C7,0x88C8,0x88CA,0x88CB,0x88CC,/* 0x50-0x57 */
0x88CD,0x88CF,0x88D0,0x88D1,0x88D3,0x88D6,0x88D7,0x88DA,/* 0x58-0x5F */
- 0x88DB,0x88DC,0x88DD,0x88DE,0x88E0,0xF9E8,0x88E6,0x88E7,/* 0x60-0x67 */
+ 0x88DB,0x88DC,0x88DD,0x88DE,0x88E0,0x88E1,0x88E6,0x88E7,/* 0x60-0x67 */
0x88E9,0x88EA,0x88EB,0x88EC,0x88ED,0x88EE,0x88EF,0x88F2,/* 0x68-0x6F */
0x88F5,0x88F6,0x88F7,0x88FA,0x88FB,0x88FD,0x88FF,0x8900,/* 0x70-0x77 */
0x8901,0x8903,0x8904,0x8905,0x8906,0x8907,0x8908,0x0000,/* 0x78-0x7F */
-
+
0x8909,0x890B,0x890C,0x890D,0x890E,0x890F,0x8911,0x8914,/* 0x80-0x87 */
0x8915,0x8916,0x8917,0x8918,0x891C,0x891D,0x891E,0x891F,/* 0x88-0x8F */
0x8920,0x8922,0x8923,0x8924,0x8926,0x8927,0x8928,0x8929,/* 0x90-0x97 */
0x5BFB,0x9A6F,0x5DE1,0x6B89,0x6C5B,0x8BAD,0x8BAF,0x900A,/* 0xB0-0xB7 */
0x8FC5,0x538B,0x62BC,0x9E26,0x9E2D,0x5440,0x4E2B,0x82BD,/* 0xB8-0xBF */
0x7259,0x869C,0x5D16,0x8859,0x6DAF,0x96C5,0x54D1,0x4E9A,/* 0xC0-0xC7 */
- 0x8BB6,0x7109,0xF99E,0x9609,0x70DF,0x6DF9,0x76D0,0x4E25,/* 0xC8-0xCF */
+ 0x8BB6,0x7109,0x54BD,0x9609,0x70DF,0x6DF9,0x76D0,0x4E25,/* 0xC8-0xCF */
0x7814,0x8712,0x5CA9,0x5EF6,0x8A00,0x989C,0x960E,0x708E,/* 0xD0-0xD7 */
0x6CBF,0x5944,0x63A9,0x773C,0x884D,0x6F14,0x8273,0x5830,/* 0xD8-0xDF */
0x71D5,0x538C,0x781A,0x96C1,0x5501,0x5F66,0x7130,0x5BB4,/* 0xE0-0xE7 */
0x894A,0x894B,0x894C,0x894D,0x894E,0x894F,0x8950,0x8951,/* 0x50-0x57 */
0x8952,0x8953,0x8954,0x8955,0x8956,0x8957,0x8958,0x8959,/* 0x58-0x5F */
0x895A,0x895B,0x895C,0x895D,0x8960,0x8961,0x8962,0x8963,/* 0x60-0x67 */
- 0xF924,0x8965,0x8967,0x8968,0x8969,0x896A,0x896B,0x896C,/* 0x68-0x6F */
+ 0x8964,0x8965,0x8967,0x8968,0x8969,0x896A,0x896B,0x896C,/* 0x68-0x6F */
0x896D,0x896E,0x896F,0x8970,0x8971,0x8972,0x8973,0x8974,/* 0x70-0x77 */
0x8975,0x8976,0x8977,0x8978,0x8979,0x897A,0x897C,0x0000,/* 0x78-0x7F */
-
+
0x897D,0x897E,0x8980,0x8982,0x8984,0x8985,0x8987,0x8988,/* 0x80-0x87 */
- 0x8989,0x898A,0xFA0A,0x898C,0x898D,0x898E,0x898F,0x8990,/* 0x88-0x8F */
+ 0x8989,0x898A,0x898B,0x898C,0x898D,0x898E,0x898F,0x8990,/* 0x88-0x8F */
0x8991,0x8992,0x8993,0x8994,0x8995,0x8996,0x8997,0x8998,/* 0x90-0x97 */
0x8999,0x899A,0x899B,0x899C,0x899D,0x899E,0x899F,0x89A0,/* 0x98-0x9F */
0x89A1,0x6447,0x5C27,0x9065,0x7A91,0x8C23,0x59DA,0x54AC,/* 0xA0-0xA7 */
0x814B,0x591C,0x6DB2,0x4E00,0x58F9,0x533B,0x63D6,0x94F1,/* 0xB8-0xBF */
0x4F9D,0x4F0A,0x8863,0x9890,0x5937,0x9057,0x79FB,0x4EEA,/* 0xC0-0xC7 */
0x80F0,0x7591,0x6C82,0x5B9C,0x59E8,0x5F5D,0x6905,0x8681,/* 0xC8-0xCF */
- 0x501A,0x5DF2,0x4E59,0x77E3,0x4EE5,0x827A,0x6291,0xF9E0,/* 0xD0-0xD7 */
- 0x9091,0x5C79,0x4EBF,0x5F79,0x81C6,0xFA25,0x8084,0x75AB,/* 0xD8-0xDF */
- 0x4EA6,0x88D4,0x610F,0x6BC5,0x5FC6,0x4E49,0xFA17,0x6EA2,/* 0xE0-0xE7 */
+ 0x501A,0x5DF2,0x4E59,0x77E3,0x4EE5,0x827A,0x6291,0x6613,/* 0xD0-0xD7 */
+ 0x9091,0x5C79,0x4EBF,0x5F79,0x81C6,0x9038,0x8084,0x75AB,/* 0xD8-0xDF */
+ 0x4EA6,0x88D4,0x610F,0x6BC5,0x5FC6,0x4E49,0x76CA,0x6EA2,/* 0xE0-0xE7 */
0x8BE3,0x8BAE,0x8C0A,0x8BD1,0x5F02,0x7FFC,0x7FCC,0x7ECE,/* 0xE8-0xEF */
0x8335,0x836B,0x56E0,0x6BB7,0x97F3,0x9634,0x59FB,0x541F,/* 0xF0-0xF7 */
0x94F6,0x6DEB,0x5BC5,0x996E,0x5C39,0x5F15,0x9690,0x0000,/* 0xF8-0xFF */
0x89DD,0x89DF,0x89E0,0x89E1,0x89E2,0x89E4,0x89E7,0x89E8,/* 0x68-0x6F */
0x89E9,0x89EA,0x89EC,0x89ED,0x89EE,0x89F0,0x89F1,0x89F2,/* 0x70-0x77 */
0x89F4,0x89F5,0x89F6,0x89F7,0x89F8,0x89F9,0x89FA,0x0000,/* 0x78-0x7F */
-
+
0x89FB,0x89FC,0x89FD,0x89FE,0x89FF,0x8A01,0x8A02,0x8A03,/* 0x80-0x87 */
0x8A04,0x8A05,0x8A06,0x8A08,0x8A09,0x8A0A,0x8A0B,0x8A0C,/* 0x88-0x8F */
0x8A0D,0x8A0E,0x8A0F,0x8A10,0x8A11,0x8A12,0x8A13,0x8A14,/* 0x90-0x97 */
0x8FC2,0x6DE4,0x4E8E,0x76C2,0x6986,0x865E,0x611A,0x8206,/* 0xD8-0xDF */
0x4F59,0x4FDE,0x903E,0x9C7C,0x6109,0x6E1D,0x6E14,0x9685,/* 0xE0-0xE7 */
0x4E88,0x5A31,0x96E8,0x4E0E,0x5C7F,0x79B9,0x5B87,0x8BED,/* 0xE8-0xEF */
- 0xFA1E,0x7389,0x57DF,0x828B,0x90C1,0x5401,0x9047,0x55BB,/* 0xF0-0xF7 */
+ 0x7FBD,0x7389,0x57DF,0x828B,0x90C1,0x5401,0x9047,0x55BB,/* 0xF0-0xF7 */
0x5CEA,0x5FA1,0x6108,0x6B32,0x72F1,0x80B2,0x8A89,0x0000,/* 0xF8-0xFF */
};
0x8A47,0x8A49,0x8A4A,0x8A4B,0x8A4C,0x8A4D,0x8A4E,0x8A4F,/* 0x68-0x6F */
0x8A50,0x8A51,0x8A52,0x8A53,0x8A54,0x8A55,0x8A56,0x8A57,/* 0x70-0x77 */
0x8A58,0x8A59,0x8A5A,0x8A5B,0x8A5C,0x8A5D,0x8A5E,0x0000,/* 0x78-0x7F */
-
+
0x8A5F,0x8A60,0x8A61,0x8A62,0x8A63,0x8A64,0x8A65,0x8A66,/* 0x80-0x87 */
0x8A67,0x8A68,0x8A69,0x8A6A,0x8A6B,0x8A6C,0x8A6D,0x8A6E,/* 0x88-0x8F */
0x8A6F,0x8A70,0x8A71,0x8A72,0x8A73,0x8A74,0x8A75,0x8A76,/* 0x90-0x97 */
0x8A8B,0x8A8C,0x8A8D,0x8A8E,0x8A8F,0x8A90,0x8A91,0x8A92,/* 0x48-0x4F */
0x8A94,0x8A95,0x8A96,0x8A97,0x8A98,0x8A99,0x8A9A,0x8A9B,/* 0x50-0x57 */
0x8A9C,0x8A9D,0x8A9E,0x8A9F,0x8AA0,0x8AA1,0x8AA2,0x8AA3,/* 0x58-0x5F */
- 0x8AA4,0x8AA5,0x8AA6,0x8AA7,0x8AA8,0x8AA9,0xF9A1,0x8AAB,/* 0x60-0x67 */
+ 0x8AA4,0x8AA5,0x8AA6,0x8AA7,0x8AA8,0x8AA9,0x8AAA,0x8AAB,/* 0x60-0x67 */
0x8AAC,0x8AAD,0x8AAE,0x8AAF,0x8AB0,0x8AB1,0x8AB2,0x8AB3,/* 0x68-0x6F */
0x8AB4,0x8AB5,0x8AB6,0x8AB7,0x8AB8,0x8AB9,0x8ABA,0x8ABB,/* 0x70-0x77 */
0x8ABC,0x8ABD,0x8ABE,0x8ABF,0x8AC0,0x8AC1,0x8AC2,0x0000,/* 0x78-0x7F */
-
+
0x8AC3,0x8AC4,0x8AC5,0x8AC6,0x8AC7,0x8AC8,0x8AC9,0x8ACA,/* 0x80-0x87 */
- 0x8ACB,0x8ACC,0x8ACD,0x8ACE,0x8ACF,0x8AD0,0x8AD1,0xF97D,/* 0x88-0x8F */
- 0x8AD3,0x8AD4,0x8AD5,0xF941,0x8AD7,0x8AD8,0x8AD9,0x8ADA,/* 0x90-0x97 */
+ 0x8ACB,0x8ACC,0x8ACD,0x8ACE,0x8ACF,0x8AD0,0x8AD1,0x8AD2,/* 0x88-0x8F */
+ 0x8AD3,0x8AD4,0x8AD5,0x8AD6,0x8AD7,0x8AD8,0x8AD9,0x8ADA,/* 0x90-0x97 */
0x8ADB,0x8ADC,0x8ADD,0x8ADE,0x8ADF,0x8AE0,0x8AE1,0x8AE2,/* 0x98-0x9F */
0x8AE3,0x94E1,0x95F8,0x7728,0x6805,0x69A8,0x548B,0x4E4D,/* 0xA0-0xA7 */
- 0x70B8,0x8BC8,0x6458,0x658B,0xFA04,0x7A84,0x503A,0x5BE8,/* 0xA8-0xAF */
+ 0x70B8,0x8BC8,0x6458,0x658B,0x5B85,0x7A84,0x503A,0x5BE8,/* 0xA8-0xAF */
0x77BB,0x6BE1,0x8A79,0x7C98,0x6CBE,0x76CF,0x65A9,0x8F97,/* 0xB0-0xB7 */
0x5D2D,0x5C55,0x8638,0x6808,0x5360,0x6218,0x7AD9,0x6E5B,/* 0xB8-0xBF */
0x7EFD,0x6A1F,0x7AE0,0x5F70,0x6F33,0x5F20,0x638C,0x6DA8,/* 0xC0-0xC7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8AE4,0x8AE5,0x8AE6,0x8AE7,0x8AE8,0x8AE9,0x8AEA,0x8AEB,/* 0x40-0x47 */
0x8AEC,0x8AED,0x8AEE,0x8AEF,0x8AF0,0x8AF1,0x8AF2,0x8AF3,/* 0x48-0x4F */
- 0x8AF4,0x8AF5,0x8AF6,0x8AF7,0xFA22,0x8AF9,0x8AFA,0x8AFB,/* 0x50-0x57 */
- 0x8AFC,0x8AFD,0xF95D,0x8AFF,0x8B00,0x8B01,0x8B02,0x8B03,/* 0x58-0x5F */
+ 0x8AF4,0x8AF5,0x8AF6,0x8AF7,0x8AF8,0x8AF9,0x8AFA,0x8AFB,/* 0x50-0x57 */
+ 0x8AFC,0x8AFD,0x8AFE,0x8AFF,0x8B00,0x8B01,0x8B02,0x8B03,/* 0x58-0x5F */
0x8B04,0x8B05,0x8B06,0x8B08,0x8B09,0x8B0A,0x8B0B,0x8B0C,/* 0x60-0x67 */
0x8B0D,0x8B0E,0x8B0F,0x8B10,0x8B11,0x8B12,0x8B13,0x8B14,/* 0x68-0x6F */
0x8B15,0x8B16,0x8B17,0x8B18,0x8B19,0x8B1A,0x8B1B,0x8B1C,/* 0x70-0x77 */
0x8B1D,0x8B1E,0x8B1F,0x8B20,0x8B21,0x8B22,0x8B23,0x0000,/* 0x78-0x7F */
-
+
0x8B24,0x8B25,0x8B27,0x8B28,0x8B29,0x8B2A,0x8B2B,0x8B2C,/* 0x80-0x87 */
0x8B2D,0x8B2E,0x8B2F,0x8B30,0x8B31,0x8B32,0x8B33,0x8B34,/* 0x88-0x8F */
0x8B35,0x8B36,0x8B37,0x8B38,0x8B39,0x8B3A,0x8B3B,0x8B3C,/* 0x90-0x97 */
0x804C,0x76F4,0x690D,0x6B96,0x6267,0x503C,0x4F84,0x5740,/* 0xB0-0xB7 */
0x6307,0x6B62,0x8DBE,0x53EA,0x65E8,0x7EB8,0x5FD7,0x631A,/* 0xB8-0xBF */
0x63B7,0x81F3,0x81F4,0x7F6E,0x5E1C,0x5CD9,0x5236,0x667A,/* 0xC0-0xC7 */
- 0x79E9,0x7A1A,0x8D28,0xF9FB,0x75D4,0x6EDE,0x6CBB,0x7A92,/* 0xC8-0xCF */
+ 0x79E9,0x7A1A,0x8D28,0x7099,0x75D4,0x6EDE,0x6CBB,0x7A92,/* 0xC8-0xCF */
0x4E2D,0x76C5,0x5FE0,0x949F,0x8877,0x7EC8,0x79CD,0x80BF,/* 0xD0-0xD7 */
0x91CD,0x4EF2,0x4F17,0x821F,0x5468,0x5DDE,0x6D32,0x8BCC,/* 0xD8-0xDF */
0x7CA5,0x8F74,0x8098,0x5E1A,0x5492,0x76B1,0x5B99,0x663C,/* 0xE0-0xE7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8B46,0x8B47,0x8B48,0x8B49,0x8B4A,0x8B4B,0x8B4C,0x8B4D,/* 0x40-0x47 */
0x8B4E,0x8B4F,0x8B50,0x8B51,0x8B52,0x8B53,0x8B54,0x8B55,/* 0x48-0x4F */
- 0x8B56,0x8B57,0xF9FC,0x8B59,0x8B5A,0x8B5B,0x8B5C,0x8B5D,/* 0x50-0x57 */
+ 0x8B56,0x8B57,0x8B58,0x8B59,0x8B5A,0x8B5B,0x8B5C,0x8B5D,/* 0x50-0x57 */
0x8B5E,0x8B5F,0x8B60,0x8B61,0x8B62,0x8B63,0x8B64,0x8B65,/* 0x58-0x5F */
0x8B67,0x8B68,0x8B69,0x8B6A,0x8B6B,0x8B6D,0x8B6E,0x8B6F,/* 0x60-0x67 */
0x8B70,0x8B71,0x8B72,0x8B73,0x8B74,0x8B75,0x8B76,0x8B77,/* 0x68-0x6F */
0x8B78,0x8B79,0x8B7A,0x8B7B,0x8B7C,0x8B7D,0x8B7E,0x8B7F,/* 0x70-0x77 */
- 0xF95A,0x8B81,0x8B82,0x8B83,0x8B84,0x8B85,0x8B86,0x0000,/* 0x78-0x7F */
-
+ 0x8B80,0x8B81,0x8B82,0x8B83,0x8B84,0x8B85,0x8B86,0x0000,/* 0x78-0x7F */
+
0x8B87,0x8B88,0x8B89,0x8B8A,0x8B8B,0x8B8C,0x8B8D,0x8B8E,/* 0x80-0x87 */
0x8B8F,0x8B90,0x8B91,0x8B92,0x8B93,0x8B94,0x8B95,0x8B96,/* 0x88-0x8F */
0x8B97,0x8B98,0x8B99,0x8B9A,0x8B9B,0x8B9C,0x8B9D,0x8B9E,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8C38,0x8C39,0x8C3A,0x8C3B,0x8C3C,0x8C3D,0x8C3E,0x8C3F,/* 0x40-0x47 */
- 0x8C40,0x8C42,0x8C43,0x8C44,0x8C45,0xF900,0x8C4A,0x8C4B,/* 0x48-0x4F */
+ 0x8C40,0x8C42,0x8C43,0x8C44,0x8C45,0x8C48,0x8C4A,0x8C4B,/* 0x48-0x4F */
0x8C4D,0x8C4E,0x8C4F,0x8C50,0x8C51,0x8C52,0x8C53,0x8C54,/* 0x50-0x57 */
0x8C56,0x8C57,0x8C58,0x8C59,0x8C5B,0x8C5C,0x8C5D,0x8C5E,/* 0x58-0x5F */
0x8C5F,0x8C60,0x8C63,0x8C64,0x8C65,0x8C66,0x8C67,0x8C68,/* 0x60-0x67 */
- 0x8C69,0xFA16,0x8C6D,0x8C6E,0x8C6F,0x8C70,0x8C71,0x8C72,/* 0x68-0x6F */
+ 0x8C69,0x8C6C,0x8C6D,0x8C6E,0x8C6F,0x8C70,0x8C71,0x8C72,/* 0x68-0x6F */
0x8C74,0x8C75,0x8C76,0x8C77,0x8C7B,0x8C7C,0x8C7D,0x8C7E,/* 0x70-0x77 */
0x8C7F,0x8C80,0x8C81,0x8C83,0x8C84,0x8C86,0x8C87,0x0000,/* 0x78-0x7F */
-
+
0x8C88,0x8C8B,0x8C8D,0x8C8E,0x8C8F,0x8C90,0x8C91,0x8C92,/* 0x80-0x87 */
0x8C93,0x8C95,0x8C96,0x8C97,0x8C99,0x8C9A,0x8C9B,0x8C9C,/* 0x88-0x8F */
0x8C9D,0x8C9E,0x8C9F,0x8CA0,0x8CA1,0x8CA2,0x8CA3,0x8CA4,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8CAE,0x8CAF,0x8CB0,0x8CB1,0x8CB2,0x8CB3,0x8CB4,0x8CB5,/* 0x40-0x47 */
0x8CB6,0x8CB7,0x8CB8,0x8CB9,0x8CBA,0x8CBB,0x8CBC,0x8CBD,/* 0x48-0x4F */
- 0x8CBE,0x8CBF,0x8CC0,0x8CC1,0xF948,0x8CC3,0x8CC4,0x8CC5,/* 0x50-0x57 */
- 0x8CC6,0x8CC7,0xF903,0x8CC9,0x8CCA,0x8CCB,0x8CCC,0x8CCD,/* 0x58-0x5F */
+ 0x8CBE,0x8CBF,0x8CC0,0x8CC1,0x8CC2,0x8CC3,0x8CC4,0x8CC5,/* 0x50-0x57 */
+ 0x8CC6,0x8CC7,0x8CC8,0x8CC9,0x8CCA,0x8CCB,0x8CCC,0x8CCD,/* 0x58-0x5F */
0x8CCE,0x8CCF,0x8CD0,0x8CD1,0x8CD2,0x8CD3,0x8CD4,0x8CD5,/* 0x60-0x67 */
0x8CD6,0x8CD7,0x8CD8,0x8CD9,0x8CDA,0x8CDB,0x8CDC,0x8CDD,/* 0x68-0x6F */
0x8CDE,0x8CDF,0x8CE0,0x8CE1,0x8CE2,0x8CE3,0x8CE4,0x8CE5,/* 0x70-0x77 */
0x8CE6,0x8CE7,0x8CE8,0x8CE9,0x8CEA,0x8CEB,0x8CEC,0x0000,/* 0x78-0x7F */
-
+
0x8CED,0x8CEE,0x8CEF,0x8CF0,0x8CF1,0x8CF2,0x8CF3,0x8CF4,/* 0x80-0x87 */
0x8CF5,0x8CF6,0x8CF7,0x8CF8,0x8CF9,0x8CFA,0x8CFB,0x8CFC,/* 0x88-0x8F */
0x8CFD,0x8CFE,0x8CFF,0x8D00,0x8D01,0x8D02,0x8D03,0x8D04,/* 0x90-0x97 */
0x8D86,0x8D87,0x8D88,0x8D89,0x8D8C,0x8D8D,0x8D8E,0x8D8F,/* 0x68-0x6F */
0x8D90,0x8D92,0x8D93,0x8D95,0x8D96,0x8D97,0x8D98,0x8D99,/* 0x70-0x77 */
0x8D9A,0x8D9B,0x8D9C,0x8D9D,0x8D9E,0x8DA0,0x8DA1,0x0000,/* 0x78-0x7F */
-
+
0x8DA2,0x8DA4,0x8DA5,0x8DA6,0x8DA7,0x8DA8,0x8DA9,0x8DAA,/* 0x80-0x87 */
0x8DAB,0x8DAC,0x8DAD,0x8DAE,0x8DAF,0x8DB0,0x8DB2,0x8DB6,/* 0x88-0x8F */
0x8DB7,0x8DB9,0x8DBB,0x8DBD,0x8DC0,0x8DC1,0x8DC2,0x8DC5,/* 0x90-0x97 */
0x8E19,0x8E1A,0x8E1B,0x8E1C,0x8E20,0x8E21,0x8E24,0x8E25,/* 0x68-0x6F */
0x8E26,0x8E27,0x8E28,0x8E2B,0x8E2D,0x8E30,0x8E32,0x8E33,/* 0x70-0x77 */
0x8E34,0x8E36,0x8E37,0x8E38,0x8E3B,0x8E3C,0x8E3E,0x0000,/* 0x78-0x7F */
-
+
0x8E3F,0x8E43,0x8E45,0x8E46,0x8E4C,0x8E4D,0x8E4E,0x8E4F,/* 0x80-0x87 */
0x8E50,0x8E53,0x8E54,0x8E55,0x8E56,0x8E57,0x8E58,0x8E5A,/* 0x88-0x8F */
0x8E5B,0x8E5C,0x8E5D,0x8E5E,0x8E5F,0x8E60,0x8E61,0x8E62,/* 0x90-0x97 */
0x8EA7,0x8EA8,0x8EA9,0x8EAA,0x8EAD,0x8EAE,0x8EB0,0x8EB1,/* 0x68-0x6F */
0x8EB3,0x8EB4,0x8EB5,0x8EB6,0x8EB7,0x8EB8,0x8EB9,0x8EBB,/* 0x70-0x77 */
0x8EBC,0x8EBD,0x8EBE,0x8EBF,0x8EC0,0x8EC1,0x8EC2,0x0000,/* 0x78-0x7F */
-
- 0x8EC3,0x8EC4,0x8EC5,0x8EC6,0x8EC7,0x8EC8,0x8EC9,0xF902,/* 0x80-0x87 */
+
+ 0x8EC3,0x8EC4,0x8EC5,0x8EC6,0x8EC7,0x8EC8,0x8EC9,0x8ECA,/* 0x80-0x87 */
0x8ECB,0x8ECC,0x8ECD,0x8ECF,0x8ED0,0x8ED1,0x8ED2,0x8ED3,/* 0x88-0x8F */
0x8ED4,0x8ED5,0x8ED6,0x8ED7,0x8ED8,0x8ED9,0x8EDA,0x8EDB,/* 0x90-0x97 */
0x8EDC,0x8EDD,0x8EDE,0x8EDF,0x8EE0,0x8EE1,0x8EE2,0x8EE3,/* 0x98-0x9F */
0x8F0D,0x8F0E,0x8F0F,0x8F10,0x8F11,0x8F12,0x8F13,0x8F14,/* 0x68-0x6F */
0x8F15,0x8F16,0x8F17,0x8F18,0x8F19,0x8F1A,0x8F1B,0x8F1C,/* 0x70-0x77 */
0x8F1D,0x8F1E,0x8F1F,0x8F20,0x8F21,0x8F22,0x8F23,0x0000,/* 0x78-0x7F */
-
- 0x8F24,0x8F25,0xF998,0x8F27,0x8F28,0x8F29,0xF9D7,0x8F2B,/* 0x80-0x87 */
+
+ 0x8F24,0x8F25,0x8F26,0x8F27,0x8F28,0x8F29,0x8F2A,0x8F2B,/* 0x80-0x87 */
0x8F2C,0x8F2D,0x8F2E,0x8F2F,0x8F30,0x8F31,0x8F32,0x8F33,/* 0x88-0x8F */
- 0x8F34,0x8F35,0x8F36,0x8F37,0x8F38,0x8F39,0x8F3A,0xFA07,/* 0x90-0x97 */
+ 0x8F34,0x8F35,0x8F36,0x8F37,0x8F38,0x8F39,0x8F3A,0x8F3B,/* 0x90-0x97 */
0x8F3C,0x8F3D,0x8F3E,0x8F3F,0x8F40,0x8F41,0x8F42,0x8F43,/* 0x98-0x9F */
0x8F44,0x8368,0x831B,0x8369,0x836C,0x836A,0x836D,0x836E,/* 0xA0-0xA7 */
0x83B0,0x8378,0x83B3,0x83B4,0x83A0,0x83AA,0x8393,0x839C,/* 0xA8-0xAF */
0x8F45,0x8F46,0x8F47,0x8F48,0x8F49,0x8F4A,0x8F4B,0x8F4C,/* 0x40-0x47 */
0x8F4D,0x8F4E,0x8F4F,0x8F50,0x8F51,0x8F52,0x8F53,0x8F54,/* 0x48-0x4F */
0x8F55,0x8F56,0x8F57,0x8F58,0x8F59,0x8F5A,0x8F5B,0x8F5C,/* 0x50-0x57 */
- 0x8F5D,0x8F5E,0x8F5F,0x8F60,0x8F61,0xF98D,0x8F63,0x8F64,/* 0x58-0x5F */
+ 0x8F5D,0x8F5E,0x8F5F,0x8F60,0x8F61,0x8F62,0x8F63,0x8F64,/* 0x58-0x5F */
0x8F65,0x8F6A,0x8F80,0x8F8C,0x8F92,0x8F9D,0x8FA0,0x8FA1,/* 0x60-0x67 */
0x8FA2,0x8FA4,0x8FA5,0x8FA6,0x8FA7,0x8FAA,0x8FAC,0x8FAD,/* 0x68-0x6F */
0x8FAE,0x8FAF,0x8FB2,0x8FB3,0x8FB4,0x8FB5,0x8FB7,0x8FB8,/* 0x70-0x77 */
0x8FBA,0x8FBB,0x8FBC,0x8FBF,0x8FC0,0x8FC3,0x8FC6,0x0000,/* 0x78-0x7F */
-
+
0x8FC9,0x8FCA,0x8FCB,0x8FCC,0x8FCD,0x8FCF,0x8FD2,0x8FD6,/* 0x80-0x87 */
0x8FD7,0x8FDA,0x8FE0,0x8FE1,0x8FE3,0x8FE7,0x8FEC,0x8FEF,/* 0x88-0x8F */
0x8FF1,0x8FF2,0x8FF4,0x8FF5,0x8FF6,0x8FFA,0x8FFB,0x8FFC,/* 0x90-0x97 */
0x8FFE,0x8FFF,0x9007,0x9008,0x900C,0x900E,0x9013,0x9015,/* 0x98-0x9F */
- 0x9018,0x8556,0x853B,0x84FF,0xF9C2,0x8559,0x8548,0x8568,/* 0xA0-0xA7 */
+ 0x9018,0x8556,0x853B,0x84FF,0x84FC,0x8559,0x8548,0x8568,/* 0xA0-0xA7 */
0x8564,0x855E,0x857A,0x77A2,0x8543,0x8572,0x857B,0x85A4,/* 0xA8-0xAF */
0x85A8,0x8587,0x858F,0x8579,0x85AE,0x859C,0x8585,0x85B9,/* 0xB0-0xB7 */
0x85B7,0x85B0,0x85D3,0x85C1,0x85DC,0x85FF,0x8627,0x8605,/* 0xB8-0xBF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x9019,0x901C,0xF99A,0x9024,0x9025,0x9027,0x9028,0x9029,/* 0x40-0x47 */
+ 0x9019,0x901C,0x9023,0x9024,0x9025,0x9027,0x9028,0x9029,/* 0x40-0x47 */
0x902A,0x902B,0x902C,0x9030,0x9031,0x9032,0x9033,0x9034,/* 0x48-0x4F */
0x9037,0x9039,0x903A,0x903D,0x903F,0x9040,0x9043,0x9045,/* 0x50-0x57 */
0x9046,0x9048,0x9049,0x904A,0x904B,0x904C,0x904E,0x9054,/* 0x58-0x5F */
0x9055,0x9056,0x9059,0x905A,0x905C,0x905D,0x905E,0x905F,/* 0x60-0x67 */
0x9060,0x9061,0x9064,0x9066,0x9067,0x9069,0x906A,0x906B,/* 0x68-0x6F */
0x906C,0x906F,0x9070,0x9071,0x9072,0x9073,0x9076,0x9077,/* 0x70-0x77 */
- 0x9078,0x9079,0x907A,0x907B,0xF9C3,0x907E,0x9081,0x0000,/* 0x78-0x7F */
-
+ 0x9078,0x9079,0x907A,0x907B,0x907C,0x907E,0x9081,0x0000,/* 0x78-0x7F */
+
0x9084,0x9085,0x9086,0x9087,0x9089,0x908A,0x908C,0x908D,/* 0x80-0x87 */
- 0x908E,0xF913,0x9090,0x9092,0x9094,0x9096,0x9098,0x909A,/* 0x88-0x8F */
+ 0x908E,0x908F,0x9090,0x9092,0x9094,0x9096,0x9098,0x909A,/* 0x88-0x8F */
0x909C,0x909E,0x909F,0x90A0,0x90A4,0x90A5,0x90A7,0x90A8,/* 0x90-0x97 */
0x90A9,0x90AB,0x90AD,0x90B2,0x90B7,0x90BC,0x90BD,0x90BF,/* 0x98-0x9F */
0x90C0,0x647A,0x64B7,0x64B8,0x6499,0x64BA,0x64C0,0x64D0,/* 0xA0-0xA7 */
0x9105,0x9106,0x9107,0x9108,0x9109,0x910A,0x910B,0x910C,/* 0x68-0x6F */
0x910D,0x910E,0x910F,0x9110,0x9111,0x9112,0x9113,0x9114,/* 0x70-0x77 */
0x9115,0x9116,0x9117,0x9118,0x911A,0x911B,0x911C,0x0000,/* 0x78-0x7F */
-
+
0x911D,0x911F,0x9120,0x9121,0x9124,0x9125,0x9126,0x9127,/* 0x80-0x87 */
0x9128,0x9129,0x912A,0x912B,0x912C,0x912D,0x912E,0x9130,/* 0x88-0x8F */
0x9132,0x9133,0x9134,0x9135,0x9136,0x9137,0x9138,0x913A,/* 0x90-0x97 */
0x562D,0x5658,0x5639,0x5657,0x562C,0x564D,0x5662,0x5659,/* 0xD8-0xDF */
0x565C,0x564C,0x5654,0x5686,0x5664,0x5671,0x566B,0x567B,/* 0xE0-0xE7 */
0x567C,0x5685,0x5693,0x56AF,0x56D4,0x56D7,0x56DD,0x56E1,/* 0xE8-0xEF */
- 0x56F5,0x56EB,0xF9A9,0x56FF,0x5704,0x570A,0x5709,0x571C,/* 0xF0-0xF7 */
+ 0x56F5,0x56EB,0x56F9,0x56FF,0x5704,0x570A,0x5709,0x571C,/* 0xF0-0xF7 */
0x5E0F,0x5E19,0x5E14,0x5E11,0x5E31,0x5E3B,0x5E3C,0x0000,/* 0xF8-0xFF */
};
0x919C,0x919D,0x919E,0x919F,0x91A0,0x91A1,0x91A4,0x91A5,/* 0x68-0x6F */
0x91A6,0x91A7,0x91A8,0x91A9,0x91AB,0x91AC,0x91B0,0x91B1,/* 0x70-0x77 */
0x91B2,0x91B3,0x91B6,0x91B7,0x91B8,0x91B9,0x91BB,0x0000,/* 0x78-0x7F */
-
+
0x91BC,0x91BD,0x91BE,0x91BF,0x91C0,0x91C1,0x91C2,0x91C3,/* 0x80-0x87 */
0x91C4,0x91C5,0x91C6,0x91C8,0x91CB,0x91D0,0x91D2,0x91D3,/* 0x88-0x8F */
0x91D4,0x91D5,0x91D6,0x91D7,0x91D8,0x91D9,0x91DA,0x91DB,/* 0x90-0x97 */
0x920E,0x920F,0x9210,0x9211,0x9212,0x9213,0x9214,0x9215,/* 0x68-0x6F */
0x9216,0x9217,0x9218,0x9219,0x921A,0x921B,0x921C,0x921D,/* 0x70-0x77 */
0x921E,0x921F,0x9220,0x9221,0x9222,0x9223,0x9224,0x0000,/* 0x78-0x7F */
-
+
0x9225,0x9226,0x9227,0x9228,0x9229,0x922A,0x922B,0x922C,/* 0x80-0x87 */
- 0x922D,0x922E,0x922F,0x9230,0x9231,0x9232,0x9233,0xF9B1,/* 0x88-0x8F */
+ 0x922D,0x922E,0x922F,0x9230,0x9231,0x9232,0x9233,0x9234,/* 0x88-0x8F */
0x9235,0x9236,0x9237,0x9238,0x9239,0x923A,0x923B,0x923C,/* 0x90-0x97 */
0x923D,0x923E,0x923F,0x9240,0x9241,0x9242,0x9243,0x9244,/* 0x98-0x9F */
0x9245,0x72FB,0x7317,0x7313,0x7321,0x730A,0x731E,0x731D,/* 0xA0-0xA7 */
0x926E,0x926F,0x9270,0x9271,0x9272,0x9273,0x9275,0x9276,/* 0x68-0x6F */
0x9277,0x9278,0x9279,0x927A,0x927B,0x927C,0x927D,0x927E,/* 0x70-0x77 */
0x927F,0x9280,0x9281,0x9282,0x9283,0x9284,0x9285,0x0000,/* 0x78-0x7F */
-
+
0x9286,0x9287,0x9288,0x9289,0x928A,0x928B,0x928C,0x928D,/* 0x80-0x87 */
0x928F,0x9290,0x9291,0x9292,0x9293,0x9294,0x9295,0x9296,/* 0x88-0x8F */
0x9297,0x9298,0x9299,0x929A,0x929B,0x929C,0x929D,0x929E,/* 0x90-0x97 */
0x92D2,0x92D3,0x92D4,0x92D5,0x92D6,0x92D7,0x92D8,0x92D9,/* 0x68-0x6F */
0x92DA,0x92DB,0x92DC,0x92DD,0x92DE,0x92DF,0x92E0,0x92E1,/* 0x70-0x77 */
0x92E2,0x92E3,0x92E4,0x92E5,0x92E6,0x92E7,0x92E8,0x0000,/* 0x78-0x7F */
-
+
0x92E9,0x92EA,0x92EB,0x92EC,0x92ED,0x92EE,0x92EF,0x92F0,/* 0x80-0x87 */
0x92F1,0x92F2,0x92F3,0x92F4,0x92F5,0x92F6,0x92F7,0x92F8,/* 0x88-0x8F */
0x92F9,0x92FA,0x92FB,0x92FC,0x92FD,0x92FE,0x92FF,0x9300,/* 0x90-0x97 */
- 0x9301,0x9302,0x9303,0xF93F,0x9305,0x9306,0x9307,0x9308,/* 0x98-0x9F */
+ 0x9301,0x9302,0x9303,0x9304,0x9305,0x9306,0x9307,0x9308,/* 0x98-0x9F */
0x9309,0x6D39,0x6D27,0x6D0C,0x6D43,0x6D48,0x6D07,0x6D04,/* 0xA0-0xA7 */
0x6D19,0x6D0E,0x6D2B,0x6D4D,0x6D2E,0x6D35,0x6D1A,0x6D4F,/* 0xA8-0xAF */
0x6D52,0x6D54,0x6D33,0x6D91,0x6D6F,0x6D9E,0x6DA0,0x6D5E,/* 0xB0-0xB7 */
0x9332,0x9333,0x9334,0x9335,0x9336,0x9337,0x9338,0x9339,/* 0x68-0x6F */
0x933A,0x933B,0x933C,0x933D,0x933F,0x9340,0x9341,0x9342,/* 0x70-0x77 */
0x9343,0x9344,0x9345,0x9346,0x9347,0x9348,0x9349,0x0000,/* 0x78-0x7F */
-
- 0xF99B,0x934B,0x934C,0x934D,0x934E,0x934F,0x9350,0x9351,/* 0x80-0x87 */
+
+ 0x934A,0x934B,0x934C,0x934D,0x934E,0x934F,0x9350,0x9351,/* 0x80-0x87 */
0x9352,0x9353,0x9354,0x9355,0x9356,0x9357,0x9358,0x9359,/* 0x88-0x8F */
0x935A,0x935B,0x935C,0x935D,0x935E,0x935F,0x9360,0x9361,/* 0x90-0x97 */
0x9362,0x9363,0x9364,0x9365,0x9366,0x9367,0x9368,0x9369,/* 0x98-0x9F */
0x936B,0x6FC9,0x6FA7,0x6FB9,0x6FB6,0x6FC2,0x6FE1,0x6FEE,/* 0xA0-0xA7 */
0x6FDE,0x6FE0,0x6FEF,0x701A,0x7023,0x701B,0x7039,0x7035,/* 0xA8-0xAF */
0x704F,0x705E,0x5B80,0x5B84,0x5B95,0x5B93,0x5BA5,0x5BB8,/* 0xB0-0xB7 */
- 0x752F,0x9A9E,0x6434,0x5BE4,0xF9BC,0x8930,0x5BF0,0x8E47,/* 0xB8-0xBF */
+ 0x752F,0x9A9E,0x6434,0x5BE4,0x5BEE,0x8930,0x5BF0,0x8E47,/* 0xB8-0xBF */
0x8B07,0x8FB6,0x8FD3,0x8FD5,0x8FE5,0x8FEE,0x8FE4,0x8FE9,/* 0xC0-0xC7 */
0x8FE6,0x8FF3,0x8FE8,0x9005,0x9004,0x900B,0x9026,0x9011,/* 0xC8-0xCF */
0x900D,0x9016,0x9021,0x9035,0x9036,0x902D,0x902F,0x9044,/* 0xD0-0xD7 */
0x9395,0x9396,0x9397,0x9398,0x9399,0x939A,0x939B,0x939C,/* 0x68-0x6F */
0x939D,0x939E,0x939F,0x93A0,0x93A1,0x93A2,0x93A3,0x93A4,/* 0x70-0x77 */
0x93A5,0x93A6,0x93A7,0x93A8,0x93A9,0x93AA,0x93AB,0x0000,/* 0x78-0x7F */
-
+
0x93AC,0x93AD,0x93AE,0x93AF,0x93B0,0x93B1,0x93B2,0x93B3,/* 0x80-0x87 */
0x93B4,0x93B5,0x93B6,0x93B7,0x93B8,0x93B9,0x93BA,0x93BB,/* 0x88-0x8F */
0x93BC,0x93BD,0x93BE,0x93BF,0x93C0,0x93C1,0x93C2,0x93C3,/* 0x90-0x97 */
0x93F7,0x93F8,0x93F9,0x93FA,0x93FB,0x93FC,0x93FD,0x93FE,/* 0x68-0x6F */
0x93FF,0x9400,0x9401,0x9402,0x9403,0x9404,0x9405,0x9406,/* 0x70-0x77 */
0x9407,0x9408,0x9409,0x940A,0x940B,0x940C,0x940D,0x0000,/* 0x78-0x7F */
-
+
0x940E,0x940F,0x9410,0x9411,0x9412,0x9413,0x9414,0x9415,/* 0x80-0x87 */
0x9416,0x9417,0x9418,0x9419,0x941A,0x941B,0x941C,0x941D,/* 0x88-0x8F */
0x941E,0x941F,0x9420,0x9421,0x9422,0x9423,0x9424,0x9425,/* 0x90-0x97 */
0x7F32,0x7F33,0x7F35,0x5E7A,0x757F,0x5DDB,0x753E,0x9095,/* 0xD8-0xDF */
0x738E,0x7391,0x73AE,0x73A2,0x739F,0x73CF,0x73C2,0x73D1,/* 0xE0-0xE7 */
0x73B7,0x73B3,0x73C0,0x73C9,0x73C8,0x73E5,0x73D9,0x987C,/* 0xE8-0xEF */
- 0x740A,0x73E9,0x73E7,0xF917,0x73BA,0x73F2,0x740F,0x742A,/* 0xF0-0xF7 */
+ 0x740A,0x73E9,0x73E7,0x73DE,0x73BA,0x73F2,0x740F,0x742A,/* 0xF0-0xF7 */
0x745B,0x7426,0x7425,0x7428,0x7430,0x742E,0x742C,0x0000,/* 0xF8-0xFF */
};
0x9458,0x9459,0x945A,0x945B,0x945C,0x945D,0x945E,0x945F,/* 0x68-0x6F */
0x9460,0x9461,0x9462,0x9463,0x9464,0x9465,0x9466,0x9467,/* 0x70-0x77 */
0x9468,0x9469,0x946A,0x946C,0x946D,0x946E,0x946F,0x0000,/* 0x78-0x7F */
-
+
0x9470,0x9471,0x9472,0x9473,0x9474,0x9475,0x9476,0x9477,/* 0x80-0x87 */
0x9478,0x9479,0x947A,0x947B,0x947C,0x947D,0x947E,0x947F,/* 0x88-0x8F */
0x9480,0x9481,0x9482,0x9483,0x9484,0x9491,0x9496,0x9498,/* 0x90-0x97 */
0x9594,0x9595,0x9596,0x9597,0x9598,0x9599,0x959A,0x959B,/* 0x68-0x6F */
0x959C,0x959D,0x959E,0x959F,0x95A0,0x95A1,0x95A2,0x95A3,/* 0x70-0x77 */
0x95A4,0x95A5,0x95A6,0x95A7,0x95A8,0x95A9,0x95AA,0x0000,/* 0x78-0x7F */
-
- 0x95AB,0x95AC,0xF986,0x95AE,0x95AF,0x95B0,0x95B1,0x95B2,/* 0x80-0x87 */
+
+ 0x95AB,0x95AC,0x95AD,0x95AE,0x95AF,0x95B0,0x95B1,0x95B2,/* 0x80-0x87 */
0x95B3,0x95B4,0x95B5,0x95B6,0x95B7,0x95B8,0x95B9,0x95BA,/* 0x88-0x8F */
0x95BB,0x95BC,0x95BD,0x95BE,0x95BF,0x95C0,0x95C1,0x95C2,/* 0x90-0x97 */
0x95C3,0x95C4,0x95C5,0x95C6,0x95C7,0x95C8,0x95C9,0x95CA,/* 0x98-0x9F */
0x9627,0x9628,0x9629,0x962B,0x962C,0x962D,0x962F,0x9630,/* 0x68-0x6F */
0x9637,0x9638,0x9639,0x963A,0x963E,0x9641,0x9643,0x964A,/* 0x70-0x77 */
0x964E,0x964F,0x9651,0x9652,0x9653,0x9656,0x9657,0x0000,/* 0x78-0x7F */
-
+
0x9658,0x9659,0x965A,0x965C,0x965D,0x965E,0x9660,0x9663,/* 0x80-0x87 */
0x9665,0x9666,0x966B,0x966D,0x966E,0x966F,0x9670,0x9671,/* 0x88-0x8F */
- 0x9673,0xF9D3,0x9679,0x967A,0x967B,0x967C,0x967D,0x967E,/* 0x90-0x97 */
+ 0x9673,0x9678,0x9679,0x967A,0x967B,0x967C,0x967D,0x967E,/* 0x90-0x97 */
0x967F,0x9680,0x9681,0x9682,0x9683,0x9684,0x9687,0x9689,/* 0x98-0x9F */
0x968A,0x8F8D,0x8F8E,0x8F8F,0x8F98,0x8F9A,0x8ECE,0x620B,/* 0xA0-0xA7 */
0x6217,0x621B,0x621F,0x6222,0x6221,0x6225,0x6224,0x622C,/* 0xA8-0xAF */
0x969B,0x969D,0x969E,0x969F,0x96A0,0x96A1,0x96A2,0x96A3,/* 0x48-0x4F */
0x96A4,0x96A5,0x96A6,0x96A8,0x96A9,0x96AA,0x96AB,0x96AC,/* 0x50-0x57 */
0x96AD,0x96AE,0x96AF,0x96B1,0x96B2,0x96B4,0x96B5,0x96B7,/* 0x58-0x5F */
- 0xF9B8,0x96BA,0x96BB,0x96BF,0x96C2,0x96C3,0x96C8,0x96CA,/* 0x60-0x67 */
+ 0x96B8,0x96BA,0x96BB,0x96BF,0x96C2,0x96C3,0x96C8,0x96CA,/* 0x60-0x67 */
0x96CB,0x96D0,0x96D1,0x96D3,0x96D4,0x96D6,0x96D7,0x96D8,/* 0x68-0x6F */
0x96D9,0x96DA,0x96DB,0x96DC,0x96DD,0x96DE,0x96DF,0x96E1,/* 0x70-0x77 */
- 0xF9EA,0x96E3,0x96E4,0x96E5,0x96E6,0x96E7,0x96EB,0x0000,/* 0x78-0x7F */
-
+ 0x96E2,0x96E3,0x96E4,0x96E5,0x96E6,0x96E7,0x96EB,0x0000,/* 0x78-0x7F */
+
0x96EC,0x96ED,0x96EE,0x96F0,0x96F1,0x96F2,0x96F4,0x96F5,/* 0x80-0x87 */
0x96F8,0x96FA,0x96FB,0x96FC,0x96FD,0x96FF,0x9702,0x9703,/* 0x88-0x8F */
0x9705,0x970A,0x970B,0x970C,0x9710,0x9711,0x9712,0x9714,/* 0x90-0x97 */
0x9729,0x972B,0x972C,0x972E,0x972F,0x9731,0x9733,0x9734,/* 0x48-0x4F */
0x9735,0x9736,0x9737,0x973A,0x973B,0x973C,0x973D,0x973F,/* 0x50-0x57 */
0x9740,0x9741,0x9742,0x9743,0x9744,0x9745,0x9746,0x9747,/* 0x58-0x5F */
- 0xF9B3,0x9749,0x974A,0x974B,0x974C,0x974D,0x974E,0x974F,/* 0x60-0x67 */
+ 0x9748,0x9749,0x974A,0x974B,0x974C,0x974D,0x974E,0x974F,/* 0x60-0x67 */
0x9750,0x9751,0x9754,0x9755,0x9757,0x9758,0x975A,0x975C,/* 0x68-0x6F */
0x975D,0x975F,0x9763,0x9764,0x9766,0x9767,0x9768,0x976A,/* 0x70-0x77 */
0x976B,0x976C,0x976D,0x976E,0x976F,0x9770,0x9771,0x0000,/* 0x78-0x7F */
-
+
0x9772,0x9775,0x9777,0x9778,0x9779,0x977A,0x977B,0x977D,/* 0x80-0x87 */
0x977E,0x977F,0x9780,0x9781,0x9782,0x9783,0x9784,0x9786,/* 0x88-0x8F */
0x9787,0x9788,0x9789,0x978A,0x978C,0x978E,0x978F,0x9790,/* 0x90-0x97 */
0x97CD,0x97CE,0x97CF,0x97D0,0x97D1,0x97D2,0x97D3,0x97D4,/* 0x68-0x6F */
0x97D5,0x97D6,0x97D7,0x97D8,0x97D9,0x97DA,0x97DB,0x97DC,/* 0x70-0x77 */
0x97DD,0x97DE,0x97DF,0x97E0,0x97E1,0x97E2,0x97E3,0x0000,/* 0x78-0x7F */
-
+
0x97E4,0x97E5,0x97E8,0x97EE,0x97EF,0x97F0,0x97F1,0x97F2,/* 0x80-0x87 */
0x97F4,0x97F7,0x97F8,0x97F9,0x97FA,0x97FB,0x97FC,0x97FD,/* 0x88-0x8F */
0x97FE,0x97FF,0x9800,0x9801,0x9802,0x9803,0x9804,0x9805,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x980F,0x9810,0x9811,0x9812,0x9813,0x9814,0x9815,0x9816,/* 0x40-0x47 */
- 0x9817,0xF9B4,0x9819,0x981A,0x981B,0x981C,0x981D,0x981E,/* 0x48-0x4F */
+ 0x9817,0x9818,0x9819,0x981A,0x981B,0x981C,0x981D,0x981E,/* 0x48-0x4F */
0x981F,0x9820,0x9821,0x9822,0x9823,0x9824,0x9825,0x9826,/* 0x50-0x57 */
0x9827,0x9828,0x9829,0x982A,0x982B,0x982C,0x982D,0x982E,/* 0x58-0x5F */
0x982F,0x9830,0x9831,0x9832,0x9833,0x9834,0x9835,0x9836,/* 0x60-0x67 */
0x9837,0x9838,0x9839,0x983A,0x983B,0x983C,0x983D,0x983E,/* 0x68-0x6F */
0x983F,0x9840,0x9841,0x9842,0x9843,0x9844,0x9845,0x9846,/* 0x70-0x77 */
0x9847,0x9848,0x9849,0x984A,0x984B,0x984C,0x984D,0x0000,/* 0x78-0x7F */
-
+
0x984E,0x984F,0x9850,0x9851,0x9852,0x9853,0x9854,0x9855,/* 0x80-0x87 */
0x9856,0x9857,0x9858,0x9859,0x985A,0x985B,0x985C,0x985D,/* 0x88-0x8F */
- 0xF9D0,0x985F,0x9860,0x9861,0x9862,0x9863,0x9864,0x9865,/* 0x90-0x97 */
+ 0x985E,0x985F,0x9860,0x9861,0x9862,0x9863,0x9864,0x9865,/* 0x90-0x97 */
0x9866,0x9867,0x9868,0x9869,0x986A,0x986B,0x986C,0x986D,/* 0x98-0x9F */
0x986E,0x7762,0x7765,0x777F,0x778D,0x777D,0x7780,0x778C,/* 0xA0-0xA7 */
0x7791,0x779F,0x77A0,0x77B0,0x77B5,0x77BD,0x753A,0x7540,/* 0xA8-0xAF */
0x754E,0x754B,0x7548,0x755B,0x7572,0x7579,0x7583,0x7F58,/* 0xB0-0xB7 */
- 0x7F61,0x7F5F,0x8A48,0x7F68,0x7F74,0x7F71,0xF9E6,0x7F81,/* 0xB8-0xBF */
+ 0x7F61,0x7F5F,0x8A48,0x7F68,0x7F74,0x7F71,0x7F79,0x7F81,/* 0xB8-0xBF */
0x7F7E,0x76CD,0x76E5,0x8832,0x9485,0x9486,0x9487,0x948B,/* 0xC0-0xC7 */
0x948A,0x948C,0x948D,0x948F,0x9490,0x9494,0x9497,0x9495,/* 0xC8-0xCF */
0x949A,0x949B,0x949C,0x94A3,0x94A4,0x94AB,0x94AA,0x94AD,/* 0xD0-0xD7 */
0x98C4,0x98C5,0x98C6,0x98C7,0x98C8,0x98C9,0x98CA,0x98CB,/* 0x68-0x6F */
0x98CC,0x98CD,0x98CF,0x98D0,0x98D4,0x98D6,0x98D7,0x98DB,/* 0x70-0x77 */
0x98DC,0x98DD,0x98E0,0x98E1,0x98E2,0x98E3,0x98E4,0x0000,/* 0x78-0x7F */
-
+
0x98E5,0x98E6,0x98E9,0x98EA,0x98EB,0x98EC,0x98ED,0x98EE,/* 0x80-0x87 */
- 0xFA2A,0x98F0,0x98F1,0x98F2,0x98F3,0x98F4,0x98F5,0x98F6,/* 0x88-0x8F */
- 0x98F7,0x98F8,0x98F9,0x98FA,0x98FB,0xFA2B,0x98FD,0x98FE,/* 0x90-0x97 */
+ 0x98EF,0x98F0,0x98F1,0x98F2,0x98F3,0x98F4,0x98F5,0x98F6,/* 0x88-0x8F */
+ 0x98F7,0x98F8,0x98F9,0x98FA,0x98FB,0x98FC,0x98FD,0x98FE,/* 0x90-0x97 */
0x98FF,0x9900,0x9901,0x9902,0x9903,0x9904,0x9905,0x9906,/* 0x98-0x9F */
0x9907,0x94E9,0x94EB,0x94EE,0x94EF,0x94F3,0x94F4,0x94F5,/* 0xA0-0xA7 */
0x94F7,0x94F9,0x94FC,0x94FD,0x94FF,0x9503,0x9502,0x9506,/* 0xA8-0xAF */
0x9908,0x9909,0x990A,0x990B,0x990C,0x990E,0x990F,0x9911,/* 0x40-0x47 */
0x9912,0x9913,0x9914,0x9915,0x9916,0x9917,0x9918,0x9919,/* 0x48-0x4F */
0x991A,0x991B,0x991C,0x991D,0x991E,0x991F,0x9920,0x9921,/* 0x50-0x57 */
- 0x9922,0x9923,0x9924,0x9925,0x9926,0x9927,0xFA2C,0x9929,/* 0x58-0x5F */
+ 0x9922,0x9923,0x9924,0x9925,0x9926,0x9927,0x9928,0x9929,/* 0x58-0x5F */
0x992A,0x992B,0x992C,0x992D,0x992F,0x9930,0x9931,0x9932,/* 0x60-0x67 */
0x9933,0x9934,0x9935,0x9936,0x9937,0x9938,0x9939,0x993A,/* 0x68-0x6F */
0x993B,0x993C,0x993D,0x993E,0x993F,0x9940,0x9941,0x9942,/* 0x70-0x77 */
0x9943,0x9944,0x9945,0x9946,0x9947,0x9948,0x9949,0x0000,/* 0x78-0x7F */
-
+
0x994A,0x994B,0x994C,0x994D,0x994E,0x994F,0x9950,0x9951,/* 0x80-0x87 */
0x9952,0x9953,0x9956,0x9957,0x9958,0x9959,0x995A,0x995B,/* 0x88-0x8F */
0x995C,0x995D,0x995E,0x995F,0x9960,0x9961,0x9962,0x9964,/* 0x90-0x97 */
0x99C2,0x99C3,0x99C4,0x99C5,0x99C6,0x99C7,0x99C8,0x99C9,/* 0x68-0x6F */
0x99CA,0x99CB,0x99CC,0x99CD,0x99CE,0x99CF,0x99D0,0x99D1,/* 0x70-0x77 */
0x99D2,0x99D3,0x99D4,0x99D5,0x99D6,0x99D7,0x99D8,0x0000,/* 0x78-0x7F */
-
+
0x99D9,0x99DA,0x99DB,0x99DC,0x99DD,0x99DE,0x99DF,0x99E0,/* 0x80-0x87 */
0x99E1,0x99E2,0x99E3,0x99E4,0x99E5,0x99E6,0x99E7,0x99E8,/* 0x88-0x8F */
0x99E9,0x99EA,0x99EB,0x99EC,0x99ED,0x99EE,0x99EF,0x99F0,/* 0x90-0x97 */
- 0xF91A,0x99F2,0x99F3,0x99F4,0x99F5,0x99F6,0x99F7,0x99F8,/* 0x98-0x9F */
+ 0x99F1,0x99F2,0x99F3,0x99F4,0x99F5,0x99F6,0x99F7,0x99F8,/* 0x98-0x9F */
0x99F9,0x761B,0x763C,0x7622,0x7620,0x7640,0x762D,0x7630,/* 0xA0-0xA7 */
0x763F,0x7635,0x7643,0x763E,0x7633,0x764D,0x765E,0x7654,/* 0xA8-0xAF */
0x765C,0x7656,0x766B,0x766F,0x7FCA,0x7AE6,0x7A78,0x7A79,/* 0xB0-0xB7 */
0x8919,0x8913,0x891B,0x890A,0x8934,0x892B,0x8936,0x8941,/* 0xD8-0xDF */
0x8966,0x897B,0x758B,0x80E5,0x76B2,0x76B4,0x77DC,0x8012,/* 0xE0-0xE7 */
0x8014,0x8016,0x801C,0x8020,0x8022,0x8025,0x8026,0x8027,/* 0xE8-0xEF */
- 0x8029,0x8028,0x8031,0x800B,0x8035,0x8043,0xF9B0,0x804D,/* 0xF0-0xF7 */
+ 0x8029,0x8028,0x8031,0x800B,0x8035,0x8043,0x8046,0x804D,/* 0xF0-0xF7 */
0x8052,0x8069,0x8071,0x8983,0x9878,0x9880,0x9883,0x0000,/* 0xF8-0xFF */
};
0x9A22,0x9A23,0x9A24,0x9A25,0x9A26,0x9A27,0x9A28,0x9A29,/* 0x68-0x6F */
0x9A2A,0x9A2B,0x9A2C,0x9A2D,0x9A2E,0x9A2F,0x9A30,0x9A31,/* 0x70-0x77 */
0x9A32,0x9A33,0x9A34,0x9A35,0x9A36,0x9A37,0x9A38,0x0000,/* 0x78-0x7F */
-
+
0x9A39,0x9A3A,0x9A3B,0x9A3C,0x9A3D,0x9A3E,0x9A3F,0x9A40,/* 0x80-0x87 */
0x9A41,0x9A42,0x9A43,0x9A44,0x9A45,0x9A46,0x9A47,0x9A48,/* 0x88-0x8F */
0x9A49,0x9A4A,0x9A4B,0x9A4C,0x9A4D,0x9A4E,0x9A4F,0x9A50,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x9A5A,0x9A5B,0x9A5C,0x9A5D,0x9A5E,0x9A5F,0x9A60,0x9A61,/* 0x40-0x47 */
0x9A62,0x9A63,0x9A64,0x9A65,0x9A66,0x9A67,0x9A68,0x9A69,/* 0x48-0x4F */
- 0xF987,0x9A6B,0x9A72,0x9A83,0x9A89,0x9A8D,0x9A8E,0x9A94,/* 0x50-0x57 */
+ 0x9A6A,0x9A6B,0x9A72,0x9A83,0x9A89,0x9A8D,0x9A8E,0x9A94,/* 0x50-0x57 */
0x9A95,0x9A99,0x9AA6,0x9AA9,0x9AAA,0x9AAB,0x9AAC,0x9AAD,/* 0x58-0x5F */
0x9AAE,0x9AAF,0x9AB2,0x9AB3,0x9AB4,0x9AB5,0x9AB9,0x9ABB,/* 0x60-0x67 */
0x9ABD,0x9ABE,0x9ABF,0x9AC3,0x9AC4,0x9AC6,0x9AC7,0x9AC8,/* 0x68-0x6F */
0x9AC9,0x9ACA,0x9ACD,0x9ACE,0x9ACF,0x9AD0,0x9AD2,0x9AD4,/* 0x70-0x77 */
0x9AD5,0x9AD6,0x9AD7,0x9AD9,0x9ADA,0x9ADB,0x9ADC,0x0000,/* 0x78-0x7F */
-
+
0x9ADD,0x9ADE,0x9AE0,0x9AE2,0x9AE3,0x9AE4,0x9AE5,0x9AE7,/* 0x80-0x87 */
0x9AE8,0x9AE9,0x9AEA,0x9AEC,0x9AEE,0x9AF0,0x9AF1,0x9AF2,/* 0x88-0x8F */
0x9AF3,0x9AF4,0x9AF5,0x9AF6,0x9AF7,0x9AF8,0x9AFA,0x9AFC,/* 0x90-0x97 */
0x87FE,0x880A,0x881B,0x8821,0x8839,0x883C,0x7F36,0x7F42,/* 0xB8-0xBF */
0x7F44,0x7F45,0x8210,0x7AFA,0x7AFD,0x7B08,0x7B03,0x7B04,/* 0xC0-0xC7 */
0x7B15,0x7B0A,0x7B2B,0x7B0F,0x7B47,0x7B38,0x7B2A,0x7B19,/* 0xC8-0xCF */
- 0x7B2E,0x7B31,0xF9F8,0x7B25,0x7B24,0x7B33,0x7B3E,0x7B1E,/* 0xD0-0xD7 */
+ 0x7B2E,0x7B31,0x7B20,0x7B25,0x7B24,0x7B33,0x7B3E,0x7B1E,/* 0xD0-0xD7 */
0x7B58,0x7B5A,0x7B45,0x7B75,0x7B4C,0x7B5D,0x7B60,0x7B6E,/* 0xD8-0xDF */
0x7B7B,0x7B62,0x7B72,0x7B71,0x7B90,0x7BA6,0x7BA7,0x7BB8,/* 0xE0-0xE7 */
0x7BAC,0x7B9D,0x7BA8,0x7B85,0x7BAA,0x7B9C,0x7BA2,0x7BAB,/* 0xE8-0xEF */
0x9B36,0x9B37,0x9B38,0x9B39,0x9B3A,0x9B3D,0x9B3E,0x9B3F,/* 0x68-0x6F */
0x9B40,0x9B46,0x9B4A,0x9B4B,0x9B4C,0x9B4E,0x9B50,0x9B52,/* 0x70-0x77 */
0x9B53,0x9B55,0x9B56,0x9B57,0x9B58,0x9B59,0x9B5A,0x0000,/* 0x78-0x7F */
-
+
0x9B5B,0x9B5C,0x9B5D,0x9B5E,0x9B5F,0x9B60,0x9B61,0x9B62,/* 0x80-0x87 */
0x9B63,0x9B64,0x9B65,0x9B66,0x9B67,0x9B68,0x9B69,0x9B6A,/* 0x88-0x8F */
- 0x9B6B,0x9B6C,0x9B6D,0x9B6E,0xF939,0x9B70,0x9B71,0x9B72,/* 0x90-0x97 */
+ 0x9B6B,0x9B6C,0x9B6D,0x9B6E,0x9B6F,0x9B70,0x9B71,0x9B72,/* 0x90-0x97 */
0x9B73,0x9B74,0x9B75,0x9B76,0x9B77,0x9B78,0x9B79,0x9B7A,/* 0x98-0x9F */
0x9B7B,0x7C1F,0x7C2A,0x7C26,0x7C38,0x7C41,0x7C40,0x81FE,/* 0xA0-0xA7 */
0x8201,0x8202,0x8204,0x81EC,0x8844,0x8221,0x8222,0x8223,/* 0xA8-0xAF */
0x9BA4,0x9BA5,0x9BA6,0x9BA7,0x9BA8,0x9BA9,0x9BAA,0x9BAB,/* 0x68-0x6F */
0x9BAC,0x9BAD,0x9BAE,0x9BAF,0x9BB0,0x9BB1,0x9BB2,0x9BB3,/* 0x70-0x77 */
0x9BB4,0x9BB5,0x9BB6,0x9BB7,0x9BB8,0x9BB9,0x9BBA,0x0000,/* 0x78-0x7F */
-
+
0x9BBB,0x9BBC,0x9BBD,0x9BBE,0x9BBF,0x9BC0,0x9BC1,0x9BC2,/* 0x80-0x87 */
0x9BC3,0x9BC4,0x9BC5,0x9BC6,0x9BC7,0x9BC8,0x9BC9,0x9BCA,/* 0x88-0x8F */
0x9BCB,0x9BCC,0x9BCD,0x9BCE,0x9BCF,0x9BD0,0x9BD1,0x9BD2,/* 0x90-0x97 */
0x9BD3,0x9BD4,0x9BD5,0x9BD6,0x9BD7,0x9BD8,0x9BD9,0x9BDA,/* 0x98-0x9F */
0x9BDB,0x9162,0x9161,0x9170,0x9169,0x916F,0x917D,0x917E,/* 0xA0-0xA7 */
0x9172,0x9174,0x9179,0x918C,0x9185,0x9190,0x918D,0x9191,/* 0xA8-0xAF */
- 0x91A2,0x91A3,0x91AA,0x91AD,0x91AE,0x91AF,0x91B5,0xF9B7,/* 0xB0-0xB7 */
+ 0x91A2,0x91A3,0x91AA,0x91AD,0x91AE,0x91AF,0x91B5,0x91B4,/* 0xB0-0xB7 */
0x91BA,0x8C55,0x9E7E,0x8DB8,0x8DEB,0x8E05,0x8E59,0x8E69,/* 0xB8-0xBF */
0x8DB5,0x8DBF,0x8DBC,0x8DBA,0x8DC4,0x8DD6,0x8DD7,0x8DDA,/* 0xC0-0xC7 */
0x8DDE,0x8DCE,0x8DCF,0x8DDB,0x8DC6,0x8DEC,0x8DF7,0x8DF8,/* 0xC8-0xCF */
0x9C04,0x9C05,0x9C06,0x9C07,0x9C08,0x9C09,0x9C0A,0x9C0B,/* 0x68-0x6F */
0x9C0C,0x9C0D,0x9C0E,0x9C0F,0x9C10,0x9C11,0x9C12,0x9C13,/* 0x70-0x77 */
0x9C14,0x9C15,0x9C16,0x9C17,0x9C18,0x9C19,0x9C1A,0x0000,/* 0x78-0x7F */
-
+
0x9C1B,0x9C1C,0x9C1D,0x9C1E,0x9C1F,0x9C20,0x9C21,0x9C22,/* 0x80-0x87 */
0x9C23,0x9C24,0x9C25,0x9C26,0x9C27,0x9C28,0x9C29,0x9C2A,/* 0x88-0x8F */
0x9C2B,0x9C2C,0x9C2D,0x9C2E,0x9C2F,0x9C30,0x9C31,0x9C32,/* 0x90-0x97 */
0x9C3C,0x9C3D,0x9C3E,0x9C3F,0x9C40,0x9C41,0x9C42,0x9C43,/* 0x40-0x47 */
0x9C44,0x9C45,0x9C46,0x9C47,0x9C48,0x9C49,0x9C4A,0x9C4B,/* 0x48-0x4F */
0x9C4C,0x9C4D,0x9C4E,0x9C4F,0x9C50,0x9C51,0x9C52,0x9C53,/* 0x50-0x57 */
- 0x9C54,0x9C55,0x9C56,0xF9F2,0x9C58,0x9C59,0x9C5A,0x9C5B,/* 0x58-0x5F */
+ 0x9C54,0x9C55,0x9C56,0x9C57,0x9C58,0x9C59,0x9C5A,0x9C5B,/* 0x58-0x5F */
0x9C5C,0x9C5D,0x9C5E,0x9C5F,0x9C60,0x9C61,0x9C62,0x9C63,/* 0x60-0x67 */
0x9C64,0x9C65,0x9C66,0x9C67,0x9C68,0x9C69,0x9C6A,0x9C6B,/* 0x68-0x6F */
0x9C6C,0x9C6D,0x9C6E,0x9C6F,0x9C70,0x9C71,0x9C72,0x9C73,/* 0x70-0x77 */
0x9C74,0x9C75,0x9C76,0x9C77,0x9C78,0x9C79,0x9C7A,0x0000,/* 0x78-0x7F */
-
+
0x9C7B,0x9C7D,0x9C7E,0x9C80,0x9C83,0x9C84,0x9C89,0x9C8A,/* 0x80-0x87 */
0x9C8C,0x9C8F,0x9C93,0x9C96,0x9C97,0x9C98,0x9C99,0x9C9D,/* 0x88-0x8F */
0x9CAA,0x9CAC,0x9CAF,0x9CB9,0x9CBE,0x9CBF,0x9CC0,0x9CC1,/* 0x90-0x97 */
0x990D,0x992E,0x9955,0x9954,0x9ADF,0x9AE1,0x9AE6,0x9AEF,/* 0xD0-0xD7 */
0x9AEB,0x9AFB,0x9AED,0x9AF9,0x9B08,0x9B0F,0x9B13,0x9B1F,/* 0xD8-0xDF */
0x9B23,0x9EBD,0x9EBE,0x7E3B,0x9E82,0x9E87,0x9E88,0x9E8B,/* 0xE0-0xE7 */
- 0x9E92,0x93D6,0x9E9D,0xF9F3,0x9EDB,0x9EDC,0x9EDD,0x9EE0,/* 0xE8-0xEF */
+ 0x9E92,0x93D6,0x9E9D,0x9E9F,0x9EDB,0x9EDC,0x9EDD,0x9EE0,/* 0xE8-0xEF */
0x9EDF,0x9EE2,0x9EE9,0x9EE7,0x9EE5,0x9EEA,0x9EEF,0x9F22,/* 0xF0-0xF7 */
0x9F2C,0x9F2F,0x9F39,0x9F37,0x9F3D,0x9F3E,0x9F44,0x0000,/* 0xF8-0xFF */
};
0x9D0B,0x9D0C,0x9D0D,0x9D0E,0x9D0F,0x9D10,0x9D11,0x9D12,/* 0x68-0x6F */
0x9D13,0x9D14,0x9D15,0x9D16,0x9D17,0x9D18,0x9D19,0x9D1A,/* 0x70-0x77 */
0x9D1B,0x9D1C,0x9D1D,0x9D1E,0x9D1F,0x9D20,0x9D21,0x0000,/* 0x78-0x7F */
-
+
0x9D22,0x9D23,0x9D24,0x9D25,0x9D26,0x9D27,0x9D28,0x9D29,/* 0x80-0x87 */
0x9D2A,0x9D2B,0x9D2C,0x9D2D,0x9D2E,0x9D2F,0x9D30,0x9D31,/* 0x88-0x8F */
0x9D32,0x9D33,0x9D34,0x9D35,0x9D36,0x9D37,0x9D38,0x9D39,/* 0x90-0x97 */
0x9D3A,0x9D3B,0x9D3C,0x9D3D,0x9D3E,0x9D3F,0x9D40,0x9D41,/* 0x98-0x9F */
0x9D42,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA8-0xAF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xB0-0xB7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xB8-0xBF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC0-0xC7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC8-0xCF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD0-0xD7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD8-0xDF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE0-0xE7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE8-0xEF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF0-0xF7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_F9[256] = {
0x9D6B,0x9D6C,0x9D6D,0x9D6E,0x9D6F,0x9D70,0x9D71,0x9D72,/* 0x68-0x6F */
0x9D73,0x9D74,0x9D75,0x9D76,0x9D77,0x9D78,0x9D79,0x9D7A,/* 0x70-0x77 */
0x9D7B,0x9D7C,0x9D7D,0x9D7E,0x9D7F,0x9D80,0x9D81,0x0000,/* 0x78-0x7F */
-
+
0x9D82,0x9D83,0x9D84,0x9D85,0x9D86,0x9D87,0x9D88,0x9D89,/* 0x80-0x87 */
0x9D8A,0x9D8B,0x9D8C,0x9D8D,0x9D8E,0x9D8F,0x9D90,0x9D91,/* 0x88-0x8F */
0x9D92,0x9D93,0x9D94,0x9D95,0x9D96,0x9D97,0x9D98,0x9D99,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x9DA3,0x9DA4,0x9DA5,0x9DA6,0x9DA7,0x9DA8,0x9DA9,0x9DAA,/* 0x40-0x47 */
0x9DAB,0x9DAC,0x9DAD,0x9DAE,0x9DAF,0x9DB0,0x9DB1,0x9DB2,/* 0x48-0x4F */
- 0x9DB3,0xFA2D,0x9DB5,0x9DB6,0x9DB7,0x9DB8,0x9DB9,0x9DBA,/* 0x50-0x57 */
+ 0x9DB3,0x9DB4,0x9DB5,0x9DB6,0x9DB7,0x9DB8,0x9DB9,0x9DBA,/* 0x50-0x57 */
0x9DBB,0x9DBC,0x9DBD,0x9DBE,0x9DBF,0x9DC0,0x9DC1,0x9DC2,/* 0x58-0x5F */
0x9DC3,0x9DC4,0x9DC5,0x9DC6,0x9DC7,0x9DC8,0x9DC9,0x9DCA,/* 0x60-0x67 */
0x9DCB,0x9DCC,0x9DCD,0x9DCE,0x9DCF,0x9DD0,0x9DD1,0x9DD2,/* 0x68-0x6F */
0x9DD3,0x9DD4,0x9DD5,0x9DD6,0x9DD7,0x9DD8,0x9DD9,0x9DDA,/* 0x70-0x77 */
0x9DDB,0x9DDC,0x9DDD,0x9DDE,0x9DDF,0x9DE0,0x9DE1,0x0000,/* 0x78-0x7F */
-
+
0x9DE2,0x9DE3,0x9DE4,0x9DE5,0x9DE6,0x9DE7,0x9DE8,0x9DE9,/* 0x80-0x87 */
0x9DEA,0x9DEB,0x9DEC,0x9DED,0x9DEE,0x9DEF,0x9DF0,0x9DF1,/* 0x88-0x8F */
0x9DF2,0x9DF3,0x9DF4,0x9DF5,0x9DF6,0x9DF7,0x9DF8,0x9DF9,/* 0x90-0x97 */
- 0xF93A,0x9DFB,0x9DFC,0x9DFD,0x9DFE,0x9DFF,0x9E00,0x9E01,/* 0x98-0x9F */
+ 0x9DFA,0x9DFB,0x9DFC,0x9DFD,0x9DFE,0x9DFF,0x9E00,0x9E01,/* 0x98-0x9F */
0x9E02,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0x9E03,0x9E04,0x9E05,0x9E06,0x9E07,0x9E08,0x9E09,0x9E0A,/* 0x40-0x47 */
0x9E0B,0x9E0C,0x9E0D,0x9E0E,0x9E0F,0x9E10,0x9E11,0x9E12,/* 0x48-0x4F */
0x9E13,0x9E14,0x9E15,0x9E16,0x9E17,0x9E18,0x9E19,0x9E1A,/* 0x50-0x57 */
- 0x9E1B,0x9E1C,0x9E1D,0xF920,0x9E24,0x9E27,0x9E2E,0x9E30,/* 0x58-0x5F */
+ 0x9E1B,0x9E1C,0x9E1D,0x9E1E,0x9E24,0x9E27,0x9E2E,0x9E30,/* 0x58-0x5F */
0x9E34,0x9E3B,0x9E3C,0x9E40,0x9E4D,0x9E50,0x9E52,0x9E53,/* 0x60-0x67 */
0x9E54,0x9E56,0x9E59,0x9E5D,0x9E5F,0x9E60,0x9E61,0x9E62,/* 0x68-0x6F */
0x9E65,0x9E6E,0x9E6F,0x9E72,0x9E74,0x9E75,0x9E76,0x9E77,/* 0x70-0x77 */
0x9E78,0x9E79,0x9E7A,0x9E7B,0x9E7C,0x9E7D,0x9E80,0x0000,/* 0x78-0x7F */
-
+
0x9E81,0x9E83,0x9E84,0x9E85,0x9E86,0x9E89,0x9E8A,0x9E8C,/* 0x80-0x87 */
0x9E8D,0x9E8E,0x9E8F,0x9E90,0x9E91,0x9E94,0x9E95,0x9E96,/* 0x88-0x8F */
- 0xF988,0x9E98,0x9E99,0x9E9A,0x9E9B,0x9E9C,0x9E9E,0x9EA0,/* 0x90-0x97 */
+ 0x9E97,0x9E98,0x9E99,0x9E9A,0x9E9B,0x9E9C,0x9E9E,0x9EA0,/* 0x90-0x97 */
0x9EA1,0x9EA2,0x9EA3,0x9EA4,0x9EA5,0x9EA7,0x9EA8,0x9EA9,/* 0x98-0x9F */
0x9EAA,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0x9EE8,0x9EEB,0x9EEC,0x9EED,0x9EEE,0x9EF0,0x9EF1,0x9EF2,/* 0x68-0x6F */
0x9EF3,0x9EF4,0x9EF5,0x9EF6,0x9EF7,0x9EF8,0x9EFA,0x9EFD,/* 0x70-0x77 */
0x9EFF,0x9F00,0x9F01,0x9F02,0x9F03,0x9F04,0x9F05,0x0000,/* 0x78-0x7F */
-
+
0x9F06,0x9F07,0x9F08,0x9F09,0x9F0A,0x9F0C,0x9F0F,0x9F11,/* 0x80-0x87 */
0x9F12,0x9F14,0x9F15,0x9F16,0x9F18,0x9F1A,0x9F1B,0x9F1C,/* 0x88-0x8F */
0x9F1D,0x9F1E,0x9F1F,0x9F21,0x9F23,0x9F24,0x9F25,0x9F26,/* 0x90-0x97 */
0x9F62,0x9F63,0x9F64,0x9F65,0x9F66,0x9F67,0x9F68,0x9F69,/* 0x68-0x6F */
0x9F6A,0x9F6B,0x9F6C,0x9F6D,0x9F6E,0x9F6F,0x9F70,0x9F71,/* 0x70-0x77 */
0x9F72,0x9F73,0x9F74,0x9F75,0x9F76,0x9F77,0x9F78,0x0000,/* 0x78-0x7F */
-
+
0x9F79,0x9F7A,0x9F7B,0x9F7C,0x9F7D,0x9F7E,0x9F81,0x9F82,/* 0x80-0x87 */
- 0xF9C4,0x9F8E,0x9F8F,0x9F90,0x9F91,0x9F92,0x9F93,0x9F94,/* 0x88-0x8F */
- 0x9F95,0x9F96,0x9F97,0x9F98,0xF908,0x9F9D,0x9F9E,0x9FA1,/* 0x90-0x97 */
+ 0x9F8D,0x9F8E,0x9F8F,0x9F90,0x9F91,0x9F92,0x9F93,0x9F94,/* 0x88-0x8F */
+ 0x9F95,0x9F96,0x9F97,0x9F98,0x9F9C,0x9F9D,0x9F9E,0x9FA1,/* 0x90-0x97 */
0x9FA2,0x9FA3,0x9FA4,0x9FA5,0xF92C,0xF979,0xF995,0xF9E7,/* 0x98-0x9F */
0xF9F1,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
};
0xAC3F,0xAC41,0xAC42,0xAC43,0xAC44,0xAC45,0xAC46,0xAC47,/* 0x68-0x6F */
0xAC48,0xAC49,0xAC4A,0xAC4C,0xAC4E,0xAC4F,0xAC50,0xAC51,/* 0x70-0x77 */
0xAC52,0xAC53,0xAC55,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xAC56,0xAC57,0xAC59,0xAC5A,0xAC5B,0xAC5D,0xAC5E,/* 0x80-0x87 */
0xAC5F,0xAC60,0xAC61,0xAC62,0xAC63,0xAC64,0xAC65,0xAC66,/* 0x88-0x8F */
0xAC67,0xAC68,0xAC69,0xAC6A,0xAC6B,0xAC6C,0xAC6D,0xAC6E,/* 0x90-0x97 */
0xAD3F,0xAD40,0xAD41,0xAD42,0xAD43,0xAD46,0xAD48,0xAD4A,/* 0x68-0x6F */
0xAD4B,0xAD4C,0xAD4D,0xAD4E,0xAD4F,0xAD51,0xAD52,0xAD53,/* 0x70-0x77 */
0xAD55,0xAD56,0xAD57,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xAD59,0xAD5A,0xAD5B,0xAD5C,0xAD5D,0xAD5E,0xAD5F,/* 0x80-0x87 */
0xAD60,0xAD62,0xAD64,0xAD65,0xAD66,0xAD67,0xAD68,0xAD69,/* 0x88-0x8F */
0xAD6A,0xAD6B,0xAD6E,0xAD6F,0xAD71,0xAD72,0xAD77,0xAD78,/* 0x90-0x97 */
0xAE24,0xAE25,0xAE26,0xAE27,0xAE28,0xAE29,0xAE2A,0xAE2B,/* 0x68-0x6F */
0xAE2C,0xAE2D,0xAE2E,0xAE2F,0xAE32,0xAE33,0xAE35,0xAE36,/* 0x70-0x77 */
0xAE39,0xAE3B,0xAE3C,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xAE3D,0xAE3E,0xAE3F,0xAE42,0xAE44,0xAE47,0xAE48,/* 0x80-0x87 */
0xAE49,0xAE4B,0xAE4F,0xAE51,0xAE52,0xAE53,0xAE55,0xAE57,/* 0x88-0x8F */
0xAE58,0xAE59,0xAE5A,0xAE5B,0xAE5E,0xAE62,0xAE63,0xAE64,/* 0x90-0x97 */
0xAF11,0xAF12,0xAF13,0xAF14,0xAF15,0xAF16,0xAF17,0xAF18,/* 0x68-0x6F */
0xAF19,0xAF1A,0xAF1B,0xAF1C,0xAF1D,0xAF1E,0xAF1F,0xAF20,/* 0x70-0x77 */
0xAF21,0xAF22,0xAF23,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xAF24,0xAF25,0xAF26,0xAF27,0xAF28,0xAF29,0xAF2A,/* 0x80-0x87 */
0xAF2B,0xAF2E,0xAF2F,0xAF31,0xAF33,0xAF35,0xAF36,0xAF37,/* 0x88-0x8F */
0xAF38,0xAF39,0xAF3A,0xAF3B,0xAF3E,0xAF40,0xAF44,0xAF45,/* 0x90-0x97 */
0xAFEB,0xAFEC,0xAFED,0xAFEE,0xAFEF,0xAFF2,0xAFF3,0xAFF5,/* 0x68-0x6F */
0xAFF6,0xAFF7,0xAFF9,0xAFFA,0xAFFB,0xAFFC,0xAFFD,0xAFFE,/* 0x70-0x77 */
0xAFFF,0xB002,0xB003,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB005,0xB006,0xB007,0xB008,0xB009,0xB00A,0xB00B,/* 0x80-0x87 */
0xB00D,0xB00E,0xB00F,0xB011,0xB012,0xB013,0xB015,0xB016,/* 0x88-0x8F */
0xB017,0xB018,0xB019,0xB01A,0xB01B,0xB01E,0xB01F,0xB020,/* 0x90-0x97 */
0xB0DC,0xB0DD,0xB0DE,0xB0DF,0xB0E1,0xB0E2,0xB0E3,0xB0E4,/* 0x68-0x6F */
0xB0E6,0xB0E7,0xB0E8,0xB0E9,0xB0EA,0xB0EB,0xB0EC,0xB0ED,/* 0x70-0x77 */
0xB0EE,0xB0EF,0xB0F0,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB0F1,0xB0F2,0xB0F3,0xB0F4,0xB0F5,0xB0F6,0xB0F7,/* 0x80-0x87 */
0xB0F8,0xB0F9,0xB0FA,0xB0FB,0xB0FC,0xB0FD,0xB0FE,0xB0FF,/* 0x88-0x8F */
0xB100,0xB101,0xB102,0xB103,0xB104,0xB105,0xB106,0xB107,/* 0x90-0x97 */
0xB1C0,0xB1C1,0xB1C2,0xB1C3,0xB1C4,0xB1C5,0xB1C6,0xB1C7,/* 0x68-0x6F */
0xB1C8,0xB1C9,0xB1CA,0xB1CB,0xB1CD,0xB1CE,0xB1CF,0xB1D1,/* 0x70-0x77 */
0xB1D2,0xB1D3,0xB1D5,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB1D6,0xB1D7,0xB1D8,0xB1D9,0xB1DA,0xB1DB,0xB1DE,/* 0x80-0x87 */
0xB1E0,0xB1E1,0xB1E2,0xB1E3,0xB1E4,0xB1E5,0xB1E6,0xB1E7,/* 0x88-0x8F */
0xB1EA,0xB1EB,0xB1ED,0xB1EE,0xB1EF,0xB1F1,0xB1F2,0xB1F3,/* 0x90-0x97 */
0xB29C,0xB29D,0xB29E,0xB29F,0xB2A2,0xB2A4,0xB2A7,0xB2A8,/* 0x68-0x6F */
0xB2A9,0xB2AB,0xB2AD,0xB2AE,0xB2AF,0xB2B1,0xB2B2,0xB2B3,/* 0x70-0x77 */
0xB2B5,0xB2B6,0xB2B7,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB2B8,0xB2B9,0xB2BA,0xB2BB,0xB2BC,0xB2BD,0xB2BE,/* 0x80-0x87 */
0xB2BF,0xB2C0,0xB2C1,0xB2C2,0xB2C3,0xB2C4,0xB2C5,0xB2C6,/* 0x88-0x8F */
0xB2C7,0xB2CA,0xB2CB,0xB2CD,0xB2CE,0xB2CF,0xB2D1,0xB2D3,/* 0x90-0x97 */
0xB397,0xB398,0xB399,0xB39A,0xB39B,0xB39C,0xB39D,0xB39E,/* 0x68-0x6F */
0xB39F,0xB3A2,0xB3A3,0xB3A4,0xB3A5,0xB3A6,0xB3A7,0xB3A9,/* 0x70-0x77 */
0xB3AA,0xB3AB,0xB3AD,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB3AE,0xB3AF,0xB3B0,0xB3B1,0xB3B2,0xB3B3,0xB3B4,/* 0x80-0x87 */
0xB3B5,0xB3B6,0xB3B7,0xB3B8,0xB3B9,0xB3BA,0xB3BB,0xB3BC,/* 0x88-0x8F */
0xB3BD,0xB3BE,0xB3BF,0xB3C0,0xB3C1,0xB3C2,0xB3C3,0xB3C6,/* 0x90-0x97 */
0xB46F,0xB470,0xB471,0xB472,0xB473,0xB474,0xB475,0xB476,/* 0x68-0x6F */
0xB477,0xB478,0xB479,0xB47A,0xB47B,0xB47C,0xB47D,0xB47E,/* 0x70-0x77 */
0xB47F,0xB481,0xB482,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB483,0xB484,0xB485,0xB486,0xB487,0xB489,0xB48A,/* 0x80-0x87 */
0xB48B,0xB48C,0xB48D,0xB48E,0xB48F,0xB490,0xB491,0xB492,/* 0x88-0x8F */
0xB493,0xB494,0xB495,0xB496,0xB497,0xB498,0xB499,0xB49A,/* 0x90-0x97 */
0xB552,0xB553,0xB555,0xB556,0xB557,0xB558,0xB559,0xB55A,/* 0x68-0x6F */
0xB55B,0xB55E,0xB562,0xB563,0xB564,0xB565,0xB566,0xB567,/* 0x70-0x77 */
0xB568,0xB569,0xB56A,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB56B,0xB56C,0xB56D,0xB56E,0xB56F,0xB570,0xB571,/* 0x80-0x87 */
0xB572,0xB573,0xB574,0xB575,0xB576,0xB577,0xB578,0xB579,/* 0x88-0x8F */
0xB57A,0xB57B,0xB57C,0xB57D,0xB57E,0xB57F,0xB580,0xB581,/* 0x90-0x97 */
0xB626,0xB627,0xB628,0xB629,0xB62A,0xB62B,0xB62D,0xB62E,/* 0x68-0x6F */
0xB62F,0xB630,0xB631,0xB632,0xB633,0xB635,0xB636,0xB637,/* 0x70-0x77 */
0xB638,0xB639,0xB63A,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB63B,0xB63C,0xB63D,0xB63E,0xB63F,0xB640,0xB641,/* 0x80-0x87 */
0xB642,0xB643,0xB644,0xB645,0xB646,0xB647,0xB649,0xB64A,/* 0x88-0x8F */
0xB64B,0xB64C,0xB64D,0xB64E,0xB64F,0xB650,0xB651,0xB652,/* 0x90-0x97 */
0xB6E5,0xB6E6,0xB6E7,0xB6E8,0xB6E9,0xB6EA,0xB6EB,0xB6EC,/* 0x68-0x6F */
0xB6ED,0xB6EE,0xB6EF,0xB6F1,0xB6F2,0xB6F3,0xB6F5,0xB6F6,/* 0x70-0x77 */
0xB6F7,0xB6F9,0xB6FA,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB6FB,0xB6FC,0xB6FD,0xB6FE,0xB6FF,0xB702,0xB703,/* 0x80-0x87 */
0xB704,0xB706,0xB707,0xB708,0xB709,0xB70A,0xB70B,0xB70C,/* 0x88-0x8F */
0xB70D,0xB70E,0xB70F,0xB710,0xB711,0xB712,0xB713,0xB714,/* 0x90-0x97 */
0xB7CB,0xB7CC,0xB7CD,0xB7CE,0xB7CF,0xB7D0,0xB7D1,0xB7D2,/* 0x68-0x6F */
0xB7D3,0xB7D4,0xB7D5,0xB7D6,0xB7D7,0xB7D8,0xB7D9,0xB7DA,/* 0x70-0x77 */
0xB7DB,0xB7DC,0xB7DD,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB7DE,0xB7DF,0xB7E0,0xB7E1,0xB7E2,0xB7E3,0xB7E4,/* 0x80-0x87 */
0xB7E5,0xB7E6,0xB7E7,0xB7E8,0xB7E9,0xB7EA,0xB7EB,0xB7EE,/* 0x88-0x8F */
0xB7EF,0xB7F1,0xB7F2,0xB7F3,0xB7F5,0xB7F6,0xB7F7,0xB7F8,/* 0x90-0x97 */
0xB8A7,0xB8A9,0xB8AA,0xB8AB,0xB8AC,0xB8AD,0xB8AE,0xB8AF,/* 0x68-0x6F */
0xB8B1,0xB8B2,0xB8B3,0xB8B5,0xB8B6,0xB8B7,0xB8B9,0xB8BA,/* 0x70-0x77 */
0xB8BB,0xB8BC,0xB8BD,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB8BE,0xB8BF,0xB8C2,0xB8C4,0xB8C6,0xB8C7,0xB8C8,/* 0x80-0x87 */
0xB8C9,0xB8CA,0xB8CB,0xB8CD,0xB8CE,0xB8CF,0xB8D1,0xB8D2,/* 0x88-0x8F */
0xB8D3,0xB8D5,0xB8D6,0xB8D7,0xB8D8,0xB8D9,0xB8DA,0xB8DB,/* 0x90-0x97 */
0xB988,0xB98B,0xB98C,0xB98F,0xB990,0xB991,0xB992,0xB993,/* 0x68-0x6F */
0xB994,0xB995,0xB996,0xB997,0xB998,0xB999,0xB99A,0xB99B,/* 0x70-0x77 */
0xB99C,0xB99D,0xB99E,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xB99F,0xB9A0,0xB9A1,0xB9A2,0xB9A3,0xB9A4,0xB9A5,/* 0x80-0x87 */
0xB9A6,0xB9A7,0xB9A8,0xB9A9,0xB9AA,0xB9AB,0xB9AE,0xB9AF,/* 0x88-0x8F */
0xB9B1,0xB9B2,0xB9B3,0xB9B5,0xB9B6,0xB9B7,0xB9B8,0xB9B9,/* 0x90-0x97 */
0xBA7B,0xBA7C,0xBA7D,0xBA7E,0xBA7F,0xBA80,0xBA81,0xBA82,/* 0x68-0x6F */
0xBA86,0xBA88,0xBA89,0xBA8A,0xBA8B,0xBA8D,0xBA8E,0xBA8F,/* 0x70-0x77 */
0xBA90,0xBA91,0xBA92,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBA93,0xBA94,0xBA95,0xBA96,0xBA97,0xBA98,0xBA99,/* 0x80-0x87 */
0xBA9A,0xBA9B,0xBA9C,0xBA9D,0xBA9E,0xBA9F,0xBAA0,0xBAA1,/* 0x88-0x8F */
0xBAA2,0xBAA3,0xBAA4,0xBAA5,0xBAA6,0xBAA7,0xBAAA,0xBAAD,/* 0x90-0x97 */
0xBB5C,0xBB5D,0xBB5E,0xBB5F,0xBB60,0xBB62,0xBB64,0xBB65,/* 0x68-0x6F */
0xBB66,0xBB67,0xBB68,0xBB69,0xBB6A,0xBB6B,0xBB6D,0xBB6E,/* 0x70-0x77 */
0xBB6F,0xBB70,0xBB71,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBB72,0xBB73,0xBB74,0xBB75,0xBB76,0xBB77,0xBB78,/* 0x80-0x87 */
0xBB79,0xBB7A,0xBB7B,0xBB7C,0xBB7D,0xBB7E,0xBB7F,0xBB80,/* 0x88-0x8F */
0xBB81,0xBB82,0xBB83,0xBB84,0xBB85,0xBB86,0xBB87,0xBB89,/* 0x90-0x97 */
0xBC3E,0xBC3F,0xBC42,0xBC46,0xBC47,0xBC48,0xBC4A,0xBC4B,/* 0x68-0x6F */
0xBC4E,0xBC4F,0xBC51,0xBC52,0xBC53,0xBC54,0xBC55,0xBC56,/* 0x70-0x77 */
0xBC57,0xBC58,0xBC59,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBC5A,0xBC5B,0xBC5C,0xBC5E,0xBC5F,0xBC60,0xBC61,/* 0x80-0x87 */
0xBC62,0xBC63,0xBC64,0xBC65,0xBC66,0xBC67,0xBC68,0xBC69,/* 0x88-0x8F */
0xBC6A,0xBC6B,0xBC6C,0xBC6D,0xBC6E,0xBC6F,0xBC70,0xBC71,/* 0x90-0x97 */
0xBD26,0xBD27,0xBD28,0xBD29,0xBD2A,0xBD2B,0xBD2D,0xBD2E,/* 0x68-0x6F */
0xBD2F,0xBD30,0xBD31,0xBD32,0xBD33,0xBD34,0xBD35,0xBD36,/* 0x70-0x77 */
0xBD37,0xBD38,0xBD39,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBD3A,0xBD3B,0xBD3C,0xBD3D,0xBD3E,0xBD3F,0xBD41,/* 0x80-0x87 */
0xBD42,0xBD43,0xBD44,0xBD45,0xBD46,0xBD47,0xBD4A,0xBD4B,/* 0x88-0x8F */
0xBD4D,0xBD4E,0xBD4F,0xBD51,0xBD52,0xBD53,0xBD54,0xBD55,/* 0x90-0x97 */
0xBDFB,0xBDFC,0xBDFD,0xBDFE,0xBDFF,0xBE01,0xBE02,0xBE04,/* 0x68-0x6F */
0xBE06,0xBE07,0xBE08,0xBE09,0xBE0A,0xBE0B,0xBE0E,0xBE0F,/* 0x70-0x77 */
0xBE11,0xBE12,0xBE13,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBE15,0xBE16,0xBE17,0xBE18,0xBE19,0xBE1A,0xBE1B,/* 0x80-0x87 */
0xBE1E,0xBE20,0xBE21,0xBE22,0xBE23,0xBE24,0xBE25,0xBE26,/* 0x88-0x8F */
0xBE27,0xBE28,0xBE29,0xBE2A,0xBE2B,0xBE2C,0xBE2D,0xBE2E,/* 0x90-0x97 */
0xBEDE,0xBEDF,0xBEE1,0xBEE2,0xBEE6,0xBEE7,0xBEE8,0xBEE9,/* 0x68-0x6F */
0xBEEA,0xBEEB,0xBEED,0xBEEE,0xBEEF,0xBEF0,0xBEF1,0xBEF2,/* 0x70-0x77 */
0xBEF3,0xBEF4,0xBEF5,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBEF6,0xBEF7,0xBEF8,0xBEF9,0xBEFA,0xBEFB,0xBEFC,/* 0x80-0x87 */
0xBEFD,0xBEFE,0xBEFF,0xBF00,0xBF02,0xBF03,0xBF04,0xBF05,/* 0x88-0x8F */
0xBF06,0xBF07,0xBF0A,0xBF0B,0xBF0C,0xBF0D,0xBF0E,0xBF0F,/* 0x90-0x97 */
0xBFA5,0xBFA6,0xBFA7,0xBFA8,0xBFA9,0xBFAA,0xBFAB,0xBFAC,/* 0x68-0x6F */
0xBFAD,0xBFAE,0xBFAF,0xBFB1,0xBFB2,0xBFB3,0xBFB4,0xBFB5,/* 0x70-0x77 */
0xBFB6,0xBFB7,0xBFB8,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xBFB9,0xBFBA,0xBFBB,0xBFBC,0xBFBD,0xBFBE,0xBFBF,/* 0x80-0x87 */
0xBFC0,0xBFC1,0xBFC2,0xBFC3,0xBFC4,0xBFC6,0xBFC7,0xBFC8,/* 0x88-0x8F */
0xBFC9,0xBFCA,0xBFCB,0xBFCE,0xBFCF,0xBFD1,0xBFD2,0xBFD3,/* 0x90-0x97 */
0xC065,0xC066,0xC067,0xC06A,0xC06B,0xC06C,0xC06D,0xC06E,/* 0x68-0x6F */
0xC06F,0xC070,0xC071,0xC072,0xC073,0xC074,0xC075,0xC076,/* 0x70-0x77 */
0xC077,0xC078,0xC079,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC07A,0xC07B,0xC07C,0xC07D,0xC07E,0xC07F,0xC080,/* 0x80-0x87 */
0xC081,0xC082,0xC083,0xC084,0xC085,0xC086,0xC087,0xC088,/* 0x88-0x8F */
0xC089,0xC08A,0xC08B,0xC08C,0xC08D,0xC08E,0xC08F,0xC092,/* 0x90-0x97 */
0xC161,0xC162,0xC163,0xC166,0xC16A,0xC16B,0xC16C,0xC16D,/* 0x68-0x6F */
0xC16E,0xC16F,0xC171,0xC172,0xC173,0xC175,0xC176,0xC177,/* 0x70-0x77 */
0xC179,0xC17A,0xC17B,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC17C,0xC17D,0xC17E,0xC17F,0xC180,0xC181,0xC182,/* 0x80-0x87 */
0xC183,0xC184,0xC186,0xC187,0xC188,0xC189,0xC18A,0xC18B,/* 0x88-0x8F */
0xC18F,0xC191,0xC192,0xC193,0xC195,0xC197,0xC198,0xC199,/* 0x90-0x97 */
0xC24E,0xC24F,0xC252,0xC253,0xC255,0xC256,0xC257,0xC259,/* 0x68-0x6F */
0xC25A,0xC25B,0xC25C,0xC25D,0xC25E,0xC25F,0xC261,0xC262,/* 0x70-0x77 */
0xC263,0xC264,0xC266,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC267,0xC268,0xC269,0xC26A,0xC26B,0xC26E,0xC26F,/* 0x80-0x87 */
0xC271,0xC272,0xC273,0xC275,0xC276,0xC277,0xC278,0xC279,/* 0x88-0x8F */
0xC27A,0xC27B,0xC27E,0xC280,0xC282,0xC283,0xC284,0xC285,/* 0x90-0x97 */
0xC33A,0xC33B,0xC33C,0xC33D,0xC33E,0xC33F,0xC340,0xC341,/* 0x68-0x6F */
0xC342,0xC343,0xC344,0xC346,0xC347,0xC348,0xC349,0xC34A,/* 0x70-0x77 */
0xC34B,0xC34C,0xC34D,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC34E,0xC34F,0xC350,0xC351,0xC352,0xC353,0xC354,/* 0x80-0x87 */
0xC355,0xC356,0xC357,0xC358,0xC359,0xC35A,0xC35B,0xC35C,/* 0x88-0x8F */
0xC35D,0xC35E,0xC35F,0xC360,0xC361,0xC362,0xC363,0xC364,/* 0x90-0x97 */
0xC406,0xC407,0xC409,0xC40A,0xC40B,0xC40C,0xC40D,0xC40E,/* 0x68-0x6F */
0xC40F,0xC411,0xC412,0xC413,0xC414,0xC415,0xC416,0xC417,/* 0x70-0x77 */
0xC418,0xC419,0xC41A,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC41B,0xC41C,0xC41D,0xC41E,0xC41F,0xC420,0xC421,/* 0x80-0x87 */
0xC422,0xC423,0xC425,0xC426,0xC427,0xC428,0xC429,0xC42A,/* 0x88-0x8F */
0xC42B,0xC42D,0xC42E,0xC42F,0xC431,0xC432,0xC433,0xC435,/* 0x90-0x97 */
0xC4CD,0xC4CE,0xC4CF,0xC4D0,0xC4D1,0xC4D2,0xC4D3,0xC4D4,/* 0x68-0x6F */
0xC4D5,0xC4D6,0xC4D7,0xC4D8,0xC4D9,0xC4DA,0xC4DB,0xC4DC,/* 0x70-0x77 */
0xC4DD,0xC4DE,0xC4DF,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC4E0,0xC4E1,0xC4E2,0xC4E3,0xC4E4,0xC4E5,0xC4E6,/* 0x80-0x87 */
0xC4E7,0xC4E8,0xC4EA,0xC4EB,0xC4EC,0xC4ED,0xC4EE,0xC4EF,/* 0x88-0x8F */
0xC4F2,0xC4F3,0xC4F5,0xC4F6,0xC4F7,0xC4F9,0xC4FB,0xC4FC,/* 0x90-0x97 */
0xC5CB,0xC5CD,0xC5CF,0xC5D2,0xC5D3,0xC5D5,0xC5D6,0xC5D7,/* 0x68-0x6F */
0xC5D9,0xC5DA,0xC5DB,0xC5DC,0xC5DD,0xC5DE,0xC5DF,0xC5E2,/* 0x70-0x77 */
0xC5E4,0xC5E6,0xC5E7,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC5E8,0xC5E9,0xC5EA,0xC5EB,0xC5EF,0xC5F1,0xC5F2,/* 0x80-0x87 */
0xC5F3,0xC5F5,0xC5F8,0xC5F9,0xC5FA,0xC5FB,0xC602,0xC603,/* 0x88-0x8F */
0xC604,0xC609,0xC60A,0xC60B,0xC60D,0xC60E,0xC60F,0xC611,/* 0x90-0x97 */
0xC6D8,0xC6D9,0xC6DA,0xC6DB,0xC6DE,0xC6DF,0xC6E2,0xC6E3,/* 0x68-0x6F */
0xC6E4,0xC6E5,0xC6E6,0xC6E7,0xC6EA,0xC6EB,0xC6ED,0xC6EE,/* 0x70-0x77 */
0xC6EF,0xC6F1,0xC6F2,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC6F3,0xC6F4,0xC6F5,0xC6F6,0xC6F7,0xC6FA,0xC6FB,/* 0x80-0x87 */
0xC6FC,0xC6FE,0xC6FF,0xC700,0xC701,0xC702,0xC703,0xC706,/* 0x88-0x8F */
0xC707,0xC709,0xC70A,0xC70B,0xC70D,0xC70E,0xC70F,0xC710,/* 0x90-0x97 */
0xC7E6,0xC7E7,0xC7E9,0xC7EA,0xC7EB,0xC7ED,0xC7EE,0xC7EF,/* 0x68-0x6F */
0xC7F0,0xC7F1,0xC7F2,0xC7F3,0xC7F4,0xC7F5,0xC7F6,0xC7F7,/* 0x70-0x77 */
0xC7F8,0xC7F9,0xC7FA,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC7FB,0xC7FC,0xC7FD,0xC7FE,0xC7FF,0xC802,0xC803,/* 0x80-0x87 */
0xC805,0xC806,0xC807,0xC809,0xC80B,0xC80C,0xC80D,0xC80E,/* 0x88-0x8F */
0xC80F,0xC812,0xC814,0xC817,0xC818,0xC819,0xC81A,0xC81B,/* 0x90-0x97 */
0xC8CB,0xC8CD,0xC8CE,0xC8CF,0xC8D0,0xC8D1,0xC8D2,0xC8D3,/* 0x68-0x6F */
0xC8D6,0xC8D8,0xC8DA,0xC8DB,0xC8DC,0xC8DD,0xC8DE,0xC8DF,/* 0x70-0x77 */
0xC8E2,0xC8E3,0xC8E5,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC8E6,0xC8E7,0xC8E8,0xC8E9,0xC8EA,0xC8EB,0xC8EC,/* 0x80-0x87 */
0xC8ED,0xC8EE,0xC8EF,0xC8F0,0xC8F1,0xC8F2,0xC8F3,0xC8F4,/* 0x88-0x8F */
0xC8F6,0xC8F7,0xC8F8,0xC8F9,0xC8FA,0xC8FB,0xC8FE,0xC8FF,/* 0x90-0x97 */
0xC935,0xC936,0xC937,0xC938,0xC939,0xC93A,0xC93B,0xC93C,/* 0x68-0x6F */
0xC93D,0xC93E,0xC93F,0xC940,0xC941,0xC942,0xC943,0xC944,/* 0x70-0x77 */
0xC945,0xC946,0xC947,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC948,0xC949,0xC94A,0xC94B,0xC94C,0xC94D,0xC94E,/* 0x80-0x87 */
0xC94F,0xC952,0xC953,0xC955,0xC956,0xC957,0xC959,0xC95A,/* 0x88-0x8F */
0xC95B,0xC95C,0xC95D,0xC95E,0xC95F,0xC962,0xC964,0xC965,/* 0x90-0x97 */
0x25A5,0x25A8,0x25A7,0x25A6,0x25A9,0x2668,0x260F,0x260E,/* 0xC8-0xCF */
0x261C,0x261E,0x00B6,0x2020,0x2021,0x2195,0x2197,0x2199,/* 0xD0-0xD7 */
0x2196,0x2198,0x266D,0x2669,0x266A,0x266C,0x327F,0x321C,/* 0xD8-0xDF */
- 0x2116,0x33C7,0x2122,0x33C2,0x33D8,0x2121,0x0000,0x0000,/* 0xE0-0xE7 */
+ 0x2116,0x33C7,0x2122,0x33C2,0x33D8,0x2121,0x20AC,0x00AE,/* 0xE0-0xE7 */
};
static wchar_t c2u_A3[256] = {
0xC99A,0xC99C,0xC99E,0xC99F,0xC9A0,0xC9A1,0xC9A2,0xC9A3,/* 0x68-0x6F */
0xC9A4,0xC9A5,0xC9A6,0xC9A7,0xC9A8,0xC9A9,0xC9AA,0xC9AB,/* 0x70-0x77 */
0xC9AC,0xC9AD,0xC9AE,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xC9AF,0xC9B0,0xC9B1,0xC9B2,0xC9B3,0xC9B4,0xC9B5,/* 0x80-0x87 */
0xC9B6,0xC9B7,0xC9B8,0xC9B9,0xC9BA,0xC9BB,0xC9BC,0xC9BD,/* 0x88-0x8F */
0xC9BE,0xC9BF,0xC9C2,0xC9C3,0xC9C5,0xC9C6,0xC9C9,0xC9CB,/* 0x90-0x97 */
0xCA11,0xCA12,0xCA13,0xCA15,0xCA16,0xCA17,0xCA19,0xCA1A,/* 0x68-0x6F */
0xCA1B,0xCA1C,0xCA1D,0xCA1E,0xCA1F,0xCA20,0xCA21,0xCA22,/* 0x70-0x77 */
0xCA23,0xCA24,0xCA25,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCA26,0xCA27,0xCA28,0xCA2A,0xCA2B,0xCA2C,0xCA2D,/* 0x80-0x87 */
0xCA2E,0xCA2F,0xCA30,0xCA31,0xCA32,0xCA33,0xCA34,0xCA35,/* 0x88-0x8F */
0xCA36,0xCA37,0xCA38,0xCA39,0xCA3A,0xCA3B,0xCA3C,0xCA3D,/* 0x90-0x97 */
0xCA3E,0xCA3F,0xCA40,0xCA41,0xCA42,0xCA43,0xCA44,0xCA45,/* 0x98-0x9F */
- 0xCA46,0xFFA1,0xFFA2,0xFFA3,0xFFA4,0xFFA5,0xFFA6,0xFFA7,/* 0xA0-0xA7 */
- 0xFFA8,0xFFA9,0xFFAA,0xFFAB,0xFFAC,0xFFAD,0xFFAE,0xFFAF,/* 0xA8-0xAF */
- 0xFFB0,0xFFB1,0xFFB2,0xFFB3,0xFFB4,0xFFB5,0xFFB6,0xFFB7,/* 0xB0-0xB7 */
- 0xFFB8,0xFFB9,0xFFBA,0xFFBB,0xFFBC,0xFFBD,0xFFBE,0xFFC2,/* 0xB8-0xBF */
- 0xFFC3,0xFFC4,0xFFC5,0xFFC6,0xFFC7,0xFFCA,0xFFCB,0xFFCC,/* 0xC0-0xC7 */
- 0xFFCD,0xFFCE,0xFFCF,0xFFD2,0xFFD3,0xFFD4,0xFFD5,0xFFD6,/* 0xC8-0xCF */
- 0xFFD7,0xFFDA,0xFFDB,0xFFDC,0xFFA0,0x3165,0x3166,0x3167,/* 0xD0-0xD7 */
+ 0xCA46,0x3131,0x3132,0x3133,0x3134,0x3135,0x3136,0x3137,/* 0xA0-0xA7 */
+ 0x3138,0x3139,0x313A,0x313B,0x313C,0x313D,0x313E,0x313F,/* 0xA8-0xAF */
+ 0x3140,0x3141,0x3142,0x3143,0x3144,0x3145,0x3146,0x3147,/* 0xB0-0xB7 */
+ 0x3148,0x3149,0x314A,0x314B,0x314C,0x314D,0x314E,0x314F,/* 0xB8-0xBF */
+ 0x3150,0x3151,0x3152,0x3153,0x3154,0x3155,0x3156,0x3157,/* 0xC0-0xC7 */
+ 0x3158,0x3159,0x315A,0x315B,0x315C,0x315D,0x315E,0x315F,/* 0xC8-0xCF */
+ 0x3160,0x3161,0x3162,0x3163,0x3164,0x3165,0x3166,0x3167,/* 0xD0-0xD7 */
0x3168,0x3169,0x316A,0x316B,0x316C,0x316D,0x316E,0x316F,/* 0xD8-0xDF */
0x3170,0x3171,0x3172,0x3173,0x3174,0x3175,0x3176,0x3177,/* 0xE0-0xE7 */
0x3178,0x3179,0x317A,0x317B,0x317C,0x317D,0x317E,0x317F,/* 0xE8-0xEF */
0xCA72,0xCA73,0xCA74,0xCA75,0xCA76,0xCA77,0xCA78,0xCA79,/* 0x68-0x6F */
0xCA7A,0xCA7B,0xCA7C,0xCA7E,0xCA7F,0xCA80,0xCA81,0xCA82,/* 0x70-0x77 */
0xCA83,0xCA85,0xCA86,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCA87,0xCA88,0xCA89,0xCA8A,0xCA8B,0xCA8C,0xCA8D,/* 0x80-0x87 */
0xCA8E,0xCA8F,0xCA90,0xCA91,0xCA92,0xCA93,0xCA94,0xCA95,/* 0x88-0x8F */
0xCA96,0xCA97,0xCA99,0xCA9A,0xCA9B,0xCA9C,0xCA9D,0xCA9E,/* 0x90-0x97 */
0xCAD0,0xCAD2,0xCAD4,0xCAD5,0xCAD6,0xCAD7,0xCADA,0xCADB,/* 0x68-0x6F */
0xCADC,0xCADD,0xCADE,0xCADF,0xCAE1,0xCAE2,0xCAE3,0xCAE4,/* 0x70-0x77 */
0xCAE5,0xCAE6,0xCAE7,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCAE8,0xCAE9,0xCAEA,0xCAEB,0xCAED,0xCAEE,0xCAEF,/* 0x80-0x87 */
0xCAF0,0xCAF1,0xCAF2,0xCAF3,0xCAF5,0xCAF6,0xCAF7,0xCAF8,/* 0x88-0x8F */
0xCAF9,0xCAFA,0xCAFB,0xCAFC,0xCAFD,0xCAFE,0xCAFF,0xCB00,/* 0x90-0x97 */
0xCB31,0xCB32,0xCB33,0xCB34,0xCB35,0xCB36,0xCB37,0xCB38,/* 0x68-0x6F */
0xCB39,0xCB3A,0xCB3B,0xCB3C,0xCB3D,0xCB3E,0xCB3F,0xCB40,/* 0x70-0x77 */
0xCB42,0xCB43,0xCB44,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCB45,0xCB46,0xCB47,0xCB4A,0xCB4B,0xCB4D,0xCB4E,/* 0x80-0x87 */
0xCB4F,0xCB51,0xCB52,0xCB53,0xCB54,0xCB55,0xCB56,0xCB57,/* 0x88-0x8F */
0xCB5A,0xCB5B,0xCB5C,0xCB5E,0xCB5F,0xCB60,0xCB61,0xCB62,/* 0x90-0x97 */
0xCB90,0xCB91,0xCB92,0xCB93,0xCB94,0xCB95,0xCB96,0xCB97,/* 0x68-0x6F */
0xCB98,0xCB99,0xCB9A,0xCB9B,0xCB9D,0xCB9E,0xCB9F,0xCBA0,/* 0x70-0x77 */
0xCBA1,0xCBA2,0xCBA3,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCBA4,0xCBA5,0xCBA6,0xCBA7,0xCBA8,0xCBA9,0xCBAA,/* 0x80-0x87 */
0xCBAB,0xCBAC,0xCBAD,0xCBAE,0xCBAF,0xCBB0,0xCBB1,0xCBB2,/* 0x88-0x8F */
0xCBB3,0xCBB4,0xCBB5,0xCBB6,0xCBB7,0xCBB9,0xCBBA,0xCBBB,/* 0x90-0x97 */
0xCBEA,0xCBEB,0xCBEC,0xCBED,0xCBEE,0xCBEF,0xCBF0,0xCBF1,/* 0x68-0x6F */
0xCBF2,0xCBF3,0xCBF4,0xCBF5,0xCBF6,0xCBF7,0xCBF8,0xCBF9,/* 0x70-0x77 */
0xCBFA,0xCBFB,0xCBFC,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCBFD,0xCBFE,0xCBFF,0xCC00,0xCC01,0xCC02,0xCC03,/* 0x80-0x87 */
0xCC04,0xCC05,0xCC06,0xCC07,0xCC08,0xCC09,0xCC0A,0xCC0B,/* 0x88-0x8F */
0xCC0E,0xCC0F,0xCC11,0xCC12,0xCC13,0xCC15,0xCC16,0xCC17,/* 0x90-0x97 */
0xCC5B,0xCC5C,0xCC5D,0xCC5E,0xCC5F,0xCC61,0xCC62,0xCC63,/* 0x68-0x6F */
0xCC65,0xCC67,0xCC69,0xCC6A,0xCC6B,0xCC6C,0xCC6D,0xCC6E,/* 0x70-0x77 */
0xCC6F,0xCC71,0xCC72,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCC73,0xCC74,0xCC76,0xCC77,0xCC78,0xCC79,0xCC7A,/* 0x80-0x87 */
0xCC7B,0xCC7C,0xCC7D,0xCC7E,0xCC7F,0xCC80,0xCC81,0xCC82,/* 0x88-0x8F */
0xCC83,0xCC84,0xCC85,0xCC86,0xCC87,0xCC88,0xCC89,0xCC8A,/* 0x90-0x97 */
0xCCC2,0xCCC3,0xCCC6,0xCCC8,0xCCCA,0xCCCB,0xCCCC,0xCCCD,/* 0x68-0x6F */
0xCCCE,0xCCCF,0xCCD1,0xCCD2,0xCCD3,0xCCD5,0xCCD6,0xCCD7,/* 0x70-0x77 */
0xCCD8,0xCCD9,0xCCDA,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCCDB,0xCCDC,0xCCDD,0xCCDE,0xCCDF,0xCCE0,0xCCE1,/* 0x80-0x87 */
0xCCE2,0xCCE3,0xCCE5,0xCCE6,0xCCE7,0xCCE8,0xCCE9,0xCCEA,/* 0x88-0x8F */
0xCCEB,0xCCED,0xCCEE,0xCCEF,0xCCF1,0xCCF2,0xCCF3,0xCCF4,/* 0x90-0x97 */
0xCD2A,0xCD2B,0xCD2D,0xCD2E,0xCD2F,0xCD30,0xCD31,0xCD32,/* 0x68-0x6F */
0xCD33,0xCD34,0xCD35,0xCD36,0xCD37,0xCD38,0xCD3A,0xCD3B,/* 0x70-0x77 */
0xCD3C,0xCD3D,0xCD3E,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCD3F,0xCD40,0xCD41,0xCD42,0xCD43,0xCD44,0xCD45,/* 0x80-0x87 */
0xCD46,0xCD47,0xCD48,0xCD49,0xCD4A,0xCD4B,0xCD4C,0xCD4D,/* 0x88-0x8F */
0xCD4E,0xCD4F,0xCD50,0xCD51,0xCD52,0xCD53,0xCD54,0xCD55,/* 0x90-0x97 */
0xCD89,0xCD8A,0xCD8B,0xCD8C,0xCD8D,0xCD8E,0xCD8F,0xCD90,/* 0x68-0x6F */
0xCD91,0xCD92,0xCD93,0xCD96,0xCD97,0xCD99,0xCD9A,0xCD9B,/* 0x70-0x77 */
0xCD9D,0xCD9E,0xCD9F,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCDA0,0xCDA1,0xCDA2,0xCDA3,0xCDA6,0xCDA8,0xCDAA,/* 0x80-0x87 */
0xCDAB,0xCDAC,0xCDAD,0xCDAE,0xCDAF,0xCDB1,0xCDB2,0xCDB3,/* 0x88-0x8F */
0xCDB4,0xCDB5,0xCDB6,0xCDB7,0xCDB8,0xCDB9,0xCDBA,0xCDBB,/* 0x90-0x97 */
0xCDEA,0xCDEB,0xCDED,0xCDEE,0xCDEF,0xCDF1,0xCDF2,0xCDF3,/* 0x68-0x6F */
0xCDF4,0xCDF5,0xCDF6,0xCDF7,0xCDFA,0xCDFC,0xCDFE,0xCDFF,/* 0x70-0x77 */
0xCE00,0xCE01,0xCE02,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCE03,0xCE05,0xCE06,0xCE07,0xCE09,0xCE0A,0xCE0B,/* 0x80-0x87 */
0xCE0D,0xCE0E,0xCE0F,0xCE10,0xCE11,0xCE12,0xCE13,0xCE15,/* 0x88-0x8F */
0xCE16,0xCE17,0xCE18,0xCE1A,0xCE1B,0xCE1C,0xCE1D,0xCE1E,/* 0x90-0x97 */
0xCE51,0xCE52,0xCE53,0xCE54,0xCE55,0xCE56,0xCE57,0xCE5A,/* 0x68-0x6F */
0xCE5B,0xCE5D,0xCE5E,0xCE62,0xCE63,0xCE64,0xCE65,0xCE66,/* 0x70-0x77 */
0xCE67,0xCE6A,0xCE6C,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCE6E,0xCE6F,0xCE70,0xCE71,0xCE72,0xCE73,0xCE76,/* 0x80-0x87 */
0xCE77,0xCE79,0xCE7A,0xCE7B,0xCE7D,0xCE7E,0xCE7F,0xCE80,/* 0x88-0x8F */
0xCE81,0xCE82,0xCE83,0xCE86,0xCE88,0xCE8A,0xCE8B,0xCE8C,/* 0x90-0x97 */
0xCEC3,0xCEC4,0xCEC5,0xCEC6,0xCEC7,0xCEC8,0xCEC9,0xCECA,/* 0x68-0x6F */
0xCECB,0xCECC,0xCECD,0xCECE,0xCECF,0xCED0,0xCED1,0xCED2,/* 0x70-0x77 */
0xCED3,0xCED4,0xCED5,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCED6,0xCED7,0xCED8,0xCED9,0xCEDA,0xCEDB,0xCEDC,/* 0x80-0x87 */
0xCEDD,0xCEDE,0xCEDF,0xCEE0,0xCEE1,0xCEE2,0xCEE3,0xCEE6,/* 0x88-0x8F */
0xCEE7,0xCEE9,0xCEEA,0xCEED,0xCEEE,0xCEEF,0xCEF0,0xCEF1,/* 0x90-0x97 */
0xCF2E,0xCF32,0xCF33,0xCF34,0xCF35,0xCF36,0xCF37,0xCF39,/* 0x68-0x6F */
0xCF3A,0xCF3B,0xCF3C,0xCF3D,0xCF3E,0xCF3F,0xCF40,0xCF41,/* 0x70-0x77 */
0xCF42,0xCF43,0xCF44,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCF45,0xCF46,0xCF47,0xCF48,0xCF49,0xCF4A,0xCF4B,/* 0x80-0x87 */
0xCF4C,0xCF4D,0xCF4E,0xCF4F,0xCF50,0xCF51,0xCF52,0xCF53,/* 0x88-0x8F */
0xCF56,0xCF57,0xCF59,0xCF5A,0xCF5B,0xCF5D,0xCF5E,0xCF5F,/* 0x90-0x97 */
0xCF95,0xCF96,0xCF97,0xCF98,0xCF99,0xCF9A,0xCF9B,0xCF9C,/* 0x68-0x6F */
0xCF9D,0xCF9E,0xCF9F,0xCFA0,0xCFA2,0xCFA3,0xCFA4,0xCFA5,/* 0x70-0x77 */
0xCFA6,0xCFA7,0xCFA9,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xCFAA,0xCFAB,0xCFAC,0xCFAD,0xCFAE,0xCFAF,0xCFB1,/* 0x80-0x87 */
0xCFB2,0xCFB3,0xCFB4,0xCFB5,0xCFB6,0xCFB7,0xCFB8,0xCFB9,/* 0x88-0x8F */
0xCFBA,0xCFBB,0xCFBC,0xCFBD,0xCFBE,0xCFBF,0xCFC0,0xCFC1,/* 0x90-0x97 */
0xCFF4,0xCFF6,0xCFF7,0xCFF8,0xCFF9,0xCFFA,0xCFFB,0xCFFD,/* 0x68-0x6F */
0xCFFE,0xCFFF,0xD001,0xD002,0xD003,0xD005,0xD006,0xD007,/* 0x70-0x77 */
0xD008,0xD009,0xD00A,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD00B,0xD00C,0xD00D,0xD00E,0xD00F,0xD010,0xD012,/* 0x80-0x87 */
0xD013,0xD014,0xD015,0xD016,0xD017,0xD019,0xD01A,0xD01B,/* 0x88-0x8F */
0xD01C,0xD01D,0xD01E,0xD01F,0xD020,0xD021,0xD022,0xD023,/* 0x90-0x97 */
0xD05A,0xD05B,0xD05C,0xD05D,0xD05E,0xD05F,0xD061,0xD062,/* 0x68-0x6F */
0xD063,0xD064,0xD065,0xD066,0xD067,0xD068,0xD069,0xD06A,/* 0x70-0x77 */
0xD06B,0xD06E,0xD06F,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD071,0xD072,0xD073,0xD075,0xD076,0xD077,0xD078,/* 0x80-0x87 */
0xD079,0xD07A,0xD07B,0xD07E,0xD07F,0xD080,0xD082,0xD083,/* 0x88-0x8F */
0xD084,0xD085,0xD086,0xD087,0xD088,0xD089,0xD08A,0xD08B,/* 0x90-0x97 */
0xD0BE,0xD0BF,0xD0C2,0xD0C3,0xD0C5,0xD0C6,0xD0C7,0xD0CA,/* 0x68-0x6F */
0xD0CB,0xD0CC,0xD0CD,0xD0CE,0xD0CF,0xD0D2,0xD0D6,0xD0D7,/* 0x70-0x77 */
0xD0D8,0xD0D9,0xD0DA,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD0DB,0xD0DE,0xD0DF,0xD0E1,0xD0E2,0xD0E3,0xD0E5,/* 0x80-0x87 */
0xD0E6,0xD0E7,0xD0E8,0xD0E9,0xD0EA,0xD0EB,0xD0EE,0xD0F2,/* 0x88-0x8F */
0xD0F3,0xD0F4,0xD0F5,0xD0F6,0xD0F7,0xD0F9,0xD0FA,0xD0FB,/* 0x90-0x97 */
0xD127,0xD128,0xD129,0xD12A,0xD12B,0xD12C,0xD12D,0xD12E,/* 0x68-0x6F */
0xD12F,0xD132,0xD133,0xD135,0xD136,0xD137,0xD139,0xD13B,/* 0x70-0x77 */
0xD13C,0xD13D,0xD13E,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD13F,0xD142,0xD146,0xD147,0xD148,0xD149,0xD14A,/* 0x80-0x87 */
0xD14B,0xD14E,0xD14F,0xD151,0xD152,0xD153,0xD155,0xD156,/* 0x88-0x8F */
0xD157,0xD158,0xD159,0xD15A,0xD15B,0xD15E,0xD160,0xD162,/* 0x90-0x97 */
0xD192,0xD193,0xD194,0xD195,0xD196,0xD197,0xD198,0xD199,/* 0x68-0x6F */
0xD19A,0xD19B,0xD19C,0xD19D,0xD19E,0xD19F,0xD1A2,0xD1A3,/* 0x70-0x77 */
0xD1A5,0xD1A6,0xD1A7,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD1A9,0xD1AA,0xD1AB,0xD1AC,0xD1AD,0xD1AE,0xD1AF,/* 0x80-0x87 */
0xD1B2,0xD1B4,0xD1B6,0xD1B7,0xD1B8,0xD1B9,0xD1BB,0xD1BD,/* 0x88-0x8F */
0xD1BE,0xD1BF,0xD1C1,0xD1C2,0xD1C3,0xD1C4,0xD1C5,0xD1C6,/* 0x90-0x97 */
0xD1F2,0xD1F3,0xD1F5,0xD1F6,0xD1F7,0xD1F9,0xD1FA,0xD1FB,/* 0x68-0x6F */
0xD1FC,0xD1FD,0xD1FE,0xD1FF,0xD200,0xD201,0xD202,0xD203,/* 0x70-0x77 */
0xD204,0xD205,0xD206,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD208,0xD20A,0xD20B,0xD20C,0xD20D,0xD20E,0xD20F,/* 0x80-0x87 */
0xD211,0xD212,0xD213,0xD214,0xD215,0xD216,0xD217,0xD218,/* 0x88-0x8F */
0xD219,0xD21A,0xD21B,0xD21C,0xD21D,0xD21E,0xD21F,0xD220,/* 0x90-0x97 */
0xD254,0xD255,0xD256,0xD257,0xD258,0xD259,0xD25A,0xD25B,/* 0x68-0x6F */
0xD25D,0xD25E,0xD25F,0xD260,0xD261,0xD262,0xD263,0xD265,/* 0x70-0x77 */
0xD266,0xD267,0xD268,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD269,0xD26A,0xD26B,0xD26C,0xD26D,0xD26E,0xD26F,/* 0x80-0x87 */
0xD270,0xD271,0xD272,0xD273,0xD274,0xD275,0xD276,0xD277,/* 0x88-0x8F */
0xD278,0xD279,0xD27A,0xD27B,0xD27C,0xD27D,0xD27E,0xD27F,/* 0x90-0x97 */
0xD2B6,0xD2B7,0xD2BA,0xD2BB,0xD2BD,0xD2BE,0xD2C1,0xD2C3,/* 0x68-0x6F */
0xD2C4,0xD2C5,0xD2C6,0xD2C7,0xD2CA,0xD2CC,0xD2CD,0xD2CE,/* 0x70-0x77 */
0xD2CF,0xD2D0,0xD2D1,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD2D2,0xD2D3,0xD2D5,0xD2D6,0xD2D7,0xD2D9,0xD2DA,/* 0x80-0x87 */
0xD2DB,0xD2DD,0xD2DE,0xD2DF,0xD2E0,0xD2E1,0xD2E2,0xD2E3,/* 0x88-0x8F */
0xD2E6,0xD2E7,0xD2E8,0xD2E9,0xD2EA,0xD2EB,0xD2EC,0xD2ED,/* 0x90-0x97 */
0xD32F,0xD331,0xD332,0xD333,0xD334,0xD335,0xD336,0xD337,/* 0x68-0x6F */
0xD33A,0xD33E,0xD33F,0xD340,0xD341,0xD342,0xD343,0xD346,/* 0x70-0x77 */
0xD347,0xD348,0xD349,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD34A,0xD34B,0xD34C,0xD34D,0xD34E,0xD34F,0xD350,/* 0x80-0x87 */
0xD351,0xD352,0xD353,0xD354,0xD355,0xD356,0xD357,0xD358,/* 0x88-0x8F */
0xD359,0xD35A,0xD35B,0xD35C,0xD35D,0xD35E,0xD35F,0xD360,/* 0x90-0x97 */
0xD394,0xD395,0xD396,0xD397,0xD39A,0xD39B,0xD39D,0xD39E,/* 0x68-0x6F */
0xD39F,0xD3A1,0xD3A2,0xD3A3,0xD3A4,0xD3A5,0xD3A6,0xD3A7,/* 0x70-0x77 */
0xD3AA,0xD3AC,0xD3AE,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD3AF,0xD3B0,0xD3B1,0xD3B2,0xD3B3,0xD3B5,0xD3B6,/* 0x80-0x87 */
0xD3B7,0xD3B9,0xD3BA,0xD3BB,0xD3BD,0xD3BE,0xD3BF,0xD3C0,/* 0x88-0x8F */
0xD3C1,0xD3C2,0xD3C3,0xD3C6,0xD3C7,0xD3CA,0xD3CB,0xD3CC,/* 0x90-0x97 */
0xD403,0xD404,0xD405,0xD406,0xD407,0xD409,0xD40A,0xD40B,/* 0x68-0x6F */
0xD40C,0xD40D,0xD40E,0xD40F,0xD410,0xD411,0xD412,0xD413,/* 0x70-0x77 */
0xD414,0xD415,0xD416,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD417,0xD418,0xD419,0xD41A,0xD41B,0xD41C,0xD41E,/* 0x80-0x87 */
0xD41F,0xD420,0xD421,0xD422,0xD423,0xD424,0xD425,0xD426,/* 0x88-0x8F */
0xD427,0xD428,0xD429,0xD42A,0xD42B,0xD42C,0xD42D,0xD42E,/* 0x90-0x97 */
0xD45B,0xD45D,0xD45E,0xD45F,0xD461,0xD462,0xD463,0xD465,/* 0x68-0x6F */
0xD466,0xD467,0xD468,0xD469,0xD46A,0xD46B,0xD46C,0xD46E,/* 0x70-0x77 */
0xD470,0xD471,0xD472,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD473,0xD474,0xD475,0xD476,0xD477,0xD47A,0xD47B,/* 0x80-0x87 */
0xD47D,0xD47E,0xD481,0xD483,0xD484,0xD485,0xD486,0xD487,/* 0x88-0x8F */
0xD48A,0xD48C,0xD48E,0xD48F,0xD490,0xD491,0xD492,0xD493,/* 0x90-0x97 */
0xD4C0,0xD4C1,0xD4C2,0xD4C3,0xD4C4,0xD4C5,0xD4C6,0xD4C7,/* 0x68-0x6F */
0xD4C8,0xD4C9,0xD4CA,0xD4CB,0xD4CD,0xD4CE,0xD4CF,0xD4D1,/* 0x70-0x77 */
0xD4D2,0xD4D3,0xD4D5,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD4D6,0xD4D7,0xD4D8,0xD4D9,0xD4DA,0xD4DB,0xD4DD,/* 0x80-0x87 */
0xD4DE,0xD4E0,0xD4E1,0xD4E2,0xD4E3,0xD4E4,0xD4E5,0xD4E6,/* 0x88-0x8F */
0xD4E7,0xD4E9,0xD4EA,0xD4EB,0xD4ED,0xD4EE,0xD4EF,0xD4F1,/* 0x90-0x97 */
0xD525,0xD526,0xD527,0xD528,0xD529,0xD52A,0xD52B,0xD52C,/* 0x68-0x6F */
0xD52D,0xD52E,0xD52F,0xD530,0xD531,0xD532,0xD533,0xD534,/* 0x70-0x77 */
0xD535,0xD536,0xD537,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD538,0xD539,0xD53A,0xD53B,0xD53E,0xD53F,0xD541,/* 0x80-0x87 */
0xD542,0xD543,0xD545,0xD546,0xD547,0xD548,0xD549,0xD54A,/* 0x88-0x8F */
0xD54B,0xD54E,0xD550,0xD552,0xD553,0xD554,0xD555,0xD556,/* 0x90-0x97 */
0xD594,0xD595,0xD596,0xD597,0xD598,0xD599,0xD59A,0xD59B,/* 0x68-0x6F */
0xD59C,0xD59D,0xD59E,0xD59F,0xD5A0,0xD5A1,0xD5A2,0xD5A3,/* 0x70-0x77 */
0xD5A4,0xD5A6,0xD5A7,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD5A8,0xD5A9,0xD5AA,0xD5AB,0xD5AC,0xD5AD,0xD5AE,/* 0x80-0x87 */
0xD5AF,0xD5B0,0xD5B1,0xD5B2,0xD5B3,0xD5B4,0xD5B5,0xD5B6,/* 0x88-0x8F */
0xD5B7,0xD5B8,0xD5B9,0xD5BA,0xD5BB,0xD5BC,0xD5BD,0xD5BE,/* 0x90-0x97 */
0xD5FA,0xD5FB,0xD5FC,0xD5FD,0xD5FE,0xD5FF,0xD602,0xD603,/* 0x68-0x6F */
0xD605,0xD606,0xD607,0xD609,0xD60A,0xD60B,0xD60C,0xD60D,/* 0x70-0x77 */
0xD60E,0xD60F,0xD612,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD616,0xD617,0xD618,0xD619,0xD61A,0xD61B,0xD61D,/* 0x80-0x87 */
0xD61E,0xD61F,0xD621,0xD622,0xD623,0xD625,0xD626,0xD627,/* 0x88-0x8F */
0xD628,0xD629,0xD62A,0xD62B,0xD62C,0xD62E,0xD62F,0xD630,/* 0x90-0x97 */
0xD66B,0xD66C,0xD66D,0xD66E,0xD66F,0xD672,0xD673,0xD675,/* 0x68-0x6F */
0xD676,0xD677,0xD678,0xD679,0xD67A,0xD67B,0xD67C,0xD67D,/* 0x70-0x77 */
0xD67E,0xD67F,0xD680,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD681,0xD682,0xD684,0xD686,0xD687,0xD688,0xD689,/* 0x80-0x87 */
0xD68A,0xD68B,0xD68E,0xD68F,0xD691,0xD692,0xD693,0xD695,/* 0x88-0x8F */
0xD696,0xD697,0xD698,0xD699,0xD69A,0xD69B,0xD69C,0xD69E,/* 0x90-0x97 */
0xD6D6,0xD6D8,0xD6DA,0xD6DB,0xD6DC,0xD6DD,0xD6DE,0xD6DF,/* 0x68-0x6F */
0xD6E1,0xD6E2,0xD6E3,0xD6E5,0xD6E6,0xD6E7,0xD6E9,0xD6EA,/* 0x70-0x77 */
0xD6EB,0xD6EC,0xD6ED,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD6EE,0xD6EF,0xD6F1,0xD6F2,0xD6F3,0xD6F4,0xD6F6,/* 0x80-0x87 */
0xD6F7,0xD6F8,0xD6F9,0xD6FA,0xD6FB,0xD6FE,0xD6FF,0xD701,/* 0x88-0x8F */
0xD702,0xD703,0xD705,0xD706,0xD707,0xD708,0xD709,0xD70A,/* 0x90-0x97 */
0xD742,0xD743,0xD745,0xD746,0xD748,0xD74A,0xD74B,0xD74C,/* 0x68-0x6F */
0xD74D,0xD74E,0xD74F,0xD752,0xD753,0xD755,0xD75A,0xD75B,/* 0x70-0x77 */
0xD75C,0xD75D,0xD75E,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0xD75F,0xD762,0xD764,0xD766,0xD767,0xD768,0xD76A,/* 0x80-0x87 */
0xD76B,0xD76D,0xD76E,0xD76F,0xD771,0xD772,0xD773,0xD775,/* 0x88-0x8F */
0xD776,0xD777,0xD778,0xD779,0xD77A,0xD77B,0xD77E,0xD77F,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x79D1,0x83D3,0x8A87,0x8AB2,0x8DE8,0x904E,0x934B,/* 0xA0-0xA7 */
0x9846,0x5ED3,0x69E8,0x85FF,0x90ED,0xF905,0x51A0,0x5B98,/* 0xA8-0xAF */
0x5BEC,0x6163,0x68FA,0x6B3E,0x704C,0x742F,0x74D8,0x7BA1,/* 0xB0-0xB7 */
- 0x7F50,0x83C5,0x89C0,0x8CAB,0x95DC,0xFA2C,0x522E,0x605D,/* 0xB8-0xBF */
+ 0x7F50,0x83C5,0x89C0,0x8CAB,0x95DC,0x9928,0x522E,0x605D,/* 0xB8-0xBF */
0x62EC,0x9002,0x4F8A,0x5149,0x5321,0x58D9,0x5EE3,0x66E0,/* 0xC0-0xC7 */
0x6D38,0x709A,0x72C2,0x73D6,0x7B50,0x80F1,0x945B,0x5366,/* 0xC8-0xCF */
0x639B,0x7F6B,0x4E56,0x5080,0x584A,0x58DE,0x602A,0x6127,/* 0xD0-0xD7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x68F9,0x6AC2,0x6DD8,0x6E21,0x6ED4,0x6FE4,0x71FE,/* 0xA0-0xA7 */
0x76DC,0x7779,0x79B1,0x7A3B,0x8404,0x89A9,0x8CED,0x8DF3,/* 0xA8-0xAF */
- 0x8E48,0x9003,0x9014,0x9053,0xFA26,0x934D,0x9676,0x97DC,/* 0xB0-0xB7 */
+ 0x8E48,0x9003,0x9014,0x9053,0x90FD,0x934D,0x9676,0x97DC,/* 0xB0-0xB7 */
0x6BD2,0x7006,0x7258,0x72A2,0x7368,0x7763,0x79BF,0x7BE4,/* 0xB8-0xBF */
0x7E9B,0x8B80,0x58A9,0x60C7,0x6566,0x65FD,0x66BE,0x6C8C,/* 0xC0-0xC7 */
0x711E,0x71C9,0x8C5A,0x9813,0x4E6D,0x7A81,0x4EDD,0x51AC,/* 0xC8-0xCF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x99C1,0x4F34,0x534A,0x53CD,0x53DB,0x62CC,0x642C,0x6500,/* 0xE0-0xE7 */
0x6591,0x69C3,0x6CEE,0x6F58,0x73ED,0x7554,0x7622,0x76E4,/* 0xE8-0xEF */
0x76FC,0x78D0,0x78FB,0x792C,0x7D46,0x822C,0x87E0,0x8FD4,/* 0xF0-0xF7 */
- 0x9812,0xFA2A,0x52C3,0x62D4,0x64A5,0x6E24,0x6F51,0x0000,/* 0xF8-0xFF */
+ 0x9812,0x98EF,0x52C3,0x62D4,0x64A5,0x6E24,0x6F51,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_DB[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9A08,0x4FDD,0x5821,0x5831,0x5BF6,0x666E,0x6B65,0x6D11,/* 0xC0-0xC7 */
0x6E7A,0x6F7D,0x73E4,0x752B,0x83E9,0x88DC,0x8913,0x8B5C,/* 0xC8-0xCF */
0x8F14,0x4F0F,0x50D5,0x5310,0x535C,0x5B93,0x5FA9,0x670D,/* 0xD0-0xD7 */
- 0xFA1B,0x8179,0x832F,0x8514,0x8907,0x8986,0x8F39,0x8F3B,/* 0xD8-0xDF */
+ 0x798F,0x8179,0x832F,0x8514,0x8907,0x8986,0x8F39,0x8F3B,/* 0xD8-0xDF */
0x99A5,0x9C12,0x672C,0x4E76,0x4FF8,0x5949,0x5C01,0x5CEF,/* 0xE0-0xE7 */
0x5CF0,0x6367,0x68D2,0x70FD,0x71A2,0x742B,0x7E2B,0x84EC,/* 0xE8-0xEF */
0x8702,0x9022,0x92D2,0x9CF3,0x4E0D,0x4ED8,0x4FEF,0x5085,/* 0xF0-0xF7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6E23,0x7009,0x7345,0x7802,0x793E,0x7940,0x7960,0x79C1,/* 0xE0-0xE7 */
0x7BE9,0x7D17,0x7D72,0x8086,0x820D,0x838E,0x84D1,0x86C7,/* 0xE8-0xEF */
0x88DF,0x8A50,0x8A5E,0x8B1D,0x8CDC,0x8D66,0x8FAD,0x90AA,/* 0xF0-0xF7 */
- 0xFA2B,0x99DF,0x9E9D,0x524A,0xF969,0x6714,0xF96A,0x0000,/* 0xF8-0xFF */
+ 0x98FC,0x99DF,0x9E9D,0x524A,0xF969,0x6714,0xF96A,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_DF[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8518,0x886B,0x63F7,0x6F81,0x9212,0x98AF,0x4E0A,0x50B7,/* 0xB8-0xBF */
0x50CF,0x511F,0x5546,0x55AA,0x5617,0x5B40,0x5C19,0x5CE0,/* 0xC0-0xC7 */
0x5E38,0x5E8A,0x5EA0,0x5EC2,0x60F3,0x6851,0x6A61,0x6E58,/* 0xC8-0xCF */
- 0x723D,0x7240,0x72C0,0x76F8,0xFA1A,0x7BB1,0x7FD4,0x88F3,/* 0xD0-0xD7 */
+ 0x723D,0x7240,0x72C0,0x76F8,0x7965,0x7BB1,0x7FD4,0x88F3,/* 0xD0-0xD7 */
0x89F4,0x8A73,0x8C61,0x8CDE,0x971C,0x585E,0x74BD,0x8CFD,/* 0xD8-0xDF */
0x55C7,0xF96C,0x7A61,0x7D22,0x8272,0x7272,0x751F,0x7525,/* 0xE0-0xE7 */
0xF96D,0x7B19,0x5885,0x58FB,0x5DBC,0x5E8F,0x5EB6,0x5F90,/* 0xE8-0xEF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x57F4,0x5BD4,0x5F0F,0x606F,0x62ED,0x690D,0x6B96,0x6E5C,/* 0xD0-0xD7 */
0x7184,0x7BD2,0x8755,0x8B58,0x8EFE,0x98DF,0x98FE,0x4F38,/* 0xD8-0xDF */
0x4F81,0x4FE1,0x547B,0x5A20,0x5BB8,0x613C,0x65B0,0x6668,/* 0xE0-0xE7 */
- 0x71FC,0x7533,0xFA19,0x7D33,0x814E,0x81E3,0x8398,0x85AA,/* 0xE8-0xEF */
+ 0x71FC,0x7533,0x795E,0x7D33,0x814E,0x81E3,0x8398,0x85AA,/* 0xE8-0xEF */
0x85CE,0x8703,0x8A0A,0x8EAB,0x8F9B,0xF971,0x8FC5,0x5931,/* 0xF0-0xF7 */
0x5BA4,0x5BE6,0x6089,0x5BE9,0x5C0B,0x5FC3,0x6C81,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x70CF,0x71AC,0x7352,0x7B7D,0x8708,0x8AA4,0x9C32,/* 0xA0-0xA7 */
0x9F07,0x5C4B,0x6C83,0x7344,0x7389,0x923A,0x6EAB,0x7465,/* 0xA8-0xAF */
- 0x761F,0x7A69,0x7E15,0x860A,0xFA0C,0x58C5,0x64C1,0x74EE,/* 0xB0-0xB7 */
+ 0x761F,0x7A69,0x7E15,0x860A,0x5140,0x58C5,0x64C1,0x74EE,/* 0xB0-0xB7 */
0x7515,0x7670,0x7FC1,0x9095,0x96CD,0x9954,0x6E26,0x74E6,/* 0xB8-0xBF */
0x7AA9,0x7AAA,0x81E5,0x86D9,0x8778,0x8A1B,0x5A49,0x5B8C,/* 0xC0-0xC7 */
0x5B9B,0x68A1,0x6900,0x6D63,0x73A9,0x7413,0x742C,0x7897,/* 0xC8-0xCF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x84C9,0x8E0A,0x9394,0x93DE,0xF9C4,0x4E8E,0x4F51,0x5076,/* 0xC8-0xCF */
0x512A,0x53C8,0x53CB,0x53F3,0x5B87,0x5BD3,0x5C24,0x611A,/* 0xD0-0xD7 */
0x6182,0x65F4,0x725B,0x7397,0x7440,0x76C2,0x7950,0x7991,/* 0xD8-0xDF */
- 0x79B9,0x7D06,0xFA1E,0x828B,0x85D5,0x865E,0x8FC2,0x9047,/* 0xE0-0xE7 */
+ 0x79B9,0x7D06,0x7FBD,0x828B,0x85D5,0x865E,0x8FC2,0x9047,/* 0xE0-0xE7 */
0x90F5,0x91EA,0x9685,0x96E8,0x96E9,0x52D6,0x5F67,0x65ED,/* 0xE8-0xEF */
0x6631,0x682F,0x715C,0x7A36,0x90C1,0x980A,0x4E91,0xF9C5,/* 0xF0-0xF7 */
0x6A52,0x6B9E,0x6F90,0x7189,0x8018,0x82B8,0x8553,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0xF9E1,0xF9E2,0xF9E3,0x723E,0x73E5,0xF9E4,0x7570,0x75CD,/* 0xB0-0xB7 */
0xF9E5,0x79FB,0xF9E6,0x800C,0x8033,0x8084,0x82E1,0x8351,/* 0xB8-0xBF */
0xF9E7,0xF9E8,0x8CBD,0x8CB3,0x9087,0xF9E9,0xF9EA,0x98F4,/* 0xC0-0xC7 */
- 0x990C,0xF9EB,0xF9EC,0x7037,0xFA17,0x7FCA,0x7FCC,0x7FFC,/* 0xC8-0xCF */
+ 0x990C,0xF9EB,0xF9EC,0x7037,0x76CA,0x7FCA,0x7FCC,0x7FFC,/* 0xC8-0xCF */
0x8B1A,0x4EBA,0x4EC1,0x5203,0x5370,0xF9ED,0x54BD,0x56E0,/* 0xD0-0xD7 */
0x59FB,0x5BC5,0x5F15,0x5FCD,0x6E6E,0xF9EE,0xF9EF,0x7D6A,/* 0xD8-0xDF */
0x8335,0xF9F0,0x8693,0x8A8D,0xF9F1,0x976D,0x9777,0xF9F2,/* 0xE0-0xE7 */
- 0xF9F3,0x4E00,0x4F5A,0x4F7E,0x58F9,0x65E5,0x6EA2,0xFA25,/* 0xE8-0xEF */
+ 0xF9F3,0x4E00,0x4F5A,0x4F7E,0x58F9,0x65E5,0x6EA2,0x9038,/* 0xE8-0xEF */
0x93B0,0x99B9,0x4EFB,0x58EC,0x598A,0x59D9,0x6041,0xF9F4,/* 0xF0-0xF7 */
0xF9F5,0x7A14,0xF9F6,0x834F,0x8CC3,0x5165,0x5344,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x633A,0x653F,0x6574,0x65CC,0x6676,0x6678,0x67FE,0x6968,/* 0xD8-0xDF */
0x6A89,0x6B63,0x6C40,0x6DC0,0x6DE8,0x6E1F,0x6E5E,0x701E,/* 0xE0-0xE7 */
0x70A1,0x738E,0x73FD,0x753A,0x775B,0x7887,0x798E,0x7A0B,/* 0xE8-0xEF */
- 0x7A7D,0xFA1D,0x7D8E,0x8247,0x8A02,0x8AEA,0x8C9E,0x912D,/* 0xF0-0xF7 */
- 0x914A,0x91D8,0x9266,0x92CC,0x9320,0x9706,0xFA1C,0x0000,/* 0xF8-0xFF */
+ 0x7A7D,0x7CBE,0x7D8E,0x8247,0x8A02,0x8AEA,0x8C9E,0x912D,/* 0xF0-0xF7 */
+ 0x914A,0x91D8,0x9266,0x92CC,0x9320,0x9706,0x9756,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_F0[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x975C,0x9802,0x9F0E,0x5236,0x5291,0x557C,0x5824,/* 0xA0-0xA7 */
0x5E1D,0x5F1F,0x608C,0x63D0,0x68AF,0x6FDF,0x796D,0x7B2C,/* 0xA8-0xAF */
- 0x81CD,0x85BA,0x88FD,0xFA22,0x8E44,0x918D,0x9664,0x969B,/* 0xB0-0xB7 */
+ 0x81CD,0x85BA,0x88FD,0x8AF8,0x8E44,0x918D,0x9664,0x969B,/* 0xB0-0xB7 */
0x973D,0x984C,0x9F4A,0x4FCE,0x5146,0x51CB,0x52A9,0x5632,/* 0xB8-0xBF */
0x5F14,0x5F6B,0x63AA,0x64CD,0x65E9,0x6641,0x66FA,0x66F9,/* 0xC0-0xC7 */
0x671D,0x689D,0x68D7,0x69FD,0x6F15,0x6F6E,0x7167,0x71E5,/* 0xC8-0xCF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x51F8,0x54F2,0x5586,0x5FB9,0x64A4,0x6F88,0x7DB4,0x8F1F,/* 0xC8-0xCF */
0x8F4D,0x9435,0x50C9,0x5C16,0x6CBE,0x6DFB,0x751B,0x77BB,/* 0xD0-0xD7 */
0x7C3D,0x7C64,0x8A79,0x8AC2,0x581E,0x59BE,0x5E16,0x6377,/* 0xD8-0xDF */
- 0x7252,0x758A,0x776B,0x8ADC,0x8CBC,0x8F12,0x5EF3,0xFA12,/* 0xE0-0xE7 */
+ 0x7252,0x758A,0x776B,0x8ADC,0x8CBC,0x8F12,0x5EF3,0x6674,/* 0xE0-0xE7 */
0x6DF8,0x807D,0x83C1,0x8ACB,0x9751,0x9BD6,0xFA00,0x5243,/* 0xE8-0xEF */
0x66FF,0x6D95,0x6EEF,0x7DE0,0x8AE6,0x902E,0x905E,0x9AD4,/* 0xF0-0xF7 */
0x521D,0x527F,0x54E8,0x6194,0x6284,0x62DB,0x68A2,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x790E,0x79D2,0x7A0D,0x8096,0x8278,0x82D5,0x8349,0x8549,/* 0xA8-0xAF */
0x8C82,0x8D85,0x9162,0x918B,0x91AE,0x4FC3,0x56D1,0x71ED,/* 0xB0-0xB7 */
0x77D7,0x8700,0x89F8,0x5BF8,0x5FD6,0x6751,0x90A8,0x53E2,/* 0xB8-0xBF */
- 0xFA10,0x5BF5,0x60A4,0x6181,0x6460,0x7E3D,0x8070,0x8525,/* 0xC0-0xC7 */
+ 0x585A,0x5BF5,0x60A4,0x6181,0x6460,0x7E3D,0x8070,0x8525,/* 0xC0-0xC7 */
0x9283,0x64AE,0x50AC,0x5D14,0x6700,0x589C,0x62BD,0x63A8,/* 0xC8-0xCF */
0x690E,0x6978,0x6A1E,0x6E6B,0x76BA,0x79CB,0x82BB,0x8429,/* 0xD0-0xD7 */
0x8ACF,0x8DA8,0x8FFD,0x9112,0x914B,0x919C,0x9310,0x9318,/* 0xD8-0xDF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5F3C,0x5FC5,0x6CCC,0x73CC,0x7562,0x758B,0x7B46,0x82FE,/* 0xB0-0xB7 */
0x999D,0x4E4F,0x903C,0x4E0B,0x4F55,0x53A6,0x590F,0x5EC8,/* 0xB8-0xBF */
0x6630,0x6CB3,0x7455,0x8377,0x8766,0x8CC0,0x9050,0x971E,/* 0xC0-0xC7 */
- 0x9C15,0x58D1,0x5B78,0x8650,0x8B14,0xFA2D,0x5BD2,0x6068,/* 0xC8-0xCF */
+ 0x9C15,0x58D1,0x5B78,0x8650,0x8B14,0x9DB4,0x5BD2,0x6068,/* 0xC8-0xCF */
0x608D,0x65F1,0x6C57,0x6F22,0x6FA3,0x701A,0x7F55,0x7FF0,/* 0xD0-0xD7 */
0x9591,0x9592,0x9650,0x97D3,0x5272,0x8F44,0x51FD,0x542B,/* 0xD8-0xDF */
0x54B8,0x5563,0x558A,0x6ABB,0x6DB5,0x7DD8,0x8266,0x929C,/* 0xE0-0xE7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x68-0x6F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x70-0x77 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8667,0x6064,0x8B4E,0x9DF8,0x5147,0x51F6,0x5308,0x6D36,/* 0xD0-0xD7 */
0x80F8,0x9ED1,0x6615,0x6B23,0x7098,0x75D5,0x5403,0x5C79,/* 0xD8-0xDF */
0x7D07,0x8A16,0x6B20,0x6B3D,0x6B46,0x5438,0x6070,0x6D3D,/* 0xE0-0xE7 */
- 0x7FD5,0x8208,0x50D6,0xFA15,0x559C,0x566B,0x56CD,0x59EC,/* 0xE8-0xEF */
+ 0x7FD5,0x8208,0x50D6,0x51DE,0x559C,0x566B,0x56CD,0x59EC,/* 0xE8-0xEF */
0x5B09,0x5E0C,0x6199,0x6198,0x6231,0x665E,0x66E6,0x7199,/* 0xF0-0xF7 */
0x71B9,0x71BA,0x72A7,0x79A7,0x7A00,0x7FB2,0x8A70,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x3000,0xFF0C,0x3001,0x3002,0xFF0E,0x2027,0xFF1B,0xFF1A,/* 0x40-0x47 */
- 0xFF1F,0xFF01,0xFE30,0x2026,0x2025,0xFE50,0xFF64,0xFE52,/* 0x48-0x4F */
+ 0xFF1F,0xFF01,0xFE30,0x2026,0x2025,0xFE50,0xFE51,0xFE52,/* 0x48-0x4F */
0x00B7,0xFE54,0xFE55,0xFE56,0xFE57,0xFF5C,0x2013,0xFE31,/* 0x50-0x57 */
0x2014,0xFE33,0x2574,0xFE34,0xFE4F,0xFF08,0xFF09,0xFE35,/* 0x58-0x5F */
- 0xFE36,0xFF5B,0xFF5D,0xFE37,0xFE38,0xFF3B,0xFF3D,0xFE39,/* 0x60-0x67 */
+ 0xFE36,0xFF5B,0xFF5D,0xFE37,0xFE38,0x3014,0x3015,0xFE39,/* 0x60-0x67 */
0xFE3A,0x3010,0x3011,0xFE3B,0xFE3C,0x300A,0x300B,0xFE3D,/* 0x68-0x6F */
- 0xFE3E,0x3008,0x3009,0xFF3E,0xFE40,0x300C,0x300D,0xFE41,/* 0x70-0x77 */
+ 0xFE3E,0x3008,0x3009,0xFE3F,0xFE40,0x300C,0x300D,0xFE41,/* 0x70-0x77 */
0xFE42,0x300E,0x300F,0xFE43,0xFE44,0xFE59,0xFE5A,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0xFE5B,0xFE5C,0xFE5D,0xFE5E,0xFF40,0xFF07,0x201C,/* 0xA0-0xA7 */
- 0xFF02,0x301D,0x301E,0x2035,0x2032,0xFF03,0xFF06,0xFF0A,/* 0xA8-0xAF */
+ 0x0000,0xFE5B,0xFE5C,0xFE5D,0xFE5E,0x2018,0x2019,0x201C,/* 0xA0-0xA7 */
+ 0x201D,0x301D,0x301E,0x2035,0x2032,0xFF03,0xFF06,0xFF0A,/* 0xA8-0xAF */
0x203B,0x00A7,0x3003,0x25CB,0x25CF,0x25B3,0x25B2,0x25CE,/* 0xB0-0xB7 */
0x2606,0x2605,0x25C7,0x25C6,0x25A1,0x25A0,0x25BD,0x25BC,/* 0xB8-0xBF */
- 0x32A3,0x2105,0x0305,0xFFE3,0xFF3F,0x02CD,0xFE49,0xFE4A,/* 0xC0-0xC7 */
+ 0x32A3,0x2105,0x00AF,0xFFE3,0xFF3F,0x02CD,0xFE49,0xFE4A,/* 0xC0-0xC7 */
0xFE4D,0xFE4E,0xFE4B,0xFE4C,0xFE5F,0xFE60,0xFE61,0xFF0B,/* 0xC8-0xCF */
0xFF0D,0x00D7,0x00F7,0x00B1,0x221A,0xFF1C,0xFF1E,0xFF1D,/* 0xD0-0xD7 */
- 0x2266,0x2267,0x2260,0x221E,0x2252,0x2263,0xFE62,0xFE63,/* 0xD8-0xDF */
+ 0x2266,0x2267,0x2260,0x221E,0x2252,0x2261,0xFE62,0xFE63,/* 0xD8-0xDF */
0xFE64,0xFE65,0xFE66,0xFF5E,0x2229,0x222A,0x22A5,0x2220,/* 0xE0-0xE7 */
0x221F,0x22BF,0x33D2,0x33D1,0x222B,0x222E,0x2235,0x2234,/* 0xE8-0xEF */
- 0x2640,0x2642,0x2641,0x2609,0x2191,0x2193,0x2190,0x2192,/* 0xF0-0xF7 */
+ 0x2640,0x2642,0x2295,0x2299,0x2191,0x2193,0x2190,0x2192,/* 0xF0-0xF7 */
0x2196,0x2197,0x2199,0x2198,0x2225,0x2223,0xFF0F,0x0000,/* 0xF8-0xFF */
};
0xFF3C,0x2215,0xFE68,0xFF04,0xFFE5,0x3012,0xFFE0,0xFFE1,/* 0x40-0x47 */
0xFF05,0xFF20,0x2103,0x2109,0xFE69,0xFE6A,0xFE6B,0x33D5,/* 0x48-0x4F */
0x339C,0x339D,0x339E,0x33CE,0x33A1,0x338E,0x338F,0x33C4,/* 0x50-0x57 */
- 0x2218,0x5159,0x515B,0x515E,0x515D,0x5161,0x5163,0x55E7,/* 0x58-0x5F */
+ 0x00B0,0x5159,0x515B,0x515E,0x515D,0x5161,0x5163,0x55E7,/* 0x58-0x5F */
0x74E9,0x7CCE,0x2581,0x2582,0x2583,0x2584,0x2585,0x2586,/* 0x60-0x67 */
0x2587,0x2588,0x258F,0x258E,0x258D,0x258C,0x258B,0x258A,/* 0x68-0x6F */
0x2589,0x253C,0x2534,0x252C,0x2524,0x251C,0x2594,0x2500,/* 0x70-0x77 */
0x2502,0x2595,0x250C,0x2510,0x2514,0x2518,0x256D,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x256E,0x2570,0x256F,0x0000,0x0000,0x0000,0x0000,/* 0xA0-0xA7 */
+ 0x0000,0x256E,0x2570,0x256F,0x2550,0x255E,0x256A,0x2561,/* 0xA0-0xA7 */
0x25E2,0x25E3,0x25E5,0x25E4,0x2571,0x2572,0x2573,0xFF10,/* 0xA8-0xAF */
0xFF11,0xFF12,0xFF13,0xFF14,0xFF15,0xFF16,0xFF17,0xFF18,/* 0xB0-0xB7 */
0xFF19,0x2160,0x2161,0x2162,0x2163,0x2164,0x2165,0x2166,/* 0xB8-0xBF */
0x2167,0x2168,0x2169,0x3021,0x3022,0x3023,0x3024,0x3025,/* 0xC0-0xC7 */
- 0x3026,0x3027,0x3028,0x3029,0x0000,0x5344,0x0000,0xFF21,/* 0xC8-0xCF */
+ 0x3026,0x3027,0x3028,0x3029,0x5341,0x5344,0x5345,0xFF21,/* 0xC8-0xCF */
0xFF22,0xFF23,0xFF24,0xFF25,0xFF26,0xFF27,0xFF28,0xFF29,/* 0xD0-0xD7 */
0xFF2A,0xFF2B,0xFF2C,0xFF2D,0xFF2E,0xFF2F,0xFF30,0xFF31,/* 0xD8-0xDF */
0xFF32,0xFF33,0xFF34,0xFF35,0xFF36,0xFF37,0xFF38,0xFF39,/* 0xE0-0xE7 */
0x03BD,0x03BE,0x03BF,0x03C0,0x03C1,0x03C3,0x03C4,0x03C5,/* 0x68-0x6F */
0x03C6,0x03C7,0x03C8,0x03C9,0x3105,0x3106,0x3107,0x3108,/* 0x70-0x77 */
0x3109,0x310A,0x310B,0x310C,0x310D,0x310E,0x310F,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x3110,0x3111,0x3112,0x3113,0x3114,0x3115,0x3116,/* 0xA0-0xA7 */
0x3117,0x3118,0x3119,0x311A,0x311B,0x311C,0x311D,0x311E,/* 0xA8-0xAF */
0x311F,0x3120,0x3121,0x3122,0x3123,0x3124,0x3125,0x3126,/* 0xB0-0xB7 */
- 0x3127,0x3128,0x3129,0x2024,0x02C9,0x02CA,0x02C7,0x02CB,/* 0xB8-0xBF */
+ 0x3127,0x3128,0x3129,0x02D9,0x02C9,0x02CA,0x02C7,0x02CB,/* 0xB8-0xBF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC0-0xC7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xC8-0xCF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD0-0xD7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xD8-0xDF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE0-0xE7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE8-0xEF */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF0-0xF7 */
- 0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xF8-0xFF */
+ 0x0000,0x20AC,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0xE0-0xE7 */
};
static wchar_t c2u_A4[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x4E00,0x4E59,0x4E01,0x4E03,0x4E43,0x4E5D,0xF9BA,0x4E8C,/* 0x40-0x47 */
- 0x4EBA,0x513F,0x5165,0x516B,0x51E0,0x5200,0x5201,0xF98A,/* 0x48-0x4F */
+ 0x4E00,0x4E59,0x4E01,0x4E03,0x4E43,0x4E5D,0x4E86,0x4E8C,/* 0x40-0x47 */
+ 0x4EBA,0x513F,0x5165,0x516B,0x51E0,0x5200,0x5201,0x529B,/* 0x48-0x4F */
0x5315,0x5341,0x535C,0x53C8,0x4E09,0x4E0B,0x4E08,0x4E0A,/* 0x50-0x57 */
0x4E2B,0x4E38,0x51E1,0x4E45,0x4E48,0x4E5F,0x4E5E,0x4E8E,/* 0x58-0x5F */
0x4EA1,0x5140,0x5203,0x52FA,0x5343,0x53C9,0x53E3,0x571F,/* 0x60-0x67 */
- 0x58EB,0x5915,0x5927,0xF981,0x5B50,0x5B51,0x5B53,0x5BF8,/* 0x68-0x6F */
+ 0x58EB,0x5915,0x5927,0x5973,0x5B50,0x5B51,0x5B53,0x5BF8,/* 0x68-0x6F */
0x5C0F,0x5C22,0x5C38,0x5C71,0x5DDD,0x5DE5,0x5DF1,0x5DF2,/* 0x70-0x77 */
0x5DF3,0x5DFE,0x5E72,0x5EFE,0x5F0B,0x5F13,0x624D,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x4E11,0x4E10,0xF967,0x4E2D,0x4E30,0xF95E,0x4E4B,/* 0xA0-0xA7 */
+ 0x0000,0x4E11,0x4E10,0x4E0D,0x4E2D,0x4E30,0x4E39,0x4E4B,/* 0xA0-0xA7 */
0x5C39,0x4E88,0x4E91,0x4E95,0x4E92,0x4E94,0x4EA2,0x4EC1,/* 0xA8-0xAF */
- 0xF9FD,0x4EC3,0x4EC6,0x4EC7,0x4ECD,0x4ECA,0x4ECB,0x4EC4,/* 0xB0-0xB7 */
- 0x5143,0x5141,0x5167,0xF9D1,0x516E,0x516C,0x5197,0x51F6,/* 0xB8-0xBF */
- 0x5206,0xFA00,0x5208,0x52FB,0x52FE,0x52FF,0x5316,0x5339,/* 0xC0-0xC7 */
+ 0x4EC0,0x4EC3,0x4EC6,0x4EC7,0x4ECD,0x4ECA,0x4ECB,0x4EC4,/* 0xB0-0xB7 */
+ 0x5143,0x5141,0x5167,0x516D,0x516E,0x516C,0x5197,0x51F6,/* 0xB8-0xBF */
+ 0x5206,0x5207,0x5208,0x52FB,0x52FE,0x52FF,0x5316,0x5339,/* 0xC0-0xC7 */
0x5348,0x5347,0x5345,0x535E,0x5384,0x53CB,0x53CA,0x53CD,/* 0xC8-0xCF */
0x58EC,0x5929,0x592B,0x592A,0x592D,0x5B54,0x5C11,0x5C24,/* 0xD0-0xD7 */
0x5C3A,0x5C6F,0x5DF4,0x5E7B,0x5EFF,0x5F14,0x5F15,0x5FC3,/* 0xD8-0xDF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x4E16,0x4E15,0x4E14,0x4E18,0x4E3B,0x4E4D,0x4E4F,0x4E4E,/* 0x40-0x47 */
- 0x4EE5,0x4ED8,0x4ED4,0x4ED5,0x4ED6,0x4ED7,0x4EE3,0xF9A8,/* 0x48-0x4F */
+ 0x4EE5,0x4ED8,0x4ED4,0x4ED5,0x4ED6,0x4ED7,0x4EE3,0x4EE4,/* 0x48-0x4F */
0x4ED9,0x4EDE,0x5145,0x5144,0x5189,0x518A,0x51AC,0x51F9,/* 0x50-0x57 */
- 0x51FA,0x51F8,0x520A,0x52A0,0x529F,0x5305,0x5306,0xF963,/* 0x58-0x5F */
+ 0x51FA,0x51F8,0x520A,0x52A0,0x529F,0x5305,0x5306,0x5317,/* 0x58-0x5F */
0x531D,0x4EDF,0x534A,0x5349,0x5361,0x5360,0x536F,0x536E,/* 0x60-0x67 */
0x53BB,0x53EF,0x53E4,0x53F3,0x53EC,0x53EE,0x53E9,0x53E8,/* 0x68-0x6F */
0x53FC,0x53F8,0x53F5,0x53EB,0x53E6,0x53EA,0x53F2,0x53F1,/* 0x70-0x77 */
- 0x53F0,0xF906,0x53ED,0x53FB,0x56DB,0x56DA,0x5916,0x0000,/* 0x78-0x7F */
-
+ 0x53F0,0x53E5,0x53ED,0x53FB,0x56DB,0x56DA,0x5916,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6BCD,0x6C11,0x6C10,0x6C38,0x6C41,0x6C40,0x6C3E,0x72AF,/* 0xC0-0xC7 */
0x7384,0x7389,0x74DC,0x74E6,0x7518,0x751F,0x7528,0x7529,/* 0xC8-0xCF */
0x7530,0x7531,0x7532,0x7533,0x758B,0x767D,0x76AE,0x76BF,/* 0xD0-0xD7 */
- 0x76EE,0x77DB,0x77E2,0x77F3,0x793A,0x79BE,0x7A74,0xF9F7,/* 0xD8-0xDF */
+ 0x76EE,0x77DB,0x77E2,0x77F3,0x793A,0x79BE,0x7A74,0x7ACB,/* 0xD8-0xDF */
0x4E1E,0x4E1F,0x4E52,0x4E53,0x4E69,0x4E99,0x4EA4,0x4EA6,/* 0xE0-0xE7 */
0x4EA5,0x4EFF,0x4F09,0x4F19,0x4F0A,0x4F15,0x4F0D,0x4F10,/* 0xE8-0xEF */
0x4F11,0x4F0F,0x4EF2,0x4EF6,0x4EFB,0x4EF0,0x4EF3,0x4EFD,/* 0xF0-0xF7 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x5171,0x518D,0x51B0,0xF99C,0x5211,0x5212,0x520E,0x5216,/* 0x40-0x47 */
- 0xF99D,0x5308,0x5321,0x5320,0x5370,0x5371,0x5409,0xF9DE,/* 0x48-0x4F */
+ 0x5171,0x518D,0x51B0,0x5217,0x5211,0x5212,0x520E,0x5216,/* 0x40-0x47 */
+ 0x52A3,0x5308,0x5321,0x5320,0x5370,0x5371,0x5409,0x540F,/* 0x48-0x4F */
0x540C,0x540A,0x5410,0x5401,0x540B,0x5404,0x5411,0x540D,/* 0x50-0x57 */
0x5408,0x5403,0x540E,0x5406,0x5412,0x56E0,0x56DE,0x56DD,/* 0x58-0x5F */
0x5733,0x5730,0x5728,0x572D,0x572C,0x572F,0x5729,0x5919,/* 0x60-0x67 */
0x591A,0x5937,0x5938,0x5984,0x5978,0x5983,0x597D,0x5979,/* 0x68-0x6F */
- 0x5982,0x5981,0x5B57,0x5B58,0x5B87,0x5B88,0xFA04,0x5B89,/* 0x70-0x77 */
- 0x5BFA,0x5C16,0x5C79,0x5DDE,0x5E06,0x5E76,0xF98E,0x0000,/* 0x78-0x7F */
-
+ 0x5982,0x5981,0x5B57,0x5B58,0x5B87,0x5B88,0x5B85,0x5B89,/* 0x70-0x77 */
+ 0x5BFA,0x5C16,0x5C79,0x5DDE,0x5E06,0x5E76,0x5E74,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6B21,0x6B64,0x6B7B,0x6C16,0x6C5D,0x6C57,0x6C59,0x6C5F,/* 0xB8-0xBF */
0x6C60,0x6C50,0x6C55,0x6C61,0x6C5B,0x6C4D,0x6C4E,0x7070,/* 0xC0-0xC7 */
0x725F,0x725D,0x767E,0x7AF9,0x7C73,0x7CF8,0x7F36,0x7F8A,/* 0xC8-0xCF */
- 0xFA1E,0xF934,0x8003,0x800C,0x8012,0x8033,0x807F,0x8089,/* 0xD0-0xD7 */
- 0xF953,0x808C,0x81E3,0x81EA,0x81F3,0x81FC,0x820C,0x821B,/* 0xD8-0xDF */
- 0x821F,0x826E,0x8272,0x827E,0x866B,0x8840,0xFA08,0x8863,/* 0xE0-0xE7 */
- 0x897F,0x9621,0xF905,0x4EA8,0x4F4D,0x4F4F,0x4F47,0x4F57,/* 0xE8-0xEF */
+ 0x7FBD,0x8001,0x8003,0x800C,0x8012,0x8033,0x807F,0x8089,/* 0xD0-0xD7 */
+ 0x808B,0x808C,0x81E3,0x81EA,0x81F3,0x81FC,0x820C,0x821B,/* 0xD8-0xDF */
+ 0x821F,0x826E,0x8272,0x827E,0x866B,0x8840,0x884C,0x8863,/* 0xE0-0xE7 */
+ 0x897F,0x9621,0x4E32,0x4EA8,0x4F4D,0x4F4F,0x4F47,0x4F57,/* 0xE8-0xEF */
0x4F5E,0x4F34,0x4F5B,0x4F55,0x4F30,0x4F50,0x4F51,0x4F3D,/* 0xF0-0xF7 */
0x4F3A,0x4F38,0x4F43,0x4F54,0x4F3C,0x4F46,0x4F63,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x4F5C,0x4F60,0x4F2F,0x4F4E,0x4F36,0x4F59,0x4F5D,0x4F48,/* 0x40-0x47 */
- 0x4F5A,0x514C,0x514B,0x514D,0x5175,0x51B6,0xF92E,0x5225,/* 0x48-0x4F */
- 0x5224,0xF9DD,0x522A,0x5228,0x52AB,0x52A9,0x52AA,0x52AC,/* 0x50-0x57 */
- 0x5323,0x5373,0xF91C,0xF9ED,0x542D,0x541E,0x543E,0x5426,/* 0x58-0x5F */
- 0x544E,0x5427,0x5446,0x5443,0x5433,0x5448,0xF980,0x541B,/* 0x60-0x67 */
+ 0x4F5A,0x514C,0x514B,0x514D,0x5175,0x51B6,0x51B7,0x5225,/* 0x48-0x4F */
+ 0x5224,0x5229,0x522A,0x5228,0x52AB,0x52A9,0x52AA,0x52AC,/* 0x50-0x57 */
+ 0x5323,0x5373,0x5375,0x541D,0x542D,0x541E,0x543E,0x5426,/* 0x58-0x5F */
+ 0x544E,0x5427,0x5446,0x5443,0x5433,0x5448,0x5442,0x541B,/* 0x60-0x67 */
0x5429,0x544A,0x5439,0x543B,0x5438,0x542E,0x5435,0x5436,/* 0x68-0x6F */
0x5420,0x543C,0x5440,0x5431,0x542B,0x541F,0x542C,0x56EA,/* 0x70-0x77 */
0x56F0,0x56E4,0x56EB,0x574A,0x5751,0x5740,0x574D,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x5747,0x574E,0x573E,0x5750,0x574F,0x573B,0x58EF,/* 0xA0-0xA7 */
0x593E,0x599D,0x5992,0x59A8,0x599E,0x59A3,0x5999,0x5996,/* 0xA8-0xAF */
0x598D,0x59A4,0x5993,0x598A,0x59A5,0x5B5D,0x5B5C,0x5B5A,/* 0xB0-0xB7 */
- 0x5B5B,0x5B8C,0x5B8B,0x5B8F,0x5C2C,0x5C40,0x5C41,0xF9BD,/* 0xB8-0xBF */
+ 0x5B5B,0x5B8C,0x5B8B,0x5B8F,0x5C2C,0x5C40,0x5C41,0x5C3F,/* 0xB8-0xBF */
0x5C3E,0x5C90,0x5C91,0x5C94,0x5C8C,0x5DEB,0x5E0C,0x5E8F,/* 0xC0-0xC7 */
- 0x5E87,0x5E8A,0x5EF7,0xF943,0x5F1F,0x5F64,0x5F62,0x5F77,/* 0xC8-0xCF */
+ 0x5E87,0x5E8A,0x5EF7,0x5F04,0x5F1F,0x5F64,0x5F62,0x5F77,/* 0xC8-0xCF */
0x5F79,0x5FD8,0x5FCC,0x5FD7,0x5FCD,0x5FF1,0x5FEB,0x5FF8,/* 0xD0-0xD7 */
0x5FEA,0x6212,0x6211,0x6284,0x6297,0x6296,0x6280,0x6276,/* 0xD8-0xDF */
0x6289,0x626D,0x628A,0x627C,0x627E,0x6279,0x6273,0x6292,/* 0xE0-0xE7 */
0x626F,0x6298,0x626E,0x6295,0x6293,0x6291,0x6286,0x6539,/* 0xE8-0xEF */
- 0x653B,0x6538,0x65F1,0xF901,0x675F,0xF9E1,0x674F,0x6750,/* 0xF0-0xF7 */
+ 0x653B,0x6538,0x65F1,0x66F4,0x675F,0x674E,0x674F,0x6750,/* 0xF0-0xF7 */
0x6751,0x675C,0x6756,0x675E,0x6749,0x6746,0x6760,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6753,0x6757,0x6B65,0x6BCF,0x6C42,0x6C5E,0x6C99,0x6C81,/* 0x40-0x47 */
- 0xF972,0x6C89,0x6C85,0x6C9B,0x6C6A,0x6C7A,0x6C90,0x6C70,/* 0x48-0x4F */
+ 0x6C88,0x6C89,0x6C85,0x6C9B,0x6C6A,0x6C7A,0x6C90,0x6C70,/* 0x48-0x4F */
0x6C8C,0x6C68,0x6C96,0x6C92,0x6C7D,0x6C83,0x6C72,0x6C7E,/* 0x50-0x57 */
0x6C74,0x6C86,0x6C76,0x6C8D,0x6C94,0x6C98,0x6C82,0x7076,/* 0x58-0x5F */
- 0x707C,0x707D,0x7078,0xF946,0x7261,0x7260,0x72C4,0x72C2,/* 0x60-0x67 */
+ 0x707C,0x707D,0x7078,0x7262,0x7261,0x7260,0x72C4,0x72C2,/* 0x60-0x67 */
0x7396,0x752C,0x752B,0x7537,0x7538,0x7682,0x76EF,0x77E3,/* 0x68-0x6F */
0x79C1,0x79C0,0x79BF,0x7A76,0x7CFB,0x7F55,0x8096,0x8093,/* 0x70-0x77 */
- 0x809D,0x8098,0x809B,0x809A,0x80B2,0xF97C,0x8292,0x0000,/* 0x78-0x7F */
-
+ 0x809D,0x8098,0x809B,0x809A,0x80B2,0x826F,0x8292,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x828B,0x828D,0xFA0A,0x89D2,0x8A00,0x8C37,0x8C46,/* 0xA0-0xA7 */
- 0x8C55,0x8C9D,0x8D64,0x8D70,0x8DB3,0x8EAB,0xF902,0x8F9B,/* 0xA8-0xAF */
- 0xF971,0x8FC2,0x8FC6,0x8FC5,0x8FC4,0x5DE1,0x9091,0x90A2,/* 0xB0-0xB7 */
- 0x90AA,0x90A6,0x90A3,0x9149,0x91C6,0xF9E9,0x9632,0xF9C6,/* 0xB8-0xBF */
+ 0x0000,0x828B,0x828D,0x898B,0x89D2,0x8A00,0x8C37,0x8C46,/* 0xA0-0xA7 */
+ 0x8C55,0x8C9D,0x8D64,0x8D70,0x8DB3,0x8EAB,0x8ECA,0x8F9B,/* 0xA8-0xAF */
+ 0x8FB0,0x8FC2,0x8FC6,0x8FC5,0x8FC4,0x5DE1,0x9091,0x90A2,/* 0xB0-0xB7 */
+ 0x90AA,0x90A6,0x90A3,0x9149,0x91C6,0x91CC,0x9632,0x962E,/* 0xB8-0xBF */
0x9631,0x962A,0x962C,0x4E26,0x4E56,0x4E73,0x4E8B,0x4E9B,/* 0xC0-0xC7 */
0x4E9E,0x4EAB,0x4EAC,0x4F6F,0x4F9D,0x4F8D,0x4F73,0x4F7F,/* 0xC8-0xCF */
- 0x4F6C,0x4F9B,0xF9B5,0xF92D,0x4F83,0x4F70,0x4F75,0x4F88,/* 0xD0-0xD7 */
+ 0x4F6C,0x4F9B,0x4F8B,0x4F86,0x4F83,0x4F70,0x4F75,0x4F88,/* 0xD0-0xD7 */
0x4F69,0x4F7B,0x4F96,0x4F7E,0x4F8F,0x4F91,0x4F7A,0x5154,/* 0xD8-0xDF */
- 0x5152,0x5155,0xF978,0x5177,0x5176,0x5178,0x51BD,0x51FD,/* 0xE0-0xE7 */
- 0x523B,0x5238,0x5237,0xF9FF,0x5230,0x522E,0x5236,0x5241,/* 0xE8-0xEF */
+ 0x5152,0x5155,0x5169,0x5177,0x5176,0x5178,0x51BD,0x51FD,/* 0xE0-0xE7 */
+ 0x523B,0x5238,0x5237,0x523A,0x5230,0x522E,0x5236,0x5241,/* 0xE8-0xEF */
0x52BE,0x52BB,0x5352,0x5354,0x5353,0x5351,0x5366,0x5377,/* 0xF0-0xF7 */
0x5378,0x5379,0x53D6,0x53D4,0x53D7,0x5473,0x5475,0x0000,/* 0xF8-0xFF */
};
0x5486,0x547C,0x5490,0x5471,0x5476,0x548C,0x549A,0x5462,/* 0x48-0x4F */
0x5468,0x548B,0x547D,0x548E,0x56FA,0x5783,0x5777,0x576A,/* 0x50-0x57 */
0x5769,0x5761,0x5766,0x5764,0x577C,0x591C,0x5949,0x5947,/* 0x58-0x5F */
- 0xF90C,0x5944,0x5954,0x59BE,0x59BB,0x59D4,0x59B9,0x59AE,/* 0x60-0x67 */
+ 0x5948,0x5944,0x5954,0x59BE,0x59BB,0x59D4,0x59B9,0x59AE,/* 0x60-0x67 */
0x59D1,0x59C6,0x59D0,0x59CD,0x59CB,0x59D3,0x59CA,0x59AF,/* 0x68-0x6F */
0x59B3,0x59D2,0x59C5,0x5B5F,0x5B64,0x5B63,0x5B97,0x5B9A,/* 0x70-0x77 */
0x5B98,0x5B9C,0x5B99,0x5B9B,0x5C1A,0x5C48,0x5C45,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5CB3,0x5E18,0x5E1A,0x5E16,0x5E15,0x5E1B,0x5E11,0x5E78,/* 0xA8-0xAF */
0x5E9A,0x5E97,0x5E9C,0x5E95,0x5E96,0x5EF6,0x5F26,0x5F27,/* 0xB0-0xB7 */
0x5F29,0x5F80,0x5F81,0x5F7F,0x5F7C,0x5FDD,0x5FE0,0x5FFD,/* 0xB8-0xBF */
- 0xF9A3,0x5FFF,0x600F,0x6014,0x602F,0x6035,0x6016,0x602A,/* 0xC0-0xC7 */
+ 0x5FF5,0x5FFF,0x600F,0x6014,0x602F,0x6035,0x6016,0x602A,/* 0xC0-0xC7 */
0x6015,0x6021,0x6027,0x6029,0x602B,0x601B,0x6216,0x6215,/* 0xC8-0xCF */
- 0x623F,0x623E,0x6240,0x627F,0xF925,0x62CC,0x62C4,0x62BF,/* 0xD0-0xD7 */
- 0x62C2,0x62B9,0x62D2,0x62DB,0x62AB,0xFA02,0x62D4,0x62CB,/* 0xD8-0xDF */
+ 0x623F,0x623E,0x6240,0x627F,0x62C9,0x62CC,0x62C4,0x62BF,/* 0xD0-0xD7 */
+ 0x62C2,0x62B9,0x62D2,0x62DB,0x62AB,0x62D3,0x62D4,0x62CB,/* 0xD8-0xDF */
0x62C8,0x62A8,0x62BD,0x62BC,0x62D0,0x62D9,0x62C7,0x62CD,/* 0xE0-0xE7 */
0x62B5,0x62DA,0x62B1,0x62D8,0x62D6,0x62D7,0x62C6,0x62AC,/* 0xE8-0xEF */
- 0x62CE,0x653E,0x65A7,0x65BC,0x65FA,0x6614,0xF9E0,0x660C,/* 0xF0-0xF7 */
+ 0x62CE,0x653E,0x65A7,0x65BC,0x65FA,0x6614,0x6613,0x660C,/* 0xF0-0xF7 */
0x6606,0x6602,0x660E,0x6600,0x660F,0x6615,0x660A,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6607,0x670D,0x670B,0x676D,0x678B,0x6795,0x6771,0x679C,/* 0x40-0x47 */
- 0x6773,0x6777,0x6787,0x679D,0xF9F4,0x676F,0x6770,0x677F,/* 0x48-0x4F */
+ 0x6773,0x6777,0x6787,0x679D,0x6797,0x676F,0x6770,0x677F,/* 0x48-0x4F */
0x6789,0x677E,0x6790,0x6775,0x679A,0x6793,0x677C,0x676A,/* 0x50-0x57 */
0x6772,0x6B23,0x6B66,0x6B67,0x6B7F,0x6C13,0x6C1B,0x6CE3,/* 0x58-0x5F */
- 0x6CE8,0x6CF3,0x6CB1,0xF968,0xF9E3,0x6CB3,0x6CBD,0x6CBE,/* 0x60-0x67 */
+ 0x6CE8,0x6CF3,0x6CB1,0x6CCC,0x6CE5,0x6CB3,0x6CBD,0x6CBE,/* 0x60-0x67 */
0x6CBC,0x6CE2,0x6CAB,0x6CD5,0x6CD3,0x6CB8,0x6CC4,0x6CB9,/* 0x68-0x6F */
0x6CC1,0x6CAE,0x6CD7,0x6CC5,0x6CF1,0x6CBF,0x6CBB,0x6CE1,/* 0x70-0x77 */
0x6CDB,0x6CCA,0x6CAC,0x6CEF,0x6CDC,0x6CD6,0x6CE0,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x7095,0x708E,0x7092,0x708A,0xF9FB,0x722C,0x722D,/* 0xA0-0xA7 */
- 0x7238,0x7248,0x7267,0x7269,0xF9FA,0x72CE,0x72D9,0x72D7,/* 0xA8-0xAF */
+ 0x0000,0x7095,0x708E,0x7092,0x708A,0x7099,0x722C,0x722D,/* 0xA0-0xA7 */
+ 0x7238,0x7248,0x7267,0x7269,0x72C0,0x72CE,0x72D9,0x72D7,/* 0xA8-0xAF */
0x72D0,0x73A9,0x73A8,0x739F,0x73AB,0x73A5,0x753D,0x759D,/* 0xB0-0xB7 */
0x7599,0x759A,0x7684,0x76C2,0x76F2,0x76F4,0x77E5,0x77FD,/* 0xB8-0xBF */
0x793E,0x7940,0x7941,0x79C9,0x79C8,0x7A7A,0x7A79,0x7AFA,/* 0xC0-0xC7 */
0x81FE,0x820D,0x82B3,0x829D,0x8299,0x82AD,0x82BD,0x829F,/* 0xD8-0xDF */
0x82B9,0x82B1,0x82AC,0x82A5,0x82AF,0x82B8,0x82A3,0x82B0,/* 0xE0-0xE7 */
0x82BE,0x82B7,0x864E,0x8671,0x521D,0x8868,0x8ECB,0x8FCE,/* 0xE8-0xEF */
- 0x8FD4,0x8FD1,0x90B5,0x90B8,0x90B1,0x90B6,0x91C7,0xF90A,/* 0xF0-0xF7 */
+ 0x8FD4,0x8FD1,0x90B5,0x90B8,0x90B1,0x90B6,0x91C7,0x91D1,/* 0xF0-0xF7 */
0x9577,0x9580,0x961C,0x9640,0x963F,0x963B,0x9644,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x9642,0x96B9,0x96E8,0x9752,0x975E,0x4E9F,0x4EAD,0xF977,/* 0x40-0x47 */
- 0x4FE1,0x4FB5,0x4FAF,0xF965,0x4FE0,0x4FD1,0x4FCF,0x4FDD,/* 0x48-0x4F */
+ 0x9642,0x96B9,0x96E8,0x9752,0x975E,0x4E9F,0x4EAD,0x4EAE,/* 0x40-0x47 */
+ 0x4FE1,0x4FB5,0x4FAF,0x4FBF,0x4FE0,0x4FD1,0x4FCF,0x4FDD,/* 0x48-0x4F */
0x4FC3,0x4FB6,0x4FD8,0x4FDF,0x4FCA,0x4FD7,0x4FAE,0x4FD0,/* 0x50-0x57 */
0x4FC4,0x4FC2,0x4FDA,0x4FCE,0x4FDE,0x4FB7,0x5157,0x5192,/* 0x58-0x5F */
0x5191,0x51A0,0x524E,0x5243,0x524A,0x524D,0x524C,0x524B,/* 0x60-0x67 */
0x5247,0x52C7,0x52C9,0x52C3,0x52C1,0x530D,0x5357,0x537B,/* 0x68-0x6F */
0x539A,0x53DB,0x54AC,0x54C0,0x54A8,0x54CE,0x54C9,0x54B8,/* 0x70-0x77 */
- 0x54A6,0x54B3,0x54C7,0x54C2,0xF99E,0x54AA,0x54C1,0x0000,/* 0x78-0x7F */
-
+ 0x54A6,0x54B3,0x54C7,0x54C2,0x54BD,0x54AA,0x54C1,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x54C4,0x54C8,0x54AF,0x54AB,0x54B1,0x54BB,0x54A9,/* 0xA0-0xA7 */
0x54A7,0x54BF,0x56FF,0x5782,0x578B,0x57A0,0x57A3,0x57A2,/* 0xA8-0xAF */
- 0x57CE,0x57AE,0x5793,0x5955,0xF909,0x594F,0x594E,0x5950,/* 0xB0-0xB7 */
+ 0x57CE,0x57AE,0x5793,0x5955,0x5951,0x594F,0x594E,0x5950,/* 0xB0-0xB7 */
0x59DC,0x59D8,0x59FF,0x59E3,0x59E8,0x5A03,0x59E5,0x59EA,/* 0xB8-0xBF */
0x59DA,0x59E6,0x5A01,0x59FB,0x5B69,0x5BA3,0x5BA6,0x5BA4,/* 0xC0-0xC7 */
0x5BA2,0x5BA5,0x5C01,0x5C4E,0x5C4F,0x5C4D,0x5C4B,0x5CD9,/* 0xC8-0xCF */
- 0x5CD2,0x5DF7,0x5E1D,0x5E25,0x5E1F,0x5E7D,0x5EA0,0xFA01,/* 0xD0-0xD7 */
- 0x5EFA,0x5F08,0x5F2D,0x5F65,0x5F88,0x5F85,0x5F8A,0xF9D8,/* 0xD8-0xDF */
- 0x5F87,0x5F8C,0x5F89,0xF960,0x601D,0x6020,0x6025,0x600E,/* 0xE0-0xE7 */
+ 0x5CD2,0x5DF7,0x5E1D,0x5E25,0x5E1F,0x5E7D,0x5EA0,0x5EA6,/* 0xD0-0xD7 */
+ 0x5EFA,0x5F08,0x5F2D,0x5F65,0x5F88,0x5F85,0x5F8A,0x5F8B,/* 0xD8-0xDF */
+ 0x5F87,0x5F8C,0x5F89,0x6012,0x601D,0x6020,0x6025,0x600E,/* 0xE0-0xE7 */
0x6028,0x604D,0x6070,0x6068,0x6062,0x6046,0x6043,0x606C,/* 0xE8-0xEF */
0x606B,0x606A,0x6064,0x6241,0x62DC,0x6316,0x6309,0x62FC,/* 0xF0-0xF7 */
0x62ED,0x6301,0x62EE,0x62FD,0x6307,0x62F1,0x62F7,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x62EF,0x62EC,0xF973,0x62F4,0x6311,0x6302,0x653F,0x6545,/* 0x40-0x47 */
+ 0x62EF,0x62EC,0x62FE,0x62F4,0x6311,0x6302,0x653F,0x6545,/* 0x40-0x47 */
0x65AB,0x65BD,0x65E2,0x6625,0x662D,0x6620,0x6627,0x662F,/* 0x48-0x4F */
0x661F,0x6628,0x6631,0x6624,0x66F7,0x67FF,0x67D3,0x67F1,/* 0x50-0x57 */
0x67D4,0x67D0,0x67EC,0x67B6,0x67AF,0x67F5,0x67E9,0x67EF,/* 0x58-0x5F */
0x67C4,0x67D1,0x67B4,0x67DA,0x67E5,0x67B8,0x67CF,0x67DE,/* 0x60-0x67 */
- 0xF9C9,0x67B0,0x67D9,0x67E2,0x67DD,0x67D2,0x6B6A,0x6B83,/* 0x68-0x6F */
+ 0x67F3,0x67B0,0x67D9,0x67E2,0x67DD,0x67D2,0x6B6A,0x6B83,/* 0x68-0x6F */
0x6B86,0x6BB5,0x6BD2,0x6BD7,0x6C1F,0x6CC9,0x6D0B,0x6D32,/* 0x70-0x77 */
- 0x6D2A,0xF9CA,0x6D25,0x6D0C,0x6D31,0xFA05,0x6D17,0x0000,/* 0x78-0x7F */
-
+ 0x6D2A,0x6D41,0x6D25,0x6D0C,0x6D31,0x6D1E,0x6D17,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x6D3B,0x6D3D,0x6D3E,0x6D36,0xF915,0x6CF5,0x6D39,/* 0xA0-0xA7 */
+ 0x0000,0x6D3B,0x6D3D,0x6D3E,0x6D36,0x6D1B,0x6CF5,0x6D39,/* 0xA0-0xA7 */
0x6D27,0x6D38,0x6D29,0x6D2E,0x6D35,0x6D0E,0x6D2B,0x70AB,/* 0xA8-0xAF */
0x70BA,0x70B3,0x70AC,0x70AF,0x70AD,0x70B8,0x70AE,0x70A4,/* 0xB0-0xB7 */
0x7230,0x7272,0x726F,0x7274,0x72E9,0x72E0,0x72E1,0x73B7,/* 0xB8-0xBF */
- 0x73CA,0x73BB,0xF9AD,0x73CD,0x73C0,0x73B3,0x751A,0x752D,/* 0xC0-0xC7 */
+ 0x73CA,0x73BB,0x73B2,0x73CD,0x73C0,0x73B3,0x751A,0x752D,/* 0xC0-0xC7 */
0x754F,0x754C,0x754E,0x754B,0x75AB,0x75A4,0x75A5,0x75A2,/* 0xC8-0xCF */
0x75A3,0x7678,0x7686,0x7687,0x7688,0x76C8,0x76C6,0x76C3,/* 0xD0-0xD7 */
- 0x76C5,0xF96D,0x76F9,0x76F8,0x7709,0x770B,0x76FE,0x76FC,/* 0xD8-0xDF */
+ 0x76C5,0x7701,0x76F9,0x76F8,0x7709,0x770B,0x76FE,0x76FC,/* 0xD8-0xDF */
0x7707,0x77DC,0x7802,0x7814,0x780C,0x780D,0x7946,0x7949,/* 0xE0-0xE7 */
0x7948,0x7947,0x79B9,0x79BA,0x79D1,0x79D2,0x79CB,0x7A7F,/* 0xE8-0xEF */
0x7A81,0x7AFF,0x7AFD,0x7C7D,0x7D02,0x7D05,0x7D00,0x7D09,/* 0xF0-0xF7 */
0x8010,0x800D,0x8011,0x8036,0x80D6,0x80E5,0x80DA,0x80C3,/* 0x40-0x47 */
0x80C4,0x80CC,0x80E1,0x80DB,0x80CE,0x80DE,0x80E4,0x80DD,/* 0x48-0x4F */
0x81F4,0x8222,0x82E7,0x8303,0x8305,0x82E3,0x82DB,0x82E6,/* 0x50-0x57 */
- 0x8304,0xF974,0x8302,0x8309,0x82D2,0x82D7,0x82F1,0x8301,/* 0x58-0x5F */
+ 0x8304,0x82E5,0x8302,0x8309,0x82D2,0x82D7,0x82F1,0x8301,/* 0x58-0x5F */
0x82DC,0x82D4,0x82D1,0x82DE,0x82D3,0x82DF,0x82EF,0x8306,/* 0x60-0x67 */
0x8650,0x8679,0x867B,0x867A,0x884D,0x886B,0x8981,0x89D4,/* 0x68-0x6F */
0x8A08,0x8A02,0x8A03,0x8C9E,0x8CA0,0x8D74,0x8D73,0x8DB4,/* 0x70-0x77 */
0x8ECD,0x8ECC,0x8FF0,0x8FE6,0x8FE2,0x8FEA,0x8FE5,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x8FED,0x8FEB,0x8FE4,0x8FE8,0x90CA,0x90CE,0x90C1,/* 0xA0-0xA7 */
- 0x90C3,0x914B,0x914A,0x91CD,0x9582,0x9650,0xF951,0x964C,/* 0xA8-0xAF */
- 0xFA09,0x9762,0x9769,0x97CB,0x97ED,0x97F3,0x9801,0x98A8,/* 0xB0-0xB7 */
+ 0x90C3,0x914B,0x914A,0x91CD,0x9582,0x9650,0x964B,0x964C,/* 0xA8-0xAF */
+ 0x964D,0x9762,0x9769,0x97CB,0x97ED,0x97F3,0x9801,0x98A8,/* 0xB0-0xB7 */
0x98DB,0x98DF,0x9996,0x9999,0x4E58,0x4EB3,0x500C,0x500D,/* 0xB8-0xBF */
0x5023,0x4FEF,0x5026,0x5025,0x4FF8,0x5029,0x5016,0x5006,/* 0xC0-0xC7 */
0x503C,0x501F,0x501A,0x5012,0x5011,0x4FFA,0x5000,0x5014,/* 0xC8-0xCF */
0x5028,0x4FF1,0x5021,0x500B,0x5019,0x5018,0x4FF3,0x4FEE,/* 0xD0-0xD7 */
- 0x502D,0x502A,0x4FFE,0xF9D4,0x5009,0x517C,0x51A4,0x51A5,/* 0xD8-0xDF */
- 0x51A2,0x51CD,0xF955,0x51C6,0x51CB,0x5256,0x525C,0x5254,/* 0xE0-0xE7 */
+ 0x502D,0x502A,0x4FFE,0x502B,0x5009,0x517C,0x51A4,0x51A5,/* 0xD8-0xDF */
+ 0x51A2,0x51CD,0x51CC,0x51C6,0x51CB,0x5256,0x525C,0x5254,/* 0xE0-0xE7 */
0x525B,0x525D,0x532A,0x537F,0x539F,0x539D,0x53DF,0x54E8,/* 0xE8-0xEF */
0x5510,0x5501,0x5537,0x54FC,0x54E5,0x54F2,0x5506,0x54FA,/* 0xF0-0xF7 */
0x5514,0x54E9,0x54ED,0x54E1,0x5509,0x54EE,0x54EA,0x0000,/* 0xF8-0xFF */
0x5C51,0x5C55,0x5C50,0x5CED,0x5CFD,0x5CFB,0x5CEA,0x5CE8,/* 0x68-0x6F */
0x5CF0,0x5CF6,0x5D01,0x5CF4,0x5DEE,0x5E2D,0x5E2B,0x5EAB,/* 0x70-0x77 */
0x5EAD,0x5EA7,0x5F31,0x5F92,0x5F91,0x5F90,0x6059,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6084,0x609F,0x609A,0x608D,0x6094,0x608C,0x6085,0x6096,/* 0xA8-0xAF */
0x6247,0x62F3,0x6308,0x62FF,0x634E,0x633E,0x632F,0x6355,/* 0xB0-0xB7 */
0x6342,0x6346,0x634F,0x6349,0x633A,0x6350,0x633D,0x632A,/* 0xB8-0xBF */
- 0x632B,0x6328,0x634D,0x634C,0x6548,0x6549,0xF9BE,0x65C1,/* 0xC0-0xC7 */
- 0xF983,0x6642,0x6649,0x664F,0x6643,0x6652,0x664C,0x6645,/* 0xC8-0xCF */
- 0x6641,0x66F8,0x6714,0x6715,0xF929,0x6821,0x6838,0x6848,/* 0xD0-0xD7 */
- 0x6846,0x6853,0x6839,0x6842,0x6854,0x6829,0x68B3,0xF9DA,/* 0xD8-0xDF */
+ 0x632B,0x6328,0x634D,0x634C,0x6548,0x6549,0x6599,0x65C1,/* 0xC0-0xC7 */
+ 0x65C5,0x6642,0x6649,0x664F,0x6643,0x6652,0x664C,0x6645,/* 0xC8-0xCF */
+ 0x6641,0x66F8,0x6714,0x6715,0x6717,0x6821,0x6838,0x6848,/* 0xD0-0xD7 */
+ 0x6846,0x6853,0x6839,0x6842,0x6854,0x6829,0x68B3,0x6817,/* 0xD8-0xDF */
0x684C,0x6851,0x683D,0x67F4,0x6850,0x6840,0x683C,0x6843,/* 0xE0-0xE7 */
0x682A,0x6845,0x6813,0x6818,0x6841,0x6B8A,0x6B89,0x6BB7,/* 0xE8-0xEF */
- 0x6C23,0x6C27,0x6C28,0x6C26,0x6C24,0x6CF0,0xF92A,0x6D95,/* 0xF0-0xF7 */
+ 0x6C23,0x6C27,0x6C28,0x6C26,0x6C24,0x6CF0,0x6D6A,0x6D95,/* 0xF0-0xF7 */
0x6D88,0x6D87,0x6D66,0x6D78,0x6D77,0x6D59,0x6D93,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6D6C,0x6D89,0x6D6E,0x6D5A,0x6D74,0x6D69,0x6D8C,0x6D8A,/* 0x40-0x47 */
- 0x6D79,0x6D85,0x6D65,0x6D94,0x70CA,0x70D8,0x70E4,0xF916,/* 0x48-0x4F */
- 0xF99F,0x70CF,0x7239,0x7279,0xF92B,0x72F9,0x72FD,0x72F8,/* 0x50-0x57 */
- 0x72F7,0x7386,0x73ED,0xF9CC,0x73EE,0x73E0,0x73EA,0xF917,/* 0x58-0x5F */
- 0x7554,0x755D,0x755C,0x755A,0xF9CD,0x75BE,0x75C5,0x75C7,/* 0x60-0x67 */
+ 0x6D79,0x6D85,0x6D65,0x6D94,0x70CA,0x70D8,0x70E4,0x70D9,/* 0x48-0x4F */
+ 0x70C8,0x70CF,0x7239,0x7279,0x72FC,0x72F9,0x72FD,0x72F8,/* 0x50-0x57 */
+ 0x72F7,0x7386,0x73ED,0x7409,0x73EE,0x73E0,0x73EA,0x73DE,/* 0x58-0x5F */
+ 0x7554,0x755D,0x755C,0x755A,0x7559,0x75BE,0x75C5,0x75C7,/* 0x60-0x67 */
0x75B2,0x75B3,0x75BD,0x75BC,0x75B9,0x75C2,0x75B8,0x768B,/* 0x68-0x6F */
- 0x76B0,0xFA17,0x76CD,0x76CE,0x7729,0x771F,0x7720,0x7728,/* 0x70-0x77 */
+ 0x76B0,0x76CA,0x76CD,0x76CE,0x7729,0x771F,0x7720,0x7728,/* 0x70-0x77 */
0x77E9,0x7830,0x7827,0x7838,0x781D,0x7834,0x7837,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x7825,0x782D,0x7820,0x781F,0x7832,0x7955,0x7950,/* 0xA0-0xA7 */
- 0x7960,0x795F,0x7956,0xFA19,0x795D,0x7957,0x795A,0x79E4,/* 0xA8-0xAF */
+ 0x7960,0x795F,0x7956,0x795E,0x795D,0x7957,0x795A,0x79E4,/* 0xA8-0xAF */
0x79E3,0x79E7,0x79DF,0x79E6,0x79E9,0x79D8,0x7A84,0x7A88,/* 0xB0-0xB7 */
0x7AD9,0x7B06,0x7B11,0x7C89,0x7D21,0x7D17,0x7D0B,0x7D0A,/* 0xB8-0xBF */
- 0x7D20,0xF96A,0x7D14,0xF9CF,0x7D15,0x7D1A,0x7D1C,0x7D0D,/* 0xC0-0xC7 */
+ 0x7D20,0x7D22,0x7D14,0x7D10,0x7D15,0x7D1A,0x7D1C,0x7D0D,/* 0xC0-0xC7 */
0x7D19,0x7D1B,0x7F3A,0x7F5F,0x7F94,0x7FC5,0x7FC1,0x8006,/* 0xC8-0xCF */
0x8018,0x8015,0x8019,0x8017,0x803D,0x803F,0x80F1,0x8102,/* 0xD0-0xD7 */
0x80F0,0x8105,0x80ED,0x80F4,0x8106,0x80F8,0x80F3,0x8108,/* 0xD8-0xDF */
0x80FD,0x810A,0x80FC,0x80EF,0x81ED,0x81EC,0x8200,0x8210,/* 0xE0-0xE7 */
0x822A,0x822B,0x8228,0x822C,0x82BB,0x832B,0x8352,0x8354,/* 0xE8-0xEF */
0x834A,0x8338,0x8350,0x8349,0x8335,0x8334,0x834F,0x8332,/* 0xF0-0xF7 */
- 0x8339,0xF9FE,0x8317,0x8340,0x8331,0x8328,0x8343,0x0000,/* 0xF8-0xFF */
+ 0x8339,0x8336,0x8317,0x8340,0x8331,0x8328,0x8343,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_B0[256] = {
0x8654,0x868A,0x86AA,0x8693,0x86A4,0x86A9,0x868C,0x86A3,/* 0x40-0x47 */
0x869C,0x8870,0x8877,0x8881,0x8882,0x887D,0x8879,0x8A18,/* 0x48-0x4F */
0x8A10,0x8A0E,0x8A0C,0x8A15,0x8A0A,0x8A17,0x8A13,0x8A16,/* 0x50-0x57 */
- 0x8A0F,0x8A11,0xF900,0x8C7A,0x8C79,0x8CA1,0x8CA2,0x8D77,/* 0x58-0x5F */
+ 0x8A0F,0x8A11,0x8C48,0x8C7A,0x8C79,0x8CA1,0x8CA2,0x8D77,/* 0x58-0x5F */
0x8EAC,0x8ED2,0x8ED4,0x8ECF,0x8FB1,0x9001,0x9006,0x8FF7,/* 0x60-0x67 */
0x9000,0x8FFA,0x8FF4,0x9003,0x8FFD,0x9005,0x8FF8,0x9095,/* 0x68-0x6F */
0x90E1,0x90DD,0x90E2,0x9152,0x914D,0x914C,0x91D8,0x91DD,/* 0x70-0x77 */
0x91D7,0x91DC,0x91D9,0x9583,0x9662,0x9663,0x9661,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x99AC,0x9AA8,0x9AD8,0x9B25,0x9B32,0x9B3C,0x4E7E,0x507A,/* 0xA8-0xAF */
0x507D,0x505C,0x5047,0x5043,0x504C,0x505A,0x5049,0x5065,/* 0xB0-0xB7 */
0x5076,0x504E,0x5055,0x5075,0x5074,0x5077,0x504F,0x500F,/* 0xB8-0xBF */
- 0x506F,0x506D,0x515C,0x5195,0x51F0,0x526A,0x526F,0xF952,/* 0xC0-0xC7 */
- 0x52D9,0x52D8,0x52D5,0x5310,0x530F,0x5319,0xF9EB,0x5340,/* 0xC8-0xCF */
- 0x533E,0xF96B,0x66FC,0x5546,0x556A,0x5566,0x5544,0x555E,/* 0xD0-0xD7 */
+ 0x506F,0x506D,0x515C,0x5195,0x51F0,0x526A,0x526F,0x52D2,/* 0xC0-0xC7 */
+ 0x52D9,0x52D8,0x52D5,0x5310,0x530F,0x5319,0x533F,0x5340,/* 0xC8-0xCF */
+ 0x533E,0x53C3,0x66FC,0x5546,0x556A,0x5566,0x5544,0x555E,/* 0xD0-0xD7 */
0x5561,0x5543,0x554A,0x5531,0x5556,0x554F,0x5555,0x552F,/* 0xD8-0xDF */
0x5564,0x5538,0x552E,0x555C,0x552C,0x5563,0x5533,0x5541,/* 0xE0-0xE7 */
0x5557,0x5708,0x570B,0x5709,0x57DF,0x5805,0x580A,0x5806,/* 0xE8-0xEF */
0x5A3C,0x5A62,0x5A5A,0x5A46,0x5A4A,0x5B70,0x5BC7,0x5BC5,/* 0x40-0x47 */
0x5BC4,0x5BC2,0x5BBF,0x5BC6,0x5C09,0x5C08,0x5C07,0x5C60,/* 0x48-0x4F */
0x5C5C,0x5C5D,0x5D07,0x5D06,0x5D0E,0x5D1B,0x5D16,0x5D22,/* 0x50-0x57 */
- 0x5D11,0x5D29,0x5D14,0xF9D5,0x5D24,0x5D27,0x5D17,0x5DE2,/* 0x58-0x5F */
+ 0x5D11,0x5D29,0x5D14,0x5D19,0x5D24,0x5D27,0x5D17,0x5DE2,/* 0x58-0x5F */
0x5E38,0x5E36,0x5E33,0x5E37,0x5EB7,0x5EB8,0x5EB6,0x5EB5,/* 0x60-0x67 */
0x5EBE,0x5F35,0x5F37,0x5F57,0x5F6C,0x5F69,0x5F6B,0x5F97,/* 0x68-0x6F */
0x5F99,0x5F9E,0x5F98,0x5FA1,0x5FA0,0x5F9C,0x607F,0x60A3,/* 0x70-0x77 */
0x6089,0x60A0,0x60A8,0x60CB,0x60B4,0x60E6,0x60BD,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x60C5,0x60BB,0x60B5,0x60DC,0x60BC,0x60D8,0x60D5,/* 0xA0-0xA7 */
0x60C6,0x60DF,0x60B8,0x60DA,0x60C7,0x621A,0x621B,0x6248,/* 0xA8-0xAF */
- 0xF975,0x63A7,0x6372,0x6396,0x63A2,0x63A5,0x6377,0x6367,/* 0xB0-0xB7 */
+ 0x63A0,0x63A7,0x6372,0x6396,0x63A2,0x63A5,0x6377,0x6367,/* 0xB0-0xB7 */
0x6398,0x63AA,0x6371,0x63A9,0x6389,0x6383,0x639B,0x636B,/* 0xB8-0xBF */
0x63A8,0x6384,0x6388,0x6399,0x63A1,0x63AC,0x6392,0x638F,/* 0xC0-0xC7 */
- 0x6380,0xF9A4,0x6369,0x6368,0x637A,0x655D,0x6556,0x6551,/* 0xC8-0xCF */
+ 0x6380,0x637B,0x6369,0x6368,0x637A,0x655D,0x6556,0x6551,/* 0xC8-0xCF */
0x6559,0x6557,0x555F,0x654F,0x6558,0x6555,0x6554,0x659C,/* 0xD0-0xD7 */
0x659B,0x65AC,0x65CF,0x65CB,0x65CC,0x65CE,0x665D,0x665A,/* 0xD8-0xDF */
- 0x6664,0x6668,0x6666,0x665E,0x66F9,0x52D7,0x671B,0xF97A,/* 0xE0-0xE7 */
+ 0x6664,0x6668,0x6666,0x665E,0x66F9,0x52D7,0x671B,0x6881,/* 0xE0-0xE7 */
0x68AF,0x68A2,0x6893,0x68B5,0x687F,0x6876,0x68B1,0x68A7,/* 0xE8-0xEF */
0x6897,0x68B0,0x6883,0x68C4,0x68AD,0x6886,0x6885,0x6894,/* 0xF0-0xF7 */
- 0x689D,0xF9E2,0x689F,0x68A1,0x6882,0x6B32,0xF970,0x0000,/* 0xF8-0xFF */
+ 0x689D,0x68A8,0x689F,0x68A1,0x6882,0x6B32,0x6BBA,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_B2[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6BEB,0x6BEC,0x6C2B,0x6D8E,0x6DBC,0x6DF3,0x6DD9,0x6DB2,/* 0x40-0x47 */
- 0x6DE1,0x6DCC,0x6DE4,0x6DFB,0x6DFA,0x6E05,0x6DC7,0xF9F5,/* 0x48-0x4F */
+ 0x6DE1,0x6DCC,0x6DE4,0x6DFB,0x6DFA,0x6E05,0x6DC7,0x6DCB,/* 0x48-0x4F */
0x6DAF,0x6DD1,0x6DAE,0x6DDE,0x6DF9,0x6DB8,0x6DF7,0x6DF5,/* 0x50-0x57 */
- 0x6DC5,0x6DD2,0x6E1A,0x6DB5,0xF94D,0x6DEB,0x6DD8,0xF9D6,/* 0x58-0x5F */
+ 0x6DC5,0x6DD2,0x6E1A,0x6DB5,0x6DDA,0x6DEB,0x6DD8,0x6DEA,/* 0x58-0x5F */
0x6DF1,0x6DEE,0x6DE8,0x6DC6,0x6DC4,0x6DAA,0x6DEC,0x6DBF,/* 0x60-0x67 */
0x6DE6,0x70F9,0x7109,0x710A,0x70FD,0x70EF,0x723D,0x727D,/* 0x68-0x6F */
- 0x7281,0x731C,0x731B,0x7316,0x7313,0x7319,0xF9DB,0x7405,/* 0x70-0x77 */
- 0x740A,0x7403,0xF9E4,0x73FE,0x740D,0x74E0,0x74F6,0x0000,/* 0x78-0x7F */
-
+ 0x7281,0x731C,0x731B,0x7316,0x7313,0x7319,0x7387,0x7405,/* 0x70-0x77 */
+ 0x740A,0x7403,0x7406,0x73FE,0x740D,0x74E0,0x74F6,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x74F7,0x751C,0x7522,0xF976,0x7566,0x7562,0xF962,/* 0xA0-0xA7 */
+ 0x0000,0x74F7,0x751C,0x7522,0x7565,0x7566,0x7562,0x7570,/* 0xA0-0xA7 */
0x758F,0x75D4,0x75D5,0x75B5,0x75CA,0x75CD,0x768E,0x76D4,/* 0xA8-0xAF */
0x76D2,0x76DB,0x7737,0x773E,0x773C,0x7736,0x7738,0x773A,/* 0xB0-0xB7 */
- 0xF9CE,0x7843,0x784E,0xFA1A,0x7968,0x796D,0x79FB,0x7A92,/* 0xB8-0xBF */
- 0x7A95,0xF9F8,0x7B28,0x7B1B,0x7B2C,0x7B26,0x7B19,0x7B1E,/* 0xC0-0xC7 */
- 0x7B2E,0xF9F9,0x7C97,0x7C95,0x7D46,0x7D43,0x7D71,0x7D2E,/* 0xC8-0xCF */
- 0x7D39,0x7D3C,0x7D40,0x7D30,0x7D33,0x7D44,0xF94F,0x7D42,/* 0xD0-0xD7 */
- 0x7D32,0x7D31,0x7F3D,0x7F9E,0xF9AF,0x7FCC,0x7FCE,0x7FD2,/* 0xD8-0xDF */
- 0x801C,0x804A,0xF9B0,0x812F,0x8116,0x8123,0x812B,0x8129,/* 0xE0-0xE7 */
+ 0x786B,0x7843,0x784E,0x7965,0x7968,0x796D,0x79FB,0x7A92,/* 0xB8-0xBF */
+ 0x7A95,0x7B20,0x7B28,0x7B1B,0x7B2C,0x7B26,0x7B19,0x7B1E,/* 0xC0-0xC7 */
+ 0x7B2E,0x7C92,0x7C97,0x7C95,0x7D46,0x7D43,0x7D71,0x7D2E,/* 0xC8-0xCF */
+ 0x7D39,0x7D3C,0x7D40,0x7D30,0x7D33,0x7D44,0x7D2F,0x7D42,/* 0xD0-0xD7 */
+ 0x7D32,0x7D31,0x7F3D,0x7F9E,0x7F9A,0x7FCC,0x7FCE,0x7FD2,/* 0xD8-0xDF */
+ 0x801C,0x804A,0x8046,0x812F,0x8116,0x8123,0x812B,0x8129,/* 0xE0-0xE7 */
0x8130,0x8124,0x8202,0x8235,0x8237,0x8236,0x8239,0x838E,/* 0xE8-0xEF */
0x839E,0x8398,0x8378,0x83A2,0x8396,0x83BD,0x83AB,0x8392,/* 0xF0-0xF7 */
0x838A,0x8393,0x8389,0x83A0,0x8377,0x837B,0x837C,0x0000,/* 0xF8-0xFF */
0x8A2A,0x8A1D,0x8A23,0x8A25,0x8A31,0x8A2D,0x8A1F,0x8A1B,/* 0x58-0x5F */
0x8A22,0x8C49,0x8C5A,0x8CA9,0x8CAC,0x8CAB,0x8CA8,0x8CAA,/* 0x60-0x67 */
0x8CA7,0x8D67,0x8D66,0x8DBE,0x8DBA,0x8EDB,0x8EDF,0x9019,/* 0x68-0x6F */
- 0x900D,0x901A,0x9017,0xF99A,0x901F,0x901D,0x9010,0x9015,/* 0x70-0x77 */
+ 0x900D,0x901A,0x9017,0x9023,0x901F,0x901D,0x9010,0x9015,/* 0x70-0x77 */
0x901E,0x9020,0x900F,0x9022,0x9016,0x901B,0x9014,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x90E8,0x90ED,0xFA26,0x9157,0x91CE,0x91F5,0x91E6,/* 0xA0-0xA7 */
- 0x91E3,0x91E7,0x91ED,0x91E9,0x9589,0x966A,0xF959,0x9673,/* 0xA8-0xAF */
- 0xF9D3,0x9670,0x9674,0x9676,0x9677,0x966C,0x96C0,0x96EA,/* 0xB0-0xB7 */
+ 0x0000,0x90E8,0x90ED,0x90FD,0x9157,0x91CE,0x91F5,0x91E6,/* 0xA0-0xA7 */
+ 0x91E3,0x91E7,0x91ED,0x91E9,0x9589,0x966A,0x9675,0x9673,/* 0xA8-0xAF */
+ 0x9678,0x9670,0x9674,0x9676,0x9677,0x966C,0x96C0,0x96EA,/* 0xB0-0xB7 */
0x96E9,0x7AE0,0x7ADF,0x9802,0x9803,0x9B5A,0x9CE5,0x9E75,/* 0xB8-0xBF */
- 0xF940,0x9EA5,0x9EBB,0x50A2,0x508D,0x5085,0x5099,0x5091,/* 0xC0-0xC7 */
+ 0x9E7F,0x9EA5,0x9EBB,0x50A2,0x508D,0x5085,0x5099,0x5091,/* 0xC0-0xC7 */
0x5080,0x5096,0x5098,0x509A,0x6700,0x51F1,0x5272,0x5274,/* 0xC8-0xCF */
- 0x5275,0x5269,0xF92F,0x52DD,0x52DB,0x535A,0x53A5,0x557B,/* 0xD0-0xD7 */
+ 0x5275,0x5269,0x52DE,0x52DD,0x52DB,0x535A,0x53A5,0x557B,/* 0xD0-0xD7 */
0x5580,0x55A7,0x557C,0x558A,0x559D,0x5598,0x5582,0x559C,/* 0xD8-0xDF */
- 0x55AA,0x5594,0xF90B,0x558B,0x5583,0x55B3,0x55AE,0x559F,/* 0xE0-0xE7 */
+ 0x55AA,0x5594,0x5587,0x558B,0x5583,0x55B3,0x55AE,0x559F,/* 0xE0-0xE7 */
0x553E,0x55B2,0x559A,0x55BB,0x55AC,0x55B1,0x557E,0x5589,/* 0xE8-0xEF */
0x55AB,0x5599,0x570D,0x582F,0x582A,0x5834,0x5824,0x5830,/* 0xF0-0xF7 */
0x5831,0x5821,0x581D,0x5820,0x58F9,0x58FA,0x5960,0x0000,/* 0xF8-0xFF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x5A77,0x5A9A,0x5A7F,0x5A92,0x5A9B,0x5AA7,0x5B73,0x5B71,/* 0x40-0x47 */
0x5BD2,0x5BCC,0x5BD3,0x5BD0,0x5C0A,0x5C0B,0x5C31,0x5D4C,/* 0x48-0x4F */
- 0xF921,0x5D34,0x5D47,0x5DFD,0x5E45,0x5E3D,0x5E40,0x5E43,/* 0x50-0x57 */
- 0x5E7E,0xF928,0x5EC1,0x5EC2,0x5EC4,0x5F3C,0x5F6D,0xF966,/* 0x58-0x5F */
- 0x5FAA,0x5FA8,0x60D1,0xF9B9,0x60B2,0x60B6,0x60E0,0x611C,/* 0x60-0x67 */
+ 0x5D50,0x5D34,0x5D47,0x5DFD,0x5E45,0x5E3D,0x5E40,0x5E43,/* 0x50-0x57 */
+ 0x5E7E,0x5ECA,0x5EC1,0x5EC2,0x5EC4,0x5F3C,0x5F6D,0x5FA9,/* 0x58-0x5F */
+ 0x5FAA,0x5FA8,0x60D1,0x60E1,0x60B2,0x60B6,0x60E0,0x611C,/* 0x60-0x67 */
0x6123,0x60FA,0x6115,0x60F0,0x60FB,0x60F4,0x6168,0x60F1,/* 0x68-0x6F */
0x610E,0x60F6,0x6109,0x6100,0x6112,0x621F,0x6249,0x63A3,/* 0x70-0x77 */
0x638C,0x63CF,0x63C0,0x63E9,0x63C9,0x63C6,0x63CD,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x63D2,0x63E3,0x63D0,0x63E1,0x63D6,0x63ED,0x63EE,/* 0xA0-0xA7 */
0x6376,0x63F4,0x63EA,0x63DB,0x6452,0x63DA,0x63F9,0x655E,/* 0xA8-0xAF */
0x6566,0x6562,0x6563,0x6591,0x6590,0x65AF,0x666E,0x6670,/* 0xB0-0xB7 */
- 0xFA12,0x6676,0x666F,0x6691,0x667A,0x667E,0x6677,0x66FE,/* 0xB8-0xBF */
+ 0x6674,0x6676,0x666F,0x6691,0x667A,0x667E,0x6677,0x66FE,/* 0xB8-0xBF */
0x66FF,0x671F,0x671D,0x68FA,0x68D5,0x68E0,0x68D8,0x68D7,/* 0xC0-0xC7 */
0x6905,0x68DF,0x68F5,0x68EE,0x68E7,0x68F9,0x68D2,0x68F2,/* 0xC8-0xCF */
0x68E3,0x68CB,0x68CD,0x690D,0x6912,0x690E,0x68C9,0x68DA,/* 0xD0-0xD7 */
0x7119,0x711A,0x7126,0x7130,0x7121,0x7136,0x716E,0x711C,/* 0x48-0x4F */
0x724C,0x7284,0x7280,0x7336,0x7325,0x7334,0x7329,0x743A,/* 0x50-0x57 */
0x742A,0x7433,0x7422,0x7425,0x7435,0x7436,0x7434,0x742F,/* 0x58-0x5F */
- 0x741B,0x7426,0x7428,0x7525,0x7526,0x756B,0x756A,0xF9E5,/* 0x60-0x67 */
+ 0x741B,0x7426,0x7428,0x7525,0x7526,0x756B,0x756A,0x75E2,/* 0x60-0x67 */
0x75DB,0x75E3,0x75D9,0x75D8,0x75DE,0x75E0,0x767B,0x767C,/* 0x68-0x6F */
0x7696,0x7693,0x76B4,0x76DC,0x774F,0x77ED,0x785D,0x786C,/* 0x70-0x77 */
0x786F,0x7A0D,0x7A08,0x7A0B,0x7A05,0x7A00,0x7A98,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x800B,0x8052,0x8085,0x8155,0x8154,0x814B,0x8151,0x814E,/* 0xC0-0xC7 */
0x8139,0x8146,0x813E,0x814C,0x8153,0x8174,0x8212,0x821C,/* 0xC8-0xCF */
0x83E9,0x8403,0x83F8,0x840D,0x83E0,0x83C5,0x840B,0x83C1,/* 0xD0-0xD7 */
- 0x83EF,0xF958,0x83F4,0x8457,0x840A,0x83F0,0x840C,0x83CC,/* 0xD8-0xDF */
+ 0x83EF,0x83F1,0x83F4,0x8457,0x840A,0x83F0,0x840C,0x83CC,/* 0xD8-0xDF */
0x83FD,0x83F2,0x83CA,0x8438,0x840E,0x8404,0x83DC,0x8407,/* 0xE0-0xE7 */
0x83D4,0x83DF,0x865B,0x86DF,0x86D9,0x86ED,0x86D4,0x86DB,/* 0xE8-0xEF */
- 0x86E4,0x86D0,0x86DE,0x8857,0x88C1,0xF9A0,0x88B1,0x8983,/* 0xF0-0xF7 */
+ 0x86E4,0x86D0,0x86DE,0x8857,0x88C1,0x88C2,0x88B1,0x8983,/* 0xF0-0xF7 */
0x8996,0x8A3B,0x8A60,0x8A55,0x8A5E,0x8A3C,0x8A41,0x0000,/* 0xF8-0xFF */
};
0x8CC0,0x8CB4,0x8CB7,0x8CB6,0x8CBF,0x8CB8,0x8D8A,0x8D85,/* 0x50-0x57 */
0x8D81,0x8DCE,0x8DDD,0x8DCB,0x8DDA,0x8DD1,0x8DCC,0x8DDB,/* 0x58-0x5F */
0x8DC6,0x8EFB,0x8EF8,0x8EFC,0x8F9C,0x902E,0x9035,0x9031,/* 0x60-0x67 */
- 0xFA25,0x9032,0x9036,0x9102,0x90F5,0x9109,0x90FE,0x9163,/* 0x68-0x6F */
- 0x9165,0xF97E,0x9214,0x9215,0x9223,0x9209,0x921E,0x920D,/* 0x70-0x77 */
+ 0x9038,0x9032,0x9036,0x9102,0x90F5,0x9109,0x90FE,0x9163,/* 0x68-0x6F */
+ 0x9165,0x91CF,0x9214,0x9215,0x9223,0x9209,0x921E,0x920D,/* 0x70-0x77 */
0x9210,0x9207,0x9211,0x9594,0x958F,0x958B,0x9591,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x9593,0x9592,0x958E,0x968A,0x968E,0x968B,0x967D,/* 0xA0-0xA7 */
- 0x9685,0xF9DC,0x968D,0x9672,0x9684,0x96C1,0x96C5,0x96C4,/* 0xA8-0xAF */
+ 0x9685,0x9686,0x968D,0x9672,0x9684,0x96C1,0x96C5,0x96C4,/* 0xA8-0xAF */
0x96C6,0x96C7,0x96EF,0x96F2,0x97CC,0x9805,0x9806,0x9808,/* 0xB0-0xB7 */
- 0x98E7,0x98EA,0xFA2A,0x98E9,0x98F2,0x98ED,0x99AE,0x99AD,/* 0xB8-0xBF */
- 0x9EC3,0x9ECD,0x9ED1,0xF91B,0x50AD,0x50B5,0x50B2,0x50B3,/* 0xC0-0xC7 */
+ 0x98E7,0x98EA,0x98EF,0x98E9,0x98F2,0x98ED,0x99AE,0x99AD,/* 0xB8-0xBF */
+ 0x9EC3,0x9ECD,0x9ED1,0x4E82,0x50AD,0x50B5,0x50B2,0x50B3,/* 0xC0-0xC7 */
0x50C5,0x50BE,0x50AC,0x50B7,0x50BB,0x50AF,0x50C7,0x527F,/* 0xC8-0xCF */
0x5277,0x527D,0x52DF,0x52E6,0x52E4,0x52E2,0x52E3,0x532F,/* 0xD0-0xD7 */
0x55DF,0x55E8,0x55D3,0x55E6,0x55CE,0x55DC,0x55C7,0x55D1,/* 0xD8-0xDF */
0x55E3,0x55E4,0x55EF,0x55DA,0x55E1,0x55C5,0x55C6,0x55E5,/* 0xE0-0xE7 */
- 0x55C9,0x5712,0x5713,0xF96C,0x5851,0x5858,0x5857,0xFA10,/* 0xE8-0xEF */
+ 0x55C9,0x5712,0x5713,0x585E,0x5851,0x5858,0x5857,0x585A,/* 0xE8-0xEF */
0x5854,0x586B,0x584C,0x586D,0x584A,0x5862,0x5852,0x584B,/* 0xF0-0xF7 */
0x5967,0x5AC1,0x5AC9,0x5ACC,0x5ABE,0x5ABD,0x5ABC,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x5AB3,0x5AC2,0x5AB2,0x5D69,0x5D6F,0x5E4C,0x5E79,0xF9A2,/* 0x40-0x47 */
+ 0x5AB3,0x5AC2,0x5AB2,0x5D69,0x5D6F,0x5E4C,0x5E79,0x5EC9,/* 0x40-0x47 */
0x5EC8,0x5F12,0x5F59,0x5FAC,0x5FAE,0x611A,0x610F,0x6148,/* 0x48-0x4F */
0x611F,0x60F3,0x611B,0x60F9,0x6101,0x6108,0x614E,0x614C,/* 0x50-0x57 */
- 0xF9D9,0x614D,0x613E,0x6134,0x6127,0x610D,0x6106,0x6137,/* 0x58-0x5F */
+ 0x6144,0x614D,0x613E,0x6134,0x6127,0x610D,0x6106,0x6137,/* 0x58-0x5F */
0x6221,0x6222,0x6413,0x643E,0x641E,0x642A,0x642D,0x643D,/* 0x60-0x67 */
0x642C,0x640F,0x641C,0x6414,0x640D,0x6436,0x6416,0x6417,/* 0x68-0x6F */
- 0x6406,0x656C,0x659F,0x65B0,0x6697,0x6689,0x6687,0xF9C5,/* 0x70-0x77 */
+ 0x6406,0x656C,0x659F,0x65B0,0x6697,0x6689,0x6687,0x6688,/* 0x70-0x77 */
0x6696,0x6684,0x6698,0x668D,0x6703,0x6994,0x696D,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x694A,0x6968,0x696B,0x695E,0x6953,0x6979,0x6986,0x695D,/* 0xA8-0xAF */
0x6963,0x695B,0x6B47,0x6B72,0x6BC0,0x6BBF,0x6BD3,0x6BFD,/* 0xB0-0xB7 */
0x6EA2,0x6EAF,0x6ED3,0x6EB6,0x6EC2,0x6E90,0x6E9D,0x6EC7,/* 0xB8-0xBF */
- 0x6EC5,0x6EA5,0x6E98,0x6EBC,0xF9EC,0x6EAB,0xF904,0x6E96,/* 0xC0-0xC7 */
- 0xF9CB,0x6EC4,0x6ED4,0x6EAA,0x6EA7,0x6EB4,0x714E,0x7159,/* 0xC8-0xCF */
- 0x7169,0x7164,0xF993,0x7167,0x715C,0x716C,0x7166,0x714C,/* 0xD0-0xD7 */
+ 0x6EC5,0x6EA5,0x6E98,0x6EBC,0x6EBA,0x6EAB,0x6ED1,0x6E96,/* 0xC0-0xC7 */
+ 0x6E9C,0x6EC4,0x6ED4,0x6EAA,0x6EA7,0x6EB4,0x714E,0x7159,/* 0xC8-0xCF */
+ 0x7169,0x7164,0x7149,0x7167,0x715C,0x716C,0x7166,0x714C,/* 0xD0-0xD7 */
0x7165,0x715E,0x7146,0x7168,0x7156,0x723A,0x7252,0x7337,/* 0xD8-0xDF */
0x7345,0x733F,0x733E,0x746F,0x745A,0x7455,0x745F,0x745E,/* 0xE0-0xE7 */
0x7441,0x743F,0x7459,0x745B,0x745C,0x7576,0x7578,0x7600,/* 0xE8-0xEF */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x7779,0x776A,0x776C,0x775C,0x7765,0x7768,0x7762,0x77EE,/* 0x40-0x47 */
- 0x788E,0x78B0,0x7897,0x7898,0xF93B,0x7889,0x787C,0x7891,/* 0x48-0x4F */
- 0x7893,0x787F,0x797A,0xF93C,0x7981,0x842C,0x79BD,0xF956,/* 0x50-0x57 */
+ 0x788E,0x78B0,0x7897,0x7898,0x788C,0x7889,0x787C,0x7891,/* 0x48-0x4F */
+ 0x7893,0x787F,0x797A,0x797F,0x7981,0x842C,0x79BD,0x7A1C,/* 0x50-0x57 */
0x7A1A,0x7A20,0x7A14,0x7A1F,0x7A1E,0x7A9F,0x7AA0,0x7B77,/* 0x58-0x5F */
0x7BC0,0x7B60,0x7B6E,0x7B67,0x7CB1,0x7CB3,0x7CB5,0x7D93,/* 0x60-0x67 */
0x7D79,0x7D91,0x7D81,0x7D8F,0x7D5B,0x7F6E,0x7F69,0x7F6A,/* 0x68-0x6F */
0x7F72,0x7FA9,0x7FA8,0x7FA4,0x8056,0x8058,0x8086,0x8084,/* 0x70-0x77 */
0x8171,0x8170,0x8178,0x8165,0x816E,0x8173,0x816B,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x8179,0x817A,0x8166,0x8205,0x8247,0x8482,0x8477,/* 0xA0-0xA7 */
- 0xF918,0x8431,0x8475,0x8466,0x846B,0xF96E,0x846C,0x845B,/* 0xA8-0xAF */
+ 0x843D,0x8431,0x8475,0x8466,0x846B,0x8449,0x846C,0x845B,/* 0xA8-0xAF */
0x843C,0x8435,0x8461,0x8463,0x8469,0x846D,0x8446,0x865E,/* 0xB0-0xB7 */
- 0xF936,0x865F,0x86F9,0x8713,0x8708,0x8707,0x8700,0x86FE,/* 0xB8-0xBF */
+ 0x865C,0x865F,0x86F9,0x8713,0x8708,0x8707,0x8700,0x86FE,/* 0xB8-0xBF */
0x86FB,0x8702,0x8703,0x8706,0x870A,0x8859,0x88DF,0x88D4,/* 0xC0-0xC7 */
- 0x88D9,0x88DC,0x88D8,0x88DD,0xF9E8,0x88CA,0x88D5,0x88D2,/* 0xC8-0xCF */
+ 0x88D9,0x88DC,0x88D8,0x88DD,0x88E1,0x88CA,0x88D5,0x88D2,/* 0xC8-0xCF */
0x899C,0x89E3,0x8A6B,0x8A72,0x8A73,0x8A66,0x8A69,0x8A70,/* 0xD0-0xD7 */
0x8A87,0x8A7C,0x8A63,0x8AA0,0x8A71,0x8A85,0x8A6D,0x8A62,/* 0xD8-0xDF */
0x8A6E,0x8A6C,0x8A79,0x8A7B,0x8A3E,0x8A68,0x8C62,0x8C8A,/* 0xE0-0xE7 */
- 0x8C89,0x8CCA,0x8CC7,0xF903,0x8CC4,0x8CB2,0x8CC3,0xF948,/* 0xE8-0xEF */
- 0x8CC5,0x8DE1,0x8DDF,0x8DE8,0xF937,0x8DF3,0x8DFA,0x8DEA,/* 0xF0-0xF7 */
+ 0x8C89,0x8CCA,0x8CC7,0x8CC8,0x8CC4,0x8CB2,0x8CC3,0x8CC2,/* 0xE8-0xEF */
+ 0x8CC5,0x8DE1,0x8DDF,0x8DE8,0x8DEF,0x8DF3,0x8DFA,0x8DEA,/* 0xF0-0xF7 */
0x8DE4,0x8DE6,0x8EB2,0x8F03,0x8F09,0x8EFE,0x8F0A,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8F9F,0x8FB2,0x904B,0x904A,0x9053,0x9042,0x9054,0x903C,/* 0x40-0x47 */
0x9055,0x9050,0x9047,0x904F,0x904E,0x904D,0x9051,0x903E,/* 0x48-0x4F */
- 0x9041,0x9112,0x9117,0x916C,0xF919,0x9169,0x91C9,0x9237,/* 0x50-0x57 */
+ 0x9041,0x9112,0x9117,0x916C,0x916A,0x9169,0x91C9,0x9237,/* 0x50-0x57 */
0x9257,0x9238,0x923D,0x9240,0x923E,0x925B,0x924B,0x9264,/* 0x58-0x5F */
- 0x9251,0xF9B1,0x9249,0x924D,0x9245,0x9239,0x923F,0x925A,/* 0x60-0x67 */
+ 0x9251,0x9234,0x9249,0x924D,0x9245,0x9239,0x923F,0x925A,/* 0x60-0x67 */
0x9598,0x9698,0x9694,0x9695,0x96CD,0x96CB,0x96C9,0x96CA,/* 0x68-0x6F */
- 0xF949,0x96FB,0x96F9,0xF9B2,0xFA1C,0x9774,0x9776,0x9810,/* 0x70-0x77 */
- 0x9811,0x9813,0x980A,0x9812,0x980C,0xFA2B,0x98F4,0x0000,/* 0x78-0x7F */
-
+ 0x96F7,0x96FB,0x96F9,0x96F6,0x9756,0x9774,0x9776,0x9810,/* 0x70-0x77 */
+ 0x9811,0x9813,0x980A,0x9812,0x980C,0x98FC,0x98F4,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x98FD,0x98FE,0x99B3,0x99B1,0x99B4,0x9AE1,0x9CE9,/* 0xA0-0xA7 */
0x9E82,0x9F0E,0x9F13,0x9F20,0x50E7,0x50EE,0x50E5,0x50D6,/* 0xA8-0xAF */
- 0x50ED,0xF9BB,0x50D5,0x50CF,0x50D1,0x50F1,0x50CE,0x50E9,/* 0xB0-0xB7 */
+ 0x50ED,0x50DA,0x50D5,0x50CF,0x50D1,0x50F1,0x50CE,0x50E9,/* 0xB0-0xB7 */
0x5162,0x51F3,0x5283,0x5282,0x5331,0x53AD,0x55FE,0x5600,/* 0xB8-0xBF */
0x561B,0x5617,0x55FD,0x5614,0x5606,0x5609,0x560D,0x560E,/* 0xC0-0xC7 */
0x55F7,0x5616,0x561F,0x5608,0x5610,0x55F6,0x5718,0x5716,/* 0xC8-0xCF */
0x5875,0x587E,0x5883,0x5893,0x588A,0x5879,0x5885,0x587D,/* 0xD0-0xD7 */
0x58FD,0x5925,0x5922,0x5924,0x596A,0x5969,0x5AE1,0x5AE6,/* 0xD8-0xDF */
- 0x5AE9,0x5AD7,0x5AD6,0x5AD8,0x5AE3,0x5B75,0x5BDE,0xF9AA,/* 0xE0-0xE7 */
+ 0x5AE9,0x5AD7,0x5AD6,0x5AD8,0x5AE3,0x5B75,0x5BDE,0x5BE7,/* 0xE0-0xE7 */
0x5BE1,0x5BE5,0x5BE6,0x5BE8,0x5BE2,0x5BE4,0x5BDF,0x5C0D,/* 0xE8-0xEF */
- 0xF94B,0x5D84,0x5D87,0x5E5B,0x5E63,0x5E55,0x5E57,0x5E54,/* 0xF0-0xF7 */
- 0xFA0B,0x5ED6,0x5F0A,0x5F46,0x5F70,0x5FB9,0x6147,0x0000,/* 0xF8-0xFF */
+ 0x5C62,0x5D84,0x5D87,0x5E5B,0x5E63,0x5E55,0x5E57,0x5E54,/* 0xF0-0xF7 */
+ 0x5ED3,0x5ED6,0x5F0A,0x5F46,0x5F70,0x5FB9,0x6147,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_BA[256] = {
0x69C1,0x69AE,0x69D3,0x69CB,0x699B,0x69B7,0x69BB,0x69AB,/* 0x60-0x67 */
0x69B4,0x69D0,0x69CD,0x69AD,0x69CC,0x69A6,0x69C3,0x69A3,/* 0x68-0x6F */
0x6B49,0x6B4C,0x6C33,0x6F33,0x6F14,0x6EFE,0x6F13,0x6EF4,/* 0x70-0x77 */
- 0x6F29,0x6F3E,0x6F20,0x6F2C,0xF94E,0x6F02,0x6F22,0x0000,/* 0x78-0x7F */
-
+ 0x6F29,0x6F3E,0x6F20,0x6F2C,0x6F0F,0x6F02,0x6F22,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x6EFF,0x6EEF,0x6F06,0x6F31,0x6F38,0x6F32,0xF992,/* 0xA0-0xA7 */
+ 0x0000,0x6EFF,0x6EEF,0x6F06,0x6F31,0x6F38,0x6F32,0x6F23,/* 0xA0-0xA7 */
0x6F15,0x6F2B,0x6F2F,0x6F88,0x6F2A,0x6EEC,0x6F01,0x6EF2,/* 0xA8-0xAF */
0x6ECC,0x6EF7,0x7194,0x7199,0x717D,0x718A,0x7184,0x7192,/* 0xB0-0xB7 */
0x723E,0x7292,0x7296,0x7344,0x7350,0x7464,0x7463,0x746A,/* 0xB8-0xBF */
0x7470,0x746D,0x7504,0x7591,0x7627,0x760D,0x760B,0x7609,/* 0xC0-0xC7 */
0x7613,0x76E1,0x76E3,0x7784,0x777D,0x777F,0x7761,0x78C1,/* 0xC8-0xCF */
- 0x789F,0x78A7,0x78B3,0x78A9,0x78A3,0x798E,0xFA1B,0x798D,/* 0xD0-0xD7 */
+ 0x789F,0x78A7,0x78B3,0x78A9,0x78A3,0x798E,0x798F,0x798D,/* 0xD0-0xD7 */
0x7A2E,0x7A31,0x7AAA,0x7AA9,0x7AED,0x7AEF,0x7BA1,0x7B95,/* 0xD8-0xDF */
0x7B8B,0x7B75,0x7B97,0x7B9D,0x7B94,0x7B8F,0x7BB8,0x7B87,/* 0xE0-0xE7 */
- 0x7B84,0x7CB9,0x7CBD,0xFA1D,0x7DBB,0x7DB0,0x7D9C,0x7DBD,/* 0xE8-0xEF */
- 0xF957,0xF93D,0x7DCA,0x7DB4,0x7DB2,0x7DB1,0x7DBA,0x7DA2,/* 0xF0-0xF7 */
+ 0x7B84,0x7CB9,0x7CBD,0x7CBE,0x7DBB,0x7DB0,0x7D9C,0x7DBD,/* 0xE8-0xEF */
+ 0x7DBE,0x7DA0,0x7DCA,0x7DB4,0x7DB2,0x7DB1,0x7DBA,0x7DA2,/* 0xF0-0xF7 */
0x7DBF,0x7DB5,0x7DB8,0x7DAD,0x7DD2,0x7DC7,0x7DAC,0x0000,/* 0xF8-0xFF */
};
0x8499,0x849E,0x84B2,0x849C,0x84CB,0x84B8,0x84C0,0x84D3,/* 0x58-0x5F */
0x8490,0x84BC,0x84D1,0x84CA,0x873F,0x871C,0x873B,0x8722,/* 0x60-0x67 */
0x8725,0x8734,0x8718,0x8755,0x8737,0x8729,0x88F3,0x8902,/* 0x68-0x6F */
- 0x88F4,0x88F9,0xF912,0x88FD,0x88E8,0x891A,0x88EF,0x8AA6,/* 0x70-0x77 */
+ 0x88F4,0x88F9,0x88F8,0x88FD,0x88E8,0x891A,0x88EF,0x8AA6,/* 0x70-0x77 */
0x8A8C,0x8A9E,0x8AA3,0x8A8D,0x8AA1,0x8A93,0x8AA4,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0xF9A1,0x8AA5,0x8AA8,0x8A98,0x8A91,0x8A9A,0x8AA7,/* 0xA0-0xA7 */
+ 0x0000,0x8AAA,0x8AA5,0x8AA8,0x8A98,0x8A91,0x8A9A,0x8AA7,/* 0xA0-0xA7 */
0x8C6A,0x8C8D,0x8C8C,0x8CD3,0x8CD1,0x8CD2,0x8D6B,0x8D99,/* 0xA8-0xAF */
0x8D95,0x8DFC,0x8F14,0x8F12,0x8F15,0x8F13,0x8FA3,0x9060,/* 0xB0-0xB7 */
0x9058,0x905C,0x9063,0x9059,0x905E,0x9062,0x905D,0x905B,/* 0xB8-0xBF */
0x9280,0x9285,0x9298,0x9296,0x927B,0x9293,0x929C,0x92A8,/* 0xC8-0xCF */
0x927C,0x9291,0x95A1,0x95A8,0x95A9,0x95A3,0x95A5,0x95A4,/* 0xD0-0xD7 */
0x9699,0x969C,0x969B,0x96CC,0x96D2,0x9700,0x977C,0x9785,/* 0xD8-0xDF */
- 0x97F6,0x9817,0xF9B4,0x98AF,0x98B1,0x9903,0x9905,0x990C,/* 0xE0-0xE7 */
+ 0x97F6,0x9817,0x9818,0x98AF,0x98B1,0x9903,0x9905,0x990C,/* 0xE0-0xE7 */
0x9909,0x99C1,0x9AAF,0x9AB0,0x9AE6,0x9B41,0x9B42,0x9CF4,/* 0xE8-0xEF */
0x9CF6,0x9CF3,0x9EBC,0x9F3B,0x9F4A,0x5104,0x5100,0x50FB,/* 0xF0-0xF7 */
- 0x50F5,0x50F9,0x5102,0x5108,0x5109,0x5105,0xF954,0x0000,/* 0xF8-0xFF */
+ 0x50F5,0x50F9,0x5102,0x5108,0x5109,0x5105,0x51DC,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_BC[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x5287,0x5288,0xF9C7,0x528D,0x528A,0x52F0,0x53B2,0x562E,/* 0x40-0x47 */
+ 0x5287,0x5288,0x5289,0x528D,0x528A,0x52F0,0x53B2,0x562E,/* 0x40-0x47 */
0x563B,0x5639,0x5632,0x563F,0x5634,0x5629,0x5653,0x564E,/* 0x48-0x4F */
0x5657,0x5674,0x5636,0x562F,0x5630,0x5880,0x589F,0x589E,/* 0x50-0x57 */
0x58B3,0x589C,0x58AE,0x58A9,0x58A6,0x596D,0x5B09,0x5AFB,/* 0x58-0x5F */
- 0x5B0B,0x5AF5,0x5B0C,0x5B08,0xF9BC,0x5BEC,0x5BE9,0x5BEB,/* 0x60-0x67 */
- 0x5C64,0xF9DF,0x5D9D,0x5D94,0x5E62,0x5E5F,0x5E61,0x5EE2,/* 0x68-0x6F */
+ 0x5B0B,0x5AF5,0x5B0C,0x5B08,0x5BEE,0x5BEC,0x5BE9,0x5BEB,/* 0x60-0x67 */
+ 0x5C64,0x5C65,0x5D9D,0x5D94,0x5E62,0x5E5F,0x5E61,0x5EE2,/* 0x68-0x6F */
0x5EDA,0x5EDF,0x5EDD,0x5EE3,0x5EE0,0x5F48,0x5F71,0x5FB7,/* 0x70-0x77 */
0x5FB5,0x6176,0x6167,0x616E,0x615D,0x6155,0x6182,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x617C,0x6170,0x616B,0x617E,0x61A7,0xF98F,0x61AB,/* 0xA0-0xA7 */
- 0x618E,0x61AC,0x619A,0x61A4,0x6194,0x61AE,0xF9D2,0x6469,/* 0xA8-0xAF */
+ 0x0000,0x617C,0x6170,0x616B,0x617E,0x61A7,0x6190,0x61AB,/* 0xA0-0xA7 */
+ 0x618E,0x61AC,0x619A,0x61A4,0x6194,0x61AE,0x622E,0x6469,/* 0xA8-0xAF */
0x646F,0x6479,0x649E,0x64B2,0x6488,0x6490,0x64B0,0x64A5,/* 0xB0-0xB7 */
- 0x6493,0x6495,0x64A9,0x6492,0x64AE,0x64AD,0x64AB,0xF991,/* 0xB8-0xBF */
- 0x64AC,0x6499,0x64A2,0x64B3,0x6575,0x6577,0xF969,0x66AE,/* 0xC0-0xC7 */
- 0x66AB,0xFA06,0x66B1,0x6A23,0x6A1F,0x69E8,0x6A01,0x6A1E,/* 0xC8-0xCF */
- 0x6A19,0x69FD,0x6A21,0xF94C,0x6A0A,0x69F3,0xF9BF,0x6A05,/* 0xD0-0xD7 */
+ 0x6493,0x6495,0x64A9,0x6492,0x64AE,0x64AD,0x64AB,0x649A,/* 0xB8-0xBF */
+ 0x64AC,0x6499,0x64A2,0x64B3,0x6575,0x6577,0x6578,0x66AE,/* 0xC0-0xC7 */
+ 0x66AB,0x66B4,0x66B1,0x6A23,0x6A1F,0x69E8,0x6A01,0x6A1E,/* 0xC8-0xCF */
+ 0x6A19,0x69FD,0x6A21,0x6A13,0x6A0A,0x69F3,0x6A02,0x6A05,/* 0xD0-0xD7 */
0x69ED,0x6A11,0x6B50,0x6B4E,0x6BA4,0x6BC5,0x6BC6,0x6F3F,/* 0xD8-0xDF */
0x6F7C,0x6F84,0x6F51,0x6F66,0x6F54,0x6F86,0x6F6D,0x6F5B,/* 0xE0-0xE7 */
0x6F78,0x6F6E,0x6F8E,0x6F7A,0x6F70,0x6F64,0x6F97,0x6F58,/* 0xE8-0xEF */
0x6ED5,0x6F6F,0x6F60,0x6F5F,0x719F,0x71AC,0x71B1,0x71A8,/* 0xF0-0xF7 */
- 0x7256,0x729B,0x734E,0x7357,0xF9AE,0x748B,0x7483,0x0000,/* 0xF8-0xFF */
+ 0x7256,0x729B,0x734E,0x7357,0x7469,0x748B,0x7483,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_BD[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x747E,0x7480,0x757F,0x7620,0x7629,0x761F,0x7624,0x7626,/* 0x40-0x47 */
0x7621,0x7622,0x769A,0x76BA,0x76E4,0x778E,0x7787,0x778C,/* 0x48-0x4F */
- 0x7791,0x778B,0x78CB,0x78C5,0x78BA,0xF947,0x78BE,0x78D5,/* 0x50-0x57 */
+ 0x7791,0x778B,0x78CB,0x78C5,0x78BA,0x78CA,0x78BE,0x78D5,/* 0x50-0x57 */
0x78BC,0x78D0,0x7A3F,0x7A3C,0x7A40,0x7A3D,0x7A37,0x7A3B,/* 0x58-0x5F */
0x7AAF,0x7AAE,0x7BAD,0x7BB1,0x7BC4,0x7BB4,0x7BC6,0x7BC7,/* 0x60-0x67 */
- 0x7BC1,0x7BA0,0x7BCC,0x7CCA,0x7DE0,0xF996,0x7DEF,0x7DFB,/* 0x68-0x6F */
+ 0x7BC1,0x7BA0,0x7BCC,0x7CCA,0x7DE0,0x7DF4,0x7DEF,0x7DFB,/* 0x68-0x6F */
0x7DD8,0x7DEC,0x7DDD,0x7DE8,0x7DE3,0x7DDA,0x7DDE,0x7DE9,/* 0x70-0x77 */
0x7D9E,0x7DD9,0x7DF2,0x7DF9,0x7F75,0x7F77,0x7FAF,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x7FE9,0x8026,0x819B,0x819C,0x819D,0x81A0,0x819A,/* 0xA0-0xA7 */
- 0x8198,0x8517,0x853D,0x851A,0xF999,0x852C,0x852D,0x8513,/* 0xA8-0xAF */
+ 0x8198,0x8517,0x853D,0x851A,0x84EE,0x852C,0x852D,0x8513,/* 0xA8-0xAF */
0x8511,0x8523,0x8521,0x8514,0x84EC,0x8525,0x84FF,0x8506,/* 0xB0-0xB7 */
0x8782,0x8774,0x8776,0x8760,0x8766,0x8778,0x8768,0x8759,/* 0xB8-0xBF */
0x8757,0x874C,0x8753,0x885B,0x885D,0x8910,0x8907,0x8912,/* 0xC0-0xC7 */
- 0x8913,0x8915,0x890A,0x8ABC,0xF97D,0x8AC7,0x8AC4,0x8A95,/* 0xC8-0xCF */
- 0x8ACB,0xFA22,0x8AB2,0x8AC9,0x8AC2,0x8ABF,0x8AB0,0xF941,/* 0xD0-0xD7 */
- 0x8ACD,0x8AB6,0x8AB9,0x8ADB,0x8C4C,0x8C4E,0xFA16,0x8CE0,/* 0xD8-0xDF */
+ 0x8913,0x8915,0x890A,0x8ABC,0x8AD2,0x8AC7,0x8AC4,0x8A95,/* 0xC8-0xCF */
+ 0x8ACB,0x8AF8,0x8AB2,0x8AC9,0x8AC2,0x8ABF,0x8AB0,0x8AD6,/* 0xD0-0xD7 */
+ 0x8ACD,0x8AB6,0x8AB9,0x8ADB,0x8C4C,0x8C4E,0x8C6C,0x8CE0,/* 0xD8-0xDF */
0x8CDE,0x8CE6,0x8CE4,0x8CEC,0x8CED,0x8CE2,0x8CE3,0x8CDC,/* 0xE0-0xE7 */
0x8CEA,0x8CE1,0x8D6D,0x8D9F,0x8DA3,0x8E2B,0x8E10,0x8E1D,/* 0xE8-0xEF */
0x8E22,0x8E0F,0x8E29,0x8E1F,0x8E21,0x8E1E,0x8EBA,0x8F1D,/* 0xF0-0xF7 */
- 0x8F1B,0x8F1F,0x8F29,0xF998,0xF9D7,0x8F1C,0x8F1E,0x0000,/* 0xF8-0xFF */
+ 0x8F1B,0x8F1F,0x8F29,0x8F26,0x8F2A,0x8F1C,0x8F1E,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_BE[256] = {
0x8F25,0x9069,0x906E,0x9068,0x906D,0x9077,0x9130,0x912D,/* 0x40-0x47 */
0x9127,0x9131,0x9187,0x9189,0x918B,0x9183,0x92C5,0x92BB,/* 0x48-0x4F */
0x92B7,0x92EA,0x92AC,0x92E4,0x92C1,0x92B3,0x92BC,0x92D2,/* 0x50-0x57 */
- 0x92C7,0x92F0,0x92B2,0xF986,0x95B1,0x9704,0x9706,0x9707,/* 0x58-0x5F */
+ 0x92C7,0x92F0,0x92B2,0x95AD,0x95B1,0x9704,0x9706,0x9707,/* 0x58-0x5F */
0x9709,0x9760,0x978D,0x978B,0x978F,0x9821,0x982B,0x981C,/* 0x60-0x67 */
0x98B3,0x990A,0x9913,0x9912,0x9918,0x99DD,0x99D0,0x99DF,/* 0x68-0x6F */
0x99DB,0x99D1,0x99D5,0x99D2,0x99D9,0x9AB7,0x9AEE,0x9AEF,/* 0x70-0x77 */
- 0x9B27,0x9B45,0x9B44,0x9B77,0xF939,0x9D06,0x9D09,0x0000,/* 0x78-0x7F */
-
+ 0x9B27,0x9B45,0x9B44,0x9B77,0x9B6F,0x9D06,0x9D09,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x9D03,0x9EA9,0x9EBE,0xF989,0x58A8,0x9F52,0x5112,/* 0xA0-0xA7 */
+ 0x0000,0x9D03,0x9EA9,0x9EBE,0x9ECE,0x58A8,0x9F52,0x5112,/* 0xA0-0xA7 */
0x5118,0x5114,0x5110,0x5115,0x5180,0x51AA,0x51DD,0x5291,/* 0xA8-0xAF */
0x5293,0x52F3,0x5659,0x566B,0x5679,0x5669,0x5664,0x5678,/* 0xB0-0xB7 */
0x566A,0x5668,0x5665,0x5671,0x566F,0x566C,0x5662,0x5676,/* 0xB8-0xBF */
0x58C1,0x58BE,0x58C7,0x58C5,0x596E,0x5B1D,0x5B34,0x5B78,/* 0xC0-0xC7 */
0x5BF0,0x5C0E,0x5F4A,0x61B2,0x6191,0x61A9,0x618A,0x61CD,/* 0xC8-0xCF */
0x61B6,0x61BE,0x61CA,0x61C8,0x6230,0x64C5,0x64C1,0x64CB,/* 0xD0-0xD7 */
- 0x64BB,0x64BC,0x64DA,0xF930,0x64C7,0x64C2,0x64CD,0x64BF,/* 0xD8-0xDF */
- 0x64D2,0x64D4,0x64BE,0x6574,0xF98B,0x66C9,0x66B9,0x66C4,/* 0xE0-0xE7 */
+ 0x64BB,0x64BC,0x64DA,0x64C4,0x64C7,0x64C2,0x64CD,0x64BF,/* 0xD8-0xDF */
+ 0x64D2,0x64D4,0x64BE,0x6574,0x66C6,0x66C9,0x66B9,0x66C4,/* 0xE0-0xE7 */
0x66C7,0x66B8,0x6A3D,0x6A38,0x6A3A,0x6A59,0x6A6B,0x6A58,/* 0xE8-0xEF */
0x6A39,0x6A44,0x6A62,0x6A61,0x6A4B,0x6A47,0x6A35,0x6A5F,/* 0xF0-0xF7 */
- 0x6A48,0x6B59,0xF98C,0x6C05,0x6FC2,0x6FB1,0x6FA1,0x0000,/* 0xF8-0xFF */
+ 0x6A48,0x6B59,0x6B77,0x6C05,0x6FC2,0x6FB1,0x6FA1,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_BF[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x6FC3,0x6FA4,0x6FC1,0x6FA7,0x6FB3,0x6FC0,0x6FB9,0x6FB6,/* 0x40-0x47 */
- 0x6FA6,0x6FA0,0x6FB4,0x71BE,0x71C9,0xF9EE,0x71D2,0x71C8,/* 0x48-0x4F */
- 0x71D5,0x71B9,0xF9C0,0x71D9,0x71DC,0x71C3,0x71C4,0x7368,/* 0x50-0x57 */
- 0x749C,0x74A3,0xF9EF,0x749F,0x749E,0x74E2,0x750C,0x750D,/* 0x58-0x5F */
- 0x7634,0x7638,0x763A,0xF933,0x76E5,0x77A0,0x779E,0x779F,/* 0x60-0x67 */
+ 0x6FA6,0x6FA0,0x6FB4,0x71BE,0x71C9,0x71D0,0x71D2,0x71C8,/* 0x48-0x4F */
+ 0x71D5,0x71B9,0x71CE,0x71D9,0x71DC,0x71C3,0x71C4,0x7368,/* 0x50-0x57 */
+ 0x749C,0x74A3,0x7498,0x749F,0x749E,0x74E2,0x750C,0x750D,/* 0x58-0x5F */
+ 0x7634,0x7638,0x763A,0x76E7,0x76E5,0x77A0,0x779E,0x779F,/* 0x60-0x67 */
0x77A5,0x78E8,0x78DA,0x78EC,0x78E7,0x79A6,0x7A4D,0x7A4E,/* 0x68-0x6F */
0x7A46,0x7A4C,0x7A4B,0x7ABA,0x7BD9,0x7C11,0x7BC9,0x7BE4,/* 0x70-0x77 */
- 0x7BDB,0x7BE1,0x7BE9,0x7BE6,0x7CD5,0xFA03,0x7E0A,0x0000,/* 0x78-0x7F */
-
+ 0x7BDB,0x7BE1,0x7BE9,0x7BE6,0x7CD5,0x7CD6,0x7E0A,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x7E11,0x7E08,0x7E1B,0x7E23,0x7E1E,0x7E1D,0x7E09,/* 0xA0-0xA7 */
- 0x7E10,0xF9E6,0x7FB2,0x7FF0,0x7FF1,0x7FEE,0x8028,0x81B3,/* 0xA8-0xAF */
+ 0x7E10,0x7F79,0x7FB2,0x7FF0,0x7FF1,0x7FEE,0x8028,0x81B3,/* 0xA8-0xAF */
0x81A9,0x81A8,0x81FB,0x8208,0x8258,0x8259,0x854A,0x8559,/* 0xB0-0xB7 */
0x8548,0x8568,0x8569,0x8543,0x8549,0x856D,0x856A,0x855E,/* 0xB8-0xBF */
0x8783,0x879F,0x879E,0x87A2,0x878D,0x8861,0x892A,0x8932,/* 0xC0-0xC7 */
0x8925,0x892B,0x8921,0x89AA,0x89A6,0x8AE6,0x8AFA,0x8AEB,/* 0xC8-0xCF */
- 0x8AF1,0x8B00,0x8ADC,0x8AE7,0x8AEE,0xF95D,0x8B01,0x8B02,/* 0xD0-0xD7 */
+ 0x8AF1,0x8B00,0x8ADC,0x8AE7,0x8AEE,0x8AFE,0x8B01,0x8B02,/* 0xD0-0xD7 */
0x8AF7,0x8AED,0x8AF3,0x8AF6,0x8AFC,0x8C6B,0x8C6D,0x8C93,/* 0xD8-0xDF */
- 0x8CF4,0x8E44,0x8E31,0x8E34,0x8E42,0x8E39,0x8E35,0xFA07,/* 0xE0-0xE7 */
+ 0x8CF4,0x8E44,0x8E31,0x8E34,0x8E42,0x8E39,0x8E35,0x8F3B,/* 0xE0-0xE7 */
0x8F2F,0x8F38,0x8F33,0x8FA8,0x8FA6,0x9075,0x9074,0x9078,/* 0xE8-0xEF */
- 0x9072,0xF9C3,0x907A,0x9134,0x9192,0x9320,0x9336,0x92F8,/* 0xF0-0xF7 */
- 0x9333,0x932F,0x9322,0x92FC,0x932B,0xF93F,0x931A,0x0000,/* 0xF8-0xFF */
+ 0x9072,0x907C,0x907A,0x9134,0x9192,0x9320,0x9336,0x92F8,/* 0xF0-0xF7 */
+ 0x9333,0x932F,0x9322,0x92FC,0x932B,0x9304,0x931A,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C0[256] = {
0x9310,0x9326,0x9321,0x9315,0x932E,0x9319,0x95BB,0x96A7,/* 0x40-0x47 */
0x96A8,0x96AA,0x96D5,0x970E,0x9711,0x9716,0x970D,0x9713,/* 0x48-0x4F */
0x970F,0x975B,0x975C,0x9766,0x9798,0x9830,0x9838,0x983B,/* 0x50-0x57 */
- 0x9837,0x982D,0x9839,0x9824,0x9910,0xFA2C,0x991E,0x991B,/* 0x58-0x5F */
- 0x9921,0x991A,0x99ED,0x99E2,0xF91A,0x9AB8,0x9ABC,0x9AFB,/* 0x60-0x67 */
+ 0x9837,0x982D,0x9839,0x9824,0x9910,0x9928,0x991E,0x991B,/* 0x58-0x5F */
+ 0x9921,0x991A,0x99ED,0x99E2,0x99F1,0x9AB8,0x9ABC,0x9AFB,/* 0x60-0x67 */
0x9AED,0x9B28,0x9B91,0x9D15,0x9D23,0x9D26,0x9D28,0x9D12,/* 0x68-0x6F */
- 0x9D1B,0x9ED8,0x9ED4,0xF9C4,0xF908,0x512A,0x511F,0x5121,/* 0x70-0x77 */
- 0x5132,0xF97F,0x568E,0x5680,0x5690,0x5685,0x5687,0x0000,/* 0x78-0x7F */
-
+ 0x9D1B,0x9ED8,0x9ED4,0x9F8D,0x9F9C,0x512A,0x511F,0x5121,/* 0x70-0x77 */
+ 0x5132,0x52F5,0x568E,0x5680,0x5690,0x5685,0x5687,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x568F,0x58D5,0x58D3,0x58D1,0x58CE,0x5B30,0x5B2A,/* 0xA0-0xA7 */
- 0x5B24,0x5B7A,0x5C37,0x5C68,0x5DBC,0xF9AB,0x5DBD,0x5DB8,/* 0xA8-0xAF */
+ 0x5B24,0x5B7A,0x5C37,0x5C68,0x5DBC,0x5DBA,0x5DBD,0x5DB8,/* 0xA8-0xAF */
0x5E6B,0x5F4C,0x5FBD,0x61C9,0x61C2,0x61C7,0x61E6,0x61CB,/* 0xB0-0xB7 */
0x6232,0x6234,0x64CE,0x64CA,0x64D8,0x64E0,0x64F0,0x64E6,/* 0xB8-0xBF */
0x64EC,0x64F1,0x64E2,0x64ED,0x6582,0x6583,0x66D9,0x66D6,/* 0xC0-0xC7 */
0x6A80,0x6A94,0x6A84,0x6AA2,0x6A9C,0x6ADB,0x6AA3,0x6A7E,/* 0xC8-0xCF */
- 0x6A97,0x6A90,0x6AA0,0x6B5C,0xF9A5,0x6BDA,0x6C08,0x6FD8,/* 0xD0-0xD7 */
- 0x6FF1,0x6FDF,0x6FE0,0x6FDB,0x6FE4,0xF922,0x6FEF,0x6F80,/* 0xD8-0xDF */
+ 0x6A97,0x6A90,0x6AA0,0x6B5C,0x6BAE,0x6BDA,0x6C08,0x6FD8,/* 0xD0-0xD7 */
+ 0x6FF1,0x6FDF,0x6FE0,0x6FDB,0x6FE4,0x6FEB,0x6FEF,0x6F80,/* 0xD8-0xDF */
0x6FEC,0x6FE1,0x6FE9,0x6FD5,0x6FEE,0x6FF0,0x71E7,0x71DF,/* 0xE0-0xE7 */
0x71EE,0x71E6,0x71E5,0x71ED,0x71EC,0x71F4,0x71E0,0x7235,/* 0xE8-0xEF */
0x7246,0x7370,0x7372,0x74A9,0x74B0,0x74A6,0x74A8,0x7646,/* 0xF0-0xF7 */
- 0xF9C1,0x764C,0x76EA,0x77B3,0x77AA,0x77B0,0x77AC,0x0000,/* 0xF8-0xFF */
+ 0x7642,0x764C,0x76EA,0x77B3,0x77AA,0x77B0,0x77AC,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C1[256] = {
0x77A7,0x77AD,0x77EF,0x78F7,0x78FA,0x78F4,0x78EF,0x7901,/* 0x40-0x47 */
0x79A7,0x79AA,0x7A57,0x7ABF,0x7C07,0x7C0D,0x7BFE,0x7BF7,/* 0x48-0x4F */
0x7C0C,0x7BE0,0x7CE0,0x7CDC,0x7CDE,0x7CE2,0x7CDF,0x7CD9,/* 0x50-0x57 */
- 0x7CDD,0x7E2E,0x7E3E,0x7E46,0xF950,0x7E32,0x7E43,0x7E2B,/* 0x58-0x5F */
+ 0x7CDD,0x7E2E,0x7E3E,0x7E46,0x7E37,0x7E32,0x7E43,0x7E2B,/* 0x58-0x5F */
0x7E3D,0x7E31,0x7E45,0x7E41,0x7E34,0x7E39,0x7E48,0x7E35,/* 0x60-0x67 */
0x7E3F,0x7E2F,0x7F44,0x7FF3,0x7FFC,0x8071,0x8072,0x8070,/* 0x68-0x6F */
- 0xF997,0x8073,0x81C6,0x81C3,0x81BA,0x81C2,0x81C0,0x81BF,/* 0x70-0x77 */
- 0x81BD,0x81C9,0x81BE,0xF9F6,0x8209,0x8271,0x85AA,0x0000,/* 0x78-0x7F */
-
+ 0x806F,0x8073,0x81C6,0x81C3,0x81BA,0x81C2,0x81C0,0x81BF,/* 0x70-0x77 */
+ 0x81BD,0x81C9,0x81BE,0x81E8,0x8209,0x8271,0x85AA,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x8584,0x857E,0x859C,0x8591,0x8594,0x85AF,0x859B,/* 0xA0-0xA7 */
0x8587,0x85A8,0x858A,0x8667,0x87C0,0x87D1,0x87B3,0x87D2,/* 0xA8-0xAF */
- 0x87C6,0x87AB,0x87BB,0xF911,0x87C8,0x87CB,0x893B,0x8936,/* 0xB0-0xB7 */
+ 0x87C6,0x87AB,0x87BB,0x87BA,0x87C8,0x87CB,0x893B,0x8936,/* 0xB0-0xB7 */
0x8944,0x8938,0x893D,0x89AC,0x8B0E,0x8B17,0x8B19,0x8B1B,/* 0xB8-0xBF */
0x8B0A,0x8B20,0x8B1D,0x8B04,0x8B10,0x8C41,0x8C3F,0x8C73,/* 0xC0-0xC7 */
0x8CFA,0x8CFD,0x8CFC,0x8CF8,0x8CFB,0x8DA8,0x8E49,0x8E4B,/* 0xC8-0xCF */
0x8E48,0x8E4A,0x8F44,0x8F3E,0x8F42,0x8F45,0x8F3F,0x907F,/* 0xD0-0xD7 */
0x907D,0x9084,0x9081,0x9082,0x9080,0x9139,0x91A3,0x919E,/* 0xD8-0xDF */
- 0x919C,0x934D,0x9382,0x9328,0x9375,0xF99B,0x9365,0x934B,/* 0xE0-0xE7 */
+ 0x919C,0x934D,0x9382,0x9328,0x9375,0x934A,0x9365,0x934B,/* 0xE0-0xE7 */
0x9318,0x937E,0x936C,0x935B,0x9370,0x935A,0x9354,0x95CA,/* 0xE8-0xEF */
- 0x95CB,0x95CC,0x95C8,0x95C6,0x96B1,0xF9B8,0x96D6,0x971C,/* 0xF0-0xF7 */
+ 0x95CB,0x95CC,0x95C8,0x95C6,0x96B1,0x96B8,0x96D6,0x971C,/* 0xF0-0xF7 */
0x971E,0x97A0,0x97D3,0x9846,0x98B6,0x9935,0x9A01,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x99FF,0x9BAE,0x9BAB,0x9BAA,0x9BAD,0x9D3B,0x9D3F,0x9E8B,/* 0x40-0x47 */
0x9ECF,0x9EDE,0x9EDC,0x9EDD,0x9EDB,0x9F3E,0x9F4B,0x53E2,/* 0x48-0x4F */
- 0x5695,0x56AE,0x58D9,0xF94A,0x5B38,0x5F5D,0x61E3,0x6233,/* 0x50-0x57 */
+ 0x5695,0x56AE,0x58D9,0x58D8,0x5B38,0x5F5D,0x61E3,0x6233,/* 0x50-0x57 */
0x64F4,0x64F2,0x64FE,0x6506,0x64FA,0x64FB,0x64F7,0x65B7,/* 0x58-0x5F */
0x66DC,0x6726,0x6AB3,0x6AAC,0x6AC3,0x6ABB,0x6AB8,0x6AC2,/* 0x60-0x67 */
- 0x6AAE,0x6AAF,0x6B5F,0x6B78,0x6BAF,0x7009,0x700B,0xF984,/* 0x68-0x6F */
+ 0x6AAE,0x6AAF,0x6B5F,0x6B78,0x6BAF,0x7009,0x700B,0x6FFE,/* 0x68-0x6F */
0x7006,0x6FFA,0x7011,0x700F,0x71FB,0x71FC,0x71FE,0x71F8,/* 0x70-0x77 */
- 0x7377,0xF9A7,0x74A7,0x74BF,0x7515,0x7656,0x7658,0x0000,/* 0x78-0x7F */
-
+ 0x7377,0x7375,0x74A7,0x74BF,0x7515,0x7656,0x7658,0x0000,/* 0x78-0x7F */
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x7652,0x77BD,0x77BF,0x77BB,0x77BC,0x790E,0xF9B6,/* 0xA0-0xA7 */
+ 0x0000,0x7652,0x77BD,0x77BF,0x77BB,0x77BC,0x790E,0x79AE,/* 0xA0-0xA7 */
0x7A61,0x7A62,0x7A60,0x7AC4,0x7AC5,0x7C2B,0x7C27,0x7C2A,/* 0xA8-0xAF */
- 0x7C1E,0x7C23,0x7C21,0xF97B,0x7E54,0x7E55,0x7E5E,0x7E5A,/* 0xB0-0xB7 */
+ 0x7C1E,0x7C23,0x7C21,0x7CE7,0x7E54,0x7E55,0x7E5E,0x7E5A,/* 0xB0-0xB7 */
0x7E61,0x7E52,0x7E59,0x7F48,0x7FF9,0x7FFB,0x8077,0x8076,/* 0xB8-0xBF */
- 0x81CD,0x81CF,0x820A,0x85CF,0x85A9,0xF923,0x85D0,0x85C9,/* 0xC0-0xC7 */
+ 0x81CD,0x81CF,0x820A,0x85CF,0x85A9,0x85CD,0x85D0,0x85C9,/* 0xC0-0xC7 */
0x85B0,0x85BA,0x85B9,0x85A6,0x87EF,0x87EC,0x87F2,0x87E0,/* 0xC8-0xCF */
0x8986,0x89B2,0x89F4,0x8B28,0x8B39,0x8B2C,0x8B2B,0x8C50,/* 0xD0-0xD7 */
0x8D05,0x8E59,0x8E63,0x8E66,0x8E64,0x8E5F,0x8E55,0x8EC0,/* 0xD8-0xDF */
0x8F49,0x8F4D,0x9087,0x9083,0x9088,0x91AB,0x91AC,0x91D0,/* 0xE0-0xE7 */
0x9394,0x938A,0x9396,0x93A2,0x93B3,0x93AE,0x93AC,0x93B0,/* 0xE8-0xEF */
- 0x9398,0x939A,0x9397,0x95D4,0x95D6,0x95D0,0x95D5,0xF9EA,/* 0xF0-0xF7 */
+ 0x9398,0x939A,0x9397,0x95D4,0x95D6,0x95D0,0x95D5,0x96E2,/* 0xF0-0xF7 */
0x96DC,0x96D9,0x96DB,0x96DE,0x9724,0x97A3,0x97A6,0x0000,/* 0xF8-0xFF */
};
0x993E,0x993F,0x993D,0x992E,0x99A5,0x9A0E,0x9AC1,0x9B03,/* 0x48-0x4F */
0x9B06,0x9B4F,0x9B4E,0x9B4D,0x9BCA,0x9BC9,0x9BFD,0x9BC8,/* 0x50-0x57 */
0x9BC0,0x9D51,0x9D5D,0x9D60,0x9EE0,0x9F15,0x9F2C,0x5133,/* 0x58-0x5F */
- 0x56A5,0x58DE,0xF942,0x58E2,0x5BF5,0x9F90,0xF982,0x61F2,/* 0x60-0x67 */
- 0x61F7,0xF90D,0x61F5,0x6500,0x650F,0x66E0,0x66DD,0x6AE5,/* 0x68-0x6F */
- 0x6ADD,0x6ADA,0xF931,0x701B,0x701F,0x7028,0x701A,0x701D,/* 0x70-0x77 */
+ 0x56A5,0x58DE,0x58DF,0x58E2,0x5BF5,0x9F90,0x5EEC,0x61F2,/* 0x60-0x67 */
+ 0x61F7,0x61F6,0x61F5,0x6500,0x650F,0x66E0,0x66DD,0x6AE5,/* 0x68-0x6F */
+ 0x6ADD,0x6ADA,0x6AD3,0x701B,0x701F,0x7028,0x701A,0x701D,/* 0x70-0x77 */
0x7015,0x7018,0x7206,0x720D,0x7258,0x72A2,0x7378,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x737A,0x74BD,0x74CA,0x74E3,0x7587,0x7586,0x765F,/* 0xA0-0xA7 */
- 0x7661,0x77C7,0x7919,0x79B1,0x7A6B,0x7A69,0xF9A6,0x7C3F,/* 0xA8-0xAF */
+ 0x7661,0x77C7,0x7919,0x79B1,0x7A6B,0x7A69,0x7C3E,0x7C3F,/* 0xA8-0xAF */
0x7C38,0x7C3D,0x7C37,0x7C40,0x7E6B,0x7E6D,0x7E79,0x7E69,/* 0xB0-0xB7 */
- 0x7E6A,0xF90F,0x7E73,0x7FB6,0x7FB9,0x7FB8,0xF926,0x85E9,/* 0xB8-0xBF */
+ 0x7E6A,0x7F85,0x7E73,0x7FB6,0x7FB9,0x7FB8,0x81D8,0x85E9,/* 0xB8-0xBF */
0x85DD,0x85EA,0x85D5,0x85E4,0x85E5,0x85F7,0x87FB,0x8805,/* 0xC0-0xC7 */
0x880D,0x87F9,0x87FE,0x8960,0x895F,0x8956,0x895E,0x8B41,/* 0xC8-0xCF */
- 0x8B5C,0xF9FC,0x8B49,0x8B5A,0x8B4E,0x8B4F,0x8B46,0x8B59,/* 0xD0-0xD7 */
+ 0x8B5C,0x8B58,0x8B49,0x8B5A,0x8B4E,0x8B4F,0x8B46,0x8B59,/* 0xD0-0xD7 */
0x8D08,0x8D0A,0x8E7C,0x8E72,0x8E87,0x8E76,0x8E6C,0x8E7A,/* 0xD8-0xDF */
0x8E74,0x8F54,0x8F4E,0x8FAD,0x908A,0x908B,0x91B1,0x91AE,/* 0xE0-0xE7 */
0x93E1,0x93D1,0x93DF,0x93C3,0x93C8,0x93DC,0x93DD,0x93D6,/* 0xE8-0xEF */
0x93E2,0x93CD,0x93D8,0x93E4,0x93D7,0x93E8,0x95DC,0x96B4,/* 0xF0-0xF7 */
- 0x96E3,0x972A,0x9727,0x9761,0x97DC,0x97FB,0xF9D0,0x0000,/* 0xF8-0xFF */
+ 0x96E3,0x972A,0x9727,0x9761,0x97DC,0x97FB,0x985E,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C4[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x9858,0x985B,0x98BC,0x9945,0x9949,0x9A16,0x9A19,0x9B0D,/* 0x40-0x47 */
0x9BE8,0x9BE7,0x9BD6,0x9BDB,0x9D89,0x9D61,0x9D72,0x9D6A,/* 0x48-0x4F */
- 0x9D6C,0x9E92,0xF988,0x9E93,0x9EB4,0x52F8,0x56A8,0x56B7,/* 0x50-0x57 */
+ 0x9D6C,0x9E92,0x9E97,0x9E93,0x9EB4,0x52F8,0x56A8,0x56B7,/* 0x50-0x57 */
0x56B6,0x56B4,0x56BC,0x58E4,0x5B40,0x5B43,0x5B7D,0x5BF6,/* 0x58-0x5F */
0x5DC9,0x61F8,0x61FA,0x6518,0x6514,0x6519,0x66E6,0x6727,/* 0x60-0x67 */
- 0x6AEC,0x703E,0x7030,0x7032,0xF932,0x737B,0x74CF,0x7662,/* 0x68-0x6F */
- 0x7665,0x7926,0xF985,0x792C,0x792B,0x7AC7,0x7AF6,0x7C4C,/* 0x70-0x77 */
+ 0x6AEC,0x703E,0x7030,0x7032,0x7210,0x737B,0x74CF,0x7662,/* 0x68-0x6F */
+ 0x7665,0x7926,0x792A,0x792C,0x792B,0x7AC7,0x7AF6,0x7C4C,/* 0x70-0x77 */
0x7C43,0x7C4D,0x7CEF,0x7CF0,0x8FAE,0x7E7D,0x7E7C,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x7E82,0x7F4C,0x8000,0x81DA,0x8266,0x85FB,0x85F9,/* 0xA0-0xA7 */
- 0x8611,0xF9F0,0xF935,0x860B,0x8607,0x860A,0x8814,0x8815,/* 0xA8-0xAF */
- 0xF924,0x89BA,0x89F8,0x8B70,0x8B6C,0x8B66,0x8B6F,0x8B5F,/* 0xB0-0xB7 */
- 0x8B6B,0x8D0F,0x8D0D,0x8E89,0x8E81,0x8E85,0x8E82,0xF9B7,/* 0xB8-0xBF */
+ 0x8611,0x85FA,0x8606,0x860B,0x8607,0x860A,0x8814,0x8815,/* 0xA8-0xAF */
+ 0x8964,0x89BA,0x89F8,0x8B70,0x8B6C,0x8B66,0x8B6F,0x8B5F,/* 0xB0-0xB7 */
+ 0x8B6B,0x8D0F,0x8D0D,0x8E89,0x8E81,0x8E85,0x8E82,0x91B4,/* 0xB8-0xBF */
0x91CB,0x9418,0x9403,0x93FD,0x95E1,0x9730,0x98C4,0x9952,/* 0xC0-0xC7 */
0x9951,0x99A8,0x9A2B,0x9A30,0x9A37,0x9A35,0x9C13,0x9C0D,/* 0xC8-0xCF */
0x9E79,0x9EB5,0x9EE8,0x9F2F,0x9F5F,0x9F63,0x9F61,0x5137,/* 0xD0-0xD7 */
0x5138,0x56C1,0x56C0,0x56C2,0x5914,0x5C6C,0x5DCD,0x61FC,/* 0xD8-0xDF */
- 0x61FE,0x651D,0x651C,0x6595,0x66E9,0x6AFB,0xF91D,0x6AFA,/* 0xE0-0xE7 */
- 0x6BB2,0x704C,0xF91E,0x72A7,0x74D6,0x74D4,0xF90E,0x77D3,/* 0xE8-0xEF */
- 0x7C50,0x7E8F,0x7E8C,0x7FBC,0x8617,0xF91F,0x861A,0x8823,/* 0xF0-0xF7 */
- 0x8822,0x8821,0xF927,0x896A,0x896C,0x89BD,0x8B74,0x0000,/* 0xF8-0xFF */
+ 0x61FE,0x651D,0x651C,0x6595,0x66E9,0x6AFB,0x6B04,0x6AFA,/* 0xE0-0xE7 */
+ 0x6BB2,0x704C,0x721B,0x72A7,0x74D6,0x74D4,0x7669,0x77D3,/* 0xE8-0xEF */
+ 0x7C50,0x7E8F,0x7E8C,0x7FBC,0x8617,0x862D,0x861A,0x8823,/* 0xF0-0xF7 */
+ 0x8822,0x8821,0x881F,0x896A,0x896C,0x89BD,0x8B74,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_C5[256] = {
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
0x8B77,0x8B7D,0x8D13,0x8E8A,0x8E8D,0x8E8B,0x8F5F,0x8FAF,/* 0x40-0x47 */
0x91BA,0x942E,0x9433,0x9435,0x943A,0x9438,0x9432,0x942B,/* 0x48-0x4F */
- 0x95E2,0x9738,0x9739,0xF938,0x97FF,0x9867,0x9865,0x9957,/* 0x50-0x57 */
+ 0x95E2,0x9738,0x9739,0x9732,0x97FF,0x9867,0x9865,0x9957,/* 0x50-0x57 */
0x9A45,0x9A43,0x9A40,0x9A3E,0x9ACF,0x9B54,0x9B51,0x9C2D,/* 0x58-0x5F */
- 0x9C25,0x9DAF,0xFA2D,0x9DC2,0x9DB8,0x9E9D,0x9EEF,0x9F19,/* 0x60-0x67 */
+ 0x9C25,0x9DAF,0x9DB4,0x9DC2,0x9DB8,0x9E9D,0x9EEF,0x9F19,/* 0x60-0x67 */
0x9F5C,0x9F66,0x9F67,0x513C,0x513B,0x56C8,0x56CA,0x56C9,/* 0x68-0x6F */
0x5B7F,0x5DD4,0x5DD2,0x5F4E,0x61FF,0x6524,0x6B0A,0x6B61,/* 0x70-0x77 */
0x7051,0x7058,0x7380,0x74E4,0x758A,0x766E,0x766C,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x79B3,0xF944,0x7C5F,0xF945,0x807D,0x81DF,0x8972,/* 0xA0-0xA7 */
- 0x896F,0x89FC,0xF95A,0x8D16,0x8D17,0x8E91,0x8E93,0x8F61,/* 0xA8-0xAF */
+ 0x0000,0x79B3,0x7C60,0x7C5F,0x807E,0x807D,0x81DF,0x8972,/* 0xA0-0xA7 */
+ 0x896F,0x89FC,0x8B80,0x8D16,0x8D17,0x8E91,0x8E93,0x8F61,/* 0xA8-0xAF */
0x9148,0x9444,0x9451,0x9452,0x973D,0x973E,0x97C3,0x97C1,/* 0xB0-0xB7 */
0x986B,0x9955,0x9A55,0x9A4D,0x9AD2,0x9B1A,0x9C49,0x9C31,/* 0xB8-0xBF */
0x9C3E,0x9C3B,0x9DD3,0x9DD7,0x9F34,0x9F6C,0x9F6A,0x9F94,/* 0xC0-0xC7 */
- 0x56CC,0x5DD6,0xF990,0x6523,0x652B,0x652A,0x66EC,0x6B10,/* 0xC8-0xCF */
+ 0x56CC,0x5DD6,0x6200,0x6523,0x652B,0x652A,0x66EC,0x6B10,/* 0xC8-0xCF */
0x74DA,0x7ACA,0x7C64,0x7C63,0x7C65,0x7E93,0x7E96,0x7E94,/* 0xD0-0xD7 */
- 0x81E2,0x8638,0xF910,0x8831,0x8B8A,0x9090,0xF913,0x9463,/* 0xD8-0xDF */
+ 0x81E2,0x8638,0x863F,0x8831,0x8B8A,0x9090,0x908F,0x9463,/* 0xD8-0xDF */
0x9460,0x9464,0x9768,0x986F,0x995C,0x9A5A,0x9A5B,0x9A57,/* 0xE0-0xE7 */
- 0x9AD3,0x9AD4,0x9AD1,0x9C54,0xF9F2,0x9C56,0x9DE5,0xF9F3,/* 0xE8-0xEF */
+ 0x9AD3,0x9AD4,0x9AD1,0x9C54,0x9C57,0x9C56,0x9DE5,0x9E9F,/* 0xE8-0xEF */
0x9EF4,0x56D1,0x58E9,0x652C,0x705E,0x7671,0x7672,0x77D7,/* 0xF0-0xF7 */
0x7F50,0x7F88,0x8836,0x8839,0x8862,0x8B93,0x8B92,0x0000,/* 0xF8-0xFF */
};
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x28-0x2F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x30-0x37 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x38-0x3F */
- 0x8B96,0x8277,0x8D1B,0x91C0,0x946A,0x9742,0xF9B3,0x9744,/* 0x40-0x47 */
- 0x97C6,0x9870,0x9A5F,0x9B22,0x9B58,0x9C5F,0x9DF9,0xF93A,/* 0x48-0x4F */
+ 0x8B96,0x8277,0x8D1B,0x91C0,0x946A,0x9742,0x9748,0x9744,/* 0x40-0x47 */
+ 0x97C6,0x9870,0x9A5F,0x9B22,0x9B58,0x9C5F,0x9DF9,0x9DFA,/* 0x48-0x4F */
0x9E7C,0x9E7D,0x9F07,0x9F77,0x9F72,0x5EF3,0x6B16,0x7063,/* 0x50-0x57 */
0x7C6C,0x7C6E,0x883B,0x89C0,0x8EA1,0x91C1,0x9472,0x9470,/* 0x58-0x5F */
0x9871,0x995E,0x9AD6,0x9B23,0x9ECC,0x7064,0x77DA,0x8B9A,/* 0x60-0x67 */
0x9477,0x97C9,0x9A62,0x9A65,0x7E9C,0x8B9C,0x8EAA,0x91C5,/* 0x68-0x6F */
0x947D,0x947E,0x947C,0x9C77,0x9C78,0x9EF7,0x8C54,0x947F,/* 0x70-0x77 */
- 0x9E1A,0x7228,0xF987,0x9B31,0x9E1B,0xF920,0x7C72,0x0000,/* 0x78-0x7F */
+ 0x9E1A,0x7228,0x9A6A,0x9B31,0x9E1B,0x9E1E,0x7C72,0x0000,/* 0x78-0x7F */
};
static wchar_t c2u_C9[256] = {
0x4EE1,0x4EDD,0x4EDA,0x520C,0x531C,0x534C,0x5722,0x5723,/* 0x68-0x6F */
0x5917,0x592F,0x5B81,0x5B84,0x5C12,0x5C3B,0x5C74,0x5C73,/* 0x70-0x77 */
0x5E04,0x5E80,0x5E82,0x5FC9,0x6209,0x6250,0x6C15,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x4F3F,0x4F61,0x518F,0x51B9,0x521C,0x521E,0x5221,0x52AD,/* 0x68-0x6F */
0x52AE,0x5309,0x5363,0x5372,0x538E,0x538F,0x5430,0x5437,/* 0x70-0x77 */
0x542A,0x5454,0x5445,0x5419,0x541C,0x5425,0x5418,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7395,0x7397,0x7393,0x7394,0x7392,0x753A,0x7539,0x7594,/* 0x68-0x6F */
0x7595,0x7681,0x793D,0x8034,0x8095,0x8099,0x8090,0x8092,/* 0x70-0x77 */
0x809C,0x8290,0x828F,0x8285,0x828E,0x8291,0x8293,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x533C,0x5392,0x5394,0x5487,0x547F,0x5481,0x5491,0x5482,/* 0xD8-0xDF */
0x5488,0x546B,0x547A,0x547E,0x5465,0x546C,0x5474,0x5466,/* 0xE0-0xE7 */
0x548D,0x546F,0x5461,0x5460,0x5498,0x5463,0x5467,0x5464,/* 0xE8-0xEF */
- 0x56F7,0xF9A9,0x576F,0x5772,0x576D,0x576B,0x5771,0x5770,/* 0xF0-0xF7 */
+ 0x56F7,0x56F9,0x576F,0x5772,0x576D,0x576B,0x5771,0x5770,/* 0xF0-0xF7 */
0x5776,0x5780,0x5775,0x577B,0x5773,0x5774,0x5762,0x0000,/* 0xF8-0xFF */
};
0x5C9D,0x5CA5,0x5CB6,0x5CB0,0x5CA6,0x5E17,0x5E14,0x5E19,/* 0x68-0x6F */
0x5F28,0x5F22,0x5F23,0x5F24,0x5F54,0x5F82,0x5F7E,0x5F7D,/* 0x70-0x77 */
0x5FDE,0x5FE5,0x602D,0x6026,0x6019,0x6032,0x600B,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x6034,0x600A,0x6017,0x6033,0x601A,0x601E,0x602C,/* 0xA0-0xA7 */
0x6022,0x600D,0x6010,0x602E,0x6013,0x6011,0x600C,0x6009,/* 0xA8-0xAF */
- 0xF9AC,0x6214,0x623D,0x62AD,0x62B4,0x62D1,0x62BE,0x62AA,/* 0xB0-0xB7 */
+ 0x601C,0x6214,0x623D,0x62AD,0x62B4,0x62D1,0x62BE,0x62AA,/* 0xB0-0xB7 */
0x62B6,0x62CA,0x62AE,0x62B3,0x62AF,0x62BB,0x62A9,0x62B0,/* 0xB8-0xBF */
0x62B8,0x653D,0x65A8,0x65BB,0x6609,0x65FC,0x6604,0x6612,/* 0xC0-0xC7 */
0x6608,0x65FB,0x6603,0x660B,0x660D,0x6605,0x65FD,0x6611,/* 0xC8-0xCF */
0x6610,0x66F6,0x670A,0x6785,0x676C,0x678E,0x6792,0x6776,/* 0xD0-0xD7 */
- 0xF9C8,0x6798,0x6786,0x6784,0x6774,0x678D,0x678C,0x677A,/* 0xD8-0xDF */
+ 0x677B,0x6798,0x6786,0x6784,0x6774,0x678D,0x678C,0x677A,/* 0xD8-0xDF */
0x679F,0x6791,0x6799,0x6783,0x677D,0x6781,0x6778,0x6779,/* 0xE0-0xE7 */
0x6794,0x6B25,0x6B80,0x6B7E,0x6BDE,0x6C1D,0x6C93,0x6CEC,/* 0xE8-0xEF */
0x6CEB,0x6CEE,0x6CD9,0x6CB6,0x6CD4,0x6CAD,0x6CE7,0x6CB7,/* 0xF0-0xF7 */
0x73AD,0x73A6,0x73A2,0x73A0,0x73AC,0x739D,0x74DD,0x74E8,/* 0x68-0x6F */
0x753F,0x7540,0x753E,0x758C,0x7598,0x76AF,0x76F3,0x76F1,/* 0x70-0x77 */
0x76F0,0x76F5,0x77F8,0x77FC,0x77F9,0x77FB,0x77FA,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x59FA,0x59FD,0x59FC,0x59F6,0x59E4,0x59F2,0x59F7,0x59DB,/* 0x68-0x6F */
0x59E9,0x59F3,0x59F5,0x59E0,0x59FE,0x59F4,0x59ED,0x5BA8,/* 0x70-0x77 */
0x5C4C,0x5CD0,0x5CD8,0x5CCC,0x5CD7,0x5CCB,0x5CDB,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5E9B,0x5EA3,0x5EA5,0x5F07,0x5F2E,0x5F56,0x5F86,0x6037,/* 0xB8-0xBF */
0x6039,0x6054,0x6072,0x605E,0x6045,0x6053,0x6047,0x6049,/* 0xC0-0xC7 */
0x605B,0x604C,0x6040,0x6042,0x605F,0x6024,0x6044,0x6058,/* 0xC8-0xCF */
- 0x6066,0x606E,0x6242,0x6243,0xF95B,0x630D,0x630B,0x62F5,/* 0xD0-0xD7 */
+ 0x6066,0x606E,0x6242,0x6243,0x62CF,0x630D,0x630B,0x62F5,/* 0xD0-0xD7 */
0x630E,0x6303,0x62EB,0x62F9,0x630F,0x630C,0x62F8,0x62F6,/* 0xD8-0xDF */
0x6300,0x6313,0x6314,0x62FA,0x6315,0x62FB,0x62F0,0x6541,/* 0xE0-0xE7 */
0x6543,0x65AA,0x65BF,0x6636,0x6621,0x6632,0x6635,0x661C,/* 0xE8-0xEF */
0x6BD6,0x6BD8,0x6BE0,0x6C20,0x6C21,0x6D28,0x6D34,0x6D2D,/* 0x68-0x6F */
0x6D1F,0x6D3C,0x6D3F,0x6D12,0x6D0A,0x6CDA,0x6D33,0x6D04,/* 0x70-0x77 */
0x6D19,0x6D3A,0x6D1A,0x6D11,0x6D00,0x6D1D,0x6D42,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x81FF,0x8221,0x8294,0x82D9,0x82FE,0x82F9,0x8307,0x82E8,/* 0x68-0x6F */
0x8300,0x82D5,0x833A,0x82EB,0x82D6,0x82F4,0x82EC,0x82E1,/* 0x70-0x77 */
0x82F2,0x82F5,0x830C,0x82FB,0x82F6,0x82F0,0x82EA,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5BAC,0x5C03,0x5C56,0x5C54,0x5CEC,0x5CFF,0x5CEE,0x5CF1,/* 0x68-0x6F */
0x5CF7,0x5D00,0x5CF9,0x5E29,0x5E28,0x5EA8,0x5EAE,0x5EAA,/* 0x70-0x77 */
0x5EAC,0x5F33,0x5F30,0x5F67,0x605D,0x605A,0x6067,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6D75,0x6D90,0x70DC,0x70D3,0x70D1,0x70DD,0x70CB,0x7F39,/* 0x68-0x6F */
0x70E2,0x70D7,0x70D2,0x70DE,0x70E0,0x70D4,0x70CD,0x70C5,/* 0x70-0x77 */
0x70C6,0x70C7,0x70DA,0x70CE,0x70E1,0x7242,0x7278,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8039,0x80FA,0x80F2,0x80F9,0x80F5,0x8101,0x80FB,0x8100,/* 0x68-0x6F */
0x8201,0x822F,0x8225,0x8333,0x832D,0x8344,0x8319,0x8351,/* 0x70-0x77 */
0x8325,0x8356,0x833F,0x8341,0x8326,0x831C,0x8322,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x51D0,0x526B,0x526D,0x526C,0x526E,0x52D6,0x52D3,0x532D,/* 0x68-0x6F */
0x539C,0x5575,0x5576,0x553C,0x554D,0x5550,0x5534,0x552A,/* 0x70-0x77 */
0x5551,0x5562,0x5536,0x5535,0x5530,0x5552,0x5545,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x638A,0x6382,0x637D,0x63BD,0x639E,0x63AD,0x639D,0x6397,/* 0x68-0x6F */
0x63AB,0x638E,0x636F,0x6387,0x6390,0x636E,0x63AF,0x6375,/* 0x70-0x77 */
0x639C,0x636D,0x63AE,0x637C,0x63A4,0x633B,0x639F,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x730F,0x731E,0x7388,0x73F6,0x73F8,0x73F5,0x7404,0x7401,/* 0x68-0x6F */
0x73FD,0x7407,0x7400,0x73FA,0x73FC,0x73FF,0x740C,0x740B,/* 0x70-0x77 */
0x73F4,0x7408,0x7564,0x7563,0x75CE,0x75D2,0x75CF,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x837D,0x8383,0x838C,0x839D,0x839B,0x83AA,0x838B,0x837E,/* 0x68-0x6F */
0x83A5,0x83AF,0x8388,0x8397,0x83B0,0x837F,0x83A6,0x8387,/* 0x70-0x77 */
0x83AE,0x8376,0x839A,0x8659,0x8656,0x86BF,0x86B7,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x55A5,0x55AD,0x5577,0x5645,0x55A2,0x5593,0x5588,0x558F,/* 0x68-0x6F */
0x55B5,0x5581,0x55A3,0x5592,0x55A4,0x557D,0x558C,0x55A6,/* 0x70-0x77 */
0x557F,0x5595,0x55A1,0x558E,0x570C,0x5829,0x5837,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x63D8,0x63D3,0x63C2,0x63C7,0x63CC,0x63CB,0x63C8,0x63F0,/* 0x68-0x6F */
0x63D7,0x63D9,0x6532,0x6567,0x656A,0x6564,0x655C,0x6568,/* 0x70-0x77 */
0x6565,0x658C,0x659D,0x659E,0x65AE,0x65D0,0x65D2,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7288,0x7289,0x7286,0x7285,0x728B,0x7312,0x730B,0x7330,/* 0x68-0x6F */
0x7322,0x7331,0x7333,0x7327,0x7332,0x732D,0x7326,0x7323,/* 0x70-0x77 */
0x7335,0x730C,0x742E,0x742C,0x7430,0x742B,0x7416,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x823C,0x823D,0x823F,0x8275,0x833B,0x83CF,0x83F9,0x8423,/* 0x58-0x5F */
0x83C0,0x83E8,0x8412,0x83E7,0x83E4,0x83FC,0x83F6,0x8410,/* 0x60-0x67 */
0x83C6,0x83C8,0x83EB,0x83E3,0x83BF,0x8401,0x83DD,0x83E5,/* 0x68-0x6F */
- 0x83D8,0x83FF,0x83E1,0x83CB,0x83CE,0x83D6,0x83F5,0xF93E,/* 0x70-0x77 */
+ 0x83D8,0x83FF,0x83E1,0x83CB,0x83CE,0x83D6,0x83F5,0x83C9,/* 0x70-0x77 */
0x8409,0x840F,0x83DE,0x8411,0x8406,0x83C2,0x83F3,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x920F,0x920C,0x9200,0x9212,0x91FF,0x91FD,0x9206,0x9204,/* 0x68-0x6F */
0x9227,0x9202,0x921C,0x9224,0x9219,0x9217,0x9205,0x9216,/* 0x70-0x77 */
0x957B,0x958D,0x958C,0x9590,0x9687,0x967E,0x9688,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x612B,0x6145,0x6136,0x6132,0x612E,0x6146,0x612F,0x614F,/* 0x68-0x6F */
0x6129,0x6140,0x6220,0x9168,0x6223,0x6225,0x6224,0x63C5,/* 0x70-0x77 */
0x63F1,0x63EB,0x6410,0x6412,0x6409,0x6420,0x6424,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6E97,0x6EAE,0x6EA3,0x7147,0x7154,0x7152,0x7163,0x7160,/* 0x68-0x6F */
0x7141,0x715D,0x7162,0x7172,0x7178,0x716A,0x7161,0x7142,/* 0x70-0x77 */
0x7158,0x7143,0x714B,0x7170,0x715F,0x7150,0x7153,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7F6B,0x7F67,0x7F68,0x7F6C,0x7FA6,0x7FA5,0x7FA7,0x7FDB,/* 0x68-0x6F */
0x7FDC,0x8021,0x8164,0x8160,0x8177,0x815C,0x8169,0x815B,/* 0x70-0x77 */
0x8162,0x8172,0x6721,0x815E,0x8176,0x8167,0x816F,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8DE0,0x8DEC,0x8DF1,0x8DEE,0x8DD0,0x8DE9,0x8DE3,0x8DE2,/* 0x68-0x6F */
0x8DE7,0x8DF2,0x8DEB,0x8DF4,0x8F06,0x8EFF,0x8F01,0x8F00,/* 0x70-0x77 */
0x8F05,0x8F07,0x8F08,0x8F02,0x8F0B,0x9052,0x903F,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x588F,0x58FE,0x596B,0x5ADC,0x5AEE,0x5AE5,0x5AD5,0x5AEA,/* 0x68-0x6F */
0x5ADA,0x5AED,0x5AEB,0x5AF3,0x5AE2,0x5AE0,0x5ADB,0x5AEC,/* 0x70-0x77 */
0x5ADE,0x5ADD,0x5AD9,0x5AE8,0x5ADF,0x5B77,0x5BE0,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x6BA0,0x6BC3,0x6BC4,0x6BFE,0x6ECE,0x6EF5,0x6EF1,0x6F03,/* 0x68-0x6F */
0x6F25,0x6EF8,0x6F37,0x6EFB,0x6F2E,0x6F09,0x6F4E,0x6F19,/* 0x70-0x77 */
0x6F1A,0x6F27,0x6F18,0x6F3B,0x6F12,0x6EED,0x6F0A,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7DC0,0x7DC5,0x7D9D,0x7DCE,0x7DC4,0x7DC6,0x7DCB,0x7DCC,/* 0x68-0x6F */
0x7DAF,0x7DB9,0x7D96,0x7DBC,0x7D9F,0x7DA6,0x7DAE,0x7DA9,/* 0x70-0x77 */
0x7DA1,0x7DC9,0x7F73,0x7FE2,0x7FE3,0x7FE5,0x7FDE,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9123,0x911C,0x9120,0x9122,0x911F,0x911D,0x911A,0x9124,/* 0x68-0x6F */
0x9121,0x911B,0x917A,0x9172,0x9179,0x9173,0x92A5,0x92A4,/* 0x70-0x77 */
0x9276,0x929B,0x927A,0x92A0,0x9294,0x92AA,0x928D,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x5D95,0x5DA0,0x5D9C,0x5DA1,0x5D9A,0x5D9E,0x5E69,0x5E5D,/* 0x68-0x6F */
0x5E60,0x5E5C,0x7DF3,0x5EDB,0x5EDE,0x5EE1,0x5F49,0x5FB2,/* 0x70-0x77 */
0x618B,0x6183,0x6179,0x61B1,0x61B0,0x61A2,0x6189,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x71A9,0x71B5,0x719D,0x71A5,0x719E,0x71A4,0x71A1,0x71AA,/* 0x68-0x6F */
0x719C,0x71A7,0x71B3,0x7298,0x729A,0x7358,0x7352,0x735E,/* 0x70-0x77 */
0x735F,0x7360,0x735D,0x735B,0x7361,0x735A,0x7359,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
- 0x0000,0x7362,0x7487,0xF994,0x748A,0x7486,0x7481,0x747D,/* 0xA0-0xA7 */
+ 0x0000,0x7362,0x7487,0x7489,0x748A,0x7486,0x7481,0x747D,/* 0xA0-0xA7 */
0x7485,0x7488,0x747C,0x7479,0x7508,0x7507,0x757E,0x7625,/* 0xA8-0xAF */
0x761E,0x7619,0x761D,0x761C,0x7623,0x761A,0x7628,0x761B,/* 0xB0-0xB7 */
0x769C,0x769D,0x769E,0x769B,0x778D,0x778F,0x7789,0x7788,/* 0xB8-0xBF */
0x8252,0x8250,0x824E,0x8251,0x8524,0x853B,0x850F,0x8500,/* 0x48-0x4F */
0x8529,0x850E,0x8509,0x850D,0x851F,0x850A,0x8527,0x851C,/* 0x50-0x57 */
0x84FB,0x852B,0x84FA,0x8508,0x850C,0x84F4,0x852A,0x84F2,/* 0x58-0x5F */
- 0x8515,0x84F7,0x84EB,0x84F3,0xF9C2,0x8512,0x84EA,0x84E9,/* 0x60-0x67 */
+ 0x8515,0x84F7,0x84EB,0x84F3,0x84FC,0x8512,0x84EA,0x84E9,/* 0x60-0x67 */
0x8516,0x84FE,0x8528,0x851D,0x852E,0x8502,0x84FD,0x851E,/* 0x68-0x6F */
0x84F6,0x8531,0x8526,0x84E7,0x84E8,0x84F0,0x84EF,0x84F9,/* 0x70-0x77 */
0x8518,0x8520,0x8530,0x850B,0x8519,0x852F,0x8662,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x92CF,0x92F1,0x92DF,0x92D8,0x92E9,0x92D7,0x92DD,0x92CC,/* 0x68-0x6F */
0x92EF,0x92C2,0x92E8,0x92CA,0x92C8,0x92CE,0x92E6,0x92CD,/* 0x70-0x77 */
0x92D5,0x92C9,0x92E0,0x92DE,0x92E7,0x92D1,0x92D3,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9B70,0x9B68,0x9B64,0x9B6C,0x9CFC,0x9CFA,0x9CFD,0x9CFF,/* 0xE0-0xE7 */
0x9CF7,0x9D07,0x9D00,0x9CF9,0x9CFB,0x9D08,0x9D05,0x9D04,/* 0xE8-0xEF */
0x9E83,0x9ED3,0x9F0F,0x9F10,0x511C,0x5113,0x5117,0x511A,/* 0xF0-0xF7 */
- 0x5111,0xFA15,0x5334,0x53E1,0x5670,0x5660,0x566E,0x0000,/* 0xF8-0xFF */
+ 0x5111,0x51DE,0x5334,0x53E1,0x5670,0x5660,0x566E,0x0000,/* 0xF8-0xFF */
};
static wchar_t c2u_E9[256] = {
0x5DAD,0x5DAF,0x5DB4,0x5E67,0x5E68,0x5E66,0x5E6F,0x5EE9,/* 0x68-0x6F */
0x5EE7,0x5EE6,0x5EE8,0x5EE5,0x5F4B,0x5FBC,0x619D,0x61A8,/* 0x70-0x77 */
0x6196,0x61C5,0x61B4,0x61C6,0x61C1,0x61CC,0x61BA,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x74A1,0x750B,0x7580,0x762F,0x762D,0x7631,0x763D,0x7633,/* 0x68-0x6F */
0x763C,0x7635,0x7632,0x7630,0x76BB,0x76E6,0x779A,0x779D,/* 0x70-0x77 */
0x77A1,0x779C,0x779B,0x77A2,0x77A3,0x7795,0x7799,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8790,0x8791,0x879D,0x8784,0x8794,0x879C,0x879A,0x8789,/* 0x68-0x6F */
0x891E,0x8926,0x8930,0x892D,0x892E,0x8927,0x8931,0x8922,/* 0x70-0x77 */
0x8929,0x8923,0x892F,0x892C,0x891F,0x89F1,0x8AE0,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x99E9,0x99E7,0x9AB9,0x9ABF,0x9AB4,0x9ABB,0x9AF6,0x9AFA,/* 0x68-0x6F */
0x9AF9,0x9AF7,0x9B33,0x9B80,0x9B85,0x9B87,0x9B7C,0x9B7E,/* 0x70-0x77 */
0x9B7B,0x9B82,0x9B93,0x9B92,0x9B90,0x9B7A,0x9B95,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x74AB,0x7490,0x74AA,0x74AD,0x74B1,0x74A5,0x74AF,0x7510,/* 0x68-0x6F */
0x7511,0x7512,0x750F,0x7584,0x7643,0x7648,0x7649,0x7647,/* 0x70-0x77 */
0x76A4,0x76E9,0x77B5,0x77AB,0x77B2,0x77B7,0x77B6,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x98-0x9F */
0x0000,0x77B4,0x77B1,0x77A8,0x77F0,0x78F3,0x78FD,0x7902,/* 0xA0-0xA7 */
- 0xF964,0x78FC,0x78F2,0x7905,0x78F9,0x78FE,0x7904,0x79AB,/* 0xA8-0xAF */
+ 0x78FB,0x78FC,0x78F2,0x7905,0x78F9,0x78FE,0x7904,0x79AB,/* 0xA8-0xAF */
0x79A8,0x7A5C,0x7A5B,0x7A56,0x7A58,0x7A54,0x7A5A,0x7ABE,/* 0xB0-0xB7 */
0x7AC0,0x7AC1,0x7C05,0x7C0F,0x7BF2,0x7C00,0x7BFF,0x7BFB,/* 0xB8-0xBF */
0x7C0E,0x7BF4,0x7C0B,0x7BF3,0x7C02,0x7C09,0x7C03,0x7C01,/* 0xC0-0xC7 */
0x87C4,0x87CA,0x87B4,0x87B6,0x87BF,0x87B8,0x87BD,0x87DE,/* 0x68-0x6F */
0x87B2,0x8935,0x8933,0x893C,0x893E,0x8941,0x8952,0x8937,/* 0x70-0x77 */
0x8942,0x89AD,0x89AF,0x89AE,0x89F2,0x89F3,0x8B1E,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9AFC,0x9B48,0x9B9A,0x9BA8,0x9B9E,0x9B9B,0x9BA6,0x9BA1,/* 0x68-0x6F */
0x9BA5,0x9BA4,0x9B86,0x9BA2,0x9BA0,0x9BAF,0x9D33,0x9D41,/* 0x70-0x77 */
0x9D67,0x9D36,0x9D2E,0x9D2F,0x9D31,0x9D38,0x9D30,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7C26,0x7C28,0x7C22,0x7C25,0x7C30,0x7E5C,0x7E50,0x7E56,/* 0x68-0x6F */
0x7E63,0x7E58,0x7E62,0x7E5F,0x7E51,0x7E60,0x7E57,0x7E53,/* 0x70-0x77 */
0x7FB5,0x7FB3,0x7FF7,0x7FF8,0x8075,0x81D1,0x81D2,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x93A8,0x93B4,0x93A3,0x93A5,0x95D2,0x95D3,0x95D1,0x96B3,/* 0x68-0x6F */
0x96D7,0x96DA,0x5DC2,0x96DF,0x96D8,0x96DD,0x9723,0x9722,/* 0x70-0x77 */
0x9725,0x97AC,0x97AE,0x97A8,0x97AB,0x97A4,0x97AA,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x702A,0x720C,0x720A,0x7207,0x7202,0x7205,0x72A5,0x72A6,/* 0x68-0x6F */
0x72A4,0x72A3,0x72A1,0x74CB,0x74C5,0x74B7,0x74C3,0x7516,/* 0x70-0x77 */
0x7660,0x77C9,0x77CA,0x77C4,0x77F1,0x791D,0x791B,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x93CC,0x93D9,0x93A9,0x93E6,0x93CA,0x93D4,0x93EE,0x93E3,/* 0x68-0x6F */
0x93D5,0x93C4,0x93CE,0x93C0,0x93D2,0x93E7,0x957D,0x95DA,/* 0x70-0x77 */
0x95DB,0x96E1,0x9729,0x972B,0x972C,0x9728,0x9726,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x7033,0x7041,0x7213,0x7214,0x72A8,0x737D,0x737C,0x74BA,/* 0x68-0x6F */
0x76AB,0x76AA,0x76BE,0x76ED,0x77CC,0x77CE,0x77CF,0x77CD,/* 0x70-0x77 */
0x77F2,0x7925,0x7923,0x7927,0x7928,0x7924,0x7929,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9B12,0x9B11,0x9C0B,0x9C08,0x9BF7,0x9C05,0x9C12,0x9BF8,/* 0x68-0x6F */
0x9C40,0x9C07,0x9C0E,0x9C06,0x9C17,0x9C14,0x9C09,0x9D9F,/* 0x70-0x77 */
0x9D99,0x9DA4,0x9D9D,0x9D92,0x9D98,0x9D90,0x9D9B,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x942C,0x9440,0x9431,0x95E5,0x95E4,0x95E3,0x9735,0x973A,/* 0x68-0x6F */
0x97BF,0x97E1,0x9864,0x98C9,0x98C6,0x98C0,0x9958,0x9956,/* 0x70-0x77 */
0x9A39,0x9A3D,0x9A46,0x9A44,0x9A42,0x9A41,0x9A3A,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x8635,0x8633,0x862C,0x8632,0x8636,0x882C,0x8828,0x8826,/* 0x48-0x4F */
0x882A,0x8825,0x8971,0x89BF,0x89BE,0x89FB,0x8B7E,0x8B84,/* 0x50-0x57 */
0x8B82,0x8B86,0x8B85,0x8B7F,0x8D15,0x8E95,0x8E94,0x8E9A,/* 0x58-0x5F */
- 0x8E92,0x8E90,0x8E96,0x8E97,0x8F60,0xF98D,0x9147,0x944C,/* 0x60-0x67 */
+ 0x8E92,0x8E90,0x8E96,0x8E97,0x8F60,0x8F62,0x9147,0x944C,/* 0x60-0x67 */
0x9450,0x944A,0x944B,0x944F,0x9447,0x9445,0x9448,0x9449,/* 0x68-0x6F */
0x9446,0x973F,0x97E3,0x986A,0x9869,0x98CB,0x9954,0x995B,/* 0x70-0x77 */
0x9A4E,0x9A53,0x9A54,0x9A4C,0x9A4F,0x9A48,0x9A4A,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9DF6,0x9DE1,0x9DEE,0x9DE6,0x9DF2,0x9DF0,0x9DE2,0x9DEC,/* 0x68-0x6F */
0x9DF4,0x9DF3,0x9DE8,0x9DED,0x9EC2,0x9ED0,0x9EF2,0x9EF3,/* 0x70-0x77 */
0x9F06,0x9F1C,0x9F38,0x9F37,0x9F36,0x9F43,0x9F4F,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x9F7B,0x9F7A,0x9F79,0x571E,0x7066,0x7C6F,0x883C,0x8DB2,/* 0x68-0x6F */
0x8EA6,0x91C3,0x9474,0x9478,0x9476,0x9475,0x9A60,0x9C74,/* 0x70-0x77 */
0x9C73,0x9C71,0x9C75,0x9E14,0x9E13,0x9EF6,0x9F0A,0x0000,/* 0x78-0x7F */
-
+
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x80-0x87 */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x88-0x8F */
0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,0x0000,/* 0x90-0x97 */
0x2560,0x256C,0x2563,0x255A,0x2569,0x255D,0x2552,0x2564,/* 0xE0-0xE7 */
0x2555,0x255E,0x256A,0x2561,0x2558,0x2567,0x255B,0x2553,/* 0xE8-0xEF */
0x2565,0x2556,0x255F,0x256B,0x2562,0x2559,0x2568,0x255C,/* 0xF0-0xF7 */
- 0x2551,0x2550,0x0000,0x0000,0x0000,0x0000,0x2593,0x0000,/* 0xF8-0xFF */
+ 0x2551,0x2550,0x256D,0x256E,0x2570,0x256F,0x2593,0x0000,/* 0xF8-0xFF */
};
static wchar_t *page_charset2uni[256] = {
for(i = 0; i < 16; i++, p++) {
blocks = be32_to_cpu(p->num_blocks);
start = be32_to_cpu(p->first_block);
- if (blocks)
- put_partition(state, slot++, start, blocks);
+ if (blocks) {
+ put_partition(state, slot, start, blocks);
+ if (be32_to_cpu(p->type) == LINUX_RAID_PARTITION)
+ state->parts[slot].flags = 1;
+ }
+ slot++;
}
printk("\n");
put_dev_sector(sect);
--- /dev/null
+/* internal.h: internal procfs definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/proc_fs.h>
+
+struct vmalloc_info {
+ unsigned long used;
+ unsigned long largest_chunk;
+};
+
+#ifdef CONFIG_MMU
+#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
+extern void get_vmalloc_info(struct vmalloc_info *vmi);
+#else
+
+#define VMALLOC_TOTAL 0UL
+#define get_vmalloc_info(vmi) \
+do { \
+ (vmi)->used = 0; \
+ (vmi)->largest_chunk = 0; \
+} while(0)
+
+#endif
+
+extern void create_seq_entry(char *name, mode_t mode, struct file_operations *f);
+extern int proc_exe_link(struct inode *, struct dentry **, struct vfsmount **);
+extern int proc_tid_stat(struct task_struct *, char *);
+extern int proc_tgid_stat(struct task_struct *, char *);
+extern int proc_pid_status(struct task_struct *, char *);
+extern int proc_pid_statm(struct task_struct *, char *);
+
+static inline struct task_struct *proc_task(struct inode *inode)
+{
+ return PROC_I(inode)->task;
+}
+
+static inline int proc_type(struct inode *inode)
+{
+ return PROC_I(inode)->type;
+}
--- /dev/null
+/* nommu.c: mmu-less memory info files
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/time.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mman.h>
+#include <linux/proc_fs.h>
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+#include <linux/seq_file.h>
+#include <linux/hugetlb.h>
+#include <linux/vmalloc.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/tlb.h>
+#include <asm/div64.h>
+#include "internal.h"
+
+/*
+ * display a list of all the VMAs the kernel knows about
+ * - nommu kernels have a single flat list
+ */
+static int nommu_vma_list_show(struct seq_file *m, void *v)
+{
+ struct vm_area_struct *vma;
+ unsigned long ino = 0;
+ struct file *file;
+ dev_t dev = 0;
+ int flags, len;
+
+ vma = rb_entry((struct rb_node *) v, struct vm_area_struct, vm_rb);
+
+ flags = vma->vm_flags;
+ file = vma->vm_file;
+
+ if (file) {
+ struct inode *inode = vma->vm_file->f_dentry->d_inode;
+ dev = inode->i_sb->s_dev;
+ ino = inode->i_ino;
+ }
+
+ seq_printf(m,
+ "%08lx-%08lx %c%c%c%c %08lx %02x:%02x %lu %n",
+ vma->vm_start,
+ vma->vm_end,
+ flags & VM_READ ? 'r' : '-',
+ flags & VM_WRITE ? 'w' : '-',
+ flags & VM_EXEC ? 'x' : '-',
+ flags & VM_MAYSHARE ? flags & VM_SHARED ? 'S' : 's' : 'p',
+ vma->vm_pgoff << PAGE_SHIFT,
+ MAJOR(dev), MINOR(dev), ino, &len);
+
+ if (file) {
+ len = 25 + sizeof(void *) * 6 - len;
+ if (len < 1)
+ len = 1;
+ seq_printf(m, "%*c", len, ' ');
+ seq_path(m, file->f_vfsmnt, file->f_dentry, "");
+ }
+
+ seq_putc(m, '\n');
+ return 0;
+}
+
+static void *nommu_vma_list_start(struct seq_file *m, loff_t *_pos)
+{
+ struct rb_node *_rb;
+ loff_t pos = *_pos;
+ void *next = NULL;
+
+ down_read(&nommu_vma_sem);
+
+ for (_rb = rb_first(&nommu_vma_tree); _rb; _rb = rb_next(_rb)) {
+ if (pos == 0) {
+ next = _rb;
+ break;
+ }
+ pos--; /* skip records preceding the requested position */
+ }
+
+ return next;
+}
+
+static void nommu_vma_list_stop(struct seq_file *m, void *v)
+{
+ up_read(&nommu_vma_sem);
+}
+
+static void *nommu_vma_list_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ (*pos)++;
+ return rb_next((struct rb_node *) v);
+}
+
+static struct seq_operations proc_nommu_vma_list_seqop = {
+ .start = nommu_vma_list_start,
+ .next = nommu_vma_list_next,
+ .stop = nommu_vma_list_stop,
+ .show = nommu_vma_list_show
+};
+
+static int proc_nommu_vma_list_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &proc_nommu_vma_list_seqop);
+}
+
+static struct file_operations proc_nommu_vma_list_operations = {
+ .open = proc_nommu_vma_list_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+static int __init proc_nommu_init(void)
+{
+ create_seq_entry("maps", S_IRUGO, &proc_nommu_vma_list_operations);
+ return 0;
+}
+
+module_init(proc_nommu_init);
#include <linux/mm.h>
#include <linux/file.h>
+#include <linux/mount.h>
#include <linux/seq_file.h>
+#include "internal.h"
/*
* Logic: we've got two memory sums for each process, "shared", and
*/
char *task_mem(struct mm_struct *mm, char *buffer)
{
+ struct vm_list_struct *vml;
unsigned long bytes = 0, sbytes = 0, slack = 0;
- struct mm_tblock_struct *tblock;
down_read(&mm->mmap_sem);
- for (tblock = &mm->context.tblock; tblock; tblock = tblock->next) {
- if (!tblock->rblock)
+ for (vml = mm->context.vmlist; vml; vml = vml->next) {
+ if (!vml->vma)
continue;
- bytes += kobjsize(tblock);
+
+ bytes += kobjsize(vml);
if (atomic_read(&mm->mm_count) > 1 ||
- tblock->rblock->refcount > 1) {
- sbytes += kobjsize(tblock->rblock->kblock);
- sbytes += kobjsize(tblock->rblock);
+ atomic_read(&vml->vma->vm_usage) > 1
+ ) {
+ sbytes += kobjsize((void *) vml->vma->vm_start);
+ sbytes += kobjsize(vml->vma);
} else {
- bytes += kobjsize(tblock->rblock->kblock);
- bytes += kobjsize(tblock->rblock);
- slack += kobjsize(tblock->rblock->kblock) -
- tblock->rblock->size;
+ bytes += kobjsize((void *) vml->vma->vm_start);
+ bytes += kobjsize(vml->vma);
+ slack += kobjsize((void *) vml->vma->vm_start) -
+ (vml->vma->vm_end - vml->vma->vm_start);
}
}
unsigned long task_vsize(struct mm_struct *mm)
{
- struct mm_tblock_struct *tbp;
+ struct vm_list_struct *tbp;
unsigned long vsize = 0;
down_read(&mm->mmap_sem);
- for (tbp = &mm->context.tblock; tbp; tbp = tbp->next) {
- if (tbp->rblock)
- vsize += kobjsize(tbp->rblock->kblock);
+ for (tbp = mm->context.vmlist; tbp; tbp = tbp->next) {
+ if (tbp->vma)
+ vsize += kobjsize((void *) tbp->vma->vm_start);
}
up_read(&mm->mmap_sem);
return vsize;
int task_statm(struct mm_struct *mm, int *shared, int *text,
int *data, int *resident)
{
- struct mm_tblock_struct *tbp;
+ struct vm_list_struct *tbp;
int size = kobjsize(mm);
down_read(&mm->mmap_sem);
- for (tbp = &mm->context.tblock; tbp; tbp = tbp->next) {
- if (tbp->next)
- size += kobjsize(tbp->next);
- if (tbp->rblock) {
- size += kobjsize(tbp->rblock);
- size += kobjsize(tbp->rblock->kblock);
+ for (tbp = mm->context.vmlist; tbp; tbp = tbp->next) {
+ size += kobjsize(tbp);
+ if (tbp->vma) {
+ size += kobjsize(tbp->vma);
+ size += kobjsize((void *) tbp->vma->vm_start);
}
}
return size;
}
+int proc_exe_link(struct inode *inode, struct dentry **dentry, struct vfsmount **mnt)
+{
+ struct vm_list_struct *vml;
+ struct vm_area_struct *vma;
+ struct task_struct *task = proc_task(inode);
+ struct mm_struct *mm = get_task_mm(task);
+ int result = -ENOENT;
+
+ if (!mm)
+ goto out;
+ down_read(&mm->mmap_sem);
+
+ vml = mm->context.vmlist;
+ vma = NULL;
+ while (vml) {
+ if ((vml->vma->vm_flags & VM_EXECUTABLE) && vml->vma->vm_file) {
+ vma = vml->vma;
+ break;
+ }
+ vml = vml->next;
+ }
+
+ if (vma) {
+ *mnt = mntget(vma->vm_file->f_vfsmnt);
+ *dentry = dget(vma->vm_file->f_dentry);
+ result = 0;
+ }
+
+ up_read(&mm->mmap_sem);
+ mmput(mm);
+out:
+ return result;
+}
+
/*
* Albert D. Cahalan suggested to fake entries for the traditional
* sections here. This might be worth investigating.
mark_buffer_dirty(bh);
inode->i_nlink = 0;
mark_inode_dirty(inode);
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
dir->i_nlink--;
mark_inode_dirty(dir);
retval = 0;
memset(de->di_fname, 0, sizeof de->di_fname);
de->di_mode = 0;
mark_buffer_dirty(bh);
- dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ dir->i_ctime = dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
inode->i_nlink--;
inode->i_ctime = dir->i_ctime;
/* TODO */
}
QNX4DEBUG(("qnx4: qnx4_truncate called\n"));
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
unlock_kernel();
}
#include <linux/init.h>
#include <linux/module.h>
-#include <asm/uaccess.h>
#include <asm/byteorder.h>
MODULE_AUTHOR("Jan Kara");
static int v1_read_dqblk(struct dquot *dquot)
{
int type = dquot->dq_type;
- struct file *filp;
- mm_segment_t fs;
- loff_t offset;
struct v1_disk_dqblk dqblk;
- filp = sb_dqopt(dquot->dq_sb)->files[type];
- if (filp == (struct file *)NULL)
+ if (!sb_dqopt(dquot->dq_sb)->files[type])
return -EINVAL;
- /* Now we are sure filp is valid */
- offset = v1_dqoff(dquot->dq_id);
/* Set structure to 0s in case read fails/is after end of file */
memset(&dqblk, 0, sizeof(struct v1_disk_dqblk));
- fs = get_fs();
- set_fs(KERNEL_DS);
- filp->f_op->read(filp, (char *)&dqblk, sizeof(struct v1_disk_dqblk), &offset);
- set_fs(fs);
+ dquot->dq_sb->s_op->quota_read(dquot->dq_sb, type, (char *)&dqblk, sizeof(struct v1_disk_dqblk), v1_dqoff(dquot->dq_id));
v1_disk2mem_dqblk(&dquot->dq_dqb, &dqblk);
if (dquot->dq_dqb.dqb_bhardlimit == 0 && dquot->dq_dqb.dqb_bsoftlimit == 0 &&
static int v1_commit_dqblk(struct dquot *dquot)
{
short type = dquot->dq_type;
- struct file *filp;
- mm_segment_t fs;
- loff_t offset;
ssize_t ret;
struct v1_disk_dqblk dqblk;
- filp = sb_dqopt(dquot->dq_sb)->files[type];
- offset = v1_dqoff(dquot->dq_id);
- fs = get_fs();
- set_fs(KERNEL_DS);
-
v1_mem2disk_dqblk(&dqblk, &dquot->dq_dqb);
if (dquot->dq_id == 0) {
dqblk.dqb_btime = sb_dqopt(dquot->dq_sb)->info[type].dqi_bgrace;
dqblk.dqb_itime = sb_dqopt(dquot->dq_sb)->info[type].dqi_igrace;
}
ret = 0;
- if (filp)
- ret = filp->f_op->write(filp, (char *)&dqblk,
- sizeof(struct v1_disk_dqblk), &offset);
+ if (sb_dqopt(dquot->dq_sb)->files[type])
+ ret = dquot->dq_sb->s_op->quota_write(dquot->dq_sb, type, (char *)&dqblk,
+ sizeof(struct v1_disk_dqblk), v1_dqoff(dquot->dq_id));
if (ret != sizeof(struct v1_disk_dqblk)) {
printk(KERN_WARNING "VFS: dquota write failed on dev %s\n",
dquot->dq_sb->s_id);
ret = 0;
out:
- set_fs(fs);
dqstats.writes++;
return ret;
static int v1_check_quota_file(struct super_block *sb, int type)
{
- struct file *f = sb_dqopt(sb)->files[type];
- struct inode *inode = f->f_dentry->d_inode;
+ struct inode *inode = sb_dqopt(sb)->files[type];
ulong blocks;
size_t off;
struct v2_disk_dqheader dqhead;
- mm_segment_t fs;
ssize_t size;
- loff_t offset = 0;
loff_t isize;
static const uint quota_magics[] = V2_INITQMAGICS;
if ((blocks % sizeof(struct v1_disk_dqblk) * BLOCK_SIZE + off) % sizeof(struct v1_disk_dqblk))
return 0;
	/* Double-check that we didn't get a file in the new format - with the old quotactl() this could happen */
- fs = get_fs();
- set_fs(KERNEL_DS);
- size = f->f_op->read(f, (char *)&dqhead, sizeof(struct v2_disk_dqheader), &offset);
- set_fs(fs);
+ size = sb->s_op->quota_read(sb, type, (char *)&dqhead, sizeof(struct v2_disk_dqheader), 0);
if (size != sizeof(struct v2_disk_dqheader))
return 1; /* Probably not new format */
if (le32_to_cpu(dqhead.dqh_magic) != quota_magics[type])
static int v1_read_file_info(struct super_block *sb, int type)
{
struct quota_info *dqopt = sb_dqopt(sb);
- mm_segment_t fs;
- loff_t offset;
- struct file *filp = dqopt->files[type];
struct v1_disk_dqblk dqblk;
int ret;
- offset = v1_dqoff(0);
- fs = get_fs();
- set_fs(KERNEL_DS);
- if ((ret = filp->f_op->read(filp, (char *)&dqblk, sizeof(struct v1_disk_dqblk), &offset)) != sizeof(struct v1_disk_dqblk)) {
+ if ((ret = sb->s_op->quota_read(sb, type, (char *)&dqblk, sizeof(struct v1_disk_dqblk), v1_dqoff(0))) != sizeof(struct v1_disk_dqblk)) {
if (ret >= 0)
ret = -EIO;
goto out;
dqopt->info[type].dqi_igrace = dqblk.dqb_itime ? dqblk.dqb_itime : MAX_IQ_TIME;
dqopt->info[type].dqi_bgrace = dqblk.dqb_btime ? dqblk.dqb_btime : MAX_DQ_TIME;
out:
- set_fs(fs);
return ret;
}
static int v1_write_file_info(struct super_block *sb, int type)
{
struct quota_info *dqopt = sb_dqopt(sb);
- mm_segment_t fs;
- struct file *filp = dqopt->files[type];
struct v1_disk_dqblk dqblk;
- loff_t offset;
int ret;
dqopt->info[type].dqi_flags &= ~DQF_INFO_DIRTY;
- offset = v1_dqoff(0);
- fs = get_fs();
- set_fs(KERNEL_DS);
- if ((ret = filp->f_op->read(filp, (char *)&dqblk, sizeof(struct v1_disk_dqblk), &offset)) != sizeof(struct v1_disk_dqblk)) {
+ if ((ret = sb->s_op->quota_read(sb, type, (char *)&dqblk,
+ sizeof(struct v1_disk_dqblk), v1_dqoff(0))) != sizeof(struct v1_disk_dqblk)) {
if (ret >= 0)
ret = -EIO;
goto out;
}
dqblk.dqb_itime = dqopt->info[type].dqi_igrace;
dqblk.dqb_btime = dqopt->info[type].dqi_bgrace;
- offset = v1_dqoff(0);
- ret = filp->f_op->write(filp, (char *)&dqblk, sizeof(struct v1_disk_dqblk), &offset);
+ ret = sb->s_op->quota_write(sb, type, (char *)&dqblk,
+ sizeof(struct v1_disk_dqblk), v1_dqoff(0));
if (ret == sizeof(struct v1_disk_dqblk))
ret = 0;
else if (ret > 0)
ret = -EIO;
out:
- set_fs(fs);
return ret;
}
#include <linux/reiserfs_acl.h>
#include <asm/uaccess.h>
+static int reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl);
+
static int
xattr_set_acl(struct inode *inode, int type, const void *value, size_t size)
{
* inode->i_sem: down
* BKL held [before 2.5.x]
*/
-int
+static int
reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
char *name;
if (!root)
goto out;
- s->s_root = d_alloc_root(iget(s, sz));
-
+ s->s_root = d_alloc_root(root);
if (!s->s_root)
goto outiput;
memset (de->name + namelen, 0, SYSV_DIRSIZE - namelen - 2);
de->inode = cpu_to_fs16(SYSV_SB(inode->i_sb), inode->i_ino);
err = dir_commit_chunk(page, from, to);
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
out_page:
dir_put_page(page);
de->inode = 0;
err = dir_commit_chunk(page, from, to);
dir_put_page(page);
- inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
return err;
}
de->inode = cpu_to_fs16(SYSV_SB(inode->i_sb), inode->i_ino);
err = dir_commit_chunk(page, from, to);
dir_put_page(page);
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
}
inode->i_uid = current->fsuid;
inode->i_ino = fs16_to_cpu(sbi, ino);
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
inode->i_blocks = inode->i_blksize = 0;
memset(SYSV_I(inode)->i_data, 0, sizeof(SYSV_I(inode)->i_data));
SYSV_I(inode)->i_dir_start_lookup = 0;
if (inode->i_nlink >= SYSV_SB(inode->i_sb)->s_link_max)
return -EMLINK;
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
inc_count(inode);
atomic_inc(&inode->i_count);
goto out_dir;
inc_count(old_inode);
sysv_set_link(new_de, new_page, old_inode);
- new_inode->i_ctime = CURRENT_TIME;
+ new_inode->i_ctime = CURRENT_TIME_SEC;
if (dir_de)
new_inode->i_nlink--;
dec_count(new_inode);
struct udf_bitmap *bitmap,
kernel_lb_addr bloc, uint32_t offset, uint32_t count)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
struct buffer_head * bh = NULL;
unsigned long block;
unsigned long block_group;
int bitmap_nr;
unsigned long overflow;
- lock_super(sb);
+ down(&sbi->s_alloc_sem);
if (bloc.logicalBlockNum < 0 ||
(bloc.logicalBlockNum + count) > UDF_SB_PARTLEN(sb, bloc.partitionReferenceNum))
{
sb->s_dirt = 1;
if (UDF_SB_LVIDBH(sb))
mark_buffer_dirty(UDF_SB_LVIDBH(sb));
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return;
}
struct udf_bitmap *bitmap, uint16_t partition, uint32_t first_block,
uint32_t block_count)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
int alloc_count = 0;
int bit, block, block_group, group_start;
int nr_groups, bitmap_nr;
struct buffer_head *bh;
- lock_super(sb);
-
+ down(&sbi->s_alloc_sem);
if (first_block < 0 || first_block >= UDF_SB_PARTLEN(sb, partition))
goto out;
mark_buffer_dirty(UDF_SB_LVIDBH(sb));
}
sb->s_dirt = 1;
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return alloc_count;
}
struct inode * inode,
struct udf_bitmap *bitmap, uint16_t partition, uint32_t goal, int *err)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
int newbit, bit=0, block, block_group, group_start;
int end_goal, nr_groups, bitmap_nr, i;
struct buffer_head *bh = NULL;
int newblock = 0;
*err = -ENOSPC;
- lock_super(sb);
+ down(&sbi->s_alloc_sem);
repeat:
if (goal < 0 || goal >= UDF_SB_PARTLEN(sb, partition))
}
if (i >= (nr_groups*2))
{
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return newblock;
}
if (bit < sb->s_blocksize << 3)
bit = udf_find_next_one_bit(bh->b_data, sb->s_blocksize << 3, group_start << 3);
if (bit >= sb->s_blocksize << 3)
{
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return 0;
}
*/
if (inode && DQUOT_ALLOC_BLOCK(inode, 1))
{
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
*err = -EDQUOT;
return 0;
}
mark_buffer_dirty(UDF_SB_LVIDBH(sb));
}
sb->s_dirt = 1;
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
*err = 0;
return newblock;
error_return:
*err = -EIO;
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return 0;
}
struct inode * table,
kernel_lb_addr bloc, uint32_t offset, uint32_t count)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
uint32_t start, end;
uint32_t nextoffset, oextoffset, elen;
kernel_lb_addr nbloc, obloc, eloc;
int8_t etype;
int i;
- lock_super(sb);
+ down(&sbi->s_alloc_sem);
if (bloc.logicalBlockNum < 0 ||
(bloc.logicalBlockNum + count) > UDF_SB_PARTLEN(sb, bloc.partitionReferenceNum))
{
error_return:
sb->s_dirt = 1;
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return;
}
struct inode *table, uint16_t partition, uint32_t first_block,
uint32_t block_count)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
int alloc_count = 0;
uint32_t extoffset, elen, adsize;
kernel_lb_addr bloc, eloc;
else
return 0;
- lock_super(sb);
-
+ down(&sbi->s_alloc_sem);
extoffset = sizeof(struct unallocSpaceEntry);
bloc = UDF_I_LOCATION(table);
mark_buffer_dirty(UDF_SB_LVIDBH(sb));
sb->s_dirt = 1;
}
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return alloc_count;
}
struct inode * inode,
struct inode *table, uint16_t partition, uint32_t goal, int *err)
{
+ struct udf_sb_info *sbi = UDF_SB(sb);
uint32_t spread = 0xFFFFFFFF, nspread = 0xFFFFFFFF;
uint32_t newblock = 0, adsize;
uint32_t extoffset, goal_extoffset, elen, goal_elen = 0;
else
return newblock;
- lock_super(sb);
-
+ down(&sbi->s_alloc_sem);
if (goal < 0 || goal >= UDF_SB_PARTLEN(sb, partition))
goal = 0;
if (spread == 0xFFFFFFFF)
{
udf_release_data(goal_bh);
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
return 0;
}
if (inode && DQUOT_ALLOC_BLOCK(inode, 1))
{
udf_release_data(goal_bh);
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
*err = -EDQUOT;
return 0;
}
}
sb->s_dirt = 1;
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
*err = 0;
return newblock;
}
void udf_free_inode(struct inode * inode)
{
- struct super_block * sb = inode->i_sb;
- int is_directory;
- unsigned long ino;
-
- ino = inode->i_ino;
+ struct super_block *sb = inode->i_sb;
+ struct udf_sb_info *sbi = UDF_SB(sb);
/*
* Note: we must free any quota before locking the superblock,
DQUOT_FREE_INODE(inode);
DQUOT_DROP(inode);
- lock_super(sb);
-
- is_directory = S_ISDIR(inode->i_mode);
-
clear_inode(inode);
- if (UDF_SB_LVIDBH(sb))
- {
- if (is_directory)
+ down(&sbi->s_alloc_sem);
+ if (sbi->s_lvidbh) {
+ if (S_ISDIR(inode->i_mode))
UDF_SB_LVIDIU(sb)->numDirs =
cpu_to_le32(le32_to_cpu(UDF_SB_LVIDIU(sb)->numDirs) - 1);
else
UDF_SB_LVIDIU(sb)->numFiles =
cpu_to_le32(le32_to_cpu(UDF_SB_LVIDIU(sb)->numFiles) - 1);
- mark_buffer_dirty(UDF_SB_LVIDBH(sb));
+ mark_buffer_dirty(sbi->s_lvidbh);
}
- unlock_super(sb);
+ up(&sbi->s_alloc_sem);
udf_free_blocks(sb, NULL, UDF_I_LOCATION(inode), 0, 1);
}
struct inode * udf_new_inode (struct inode *dir, int mode, int * err)
{
- struct super_block *sb;
+ struct super_block *sb = dir->i_sb;
+ struct udf_sb_info *sbi = UDF_SB(sb);
struct inode * inode;
int block;
uint32_t start = UDF_I_LOCATION(dir).logicalBlockNum;
- sb = dir->i_sb;
inode = new_inode(sb);
if (!inode)
iput(inode);
return NULL;
}
- lock_super(sb);
+ down(&sbi->s_alloc_sem);
UDF_I_UNIQUE(inode) = 0;
UDF_I_LENEXTENTS(inode) = 0;
UDF_I_NEXT_ALLOC_BLOCK(inode) = 0;
else
UDF_I_ALLOCTYPE(inode) = ICBTAG_FLAG_AD_LONG;
inode->i_mtime = inode->i_atime = inode->i_ctime =
- UDF_I_CRTIME(inode) = CURRENT_TIME;
+ UDF_I_CRTIME(inode) = current_fs_time(inode->i_sb);
insert_inode_hash(inode);
mark_inode_dirty(inode);
+ up(&sbi->s_alloc_sem);
- unlock_super(sb);
if (DQUOT_ALLOC_INODE(inode))
{
DQUOT_DROP(inode);
kernel_lb_addr, uint32_t, struct buffer_head **);
static int udf_get_block(struct inode *, sector_t, struct buffer_head *, int);
-/*
- * udf_put_inode
- *
- * PURPOSE
- *
- * DESCRIPTION
- * This routine is called whenever the kernel no longer needs the inode.
- *
- * HISTORY
- * July 1, 1997 - Andrew E. Mileski
- * Written, tested, and released.
- *
- * Called at each iput()
- */
-void udf_put_inode(struct inode * inode)
-{
- if (!(inode->i_sb->s_flags & MS_RDONLY))
- {
- lock_kernel();
- udf_discard_prealloc(inode);
- unlock_kernel();
- }
-}
-
/*
* udf_delete_inode
*
void udf_clear_inode(struct inode *inode)
{
+ if (!(inode->i_sb->s_flags & MS_RDONLY)) {
+ lock_kernel();
+ udf_discard_prealloc(inode);
+ unlock_kernel();
+ }
+
kfree(UDF_I_DATA(inode));
UDF_I_DATA(inode) = NULL;
}
*new = 1;
UDF_I_NEXT_ALLOC_BLOCK(inode) = block;
UDF_I_NEXT_ALLOC_GOAL(inode) = newblocknum;
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = current_fs_time(inode->i_sb);
if (IS_SYNC(inode))
udf_sync_inode(inode);
udf_truncate_extents(inode);
}
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_ctime = current_fs_time(inode->i_sb);
if (IS_SYNC(inode))
udf_sync_inode (inode);
else
unlock_kernel();
}
-/*
- * udf_read_inode
- *
- * PURPOSE
- * Read an inode.
- *
- * DESCRIPTION
- * This routine is called by iget() [which is called by udf_iget()]
- * (clean_inode() will have been called first)
- * when an inode is first read into memory.
- *
- * HISTORY
- * July 1, 1997 - Andrew E. Mileski
- * Written, tested, and released.
- *
- * 12/19/98 dgb Updated to fix size problems.
- */
-
-void
-udf_read_inode(struct inode *inode)
-{
- memset(&UDF_I_LOCATION(inode), 0xFF, sizeof(kernel_lb_addr));
-}
-
static void
__udf_read_inode(struct inode *inode)
{
return err;
}
-/*
- * udf_iget
- *
- * PURPOSE
- * Get an inode.
- *
- * DESCRIPTION
- * This routine replaces iget() and read_inode().
- *
- * HISTORY
- * October 3, 1997 - Andrew E. Mileski
- * Written, tested, and released.
- *
- * 12/19/98 dgb Added semaphore and changed to be a wrapper of iget
- */
struct inode *
udf_iget(struct super_block *sb, kernel_lb_addr ino)
{
- struct inode *inode;
- unsigned long block;
-
- block = udf_get_lb_pblock(sb, ino, 0);
-
- /* Get the inode */
-
- inode = iget(sb, block);
- /* calls udf_read_inode() ! */
+ unsigned long block = udf_get_lb_pblock(sb, ino, 0);
+ struct inode *inode = iget_locked(sb, block);
if (!inode)
- {
- printk(KERN_ERR "udf: iget() failed\n");
return NULL;
- }
- else if (is_bad_inode(inode))
- {
- iput(inode);
- return NULL;
- }
- else if (UDF_I_LOCATION(inode).logicalBlockNum == 0xFFFFFFFF &&
- UDF_I_LOCATION(inode).partitionReferenceNum == 0xFFFF)
- {
+
+ if (inode->i_state & I_NEW) {
memcpy(&UDF_I_LOCATION(inode), &ino, sizeof(kernel_lb_addr));
__udf_read_inode(inode);
- if (is_bad_inode(inode))
- {
- iput(inode);
- return NULL;
- }
+ unlock_new_inode(inode);
}
- if ( ino.logicalBlockNum >= UDF_SB_PARTLEN(sb, ino.partitionReferenceNum) )
- {
+ if (is_bad_inode(inode))
+ goto out_iput;
+
+ if (ino.logicalBlockNum >= UDF_SB_PARTLEN(sb, ino.partitionReferenceNum)) {
udf_debug("block=%d, partition=%d out of range\n",
ino.logicalBlockNum, ino.partitionReferenceNum);
make_bad_inode(inode);
- iput(inode);
- return NULL;
- }
+ goto out_iput;
+ }
return inode;
+
+ out_iput:
+ iput(inode);
+ return NULL;
}
int8_t udf_add_aext(struct inode *inode, kernel_lb_addr *bloc, int *extoffset,
extern struct buffer_head * udf_bread(struct inode *, int, int, int *);
extern void udf_truncate(struct inode *);
extern void udf_read_inode(struct inode *);
-extern void udf_put_inode(struct inode *);
extern void udf_delete_inode(struct inode *);
extern void udf_clear_inode(struct inode *);
extern int udf_write_inode(struct inode *, int);
if (IS_DIRSYNC(dir))
sync_dirty_buffer(bh);
brelse (bh);
- dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ dir->i_mtime = dir->i_ctime = CURRENT_TIME_SEC;
dir->i_version++;
mark_inode_dirty(dir);
fs16_to_cpu(sb, dir->d_reclen));
dir->d_ino = 0;
inode->i_version++;
- inode->i_ctime = inode->i_mtime = CURRENT_TIME;
+ inode->i_ctime = inode->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
mark_buffer_dirty(bh);
if (IS_DIRSYNC(inode))
inode->i_ino = cg * uspi->s_ipg + bit;
inode->i_blksize = PAGE_SIZE; /* This is the optimal IO size (for stat), not the fs block size */
inode->i_blocks = 0;
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
ufsi->i_flags = UFS_I(dir)->i_flags;
ufsi->i_lastfrag = 0;
ufsi->i_gen = 0;
*new = 1;
}
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
if (IS_SYNC(inode))
ufs_sync_inode (inode);
mark_inode_dirty(inode);
mark_buffer_dirty(bh);
if (IS_SYNC(inode))
sync_dirty_buffer(bh);
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
out:
brelse (bh);
return -EMLINK;
}
- inode->i_ctime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME_SEC;
ufs_inc_count(inode);
atomic_inc(&inode->i_count);
goto out_dir;
ufs_inc_count(old_inode);
ufs_set_link(new_dir, new_de, new_bh, old_inode);
- new_inode->i_ctime = CURRENT_TIME;
+ new_inode->i_ctime = CURRENT_TIME_SEC;
if (dir_de)
new_inode->i_nlink--;
ufs_dec_count(new_inode);
brelse (bh);
}
}
- inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_ctime = CURRENT_TIME_SEC;
ufsi->i_lastfrag = DIRECT_FRAGMENT;
unlock_kernel();
mark_inode_dirty(inode);
*
* Short name translation 1999, 2001 by Wolfram Pienkoss <wp@bszh.de>
*
- * Support Multibyte character and cleanup by
- * OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
+ * Support Multibyte characters and cleanup by
+ * OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
*/
#include <linux/module.h>
#include <linux/buffer_head.h>
#include <linux/namei.h>
-static int vfat_hashi(struct dentry *parent, struct qstr *qstr);
-static int vfat_hash(struct dentry *parent, struct qstr *qstr);
-static int vfat_cmpi(struct dentry *dentry, struct qstr *a, struct qstr *b);
-static int vfat_cmp(struct dentry *dentry, struct qstr *a, struct qstr *b);
-static int vfat_revalidate(struct dentry *dentry, struct nameidata *nd);
-
-static struct dentry_operations vfat_dentry_ops[4] = {
- {
- .d_hash = vfat_hashi,
- .d_compare = vfat_cmpi,
- },
- {
- .d_revalidate = vfat_revalidate,
- .d_hash = vfat_hashi,
- .d_compare = vfat_cmpi,
- },
- {
- .d_hash = vfat_hash,
- .d_compare = vfat_cmp,
- },
- {
- .d_revalidate = vfat_revalidate,
- .d_hash = vfat_hash,
- .d_compare = vfat_cmp,
- }
-};
-
static int vfat_revalidate(struct dentry *dentry, struct nameidata *nd)
{
int ret = 1;
{
unsigned int len = qstr->len;
- while (len && qstr->name[len-1] == '.')
+ while (len && qstr->name[len - 1] == '.')
len--;
-
return len;
}
static int vfat_hash(struct dentry *dentry, struct qstr *qstr)
{
qstr->hash = full_name_hash(qstr->name, vfat_striptail_len(qstr));
-
return 0;
}
return 1;
}
-/* Characters that are undesirable in an MS-DOS file name */
-
-static wchar_t bad_chars[] = {
- /* `*' `?' `<' `>' `|' `"' `:' `/' */
- 0x002A, 0x003F, 0x003C, 0x003E, 0x007C, 0x0022, 0x003A, 0x002F,
- /* `\' */
- 0x005C, 0,
+static struct dentry_operations vfat_dentry_ops[4] = {
+ {
+ .d_hash = vfat_hashi,
+ .d_compare = vfat_cmpi,
+ },
+ {
+ .d_revalidate = vfat_revalidate,
+ .d_hash = vfat_hashi,
+ .d_compare = vfat_cmpi,
+ },
+ {
+ .d_hash = vfat_hash,
+ .d_compare = vfat_cmp,
+ },
+ {
+ .d_revalidate = vfat_revalidate,
+ .d_hash = vfat_hash,
+ .d_compare = vfat_cmp,
+ }
};
-#define IS_BADCHAR(uni) (vfat_unistrchr(bad_chars, (uni)) != NULL)
-static wchar_t replace_chars[] = {
- /* `[' `]' `;' `,' `+' `=' */
- 0x005B, 0x005D, 0x003B, 0x002C, 0x002B, 0x003D, 0,
-};
-#define IS_REPLACECHAR(uni) (vfat_unistrchr(replace_chars, (uni)) != NULL)
+/* Characters that are undesirable in an MS-DOS file name */
-static wchar_t skip_chars[] = {
- /* `.' ` ' */
- 0x002E, 0x0020, 0,
-};
-#define IS_SKIPCHAR(uni) \
- ((wchar_t)(uni) == skip_chars[0] || (wchar_t)(uni) == skip_chars[1])
+static inline wchar_t vfat_bad_char(wchar_t w)
+{
+ return (w < 0x0020)
+ || (w == '*') || (w == '?') || (w == '<') || (w == '>')
+ || (w == '|') || (w == '"') || (w == ':') || (w == '/')
+ || (w == '\\');
+}
+
+static inline wchar_t vfat_replace_char(wchar_t w)
+{
+ return (w == '[') || (w == ']') || (w == ';') || (w == ',')
+ || (w == '+') || (w == '=');
+}
-static inline wchar_t *vfat_unistrchr(const wchar_t *s, const wchar_t c)
+static wchar_t vfat_skip_char(wchar_t w)
{
- for(; *s != c; ++s)
- if (*s == 0)
- return NULL;
- return (wchar_t *) s;
+ return (w == '.') || (w == ' ');
}
static inline int vfat_is_used_badchars(const wchar_t *s, int len)
{
int i;
-
+
for (i = 0; i < len; i++)
- if (s[i] < 0x0020 || IS_BADCHAR(s[i]))
+ if (vfat_bad_char(s[i]))
return -EINVAL;
return 0;
}
static int vfat_valid_longname(const unsigned char *name, unsigned int len)
{
- if (len && name[len-1] == ' ')
- return 0;
+ if (name[len - 1] == ' ')
+ return -EINVAL;
if (len >= 256)
- return 0;
+ return -ENAMETOOLONG;
/* MS-DOS "device special files" */
if (len == 3 || (len > 3 && name[3] == '.')) { /* basename == 3 */
!strnicmp(name, "con", 3) ||
!strnicmp(name, "nul", 3) ||
!strnicmp(name, "prn", 3))
- return 0;
+ return -EINVAL;
}
if (len == 4 || (len > 4 && name[4] == '.')) { /* basename == 4 */
/* "com1", "com2", ... */
if ('1' <= name[3] && name[3] <= '9') {
if (!strnicmp(name, "com", 3) ||
!strnicmp(name, "lpt", 3))
- return 0;
+ return -EINVAL;
}
}
- return 1;
+ return 0;
}
static int vfat_find_form(struct inode *dir, unsigned char *name)
return 0;
}
-/*
+/*
* 1) Valid characters for the 8.3 format alias are any combination of
* letters, uppercase alphabets, digits, any of the
* following special characters:
* WinNT's Extension:
 * The file name and extension each contain only uppercase or only
 * lowercase characters; this is expressed by CASE_LOWER_BASE and CASE_LOWER_EXT.
- *
+ *
 * 2) The file name is in 8.3 format but contains uppercase and
 * lowercase characters, multi-byte characters, etc. In this case
 * numtail is not added, but the long filename is stored.
- *
+ *
 * 3) In any case other than the above, or when any of the following
 * special characters are contained:
* . [ ] ; , + =
(x)->valid = 1; \
} while (0)
-static inline unsigned char
-shortname_info_to_lcase(struct shortname_info *base,
- struct shortname_info *ext)
-{
- unsigned char lcase = 0;
-
- if (base->valid && ext->valid) {
- if (!base->upper && base->lower && (ext->lower || ext->upper))
- lcase |= CASE_LOWER_BASE;
- if (!ext->upper && ext->lower && (base->lower || base->upper))
- lcase |= CASE_LOWER_EXT;
- }
-
- return lcase;
-}
-
static inline int to_shortname_char(struct nls_table *nls,
- unsigned char *buf, int buf_size, wchar_t *src,
- struct shortname_info *info)
+ unsigned char *buf, int buf_size,
+ wchar_t *src, struct shortname_info *info)
{
int len;
- if (IS_SKIPCHAR(*src)) {
+ if (vfat_skip_char(*src)) {
info->valid = 0;
return 0;
}
- if (IS_REPLACECHAR(*src)) {
+ if (vfat_replace_char(*src)) {
info->valid = 0;
buf[0] = '_';
return 1;
}
-
+
len = nls->uni2char(*src, buf, buf_size);
if (len <= 0) {
info->valid = 0;
info->lower = 0;
info->upper = 0;
}
-
+
return len;
}
/* Now, we need to create a shortname from the long name */
ext_start = end = &uname[ulen];
while (--ext_start >= uname) {
- if (*ext_start == 0x002E) { /* is `.' */
+ if (*ext_start == 0x002E) { /* is `.' */
if (ext_start == end - 1) {
sz = ulen;
ext_start = NULL;
*/
name_start = &uname[0];
while (name_start < ext_start) {
- if (!IS_SKIPCHAR(*name_start))
+ if (!vfat_skip_char(*name_start))
break;
name_start++;
}
ext_start++;
} else {
sz = ulen;
- ext_start=NULL;
+ ext_start = NULL;
}
}
numtail2_baselen = baselen;
if (baselen < 6 && (baselen + chl) > 6)
numtail_baselen = baselen;
- for (chi = 0; chi < chl; chi++){
+ for (chi = 0; chi < chl; chi++) {
*p++ = charbuf[chi];
baselen++;
if (baselen >= 8)
if (opt_shortname & VFAT_SFN_CREATE_WIN95) {
return (base_info.upper && ext_info.upper);
} else if (opt_shortname & VFAT_SFN_CREATE_WINNT) {
- if ((base_info.upper || base_info.lower)
- && (ext_info.upper || ext_info.lower)) {
- *lcase = shortname_info_to_lcase(&base_info,
- &ext_info);
+ if ((base_info.upper || base_info.lower) &&
+ (ext_info.upper || ext_info.lower)) {
+ if (!base_info.upper && base_info.lower)
+ *lcase |= CASE_LOWER_BASE;
+ if (!ext_info.upper && ext_info.lower)
+ *lcase |= CASE_LOWER_EXT;
return 1;
}
return 0;
BUG();
}
}
-
+
if (MSDOS_SB(dir->i_sb)->options.numtail == 0)
if (vfat_find_form(dir, name_res) < 0)
return 0;
* values for part of the base.
*/
- if (baselen>6) {
+ if (baselen > 6) {
baselen = numtail_baselen;
name_res[7] = ' ';
}
name_res[baselen] = '~';
for (i = 1; i < 10; i++) {
- name_res[baselen+1] = i + '0';
+ name_res[baselen + 1] = i + '0';
if (vfat_find_form(dir, name_res) < 0)
return 0;
}
i = jiffies & 0xffff;
sz = (jiffies >> 16) & 0x7;
- if (baselen>2) {
+ if (baselen > 2) {
baselen = numtail2_baselen;
name_res[7] = ' ';
}
- name_res[baselen+4] = '~';
- name_res[baselen+5] = '1' + sz;
+ name_res[baselen + 4] = '~';
+ name_res[baselen + 5] = '1' + sz;
while (1) {
sprintf(buf, "%04X", i);
memcpy(&name_res[baselen], buf, 4);
* We stripped '.'s before and set len appropriately,
* but utf8_mbstowcs doesn't care about len
*/
- *outlen -= (name_len-len);
+ *outlen -= (name_len - len);
op = &outname[*outlen * sizeof(wchar_t)];
} else {
if (nls) {
for (i = 0, ip = name, op = outname, *outlen = 0;
- i < len && *outlen <= 260; *outlen += 1)
+ i < len && *outlen <= 260;
+ *outlen += 1)
{
if (escape && (*ip == ':')) {
if (i > len - 5)
ip += 5;
i += 5;
} else {
- if ((charlen = nls->char2uni(ip, len-i, (wchar_t *)op)) < 0)
+ if ((charlen = nls->char2uni(ip, len - i, (wchar_t *)op)) < 0)
return -EINVAL;
ip += charlen;
i += charlen;
}
} else {
for (i = 0, ip = name, op = outname, *outlen = 0;
- i < len && *outlen <= 260; i++, *outlen += 1)
+ i < len && *outlen <= 260;
+ i++, *outlen += 1)
{
*op++ = *ip++;
*op++ = 0;
loff_t offset;
*slots = 0;
- if (!vfat_valid_longname(name, len))
- return -EINVAL;
+ res = vfat_valid_longname(name, len);
+ if (res)
+ return res;
- if(!(page = __get_free_page(GFP_KERNEL)))
+ page = __get_free_page(GFP_KERNEL);
+ if (!page)
return -ENOMEM;
uname = (wchar_t *)page;
/* build the entry of long file name */
*slots = usize / 13;
- for (cksum = i = 0; i < 11; i++) {
+ for (cksum = i = 0; i < 11; i++)
cksum = (((cksum&1)<<7)|((cksum&0xfe)>>1)) + msdos_name[i];
- }
for (ps = ds, slot = *slots; slot > 0; slot--, ps++) {
ps->id = slot;
fatwchar_to16(ps->name11_12, uname + offset + 11, 2);
}
ds[0].id |= 0x40;
- de = (struct msdos_dir_entry *) ps;
+ de = (struct msdos_dir_entry *)ps;
shortname:
/* build the entry of 8.3 alias name */
return res;
}
-static int vfat_add_entry(struct inode *dir,struct qstr* qname,
+static int vfat_add_entry(struct inode *dir, struct qstr *qname,
int is_dir, struct vfat_slot_info *sinfo_out,
struct buffer_head **bh, struct msdos_dir_entry **de)
{
if (len == 0)
return -ENOENT;
- dir_slots =
- kmalloc(sizeof(struct msdos_dir_slot) * MSDOS_SLOTS, GFP_KERNEL);
+ dir_slots = kmalloc(sizeof(*dir_slots) * MSDOS_SLOTS, GFP_KERNEL);
if (dir_slots == NULL)
return -ENOMEM;
goto cleanup;
/* build the empty directory entry of number of slots */
- offset = fat_add_entries(dir, slots, &dummy_bh, &dummy_de, &dummy_i_pos);
+ offset =
+ fat_add_entries(dir, slots, &dummy_bh, &dummy_de, &dummy_i_pos);
if (offset < 0) {
res = offset;
goto cleanup;
res = 0;
/* update timestamp */
- dir->i_ctime = dir->i_mtime = dir->i_atime = CURRENT_TIME;
+ dir->i_ctime = dir->i_mtime = dir->i_atime = CURRENT_TIME_SEC;
mark_inode_dirty(dir);
fat_date_unix2dos(dir->i_mtime.tv_sec, &(*de)->time, &(*de)->date);
+ dir->i_mtime.tv_nsec = 0;
(*de)->ctime = (*de)->time;
(*de)->adate = (*de)->cdate = (*de)->date;
return res;
}
-static int vfat_find(struct inode *dir,struct qstr* qname,
- struct vfat_slot_info *sinfo, struct buffer_head **last_bh,
- struct msdos_dir_entry **last_de)
+static int vfat_find(struct inode *dir, struct qstr *qname,
+ struct vfat_slot_info *sinfo, struct buffer_head **last_bh,
+ struct msdos_dir_entry **last_de)
{
struct super_block *sb = dir->i_sb;
loff_t offset;
res = fat_search_long(dir, qname->name, len,
(MSDOS_SB(sb)->options.name_check != 's'),
&offset, &sinfo->longname_offset);
- if (res>0) {
- sinfo->long_slots = res-1;
- if (fat_get_entry(dir,&offset,last_bh,last_de,&sinfo->i_pos)>=0)
+ if (res > 0) {
+ sinfo->long_slots = res - 1;
+ if (fat_get_entry(dir, &offset, last_bh, last_de, &sinfo->i_pos) >= 0)
return 0;
res = -EIO;
- }
+ }
return res ? res : -ENOENT;
}
static struct dentry *vfat_lookup(struct inode *dir, struct dentry *dentry,
- struct nameidata *nd)
+ struct nameidata *nd)
{
int res;
struct vfat_slot_info sinfo;
struct buffer_head *bh = NULL;
struct msdos_dir_entry *de;
int table;
-
+
lock_kernel();
table = (MSDOS_SB(dir->i_sb)->options.name_check == 's') ? 2 : 0;
dentry->d_op = &vfat_dentry_ops[table];
inode = NULL;
- res = vfat_find(dir,&dentry->d_name,&sinfo,&bh,&de);
+ res = vfat_find(dir, &dentry->d_name, &sinfo, &bh, &de);
if (res < 0) {
table++;
goto error;
}
alias = d_find_alias(inode);
if (alias) {
- if (d_invalidate(alias)==0)
+ if (d_invalidate(alias) == 0)
dput(alias);
else {
iput(inode);
unlock_kernel();
return alias;
}
-
+
}
error:
unlock_kernel();
return dentry;
}
-static int vfat_create(struct inode *dir, struct dentry* dentry, int mode,
- struct nameidata *nd)
+static int vfat_create(struct inode *dir, struct dentry *dentry, int mode,
+ struct nameidata *nd)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
if (!inode)
goto out;
res = 0;
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
inode->i_version++;
dir->i_version++;
dentry->d_time = dentry->d_parent->d_inode->i_version;
- d_instantiate(dentry,inode);
+ d_instantiate(dentry, inode);
out:
unlock_kernel();
return res;
}
-static void vfat_remove_entry(struct inode *dir,struct vfat_slot_info *sinfo,
- struct buffer_head *bh, struct msdos_dir_entry *de)
+static void vfat_remove_entry(struct inode *dir, struct vfat_slot_info *sinfo,
+ struct buffer_head *bh,
+ struct msdos_dir_entry *de)
{
loff_t offset, i_pos;
int i;
/* remove the shortname */
- dir->i_mtime = dir->i_atime = CURRENT_TIME;
+ dir->i_mtime = dir->i_atime = CURRENT_TIME_SEC;
dir->i_version++;
mark_inode_dirty(dir);
de->name[0] = DELETED_FLAG;
mark_buffer_dirty(bh);
/* remove the longname */
- offset = sinfo->longname_offset; de = NULL;
+ offset = sinfo->longname_offset;
+ de = NULL;
for (i = sinfo->long_slots; i > 0; --i) {
if (fat_get_entry(dir, &offset, &bh, &de, &i_pos) < 0)
continue;
brelse(bh);
}
-static int vfat_rmdir(struct inode *dir, struct dentry* dentry)
+static int vfat_rmdir(struct inode *dir, struct dentry *dentry)
{
- int res;
+ struct inode *inode = dentry->d_inode;
struct vfat_slot_info sinfo;
struct buffer_head *bh = NULL;
struct msdos_dir_entry *de;
+ int res;
lock_kernel();
- res = fat_dir_empty(dentry->d_inode);
+ res = fat_dir_empty(inode);
if (res)
goto out;
- res = vfat_find(dir,&dentry->d_name,&sinfo, &bh, &de);
+ res = vfat_find(dir, &dentry->d_name, &sinfo, &bh, &de);
if (res < 0)
goto out;
+
res = 0;
- dentry->d_inode->i_nlink = 0;
- dentry->d_inode->i_mtime = dentry->d_inode->i_atime = CURRENT_TIME;
- fat_detach(dentry->d_inode);
- mark_inode_dirty(dentry->d_inode);
+ inode->i_nlink = 0;
+ inode->i_mtime = inode->i_atime = CURRENT_TIME_SEC;
+ fat_detach(inode);
+ mark_inode_dirty(inode);
/* releases bh */
- vfat_remove_entry(dir,&sinfo,bh,de);
+ vfat_remove_entry(dir, &sinfo, bh, de);
dir->i_nlink--;
out:
unlock_kernel();
static int vfat_unlink(struct inode *dir, struct dentry *dentry)
{
- int res;
+ struct inode *inode = dentry->d_inode;
struct vfat_slot_info sinfo;
struct buffer_head *bh = NULL;
struct msdos_dir_entry *de;
+ int res;
lock_kernel();
- res = vfat_find(dir,&dentry->d_name,&sinfo,&bh,&de);
+ res = vfat_find(dir, &dentry->d_name, &sinfo, &bh, &de);
if (res < 0)
goto out;
- dentry->d_inode->i_nlink = 0;
- dentry->d_inode->i_mtime = dentry->d_inode->i_atime = CURRENT_TIME;
- fat_detach(dentry->d_inode);
- mark_inode_dirty(dentry->d_inode);
+ inode->i_nlink = 0;
+ inode->i_mtime = inode->i_atime = CURRENT_TIME_SEC;
+ fat_detach(inode);
+ mark_inode_dirty(inode);
/* releases bh */
- vfat_remove_entry(dir,&sinfo,bh,de);
+ vfat_remove_entry(dir, &sinfo, bh, de);
out:
unlock_kernel();
return res;
}
-static int vfat_mkdir(struct inode *dir,struct dentry* dentry,int mode)
+static int vfat_mkdir(struct inode *dir, struct dentry *dentry, int mode)
{
struct super_block *sb = dir->i_sb;
struct inode *inode = NULL;
inode = fat_build_inode(sb, de, sinfo.i_pos, &res);
if (!inode)
goto out_brelse;
- inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME_SEC;
mark_inode_dirty(inode);
inode->i_version++;
dir->i_version++;
dir->i_nlink++;
- inode->i_nlink = 2; /* no need to mark them dirty */
+ inode->i_nlink = 2; /* no need to mark them dirty */
res = fat_new_dir(inode, dir, 1);
if (res < 0)
goto mkdir_failed;
dentry->d_time = dentry->d_parent->d_inode->i_version;
- d_instantiate(dentry,inode);
+ d_instantiate(dentry, inode);
out_brelse:
brelse(bh);
out:
mkdir_failed:
inode->i_nlink = 0;
- inode->i_mtime = inode->i_atime = CURRENT_TIME;
+ inode->i_mtime = inode->i_atime = CURRENT_TIME_SEC;
fat_detach(inode);
mark_inode_dirty(inode);
/* releases bh */
- vfat_remove_entry(dir,&sinfo,bh,de);
+ vfat_remove_entry(dir, &sinfo, bh, de);
iput(inode);
dir->i_nlink--;
goto out;
}
-
+
static int vfat_rename(struct inode *old_dir, struct dentry *old_dentry,
- struct inode *new_dir, struct dentry *new_dentry)
+ struct inode *new_dir, struct dentry *new_dentry)
{
- struct buffer_head *old_bh,*new_bh,*dotdot_bh;
- struct msdos_dir_entry *old_de,*new_de,*dotdot_de;
+ struct buffer_head *old_bh, *new_bh, *dotdot_bh;
+ struct msdos_dir_entry *old_de, *new_de, *dotdot_de;
loff_t dotdot_i_pos;
struct inode *old_inode, *new_inode;
int res, is_dir;
- struct vfat_slot_info old_sinfo,sinfo;
+ struct vfat_slot_info old_sinfo, sinfo;
old_bh = new_bh = dotdot_bh = NULL;
old_inode = old_dentry->d_inode;
new_inode = new_dentry->d_inode;
lock_kernel();
- res = vfat_find(old_dir,&old_dentry->d_name,&old_sinfo,&old_bh,&old_de);
+ res = vfat_find(old_dir, &old_dentry->d_name, &old_sinfo, &old_bh,
+ &old_de);
if (res < 0)
goto rename_done;
}
if (new_dentry->d_inode) {
- res = vfat_find(new_dir,&new_dentry->d_name,&sinfo,&new_bh,
+ res = vfat_find(new_dir, &new_dentry->d_name, &sinfo, &new_bh,
&new_de);
if (res < 0 || MSDOS_I(new_inode)->i_pos != sinfo.i_pos) {
/* WTF??? Cry and fail. */
}
fat_detach(new_inode);
} else {
- res = vfat_add_entry(new_dir,&new_dentry->d_name,is_dir,&sinfo,
- &new_bh,&new_de);
- if (res < 0) goto rename_done;
+ res = vfat_add_entry(new_dir, &new_dentry->d_name, is_dir,
+ &sinfo, &new_bh, &new_de);
+ if (res < 0)
+ goto rename_done;
}
new_dir->i_version++;
/* releases old_bh */
- vfat_remove_entry(old_dir,&old_sinfo,old_bh,old_de);
- old_bh=NULL;
+ vfat_remove_entry(old_dir, &old_sinfo, old_bh, old_de);
+ old_bh = NULL;
fat_detach(old_inode);
fat_attach(old_inode, sinfo.i_pos);
mark_inode_dirty(old_inode);
old_dir->i_version++;
- old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME;
+ old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME_SEC;
mark_inode_dirty(old_dir);
if (new_inode) {
new_inode->i_nlink--;
- new_inode->i_ctime=CURRENT_TIME;
+ new_inode->i_ctime = CURRENT_TIME_SEC;
}
if (is_dir) {
brelse(new_bh);
unlock_kernel();
return res;
-
}
static struct inode_operations vfat_dir_inode_operations = {
}
static struct super_block *vfat_get_sb(struct file_system_type *fs_type,
- int flags, const char *dev_name, void *data)
+ int flags, const char *dev_name,
+ void *data)
{
return get_sb_bdev(fs_type, flags, dev_name, data, vfat_fill_super);
}
--- /dev/null
+menu "XFS support"
+
+config XFS_FS
+ tristate "XFS filesystem support"
+ select EXPORTFS if NFSD!=n
+ help
+ XFS is a high-performance journaling filesystem which originated
+ on the SGI IRIX platform. It is completely multi-threaded and
+ extent based, supports large files and large filesystems, extended
+ attributes and variable block sizes, and makes extensive use of
+ Btrees (directories, extents, free space) to aid both performance
+ and scalability.
+
+ Refer to the documentation at <http://oss.sgi.com/projects/xfs/>
+ for complete details. This implementation is on-disk compatible
+ with the IRIX version of XFS.
+
+ To compile this file system support as a module, choose M here: the
+ module will be called xfs. Be aware, however, that if the file
+ system of your root partition is compiled as a module, you'll need
+ to use an initial ramdisk (initrd) to boot.
+
+config XFS_EXPORT
+ bool
+ default y if XFS_FS && EXPORTFS
+
+config XFS_RT
+ bool "Realtime support (EXPERIMENTAL)"
+ depends on XFS_FS && EXPERIMENTAL
+ help
+ If you say Y here you will be able to mount and use XFS filesystems
+ which contain a realtime subvolume. The realtime subvolume is a
+ separate area of disk space where only file data is stored. The
+ realtime subvolume is designed to provide very deterministic
+ data rates suitable for media streaming applications.
+
+ See the xfs man page in section 5 for a bit more information.
+
+ This feature is unsupported at this time, is not yet fully
+ functional, and may cause serious problems.
+
+ If unsure, say N.
+
+config XFS_QUOTA
+ bool "Quota support"
+ depends on XFS_FS
+ help
+ If you say Y here, you will be able to set limits for disk usage on
+ a per user and/or a per group basis under XFS. XFS considers quota
+ information as filesystem metadata and uses journaling to provide a
+ higher level guarantee of consistency. The on-disk data format for
+ quota is also compatible with the IRIX version of XFS, allowing a
+ filesystem to be migrated between Linux and IRIX without any need
+ for conversion.
+
+ If unsure, say N. More comprehensive documentation can be found in
+ README.quota in the xfsprogs package. XFS quota can be used either
+ with or without the generic quota support enabled (CONFIG_QUOTA) -
+ they are completely independent subsystems.
+
+config XFS_SECURITY
+ bool "Security Label support"
+ depends on XFS_FS
+ help
+ Security labels support alternative access control models
+ implemented by security modules like SELinux. This option
+ enables an extended attribute namespace for inode security
+ labels in the XFS filesystem.
+
+ If you are not using a security module that requires using
+ extended attributes for inode security labels, say N.
+
+config XFS_POSIX_ACL
+ bool "POSIX ACL support"
+ depends on XFS_FS
+ help
+ POSIX Access Control Lists (ACLs) support permissions for users and
+ groups beyond the owner/group/world scheme.
+
+ To learn more about Access Control Lists, visit the POSIX ACLs for
+ Linux website <http://acl.bestbits.at/>.
+
+ If you don't know what Access Control Lists are, say N.
+
+endmenu
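
The Kconfig entries above describe how the new XFS build options relate to
each other. As a hedged illustration only (the chosen values are hypothetical
and not part of the patch), a modular XFS build with quota and POSIX ACL
support but without the experimental realtime code would leave a .config
fragment along these lines:

	CONFIG_XFS_FS=m
	CONFIG_XFS_QUOTA=y
	CONFIG_XFS_POSIX_ACL=y
	# CONFIG_XFS_RT is not set
	# CONFIG_XFS_SECURITY is not set
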
#endif
if (flags & KM_NOSLEEP) {
- lflags = GFP_ATOMIC;
+ lflags |= GFP_ATOMIC;
} else {
- lflags = GFP_KERNEL;
+ lflags |= GFP_KERNEL;
		/* avoid recursive callbacks to the filesystem during transactions */
if (PFLAGS_TEST_FSTRANS() || (flags & KM_NOFS))
/*
- * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ * Copyright (c) 2000-2005 Silicon Graphics, Inc. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of version 2 of the GNU General Public License as
bhv_desc_t *bdp;
vnode_t *vp = LINVFS_GET_VP(inode);
loff_t isize = i_size_read(inode);
- loff_t offset = page->index << PAGE_CACHE_SHIFT;
+ loff_t offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
int delalloc = -1, unmapped = -1, unwritten = -1;
if (page_has_buffers(page))
{
ASSERT(!private || inode == (struct inode *)private);
- /* private indicates an unwritten extent lay beneath this IO,
- * see linvfs_get_block_core.
- */
+ /* private indicates an unwritten extent lay beneath this IO */
if (private && size > 0) {
vnode_t *vp = LINVFS_GET_VP(inode);
int error;
pgoff_t end_index, last_index, tlast;
int len, err, i, cnt = 0, uptodate = 1;
int flags = startio ? 0 : BMAPI_TRYLOCK;
- int page_dirty = 1;
- int delalloc = 0;
-
+ int page_dirty, delalloc = 0;
- /* Are we off the end of the file ? */
+ /* Is this page beyond the end of the file? */
offset = i_size_read(inode);
end_index = offset >> PAGE_CACHE_SHIFT;
last_index = (offset - 1) >> PAGE_CACHE_SHIFT;
bh = head = page_buffers(page);
iomp = NULL;
+ /*
+ * page_dirty is initially a count of buffers on the page and
+ * is decremented as we move each into a cleanable state.
+ */
len = bh->b_size;
+ page_dirty = PAGE_CACHE_SIZE / len;
+
do {
if (offset >= end_offset)
break;
}
BUG_ON(!buffer_locked(bh));
bh_arr[cnt++] = bh;
- page_dirty = 0;
+ page_dirty--;
}
/*
* Second case, allocate space for a delalloc buffer.
unlock_buffer(bh);
mark_buffer_dirty(bh);
}
- page_dirty = 0;
+ page_dirty--;
}
} else if ((buffer_uptodate(bh) || PageUptodate(page)) &&
(unmapped || startio)) {
unlock_buffer(bh);
mark_buffer_dirty(bh);
}
- page_dirty = 0;
+ page_dirty--;
}
} else if (startio) {
if (buffer_uptodate(bh) &&
!test_and_set_bit(BH_Lock, &bh->b_state)) {
bh_arr[cnt++] = bh;
- page_dirty = 0;
+ page_dirty--;
}
}
}
}
STATIC int
-linvfs_get_block_core(
+__linvfs_get_block(
struct inode *inode,
sector_t iblock,
unsigned long blocks,
if (iomap.iomap_flags & IOMAP_DELAY) {
BUG_ON(direct);
if (create) {
- set_buffer_mapped(bh_result);
set_buffer_uptodate(bh_result);
+ set_buffer_mapped(bh_result);
+ set_buffer_delay(bh_result);
}
- set_buffer_delay(bh_result);
}
if (blocks) {
struct buffer_head *bh_result,
int create)
{
- return linvfs_get_block_core(inode, iblock, 0, bh_result,
+ return __linvfs_get_block(inode, iblock, 0, bh_result,
create, 0, BMAPI_WRITE);
}
struct buffer_head *bh_result,
int create)
{
- return linvfs_get_block_core(inode, iblock, max_blocks, bh_result,
+ return __linvfs_get_block(inode, iblock, max_blocks, bh_result,
create, 1, BMAPI_WRITE|BMAPI_DIRECT);
}
#include <linux/sysctl.h>
#include <linux/proc_fs.h>
#include <linux/workqueue.h>
-#include <linux/suspend.h>
#include <linux/percpu.h>
#include <linux/blkdev.h>
+#include <linux/hash.h>
#include "xfs_linux.h"
kmem_zone_free(pagebuf_cache, (pb));
/*
- * Pagebuf hashing
+ * Page Region interfaces.
+ *
+ * For pages in filesystems where the blocksize is smaller than the
+ * pagesize, we use the page->private field (long) to hold a bitmap
+ * of uptodate regions within the page.
+ *
+ * Each such region is "bytes per page / bits per long" bytes long.
+ *
+ * NBPPR == number-of-bytes-per-page-region
+ * BTOPR == bytes-to-page-region (rounded up)
+ * BTOPRT == bytes-to-page-region-truncated (rounded down)
*/
+#if (BITS_PER_LONG == 32)
+#define PRSHIFT (PAGE_CACHE_SHIFT - 5) /* (32 == 1<<5) */
+#elif (BITS_PER_LONG == 64)
+#define PRSHIFT (PAGE_CACHE_SHIFT - 6) /* (64 == 1<<6) */
+#else
+#error BITS_PER_LONG must be 32 or 64
+#endif
+#define NBPPR (PAGE_CACHE_SIZE/BITS_PER_LONG)
+#define BTOPR(b) (((unsigned int)(b) + (NBPPR - 1)) >> PRSHIFT)
+#define BTOPRT(b) (((unsigned int)(b) >> PRSHIFT))
+
+STATIC unsigned long
+page_region_mask(
+ size_t offset,
+ size_t length)
+{
+ unsigned long mask;
+ int first, final;
-#define NBITS 8
-#define NHASH (1<<NBITS)
+ first = BTOPR(offset);
+ final = BTOPRT(offset + length - 1);
+ first = min(first, final);
-typedef struct {
- struct list_head pb_hash;
- spinlock_t pb_hash_lock;
-} pb_hash_t;
+ mask = ~0UL;
+ mask <<= BITS_PER_LONG - (final - first);
+ mask >>= BITS_PER_LONG - (final);
-STATIC pb_hash_t pbhash[NHASH];
-#define pb_hash(pb) &pbhash[pb->pb_hash_index]
+ ASSERT(offset + length <= PAGE_CACHE_SIZE);
+ ASSERT((final - first) < BITS_PER_LONG && (final - first) >= 0);
-STATIC int
-_bhash(
- struct block_device *bdev,
- loff_t base)
+ return mask;
+}
+
+STATIC inline void
+set_page_region(
+ struct page *page,
+ size_t offset,
+ size_t length)
{
- int bit, hval;
+ page->private |= page_region_mask(offset, length);
+ if (page->private == ~0UL)
+ SetPageUptodate(page);
+}
- base >>= 9;
- base ^= (unsigned long)bdev / L1_CACHE_BYTES;
- for (bit = hval = 0; base && bit < sizeof(base) * 8; bit += NBITS) {
- hval ^= (int)base & (NHASH-1);
- base >>= NBITS;
- }
- return hval;
+STATIC inline int
+test_page_region(
+ struct page *page,
+ size_t offset,
+ size_t length)
+{
+ unsigned long mask = page_region_mask(offset, length);
+
+ return (mask && (page->private & mask) == mask);
}
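
The page-region scheme above dedicates one bit of page->private to a
fixed-size slice of the page. As a minimal standalone sketch - illustrative
only, not part of the patch; the constants simply restate the NBPPR/PRSHIFT
arithmetic for one assumed geometry (4096-byte pages, 64-bit longs) - the
numbers work out as follows:

	#include <assert.h>

	int main(void)
	{
		/* assumed geometry: PAGE_CACHE_SIZE == 4096, BITS_PER_LONG == 64 */
		const unsigned int page_size = 4096;
		const unsigned int bits_per_long = 64;

		/* NBPPR: bytes covered by one region, i.e. one bit of page->private */
		const unsigned int nbppr = page_size / bits_per_long;

		assert(nbppr == 64);               /* each region is 64 bytes        */
		assert(page_size / nbppr == 64);   /* 64 regions fill the whole long */
		return 0;
	}
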
/*
STATIC a_list_t *as_free_head;
STATIC int as_list_len;
-STATIC spinlock_t as_lock = SPIN_LOCK_UNLOCKED;
+STATIC DEFINE_SPINLOCK(as_lock);
/*
* Try to batch vunmaps because they are costly.
{
a_list_t *aentry;
- aentry = kmalloc(sizeof(a_list_t), GFP_ATOMIC);
- if (aentry) {
+ aentry = kmalloc(sizeof(a_list_t), GFP_ATOMIC & ~__GFP_HIGH);
+ if (likely(aentry)) {
spin_lock(&as_lock);
aentry->next = as_free_head;
aentry->vm_addr = addr;
uint flags)
{
struct address_space *mapping = bp->pb_target->pbr_mapping;
- unsigned int sectorshift = bp->pb_target->pbr_sshift;
size_t blocksize = bp->pb_target->pbr_bsize;
size_t size = bp->pb_count_desired;
size_t nbytes, offset;
if (!PageUptodate(page)) {
page_count--;
- if (blocksize == PAGE_CACHE_SIZE) {
+ if (blocksize >= PAGE_CACHE_SIZE) {
if (flags & PBF_READ)
bp->pb_locked = 1;
} else if (!PagePrivate(page)) {
- unsigned long j, range;
-
- /*
- * In this case page->private holds a bitmap
- * of uptodate sectors within the page
- */
- ASSERT(blocksize < PAGE_CACHE_SIZE);
- range = (offset + nbytes) >> sectorshift;
- for (j = offset >> sectorshift; j < range; j++)
- if (!test_bit(j, &page->private))
- break;
- if (j == range)
+ if (test_page_region(page, offset, nbytes))
page_count++;
}
}
* are unlocked. No I/O is implied by this call.
*/
xfs_buf_t *
-_pagebuf_find( /* find buffer for block */
- xfs_buftarg_t *target,/* target for block */
+_pagebuf_find(
+ xfs_buftarg_t *btp, /* block device target */
loff_t ioff, /* starting offset of range */
size_t isize, /* length of range */
page_buf_flags_t flags, /* PBF_TRYLOCK */
{
loff_t range_base;
size_t range_length;
- int hval;
- pb_hash_t *h;
+ xfs_bufhash_t *hash;
xfs_buf_t *pb, *n;
- int not_locked;
range_base = (ioff << BBSHIFT);
range_length = (isize << BBSHIFT);
- /* Ensure we never do IOs smaller than the sector size */
- BUG_ON(range_length < (1 << target->pbr_sshift));
+ /* Check for IOs smaller than the sector size / not sector aligned */
+ ASSERT(!(range_length < (1 << btp->pbr_sshift)));
+ ASSERT(!(range_base & (loff_t)btp->pbr_smask));
- /* Ensure we never do IOs that are not sector aligned */
- BUG_ON(range_base & (loff_t)target->pbr_smask);
+ hash = &btp->bt_hash[hash_long((unsigned long)ioff, btp->bt_hashshift)];
- hval = _bhash(target->pbr_bdev, range_base);
- h = &pbhash[hval];
+ spin_lock(&hash->bh_lock);
- spin_lock(&h->pb_hash_lock);
- list_for_each_entry_safe(pb, n, &h->pb_hash, pb_hash_list) {
- if (pb->pb_target == target &&
- pb->pb_file_offset == range_base &&
+ list_for_each_entry_safe(pb, n, &hash->bh_list, pb_hash_list) {
+ ASSERT(btp == pb->pb_target);
+ if (pb->pb_file_offset == range_base &&
pb->pb_buffer_length == range_length) {
- /* If we look at something bring it to the
- * front of the list for next time
+ /*
+ * If we look at something bring it to the
+ * front of the list for next time.
*/
atomic_inc(&pb->pb_hold);
- list_move(&pb->pb_hash_list, &h->pb_hash);
+ list_move(&pb->pb_hash_list, &hash->bh_list);
goto found;
}
}
/* No match found */
if (new_pb) {
- _pagebuf_initialize(new_pb, target, range_base,
+ _pagebuf_initialize(new_pb, btp, range_base,
range_length, flags);
- new_pb->pb_hash_index = hval;
- list_add(&new_pb->pb_hash_list, &h->pb_hash);
+ new_pb->pb_hash = hash;
+ list_add(&new_pb->pb_hash_list, &hash->bh_list);
} else {
XFS_STATS_INC(pb_miss_locked);
}
- spin_unlock(&h->pb_hash_lock);
- return (new_pb);
+ spin_unlock(&hash->bh_lock);
+ return new_pb;
found:
- spin_unlock(&h->pb_hash_lock);
+ spin_unlock(&hash->bh_lock);
/* Attempt to get the semaphore without sleeping,
* if this does not work then we need to drop the
* spinlock and do a hard attempt on the semaphore.
*/
- not_locked = down_trylock(&pb->pb_sema);
- if (not_locked) {
+ if (down_trylock(&pb->pb_sema)) {
if (!(flags & PBF_TRYLOCK)) {
/* wait for buffer ownership */
PB_TRACE(pb, "get_lock", 0);
bdi = target->pbr_mapping->backing_dev_info;
if (bdi_read_congested(bdi))
return;
- if (bdi_write_congested(bdi))
- return;
flags |= (PBF_TRYLOCK|PBF_ASYNC|PBF_READ_AHEAD);
xfs_buf_read_flags(target, ioff, isize, flags);
pagebuf_rele(
xfs_buf_t *pb)
{
- pb_hash_t *hash = pb_hash(pb);
+ xfs_bufhash_t *hash = pb->pb_hash;
PB_TRACE(pb, "rele", pb->pb_relse);
- if (atomic_dec_and_lock(&pb->pb_hold, &hash->pb_hash_lock)) {
+ /*
+ * pagebuf_lookup buffers are not hashed, not delayed write,
+ * and don't have their own release routines. Special case.
+ */
+ if (unlikely(!hash)) {
+ ASSERT(!pb->pb_relse);
+ if (atomic_dec_and_test(&pb->pb_hold))
+ xfs_buf_free(pb);
+ return;
+ }
+
+ if (atomic_dec_and_lock(&pb->pb_hold, &hash->bh_lock)) {
int do_free = 1;
if (pb->pb_relse) {
atomic_inc(&pb->pb_hold);
- spin_unlock(&hash->pb_hash_lock);
+ spin_unlock(&hash->bh_lock);
(*(pb->pb_relse)) (pb);
- spin_lock(&hash->pb_hash_lock);
+ spin_lock(&hash->bh_lock);
do_free = 0;
}
if (do_free) {
list_del_init(&pb->pb_hash_list);
- spin_unlock(&hash->pb_hash_lock);
+ spin_unlock(&hash->bh_lock);
pagebuf_free(pb);
} else {
- spin_unlock(&hash->pb_hash_lock);
+ spin_unlock(&hash->bh_lock);
}
}
}
return(locked ? 0 : -EBUSY);
}
+#ifdef DEBUG
/*
* pagebuf_lock_value
*
{
return(atomic_read(&pb->pb_sema.count));
}
+#endif
/*
* pagebuf_lock
{
xfs_buf_t *pb = (xfs_buf_t *)bio->bi_private;
unsigned int i, blocksize = pb->pb_target->pbr_bsize;
- unsigned int sectorshift = pb->pb_target->pbr_sshift;
struct bio_vec *bvec = bio->bi_io_vec;
if (bio->bi_size)
SetPageUptodate(page);
} else if (!PagePrivate(page) &&
(pb->pb_flags & _PBF_PAGE_CACHE)) {
- unsigned long j, range;
-
- ASSERT(blocksize < PAGE_CACHE_SIZE);
- range = (bvec->bv_offset + bvec->bv_len) >> sectorshift;
- for (j = bvec->bv_offset >> sectorshift; j < range; j++)
- set_bit(j, &page->private);
- if (page->private == (unsigned long)(PAGE_CACHE_SIZE-1))
- SetPageUptodate(page);
+ set_page_region(page, bvec->bv_offset, bvec->bv_len);
}
if (_pagebuf_iolocked(pb)) {
*/
void
xfs_wait_buftarg(
- xfs_buftarg_t *target)
+ xfs_buftarg_t *btp)
{
- xfs_buf_t *pb, *n;
- pb_hash_t *h;
- int i;
+ xfs_buf_t *bp, *n;
+ xfs_bufhash_t *hash;
+ uint i;
- for (i = 0; i < NHASH; i++) {
- h = &pbhash[i];
+ for (i = 0; i < (1 << btp->bt_hashshift); i++) {
+ hash = &btp->bt_hash[i];
again:
- spin_lock(&h->pb_hash_lock);
- list_for_each_entry_safe(pb, n, &h->pb_hash, pb_hash_list) {
- if (pb->pb_target == target &&
- !(pb->pb_flags & PBF_FS_MANAGED)) {
- spin_unlock(&h->pb_hash_lock);
+ spin_lock(&hash->bh_lock);
+ list_for_each_entry_safe(bp, n, &hash->bh_list, pb_hash_list) {
+ ASSERT(btp == bp->pb_target);
+ if (!(bp->pb_flags & PBF_FS_MANAGED)) {
+ spin_unlock(&hash->bh_lock);
delay(100);
goto again;
}
}
- spin_unlock(&h->pb_hash_lock);
+ spin_unlock(&hash->bh_lock);
}
}
+/*
+ * Allocate buffer hash table for a given target.
+ * For devices containing metadata (i.e. not the log/realtime devices)
+ * we need to allocate a much larger hash table.
+ */
+STATIC void
+xfs_alloc_bufhash(
+ xfs_buftarg_t *btp,
+ int external)
+{
+ unsigned int i;
+
+ btp->bt_hashshift = external ? 3 : 8; /* 8 or 256 buckets */
+ btp->bt_hashmask = (1 << btp->bt_hashshift) - 1;
+ btp->bt_hash = kmem_zalloc((1 << btp->bt_hashshift) *
+ sizeof(xfs_bufhash_t), KM_SLEEP);
+ for (i = 0; i < (1 << btp->bt_hashshift); i++) {
+ spin_lock_init(&btp->bt_hash[i].bh_lock);
+ INIT_LIST_HEAD(&btp->bt_hash[i].bh_list);
+ }
+}
+
+STATIC void
+xfs_free_bufhash(
+ xfs_buftarg_t *btp)
+{
+ kmem_free(btp->bt_hash,
+ (1 << btp->bt_hashshift) * sizeof(xfs_bufhash_t));
+ btp->bt_hash = NULL;
+}
+
void
xfs_free_buftarg(
xfs_buftarg_t *btp,
xfs_flush_buftarg(btp, 1);
if (external)
xfs_blkdev_put(btp->pbr_bdev);
+ xfs_free_bufhash(btp);
iput(btp->pbr_mapping->host);
kmem_free(btp, sizeof(*btp));
}
truncate_inode_pages(btp->pbr_mapping, 0LL);
}
-int
-xfs_setsize_buftarg(
+STATIC int
+xfs_setsize_buftarg_flags(
xfs_buftarg_t *btp,
unsigned int blocksize,
- unsigned int sectorsize)
+ unsigned int sectorsize,
+ int verbose)
{
btp->pbr_bsize = blocksize;
btp->pbr_sshift = ffs(sectorsize) - 1;
sectorsize, XFS_BUFTARG_NAME(btp));
return EINVAL;
}
+
+ if (verbose &&
+ (PAGE_CACHE_SIZE / BITS_PER_LONG) > sectorsize) {
+ printk(KERN_WARNING
+ "XFS: %u byte sectors in use on device %s. "
+ "This is suboptimal; %u or greater is ideal.\n",
+ sectorsize, XFS_BUFTARG_NAME(btp),
+ (unsigned int)PAGE_CACHE_SIZE / BITS_PER_LONG);
+ }
+
return 0;
}
+/*
+ * When allocating the initial buffer target we have not yet read in
+ * the superblock, so we don't know what size sectors are being used
+ * at this early stage. Play it safe.
+ */
+STATIC int
+xfs_setsize_buftarg_early(
+ xfs_buftarg_t *btp,
+ struct block_device *bdev)
+{
+ return xfs_setsize_buftarg_flags(btp,
+ PAGE_CACHE_SIZE, bdev_hardsect_size(bdev), 0);
+}
+
+int
+xfs_setsize_buftarg(
+ xfs_buftarg_t *btp,
+ unsigned int blocksize,
+ unsigned int sectorsize)
+{
+ return xfs_setsize_buftarg_flags(btp, blocksize, sectorsize, 1);
+}
+
STATIC int
xfs_mapping_buftarg(
xfs_buftarg_t *btp,
mapping = &inode->i_data;
mapping->a_ops = &mapping_aops;
mapping->backing_dev_info = bdi;
- mapping_set_gfp_mask(mapping, GFP_KERNEL);
+ mapping_set_gfp_mask(mapping, GFP_NOFS);
btp->pbr_mapping = mapping;
return 0;
}
xfs_buftarg_t *
xfs_alloc_buftarg(
- struct block_device *bdev)
+ struct block_device *bdev,
+ int external)
{
xfs_buftarg_t *btp;
btp->pbr_dev = bdev->bd_dev;
btp->pbr_bdev = bdev;
- if (xfs_setsize_buftarg(btp, PAGE_CACHE_SIZE, bdev_hardsect_size(bdev)))
+ if (xfs_setsize_buftarg_early(btp, bdev))
goto error;
if (xfs_mapping_buftarg(btp, bdev))
goto error;
+ xfs_alloc_bufhash(btp, external);
return btp;
error:
*/
STATIC LIST_HEAD(pbd_delwrite_queue);
-STATIC spinlock_t pbd_delwrite_lock = SPIN_LOCK_UNLOCKED;
+STATIC DEFINE_SPINLOCK(pbd_delwrite_lock);
STATIC void
pagebuf_delwri_queue(
INIT_LIST_HEAD(&tmp);
do {
- /* swsusp */
- if (current->flags & PF_FREEZE)
- refrigerator(PF_FREEZE);
+ try_to_freeze(PF_FREEZE);
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout((xfs_buf_timer_centisecs * HZ) / 100);
int __init
pagebuf_init(void)
{
- int i;
-
pagebuf_cache = kmem_cache_create("xfs_buf_t", sizeof(xfs_buf_t), 0,
SLAB_HWCACHE_ALIGN, NULL, NULL);
if (pagebuf_cache == NULL) {
return -ENOMEM;
}
- for (i = 0; i < NHASH; i++) {
- spin_lock_init(&pbhash[i].pb_hash_lock);
- INIT_LIST_HEAD(&pbhash[i].pb_hash);
- }
-
return 0;
}
#define PBF_NOT_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) != 0)
#define PBF_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) == 0)
+typedef struct xfs_bufhash {
+ struct list_head bh_list;
+ spinlock_t bh_lock;
+} xfs_bufhash_t;
+
typedef struct xfs_buftarg {
dev_t pbr_dev;
struct block_device *pbr_bdev;
unsigned int pbr_bsize;
unsigned int pbr_sshift;
size_t pbr_smask;
+
+ /* per-device buffer hash table */
+ uint bt_hashmask;
+ uint bt_hashshift;
+ xfs_bufhash_t *bt_hash;
} xfs_buftarg_t;
/*
* xfs_buf_t: Buffer structure for page cache-based buffers
*
* This buffer structure is used by the page cache buffer management routines
- * to refer to an assembly of pages forming a logical buffer. The actual
- * I/O is performed with buffer_head or bio structures, as required by drivers,
- * for drivers which do not understand this structure. The buffer structure is
- * used on temporary basis only, and discarded when released.
- *
- * The real data storage is recorded in the page cache. Metadata is
- * hashed to the inode for the block device on which the file system resides.
- * File data is hashed to the inode for the file. Pages which are only
- * partially filled with data have bits set in their block_map entry
- * to indicate which disk blocks in the page are not valid.
+ * to refer to an assembly of pages forming a logical buffer. The actual I/O
+ * is performed with buffer_head structures, as required by drivers.
+ *
+ * The buffer structure is used on a temporary basis only, and discarded when
+ * released. The real data storage is recorded in the page cache. Metadata is
+ * hashed to the block device on which the file system resides.
*/
struct xfs_buf;
+
+/* call-back function on I/O completion */
typedef void (*page_buf_iodone_t)(struct xfs_buf *);
- /* call-back function on I/O completion */
+/* call-back function on I/O completion */
typedef void (*page_buf_relse_t)(struct xfs_buf *);
- /* call-back function on I/O completion */
+/* pre-write function */
typedef int (*page_buf_bdstrat_t)(struct xfs_buf *);
-#define PB_PAGES 4
+#define PB_PAGES 2
typedef struct xfs_buf {
struct semaphore pb_sema; /* semaphore for lockables */
wait_queue_head_t pb_waiters; /* unpin waiters */
struct list_head pb_list;
page_buf_flags_t pb_flags; /* status flags */
- struct list_head pb_hash_list;
- xfs_buftarg_t *pb_target; /* logical object */
+ struct list_head pb_hash_list; /* hash table list */
+ xfs_bufhash_t *pb_hash; /* hash table list start */
+ xfs_buftarg_t *pb_target; /* buffer target (device) */
atomic_t pb_hold; /* reference count */
xfs_daddr_t pb_bn; /* block number for I/O */
loff_t pb_file_offset; /* offset in file */
void *pb_fspriv2;
void *pb_fspriv3;
unsigned short pb_error; /* error code on I/O */
- unsigned short pb_page_count; /* size of page array */
- unsigned short pb_offset; /* page offset in first page */
- unsigned char pb_locked; /* page array is locked */
- unsigned char pb_hash_index; /* hash table index */
+ unsigned short pb_locked; /* page array is locked */
+ unsigned int pb_page_count; /* size of page array */
+ unsigned int pb_offset; /* page offset in first page */
struct page **pb_pages; /* array of page pointers */
struct page *pb_page_array[PB_PAGES]; /* inline pages */
#ifdef PAGEBUF_LOCK_TRACKING
pagebuf_associate_memory(bp, val, count)
#define XFS_BUF_ADDR(bp) ((bp)->pb_bn)
#define XFS_BUF_SET_ADDR(bp, blk) \
- ((bp)->pb_bn = (blk))
+ ((bp)->pb_bn = (xfs_daddr_t)(blk))
#define XFS_BUF_OFFSET(bp) ((bp)->pb_file_offset)
#define XFS_BUF_SET_OFFSET(bp, off) \
((bp)->pb_file_offset = (off))
* Handling of buftargs.
*/
-extern xfs_buftarg_t *xfs_alloc_buftarg(struct block_device *);
+extern xfs_buftarg_t *xfs_alloc_buftarg(struct block_device *, int);
extern void xfs_free_buftarg(xfs_buftarg_t *, int);
extern void xfs_wait_buftarg(xfs_buftarg_t *);
extern int xfs_setsize_buftarg(xfs_buftarg_t *, unsigned int, unsigned int);
--- /dev/null
+/*
+ * Copyright (c) 2004-2005 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+
+STATIC struct dentry *
+linvfs_decode_fh(
+ struct super_block *sb,
+ __u32 *fh,
+ int fh_len,
+ int fileid_type,
+ int (*acceptable)(
+ void *context,
+ struct dentry *de),
+ void *context)
+{
+ __u32 parent[2];
+ parent[0] = parent[1] = 0;
+
+ if (fh_len < 2 || fileid_type > 2)
+ return NULL;
+
+ if (fileid_type == 2 && fh_len > 2) {
+ if (fh_len == 3) {
+ printk(KERN_WARNING
+ "XFS: detected filehandle without "
+ "parent inode generation information.");
+ return ERR_PTR(-ESTALE);
+ }
+
+ parent[0] = fh[2];
+ parent[1] = fh[3];
+ }
+
+ return find_exported_dentry(sb, fh, parent, acceptable, context);
+
+}
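
As a worked example of the handle layout consumed above: a full type-2 handle is four 32-bit words {inode, generation, parent inode, parent generation}; words 0-1 identify the object itself (linvfs_get_dentry() below reads them as fid_ino and fid_gen), and words 2-3 give find_exported_dentry() the parent directory. A three-word type-2 handle, i.e. a parent inode with no generation, is deliberately rejected with -ESTALE rather than risking a lookup against a recycled parent.
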
+
+STATIC struct dentry *
+linvfs_get_dentry(
+ struct super_block *sb,
+ void *data)
+{
+ vnode_t *vp;
+ struct inode *inode;
+ struct dentry *result;
+ xfs_fid2_t xfid;
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ xfid.fid_len = sizeof(xfs_fid2_t) - sizeof(xfid.fid_len);
+ xfid.fid_pad = 0;
+ xfid.fid_gen = ((__u32 *)data)[1];
+ xfid.fid_ino = ((__u32 *)data)[0];
+
+ VFS_VGET(vfsp, &vp, (fid_t *)&xfid, error);
+ if (error || vp == NULL)
+ return ERR_PTR(-ESTALE);
+
+ inode = LINVFS_GET_IP(vp);
+ result = d_alloc_anon(inode);
+ if (!result) {
+ iput(inode);
+ return ERR_PTR(-ENOMEM);
+ }
+ return result;
+}
+
+STATIC struct dentry *
+linvfs_get_parent(
+ struct dentry *child)
+{
+ int error;
+ vnode_t *vp, *cvp;
+ struct dentry *parent;
+ struct dentry dotdot;
+
+ dotdot.d_name.name = "..";
+ dotdot.d_name.len = 2;
+ dotdot.d_inode = NULL;
+
+ cvp = NULL;
+ vp = LINVFS_GET_VP(child->d_inode);
+ VOP_LOOKUP(vp, &dotdot, &cvp, 0, NULL, NULL, error);
+ if (unlikely(error))
+ return ERR_PTR(-error);
+
+ parent = d_alloc_anon(LINVFS_GET_IP(cvp));
+ if (unlikely(!parent)) {
+ VN_RELE(cvp);
+ return ERR_PTR(-ENOMEM);
+ }
+ return parent;
+}
+
+struct export_operations linvfs_export_ops = {
+ .decode_fh = linvfs_decode_fh,
+ .get_parent = linvfs_get_parent,
+ .get_dentry = linvfs_get_dentry,
+};
#include "xfs_inode.h"
#include "xfs_error.h"
#include "xfs_rw.h"
+#include "xfs_ioctl32.h"
#include <linux/dcache.h>
#include <linux/smp_lock.h>
STATIC ssize_t
-linvfs_read(
+linvfs_aio_read(
struct kiocb *iocb,
char __user *buf,
size_t count,
loff_t pos)
{
- return __linvfs_read(iocb, buf, 0, count, pos);
+ return __linvfs_read(iocb, buf, IO_ISAIO, count, pos);
}
STATIC ssize_t
-linvfs_read_invis(
+linvfs_aio_read_invis(
struct kiocb *iocb,
char __user *buf,
size_t count,
loff_t pos)
{
- return __linvfs_read(iocb, buf, IO_INVIS, count, pos);
+ return __linvfs_read(iocb, buf, IO_ISAIO|IO_INVIS, count, pos);
}
STATIC ssize_t
-linvfs_write(
+linvfs_aio_write(
struct kiocb *iocb,
const char __user *buf,
size_t count,
loff_t pos)
{
- return __linvfs_write(iocb, buf, 0, count, pos);
+ return __linvfs_write(iocb, buf, IO_ISAIO, count, pos);
}
STATIC ssize_t
-linvfs_write_invis(
+linvfs_aio_write_invis(
struct kiocb *iocb,
const char __user *buf,
size_t count,
loff_t pos)
{
- return __linvfs_write(iocb, buf, IO_INVIS, count, pos);
+ return __linvfs_write(iocb, buf, IO_ISAIO|IO_INVIS, count, pos);
}
}
-STATIC int
+STATIC long
linvfs_ioctl(
- struct inode *inode,
struct file *filp,
unsigned int cmd,
unsigned long arg)
{
int error;
+ struct inode *inode = filp->f_dentry->d_inode;
vnode_t *vp = LINVFS_GET_VP(inode);
- unlock_kernel();
VOP_IOCTL(vp, inode, filp, 0, cmd, (void __user *)arg, error);
VMODIFY(vp);
- lock_kernel();
/* NOTE: some of the ioctl's return positive #'s as a
* byte count indicating success, such as
return error;
}
-STATIC int
+STATIC long
linvfs_ioctl_invis(
- struct inode *inode,
struct file *filp,
unsigned int cmd,
unsigned long arg)
{
int error;
+ struct inode *inode = filp->f_dentry->d_inode;
vnode_t *vp = LINVFS_GET_VP(inode);
- unlock_kernel();
ASSERT(vp);
VOP_IOCTL(vp, inode, filp, IO_INVIS, cmd, (void __user *)arg, error);
VMODIFY(vp);
- lock_kernel();
/* NOTE: some of the ioctl's return positive #'s as a
* byte count indicating success, such as
.write = do_sync_write,
.readv = linvfs_readv,
.writev = linvfs_writev,
- .aio_read = linvfs_read,
- .aio_write = linvfs_write,
+ .aio_read = linvfs_aio_read,
+ .aio_write = linvfs_aio_write,
.sendfile = linvfs_sendfile,
- .ioctl = linvfs_ioctl,
+ .unlocked_ioctl = linvfs_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = xfs_compat_ioctl,
+#endif
.mmap = linvfs_file_mmap,
.open = linvfs_open,
.release = linvfs_release,
.write = do_sync_write,
.readv = linvfs_readv_invis,
.writev = linvfs_writev_invis,
- .aio_read = linvfs_read_invis,
- .aio_write = linvfs_write_invis,
+ .aio_read = linvfs_aio_read_invis,
+ .aio_write = linvfs_aio_write_invis,
.sendfile = linvfs_sendfile,
- .ioctl = linvfs_ioctl_invis,
+ .unlocked_ioctl = linvfs_ioctl_invis,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = xfs_compat_invis_ioctl,
+#endif
.mmap = linvfs_file_mmap,
.open = linvfs_open,
.release = linvfs_release,
struct file_operations linvfs_dir_operations = {
.read = generic_read_dir,
.readdir = linvfs_readdir,
- .ioctl = linvfs_ioctl,
+ .unlocked_ioctl = linvfs_ioctl,
.fsync = linvfs_fsync,
};
static struct vm_operations_struct linvfs_file_vm_ops = {
.nopage = filemap_nopage,
+ .populate = filemap_populate,
#ifdef HAVE_VMOP_MPROTECT
.mprotect = linvfs_mprotect,
#endif
#include <linux/ioctl32.h>
#include <linux/syscalls.h>
#include <linux/types.h>
+#include <linux/fs.h>
#include <asm/uaccess.h>
+#include "xfs.h"
#include "xfs_types.h"
#include "xfs_fs.h"
+#include "xfs_vfs.h"
+#include "xfs_vnode.h"
#include "xfs_dfrag.h"
#if defined(CONFIG_IA64) || defined(CONFIG_X86_64)
__s32 ocount; /* output count pointer */
} xfs_fsop_bulkreq32_t;
-static int
-xfs_ioctl32_bulkstat(
- unsigned int fd,
- unsigned int cmd,
- unsigned long arg,
- struct file * file)
+static unsigned long
+xfs_ioctl32_bulkstat(unsigned long arg)
{
xfs_fsop_bulkreq32_t __user *p32 = (void __user *)arg;
xfs_fsop_bulkreq_t __user *p = compat_alloc_user_space(sizeof(*p));
put_user(compat_ptr(addr), &p->ocount))
return -EFAULT;
- return sys_ioctl(fd, cmd, (unsigned long)p);
+ return (unsigned long)p;
}
#endif
-struct ioctl_trans xfs_ioctl32_trans[] = {
- { XFS_IOC_DIOINFO, },
- { XFS_IOC_FSGEOMETRY_V1, },
- { XFS_IOC_FSGEOMETRY, },
- { XFS_IOC_GETVERSION, },
- { XFS_IOC_GETXFLAGS, },
- { XFS_IOC_SETXFLAGS, },
- { XFS_IOC_FSGETXATTR, },
- { XFS_IOC_FSSETXATTR, },
- { XFS_IOC_FSGETXATTRA, },
- { XFS_IOC_FSSETDM, },
- { XFS_IOC_GETBMAP, },
- { XFS_IOC_GETBMAPA, },
- { XFS_IOC_GETBMAPX, },
+static long
+__xfs_compat_ioctl(int mode, struct file *f, unsigned cmd, unsigned long arg)
+{
+ int error;
+ struct inode *inode = f->f_dentry->d_inode;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ switch (cmd) {
+ case XFS_IOC_DIOINFO:
+ case XFS_IOC_FSGEOMETRY_V1:
+ case XFS_IOC_FSGEOMETRY:
+ case XFS_IOC_GETVERSION:
+ case XFS_IOC_GETXFLAGS:
+ case XFS_IOC_SETXFLAGS:
+ case XFS_IOC_FSGETXATTR:
+ case XFS_IOC_FSSETXATTR:
+ case XFS_IOC_FSGETXATTRA:
+ case XFS_IOC_FSSETDM:
+ case XFS_IOC_GETBMAP:
+ case XFS_IOC_GETBMAPA:
+ case XFS_IOC_GETBMAPX:
/* not handled
- { XFS_IOC_FD_TO_HANDLE, },
- { XFS_IOC_PATH_TO_HANDLE, },
- { XFS_IOC_PATH_TO_HANDLE, },
- { XFS_IOC_PATH_TO_FSHANDLE, },
- { XFS_IOC_OPEN_BY_HANDLE, },
- { XFS_IOC_FSSETDM_BY_HANDLE, },
- { XFS_IOC_READLINK_BY_HANDLE, },
- { XFS_IOC_ATTRLIST_BY_HANDLE, },
- { XFS_IOC_ATTRMULTI_BY_HANDLE, },
+ case XFS_IOC_FD_TO_HANDLE:
+ case XFS_IOC_PATH_TO_HANDLE:
+ case XFS_IOC_PATH_TO_HANDLE:
+ case XFS_IOC_PATH_TO_FSHANDLE:
+ case XFS_IOC_OPEN_BY_HANDLE:
+ case XFS_IOC_FSSETDM_BY_HANDLE:
+ case XFS_IOC_READLINK_BY_HANDLE:
+ case XFS_IOC_ATTRLIST_BY_HANDLE:
+ case XFS_IOC_ATTRMULTI_BY_HANDLE:
*/
- { XFS_IOC_FSCOUNTS, NULL, },
- { XFS_IOC_SET_RESBLKS, NULL, },
- { XFS_IOC_GET_RESBLKS, NULL, },
- { XFS_IOC_FSGROWFSDATA, NULL, },
- { XFS_IOC_FSGROWFSLOG, NULL, },
- { XFS_IOC_FSGROWFSRT, NULL, },
- { XFS_IOC_FREEZE, NULL, },
- { XFS_IOC_THAW, NULL, },
- { XFS_IOC_GOINGDOWN, NULL, },
- { XFS_IOC_ERROR_INJECTION, NULL, },
- { XFS_IOC_ERROR_CLEARALL, NULL, },
+ case XFS_IOC_FSCOUNTS:
+ case XFS_IOC_SET_RESBLKS:
+ case XFS_IOC_GET_RESBLKS:
+ case XFS_IOC_FSGROWFSDATA:
+ case XFS_IOC_FSGROWFSLOG:
+ case XFS_IOC_FSGROWFSRT:
+ case XFS_IOC_FREEZE:
+ case XFS_IOC_THAW:
+ case XFS_IOC_GOINGDOWN:
+ case XFS_IOC_ERROR_INJECTION:
+ case XFS_IOC_ERROR_CLEARALL:
+ break;
+
#ifndef BROKEN_X86_ALIGNMENT
/* xfs_flock_t and xfs_bstat_t have wrong u32 vs u64 alignment */
- { XFS_IOC_ALLOCSP, },
- { XFS_IOC_FREESP, },
- { XFS_IOC_RESVSP, },
- { XFS_IOC_UNRESVSP, },
- { XFS_IOC_ALLOCSP64, },
- { XFS_IOC_FREESP64, },
- { XFS_IOC_RESVSP64, },
- { XFS_IOC_UNRESVSP64, },
- { XFS_IOC_SWAPEXT, },
- { XFS_IOC_FSBULKSTAT_SINGLE, xfs_ioctl32_bulkstat },
- { XFS_IOC_FSBULKSTAT, xfs_ioctl32_bulkstat},
- { XFS_IOC_FSINUMBERS, xfs_ioctl32_bulkstat},
-#endif
- { 0, },
-};
+ case XFS_IOC_ALLOCSP:
+ case XFS_IOC_FREESP:
+ case XFS_IOC_RESVSP:
+ case XFS_IOC_UNRESVSP:
+ case XFS_IOC_ALLOCSP64:
+ case XFS_IOC_FREESP64:
+ case XFS_IOC_RESVSP64:
+ case XFS_IOC_UNRESVSP64:
+ case XFS_IOC_SWAPEXT:
+ break;
-int __init
-xfs_ioctl32_init(void)
-{
- int error, i;
-
- for (i = 0; xfs_ioctl32_trans[i].cmd != 0; i++) {
- error = register_ioctl32_conversion(xfs_ioctl32_trans[i].cmd,
- xfs_ioctl32_trans[i].handler);
- if (error)
- goto fail;
+ case XFS_IOC_FSBULKSTAT_SINGLE:
+ case XFS_IOC_FSBULKSTAT:
+ case XFS_IOC_FSINUMBERS:
+ arg = xfs_ioctl32_bulkstat(arg);
+ break;
+#endif
+ default:
+ return -ENOIOCTLCMD;
}
- return 0;
+ VOP_IOCTL(vp, inode, f, mode, cmd, (void __user *)arg, error);
+ VMODIFY(vp);
- fail:
- while (--i)
- unregister_ioctl32_conversion(xfs_ioctl32_trans[i].cmd);
return error;
}
-void
-xfs_ioctl32_exit(void)
+long xfs_compat_ioctl(struct file *f, unsigned cmd, unsigned long arg)
{
- int i;
+ return __xfs_compat_ioctl(0, f, cmd, arg);
+}
- for (i = 0; xfs_ioctl32_trans[i].cmd != 0; i++)
- unregister_ioctl32_conversion(xfs_ioctl32_trans[i].cmd);
+long xfs_compat_invis_ioctl(struct file *f, unsigned cmd, unsigned long arg)
+{
+ return __xfs_compat_ioctl(IO_INVIS, f, cmd, arg);
}
* http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
*/
-#include <linux/config.h>
-
-#ifdef CONFIG_COMPAT
-extern int xfs_ioctl32_init(void);
-extern void xfs_ioctl32_exit(void);
-#else
-static inline int xfs_ioctl32_init(void) { return 0; }
-static inline void xfs_ioctl32_exit(void) { }
-#endif
+long xfs_compat_ioctl(struct file *f, unsigned cmd, unsigned long arg);
+long xfs_compat_invis_ioctl(struct file *f, unsigned cmd, unsigned long arg);
#define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val
#define xfs_rotorstep xfs_params.rotorstep.val
-#define current_cpu() smp_processor_id()
+#ifndef __smp_processor_id
+#define __smp_processor_id() smp_processor_id()
+#endif
+#define current_cpu() __smp_processor_id()
#define current_pid() (current->pid)
#define current_fsuid(cred) (current->fsuid)
#define current_fsgid(cred) (current->fsgid)
struct block_device **);
extern void xfs_blkdev_put(struct block_device *);
+extern struct export_operations linvfs_export_ops;
+
#endif /* __XFS_SUPER_H__ */
vfsp = kmem_zalloc(sizeof(vfs_t), KM_SLEEP);
bhv_head_init(VFS_BHVHEAD(vfsp), "vfs");
INIT_LIST_HEAD(&vfsp->vfs_sync_list);
- vfsp->vfs_sync_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&vfsp->vfs_sync_lock);
init_waitqueue_head(&vfsp->vfs_wait_sync_task);
init_waitqueue_head(&vfsp->vfs_wait_single_sync_task);
return vfsp;
/*
* Flags for read/write calls - same values as IRIX
*/
+#define IO_ISAIO 0x00001 /* don't wait for completion */
#define IO_ISDIRECT 0x00004 /* bypass page cache */
#define IO_INVIS 0x00020 /* don't update inode timestamps */
int doass = 1;
static char message[256]; /* keep it off the stack */
-static spinlock_t xfs_err_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(xfs_err_lock);
/* Translate from CE_FOO to KERN_FOO, err_level(CE_FOO) == KERN_FOO */
#define XFS_MAX_ERR_LEVEL 7
#include "xfs_inode.h"
#include "xfs_quota.h"
#include "xfs_utils.h"
+#include "xfs_bit.h"
/*
* Initialize the inode hash table for the newly mounted file system.
- *
- * mp -- this is the mount point structure for the file system being
- * initialized
+ * Choose an initial table size based on user specified value, else
+ * use a simple algorithm using the maximum number of inodes as an
+ * indicator for table size, and cap it at 16 pages (gettin' big).
*/
void
xfs_ihash_init(xfs_mount_t *mp)
{
- int i;
+ __uint64_t icount;
+ uint i, flags = KM_SLEEP | KM_MAYFAIL;
+
+ if (!mp->m_ihsize) {
+ icount = mp->m_maxicount ? mp->m_maxicount :
+ (mp->m_sb.sb_dblocks << mp->m_sb.sb_inopblog);
+ mp->m_ihsize = 1 << max_t(uint, xfs_highbit64(icount) / 3, 8);
+ mp->m_ihsize = min_t(uint, mp->m_ihsize, 16 * PAGE_SIZE);
+ }
- mp->m_ihsize = XFS_BUCKETS(mp);
- mp->m_ihash = (xfs_ihash_t *)kmem_zalloc(mp->m_ihsize
- * sizeof(xfs_ihash_t), KM_SLEEP);
- ASSERT(mp->m_ihash != NULL);
+ while (!(mp->m_ihash = (xfs_ihash_t *)kmem_zalloc(mp->m_ihsize *
+ sizeof(xfs_ihash_t), flags))) {
+ if ((mp->m_ihsize >>= 1) <= NBPP)
+ flags = KM_SLEEP;
+ }
for (i = 0; i < mp->m_ihsize; i++) {
rwlock_init(&(mp->m_ihash[i].ih_lock));
}
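
As a worked example of the sizing above (illustration only): a filesystem whose maximum inode count is roughly 2^27 gives xfs_highbit64(icount) = 27, 27/3 = 9, so m_ihsize = 1 << max(9, 8) = 512 buckets; small filesystems bottom out at 1 << 8 = 256, and the result is clamped to 16 * PAGE_SIZE entries (65536 with 4 KiB pages). If the zeroed allocation fails, the retry loop halves the table size, and once it drops to NBPP or below the flags revert to plain KM_SLEEP so the allocation cannot fail.
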
/*
* Initialize the inode cluster hash table for the newly mounted file system.
- *
- * mp -- this is the mount point structure for the file system being
- * initialized
+ * Its size is derived from the ihash table size.
*/
void
xfs_chash_init(xfs_mount_t *mp)
{
- int i;
+ uint i;
- /*
- * m_chash size is based on m_ihash
- * with a minimum of 37 entries
- */
- mp->m_chsize = (XFS_BUCKETS(mp)) /
- (XFS_INODE_CLUSTER_SIZE(mp) >> mp->m_sb.sb_inodelog);
- if (mp->m_chsize < 37) {
- mp->m_chsize = 37;
- }
+ mp->m_chsize = max_t(uint, 1, mp->m_ihsize /
+ (XFS_INODE_CLUSTER_SIZE(mp) >> mp->m_sb.sb_inodelog));
+ mp->m_chsize = min_t(uint, mp->m_chsize, mp->m_ihsize);
mp->m_chash = (xfs_chash_t *)kmem_zalloc(mp->m_chsize
* sizeof(xfs_chash_t),
KM_SLEEP);
- ASSERT(mp->m_chash != NULL);
-
for (i = 0; i < mp->m_chsize; i++) {
spinlock_init(&mp->m_chash[i].ch_lock,"xfshash");
}
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/* Version string */
-#define ACPI_CA_VERSION 0x20041105
+#define ACPI_CA_VERSION 0x20050211
/*
* OS name, used for the _OS object. The _OS object is essentially obsolete,
/* Version of ACPI supported */
-#define ACPI_CA_SUPPORT_LEVEL 2
+#define ACPI_CA_SUPPORT_LEVEL 3
/* String size constants */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
u32 length,
u32 level);
+void
+acpi_dm_extended_descriptor (
+ struct asl_extended_address_desc *resource,
+ u32 length,
+ u32 level);
+
void
acpi_dm_qword_descriptor (
struct asl_qword_address_desc *resource,
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_ds_get_current_walk_state (
struct acpi_thread_state *thread);
+#ifdef ACPI_ENABLE_OBJECT_CACHE
void
acpi_ds_delete_walk_state_cache (
void);
+#endif
#ifdef ACPI_FUTURE_USAGE
acpi_status
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* 1) Allow "implicit return" of last value in a control method
* 2) Allow access beyond end of operation region
* 3) Allow access to uninitialized locals/args (auto-init to integer 0)
+ * 4) Allow ANY object type to be a source operand for the Store() operator
*/
ACPI_EXTERN u8 ACPI_INIT_GLOBAL (acpi_gbl_enable_interpreter_slack, FALSE);
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
u8
acpi_ex_do_match (
u32 match_op,
- acpi_integer package_value,
- acpi_integer match_value);
+ union acpi_operand_object *package_obj,
+ union acpi_operand_object *match_obj);
acpi_status
acpi_ex_get_object_reference (
acpi_status
acpi_ex_store_buffer_to_buffer (
+ acpi_object_type original_src_type,
union acpi_operand_object *source_desc,
union acpi_operand_object *target_desc);
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/*
* Large resource descriptor types
*/
-
#define ACPI_RDESC_TYPE_MEMORY_24 0x81
#define ACPI_RDESC_TYPE_GENERAL_REGISTER 0x82
#define ACPI_RDESC_TYPE_LARGE_VENDOR 0x84
#define ACPI_RDESC_TYPE_WORD_ADDRESS_SPACE 0x88
#define ACPI_RDESC_TYPE_EXTENDED_XRUPT 0x89
#define ACPI_RDESC_TYPE_QWORD_ADDRESS_SPACE 0x8A
+#define ACPI_RDESC_TYPE_EXTENDED_ADDRESS_SPACE 0x8B
/*****************************************************************************
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ACPI_SET_BIT(target,bit) ((target) |= (bit))
#define ACPI_CLEAR_BIT(target,bit) ((target) &= ~(bit))
+#define ACPI_MIN(a,b) (((a)<(b))?(a):(b))
#if ACPI_MACHINE_WIDTH == 16
* The first parameter should be the procedure name as a quoted string. This is declared
* as a local string ("_proc_name) so that it can be also used by the function exit macros below.
*/
-#define ACPI_FUNCTION_NAME(a) struct acpi_debug_print_info _dbg; \
- _dbg.component_id = _COMPONENT; \
- _dbg.proc_name = a; \
- _dbg.module_name = _THIS_MODULE;
+#define ACPI_FUNCTION_NAME(a) struct acpi_debug_print_info _debug_info; \
+ _debug_info.component_id = _COMPONENT; \
+ _debug_info.proc_name = a; \
+ _debug_info.module_name = _THIS_MODULE;
#define ACPI_FUNCTION_TRACE(a) ACPI_FUNCTION_NAME(a) \
- acpi_ut_trace(__LINE__,&_dbg)
+ acpi_ut_trace(__LINE__,&_debug_info)
#define ACPI_FUNCTION_TRACE_PTR(a,b) ACPI_FUNCTION_NAME(a) \
- acpi_ut_trace_ptr(__LINE__,&_dbg,(void *)b)
+ acpi_ut_trace_ptr(__LINE__,&_debug_info,(void *)b)
#define ACPI_FUNCTION_TRACE_U32(a,b) ACPI_FUNCTION_NAME(a) \
- acpi_ut_trace_u32(__LINE__,&_dbg,(u32)b)
+ acpi_ut_trace_u32(__LINE__,&_debug_info,(u32)b)
#define ACPI_FUNCTION_TRACE_STR(a,b) ACPI_FUNCTION_NAME(a) \
- acpi_ut_trace_str(__LINE__,&_dbg,(char *)b)
+ acpi_ut_trace_str(__LINE__,&_debug_info,(char *)b)
#define ACPI_FUNCTION_ENTRY() acpi_ut_track_stack_ptr()
#define ACPI_DO_WHILE0(a) a
#endif
-#define return_VOID ACPI_DO_WHILE0 ({acpi_ut_exit(__LINE__,&_dbg);return;})
-#define return_ACPI_STATUS(s) ACPI_DO_WHILE0 ({acpi_ut_status_exit(__LINE__,&_dbg,(s));return((s));})
-#define return_VALUE(s) ACPI_DO_WHILE0 ({acpi_ut_value_exit(__LINE__,&_dbg,(acpi_integer)(s));return((s));})
-#define return_PTR(s) ACPI_DO_WHILE0 ({acpi_ut_ptr_exit(__LINE__,&_dbg,(u8 *)(s));return((s));})
+#define return_VOID ACPI_DO_WHILE0 ({acpi_ut_exit(__LINE__,&_debug_info);return;})
+#define return_ACPI_STATUS(s) ACPI_DO_WHILE0 ({acpi_ut_status_exit(__LINE__,&_debug_info,(s));return((s));})
+#define return_VALUE(s) ACPI_DO_WHILE0 ({acpi_ut_value_exit(__LINE__,&_debug_info,(acpi_integer)(s));return((s));})
+#define return_PTR(s) ACPI_DO_WHILE0 ({acpi_ut_ptr_exit(__LINE__,&_debug_info,(u8 *)(s));return((s));})
/* Conditional execution */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
u32 bit_length; /* Length of field in bits */\
u32 base_byte_offset; /* Byte offset within containing object */\
u8 start_field_bit_offset;/* Bit offset within first field datum (0-63) */\
- u8 datum_valid_bits; /* Valid bit in first "Field datum" */\
- u8 end_field_valid_bits; /* Valid bits in the last "field datum" */\
- u8 end_buffer_valid_bits; /* Valid bits in the last "buffer datum" */\
+ u8 access_bit_width; /* Read/Write size in bits (8-64) */\
u32 value; /* Value to store into the Bank or Index register */\
struct acpi_namespace_node *node; /* Link back to parent node */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
/*
* Debug level macros that are used in the DEBUG_PRINT macros
*/
-#define ACPI_DEBUG_LEVEL(dl) (u32) dl,__LINE__,&_dbg
+#define ACPI_DEBUG_LEVEL(dl) (u32) dl,__LINE__,&_debug_info
/* Exception level -- used in the global "debug_level" */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_ps_free_op (
union acpi_parse_object *op);
+#ifdef ACPI_ENABLE_OBJECT_CACHE
void
acpi_ps_delete_parse_cache (
void);
+#endif
u8
acpi_ps_is_leading_char (
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
typedef int (*acpi_op_resume) (struct acpi_device *device, int state);
typedef int (*acpi_op_scan) (struct acpi_device *device);
typedef int (*acpi_op_bind) (struct acpi_device *device);
+typedef int (*acpi_op_unbind) (struct acpi_device *device);
typedef int (*acpi_op_match) (struct acpi_device *device,
struct acpi_driver *driver);
acpi_op_resume resume;
acpi_op_scan scan;
acpi_op_bind bind;
+ acpi_op_unbind unbind;
acpi_op_match match;
};
* External Functions
*/
-int acpi_bus_get_device(acpi_handle, struct acpi_device **device);
+int acpi_bus_get_device(acpi_handle handle, struct acpi_device **device);
+void acpi_bus_data_handler(acpi_handle handle, u32 function, void *context);
int acpi_bus_get_status (struct acpi_device *device);
int acpi_bus_get_power (acpi_handle handle, int *state);
int acpi_bus_set_power (acpi_handle handle, int state);
int acpi_bus_receive_event (struct acpi_bus_event *event);
int acpi_bus_register_driver (struct acpi_driver *driver);
int acpi_bus_unregister_driver (struct acpi_driver *driver);
+int acpi_bus_scan (struct acpi_device *start);
+int acpi_bus_trim(struct acpi_device *start, int rmdevice);
+int acpi_bus_add (struct acpi_device **child, struct acpi_device *parent,
+ acpi_handle handle, int type);
+
int acpi_match_ids (struct acpi_device *device, char *ids);
int acpi_create_dir(struct acpi_device *);
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
u32 event,
u32 flags);
-#ifdef ACPI_FUTURE_USAGE
acpi_status
acpi_clear_event (
u32 event);
+#ifdef ACPI_FUTURE_USAGE
acpi_status
acpi_get_event_status (
u32 event,
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
acpi_ut_delete_generic_state (
union acpi_generic_state *state);
+#ifdef ACPI_ENABLE_OBJECT_CACHE
void
acpi_ut_delete_generic_state_cache (
void);
void
acpi_ut_delete_object_cache (
void);
+#endif
/*
* utmisc
u32 list_id,
void *object);
+#ifdef ACPI_ENABLE_OBJECT_CACHE
void
acpi_ut_delete_generic_cache (
u32 list_id);
+#endif
acpi_status
acpi_ut_validate_buffer (
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ARGI_COMPLEXOBJ 0x13 /* Buffer, String, or package (Used by INDEX op only) */
#define ARGI_REF_OR_STRING 0x14 /* Reference or String (Used by DEREFOF op only) */
#define ARGI_REGION_OR_FIELD 0x15 /* Used by LOAD op only */
+#define ARGI_DATAREFOBJ 0x16
/* Note: types above can expand to 0x1F maximum */
#define AML_HAS_TARGET 0x0800
#define AML_HAS_ARGS 0x1000
#define AML_CONSTANT 0x2000
+#define AML_NO_OPERAND_RESOLVE 0x4000
/* Convenient flag groupings */
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ASL_RESNAME_ADDRESS "_ADR"
#define ASL_RESNAME_ALIGNMENT "_ALN"
#define ASL_RESNAME_ADDRESSSPACE "_ASI"
+#define ASL_RESNAME_ACCESSSIZE "_ASZ"
+#define ASL_RESNAME_TYPESPECIFICATTRIBUTES "_ATT"
#define ASL_RESNAME_BASEADDRESS "_BAS"
#define ASL_RESNAME_BUSMASTER "_BM_" /* Master(1), Slave(0) */
#define ASL_RESNAME_DECODE "_DEC"
};
+struct asl_extended_address_desc
+{
+ u8 descriptor_type;
+ u16 length;
+ u8 resource_type;
+ u8 flags;
+ u8 specific_flags;
+ u8 revision_iD;
+ u8 reserved;
+ u64 granularity;
+ u64 address_min;
+ u64 address_max;
+ u64 translation_offset;
+ u64 address_length;
+ u64 type_specific_attributes;
+ u8 optional_fields[2]; /* Used for length calculation only */
+};
+
+#define ASL_EXTENDED_ADDRESS_DESC_REVISION 1 /* ACPI 3.0 */
+
+
struct asl_qword_address_desc
{
u8 descriptor_type;
u8 address_space_id;
u8 bit_width;
u8 bit_offset;
- u8 reserved;
+ u8 access_size; /* ACPI 3.0, was Reserved */
u64 address;
};
struct asl_qword_address_desc qas;
struct asl_dword_address_desc das;
struct asl_word_address_desc was;
+ struct asl_extended_address_desc eas;
struct asl_extended_xrupt_desc exx;
struct asl_general_register_desc grg;
u32 u32_item;
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define ACPI_DISASSEMBLER
#define ACPI_NO_METHOD_EXECUTION
#define ACPI_USE_SYSTEM_CLIBRARY
+#define ACPI_ENABLE_OBJECT_CACHE
#endif
#ifdef _ACPI_EXEC_APP
#define ACPI_DEBUGGER
#define ACPI_DISASSEMBLER
#define ACPI_USE_SYSTEM_CLIBRARY
+#define ACPI_ENABLE_OBJECT_CACHE
#endif
#ifdef _ACPI_ASL_COMPILER
#define ACPI_DISASSEMBLER
#define ACPI_CONSTANT_EVAL_ONLY
#define ACPI_USE_SYSTEM_CLIBRARY
+#define ACPI_ENABLE_OBJECT_CACHE
#endif
/*
*****************************************************************************/
/*
- * Copyright (C) 2000 - 2004, R. Byron Moore
+ * Copyright (C) 2000 - 2005, R. Byron Moore
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
#define __ACPI_PROCESSOR_H
#include <linux/kernel.h>
+#include <linux/config.h>
#define ACPI_PROCESSOR_BUSY_METRIC 10
-#define ACPI_PROCESSOR_MAX_POWER ACPI_C_STATE_COUNT
+#define ACPI_PROCESSOR_MAX_POWER 8
#define ACPI_PROCESSOR_MAX_C2_LATENCY 100
#define ACPI_PROCESSOR_MAX_C3_LATENCY 1000
/* Power Management */
+struct acpi_processor_cx;
+
+struct acpi_power_register {
+ u8 descriptor;
+ u16 length;
+ u8 space_id;
+ u8 bit_width;
+ u8 bit_offset;
+ u8 reserved;
+ u64 address;
+} __attribute__ ((packed));
+
+
struct acpi_processor_cx_policy {
u32 count;
- u32 state;
+ struct acpi_processor_cx *state;
struct {
u32 time;
u32 ticks;
struct acpi_processor_cx {
u8 valid;
+ u8 type;
u32 address;
u32 latency;
u32 latency_ticks;
};
struct acpi_processor_power {
- u32 state;
+ struct acpi_processor_cx *state;
+ unsigned long bm_check_timestamp;
u32 default_state;
u32 bm_activity;
+ int count;
struct acpi_processor_cx states[ACPI_PROCESSOR_MAX_POWER];
};
u8 limit:1;
u8 bm_control:1;
u8 bm_check:1;
- u8 reserved:2;
+ u8 has_cst:1;
+ u8 power_setup_done:1;
};
struct acpi_processor {
acpi_handle handle;
u32 acpi_id;
u32 id;
+ u32 pblk;
int performance_platform_limit;
struct acpi_processor_flags flags;
struct acpi_processor_power power;
struct acpi_processor_limit limit;
};
+struct acpi_processor_errata {
+ u8 smp;
+ struct {
+ u8 throttle:1;
+ u8 fdma:1;
+ u8 reserved:6;
+ u32 bmisx;
+ } piix4;
+};
+
extern int acpi_processor_register_performance (
struct acpi_processor_performance * performance,
unsigned int cpu);
if a _PPC object exists, rmmod is disallowed then */
int acpi_processor_notify_smm(struct module *calling_module);
+
+
+/* for communication between multiple parts of the processor kernel module */
+extern struct acpi_processor *processors[NR_CPUS];
+extern struct acpi_processor_errata errata;
+
+/* in processor_perflib.c */
+#ifdef CONFIG_CPU_FREQ
+void acpi_processor_ppc_init(void);
+void acpi_processor_ppc_exit(void);
+int acpi_processor_ppc_has_changed(struct acpi_processor *pr);
+#else
+static inline void acpi_processor_ppc_init(void) { return; }
+static inline void acpi_processor_ppc_exit(void) { return; }
+static inline int acpi_processor_ppc_has_changed(struct acpi_processor *pr) {
+ static unsigned int printout = 1;
+ if (printout) {
+ printk(KERN_WARNING "Warning: Processor Platform Limit event detected, but not handled.\n");
+ printk(KERN_WARNING "Consider compiling CPUfreq support into your kernel.\n");
+ printout = 0;
+ }
+ return 0;
+}
+#endif /* CONFIG_CPU_FREQ */
+
+/* in processor_throttling.c */
+int acpi_processor_get_throttling_info (struct acpi_processor *pr);
+int acpi_processor_set_throttling (struct acpi_processor *pr, int state);
+int acpi_processor_throttling_open_fs(struct inode *inode, struct file *file);
+ssize_t acpi_processor_write_throttling (
+ struct file *file,
+ const char __user *buffer,
+ size_t count,
+ loff_t *data);
+extern struct file_operations acpi_processor_throttling_fops;
+
+/* in processor_idle.c */
+int acpi_processor_power_init(struct acpi_processor *pr, struct acpi_device *device);
+int acpi_processor_cst_has_changed (struct acpi_processor *pr);
+int acpi_processor_power_exit(struct acpi_processor *pr, struct acpi_device *device);
+
+
+/* in processor_thermal.c */
+int acpi_processor_get_limit_info (struct acpi_processor *pr);
+int acpi_processor_limit_open_fs(struct inode *inode, struct file *file);
+ssize_t acpi_processor_write_limit (
+ struct file *file,
+ const char __user *buffer,
+ size_t count,
+ loff_t *data);
+extern struct file_operations acpi_processor_limit_fops;
+
+#ifdef CONFIG_CPU_FREQ
+void acpi_thermal_cpufreq_init(void);
+void acpi_thermal_cpufreq_exit(void);
+#else
+static inline void acpi_thermal_cpufreq_init(void) { return; }
+static inline void acpi_thermal_cpufreq_exit(void) { return; }
+#endif
+
+
#endif
pci_unmap_sg(alpha_gendev_to_pci(dev), sg, nents, dir)
#define dma_supported(dev, mask) \
pci_dma_supported(alpha_gendev_to_pci(dev), mask)
+#define dma_mapping_error(addr) \
+ pci_dma_mapping_error(addr)
#else /* no PCI - no IOMMU. */
#define dma_unmap_page(dev, addr, size, dir) do { } while (0)
#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
+#define dma_mapping_error(addr) (0)
+
#endif /* !CONFIG_PCI */
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
/* entry.S is sensitive to the offsets of these fields */
typedef struct {
unsigned long __softirq_pending;
- unsigned int __syscall_count;
- unsigned long idle_timestamp;
- struct task_struct * __ksoftirqd_task;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
#error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && \
- softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* _ALPHA_HARDIRQ_H */
__EXTERN_INLINE void
IO_CONCAT(__IO_PREFIX,iowrite16)(u16 b, void __iomem *a)
{
- __kernel_stb(b, *(volatile u16 __force *)a);
+ __kernel_stw(b, *(volatile u16 __force *)a);
}
#endif
__EXTERN_INLINE void
IO_CONCAT(__IO_PREFIX,writew)(u16 b, volatile void __iomem *a)
{
- __kernel_stb(b, *(volatile u16 __force *)a);
+ __kernel_stw(b, *(volatile u16 __force *)a);
}
#elif IO_CONCAT(__IO_PREFIX,trivial_rw_bw) == 2
__EXTERN_INLINE u8
#define TASK_UNMAPPED_BASE \
((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2)
-/*
- * Bus types
- */
-#define MCA_bus 0
-#define MCA_bus__is_a_macro /* for versions in ksyms.c */
-
typedef struct {
unsigned long seg;
} mm_segment_t;
#define __HAVE_ARCH_MEMCPY
extern void * memcpy(void *, const void *, size_t);
#define __HAVE_ARCH_MEMMOVE
-#define __HAVE_ARCH_BCOPY
extern void * memmove(void *, const void *, size_t);
/* For backward compatibility with modules. Unused otherwise. */
#define TIF_UAC_NOPRINT 6 /* see sysinfo.h */
#define TIF_UAC_NOFIX 7
#define TIF_UAC_SIGBUS 8
+#define TIF_MEMDIE 9
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
--- /dev/null
+/* linux/include/asm-arm/arch-cl7500/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mov \rx, #0xe0000000
+ orr \rx, \rx, #0x00010000
+ orr \rx, \rx, #0x00000be0
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldrb \rd, [\rx, #0x14]
+ tst \rd, #0x20
+ beq 1001b
+ .endm
--- /dev/null
+/*
+ * include/asm-arm/arch-CLPS711x/entry-macro.S
+ *
+ * Low-level IRQ helper macros for CLPS711X-based platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+#include <asm/hardware/clps7111.h>
+
+ .macro disable_fiq
+ .endm
+
+#if (INTSR2 - INTSR1) != (INTMR2 - INTMR1)
+#error INTSR stride != INTMR stride
+#endif
+
+ .macro get_irqnr_and_base, irqnr, stat, base, mask
+ mov \base, #CLPS7111_BASE
+ ldr \stat, [\base, #INTSR1]
+ ldr \mask, [\base, #INTMR1]
+ mov \irqnr, #4
+ mov \mask, \mask, lsl #16
+ and \stat, \stat, \mask, lsr #16
+ movs \stat, \stat, lsr #4
+ bne 1001f
+
+ add \base, \base, #INTSR2 - INTSR1
+ ldr \stat, [\base, #INTSR1]
+ ldr \mask, [\base, #INTMR1]
+ mov \irqnr, #16
+ mov \mask, \mask, lsl #16
+ and \stat, \stat, \mask, lsr #16
+
+1001: tst \stat, #255
+ addeq \irqnr, \irqnr, #8
+ moveq \stat, \stat, lsr #8
+ tst \stat, #15
+ addeq \irqnr, \irqnr, #4
+ moveq \stat, \stat, lsr #4
+ tst \stat, #3
+ addeq \irqnr, \irqnr, #2
+ moveq \stat, \stat, lsr #2
+ tst \stat, #1
+ addeq \irqnr, \irqnr, #1
+ moveq \stat, \stat, lsr #1
+ tst \stat, #1 @ bit 0 should be set
+ .endm
+
+
--- /dev/null
+/* linux/include/asm-arm/arch-ebsa110/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mov \rx, #0xf0000000
+ orr \rx, \rx, #0x00000be0
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldrb \rd, [\rx, #0x14]
+ and \rd, \rd, #0x60
+ teq \rd, #0x60
+ bne 1002b
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldrb \rd, [\rx, #0x18]
+ tst \rd, #0x10
+ beq 1001b
+ .endm
--- /dev/null
+/* linux/include/asm-arm/arch-ebsa285/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+#include <asm/hardware/dec21285.h>
+
+#ifndef CONFIG_DEBUG_DC21285_PORT
+ /* For NetWinder debugging */
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x7c000000 @ physical
+ movne \rx, #0xff000000 @ virtual
+ orr \rx, \rx, #0x000003f8
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldrb \rd, [\rx, #0x5]
+ and \rd, \rd, #0x60
+ teq \rd, #0x60
+ bne 1002b
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldrb \rd, [\rx, #0x6]
+ tst \rd, #0x10
+ beq 1001b
+ .endm
+#else
+ /* For EBSA285 debugging */
+ .equ dc21285_high, ARMCSR_BASE & 0xff000000
+ .equ dc21285_low, ARMCSR_BASE & 0x00ffffff
+
+ .macro addruart,rx
+ mov \rx, #dc21285_high
+ .if dc21285_low
+ orr \rx, \rx, #dc21285_low
+ .endif
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, #0x160] @ UARTDR
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldr \rd, [\rx, #0x178] @ UARTFLG
+ tst \rd, #1 << 3
+ bne 1001b
+ .endm
+
+ .macro waituart,rd,rx
+ .endm
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-footbridge/entry-macro.S
+ *
+ * Low-level IRQ helper macros for footbridge-based platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+#include <asm/hardware/dec21285.h>
+
+ .macro disable_fiq
+ .endm
+
+ .equ dc21285_high, ARMCSR_BASE & 0xff000000
+ .equ dc21285_low, ARMCSR_BASE & 0x00ffffff
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov r4, #dc21285_high
+ .if dc21285_low
+ orr r4, r4, #dc21285_low
+ .endif
+ ldr \irqstat, [r4, #0x180] @ get interrupts
+
+ mov \irqnr, #IRQ_SDRAMPARITY
+ tst \irqstat, #IRQ_MASK_SDRAMPARITY
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_UART_RX
+ movne \irqnr, #IRQ_CONRX
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_DMA1
+ movne \irqnr, #IRQ_DMA1
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_DMA2
+ movne \irqnr, #IRQ_DMA2
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_IN0
+ movne \irqnr, #IRQ_IN0
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_IN1
+ movne \irqnr, #IRQ_IN1
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_IN2
+ movne \irqnr, #IRQ_IN2
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_IN3
+ movne \irqnr, #IRQ_IN3
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_PCI
+ movne \irqnr, #IRQ_PCI
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_DOORBELLHOST
+ movne \irqnr, #IRQ_DOORBELLHOST
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_I2OINPOST
+ movne \irqnr, #IRQ_I2OINPOST
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_TIMER1
+ movne \irqnr, #IRQ_TIMER1
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_TIMER2
+ movne \irqnr, #IRQ_TIMER2
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_TIMER3
+ movne \irqnr, #IRQ_TIMER3
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_UART_TX
+ movne \irqnr, #IRQ_CONTX
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_PCI_ABORT
+ movne \irqnr, #IRQ_PCI_ABORT
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_PCI_SERR
+ movne \irqnr, #IRQ_PCI_SERR
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_DISCARD_TIMER
+ movne \irqnr, #IRQ_DISCARD_TIMER
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_PCI_DPERR
+ movne \irqnr, #IRQ_PCI_DPERR
+ bne 1001f
+
+ tst \irqstat, #IRQ_MASK_PCI_PERR
+ movne \irqnr, #IRQ_PCI_PERR
+1001:
+ .endm
+
--- /dev/null
+/* linux/include/asm-arm/arch-epxa10db/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+#include <asm/arch/excalibur.h>
+#define UART00_TYPE
+#include <asm/arch/uart00.h>
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ ldr \rx, =EXC_UART00_BASE @ physical base address
+ orrne \rx, \rx, #0xff000000 @ virtual base
+ orrne \rx, \rx, #0x00f00000
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, #UART_TD(0)]
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldr \rd, [\rx, #UART_TSR(0)]
+ and \rd, \rd, #UART_TSR_TX_LEVEL_MSK
+ cmp \rd, #15
+ beq 1001b
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldr \rd, [\rx, #UART_TSR(0)]
+ ands \rd, \rd, #UART_TSR_TX_LEVEL_MSK
+ bne 1001b
+ .endm
--- /dev/null
+/*
+ * include/asm-arm/arch-epxa10db/entry-macro.S
+ *
+ * Low-level IRQ helper macros for epxa10db platform
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+#include <asm/arch/platform.h>
+#undef IRQ_MODE /* same name defined in asm/proc/ptrace.h */
+#include <asm/arch/int_ctrl00.h>
+
+ .macro disable_fiq
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+
+ ldr \irqstat, =INT_ID(IO_ADDRESS(EXC_INT_CTRL00_BASE))
+ ldr \irqnr,[\irqstat]
+ cmp \irqnr,#0
+ subne \irqnr,\irqnr,#1
+
+ .endm
+
* which is included by this file.
*/
-#define SERIAL2_VIRT (IO_VIRT + 0x2d000)
-#define SERIAL3_VIRT (IO_VIRT + 0x2e000)
+#define SERIAL2_OFS 0x2d000
+#define SERIAL2_BASE (IO_PHYS + SERIAL2_OFS)
+#define SERIAL2_VIRT (IO_VIRT + SERIAL2_OFS)
+#define SERIAL3_OFS 0x2e000
+#define SERIAL3_BASE (IO_PHYS + SERIAL3_OFS)
+#define SERIAL3_VIRT (IO_VIRT + SERIAL3_OFS)
/* Matrix Keyboard Controller */
#define KBD_VIRT (IO_VIRT + 0x22000)
#define GPIO_C_VIRT (GPIO_VIRT(2))
#define GPIO_D_VIRT (GPIO_VIRT(3))
#define GPIO_E_VIRT (GPIO_VIRT(4))
-#define GPIO_AMULSEL (GPIO_VIRT + 0xA4)
+#define GPIO_AMULSEL (GPIO_VIRT(0) + 0xA4)
+
+#define AMULSEL_USIN2 (1<<5)
+#define AMULSEL_USOUT2 (1<<6)
+#define AMULSEL_USIN3 (1<<13)
+#define AMULSEL_USOUT3 (1<<14)
+#define AMULSEL_IRDIN (1<<15)
+#define AMULSEL_IRDOUT (1<<7)
+
/* Register offsets general purpose I/O */
#define GPIO_DATA 0x00
#define GPIO_DIR 0x04
#define LCD_PALETTE_BASE (IO_VIRT + 0x10400)
/* Serial ports */
-#define SERIAL0_VIRT (IO_VIRT + 0x20000)
-#define SERIAL1_VIRT (IO_VIRT + 0x21000)
+#define SERIAL0_OFS 0x20000
+#define SERIAL0_VIRT (IO_VIRT + SERIAL0_OFS)
+#define SERIAL0_BASE (IO_PHYS + SERIAL0_OFS)
-#define SERIAL0_BASE SERIAL0_VIRT
-#define SERIAL1_BASE SERIAL1_VIRT
-#define SERIAL2_BASE SERIAL2_VIRT
-#define SERIAL3_BASE SERIAL3_VIRT
+#define SERIAL1_OFS 0x21000
+#define SERIAL1_VIRT (IO_VIRT + SERIAL1_OFS)
+#define SERIAL1_BASE (IO_PHYS + SERIAL1_OFS)
+#define SERIAL_ENABLE 0x30
+#define SERIAL_ENABLE_EN (1<<0)
/* General defines to pacify gcc */
#define PCIO_BASE (0) /* for inb, outb and friends */
--- /dev/null
+/* linux/include/asm-arm/arch-imx/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x00000000 @ physical
+ movne \rx, #0xe0000000 @ virtual
+ orr \rx, \rx, #0x00200000
+ orr \rx, \rx, #0x00006000 @ UART1 offset
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, #0x40] @ TXDATA
+ .endm
+
+ .macro waituart,rd,rx
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldr \rd, [\rx, #0x98] @ SR2
+ tst \rd, #1 << 3 @ TXDC
+ beq 1002b @ wait until transmit done
+ .endm
--- /dev/null
+/* linux/include/asm-arm/arch-integrator/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+#include <asm/hardware/amba_serial.h>
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x16000000 @ physical base address
+ movne \rx, #0xf0000000 @ virtual base
+ addne \rx, \rx, #0x16000000 >> 4
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx, #UART01x_DR]
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldr \rd, [\rx, #0x18] @ UARTFLG
+ tst \rd, #1 << 5 @ UARTFLGUTXFF - 1 when full
+ bne 1001b
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldr \rd, [\rx, #0x18] @ UARTFLG
+ tst \rd, #1 << 3 @ UARTFLGUBUSY - 1 when busy
+ bne 1001b
+ .endm
--- /dev/null
+/* linux/include/asm-arm/arch-iop3xx/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mov \rx, #0xfe000000 @ physical
+#if defined(CONFIG_ARCH_IQ80321) || defined(CONFIG_ARCH_IQ31244)
+ orr \rx, \rx, #0x00800000 @ location of the UART
+#elif defined(CONFIG_ARCH_IOP331)
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x000fe000 @ Physical Base
+ movne \rx, #0
+ orr \rx, \rx, #0xfe000000
+ orr \rx, \rx, #0x00f00000 @ Virtual Base
+ orr \rx, \rx, #0x00001700 @ location of the UART
+#else
+#error Unknown IOP3XX implementation
+#endif
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldrb \rd, [\rx, #0x5]
+ and \rd, \rd, #0x60
+ teq \rd, #0x60
+ bne 1002b
+ .endm
+
+ .macro waituart,rd,rx
+#if !defined(CONFIG_ARCH_IQ80321) || !defined(CONFIG_ARCH_IQ31244) || !defined(CONFIG_ARCH_IQ80331)
+1001: ldrb \rd, [\rx, #0x6]
+ tst \rd, #0x10
+ beq 1001b
+#endif
+ .endm
--- /dev/null
+/*
+ * include/asm-arm/arch-iop3xx/entry-macro.S
+ *
+ * Low-level IRQ helper macros for IOP3xx-based platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#if defined(CONFIG_ARCH_IOP321)
+ .macro disable_fiq
+ .endm
+
+ /*
+ * Note: only deal with normal interrupts, not FIQ
+ */
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov \irqnr, #0
+ mrc p6, 0, \irqstat, c8, c0, 0 @ Read IINTSRC
+ cmp \irqstat, #0
+ beq 1001f
+ clz \irqnr, \irqstat
+ mov \base, #31
+ subs \irqnr,\base,\irqnr
+ add \irqnr,\irqnr,#IRQ_IOP321_DMA0_EOT
+1001:
+ .endm
+
+#elif defined(CONFIG_ARCH_IOP331)
+ .macro disable_fiq
+ .endm
+
+ /*
+ * Note: only deal with normal interrupts, not FIQ
+ */
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov \irqnr, #0
+ mrc p6, 0, \irqstat, c4, c0, 0 @ Read IINTSRC0
+ cmp \irqstat, #0
+ bne 1002f
+ mrc p6, 0, \irqstat, c5, c0, 0 @ Read IINTSRC1
+ cmp \irqstat, #0
+ beq 1001f
+ clz \irqnr, \irqstat
+	rsbs	\irqnr,\irqnr,#31	@ recommended by RMK
+ add \irqnr,\irqnr,#IRQ_IOP331_XINT8
+ b 1001f
+1002: clz \irqnr, \irqstat
+	rsbs	\irqnr,\irqnr,#31	@ recommended by RMK
+ add \irqnr,\irqnr,#IRQ_IOP331_DMA0_EOT
+1001:
+ .endm
+
+#endif
+
#include "iq80321.h"
#include "iq31244.h"
#include "iq80331.h"
+#include "iq80332.h"
#endif /* _ASM_ARCH_HARDWARE_H */
*
* Author: Rory Bolt <rorybolt@pacbell.net>
* Copyright (C) 2002 Rory Bolt
+ * Copyright (C) 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
/*
* IOP321 I/O and Mem space regions for PCI autoconfiguration
*/
-#define IOP321_PCI_LOWER_IO 0x90000000
-#define IOP321_PCI_UPPER_IO 0x9000ffff
-#define IOP321_PCI_LOWER_MEM 0x80000000
-#define IOP321_PCI_UPPER_MEM 0x83ffffff
-
-#define IOP321_PCI_WINDOW_SIZE 64 * 0x100000
+#define IOP321_PCI_IO_WINDOW_SIZE 0x00010000
+#define IOP321_PCI_LOWER_IO_PA 0x90000000
+#define IOP321_PCI_LOWER_IO_VA 0xfe000000
+#define IOP321_PCI_LOWER_IO_BA (*IOP321_OIOWTVR)
+#define IOP321_PCI_UPPER_IO_PA (IOP321_PCI_LOWER_IO_PA + IOP321_PCI_IO_WINDOW_SIZE - 1)
+#define IOP321_PCI_UPPER_IO_VA (IOP321_PCI_LOWER_IO_VA + IOP321_PCI_IO_WINDOW_SIZE - 1)
+#define IOP321_PCI_UPPER_IO_BA (IOP321_PCI_LOWER_IO_BA + IOP321_PCI_IO_WINDOW_SIZE - 1)
+#define IOP321_PCI_IO_OFFSET (IOP321_PCI_LOWER_IO_VA - IOP321_PCI_LOWER_IO_BA)
+
+//#define IOP321_PCI_MEM_WINDOW_SIZE (~*IOP321_IALR1 + 1)
+#define IOP321_PCI_MEM_WINDOW_SIZE 0x04000000 /* 64M outbound window */
+#define IOP321_PCI_LOWER_MEM_PA 0x80000000
+#define IOP321_PCI_LOWER_MEM_BA (*IOP321_OMWTVR0)
+#define IOP321_PCI_UPPER_MEM_PA (IOP321_PCI_LOWER_MEM_PA + IOP321_PCI_MEM_WINDOW_SIZE - 1)
+#define IOP321_PCI_UPPER_MEM_BA (IOP321_PCI_LOWER_MEM_BA + IOP321_PCI_MEM_WINDOW_SIZE - 1)
+#define IOP321_PCI_MEM_OFFSET (IOP321_PCI_LOWER_MEM_PA - IOP321_PCI_LOWER_MEM_BA)
/*
* IOP321 chipset registers
*/
#define IOP321_VIRT_MEM_BASE 0xfeffe000 /* chip virtual mem address*/
-//#define IOP321_VIRT_MEM_BASE 0xfff00000 /* chip virtual mem address*/
-
-#define IOP321_PHY_MEM_BASE 0xffffe000 /* chip physical memory address */
+#define IOP321_PHYS_MEM_BASE 0xffffe000 /* chip physical memory address */
#define IOP321_REG_ADDR(reg) (IOP321_VIRT_MEM_BASE | (reg))
/* Reserved 0x00000000 through 0x000000FF */
#define NR_IRQS NR_IOP331_IRQS
+#if defined(CONFIG_ARCH_IQ80331)
/*
* Interrupts available on the IQ80331 board
*/
#define IRQ_IQ80331_INTC IRQ_IOP331_XINT2
#define IRQ_IQ80331_INTD IRQ_IOP331_XINT3
+#elif defined(CONFIG_MACH_IQ80332)
+/*
+ * Interrupts available on the IQ80332 board
+ */
+
+/*
+ * On board devices
+ */
+#define IRQ_IQ80332_I82544 IRQ_IOP331_XINT0
+#define IRQ_IQ80332_UART0 IRQ_IOP331_UART0
+#define IRQ_IQ80332_UART1 IRQ_IOP331_UART1
+
+/*
+ * PCI interrupts
+ */
+#define IRQ_IQ80332_INTA IRQ_IOP331_XINT0
+#define IRQ_IQ80332_INTB IRQ_IOP331_XINT1
+#define IRQ_IQ80332_INTC IRQ_IOP331_XINT2
+#define IRQ_IQ80332_INTD IRQ_IOP331_XINT3
+
+#endif
+
#endif // _IOP331_IRQ_H_
* Intel IOP331 Chip definitions
*
* Author: Dave Jiang (dave.jiang@intel.com)
- * Copyright (C) 2003 Intel Corp.
+ * Copyright (C) 2003, 2004 Intel Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
*/
#ifndef __ASSEMBLY__
#ifdef CONFIG_ARCH_IOP331
-#define iop_is_331() ((processor_id & 0xffffffb0) == 0x69054090)
+/*#define iop_is_331() ((processor_id & 0xffffffb0) == 0x69054090) */
+#define iop_is_331() ((processor_id & 0xffffff30) == 0x69054010)
#else
#define iop_is_331() 0
#endif
/*
* IOP331 I/O and Mem space regions for PCI autoconfiguration
*/
-#define IOP331_PCI_LOWER_IO 0x90000000
-#define IOP331_PCI_UPPER_IO 0x900fffff
-#define IOP331_PCI_LOWER_MEM 0x80000000
-#define IOP331_PCI_UPPER_MEM 0x87ffffff
-
-#define IOP331_PCI_WINDOW_SIZE 128 * 0x100000
-
+#define IOP331_PCI_IO_WINDOW_SIZE 0x00010000
+#define IOP331_PCI_LOWER_IO_PA 0x90000000
+#define IOP331_PCI_LOWER_IO_VA 0xfe000000
+#define IOP331_PCI_LOWER_IO_BA (*IOP331_OIOWTVR)
+#define IOP331_PCI_UPPER_IO_PA (IOP331_PCI_LOWER_IO_PA + IOP331_PCI_IO_WINDOW_SIZE - 1)
+#define IOP331_PCI_UPPER_IO_VA (IOP331_PCI_LOWER_IO_VA + IOP331_PCI_IO_WINDOW_SIZE - 1)
+#define IOP331_PCI_UPPER_IO_BA (IOP331_PCI_LOWER_IO_BA + IOP331_PCI_IO_WINDOW_SIZE - 1)
+#define IOP331_PCI_IO_OFFSET (IOP331_PCI_LOWER_IO_VA - IOP331_PCI_LOWER_IO_BA)
+
+/* this can be 128M if OMWTVR1 is set */
+#define IOP331_PCI_MEM_WINDOW_SIZE 0x04000000 /* 64M outbound window */
+//#define IOP331_PCI_MEM_WINDOW_SIZE (~*IOP331_IALR1 + 1)
+#define IOP331_PCI_LOWER_MEM_PA 0x80000000
+#define IOP331_PCI_LOWER_MEM_BA (*IOP331_OMWTVR0)
+#define IOP331_PCI_UPPER_MEM_PA (IOP331_PCI_LOWER_MEM_PA + IOP331_PCI_MEM_WINDOW_SIZE - 1)
+#define IOP331_PCI_UPPER_MEM_BA (IOP331_PCI_LOWER_MEM_BA + IOP331_PCI_MEM_WINDOW_SIZE - 1)
+#define IOP331_PCI_MEM_OFFSET (IOP331_PCI_LOWER_MEM_PA - IOP331_PCI_LOWER_MEM_BA)
/*
* IOP331 chipset registers
*/
-#define IOP331_VIRT_MEM_BASE 0xfeffe000 /* chip virtual mem address*/
-// #define IOP331_VIRT_MEM_BASE 0xfff00000 /* chip virtual mem address*/
-
+#define IOP331_VIRT_MEM_BASE 0xfeffe000 /* chip virtual mem address*/
#define IOP331_PHYS_MEM_BASE 0xffffe000 /* chip physical memory address */
#define IOP331_REG_ADDR(reg) (IOP331_VIRT_MEM_BASE | (reg))
#define IOP331_TU_TISR (volatile u32 *)IOP331_REG_ADDR(0x000007E8)
#define IOP331_TU_WDTCR (volatile u32 *)IOP331_REG_ADDR(0x000007EC)
-#define IOP331_TICK_RATE 266000000 /* 266 MHz clock */
+#if defined(CONFIG_ARCH_IOP331)
+#define IOP331_TICK_RATE 266000000 /* 266 MHz IB clock */
+#endif
+#if defined(CONFIG_IOP331_STEPD) || defined(CONFIG_ARCH_IQ80333)
+#undef IOP331_TICK_RATE
+#define	IOP331_TICK_RATE	333000000	/* 333 MHz IB clock */
+#endif
/* Application accelerator unit 0x00000800 - 0x000008FF */
#define IOP331_AAU_ACR (volatile u32 *)IOP331_REG_ADDR(0x00000800)
/* 0x00001740 through 0x0000176C UART 1 */
+#define IOP331_UART0_PHYS (IOP331_PHYS_MEM_BASE | 0x00001700) /* UART #1 physical */
+#define IOP331_UART1_PHYS (IOP331_PHYS_MEM_BASE | 0x00001740) /* UART #2 physical */
+#define IOP331_UART0_VIRT (IOP331_VIRT_MEM_BASE | 0x00001700) /* UART #1 virtual addr */
+#define IOP331_UART1_VIRT (IOP331_VIRT_MEM_BASE | 0x00001740) /* UART #2 virtual addr */
+
/* Reserved 0x00001770 through 0x0000177F */
/* General Purpose I/O Registers */
/* Reserved 0x0000178c through 0x000019ff */
+
#ifndef __ASSEMBLY__
extern void iop331_map_io(void);
extern void iop331_init_irq(void);
#ifndef _IQ31244_H_
#define _IQ31244_H_
-#define IQ31244_RAMBASE 0xa0000000
-
#define IQ31244_FLASHBASE 0xf0000000 /* Flash */
#define IQ31244_FLASHSIZE 0x00800000
#define IQ31244_FLASHWIDTH 2
#define IQ31244_ROTARY_SW 0xfe8d0000 /* Rotary Switch */
#define IQ31244_BATT_STAT 0xfe8f0000 /* Battery Status */
-/*
- * IQ31244 PCI I/O and Mem space regions
- */
-#define IQ31244_PCI_IO_BASE 0x90000000
-#define IQ31244_PCI_IO_SIZE 0x00010000
-#define IQ31244_PCI_MEM_BASE 0x80000000
-//#define IQ31244_PCI_MEM_SIZE 0x04000000
-#define IQ31244_PCI_MEM_SIZE 0x08000000
-#define IQ31244_PCI_IO_OFFSET 0x6e000000
-
#ifndef __ASSEMBLY__
extern void iq31244_map_io(void);
#endif
#ifndef _IQ80321_H_
#define _IQ80321_H_
-#define IQ80321_RAMBASE 0xa0000000
-
#define IQ80321_FLASHBASE 0xf0000000 /* Flash */
#define IQ80321_FLASHSIZE 0x00800000
#define IQ80321_FLASHWIDTH 1
#define IQ80321_ROTARY_SW 0xfe8d0000 /* Rotary Switch */
#define IQ80321_BATT_STAT 0xfe8f0000 /* Battery Status */
-/*
- * IQ80321 PCI I/O and Mem space regions
- */
-#define IQ80321_PCI_IO_BASE 0x90000000
-#define IQ80321_PCI_IO_SIZE 0x00010000
-#define IQ80321_PCI_MEM_BASE 0x80000000
-#define IQ80321_PCI_MEM_SIZE 0x04000000
-#define IQ80321_PCI_IO_OFFSET 0x6e000000
-
#ifndef __ASSEMBLY__
extern void iq80321_map_io(void);
#endif
#ifndef _IQ80331_H_
#define _IQ80331_H_
-#define IQ80331_RAMBASE 0x00000000
-
#define IQ80331_FLASHBASE 0xc0000000 /* Flash */
#define IQ80331_FLASHSIZE 0x00800000
#define IQ80331_FLASHWIDTH 1
-#define IQ80331_UART0_PHYS (IOP331_PHYS_MEM_BASE | 0x00001700) /* UART #1 physical */
-#define IQ80331_UART1_PHYS (IOP331_PHYS_MEM_BASE | 0x00001740) /* UART #2 physical */
-#define IQ80331_UART0_VIRT (IOP331_VIRT_MEM_BASE | 0x00001700) /* UART #1 virtual addr */
-#define IQ80331_UART1_VIRT (IOP331_VIRT_MEM_BASE | 0x00001740) /* UART #2 virtual addr */
#define IQ80331_7SEG_1 0xce840000 /* 7-Segment MSB */
#define IQ80331_7SEG_0 0xce850000 /* 7-Segment LSB (WO) */
#define IQ80331_ROTARY_SW 0xce8d0000 /* Rotary Switch */
#define IQ80331_BATT_STAT 0xce8f0000 /* Battery Status */
-/*
- * IQ80331 PCI I/O and Mem space regions
- */
-#define IQ80331_PCI_IO_BASE 0x90000000
-#define IQ80331_PCI_IO_SIZE 0x00010000
-#define IQ80331_PCI_MEM_BASE 0x80000000
-#define IQ80331_PCI_MEM_SIZE 0x08000000
-#define IQ80331_PCI_IO_OFFSET 0x6e000000
-
#ifndef __ASSEMBLY__
extern void iq80331_map_io(void);
#endif
--- /dev/null
+/*
+ * linux/include/asm/arch-iop3xx/iq80332.h
+ *
+ * Intel IQ80332 evaluation board registers
+ */
+
+#ifndef _IQ80332_H_
+#define _IQ80332_H_
+
+#define IQ80332_FLASHBASE 0xc0000000 /* Flash */
+#define IQ80332_FLASHSIZE 0x00800000
+#define IQ80332_FLASHWIDTH 1
+
+#define IQ80332_7SEG_1 0xce840000 /* 7-Segment MSB */
+#define IQ80332_7SEG_0 0xce850000 /* 7-Segment LSB (WO) */
+#define IQ80332_ROTARY_SW 0xce8d0000 /* Rotary Switch */
+#define IQ80332_BATT_STAT 0xce8f0000 /* Battery Status */
+
+#ifndef __ASSEMBLY__
+extern void iq80332_map_io(void);
+#endif
+
+#endif // _IQ80332_H_
#define CLOCK_TICK_RATE IOP321_TICK_RATE
-#elif defined(CONFIG_ARCH_IQ80331)
+#elif defined(CONFIG_ARCH_IQ80331) || defined(CONFIG_MACH_IQ80332)
#define CLOCK_TICK_RATE IOP331_TICK_RATE
#ifdef CONFIG_ARCH_IOP321
#define UTYPE unsigned char *
-#else
+#elif defined(CONFIG_ARCH_IOP331)
#define UTYPE u32 *
+#else
+#error "Missing IOP3xx arch type def"
#endif
static volatile UTYPE uart_base;
uart_base = (volatile UTYPE)IQ80321_UART;
else if(machine_is_iq31244())
uart_base = (volatile UTYPE)IQ31244_UART;
- else if(machine_is_iq80331())
- uart_base = (volatile UTYPE)IQ80331_UART0_PHYS;
+ else if(machine_is_iq80331() || machine_is_iq80332())
+ uart_base = (volatile UTYPE)IOP331_UART0_PHYS;
else
uart_base = (volatile UTYPE)0xfe800000;
}
#define VMALLOC_OFFSET (8*1024*1024)
#define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
#define VMALLOC_VMADDR(x) ((unsigned long)(x))
-#define VMALLOC_END (0xe8000000)
+//#define VMALLOC_END (0xe8000000)
+/* increase usable physical RAM to ~992M per RMK */
+#define VMALLOC_END (0xfe000000)
+
--- /dev/null
+/* linux/include/asm-arm/arch-ixp2000/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0xc0000000 @ Physical base
+ movne \rx, #0xfe000000 @ virtual base
+ orrne \rx, \rx, #0x00f00000
+ orr \rx, \rx, #0x00030000
+#ifdef __ARMEB__
+ orr \rx, \rx, #0x00000003
+#endif
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldrb \rd, [\rx, #0x14]
+ tst \rd, #0x20
+ beq 1002b
+ .endm
+
+ .macro waituart,rd,rx
+ nop
+ nop
+ nop
+ .endm
/*
* Pick up VMALLOC_END
*/
-#define ___io(p) ((unsigned long)((p)+IXP2000_PCI_IO_VIRT_BASE))
+#define ___io(p) ((void __iomem *)((p)+IXP2000_PCI_IO_VIRT_BASE))
/*
* IXP2000 does not do proper byte-lane conversion for PCI addresses,
* so we need to override standard functions.
*/
-#define alignb(addr) ((addr & ~3) + (3 - (addr & 3)))
-#define alignw(addr) ((addr & ~2) + (2 - (addr & 2)))
+#define alignb(addr) (((unsigned long)addr & ~3) + (3 - ((unsigned long)addr & 3)))
+#define alignw(addr) (((unsigned long)addr & ~2) + (2 - ((unsigned long)addr & 2)))
#define outb(v,p) __raw_writeb(v,alignb(___io(p)))
#define outw(v,p) __raw_writew((v),alignw(___io(p)))
#define WDT_ENABLE 0x00000001
#define TIMER_DIVIDER_256 0x00000008
#define TIMER_ENABLE 0x00000080
+#define IRQ_MASK_TIMER1 (1 << 4)
/*
* Interrupt controller registers
* linux/include/asm-arm/arch-ixp2000/system.h
*
* Copyright (C) 2002 Intel Corp.
+ * Copyright (C) 2003-2005 MontaVista Software, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
{
cli();
- if (machine_is_ixdp2401() || machine_is_ixdp2801()) {
- *IXDP2X01_CPLD_FLASH_REG = ((0 >> IXDP2X01_FLASH_WINDOW_BITS) | IXDP2X01_CPLD_FLASH_INTERN);
+ /*
+ * Reset flash banking register so that we are pointing at
+ * RedBoot bank.
+ */
+ if (machine_is_ixdp2401()) {
+ *IXDP2X01_CPLD_FLASH_REG = ((0 >> IXDP2X01_FLASH_WINDOW_BITS)
+ | IXDP2X01_CPLD_FLASH_INTERN);
*IXDP2X01_CPLD_RESET_REG = 0xffffffff;
}
+ /*
+ * On IXDP2801 we need to write this magic sequence to the CPLD
+ * to cause a complete reset of the CPU and all external devices
+	 * and move the flash bank register back to 0.
+ */
+ if (machine_is_ixdp2801()) {
+ unsigned long reset_reg = *IXDP2X01_CPLD_RESET_REG;
+ reset_reg = 0x55AA0000 | (reset_reg & 0x0000FFFF);
+ *IXDP2X01_CPLD_RESET_REG = reset_reg;
+ mb();
+ *IXDP2X01_CPLD_RESET_REG = 0x80000000;
+ }
+
/*
* We do a reset all if we are PCI master. We could be a slave and we
* don't want to do anything funky on the PCI bus.
--- /dev/null
+/* linux/include/asm-arm/arch-ixp4xx/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0xc8000000
+ movne \rx, #0xff000000
+		add	\rx,\rx,#3	@ UART regs are at an offset of 3 if
+ @ byte writes used - Big Endian.
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro waituart,rd,rx
+1002: ldrb \rd, [\rx, #0x14]
+ and \rd, \rd, #0x60 @ check THRE and TEMT bits
+ teq \rd, #0x60
+ bne 1002b
+ .endm
+
+ .macro busyuart,rd,rx
+ .endm
*/
extern void ixp4xx_map_io(void);
extern void ixp4xx_init_irq(void);
+extern void ixp4xx_sys_init(void);
extern struct sys_timer ixp4xx_timer;
extern void ixp4xx_pci_preinit(void);
struct pci_sys_data;
/* set the "key" register to enable access to
* "timer" and "enable" registers
*/
- *IXP4XX_OSWK = 0x482e;
+ *IXP4XX_OSWK = IXP4XX_WDT_KEY;
- /* write 0 to the timer register for an immidiate reset */
+ /* write 0 to the timer register for an immediate reset */
*IXP4XX_OSWT = 0;
- /* disable watchdog interrupt, enable reset, enable count */
- *IXP4XX_OSWE = 0x3;
+ *IXP4XX_OSWE = IXP4XX_WDT_RESET_ENABLE | IXP4XX_WDT_COUNT_ENABLE;
}
}
static __inline__ void __arch_decomp_setup(unsigned long arch_id)
{
/*
- * Coyote only has UART2 connected
+ * Coyote and gtwx5715 only have UART2 connected
*/
- if (machine_is_adi_coyote())
+ if (machine_is_adi_coyote() || machine_is_gtwx5715())
uart_base = (volatile u32*) IXP4XX_UART2_BASE_PHYS;
else
uart_base = (volatile u32*) IXP4XX_UART1_BASE_PHYS;
--- /dev/null
+/*
+ * include/asm-arm/arch-lh7a40x/entry-macro.S
+ *
+ * Low-level IRQ helper macros for LH7A40x platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+# if defined (CONFIG_ARCH_LH7A400) && defined (CONFIG_ARCH_LH7A404)
+# error "LH7A400 and LH7A404 are mutually exclusive"
+# endif
+
+# if defined (CONFIG_ARCH_LH7A400)
+ .macro disable_fiq
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov \irqnr, #0
+ mov \base, #io_p2v(0x80000000) @ APB registers
+ ldr \irqstat, [\base, #0x500] @ PIC INTSR
+
+1001: movs \irqstat, \irqstat, lsr #1 @ Shift into carry
+ bcs 1008f @ Bit set; irq found
+ add \irqnr, \irqnr, #1
+ bne 1001b @ Until no bits
+ b 1009f @ Nothing? Hmm.
+1008: movs \irqstat, #1 @ Force !Z
+1009:
+ .endm
+
+#elif defined(CONFIG_ARCH_LH7A404)
+
+ .macro disable_fiq
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov \irqnr, #0 @ VIC1 irq base
+ mov \base, #io_p2v(0x80000000) @ APB registers
+ add \base, \base, #0x8000
+ ldr \tmp, [\base, #0x0030] @ VIC1_VECTADDR
+ tst \tmp, #VA_VECTORED @ Direct vectored
+ bne 1002f
+ tst \tmp, #VA_VIC1DEFAULT @ Default vectored VIC1
+ ldrne \irqstat, [\base, #0] @ VIC1_IRQSTATUS
+ bne 1001f
+ add \base, \base, #(0xa000 - 0x8000)
+ ldr \tmp, [\base, #0x0030] @ VIC2_VECTADDR
+ tst \tmp, #VA_VECTORED @ Direct vectored
+ bne 1002f
+ ldr \irqstat, [\base, #0] @ VIC2_IRQSTATUS
+ mov \irqnr, #32 @ VIC2 irq base
+
+1001: movs \irqstat, \irqstat, lsr #1 @ Shift into carry
+ bcs 1008f @ Bit set; irq found
+ add \irqnr, \irqnr, #1
+ bne 1001b @ Until no bits
+ b 1009f @ Nothing? Hmm.
+1002: and \irqnr, \tmp, #0x3f @ Mask for valid bits
+1008: movs \irqstat, #1 @ Force !Z
+ str \tmp, [\base, #0x0030] @ Clear vector
+1009:
+ .endm
+#endif
+
+
#define OMAP_GPIO_RISING_EDGE 0x02
#define OMAP_GPIO_BOTH_EDGES 0x03
+extern int omap_gpio_init(void); /* Call from board init only */
extern int omap_request_gpio(int gpio);
extern void omap_free_gpio(int gpio);
extern void omap_set_gpio_direction(int gpio, int is_input);
#define INT_1610_SoSSI (9 + IH2_BASE)
#define INT_1610_SoSSI_MATCH (19 + IH2_BASE)
#define INT_1610_McBSP2RX_OF (31 + IH2_BASE)
+#define INT_1610_STI (32 + IH2_BASE)
+#define INT_1610_STI_WAKEUP (33 + IH2_BASE)
#define INT_1610_GPIO_BANK2 (40 + IH2_BASE)
#define INT_1610_GPIO_BANK3 (41 + IH2_BASE)
#define INT_1610_MMC2 (42 + IH2_BASE)
*/
#define OMAP1610_SRAM_IDLE_SUSPEND (OMAP16XX_SRAM_BASE + OMAP1610_SRAM_SIZE - 0x200)
-#define OMAP5912_SRAM_IDLE_SUSPEND (OMAP16XX_SRAM_BASE + OMAP5912_SRAM_SIZE - 0x200)
#define OMAP1610_SRAM_API_SUSPEND (OMAP1610_SRAM_IDLE_SUSPEND + 0x100)
+#define OMAP5912_SRAM_IDLE_SUSPEND (OMAP16XX_SRAM_BASE + OMAP5912_SRAM_SIZE - 0x200)
+#define OMAP5912_SRAM_API_SUSPEND (OMAP5912_SRAM_IDLE_SUSPEND + 0x100)
/*
* ---------------------------------------------------------------------------
--- /dev/null
+#ifndef __ASM_ARCH_AUDIO_H__
+#define __ASM_ARCH_AUDIO_H__
+
+#include <sound/driver.h>
+#include <sound/core.h>
+#include <sound/pcm.h>
+
+typedef struct {
+ int (*startup)(snd_pcm_substream_t *, void *);
+ void (*shutdown)(snd_pcm_substream_t *, void *);
+ void (*suspend)(void *);
+ void (*resume)(void *);
+ void *priv;
+} pxa2xx_audio_ops_t;
+
+#endif
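
(For illustration only: a board file would typically fill in one of these
ops structures and pass it to the PXA audio driver, e.g. as platform data.
The names below are hypothetical and not part of this change.)

static int myboard_audio_startup(snd_pcm_substream_t *substream, void *priv)
{
	/* e.g. enable the external codec clock here */
	return 0;
}

static void myboard_audio_shutdown(snd_pcm_substream_t *substream, void *priv)
{
	/* e.g. gate the codec clock again */
}

static pxa2xx_audio_ops_t myboard_audio_ops = {
	.startup	= myboard_audio_startup,
	.shutdown	= myboard_audio_shutdown,
};
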
--- /dev/null
+/* linux/include/asm-arm/arch-pxa/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x40000000 @ physical
+ movne \rx, #io_p2v(0x40000000) @ virtual
+ orr \rx, \rx, #0x00100000
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, #0]
+ .endm
+
+ .macro busyuart,rd,rx
+1002: ldr \rd, [\rx, #0x14]
+ tst \rd, #(1 << 6)
+ beq 1002b
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldr \rd, [\rx, #0x14]
+ tst \rd, #(1 << 5)
+ beq 1001b
+ .endm
void ssp_disable(struct ssp_dev *dev);
void ssp_save_state(struct ssp_dev *dev, struct ssp_state *ssp);
void ssp_restore_state(struct ssp_dev *dev, struct ssp_state *ssp);
-int ssp_init(struct ssp_dev *dev, u32 port, u32 mode, u32 flags, u32 psp_flags,
- u32 speed);
+int ssp_init(struct ssp_dev *dev, u32 port);
+int ssp_config(struct ssp_dev *dev, u32 mode, u32 flags, u32 psp_flags, u32 speed);
void ssp_exit(struct ssp_dev *dev);
#endif
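
(A minimal sketch of the split interface above; the port number and the
zeroed configuration values are placeholders, not taken from this change,
and the prototypes are assumed to come from the arch ssp header.)

static struct ssp_dev my_ssp;

static int my_ssp_setup(void)
{
	int ret;

	ret = ssp_init(&my_ssp, 1);		/* claim SSP port 1 first */
	if (ret)
		return ret;

	ret = ssp_config(&my_ssp, 0, 0, 0, 0);	/* then apply the settings */
	if (ret)
		ssp_exit(&my_ssp);
	return ret;
}
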
--- /dev/null
+/* linux/include/asm-arm/arch-rpc/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mov \rx, #0xe0000000
+ orr \rx, \rx, #0x00010000
+ orr \rx, \rx, #0x00000fe0
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx]
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldrb \rd, [\rx, #0x14]
+ and \rd, \rd, #0x60
+ teq \rd, #0x60
+ bne 1001b
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldrb \rd, [\rx, #0x18]
+ tst \rd, #0x10
+ beq 1001b
+ .endm
return (unsigned sz)value; \
}
-static inline unsigned int __ioaddr (unsigned int port) \
-{ \
- if (__PORT_PCIO(port)) \
- return (unsigned int)(PCIO_BASE + (port << 2)); \
- else \
- return (unsigned int)(IO_BASE + (port << 2)); \
+static inline void __iomem *__ioaddr(unsigned int port)
+{
+ void __iomem *ret;
+ if (__PORT_PCIO(port))
+ ret = (void __iomem *)PCIO_BASE;
+ else
+ ret = (void __iomem *)IO_BASE;
+ return ret + (port << 2);
}
#define DECLARE_IO(sz,fnsuffix,instr) \
else \
__asm__ __volatile__( \
"str %0, [%1, %2] @ outlc" \
- : : "r" (__v), "r" (IO_BASE), "r" ((port) << 2)); \
+ : : "r" (__v), "r" (IO_BASE), "r" ((port) << 2)); \
})
#define __inlc(port) \
})
#define __ioaddrc(port) \
- (__PORT_PCIO((port)) ? PCIO_BASE + ((port) << 2) : IO_BASE + ((port) << 2))
+ ((void __iomem *)(__PORT_PCIO((port)) ? PCIO_BASE : IO_BASE) + ((port) << 2))
#define inb(p) (__builtin_constant_p((p)) ? __inbc(p) : __inb(p))
#define inw(p) (__builtin_constant_p((p)) ? __inwc(p) : __inw(p))
#define outl(v,p) (__builtin_constant_p((p)) ? __outlc(v,p) : __outl(v,p))
#define __ioaddr(p) (__builtin_constant_p((p)) ? __ioaddr(p) : __ioaddrc(p))
/* the following macro is deprecated */
-#define ioaddr(port) __ioaddr((port))
+#define ioaddr(port) ((unsigned long)__ioaddr((port)))
#define insb(p,d,l) __raw_readsb(__ioaddr(p),d,l)
#define insw(p,d,l) __raw_readsw(__ioaddr(p),d,l)
--- /dev/null
+/* linux/include/asm-arm/arch-s3c2410/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Copyright (C) 2005 Simtec Electronics
+ *
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+#include <asm/arch/map.h>
+#include <asm/arch/regs-serial.h>
+#include <asm/arch/regs-gpio.h>
+
+#define S3C2410_UART1_OFF (0x4000)
+#define SHIFT_2440TXF (14-9)
+
+ .macro addruart, rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1
+ ldreq \rx, = S3C2410_PA_UART
+ ldrne \rx, = S3C2410_VA_UART
+#if CONFIG_DEBUG_S3C2410_UART != 0
+ add \rx, \rx, #(S3C2410_UART1_OFF * CONFIG_DEBUG_S3C2410_UART)
+#endif
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, # S3C2410_UTXH ]
+ .endm
+
+ .macro busyuart, rd, rx
+ ldr \rd, [ \rx, # S3C2410_UFCON ]
+ tst \rd, #S3C2410_UFCON_FIFOMODE @ fifo enabled?
+ beq 1001f @
+ @ FIFO enabled...
+1003:
+ mrc p15, 0, \rd, c1, c0
+ tst \rd, #1
+ addeq \rd, \rx, #(S3C2410_PA_GPIO - S3C2410_PA_UART)
+ addne \rd, \rx, #(S3C2410_VA_GPIO - S3C2410_VA_UART)
+ bic \rd, \rd, #0xff000
+ ldr \rd, [ \rd, # S3C2410_GSTATUS1 - S3C2410_GPIOREG(0) ]
+ and \rd, \rd, #0x00ff0000
+ teq \rd, #0x00440000 @ is it 2440?
+
+ ldr \rd, [ \rx, # S3C2410_UFSTAT ]
+ moveq \rd, \rd, lsr #SHIFT_2440TXF
+ tst \rd, #S3C2410_UFSTAT_TXFULL
+ bne 1003b
+ b 1002f
+
+1001:
+ @ busy waiting for non fifo
+ ldr \rd, [ \rx, # S3C2410_UTRSTAT ]
+ tst \rd, #S3C2410_UTRSTAT_TXFE
+ beq 1001b
+
+1002: @ exit busyuart
+ .endm
+
+ .macro waituart,rd,rx
+
+ ldr \rd, [ \rx, # S3C2410_UFCON ]
+ tst \rd, #S3C2410_UFCON_FIFOMODE @ fifo enabled?
+ beq 1001f @
+ @ FIFO enabled...
+1003:
+ mrc p15, 0, \rd, c1, c0
+ tst \rd, #1
+ addeq \rd, \rx, #(S3C2410_PA_GPIO - S3C2410_PA_UART)
+ addne \rd, \rx, #(S3C2410_VA_GPIO - S3C2410_VA_UART)
+ bic \rd, \rd, #0xff000
+ ldr \rd, [ \rd, # S3C2410_GSTATUS1 - S3C2410_GPIOREG(0) ]
+ and \rd, \rd, #0x00ff0000
+ teq \rd, #0x00440000 @ is it 2440?
+
+ ldr \rd, [ \rx, # S3C2410_UFSTAT ]
+ andne \rd, \rd, #S3C2410_UFSTAT_TXMASK
+ andeq \rd, \rd, #S3C2440_UFSTAT_TXMASK
+ teq \rd, #0
+ bne 1003b
+ b 1002f
+
+1001:
+ @ idle waiting for non fifo
+ ldr \rd, [ \rx, # S3C2410_UTRSTAT ]
+ tst \rd, #S3C2410_UTRSTAT_TXFE
+ beq 1001b
+
+1002:		@ exit waituart
+ .endm
extern int s3c2410_dma_devconfig(int channel, s3c2410_dmasrc_t source,
int hwcfg, unsigned long devaddr);
+/* s3c2410_dma_getposition
+ *
+ * get the position that the dma transfer is currently at
+*/
+
+extern int s3c2410_dma_getposition(dmach_t channel,
+ dma_addr_t *src, dma_addr_t *dest);
+
extern int s3c2410_dma_set_opfn(dmach_t, s3c2410_dma_opfn_t rtn);
extern int s3c2410_dma_set_buffdone_fn(dmach_t, s3c2410_dma_cbfn_t rtn);
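
(A minimal sketch of the new call, assuming a channel that has already been
configured elsewhere; the function name and printk are purely illustrative.)

static void my_dma_debug(dmach_t chan)
{
	dma_addr_t src, dst;

	if (s3c2410_dma_getposition(chan, &src, &dst) == 0)
		printk(KERN_DEBUG "dma%d: src=%08lx dst=%08lx\n",
		       chan, (unsigned long)src, (unsigned long)dst);
}
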
--- /dev/null
+/*
+ * include/asm-arm/arch-s3c2410/entry-macro.S
+ *
+ * Low-level IRQ helper macros for S3C2410-based platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+
+ mov \tmp, #S3C2410_VA_IRQ
+ ldr \irqnr, [ \tmp, #0x14 ] @ get irq no
+30000:
+ teq \irqnr, #4
+ teqne \irqnr, #5
+ beq 1002f @ external irq reg
+
+ @ debug check to see if interrupt reported is the same
+ @ as the offset....
+
+ teq \irqnr, #0
+ beq 20002f
+ ldr \irqstat, [ \tmp, #0x10 ] @ INTPND
+ mov \irqstat, \irqstat, lsr \irqnr
+ tst \irqstat, #1
+ bne 20002f
+
+	/* debug/warning if we get an invalid response from the
+ * INTOFFSET register */
+#if 1
+ stmfd r13!, { r0 - r4 , r8-r12, r14 }
+ ldr r1, [ \tmp, #0x14 ] @ INTOFFSET
+ ldr r2, [ \tmp, #0x10 ] @ INTPND
+ ldr r3, [ \tmp, #0x00 ] @ SRCPND
+ adr r0, 20003f
+ bl printk
+ b 20004f
+
+20003:
+ .ascii "<7>irq: err - bad offset %d, intpnd=%08x, srcpnd=%08x\n"
+ .byte 0
+ .align 4
+20004:
+ mov r1, #1
+ mov \tmp, #S3C2410_VA_IRQ
+ ldmfd r13!, { r0 - r4 , r8-r12, r14 }
+#endif
+
+ @ try working out interrupt number for ourselves
+ mov \irqnr, #0
+ ldr \irqstat, [ \tmp, #0x10 ] @ INTPND
+10021:
+ movs \irqstat, \irqstat, lsr#1
+	bcs	30000b		@ try and re-start the process
+ add \irqnr, \irqnr, #1
+ cmp \irqnr, #32
+ ble 10021b
+
+ @ found no interrupt, set Z flag and leave
+ movs \irqnr, #0
+ b 1001f
+
+20005:
+20002: @ exit
+ @ we base the s3c2410x interrupts at 16 and above to allow
+		@ ISA peripherals to have their standard interrupts; also
+ @ ensure that Z flag is un-set on exit
+
+ @ note, we cannot be sure if we get IRQ_EINT0 (0) that
+ @ there is simply no interrupt pending, so in all other
+		@ cases we jump to say we have found something; otherwise
+		@ we check to see if the interrupt really is asserted
+ adds \irqnr, \irqnr, #IRQ_EINT0
+ teq \irqnr, #IRQ_EINT0
+ bne 1001f @ exit
+ ldr \irqstat, [ \tmp, #0x10 ] @ INTPND
+ teq \irqstat, #0
+ moveq \irqnr, #0
+ b 1001f
+
+ @ we get here from no main or external interrupts pending
+1002:
+ add \tmp, \tmp, #S3C2410_VA_GPIO - S3C2410_VA_IRQ
+ ldr \irqstat, [ \tmp, # 0xa8 ] @ EXTINTPEND
+ ldr \irqnr, [ \tmp, # 0xa4 ] @ EXTINTMASK
+
+ bic \irqstat, \irqstat, \irqnr @ clear masked irqs
+
+ mov \irqnr, #IRQ_EINT4 @ start extint nos
+ mov \irqstat, \irqstat, lsr#4 @ ignore bottom 4 bits
+10021:
+ movs \irqstat, \irqstat, lsr#1
+ bcs 1004f
+ add \irqnr, \irqnr, #1
+ cmp \irqnr, #IRQ_EINT23
+ ble 10021b
+
+ @ found no interrupt, set Z flag and leave
+ movs \irqnr, #0
+
+1004: @ ensure Z flag clear in case our MOVS shifted out the last bit
+ teq \irqnr, #0
+1001:
+ @ exit irq routine
+ .endm
+
+
+	/* currently don't need a disable_fiq macro */
+
+ .macro disable_fiq
+ .endm
+
+
return (unsigned sz)value; \
}
-static inline unsigned int __ioaddr (unsigned int port)
+static inline void __iomem *__ioaddr (unsigned int port)
{
- if (__PORT_PCIO(port))
- return (unsigned int)(PCIO_BASE + (port));
- else
- return (unsigned int)(0 + (port));
+ return (void __iomem *)(__PORT_PCIO(port) ? PCIO_BASE + port : port);
}
#define DECLARE_IO(sz,fnsuffix,instr) \
result; \
})
-#define __ioaddrc(port) (__PORT_PCIO((port)) ? PCIO_BASE + ((port)) : ((port)))
+#define __ioaddrc(port) ((void __iomem *)(__PORT_PCIO(port) ? PCIO_BASE + (port) : (port)))
#define inb(p) (__builtin_constant_p((p)) ? __inbc(p) : __inb(p))
#define inw(p) (__builtin_constant_p((p)) ? __inwc(p) : __inw(p))
* 19-06-2003 Ben Dooks Created file
* 12-03-2004 Ben Dooks Updated include protection
* 29-Sep-2004 Ben Dooks Fixed usage for assembly inclusion
+ * 10-Feb-2005 Ben Dooks Fixed CAMDIVN address (Guillaume Gourat)
*/
#ifndef __ASM_ARM_REGS_CLOCK
static inline unsigned int
s3c2410_get_pll(int pllval, int baseclk)
{
- int mdiv, pdiv, sdiv;
+ int mdiv, pdiv, sdiv;
- mdiv = pllval >> S3C2410_PLLCON_MDIVSHIFT;
- pdiv = pllval >> S3C2410_PLLCON_PDIVSHIFT;
- sdiv = pllval >> S3C2410_PLLCON_SDIVSHIFT;
+ mdiv = pllval >> S3C2410_PLLCON_MDIVSHIFT;
+ pdiv = pllval >> S3C2410_PLLCON_PDIVSHIFT;
+ sdiv = pllval >> S3C2410_PLLCON_SDIVSHIFT;
- mdiv &= S3C2410_PLLCON_MDIVMASK;
- pdiv &= S3C2410_PLLCON_PDIVMASK;
- sdiv &= S3C2410_PLLCON_SDIVMASK;
+ mdiv &= S3C2410_PLLCON_MDIVMASK;
+ pdiv &= S3C2410_PLLCON_PDIVMASK;
+ sdiv &= S3C2410_PLLCON_SDIVMASK;
- return (baseclk * (mdiv + 8)) / ((pdiv + 2) << sdiv);
+ return (baseclk * (mdiv + 8)) / ((pdiv + 2) << sdiv);
}
#endif /* __ASSEMBLY__ */
#ifdef CONFIG_CPU_S3C2440
/* extra registers */
-#define S3C2440_CAMDIVN S3C2410_CLKREG(0x14)
+#define S3C2440_CAMDIVN S3C2410_CLKREG(0x18)
#define S3C2440_CLKCON_CAMERA (1<<19)
#define S3C2440_CLKCON_AC97 (1<<20)
*
* Changelog:
* 11-Aug-2004 BJD Created file
+ * 10-Feb-2005 BJD Fix GPJ12 definition (Guillaume Gourat)
*/
#define S3C2440_GPJ12 S3C2410_GPIONO(S3C2440_GPIO_BANKJ, 12)
#define S3C2440_GPJ12_INP (0x00 << 24)
#define S3C2440_GPJ12_OUTP (0x01 << 24)
-#define S3C2440_GPJ12_CAMCLKOUT (0x02 << 24)
+#define S3C2440_GPJ12_CAMRESET (0x02 << 24)
#endif /* __ASM_ARCH_REGS_GPIOJ_H */
#ifndef __ASM_ARCH_REGS_IIS_H
#define __ASM_ARCH_REGS_IIS_H
-#define S3C2410_IISCON (S3C2410_VA_IIS + 0x00)
+#define S3C2410_IISCON (0x00)
#define S3C2410_IISCON_LRINDEX (1<<8)
#define S3C2410_IISCON_TXFIFORDY (1<<7)
#define S3C2410_IISCON_RXIDLE (1<<2)
#define S3C2410_IISCON_IISEN (1<<0)
-#define S3C2410_IISMOD (S3C2410_VA_IIS + 0x04)
+#define S3C2410_IISMOD (0x04)
#define S3C2410_IISMOD_SLAVE (1<<8)
#define S3C2410_IISMOD_NOXFER (0<<6)
#define S3C2410_IISMOD_32FS (1<<0)
#define S3C2410_IISMOD_48FS (2<<0)
-#define S3C2410_IISPSR (S3C2410_VA_IIS + 0x08)
+#define S3C2410_IISPSR (0x08)
+#define S3C2410_IISPSR_INTMASK (31<<5)
+#define S3C2410_IISPSR_INTSHFIT (5)
+#define S3C2410_IISPSR_EXTMASK (31<<0)
+#define S3C2410_IISPSR_EXTSHFIT (0)
-#define S3C2410_IISFCON (S3C2410_VA_IIS + 0x0c)
+#define S3C2410_IISFCON (0x0c)
#define S3C2410_IISFCON_TXDMA (1<<15)
#define S3C2410_IISFCON_RXDMA (1<<14)
#define S3C2410_IISFCON_TXENABLE (1<<13)
#define S3C2410_IISFCON_RXENABLE (1<<12)
-#define S3C2410_IISFIFO (S3C2410_VA_IIS + 0x10)
+#define S3C2410_IISFIFO (0x10)
#endif /* __ASM_ARCH_REGS_IIS_H */
/* linux/include/asm-arm/arch-s3c2410/timex.h
*
- * (c) 2003,2004 Simtec Electronics
+ * (c) 2003-2005 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* S3C2410 - time parameters
* 02-Sep-2003 BJD Created file
* 05-Jan-2004 BJD Updated for Linux 2.6.0
* 22-Nov-2004 BJD Fixed CLOCK_TICK_RATE
+ * 10-Jan-2004 BJD Removed s3c2410_clock_tick_rate
*/
#ifndef __ASM_ARCH_TIMEX_H
#define __ASM_ARCH_TIMEX_H
-#if 0
-/* todo - this does not seem to work with 2.6.0 -> division by zero
- * in header files
- */
-extern int s3c2410_clock_tick_rate;
+/* CLOCK_TICK_RATE needs to be evaluatable by the cpp, so making it
+ * a variable is useless. It seems that as long as we make our timers an
+ * exact multiple of HZ, any value that gives a 1->1 correspondence
+ * for the time conversion functions to/from jiffies is acceptable.
+*/
-#define CLOCK_TICK_RATE (s3c2410_clock_tick_rate)
-#endif
-/* currently, the BAST uses 12MHz as a base clock rate */
#define CLOCK_TICK_RATE 12000000
/* linux/include/asm-arm/arch-s3c2410/vr1000-map.h
*
- * (c) 2003,2004 Simtec Electronics
+ * (c) 2003-2005 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
* Machine VR1000 - Memory map definitions
* 06-Jan-2003 BJD Linux 2.6.0 version, split specifics from arch/map.h
* 12-Mar-2004 BJD Fixed header include protection
* 19-Mar-2004 BJD Copied to VR1000 machine headers.
+ * 19-Jan-2005 BJD Updated map definitions
*/
/* needs arch/map.h including with this */
#define VR1000_VA_DM9000 (VR1000_VA_MULTISPACE + 0x02500000)
#define VR1000_VA_SUPERIO (VR1000_VA_MULTISPACE + 0x02600000)
-
/* physical offset addresses for the peripherals */
#define VR1000_PA_IDEPRI (0x02000000)
#define VR1000_PA_DM9000 (0x05000000)
#define VR1000_PA_SERIAL (0x11800000)
+#define VR1000_VA_SERIAL (VR1000_IOADDR(0x00700000))
+
+/* VR1000 ram is in CS1, with A26..A24 = 2_101 */
+#define VR1000_PA_SRAM (S3C2410_CS1 | 0x05000000)
/* some configurations for the peripherals */
#include <linux/config.h>
-#define CF_BUF_CTRL_BASE 0xF0800000
-#define COLLIE_SCP_REG(adr) (*(volatile unsigned short*)(CF_BUF_CTRL_BASE+(adr)))
-#define COLLIE_SCP_MCR 0x00
-#define COLLIE_SCP_CDR 0x04
-#define COLLIE_SCP_CSR 0x08
-#define COLLIE_SCP_CPR 0x0C
-#define COLLIE_SCP_CCR 0x10
-#define COLLIE_SCP_IRR 0x14
-#define COLLIE_SCP_IRM 0x14
-#define COLLIE_SCP_IMR 0x18
-#define COLLIE_SCP_ISR 0x1C
-#define COLLIE_SCP_GPCR 0x20
-#define COLLIE_SCP_GPWR 0x24
-#define COLLIE_SCP_GPRR 0x28
-#define COLLIE_SCP_REG_MCR COLLIE_SCP_REG(COLLIE_SCP_MCR)
-#define COLLIE_SCP_REG_CDR COLLIE_SCP_REG(COLLIE_SCP_CDR)
-#define COLLIE_SCP_REG_CSR COLLIE_SCP_REG(COLLIE_SCP_CSR)
-#define COLLIE_SCP_REG_CPR COLLIE_SCP_REG(COLLIE_SCP_CPR)
-#define COLLIE_SCP_REG_CCR COLLIE_SCP_REG(COLLIE_SCP_CCR)
-#define COLLIE_SCP_REG_IRR COLLIE_SCP_REG(COLLIE_SCP_IRR)
-#define COLLIE_SCP_REG_IRM COLLIE_SCP_REG(COLLIE_SCP_IRM)
-#define COLLIE_SCP_REG_IMR COLLIE_SCP_REG(COLLIE_SCP_IMR)
-#define COLLIE_SCP_REG_ISR COLLIE_SCP_REG(COLLIE_SCP_ISR)
-#define COLLIE_SCP_REG_GPCR COLLIE_SCP_REG(COLLIE_SCP_GPCR)
-#define COLLIE_SCP_REG_GPWR COLLIE_SCP_REG(COLLIE_SCP_GPWR)
-#define COLLIE_SCP_REG_GPRR COLLIE_SCP_REG(COLLIE_SCP_GPRR)
-
-#define COLLIE_SCP_GPCR_PA19 ( 1 << 9 )
-#define COLLIE_SCP_GPCR_PA18 ( 1 << 8 )
-#define COLLIE_SCP_GPCR_PA17 ( 1 << 7 )
-#define COLLIE_SCP_GPCR_PA16 ( 1 << 6 )
-#define COLLIE_SCP_GPCR_PA15 ( 1 << 5 )
-#define COLLIE_SCP_GPCR_PA14 ( 1 << 4 )
-#define COLLIE_SCP_GPCR_PA13 ( 1 << 3 )
-#define COLLIE_SCP_GPCR_PA12 ( 1 << 2 )
-#define COLLIE_SCP_GPCR_PA11 ( 1 << 1 )
-
-#define COLLIE_SCP_CHARGE_ON COLLIE_SCP_GPCR_PA11
-#define COLLIE_SCP_DIAG_BOOT1 COLLIE_SCP_GPCR_PA12
-#define COLLIE_SCP_DIAG_BOOT2 COLLIE_SCP_GPCR_PA13
-#define COLLIE_SCP_MUTE_L COLLIE_SCP_GPCR_PA14
-#define COLLIE_SCP_MUTE_R COLLIE_SCP_GPCR_PA15
-#define COLLIE_SCP_5VON COLLIE_SCP_GPCR_PA16
-#define COLLIE_SCP_AMP_ON COLLIE_SCP_GPCR_PA17
-#define COLLIE_SCP_VPEN COLLIE_SCP_GPCR_PA18
-#define COLLIE_SCP_LB_VOL_CHG COLLIE_SCP_GPCR_PA19
-
-#define COLLIE_SCP_IO_DIR ( COLLIE_SCP_CHARGE_ON | COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | \
+#define COLLIE_SCP_CHARGE_ON SCOOP_GPCR_PA11
+#define COLLIE_SCP_DIAG_BOOT1 SCOOP_GPCR_PA12
+#define COLLIE_SCP_DIAG_BOOT2 SCOOP_GPCR_PA13
+#define COLLIE_SCP_MUTE_L SCOOP_GPCR_PA14
+#define COLLIE_SCP_MUTE_R SCOOP_GPCR_PA15
+#define COLLIE_SCP_5VON SCOOP_GPCR_PA16
+#define COLLIE_SCP_AMP_ON SCOOP_GPCR_PA17
+#define COLLIE_SCP_VPEN SCOOP_GPCR_PA18
+#define COLLIE_SCP_LB_VOL_CHG SCOOP_GPCR_PA19
+
+#define COLLIE_SCOOP_IO_DIR ( COLLIE_SCP_CHARGE_ON | COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | \
COLLIE_SCP_5VON | COLLIE_SCP_AMP_ON | COLLIE_SCP_VPEN | \
COLLIE_SCP_LB_VOL_CHG )
-#define COLLIE_SCP_IO_OUT ( COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | COLLIE_SCP_VPEN | \
+#define COLLIE_SCOOP_IO_OUT ( COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | COLLIE_SCP_VPEN | \
COLLIE_SCP_CHARGE_ON )
/* GPIOs for which the generic definition doesn't say much */
--- /dev/null
+/* linux/include/asm-arm/arch-sa1100/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x80000000 @ physical base address
+ movne \rx, #0xf8000000 @ virtual address
+
+		@ We probe for the active serial port here, consistently with
+ @ the comment in include/asm-arm/arch-sa1100/uncompress.h.
+ @ We assume r1 can be clobbered.
+
+ @ see if Ser3 is active
+ add \rx, \rx, #0x00050000
+ ldr r1, [\rx, #UTCR3]
+ tst r1, #UTCR3_TXE
+
+ @ if Ser3 is inactive, then try Ser1
+ addeq \rx, \rx, #(0x00010000 - 0x00050000)
+ ldreq r1, [\rx, #UTCR3]
+ tsteq r1, #UTCR3_TXE
+
+ @ if Ser1 is inactive, then try Ser2
+ addeq \rx, \rx, #(0x00030000 - 0x00010000)
+ ldreq r1, [\rx, #UTCR3]
+ tsteq r1, #UTCR3_TXE
+
+ @ if all ports are inactive, then there is nothing we can do
+ moveq pc, lr
+ .endm
+
+ .macro senduart,rd,rx
+ str \rd, [\rx, #UTDR]
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldr \rd, [\rx, #UTSR1]
+ tst \rd, #UTSR1_TNF
+ beq 1001b
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldr \rd, [\rx, #UTSR1]
+ tst \rd, #UTSR1_TBY
+ bne 1001b
+ .endm
--- /dev/null
+/* linux/include/asm-arm/arch-versatile/debug-macro.S
+ *
+ * Debugging macro include header
+ *
+ * Copyright (C) 1994-1999 Russell King
+ * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+*/
+
+#include <asm/hardware/amba_serial.h>
+
+ .macro addruart,rx
+ mrc p15, 0, \rx, c1, c0
+ tst \rx, #1 @ MMU enabled?
+ moveq \rx, #0x10000000
+ movne \rx, #0xf1000000 @ virtual base
+ orr \rx, \rx, #0x001F0000
+ orr \rx, \rx, #0x00001000
+ .endm
+
+ .macro senduart,rd,rx
+ strb \rd, [\rx, #UART01x_DR]
+ .endm
+
+ .macro waituart,rd,rx
+1001: ldr \rd, [\rx, #0x18] @ UARTFLG
+ tst \rd, #1 << 5 @ UARTFLGUTXFF - 1 when full
+ bne 1001b
+ .endm
+
+ .macro busyuart,rd,rx
+1001: ldr \rd, [\rx, #0x18] @ UARTFLG
+ tst \rd, #1 << 3 @ UARTFLGUBUSY - 1 when busy
+ bne 1001b
+ .endm
extern int _test_and_set_bit_le(int nr, volatile unsigned long * p);
extern int _test_and_clear_bit_le(int nr, volatile unsigned long * p);
extern int _test_and_change_bit_le(int nr, volatile unsigned long * p);
-extern int _find_first_zero_bit_le(void * p, unsigned size);
-extern int _find_next_zero_bit_le(void * p, int size, int offset);
+extern int _find_first_zero_bit_le(const void * p, unsigned size);
+extern int _find_next_zero_bit_le(const void * p, int size, int offset);
extern int _find_first_bit_le(const unsigned long *p, unsigned size);
extern int _find_next_bit_le(const unsigned long *p, int size, int offset);
extern int _test_and_set_bit_be(int nr, volatile unsigned long * p);
extern int _test_and_clear_bit_be(int nr, volatile unsigned long * p);
extern int _test_and_change_bit_be(int nr, volatile unsigned long * p);
-extern int _find_first_zero_bit_be(void * p, unsigned size);
-extern int _find_next_zero_bit_be(void * p, int size, int offset);
+extern int _find_first_zero_bit_be(const void * p, unsigned size);
+extern int _find_next_zero_bit_be(const void * p, int size, int offset);
extern int _find_first_bit_be(const unsigned long *p, unsigned size);
extern int _find_next_bit_be(const unsigned long *p, int size, int offset);
* Find first bit set in a 168-bit bitmap, where the first
* 128 bits are unlikely to be set.
*/
-static inline int sched_find_first_bit(unsigned long *b)
+static inline int sched_find_first_bit(const unsigned long *b)
{
unsigned long v;
unsigned int off;
--- /dev/null
+/*
+ * linux/include/asm-arm/cpu.h
+ *
+ * Copyright (C) 2004-2005 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __ASM_ARM_CPU_H
+#define __ASM_ARM_CPU_H
+
+#include <linux/config.h>
+#include <linux/percpu.h>
+
+struct cpuinfo_arm {
+ struct cpu cpu;
+#ifdef CONFIG_SMP
+ unsigned int loops_per_jiffy;
+#endif
+};
+
+DECLARE_PER_CPU(struct cpuinfo_arm, cpu_data);
+
+#endif
* Domain numbers
*
* DOMAIN_IO - domain 2 includes all IO only
- * DOMAIN_KERNEL - domain 1 includes all kernel memory only
- * DOMAIN_USER - domain 0 includes all user memory only
+ * DOMAIN_USER - domain 1 includes all user memory only
+ * DOMAIN_KERNEL - domain 0 includes all kernel memory only
*/
-#define DOMAIN_USER 0
-#define DOMAIN_KERNEL 1
-#define DOMAIN_TABLE 1
+#define DOMAIN_KERNEL 0
+#define DOMAIN_TABLE 0
+#define DOMAIN_USER 1
#define DOMAIN_IO 2
/*
#define DOMAIN_CLIENT 1
#define DOMAIN_MANAGER 3
-#define domain_val(dom,type) ((type) << 2*(dom))
+#define domain_val(dom,type) ((type) << (2*(dom)))
+#ifndef __ASSEMBLY__
#define set_domain(x) \
do { \
__asm__ __volatile__( \
} while (0)
#endif
+#endif /* !__ASSEMBLY__ */
#define EM_ARM 40
#define EF_ARM_APCS26 0x08
#define EF_ARM_SOFT_FLOAT 0x200
+#define EF_ARM_EABI_MASK 0xFF000000
#define R_ARM_NONE 0
#define R_ARM_PC24 1
#define SET_PERSONALITY(ex,ibcs2) \
do { \
set_personality(PER_LINUX_32BIT); \
- if ((ex).e_flags & EF_ARM_SOFT_FLOAT) \
+ if (((ex).e_flags & EF_ARM_EABI_MASK) || \
+ ((ex).e_flags & EF_ARM_SOFT_FLOAT)) \
set_thread_flag(TIF_USING_IWMMXT); \
} while (0)
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-
-extern asmlinkage void __do_softirq(void);
-
-#define irq_exit() \
- do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && local_softirq_pending()) \
- __do_softirq(); \
- preempt_enable_no_resched(); \
- } while (0)
+#define __ARCH_IRQ_EXIT_IRQS_DISABLED 1
#endif /* __ASM_HARDIRQ_H */
*/
int (*setup)(struct clcd_fb *);
+ /*
+ * mmap the framebuffer memory
+ */
+ int (*mmap)(struct clcd_fb *, struct vm_area_struct *);
+
/*
* Remove platform specific parts of CLCD driver
*/
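
(A rough sketch of a board-provided handler for the new mmap hook above,
assuming the framebuffer was allocated with dma_alloc_writecombine; the
function name is hypothetical.)

static int myboard_clcd_mmap(struct clcd_fb *fb, struct vm_area_struct *vma)
{
	return dma_mmap_writecombine(&fb->dev->dev, vma,
				     fb->fb.screen_base,
				     fb->fb.fix.smem_start,
				     fb->fb.fix.smem_len);
}
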
--- /dev/null
+/*
+ * arch/arm/common/entry-macro-iomd.S
+ *
+ * Low-level IRQ helper macros for IOC/IOMD based platforms
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+/* IOC / IOMD based hardware */
+#include <asm/hardware/iomd.h>
+
+ .equ ioc_base_high, IOC_BASE & 0xff000000
+ .equ ioc_base_low, IOC_BASE & 0x00ff0000
+ .macro disable_fiq
+ mov r12, #ioc_base_high
+ .if ioc_base_low
+ orr r12, r12, #ioc_base_low
+ .endif
+ strb r12, [r12, #0x38] @ Disable FIQ register
+ .endm
+
+ .macro get_irqnr_and_base, irqnr, irqstat, base, tmp
+ mov r4, #ioc_base_high @ point at IOC
+ .if ioc_base_low
+ orr r4, r4, #ioc_base_low
+ .endif
+ ldrb \irqstat, [r4, #IOMD_IRQREQB] @ get high priority first
+ ldr \base, =irq_prio_h
+ teq \irqstat, #0
+#ifdef IOMD_BASE
+ ldreqb \irqstat, [r4, #IOMD_DMAREQ] @ get dma
+ addeq \base, \base, #256 @ irq_prio_h table size
+ teqeq \irqstat, #0
+ bne 2406f
+#endif
+ ldreqb \irqstat, [r4, #IOMD_IRQREQA] @ get low priority
+ addeq \base, \base, #256 @ irq_prio_d table size
+ teqeq \irqstat, #0
+#ifdef IOMD_IRQREQC
+ ldreqb \irqstat, [r4, #IOMD_IRQREQC]
+ addeq \base, \base, #256 @ irq_prio_l table size
+ teqeq \irqstat, #0
+#endif
+#ifdef IOMD_IRQREQD
+ ldreqb \irqstat, [r4, #IOMD_IRQREQD]
+ addeq \base, \base, #256 @ irq_prio_lc table size
+ teqeq \irqstat, #0
+#endif
+2406: ldrneb \irqnr, [\base, \irqstat] @ get IRQ number
+ .endm
+
+/*
+ * Interrupt table (incorporates priority). Please note that we
+ * rely on the order of these tables (see above code).
+ */
+ .align 5
+irq_prio_h: .byte 0, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 12, 8, 9, 8,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 14,14,14,14,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 15,15,15,15,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+ .byte 13,13,13,13,10,10,10,10,11,11,11,11,10,10,10,10
+#ifdef IOMD_BASE
+irq_prio_d: .byte 0,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 20,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 23,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 22,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+ .byte 21,16,17,16,18,16,17,16,19,16,17,16,18,16,17,16
+#endif
+irq_prio_l: .byte 0, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 4, 0, 1, 0, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 3, 3, 3, 3, 3, 3
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+ .byte 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
+#ifdef IOMD_IRQREQC
+irq_prio_lc: .byte 24,24,25,24,26,26,26,26,27,27,27,27,27,27,27,27
+ .byte 28,24,25,24,26,26,26,26,27,27,27,27,27,27,27,27
+ .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29
+ .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29
+ .byte 30,30,30,30,30,30,30,30,27,27,27,27,27,27,27,27
+ .byte 30,30,30,30,30,30,30,30,27,27,27,27,27,27,27,27
+ .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29
+ .byte 29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+ .byte 31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31
+#endif
+#ifdef IOMD_IRQREQD
+irq_prio_ld: .byte 40,40,41,40,42,42,42,42,43,43,43,43,43,43,43,43
+ .byte 44,40,41,40,42,42,42,42,43,43,43,43,43,43,43,43
+ .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45
+ .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45
+ .byte 46,46,46,46,46,46,46,46,43,43,43,43,43,43,43,43
+ .byte 46,46,46,46,46,46,46,46,43,43,43,43,43,43,43,43
+ .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45
+ .byte 45,45,45,45,45,45,45,45,45,45,45,45,45,45,45,45
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+ .byte 47,47,47,47,47,47,47,47,47,47,47,47,47,47,47,47
+#endif
+
* Set wakeup-enable on the selected IRQ
*/
int (*wake)(unsigned int, unsigned int);
+
+#ifdef CONFIG_SMP
+ /*
+ * Route an interrupt to a CPU
+ */
+ void (*set_cpu)(struct irqdesc *desc, unsigned int irq, unsigned int cpu);
+#endif
};
struct irqdesc {
unsigned int noautoenable : 1; /* don't automatically enable IRQ */
unsigned int unused :25;
+ struct proc_dir_entry *procdir;
+
+#ifdef CONFIG_SMP
+ cpumask_t affinity;
+ unsigned int cpu;
+#endif
+
/*
* IRQ lock detection
*/
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+
+/* This declaration for the size of the NUMA (CONFIG_DISCONTIGMEM)
+ * memory node table is the default.
+ *
+ * A good place to override this value is include/asm/arch/memory.h.
+ */
+
#ifndef __ASM_ARM_NUMNODES_H
#define __ASM_ARM_NUMNODES_H
-#ifdef CONFIG_ARCH_LH7A40X
-# define NODES_SHIFT 4 /* Max 16 nodes for the Sharp CPUs */
-#else
+#ifndef NODES_SHIFT
# define NODES_SHIFT 2 /* Normally, Max 4 Nodes */
#endif
{
pte_t *pte;
- pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT);
+ pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
if (pte) {
- clear_page(pte);
clean_dcache_area(pte, sizeof(pte_t) * PTRS_PER_PTE);
pte += PTRS_PER_PTE;
}
{
struct page *pte;
- pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
+ pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
if (pte) {
void *page = page_address(pte);
- clear_page(page);
clean_dcache_area(page, sizeof(pte_t) * PTRS_PER_PTE);
}
#define PTRACE_OLDSETOPTIONS 21
+#define PTRACE_GET_THREAD_AREA 22
/*
* PSR bits
*/
-#ifndef __ASM_SMP_H
-#define __ASM_SMP_H
+/*
+ * linux/include/asm-arm/smp.h
+ *
+ * Copyright (C) 2004-2005 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __ASM_ARM_SMP_H
+#define __ASM_ARM_SMP_H
#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/cpumask.h>
+#include <linux/thread_info.h>
-#ifdef CONFIG_SMP
-#error SMP not supported
-#endif
+#include <asm/arch/smp.h>
+#ifndef CONFIG_SMP
+# error "<asm-arm/smp.h> included in non-SMP build"
#endif
+
+#define smp_processor_id() (current_thread_info()->cpu)
+
+extern cpumask_t cpu_present_mask;
+#define cpu_possible_map cpu_present_mask
+
+/*
+ * at the moment, there's not a big penalty for changing CPUs
+ * (the >big< penalty is running SMP in the first place)
+ */
+#define PROC_CHANGE_PENALTY 15
+
+struct seq_file;
+
+/*
+ * generate IPI list text
+ */
+extern void show_ipi_list(struct seq_file *p);
+
+/*
+ * Move global data into per-processor storage.
+ */
+extern void smp_store_cpu_info(unsigned int cpuid);
+
+/*
+ * Raise an IPI cross call on CPUs in callmap.
+ */
+extern void smp_cross_call(cpumask_t callmap);
+
+/*
+ * Boot a secondary CPU, and assign it the specified idle task.
+ * This also gives us the initial stack to use for this CPU.
+ */
+extern int boot_secondary(unsigned int cpu, struct task_struct *);
+
+#endif /* ifndef __ASM_ARM_SMP_H */
#define __HAVE_ARCH_MEMSET
extern void * memset(void *, int, __kernel_size_t);
-#define __HAVE_ARCH_BCOPY
-
extern void __memzero(void *ptr, __kernel_size_t n);
#define memset(p,v,n) \
/*
* Copyright 1995, Russell King.
- * Various bits and pieces copyrights include:
- * Linus Torvalds (test_bit).
- * Big endian support: Copyright 2001, Nicolas Pitre
- * reworked by rmk.
*
- * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+ * Based on the arm32 version by RMK (and others). Their copyrights apply to
+ * those parts.
+ * Modified for arm26 by Ian Molton on 25/11/04
+ *
+ * bit 0 is the LSB of an "unsigned long" quantity.
*
* Please note that the code in this file should never be included
* from user space. Many of these are not implemented in assembler
- * since they would be too costly. Also, they require priviledged
+ * since they would be too costly. Also, they require privileged
* instructions (which are not available from user mode) to ensure
* that they are atomic.
*/
#ifdef __KERNEL__
+#include <linux/compiler.h>
#include <asm/system.h>
#define smp_mb__before_clear_bit() do { } while (0)
/*
* These functions are the basis of our bit ops.
- * First, the atomic bitops.
*
- * The endian issue for these functions is handled by the macros below.
+ * First, the atomic bitops. These use native endian.
*/
-static inline void
-____atomic_set_bit(unsigned int bit, volatile unsigned long *p)
+static inline void ____atomic_set_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
local_irq_restore(flags);
}
-static inline void
-____atomic_clear_bit(unsigned int bit, volatile unsigned long *p)
+static inline void ____atomic_clear_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
local_irq_restore(flags);
}
-static inline void
-____atomic_change_bit(unsigned int bit, volatile unsigned long *p)
+static inline void ____atomic_change_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned long mask = 1UL << (bit & 31);
}
static inline int
-____atomic_test_and_change_bit_mask(unsigned int bit, volatile unsigned long *p)
+____atomic_test_and_change_bit(unsigned int bit, volatile unsigned long *p)
{
unsigned long flags;
unsigned int res;
oldval = *p;
*p = oldval & ~mask;
-
return oldval & mask;
}
oldval = *p;
*p = oldval ^ mask;
-
return oldval & mask;
}
/*
* This routine doesn't need to be atomic.
*/
-static inline int __test_bit(int nr, const unsigned long * p)
+static inline int __test_bit(int nr, const volatile unsigned long * p)
{
- return p[nr >> 5] & (1UL << (nr & 31));
+ return (p[nr >> 5] >> (nr & 31)) & 1UL;
}
-/*
- * A note about Endian-ness.
- * -------------------------
- *
- * ------------ physical data bus bits -----------
- * D31 ... D24 D23 ... D16 D15 ... D8 D7 ... D0
- * byte 3 byte 2 byte 1 byte 0
- *
- * Note that bit 0 is defined to be 32-bit word bit 0, not byte 0 bit 0.
- */
-
/*
* Little endian assembly bitops. nr = 0 -> byte 0 bit 0.
*/
extern int _test_and_change_bit_le(int nr, volatile unsigned long * p);
extern int _find_first_zero_bit_le(void * p, unsigned size);
extern int _find_next_zero_bit_le(void * p, int size, int offset);
+extern int _find_first_bit_le(const unsigned long *p, unsigned size);
+extern int _find_next_bit_le(const unsigned long *p, int size, int offset);
/*
 * The __* forms of bitops are non-atomic and may be reordered.
____atomic_##name(nr, p) : \
_##name##_le(nr,p))
-#define ATOMIC_BITOP_BE(name,nr,p) \
- (__builtin_constant_p(nr) ? \
- ____atomic_##name(nr, p) : \
- _##name##_be(nr,p))
-
#define NONATOMIC_BITOP(name,nr,p) \
(____nonatomic_##name(nr, p))
#define test_bit(nr,p) __test_bit(nr,p)
#define find_first_zero_bit(p,sz) _find_first_zero_bit_le(p,sz)
#define find_next_zero_bit(p,sz,off) _find_next_zero_bit_le(p,sz,off)
+#define find_first_bit(p,sz) _find_first_bit_le(p,sz)
+#define find_next_bit(p,sz,off) _find_next_bit_le(p,sz,off)
#define WORD_BITOFF_TO_LE(x) ((x))
* These do not need to be atomic.
*/
#define ext2_set_bit(nr,p) \
- __test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
-#define ext2_set_bit_atomic(lock,nr,p) \
- test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
+ __test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
+#define ext2_set_bit_atomic(lock,nr,p) \
+ test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_clear_bit(nr,p) \
- __test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_clear_bit_atomic(lock,nr,p) \
- test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
+ test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_test_bit(nr,p) \
- __test_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __test_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_find_first_zero_bit(p,sz) \
_find_first_zero_bit_le(p,sz)
#define ext2_find_next_zero_bit(p,sz,off) \
* These do not need to be atomic.
*/
#define minix_set_bit(nr,p) \
- __set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define minix_test_bit(nr,p) \
- __test_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __test_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define minix_test_and_set_bit(nr,p) \
- __test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define minix_test_and_clear_bit(nr,p) \
- __test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)p)
+ __test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define minix_find_first_zero_bit(p,sz) \
_find_first_zero_bit_le(p,sz)
#ifndef __ASM_ARM_CHECKSUM_H
#define __ASM_ARM_CHECKSUM_H
+#include <linux/in6.h>
+
/*
* computes the checksum of a memory block at buff, length len,
* and adds in "sum" (32-bit)
csum_partial_copy_nocheck(const char *src, char *dst, int len, int sum);
unsigned int
-csum_partial_copy_from_user(const char *src, char *dst, int len, int sum, int *err_ptr);
+csum_partial_copy_from_user(const char __user *src, char *dst, int len, int sum, int *err_ptr);
/*
- * These are the old (and unsafe) way of doing checksums, a warning message will be
- * printed if they are used and an exception occurs.
+ * This is the old (and unsafe) way of doing checksums, a warning message will
+ * be printed if it is used and an exception occurs.
*
- * these functions should go away after some time.
+ * this function should go away after some time.
*/
#define csum_partial_copy(src,dst,len,sum) csum_partial_copy_nocheck(src,dst,len,sum)
adcs %0, %0, %5 \n\
adc %0, %0, #0"
: "=&r"(sum)
- : "r" (sum), "r" (daddr), "r" (saddr), "r" (ntohs(len) << 16), "Ir" (proto << 8)
+ : "r" (sum), "r" (daddr), "r" (saddr), "r" (ntohs(len)), "Ir" (ntohs(proto))
: "cc");
return sum;
}
addcs %0, %0, #0x10000 \n\
mvn %0, %0"
: "=&r"(sum)
- : "r" (sum), "r" (daddr), "r" (saddr), "r" (ntohs(len)), "Ir" (proto << 8)
+ : "r" (sum), "r" (daddr), "r" (saddr), "r" (ntohs(len)), "Ir" (ntohs(proto))
: "cc");
return sum >> 16;
}
*
*/
-#define TSK_USED_MATH 788 /* offsetof(struct task_struct, used_math) */
#define TSK_ACTIVE_MM 96 /* offsetof(struct task_struct, active_mm) */
#define VMA_VM_MM 0 /* offsetof(struct vm_area_struct, vm_mm) */
typedef struct {
unsigned int __softirq_pending;
- unsigned int __local_irq_count;
- unsigned int __local_bh_count;
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task; /* waitqueue is too large */
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
#ifndef CONFIG_SMP
-#define irq_exit() \
- do { \
- preempt_count() -= HARDIRQ_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- __asm__("bl%? __do_softirq": : : "lr");/* out of line */\
- preempt_enable_no_resched(); \
- } while (0)
+extern asmlinkage void __do_softirq(void);
+
+#define irq_exit() \
+ do { \
+ preempt_count() -= IRQ_EXIT_OFFSET; \
+ if (!in_interrupt() && local_softirq_pending()) \
+ __do_softirq(); \
+ preempt_enable_no_resched(); \
+ } while (0)
#endif
+
#endif /* __ASM_HARDIRQ_H */
#define IOEB_PSCLR (IOEB_BASE + 0x58)
#define IOEB_MONTYPE (IOEB_BASE + 0x70)
+//FIXME - These addresses are weird - ISTR some weirdo address shifting stuff was going on here...
#define IO_EC_IOC_BASE 0x80090000
#define IO_EC_MEMC_BASE 0x80000000
void set_irq_chip(unsigned int irq, struct irqchip *);
void set_irq_flags(unsigned int irq, unsigned int flags);
-#ifdef not_yet
-/*
- * This is to be used by the top-level machine IRQ decoder only.
- */
-static inline void call_irq(struct pt_regs *regs, unsigned int irq)
-{
- struct irqdesc *desc = irq_desc + irq;
-
- spin_lock(&irq_controller_lock);
- desc->handle(irq, desc, regs);
- spin_unlock(&irq_controller_lock);
-
- if (softirq_pending(smp_processor_id()))
- do_softirq();
-}
-#endif
-
#define IRQF_VALID (1 << 0)
#define IRQF_PROBE (1 << 1)
#define IRQF_NOAUTOEN (1 << 2)
#undef __FD_SET
#define __FD_SET(fd, fdsetp) \
- (((fd_set *)fdsetp)->fds_bits[fd >> 5] |= (1<<(fd & 31)))
+ (((fd_set *)(fdsetp))->fds_bits[(fd) >> 5] |= (1<<((fd) & 31)))
#undef __FD_CLR
#define __FD_CLR(fd, fdsetp) \
- (((fd_set *)fdsetp)->fds_bits[fd >> 5] &= ~(1<<(fd & 31)))
+ (((fd_set *)(fdsetp))->fds_bits[(fd) >> 5] &= ~(1<<((fd) & 31)))
#undef __FD_ISSET
#define __FD_ISSET(fd, fdsetp) \
- ((((fd_set *)fdsetp)->fds_bits[fd >> 5] & (1<<(fd & 31))) != 0)
+ ((((fd_set *)(fdsetp))->fds_bits[(fd) >> 5] & (1<<((fd) & 31))) != 0)
#undef __FD_ZERO
#define __FD_ZERO(fdsetp) \
- (memset (fdsetp, 0, sizeof (*(fd_set *)fdsetp)))
+ (memset ((fdsetp), 0, sizeof (*(fd_set *)(fdsetp))))
#endif
#ifdef __KERNEL__
-#define MCA_bus 0
-#define MCA_bus__is_a_macro
-
#include <asm/atomic.h>
#include <asm/ptrace.h>
#include <linux/string.h>
int (*parse)(const struct tag *);
};
-#define __tag __attribute__((unused, __section__(".taglist")))
+#define __tag __attribute_used__ __attribute__((__section__(".taglist")))
#define __tagtable(tag, fn) \
static struct tagtable __tagtable_##fn __tag = { tag, fn }
#ifdef __KERNEL__
#include <linux/config.h>
-#include <linux/kernel.h>
-#include <asm/proc-fns.h>
-#define vectors_base() (0)
+/*
+ * This is used to ensure the compiler did actually allocate the register we
+ * asked it for some inline assembly sequences. Apparently we can't trust
+ * the compiler from one version to another so a bit of paranoia won't hurt.
+ * This string is meant to be concatenated with the inline asm string and
+ * will cause compilation to stop on mismatch. (From ARM32 - may come in handy)
+ */
+#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
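The guard above only has an effect once it is pasted into an asm template. A minimal, hypothetical sketch of that pattern (illustration only, not part of this patch), pinning a value to r0 and letting the assembler verify the allocation:

/* Illustration only: if the compiler substitutes anything other than "r0"
 * for %0 below, the .ifnc emitted by __asmeq aborts the assembly instead
 * of letting wrong code slip through.
 */
static inline unsigned long example_put_in_r0(unsigned long val)
{
	register unsigned long __r0 asm("r0") = val;

	asm volatile(
		__asmeq("%0", "r0")
		"@ value is now known to be in r0"
		: : "r" (__r0));

	return __r0;
}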
-static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
-{
- extern void __bad_xchg(volatile void *, int);
+#ifndef __ASSEMBLY__
- switch (size) {
- case 1: return cpu_xchg_1(x, ptr);
- case 4: return cpu_xchg_4(x, ptr);
- default: __bad_xchg(ptr, size);
- }
- return 0;
-}
+#include <linux/linkage.h>
+
+struct thread_info;
+struct task_struct;
+#if 0
+/* information about the system we're running on */
+extern unsigned int system_rev;
+extern unsigned int system_serial_low;
+extern unsigned int system_serial_high;
+extern unsigned int mem_fclk_21285;
+
+FIXME - sort this
/*
* We need to turn the caches off before calling the reset vector - RiscOS
* messes up if we don't
*/
#define proc_hard_reset() cpu_proc_fin()
+#endif
+
+struct pt_regs;
+
+void die(const char *msg, struct pt_regs *regs, int err)
+ __attribute__((noreturn));
+
+void die_if_kernel(const char *str, struct pt_regs *regs, int err);
+
+void hook_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
+ struct pt_regs *),
+ int sig, const char *name);
+
+#include <asm/proc-fns.h>
+
+#define xchg(ptr,x) \
+ ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+
+#define tas(ptr) (xchg((ptr),1))
+
+extern asmlinkage void __backtrace(void);
+
+#define set_cr(x) \
+ __asm__ __volatile__( \
+ "mcr p15, 0, %0, c1, c0, 0 @ set CR" \
+ : : "r" (x) : "cc")
+
+#define get_cr() \
+ ({ \
+ unsigned int __val; \
+ __asm__ __volatile__( \
+ "mrc p15, 0, %0, c1, c0, 0 @ get CR" \
+ : "=r" (__val) : : "cc"); \
+ __val; \
+ })
+
+extern unsigned long cr_no_alignment; /* defined in entry-armv.S */
+extern unsigned long cr_alignment; /* defined in entry-armv.S */
+
+#define UDBG_UNDEFINED (1 << 0)
+#define UDBG_SYSCALL (1 << 1)
+#define UDBG_BADABORT (1 << 2)
+#define UDBG_SEGV (1 << 3)
+#define UDBG_BUS (1 << 4)
+
+extern unsigned int user_debug;
+
+#define vectors_base() (0)
+
+#define mb() __asm__ __volatile__ ("" : : : "memory")
+#define rmb() mb()
+#define wmb() mb()
+#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
+
+#define read_barrier_depends() do { } while(0)
+#define set_mb(var, value) do { var = value; mb(); } while (0)
+#define set_wmb(var, value) do { var = value; wmb(); } while (0)
+
+/*
+ * We assume knowledge of how
+ * spin_unlock_irq() and friends are implemented. This avoids
+ * us needlessly decrementing and incrementing the preempt count.
+ */
+#define prepare_arch_switch(rq,next) local_irq_enable()
+#define finish_arch_switch(rq,prev) spin_unlock(&(rq)->lock)
+#define task_running(rq,p) ((rq)->curr == (p))
+
+/*
+ * switch_to(prev, next) should switch from task `prev' to `next'
+ * `prev' will never be the same as `next'. schedule() itself
+ * contains the memory barrier to tell GCC not to cache `current'.
+ */
+extern struct task_struct *__switch_to(struct task_struct *, struct thread_info *, struct thread_info *);
+
+#define switch_to(prev,next,last) \
+do { \
+ last = __switch_to(prev,prev->thread_info,next->thread_info); \
+} while (0)
+
/*
- * A couple of speedups for the ARM
+ * Save the current interrupt enable state & disable IRQs
*/
+#define local_irq_save(x) \
+ do { \
+ unsigned long temp; \
+ __asm__ __volatile__( \
+" mov %0, pc @ save_flags_cli\n" \
+" orr %1, %0, #0x08000000\n" \
+" and %0, %0, #0x0c000000\n" \
+" teqp %1, #0\n" \
+ : "=r" (x), "=r" (temp) \
+ : \
+ : "memory"); \
+ } while (0)
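For reference, the usual pairing of this macro with local_irq_restore() around a short critical section; a hypothetical sketch, not taken from the patch:

static inline void example_bump_counter(unsigned long *counter)
{
	unsigned long flags;

	local_irq_save(flags);		/* IRQs off, previous state kept in flags */
	(*counter)++;			/* must not be interrupted half-way */
	local_irq_restore(flags);	/* put the I/F state back as it was */
}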
/*
* Enable IRQs (sti)
: "memory"); \
} while(0)
-/* Disable FIQs (clf) */
+/* Enable FIQs (stf) */
-#define __clf() do { \
+#define __stf() do { \
unsigned long temp; \
__asm__ __volatile__( \
-" mov %0, pc @ clf\n" \
-" orr %0, %0, #0x04000000\n" \
+" mov %0, pc @ stf\n" \
+" bic %0, %0, #0x04000000\n" \
" teqp %0, #0\n" \
: "=r" (temp)); \
} while(0)
-/* Enable FIQs (stf) */
+/* Disable FIQs (clf) */
-#define __stf() do { \
+#define __clf() do { \
unsigned long temp; \
__asm__ __volatile__( \
-" mov %0, pc @ stf\n" \
-" bic %0, %0, #0x04000000\n" \
+" mov %0, pc @ clf\n" \
+" orr %0, %0, #0x04000000\n" \
" teqp %0, #0\n" \
: "=r" (temp)); \
} while(0)
+
/*
- * save current IRQ & FIQ state
+ * Save the current interrupt enable state.
*/
#define local_save_flags(x) \
do { \
: "=r" (x)); \
} while (0)
-/*
- * Save the current interrupt enable state & disable IRQs
- */
-#define local_irq_save(x) \
- do { \
- unsigned long temp; \
- __asm__ __volatile__( \
-" mov %0, pc @ save_flags_cli\n" \
-" orr %1, %0, #0x08000000\n" \
-" and %0, %0, #0x0c000000\n" \
-" teqp %1, #0\n" \
- : "=r" (x), "=r" (temp) \
- : \
- : "memory"); \
- } while (0)
/*
* restore saved IRQ & FIQ state
} while (0)
-struct thread_info;
-
-/* information about the system we're running on */
-extern unsigned int system_rev;
-extern unsigned int system_serial_low;
-extern unsigned int system_serial_high;
-
-struct pt_regs;
-
-void die(const char *msg, struct pt_regs *regs, int err)
- __attribute__((noreturn));
-
-void die_if_kernel(const char *str, struct pt_regs *regs, int err);
-
-void hook_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
- struct pt_regs *),
- int sig, const char *name);
-
-#define xchg(ptr,x) \
- ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-
-#define tas(ptr) (xchg((ptr),1))
-
-extern asmlinkage void __backtrace(void);
-
-/*
- * Include processor dependent parts
- */
-
-#define mb() __asm__ __volatile__ ("" : : : "memory")
-#define rmb() mb()
-#define wmb() mb()
-#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
-
-#define prepare_to_switch() do { } while(0)
-
-/*
- * switch_to(prev, next) should switch from task `prev' to `next'
- * `prev' will never be the same as `next'.
- * The `mb' is to tell GCC not to cache `current' across this call.
- */
-struct thread_info;
-
-extern struct task_struct *__switch_to(struct thread_info *, struct thread_info *);
-
-#define switch_to(prev,next,last) \
- do { \
- __switch_to(prev->thread_info,next->thread_info); \
- mb(); \
- } while (0)
-
-
#ifdef CONFIG_SMP
#error SMP not supported
-#endif /* CONFIG_SMP */
-
-#define irqs_disabled() \
-({ \
- unsigned long flags; \
- local_save_flags(flags); \
- flags & PSR_I_BIT; \
-})
+#endif
-#define set_mb(var, value) do { var = value; mb(); } while (0)
#define smp_mb() barrier()
#define smp_rmb() barrier()
#define smp_wmb() barrier()
-#define smp_read_barrier_depends() do { } while(0)
+#define smp_read_barrier_depends() do { } while(0)
#define clf() __clf()
#define stf() __stf()
+#define irqs_disabled() \
+({ \
+ unsigned long flags; \
+ local_save_flags(flags); \
+ flags & PSR_I_BIT; \
+})
+
+static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+{
+ extern void __bad_xchg(volatile void *, int);
+
+ switch (size) {
+ case 1: return cpu_xchg_1(x, ptr);
+ case 4: return cpu_xchg_4(x, ptr);
+ default: __bad_xchg(ptr, size);
+ }
+ return 0;
+}
+
+#endif /* __ASSEMBLY__ */
+
#endif /* __KERNEL__ */
#endif
struct task_struct;
struct exec_domain;
+#include <linux/compiler.h>
#include <asm/fpstate.h>
#include <asm/ptrace.h>
#include <asm/types.h>
}
/* FIXME - PAGE_SIZE < 32K */
-#define THREAD_SIZE (8192)
+#define THREAD_SIZE (8*32768) // FIXME - this needs attention (see kernel/fork.c, which gets a nice div by zero if this is lower than 8*32768)
#define __get_user_regs(x) (((struct pt_regs *)((unsigned long)(x) + THREAD_SIZE - 8)) - 1)
extern struct thread_info *alloc_thread_info(struct task_struct *task);
#define TIF_SYSCALL_TRACE 8
#define TIF_USED_FPU 16
#define TIF_POLLING_NRFLAG 17
+#define TIF_MEMDIE 18
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
/* entry.S is sensitive to the offsets of these fields */
typedef struct {
unsigned int __softirq_pending;
- unsigned int __local_irq_count;
- unsigned int __local_bh_count;
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task; /* waitqueue is too large */
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* __ASM_HARDIRQ_H */
extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
{
- pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT);
- if (pte)
- clear_page(pte);
+ pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
return pte;
}
extern inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
struct page *pte;
- pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
- if (pte)
- clear_page(page_address(pte));
+ pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
return pte;
}
#define TIF_SIGPENDING 2 /* signal pending */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
+#define TIF_MEMDIE 17
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
--- /dev/null
+/* atomic.h: atomic operation emulation for FR-V
+ *
+ * For an explanation of how atomic ops work in this arch, see:
+ * Documentation/fujitsu/frv/atomic-ops.txt
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_ATOMIC_H
+#define _ASM_ATOMIC_H
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/spr-regs.h>
+
+#ifdef CONFIG_SMP
+#error not SMP safe
+#endif
+
+/*
+ * Atomic operations that C can't guarantee us. Useful for
+ * resource counting etc..
+ *
+ * We do not have SMP systems, so we don't have to deal with that.
+ */
+
+/* Atomic operations are already serializing */
+#define smp_mb__before_atomic_dec() barrier()
+#define smp_mb__after_atomic_dec() barrier()
+#define smp_mb__before_atomic_inc() barrier()
+#define smp_mb__after_atomic_inc() barrier()
+
+typedef struct {
+ int counter;
+} atomic_t;
+
+#define ATOMIC_INIT(i) { (i) }
+#define atomic_read(v) ((v)->counter)
+#define atomic_set(v, i) (((v)->counter) = (i))
+
+#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+ unsigned long val;
+
+ asm("0: \n"
+ " orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
+ " ckeq icc3,cc7 \n"
+ " ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
+ " orcr cc7,cc7,cc3 \n" /* set CC3 to true */
+ " add%I2 %1,%2,%1 \n"
+ " cst.p %1,%M0 ,cc3,#1 \n"
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
+ " beq icc3,#0,0b \n"
+ : "+U"(v->counter), "=&r"(val)
+ : "NPr"(i)
+ : "memory", "cc7", "cc3", "icc3"
+ );
+
+ return val;
+}
+
+static inline int atomic_sub_return(int i, atomic_t *v)
+{
+ unsigned long val;
+
+ asm("0: \n"
+ " orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
+ " ckeq icc3,cc7 \n"
+ " ld.p %M0,%1 \n" /* LD.P/ORCR must be atomic */
+ " orcr cc7,cc7,cc3 \n" /* set CC3 to true */
+ " sub%I2 %1,%2,%1 \n"
+ " cst.p %1,%M0 ,cc3,#1 \n"
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" /* clear ICC3.Z if store happens */
+ " beq icc3,#0,0b \n"
+ : "+U"(v->counter), "=&r"(val)
+ : "NPr"(i)
+ : "memory", "cc7", "cc3", "icc3"
+ );
+
+ return val;
+}
+
+#else
+
+extern int atomic_add_return(int i, atomic_t *v);
+extern int atomic_sub_return(int i, atomic_t *v);
+
+#endif
+
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+ return atomic_add_return(i, v) < 0;
+}
+
+static inline void atomic_add(int i, atomic_t *v)
+{
+ atomic_add_return(i, v);
+}
+
+static inline void atomic_sub(int i, atomic_t *v)
+{
+ atomic_sub_return(i, v);
+}
+
+static inline void atomic_inc(atomic_t *v)
+{
+ atomic_add_return(1, v);
+}
+
+static inline void atomic_dec(atomic_t *v)
+{
+ atomic_sub_return(1, v);
+}
+
+#define atomic_dec_return(v) atomic_sub_return(1, (v))
+#define atomic_inc_return(v) atomic_add_return(1, (v))
+
+#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
+#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+#define atomic_inc_and_test(v) (atomic_add_return(1, (v)) == 0)
+
+#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+static inline
+unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v)
+{
+ unsigned long old, tmp;
+
+ asm volatile(
+ "0: \n"
+ " orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
+ " ckeq icc3,cc7 \n"
+ " ld.p %M0,%1 \n" /* LD.P/ORCR are atomic */
+ " orcr cc7,cc7,cc3 \n" /* set CC3 to true */
+ " and%I3 %1,%3,%2 \n"
+ " cst.p %2,%M0 ,cc3,#1 \n" /* if store happens... */
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" /* ... clear ICC3.Z */
+ " beq icc3,#0,0b \n"
+ : "+U"(*v), "=&r"(old), "=r"(tmp)
+ : "NPr"(~mask)
+ : "memory", "cc7", "cc3", "icc3"
+ );
+
+ return old;
+}
+
+static inline
+unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v)
+{
+ unsigned long old, tmp;
+
+ asm volatile(
+ "0: \n"
+ " orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
+ " ckeq icc3,cc7 \n"
+ " ld.p %M0,%1 \n" /* LD.P/ORCR are atomic */
+ " orcr cc7,cc7,cc3 \n" /* set CC3 to true */
+ " or%I3 %1,%3,%2 \n"
+ " cst.p %2,%M0 ,cc3,#1 \n" /* if store happens... */
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" /* ... clear ICC3.Z */
+ " beq icc3,#0,0b \n"
+ : "+U"(*v), "=&r"(old), "=r"(tmp)
+ : "NPr"(mask)
+ : "memory", "cc7", "cc3", "icc3"
+ );
+
+ return old;
+}
+
+static inline
+unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v)
+{
+ unsigned long old, tmp;
+
+ asm volatile(
+ "0: \n"
+ " orcc gr0,gr0,gr0,icc3 \n" /* set ICC3.Z */
+ " ckeq icc3,cc7 \n"
+ " ld.p %M0,%1 \n" /* LD.P/ORCR are atomic */
+ " orcr cc7,cc7,cc3 \n" /* set CC3 to true */
+ " xor%I3 %1,%3,%2 \n"
+ " cst.p %2,%M0 ,cc3,#1 \n" /* if store happens... */
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" /* ... clear ICC3.Z */
+ " beq icc3,#0,0b \n"
+ : "+U"(*v), "=&r"(old), "=r"(tmp)
+ : "NPr"(mask)
+ : "memory", "cc7", "cc3", "icc3"
+ );
+
+ return old;
+}
+
+#else
+
+extern unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v);
+extern unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v);
+extern unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v);
+
+#endif
+
+#define atomic_clear_mask(mask, v) atomic_test_and_ANDNOT_mask((mask), (v))
+#define atomic_set_mask(mask, v) atomic_test_and_OR_mask((mask), (v))
+
+/*****************************************************************************/
+/*
+ * exchange value with memory
+ */
+#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+
+#define xchg(ptr, x) \
+({ \
+ __typeof__(ptr) __xg_ptr = (ptr); \
+ __typeof__(*(ptr)) __xg_orig; \
+ \
+ switch (sizeof(__xg_orig)) { \
+ case 1: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " ldub.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " cstb.p %2,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig) \
+ : "r"(x) \
+ : "memory", "cc7", "cc3", "icc3" \
+ ); \
+ break; \
+ \
+ case 2: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " lduh.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " csth.p %2,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig) \
+ : "r"(x) \
+ : "memory", "cc7", "cc3", "icc3" \
+ ); \
+ break; \
+ \
+ case 4: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " ld.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " cst.p %2,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig) \
+ : "r"(x) \
+ : "memory", "cc7", "cc3", "icc3" \
+ ); \
+ break; \
+ \
+ default: \
+ __xg_orig = 0; \
+ asm volatile("break"); \
+ break; \
+ } \
+ \
+ __xg_orig; \
+})
+
+#else
+
+extern uint8_t __xchg_8 (uint8_t i, volatile void *v);
+extern uint16_t __xchg_16(uint16_t i, volatile void *v);
+extern uint32_t __xchg_32(uint32_t i, volatile void *v);
+
+#define xchg(ptr, x) \
+({ \
+ __typeof__(ptr) __xg_ptr = (ptr); \
+ __typeof__(*(ptr)) __xg_orig; \
+ \
+ switch (sizeof(__xg_orig)) { \
+ case 1: __xg_orig = (__typeof__(*(ptr))) __xchg_8 ((uint8_t) x, __xg_ptr); break; \
+ case 2: __xg_orig = (__typeof__(*(ptr))) __xchg_16((uint16_t) x, __xg_ptr); break; \
+ case 4: __xg_orig = (__typeof__(*(ptr))) __xchg_32((uint32_t) x, __xg_ptr); break; \
+ default: \
+ __xg_orig = 0; \
+ asm volatile("break"); \
+ break; \
+ } \
+ __xg_orig; \
+})
+
+#endif
+
+#define tas(ptr) (xchg((ptr), 1))
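A tiny, hypothetical use of the tas()/xchg() operations above to claim a one-shot flag (illustration only, not part of the header):

static inline int example_claim_once(int *flag)
{
	/* tas() atomically stores 1 and returns the previous value,
	 * so exactly one caller ever sees 0 here. */
	return tas(flag) == 0;
}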
+
+/*****************************************************************************/
+/*
+ * compare and conditionally exchange value with memory
+ * - if (*ptr == test) then orig = *ptr; *ptr = new;
+ * - if (*ptr != test) then orig = *ptr;
+ */
+#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+
+#define cmpxchg(ptr, test, new) \
+({ \
+ __typeof__(ptr) __xg_ptr = (ptr); \
+ __typeof__(*(ptr)) __xg_orig, __xg_tmp; \
+ __typeof__(*(ptr)) __xg_test = (test); \
+ __typeof__(*(ptr)) __xg_new = (new); \
+ \
+ switch (sizeof(__xg_orig)) { \
+ case 1: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " ldub.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " sub%I4 %1,%4,%2 \n" \
+ " sllcc %2,#24,gr0,icc0 \n" \
+ " bne icc0,#0,1f \n" \
+ " cstb.p %3,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ "1: \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig), "=&r"(__xg_tmp) \
+ : "r"(__xg_new), "NPr"(__xg_test) \
+ : "memory", "cc7", "cc3", "icc3", "icc0" \
+ ); \
+ break; \
+ \
+ case 2: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " lduh.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " sub%I4 %1,%4,%2 \n" \
+ " sllcc %2,#16,gr0,icc0 \n" \
+ " bne icc0,#0,1f \n" \
+ " csth.p %3,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ "1: \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig), "=&r"(__xg_tmp) \
+ : "r"(__xg_new), "NPr"(__xg_test) \
+ : "memory", "cc7", "cc3", "icc3", "icc0" \
+ ); \
+ break; \
+ \
+ case 4: \
+ asm volatile( \
+ "0: \n" \
+ " orcc gr0,gr0,gr0,icc3 \n" \
+ " ckeq icc3,cc7 \n" \
+ " ld.p %M0,%1 \n" \
+ " orcr cc7,cc7,cc3 \n" \
+ " sub%I4cc %1,%4,%2,icc0 \n" \
+ " bne icc0,#0,1f \n" \
+ " cst.p %3,%M0 ,cc3,#1 \n" \
+ " corcc gr29,gr29,gr0 ,cc3,#1 \n" \
+ " beq icc3,#0,0b \n" \
+ "1: \n" \
+ : "+U"(*__xg_ptr), "=&r"(__xg_orig), "=&r"(__xg_tmp) \
+ : "r"(__xg_new), "NPr"(__xg_test) \
+ : "memory", "cc7", "cc3", "icc3", "icc0" \
+ ); \
+ break; \
+ \
+ default: \
+ __xg_orig = 0; \
+ asm volatile("break"); \
+ break; \
+ } \
+ \
+ __xg_orig; \
+})
+
+#else
+
+extern uint8_t __cmpxchg_8 (uint8_t *v, uint8_t test, uint8_t new);
+extern uint16_t __cmpxchg_16(uint16_t *v, uint16_t test, uint16_t new);
+extern uint32_t __cmpxchg_32(uint32_t *v, uint32_t test, uint32_t new);
+
+#define cmpxchg(ptr, test, new) \
+({ \
+ __typeof__(ptr) __xg_ptr = (ptr); \
+ __typeof__(*(ptr)) __xg_orig; \
+ __typeof__(*(ptr)) __xg_test = (test); \
+ __typeof__(*(ptr)) __xg_new = (new); \
+ \
+ switch (sizeof(__xg_orig)) { \
+ case 1: __xg_orig = __cmpxchg_8 (__xg_ptr, __xg_test, __xg_new); break; \
+ case 2: __xg_orig = __cmpxchg_16(__xg_ptr, __xg_test, __xg_new); break; \
+ case 4: __xg_orig = __cmpxchg_32(__xg_ptr, __xg_test, __xg_new); break; \
+ default: \
+ __xg_orig = 0; \
+ asm volatile("break"); \
+ break; \
+ } \
+ \
+ __xg_orig; \
+})
+
+#endif
+
+#endif /* _ASM_ATOMIC_H */
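To round off the atomic.h additions, a minimal compare-and-swap retry loop built on the cmpxchg() defined above; a sketch for illustration, not code from the patch:

static inline unsigned long example_add_to_word(unsigned long *p, unsigned long inc)
{
	unsigned long old, new;

	/* re-read and retry until nothing else modified *p in between */
	do {
		old = *p;
		new = old + inc;
	} while (cmpxchg(p, old, new) != old);

	return new;
}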
--- /dev/null
+/* bitops.h: bit operations for the Fujitsu FR-V CPUs
+ *
+ * For an explanation of how atomic ops work in this arch, see:
+ * Documentation/fujitsu/frv/atomic-ops.txt
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_BITOPS_H
+#define _ASM_BITOPS_H
+
+#include <linux/config.h>
+#include <linux/compiler.h>
+#include <asm/byteorder.h>
+#include <asm/system.h>
+#include <asm/atomic.h>
+
+#ifdef __KERNEL__
+
+/*
+ * ffz = Find First Zero in word. Undefined if no zero exists,
+ * so code should check against ~0UL first..
+ */
+static inline unsigned long ffz(unsigned long word)
+{
+ unsigned long result = 0;
+
+ while (word & 1) {
+ result++;
+ word >>= 1;
+ }
+ return result;
+}
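A few hypothetical sample values for the loop above (illustration only, not from the header):

/*
 *   ffz(0x00000000) == 0    (bit 0 is already clear)
 *   ffz(0x0000ffff) == 16   (low 16 bits set, bit 16 is the first zero)
 *   ffz(0x7fffffff) == 31   (only the top bit is clear)
 *   ffz(0xffffffff) is undefined -- callers must check against ~0UL first
 */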
+
+/*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+#define smp_mb__before_clear_bit() barrier()
+#define smp_mb__after_clear_bit() barrier()
+
+static inline int test_and_clear_bit(int nr, volatile void *addr)
+{
+ volatile unsigned long *ptr = addr;
+ unsigned long mask = 1UL << (nr & 31);
+ ptr += nr >> 5;
+ return (atomic_test_and_ANDNOT_mask(mask, ptr) & mask) != 0;
+}
+
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+ volatile unsigned long *ptr = addr;
+ unsigned long mask = 1UL << (nr & 31);
+ ptr += nr >> 5;
+ return (atomic_test_and_OR_mask(mask, ptr) & mask) != 0;
+}
+
+static inline int test_and_change_bit(int nr, volatile void *addr)
+{
+ volatile unsigned long *ptr = addr;
+ unsigned long mask = 1UL << (nr & 31);
+ ptr += nr >> 5;
+ return (atomic_test_and_XOR_mask(mask, ptr) & mask) != 0;
+}
+
+static inline void clear_bit(int nr, volatile void *addr)
+{
+ test_and_clear_bit(nr, addr);
+}
+
+static inline void set_bit(int nr, volatile void *addr)
+{
+ test_and_set_bit(nr, addr);
+}
+
+static inline void change_bit(int nr, volatile void * addr)
+{
+ test_and_change_bit(nr, addr);
+}
+
+static inline void __clear_bit(int nr, volatile void * addr)
+{
+ volatile unsigned long *a = addr;
+ int mask;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ *a &= ~mask;
+}
+
+static inline void __set_bit(int nr, volatile void * addr)
+{
+ volatile unsigned long *a = addr;
+ int mask;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ *a |= mask;
+}
+
+static inline void __change_bit(int nr, volatile void *addr)
+{
+ volatile unsigned long *a = addr;
+ int mask;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ *a ^= mask;
+}
+
+static inline int __test_and_clear_bit(int nr, volatile void * addr)
+{
+ volatile unsigned long *a = addr;
+ int mask, retval;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ retval = (mask & *a) != 0;
+ *a &= ~mask;
+ return retval;
+}
+
+static inline int __test_and_set_bit(int nr, volatile void * addr)
+{
+ volatile unsigned long *a = addr;
+ int mask, retval;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ retval = (mask & *a) != 0;
+ *a |= mask;
+ return retval;
+}
+
+static inline int __test_and_change_bit(int nr, volatile void * addr)
+{
+ volatile unsigned long *a = addr;
+ int mask, retval;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 31);
+ retval = (mask & *a) != 0;
+ *a ^= mask;
+ return retval;
+}
+
+/*
+ * This routine doesn't need to be atomic.
+ */
+static inline int __constant_test_bit(int nr, const volatile void * addr)
+{
+ return ((1UL << (nr & 31)) & (((const volatile unsigned int *) addr)[nr >> 5])) != 0;
+}
+
+static inline int __test_bit(int nr, const volatile void * addr)
+{
+ int * a = (int *) addr;
+ int mask;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ return ((mask & *a) != 0);
+}
+
+#define test_bit(nr,addr) \
+(__builtin_constant_p(nr) ? \
+ __constant_test_bit((nr),(addr)) : \
+ __test_bit((nr),(addr)))
+
+extern int find_next_bit(const unsigned long *addr, int size, int offset);
+
+#define find_first_bit(addr, size) find_next_bit(addr, size, 0)
+
+#define find_first_zero_bit(addr, size) \
+ find_next_zero_bit((addr), (size), 0)
+
+static inline int find_next_zero_bit(const void *addr, int size, int offset)
+{
+ const unsigned long *p = ((const unsigned long *) addr) + (offset >> 5);
+ unsigned long result = offset & ~31UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 31UL;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (32-offset);
+ if (size < 32)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= 32;
+ result += 32;
+ }
+ while (size & ~31UL) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += 32;
+ size -= 32;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+	tmp |= ~0UL << size;
+found_middle:
+ return result + ffz(tmp);
+}
+
+#define ffs(x) generic_ffs(x)
+#define __ffs(x) (ffs(x) - 1)
+
+/*
+ * fls: find last bit set.
+ */
+#define fls(x) \
+({ \
+ int bit; \
+ \
+ asm("scan %1,gr0,%0" : "=r"(bit) : "r"(x)); \
+ \
+ bit ? 33 - bit : bit; \
+})
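For orientation, the generic Linux fls() contract this macro is expected to satisfy (illustrative values only, not part of the header):

/*
 *   fls(0)          == 0
 *   fls(1)          == 1    (bit 0 is the highest set bit)
 *   fls(0x80000000) == 32   (bit 31 is the highest set bit)
 */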
+
+/*
+ * Every architecture must define this function. It's the fastest
+ * way of searching a 140-bit bitmap where the first 100 bits are
+ * unlikely to be set. It's guaranteed that at least one of the 140
+ * bits is cleared.
+ */
+static inline int sched_find_first_bit(const unsigned long *b)
+{
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 32;
+ if (unlikely(b[2]))
+ return __ffs(b[2]) + 64;
+ if (b[3])
+ return __ffs(b[3]) + 96;
+ return __ffs(b[4]) + 128;
+}
+
+
+/*
+ * hweightN: returns the hamming weight (i.e. the number
+ * of bits set) of a N-bit word
+ */
+
+#define hweight32(x) generic_hweight32(x)
+#define hweight16(x) generic_hweight16(x)
+#define hweight8(x) generic_hweight8(x)
+
+#define ext2_set_bit(nr, addr) test_and_set_bit ((nr) ^ 0x18, (addr))
+#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr) ^ 0x18, (addr))
+
+#define ext2_set_bit_atomic(lock,nr,addr) ext2_set_bit((nr), addr)
+#define ext2_clear_bit_atomic(lock,nr,addr) ext2_clear_bit((nr), addr)
+
+static inline int ext2_test_bit(int nr, const volatile void * addr)
+{
+ const volatile unsigned char *ADDR = (const unsigned char *) addr;
+ int mask;
+
+ ADDR += nr >> 3;
+ mask = 1 << (nr & 0x07);
+ return ((mask & *ADDR) != 0);
+}
+
+#define ext2_find_first_zero_bit(addr, size) \
+ ext2_find_next_zero_bit((addr), (size), 0)
+
+static inline unsigned long ext2_find_next_zero_bit(const void *addr,
+ unsigned long size,
+ unsigned long offset)
+{
+ const unsigned long *p = ((const unsigned long *) addr) + (offset >> 5);
+ unsigned long result = offset & ~31UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 31UL;
+ if(offset) {
+ /* We hold the little endian value in tmp, but then the
+ * shift is illegal. So we could keep a big endian value
+ * in tmp, like this:
+ *
+ * tmp = __swab32(*(p++));
+ * tmp |= ~0UL >> (32-offset);
+ *
+		 * but this would decrease performance, so we change the
+ * shift:
+ */
+ tmp = *(p++);
+ tmp |= __swab32(~0UL >> (32-offset));
+ if(size < 32)
+ goto found_first;
+ if(~tmp)
+ goto found_middle;
+ size -= 32;
+ result += 32;
+ }
+ while(size & ~31UL) {
+ if(~(tmp = *(p++)))
+ goto found_middle;
+ result += 32;
+ size -= 32;
+ }
+ if(!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ /* tmp is little endian, so we would have to swab the shift,
+ * see above. But then we have to swab tmp below for ffz, so
+ * we might as well do this here.
+ */
+ return result + ffz(__swab32(tmp) | (~0UL << size));
+found_middle:
+ return result + ffz(__swab32(tmp));
+}
+
+/* Bitmap functions for the minix filesystem. */
+#define minix_test_and_set_bit(nr,addr) ext2_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) ext2_set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) ext2_clear_bit(nr,addr)
+#define minix_test_bit(nr,addr) ext2_test_bit(nr,addr)
+#define minix_find_first_zero_bit(addr,size) ext2_find_first_zero_bit(addr,size)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_BITOPS_H */
--- /dev/null
+/* bug.h: FRV bug trapping
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_BUG_H
+#define _ASM_BUG_H
+
+#include <linux/config.h>
+
+/*
+ * Tell the user there is some problem.
+ */
+extern asmlinkage void __debug_bug_trap(int signr);
+
+#ifdef CONFIG_NO_KERNEL_MSG
+#define _debug_bug_printk()
+#else
+extern void __debug_bug_printk(const char *file, unsigned line);
+#define _debug_bug_printk() __debug_bug_printk(__FILE__, __LINE__)
+#endif
+
+#define _debug_bug_trap(signr) \
+do { \
+ __debug_bug_trap(signr); \
+ asm volatile("nop"); \
+} while(0)
+
+#define HAVE_ARCH_BUG
+#define BUG() \
+do { \
+ _debug_bug_printk(); \
+ _debug_bug_trap(6 /*SIGABRT*/); \
+} while (0)
+
+#ifdef CONFIG_GDBSTUB
+#define HAVE_ARCH_KGDB_RAISE
+#define kgdb_raise(signr) do { _debug_bug_trap(signr); } while(0)
+
+#define HAVE_ARCH_KGDB_BAD_PAGE
+#define kgdb_bad_page(page) do { kgdb_raise(SIGABRT); } while(0)
+#endif
+
+#include <asm-generic/bug.h>
+
+#endif
--- /dev/null
+/* cache.h: FRV cache definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef __ASM_CACHE_H
+#define __ASM_CACHE_H
+
+#include <linux/config.h>
+
+/* bytes per L1 cache line */
+#define L1_CACHE_SHIFT (CONFIG_FRV_L1_CACHE_SHIFT)
+#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+
+#define __cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+#define ____cacheline_aligned __attribute__((aligned(L1_CACHE_BYTES)))
+
+#endif
--- /dev/null
+/* cacheflush.h: FRV cache flushing routines
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_CACHEFLUSH_H
+#define _ASM_CACHEFLUSH_H
+
+/* Keep includes the same across arches. */
+#include <linux/mm.h>
+
+/*
+ * virtually-indexed cache management (our cache is physically indexed)
+ */
+#define flush_cache_all() do {} while(0)
+#define flush_cache_mm(mm) do {} while(0)
+#define flush_cache_range(mm, start, end) do {} while(0)
+#define flush_cache_page(vma, vmaddr) do {} while(0)
+#define flush_cache_vmap(start, end) do {} while(0)
+#define flush_cache_vunmap(start, end) do {} while(0)
+#define flush_dcache_mmap_lock(mapping) do {} while(0)
+#define flush_dcache_mmap_unlock(mapping) do {} while(0)
+
+/*
+ * physically-indexed cache management
+ * - see arch/frv/lib/cache.S
+ */
+extern void frv_dcache_writeback(unsigned long start, unsigned long end);
+extern void frv_cache_invalidate(unsigned long start, unsigned long end);
+extern void frv_icache_invalidate(unsigned long start, unsigned long end);
+extern void frv_cache_wback_inv(unsigned long start, unsigned long end);
+
+static inline void __flush_cache_all(void)
+{
+ asm volatile(" dcef @(gr0,gr0),#1 \n"
+ " icei @(gr0,gr0),#1 \n"
+ " membar \n"
+ : : : "memory"
+ );
+}
+
+/* dcache/icache coherency... */
+#ifdef CONFIG_MMU
+extern void flush_dcache_page(struct page *page);
+#else
+static inline void flush_dcache_page(struct page *page)
+{
+ unsigned long addr = page_to_phys(page);
+ frv_dcache_writeback(addr, addr + PAGE_SIZE);
+}
+#endif
+
+static inline void flush_page_to_ram(struct page *page)
+{
+ flush_dcache_page(page);
+}
+
+static inline void flush_icache(void)
+{
+ __flush_cache_all();
+}
+
+static inline void flush_icache_range(unsigned long start, unsigned long end)
+{
+ frv_cache_wback_inv(start, end);
+}
+
+#ifdef CONFIG_MMU
+extern void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+ unsigned long start, unsigned long len);
+#else
+static inline void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+ unsigned long start, unsigned long len)
+{
+ frv_cache_wback_inv(start, start + len);
+}
+#endif
+
+static inline void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+{
+ flush_icache_user_range(vma, page, page_to_phys(page), PAGE_SIZE);
+}
+
+
+#endif /* _ASM_CACHEFLUSH_H */
--- /dev/null
+/* checksum.h: FRV checksumming
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_CHECKSUM_H
+#define _ASM_CHECKSUM_H
+
+#include <linux/in6.h>
+
+/*
+ * computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum);
+
+/*
+ * the same as csum_partial, but copies from src while it
+ * checksums
+ *
+ * here even more important to align src and dst on a 32-bit (or even
+ * better 64-bit) boundary
+ */
+unsigned int csum_partial_copy(const char *src, char *dst, int len, int sum);
+
+/*
+ * the same as csum_partial_copy, but copies from user space.
+ *
+ * here even more important to align src and dst on a 32-bit (or even
+ * better 64-bit) boundary
+ */
+extern unsigned int csum_partial_copy_from_user(const char *src, char *dst,
+ int len, int sum, int *csum_err);
+
+#define csum_partial_copy_nocheck(src, dst, len, sum) \
+ csum_partial_copy((src), (dst), (len), (sum))
+
+/*
+ * This is a version of ip_compute_csum() optimized for IP headers,
+ * which always checksum on 4 octet boundaries.
+ *
+ */
+static inline
+unsigned short ip_fast_csum(unsigned char *iph, unsigned int ihl)
+{
+ unsigned int tmp, inc, sum = 0;
+
+ asm(" addcc gr0,gr0,gr0,icc0\n" /* clear icc0.C */
+ " subi %1,#4,%1 \n"
+ "0: \n"
+ " ldu.p @(%1,%3),%4 \n"
+ " subicc %2,#1,%2,icc1 \n"
+ " addxcc.p %4,%0,%0,icc0 \n"
+ " bhi icc1,#2,0b \n"
+
+ /* fold the 33-bit result into 16-bits */
+ " addxcc gr0,%0,%0,icc0 \n"
+ " srli %0,#16,%1 \n"
+ " sethi #0,%0 \n"
+ " add %1,%0,%0 \n"
+ " srli %0,#16,%1 \n"
+ " add %1,%0,%0 \n"
+
+ : "=r" (sum), "=r" (iph), "=r" (ihl), "=r" (inc), "=&r"(tmp)
+ : "0" (sum), "1" (iph), "2" (ihl), "3" (4),
+ "m"(*(volatile struct { int _[100]; } *)iph)
+ : "icc0", "icc1"
+ );
+
+ return ~sum;
+}
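A hypothetical receive-path check (assuming struct iphdr from <linux/ip.h>; not part of this header): an IPv4 header verifies as intact when folding it through ip_fast_csum() yields zero, since the checksum field already holds the complement of the rest of the header.

static inline int example_iphdr_ok(const struct iphdr *iph)
{
	return ip_fast_csum((unsigned char *) iph, iph->ihl) == 0;
}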
+
+/*
+ * Fold a partial checksum
+ */
+static inline unsigned int csum_fold(unsigned int sum)
+{
+ unsigned int tmp;
+
+ asm(" srli %0,#16,%1 \n"
+ " sethi #0,%0 \n"
+ " add %1,%0,%0 \n"
+ " srli %0,#16,%1 \n"
+ " add %1,%0,%0 \n"
+ : "=r"(sum), "=&r"(tmp)
+ : "0"(sum)
+ );
+
+ return ~sum;
+}
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented
+ */
+static inline unsigned int
+csum_tcpudp_nofold(unsigned long saddr, unsigned long daddr, unsigned short len,
+ unsigned short proto, unsigned int sum)
+{
+ asm(" addcc %1,%0,%0,icc0 \n"
+ " addxcc %2,%0,%0,icc0 \n"
+ " addxcc %3,%0,%0,icc0 \n"
+ " addxcc gr0,%0,%0,icc0 \n"
+ : "=r" (sum)
+ : "r" (daddr), "r" (saddr), "r" (len + proto), "0"(sum)
+ : "icc0"
+ );
+ return sum;
+}
+
+static inline unsigned short int
+csum_tcpudp_magic(unsigned long saddr, unsigned long daddr, unsigned short len,
+ unsigned short proto, unsigned int sum)
+{
+ return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
+}
+
+/*
+ * this routine is used for miscellaneous IP-like checksums, mainly
+ * in icmp.c
+ */
+extern unsigned short ip_compute_csum(const unsigned char * buff, int len);
+
+#define _HAVE_ARCH_IPV6_CSUM
+static inline unsigned short int
+csum_ipv6_magic(struct in6_addr *saddr, struct in6_addr *daddr,
+ __u32 len, unsigned short proto, unsigned int sum)
+{
+ unsigned long tmp, tmp2;
+
+ asm(" addcc %2,%0,%0,icc0 \n"
+
+ /* add up the source addr */
+ " ldi @(%3,0),%1 \n"
+ " addxcc %1,%0,%0,icc0 \n"
+ " ldi @(%3,4),%2 \n"
+ " addxcc %2,%0,%0,icc0 \n"
+ " ldi @(%3,8),%1 \n"
+ " addxcc %1,%0,%0,icc0 \n"
+ " ldi @(%3,12),%2 \n"
+ " addxcc %2,%0,%0,icc0 \n"
+
+ /* add up the dest addr */
+ " ldi @(%4,0),%1 \n"
+ " addxcc %1,%0,%0,icc0 \n"
+ " ldi @(%4,4),%2 \n"
+ " addxcc %2,%0,%0,icc0 \n"
+ " ldi @(%4,8),%1 \n"
+ " addxcc %1,%0,%0,icc0 \n"
+ " ldi @(%4,12),%2 \n"
+ " addxcc %2,%0,%0,icc0 \n"
+
+ /* fold the 33-bit result into 16-bits */
+ " addxcc gr0,%0,%0,icc0 \n"
+ " srli %0,#16,%1 \n"
+ " sethi #0,%0 \n"
+ " add %1,%0,%0 \n"
+ " srli %0,#16,%1 \n"
+ " add %1,%0,%0 \n"
+
+ : "=r" (sum), "=&r" (tmp), "=r" (tmp2)
+ : "r" (saddr), "r" (daddr), "0" (sum), "2" (len + proto)
+ : "icc0"
+ );
+
+ return ~sum;
+}
+
+#endif /* _ASM_CHECKSUM_H */
--- /dev/null
+/* cpu-irqs.h: on-CPU peripheral irqs
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_CPU_IRQS_H
+#define _ASM_CPU_IRQS_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/irq-routing.h>
+
+#define IRQ_BASE_CPU (NR_IRQ_ACTIONS_PER_GROUP * 0)
+
+/* IRQ IDs presented to drivers */
+enum {
+ IRQ_CPU__UNUSED = IRQ_BASE_CPU,
+ IRQ_CPU_UART0,
+ IRQ_CPU_UART1,
+ IRQ_CPU_TIMER0,
+ IRQ_CPU_TIMER1,
+ IRQ_CPU_TIMER2,
+ IRQ_CPU_DMA0,
+ IRQ_CPU_DMA1,
+ IRQ_CPU_DMA2,
+ IRQ_CPU_DMA3,
+ IRQ_CPU_DMA4,
+ IRQ_CPU_DMA5,
+ IRQ_CPU_DMA6,
+ IRQ_CPU_DMA7,
+ IRQ_CPU_EXTERNAL0,
+ IRQ_CPU_EXTERNAL1,
+ IRQ_CPU_EXTERNAL2,
+ IRQ_CPU_EXTERNAL3,
+ IRQ_CPU_EXTERNAL4,
+ IRQ_CPU_EXTERNAL5,
+ IRQ_CPU_EXTERNAL6,
+ IRQ_CPU_EXTERNAL7,
+};
+
+/* IRQ to level mappings */
+#define IRQ_GDBSTUB_LEVEL 15
+#define IRQ_UART_LEVEL 13
+
+#ifdef CONFIG_GDBSTUB_UART0
+#define IRQ_UART0_LEVEL IRQ_GDBSTUB_LEVEL
+#else
+#define IRQ_UART0_LEVEL IRQ_UART_LEVEL
+#endif
+
+#ifdef CONFIG_GDBSTUB_UART1
+#define IRQ_UART1_LEVEL IRQ_GDBSTUB_LEVEL
+#else
+#define IRQ_UART1_LEVEL IRQ_UART_LEVEL
+#endif
+
+#define IRQ_DMA0_LEVEL 14
+#define IRQ_DMA1_LEVEL 14
+#define IRQ_DMA2_LEVEL 14
+#define IRQ_DMA3_LEVEL 14
+#define IRQ_DMA4_LEVEL 14
+#define IRQ_DMA5_LEVEL 14
+#define IRQ_DMA6_LEVEL 14
+#define IRQ_DMA7_LEVEL 14
+
+#define IRQ_TIMER0_LEVEL 12
+#define IRQ_TIMER1_LEVEL 11
+#define IRQ_TIMER2_LEVEL 10
+
+#define IRQ_XIRQ0_LEVEL 1
+#define IRQ_XIRQ1_LEVEL 2
+#define IRQ_XIRQ2_LEVEL 3
+#define IRQ_XIRQ3_LEVEL 4
+#define IRQ_XIRQ4_LEVEL 5
+#define IRQ_XIRQ5_LEVEL 6
+#define IRQ_XIRQ6_LEVEL 7
+#define IRQ_XIRQ7_LEVEL 8
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_CPU_IRQS_H */
--- /dev/null
+#ifndef _ASM_DMA_MAPPING_H
+#define _ASM_DMA_MAPPING_H
+
+#include <linux/device.h>
+#include <asm/cache.h>
+#include <asm/cacheflush.h>
+#include <asm/scatterlist.h>
+#include <asm/io.h>
+
+#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
+#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
+
+extern unsigned long __nongprelbss dma_coherent_mem_start;
+extern unsigned long __nongprelbss dma_coherent_mem_end;
+
+void *dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, int gfp);
+void dma_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle);
+
+/*
+ * These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns, or alternatively stop on the first sg_dma_len(sg) which
+ * is 0.
+ */
+#define sg_dma_address(sg) ((unsigned long) (page_to_phys((sg)->page) + (sg)->offset))
+#define sg_dma_len(sg) ((sg)->length)
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.
+ * The 32-bit bus address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory
+ * until either pci_unmap_single or pci_dma_sync_single is performed.
+ */
+extern dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
+ enum dma_data_direction direction);
+
+/*
+ * Unmap a single streaming mode DMA translation. The dma_addr and size
+ * must match what was provided for in a previous pci_map_single call. All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+static inline
+void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+ enum dma_data_direction direction)
+{
+ BUG_ON(direction == DMA_NONE);
+}
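Putting dma_map_single() and dma_unmap_single() together, a hypothetical driver fragment (names invented for illustration) for one streaming-mode transfer to a device:

static int example_send_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	/* ... hand 'handle' to the device and wait for it to finish ... */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}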
+
+/*
+ * Map a set of buffers described by scatterlist in streaming
+ * mode for DMA. This is the scatter-gather version of the
+ * above pci_map_single interface. Here the scatter-gather list
+ * elements are each tagged with the appropriate dma address
+ * and length. They are obtained via sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ * DMA address/length pairs than there are SG table elements.
+ * (for example via virtual mapping capabilities)
+ * The routine returns the number of addr/length pairs actually
+ * used, at most nents.
+ *
+ * Device ownership issues as mentioned above for pci_map_single are
+ * the same here.
+ */
+extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction direction);
+
+/*
+ * Unmap a set of streaming mode DMA translations.
+ * Again, cpu read rules concerning calls here are the same as for
+ * pci_unmap_single() above.
+ */
+static inline
+void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+ enum dma_data_direction direction)
+{
+ BUG_ON(direction == DMA_NONE);
+}
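And the scatter-gather flavour, again purely illustrative: only the count returned by dma_map_sg() is walked, with each element read through the sg_dma_address()/sg_dma_len() accessors defined earlier in this header:

static void example_send_sg(struct device *dev, struct scatterlist *sg, int nents)
{
	int i, count;

	count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
	for (i = 0; i < count; i++)
		printk(KERN_DEBUG "seg %d: bus %#lx len %u\n",
		       i, sg_dma_address(&sg[i]), sg_dma_len(&sg[i]));
	/* ... let the device consume the segments ... */
	dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);
}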
+
+extern
+dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
+ size_t size, enum dma_data_direction direction);
+
+static inline
+void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
+ enum dma_data_direction direction)
+{
+ BUG_ON(direction == DMA_NONE);
+}
+
+
+static inline
+void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction direction)
+{
+}
+
+static inline
+void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction direction)
+{
+ flush_write_buffers();
+}
+
+static inline
+void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
+ unsigned long offset, size_t size,
+ enum dma_data_direction direction)
+{
+}
+
+static inline
+void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
+ unsigned long offset, size_t size,
+ enum dma_data_direction direction)
+{
+ flush_write_buffers();
+}
+
+static inline
+void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
+ enum dma_data_direction direction)
+{
+}
+
+static inline
+void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
+ enum dma_data_direction direction)
+{
+ flush_write_buffers();
+}
+
+static inline
+int dma_mapping_error(dma_addr_t dma_addr)
+{
+ return 0;
+}
+
+static inline
+int dma_supported(struct device *dev, u64 mask)
+{
+ /*
+ * we fall back to GFP_DMA when the mask isn't all 1s,
+ * so we can't guarantee allocations that must be
+ * within a tighter range than GFP_DMA..
+ */
+ if (mask < 0x00ffffff)
+ return 0;
+
+ return 1;
+}
+
+static inline
+int dma_set_mask(struct device *dev, u64 mask)
+{
+ if (!dev->dma_mask || !dma_supported(dev, mask))
+ return -EIO;
+
+ *dev->dma_mask = mask;
+
+ return 0;
+}
+
+static inline
+int dma_get_cache_alignment(void)
+{
+ return 1 << L1_CACHE_SHIFT;
+}
+
+#define dma_is_consistent(d) (1)
+
+static inline
+void dma_cache_sync(void *vaddr, size_t size,
+ enum dma_data_direction direction)
+{
+ flush_write_buffers();
+}
+
+#endif /* _ASM_DMA_MAPPING_H */
--- /dev/null
+/* dma.h: FRV DMA controller management
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_DMA_H
+#define _ASM_DMA_H
+
+//#define DMA_DEBUG 1
+
+#include <linux/config.h>
+#include <linux/interrupt.h>
+
+#undef MAX_DMA_CHANNELS /* don't use kernel/dma.c */
+
+/* under 2.4 this is actually needed by the new bootmem allocator */
+#define MAX_DMA_ADDRESS PAGE_OFFSET
+
+/*
+ * FRV DMA controller management
+ */
+struct pt_regs;
+
+typedef irqreturn_t (*dma_irq_handler_t)(int dmachan, unsigned long cstr, void *data,
+ struct pt_regs *regs);
+
+extern void frv_dma_init(void);
+
+extern int frv_dma_open(const char *devname,
+ unsigned long dmamask,
+ int dmacap,
+ dma_irq_handler_t handler,
+ unsigned long irq_flags,
+ void *data);
+
+/* channels required */
+#define FRV_DMA_MASK_ANY ULONG_MAX /* any channel */
+
+/* capabilities required */
+#define FRV_DMA_CAP_DREQ 0x01 /* DMA request pin */
+#define FRV_DMA_CAP_DACK 0x02 /* DMA ACK pin */
+#define FRV_DMA_CAP_DONE 0x04 /* DMA done pin */
+
+extern void frv_dma_close(int dma);
+
+extern void frv_dma_config(int dma, unsigned long ccfr, unsigned long cctr, unsigned long apr);
+
+extern void frv_dma_start(int dma,
+ unsigned long sba, unsigned long dba,
+ unsigned long pix, unsigned long six, unsigned long bcl);
+
+extern void frv_dma_restart_circular(int dma, unsigned long six);
+
+extern void frv_dma_stop(int dma);
+
+extern int is_frv_dma_interrupting(int dma);
+
+extern void frv_dma_dump(int dma);
+
+extern void frv_dma_status_clear(int dma);
+
+#define FRV_DMA_NCHANS 8
+#define FRV_DMA_4CHANS 4
+#define FRV_DMA_8CHANS 8
+
+#define DMAC_CCFRx 0x00 /* channel configuration reg */
+#define DMAC_CCFRx_CM_SHIFT 16
+#define DMAC_CCFRx_CM_DA 0x00000000
+#define DMAC_CCFRx_CM_SCA 0x00010000
+#define DMAC_CCFRx_CM_DCA 0x00020000
+#define DMAC_CCFRx_CM_2D 0x00030000
+#define DMAC_CCFRx_ATS_SHIFT 8
+#define DMAC_CCFRx_RS_INTERN 0x00000000
+#define DMAC_CCFRx_RS_EXTERN 0x00000001
+#define DMAC_CCFRx_RS_SHIFT 0
+
+#define DMAC_CSTRx 0x08 /* channel status reg */
+#define DMAC_CSTRx_FS 0x0000003f
+#define DMAC_CSTRx_NE 0x00000100
+#define DMAC_CSTRx_FED 0x00000200
+#define DMAC_CSTRx_WER 0x00000800
+#define DMAC_CSTRx_RER 0x00001000
+#define DMAC_CSTRx_CE 0x00002000
+#define DMAC_CSTRx_INT 0x00800000
+#define DMAC_CSTRx_BUSY 0x80000000
+
+#define DMAC_CCTRx 0x10 /* channel control reg */
+#define DMAC_CCTRx_DSIZ_1 0x00000000
+#define DMAC_CCTRx_DSIZ_2 0x00000001
+#define DMAC_CCTRx_DSIZ_4 0x00000002
+#define DMAC_CCTRx_DSIZ_32 0x00000005
+#define DMAC_CCTRx_DAU_HOLD 0x00000000
+#define DMAC_CCTRx_DAU_INC 0x00000010
+#define DMAC_CCTRx_DAU_DEC 0x00000020
+#define DMAC_CCTRx_SSIZ_1 0x00000000
+#define DMAC_CCTRx_SSIZ_2 0x00000100
+#define DMAC_CCTRx_SSIZ_4 0x00000200
+#define DMAC_CCTRx_SSIZ_32 0x00000500
+#define DMAC_CCTRx_SAU_HOLD 0x00000000
+#define DMAC_CCTRx_SAU_INC 0x00001000
+#define DMAC_CCTRx_SAU_DEC 0x00002000
+#define DMAC_CCTRx_FC 0x08000000
+#define DMAC_CCTRx_ICE 0x10000000
+#define DMAC_CCTRx_IE 0x40000000
+#define DMAC_CCTRx_ACT 0x80000000
+
+#define DMAC_SBAx 0x18 /* source base address reg */
+#define DMAC_DBAx 0x20 /* data base address reg */
+#define DMAC_PIXx 0x28 /* primary index reg */
+#define DMAC_SIXx 0x30 /* secondary index reg */
+#define DMAC_BCLx 0x38 /* byte count limit reg */
+#define DMAC_APRx 0x40 /* alternate pointer reg */
+
+/*
+ * required for PCI + MODULES
+ */
+#ifdef CONFIG_PCI
+extern int isa_dma_bridge_buggy;
+#else
+#define isa_dma_bridge_buggy (0)
+#endif
+
+#endif /* _ASM_DMA_H */
--- /dev/null
+/* elf.h: FR-V ELF definitions
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from include/asm-m68knommu/elf.h
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef __ASM_ELF_H
+#define __ASM_ELF_H
+
+#include <linux/config.h>
+#include <asm/ptrace.h>
+#include <asm/user.h>
+
+struct elf32_hdr;
+
+/*
+ * ELF header e_flags defines.
+ */
+#define EF_FRV_GPR_MASK 0x00000003 /* mask for # of gprs */
+#define EF_FRV_GPR32 0x00000001 /* Only uses GR on 32-register */
+#define EF_FRV_GPR64 0x00000002 /* Only uses GR on 64-register */
+#define EF_FRV_FPR_MASK 0x0000000c /* mask for # of fprs */
+#define EF_FRV_FPR32 0x00000004 /* Only uses FR on 32-register */
+#define EF_FRV_FPR64 0x00000008 /* Only uses FR on 64-register */
+#define EF_FRV_FPR_NONE 0x0000000C /* Uses software floating-point */
+#define EF_FRV_DWORD_MASK 0x00000030 /* mask for dword support */
+#define EF_FRV_DWORD_YES 0x00000010 /* Assumes stack aligned to 8-byte boundaries. */
+#define EF_FRV_DWORD_NO 0x00000020 /* Assumes stack aligned to 4-byte boundaries. */
+#define EF_FRV_DOUBLE 0x00000040 /* Uses double instructions. */
+#define EF_FRV_MEDIA 0x00000080 /* Uses media instructions. */
+#define EF_FRV_PIC 0x00000100 /* Uses position independent code. */
+#define EF_FRV_NON_PIC_RELOCS 0x00000200 /* Does not use position Independent code. */
+#define EF_FRV_MULADD 0x00000400 /* -mmuladd */
+#define EF_FRV_BIGPIC 0x00000800 /* -fPIC */
+#define EF_FRV_LIBPIC 0x00001000 /* -mlibrary-pic */
+#define EF_FRV_G0 0x00002000 /* -G 0, no small data ptr */
+#define EF_FRV_NOPACK 0x00004000 /* -mnopack */
+#define EF_FRV_FDPIC 0x00008000 /* -mfdpic */
+#define EF_FRV_CPU_MASK 0xff000000 /* specific cpu bits */
+#define EF_FRV_CPU_GENERIC 0x00000000 /* Set CPU type is FR-V */
+#define EF_FRV_CPU_FR500 0x01000000 /* Set CPU type is FR500 */
+#define EF_FRV_CPU_FR300 0x02000000 /* Set CPU type is FR300 */
+#define EF_FRV_CPU_SIMPLE 0x03000000 /* SIMPLE */
+#define EF_FRV_CPU_TOMCAT 0x04000000 /* Tomcat, FR500 prototype */
+#define EF_FRV_CPU_FR400 0x05000000 /* Set CPU type is FR400 */
+#define EF_FRV_CPU_FR550 0x06000000 /* Set CPU type is FR550 */
+#define EF_FRV_CPU_FR405 0x07000000 /* Set CPU type is FR405 */
+#define EF_FRV_CPU_FR450 0x08000000 /* Set CPU type is FR450 */
+
+/*
+ * FR-V ELF relocation types
+ */
+
+
+/*
+ * ELF register definitions..
+ */
+typedef unsigned long elf_greg_t;
+
+#define ELF_NGREG (sizeof(struct pt_regs) / sizeof(elf_greg_t))
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef struct fpmedia_struct elf_fpregset_t;
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+extern int elf_check_arch(const struct elf32_hdr *hdr);
+
+#define elf_check_fdpic(x) ((x)->e_flags & EF_FRV_FDPIC && !((x)->e_flags & EF_FRV_NON_PIC_RELOCS))
+#define elf_check_const_displacement(x) ((x)->e_flags & EF_FRV_PIC)
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS32
+#define ELF_DATA ELFDATA2MSB
+#define ELF_ARCH EM_FRV
+
+#define ELF_PLAT_INIT(_r) \
+do { \
+ __kernel_frame0_ptr->gr16 = 0; \
+ __kernel_frame0_ptr->gr17 = 0; \
+ __kernel_frame0_ptr->gr18 = 0; \
+ __kernel_frame0_ptr->gr19 = 0; \
+ __kernel_frame0_ptr->gr20 = 0; \
+ __kernel_frame0_ptr->gr21 = 0; \
+ __kernel_frame0_ptr->gr22 = 0; \
+ __kernel_frame0_ptr->gr23 = 0; \
+ __kernel_frame0_ptr->gr24 = 0; \
+ __kernel_frame0_ptr->gr25 = 0; \
+ __kernel_frame0_ptr->gr26 = 0; \
+ __kernel_frame0_ptr->gr27 = 0; \
+ __kernel_frame0_ptr->gr29 = 0; \
+} while(0)
+
+#define ELF_FDPIC_PLAT_INIT(_regs, _exec_map_addr, _interp_map_addr, _dynamic_addr) \
+do { \
+ __kernel_frame0_ptr->gr16 = _exec_map_addr; \
+ __kernel_frame0_ptr->gr17 = _interp_map_addr; \
+ __kernel_frame0_ptr->gr18 = _dynamic_addr; \
+ __kernel_frame0_ptr->gr19 = 0; \
+ __kernel_frame0_ptr->gr20 = 0; \
+ __kernel_frame0_ptr->gr21 = 0; \
+ __kernel_frame0_ptr->gr22 = 0; \
+ __kernel_frame0_ptr->gr23 = 0; \
+ __kernel_frame0_ptr->gr24 = 0; \
+ __kernel_frame0_ptr->gr25 = 0; \
+ __kernel_frame0_ptr->gr26 = 0; \
+ __kernel_frame0_ptr->gr27 = 0; \
+ __kernel_frame0_ptr->gr29 = 0; \
+} while(0)
+
+#define USE_ELF_CORE_DUMP
+#define ELF_EXEC_PAGESIZE 16384
+
+/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
+ use of this is to invoke "./ld.so someprog" to test out a new version of
+ the loader. We need to make sure that it is out of the way of the program
+ that it will "exec", and that there is sufficient room for the brk. */
+
+#define ELF_ET_DYN_BASE 0x08000000UL
+
+#define ELF_CORE_COPY_REGS(pr_reg, regs) \
+	memcpy(&pr_reg[0], &regs->sp, 31 * sizeof(uint32_t));
+
+/* This yields a mask that user programs can use to figure out what
+ instruction set this cpu supports. */
+
+#define ELF_HWCAP (0)
+
+/* This yields a string that ld.so will use to load implementation
+ specific libraries for optimization. This is more specific in
+ intent than poking at uname or /proc/cpuinfo. */
+
+#define ELF_PLATFORM (NULL)
+
+#ifdef __KERNEL__
+#define SET_PERSONALITY(ex, ibcs2) set_personality((ibcs2)?PER_SVR4:PER_LINUX)
+#endif
+
+#endif
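The e_flags bits above are what elf_check_fdpic() and elf_check_const_displacement() test when deciding how to load a binary. As a rough, hypothetical sketch (not taken from the patch, and assuming the usual e_flags field of struct elf32_hdr from linux/elf.h):

	/* Illustrative helper only -- not part of the patch. */
	static int frv_header_wants_fdpic(const struct elf32_hdr *hdr)
	{
		/* An FDPIC binary sets EF_FRV_FDPIC and must not depend on
		 * non-PIC relocations, mirroring elf_check_fdpic() above. */
		return (hdr->e_flags & EF_FRV_FDPIC) &&
		       !(hdr->e_flags & EF_FRV_NON_PIC_RELOCS);
	}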
--- /dev/null
+#ifndef __ASM_FPU_H
+#define __ASM_FPU_H
+
+#include <linux/config.h>
+
+/*
+ * MAX floating point unit state size (FSAVE/FRESTORE)
+ */
+
+#define kernel_fpu_end() do { asm volatile("bar":::"memory"); preempt_enable(); } while(0)
+
+#endif /* __ASM_FPU_H */
--- /dev/null
+/* gdb-stub.h: FRV GDB stub
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from asm-mips/gdb-stub.h (c) 1995 Andreas Busse
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef __ASM_GDB_STUB_H
+#define __ASM_GDB_STUB_H
+
+#undef GDBSTUB_DEBUG_PROTOCOL
+
+#include <asm/ptrace.h>
+
+/*
+ * important register numbers in GDB protocol
+ * - GR0, GR1, GR2, GR3, GR4, GR5, GR6, GR7,
+ * - GR8, GR9, GR10, GR11, GR12, GR13, GR14, GR15,
+ * - GR16, GR17, GR18, GR19, GR20, GR21, GR22, GR23,
+ * - GR24, GR25, GR26, GR27, GR28, GR29, GR30, GR31,
+ * - GR32, GR33, GR34, GR35, GR36, GR37, GR38, GR39,
+ * - GR40, GR41, GR42, GR43, GR44, GR45, GR46, GR47,
+ * - GR48, GR49, GR50, GR51, GR52, GR53, GR54, GR55,
+ * - GR56, GR57, GR58, GR59, GR60, GR61, GR62, GR63,
+ * - FR0, FR1, FR2, FR3, FR4, FR5, FR6, FR7,
+ * - FR8, FR9, FR10, FR11, FR12, FR13, FR14, FR15,
+ * - FR16, FR17, FR18, FR19, FR20, FR21, FR22, FR23,
+ * - FR24, FR25, FR26, FR27, FR28, FR29, FR30, FR31,
+ * - FR32, FR33, FR34, FR35, FR36, FR37, FR38, FR39,
+ * - FR40, FR41, FR42, FR43, FR44, FR45, FR46, FR47,
+ * - FR48, FR49, FR50, FR51, FR52, FR53, FR54, FR55,
+ * - FR56, FR57, FR58, FR59, FR60, FR61, FR62, FR63,
+ * - PC, PSR, CCR, CCCR,
+ * - _X132, _X133, _X134
+ * - TBR, BRR, DBAR0, DBAR1, DBAR2, DBAR3,
+ * - SCR0, SCR1, SCR2, SCR3,
+ * - LR, LCR,
+ * - IACC0H, IACC0L,
+ * - FSR0,
+ * - ACC0, ACC1, ACC2, ACC3, ACC4, ACC5, ACC6, ACC7,
+ * - ACCG0123, ACCG4567,
+ * - MSR0, MSR1,
+ * - GNER0, GNER1,
+ * - FNER0, FNER1,
+ */
+#define GDB_REG_GR(N) (N)
+#define GDB_REG_FR(N) (64+(N))
+#define GDB_REG_PC 128
+#define GDB_REG_PSR 129
+#define GDB_REG_CCR 130
+#define GDB_REG_CCCR 131
+#define GDB_REG_TBR 135
+#define GDB_REG_BRR 136
+#define GDB_REG_DBAR(N) (137+(N))
+#define GDB_REG_SCR(N) (141+(N))
+#define GDB_REG_LR 145
+#define GDB_REG_LCR 146
+#define GDB_REG_FSR0 149
+#define GDB_REG_ACC(N) (150+(N))
+#define GDB_REG_ACCG(N) (158+(N)/4)
+#define GDB_REG_MSR(N) (160+(N))
+#define GDB_REG_GNER(N) (162+(N))
+#define GDB_REG_FNER(N) (164+(N))
+
+#define GDB_REG_SP GDB_REG_GR(1)
+#define GDB_REG_FP GDB_REG_GR(2)
+
+#ifndef _LANGUAGE_ASSEMBLY
+
+/*
+ * Prototypes
+ */
+extern void show_registers_only(struct pt_regs *regs);
+
+extern void gdbstub_init(void);
+extern void gdbstub(int type);
+extern void gdbstub_exit(int status);
+
+extern void gdbstub_io_init(void);
+extern void gdbstub_set_baud(unsigned baud);
+extern int gdbstub_rx_char(unsigned char *_ch, int nonblock);
+extern void gdbstub_tx_char(unsigned char ch);
+extern void gdbstub_tx_flush(void);
+extern void gdbstub_do_rx(void);
+
+extern asmlinkage void __debug_stub_init_break(void);
+extern asmlinkage void __break_hijack_kernel_event(void);
+extern asmlinkage void start_kernel(void);
+
+extern asmlinkage void gdbstub_rx_handler(void);
+extern asmlinkage void gdbstub_rx_irq(void);
+extern asmlinkage void gdbstub_intercept(void);
+
+extern uint32_t __entry_usertrap_table[];
+extern uint32_t __entry_kerneltrap_table[];
+
+extern volatile u8 gdbstub_rx_buffer[PAGE_SIZE];
+extern volatile u32 gdbstub_rx_inp;
+extern volatile u32 gdbstub_rx_outp;
+extern volatile u8 gdbstub_rx_overflow;
+extern u8 gdbstub_rx_unget;
+
+extern void gdbstub_printk(const char *fmt, ...);
+extern void debug_to_serial(const char *p, int n);
+extern void console_set_baud(unsigned baud);
+
+#ifdef GDBSTUB_DEBUG_PROTOCOL
+#define gdbstub_proto(FMT,...) gdbstub_printk(FMT,##__VA_ARGS__)
+#else
+#define gdbstub_proto(FMT,...) ({ 0; })
+#endif
+
+#endif /* _LANGUAGE_ASSEMBLY */
+#endif /* __ASM_GDB_STUB_H */
--- /dev/null
+/* hardirq.h: FRV hardware IRQ management
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef __ASM_HARDIRQ_H
+#define __ASM_HARDIRQ_H
+
+#include <linux/config.h>
+#include <linux/threads.h>
+
+typedef struct {
+ unsigned int __softirq_pending;
+ unsigned long idle_timestamp;
+} ____cacheline_aligned irq_cpustat_t;
+
+#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
+
+#ifdef CONFIG_SMP
+#error SMP not available on FR-V
+#endif /* CONFIG_SMP */
+
+
+#endif
--- /dev/null
+/* highmem.h: virtual kernel memory mappings for high memory
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from include/asm-i386/highmem.h
+ *
+ * See Documentation/fujitsu/frv/mmu-layout.txt for more information.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_HIGHMEM_H
+#define _ASM_HIGHMEM_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <asm/mem-layout.h>
+#include <asm/spr-regs.h>
+#include <asm/mb-regs.h>
+
+#define NR_TLB_LINES 64 /* number of lines in the TLB */
+
+#ifndef __ASSEMBLY__
+
+#include <linux/interrupt.h>
+#include <asm/kmap_types.h>
+#include <asm/pgtable.h>
+
+#ifdef CONFIG_DEBUG_HIGHMEM
+#define HIGHMEM_DEBUG 1
+#else
+#define HIGHMEM_DEBUG 0
+#endif
+
+/* declarations for highmem.c */
+extern unsigned long highstart_pfn, highend_pfn;
+
+#define kmap_prot PAGE_KERNEL
+#define kmap_pte ______kmap_pte_in_TLB
+extern pte_t *pkmap_page_table;
+
+extern void kmap_init(void);
+
+#define flush_cache_kmaps() do { } while (0)
+
+/*
+ * Right now we initialize only a single pte table. It can be extended
+ * easily; subsequent pte tables have to be allocated in one physical
+ * chunk of RAM.
+ */
+#define LAST_PKMAP PTRS_PER_PTE
+#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
+#define PKMAP_NR(virt) ((virt - PKMAP_BASE) >> PAGE_SHIFT)
+#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
+
+extern void *kmap_high(struct page *page);
+extern void kunmap_high(struct page *page);
+
+extern void *kmap(struct page *page);
+extern void kunmap(struct page *page);
+
+extern struct page *kmap_atomic_to_page(void *ptr);
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
+ * gives a more generic (and caching) interface. But kmap_atomic can
+ * be used in IRQ contexts, so in some (very limited) cases we need
+ * it.
+ */
+#define KMAP_ATOMIC_CACHE_DAMR 8
+
+#ifndef __ASSEMBLY__
+
+#define __kmap_atomic_primary(type, paddr, ampr) \
+({ \
+ unsigned long damlr, dampr; \
+ \
+ dampr = paddr | xAMPRx_L | xAMPRx_M | xAMPRx_S | xAMPRx_SS_16Kb | xAMPRx_V; \
+ \
+ if (type != __KM_CACHE) \
+ asm volatile("movgs %0,dampr"#ampr :: "r"(dampr)); \
+ else \
+ asm volatile("movgs %0,iampr"#ampr"\n" \
+ "movgs %0,dampr"#ampr"\n" \
+ :: "r"(dampr) \
+ ); \
+ \
+ asm("movsg damlr"#ampr",%0" : "=r"(damlr)); \
+ \
+ /*printk("DAMR"#ampr": PRIM sl=%d L=%08lx P=%08lx\n", type, damlr, dampr);*/ \
+ \
+ (void *) damlr; \
+})
+
+#define __kmap_atomic_secondary(slot, paddr) \
+({ \
+ unsigned long damlr = KMAP_ATOMIC_SECONDARY_FRAME + (slot) * PAGE_SIZE; \
+ unsigned long dampr = paddr | xAMPRx_L | xAMPRx_M | xAMPRx_S | xAMPRx_SS_16Kb | xAMPRx_V; \
+ \
+ asm volatile("movgs %0,tplr \n" \
+ "movgs %1,tppr \n" \
+ "tlbpr %0,gr0,#2,#1" \
+ : : "r"(damlr), "r"(dampr)); \
+ \
+ /*printk("TLB: SECN sl=%d L=%08lx P=%08lx\n", slot, damlr, dampr);*/ \
+ \
+ (void *) damlr; \
+})
+
+static inline void *kmap_atomic(struct page *page, enum km_type type)
+{
+ unsigned long paddr;
+
+ preempt_disable();
+ paddr = page_to_phys(page);
+
+ switch (type) {
+ case 0: return __kmap_atomic_primary(0, paddr, 2);
+ case 1: return __kmap_atomic_primary(1, paddr, 3);
+ case 2: return __kmap_atomic_primary(2, paddr, 4);
+ case 3: return __kmap_atomic_primary(3, paddr, 5);
+ case 4: return __kmap_atomic_primary(4, paddr, 6);
+ case 5: return __kmap_atomic_primary(5, paddr, 7);
+ case 6: return __kmap_atomic_primary(6, paddr, 8);
+ case 7: return __kmap_atomic_primary(7, paddr, 9);
+ case 8: return __kmap_atomic_primary(8, paddr, 10);
+
+ case 9 ... 9 + NR_TLB_LINES - 1:
+ return __kmap_atomic_secondary(type - 9, paddr);
+
+ default:
+ BUG();
+ return 0;
+ }
+}
+
+#define __kunmap_atomic_primary(type, ampr) \
+do { \
+ asm volatile("movgs gr0,dampr"#ampr"\n"); \
+ if (type == __KM_CACHE) \
+ asm volatile("movgs gr0,iampr"#ampr"\n"); \
+} while(0)
+
+#define __kunmap_atomic_secondary(slot, vaddr) \
+do { \
+ asm volatile("tlbpr %0,gr0,#4,#1" : : "r"(vaddr)); \
+} while(0)
+
+static inline void kunmap_atomic(void *kvaddr, enum km_type type)
+{
+ switch (type) {
+ case 0: __kunmap_atomic_primary(0, 2); break;
+ case 1: __kunmap_atomic_primary(1, 3); break;
+ case 2: __kunmap_atomic_primary(2, 4); break;
+ case 3: __kunmap_atomic_primary(3, 5); break;
+ case 4: __kunmap_atomic_primary(4, 6); break;
+ case 5: __kunmap_atomic_primary(5, 7); break;
+ case 6: __kunmap_atomic_primary(6, 8); break;
+ case 7: __kunmap_atomic_primary(7, 9); break;
+ case 8: __kunmap_atomic_primary(8, 10); break;
+
+ case 9 ... 9 + NR_TLB_LINES - 1:
+ __kunmap_atomic_secondary(type - 9, kvaddr);
+ break;
+
+ default:
+ BUG();
+ }
+ preempt_enable();
+}
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_HIGHMEM_H */
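The kmap_atomic()/kunmap_atomic() pair above trades the generic pkmap path for direct DAMPR/TLB programming. A minimal, hypothetical usage sketch, assuming KM_USER0 from asm/kmap_types.h names one of the primary slots and that len is at most PAGE_SIZE:

	/* Illustrative only -- copies data out of a (possibly highmem) page. */
	static void copy_from_page(struct page *page, void *dst, size_t len)
	{
		void *vaddr = kmap_atomic(page, KM_USER0);	/* disables preemption, loads a DAMPR */
		memcpy(dst, vaddr, len);
		kunmap_atomic(vaddr, KM_USER0);			/* clears the mapping, re-enables preemption */
	}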
--- /dev/null
+/* ide.h: FRV IDE declarations
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_IDE_H
+#define _ASM_IDE_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <asm/setup.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#undef SUPPORT_SLOW_DATA_PORTS
+#define SUPPORT_SLOW_DATA_PORTS 0
+
+#undef SUPPORT_VLB_SYNC
+#define SUPPORT_VLB_SYNC 0
+
+#ifndef MAX_HWIFS
+#define MAX_HWIFS 8
+#endif
+
+/****************************************************************************/
+/*
+ * some bits needed for parts of the IDE subsystem to compile
+ */
+#define __ide_mm_insw(port, addr, n) insw(port, addr, n)
+#define __ide_mm_insl(port, addr, n) insl(port, addr, n)
+#define __ide_mm_outsw(port, addr, n) outsw(port, addr, n)
+#define __ide_mm_outsl(port, addr, n) outsl(port, addr, n)
+
+
+#endif /* __KERNEL__ */
+#endif /* _ASM_IDE_H */
__asm__ __volatile__ ("membar" : : :"memory");
}
-
-/*
- * Convert a physical pointer to a virtual kernel pointer for /dev/mem
- * access
- */
-#define xlate_dev_mem_ptr(p) __va(p)
-
-/*
- * Convert a virtual cached pointer to an uncached pointer
- */
-#define xlate_dev_kmem_ptr(p) p
-
#endif /* __KERNEL__ */
#endif /* _ASM_IO_H */
--- /dev/null
+/* irq-routing.h: multiplexed IRQ routing
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_IRQ_ROUTING_H
+#define _ASM_IRQ_ROUTING_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/spinlock.h>
+#include <asm/irq.h>
+
+struct irq_source;
+struct irq_level;
+
+/*
+ * IRQ action distribution sets
+ */
+struct irq_group {
+ int first_irq; /* first IRQ distributed here */
+ void (*control)(struct irq_group *group, int index, int on);
+
+ struct irqaction *actions[NR_IRQ_ACTIONS_PER_GROUP]; /* IRQ action chains */
+ struct irq_source *sources[NR_IRQ_ACTIONS_PER_GROUP]; /* IRQ sources */
+ int disable_cnt[NR_IRQ_ACTIONS_PER_GROUP]; /* disable counts */
+};
+
+/*
+ * IRQ source manager
+ */
+struct irq_source {
+ struct irq_source *next;
+ struct irq_level *level;
+ const char *muxname;
+ volatile void __iomem *muxdata;
+ unsigned long irqmask;
+
+ void (*doirq)(struct irq_source *source);
+};
+
+/*
+ * IRQ level management (per CPU IRQ priority / entry vector)
+ */
+struct irq_level {
+ int usage;
+ int disable_count;
+ unsigned long flags; /* current SA_INTERRUPT and SA_SHIRQ settings */
+ spinlock_t lock;
+ struct irq_source *sources;
+};
+
+extern struct irq_level frv_irq_levels[16];
+extern struct irq_group *irq_groups[NR_IRQ_GROUPS];
+
+extern void frv_irq_route(struct irq_source *source, int irqlevel);
+extern void frv_irq_route_external(struct irq_source *source, int irq);
+extern void frv_irq_set_group(struct irq_group *group);
+extern void distribute_irqs(struct irq_group *group, unsigned long irqmask);
+extern void route_cpu_irqs(void);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_IRQ_ROUTING_H */
--- /dev/null
+/* irq.h: FRV IRQ definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_IRQ_H_
+#define _ASM_IRQ_H_
+
+#include <linux/config.h>
+
+/*
+ * the system has an on-CPU PIC and another PIC on the FPGA and other PICs on other peripherals,
+ * so we do some routing in irq-routing.[ch] to reduce the number of false-positives seen by
+ * drivers
+ */
+
+/* this number is used when no interrupt has been assigned */
+#define NO_IRQ (-1)
+
+#define NR_IRQ_LOG2_ACTIONS_PER_GROUP 5
+#define NR_IRQ_ACTIONS_PER_GROUP (1 << NR_IRQ_LOG2_ACTIONS_PER_GROUP)
+#define NR_IRQ_GROUPS 4
+#define NR_IRQS (NR_IRQ_ACTIONS_PER_GROUP * NR_IRQ_GROUPS)
+
+/* probe returns a 32-bit IRQ mask:-/ */
+#define MIN_PROBE_IRQ (NR_IRQS - 32)
+
+static inline int irq_canonicalize(int irq)
+{
+ return irq;
+}
+
+extern void disable_irq_nosync(unsigned int irq);
+extern void disable_irq(unsigned int irq);
+extern void enable_irq(unsigned int irq);
+
+
+#endif /* _ASM_IRQ_H_ */
--- /dev/null
+/* mb-regs.h: motherboard registers
+ *
+ * Copyright (C) 2003, 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MB_REGS_H
+#define _ASM_MB_REGS_H
+
+#include <asm/cpu-irqs.h>
+#include <asm/sections.h>
+#include <asm/mem-layout.h>
+
+#define __region_IO KERNEL_IO_START /* the region from 0xe0000000 to 0xffffffff has suitable
+ * protection laid over the top for use in memory-mapped
+ * I/O
+ */
+
+#define __region_CS0 0xff000000 /* Boot ROMs area */
+
+#ifdef CONFIG_MB93091_VDK
+/*
+ * VDK motherboard and CPU card specific stuff
+ */
+
+#include <asm/mb93091-fpga-irqs.h>
+
+#define IRQ_CPU_MB93493_0 IRQ_CPU_EXTERNAL0
+#define IRQ_CPU_MB93493_1 IRQ_CPU_EXTERNAL1
+
+#define __region_CS2 0xe0000000 /* SLBUS/PCI I/O space */
+#define __region_CS2_M 0x0fffffff /* mask */
+#define __region_CS2_C 0x00000000 /* control */
+#define __region_CS5 0xf0000000 /* MB93493 CSC area (DAV daughter board) */
+#define __region_CS5_M 0x00ffffff
+#define __region_CS5_C 0x00010000
+#define __region_CS7 0xf1000000 /* CB70 CPU-card PCMCIA port I/O space */
+#define __region_CS7_M 0x00ffffff
+#define __region_CS7_C 0x00410701
+#define __region_CS1 0xfc000000 /* SLBUS/PCI bridge control registers */
+#define __region_CS1_M 0x000fffff
+#define __region_CS1_C 0x00000000
+#define __region_CS6 0xfc100000 /* CB70 CPU-card DM9000 LAN I/O space */
+#define __region_CS6_M 0x000fffff
+#define __region_CS6_C 0x00400707
+#define __region_CS3 0xfc200000 /* MB93493 CSR area (DAV daughter board) */
+#define __region_CS3_M 0x000fffff
+#define __region_CS3_C 0xc8100000
+#define __region_CS4 0xfd000000 /* CB70 CPU-card extra flash space */
+#define __region_CS4_M 0x00ffffff
+#define __region_CS4_C 0x00000f07
+
+#define __region_PCI_IO (__region_CS2 + 0x04000000UL)
+#define __region_PCI_MEM (__region_CS2 + 0x08000000UL)
+#define __flush_PCI_writes() \
+do { \
+ __builtin_write8((volatile void *) __region_PCI_MEM, 0); \
+} while(0)
+
+#define __is_PCI_IO(addr) \
+ (((unsigned long)(addr) >> 24) - (__region_PCI_IO >> 24) < (0x04000000UL >> 24))
+
+#define __is_PCI_MEM(addr) \
+ ((unsigned long)(addr) - __region_PCI_MEM < 0x08000000UL)
+
+#define __get_CLKSW() ({ *(volatile unsigned long *)(__region_CS2 + 0x0130000cUL) & 0xffUL; })
+#define __get_CLKIN() (__get_CLKSW() * 125U * 100000U / 24U)
+
+#ifndef __ASSEMBLY__
+extern int __nongprelbss mb93090_mb00_detected;
+#endif
+
+#define __addr_LEDS() (__region_CS2 + 0x01200004UL)
+#ifdef CONFIG_MB93090_MB00
+#define __set_LEDS(X) \
+do { \
+ if (mb93090_mb00_detected) \
+ __builtin_write32((void *) __addr_LEDS(), ~(X)); \
+} while (0)
+#else
+#define __set_LEDS(X)
+#endif
+
+#define __addr_LCD() (__region_CS2 + 0x01200008UL)
+#define __get_LCD(B) __builtin_read32((volatile void *) (B))
+#define __set_LCD(B,X) __builtin_write32((volatile void *) (B), (X))
+
+#define LCD_D 0x000000ff /* LCD data bus */
+#define LCD_RW 0x00000100 /* LCD R/W signal */
+#define LCD_RS 0x00000200 /* LCD Register Select */
+#define LCD_E 0x00000400 /* LCD Start Enable Signal */
+
+#define LCD_CMD_CLEAR (LCD_E|0x001)
+#define LCD_CMD_HOME (LCD_E|0x002)
+#define LCD_CMD_CURSOR_INC (LCD_E|0x004)
+#define LCD_CMD_SCROLL_INC (LCD_E|0x005)
+#define LCD_CMD_CURSOR_DEC (LCD_E|0x006)
+#define LCD_CMD_SCROLL_DEC (LCD_E|0x007)
+#define LCD_CMD_OFF (LCD_E|0x008)
+#define LCD_CMD_ON(CRSR,BLINK) (LCD_E|0x00c|(CRSR<<1)|BLINK)
+#define LCD_CMD_CURSOR_MOVE_L (LCD_E|0x010)
+#define LCD_CMD_CURSOR_MOVE_R (LCD_E|0x014)
+#define LCD_CMD_DISPLAY_SHIFT_L (LCD_E|0x018)
+#define LCD_CMD_DISPLAY_SHIFT_R (LCD_E|0x01c)
+#define LCD_CMD_FUNCSET(DL,N,F) (LCD_E|0x020|(DL<<4)|(N<<3)|(F<<2))
+#define LCD_CMD_SET_CG_ADDR(X) (LCD_E|0x040|X)
+#define LCD_CMD_SET_DD_ADDR(X) (LCD_E|0x080|X)
+#define LCD_CMD_READ_BUSY (LCD_E|LCD_RW)
+#define LCD_DATA_WRITE(X) (LCD_E|LCD_RS|(X))
+#define LCD_DATA_READ (LCD_E|LCD_RS|LCD_RW)
+
+#else
+/*
+ * PDK unit specific stuff
+ */
+
+#include <asm/mb93093-fpga-irqs.h>
+
+#define IRQ_CPU_MB93493_0 IRQ_CPU_EXTERNAL0
+#define IRQ_CPU_MB93493_1 IRQ_CPU_EXTERNAL1
+
+#define __region_CS5 0xf0000000 /* MB93493 CSC area (DAV daughter board) */
+#define __region_CS5_M 0x00ffffff /* mask */
+#define __region_CS5_C 0x00010000 /* control */
+#define __region_CS2 0x20000000 /* FPGA registers */
+#define __region_CS2_M 0x000fffff
+#define __region_CS2_C 0x00000000
+#define __region_CS1 0xfc100000 /* LAN registers */
+#define __region_CS1_M 0x000fffff
+#define __region_CS1_C 0x00010404
+#define __region_CS3 0xfc200000 /* MB93493 CSR area (DAV daughter board) */
+#define __region_CS3_M 0x000fffff
+#define __region_CS3_C 0xc8000000
+#define __region_CS4 0xfd000000 /* extra ROMs area */
+#define __region_CS4_M 0x00ffffff
+#define __region_CS4_C 0x00000f07
+
+#define __region_CS6 0xfe000000 /* not used - hide behind CPU resource I/O regs */
+#define __region_CS6_M 0x000fffff
+#define __region_CS6_C 0x00000f07
+#define __region_CS7 0xfe000000 /* not used - hide behind CPU resource I/O regs */
+#define __region_CS7_M 0x000fffff
+#define __region_CS7_C 0x00000f07
+
+#define __is_PCI_IO(addr) 0 /* no PCI */
+#define __is_PCI_MEM(addr) 0
+#define __region_PCI_IO 0
+#define __region_PCI_MEM 0
+#define __flush_PCI_writes() do { } while(0)
+
+#define __get_CLKSW() 0UL
+#define __get_CLKIN() 66000000UL
+
+#define __addr_LEDS() (__region_CS2 + 0x00000023UL)
+#define __set_LEDS(X) __builtin_write8((volatile void *) __addr_LEDS(), (X))
+
+#define __addr_FPGATR() (__region_CS2 + 0x00000030UL)
+#define __set_FPGATR(X) __builtin_write32((volatile void *) __addr_FPGATR(), (X))
+#define __get_FPGATR() __builtin_read32((volatile void *) __addr_FPGATR())
+
+#define MB93093_FPGA_FPGATR_AUDIO_CLK 0x00000003
+
+#define __set_FPGATR_AUDIO_CLK(V) \
+ __set_FPGATR((__get_FPGATR() & ~MB93093_FPGA_FPGATR_AUDIO_CLK) | (V))
+
+#define MB93093_FPGA_FPGATR_AUDIO_CLK_OFF 0x0
+#define MB93093_FPGA_FPGATR_AUDIO_CLK_11MHz 0x1
+#define MB93093_FPGA_FPGATR_AUDIO_CLK_12MHz 0x2
+#define MB93093_FPGA_FPGATR_AUDIO_CLK_02MHz 0x3
+
+#define MB93093_FPGA_SWR_PUSHSWMASK (0x1F<<26)
+#define MB93093_FPGA_SWR_PUSHSW4 (1<<29)
+
+#define __addr_FPGA_SWR ((volatile void *)(__region_CS2 + 0x28UL))
+#define __get_FPGA_PUSHSW1_5() (__builtin_read32(__addr_FPGA_SWR) & MB93093_FPGA_SWR_PUSHSWMASK)
+
+
+#endif
+
+#endif /* _ASM_MB_REGS_H */
--- /dev/null
+/* mb93091-fpga-irqs.h: MB93091 CPU board FPGA IRQs
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MB93091_FPGA_IRQS_H
+#define _ASM_MB93091_FPGA_IRQS_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/irq-routing.h>
+
+#define IRQ_BASE_FPGA (NR_IRQ_ACTIONS_PER_GROUP * 1)
+
+/* IRQ IDs presented to drivers */
+enum {
+ IRQ_FPGA__UNUSED = IRQ_BASE_FPGA,
+ IRQ_FPGA_SYSINT_BUS_EXPANSION_1,
+ IRQ_FPGA_SL_BUS_EXPANSION_2,
+ IRQ_FPGA_PCI_INTD,
+ IRQ_FPGA_PCI_INTC,
+ IRQ_FPGA_PCI_INTB,
+ IRQ_FPGA_PCI_INTA,
+ IRQ_FPGA_SL_BUS_EXPANSION_7,
+ IRQ_FPGA_SYSINT_BUS_EXPANSION_8,
+ IRQ_FPGA_SL_BUS_EXPANSION_9,
+ IRQ_FPGA_MB86943_PCI_INTA,
+ IRQ_FPGA_MB86943_SLBUS_SIDE,
+ IRQ_FPGA_RTL8029_INTA,
+ IRQ_FPGA_SYSINT_BUS_EXPANSION_13,
+ IRQ_FPGA_SL_BUS_EXPANSION_14,
+ IRQ_FPGA_NMI,
+};
+
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_MB93091_FPGA_IRQS_H */
--- /dev/null
+/* mb93093-fpga-irqs.h: MB93093 CPU board FPGA IRQs
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MB93093_FPGA_IRQS_H
+#define _ASM_MB93093_FPGA_IRQS_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/irq-routing.h>
+
+#define IRQ_BASE_FPGA (NR_IRQ_ACTIONS_PER_GROUP * 1)
+
+/* IRQ IDs presented to drivers */
+enum {
+ IRQ_FPGA_PUSH_BUTTON_SW1_5 = IRQ_BASE_FPGA + 8,
+ IRQ_FPGA_ROCKER_C_SW8 = IRQ_BASE_FPGA + 9,
+ IRQ_FPGA_ROCKER_C_SW9 = IRQ_BASE_FPGA + 10,
+};
+
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_MB93093_FPGA_IRQS_H */
--- /dev/null
+/* mb93493-irqs.h: MB93493 companion chip IRQs
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MB93493_IRQS_H
+#define _ASM_MB93493_IRQS_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/irq-routing.h>
+
+#define IRQ_BASE_MB93493 (NR_IRQ_ACTIONS_PER_GROUP * 2)
+
+/* IRQ IDs presented to drivers */
+enum {
+ IRQ_MB93493_VDC = IRQ_BASE_MB93493 + 0,
+ IRQ_MB93493_VCC = IRQ_BASE_MB93493 + 1,
+ IRQ_MB93493_AUDIO_OUT = IRQ_BASE_MB93493 + 2,
+ IRQ_MB93493_I2C_0 = IRQ_BASE_MB93493 + 3,
+ IRQ_MB93493_I2C_1 = IRQ_BASE_MB93493 + 4,
+ IRQ_MB93493_USB = IRQ_BASE_MB93493 + 5,
+ IRQ_MB93493_LOCAL_BUS = IRQ_BASE_MB93493 + 7,
+ IRQ_MB93493_PCMCIA = IRQ_BASE_MB93493 + 8,
+ IRQ_MB93493_GPIO = IRQ_BASE_MB93493 + 9,
+ IRQ_MB93493_AUDIO_IN = IRQ_BASE_MB93493 + 10,
+};
+
+/* IRQ multiplexor mappings */
+#define ROUTE_VIA_IRQ0 0 /* route IRQ by way of CPU external IRQ 0 */
+#define ROUTE_VIA_IRQ1 1 /* route IRQ by way of CPU external IRQ 1 */
+
+#define IRQ_MB93493_VDC_ROUTE ROUTE_VIA_IRQ0
+#define IRQ_MB93493_VCC_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_AUDIO_OUT_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_I2C_0_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_I2C_1_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_USB_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_LOCAL_BUS_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_PCMCIA_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_GPIO_ROUTE ROUTE_VIA_IRQ1
+#define IRQ_MB93493_AUDIO_IN_ROUTE ROUTE_VIA_IRQ1
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_MB93493_IRQS_H */
--- /dev/null
+/* mb93493-regs.h: MB93493 companion chip registers
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MB93493_REGS_H
+#define _ASM_MB93493_REGS_H
+
+#include <asm/mb-regs.h>
+#include <asm/mb93493-irqs.h>
+
+#define __get_MB93493(X) ({ *(volatile unsigned long *)(__region_CS3 + (X)); })
+
+#define __set_MB93493(X,V) \
+do { \
+ *(volatile unsigned long *)(__region_CS3 + (X)) = (V); mb(); \
+} while(0)
+
+#define __get_MB93493_STSR(X) __get_MB93493(0x3c0 + (X) * 4)
+#define __set_MB93493_STSR(X,V) __set_MB93493(0x3c0 + (X) * 4, (V))
+#define MB93493_STSR_EN
+
+#define __get_MB93493_IQSR(X) __get_MB93493(0x3d0 + (X) * 4)
+#define __set_MB93493_IQSR(X,V) __set_MB93493(0x3d0 + (X) * 4, (V))
+
+#define __get_MB93493_DQSR(X) __get_MB93493(0x3e0 + (X) * 4)
+#define __set_MB93493_DQSR(X,V) __set_MB93493(0x3e0 + (X) * 4, (V))
+
+#define __get_MB93493_LBSER() __get_MB93493(0x3f0)
+#define __set_MB93493_LBSER(V) __set_MB93493(0x3f0, (V))
+
+#define MB93493_LBSER_VDC 0x00010000
+#define MB93493_LBSER_VCC 0x00020000
+#define MB93493_LBSER_AUDIO 0x00040000
+#define MB93493_LBSER_I2C_0 0x00080000
+#define MB93493_LBSER_I2C_1 0x00100000
+#define MB93493_LBSER_USB 0x00200000
+#define MB93493_LBSER_GPIO 0x00800000
+#define MB93493_LBSER_PCMCIA 0x01000000
+
+#define __get_MB93493_LBSR() __get_MB93493(0x3fc)
+#define __set_MB93493_LBSR(V) __set_MB93493(0x3fc, (V))
+
+/*
+ * video display controller
+ */
+#define __get_MB93493_VDC(X) __get_MB93493(MB93493_VDC_##X)
+#define __set_MB93493_VDC(X,V) __set_MB93493(MB93493_VDC_##X, (V))
+
+#define MB93493_VDC_RCURSOR 0x140 /* cursor position */
+#define MB93493_VDC_RCT1 0x144 /* cursor colour 1 */
+#define MB93493_VDC_RCT2 0x148 /* cursor colour 2 */
+#define MB93493_VDC_RHDC 0x150 /* horizontal display period */
+#define MB93493_VDC_RH_MARGINS 0x154 /* horizontal margin sizes */
+#define MB93493_VDC_RVDC 0x158 /* vertical display period */
+#define MB93493_VDC_RV_MARGINS 0x15c /* vertical margin sizes */
+#define MB93493_VDC_RC 0x170 /* VDC control */
+#define MB93493_VDC_RCLOCK 0x174 /* clock divider, DMA req delay */
+#define MB93493_VDC_RBLACK 0x178 /* black insert sizes */
+#define MB93493_VDC_RS 0x17c /* VDC status */
+
+#define __addr_MB93493_VDC_BCI(X) ({ (volatile unsigned long *)(__region_CS3 + 0x000 + (X)); })
+#define __addr_MB93493_VDC_TPO(X) (__region_CS3 + 0x1c0 + (X))
+
+#define VDC_TPO_WIDTH 32
+
+#define VDC_RC_DSR 0x00000080 /* VDC master reset */
+
+#define VDC_RS_IT 0x00060000 /* interrupt indicators */
+#define VDC_RS_IT_UNDERFLOW 0x00040000 /* - underflow event */
+#define VDC_RS_IT_VSYNC 0x00020000 /* - VSYNC event */
+#define VDC_RS_DFI 0x00010000 /* current interlace field number */
+#define VDC_RS_DFI_TOP 0x00000000 /* - top field */
+#define VDC_RS_DFI_BOTTOM 0x00010000 /* - bottom field */
+#define VDC_RS_DCSR 0x00000010 /* cursor state */
+#define VDC_RS_DCM 0x00000003 /* display mode */
+#define VDC_RS_DCM_DISABLED 0x00000000 /* - display disabled */
+#define VDC_RS_DCM_STOPPED 0x00000001 /* - VDC stopped */
+#define VDC_RS_DCM_FREERUNNING 0x00000002 /* - VDC free-running */
+#define VDC_RS_DCM_TRANSFERRING 0x00000003 /* - data being transferred to VDC */
+
+/*
+ * video capture controller
+ */
+#define __get_MB93493_VCC(X) __get_MB93493(MB93493_VCC_##X)
+#define __set_MB93493_VCC(X,V) __set_MB93493(MB93493_VCC_##X, (V))
+
+#define MB93493_VCC_RREDUCT 0x104 /* reduction rate */
+#define MB93493_VCC_RHY 0x108 /* horizontal brightness filter coefficients */
+#define MB93493_VCC_RHC 0x10c /* horizontal colour-difference filter coefficients */
+#define MB93493_VCC_RHSIZE 0x110 /* horizontal cycle sizes */
+#define MB93493_VCC_RHBC 0x114 /* horizontal back porch size */
+#define MB93493_VCC_RVCC 0x118 /* vertical capture period */
+#define MB93493_VCC_RVBC 0x11c /* vertical back porch period */
+#define MB93493_VCC_RV 0x120 /* vertical filter coefficients */
+#define MB93493_VCC_RDTS 0x128 /* DMA transfer size */
+#define MB93493_VCC_RDTS_4B 0x01000000 /* 4-byte transfer */
+#define MB93493_VCC_RDTS_32B 0x03000000 /* 32-byte transfer */
+#define MB93493_VCC_RDTS_SHIFT 24
+#define MB93493_VCC_RCC 0x130 /* VCC control */
+#define MB93493_VCC_RIS 0x134 /* VCC interrupt status */
+
+#define __addr_MB93493_VCC_TPI(X) (__region_CS3 + 0x180 + (X))
+
+#define VCC_RHSIZE_RHCC 0x000007ff
+#define VCC_RHSIZE_RHCC_SHIFT 0
+#define VCC_RHSIZE_RHTCC 0x0fff0000
+#define VCC_RHSIZE_RHTCC_SHIFT 16
+
+#define VCC_RVBC_RVBC 0x00003f00
+#define VCC_RVBC_RVBC_SHIFT 8
+
+#define VCC_RREDUCT_RHR 0x07ff0000
+#define VCC_RREDUCT_RHR_SHIFT 16
+#define VCC_RREDUCT_RVR 0x000007ff
+#define VCC_RREDUCT_RVR_SHIFT 0
+
+#define VCC_RCC_CE 0x00000001 /* VCC enable */
+#define VCC_RCC_CS 0x00000002 /* request video capture start */
+#define VCC_RCC_CPF 0x0000000c /* pixel format */
+#define VCC_RCC_CPF_YCBCR_16 0x00000000 /* - YCbCr 4:2:2 16-bit format */
+#define VCC_RCC_CPF_RGB 0x00000004 /* - RGB 4:4:4 format */
+#define VCC_RCC_CPF_YCBCR_24 0x00000008 /* - YCbCr 4:2:2 24-bit format */
+#define VCC_RCC_CPF_BT656 0x0000000c /* - ITU R-BT.656 format */
+#define VCC_RCC_CPF_SHIFT 2
+#define VCC_RCC_CSR 0x00000080 /* request reset */
+#define VCC_RCC_HSIP 0x00000100 /* HSYNC polarity */
+#define VCC_RCC_HSIP_LOACT 0x00000000 /* - low active */
+#define VCC_RCC_HSIP_HIACT 0x00000100 /* - high active */
+#define VCC_RCC_VSIP 0x00000200 /* VSYNC polarity */
+#define VCC_RCC_VSIP_LOACT 0x00000000 /* - low active */
+#define VCC_RCC_VSIP_HIACT 0x00000200 /* - high active */
+#define VCC_RCC_CIE 0x00000800 /* interrupt enable */
+#define VCC_RCC_CFP 0x00001000 /* RGB pixel packing */
+#define VCC_RCC_CFP_4TO3 0x00000000 /* - pack 4 pixels into 3 words */
+#define VCC_RCC_CFP_1TO1	0x00001000	/* - pack 1 pixel into 1 word */
+#define VCC_RCC_CSM 0x00006000 /* interlace specification */
+#define VCC_RCC_CSM_ONEPASS 0x00002000 /* - non-interlaced */
+#define VCC_RCC_CSM_INTERLACE 0x00004000 /* - interlaced */
+#define VCC_RCC_CSM_SHIFT 13
+#define VCC_RCC_ES 0x00008000 /* capture start polarity */
+#define VCC_RCC_ES_NEG 0x00000000 /* - negative edge */
+#define VCC_RCC_ES_POS 0x00008000 /* - positive edge */
+#define VCC_RCC_IFI		0x00080000	/* interlace field evaluation reverse */
+#define VCC_RCC_FDTS 0x00300000 /* interlace field start */
+#define VCC_RCC_FDTS_3_8 0x00000000 /* - 3/8 of horizontal entire cycle */
+#define VCC_RCC_FDTS_1_4 0x00100000 /* - 1/4 of horizontal entire cycle */
+#define VCC_RCC_FDTS_7_16 0x00200000 /* - 7/16 of horizontal entire cycle */
+#define VCC_RCC_FDTS_SHIFT 20
+#define VCC_RCC_MOV 0x00400000 /* test bit - always set to 1 */
+#define VCC_RCC_STP 0x00800000 /* request video capture stop */
+#define VCC_RCC_TO 0x01000000 /* input during top-field only */
+
+#define VCC_RIS_VSYNC 0x01000000 /* VSYNC interrupt */
+#define VCC_RIS_OV 0x02000000 /* overflow interrupt */
+#define VCC_RIS_BOTTOM 0x08000000 /* interlace bottom field */
+#define VCC_RIS_STARTED 0x10000000 /* capture started */
+
+/*
+ * I2C
+ */
+#define MB93493_I2C_BSR 0x340 /* bus status */
+#define MB93493_I2C_BCR 0x344 /* bus control */
+#define MB93493_I2C_CCR 0x348 /* clock control */
+#define MB93493_I2C_ADR 0x34c /* address */
+#define MB93493_I2C_DTR 0x350 /* data */
+#define MB93493_I2C_BC2R 0x35c /* bus control 2 */
+
+#define __addr_MB93493_I2C(port,X) (__region_CS3 + MB93493_I2C_##X + ((port)*0x20))
+#define __get_MB93493_I2C(port,X) __get_MB93493(MB93493_I2C_##X + ((port)*0x20))
+#define __set_MB93493_I2C(port,X,V) __set_MB93493(MB93493_I2C_##X + ((port)*0x20), (V))
+
+#define I2C_BSR_BB (1 << 7)
+
+/*
+ * audio controller (I2S) registers
+ */
+#define __get_MB93493_I2S(X) __get_MB93493(MB93493_I2S_##X)
+#define __set_MB93493_I2S(X,V) __set_MB93493(MB93493_I2S_##X, (V))
+
+#define MB93493_I2S_ALDR 0x300 /* L-channel data */
+#define MB93493_I2S_ARDR 0x304 /* R-channel data */
+#define MB93493_I2S_APDR 0x308 /* 16-bit packed data */
+#define MB93493_I2S_AISTR 0x310 /* status */
+#define MB93493_I2S_AICR 0x314 /* control */
+
+#define __addr_MB93493_I2S_ALDR(X) (__region_CS3 + MB93493_I2S_ALDR + (X))
+#define __addr_MB93493_I2S_ARDR(X) (__region_CS3 + MB93493_I2S_ARDR + (X))
+#define __addr_MB93493_I2S_APDR(X) (__region_CS3 + MB93493_I2S_APDR + (X))
+#define __addr_MB93493_I2S_ADR(X) (__region_CS3 + 0x320 + (X))
+
+#define I2S_AISTR_OTST 0x00000003 /* status of output data transfer */
+#define I2S_AISTR_OTR 0x00000010 /* output transfer request pending */
+#define I2S_AISTR_OUR 0x00000020 /* output FIFO underrun detected */
+#define I2S_AISTR_OOR 0x00000040 /* output FIFO overrun detected */
+#define I2S_AISTR_ODS 0x00000100 /* output DMA transfer size */
+#define I2S_AISTR_ODE 0x00000400 /* output DMA transfer request enable */
+#define I2S_AISTR_OTRIE 0x00001000 /* output transfer request interrupt enable */
+#define I2S_AISTR_OURIE 0x00002000 /* output FIFO underrun interrupt enable */
+#define I2S_AISTR_OORIE 0x00004000 /* output FIFO overrun interrupt enable */
+#define I2S_AISTR__OUT_MASK 0x00007570
+#define I2S_AISTR_ITST 0x00030000 /* status of input data transfer */
+#define I2S_AISTR_ITST_SHIFT 16
+#define I2S_AISTR_ITR 0x00100000 /* input transfer request pending */
+#define I2S_AISTR_IUR 0x00200000 /* input FIFO underrun detected */
+#define I2S_AISTR_IOR 0x00400000 /* input FIFO overrun detected */
+#define I2S_AISTR_IDS 0x01000000 /* input DMA transfer size */
+#define I2S_AISTR_IDE 0x04000000 /* input DMA transfer request enable */
+#define I2S_AISTR_ITRIE 0x10000000 /* input transfer request interrupt enable */
+#define I2S_AISTR_IURIE 0x20000000 /* input FIFO underrun interrupt enable */
+#define I2S_AISTR_IORIE 0x40000000 /* input FIFO overrun interrupt enable */
+#define I2S_AISTR__IN_MASK 0x75700000
+
+#define I2S_AICR_MI 0x00000001 /* mono input requested */
+#define I2S_AICR_AMI 0x00000002 /* relation between LRCKI/FS1 and SDI */
+#define I2S_AICR_LRI 0x00000004 /* function of LRCKI pin */
+#define I2S_AICR_SDMI 0x00000070 /* format of input audio data */
+#define I2S_AICR_SDMI_SHIFT 4
+#define I2S_AICR_CLI 0x00000080 /* input FIFO clearing control */
+#define I2S_AICR_IM 0x00000300 /* input state control */
+#define I2S_AICR_IM_SHIFT 8
+#define I2S_AICR__IN_MASK 0x000003f7
+#define I2S_AICR_MO 0x00001000 /* mono output requested */
+#define I2S_AICR_AMO 0x00002000 /* relation between LRCKO/FS0 and SDO */
+#define I2S_AICR_AMO_SHIFT 13
+#define I2S_AICR_LRO 0x00004000 /* function of LRCKO pin */
+#define I2S_AICR_SDMO 0x00070000 /* format of output audio data */
+#define I2S_AICR_SDMO_SHIFT 16
+#define I2S_AICR_CLO 0x00080000 /* output FIFO clearing control */
+#define I2S_AICR_OM 0x00100000 /* output state control */
+#define I2S_AICR__OUT_MASK 0x001f7000
+#define I2S_AICR_DIV 0x03000000 /* frequency division rate */
+#define I2S_AICR_DIV_SHIFT 24
+#define I2S_AICR_FL 0x20000000 /* frame length */
+#define I2S_AICR_FS 0x40000000 /* frame sync method */
+#define I2S_AICR_ME 0x80000000 /* master enable */
+
+/*
+ * PCMCIA
+ */
+#define __addr_MB93493_PCMCIA(X) ((volatile unsigned long *)(__region_CS5 + (X)))
+
+/*
+ * GPIO
+ */
+#define __get_MB93493_GPIO_PDR(X) __get_MB93493(0x380 + (X) * 0xc0)
+#define __set_MB93493_GPIO_PDR(X,V) __set_MB93493(0x380 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_GPDR(X) __get_MB93493(0x384 + (X) * 0xc0)
+#define __set_MB93493_GPIO_GPDR(X,V) __set_MB93493(0x384 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_SIR(X) __get_MB93493(0x388 + (X) * 0xc0)
+#define __set_MB93493_GPIO_SIR(X,V) __set_MB93493(0x388 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_SOR(X) __get_MB93493(0x38c + (X) * 0xc0)
+#define __set_MB93493_GPIO_SOR(X,V) __set_MB93493(0x38c + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_PDSR(X) __get_MB93493(0x390 + (X) * 0xc0)
+#define __set_MB93493_GPIO_PDSR(X,V) __set_MB93493(0x390 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_PDCR(X) __get_MB93493(0x394 + (X) * 0xc0)
+#define __set_MB93493_GPIO_PDCR(X,V) __set_MB93493(0x394 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_INTST(X) __get_MB93493(0x398 + (X) * 0xc0)
+#define __set_MB93493_GPIO_INTST(X,V) __set_MB93493(0x398 + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_IEHL(X) __get_MB93493(0x39c + (X) * 0xc0)
+#define __set_MB93493_GPIO_IEHL(X,V) __set_MB93493(0x39c + (X) * 0xc0, (V))
+
+#define __get_MB93493_GPIO_IELH(X) __get_MB93493(0x3a0 + (X) * 0xc0)
+#define __set_MB93493_GPIO_IELH(X,V) __set_MB93493(0x3a0 + (X) * 0xc0, (V))
+
+#endif /* _ASM_MB93493_REGS_H */
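The __get_MB93493()/__set_MB93493() accessors above are intended for read-modify-write access to the companion chip's registers. A small, hypothetical example (not part of the patch) that enables the video display controller block via the LBSER register:

	/* Illustrative only -- not part of the patch. */
	static void mb93493_enable_vdc(void)
	{
		unsigned long lbser = __get_MB93493_LBSER();
		__set_MB93493_LBSER(lbser | MB93493_LBSER_VDC);
	}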
--- /dev/null
+/* mmu_context.h: MMU context management routines
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_MMU_CONTEXT_H
+#define _ASM_MMU_CONTEXT_H
+
+#include <linux/config.h>
+#include <asm/setup.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+
+static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+}
+
+#ifdef CONFIG_MMU
+extern int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
+extern void change_mm_context(mm_context_t *old, mm_context_t *ctx, pgd_t *_pgd);
+extern void destroy_context(struct mm_struct *mm);
+
+#else
+#define init_new_context(tsk, mm) ({ 0; })
+#define change_mm_context(old, ctx, _pml4) do {} while(0)
+#define destroy_context(mm) do {} while(0)
+#endif
+
+#define switch_mm(prev, next, tsk) \
+do { \
+ if (prev != next) \
+ change_mm_context(&prev->context, &next->context, next->pgd); \
+} while(0)
+
+#define activate_mm(prev, next) \
+do { \
+ change_mm_context(&prev->context, &next->context, next->pgd); \
+} while(0)
+
+#define deactivate_mm(tsk, mm) \
+do { \
+} while(0)
+
+#endif
--- /dev/null
+/*
+ * asm/namei.h
+ *
+ * Included from linux/fs/namei.c
+ */
+
+#ifndef __ASM_NAMEI_H
+#define __ASM_NAMEI_H
+
+/* This dummy routine may be changed to something useful
+ * for /usr/gnemul/ emulation stuff.
+ * Look at asm-sparc/namei.h for details.
+ */
+
+#define __emul_prefix() NULL
+
+#endif
+
--- /dev/null
+#ifndef _ASM_PAGE_H
+#define _ASM_PAGE_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <asm/virtconvert.h>
+#include <asm/mem-layout.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+
+#ifndef __ASSEMBLY__
+
+#define get_user_page(vaddr) __get_free_page(GFP_KERNEL)
+#define free_user_page(page, addr) free_page(addr)
+
+#define clear_page(pgaddr) memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to,from) memcpy((to), (from), PAGE_SIZE)
+
+#define clear_user_page(pgaddr, vaddr, page) memset((pgaddr), 0, PAGE_SIZE)
+#define copy_user_page(vto, vfrom, vaddr, topg) memcpy((vto), (vfrom), PAGE_SIZE)
+
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long pte; } pte_t;
+typedef struct { unsigned long ste[64];} pmd_t;
+typedef struct { pmd_t pue[1]; } pud_t;
+typedef struct { pud_t pge[1]; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x) ((x).pte)
+#define pmd_val(x) ((x).ste[0])
+#define pud_val(x) ((x).pue[0])
+#define pgd_val(x) ((x).pge[0])
+#define pgprot_val(x) ((x).pgprot)
+
+#define __pte(x) ((pte_t) { (x) } )
+#define __pmd(x) ((pmd_t) { (x) } )
+#define __pud(x) ((pud_t) { (x) } )
+#define __pgd(x) ((pgd_t) { (x) } )
+#define __pgprot(x) ((pgprot_t) { (x) } )
+#define PTE_MASK PAGE_MASK
+
+/* to align the pointer to the (next) page boundary */
+#define PAGE_ALIGN(addr) (((addr) + PAGE_SIZE - 1) & PAGE_MASK)
+
+/* Pure 2^n version of get_order */
+static inline int get_order(unsigned long size) __attribute_const__;
+static inline int get_order(unsigned long size)
+{
+ int order;
+
+ size = (size - 1) >> (PAGE_SHIFT - 1);
+ order = -1;
+ do {
+ size >>= 1;
+ order++;
+ } while (size);
+ return order;
+}
+
+#define devmem_is_allowed(pfn) 1
+
+#define __pa(vaddr) virt_to_phys((void *) vaddr)
+#define __va(paddr) phys_to_virt((unsigned long) paddr)
+
+#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
+
+extern unsigned long max_low_pfn;
+extern unsigned long min_low_pfn;
+extern unsigned long max_pfn;
+
+#ifdef CONFIG_MMU
+#define pfn_to_page(pfn) (mem_map + (pfn))
+#define page_to_pfn(page) ((unsigned long) ((page) - mem_map))
+#define pfn_valid(pfn) ((pfn) < max_mapnr)
+
+#else
+#define pfn_to_page(pfn) (&mem_map[(pfn) - (PAGE_OFFSET >> PAGE_SHIFT)])
+#define page_to_pfn(page) ((PAGE_OFFSET >> PAGE_SHIFT) + (unsigned long) ((page) - mem_map))
+#define pfn_valid(pfn) ((pfn) >= min_low_pfn && (pfn) < max_low_pfn)
+
+#endif
+
+#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+
+
+#ifdef CONFIG_MMU
+#define VM_DATA_DEFAULT_FLAGS \
+ (VM_READ | VM_WRITE | \
+ ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0 ) | \
+ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __KERNEL__ */
+
+#ifdef CONFIG_CONTIGUOUS_PAGE_ALLOC
+#define WANT_PAGE_VIRTUAL 1
+#endif
+
+#endif /* _ASM_PAGE_H */
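get_order() above rounds a byte count up to a whole number of pages and returns the base-2 logarithm of that page count. A worked example, assuming 4KB pages (PAGE_SHIFT == 12) purely for the arithmetic:

	/* Illustrative values only (assumes PAGE_SHIFT == 12):
	 *   get_order(1)     == 0   -- one page
	 *   get_order(4096)  == 0   -- still exactly one page
	 *   get_order(4097)  == 1   -- two pages
	 *   get_order(16384) == 2   -- four pages
	 */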
--- /dev/null
+#ifndef _ASM_PARAM_H
+#define _ASM_PARAM_H
+
+#ifdef __KERNEL__
+#define HZ 1000 /* Internal kernel timer frequency */
+#define USER_HZ 100 /* .. some user interfaces are in "ticks" */
+#define CLOCKS_PER_SEC (USER_HZ) /* like times() */
+#endif
+
+#ifndef HZ
+#define HZ 100
+#endif
+
+#define EXEC_PAGESIZE 16384
+
+#ifndef NOGROUP
+#define NOGROUP (-1)
+#endif
+
+#define MAXHOSTNAMELEN 64 /* max length of hostname */
+#define COMMAND_LINE_SIZE 512
+
+#endif /* _ASM_PARAM_H */
--- /dev/null
+/* pci.h: FR-V specific PCI declarations
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from include/asm-m68k/pci.h
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef ASM_PCI_H
+#define ASM_PCI_H
+
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <asm-generic/pci-dma-compat.h>
+#include <asm-generic/pci.h>
+
+struct pci_dev;
+
+#define pcibios_assign_all_busses() 0
+
+static inline void pcibios_add_platform_entries(struct pci_dev *dev)
+{
+}
+
+extern void pcibios_set_master(struct pci_dev *dev);
+
+extern void pcibios_penalize_isa_irq(int irq);
+
+#ifdef CONFIG_MMU
+extern void *consistent_alloc(int gfp, size_t size, dma_addr_t *dma_handle);
+extern void consistent_free(void *vaddr);
+extern void consistent_sync(void *vaddr, size_t size, int direction);
+extern void consistent_sync_page(struct page *page, unsigned long offset,
+ size_t size, int direction);
+#endif
+
+extern void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle);
+
+extern void pci_free_consistent(struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle);
+
+/* This is always fine. */
+#define pci_dac_dma_supported(pci_dev, mask) (1)
+
+/* Return the index of the PCI controller for device PDEV. */
+#define pci_controller_num(PDEV) (0)
+
+/* The PCI address space does equal the physical memory
+ * address space. The networking and block device layers use
+ * this boolean for bounce buffer decisions.
+ */
+#define PCI_DMA_BUS_IS_PHYS (1)
+
+/*
+ * These are pretty much arbitrary with the CoMEM implementation.
+ * We have the whole address space to ourselves.
+ */
+#define PCIBIOS_MIN_IO 0x100
+#define PCIBIOS_MIN_MEM 0x00010000
+
+/* Make physical memory consistent for a single
+ * streaming mode DMA translation after a transfer.
+ *
+ * If you perform a pci_map_single() but wish to interrogate the
+ * buffer using the CPU, yet do not wish to tear down the PCI DMA
+ * mapping, you must call this function before doing so. At the
+ * next point you give the PCI dma address back to the card, the
+ * device again owns the buffer.
+ */
+static inline void pci_dma_sync_single(struct pci_dev *hwdev,
+ dma_addr_t dma_handle,
+ size_t size, int direction)
+{
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ frv_cache_wback_inv((unsigned long)bus_to_virt(dma_handle),
+ (unsigned long)bus_to_virt(dma_handle) + size);
+}
+
+/* Make physical memory consistent for a set of streaming
+ * mode DMA translations after a transfer.
+ *
+ * The same as pci_dma_sync_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+static inline void pci_dma_sync_sg(struct pci_dev *hwdev,
+ struct scatterlist *sg,
+ int nelems, int direction)
+{
+ int i;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ for (i = 0; i < nelems; i++)
+ frv_cache_wback_inv(sg_dma_address(&sg[i]),
+ sg_dma_address(&sg[i])+sg_dma_len(&sg[i]));
+}
+
+
+#endif
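The comment above pci_dma_sync_single() describes the intended protocol: map a buffer, let the device DMA into it, sync before the CPU looks at the data, then hand the buffer back or unmap it. A hypothetical driver-side sketch (not part of the patch) using the generic pci_map_single()/pci_unmap_single() helpers pulled in through pci-dma-compat:

	/* Illustrative only -- not part of the patch. */
	static void example_rx(struct pci_dev *pdev, void *buf, size_t len)
	{
		dma_addr_t handle = pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);

		/* ... program the device with 'handle' and wait for the transfer ... */

		pci_dma_sync_single(pdev, handle, len, PCI_DMA_FROMDEVICE);
		/* the buffer is now coherent and may be examined by the CPU */

		pci_unmap_single(pdev, handle, len, PCI_DMA_FROMDEVICE);
	}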
--- /dev/null
+/* pgalloc.h: Page allocation routines for FRV
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Derived from:
+ * include/asm-m68knommu/pgalloc.h
+ * include/asm-i386/pgalloc.h
+ */
+#ifndef _ASM_PGALLOC_H
+#define _ASM_PGALLOC_H
+
+#include <linux/config.h>
+#include <asm/setup.h>
+#include <asm/virtconvert.h>
+
+#ifdef CONFIG_MMU
+
+#define pmd_populate_kernel(mm, pmd, pte) __set_pmd(pmd, __pa(pte) | _PAGE_TABLE)
+#define pmd_populate(MM, PMD, PAGE) \
+do { \
+ __set_pmd((PMD), page_to_pfn(PAGE) << PAGE_SHIFT | _PAGE_TABLE); \
+} while(0)
+
+/*
+ * Allocate and free page tables.
+ */
+
+extern pgd_t *pgd_alloc(struct mm_struct *);
+extern void pgd_free(pgd_t *);
+
+extern pte_t *pte_alloc_one_kernel(struct mm_struct *, unsigned long);
+
+extern struct page *pte_alloc_one(struct mm_struct *, unsigned long);
+
+static inline void pte_free_kernel(pte_t *pte)
+{
+ free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct page *pte)
+{
+ __free_page(pte);
+}
+
+#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ * (In the PAE case we free the pmds as part of the pgd.)
+ */
+#define pmd_alloc_one(mm, addr) ({ BUG(); ((pmd_t *) 2); })
+#define pmd_free(x) do { } while (0)
+#define __pmd_free_tlb(tlb,x) do { } while (0)
+
+#endif /* CONFIG_MMU */
+
+#endif /* _ASM_PGALLOC_H */
--- /dev/null
+/* pgtable.h: FR-V page table mangling
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Derived from:
+ * include/asm-m68knommu/pgtable.h
+ * include/asm-i386/pgtable.h
+ */
+
+#ifndef _ASM_PGTABLE_H
+#define _ASM_PGTABLE_H
+
+#include <linux/config.h>
+#include <asm/mem-layout.h>
+#include <asm/setup.h>
+#include <asm/processor.h>
+
+#ifndef __ASSEMBLY__
+#include <linux/threads.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#endif
+
+#ifndef __ASSEMBLY__
+#if defined(CONFIG_HIGHPTE)
+typedef unsigned long pte_addr_t;
+#else
+typedef pte_t *pte_addr_t;
+#endif
+#endif
+
+/*****************************************************************************/
+/*
+ * MMU-less operation case first
+ */
+#ifndef CONFIG_MMU
+
+#define pgd_present(pgd) (1) /* pages are always present on NO_MM */
+#define pgd_none(pgd) (0)
+#define pgd_bad(pgd) (0)
+#define pgd_clear(pgdp)
+#define kern_addr_valid(addr) (1)
+#define pmd_offset(a, b) ((void *) 0)
+
+#define PAGE_NONE __pgprot(0) /* these mean nothing to NO_MM */
+#define PAGE_SHARED __pgprot(0) /* these mean nothing to NO_MM */
+#define PAGE_COPY __pgprot(0) /* these mean nothing to NO_MM */
+#define PAGE_READONLY __pgprot(0) /* these mean nothing to NO_MM */
+#define PAGE_KERNEL __pgprot(0) /* these mean nothing to NO_MM */
+
+#define __swp_type(x) (0)
+#define __swp_offset(x) (0)
+#define __swp_entry(typ,off) ((swp_entry_t) { ((typ) | ((off) << 7)) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
+
+#ifndef __ASSEMBLY__
+static inline int pte_file(pte_t pte) { return 0; }
+#endif
+
+#define ZERO_PAGE(vaddr) ({ BUG(); NULL; })
+
+#define swapper_pg_dir ((pgd_t *) NULL)
+
+#define pgtable_cache_init() do {} while(0)
+
+#else /* !CONFIG_MMU */
+/*****************************************************************************/
+/*
+ * then MMU operation
+ */
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+#ifndef __ASSEMBLY__
+extern unsigned long empty_zero_page;
+#define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page)
+#endif
+
+/*
+ * we use 2-level page tables, folding the PMD (mid-level table) into the PGE (top-level entry)
+ * [see Documentation/fujitsu/frv/mmu-layout.txt]
+ *
+ * Page Directory:
+ * - Size: 16KB
+ * - 64 PGEs per PGD
+ * - Each PGE holds 1 PUD and covers 64MB
+ *
+ * Page Upper Directory:
+ * - Size: 256B
+ * - 1 PUE per PUD
+ * - Each PUE holds 1 PMD and covers 64MB
+ *
+ * Page Mid-Level Directory
+ * - Size: 256B
+ * - 1 PME per PMD
+ * - Each PME holds 64 STEs, all of which point to separate chunks of the same Page Table
+ * - All STEs are instantiated at the same time
+ *
+ * Page Table
+ * - Size: 16KB
+ * - 4096 PTEs per PT
+ * - Each Linux PT is subdivided into 64 FR451 PTs, each of which holds 64 entries
+ *
+ * Pages
+ * - Size: 4KB
+ *
+ * total PTEs
+ * = 1 PML4E * 64 PGEs * 1 PUEs * 1 PMEs * 4096 PTEs
+ * = 1 PML4E * 64 PGEs * 64 STEs * 64 PTEs/FR451-PT
+ * = 262144 (or 256 * 1024)
+ */
+#define PGDIR_SHIFT 26
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE - 1))
+#define PTRS_PER_PGD 64
+
+#define PUD_SHIFT 26
+#define PTRS_PER_PUD 1
+#define PUD_SIZE (1UL << PUD_SHIFT)
+#define PUD_MASK (~(PUD_SIZE - 1))
+#define PUE_SIZE 256
+
+#define PMD_SHIFT 26
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE - 1))
+#define PTRS_PER_PMD 1
+#define PME_SIZE 256
+
+#define __frv_PT_SIZE 256
+
+#define PTRS_PER_PTE 4096
+
+#define USER_PGDS_IN_LAST_PML4 (TASK_SIZE / PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
+
+#define USER_PGD_PTRS (PAGE_OFFSET >> PGDIR_SHIFT)
+#define KERNEL_PGD_PTRS (PTRS_PER_PGD - USER_PGD_PTRS)
+
+#define TWOLEVEL_PGDIR_SHIFT 26
+#define BOOT_USER_PGD_PTRS (__PAGE_OFFSET >> TWOLEVEL_PGDIR_SHIFT)
+#define BOOT_KERNEL_PGD_PTRS (PTRS_PER_PGD - BOOT_USER_PGD_PTRS)
+
+#ifndef __ASSEMBLY__
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, (e).pte)
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+#define pud_ERROR(e) \
+ printk("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pmd_val(pud_val(e)))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pmd_val(pud_val(pgd_val(e))))
+
+/*
+ * Certain architectures need to do special things when PTEs
+ * within a page table are directly modified. Thus, the following
+ * hook is made available.
+ */
+#define set_pte(pteptr, pteval) \
+do { \
+ *(pteptr) = (pteval); \
+ asm volatile("dcf %M0" :: "U"(*pteptr)); \
+} while(0)
+
+#define set_pte_atomic(pteptr, pteval) set_pte((pteptr), (pteval))
+
+/*
+ * pgd_offset() returns a (pgd_t *)
+ * pgd_index() is used to get the offset into the pgd page's array of pgd_t's;
+ */
+#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
+
+/*
+ * a shortcut which implies the use of the kernel's pgd, instead
+ * of a process's
+ */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pud is never bad, and a pud always exists (as it's folded
+ * into the pgd entry)
+ */
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+static inline int pgd_present(pgd_t pgd) { return 1; }
+static inline void pgd_clear(pgd_t *pgd) { }
+
+#define pgd_populate(mm, pgd, pud) do { } while (0)
+/*
+ * (puds are folded into pgds so this doesn't get actually called,
+ * but the define is needed for a generic inline function.)
+ */
+#define set_pgd(pgdptr, pgdval) \
+do { \
+ memcpy((pgdptr), &(pgdval), sizeof(pgd_t)); \
+ asm volatile("dcf %M0" :: "U"(*(pgdptr))); \
+} while(0)
+
+static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
+{
+ return (pud_t *) pgd;
+}
+
+#define pgd_page(pgd) (pud_page((pud_t){ pgd }))
+#define pgd_page_kernel(pgd) (pud_page_kernel((pud_t){ pgd }))
+
+/*
+ * allocating and freeing a pud is trivial: the 1-entry pud is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pud_alloc_one(mm, address) NULL
+#define pud_free(x) do { } while (0)
+#define __pud_free_tlb(tlb, x) do { } while (0)
+
+/*
+ * The "pud_xxx()" functions here are trivial for a folded two-level
+ * setup: the pmd is never bad, and a pmd always exists (as it's folded
+ * into the pud entry)
+ */
+static inline int pud_none(pud_t pud) { return 0; }
+static inline int pud_bad(pud_t pud) { return 0; }
+static inline int pud_present(pud_t pud) { return 1; }
+static inline void pud_clear(pud_t *pud) { }
+
+#define pud_populate(mm, pmd, pte) do { } while (0)
+
+/*
+ * (pmds are folded into puds so this doesn't get actually called,
+ * but the define is needed for a generic inline function.)
+ */
+#define set_pud(pudptr, pudval) set_pmd((pmd_t *)(pudptr), (pmd_t) { pudval })
+
+#define pud_page(pud) (pmd_page((pmd_t){ pud }))
+#define pud_page_kernel(pud) (pmd_page_kernel((pmd_t){ pud }))
+
+/*
+ * (pmds are folded into pgds so this doesn't get actually called,
+ * but the define is needed for a generic inline function.)
+ */
+extern void __set_pmd(pmd_t *pmdptr, unsigned long __pmd);
+
+#define set_pmd(pmdptr, pmdval) \
+do { \
+ __set_pmd((pmdptr), (pmdval).ste[0]); \
+} while(0)
+
+#define __pmd_index(address) 0
+
+static inline pmd_t *pmd_offset(pud_t *dir, unsigned long address)
+{
+ return (pmd_t *) dir + __pmd_index(address);
+}
+
+#define pte_same(a, b) ((a).pte == (b).pte)
+#define pte_page(x) (mem_map + ((unsigned long)(((x).pte >> PAGE_SHIFT))))
+#define pte_none(x) (!(x).pte)
+#define pte_pfn(x) ((unsigned long)(((x).pte >> PAGE_SHIFT)))
+#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+
+#define VMALLOC_VMADDR(x) ((unsigned long) (x))
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * control flags in AMPR registers and TLB entries
+ */
+#define _PAGE_BIT_PRESENT xAMPRx_V_BIT
+#define _PAGE_BIT_WP DAMPRx_WP_BIT
+#define _PAGE_BIT_NOCACHE xAMPRx_C_BIT
+#define _PAGE_BIT_SUPER xAMPRx_S_BIT
+#define _PAGE_BIT_ACCESSED xAMPRx_RESERVED8_BIT
+#define _PAGE_BIT_DIRTY xAMPRx_M_BIT
+#define _PAGE_BIT_NOTGLOBAL xAMPRx_NG_BIT
+
+#define _PAGE_PRESENT xAMPRx_V
+#define _PAGE_WP DAMPRx_WP
+#define _PAGE_NOCACHE xAMPRx_C
+#define _PAGE_SUPER xAMPRx_S
+#define _PAGE_ACCESSED xAMPRx_RESERVED8 /* accessed if set */
+#define _PAGE_DIRTY xAMPRx_M
+#define _PAGE_NOTGLOBAL xAMPRx_NG
+
+#define _PAGE_RESERVED_MASK (xAMPRx_RESERVED8 | xAMPRx_RESERVED13)
+
+#define _PAGE_FILE 0x002 /* set:pagecache unset:swap */
+#define _PAGE_PROTNONE 0x000 /* If not present */
+
+#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
+#define __PGPROT_BASE \
+ (_PAGE_PRESENT | xAMPRx_SS_16Kb | xAMPRx_D | _PAGE_NOTGLOBAL | _PAGE_ACCESSED)
+
+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
+#define PAGE_SHARED __pgprot(__PGPROT_BASE)
+#define PAGE_COPY __pgprot(__PGPROT_BASE | _PAGE_WP)
+#define PAGE_READONLY __pgprot(__PGPROT_BASE | _PAGE_WP)
+
+#define __PAGE_KERNEL (__PGPROT_BASE | _PAGE_SUPER | _PAGE_DIRTY)
+#define __PAGE_KERNEL_NOCACHE (__PGPROT_BASE | _PAGE_SUPER | _PAGE_DIRTY | _PAGE_NOCACHE)
+#define __PAGE_KERNEL_RO (__PGPROT_BASE | _PAGE_SUPER | _PAGE_DIRTY | _PAGE_WP)
+
+#define MAKE_GLOBAL(x) __pgprot((x) & ~_PAGE_NOTGLOBAL)
+
+#define PAGE_KERNEL MAKE_GLOBAL(__PAGE_KERNEL)
+#define PAGE_KERNEL_RO MAKE_GLOBAL(__PAGE_KERNEL_RO)
+#define PAGE_KERNEL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_NOCACHE)
+
+#define _PAGE_TABLE (_PAGE_PRESENT | xAMPRx_SS_16Kb)
+
+#ifndef __ASSEMBLY__
+
+/*
+ * The FR451 can do execute protection by virtue of having separate TLB miss handlers for
+ * instruction access and for data access. However, we don't have enough reserved bits to say
+ * "execute only", so we don't bother. If you can read it, you can execute it and vice versa.
+ */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY
+#define __P101 PAGE_READONLY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY
+#define __S101 PAGE_READONLY
+#define __S110 PAGE_SHARED
+#define __S111 PAGE_SHARED
+
+/*
+ * Define this to warn about kernel memory accesses that are
+ * done without a 'verify_area(VERIFY_WRITE,..)'
+ */
+#undef TEST_VERIFY_AREA
+
+#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
+#define pte_clear(xp) do { set_pte(xp, __pte(0)); } while (0)
+
+#define pmd_none(x) (!pmd_val(x))
+#define pmd_present(x) (pmd_val(x) & _PAGE_PRESENT)
+#define pmd_bad(x) (pmd_val(x) & xAMPRx_SS)
+#define pmd_clear(xp) do { __set_pmd(xp, 0); } while(0)
+
+#define pmd_page_kernel(pmd) \
+ ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
+
+#ifndef CONFIG_DISCONTIGMEM
+#define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT))
+#endif
+
+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+static inline int pte_read(pte_t pte) { return !((pte).pte & _PAGE_SUPER); }
+static inline int pte_exec(pte_t pte) { return !((pte).pte & _PAGE_SUPER); }
+static inline int pte_dirty(pte_t pte) { return (pte).pte & _PAGE_DIRTY; }
+static inline int pte_young(pte_t pte) { return (pte).pte & _PAGE_ACCESSED; }
+static inline int pte_write(pte_t pte) { return !((pte).pte & _PAGE_WP); }
+
+static inline pte_t pte_rdprotect(pte_t pte) { (pte).pte |= _PAGE_SUPER; return pte; }
+static inline pte_t pte_exprotect(pte_t pte) { (pte).pte |= _PAGE_SUPER; return pte; }
+static inline pte_t pte_mkclean(pte_t pte) { (pte).pte &= ~_PAGE_DIRTY; return pte; }
+static inline pte_t pte_mkold(pte_t pte) { (pte).pte &= ~_PAGE_ACCESSED; return pte; }
+static inline pte_t pte_wrprotect(pte_t pte) { (pte).pte |= _PAGE_WP; return pte; }
+static inline pte_t pte_mkread(pte_t pte) { (pte).pte &= ~_PAGE_SUPER; return pte; }
+static inline pte_t pte_mkexec(pte_t pte) { (pte).pte &= ~_PAGE_SUPER; return pte; }
+static inline pte_t pte_mkdirty(pte_t pte) { (pte).pte |= _PAGE_DIRTY; return pte; }
+static inline pte_t pte_mkyoung(pte_t pte) { (pte).pte |= _PAGE_ACCESSED; return pte; }
+static inline pte_t pte_mkwrite(pte_t pte) { (pte).pte &= ~_PAGE_WP; return pte; }
+
+static inline int ptep_test_and_clear_dirty(pte_t *ptep)
+{
+ int i = test_and_clear_bit(_PAGE_BIT_DIRTY, ptep);
+ asm volatile("dcf %M0" :: "U"(*ptep));
+ return i;
+}
+
+static inline int ptep_test_and_clear_young(pte_t *ptep)
+{
+ int i = test_and_clear_bit(_PAGE_BIT_ACCESSED, ptep);
+ asm volatile("dcf %M0" :: "U"(*ptep));
+ return i;
+}
+
+static inline pte_t ptep_get_and_clear(pte_t *ptep)
+{
+ unsigned long x = xchg(&ptep->pte, 0);
+ asm volatile("dcf %M0" :: "U"(*ptep));
+ return __pte(x);
+}
+
+static inline void ptep_set_wrprotect(pte_t *ptep)
+{
+ set_bit(_PAGE_BIT_WP, ptep);
+ asm volatile("dcf %M0" :: "U"(*ptep));
+}
+
+static inline void ptep_mkdirty(pte_t *ptep)
+{
+ set_bit(_PAGE_BIT_DIRTY, ptep);
+ asm volatile("dcf %M0" :: "U"(*ptep));
+}
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+
+#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
+#define mk_pte_huge(entry) ((entry).pte_low |= _PAGE_PRESENT | _PAGE_PSE)
+
+/* This takes a physical page address that is used by the remapping functions */
+#define mk_pte_phys(physpage, pgprot) pfn_pte((physpage) >> PAGE_SHIFT, pgprot)
+
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pte.pte &= _PAGE_CHG_MASK;
+ pte.pte |= pgprot_val(newprot);
+ return pte;
+}
+
+#define page_pte(page) page_pte_prot((page), __pgprot(0))
+
+/* to find an entry in a page-table-directory. */
+#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+#define pgd_index_k(addr) pgd_index(addr)
+
+/* Find an entry in the bottom-level page table.. */
+#define __pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+/*
+ * the pte page can be thought of an array like this: pte_t[PTRS_PER_PTE]
+ *
+ * this macro returns the index of the entry in the pte page which would
+ * control the given virtual address
+ */
+#define pte_index(address) \
+ (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+#define pte_offset_kernel(dir, address) \
+ ((pte_t *) pmd_page_kernel(*(dir)) + pte_index(address))
+
+#if defined(CONFIG_HIGHPTE)
+#define pte_offset_map(dir, address) \
+ ((pte_t *)kmap_atomic(pmd_page(*(dir)),KM_PTE0) + pte_index(address))
+#define pte_offset_map_nested(dir, address) \
+ ((pte_t *)kmap_atomic(pmd_page(*(dir)),KM_PTE1) + pte_index(address))
+#define pte_unmap(pte) kunmap_atomic(pte, KM_PTE0)
+#define pte_unmap_nested(pte) kunmap_atomic((pte), KM_PTE1)
+#else
+#define pte_offset_map(dir, address) \
+ ((pte_t *)page_address(pmd_page(*(dir))) + pte_index(address))
+#define pte_offset_map_nested(dir, address) pte_offset_map((dir), (address))
+#define pte_unmap(pte) do { } while (0)
+#define pte_unmap_nested(pte) do { } while (0)
+#endif
+
+/*
+ * Handle swap and file entries
+ * - the PTE is encoded in the following format:
+ * bit 0: Must be 0 (!_PAGE_PRESENT)
+ * bit 1: Type: 0 for swap, 1 for file (_PAGE_FILE)
+ * bits 2-7: Swap type
+ * bits 8-31: Swap offset
+ * bits 2-31: File pgoff
+ */
+#define __swp_type(x) (((x).val >> 2) & 0x1f)
+#define __swp_offset(x) ((x).val >> 8)
+#define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 8) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte })
+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
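+
+/*
+ * Worked example of the encoding above (illustrative only): a swap entry with
+ * type 3 and offset 0x1000 is packed as
+ *
+ *	__swp_entry(3, 0x1000).val = (3 << 2) | (0x1000 << 8) = 0x0010000c
+ *
+ * bit 0 stays clear (not present) and bit 1 stays clear (swap, not file);
+ * __swp_type() and __swp_offset() recover 3 and 0x1000 from that value.
+ */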
+
+static inline int pte_file(pte_t pte)
+{
+ return pte.pte & _PAGE_FILE;
+}
+
+#define PTE_FILE_MAX_BITS 29
+
+#define pte_to_pgoff(PTE) ((PTE).pte >> 2)
+#define pgoff_to_pte(off) __pte((off) << 2 | _PAGE_FILE)
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+#define PageSkip(page) (0)
+#define kern_addr_valid(addr) (1)
+
+#define io_remap_page_range(vma, vaddr, paddr, size, prot) \
+ remap_pfn_range(vma, vaddr, (paddr) >> PAGE_SHIFT, size, prot)
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_DIRTY
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+#define __HAVE_ARCH_PTEP_MKDIRTY
+#define __HAVE_ARCH_PTE_SAME
+#include <asm-generic/pgtable.h>
+
+/*
+ * preload information about a newly instantiated PTE into the SCR0/SCR1 PGE cache
+ */
+static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+{
+ unsigned long ampr;
+ pgd_t *pge = pgd_offset(current->mm, address);
+ pud_t *pue = pud_offset(pge, address);
+ pmd_t *pme = pmd_offset(pue, address);
+
+ ampr = pme->ste[0] & 0xffffff00;
+ ampr |= xAMPRx_L | xAMPRx_SS_16Kb | xAMPRx_S | xAMPRx_C | xAMPRx_V;
+
+ asm volatile("movgs %0,scr0\n"
+ "movgs %0,scr1\n"
+ "movgs %1,dampr4\n"
+ "movgs %1,dampr5\n"
+ :
+ : "r"(address), "r"(ampr)
+ );
+}
+
+#ifdef CONFIG_PROC_FS
+extern char *proc_pid_status_frv_cxnr(struct mm_struct *mm, char *buffer);
+#endif
+
+extern void __init pgtable_cache_init(void);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !CONFIG_MMU */
+
+#ifndef __ASSEMBLY__
+extern void __init paging_init(void);
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_PGTABLE_H */
--- /dev/null
+#ifndef _ASM_POLL_H
+#define _ASM_POLL_H
+
+#define POLLIN 1
+#define POLLPRI 2
+#define POLLOUT 4
+#define POLLERR 8
+#define POLLHUP 16
+#define POLLNVAL 32
+#define POLLRDNORM 64
+#define POLLWRNORM POLLOUT
+#define POLLRDBAND 128
+#define POLLWRBAND 256
+#define POLLMSG 0x0400
+
+struct pollfd {
+ int fd;
+ short events;
+ short revents;
+};
+
+#endif
+
--- /dev/null
+/* processor.h: FRV processor definitions
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_PROCESSOR_H
+#define _ASM_PROCESSOR_H
+
+#include <linux/config.h>
+#include <asm/mem-layout.h>
+
+#ifndef __ASSEMBLY__
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr() ({ __label__ _l; _l: &&_l;})
+
+#include <linux/linkage.h>
+#include <asm/sections.h>
+#include <asm/segment.h>
+#include <asm/fpu.h>
+#include <asm/registers.h>
+#include <asm/ptrace.h>
+#include <asm/current.h>
+#include <asm/cache.h>
+
+/* Forward declaration, a strange C thing */
+struct task_struct;
+
+/*
+ * CPU type and hardware bug flags. Kept separately for each CPU.
+ */
+struct cpuinfo_frv {
+#ifdef CONFIG_MMU
+ unsigned long *pgd_quick;
+ unsigned long *pte_quick;
+ unsigned long pgtable_cache_sz;
+#endif
+} __cacheline_aligned;
+
+extern struct cpuinfo_frv __nongprelbss boot_cpu_data;
+
+#define cpu_data (&boot_cpu_data)
+#define current_cpu_data boot_cpu_data
+
+/*
+ * Bus types
+ */
+#define EISA_bus 0
+#define MCA_bus 0
+
+struct thread_struct {
+ struct pt_regs *frame; /* [GR28] exception frame ptr for this thread */
+ struct task_struct *curr; /* [GR29] current pointer for this thread */
+ unsigned long sp; /* [GR1 ] kernel stack pointer */
+ unsigned long fp; /* [GR2 ] kernel frame pointer */
+ unsigned long lr; /* link register */
+ unsigned long pc; /* program counter */
+ unsigned long gr[12]; /* [GR16-GR27] */
+ unsigned long sched_lr; /* LR from schedule() */
+
+ union {
+ struct pt_regs *frame0; /* top (user) stack frame */
+ struct user_context *user; /* userspace context */
+ };
+} __attribute__((aligned(8)));
+
+extern struct pt_regs *__kernel_frame0_ptr;
+extern struct task_struct *__kernel_current_task;
+
+#endif
+
+#ifndef __ASSEMBLY__
+#define INIT_THREAD_FRAME0 \
+ ((struct pt_regs *) \
+ (sizeof(init_stack) + (unsigned long) init_stack - sizeof(struct user_context)))
+
+#define INIT_THREAD { \
+ NULL, \
+ (struct task_struct *) init_stack, \
+ 0, 0, 0, 0, \
+ { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, \
+ 0, \
+ { INIT_THREAD_FRAME0 }, \
+}
+
+/*
+ * do necessary setup to start up a newly executed thread.
+ * - need to discard the frame stacked by init() invoking the execve syscall
+ */
+#define start_thread(_regs, _pc, _usp) \
+do { \
+ set_fs(USER_DS); /* reads from user space */ \
+ __frame = __kernel_frame0_ptr; \
+ __frame->pc = (_pc); \
+ __frame->psr &= ~PSR_S; \
+ __frame->sp = (_usp); \
+} while(0)
+
+extern void prepare_to_copy(struct task_struct *tsk);
+
+/* Free all resources held by a thread. */
+static inline void release_thread(struct task_struct *dead_task)
+{
+}
+
+extern asmlinkage int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
+extern asmlinkage void save_user_regs(struct user_context *target);
+extern asmlinkage void *restore_user_regs(const struct user_context *target, ...);
+
+#define copy_segments(tsk, mm) do { } while (0)
+#define release_segments(mm) do { } while (0)
+#define forget_segments() do { } while (0)
+
+/*
+ * Free current thread data structures etc..
+ */
+static inline void exit_thread(void)
+{
+}
+
+/*
+ * Return saved PC of a blocked thread.
+ */
+extern unsigned long thread_saved_pc(struct task_struct *tsk);
+
+unsigned long get_wchan(struct task_struct *p);
+
+#define KSTK_EIP(tsk) ((tsk)->thread.frame0->pc)
+#define KSTK_ESP(tsk) ((tsk)->thread.frame0->sp)
+
+/* Allocation and freeing of basic task resources. */
+extern struct task_struct *alloc_task_struct(void);
+extern void free_task_struct(struct task_struct *p);
+
+#define cpu_relax() do { } while (0)
+
+/* data cache prefetch */
+#define ARCH_HAS_PREFETCH
+static inline void prefetch(const void *x)
+{
+ asm volatile("dcpl %0,gr0,#0" : : "r"(x));
+}
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_PROCESSOR_H */
--- /dev/null
+/* ptrace.h: ptrace() relevant definitions
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_PTRACE_H
+#define _ASM_PTRACE_H
+
+#include <asm/registers.h>
+
+#define in_syscall(regs) (((regs)->tbr & TBR_TT) == TBR_TT_TRAP0)
+
+
+#define PT_PSR 0
+#define PT_ISR 1
+#define PT_CCR 2
+#define PT_CCCR 3
+#define PT_LR 4
+#define PT_LCR 5
+#define PT_PC 6
+
+#define PT__STATUS 7 /* exception status */
+#define PT_SYSCALLNO 8 /* syscall number or -1 */
+#define PT_ORIG_GR8 9 /* saved GR8 for signal handling */
+#define PT_GNER0 10
+#define PT_GNER1 11
+#define PT_IACC0H 12
+#define PT_IACC0L 13
+
+#define PT_GR(j) ( 14 + (j)) /* GRj for 0<=j<=63 */
+#define PT_FR(j) ( 78 + (j)) /* FRj for 0<=j<=63 */
+#define PT_FNER(j) (142 + (j)) /* FNERj for 0<=j<=1 */
+#define PT_MSR(j) (144 + (j)) /* MSRj for 0<=j<=2 */
+#define PT_ACC(j) (146 + (j)) /* ACCj for 0<=j<=7 */
+#define PT_ACCG(jklm) (154 + (jklm)) /* ACCGjklm for 0<=jklm<=1 (reads four regs per slot) */
+#define PT_FSR(j) (156 + (j)) /* FSRj for 0<=j<=0 */
+#define PT__GPEND 78
+#define PT__END 157
+
+#define PT_TBR PT_GR(0)
+#define PT_SP PT_GR(1)
+#define PT_FP PT_GR(2)
+#define PT_PREV_FRAME PT_GR(28) /* previous exception frame pointer (old gr28 value) */
+#define PT_CURR_TASK PT_GR(29) /* current task */
+
+
+/* Arbitrarily choose the same ptrace numbers as used by the Sparc code. */
+#define PTRACE_GETREGS 12
+#define PTRACE_SETREGS 13
+#define PTRACE_GETFPREGS 14
+#define PTRACE_SETFPREGS 15
+#define PTRACE_GETFDPIC 31 /* get the ELF fdpic loadmap address */
+
+#define PTRACE_GETFDPIC_EXEC 0 /* [addr] request the executable loadmap */
+#define PTRACE_GETFDPIC_INTERP 1 /* [addr] request the interpreter loadmap */
+
+#ifndef __ASSEMBLY__
+
+/*
+ * dedicate GR28 to keeping a pointer to the current exception frame
+ */
+register struct pt_regs *__frame asm("gr28");
+register struct pt_regs *__debug_frame asm("gr31");
+
+#ifndef container_of
+#define container_of(ptr, type, member) ({ \
+ const typeof( ((type *)0)->member ) *__mptr = (ptr); \
+ (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
+#define __debug_regs container_of(__debug_frame, struct pt_debug_regs, normal_regs)
+
+#define user_mode(regs) (!((regs)->psr & PSR_S))
+#define instruction_pointer(regs) ((regs)->pc)
+
+extern unsigned long user_stack(const struct pt_regs *);
+extern void show_regs(struct pt_regs *);
+#define profile_pc(regs) ((regs)->pc)
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_PTRACE_H */
--- /dev/null
+/* registers.h: register frame declarations
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/*
+ * notes:
+ *
+ * (1) that the members of all these structures are carefully aligned to permit
+ * usage of STD/STDF instructions
+ *
+ * (2) if you change these structures, you must change the code in
+ * arch/frvnommu/kernel/{break.S,entry.S,switch_to.S,gdb-stub.c}
+ *
+ *
+ * the kernel stack space block looks like this:
+ *
+ * +0x2000 +----------------------
+ * | union {
+ * | struct user_context
+ * | struct pt_regs [user exception]
+ * | }
+ * +---------------------- <-- __kernel_frame0_ptr (maybe GR28)
+ * |
+ * | kernel stack
+ * |
+ * |......................
+ * | struct pt_regs [kernel exception]
+ * |...................... <-- __kernel_frame0_ptr (maybe GR28)
+ * |
+ * | kernel stack
+ * |
+ * |...................... <-- stack pointer (GR1)
+ * |
+ * | unused stack space
+ * |
+ * +----------------------
+ * | struct thread_info
+ * +0x0000 +---------------------- <-- __current_thread_info (GR15);
+ *
+ * note that GR28 points to the current exception frame
+ */
+
+#ifndef _ASM_REGISTERS_H
+#define _ASM_REGISTERS_H
+
+#ifndef __ASSEMBLY__
+#define __OFFSET(X) (X)
+#define __OFFSETC(X,N) xxxxxxxxxxxxxxxxxxxxxxxx
+#else
+#define __OFFSET(X) ((X)*4)
+#define __OFFSETC(X,N) ((X)*4+(N))
+#endif
+
+/*****************************************************************************/
+/*
+ * Exception/Interrupt frame
+ * - held on kernel stack
+ * - 8-byte aligned on stack (old SP is saved in frame)
+ * - GR0 is fixed 0, so we don't save it
+ */
+#ifndef __ASSEMBLY__
+
+struct pt_regs {
+ unsigned long psr; /* Processor Status Register */
+ unsigned long isr; /* Integer Status Register */
+ unsigned long ccr; /* Condition Code Register */
+ unsigned long cccr; /* Condition Code for Conditional Insns Register */
+ unsigned long lr; /* Link Register */
+ unsigned long lcr; /* Loop Count Register */
+ unsigned long pc; /* Program Counter Register */
+ unsigned long __status; /* exception status */
+ unsigned long syscallno; /* syscall number or -1 */
+ unsigned long orig_gr8; /* original syscall arg #1 */
+ unsigned long gner0;
+ unsigned long gner1;
+ unsigned long long iacc0;
+ unsigned long tbr; /* GR0 is fixed zero, so we use this for TBR */
+ unsigned long sp; /* GR1: USP/KSP */
+ unsigned long fp; /* GR2: FP */
+ unsigned long gr3;
+ unsigned long gr4;
+ unsigned long gr5;
+ unsigned long gr6;
+ unsigned long gr7; /* syscall number */
+ unsigned long gr8; /* 1st syscall param; syscall return */
+ unsigned long gr9; /* 2nd syscall param */
+ unsigned long gr10; /* 3rd syscall param */
+ unsigned long gr11; /* 4th syscall param */
+ unsigned long gr12; /* 5th syscall param */
+ unsigned long gr13; /* 6th syscall param */
+ unsigned long gr14;
+ unsigned long gr15;
+ unsigned long gr16; /* GP pointer */
+ unsigned long gr17; /* small data */
+ unsigned long gr18; /* PIC/PID */
+ unsigned long gr19;
+ unsigned long gr20;
+ unsigned long gr21;
+ unsigned long gr22;
+ unsigned long gr23;
+ unsigned long gr24;
+ unsigned long gr25;
+ unsigned long gr26;
+ unsigned long gr27;
+ struct pt_regs *next_frame; /* GR28 - next exception frame */
+ unsigned long gr29; /* GR29 - OS reserved */
+ unsigned long gr30; /* GR30 - OS reserved */
+ unsigned long gr31; /* GR31 - OS reserved */
+} __attribute__((aligned(8)));
+
+#endif
+
+#define REG_PSR __OFFSET( 0) /* Processor Status Register */
+#define REG_ISR __OFFSET( 1) /* Integer Status Register */
+#define REG_CCR __OFFSET( 2) /* Condition Code Register */
+#define REG_CCCR __OFFSET( 3) /* Condition Code for Conditional Insns Register */
+#define REG_LR __OFFSET( 4) /* Link Register */
+#define REG_LCR __OFFSET( 5) /* Loop Count Register */
+#define REG_PC __OFFSET( 6) /* Program Counter */
+
+#define REG__STATUS __OFFSET( 7) /* exception status */
+#define REG__STATUS_STEP 0x00000001 /* - reenable single stepping on return */
+#define REG__STATUS_STEPPED 0x00000002 /* - single step caused exception */
+#define REG__STATUS_BROKE 0x00000004 /* - BREAK insn caused exception */
+#define REG__STATUS_SYSC_ENTRY 0x40000000 /* - T on syscall entry (ptrace.c only) */
+#define REG__STATUS_SYSC_EXIT 0x80000000 /* - T on syscall exit (ptrace.c only) */
+
+#define REG_SYSCALLNO __OFFSET( 8) /* syscall number or -1 */
+#define REG_ORIG_GR8 __OFFSET( 9) /* saved GR8 for signal handling */
+#define REG_GNER0 __OFFSET(10)
+#define REG_GNER1 __OFFSET(11)
+#define REG_IACC0 __OFFSET(12)
+
+#define REG_TBR __OFFSET(14) /* Trap Vector Register */
+#define REG_GR(R) __OFFSET((14+(R)))
+#define REG__END REG_GR(32)
+
+#define REG_SP REG_GR(1)
+#define REG_FP REG_GR(2)
+#define REG_PREV_FRAME REG_GR(28) /* previous exception frame pointer (old gr28 value) */
+#define REG_CURR_TASK REG_GR(29) /* current task */
+
+/*****************************************************************************/
+/*
+ * extension tacked in front of the exception frame in debug mode
+ */
+#ifndef __ASSEMBLY__
+
+struct pt_debug_regs
+{
+ unsigned long bpsr;
+ unsigned long dcr;
+ unsigned long brr;
+ unsigned long nmar;
+ struct pt_regs normal_regs;
+} __attribute__((aligned(8)));
+
+#endif
+
+#define REG_NMAR __OFFSET(-1)
+#define REG_BRR __OFFSET(-2)
+#define REG_DCR __OFFSET(-3)
+#define REG_BPSR __OFFSET(-4)
+#define REG__DEBUG_XTRA __OFFSET(4)
+
+/*****************************************************************************/
+/*
+ * userspace registers
+ */
+#ifndef __ASSEMBLY__
+
+struct user_int_regs
+{
+ /* integer registers
+ * - up to gr[31] mirror pt_regs
+ * - total size must be multiple of 8 bytes
+ */
+ unsigned long psr; /* Processor Status Register */
+ unsigned long isr; /* Integer Status Register */
+ unsigned long ccr; /* Condition Code Register */
+ unsigned long cccr; /* Condition Code for Conditional Insns Register */
+ unsigned long lr; /* Link Register */
+ unsigned long lcr; /* Loop Count Register */
+ unsigned long pc; /* Program Counter Register */
+ unsigned long __status; /* exception status */
+ unsigned long syscallno; /* syscall number or -1 */
+ unsigned long orig_gr8; /* original syscall arg #1 */
+ unsigned long gner[2];
+ unsigned long long iacc[1];
+
+ union {
+ unsigned long tbr;
+ unsigned long gr[64];
+ };
+};
+
+struct user_fpmedia_regs
+{
+ /* FP/Media registers */
+ unsigned long fr[64];
+ unsigned long fner[2];
+ unsigned long msr[2];
+ unsigned long acc[8];
+ unsigned char accg[8];
+ unsigned long fsr[1];
+};
+
+struct user_context
+{
+ struct user_int_regs i;
+ struct user_fpmedia_regs f;
+
+ /* we provide a context extension so that we can save the regs for CPUs that
+ * implement many more of Fujitsu's lavish register spec
+ */
+ void *extension;
+} __attribute__((aligned(8)));
+
+#endif
+
+#define NR_USER_INT_REGS (14 + 64)
+#define NR_USER_FPMEDIA_REGS (64 + 2 + 2 + 8 + 8/4 + 1)
+#define NR_USER_CONTEXT (NR_USER_INT_REGS + NR_USER_FPMEDIA_REGS + 1)
+
+#define USER_CONTEXT_SIZE (((NR_USER_CONTEXT + 1) & ~1) * 4)
+
+#define __THREAD_FRAME __OFFSET(0)
+#define __THREAD_CURR __OFFSET(1)
+#define __THREAD_SP __OFFSET(2)
+#define __THREAD_FP __OFFSET(3)
+#define __THREAD_LR __OFFSET(4)
+#define __THREAD_PC __OFFSET(5)
+#define __THREAD_GR(R) __OFFSET(6 + (R) - 16)
+#define __THREAD_FRAME0 __OFFSET(19)
+#define __THREAD_USER __OFFSET(19)
+
+#define __USER_INT __OFFSET(0)
+#define __INT_GR(R) __OFFSET(14 + (R))
+
+#define __USER_FPMEDIA __OFFSET(NR_USER_INT_REGS)
+#define __FPMEDIA_FR(R) __OFFSET(NR_USER_INT_REGS + (R))
+#define __FPMEDIA_FNER(R) __OFFSET(NR_USER_INT_REGS + 64 + (R))
+#define __FPMEDIA_MSR(R) __OFFSET(NR_USER_INT_REGS + 66 + (R))
+#define __FPMEDIA_ACC(R) __OFFSET(NR_USER_INT_REGS + 68 + (R))
+#define __FPMEDIA_ACCG(R) __OFFSETC(NR_USER_INT_REGS + 76, (R))
+#define __FPMEDIA_FSR(R) __OFFSET(NR_USER_INT_REGS + 78 + (R))
+
+#endif /* _ASM_REGISTERS_H */
--- /dev/null
+/* segment.h: MMU segment settings
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_SEGMENT_H
+#define _ASM_SEGMENT_H
+
+#include <linux/config.h>
+
+#ifndef __ASSEMBLY__
+
+typedef struct {
+ unsigned long seg;
+} mm_segment_t;
+
+#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
+
+#define KERNEL_DS MAKE_MM_SEG(0xdfffffffUL)
+
+#ifdef CONFIG_MMU
+#define USER_DS MAKE_MM_SEG(TASK_SIZE - 1)
+#else
+#define USER_DS KERNEL_DS
+#endif
+
+#define get_ds() (KERNEL_DS)
+#define get_fs() (__current_thread_info->addr_limit)
+#define segment_eq(a,b) ((a).seg == (b).seg)
+#define __kernel_ds_p() segment_eq(get_fs(), KERNEL_DS)
+#define get_addr_limit() (get_fs().seg)
+
+#define set_fs(_x) \
+do { \
+ __current_thread_info->addr_limit = (_x); \
+} while(0)
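+
+/*
+ * Typical use of the segment override (shown only as an illustration of the
+ * standard kernel pattern, not something this header mandates):
+ *
+ *	mm_segment_t old_fs = get_fs();
+ *
+ *	set_fs(KERNEL_DS);
+ *	... call code that performs access_ok()/__range_ok() checks
+ *	    on a kernel-space buffer ...
+ *	set_fs(old_fs);
+ *
+ * get_fs()/set_fs() only manipulate the per-thread address limit held in
+ * thread_info; they do not touch any hardware state.
+ */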
+
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_SEGMENT_H */
--- /dev/null
+/*
+ * serial.h
+ *
+ * Copyright (C) 2003 Develer S.r.l. (http://www.develer.com/)
+ * Author: Bernardo Innocenti <bernie@codewiz.org>
+ *
+ * Based on linux/include/asm-i386/serial.h
+ */
+#include <linux/config.h>
+#include <asm/serial-regs.h>
+
+/*
+ * the base baud is derived from the clock speed and so is variable
+ */
+#define BASE_BAUD 0
+
+#define STD_COM_FLAGS ASYNC_BOOT_AUTOCONF
+
+#define SERIAL_PORT_DFNS
--- /dev/null
+/* setup.h: setup stuff
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_SETUP_H
+#define _ASM_SETUP_H
+
+#include <linux/init.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_MMU
+extern unsigned long __initdata num_mappedpages;
+#endif
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_SETUP_H */
--- /dev/null
+#ifndef _ASM_SIGNAL_H
+#define _ASM_SIGNAL_H
+
+#include <linux/types.h>
+
+/* Avoid too many header ordering problems. */
+struct siginfo;
+
+#ifdef __KERNEL__
+/* Most things should be clean enough to redefine this at will, if care
+ is taken to make libc match. */
+
+#define _NSIG 64
+#define _NSIG_BPW 32
+#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+
+typedef unsigned long old_sigset_t; /* at least 32 bits */
+
+typedef struct {
+ unsigned long sig[_NSIG_WORDS];
+} sigset_t;
+
+#else
+/* Here we must cater to libcs that poke about in kernel headers. */
+
+#define NSIG 32
+typedef unsigned long sigset_t;
+
+#endif /* __KERNEL__ */
+
+#define SIGHUP 1
+#define SIGINT 2
+#define SIGQUIT 3
+#define SIGILL 4
+#define SIGTRAP 5
+#define SIGABRT 6
+#define SIGIOT 6
+#define SIGBUS 7
+#define SIGFPE 8
+#define SIGKILL 9
+#define SIGUSR1 10
+#define SIGSEGV 11
+#define SIGUSR2 12
+#define SIGPIPE 13
+#define SIGALRM 14
+#define SIGTERM 15
+#define SIGSTKFLT 16
+#define SIGCHLD 17
+#define SIGCONT 18
+#define SIGSTOP 19
+#define SIGTSTP 20
+#define SIGTTIN 21
+#define SIGTTOU 22
+#define SIGURG 23
+#define SIGXCPU 24
+#define SIGXFSZ 25
+#define SIGVTALRM 26
+#define SIGPROF 27
+#define SIGWINCH 28
+#define SIGIO 29
+#define SIGPOLL SIGIO
+/*
+#define SIGLOST 29
+*/
+#define SIGPWR 30
+#define SIGSYS 31
+#define SIGUNUSED 31
+
+/* These should not be considered constants from userland. */
+#define SIGRTMIN 32
+#define SIGRTMAX (_NSIG-1)
+
+/*
+ * SA_FLAGS values:
+ *
+ * SA_ONSTACK indicates that a registered stack_t will be used.
+ * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
+ * SA_RESTART flag to get restarting signals (which were the default long ago)
+ * SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
+ * SA_RESETHAND clears the handler when the signal is delivered.
+ * SA_NOCLDWAIT flag on SIGCHLD to inhibit zombies.
+ * SA_NODEFER prevents the current signal from being masked in the handler.
+ *
+ * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
+ * Unix names RESETHAND and NODEFER respectively.
+ */
+#define SA_NOCLDSTOP 0x00000001
+#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
+#define SA_SIGINFO 0x00000004
+#define SA_ONSTACK 0x08000000
+#define SA_RESTART 0x10000000
+#define SA_NODEFER 0x40000000
+#define SA_RESETHAND 0x80000000
+
+#define SA_NOMASK SA_NODEFER
+#define SA_ONESHOT SA_RESETHAND
+#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
+
+#define SA_RESTORER 0x04000000
+
+/*
+ * sigaltstack controls
+ */
+#define SS_ONSTACK 1
+#define SS_DISABLE 2
+
+#define MINSIGSTKSZ 2048
+#define SIGSTKSZ 8192
+
+#ifdef __KERNEL__
+
+/*
+ * These values of sa_flags are used only by the kernel as part of the
+ * irq handling routines.
+ *
+ * SA_INTERRUPT is also used by the irq handling routines.
+ * SA_SHIRQ is for shared interrupt support on PCI and EISA.
+ */
+#define SA_PROBE SA_ONESHOT
+#define SA_SAMPLE_RANDOM SA_RESTART
+#define SA_SHIRQ 0x04000000
+#endif
+
+#define SIG_BLOCK 0 /* for blocking signals */
+#define SIG_UNBLOCK 1 /* for unblocking signals */
+#define SIG_SETMASK 2 /* for setting the signal mask */
+
+/* Type of a signal handler. */
+typedef void (*__sighandler_t)(int);
+
+#define SIG_DFL ((__sighandler_t)0) /* default signal handling */
+#define SIG_IGN ((__sighandler_t)1) /* ignore signal */
+#define SIG_ERR ((__sighandler_t)-1) /* error return from signal */
+
+#ifdef __KERNEL__
+struct old_sigaction {
+ __sighandler_t sa_handler;
+ old_sigset_t sa_mask;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+};
+
+struct sigaction {
+ __sighandler_t sa_handler;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+};
+
+struct k_sigaction {
+ struct sigaction sa;
+};
+#else
+/* Here we must cater to libcs that poke about in kernel headers. */
+
+struct sigaction {
+ union {
+ __sighandler_t _sa_handler;
+ void (*_sa_sigaction)(int, struct siginfo *, void *);
+ } _u;
+ sigset_t sa_mask;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+};
+
+#define sa_handler _u._sa_handler
+#define sa_sigaction _u._sa_sigaction
+
+#endif /* __KERNEL__ */
+
+typedef struct sigaltstack {
+ void *ss_sp;
+ int ss_flags;
+ size_t ss_size;
+} stack_t;
+
+extern int do_signal(struct pt_regs *regs, sigset_t *oldset);
+#define ptrace_signal_deliver(regs, cookie) do { } while (0)
+
+#ifdef __KERNEL__
+
+#include <asm/sigcontext.h>
+#undef __HAVE_ARCH_SIG_BITOPS
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_SIGNAL_H */
--- /dev/null
+#ifndef __ASM_SMP_H
+#define __ASM_SMP_H
+
+#include <linux/config.h>
+
+#ifdef CONFIG_SMP
+#error SMP not supported
+#endif
+
+#endif
--- /dev/null
+#ifndef _ASM_SOCKET_H
+#define _ASM_SOCKET_H
+
+#include <asm/sockios.h>
+
+/* For setsockopt(2) */
+#define SOL_SOCKET 1
+
+#define SO_DEBUG 1
+#define SO_REUSEADDR 2
+#define SO_TYPE 3
+#define SO_ERROR 4
+#define SO_DONTROUTE 5
+#define SO_BROADCAST 6
+#define SO_SNDBUF 7
+#define SO_RCVBUF 8
+#define SO_KEEPALIVE 9
+#define SO_OOBINLINE 10
+#define SO_NO_CHECK 11
+#define SO_PRIORITY 12
+#define SO_LINGER 13
+#define SO_BSDCOMPAT 14
+/* To add :#define SO_REUSEPORT 15 */
+#define SO_PASSCRED 16
+#define SO_PEERCRED 17
+#define SO_RCVLOWAT 18
+#define SO_SNDLOWAT 19
+#define SO_RCVTIMEO 20
+#define SO_SNDTIMEO 21
+
+/* Security levels - as per NRL IPv6 - don't actually do anything */
+#define SO_SECURITY_AUTHENTICATION 22
+#define SO_SECURITY_ENCRYPTION_TRANSPORT 23
+#define SO_SECURITY_ENCRYPTION_NETWORK 24
+
+#define SO_BINDTODEVICE 25
+
+/* Socket filtering */
+#define SO_ATTACH_FILTER 26
+#define SO_DETACH_FILTER 27
+
+#define SO_PEERNAME 28
+#define SO_TIMESTAMP 29
+#define SCM_TIMESTAMP SO_TIMESTAMP
+
+#define SO_ACCEPTCONN 30
+
+#define SO_PEERSEC 31
+
+#endif /* _ASM_SOCKET_H */
+
--- /dev/null
+/* system.h: FR-V CPU control definitions
+ *
+ * Copyright (C) 2003 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_SYSTEM_H
+#define _ASM_SYSTEM_H
+
+#include <linux/config.h> /* get configuration macros */
+#include <linux/linkage.h>
+#include <asm/atomic.h>
+
+struct thread_struct;
+
+#define prepare_to_switch() do { } while(0)
+
+/*
+ * switch_to(prev, next) should switch from task `prev' to `next'
+ * `prev' will never be the same as `next'.
+ * The `mb' is to tell GCC not to cache `current' across this call.
+ */
+extern asmlinkage
+struct task_struct *__switch_to(struct thread_struct *prev_thread,
+ struct thread_struct *next_thread,
+ struct task_struct *prev);
+
+#define switch_to(prev, next, last) \
+do { \
+ (prev)->thread.sched_lr = \
+ (unsigned long) __builtin_return_address(0); \
+ (last) = __switch_to(&(prev)->thread, &(next)->thread, (prev)); \
+ mb(); \
+} while(0)
+
+/*
+ * interrupt flag manipulation
+ */
+#define local_irq_disable() \
+do { \
+ unsigned long psr; \
+ asm volatile(" movsg psr,%0 \n" \
+ " andi %0,%2,%0 \n" \
+ " ori %0,%1,%0 \n" \
+ " movgs %0,psr \n" \
+ : "=r"(psr) \
+ : "i" (PSR_PIL_14), "i" (~PSR_PIL) \
+ : "memory"); \
+} while(0)
+
+#define local_irq_enable() \
+do { \
+ unsigned long psr; \
+ asm volatile(" movsg psr,%0 \n" \
+ " andi %0,%1,%0 \n" \
+ " movgs %0,psr \n" \
+ : "=r"(psr) \
+ : "i" (~PSR_PIL) \
+ : "memory"); \
+} while(0)
+
+#define local_save_flags(flags) \
+do { \
+ typecheck(unsigned long, flags); \
+ asm("movsg psr,%0" \
+ : "=r"(flags) \
+ : \
+ : "memory"); \
+} while(0)
+
+#define local_irq_save(flags) \
+do { \
+ unsigned long npsr; \
+ typecheck(unsigned long, flags); \
+ asm volatile(" movsg psr,%0 \n" \
+ " andi %0,%3,%1 \n" \
+ " ori %1,%2,%1 \n" \
+ " movgs %1,psr \n" \
+ : "=r"(flags), "=r"(npsr) \
+ : "i" (PSR_PIL_14), "i" (~PSR_PIL) \
+ : "memory"); \
+} while(0)
+
+#define local_irq_restore(flags) \
+do { \
+ typecheck(unsigned long, flags); \
+ asm volatile(" movgs %0,psr \n" \
+ : \
+ : "r" (flags) \
+ : "memory"); \
+} while(0)
+
+#define irqs_disabled() \
+ ((__get_PSR() & PSR_PIL) >= PSR_PIL_14)
+
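+/*
+ * Illustrative usage of the flag-manipulation macros above (the usual kernel
+ * idiom, nothing FR-V specific):
+ *
+ *	unsigned long flags;
+ *
+ *	local_irq_save(flags);		// raise PIL to 14, flags keeps old PSR
+ *	... touch data shared with an interrupt handler ...
+ *	local_irq_restore(flags);	// put the saved PSR back
+ *
+ * local_irq_save()/local_irq_restore() pairs nest safely, whereas
+ * local_irq_enable()/local_irq_disable() do not preserve the previous state.
+ */
+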
+/*
+ * Force strict CPU ordering.
+ */
+#define nop() asm volatile ("nop"::)
+#define mb() asm volatile ("membar" : : :"memory")
+#define rmb() asm volatile ("membar" : : :"memory")
+#define wmb() asm volatile ("membar" : : :"memory")
+#define set_mb(var, value) do { var = value; mb(); } while (0)
+#define set_wmb(var, value) do { var = value; wmb(); } while (0)
+
+#define smp_mb() mb()
+#define smp_rmb() rmb()
+#define smp_wmb() wmb()
+
+#define read_barrier_depends() do {} while(0)
+#define smp_read_barrier_depends() read_barrier_depends()
+
+#define HARD_RESET_NOW() \
+do { \
+ cli(); \
+} while(1)
+
+extern void die_if_kernel(const char *, ...) __attribute__((format(printf, 1, 2)));
+extern void free_initmem(void);
+
+#endif /* _ASM_SYSTEM_H */
--- /dev/null
+#ifndef _ASM_TERMBITS_H__
+#define _ASM_TERMBITS_H__
+
+#include <linux/posix_types.h>
+
+typedef unsigned char cc_t;
+typedef unsigned int speed_t;
+typedef unsigned int tcflag_t;
+
+#define NCCS 19
+struct termios {
+ tcflag_t c_iflag; /* input mode flags */
+ tcflag_t c_oflag; /* output mode flags */
+ tcflag_t c_cflag; /* control mode flags */
+ tcflag_t c_lflag; /* local mode flags */
+ cc_t c_line; /* line discipline */
+ cc_t c_cc[NCCS]; /* control characters */
+};
+
+/* c_cc characters */
+#define VINTR 0
+#define VQUIT 1
+#define VERASE 2
+#define VKILL 3
+#define VEOF 4
+#define VTIME 5
+#define VMIN 6
+#define VSWTC 7
+#define VSTART 8
+#define VSTOP 9
+#define VSUSP 10
+#define VEOL 11
+#define VREPRINT 12
+#define VDISCARD 13
+#define VWERASE 14
+#define VLNEXT 15
+#define VEOL2 16
+
+
+/* c_iflag bits */
+#define IGNBRK 0000001
+#define BRKINT 0000002
+#define IGNPAR 0000004
+#define PARMRK 0000010
+#define INPCK 0000020
+#define ISTRIP 0000040
+#define INLCR 0000100
+#define IGNCR 0000200
+#define ICRNL 0000400
+#define IUCLC 0001000
+#define IXON 0002000
+#define IXANY 0004000
+#define IXOFF 0010000
+#define IMAXBEL 0020000
+#define IUTF8 0040000
+
+/* c_oflag bits */
+#define OPOST 0000001
+#define OLCUC 0000002
+#define ONLCR 0000004
+#define OCRNL 0000010
+#define ONOCR 0000020
+#define ONLRET 0000040
+#define OFILL 0000100
+#define OFDEL 0000200
+#define NLDLY 0000400
+#define NL0 0000000
+#define NL1 0000400
+#define CRDLY 0003000
+#define CR0 0000000
+#define CR1 0001000
+#define CR2 0002000
+#define CR3 0003000
+#define TABDLY 0014000
+#define TAB0 0000000
+#define TAB1 0004000
+#define TAB2 0010000
+#define TAB3 0014000
+#define XTABS 0014000
+#define BSDLY 0020000
+#define BS0 0000000
+#define BS1 0020000
+#define VTDLY 0040000
+#define VT0 0000000
+#define VT1 0040000
+#define FFDLY 0100000
+#define FF0 0000000
+#define FF1 0100000
+
+/* c_cflag bit meaning */
+#define CBAUD 0010017
+#define B0 0000000 /* hang up */
+#define B50 0000001
+#define B75 0000002
+#define B110 0000003
+#define B134 0000004
+#define B150 0000005
+#define B200 0000006
+#define B300 0000007
+#define B600 0000010
+#define B1200 0000011
+#define B1800 0000012
+#define B2400 0000013
+#define B4800 0000014
+#define B9600 0000015
+#define B19200 0000016
+#define B38400 0000017
+#define EXTA B19200
+#define EXTB B38400
+#define CSIZE 0000060
+#define CS5 0000000
+#define CS6 0000020
+#define CS7 0000040
+#define CS8 0000060
+#define CSTOPB 0000100
+#define CREAD 0000200
+#define PARENB 0000400
+#define PARODD 0001000
+#define HUPCL 0002000
+#define CLOCAL 0004000
+#define CBAUDEX 0010000
+#define B57600 0010001
+#define B115200 0010002
+#define B230400 0010003
+#define B460800 0010004
+#define B500000 0010005
+#define B576000 0010006
+#define B921600 0010007
+#define B1000000 0010010
+#define B1152000 0010011
+#define B1500000 0010012
+#define B2000000 0010013
+#define B2500000 0010014
+#define B3000000 0010015
+#define B3500000 0010016
+#define B4000000 0010017
+#define CIBAUD 002003600000 /* input baud rate (not used) */
+#define CTVB 004000000000 /* VisioBraille Terminal flow control */
+#define CMSPAR 010000000000 /* mark or space (stick) parity */
+#define CRTSCTS 020000000000 /* flow control */
+
+/* c_lflag bits */
+#define ISIG 0000001
+#define ICANON 0000002
+#define XCASE 0000004
+#define ECHO 0000010
+#define ECHOE 0000020
+#define ECHOK 0000040
+#define ECHONL 0000100
+#define NOFLSH 0000200
+#define TOSTOP 0000400
+#define ECHOCTL 0001000
+#define ECHOPRT 0002000
+#define ECHOKE 0004000
+#define FLUSHO 0010000
+#define PENDIN 0040000
+#define IEXTEN 0100000
+
+
+/* tcflow() and TCXONC use these */
+#define TCOOFF 0
+#define TCOON 1
+#define TCIOFF 2
+#define TCION 3
+
+/* tcflush() and TCFLSH use these */
+#define TCIFLUSH 0
+#define TCOFLUSH 1
+#define TCIOFLUSH 2
+
+/* tcsetattr uses these */
+#define TCSANOW 0
+#define TCSADRAIN 1
+#define TCSAFLUSH 2
+
+#endif /* _ASM_TERMBITS_H__ */
+
--- /dev/null
+#ifndef _ASM_TERMIOS_H
+#define _ASM_TERMIOS_H
+
+#include <asm/termbits.h>
+#include <asm/ioctls.h>
+
+struct winsize {
+ unsigned short ws_row;
+ unsigned short ws_col;
+ unsigned short ws_xpixel;
+ unsigned short ws_ypixel;
+};
+
+#define NCC 8
+struct termio {
+ unsigned short c_iflag; /* input mode flags */
+ unsigned short c_oflag; /* output mode flags */
+ unsigned short c_cflag; /* control mode flags */
+ unsigned short c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[NCC]; /* control characters */
+};
+
+#ifdef __KERNEL__
+/* intr=^C quit=^| erase=del kill=^U
+ eof=^D vtime=\0 vmin=\1 sxtc=\0
+ start=^Q stop=^S susp=^Z eol=\0
+ reprint=^R discard=^U werase=^W lnext=^V
+ eol2=\0
+*/
+#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+#endif
+
+/* modem lines */
+#define TIOCM_LE 0x001
+#define TIOCM_DTR 0x002
+#define TIOCM_RTS 0x004
+#define TIOCM_ST 0x008
+#define TIOCM_SR 0x010
+#define TIOCM_CTS 0x020
+#define TIOCM_CAR 0x040
+#define TIOCM_RNG 0x080
+#define TIOCM_DSR 0x100
+#define TIOCM_CD TIOCM_CAR
+#define TIOCM_RI TIOCM_RNG
+#define TIOCM_OUT1 0x2000
+#define TIOCM_OUT2 0x4000
+#define TIOCM_LOOP 0x8000
+
+#define TIOCM_MODEM_BITS TIOCM_OUT2 /* IRDA support */
+
+/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+
+/* line disciplines */
+#define N_TTY 0
+#define N_SLIP 1
+#define N_MOUSE 2
+#define N_PPP 3
+#define N_STRIP 4
+#define N_AX25 5
+#define N_X25 6 /* X.25 async */
+#define N_6PACK 7
+#define N_MASC 8 /* Reserved for Mobitex module <kaz@cafe.net> */
+#define N_R3964 9 /* Reserved for Simatic R3964 module */
+#define N_PROFIBUS_FDL 10 /* Reserved for Profibus <Dave@mvhi.com> */
+#define N_IRDA 11 /* Linux IrDa - http://irda.sourceforge.net/ */
+#define N_SMSBLOCK 12 /* SMS block mode - for talking to GSM data cards about SMS messages */
+#define N_HDLC 13 /* synchronous HDLC */
+#define N_SYNC_PPP 14
+#define N_HCI 15 /* Bluetooth HCI UART */
+
+#include <asm-generic/termios.h>
+
+#endif /* _ASM_TERMIOS_H */
--- /dev/null
+/* thread_info.h: description
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * Derived from include/asm-i386/thread_info.h
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_THREAD_INFO_H
+#define _ASM_THREAD_INFO_H
+
+#ifdef __KERNEL__
+
+#ifndef __ASSEMBLY__
+#include <asm/processor.h>
+#endif
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - this struct shares the supervisor stack pages
+ * - if the contents of this structure are changed, the assembly constants must also be changed
+ */
+#ifndef __ASSEMBLY__
+
+struct thread_info {
+ struct task_struct *task; /* main task structure */
+ struct exec_domain *exec_domain; /* execution domain */
+ unsigned long flags; /* low level flags */
+ unsigned long status; /* thread-synchronous flags */
+ __u32 cpu; /* current CPU */
+ __s32 preempt_count; /* 0 => preemptable, <0 => BUG */
+
+ mm_segment_t addr_limit; /* thread address space:
0-0xBFFFFFFF for user-thread
+ 0-0xFFFFFFFF for kernel-thread
+ */
+ struct restart_block restart_block;
+
+ __u8 supervisor_stack[0];
+};
+
+#else /* !__ASSEMBLY__ */
+
+/* offsets into the thread_info struct for assembly code access */
+#define TI_TASK 0x00000000
+#define TI_EXEC_DOMAIN 0x00000004
+#define TI_FLAGS 0x00000008
+#define TI_STATUS 0x0000000C
+#define TI_CPU 0x00000010
+#define TI_PRE_COUNT 0x00000014
+#define TI_ADDR_LIMIT 0x00000018
+#define TI_RESTART_BLOCK 0x0000001C
+
+#endif
+
+#define PREEMPT_ACTIVE 0x4000000
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+#ifndef __ASSEMBLY__
+
+#define INIT_THREAD_INFO(tsk) \
+{ \
+ .task = &tsk, \
+ .exec_domain = &default_exec_domain, \
+ .flags = 0, \
+ .cpu = 0, \
+ .preempt_count = 1, \
+ .addr_limit = KERNEL_DS, \
+ .restart_block = { \
+ .fn = do_no_restart_syscall, \
+ }, \
+}
+
+#define init_thread_info (init_thread_union.thread_info)
+#define init_stack (init_thread_union.stack)
+
+#ifdef CONFIG_SMALL_TASKS
+#define THREAD_SIZE 4096
+#else
+#define THREAD_SIZE 8192
+#endif
+
+/* how to get the thread information struct from C */
+register struct thread_info *__current_thread_info asm("gr15");
+
+#define current_thread_info() ({ __current_thread_info; })
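+
+/*
+ * Illustrative only: as the stack layout diagram in asm/registers.h shows,
+ * the thread_info sits at the bottom of the kernel stack block and GR15
+ * points at it, so the owning task can be reached with
+ *
+ *	struct task_struct *tsk = current_thread_info()->task;
+ *
+ * without any arithmetic on the stack pointer.
+ */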
+
+/* thread information allocation */
+#ifdef CONFIG_DEBUG_STACK_USAGE
+#define alloc_thread_info(tsk) \
+ ({ \
+ struct thread_info *ret; \
+ \
+ ret = kmalloc(THREAD_SIZE, GFP_KERNEL); \
+ if (ret) \
+ memset(ret, 0, THREAD_SIZE); \
+ ret; \
+ })
+#else
+#define alloc_thread_info(tsk) kmalloc(THREAD_SIZE, GFP_KERNEL)
+#endif
+
+#define free_thread_info(info) kfree(info)
+#define get_thread_info(ti) get_task_struct((ti)->task)
+#define put_thread_info(ti) put_task_struct((ti)->task)
+
+#else /* !__ASSEMBLY__ */
+
+#define THREAD_SIZE 8192
+
+#endif
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files may need to access
+ * - pending work-to-be-done flags are in LSW
+ * - other flags in MSW
+ */
+#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
+#define TIF_NOTIFY_RESUME 1 /* resumption notification requested */
+#define TIF_SIGPENDING 2 /* signal pending */
+#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
+#define TIF_SINGLESTEP 4 /* restore singlestep on return to user mode */
+#define TIF_IRET 5 /* return with iret */
+#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
+#define TIF_MEMDIE 17 /* OOM killer killed process */
+
+#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
+#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
+#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
+#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
+#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
+#define _TIF_IRET (1 << TIF_IRET)
+#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
+
+#define _TIF_WORK_MASK 0x0000FFFE /* work to do on interrupt/exception return */
+#define _TIF_ALLWORK_MASK 0x0000FFFF /* work to do on any return to u-space */
+
+/*
+ * Thread-synchronous status.
+ *
+ * This is different from the flags in that nobody else
+ * ever touches our thread-synchronous status, so we don't
+ * have to worry about atomic accesses.
+ */
+#define TS_USEDFPM 0x0001 /* FPU/Media was used by this task this quantum (SMP) */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_THREAD_INFO_H */
--- /dev/null
+/* timex.h: FR-V architecture timex specifications
+ */
+#ifndef _ASM_TIMEX_H
+#define _ASM_TIMEX_H
+
+#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+#define CLOCK_TICK_FACTOR 20 /* Factor of both 1000000 and CLOCK_TICK_RATE */
+
+#define FINETUNE \
+((((((long)LATCH * HZ - CLOCK_TICK_RATE) << SHIFT_HZ) * \
+ (1000000/CLOCK_TICK_FACTOR) / (CLOCK_TICK_RATE/CLOCK_TICK_FACTOR)) \
+ << (SHIFT_SCALE-SHIFT_HZ)) / HZ)
+
+typedef unsigned long cycles_t;
+
+static inline cycles_t get_cycles(void)
+{
+ return 0;
+}
+
+#define vxtime_lock() do {} while (0)
+#define vxtime_unlock() do {} while (0)
+
+#endif
+
--- /dev/null
+/* tlbflush.h: TLB flushing functions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_TLBFLUSH_H
+#define _ASM_TLBFLUSH_H
+
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <asm/processor.h>
+
+#ifdef CONFIG_MMU
+
+#ifndef __ASSEMBLY__
+extern void asmlinkage __flush_tlb_all(void);
+extern void asmlinkage __flush_tlb_mm(unsigned long contextid);
+extern void asmlinkage __flush_tlb_page(unsigned long contextid, unsigned long start);
+extern void asmlinkage __flush_tlb_range(unsigned long contextid,
+ unsigned long start, unsigned long end);
+#endif /* !__ASSEMBLY__ */
+
+#define flush_tlb_all() \
+do { \
+ preempt_disable(); \
+ __flush_tlb_all(); \
+ preempt_enable(); \
+} while(0)
+
+#define flush_tlb_mm(mm) \
+do { \
+ preempt_disable(); \
+ __flush_tlb_mm((mm)->context.id); \
+ preempt_enable(); \
+} while(0)
+
+#define flush_tlb_range(vma,start,end) \
+do { \
+ preempt_disable(); \
+ __flush_tlb_range((vma)->vm_mm->context.id, start, end); \
+ preempt_enable(); \
+} while(0)
+
+#define flush_tlb_page(vma,addr) \
+do { \
+ preempt_disable(); \
+ __flush_tlb_page((vma)->vm_mm->context.id, addr); \
+ preempt_enable(); \
+} while(0)
+
+
+#define __flush_tlb_global() flush_tlb_all()
+#define flush_tlb() flush_tlb_all()
+#define flush_tlb_kernel_range(start, end) flush_tlb_all()
+#define flush_tlb_pgtables(mm,start,end) asm volatile("movgs gr0,scr0 ! movgs gr0,scr1");
+
+#else
+
+#define flush_tlb() BUG()
+#define flush_tlb_all() BUG()
+#define flush_tlb_mm(mm) BUG()
+#define flush_tlb_page(vma,addr) BUG()
+#define flush_tlb_range(mm,start,end) BUG()
+#define flush_tlb_pgtables(mm,start,end) BUG()
+#define flush_tlb_kernel_range(start, end) BUG()
+
+#endif
+
+
+#endif /* _ASM_TLBFLUSH_H */
--- /dev/null
+/* types.h: FRV types
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_TYPES_H
+#define _ASM_TYPES_H
+
+#ifndef __ASSEMBLY__
+
+typedef unsigned short umode_t;
+
+/*
+ * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
+ * header files exported to user space
+ */
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+/*
+ * These aren't exported outside the kernel to avoid name space clashes
+ */
+#ifdef __KERNEL__
+
+#define BITS_PER_LONG 32
+
+#ifndef __ASSEMBLY__
+
+#include <linux/config.h>
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+typedef signed long long s64;
+typedef unsigned long long u64;
+typedef u64 u_quad_t;
+
+/* Dma addresses are 32-bits wide. */
+
+typedef u32 dma_addr_t;
+
+typedef unsigned short kmem_bufctl_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_TYPES_H */
--- /dev/null
+/* uaccess.h: userspace accessor functions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_UACCESS_H
+#define _ASM_UACCESS_H
+
+/*
+ * User space memory access functions
+ */
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <asm/segment.h>
+#include <asm/sections.h>
+
+#define HAVE_ARCH_UNMAPPED_AREA /* we decide where to put mmaps */
+
+#define __ptr(x) ((unsigned long *)(x))
+
+#define VERIFY_READ 0
+#define VERIFY_WRITE 1
+
+#define __addr_ok(addr) ((unsigned long)(addr) < get_addr_limit())
+
+/*
+ * check that a range of addresses falls within the current address limit
+ */
+static inline int ___range_ok(unsigned long addr, unsigned long size)
+{
+#ifdef CONFIG_MMU
+ int flag = -EFAULT, tmp;
+
+ asm volatile (
+ " addcc %3,%2,%1,icc0 \n" /* set C-flag if addr+size>4GB */
+ " subcc.p %1,%4,gr0,icc1 \n" /* jump if addr+size>limit */
+ " bc icc0,#0,0f \n"
+ " bhi icc1,#0,0f \n"
+ " setlos #0,%0 \n" /* mark okay */
+ "0: \n"
+ : "=r"(flag), "=&r"(tmp)
+ : "r"(addr), "r"(size), "r"(get_addr_limit()), "0"(flag)
+ );
+
+ return flag;
+
+#else
+
+ if (addr < memory_start ||
+ addr > memory_end ||
+ size > memory_end - memory_start ||
+ addr + size > memory_end)
+ return -EFAULT;
+
+ return 0;
+#endif
+}
+
+#define __range_ok(addr,size) ___range_ok((unsigned long) (addr), (unsigned long) (size))
+
+#define access_ok(type,addr,size) (__range_ok((addr), (size)) == 0)
+#define __access_ok(addr,size) (__range_ok((addr), (size)) == 0)
+
+static inline int verify_area(int type, const void * addr, unsigned long size)
+{
+ return __range_ok(addr, size);
+}
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue. No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path. This means when everything is well,
+ * we don't even have to jump over them. Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+struct exception_table_entry
+{
+ unsigned long insn, fixup;
+};
+
+/* Returns 0 if exception not found and fixup otherwise. */
+extern unsigned long search_exception_table(unsigned long);
+
+
+/*
+ * These are the main single-value transfer routines. They automatically
+ * use the right size if we just have the right pointer type.
+ */
+#define __put_user(x, ptr) \
+({ \
+ int __pu_err = 0; \
+ \
+ typeof(*(ptr)) __pu_val = (x); \
+ \
+ switch (sizeof (*(ptr))) { \
+ case 1: \
+ __put_user_asm(__pu_err, __pu_val, ptr, "b", "r"); \
+ break; \
+ case 2: \
+ __put_user_asm(__pu_err, __pu_val, ptr, "h", "r"); \
+ break; \
+ case 4: \
+ __put_user_asm(__pu_err, __pu_val, ptr, "", "r"); \
+ break; \
+ case 8: \
+ __put_user_asm(__pu_err, __pu_val, ptr, "d", "e"); \
+ break; \
+ default: \
+ __pu_err = __put_user_bad(); \
+ break; \
+ } \
+ __pu_err; \
+})
+
+#define put_user(x, ptr) \
+({ \
+ typeof(&*ptr) _p = (ptr); \
+ int _e; \
+ \
+ _e = __range_ok(_p, sizeof(*_p)); \
+ if (_e == 0) \
+ _e = __put_user((x), _p); \
+ _e; \
+})
+
+extern int __put_user_bad(void);
+
+/*
+ * Tell gcc we read from memory instead of writing: this is because
+ * we do not write to any memory gcc knows about, so there are no
+ * aliasing issues.
+ */
+
+#ifdef CONFIG_MMU
+
+#define __put_user_asm(err,x,ptr,dsize,constraint) \
+do { \
+ asm volatile("1: st"dsize"%I1 %2,%M1 \n" \
+ "2: \n" \
+ ".subsection 2 \n" \
+ "3: setlos %3,%0 \n" \
+ " bra 2b \n" \
+ ".previous \n" \
+ ".section __ex_table,\"a\" \n" \
+ " .balign 8 \n" \
+ " .long 1b,3b \n" \
+ ".previous" \
+ : "=r" (err) \
+ : "m" (*__ptr(ptr)), constraint (x), "i"(-EFAULT), "0"(err) \
+ : "memory"); \
+} while (0)
+
+#else
+
+#define __put_user_asm(err,x,ptr,bwl,con) \
+do { \
+ asm(" st"bwl"%I0 %1,%M0 \n" \
+ " membar \n" \
+ : \
+ : "m" (*__ptr(ptr)), con (x) \
+ : "memory"); \
+} while (0)
+
+#endif
+
+/*****************************************************************************/
+/*
+ * single-value transfer from userspace
+ */
+#define __get_user(x, ptr) \
+({ \
+ typeof(*(ptr)) __gu_val = 0; \
+ int __gu_err = 0; \
+ \
+ switch (sizeof(*(ptr))) { \
+ case 1: \
+ __get_user_asm(__gu_err, __gu_val, ptr, "ub", "=r"); \
+ break; \
+ case 2: \
+ __get_user_asm(__gu_err, __gu_val, ptr, "uh", "=r"); \
+ break; \
+ case 4: \
+ __get_user_asm(__gu_err, __gu_val, ptr, "", "=r"); \
+ break; \
+ case 8: \
+ __get_user_asm(__gu_err, __gu_val, ptr, "d", "=e"); \
+ break; \
+ default: \
+ __gu_err = __get_user_bad(); \
+ break; \
+ } \
+ (x) = __gu_val; \
+ __gu_err; \
+})
+
+#define get_user(x, ptr) \
+({ \
+ typeof(&*ptr) _p = (ptr); \
+ int _e; \
+ \
+ _e = __range_ok(_p, sizeof(*_p)); \
+ if (likely(_e == 0)) \
+ _e = __get_user((x), _p); \
+ else \
+ (x) = (typeof(x)) 0; \
+ _e; \
+})
+
+extern int __get_user_bad(void);
+
+#ifdef CONFIG_MMU
+
+#define __get_user_asm(err,x,ptr,dtype,constraint) \
+do { \
+ asm("1: ld"dtype"%I2 %M2,%1 \n" \
+ "2: \n" \
+ ".subsection 2 \n" \
+ "3: setlos %3,%0 \n" \
+ " setlos #0,%1 \n" \
+ " bra 2b \n" \
+ ".previous \n" \
+ ".section __ex_table,\"a\" \n" \
+ " .balign 8 \n" \
+ " .long 1b,3b \n" \
+ ".previous" \
+ : "=r" (err), constraint (x) \
+ : "m" (*__ptr(ptr)), "i"(-EFAULT), "0"(err) \
+ ); \
+} while(0)
+
+#else
+
+#define __get_user_asm(err,x,ptr,bwl,con) \
+ asm(" ld"bwl"%I1 %M1,%0 \n" \
+ " membar \n" \
+ : con(x) \
+ : "m" (*__ptr(ptr)))
+
+#endif
+
+/*****************************************************************************/
+/*
+ * block clear and copy operations between kernel and userspace
+ */
+#ifdef CONFIG_MMU
+extern long __memset_user(void *dst, unsigned long count);
+extern long __memcpy_user(void *dst, const void *src, unsigned long count);
+
+#define clear_user(dst,count) __memset_user((dst), (count))
+#define __copy_from_user_inatomic(to, from, n) __memcpy_user((to), (from), (n))
+#define __copy_to_user_inatomic(to, from, n) __memcpy_user((to), (from), (n))
+
+#else
+
+#define clear_user(dst,count) (memset((dst), 0, (count)), 0)
+#define __copy_from_user_inatomic(to, from, n) (memcpy((to), (from), (n)), 0)
+#define __copy_to_user_inatomic(to, from, n) (memcpy((to), (from), (n)), 0)
+
+#endif
+
+static inline unsigned long __must_check
+__copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+ might_sleep();
+ return __copy_to_user_inatomic(to, from, n);
+}
+
+static inline unsigned long
+__copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+ might_sleep();
+ return __copy_from_user_inatomic(to, from, n);
+}
+
+static inline long copy_from_user(void *to, const void *from, unsigned long n)
+{
+ unsigned long ret = n;
+
+ if (likely(__access_ok(from, n)))
+ ret = __copy_from_user(to, from, n);
+
+ if (unlikely(ret != 0))
+ memset(to + (n - ret), 0, ret);
+
+ return ret;
+}
+
+static inline long copy_to_user(void *to, const void *from, unsigned long n)
+{
+ return likely(__access_ok(to, n)) ? __copy_to_user(to, from, n) : n;
+}
+
+#define copy_to_user_ret(to,from,n,retval) ({ if (copy_to_user(to,from,n)) return retval; })
+#define copy_from_user_ret(to,from,n,retval) ({ if (copy_from_user(to,from,n)) return retval; })
+
+extern long strncpy_from_user(char *dst, const char *src, long count);
+extern long strnlen_user(const char *src, long count);
+
+#define strlen_user(str) strnlen_user(str, 32767)
+
+extern unsigned long search_exception_table(unsigned long addr);
+
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) memcpy(dst, src, len)
+
+#endif /* _ASM_UACCESS_H */
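
The accessor family above follows the usual kernel convention: __range_ok()/access_ok() validate a user range against the current address limit, get_user()/put_user() transfer single values and return -EFAULT on a faulting access, and copy_{from,to}_user() return the number of bytes that could not be copied. A minimal sketch of that convention in use; the struct and function names (demo_args, demo_roundtrip) are hypothetical:

#include <linux/compiler.h>
#include <linux/errno.h>
#include <asm/uaccess.h>

struct demo_args {
	int in;
	int out;
};

/* Copy a small argument block in, update it, and copy one field back.
 * Returns 0 on success or -EFAULT if either access faults. */
static int demo_roundtrip(struct demo_args __user *uptr)
{
	struct demo_args args;

	if (copy_from_user(&args, uptr, sizeof(args)))
		return -EFAULT;

	args.out = args.in + 1;

	/* put_user() range-checks the pointer itself before storing */
	if (put_user(args.out, &uptr->out))
		return -EFAULT;

	return 0;
}
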
--- /dev/null
+/* unaligned.h: unaligned access handler
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_UNALIGNED_H
+#define _ASM_UNALIGNED_H
+
+#include <linux/config.h>
+
+/*
+ * Unaligned accesses on uClinux can't be performed in a fault handler - the
+ * CPU detects them as imprecise exceptions making this impossible.
+ *
+ * With the FR451, however, they are precise, and so we used to fix them up in
+ * the memory access fault handler. However, instruction bundling makes this
+ * impractical. So now we fall back to using memcpy.
+ */
+#ifdef CONFIG_MMU
+
+/*
+ * The asm statement in the macros below is a way to get GCC to copy a
+ * value from one variable to another without having any clue it's
+ * actually doing so, so that it won't have any idea that the values
+ * in the two variables are related.
+ */
+
+#define get_unaligned(ptr) ({ \
+ typeof((*(ptr))) __x; \
+ void *__ptrcopy; \
+ asm("" : "=r" (__ptrcopy) : "0" (ptr)); \
+ memcpy(&__x, __ptrcopy, sizeof(*(ptr))); \
+ __x; \
+})
+
+#define put_unaligned(val, ptr) ({ \
+ typeof((*(ptr))) __x = (val); \
+ void *__ptrcopy; \
+ asm("" : "=r" (__ptrcopy) : "0" (ptr)); \
+ memcpy(__ptrcopy, &__x, sizeof(*(ptr))); \
+})
+
+extern int handle_misalignment(unsigned long esr0, unsigned long ear0, unsigned long epcr0);
+
+#else
+
+#define get_unaligned(ptr) \
+({ \
+ typeof(*(ptr)) x; \
+ const char *__p = (const char *) (ptr); \
+ \
+ switch (sizeof(x)) { \
+ case 1: \
+ x = *(ptr); \
+ break; \
+ case 2: \
+ { \
+ uint8_t a; \
+ asm(" ldub%I2 %M2,%0 \n" \
+ " ldub%I3.p %M3,%1 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%1,%0 \n" \
+ : "=&r"(x), "=&r"(a) \
+ : "m"(__p[0]), "m"(__p[1]) \
+ ); \
+ break; \
+ } \
+ \
+ case 4: \
+ { \
+ uint8_t a; \
+ asm(" ldub%I2 %M2,%0 \n" \
+ " ldub%I3.p %M3,%1 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%1,%0 \n" \
+ " ldub%I4.p %M4,%1 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%1,%0 \n" \
+ " ldub%I5.p %M5,%1 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%1,%0 \n" \
+ : "=&r"(x), "=&r"(a) \
+ : "m"(__p[0]), "m"(__p[1]), "m"(__p[2]), "m"(__p[3]) \
+ ); \
+ break; \
+ } \
+ \
+ case 8: \
+ { \
+ union { uint64_t x; u32 y[2]; } z; \
+ uint8_t a; \
+ asm(" ldub%I3 %M3,%0 \n" \
+ " ldub%I4.p %M4,%2 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%2,%0 \n" \
+ " ldub%I5.p %M5,%2 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%2,%0 \n" \
+ " ldub%I6.p %M6,%2 \n" \
+ " slli %0,#8,%0 \n" \
+ " or %0,%2,%0 \n" \
+ " ldub%I7 %M7,%1 \n" \
+ " ldub%I8.p %M8,%2 \n" \
+ " slli %1,#8,%1 \n" \
+ " or %1,%2,%1 \n" \
+ " ldub%I9.p %M9,%2 \n" \
+ " slli %1,#8,%1 \n" \
+ " or %1,%2,%1 \n" \
+ " ldub%I10.p %M10,%2 \n" \
+ " slli %1,#8,%1 \n" \
+ " or %1,%2,%1 \n" \
+ : "=&r"(z.y[0]), "=&r"(z.y[1]), "=&r"(a) \
+ : "m"(__p[0]), "m"(__p[1]), "m"(__p[2]), "m"(__p[3]), \
+ "m"(__p[4]), "m"(__p[5]), "m"(__p[6]), "m"(__p[7]) \
+ ); \
+ x = z.x; \
+ break; \
+ } \
+ \
+ default: \
+ x = 0; \
+ BUG(); \
+ break; \
+ } \
+ \
+ x; \
+})
+
+#define put_unaligned(val, ptr) \
+do { \
+ char *__p = (char *) (ptr); \
+ int x; \
+ \
+ switch (sizeof(*ptr)) { \
+ case 2: \
+ { \
+ asm(" stb%I1.p %0,%M1 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I2 %0,%M2 \n" \
+ : "=r"(x), "=m"(__p[1]), "=m"(__p[0]) \
+ : "0"(val) \
+ ); \
+ break; \
+ } \
+ \
+ case 4: \
+ { \
+ asm(" stb%I1.p %0,%M1 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I2.p %0,%M2 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I3.p %0,%M3 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I4 %0,%M4 \n" \
+ : "=r"(x), "=m"(__p[3]), "=m"(__p[2]), "=m"(__p[1]), "=m"(__p[0]) \
+ : "0"(val) \
+ ); \
+ break; \
+ } \
+ \
+ case 8: \
+ { \
+ uint32_t __high, __low; \
+ __high = (uint64_t)val >> 32; \
+ __low = val & 0xffffffff; \
+ asm(" stb%I2.p %0,%M2 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I3.p %0,%M3 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I4.p %0,%M4 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I5.p %0,%M5 \n" \
+ " srli %0,#8,%0 \n" \
+ " stb%I6.p %1,%M6 \n" \
+ " srli %1,#8,%1 \n" \
+ " stb%I7.p %1,%M7 \n" \
+ " srli %1,#8,%1 \n" \
+ " stb%I8.p %1,%M8 \n" \
+ " srli %1,#8,%1 \n" \
+ " stb%I9 %1,%M9 \n" \
+ : "=&r"(__low), "=&r"(__high), "=m"(__p[7]), "=m"(__p[6]), \
+ "=m"(__p[5]), "=m"(__p[4]), "=m"(__p[3]), "=m"(__p[2]), \
+ "=m"(__p[1]), "=m"(__p[0]) \
+ : "0"(__low), "1"(__high) \
+ ); \
+ break; \
+ } \
+ \
+ default: \
+ *(ptr) = (val); \
+ break; \
+ } \
+} while(0)
+
+#endif
+
+#endif
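
get_unaligned() and put_unaligned() are the portable way to access multi-byte values at addresses that may not be naturally aligned, such as fields packed into a protocol buffer; on the MMU build above they go through memcpy(), on uClinux through explicit byte loads and stores. A small sketch, with a hypothetical buffer layout and helper name (demo_bump_len):

#include <linux/types.h>
#include <asm/unaligned.h>

/* Read a 32-bit length field that sits at byte offset 1 of a buffer and
 * rewrite it in place, without relying on aligned loads or stores. */
static u32 demo_bump_len(u8 *pkt)
{
	u32 *lenp = (u32 *)(pkt + 1);	/* deliberately misaligned */
	u32 len = get_unaligned(lenp);

	put_unaligned(len + 4, lenp);
	return len;
}
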
--- /dev/null
+#ifndef _ASM_UNISTD_H_
+#define _ASM_UNISTD_H_
+
+/*
+ * This file contains the system call numbers.
+ */
+
+#define __NR_restart_syscall 0
+#define __NR_exit 1
+#define __NR_fork 2
+#define __NR_read 3
+#define __NR_write 4
+#define __NR_open 5
+#define __NR_close 6
+#define __NR_waitpid 7
+#define __NR_creat 8
+#define __NR_link 9
+#define __NR_unlink 10
+#define __NR_execve 11
+#define __NR_chdir 12
+#define __NR_time 13
+#define __NR_mknod 14
+#define __NR_chmod 15
+#define __NR_lchown 16
+#define __NR_break 17
+#define __NR_oldstat 18
+#define __NR_lseek 19
+#define __NR_getpid 20
+#define __NR_mount 21
+#define __NR_umount 22
+#define __NR_setuid 23
+#define __NR_getuid 24
+#define __NR_stime 25
+#define __NR_ptrace 26
+#define __NR_alarm 27
+#define __NR_oldfstat 28
+#define __NR_pause 29
+#define __NR_utime 30
+#define __NR_stty 31
+#define __NR_gtty 32
+#define __NR_access 33
+#define __NR_nice 34
+#define __NR_ftime 35
+#define __NR_sync 36
+#define __NR_kill 37
+#define __NR_rename 38
+#define __NR_mkdir 39
+#define __NR_rmdir 40
+#define __NR_dup 41
+#define __NR_pipe 42
+#define __NR_times 43
+#define __NR_prof 44
+#define __NR_brk 45
+#define __NR_setgid 46
+#define __NR_getgid 47
+#define __NR_signal 48
+#define __NR_geteuid 49
+#define __NR_getegid 50
+#define __NR_acct 51
+#define __NR_umount2 52
+#define __NR_lock 53
+#define __NR_ioctl 54
+#define __NR_fcntl 55
+#define __NR_mpx 56
+#define __NR_setpgid 57
+#define __NR_ulimit 58
+// #define __NR_oldolduname /* 59 */ obsolete
+#define __NR_umask 60
+#define __NR_chroot 61
+#define __NR_ustat 62
+#define __NR_dup2 63
+#define __NR_getppid 64
+#define __NR_getpgrp 65
+#define __NR_setsid 66
+#define __NR_sigaction 67
+#define __NR_sgetmask 68
+#define __NR_ssetmask 69
+#define __NR_setreuid 70
+#define __NR_setregid 71
+#define __NR_sigsuspend 72
+#define __NR_sigpending 73
+#define __NR_sethostname 74
+#define __NR_setrlimit 75
+#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
+#define __NR_getrusage 77
+#define __NR_gettimeofday 78
+#define __NR_settimeofday 79
+#define __NR_getgroups 80
+#define __NR_setgroups 81
+#define __NR_select 82
+#define __NR_symlink 83
+#define __NR_oldlstat 84
+#define __NR_readlink 85
+#define __NR_uselib 86
+#define __NR_swapon 87
+#define __NR_reboot 88
+#define __NR_readdir 89
+// #define __NR_mmap 90 /* obsolete - not implemented */
+#define __NR_munmap 91
+#define __NR_truncate 92
+#define __NR_ftruncate 93
+#define __NR_fchmod 94
+#define __NR_fchown 95
+#define __NR_getpriority 96
+#define __NR_setpriority 97
+// #define __NR_profil /* 98 */ obsolete
+#define __NR_statfs 99
+#define __NR_fstatfs 100
+// #define __NR_ioperm /* 101 */ not supported
+#define __NR_socketcall 102
+#define __NR_syslog 103
+#define __NR_setitimer 104
+#define __NR_getitimer 105
+#define __NR_stat 106
+#define __NR_lstat 107
+#define __NR_fstat 108
+// #define __NR_olduname /* 109 */ obsolete
+// #define __NR_iopl /* 110 */ not supported
+#define __NR_vhangup 111
+// #define __NR_idle /* 112 */ Obsolete
+// #define __NR_vm86old /* 113 */ not supported
+#define __NR_wait4 114
+#define __NR_swapoff 115
+#define __NR_sysinfo 116
+#define __NR_ipc 117
+#define __NR_fsync 118
+#define __NR_sigreturn 119
+#define __NR_clone 120
+#define __NR_setdomainname 121
+#define __NR_uname 122
+// #define __NR_modify_ldt /* 123 */ not supported
+#define __NR_cacheflush 123
+#define __NR_adjtimex 124
+#define __NR_mprotect 125
+#define __NR_sigprocmask 126
+#define __NR_create_module 127
+#define __NR_init_module 128
+#define __NR_delete_module 129
+#define __NR_get_kernel_syms 130
+#define __NR_quotactl 131
+#define __NR_getpgid 132
+#define __NR_fchdir 133
+#define __NR_bdflush 134
+#define __NR_sysfs 135
+#define __NR_personality 136
+#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
+#define __NR_setfsuid 138
+#define __NR_setfsgid 139
+#define __NR__llseek 140
+#define __NR_getdents 141
+#define __NR__newselect 142
+#define __NR_flock 143
+#define __NR_msync 144
+#define __NR_readv 145
+#define __NR_writev 146
+#define __NR_getsid 147
+#define __NR_fdatasync 148
+#define __NR__sysctl 149
+#define __NR_mlock 150
+#define __NR_munlock 151
+#define __NR_mlockall 152
+#define __NR_munlockall 153
+#define __NR_sched_setparam 154
+#define __NR_sched_getparam 155
+#define __NR_sched_setscheduler 156
+#define __NR_sched_getscheduler 157
+#define __NR_sched_yield 158
+#define __NR_sched_get_priority_max 159
+#define __NR_sched_get_priority_min 160
+#define __NR_sched_rr_get_interval 161
+#define __NR_nanosleep 162
+#define __NR_mremap 163
+#define __NR_setresuid 164
+#define __NR_getresuid 165
+// #define __NR_vm86 /* 166 */ not supported
+#define __NR_query_module 167
+#define __NR_poll 168
+#define __NR_nfsservctl 169
+#define __NR_setresgid 170
+#define __NR_getresgid 171
+#define __NR_prctl 172
+#define __NR_rt_sigreturn 173
+#define __NR_rt_sigaction 174
+#define __NR_rt_sigprocmask 175
+#define __NR_rt_sigpending 176
+#define __NR_rt_sigtimedwait 177
+#define __NR_rt_sigqueueinfo 178
+#define __NR_rt_sigsuspend 179
+#define __NR_pread 180
+#define __NR_pwrite 181
+#define __NR_chown 182
+#define __NR_getcwd 183
+#define __NR_capget 184
+#define __NR_capset 185
+#define __NR_sigaltstack 186
+#define __NR_sendfile 187
+#define __NR_getpmsg 188 /* some people actually want streams */
+#define __NR_putpmsg 189 /* some people actually want streams */
+#define __NR_vfork 190
+#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
+#define __NR_mmap2 192
+#define __NR_truncate64 193
+#define __NR_ftruncate64 194
+#define __NR_stat64 195
+#define __NR_lstat64 196
+#define __NR_fstat64 197
+#define __NR_lchown32 198
+#define __NR_getuid32 199
+#define __NR_getgid32 200
+#define __NR_geteuid32 201
+#define __NR_getegid32 202
+#define __NR_setreuid32 203
+#define __NR_setregid32 204
+#define __NR_getgroups32 205
+#define __NR_setgroups32 206
+#define __NR_fchown32 207
+#define __NR_setresuid32 208
+#define __NR_getresuid32 209
+#define __NR_setresgid32 210
+#define __NR_getresgid32 211
+#define __NR_chown32 212
+#define __NR_setuid32 213
+#define __NR_setgid32 214
+#define __NR_setfsuid32 215
+#define __NR_setfsgid32 216
+#define __NR_pivot_root 217
+#define __NR_mincore 218
+#define __NR_madvise 219
+
+#define __NR_getdents64 220
+#define __NR_fcntl64 221
+#define __NR_security 223 /* syscall for security modules */
+#define __NR_gettid 224
+#define __NR_readahead 225
+#define __NR_setxattr 226
+#define __NR_lsetxattr 227
+#define __NR_fsetxattr 228
+#define __NR_getxattr 229
+#define __NR_lgetxattr 230
+#define __NR_fgetxattr 231
+#define __NR_listxattr 232
+#define __NR_llistxattr 233
+#define __NR_flistxattr 234
+#define __NR_removexattr 235
+#define __NR_lremovexattr 236
+#define __NR_fremovexattr 237
+#define __NR_tkill 238
+#define __NR_sendfile64 239
+#define __NR_futex 240
+#define __NR_sched_setaffinity 241
+#define __NR_sched_getaffinity 242
+#define __NR_set_thread_area 243
+#define __NR_get_thread_area 244
+#define __NR_io_setup 245
+#define __NR_io_destroy 246
+#define __NR_io_getevents 247
+#define __NR_io_submit 248
+#define __NR_io_cancel 249
+#define __NR_fadvise64 250
+
+#define __NR_exit_group 252
+#define __NR_lookup_dcookie 253
+#define __NR_epoll_create 254
+#define __NR_epoll_ctl 255
+#define __NR_epoll_wait 256
+#define __NR_remap_file_pages 257
+#define __NR_set_tid_address 258
+#define __NR_timer_create 259
+#define __NR_timer_settime (__NR_timer_create+1)
+#define __NR_timer_gettime (__NR_timer_create+2)
+#define __NR_timer_getoverrun (__NR_timer_create+3)
+#define __NR_timer_delete (__NR_timer_create+4)
+#define __NR_clock_settime (__NR_timer_create+5)
+#define __NR_clock_gettime (__NR_timer_create+6)
+#define __NR_clock_getres (__NR_timer_create+7)
+#define __NR_clock_nanosleep (__NR_timer_create+8)
+#define __NR_statfs64 268
+#define __NR_fstatfs64 269
+#define __NR_tgkill 270
+#define __NR_utimes 271
+#define __NR_fadvise64_64 272
+#define __NR_vserver 273
+#define __NR_mbind 274
+#define __NR_get_mempolicy 275
+#define __NR_set_mempolicy 276
+#define __NR_mq_open 277
+#define __NR_mq_unlink (__NR_mq_open+1)
+#define __NR_mq_timedsend (__NR_mq_open+2)
+#define __NR_mq_timedreceive (__NR_mq_open+3)
+#define __NR_mq_notify (__NR_mq_open+4)
+#define __NR_mq_getsetattr (__NR_mq_open+5)
+#define __NR_sys_kexec_load 283
+#define __NR_waitid 284
+/* #define __NR_sys_setaltroot 285 */
+#define __NR_add_key 286
+#define __NR_request_key 287
+#define __NR_keyctl 288
+#define __NR_vperfctr_open 289
+#define __NR_vperfctr_control (__NR_vperfctr_open+1)
+#define __NR_vperfctr_unlink (__NR_vperfctr_open+2)
+#define __NR_vperfctr_iresume (__NR_vperfctr_open+3)
+#define __NR_vperfctr_read (__NR_vperfctr_open+4)
+
+#define NR_syscalls 294
+
+/*
+ * process the return value of a syscall, consigning it to one of two possible fates
+ * - user-visible error numbers are in the range -1 to -4095: see <asm-frv/errno.h>
+ */
+#undef __syscall_return
+#define __syscall_return(type, res) \
+do { \
+ unsigned long __sr2 = (res); \
+ if (__builtin_expect(__sr2 >= (unsigned long)(-4095), 0)) { \
+ errno = (-__sr2); \
+ __sr2 = ULONG_MAX; \
+ } \
+ return (type) __sr2; \
+} while (0)
+
+/* XXX - _foo needs to be __foo, while __NR_bar could be _NR_bar. */
+
+#undef _syscall0
+#define _syscall0(type,name) \
+type name(void) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8"); \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "=r" (__sc0) \
+ : "r" (__scnum)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall1
+#define _syscall1(type,name,type1,arg1) \
+type name(type1 arg1) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall2
+#define _syscall2(type,name,type1,arg1,type2,arg2) \
+type name(type1 arg1,type2 arg2) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ register unsigned long __sc1 __asm__ ("gr9") = (unsigned long) arg2; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum), "r" (__sc1)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall3
+#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
+type name(type1 arg1,type2 arg2,type3 arg3) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ register unsigned long __sc1 __asm__ ("gr9") = (unsigned long) arg2; \
+ register unsigned long __sc2 __asm__ ("gr10") = (unsigned long) arg3; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum), "r" (__sc1), "r" (__sc2)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall4
+#define _syscall4(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ register unsigned long __sc1 __asm__ ("gr9") = (unsigned long) arg2; \
+ register unsigned long __sc2 __asm__ ("gr10") = (unsigned long) arg3; \
+ register unsigned long __sc3 __asm__ ("gr11") = (unsigned long) arg4; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum), "r" (__sc1), "r" (__sc2), "r" (__sc3)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall5
+#define _syscall5(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ register unsigned long __sc1 __asm__ ("gr9") = (unsigned long) arg2; \
+ register unsigned long __sc2 __asm__ ("gr10") = (unsigned long) arg3; \
+ register unsigned long __sc3 __asm__ ("gr11") = (unsigned long) arg4; \
+ register unsigned long __sc4 __asm__ ("gr12") = (unsigned long) arg5; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum), "r" (__sc1), "r" (__sc2), \
+ "r" (__sc3), "r" (__sc4)); \
+ __syscall_return(type, __sc0); \
+}
+
+#undef _syscall6
+#define _syscall6(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5, type6, arg6) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5, type6 arg6) \
+{ \
+ register unsigned long __scnum __asm__ ("gr7") = (__NR_##name); \
+ register unsigned long __sc0 __asm__ ("gr8") = (unsigned long) arg1; \
+ register unsigned long __sc1 __asm__ ("gr9") = (unsigned long) arg2; \
+ register unsigned long __sc2 __asm__ ("gr10") = (unsigned long) arg3; \
+ register unsigned long __sc3 __asm__ ("gr11") = (unsigned long) arg4; \
+ register unsigned long __sc4 __asm__ ("gr12") = (unsigned long) arg5; \
+ register unsigned long __sc5 __asm__ ("gr13") = (unsigned long) arg6; \
+ __asm__ __volatile__ ("tira gr0,#0" \
+ : "+r" (__sc0) \
+ : "r" (__scnum), "r" (__sc1), "r" (__sc2), \
+ "r" (__sc3), "r" (__sc4), "r" (__sc5)); \
+ __syscall_return(type, __sc0); \
+}
+
+
+#ifdef __KERNEL_SYSCALLS__
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <linux/linkage.h>
+#include <asm/ptrace.h>
+
+/*
+ * we need this inline - forking from kernel space will result
+ * in NO COPY ON WRITE (!!!), until an execve is executed. This
+ * is no problem, but for the stack. This is handled by not letting
+ * main() use the stack at all after fork(). Thus, no function
+ * calls - which means inline code for fork too, as otherwise we
+ * would use the stack upon exit from 'fork()'.
+ *
+ * Actually only pause and fork are needed inline, so that there
+ * won't be any messing with the stack from main(), but we define
+ * some others too.
+ */
+#define __NR__exit __NR_exit
+static inline _syscall0(int,pause)
+static inline _syscall0(int,sync)
+static inline _syscall0(pid_t,setsid)
+static inline _syscall3(int,write,int,fd,const char *,buf,off_t,count)
+static inline _syscall3(int,read,int,fd,char *,buf,off_t,count)
+static inline _syscall3(off_t,lseek,int,fd,off_t,offset,int,count)
+static inline _syscall1(int,dup,int,fd)
+static inline _syscall3(int,execve,const char *,file,char **,argv,char **,envp)
+static inline _syscall3(int,open,const char *,file,int,flag,int,mode)
+static inline _syscall1(int,close,int,fd)
+static inline _syscall1(int,_exit,int,exitcode)
+static inline _syscall3(pid_t,waitpid,pid_t,pid,int *,wait_stat,int,options)
+static inline _syscall1(int,delete_module,const char *,name)
+
+static inline pid_t wait(int * wait_stat)
+{
+ return waitpid(-1,wait_stat,0);
+}
+
+#endif
+
+#ifdef __KERNEL__
+#define __ARCH_WANT_IPC_PARSE_VERSION
+/* #define __ARCH_WANT_OLD_READDIR */
+#define __ARCH_WANT_OLD_STAT
+#define __ARCH_WANT_STAT64
+#define __ARCH_WANT_SYS_ALARM
+/* #define __ARCH_WANT_SYS_GETHOSTNAME */
+#define __ARCH_WANT_SYS_PAUSE
+/* #define __ARCH_WANT_SYS_SGETMASK */
+/* #define __ARCH_WANT_SYS_SIGNAL */
+#define __ARCH_WANT_SYS_TIME
+#define __ARCH_WANT_SYS_UTIME
+#define __ARCH_WANT_SYS_WAITPID
+#define __ARCH_WANT_SYS_SOCKETCALL
+#define __ARCH_WANT_SYS_FADVISE64
+#define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
+#define __ARCH_WANT_SYS_NICE
+/* #define __ARCH_WANT_SYS_OLD_GETRLIMIT */
+#define __ARCH_WANT_SYS_OLDUMOUNT
+/* #define __ARCH_WANT_SYS_SIGPENDING */
+#define __ARCH_WANT_SYS_SIGPROCMASK
+#define __ARCH_WANT_SYS_RT_SIGACTION
+#endif
+
+/*
+ * "Conditional" syscalls
+ *
+ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
+ * but it doesn't work on all toolchains, so we just do it by hand
+ */
+#ifndef cond_syscall
+#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall");
+#endif
+
+#endif /* _ASM_UNISTD_H_ */
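
The _syscallN macros above generate user-space wrappers: the syscall number goes into gr7, arguments into gr8 onwards, "tira gr0,#0" traps into the kernel, and __syscall_return() maps a result in the -1..-4095 range onto errno. A hedged, user-space flavoured sketch of that last step (demo_syscall_return is a made-up name, not part of the header):

#include <errno.h>

/* Illustrative only: the error convention that __syscall_return() encodes.
 * The kernel returns either a valid result or a negated errno value in
 * the -1..-4095 range; the wrapper folds the latter into errno and
 * returns -1 to the caller. */
static long demo_syscall_return(unsigned long res)
{
	if (res >= (unsigned long)-4095) {
		errno = -(long)res;
		return -1;
	}
	return (long)res;
}
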
--- /dev/null
+/* virtconvert.h: virtual/physical/page address conversion
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#ifndef _ASM_VIRTCONVERT_H
+#define _ASM_VIRTCONVERT_H
+
+/*
+ * Macros used for converting between virtual and physical mappings.
+ */
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <asm/setup.h>
+
+#ifdef CONFIG_MMU
+
+#define phys_to_virt(vaddr) ((void *) ((unsigned long)(vaddr) + PAGE_OFFSET))
+#define virt_to_phys(vaddr) ((unsigned long) (vaddr) - PAGE_OFFSET)
+
+#else
+
+#define phys_to_virt(vaddr) ((void *) (vaddr))
+#define virt_to_phys(vaddr) ((unsigned long) (vaddr))
+
+#endif
+
+#define virt_to_bus virt_to_phys
+#define bus_to_virt phys_to_virt
+
+#define __page_address(page) (PAGE_OFFSET + (((page) - mem_map) << PAGE_SHIFT))
+#define page_to_phys(page) virt_to_phys((void *)__page_address(page))
+
+#endif
+#endif
--- /dev/null
+#ifndef _4LEVEL_FIXUP_H
+#define _4LEVEL_FIXUP_H
+
+#define __ARCH_HAS_4LEVEL_HACK
+
+#define PUD_SIZE PGDIR_SIZE
+#define PUD_MASK PGDIR_MASK
+#define PTRS_PER_PUD 1
+
+#define pud_t pgd_t
+
+#define pmd_alloc(mm, pud, address) \
+({ pmd_t *ret; \
+ if (pgd_none(*pud)) \
+ ret = __pmd_alloc(mm, pud, address); \
+ else \
+ ret = pmd_offset(pud, address); \
+ ret; \
+})
+
+#define pud_alloc(mm, pgd, address) (pgd)
+#define pud_offset(pgd, start) (pgd)
+#define pud_none(pud) 0
+#define pud_bad(pud) 0
+#define pud_present(pud) 1
+#define pud_ERROR(pud) do { } while (0)
+#define pud_clear(pud) pgd_clear(pud)
+
+#undef pud_free_tlb
+#define pud_free_tlb(tlb, x) do { } while (0)
+#define pud_free(x) do { } while (0)
+#define __pud_free_tlb(tlb, x) do { } while (0)
+
+#endif
--- /dev/null
+#ifndef _ASM_GENERIC_CPUTIME_H
+#define _ASM_GENERIC_CPUTIME_H
+
+#include <linux/time.h>
+#include <linux/jiffies.h>
+
+typedef unsigned long cputime_t;
+
+#define cputime_zero (0UL)
+#define cputime_max ((~0UL >> 1) - 1)
+#define cputime_add(__a, __b) ((__a) + (__b))
+#define cputime_sub(__a, __b) ((__a) - (__b))
+#define cputime_eq(__a, __b) ((__a) == (__b))
+#define cputime_gt(__a, __b) ((__a) > (__b))
+#define cputime_ge(__a, __b) ((__a) >= (__b))
+#define cputime_lt(__a, __b) ((__a) < (__b))
+#define cputime_le(__a, __b) ((__a) <= (__b))
+#define cputime_to_jiffies(__ct) (__ct)
+#define jiffies_to_cputime(__hz) (__hz)
+
+typedef u64 cputime64_t;
+
+#define cputime64_zero (0ULL)
+#define cputime64_add(__a, __b) ((__a) + (__b))
+#define cputime64_to_jiffies64(__ct) (__ct)
+#define cputime_to_cputime64(__ct) ((u64) __ct)
+
+
+/*
+ * Convert cputime to milliseconds and back.
+ */
+#define cputime_to_msecs(__ct) jiffies_to_msecs(__ct)
+#define msecs_to_cputime(__msecs) msecs_to_jiffies(__msecs)
+
+/*
+ * Convert cputime to seconds and back.
+ */
+#define cputime_to_secs(jif) ((jif) / HZ)
+#define secs_to_cputime(sec) ((sec) * HZ)
+
+/*
+ * Convert cputime to timespec and back.
+ */
+#define timespec_to_cputime(__val) timespec_to_jiffies(__val)
+#define cputime_to_timespec(__ct,__val) jiffies_to_timespec(__ct,__val)
+
+/*
+ * Convert cputime to timeval and back.
+ */
+#define timeval_to_cputime(__val) timeval_to_jiffies(__val)
+#define cputime_to_timeval(__ct,__val) jiffies_to_timeval(__ct,__val)
+
+/*
+ * Convert cputime to clock and back.
+ */
+#define cputime_to_clock_t(__ct) jiffies_to_clock_t(__ct)
+#define clock_t_to_cputime(__x) clock_t_to_jiffies(__x)
+
+/*
+ * Convert cputime64 to clock.
+ */
+#define cputime64_to_clock_t(__ct) jiffies_64_to_clock_t(__ct)
+
+#endif
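
With this generic implementation cputime_t is just jiffies, so every conversion helper above collapses onto the corresponding jiffies helper. A small sketch of typical use when reporting accumulated task times in milliseconds; demo_task_msecs is a hypothetical helper:

#include <linux/jiffies.h>
#include <asm/cputime.h>

/* Sum user and system time for a task and report it in milliseconds. */
static unsigned int demo_task_msecs(cputime_t utime, cputime_t stime)
{
	cputime_t total = cputime_add(utime, stime);

	return cputime_to_msecs(total);
}
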
--- /dev/null
+#ifndef _PGTABLE_NOPMD_H
+#define _PGTABLE_NOPMD_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm-generic/pgtable-nopud.h>
+
+/*
+ * Having the pmd type consist of a pud gets the size right, and allows
+ * us to conceptually access the pud entry that this pmd is folded into
+ * without casting.
+ */
+typedef struct { pud_t pud; } pmd_t;
+
+#define PMD_SHIFT PUD_SHIFT
+#define PTRS_PER_PMD 1
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/*
+ * The "pud_xxx()" functions here are trivial for a folded two-level
+ * setup: the pmd is never bad, and a pmd always exists (as it's folded
+ * into the pud entry)
+ */
+static inline int pud_none(pud_t pud) { return 0; }
+static inline int pud_bad(pud_t pud) { return 0; }
+static inline int pud_present(pud_t pud) { return 1; }
+static inline void pud_clear(pud_t *pud) { }
+#define pmd_ERROR(pmd) (pud_ERROR((pmd).pud))
+
+#define pud_populate(mm, pmd, pte) do { } while (0)
+
+/*
+ * (pmds are folded into puds so this doesn't actually get called,
+ * but the define is needed for a generic inline function.)
+ */
+#define set_pud(pudptr, pudval) set_pmd((pmd_t *)(pudptr), (pmd_t) { pudval })
+
+static inline pmd_t * pmd_offset(pud_t * pud, unsigned long address)
+{
+ return (pmd_t *)pud;
+}
+
+#define pmd_val(x) (pud_val((x).pud))
+#define __pmd(x) ((pmd_t) { __pud(x) } )
+
+#define pud_page(pud) (pmd_page((pmd_t){ pud }))
+#define pud_page_kernel(pud) (pmd_page_kernel((pmd_t){ pud }))
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pud, so has no extra memory associated with it.
+ */
+#define pmd_alloc_one(mm, address) NULL
+#define pmd_free(x) do { } while (0)
+#define __pmd_free_tlb(tlb, x) do { } while (0)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _PGTABLE_NOPMD_H */
--- /dev/null
+#ifndef _PGTABLE_NOPUD_H
+#define _PGTABLE_NOPUD_H
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Having the pud type consist of a pgd gets the size right, and allows
+ * us to conceptually access the pgd entry that this pud is folded into
+ * without casting.
+ */
+typedef struct { pgd_t pgd; } pud_t;
+
+#define PUD_SHIFT PGDIR_SHIFT
+#define PTRS_PER_PUD 1
+#define PUD_SIZE (1UL << PUD_SHIFT)
+#define PUD_MASK (~(PUD_SIZE-1))
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pud is never bad, and a pud always exists (as it's folded
+ * into the pgd entry)
+ */
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+static inline int pgd_present(pgd_t pgd) { return 1; }
+static inline void pgd_clear(pgd_t *pgd) { }
+#define pud_ERROR(pud) (pgd_ERROR((pud).pgd))
+
+#define pgd_populate(mm, pgd, pud) do { } while (0)
+/*
+ * (puds are folded into pgds so this doesn't actually get called,
+ * but the define is needed for a generic inline function.)
+ */
+#define set_pgd(pgdptr, pgdval) set_pud((pud_t *)(pgdptr), (pud_t) { pgdval })
+
+static inline pud_t * pud_offset(pgd_t * pgd, unsigned long address)
+{
+ return (pud_t *)pgd;
+}
+
+#define pud_val(x) (pgd_val((x).pgd))
+#define __pud(x) ((pud_t) { __pgd(x) } )
+
+#define pgd_page(pgd) (pud_page((pud_t){ pgd }))
+#define pgd_page_kernel(pgd) (pud_page_kernel((pud_t){ pgd }))
+
+/*
+ * allocating and freeing a pud is trivial: the 1-entry pud is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+#define pud_alloc_one(mm, address) NULL
+#define pud_free(x) do { } while (0)
+#define __pud_free_tlb(tlb, x) do { } while (0)
+
+#endif /* __ASSEMBLY__ */
+#endif /* _PGTABLE_NOPUD_H */
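
Together, the nopmd and nopud headers let generic code always walk pgd -> pud -> pmd -> pte; on architectures with fewer levels the intermediate steps fold away at compile time. A hedged sketch of that canonical walk (demo_walk is a made-up helper; locking and huge-page handling are omitted):

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Walk the page tables for one address; returns the pte pointer or NULL
 * if any level is missing. On folded configurations pud_offset() and
 * pmd_offset() simply reinterpret the entry from the level above. */
static pte_t *demo_walk(struct mm_struct *mm, unsigned long address)
{
	pgd_t *pgd = pgd_offset(mm, address);
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd) || pgd_bad(*pgd))
		return NULL;
	pud = pud_offset(pgd, address);
	if (pud_none(*pud) || pud_bad(*pud))
		return NULL;
	pmd = pmd_offset(pud, address);
	if (pmd_none(*pmd) || pmd_bad(*pmd))
		return NULL;
	return pte_offset_map(pmd, address);
}
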
extern char __bss_start[], __bss_stop[];
extern char __init_begin[], __init_end[];
extern char _sinittext[], _einittext[];
+extern char _end[];
#endif /* _ASM_GENERIC_SECTIONS_H_ */
--- /dev/null
+/* termios.h: generic termios/termio user copying/translation
+ */
+
+#ifndef _ASM_GENERIC_TERMIOS_H
+#define _ASM_GENERIC_TERMIOS_H
+
+#include <asm/uaccess.h>
+
+#ifndef __ARCH_TERMIO_GETPUT
+
+/*
+ * Translate a "termio" structure into a "termios". Ugh.
+ */
+static inline int user_termio_to_kernel_termios(struct termios *termios,
+ struct termio __user *termio)
+{
+ unsigned short tmp;
+
+ if (get_user(tmp, &termio->c_iflag) < 0)
+ goto fault;
+ termios->c_iflag = (0xffff0000 & termios->c_iflag) | tmp;
+
+ if (get_user(tmp, &termio->c_oflag) < 0)
+ goto fault;
+ termios->c_oflag = (0xffff0000 & termios->c_oflag) | tmp;
+
+ if (get_user(tmp, &termio->c_cflag) < 0)
+ goto fault;
+ termios->c_cflag = (0xffff0000 & termios->c_cflag) | tmp;
+
+ if (get_user(tmp, &termio->c_lflag) < 0)
+ goto fault;
+ termios->c_lflag = (0xffff0000 & termios->c_lflag) | tmp;
+
+ if (get_user(termios->c_line, &termio->c_line) < 0)
+ goto fault;
+
+ if (copy_from_user(termios->c_cc, termio->c_cc, NCC) != 0)
+ goto fault;
+
+ return 0;
+
+ fault:
+ return -EFAULT;
+}
+
+/*
+ * Translate a "termios" structure into a "termio". Ugh.
+ */
+static inline int kernel_termios_to_user_termio(struct termio __user *termio,
+ struct termios *termios)
+{
+ if (put_user(termios->c_iflag, &termio->c_iflag) < 0 ||
+ put_user(termios->c_oflag, &termio->c_oflag) < 0 ||
+ put_user(termios->c_cflag, &termio->c_cflag) < 0 ||
+ put_user(termios->c_lflag, &termio->c_lflag) < 0 ||
+ put_user(termios->c_line, &termio->c_line) < 0 ||
+ copy_to_user(termio->c_cc, termios->c_cc, NCC) != 0)
+ return -EFAULT;
+
+ return 0;
+}
+
+#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
+#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
+
+#endif /* __ARCH_TERMIO_GETPUT */
+
+#endif /* _ASM_GENERIC_TERMIOS_H */
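
These helpers back the legacy TCGETA/TCSETA ioctl path: user space supplies a struct termio, the kernel keeps a struct termios, and only the low 16 bits of each flag word are exchanged. A minimal sketch of the TCSETA direction, assuming a hypothetical caller (demo_tcseta) that already holds the kernel-side termios:

#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/termios.h>

/* Fold a legacy user-space "struct termio" into the kernel's termios,
 * preserving the upper 16 bits of each flag word. */
static int demo_tcseta(struct termios *kern, struct termio __user *uarg)
{
	if (user_termio_to_kernel_termios(kern, uarg))
		return -EFAULT;

	/* the driver's set_termios() hook would be invoked here */
	return 0;
}
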
/* Copyright (C) 2002, David McCullough <davidm@snapgear.com> */
-struct mm_rblock_struct {
- int size;
- int refcount;
- void *kblock;
-};
-
-struct mm_tblock_struct {
- struct mm_rblock_struct *rblock;
- struct mm_tblock_struct *next;
-};
-
typedef struct {
- struct mm_tblock_struct tblock;
+ struct vm_list_struct *vmlist;
unsigned long end_brk;
} mm_context_t;
* undefined" opcode for parsing in the trap handler.
*/
-#if 1 /* Set to zero for a slightly smaller kernel */
+#ifdef CONFIG_DEBUG_BUGVERBOSE
#define BUG() \
__asm__ __volatile__( "ud2\n" \
"\t.word %c0\n" \
? (MAX_STACK_SIZE) \
: (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR)))
+#define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)pentry
+
/* Architecture specific copy of original instruction*/
struct arch_specific_insn {
/* copy of the original instruction */
*/
#define MCA_NUMADAPTERS (MCA_MAX_SLOT_NR+3)
-/* lock to protect access to the MCA registers */
-extern spinlock_t mca_lock;
-
#endif
* the i386 is two-level, so we don't really have any
* PMD directory physically.
*/
-#define PMD_SHIFT 22
-#define PTRS_PER_PMD 1
#define PTRS_PER_PTE 1024
#define PTRACE_SET_THREAD_AREA 26
#ifdef __KERNEL__
+struct task_struct;
+extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs, int error_code);
#define user_mode(regs) ((VM_MASK & (regs)->eflags) || (3 & (regs)->xcs))
#define instruction_pointer(regs) ((regs)->eip)
#if defined(CONFIG_SMP) && defined(CONFIG_FRAME_POINTER)
return node_to_cpumask(mp_bus_id_to_node[bus]);
}
-/* Node-to-Node distance */
-#define node_distance(from, to) ((from) != (to))
-
/* sched_domains SD_NODE_INIT for NUMAQ machines */
#define SD_NODE_INIT (struct sched_domain) { \
.span = CPU_MASK_NONE, \
.max_interval = 32, \
.busy_factor = 32, \
.imbalance_pct = 125, \
- .cache_hot_time = (10*1000), \
+ .cache_hot_time = (10*1000000), \
.cache_nice_tries = 1, \
.per_cpu_gain = 100, \
.flags = SD_LOAD_BALANCE \
| SD_BALANCE_EXEC \
+ | SD_BALANCE_NEWIDLE \
+ | SD_WAKE_IDLE \
| SD_WAKE_BALANCE, \
.last_balance = jiffies, \
.balance_interval = 1, \
#endif /* __KERNEL__ */
-extern int __find_next_zero_bit (void *addr, unsigned long size,
+extern int __find_next_zero_bit (const void *addr, unsigned long size,
unsigned long offset);
extern int __find_next_bit(const void *addr, unsigned long size,
unsigned long offset);
#ifdef __KERNEL__
#define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX)
-#define elf_read_implies_exec(ex, have_pt_gnu_stack) \
- (!(have_pt_gnu_stack) && ((ex).e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) != 0)
+#define elf_read_implies_exec(ex, executable_stack) \
+ ((executable_stack!=EXSTACK_DISABLE_X) && ((ex).e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) != 0)
struct task_struct;
#define __ARCH_IRQ_STAT 1
-#define softirq_pending(cpu) (cpu_data(cpu)->softirq_pending)
#define local_softirq_pending() (local_cpu_data->softirq_pending)
#define HARDIRQ_BITS 14
extern void __iomem *ipi_base_addr;
+void ack_bad_irq(unsigned int irq);
+
#endif /* _ASM_IA64_HARDIRQ_H */
*/
#define IA64_FIRST_DEVICE_VECTOR 0x30
#define IA64_LAST_DEVICE_VECTOR 0xe7
+#define IA64_NUM_DEVICE_VECTORS (IA64_LAST_DEVICE_VECTOR - IA64_FIRST_DEVICE_VECTOR + 1)
#define IA64_MCA_RENDEZ_VECTOR 0xe8 /* MCA rendez interrupt */
#define IA64_PERFMON_VECTOR 0xee /* performance monitor interrupt vector */
extern struct hw_interrupt_type irq_type_ia64_lsapic; /* CPU-internal interrupt controller */
extern int assign_irq_vector (int irq); /* allocate a free vector */
+extern void free_irq_vector (int vector);
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
extern void register_percpu_irq (ia64_vector vec, struct irqaction *action);
* Default implementations for the irq-descriptor API:
*/
-extern irq_desc_t _irq_desc[NR_IRQS];
+extern irq_desc_t irq_desc[NR_IRQS];
#ifndef CONFIG_IA64_GENERIC
-static inline irq_desc_t *
-__ia64_irq_desc (unsigned int irq)
-{
- return _irq_desc + irq;
-}
-
-static inline ia64_vector
-__ia64_irq_to_vector (unsigned int irq)
-{
- return (ia64_vector) irq;
-}
-
static inline unsigned int
__ia64_local_vector_to_irq (ia64_vector vec)
{
static inline irq_desc_t *
irq_descp (int irq)
{
- return platform_irq_desc(irq);
+ return irq_desc + irq;
}
/* Extract the IA-64 vector that corresponds to IRQ. */
static inline ia64_vector
irq_to_vector (int irq)
{
- return platform_irq_to_vector(irq);
+ return (ia64_vector) irq;
}
/*
/*
* String version of IO memory access ops:
*/
-extern void __ia64_memcpy_fromio (void *, volatile void __iomem *, long);
-extern void __ia64_memcpy_toio (volatile void __iomem *, void *, long);
-extern void __ia64_memset_c_io (volatile void __iomem *, unsigned long, long);
-
-#define memcpy_fromio(to,from,len) __ia64_memcpy_fromio((to), (from),(len))
-#define memcpy_toio(to,from,len) __ia64_memcpy_toio((to),(from),(len))
-#define memset_io(addr,c,len) __ia64_memset_c_io((addr), 0x0101010101010101UL*(u8)(c), \
- (len))
+extern void memcpy_fromio(void *dst, const volatile void __iomem *src, long n);
+extern void memcpy_toio(volatile void __iomem *dst, const void *src, long n);
+extern void memset_io(volatile void __iomem *s, int c, long n);
#define dma_cache_inv(_start,_size) do { } while (0)
#define dma_cache_wback(_start,_size) do { } while (0)
*/
#define IA64_KR_IO_BASE 0 /* ar.k0: legacy I/O base address */
#define IA64_KR_TSSD 1 /* ar.k1: IVE uses this as the TSSD */
+#define IA64_KR_PER_CPU_DATA 3 /* ar.k3: physical per-CPU base */
#define IA64_KR_CURRENT_STACK 4 /* ar.k4: what's mapped in IA64_TR_CURRENT_STACK */
#define IA64_KR_FPU_OWNER 5 /* ar.k5: fpu-owner (UP only, at the moment) */
#define IA64_KR_CURRENT 6 /* ar.k6: "current" task pointer */
extern ia64_mv_send_ipi_t ia64_send_ipi;
extern ia64_mv_global_tlb_purge_t ia64_global_tlb_purge;
-extern ia64_mv_irq_desc __ia64_irq_desc;
-extern ia64_mv_irq_to_vector __ia64_irq_to_vector;
extern ia64_mv_local_vector_to_irq __ia64_local_vector_to_irq;
+extern ia64_mv_pci_get_legacy_mem_t ia64_pci_get_legacy_mem;
+extern ia64_mv_pci_legacy_read_t ia64_pci_legacy_read;
+extern ia64_mv_pci_legacy_write_t ia64_pci_legacy_write;
extern ia64_mv_inb_t __ia64_inb;
extern ia64_mv_inw_t __ia64_inw;
* Copyright (C) 1999, 2004 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander (vijay@engr.sgi.com)
* Copyright (C) Srinivasa Thirumalachar (sprasad@engr.sgi.com)
+ * Copyright (C) Russ Anderson (rja@sgi.com)
*/
#ifndef _ASM_IA64_MCA_H
#define _ASM_IA64_MCA_H
+#define IA64_MCA_STACK_SIZE 8192
+
#if !defined(__ASSEMBLY__)
#include <linux/interrupt.h>
IA64_MCA_RENDEZ_CHECKIN_DONE = 0x1
};
-/* the following data structure is used for TLB error recovery purposes */
-extern struct ia64_mca_tlb_info {
- u64 cr_lid;
- u64 percpu_paddr;
- u64 ptce_base;
- u32 ptce_count[2];
- u32 ptce_stride[2];
- u64 pal_paddr;
- u64 pal_base;
-} ia64_mca_tlb_list[NR_CPUS];
-
/* Information maintained by the MC infrastructure */
typedef struct ia64_mc_info_s {
u64 imi_mca_handler;
*/
} ia64_mca_os_to_sal_state_t;
+/* Per-CPU MCA state that is too big for normal per-CPU variables. */
+
+struct ia64_mca_cpu {
+ u64 stack[IA64_MCA_STACK_SIZE/8]; /* MCA memory-stack */
+ u64 proc_state_dump[512];
+ u64 stackframe[32];
+ u64 rbstore[IA64_MCA_STACK_SIZE/8]; /* MCA reg.-backing store */
+ u64 init_stack[KERNEL_STACK_SIZE/8];
+} __attribute__ ((aligned(16)));
+
+/* Array of physical addresses of each CPU's MCA area. */
+extern unsigned long __per_cpu_mca[NR_CPUS];
+
extern void ia64_mca_init(void);
+extern void ia64_mca_cpu_init(void *);
extern void ia64_os_mca_dispatch(void);
extern void ia64_os_mca_dispatch_end(void);
extern void ia64_mca_ucmc_handler(void);
mov temp = 0x7 ;; \
dep addr = temp, addr, 61, 3
+#define GET_THIS_PADDR(reg, var) \
+ mov reg = IA64_KR(PER_CPU_DATA);; \
+ addl reg = THIS_CPU(var), reg
+
/*
* This macro jumps to the instruction at the given virtual address
* and starts execution in physical mode with all the address
#ifndef _ASM_IA64_MMZONE_H
#define _ASM_IA64_MMZONE_H
-#include <linux/config.h>
+#include <linux/numa.h>
#include <asm/page.h>
#include <asm/meminit.h>
#ifdef CONFIG_IA64_DIG /* DIG systems are small */
# define MAX_PHYSNODE_ID 8
-# define NR_NODES 8
-# define NR_NODE_MEMBLKS (NR_NODES * 8)
+# define NR_NODE_MEMBLKS (MAX_NUMNODES * 8)
#else /* sn2 is the biggest case, so we use that if !DIG */
# define MAX_PHYSNODE_ID 2048
-# define NR_NODES 256
-# define NR_NODE_MEMBLKS (NR_NODES * 4)
+# define NR_NODE_MEMBLKS (MAX_NUMNODES * 4)
#endif
#else /* CONFIG_DISCONTIGMEM */
-# define NR_NODE_MEMBLKS 4
+# define NR_NODE_MEMBLKS (MAX_NUMNODES * 4)
#endif /* CONFIG_DISCONTIGMEM */
+
#endif /* _ASM_IA64_MMZONE_H */
static inline void set_intr_gate (int nr, void *func) {}
#define IO_APIC_VECTOR(irq) (irq)
#define ack_APIC_irq ia64_eoi
-#define irq_desc _irq_desc
#define cpu_mask_to_apicid(mask) cpu_physical_id(first_cpu(mask))
#define MSI_DEST_MODE MSI_PHYSICAL_MODE
#define MSI_TARGET_CPU ((ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff)
struct ia64_node_data {
short active_cpu_count;
short node;
- struct pglist_data *pg_data_ptrs[NR_NODES];
+ struct pglist_data *pg_data_ptrs[MAX_NUMNODES];
};
*/
extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
-#define node_distance(from,to) (numa_slit[(from) * numnodes + (to)])
+#define node_distance(from,to) (numa_slit[(from) * num_online_nodes() + (to)])
extern int paddr_to_nid(unsigned long paddr);
#ifdef CONFIG_IA64_DIG
/* Max 8 Nodes */
#define NODES_SHIFT 3
-#elif defined(CONFIG_IA64_HP_ZX1)
+#elif defined(CONFIG_IA64_HP_ZX1) || defined(CONFIG_IA64_HP_ZX1_SWIOTLB)
/* Max 32 Nodes */
#define NODES_SHIFT 5
#elif defined(CONFIG_IA64_SGI_SN2) || defined(CONFIG_IA64_GENERIC)
efi_guid_t platform_guid; /* Unique OEM Platform ID */
} sal_log_record_header_t;
+#define sal_log_severity_recoverable 0
+#define sal_log_severity_fatal 1
+#define sal_log_severity_corrected 2
+
/* Definition of log section header structures */
typedef struct sal_log_sec_header {
efi_guid_t guid; /* Unique Section ID */
extern int ia64_sal_oemcall_reentrant(struct ia64_sal_retval *, u64, u64, u64,
u64, u64, u64, u64, u64);
+extern void ia64_sal_handler_init(void *entry_point, void *gpval);
+
#endif /* __ASSEMBLY__ */
#endif /* _ASM_IA64_SAL_H */
extern char __start_gate_fsyscall_patchlist[], __end_gate_fsyscall_patchlist[];
extern char __start_gate_brl_fsys_bubble_down_patchlist[], __end_gate_brl_fsys_bubble_down_patchlist[];
extern char __start_unwind[], __end_unwind[];
-extern char _end[]; /* end of kernel image */
#endif /* _ASM_IA64_SECTIONS_H */
#define __HAVE_ARCH_STRLEN 1 /* see arch/ia64/lib/strlen.S */
#define __HAVE_ARCH_MEMSET 1 /* see arch/ia64/lib/memset.S */
#define __HAVE_ARCH_MEMCPY 1 /* see arch/ia64/lib/memcpy.S */
-#define __HAVE_ARCH_BCOPY 1 /* see arch/ia64/lib/memcpy.S */
extern __kernel_size_t strlen (const char *);
extern void *memcpy (void *, const void *, __kernel_size_t);
.per_cpu_gain = 100, \
.flags = SD_LOAD_BALANCE \
| SD_BALANCE_EXEC \
+ | SD_BALANCE_NEWIDLE \
+ | SD_WAKE_IDLE \
| SD_WAKE_BALANCE, \
.last_balance = jiffies, \
.balance_interval = 1, \
#ifndef _ASM_M32R_ASSEMBLER_H
#define _ASM_M32R_ASSEMBLER_H
-/* $Id$ */
-
/*
* linux/asm-m32r/assembler.h
*
- * This file contains M32R architecture specific defines.
+ * Copyright (C) 2004 Hirokazu Takata <takata at linux-m32r.org>
*
- * Do not include any C declarations in this file - it is included by
- * assembler source.
+ * This file contains M32R architecture specific macro definitions.
*/
#include <linux/config.h>
+#ifndef __STR
+#ifdef __ASSEMBLY__
+#define __STR(x) x
+#else
+#define __STR(x) #x
+#endif
+#endif /* __STR */
+
+#ifdef CONFIG_SMP
+#define M32R_LOCK __STR(lock)
+#define M32R_UNLOCK __STR(unlock)
+#else
+#define M32R_LOCK __STR(ld)
+#define M32R_UNLOCK __STR(st)
+#endif
+#ifdef __ASSEMBLY__
#undef ENTRY
#define ENTRY(name) ENTRY_M name
.macro ENTRY_M name
ALIGN
\name:
.endm
+#endif
-/*
- * LDIMM: load immediate value
- *
- * STI: enable interruption
- * CLI: disable interruption
+
+/**
+ * LDIMM - load immediate value
+ * STI - enable interrupts
+ * CLI - disable interrupts
*/
#ifdef __ASSEMBLY__
#endif /* __ASSEMBLY__ */
#endif /* _ASM_M32R_ASSEMBLER_H */
-
*/
#include <linux/config.h>
+#include <asm/assembler.h>
#include <asm/system.h>
/*
* resource counting etc..
*/
-#undef LOAD
-#undef STORE
-#ifdef CONFIG_SMP
-#define LOAD "lock"
-#define STORE "unlock"
-#else
-#define LOAD "ld"
-#define STORE "st"
-#endif
-
/*
* Make sure gcc doesn't try to be clever and move things around
* on us. We need to use _exactly_ the address the user gave us,
*
* Atomically adds @i to @v and return (@i + @v).
*/
-static inline int atomic_add_return(int i, atomic_t *v)
+static __inline__ int atomic_add_return(int i, atomic_t *v)
{
unsigned long flags;
int result;
__asm__ __volatile__ (
"# atomic_add_return \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"add %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (result)
: "r" (&v->counter), "r" (i)
: "memory"
*
* Atomically subtracts @i from @v and return (@v - @i).
*/
-static inline int atomic_sub_return(int i, atomic_t *v)
+static __inline__ int atomic_sub_return(int i, atomic_t *v)
{
unsigned long flags;
int result;
__asm__ __volatile__ (
"# atomic_sub_return \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"sub %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (result)
: "r" (&v->counter), "r" (i)
: "memory"
*
* Atomically increments @v by 1 and returns the result.
*/
-static inline int atomic_inc_return(atomic_t *v)
+static __inline__ int atomic_inc_return(atomic_t *v)
{
unsigned long flags;
int result;
__asm__ __volatile__ (
"# atomic_inc_return \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (result)
: "r" (&v->counter)
: "memory"
*
* Atomically decrements @v by 1 and returns the result.
*/
-static inline int atomic_dec_return(atomic_t *v)
+static __inline__ int atomic_dec_return(atomic_t *v)
{
unsigned long flags;
int result;
__asm__ __volatile__ (
"# atomic_dec_return \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #-1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (result)
: "r" (&v->counter)
: "memory"
*/
#define atomic_add_negative(i,v) (atomic_add_return((i), (v)) < 0)
-static inline void atomic_clear_mask(unsigned long mask, atomic_t *addr)
+static __inline__ void atomic_clear_mask(unsigned long mask, atomic_t *addr)
{
unsigned long flags;
unsigned long tmp;
__asm__ __volatile__ (
"# atomic_clear_mask \n\t"
DCACHE_CLEAR("%0", "r5", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"and %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (tmp)
: "r" (addr), "r" (~mask)
: "memory"
local_irq_restore(flags);
}
-static inline void atomic_set_mask(unsigned long mask, atomic_t *addr)
+static __inline__ void atomic_set_mask(unsigned long mask, atomic_t *addr)
{
unsigned long flags;
unsigned long tmp;
__asm__ __volatile__ (
"# atomic_set_mask \n\t"
DCACHE_CLEAR("%0", "r5", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"or %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (tmp)
: "r" (addr), "r" (mask)
: "memory"
#define smp_mb__after_atomic_inc() barrier()
#endif /* _ASM_M32R_ATOMIC_H */
-
#include <linux/config.h>
#include <linux/compiler.h>
+#include <asm/assembler.h>
#include <asm/system.h>
#include <asm/byteorder.h>
#include <asm/types.h>
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
-#undef LOAD
-#undef STORE
-#ifdef CONFIG_SMP
-#define LOAD "lock"
-#define STORE "unlock"
-#else
-#define LOAD "ld"
-#define STORE "st"
-#endif
-
-/* #define ADDR (*(volatile long *) addr) */
-
/**
* set_bit - Atomically set a bit in memory
* @nr: the bit to set
* Note that @nr may be almost arbitrarily large; this function is not
* restricted to acting on a single-word quantity.
*/
-static inline void set_bit(int nr, volatile void * addr)
+static __inline__ void set_bit(int nr, volatile void * addr)
{
__u32 mask;
volatile __u32 *a = addr;
local_irq_save(flags);
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "r6", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"or %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (tmp)
: "r" (a), "r" (mask)
: "memory"
* If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds.
*/
-static inline void __set_bit(int nr, volatile void * addr)
+static __inline__ void __set_bit(int nr, volatile void * addr)
{
__u32 mask;
volatile __u32 *a = addr;
* you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
* in order to ensure changes are visible on other processors.
*/
-static inline void clear_bit(int nr, volatile void * addr)
+static __inline__ void clear_bit(int nr, volatile void * addr)
{
__u32 mask;
volatile __u32 *a = addr;
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "r6", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"and %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (tmp)
: "r" (a), "r" (~mask)
: "memory"
local_irq_restore(flags);
}
-static inline void __clear_bit(int nr, volatile unsigned long * addr)
+static __inline__ void __clear_bit(int nr, volatile unsigned long * addr)
{
unsigned long mask;
volatile unsigned long *a = addr;
* If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds.
*/
-static inline void __change_bit(int nr, volatile void * addr)
+static __inline__ void __change_bit(int nr, volatile void * addr)
{
__u32 mask;
volatile __u32 *a = addr;
* Note that @nr may be almost arbitrarily large; this function is not
* restricted to acting on a single-word quantity.
*/
-static inline void change_bit(int nr, volatile void * addr)
+static __inline__ void change_bit(int nr, volatile void * addr)
{
__u32 mask;
volatile __u32 *a = addr;
local_irq_save(flags);
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "r6", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"xor %0, %2; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (tmp)
: "r" (a), "r" (mask)
: "memory"
* This operation is atomic and cannot be reordered.
* It also implies a memory barrier.
*/
-static inline int test_and_set_bit(int nr, volatile void * addr)
+static __inline__ int test_and_set_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
local_irq_save(flags);
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "%1", "%2")
- LOAD" %0, @%2; \n\t"
+ M32R_LOCK" %0, @%2; \n\t"
"mv %1, %0; \n\t"
"and %0, %3; \n\t"
"or %1, %3; \n\t"
- STORE" %1, @%2; \n\t"
+ M32R_UNLOCK" %1, @%2; \n\t"
: "=&r" (oldbit), "=&r" (tmp)
: "r" (a), "r" (mask)
: "memory"
* If two examples of this operation race, one can appear to succeed
* but actually fail. You must protect multiple accesses with a lock.
*/
-static inline int __test_and_set_bit(int nr, volatile void * addr)
+static __inline__ int __test_and_set_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
* This operation is atomic and cannot be reordered.
* It also implies a memory barrier.
*/
-static inline int test_and_clear_bit(int nr, volatile void * addr)
+static __inline__ int test_and_clear_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "%1", "%3")
- LOAD" %0, @%3; \n\t"
- "mv %1, %0; \n\t"
- "and %0, %2; \n\t"
- "not %2, %2; \n\t"
- "and %1, %2; \n\t"
- STORE" %1, @%3; \n\t"
+ M32R_LOCK" %0, @%3; \n\t"
+ "mv %1, %0; \n\t"
+ "and %0, %2; \n\t"
+ "not %2, %2; \n\t"
+ "and %1, %2; \n\t"
+ M32R_UNLOCK" %1, @%3; \n\t"
: "=&r" (oldbit), "=&r" (tmp), "+r" (mask)
: "r" (a)
: "memory"
* If two examples of this operation race, one can appear to succeed
* but actually fail. You must protect multiple accesses with a lock.
*/
-static inline int __test_and_clear_bit(int nr, volatile void * addr)
+static __inline__ int __test_and_clear_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
}
/* WARNING: non atomic and it can be reordered! */
-static inline int __test_and_change_bit(int nr, volatile void * addr)
+static __inline__ int __test_and_change_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
* This operation is atomic and cannot be reordered.
* It also implies a memory barrier.
*/
-static inline int test_and_change_bit(int nr, volatile void * addr)
+static __inline__ int test_and_change_bit(int nr, volatile void * addr)
{
__u32 mask, oldbit;
volatile __u32 *a = addr;
local_irq_save(flags);
__asm__ __volatile__ (
DCACHE_CLEAR("%0", "%1", "%2")
- LOAD" %0, @%2; \n\t"
+ M32R_LOCK" %0, @%2; \n\t"
"mv %1, %0; \n\t"
"and %0, %3; \n\t"
"xor %1, %3; \n\t"
- STORE" %1, @%2; \n\t"
+ M32R_UNLOCK" %1, @%2; \n\t"
: "=&r" (oldbit), "=&r" (tmp)
: "r" (a), "r" (mask)
: "memory"
return (oldbit != 0);
}
-#if 0 /* Fool kernel-doc since it doesn't do macros yet */
/**
* test_bit - Determine whether a bit is set
* @nr: bit number to test
* @addr: Address to start counting from
*/
-static int test_bit(int nr, const volatile void * addr);
-#endif
-
-static inline int test_bit(int nr, const volatile void * addr)
+static __inline__ int test_bit(int nr, const volatile void * addr)
{
__u32 mask;
const volatile __u32 *a = addr;
*
* Undefined if no zero exists, so code should check against ~0UL first.
*/
-static inline unsigned long ffz(unsigned long word)
+static __inline__ unsigned long ffz(unsigned long word)
{
int k;
* @offset: The bitnumber to start searching at
* @size: The maximum size to search
*/
-static inline int find_next_zero_bit(void *addr, int size, int offset)
+static __inline__ int find_next_zero_bit(const unsigned long *addr,
+ int size, int offset)
{
- unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
+ const unsigned long *p = addr + (offset >> 5);
unsigned long result = offset & ~31UL;
unsigned long tmp;
*
* Undefined if no bit exists, so code should check against 0 first.
*/
-static inline unsigned long __ffs(unsigned long word)
+static __inline__ unsigned long __ffs(unsigned long word)
{
int k = 0;
*
* it's best to have buff aligned on a 32-bit boundary
*/
-asmlinkage unsigned int csum_partial(const unsigned char *buff, int len, unsigned int sum);
+asmlinkage unsigned int csum_partial(const unsigned char *buff,
+ int len, unsigned int sum);
/*
* The same as csum_partial, but copies from src while it checksums.
* Here even more important to align src and dst on a 32-bit (or even
* better 64-bit) boundary
*/
-extern unsigned int csum_partial_copy_nocheck(const char *src, char *dst,
+extern unsigned int csum_partial_copy_nocheck(const unsigned char *src,
+ unsigned char *dst,
int len, unsigned int sum);
/*
* This is a new version of the above that records errors it finds in *errp,
 * but continues and zeros the rest of the buffer.
*/
-extern unsigned int csum_partial_copy_from_user(const char __user *src,
- char *dst,
+extern unsigned int csum_partial_copy_from_user(const unsigned char __user *src,
+ unsigned char *dst,
int len, unsigned int sum,
int *err_ptr);
#define R_M32R_GOTPC_HI_ULO 59
#define R_M32R_GOTPC_HI_SLO 60
#define R_M32R_GOTPC_LO 61
+#define R_M32R_GOTOFF_HI_ULO 62
+#define R_M32R_GOTOFF_HI_SLO 63
+#define R_M32R_GOTOFF_LO 64
#define R_M32R_NUM 256
+#ifdef __KERNEL__
#ifndef __ASM_HARDIRQ_H
#define __ASM_HARDIRQ_H
typedef struct {
unsigned int __softirq_pending;
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task; /* waitqueue is too large */
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define nmi_enter() (irq_enter())
-#define nmi_exit() (preempt_count() -= HARDIRQ_OFFSET)
-
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
+static inline void ack_bad_irq(int irq)
+{
+ printk(KERN_CRIT "unexpected IRQ trap at vector %02x\n", irq);
+ BUG();
+}
#endif /* __ASM_HARDIRQ_H */
+#endif /* __KERNEL__ */
+#ifdef __KERNEL__
#ifndef _ASM_M32R_IRQ_H
#define _ASM_M32R_IRQ_H
-/* $Id$ */
-
#include <linux/config.h>
#if defined(CONFIG_PLAT_M32700UT_Alpha) || defined(CONFIG_PLAT_USRV)
#define irq_canonicalize(irq) (irq)
-#ifndef __ASSEMBLY__
-extern void disable_irq(unsigned int);
-extern void disable_irq_nosync(unsigned int);
-extern void enable_irq(unsigned int);
-
-struct irqaction;
-struct pt_regs;
-int handle_IRQ_event(unsigned int, struct pt_regs *, struct irqaction *);
-#endif
-
-#endif /* _ASM_M32R_IRQ_H */
-
+#endif /* _ASM_M32R_IRQ_H */
+#endif /* __KERNEL__ */
#ifndef _ASM_M32R_MMU_CONTEXT_H
#define _ASM_M32R_MMU_CONTEXT_H
-/* $Id */
+#ifdef __KERNEL__
#include <linux/config.h>
#define mm_context(mm) mm->context[smp_processor_id()]
#endif /* not CONFIG_SMP */
-#define set_tlb_tag(entry, tag) (*entry = (tag & PAGE_MASK)|get_asid())
+#define set_tlb_tag(entry, tag) (*entry = (tag & PAGE_MASK)|get_asid())
#define set_tlb_data(entry, data) (*entry = (data | _PAGE_PRESENT))
#ifdef CONFIG_MMU
#define enter_lazy_tlb(mm, tsk) do { } while (0)
-static __inline__ void get_new_mmu_context(struct mm_struct *mm)
+static inline void get_new_mmu_context(struct mm_struct *mm)
{
unsigned long mc = ++mmu_context_cache;
/*
* Get MMU context if needed.
*/
-static __inline__ void get_mmu_context(struct mm_struct *mm)
+static inline void get_mmu_context(struct mm_struct *mm)
{
if (mm) {
unsigned long mc = mmu_context_cache;
* Initialize the context related info for a new mm_struct
* instance.
*/
-static __inline__ int init_new_context(struct task_struct *tsk,
+static inline int init_new_context(struct task_struct *tsk,
struct mm_struct *mm)
{
#ifndef CONFIG_SMP
*/
#define destroy_context(mm) do { } while (0)
-static __inline__ void set_asid(unsigned long asid)
+static inline void set_asid(unsigned long asid)
{
*(volatile unsigned long *)MASID = (asid & MMU_CONTEXT_ASID_MASK);
}
-static __inline__ unsigned long get_asid(void)
+static inline unsigned long get_asid(void)
{
unsigned long asid;
* After we have set current->mm to a new value, this activates
* the context for the new mm so we see the new mappings.
*/
-static __inline__ void activate_context(struct mm_struct *mm)
+static inline void activate_context(struct mm_struct *mm)
{
get_mmu_context(mm);
set_asid(mm_context(mm) & MMU_CONTEXT_ASID_MASK);
}
-static __inline__ void switch_mm(struct mm_struct *prev,
+static inline void switch_mm(struct mm_struct *prev,
struct mm_struct *next, struct task_struct *tsk)
{
#ifdef CONFIG_SMP
#endif /* not __ASSEMBLY__ */
-#endif /* _ASM_M32R_MMU_CONTEXT_H */
+#endif /* __KERNEL__ */
+#endif /* _ASM_M32R_MMU_CONTEXT_H */
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
+#define alloc_zeroed_user_highpage(vma, vaddr) alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+
/*
* These are used to make use of C type-checking..
*/
*/
static __inline__ pgd_t *pgd_alloc(struct mm_struct *mm)
{
- pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL);
-
- if (pgd)
- clear_page(pgd);
+ pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
return pgd;
}
static __inline__ pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
- pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL);
-
- if (pte)
- clear_page(pte);
+ pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
return pte;
}
static __inline__ struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
- struct page *pte = alloc_page(GFP_KERNEL);
+ struct page *pte = alloc_page(GFP_KERNEL|__GFP_ZERO);
- if (pte)
- clear_page(page_address(pte));
return pte;
}
#ifndef _ASM_M32R_PGTABLE_2LEVEL_H
#define _ASM_M32R_PGTABLE_2LEVEL_H
-/* $Id$ */
+#ifdef __KERNEL__
#include <linux/config.h>
* setup: the pgd is never bad, and a pmd always exists (as it's folded
* into the pgd entry)
*/
-static __inline__ int pgd_none(pgd_t pgd) { return 0; }
-static __inline__ int pgd_bad(pgd_t pgd) { return 0; }
-static __inline__ int pgd_present(pgd_t pgd) { return 1; }
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+static inline int pgd_present(pgd_t pgd) { return 1; }
#define pgd_clear(xp) do { } while (0)
/*
#define pgd_page(pgd) \
((unsigned long) __va(pgd_val(pgd) & PAGE_MASK))
-static __inline__ pmd_t *pmd_offset(pgd_t * dir, unsigned long address)
+static inline pmd_t *pmd_offset(pgd_t * dir, unsigned long address)
{
return (pmd_t *) dir;
}
#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
-/* M32R_FIXME : PTE_FILE_MAX_BITS, pte_to_pgoff, pgoff_to_pte */
-#define PTE_FILE_MAX_BITS 31
-#define pte_to_pgoff(pte) (pte_val(pte) >> 1)
-#define pgoff_to_pte(off) ((pte_t) { ((off) << 1) | _PAGE_FILE })
+#define PTE_FILE_MAX_BITS 29
+#define pte_to_pgoff(pte)	(((pte_val(pte) >> 2) & 0x7f) | (((pte_val(pte) >> 10)) << 7))
+#define pgoff_to_pte(off)	((pte_t) { (((off) & 0x7f) << 2) | (((off) >> 7) << 10) | _PAGE_FILE })
-#endif /* _ASM_M32R_PGTABLE_2LEVEL_H */
+#endif /* __KERNEL__ */
+#endif /* _ASM_M32R_PGTABLE_2LEVEL_H */
#ifndef _ASM_M32R_PGTABLE_H
#define _ASM_M32R_PGTABLE_H
-/* $Id$ */
+#include <asm-generic/4level-fixup.h>
+#ifdef __KERNEL__
/*
* The Linux memory management assumes a three-level page table setup. On
* the M32R, we use that, but "fold" the mid level into the top-level page
#endif /* !__ASSEMBLY__ */
-/*
- * The Linux x86 paging architecture is 'compile-time dual-mode', it
- * implements both the traditional 2-level x86 page tables and the
- * newer 3-level PAE-mode page tables.
- */
#ifndef __ASSEMBLY__
#include <asm/pgtable-2level.h>
#endif
#define VMALLOC_START KSEG2
#define VMALLOC_END KSEG3
-/*
- * The 4MB page is guessing.. Detailed in the infamous "Chapter H"
- * of the Pentium details, but assuming intel did the straightforward
- * thing, this bit set in the page directory entry just means that
- * the page directory entry points directly to a 4MB-aligned block of
- * memory.
- */
-
/*
* M32R TLB format
*
* RWX
*/
-#define _PAGE_BIT_DIRTY 0 /* software */
+#define _PAGE_BIT_DIRTY 0 /* software: page changed */
#define _PAGE_BIT_FILE 0 /* when !present: nonlinear file
mapping */
-#define _PAGE_BIT_PRESENT 1 /* Valid */
+#define _PAGE_BIT_PRESENT 1 /* Valid: page is valid */
#define _PAGE_BIT_GLOBAL 2 /* Global */
#define _PAGE_BIT_LARGE 3 /* Large */
#define _PAGE_BIT_EXEC 4 /* Execute */
#define _PAGE_BIT_WRITE 5 /* Write */
#define _PAGE_BIT_READ 6 /* Read */
#define _PAGE_BIT_NONCACHABLE 7 /* Non cachable */
-#define _PAGE_BIT_USER 8 /* software */
-#define _PAGE_BIT_ACCESSED 9 /* software */
-
-#define _PAGE_DIRTY \
- (1UL << _PAGE_BIT_DIRTY) /* software : page changed */
-#define _PAGE_FILE \
- (1UL << _PAGE_BIT_FILE) /* when !present: nonlinear file
- mapping */
-#define _PAGE_PRESENT \
- (1UL << _PAGE_BIT_PRESENT) /* Valid : Page is Valid */
-#define _PAGE_GLOBAL \
- (1UL << _PAGE_BIT_GLOBAL) /* Global */
-#define _PAGE_LARGE \
- (1UL << _PAGE_BIT_LARGE) /* Large */
-#define _PAGE_EXEC \
- (1UL << _PAGE_BIT_EXEC) /* Execute */
-#define _PAGE_WRITE \
- (1UL << _PAGE_BIT_WRITE) /* Write */
-#define _PAGE_READ \
- (1UL << _PAGE_BIT_READ) /* Read */
-#define _PAGE_NONCACHABLE \
- (1UL<<_PAGE_BIT_NONCACHABLE) /* Non cachable */
-#define _PAGE_USER \
- (1UL << _PAGE_BIT_USER) /* software : user space access
- allowed */
-#define _PAGE_ACCESSED \
- (1UL << _PAGE_BIT_ACCESSED) /* software : page referenced */
+#define _PAGE_BIT_ACCESSED 8 /* software: page referenced */
+#define _PAGE_BIT_PROTNONE 9 /* software: if not present */
+
+#define _PAGE_DIRTY (1UL << _PAGE_BIT_DIRTY)
+#define _PAGE_FILE (1UL << _PAGE_BIT_FILE)
+#define _PAGE_PRESENT (1UL << _PAGE_BIT_PRESENT)
+#define _PAGE_GLOBAL (1UL << _PAGE_BIT_GLOBAL)
+#define _PAGE_LARGE (1UL << _PAGE_BIT_LARGE)
+#define _PAGE_EXEC (1UL << _PAGE_BIT_EXEC)
+#define _PAGE_WRITE (1UL << _PAGE_BIT_WRITE)
+#define _PAGE_READ (1UL << _PAGE_BIT_READ)
+#define _PAGE_NONCACHABLE (1UL << _PAGE_BIT_NONCACHABLE)
+#define _PAGE_ACCESSED (1UL << _PAGE_BIT_ACCESSED)
+#define _PAGE_PROTNONE (1UL << _PAGE_BIT_PROTNONE)
#define _PAGE_TABLE \
- ( _PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | _PAGE_USER \
- | _PAGE_ACCESSED | _PAGE_DIRTY )
+ ( _PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | _PAGE_ACCESSED \
+ | _PAGE_DIRTY )
#define _KERNPG_TABLE \
( _PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | _PAGE_ACCESSED \
| _PAGE_DIRTY )
#ifdef CONFIG_MMU
#define PAGE_NONE \
- __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
+ __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
#define PAGE_SHARED \
- __pgprot(_PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | _PAGE_USER \
- | _PAGE_ACCESSED)
-#define PAGE_SHARED_X \
+ __pgprot(_PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | _PAGE_ACCESSED)
+#define PAGE_SHARED_EXEC \
__pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_WRITE | _PAGE_READ \
- | _PAGE_USER | _PAGE_ACCESSED)
-#define PAGE_COPY \
- __pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_READ | _PAGE_USER \
- | _PAGE_ACCESSED)
-#define PAGE_COPY_X \
- __pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_READ | _PAGE_USER \
| _PAGE_ACCESSED)
+#define PAGE_COPY \
+ __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_ACCESSED)
+#define PAGE_COPY_EXEC \
+ __pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_READ | _PAGE_ACCESSED)
#define PAGE_READONLY \
- __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_USER | _PAGE_ACCESSED)
-#define PAGE_READONLY_X \
- __pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_READ | _PAGE_USER \
- | _PAGE_ACCESSED)
+ __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_ACCESSED)
+#define PAGE_READONLY_EXEC \
+ __pgprot(_PAGE_PRESENT | _PAGE_EXEC | _PAGE_READ | _PAGE_ACCESSED)
#define __PAGE_KERNEL \
( _PAGE_PRESENT | _PAGE_EXEC | _PAGE_WRITE | _PAGE_READ | _PAGE_DIRTY \
#define PAGE_KERNEL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_NOCACHE)
#else
-#define PAGE_NONE __pgprot(0)
-#define PAGE_SHARED __pgprot(0)
-#define PAGE_SHARED_X __pgprot(0)
-#define PAGE_COPY __pgprot(0)
-#define PAGE_COPY_X __pgprot(0)
-#define PAGE_READONLY __pgprot(0)
-#define PAGE_READONLY_X __pgprot(0)
-
-#define PAGE_KERNEL __pgprot(0)
-#define PAGE_KERNEL_RO __pgprot(0)
-#define PAGE_KERNEL_NOCACHE __pgprot(0)
+#define PAGE_NONE __pgprot(0)
+#define PAGE_SHARED __pgprot(0)
+#define PAGE_SHARED_EXEC __pgprot(0)
+#define PAGE_COPY __pgprot(0)
+#define PAGE_COPY_EXEC __pgprot(0)
+#define PAGE_READONLY __pgprot(0)
+#define PAGE_READONLY_EXEC __pgprot(0)
+
+#define PAGE_KERNEL __pgprot(0)
+#define PAGE_KERNEL_RO __pgprot(0)
+#define PAGE_KERNEL_NOCACHE __pgprot(0)
#endif /* CONFIG_MMU */
-/*
- * The i386 can't do page protection for execute, and considers that
- * the same are read. Also, write permissions imply read permissions.
- * This is the closest we can get..
- */
- /* rwx */
+ /* xwr */
#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY_X
-#define __P010 PAGE_COPY_X
-#define __P011 PAGE_COPY_X
-#define __P100 PAGE_READONLY
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY_X
-#define __P111 PAGE_COPY_X
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY_EXEC
+#define __P101 PAGE_READONLY_EXEC
+#define __P110 PAGE_COPY_EXEC
+#define __P111 PAGE_COPY_EXEC
#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY_X
+#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED_X
-#define __S100 PAGE_READONLY
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED
-#define __S111 PAGE_SHARED_X
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY_EXEC
+#define __S101 PAGE_READONLY_EXEC
+#define __S110 PAGE_SHARED_EXEC
+#define __S111 PAGE_SHARED_EXEC
/* page table for 0-4MB for everybody */
-#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
+#define pte_present(x) (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))
#define pte_clear(xp) do { set_pte(xp, __pte(0)); } while (0)
#define pmd_none(x) (!pmd_val(x))
#define pmd_present(x) (pmd_val(x) & _PAGE_PRESENT)
#define pmd_clear(xp) do { set_pmd(xp, __pmd(0)); } while (0)
-#define pmd_bad(x) ((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) \
- != _KERNPG_TABLE)
+#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK) != _KERNPG_TABLE)
#define pages_to_mb(x) ((x) >> (20 - PAGE_SHIFT))
* The following only work if pte_present() is true.
* Undefined behaviour if not..
*/
-static __inline__ int pte_user(pte_t pte)
-{
- return pte_val(pte) & _PAGE_USER;
-}
-
-static __inline__ int pte_read(pte_t pte)
+static inline int pte_read(pte_t pte)
{
return pte_val(pte) & _PAGE_READ;
}
-static __inline__ int pte_exec(pte_t pte)
+static inline int pte_exec(pte_t pte)
{
return pte_val(pte) & _PAGE_EXEC;
}
-static __inline__ int pte_dirty(pte_t pte)
+static inline int pte_dirty(pte_t pte)
{
return pte_val(pte) & _PAGE_DIRTY;
}
-static __inline__ int pte_young(pte_t pte)
+static inline int pte_young(pte_t pte)
{
return pte_val(pte) & _PAGE_ACCESSED;
}
-static __inline__ int pte_write(pte_t pte)
+static inline int pte_write(pte_t pte)
{
return pte_val(pte) & _PAGE_WRITE;
}
/*
* The following only works if pte_present() is not true.
*/
-static __inline__ int pte_file(pte_t pte)
+static inline int pte_file(pte_t pte)
{
return pte_val(pte) & _PAGE_FILE;
}
-static __inline__ pte_t pte_rdprotect(pte_t pte)
+static inline pte_t pte_rdprotect(pte_t pte)
{
pte_val(pte) &= ~_PAGE_READ;
return pte;
}
-static __inline__ pte_t pte_exprotect(pte_t pte)
+static inline pte_t pte_exprotect(pte_t pte)
{
pte_val(pte) &= ~_PAGE_EXEC;
return pte;
}
-static __inline__ pte_t pte_mkclean(pte_t pte)
+static inline pte_t pte_mkclean(pte_t pte)
{
pte_val(pte) &= ~_PAGE_DIRTY;
return pte;
}
-static __inline__ pte_t pte_mkold(pte_t pte)
+static inline pte_t pte_mkold(pte_t pte)
{
- pte_val(pte) &= ~_PAGE_ACCESSED;return pte;}
+ pte_val(pte) &= ~_PAGE_ACCESSED;
+ return pte;
+}
-static __inline__ pte_t pte_wrprotect(pte_t pte)
+static inline pte_t pte_wrprotect(pte_t pte)
{
pte_val(pte) &= ~_PAGE_WRITE;
return pte;
}
-static __inline__ pte_t pte_mkread(pte_t pte)
+static inline pte_t pte_mkread(pte_t pte)
{
pte_val(pte) |= _PAGE_READ;
return pte;
}
-static __inline__ pte_t pte_mkexec(pte_t pte)
+static inline pte_t pte_mkexec(pte_t pte)
{
pte_val(pte) |= _PAGE_EXEC;
return pte;
}
-static __inline__ pte_t pte_mkdirty(pte_t pte)
+static inline pte_t pte_mkdirty(pte_t pte)
{
pte_val(pte) |= _PAGE_DIRTY;
return pte;
}
-static __inline__ pte_t pte_mkyoung(pte_t pte)
+static inline pte_t pte_mkyoung(pte_t pte)
{
pte_val(pte) |= _PAGE_ACCESSED;
return pte;
}
-static __inline__ pte_t pte_mkwrite(pte_t pte)
+static inline pte_t pte_mkwrite(pte_t pte)
{
pte_val(pte) |= _PAGE_WRITE;
return pte;
}
-static __inline__ int ptep_test_and_clear_dirty(pte_t *ptep)
+static inline int ptep_test_and_clear_dirty(pte_t *ptep)
{
return test_and_clear_bit(_PAGE_BIT_DIRTY, ptep);
}
-static __inline__ int ptep_test_and_clear_young(pte_t *ptep)
+static inline int ptep_test_and_clear_young(pte_t *ptep)
{
return test_and_clear_bit(_PAGE_BIT_ACCESSED, ptep);
}
-static __inline__ void ptep_set_wrprotect(pte_t *ptep)
+static inline void ptep_set_wrprotect(pte_t *ptep)
{
clear_bit(_PAGE_BIT_WRITE, ptep);
}
-static __inline__ void ptep_mkdirty(pte_t *ptep)
+static inline void ptep_mkdirty(pte_t *ptep)
{
set_bit(_PAGE_BIT_DIRTY, ptep);
}
+/*
+ * Macro and implementation to make a page protection uncachable.
+ */
+static inline pgprot_t pgprot_noncached(pgprot_t _prot)
+{
+ unsigned long prot = pgprot_val(_prot);
+
+ prot |= _PAGE_NONCACHABLE;
+ return __pgprot(prot);
+}
+
+#define pgprot_writecombine(prot) pgprot_noncached(prot)
+
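
A minimal usage sketch (not from the patch): a driver's mmap() handler would typically apply pgprot_noncached() to the vma protection before establishing the mapping. mydev_mmap and MYDEV_PHYS_BASE are hypothetical names; remap_pfn_range() is used the same way as in the io_remap_page_range() definition later in this header.

    /* Hypothetical driver mmap() handler using pgprot_noncached(). */
    static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
    {
            unsigned long size = vma->vm_end - vma->vm_start;

            /* Device registers must not be cached. */
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

            return remap_pfn_range(vma, vma->vm_start,
                                   MYDEV_PHYS_BASE >> PAGE_SHIFT, /* hypothetical base */
                                   size, vma->vm_page_prot);
    }
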
/*
* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*/
#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), pgprot)
-static __inline__ pte_t pte_modify(pte_t pte, pgprot_t newprot)
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) \
| pgprot_val(newprot)));
* and a page entry and page directory to the page they refer to.
*/
-static __inline__ void pmd_set(pmd_t * pmdp, pte_t * ptep)
+static inline void pmd_set(pmd_t * pmdp, pte_t * ptep)
{
pmd_val(*pmdp) = (((unsigned long) ptep) & PAGE_MASK);
}
#define pte_unmap_nested(pte) do { } while (0)
/* Encode and de-code a swap entry */
-#define __swp_type(x) (((x).val >> 1) & 0x3f)
-#define __swp_offset(x) ((x).val >> 8)
+#define __swp_type(x) (((x).val >> 2) & 0x3f)
+#define __swp_offset(x) ((x).val >> 10)
#define __swp_entry(type, offset) \
- ((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
+ ((swp_entry_t) { ((type) << 2) | ((offset) << 10) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
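
For illustration only (not part of the patch): the new layout keeps the swap type in bits 2..7 and the offset from bit 10 upward, so the _PAGE_PRESENT and _PAGE_PROTNONE bits stay clear. A standalone round-trip sketch with plain integers:

    #include <assert.h>

    #define swp_entry_val(type, offset)  (((type) << 2) | ((offset) << 10))
    #define swp_type_of(val)             (((val) >> 2) & 0x3f)
    #define swp_offset_of(val)           ((val) >> 10)

    int main(void)
    {
            unsigned long e = swp_entry_val(3UL, 0x1234UL);  /* == 0x48d00c */

            assert(swp_type_of(e) == 3);
            assert(swp_offset_of(e) == 0x1234);
            return 0;
    }
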
/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define kern_addr_valid(addr) (1)
-#define io_remap_page_range(vma, vaddr, paddr, size, prot) \
- remap_pfn_range(vma, vaddr, (paddr) >> PAGE_SHIFT, size, prot)
+#define io_remap_page_range(vma, vaddr, paddr, size, prot) \
+ remap_pfn_range(vma, vaddr, (paddr) >> PAGE_SHIFT, size, prot)
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_DIRTY
#define __HAVE_ARCH_PTE_SAME
#include <asm-generic/pgtable.h>
-#endif /* _ASM_M32R_PGTABLE_H */
+#endif /* __KERNEL__ */
+#endif /* _ASM_M32R_PGTABLE_H */
unsigned long seg;
} mm_segment_t;
+#define MAX_TRAPS 10
+
struct debug_trap {
int nr_trap;
- unsigned long addr;
- unsigned long insn;
+ unsigned long addr[MAX_TRAPS];
+ unsigned long insn[MAX_TRAPS];
};
struct thread_struct {
#ifndef _ASM_M32R_RESOURCE_H
#define _ASM_M32R_RESOURCE_H
-/* $Id$ */
-
-/* orig : i386 2.4.18 */
-
-/*
- * Resource limits
- */
-
-#define RLIMIT_CPU 0 /* CPU time in ms */
-#define RLIMIT_FSIZE 1 /* Maximum filesize */
-#define RLIMIT_DATA 2 /* max data size */
-#define RLIMIT_STACK 3 /* max stack size */
-#define RLIMIT_CORE 4 /* max core file size */
-#define RLIMIT_RSS 5 /* max resident set size */
-#define RLIMIT_NPROC 6 /* max number of processes */
-#define RLIMIT_NOFILE 7 /* max number of open files */
-#define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
-#define RLIMIT_AS 9 /* address space limit */
-#define RLIMIT_LOCKS 10 /* maximum file locks held */
-#define RLIMIT_SIGPENDING 11 /* max number of pending signals */
-#define RLIMIT_MSGQUEUE 12 /* maximum bytes in POSIX mqueues */
-
-#define RLIM_NLIMITS 13
-
-/*
- * SuS says limits have to be unsigned.
- * Which makes a ton more sense anyway.
- */
-#define RLIM_INFINITY (~0UL)
-
-#ifdef __KERNEL__
-
-#define INIT_RLIMITS \
-{ \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { _STK_LIM, RLIM_INFINITY }, \
- { 0, RLIM_INFINITY }, \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { 0, 0 }, \
- { INR_OPEN, INR_OPEN }, \
- { MLOCK_LIMIT, MLOCK_LIMIT }, \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { RLIM_INFINITY, RLIM_INFINITY }, \
- { MAX_SIGPENDING, MAX_SIGPENDING }, \
- { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
-}
-
-#endif /* __KERNEL__ */
+#include <asm-generic/resource.h>
#endif /* _ASM_M32R_RESOURCE_H */
#include <linux/config.h>
#include <linux/wait.h>
#include <linux/rwsem.h>
+#include <asm/assembler.h>
#include <asm/system.h>
#include <asm/atomic.h>
-#undef LOAD
-#undef STORE
-#ifdef CONFIG_SMP
-#define LOAD "lock"
-#define STORE "unlock"
-#else
-#define LOAD "ld"
-#define STORE "st"
-#endif
-
struct semaphore {
atomic_t count;
int sleepers;
__asm__ __volatile__ (
"# down \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #-1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (count)
: "r" (&sem->count)
: "memory"
__asm__ __volatile__ (
"# down_interruptible \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #-1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (count)
: "r" (&sem->count)
: "memory"
__asm__ __volatile__ (
"# down_trylock \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #-1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (count)
: "r" (&sem->count)
: "memory"
__asm__ __volatile__ (
"# up \n\t"
DCACHE_CLEAR("%0", "r4", "%1")
- LOAD" %0, @%1; \n\t"
+ M32R_LOCK" %0, @%1; \n\t"
"addi %0, #1; \n\t"
- STORE" %0, @%1; \n\t"
+ M32R_UNLOCK" %0, @%1; \n\t"
: "=&r" (count)
: "r" (&sem->count)
: "memory"
}
extern void smp_send_timer(void);
-extern void calibrate_delay(void);
extern unsigned long send_IPI_mask_phys(cpumask_t, int, int);
#endif /* not __ASSEMBLY__ */
#define RW_LOCK_BIAS 0x01000000
#define RW_LOCK_BIAS_STR "0x01000000"
-/* It seems that people are forgetting to
- * initialize their spinlocks properly, tsk tsk.
- * Remember to turn this off in 2.4. -ben
- */
-#if defined(CONFIG_DEBUG_SPINLOCK)
-#define SPINLOCK_DEBUG 1
-#else
-#define SPINLOCK_DEBUG 0
-#endif
-
/*
* Your basic SMP spinlocks, allowing only a single CPU anywhere
*/
typedef struct {
- volatile int lock;
-#if SPINLOCK_DEBUG
+ volatile int slock;
+#ifdef CONFIG_DEBUG_SPINLOCK
unsigned magic;
#endif
#ifdef CONFIG_PREEMPT
#define SPINLOCK_MAGIC 0xdead4ead
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
#define SPINLOCK_MAGIC_INIT , SPINLOCK_MAGIC
#else
#define SPINLOCK_MAGIC_INIT /* */
* We make no fairness assumptions. They have a cost.
*/
-#define spin_is_locked(x) (*(volatile int *)(&(x)->lock) <= 0)
+#define spin_is_locked(x) (*(volatile int *)(&(x)->slock) <= 0)
#define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x))
#define _raw_spin_lock_flags(lock, flags) _raw_spin_lock(lock)
unsigned long tmp1, tmp2;
/*
- * lock->lock : =1 : unlock
- * : <=0 : lock
+ * lock->slock : =1 : unlock
+ * : <=0 : lock
* {
- * oldval = lock->lock; <--+ need atomic operation
- * lock->lock = 0; <--+
+ * oldval = lock->slock; <--+ need atomic operation
+ * lock->slock = 0; <--+
* }
*/
__asm__ __volatile__ (
"unlock %1, @%3; \n\t"
"mvtc %2, psw; \n\t"
: "=&r" (oldval), "=&r" (tmp1), "=&r" (tmp2)
- : "r" (&lock->lock)
+ : "r" (&lock->slock)
: "memory"
#ifdef CONFIG_CHIP_M32700_TS1
, "r6"
{
unsigned long tmp0, tmp1;
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
__label__ here;
here:
if (lock->magic != SPINLOCK_MAGIC) {
- printk("eip: %p\n", &&here);
+ printk("pc: %p\n", &&here);
BUG();
}
#endif
/*
- * lock->lock : =1 : unlock
- * : <=0 : lock
+ * lock->slock : =1 : unlock
+ * : <=0 : lock
*
* for ( ; ; ) {
- * lock->lock -= 1; <-- need atomic operation
- * if (lock->lock == 0) break;
- * for ( ; lock->lock <= 0 ; );
+ * lock->slock -= 1; <-- need atomic operation
+ * if (lock->slock == 0) break;
+ * for ( ; lock->slock <= 0 ; );
* }
*/
__asm__ __volatile__ (
"bra 2b; \n\t"
LOCK_SECTION_END
: "=&r" (tmp0), "=&r" (tmp1)
- : "r" (&lock->lock)
+ : "r" (&lock->slock)
: "memory"
#ifdef CONFIG_CHIP_M32700_TS1
, "r6"
static inline void _raw_spin_unlock(spinlock_t *lock)
{
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
BUG_ON(lock->magic != SPINLOCK_MAGIC);
BUG_ON(!spin_is_locked(lock));
#endif
mb();
- lock->lock = 1;
+ lock->slock = 1;
}
/*
*/
typedef struct {
volatile int lock;
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
unsigned magic;
#endif
#ifdef CONFIG_PREEMPT
#define RWLOCK_MAGIC 0xdeaf1eed
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
#define RWLOCK_MAGIC_INIT , RWLOCK_MAGIC
#else
#define RWLOCK_MAGIC_INIT /* */
#define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
-#define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
+/**
+ * read_can_lock - would read_trylock() succeed?
+ * @lock: the rwlock in question.
+ */
+#define read_can_lock(x) ((int)(x)->lock > 0)
+
+/**
+ * write_can_lock - would write_trylock() succeed?
+ * @lock: the rwlock in question.
+ */
+#define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
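
An illustrative sketch of what these two macros test, assuming the usual biased-counter rwlock convention described in the comment that follows: the counter starts at RW_LOCK_BIAS, each reader subtracts 1, and a writer subtracts the whole bias.

    #include <assert.h>

    #define RW_LOCK_BIAS 0x01000000

    int main(void)
    {
            int lock = RW_LOCK_BIAS;          /* unlocked */

            assert(lock == RW_LOCK_BIAS);     /* write_can_lock() is true */
            lock -= 1;                        /* a reader enters */
            assert(lock > 0);                 /* read_can_lock() still true */
            assert(lock != RW_LOCK_BIAS);     /* but a writer would have to wait */
            lock += 1;                        /* reader leaves */
            lock -= RW_LOCK_BIAS;             /* a writer takes the lock */
            assert(lock <= 0);                /* readers are now locked out */
            return 0;
    }
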
/*
* On x86, we implement read-write locks as a 32-bit counter
{
unsigned long tmp0, tmp1;
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
BUG_ON(rw->magic != RWLOCK_MAGIC);
#endif
/*
{
unsigned long tmp0, tmp1, tmp2;
-#if SPINLOCK_DEBUG
+#ifdef CONFIG_DEBUG_SPINLOCK
BUG_ON(rw->magic != RWLOCK_MAGIC);
#endif
/*
* for more details.
*
* Copyright (C) 2001 by Hiroyuki Kondo, Hirokazu Takata, and Hitoshi Yamamoto
+ * Copyright (C) 2004 Hirokazu Takata <takata at linux-m32r.org>
*/
#include <linux/config.h>
#define local_irq_disable() \
__asm__ __volatile__ ("clrpsw #0x40 -> nop": : :"memory")
#else /* CONFIG_CHIP_M32102 */
-static __inline__ void local_irq_enable(void)
+static inline void local_irq_enable(void)
{
unsigned long tmpreg;
__asm__ __volatile__(
: "=&r" (tmpreg) : : "cbit", "memory");
}
-static __inline__ void local_irq_disable(void)
+static inline void local_irq_disable(void)
{
unsigned long tmpreg0, tmpreg1;
__asm__ __volatile__(
* rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point.
*/
-#if 0
-#define mb() __asm__ __volatile__ ("push r0; \n\t pop r0;" : : : "memory")
-#else
-#define mb() __asm__ __volatile__ ("" : : : "memory")
-#endif
+#define mb() barrier()
#define rmb() mb()
#define wmb() mb()
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
#endif /* _ASM_M32R_SYSTEM_H */
-
-/* thread_info.h: i386 low-level thread information
+#ifndef _ASM_M32R_THREAD_INFO_H
+#define _ASM_M32R_THREAD_INFO_H
+
+/* thread_info.h: m32r low-level thread information
*
* Copyright (C) 2002 David Howells (dhowells@redhat.com)
* - Incorporating suggestions made by Linus Torvalds and Dave Miller
+ * Copyright (C) 2004 Hirokazu Takata <takata at linux-m32r.org>
*/
-#ifndef _ASM_THREAD_INFO_H
-#define _ASM_THREAD_INFO_H
-
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
__s32 preempt_count; /* 0 => preemptable, <0 => BUG */
mm_segment_t addr_limit; /* thread address space:
- 0-0xBFFFFFFF for user-thead
+ 0-0xBFFFFFFF for user-thread
0-0xFFFFFFFF for kernel-thread
*/
struct restart_block restart_block;
#endif
-#define PREEMPT_ACTIVE 0x4000000
+#define PREEMPT_ACTIVE 0x10000000
/*
* macros/functions for gaining access to the thread information structure
#define init_thread_info (init_thread_union.thread_info)
#define init_stack (init_thread_union.stack)
+#define THREAD_SIZE (2*PAGE_SIZE)
+
/* how to get the thread information struct from C */
static inline struct thread_info *current_thread_info(void)
{
struct thread_info *ti;
__asm__ __volatile__ (
- "ldi %0, #0xffffe000; \n\t"
- "and %0, sp; \n\t"
- : "=r" (ti)
+ "ldi %0, #%1 \n\t"
+ "and %0, sp \n\t"
+ : "=r" (ti) : "i" (~(THREAD_SIZE - 1))
);
return ti;
}
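
As a standalone illustration of the masking trick above (hypothetical addresses): on m32r THREAD_SIZE is 2*PAGE_SIZE = 8KB, the kernel stack is THREAD_SIZE-aligned, and the thread_info structure sits at its base, so clearing the low bits of the stack pointer recovers it.

    #include <assert.h>

    #define THREAD_SIZE 8192UL

    int main(void)
    {
            unsigned long stack_base = 0x80312000UL; /* hypothetical, 8KB-aligned */
            unsigned long sp = stack_base + 0x1e40;  /* some point inside the stack */

            /* Clearing the low log2(THREAD_SIZE) bits of sp yields the base,
             * which is where the thread_info structure lives. */
            assert((sp & ~(THREAD_SIZE - 1)) == stack_base);
            return 0;
    }
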
/* thread information allocation */
-#define THREAD_SIZE (2*PAGE_SIZE)
-#define alloc_thread_info(task) \
- ((struct thread_info *) __get_free_pages(GFP_KERNEL,1))
-#define free_thread_info(ti) free_pages((unsigned long) (ti), 1)
+#ifdef CONFIG_DEBUG_STACK_USAGE
+#define alloc_thread_info(tsk) \
+ ({ \
+ struct thread_info *ret; \
+ \
+ ret = kmalloc(THREAD_SIZE, GFP_KERNEL); \
+ if (ret) \
+ memset(ret, 0, THREAD_SIZE); \
+ ret; \
+ })
+#else
+#define alloc_thread_info(tsk) kmalloc(THREAD_SIZE, GFP_KERNEL)
+#endif
+
+#define free_thread_info(info) kfree(info)
#define get_thread_info(ti) get_task_struct((ti)->task)
#define put_thread_info(ti) put_task_struct((ti)->task)
+#define TI_FLAG_FAULT_CODE_SHIFT 28
+
+static inline void set_thread_fault_code(unsigned int val)
+{
+ struct thread_info *ti = current_thread_info();
+ ti->flags = (ti->flags & (~0 >> (32 - TI_FLAG_FAULT_CODE_SHIFT)))
+ | (val << TI_FLAG_FAULT_CODE_SHIFT);
+}
+
+static inline unsigned int get_thread_fault_code(void)
+{
+ struct thread_info *ti = current_thread_info();
+ return ti->flags >> TI_FLAG_FAULT_CODE_SHIFT;
+}
+
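A standalone sketch of the packing used by set/get_thread_fault_code() above: the fault code occupies bits 31..28 of ti->flags and the ordinary TIF_* bits stay below it (the sketch uses an unsigned mask so it also runs on a 64-bit host).

    #include <assert.h>

    #define TI_FLAG_FAULT_CODE_SHIFT 28

    int main(void)
    {
            unsigned long flags = 0x13;      /* some TIF_* bits already set */

            /* set_thread_fault_code(0xe) */
            flags = (flags & (~0UL >> (32 - TI_FLAG_FAULT_CODE_SHIFT)))
                    | (0xeUL << TI_FLAG_FAULT_CODE_SHIFT);

            /* get_thread_fault_code() */
            assert((flags >> TI_FLAG_FAULT_CODE_SHIFT) == 0xe);
            assert((flags & 0x0fffffff) == 0x13); /* low bits preserved */
            return 0;
    }
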
#else /* !__ASSEMBLY__ */
+#define THREAD_SIZE 8192
+
/* how to get the thread information struct from ASM */
#define GET_THREAD_INFO(reg) GET_THREAD_INFO reg
.macro GET_THREAD_INFO reg
- ldi \reg, #0xffffe000
+ ldi \reg, #-THREAD_SIZE
and \reg, sp
.endm
#define TIF_SINGLESTEP 4 /* restore singlestep on return to user mode */
#define TIF_IRET 5 /* return with iret */
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
+ /* 31..28 fault code */
+#define TIF_MEMDIE 17
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#endif /* __KERNEL__ */
-#endif /* _ASM_THREAD_INFO_H */
+#endif /* _ASM_M32R_THREAD_INFO_H */
#define __NR_time 13
#define __NR_mknod 14
#define __NR_chmod 15
-#define __NR_lchown 16
-#define __NR_break 17
-#define __NR_oldstat 18
+/* 16 is unused */
+/* 17 is unused */
+/* 18 is unused */
#define __NR_lseek 19
#define __NR_getpid 20
#define __NR_mount 21
#define __NR_umount 22
-#define __NR_setuid 23
-#define __NR_getuid 24
+/* 23 is unused */
+/* 24 is unused */
#define __NR_stime 25
#define __NR_ptrace 26
#define __NR_alarm 27
-#define __NR_oldfstat 28
+/* 28 is unused */
#define __NR_pause 29
#define __NR_utime 30
-#define __NR_cacheflush 31 /* old #define __NR_stty 31*/
+/* 31 is unused */
#define __NR_cachectl 32 /* old #define __NR_gtty 32*/
#define __NR_access 33
-#define __NR_nice 34
-#define __NR_ftime 35
+/* 34 is unused */
+/* 35 is unused */
#define __NR_sync 36
#define __NR_kill 37
#define __NR_rename 38
#define __NR_dup 41
#define __NR_pipe 42
#define __NR_times 43
-#define __NR_prof 44
+/* 44 is unused */
#define __NR_brk 45
-#define __NR_setgid 46
-#define __NR_getgid 47
-#define __NR_signal 48
-#define __NR_geteuid 49
-#define __NR_getegid 50
+/* 46 is unused */
+/* 47 is unused (getgid16) */
+/* 48 is unused */
+/* 49 is unused */
+/* 50 is unused */
#define __NR_acct 51
#define __NR_umount2 52
-#define __NR_lock 53
+/* 53 is unused */
#define __NR_ioctl 54
-#define __NR_fcntl 55
-#define __NR_mpx 56
+/* 55 is unused (fcntl) */
+/* 56 is unused */
#define __NR_setpgid 57
-#define __NR_ulimit 58
-#define __NR_oldolduname 59
+/* 58 is unused */
+/* 59 is unused */
#define __NR_umask 60
#define __NR_chroot 61
#define __NR_ustat 62
#define __NR_getppid 64
#define __NR_getpgrp 65
#define __NR_setsid 66
-#define __NR_sigaction 67
-#define __NR_sgetmask 68
-#define __NR_ssetmask 69
-#define __NR_setreuid 70
-#define __NR_setregid 71
-#define __NR_sigsuspend 72
-#define __NR_sigpending 73
+/* 67 is unused */
+/* 68 is unused */
+/* 69 is unused */
+/* 70 is unused */
+/* 71 is unused */
+/* 72 is unused */
+/* 73 is unused */
#define __NR_sethostname 74
#define __NR_setrlimit 75
-#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
+/* 76 is unused (old getrlimit) */
#define __NR_getrusage 77
#define __NR_gettimeofday 78
#define __NR_settimeofday 79
-#define __NR_getgroups 80
-#define __NR_setgroups 81
-#define __NR_select 82
+/* 80 is unused */
+/* 81 is unused */
+/* 82 is unused */
#define __NR_symlink 83
-#define __NR_oldlstat 84
+/* 84 is unused */
#define __NR_readlink 85
#define __NR_uselib 86
#define __NR_swapon 87
#define __NR_reboot 88
-#define __NR_readdir 89
-#define __NR_mmap 90
+/* 89 is unused */
+/* 90 is unused */
#define __NR_munmap 91
#define __NR_truncate 92
#define __NR_ftruncate 93
#define __NR_fchmod 94
-#define __NR_fchown 95
+/* 95 is unused */
#define __NR_getpriority 96
#define __NR_setpriority 97
-#define __NR_profil 98
+/* 98 is unused */
#define __NR_statfs 99
#define __NR_fstatfs 100
-#define __NR_ioperm 101
+/* 101 is unused */
#define __NR_socketcall 102
#define __NR_syslog 103
#define __NR_setitimer 104
#define __NR_stat 106
#define __NR_lstat 107
#define __NR_fstat 108
-#define __NR_olduname 109
-#define __NR_iopl 110
+/* 109 is unused */
+/* 110 is unused */
#define __NR_vhangup 111
-#define __NR_idle 112
-#define __NR_vm86old 113
+/* 112 is unused */
+/* 113 is unused */
#define __NR_wait4 114
#define __NR_swapoff 115
#define __NR_sysinfo 116
#define __NR_ipc 117
#define __NR_fsync 118
-#define __NR_sigreturn 119
+/* 119 is unused */
#define __NR_clone 120
#define __NR_setdomainname 121
#define __NR_uname 122
-#define __NR_modify_ldt 123
+/* 123 is unused */
#define __NR_adjtimex 124
#define __NR_mprotect 125
-#define __NR_sigprocmask 126
-#define __NR_create_module 127
+/* 126 is unused */
+/* 127 is unused */
#define __NR_init_module 128
#define __NR_delete_module 129
-#define __NR_get_kernel_syms 130
+/* 130 is unused */
#define __NR_quotactl 131
#define __NR_getpgid 132
#define __NR_fchdir 133
#define __NR_bdflush 134
#define __NR_sysfs 135
#define __NR_personality 136
-#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
-#define __NR_setfsuid 138
-#define __NR_setfsgid 139
+/* 137 is unused */
+/* 138 is unused */
+/* 139 is unused */
#define __NR__llseek 140
#define __NR_getdents 141
#define __NR__newselect 142
#define __NR_sched_rr_get_interval 161
#define __NR_nanosleep 162
#define __NR_mremap 163
-#define __NR_setresuid 164
-#define __NR_getresuid 165
+/* 164 is unused */
+/* 165 is unused */
#define __NR_tas 166
-#define __NR_query_module 167
+/* 167 is unused */
#define __NR_poll 168
#define __NR_nfsservctl 169
-#define __NR_setresgid 170
-#define __NR_getresgid 171
+/* 170 is unused */
+/* 171 is unused */
#define __NR_prctl 172
#define __NR_rt_sigreturn 173
#define __NR_rt_sigaction 174
#define __NR_rt_sigsuspend 179
#define __NR_pread64 180
#define __NR_pwrite64 181
-#define __NR_chown 182
+/* 182 is unused */
#define __NR_getcwd 183
#define __NR_capget 184
#define __NR_capset 185
#define __NR_sigaltstack 186
#define __NR_sendfile 187
-#define __NR_getpmsg 188 /* some people actually want streams */
-#define __NR_putpmsg 189 /* some people actually want streams */
+/* 188 is unused */
+/* 189 is unused */
#define __NR_vfork 190
#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
#define __NR_mmap2 192
#define __NR_pivot_root 217
#define __NR_mincore 218
#define __NR_madvise 219
-#define __NR_madvise1 219 /* delete when C lib stub is removed */
#define __NR_getdents64 220
#define __NR_fcntl64 221
-#define __NR_security 223 /* syscall for security modules */
-#define __NR_gettid 224
-#define __NR_readahead 225
-#define __NR_setxattr 226
-#define __NR_lsetxattr 227
-#define __NR_fsetxattr 228
-#define __NR_getxattr 229
-#define __NR_lgetxattr 230
-#define __NR_fgetxattr 231
-#define __NR_listxattr 232
-#define __NR_llistxattr 233
-#define __NR_flistxattr 234
-#define __NR_removexattr 235
-#define __NR_lremovexattr 236
-#define __NR_fremovexattr 237
+/* 222 is unused */
+/* 223 is unused */
+#define __NR_gettid 224
+#define __NR_readahead 225
+#define __NR_setxattr 226
+#define __NR_lsetxattr 227
+#define __NR_fsetxattr 228
+#define __NR_getxattr 229
+#define __NR_lgetxattr 230
+#define __NR_fgetxattr 231
+#define __NR_listxattr 232
+#define __NR_llistxattr 233
+#define __NR_flistxattr 234
+#define __NR_removexattr 235
+#define __NR_lremovexattr 236
+#define __NR_fremovexattr 237
#define __NR_tkill 238
#define __NR_sendfile64 239
#define __NR_futex 240
#define __NR_sched_setaffinity 241
#define __NR_sched_getaffinity 242
-#define __NR_set_thread_area 243
-#define __NR_get_thread_area 244
-#define __NR_io_setup 245
-#define __NR_io_destroy 246
-#define __NR_io_getevents 247
-#define __NR_io_submit 248
-#define __NR_io_cancel 249
-#define __NR_fadvise64 250
-
-#define __NR_exit_group 252
-#define __NR_lookup_dcookie 253
-#define __NR_epoll_create 254
-#define __NR_epoll_ctl 255
-#define __NR_epoll_wait 256
-#define __NR_remap_file_pages 257
-#define __NR_set_tid_address 258
-#define __NR_timer_create 259
-#define __NR_timer_settime (__NR_timer_create+1)
-#define __NR_timer_gettime (__NR_timer_create+2)
-#define __NR_timer_getoverrun (__NR_timer_create+3)
-#define __NR_timer_delete (__NR_timer_create+4)
-#define __NR_clock_settime (__NR_timer_create+5)
-#define __NR_clock_gettime (__NR_timer_create+6)
-#define __NR_clock_getres (__NR_timer_create+7)
-#define __NR_clock_nanosleep (__NR_timer_create+8)
-#define __NR_statfs64 268
-#define __NR_fstatfs64 269
-#define __NR_tgkill 270
-#define __NR_utimes 271
-#define __NR_fadvise64_64 272
-#define __NR_vserver 273
-#define __NR_mbind 274
-#define __NR_get_mempolicy 275
-#define __NR_set_mempolicy 276
-#define __NR_mq_open 277
-#define __NR_mq_unlink (__NR_mq_open+1)
-#define __NR_mq_timedsend (__NR_mq_open+2)
-#define __NR_mq_timedreceive (__NR_mq_open+3)
-#define __NR_mq_notify (__NR_mq_open+4)
-#define __NR_mq_getsetattr (__NR_mq_open+5)
-#define __NR_sys_kexec_load 283
-#define __NR_waitid 284
+#define __NR_set_thread_area 243
+#define __NR_get_thread_area 244
+#define __NR_io_setup 245
+#define __NR_io_destroy 246
+#define __NR_io_getevents 247
+#define __NR_io_submit 248
+#define __NR_io_cancel 249
+#define __NR_fadvise64 250
+/* 251 is unused */
+#define __NR_exit_group 252
+#define __NR_lookup_dcookie 253
+#define __NR_epoll_create 254
+#define __NR_epoll_ctl 255
+#define __NR_epoll_wait 256
+#define __NR_remap_file_pages 257
+#define __NR_set_tid_address 258
+#define __NR_timer_create 259
+#define __NR_timer_settime (__NR_timer_create+1)
+#define __NR_timer_gettime (__NR_timer_create+2)
+#define __NR_timer_getoverrun (__NR_timer_create+3)
+#define __NR_timer_delete (__NR_timer_create+4)
+#define __NR_clock_settime (__NR_timer_create+5)
+#define __NR_clock_gettime (__NR_timer_create+6)
+#define __NR_clock_getres (__NR_timer_create+7)
+#define __NR_clock_nanosleep (__NR_timer_create+8)
+#define __NR_statfs64 268
+#define __NR_fstatfs64 269
+#define __NR_tgkill 270
+#define __NR_utimes 271
+#define __NR_fadvise64_64 272
+#define __NR_vserver 273
+#define __NR_mbind 274
+#define __NR_get_mempolicy 275
+#define __NR_set_mempolicy 276
+#define __NR_mq_open 277
+#define __NR_mq_unlink (__NR_mq_open+1)
+#define __NR_mq_timedsend (__NR_mq_open+2)
+#define __NR_mq_timedreceive (__NR_mq_open+3)
+#define __NR_mq_notify (__NR_mq_open+4)
+#define __NR_mq_getsetattr (__NR_mq_open+5)
+#define __NR_sys_kexec_load 283
+#define __NR_waitid 284
#define NR_syscalls 285
#ifdef __KERNEL__
#define __ARCH_WANT_IPC_PARSE_VERSION
-#define __ARCH_WANT_OLD_READDIR
-#define __ARCH_WANT_OLD_STAT
#define __ARCH_WANT_STAT64
#define __ARCH_WANT_SYS_ALARM
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_PAUSE
-#define __ARCH_WANT_SYS_SGETMASK
-#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME
#define __ARCH_WANT_SYS_WAITPID
#define __ARCH_WANT_SYS_FADVISE64
#define __ARCH_WANT_SYS_GETPGRP
#define __ARCH_WANT_SYS_LLSEEK
-#define __ARCH_WANT_SYS_NICE
-#define __ARCH_WANT_SYS_OLD_GETRLIMIT
+#define __ARCH_WANT_SYS_OLD_GETRLIMIT	/* will be unused */
#define __ARCH_WANT_SYS_OLDUMOUNT
-#define __ARCH_WANT_SYS_SIGPENDING
-#define __ARCH_WANT_SYS_SIGPROCMASK
#define __ARCH_WANT_SYS_RT_SIGACTION
#endif
*/
static __inline__ _syscall3(int,execve,const char *,file,char **,argv,char **,envp)
-asmlinkage int sys_modify_ldt(int func, void __user *ptr, unsigned long bytecount);
asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
unsigned long fd, unsigned long pgoff);
asmlinkage int sys_vfork(struct pt_regs regs);
asmlinkage int sys_pipe(unsigned long __user *fildes);
asmlinkage int sys_ptrace(long request, long pid, long addr, long data);
-asmlinkage long sys_iopl(unsigned long unused);
struct sigaction;
asmlinkage long sys_rt_sigaction(int sig,
const struct sigaction __user *act,
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif
{
pte_t *pte;
- pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT);
+ pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
if (pte) {
- clear_page(pte);
__flush_page_to_ram(pte);
flush_tlb_kernel_page(pte);
nocache_page(pte);
static inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
- struct page *page = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
+ struct page *page = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
pte_t *pte;
if(!page)
pte = kmap(page);
if (pte) {
- clear_page(pte);
__flush_page_to_ram(pte);
flush_tlb_kernel_page(pte);
nocache_page(pte);
#define ELF_PLAT_INIT(_r, load_addr) _r->a1 = 0
#define USE_ELF_CORE_DUMP
-#ifndef CONFIG_SUN3
#define ELF_EXEC_PAGESIZE 4096
-#else
-#define ELF_EXEC_PAGESIZE 8192
-#endif
/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
use of this is to invoke "./ld.so someprog" to test out a new version of
the loader. We need to make sure that it is out of the way of the program
that it will "exec", and that there is sufficient room for the brk. */
-#ifndef CONFIG_SUN3
#define ELF_ET_DYN_BASE 0xD0000000UL
-#else
-#define ELF_ET_DYN_BASE 0x0D800000UL
-#endif
#define ELF_CORE_COPY_REGS(pr_reg, regs) \
/* Bleech. */ \
typedef struct {
unsigned int __softirq_pending;
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* __M68K_HARDIRQ_H */
#define MCFSIM_DCRR 0x46 /* DRAM Refresh reg (r/w) */
#define MCFSIM_DCTR 0x4a /* DRAM Timing reg (r/w) */
-#define MCFSIM_DCAR0 0x4c /* DRAM 0 Address reg(r/w) */
-#define MCFSIM_DCMR0 0x50 /* DRAM 0 Mask reg (r/w) */
-#define MCFSIM_DCCR0 0x57 /* DRAM 0 Control reg (r/w) */
-#define MCFSIM_DCAR1 0x58 /* DRAM 1 Address reg (r/w) */
-#define MCFSIM_DCMR1 0x5c /* DRAM 1 Mask reg (r/w) */
-#define MCFSIM_DCCR1 0x63 /* DRAM 1 Control reg (r/w) */
+#define MCFSIM_DAR0 0x4c /* DRAM 0 Address reg(r/w) */
+#define MCFSIM_DMR0 0x50 /* DRAM 0 Mask reg (r/w) */
+#define MCFSIM_DCR0 0x57 /* DRAM 0 Control reg (r/w) */
+#define MCFSIM_DAR1 0x58 /* DRAM 1 Address reg (r/w) */
+#define MCFSIM_DMR1 0x5c /* DRAM 1 Mask reg (r/w) */
+#define MCFSIM_DCR1 0x63 /* DRAM 1 Control reg (r/w) */
#define MCFSIM_CSAR0 0x64 /* CS 0 Address 0 reg (r/w) */
#define MCFSIM_CSMR0 0x68 /* CS 0 Mask 0 reg (r/w) */
#define mcf_getipr() \
*((volatile unsigned long *) (MCF_MBAR + MCFSIM_IPR))
+/****************************************************************************/
+
+#ifdef __ASSEMBLER__
+
+/*
+ * The M5249C3 board needs a little help getting all its SIM devices
+ * initialized at kernel start time. dBUG doesn't set much up, so
+ * we need to do it manually.
+ */
+.macro m5249c3_setup
+ /*
+	 * Set MBAR1 and MBAR2, just in case they are not set.
+ */
+ movel #0x10000001,%a0
+ movec %a0,%MBAR /* map MBAR region */
+ subql #1,%a0 /* get MBAR address in a0 */
+
+ movel #0x80000001,%a1
+ movec %a1,#3086 /* map MBAR2 region */
+ subql #1,%a1 /* get MBAR2 address in a1 */
+
+ /*
+ * Move secondary interrupts to base at 128.
+ */
+ moveb #0x80,%d0
+ moveb %d0,0x16b(%a1) /* interrupt base register */
+
+ /*
+ * Work around broken CSMR0/DRAM vector problem.
+ */
+ movel #0x001F0021,%d0 /* disable C/I bit */
+ movel %d0,0x84(%a0) /* set CSMR0 */
+
+ /*
+	 * Disable the PLL first. (Who knows what state it is
+ * in here!).
+ */
+ movel 0x180(%a1),%d0 /* get current PLL value */
+ andl #0xfffffffe,%d0 /* PLL bypass first */
+ movel %d0,0x180(%a1) /* set PLL register */
+ nop
+
+#ifdef CONFIG_CLOCK_140MHz
+ /*
+	 * Set initial clock frequency. This assumes the M5249C3 board
+	 * is fitted with an 11.2896MHz crystal. It will program the
+	 * PLL for 140MHz. Let's go fast :-)
+ */
+ movel #0x125a40f0,%d0 /* set for 140MHz */
+ movel %d0,0x180(%a1) /* set PLL register */
+ orl #0x1,%d0
+ movel %d0,0x180(%a1) /* set PLL register */
+#endif
+
+ /*
+ * Setup CS1 for ethernet controller.
+ * (Setup as per M5249C3 doco).
+ */
+ movel #0xe0000000,%d0 /* CS1 mapped at 0xe0000000 */
+ movel %d0,0x8c(%a0)
+ movel #0x001f0021,%d0 /* CS1 size of 1Mb */
+ movel %d0,0x90(%a0)
+ movew #0x0080,%d0 /* CS1 = 16bit port, AA */
+ movew %d0,0x96(%a0)
+
+ /*
+ * Setup CS2 for IDE interface.
+ */
+ movel #0x50000000,%d0 /* CS2 mapped at 0x50000000 */
+ movel %d0,0x98(%a0)
+ movel #0x001f0001,%d0 /* CS2 size of 1MB */
+ movel %d0,0x9c(%a0)
+ movew #0x0080,%d0 /* CS2 = 16bit, TA */
+ movew %d0,0xa2(%a0)
+
+ movel #0x00107000,%d0 /* IDEconfig1 */
+ movel %d0,0x18c(%a1)
+ movel #0x000c0400,%d0 /* IDEconfig2 */
+ movel %d0,0x190(%a1)
+
+ movel #0x00080000,%d0 /* GPIO19, IDE reset bit */
+ orl %d0,0xc(%a1) /* function GPIO19 */
+ orl %d0,0x8(%a1) /* enable GPIO19 as output */
+ orl %d0,0x4(%a1) /* de-assert IDE reset */
+.endm
+
+#define PLATFORM_SETUP m5249c3_setup
+
+#endif /* __ASSEMBLER__ */
+
/****************************************************************************/
#endif /* m5249sim_h */
#define MCFINT_UART2 15 /* Interrupt number for UART2 */
#define MCFINT_PIT1 36 /* Interrupt number for PIT1 */
+/*
+ * SDRAM configuration registers.
+ */
+#ifdef CONFIG_M5271EVB
+#define MCFSIM_DCR 0x40 /* SDRAM control */
+#define MCFSIM_DACR0 0x48 /* SDRAM base address 0 */
+#define MCFSIM_DMR0 0x4c /* SDRAM address mask 0 */
+#define MCFSIM_DACR1 0x50 /* SDRAM base address 1 */
+#define MCFSIM_DMR1 0x54 /* SDRAM address mask 1 */
+#else
+#define MCFSIM_DMR 0x40 /* SDRAM mode */
+#define MCFSIM_DCR 0x44 /* SDRAM control */
+#define MCFSIM_DCFG1 0x48 /* SDRAM configuration 1 */
+#define MCFSIM_DCFG2 0x4c /* SDRAM configuration 2 */
+#define MCFSIM_DBAR0 0x50 /* SDRAM base address 0 */
+#define MCFSIM_DMR0 0x54 /* SDRAM address mask 0 */
+#define MCFSIM_DBAR1 0x58 /* SDRAM base address 1 */
+#define MCFSIM_DMR1 0x5c /* SDRAM address mask 1 */
+#endif
+
/****************************************************************************/
#endif /* m527xsim_h */
#define MCFINT_UART0 13 /* Interrupt number for UART0 */
#define MCFINT_PIT1 55 /* Interrupt number for PIT1 */
+/*
+ * SDRAM configuration registers.
+ */
+#define MCFSIM_DCR 0x44 /* SDRAM control */
+#define MCFSIM_DACR0 0x48 /* SDRAM base address 0 */
+#define MCFSIM_DMR0 0x4c /* SDRAM address mask 0 */
+#define MCFSIM_DACR1 0x50 /* SDRAM base address 1 */
+#define MCFSIM_DMR1 0x54 /* SDRAM address mask 1 */
+
/****************************************************************************/
#endif /* m528xsim_h */
--- /dev/null
+/****************************************************************************/
+
+/*
+ * mcfcache.h -- ColdFire CPU cache support code
+ *
+ * (C) Copyright 2004, Greg Ungerer <gerg@snapgear.com>
+ */
+
+/****************************************************************************/
+#ifndef __M68KNOMMU_MCFCACHE_H
+#define __M68KNOMMU_MCFCACHE_H
+/****************************************************************************/
+
+#include <linux/config.h>
+
+/*
+ * The different ColdFire families have different cache arrangements.
+ * Everything from a small instruction-only cache, to configurable
+ * data and/or instruction cache, to unified instruction/data, to
+ * Harvard-style separate instruction and data caches.
+ */
+
+#if defined(CONFIG_M5206) || defined(CONFIG_M5206e) || defined(CONFIG_M5272)
+/*
+ * Simple version 2 core cache. These have an instruction cache only,
+ * we just need to invalidate it and enable it.
+ */
+.macro CACHE_ENABLE
+ movel #0x01000000,%d0 /* invalidate cache cmd */
+ movec %d0,%CACR /* do invalidate cache */
+ movel #0x80000100,%d0 /* setup cache mask */
+ movec %d0,%CACR /* enable cache */
+.endm
+#endif /* CONFIG_M5206 || CONFIG_M5206e || CONFIG_M5272 */
+
+#if defined(CONFIG_M527x)
+/*
+ * New version 2 cores have a configurable split cache arrangement.
+ * For now I am just enabling instruction cache - but ultimately I
+ * think a split instruction/data cache would be better.
+ */
+.macro CACHE_ENABLE
+ movel #0x01400000,%d0
+ movec %d0,%CACR /* invalidate cache */
+ nop
+ movel #0x0000c000,%d0 /* set SDRAM cached only */
+ movec %d0,%ACR0
+ movel #0x00000000,%d0 /* no other regions cached */
+ movec %d0,%ACR1
+ movel #0x80400100,%d0 /* configure cache */
+ movec %d0,%CACR /* enable cache */
+ nop
+.endm
+#endif /* CONFIG_M527x */
+
+#if defined(CONFIG_M528x)
+/*
+ * The cache is totally broken on early 5282 silicon, so for now we
+ * disable it altogether.
+ */
+.macro CACHE_ENABLE
+ movel #0x01000000,%d0
+ movec %d0,%CACR /* invalidate cache */
+ nop
+ movel #0x0000c000,%d0 /* set SDRAM cached only */
+ movec %d0,%ACR0
+ movel #0x00000000,%d0 /* no other regions cached */
+ movec %d0,%ACR1
+ movel #0x00000000,%d0 /* configure cache */
+ movec %d0,%CACR /* enable cache */
+ nop
+.endm
+#endif /* CONFIG_M528x */
+
+#if defined(CONFIG_M5249) || defined(CONFIG_M5307)
+/*
+ * The version 3 core cache. Oddly enough the version 2 core 5249
+ * has the same SDRAM and cache setup as the version 3 cores.
+ * This is a single unified instruction/data cache.
+ */
+.macro CACHE_ENABLE
+ movel #0x01000000,%d0 /* invalidate whole cache */
+ movec %d0,%CACR
+ nop
+#if defined(DEBUGGER_COMPATIBLE_CACHE) || defined(CONFIG_SECUREEDGEMP3)
+ movel #0x0000c000,%d0 /* set SDRAM cached (write-thru) */
+#else
+ movel #0x0000c020,%d0 /* set SDRAM cached (copyback) */
+#endif
+ movec %d0,%ACR0
+ movel #0x00000000,%d0 /* no other regions cached */
+ movec %d0,%ACR1
+ movel #0xa0000200,%d0 /* enable cache */
+ movec %d0,%CACR
+ nop
+.endm
+#endif /* CONFIG_M5249 || CONFIG_M5307 */
+
+#if defined(CONFIG_M5407)
+/*
+ * Version 4 cores have true Harvard-style separate instruction
+ * and data caches. Invalidate and enable the caches, and also enable
+ * the write buffers and the branch accelerator.
+ */
+.macro CACHE_ENABLE
+ movel #0x01040100,%d0 /* invalidate whole cache */
+ movec %d0,%CACR
+ nop
+ movel #0x000fc000,%d0 /* set SDRAM cached only */
+ movec %d0, %ACR0
+ movel #0x00000000,%d0 /* no other regions cached */
+ movec %d0, %ACR1
+ movel #0x000fc000,%d0 /* set SDRAM cached only */
+ movec %d0, %ACR2
+ movel #0x00000000,%d0 /* no other regions cached */
+ movec %d0, %ACR3
+ movel #0xb6088400,%d0 /* enable caches */
+ movec %d0,%CACR
+ nop
+.endm
+#endif /* CONFIG_M5407 */
+
+
+/****************************************************************************/
+#endif /* __M68KNOMMU_MCFCACHE_H */
*/
#define TASK_UNMAPPED_BASE 0
-/*
- * Bus types
- */
-#define MCA_bus 0
-
/*
* if you change this structure, you must change the code and offsets
* in m68k/machasm.S
({ \
unsigned long eip = 0; \
if ((tsk)->thread.esp0 > PAGE_SIZE && \
- MAP_NR((tsk)->thread.esp0) < max_mapnr) \
+ (virt_addr_valid((tsk)->thread.esp0))) \
eip = ((struct pt_regs *) (tsk)->thread.esp0)->pc; \
eip; })
#define KSTK_ESP(tsk) ((tsk) == current ? rdusp() : (tsk)->thread.usp)
#ifdef __KERNEL__
-/*
- * Size of kernel stack for each process. This must be a power of 2...
- */
-#define THREAD_SIZE 8192 /* 2 pages */
-
-
#ifndef __ASSEMBLY__
/*
"move.l %%sp, %0 \n\t"
"and.l %1, %0"
: "=&d"(ti)
- : "d" (~(THREAD_SIZE-1))
+ : "di" (~(THREAD_SIZE-1))
);
return ti;
}
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_POLLING_NRFLAG 4 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
+#define TIF_MEMDIE 5
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
|| defined (CONFIG_CPU_R4X00) \
|| defined (CONFIG_CPU_R5000) \
|| defined (CONFIG_CPU_NEVADA) \
+ || defined (CONFIG_CPU_TX49XX) \
|| defined (CONFIG_CPU_MIPS64)
#define KUSIZE 0x0000010000000000 /* 2^^40 */
#define KUSIZE_64 0x0000010000000000 /* 2^^40 */
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
*a |= mask;
__bi_local_irq_restore(flags);
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
*a &= ~mask;
__bi_local_irq_restore(flags);
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
*a ^= mask;
__bi_local_irq_restore(flags);
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
retval = (mask & *a) != 0;
*a |= mask;
int retval;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
retval = (mask & *a) != 0;
*a |= mask;
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
retval = (mask & *a) != 0;
*a &= ~mask;
__bi_flags;
a += nr >> SZLONG_LOG;
- mask = 1 << (nr & SZLONG_MASK);
+ mask = 1UL << (nr & SZLONG_MASK);
__bi_local_irq_save(flags);
retval = (mask & *a) != 0;
*a ^= mask;
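
A note on the `1` to `1UL` changes in the hunks above (illustration only, not from the patch): on a 64-bit kernel, building the mask from a plain int sign-extends once bit 31 is reached, and shifting an int by 32 or more positions is undefined outright; promoting the constant to unsigned long keeps the mask correct for any bit number up to BITS_PER_LONG-1. A small host-side demonstration of the sign-extension half:

    #include <assert.h>

    int main(void)
    {
            /* What "1 << 31" effectively becomes once the int result is
             * widened to a 64-bit unsigned long: the sign bit smears. */
            unsigned long bad  = (unsigned long)(int)(1u << 31);
            unsigned long good = 1UL << 31;

            assert(good == 0x80000000UL);
            if (sizeof(unsigned long) > 4)
                    assert(bad != good);  /* sign-extended mask on LP64 */
            else
                    assert(bad == good);  /* no widening on a 32-bit host */
            return 0;
    }
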
#define BRK_STACKOVERFLOW 9 /* For Ada stackchecking */
#define BRK_NORLD 10 /* No rld found - not used by Linux/MIPS */
#define _BRK_THREADBP 11 /* For threads, user bp (used by debuggers) */
-#define BRK_MULOVF 1023 /* Multiply overflow */
#define BRK_BUG 512 /* Used by BUG() */
+#define BRK_MULOVF 1023 /* Multiply overflow */
#endif /* __ASM_BREAK_H */
return (void *) (regs->regs[29] - len);
}
+#if defined (__MIPSEL__)
+#define __COMPAT_ENDIAN_SWAP__ 1
+#endif
#endif /* _ASM_COMPAT_H */
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2003 Ralf Baechle
+ * Copyright (C) 2003, 2004 Ralf Baechle
*/
#ifndef __ASM_CPU_FEATURES_H
#define __ASM_CPU_FEATURES_H
+#include <linux/config.h>
+
#include <asm/cpu.h>
#include <asm/cpu-info.h>
#include <cpu-feature-overrides.h>
#define cpu_has_ic_fills_f_dc (cpu_data[0].icache.flags & MIPS_CACHE_IC_F_DC)
#endif
+/*
+ * I-Cache snoops remote store. This only matters on SMP. Some multiprocessors
+ * such as the R10000 have I-Caches that snoop local stores; the embedded ones
+ * don't. For maintaining I-cache coherency this means we need to flush the
+ * D-cache all the way back to wherever the I-cache does refills from, so the
+ * I-cache has a chance to see the new data at all. Then we have to flush the
+ * I-cache also.
+ * Note we may have been rescheduled and may no longer be running on the CPU
+ * that did the store so we can't optimize this into only doing the flush on
+ * the local CPU.
+ */
+#ifndef cpu_icache_snoops_remote_store
+#ifdef CONFIG_SMP
+#define cpu_icache_snoops_remote_store (cpu_data[0].icache.flags & MIPS_IC_SNOOPS_REMOTE)
+#else
+#define cpu_icache_snoops_remote_store 1
+#endif
+#endif
+
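(Editorial sketch, not part of this change: an SMP cache-flush path would
consult the new predicate roughly as below; the helper names are hypothetical.)

	static void example_flush_icache_range(unsigned long start, unsigned long end)
	{
		/*
		 * If the I-cache cannot see stores from other CPUs, push the
		 * new instructions out of the D-cache (and S-cache) first ...
		 */
		if (!cpu_icache_snoops_remote_store)
			example_writeback_dcache_range(start, end);	/* hypothetical */

		/* ... then invalidate the stale I-cache lines. */
		example_invalidate_icache_range(start, end);		/* hypothetical */
	}
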
/*
* Certain CPUs may throw bizarre exceptions if not the whole cacheline
* contains valid instructions. For these we ensure proper alignment of
#define MIPS_CACHE_VTAG 0x00000002 /* Virtually tagged cache */
#define MIPS_CACHE_ALIASES 0x00000004 /* Cache could have aliases */
#define MIPS_CACHE_IC_F_DC 0x00000008 /* Ic can refill from D-cache */
+#define MIPS_IC_SNOOPS_REMOTE 0x00000010 /* Ic snoops remote stores */
struct cpuinfo_mips {
unsigned long udelay_val;
#include <linux/kernel.h>
#define db_assert(x) if (!(x)) { \
- panic("assertion failed at %s:%d: %s\n", __FILE__, __LINE__, #x); }
+ panic("assertion failed at %s:%d: %s", __FILE__, __LINE__, #x); }
#define db_warn(x) if (!(x)) { \
- printk(KERN_WARNING "warning at %s:%d: %s\n", __FILE__, __LINE__, #x); }
+ printk(KERN_WARNING "warning at %s:%d: %s", __FILE__, __LINE__, #x); }
#define db_verify(x, y) db_assert(x y)
#define db_verify_warn(x, y) db_warn(x y)
#define db_run(x) do { x; } while (0)
#define EF_MIPS_ABI_O64 0x00002000 /* O32 extended for 64 bit. */
#define PT_MIPS_REGINFO 0x70000000
-#define PT_MIPS_OPTIONS 0x70000001
+#define PT_MIPS_RTPROC 0x70000001
+#define PT_MIPS_OPTIONS 0x70000002
/* Flags in the e_flags field of the header */
#define EF_MIPS_NOREORDER 0x00000001
#define DT_MIPS_ICHECKSUM 0x70000003
#define DT_MIPS_IVERSION 0x70000004
#define DT_MIPS_FLAGS 0x70000005
- #define RHF_NONE 0
- #define RHF_HARDWAY 1
- #define RHF_NOTPOT 2
+ #define RHF_NONE 0x00000000
+ #define RHF_HARDWAY 0x00000001
+ #define RHF_NOTPOT 0x00000002
+ #define RHF_SGI_ONLY 0x00000010
#define DT_MIPS_BASE_ADDRESS 0x70000006
#define DT_MIPS_CONFLICT 0x70000008
#define DT_MIPS_LIBLIST 0x70000009
#endif /* CONFIG_MIPS64 */
+extern void dump_regs(elf_greg_t *, struct pt_regs *regs);
+extern int dump_task_fpu(struct task_struct *, elf_fpregset_t *);
+
+#define ELF_CORE_COPY_REGS(elf_regs, regs) \
+ dump_regs((elf_greg_t *)&(elf_regs), regs);
+#define ELF_CORE_COPY_FPREGS(tsk, elf_fpregs) \
+ dump_task_fpu(tsk, elf_fpregs)
+
#endif /* __KERNEL__ */
/* This one accepts IRIX binaries. */
-#define irix_elf_check_arch(hdr) ((hdr)->e_machine == EM_MIPS)
+#define irix_elf_check_arch(hdr) ((hdr)->e_flags & RHF_SGI_ONLY)
#define USE_ELF_CORE_DUMP
#define ELF_EXEC_PAGESIZE PAGE_SIZE
-#define ELF_CORE_COPY_REGS(_dest,_regs) \
- memcpy((char *) &_dest, (char *) _regs, \
- sizeof(struct pt_regs));
-
/* This yields a mask that user programs can use to figure out what
instruction set this cpu supports. This could be done in userspace,
but it's not easy, and we've already done it here. */
/*
- * Carsten Langgaard, carstenl@mips.com
- * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved.
+ * Copyright (C) 2000, 2004, 2005 MIPS Technologies, Inc.
+ * All rights reserved.
+ * Authors: Carsten Langgaard <carstenl@mips.com>
+ * Maciej W. Rozycki <macro@mips.com>
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can distribute it and/or modify it
* under the terms of the GNU General Public License (Version 2) as
#ifndef _ASM_GT64120_H
#define _ASM_GT64120_H
-#include <linux/config.h>
#include <asm/addrspace.h>
#include <asm/byteorder.h>
-#define MSK(n) ((1 << (n)) - 1)
+#define MSK(n) ((1 << (n)) - 1)
/*
* Register offset addresses
*/
+/* CPU Configuration. */
#define GT_CPU_OFS 0x000
-/*
- * Interrupt Registers
- */
+#define GT_MULTI_OFS 0x120
+
+/* CPU Address Decode. */
#define GT_SCS10LD_OFS 0x008
#define GT_SCS10HD_OFS 0x010
#define GT_SCS32LD_OFS 0x018
#define GT_PCI0M0LD_OFS 0x058
#define GT_PCI0M0HD_OFS 0x060
#define GT_ISD_OFS 0x068
+
#define GT_PCI0M1LD_OFS 0x080
#define GT_PCI0M1HD_OFS 0x088
#define GT_PCI1IOLD_OFS 0x090
#define GT_PCI1M0HD_OFS 0x0a8
#define GT_PCI1M1LD_OFS 0x0b0
#define GT_PCI1M1HD_OFS 0x0b8
+
+#define GT_SCS10AR_OFS 0x0d0
+#define GT_SCS32AR_OFS 0x0d8
+#define GT_CS20R_OFS 0x0e0
+#define GT_CS3BOOTR_OFS 0x0e8
-/*
- * GT64120A only
- */
#define GT_PCI0IOREMAP_OFS 0x0f0
#define GT_PCI0M0REMAP_OFS 0x0f8
#define GT_PCI0M1REMAP_OFS 0x100
#define GT_PCI1M0REMAP_OFS 0x110
#define GT_PCI1M1REMAP_OFS 0x118
+/* CPU Error Report. */
+#define GT_CPUERR_ADDRLO_OFS 0x070
+#define GT_CPUERR_ADDRHI_OFS 0x078
+
+#define GT_CPUERR_DATALO_OFS 0x128 /* GT-64120A only */
+#define GT_CPUERR_DATAHI_OFS 0x130 /* GT-64120A only */
+#define GT_CPUERR_PARITY_OFS 0x138 /* GT-64120A only */
+
+/* CPU Sync Barrier. */
+#define GT_PCI0SYNC_OFS 0x0c0
+#define GT_PCI1SYNC_OFS 0x0c8
+
+/* SDRAM and Device Address Decode. */
#define GT_SCS0LD_OFS 0x400
#define GT_SCS0HD_OFS 0x404
#define GT_SCS1LD_OFS 0x408
#define GT_BOOTLD_OFS 0x440
#define GT_BOOTHD_OFS 0x444
-#define GT_SDRAM_B0_OFS 0x44c
+#define GT_ADERR_OFS 0x470
+
+/* SDRAM Configuration. */
#define GT_SDRAM_CFG_OFS 0x448
-#define GT_SDRAM_B2_OFS 0x454
+
#define GT_SDRAM_OPMODE_OFS 0x474
#define GT_SDRAM_BM_OFS 0x478
#define GT_SDRAM_ADDRDECODE_OFS 0x47c
-#define GT_PCI0_CMD_OFS 0xc00 /* GT64120A only */
+/* SDRAM Parameters. */
+#define GT_SDRAM_B0_OFS 0x44c
+#define GT_SDRAM_B1_OFS 0x450
+#define GT_SDRAM_B2_OFS 0x454
+#define GT_SDRAM_B3_OFS 0x458
+
+/* Device Parameters. */
+#define GT_DEV_B0_OFS 0x45c
+#define GT_DEV_B1_OFS 0x460
+#define GT_DEV_B2_OFS 0x464
+#define GT_DEV_B3_OFS 0x468
+#define GT_DEV_BOOT_OFS 0x46c
+
+/* ECC. */
+#define GT_ECC_ERRDATALO 0x480 /* GT-64120A only */
+#define GT_ECC_ERRDATAHI 0x484 /* GT-64120A only */
+#define GT_ECC_MEM 0x488 /* GT-64120A only */
+#define GT_ECC_CALC 0x48c /* GT-64120A only */
+#define GT_ECC_ERRADDR 0x490 /* GT-64120A only */
+
+/* DMA Record. */
+#define GT_DMA0_CNT_OFS 0x800
+#define GT_DMA1_CNT_OFS 0x804
+#define GT_DMA2_CNT_OFS 0x808
+#define GT_DMA3_CNT_OFS 0x80c
+#define GT_DMA0_SA_OFS 0x810
+#define GT_DMA1_SA_OFS 0x814
+#define GT_DMA2_SA_OFS 0x818
+#define GT_DMA3_SA_OFS 0x81c
+#define GT_DMA0_DA_OFS 0x820
+#define GT_DMA1_DA_OFS 0x824
+#define GT_DMA2_DA_OFS 0x828
+#define GT_DMA3_DA_OFS 0x82c
+#define GT_DMA0_NEXT_OFS 0x830
+#define GT_DMA1_NEXT_OFS 0x834
+#define GT_DMA2_NEXT_OFS 0x838
+#define GT_DMA3_NEXT_OFS 0x83c
+
+#define GT_DMA0_CUR_OFS 0x870
+#define GT_DMA1_CUR_OFS 0x874
+#define GT_DMA2_CUR_OFS 0x878
+#define GT_DMA3_CUR_OFS 0x87c
+
+/* DMA Channel Control. */
+#define GT_DMA0_CTRL_OFS 0x840
+#define GT_DMA1_CTRL_OFS 0x844
+#define GT_DMA2_CTRL_OFS 0x848
+#define GT_DMA3_CTRL_OFS 0x84c
+
+/* DMA Arbiter. */
+#define GT_DMA_ARB_OFS 0x860
+
+/* Timer/Counter. */
+#define GT_TC0_OFS 0x850
+#define GT_TC1_OFS 0x854
+#define GT_TC2_OFS 0x858
+#define GT_TC3_OFS 0x85c
+
+#define GT_TC_CONTROL_OFS 0x864
+
+/* PCI Internal. */
+#define GT_PCI0_CMD_OFS 0xc00
#define GT_PCI0_TOR_OFS 0xc04
-#define GT_PCI0_BS_SCS10_OFS 0xc08
-#define GT_PCI0_BS_SCS32_OFS 0xc0c
-#define GT_INTRCAUSE_OFS 0xc18
-#define GT_INTRMASK_OFS 0xc1c /* GT64120A only */
+#define GT_PCI0_BS_SCS10_OFS 0xc08
+#define GT_PCI0_BS_SCS32_OFS 0xc0c
+#define GT_PCI0_BS_CS20_OFS 0xc10
+#define GT_PCI0_BS_CS3BT_OFS 0xc14
+
+#define GT_PCI1_IACK_OFS 0xc30
#define GT_PCI0_IACK_OFS 0xc34
+
#define GT_PCI0_BARE_OFS 0xc3c
-#define GT_HINTRCAUSE_OFS 0xc98 /* GT64120A only */
-#define GT_HINTRMASK_OFS 0xc9c /* GT64120A only */
-#define GT_PCI1_CFGADDR_OFS 0xcf0 /* GT64120A only */
-#define GT_PCI1_CFGDATA_OFS 0xcf4 /* GT64120A only */
+#define GT_PCI0_PREFMBR_OFS 0xc40
+
+#define GT_PCI0_SCS10_BAR_OFS 0xc48
+#define GT_PCI0_SCS32_BAR_OFS 0xc4c
+#define GT_PCI0_CS20_BAR_OFS 0xc50
+#define GT_PCI0_CS3BT_BAR_OFS 0xc54
+#define GT_PCI0_SSCS10_BAR_OFS 0xc58
+#define GT_PCI0_SSCS32_BAR_OFS 0xc5c
+
+#define GT_PCI0_SCS3BT_BAR_OFS 0xc64
+
+#define GT_PCI1_CMD_OFS 0xc80
+#define GT_PCI1_TOR_OFS 0xc84
+#define GT_PCI1_BS_SCS10_OFS 0xc88
+#define GT_PCI1_BS_SCS32_OFS 0xc8c
+#define GT_PCI1_BS_CS20_OFS 0xc90
+#define GT_PCI1_BS_CS3BT_OFS 0xc94
+
+#define GT_PCI1_BARE_OFS 0xcbc
+#define GT_PCI1_PREFMBR_OFS 0xcc0
+
+#define GT_PCI1_SCS10_BAR_OFS 0xcc8
+#define GT_PCI1_SCS32_BAR_OFS 0xccc
+#define GT_PCI1_CS20_BAR_OFS 0xcd0
+#define GT_PCI1_CS3BT_BAR_OFS 0xcd4
+#define GT_PCI1_SSCS10_BAR_OFS 0xcd8
+#define GT_PCI1_SSCS32_BAR_OFS 0xcdc
+
+#define GT_PCI1_SCS3BT_BAR_OFS 0xce4
+
+#define GT_PCI1_CFGADDR_OFS 0xcf0
+#define GT_PCI1_CFGDATA_OFS 0xcf4
#define GT_PCI0_CFGADDR_OFS 0xcf8
#define GT_PCI0_CFGDATA_OFS 0xcfc
+/* Interrupts. */
+#define GT_INTRCAUSE_OFS 0xc18
+#define GT_INTRMASK_OFS 0xc1c
+
+#define GT_PCI0_ICMASK_OFS 0xc24
+#define GT_PCI0_SERR0MASK_OFS 0xc28
+
+#define GT_CPU_INTSEL_OFS 0xc70
+#define GT_PCI0_INTSEL_OFS 0xc74
+
+#define GT_HINTRCAUSE_OFS 0xc98
+#define GT_HINTRMASK_OFS 0xc9c
+
+#define GT_PCI0_HICMASK_OFS 0xca4
+#define GT_PCI1_SERR1MASK_OFS 0xca8
-/*
- * Timer/Counter. GT64120A only.
- */
-#define GT_TC0_OFS 0x850
-#define GT_TC1_OFS 0x854
-#define GT_TC2_OFS 0x858
-#define GT_TC3_OFS 0x85C
-#define GT_TC_CONTROL_OFS 0x864
/*
* I2O Support Registers
/*
* Register encodings
*/
-#define GT_CPU_ENDIAN_SHF 12
-#define GT_CPU_ENDIAN_MSK (MSK(1) << GT_CPU_ENDIAN_SHF)
-#define GT_CPU_ENDIAN_BIT GT_CPU_ENDIAN_MSK
+#define GT_CPU_ENDIAN_SHF 12
+#define GT_CPU_ENDIAN_MSK (MSK(1) << GT_CPU_ENDIAN_SHF)
+#define GT_CPU_ENDIAN_BIT GT_CPU_ENDIAN_MSK
#define GT_CPU_WR_SHF 16
#define GT_CPU_WR_MSK (MSK(1) << GT_CPU_WR_SHF)
#define GT_CPU_WR_BIT GT_CPU_WR_MSK
#define GT_CPU_WR_DDDD 1
+#define GT_PCI_DCRM_SHF 21
+#define GT_PCI_LD_SHF 0
+#define GT_PCI_LD_MSK (MSK(15) << GT_PCI_LD_SHF)
+#define GT_PCI_HD_SHF 0
+#define GT_PCI_HD_MSK (MSK(7) << GT_PCI_HD_SHF)
+#define GT_PCI_REMAP_SHF 0
+#define GT_PCI_REMAP_MSK (MSK(11) << GT_PCI_REMAP_SHF)
+
+
#define GT_CFGADDR_CFGEN_SHF 31
#define GT_CFGADDR_CFGEN_MSK (MSK(1) << GT_CFGADDR_CFGEN_SHF)
#define GT_CFGADDR_CFGEN_BIT GT_CFGADDR_CFGEN_MSK
#define GT_SDRAM_CFG_REFINT_MSK (MSK(14) << GT_SDRAM_CFG_REFINT_SHF)
#define GT_SDRAM_CFG_NINTERLEAVE_SHF 14
-#define GT_SDRAM_CFG_NINTERLEAVE_MSK (MSK(1) << GT_SDRAM_CFG_NINTERLEAVE_SHF)
+#define GT_SDRAM_CFG_NINTERLEAVE_MSK (MSK(1) << GT_SDRAM_CFG_NINTERLEAVE_SHF)
#define GT_SDRAM_CFG_NINTERLEAVE_BIT GT_SDRAM_CFG_NINTERLEAVE_MSK
#define GT_SDRAM_CFG_RMW_SHF 15
#define GT_PCI0_CFGADDR_REGNUM_SHF 2
#define GT_PCI0_CFGADDR_REGNUM_MSK (MSK(6) << GT_PCI0_CFGADDR_REGNUM_SHF)
#define GT_PCI0_CFGADDR_FUNCTNUM_SHF 8
-#define GT_PCI0_CFGADDR_FUNCTNUM_MSK (MSK(3) << GT_PCI0_CFGADDR_FUNCTNUM_SHF)
+#define GT_PCI0_CFGADDR_FUNCTNUM_MSK (MSK(3) << GT_PCI0_CFGADDR_FUNCTNUM_SHF)
#define GT_PCI0_CFGADDR_DEVNUM_SHF 11
#define GT_PCI0_CFGADDR_DEVNUM_MSK (MSK(5) << GT_PCI0_CFGADDR_DEVNUM_SHF)
#define GT_PCI0_CFGADDR_BUSNUM_SHF 16
#define GT_PCI0_CFGADDR_CONFIGEN_MSK (MSK(1) << GT_PCI0_CFGADDR_CONFIGEN_SHF)
#define GT_PCI0_CFGADDR_CONFIGEN_BIT GT_PCI0_CFGADDR_CONFIGEN_MSK
-#define GT_PCI0_CMD_MBYTESWAP_SHF 0
-#define GT_PCI0_CMD_MBYTESWAP_MSK (MSK(1) << GT_PCI0_CMD_MBYTESWAP_SHF)
-#define GT_PCI0_CMD_MBYTESWAP_BIT GT_PCI0_CMD_MBYTESWAP_MSK
-#define GT_PCI0_CMD_MWORDSWAP_SHF 10
-#define GT_PCI0_CMD_MWORDSWAP_MSK (MSK(1) << GT_PCI0_CMD_MWORDSWAP_SHF)
-#define GT_PCI0_CMD_MWORDSWAP_BIT GT_PCI0_CMD_MWORDSWAP_MSK
-#define GT_PCI0_CMD_SBYTESWAP_SHF 16
-#define GT_PCI0_CMD_SBYTESWAP_MSK (MSK(1) << GT_PCI0_CMD_SBYTESWAP_SHF)
-#define GT_PCI0_CMD_SBYTESWAP_BIT GT_PCI0_CMD_SBYTESWAP_MSK
-#define GT_PCI0_CMD_SWORDSWAP_SHF 11
-#define GT_PCI0_CMD_SWORDSWAP_MSK (MSK(1) << GT_PCI0_CMD_SWORDSWAP_SHF)
-#define GT_PCI0_CMD_SWORDSWAP_BIT GT_PCI0_CMD_SWORDSWAP_MSK
+#define GT_PCI0_CMD_MBYTESWAP_SHF 0
+#define GT_PCI0_CMD_MBYTESWAP_MSK (MSK(1) << GT_PCI0_CMD_MBYTESWAP_SHF)
+#define GT_PCI0_CMD_MBYTESWAP_BIT GT_PCI0_CMD_MBYTESWAP_MSK
+#define GT_PCI0_CMD_MWORDSWAP_SHF 10
+#define GT_PCI0_CMD_MWORDSWAP_MSK (MSK(1) << GT_PCI0_CMD_MWORDSWAP_SHF)
+#define GT_PCI0_CMD_MWORDSWAP_BIT GT_PCI0_CMD_MWORDSWAP_MSK
+#define GT_PCI0_CMD_SBYTESWAP_SHF 16
+#define GT_PCI0_CMD_SBYTESWAP_MSK (MSK(1) << GT_PCI0_CMD_SBYTESWAP_SHF)
+#define GT_PCI0_CMD_SBYTESWAP_BIT GT_PCI0_CMD_SBYTESWAP_MSK
+#define GT_PCI0_CMD_SWORDSWAP_SHF 11
+#define GT_PCI0_CMD_SWORDSWAP_MSK (MSK(1) << GT_PCI0_CMD_SWORDSWAP_SHF)
+#define GT_PCI0_CMD_SWORDSWAP_BIT GT_PCI0_CMD_SWORDSWAP_MSK
/*
* Misc
#define GT_DEF_PCI0_MEM0_SIZE 0x02000000UL
#define GT_DEF_BASE 0x14000000UL
-#define GT_MAX_BANKSIZE (256 * 1024 * 1024) /* Max 256MB bank */
-#define GT_LATTIM_MIN 6 /* Minimum lat */
+#define GT_MAX_BANKSIZE (256 * 1024 * 1024) /* Max 256MB bank */
+#define GT_LATTIM_MIN 6 /* Minimum lat */
/*
* The gt64120_dep.h file must define the following macros
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 1997, 1998, 1999, 2000, 2001 by Ralf Baechle
+ * Copyright (C) 1997, 98, 99, 2000, 01, 05 Ralf Baechle (ralf@linux-mips.org)
* Copyright (C) 1999, 2000 Silicon Graphics, Inc.
* Copyright (C) 2001 MIPS Technologies, Inc.
*/
#ifndef _ASM_HARDIRQ_H
#define _ASM_HARDIRQ_H
-#include <linux/config.h>
#include <linux/threads.h>
#include <linux/irq.h>
* Copyright (C) 1994, 1995 Waldorf GmbH
* Copyright (C) 1994 - 2000 Ralf Baechle
* Copyright (C) 1999, 2000 Silicon Graphics, Inc.
+ * Copyright (C) 2004, 2005 MIPS Technologies, Inc. All rights reserved.
+ * Author: Maciej W. Rozycki <macro@mips.com>
*/
#ifndef _ASM_IO_H
#define _ASM_IO_H
#include <linux/config.h>
#include <linux/compiler.h>
+#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/addrspace.h>
+#include <asm/bug.h>
+#include <asm/byteorder.h>
#include <asm/cpu.h>
#include <asm/cpu-features.h>
#include <asm/page.h>
#include <asm/pgtable-bits.h>
#include <asm/processor.h>
-#include <asm/byteorder.h>
+
#include <mangle-port.h>
/*
#undef CONF_SLOWDOWN_IO
/*
- * Sane hardware offers swapping of I/O space accesses in hardware; less
- * sane hardware forces software to fiddle with this ...
+ * Raw operations are never swapped in software.  On the other hand, values that raw
+ * operations are working on may or may not have been swapped by the bus
+ * hardware. An example use would be for flash memory that's used for
+ * execute in place.
*/
-#if defined(CONFIG_SWAP_IO_SPACE) && defined(__MIPSEB__)
+# define __raw_ioswabb(x) (x)
+# define __raw_ioswabw(x) (x)
+# define __raw_ioswabl(x) (x)
+# define __raw_ioswabq(x) (x)
-#define __ioswab8(x) (x)
+/*
+ * Sane hardware offers swapping of PCI/ISA I/O space accesses in hardware;
+ * less sane hardware forces software to fiddle with this...
+ */
+#if defined(CONFIG_SWAP_IO_SPACE)
-#ifdef CONFIG_SGI_IP22
+# define ioswabb(x) (x)
+# ifdef CONFIG_SGI_IP22
/*
 * IP22 seems braindead enough to swap 16-bit values in hardware, but
 * not 32-bit ones.  Go figure... Can't tell without documentation.
*/
-#define __ioswab16(x) (x)
-#else
-#define __ioswab16(x) swab16(x)
-#endif
-#define __ioswab32(x) swab32(x)
-#define __ioswab64(x) swab64(x)
+# define ioswabw(x) (x)
+# else
+# define ioswabw(x) le16_to_cpu(x)
+# endif
+# define ioswabl(x) le32_to_cpu(x)
+# define ioswabq(x) le64_to_cpu(x)
#else
-#define __ioswab8(x) (x)
-#define __ioswab16(x) (x)
-#define __ioswab32(x) (x)
-#define __ioswab64(x) (x)
+# define ioswabb(x) (x)
+# define ioswabw(x) (x)
+# define ioswabl(x) (x)
+# define ioswabq(x) (x)
#endif
+/*
+ * Native bus accesses never swapped.
+ */
+#define bus_ioswabb(x) (x)
+#define bus_ioswabw(x) (x)
+#define bus_ioswabl(x) (x)
+#define bus_ioswabq(x) (x)
+
+#define __bus_ioswabq bus_ioswabq
+
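(Editorial sketch, not part of the header: the practical difference between the
plain and __raw_ accessors on a big-endian kernel with CONFIG_SWAP_IO_SPACE;
ioaddr and REG_STATUS are hypothetical names.)

	u32 cooked = readl(ioaddr + REG_STATUS);	/* ioswabl() applied: LE register seen in CPU order */
	u32 raw    = __raw_readl(ioaddr + REG_STATUS);	/* no software swapping: value as delivered by the bus */
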
#define IO_SPACE_LIMIT 0xffff
/*
static inline void iounmap(volatile void __iomem *addr)
{
- if (cpu_has_64bits)
+ if (cpu_has_64bit_addresses)
return;
__iounmap(addr);
}
-#define __raw_readb(addr) \
- (*(volatile unsigned char *) __swizzle_addr_b((unsigned long)(addr)))
-#define __raw_readw(addr) \
- (*(volatile unsigned short *) __swizzle_addr_w((unsigned long)(addr)))
-#define __raw_readl(addr) \
- (*(volatile unsigned int *) __swizzle_addr_l((unsigned long)(addr)))
-#ifdef CONFIG_MIPS32
-#define ____raw_readq(addr) \
-({ \
- u64 __res; \
+
+#define __BUILD_MEMORY_SINGLE(pfx, bwlq, type, irq) \
\
- __asm__ __volatile__ ( \
- " .set mips3 # ____raw_readq \n" \
- " ld %L0, (%1) \n" \
- " dsra32 %M0, %L0, 0 \n" \
- " sll %L0, %L0, 0 \n" \
- " .set mips0 \n" \
- : "=r" (__res) \
- : "r" (__swizzle_addr_q((unsigned long)(addr)))); \
- __res; \
-})
-#define __raw_readq(addr) \
-({ \
- unsigned long __flags; \
- u64 __res; \
+static inline void pfx##write##bwlq(type val, \
+ volatile void __iomem *mem) \
+{ \
+ volatile type *__mem; \
+ type __val; \
\
- local_irq_save(__flags); \
- __res = ____raw_readq(addr); \
- local_irq_restore(__flags); \
- __res; \
-})
-#endif
-#ifdef CONFIG_MIPS64
-#define ____raw_readq(addr) \
- (*(volatile unsigned long *)__swizzle_addr_q((unsigned long)(addr)))
-#define __raw_readq(addr) ____raw_readq(addr)
-#endif
+ __mem = (void *)__swizzle_addr_##bwlq((unsigned long)(mem)); \
+ \
+ __val = pfx##ioswab##bwlq(val); \
+ \
+ if (sizeof(type) != sizeof(u64) || sizeof(u64) == sizeof(long)) \
+ *__mem = __val; \
+ else if (cpu_has_64bits) { \
+ unsigned long __flags; \
+ type __tmp; \
+ \
+ if (irq) \
+ local_irq_save(__flags); \
+ __asm__ __volatile__( \
+ ".set mips3" "\t\t# __writeq""\n\t" \
+ "dsll32 %L0, %L0, 0" "\n\t" \
+ "dsrl32 %L0, %L0, 0" "\n\t" \
+ "dsll32 %M0, %M0, 0" "\n\t" \
+ "or %L0, %L0, %M0" "\n\t" \
+ "sd %L0, %2" "\n\t" \
+ ".set mips0" "\n" \
+ : "=r" (__tmp) \
+ : "0" (__val), "m" (*__mem)); \
+ if (irq) \
+ local_irq_restore(__flags); \
+ } else \
+ BUG(); \
+} \
+ \
+static inline type pfx##read##bwlq(volatile void __iomem *mem) \
+{ \
+ volatile type *__mem; \
+ type __val; \
+ \
+ __mem = (void *)__swizzle_addr_##bwlq((unsigned long)(mem)); \
+ \
+ if (sizeof(type) != sizeof(u64) || sizeof(u64) == sizeof(long)) \
+ __val = *__mem; \
+ else if (cpu_has_64bits) { \
+ unsigned long __flags; \
+ \
+ local_irq_save(__flags); \
+ __asm__ __volatile__( \
+ ".set mips3" "\t\t# __readq" "\n\t" \
+ "ld %L0, %1" "\n\t" \
+ "dsra32 %M0, %L0, 0" "\n\t" \
+ "sll %L0, %L0, 0" "\n\t" \
+ ".set mips0" "\n" \
+ : "=r" (__val) \
+ : "m" (*__mem)); \
+ local_irq_restore(__flags); \
+ } else { \
+ __val = 0; \
+ BUG(); \
+ } \
+ \
+ return pfx##ioswab##bwlq(__val); \
+}
-#define readb(addr) __ioswab8(__raw_readb(addr))
-#define readw(addr) __ioswab16(__raw_readw(addr))
-#define readl(addr) __ioswab32(__raw_readl(addr))
-#define readq(addr) __ioswab64(__raw_readq(addr))
-#define readb_relaxed(addr) readb(addr)
-#define readw_relaxed(addr) readw(addr)
-#define readl_relaxed(addr) readl(addr)
-#define readq_relaxed(addr) readq(addr)
-
-#define __raw_writeb(b,addr) \
-do { \
- ((*(volatile unsigned char *)__swizzle_addr_b((unsigned long)(addr))) = (b)); \
-} while (0)
-
-#define __raw_writew(w,addr) \
-do { \
- ((*(volatile unsigned short *)__swizzle_addr_w((unsigned long)(addr))) = (w)); \
-} while (0)
-
-#define __raw_writel(l,addr) \
-do { \
- ((*(volatile unsigned int *)__swizzle_addr_l((unsigned long)(addr))) = (l)); \
-} while (0)
-
-#ifdef CONFIG_MIPS32
-#define ____raw_writeq(val,addr) \
-do { \
- u64 __tmp; \
+#define __BUILD_IOPORT_SINGLE(pfx, bwlq, type, p, slow) \
\
- __asm__ __volatile__ ( \
- " .set mips3 \n" \
- " dsll32 %L0, %L0, 0 # ____raw_writeq\n" \
- " dsrl32 %L0, %L0, 0 \n" \
- " dsll32 %M0, %M0, 0 \n" \
- " or %L0, %L0, %M0 \n" \
- " sd %L0, (%2) \n" \
- " .set mips0 \n" \
- : "=r" (__tmp) \
- : "0" ((unsigned long long)val), \
- "r" (__swizzle_addr_q((unsigned long)(addr)))); \
-} while (0)
-
-#define __raw_writeq(val,addr) \
-do { \
- unsigned long __flags; \
+static inline void pfx##out##bwlq##p(type val, unsigned long port) \
+{ \
+ volatile type *__addr; \
+ type __val; \
\
- local_irq_save(__flags); \
- ____raw_writeq(val, addr); \
- local_irq_restore(__flags); \
-} while (0)
-#endif
-#ifdef CONFIG_MIPS64
-#define ____raw_writeq(q,addr) \
-do { \
- *(volatile unsigned long *)__swizzle_addr_q((unsigned long)(addr)) = (q); \
-} while (0)
+ port = __swizzle_addr_##bwlq(port); \
+ __addr = (void *)(mips_io_port_base + port); \
+ \
+ __val = pfx##ioswab##bwlq(val); \
+ \
+ if (sizeof(type) != sizeof(u64)) { \
+ *__addr = __val; \
+ slow; \
+ } else \
+ BUILD_BUG(); \
+} \
+ \
+static inline type pfx##in##bwlq##p(unsigned long port) \
+{ \
+ volatile type *__addr; \
+ type __val; \
+ \
+ port = __swizzle_addr_##bwlq(port); \
+ __addr = (void *)(mips_io_port_base + port); \
+ \
+ if (sizeof(type) != sizeof(u64)) { \
+ __val = *__addr; \
+ slow; \
+ } else { \
+ __val = 0; \
+ BUILD_BUG(); \
+ } \
+ \
+ return pfx##ioswab##bwlq(__val); \
+}
-#define __raw_writeq(q,addr) ____raw_writeq(q, addr)
-#endif
+#define __BUILD_MEMORY_PFX(bus, bwlq, type) \
+ \
+__BUILD_MEMORY_SINGLE(bus, bwlq, type, 1)
+
+#define __BUILD_IOPORT_PFX(bus, bwlq, type) \
+ \
+__BUILD_IOPORT_SINGLE(bus, bwlq, type, ,) \
+__BUILD_IOPORT_SINGLE(bus, bwlq, type, _p, SLOW_DOWN_IO)
+
+#define BUILDIO(bwlq, type) \
+ \
+__BUILD_MEMORY_PFX(, bwlq, type) \
+__BUILD_MEMORY_PFX(__raw_, bwlq, type) \
+__BUILD_MEMORY_PFX(bus_, bwlq, type) \
+__BUILD_IOPORT_PFX(, bwlq, type) \
+__BUILD_IOPORT_PFX(__raw_, bwlq, type)
+
+#define __BUILDIO(bwlq, type) \
+ \
+__BUILD_MEMORY_SINGLE(__bus_, bwlq, type, 0)
+
+BUILDIO(b, u8)
+BUILDIO(w, u16)
+BUILDIO(l, u32)
+BUILDIO(q, u64)
+
+__BUILDIO(q, u64)
+
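(Editorial sketch of what the generator macros above produce, illustrative only
and eliding the 64-bit and address-swizzling details.)

	/* BUILDIO(l, u32) yields, among others: */
	static inline void writel(u32 val, volatile void __iomem *mem);
	static inline u32  readl(volatile void __iomem *mem);
	static inline void outl(u32 val, unsigned long port);
	static inline u32  inl(unsigned long port);
	/* ... plus outl_p/inl_p and the __raw_ and bus_ prefixed variants */
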
+#define readb_relaxed readb
+#define readw_relaxed readw
+#define readl_relaxed readl
+#define readq_relaxed readq
+
+/*
+ * Some code tests for these symbols
+ */
+#define readq readq
+#define writeq writeq
+
+#define __BUILD_MEMORY_STRING(bwlq, type) \
+ \
+static inline void writes##bwlq(volatile void __iomem *mem, void *addr, \
+ unsigned int count) \
+{ \
+ volatile type *__addr = addr; \
+ \
+ while (count--) { \
+ __raw_write##bwlq(*__addr, mem); \
+ __addr++; \
+ } \
+} \
+ \
+static inline void reads##bwlq(volatile void __iomem *mem, void *addr, \
+ unsigned int count) \
+{ \
+ volatile type *__addr = addr; \
+ \
+ while (count--) { \
+ *__addr = __raw_read##bwlq(mem); \
+ __addr++; \
+ } \
+}
+
+#define __BUILD_IOPORT_STRING(bwlq, type) \
+ \
+static inline void outs##bwlq(unsigned long port, void *addr, \
+ unsigned int count) \
+{ \
+ volatile type *__addr = addr; \
+ \
+ while (count--) { \
+ __raw_out##bwlq(*__addr, port); \
+ __addr++; \
+ } \
+} \
+ \
+static inline void ins##bwlq(unsigned long port, void *addr, \
+ unsigned int count) \
+{ \
+ volatile type *__addr = addr; \
+ \
+ while (count--) { \
+ *__addr = __raw_in##bwlq(port); \
+ __addr++; \
+ } \
+}
+
+#define BUILDSTRING(bwlq, type) \
+ \
+__BUILD_MEMORY_STRING(bwlq, type) \
+__BUILD_IOPORT_STRING(bwlq, type)
+
+BUILDSTRING(b, u8)
+BUILDSTRING(w, u16)
+BUILDSTRING(l, u32)
+BUILDSTRING(q, u64)
-#define writeb(b,addr) __raw_writeb(__ioswab8(b),(addr))
-#define writew(w,addr) __raw_writew(__ioswab16(w),(addr))
-#define writel(l,addr) __raw_writel(__ioswab32(l),(addr))
-#define writeq(q,addr) __raw_writeq(__ioswab64(q),(addr))
/* Depends on MIPS II instruction set */
#define mmiowb() asm volatile ("sync" ::: "memory")
#define memcpy_fromio(a,b,c) memcpy((a),(void *)(b),(c))
#define memcpy_toio(a,b,c) memcpy((void *)(a),(b),(c))
+/*
+ * Memory Mapped I/O
+ */
+#define ioread8(addr) readb(addr)
+#define ioread16(addr) readw(addr)
+#define ioread32(addr) readl(addr)
+
+#define iowrite8(b,addr) writeb(b,addr)
+#define iowrite16(w,addr) writew(w,addr)
+#define iowrite32(l,addr) writel(l,addr)
+
+#define ioread8_rep(a,b,c) readsb(a,b,c)
+#define ioread16_rep(a,b,c) readsw(a,b,c)
+#define ioread32_rep(a,b,c) readsl(a,b,c)
+
+#define iowrite8_rep(a,b,c) writesb(a,b,c)
+#define iowrite16_rep(a,b,c) writesw(a,b,c)
+#define iowrite32_rep(a,b,c) writesl(a,b,c)
+
+/* Create a virtual mapping cookie for an IO port range */
+extern void __iomem *ioport_map(unsigned long port, unsigned int nr);
+extern void ioport_unmap(void __iomem *);
+
+/* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
+struct pci_dev;
+extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
+extern void pci_iounmap(struct pci_dev *dev, void __iomem *);
+
/*
* ISA space is 'always mapped' on currently supported MIPS systems, no need
* to explicitly ioremap() it. The fact that the ISA IO space is mapped
* address should have been obtained by ioremap.
* Returns 1 on a match.
*/
-static inline int check_signature(unsigned long io_addr,
+static inline int check_signature(char __iomem *io_addr,
const unsigned char *signature, int length)
{
int retval = 0;
return retval;
}
-static inline void __outb(unsigned char val, unsigned long port)
-{
- port = __swizzle_addr_b(port);
-
- *(volatile u8 *)(mips_io_port_base + port) = __ioswab8(val);
-}
-
-static inline void __outw(unsigned short val, unsigned long port)
-{
- port = __swizzle_addr_w(port);
-
- *(volatile u16 *)(mips_io_port_base + port) = __ioswab16(val);
-}
-
-static inline void __outl(unsigned int val, unsigned long port)
-{
- port = __swizzle_addr_l(port);
-
- *(volatile u32 *)(mips_io_port_base + port) = __ioswab32(val);
-}
-
-static inline void __outb_p(unsigned char val, unsigned long port)
-{
- port = __swizzle_addr_b(port);
-
- *(volatile u8 *)(mips_io_port_base + port) = __ioswab8(val);
- SLOW_DOWN_IO;
-}
-
-static inline void __outw_p(unsigned short val, unsigned long port)
-{
- port = __swizzle_addr_w(port);
-
- *(volatile u16 *)(mips_io_port_base + port) = __ioswab16(val);
- SLOW_DOWN_IO;
-}
-
-static inline void __outl_p(unsigned int val, unsigned long port)
-{
- port = __swizzle_addr_l(port);
-
- *(volatile u32 *)(mips_io_port_base + port) = __ioswab32(val);
- SLOW_DOWN_IO;
-}
-
-#define outb(val, port) __outb(val, port)
-#define outw(val, port) __outw(val, port)
-#define outl(val, port) __outl(val, port)
-#define outb_p(val, port) __outb_p(val, port)
-#define outw_p(val, port) __outw_p(val, port)
-#define outl_p(val, port) __outl_p(val, port)
-
-static inline unsigned char __inb(unsigned long port)
-{
- port = __swizzle_addr_b(port);
-
- return __ioswab8(*(volatile u8 *)(mips_io_port_base + port));
-}
-
-static inline unsigned short __inw(unsigned long port)
-{
- port = __swizzle_addr_w(port);
-
- return __ioswab16(*(volatile u16 *)(mips_io_port_base + port));
-}
-
-static inline unsigned int __inl(unsigned long port)
-{
- port = __swizzle_addr_l(port);
-
- return __ioswab32(*(volatile u32 *)(mips_io_port_base + port));
-}
-
-static inline unsigned char __inb_p(unsigned long port)
-{
- u8 __val;
-
- port = __swizzle_addr_b(port);
-
- __val = *(volatile u8 *)(mips_io_port_base + port);
- SLOW_DOWN_IO;
-
- return __ioswab8(__val);
-}
-
-static inline unsigned short __inw_p(unsigned long port)
-{
- u16 __val;
-
- port = __swizzle_addr_w(port);
-
- __val = *(volatile u16 *)(mips_io_port_base + port);
- SLOW_DOWN_IO;
-
- return __ioswab16(__val);
-}
-
-static inline unsigned int __inl_p(unsigned long port)
-{
- u32 __val;
-
- port = __swizzle_addr_l(port);
-
- __val = *(volatile u32 *)(mips_io_port_base + port);
- SLOW_DOWN_IO;
-
- return __ioswab32(__val);
-}
-
-#define inb(port) __inb(port)
-#define inw(port) __inw(port)
-#define inl(port) __inl(port)
-#define inb_p(port) __inb_p(port)
-#define inw_p(port) __inw_p(port)
-#define inl_p(port) __inl_p(port)
-
-static inline void __outsb(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- outb(*(u8 *)addr, port);
- addr++;
- }
-}
-
-static inline void __insb(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- *(u8 *)addr = inb(port);
- addr++;
- }
-}
-
-static inline void __outsw(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- outw(*(u16 *)addr, port);
- addr += 2;
- }
-}
-
-static inline void __insw(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- *(u16 *)addr = inw(port);
- addr += 2;
- }
-}
-
-static inline void __outsl(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- outl(*(u32 *)addr, port);
- addr += 4;
- }
-}
-
-static inline void __insl(unsigned long port, void *addr, unsigned int count)
-{
- while (count--) {
- *(u32 *)addr = inl(port);
- addr += 4;
- }
-}
-
-#define outsb(port, addr, count) __outsb(port, addr, count)
-#define insb(port, addr, count) __insb(port, addr, count)
-#define outsw(port, addr, count) __outsw(port, addr, count)
-#define insw(port, addr, count) __insw(port, addr, count)
-#define outsl(port, addr, count) __outsl(port, addr, count)
-#define insl(port, addr, count) __insl(port, addr, count)
-
/*
 * The caches on some architectures aren't dma-coherent and need to
* handle this in software. There are three types of operations that
extern struct sgi_crime *crime;
+#define CRIME_HI_MEM_BASE 0x40000000 /* this is where the whole 1 GB of RAM is mapped */
+
#endif /* __ASM_CRIME_H__ */
extern void mips_cpu_irq_init(int irq_base);
extern void rm7k_cpu_irq_init(int irq_base);
+extern void rm9k_cpu_irq_init(int irq_base);
#endif /* _ASM_IRQ_CPU_H */
/*
- * Carsten Langgaard, carstenl@mips.com
- * Copyright (C) 1999,2000 MIPS Technologies, Inc. All rights reserved.
- * Copyright (C) 2003 by Ralf Baechle
+ * Copyright (C) 1999, 2000, 2005 MIPS Technologies, Inc.
+ * All rights reserved.
+ * Authors: Carsten Langgaard <carstenl@mips.com>
+ * Maciej W. Rozycki <macro@mips.com>
+ * Copyright (C) 2003, 05 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can distribute it and/or modify it
* under the terms of the GNU General Public License (Version 2) as
#ifndef __ASM_MACH_ATLAS_MC146818RTC_H
#define __ASM_MACH_ATLAS_MC146818RTC_H
-#include <asm/io.h>
+#include <linux/types.h>
+
+#include <asm/addrspace.h>
+
#include <asm/mips-boards/atlas.h>
#include <asm/mips-boards/atlasint.h>
-
-#define RTC_PORT(x) (ATLAS_RTC_ADR_REG + (x)*8)
-#define RTC_IOMAPPED 1
-#define RTC_EXTENT 16
+#define RTC_PORT(x) (ATLAS_RTC_ADR_REG + (x) * 8)
+#define RTC_IO_EXTENT 0x100
+#define RTC_IOMAPPED 0
#define RTC_IRQ ATLASINT_RTC
-#ifdef CONFIG_CPU_LITTLE_ENDIAN
-#define ATLAS_RTC_PORT(x) (RTC_PORT(x) + 0)
-#else
-#define ATLAS_RTC_PORT(x) (RTC_PORT(x) + 3)
-#endif
-
static inline unsigned char CMOS_READ(unsigned long addr)
{
- outb(addr, ATLAS_RTC_PORT(0));
+ volatile u32 *ireg = (void *)CKSEG1ADDR(RTC_PORT(0));
+ volatile u32 *dreg = (void *)CKSEG1ADDR(RTC_PORT(1));
- return inb(ATLAS_RTC_PORT(1));
+ *ireg = addr;
+ return *dreg;
}
static inline void CMOS_WRITE(unsigned char data, unsigned long addr)
{
- outb(addr, ATLAS_RTC_PORT(0));
- outb(data, ATLAS_RTC_PORT(1));
+ volatile u32 *ireg = (void *)CKSEG1ADDR(RTC_PORT(0));
+ volatile u32 *dreg = (void *)CKSEG1ADDR(RTC_PORT(1));
+
+ *ireg = addr;
+ *dreg = data;
}
#define RTC_ALWAYS_BCD 0
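
(Editorial usage sketch, not part of this change: the generic MC146818 RTC code
reaches the Atlas RTC through these accessors, e.g.)

	unsigned char sec = CMOS_READ(RTC_SECONDS);	/* RTC_SECONDS from <linux/mc146818rtc.h> */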
#ifndef _AU1000_DBDMA_H_
#define _AU1000_DBDMA_H_
+#include <linux/config.h>
+
#ifndef _LANGUAGE_ASSEMBLY
/* The DMA base addresses.
#ifndef _AU1000_PSC_H_
#define _AU1000_PSC_H_
-/* The PSC base addresses.
-*/
-#define PSC_BASE0 0xb1a00000
-#define PSC_BASE1 0xb1b00000
-#define PSC_BASE2 0xb0a00000
-#define PSC_BASE3 0xb0d00000
-
-/* These should be defined in a board specific file!
-*/
-#ifdef CONFIG_MIPS_PB1550
-#define SPI_PSC_BASE PSC_BASE0
-#define AC97_PSC_BASE PSC_BASE1
-#define SMBUS_PSC_BASE PSC_BASE2
+/* The PSC base addresses. */
+#ifdef CONFIG_SOC_AU1550
+#define PSC0_BASE_ADDR 0xb1a00000
+#define PSC1_BASE_ADDR 0xb1b00000
+#define PSC2_BASE_ADDR 0xb0a00000
+#define PSC3_BASE_ADDR 0xb0d00000
#endif
-#ifdef CONFIG_MIPS_DB1550
-#define SPI_PSC_BASE PSC_BASE0
-#define AC97_PSC_BASE PSC_BASE1
-#define SMBUS_PSC_BASE PSC_BASE2
-#endif
-
/* The PSC select and control registers are common to
* all protocols.
#define PSC_AC97RST_SNC (1 << 0)
+/* PSC in I2S Mode.
+*/
+typedef struct psc_i2s {
+ u32 psc_sel;
+ u32 psc_ctrl;
+ u32 psc_i2scfg;
+ u32 psc_i2smsk;
+ u32 psc_i2spcr;
+ u32 psc_i2sstat;
+ u32 psc_i2sevent;
+ u32 psc_i2stxrx;
+ u32 psc_i2sudf;
+} psc_i2s_t;
+
+/* I2S Config Register.
+*/
+#define PSC_I2SCFG_RT_MASK (3 << 30)
+#define PSC_I2SCFG_RT_FIFO1 (0 << 30)
+#define PSC_I2SCFG_RT_FIFO2 (1 << 30)
+#define PSC_I2SCFG_RT_FIFO4 (2 << 30)
+#define PSC_I2SCFG_RT_FIFO8 (3 << 30)
+
+#define PSC_I2SCFG_TT_MASK (3 << 28)
+#define PSC_I2SCFG_TT_FIFO1 (0 << 28)
+#define PSC_I2SCFG_TT_FIFO2 (1 << 28)
+#define PSC_I2SCFG_TT_FIFO4 (2 << 28)
+#define PSC_I2SCFG_TT_FIFO8 (3 << 28)
+
+#define PSC_I2SCFG_DD_DISABLE (1 << 27)
+#define PSC_I2SCFG_DE_ENABLE (1 << 26)
+#define PSC_I2SCFG_SET_WS(x) (((((x) / 2) - 1) & 0x7f) << 16)
+#define PSC_I2SCFG_WI (1 << 15)
+
+#define PSC_I2SCFG_DIV_MASK (3 << 13)
+#define PSC_I2SCFG_DIV2 (0 << 13)
+#define PSC_I2SCFG_DIV4 (1 << 13)
+#define PSC_I2SCFG_DIV8 (2 << 13)
+#define PSC_I2SCFG_DIV16 (3 << 13)
+
+#define PSC_I2SCFG_BI (1 << 12)
+#define PSC_I2SCFG_BUF (1 << 11)
+#define PSC_I2SCFG_MLJ (1 << 10)
+#define PSC_I2SCFG_XM (1 << 9)
+
+/* The word length equation is simply LEN+1.
+ */
+#define PSC_I2SCFG_SET_LEN(x) ((((x) - 1) & 0x1f) << 4)
+#define PSC_I2SCFG_GET_LEN(x) ((((x) >> 4) & 0x1f) + 1)
+
+#define PSC_I2SCFG_LB (1 << 2)
+#define PSC_I2SCFG_MLF (1 << 1)
+#define PSC_I2SCFG_MS (1 << 0)
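
A worked example of the word-length encoding, illustrative only:

	/* PSC_I2SCFG_SET_LEN(16)  == ((16 - 1) & 0x1f) << 4        == 0xf0 */
	/* PSC_I2SCFG_GET_LEN(0xf0) == ((0xf0 >> 4) & 0x1f) + 1     == 16   */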
+
+/* I2S Mask Register.
+*/
+#define PSC_I2SMSK_RR (1 << 13)
+#define PSC_I2SMSK_RO (1 << 12)
+#define PSC_I2SMSK_RU (1 << 11)
+#define PSC_I2SMSK_TR (1 << 10)
+#define PSC_I2SMSK_TO (1 << 9)
+#define PSC_I2SMSK_TU (1 << 8)
+#define PSC_I2SMSK_RD (1 << 5)
+#define PSC_I2SMSK_TD (1 << 4)
+#define PSC_I2SMSK_ALLMASK (PSC_I2SMSK_RR | PSC_I2SMSK_RO | \
+ PSC_I2SMSK_RU | PSC_I2SMSK_TR | \
+ PSC_I2SMSK_TO | PSC_I2SMSK_TU | \
+ PSC_I2SMSK_RD | PSC_I2SMSK_TD)
+
+/* I2S Protocol Control Register.
+*/
+#define PSC_I2SPCR_RC (1 << 6)
+#define PSC_I2SPCR_RP (1 << 5)
+#define PSC_I2SPCR_RS (1 << 4)
+#define PSC_I2SPCR_TC (1 << 2)
+#define PSC_I2SPCR_TP (1 << 1)
+#define PSC_I2SPCR_TS (1 << 0)
+
+/* I2S Status register (read only).
+*/
+#define PSC_I2SSTAT_RF (1 << 13)
+#define PSC_I2SSTAT_RE (1 << 12)
+#define PSC_I2SSTAT_RR (1 << 11)
+#define PSC_I2SSTAT_TF (1 << 10)
+#define PSC_I2SSTAT_TE (1 << 9)
+#define PSC_I2SSTAT_TR (1 << 8)
+#define PSC_I2SSTAT_RB (1 << 5)
+#define PSC_I2SSTAT_TB (1 << 4)
+#define PSC_I2SSTAT_DI (1 << 2)
+#define PSC_I2SSTAT_DR (1 << 1)
+#define PSC_I2SSTAT_SR (1 << 0)
+
+/* I2S Event Register.
+*/
+#define PSC_I2SEVNT_RR (1 << 13)
+#define PSC_I2SEVNT_RO (1 << 12)
+#define PSC_I2SEVNT_RU (1 << 11)
+#define PSC_I2SEVNT_TR (1 << 10)
+#define PSC_I2SEVNT_TO (1 << 9)
+#define PSC_I2SEVNT_TU (1 << 8)
+#define PSC_I2SEVNT_RD (1 << 5)
+#define PSC_I2SEVNT_TD (1 << 4)
+
+/* PSC in SPI Mode.
+*/
+typedef struct psc_spi {
+ u32 psc_sel;
+ u32 psc_ctrl;
+ u32 psc_spicfg;
+ u32 psc_spimsk;
+ u32 psc_spipcr;
+ u32 psc_spistat;
+ u32 psc_spievent;
+ u32 psc_spitxrx;
+} psc_spi_t;
+
+/* SPI Config Register.
+*/
+#define PSC_SPICFG_RT_MASK (3 << 30)
+#define PSC_SPICFG_RT_FIFO1 (0 << 30)
+#define PSC_SPICFG_RT_FIFO2 (1 << 30)
+#define PSC_SPICFG_RT_FIFO4 (2 << 30)
+#define PSC_SPICFG_RT_FIFO8 (3 << 30)
+
+#define PSC_SPICFG_TT_MASK (3 << 28)
+#define PSC_SPICFG_TT_FIFO1 (0 << 28)
+#define PSC_SPICFG_TT_FIFO2 (1 << 28)
+#define PSC_SPICFG_TT_FIFO4 (2 << 28)
+#define PSC_SPICFG_TT_FIFO8 (3 << 28)
+
+#define PSC_SPICFG_DD_DISABLE (1 << 27)
+#define PSC_SPICFG_DE_ENABLE (1 << 26)
+#define PSC_SPICFG_CLR_BAUD(x) ((x) & ~((0x3f) << 15))
+#define PSC_SPICFG_SET_BAUD(x) (((x) & 0x3f) << 15)
+
+#define PSC_SPICFG_SET_DIV(x) (((x) & 0x03) << 13)
+#define PSC_SPICFG_DIV2 0
+#define PSC_SPICFG_DIV4 1
+#define PSC_SPICFG_DIV8 2
+#define PSC_SPICFG_DIV16 3
+
+#define PSC_SPICFG_BI (1 << 12)
+#define PSC_SPICFG_PSE (1 << 11)
+#define PSC_SPICFG_CGE (1 << 10)
+#define PSC_SPICFG_CDE (1 << 9)
+
+#define PSC_SPICFG_CLR_LEN(x) ((x) & ~((0x1f) << 4))
+#define PSC_SPICFG_SET_LEN(x) (((x-1) & 0x1f) << 4)
+
+#define PSC_SPICFG_LB (1 << 3)
+#define PSC_SPICFG_MLF (1 << 1)
+#define PSC_SPICFG_MO (1 << 0)
+
+/* SPI Mask Register.
+*/
+#define PSC_SPIMSK_MM (1 << 16)
+#define PSC_SPIMSK_RR (1 << 13)
+#define PSC_SPIMSK_RO (1 << 12)
+#define PSC_SPIMSK_RU (1 << 11)
+#define PSC_SPIMSK_TR (1 << 10)
+#define PSC_SPIMSK_TO (1 << 9)
+#define PSC_SPIMSK_TU (1 << 8)
+#define PSC_SPIMSK_SD (1 << 5)
+#define PSC_SPIMSK_MD (1 << 4)
+#define PSC_SPIMSK_ALLMASK (PSC_SPIMSK_MM | PSC_SPIMSK_RR | \
+ PSC_SPIMSK_RO | PSC_SPIMSK_TO | \
+ PSC_SPIMSK_TU | PSC_SPIMSK_SD | \
+ PSC_SPIMSK_MD)
+
+/* SPI Protocol Control Register.
+*/
+#define PSC_SPIPCR_RC (1 << 6)
+#define PSC_SPIPCR_SP (1 << 5)
+#define PSC_SPIPCR_SS (1 << 4)
+#define PSC_SPIPCR_TC (1 << 2)
+#define PSC_SPIPCR_MS (1 << 0)
+
+/* SPI Status register (read only).
+*/
+#define PSC_SPISTAT_RF (1 << 13)
+#define PSC_SPISTAT_RE (1 << 12)
+#define PSC_SPISTAT_RR (1 << 11)
+#define PSC_SPISTAT_TF (1 << 10)
+#define PSC_SPISTAT_TE (1 << 9)
+#define PSC_SPISTAT_TR (1 << 8)
+#define PSC_SPISTAT_SB (1 << 5)
+#define PSC_SPISTAT_MB (1 << 4)
+#define PSC_SPISTAT_DI (1 << 2)
+#define PSC_SPISTAT_DR (1 << 1)
+#define PSC_SPISTAT_SR (1 << 0)
+
+/* SPI Event Register.
+*/
+#define PSC_SPIEVNT_MM (1 << 16)
+#define PSC_SPIEVNT_RR (1 << 13)
+#define PSC_SPIEVNT_RO (1 << 12)
+#define PSC_SPIEVNT_RU (1 << 11)
+#define PSC_SPIEVNT_TR (1 << 10)
+#define PSC_SPIEVNT_TO (1 << 9)
+#define PSC_SPIEVNT_TU (1 << 8)
+#define PSC_SPIEVNT_SD (1 << 5)
+#define PSC_SPIEVNT_MD (1 << 4)
+
+/* Transmit register control.
+*/
+#define PSC_SPITXRX_LC (1 << 29)
+#define PSC_SPITXRX_SR (1 << 28)
+
+/* PSC in SMBus (I2C) Mode.
+*/
+typedef struct psc_smb {
+ u32 psc_sel;
+ u32 psc_ctrl;
+ u32 psc_smbcfg;
+ u32 psc_smbmsk;
+ u32 psc_smbpcr;
+ u32 psc_smbstat;
+ u32 psc_smbevnt;
+ u32 psc_smbtxrx;
+ u32 psc_smbtmr;
+} psc_smb_t;
+
+/* SMBus Config Register.
+*/
+#define PSC_SMBCFG_RT_MASK (3 << 30)
+#define PSC_SMBCFG_RT_FIFO1 (0 << 30)
+#define PSC_SMBCFG_RT_FIFO2 (1 << 30)
+#define PSC_SMBCFG_RT_FIFO4 (2 << 30)
+#define PSC_SMBCFG_RT_FIFO8 (3 << 30)
+
+#define PSC_SMBCFG_TT_MASK (3 << 28)
+#define PSC_SMBCFG_TT_FIFO1 (0 << 28)
+#define PSC_SMBCFG_TT_FIFO2 (1 << 28)
+#define PSC_SMBCFG_TT_FIFO4 (2 << 28)
+#define PSC_SMBCFG_TT_FIFO8 (3 << 28)
+
+#define PSC_SMBCFG_DD_DISABLE (1 << 27)
+#define PSC_SMBCFG_DE_ENABLE (1 << 26)
+
+#define PSC_SMBCFG_SET_DIV(x) (((x) & 0x03) << 13)
+#define PSC_SMBCFG_DIV2 0
+#define PSC_SMBCFG_DIV4 1
+#define PSC_SMBCFG_DIV8 2
+#define PSC_SMBCFG_DIV16 3
+
+#define PSC_SMBCFG_GCE (1 << 9)
+#define PSC_SMBCFG_SFM (1 << 8)
+
+#define PSC_SMBCFG_SET_SLV(x) (((x) & 0x7f) << 1)
+
+/* SMBus Mask Register.
+*/
+#define PSC_SMBMSK_DN (1 << 30)
+#define PSC_SMBMSK_AN (1 << 29)
+#define PSC_SMBMSK_AL (1 << 28)
+#define PSC_SMBMSK_RR (1 << 13)
+#define PSC_SMBMSK_RO (1 << 12)
+#define PSC_SMBMSK_RU (1 << 11)
+#define PSC_SMBMSK_TR (1 << 10)
+#define PSC_SMBMSK_TO (1 << 9)
+#define PSC_SMBMSK_TU (1 << 8)
+#define PSC_SMBMSK_SD (1 << 5)
+#define PSC_SMBMSK_MD (1 << 4)
+#define PSC_SMBMSK_ALLMASK (PSC_SMBMSK_DN | PSC_SMBMSK_AN | \
+ PSC_SMBMSK_AL | PSC_SMBMSK_RR | \
+ PSC_SMBMSK_RO | PSC_SMBMSK_TO | \
+ PSC_SMBMSK_TU | PSC_SMBMSK_SD | \
+ PSC_SMBMSK_MD)
+
+/* SMBus Protocol Control Register.
+*/
+#define PSC_SMBPCR_DC (1 << 2)
+#define PSC_SMBPCR_MS (1 << 0)
+
+/* SMBus Status register (read only).
+*/
+#define PSC_SMBSTAT_BB (1 << 28)
+#define PSC_SMBSTAT_RF (1 << 13)
+#define PSC_SMBSTAT_RE (1 << 12)
+#define PSC_SMBSTAT_RR (1 << 11)
+#define PSC_SMBSTAT_TF (1 << 10)
+#define PSC_SMBSTAT_TE (1 << 9)
+#define PSC_SMBSTAT_TR (1 << 8)
+#define PSC_SMBSTAT_SB (1 << 5)
+#define PSC_SMBSTAT_MB (1 << 4)
+#define PSC_SMBSTAT_DI (1 << 2)
+#define PSC_SMBSTAT_DR (1 << 1)
+#define PSC_SMBSTAT_SR (1 << 0)
+
+/* SMBus Event Register.
+*/
+#define PSC_SMBEVNT_DN (1 << 30)
+#define PSC_SMBEVNT_AN (1 << 29)
+#define PSC_SMBEVNT_AL (1 << 28)
+#define PSC_SMBEVNT_RR (1 << 13)
+#define PSC_SMBEVNT_RO (1 << 12)
+#define PSC_SMBEVNT_RU (1 << 11)
+#define PSC_SMBEVNT_TR (1 << 10)
+#define PSC_SMBEVNT_TO (1 << 9)
+#define PSC_SMBEVNT_TU (1 << 8)
+#define PSC_SMBEVNT_SD (1 << 5)
+#define PSC_SMBEVNT_MD (1 << 4)
+#define PSC_SMBEVNT_ALLCLR (PSC_SMBEVNT_DN | PSC_SMBEVNT_AN | \
+ PSC_SMBEVNT_AL | PSC_SMBEVNT_RR | \
+ PSC_SMBEVNT_RO | PSC_SMBEVNT_TO | \
+ PSC_SMBEVNT_TU | PSC_SMBEVNT_SD | \
+ PSC_SMBEVNT_MD)
+
+/* Transmit register control.
+*/
+#define PSC_SMBTXRX_RSR (1 << 30)
+#define PSC_SMBTXRX_STP (1 << 29)
+#define PSC_SMBTXRX_DATAMASK (0xff)
+
+/* SMBus protocol timers register.
+*/
+#define PSC_SMBTMR_SET_TH(x) (((x) & 0x3) << 30)
+#define PSC_SMBTMR_SET_PS(x) (((x) & 0x1f) << 25)
+#define PSC_SMBTMR_SET_PU(x) (((x) & 0x1f) << 20)
+#define PSC_SMBTMR_SET_SH(x) (((x) & 0x1f) << 15)
+#define PSC_SMBTMR_SET_SU(x) (((x) & 0x1f) << 10)
+#define PSC_SMBTMR_SET_CL(x) (((x) & 0x1f) << 5)
+#define PSC_SMBTMR_SET_CH(x) (((x) & 0x1f) << 0)
+
+
#endif /* _AU1000_PSC_H_ */
/*
* PCI Bus allocation
*/
-#define GT_PCI_MEM_BASE 0x12000000UL
-#define GT_PCI_MEM_SIZE 0x02000000UL
-#define GT_PCI_IO_BASE 0x10000000UL
-#define GT_PCI_IO_SIZE 0x02000000UL
-#define GT_ISA_IO_BASE PCI_IO_BASE
+#define GT_PCI_MEM_BASE 0x12000000UL
+#define GT_PCI_MEM_SIZE 0x02000000UL
+#define GT_PCI_IO_BASE 0x10000000UL
+#define GT_PCI_IO_SIZE 0x02000000UL
+#define GT_ISA_IO_BASE PCI_IO_BASE
/*
* Duart I/O ports.
*/
-#define EV64120_COM1_BASE_ADDR (0x1d000000 + 0x20)
-#define EV64120_COM2_BASE_ADDR (0x1d000000 + 0x00)
+#define EV64120_COM1_BASE_ADDR (0x1d000000 + 0x20)
+#define EV64120_COM2_BASE_ADDR (0x1d000000 + 0x00)
/*
* EV64120 interrupt controller register base.
*/
-#define EV64120_ICTRL_REGS_BASE (KSEG1ADDR(0x1f000000))
+#define EV64120_ICTRL_REGS_BASE (KSEG1ADDR(0x1f000000))
/*
* EV64120 UART register base.
*/
-#define EV64120_UART0_REGS_BASE (KSEG1ADDR(EV64120_COM1_BASE_ADDR))
-#define EV64120_UART1_REGS_BASE (KSEG1ADDR(EV64120_COM2_BASE_ADDR))
+#define EV64120_UART0_REGS_BASE (KSEG1ADDR(EV64120_COM1_BASE_ADDR))
+#define EV64120_UART1_REGS_BASE (KSEG1ADDR(EV64120_COM2_BASE_ADDR))
#define EV64120_BASE_BAUD ( 3686400 / 16 )
/*
*
* (Guessing ...)
*/
-#define GT_PCI_MEM_BASE 0x12000000UL
-#define GT_PCI_MEM_SIZE 0x02000000UL
-#define GT_PCI_IO_BASE 0x10000000UL
-#define GT_PCI_IO_SIZE 0x02000000UL
-#define GT_ISA_IO_BASE PCI_IO_BASE
+#define GT_PCI_MEM_BASE 0x12000000UL
+#define GT_PCI_MEM_SIZE 0x02000000UL
+#define GT_PCI_IO_BASE 0x10000000UL
+#define GT_PCI_IO_SIZE 0x02000000UL
+#define GT_ISA_IO_BASE PCI_IO_BASE
/*
* Duart I/O ports.
*/
-#define EV96100_COM1_BASE_ADDR (0xBD000000 + 0x20)
-#define EV96100_COM2_BASE_ADDR (0xBD000000 + 0x00)
+#define EV96100_COM1_BASE_ADDR (0xBD000000 + 0x20)
+#define EV96100_COM2_BASE_ADDR (0xBD000000 + 0x00)
/*
* EV96100 interrupt controller register base.
*/
-#define EV96100_ICTRL_REGS_BASE (KSEG1ADDR(0x1f000000))
+#define EV96100_ICTRL_REGS_BASE (KSEG1ADDR(0x1f000000))
/*
* EV96100 UART register base.
*/
-#define EV96100_UART0_REGS_BASE EV96100_COM1_BASE_ADDR
-#define EV96100_UART1_REGS_BASE EV96100_COM2_BASE_ADDR
-#define EV96100_BASE_BAUD ( 3686400 / 16 )
+#define EV96100_UART0_REGS_BASE EV96100_COM1_BASE_ADDR
+#define EV96100_UART1_REGS_BASE EV96100_COM2_BASE_ADDR
+#define EV96100_BASE_BAUD ( 3686400 / 16 )
#endif /* _ASM_GT64120_EV96100_GT64120_DEP_H */
#define cpu_has_vtag_icache 0
#define cpu_has_dc_aliases 0
#define cpu_has_ic_fills_f_dc 0
+#define cpu_icache_snoops_remote_store 1
#define cpu_has_nofpuex 0
#define cpu_has_64bits 1
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2005 Ilya A. Volynets-Evenbakh
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
+ */
+#ifndef __ASM_MACH_IP32_CPU_FEATURE_OVERRIDES_H
+#define __ASM_MACH_IP32_CPU_FEATURE_OVERRIDES_H
+
+#include <linux/config.h>
+
+/*
+ * R5000 has an interesting "restriction": ll(d)/sc(d)
+ * instructions to XKPHYS region simply do uncached bus
+ * requests. This breaks all of the atomic bitops functions,
+ * so for the 64-bit IP32 kernel we just don't use ll/sc.
+ * This does not affect userland.
+ */
+#if defined(CONFIG_CPU_R5000) && defined(CONFIG_MIPS64)
+#define cpu_has_llsc 0
+#else
+#define cpu_has_llsc 1
+#endif
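
(Editorial note, not part of this change: with cpu_has_llsc forced to 0 the
generic MIPS bitops compile down to the interrupt-disabling fallback shown
earlier in this patch, along the lines of)

	__bi_local_irq_save(flags);
	retval = (mask & *a) != 0;
	*a |= mask;
	__bi_local_irq_restore(flags);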
+
+/* Settings which are common to all IP32 CPUs */
+#define cpu_has_tlb 1
+#define cpu_has_4kex 1
+#define cpu_has_fpu 1
+#define cpu_has_32fpr 1
+#define cpu_has_counter 1
+#define cpu_has_mips16 0
+#define cpu_has_vce 0
+#define cpu_has_cache_cdex_s 0
+#define cpu_has_mcheck 0
+#define cpu_has_ejtag 0
+#define cpu_has_vtag_icache 0
+#define cpu_has_ic_fills_f_dc 0
+
+#endif /* __ASM_MACH_IP32_CPU_FEATURE_OVERRIDES_H */
#include <asm/ip32/mace.h>
#define RTC_PORT(x) (0x70 + (x))
-#define RTC_IRQ MACEISA_RTC_IRQ
static unsigned char CMOS_READ(unsigned long addr)
{
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 1994 - 1999, 2000, 03, 04 Ralf Baechle
+ * Copyright (C) 1994 - 1999, 2000, 03, 04, 05 Ralf Baechle (ralf@linux-mips.org)
* Copyright (C) 2000, 2002 Maciej W. Rozycki
* Copyright (C) 1990, 1999, 2000 Silicon Graphics, Inc.
*/
#ifndef _ASM_MACH_IP32_SPACES_H
#define _ASM_MACH_IP32_SPACES_H
-#include <linux/config.h>
-
-/*
- * This handles the memory map.
- */
-#define PAGE_OFFSET 0xffffffff80000000
-
/*
* Memory above this physical address will be considered highmem.
 * Fixme: 59 bits is a fictitious number and makes assumptions about processors
#define HIGHMEM_START (1UL << 59UL)
#endif
-#ifdef CONFIG_DMA_NONCOHERENT
#define CAC_BASE 0x9800000000000000
-#else
-#define CAC_BASE 0xa800000000000000
-#endif
#define IO_BASE 0x9000000000000000
#define UNCAC_BASE 0x9000000000000000
#define MAP_BASE 0xc000000000000000
#define TO_CAC(x) (CAC_BASE | ((x) & TO_PHYS_MASK))
#define TO_UNCAC(x) (UNCAC_BASE | ((x) & TO_PHYS_MASK))
+/*
+ * This handles the memory map.
+ */
+#define PAGE_OFFSET CAC_BASE
+
#endif /* __ASM_MACH_IP32_SPACES_H */
#define cpu_has_vtag_icache 0
#define cpu_has_dc_aliases 0
#define cpu_has_ic_fills_f_dc 0
+#define cpu_icache_snoops_remote_store 0
#define cpu_has_nofpuex 0
#define cpu_has_64bits 1
*
* (Guessing ...)
*/
-#define GT_PCI_MEM_BASE 0x12000000UL
-#define GT_PCI_MEM_SIZE 0x02000000UL
-#define GT_PCI_IO_BASE 0x10000000UL
-#define GT_PCI_IO_SIZE 0x02000000UL
-#define GT_ISA_IO_BASE PCI_IO_BASE
+#define GT_PCI_MEM_BASE 0x12000000UL
+#define GT_PCI_MEM_SIZE 0x02000000UL
+#define GT_PCI_IO_BASE 0x10000000UL
+#define GT_PCI_IO_SIZE 0x02000000UL
+#define GT_ISA_IO_BASE PCI_IO_BASE
#endif /* _ASM_GT64120_LASAT_GT64120_DEP_H */
* for more details.
*
* Copyright (C) 2003, 2004 Chris Dearman
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*/
#ifndef __ASM_MACH_MIPS_CPU_FEATURE_OVERRIDES_H
#define __ASM_MACH_MIPS_CPU_FEATURE_OVERRIDES_H
+#include <linux/config.h>
+
/*
* CPU feature overrides for MIPS boards
*/
#ifndef _ASM_MACH_MIPS_MACH_GT64120_DEP_H
#define _ASM_MACH_MIPS_MACH_GT64120_DEP_H
-#define MIPS_GT_BASE 0x1be00000
+#define MIPS_GT_BASE 0x1be00000
extern unsigned long _pcictrl_gt64120;
/*
/*
* PCI Bus allocation
*/
-#define GT_PCI_MEM_BASE 0x12000000UL
-#define GT_PCI_MEM_SIZE 0x02000000UL
-#define GT_PCI_IO_BASE 0x10000000UL
-#define GT_PCI_IO_SIZE 0x02000000UL
-#define GT_ISA_IO_BASE PCI_IO_BASE
+#define GT_PCI_MEM_BASE 0x12000000UL
+#define GT_PCI_MEM_SIZE 0x02000000UL
+#define GT_PCI_IO_BASE 0x10000000UL
+#define GT_PCI_IO_SIZE 0x02000000UL
+#define GT_ISA_IO_BASE PCI_IO_BASE
#endif /* _ASM_MACH_MIPS_MACH_GT64120_DEP_H */
/*
* PCI address allocation
*/
-#define GT_PCI_MEM_BASE (0x22000000UL)
-#define GT_PCI_MEM_SIZE GT_DEF_PCI0_MEM0_SIZE
-#define GT_PCI_IO_BASE (0x20000000UL)
-#define GT_PCI_IO_SIZE GT_DEF_PCI0_IO_SIZE
+#define GT_PCI_MEM_BASE (0x22000000UL)
+#define GT_PCI_MEM_SIZE GT_DEF_PCI0_MEM0_SIZE
+#define GT_PCI_IO_BASE (0x20000000UL)
+#define GT_PCI_IO_SIZE GT_DEF_PCI0_IO_SIZE
extern unsigned long gt64120_base;
-#define GT64120_BASE (gt64120_base)
+#define GT64120_BASE (gt64120_base)
/*
* GT timer irq
#define cpu_has_vtag_icache 0
#define cpu_has_dc_aliases 0
#define cpu_has_ic_fills_f_dc 0
+#define cpu_icache_snoops_remote_store 0
#define cpu_has_nofpuex 0
#define cpu_has_64bits 1
* Board Registers defines.
*
* Copyright 2004 Embedded Edge LLC.
+ * Copyright 2005 Ralf Baechle (ralf@linux-mips.org)
*
* ########################################################################
*
#ifndef __ASM_PB1550_H
#define __ASM_PB1550_H
+#include <linux/config.h>
#include <linux/types.h>
+#define DBDMA_AC97_TX_CHAN DSCR_CMD0_PSC1_TX
+#define DBDMA_AC97_RX_CHAN DSCR_CMD0_PSC1_RX
+#define DBDMA_I2S_TX_CHAN DSCR_CMD0_PSC3_TX
+#define DBDMA_I2S_RX_CHAN DSCR_CMD0_PSC3_RX
+
+#define SPI_PSC_BASE PSC0_BASE_ADDR
+#define AC97_PSC_BASE PSC1_BASE_ADDR
+#define SMBUS_PSC_BASE PSC2_BASE_ADDR
+#define I2S_PSC_BASE PSC3_BASE_ADDR
+
#define BCSR_PHYS_ADDR 0xAF000000
typedef volatile struct
#define cpu_has_vtag_icache 1
#define cpu_has_dc_aliases 0
#define cpu_has_ic_fills_f_dc 0
+#define cpu_icache_snoops_remote_store 0
#define cpu_has_nofpuex 0
#define cpu_has_64bits 1
#define cpu_has_vtag_icache 0
#define cpu_has_dc_aliases 0
#define cpu_has_ic_fills_f_dc 0
+#define cpu_icache_snoops_remote_store 0
#define cpu_has_nofpuex 0
#define cpu_has_64bits 1
{
pte_t *pte;
- pte = (pte_t *) __get_free_pages(GFP_KERNEL|__GFP_REPEAT, PTE_ORDER);
- if (pte)
- clear_page(pte);
+ pte = (pte_t *) __get_free_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, PTE_ORDER);
return pte;
}
#ifndef __ASM_PREFETCH_H
#define __ASM_PREFETCH_H
+#include <linux/config.h>
+
/*
 * R5000 and RM5200 implement the pref and prefx instructions but they're nops, so
* rather than wasting time we pretend these processors don't support
#define Pref_WriteBackInvalidate 25
#define Pref_PrepareForStore 30
+#ifdef __ASSEMBLY__
+
+ .macro __pref hint addr
+#ifdef CONFIG_CPU_HAS_PREFETCH
+ pref \hint, \addr
+#endif
+ .endm
+
+ .macro pref_load addr
+ __pref Pref_Load, \addr
+ .endm
+
+ .macro pref_store addr
+ __pref Pref_Store, \addr
+ .endm
+
+ .macro pref_load_streamed addr
+ __pref Pref_LoadStreamed, \addr
+ .endm
+
+ .macro pref_store_streamed addr
+ __pref Pref_StoreStreamed, \addr
+ .endm
+
+ .macro pref_load_retained addr
+ __pref Pref_LoadRetained, \addr
+ .endm
+
+ .macro pref_store_retained addr
+ __pref Pref_StoreRetained, \addr
+ .endm
+
+ .macro pref_wback_inv addr
+ __pref Pref_WriteBackInvalidate, \addr
+ .endm
+
+ .macro pref_prepare_for_store addr
+ __pref Pref_PrepareForStore, \addr
+ .endm
+
+#endif
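
(Editorial usage sketch, illustrative only: assembly copy loops can now use the
wrapper macros directly instead of open-coded CONFIG_CPU_HAS_PREFETCH #ifdefs;
the register names assume <asm/regdef.h>.)

	#include <asm/prefetch.h>

	/* prefetch one cache line ahead of the copy */
	pref_load		32(a1)
	pref_prepare_for_store	32(a0)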
+
#endif /* __ASM_PREFETCH_H */
: "i" (Hit_Writeback_Inv_D), "r" (addr));
}
+static inline void protected_writeback_scache_line(unsigned long addr)
+{
+ __asm__ __volatile__(
+ ".set noreorder\n\t"
+ ".set mips3\n"
+ "1:\tcache %0,(%1)\n"
+ "2:\t.set mips0\n\t"
+ ".set reorder\n\t"
+ ".section\t__ex_table,\"a\"\n\t"
+ STR(PTR)"\t1b,2b\n\t"
+ ".previous"
+ :
+ : "i" (Hit_Writeback_Inv_SD), "r" (addr));
+}
+
/*
* This one is RM7000-specific
*/
--- /dev/null
+/*
+ * Various register offset definitions for debuggers, core file
+ * examiners and whatnot.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1995, 1999 Ralf Baechle
+ * Copyright (C) 1995, 1999 Silicon Graphics
+ */
+#ifndef __ASM_MIPS_REG_H
+#define __ASM_MIPS_REG_H
+
+#include <linux/config.h>
+
+#if defined(CONFIG_MIPS32) || defined(WANT_COMPAT_REG_H)
+
+#define EF_R0 6
+#define EF_R1 7
+#define EF_R2 8
+#define EF_R3 9
+#define EF_R4 10
+#define EF_R5 11
+#define EF_R6 12
+#define EF_R7 13
+#define EF_R8 14
+#define EF_R9 15
+#define EF_R10 16
+#define EF_R11 17
+#define EF_R12 18
+#define EF_R13 19
+#define EF_R14 20
+#define EF_R15 21
+#define EF_R16 22
+#define EF_R17 23
+#define EF_R18 24
+#define EF_R19 25
+#define EF_R20 26
+#define EF_R21 27
+#define EF_R22 28
+#define EF_R23 29
+#define EF_R24 30
+#define EF_R25 31
+
+/*
+ * k0/k1 unsaved
+ */
+#define EF_R26 32
+#define EF_R27 33
+
+#define EF_R28 34
+#define EF_R29 35
+#define EF_R30 36
+#define EF_R31 37
+
+/*
+ * Saved special registers
+ */
+#define EF_LO 38
+#define EF_HI 39
+
+#define EF_CP0_EPC 40
+#define EF_CP0_BADVADDR 41
+#define EF_CP0_STATUS 42
+#define EF_CP0_CAUSE 43
+#define EF_UNUSED0 44
+
+#define EF_SIZE 180
+
+#endif
+
+#ifdef CONFIG_MIPS64
+
+#define EF_R0 0
+#define EF_R1 1
+#define EF_R2 2
+#define EF_R3 3
+#define EF_R4 4
+#define EF_R5 5
+#define EF_R6 6
+#define EF_R7 7
+#define EF_R8 8
+#define EF_R9 9
+#define EF_R10 10
+#define EF_R11 11
+#define EF_R12 12
+#define EF_R13 13
+#define EF_R14 14
+#define EF_R15 15
+#define EF_R16 16
+#define EF_R17 17
+#define EF_R18 18
+#define EF_R19 19
+#define EF_R20 20
+#define EF_R21 21
+#define EF_R22 22
+#define EF_R23 23
+#define EF_R24 24
+#define EF_R25 25
+
+/*
+ * k0/k1 unsaved
+ */
+#define EF_R26 26
+#define EF_R27 27
+
+
+#define EF_R28 28
+#define EF_R29 29
+#define EF_R30 30
+#define EF_R31 31
+
+/*
+ * Saved special registers
+ */
+#define EF_LO 32
+#define EF_HI 33
+
+#define EF_CP0_EPC 34
+#define EF_CP0_BADVADDR 35
+#define EF_CP0_STATUS 36
+#define EF_CP0_CAUSE 37
+
+#define EF_SIZE 304 /* size in bytes */
+
+#endif /* CONFIG_MIPS64 */
+
+#endif /* __ASM_MIPS_REG_H */
#include <asm-generic/sections.h>
extern char _fdata;
-extern char _end;
#endif /* _ASM_SECTIONS_H */
ip26, /* TFP UP, Indigo2 */
ip27, /* R10k MP, R12k MP, Origin */
ip28, /* R10k UP, Indigo2 */
- ip30,
- ip32,
+ ip30, /* Octane */
+ ip32, /* O2 */
};
extern enum sgi_mach sgimach;
#endif
-#define IOADDR(a) (UNCAC_BASE + (a))
+#define IOADDR(a) (IO_BASE + (a))
#endif
#if _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32
-#include <linux/types.h>
-
/*
* Keep this struct definition in sync with the sigcontext fragment
* in arch/mips/tools/offset.c
};
#ifdef __KERNEL__
+
+#include <linux/posix_types.h>
+
struct sigcontext32 {
__u32 sc_regmask; /* Unused */
__u32 sc_status;
#include <linux/config.h>
#include <asm/addrspace.h>
+#ifdef CONFIG_BUILD_ELF64
+#define REP_BASE CAC_BASE
+#else
+#define REP_BASE CKSEG0
+#endif
+
#ifdef CONFIG_MAPPED_KERNEL
-#define MAPPED_ADDR_RO_TO_PHYS(x) (x - CKSSEG)
-#define MAPPED_ADDR_RW_TO_PHYS(x) (x - CKSSEG - 16777216)
+#define MAPPED_ADDR_RO_TO_PHYS(x) (x - REP_BASE)
+#define MAPPED_ADDR_RW_TO_PHYS(x) (x - REP_BASE - 16777216)
#define MAPPED_KERN_RO_PHYSBASE(n) \
(PLAT_NODE_DATA(n)->kern_vars.kv_ro_baseaddr)
#else /* CONFIG_MAPPED_KERNEL */
-#define MAPPED_KERN_RO_TO_PHYS(x) (x - CKSEG0)
-#define MAPPED_KERN_RW_TO_PHYS(x) (x - CKSEG0)
+#define MAPPED_KERN_RO_TO_PHYS(x) (x - REP_BASE)
+#define MAPPED_KERN_RW_TO_PHYS(x) (x - REP_BASE)
#endif /* CONFIG_MAPPED_KERNEL */
#define ICRBN_A_CERR_SHFT 54
#define ICRBN_A_ERR_MASK 0x3ff
+#if 0 /* Disabled, this causes namespace pollution and breaks allmodconfig */
/*
* Easy access macros.
*/
#define a_addr icrba_fields_s.addr
#define a_valid icrba_fields_s.valid
#define a_iow icrba_fields_s.iow
+#endif
#endif /* !__ASSEMBLY__ */
extern void per_cpu_init(void);
extern void install_cpu_nmi_handler(int slice);
extern void install_ipi(void);
-extern void setup_replication_mask(int);
-extern void replicate_kernel_text(int);
+extern void setup_replication_mask(void);
+extern void replicate_kernel_text(void);
extern pfn_t node_getfirstfree(cnodeid_t);
#endif /* __ASM_SN_SN_PRIVATE_H */
*/
#ifdef CONFIG_MIPS32
+#ifndef IN_STRING_C
+
#define __HAVE_ARCH_STRCPY
static __inline__ char *strcpy(char *__dest, __const__ char *__src)
{
return __res;
}
+#endif /* !defined(IN_STRING_C) */
+
#define __HAVE_ARCH_STRNCMP
static __inline__ int
strncmp(__const__ char *__cs, __const__ char *__ct, size_t __count)
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *__dest, __const__ void *__src, size_t __n);
-/* Don't build bcopy at all ... */
-#define __HAVE_ARCH_BCOPY
-
#ifdef CONFIG_MIPS32
#define __HAVE_ARCH_MEMSCAN
static __inline__ void *memscan(void *__addr, int __c, size_t __size)
--- /dev/null
+/*
+ * include/asm-mips/vr41xx/cmbvr4133.h
+ *
+ * Include file for NEC CMB-VR4133.
+ *
+ * Author: Yoichi Yuasa <yyuasa@mvista.com, or source@mvista.com> and
+ * Jun Sun <jsun@mvista.com, or source@mvista.com> and
+ * Alex Sapkov <asapkov@ru.mvista.com>
+ *
+ * 2002-2004 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#ifndef __NEC_CMBVR4133_H
+#define __NEC_CMBVR4133_H
+
+#include <asm/addrspace.h>
+#include <asm/vr41xx/vr41xx.h>
+
+/*
+ * General-Purpose I/O Pin Number
+ */
+#define CMBVR41XX_INTA_PIN 1
+#define CMBVR41XX_INTB_PIN 1
+#define CMBVR41XX_INTC_PIN 3
+#define CMBVR41XX_INTD_PIN 1
+#define CMBVR41XX_INTE_PIN 1
+
+/*
+ * Interrupt Number
+ */
+#define CMBVR41XX_INTA_IRQ GIU_IRQ(CMBVR41XX_INTA_PIN)
+#define CMBVR41XX_INTB_IRQ GIU_IRQ(CMBVR41XX_INTB_PIN)
+#define CMBVR41XX_INTC_IRQ GIU_IRQ(CMBVR41XX_INTC_PIN)
+#define CMBVR41XX_INTD_IRQ GIU_IRQ(CMBVR41XX_INTD_PIN)
+#define CMBVR41XX_INTE_IRQ GIU_IRQ(CMBVR41XX_INTE_PIN)
+
+#define I8259_IRQ_BASE 72
+#define I8259_IRQ(x) (I8259_IRQ_BASE + (x))
+#define TIMER_IRQ I8259_IRQ(0)
+#define KEYBOARD_IRQ I8259_IRQ(1)
+#define I8259_SLAVE_IRQ I8259_IRQ(2)
+#define UART3_IRQ I8259_IRQ(3)
+#define UART1_IRQ I8259_IRQ(4)
+#define UART2_IRQ I8259_IRQ(5)
+#define FDC_IRQ I8259_IRQ(6)
+#define PARPORT_IRQ I8259_IRQ(7)
+#define RTC_IRQ I8259_IRQ(8)
+#define USB_IRQ I8259_IRQ(9)
+#define I8259_INTA_IRQ I8259_IRQ(10)
+#define AUDIO_IRQ I8259_IRQ(11)
+#define AUX_IRQ I8259_IRQ(12)
+#define IDE_PRIMARY_IRQ I8259_IRQ(14)
+#define IDE_SECONDARY_IRQ I8259_IRQ(15)
+#define I8259_IRQ_LAST IDE_SECONDARY_IRQ
+
+#define RTC_PORT(x) (0xaf000100 + (x))
+#define RTC_IO_EXTENT 0x140
+
+#endif /* __NEC_CMBVR4133_H */
#include <linux/config.h>
#include <linux/threads.h>
#include <linux/cache.h>
+#include <linux/irq.h>
typedef struct {
unsigned long __softirq_pending; /* set_bit is used on this */
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task;
- unsigned long idle_timestamp;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
#define HARDIRQ_BITS 16
/*
- * The hardirq mask has to be large enough to have space for potentially all IRQ sources
- * in the system nesting on a single CPU:
+ * The hardirq mask has to be large enough to have space for potentially all
+ * IRQ sources in the system nesting on a single CPU:
*/
#if (1 << HARDIRQ_BITS) < NR_IRQS
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
+void ack_bad_irq(unsigned int irq);
#endif /* _PARISC_HARDIRQ_H */
* <tomsoft@informatik.tu-chemnitz.de>
*/
-#include <asm/irq.h>
+extern void hw_resend_irq(struct hw_interrupt_type *, unsigned int);
#endif
#ifndef _ASM_IO_H
#define _ASM_IO_H
-/* USE_HPPA_IOREMAP IS THE MAGIC FLAG TO ENABLE OR DISABLE REAL IOREMAP() FUNCTIONALITY */
-/* FOR 712 or 715 MACHINES THIS SHOULD BE ENABLED,
- NEWER MACHINES STILL HAVE SOME ISSUES IN THE SCSI AND/OR NETWORK DRIVERS AND
- BECAUSE OF THAT I WILL LEAVE IT DISABLED FOR NOW <deller@gmx.de> */
-/* WHEN THOSE ISSUES ARE SOLVED, USE_HPPA_IOREMAP WILL GO AWAY */
-#define USE_HPPA_IOREMAP 0
-
-
#include <linux/config.h>
#include <linux/types.h>
#include <asm/pgtable.h>
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
-/* Memory mapped IO */
-
-extern void * __ioremap(unsigned long offset, unsigned long size, unsigned long flags);
-
-extern inline void * ioremap(unsigned long offset, unsigned long size)
-{
- return __ioremap(offset, size, 0);
-}
-
/*
- * This one maps high address device memory and turns off caching for that area.
- * it's useful if some control registers are in such an area and write combining
- * or read caching is not desirable:
+ * Memory mapped I/O
+ *
+ * readX()/writeX() do byteswapping and take an ioremapped address
+ * __raw_readX()/__raw_writeX() don't byteswap and take an ioremapped address.
+ * gsc_*() don't byteswap and operate on physical addresses;
+ * eg dev->hpa or 0xfee00000.
*/
-extern inline void * ioremap_nocache(unsigned long offset, unsigned long size)
-{
- return __ioremap(offset, size, _PAGE_NO_CACHE /* _PAGE_PCD */);
-}
-extern void iounmap(void *addr);
+#ifdef CONFIG_DEBUG_IOREMAP
+#ifdef CONFIG_64BIT
+#define NYBBLE_SHIFT 60
+#else
+#define NYBBLE_SHIFT 28
+#endif
+extern void gsc_bad_addr(unsigned long addr);
+extern void __raw_bad_addr(const volatile void __iomem *addr);
+#define gsc_check_addr(addr) \
+ if ((addr >> NYBBLE_SHIFT) != 0xf) { \
+ gsc_bad_addr(addr); \
+ addr |= 0xfUL << NYBBLE_SHIFT; \
+ }
+#define __raw_check_addr(addr) \
+ if (((unsigned long)addr >> NYBBLE_SHIFT) != 0xe) \
+ __raw_bad_addr(addr); \
+ addr = (void *)((unsigned long)addr | (0xfUL << NYBBLE_SHIFT));
+#else
+#define gsc_check_addr(addr)
+#define __raw_check_addr(addr)
+#endif
-/*
- * __raw_ variants have no defined meaning. on hppa, it means `i was
- * too lazy to ioremap first'. kind of like isa_, except that there's
- * no additional base address to add on.
- */
-#define __raw_readb(a) ___raw_readb((unsigned long)(a))
-extern __inline__ unsigned char ___raw_readb(unsigned long addr)
+static inline unsigned char gsc_readb(unsigned long addr)
{
long flags;
unsigned char ret;
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" rsm 2,%0\n"
" ldbx 0(%2),%1\n"
return ret;
}
-#define __raw_readw(a) ___raw_readw((unsigned long)(a))
-extern __inline__ unsigned short ___raw_readw(unsigned long addr)
+static inline unsigned short gsc_readw(unsigned long addr)
{
long flags;
unsigned short ret;
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" rsm 2,%0\n"
" ldhx 0(%2),%1\n"
return ret;
}
-#define __raw_readl(a) ___raw_readl((unsigned long)(a))
-extern __inline__ unsigned int ___raw_readl(unsigned long addr)
+static inline unsigned int gsc_readl(unsigned long addr)
{
u32 ret;
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" ldwax 0(%1),%0\n"
: "=r" (ret) : "r" (addr) );
return ret;
}
-#define __raw_readq(a) ___raw_readq((unsigned long)(a))
-extern __inline__ unsigned long long ___raw_readq(unsigned long addr)
+static inline unsigned long long gsc_readq(unsigned long addr)
{
unsigned long long ret;
+ gsc_check_addr(addr);
+
#ifdef __LP64__
__asm__ __volatile__(
" ldda 0(%1),%0\n"
: "=r" (ret) : "r" (addr) );
#else
/* two reads may have side effects.. */
- ret = ((u64) __raw_readl(addr)) << 32;
- ret |= __raw_readl(addr+4);
+ ret = ((u64) gsc_readl(addr)) << 32;
+ ret |= gsc_readl(addr+4);
#endif
return ret;
}
-#define __raw_writeb(a,b) ___raw_writeb(a, (unsigned long)(b))
-extern __inline__ void ___raw_writeb(unsigned char val, unsigned long addr)
+static inline void gsc_writeb(unsigned char val, unsigned long addr)
{
long flags;
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" rsm 2,%0\n"
" stbs %1,0(%2)\n"
: "=&r" (flags) : "r" (val), "r" (addr) );
}
-#define __raw_writew(a,b) ___raw_writew(a, (unsigned long)(b))
-extern __inline__ void ___raw_writew(unsigned short val, unsigned long addr)
+static inline void gsc_writew(unsigned short val, unsigned long addr)
{
long flags;
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" rsm 2,%0\n"
" sths %1,0(%2)\n"
: "=&r" (flags) : "r" (val), "r" (addr) );
}
-#define __raw_writel(a,b) ___raw_writel(a, (unsigned long)(b))
-extern __inline__ void ___raw_writel(unsigned int val, unsigned long addr)
+static inline void gsc_writel(unsigned int val, unsigned long addr)
{
+ gsc_check_addr(addr);
+
__asm__ __volatile__(
" stwas %0,0(%1)\n"
: : "r" (val), "r" (addr) );
}
-#define __raw_writeq(a,b) ___raw_writeq(a, (unsigned long)(b))
-extern __inline__ void ___raw_writeq(unsigned long long val, unsigned long addr)
+static inline void gsc_writeq(unsigned long long val, unsigned long addr)
{
+ gsc_check_addr(addr);
+
#ifdef __LP64__
__asm__ __volatile__(
" stda %0,0(%1)\n"
: : "r" (val), "r" (addr) );
#else
/* two writes may have side effects.. */
- __raw_writel(val >> 32, addr);
- __raw_writel(val, addr+4);
+ gsc_writel(val >> 32, addr);
+ gsc_writel(val, addr+4);
#endif
}
+/*
+ * The standard PCI ioremap interfaces
+ */
+
+extern void __iomem * __ioremap(unsigned long offset, unsigned long size, unsigned long flags);
+
+extern inline void __iomem * ioremap(unsigned long offset, unsigned long size)
+{
+ return __ioremap(offset, size, 0);
+}
+
+/*
+ * This one maps high address device memory and turns off caching for that
+ * area. It's useful if some control registers are in such an area and write
+ * combining or read caching is not desirable.
+ */
+extern inline void * ioremap_nocache(unsigned long offset, unsigned long size)
+{
+ return __ioremap(offset, size, _PAGE_NO_CACHE /* _PAGE_PCD */);
+}
+
+extern void iounmap(void __iomem *addr);
+
+/*
+ * USE_HPPA_IOREMAP is the magic flag to enable or disable real ioremap()
+ * functionality. It's currently disabled because it may not work on some
+ * machines.
+ */
+#define USE_HPPA_IOREMAP 0
+
#if USE_HPPA_IOREMAP
-#define readb(addr) (*(volatile unsigned char *) (addr))
-#define readw(addr) (*(volatile unsigned short *) (addr))
-#define readl(addr) (*(volatile unsigned int *) (addr))
-#define readq(addr) (*(volatile u64 *) (addr))
-#define writeb(b,addr) (*(volatile unsigned char *) (addr) = (b))
-#define writew(b,addr) (*(volatile unsigned short *) (addr) = (b))
-#define writel(b,addr) (*(volatile unsigned int *) (addr) = (b))
-#define writeq(b,addr) (*(volatile u64 *) (addr) = (b))
+static inline unsigned char __raw_readb(const volatile void __iomem *addr)
+{
+ return (*(volatile unsigned char __force *) (addr));
+}
+static inline unsigned short __raw_readw(const volatile void __iomem *addr)
+{
+ return *(volatile unsigned short __force *) addr;
+}
+static inline unsigned int __raw_readl(const volatile void __iomem *addr)
+{
+ return *(volatile unsigned int __force *) addr;
+}
+static inline unsigned long long __raw_readq(const volatile void __iomem *addr)
+{
+ return *(volatile unsigned long long __force *) addr;
+}
+
+static inline void __raw_writeb(unsigned char b, volatile void __iomem *addr)
+{
+ *(volatile unsigned char __force *) addr = b;
+}
+static inline void __raw_writew(unsigned short b, volatile void __iomem *addr)
+{
+ *(volatile unsigned short __force *) addr = b;
+}
+static inline void __raw_writel(unsigned int b, volatile void __iomem *addr)
+{
+ *(volatile unsigned int __force *) addr = b;
+}
+static inline void __raw_writeq(unsigned long long b, volatile void __iomem *addr)
+{
+ *(volatile unsigned long long __force *) addr = b;
+}
#else /* !USE_HPPA_IOREMAP */
+static inline unsigned char __raw_readb(const volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ return gsc_readb((unsigned long) addr);
+}
+static inline unsigned short __raw_readw(const volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ return gsc_readw((unsigned long) addr);
+}
+static inline unsigned int __raw_readl(const volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ return gsc_readl((unsigned long) addr);
+}
+static inline unsigned long long __raw_readq(const volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ return gsc_readq((unsigned long) addr);
+}
+
+static inline void __raw_writeb(unsigned char b, volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ gsc_writeb(b, (unsigned long) addr);
+}
+static inline void __raw_writew(unsigned short b, volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ gsc_writew(b, (unsigned long) addr);
+}
+static inline void __raw_writel(unsigned int b, volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ gsc_writel(b, (unsigned long) addr);
+}
+static inline void __raw_writeq(unsigned long long b, volatile void __iomem *addr)
+{
+ __raw_check_addr(addr);
+
+ gsc_writeq(b, (unsigned long) addr);
+}
+#endif /* !USE_HPPA_IOREMAP */
+
#define readb(addr) __raw_readb(addr)
#define readw(addr) le16_to_cpu(__raw_readw(addr))
#define readl(addr) le32_to_cpu(__raw_readl(addr))
#define readq(addr) le64_to_cpu(__raw_readq(addr))
-#define writeb(b,addr) __raw_writeb(b,addr)
-#define writew(b,addr) __raw_writew(cpu_to_le16(b),addr)
-#define writel(b,addr) __raw_writel(cpu_to_le32(b),addr)
-#define writeq(b,addr) __raw_writeq(cpu_to_le64(b),addr)
-#endif /* !USE_HPPA_IOREMAP */
+#define writeb(b, addr) __raw_writeb(b, addr)
+#define writew(b, addr) __raw_writew(cpu_to_le16(b), addr)
+#define writel(b, addr) __raw_writel(cpu_to_le32(b), addr)
+#define writeq(b, addr) __raw_writeq(cpu_to_le64(b), addr)
#define readb_relaxed(addr) readb(addr)
#define readw_relaxed(addr) readw(addr)
#define readl_relaxed(addr) readl(addr)
#define readq_relaxed(addr) readq(addr)
-#define mmiowb()
+#define mmiowb() do { } while (0)
-extern void __memcpy_fromio(unsigned long dest, unsigned long src, int count);
-extern void __memcpy_toio(unsigned long dest, unsigned long src, int count);
-extern void __memset_io(unsigned long dest, char fill, int count);
-
-#define memcpy_fromio(a,b,c) __memcpy_fromio((unsigned long)(a), (unsigned long)(b), (c))
-#define memcpy_toio(a,b,c) __memcpy_toio((unsigned long)(a), (unsigned long)(b), (c))
-#define memset_io(a,b,c) __memset_io((unsigned long)(a), (b), (c))
+void memset_io(volatile void __iomem *addr, unsigned char val, int count);
+void memcpy_fromio(void *dst, const volatile void __iomem *src, int count);
+void memcpy_toio(volatile void __iomem *dst, const void *src, int count);
/* Support old drivers which don't ioremap.
* NB this interface is scheduled to disappear in 2.5
*/
-#define EISA_BASE 0xfffffffffc000000UL
-#define isa_readb(a) readb(EISA_BASE | (a))
-#define isa_readw(a) readw(EISA_BASE | (a))
-#define isa_readl(a) readl(EISA_BASE | (a))
-#define isa_writeb(b,a) writeb((b), EISA_BASE | (a))
-#define isa_writew(b,a) writew((b), EISA_BASE | (a))
-#define isa_writel(b,a) writel((b), EISA_BASE | (a))
-#define isa_memset_io(a,b,c) memset_io(EISA_BASE | (a), (b), (c))
-#define isa_memcpy_fromio(a,b,c) memcpy_fromio((a), EISA_BASE | (b), (c))
-#define isa_memcpy_toio(a,b,c) memcpy_toio(EISA_BASE | (a), (b), (c))
-
-/*
- * These functions support PA-RISC drivers which don't yet call ioremap().
- * They will disappear once the last of these drivers is gone.
- */
-#define gsc_readb(x) __raw_readb(x)
-#define gsc_readw(x) __raw_readw(x)
-#define gsc_readl(x) __raw_readl(x)
-#define gsc_writeb(x, y) __raw_writeb(x, y)
-#define gsc_writew(x, y) __raw_writew(x, y)
-#define gsc_writel(x, y) __raw_writel(x, y)
+#define __isa_addr(x) (void __iomem *)(F_EXTEND(0xfc000000) | (x))
+#define isa_readb(a) readb(__isa_addr(a))
+#define isa_readw(a) readw(__isa_addr(a))
+#define isa_readl(a) readl(__isa_addr(a))
+#define isa_writeb(b,a) writeb((b), __isa_addr(a))
+#define isa_writew(b,a) writew((b), __isa_addr(a))
+#define isa_writel(b,a) writel((b), __isa_addr(a))
+#define isa_memset_io(a,b,c) memset_io(__isa_addr(a), (b), (c))
+#define isa_memcpy_fromio(a,b,c) memcpy_fromio((a), __isa_addr(b), (c))
+#define isa_memcpy_toio(a,b,c) memcpy_toio(__isa_addr(a), (b), (c))
/*
* value for either 32 or 64 bit mode */
#define F_EXTEND(x) ((unsigned long)((x) | (0xffffffff00000000ULL)))
+#include <asm-generic/iomap.h>
+
#endif
/*
- * linux/include/asm-parisc/irq.h
+ * include/asm-parisc/irq.h
*
- * (C) 1992, 1993 Linus Torvalds, (C) 1997 Ingo Molnar,
- * Copyright 1999 SuSE GmbH
- *
- * IRQ/IPI changes taken from work by Thomas Radke
- * <tomsoft@informatik.tu-chemnitz.de>
+ * Copyright 2005 Matthew Wilcox <matthew@wil.cx>
*/
#ifndef _ASM_PARISC_IRQ_H
#define _ASM_PARISC_IRQ_H
-#include <asm/ptrace.h>
-#include <asm/types.h>
-#include <asm/errno.h>
-
-#include <linux/string.h>
-#include <linux/interrupt.h>
#include <linux/config.h>
+#include <asm/types.h>
+#define NO_IRQ (-1)
-#define CPU_IRQ_REGION 1
-#define TIMER_IRQ (IRQ_FROM_REGION(CPU_IRQ_REGION) | 0)
-#define IPI_IRQ (IRQ_FROM_REGION(CPU_IRQ_REGION) | 1)
-
-/* This should be 31 for PA1.1 binaries and 63 for PA-2.0 wide mode */
-#define MAX_CPU_IRQ (BITS_PER_LONG - 1)
-
-#if BITS_PER_LONG == 32
-# define IRQ_REGION_SHIFT 5
+#ifdef CONFIG_GSC
+#define GSC_IRQ_BASE 16
+#define GSC_IRQ_MAX 63
+#define CPU_IRQ_BASE 64
#else
-# define IRQ_REGION_SHIFT 6
+#define CPU_IRQ_BASE 16
#endif
-#define IRQ_PER_REGION (1 << IRQ_REGION_SHIFT)
-#define NR_IRQ_REGS 16
-#define NR_IRQS (NR_IRQ_REGS * IRQ_PER_REGION)
-
-#define IRQ_REGION(irq) ((irq) >> IRQ_REGION_SHIFT)
-#define IRQ_OFFSET(irq) ((irq) & ((1<<IRQ_REGION_SHIFT)-1))
-#define IRQ_FROM_REGION(reg) ((reg) << IRQ_REGION_SHIFT)
-
-#define EISA_IRQ_REGION 0 /* region 0 needs to be reserved for EISA */
-#define EISA_MAX_IRQS 16 /* max. (E)ISA irq line */
-
-struct irq_region_ops {
- void (*disable_irq)(void *dev, int irq);
- void (* enable_irq)(void *dev, int irq);
- void (* mask_irq)(void *dev, int irq);
- void (* unmask_irq)(void *dev, int irq);
-};
-
-struct irq_region_data {
- void *dev;
- const char *name;
- int irqbase;
- unsigned int status[IRQ_PER_REGION]; /* IRQ status */
-};
+#define TIMER_IRQ (CPU_IRQ_BASE + 0)
+#define IPI_IRQ (CPU_IRQ_BASE + 1)
+#define CPU_IRQ_MAX (CPU_IRQ_BASE + (BITS_PER_LONG - 1))
-struct irq_region {
- struct irq_region_ops ops;
- struct irq_region_data data;
-
- struct irqaction *action;
-};
-
-extern struct irq_region *irq_region[NR_IRQ_REGS];
+#define NR_IRQS (CPU_IRQ_MAX + 1)
static __inline__ int irq_canonicalize(int irq)
{
-#ifdef CONFIG_EISA
- return (irq == (IRQ_FROM_REGION(EISA_IRQ_REGION)+2)
- ? (IRQ_FROM_REGION(EISA_IRQ_REGION)+9) : irq);
-#else
- return irq;
-#endif
+ return (irq == 2) ? 9 : irq;
}
-extern void disable_irq(int);
-#define disable_irq_nosync(i) disable_irq(i)
-extern void enable_irq(int);
-
-extern void do_irq(struct irqaction *a, int i, struct pt_regs *p);
-extern void do_irq_mask(unsigned long mask, struct irq_region *region,
- struct pt_regs *regs);
+struct hw_interrupt_type;
-extern struct irq_region *alloc_irq_region(int count, struct irq_region_ops *ops,
- const char *name, void *dev);
+/*
+ * Some useful "we don't have to do anything here" handlers. Should
+ * probably be provided by the generic code.
+ */
+void no_ack_irq(unsigned int irq);
+void no_end_irq(unsigned int irq);
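A hedged sketch of how an interrupt type might use these stubs; the "FOO" type and its enable/disable bodies are illustrative, not taken from the patch.

#include <linux/irq.h>

static void foo_enable_irq(unsigned int irq)  { /* unmask in hardware */ }
static void foo_disable_irq(unsigned int irq) { /* mask in hardware */ }

static unsigned int foo_startup_irq(unsigned int irq)
{
	foo_enable_irq(irq);
	return 0;
}

static struct hw_interrupt_type foo_irq_type = {
	.typename	= "FOO",
	.startup	= foo_startup_irq,
	.shutdown	= foo_disable_irq,
	.enable		= foo_enable_irq,
	.disable	= foo_disable_irq,
	.ack		= no_ack_irq,	/* nothing to acknowledge */
	.end		= no_end_irq,	/* nothing to do at EOI time */
};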
extern int txn_alloc_irq(void);
extern int txn_claim_irq(int);
extern unsigned int txn_alloc_data(int, unsigned int);
extern unsigned long txn_alloc_addr(int);
+extern int cpu_claim_irq(unsigned int irq, struct hw_interrupt_type *, void *);
+
/* soft power switch support (power.c) */
extern struct tasklet_struct power_tasklet;
-struct irqaction;
-int handle_IRQ_event(unsigned int, struct pt_regs *, struct irqaction *);
-
#endif /* _ASM_PARISC_IRQ_H */
struct parisc_device {
unsigned long hpa; /* Hard Physical Address */
struct parisc_device_id id;
- struct parisc_device *parent;
- struct parisc_device *sibling;
- struct parisc_device *child;
struct parisc_driver *driver; /* Driver for this device */
- void *sysdata; /* Driver instance private data */
char name[80]; /* The hardware description */
int irq;
#define to_parisc_device(d) container_of(d, struct parisc_device, dev)
#define to_parisc_driver(d) container_of(d, struct parisc_driver, drv)
+#define parisc_parent(d) to_parisc_device(d->dev.parent)
+
+static inline void
+parisc_set_drvdata(struct parisc_device *d, void *p)
+{
+ dev_set_drvdata(&d->dev, p);
+}
+
+static inline void *
+parisc_get_drvdata(struct parisc_device *d)
+{
+ return dev_get_drvdata(&d->dev);
+}
extern struct bus_type parisc_bus_type;
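A minimal sketch of a hypothetical driver using the drvdata accessors added above; the driver name, private structure and mapping size are invented for illustration.

#include <linux/slab.h>
#include <linux/errno.h>
#include <asm/io.h>
#include <asm/parisc-device.h>

struct foo_private {
	void __iomem *regs;
};

static int foo_probe(struct parisc_device *dev)
{
	struct foo_private *priv;

	priv = kmalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;
	priv->regs = ioremap(dev->hpa, 4096);
	parisc_set_drvdata(dev, priv);		/* stash per-instance data */
	return 0;
}

static void foo_remove(struct parisc_device *dev)
{
	struct foo_private *priv = parisc_get_drvdata(dev);

	iounmap(priv->regs);
	kfree(priv);
}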
/* XXX should we use iaoq[1] or iaoq[0] ? */
#define user_mode(regs) (((regs)->iaoq[0] & 3) ? 1 : 0)
-#define user_space(regs) (((regs)->iasq[0] != 0) ? 1 : 0)
+#define user_space(regs) (((regs)->iasq[1] != 0) ? 1 : 0)
#define instruction_pointer(regs) ((regs)->iaoq[0] & ~3)
#define profile_pc(regs) instruction_pointer(regs)
extern void show_regs(struct pt_regs *);
#define LASI_BASE_BAUD ( 7272727 / 16 )
#define BASE_BAUD LASI_BASE_BAUD
-#ifdef CONFIG_SERIAL_DETECT_IRQ
-#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ)
-#define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_AUTO_IRQ)
-#else
-#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
-#define STD_COM4_FLAGS ASYNC_BOOT_AUTOCONF
-#endif
-
-#ifdef CONFIG_SERIAL_MANY_PORTS
-#define FOURPORT_FLAGS ASYNC_FOURPORT
-#define ACCENT_FLAGS 0
-#define BOCA_FLAGS 0
-#define HUB6_FLAGS 0
-#endif
-
/*
* We don't use the ISA probing code, so these entries are just to reserve
* space. Some example (maximal) configurations:
#define __HAVE_ARCH_MEMCPY
void * memcpy(void * dest,const void *src,size_t count);
-#define __HAVE_ARCH_BCOPY
-void bcopy(const void * srcp, void * destp, size_t count);
-
#endif
u32 pp_base;
u32 acpi_base;
int suckyio_irq_enabled;
- struct irq_region *irq_region;
struct pci_dev *lio_pdev; /* pci device for legacy IO (fn 1) */
struct pci_dev *usb_pdev; /* pci device for USB (fn 2) */
};
|| ((x)->device == PCI_DEVICE_ID_NS_87560_LIO) \
|| ((x)->device == PCI_DEVICE_ID_NS_87560_USB) ) )
-struct hwif_s;
-
-extern void superio_inform_irq(int irq);
-extern void superio_serial_init(void); /* called by rs_init() */
extern int superio_fixup_irq(struct pci_dev *pcidev); /* called by iosapic */
-extern void superio_fixup_pci(struct pci_dev *pdev);
#endif /* _PARISC_SUPERIO_H */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_POLLING_NRFLAG 4 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_32BIT 5 /* 32 bit binary */
+#define TIF_MEMDIE 6
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
--- /dev/null
+#ifndef __PPC_CPUTIME_H
+#define __PPC_CPUTIME_H
+
+#include <asm-generic/cputime.h>
+
+#endif /* __PPC_CPUTIME_H */
* GG2 specific PCI Registers
*/
-extern unsigned long gg2_pci_config_base; /* kernel virtual address */
+extern void __iomem *gg2_pci_config_base; /* kernel virtual address */
#define GG2_PCI_BUSNO 0x40 /* Bus number */
#define GG2_PCI_SUBBUSNO 0x41 /* Subordinate bus number */
static inline void *kmap(struct page *page)
{
might_sleep();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
return kmap_high(page);
}
static inline void kunmap(struct page *page)
{
BUG_ON(in_interrupt());
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return;
kunmap_high(page);
}
/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
inc_preempt_count();
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
idx = type + KM_TYPE_NR*smp_processor_id();
#ifdef CONFIG_SBC8560
#include <platforms/85xx/sbc8560.h>
#endif
+#ifdef CONFIG_STX_GP3
+#include <platforms/85xx/stx_gp3.h>
+#endif
#define _IO_BASE isa_io_base
#define _ISA_MEM_BASE isa_mem_base
#define MPC85xx_CPM_SIZE (0x40000)
#define MPC85xx_DMA_OFFSET (0x21000)
#define MPC85xx_DMA_SIZE (0x01000)
+#define MPC85xx_DMA0_OFFSET (0x21100)
+#define MPC85xx_DMA0_SIZE (0x00080)
+#define MPC85xx_DMA1_OFFSET (0x21180)
+#define MPC85xx_DMA1_SIZE (0x00080)
+#define MPC85xx_DMA2_OFFSET (0x21200)
+#define MPC85xx_DMA2_SIZE (0x00080)
+#define MPC85xx_DMA3_OFFSET (0x21280)
+#define MPC85xx_DMA3_SIZE (0x00080)
#define MPC85xx_ENET1_OFFSET (0x24000)
#define MPC85xx_ENET1_SIZE (0x01000)
#define MPC85xx_ENET2_OFFSET (0x25000)
#define CCSRBAR BOARD_CCSRBAR
#endif
+enum ppc_sys_devices {
+ MPC85xx_TSEC1,
+ MPC85xx_TSEC2,
+ MPC85xx_FEC,
+ MPC85xx_IIC1,
+ MPC85xx_DMA0,
+ MPC85xx_DMA1,
+ MPC85xx_DMA2,
+ MPC85xx_DMA3,
+ MPC85xx_DUART,
+ MPC85xx_PERFMON,
+ MPC85xx_SEC2,
+ MPC85xx_CPM_SPI,
+ MPC85xx_CPM_I2C,
+ MPC85xx_CPM_USB,
+ MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2,
+ MPC85xx_CPM_SCC3,
+ MPC85xx_CPM_SCC4,
+ MPC85xx_CPM_FCC1,
+ MPC85xx_CPM_FCC2,
+ MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_MCC1,
+ MPC85xx_CPM_MCC2,
+ MPC85xx_CPM_SMC1,
+ MPC85xx_CPM_SMC2,
+};
+
#endif /* CONFIG_85xx */
#endif /* __ASM_MPC85xx_H__ */
#endif /* __KERNEL__ */
extern u_int OpenPIC_NumInitSenses;
extern u_char *OpenPIC_InitSenses;
-extern void* OpenPIC_Addr;
+extern void __iomem * OpenPIC_Addr;
extern int epic_serial_mode;
/* Exported functions */
-extern void openpic_set_sources(int first_irq, int num_irqs, void *isr);
+extern void openpic_set_sources(int first_irq, int num_irqs, void __iomem *isr);
extern void openpic_init(int linux_irq_offset);
extern void openpic_init_nmi_irq(u_int irq);
extern void openpic_hookup_cascade(u_int irq, char *name,
unsigned long pci_mem_offset;
struct pci_ops *ops;
- volatile unsigned int *cfg_addr;
- volatile unsigned char *cfg_data;
+ volatile unsigned int __iomem *cfg_addr;
+ volatile void __iomem *cfg_data;
/*
* If set, indirect method will set the cfg_type bit as
* needed to generate type 1 configuration transactions.
int where, u32 val);
extern void setup_indirect_pci_nomap(struct pci_controller* hose,
- u32 cfg_addr, u32 cfg_data);
+ void __iomem *cfg_addr, void __iomem *cfg_data);
extern void setup_indirect_pci(struct pci_controller* hose,
u32 cfg_addr, u32 cfg_data);
extern void setup_grackle(struct pci_controller *hose);
--- /dev/null
+#ifndef __PERFMON_H
+#define __PERFMON_H
+
+extern void (*perf_irq)(struct pt_regs *);
+
+int request_perfmon_irq(void (*handler)(struct pt_regs *));
+void free_perfmon_irq(void);
+
+#ifdef CONFIG_FSL_BOOKE
+void init_pmc_stop(int ctr);
+void set_pmc_event(int ctr, int event);
+void set_pmc_user_kernel(int ctr, int user, int kernel);
+void set_pmc_marked(int ctr, int mark0, int mark1);
+void pmc_start_ctr(int ctr, int enable);
+void pmc_start_ctrs(int enable);
+void pmc_stop_ctrs(void);
+void dump_pmcs(void);
+
+extern struct op_ppc32_model op_model_fsl_booke;
+#endif
+
+#endif /* __PERFMON_H */
#define DMA_TCE_ENABLE (1<<(8-DMA_CR_OFFSET))
#define SET_DMA_TCE(x) (((x)&0x1)<<(8-DMA_CR_OFFSET))
-#define DMA_DEC (1<<(2) /* Address Decrement */
+#define DMA_DEC (1<<(2)) /* Address Decrement */
#define SET_DMA_DEC(x) (((x)&0x1)<<2)
#define GET_DMA_DEC(x) (((x)&DMA_DEC)>>2)
+
/*
* Transfer Modes
* These modes are defined in a way that makes it possible to
#define DMA_SG2 (1<<5)
#define DMA_SG3 (1<<4)
+/* DMA Channel Count Register */
+#define DMA_CTC_BTEN (1<<23) /* Burst Enable/Disable bit */
+#define DMA_CTC_BSIZ_MSK (3<<21) /* Mask of the Burst size bits */
+#define DMA_CTC_BSIZ_2 (0)
+#define DMA_CTC_BSIZ_4 (1<<21)
+#define DMA_CTC_BSIZ_8 (2<<21)
+#define DMA_CTC_BSIZ_16 (3<<21)
+
/*
* DMA SG Command Register
*/
char td; /* transfer direction */
#endif
+ char int_on_final_sg;/* for scatter/gather - only interrupt on last sg */
} ppc_dma_ch_t;
/*
extern int ppc4xx_alloc_dma_handle(sgl_handle_t *, unsigned int, unsigned int);
extern void ppc4xx_free_dma_handle(sgl_handle_t);
extern int ppc4xx_get_dma_status(void);
+extern int ppc4xx_enable_burst(unsigned int);
+extern int ppc4xx_disable_burst(unsigned int);
+extern int ppc4xx_set_burst_size(unsigned int, unsigned int);
extern void ppc4xx_set_src_addr(int dmanr, phys_addr_t src_addr);
extern void ppc4xx_set_dst_addr(int dmanr, phys_addr_t dst_addr);
extern void ppc4xx_enable_dma(unsigned int dmanr);
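A hedged usage sketch of the new burst controls; that the size argument takes the raw DMA_CTC_BSIZ_* field value, and the use of channel 0, are assumptions made for the example.

#include <asm/ppc4xx_dma.h>

static int foo_setup_dma_burst(void)
{
	int err;

	/* Assumed: pass the raw DMA_CTC_BSIZ_* field value (8-word bursts). */
	err = ppc4xx_set_burst_size(0, DMA_CTC_BSIZ_8);
	if (err)
		return err;
	return ppc4xx_enable_burst(0);
}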
/*
+ * include/asm-ppc/ppc4xx_pic.h
*
- * Copyright (c) 1999 Grant Erickson <grant@lcse.umn.edu>
+ * Interrupt controller driver for PowerPC 4xx-based processors.
*
- * Module name: ppc4xx_pic.h
+ * Copyright (c) 1999 Grant Erickson <grant@lcse.umn.edu>
*
- * Description:
- * Interrupt controller driver for PowerPC 4xx-based processors.
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
*/
#ifndef __PPC4XX_PIC_H__
#define __PPC4XX_PIC_H__
#include <linux/config.h>
+#include <linux/types.h>
#include <linux/irq.h>
-/* External Global Variables */
-
-extern struct hw_interrupt_type *ppc4xx_pic;
-extern unsigned int ibm4xxPIC_NumInitSenses;
-extern unsigned char *ibm4xxPIC_InitSenses;
-
-/* Function Prototypes */
+/* "Fixed" UIC settings (they are chip, not board specific),
+ * e.g. polarity/triggerring for internal interrupt sources.
+ *
+ * Platform port should provide NR_UICS-sized array named ppc4xx_core_uic_cfg
+ * with these "fixed" settings: .polarity contains exact value which will
+ * be written (masked with "ext_irq_mask") into UICx_PR register,
+ * .triggering - to UICx_TR.
+ *
+ * Settings for external IRQs can be specified separately by the
+ * board support code. In this case properly sized array of unsigned
+ * char named ppc4xx_uic_ext_irq_cfg should be filled with correct
+ * values using IRQ_SENSE_XXXXX and IRQ_POLARITY_XXXXXXX defines.
+ *
+ * If these arrays aren't provided, UIC initialization code keeps firmware
+ * configuration. Also, ppc4xx_uic_ext_irq_cfg implies ppc4xx_core_uic_cfg
+ * is defined.
+ *
+ * Both ppc4xx_core_uic_cfg and ppc4xx_uic_ext_irq_cfg are declared as
+ * "weak" symbols in ppc4xx_pic.c
+ *
+ */
+struct ppc4xx_uic_settings {
+ u32 polarity;
+ u32 triggering;
+ u32 ext_irq_mask;
+};
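A rough board-port sketch of the convention described in the comment above; the register values, the number of UICs and the external-IRQ masks are hypothetical.

#include <linux/init.h>
#include <asm/ppc4xx_pic.h>

/* Hypothetical fixed settings for a chip with two UICs. */
struct ppc4xx_uic_settings ppc4xx_core_uic_cfg[] __initdata = {
	{	/* UIC0 */
		.polarity	= 0xfffffe00,	/* value for UIC0_PR (masked with ext_irq_mask) */
		.triggering	= 0x01c00000,	/* value for UIC0_TR */
		.ext_irq_mask	= 0x000001ff,	/* IRQ0 .. IRQ8 are external pins */
	},
	{	/* UIC1 */
		.polarity	= 0xffffc000,
		.triggering	= 0x00ff8000,
		.ext_irq_mask	= 0x00003fff,
	},
};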
extern void ppc4xx_pic_init(void);
-extern int ppc4xx_pic_get_irq(struct pt_regs *regs);
-#endif /* __PPC4XX_PIC_H__ */
+#endif /* __PPC4XX_PIC_H__ */
--- /dev/null
+/*
+ * include/asm-ppc/ppc_sys.h
+ *
+ * PPC system definitions and library functions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_PPC_SYS_H
+#define __ASM_PPC_SYS_H
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/types.h>
+
+#if defined(CONFIG_85xx)
+#include <asm/mpc85xx.h>
+#else
+#error "need definition of ppc_sys_devices"
+#endif
+
+struct ppc_sys_spec {
+	/* PPC sys is matched via (ID & mask) == value; the id could be
+	 * PVR, SVR, IMMR, etc. */
+ u32 mask;
+ u32 value;
+ u32 num_devices;
+ char *ppc_sys_name;
+ enum ppc_sys_devices *device_list;
+};
+
+/* describes all specific chips and which devices they have on them */
+extern struct ppc_sys_spec ppc_sys_specs[];
+extern struct ppc_sys_spec *cur_ppc_sys_spec;
+
+/* determine which specific SOC we are */
+extern void identify_ppc_sys_by_id(u32 id) __init;
+extern void identify_ppc_sys_by_name(char *name) __init;
+
+/* describes all devices that may exist in a given family of processors */
+extern struct platform_device ppc_sys_platform_devices[];
+
+/* allow any platform_device fixup to occur before device is registered */
+extern int (*ppc_sys_device_fixup) (struct platform_device * pdev);
+
+/* Update all memory resources by paddr, call before platform_device_register */
+extern void ppc_sys_fixup_mem_resource(struct platform_device *pdev,
+ phys_addr_t paddr) __init;
+
+/* Get platform_data pointer out of platform device, call before platform_device_register */
+extern void *ppc_sys_get_pdata(enum ppc_sys_devices dev) __init;
+
+/* remove a device from the system */
+extern void ppc_sys_device_remove(enum ppc_sys_devices dev);
+
+#endif /* __ASM_PPC_SYS_H */
+#endif /* __KERNEL__ */
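A hedged sketch of board setup glue built on these interfaces, assuming an 85xx configuration; the SVR value, the CCSR base and which devices get fixed up or removed are assumed to come from the board port.

#include <asm/ppc_sys.h>

static void __init board_identify_soc(u32 svr, phys_addr_t ccsrbar)
{
	/* Pick the matching ppc_sys_spec entry (mask/value match on the id). */
	identify_ppc_sys_by_id(svr);

	/* Rebase an on-chip device's resources to the real CCSR base ... */
	ppc_sys_fixup_mem_resource(&ppc_sys_platform_devices[MPC85xx_TSEC1],
				   ccsrbar);

	/* ... and drop a device this board does not bring out. */
	ppc_sys_device_remove(MPC85xx_CPM_SCC1);
}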
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
spinlock_t wait_lock;
struct list_head wait_list;
-#ifdef RWSEM_DEBUG
+#if RWSEM_DEBUG
int debug;
#endif
};
/*
* initialisation
*/
-#ifdef RWSEM_DEBUG
+#if RWSEM_DEBUG
#define __RWSEM_DEBUG_INIT , 0
#else
#define __RWSEM_DEBUG_INIT /* */
sem->count = RWSEM_UNLOCKED_VALUE;
spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#ifdef RWSEM_DEBUG
+#if RWSEM_DEBUG
sem->debug = 0;
#endif
}
#include <asm-generic/sections.h>
-extern char _end[];
-
#define __pmac __attribute__ ((__section__ (".pmac.text")))
#define __pmacdata __attribute__ ((__section__ (".pmac.data")))
#define __pmacfunc(__argpmac) \
#define ptrace_signal_deliver(regs, cookie) do { } while (0)
#endif /* __KERNEL__ */
+/*
+ * These are parameters to dbg_sigreturn syscall. They enable or
+ * disable certain debugging things that can be done from signal
+ * handlers. The dbg_sigreturn syscall *must* be called from an
+ * SA_SIGINFO signal handler so the ucontext can be passed to it. It takes an
+ * array of struct sig_dbg_op, which has the debug operations to
+ * perform before returning from the signal.
+ */
+struct sig_dbg_op {
+ int dbg_type;
+ unsigned long dbg_value;
+};
+
+/* Enable or disable single-stepping. The value sets the state. */
+#define SIG_DBG_SINGLE_STEPPING 1
+
+/* Enable or disable branch tracing. The value sets the state. */
+#define SIG_DBG_BRANCH_TRACING 2
+
#endif
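A small sketch of the argument block described above; how the ppc64 dbg_sigreturn syscall is actually invoked from the SA_SIGINFO handler is not shown, since its calling convention is outside this fragment.

/* Sketch only: one operation that (re-)enables single-stepping. */
struct sig_dbg_op ops[1] = {
	{ .dbg_type = SIG_DBG_SINGLE_STEPPING, .dbg_value = 1 },
};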
#define __HAVE_ARCH_STRCMP
#define __HAVE_ARCH_STRCAT
#define __HAVE_ARCH_MEMSET
-#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMCPY
#define __HAVE_ARCH_MEMMOVE
#define __HAVE_ARCH_MEMCMP
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_POLLING_NRFLAG 4 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
+#define TIF_MEMDIE 5
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
const char *function;
};
+struct bug_entry *find_bug(unsigned long bugaddr);
+
/*
* If this bit is set in the line number it means that the trap
* is for WARN_ON rather than BUG or BUG_ON.
#ifndef __ARCH_PPC64_CACHE_H
#define __ARCH_PPC64_CACHE_H
+#include <asm/types.h>
+
/* bytes per L1 cache line */
#define L1_CACHE_SHIFT 7
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define SMP_CACHE_BYTES L1_CACHE_BYTES
#define L1_CACHE_SHIFT_MAX 7 /* largest L1 which this arch supports */
+#ifndef __ASSEMBLY__
+
+struct ppc64_caches {
+ u32 dsize; /* L1 d-cache size */
+ u32 dline_size; /* L1 d-cache line size */
+ u32 log_dline_size;
+ u32 dlines_per_page;
+ u32 isize; /* L1 i-cache size */
+ u32 iline_size; /* L1 i-cache line size */
+ u32 log_iline_size;
+ u32 ilines_per_page;
+};
+
+extern struct ppc64_caches ppc64_caches;
+
+#endif
+
#endif
--- /dev/null
+#ifndef __PPC_CPUTIME_H
+#define __PPC_CPUTIME_H
+
+#include <asm-generic/cputime.h>
+
+#endif /* __PPC_CPUTIME_H */
extern int HvLpEvent_registerHandler( HvLpEvent_Type eventType, LpEventHandler hdlr);
// Unregister a handler for an event type
+// This call will sleep until the handler being removed is guaranteed to
+// be no longer executing on any CPU. Do not call with locks held.
+//
// returns 0 on success
// Unregister will fail if there are any paths open for the type
extern int HvLpEvent_unregisterHandler( HvLpEvent_Type eventType );
// address of the OS's NACA).
//
#include <asm/types.h>
+#include <asm/naca.h>
//=============================================================================
//
#include <asm/page.h>
#include <asm/abs_addr.h>
-#include <asm/naca.h>
#include <asm/iSeries/ItLpNaca.h>
-#include <asm/iSeries/ItLpPaca.h>
#include <asm/iSeries/ItLpRegSave.h>
-#include <asm/paca.h>
#include <asm/iSeries/HvReleaseData.h>
#include <asm/iSeries/LparMap.h>
#include <asm/iSeries/ItVpdAreas.h>
char Location[20]; /* Frame 1, Card C10 */
};
-/************************************************************************/
-/* Location Data extracted from the VPD list and device info. */
-/************************************************************************/
-
-struct LocationDataStruct { /* Location data structure for device */
- u16 Bus; /* iSeries Bus Number 0x00*/
- u16 Board; /* iSeries Board 0x02*/
- u8 FrameId; /* iSeries spcn Frame Id 0x04*/
- u8 PhbId; /* iSeries Phb Location 0x05*/
- u8 AgentId; /* iSeries AgentId 0x06*/
- u8 Card;
- char CardLocation[4];
-};
-
-typedef struct LocationDataStruct LocationData;
-#define LOCATION_DATA_SIZE 48
-
/************************************************************************/
/* Functions */
/************************************************************************/
-extern LocationData* iSeries_GetLocationData(struct pci_dev* PciDev);
extern int iSeries_Device_Information(struct pci_dev*,char*, int);
extern void iSeries_Get_Location_Code(struct iSeries_Device_Node*);
extern int iSeries_Device_ToggleReset(struct pci_dev* PciDev, int AssertTime, int DelayTime);
--- /dev/null
+#ifndef _PPC64_KDEBUG_H
+#define _PPC64_KDEBUG_H 1
+
+/* nearly identical to x86_64/i386 code */
+
+#include <linux/notifier.h>
+
+struct pt_regs;
+
+struct die_args {
+ struct pt_regs *regs;
+ const char *str;
+ long err;
+ int trapnr;
+ int signr;
+};
+
+/*
+ Note - you should normally never unregister, because unregistering can
+ race with NMIs. If you really want to do it: first unregister, then call
+ synchronize_kernel(), then free.
+ */
+int register_die_notifier(struct notifier_block *nb);
+extern struct notifier_block *ppc64_die_chain;
+
+/* Grossly misnamed. */
+enum die_val {
+ DIE_OOPS = 1,
+ DIE_IABR_MATCH,
+ DIE_DABR_MATCH,
+ DIE_BPT,
+ DIE_SSTEP,
+ DIE_GPF,
+ DIE_PAGE_FAULT,
+};
+
+static inline int notify_die(enum die_val val,char *str,struct pt_regs *regs,long err,int trap, int sig)
+{
+ struct die_args args = { .regs=regs, .str=str, .err=err, .trapnr=trap,.signr=sig };
+ return notifier_call_chain(&ppc64_die_chain, val, &args);
+}
+
+#endif
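A hedged sketch of a die-chain consumer using the notifier interface above; the handler name and the action taken are illustrative only.

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <asm/ptrace.h>
#include <asm/kdebug.h>

static int my_die_handler(struct notifier_block *self,
			  unsigned long val, void *data)
{
	struct die_args *args = data;

	if (val == DIE_OOPS)
		printk(KERN_ERR "oops: %s at nip 0x%lx\n",
		       args->str, args->regs->nip);
	return NOTIFY_DONE;
}

static struct notifier_block my_die_nb = {
	.notifier_call = my_die_handler,
};

static int __init my_die_init(void)
{
	return register_die_notifier(&my_die_nb);
}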
--- /dev/null
+#ifndef _ASM_KPROBES_H
+#define _ASM_KPROBES_H
+/*
+ * Kernel Probes (KProbes)
+ * include/asm-ppc64/kprobes.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ *
+ * 2002-Oct Created by Vamsi Krishna S <vamsi_krishna@in.ibm.com> Kernel
+ * Probes initial implementation ( includes suggestions from
+ * Rusty Russell).
+ * 2004-Nov Modified for PPC64 by Ananth N Mavinakayanahalli
+ * <ananth@in.ibm.com>
+ */
+#include <linux/types.h>
+#include <linux/ptrace.h>
+
+struct pt_regs;
+
+typedef unsigned int kprobe_opcode_t;
+#define BREAKPOINT_INSTRUCTION 0x7fe00008 /* trap */
+#define MAX_INSN_SIZE 1
+
+#define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)((func_descr_t *)pentry)
+
+/* Architecture specific copy of original instruction */
+struct arch_specific_insn {
+ /* copy of original instruction */
+ kprobe_opcode_t insn[MAX_INSN_SIZE];
+};
+
+#ifdef CONFIG_KPROBES
+extern int kprobe_exceptions_notify(struct notifier_block *self,
+ unsigned long val, void *data);
+#else /* !CONFIG_KPROBES */
+static inline int kprobe_exceptions_notify(struct notifier_block *self,
+ unsigned long val, void *data)
+{
+ return 0;
+}
+#endif
+#endif /* _ASM_KPROBES_H */
--- /dev/null
+/*
+ * lppaca.h
+ * Copyright (C) 2001 Mike Corrigan IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef _ASM_LPPACA_H
+#define _ASM_LPPACA_H
+
+//=============================================================================
+//
+// This control block contains the data that is shared between the
+// hypervisor (PLIC) and the OS.
+//
+//
+//----------------------------------------------------------------------------
+#include <asm/types.h>
+
+struct lppaca
+{
+//=============================================================================
+// CACHE_LINE_1 0x0000 - 0x007F Contains read-only data
+// NOTE: The xDynXyz fields are dynamically changed by PLIC when preparing
+// to bring a processor online or when dispatching a virtual processor!
+//=============================================================================
+ u32 desc; // Eye catcher 0xD397D781 x00-x03
+ u16 size; // Size of this struct x04-x05
+ u16 reserved1; // Reserved x06-x07
+ u16 reserved2:14; // Reserved x08-x09
+ u8 shared_proc:1; // Shared processor indicator ...
+ u8 secondary_thread:1; // Secondary thread indicator ...
+ volatile u8 dyn_proc_status:8; // Dynamic Status of this proc x0A-x0A
+ u8 secondary_thread_count; // Secondary thread count x0B-x0B
+	volatile u16 dyn_hv_phys_proc_index;// Dynamic HV Physical Proc Index x0C-x0D
+	volatile u16 dyn_hv_log_proc_index; // Dynamic HV Logical Proc Index  x0E-x0F
+ u32 decr_val; // Value for Decr programming x10-x13
+ u32 pmc_val; // Value for PMC regs x14-x17
+ volatile u32 dyn_hw_node_id; // Dynamic Hardware Node id x18-x1B
+ volatile u32 dyn_hw_proc_id; // Dynamic Hardware Proc Id x1C-x1F
+ volatile u32 dyn_pir; // Dynamic ProcIdReg value x20-x23
+ u32 dsei_data; // DSEI data x24-x27
+ u64 sprg3; // SPRG3 value x28-x2F
+ u8 reserved3[80]; // Reserved x30-x7F
+
+//=============================================================================
+// CACHE_LINE_2 0x0080 - 0x00FF Contains local read-write data
+//=============================================================================
+ // This Dword contains a byte for each type of interrupt that can occur.
+ // The IPI is a count while the others are just a binary 1 or 0.
+ union {
+ u64 any_int;
+ struct {
+ u16 reserved; // Reserved - cleared by #mpasmbl
+ u8 xirr_int; // Indicates xXirrValue is valid or Immed IO
+ u8 ipi_cnt; // IPI Count
+ u8 decr_int; // DECR interrupt occurred
+ u8 pdc_int; // PDC interrupt occurred
+ u8 quantum_int; // Interrupt quantum reached
+ u8 old_plic_deferred_ext_int; // Old PLIC has a deferred XIRR pending
+ } fields;
+ } int_dword;
+
+ // Whenever any fields in this Dword are set then PLIC will defer the
+ // processing of external interrupts. Note that PLIC will store the
+ // XIRR directly into the xXirrValue field so that another XIRR will
+ // not be presented until this one clears. The layout of the low
+	// 4 bytes of this Dword is up to SLIC - PLIC just checks whether the
+	// entire Dword is zero or not. A non-zero value in the low-order
+	// 2 bytes will result in SLIC being granted the highest thread
+ // priority upon return. A 0 will return to SLIC as medium priority.
+ u64 plic_defer_ints_area; // Entire Dword
+
+ // Used to pass the real SRR0/1 from PLIC to SLIC as well as to
+ // pass the target SRR0/1 from SLIC to PLIC on a SetAsrAndRfid.
+ u64 saved_srr0; // Saved SRR0 x10-x17
+ u64 saved_srr1; // Saved SRR1 x18-x1F
+
+ // Used to pass parms from the OS to PLIC for SetAsrAndRfid
+ u64 saved_gpr3; // Saved GPR3 x20-x27
+ u64 saved_gpr4; // Saved GPR4 x28-x2F
+ u64 saved_gpr5; // Saved GPR5 x30-x37
+
+ u8 reserved4; // Reserved x38-x38
+ u8 cpuctls_task_attrs; // Task attributes for cpuctls x39-x39
+ u8 fpregs_in_use; // FP regs in use x3A-x3A
+ u8 pmcregs_in_use; // PMC regs in use x3B-x3B
+ volatile u32 saved_decr; // Saved Decr Value x3C-x3F
+ volatile u64 emulated_time_base;// Emulated TB for this thread x40-x47
+ volatile u64 cur_plic_latency; // Unaccounted PLIC latency x48-x4F
+ u64 tot_plic_latency; // Accumulated PLIC latency x50-x57
+ u64 wait_state_cycles; // Wait cycles for this proc x58-x5F
+ u64 end_of_quantum; // TB at end of quantum x60-x67
+ u64 pdc_saved_sprg1; // Saved SPRG1 for PMC int x68-x6F
+ u64 pdc_saved_srr0; // Saved SRR0 for PMC int x70-x77
+ volatile u32 virtual_decr; // Virtual DECR for shared procsx78-x7B
+ u16 slb_count; // # of SLBs to maintain x7C-x7D
+ u8 idle; // Indicate OS is idle x7E
+ u8 reserved5; // Reserved x7F
+
+
+//=============================================================================
+// CACHE_LINE_3 0x0100 - 0x017F: This line is shared with other processors
+//=============================================================================
+ // This is the yield_count. An "odd" value (low bit on) means that
+ // the processor is yielded (either because of an OS yield or a PLIC
+ // preempt). An even value implies that the processor is currently
+ // executing.
+ // NOTE: This value will ALWAYS be zero for dedicated processors and
+ // will NEVER be zero for shared processors (ie, initialized to a 1).
+ volatile u32 yield_count; // PLIC increments each dispatchx00-x03
+ u8 reserved6[124]; // Reserved x04-x7F
+
+//=============================================================================
+// CACHE_LINE_4-5 0x0180 - 0x027F Contains PMC interrupt data
+//=============================================================================
+ u8 pmc_save_area[256]; // PMC interrupt Area x00-xFF
+};
+
+#endif /* _ASM_LPPACA_H */
********************************************************************/
#include <linux/config.h>
+#include <linux/types.h>
#include <asm/udbg.h>
#include <stdarg.h>
#define PPCDBG_BITVAL(X) ((1UL)<<((unsigned long)(X)))
/* Defined below are the bit positions of various debug flags in the
- * debug_switch variable (defined in naca.h).
+ * ppc64_debug_switch variable.
* -- When adding new values, please enter them into trace names below --
*
* Values 62 & 63 can be used to stress the hardware page table management
#define PPCDBG_NUM_FLAGS 64
+extern u64 ppc64_debug_switch;
+
#ifdef WANT_PPCDBG_TAB
/* A table of debug switch names to allow name lookup in xmon
* (and whoever else wants it.
struct pt_regs;
+/*
+ * We don't allow single-stepping an mtmsrd that would clear
+ * MSR_RI, since that would make the exception unrecoverable.
+ * Since we need to single-step to proceed from a breakpoint,
+ * we don't allow putting a breakpoint on an mtmsrd instruction.
+ * Similarly we don't allow breakpoints on rfid instructions.
+ * These macros tell us if an instruction is a mtmsrd or rfid.
+ */
+#define IS_MTMSRD(instr) (((instr) & 0xfc0007fe) == 0x7c000164)
+#define IS_RFID(instr) (((instr) & 0xfc0007fe) == 0x4c000024)
+
/* Emulate instructions that cause a transfer of control. */
extern int emulate_step(struct pt_regs *regs, unsigned int instr);
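A small sketch of how a debugger-style caller might use these macros, assuming they live alongside emulate_step() in <asm/sstep.h>.

#include <asm/sstep.h>	/* assumed location of IS_MTMSRD/IS_RFID */

static int breakpoint_allowed(unsigned int instr)
{
	/* Refuse mtmsrd and rfid, for the reasons given above. */
	return !IS_MTMSRD(instr) && !IS_RFID(instr);
}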
#define __HAVE_ARCH_STRCMP
#define __HAVE_ARCH_STRCAT
#define __HAVE_ARCH_MEMSET
-#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMCPY
#define __HAVE_ARCH_MEMMOVE
#define __HAVE_ARCH_MEMCMP
static inline void flush_tlb_pending(void)
{
- struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
+ struct ppc64_tlb_batch *batch = &get_cpu_var(ppc64_tlb_batch);
if (batch->index)
__flush_tlb_pending(batch);
+ put_cpu_var(ppc64_tlb_batch);
}
#define flush_tlb_mm(mm) flush_tlb_pending()
.max_interval = 32, \
.busy_factor = 32, \
.imbalance_pct = 125, \
- .cache_hot_time = (10*1000), \
+ .cache_hot_time = (10*1000000), \
.cache_nice_tries = 1, \
.per_cpu_gain = 100, \
.flags = SD_LOAD_BALANCE \
| SD_BALANCE_EXEC \
+ | SD_BALANCE_NEWIDLE \
+ | SD_WAKE_IDLE \
| SD_WAKE_BALANCE, \
.last_balance = jiffies, \
.balance_interval = 1, \
int xics_get_irq(struct pt_regs *);
void xics_setup_cpu(void);
void xics_cause_IPI(int cpu);
+void xics_request_IPIs(void);
+void xics_migrate_irqs_away(void);
/* first argument is ignored for now*/
void pSeriesLP_cppr_info(int n_cpu, u8 value);
extern struct xics_ipi_struct xics_ipi_message[NR_CPUS] __cacheline_aligned;
+extern unsigned int default_distrib_server;
+extern unsigned int interrupt_server_size;
+
#endif /* _PPC64_KERNEL_XICS_H */
*/
extern int ccw_device_start_timeout(struct ccw_device *, struct ccw1 *,
unsigned long, __u8, unsigned long, int);
+/*
+ * ccw_device_start_key()
+ * ccw_device_start_key_timeout()
+ *
+ * Same as ccw_device_start() and ccw_device_start_timeout(), except a
+ * storage key != default key can be provided for the I/O.
+ */
+extern int ccw_device_start_key(struct ccw_device *, struct ccw1 *,
+ unsigned long, __u8, __u8, unsigned long);
+extern int ccw_device_start_timeout_key(struct ccw_device *, struct ccw1 *,
+ unsigned long, __u8, __u8,
+ unsigned long, int);
+
extern int ccw_device_resume(struct ccw_device *);
extern int ccw_device_halt(struct ccw_device *, unsigned long);
extern void wait_cons_dev(void);
+extern void clear_all_subchannels(void);
+
#endif
#endif
* S390 version
* Copyright (C) 1999 IBM Deutschland Entwicklung GmbH, IBM Corporation
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
+ * Christian Borntraeger (cborntra@de.ibm.com),
*/
#ifndef __CPCMD__
#define __CPCMD__
+/*
+ * the caller of __cpcmd has to ensure that the response buffer is below 2 GB
+ */
+extern void __cpcmd(char *cmd, char *response, int rlen);
+
+#ifndef __s390x__
+#define cpcmd __cpcmd
+#else
extern void cpcmd(char *cmd, char *response, int rlen);
+#endif /*__s390x__*/
#endif
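A hedged example of issuing a CP command with this interface; the command string is illustrative, and a static buffer is used so that on 31-bit (where cpcmd is __cpcmd) the response area sits below 2 GB as required.

#include <asm/cpcmd.h>

static char cp_response[128];

static void query_vm_userid(void)
{
	cpcmd("QUERY USERID", cp_response, sizeof(cp_response));
}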
#include <linux/threads.h>
#include <linux/sched.h>
#include <linux/cache.h>
+#include <linux/interrupt.h>
#include <asm/lowcore.h>
/* irq_cpustat_t is unused currently, but could be converted
#define local_softirq_pending() (S390_lowcore.softirq_pending)
-/* this is always called with cpu == smp_processor_id() at the moment */
-static inline __u32
-softirq_pending(unsigned int cpu)
-{
- if (cpu == smp_processor_id())
- return local_softirq_pending();
- return lowcore_ptr[cpu]->softirq_pending;
-}
-
#define __ARCH_IRQ_STAT
+#define __ARCH_HAS_DO_SOFTIRQ
#define HARDIRQ_BITS 8
extern void account_ticks(struct pt_regs *);
-#define __ARCH_HAS_DO_SOFTIRQ
-
-#define irq_enter() \
-do { \
- (preempt_count() += HARDIRQ_OFFSET); \
-} while(0)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && local_softirq_pending()) \
- /* Use the async. stack for softirq */ \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* __ASM_HARDIRQ_H */
#define __LC_RETURN_PSW 0x200
-#define __LC_IRB 0x210
-
-#define __LC_DIAG44_OPCODE 0x250
-
#define __LC_SAVE_AREA 0xC00
#ifndef __s390x__
+#define __LC_IRB 0x208
+#define __LC_SYNC_ENTER_TIMER 0x248
+#define __LC_ASYNC_ENTER_TIMER 0x250
+#define __LC_EXIT_TIMER 0x258
+#define __LC_LAST_UPDATE_TIMER 0x260
+#define __LC_USER_TIMER 0x268
+#define __LC_SYSTEM_TIMER 0x270
+#define __LC_LAST_UPDATE_CLOCK 0x278
+#define __LC_STEAL_CLOCK 0x280
#define __LC_KERNEL_STACK 0xC40
#define __LC_THREAD_INFO 0xC44
#define __LC_ASYNC_STACK 0xC48
#define __LC_CURRENT 0xC90
#define __LC_INT_CLOCK 0xC98
#else /* __s390x__ */
+#define __LC_IRB 0x210
+#define __LC_SYNC_ENTER_TIMER 0x250
+#define __LC_ASYNC_ENTER_TIMER 0x258
+#define __LC_EXIT_TIMER 0x260
+#define __LC_LAST_UPDATE_TIMER 0x268
+#define __LC_USER_TIMER 0x270
+#define __LC_SYSTEM_TIMER 0x278
+#define __LC_LAST_UPDATE_CLOCK 0x280
+#define __LC_STEAL_CLOCK 0x288
+#define __LC_DIAG44_OPCODE 0x290
#define __LC_KERNEL_STACK 0xD40
#define __LC_THREAD_INFO 0xD48
#define __LC_ASYNC_STACK 0xD50
#define __LC_IPLDEV 0xDB8
#define __LC_JIFFY_TIMER 0xDC0
#define __LC_CURRENT 0xDD8
-#define __LC_INT_CLOCK 0xDe8
+#define __LC_INT_CLOCK 0xDE8
#endif /* __s390x__ */
#define __LC_PANIC_MAGIC 0xE00
psw_t return_psw; /* 0x200 */
__u8 irb[64]; /* 0x208 */
- __u8 pad8[0xc00-0x248]; /* 0x248 */
+ __u64 sync_enter_timer; /* 0x248 */
+ __u64 async_enter_timer; /* 0x250 */
+ __u64 exit_timer; /* 0x258 */
+ __u64 last_update_timer; /* 0x260 */
+ __u64 user_timer; /* 0x268 */
+ __u64 system_timer; /* 0x270 */
+ __u64 last_update_clock; /* 0x278 */
+ __u64 steal_clock; /* 0x280 */
+ __u8 pad8[0xc00-0x288]; /* 0x288 */
/* System info area */
__u32 save_area[16]; /* 0xc00 */
psw_t io_new_psw; /* 0x1f0 */
psw_t return_psw; /* 0x200 */
__u8 irb[64]; /* 0x210 */
- __u32 diag44_opcode; /* 0x250 */
- __u8 pad8[0xc00-0x254]; /* 0x254 */
+ __u64 sync_enter_timer; /* 0x250 */
+ __u64 async_enter_timer; /* 0x258 */
+ __u64 exit_timer; /* 0x260 */
+ __u64 last_update_timer; /* 0x268 */
+ __u64 user_timer; /* 0x270 */
+ __u64 system_timer; /* 0x278 */
+ __u64 last_update_clock; /* 0x280 */
+ __u64 steal_clock; /* 0x288 */
+ __u32 diag44_opcode; /* 0x290 */
+ __u8 pad8[0xc00-0x294]; /* 0x294 */
/* System info area */
__u64 save_area[16]; /* 0xc00 */
__u8 pad9[0xd40-0xc80]; /* 0xc80 */
#include <linux/types.h>
#endif
-#define __HAVE_ARCH_BCOPY /* arch function */
#define __HAVE_ARCH_MEMCHR /* inline & arch function */
#define __HAVE_ARCH_MEMCMP /* arch function */
#define __HAVE_ARCH_MEMCPY /* gcc builtin & arch function */
#define prepare_arch_switch(rq, next) do { } while(0)
#define task_running(rq, p) ((rq)->curr == (p))
+
+#ifdef CONFIG_VIRT_CPU_ACCOUNTING
+extern void account_user_vtime(struct task_struct *);
+extern void account_system_vtime(struct task_struct *);
+
+#define finish_arch_switch(rq, prev) do { \
+ set_fs(current->thread.mm_segment); \
+ spin_unlock(&(rq)->lock); \
+ account_system_vtime(prev); \
+ local_irq_enable(); \
+} while (0)
+
+#else
+
#define finish_arch_switch(rq, prev) do { \
set_fs(current->thread.mm_segment); \
spin_unlock_irq(&(rq)->lock); \
} while (0)
+#endif
+
#define nop() __asm__ __volatile__ ("nop")
#define xchg(ptr,x) \
__u64 idle; /* temp var for idle */
};
-void set_vtimer(__u64 expires);
-
extern void init_virt_timer(struct vtimer_list *timer);
extern void add_virt_timer(void *new);
extern void add_virt_timer_periodic(void *new);
# error HARDIRQ_BITS is too low!
#endif
-#define nmi_enter() (irq_enter())
-#define nmi_exit() (preempt_count() -= HARDIRQ_OFFSET)
-
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* __ASM_SH_HARDIRQ_H */
#define __HAVE_ARCH_STRLEN
extern size_t strlen(const char *);
-/* Don't build bcopy at all ... */
-#define __HAVE_ARCH_BCOPY
-
#endif /* __ASM_SH_STRING_H */
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_USEDFPU 16 /* FPU was used by this task this quantum (SMP) */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
+#define TIF_MEMDIE 18
#define TIF_USERSPACE 31 /* true if FS sets userspace */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#endif
#define aux_request_irq(hand, dev_id) \
- request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS/2 Mouse", dev_id)
+ request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS2 Mouse", dev_id)
#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
{
pte_t *pte;
- pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT);
- if (pte)
- clear_page(pte);
+ pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT|__GFP_ZERO);
return pte;
}
{
struct page *pte;
- pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
- if (pte)
- clear_page(page_address(pte));
+ pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO, 0);
return pte;
}
static __inline__ pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
pmd_t *pmd;
- pmd = (pmd_t *) __get_free_page(GFP_KERNEL|__GFP_REPEAT);
- if (pmd)
- clear_page(pmd);
+ pmd = (pmd_t *) __get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO);
return pmd;
}
#ifndef __ASM_SH64_PGTABLE_H
#define __ASM_SH64_PGTABLE_H
+#include <asm-generic/4level-fixup.h>
+
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
*/
extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
-/*
- * Bus types
- */
-#define MCA_bus 0
-#define MCA_bus__is_a_macro /* for versions in ksyms.c */
-
/* Copy and release all segment info associated with a VM */
#define copy_segments(p, mm) do { } while (0)
#define ASI_M_DCDR 0x39 /* Data Cache Diagnostics Register rw, ss */
#define ASI_M_VIKING_TMP1 0x40 /* Emulation temporary 1 on Viking */
-#define ASI_M_VIKING_TMP2 0x41 /* Emulation temporary 2 on Viking */
+/* only available on SuperSparc I */
+/* #define ASI_M_VIKING_TMP2 0x41 */ /* Emulation temporary 2 on Viking */
#define ASI_M_ACTION 0x4c /* Breakpoint Action Register (GNU/Viking) */
int last_off;
int last_size;
int first_free;
+ int num_colors;
};
extern int bit_map_string_get(struct bit_map *t, int len, int align);
#define ELF_CORE_COPY_REGS(__elf_regs, __pt_regs) \
do { unsigned long *dest = &(__elf_regs[0]); \
struct pt_regs *src = (__pt_regs); \
- unsigned long *sp; \
+ unsigned long __user *sp; \
memcpy(&dest[0], &src->u_regs[0], \
sizeof(unsigned long) * 16); \
/* Don't try this at home kids... */ \
- sp = (unsigned long *) src->u_regs[14]; \
+ sp = (unsigned long __user *) src->u_regs[14]; \
copy_from_user(&dest[16], sp, \
sizeof(unsigned long) * 16); \
dest[32] = src->psr; \
volatile int doing_pdma = 0;
/* This is software state */
-char *pdma_base = 0;
+char *pdma_base = NULL;
unsigned long pdma_areasize;
/* Common routines to all controller types on the Sparc. */
}
/* Our low-level entry point in arch/sparc/kernel/entry.S */
-extern void floppy_hardint(int irq, void *unused, struct pt_regs *regs);
+irqreturn_t floppy_hardint(int irq, void *unused, struct pt_regs *regs);
static int sun_fd_request_irq(void)
{
}
/* The sun4m lets us know if the controller is actually usable. */
- if(sparc_cpu_model == sun4m) {
- prom_getproperty(fd_node, "status", state, sizeof(state));
+ if(sparc_cpu_model == sun4m &&
+ prom_getproperty(fd_node, "status", state, sizeof(state)) != -1) {
if(!strcmp(state, "disabled")) {
goto no_sun_fdc;
}
static inline void *kmap(struct page *page)
{
BUG_ON(in_interrupt());
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return page_address(page);
return kmap_high(page);
}
static inline void kunmap(struct page *page)
{
BUG_ON(in_interrupt());
- if (page < highmem_start_page)
+ if (!PageHighMem(page))
return;
kunmap_high(page);
}
#define inl_p(__addr) inl(__addr)
#define outl_p(__l, __addr) outl(__l, __addr)
-extern void outsb(unsigned long addr, const void *src, unsigned long cnt);
-extern void outsw(unsigned long addr, const void *src, unsigned long cnt);
-extern void outsl(unsigned long addr, const void *src, unsigned long cnt);
-extern void insb(unsigned long addr, void *dst, unsigned long count);
-extern void insw(unsigned long addr, void *dst, unsigned long count);
-extern void insl(unsigned long addr, void *dst, unsigned long count);
+void outsb(unsigned long addr, const void *src, unsigned long cnt);
+void outsw(unsigned long addr, const void *src, unsigned long cnt);
+void outsl(unsigned long addr, const void *src, unsigned long cnt);
+void insb(unsigned long addr, void *dst, unsigned long count);
+void insw(unsigned long addr, void *dst, unsigned long count);
+void insl(unsigned long addr, void *dst, unsigned long count);
#define IO_SPACE_LIMIT 0xffffffff
#include <asm/openprom.h>
#include <linux/spinlock.h>
+#include <linux/compiler.h>
/* The master romvec pointer... */
extern struct linux_romvec *romvec;
/* Fetch the requested property using the given buffer. Returns
* the number of bytes the prom put into your buffer or -1 on error.
*/
-extern int prom_getproperty(int thisnode, char *property,
- char *prop_buffer, int propbuf_size);
+extern int __must_check prom_getproperty(int thisnode, char *property,
+ char *prop_buffer, int propbuf_size);
/* Acquire an integer property. */
extern int prom_getint(int node, char *property);
#include <asm/pbm.h>
struct linux_pcic {
- unsigned long pcic_regs;
+ void * __iomem pcic_regs;
unsigned long pcic_io;
- unsigned long pcic_config_space_addr;
- unsigned long pcic_config_space_data;
+ void * __iomem pcic_config_space_addr;
+ void * __iomem pcic_config_space_data;
struct resource pcic_res_regs;
struct resource pcic_res_io;
struct resource pcic_res_cfg_addr;
#ifndef EXPORT_SYMTAB_STROPS
/* First the mem*() things. */
-#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMMOVE
#undef memmove
#define memmove(_to, _from, _n) \
if(n <= 32) {
__builtin_memcpy(to, from, n);
+ } else if (((unsigned int) to & 7) != 0) {
+ /* Destination is not aligned on the double-word boundary */
+ __memcpy(to, from, n);
} else {
switch(n) {
case PAGE_SIZE:
typedef struct {
int count;
- int *winptr [SVR4_MAXWIN]; /* pointer to the windows */
+ int __user *winptr [SVR4_MAXWIN]; /* pointer to the windows */
svr4_rwindow_t win[SVR4_MAXWIN]; /* the windows */
} svr4_gwindows_t;
/* Machine dependent context */
typedef struct {
svr4_gregset_t greg; /* registers 0..19 (see top) */
- svr4_gwindows_t *gwin; /* may point to register windows */
+ svr4_gwindows_t __user *gwin; /* may point to register windows */
svr4_fregset_t freg; /* floating point registers */
svr4_xrs_t xrs; /* mhm? */
long pad[19];
/* signal stack execution place, unsupported */
typedef struct svr4_stack_t {
- char *sp;
+ char __user *sp;
int size;
int flags;
} svr4_stack_t;
and %idreg, 0xc, %idreg; \
ld [%idreg + %dest_reg], %dest_reg;
-/* Sliiick. We have a Linux current register :) -jj */
-#define LOAD_CURRENT4D(dest_reg) \
- lda [%g0] ASI_M_VIKING_TMP2, %dest_reg;
+#define LOAD_CURRENT4D(dest_reg, idreg) \
+ lda [%g0] ASI_M_VIKING_TMP1, %idreg; \
+ sethi %hi(C_LABEL(current_set)), %dest_reg; \
+ sll %idreg, 2, %idreg; \
+ or %dest_reg, %lo(C_LABEL(current_set)), %dest_reg; \
+ ld [%idreg + %dest_reg], %dest_reg;
/* Blackbox - take care with this... - check smp4m and smp4d before changing this. */
#define LOAD_CURRENT(dest_reg, idreg) \
struct fbcurpos hot; /* cursor hot spot */
struct fbcmap cmap; /* color map info */
struct fbcurpos size; /* cursor bit map size */
- char *image; /* cursor image bits */
- char *mask; /* cursor mask bits */
+ char __user *image; /* cursor image bits */
+ char __user *mask; /* cursor mask bits */
};
/* set/get cursor attributes/shape */
#define BREAKPOINT_INSTRUCTION_2 0x91d02071 /* ta 0x71 */
#define MAX_INSN_SIZE 2
+#define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)pentry
+
/* Architecture specific copy of original instruction*/
struct arch_specific_insn {
/* copy of the original instruction */
paddr = __pa((__mm)->pgd); \
pgd_cache = 0UL; \
if ((__tsk)->thread_info->flags & _TIF_32BIT) \
- pgd_cache = \
- ((unsigned long)pgd_val((__mm)->pgd[0])) << 11UL; \
+ pgd_cache = get_pgd_cache((__mm)->pgd); \
__asm__ __volatile__("wrpr %%g0, 0x494, %%pstate\n\t" \
"mov %3, %%g4\n\t" \
"mov %0, %%g7\n\t" \
#include <asm/segment.h>
#include <asm/page.h>
-/* Bus types */
-#define MCA_bus 0
-#define MCA_bus__is_a_macro /* for versions in ksyms.c */
-
/* The sparc has no problems with write protection */
#define wp_works_ok 1
#define wp_works_ok__is_a_macro /* for versions in ksyms.c */
/* nothing to see, move along */
#include <asm-generic/sections.h>
-extern char _end[], _start[];
+extern char _start[];
#endif
#define nop() __asm__ __volatile__ ("nop")
-#define membar(type) __asm__ __volatile__ ("membar " type : : : "memory");
+#define membar(type) __asm__ __volatile__ ("membar " type : : : "memory")
#define mb() \
- membar("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad");
+ membar("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad")
#define rmb() membar("#LoadLoad")
#define wmb() membar("#StoreStore")
#define read_barrier_depends() do { } while(0)
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
#else
-#define smp_mb() __asm__ __volatile__("":::"memory");
-#define smp_rmb() __asm__ __volatile__("":::"memory");
-#define smp_wmb() __asm__ __volatile__("":::"memory");
+#define smp_mb() __asm__ __volatile__("":::"memory")
+#define smp_rmb() __asm__ __volatile__("":::"memory")
+#define smp_wmb() __asm__ __volatile__("":::"memory")
#define smp_read_barrier_depends() do { } while(0)
#endif
/* Performance counter register access. */
#define read_pcr(__p) __asm__ __volatile__("rd %%pcr, %0" : "=r" (__p))
-#define write_pcr(__p) __asm__ __volatile__("wr %0, 0x0, %%pcr" : : "r" (__p));
+#define write_pcr(__p) __asm__ __volatile__("wr %0, 0x0, %%pcr" : : "r" (__p))
#define read_pic(__p) __asm__ __volatile__("rd %%pic, %0" : "=r" (__p))
/* Blackbird errata workaround. See commentary in
static __inline__ unsigned long xchg32(__volatile__ unsigned int *m, unsigned int val)
{
__asm__ __volatile__(
+" membar #StoreLoad | #LoadLoad\n"
" mov %0, %%g5\n"
"1: lduw [%2], %%g7\n"
" cas [%2], %%g7, %0\n"
static __inline__ unsigned long xchg64(__volatile__ unsigned long *m, unsigned long val)
{
__asm__ __volatile__(
+" membar #StoreLoad | #LoadLoad\n"
" mov %0, %%g5\n"
"1: ldx [%2], %%g7\n"
" casx [%2], %%g7, %0\n"
static __inline__ unsigned long
__cmpxchg_u32(volatile int *m, int old, int new)
{
- __asm__ __volatile__("cas [%2], %3, %0\n\t"
+ __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n"
+ "cas [%2], %3, %0\n\t"
"membar #StoreLoad | #StoreStore"
: "=&r" (new)
: "0" (new), "r" (m), "r" (old)
static __inline__ unsigned long
__cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new)
{
- __asm__ __volatile__("casx [%2], %3, %0\n\t"
+ __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n"
+ "casx [%2], %3, %0\n\t"
"membar #StoreLoad | #StoreStore"
: "=&r" (new)
: "0" (new), "r" (m), "r" (old)
#define ELF_DATA ELFDATA2MSB
#define ELF_ARCH EM_PPC
-/********* Bits for asm-um/delay.h **********/
-
-typedef unsigned int um_udelay_t;
-
/********* Bits for asm-um/hw_irq.h **********/
struct hw_interrupt_type;
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_MODULE_X86_64_H
+#define __UM_MODULE_X86_64_H
+
+/* UML is simple */
+struct mod_arch_specific
+{
+};
+
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define Elf_Ehdr Elf64_Ehdr
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2000, 2001, 2002 Jeff Dike (jdike@karaya.com)
+ * Copyright 2003 PathScale, Inc.
+ * Derived from include/asm-i386/pgtable.h
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_PGTABLE_2LEVEL_H
+#define __UM_PGTABLE_2LEVEL_H
+
+#include <asm-generic/pgtable-nopmd.h>
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+
+#define PGDIR_SHIFT 22
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/*
+ * entries per page directory level: the i386 is two-level, so
+ * we don't really have any PMD directory physically.
+ */
+#define PTRS_PER_PTE 1024
+#define PTRS_PER_PMD 1
+#define USER_PTRS_PER_PGD ((TASK_SIZE + (PGDIR_SIZE - 1)) / PGDIR_SIZE)
+#define PTRS_PER_PGD 1024
+#define FIRST_USER_PGD_NR 0
+
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %p(%08lx).\n", __FILE__, __LINE__, &(e), \
+ pte_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %p(%08lx).\n", __FILE__, __LINE__, &(e), \
+ pgd_val(e))
+
+static inline int pgd_newpage(pgd_t pgd) { return 0; }
+static inline void pgd_mkuptodate(pgd_t pgd) { }
+
+#define pte_present(x) (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))
+
+static inline pte_t pte_mknewprot(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_NEWPROT;
+ return(pte);
+}
+
+static inline pte_t pte_mknewpage(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_NEWPAGE;
+ return(pte);
+}
+
+static inline void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ /* If it's a swap entry, it needs to be marked _PAGE_NEWPAGE so
+ * fix_range knows to unmap it. _PAGE_NEWPROT is specific to
+ * mapped pages.
+ */
+ *pteptr = pte_mknewpage(pteval);
+ if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
+}
+
+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
+
+#define pte_page(x) pfn_to_page(pte_pfn(x))
+#define pte_none(x) !(pte_val(x) & ~_PAGE_NEWPAGE)
+#define pte_pfn(x) phys_to_pfn(pte_val(x))
+#define pfn_pte(pfn, prot) __pte(pfn_to_phys(pfn) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(pfn_to_phys(pfn) | pgprot_val(prot))
+
+#define pmd_page_kernel(pmd) \
+ ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
+
+/*
+ * Bits 0 through 3 are taken
+ */
+#define PTE_FILE_MAX_BITS 28
+
+#define pte_to_pgoff(pte) (pte_val(pte) >> 4)
+
+#define pgoff_to_pte(off) ((pte_t) { ((off) << 4) + _PAGE_FILE })
+
+#endif
--- /dev/null
+/*
+ * Copyright 2003 PathScale Inc
+ * Derived from include/asm-i386/pgtable.h
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_PGTABLE_3LEVEL_H
+#define __UM_PGTABLE_3LEVEL_H
+
+#include <asm-generic/pgtable-nopud.h>
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+
+#define PGDIR_SHIFT 30
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/* PMD_SHIFT determines the size of the area a second-level page table can
+ * map
+ */
+
+#define PMD_SHIFT 21
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/*
+ * entries per page directory level
+ */
+
+#define PTRS_PER_PTE 512
+#define PTRS_PER_PMD 512
+#define USER_PTRS_PER_PGD ((TASK_SIZE + (PGDIR_SIZE - 1)) / PGDIR_SIZE)
+#define PTRS_PER_PGD 512
+#define FIRST_USER_PGD_NR 0
+
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pte_val(e))
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pmd_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %p(%016lx).\n", __FILE__, __LINE__, &(e), \
+ pgd_val(e))
+
+#define pud_none(x) (!(pud_val(x) & ~_PAGE_NEWPAGE))
+#define pud_bad(x) ((pud_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define pud_present(x) (pud_val(x) & _PAGE_PRESENT)
+#define pud_populate(mm, pud, pmd) \
+ set_pud(pud, __pud(_PAGE_TABLE + __pa(pmd)))
+
+#define set_pud(pudptr, pudval) set_64bit((phys_t *) (pudptr), pud_val(pudval))
+static inline int pgd_newpage(pgd_t pgd)
+{
+ return(pgd_val(pgd) & _PAGE_NEWPAGE);
+}
+
+static inline void pgd_mkuptodate(pgd_t pgd) { pgd_val(pgd) &= ~_PAGE_NEWPAGE; }
+
+
+#define pte_present(x) pte_get_bits(x, (_PAGE_PRESENT | _PAGE_PROTNONE))
+
+static inline pte_t pte_mknewprot(pte_t pte)
+{
+ pte_set_bits(pte, _PAGE_NEWPROT);
+ return(pte);
+}
+
+static inline pte_t pte_mknewpage(pte_t pte)
+{
+ pte_set_bits(pte, _PAGE_NEWPAGE);
+ return(pte);
+}
+
+static inline void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ pte_copy(*pteptr, pteval);
+
+ /* If it's a swap entry, it needs to be marked _PAGE_NEWPAGE so
+ * fix_range knows to unmap it. _PAGE_NEWPROT is specific to
+ * mapped pages.
+ */
+
+ *pteptr = pte_mknewpage(*pteptr);
+ if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
+}
+
+#define set_pmd(pmdptr, pmdval) set_64bit((phys_t *) (pmdptr), pmd_val(pmdval))
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+{
+ pmd_t *pmd = (pmd_t *) __get_free_page(GFP_KERNEL);
+
+ if(pmd)
+ memset(pmd, 0, PAGE_SIZE);
+
+ return pmd;
+}
+
+static inline void pmd_free(pmd_t *pmd){
+ free_page((unsigned long) pmd);
+}
+
+#define __pmd_free_tlb(tlb,x) do { } while (0)
+
+static inline void pud_clear (pud_t * pud) { }
+
+#define pud_page(pud) \
+ ((struct page *) __va(pud_val(pud) & PAGE_MASK))
+
+/* Find an entry in the second-level page table.. */
+#define pmd_offset(pud, address) ((pmd_t *) pud_page(*(pud)) + \
+ pmd_index(address))
+
+#define pte_page(x) pfn_to_page(pte_pfn(x))
+
+static inline int pte_none(pte_t pte)
+{
+ return pte_is_zero(pte);
+}
+
+static inline unsigned long pte_pfn(pte_t pte)
+{
+ return phys_to_pfn(pte_val(pte));
+}
+
+static inline pte_t pfn_pte(pfn_t page_nr, pgprot_t pgprot)
+{
+ pte_t pte;
+ phys_t phys = pfn_to_phys(page_nr);
+
+ pte_set_val(pte, phys, pgprot);
+ return pte;
+}
+
+static inline pmd_t pfn_pmd(pfn_t page_nr, pgprot_t pgprot)
+{
+ return __pmd((page_nr << PAGE_SHIFT) | pgprot_val(pgprot));
+}
+
+/*
+ * Bits 0 through 3 are taken in the low part of the pte,
+ * put the 32 bits of offset into the high part.
+ */
+#define PTE_FILE_MAX_BITS 32
+
+#ifdef CONFIG_64_BIT
+
+#define pte_to_pgoff(p) ((p).pte >> 32)
+
+#define pgoff_to_pte(off) ((pte_t) { ((off) << 32) | _PAGE_FILE })
+
+#else
+
+#define pte_to_pgoff(pte) ((pte).pte_high)
+
+#define pgoff_to_pte(off) ((pte_t) { _PAGE_FILE, (off) })
+
+#endif
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_PROCESSOR_X86_64_H
+#define __UM_PROCESSOR_X86_64_H
+
+#include "asm/arch/user.h"
+
+struct arch_thread {
+};
+
+#define INIT_ARCH_THREAD { }
+
+#define current_text_addr() \
+ ({ void *pc; __asm__("movq $1f,%0\n1:":"=g" (pc)); pc; })
+
+#define ARCH_IS_STACKGROW(address) \
+	(address + 128 >= UPT_SP(&current->thread.regs.regs))
+
+#include "asm/processor-generic.h"
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
#define pt_regs pt_regs_subarch
#define show_regs show_regs_subarch
+#define send_sigtrap send_sigtrap_subarch
#include "asm/arch/ptrace.h"
#undef pt_regs
#undef show_regs
+#undef send_sigtrap
#undef user_mode
#undef instruction_pointer
#include "sysdep/ptrace.h"
-#include "skas_ptrace.h"
struct pt_regs {
union uml_pt_regs regs;
extern void show_regs(struct pt_regs *regs);
+extern void send_sigtrap(struct task_struct *tsk, union uml_pt_regs *regs,
+ int error_code);
+
#endif
#endif
--- /dev/null
+/*
+ * Copyright 2003 PathScale, Inc.
+ *
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_PTRACE_X86_64_H
+#define __UM_PTRACE_X86_64_H
+
+#include "linux/compiler.h"
+
+#define signal_fault signal_fault_x86_64
+#define __FRAME_OFFSETS /* Needed to get the R* macros */
+#include "asm/ptrace-generic.h"
+#undef signal_fault
+
+void signal_fault(struct pt_regs_subarch *regs, void *frame, char *where);
+
+#define FS_BASE (21 * sizeof(unsigned long))
+#define GS_BASE (22 * sizeof(unsigned long))
+#define DS (23 * sizeof(unsigned long))
+#define ES (24 * sizeof(unsigned long))
+#define FS (25 * sizeof(unsigned long))
+#define GS (26 * sizeof(unsigned long))
+
+#define PT_REGS_RBX(r) UPT_RBX(&(r)->regs)
+#define PT_REGS_RCX(r) UPT_RCX(&(r)->regs)
+#define PT_REGS_RDX(r) UPT_RDX(&(r)->regs)
+#define PT_REGS_RSI(r) UPT_RSI(&(r)->regs)
+#define PT_REGS_RDI(r) UPT_RDI(&(r)->regs)
+#define PT_REGS_RBP(r) UPT_RBP(&(r)->regs)
+#define PT_REGS_RAX(r) UPT_RAX(&(r)->regs)
+#define PT_REGS_R8(r) UPT_R8(&(r)->regs)
+#define PT_REGS_R9(r) UPT_R9(&(r)->regs)
+#define PT_REGS_R10(r) UPT_R10(&(r)->regs)
+#define PT_REGS_R11(r) UPT_R11(&(r)->regs)
+#define PT_REGS_R12(r) UPT_R12(&(r)->regs)
+#define PT_REGS_R13(r) UPT_R13(&(r)->regs)
+#define PT_REGS_R14(r) UPT_R14(&(r)->regs)
+#define PT_REGS_R15(r) UPT_R15(&(r)->regs)
+
+#define PT_REGS_FS(r) UPT_FS(&(r)->regs)
+#define PT_REGS_GS(r) UPT_GS(&(r)->regs)
+#define PT_REGS_DS(r) UPT_DS(&(r)->regs)
+#define PT_REGS_ES(r) UPT_ES(&(r)->regs)
+#define PT_REGS_SS(r) UPT_SS(&(r)->regs)
+#define PT_REGS_CS(r) UPT_CS(&(r)->regs)
+
+#define PT_REGS_ORIG_RAX(r) UPT_ORIG_RAX(&(r)->regs)
+#define PT_REGS_RIP(r) UPT_IP(&(r)->regs)
+#define PT_REGS_RSP(r) UPT_SP(&(r)->regs)
+
+#define PT_REGS_EFLAGS(r) UPT_EFLAGS(&(r)->regs)
+
+/* XXX */
+#define user_mode(r) UPT_IS_USER(&(r)->regs)
+#define PT_REGS_ORIG_SYSCALL(r) PT_REGS_RAX(r)
+#define PT_REGS_SYSCALL_RET(r) PT_REGS_RAX(r)
+
+#define PT_FIX_EXEC_STACK(sp) do ; while(0)
+
+#define profile_pc(regs) PT_REGS_IP(regs)
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
typedef struct {
unsigned int __softirq_pending;
- unsigned int __syscall_count;
- struct task_struct * __ksoftirqd_task;
} ____cacheline_aligned irq_cpustat_t;
#include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
# error HARDIRQ_BITS is too low!
#endif
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define irq_exit() \
-do { \
- preempt_count() -= IRQ_EXIT_OFFSET; \
- if (!in_interrupt() && softirq_pending(smp_processor_id())) \
- do_softirq(); \
- preempt_enable_no_resched(); \
-} while (0)
-
#endif /* __V850_HARDIRQ_H__ */
*/
#define current_text_addr() ({ __label__ _l; _l: &&_l;})
-
-/*
- * Bus types
- */
-#define MCA_bus 0
-#define MCA_bus__is_a_macro /* for versions in ksyms.c */
-
/* If you change this, you must change the associated assembly-languages
constants defined below, THREAD_*. */
struct thread_struct {
#ifndef __V850_STRING_H__
#define __V850_STRING_H__
-#define __HAVE_ARCH_BCOPY
#define __HAVE_ARCH_MEMCPY
#define __HAVE_ARCH_MEMSET
#define __HAVE_ARCH_MEMMOVE
extern void *memcpy (void *, const void *, __kernel_size_t);
-extern void bcopy (const char *, char *, int);
extern void *memset (void *, int, __kernel_size_t);
extern void *memmove (void *, const void *, __kernel_size_t);
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_POLLING_NRFLAG 4 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
+#define TIF_MEMDIE 5
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
RESTORE_ARGS 0,\addskip
.endm
- /* push in order ss, rsp, eflags, cs, rip */
- .macro FAKE_STACK_FRAME child_rip
- xorl %eax,%eax
- subq $6*8,%rsp
- movq %rax,5*8(%rsp) /* ss */
- movq %rax,4*8(%rsp) /* rsp */
- movq $(1<<9),3*8(%rsp) /* eflags */
- movq $__KERNEL_CS,2*8(%rsp) /* cs */
- movq \child_rip,1*8(%rsp) /* rip */
- movq %rax,(%rsp) /* orig_rax */
- .endm
-
- .macro UNFAKE_STACK_FRAME
- addq $8*6, %rsp
- .endm
-
.macro icebp
.byte 0xf1
.endm
/* More extended AMD flags: CPUID level 0x80000001, ecx, word 5 */
#define X86_FEATURE_LAHF_LM (5*32+ 0) /* LAHF/SAHF in long mode */
-#define X86_FEATURE_HTVALID (5*32+ 1) /* HyperThreading valid, otherwise CMP */
+#define X86_FEATURE_CMP_LEGACY (5*32+ 1) /* If set, HyperThreading is not valid */
#define cpu_has(c, bit) test_bit(bit, (c)->x86_capability)
#define boot_cpu_has(bit) test_bit(bit, boot_cpu_data.x86_capability)
#define LOWMEMSIZE() (0x9f000)
-#define MAXMEM (120UL * 1024 * 1024 * 1024 * 1024) /* 120TB */
-
-
#ifndef __ASSEMBLY__
struct e820entry {
u64 addr; /* start of memory segment */
* An executable for which elf_read_implies_exec() returns TRUE will
* have the READ_IMPLIES_EXEC personality flag set automatically.
*/
-#define elf_read_implies_exec(ex, have_pt_gnu_stack) (!(have_pt_gnu_stack))
-
-/*
- * An executable for which elf_read_implies_exec() returns TRUE will
- * have the READ_IMPLIES_EXEC personality flag set automatically.
- */
-#define elf_read_implies_exec_binary(ex, have_pt_gnu_stack) \
- (!(have_pt_gnu_stack))
+#define elf_read_implies_exec(ex, executable_stack) (executable_stack != EXSTACK_DISABLE_X)
extern int dump_task_regs (struct task_struct *, elf_gregset_t *);
extern int dump_task_fpu (struct task_struct *, elf_fpregset_t *);
*/
static inline void ack_bad_irq(unsigned int irq)
{
-#ifdef CONFIG_X86
printk("unexpected IRQ trap at vector %02x\n", irq);
#ifdef CONFIG_X86_LOCAL_APIC
/*
*/
ack_APIC_irq();
#endif
-#endif
}
#endif /* __ASM_HARDIRQ_H */
DIE_OOPS = 1,
DIE_INT3,
DIE_DEBUG,
+ DIE_DEBUGSTEP,
DIE_PANIC,
DIE_NMI,
DIE_DIE,
? (MAX_STACK_SIZE) \
: (((unsigned long)current_thread_info()) + THREAD_SIZE - (ADDR)))
+#define JPROBE_ENTRY(pentry) (kprobe_opcode_t *)pentry
+
/* Architecture specific copy of original instruction*/
struct arch_specific_insn {
/* copy of the original instruction */
#define MCE_GET_LOG_LEN _IOR('M', 2, int)
#define MCE_GETCLEAR_FLAGS _IOR('M', 3, int)
+/* Software defined banks */
+#define MCE_EXTENDED_BANK 128
+#define MCE_THERMAL_BANK MCE_EXTENDED_BANK + 0
+
+void mce_log(struct mce *m);
+#ifdef CONFIG_X86_MCE_INTEL
+void mce_intel_feature_init(struct cpuinfo_x86 *c);
+#else
+static inline void mce_intel_feature_init(struct cpuinfo_x86 *c)
+{
+}
+#endif
+
#endif
#include <asm/smp.h>
-#define MAXNODE 8
#define NODEMAPSIZE 0xff
/* Simple perfect hash to map physical addresses to node numbers */
#define page_to_pfn(page) \
(long)(((page) - page_zone(page)->zone_mem_map) + page_zone(page)->zone_start_pfn)
-/* AK: !DISCONTIGMEM just forces it to 1. Can't we too? */
-#define pfn_valid(pfn) ((pfn) < num_physpages)
-
-
+#define pfn_valid(pfn) ((pfn) >= num_physpages ? 0 : \
+ ({ u8 nid__ = pfn_to_nid(pfn); \
+ nid__ != 0xff && (pfn) >= node_start_pfn(nid__) && (pfn) <= node_end_pfn(nid__); }))
#endif
#endif
unsigned int type, char increment);
extern int mtrr_del (int reg, unsigned long base, unsigned long size);
extern int mtrr_del_page (int reg, unsigned long base, unsigned long size);
-extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
# else
static __inline__ int mtrr_add (unsigned long base, unsigned long size,
unsigned int type, char increment)
return -ENODEV;
}
-static __inline__ void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi) {}
-
# endif
#endif
#define _ASM_X8664_NUMA_H 1
#include <linux/nodemask.h>
-
-#define MAXNODE 8
-#define NODEMASK 0xff
+#include <asm/numnodes.h>
struct node {
u64 start,end;
};
-#define for_all_nodes(x) for ((x) = 0; (x) < numnodes; (x)++) \
- if (node_online(x))
-
-extern int compute_hash_shift(struct node *nodes);
+extern int compute_hash_shift(struct node *nodes, int numnodes);
#define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
extern void numa_add_cpu(int cpu);
extern void numa_init_array(void);
+extern int numa_off;
+
+#define NUMA_NO_NODE 0xff
#endif
#include <linux/config.h>
-/* Max 8 Nodes - APIC limit currently */
-#define NODES_SHIFT 3
+#ifdef CONFIG_NUMA
+#define NODES_SHIFT 6
+#else
+#define NODES_SHIFT 0
+#endif
#endif
#include <linux/slab.h>
#include <asm/scatterlist.h>
#include <linux/string.h>
-#include <asm/io.h>
#include <asm/page.h>
extern int iommu_setup(char *opt);
int irqcount; /* Irq nesting counter. Starts with -1 */
int cpunumber; /* Logical CPU number */
char *irqstackptr; /* top of irqstack */
- unsigned long volatile *level4_pgt; /* Per CPU top level page table */
unsigned int __softirq_pending;
unsigned int __nmi_count; /* number of NMI on this CPUs */
struct mm_struct *active_mm;
extern void early_idt_handler(void);
extern void mcheck_init(struct cpuinfo_x86 *c);
-extern void init_memory_mapping(void);
+extern void init_memory_mapping(unsigned long start, unsigned long end);
extern void system_call(void);
extern int kernel_syscall(void);
extern void ia32_cstar_target(void);
extern void ia32_sysenter_target(void);
-extern void calibrate_delay(void);
-extern void cpu_idle(void);
extern void config_acpi_tables(void);
extern void ia32_syscall(void);
extern void iommu_hole_init(void);
extern unsigned long end_pfn_map;
-extern unsigned long cpu_initialized;
+extern cpumask_t cpu_initialized;
extern void show_trace(unsigned long * rsp);
extern void show_registers(struct pt_regs *regs);
extern int fix_aperture;
extern int force_iommu;
+extern int reboot_force;
+
extern void smp_local_timer_interrupt(struct pt_regs * regs);
long do_arch_prctl(struct task_struct *task, int code, unsigned long addr);
"thread_return:\n\t" \
"movq %%gs:%P[pda_pcurrent],%%rsi\n\t" \
"movq %P[thread_info](%%rsi),%%r8\n\t" \
- "btr %[tif_fork],%P[ti_flags](%%r8)\n\t" \
+ LOCK "btr %[tif_fork],%P[ti_flags](%%r8)\n\t" \
"movq %%rax,%%rdi\n\t" \
"jc ret_from_fork\n\t" \
RESTORE_CONTEXT \
/* For spinlocks etc */
#define local_irq_save(x) do { warn_if_not_ulong(x); __asm__ __volatile__("# local_irq_save \n\t pushfq ; popq %0 ; cli":"=g" (x): /* no input */ :"memory"); } while (0)
+void cpu_idle_wait(void);
+
/*
* disable hlt during certain critical i/o operations
*/
#define TIF_IA32 17 /* 32bit process */
#define TIF_FORK 18 /* ret_from_fork */
#define TIF_ABI_PENDING 19
+#define TIF_MEMDIE 20
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define AC97_EXTSTAT_PRK 0x2000
#define AC97_EXTSTAT_PRL 0x4000
+/* extended audio ID register bit defines */
+#define AC97_EXTID_VRA 0x0001
+#define AC97_EXTID_DRA 0x0002
+#define AC97_EXTID_SPDIF 0x0004
+#define AC97_EXTID_VRM 0x0008
+#define AC97_EXTID_DSA0 0x0010
+#define AC97_EXTID_DSA1 0x0020
+#define AC97_EXTID_CDAC 0x0040
+#define AC97_EXTID_SDAC 0x0080
+#define AC97_EXTID_LDAC 0x0100
+#define AC97_EXTID_AMAP 0x0200
+#define AC97_EXTID_REV0 0x0400
+#define AC97_EXTID_REV1 0x0800
+#define AC97_EXTID_ID0 0x4000
+#define AC97_EXTID_ID1 0x8000
+
+/* extended status register bit defines */
+#define AC97_EXTSTAT_VRA 0x0001
+#define AC97_EXTSTAT_DRA 0x0002
+#define AC97_EXTSTAT_SPDIF 0x0004
+#define AC97_EXTSTAT_VRM 0x0008
+#define AC97_EXTSTAT_SPSA0 0x0010
+#define AC97_EXTSTAT_SPSA1 0x0020
+#define AC97_EXTSTAT_CDAC 0x0040
+#define AC97_EXTSTAT_SDAC 0x0080
+#define AC97_EXTSTAT_LDAC 0x0100
+#define AC97_EXTSTAT_MADC 0x0200
+#define AC97_EXTSTAT_SPCV 0x0400
+#define AC97_EXTSTAT_PRI 0x0800
+#define AC97_EXTSTAT_PRJ 0x1000
+#define AC97_EXTSTAT_PRK 0x2000
+#define AC97_EXTSTAT_PRL 0x4000
+
/* useful power states */
#define AC97_PWR_D0 0x0000 /* everything on */
#define AC97_PWR_D1 AC97_PWR_PR0|AC97_PWR_PR1|AC97_PWR_PR4
extern int ac97_register_driver(struct ac97_driver *driver);
extern void ac97_unregister_driver(struct ac97_driver *driver);
+/* quirk types */
+enum {
+ AC97_TUNE_DEFAULT = -1, /* use default from quirk list (not valid in list) */
+ AC97_TUNE_NONE = 0, /* nothing extra to do */
+ AC97_TUNE_HP_ONLY, /* headphone (true line-out) control as master only */
+ AC97_TUNE_SWAP_HP, /* swap headphone and master controls */
+ AC97_TUNE_SWAP_SURROUND, /* swap master and surround controls */
+ AC97_TUNE_AD_SHARING, /* for AD1985, turn on OMS bit and use headphone */
+ AC97_TUNE_ALC_JACK, /* for Realtek, enable JACK detection */
+};
+
+struct ac97_quirk {
+ unsigned short vendor; /* PCI vendor id */
+ unsigned short device; /* PCI device id */
+ unsigned short mask; /* device id bit mask, 0 = accept all */
+ const char *name; /* name shown as info */
+ int type; /* quirk type above */
+};
+
+struct pci_dev;
+extern int ac97_tune_hardware(struct pci_dev *pdev, struct ac97_quirk *quirk, int override);
+
#endif /* _AC97_CODEC_H_ */
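A hedged sketch of how a controller driver might use the quirk table added above; it is not part of the patch, and the vendor/device IDs below are placeholders rather than real board entries.

/* Hypothetical example, not from the patch. */
#include <linux/ac97_codec.h>
#include <linux/pci.h>

static struct ac97_quirk mydrv_quirks[] = {
	{
		.vendor = 0x1234,	/* placeholder vendor id */
		.device = 0x5678,	/* placeholder device id */
		.name   = "Example board",
		.type   = AC97_TUNE_HP_ONLY,
	},
	{ }				/* terminating entry */
};

static void mydrv_tune(struct pci_dev *pdev)
{
	/* AC97_TUNE_DEFAULT keeps whatever type the matching table entry specifies */
	ac97_tune_hardware(pdev, mydrv_quirks, AC97_TUNE_DEFAULT);
}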
struct pci_dev *device;
enum chipset_type chipset;
unsigned long mode;
- off_t aper_base;
+ unsigned long aper_base;
size_t aper_size;
int max_memory; /* In pages */
int current_memory;
extern int agp_backend_acquire(void);
extern void agp_backend_release(void);
-/*
- * Interface between drm and agp code. When agp initializes, it makes
- * the below structure available via inter_module_register(), drm might
- * use it. Keith Owens <kaos@ocs.com.au> 28 Oct 2000.
- */
-typedef struct {
- void (*free_memory)(struct agp_memory *);
- struct agp_memory * (*allocate_memory)(size_t, u32);
- int (*bind_memory)(struct agp_memory *, off_t);
- int (*unbind_memory)(struct agp_memory *);
- void (*enable)(u32);
- int (*acquire)(void);
- void (*release)(void);
- int (*copy_info)(struct agp_kern_info *);
-} drm_agp_t;
-
-extern const drm_agp_t *drm_agp_p;
-
#endif /* __KERNEL__ */
#endif /* _AGP_BACKEND_H */
struct agp_version version; /* version of the driver */
__u32 bridge_id; /* bridge vendor/device */
__u32 agp_mode; /* mode info of bridge */
- off_t aper_base; /* base of aperture */
+ unsigned long aper_base;/* base of aperture */
size_t aper_size; /* size of aperture */
size_t pg_total; /* max pages (swap + system) */
size_t pg_system; /* max pages (system) */
struct agp_version version; /* version of the driver */
u32 bridge_id; /* bridge vendor/device */
u32 agp_mode; /* mode info of bridge */
- off_t aper_base; /* base of aperture */
+ unsigned long aper_base;/* base of aperture */
size_t aper_size; /* size of aperture */
size_t pg_total; /* max pages (swap + system) */
size_t pg_system; /* max pages (system) */
#define AMIGAFFS_H
#include <linux/types.h>
-#include <linux/buffer_head.h>
-#include <linux/string.h>
#include <asm/byteorder.h>
-/* AmigaOS allows file names with up to 30 characters length.
- * Names longer than that will be silently truncated. If you
- * want to disallow this, comment out the following #define.
- * Creating filesystem objects with longer names will then
- * result in an error (ENAMETOOLONG).
- */
-/*#define AFFS_NO_TRUNCATE */
-
-/* Ugly macros make the code more pretty. */
-
-#define GET_END_PTR(st,p,sz) ((st *)((char *)(p)+((sz)-sizeof(st))))
-#define AFFS_GET_HASHENTRY(data,hashkey) be32_to_cpu(((struct dir_front *)data)->hashtable[hashkey])
-#define AFFS_BLOCK(sb, bh, blk) (AFFS_HEAD(bh)->table[AFFS_SB(sb)->s_hashsize-1-(blk)])
-
-static inline void
-affs_set_blocksize(struct super_block *sb, int size)
-{
- sb_set_blocksize(sb, size);
-}
-static inline struct buffer_head *
-affs_bread(struct super_block *sb, int block)
-{
- pr_debug("affs_bread: %d\n", block);
- if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size)
- return sb_bread(sb, block);
- return NULL;
-}
-static inline struct buffer_head *
-affs_getblk(struct super_block *sb, int block)
-{
- pr_debug("affs_getblk: %d\n", block);
- if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size)
- return sb_getblk(sb, block);
- return NULL;
-}
-static inline struct buffer_head *
-affs_getzeroblk(struct super_block *sb, int block)
-{
- struct buffer_head *bh;
- pr_debug("affs_getzeroblk: %d\n", block);
- if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) {
- bh = sb_getblk(sb, block);
- lock_buffer(bh);
- memset(bh->b_data, 0 , sb->s_blocksize);
- set_buffer_uptodate(bh);
- unlock_buffer(bh);
- return bh;
- }
- return NULL;
-}
-static inline struct buffer_head *
-affs_getemptyblk(struct super_block *sb, int block)
-{
- struct buffer_head *bh;
- pr_debug("affs_getemptyblk: %d\n", block);
- if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) {
- bh = sb_getblk(sb, block);
- wait_on_buffer(bh);
- set_buffer_uptodate(bh);
- return bh;
- }
- return NULL;
-}
-static inline void
-affs_brelse(struct buffer_head *bh)
-{
- if (bh)
- pr_debug("affs_brelse: %lld\n", (long long) bh->b_blocknr);
- brelse(bh);
-}
-
-static inline void
-affs_adjust_checksum(struct buffer_head *bh, u32 val)
-{
- u32 tmp = be32_to_cpu(((__be32 *)bh->b_data)[5]);
- ((__be32 *)bh->b_data)[5] = cpu_to_be32(tmp - val);
-}
-static inline void
-affs_adjust_bitmapchecksum(struct buffer_head *bh, u32 val)
-{
- u32 tmp = be32_to_cpu(((__be32 *)bh->b_data)[0]);
- ((__be32 *)bh->b_data)[0] = cpu_to_be32(tmp - val);
-}
-
-static inline void
-affs_lock_link(struct inode *inode)
-{
- down(&AFFS_I(inode)->i_link_lock);
-}
-static inline void
-affs_unlock_link(struct inode *inode)
-{
- up(&AFFS_I(inode)->i_link_lock);
-}
-static inline void
-affs_lock_dir(struct inode *inode)
-{
- down(&AFFS_I(inode)->i_hash_lock);
-}
-static inline void
-affs_unlock_dir(struct inode *inode)
-{
- up(&AFFS_I(inode)->i_hash_lock);
-}
-static inline void
-affs_lock_ext(struct inode *inode)
-{
- down(&AFFS_I(inode)->i_ext_lock);
-}
-static inline void
-affs_unlock_ext(struct inode *inode)
-{
- up(&AFFS_I(inode)->i_ext_lock);
-}
-
-#ifdef __LITTLE_ENDIAN
-#define BO_EXBITS 0x18UL
-#elif defined(__BIG_ENDIAN)
-#define BO_EXBITS 0x00UL
-#else
-#error Endianness must be known for affs to work.
-#endif
-
#define FS_OFS 0x444F5300
#define FS_FFS 0x444F5301
#define FS_INTLOFS 0x444F5302
#define AFFS_ROOT_BMAPS 25
-#define AFFS_HEAD(bh) ((struct affs_head *)(bh)->b_data)
-#define AFFS_TAIL(sb, bh) ((struct affs_tail *)((bh)->b_data+(sb)->s_blocksize-sizeof(struct affs_tail)))
-#define AFFS_ROOT_HEAD(bh) ((struct affs_root_head *)(bh)->b_data)
-#define AFFS_ROOT_TAIL(sb, bh) ((struct affs_root_tail *)((bh)->b_data+(sb)->s_blocksize-sizeof(struct affs_root_tail)))
-#define AFFS_DATA_HEAD(bh) ((struct affs_data_head *)(bh)->b_data)
-#define AFFS_DATA(bh) (((struct affs_data_head *)(bh)->b_data)->data)
-
struct affs_date {
__be32 days;
__be32 mins;
#define bool int
#endif
-
/*
* RECON_THRESHOLD is the maximum number of RECON messages to receive
* within one minute before printing a "cabling problem" warning. The
#define D_SKB 1024 /* show skb's */
#define D_SKB_SIZE 2048 /* show skb sizes */
#define D_TIMING 4096 /* show time needed to copy buffers to card */
+#define D_DEBUG 8192 /* very detailed debugging, line by line */
#ifndef ARCNET_DEBUG_MAX
#define ARCNET_DEBUG_MAX (127) /* change to ~0 if you want detailed debugging */
#define TXACKflag 0x02 /* transmitted msg. ackd */
#define RECONflag 0x04 /* network reconfigured */
#define TESTflag 0x08 /* test flag */
+#define EXCNAKflag 0x08 /* excessive NAK flag */
#define RESETflag 0x10 /* power-on-reset */
#define RES1flag 0x20 /* reserved - usually set by jumper */
#define RES2flag 0x40 /* reserved - usually set by jumper */
#define RESETclear 0x08 /* power-on-reset */
#define CONFIGclear 0x10 /* system reconfigured */
+#define EXCNAKclear 0x0E /* Clear and acknowledge the excessive NAK bit */
+
/* flags for "load test flags" command */
#define TESTload 0x08 /* test flag (diagnostic) */
struct ArcProto {
char suffix; /* a for RFC1201, e for ether-encap, etc. */
int mtu; /* largest possible packet */
+	int is_ip;	/* This is an IP plugin - not a raw thing */
void (*rx) (struct net_device * dev, int bufnum,
struct archdr * pkthdr, int length);
int (*prepare_tx) (struct net_device * dev, struct archdr * pkt, int length,
int bufnum);
int (*continue_tx) (struct net_device * dev, int bufnum);
+ int (*ack_tx) (struct net_device * dev, int acked);
};
-extern struct ArcProto *arc_proto_map[256], *arc_proto_default, *arc_bcast_proto;
+extern struct ArcProto *arc_proto_map[256], *arc_proto_default,
+ *arc_bcast_proto, *arc_raw_proto;
extern struct ArcProto arc_proto_null;
char *card_name; /* card ident string */
int card_flags; /* special card features */
+
+	/* On preemptive and SMP kernels a lock is needed */
+ spinlock_t lock;
+
/*
* Buffer management: an ARCnet card has 4 x 512-byte buffers, each of
* which can be used for either sending or receiving. The new dynamic
int num_recons; /* number of RECONs between first and last. */
bool network_down; /* do we think the network is down? */
+	bool excnak_pending;	/* We just got an excessive NAK interrupt */
+
struct {
uint16_t sequence; /* sequence number (incs with each packet) */
uint16_t aborted_seq;
#endif
#if (ARCNET_DEBUG_MAX & D_RX) || (ARCNET_DEBUG_MAX & D_TX)
-void arcnet_dump_packet(struct net_device *dev, int bufnum, char *desc);
+void arcnet_dump_packet(struct net_device *dev, int bufnum, char *desc,
+ int take_arcnet_lock);
#else
-#define arcnet_dump_packet(dev, bufnum, desc) ;
+#define arcnet_dump_packet(dev, bufnum, desc,take_arcnet_lock) ;
#endif
void arcnet_unregister_proto(struct ArcProto *proto);
irqreturn_t arcnet_interrupt(int irq, void *dev_id, struct pt_regs *regs);
-void arcdev_setup(struct net_device *dev);
struct net_device *alloc_arcdev(char *name);
void arcnet_rx(struct net_device *dev, int bufnum);
extern int aarp_send_ddp(struct net_device *dev,
struct sk_buff *skb,
struct atalk_addr *sa, void *hwaddr);
-extern void aarp_send_probe(struct net_device *dev,
- struct atalk_addr *addr);
extern void aarp_device_down(struct net_device *dev);
extern void aarp_probe_network(struct atalk_iface *atif);
extern int aarp_proxy_probe_network(struct atalk_iface *atif,
extern int atalk_proc_init(void);
extern void atalk_proc_exit(void);
#else
-#define atalk_proc_init() 0
+#define atalk_proc_init() ({ 0; })
#define atalk_proc_exit() do { } while(0)
#endif /* CONFIG_PROC_FS */
__u32 backlog; /* messages waiting in queue */
};
-struct audit_login {
- __u32 loginuid;
- int msglen;
- char msg[1024];
-};
-
struct audit_rule { /* for AUDIT_LIST, AUDIT_ADD, and AUDIT_DEL */
__u32 flags; /* AUDIT_PER_{TASK,CALL}, AUDIT_PREPEND */
__u32 action; /* AUDIT_NEVER, AUDIT_POSSIBLE, AUDIT_ALWAYS */
extern void audit_get_stamp(struct audit_context *ctx,
struct timespec *t, int *serial);
extern int audit_set_loginuid(struct audit_context *ctx, uid_t loginuid);
+extern uid_t audit_get_loginuid(struct audit_context *ctx);
#else
#define audit_alloc(t) ({ 0; })
#define audit_free(t) do { ; } while (0)
#define audit_getname(n) do { ; } while (0)
#define audit_putname(n) do { ; } while (0)
#define audit_inode(n,i,d) do { ; } while (0)
+#define audit_get_loginuid(c) ({ -1; })
#endif
#ifdef CONFIG_AUDIT
--- /dev/null
+/*
+ * Backlight Lowlevel Control Abstraction
+ *
+ * Copyright (C) 2003,2004 Hewlett-Packard Company
+ *
+ */
+
+#ifndef _LINUX_BACKLIGHT_H
+#define _LINUX_BACKLIGHT_H
+
+#include <linux/device.h>
+#include <linux/notifier.h>
+
+struct backlight_device;
+struct fb_info;
+
+/* This structure defines all the properties of a backlight
+   (usually attached to an LCD). */
+struct backlight_properties {
+ /* Owner module */
+ struct module *owner;
+ /* Get the backlight power status (0: full on, 1..3: power saving
+ modes; 4: full off), see FB_BLANK_XXX */
+ int (*get_power)(struct backlight_device *);
+ /* Enable or disable power to the LCD (0: on; 4: off, see FB_BLANK_XXX) */
+ int (*set_power)(struct backlight_device *, int power);
+ /* Maximal value for brightness (read-only) */
+ int max_brightness;
+ /* Get current backlight brightness */
+ int (*get_brightness)(struct backlight_device *);
+ /* Set backlight brightness (0..max_brightness) */
+ int (*set_brightness)(struct backlight_device *, int brightness);
+ /* Check if given framebuffer device is the one bound to this backlight;
+ return 0 if not, !=0 if it is. If NULL, backlight always matches the fb. */
+ int (*check_fb)(struct fb_info *);
+};
+
+struct backlight_device {
+ /* This protects the 'props' field. If 'props' is NULL, the driver that
+ registered this device has been unloaded, and if class_get_devdata()
+ points to something in the body of that driver, it is also invalid. */
+ struct semaphore sem;
+ /* If this is NULL, the backing module is unloaded */
+ struct backlight_properties *props;
+ /* The framebuffer notifier block */
+ struct notifier_block fb_notif;
+ /* The class device structure */
+ struct class_device class_dev;
+};
+
+extern struct backlight_device *backlight_device_register(const char *name,
+ void *devdata, struct backlight_properties *bp);
+extern void backlight_device_unregister(struct backlight_device *bd);
+
+#define to_backlight_device(obj) container_of(obj, struct backlight_device, class_dev)
+
+#endif
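A hedged registration sketch for the backlight class declared above; it is not part of the patch, and the mylcd_* callbacks, device name and brightness range are illustrative assumptions.

/* Hypothetical example, not from the patch. */
#include <linux/backlight.h>
#include <linux/err.h>
#include <linux/module.h>

#define MYLCD_MAX_BRIGHTNESS 15		/* assumed hardware range 0..15 */
static int mylcd_level;

static int mylcd_get_brightness(struct backlight_device *bd)
{
	return mylcd_level;
}

static int mylcd_set_brightness(struct backlight_device *bd, int brightness)
{
	mylcd_level = brightness;	/* a real driver would program the panel here */
	return 0;
}

static struct backlight_properties mylcd_props = {
	.owner		= THIS_MODULE,
	.max_brightness	= MYLCD_MAX_BRIGHTNESS,
	.get_brightness	= mylcd_get_brightness,
	.set_brightness	= mylcd_set_brightness,
};

static struct backlight_device *mylcd_bd;

static int mylcd_register(void)
{
	mylcd_bd = backlight_device_register("mylcd", NULL, &mylcd_props);
	return IS_ERR(mylcd_bd) ? PTR_ERR(mylcd_bd) : 0;
}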
#ifndef _LINUX_BITOPS_H
#define _LINUX_BITOPS_H
#include <asm/types.h>
-#include <asm/bitops.h>
/*
* ffs: find first bit set. This is defined the same way as
return r;
}
+/*
+ * Include this here because some architectures need generic_ffs/fls in
+ * scope
+ */
+#include <asm/bitops.h>
+
static __inline__ int get_bitmask_order(unsigned int count)
{
int order;
/* these global variables hold the actual statistics data */
extern struct coda_vfs_stats coda_vfs_stat;
-extern struct coda_cache_inv_stats coda_cache_inv_stat;
-
-/* reset statistics to 0 */
-void reset_coda_vfs_stats( void );
-void reset_coda_cache_inv_stats( void );
-
-/* like coda_dointvec, these functions are to be registered in the ctl_table
- * data structure for /proc/sys/... files
- */
-int do_reset_coda_vfs_stats( ctl_table * table, int write, struct file * filp,
- void __user * buffer, size_t * lenp, loff_t * ppos );
-int do_reset_coda_cache_inv_stats( ctl_table * table, int write,
- struct file * filp, void __user * buffer,
- size_t * lenp, loff_t * ppos );
-
-/* these functions are called to form the content of /proc/fs/coda/... files */
-int coda_vfs_stats_get_info( char * buffer, char ** start, off_t offset,
- int length);
-int coda_cache_inv_stats_get_info( char * buffer, char ** start, off_t offset,
- int length);
-
#endif /* _CODA_PROC_H */
const char *name, int length,
struct CodaFid *newfid, struct coda_vattr *attrs);
int venus_create(struct super_block *sb, struct CodaFid *dirfid,
- const char *name, int length, int excl, int mode, dev_t rdev,
+ const char *name, int length, int excl, int mode,
struct CodaFid *newfid, struct coda_vattr *attrs) ;
int venus_rmdir(struct super_block *sb, struct CodaFid *dirfid,
const char *name, int length);
#define ARCNET_TOTAL_SIZE 8
/* various register addresses */
-#define _INTMASK (ioaddr+0) /* writable */
-#define _STATUS (ioaddr+0) /* readable */
-#define _COMMAND (ioaddr+1) /* standard arcnet commands */
-#define _DIAGSTAT (ioaddr+1) /* diagnostic status register */
-#define _ADDR_HI (ioaddr+2) /* control registers for IO-mapped memory */
-#define _ADDR_LO (ioaddr+3)
-#define _MEMDATA (ioaddr+4) /* data port for IO-mapped memory */
-#define _SUBADR (ioaddr+5) /* the extended port _XREG refers to */
-#define _CONFIG (ioaddr+6) /* configuration register */
-#define _XREG (ioaddr+7) /* extra registers (indexed by _CONFIG
- or _SUBADR) */
+#ifdef CONFIG_SA1100_CT6001
+#define BUS_ALIGN 2 /* 8 bit device on a 16 bit bus - needs padding */
+#else
+#define BUS_ALIGN 1
+#endif
+
+
+#define _INTMASK (ioaddr+BUS_ALIGN*0) /* writable */
+#define _STATUS (ioaddr+BUS_ALIGN*0) /* readable */
+#define _COMMAND (ioaddr+BUS_ALIGN*1) /* standard arcnet commands */
+#define _DIAGSTAT (ioaddr+BUS_ALIGN*1) /* diagnostic status register */
+#define _ADDR_HI (ioaddr+BUS_ALIGN*2) /* control registers for IO-mapped memory */
+#define _ADDR_LO (ioaddr+BUS_ALIGN*3)
+#define _MEMDATA (ioaddr+BUS_ALIGN*4) /* data port for IO-mapped memory */
+#define _SUBADR (ioaddr+BUS_ALIGN*5) /* the extended port _XREG refers to */
+#define _CONFIG (ioaddr+BUS_ALIGN*6) /* configuration register */
+#define _XREG (ioaddr+BUS_ALIGN*7) /* extra registers (indexed by _CONFIG
+ or _SUBADR) */
/* in the ADDR_HI register */
#define RDDATAflag 0x80 /* next access is a read (not a write) */
}
#define ASTATUS() inb(_STATUS)
+#define ADIAGSTATUS() inb(_DIAGSTAT)
#define ACOMMAND(cmd) outb((cmd),_COMMAND)
#define AINTMASK(msk) outb((msk),_INTMASK)
}
extern void FASTCALL(wait_for_completion(struct completion *));
+extern int FASTCALL(wait_for_completion_interruptible(struct completion *x));
+extern unsigned long FASTCALL(wait_for_completion_timeout(struct completion *x,
+ unsigned long timeout));
+extern unsigned long FASTCALL(wait_for_completion_interruptible_timeout(
+ struct completion *x, unsigned long timeout));
+
extern void FASTCALL(complete(struct completion *));
extern void FASTCALL(complete_all(struct completion *));
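A usage sketch for the new timeout variants declared above; it is not part of the patch. A hypothetical driver waits up to 500 ms for its interrupt handler to signal completion; the mydev_* names are assumptions.

/* Hypothetical example, not from the patch. */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static DECLARE_COMPLETION(mydev_done);

static void mydev_irq_done(void)	/* called from the interrupt handler */
{
	complete(&mydev_done);
}

static int mydev_wait(void)
{
	unsigned long left;

	/* returns 0 on timeout, otherwise the number of jiffies remaining */
	left = wait_for_completion_timeout(&mydev_done, msecs_to_jiffies(500));
	return left ? 0 : -ETIMEDOUT;
}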
* to achieve effects such as fast scrolling by changing the origin.
*/
+struct vt_struct;
+
#define NPAR 16
struct vc_data {
struct vc_data **vc_display_fg; /* [!] Ptr to var holding fg console for this display */
unsigned long vc_uni_pagedir;
unsigned long *vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ struct vt_struct *vc_vt;
/* additional information is in vt_kern.h */
};
--- /dev/null
+/*
+ * debugfs.h - a tiny little debug file system
+ *
+ * Copyright (C) 2004 Greg Kroah-Hartman <greg@kroah.com>
+ * Copyright (C) 2004 IBM Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * debugfs is for people to use instead of /proc or /sys.
+ * See Documentation/DocBook/kernel-api for more details.
+ */
+
+#ifndef _DEBUGFS_H_
+#define _DEBUGFS_H_
+
+#if defined(CONFIG_DEBUG_FS)
+struct dentry *debugfs_create_file(const char *name, mode_t mode,
+ struct dentry *parent, void *data,
+ struct file_operations *fops);
+
+struct dentry *debugfs_create_dir(const char *name, struct dentry *parent);
+
+void debugfs_remove(struct dentry *dentry);
+
+struct dentry *debugfs_create_u8(const char *name, mode_t mode,
+ struct dentry *parent, u8 *value);
+struct dentry *debugfs_create_u16(const char *name, mode_t mode,
+ struct dentry *parent, u16 *value);
+struct dentry *debugfs_create_u32(const char *name, mode_t mode,
+ struct dentry *parent, u32 *value);
+struct dentry *debugfs_create_bool(const char *name, mode_t mode,
+ struct dentry *parent, u32 *value);
+
+#else
+/*
+ * We do not return NULL from these functions if CONFIG_DEBUG_FS is not enabled
+ * so users have a chance to detect if there was a real error or not. We don't
+ * want to duplicate the design decision mistakes of procfs and devfs again.
+ */
+
+static inline struct dentry *debugfs_create_file(const char *name, mode_t mode,
+ struct dentry *parent,
+ void *data,
+ struct file_operations *fops)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline struct dentry *debugfs_create_dir(const char *name,
+ struct dentry *parent)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void debugfs_remove(struct dentry *dentry)
+{ }
+
+static inline struct dentry *debugfs_create_u8(const char *name, mode_t mode,
+ struct dentry *parent,
+ u8 *value)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline struct dentry *debugfs_create_u16(const char *name, mode_t mode,
+						struct dentry *parent,
+						u16 *value)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline struct dentry *debugfs_create_u32(const char *name, mode_t mode,
+						struct dentry *parent,
+						u32 *value)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline struct dentry *debugfs_create_bool(const char *name, mode_t mode,
+						struct dentry *parent,
+						u32 *value)
+{
+ return ERR_PTR(-ENODEV);
+}
+
+#endif
+
+#endif
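A minimal usage sketch for the debugfs interface above; it is not part of the patch. A hypothetical driver exports one counter; the "mydrv" names are illustrative assumptions, and with CONFIG_DEBUG_FS disabled the calls return ERR_PTR(-ENODEV) as the stub comment explains.

/* Hypothetical example, not from the patch. */
#include <linux/debugfs.h>
#include <linux/err.h>
#include <linux/types.h>

static u32 mydrv_events;			/* counter exposed read/write */
static struct dentry *mydrv_dir, *mydrv_file;

static int mydrv_debugfs_init(void)
{
	mydrv_dir = debugfs_create_dir("mydrv", NULL);
	if (IS_ERR(mydrv_dir))
		return PTR_ERR(mydrv_dir);	/* -ENODEV when debugfs is compiled out */

	mydrv_file = debugfs_create_u32("events", 0644, mydrv_dir, &mydrv_events);
	return 0;
}

static void mydrv_debugfs_exit(void)
{
	debugfs_remove(mydrv_file);		/* remove files first, then the directory */
	debugfs_remove(mydrv_dir);
}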
struct bio *bio, int error,
union map_info *map_context);
-typedef void (*dm_suspend_fn) (struct dm_target *ti);
+typedef void (*dm_presuspend_fn) (struct dm_target *ti);
+typedef void (*dm_postsuspend_fn) (struct dm_target *ti);
typedef void (*dm_resume_fn) (struct dm_target *ti);
typedef int (*dm_status_fn) (struct dm_target *ti, status_type_t status_type,
dm_dtr_fn dtr;
dm_map_fn map;
dm_endio_fn end_io;
- dm_suspend_fn suspend;
+ dm_presuspend_fn presuspend;
+ dm_postsuspend_fn postsuspend;
dm_resume_fn resume;
dm_status_fn status;
dm_message_fn message;
sector_t len;
/* FIXME: turn this into a mask, and merge with io_restrictions */
+ /* Always a power of 2 */
sector_t split_io;
/*
/*
* Copyright (C) 2001 - 2003 Sistina Software (UK) Limited.
- * Copyright (C) 2004 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2004 - 2005 Red Hat, Inc. All rights reserved.
*
* This file is released under the LGPL.
*/
#define DM_TARGET_MSG _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
-#define DM_VERSION_MINOR 3
+#define DM_VERSION_MINOR 4
#define DM_VERSION_PATCHLEVEL 0
-#define DM_VERSION_EXTRA "-ioctl (2004-09-30)"
+#define DM_VERSION_EXTRA "-ioctl (2005-01-12)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
*/
#define DM_BUFFER_FULL_FLAG (1 << 8) /* Out */
+/*
+ * Set this to improve performance when you aren't going to use open_count
+ */
+#define DM_SKIP_BDGET_FLAG (1 << 9) /* In */
+
#endif /* _LINUX_DM_IOCTL_H */
QAM_64,
QAM_128,
QAM_256,
- QAM_AUTO
+ QAM_AUTO,
+ VSB_8,
+ VSB_16
} fe_modulation_t;
-
typedef enum fe_transmit_mode {
TRANSMISSION_MODE_2K,
TRANSMISSION_MODE_8K,
fe_modulation_t modulation; /* modulation type (see above) */
};
+struct dvb_vsb_parameters {
+ fe_modulation_t modulation; /* modulation type (see above) */
+};
struct dvb_ofdm_parameters {
fe_bandwidth_t bandwidth;
struct dvb_frontend_parameters {
- __u32 frequency; /* (absolute) frequency in Hz for QAM/OFDM */
+ __u32 frequency; /* (absolute) frequency in Hz for QAM/OFDM/ATSC */
/* intermediate frequency in kHz for QPSK */
fe_spectral_inversion_t inversion;
union {
struct dvb_qpsk_parameters qpsk;
struct dvb_qam_parameters qam;
struct dvb_ofdm_parameters ofdm;
+ struct dvb_vsb_parameters vsb;
} u;
};
#define _DVBVERSION_H_
#define DVB_API_VERSION 3
+#define DVB_API_VERSION_MINOR 1
#endif /*_DVBVERSION_H_*/
struct pt_types {
int pt_type;
char *pt_name;
-} sgi_pt_types[] = {
- {0x00, "SGI vh"},
- {0x01, "SGI trkrepl"},
- {0x02, "SGI secrepl"},
- {0x03, "SGI raw"},
- {0x04, "SGI bsd"},
- {SGI_SYSV, "SGI sysv"},
- {0x06, "SGI vol"},
- {SGI_EFS, "SGI efs"},
- {0x08, "SGI lv"},
- {0x09, "SGI rlv"},
- {0x0A, "SGI xfs"},
- {0x0B, "SGI xfslog"},
- {0x0C, "SGI xlv"},
- {0x82, "Linux swap"},
- {0x83, "Linux native"},
- {0, NULL}
};
#endif /* __EFS_VH_H__ */
unsigned char * haddr);
extern int eth_header_cache(struct neighbour *neigh,
struct hh_cache *hh);
-extern int eth_header_parse(struct sk_buff *skb,
- unsigned char *haddr);
extern struct net_device *alloc_etherdev(int sizeof_priv);
static inline void eth_copy_and_sum (struct sk_buff *dest,
#include <linux/if_fc.h>
#ifdef __KERNEL__
-extern int fc_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, void *daddr,
- void *saddr, unsigned len);
-extern int fc_rebuild_header(struct sk_buff *skb);
extern unsigned short fc_type_trans(struct sk_buff *skb, struct net_device *dev);
extern struct net_device *alloc_fcdev(int sizeof_priv);
#include <linux/if_fddi.h>
#ifdef __KERNEL__
-extern int fddi_header(struct sk_buff *skb,
- struct net_device *dev,
- unsigned short type,
- void *daddr,
- void *saddr,
- unsigned len);
-extern int fddi_rebuild_header(struct sk_buff *skb);
extern unsigned short fddi_type_trans(struct sk_buff *skb,
struct net_device *dev);
extern struct net_device *alloc_fddidev(int sizeof_priv);
size_t size;
u8 *data;
};
+struct device;
int request_firmware(const struct firmware **fw, const char *name,
struct device *device);
int request_firmware_nowait(
--- /dev/null
+/*
+ * include/linux/fsl_devices.h
+ *
+ * Definitions for any platform device related flags or structures for
+ * Freescale processor devices
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com)
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef _FSL_DEVICE_H_
+#define _FSL_DEVICE_H_
+
+#include <linux/types.h>
+
+/*
+ * Some conventions on how we handle peripherals on Freescale chips
+ *
+ * unique device: a platform_device entry in fsl_plat_devs[] plus
+ * associated device information in its platform_data structure.
+ *
+ * A chip is described by a set of unique devices.
+ *
+ * Each sub-arch has its own master list of unique devices and
+ * enumerates them by enum fsl_devices in a sub-arch specific header
+ *
+ * The platform data structure is broken into two parts. The
+ * first is device-specific information that helps identify any
+ * unique features of a peripheral.  The second is any
+ * information that may be defined by the board or by how the device
+ * is connected externally to the chip.
+ *
+ * naming conventions:
+ * - platform data structures: <driver>_platform_data
+ * - platform data device flags: FSL_<driver>_DEV_<FLAG>
+ * - platform data board flags: FSL_<driver>_BRD_<FLAG>
+ *
+ */
+
+struct gianfar_platform_data {
+ /* device specific information */
+ u32 device_flags;
+ u32 phy_reg_addr;
+
+ /* board specific information */
+ u32 board_flags;
+ u32 phyid;
+ u32 interruptPHY;
+ u8 mac_addr[6];
+};
+
+/* Flags related to gianfar device features */
+#define FSL_GIANFAR_DEV_HAS_GIGABIT 0x00000001
+#define FSL_GIANFAR_DEV_HAS_COALESCE 0x00000002
+#define FSL_GIANFAR_DEV_HAS_RMON 0x00000004
+#define FSL_GIANFAR_DEV_HAS_MULTI_INTR 0x00000008
+
+/* Flags in gianfar_platform_data */
+#define FSL_GIANFAR_BRD_HAS_PHY_INTR 0x00000001 /* if not set use a timer */
+
+struct fsl_i2c_platform_data {
+ /* device specific information */
+ u32 device_flags;
+};
+
+/* Flags related to I2C device features */
+#define FSL_I2C_DEV_SEPARATE_DFSRR 0x00000001
+#define FSL_I2C_DEV_CLOCK_5200 0x00000002
+
+#endif /* _FSL_DEVICE_H_ */
+#endif /* __KERNEL__ */
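A board-code sketch following the conventions described in the header above; it is not part of the patch, and every address, id, interrupt line and MAC value below is a placeholder assumption.

/* Hypothetical example, not from the patch. */
#include <linux/fsl_devices.h>

static struct gianfar_platform_data board_tsec0_pdata = {
	/* device specific information */
	.device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT |
			FSL_GIANFAR_DEV_HAS_COALESCE |
			FSL_GIANFAR_DEV_HAS_RMON,
	.phy_reg_addr = 0xe0024000,	/* placeholder MDIO register base */

	/* board specific information */
	.board_flags  = FSL_GIANFAR_BRD_HAS_PHY_INTR,
	.phyid        = 0,
	.interruptPHY = 17,		/* placeholder PHY interrupt line */
	.mac_addr     = { 0x00, 0x04, 0x9f, 0x00, 0x00, 0x01 },
};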
void gs_stop(struct tty_struct *tty);
void gs_start(struct tty_struct *tty);
void gs_hangup(struct tty_struct *tty);
-void gs_do_softint(void *private_);
int gs_block_til_ready(void *port, struct file *filp);
void gs_close(struct tty_struct *tty, struct file *filp);
void gs_set_termios (struct tty_struct * tty,
#endif
};
+/* Structure for sysfs attributes on block devices */
+struct disk_attribute {
+ struct attribute attr;
+ ssize_t (*show)(struct gendisk *, char *);
+};
+
/*
* Macros to operate on percpu disk statistics:
*
extern void disk_round_stats(struct gendisk *disk);
/* drivers/block/genhd.c */
+extern int get_blkdev_list(char *);
extern void add_disk(struct gendisk *disk);
extern void del_gendisk(struct gendisk *gp);
extern void unlink_gendisk(struct gendisk *gp);
#include <linux/config.h>
#include <linux/smp_lock.h>
#include <asm/hardirq.h>
+#include <asm/system.h>
/*
* We put the hardirq and softirq counter into the preemption
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
-#ifdef CONFIG_PREEMPT
+#if defined(CONFIG_PREEMPT) && !defined(CONFIG_PREEMPT_BKL)
# define in_atomic() ((preempt_count() & ~PREEMPT_ACTIVE) != kernel_locked())
+#else
+# define in_atomic() ((preempt_count() & ~PREEMPT_ACTIVE) != 0)
+#endif
+
+#ifdef CONFIG_PREEMPT
# define preemptible() (preempt_count() == 0 && !irqs_disabled())
# define IRQ_EXIT_OFFSET (HARDIRQ_OFFSET-1)
#else
-# define in_atomic() (preempt_count() != 0)
# define preemptible() 0
# define IRQ_EXIT_OFFSET HARDIRQ_OFFSET
#endif
# define synchronize_irq(irq) barrier()
#endif
-#ifdef CONFIG_GENERIC_HARDIRQS
-#define nmi_enter() (preempt_count() += HARDIRQ_OFFSET)
-#define nmi_exit() (preempt_count() -= HARDIRQ_OFFSET)
+#define nmi_enter() irq_enter()
+#define nmi_exit() sub_preempt_count(HARDIRQ_OFFSET)
-#define irq_enter() (preempt_count() += HARDIRQ_OFFSET)
-extern void irq_exit(void);
+#ifndef CONFIG_VIRT_CPU_ACCOUNTING
+static inline void account_user_vtime(struct task_struct *tsk)
+{
+}
+
+static inline void account_system_vtime(struct task_struct *tsk)
+{
+}
#endif
+#define irq_enter() \
+ do { \
+ account_system_vtime(current); \
+ add_preempt_count(HARDIRQ_OFFSET); \
+ } while (0)
+
+extern void irq_exit(void);
+
#endif /* LINUX_HARDIRQ_H */
#include <linux/if_hippi.h>
#ifdef __KERNEL__
-extern int hippi_header(struct sk_buff *skb,
- struct net_device *dev,
- unsigned short type,
- void *daddr,
- void *saddr,
- unsigned len);
-
-extern int hippi_rebuild_header(struct sk_buff *skb);
-
extern unsigned short hippi_type_trans(struct sk_buff *skb,
struct net_device *dev);
-extern void hippi_header_cache_bind(struct hh_cache ** hhp,
- struct net_device *dev,
- unsigned short htype,
- __u32 daddr);
-
-extern void hippi_header_cache_update(struct hh_cache *hh,
- struct net_device *dev,
- unsigned char * haddr);
-extern int hippi_header_parse(struct sk_buff *skb, unsigned char *haddr);
-
-extern void hippi_net_init(void);
-
extern struct net_device *alloc_hippi_dev(int sizeof_priv);
#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License version 2 as published by the Free Software Foundation.
+ *
+ * Copyright (C) 2003 Ladislav Michl <ladis@linux-mips.org>
+ */
+
+#ifndef I2C_ALGO_SGI_H
+#define I2C_ALGO_SGI_H 1
+
+#include <linux/i2c.h>
+
+struct i2c_algo_sgi_data {
+ void *data; /* private data for lowlevel routines */
+ unsigned (*getctrl)(void *data);
+ void (*setctrl)(void *data, unsigned val);
+ unsigned (*rdata)(void *data);
+ void (*wdata)(void *data, unsigned val);
+
+ int xfer_timeout;
+ int ack_timeout;
+};
+
+int i2c_sgi_add_bus(struct i2c_adapter *);
+int i2c_sgi_del_bus(struct i2c_adapter *);
+
+#endif /* I2C_ALGO_SGI_H */
--- /dev/null
+/*
+ * Copyright (C) 2001,2002,2003 Broadcom Corporation
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+
+#ifndef I2C_ALGO_SIBYTE_H
+#define I2C_ALGO_SIBYTE_H 1
+
+#include <linux/i2c.h>
+
+struct i2c_algo_sibyte_data {
+ void *data; /* private data */
+ int bus; /* which bus */
+ void *reg_base; /* CSR base */
+};
+
+int i2c_sibyte_add_bus(struct i2c_adapter *, int speed);
+int i2c_sibyte_del_bus(struct i2c_adapter *);
+
+#endif /* I2C_ALGO_SIBYTE_H */
* These are the defined ARCnet Protocol ID's.
*/
+/* CAP mode */
+/* No macro but uses 1-8 */
+
/* RFC1201 Protocol ID's */
#define ARC_P_IP 212 /* 0xD4 */
#define ARC_P_IPV6 196 /* 0xC4: RFC2497 */
#define ETH_ENCAP_HDR_SIZE 14
+struct arc_cap
+{
+ uint8_t proto;
+ uint8_t cookie[sizeof(int)]; /* Actually NOT sent over the network */
+ union {
+ uint8_t ack;
+ uint8_t raw[0]; /* 507 bytes */
+ } mes;
+};
+
/*
* The data needed by the actual arcnet hardware.
*
struct arc_rfc1201 rfc1201;
struct arc_rfc1051 rfc1051;
struct arc_eth_encap eth_encap;
+ struct arc_cap cap;
uint8_t raw[0]; /* 508 bytes */
} soft;
};
#define ETH_P_ATMFATE 0x8884 /* Frame-based ATM Transport
* over Ethernet
*/
-#define ETH_P_EDP2 0x88A2 /* Coraid EDP2 */
+#define ETH_P_AOE 0x88A2 /* ATA over Ethernet */
/*
* Non DIX types. Won't clash for 1500 types.
#define ETH_P_IRDA 0x0017 /* Linux-IrDA */
#define ETH_P_ECONET 0x0018 /* Acorn Econet */
#define ETH_P_HDLC 0x0019 /* HDLC frames */
+#define ETH_P_ARCNET 0x001A /* 1A for ArcNet :-) */
/*
* This is an Ethernet frame header.
struct fasync_struct *fasync;
+ unsigned long if_flags;
+ u8 dev_addr[ETH_ALEN];
+ u32 chr_filter[2];
+ u32 net_filter[2];
+
#ifdef TUN_DEBUG
int debug;
#endif
#endif /* __KERNEL__ */
/* Read queue size */
-#define TUN_READQ_SIZE 10
+#define TUN_READQ_SIZE 500
/* TUN device flags */
#define TUN_TUN_DEV 0x0001
* NOTE: Be aware the IN6ADDR_* constants and in6addr_* externals are defined
* in network byte order, not in host byte order as are the IPv4 equivalents
*/
+#if 0
extern const struct in6_addr in6addr_any;
#define IN6ADDR_ANY_INIT { { { 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 } } }
+#endif
extern const struct in6_addr in6addr_loopback;
#define IN6ADDR_LOOPBACK_INIT { { { 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1 } } }
#define REL_X 0x00
#define REL_Y 0x01
#define REL_Z 0x02
+#define REL_RX 0x03
+#define REL_RY 0x04
+#define REL_RZ 0x05
#define REL_HWHEEL 0x06
#define REL_DIAL 0x07
#define REL_WHEEL 0x08
#define LED_SUSPEND 0x06
#define LED_MUTE 0x07
#define LED_MISC 0x08
+#define LED_MAIL 0x09
+#define LED_CHARGING 0x0a
#define LED_MAX 0x0f
/*
unsigned int repeat_key;
struct timer_list timer;
- struct pm_dev *pm_dev;
struct pt_regs *regs;
int state;
} cork;
};
-struct raw6_opt {
+/* WARNING: don't change the layout of the members in {raw,udp,tcp}6_sock! */
+struct raw6_sock {
+ /* inet_sock has to be the first member of raw6_sock */
+ struct inet_sock inet;
__u32 checksum; /* perform checksum */
__u32 offset; /* checksum offset */
-
struct icmp6_filter filter;
-};
-
-/* WARNING: don't change the layout of the members in {raw,udp,tcp}6_sock! */
-struct raw6_sock {
- struct sock sk;
- struct ipv6_pinfo *pinet6;
- struct inet_opt inet;
- struct raw6_opt raw6;
- struct ipv6_pinfo inet6;
+ /* ipv6_pinfo has to be the last member of raw6_sock, see inet6_sk_generic */
+ struct ipv6_pinfo inet6;
};
struct udp6_sock {
- struct sock sk;
- struct ipv6_pinfo *pinet6;
- struct inet_opt inet;
- struct udp_opt udp;
+ struct udp_sock udp;
+ /* ipv6_pinfo has to be the last member of udp6_sock, see inet6_sk_generic */
struct ipv6_pinfo inet6;
};
struct tcp6_sock {
- struct sock sk;
- struct ipv6_pinfo *pinet6;
- struct inet_opt inet;
- struct tcp_opt tcp;
+ struct tcp_sock tcp;
+ /* ipv6_pinfo has to be the last member of tcp6_sock, see inet6_sk_generic */
struct ipv6_pinfo inet6;
};
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
static inline struct ipv6_pinfo * inet6_sk(const struct sock *__sk)
{
- return ((struct raw6_sock *)__sk)->pinet6;
+ return inet_sk(__sk)->pinet6;
}
-static inline struct raw6_opt * raw6_sk(const struct sock *__sk)
+static inline struct raw6_sock *raw6_sk(const struct sock *sk)
{
- return &((struct raw6_sock *)__sk)->raw6;
+ return (struct raw6_sock *)sk;
+}
+
+static inline void inet_sk_copy_descendant(struct sock *sk_to,
+ const struct sock *sk_from)
+{
+ int ancestor_size = sizeof(struct inet_sock);
+
+ if (sk_from->sk_family == PF_INET6)
+ ancestor_size += sizeof(struct ipv6_pinfo);
+
+ __inet_sk_copy_descendant(sk_to, sk_from, ancestor_size);
}
-#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
#define __ipv6_only_sock(sk) (inet6_sk(sk)->ipv6only)
#define ipv6_only_sock(sk) ((sk)->sk_family == PF_INET6 && __ipv6_only_sock(sk))
#else
#define __ipv6_only_sock(sk) 0
#define ipv6_only_sock(sk) 0
+
+static inline struct ipv6_pinfo * inet6_sk(const struct sock *__sk)
+{
+ return NULL;
+}
+
+static inline struct raw6_sock *raw6_sk(const struct sock *sk)
+{
+ return NULL;
+}
+
#endif
#endif
#endif
/* arch independent irq_stat fields */
-#define softirq_pending(cpu) __IRQ_STAT((cpu), __softirq_pending)
-#define local_softirq_pending() softirq_pending(smp_processor_id())
+#define local_softirq_pending() \
+ __IRQ_STAT(smp_processor_id(), __softirq_pending)
/* arch dependent irq_stat fields */
#define nmi_count(cpu) __IRQ_STAT((cpu), __nmi_count) /* i386 */
-/* $Id: jffs2_fs_sb.h,v 1.45 2003/10/08 11:46:27 dwmw2 Exp $ */
+/* $Id: jffs2_fs_sb.h,v 1.48 2004/11/20 10:41:12 dwmw2 Exp $ */
#ifndef _JFFS2_FS_SB
#define _JFFS2_FS_SB
#include <linux/timer.h>
#include <linux/wait.h>
#include <linux/list.h>
+#include <linux/rwsem.h>
#define JFFS2_SB_FLAG_RO 1
#define JFFS2_SB_FLAG_MOUNTING 2
struct semaphore alloc_sem; /* Used to protect all the following
fields, and also to protect against
- out-of-order writing of nodes.
- And GC.
- */
+ out-of-order writing of nodes. And GC. */
uint32_t cleanmarker_size; /* Size of an _inline_ CLEANMARKER
					 (i.e. zero for OOB CLEANMARKER) */
to an obsoleted node. I don't like this. Alternatives welcomed. */
struct semaphore erase_free_sem;
-#ifdef CONFIG_JFFS2_FS_NAND
+#if defined CONFIG_JFFS2_FS_NAND || defined CONFIG_JFFS2_FS_NOR_ECC
/* Write-behind buffer for NAND flash */
unsigned char *wbuf;
uint32_t wbuf_ofs;
uint32_t wbuf_pagesize;
struct jffs2_inodirty *wbuf_inodes;
+ struct rw_semaphore wbuf_sem; /* Protects the write buffer */
+
/* Information about out-of-band area usage... */
struct nand_oobinfo *oobinfo;
uint32_t badblock_pos;
 /* a value TUSEC for TICK_USEC (can be set by adjtimex) */
#define TICK_USEC_TO_NSEC(TUSEC) (SH_DIV (TUSEC * USER_HZ * 1000, ACTHZ, 8))
+/* some architectures have a small-data section that can be accessed register-relative
+ * but that can only take up to, say, 4-byte variables. jiffies, being part of
+ * an 8-byte variable, may not be correctly accessed unless we force the issue
+ */
+#define __jiffy_data __attribute__((section(".data")))
+
/*
* The 64-bit value is not volatile - you MUST NOT read it
* without sampling the sequence number in xtime_lock.
* get_jiffies_64() will do this for you as appropriate.
*/
-extern u64 jiffies_64;
-extern unsigned long volatile jiffies;
+extern u64 __jiffy_data jiffies_64;
+extern unsigned long volatile __jiffy_data jiffies;
#if (BITS_PER_LONG < 64)
u64 get_jiffies_64(void);
static inline unsigned int jiffies_to_usecs(const unsigned long j)
{
-#if HZ <= 1000 && !(1000 % HZ)
+#if HZ <= 1000000 && !(1000000 % HZ)
return (1000000 / HZ) * j;
-#elif HZ > 1000 && !(HZ % 1000)
- return (j*1000 + (HZ - 1000))/(HZ / 1000);
+#elif HZ > 1000000 && !(HZ % 1000000)
+ return (j + (HZ / 1000000) - 1)/(HZ / 1000000);
#else
return (j * 1000000) / HZ;
#endif
#endif
}
+static inline unsigned long usecs_to_jiffies(const unsigned int u)
+{
+ if (u > jiffies_to_usecs(MAX_JIFFY_OFFSET))
+ return MAX_JIFFY_OFFSET;
+#if HZ <= 1000000 && !(1000000 % HZ)
+ return (u + (1000000 / HZ) - 1) / (1000000 / HZ);
+#elif HZ > 1000000 && !(HZ % 1000000)
+ return u * (HZ / 1000000);
+#else
+ return (u * HZ + 999999) / 1000000;
+#endif
+}
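A quick worked example of the rounding above, assuming HZ == 1000 so the first branch is taken in both helpers:

/*
 * usecs_to_jiffies(1500) = (1500 + 1000 - 1) / 1000 = 2 jiffies (rounds up)
 * jiffies_to_usecs(2)    = (1000000 / 1000) * 2     = 2000 usecs
 * i.e. a sub-tick delay is never rounded down to zero jiffies.
 */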
+
/*
* The TICK_NSEC - 1 rounds up the value to the next resolution. Note
* that a remainder subtract here would not do the right thing as the
}
extern int arch_prepare_kprobe(struct kprobe *p);
+extern void arch_copy_kprobe(struct kprobe *p);
extern void arch_remove_kprobe(struct kprobe *p);
extern void show_registers(struct pt_regs *regs);
--- /dev/null
+#ifndef _LIBPS2_H
+#define _LIBPS2_H
+
+/*
+ * Copyright (C) 1999-2002 Vojtech Pavlik
+ * Copyright (C) 2004 Dmitry Torokhov
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+
+#define PS2_CMD_GETID 0x02f2
+#define PS2_CMD_RESET_BAT 0x02ff
+
+#define PS2_RET_BAT 0xaa
+#define PS2_RET_ID 0x00
+#define PS2_RET_ACK 0xfa
+#define PS2_RET_NAK 0xfe
+
+#define PS2_FLAG_ACK 1 /* Waiting for ACK/NAK */
+#define PS2_FLAG_CMD 2 /* Waiting for command to finish */
+#define PS2_FLAG_CMD1 4 /* Waiting for the first byte of command response */
+#define PS2_FLAG_WAITID		8	/* Command executing is GET ID */
+
+struct ps2dev {
+ struct serio *serio;
+
+ /* Ensures that only one command is executing at a time */
+ struct semaphore cmd_sem;
+
+ /* Used to signal completion from interrupt handler */
+ wait_queue_head_t wait;
+
+ unsigned long flags;
+ unsigned char cmdbuf[6];
+ unsigned char cmdcnt;
+ unsigned char nak;
+};
+
+void ps2_init(struct ps2dev *ps2dev, struct serio *serio);
+int ps2_sendbyte(struct ps2dev *ps2dev, unsigned char byte, int timeout);
+int ps2_command(struct ps2dev *ps2dev, unsigned char *param, int command);
+int ps2_schedule_command(struct ps2dev *ps2dev, unsigned char *param, int command);
+int ps2_handle_ack(struct ps2dev *ps2dev, unsigned char data);
+int ps2_handle_response(struct ps2dev *ps2dev, unsigned char data);
+void ps2_cmd_aborted(struct ps2dev *ps2dev);
+
+#endif /* _LIBPS2_H */
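A minimal usage sketch, assuming an already-initialised ps2dev and that the 0x02 nibble in PS2_CMD_GETID encodes the number of response bytes (the helper name is made up):

/* Sketch only: ask the device for its two-byte ID */
static int example_ps2_get_id(struct ps2dev *ps2dev, unsigned char id[2])
{
	return ps2_command(ps2dev, id, PS2_CMD_GETID);
}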
* @head: the head for your list.
*/
#define list_for_each(pos, head) \
- for (pos = (head)->next, prefetch(pos->next); pos != (head); \
- pos = pos->next, prefetch(pos->next))
+ for (pos = (head)->next; prefetch(pos->next), pos != (head); \
+ pos = pos->next)
/**
* __list_for_each - iterate over a list
* @head: the head for your list.
*/
#define list_for_each_prev(pos, head) \
- for (pos = (head)->prev, prefetch(pos->prev); pos != (head); \
- pos = pos->prev, prefetch(pos->prev))
+ for (pos = (head)->prev; prefetch(pos->prev), pos != (head); \
+ pos = pos->prev)
/**
* list_for_each_safe - iterate over a list safe against removal of list entry
* @member: the name of the list_struct within the struct.
*/
#define list_for_each_entry(pos, head, member) \
- for (pos = list_entry((head)->next, typeof(*pos), member), \
- prefetch(pos->member.next); \
- &pos->member != (head); \
- pos = list_entry(pos->member.next, typeof(*pos), member), \
- prefetch(pos->member.next))
+ for (pos = list_entry((head)->next, typeof(*pos), member); \
+ prefetch(pos->member.next), &pos->member != (head); \
+ pos = list_entry(pos->member.next, typeof(*pos), member))
/**
* list_for_each_entry_reverse - iterate backwards over list of given type.
* @member: the name of the list_struct within the struct.
*/
#define list_for_each_entry_reverse(pos, head, member) \
- for (pos = list_entry((head)->prev, typeof(*pos), member), \
- prefetch(pos->member.prev); \
- &pos->member != (head); \
- pos = list_entry(pos->member.prev, typeof(*pos), member), \
- prefetch(pos->member.prev))
+ for (pos = list_entry((head)->prev, typeof(*pos), member); \
+ prefetch(pos->member.prev), &pos->member != (head); \
+ pos = list_entry(pos->member.prev, typeof(*pos), member))
/**
* list_prepare_entry - prepare a pos entry for use as a start point in
* @member: the name of the list_struct within the struct.
*/
#define list_for_each_entry_continue(pos, head, member) \
- for (pos = list_entry(pos->member.next, typeof(*pos), member), \
- prefetch(pos->member.next); \
- &pos->member != (head); \
- pos = list_entry(pos->member.next, typeof(*pos), member), \
- prefetch(pos->member.next))
+ for (pos = list_entry(pos->member.next, typeof(*pos), member); \
+ prefetch(pos->member.next), &pos->member != (head); \
+ pos = list_entry(pos->member.next, typeof(*pos), member))
/**
* list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
* as long as the traversal is guarded by rcu_read_lock().
*/
#define list_for_each_rcu(pos, head) \
- for (pos = (head)->next, prefetch(pos->next); pos != (head); \
- pos = rcu_dereference(pos->next), prefetch(pos->next))
+ for (pos = (head)->next; prefetch(pos->next), pos != (head); \
+ pos = rcu_dereference(pos->next))
#define __list_for_each_rcu(pos, head) \
for (pos = (head)->next; pos != (head); \
* as long as the traversal is guarded by rcu_read_lock().
*/
#define list_for_each_entry_rcu(pos, head, member) \
- for (pos = list_entry((head)->next, typeof(*pos), member), \
- prefetch(pos->member.next); \
- &pos->member != (head); \
+ for (pos = list_entry((head)->next, typeof(*pos), member); \
+ prefetch(pos->member.next), &pos->member != (head); \
pos = rcu_dereference(list_entry(pos->member.next, \
- typeof(*pos), member)), \
- prefetch(pos->member.next))
+ typeof(*pos), member)))
/**
* as long as the traversal is guarded by rcu_read_lock().
*/
#define list_for_each_continue_rcu(pos, head) \
- for ((pos) = (pos)->next, prefetch((pos)->next); (pos) != (head); \
- (pos) = rcu_dereference((pos)->next), prefetch((pos)->next))
+ for ((pos) = (pos)->next; prefetch((pos)->next), (pos) != (head); \
+ (pos) = rcu_dereference((pos)->next))
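The hunks above only reorder where prefetch() is evaluated inside the iterator macros; from a caller's point of view the loops behave exactly as before. A trivial usage sketch with a made-up element type:

struct example_item {
	int value;
	struct list_head node;
};

static void example_dump(struct list_head *items)
{
	struct example_item *it;

	list_for_each_entry(it, items, node)
		printk(KERN_DEBUG "item %d\n", it->value);
}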
/*
* Double linked lists with a single pointer list head.
*/
#define LP_DELAY 50
-/*
- * function prototypes
- */
-
-extern int lp_init(void);
-
#endif
#endif
extern int mca_find_adapter(int id, int start);
extern int mca_find_unused_adapter(int id, int start);
-/* adapter state info - returns 0 if no */
-extern int mca_isadapter(int slot);
-extern int mca_isenabled(int slot);
-
extern int mca_is_adapter_used(int slot);
extern int mca_mark_as_used(int slot);
extern void mca_mark_as_unused(int slot);
* so we can have a more interesting /proc/mca.
*/
extern void mca_set_adapter_name(int slot, char* name);
-extern char* mca_get_adapter_name(int slot);
/* These routines actually mess with the hardware POS registers. They
* temporarily disable the device (and interrupts), so make sure you know
unsigned int timeout_clks; /* data timeout (in clocks) */
unsigned int blksz_bits; /* data block size */
unsigned int blocks; /* number of blocks */
- struct request *req __attribute__((deprecated));/* request structure (use the sg list instead) */
unsigned int error; /* data error */
unsigned int flags;
--- /dev/null
+/*
+ * MTD primitives for XIP support
+ *
+ * Author: Nicolas Pitre
+ * Created: Nov 2, 2004
+ * Copyright: (C) 2004 MontaVista Software, Inc.
+ *
+ * This XIP support for MTD has been loosely inspired
+ * by an earlier patch authored by David Woodhouse.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * $Id: xip.h,v 1.2 2004/12/01 15:49:10 nico Exp $
+ */
+
+#ifndef __LINUX_MTD_XIP_H__
+#define __LINUX_MTD_XIP_H__
+
+#include <linux/config.h>
+
+#ifdef CONFIG_MTD_XIP
+
+/*
+ * Functions that modify the flash state away from array mode must
+ * obviously not run from flash.  The __xipram marker therefore tags
+ * those functions so they get relocated to RAM.
+ */
+#define __xipram __attribute__ ((__section__ (".data")))
+
+/*
+ * We really don't want gcc to guess anything.
+ * We absolutely _need_ proper inlining.
+ */
+#include <linux/compiler.h>
+
+/*
+ * Each architecture has to provide the following macros. They must access
+ * the hardware directly and not rely on any other (XIP) functions since they
+ * won't be available when used (flash not in array mode).
+ *
+ * xip_irqpending()
+ *
+ * return non zero when any hardware interrupt is pending.
+ *
+ * xip_currtime()
+ *
+ * return a platform specific time reference to be used with
+ * xip_elapsed_since().
+ *
+ * xip_elapsed_since(x)
+ *
+ * return in usecs the elapsed time between now and the reference x as
+ * returned by xip_currtime().
+ *
+ * note 1: conversion to usec can be approximated, as long as the
+ * returned value is <= the real elapsed time.
+ * note 2: this should be able to cope with a few seconds without
+ * overflowing.
+ */
+
+#if defined(CONFIG_ARCH_SA1100) || defined(CONFIG_ARCH_PXA)
+
+#include <asm/hardware.h>
+#ifdef CONFIG_ARCH_PXA
+#include <asm/arch/pxa-regs.h>
+#endif
+
+#define xip_irqpending() (ICIP & ICMR)
+
+/* we sample OSCR and convert desired delta to usec (1/4 ~= 1000000/3686400) */
+#define xip_currtime() (OSCR)
+#define xip_elapsed_since(x) (signed)((OSCR - (x)) / 4)
+
+#else
+
+#warning "missing IRQ and timer primitives for XIP MTD support"
+#warning "some of the XIP MTD support code will be disabled"
+#warning "your system will therefore be unresponsive when writing or erasing flash"
+
+#define xip_irqpending() (0)
+#define xip_currtime() (0)
+#define xip_elapsed_since(x) (0)
+
+#endif
+
+/*
+ * xip_cpu_idle() is used when waiting for a delay equal to or larger than
+ * the system timer tick period. This should put the CPU into idle mode
+ * to save power and to be woken up only when some interrupts are pending.
+ * As above, this should not rely upon standard kernel code.
+ */
+
+#if defined(CONFIG_CPU_XSCALE)
+#define xip_cpu_idle() asm volatile ("mcr p14, 0, %0, c7, c0, 0" :: "r" (1))
+#else
+#define xip_cpu_idle() do { } while (0)
+#endif
+
+#else
+
+#define __xipram
+
+#endif /* CONFIG_MTD_XIP */
+
+#endif /* __LINUX_MTD_XIP_H__ */
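As a rough sketch of how a chip driver might combine these primitives (flash_ready() is a purely hypothetical status poll, and the error handling is illustrative, not taken from any real driver):

/* Illustrative only: poll flash status without running from the flash itself */
static int __xipram example_wait_ready(long usec_timeout)
{
	unsigned long start = xip_currtime();

	while (!flash_ready()) {		/* hypothetical status check */
		if (xip_irqpending())
			return -EINTR;		/* caller would re-enable XIP and retry */
		if (xip_elapsed_since(start) > usec_timeout)
			return -ETIME;
	}
	return 0;
}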
#ifndef __ASM_MV64340_H
#define __ASM_MV64340_H
+#ifdef __MIPS__
#include <asm/addrspace.h>
#include <asm/marvell.h>
+#endif
+#include <asm/types.h>
/****************************************/
/* Processor Address Space */
extern void mv64340_irq_init(unsigned int base);
+/* MPSC Platform Device, Driver Data (Shared register regions) */
+#define MPSC_SHARED_NAME "mpsc_shared"
+
+#define MPSC_ROUTING_BASE_ORDER 0
+#define MPSC_SDMA_INTR_BASE_ORDER 1
+
+#define MPSC_ROUTING_REG_BLOCK_SIZE 0x000c
+#define MPSC_SDMA_INTR_REG_BLOCK_SIZE 0x0084
+
+struct mpsc_shared_pdata {
+ u32 mrr_val;
+ u32 rcrr_val;
+ u32 tcrr_val;
+ u32 intr_cause_val;
+ u32 intr_mask_val;
+};
+
+/* MPSC Platform Device, Driver Data */
+#define MPSC_CTLR_NAME "mpsc"
+
+#define MPSC_BASE_ORDER 0
+#define MPSC_SDMA_BASE_ORDER 1
+#define MPSC_BRG_BASE_ORDER 2
+
+#define MPSC_REG_BLOCK_SIZE 0x0038
+#define MPSC_SDMA_REG_BLOCK_SIZE 0x0c18
+#define MPSC_BRG_REG_BLOCK_SIZE 0x0008
+
+struct mpsc_pdata {
+ u8 mirror_regs;
+ u8 cache_mgmt;
+ u8 max_idle;
+ int default_baud;
+ int default_bits;
+ int default_parity;
+ int default_flow;
+ u32 chr_1_val;
+ u32 chr_2_val;
+ u32 chr_10_val;
+ u32 mpcr_val;
+ u32 bcr_val;
+ u8 brg_can_tune;
+ u8 brg_clk_src;
+ u32 brg_clk_freq;
+};
+
#endif /* __ASM_MV64340_H */
{
struct list_head list;
const char name[EBT_FUNCTION_MAXNAMELEN];
- void (*watcher)(const struct sk_buff *skb, const struct net_device *in,
- const struct net_device *out, const void *watcherdata,
- unsigned int datalen);
+ void (*watcher)(const struct sk_buff *skb, unsigned int hooknr,
+ const struct net_device *in, const struct net_device *out,
+ const void *watcherdata, unsigned int datalen);
/* 0 == let it in */
int (*check)(const char *tablename, unsigned int hookmask,
const struct ebt_entry *e, void *watcherdata, unsigned int datalen);
#define _IP_CONNTRACK_AMANDA_H
/* AMANDA tracking. */
-struct ip_ct_amanda_expect
-{
- u_int16_t port; /* port number of this expectation */
- u_int16_t offset; /* offset of port in ctrl packet */
- u_int16_t len; /* length of the port number string */
-};
-
+struct ip_conntrack_expect;
+extern unsigned int (*ip_nat_amanda_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp);
#endif /* _IP_CONNTRACK_AMANDA_H */
ip_conntrack_find_get(const struct ip_conntrack_tuple *tuple,
const struct ip_conntrack *ignored_conntrack);
-extern int __ip_conntrack_confirm(struct sk_buff *skb);
+extern int __ip_conntrack_confirm(struct sk_buff **pskb);
/* Confirm a connection: returns NF_DROP if packet must be dropped. */
-static inline int ip_conntrack_confirm(struct sk_buff *skb)
+static inline int ip_conntrack_confirm(struct sk_buff **pskb)
{
- if (skb->nfct
- && !is_confirmed((struct ip_conntrack *)skb->nfct))
- return __ip_conntrack_confirm(skb);
+ if ((*pskb)->nfct
+ && !is_confirmed((struct ip_conntrack *)(*pskb)->nfct))
+ return __ip_conntrack_confirm(pskb);
return NF_ACCEPT;
}
extern struct list_head *ip_conntrack_hash;
extern struct list_head ip_conntrack_expect_list;
DECLARE_RWLOCK_EXTERN(ip_conntrack_lock);
-DECLARE_RWLOCK_EXTERN(ip_conntrack_expect_tuple_lock);
#endif /* _IP_CONNTRACK_CORE_H */
IP_CT_FTP_EPSV,
};
-/* This structure is per expected connection */
-struct ip_ct_ftp_expect
-{
- /* We record seq number and length of ftp ip/port text here: all in
- * host order. */
-
- /* sequence number of IP address in packet is in ip_conntrack_expect */
- u_int32_t len; /* length of IP address */
- enum ip_ct_ftp_type ftptype; /* PORT or PASV ? */
- u_int16_t port; /* TCP port that was to be used */
-};
-
+#define NUM_SEQ_TO_REMEMBER 2
/* This structure exists only once per master */
struct ip_ct_ftp_master {
- /* Next valid seq position for cmd matching after newline */
- u_int32_t seq_aft_nl[IP_CT_DIR_MAX];
+ /* Valid seq positions for cmd matching after newline */
+ u_int32_t seq_aft_nl[IP_CT_DIR_MAX][NUM_SEQ_TO_REMEMBER];
/* 0 means seq_match_aft_nl not set */
- int seq_aft_nl_set[IP_CT_DIR_MAX];
+ int seq_aft_nl_num[IP_CT_DIR_MAX];
};
+struct ip_conntrack_expect;
+
+/* For NAT to hook in when we find a packet which describes what other
+ * connection we should expect. */
+extern unsigned int (*ip_nat_ftp_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ enum ip_ct_ftp_type type,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp,
+ u32 *seq);
#endif /* _IP_CONNTRACK_FTP_H */
struct module;
-/* Reuse expectation when max_expected reached */
-#define IP_CT_HELPER_F_REUSE_EXPECT 0x01
-
struct ip_conntrack_helper
{
struct list_head list; /* Internal use. */
const char *name; /* name of the module */
- unsigned char flags; /* Flags (see above) */
struct module *me; /* pointer to self */
unsigned int max_expected; /* Maximum number of concurrent
* expected connections */
/* Function to call when data passes; return verdict, or -1 to
invalidate. */
- int (*help)(struct sk_buff *skb,
+ int (*help)(struct sk_buff **pskb,
struct ip_conntrack *ct,
enum ip_conntrack_info conntrackinfo);
};
extern int ip_conntrack_helper_register(struct ip_conntrack_helper *);
extern void ip_conntrack_helper_unregister(struct ip_conntrack_helper *);
-extern struct ip_conntrack_helper *ip_ct_find_helper(const struct ip_conntrack_tuple *tuple);
-
-
/* Allocate space for an expectation: this is mandatory before calling
ip_conntrack_expect_related. */
extern struct ip_conntrack_expect *ip_conntrack_expect_alloc(void);
+extern void ip_conntrack_expect_free(struct ip_conntrack_expect *exp);
+
/* Add an expected connection: can have more than one per connection */
-extern int ip_conntrack_expect_related(struct ip_conntrack_expect *exp,
- struct ip_conntrack *related_to);
-extern int ip_conntrack_change_expect(struct ip_conntrack_expect *expect,
- struct ip_conntrack_tuple *newtuple);
+extern int ip_conntrack_expect_related(struct ip_conntrack_expect *exp);
extern void ip_conntrack_unexpect_related(struct ip_conntrack_expect *exp);
#endif /*_IP_CONNTRACK_HELPER_H*/
#ifndef _IP_CONNTRACK_IRC_H
#define _IP_CONNTRACK_IRC_H
-/* We record seq number and length of irc ip/port text here: all in
- host order. */
-
-/* This structure is per expected connection */
-struct ip_ct_irc_expect
-{
- /* length of IP address */
- u_int32_t len;
- /* Port that was to be used */
- u_int16_t port;
-};
-
/* This structure exists only once per master */
struct ip_ct_irc_master {
};
-
#ifdef __KERNEL__
+extern unsigned int (*ip_nat_irc_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp);
#define IRC_PORT 6667
/* Called when a conntrack entry is destroyed */
void (*destroy)(struct ip_conntrack *conntrack);
- /* Has to decide if a expectation matches one packet or not */
- int (*exp_matches_pkt)(struct ip_conntrack_expect *exp,
- const struct sk_buff *skb);
-
int (*error)(struct sk_buff *skb, enum ip_conntrack_info *ctinfo,
unsigned int hooknum);
u_int8_t retrans; /* Number of retransmitted packets */
u_int8_t last_index; /* Index of the last packet */
u_int32_t last_seq; /* Last sequence number seen in dir */
+ u_int32_t last_ack; /* Last sequence number seen in opposite dir */
u_int32_t last_end; /* Last seq + len */
};
#define TFTP_OPCODE_ACK 4
#define TFTP_OPCODE_ERROR 5
+extern unsigned int (*ip_nat_tftp_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ struct ip_conntrack_expect *exp);
+
#endif /* _IP_CT_TFTP */
IP_NAT_MANIP_DST
};
-#ifndef CONFIG_IP_NF_NAT_LOCAL
-/* SRC manip occurs only on POST_ROUTING */
-#define HOOK2MANIP(hooknum) ((hooknum) != NF_IP_POST_ROUTING)
-#else
/* SRC manip occurs POST_ROUTING or LOCAL_IN */
#define HOOK2MANIP(hooknum) ((hooknum) != NF_IP_POST_ROUTING && (hooknum) != NF_IP_LOCAL_IN)
-#endif
#define IP_NAT_RANGE_MAP_IPS 1
#define IP_NAT_RANGE_PROTO_SPECIFIED 2
-/* Used internally by get_unique_tuple(). */
-#define IP_NAT_RANGE_FULL 4
/* NAT sequence number modifications */
struct ip_nat_seq {
union ip_conntrack_manip_proto min, max;
};
-/* A range consists of an array of 1 or more ip_nat_range */
-struct ip_nat_multi_range
+/* For backwards compat: don't use in modern code. */
+struct ip_nat_multi_range_compat
{
- unsigned int rangesize;
+ unsigned int rangesize; /* Must be 1. */
/* hangs off end. */
struct ip_nat_range range[1];
};
-/* Worst case: local-out manip + 1 post-routing, and reverse dirn. */
-#define IP_NAT_MAX_MANIPS (2*3)
-
-struct ip_nat_info_manip
-{
- /* The direction. */
- u_int8_t direction;
-
- /* Which hook the manipulation happens on. */
- u_int8_t hooknum;
-
- /* The manipulation type. */
- u_int8_t maniptype;
-
- /* Manipulations to occur at each conntrack in this dirn. */
- struct ip_conntrack_manip manip;
-};
-
#ifdef __KERNEL__
#include <linux/list.h>
#include <linux/netfilter_ipv4/lockhelp.h>
/* The structure embedded in the conntrack structure. */
struct ip_nat_info
{
- /* Set to zero when conntrack created: bitmask of maniptypes */
- u_int16_t initialized;
-
- u_int16_t num_manips;
-
- /* Manipulations to be done on this conntrack. */
- struct ip_nat_info_manip manips[IP_NAT_MAX_MANIPS];
-
- struct list_head bysource, byipsproto;
+ struct list_head bysource;
/* Helper (NULL if none). */
struct ip_nat_helper *helper;
struct ip_nat_seq seq[IP_CT_DIR_MAX];
};
+struct ip_conntrack;
+
/* Set up the info structure to map into this range. */
extern unsigned int ip_nat_setup_info(struct ip_conntrack *conntrack,
- const struct ip_nat_multi_range *mr,
+ const struct ip_nat_range *range,
unsigned int hooknum);
/* Is this tuple already taken? (not by us)*/
extern u_int16_t ip_nat_cheat_check(u_int32_t oldvalinv,
u_int32_t newval,
u_int16_t oldcheck);
+#else /* !__KERNEL__: iptables wants this to compile. */
+#define ip_nat_multi_range ip_nat_multi_range_compat
#endif /*__KERNEL__*/
#endif
extern int ip_nat_init(void);
extern void ip_nat_cleanup(void);
-extern unsigned int do_bindings(struct ip_conntrack *ct,
- enum ip_conntrack_info conntrackinfo,
- struct ip_nat_info *info,
- unsigned int hooknum,
- struct sk_buff **pskb);
+extern unsigned int nat_packet(struct ip_conntrack *ct,
+ enum ip_conntrack_info conntrackinfo,
+ unsigned int hooknum,
+ struct sk_buff **pskb);
extern int icmp_reply_translation(struct sk_buff **pskb,
- struct ip_conntrack *conntrack,
- unsigned int hooknum,
- int dir);
-
-extern void replace_in_hashes(struct ip_conntrack *conntrack,
- struct ip_nat_info *info);
-extern void place_in_hashes(struct ip_conntrack *conntrack,
- struct ip_nat_info *info);
-
+ struct ip_conntrack *ct,
+ enum ip_nat_manip_type manip,
+ enum ip_conntrack_dir dir);
#endif /* _IP_NAT_CORE_H */
struct sk_buff;
-/* Flags */
-/* NAT helper must be called on every packet (for TCP) */
-#define IP_NAT_HELPER_F_ALWAYS 0x01
-
-struct ip_nat_helper
-{
- struct list_head list; /* Internal use */
-
- const char *name; /* name of the module */
- unsigned char flags; /* Flags (see above) */
- struct module *me; /* pointer to self */
-
- /* Mask of things we will help: vs. tuple from server */
- struct ip_conntrack_tuple tuple;
- struct ip_conntrack_tuple mask;
-
- /* Helper function: returns verdict */
- unsigned int (*help)(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp,
- struct ip_nat_info *info,
- enum ip_conntrack_info ctinfo,
- unsigned int hooknum,
- struct sk_buff **pskb);
-
- /* Returns verdict and sets up NAT for this connection */
- unsigned int (*expect)(struct sk_buff **pskb,
- unsigned int hooknum,
- struct ip_conntrack *ct,
- struct ip_nat_info *info);
-};
-
-extern int ip_nat_helper_register(struct ip_nat_helper *me);
-extern void ip_nat_helper_unregister(struct ip_nat_helper *me);
-
-extern struct ip_nat_helper *
-ip_nat_find_helper(const struct ip_conntrack_tuple *tuple);
-
-extern struct ip_nat_helper *
-__ip_nat_find_helper(const struct ip_conntrack_tuple *tuple);
-
/* These return true or false. */
extern int ip_nat_mangle_tcp_packet(struct sk_buff **skb,
struct ip_conntrack *ct,
extern int ip_nat_seq_adjust(struct sk_buff **pskb,
struct ip_conntrack *ct,
enum ip_conntrack_info ctinfo);
+
+/* Setup NAT on this expected conntrack so it follows master, but goes
+ * to port ct->master->saved_proto. */
+extern void ip_nat_follow_master(struct ip_conntrack *ct,
+ struct ip_conntrack_expect *this);
#endif
/* Protocol number. */
unsigned int protonum;
- /* Do a packet translation according to the ip_nat_proto_manip
- * and manip type. Return true if succeeded. */
+ /* Translate a packet to the target according to manip type.
+ Return true if succeeded. */
int (*manip_pkt)(struct sk_buff **pskb,
unsigned int iphdroff,
- const struct ip_conntrack_manip *manip,
+ const struct ip_conntrack_tuple *tuple,
enum ip_nat_manip_type maniptype);
/* Is the manipable part of the tuple between min and max incl? */
#define IPT_LOG_TCPSEQ 0x01 /* Log TCP sequence numbers */
#define IPT_LOG_TCPOPT 0x02 /* Log TCP options */
#define IPT_LOG_IPOPT 0x04 /* Log IP options */
-#define IPT_LOG_MASK 0x07
+#define IPT_LOG_UID 0x08 /* Log UID owning local socket */
+#define IPT_LOG_MASK 0x0f
struct ipt_log_info {
unsigned char level;
#ifndef _IPT_MARK_H_target
#define _IPT_MARK_H_target
+/* Version 0 */
struct ipt_mark_target_info {
unsigned long mark;
};
+/* Version 1 */
+enum {
+ IPT_MARK_SET=0,
+ IPT_MARK_AND,
+ IPT_MARK_OR
+};
+
+struct ipt_mark_target_info_v1 {
+ unsigned long mark;
+ u_int8_t mode;
+};
#endif /*_IPT_MARK_H_target*/
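The three v1 modes are presumably applied along these lines in the target module (a sketch of the intent only, not the actual implementation):

/* Sketch: how a MARK target might combine a packet's nfmark with info->mark */
static unsigned long example_new_mark(unsigned long nfmark,
				      const struct ipt_mark_target_info_v1 *info)
{
	switch (info->mode) {
	case IPT_MARK_SET:
		return info->mark;		/* overwrite the mark */
	case IPT_MARK_AND:
		return nfmark & info->mark;	/* clear bits */
	case IPT_MARK_OR:
		return nfmark | info->mark;	/* set bits */
	}
	return nfmark;
}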
#define IPT_CONNTRACK_STATUS 0x40
#define IPT_CONNTRACK_EXPIRES 0x80
+/* This is exposed to userspace, so remains frozen in time. */
+struct ip_conntrack_old_tuple
+{
+ struct {
+ __u32 ip;
+ union {
+ __u16 all;
+ } u;
+ } src;
+
+ struct {
+ __u32 ip;
+ union {
+ __u16 all;
+ } u;
+
+ /* The protocol. */
+ u16 protonum;
+ } dst;
+};
+
struct ipt_conntrack_info
{
unsigned int statemask, statusmask;
- struct ip_conntrack_tuple tuple[IP_CT_DIR_MAX];
+ struct ip_conntrack_old_tuple tuple[IP_CT_DIR_MAX];
struct in_addr sipmsk[IP_CT_DIR_MAX], dipmsk[IP_CT_DIR_MAX];
unsigned long expires_min, expires_max;
u_int8_t count; /* Number of ports */
u_int16_t ports[IPT_MULTI_PORTS]; /* Ports */
};
+
+struct ipt_multiport_v1
+{
+ u_int8_t flags; /* Type of comparison */
+ u_int8_t count; /* Number of ports */
+ u_int16_t ports[IPT_MULTI_PORTS]; /* Ports */
+ u_int8_t pflags[IPT_MULTI_PORTS]; /* Port flags */
+ u_int8_t invert; /* Invert flag */
+};
#endif /*_IPT_MULTIPORT_H*/
#define IP6T_LOG_TCPSEQ 0x01 /* Log TCP sequence numbers */
#define IP6T_LOG_TCPOPT 0x02 /* Log TCP options */
#define IP6T_LOG_IPOPT 0x04 /* Log IP options */
-#define IP6T_LOG_MASK 0x07
+#define IP6T_LOG_UID 0x08 /* Log UID owning local socket */
+#define IP6T_LOG_MASK 0x0f
struct ip6t_log_info {
unsigned char level;
int (*read) (struct nfs_read_data *);
int (*write) (struct nfs_write_data *);
int (*commit) (struct nfs_write_data *);
- struct inode * (*create) (struct inode *, struct qstr *,
+ struct inode * (*create) (struct inode *, struct dentry *,
struct iattr *, int);
int (*remove) (struct inode *, struct qstr *);
int (*unlink_setup) (struct rpc_message *,
*
* int first_node(mask) Number lowest set bit, or MAX_NUMNODES
* int next_node(node, mask) Next node past 'node', or MAX_NUMNODES
+ * int first_unset_node(mask) First node not set in mask, or
+ * MAX_NUMNODES.
*
* nodemask_t nodemask_of_node(node) Return nodemask with bit 'node' set
* NODE_MASK_ALL Initializer - all bits set
bitmap_shift_left(dstp->bits, srcp->bits, n, nbits);
}
-#define first_node(src) __first_node(&(src), MAX_NUMNODES)
-static inline int __first_node(const nodemask_t *srcp, int nbits)
+/* FIXME: better would be to fix all architectures to never return
+          > MAX_NUMNODES, then the silly min_t()s could be dropped. */
+
+#define first_node(src) __first_node(&(src))
+static inline int __first_node(const nodemask_t *srcp)
{
- return min_t(int, nbits, find_first_bit(srcp->bits, nbits));
+ return min_t(int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES));
}
-#define next_node(n, src) __next_node((n), &(src), MAX_NUMNODES)
-static inline int __next_node(int n, const nodemask_t *srcp, int nbits)
+#define next_node(n, src) __next_node((n), &(src))
+static inline int __next_node(int n, const nodemask_t *srcp)
{
- return min_t(int, nbits, find_next_bit(srcp->bits, nbits, n+1));
+ return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
}
#define nodemask_of_node(node) \
m; \
})
+#define first_unset_node(mask) __first_unset_node(&(mask))
+static inline int __first_unset_node(const nodemask_t *maskp)
+{
+ return min_t(int,MAX_NUMNODES,
+ find_first_zero_bit(maskp->bits, MAX_NUMNODES));
+}
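For instance (a sketch only; node_set() is assumed from the surrounding nodemask API), handing out the next unused node id could look like:

static int example_alloc_node(nodemask_t *used)
{
	int node = first_unset_node(*used);	/* MAX_NUMNODES if all are taken */

	if (node < MAX_NUMNODES)
		node_set(node, *used);
	return node;
}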
+
#define NODE_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(MAX_NUMNODES)
#if MAX_NUMNODES <= BITS_PER_LONG
extern int parport_pc_claim_resources(struct parport *p);
-extern void parport_pc_init_state(struct pardevice *, struct parport_state *s);
-
-extern void parport_pc_save_state(struct parport *p, struct parport_state *s);
-
-extern void parport_pc_restore_state(struct parport *p, struct parport_state *s);
-
/* PCMCIA code will want to get us to look at a port. Provide a mechanism. */
extern struct parport *parport_pc_probe_port (unsigned long base,
unsigned long base_hi,
--- /dev/null
+/*
+ * File: pcieport_if.h
+ * Purpose: PCI Express Port Bus Driver's IF Data Structure
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#ifndef _PCIEPORT_IF_H_
+#define _PCIEPORT_IF_H_
+
+/* Port Type */
+#define PCIE_RC_PORT 4 /* Root port of RC */
+#define PCIE_SW_UPSTREAM_PORT 5 /* Upstream port of Switch */
+#define PCIE_SW_DOWNSTREAM_PORT 6 /* Downstream port of Switch */
+#define PCIE_ANY_PORT 7
+
+/* Service Type */
+#define PCIE_PORT_SERVICE_PME 1 /* Power Management Event */
+#define PCIE_PORT_SERVICE_AER 2 /* Advanced Error Reporting */
+#define PCIE_PORT_SERVICE_HP 4 /* Native Hotplug */
+#define PCIE_PORT_SERVICE_VC 8 /* Virtual Channel */
+
+/* Root/Upstream/Downstream Port's Interrupt Mode */
+#define PCIE_PORT_INTx_MODE 0
+#define PCIE_PORT_MSI_MODE 1
+#define PCIE_PORT_MSIX_MODE 2
+
+struct pcie_port_service_id {
+ __u32 vendor, device; /* Vendor and device ID or PCI_ANY_ID*/
+ __u32 subvendor, subdevice; /* Subsystem ID's or PCI_ANY_ID */
+ __u32 class, class_mask; /* (class,subclass,prog-if) triplet */
+ __u32 port_type, service_type; /* Port Entity */
+ kernel_ulong_t driver_data;
+};
+
+struct pcie_device {
+ int irq; /* Service IRQ/MSI/MSI-X Vector */
+ int interrupt_mode; /* [0:INTx | 1:MSI | 2:MSI-X] */
+ struct pcie_port_service_id id; /* Service ID */
+ struct pci_dev *port; /* Root/Upstream/Downstream Port */
+ void *priv_data; /* Service Private Data */
+ struct device device; /* Generic Device Interface */
+};
+#define to_pcie_device(d) container_of(d, struct pcie_device, device)
+
+static inline void set_service_data(struct pcie_device *dev, void *data)
+{
+ dev->priv_data = data;
+}
+
+static inline void* get_service_data(struct pcie_device *dev)
+{
+ return dev->priv_data;
+}
+
+struct pcie_port_service_driver {
+ const char *name;
+ int (*probe) (struct pcie_device *dev,
+ const struct pcie_port_service_id *id);
+ void (*remove) (struct pcie_device *dev);
+ int (*suspend) (struct pcie_device *dev, u32 state);
+ int (*resume) (struct pcie_device *dev);
+
+ const struct pcie_port_service_id *id_table;
+ struct device_driver driver;
+};
+#define to_service_driver(d) \
+ container_of(d, struct pcie_port_service_driver, driver)
+
+extern int pcie_port_service_register(struct pcie_port_service_driver *new);
+extern void pcie_port_service_unregister(struct pcie_port_service_driver *new);
+
+#endif /* _PCIEPORT_IF_H_ */
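To make the interface concrete, a minimal and purely hypothetical service driver might look like the following (PCI_ANY_ID is assumed from <linux/pci.h>; the names and the choice of the PME service are illustrative):

static int example_pme_probe(struct pcie_device *dev,
			     const struct pcie_port_service_id *id)
{
	set_service_data(dev, NULL);	/* no private state in this sketch */
	return 0;
}

static void example_pme_remove(struct pcie_device *dev)
{
}

static struct pcie_port_service_id example_pme_ids[] = {
	{
		.vendor		= PCI_ANY_ID,
		.device		= PCI_ANY_ID,
		.port_type	= PCIE_ANY_PORT,
		.service_type	= PCIE_PORT_SERVICE_PME,
	},
	{ /* terminating entry */ }
};

static struct pcie_port_service_driver example_pme_driver = {
	.name		= "example_pme",
	.id_table	= example_pme_ids,
	.probe		= example_pme_probe,
	.remove		= example_pme_remove,
};
/* registration: pcie_port_service_register(&example_pme_driver); */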
#define _LINUX_PIPE_FS_I_H
#define PIPEFS_MAGIC 0x50495045
+
+#define PIPE_BUFFERS (16)
+
+struct pipe_buffer {
+ struct page *page;
+ unsigned int offset, len;
+ struct pipe_buf_operations *ops;
+};
+
+struct pipe_buf_operations {
+ int can_merge;
+ void * (*map)(struct file *, struct pipe_inode_info *, struct pipe_buffer *);
+ void (*unmap)(struct pipe_inode_info *, struct pipe_buffer *);
+ void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
+};
+
struct pipe_inode_info {
wait_queue_head_t wait;
- char *base;
- unsigned int len;
+ unsigned int nrbufs, curbuf;
+ struct pipe_buffer bufs[PIPE_BUFFERS];
+ struct page *tmp_page;
unsigned int start;
unsigned int readers;
unsigned int writers;
#define PIPE_FASYNC_READERS(inode) (&((inode).i_pipe->fasync_readers))
#define PIPE_FASYNC_WRITERS(inode) (&((inode).i_pipe->fasync_writers))
-#define PIPE_EMPTY(inode) (PIPE_LEN(inode) == 0)
-#define PIPE_FULL(inode) (PIPE_LEN(inode) == PIPE_SIZE)
-#define PIPE_FREE(inode) (PIPE_SIZE - PIPE_LEN(inode))
-#define PIPE_END(inode) ((PIPE_START(inode) + PIPE_LEN(inode)) & (PIPE_SIZE-1))
-#define PIPE_MAX_RCHUNK(inode) (PIPE_SIZE - PIPE_START(inode))
-#define PIPE_MAX_WCHUNK(inode) (PIPE_SIZE - PIPE_END(inode))
-
/* Drop the inode semaphore and wait for a pipe event, atomically */
void pipe_wait(struct inode * inode);
struct inode* pipe_new(struct inode* inode);
+void free_pipe_info(struct inode* inode);
#endif
#include <linux/config.h>
#include <linux/linkage.h>
-#define preempt_count() (current_thread_info()->preempt_count)
+#ifdef CONFIG_DEBUG_PREEMPT
+ extern void fastcall add_preempt_count(int val);
+ extern void fastcall sub_preempt_count(int val);
+#else
+# define add_preempt_count(val) do { preempt_count() += (val); } while (0)
+# define sub_preempt_count(val) do { preempt_count() -= (val); } while (0)
+#endif
-#define inc_preempt_count() \
-do { \
- preempt_count()++; \
-} while (0)
+#define inc_preempt_count() add_preempt_count(1)
+#define dec_preempt_count() sub_preempt_count(1)
-#define dec_preempt_count() \
-do { \
- preempt_count()--; \
-} while (0)
+#define preempt_count() (current_thread_info()->preempt_count)
#ifdef CONFIG_PREEMPT
#ifdef CONFIG_REISERFS_FS_POSIX_ACL
struct posix_acl * reiserfs_get_acl(struct inode *inode, int type);
-int reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl);
int reiserfs_acl_chmod (struct inode *inode);
int reiserfs_inherit_default_acl (struct inode *dir, struct dentry *dentry, struct inode *inode);
int reiserfs_cache_default_acl (struct inode *dir);
extern struct reiserfs_xattr_handler posix_acl_access_handler;
#else
-#define reiserfs_set_acl NULL
#define reiserfs_get_acl NULL
#define reiserfs_cache_default_acl(inode) 0
*/
typedef struct sctp_abort_chunk {
sctp_chunkhdr_t uh;
-} __attribute__((packed)) sctp_abort_chunkt_t;
+} __attribute__((packed)) sctp_abort_chunk_t;
/* For the graceful shutdown we must carry the tag (in common header)
#define SERIAL_RSA_BAUD_BASE (921600)
#define SERIAL_RSA_BAUD_BASE_LO (SERIAL_RSA_BAUD_BASE / 8)
+/*
+ * Extra serial register definitions for the internal UARTs
+ * in TI OMAP processors.
+ */
+#define UART_OMAP_MDR1 0x08 /* Mode definition register */
+#define UART_OMAP_MDR2 0x09 /* Mode definition register 2 */
+#define UART_OMAP_SCR 0x10 /* Supplementary control register */
+#define UART_OMAP_SSR 0x11 /* Supplementary status register */
+#define UART_OMAP_EBLR 0x12 /* BOF length register */
+#define UART_OMAP_OSC_12M_SEL 0x13 /* OMAP1510 12MHz osc select */
+#define UART_OMAP_MVER 0x14 /* Module version register */
+#define UART_OMAP_SYSC 0x15 /* System configuration register */
+#define UART_OMAP_SYSS 0x16 /* System status register */
+
#endif /* _LINUX_SERIAL_REG_H */
#include <linux/config.h>
+extern void cpu_idle(void);
+
#ifdef CONFIG_SMP
#include <linux/preempt.h>
/*
* These macros fold the SMP functionality into a single CPU system
*/
-
-#define smp_processor_id() 0
+
+#if !defined(__smp_processor_id) || !defined(CONFIG_PREEMPT)
+# define smp_processor_id() 0
+#endif
#define hard_smp_processor_id() 0
#define smp_threads_ready 1
#define smp_call_function(func,info,retry,wait) ({ 0; })
#endif /* !SMP */
+/*
+ * DEBUG_PREEMPT support: check whether smp_processor_id() is being
+ * used in a preemption-safe way.
+ *
+ * An architecture has to enable this debugging code explicitly.
+ * It can do so by renaming the smp_processor_id() macro to
+ * __smp_processor_id(). This should only be done after some minimal
+ * testing, because usually there are a number of false positives
+ * that an architecture will trigger.
+ *
+ * To fix a false positive (i.e. a use of smp_processor_id() that the
+ * debugging code reports but which is in fact legal),
+ * change the smp_processor_id() reference to _smp_processor_id(),
+ * which is the nondebug variant. NOTE: don't use this to hack around
+ * real bugs.
+ */
+#ifdef __smp_processor_id
+# if defined(CONFIG_PREEMPT) && defined(CONFIG_DEBUG_PREEMPT)
+ extern unsigned int smp_processor_id(void);
+# else
+# define smp_processor_id() __smp_processor_id()
+# endif
+# define _smp_processor_id() __smp_processor_id()
+#else
+# define _smp_processor_id() smp_processor_id()
+#endif
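A sketch of what the comment above describes; the per-arch definition and the helper are illustrative, not lifted from any particular port:

/* In the architecture's <asm/smp.h>, after some testing, the port would rename: */
#define __smp_processor_id()	(current_thread_info()->cpu)

/* A use that is legal despite preemption (say, a purely statistical hint)
 * can then switch to the raw variant to silence the debug check: */
static inline int example_cpu_hint(void)
{
	return _smp_processor_id();
}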
+
#define get_cpu() ({ preempt_disable(); smp_processor_id(); })
#define put_cpu() preempt_enable()
#define put_cpu_no_resched() preempt_enable_no_resched()
/*
* Sony Programmable I/O Control Device driver for VAIO
*
- * Copyright (C) 2001-2004 Stelian Pop <stelian@popies.net>
+ * Copyright (C) 2001-2005 Stelian Pop <stelian@popies.net>
*
+ * Copyright (C) 2005 Narayanan R S <nars@kadamba.org>
+ *
* Copyright (C) 2001-2002 Alcôve <www.alcove.com>
*
* Copyright (C) 2001 Michael Ashley <m.ashley@unsw.edu.au>
#define SONYPI_IOCGBLUE _IOR('v', 8, __u8)
#define SONYPI_IOCSBLUE _IOW('v', 9, __u8)
+/* get/set fan state on/off */
+#define SONYPI_IOCGFAN _IOR('v', 10, __u8)
+#define SONYPI_IOCSFAN _IOW('v', 11, __u8)
+
+/* get temperature (C) */
+#define SONYPI_IOCGTEMP _IOR('v', 12, __u8)
+
#ifdef __KERNEL__
/* used only for communication between v4l and sonypi */
#include <linux/timer.h>
#include <linux/sunrpc/types.h>
+#include <linux/spinlock.h>
#include <linux/wait.h>
+#include <linux/workqueue.h>
#include <linux/sunrpc/xdr.h>
/*
struct rpc_cred * rpc_cred; /* Credentials */
};
+struct rpc_wait_queue;
+struct rpc_wait {
+ struct list_head list; /* wait queue links */
+ struct list_head links; /* Links to related tasks */
+ wait_queue_head_t waitq; /* sync: sleep on this q */
+ struct rpc_wait_queue * rpc_waitq; /* RPC wait queue we're on */
+};
+
/*
* This is the RPC task struct
*/
struct rpc_task {
- struct list_head tk_list; /* wait queue links */
#ifdef RPC_DEBUG
unsigned long tk_magic; /* 0xf00baa */
#endif
struct rpc_clnt * tk_client; /* RPC client */
struct rpc_rqst * tk_rqstp; /* RPC request */
int tk_status; /* result of last operation */
- struct rpc_wait_queue * tk_rpcwait; /* RPC wait queue we're on */
/*
* RPC call state
* you have a pathological interest in kernel oopses.
*/
struct timer_list tk_timer; /* kernel timer */
- wait_queue_head_t tk_wait; /* sync: sleep on this q */
unsigned long tk_timeout; /* timeout for rpc_sleep() */
unsigned short tk_flags; /* misc flags */
unsigned char tk_active : 1;/* Task has been activated */
unsigned char tk_priority : 2;/* Task priority */
unsigned long tk_runstate; /* Task run status */
- struct list_head tk_links; /* links to related tasks */
+ struct workqueue_struct *tk_workqueue; /* Normally rpciod, but could
+ * be any workqueue
+ */
+ union {
+ struct work_struct tk_work; /* Async task work queue */
+ struct rpc_wait tk_wait; /* RPC wait */
+ } u;
#ifdef RPC_DEBUG
unsigned short tk_pid; /* debugging aid */
#endif
/* support walking a list of tasks on a wait queue */
#define task_for_each(task, pos, head) \
list_for_each(pos, head) \
- if ((task=list_entry(pos, struct rpc_task, tk_list)),1)
+ if ((task=list_entry(pos, struct rpc_task, u.tk_wait.list)),1)
#define task_for_first(task, head) \
if (!list_empty(head) && \
- ((task=list_entry((head)->next, struct rpc_task, tk_list)),1))
+ ((task=list_entry((head)->next, struct rpc_task, u.tk_wait.list)),1))
/* .. and walking list of all tasks */
#define alltask_for_each(task, pos, head) \
#define RPC_IS_SOFT(t) ((t)->tk_flags & RPC_TASK_SOFT)
#define RPC_TASK_UNINTERRUPTIBLE(t) ((t)->tk_flags & RPC_TASK_NOINTR)
-#define RPC_TASK_SLEEPING 0
-#define RPC_TASK_RUNNING 1
-#define RPC_IS_SLEEPING(t) (test_bit(RPC_TASK_SLEEPING, &(t)->tk_runstate))
-#define RPC_IS_RUNNING(t) (test_bit(RPC_TASK_RUNNING, &(t)->tk_runstate))
+#define RPC_TASK_RUNNING 0
+#define RPC_TASK_QUEUED 1
+#define RPC_TASK_WAKEUP 2
+#define RPC_TASK_HAS_TIMER 3
+#define RPC_IS_RUNNING(t) (test_bit(RPC_TASK_RUNNING, &(t)->tk_runstate))
#define rpc_set_running(t) (set_bit(RPC_TASK_RUNNING, &(t)->tk_runstate))
-#define rpc_clear_running(t) (clear_bit(RPC_TASK_RUNNING, &(t)->tk_runstate))
+#define rpc_test_and_set_running(t) \
+ (test_and_set_bit(RPC_TASK_RUNNING, &(t)->tk_runstate))
+#define rpc_clear_running(t) \
+ do { \
+ smp_mb__before_clear_bit(); \
+ clear_bit(RPC_TASK_RUNNING, &(t)->tk_runstate); \
+ smp_mb__after_clear_bit(); \
+ } while (0)
-#define rpc_set_sleeping(t) (set_bit(RPC_TASK_SLEEPING, &(t)->tk_runstate))
+#define RPC_IS_QUEUED(t) (test_bit(RPC_TASK_QUEUED, &(t)->tk_runstate))
+#define rpc_set_queued(t) (set_bit(RPC_TASK_QUEUED, &(t)->tk_runstate))
+#define rpc_clear_queued(t) \
+ do { \
+ smp_mb__before_clear_bit(); \
+ clear_bit(RPC_TASK_QUEUED, &(t)->tk_runstate); \
+ smp_mb__after_clear_bit(); \
+ } while (0)
-#define rpc_clear_sleeping(t) \
+#define rpc_start_wakeup(t) \
+ (test_and_set_bit(RPC_TASK_WAKEUP, &(t)->tk_runstate) == 0)
+#define rpc_finish_wakeup(t) \
do { \
smp_mb__before_clear_bit(); \
- clear_bit(RPC_TASK_SLEEPING, &(t)->tk_runstate); \
+ clear_bit(RPC_TASK_WAKEUP, &(t)->tk_runstate); \
smp_mb__after_clear_bit(); \
- } while(0)
+ } while (0)
/*
* Task priorities.
* RPC synchronization objects
*/
struct rpc_wait_queue {
+ spinlock_t lock;
struct list_head tasks[RPC_NR_PRIORITY]; /* task queue for each priority level */
unsigned long cookie; /* cookie of last task serviced */
unsigned char maxpriority; /* maximum priority (0 if queue is not a priority queue) */
#ifndef RPC_DEBUG
# define RPC_WAITQ_INIT(var,qname) { \
+ .lock = SPIN_LOCK_UNLOCKED, \
.tasks = { \
[0] = LIST_HEAD_INIT(var.tasks[0]), \
[1] = LIST_HEAD_INIT(var.tasks[1]), \
}
#else
# define RPC_WAITQ_INIT(var,qname) { \
+ .lock = SPIN_LOCK_UNLOCKED, \
.tasks = { \
[0] = LIST_HEAD_INIT(var.tasks[0]), \
[1] = LIST_HEAD_INIT(var.tasks[1]), \
int rpc_execute(struct rpc_task *);
void rpc_run_child(struct rpc_task *parent, struct rpc_task *child,
rpc_action action);
-int rpc_add_wait_queue(struct rpc_wait_queue *, struct rpc_task *);
-void rpc_remove_wait_queue(struct rpc_task *);
void rpc_init_priority_wait_queue(struct rpc_wait_queue *, const char *);
void rpc_init_wait_queue(struct rpc_wait_queue *, const char *);
void rpc_sleep_on(struct rpc_wait_queue *, struct rpc_task *,
rpc_action action, rpc_action timer);
-void rpc_add_timer(struct rpc_task *, rpc_action);
void rpc_wake_up_task(struct rpc_task *);
void rpc_wake_up(struct rpc_wait_queue *);
struct rpc_task *rpc_wake_up_next(struct rpc_wait_queue *);
void rpc_wake_up_status(struct rpc_wait_queue *, int);
void rpc_delay(struct rpc_task *, unsigned long);
void * rpc_malloc(struct rpc_task *, size_t);
-void rpc_free(struct rpc_task *);
int rpciod_up(void);
void rpciod_down(void);
void rpciod_wake_up(void);
u32 * xdr_decode_string_inplace(u32 *p, char **sp, int *lenp, int maxlen);
u32 * xdr_encode_netobj(u32 *p, const struct xdr_netobj *);
u32 * xdr_decode_netobj(u32 *p, struct xdr_netobj *);
-u32 * xdr_decode_netobj_fixed(u32 *p, void *obj, unsigned int len);
void xdr_encode_pages(struct xdr_buf *, struct page **, unsigned int,
unsigned int);
return iov->iov_len = ((u8 *) p - (u8 *) iov->iov_base);
}
-void xdr_shift_iovec(struct kvec *, int, size_t);
-
/*
* Maximum number of iov's we use.
*/
/*
* XDR buffer helper functions
*/
-extern int xdr_kmap(struct kvec *, struct xdr_buf *, size_t);
-extern void xdr_kunmap(struct xdr_buf *, size_t);
extern void xdr_shift_buf(struct xdr_buf *, size_t);
-extern void _copy_from_pages(char *, struct page **, size_t, size_t);
extern void xdr_buf_from_iov(struct kvec *, struct xdr_buf *);
extern int xdr_buf_subsegment(struct xdr_buf *, struct xdr_buf *, int, int);
extern int xdr_buf_read_netobj(struct xdr_buf *, struct xdr_netobj *, int);
})
#endif
-static inline int __next_node_with_cpus(int node)
-{
- do
- ++node;
- while (node < numnodes && !nr_cpus_node(node));
- return node;
-}
-
-#define for_each_node_with_cpus(node) \
- for (node = 0; node < numnodes; node = __next_node_with_cpus(node))
+#define for_each_node_with_cpus(node) \
+ for_each_online_node(node) \
+ if (nr_cpus_node(node))
#ifndef node_distance
-#define node_distance(from,to) ((from) != (to))
+/* Conform to ACPI 2.0 SLIT distance definitions */
+#define LOCAL_DISTANCE 10
+#define REMOTE_DISTANCE 20
+#define node_distance(from,to) ((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
#endif
#ifndef PENALTY_FOR_NODE_WITH_CPUS
#define PENALTY_FOR_NODE_WITH_CPUS (1)
.max_interval = 4, \
.busy_factor = 64, \
.imbalance_pct = 125, \
- .cache_hot_time = (5*1000/2), \
+ .cache_hot_time = (5*1000000/2), \
.cache_nice_tries = 1, \
.per_cpu_gain = 100, \
.flags = SD_LOAD_BALANCE \
| SD_BALANCE_NEWIDLE \
| SD_BALANCE_EXEC \
| SD_WAKE_AFFINE \
+ | SD_WAKE_IDLE \
| SD_WAKE_BALANCE, \
.last_balance = jiffies, \
.balance_interval = 1, \
#include <linux/if_tr.h>
#ifdef __KERNEL__
-extern int tr_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, void *daddr,
- void *saddr, unsigned len);
-extern int tr_rebuild_header(struct sk_buff *skb);
extern unsigned short tr_type_trans(struct sk_buff *skb, struct net_device *dev);
extern void tr_source_route(struct sk_buff *skb, struct trh_hdr *trh, struct net_device *dev);
extern struct net_device *alloc_trdev(int sizeof_priv);
#ifndef _UDF_FS_SB_H
#define _UDF_FS_SB_H 1
+#include <asm/semaphore.h>
+
#pragma pack(1)
#define UDF_MAX_BLOCK_LOADED 8
/* VAT inode */
struct inode *s_vat;
+
+ struct semaphore s_alloc_sem;
};
#endif /* _UDF_FS_SB_H */
#include <net/sock.h>
#include <linux/ip.h>
-struct udp_opt {
- int pending; /* Any pending frames ? */
- unsigned int corkflag; /* Cork is required */
- __u16 encap_type; /* Is this an Encapsulation socket? */
+struct udp_sock {
+ /* inet_sock has to be the first member */
+ struct inet_sock inet;
+ int pending; /* Any pending frames ? */
+ unsigned int corkflag; /* Cork is required */
+ __u16 encap_type; /* Is this an Encapsulation socket? */
/*
  * Following member retains the information to create a UDP header
* when the socket is uncorked.
*/
- __u16 len; /* total length of pending frames */
-};
-
-/* WARNING: don't change the layout of the members in udp_sock! */
-struct udp_sock {
- struct sock sk;
-#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
- struct ipv6_pinfo *pinet6;
-#endif
- struct inet_opt inet;
- struct udp_opt udp;
+ __u16 len; /* total length of pending frames */
};
-static inline struct udp_opt * udp_sk(const struct sock *__sk)
+static inline struct udp_sock *udp_sk(const struct sock *sk)
{
- return &((struct udp_sock *)__sk)->udp;
+ return (struct udp_sock *)sk;
}
#endif
#define USB_DEVICE_B_HNP_ENABLE 3 /* dev may initiate HNP */
#define USB_DEVICE_A_HNP_SUPPORT 4 /* RH port supports HNP */
#define USB_DEVICE_A_ALT_HNP_SUPPORT 5 /* other RH port does */
+#define USB_DEVICE_DEBUG_MODE 6 /* (special devices only) */
#define USB_ENDPOINT_HALT 0 /* IN/OUT will STALL */
__u8 bLength;
__u8 bDescriptorType;
- __u16 bcdUSB;
+ __le16 bcdUSB;
__u8 bDeviceClass;
__u8 bDeviceSubClass;
__u8 bDeviceProtocol;
__u8 bMaxPacketSize0;
- __u16 idVendor;
- __u16 idProduct;
- __u16 bcdDevice;
+ __le16 idVendor;
+ __le16 idProduct;
+ __le16 bcdDevice;
__u8 iManufacturer;
__u8 iProduct;
__u8 iSerialNumber;
__u8 bLength;
__u8 bDescriptorType;
- __u16 wTotalLength;
+ __le16 wTotalLength;
__u8 bNumInterfaces;
__u8 bConfigurationValue;
__u8 iConfiguration;
__u8 bEndpointAddress;
__u8 bmAttributes;
- __u16 wMaxPacketSize;
+ __le16 wMaxPacketSize;
__u8 bInterval;
// NOTE: these two are _only_ in audio endpoints.
__u8 bLength;
__u8 bDescriptorType;
- __u16 bcdUSB;
+ __le16 bcdUSB;
__u8 bDeviceClass;
__u8 bDeviceSubClass;
__u8 bDeviceProtocol;
/*-------------------------------------------------------------------------*/
+/* USB_DT_DEBUG: for special highspeed devices, replacing serial console */
+struct usb_debug_descriptor {
+ __u8 bLength;
+ __u8 bDescriptorType;
+
+ /* bulk endpoints with 8 byte maxpacket */
+ __u8 bDebugInEndpoint;
+ __u8 bDebugOutEndpoint;
+};
+
+/*-------------------------------------------------------------------------*/
+
/* USB_DT_INTERFACE_ASSOCIATION: groups interfaces */
struct usb_interface_assoc_descriptor {
__u8 bLength;
#define TUNER_MICROTUNE_4042FI5 49 /* FusionHDTV 3 Gold - 4042 FI5 (3X 8147) */
#define TUNER_TCL_2002N 50
#define TUNER_PHILIPS_FM1256_IH3 51
+#define TUNER_THOMSON_DTT7610 52
#define NOTUNER 0
#define PAL 1 /* PAL_BG */
#define HITACHI 9
#define Panasonic 10
#define TCL 11
+#define THOMSON 12
#define TUNER_SET_TYPE _IOW('t',1,int) /* set tuner type */
#define TUNER_SET_TVFREQ _IOW('t',2,int) /* set tv freq */
#define TDA9887_SET_CONFIG _IOW('t',5,int)
/* tv card specific */
# define TDA9887_PRESENT (1<<0)
-# define TDA9887_PORT1 (1<<1)
-# define TDA9887_PORT2 (1<<2)
+# define TDA9887_PORT1_INACTIVE (1<<1)
+# define TDA9887_PORT2_INACTIVE (1<<2)
# define TDA9887_QSS (1<<3)
# define TDA9887_INTERCARRIER (1<<4)
+# define TDA9887_PORT1_ACTIVE (1<<5)
+# define TDA9887_PORT2_ACTIVE (1<<6)
/* config options */
# define TDA9887_DEEMPHASIS_MASK (3<<16)
# define TDA9887_DEEMPHASIS_NONE (1<<16)
--- /dev/null
+struct tveeprom {
+ u32 has_radio;
+
+ u32 tuner_type;
+ u32 tuner_formats;
+
+ u32 digitizer;
+ u32 digitizer_formats;
+
+ u32 audio_processor;
+ /* a_p_fmts? */
+
+ u32 model;
+ u32 revision;
+ u32 serial_number;
+ char rev_str[5];
+};
+
+void tveeprom_hauppauge_analog(struct tveeprom *tvee,
+ unsigned char *eeprom_data);
+
+int tveeprom_read(struct i2c_client *c, unsigned char *eedata, int len);
+int tveeprom_dump(unsigned char *eedata, int len);
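
Editor's note: the new tveeprom.h exposes a small helper API: a driver reads the configuration EEPROM over I2C and hands the raw bytes to the Hauppauge parser, which fills in tuner, model and serial information. A rough sketch of the intended call order; the 256-byte buffer size and the 0-on-success return convention of tveeprom_read() are assumptions for the example:

/* Sketch: decode a Hauppauge-style EEPROM from an already-probed client. */
static void example_tveeprom_probe(struct i2c_client *client)
{
	struct tveeprom tv;
	unsigned char ee[256];

	if (tveeprom_read(client, ee, sizeof(ee)) != 0)
		return;				/* read failed */
	tveeprom_hauppauge_analog(&tv, ee);	/* parse Hauppauge layout */
	/* tv.tuner_type, tv.model, tv.serial_number are now filled in */
}
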
struct dvb_net net;
};
-int videobuf_dvb_register(struct videobuf_dvb *dvb);
+int videobuf_dvb_register(struct videobuf_dvb *dvb,
+ struct module *module,
+ void *adapter_priv);
void videobuf_dvb_unregister(struct videobuf_dvb *dvb);
/*
/*
- * $Id: mtd-abi.h,v 1.6 2004/08/09 13:38:30 dwmw2 Exp $
+ * $Id: mtd-abi.h,v 1.7 2004/11/23 15:37:32 gleixner Exp $
*
* Portions of MTD ABI definition which are shared by kernel and user space
*/
#define MTD_XIP 32 // eXecute-In-Place possible
#define MTD_OOB 64 // Out-of-band data (NAND flash)
#define MTD_ECC 128 // Device capable of automatic ECC
+#define MTD_NO_VIRTBLOCKS 256 // Virtual blocks not allowed
// Some common devices / combinations of capabilities
#define MTD_CAP_ROM 0
extern int tcf_unregister_action(struct tc_action_ops *a);
extern void tcf_action_destroy(struct tc_action *a, int bind);
extern int tcf_action_exec(struct sk_buff *skb, struct tc_action *a, struct tcf_result *res);
-extern int tcf_action_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,char *n, int ovr, int bind);
-extern int tcf_action_init_1(struct rtattr *rta, struct rtattr *est, struct tc_action *a,char *n, int ovr, int bind);
+extern struct tc_action *tcf_action_init(struct rtattr *rta, struct rtattr *est, char *n, int ovr, int bind, int *err);
+extern struct tc_action *tcf_action_init_1(struct rtattr *rta, struct rtattr *est, char *n, int ovr, int bind, int *err);
extern int tcf_action_dump(struct sk_buff *skb, struct tc_action *a, int, int);
extern int tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int, int);
extern int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int, int);
extern int tcf_action_copy_stats (struct sk_buff *,struct tc_action *);
-extern int tcf_act_police_locate(struct rtattr *rta, struct rtattr *est,struct tc_action *,int , int );
-extern int tcf_act_police_dump(struct sk_buff *, struct tc_action *, int, int);
-extern int tcf_act_police(struct sk_buff **skb, struct tc_action *a);
#endif /* CONFIG_NET_CLS_ACT */
extern int tcf_police(struct sk_buff *skb, struct tcf_police *p);
extern void ax25_destroy_socket(ax25_cb *);
extern ax25_cb *ax25_create_cb(void);
extern void ax25_fillin_cb(ax25_cb *, ax25_dev *);
-extern int ax25_create(struct socket *, int);
extern struct sock *ax25_make_new(struct sock *, struct ax25_dev *);
/* ax25_addr.c */
extern void ax25_ds_nr_error_recovery(ax25_cb *);
extern void ax25_ds_enquiry_response(ax25_cb *);
extern void ax25_ds_establish_data_link(ax25_cb *);
-extern void ax25_dev_dama_on(ax25_dev *);
extern void ax25_dev_dama_off(ax25_dev *);
extern void ax25_dama_on(ax25_cb *);
extern void ax25_dama_off(ax25_cb *);
extern void dn_start_slow_timer(struct sock *sk);
extern void dn_stop_slow_timer(struct sock *sk);
-extern void dn_start_fast_timer(struct sock *sk);
-extern void dn_stop_fast_timer(struct sock *sk);
extern dn_address decnet_address;
extern int decnet_debug_level;
extern void dn_fib_init(void);
extern void dn_fib_cleanup(void);
-extern int dn_fib_rt_message(struct sk_buff *skb);
extern int dn_fib_ioctl(struct socket *sock, unsigned int cmd,
unsigned long arg);
extern struct dn_fib_info *dn_fib_create_info(const struct rtmsg *r,
/* Move into dst.h ? */
extern int xrlim_allow(struct dst_entry *dst, int timeout);
-struct raw_opt {
- struct icmp_filter filter;
-};
-
-struct ipv6_pinfo;
-
-/* WARNING: don't change the layout of the members in raw_sock! */
struct raw_sock {
- struct sock sk;
-#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
- struct ipv6_pinfo *pinet6;
-#endif
- struct inet_opt inet;
- struct raw_opt raw4;
+ /* inet_sock has to be the first member */
+ struct inet_sock inet;
+ struct icmp_filter filter;
};
-#define raw4_sk(__sk) (&((struct raw_sock *)__sk)->raw4)
+static inline struct raw_sock *raw_sk(const struct sock *sk)
+{
+ return (struct raw_sock *)sk;
+}
#endif /* _ICMP_H */
{
buf[0] = 0x00;
}
+
+static inline void ipv6_ib_mc_map(struct in6_addr *addr, char *buf)
+{
+ buf[0] = 0; /* Reserved */
+ buf[1] = 0xff; /* Multicast QPN */
+ buf[2] = 0xff;
+ buf[3] = 0xff;
+ buf[4] = 0xff;
+ buf[5] = 0x12; /* link local scope */
+ buf[6] = 0x60; /* IPv6 signature */
+ buf[7] = 0x1b;
+ buf[8] = 0; /* P_Key */
+ buf[9] = 0;
+ memcpy(buf + 10, addr->s6_addr + 6, 10);
+}
#endif
#endif
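
Editor's note: ipv6_ib_mc_map() builds the 20-byte IPoIB multicast hardware address for an IPv6 group: a reserved byte, the 0xffffff multicast QPN, scope and IPv6-signature bytes, a zero P_Key placeholder, and the low-order 80 bits of the group address. A trivial hedged sketch of its use; the wrapper name is hypothetical:

/* Sketch: 'haddr' must point at at least 20 bytes of storage. */
static void example_ipv6_ib_map(struct in6_addr *group, char *haddr)
{
	ipv6_ib_mc_map(group, haddr);
	/* haddr now holds QPN, scope/signature bytes, P_Key and group bits */
}
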
/* Exported by fib_frontend.c */
extern void ip_fib_init(void);
-extern void fib_flush(void);
extern int inet_rtm_delroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg);
extern int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg);
extern int inet_rtm_getroute(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg);
extern int inet_rtm_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg);
extern int inet_rtm_newrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg);
extern int inet_dump_rules(struct sk_buff *skb, struct netlink_callback *cb);
-extern u32 fib_rules_map_destination(u32 daddr, struct fib_result *res);
#ifdef CONFIG_NET_CLS_ROUTE
extern u32 fib_rules_tclass(struct fib_result *res);
#endif
* (from ip_vs_core.c)
*/
extern const char *ip_vs_proto_name(unsigned proto);
-extern unsigned int check_for_ip_vs_out(struct sk_buff **skb_p,
- int (*okfn)(struct sk_buff *));
extern void ip_vs_init_hash_table(struct list_head *table, int rows);
#define IP_VS_INIT_HASH_TABLE(t) ip_vs_init_hash_table(t, sizeof(t)/sizeof(t[0]))
/* The following are initdata: */
-extern int ic_enable; /* Enable or disable the whole shebang */
-
extern int ic_proto_enabled; /* Protocols enabled (see IC_xxx) */
-extern int ic_host_name_set; /* Host name set by ipconfig? */
extern int ic_set_manually; /* IPconfig parameters set manually */
extern u32 ic_myaddr; /* My IP address */
-extern u32 ic_netmask; /* Netmask for local subnet */
extern u32 ic_gateway; /* Gateway IP address */
extern u32 ic_servaddr; /* Boot server IP address */
extern u8 root_server_path[]; /* Path to mount as root */
-
-/* The following are persistent (not initdata): */
-
-extern int ic_proto_used; /* Protocol used, if any */
-extern u32 ic_nameserver; /* DNS server IP address */
-extern u8 ic_domain[]; /* DNS (not NIS) domain name */
-
/* bits in ic_proto_{enabled,used} */
#define IC_PROTO 0xFF /* Protocols mask: */
#define IC_BOOTP 0x01 /* BOOTP (or DHCP, see below) */
} last_hop;
};
-struct ipx_opt {
+#include <net/sock.h>
+
+struct ipx_sock {
+ /* struct sock has to be the first member of ipx_sock */
+ struct sock sk;
struct ipx_address dest_addr;
struct ipx_interface *intrfc;
unsigned short port;
unsigned short ipx_ncp_conn;
};
-#define ipx_sk(__sk) ((struct ipx_opt *)(__sk)->sk_protinfo)
+static inline struct ipx_sock *ipx_sk(struct sock *sk)
+{
+ return (struct ipx_sock *)sk;
+}
+
#define IPX_SKB_CB(__skb) ((struct ipx_cb *)&((__skb)->cb[0]))
#endif
+
#define IPX_MIN_EPHEMERAL_SOCKET 0x4000
#define IPX_MAX_EPHEMERAL_SOCKET 0x7fff
ipxitf_down(intrfc);
}
-extern void __ipxitf_down(struct ipx_interface *intrfc);
-
-static __inline__ void __ipxitf_put(struct ipx_interface *intrfc)
-{
- if (atomic_dec_and_test(&intrfc->refcnt))
- __ipxitf_down(intrfc);
-}
-
static __inline__ void ipxrtr_hold(struct ipx_route *rt)
{
atomic_inc(&rt->refcnt);
};
extern char *ircomm_state[];
-extern char *ircomm_event[];
struct ircomm_cb; /* Forward decl. */
#define IRCOMM_LMP_H
#include <net/irda/ircomm_core.h>
-#include <net/irda/ircomm_event.h>
int ircomm_open_lsap(struct ircomm_cb *self);
-int ircomm_lmp_connect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info);
-int ircomm_lmp_connect_response(struct ircomm_cb *self, struct sk_buff *skb);
-int ircomm_lmp_disconnect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info);
-int ircomm_lmp_data_request(struct ircomm_cb *self, struct sk_buff *skb,
- int clen);
-int ircomm_lmp_control_request(struct ircomm_cb *self,
- struct sk_buff *userdata);
-int ircomm_lmp_data_indication(void *instance, void *sap,
- struct sk_buff *skb);
-void ircomm_lmp_connect_confirm(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb);
-void ircomm_lmp_connect_indication(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb);
-void ircomm_lmp_disconnect_indication(void *instance, void *sap,
- LM_REASON reason,
- struct sk_buff *skb);
#endif
struct ircomm_tty_cb; /* Forward decl. */
-int ircomm_param_flush(struct ircomm_tty_cb *self);
int ircomm_param_request(struct ircomm_tty_cb *self, __u8 pi, int flush);
extern pi_param_info_t ircomm_param_info;
#define IRCOMM_TTP_H
#include <net/irda/ircomm_core.h>
-#include <net/irda/ircomm_event.h>
int ircomm_open_tsap(struct ircomm_cb *self);
-int ircomm_ttp_connect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info);
-int ircomm_ttp_connect_response(struct ircomm_cb *self, struct sk_buff *skb);
-int ircomm_ttp_disconnect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info);
-int ircomm_ttp_data_request(struct ircomm_cb *self, struct sk_buff *skb,
- int clen);
-int ircomm_ttp_control_request(struct ircomm_cb *self,
- struct sk_buff *userdata);
-int ircomm_ttp_data_indication(void *instance, void *sap,
- struct sk_buff *skb);
-void ircomm_ttp_connect_confirm(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb);
-void ircomm_ttp_connect_indication(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb);
-void ircomm_ttp_disconnect_indication(void *instance, void *sap,
- LM_REASON reason,
- struct sk_buff *skb);
-void ircomm_ttp_flow_indication(void *instance, void *sap, LOCAL_FLOW cmd);
#endif
-
-
-
};
void ircomm_tty_start(struct tty_struct *tty);
-void ircomm_tty_stop(struct tty_struct *tty);
void ircomm_tty_check_modem_status(struct ircomm_tty_cb *self);
-extern void ircomm_tty_change_speed(struct ircomm_tty_cb *self);
extern int ircomm_tty_tiocmget(struct tty_struct *tty, struct file *file);
extern int ircomm_tty_tiocmset(struct tty_struct *tty, struct file *file,
unsigned int set, unsigned int clear);
};
extern char *ircomm_state[];
-extern char *ircomm_event[];
extern char *ircomm_tty_state[];
int ircomm_tty_do_event(struct ircomm_tty_cb *self, IRCOMM_TTY_EVENT event,
int iriap_getvaluebyclass_request(struct iriap_cb *self,
__u32 saddr, __u32 daddr,
char *name, char *attr);
-void iriap_getvaluebyclass_confirm(struct iriap_cb *self, struct sk_buff *skb);
void iriap_connect_request(struct iriap_cb *self);
void iriap_send_ack( struct iriap_cb *self);
void iriap_call_indication(struct iriap_cb *self, struct sk_buff *skb);
void iriap_register_server(void);
-void iriap_watchdog_timer_expired(void *data);
-
-static inline void iriap_start_watchdog_timer(struct iriap_cb *self,
- int timeout)
-{
- irda_start_timer(&self->watchdog_timer, timeout, self,
- iriap_watchdog_timer_expired);
-}
-
#endif
#include <net/irda/irias_object.h>
#include <net/irda/irlan_event.h>
-void irlan_client_start_kick_timer(struct irlan_cb *self, int timeout);
void irlan_client_discovery_indication(discinfo_t *, DISCOVERY_MODE, void *);
void irlan_client_wakeup(struct irlan_cb *self, __u32 saddr, __u32 daddr);
-void irlan_client_open_ctrl_tsap( struct irlan_cb *self);
-
void irlan_client_parse_response(struct irlan_cb *self, struct sk_buff *skb);
void irlan_client_get_value_confirm(int result, __u16 obj_id,
struct ias_value *value, void *priv);
struct timer_list watchdog_timer;
};
-struct irlan_cb *irlan_open(__u32 saddr, __u32 daddr);
void irlan_close(struct irlan_cb *self);
void irlan_close_tsaps(struct irlan_cb *self);
struct irlan_cb *irlan_get_any(void);
void irlan_get_provider_info(struct irlan_cb *self);
-void irlan_get_unicast_addr(struct irlan_cb *self);
void irlan_get_media_char(struct irlan_cb *self);
void irlan_open_data_channel(struct irlan_cb *self);
void irlan_close_data_channel(struct irlan_cb *self);
void irlan_set_multicast_filter(struct irlan_cb *self, int status);
void irlan_set_broadcast_filter(struct irlan_cb *self, int status);
-void irlan_open_unicast_addr(struct irlan_cb *self);
int irlan_insert_byte_param(struct sk_buff *skb, char *param, __u8 value);
int irlan_insert_short_param(struct sk_buff *skb, char *param, __u16 value);
int irlap_generate_rand_time_slot(int S, int s);
void irlap_initiate_connection_state(struct irlap_cb *);
void irlap_flush_all_queues(struct irlap_cb *);
-void irlap_change_speed(struct irlap_cb *self, __u32 speed, int now);
void irlap_wait_min_turn_around(struct irlap_cb *, struct qos_info *);
-void irlap_init_qos_capabilities(struct irlap_cb *, struct qos_info *);
void irlap_apply_default_connection_parameters(struct irlap_cb *self);
void irlap_apply_connection_parameters(struct irlap_cb *self, int now);
void irlap_resend_rejected_frames(struct irlap_cb *, int command);
void irlap_resend_rejected_frame(struct irlap_cb *self, int command);
-void irlap_send_i_frame(struct irlap_cb *, struct sk_buff *, int command);
void irlap_send_ui_frame(struct irlap_cb *self, struct sk_buff *skb,
__u8 caddr, int command);
int irttp_disconnect_request(struct tsap_cb *self, struct sk_buff *skb,
int priority);
void irttp_flow_request(struct tsap_cb *self, LOCAL_FLOW flow);
-void irttp_status_indication(void *instance,
- LINK_STATUS link, LOCK_STATUS lock);
-void irttp_flow_indication(void *instance, void *sap, LOCAL_FLOW flow);
struct tsap_cb *irttp_dup(struct tsap_cb *self, void *instance);
static __inline __u32 irttp_get_saddr(struct tsap_cb *self)
} pi_param_info_t;
int irda_param_pack(__u8 *buf, char *fmt, ...);
-int irda_param_unpack(__u8 *buf, char *fmt, ...);
int irda_param_insert(void *self, __u8 pi, __u8 *buf, int len,
pi_param_info_t *info);
-int irda_param_extract(void *self, __u8 *buf, int len, pi_param_info_t *info);
int irda_param_extract_all(void *self, __u8 *buf, int len,
pi_param_info_t *info);
void irda_qos_compute_intersection(struct qos_info *, struct qos_info *);
__u32 irlap_max_line_capacity(__u32 speed, __u32 max_turn_time);
-__u32 irlap_requested_line_capacity(struct qos_info *qos);
void irda_qos_bits_to_value(struct qos_info *qos);
* Those may be called only within the kernel.
*/
-/* Data needed by fs/compat_ioctl.c for 32->64 bit conversion */
-extern const char iw_priv_type_size[];
-
/* First : function strictly used inside the kernel */
/* Handle /proc/net/wireless, called in net/code/dev.c */
extern int llc_conn_ac_disc_ind(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_rst_ind(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_rst_confirm(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_report_status(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_clear_remote_busy_if_f_eq_1(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_stop_rej_tmr_if_data_flag_eq_2(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_send_dm_rsp_f_set_1(struct sock* sk,
struct sk_buff *skb);
-extern int llc_conn_ac_send_dm_rsp_f_set_f_flag(struct sock* sk,
- struct sk_buff *skb);
extern int llc_conn_ac_send_frmr_rsp_f_set_x(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_resend_frmr_rsp_f_set_0(struct sock* sk,
extern int llc_conn_ac_resend_frmr_rsp_f_set_p(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_send_i_cmd_p_set_1(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_send_i_cmd_p_set_0(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_resend_i_cmd_p_set_1(struct sock* sk,
- struct sk_buff *skb);
-extern int llc_conn_ac_resend_i_cmd_p_set_1_or_send_rr(struct sock* sk,
- struct sk_buff *skb);
extern int llc_conn_ac_send_i_xxx_x_set_0(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_resend_i_xxx_x_set_0(struct sock* sk,
struct sk_buff *skb);
struct sk_buff *skb);
extern int llc_conn_ac_send_rr_cmd_p_set_1(struct sock* sk,
struct sk_buff *skb);
-extern int llc_conn_ac_send_ack_cmd_p_set_1(struct sock* sk,
- struct sk_buff *skb);
extern int llc_conn_ac_send_rr_rsp_f_set_1(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_send_ack_rsp_f_set_1(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_send_sabme_cmd_p_set_x(struct sock* sk,
struct sk_buff *skb);
-extern int llc_conn_ac_send_ua_rsp_f_set_f_flag(struct sock* sk,
- struct sk_buff *skb);
extern int llc_conn_ac_send_ua_rsp_f_set_p(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_set_s_flag_0(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_set_data_flag_1_if_data_flag_eq_0(struct sock* sk,
struct sk_buff *skb);
extern int llc_conn_ac_set_p_flag_0(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_set_p_flag_1(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_set_remote_busy_0(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_set_retry_cnt_0(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_set_cause_flag_0(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_set_vs_nr(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_rst_vs(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_upd_vs(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_set_f_flag_p(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_disc(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_reset(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_disc_confirm(struct sock* sk, struct sk_buff *skb);
extern u8 llc_circular_between(u8 a, u8 b, u8 c);
extern int llc_conn_ac_send_ack_if_needed(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_inc_npta_value(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_adjust_npta_by_rr(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_adjust_npta_by_rnr(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_rst_sendack_flag(struct sock* sk, struct sk_buff *skb);
-extern int llc_conn_ac_send_rr_rsp_f_set_ackpf(struct sock* sk,
- struct sk_buff *skb);
-extern int llc_conn_ac_send_i_rsp_f_set_ackpf(struct sock* sk,
- struct sk_buff *skb);
extern int llc_conn_ac_send_i_rsp_as_ack(struct sock* sk, struct sk_buff *skb);
extern int llc_conn_ac_send_i_as_ack(struct sock* sk, struct sk_buff *skb);
typedef int (*llc_conn_ev_qfyr_t)(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_conn_req(struct sock *sk, struct sk_buff *skb);
-extern int llc_conn_ev_conn_resp(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_data_req(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_disc_req(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_rst_req(struct sock *sk, struct sk_buff *skb);
-extern int llc_conn_ev_rst_resp(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_local_busy_detected(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_local_busy_cleared(struct sock *sk, struct sk_buff *skb);
struct sk_buff *skb);
extern int llc_conn_ev_rx_xxx_rsp_fbit_set_x(struct sock *sk,
struct sk_buff *skb);
-extern int llc_conn_ev_rx_xxx_yyy(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_rx_zzz_cmd_pbit_set_x_inval_nr(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_rx_zzz_rsp_fbit_set_x_inval_nr(struct sock *sk,
extern int llc_conn_ev_ack_tmr_exp(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_rej_tmr_exp(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_busy_tmr_exp(struct sock *sk, struct sk_buff *skb);
-extern int llc_conn_ev_any_tmr_exp(struct sock *sk, struct sk_buff *skb);
extern int llc_conn_ev_sendack_tmr_exp(struct sock *sk, struct sk_buff *skb);
/* NOT_USED functions and their variations */
extern int llc_conn_ev_rx_xxx_cmd_pbit_set_1(struct sock *sk,
struct sk_buff *skb);
-extern int llc_conn_ev_rx_xxx_cmd_pbit_set_0(struct sock *sk,
- struct sk_buff *skb);
extern int llc_conn_ev_rx_xxx_rsp_fbit_set_1(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_rx_i_cmd_pbit_set_0_unexpd_ns(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_qlfy_cause_flag_eq_0(struct sock *sk,
struct sk_buff *skb);
-extern int llc_conn_ev_qlfy_init_p_f_cycle(struct sock *sk,
- struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_conn(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_disc(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_failed(struct sock *sk,
struct sk_buff *skb);
-extern int llc_conn_ev_qlfy_set_status_impossible(struct sock *sk,
- struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_remote_busy(struct sock *sk,
struct sk_buff *skb);
-extern int llc_conn_ev_qlfy_set_status_received(struct sock *sk,
- struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_refuse(struct sock *sk,
struct sk_buff *skb);
extern int llc_conn_ev_qlfy_set_status_conflict(struct sock *sk,
extern void llc_sk_free(struct sock *sk);
extern void llc_sk_reset(struct sock *sk);
-extern int llc_sk_init(struct sock *sk);
/* Access to a connection */
extern int llc_conn_state_process(struct sock *sk, struct sk_buff *skb);
extern struct sock *llc_lookup_established(struct llc_sap *sap,
struct llc_addr *daddr,
struct llc_addr *laddr);
-extern struct sock *llc_lookup_listener(struct llc_sap *sap,
- struct llc_addr *laddr);
extern void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk);
extern void llc_sap_remove_socket(struct llc_sap *sap, struct sock *sk);
extern void llc_pdu_set_cmd_rsp(struct sk_buff *skb, u8 type);
extern void llc_pdu_set_pf_bit(struct sk_buff *skb, u8 bit_value);
extern void llc_pdu_decode_pf_bit(struct sk_buff *skb, u8 *pf_bit);
-extern void llc_pdu_decode_cr_bit(struct sk_buff *skb, u8 *cr_bit);
extern void llc_pdu_init_as_disc_cmd(struct sk_buff *skb, u8 p_bit);
extern void llc_pdu_init_as_i_cmd(struct sk_buff *skb, u8 p_bit, u8 ns, u8 nr);
extern void llc_pdu_init_as_rej_cmd(struct sk_buff *skb, u8 p_bit, u8 nr);
struct llc_sap;
struct sk_buff;
-extern void llc_sap_state_process(struct llc_sap *sap, struct sk_buff *skb);
extern void llc_sap_rtn_pdu(struct llc_sap *sap, struct sk_buff *skb);
extern void llc_save_primitive(struct sk_buff* skb, unsigned char prim);
extern struct sk_buff *llc_alloc_frame(void);
#ifdef CONFIG_NET_ACT_INIT
static inline struct tcf_st *
-tcf_hash_check(struct tc_st *parm, struct tc_action *a, int ovr, int bind)
+tcf_hash_check(u32 index, struct tc_action *a, int ovr, int bind)
{
struct tcf_st *p = NULL;
- if (parm->index && (p = tcf_hash_lookup(parm->index)) != NULL) {
- spin_lock(&p->lock);
+ if (index && (p = tcf_hash_lookup(index)) != NULL) {
if (bind) {
p->bindcnt++;
p->refcnt++;
}
- spin_unlock(&p->lock);
- a->priv = (void *) p;
+ a->priv = p;
}
return p;
}
static inline struct tcf_st *
-tcf_hash_create(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+tcf_hash_create(u32 index, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
{
- unsigned h;
struct tcf_st *p = NULL;
p = kmalloc(size, GFP_KERNEL);
spin_lock_init(&p->lock);
p->stats_lock = &p->lock;
- p->index = parm->index ? : tcf_hash_new_index();
+ p->index = index ? : tcf_hash_new_index();
p->tm.install = jiffies;
p->tm.lastuse = jiffies;
#ifdef CONFIG_NET_ESTIMATOR
if (est)
gen_new_estimator(&p->bstats, &p->rate_est, p->stats_lock, est);
#endif
- h = tcf_hash(p->index);
- write_lock_bh(&tcf_t_lock);
- p->next = tcf_ht[h];
- tcf_ht[h] = p;
- write_unlock_bh(&tcf_t_lock);
-
a->priv = (void *) p;
return p;
}
-static inline struct tcf_st *
-tcf_hash_init(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+static inline void tcf_hash_insert(struct tcf_st *p)
{
- struct tcf_st *p = tcf_hash_check (parm,a,ovr,bind);
+ unsigned h = tcf_hash(p->index);
- if (!p)
- p = tcf_hash_create(parm, est, a, size, ovr, bind);
- return p;
+ write_lock_bh(&tcf_t_lock);
+ p->next = tcf_ht[h];
+ tcf_ht[h] = p;
+ write_unlock_bh(&tcf_t_lock);
}
#endif
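
Editor's note: tcf_hash_init() is split so that lookup (tcf_hash_check), allocation (tcf_hash_create) and publication into the hash (tcf_hash_insert) are separate steps, and the tcf_action_init*() entry points now return the action and report errors through *err. An illustrative sketch only, not the in-tree code, of the flow an action's init routine might follow with the split helpers; the function name and the sizeof(*p) size are assumptions:

static int example_act_init(u32 index, struct rtattr *est,
			    struct tc_action *a, int ovr, int bind)
{
	struct tcf_st *p;

	p = tcf_hash_check(index, a, ovr, bind);
	if (!p) {
		p = tcf_hash_create(index, est, a, sizeof(*p), ovr, bind);
		if (!p)
			return -ENOMEM;
		/* ... fill in action-specific fields ... */
		tcf_hash_insert(p);	/* publish only once fully set up */
	}
	return 0;
}
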
extern void rose_destroy_socket(struct sock *);
/* rose_dev.c */
-extern int rose_rx_ip(struct sk_buff *, struct net_device *);
extern void rose_setup(struct net_device *);
/* rose_in.c */
/* rose_link.c */
extern void rose_start_ftimer(struct rose_neigh *);
-extern void rose_start_t0timer(struct rose_neigh *);
extern void rose_stop_ftimer(struct rose_neigh *);
extern void rose_stop_t0timer(struct rose_neigh *);
extern int rose_ftimer_running(struct rose_neigh *);
-extern int rose_t0timer_running(struct rose_neigh *);
extern void rose_link_rx_restart(struct sk_buff *, struct rose_neigh *, unsigned short);
-extern void rose_transmit_restart_request(struct rose_neigh *);
-extern void rose_transmit_restart_confirmation(struct rose_neigh *);
-extern void rose_transmit_diagnostic(struct rose_neigh *, unsigned char);
extern void rose_transmit_clear_request(struct rose_neigh *, unsigned int, unsigned char, unsigned char);
extern void rose_transmit_link(struct sk_buff *, struct rose_neigh *);
extern struct net_device *rose_dev_first(void);
extern struct net_device *rose_dev_get(rose_address *);
extern struct rose_route *rose_route_free_lci(unsigned int, struct rose_neigh *);
-extern struct net_device *rose_ax25_dev_get(char *);
extern struct rose_neigh *rose_get_neigh(rose_address *, unsigned char *, unsigned char *);
extern int rose_rt_ioctl(unsigned int, void __user *);
extern void rose_link_failed(ax25_cb *, int);
extern void rose_write_internal(struct sock *, int);
extern int rose_decode(struct sk_buff *, int *, int *, int *, int *, int *);
extern int rose_parse_facilities(unsigned char *, struct rose_facilities_struct *);
-extern int rose_create_facilities(unsigned char *, rose_cb *);
extern void rose_disconnect(struct sock *, int, int, int);
/* rose_timer.c */
} sctp_cmd_seq_t;
-/* Create a new sctp_command_sequence.
- * Return NULL if creating a new sequence fails.
- */
-sctp_cmd_seq_t *sctp_new_cmd_seq(int gfp);
-
/* Initialize a block of memory as a command sequence.
* Return 0 if the initialization fails.
*/
*/
int sctp_add_cmd(sctp_cmd_seq_t *seq, sctp_verb_t verb, sctp_arg_t obj);
-/* Rewind an sctp_cmd_seq_t to iterate from the start.
- * Return 0 if the rewind fails.
- */
-int sctp_rewind_sequence(sctp_cmd_seq_t *seq);
-
/* Return the next command structure in an sctp_cmd_seq.
* Return NULL at the end of the sequence.
*/
sctp_cmd_t *sctp_next_cmd(sctp_cmd_seq_t *seq);
-/* Dispose of a command sequence. */
-void sctp_free_cmd_seq(sctp_cmd_seq_t *seq);
-
#endif /* __net_sctp_command_h__ */
typedef enum {
SCTP_EVENT_NO_PENDING_TSN = 0,
+ SCTP_EVENT_ICMP_PROTO_UNREACH,
} sctp_event_other_t;
-#define SCTP_EVENT_OTHER_MAX SCTP_EVENT_NO_PENDING_TSN
+#define SCTP_EVENT_OTHER_MAX SCTP_EVENT_ICMP_PROTO_UNREACH
#define SCTP_NUM_OTHER_TYPES (SCTP_EVENT_OTHER_MAX + 1)
/* These are primitive requests from the ULP. */
- (unsigned long)(c->chunk_hdr)\
- sizeof(sctp_data_chunk_t)))
-/* This is a table of printable names of sctp_param_t's. */
-extern const char *sctp_param_tbl[];
-
-
#define SCTP_MAX_ERROR_CAUSE SCTP_ERROR_NONEXIST_IP
#define SCTP_NUM_ERROR_CAUSE 10
SCTP_IERROR_IGNORE_TSN,
SCTP_IERROR_NO_DATA,
SCTP_IERROR_BAD_STREAM,
+ SCTP_IERROR_BAD_PORTS,
} sctp_ierror_t;
sctp_state_fn_t sctp_sf_do_ecn_cwr;
sctp_state_fn_t sctp_sf_do_ecne;
sctp_state_fn_t sctp_sf_ootb;
-sctp_state_fn_t sctp_sf_shut_8_4_5;
sctp_state_fn_t sctp_sf_pdiscard;
sctp_state_fn_t sctp_sf_violation;
+sctp_state_fn_t sctp_sf_violation_chunklen;
sctp_state_fn_t sctp_sf_discard_chunk;
sctp_state_fn_t sctp_sf_do_5_2_1_siminit;
sctp_state_fn_t sctp_sf_do_5_2_2_dupinit;
sctp_state_fn_t sctp_sf_unk_chunk;
sctp_state_fn_t sctp_sf_do_8_5_1_E_sa;
sctp_state_fn_t sctp_sf_cookie_echoed_err;
-sctp_state_fn_t sctp_sf_do_5_2_6_stale;
sctp_state_fn_t sctp_sf_do_asconf;
sctp_state_fn_t sctp_sf_do_asconf_ack;
sctp_state_fn_t sctp_sf_do_9_2_reshutack;
sctp_state_fn_t sctp_sf_do_9_2_start_shutdown;
sctp_state_fn_t sctp_sf_do_9_2_shutdown_ack;
sctp_state_fn_t sctp_sf_ignore_other;
+sctp_state_fn_t sctp_sf_cookie_wait_icmp_abort;
/* Prototypes for timeout event state functions. */
sctp_state_fn_t sctp_sf_do_6_3_3_rtx;
struct sctp_chunk *sctp_make_cwr(const struct sctp_association *,
const __u32 lowest_tsn,
const struct sctp_chunk *);
-struct sctp_chunk *sctp_make_datafrag(struct sctp_association *,
- const struct sctp_sndrcvinfo *sinfo,
- int len, const __u8 *data,
- __u8 flags, __u16 ssn);
struct sctp_chunk * sctp_make_datafrag_empty(struct sctp_association *,
const struct sctp_sndrcvinfo *sinfo,
int len, const __u8 flags,
__u16 ssn);
-struct sctp_chunk *sctp_make_data(struct sctp_association *,
- const struct sctp_sndrcvinfo *sinfo,
- int len, const __u8 *data);
-struct sctp_chunk *sctp_make_data_empty(struct sctp_association *,
- const struct sctp_sndrcvinfo *, int len);
struct sctp_chunk *sctp_make_ecne(const struct sctp_association *,
const __u32);
struct sctp_chunk *sctp_make_sack(const struct sctp_association *);
struct sctp_chunk *sctp_make_abort_user(const struct sctp_association *,
const struct sctp_chunk *,
const struct msghdr *);
+struct sctp_chunk *sctp_make_abort_violation(const struct sctp_association *,
+ const struct sctp_chunk *,
+ const __u8 *,
+ const size_t );
struct sctp_chunk *sctp_make_heartbeat(const struct sctp_association *,
const struct sctp_transport *,
const void *payload,
const void *payload,
size_t paylen);
-struct sctp_chunk *sctp_make_asconf(struct sctp_association *asoc,
- union sctp_addr *addr,
- int vparam_len);
struct sctp_chunk *sctp_make_asconf_update_ip(struct sctp_association *,
union sctp_addr *,
struct sockaddr *,
int, __u16);
struct sctp_chunk *sctp_make_asconf_set_prim(struct sctp_association *asoc,
union sctp_addr *addr);
-struct sctp_chunk *sctp_make_asconf_ack(const struct sctp_association *asoc,
- __u32 serial, int vparam_len);
struct sctp_chunk *sctp_process_asconf(struct sctp_association *asoc,
struct sctp_chunk *asconf);
int sctp_process_asconf_ack(struct sctp_association *asoc,
void sctp_chunk_assign_tsn(struct sctp_chunk *);
void sctp_chunk_assign_ssn(struct sctp_chunk *);
+void sctp_stop_t1_and_abort(sctp_cmd_seq_t *commands, __u16 error);
+
/* Prototypes for statetable processing. */
int sctp_do_sm(sctp_event_t event_type, sctp_subtype_t subtype,
void *event_arg,
int gfp);
-int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
- sctp_state_t state,
- struct sctp_endpoint *,
- struct sctp_association *asoc,
- void *event_arg,
- sctp_disposition_t status,
- sctp_cmd_seq_t *commands,
- int gfp);
-
/* 2nd level prototypes */
-int sctp_cmd_interpreter(sctp_event_t, sctp_subtype_t, sctp_state_t,
- struct sctp_endpoint *, struct sctp_association *,
- void *event_arg, sctp_disposition_t,
- sctp_cmd_seq_t *retval, int gfp);
-
-
-int sctp_gen_sack(struct sctp_association *, int force, sctp_cmd_seq_t *);
void sctp_generate_t3_rtx_event(unsigned long peer);
void sctp_generate_heartbeat_event(unsigned long peer);
-sctp_sackhdr_t *sctp_sm_pull_sack(struct sctp_chunk *);
-struct sctp_packet *sctp_abort_pkt_new(const struct sctp_endpoint *,
- const struct sctp_association *,
- struct sctp_chunk *chunk,
- const void *payload,
- size_t paylen);
-struct sctp_packet *sctp_ootb_pkt_new(const struct sctp_association *,
- const struct sctp_chunk *);
void sctp_ootb_pkt_free(struct sctp_packet *);
-struct sctp_cookie_param *
-sctp_pack_cookie(const struct sctp_endpoint *, const struct sctp_association *,
- const struct sctp_chunk *, int *cookie_len,
- const __u8 *, int addrs_len);
struct sctp_association *sctp_unpack_cookie(const struct sctp_endpoint *,
const struct sctp_association *,
struct sctp_chunk *, int gfp, int *err,
struct sctp_chunk **err_chk_p);
int sctp_addip_addr_config(struct sctp_association *, sctp_param_t,
struct sockaddr_storage*, int);
-void sctp_send_stale_cookie_err(const struct sctp_endpoint *ep,
- const struct sctp_association *asoc,
- const struct sctp_chunk *chunk,
- sctp_cmd_seq_t *commands,
- struct sctp_chunk *err_chunk);
-int sctp_eat_data(const struct sctp_association *asoc,
- struct sctp_chunk *chunk,
- sctp_cmd_seq_t *commands);
/* 3rd level prototypes */
__u32 sctp_generate_tag(const struct sctp_endpoint *);
__u32 sctp_generate_tsn(const struct sctp_endpoint *);
/* Extern declarations for major data structures. */
-const sctp_sm_table_entry_t *sctp_chunk_event_lookup(sctp_cid_t, sctp_state_t);
-extern const sctp_sm_table_entry_t
-primitive_event_table[SCTP_NUM_PRIMITIVE_TYPES][SCTP_STATE_NUM_STATES];
-extern const sctp_sm_table_entry_t
-other_event_table[SCTP_NUM_OTHER_TYPES][SCTP_STATE_NUM_STATES];
-extern const sctp_sm_table_entry_t
-timeout_event_table[SCTP_NUM_TIMEOUT_TYPES][SCTP_STATE_NUM_STATES];
extern sctp_timer_event_t *sctp_timer_events[SCTP_NUM_TIMEOUT_TYPES];
-/* These are some handy utility macros... */
-
/* Get the size of a DATA chunk payload. */
static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
#include <linux/socket.h> /* linux/in.h needs this!! */
#include <linux/in.h> /* We get struct sockaddr_in. */
#include <linux/in6.h> /* We get struct in6_addr */
+#include <linux/ipv6.h>
#include <asm/param.h> /* We get MAXHOSTNAMELEN. */
#include <asm/atomic.h> /* This gets us atomic counters. */
#include <linux/skbuff.h> /* We need sk_buff_head. */
struct sctp_outq;
struct sctp_bind_addr;
struct sctp_ulpq;
-struct sctp_opt;
struct sctp_ep_common;
struct sctp_ssnmap;
} sctp_socket_type_t;
/* Per socket SCTP information. */
-struct sctp_opt {
+struct sctp_sock {
+ /* inet_sock has to be the first member of sctp_sock */
+ struct inet_sock inet;
/* What kind of a socket is this? */
sctp_socket_type_t type;
struct sk_buff_head pd_lobby;
};
+static inline struct sctp_sock *sctp_sk(const struct sock *sk)
+{
+ return (struct sctp_sock *)sk;
+}
+
+static inline struct sock *sctp_opt2sk(const struct sctp_sock *sp)
+{
+ return (struct sock *)sp;
+}
+
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+struct sctp6_sock {
+ struct sctp_sock sctp;
+ struct ipv6_pinfo inet6;
+};
+#endif /* CONFIG_IPV6 */
/* This is our APPLICATION-SPECIFIC state cookie.
/* This holds the originating address of the INIT packet. */
union sctp_addr peer_addr;
+ /* IG Section 2.35.3
+ * Include the source port of the INIT-ACK
+ */
+ __u16 my_port;
+
__u8 prsctp_capable;
+ /* Padding for future use */
+ __u8 padding;
+
__u32 adaption_ind;
+
/* This is a shim for my peer's INIT packet, followed by
* a copy of the raw address list of the association.
* The length of the raw address list is saved in the
int malloced;
};
-struct sctp_ssnmap *sctp_ssnmap_init(struct sctp_ssnmap *, __u16, __u16);
struct sctp_ssnmap *sctp_ssnmap_new(__u16 in, __u16 out, int gfp);
void sctp_ssnmap_free(struct sctp_ssnmap *map);
void sctp_ssnmap_clear(struct sctp_ssnmap *map);
int (*to_addr_param) (const union sctp_addr *,
union sctp_addr_param *);
int (*addr_valid) (union sctp_addr *,
- struct sctp_opt *);
+ struct sctp_sock *);
sctp_scope_t (*scope) (union sctp_addr *);
void (*inaddr_any) (union sctp_addr *, unsigned short);
int (*is_any) (const union sctp_addr *);
int (*available) (union sctp_addr *,
- struct sctp_opt *);
+ struct sctp_sock *);
int (*skb_iif) (const struct sk_buff *sk);
int (*is_ce) (const struct sk_buff *sk);
void (*seq_dump_addr)(struct seq_file *seq,
struct sctp_pf {
void (*event_msgname)(struct sctp_ulpevent *, char *, int *);
void (*skb_msgname) (struct sk_buff *, char *, int *);
- int (*af_supported) (sa_family_t, struct sctp_opt *);
+ int (*af_supported) (sa_family_t, struct sctp_sock *);
int (*cmp_addr) (const union sctp_addr *,
const union sctp_addr *,
- struct sctp_opt *);
- int (*bind_verify) (struct sctp_opt *, union sctp_addr *);
- int (*send_verify) (struct sctp_opt *, union sctp_addr *);
- int (*supported_addrs)(const struct sctp_opt *, __u16 *);
+ struct sctp_sock *);
+ int (*bind_verify) (struct sctp_sock *, union sctp_addr *);
+ int (*send_verify) (struct sctp_sock *, union sctp_addr *);
+ int (*supported_addrs)(const struct sctp_sock *, __u16 *);
struct sock *(*create_accept_sk) (struct sock *sk,
struct sctp_association *asoc);
- void (*addr_v4map) (struct sctp_opt *, union sctp_addr *);
+ void (*addr_v4map) (struct sctp_sock *, union sctp_addr *);
struct sctp_af *af;
};
struct sctp_datamsg *sctp_datamsg_from_user(struct sctp_association *,
struct sctp_sndrcvinfo *,
struct msghdr *, int len);
-struct sctp_datamsg *sctp_datamsg_new(int gfp);
void sctp_datamsg_put(struct sctp_datamsg *);
-void sctp_datamsg_hold(struct sctp_datamsg *);
void sctp_datamsg_free(struct sctp_datamsg *);
void sctp_datamsg_track(struct sctp_chunk *);
-void sctp_datamsg_assign(struct sctp_datamsg *, struct sctp_chunk *);
void sctp_chunk_fail(struct sctp_chunk *, int error);
int sctp_chunk_abandoned(struct sctp_chunk *);
void sctp_chunk_put(struct sctp_chunk *);
int sctp_user_addto_chunk(struct sctp_chunk *chunk, int off, int len,
struct iovec *data);
-struct sctp_chunk *sctp_make_chunk(const struct sctp_association *, __u8 type,
- __u8 flags, int size);
void sctp_chunk_free(struct sctp_chunk *);
void *sctp_addto_chunk(struct sctp_chunk *, int len, const void *data);
struct sctp_chunk *sctp_chunkify(struct sk_buff *,
/* Error count : The current error count for this destination. */
unsigned short error_count;
- /* Error : Current error threshold for this destination
- * Threshold : i.e. what value marks the destination down if
- * : errorCount reaches this value.
- */
- unsigned short error_threshold;
-
/* This is the max_retrans value for the transport and will
* be initialized to proto.max_retrans.path. This can be changed
* using SCTP_SET_PEER_ADDR_PARAMS socket option.
};
struct sctp_transport *sctp_transport_new(const union sctp_addr *, int);
-struct sctp_transport *sctp_transport_init(struct sctp_transport *,
- const union sctp_addr *, int);
void sctp_transport_set_owner(struct sctp_transport *,
struct sctp_association *);
void sctp_transport_route(struct sctp_transport *, union sctp_addr *,
- struct sctp_opt *);
+ struct sctp_sock *);
void sctp_transport_pmtu(struct sctp_transport *);
void sctp_transport_free(struct sctp_transport *);
-void sctp_transport_destroy(struct sctp_transport *);
void sctp_transport_reset_timers(struct sctp_transport *);
void sctp_transport_hold(struct sctp_transport *);
void sctp_transport_put(struct sctp_transport *);
int malloced; /* Is this structure kfree()able? */
};
-struct sctp_inq *sctp_inq_new(void);
void sctp_inq_init(struct sctp_inq *);
void sctp_inq_free(struct sctp_inq *);
void sctp_inq_push(struct sctp_inq *, struct sctp_chunk *packet);
char malloced;
};
-struct sctp_outq *sctp_outq_new(struct sctp_association *);
void sctp_outq_init(struct sctp_association *, struct sctp_outq *);
void sctp_outq_teardown(struct sctp_outq *);
void sctp_outq_free(struct sctp_outq*);
int malloced; /* Are we kfree()able? */
};
-struct sctp_bind_addr *sctp_bind_addr_new(int gfp_mask);
void sctp_bind_addr_init(struct sctp_bind_addr *, __u16 port);
void sctp_bind_addr_free(struct sctp_bind_addr *);
int sctp_bind_addr_copy(struct sctp_bind_addr *dest,
int gfp);
int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *);
int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *,
- struct sctp_opt *);
+ struct sctp_sock *);
union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr *bp,
const union sctp_addr *addrs,
int addrcnt,
- struct sctp_opt *opt);
+ struct sctp_sock *opt);
union sctp_params sctp_bind_addrs_to_raw(const struct sctp_bind_addr *bp,
int *addrs_len, int gfp);
int sctp_raw_to_bind_addrs(struct sctp_bind_addr *bp, __u8 *raw, int len,
/* These are function signatures for manipulating endpoints. */
struct sctp_endpoint *sctp_endpoint_new(struct sock *, int);
-struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *,
- struct sock *, int gfp);
void sctp_endpoint_free(struct sctp_endpoint *);
void sctp_endpoint_put(struct sctp_endpoint *);
void sctp_endpoint_hold(struct sctp_endpoint *);
int sctp_process_init(struct sctp_association *, sctp_cid_t cid,
const union sctp_addr *peer,
sctp_init_chunk_t *init, int gfp);
-int sctp_process_param(struct sctp_association *, union sctp_params param,
- const union sctp_addr *from, int gfp);
__u32 sctp_generate_tag(const struct sctp_endpoint *);
__u32 sctp_generate_tsn(const struct sctp_endpoint *);
struct sctp_association *
sctp_association_new(const struct sctp_endpoint *, const struct sock *,
sctp_scope_t scope, int gfp);
-struct sctp_association *
-sctp_association_init(struct sctp_association *, const struct sctp_endpoint *,
- const struct sock *, sctp_scope_t scope,
- int gfp);
void sctp_association_free(struct sctp_association *);
void sctp_association_put(struct sctp_association *);
void sctp_association_hold(struct sctp_association *);
struct sctp_association *new);
__u32 sctp_association_get_next_tsn(struct sctp_association *);
-__u32 sctp_association_get_tsn_block(struct sctp_association *, int);
void sctp_assoc_sync_pmtu(struct sctp_association *);
void sctp_assoc_rwnd_increase(struct sctp_association *, unsigned);
int sctp_cmp_addr_exact(const union sctp_addr *ss1,
const union sctp_addr *ss2);
struct sctp_chunk *sctp_get_ecne_prepend(struct sctp_association *asoc);
-struct sctp_chunk *sctp_get_no_prepend(struct sctp_association *asoc);
/* A convenience structure to parse out SCTP specific CMSGs. */
typedef struct sctp_cmsgs {
__u32 start;
};
-/* Create a new tsnmap. */
-struct sctp_tsnmap *sctp_tsnmap_new(__u16 len, __u32 init_tsn, int gfp);
-
-/* Dispose of a tsnmap. */
-void sctp_tsnmap_free(struct sctp_tsnmap *);
-
/* This macro assists in creation of external storage for variable length
* internal buffers. We double allocate so the overflow map works.
*/
/* Is there a gap in the TSN map? */
int sctp_tsnmap_has_gap(const struct sctp_tsnmap *);
-/* Initialize a gap ack block interator from user-provided memory. */
-void sctp_tsnmap_iter_init(const struct sctp_tsnmap *,
- struct sctp_tsnmap_iter *);
-
-/* Get the next gap ack blocks. We return 0 if there are no more
- * gap ack blocks.
- */
-int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *,
- struct sctp_tsnmap_iter *,__u16 *start, __u16 *end);
-
#endif /* __sctp_tsnmap_h__ */
return (struct sctp_ulpevent *)skb->cb;
}
-struct sctp_ulpevent *sctp_ulpevent_new(int size, int flags, int gfp);
-void sctp_ulpevent_init(struct sctp_ulpevent *, int flags);
void sctp_ulpevent_free(struct sctp_ulpevent *);
int sctp_ulpevent_is_notification(const struct sctp_ulpevent *);
void sctp_queue_purge_ulpevents(struct sk_buff_head *list);
};
/* Prototypes. */
-struct sctp_ulpq *sctp_ulpq_new(struct sctp_association *asoc, int gfp);
struct sctp_ulpq *sctp_ulpq_init(struct sctp_ulpq *,
struct sctp_association *);
void sctp_ulpq_free(struct sctp_ulpq *);
tca_gen(pedit);
unsigned char nkeys;
unsigned char flags;
- struct tc_pedit_key keys[0];
+ struct tc_pedit_key *keys;
};
#endif
#define TCP_ECN_QUEUE_CWR 2
#define TCP_ECN_DEMAND_CWR 4
-static __inline__ void
-TCP_ECN_queue_cwr(struct tcp_opt *tp)
+static inline void TCP_ECN_queue_cwr(struct tcp_sock *tp)
{
if (tp->ecn_flags&TCP_ECN_OK)
tp->ecn_flags |= TCP_ECN_QUEUE_CWR;
/* Output functions */
-static __inline__ void
-TCP_ECN_send_synack(struct tcp_opt *tp, struct sk_buff *skb)
+static inline void TCP_ECN_send_synack(struct tcp_sock *tp,
+ struct sk_buff *skb)
{
TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_CWR;
if (!(tp->ecn_flags&TCP_ECN_OK))
TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_ECE;
}
-static __inline__ void
-TCP_ECN_send_syn(struct sock *sk, struct tcp_opt *tp, struct sk_buff *skb)
+static inline void TCP_ECN_send_syn(struct sock *sk, struct tcp_sock *tp,
+ struct sk_buff *skb)
{
tp->ecn_flags = 0;
if (sysctl_tcp_ecn && !(sk->sk_route_caps & NETIF_F_TSO)) {
th->ece = 1;
}
-static __inline__ void
-TCP_ECN_send(struct sock *sk, struct tcp_opt *tp, struct sk_buff *skb, int tcp_header_len)
+static inline void TCP_ECN_send(struct sock *sk, struct tcp_sock *tp,
+ struct sk_buff *skb, int tcp_header_len)
{
if (tp->ecn_flags & TCP_ECN_OK) {
/* Not-retransmitted data segment: set ECT and inject CWR. */
/* Input functions */
-static __inline__ void
-TCP_ECN_accept_cwr(struct tcp_opt *tp, struct sk_buff *skb)
+static inline void TCP_ECN_accept_cwr(struct tcp_sock *tp, struct sk_buff *skb)
{
if (skb->h.th->cwr)
tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
}
-static __inline__ void
-TCP_ECN_withdraw_cwr(struct tcp_opt *tp)
+static inline void TCP_ECN_withdraw_cwr(struct tcp_sock *tp)
{
tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
}
-static __inline__ void
-TCP_ECN_check_ce(struct tcp_opt *tp, struct sk_buff *skb)
+static inline void TCP_ECN_check_ce(struct tcp_sock *tp, struct sk_buff *skb)
{
if (tp->ecn_flags&TCP_ECN_OK) {
if (INET_ECN_is_ce(TCP_SKB_CB(skb)->flags))
}
}
-static __inline__ void
-TCP_ECN_rcv_synack(struct tcp_opt *tp, struct tcphdr *th)
+static inline void TCP_ECN_rcv_synack(struct tcp_sock *tp, struct tcphdr *th)
{
if ((tp->ecn_flags&TCP_ECN_OK) && (!th->ece || th->cwr))
tp->ecn_flags &= ~TCP_ECN_OK;
}
-static __inline__ void
-TCP_ECN_rcv_syn(struct tcp_opt *tp, struct tcphdr *th)
+static inline void TCP_ECN_rcv_syn(struct tcp_sock *tp, struct tcphdr *th)
{
if ((tp->ecn_flags&TCP_ECN_OK) && (!th->ece || !th->cwr))
tp->ecn_flags &= ~TCP_ECN_OK;
}
-static __inline__ int
-TCP_ECN_rcv_ecn_echo(struct tcp_opt *tp, struct tcphdr *th)
+static inline int TCP_ECN_rcv_ecn_echo(struct tcp_sock *tp, struct tcphdr *th)
{
if (th->ece && !th->syn && (tp->ecn_flags&TCP_ECN_OK))
return 1;
return 0;
}
-static __inline__ void
-TCP_ECN_openreq_child(struct tcp_opt *tp, struct open_request *req)
+static inline void TCP_ECN_openreq_child(struct tcp_sock *tp,
+ struct open_request *req)
{
tp->ecn_flags = req->ecn_ok ? TCP_ECN_OK : 0;
}
struct x25_address *);
extern int x25_addr_aton(unsigned char *, struct x25_address *,
struct x25_address *);
-extern unsigned int x25_new_lci(struct x25_neigh *);
extern struct sock *x25_find_socket(unsigned int, struct x25_neigh *);
extern void x25_destroy_socket(struct sock *);
extern int x25_rx_call_request(struct sk_buff *, struct x25_neigh *, unsigned int);
/* x25_dev.c */
extern void x25_send_frame(struct sk_buff *, struct x25_neigh *);
extern int x25_lapb_receive_frame(struct sk_buff *, struct net_device *, struct packet_type *);
-extern int x25_llc_receive_frame(struct sk_buff *, struct net_device *, struct packet_type *);
extern void x25_establish_link(struct x25_neigh *);
extern void x25_terminate_link(struct x25_neigh *);
extern void x25_link_device_down(struct net_device *);
extern void x25_link_established(struct x25_neigh *);
extern void x25_link_terminated(struct x25_neigh *);
-extern void x25_transmit_restart_request(struct x25_neigh *);
-extern void x25_transmit_restart_confirmation(struct x25_neigh *);
-extern void x25_transmit_diagnostic(struct x25_neigh *, unsigned char);
extern void x25_transmit_clear_request(struct x25_neigh *, unsigned int, unsigned char);
extern void x25_transmit_link(struct sk_buff *, struct x25_neigh *);
extern int x25_subscr_ioctl(unsigned int, void __user *);
extern void x25_write_internal(struct sock *, int);
extern int x25_decode(struct sock *, struct sk_buff *, int *, int *, int *, int *, int *);
extern void x25_disconnect(struct sock *, int, unsigned char, unsigned char);
+extern int x25_check_calluserdata(struct x25_calluserdata *,struct x25_calluserdata *);
/* x25_timer.c */
extern void x25_start_heartbeat(struct sock *);
/*
- * Definitions for bulk memory services
+ * bulkmem.h -- Definitions for bulk memory services
*
- * bulkmem.h 1.12 2000/06/12 21:55:41
- *
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
- * bulkmem.h 1.3 1995/05/27 04:49:49
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_BULKMEM_H
/*
- * ciscode.h 1.56 2002/10/25 06:37:30
+ * ciscode.h -- Definitions for bulk memory services
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in
- * which case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_CISCODE_H
/*
- * cisreg.h 1.17 2000/06/12 21:55:41
+ * cisreg.h
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_CISREG_H
/*
- * cistpl.h 1.34 2000/06/19 23:18:12
+ * cistpl.h
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_CISTPL_H
/*
- * cs.h 1.71 2000/08/29 00:54:20
+ * cs.h
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_CS_H
/* For RegisterClient */
typedef struct client_reg_t {
dev_info_t *dev_info;
- u_int Attributes;
+ u_int Attributes; /* UNUSED */
u_int EventMask;
int (*event_handler)(event_t event, int priority,
event_callback_args_t *);
typedef struct irq_req_t {
u_int Attributes;
u_int AssignedIRQ;
- u_int IRQInfo1, IRQInfo2;
+ u_int IRQInfo1, IRQInfo2; /* IRQInfo2 is ignored */
void *Handler;
void *Instance;
} irq_req_t;
#define WIN_BAR_MASK 0xe000
#define WIN_BAR_SHIFT 13
-/* Attributes for RegisterClient */
+/* Attributes for RegisterClient -- UNUSED -- */
#define INFO_MASTER_CLIENT 0x01
#define INFO_IO_CLIENT 0x02
#define INFO_MTD_CLIENT 0x04
int retcode;
} error_info_t;
-/* Special stuff for binding drivers to sockets */
-typedef struct bind_req_t {
- struct pcmcia_socket *Socket;
- u_char Function;
- dev_info_t *dev_info;
-} bind_req_t;
-
/* Flag to bind to all functions */
#define BIND_FN_ALL 0xff
-typedef struct mtd_bind_t {
- struct pcmcia_socket *Socket;
- u_int Attributes;
- u_int CardOffset;
- dev_info_t *dev_info;
-} mtd_bind_t;
-
/* Events */
#define CS_EVENT_PRI_LOW 0
#define CS_EVENT_PRI_HIGH 1
GetFirstWindow, GetNextWindow, GetMemPage
};
+struct pcmcia_socket;
+
int pcmcia_access_configuration_register(client_handle_t handle, conf_reg_t *reg);
int pcmcia_deregister_client(client_handle_t handle);
int pcmcia_get_configuration_info(client_handle_t handle, config_info_t *config);
int pcmcia_insert_card(struct pcmcia_socket *skt);
int pcmcia_report_error(client_handle_t handle, error_info_t *err);
-#ifdef CONFIG_PCMCIA_OBSOLETE
-int pcmcia_get_first_client(client_handle_t *handle, client_req_t *req);
-int pcmcia_get_next_client(client_handle_t *handle, client_req_t *req);
-int pcmcia_modify_window(window_handle_t win, modwin_t *req);
-int pcmcia_set_event_mask(client_handle_t handle, eventmask_t *mask);
-#endif
+struct pcmcia_socket * pcmcia_get_socket(struct pcmcia_socket *skt);
+void pcmcia_put_socket(struct pcmcia_socket *skt);
#endif /* __KERNEL__ */
/*
- * ds.h 1.56 2000/06/12 21:55:40
+ * ds.h -- 16-bit PCMCIA core support
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
+ * (C) 2003 - 2004 Dominik Brodowski
*/
#ifndef _LINUX_DS_H
((l) && ((l->state & ~DEV_BUSY) == (DEV_CONFIG|DEV_PRESENT)))
+struct pcmcia_socket;
+
extern struct bus_type pcmcia_bus_type;
struct pcmcia_driver {
- int use_count;
dev_link_t *(*attach)(void);
void (*detach)(dev_link_t *);
struct module *owner;
int pcmcia_register_driver(struct pcmcia_driver *driver);
void pcmcia_unregister_driver(struct pcmcia_driver *driver);
+struct pcmcia_device {
+ /* the socket and the device_no [for multifunction devices]
+ uniquely define a pcmcia_device */
+ struct pcmcia_socket *socket;
+
+ u8 device_no;
+
+ /* the hardware "function" device; certain subdevices can
+ * share one hardware "function" device. */
+ u8 func;
+
+ struct list_head socket_device_list;
+
+ /* deprecated, a cleaned up version will be moved into this
+ struct soon */
+ dev_link_t *instance;
+ struct client_t {
+ u_short client_magic;
+ struct pcmcia_socket *Socket;
+ u_char Function;
+ u_int state;
+ event_t EventMask;
+ int (*event_handler) (event_t event, int priority,
+ event_callback_args_t *);
+ event_callback_args_t event_callback_args;
+ } client;
+
+ struct device dev;
+};
+
+#define to_pcmcia_dev(n) container_of(n, struct pcmcia_device, dev)
+#define to_pcmcia_drv(n) container_of(n, struct pcmcia_driver, drv)
+
+#define handle_to_pdev(handle) container_of(handle, struct pcmcia_device, client)
+#define handle_to_dev(handle) ((container_of(handle, struct pcmcia_device, client))->dev)
+
/* error reporting */
void cs_error(client_handle_t handle, int func, int ret);
/*
- * mem_op.h 1.13 2000/06/12 21:55:40
+ * mem_op.h
*
- * The contents of this file are subject to the Mozilla Public License
- * Version 1.1 (the "License"); you may not use this file except in
- * compliance with the License. You may obtain a copy of the License
- * at http://www.mozilla.org/MPL/
- *
- * Software distributed under the License is distributed on an "AS IS"
- * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
- * the License for the specific language governing rights and
- * limitations under the License.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
*
* The initial developer of the original code is David A. Hinds
* <dahinds@users.sourceforge.net>. Portions created by David A. Hinds
* are Copyright (C) 1999 David A. Hinds. All Rights Reserved.
*
- * Alternatively, the contents of this file may be used under the
- * terms of the GNU General Public License version 2 (the "GPL"), in which
- * case the provisions of the GPL are applicable instead of the
- * above. If you wish to allow the use of your version of this file
- * only under the terms of the GPL and not to allow others to use
- * your version of this file under the MPL, indicate your decision by
- * deleting the provisions above and replace them with the notice and
- * other provisions required by the GPL. If you do not delete the
- * provisions above, a recipient may use your version of this file
- * under either the MPL or the GPL.
+ * (C) 1999 David A. Hinds
*/
#ifndef _LINUX_MEM_OP_H
#else /* UNSAFE_MEMCPY */
-static inline void copy_from_pc(void *to, const void *from, size_t n)
+static inline void copy_from_pc(void *to, void __iomem *from, size_t n)
{
- size_t odd = (n & 1);
- n -= odd;
- while (n) {
- u_short *t = to;
-
- *t = __raw_readw(from);
- to = (void *)((long)to + 2);
- from = (const void *)((long)from + 2);
- n -= 2;
- }
- if (odd)
- *(u_char *)to = readb(from);
+ __u16 *t = to;
+ __u16 __iomem *f = from;
+ size_t odd = (n & 1);
+ for (n >>= 1; n; n--)
+ *t++ = __raw_readw(f++);
+ if (odd)
+ *(__u8 *)t = readb(f);
}
-static inline void copy_to_pc(void *to, const void *from, size_t n)
+static inline void copy_to_pc(void __iomem *to, const void *from, size_t n)
{
- size_t odd = (n & 1);
- n -= odd;
- while (n) {
- __raw_writew(*(u_short *)from, to);
- to = (void *)((long)to + 2);
- from = (const void *)((long)from + 2);
- n -= 2;
- }
- if (odd)
- writeb(*(u_char *)from, to);
+ __u16 __iomem *t = to;
+ const __u16 *f = from;
+ size_t odd = (n & 1);
+ for (n >>= 1; n ; n--)
+ __raw_writew(*f++, t++);
+ if (odd)
+ writeb(*(__u8 *)f, t);
}
-static inline void copy_pc_to_user(void *to, const void *from, size_t n)
+static inline void copy_pc_to_user(void __user *to, void __iomem *from, size_t n)
{
- size_t odd = (n & 1);
- n -= odd;
- while (n) {
- put_user(__raw_readw(from), (short *)to);
- to = (void *)((long)to + 2);
- from = (const void *)((long)from + 2);
- n -= 2;
- }
- if (odd)
- put_user(readb(from), (char *)to);
+ __u16 __user *t = to;
+ __u16 __iomem *f = from;
+ size_t odd = (n & 1);
+ for (n >>= 1; n ; n--)
+ put_user(__raw_readw(f++), t++);
+ if (odd)
+ put_user(readb(f), (char __user *)t);
}
-static inline void copy_user_to_pc(void *to, const void *from, size_t n)
+static inline void copy_user_to_pc(void __iomem *to, void __user *from, size_t n)
{
- short s;
- char c;
- size_t odd = (n & 1);
- n -= odd;
- while (n) {
- get_user(s, (short *)from);
- __raw_writew(s, to);
- to = (void *)((long)to + 2);
- from = (const void *)((long)from + 2);
- n -= 2;
- }
- if (odd) {
- get_user(c, (char *)from);
- writeb(c, to);
- }
+ __u16 __user *f = from;
+ __u16 __iomem *t = to;
+ short s;
+ char c;
+ size_t odd = (n & 1);
+ for (n >>= 1; n; n--) {
+ get_user(s, f++);
+ __raw_writew(s, t++);
+ }
+ if (odd) {
+ get_user(c, (char __user *)f);
+ writeb(c, t);
+ }
}
#endif /* UNSAFE_MEMCPY */
} __attribute__((packed));
-extern const char *rxrpc_acks[];
-
#endif /* _LINUX_RXRPC_PACKET_H */
struct rxrpc_message *msg,
int error);
-extern void rxrpc_clear_transport(struct rxrpc_transport *trans);
-
#endif /* _LINUX_RXRPC_TRANSPORT_H */
extern void __scsi_print_command(unsigned char *);
extern void scsi_print_sense(const char *, struct scsi_cmnd *);
extern void scsi_print_req_sense(const char *, struct scsi_request *);
+extern void __scsi_print_sense(const char *name,
+ const unsigned char *sense_buffer,
+ int sense_len);
extern void scsi_print_driverbyte(int);
extern void scsi_print_hostbyte(int);
extern void scsi_print_status(unsigned char);
#define SCSI_NO_TAG (-1) /* identify no tag in use */
+
+/**
+ * scsi_get_tag_type - get the type of tag the device supports
+ * @sdev: the scsi device
+ *
+ * Notes:
+ * If the drive only supports simple tags, returns MSG_SIMPLE_TAG;
+ * if it supports all tag types, returns MSG_ORDERED_TAG.
+ */
+static inline int scsi_get_tag_type(struct scsi_device *sdev)
+{
+ if (!sdev->tagged_supported)
+ return 0;
+ if (sdev->ordered_tags)
+ return MSG_ORDERED_TAG;
+ if (sdev->simple_tags)
+ return MSG_SIMPLE_TAG;
+ return 0;
+}
+
+static inline void scsi_set_tag_type(struct scsi_device *sdev, int tag)
+{
+ switch (tag) {
+ case MSG_ORDERED_TAG:
+ sdev->ordered_tags = 1;
+ /* fall through */
+ case MSG_SIMPLE_TAG:
+ sdev->simple_tags = 1;
+ break;
+ case 0:
+ /* fall through */
+ default:
+ sdev->ordered_tags = 0;
+ sdev->simple_tags = 0;
+ break;
+ }
+}
/**
* scsi_activate_tcq - turn on tag command queueing
* @SDpnt: device to turn on TCQ for
**/
static inline void scsi_activate_tcq(struct scsi_device *sdev, int depth)
{
- if (sdev->tagged_supported) {
- if (!blk_queue_tagged(sdev->request_queue))
- blk_queue_init_tags(sdev->request_queue, depth, NULL);
- scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, depth);
- }
+ if (!sdev->tagged_supported)
+ return;
+
+ if (!blk_queue_tagged(sdev->request_queue))
+ blk_queue_init_tags(sdev->request_queue, depth, NULL);
+
+ scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), depth);
}
/**
static inline int scsi_populate_tag_msg(struct scsi_cmnd *cmd, char *msg)
{
struct request *req = cmd->request;
+ struct scsi_device *sdev = cmd->device;
if (blk_rq_tagged(req)) {
- if (req->flags & REQ_HARDBARRIER)
+ if (sdev->ordered_tags && req->flags & REQ_HARDBARRIER)
*msg++ = MSG_ORDERED_TAG;
else
*msg++ = MSG_SIMPLE_TAG;
--- /dev/null
+/*
+ * iSCSI transport class definitions
+ *
+ * Copyright (C) IBM Corporation, 2004
+ * Copyright (C) Mike Christie, 2004
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ */
+#ifndef SCSI_TRANSPORT_ISCSI_H
+#define SCSI_TRANSPORT_ISCSI_H
+
+#include <linux/config.h>
+#include <linux/in6.h>
+#include <linux/in.h>
+
+struct scsi_transport_template;
+
+struct iscsi_class_session {
+ uint8_t isid[6];
+ uint16_t tsih;
+ int header_digest; /* 1 CRC32, 0 None */
+ int data_digest; /* 1 CRC32, 0 None */
+ uint16_t tpgt;
+ union {
+ struct in6_addr sin6_addr;
+ struct in_addr sin_addr;
+ } u;
+ sa_family_t addr_type; /* must be AF_INET or AF_INET6 */
+ uint16_t port; /* must be in network byte order */
+ int initial_r2t; /* 1 Yes, 0 No */
+ int immediate_data; /* 1 Yes, 0 No */
+ uint32_t max_recv_data_segment_len;
+ uint32_t max_burst_len;
+ uint32_t first_burst_len;
+ uint16_t def_time2wait;
+ uint16_t def_time2retain;
+ uint16_t max_outstanding_r2t;
+ int data_pdu_in_order; /* 1 Yes, 0 No */
+ int data_sequence_in_order; /* 1 Yes, 0 No */
+ int erl;
+};
+
+/*
+ * accessor macros
+ */
+#define iscsi_isid(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->isid)
+#define iscsi_tsih(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->tsih)
+#define iscsi_header_digest(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->header_digest)
+#define iscsi_data_digest(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->data_digest)
+#define iscsi_port(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->port)
+#define iscsi_addr_type(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->addr_type)
+#define iscsi_sin_addr(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->u.sin_addr)
+#define iscsi_sin6_addr(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->u.sin6_addr)
+#define iscsi_tpgt(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->tpgt)
+#define iscsi_initial_r2t(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->initial_r2t)
+#define iscsi_immediate_data(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->immediate_data)
+#define iscsi_max_recv_data_segment_len(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->max_recv_data_segment_len)
+#define iscsi_max_burst_len(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->max_burst_len)
+#define iscsi_first_burst_len(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->first_burst_len)
+#define iscsi_def_time2wait(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->def_time2wait)
+#define iscsi_def_time2retain(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->def_time2retain)
+#define iscsi_max_outstanding_r2t(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->max_outstanding_r2t)
+#define iscsi_data_pdu_in_order(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->data_pdu_in_order)
+#define iscsi_data_sequence_in_order(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->data_sequence_in_order)
+#define iscsi_erl(x) \
+ (((struct iscsi_class_session *)&(x)->starget_data)->erl)
+
+/*
+ * The functions by which the transport class and the driver communicate
+ */
+struct iscsi_function_template {
+ /*
+ * target attrs
+ */
+ void (*get_isid)(struct scsi_target *);
+ void (*get_tsih)(struct scsi_target *);
+ void (*get_header_digest)(struct scsi_target *);
+ void (*get_data_digest)(struct scsi_target *);
+ void (*get_port)(struct scsi_target *);
+ void (*get_tpgt)(struct scsi_target *);
+ /*
+ * In get_ip_address the lld must set the address and
+ * the address type
+ */
+ void (*get_ip_address)(struct scsi_target *);
+ /*
+ * The lld should snprintf the name or alias to the buffer
+ */
+ ssize_t (*get_target_name)(struct scsi_target *, char *, ssize_t);
+ ssize_t (*get_target_alias)(struct scsi_target *, char *, ssize_t);
+ void (*get_initial_r2t)(struct scsi_target *);
+ void (*get_immediate_data)(struct scsi_target *);
+ void (*get_max_recv_data_segment_len)(struct scsi_target *);
+ void (*get_max_burst_len)(struct scsi_target *);
+ void (*get_first_burst_len)(struct scsi_target *);
+ void (*get_def_time2wait)(struct scsi_target *);
+ void (*get_def_time2retain)(struct scsi_target *);
+ void (*get_max_outstanding_r2t)(struct scsi_target *);
+ void (*get_data_pdu_in_order)(struct scsi_target *);
+ void (*get_data_sequence_in_order)(struct scsi_target *);
+ void (*get_erl)(struct scsi_target *);
+
+ /*
+ * host atts
+ */
+
+ /*
+ * The lld should snprintf the name or alias to the buffer
+ */
+ ssize_t (*get_initiator_alias)(struct Scsi_Host *, char *, ssize_t);
+ ssize_t (*get_initiator_name)(struct Scsi_Host *, char *, ssize_t);
+ /*
+ * The driver sets these to tell the transport class it
+ * wants the attributes displayed in sysfs. If the show_ flag
+ * is not set, the attribute will be private to the transport
+ * class. We could probably just test if a get_ fn was set
+ * since we only use the values for sysfs but this is how
+ * fc does it too.
+ */
+ unsigned long show_isid:1;
+ unsigned long show_tsih:1;
+ unsigned long show_header_digest:1;
+ unsigned long show_data_digest:1;
+ unsigned long show_port:1;
+ unsigned long show_tpgt:1;
+ unsigned long show_ip_address:1;
+ unsigned long show_target_name:1;
+ unsigned long show_target_alias:1;
+ unsigned long show_initial_r2t:1;
+ unsigned long show_immediate_data:1;
+ unsigned long show_max_recv_data_segment_len:1;
+ unsigned long show_max_burst_len:1;
+ unsigned long show_first_burst_len:1;
+ unsigned long show_def_time2wait:1;
+ unsigned long show_def_time2retain:1;
+ unsigned long show_max_outstanding_r2t:1;
+ unsigned long show_data_pdu_in_order:1;
+ unsigned long show_data_sequence_in_order:1;
+ unsigned long show_erl:1;
+ unsigned long show_initiator_name:1;
+ unsigned long show_initiator_alias:1;
+};
+
+struct scsi_transport_template *iscsi_attach_transport(struct iscsi_function_template *);
+void iscsi_release_transport(struct scsi_transport_template *);
+
+#endif
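As an illustration of how a low-level driver might plug into the template above, here is a minimal sketch (the example_* names are hypothetical; only members actually declared in iscsi_function_template and the accessors above are used):

#include <linux/module.h>
#include <linux/init.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_transport_iscsi.h>

/* Hypothetical LLD callback: publish the negotiated target portal group
 * tag through the class data (via the iscsi_tpgt() accessor above). */
static void example_get_tpgt(struct scsi_target *starget)
{
	iscsi_tpgt(starget) = 1;
}

static struct iscsi_function_template example_iscsi_ft = {
	.get_tpgt	= example_get_tpgt,
	.show_tpgt	= 1,	/* ask the class to expose it in sysfs */
};

static struct scsi_transport_template *example_iscsi_tt;

static int __init example_iscsi_init(void)
{
	example_iscsi_tt = iscsi_attach_transport(&example_iscsi_ft);
	return example_iscsi_tt ? 0 : -ENOMEM;
}

static void __exit example_iscsi_exit(void)
{
	iscsi_release_transport(example_iscsi_tt);
}

module_init(example_iscsi_init);
module_exit(example_iscsi_exit);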
#include "seq_instr.h"
-extern char *snd_seq_fm_id;
-
int snd_seq_fm_init(snd_seq_kinstr_ops_t * ops,
snd_seq_kinstr_ops_t * next);
#include "seq_instr.h"
-extern char *snd_seq_gf1_id;
-
typedef struct {
void *private_data;
int (*info)(void *private_data, gf1_info_t *info);
#include "seq_instr.h"
-extern char *snd_seq_iwffff_id;
-
typedef struct {
void *private_data;
int (*info)(void *private_data, iwffff_info_t *info);
#include "seq_instr.h"
-extern char *snd_seq_simple_id;
-
typedef struct {
void *private_data;
int (*info)(void *private_data, simple_instrument_info_t *info);
void (*private_free) (ak4531_t *ak4531);
/* --- */
unsigned char regs[0x20];
- spinlock_t reg_lock;
+ struct semaphore reg_mutex;
};
int snd_ak4531_mixer(snd_card_t * card, ak4531_t * _ak4531, ak4531_t ** rak4531);
#define CS8427_VERSHIFT 0
#define CS8427_VER8427A 0x71
-int snd_cs8427_detect(snd_i2c_bus_t *bus, unsigned char addr);
int snd_cs8427_create(snd_i2c_bus_t *bus, unsigned char addr,
unsigned int reset_timeout, snd_i2c_device_t **r_cs8427);
-void snd_cs8427_reset(snd_i2c_device_t *cs8427);
int snd_cs8427_reg_write(snd_i2c_device_t *device, unsigned char reg, unsigned char val);
-int snd_cs8427_reg_read(snd_i2c_device_t *device, unsigned char reg);
int snd_cs8427_iec958_build(snd_i2c_device_t *cs8427, snd_pcm_substream_t *playback_substream, snd_pcm_substream_t *capture_substream);
int snd_cs8427_iec958_active(snd_i2c_device_t *cs8427, int active);
int snd_cs8427_iec958_pcm(snd_i2c_device_t *cs8427, unsigned int rate);
*/
void snd_es1688_mixer_write(es1688_t *chip, unsigned char reg, unsigned char data);
-unsigned char snd_es1688_mixer_read(es1688_t *chip, unsigned char reg);
-
-irqreturn_t snd_es1688_interrupt(int irq, void *dev_id, struct pt_regs *regs);
int snd_es1688_create(snd_card_t * card,
unsigned long port,
/* gus_dma.c */
-void snd_gf1_dma_program(snd_gus_card_t * gus, unsigned int addr,
- unsigned long buf_addr, unsigned int count,
- unsigned int cmd);
-void snd_gf1_dma_ack(snd_gus_card_t * gus);
int snd_gf1_dma_init(snd_gus_card_t * gus);
int snd_gf1_dma_done(snd_gus_card_t * gus);
int snd_gf1_dma_transfer_block(snd_gus_card_t * gus,
int snd_info_register(snd_info_entry_t * entry);
int snd_info_unregister(snd_info_entry_t * entry);
-struct proc_dir_entry *snd_create_proc_entry(const char *name, mode_t mode,
- struct proc_dir_entry *parent);
-void snd_remove_proc_entry(struct proc_dir_entry *parent,
- struct proc_dir_entry *de);
-
/* for card drivers */
int snd_card_proc_new(snd_card_t *card, const char *name, snd_info_entry_t **entryp);
static inline int snd_info_register(snd_info_entry_t * entry) { return 0; }
static inline int snd_info_unregister(snd_info_entry_t * entry) { return 0; }
-static inline struct proc_dir_entry *snd_create_proc_entry(const char *name, mode_t mode, struct proc_dir_entry *parent) { return NULL; }
-static inline void snd_remove_proc_entry(struct proc_dir_entry *parent,
- struct proc_dir_entry *de) { ; }
-
#define snd_card_proc_new(card,name,entryp) 0 /* always success */
#define snd_info_set_text_ops(entry,private_data,read_size,read) /*NOP*/
snd_rawmidi_t ** rmidi);
void snd_rawmidi_set_ops(snd_rawmidi_t * rmidi, int stream, snd_rawmidi_ops_t * ops);
-/* control functions */
-
-int snd_rawmidi_control_ioctl(snd_card_t * card,
- snd_ctl_file_t * control,
- unsigned int cmd,
- unsigned long arg);
-
/* callbacks */
void snd_rawmidi_receive_reset(snd_rawmidi_substream_t * substream);
int snd_sb16dsp_configure(sb_t *chip);
/* sb16.c */
irqreturn_t snd_sb16dsp_interrupt(int irq, void *dev_id, struct pt_regs *regs);
-int snd_sb16_playback_open(snd_pcm_substream_t *substream);
-int snd_sb16_capture_open(snd_pcm_substream_t *substream);
-int snd_sb16_playback_close(snd_pcm_substream_t *substream);
-int snd_sb16_capture_close(snd_pcm_substream_t *substream);
/* exported mixer stuffs */
enum {
void snd_midi_process_event(snd_midi_op_t *ops, snd_seq_event_t *ev,
snd_midi_channel_set_t *chanset);
void snd_midi_channel_set_clear(snd_midi_channel_set_t *chset);
-void snd_midi_channel_init(snd_midi_channel_t *p, int n);
-snd_midi_channel_t *snd_midi_channel_init_set(int n);
snd_midi_channel_set_t *snd_midi_channel_alloc_set(int n);
void snd_midi_channel_free_set(snd_midi_channel_set_t *chset);
};
extern void snd_wavefront_internal_interrupt (snd_wavefront_card_t *card);
-extern int snd_wavefront_interrupt_bits (int irq);
extern int snd_wavefront_detect_irq (snd_wavefront_t *dev) ;
extern int snd_wavefront_check_irq (snd_wavefront_t *dev, int irq);
extern int snd_wavefront_restart (snd_wavefront_t *dev);
int snd_soundfont_remove_samples(snd_sf_list_t *sflist);
int snd_soundfont_remove_unlocked(snd_sf_list_t *sflist);
-int snd_soundfont_mem_used(snd_sf_list_t *sflist);
int snd_soundfont_search_zone(snd_sf_list_t *sflist, int *notep, int vel,
int preset, int bank,
#include <sound/hwdep.h>
#include <linux/interrupt.h>
+#if defined(CONFIG_FW_LOADER) || defined(CONFIG_FW_LOADER_MODULE)
+#if !defined(CONFIG_USE_VXLOADER) && !defined(CONFIG_SND_VX_LIB) /* built-in kernel */
+#define SND_VX_FW_LOADER /* use the standard firmware loader */
+#endif
+#endif
+
+struct firmware;
+struct device;
+
typedef struct snd_vx_core vx_core_t;
typedef struct vx_pipe vx_pipe_t;
void (*change_audio_source)(vx_core_t *chip, int src);
void (*set_clock_source)(vx_core_t *chp, int src);
/* chip init */
- int (*load_dsp)(vx_core_t *chip, const snd_hwdep_dsp_image_t *dsp);
+ int (*load_dsp)(vx_core_t *chip, int idx, const struct firmware *fw);
void (*reset_dsp)(vx_core_t *chip);
void (*reset_board)(vx_core_t *chip, int cold_reset);
int (*add_controls)(vx_core_t *chip);
unsigned int chip_status;
unsigned int pcm_running;
+ struct device *dev;
snd_hwdep_t *hwdep;
struct vx_rmh irq_rmh; /* RMH used in interrupts */
unsigned char audio_monitor_active[4]; /* playback hw-monitor mute/unmute */
struct semaphore mixer_mutex;
+
+ const struct firmware *firmware[4]; /* loaded firmware data */
};
*/
vx_core_t *snd_vx_create(snd_card_t *card, struct snd_vx_hardware *hw,
struct snd_vx_ops *ops, int extra_size);
-int snd_vx_hwdep_new(vx_core_t *chip);
-int snd_vx_load_boot_image(vx_core_t *chip, const snd_hwdep_dsp_image_t *boot);
-int snd_vx_dsp_boot(vx_core_t *chip, const snd_hwdep_dsp_image_t *boot);
-int snd_vx_dsp_load(vx_core_t *chip, const snd_hwdep_dsp_image_t *dsp);
+int snd_vx_setup_firmware(vx_core_t *chip);
+int snd_vx_load_boot_image(vx_core_t *chip, const struct firmware *dsp);
+int snd_vx_dsp_boot(vx_core_t *chip, const struct firmware *dsp);
+int snd_vx_dsp_load(vx_core_t *chip, const struct firmware *dsp);
+
+void snd_vx_free_firmware(vx_core_t *chip);
/*
* interrupt handler; exported for pcmcia
*/
irqreturn_t snd_vx_irq_handler(int irq, void *dev, struct pt_regs *regs);
-/*
- * power-management routines
- */
-#ifdef CONFIG_PM
-void snd_vx_suspend(vx_core_t *chip);
-void snd_vx_resume(vx_core_t *chip);
-#endif
-
-
/*
* lowlevel functions
*/
*/
void vx_set_iec958_status(vx_core_t *chip, unsigned int bits);
int vx_set_clock(vx_core_t *chip, unsigned int freq);
-void vx_change_clock_source(vx_core_t *chip, int source);
void vx_set_internal_clock(vx_core_t *chip, unsigned int freq);
int vx_change_frequency(vx_core_t *chip);
* archive for more details.
*/
+#include <asm/addrspace.h>
/*
* IMS332 video controller register base address
*/
-#define MAXINEFB_IMS332_ADDRESS 0xbc140000
+#define MAXINEFB_IMS332_ADDRESS KSEG1ADDR(0x1c140000)
/*
* Begin of DECstation 5000/xx onboard framebuffer memory, default resolution
* is 1024x768x8
*/
-#define DS5000_xx_ONBOARD_FBMEM_START 0xaa000000
+#define DS5000_xx_ONBOARD_FBMEM_START KSEG1ADDR(0x0a000000)
/*
* The IMS 332 video controller used in the DECstation 5000/xx series
unsigned int _unused2[0x1ef];
struct newport_cregs cgo;
};
-extern struct newport_regs *npregs;
-
typedef struct {
unsigned int drawmode1;
/* Miscellaneous NEWPORT routines. */
#define BUSY_TIMEOUT 100000
-static __inline__ int newport_wait(void)
+static __inline__ int newport_wait(struct newport_regs *regs)
{
- int i = 0;
+ int t = BUSY_TIMEOUT;
- while(i < BUSY_TIMEOUT)
- if(!(npregs->cset.status & NPORT_STAT_GBUSY))
+ while (t--)
+ if (!(regs->cset.status & NPORT_STAT_GBUSY))
break;
- if(i == BUSY_TIMEOUT)
- return 1;
- return 0;
+ return !t;
}
-static __inline__ int newport_bfwait(void)
+static __inline__ int newport_bfwait(struct newport_regs *regs)
{
- int i = 0;
+ int t = BUSY_TIMEOUT;
- while(i < BUSY_TIMEOUT)
- if(!(npregs->cset.status & NPORT_STAT_BBUSY))
+ while (t--)
+ if(!(regs->cset.status & NPORT_STAT_BBUSY))
break;
- if(i == BUSY_TIMEOUT)
- return 1;
- return 0;
+ return !t;
}
-/* newport.c and cons_newport.c routines */
-extern struct graphics_ops *newport_probe (int, const char **);
-
-void newport_save (void *);
-void newport_restore (void *);
-void newport_reset (void);
-int newport_ioctl (int card, int cmd, unsigned long arg);
-
/*
* DCBMODE register defines:
*/
{
rex->set.dcbmode = DCB_XMAP0 | XM9_CRS_FIFO_AVAIL |
DCB_DATAWIDTH_1 | R_DCB_XMAP9_PROTOCOL;
- newport_bfwait ();
+ newport_bfwait (rex);
while ((rex->set.dcbdata0.bybytes.b3 & 3) != XM9_FIFO_EMPTY)
;
#define PM2F_VSYNC_ACT_LOW 0x60
#define PM2F_LINE_DOUBLE 0x04
#define PM2F_VIDEO_ENABLE 0x01
+#define PM2F_RD_PIXELFORMAT_SVGA 0x01
+#define PM2F_RD_PIXELFORMAT_RGB232OFFSET 0x02
+#define PM2F_RD_PIXELFORMAT_RGBA2321 0x03
+#define PM2F_RD_PIXELFORMAT_RGBA5551 0x04
+#define PM2F_RD_PIXELFORMAT_RGBA4444 0x05
+#define PM2F_RD_PIXELFORMAT_RGB565 0x06
+#define PM2F_RD_PIXELFORMAT_RGBA8888 0x08
+#define PM2F_RD_PIXELFORMAT_RGB888 0x09
#define PM2F_RD_GUI_ACTIVE 0x10
#define PM2F_RD_COLOR_MODE_RGB 0x20
#define PM2F_DELTA_ORDER_RGB (1L<<18)
#define PM2F_MEM_BANKS_2 (1L<<29)
#define PM2F_MEM_BANKS_3 (2L<<29)
#define PM2F_MEM_BANKS_4 (3L<<29)
+#define PM2F_APERTURE_STANDARD 0
+#define PM2F_APERTURE_BYTESWAP 1
+#define PM2F_APERTURE_HALFWORDSWAP 2
typedef enum {
PM2_TYPE_PERMEDIA2,
--- /dev/null
+/* calibrate.c: default delay calibration
+ *
+ * Excised from init/main.c
+ * Copyright (C) 1991, 1992 Linus Torvalds
+ */
+
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+
+static unsigned long preset_lpj;
+static int __init lpj_setup(char *str)
+{
+ preset_lpj = simple_strtoul(str,NULL,0);
+ return 1;
+}
+
+__setup("lpj=", lpj_setup);
+
+/*
+ * This is the number of bits of precision for the loops_per_jiffy. Each
+ * bit takes on average 1.5/HZ seconds. This (like the original) is a little
+ * better than 1%
+ */
+#define LPS_PREC 8
+
+void __devinit calibrate_delay(void)
+{
+ unsigned long ticks, loopbit;
+ int lps_precision = LPS_PREC;
+
+ if (preset_lpj) {
+ loops_per_jiffy = preset_lpj;
+ printk("Calibrating delay loop (skipped)... "
+ "%lu.%02lu BogoMIPS preset\n",
+ loops_per_jiffy/(500000/HZ),
+ (loops_per_jiffy/(5000/HZ)) % 100);
+ } else {
+ loops_per_jiffy = (1<<12);
+
+ printk(KERN_DEBUG "Calibrating delay loop... ");
+ while ((loops_per_jiffy <<= 1) != 0) {
+ /* wait for "start of" clock tick */
+ ticks = jiffies;
+ while (ticks == jiffies)
+ /* nothing */;
+ /* Go .. */
+ ticks = jiffies;
+ __delay(loops_per_jiffy);
+ ticks = jiffies - ticks;
+ if (ticks)
+ break;
+ }
+
+ /*
+ * Do a binary approximation to get loops_per_jiffy set to
+ * equal one clock (up to lps_precision bits)
+ */
+ loops_per_jiffy >>= 1;
+ loopbit = loops_per_jiffy;
+ while (lps_precision-- && (loopbit >>= 1)) {
+ loops_per_jiffy |= loopbit;
+ ticks = jiffies;
+ while (ticks == jiffies)
+ /* nothing */;
+ ticks = jiffies;
+ __delay(loops_per_jiffy);
+ if (jiffies != ticks) /* longer than 1 tick */
+ loops_per_jiffy &= ~loopbit;
+ }
+
+ /* Round the value and print it */
+ printk("%lu.%02lu BogoMIPS (lpj=%lu)\n",
+ loops_per_jiffy/(500000/HZ),
+ (loops_per_jiffy/(5000/HZ)) % 100,
+ loops_per_jiffy);
+ }
+
+}
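For reference, the figure printed above is the usual BogoMIPS value, i.e. loops_per_jiffy expressed as loops per 500000 microseconds. With an illustrative loops_per_jiffy of 4096000 and HZ=1000, booting with lpj=4096000 skips calibration and reports:

	BogoMIPS = loops_per_jiffy / (500000 / HZ)
	         = 4096000 / (500000 / 1000) = 4096000 / 500 = 8192.00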
#include <linux/suspend.h>
#include <linux/root_dev.h>
#include <linux/security.h>
+#include <linux/delay.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_fs_sb.h>
return 1;
}
+static unsigned int __initdata root_delay;
+static int __init root_delay_setup(char *str)
+{
+ root_delay = simple_strtoul(str, NULL, 0);
+ return 1;
+}
+
__setup("rootflags=", root_data_setup);
__setup("rootfstype=", fs_names_setup);
+__setup("rootdelay=", root_delay_setup);
static void __init get_fs_names(char *page)
{
mount_devfs();
+ if (root_delay) {
+ printk(KERN_INFO "Waiting %dsec before mounting root device...\n",
+ root_delay);
+ ssleep(root_delay);
+ }
+
md_run_setup();
if (saved_root_name[0]) {
err = sys_ioctl(fd, RUN_ARRAY, 0);
if (err)
printk(KERN_WARNING "md: starting md%d failed\n", minor);
+ else {
+ /* reread the partition table.
+ * I (neilb) am not sure why this is needed, but I cannot
+ * boot a kernel with devfs compiled in from partitioned md
+ * array without it
+ */
+ sys_close(fd);
+ fd = sys_open(name, 0, 0);
+ sys_ioctl(fd, BLKRRPART, 0);
+ }
sys_close(fd);
}
}
static void default_handler(int, struct pt_regs *);
static struct exec_domain *exec_domains = &default_exec_domain;
-static rwlock_t exec_domains_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(exec_domains_lock);
static u_long ident_map[32] = {
*/
static struct list_head ime_list = LIST_HEAD_INIT(ime_list);
-static spinlock_t ime_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ime_lock);
static int kmalloc_failed;
struct inter_module_entry {
return mask & val;
}
+EXPORT_SYMBOL(probe_irq_mask);
/**
* probe_irq_off - end an interrupt autodetect
#ifdef CONFIG_SMP
+cpumask_t irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = CPU_MASK_ALL };
+
/**
* synchronize_irq - wait for pending IRQ handlers (on other CPUs)
*
*/
static struct proc_dir_entry *smp_affinity_entry[NR_IRQS];
-cpumask_t irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = CPU_MASK_ALL };
-
static int irq_affinity_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
unsigned int kprobe_cpu = NR_CPUS;
-static spinlock_t kprobe_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(kprobe_lock);
/* Locks kprobe: irqs must be disabled */
void lock_kprobes(void)
int register_kprobe(struct kprobe *p)
{
int ret = 0;
- unsigned long flags;
+ unsigned long flags = 0;
+ if ((ret = arch_prepare_kprobe(p)) != 0) {
+ goto out;
+ }
spin_lock_irqsave(&kprobe_lock, flags);
INIT_HLIST_NODE(&p->hlist);
if (get_kprobe(p->addr)) {
ret = -EEXIST;
goto out;
}
+ arch_copy_kprobe(p);
- if ((ret = arch_prepare_kprobe(p)) != 0) {
- goto out;
- }
hlist_add_head(&p->hlist,
&kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]);
(unsigned long) p->addr + sizeof(kprobe_opcode_t));
out:
spin_unlock_irqrestore(&kprobe_lock, flags);
+ if (ret == -EEXIST)
+ arch_remove_kprobe(p);
return ret;
}
void unregister_kprobe(struct kprobe *p)
{
unsigned long flags;
- spin_lock_irqsave(&kprobe_lock, flags);
arch_remove_kprobe(p);
+ spin_lock_irqsave(&kprobe_lock, flags);
*p->addr = p->opcode;
hlist_del(&p->hlist);
flush_icache_range((unsigned long) p->addr,
KERNEL_ATTR_RO(hotplug_seqnum);
#endif
-static decl_subsys(kernel, NULL, NULL);
+decl_subsys(kernel, NULL, NULL);
+EXPORT_SYMBOL_GPL(kernel_subsys);
static struct attribute * kernel_attrs[] = {
#ifdef CONFIG_HOTPLUG
involved in suspending. Also in this case there is a risk that buffers
on disk won't match with saved ones.
- For more information take a look at Documentation/power/swsusp.txt.
+ For more information take a look at <file:Documentation/power/swsusp.txt>.
config PM_STD_PARTITION
string "Default resume partition"
local_irq_save(flags);
switch(mode) {
case PM_DISK_PLATFORM:
- device_power_down(PM_SUSPEND_DISK);
+ device_power_down(PMSG_SUSPEND);
error = pm_ops->enter(PM_SUSPEND_DISK);
break;
case PM_DISK_SHUTDOWN:
free_some_memory();
disable_nonboot_cpus();
- if ((error = device_suspend(PM_SUSPEND_DISK)))
+ if ((error = device_suspend(PMSG_FREEZE))) {
+ printk("Some devices failed to suspend\n");
goto Finish;
+ }
return 0;
Finish:
*
* If we're going through the firmware, then get it over with quickly.
*
- * If not, then call swsusp to do it's thing, then figure out how
+ * If not, then call swsusp to do its thing, then figure out how
* to power down the system.
*/
* software_resume - Resume from a saved image.
*
* Called as a late_initcall (so all devices are discovered and
- * initialized), we call pmdisk to see if we have a saved image or not.
+ * initialized), we call swsusp to see if we have a saved image or not.
* If so, we quiesce devices, then restore the saved image. We will
* return above (in pm_suspend_disk() ) if everything goes well.
* Otherwise, we fail gracefully and return to the normally
return 0;
}
- pr_debug("PM: Reading pmdisk image.\n");
+ pr_debug("PM: Reading swsusp image.\n");
if ((error = swsusp_read()))
goto Done;
static ssize_t disk_show(struct subsystem * subsys, char * buf)
{
- return sprintf(buf,"%s\n",pm_disk_modes[pm_disk_mode]);
+ return sprintf(buf, "%s\n", pm_disk_modes[pm_disk_mode]);
}
goto Thaw;
}
- if ((error = device_suspend(state)))
+ if ((error = device_suspend(PMSG_SUSPEND)))
goto Finish;
return 0;
Finish:
}
-static int suspend_enter(u32 state)
+static int suspend_enter(suspend_state_t state)
{
int error = 0;
unsigned long flags;
local_irq_save(flags);
- if ((error = device_power_down(state)))
+
+ if ((error = device_power_down(PMSG_SUSPEND)))
goto Done;
error = pm_ops->enter(state);
device_power_up();
* @state: State we're coming out of.
*
* Call platform code to clean up, restart processes, and free the
- * console that we've allocated.
+ * console that we've allocated. This is not called for suspend-to-disk.
*/
static void suspend_finish(suspend_state_t state)
if (down_trylock(&pm_sem))
return -EBUSY;
- /* Suspend is hard to get right on SMP. */
- if (num_online_cpus() != 1) {
- error = -EPERM;
+ if (state == PM_SUSPEND_DISK) {
+ error = pm_suspend_disk();
goto Unlock;
}
- if (state == PM_SUSPEND_DISK) {
- error = pm_suspend_disk();
+ /* Suspend is hard to get right on SMP. */
+ if (num_online_cpus() != 1) {
+ error = -EPERM;
goto Unlock;
}
#ifdef CONFIG_PROFILING
static DECLARE_RWSEM(profile_rwsem);
-static rwlock_t handoff_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(handoff_lock);
static struct notifier_block * task_exit_notifier;
static struct notifier_block * task_free_notifier;
static struct notifier_block * munmap_notifier;
node = cpu_to_node(cpu);
per_cpu(cpu_profile_flip, cpu) = 0;
if (!per_cpu(cpu_profile_hits, cpu)[1]) {
- page = alloc_pages_node(node, GFP_KERNEL, 0);
+ page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
if (!page)
return NOTIFY_BAD;
- clear_highpage(page);
per_cpu(cpu_profile_hits, cpu)[1] = page_address(page);
}
if (!per_cpu(cpu_profile_hits, cpu)[0]) {
- page = alloc_pages_node(node, GFP_KERNEL, 0);
+ page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
if (!page)
goto out_free;
- clear_highpage(page);
per_cpu(cpu_profile_hits, cpu)[0] = page_address(page);
}
break;
int node = cpu_to_node(cpu);
struct page *page;
- page = alloc_pages_node(node, GFP_KERNEL, 0);
+ page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
if (!page)
goto out_cleanup;
- clear_highpage(page);
per_cpu(cpu_profile_hits, cpu)[1]
= (struct profile_hit *)page_address(page);
- page = alloc_pages_node(node, GFP_KERNEL, 0);
+ page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
if (!page)
goto out_cleanup;
- clear_highpage(page);
per_cpu(cpu_profile_hits, cpu)[0]
= (struct profile_hit *)page_address(page);
}
EXPORT_SYMBOL(iomem_resource);
-static rwlock_t resource_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(resource_lock);
#ifdef CONFIG_PROC_FS
* Copyright (2004) Linus Torvalds
*
* Author: Zwane Mwaikambo <zwane@fsmlabs.com>
+ *
+ * Copyright (2004) Ingo Molnar
*/
#include <linux/config.h>
#include <linux/interrupt.h>
#include <linux/module.h>
+/*
+ * Generic declaration of the raw read_trylock() function,
+ * architectures are supposed to optimize this:
+ */
+int __lockfunc generic_raw_read_trylock(rwlock_t *lock)
+{
+ _raw_read_lock(lock);
+ return 1;
+}
+EXPORT_SYMBOL(generic_raw_read_trylock);
+
int __lockfunc _spin_trylock(spinlock_t *lock)
{
preempt_disable();
}
EXPORT_SYMBOL(_spin_trylock);
-int __lockfunc _write_trylock(rwlock_t *lock)
+int __lockfunc _read_trylock(rwlock_t *lock)
{
preempt_disable();
- if (_raw_write_trylock(lock))
+ if (_raw_read_trylock(lock))
return 1;
preempt_enable();
return 0;
}
-EXPORT_SYMBOL(_write_trylock);
-
-#ifdef CONFIG_PREEMPT
-/*
- * This could be a long-held lock. If another CPU holds it for a long time,
- * and that CPU is not asked to reschedule then *this* CPU will spin on the
- * lock for a long time, even if *this* CPU is asked to reschedule.
- *
- * So what we do here, in the slow (contended) path is to spin on the lock by
- * hand while permitting preemption.
- *
- * Called inside preempt_disable().
- */
-static inline void __preempt_spin_lock(spinlock_t *lock)
-{
- if (preempt_count() > 1) {
- _raw_spin_lock(lock);
- return;
- }
-
- do {
- preempt_enable();
- while (spin_is_locked(lock))
- cpu_relax();
- preempt_disable();
- } while (!_raw_spin_trylock(lock));
-}
+EXPORT_SYMBOL(_read_trylock);
-void __lockfunc _spin_lock(spinlock_t *lock)
+int __lockfunc _write_trylock(rwlock_t *lock)
{
preempt_disable();
- if (unlikely(!_raw_spin_trylock(lock)))
- __preempt_spin_lock(lock);
-}
-
-static inline void __preempt_write_lock(rwlock_t *lock)
-{
- if (preempt_count() > 1) {
- _raw_write_lock(lock);
- return;
- }
-
- do {
- preempt_enable();
- while (rwlock_is_locked(lock))
- cpu_relax();
- preempt_disable();
- } while (!_raw_write_trylock(lock));
-}
+ if (_raw_write_trylock(lock))
+ return 1;
-void __lockfunc _write_lock(rwlock_t *lock)
-{
- preempt_disable();
- if (unlikely(!_raw_write_trylock(lock)))
- __preempt_write_lock(lock);
-}
-#else
-void __lockfunc _spin_lock(spinlock_t *lock)
-{
- preempt_disable();
- _raw_spin_lock(lock);
+ preempt_enable();
+ return 0;
}
+EXPORT_SYMBOL(_write_trylock);
-void __lockfunc _write_lock(rwlock_t *lock)
-{
- preempt_disable();
- _raw_write_lock(lock);
-}
-#endif
-EXPORT_SYMBOL(_spin_lock);
-EXPORT_SYMBOL(_write_lock);
+#ifndef CONFIG_PREEMPT
void __lockfunc _read_lock(rwlock_t *lock)
{
}
EXPORT_SYMBOL(_read_lock);
-void __lockfunc _spin_unlock(spinlock_t *lock)
-{
- _raw_spin_unlock(lock);
- preempt_enable();
-}
-EXPORT_SYMBOL(_spin_unlock);
-
-void __lockfunc _write_unlock(rwlock_t *lock)
-{
- _raw_write_unlock(lock);
- preempt_enable();
-}
-EXPORT_SYMBOL(_write_unlock);
-
-void __lockfunc _read_unlock(rwlock_t *lock)
-{
- _raw_read_unlock(lock);
- preempt_enable();
-}
-EXPORT_SYMBOL(_read_unlock);
-
unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock)
{
unsigned long flags;
}
EXPORT_SYMBOL(_write_lock_bh);
+void __lockfunc _spin_lock(spinlock_t *lock)
+{
+ preempt_disable();
+ _raw_spin_lock(lock);
+}
+
+EXPORT_SYMBOL(_spin_lock);
+
+void __lockfunc _write_lock(rwlock_t *lock)
+{
+ preempt_disable();
+ _raw_write_lock(lock);
+}
+
+EXPORT_SYMBOL(_write_lock);
+
+#else /* CONFIG_PREEMPT: */
+
+/*
+ * This could be a long-held lock. We both prepare to spin for a long
+ * time (making _this_ CPU preemptable if possible), and we also signal
+ * towards that other CPU that it should break the lock ASAP.
+ *
+ * (We do this in a function because inlining it would be excessive.)
+ */
+
+#define BUILD_LOCK_OPS(op, locktype) \
+void __lockfunc _##op##_lock(locktype##_t *lock) \
+{ \
+ preempt_disable(); \
+ for (;;) { \
+ if (likely(_raw_##op##_trylock(lock))) \
+ break; \
+ preempt_enable(); \
+ if (!(lock)->break_lock) \
+ (lock)->break_lock = 1; \
+ while (!op##_can_lock(lock) && (lock)->break_lock) \
+ cpu_relax(); \
+ preempt_disable(); \
+ } \
+} \
+ \
+EXPORT_SYMBOL(_##op##_lock); \
+ \
+unsigned long __lockfunc _##op##_lock_irqsave(locktype##_t *lock) \
+{ \
+ unsigned long flags; \
+ \
+ preempt_disable(); \
+ for (;;) { \
+ local_irq_save(flags); \
+ if (likely(_raw_##op##_trylock(lock))) \
+ break; \
+ local_irq_restore(flags); \
+ \
+ preempt_enable(); \
+ if (!(lock)->break_lock) \
+ (lock)->break_lock = 1; \
+ while (!op##_can_lock(lock) && (lock)->break_lock) \
+ cpu_relax(); \
+ preempt_disable(); \
+ } \
+ return flags; \
+} \
+ \
+EXPORT_SYMBOL(_##op##_lock_irqsave); \
+ \
+void __lockfunc _##op##_lock_irq(locktype##_t *lock) \
+{ \
+ _##op##_lock_irqsave(lock); \
+} \
+ \
+EXPORT_SYMBOL(_##op##_lock_irq); \
+ \
+void __lockfunc _##op##_lock_bh(locktype##_t *lock) \
+{ \
+ unsigned long flags; \
+ \
+ /* \
+ * Careful: we must exclude softirqs too, hence the \
+ * irq-disabling. We use the generic preemption-aware \
+ * function: \
+ */ \
+ flags = _##op##_lock_irqsave(lock); \
+ local_bh_disable(); \
+ local_irq_restore(flags); \
+} \
+ \
+EXPORT_SYMBOL(_##op##_lock_bh)
+
+/*
+ * Build preemption-friendly versions of the following
+ * lock-spinning functions:
+ *
+ * _[spin|read|write]_lock()
+ * _[spin|read|write]_lock_irq()
+ * _[spin|read|write]_lock_irqsave()
+ * _[spin|read|write]_lock_bh()
+ */
+BUILD_LOCK_OPS(spin, spinlock);
+BUILD_LOCK_OPS(read, rwlock);
+BUILD_LOCK_OPS(write, rwlock);
+
+#endif /* CONFIG_PREEMPT */
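For clarity, this is roughly what BUILD_LOCK_OPS(spin, spinlock) expands the plain _spin_lock() into (hand-expanded for illustration only; it is not an extra definition added by the patch):

void __lockfunc _spin_lock(spinlock_t *lock)
{
	preempt_disable();
	for (;;) {
		if (likely(_raw_spin_trylock(lock)))
			break;
		/* contended: re-enable preemption, ask the current holder
		 * to drop the lock ASAP, and spin with preemption allowed */
		preempt_enable();
		if (!lock->break_lock)
			lock->break_lock = 1;
		while (!spin_can_lock(lock) && lock->break_lock)
			cpu_relax();
		preempt_disable();
	}
}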
+
+void __lockfunc _spin_unlock(spinlock_t *lock)
+{
+ _raw_spin_unlock(lock);
+ preempt_enable();
+}
+EXPORT_SYMBOL(_spin_unlock);
+
+void __lockfunc _write_unlock(rwlock_t *lock)
+{
+ _raw_write_unlock(lock);
+ preempt_enable();
+}
+EXPORT_SYMBOL(_write_unlock);
+
+void __lockfunc _read_unlock(rwlock_t *lock)
+{
+ _raw_read_unlock(lock);
+ preempt_enable();
+}
+EXPORT_SYMBOL(_read_unlock);
+
void __lockfunc _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
{
_raw_spin_unlock(lock);
stopmachine_state = STOPMACHINE_WAIT;
for_each_online_cpu(i) {
- if (i == smp_processor_id())
+ if (i == _smp_processor_id())
continue;
ret = kernel_thread(stopmachine, (void *)(long)i,CLONE_KERNEL);
if (ret < 0)
/* If they don't care which CPU fn runs on, bind to any online one. */
if (cpu == NR_CPUS)
- cpu = smp_processor_id();
+ cpu = _smp_processor_id();
p = kthread_create(do_stop, &smdata, "kstopmachine");
if (!IS_ERR(p)) {
config MAGIC_SYSRQ
bool "Magic SysRq key"
depends on DEBUG_KERNEL && (H8300 || M68KNOMMU || V850)
- depends (USERMODE && MCONSOLE)
help
Enables console device to interpret special characters as
commands to dump state information.
allocation as well as poisoning memory on free to catch use of freed
memory. This can make kmalloc/kfree-intensive workloads much slower.
+config DEBUG_PREEMPT
+ bool "Debug preemptible kernel"
+ depends on PREEMPT
+ default y
+ help
+ If you say Y here then the kernel will use a debug variant of the
+ commonly used smp_processor_id() function and will print warnings
+ if kernel code uses it in a preemption-unsafe way. Also, the kernel
+ will detect preemption count underflows.
+
config DEBUG_SPINLOCK
bool "Spinlock debugging"
depends on DEBUG_KERNEL && (ALPHA || ARM || X86 || IA64 || M32R || MIPS || PARISC || PPC32 || (SUPERH && !SUPERH64) || SPARC32 || SPARC64 || USERMODE || X86_64)
Disable for production systems.
config DEBUG_BUGVERBOSE
- bool "Verbose BUG() reporting (adds 70K)"
- depends on DEBUG_KERNEL && (ARM || ARM26 || M32R || M68K || SPARC32 || SPARC64)
+ bool "Verbose BUG() reporting (adds 70K)" if DEBUG_KERNEL && EMBEDDED
+ depends on ARM || ARM26 || M32R || M68K || SPARC32 || SPARC64 || (X86 && !X86_64)
+ default !EMBEDDED
help
Say Y here to make BUG() panics output the file name and line number
of the BUG call as well as the EIP and oops trace. This aids
If you're truly short on disk space or don't expect to report any
bugs back to the UML developers, say N, otherwise say Y.
+config DEBUG_IOREMAP
+ bool "Enable ioremap() debugging"
+ depends on DEBUG_KERNEL && PARISC
+ help
+ Enabling this option will cause the kernel to distinguish between
+ ioremapped and physical addresses. It will print a backtrace (at
+ most one every 10 seconds), hopefully allowing you to see which
+ drivers need work. Fixing all these problems is a prerequisite
+ for turning on USE_HPPA_IOREMAP. The warnings are harmless;
+ the kernel has enough information to fix the broken drivers
+ automatically, but we'd like to make it more efficient by not
+ having to do that.
+
+config DEBUG_FS
+ bool "Debug Filesystem"
+ depends on DEBUG_KERNEL
+ help
+ debugfs is a virtual file system that kernel developers use to put
+ debugging files into. Enable this option to be able to read and
+ write to these files.
+
+ If unsure, say N.
+
if !X86_64
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
--- /dev/null
+/* find_next_bit.c: fallback find next bit implementation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/bitops.h>
+
+int find_next_bit(const unsigned long *addr, int size, int offset)
+{
+ const unsigned long *base;
+ const int NBITS = sizeof(*addr) * 8;
+ unsigned long tmp;
+
+ base = addr;
+ if (offset) {
+ int suboffset;
+
+ addr += offset / NBITS;
+
+ suboffset = offset % NBITS;
+ if (suboffset) {
+ tmp = *addr;
+ tmp >>= suboffset;
+ if (tmp)
+ goto finish;
+ }
+
+ addr++;
+ }
+
+ while ((tmp = *addr) == 0)
+ addr++;
+
+ offset = (addr - base) * NBITS;
+
+ finish:
+ /* count the remaining bits without using __ffs() since that takes a 32-bit arg */
+ while (!(tmp & 0xff)) {
+ offset += 8;
+ tmp >>= 8;
+ }
+
+ while (!(tmp & 1)) {
+ offset++;
+ tmp >>= 1;
+ }
+
+ return offset;
+}
*/
#include <linux/smp_lock.h>
#include <linux/module.h>
+#include <linux/kallsyms.h>
+
+#if defined(CONFIG_PREEMPT) && defined(__smp_processor_id) && \
+ defined(CONFIG_DEBUG_PREEMPT)
+
+/*
+ * Debugging check.
+ */
+unsigned int smp_processor_id(void)
+{
+ unsigned long preempt_count = preempt_count();
+ int this_cpu = __smp_processor_id();
+ cpumask_t this_mask;
+
+ if (likely(preempt_count))
+ goto out;
+
+ if (irqs_disabled())
+ goto out;
+
+ /*
+ * Kernel threads bound to a single CPU can safely use
+ * smp_processor_id():
+ */
+ this_mask = cpumask_of_cpu(this_cpu);
+
+ if (cpus_equal(current->cpus_allowed, this_mask))
+ goto out;
+
+ /*
+ * It is valid to assume CPU-locality during early bootup:
+ */
+ if (system_state != SYSTEM_RUNNING)
+ goto out;
+
+ /*
+ * Avoid recursion:
+ */
+ preempt_disable();
+
+ if (!printk_ratelimit())
+ goto out_enable;
+
+ printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x] code: %s/%d\n", preempt_count(), current->comm, current->pid);
+ print_symbol("caller is %s\n", (long)__builtin_return_address(0));
+ dump_stack();
+
+out_enable:
+ preempt_enable_no_resched();
+out:
+ return this_cpu;
+}
+
+EXPORT_SYMBOL(smp_processor_id);
+
+#endif /* PREEMPT && __smp_processor_id && DEBUG_PREEMPT */
+
+#ifdef CONFIG_PREEMPT_BKL
+/*
+ * The 'big kernel semaphore'
+ *
+ * This mutex is taken and released recursively by lock_kernel()
+ * and unlock_kernel(). It is transparently dropped and reaquired
+ * over schedule(). It is used to protect legacy code that hasn't
+ * been migrated to a proper locking design yet.
+ *
+ * Note: code locked by this semaphore will only be serialized against
+ * other code using the same locking facility. The code guarantees that
+ * the task remains on the same CPU.
+ *
+ * Don't use in new code.
+ */
+DECLARE_MUTEX(kernel_sem);
+
+/*
+ * Re-acquire the kernel semaphore.
+ *
+ * This function is called with preemption off.
+ *
+ * We are executing in schedule() so the code must be extremely careful
+ * about recursion, both due to the down() and due to the enabling of
+ * preemption. schedule() will re-check the preemption flag after
+ * reacquiring the semaphore.
+ */
+int __lockfunc __reacquire_kernel_lock(void)
+{
+ struct task_struct *task = current;
+ int saved_lock_depth = task->lock_depth;
+
+ BUG_ON(saved_lock_depth < 0);
+
+ task->lock_depth = -1;
+ preempt_enable_no_resched();
+
+ down(&kernel_sem);
+
+ preempt_disable();
+ task->lock_depth = saved_lock_depth;
+
+ return 0;
+}
+
+void __lockfunc __release_kernel_lock(void)
+{
+ up(&kernel_sem);
+}
+
+/*
+ * Getting the big kernel semaphore.
+ */
+void __lockfunc lock_kernel(void)
+{
+ struct task_struct *task = current;
+ int depth = task->lock_depth + 1;
+
+ if (likely(!depth))
+ /*
+ * No recursion worries - we set up lock_depth _after_
+ */
+ down(&kernel_sem);
+
+ task->lock_depth = depth;
+}
+
+void __lockfunc unlock_kernel(void)
+{
+ struct task_struct *task = current;
+
+ BUG_ON(task->lock_depth < 0);
+
+ if (likely(--task->lock_depth < 0))
+ up(&kernel_sem);
+}
+
+#else
/*
* The 'big kernel lock'
*
* Don't use in new code.
*/
-static spinlock_t kernel_flag __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED;
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kernel_flag);
/*
* (This works on UP too - _raw_spin_trylock will never
* return false in that case)
*/
-int __lockfunc get_kernel_lock(void)
+int __lockfunc __reacquire_kernel_lock(void)
{
while (!_raw_spin_trylock(&kernel_flag)) {
if (test_thread_flag(TIF_NEED_RESCHED))
return 0;
}
-void __lockfunc put_kernel_lock(void)
+void __lockfunc __release_kernel_lock(void)
{
_raw_spin_unlock(&kernel_flag);
preempt_enable_no_resched();
__unlock_kernel();
}
+#endif
+
EXPORT_SYMBOL(lock_kernel);
EXPORT_SYMBOL(unlock_kernel);
+
#ifdef CONFIG_HOTPLUG
char hotplug_path[HOTPLUG_PATH_LEN] = "/sbin/hotplug";
u64 hotplug_seqnum;
-static spinlock_t sequence_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(sequence_lock);
/**
* kobject_hotplug - notify userspace by executing /sbin/hotplug
* Reed Solomon code lifted from reed solomon library written by Phil Karn
* Copyright 2002 Phil Karn, KA9Q
*
- * $Id: rslib.c,v 1.4 2004/10/05 22:07:53 gleixner Exp $
+ * $Id: rslib.c,v 1.5 2004/10/22 15:41:47 gleixner Exp $
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* Each user must call init_rs to get a pointer to a rs_control
* structure for the given rs parameters. This structure is either
* generated or an already available matching control structure is used.
- * If a structure is generated then the polynominal arrays for
+ * If a structure is generated then the polynomial arrays for
* fast encoding / decoding are built. This can take some time so
- * make sure not to call this function from a timecritical path.
- * Usually a module / driver should initialize the neccecary
+ * make sure not to call this function from a time critical path.
+ * Usually a module / driver should initialize the necessary
* rs_control structure on module / driver init and release it
* on exit.
- * The encoding puts the calculated syndrome into a given syndrom
+ * The encoding puts the calculated syndrome into a given syndrome
* buffer.
* The decoding is a two step process. The first step calculates
- * the syndrome over the received (data + syndrom) and calls the
+ * the syndrome over the received (data + syndrome) and calls the
* second stage, which does the decoding / error correction itself.
- * Many hw encoders provide a syndrom calculation over the received
- * data + syndrom and can call the second stage directly.
+ * Many hw encoders provide a syndrome calculation over the received
+ * data + syndrome and can call the second stage directly.
*
*/
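A minimal sketch of the calling convention described above (names and parameters are illustrative; it assumes the init_rs()/decode_rs8()/free_rs() entry points declared in <linux/rslib.h>):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/rslib.h>

/* Illustrative code parameters: 10-bit symbols over GF(2^10), generator
 * polynomial 0x409, fcr = 0, prim = 1, 4 roots -> corrects up to two
 * symbol errors per codeword. */
static struct rs_control *example_rs;

static int __init example_rs_init(void)
{
	example_rs = init_rs(10, 0x409, 0, 1, 4);
	return example_rs ? 0 : -ENOMEM;
}

/* Single-step decode: passing a NULL syndrome buffer makes the library
 * compute the syndrome from data + parity itself; a hardware checker
 * would pass its precomputed syndrome here instead. */
static int example_rs_correct(uint8_t *data, int len, uint16_t *par)
{
	return decode_rs8(example_rs, data, par, len, NULL, 0, NULL, 0, NULL);
}

static void __exit example_rs_exit(void)
{
	free_rs(example_rs);
}

module_init(example_rs_init);
module_exit(example_rs_exit);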
--- /dev/null
+/* internal.h: mm/ internal definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/* page_alloc.c */
+extern void set_page_refs(struct page *page, int order);
/* Ensure all existing pages follow the policy. */
static int
-verify_pages(unsigned long addr, unsigned long end, unsigned long *nodes)
+verify_pages(struct mm_struct *mm,
+ unsigned long addr, unsigned long end, unsigned long *nodes)
{
while (addr < end) {
struct page *p;
pte_t *pte;
pmd_t *pmd;
- pgd_t *pgd = pgd_offset_k(addr);
+ pud_t *pud;
+ pgd_t *pgd;
+ pgd = pgd_offset(mm, addr);
if (pgd_none(*pgd)) {
- addr = (addr + PGDIR_SIZE) & PGDIR_MASK;
+ unsigned long next = (addr + PGDIR_SIZE) & PGDIR_MASK;
+ if (next > addr)
+ break;
+ addr = next;
+ continue;
+ }
+ pud = pud_offset(pgd, addr);
+ if (pud_none(*pud)) {
+ addr = (addr + PUD_SIZE) & PUD_MASK;
continue;
}
- pmd = pmd_offset(pgd, addr);
+ pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd)) {
addr = (addr + PMD_SIZE) & PMD_MASK;
continue;
if (prev && prev->vm_end < vma->vm_start)
return ERR_PTR(-EFAULT);
if ((flags & MPOL_MF_STRICT) && !is_vm_hugetlb_page(vma)) {
- err = verify_pages(vma->vm_start, vma->vm_end, nodes);
+ err = verify_pages(vma->vm_mm,
+ vma->vm_start, vma->vm_end, nodes);
if (err) {
first = ERR_PTR(err);
break;
if (flags & ~(unsigned long)(MPOL_F_NODE|MPOL_F_ADDR))
return -EINVAL;
- if (nmask != NULL && maxnode < numnodes)
+ if (nmask != NULL && maxnode < MAX_NUMNODES)
return -EINVAL;
if (flags & MPOL_F_ADDR) {
down_read(&mm->mmap_sem);
} else
pval = pol->policy;
- err = -EFAULT;
+ if (vma) {
+ up_read(¤t->mm->mmap_sem);
+ vma = NULL;
+ }
+
if (policy && put_user(pval, policy))
- goto out;
+ return -EFAULT;
err = 0;
if (nmask) {
return -ENOMEM;
goto restart;
}
- n->end = end;
+ n->end = start;
sp_insert(sp, new2);
new2 = NULL;
- }
- /* Old crossing beginning, but not end (easy) */
- if (n->start < start && n->end > start)
+ break;
+ } else
n->end = start;
}
if (!next)
while (next) {
n = rb_entry(next, struct sp_node, nd);
next = rb_next(&n->nd);
- rb_erase(&n->nd, &p->root);
mpol_free(n->policy);
kmem_cache_free(sn_cache, n);
}
spin_unlock(&p->lock);
+ p->root = RB_ROOT;
}
/* assumes fs == KERNEL_DS */
#include <linux/sched.h>
#include <linux/swap.h>
-static spinlock_t swap_token_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(swap_token_lock);
static unsigned long swap_token_timeout;
unsigned long swap_token_check;
struct mm_struct * swap_token_mm = &init_mm;
* daddr=NULL means leave destination address (eg unresolved arp)
*/
-int fddi_header(struct sk_buff *skb, struct net_device *dev, unsigned short type,
- void *daddr, void *saddr, unsigned len)
+static int fddi_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type,
+ void *daddr, void *saddr, unsigned len)
{
int hl = FDDI_K_SNAP_HLEN;
struct fddihdr *fddi;
* this sk_buff. We now let ARP fill in the other fields.
*/
-int fddi_rebuild_header(struct sk_buff *skb)
+static int fddi_rebuild_header(struct sk_buff *skb)
{
struct fddihdr *fddi = (struct fddihdr *)skb->data;
#include <asm/checksum.h>
#include <asm/system.h>
-/*
- * hippi_net_init()
- *
- * Do nothing, this is just to pursuade the stupid linker to behave.
- */
-
-void hippi_net_init(void)
-{
- return;
-}
-
/*
* Create the HIPPI MAC header for an arbitrary protocol layer
*
* daddr=NULL means leave destination address (eg unresolved arp)
*/
-int hippi_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, void *daddr, void *saddr,
- unsigned len)
+static int hippi_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, void *daddr, void *saddr,
+ unsigned len)
{
struct hippi_hdr *hip = (struct hippi_hdr *)skb_push(skb, HIPPI_HLEN);
* completed on this sk_buff. We now let ARP fill in the other fields.
*/
-int hippi_rebuild_header(struct sk_buff *skb)
+static int hippi_rebuild_header(struct sk_buff *skb)
{
struct hippi_hdr *hip = (struct hippi_hdr *)skb->data;
#include <linux/init.h>
static LIST_HEAD(snap_list);
-static spinlock_t snap_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(snap_lock);
static struct llc_sap *snap_sap;
/*
*/
/* starting at dev, find a VLAN device */
-struct net_device *vlan_skip(struct net_device *dev)
+static struct net_device *vlan_skip(struct net_device *dev)
{
while (dev && !(dev->priv_flags & IFF_802_1Q_VLAN))
dev = dev->next;
#define vlan_proc_init() (0)
#define vlan_proc_cleanup() do {} while(0)
-#define vlan_proc_add_dev(dev) ((void)(dev), 0)
-#define vlan_proc_rem_dev(dev) ((void)(dev), 0)
+#define vlan_proc_add_dev(dev) ({(void)(dev), 0;})
+#define vlan_proc_rem_dev(dev) ({(void)(dev), 0;})
#endif
static int unresolved_count;
/* One lock protects it all. */
-static rwlock_t aarp_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(aarp_lock);
/* Used to walk the list and purge/kick entries. */
static struct timer_list aarp_timer;
* aarp_proxy_probe_network.
*/
-void aarp_send_probe(struct net_device *dev, struct atalk_addr *us)
+static void aarp_send_probe(struct net_device *dev, struct atalk_addr *us)
{
struct elapaarp *eah;
int len = dev->hard_header_len + sizeof(*eah) + aarp_dl->header_length;
* Probe a Phase 1 device or a device that requires its Net:Node to
* be set via an ioctl.
*/
-void aarp_send_probe_phase1(struct atalk_iface *iface)
+static void aarp_send_probe_phase1(struct atalk_iface *iface)
{
struct ifreq atreq;
struct sockaddr_at *sa = (struct sockaddr_at *)&atreq.ifr_addr;
return 0;
}
-struct seq_operations atalk_seq_interface_ops = {
+static struct seq_operations atalk_seq_interface_ops = {
.start = atalk_seq_interface_start,
.next = atalk_seq_interface_next,
.stop = atalk_seq_interface_stop,
.show = atalk_seq_interface_show,
};
-struct seq_operations atalk_seq_route_ops = {
+static struct seq_operations atalk_seq_route_ops = {
.start = atalk_seq_route_start,
.next = atalk_seq_route_next,
.stop = atalk_seq_route_stop,
.show = atalk_seq_route_show,
};
-struct seq_operations atalk_seq_socket_ops = {
+static struct seq_operations atalk_seq_socket_ops = {
.start = atalk_seq_socket_start,
.next = atalk_seq_socket_next,
.stop = atalk_seq_socket_stop,
establishes multiple Multicast Forward VCCs to us. This list
collects all those VCCs. LANEv1 client has only one item in this
list. These entries are not aged out. */
- atomic_t lec_arp_users;
spinlock_t lec_arp_lock;
struct atm_vcc *mcast_vcc; /* Default Multicast Send VCC */
struct atm_vcc *lecd;
#include <linux/kernel.h> /* for barrier */
#include <linux/module.h>
#include <linux/bitops.h>
+#include <linux/delay.h>
#include <net/sock.h> /* for struct sock */
#include "common.h"
LIST_HEAD(atm_devs);
-spinlock_t atm_dev_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(atm_dev_lock);
static struct atm_dev *__alloc_atm_dev(const char *type)
{
dev->signal = ATM_PHY_SIG_UNKNOWN;
dev->link_rate = ATM_OC3_PCR;
spin_lock_init(&dev->lock);
+ INIT_LIST_HEAD(&dev->local);
return dev;
}
warning_time = jiffies;
while (atomic_read(&dev->refcnt) != 1) {
- current->state = TASK_INTERRUPTIBLE;
- schedule_timeout(HZ / 4);
+ msleep(250);
if ((jiffies - warning_time) > 10 * HZ) {
printk(KERN_EMERG "atm_dev_deregister: waiting for "
"dev %d to become free. Usage count = %d\n",
#include <linux/init.h>
ax25_dev *ax25_dev_list;
-spinlock_t ax25_dev_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(ax25_dev_lock);
ax25_dev *ax25_addr_ax25dev(ax25_address *addr)
{
return res;
}
-void ax25_dev_dama_on(ax25_dev *ax25_dev)
+static void ax25_dev_dama_on(ax25_dev *ax25_dev)
{
if (ax25_dev == NULL)
return;
unsigned int pid;
int (*func)(struct sk_buff *, ax25_cb *);
} *protocol_list = NULL;
-static rwlock_t protocol_list_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(protocol_list_lock);
static struct linkfail_struct {
struct linkfail_struct *next;
void (*func)(ax25_cb *, int);
} *linkfail_list = NULL;
-static spinlock_t linkfail_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(linkfail_lock);
static struct listen_struct {
struct listen_struct *next;
ax25_address callsign;
struct net_device *dev;
} *listen_list = NULL;
-static spinlock_t listen_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(listen_lock);
int ax25_protocol_register(unsigned int pid,
int (*func)(struct sk_buff *, ax25_cb *))
#include <linux/mm.h>
#include <linux/interrupt.h>
-static spinlock_t ax25_frag_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ax25_frag_lock);
ax25_cb *ax25_send_frame(struct sk_buff *skb, int paclen, ax25_address *src, ax25_address *dest, ax25_digi *digi, struct net_device *dev)
{
*/
static ax25_uid_assoc *ax25_uid_list;
-static rwlock_t ax25_uid_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(ax25_uid_lock);
int ax25_uid_policy = 0;
#include <linux/socket.h>
#include <linux/ioctl.h>
#include <linux/file.h>
+#include <linux/wait.h>
#include <net/sock.h>
#include <linux/isdn/capilli.h>
return session->msgnum;
}
+static void cmtp_send_capimsg(struct cmtp_session *session, struct sk_buff *skb)
+{
+ struct cmtp_scb *scb = (void *) skb->cb;
+
+ BT_DBG("session %p skb %p len %d", session, skb, skb->len);
+
+ scb->id = -1;
+ scb->data = (CAPIMSG_COMMAND(skb->data) == CAPI_DATA_B3);
+
+ skb_queue_tail(&session->transmit, skb);
+
+ cmtp_schedule(session);
+}
static void cmtp_send_interopmsg(struct cmtp_session *session,
__u8 subcmd, __u16 appl, __u16 msgnum,
capi_ctr_handle_message(ctrl, appl, skb);
}
-void cmtp_send_capimsg(struct cmtp_session *session, struct sk_buff *skb)
-{
- struct cmtp_scb *scb = (void *) skb->cb;
-
- BT_DBG("session %p skb %p len %d", session, skb, skb->len);
-
- scb->id = -1;
- scb->data = (CAPIMSG_COMMAND(skb->data) == CAPI_DATA_B3);
-
- skb_queue_tail(&session->transmit, skb);
-
- cmtp_schedule(session);
-}
-
-
static int cmtp_load_firmware(struct capi_ctr *ctrl, capiloaddata *data)
{
BT_DBG("ctrl %p data %p", ctrl, data);
static void cmtp_release_appl(struct capi_ctr *ctrl, __u16 appl)
{
- DECLARE_WAITQUEUE(wait, current);
struct cmtp_session *session = ctrl->driverdata;
struct cmtp_application *application;
- unsigned long timeo = CMTP_INTEROP_TIMEOUT;
BT_DBG("ctrl %p appl %d", ctrl, appl);
cmtp_send_interopmsg(session, CAPI_REQ, application->mapping, application->msgnum,
CAPI_FUNCTION_RELEASE, NULL, 0);
- add_wait_queue(&session->wait, &wait);
- while (timeo) {
- set_current_state(TASK_INTERRUPTIBLE);
-
- if (application->state == BT_CLOSED)
- break;
-
- if (signal_pending(current))
- break;
-
- timeo = schedule_timeout(timeo);
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&session->wait, &wait);
+ wait_event_interruptible_timeout(session->wait,
+ (application->state == BT_CLOSED), CMTP_INTEROP_TIMEOUT);
cmtp_application_del(session, application);
}
int cmtp_attach_device(struct cmtp_session *session)
{
- DECLARE_WAITQUEUE(wait, current);
- unsigned long timeo = CMTP_INTEROP_TIMEOUT;
unsigned char buf[4];
+ long ret;
BT_DBG("session %p", session);
cmtp_send_interopmsg(session, CAPI_REQ, 0xffff, CMTP_INITIAL_MSGNUM,
CAPI_FUNCTION_GET_PROFILE, buf, 4);
- add_wait_queue(&session->wait, &wait);
- while (timeo) {
- set_current_state(TASK_INTERRUPTIBLE);
-
- if (session->ncontroller)
- break;
-
- if (signal_pending(current))
- break;
-
- timeo = schedule_timeout(timeo);
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(&session->wait, &wait);
-
+ ret = wait_event_interruptible_timeout(session->wait,
+ session->ncontroller, CMTP_INTEROP_TIMEOUT);
+
BT_INFO("Found %d CAPI controller(s) on device %s", session->ncontroller, session->name);
- if (!timeo)
+ if (!ret)
return -ETIMEDOUT;
if (!session->ncontroller)
return -ENODEV;
-
if (session->ncontroller > 1)
BT_INFO("Setting up only CAPI controller 1");
void cmtp_detach_device(struct cmtp_session *session);
void cmtp_recv_capimsg(struct cmtp_session *session, struct sk_buff *skb);
-void cmtp_send_capimsg(struct cmtp_session *session, struct sk_buff *skb);
static inline void cmtp_schedule(struct cmtp_session *session)
{
#include <linux/ioctl.h>
#include <linux/file.h>
#include <linux/init.h>
+#include <linux/wait.h>
#include <net/sock.h>
#include <linux/input.h>
#define BT_DBG(D...)
#endif
-#define VERSION "1.0"
+#define VERSION "1.1"
static DECLARE_RWSEM(hidp_session_sem);
static LIST_HEAD(hidp_session_list);
150,158,159,128,136,177,178,176,142,152,173,140
};
+static unsigned char hidp_mkeyspat[] = { 0x01, 0x01, 0x01, 0x01, 0x01, 0x01 };
+
static struct hidp_session *__hidp_get_session(bdaddr_t *bdaddr)
{
struct hidp_session *session;
struct sk_buff *skb;
unsigned char newleds;
- BT_DBG("session %p hid %p data %p size %d", session, device, data, size);
+ BT_DBG("input %p type %d code %d value %d", dev, type, code, value);
if (type != EV_LED)
return -1;
return -ENOMEM;
}
- *skb_put(skb, 1) = 0xa2;
+ *skb_put(skb, 1) = HIDP_TRANS_DATA | HIDP_DATA_RTYPE_OUPUT;
*skb_put(skb, 1) = 0x01;
*skb_put(skb, 1) = newleds;
for (i = 0; i < 8; i++)
input_report_key(dev, hidp_keycode[i + 224], (udata[0] >> i) & 1);
+ /* If all the key codes have been set to 0x01, it means
+ * too many keys were pressed at the same time. */
+ if (!memcmp(udata + 2, hidp_mkeyspat, 6))
+ break;
+
for (i = 2; i < 8; i++) {
if (keys[i] > 3 && memscan(udata + 2, keys[i], 6) == udata + 8) {
if (hidp_keycode[keys[i]])
del_timer(&session->timer);
}
-static inline void hidp_send_message(struct hidp_session *session, unsigned char hdr)
+static int __hidp_send_ctrl_message(struct hidp_session *session,
+ unsigned char hdr, unsigned char *data, int size)
{
struct sk_buff *skb;
- BT_DBG("session %p", session);
+ BT_DBG("session %p data %p size %d", session, data, size);
- if (!(skb = alloc_skb(1, GFP_ATOMIC))) {
- BT_ERR("Can't allocate memory for message");
- return;
+ if (!(skb = alloc_skb(size + 1, GFP_ATOMIC))) {
+ BT_ERR("Can't allocate memory for new frame");
+ return -ENOMEM;
}
*skb_put(skb, 1) = hdr;
+ if (data && size > 0)
+ memcpy(skb_put(skb, size), data, size);
skb_queue_tail(&session->ctrl_transmit, skb);
+ return 0;
+}
+
+static int inline hidp_send_ctrl_message(struct hidp_session *session,
+ unsigned char hdr, unsigned char *data, int size)
+{
+ int err;
+
+ err = __hidp_send_ctrl_message(session, hdr, data, size);
+
hidp_schedule(session);
+
+ return err;
}
-static inline int hidp_recv_frame(struct hidp_session *session, struct sk_buff *skb)
+static inline void hidp_process_handshake(struct hidp_session *session, unsigned char param)
{
- __u8 hdr;
+ BT_DBG("session %p param 0x%02x", session, param);
+
+ switch (param) {
+ case HIDP_HSHK_SUCCESSFUL:
+ /* FIXME: Call into SET_ GET_ handlers here */
+ break;
+
+ case HIDP_HSHK_NOT_READY:
+ case HIDP_HSHK_ERR_INVALID_REPORT_ID:
+ case HIDP_HSHK_ERR_UNSUPPORTED_REQUEST:
+ case HIDP_HSHK_ERR_INVALID_PARAMETER:
+ /* FIXME: Call into SET_ GET_ handlers here */
+ break;
+
+ case HIDP_HSHK_ERR_UNKNOWN:
+ break;
+
+ case HIDP_HSHK_ERR_FATAL:
+ /* Device requests a reboot, as this is the only way this error
+ * can be recovered. */
+ __hidp_send_ctrl_message(session,
+ HIDP_TRANS_HID_CONTROL | HIDP_CTRL_SOFT_RESET, NULL, 0);
+ break;
+
+ default:
+ __hidp_send_ctrl_message(session,
+ HIDP_TRANS_HANDSHAKE | HIDP_HSHK_ERR_INVALID_PARAMETER, NULL, 0);
+ break;
+ }
+}
+
+static inline void hidp_process_hid_control(struct hidp_session *session, unsigned char param)
+{
+ BT_DBG("session %p param 0x%02x", session, param);
+
+ switch (param) {
+ case HIDP_CTRL_NOP:
+ break;
+
+ case HIDP_CTRL_VIRTUAL_CABLE_UNPLUG:
+ /* Flush the transmit queues */
+ skb_queue_purge(&session->ctrl_transmit);
+ skb_queue_purge(&session->intr_transmit);
+
+ /* Kill session thread */
+ atomic_inc(&session->terminate);
+ break;
+
+ case HIDP_CTRL_HARD_RESET:
+ case HIDP_CTRL_SOFT_RESET:
+ case HIDP_CTRL_SUSPEND:
+ case HIDP_CTRL_EXIT_SUSPEND:
+ /* FIXME: We have to parse these and return no error */
+ break;
+
+ default:
+ __hidp_send_ctrl_message(session,
+ HIDP_TRANS_HANDSHAKE | HIDP_HSHK_ERR_INVALID_PARAMETER, NULL, 0);
+ break;
+ }
+}
+
+static inline void hidp_process_data(struct hidp_session *session, struct sk_buff *skb, unsigned char param)
+{
+ BT_DBG("session %p skb %p len %d param 0x%02x", session, skb, skb->len, param);
+
+ switch (param) {
+ case HIDP_DATA_RTYPE_INPUT:
+ hidp_set_timer(session);
+
+ if (session->input)
+ hidp_input_report(session, skb);
+ break;
+
+ case HIDP_DATA_RTYPE_OTHER:
+ case HIDP_DATA_RTYPE_OUPUT:
+ case HIDP_DATA_RTYPE_FEATURE:
+ break;
+
+ default:
+ __hidp_send_ctrl_message(session,
+ HIDP_TRANS_HANDSHAKE | HIDP_HSHK_ERR_INVALID_PARAMETER, NULL, 0);
+ }
+}
+
+static inline void hidp_recv_ctrl_frame(struct hidp_session *session, struct sk_buff *skb)
+{
+ unsigned char hdr, type, param;
BT_DBG("session %p skb %p len %d", session, skb, skb->len);
hdr = skb->data[0];
skb_pull(skb, 1);
- if (hdr == 0xa1) {
- hidp_set_timer(session);
+ type = hdr & HIDP_HEADER_TRANS_MASK;
+ param = hdr & HIDP_HEADER_PARAM_MASK;
+
+ switch (type) {
+ case HIDP_TRANS_HANDSHAKE:
+ hidp_process_handshake(session, param);
+ break;
+
+ case HIDP_TRANS_HID_CONTROL:
+ hidp_process_hid_control(session, param);
+ break;
+
+ case HIDP_TRANS_DATA:
+ hidp_process_data(session, skb, param);
+ break;
+
+ default:
+ __hidp_send_ctrl_message(session,
+ HIDP_TRANS_HANDSHAKE | HIDP_HSHK_ERR_UNSUPPORTED_REQUEST, NULL, 0);
+ break;
+ }
+
+ kfree_skb(skb);
+}
+
+static inline void hidp_recv_intr_frame(struct hidp_session *session, struct sk_buff *skb)
+{
+ unsigned char hdr;
+
+ BT_DBG("session %p skb %p len %d", session, skb, skb->len);
+
+ hdr = skb->data[0];
+ skb_pull(skb, 1);
+ if (hdr == (HIDP_TRANS_DATA | HIDP_DATA_RTYPE_INPUT)) {
+ hidp_set_timer(session);
if (session->input)
hidp_input_report(session, skb);
} else {
}
kfree_skb(skb);
- return 0;
}
static int hidp_send_frame(struct socket *sock, unsigned char *data, int len)
struct sk_buff *skb;
int vendor = 0x0000, product = 0x0000;
wait_queue_t ctrl_wait, intr_wait;
- unsigned long timeo = HZ;
BT_DBG("session %p", session);
while ((skb = skb_dequeue(&ctrl_sk->sk_receive_queue))) {
skb_orphan(skb);
- hidp_recv_frame(session, skb);
+ hidp_recv_ctrl_frame(session, skb);
}
while ((skb = skb_dequeue(&intr_sk->sk_receive_queue))) {
skb_orphan(skb);
- hidp_recv_frame(session, skb);
+ hidp_recv_intr_frame(session, skb);
}
hidp_process_transmit(session);
hidp_del_timer(session);
- if (intr_sk->sk_state != BT_CONNECTED) {
- init_waitqueue_entry(&ctrl_wait, current);
- add_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
- while (timeo && ctrl_sk->sk_state != BT_CLOSED) {
- set_current_state(TASK_INTERRUPTIBLE);
- timeo = schedule_timeout(timeo);
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
- timeo = HZ;
- }
+ if (intr_sk->sk_state != BT_CONNECTED)
+ wait_event_timeout(*(ctrl_sk->sk_sleep), (ctrl_sk->sk_state == BT_CLOSED), HZ);
fput(session->ctrl_sock->file);
- init_waitqueue_entry(&intr_wait, current);
- add_wait_queue(intr_sk->sk_sleep, &intr_wait);
- while (timeo && intr_sk->sk_state != BT_CLOSED) {
- set_current_state(TASK_INTERRUPTIBLE);
- timeo = schedule_timeout(timeo);
- }
- set_current_state(TASK_RUNNING);
- remove_wait_queue(intr_sk->sk_sleep, &intr_wait);
+ wait_event_timeout(*(intr_sk->sk_sleep), (intr_sk->sk_state == BT_CLOSED), HZ);
fput(session->intr_sock->file);
goto unlink;
if (session->input) {
- hidp_send_message(session, 0x70);
+ hidp_send_ctrl_message(session,
+ HIDP_TRANS_SET_PROTOCOL | HIDP_PROTO_BOOT, NULL, 0);
session->flags |= (1 << HIDP_BOOT_PROTOCOL_MODE);
session->leds = 0xff;
session = __hidp_get_session(&req->bdaddr);
if (session) {
if (req->flags & (1 << HIDP_VIRTUAL_CABLE_UNPLUG)) {
- hidp_send_message(session, 0x15);
+ hidp_send_ctrl_message(session,
+ HIDP_TRANS_HID_CONTROL | HIDP_CTRL_VIRTUAL_CABLE_UNPLUG, NULL, 0);
} else {
/* Flush the transmit queues */
skb_queue_purge(&session->ctrl_transmit);
};
static LIST_HEAD(rfcomm_dev_list);
-static rwlock_t rfcomm_dev_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(rfcomm_dev_lock);
static void rfcomm_dev_data_ready(struct rfcomm_dlc *dlc, struct sk_buff *skb);
static void rfcomm_dev_state_change(struct rfcomm_dlc *dlc, int err);
{
struct sk_buff *skb = *pskb;
-#ifdef CONFIG_SYSCTL
- if (!skb->nf_bridge) {
- struct vlan_ethhdr *hdr = vlan_eth_hdr(skb);
-
- if (skb->protocol == __constant_htons(ETH_P_IP) ||
- IS_VLAN_IP) {
- if (!brnf_call_iptables)
- return NF_ACCEPT;
- } else if (!brnf_call_ip6tables)
- return NF_ACCEPT;
- }
-#endif
-
if ((out->hard_start_xmit == br_dev_xmit &&
okfn != br_nf_forward_finish &&
okfn != br_nf_local_out_finish &&
) {
struct nf_bridge_info *nf_bridge;
- if (!skb->nf_bridge && !nf_bridge_alloc(skb))
- return NF_DROP;
+ if (!skb->nf_bridge) {
+#ifdef CONFIG_SYSCTL
+ /* This code is executed while in the IP(v6) stack;
+ the version should be 4 or 6. We can't use
+ skb->protocol because that isn't set on
+ PF_INET(6)/LOCAL_OUT. */
+ struct iphdr *ip = skb->nh.iph;
+
+ if (ip->version == 4 && !brnf_call_iptables)
+ return NF_ACCEPT;
+ else if (ip->version == 6 && !brnf_call_ip6tables)
+ return NF_ACCEPT;
+#endif
+ if (hook == NF_IP_POST_ROUTING)
+ return NF_ACCEPT;
+ if (!nf_bridge_alloc(skb))
+ return NF_DROP;
+ }
nf_bridge = skb->nf_bridge;
tristate "ebt: log support"
depends on BRIDGE_NF_EBTABLES
help
- This option adds the log target, that you can use in any rule in
- any ebtables table. It records the frame header to the syslog.
+ This option adds the log watcher, which you can use in any rule
+ in any ebtables table. It records info about the frame header
+ to the syslog.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
+config BRIDGE_EBT_ULOG
+ tristate "ebt: ulog support"
+ depends on BRIDGE_NF_EBTABLES
+ help
+ This option adds the ulog watcher, which you can use in any rule
+ in any ebtables table. The packet is passed to a userspace
+ logging daemon using netlink multicast sockets. This differs
+ from the log watcher in the sense that the complete packet is
+ sent to userspace instead of a descriptive text and that
+ netlink multicast sockets are used instead of the syslog.
To compile it as a module, choose M here. If unsure, say N.
# watchers
obj-$(CONFIG_BRIDGE_EBT_LOG) += ebt_log.o
+obj-$(CONFIG_BRIDGE_EBT_ULOG) += ebt_ulog.o
#include <linux/netdevice.h>
#include <linux/spinlock.h>
-static spinlock_t limit_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(limit_lock);
#define MAX_CPJ (0xFFFFFFFF / (HZ*60*60*24))
#include <linux/if_arp.h>
#include <linux/spinlock.h>
-static spinlock_t ebt_log_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ebt_log_lock);
static int ebt_log_check(const char *tablename, unsigned int hookmask,
const struct ebt_entry *e, void *data, unsigned int datalen)
}
#define myNIPQUAD(a) a[0], a[1], a[2], a[3]
-static void ebt_log(const struct sk_buff *skb, const struct net_device *in,
- const struct net_device *out, const void *data, unsigned int datalen)
+static void ebt_log(const struct sk_buff *skb, unsigned int hooknr,
+ const struct net_device *in, const struct net_device *out,
+ const void *data, unsigned int datalen)
{
struct ebt_log_info *info = (struct ebt_log_info *)data;
char level_string[4] = "< >";
--- /dev/null
+/*
+ * netfilter module for passing bridged Ethernet frames to userspace logging daemons
+ *
+ * Authors:
+ * Bart De Schuymer <bdschuym@pandora.be>
+ *
+ * November, 2004
+ *
+ * Based on ipt_ULOG.c, which is
+ * (C) 2000-2002 by Harald Welte <laforge@netfilter.org>
+ *
+ * This module accepts two parameters:
+ *
+ * nlbufsiz:
+ * The parameter specifies how big the buffer for each netlink multicast
+ * group is, e.g. if you say nlbufsiz=8192, up to 8kB of packets will
+ * get accumulated in the kernel until they are sent to userspace. It is
+ * NOT possible to allocate more than 128kB, and even that is strongly discouraged,
+ * because atomically allocating 128kB inside the network rx softirq is not
+ * reliable. Please also keep in mind that this buffer size is allocated for
+ * each nlgroup you are using, so the total kernel memory usage increases
+ * by that factor.
+ *
+ * flushtimeout:
+ * Specify after how many hundredths of a second the queue should be
+ * flushed even if it is not full yet.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/config.h>
+#include <linux/spinlock.h>
+#include <linux/socket.h>
+#include <linux/skbuff.h>
+#include <linux/kernel.h>
+#include <linux/timer.h>
+#include <linux/netlink.h>
+#include <linux/netdevice.h>
+#include <linux/netfilter_bridge/ebtables.h>
+#include <linux/netfilter_bridge/ebt_ulog.h>
+#include <net/sock.h>
+#include "../br_private.h"
+
+#define PRINTR(format, args...) do { if (net_ratelimit()) \
+ printk(format , ## args); } while (0)
+
+static unsigned int nlbufsiz = 4096;
+module_param(nlbufsiz, uint, 0600);
+MODULE_PARM_DESC(nlbufsiz, "netlink buffer size (number of bytes) "
+ "(defaults to 4096)");
+
+static unsigned int flushtimeout = 10;
+module_param(flushtimeout, uint, 0600);
+MODULE_PARM_DESC(flushtimeout, "buffer flush timeout (hundredths of a second) "
+ "(defaults to 10)");
+
+typedef struct {
+ unsigned int qlen; /* number of nlmsgs in the skb */
+ struct nlmsghdr *lastnlh; /* netlink header of last msg in skb */
+ struct sk_buff *skb; /* the pre-allocated skb */
+ struct timer_list timer; /* the timer function */
+ spinlock_t lock; /* the per-queue lock */
+} ebt_ulog_buff_t;
+
+static ebt_ulog_buff_t ulog_buffers[EBT_ULOG_MAXNLGROUPS];
+static struct sock *ebtulognl;
+
+/* send one ulog_buff_t to userspace */
+static void ulog_send(unsigned int nlgroup)
+{
+ ebt_ulog_buff_t *ub = &ulog_buffers[nlgroup];
+
+ if (timer_pending(&ub->timer))
+ del_timer(&ub->timer);
+
+ /* last nlmsg needs NLMSG_DONE */
+ if (ub->qlen > 1)
+ ub->lastnlh->nlmsg_type = NLMSG_DONE;
+
+ NETLINK_CB(ub->skb).dst_groups = 1 << nlgroup;
+ netlink_broadcast(ebtulognl, ub->skb, 0, 1 << nlgroup, GFP_ATOMIC);
+
+ ub->qlen = 0;
+ ub->skb = NULL;
+}
+
+/* timer function to flush queue in flushtimeout time */
+static void ulog_timer(unsigned long data)
+{
+ spin_lock_bh(&ulog_buffers[data].lock);
+ if (ulog_buffers[data].skb)
+ ulog_send(data);
+ spin_unlock_bh(&ulog_buffers[data].lock);
+}
+
+static struct sk_buff *ulog_alloc_skb(unsigned int size)
+{
+ struct sk_buff *skb;
+
+ skb = alloc_skb(nlbufsiz, GFP_ATOMIC);
+ if (!skb) {
+ PRINTR(KERN_ERR "ebt_ulog: can't alloc whole buffer "
+ "of size %ub!\n", nlbufsiz);
+ if (size < nlbufsiz) {
+ /* try to allocate only as much as we need for
+ * current packet */
+ skb = alloc_skb(size, GFP_ATOMIC);
+ if (!skb)
+ PRINTR(KERN_ERR "ebt_ulog: can't even allocate "
+ "buffer of size %ub\n", size);
+ }
+ }
+
+ return skb;
+}
+
+static void ebt_ulog(const struct sk_buff *skb, unsigned int hooknr,
+ const struct net_device *in, const struct net_device *out,
+ const void *data, unsigned int datalen)
+{
+ ebt_ulog_packet_msg_t *pm;
+ size_t size, copy_len;
+ struct nlmsghdr *nlh;
+ struct ebt_ulog_info *uloginfo = (struct ebt_ulog_info *)data;
+ unsigned int group = uloginfo->nlgroup;
+ ebt_ulog_buff_t *ub = &ulog_buffers[group];
+ spinlock_t *lock = &ub->lock;
+
+ if ((uloginfo->cprange == 0) ||
+ (uloginfo->cprange > skb->len + ETH_HLEN))
+ copy_len = skb->len + ETH_HLEN;
+ else
+ copy_len = uloginfo->cprange;
+
+ size = NLMSG_SPACE(sizeof(*pm) + copy_len);
+ if (size > nlbufsiz) {
+ PRINTR("ebt_ulog: Size %Zd needed, but nlbufsiz=%d\n",
+ size, nlbufsiz);
+ return;
+ }
+
+ spin_lock_bh(lock);
+
+ if (!ub->skb) {
+ if (!(ub->skb = ulog_alloc_skb(size)))
+ goto alloc_failure;
+ } else if (size > skb_tailroom(ub->skb)) {
+ ulog_send(group);
+
+ if (!(ub->skb = ulog_alloc_skb(size)))
+ goto alloc_failure;
+ }
+
+ nlh = NLMSG_PUT(ub->skb, 0, ub->qlen, 0,
+ size - NLMSG_ALIGN(sizeof(*nlh)));
+ ub->qlen++;
+
+ pm = NLMSG_DATA(nlh);
+
+ /* Fill in the ulog data */
+ pm->version = EBT_ULOG_VERSION;
+ do_gettimeofday(&pm->stamp);
+ if (ub->qlen == 1)
+ ub->skb->stamp = pm->stamp;
+ pm->data_len = copy_len;
+ pm->mark = skb->nfmark;
+ pm->hook = hooknr;
+ if (uloginfo->prefix != NULL)
+ strcpy(pm->prefix, uloginfo->prefix);
+ else
+ *(pm->prefix) = '\0';
+
+ if (in) {
+ strcpy(pm->physindev, in->name);
+ /* If in isn't a bridge, then physindev==indev */
+ if (in->br_port)
+ strcpy(pm->indev, in->br_port->br->dev->name);
+ else
+ strcpy(pm->indev, in->name);
+ } else
+ pm->indev[0] = pm->physindev[0] = '\0';
+
+ if (out) {
+ /* If out exists, then out is a bridge port */
+ strcpy(pm->physoutdev, out->name);
+ strcpy(pm->outdev, out->br_port->br->dev->name);
+ } else
+ pm->outdev[0] = pm->physoutdev[0] = '\0';
+
+ if (skb_copy_bits(skb, -ETH_HLEN, pm->data, copy_len) < 0)
+ BUG();
+
+ if (ub->qlen > 1)
+ ub->lastnlh->nlmsg_flags |= NLM_F_MULTI;
+
+ ub->lastnlh = nlh;
+
+ if (ub->qlen >= uloginfo->qthreshold)
+ ulog_send(group);
+ else if (!timer_pending(&ub->timer)) {
+ ub->timer.expires = jiffies + flushtimeout * HZ / 100;
+ add_timer(&ub->timer);
+ }
+
+unlock:
+ spin_unlock_bh(lock);
+
+ return;
+
+nlmsg_failure:
+ printk(KERN_CRIT "ebt_ulog: error during NLMSG_PUT. This should "
+ "not happen, please report to author.\n");
+ goto unlock;
+alloc_failure:
+ goto unlock;
+}
+
+static int ebt_ulog_check(const char *tablename, unsigned int hookmask,
+ const struct ebt_entry *e, void *data, unsigned int datalen)
+{
+ struct ebt_ulog_info *uloginfo = (struct ebt_ulog_info *)data;
+
+ if (datalen != EBT_ALIGN(sizeof(struct ebt_ulog_info)) ||
+ uloginfo->nlgroup > 31)
+ return -EINVAL;
+
+ uloginfo->prefix[EBT_ULOG_PREFIX_LEN - 1] = '\0';
+
+ if (uloginfo->qthreshold > EBT_ULOG_MAX_QLEN)
+ uloginfo->qthreshold = EBT_ULOG_MAX_QLEN;
+
+ return 0;
+}
+
+static struct ebt_watcher ulog = {
+ .name = EBT_ULOG_WATCHER,
+ .watcher = ebt_ulog,
+ .check = ebt_ulog_check,
+ .me = THIS_MODULE,
+};
+
+static int __init init(void)
+{
+ int i, ret = 0;
+
+ if (nlbufsiz >= 128*1024) {
+ printk(KERN_NOTICE "ebt_ulog: Netlink buffer has to be <= 128kB,"
+ " please try a smaller nlbufsiz parameter.\n");
+ return -EINVAL;
+ }
+
+ /* initialize ulog_buffers */
+ for (i = 0; i < EBT_ULOG_MAXNLGROUPS; i++) {
+ init_timer(&ulog_buffers[i].timer);
+ ulog_buffers[i].timer.function = ulog_timer;
+ ulog_buffers[i].timer.data = i;
+ spin_lock_init(&ulog_buffers[i].lock);
+ }
+
+ ebtulognl = netlink_kernel_create(NETLINK_NFLOG, NULL);
+ if (!ebtulognl)
+ ret = -ENOMEM;
+ else if ((ret = ebt_register_watcher(&ulog)))
+ sock_release(ebtulognl->sk_socket);
+
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ ebt_ulog_buff_t *ub;
+ int i;
+
+ ebt_unregister_watcher(&ulog);
+ for (i = 0; i < EBT_ULOG_MAXNLGROUPS; i++) {
+ ub = &ulog_buffers[i];
+ if (timer_pending(&ub->timer))
+ del_timer(&ub->timer);
+ spin_lock_bh(&ub->lock);
+ if (ub->skb) {
+ kfree_skb(ub->skb);
+ ub->skb = NULL;
+ }
+ spin_unlock_bh(&ub->lock);
+ }
+ sock_release(ebtulognl->sk_socket);
+}
+
+module_init(init);
+module_exit(fini);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Bart De Schuymer <bdschuym@pandora.be>");
+MODULE_DESCRIPTION("ebtables userspace logging module for bridged Ethernet"
+ " frames");
{
int alloc_size = (sizeof(struct divert_blk) + 3) & ~3;
+ dev->divert = NULL;
if (dev->type == ARPHRD_ETHER) {
- printk(KERN_DEBUG "divert: allocating divert_blk for %s\n",
- dev->name);
-
dev->divert = (struct divert_blk *)
kmalloc(alloc_size, GFP_KERNEL);
if (dev->divert == NULL) {
- printk(KERN_DEBUG "divert: unable to allocate divert_blk for %s\n",
+ printk(KERN_INFO "divert: unable to allocate divert_blk for %s\n",
dev->name);
return -ENOMEM;
- } else {
- memset(dev->divert, 0, sizeof(struct divert_blk));
}
- dev_hold(dev);
- } else {
- printk(KERN_DEBUG "divert: not allocating divert_blk for non-ethernet device %s\n",
- dev->name);
- dev->divert = NULL;
+ memset(dev->divert, 0, sizeof(struct divert_blk));
+ dev_hold(dev);
}
+
return 0;
}
kfree(dev->divert);
dev->divert=NULL;
dev_put(dev);
- printk(KERN_DEBUG "divert: freeing divert_blk for %s\n",
- dev->name);
- } else {
- printk(KERN_DEBUG "divert: no divert_blk to free, %s not ethernet\n",
- dev->name);
}
}
/*
* control function of the diverter
*/
+#if 0
#define DVDBG(a) \
printk(KERN_DEBUG "divert_ioctl() line %d %s\n", __LINE__, (a))
+#else
+#define DVDBG(a)
+#endif
int divert_ioctl(unsigned int cmd, struct divert_cf __user *arg)
{
static struct gen_estimator_head elist[EST_MAX_INTERVAL+1];
/* Estimator array lock */
-static rwlock_t est_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(est_lock);
static void est_timer(unsigned long arg)
{
static DECLARE_WORK(linkwatch_work, linkwatch_event, NULL);
static LIST_HEAD(lweventlist);
-static spinlock_t lweventlist_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(lweventlist_lock);
struct lw_event {
struct list_head list;
sizeof(struct iw_ioctl_description));
/* Size (in bytes) of the various private data types */
-const char iw_priv_type_size[] = {
+static const char iw_priv_type_size[] = {
0, /* IW_PRIV_TYPE_NONE */
1, /* IW_PRIV_TYPE_BYTE */
1, /* IW_PRIV_TYPE_CHAR */
extern int dn_cache_dump(struct sk_buff *skb, struct netlink_callback *cb);
-static spinlock_t dn_fib_multipath_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(dn_fib_multipath_lock);
static struct dn_fib_info *dn_fib_info_list;
-static rwlock_t dn_fib_info_lock = RW_LOCK_UNLOCKED;
-int dn_fib_info_cnt;
+static DEFINE_RWLOCK(dn_fib_info_lock);
static struct
{
dev_put(nh->nh_dev);
nh->nh_dev = NULL;
} endfor_nexthops(fi);
- dn_fib_info_cnt--;
kfree(fi);
}
if (dn_fib_info_list)
dn_fib_info_list->fib_prev = fi;
dn_fib_info_list = fi;
- dn_fib_info_cnt++;
write_unlock(&dn_fib_info_lock);
return fi;
}
-/*
- * Punt to user via netlink for example, but for now
- * we just drop it.
- */
-int dn_fib_rt_message(struct sk_buff *skb)
-{
- kfree_skb(skb);
-
- return 0;
-}
-
-
static int dn_fib_check_attr(struct rtmsg *r, struct rtattr **rta)
{
int i;
* basically does a neigh_lookup(), but without comparing the device
* field. This is required for the On-Ethernet cache
*/
-/*
- * Any traffic on a pointopoint link causes the timer to be reset
- * for the entry in the neighbour table.
- */
-void dn_neigh_pointopoint_notify(struct sk_buff *skb)
-{
- return;
-}
/*
* Pointopoint link receives a hello message
};
static struct dn_fib_rule *dn_fib_rules = &default_rule;
-static rwlock_t dn_fib_rules_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(dn_fib_rules_lock);
int dn_fib_rtm_delrule(struct sk_buff *skb, struct nlmsghdr *nlh, void *arg)
#endif
(!rtm->rtm_type || rtm->rtm_type == r->r_action) &&
(!rta[RTA_PRIORITY-1] || memcmp(RTA_DATA(rta[RTA_PRIORITY-1]), &r->r_preference, 4) == 0) &&
- (!rta[RTA_IIF-1] || strcmp(RTA_DATA(rta[RTA_IIF-1]), r->r_ifname) == 0) &&
+ (!rta[RTA_IIF-1] || rtattr_strcmp(rta[RTA_IIF-1], r->r_ifname) == 0) &&
(!rtm->rtm_table || (r && rtm->rtm_table == r->r_table))) {
err = -EPERM;
new_r->r_table = table_id;
if (rta[RTA_IIF-1]) {
struct net_device *dev;
- memcpy(new_r->r_ifname, RTA_DATA(rta[RTA_IIF-1]), IFNAMSIZ);
- new_r->r_ifname[IFNAMSIZ-1] = 0;
+ rtattr_strlcpy(new_r->r_ifname, rta[RTA_IIF-1], IFNAMSIZ);
new_r->r_ifindex = -1;
dev = dev_get_by_name(new_r->r_ifname);
if (dev) {
#define RT_TABLE_MIN 1
-static rwlock_t dn_fib_tables_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(dn_fib_tables_lock);
struct dn_fib_table *dn_fib_tables[RT_TABLE_MAX + 1];
static kmem_cache_t *dn_hash_kmem;
#include <net/dn.h>
/*
- * Fast timer is for delayed acks (200mS max)
* Slow timer is for everything else (n * 500mS)
*/
-#define FAST_INTERVAL (HZ/5)
#define SLOW_INTERVAL (HZ/2)
static void dn_slow_timer(unsigned long arg);
bh_unlock_sock(sk);
sock_put(sk);
}
-
-static void dn_fast_timer(unsigned long arg)
-{
- struct sock *sk = (struct sock *)arg;
- struct dn_scp *scp = DN_SK(sk);
-
- bh_lock_sock(sk);
- if (sock_owned_by_user(sk)) {
- scp->delack_timer.expires = jiffies + HZ / 20;
- add_timer(&scp->delack_timer);
- goto out;
- }
-
- scp->delack_pending = 0;
-
- if (scp->delack_fxn)
- scp->delack_fxn(sk);
-out:
- bh_unlock_sock(sk);
-}
-
-void dn_start_fast_timer(struct sock *sk)
-{
- struct dn_scp *scp = DN_SK(sk);
-
- if (!scp->delack_pending) {
- scp->delack_pending = 1;
- init_timer(&scp->delack_timer);
- scp->delack_timer.expires = jiffies + FAST_INTERVAL;
- scp->delack_timer.data = (unsigned long)sk;
- scp->delack_timer.function = dn_fast_timer;
- add_timer(&scp->delack_timer);
- }
-}
-
-void dn_stop_fast_timer(struct sock *sk)
-{
- struct dn_scp *scp = DN_SK(sk);
-
- if (scp->delack_pending) {
- scp->delack_pending = 0;
- del_timer(&scp->delack_timer);
- }
-}
-
int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
{
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
struct sockaddr_in *usin = (struct sockaddr_in *) uaddr;
struct rtable *rt;
u32 saddr;
extern int fib_dump_info(struct sk_buff *skb, u32 pid, u32 seq, int event,
u8 tb_id, u8 type, u8 scope, void *dst,
int dst_len, u8 tos, struct fib_info *fi);
+extern void rtmsg_fib(int event, u32 key, struct fib_alias *fa,
+ int z, int tb_id,
+ struct nlmsghdr *n, struct netlink_skb_parms *req);
+extern struct fib_alias *fib_find_alias(struct list_head *fah,
+ u8 tos, u32 prio);
+extern int fib_detect_death(struct fib_info *fi, int order,
+ struct fib_info **last_resort,
+ int *last_idx, int *dflt);
#endif /* _FIB_LOOKUP_H */
};
static struct fib_rule *fib_rules = &local_rule;
-static rwlock_t fib_rules_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(fib_rules_lock);
int inet_rtm_delrule(struct sk_buff *skb, struct nlmsghdr* nlh, void *arg)
{
#endif
(!rtm->rtm_type || rtm->rtm_type == r->r_action) &&
(!rta[RTA_PRIORITY-1] || memcmp(RTA_DATA(rta[RTA_PRIORITY-1]), &r->r_preference, 4) == 0) &&
- (!rta[RTA_IIF-1] || strcmp(RTA_DATA(rta[RTA_IIF-1]), r->r_ifname) == 0) &&
+ (!rta[RTA_IIF-1] || rtattr_strcmp(rta[RTA_IIF-1], r->r_ifname) == 0) &&
(!rtm->rtm_table || (r && rtm->rtm_table == r->r_table))) {
err = -EPERM;
if (r == &local_rule)
new_r->r_table = table_id;
if (rta[RTA_IIF-1]) {
struct net_device *dev;
- memcpy(new_r->r_ifname, RTA_DATA(rta[RTA_IIF-1]), IFNAMSIZ);
- new_r->r_ifname[IFNAMSIZ-1] = 0;
+ rtattr_strlcpy(new_r->r_ifname, rta[RTA_IIF-1], IFNAMSIZ);
new_r->r_ifindex = -1;
dev = __dev_get_by_name(new_r->r_ifname);
if (dev)
return 0;
}
-u32 fib_rules_map_destination(u32 daddr, struct fib_result *res)
-{
- u32 mask = inet_make_mask(res->prefixlen);
- return (daddr&~mask)|res->fi->fib_nh->nh_gw;
-}
-
#ifdef CONFIG_NET_CLS_ROUTE
u32 fib_rules_tclass(struct fib_result *res)
{
}
-struct notifier_block fib_rules_notifier = {
+static struct notifier_block fib_rules_notifier = {
.notifier_call =fib_rules_event,
};
*/
/* Exported for inet_getid inline function. */
-spinlock_t inet_peer_idlock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(inet_peer_idlock);
static kmem_cache_t *peer_cachep;
};
#define peer_avl_empty (&peer_fake_node)
static struct inet_peer *peer_root = peer_avl_empty;
-static rwlock_t peer_pool_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(peer_pool_lock);
#define PEER_MAXDEPTH 40 /* sufficient for about 2^27 nodes */
static volatile int peer_total;
/* Exported for inet_putpeer inline function. */
struct inet_peer *inet_peer_unused_head,
**inet_peer_unused_tailp = &inet_peer_unused_head;
-spinlock_t inet_peer_unused_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(inet_peer_unused_lock);
#define PEER_MAX_CLEANUP_WORK 30
static void peer_check_expire(unsigned long dummy);
/*
* Allocate/initialize app incarnation and register it in proto apps.
*/
-int
+static int
ip_vs_app_inc_new(struct ip_vs_app *app, __u16 proto, __u16 port)
{
struct ip_vs_protocol *pp;
} __attribute__((__aligned__(SMP_CACHE_BYTES)));
/* lock array for conn table */
-struct ip_vs_aligned_lock
+static struct ip_vs_aligned_lock
__ip_vs_conntbl_lock_array[CT_LOCKARRAY_SIZE] __cacheline_aligned;
static inline void ct_read_lock(unsigned key)
static struct ip_vs_estimator *est_list = NULL;
-static rwlock_t est_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(est_lock);
static struct timer_list est_timer;
static void estimation_timer(unsigned long arg)
/*
* register an ipvs protocol
*/
-int register_ip_vs_protocol(struct ip_vs_protocol *pp)
+static int register_ip_vs_protocol(struct ip_vs_protocol *pp)
{
unsigned hash = IP_VS_PROTO_HASH(pp->protocol);
/*
* unregister an ipvs protocol
*/
-int unregister_ip_vs_protocol(struct ip_vs_protocol *pp)
+static int unregister_ip_vs_protocol(struct ip_vs_protocol *pp)
{
struct ip_vs_protocol **pp_p;
unsigned hash = IP_VS_PROTO_HASH(pp->protocol);
static char * icmp_state_name_table[1] = { "ICMP" };
-struct ip_vs_conn *
+static struct ip_vs_conn *
icmp_conn_in_get(const struct sk_buff *skb,
struct ip_vs_protocol *pp,
const struct iphdr *iph,
#endif
}
-struct ip_vs_conn *
+static struct ip_vs_conn *
icmp_conn_out_get(const struct sk_buff *skb,
struct ip_vs_protocol *pp,
const struct iphdr *iph,
#define TCP_APP_TAB_MASK (TCP_APP_TAB_SIZE - 1)
static struct list_head tcp_apps[TCP_APP_TAB_SIZE];
-static spinlock_t tcp_app_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(tcp_app_lock);
static inline __u16 tcp_app_hashkey(__u16 port)
{
#define UDP_APP_TAB_MASK (UDP_APP_TAB_SIZE - 1)
static struct list_head udp_apps[UDP_APP_TAB_SIZE];
-static spinlock_t udp_app_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(udp_app_lock);
static inline __u16 udp_app_hashkey(__u16 port)
{
static LIST_HEAD(ip_vs_schedulers);
/* lock for service table */
-static rwlock_t __ip_vs_sched_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(__ip_vs_sched_lock);
/*
static struct arpt_table packet_filter = {
.name = "filter",
- .table = &initial_table.repl,
.valid_hooks = FILTER_VALID_HOOKS,
.lock = RW_LOCK_UNLOCKED,
.private = NULL,
int ret, i;
/* Register table */
- ret = arpt_register_table(&packet_filter);
+ ret = arpt_register_table(&packet_filter, &initial_table.repl);
if (ret < 0)
return ret;
static char irc_buffer[65536];
static DECLARE_LOCK(irc_buffer_lock);
+unsigned int (*ip_nat_irc_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp);
+EXPORT_SYMBOL_GPL(ip_nat_irc_hook);
+
MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
MODULE_DESCRIPTION("IRC (DCC) connection tracking helper");
MODULE_LICENSE("GPL");
static char *dccprotos[] = { "SEND ", "CHAT ", "MOVE ", "TSEND ", "SCHAT " };
#define MINMATCHLEN 5
-struct module *ip_conntrack_irc = THIS_MODULE;
-
#if 0
#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s:" format, \
__FILE__, __FUNCTION__ , ## args)
return 0;
}
-static int help(struct sk_buff *skb,
+static int help(struct sk_buff **pskb,
struct ip_conntrack *ct, enum ip_conntrack_info ctinfo)
{
unsigned int dataoff;
char *data, *data_limit, *ib_ptr;
int dir = CTINFO2DIR(ctinfo);
struct ip_conntrack_expect *exp;
- struct ip_ct_irc_expect *exp_irc_info = NULL;
-
+ u32 seq;
u_int32_t dcc_ip;
u_int16_t dcc_port;
- int i;
+ int i, ret = NF_ACCEPT;
char *addr_beg_p, *addr_end_p;
DEBUGP("entered\n");
}
/* Not a full tcp header? */
- th = skb_header_pointer(skb, skb->nh.iph->ihl*4,
+ th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4,
sizeof(_tcph), &_tcph);
if (th == NULL)
return NF_ACCEPT;
/* No data? */
- dataoff = skb->nh.iph->ihl*4 + th->doff*4;
- if (dataoff >= skb->len)
+ dataoff = (*pskb)->nh.iph->ihl*4 + th->doff*4;
+ if (dataoff >= (*pskb)->len)
return NF_ACCEPT;
LOCK_BH(&irc_buffer_lock);
- ib_ptr = skb_header_pointer(skb, dataoff,
- skb->len - dataoff, irc_buffer);
+ ib_ptr = skb_header_pointer(*pskb, dataoff,
+ (*pskb)->len - dataoff, irc_buffer);
BUG_ON(ib_ptr == NULL);
data = ib_ptr;
- data_limit = ib_ptr + skb->len - dataoff;
+ data_limit = ib_ptr + (*pskb)->len - dataoff;
/* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
* 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
}
exp = ip_conntrack_expect_alloc();
- if (exp == NULL)
+ if (exp == NULL) {
+ ret = NF_DROP;
goto out;
-
- exp_irc_info = &exp->help.exp_irc_info;
+ }
/* save position of address in dcc string,
* necessary for NAT */
DEBUGP("tcph->seq = %u\n", th->seq);
- exp->seq = ntohl(th->seq) + (addr_beg_p - ib_ptr);
- exp_irc_info->len = (addr_end_p - addr_beg_p);
- exp_irc_info->port = dcc_port;
- DEBUGP("wrote info seq=%u (ofs=%u), len=%d\n",
- exp->seq, (addr_end_p - _data), exp_irc_info->len);
+ seq = ntohl(th->seq) + (addr_beg_p - ib_ptr);
+ /* We refer to the reverse direction ("!dir")
+ * tuples here, because we're expecting
+ * something in the other direction.
+ * Doesn't matter unless NAT is happening. */
exp->tuple = ((struct ip_conntrack_tuple)
{ { 0, { 0 } },
- { ct->tuplehash[dir].tuple.src.ip, { .tcp = { htons(dcc_port) } },
+ { ct->tuplehash[!dir].tuple.dst.ip,
+ { .tcp = { htons(dcc_port) } },
IPPROTO_TCP }});
exp->mask = ((struct ip_conntrack_tuple)
{ { 0, { 0 } },
- { 0xFFFFFFFF, { .tcp = { 0xFFFF } }, 0xFFFF }});
-
+ { 0xFFFFFFFF, { .tcp = { 0xFFFF } }, 0xFF }});
exp->expectfn = NULL;
-
- DEBUGP("expect_related %u.%u.%u.%u:%u-%u.%u.%u.%u:%u\n",
- NIPQUAD(exp->tuple.src.ip),
- ntohs(exp->tuple.src.u.tcp.port),
- NIPQUAD(exp->tuple.dst.ip),
- ntohs(exp->tuple.dst.u.tcp.port));
-
- ip_conntrack_expect_related(exp, ct);
-
+ exp->master = ct;
+ if (ip_nat_irc_hook)
+ ret = ip_nat_irc_hook(pskb, ctinfo,
+ addr_beg_p - ib_ptr,
+ addr_end_p - addr_beg_p,
+ exp);
+ else if (ip_conntrack_expect_related(exp) != 0) {
+ ip_conntrack_expect_free(exp);
+ ret = NF_DROP;
+ }
goto out;
} /* for .. NUM_DCCPROTO */
} /* while data < ... */
out:
UNLOCK_BH(&irc_buffer_lock);
- return NF_ACCEPT;
+ return ret;
}
static struct ip_conntrack_helper irc_helpers[MAX_PORTS];
hlpr->tuple.src.u.tcp.port = htons(ports[i]);
hlpr->tuple.dst.protonum = IPPROTO_TCP;
hlpr->mask.src.u.tcp.port = 0xFFFF;
- hlpr->mask.dst.protonum = 0xFFFF;
+ hlpr->mask.dst.protonum = 0xFF;
hlpr->max_expected = max_dcc_channels;
hlpr->timeout = dcc_timeout;
- hlpr->flags = IP_CT_HELPER_F_REUSE_EXPECT;
- hlpr->me = ip_conntrack_irc;
+ hlpr->me = THIS_MODULE;
hlpr->help = help;
tmpname = &irc_names[i][0];
}
}
-PROVIDES_CONNTRACK(irc);
-
module_init(init);
module_exit(fini);
#define HOURS * 60 MINS
#define DAYS * 24 HOURS
-unsigned long ip_ct_sctp_timeout_closed = 10 SECS;
-unsigned long ip_ct_sctp_timeout_cookie_wait = 3 SECS;
-unsigned long ip_ct_sctp_timeout_cookie_echoed = 3 SECS;
-unsigned long ip_ct_sctp_timeout_established = 5 DAYS;
-unsigned long ip_ct_sctp_timeout_shutdown_sent = 300 SECS / 1000;
-unsigned long ip_ct_sctp_timeout_shutdown_recd = 300 SECS / 1000;
-unsigned long ip_ct_sctp_timeout_shutdown_ack_sent = 3 SECS;
+static unsigned long ip_ct_sctp_timeout_closed = 10 SECS;
+static unsigned long ip_ct_sctp_timeout_cookie_wait = 3 SECS;
+static unsigned long ip_ct_sctp_timeout_cookie_echoed = 3 SECS;
+static unsigned long ip_ct_sctp_timeout_established = 5 DAYS;
+static unsigned long ip_ct_sctp_timeout_shutdown_sent = 300 SECS / 1000;
+static unsigned long ip_ct_sctp_timeout_shutdown_recd = 300 SECS / 1000;
+static unsigned long ip_ct_sctp_timeout_shutdown_ack_sent = 3 SECS;
static unsigned long * sctp_timeouts[]
= { NULL, /* SCTP_CONNTRACK_NONE */
return 1;
}
-static int sctp_exp_matches_pkt(struct ip_conntrack_expect *exp,
- const struct sk_buff *skb)
-{
- /* To be implemented */
- return 0;
-}
-
-struct ip_conntrack_protocol ip_conntrack_protocol_sctp = {
+static struct ip_conntrack_protocol ip_conntrack_protocol_sctp = {
.proto = IPPROTO_SCTP,
.name = "sctp",
.pkt_to_tuple = sctp_pkt_to_tuple,
.packet = sctp_packet,
.new = sctp_new,
.destroy = NULL,
- .exp_matches_pkt = sctp_exp_matches_pkt,
.me = THIS_MODULE
};
static struct ctl_table_header *ip_ct_sysctl_header;
#endif
-int __init init(void)
+static int __init init(void)
{
int ret;
#ifdef CONFIG_SYSCTL
ip_ct_sysctl_header = register_sysctl_table(ip_ct_net_table, 0);
if (ip_ct_sysctl_header == NULL) {
+ ret = -ENOMEM;
printk("ip_conntrack_proto_sctp: can't register to sysctl.\n");
goto cleanup;
}
return ret;
}
-void __exit fini(void)
+static void __exit fini(void)
{
ip_conntrack_protocol_unregister(&ip_conntrack_protocol_sctp);
#ifdef CONFIG_SYSCTL
#define DEBUGP(format, args...)
#endif
-static int tftp_help(struct sk_buff *skb,
+unsigned int (*ip_nat_tftp_hook)(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ struct ip_conntrack_expect *exp);
+EXPORT_SYMBOL_GPL(ip_nat_tftp_hook);
+
+static int tftp_help(struct sk_buff **pskb,
struct ip_conntrack *ct,
enum ip_conntrack_info ctinfo)
{
struct tftphdr _tftph, *tfh;
struct ip_conntrack_expect *exp;
+ unsigned int ret = NF_ACCEPT;
- tfh = skb_header_pointer(skb,
- skb->nh.iph->ihl * 4 + sizeof(struct udphdr),
+ tfh = skb_header_pointer(*pskb,
+ (*pskb)->nh.iph->ihl*4+sizeof(struct udphdr),
sizeof(_tftph), &_tftph);
if (tfh == NULL)
return NF_ACCEPT;
exp = ip_conntrack_expect_alloc();
if (exp == NULL)
- return NF_ACCEPT;
+ return NF_DROP;
exp->tuple = ct->tuplehash[IP_CT_DIR_REPLY].tuple;
exp->mask.src.ip = 0xffffffff;
exp->mask.dst.ip = 0xffffffff;
exp->mask.dst.u.udp.port = 0xffff;
- exp->mask.dst.protonum = 0xffff;
+ exp->mask.dst.protonum = 0xff;
exp->expectfn = NULL;
+ exp->master = ct;
DEBUGP("expect: ");
DUMP_TUPLE(&exp->tuple);
DUMP_TUPLE(&exp->mask);
- ip_conntrack_expect_related(exp, ct);
+ if (ip_nat_tftp_hook)
+ ret = ip_nat_tftp_hook(pskb, ctinfo, exp);
+ else if (ip_conntrack_expect_related(exp) != 0) {
+ ip_conntrack_expect_free(exp);
+ ret = NF_DROP;
+ }
break;
case TFTP_OPCODE_DATA:
case TFTP_OPCODE_ACK:
tftp[i].tuple.dst.protonum = IPPROTO_UDP;
tftp[i].tuple.src.u.udp.port = htons(ports[i]);
- tftp[i].mask.dst.protonum = 0xFFFF;
+ tftp[i].mask.dst.protonum = 0xFF;
tftp[i].mask.src.u.udp.port = 0xFFFF;
tftp[i].max_expected = 1;
- tftp[i].timeout = 0;
- tftp[i].flags = IP_CT_HELPER_F_REUSE_EXPECT;
+ tftp[i].timeout = 5 * 60; /* 5 minutes */
tftp[i].me = THIS_MODULE;
tftp[i].help = tftp_help;
return(0);
}
-PROVIDES_CONNTRACK(tftp);
-
module_init(init);
module_exit(fini);
MODULE_DESCRIPTION("Amanda NAT helper");
MODULE_LICENSE("GPL");
-static unsigned int
-amanda_nat_expected(struct sk_buff **pskb,
- unsigned int hooknum,
- struct ip_conntrack *ct,
- struct ip_nat_info *info)
+static unsigned int help(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp)
{
- struct ip_conntrack *master = master_ct(ct);
- struct ip_ct_amanda_expect *exp_amanda_info;
- struct ip_nat_multi_range mr;
- u_int32_t newip;
-
- IP_NF_ASSERT(info);
- IP_NF_ASSERT(master);
- IP_NF_ASSERT(!(info->initialized & (1 << HOOK2MANIP(hooknum))));
+ char buffer[sizeof("65535")];
+ u_int16_t port;
+ unsigned int ret;
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC)
- newip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
- else
- newip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip;
+ /* Connection comes from client. */
+ exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port;
+ exp->dir = IP_CT_DIR_ORIGINAL;
- mr.rangesize = 1;
- /* We don't want to manip the per-protocol, just the IPs. */
- mr.range[0].flags = IP_NAT_RANGE_MAP_IPS;
- mr.range[0].min_ip = mr.range[0].max_ip = newip;
+ /* When you see the packet, we need to NAT it the same as
+ * this one (i.e. same IP: it will be TCP and the master is UDP). */
+ exp->expectfn = ip_nat_follow_master;
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_DST) {
- exp_amanda_info = &ct->master->help.exp_amanda_info;
- mr.range[0].flags |= IP_NAT_RANGE_PROTO_SPECIFIED;
- mr.range[0].min = mr.range[0].max
- = ((union ip_conntrack_manip_proto)
- { .udp = { htons(exp_amanda_info->port) } });
+ /* Try to get same port: if not, try to change it. */
+ for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) {
+ exp->tuple.dst.u.tcp.port = htons(port);
+ if (ip_conntrack_expect_related(exp) == 0)
+ break;
}
- return ip_nat_setup_info(ct, &mr, hooknum);
-}
-
-static int amanda_data_fixup(struct ip_conntrack *ct,
- struct sk_buff **pskb,
- enum ip_conntrack_info ctinfo,
- struct ip_conntrack_expect *exp)
-{
- struct ip_ct_amanda_expect *exp_amanda_info;
- struct ip_conntrack_tuple t = exp->tuple;
- char buffer[sizeof("65535")];
- u_int16_t port;
-
- /* Alter conntrack's expectations. */
- exp_amanda_info = &exp->help.exp_amanda_info;
- t.dst.ip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
- for (port = exp_amanda_info->port; port != 0; port++) {
- t.dst.u.tcp.port = htons(port);
- if (ip_conntrack_change_expect(exp, &t) == 0)
- break;
+ if (port == 0) {
+ ip_conntrack_expect_free(exp);
+ return NF_DROP;
}
- if (port == 0)
- return 0;
sprintf(buffer, "%u", port);
- return ip_nat_mangle_udp_packet(pskb, ct, ctinfo,
- exp_amanda_info->offset,
- exp_amanda_info->len,
- buffer, strlen(buffer));
-}
-
-static unsigned int help(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp,
- struct ip_nat_info *info,
- enum ip_conntrack_info ctinfo,
- unsigned int hooknum,
- struct sk_buff **pskb)
-{
- int dir = CTINFO2DIR(ctinfo);
- int ret = NF_ACCEPT;
-
- /* Only mangle things once: original direction in POST_ROUTING
- and reply direction on PRE_ROUTING. */
- if (!((hooknum == NF_IP_POST_ROUTING && dir == IP_CT_DIR_ORIGINAL)
- || (hooknum == NF_IP_PRE_ROUTING && dir == IP_CT_DIR_REPLY)))
- return NF_ACCEPT;
-
- /* if this exectation has a "offset" the packet needs to be mangled */
- if (exp->help.exp_amanda_info.offset != 0)
- if (!amanda_data_fixup(ct, pskb, ctinfo, exp))
- ret = NF_DROP;
- exp->help.exp_amanda_info.offset = 0;
-
+ ret = ip_nat_mangle_udp_packet(pskb, exp->master, ctinfo,
+ matchoff, matchlen,
+ buffer, strlen(buffer));
+ if (ret != NF_ACCEPT)
+ ip_conntrack_unexpect_related(exp);
return ret;
}
-static struct ip_nat_helper ip_nat_amanda_helper;
-
static void __exit fini(void)
{
- ip_nat_helper_unregister(&ip_nat_amanda_helper);
+ ip_nat_amanda_hook = NULL;
+ /* Make sure no one calls it meanwhile. */
+ synchronize_net();
}
static int __init init(void)
{
- struct ip_nat_helper *hlpr = &ip_nat_amanda_helper;
-
- hlpr->tuple.dst.protonum = IPPROTO_UDP;
- hlpr->tuple.src.u.udp.port = htons(10080);
- hlpr->mask.src.u.udp.port = 0xFFFF;
- hlpr->mask.dst.protonum = 0xFFFF;
- hlpr->help = help;
- hlpr->flags = 0;
- hlpr->me = THIS_MODULE;
- hlpr->expect = amanda_nat_expected;
- hlpr->name = "amanda";
-
- return ip_nat_helper_register(hlpr);
+ BUG_ON(ip_nat_amanda_hook);
+ ip_nat_amanda_hook = help;
+ return 0;
}
-NEEDS_CONNTRACK(amanda);
module_init(init);
module_exit(fini);
#define DEBUGP(format, args...)
#endif
-#define MAX_PORTS 8
-static int ports[MAX_PORTS];
-static int ports_c;
-
-module_param_array(ports, int, &ports_c, 0400);
-
/* FIXME: Time out? --RR */
-static unsigned int
-ftp_nat_expected(struct sk_buff **pskb,
- unsigned int hooknum,
- struct ip_conntrack *ct,
- struct ip_nat_info *info)
-{
- struct ip_nat_multi_range mr;
- u_int32_t newdstip, newsrcip, newip;
- struct ip_ct_ftp_expect *exp_ftp_info;
-
- struct ip_conntrack *master = master_ct(ct);
-
- IP_NF_ASSERT(info);
- IP_NF_ASSERT(master);
-
- IP_NF_ASSERT(!(info->initialized & (1<<HOOK2MANIP(hooknum))));
-
- DEBUGP("nat_expected: We have a connection!\n");
- exp_ftp_info = &ct->master->help.exp_ftp_info;
-
- if (exp_ftp_info->ftptype == IP_CT_FTP_PORT
- || exp_ftp_info->ftptype == IP_CT_FTP_EPRT) {
- /* PORT command: make connection go to the client. */
- newdstip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
- newsrcip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
- DEBUGP("nat_expected: PORT cmd. %u.%u.%u.%u->%u.%u.%u.%u\n",
- NIPQUAD(newsrcip), NIPQUAD(newdstip));
- } else {
- /* PASV command: make the connection go to the server */
- newdstip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip;
- newsrcip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
- DEBUGP("nat_expected: PASV cmd. %u.%u.%u.%u->%u.%u.%u.%u\n",
- NIPQUAD(newsrcip), NIPQUAD(newdstip));
- }
-
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC)
- newip = newsrcip;
- else
- newip = newdstip;
-
- DEBUGP("nat_expected: IP to %u.%u.%u.%u\n", NIPQUAD(newip));
-
- mr.rangesize = 1;
- /* We don't want to manip the per-protocol, just the IPs... */
- mr.range[0].flags = IP_NAT_RANGE_MAP_IPS;
- mr.range[0].min_ip = mr.range[0].max_ip = newip;
-
- /* ... unless we're doing a MANIP_DST, in which case, make
- sure we map to the correct port */
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_DST) {
- mr.range[0].flags |= IP_NAT_RANGE_PROTO_SPECIFIED;
- mr.range[0].min = mr.range[0].max
- = ((union ip_conntrack_manip_proto)
- { .tcp = { htons(exp_ftp_info->port) } });
- }
- return ip_nat_setup_info(ct, &mr, hooknum);
-}
-
static int
mangle_rfc959_packet(struct sk_buff **pskb,
u_int32_t newip,
unsigned int matchoff,
unsigned int matchlen,
struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo)
+ enum ip_conntrack_info ctinfo,
+ u32 *seq)
{
char buffer[sizeof("nnn,nnn,nnn,nnn,nnn,nnn")];
DEBUGP("calling ip_nat_mangle_tcp_packet\n");
+ *seq += strlen(buffer) - matchlen;
return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff,
matchlen, buffer, strlen(buffer));
}
unsigned int matchoff,
unsigned int matchlen,
struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo)
+ enum ip_conntrack_info ctinfo,
+ u32 *seq)
{
char buffer[sizeof("|1|255.255.255.255|65535|")];
DEBUGP("calling ip_nat_mangle_tcp_packet\n");
+ *seq += strlen(buffer) - matchlen;
return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff,
matchlen, buffer, strlen(buffer));
}
unsigned int matchoff,
unsigned int matchlen,
struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo)
+ enum ip_conntrack_info ctinfo,
+ u32 *seq)
{
char buffer[sizeof("|||65535|")];
DEBUGP("calling ip_nat_mangle_tcp_packet\n");
+ *seq += strlen(buffer) - matchlen;
return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, matchoff,
matchlen, buffer, strlen(buffer));
}
unsigned int,
unsigned int,
struct ip_conntrack *,
- enum ip_conntrack_info)
+ enum ip_conntrack_info,
+ u32 *seq)
= { [IP_CT_FTP_PORT] = mangle_rfc959_packet,
[IP_CT_FTP_PASV] = mangle_rfc959_packet,
[IP_CT_FTP_EPRT] = mangle_eprt_packet,
[IP_CT_FTP_EPSV] = mangle_epsv_packet
};
-static int ftp_data_fixup(const struct ip_ct_ftp_expect *exp_ftp_info,
- struct ip_conntrack *ct,
- struct sk_buff **pskb,
- enum ip_conntrack_info ctinfo,
- struct ip_conntrack_expect *expect)
+/* So, this packet has hit the connection tracking matching code.
+ Mangle it, and change the expectation to match the new version. */
+static unsigned int ip_nat_ftp(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ enum ip_ct_ftp_type type,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp,
+ u32 *seq)
{
u_int32_t newip;
- struct iphdr *iph = (*pskb)->nh.iph;
- struct tcphdr *tcph = (void *)iph + iph->ihl*4;
u_int16_t port;
- struct ip_conntrack_tuple newtuple;
+ int dir = CTINFO2DIR(ctinfo);
+ struct ip_conntrack *ct = exp->master;
- DEBUGP("FTP_NAT: seq %u + %u in %u\n",
- expect->seq, exp_ftp_info->len,
- ntohl(tcph->seq));
+ DEBUGP("FTP_NAT: type %i, off %u len %u\n", type, matchoff, matchlen);
- /* Change address inside packet to match way we're mapping
- this connection. */
- if (exp_ftp_info->ftptype == IP_CT_FTP_PASV
- || exp_ftp_info->ftptype == IP_CT_FTP_EPSV) {
- /* PASV/EPSV response: must be where client thinks server
- is */
- newip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
- /* Expect something from client->server */
- newtuple.src.ip =
- ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
- newtuple.dst.ip =
- ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
- } else {
- /* PORT command: must be where server thinks client is */
- newip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
- /* Expect something from server->client */
- newtuple.src.ip =
- ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip;
- newtuple.dst.ip =
- ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
- }
- newtuple.dst.protonum = IPPROTO_TCP;
- newtuple.src.u.tcp.port = expect->tuple.src.u.tcp.port;
+ /* Connection will come from wherever this packet goes, hence !dir */
+ newip = ct->tuplehash[!dir].tuple.dst.ip;
+ exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port;
+ exp->dir = !dir;
- /* Try to get same port: if not, try to change it. */
- for (port = exp_ftp_info->port; port != 0; port++) {
- newtuple.dst.u.tcp.port = htons(port);
+	/* When you see the packet, we need to NAT it the same way as
+	 * this one. */
+ exp->expectfn = ip_nat_follow_master;
- if (ip_conntrack_change_expect(expect, &newtuple) == 0)
+ /* Try to get same port: if not, try to change it. */
+ for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) {
+ exp->tuple.dst.u.tcp.port = htons(port);
+ if (ip_conntrack_expect_related(exp) == 0)
break;
}
- if (port == 0)
- return 0;
-
- if (!mangle[exp_ftp_info->ftptype](pskb, newip, port,
- expect->seq - ntohl(tcph->seq),
- exp_ftp_info->len, ct, ctinfo))
- return 0;
- return 1;
-}
-
-static unsigned int help(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp,
- struct ip_nat_info *info,
- enum ip_conntrack_info ctinfo,
- unsigned int hooknum,
- struct sk_buff **pskb)
-{
- struct iphdr *iph = (*pskb)->nh.iph;
- struct tcphdr *tcph = (void *)iph + iph->ihl*4;
- unsigned int datalen;
- int dir;
- struct ip_ct_ftp_expect *exp_ftp_info;
-
- if (!exp)
- DEBUGP("ip_nat_ftp: no exp!!");
-
- exp_ftp_info = &exp->help.exp_ftp_info;
-
- /* Only mangle things once: original direction in POST_ROUTING
- and reply direction on PRE_ROUTING. */
- dir = CTINFO2DIR(ctinfo);
- if (!((hooknum == NF_IP_POST_ROUTING && dir == IP_CT_DIR_ORIGINAL)
- || (hooknum == NF_IP_PRE_ROUTING && dir == IP_CT_DIR_REPLY))) {
- DEBUGP("nat_ftp: Not touching dir %s at hook %s\n",
- dir == IP_CT_DIR_ORIGINAL ? "ORIG" : "REPLY",
- hooknum == NF_IP_POST_ROUTING ? "POSTROUTING"
- : hooknum == NF_IP_PRE_ROUTING ? "PREROUTING"
- : hooknum == NF_IP_LOCAL_OUT ? "OUTPUT" : "???");
- return NF_ACCEPT;
+ if (port == 0) {
+ ip_conntrack_expect_free(exp);
+ return NF_DROP;
}
- datalen = (*pskb)->len - iph->ihl * 4 - tcph->doff * 4;
- /* If it's in the right range... */
- if (between(exp->seq + exp_ftp_info->len,
- ntohl(tcph->seq),
- ntohl(tcph->seq) + datalen)) {
- if (!ftp_data_fixup(exp_ftp_info, ct, pskb, ctinfo, exp))
- return NF_DROP;
- } else {
- /* Half a match? This means a partial retransmisison.
- It's a cracker being funky. */
- if (net_ratelimit()) {
- printk("FTP_NAT: partial packet %u/%u in %u/%u\n",
- exp->seq, exp_ftp_info->len,
- ntohl(tcph->seq),
- ntohl(tcph->seq) + datalen);
- }
+ if (!mangle[type](pskb, newip, port, matchoff, matchlen, ct, ctinfo,
+ seq)) {
+ ip_conntrack_unexpect_related(exp);
return NF_DROP;
}
return NF_ACCEPT;
}
-static struct ip_nat_helper ftp[MAX_PORTS];
-static char ftp_names[MAX_PORTS][10];
-
-/* Not __exit: called from init() */
-static void fini(void)
+static void __exit fini(void)
{
- int i;
-
- for (i = 0; i < ports_c; i++) {
- DEBUGP("ip_nat_ftp: unregistering port %d\n", ports[i]);
- ip_nat_helper_unregister(&ftp[i]);
- }
+ ip_nat_ftp_hook = NULL;
+	/* Make sure no one calls it in the meantime. */
+ synchronize_net();
}
static int __init init(void)
{
- int i, ret = 0;
- char *tmpname;
-
- if (ports_c == 0)
- ports[ports_c++] = FTP_PORT;
-
- for (i = 0; i < ports_c; i++) {
- ftp[i].tuple.dst.protonum = IPPROTO_TCP;
- ftp[i].tuple.src.u.tcp.port = htons(ports[i]);
- ftp[i].mask.dst.protonum = 0xFFFF;
- ftp[i].mask.src.u.tcp.port = 0xFFFF;
- ftp[i].help = help;
- ftp[i].me = THIS_MODULE;
- ftp[i].flags = 0;
- ftp[i].expect = ftp_nat_expected;
-
- tmpname = &ftp_names[i][0];
- if (ports[i] == FTP_PORT)
- sprintf(tmpname, "ftp");
- else
- sprintf(tmpname, "ftp-%d", i);
- ftp[i].name = tmpname;
-
- DEBUGP("ip_nat_ftp: Trying to register for port %d\n",
- ports[i]);
- ret = ip_nat_helper_register(&ftp[i]);
-
- if (ret) {
- printk("ip_nat_ftp: error registering "
- "helper for port %d\n", ports[i]);
- fini();
- return ret;
- }
- }
-
- return ret;
+ BUG_ON(ip_nat_ftp_hook);
+ ip_nat_ftp_hook = ip_nat_ftp;
+ return 0;
}
-NEEDS_CONNTRACK(ftp);
-
module_init(init);
module_exit(fini);
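
The hunks above replace the old array of per-port ip_nat_helper registrations with a single function pointer, ip_nat_ftp_hook, which the conntrack FTP helper invokes when it parses a PORT/PASV/EPRT/EPSV command; unloading the module clears the pointer and then waits in synchronize_net() so that no CPU is still executing the function. Below is a compilable user-space stand-in for that hook-pointer pattern; every name in it is invented for illustration and none of it is the kernel code.

#include <stddef.h>

typedef unsigned int (*nat_hook_fn)(const unsigned char *pkt, size_t len);

static nat_hook_fn nat_ftp_hook;        /* NULL while no NAT helper is loaded */

static unsigned int ftp_nat_help(const unsigned char *pkt, size_t len)
{
        (void)pkt;
        (void)len;
        return 1;                       /* stand-in for NF_ACCEPT */
}

int helper_load(void)
{
        if (nat_ftp_hook)               /* mirrors the BUG_ON(): one binding only */
                return -1;
        nat_ftp_hook = ftp_nat_help;
        return 0;
}

void helper_unload(void)
{
        nat_ftp_hook = NULL;
        /* the kernel additionally calls synchronize_net() here */
}

unsigned int conntrack_core(const unsigned char *pkt, size_t len)
{
        /* call the NAT helper only if one is registered */
        return nat_ftp_hook ? nat_ftp_hook(pkt, len) : 1;
}
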
#define DUMP_OFFSET(x)
#endif
-static LIST_HEAD(helpers);
-DECLARE_LOCK(ip_nat_seqofs_lock);
+static DECLARE_LOCK(ip_nat_seqofs_lock);
/* Setup TCP sequence correction given this change at this sequence */
static inline void
tcph->check = tcp_v4_check(tcph, datalen, iph->saddr, iph->daddr,
csum_partial((char *)tcph, datalen, 0));
- adjust_tcp_sequence(ntohl(tcph->seq),
- (int)rep_len - (int)match_len,
- ct, ctinfo);
+ if (rep_len != match_len) {
+ set_bit(IPS_SEQ_ADJUST_BIT, &ct->status);
+ adjust_tcp_sequence(ntohl(tcph->seq),
+ (int)rep_len - (int)match_len,
+ ct, ctinfo);
+ /* Tell TCP window tracking about seq change */
+ ip_conntrack_tcp_update(*pskb, ct, CTINFO2DIR(ctinfo));
+ }
return 1;
}
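
The change above records a TCP sequence offset only when the rewritten payload actually changed length; it also sets IPS_SEQ_ADJUST_BIT on the connection and tells TCP window tracking about the shift. A rough user-space sketch of the bookkeeping (struct and function names invented, 32-bit sequence wraparound ignored):

#include <stdint.h>
#include <stdio.h>

struct seq_fixup {
        uint32_t correction_pos;        /* sequence number where the rewrite happened */
        int32_t  offset_after;          /* rep_len - match_len */
};

static uint32_t nat_adjust_seq(uint32_t seq, const struct seq_fixup *f)
{
        /* only data at or after the edit point needs shifting */
        return seq >= f->correction_pos ? seq + f->offset_after : seq;
}

int main(void)
{
        /* e.g. a 20-byte address string rewritten into 26 bytes: +6 */
        struct seq_fixup f = { .correction_pos = 1000, .offset_after = 6 };

        printf("%u -> %u\n", 1200u, nat_adjust_seq(1200, &f));  /* 1200 -> 1206 */
        printf("%u -> %u\n",  900u, nat_adjust_seq(900, &f));   /* 900 -> 900   */
        return 0;
}
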
this_way = &ct->nat.info.seq[dir];
other_way = &ct->nat.info.seq[!dir];
- /* No adjustments to make? Very common case. */
- if (!this_way->offset_before && !this_way->offset_after
- && !other_way->offset_before && !other_way->offset_after)
- return 1;
-
if (!skb_ip_make_writable(pskb, (*pskb)->nh.iph->ihl*4+sizeof(*tcph)))
return 0;
return 1;
}
-static inline int
-helper_cmp(const struct ip_nat_helper *helper,
- const struct ip_conntrack_tuple *tuple)
+/* Setup NAT on this expected conntrack so it follows master. */
+/* If we fail to get a free NAT slot, we'll get dropped on confirm */
+void ip_nat_follow_master(struct ip_conntrack *ct,
+ struct ip_conntrack_expect *exp)
{
- return ip_ct_tuple_mask_cmp(tuple, &helper->tuple, &helper->mask);
-}
-
-int ip_nat_helper_register(struct ip_nat_helper *me)
-{
- int ret = 0;
-
- WRITE_LOCK(&ip_nat_lock);
- if (LIST_FIND(&helpers, helper_cmp, struct ip_nat_helper *,&me->tuple))
- ret = -EBUSY;
- else
- list_prepend(&helpers, me);
- WRITE_UNLOCK(&ip_nat_lock);
-
- return ret;
-}
-
-struct ip_nat_helper *
-__ip_nat_find_helper(const struct ip_conntrack_tuple *tuple)
-{
- return LIST_FIND(&helpers, helper_cmp, struct ip_nat_helper *, tuple);
-}
-
-struct ip_nat_helper *
-ip_nat_find_helper(const struct ip_conntrack_tuple *tuple)
-{
- struct ip_nat_helper *h;
-
- READ_LOCK(&ip_nat_lock);
- h = __ip_nat_find_helper(tuple);
- READ_UNLOCK(&ip_nat_lock);
-
- return h;
-}
-
-static int
-kill_helper(const struct ip_conntrack *i, void *helper)
-{
- int ret;
-
- READ_LOCK(&ip_nat_lock);
- ret = (i->nat.info.helper == helper);
- READ_UNLOCK(&ip_nat_lock);
-
- return ret;
-}
-
-void ip_nat_helper_unregister(struct ip_nat_helper *me)
-{
- WRITE_LOCK(&ip_nat_lock);
- /* Autoloading conntrack helper might have failed */
- if (LIST_FIND(&helpers, helper_cmp, struct ip_nat_helper *,&me->tuple)) {
- LIST_DELETE(&helpers, me);
- }
- WRITE_UNLOCK(&ip_nat_lock);
-
- /* Someone could be still looking at the helper in a bh. */
- synchronize_net();
-
- /* Find anything using it, and umm, kill them. We can't turn
- them into normal connections: if we've adjusted SYNs, then
- they'll ackstorm. So we just drop it. We used to just
- bump module count when a connection existed, but that
- forces admins to gen fake RSTs or bounce box, either of
- which is just a long-winded way of making things
- worse. --RR */
- ip_ct_selective_cleanup(kill_helper, me);
+ struct ip_nat_range range;
+
+ /* This must be a fresh one. */
+ BUG_ON(ct->status & IPS_NAT_DONE_MASK);
+
+ /* Change src to where master sends to */
+ range.flags = IP_NAT_RANGE_MAP_IPS;
+ range.min_ip = range.max_ip
+ = ct->master->tuplehash[!exp->dir].tuple.dst.ip;
+ /* hook doesn't matter, but it has to do source manip */
+ ip_nat_setup_info(ct, &range, NF_IP_POST_ROUTING);
+
+ /* For DST manip, map port here to where it's expected. */
+ range.flags = (IP_NAT_RANGE_MAP_IPS | IP_NAT_RANGE_PROTO_SPECIFIED);
+ range.min = range.max = exp->saved_proto;
+ range.min_ip = range.max_ip
+ = ct->master->tuplehash[!exp->dir].tuple.src.ip;
+ /* hook doesn't matter, but it has to do destination manip */
+ ip_nat_setup_info(ct, &range, NF_IP_PRE_ROUTING);
}
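
ip_nat_follow_master() is the expectfn that the rewritten ftp helper above and the irc and tftp helpers below attach to their expectations; the ftp and irc helpers additionally share the same retry loop that keeps the advertised data port if possible and otherwise walks upward until a free one is found. A compilable stand-in for that loop (is_port_free() is invented here; the kernel instead retries ip_conntrack_expect_related() until it succeeds):

#include <stdint.h>
#include <stdbool.h>

static bool is_port_free(uint16_t port)
{
        return port >= 1024;            /* placeholder policy for the example */
}

uint16_t pick_data_port(uint16_t wanted)
{
        uint16_t port;

        /* a u16 wrapping past 65535 back to 0 means every port was taken */
        for (port = wanted; port != 0; port++)
                if (is_port_free(port))
                        return port;

        return 0;                       /* caller drops the packet (NF_DROP) */
}
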
/* IRC extension for TCP NAT alteration.
* (C) 2000-2001 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2004 Rusty Russell <rusty@rustcorp.com.au> IBM Corporation
* based on a copy of RR's ip_nat_ftp.c
*
* ip_nat_irc.c,v 1.16 2001/12/06 07:42:10 laforge Exp
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
- *
- * Module load syntax:
- * insmod ip_nat_irc.o ports=port1,port2,...port<MAX_PORTS>
- *
- * please give the ports of all IRC servers You wish to connect to.
- * If You don't specify ports, the default will be port 6667
*/
#include <linux/module.h>
#define DEBUGP(format, args...)
#endif
-#define MAX_PORTS 8
-static int ports[MAX_PORTS];
-static int ports_c;
-
MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
MODULE_DESCRIPTION("IRC (DCC) NAT helper");
MODULE_LICENSE("GPL");
-module_param_array(ports, int, &ports_c, 0400);
-MODULE_PARM_DESC(ports, "port numbers of IRC servers");
-
-/* FIXME: Time out? --RR */
-
-static unsigned int
-irc_nat_expected(struct sk_buff **pskb,
- unsigned int hooknum,
- struct ip_conntrack *ct,
- struct ip_nat_info *info)
-{
- struct ip_nat_multi_range mr;
- u_int32_t newdstip, newsrcip, newip;
-
- struct ip_conntrack *master = master_ct(ct);
-
- IP_NF_ASSERT(info);
- IP_NF_ASSERT(master);
-
- IP_NF_ASSERT(!(info->initialized & (1 << HOOK2MANIP(hooknum))));
-
- DEBUGP("nat_expected: We have a connection!\n");
-
- newdstip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
- newsrcip = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
- DEBUGP("nat_expected: DCC cmd. %u.%u.%u.%u->%u.%u.%u.%u\n",
- NIPQUAD(newsrcip), NIPQUAD(newdstip));
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC)
- newip = newsrcip;
- else
- newip = newdstip;
-
- DEBUGP("nat_expected: IP to %u.%u.%u.%u\n", NIPQUAD(newip));
-
- mr.rangesize = 1;
- /* We don't want to manip the per-protocol, just the IPs. */
- mr.range[0].flags = IP_NAT_RANGE_MAP_IPS;
- mr.range[0].min_ip = mr.range[0].max_ip = newip;
-
- return ip_nat_setup_info(ct, &mr, hooknum);
-}
-
-static int irc_data_fixup(const struct ip_ct_irc_expect *exp_irc_info,
- struct ip_conntrack *ct,
- struct sk_buff **pskb,
- enum ip_conntrack_info ctinfo,
- struct ip_conntrack_expect *expect)
+static unsigned int help(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ unsigned int matchoff,
+ unsigned int matchlen,
+ struct ip_conntrack_expect *exp)
{
- u_int32_t newip;
- struct ip_conntrack_tuple t;
- struct iphdr *iph = (*pskb)->nh.iph;
- struct tcphdr *tcph = (void *) iph + iph->ihl * 4;
u_int16_t port;
+ unsigned int ret;
	/* "4294967296 65535 " */
char buffer[18];
expect->seq, exp_irc_info->len,
ntohl(tcph->seq));
- newip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
+ /* Reply comes from server. */
+ exp->saved_proto.tcp.port = exp->tuple.dst.u.tcp.port;
+ exp->dir = IP_CT_DIR_REPLY;
+
+	/* When you see the packet, we need to NAT it the same way as
+	 * this one. */
+ exp->expectfn = ip_nat_follow_master;
- /* Alter conntrack's expectations. */
- t = expect->tuple;
- t.dst.ip = newip;
- for (port = exp_irc_info->port; port != 0; port++) {
- t.dst.u.tcp.port = htons(port);
- if (ip_conntrack_change_expect(expect, &t) == 0) {
- DEBUGP("using port %d", port);
+ /* Try to get same port: if not, try to change it. */
+ for (port = ntohs(exp->saved_proto.tcp.port); port != 0; port++) {
+ exp->tuple.dst.u.tcp.port = htons(port);
+ if (ip_conntrack_expect_related(exp) == 0)
break;
- }
+ }
+ if (port == 0) {
+ ip_conntrack_expect_free(exp);
+ return NF_DROP;
}
- if (port == 0)
- return 0;
/* strlen("\1DCC CHAT chat AAAAAAAA P\1\n")=27
* strlen("\1DCC SCHAT chat AAAAAAAA P\1\n")=28
* 0x01, \n: terminators
*/
- sprintf(buffer, "%u %u", ntohl(newip), port);
+ /* AAA = "us", ie. where server normally talks to. */
+ sprintf(buffer, "%u %u",
+ ntohl(exp->master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip),
+ port);
DEBUGP("ip_nat_irc: Inserting '%s' == %u.%u.%u.%u, port %u\n",
- buffer, NIPQUAD(newip), port);
-
- return ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
- expect->seq - ntohl(tcph->seq),
- exp_irc_info->len, buffer,
- strlen(buffer));
-}
-
-static unsigned int help(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp,
- struct ip_nat_info *info,
- enum ip_conntrack_info ctinfo,
- unsigned int hooknum,
- struct sk_buff **pskb)
-{
- struct iphdr *iph = (*pskb)->nh.iph;
- struct tcphdr *tcph = (void *) iph + iph->ihl * 4;
- unsigned int datalen;
- int dir;
- struct ip_ct_irc_expect *exp_irc_info;
-
- if (!exp)
- DEBUGP("ip_nat_irc: no exp!!");
-
- exp_irc_info = &exp->help.exp_irc_info;
+ buffer, NIPQUAD(exp->tuple.src.ip), port);
- /* Only mangle things once: original direction in POST_ROUTING
- and reply direction on PRE_ROUTING. */
- dir = CTINFO2DIR(ctinfo);
- if (!((hooknum == NF_IP_POST_ROUTING && dir == IP_CT_DIR_ORIGINAL)
- || (hooknum == NF_IP_PRE_ROUTING && dir == IP_CT_DIR_REPLY))) {
- DEBUGP("nat_irc: Not touching dir %s at hook %s\n",
- dir == IP_CT_DIR_ORIGINAL ? "ORIG" : "REPLY",
- hooknum == NF_IP_POST_ROUTING ? "POSTROUTING"
- : hooknum == NF_IP_PRE_ROUTING ? "PREROUTING"
- : hooknum == NF_IP_LOCAL_OUT ? "OUTPUT" : "???");
- return NF_ACCEPT;
- }
- DEBUGP("got beyond not touching\n");
-
- datalen = (*pskb)->len - iph->ihl * 4 - tcph->doff * 4;
- /* Check whether the whole IP/address pattern is carried in the payload */
- if (between(exp->seq + exp_irc_info->len,
- ntohl(tcph->seq),
- ntohl(tcph->seq) + datalen)) {
- if (!irc_data_fixup(exp_irc_info, ct, pskb, ctinfo, exp))
- return NF_DROP;
- } else {
- /* Half a match? This means a partial retransmisison.
- It's a cracker being funky. */
- if (net_ratelimit()) {
- printk
- ("IRC_NAT: partial packet %u/%u in %u/%u\n",
- exp->seq, exp_irc_info->len,
- ntohl(tcph->seq),
- ntohl(tcph->seq) + datalen);
- }
- return NF_DROP;
- }
- return NF_ACCEPT;
+ ret = ip_nat_mangle_tcp_packet(pskb, exp->master, ctinfo,
+ matchoff, matchlen, buffer,
+ strlen(buffer));
+ if (ret != NF_ACCEPT)
+ ip_conntrack_unexpect_related(exp);
+ return ret;
}
-static struct ip_nat_helper ip_nat_irc_helpers[MAX_PORTS];
-static char irc_names[MAX_PORTS][10];
-
-/* This function is intentionally _NOT_ defined as __exit, because
- * it is needed by init() */
-static void fini(void)
+static void __exit fini(void)
{
- int i;
-
- for (i = 0; i < ports_c; i++) {
- DEBUGP("ip_nat_irc: unregistering helper for port %d\n",
- ports[i]);
- ip_nat_helper_unregister(&ip_nat_irc_helpers[i]);
- }
+ ip_nat_irc_hook = NULL;
+	/* Make sure no one calls it in the meantime. */
+ synchronize_net();
}
static int __init init(void)
{
- int ret = 0;
- int i;
- struct ip_nat_helper *hlpr;
- char *tmpname;
-
- if (ports_c == 0)
- ports[ports_c++] = IRC_PORT;
-
- for (i = 0; i < ports_c; i++) {
- hlpr = &ip_nat_irc_helpers[i];
- hlpr->tuple.dst.protonum = IPPROTO_TCP;
- hlpr->tuple.src.u.tcp.port = htons(ports[i]);
- hlpr->mask.src.u.tcp.port = 0xFFFF;
- hlpr->mask.dst.protonum = 0xFFFF;
- hlpr->help = help;
- hlpr->flags = 0;
- hlpr->me = THIS_MODULE;
- hlpr->expect = irc_nat_expected;
-
- tmpname = &irc_names[i][0];
- if (ports[i] == IRC_PORT)
- sprintf(tmpname, "irc");
- else
- sprintf(tmpname, "irc-%d", i);
- hlpr->name = tmpname;
-
- DEBUGP
- ("ip_nat_irc: Trying to register helper for port %d: name %s\n",
- ports[i], hlpr->name);
- ret = ip_nat_helper_register(hlpr);
-
- if (ret) {
- printk
- ("ip_nat_irc: error registering helper for port %d\n",
- ports[i]);
- fini();
- return 1;
- }
- }
- return ret;
+ BUG_ON(ip_nat_irc_hook);
+ ip_nat_irc_hook = help;
+ return 0;
}
-NEEDS_CONNTRACK(irc);
-
module_init(init);
module_exit(fini);
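
The DCC rewrite above builds its replacement with sprintf(buffer, "%u %u", ...), that is, the IPv4 address as one decimal 32-bit number in host byte order followed by a decimal port, which is the form the DCC protocol itself uses. A small user-space illustration of that encoding (function name invented):

#include <stdint.h>
#include <stdio.h>

static void dcc_encode(uint8_t a, uint8_t b, uint8_t c, uint8_t d,
                       uint16_t port, char *buf, size_t len)
{
        uint32_t ip = ((uint32_t)a << 24) | ((uint32_t)b << 16) |
                      ((uint32_t)c << 8) | d;

        snprintf(buf, len, "%u %u", ip, port);
}

int main(void)
{
        char buf[sizeof("4294967295 65535")];

        dcc_encode(192, 168, 0, 1, 4000, buf, sizeof(buf));
        printf("%s\n", buf);            /* prints: 3232235521 4000 */
        return 0;
}
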
static int
icmp_manip_pkt(struct sk_buff **pskb,
unsigned int iphdroff,
- const struct ip_conntrack_manip *manip,
+ const struct ip_conntrack_tuple *tuple,
enum ip_nat_manip_type maniptype)
{
struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff);
if (!skb_ip_make_writable(pskb, hdroff + sizeof(*hdr)))
return 0;
- hdr = (void *)(*pskb)->data + hdroff;
+ hdr = (struct icmphdr *)((*pskb)->data + hdroff);
hdr->checksum = ip_nat_cheat_check(hdr->un.echo.id ^ 0xFFFF,
- manip->u.icmp.id,
+ tuple->src.u.icmp.id,
hdr->checksum);
- hdr->un.echo.id = manip->u.icmp.id;
+ hdr->un.echo.id = tuple->src.u.icmp.id;
return 1;
}
static int
tcp_manip_pkt(struct sk_buff **pskb,
unsigned int iphdroff,
- const struct ip_conntrack_manip *manip,
+ const struct ip_conntrack_tuple *tuple,
enum ip_nat_manip_type maniptype)
{
struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff);
struct tcphdr *hdr;
unsigned int hdroff = iphdroff + iph->ihl*4;
- u_int32_t oldip;
- u_int16_t *portptr, oldport;
+ u32 oldip, newip;
+ u16 *portptr, newport, oldport;
int hdrsize = 8; /* TCP connection tracking guarantees this much */
	/* this could be an inner header returned in icmp packet; in such
if (!skb_ip_make_writable(pskb, hdroff + hdrsize))
return 0;
- hdr = (void *)(*pskb)->data + hdroff;
+ iph = (struct iphdr *)((*pskb)->data + iphdroff);
+ hdr = (struct tcphdr *)((*pskb)->data + hdroff);
if (maniptype == IP_NAT_MANIP_SRC) {
/* Get rid of src ip and src pt */
oldip = iph->saddr;
+ newip = tuple->src.ip;
+ newport = tuple->src.u.tcp.port;
portptr = &hdr->source;
} else {
/* Get rid of dst ip and dst pt */
oldip = iph->daddr;
+ newip = tuple->dst.ip;
+ newport = tuple->dst.u.tcp.port;
portptr = &hdr->dest;
}
oldport = *portptr;
- *portptr = manip->u.tcp.port;
+ *portptr = newport;
if (hdrsize < sizeof(*hdr))
return 1;
- hdr->check = ip_nat_cheat_check(~oldip, manip->ip,
+ hdr->check = ip_nat_cheat_check(~oldip, newip,
ip_nat_cheat_check(oldport ^ 0xFFFF,
- manip->u.tcp.port,
+ newport,
hdr->check));
return 1;
}
static int
udp_manip_pkt(struct sk_buff **pskb,
unsigned int iphdroff,
- const struct ip_conntrack_manip *manip,
+ const struct ip_conntrack_tuple *tuple,
enum ip_nat_manip_type maniptype)
{
struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff);
struct udphdr *hdr;
unsigned int hdroff = iphdroff + iph->ihl*4;
- u_int32_t oldip;
- u_int16_t *portptr;
+ u32 oldip, newip;
+ u16 *portptr, newport;
- if (!skb_ip_make_writable(pskb, hdroff + sizeof(hdr)))
+ if (!skb_ip_make_writable(pskb, hdroff + sizeof(*hdr)))
return 0;
- hdr = (void *)(*pskb)->data + hdroff;
+ iph = (struct iphdr *)((*pskb)->data + iphdroff);
+ hdr = (struct udphdr *)((*pskb)->data + hdroff);
+
if (maniptype == IP_NAT_MANIP_SRC) {
/* Get rid of src ip and src pt */
oldip = iph->saddr;
+ newip = tuple->src.ip;
+ newport = tuple->src.u.udp.port;
portptr = &hdr->source;
} else {
/* Get rid of dst ip and dst pt */
oldip = iph->daddr;
+ newip = tuple->dst.ip;
+ newport = tuple->dst.u.udp.port;
portptr = &hdr->dest;
}
if (hdr->check) /* 0 is a special case meaning no checksum */
- hdr->check = ip_nat_cheat_check(~oldip, manip->ip,
+ hdr->check = ip_nat_cheat_check(~oldip, newip,
ip_nat_cheat_check(*portptr ^ 0xFFFF,
- manip->u.udp.port,
+ newport,
hdr->check));
- *portptr = manip->u.udp.port;
+ *portptr = newport;
return 1;
}
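
The protocol hunks above (icmp, tcp, udp) keep patching the transport checksum incrementally with ip_nat_cheat_check(): the old address and port are folded out of the one's-complement sum and the new values folded in, instead of recomputing over the whole segment. A self-contained illustration of that update for a single 16-bit field, in the RFC 1624 form rather than the kernel helper itself:

#include <stdint.h>
#include <stdio.h>

static uint16_t csum_replace16(uint16_t check, uint16_t old_val, uint16_t new_val)
{
        /* HC' = ~(~HC + ~m + m'), computed in 32 bits and then folded */
        uint32_t sum = (uint16_t)~check;

        sum += (uint16_t)~old_val;
        sum += new_val;
        sum = (sum & 0xffff) + (sum >> 16);     /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
}

int main(void)
{
        /* e.g. a destination port field rewritten from 21 to 2121 */
        uint16_t before = 0x1c46;

        printf("0x%04x\n", csum_replace16(before, 21, 2121));
        return 0;
}
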
static int
unknown_manip_pkt(struct sk_buff **pskb,
unsigned int iphdroff,
- const struct ip_conntrack_manip *manip,
+ const struct ip_conntrack_tuple *tuple,
enum ip_nat_manip_type maniptype)
{
return 1;
#include <linux/skbuff.h>
#include <linux/proc_fs.h>
#include <net/checksum.h>
+#include <net/route.h>
#include <linux/bitops.h>
#define ASSERT_READ_LOCK(x) MUST_BE_READ_LOCKED(&ip_nat_lock)
#define NAT_VALID_HOOKS ((1<<NF_IP_PRE_ROUTING) | (1<<NF_IP_POST_ROUTING) | (1<<NF_IP_LOCAL_OUT))
-/* Standard entry. */
-struct ipt_standard
-{
- struct ipt_entry entry;
- struct ipt_standard_target target;
-};
-
-struct ipt_error_target
-{
- struct ipt_entry_target target;
- char errorname[IPT_FUNCTION_MAXNAMELEN];
-};
-
-struct ipt_error
-{
- struct ipt_entry entry;
- struct ipt_error_target target;
-};
-
static struct
{
struct ipt_replace repl;
struct ipt_standard entries[3];
struct ipt_error term;
-} nat_initial_table = {
- { "nat", NAT_VALID_HOOKS, 4,
+} nat_initial_table __initdata
+= { { "nat", NAT_VALID_HOOKS, 4,
sizeof(struct ipt_standard) * 3 + sizeof(struct ipt_error),
{ [NF_IP_PRE_ROUTING] = 0,
[NF_IP_POST_ROUTING] = sizeof(struct ipt_standard),
static struct ipt_table nat_table = {
.name = "nat",
- .table = &nat_initial_table.repl,
.valid_hooks = NAT_VALID_HOOKS,
.lock = RW_LOCK_UNLOCKED,
.me = THIS_MODULE,
{
struct ip_conntrack *ct;
enum ip_conntrack_info ctinfo;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
IP_NF_ASSERT(hooknum == NF_IP_POST_ROUTING);
|| ctinfo == IP_CT_RELATED + IP_CT_IS_REPLY));
IP_NF_ASSERT(out);
- return ip_nat_setup_info(ct, targinfo, hooknum);
+ return ip_nat_setup_info(ct, &mr->range[0], hooknum);
+}
+
+/* Before 2.6.11 we did implicit source NAT if required. Warn about change. */
+static void warn_if_extra_mangle(u32 dstip, u32 srcip)
+{
+ static int warned = 0;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = dstip } } };
+ struct rtable *rt;
+
+ if (ip_route_output_key(&rt, &fl) != 0)
+ return;
+
+ if (rt->rt_src != srcip && !warned) {
+ printk("NAT: no longer support implicit source local NAT\n");
+ printk("NAT: packet src %u.%u.%u.%u -> dst %u.%u.%u.%u\n",
+ NIPQUAD(srcip), NIPQUAD(dstip));
+ warned = 1;
+ }
+ ip_rt_put(rt);
}
static unsigned int ipt_dnat_target(struct sk_buff **pskb,
{
struct ip_conntrack *ct;
enum ip_conntrack_info ctinfo;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
-#ifdef CONFIG_IP_NF_NAT_LOCAL
IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING
|| hooknum == NF_IP_LOCAL_OUT);
-#else
- IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING);
-#endif
ct = ip_conntrack_get(*pskb, &ctinfo);
/* Connection must be valid and new. */
IP_NF_ASSERT(ct && (ctinfo == IP_CT_NEW || ctinfo == IP_CT_RELATED));
- return ip_nat_setup_info(ct, targinfo, hooknum);
+ if (hooknum == NF_IP_LOCAL_OUT
+ && mr->range[0].flags & IP_NAT_RANGE_MAP_IPS)
+ warn_if_extra_mangle((*pskb)->nh.iph->daddr,
+ mr->range[0].min_ip);
+
+ return ip_nat_setup_info(ct, &mr->range[0], hooknum);
}
static int ipt_snat_checkentry(const char *tablename,
unsigned int targinfosize,
unsigned int hook_mask)
{
- struct ip_nat_multi_range *mr = targinfo;
+ struct ip_nat_multi_range_compat *mr = targinfo;
/* Must be a valid range */
- if (targinfosize < sizeof(struct ip_nat_multi_range)) {
- DEBUGP("SNAT: Target size %u too small\n", targinfosize);
+ if (mr->rangesize != 1) {
+ printk("SNAT: multiple ranges no longer supported\n");
return 0;
}
- if (targinfosize != IPT_ALIGN((sizeof(struct ip_nat_multi_range)
- + (sizeof(struct ip_nat_range)
- * (mr->rangesize - 1))))) {
+ if (targinfosize != IPT_ALIGN(sizeof(struct ip_nat_multi_range_compat))) {
DEBUGP("SNAT: Target size %u wrong for %u ranges\n",
targinfosize, mr->rangesize);
return 0;
unsigned int targinfosize,
unsigned int hook_mask)
{
- struct ip_nat_multi_range *mr = targinfo;
+ struct ip_nat_multi_range_compat *mr = targinfo;
/* Must be a valid range */
- if (targinfosize < sizeof(struct ip_nat_multi_range)) {
- DEBUGP("DNAT: Target size %u too small\n", targinfosize);
+ if (mr->rangesize != 1) {
+ printk("DNAT: multiple ranges no longer supported\n");
return 0;
}
- if (targinfosize != IPT_ALIGN((sizeof(struct ip_nat_multi_range)
- + (sizeof(struct ip_nat_range)
- * (mr->rangesize - 1))))) {
+ if (targinfosize != IPT_ALIGN(sizeof(struct ip_nat_multi_range_compat))) {
DEBUGP("DNAT: Target size %u wrong for %u ranges\n",
targinfosize, mr->rangesize);
return 0;
return 0;
}
-#ifndef CONFIG_IP_NF_NAT_LOCAL
- if (hook_mask & (1 << NF_IP_LOCAL_OUT)) {
- DEBUGP("DNAT: CONFIG_IP_NF_NAT_LOCAL not enabled\n");
- return 0;
- }
-#endif
-
return 1;
}
= (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC
? conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip
: conntrack->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip);
- struct ip_nat_multi_range mr
- = { 1, { { IP_NAT_RANGE_MAP_IPS, ip, ip, { 0 }, { 0 } } } };
+ struct ip_nat_range range
+ = { IP_NAT_RANGE_MAP_IPS, ip, ip, { 0 }, { 0 } };
DEBUGP("Allocating NULL binding for %p (%u.%u.%u.%u)\n", conntrack,
NIPQUAD(ip));
- return ip_nat_setup_info(conntrack, &mr, hooknum);
+ return ip_nat_setup_info(conntrack, &range, hooknum);
}
int ip_nat_rule_find(struct sk_buff **pskb,
ret = ipt_do_table(pskb, hooknum, in, out, &nat_table, NULL);
if (ret == NF_ACCEPT) {
- if (!(info->initialized & (1 << HOOK2MANIP(hooknum))))
+ if (!ip_nat_initialized(ct, HOOK2MANIP(hooknum)))
/* NUL mapping */
ret = alloc_null_binding(ct, info, hooknum);
}
{
int ret;
- ret = ipt_register_table(&nat_table);
+ ret = ipt_register_table(&nat_table, &nat_initial_table.repl);
if (ret != 0)
return ret;
ret = ipt_register_target(&ipt_snat_reg);
MODULE_DESCRIPTION("tftp NAT helper");
MODULE_LICENSE("GPL");
-#define MAX_PORTS 8
-
-static int ports[MAX_PORTS];
-static int ports_c = 0;
-module_param_array(ports, int, &ports_c, 0400);
-MODULE_PARM_DESC(ports, "port numbers of tftp servers");
-
-#if 0
-#define DEBUGP(format, args...) printk("%s:%s:" format, \
- __FILE__, __FUNCTION__ , ## args)
-#else
-#define DEBUGP(format, args...)
-#endif
-static unsigned int
-tftp_nat_help(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp,
- struct ip_nat_info *info,
- enum ip_conntrack_info ctinfo,
- unsigned int hooknum,
- struct sk_buff **pskb)
+static unsigned int help(struct sk_buff **pskb,
+ enum ip_conntrack_info ctinfo,
+ struct ip_conntrack_expect *exp)
{
- int dir = CTINFO2DIR(ctinfo);
- struct tftphdr _tftph, *tfh;
- struct ip_conntrack_tuple repl;
-
- if (!((hooknum == NF_IP_POST_ROUTING && dir == IP_CT_DIR_ORIGINAL)
- || (hooknum == NF_IP_PRE_ROUTING && dir == IP_CT_DIR_REPLY)))
- return NF_ACCEPT;
-
- if (!exp) {
- DEBUGP("no conntrack expectation to modify\n");
- return NF_ACCEPT;
- }
-
- tfh = skb_header_pointer(*pskb,
- (*pskb)->nh.iph->ihl*4+sizeof(struct udphdr),
- sizeof(_tftph), &_tftph);
- if (tfh == NULL)
+ exp->saved_proto.udp.port = exp->tuple.dst.u.tcp.port;
+ exp->dir = IP_CT_DIR_REPLY;
+ exp->expectfn = ip_nat_follow_master;
+ if (ip_conntrack_expect_related(exp) != 0) {
+ ip_conntrack_expect_free(exp);
return NF_DROP;
-
- switch (ntohs(tfh->opcode)) {
- /* RRQ and WRQ works the same way */
- case TFTP_OPCODE_READ:
- case TFTP_OPCODE_WRITE:
- repl = ct->tuplehash[IP_CT_DIR_REPLY].tuple;
- DEBUGP("");
- DUMP_TUPLE(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple);
- DUMP_TUPLE(&ct->tuplehash[IP_CT_DIR_REPLY].tuple);
- DEBUGP("expecting: ");
- DUMP_TUPLE(&repl);
- DUMP_TUPLE(&exp->mask);
- ip_conntrack_change_expect(exp, &repl);
- break;
- default:
- DEBUGP("Unknown opcode\n");
- }
-
- return NF_ACCEPT;
-}
-
-static unsigned int
-tftp_nat_expected(struct sk_buff **pskb,
- unsigned int hooknum,
- struct ip_conntrack *ct,
- struct ip_nat_info *info)
-{
- const struct ip_conntrack *master = ct->master->expectant;
- const struct ip_conntrack_tuple *orig =
- &master->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
- struct ip_nat_multi_range mr;
-#if 0
- const struct ip_conntrack_tuple *repl =
- &master->tuplehash[IP_CT_DIR_REPLY].tuple;
- struct udphdr _udph, *uh;
-
- uh = skb_header_pointer(*pskb,
- (*pskb)->nh.iph->ihl*4,
- sizeof(_udph), &_udph);
- if (uh == NULL)
- return NF_DROP;
-#endif
-
- IP_NF_ASSERT(info);
- IP_NF_ASSERT(master);
- IP_NF_ASSERT(!(info->initialized & (1 << HOOK2MANIP(hooknum))));
-
- mr.rangesize = 1;
- mr.range[0].flags = IP_NAT_RANGE_MAP_IPS;
-
- if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC) {
- mr.range[0].min_ip = mr.range[0].max_ip = orig->dst.ip;
- DEBUGP("orig: %u.%u.%u.%u:%u <-> %u.%u.%u.%u:%u "
- "newsrc: %u.%u.%u.%u\n",
- NIPQUAD((*pskb)->nh.iph->saddr), ntohs(uh->source),
- NIPQUAD((*pskb)->nh.iph->daddr), ntohs(uh->dest),
- NIPQUAD(orig->dst.ip));
- } else {
- mr.range[0].min_ip = mr.range[0].max_ip = orig->src.ip;
- mr.range[0].min.udp.port = mr.range[0].max.udp.port =
- orig->src.u.udp.port;
- mr.range[0].flags |= IP_NAT_RANGE_PROTO_SPECIFIED;
-
- DEBUGP("orig: %u.%u.%u.%u:%u <-> %u.%u.%u.%u:%u "
- "newdst: %u.%u.%u.%u:%u\n",
- NIPQUAD((*pskb)->nh.iph->saddr), ntohs(uh->source),
- NIPQUAD((*pskb)->nh.iph->daddr), ntohs(uh->dest),
- NIPQUAD(orig->src.ip), ntohs(orig->src.u.udp.port));
}
-
- return ip_nat_setup_info(ct,&mr,hooknum);
+ return NF_ACCEPT;
}
-static struct ip_nat_helper tftp[MAX_PORTS];
-static char tftp_names[MAX_PORTS][10];
-
-static void fini(void)
+static void __exit fini(void)
{
- int i;
-
- for (i = 0 ; i < ports_c; i++) {
- DEBUGP("unregistering helper for port %d\n", ports[i]);
- ip_nat_helper_unregister(&tftp[i]);
- }
+ ip_nat_tftp_hook = NULL;
+	/* Make sure no one calls it in the meantime. */
+ synchronize_net();
}
static int __init init(void)
{
- int i, ret = 0;
- char *tmpname;
-
- if (ports_c == 0)
- ports[ports_c++] = TFTP_PORT;
-
- for (i = 0; i < ports_c; i++) {
- memset(&tftp[i], 0, sizeof(struct ip_nat_helper));
-
- tftp[i].tuple.dst.protonum = IPPROTO_UDP;
- tftp[i].tuple.src.u.udp.port = htons(ports[i]);
- tftp[i].mask.dst.protonum = 0xFFFF;
- tftp[i].mask.src.u.udp.port = 0xFFFF;
- tftp[i].help = tftp_nat_help;
- tftp[i].flags = 0;
- tftp[i].me = THIS_MODULE;
- tftp[i].expect = tftp_nat_expected;
-
- tmpname = &tftp_names[i][0];
- if (ports[i] == TFTP_PORT)
- sprintf(tmpname, "tftp");
- else
- sprintf(tmpname, "tftp-%d", i);
- tftp[i].name = tmpname;
-
- DEBUGP("ip_nat_tftp: registering for port %d: name %s\n",
- ports[i], tftp[i].name);
- ret = ip_nat_helper_register(&tftp[i]);
-
- if (ret) {
- printk("ip_nat_tftp: unable to register for port %d\n",
- ports[i]);
- fini();
- return ret;
- }
- }
- return ret;
+ BUG_ON(ip_nat_tftp_hook);
+ ip_nat_tftp_hook = help;
+ return 0;
}
module_init(init);
static unsigned char copy_mode = IPQ_COPY_NONE;
static unsigned int queue_maxlen = IPQ_QMAX_DEFAULT;
-static rwlock_t queue_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(queue_lock);
static int peer_pid;
static unsigned int copy_range;
static unsigned int queue_total;
#include <linux/netfilter_ipv4/ip_tables.h>
#include <linux/netfilter_ipv4/ipt_CLUSTERIP.h>
#include <linux/netfilter_ipv4/ip_conntrack.h>
+#include <linux/netfilter_ipv4/lockhelp.h>
#define CLUSTERIP_VERSION "0.6"
/* clusterip_lock protects the clusterip_configs list _AND_ the configurable
 * data within all structures (num_local_nodes, local_nodes[]) */
-DECLARE_RWLOCK(clusterip_lock);
+static DECLARE_RWLOCK(clusterip_lock);
#ifdef CONFIG_PROC_FS
static struct file_operations clusterip_proc_fops;
static inline int
set_ect_tcp(struct sk_buff **pskb, const struct ipt_ECN_info *einfo, int inward)
{
- struct tcphdr _tcph, *th;
+ struct tcphdr _tcph, *tcph;
u_int16_t diffs[2];
	/* Not enough header? */
- th = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4,
- sizeof(_tcph), &_tcph);
- if (th == NULL)
+ tcph = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4,
+ sizeof(_tcph), &_tcph);
+ if (!tcph)
return 0;
- diffs[0] = ((u_int16_t *)th)[6];
- if (einfo->operation & IPT_ECN_OP_SET_ECE)
- th->ece = einfo->proto.tcp.ece;
+ if (!(einfo->operation & IPT_ECN_OP_SET_ECE
+ || tcph->ece == einfo->proto.tcp.ece)
+ && (!(einfo->operation & IPT_ECN_OP_SET_CWR
+ || tcph->cwr == einfo->proto.tcp.cwr)))
+ return 1;
+
+ if (!skb_ip_make_writable(pskb, (*pskb)->nh.iph->ihl*4+sizeof(*tcph)))
+ return 0;
+ tcph = (void *)(*pskb)->nh.iph + (*pskb)->nh.iph->ihl*4;
+ diffs[0] = ((u_int16_t *)tcph)[6];
+ if (einfo->operation & IPT_ECN_OP_SET_ECE)
+ tcph->ece = einfo->proto.tcp.ece;
if (einfo->operation & IPT_ECN_OP_SET_CWR)
- th->cwr = einfo->proto.tcp.cwr;
- diffs[1] = ((u_int16_t *)&th)[6];
-
- /* Only mangle if it's changed. */
- if (diffs[0] != diffs[1]) {
- diffs[0] = diffs[0] ^ 0xFFFF;
- if (!skb_ip_make_writable(pskb,
- (*pskb)->nh.iph->ihl*4+sizeof(_tcph)))
+ tcph->cwr = einfo->proto.tcp.cwr;
+ diffs[1] = ((u_int16_t *)tcph)[6];
+ diffs[0] = diffs[0] ^ 0xFFFF;
+
+ if ((*pskb)->ip_summed != CHECKSUM_HW)
+ tcph->check = csum_fold(csum_partial((char *)diffs,
+ sizeof(diffs),
+ tcph->check^0xFFFF));
+ else
+ if (skb_checksum_help(*pskb, inward))
return 0;
-
- if (th != &_tcph)
- memcpy(&_tcph, th, sizeof(_tcph));
-
- if ((*pskb)->ip_summed != CHECKSUM_HW)
- _tcph.check = csum_fold(csum_partial((char *)diffs,
- sizeof(diffs),
- _tcph.check^0xFFFF));
- memcpy((*pskb)->data + (*pskb)->nh.iph->ihl*4,
- &_tcph, sizeof(_tcph));
- if ((*pskb)->ip_summed == CHECKSUM_HW)
- if (skb_checksum_help(*pskb, inward))
- return 0;
- (*pskb)->nfcache |= NFC_ALTERED;
- }
+ (*pskb)->nfcache |= NFC_ALTERED;
return 1;
}
}
if ((einfo->operation & (IPT_ECN_OP_SET_ECE|IPT_ECN_OP_SET_CWR))
- && e->ip.proto != IPPROTO_TCP) {
+ && (e->ip.proto != IPPROTO_TCP || (e->ip.invflags & IPT_INV_PROTO))) {
printk(KERN_WARNING "ECN: cannot use TCP operations on a "
"non-tcp rule\n");
return 0;
unsigned int targinfosize,
unsigned int hook_mask)
{
- const struct ip_nat_multi_range *mr = targinfo;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
if (strcmp(tablename, "nat") != 0) {
DEBUGP(MODULENAME":check: bad table `%s'.\n", tablename);
struct ip_conntrack *ct;
enum ip_conntrack_info ctinfo;
u_int32_t new_ip, netmask;
- const struct ip_nat_multi_range *mr = targinfo;
- struct ip_nat_multi_range newrange;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
+ struct ip_nat_range newrange;
IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING
|| hooknum == NF_IP_POST_ROUTING);
new_ip = (*pskb)->nh.iph->saddr & ~netmask;
new_ip |= mr->range[0].min_ip & netmask;
- newrange = ((struct ip_nat_multi_range)
- { 1, { { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
- new_ip, new_ip,
- mr->range[0].min, mr->range[0].max } } });
+ newrange = ((struct ip_nat_range)
+ { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
+ new_ip, new_ip,
+ mr->range[0].min, mr->range[0].max });
/* Hand modified range to generic setup. */
return ip_nat_setup_info(ct, &newrange, hooknum);
unsigned int targinfosize,
unsigned int hook_mask)
{
- const struct ip_nat_multi_range *mr = targinfo;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
if (strcmp(tablename, "nat") != 0) {
DEBUGP("redirect_check: bad table `%s'.\n", table);
struct ip_conntrack *ct;
enum ip_conntrack_info ctinfo;
u_int32_t newdst;
- const struct ip_nat_multi_range *mr = targinfo;
- struct ip_nat_multi_range newrange;
+ const struct ip_nat_multi_range_compat *mr = targinfo;
+ struct ip_nat_range newrange;
IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING
|| hooknum == NF_IP_LOCAL_OUT);
}
/* Transfer from original range. */
- newrange = ((struct ip_nat_multi_range)
- { 1, { { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
- newdst, newdst,
- mr->range[0].min, mr->range[0].max } } });
+ newrange = ((struct ip_nat_range)
+ { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
+ newdst, newdst,
+ mr->range[0].min, mr->range[0].max });
/* Hand modified range to generic setup. */
return ip_nat_setup_info(ct, &newrange, hooknum);
struct ip_conntrack *ct;
enum ip_conntrack_info ctinfo;
u_int32_t tmpip, aindex, new_ip;
- const struct ipt_same_info *mr = targinfo;
- struct ip_nat_multi_range newrange;
+ const struct ipt_same_info *same = targinfo;
+ struct ip_nat_range newrange;
const struct ip_conntrack_tuple *t;
IP_NF_ASSERT(hooknum == NF_IP_PRE_ROUTING ||
/* Base new source on real src ip and optionally dst ip,
giving some hope for consistency across reboots.
- Here we calculate the index in mr->iparray which
+ Here we calculate the index in same->iparray which
holds the ipaddress we should use */
tmpip = ntohl(t->src.ip);
- if (!(mr->info & IPT_SAME_NODST))
+ if (!(same->info & IPT_SAME_NODST))
tmpip += ntohl(t->dst.ip);
- aindex = tmpip % mr->ipnum;
-
- new_ip = htonl(mr->iparray[aindex]);
+ aindex = tmpip % same->ipnum;
+
+ new_ip = htonl(same->iparray[aindex]);
DEBUGP("ipt_SAME: src=%u.%u.%u.%u dst=%u.%u.%u.%u, "
"new src=%u.%u.%u.%u\n",
NIPQUAD(new_ip));
/* Transfer from original range. */
- newrange = ((struct ip_nat_multi_range)
- { 1, { { mr->range[0].flags | IP_NAT_RANGE_MAP_IPS,
- new_ip, new_ip,
- mr->range[0].min, mr->range[0].max } } });
+ newrange = ((struct ip_nat_range)
+ { same->range[0].flags, new_ip, new_ip,
+ /* FIXME: Use ports from correct range! */
+ same->range[0].min, same->range[0].max });
/* Hand modified range to generic setup. */
return ip_nat_setup_info(ct, &newrange, hooknum);
struct list_head hash[0]; /* hashtable itself */
};
-DECLARE_RWLOCK(hashlimit_lock); /* protects htables list */
+static DECLARE_RWLOCK(hashlimit_lock); /* protects htables list */
+static DECLARE_MUTEX(hlimit_mutex); /* additional checkentry protection */
static LIST_HEAD(hashlimit_htables);
static kmem_cache_t *hashlimit_cachep;
if (!r->cfg.expire)
return 0;
+ /* This is the best we've got: We cannot release and re-grab lock,
+ * since checkentry() is called before ip_tables.c grabs ipt_mutex.
+ * We also cannot grab the hashtable spinlock, since htable_create will
+ * call vmalloc, and that can sleep. And we cannot just re-search
+	 * the list of htables in htable_create(), since then we would
+ * create duplicate proc files. -HW */
+ down(&hlimit_mutex);
r->hinfo = htable_find_get(r->name);
if (!r->hinfo && (htable_create(r) != 0)) {
+ up(&hlimit_mutex);
return 0;
}
+ up(&hlimit_mutex);
/* Ugly hack: For SMP, we only want to use one set */
r->u.master = r;
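
The comment above explains why checkentry() serialises the find-or-create of a hash table behind hlimit_mutex: without it, two rules loaded concurrently could each create a table, and its proc entry, for the same name. A user-space analogue of that pattern, with a pthread mutex standing in for the kernel semaphore (all names invented):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct htable {
        char name[32];
        struct htable *next;
};

static pthread_mutex_t hlimit_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct htable *htables;

struct htable *htable_find_or_create(const char *name)
{
        struct htable *h;

        pthread_mutex_lock(&hlimit_mutex);
        for (h = htables; h; h = h->next)
                if (strcmp(h->name, name) == 0)
                        goto out;                /* found: reuse it */

        h = calloc(1, sizeof(*h));               /* created exactly once */
        if (h) {
                strncpy(h->name, name, sizeof(h->name) - 1);
                h->next = htables;
                htables = h;
        }
out:
        pthread_mutex_unlock(&hlimit_mutex);
        return h;
}
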
if (*pos >= htable->cfg.size)
return NULL;
- bucket = kmalloc(sizeof(unsigned int), GFP_KERNEL);
+ bucket = kmalloc(sizeof(unsigned int), GFP_ATOMIC);
if (!bucket)
return ERR_PTR(-ENOMEM);
goto cleanup_nothing;
}
- /* FIXME: do we really want HWCACHE_ALIGN since our objects are
- * quite small ? */
hashlimit_cachep = kmem_cache_create("ipt_hashlimit",
sizeof(struct dsthash_ent), 0,
- SLAB_HWCACHE_ALIGN, NULL, NULL);
+ 0, NULL, NULL);
if (!hashlimit_cachep) {
printk(KERN_ERR "Unable to create ipt_hashlimit slab cache\n");
ret = -ENOMEM;
* see net/sched/sch_tbf.c in the linux source tree
*/
-static spinlock_t limit_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(limit_lock);
/* Rusty: This is my (non-mathematically-inclined) understanding of
this algorithm. The `average rate' in jiffies becomes your initial
return 0;
}
+/* Returns 1 if the port is matched by the test, 0 otherwise. */
+static inline int
+ports_match_v1(const struct ipt_multiport_v1 *minfo,
+ u_int16_t src, u_int16_t dst)
+{
+ unsigned int i;
+ u_int16_t s, e;
+
+ for (i=0; i < minfo->count; i++) {
+ s = minfo->ports[i];
+
+ if (minfo->pflags[i]) {
+ /* range port matching */
+ e = minfo->ports[++i];
+ duprintf("src or dst matches with %d-%d?\n", s, e);
+
+ if (minfo->flags == IPT_MULTIPORT_SOURCE
+ && src >= s && src <= e)
+ return 1 ^ minfo->invert;
+ if (minfo->flags == IPT_MULTIPORT_DESTINATION
+ && dst >= s && dst <= e)
+ return 1 ^ minfo->invert;
+ if (minfo->flags == IPT_MULTIPORT_EITHER
+ && ((dst >= s && dst <= e)
+ || (src >= s && src <= e)))
+ return 1 ^ minfo->invert;
+ } else {
+ /* exact port matching */
+ duprintf("src or dst matches with %d?\n", s);
+
+ if (minfo->flags == IPT_MULTIPORT_SOURCE
+ && src == s)
+ return 1 ^ minfo->invert;
+ if (minfo->flags == IPT_MULTIPORT_DESTINATION
+ && dst == s)
+ return 1 ^ minfo->invert;
+ if (minfo->flags == IPT_MULTIPORT_EITHER
+ && (src == s || dst == s))
+ return 1 ^ minfo->invert;
+ }
+ }
+
+ return minfo->invert;
+}
+
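
ports_match_v1() above extends multiport matching with ranges: an entry whose pflags slot is set pairs with the following entry to form an inclusive range, and the invert field flips the final verdict. The fragment below restates just that range logic for a single port value in compilable form; the struct layout is simplified for the example and is not the real ipt_multiport_v1:

#include <stdint.h>
#include <stdbool.h>

struct mport_rule {
        uint8_t  count;                 /* number of used slots in ports[] */
        uint8_t  invert;
        uint16_t ports[15];
        uint8_t  pflags[15];            /* nonzero: this slot starts a range */
};

bool mport_match(const struct mport_rule *r, uint16_t port)
{
        unsigned int i;

        for (i = 0; i < r->count; i++) {
                uint16_t s = r->ports[i];

                if (r->pflags[i]) {
                        /* range entry: iptables guarantees the end slot exists */
                        uint16_t e = r->ports[++i];

                        if (port >= s && port <= e)
                                return !r->invert;
                } else if (port == s) {
                        return !r->invert;
                }
        }
        return r->invert;
}
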
static int
match(const struct sk_buff *skb,
const struct net_device *in,
u16 _ports[2], *pptr;
const struct ipt_multiport *multiinfo = matchinfo;
- /* Must not be a fragment. */
if (offset)
return 0;
- /* Must be big enough to read ports (both UDP and TCP have
- them at the start). */
pptr = skb_header_pointer(skb, skb->nh.iph->ihl * 4,
- sizeof(_ports), &_ports[0]);
+ sizeof(_ports), _ports);
if (pptr == NULL) {
/* We've been asked to examine this packet, and we
* can't. Hence, no choice but to drop.
ntohs(pptr[0]), ntohs(pptr[1]));
}
+static int
+match_v1(const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const void *matchinfo,
+ int offset,
+ int *hotdrop)
+{
+ u16 _ports[2], *pptr;
+ const struct ipt_multiport_v1 *multiinfo = matchinfo;
+
+ if (offset)
+ return 0;
+
+ pptr = skb_header_pointer(skb, skb->nh.iph->ihl * 4,
+ sizeof(_ports), _ports);
+ if (pptr == NULL) {
+ /* We've been asked to examine this packet, and we
+ * can't. Hence, no choice but to drop.
+ */
+ duprintf("ipt_multiport:"
+ " Dropping evil offset=0 tinygram.\n");
+ *hotdrop = 1;
+ return 0;
+ }
+
+ return ports_match_v1(multiinfo, ntohs(pptr[0]), ntohs(pptr[1]));
+}
+
/* Called when user tries to insert an entry of this type. */
static int
checkentry(const char *tablename,
unsigned int matchsize,
unsigned int hook_mask)
{
- const struct ipt_multiport *multiinfo = matchinfo;
-
- if (matchsize != IPT_ALIGN(sizeof(struct ipt_multiport)))
- return 0;
+ return (matchsize == IPT_ALIGN(sizeof(struct ipt_multiport)));
+}
- /* Must specify proto == TCP/UDP, no unknown flags or bad count */
- return (ip->proto == IPPROTO_TCP || ip->proto == IPPROTO_UDP)
- && !(ip->invflags & IPT_INV_PROTO)
- && matchsize == IPT_ALIGN(sizeof(struct ipt_multiport))
- && (multiinfo->flags == IPT_MULTIPORT_SOURCE
- || multiinfo->flags == IPT_MULTIPORT_DESTINATION
- || multiinfo->flags == IPT_MULTIPORT_EITHER)
- && multiinfo->count <= IPT_MULTI_PORTS;
+static int
+checkentry_v1(const char *tablename,
+ const struct ipt_ip *ip,
+ void *matchinfo,
+ unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ return (matchsize == IPT_ALIGN(sizeof(struct ipt_multiport_v1)));
}
static struct ipt_match multiport_match = {
.name = "multiport",
+ .revision = 0,
.match = &match,
.checkentry = &checkentry,
.me = THIS_MODULE,
};
+static struct ipt_match multiport_match_v1 = {
+ .name = "multiport",
+ .revision = 1,
+ .match = &match_v1,
+ .checkentry = &checkentry_v1,
+ .me = THIS_MODULE,
+};
+
static int __init init(void)
{
- return ipt_register_match(&multiport_match);
+ int err;
+
+ err = ipt_register_match(&multiport_match);
+ if (!err) {
+ err = ipt_register_match(&multiport_match_v1);
+ if (err)
+ ipt_unregister_match(&multiport_match);
+ }
+
+ return err;
}
static void __exit fini(void)
{
ipt_unregister_match(&multiport_match);
+ ipt_unregister_match(&multiport_match_v1);
}
module_init(init);
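
init() above registers both match revisions and unregisters revision 0 again if registering revision 1 fails, so a partial failure leaves nothing behind. A trivial user-space stand-in for that unwind-on-error shape (names invented, register_b() hard-wired to fail for the example):

static int register_a(void) { return 0; }
static int register_b(void) { return -1; }      /* pretend this one fails */
static void unregister_a(void) { }

int init_both(void)
{
        int err = register_a();

        if (err)
                return err;

        err = register_b();
        if (err)
                unregister_a();                  /* roll back the first step */

        return err;
}
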
/* We protect r_list with this spinlock so two processors are not modifying
* the list at the same time.
*/
-static spinlock_t recent_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(recent_lock);
#ifdef CONFIG_PROC_FS
/* Our /proc/net/ipt_recent entry */
int *hotdrop);
/* Function to hash a given address into the hash table of table_size size */
-int hash_func(unsigned int addr, int table_size)
+static int hash_func(unsigned int addr, int table_size)
{
int result = 0;
unsigned int value = addr;
#endif
curr_table = vmalloc(sizeof(struct recent_ip_tables));
- if(curr_table == NULL) return -ENOMEM;
+ if(curr_table == NULL) return 0;
spin_lock_init(&curr_table->list_lock);
curr_table->next = NULL;
#endif
curr_table->table = vmalloc(sizeof(struct recent_ip_list)*ip_list_tot);
- if(curr_table->table == NULL) { vfree(curr_table); return -ENOMEM; }
+ if(curr_table->table == NULL) { vfree(curr_table); return 0; }
memset(curr_table->table,0,sizeof(struct recent_ip_list)*ip_list_tot);
#ifdef DEBUG
if(debug) printk(KERN_INFO RECENT_NAME ": checkentry: Allocating %d for pkt_list.\n",
printk(KERN_INFO RECENT_NAME ": checkentry: unable to allocate for pkt_list.\n");
vfree(curr_table->table);
vfree(curr_table);
- return -ENOMEM;
+ return 0;
}
for(c = 0; c < ip_list_tot; c++) {
curr_table->table[c].last_pkts = hold + c*ip_pkt_list_tot;
vfree(hold);
vfree(curr_table->table);
vfree(curr_table);
- return -ENOMEM;
+ return 0;
}
for(c = 0; c < ip_list_hash_size; c++) {
vfree(hold);
vfree(curr_table->table);
vfree(curr_table);
- return -ENOMEM;
+ return 0;
}
for(c = 0; c < ip_list_tot; c++) {
curr_table->time_info[c].position = c;
if(debug) printk(KERN_INFO RECENT_NAME ": checkentry() create_proc failed, no tables.\n");
#endif
spin_unlock_bh(&recent_lock);
- return -ENOMEM;
+ return 0;
}
while( strncmp(info->name,curr_table->name,IPT_RECENT_NAME_LEN) && (last_table = curr_table) && (curr_table = curr_table->next) );
if(!curr_table) {
if(debug) printk(KERN_INFO RECENT_NAME ": checkentry() create_proc failed, table already destroyed.\n");
#endif
spin_unlock_bh(&recent_lock);
- return -ENOMEM;
+ return 0;
}
if(last_table) last_table->next = curr_table->next; else r_tables = curr_table->next;
spin_unlock_bh(&recent_lock);
vfree(hold);
vfree(curr_table->table);
vfree(curr_table);
- return -ENOMEM;
+ return 0;
}
curr_table->status_proc->owner = THIS_MODULE;
/* Kernel module initialization. */
static int __init init(void)
{
- int count;
+ int err, count;
printk(version);
#ifdef CONFIG_PROC_FS
if(debug) printk(KERN_INFO RECENT_NAME ": ip_list_hash_size: %d\n",ip_list_hash_size);
#endif
- return ipt_register_match(&recent_match);
+ err = ipt_register_match(&recent_match);
+ if (err)
+ remove_proc_entry("ipt_recent", proc_net);
+ return err;
}
/* Kernel module destruction. */
#define FILTER_VALID_HOOKS ((1 << NF_IP_LOCAL_IN) | (1 << NF_IP_FORWARD) | (1 << NF_IP_LOCAL_OUT))
-/* Standard entry. */
-struct ipt_standard
-{
- struct ipt_entry entry;
- struct ipt_standard_target target;
-};
-
-struct ipt_error_target
-{
- struct ipt_entry_target target;
- char errorname[IPT_FUNCTION_MAXNAMELEN];
-};
-
-struct ipt_error
-{
- struct ipt_entry entry;
- struct ipt_error_target target;
-};
-
static struct
{
struct ipt_replace repl;
struct ipt_standard entries[3];
struct ipt_error term;
-} initial_table = {
- { "filter", FILTER_VALID_HOOKS, 4,
+} initial_table __initdata
+= { { "filter", FILTER_VALID_HOOKS, 4,
sizeof(struct ipt_standard) * 3 + sizeof(struct ipt_error),
{ [NF_IP_LOCAL_IN] = 0,
[NF_IP_FORWARD] = sizeof(struct ipt_standard),
static struct ipt_table packet_filter = {
.name = "filter",
- .table = &initial_table.repl,
.valid_hooks = FILTER_VALID_HOOKS,
.lock = RW_LOCK_UNLOCKED,
.me = THIS_MODULE
initial_table.entries[1].target.verdict = -forward - 1;
/* Register table */
- ret = ipt_register_table(&packet_filter);
+ ret = ipt_register_table(&packet_filter, &initial_table.repl);
if (ret < 0)
return ret;
*/
__u32 cookie_v4_init_sequence(struct sock *sk, struct sk_buff *skb, __u16 *mssp)
{
- struct tcp_opt *tp = tcp_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
int mssind;
const __u16 mss = *mssp;
struct open_request *req,
struct dst_entry *dst)
{
- struct tcp_opt *tp = tcp_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
struct sock *child;
child = tp->af_specific->syn_recv_sock(sk, skb, req, dst);
struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb,
struct ip_options *opt)
{
- struct tcp_opt *tp = tcp_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
__u32 cookie = ntohl(skb->h.th->ack_seq) - 1;
struct sock *ret = sk;
struct open_request *req;
static int tcpdiag_fill(struct sk_buff *skb, struct sock *sk,
int ext, u32 pid, u32 seq, u16 nlmsg_flags)
{
- struct inet_opt *inet = inet_sk(sk);
- struct tcp_opt *tp = tcp_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
struct tcpdiagmsg *r;
struct nlmsghdr *nlh;
struct tcp_info *info = NULL;
if (cb->nlh->nlmsg_len > 4 + NLMSG_SPACE(sizeof(*r))) {
struct tcpdiag_entry entry;
struct rtattr *bc = (struct rtattr *)(r + 1);
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
entry.family = sk->sk_family;
#ifdef CONFIG_IP_TCPDIAG_IPV6
struct open_request *req,
u32 pid, u32 seq)
{
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
unsigned char *b = skb->tail;
struct tcpdiagmsg *r;
struct nlmsghdr *nlh;
{
struct tcpdiag_entry entry;
struct tcpdiagreq *r = NLMSG_DATA(cb->nlh);
- struct tcp_opt *tp = tcp_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
struct tcp_listen_opt *lopt;
struct rtattr *bc = NULL;
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
int j, s_j;
int reqnum, s_reqnum;
int err = 0;
num = 0;
sk_for_each(sk, node, &tcp_listening_hash[i]) {
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
if (num < s_num) {
num++;
num = 0;
sk_for_each(sk, node, &head->chain) {
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
if (num < s_num)
goto next_normal;
if (r->tcpdiag_states&TCPF_TIME_WAIT) {
sk_for_each(sk, node,
&tcp_ehash[i + tcp_ehash_size].chain) {
- struct inet_opt *inet = inet_sk(sk);
+ struct inet_sock *inet = inet_sk(sk);
if (num < s_num)
goto next_dying;
goto error_nolock;
}
- spin_lock_bh(&x->lock);
- err = xfrm_state_check(x, skb);
- if (err)
- goto error;
-
if (x->props.mode) {
err = xfrm4_tunnel_check_size(skb);
if (err)
- goto error;
+ goto error_nolock;
}
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
xfrm4_encap(skb);
err = x->type->output(skb);
void *arg;
};
-rwlock_t fib6_walker_lock = RW_LOCK_UNLOCKED;
+DEFINE_RWLOCK(fib6_walker_lock);
#ifdef CONFIG_IPV6_SUBTREES
return 0;
}
-static spinlock_t fib6_gc_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(fib6_gc_lock);
void fib6_run_gc(unsigned long dummy)
{
If you want to compile it as a module, say M here and read
<file:Documentation/modules.txt>. If unsure, say `N'.
- help
endmenu
static unsigned char copy_mode = IPQ_COPY_NONE;
static unsigned int queue_maxlen = IPQ_QMAX_DEFAULT;
-static rwlock_t queue_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(queue_lock);
static int peer_pid;
static unsigned int copy_range;
static unsigned int queue_total;
* see net/sched/sch_tbf.c in the linux source tree
*/
-static spinlock_t limit_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(limit_lock);
/* Rusty: This is my (non-mathematically-inclined) understanding of
this algorithm. The `average rate' in jiffies becomes your initial
static struct ip6t_table packet_filter = {
.name = "filter",
- .table = &initial_table.repl,
.valid_hooks = FILTER_VALID_HOOKS,
.lock = RW_LOCK_UNLOCKED,
.me = THIS_MODULE,
initial_table.entries[1].target.verdict = -forward - 1;
/* Register table */
- ret = ip6t_register_table(&packet_filter);
+ ret = ip6t_register_table(&packet_filter, &initial_table.repl);
if (ret < 0)
return ret;
static struct ip6t_table packet_mangler = {
.name = "mangle",
- .table = &initial_table.repl,
.valid_hooks = MANGLE_VALID_HOOKS,
.lock = RW_LOCK_UNLOCKED,
.me = THIS_MODULE,
}
#endif
- /* FIXME: Push down to extensions --RR */
- if (skb_is_nonlinear(*pskb) && skb_linearize(*pskb, GFP_ATOMIC) != 0)
- return NF_DROP;
-
/* save source/dest address, nfmark, hoplimit, flowlabel, priority, */
memcpy(&saddr, &(*pskb)->nh.ipv6h->saddr, sizeof(saddr));
memcpy(&daddr, &(*pskb)->nh.ipv6h->daddr, sizeof(daddr));
int ret;
/* Register table */
- ret = ip6t_register_table(&packet_mangler);
+ ret = ip6t_register_table(&packet_mangler, &initial_table.repl);
if (ret < 0)
return ret;
#include <net/protocol.h>
struct inet6_protocol *inet6_protos[MAX_INET_PROTOS];
-static spinlock_t inet6_proto_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(inet6_proto_lock);
int inet6_add_protocol(struct inet6_protocol *prot, unsigned char protocol)
#ifdef CONFIG_SYSCTL
-ctl_table ipv6_table[] = {
+static ctl_table ipv6_table[] = {
{
.ctl_name = NET_IPV6_ROUTE,
.procname = "route",
goto error_nolock;
}
- spin_lock_bh(&x->lock);
- err = xfrm_state_check(x, skb);
- if (err)
- goto error;
-
if (x->props.mode) {
err = xfrm6_tunnel_check_size(skb);
if (err)
- goto error;
+ goto error_nolock;
}
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
xfrm6_encap(skb);
err = x->type->output(skb);
# define XFRM6_TUNNEL_SPI_MAGIC 0xdeadbeef
#endif
-static rwlock_t xfrm6_tunnel_spi_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(xfrm6_tunnel_spi_lock);
static u32 xfrm6_tunnel_spi;
{
struct sock* sk, *next;
struct ipx_interface *i;
- struct ipx_opt *ipxs;
+ struct ipx_sock *ipxs;
++*pos;
if (v == SEQ_START_TOKEN) {
static int ipx_seq_socket_show(struct seq_file *seq, void *v)
{
struct sock *s;
- struct ipx_opt *ipxs;
+ struct ipx_sock *ipxs;
if (v == SEQ_START_TOKEN) {
#ifdef CONFIG_IPX_INTERN
return 0;
}
-struct seq_operations ipx_seq_interface_ops = {
+static struct seq_operations ipx_seq_interface_ops = {
.start = ipx_seq_interface_start,
.next = ipx_seq_interface_next,
.stop = ipx_seq_interface_stop,
.show = ipx_seq_interface_show,
};
-struct seq_operations ipx_seq_route_ops = {
+static struct seq_operations ipx_seq_route_ops = {
.start = ipx_seq_route_start,
.next = ipx_seq_route_next,
.stop = ipx_seq_route_stop,
.show = ipx_seq_route_show,
};
-struct seq_operations ipx_seq_socket_ops = {
+static struct seq_operations ipx_seq_socket_ops = {
.start = ipx_seq_socket_start,
.next = ipx_seq_socket_next,
.stop = ipx_seq_interface_stop,
#include <net/sock.h>
LIST_HEAD(ipx_routes);
-rwlock_t ipx_routes_lock = RW_LOCK_UNLOCKED;
+DEFINE_RWLOCK(ipx_routes_lock);
extern struct ipx_interface *ipx_internal_net;
struct iovec *iov, size_t len, int noblock)
{
struct sk_buff *skb;
- struct ipx_opt *ipxs = ipx_sk(sk);
+ struct ipx_sock *ipxs = ipx_sk(sk);
struct ipx_interface *intrfc;
struct ipxhdr *ipx;
size_t size;
"IRCOMM_CONN",
};
-char *ircomm_event[] = {
+#ifdef CONFIG_IRDA_DEBUG
+static char *ircomm_event[] = {
"IRCOMM_CONNECT_REQUEST",
"IRCOMM_CONNECT_RESPONSE",
"IRCOMM_TTP_CONNECT_INDICATION",
"IRCOMM_CONTROL_REQUEST",
"IRCOMM_CONTROL_INDICATION",
};
+#endif /* CONFIG_IRDA_DEBUG */
static int (*state[])(struct ircomm_cb *self, IRCOMM_EVENT event,
struct sk_buff *skb, struct ircomm_info *info) =
#include <net/irda/ircomm_event.h>
#include <net/irda/ircomm_lmp.h>
-/*
- * Function ircomm_open_lsap (self)
- *
- * Open LSAP. This function will only be used when using "raw" services
- *
- */
-int ircomm_open_lsap(struct ircomm_cb *self)
-{
- notify_t notify;
-
- IRDA_DEBUG(0, "%s()\n", __FUNCTION__ );
-
- /* Register callbacks */
- irda_notify_init(¬ify);
- notify.data_indication = ircomm_lmp_data_indication;
- notify.connect_confirm = ircomm_lmp_connect_confirm;
- notify.connect_indication = ircomm_lmp_connect_indication;
- notify.disconnect_indication = ircomm_lmp_disconnect_indication;
- notify.instance = self;
- strlcpy(notify.name, "IrCOMM", sizeof(notify.name));
-
- self->lsap = irlmp_open_lsap(LSAP_ANY, ¬ify, 0);
- if (!self->lsap) {
- IRDA_DEBUG(0,"%sfailed to allocate tsap\n", __FUNCTION__ );
- return -1;
- }
- self->slsap_sel = self->lsap->slsap_sel;
-
- /*
- * Initialize the call-table for issuing commands
- */
- self->issue.data_request = ircomm_lmp_data_request;
- self->issue.connect_request = ircomm_lmp_connect_request;
- self->issue.connect_response = ircomm_lmp_connect_response;
- self->issue.disconnect_request = ircomm_lmp_disconnect_request;
-
- return 0;
-}
/*
* Function ircomm_lmp_connect_request (self, userdata)
*
*
*/
-int ircomm_lmp_connect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info)
+static int ircomm_lmp_connect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info)
{
int ret = 0;
*
*
*/
-int ircomm_lmp_connect_response(struct ircomm_cb *self, struct sk_buff *userdata)
+static int ircomm_lmp_connect_response(struct ircomm_cb *self,
+ struct sk_buff *userdata)
{
struct sk_buff *tx_skb;
int ret;
return 0;
}
-int ircomm_lmp_disconnect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info)
+static int ircomm_lmp_disconnect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info)
{
struct sk_buff *tx_skb;
int ret;
* been deallocated. We do this to make sure we don't flood IrLAP with
* frames, since we are not using the IrTTP flow control mechanism
*/
-void ircomm_lmp_flow_control(struct sk_buff *skb)
+static void ircomm_lmp_flow_control(struct sk_buff *skb)
{
struct irda_skb_cb *cb;
struct ircomm_cb *self;
* Send data frame to peer device
*
*/
-int ircomm_lmp_data_request(struct ircomm_cb *self, struct sk_buff *skb,
- int not_used)
+static int ircomm_lmp_data_request(struct ircomm_cb *self,
+ struct sk_buff *skb,
+ int not_used)
{
struct irda_skb_cb *cb;
int ret;
* Incoming data which we must deliver to the state machine, to check
* we are still connected.
*/
-int ircomm_lmp_data_indication(void *instance, void *sap,
- struct sk_buff *skb)
+static int ircomm_lmp_data_indication(void *instance, void *sap,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
* Connection has been confirmed by peer device
*
*/
-void ircomm_lmp_connect_confirm(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_seg_size,
- __u8 max_header_size,
- struct sk_buff *skb)
+static void ircomm_lmp_connect_confirm(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_seg_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
struct ircomm_info info;
* Peer device wants to make a connection with us
*
*/
-void ircomm_lmp_connect_indication(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_seg_size,
- __u8 max_header_size,
- struct sk_buff *skb)
+static void ircomm_lmp_connect_indication(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_seg_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *)instance;
struct ircomm_info info;
* Peer device has closed the connection, or the link went down for some
* other reason
*/
-void ircomm_lmp_disconnect_indication(void *instance, void *sap,
- LM_REASON reason,
- struct sk_buff *skb)
+static void ircomm_lmp_disconnect_indication(void *instance, void *sap,
+ LM_REASON reason,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
struct ircomm_info info;
if(skb)
dev_kfree_skb(skb);
}
+/*
+ * Function ircomm_open_lsap (self)
+ *
+ * Open LSAP. This function will only be used when using "raw" services
+ *
+ */
+int ircomm_open_lsap(struct ircomm_cb *self)
+{
+ notify_t notify;
+
+ IRDA_DEBUG(0, "%s()\n", __FUNCTION__ );
+
+ /* Register callbacks */
+ irda_notify_init(&notify);
+ notify.data_indication = ircomm_lmp_data_indication;
+ notify.connect_confirm = ircomm_lmp_connect_confirm;
+ notify.connect_indication = ircomm_lmp_connect_indication;
+ notify.disconnect_indication = ircomm_lmp_disconnect_indication;
+ notify.instance = self;
+ strlcpy(notify.name, "IrCOMM", sizeof(notify.name));
+
+ self->lsap = irlmp_open_lsap(LSAP_ANY, &notify, 0);
+ if (!self->lsap) {
+ IRDA_DEBUG(0,"%sfailed to allocate tsap\n", __FUNCTION__ );
+ return -1;
+ }
+ self->slsap_sel = self->lsap->slsap_sel;
+
+ /*
+ * Initialize the call-table for issuing commands
+ */
+ self->issue.data_request = ircomm_lmp_data_request;
+ self->issue.connect_request = ircomm_lmp_connect_request;
+ self->issue.connect_response = ircomm_lmp_connect_response;
+ self->issue.disconnect_request = ircomm_lmp_disconnect_request;
+
+ return 0;
+}
pi_param_info_t ircomm_param_info = { pi_major_call_table, 3, 0x0f, 4 };
-/*
- * Function ircomm_param_flush (self)
- *
- * Flush (send) out all queued parameters
- *
- */
-int ircomm_param_flush(struct ircomm_tty_cb *self)
-{
- /* we should lock here, but I guess this function is unused...
- * Jean II */
- if (self->ctrl_skb) {
- ircomm_control_request(self->ircomm, self->ctrl_skb);
- self->ctrl_skb = NULL;
- }
- return 0;
-}
-
/*
* Function ircomm_param_request (self, pi, flush)
*
#include <net/irda/ircomm_event.h>
#include <net/irda/ircomm_ttp.h>
+static int ircomm_ttp_data_indication(void *instance, void *sap,
+ struct sk_buff *skb);
+static void ircomm_ttp_connect_confirm(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb);
+static void ircomm_ttp_connect_indication(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb);
+static void ircomm_ttp_flow_indication(void *instance, void *sap,
+ LOCAL_FLOW cmd);
+static void ircomm_ttp_disconnect_indication(void *instance, void *sap,
+ LM_REASON reason,
+ struct sk_buff *skb);
+static int ircomm_ttp_data_request(struct ircomm_cb *self,
+ struct sk_buff *skb,
+ int clen);
+static int ircomm_ttp_connect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info);
+static int ircomm_ttp_connect_response(struct ircomm_cb *self,
+ struct sk_buff *userdata);
+static int ircomm_ttp_disconnect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info);
+
/*
* Function ircomm_open_tsap (self)
*
*
*
*/
-int ircomm_ttp_connect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info)
+static int ircomm_ttp_connect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info)
{
int ret = 0;
*
*
*/
-int ircomm_ttp_connect_response(struct ircomm_cb *self,
- struct sk_buff *userdata)
+static int ircomm_ttp_connect_response(struct ircomm_cb *self,
+ struct sk_buff *userdata)
{
int ret;
* some of them are sent after connection establishment, so this can
* increase the latency a bit.
*/
-int ircomm_ttp_data_request(struct ircomm_cb *self,
- struct sk_buff *skb,
- int clen)
+static int ircomm_ttp_data_request(struct ircomm_cb *self,
+ struct sk_buff *skb,
+ int clen)
{
int ret;
* Incoming data
*
*/
-int ircomm_ttp_data_indication(void *instance, void *sap,
- struct sk_buff *skb)
+static int ircomm_ttp_data_indication(void *instance, void *sap,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
return 0;
}
-void ircomm_ttp_connect_confirm(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb)
+static void ircomm_ttp_connect_confirm(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
struct ircomm_info info;
*
*
*/
-void ircomm_ttp_connect_indication(void *instance, void *sap,
- struct qos_info *qos,
- __u32 max_sdu_size,
- __u8 max_header_size,
- struct sk_buff *skb)
+static void ircomm_ttp_connect_indication(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *)instance;
struct ircomm_info info;
*
*
*/
-int ircomm_ttp_disconnect_request(struct ircomm_cb *self,
- struct sk_buff *userdata,
- struct ircomm_info *info)
+static int ircomm_ttp_disconnect_request(struct ircomm_cb *self,
+ struct sk_buff *userdata,
+ struct ircomm_info *info)
{
int ret;
*
*
*/
-void ircomm_ttp_disconnect_indication(void *instance, void *sap,
- LM_REASON reason,
- struct sk_buff *skb)
+static void ircomm_ttp_disconnect_indication(void *instance, void *sap,
+ LM_REASON reason,
+ struct sk_buff *skb)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
struct ircomm_info info;
* Layer below is telling us to start or stop the flow of data
*
*/
-void ircomm_ttp_flow_indication(void *instance, void *sap, LOCAL_FLOW cmd)
+static void ircomm_ttp_flow_indication(void *instance, void *sap,
+ LOCAL_FLOW cmd)
{
struct ircomm_cb *self = (struct ircomm_cb *) instance;
void *priv);
static void ircomm_tty_getvalue_confirm(int result, __u16 obj_id,
struct ias_value *value, void *priv);
-void ircomm_tty_start_watchdog_timer(struct ircomm_tty_cb *self, int timeout);
-void ircomm_tty_watchdog_timer_expired(void *data);
+static void ircomm_tty_start_watchdog_timer(struct ircomm_tty_cb *self,
+ int timeout);
+static void ircomm_tty_watchdog_timer_expired(void *data);
static int ircomm_tty_state_idle(struct ircomm_tty_cb *self,
IRCOMM_TTY_EVENT event,
"*** ERROR *** ",
};
-char *ircomm_tty_event[] = {
+#ifdef CONFIG_IRDA_DEBUG
+static char *ircomm_tty_event[] = {
"IRCOMM_TTY_ATTACH_CABLE",
"IRCOMM_TTY_DETACH_CABLE",
"IRCOMM_TTY_DATA_REQUEST",
"IRCOMM_TTY_GOT_LSAPSEL",
"*** ERROR ****",
};
+#endif /* CONFIG_IRDA_DEBUG */
static int (*state[])(struct ircomm_tty_cb *self, IRCOMM_TTY_EVENT event,
struct sk_buff *skb, struct ircomm_tty_info *info) =
* connection attempt is successful, and if not, we will retry after
* the timeout
*/
-void ircomm_tty_start_watchdog_timer(struct ircomm_tty_cb *self, int timeout)
+static void ircomm_tty_start_watchdog_timer(struct ircomm_tty_cb *self,
+ int timeout)
{
ASSERT(self != NULL, return;);
ASSERT(self->magic == IRCOMM_TTY_MAGIC, return;);
 * Called when the connect procedure has taken too much time.
*
*/
-void ircomm_tty_watchdog_timer_expired(void *data)
+static void ircomm_tty_watchdog_timer_expired(void *data)
{
struct ircomm_tty_cb *self = (struct ircomm_tty_cb *) data;
* Change speed of the driver. If the remote device is a DCE, then this
* should make it change the speed of its serial port
*/
-void ircomm_tty_change_speed(struct ircomm_tty_cb *self)
+static void ircomm_tty_change_speed(struct ircomm_tty_cb *self)
{
unsigned cflag, cval;
int baud;
* Delete given attribute and deallocate all its memory
*
*/
-void __irias_delete_attrib(struct ias_attrib *attrib)
+static void __irias_delete_attrib(struct ias_attrib *attrib)
{
ASSERT(attrib != NULL, return;);
ASSERT(attrib->magic == IAS_ATTRIB_MAGIC, return;);
* Add attribute to object
*
*/
-void irias_add_attrib( struct ias_object *obj, struct ias_attrib *attrib,
- int owner)
+static void irias_add_attrib(struct ias_object *obj, struct ias_attrib *attrib,
+ int owner)
{
ASSERT(obj != NULL, return;);
ASSERT(obj->magic == IAS_OBJECT_MAGIC, return;);
struct sk_buff *);
static void irlan_check_response_param(struct irlan_cb *self, char *param,
char *value, int val_len);
+static void irlan_client_open_ctrl_tsap(struct irlan_cb *self);
static void irlan_client_kick_timer_expired(void *data)
{
}
}
-void irlan_client_start_kick_timer(struct irlan_cb *self, int timeout)
+static void irlan_client_start_kick_timer(struct irlan_cb *self, int timeout)
{
IRDA_DEBUG(4, "%s()\n", __FUNCTION__ );
* Initialize callbacks and open IrTTP TSAPs
*
*/
-void irlan_client_open_ctrl_tsap(struct irlan_cb *self)
+static void irlan_client_open_ctrl_tsap(struct irlan_cb *self)
{
struct tsap_cb *tsap;
notify_t notify;
irlan_do_client_event(self, IRLAN_CONNECT_COMPLETE, NULL);
}
-/*
- * Function irlan_client_reconnect_data_channel (self)
- *
- * Try to reconnect data channel (currently not used)
- *
- */
-void irlan_client_reconnect_data_channel(struct irlan_cb *self)
-{
- struct sk_buff *skb;
- __u8 *frame;
-
- IRDA_DEBUG(4, "%s()\n", __FUNCTION__ );
-
- ASSERT(self != NULL, return;);
- ASSERT(self->magic == IRLAN_MAGIC, return;);
-
- skb = dev_alloc_skb(128);
- if (!skb)
- return;
-
- /* Reserve space for TTP, LMP, and LAP header */
- skb_reserve(skb, self->max_header_size);
- skb_put(skb, 2);
-
- frame = skb->data;
-
- frame[0] = CMD_RECONNECT_DATA_CHAN;
- frame[1] = 0x01;
- irlan_insert_array_param(skb, "RECONNECT_KEY",
- self->client.reconnect_key,
- self->client.key_len);
-
- irttp_data_request(self->client.tsap_ctrl, skb);
-}
-
-
/*
* Function print_ret_code (code)
*
extern struct proc_dir_entry *proc_irda;
#endif /* CONFIG_PROC_FS */
+static struct irlan_cb *irlan_open(__u32 saddr, __u32 daddr);
static void __irlan_close(struct irlan_cb *self);
static int __irlan_insert_param(struct sk_buff *skb, char *param, int type,
__u8 value_byte, __u16 value_short,
__u8 *value_array, __u16 value_len);
+static void irlan_open_unicast_addr(struct irlan_cb *self);
+static void irlan_get_unicast_addr(struct irlan_cb *self);
void irlan_close_tsaps(struct irlan_cb *self);
/*
* Open new instance of a client/provider, we should only register the
 * network device if this instance is meant for a particular client/provider
*/
-struct irlan_cb *irlan_open(__u32 saddr, __u32 daddr)
+static struct irlan_cb *irlan_open(__u32 saddr, __u32 daddr)
{
struct net_device *dev;
struct irlan_cb *self;
* Here we receive the connect indication for the data channel
*
*/
-void irlan_connect_indication(void *instance, void *sap, struct qos_info *qos,
- __u32 max_sdu_size, __u8 max_header_size,
- struct sk_buff *skb)
+static void irlan_connect_indication(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct irlan_cb *self;
struct tsap_cb *tsap;
netif_start_queue(self->dev); /* Clear reason */
}
-void irlan_connect_confirm(void *instance, void *sap, struct qos_info *qos,
- __u32 max_sdu_size, __u8 max_header_size,
- struct sk_buff *skb)
+static void irlan_connect_confirm(void *instance, void *sap,
+ struct qos_info *qos,
+ __u32 max_sdu_size,
+ __u8 max_header_size,
+ struct sk_buff *skb)
{
struct irlan_cb *self;
* Callback function for the IrTTP layer. Indicates a disconnection of
* the specified connection (handle)
*/
-void irlan_disconnect_indication(void *instance, void *sap, LM_REASON reason,
- struct sk_buff *userdata)
+static void irlan_disconnect_indication(void *instance,
+ void *sap, LM_REASON reason,
+ struct sk_buff *userdata)
{
struct irlan_cb *self;
struct tsap_cb *tsap;
 * This function makes sure that commands on the control channel are being
* sent in a command/response fashion
*/
-void irlan_ctrl_data_request(struct irlan_cb *self, struct sk_buff *skb)
+static void irlan_ctrl_data_request(struct irlan_cb *self, struct sk_buff *skb)
{
IRDA_DEBUG(2, "%s()\n", __FUNCTION__ );
* address.
*
*/
-void irlan_open_unicast_addr(struct irlan_cb *self)
+static void irlan_open_unicast_addr(struct irlan_cb *self)
{
struct sk_buff *skb;
__u8 *frame;
* can construct its packets.
*
*/
-void irlan_get_unicast_addr(struct irlan_cb *self)
+static void irlan_get_unicast_addr(struct irlan_cb *self)
{
struct sk_buff *skb;
__u8 *frame;
irttp_connect_response(tsap, IRLAN_MTU, NULL);
}
-void irlan_provider_disconnect_indication(void *instance, void *sap,
- LM_REASON reason,
- struct sk_buff *userdata)
+static void irlan_provider_disconnect_indication(void *instance, void *sap,
+ LM_REASON reason,
+ struct sk_buff *userdata)
{
struct irlan_cb *self;
struct tsap_cb *tsap;
extern void irlap_queue_xmit(struct irlap_cb *self, struct sk_buff *skb);
static void __irlap_close(struct irlap_cb *self);
+static void irlap_init_qos_capabilities(struct irlap_cb *self,
+ struct qos_info *qos_user);
#ifdef CONFIG_IRDA_DEBUG
static char *lap_reasons[] = {
* Change the speed of the IrDA port
*
*/
-void irlap_change_speed(struct irlap_cb *self, __u32 speed, int now)
+static void irlap_change_speed(struct irlap_cb *self, __u32 speed, int now)
{
struct sk_buff *skb;
* IrLAP itself. Normally, IrLAP will not specify any values, but it can
* be used to restrict certain values.
*/
-void irlap_init_qos_capabilities(struct irlap_cb *self,
- struct qos_info *qos_user)
+static void irlap_init_qos_capabilities(struct irlap_cb *self,
+ struct qos_info *qos_user)
{
ASSERT(self != NULL, return;);
ASSERT(self->magic == LAP_MAGIC, return;);
#include <net/irda/irlap_frame.h>
#include <net/irda/qos.h>
+static void irlap_send_i_frame(struct irlap_cb *self, struct sk_buff *skb,
+ int command);
+
/*
* Function irlap_insert_info (self, skb)
*
irlap_do_event(self, RECV_RR_RSP, skb, info);
}
-void irlap_send_frmr_frame( struct irlap_cb *self, int command)
-{
- struct sk_buff *tx_skb = NULL;
- __u8 *frame;
-
- ASSERT( self != NULL, return;);
- ASSERT( self->magic == LAP_MAGIC, return;);
-
- tx_skb = dev_alloc_skb( 32);
- if (!tx_skb)
- return;
-
- frame = skb_put(tx_skb, 2);
-
- frame[0] = self->caddr;
- frame[0] |= (command) ? CMD_FRAME : 0;
-
- frame[1] = (self->vs << 1);
- frame[1] |= PF_BIT;
- frame[1] |= (self->vr << 5);
-
- frame[2] = 0;
-
- IRDA_DEBUG(4, "%s(), vr=%d, %ld\n", __FUNCTION__, self->vr, jiffies);
-
- irlap_queue_xmit(self, tx_skb);
-}
-
/*
* Function irlap_recv_rnr_frame (self, skb, info)
*
*
 * Construct and transmit Information (I) frame
*/
-void irlap_send_i_frame(struct irlap_cb *self, struct sk_buff *skb,
- int command)
+static void irlap_send_i_frame(struct irlap_cb *self, struct sk_buff *skb,
+ int command)
{
/* Insert connection address */
skb->data[0] = self->caddr;
* Protocol stack initialisation entry point.
* Initialise the various components of the IrDA stack
*/
-int __init irda_init(void)
+static int __init irda_init(void)
{
IRDA_DEBUG(0, "%s()\n", __FUNCTION__);
* Protocol stack cleanup/removal entry point.
* Cleanup the various components of the IrDA stack
*/
-void __exit irda_cleanup(void)
+static void __exit irda_cleanup(void)
{
/* Remove External APIs */
#ifdef CONFIG_SYSCTL
called irnet. IrNET is a PPP driver, so you will also need a
working PPP subsystem (driver, daemon and config)...
- IrNET is an alternate way to tranfer TCP/IP traffic over IrDA. It
+ IrNET is an alternate way to transfer TCP/IP traffic over IrDA. It
uses synchronous PPP over a set of point to point IrDA sockets. You
can use it between Linux machines or with W2k.
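
A minimal usage sketch, not taken from this changeset: IrNET is normally driven by attaching pppd to the /dev/irnet control channel once the irnet module is loaded. The pppd options, speed and addresses below are illustrative assumptions only.

    # Illustrative only: assumes the irnet module is built and pppd is installed.
    modprobe irnet
    # Attach pppd to the IrNET control channel; auth and address values are example choices.
    pppd /dev/irnet 9600 noauth local 192.168.100.1:192.168.100.2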
/* Exit a function with debug */
#define DRETURN(ret, dbg, args...) \
{DEXIT(dbg, ": " args);\
- return(ret); }
+ return ret; }
/* Exit a function on failed condition */
#define DABORT(cond, ret, dbg, args...) \
{if(cond) {\
DERROR(dbg, args);\
- return(ret); }}
+ return ret; }}
/* Invalid assertion, print out an error and exit... */
#define DASSERT(cond, ret, dbg, args...) \
/* ---------------------------- MODULE ---------------------------- */
extern int
irnet_init(void); /* Initialise IrNET module */
-extern void
- irnet_cleanup(void); /* Teardown IrNET module */
/**************************** VARIABLES ****************************/
#include "irnet_ppp.h" /* Private header */
/* Please put other headers in irnet.h - Thanks */
+/* Generic PPP callbacks (to call us) */
+static struct ppp_channel_ops irnet_ppp_ops = {
+ .start_xmit = ppp_irnet_send,
+ .ioctl = ppp_irnet_ioctl
+};
+
/************************* CONTROL CHANNEL *************************/
/*
* When a pppd instance is not active on /dev/irnet, it acts as a control
/*
* Module exit
*/
-void __exit
+static void __exit
irnet_cleanup(void)
{
irda_irnet_cleanup();
&irnet_device_fops
};
-/* Generic PPP callbacks (to call us) */
-struct ppp_channel_ops irnet_ppp_ops =
-{
- ppp_irnet_send,
- ppp_irnet_ioctl
-};
-
#endif /* IRNET_PPP_H */
extern int sysctl_discovery_timeout;
extern int sysctl_slot_timeout;
extern int sysctl_fast_poll_increase;
-int sysctl_compression = 0;
extern char sysctl_devname[];
extern int sysctl_max_baud_rate;
extern int sysctl_min_tx_turn_time;
static int irttp_param_max_sdu_size(void *instance, irda_param_t *param,
int get);
+static void irttp_flow_indication(void *instance, void *sap, LOCAL_FLOW flow);
+static void irttp_status_indication(void *instance,
+ LINK_STATUS link, LOCK_STATUS lock);
+
/* Information for parsing parameters in IrTTP */
static pi_minor_info_t pi_minor_call_table[] = {
{ NULL, 0 }, /* 0x00 */
* Status_indication, just pass to the higher layer...
*
*/
-void irttp_status_indication(void *instance,
- LINK_STATUS link, LOCK_STATUS lock)
+static void irttp_status_indication(void *instance,
+ LINK_STATUS link, LOCK_STATUS lock)
{
struct tsap_cb *self;
* Flow_indication : IrLAP tells us to send more data.
*
*/
-void irttp_flow_indication(void *instance, void *sap, LOCAL_FLOW flow)
+static void irttp_flow_indication(void *instance, void *sap, LOCAL_FLOW flow)
{
struct tsap_cb *self;
* for some reason should fail. We mark rx sdu as busy to apply back
 * pressure if necessary.
*/
-void irttp_do_data_indication(struct tsap_cb *self, struct sk_buff *skb)
+static void irttp_do_data_indication(struct tsap_cb *self, struct sk_buff *skb)
{
int err;
static int irda_insert_no_value(void *self, __u8 *buf, int len, __u8 pi,
PV_TYPE type, PI_HANDLER func);
+static int irda_param_unpack(__u8 *buf, char *fmt, ...);
+
/* Parameter value call table. Must match PV_TYPE */
static PV_HANDLER pv_extract_table[] = {
irda_extract_integer, /* Handler for any length integers */
/*
* Function irda_param_unpack (skb, fmt, ...)
*/
-int irda_param_unpack(__u8 *buf, char *fmt, ...)
+static int irda_param_unpack(__u8 *buf, char *fmt, ...)
{
irda_pv_t arg;
va_list args;
return 0;
}
-EXPORT_SYMBOL(irda_param_unpack);
/*
* Function irda_param_insert (self, pi, buf, len, info)
EXPORT_SYMBOL(irda_param_insert);
/*
- * Function irda_param_extract_all (self, buf, len, info)
+ * Function irda_param_extract (self, buf, len, info)
*
* Parse all parameters. If len is correct, then everything should be
* safe. Returns the number of bytes that was parsed
*
*/
-int irda_param_extract(void *self, __u8 *buf, int len, pi_param_info_t *info)
+static int irda_param_extract(void *self, __u8 *buf, int len,
+ pi_param_info_t *info)
{
pi_minor_info_t *pi_minor_info;
__u8 pi_minor;
type, pi_minor_info->func);
return ret;
}
-EXPORT_SYMBOL(irda_param_extract);
/*
* Function irda_param_extract_all (self, buf, len, info)
static int irlap_param_min_turn_time(void *instance, irda_param_t *param,
int get);
+#ifndef CONFIG_IRDA_DYNAMIC_WINDOW
+static __u32 irlap_requested_line_capacity(struct qos_info *qos);
+#endif
+
static __u32 min_turn_times[] = { 10000, 5000, 1000, 500, 100, 50, 10, 0 }; /* us */
static __u32 baud_rates[] = { 2400, 9600, 19200, 38400, 57600, 115200, 576000,
1152000, 4000000, 16000000 }; /* bps */
* Adjust QoS settings in case some values are not possible to use because
* of other settings
*/
-void irlap_adjust_qos_settings(struct qos_info *qos)
+static void irlap_adjust_qos_settings(struct qos_info *qos)
{
__u32 line_capacity;
int index;
return line_capacity;
}
-__u32 irlap_requested_line_capacity(struct qos_info *qos)
-{ __u32 line_capacity;
-
- line_capacity = qos->window_size.value *
+#ifndef CONFIG_IRDA_DYNAMIC_WINDOW
+static __u32 irlap_requested_line_capacity(struct qos_info *qos)
+{
+ __u32 line_capacity;
+
+ line_capacity = qos->window_size.value *
(qos->data_size.value + 6 + qos->additional_bofs.value) +
- irlap_min_turn_time_in_bytes(qos->baud_rate.value,
+ irlap_min_turn_time_in_bytes(qos->baud_rate.value,
qos->min_turn_time.value);
-
+
IRDA_DEBUG(2, "%s(), requested line capacity=%d\n",
__FUNCTION__, line_capacity);
-
- return line_capacity;
+
+ return line_capacity;
}
+#endif
void irda_qos_bits_to_value(struct qos_info *qos)
{
static void llc_process_tmr_ev(struct sock *sk, struct sk_buff *skb);
static int llc_conn_ac_data_confirm(struct sock *sk, struct sk_buff *ev);
+static int llc_conn_ac_inc_npta_value(struct sock *sk, struct sk_buff *skb);
+
+static int llc_conn_ac_send_rr_rsp_f_set_ackpf(struct sock *sk,
+ struct sk_buff *skb);
+
+static int llc_conn_ac_set_p_flag_1(struct sock *sk, struct sk_buff *skb);
+
#define INCORRECT 0
int llc_conn_ac_clear_remote_busy(struct sock *sk, struct sk_buff *skb)
return 0;
}
-int llc_conn_ac_report_status(struct sock *sk, struct sk_buff *skb)
-{
- return 0;
-}
-
int llc_conn_ac_clear_remote_busy_if_f_eq_1(struct sock *sk,
struct sk_buff *skb)
{
goto out;
}
-int llc_conn_ac_send_dm_rsp_f_set_f_flag(struct sock *sk, struct sk_buff *skb)
-{
- int rc = -ENOBUFS;
- struct sk_buff *nskb = llc_alloc_frame();
-
- if (nskb) {
- struct llc_opt *llc = llc_sk(sk);
- struct llc_sap *sap = llc->sap;
- u8 f_bit = llc->f_flag;
-
- nskb->dev = llc->dev;
- llc_pdu_header_init(nskb, LLC_PDU_TYPE_U, sap->laddr.lsap,
- llc->daddr.lsap, LLC_PDU_RSP);
- llc_pdu_init_as_dm_rsp(nskb, f_bit);
- rc = llc_mac_hdr_init(nskb, llc->dev->dev_addr, llc->daddr.mac);
- if (rc)
- goto free;
- llc_conn_send_pdu(sk, nskb);
- }
-out:
- return rc;
-free:
- kfree_skb(nskb);
- goto out;
-}
-
int llc_conn_ac_send_frmr_rsp_f_set_x(struct sock *sk, struct sk_buff *skb)
{
u8 f_bit;
return rc;
}
-int llc_conn_ac_send_i_cmd_p_set_0(struct sock *sk, struct sk_buff *skb)
+static int llc_conn_ac_send_i_cmd_p_set_0(struct sock *sk, struct sk_buff *skb)
{
int rc;
struct llc_opt *llc = llc_sk(sk);
return rc;
}
-int llc_conn_ac_resend_i_cmd_p_set_1(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb);
- u8 nr = LLC_I_GET_NR(pdu);
-
- llc_conn_resend_i_pdu_as_cmd(sk, nr, 1);
- return 0;
-}
-
-int llc_conn_ac_resend_i_cmd_p_set_1_or_send_rr(struct sock *sk,
- struct sk_buff *skb)
-{
- struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb);
- u8 nr = LLC_I_GET_NR(pdu);
- int rc = llc_conn_ac_send_rr_cmd_p_set_1(sk, skb);
-
- if (!rc)
- llc_conn_resend_i_pdu_as_cmd(sk, nr, 0);
- return rc;
-}
-
int llc_conn_ac_send_i_xxx_x_set_0(struct sock *sk, struct sk_buff *skb)
{
int rc;
goto out;
}
-int llc_conn_ac_send_ack_cmd_p_set_1(struct sock *sk, struct sk_buff *skb)
-{
- int rc = -ENOBUFS;
- struct sk_buff *nskb = llc_alloc_frame();
-
- if (nskb) {
- struct llc_opt *llc = llc_sk(sk);
- struct llc_sap *sap = llc->sap;
-
- nskb->dev = llc->dev;
- llc_pdu_header_init(nskb, LLC_PDU_TYPE_S, sap->laddr.lsap,
- llc->daddr.lsap, LLC_PDU_CMD);
- llc_pdu_init_as_rr_cmd(nskb, 1, llc->vR);
- rc = llc_mac_hdr_init(nskb, llc->dev->dev_addr, llc->daddr.mac);
- if (rc)
- goto free;
- llc_conn_send_pdu(sk, nskb);
- }
-out:
- return rc;
-free:
- kfree_skb(nskb);
- goto out;
-}
-
int llc_conn_ac_send_rr_rsp_f_set_1(struct sock *sk, struct sk_buff *skb)
{
int rc = -ENOBUFS;
goto out;
}
-int llc_conn_ac_send_ua_rsp_f_set_f_flag(struct sock *sk, struct sk_buff *skb)
-{
- int rc = -ENOBUFS;
- struct sk_buff *nskb = llc_alloc_frame();
-
- if (nskb) {
- struct llc_opt *llc = llc_sk(sk);
- struct llc_sap *sap = llc->sap;
-
- nskb->dev = llc->dev;
- llc_pdu_header_init(nskb, LLC_PDU_TYPE_U, sap->laddr.lsap,
- llc->daddr.lsap, LLC_PDU_RSP);
- llc_pdu_init_as_ua_rsp(nskb, llc->f_flag);
- rc = llc_mac_hdr_init(nskb, llc->dev->dev_addr, llc->daddr.mac);
- if (rc)
- goto free;
- llc_conn_send_pdu(sk, nskb);
- }
-out:
- return rc;
-free:
- kfree_skb(nskb);
- goto out;
-}
-
int llc_conn_ac_send_ua_rsp_f_set_p(struct sock *sk, struct sk_buff *skb)
{
u8 f_bit;
* set to one if one PDU with p-bit set to one is received. Returns 0 for
* success, 1 otherwise.
*/
-int llc_conn_ac_send_i_rsp_f_set_ackpf(struct sock *sk, struct sk_buff *skb)
+static int llc_conn_ac_send_i_rsp_f_set_ackpf(struct sock *sk,
+ struct sk_buff *skb)
{
int rc;
struct llc_opt *llc = llc_sk(sk);
* if there is any. ack_pf flag indicates if a PDU has been received with
* p-bit set to one. Returns 0 for success, 1 otherwise.
*/
-int llc_conn_ac_send_rr_rsp_f_set_ackpf(struct sock *sk, struct sk_buff *skb)
+static int llc_conn_ac_send_rr_rsp_f_set_ackpf(struct sock *sk,
+ struct sk_buff *skb)
{
int rc = -ENOBUFS;
struct sk_buff *nskb = llc_alloc_frame();
* acknowledgements decreases by increasing of "npta". Returns 0 for
* success, 1 otherwise.
*/
-int llc_conn_ac_inc_npta_value(struct sock *sk, struct sk_buff *skb)
+static int llc_conn_ac_inc_npta_value(struct sock *sk, struct sk_buff *skb)
{
struct llc_opt *llc = llc_sk(sk);
return 0;
}
-int llc_conn_ac_set_p_flag_1(struct sock *sk, struct sk_buff *skb)
+static int llc_conn_ac_set_p_flag_1(struct sock *sk, struct sk_buff *skb)
{
llc_conn_set_p_flag(sk, 1);
return 0;
return 0;
}
-int llc_conn_ac_set_f_flag_p(struct sock *sk, struct sk_buff *skb)
-{
- llc_pdu_decode_pf_bit(skb, &llc_sk(sk)->f_flag);
- return 0;
-}
-
void llc_conn_pf_cycle_tmr_cb(unsigned long timeout_data)
{
struct sock *sk = (struct sock *)timeout_data;
ev->prim_type == LLC_PRIM_TYPE_REQ ? 0 : 1;
}
-int llc_conn_ev_conn_resp(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- return ev->prim == LLC_CONN_PRIM &&
- ev->prim_type == LLC_PRIM_TYPE_RESP ? 0 : 1;
-}
-
int llc_conn_ev_data_req(struct sock *sk, struct sk_buff *skb)
{
struct llc_conn_state_ev *ev = llc_conn_ev(skb);
ev->prim_type == LLC_PRIM_TYPE_REQ ? 0 : 1;
}
-int llc_conn_ev_rst_resp(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- return ev->prim == LLC_RESET_PRIM &&
- ev->prim_type == LLC_PRIM_TYPE_RESP ? 0 : 1;
-}
-
int llc_conn_ev_local_busy_detected(struct sock *sk, struct sk_buff *skb)
{
struct llc_conn_state_ev *ev = llc_conn_ev(skb);
return rc;
}
-int llc_conn_ev_rx_xxx_cmd_pbit_set_0(struct sock *sk, struct sk_buff *skb)
-{
- u16 rc = 1;
- struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb);
-
- if (LLC_PDU_IS_CMD(pdu)) {
- if (LLC_PDU_TYPE_IS_I(pdu) || LLC_PDU_TYPE_IS_S(pdu)) {
- if (LLC_I_PF_IS_0(pdu))
- rc = 0;
- } else if (LLC_PDU_TYPE_IS_U(pdu))
- switch (LLC_U_PDU_CMD(pdu)) {
- case LLC_2_PDU_CMD_SABME:
- case LLC_2_PDU_CMD_DISC:
- if (LLC_U_PF_IS_0(pdu))
- rc = 0;
- break;
- }
- }
- return rc;
-}
-
int llc_conn_ev_rx_xxx_cmd_pbit_set_x(struct sock *sk, struct sk_buff *skb)
{
u16 rc = 1;
return rc;
}
-int llc_conn_ev_rx_xxx_yyy(struct sock *sk, struct sk_buff *skb)
-{
- u16 rc = 1;
- struct llc_pdu_un *pdu = llc_pdu_un_hdr(skb);
-
- if (LLC_PDU_TYPE_IS_I(pdu) || LLC_PDU_TYPE_IS_S(pdu))
- rc = 0;
- else if (LLC_PDU_TYPE_IS_U(pdu))
- switch (LLC_U_PDU_CMD(pdu)) {
- case LLC_2_PDU_CMD_SABME:
- case LLC_2_PDU_CMD_DISC:
- case LLC_2_PDU_RSP_UA:
- case LLC_2_PDU_RSP_DM:
- case LLC_2_PDU_RSP_FRMR:
- rc = 0;
- break;
- }
- return rc;
-}
-
int llc_conn_ev_rx_zzz_cmd_pbit_set_x_inval_nr(struct sock *sk,
struct sk_buff *skb)
{
return ev->type != LLC_CONN_EV_TYPE_BUSY_TMR;
}
-int llc_conn_ev_any_tmr_exp(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- return ev->type == LLC_CONN_EV_TYPE_P_TMR ||
- ev->type == LLC_CONN_EV_TYPE_ACK_TMR ||
- ev->type == LLC_CONN_EV_TYPE_REJ_TMR ||
- ev->type == LLC_CONN_EV_TYPE_BUSY_TMR ? 0 : 1;
-}
-
int llc_conn_ev_init_p_f_cycle(struct sock *sk, struct sk_buff *skb)
{
return 1;
return llc_sk(sk)->cause_flag;
}
-int llc_conn_ev_qlfy_init_p_f_cycle(struct sock *sk, struct sk_buff *skb)
-{
- return 0;
-}
-
int llc_conn_ev_qlfy_set_status_conn(struct sock *sk, struct sk_buff *skb)
{
struct llc_conn_state_ev *ev = llc_conn_ev(skb);
return 0;
}
-int llc_conn_ev_qlfy_set_status_impossible(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- ev->status = LLC_STATUS_IMPOSSIBLE;
- return 0;
-}
-
int llc_conn_ev_qlfy_set_status_failed(struct sock *sk, struct sk_buff *skb)
{
struct llc_conn_state_ev *ev = llc_conn_ev(skb);
return 0;
}
-int llc_conn_ev_qlfy_set_status_received(struct sock *sk, struct sk_buff *skb)
-{
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- ev->status = LLC_STATUS_RECEIVED;
- return 0;
-}
-
int llc_conn_ev_qlfy_set_status_refuse(struct sock *sk, struct sk_buff *skb)
{
struct llc_conn_state_ev *ev = llc_conn_ev(skb);
* local mac, and local sap. Returns pointer for parent socket found,
* %NULL otherwise.
*/
-struct sock *llc_lookup_listener(struct llc_sap *sap, struct llc_addr *laddr)
+static struct sock *llc_lookup_listener(struct llc_sap *sap,
+ struct llc_addr *laddr)
{
struct sock *rc;
struct hlist_node *node;
* Finds offset of next category of transitions in transition table.
* Returns the start index of next category.
*/
-u16 find_next_offset(struct llc_conn_state *state, u16 offset)
+static u16 find_next_offset(struct llc_conn_state *state, u16 offset)
{
u16 cnt = 0;
struct llc_conn_state_trans **next_trans;
*
* Initializes a socket with default llc values.
*/
-int llc_sk_init(struct sock* sk)
+static int llc_sk_init(struct sock* sk)
{
struct llc_opt *llc = kmalloc(sizeof(*llc), GFP_ATOMIC);
int rc = -ENOMEM;
#include <net/llc.h>
LIST_HEAD(llc_sap_list);
-rwlock_t llc_sap_list_lock = RW_LOCK_UNLOCKED;
+DEFINE_RWLOCK(llc_sap_list_lock);
unsigned char llc_station_mac_sa[ETH_ALEN];
*
* Allocates and initializes sap.
*/
-struct llc_sap *llc_sap_alloc(void)
+static struct llc_sap *llc_sap_alloc(void)
{
struct llc_sap *sap = kmalloc(sizeof(*sap), GFP_ATOMIC);
*
* Adds a sap to the LLC's station sap list.
*/
-void llc_add_sap(struct llc_sap *sap)
+static void llc_add_sap(struct llc_sap *sap)
{
write_lock_bh(&llc_sap_list_lock);
list_add_tail(&sap->node, &llc_sap_list);
*
 * Removes a sap from the LLC's station sap list.
*/
-void llc_del_sap(struct llc_sap *sap)
+static void llc_del_sap(struct llc_sap *sap)
{
write_lock_bh(&llc_sap_list_lock);
list_del(&sap->node);
return rc;
}
-/**
- * llc_build_and_send_reset_pkt - Resets an established LLC connection
- * @prim: pointer to structure that contains service parameters.
- *
- * Called when upper layer wants to reset an established LLC connection
- * with a remote machine. This function packages a proper event and sends
- * it to connection component state machine. Returns 0 for success, 1
- * otherwise.
- */
-int llc_build_and_send_reset_pkt(struct sock *sk)
-{
- int rc = 1;
- struct sk_buff *skb = alloc_skb(0, GFP_ATOMIC);
-
- if (skb) {
- struct llc_conn_state_ev *ev = llc_conn_ev(skb);
-
- ev->type = LLC_CONN_EV_TYPE_PRIM;
- ev->prim = LLC_RESET_PRIM;
- ev->prim_type = LLC_PRIM_TYPE_REQ;
- rc = llc_conn_state_process(sk, skb);
- }
- return rc;
-}
}
}
-/**
- * llc_pdu_decode_cr_bit - extracts command response bit from LLC header
- * @skb: input skb that c/r bit must be extracted from it.
- * @cr_bit: command/response bit (0 or 1).
- *
- * This function extracts command/response bit from LLC header. this bit
- * is right bit of source SAP.
- */
-void llc_pdu_decode_cr_bit(struct sk_buff *skb, u8 *cr_bit)
-{
- *cr_bit = llc_pdu_un_hdr(skb)->ssap & LLC_PDU_CMD_RSP_MASK;
-}
-
/**
* llc_pdu_init_as_disc_cmd - Builds DISC PDU
* @skb: Address of the skb to build
return 0;
}
-struct seq_operations llc_seq_socket_ops = {
+static struct seq_operations llc_seq_socket_ops = {
.start = llc_seq_start,
.next = llc_seq_next,
.stop = llc_seq_stop,
.show = llc_seq_socket_show,
};
-struct seq_operations llc_seq_core_ops = {
+static struct seq_operations llc_seq_core_ops = {
.start = llc_seq_start,
.next = llc_seq_next,
.stop = llc_seq_stop,
* if needed(on receiving an UI frame). sk can be null for the
* datalink_proto case.
*/
-void llc_sap_state_process(struct llc_sap *sap, struct sk_buff *skb)
+static void llc_sap_state_process(struct llc_sap *sap, struct sk_buff *skb)
{
struct llc_sap_state_ev *ev = llc_sap_ev(skb);
* Search socket list of the SAP and finds connection using the local
* mac, and local sap. Returns pointer for socket found, %NULL otherwise.
*/
-struct sock *llc_lookup_dgram(struct llc_sap *sap, struct llc_addr *laddr)
+static struct sock *llc_lookup_dgram(struct llc_sap *sap,
+ struct llc_addr *laddr)
{
struct sock *rc;
struct hlist_node *node;
* Queues an event (on the station event queue) for handling by the
* station state machine and attempts to process any queued-up events.
*/
-void llc_station_state_process(struct sk_buff *skb)
+static void llc_station_state_process(struct sk_buff *skb)
{
spin_lock_bh(&llc_main_station.ev_q.lock);
skb_queue_tail(&llc_main_station.ev_q.list, skb);
int nr_rx_ip(struct sk_buff *skb, struct net_device *dev)
{
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
+ struct net_device_stats *stats = netdev_priv(dev);
if (!netif_running(dev)) {
stats->rx_errors++;
static int nr_rebuild_header(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
+ struct net_device_stats *stats = netdev_priv(dev);
struct sk_buff *skbn;
unsigned char *bp = skb->data;
int len;
static int nr_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
+ struct net_device_stats *stats = netdev_priv(dev);
dev_kfree_skb(skb);
stats->tx_errors++;
return 0;
static struct net_device_stats *nr_get_stats(struct net_device *dev)
{
- return (struct net_device_stats *)dev->priv;
+ return netdev_priv(dev);
}
void nr_setup(struct net_device *dev)
static unsigned int nr_neigh_no = 1;
static HLIST_HEAD(nr_node_list);
-static spinlock_t nr_node_list_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nr_node_list_lock);
static HLIST_HEAD(nr_neigh_list);
-static spinlock_t nr_neigh_list_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(nr_neigh_list_lock);
-struct nr_node *nr_node_get(ax25_address *callsign)
+static struct nr_node *nr_node_get(ax25_address *callsign)
{
struct nr_node *found = NULL;
struct nr_node *nr_node;
return found;
}
-struct nr_neigh *nr_neigh_get_dev(ax25_address *callsign, struct net_device *dev)
+static struct nr_neigh *nr_neigh_get_dev(ax25_address *callsign,
+ struct net_device *dev)
{
struct nr_neigh *found = NULL;
struct nr_neigh *nr_neigh;
#include <net/ax25.h>
#include <net/rose.h>
-/*
- * Only allow IP over ROSE frames through if the netrom device is up.
- */
-
-int rose_rx_ip(struct sk_buff *skb, struct net_device *dev)
-{
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
-
-#ifdef CONFIG_INET
- if (!netif_running(dev)) {
- stats->rx_errors++;
- return 0;
- }
-
- stats->rx_packets++;
- stats->rx_bytes += skb->len;
-
- skb->protocol = htons(ETH_P_IP);
-
- /* Spoof incoming device */
- skb->dev = dev;
- skb->h.raw = skb->data;
- skb->nh.raw = skb->data;
- skb->pkt_type = PACKET_HOST;
-
- ip_rcv(skb, skb->dev, NULL);
-#else
- kfree_skb(skb);
-#endif
- return 1;
-}
-
static int rose_header(struct sk_buff *skb, struct net_device *dev, unsigned short type,
void *daddr, void *saddr, unsigned len)
{
static int rose_rebuild_header(struct sk_buff *skb)
{
struct net_device *dev = skb->dev;
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
+ struct net_device_stats *stats = netdev_priv(dev);
unsigned char *bp = (unsigned char *)skb->data;
struct sk_buff *skbn;
static int rose_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct net_device_stats *stats = (struct net_device_stats *)dev->priv;
+ struct net_device_stats *stats = netdev_priv(dev);
if (!netif_running(dev)) {
printk(KERN_ERR "ROSE: rose_xmit - called when iface is down\n");
static struct net_device_stats *rose_get_stats(struct net_device *dev)
{
- return (struct net_device_stats *)dev->priv;
+ return netdev_priv(dev);
}
void rose_setup(struct net_device *dev)
static void rose_ftimer_expiry(unsigned long);
static void rose_t0timer_expiry(unsigned long);
+static void rose_transmit_restart_confirmation(struct rose_neigh *neigh);
+static void rose_transmit_restart_request(struct rose_neigh *neigh);
+
void rose_start_ftimer(struct rose_neigh *neigh)
{
del_timer(&neigh->ftimer);
add_timer(&neigh->ftimer);
}
-void rose_start_t0timer(struct rose_neigh *neigh)
+static void rose_start_t0timer(struct rose_neigh *neigh)
{
del_timer(&neigh->t0timer);
return timer_pending(&neigh->ftimer);
}
-int rose_t0timer_running(struct rose_neigh *neigh)
+static int rose_t0timer_running(struct rose_neigh *neigh)
{
return timer_pending(&neigh->t0timer);
}
/*
* This routine is called when a Restart Request is needed
*/
-void rose_transmit_restart_request(struct rose_neigh *neigh)
+static void rose_transmit_restart_request(struct rose_neigh *neigh)
{
struct sk_buff *skb;
unsigned char *dptr;
/*
* This routine is called when a Restart Confirmation is needed
*/
-void rose_transmit_restart_confirmation(struct rose_neigh *neigh)
+static void rose_transmit_restart_confirmation(struct rose_neigh *neigh)
{
struct sk_buff *skb;
unsigned char *dptr;
kfree_skb(skb);
}
-/*
- * This routine is called when a Diagnostic is required.
- */
-void rose_transmit_diagnostic(struct rose_neigh *neigh, unsigned char diag)
-{
- struct sk_buff *skb;
- unsigned char *dptr;
- int len;
-
- len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 2;
-
- if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
- return;
-
- skb_reserve(skb, AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN);
-
- dptr = skb_put(skb, ROSE_MIN_LEN + 2);
-
- *dptr++ = AX25_P_ROSE;
- *dptr++ = ROSE_GFI;
- *dptr++ = 0x00;
- *dptr++ = ROSE_DIAGNOSTIC;
- *dptr++ = diag;
-
- if (!rose_send_frame(skb, neigh))
- kfree_skb(skb);
-}
-
/*
* This routine is called when a Clear Request is needed outside of the context
* of a connected socket.
#include <linux/interrupt.h>
#include <net/rose.h>
+static int rose_create_facilities(unsigned char *buffer, rose_cb *rose);
+
/*
* This routine purges all of the queues of frames.
*/
return 1;
}
-int rose_create_facilities(unsigned char *buffer, rose_cb *rose)
+static int rose_create_facilities(unsigned char *buffer, rose_cb *rose)
{
unsigned char *p = buffer + 1;
char *callsign;
extern struct rw_semaphore rxrpc_conns_sem;
extern unsigned long rxrpc_conn_timeout;
-extern void rxrpc_conn_do_timeout(struct rxrpc_connection *conn);
extern void rxrpc_conn_clearall(struct rxrpc_peer *peer);
/*
extern void rxrpc_peer_clearall(struct rxrpc_transport *trans);
-extern void rxrpc_peer_do_timeout(struct rxrpc_peer *peer);
-
/*
* proc.c
static atomic_t rxrpc_krxiod_qcount = ATOMIC_INIT(0);
static LIST_HEAD(rxrpc_krxiod_transportq);
-static spinlock_t rxrpc_krxiod_transportq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rxrpc_krxiod_transportq_lock);
static LIST_HEAD(rxrpc_krxiod_callq);
-static spinlock_t rxrpc_krxiod_callq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rxrpc_krxiod_callq_lock);
static volatile int rxrpc_krxiod_die;
/* queue of unprocessed inbound messages with seqno #1 and
* RXRPC_CLIENT_INITIATED flag set */
static LIST_HEAD(rxrpc_krxsecd_initmsgq);
-static spinlock_t rxrpc_krxsecd_initmsgq_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rxrpc_krxsecd_initmsgq_lock);
static void rxrpc_krxsecd_process_incoming_call(struct rxrpc_message *msg);
static int krxtimod_die;
static LIST_HEAD(krxtimod_list);
-static spinlock_t krxtimod_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(krxtimod_lock);
static int krxtimod(void *arg);
DECLARE_RWSEM(rxrpc_peers_sem);
unsigned long rxrpc_peer_timeout = 12 * 60 * 60;
+static void rxrpc_peer_do_timeout(struct rxrpc_peer *peer);
+
static void __rxrpc_peer_timeout(rxrpc_timer_t *timer)
{
struct rxrpc_peer *peer =
* handle a peer timing out in the graveyard
* - called from krxtimod
*/
-void rxrpc_peer_do_timeout(struct rxrpc_peer *peer)
+static void rxrpc_peer_do_timeout(struct rxrpc_peer *peer)
{
struct rxrpc_transport *trans = peer->trans;
#include <net/act_api.h>
#if 1 /* control */
-#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#define DPRINTK(format, args...) printk(KERN_DEBUG format, ##args)
#else
-#define DPRINTK(format,args...)
+#define DPRINTK(format, args...)
#endif
#if 0 /* data */
-#define D2PRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#define D2PRINTK(format, args...) printk(KERN_DEBUG format, ##args)
#else
-#define D2PRINTK(format,args...)
+#define D2PRINTK(format, args...)
#endif
static struct tc_action_ops *act_base = NULL;
-static rwlock_t act_mod_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(act_mod_lock);
int tcf_register_action(struct tc_action_ops *act)
{
struct tc_action_ops *a, **ap;
write_lock(&act_mod_lock);
- for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next) {
+ for (ap = &act_base; (a = *ap) != NULL; ap = &a->next) {
if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
write_unlock(&act_mod_lock);
return -EEXIST;
}
}
-
- act->next = NULL;
+ act->next = NULL;
*ap = act;
-
write_unlock(&act_mod_lock);
-
return 0;
}
int err = -ENOENT;
write_lock(&act_mod_lock);
- for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next)
- if(a == act)
+ for (ap = &act_base; (a = *ap) != NULL; ap = &a->next)
+ if (a == act)
break;
-
if (a) {
*ap = a->next;
a->next = NULL;
/* lookup by name */
static struct tc_action_ops *tc_lookup_action_n(char *kind)
{
-
struct tc_action_ops *a = NULL;
if (kind) {
read_lock(&act_mod_lock);
for (a = act_base; a; a = a->next) {
- if (strcmp(kind,a->kind) == 0) {
+ if (strcmp(kind, a->kind) == 0) {
if (!try_module_get(a->owner)) {
read_unlock(&act_mod_lock);
return NULL;
- }
+ }
break;
}
}
read_unlock(&act_mod_lock);
}
-
return a;
}
/* lookup by rtattr */
static struct tc_action_ops *tc_lookup_action(struct rtattr *kind)
{
-
struct tc_action_ops *a = NULL;
if (kind) {
read_lock(&act_mod_lock);
for (a = act_base; a; a = a->next) {
-
- if (strcmp((char*)RTA_DATA(kind),a->kind) == 0){
+ if (rtattr_strcmp(kind, a->kind) == 0) {
if (!try_module_get(a->owner)) {
read_unlock(&act_mod_lock);
return NULL;
- }
+ }
break;
}
}
read_unlock(&act_mod_lock);
}
-
return a;
}
if (!try_module_get(a->owner)) {
read_unlock(&act_mod_lock);
return NULL;
- }
+ }
break;
}
}
read_unlock(&act_mod_lock);
}
-
return a;
}
#endif
-int tcf_action_exec(struct sk_buff *skb,struct tc_action *act, struct tcf_result *res)
+int tcf_action_exec(struct sk_buff *skb, struct tc_action *act,
+ struct tcf_result *res)
{
-
struct tc_action *a;
- int ret = -1;
+ int ret = -1;
if (skb->tc_verd & TC_NCLS) {
skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
- D2PRINTK("(%p)tcf_action_exec: cleared TC_NCLS in %s out %s\n",skb,skb->input_dev?skb->input_dev->name:"xxx",skb->dev->name);
+ D2PRINTK("(%p)tcf_action_exec: cleared TC_NCLS in %s out %s\n",
+ skb, skb->input_dev ? skb->input_dev->name : "xxx",
+ skb->dev->name);
ret = TC_ACT_OK;
goto exec_done;
}
while ((a = act) != NULL) {
repeat:
if (a->ops && a->ops->act) {
- ret = a->ops->act(&skb,a);
- if (TC_MUNGED & skb->tc_verd) {
- /* copied already, allow trampling */
- skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
- skb->tc_verd = CLR_TC_MUNGED(skb->tc_verd);
- }
-
+ ret = a->ops->act(&skb, a);
+ if (TC_MUNGED & skb->tc_verd) {
+ /* copied already, allow trampling */
+ skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
+ skb->tc_verd = CLR_TC_MUNGED(skb->tc_verd);
+ }
if (ret != TC_ACT_PIPE)
goto exec_done;
if (ret == TC_ACT_REPEAT)
goto repeat; /* we need a ttl - JHS */
-
}
act = a->next;
}
-
exec_done:
if (skb->tc_classid > 0) {
res->classid = skb->tc_classid;
res->class = 0;
skb->tc_classid = 0;
}
-
return ret;
}
{
struct tc_action *a;
- for (a = act; act; a = act) {
- if (a && a->ops && a->ops->cleanup) {
- DPRINTK("tcf_action_destroy destroying %p next %p\n", a,a->next?a->next:NULL);
- act = act->next;
- if (ACT_P_DELETED == a->ops->cleanup(a, bind)) {
+ for (a = act; a; a = act) {
+ if (a->ops && a->ops->cleanup) {
+ DPRINTK("tcf_action_destroy destroying %p next %p\n",
+ a, a->next);
+ if (a->ops->cleanup(a, bind) == ACT_P_DELETED)
module_put(a->ops->owner);
- }
-
- a->ops = NULL;
+ act = act->next;
kfree(a);
} else { /*FIXME: Remove later - catch insertion bugs*/
- printk("tcf_action_destroy: BUG? destroying NULL ops \n");
- if (a) {
- act = act->next;
- kfree(a);
- } else {
- printk("tcf_action_destroy: BUG? destroying NULL action! \n");
- break;
- }
+ printk("tcf_action_destroy: BUG? destroying NULL ops\n");
+ act = act->next;
+ kfree(a);
}
}
}
-int tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+int
+tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
{
int err = -EINVAL;
-
- if ( (NULL == a) || (NULL == a->ops)
- || (NULL == a->ops->dump) )
+ if (a->ops == NULL || a->ops->dump == NULL)
return err;
return a->ops->dump(skb, a, bind, ref);
-
}
-
-int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+int
+tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
{
int err = -EINVAL;
- unsigned char *b = skb->tail;
+ unsigned char *b = skb->tail;
struct rtattr *r;
-
- if ( (NULL == a) || (NULL == a->ops)
- || (NULL == a->ops->dump) || (NULL == a->ops->kind))
+ if (a->ops == NULL || a->ops->dump == NULL)
return err;
-
RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind);
- if (tcf_action_copy_stats(skb,a))
+ if (tcf_action_copy_stats(skb, a))
goto rtattr_failure;
r = (struct rtattr*) skb->tail;
RTA_PUT(skb, TCA_OPTIONS, 0, NULL);
return err;
}
-
rtattr_failure:
skb_trim(skb, b - skb->data);
return -1;
-
}
-int tcf_action_dump(struct sk_buff *skb, struct tc_action *act, int bind, int ref)
+int
+tcf_action_dump(struct sk_buff *skb, struct tc_action *act, int bind, int ref)
{
struct tc_action *a;
int err = -EINVAL;
- unsigned char *b = skb->tail;
+ unsigned char *b = skb->tail;
struct rtattr *r ;
while ((a = act) != NULL) {
act = a->next;
RTA_PUT(skb, a->order, 0, NULL);
err = tcf_action_dump_1(skb, a, bind, ref);
- if (0 > err)
+ if (err < 0)
goto rtattr_failure;
-
r->rta_len = skb->tail - (u8*)r;
}
rtattr_failure:
skb_trim(skb, b - skb->data);
return -err;
-
}
-int tcf_action_init_1(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr, int bind )
+struct tc_action *tcf_action_init_1(struct rtattr *rta, struct rtattr *est,
+ char *name, int ovr, int bind, int *err)
{
+ struct tc_action *a;
struct tc_action_ops *a_o;
- char act_name[4 + IFNAMSIZ + 1];
+ char act_name[IFNAMSIZ];
struct rtattr *tb[TCA_ACT_MAX+1];
- struct rtattr *kind = NULL;
+ struct rtattr *kind;
- int err = -EINVAL;
+ *err = -EINVAL;
- if (NULL == name) {
- if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ if (name == NULL) {
+ if (rtattr_parse_nested(tb, TCA_ACT_MAX, rta) < 0)
goto err_out;
kind = tb[TCA_ACT_KIND-1];
- if (NULL != kind) {
- sprintf(act_name, "%s", (char*)RTA_DATA(kind));
- if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
- printk(" Action %s bad\n", (char*)RTA_DATA(kind));
- goto err_out;
- }
-
- } else {
- printk("Action bad kind\n");
+ if (kind == NULL)
+ goto err_out;
+ if (rtattr_strlcpy(act_name, kind, IFNAMSIZ) >= IFNAMSIZ)
goto err_out;
- }
- a_o = tc_lookup_action(kind);
} else {
- sprintf(act_name, "%s", name);
- DPRINTK("tcf_action_init_1: finding %s\n",act_name);
- a_o = tc_lookup_action_n(name);
+ if (strlcpy(act_name, name, IFNAMSIZ) >= IFNAMSIZ)
+ goto err_out;
}
+
+ a_o = tc_lookup_action_n(act_name);
+ if (a_o == NULL) {
#ifdef CONFIG_KMOD
- if (NULL == a_o) {
- DPRINTK("tcf_action_init_1: trying to load module %s\n",act_name);
- request_module (act_name);
+ rtnl_unlock();
+ request_module(act_name);
+ rtnl_lock();
+
a_o = tc_lookup_action_n(act_name);
- }
+ /* We dropped the RTNL semaphore in order to
+ * perform the module load. So, even if we
+ * succeeded in loading the module we have to
+ * tell the caller to replay the request. We
+ * indicate this using -EAGAIN.
+ */
+ if (a_o != NULL) {
+ *err = -EAGAIN;
+ goto err_mod;
+ }
#endif
- if (NULL == a_o) {
- printk("failed to find %s\n",act_name);
goto err_out;
}
- if (NULL == a) {
+ *err = -ENOMEM;
+ a = kmalloc(sizeof(*a), GFP_KERNEL);
+ if (a == NULL)
goto err_mod;
- }
+ memset(a, 0, sizeof(*a));
/* backward compatibility for policer */
- if (NULL == name) {
- err = a_o->init(tb[TCA_ACT_OPTIONS-1], est, a, ovr, bind);
- if (0 > err ) {
- err = -EINVAL;
- goto err_mod;
- }
- } else {
- err = a_o->init(rta, est, a, ovr, bind);
- if (0 > err ) {
- err = -EINVAL;
- goto err_mod;
- }
- }
+ if (name == NULL)
+ *err = a_o->init(tb[TCA_ACT_OPTIONS-1], est, a, ovr, bind);
+ else
+ *err = a_o->init(rta, est, a, ovr, bind);
+ if (*err < 0)
+ goto err_free;
/* module count goes up only when brand new policy is created
if it exists and is only bound to in a_o->init() then
- ACT_P_CREATED is not returned (a zero is).
- */
- if (ACT_P_CREATED != err) {
+ ACT_P_CREATED is not returned (a zero is).
+ */
+ if (*err != ACT_P_CREATED)
module_put(a_o->owner);
- }
a->ops = a_o;
- DPRINTK("tcf_action_init_1: successfull %s \n",act_name);
+ DPRINTK("tcf_action_init_1: successfull %s\n", act_name);
- return 0;
+ *err = 0;
+ return a;
+
+err_free:
+ kfree(a);
err_mod:
module_put(a_o->owner);
err_out:
- return err;
+ return NULL;
}
-int tcf_action_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr , int bind)
+struct tc_action *tcf_action_init(struct rtattr *rta, struct rtattr *est,
+ char *name, int ovr, int bind, int *err)
{
struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
+ struct tc_action *head = NULL, *act, *act_prev = NULL;
int i;
- struct tc_action *act = a, *a_s = a;
- int err = -EINVAL;
-
- if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
- return err;
-
- for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
- if (tb[i]) {
- if (NULL == act) {
- act = kmalloc(sizeof(*act),GFP_KERNEL);
- if (NULL == act) {
- err = -ENOMEM;
- goto bad_ret;
- }
- memset(act, 0,sizeof(*act));
- }
- act->next = NULL;
- if (0 > tcf_action_init_1(tb[i],est,act,name,ovr,bind)) {
- printk("Error processing action order %d\n",i);
- return err;
- }
+ if (rtattr_parse_nested(tb, TCA_ACT_MAX_PRIO, rta) < 0) {
+ *err = -EINVAL;
+ return head;
+ }
- act->order = i+1;
- if (a_s != act) {
- a_s->next = act;
- a_s = act;
- }
- act = NULL;
- }
+ for (i=0; i < TCA_ACT_MAX_PRIO && tb[i]; i++) {
+ act = tcf_action_init_1(tb[i], est, name, ovr, bind, err);
+ if (act == NULL)
+ goto err;
+ act->order = i+1;
+ if (head == NULL)
+ head = act;
+ else
+ act_prev->next = act;
+ act_prev = act;
}
+ return head;
- return 0;
-bad_ret:
- tcf_action_destroy(a, bind);
- return err;
+err:
+ if (head != NULL)
+ tcf_action_destroy(head, bind);
+ return NULL;
}
-int tcf_action_copy_stats (struct sk_buff *skb,struct tc_action *a)
+int tcf_action_copy_stats(struct sk_buff *skb, struct tc_action *a)
{
int err;
struct gnet_dump d;
struct tcf_act_hdr *h = a->priv;
-#ifdef CONFIG_KMOD
- /* place holder */
-#endif
-
- if (NULL == h)
+ if (h == NULL)
goto errout;
if (a->type == TCA_OLD_COMPAT)
if (err < 0)
goto errout;
- if (NULL != a->ops && NULL != a->ops->get_stats)
+ if (a->ops != NULL && a->ops->get_stats != NULL)
if (a->ops->get_stats(skb, a) < 0)
goto errout;
return -1;
}
-
static int
-tca_get_fill(struct sk_buff *skb, struct tc_action *a,
- u32 pid, u32 seq, unsigned flags, int event, int bind, int ref)
+tca_get_fill(struct sk_buff *skb, struct tc_action *a, u32 pid, u32 seq,
+ unsigned flags, int event, int bind, int ref)
{
struct tcamsg *t;
- struct nlmsghdr *nlh;
- unsigned char *b = skb->tail;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
struct rtattr *x;
nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*t));
x = (struct rtattr*) skb->tail;
RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
- if (0 > tcf_action_dump(skb, a, bind, ref)) {
+ if (tcf_action_dump(skb, a, bind, ref) < 0)
goto rtattr_failure;
- }
x->rta_len = skb->tail - (u8*)x;
return -1;
}
-static int act_get_notify(u32 pid, struct nlmsghdr *n,
- struct tc_action *a, int event)
+static int
+act_get_notify(u32 pid, struct nlmsghdr *n, struct tc_action *a, int event)
{
struct sk_buff *skb;
-
int err = 0;
skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
if (!skb)
return -ENOBUFS;
-
- if (tca_get_fill(skb, a, pid, n->nlmsg_seq, 0, event, 0, 0) <= 0) {
+ if (tca_get_fill(skb, a, pid, n->nlmsg_seq, 0, event, 0, 0) <= 0) {
kfree_skb(skb);
return -EINVAL;
}
-
- err = netlink_unicast(rtnl,skb, pid, MSG_DONTWAIT);
+ err = netlink_unicast(rtnl, skb, pid, MSG_DONTWAIT);
if (err > 0)
err = 0;
return err;
}
-static int tcf_action_get_1(struct rtattr *rta, struct tc_action *a, struct nlmsghdr *n, u32 pid)
+static struct tc_action *
+tcf_action_get_1(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int *err)
{
- struct tc_action_ops *a_o;
- char act_name[4 + IFNAMSIZ + 1];
struct rtattr *tb[TCA_ACT_MAX+1];
- struct rtattr *kind = NULL;
+ struct tc_action *a;
int index;
- int err = -EINVAL;
-
- if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
- goto err_out;
-
-
- kind = tb[TCA_ACT_KIND-1];
- if (NULL != kind) {
- sprintf(act_name, "%s", (char*)RTA_DATA(kind));
- if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
- printk("tcf_action_get_1: action %s bad\n", (char*)RTA_DATA(kind));
- goto err_out;
- }
-
- } else {
- printk("tcf_action_get_1: action bad kind\n");
- goto err_out;
- }
-
- if (tb[TCA_ACT_INDEX - 1]) {
- index = *(int *)RTA_DATA(tb[TCA_ACT_INDEX - 1]);
- } else {
- printk("tcf_action_get_1: index not received\n");
- goto err_out;
- }
+ *err = -EINVAL;
+ if (rtattr_parse_nested(tb, TCA_ACT_MAX, rta) < 0)
+ return NULL;
- a_o = tc_lookup_action(kind);
-#ifdef CONFIG_KMOD
- if (NULL == a_o) {
- request_module (act_name);
- a_o = tc_lookup_action_n(act_name);
- }
+ if (tb[TCA_ACT_INDEX - 1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_ACT_INDEX - 1]) < sizeof(index))
+ return NULL;
+ index = *(int *)RTA_DATA(tb[TCA_ACT_INDEX - 1]);
-#endif
- if (NULL == a_o) {
- printk("failed to find %s\n",act_name);
- goto err_out;
- }
+ *err = -ENOMEM;
+ a = kmalloc(sizeof(struct tc_action), GFP_KERNEL);
+ if (a == NULL)
+ return NULL;
+ memset(a, 0, sizeof(struct tc_action));
- if (NULL == a) {
+ *err = -EINVAL;
+ a->ops = tc_lookup_action(tb[TCA_ACT_KIND - 1]);
+ if (a->ops == NULL)
+ goto err_free;
+ if (a->ops->lookup == NULL)
goto err_mod;
- }
-
- a->ops = a_o;
-
- if (NULL == a_o->lookup || 0 == a_o->lookup(a, index)) {
- a->ops = NULL;
- err = -EINVAL;
+ *err = -ENOENT;
+ if (a->ops->lookup(a, index) == 0)
goto err_mod;
- }
- module_put(a_o->owner);
- return 0;
+ module_put(a->ops->owner);
+ *err = 0;
+ return a;
err_mod:
- module_put(a_o->owner);
-err_out:
- return err;
+ module_put(a->ops->owner);
+err_free:
+ kfree(a);
+ return NULL;
}
-static void cleanup_a (struct tc_action *act)
+static void cleanup_a(struct tc_action *act)
{
struct tc_action *a;
- for (a = act; act; a = act) {
- if (a) {
- act = act->next;
- a->ops = NULL;
- a->priv = NULL;
- kfree(a);
- } else {
- printk("cleanup_a: BUG? empty action\n");
- }
- }
-}
-
-static struct tc_action_ops *get_ao(struct rtattr *kind, struct tc_action *a)
-{
- char act_name[4 + IFNAMSIZ + 1];
- struct tc_action_ops *a_o = NULL;
-
- if (NULL != kind) {
- sprintf(act_name, "%s", (char*)RTA_DATA(kind));
- if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
- printk("get_ao: action %s bad\n", (char*)RTA_DATA(kind));
- return NULL;
- }
-
- } else {
- printk("get_ao: action bad kind\n");
- return NULL;
- }
-
- a_o = tc_lookup_action(kind);
-#ifdef CONFIG_KMOD
- if (NULL == a_o) {
- DPRINTK("get_ao: trying to load module %s\n",act_name);
- request_module (act_name);
- a_o = tc_lookup_action_n(act_name);
- }
-#endif
-
- if (NULL == a_o) {
- printk("get_ao: failed to find %s\n",act_name);
- return NULL;
+ for (a = act; a; a = act) {
+ act = a->next;
+ kfree(a);
}
-
- a->ops = a_o;
- return a_o;
}
static struct tc_action *create_a(int i)
{
- struct tc_action *act = NULL;
+ struct tc_action *act;
- act = kmalloc(sizeof(*act),GFP_KERNEL);
- if (NULL == act) { /* grrr .. */
- printk("create_a: failed to alloc! \n");
+ act = kmalloc(sizeof(*act), GFP_KERNEL);
+ if (act == NULL) {
+ printk("create_a: failed to alloc!\n");
return NULL;
}
-
- memset(act, 0,sizeof(*act));
-
+ memset(act, 0, sizeof(*act));
act->order = i;
-
return act;
}
struct netlink_callback dcb;
struct rtattr *x;
struct rtattr *tb[TCA_ACT_MAX+1];
- struct rtattr *kind = NULL;
+ struct rtattr *kind;
struct tc_action *a = create_a(0);
int err = -EINVAL;
- if (NULL == a) {
+ if (a == NULL) {
printk("tca_action_flush: couldnt create tc_action\n");
return err;
}
b = (unsigned char *)skb->tail;
- if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
+ if (rtattr_parse_nested(tb, TCA_ACT_MAX, rta) < 0)
goto err_out;
- }
kind = tb[TCA_ACT_KIND-1];
- if (NULL == get_ao(kind, a)) {
+ a->ops = tc_lookup_action(kind);
+ if (a->ops == NULL)
goto err_out;
- }
- nlh = NLMSG_PUT(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof (*t));
+ nlh = NLMSG_PUT(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof(*t));
t = NLMSG_DATA(nlh);
t->tca_family = AF_UNSPEC;
RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
err = a->ops->walk(skb, &dcb, RTM_DELACTION, a);
- if (0 > err ) {
+ if (err < 0)
goto rtattr_failure;
- }
x->rta_len = skb->tail - (u8 *) x;
return err;
-
rtattr_failure:
module_put(a->ops->owner);
nlmsg_failure:
return err;
}
-static int tca_action_gd(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int event )
+static int
+tca_action_gd(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int event)
{
-
- int s = 0;
int i, ret = 0;
- struct tc_action *act = NULL;
struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
- struct tc_action *a = NULL, *a_s = NULL;
-
- if (event != RTM_GETACTION && event != RTM_DELACTION)
- ret = -EINVAL;
+ struct tc_action *head = NULL, *act, *act_prev = NULL;
- if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
- ret = -EINVAL;
- goto nlmsg_failure;
- }
+ if (rtattr_parse_nested(tb, TCA_ACT_MAX_PRIO, rta) < 0)
+ return -EINVAL;
if (event == RTM_DELACTION && n->nlmsg_flags&NLM_F_ROOT) {
- if (NULL != tb[0] && NULL == tb[1]) {
- return tca_action_flush(tb[0],n,pid);
- }
+ if (tb[0] != NULL && tb[1] == NULL)
+ return tca_action_flush(tb[0], n, pid);
}
- for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
-
- if (NULL == tb[i])
- break;
-
- act = create_a(i+1);
- if (NULL != a && a != act) {
- a->next = act;
- a = act;
- } else {
- a = act;
- }
-
- if (!s) {
- s = 1;
- a_s = a;
- }
-
- ret = tcf_action_get_1(tb[i],act,n,pid);
- if (ret < 0) {
- printk("tcf_action_get: failed to get! \n");
- ret = -EINVAL;
- goto rtattr_failure;
- }
+ for (i=0; i < TCA_ACT_MAX_PRIO && tb[i]; i++) {
+ act = tcf_action_get_1(tb[i], n, pid, &ret);
+ if (act == NULL)
+ goto err;
+ act->order = i+1;
+ if (head == NULL)
+ head = act;
+ else
+ act_prev->next = act;
+ act_prev = act;
}
-
- if (RTM_GETACTION == event) {
- ret = act_get_notify(pid, n, a_s, event);
- } else { /* delete */
-
+ if (event == RTM_GETACTION)
+ ret = act_get_notify(pid, n, head, event);
+ else { /* delete */
struct sk_buff *skb;
skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
if (!skb) {
ret = -ENOBUFS;
- goto nlmsg_failure;
+ goto err;
}
- if (tca_get_fill(skb, a_s, pid, n->nlmsg_seq, 0, event, 0 , 1) <= 0) {
+ if (tca_get_fill(skb, head, pid, n->nlmsg_seq, 0, event,
+ 0, 1) <= 0) {
kfree_skb(skb);
ret = -EINVAL;
- goto nlmsg_failure;
+ goto err;
}
/* now do the delete */
- tcf_action_destroy(a_s, 0);
-
- ret = rtnetlink_send(skb, pid, RTMGRP_TC, n->nlmsg_flags&NLM_F_ECHO);
+ tcf_action_destroy(head, 0);
+ ret = rtnetlink_send(skb, pid, RTMGRP_TC,
+ n->nlmsg_flags&NLM_F_ECHO);
if (ret > 0)
return 0;
return ret;
}
-rtattr_failure:
-nlmsg_failure:
- cleanup_a(a_s);
+err:
+ cleanup_a(head);
return ret;
}
-
-static int tcf_add_notify(struct tc_action *a, u32 pid, u32 seq, int event, unsigned flags)
+static int tcf_add_notify(struct tc_action *a, u32 pid, u32 seq, int event,
+ unsigned flags)
{
struct tcamsg *t;
- struct nlmsghdr *nlh;
+ struct nlmsghdr *nlh;
struct sk_buff *skb;
struct rtattr *x;
unsigned char *b;
-
-
int err = 0;
skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
x = (struct rtattr*) skb->tail;
RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
- if (0 > tcf_action_dump(skb, a, 0, 0)) {
+ if (tcf_action_dump(skb, a, 0, 0) < 0)
goto rtattr_failure;
- }
x->rta_len = skb->tail - (u8*)x;
err = rtnetlink_send(skb, pid, RTMGRP_TC, flags&NLM_F_ECHO);
if (err > 0)
err = 0;
-
return err;
rtattr_failure:
}
-static int tcf_action_add(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int ovr )
+static int
+tcf_action_add(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int ovr)
{
int ret = 0;
- struct tc_action *act = NULL;
- struct tc_action *a = NULL;
+ struct tc_action *act;
+ struct tc_action *a;
u32 seq = n->nlmsg_seq;
- act = kmalloc(sizeof(*act),GFP_KERNEL);
- if (NULL == act)
- return -ENOMEM;
-
- memset(act, 0, sizeof(*act));
-
- ret = tcf_action_init(rta, NULL,act,NULL,ovr,0);
- /* NOTE: We have an all-or-none model
- * This means that of any of the actions fail
- * to update then all are undone.
- * */
- if (0 > ret) {
- tcf_action_destroy(act, 0);
+ act = tcf_action_init(rta, NULL, NULL, ovr, 0, &ret);
+ if (act == NULL)
goto done;
- }
/* dump then free all the actions after update; inserted policy
* stays intact
* */
- ret = tcf_add_notify(act, pid, seq, RTM_NEWACTION, n->nlmsg_flags);
- for (a = act; act; a = act) {
- if (a) {
- act = act->next;
- a->ops = NULL;
- a->priv = NULL;
- kfree(a);
- } else {
- printk("tcf_action_add: BUG? empty action\n");
- }
+ ret = tcf_add_notify(act, pid, seq, RTM_NEWACTION, n->nlmsg_flags);
+ for (a = act; a; a = act) {
+ act = a->next;
+ kfree(a);
}
done:
-
return ret;
}
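
(Editorial aside, not part of the patch.) The all-or-none behaviour the removed NOTE described -- if any of the actions fails to set up, everything already built is undone -- presumably still holds inside tcf_action_init, and the loop above then frees only the temporary list wrappers after notifying; the installed policy stays in the hash. A minimal, self-contained userspace sketch of that build-all-or-roll-back pattern, using hypothetical make_node()/free_list() helpers rather than the kernel API:

	#include <stdlib.h>

	struct node { struct node *next; int val; };

	/* Hypothetical per-element constructor: fails on negative input
	 * or allocation failure. */
	static struct node *make_node(int val)
	{
		struct node *n;

		if (val < 0)
			return NULL;
		n = malloc(sizeof(*n));
		if (n == NULL)
			return NULL;
		n->val = val;
		n->next = NULL;
		return n;
	}

	static void free_list(struct node *head)
	{
		struct node *n;

		while ((n = head) != NULL) {
			head = n->next;
			free(n);
		}
	}

	/* Build every node or none: the first failure rolls back the
	 * partial list, so the caller never sees a half-built result. */
	static struct node *build_all(const int *vals, int count)
	{
		struct node *head = NULL, *tail = NULL, *n;
		int i;

		for (i = 0; i < count; i++) {
			n = make_node(vals[i]);
			if (n == NULL) {
				free_list(head);
				return NULL;
			}
			if (tail == NULL)
				head = n;
			else
				tail->next = n;
			tail = n;
		}
		return head;
	}
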
{
struct rtattr **tca = arg;
u32 pid = skb ? NETLINK_CB(skb).pid : 0;
-
int ret = 0, ovr = 0;
- if (NULL == tca[TCA_ACT_TAB-1]) {
- printk("tc_ctl_action: received NO action attribs\n");
- return -EINVAL;
+ if (tca[TCA_ACT_TAB-1] == NULL) {
+ printk("tc_ctl_action: received NO action attribs\n");
+ return -EINVAL;
}
/* n->nlmsg_flags&NLM_F_CREATE
* */
switch (n->nlmsg_type) {
- case RTM_NEWACTION:
+ case RTM_NEWACTION:
/* we are going to assume all other flags
* imply create only if it doesnt exist
* Note that CREATE | EXCL implies that
* but since we want avoid ambiguity (eg when flags
* is zero) then just set this
*/
- if (n->nlmsg_flags&NLM_F_REPLACE) {
+ if (n->nlmsg_flags&NLM_F_REPLACE)
ovr = 1;
- }
- ret = tcf_action_add(tca[TCA_ACT_TAB-1], n, pid, ovr);
+replay:
+ ret = tcf_action_add(tca[TCA_ACT_TAB-1], n, pid, ovr);
+ if (ret == -EAGAIN)
+ goto replay;
break;
case RTM_DELACTION:
- ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_DELACTION);
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid, RTM_DELACTION);
break;
case RTM_GETACTION:
- ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_GETACTION);
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid, RTM_GETACTION);
break;
default:
- printk(" Unknown cmd was detected\n");
- break;
+ BUG();
}
return ret;
struct rtattr *tb1, *tb2[TCA_ACT_MAX+1];
struct rtattr *tb[TCA_ACT_MAX_PRIO + 1];
struct rtattr *rta[TCAA_MAX + 1];
- struct rtattr *kind = NULL;
- int min_len = NLMSG_LENGTH(sizeof (struct tcamsg));
-
+ struct rtattr *kind;
+ int min_len = NLMSG_LENGTH(sizeof(struct tcamsg));
int attrlen = n->nlmsg_len - NLMSG_ALIGN(min_len);
struct rtattr *attr = (void *) n + NLMSG_ALIGN(min_len);
if (rtattr_parse(rta, TCAA_MAX, attr, attrlen) < 0)
return NULL;
tb1 = rta[TCA_ACT_TAB - 1];
- if (NULL == tb1) {
+ if (tb1 == NULL)
return NULL;
- }
- if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(tb1), NLMSG_ALIGN(RTA_PAYLOAD(tb1))) < 0)
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(tb1),
+ NLMSG_ALIGN(RTA_PAYLOAD(tb1))) < 0)
return NULL;
- if (NULL == tb[0])
+ if (tb[0] == NULL)
return NULL;
- if (rtattr_parse(tb2, TCA_ACT_MAX, RTA_DATA(tb[0]), RTA_PAYLOAD(tb[0]))<0)
+ if (rtattr_parse(tb2, TCA_ACT_MAX, RTA_DATA(tb[0]),
+ RTA_PAYLOAD(tb[0])) < 0)
return NULL;
kind = tb2[TCA_ACT_KIND-1];
struct tc_action_ops *a_o;
struct tc_action a;
int ret = 0;
-
struct tcamsg *t = (struct tcamsg *) NLMSG_DATA(cb->nlh);
char *kind = find_dump_kind(cb->nlh);
- if (NULL == kind) {
+
+ if (kind == NULL) {
printk("tc_dump_action: action bad kind\n");
return 0;
}
a_o = tc_lookup_action_n(kind);
-
- if (NULL == a_o) {
+ if (a_o == NULL) {
printk("failed to find %s\n", kind);
return 0;
}
- memset(&a,0,sizeof(struct tc_action));
+ memset(&a, 0, sizeof(struct tc_action));
a.ops = a_o;
- if (NULL == a_o->walk) {
- printk("tc_dump_action: %s !capable of dumping table\n",kind);
+ if (a_o->walk == NULL) {
+ printk("tc_dump_action: %s !capable of dumping table\n", kind);
goto rtattr_failure;
}
- nlh = NLMSG_PUT(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, cb->nlh->nlmsg_type, sizeof (*t));
+ nlh = NLMSG_PUT(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq,
+ cb->nlh->nlmsg_type, sizeof(*t));
t = NLMSG_DATA(nlh);
t->tca_family = AF_UNSPEC;
RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
ret = a_o->walk(skb, cb, RTM_GETACTION, &a);
- if (0 > ret ) {
+ if (ret < 0)
goto rtattr_failure;
- }
if (ret > 0) {
x->rta_len = skb->tail - (u8 *) x;
ret = skb->len;
- } else {
+ } else
skb_trim(skb, (u8*)x - skb->data);
- }
nlh->nlmsg_len = skb->tail - b;
- if (NETLINK_CB(cb->skb).pid && ret)
+ if (NETLINK_CB(cb->skb).pid && ret)
nlh->nlmsg_flags |= NLM_F_MULTI;
module_put(a_o->owner);
return skb->len;
link_p[RTM_GETACTION-RTM_BASE].dumpit = tc_dump_action;
}
- printk("TC classifier action (bugs to netdev@oss.sgi.com cc hadi@cyberus.ca)\n");
+ printk("TC classifier action (bugs to netdev@oss.sgi.com cc "
+ "hadi@cyberus.ca)\n");
return 0;
}
EXPORT_SYMBOL(tcf_register_action);
EXPORT_SYMBOL(tcf_unregister_action);
-EXPORT_SYMBOL(tcf_action_init_1);
-EXPORT_SYMBOL(tcf_action_init);
-EXPORT_SYMBOL(tcf_action_destroy);
EXPORT_SYMBOL(tcf_action_exec);
-EXPORT_SYMBOL(tcf_action_copy_stats);
-EXPORT_SYMBOL(tcf_action_dump);
EXPORT_SYMBOL(tcf_action_dump_1);
-EXPORT_SYMBOL(tcf_action_dump_old);
/* use generic hash table */
#define MY_TAB_SIZE 16
#define MY_TAB_MASK 15
+
static u32 idx_gen;
static struct tcf_gact *tcf_gact_ht[MY_TAB_SIZE];
-static rwlock_t gact_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(gact_lock);
/* ovewrride the defaults */
-#define tcf_st tcf_gact
-#define tc_st tc_gact
-#define tcf_t_lock gact_lock
-#define tcf_ht tcf_gact_ht
+#define tcf_st tcf_gact
+#define tc_st tc_gact
+#define tcf_t_lock gact_lock
+#define tcf_ht tcf_gact_ht
#define CONFIG_NET_ACT_INIT 1
#include <net/pkt_act.h>
#ifdef CONFIG_GACT_PROB
-typedef int (*g_rand)(struct tcf_gact *p);
-static int
-gact_net_rand(struct tcf_gact *p) {
+static int gact_net_rand(struct tcf_gact *p)
+{
if (net_random()%p->pval)
return p->action;
return p->paction;
}
-static int
-gact_determ(struct tcf_gact *p) {
+static int gact_determ(struct tcf_gact *p)
+{
if (p->bstats.packets%p->pval)
return p->action;
return p->paction;
}
-
-g_rand gact_rand[MAX_RAND]= { NULL,gact_net_rand, gact_determ};
-
+typedef int (*g_rand)(struct tcf_gact *p);
+static g_rand gact_rand[MAX_RAND]= { NULL, gact_net_rand, gact_determ };
#endif
-static int
-tcf_gact_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,int ovr,int bind)
+
+static int tcf_gact_init(struct rtattr *rta, struct rtattr *est,
+ struct tc_action *a, int ovr, int bind)
{
struct rtattr *tb[TCA_GACT_MAX];
- struct tc_gact *parm = NULL;
-#ifdef CONFIG_GACT_PROB
- struct tc_gact_p *p_parm = NULL;
-#endif
- struct tcf_gact *p = NULL;
+ struct tc_gact *parm;
+ struct tcf_gact *p;
int ret = 0;
- int size = sizeof (*p);
- if (rtattr_parse(tb, TCA_GACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta)) < 0)
- return -1;
-
- if (NULL == a || NULL == tb[TCA_GACT_PARMS - 1]) {
- printk("BUG: tcf_gact_init called with NULL params\n");
- return -1;
- }
+ if (rta == NULL || rtattr_parse_nested(tb, TCA_GACT_MAX, rta) < 0)
+ return -EINVAL;
+ if (tb[TCA_GACT_PARMS - 1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_GACT_PARMS - 1]) < sizeof(*parm))
+ return -EINVAL;
parm = RTA_DATA(tb[TCA_GACT_PARMS - 1]);
+
+ if (tb[TCA_GACT_PROB-1] != NULL)
#ifdef CONFIG_GACT_PROB
- if (NULL != tb[TCA_GACT_PROB - 1]) {
- p_parm = RTA_DATA(tb[TCA_GACT_PROB - 1]);
- }
+ if (RTA_PAYLOAD(tb[TCA_GACT_PROB-1]) < sizeof(struct tc_gact_p))
+ return -EINVAL;
+#else
+ return -EOPNOTSUPP;
#endif
- p = tcf_hash_check(parm, a, ovr, bind);
-
- if (NULL == p) {
- p = tcf_hash_create(parm,est,a,size,ovr, bind);
-
- if (NULL == p) {
- return -1;
- } else {
- p->refcnt = 1;
- ret = 1;
- goto override;
+ p = tcf_hash_check(parm->index, a, ovr, bind);
+ if (p == NULL) {
+ p = tcf_hash_create(parm->index, est, a, sizeof(*p), ovr, bind);
+ if (p == NULL)
+ return -ENOMEM;
+ ret = ACT_P_CREATED;
+ } else {
+ if (!ovr) {
+ tcf_hash_release(p, bind);
+ return -EEXIST;
}
}
- if (ovr) {
-override:
- p->action = parm->action;
+ spin_lock_bh(&p->lock);
+ p->action = parm->action;
#ifdef CONFIG_GACT_PROB
- if (NULL != p_parm) {
- p->paction = p_parm->paction;
- p->pval = p_parm->pval;
- p->ptype = p_parm->ptype;
- } else {
- p->paction = p->pval = p->ptype = 0;
- }
-#endif
+ if (tb[TCA_GACT_PROB-1] != NULL) {
+ struct tc_gact_p *p_parm = RTA_DATA(tb[TCA_GACT_PROB-1]);
+ p->paction = p_parm->paction;
+ p->pval = p_parm->pval;
+ p->ptype = p_parm->ptype;
}
-
+#endif
+ spin_unlock_bh(&p->lock);
+ if (ret == ACT_P_CREATED)
+ tcf_hash_insert(p);
return ret;
}
static int
tcf_gact_cleanup(struct tc_action *a, int bind)
{
- struct tcf_gact *p;
- p = PRIV(a,gact);
- if (NULL != p)
+ struct tcf_gact *p = PRIV(a, gact);
+
+ if (p != NULL)
return tcf_hash_release(p, bind);
return 0;
}
static int
tcf_gact(struct sk_buff **pskb, struct tc_action *a)
{
- struct tcf_gact *p;
+ struct tcf_gact *p = PRIV(a, gact);
struct sk_buff *skb = *pskb;
int action = TC_ACT_SHOT;
- p = PRIV(a,gact);
-
- if (NULL == p) {
- if (net_ratelimit())
- printk("BUG: tcf_gact called with NULL params\n");
- return -1;
- }
-
spin_lock(&p->lock);
#ifdef CONFIG_GACT_PROB
- if (p->ptype && NULL != gact_rand[p->ptype])
+ if (p->ptype && gact_rand[p->ptype] != NULL)
action = gact_rand[p->ptype](p);
else
action = p->action;
#endif
p->bstats.bytes += skb->len;
p->bstats.packets++;
- if (TC_ACT_SHOT == action)
+ if (action == TC_ACT_SHOT)
p->qstats.drops++;
p->tm.lastuse = jiffies;
spin_unlock(&p->lock);
{
unsigned char *b = skb->tail;
struct tc_gact opt;
-#ifdef CONFIG_GACT_PROB
- struct tc_gact_p p_opt;
-#endif
- struct tcf_gact *p;
+ struct tcf_gact *p = PRIV(a, gact);
struct tcf_t t;
- p = PRIV(a,gact);
- if (NULL == p) {
- printk("BUG: tcf_gact_dump called with NULL params\n");
- goto rtattr_failure;
- }
-
opt.index = p->index;
opt.refcnt = p->refcnt - ref;
opt.bindcnt = p->bindcnt - bind;
opt.action = p->action;
- RTA_PUT(skb, TCA_GACT_PARMS, sizeof (opt), &opt);
+ RTA_PUT(skb, TCA_GACT_PARMS, sizeof(opt), &opt);
#ifdef CONFIG_GACT_PROB
if (p->ptype) {
+ struct tc_gact_p p_opt;
p_opt.paction = p->paction;
p_opt.pval = p->pval;
p_opt.ptype = p->ptype;
- RTA_PUT(skb, TCA_GACT_PROB, sizeof (p_opt), &p_opt);
- }
+ RTA_PUT(skb, TCA_GACT_PROB, sizeof(p_opt), &p_opt);
+ }
#endif
t.install = jiffies_to_clock_t(jiffies - p->tm.install);
t.lastuse = jiffies_to_clock_t(jiffies - p->tm.lastuse);
t.expires = jiffies_to_clock_t(p->tm.expires);
- RTA_PUT(skb, TCA_GACT_TM, sizeof (t), &t);
+ RTA_PUT(skb, TCA_GACT_TM, sizeof(t), &t);
return skb->len;
rtattr_failure:
}
static struct tc_action_ops act_gact_ops = {
- .next = NULL,
.kind = "gact",
.type = TCA_ACT_GACT,
.capab = TCA_CAP_NONE,
#include <linux/module.h>
#include <linux/init.h>
#include <linux/proc_fs.h>
+#include <linux/kmod.h>
#include <net/sock.h>
#include <net/pkt_sched.h>
#include <linux/tc_act/tc_ipt.h>
static u32 idx_gen;
static struct tcf_ipt *tcf_ipt_ht[MY_TAB_SIZE];
/* ipt hash table lock */
-static rwlock_t ipt_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(ipt_lock);
/* ovewrride the defaults */
-#define tcf_st tcf_ipt
-#define tcf_t_lock ipt_lock
-#define tcf_ht tcf_ipt_ht
+#define tcf_st tcf_ipt
+#define tcf_t_lock ipt_lock
+#define tcf_ht tcf_ipt_ht
+#define CONFIG_NET_ACT_INIT
#include <net/pkt_act.h>
-static inline int
-init_targ(struct tcf_ipt *p)
+static int
+ipt_init_target(struct ipt_entry_target *t, char *table, unsigned int hook)
{
struct ipt_target *target;
int ret = 0;
- struct ipt_entry_target *t = p->t;
- target = __ipt_find_target_lock(t->u.user.name, &ret);
- if (!target) {
- printk("init_targ: Failed to find %s\n", t->u.user.name);
- return -1;
- }
+ target = ipt_find_target(t->u.user.name, t->u.user.revision);
+ if (!target)
+ return -ENOENT;
- DPRINTK("init_targ: found %s\n", target->name);
- /* we really need proper ref counting
- seems to be only needed for modules?? Talk to laforge */
-/* if (target->me)
- __MOD_INC_USE_COUNT(target->me);
-*/
+ DPRINTK("ipt_init_target: found %s\n", target->name);
t->u.kernel.target = target;
- __ipt_mutex_up();
-
if (t->u.kernel.target->checkentry
- && !t->u.kernel.target->checkentry(p->tname, NULL, t->data,
- t->u.target_size
- - sizeof (*t), p->hook)) {
-/* if (t->u.kernel.target->me)
- __MOD_DEC_USE_COUNT(t->u.kernel.target->me);
-*/
- DPRINTK("ip_tables: check failed for `%s'.\n",
+ && !t->u.kernel.target->checkentry(table, NULL, t->data,
+ t->u.target_size - sizeof(*t),
+ hook)) {
+ DPRINTK("ipt_init_target: check failed for `%s'.\n",
t->u.kernel.target->name);
+ module_put(t->u.kernel.target->me);
ret = -EINVAL;
}
return ret;
}
+static void
+ipt_destroy_target(struct ipt_entry_target *t)
+{
+ if (t->u.kernel.target->destroy)
+ t->u.kernel.target->destroy(t->data,
+ t->u.target_size - sizeof(*t));
+ module_put(t->u.kernel.target->me);
+}
+
static int
-tcf_ipt_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a, int ovr, int bind)
+tcf_ipt_release(struct tcf_ipt *p, int bind)
{
- struct ipt_entry_target *t;
- unsigned h;
- struct rtattr *tb[TCA_IPT_MAX];
- struct tcf_ipt *p;
int ret = 0;
- u32 index = 0;
- u32 hook = 0;
-
- if (NULL == a || NULL == rta ||
- (rtattr_parse(tb, TCA_IPT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta)) <
- 0)) {
- return -1;
- }
-
-
- if (tb[TCA_IPT_INDEX - 1]) {
- index = *(u32 *) RTA_DATA(tb[TCA_IPT_INDEX - 1]);
- DPRINTK("ipt index %d\n", index);
- }
-
- if (index && (p = tcf_hash_lookup(index)) != NULL) {
- a->priv = (void *) p;
- spin_lock(&p->lock);
- if (bind) {
- p->bindcnt += 1;
- p->refcnt += 1;
+ if (p) {
+ if (bind)
+ p->bindcnt--;
+ p->refcnt--;
+ if (p->bindcnt <= 0 && p->refcnt <= 0) {
+ ipt_destroy_target(p->t);
+ kfree(p->tname);
+ kfree(p->t);
+ tcf_hash_destroy(p);
+ ret = ACT_P_DELETED;
}
- if (ovr) {
- goto override;
- }
- spin_unlock(&p->lock);
- return ret;
}
+ return ret;
+}
- if (NULL == tb[TCA_IPT_TARG - 1] || NULL == tb[TCA_IPT_HOOK - 1]) {
- return -1;
- }
+static int
+tcf_ipt_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,
+ int ovr, int bind)
+{
+ struct rtattr *tb[TCA_IPT_MAX];
+ struct tcf_ipt *p;
+ struct ipt_entry_target *td, *t;
+ char *tname;
+ int ret = 0, err;
+ u32 hook = 0;
+ u32 index = 0;
- p = kmalloc(sizeof (*p), GFP_KERNEL);
- if (p == NULL)
- return -1;
-
- memset(p, 0, sizeof (*p));
- p->refcnt = 1;
- ret = 1;
- spin_lock_init(&p->lock);
- p->stats_lock = &p->lock;
- if (bind)
- p->bindcnt = 1;
-
-override:
- hook = *(u32 *) RTA_DATA(tb[TCA_IPT_HOOK - 1]);
-
- t = (struct ipt_entry_target *) RTA_DATA(tb[TCA_IPT_TARG - 1]);
-
- p->t = kmalloc(t->u.target_size, GFP_KERNEL);
- if (p->t == NULL) {
- if (ovr) {
- printk("ipt policy messed up \n");
- spin_unlock(&p->lock);
- return -1;
+ if (rta == NULL || rtattr_parse_nested(tb, TCA_IPT_MAX, rta) < 0)
+ return -EINVAL;
+
+ if (tb[TCA_IPT_HOOK-1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_IPT_HOOK-1]) < sizeof(u32))
+ return -EINVAL;
+ if (tb[TCA_IPT_TARG-1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_IPT_TARG-1]) < sizeof(*t))
+ return -EINVAL;
+ td = (struct ipt_entry_target *)RTA_DATA(tb[TCA_IPT_TARG-1]);
+ if (RTA_PAYLOAD(tb[TCA_IPT_TARG-1]) < td->u.target_size)
+ return -EINVAL;
+
+ if (tb[TCA_IPT_INDEX-1] != NULL &&
+ RTA_PAYLOAD(tb[TCA_IPT_INDEX-1]) >= sizeof(u32))
+ index = *(u32 *)RTA_DATA(tb[TCA_IPT_INDEX-1]);
+
+ p = tcf_hash_check(index, a, ovr, bind);
+ if (p == NULL) {
+ p = tcf_hash_create(index, est, a, sizeof(*p), ovr, bind);
+ if (p == NULL)
+ return -ENOMEM;
+ ret = ACT_P_CREATED;
+ } else {
+ if (!ovr) {
+ tcf_ipt_release(p, bind);
+ return -EEXIST;
}
- kfree(p);
- return -1;
}
- memcpy(p->t, RTA_DATA(tb[TCA_IPT_TARG - 1]), t->u.target_size);
- DPRINTK(" target NAME %s size %d data[0] %x data[1] %x\n",
- t->u.user.name, t->u.target_size, t->data[0], t->data[1]);
+ hook = *(u32 *)RTA_DATA(tb[TCA_IPT_HOOK-1]);
- p->tname = kmalloc(IFNAMSIZ, GFP_KERNEL);
+ err = -ENOMEM;
+ tname = kmalloc(IFNAMSIZ, GFP_KERNEL);
+ if (tname == NULL)
+ goto err1;
+ if (tb[TCA_IPT_TABLE - 1] == NULL ||
+ rtattr_strlcpy(tname, tb[TCA_IPT_TABLE-1], IFNAMSIZ) >= IFNAMSIZ)
+ strcpy(tname, "mangle");
- if (p->tname == NULL) {
- if (ovr) {
- printk("ipt policy messed up 2 \n");
- spin_unlock(&p->lock);
- return -1;
- }
- kfree(p->t);
- kfree(p);
- return -1;
- } else {
- int csize = IFNAMSIZ - 1;
-
- memset(p->tname, 0, IFNAMSIZ);
- if (tb[TCA_IPT_TABLE - 1]) {
- if (strlen((char *) RTA_DATA(tb[TCA_IPT_TABLE - 1])) <
- csize)
- csize = strlen(RTA_DATA(tb[TCA_IPT_TABLE - 1]));
- strncpy(p->tname, RTA_DATA(tb[TCA_IPT_TABLE - 1]),
- csize);
- DPRINTK("table name %s\n", p->tname);
- } else {
- strncpy(p->tname, "mangle", 1 + strlen("mangle"));
- }
- }
+ t = kmalloc(td->u.target_size, GFP_KERNEL);
+ if (t == NULL)
+ goto err2;
+ memcpy(t, td, td->u.target_size);
- if (0 > init_targ(p)) {
- if (ovr) {
- printk("ipt policy messed up 2 \n");
- spin_unlock(&p->lock);
- return -1;
- }
+ if ((err = ipt_init_target(t, tname, hook)) < 0)
+ goto err3;
+
+ spin_lock_bh(&p->lock);
+ if (ret != ACT_P_CREATED) {
+ ipt_destroy_target(p->t);
kfree(p->tname);
kfree(p->t);
- kfree(p);
- return -1;
- }
-
- if (ovr) {
- spin_unlock(&p->lock);
- return -1;
}
-
- p->index = index ? : tcf_hash_new_index();
-
- p->tm.lastuse = jiffies;
- /*
- p->tm.expires = jiffies;
- */
- p->tm.install = jiffies;
-#ifdef CONFIG_NET_ESTIMATOR
- if (est)
- gen_new_estimator(&p->bstats, &p->rate_est, p->stats_lock, est);
-#endif
- h = tcf_hash(p->index);
- write_lock_bh(&ipt_lock);
- p->next = tcf_ipt_ht[h];
- tcf_ipt_ht[h] = p;
- write_unlock_bh(&ipt_lock);
- a->priv = (void *) p;
+ p->tname = tname;
+ p->t = t;
+ p->hook = hook;
+ spin_unlock_bh(&p->lock);
+ if (ret == ACT_P_CREATED)
+ tcf_hash_insert(p);
return ret;
+err3:
+ kfree(t);
+err2:
+ kfree(tname);
+err1:
+ kfree(p);
+ return err;
}
static int
tcf_ipt_cleanup(struct tc_action *a, int bind)
{
- struct tcf_ipt *p;
- p = PRIV(a,ipt);
- if (NULL != p)
- return tcf_hash_release(p, bind);
- return 0;
+ struct tcf_ipt *p = PRIV(a, ipt);
+ return tcf_ipt_release(p, bind);
}
static int
tcf_ipt(struct sk_buff **pskb, struct tc_action *a)
{
int ret = 0, result = 0;
- struct tcf_ipt *p;
+ struct tcf_ipt *p = PRIV(a, ipt);
struct sk_buff *skb = *pskb;
- p = PRIV(a,ipt);
-
- if (NULL == p || NULL == skb) {
- return -1;
+ if (skb_cloned(skb)) {
+ if (pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ return TC_ACT_UNSPEC;
}
spin_lock(&p->lock);
p->bstats.bytes += skb->len;
p->bstats.packets++;
- if (skb_cloned(skb) ) {
- if (pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
- return -1;
- }
- }
/* yes, we have to worry about both in and out dev
worry later - danger - this API seems to have changed
from earlier kernels */
ret = p->t->u.kernel.target->target(&skb, skb->dev, NULL,
- p->hook, p->t->data, (void *)NULL);
+ p->hook, p->t->data, NULL);
switch (ret) {
case NF_ACCEPT:
result = TC_ACT_OK;
struct tcf_t tm;
struct tc_cnt c;
unsigned char *b = skb->tail;
+ struct tcf_ipt *p = PRIV(a, ipt);
- struct tcf_ipt *p;
-
- p = PRIV(a,ipt);
- if (NULL == p) {
- printk("BUG: tcf_ipt_dump called with NULL params\n");
- goto rtattr_failure;
- }
/* for simple targets kernel size == user size
** user name = target name
** for foolproof you need to not assume this
*/
t = kmalloc(p->t->u.user.target_size, GFP_ATOMIC);
-
- if (NULL == t)
+ if (t == NULL)
goto rtattr_failure;
c.bindcnt = p->bindcnt - bind;
DPRINTK("\ttcf_ipt_dump tablename %s length %d\n", p->tname,
strlen(p->tname));
- DPRINTK
- ("\tdump target name %s size %d size user %d data[0] %x data[1] %x\n",
- p->t->u.kernel.target->name, p->t->u.target_size, p->t->u.user.target_size,
- p->t->data[0], p->t->data[1]);
+ DPRINTK("\tdump target name %s size %d size user %d "
+ "data[0] %x data[1] %x\n", p->t->u.kernel.target->name,
+ p->t->u.target_size, p->t->u.user.target_size,
+ p->t->data[0], p->t->data[1]);
RTA_PUT(skb, TCA_IPT_TARG, p->t->u.user.target_size, t);
RTA_PUT(skb, TCA_IPT_INDEX, 4, &p->index);
RTA_PUT(skb, TCA_IPT_HOOK, 4, &p->hook);
}
static struct tc_action_ops act_ipt_ops = {
- .next = NULL,
.kind = "ipt",
.type = TCA_ACT_IPT,
.capab = TCA_CAP_NONE,
#define MY_TAB_MASK (MY_TAB_SIZE - 1)
static u32 idx_gen;
static struct tcf_mirred *tcf_mirred_ht[MY_TAB_SIZE];
-static rwlock_t mirred_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(mirred_lock);
/* ovewrride the defaults */
-#define tcf_st tcf_mirred
-#define tc_st tc_mirred
-#define tcf_t_lock mirred_lock
-#define tcf_ht tcf_mirred_ht
+#define tcf_st tcf_mirred
+#define tc_st tc_mirred
+#define tcf_t_lock mirred_lock
+#define tcf_ht tcf_mirred_ht
#define CONFIG_NET_ACT_INIT 1
#include <net/pkt_act.h>
tcf_mirred_release(struct tcf_mirred *p, int bind)
{
if (p) {
- if (bind) {
+ if (bind)
p->bindcnt--;
- }
-
p->refcnt--;
if(!p->bindcnt && p->refcnt <= 0) {
dev_put(p->dev);
return 1;
}
}
-
return 0;
}
static int
-tcf_mirred_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,int ovr, int bind)
+tcf_mirred_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,
+ int ovr, int bind)
{
struct rtattr *tb[TCA_MIRRED_MAX];
struct tc_mirred *parm;
struct tcf_mirred *p;
struct net_device *dev = NULL;
- int size = sizeof (*p), new = 0;
-
-
- if (rtattr_parse(tb, TCA_MIRRED_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta)) < 0) {
- DPRINTK("tcf_mirred_init BUG in user space couldnt parse properly\n");
- return -1;
- }
+ int ret = 0;
+ int ok_push = 0;
- if (NULL == a || NULL == tb[TCA_MIRRED_PARMS - 1]) {
- DPRINTK("BUG: tcf_mirred_init called with NULL params\n");
- return -1;
- }
+ if (rta == NULL || rtattr_parse_nested(tb, TCA_MIRRED_MAX, rta) < 0)
+ return -EINVAL;
- parm = RTA_DATA(tb[TCA_MIRRED_PARMS - 1]);
-
- p = tcf_hash_check(parm, a, ovr, bind);
- if (NULL == p) { /* new */
- p = tcf_hash_create(parm,est,a,size,ovr,bind);
- new = 1;
- if (NULL == p)
- return -1;
- }
+ if (tb[TCA_MIRRED_PARMS-1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_MIRRED_PARMS-1]) < sizeof(*parm))
+ return -EINVAL;
+ parm = RTA_DATA(tb[TCA_MIRRED_PARMS-1]);
if (parm->ifindex) {
- dev = dev_get_by_index(parm->ifindex);
- if (NULL == dev) {
- printk("BUG: tcf_mirred_init called with bad device\n");
- return -1;
- }
+ dev = __dev_get_by_index(parm->ifindex);
+ if (dev == NULL)
+ return -ENODEV;
switch (dev->type) {
case ARPHRD_TUNNEL:
case ARPHRD_TUNNEL6:
case ARPHRD_IPGRE:
case ARPHRD_VOID:
case ARPHRD_NONE:
- p->ok_push = 0;
+ ok_push = 0;
break;
default:
- p->ok_push = 1;
+ ok_push = 1;
break;
}
- } else {
- if (new) {
- kfree(p);
- return -1;
- }
}
- if (new || ovr) {
- spin_lock(&p->lock);
- p->action = parm->action;
- p->eaction = parm->eaction;
- if (parm->ifindex) {
- p->ifindex = parm->ifindex;
- if (ovr)
- dev_put(p->dev);
- p->dev = dev;
+ p = tcf_hash_check(parm->index, a, ovr, bind);
+ if (p == NULL) {
+ if (!parm->ifindex)
+ return -EINVAL;
+ p = tcf_hash_create(parm->index, est, a, sizeof(*p), ovr, bind);
+ if (p == NULL)
+ return -ENOMEM;
+ ret = ACT_P_CREATED;
+ } else {
+ if (!ovr) {
+ tcf_mirred_release(p, bind);
+ return -EEXIST;
}
- spin_unlock(&p->lock);
}
-
- DPRINTK(" tcf_mirred_init index %d action %d eaction %d device %s ifndex %d\n",parm->index,parm->action,parm->eaction,dev->name,parm->ifindex);
- return new;
-
+ spin_lock_bh(&p->lock);
+ p->action = parm->action;
+ p->eaction = parm->eaction;
+ if (parm->ifindex) {
+ p->ifindex = parm->ifindex;
+ if (ret != ACT_P_CREATED)
+ dev_put(p->dev);
+ p->dev = dev;
+ dev_hold(dev);
+ p->ok_push = ok_push;
+ }
+ spin_unlock_bh(&p->lock);
+ if (ret == ACT_P_CREATED)
+ tcf_hash_insert(p);
+
+ DPRINTK("tcf_mirred_init index %d action %d eaction %d device %s "
+ "ifindex %d\n", parm->index, parm->action, parm->eaction,
+ dev->name, parm->ifindex);
+ return ret;
}
static int
tcf_mirred_cleanup(struct tc_action *a, int bind)
{
- struct tcf_mirred *p;
- p = PRIV(a,mirred);
- if (NULL != p)
+ struct tcf_mirred *p = PRIV(a, mirred);
+
+ if (p != NULL)
return tcf_mirred_release(p, bind);
return 0;
}
static int
tcf_mirred(struct sk_buff **pskb, struct tc_action *a)
{
- struct tcf_mirred *p;
+ struct tcf_mirred *p = PRIV(a, mirred);
struct net_device *dev;
struct sk_buff *skb2 = NULL;
struct sk_buff *skb = *pskb;
- __u32 at = G_TC_AT(skb->tc_verd);
-
- if (NULL == a) {
- if (net_ratelimit())
- printk("BUG: tcf_mirred called with NULL action!\n");
- return -1;
- }
-
- p = PRIV(a,mirred);
-
- if (NULL == p) {
- if (net_ratelimit())
- printk("BUG: tcf_mirred called with NULL params\n");
- return -1;
- }
+ u32 at = G_TC_AT(skb->tc_verd);
spin_lock(&p->lock);
- dev = p->dev;
+ dev = p->dev;
p->tm.lastuse = jiffies;
- if (NULL == dev || !(dev->flags&IFF_UP) ) {
+ if (!(dev->flags&IFF_UP) ) {
if (net_ratelimit())
printk("mirred to Houston: device %s is gone!\n",
- dev?dev->name:"");
+ dev->name);
bad_mirred:
- if (NULL != skb2)
+ if (skb2 != NULL)
kfree_skb(skb2);
p->qstats.overlimits++;
p->bstats.bytes += skb->len;
p->bstats.packets++;
spin_unlock(&p->lock);
/* should we be asking for packet to be dropped?
- * may make sense for redirect case only
+ * may make sense for redirect case only
*/
return TC_ACT_SHOT;
- }
+ }
skb2 = skb_clone(skb, GFP_ATOMIC);
- if (skb2 == NULL) {
+ if (skb2 == NULL)
goto bad_mirred;
- }
- if (TCA_EGRESS_MIRROR != p->eaction &&
- TCA_EGRESS_REDIR != p->eaction) {
+ if (p->eaction != TCA_EGRESS_MIRROR && p->eaction != TCA_EGRESS_REDIR) {
if (net_ratelimit())
- printk("tcf_mirred unknown action %d\n",p->eaction);
+ printk("tcf_mirred unknown action %d\n", p->eaction);
goto bad_mirred;
}
p->bstats.bytes += skb2->len;
p->bstats.packets++;
- if ( !(at & AT_EGRESS)) {
- if (p->ok_push) {
+ if (!(at & AT_EGRESS))
+ if (p->ok_push)
skb_push(skb2, skb2->dev->hard_header_len);
- }
- }
/* mirror is always swallowed */
- if (TCA_EGRESS_MIRROR != p->eaction)
- skb2->tc_verd = SET_TC_FROM(skb2->tc_verd,at);
+ if (p->eaction != TCA_EGRESS_MIRROR)
+ skb2->tc_verd = SET_TC_FROM(skb2->tc_verd, at);
skb2->dev = dev;
skb2->input_dev = skb->dev;
}
static int
-tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a,int bind, int ref)
+tcf_mirred_dump(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
{
unsigned char *b = skb->tail;
struct tc_mirred opt;
- struct tcf_mirred *p;
+ struct tcf_mirred *p = PRIV(a, mirred);
struct tcf_t t;
- p = PRIV(a,mirred);
- if (NULL == p) {
- printk("BUG: tcf_mirred_dump called with NULL params\n");
- goto rtattr_failure;
- }
-
opt.index = p->index;
opt.action = p->action;
opt.refcnt = p->refcnt - ref;
opt.bindcnt = p->bindcnt - bind;
opt.eaction = p->eaction;
opt.ifindex = p->ifindex;
- DPRINTK(" tcf_mirred_dump index %d action %d eaction %d ifndex %d\n",p->index,p->action,p->eaction,p->ifindex);
- RTA_PUT(skb, TCA_MIRRED_PARMS, sizeof (opt), &opt);
+ DPRINTK("tcf_mirred_dump index %d action %d eaction %d ifindex %d\n",
+ p->index, p->action, p->eaction, p->ifindex);
+ RTA_PUT(skb, TCA_MIRRED_PARMS, sizeof(opt), &opt);
t.install = jiffies_to_clock_t(jiffies - p->tm.install);
t.lastuse = jiffies_to_clock_t(jiffies - p->tm.lastuse);
t.expires = jiffies_to_clock_t(p->tm.expires);
- RTA_PUT(skb, TCA_MIRRED_TM, sizeof (t), &t);
+ RTA_PUT(skb, TCA_MIRRED_TM, sizeof(t), &t);
return skb->len;
rtattr_failure:
}
static struct tc_action_ops act_mirred_ops = {
- .next = NULL,
.kind = "mirred",
.type = TCA_ACT_MIRRED,
.capab = TCA_CAP_NONE,
MODULE_DESCRIPTION("Device Mirror/redirect actions");
MODULE_LICENSE("GPL");
-
static int __init
mirred_init_module(void)
{
module_init(mirred_init_module);
module_exit(mirred_cleanup_module);
-
#define MY_TAB_MASK 15
static u32 idx_gen;
static struct tcf_pedit *tcf_pedit_ht[MY_TAB_SIZE];
-static rwlock_t pedit_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(pedit_lock);
-#define tcf_st tcf_pedit
-#define tc_st tc_pedit
-#define tcf_t_lock pedit_lock
-#define tcf_ht tcf_pedit_ht
+#define tcf_st tcf_pedit
+#define tc_st tc_pedit
+#define tcf_t_lock pedit_lock
+#define tcf_ht tcf_pedit_ht
#define CONFIG_NET_ACT_INIT 1
#include <net/pkt_act.h>
-
static int
-tcf_pedit_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,int ovr, int bind)
+tcf_pedit_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a,
+ int ovr, int bind)
{
struct rtattr *tb[TCA_PEDIT_MAX];
struct tc_pedit *parm;
- int size = 0;
int ret = 0;
- struct tcf_pedit *p = NULL;
-
- if (rtattr_parse(tb, TCA_PEDIT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta)) < 0)
- return -1;
-
- if (NULL == a || NULL == tb[TCA_PEDIT_PARMS - 1]) {
- printk("BUG: tcf_pedit_init called with NULL params\n");
- return -1;
- }
-
- parm = RTA_DATA(tb[TCA_PEDIT_PARMS - 1]);
-
- p = tcf_hash_check(parm, a, ovr, bind);
-
- if (NULL == p) { /* new */
-
+ struct tcf_pedit *p;
+ struct tc_pedit_key *keys = NULL;
+ int ksize;
+
+ if (rta == NULL || rtattr_parse_nested(tb, TCA_PEDIT_MAX, rta) < 0)
+ return -EINVAL;
+
+ if (tb[TCA_PEDIT_PARMS - 1] == NULL ||
+ RTA_PAYLOAD(tb[TCA_PEDIT_PARMS-1]) < sizeof(*parm))
+ return -EINVAL;
+ parm = RTA_DATA(tb[TCA_PEDIT_PARMS-1]);
+ ksize = parm->nkeys * sizeof(struct tc_pedit_key);
+ if (RTA_PAYLOAD(tb[TCA_PEDIT_PARMS-1]) < sizeof(*parm) + ksize)
+ return -EINVAL;
+
+ p = tcf_hash_check(parm->index, a, ovr, bind);
+ if (p == NULL) {
if (!parm->nkeys)
- return -1;
-
- size = sizeof (*p)+ (parm->nkeys*sizeof(struct tc_pedit_key));
-
- p = tcf_hash_create(parm,est,a,size,ovr,bind);
-
- if (NULL == p)
- return -1;
- ret = 1;
- goto override;
- }
+ return -EINVAL;
+ p = tcf_hash_create(parm->index, est, a, sizeof(*p), ovr, bind);
+ if (p == NULL)
+ return -ENOMEM;
+ keys = kmalloc(ksize, GFP_KERNEL);
+ if (keys == NULL) {
+ kfree(p);
+ return -ENOMEM;
+ }
+ ret = ACT_P_CREATED;
+ } else {
+ if (!ovr) {
+ tcf_hash_release(p, bind);
+ return -EEXIST;
+ }
+ if (p->nkeys && p->nkeys != parm->nkeys) {
+ keys = kmalloc(ksize, GFP_KERNEL);
+ if (keys == NULL)
+ return -ENOMEM;
+ }
+ }
- if (ovr) {
-override:
- p->flags = parm->flags;
+ spin_lock_bh(&p->lock);
+ p->flags = parm->flags;
+ p->action = parm->action;
+ if (keys) {
+ kfree(p->keys);
+ p->keys = keys;
p->nkeys = parm->nkeys;
- p->action = parm->action;
- memcpy(p->keys,parm->keys,parm->nkeys*(sizeof(struct tc_pedit_key)));
}
-
+ memcpy(p->keys, parm->keys, ksize);
+ spin_unlock_bh(&p->lock);
+ if (ret == ACT_P_CREATED)
+ tcf_hash_insert(p);
return ret;
}
static int
tcf_pedit_cleanup(struct tc_action *a, int bind)
{
- struct tcf_pedit *p;
- p = PRIV(a,pedit);
- if (NULL != p)
- return tcf_hash_release(p, bind);
+ struct tcf_pedit *p = PRIV(a, pedit);
+
+ if (p != NULL) {
+ struct tc_pedit_key *keys = p->keys;
+ if (tcf_hash_release(p, bind)) {
+ kfree(keys);
+ return 1;
+ }
+ }
return 0;
}
-/*
-**
-*/
static int
tcf_pedit(struct sk_buff **pskb, struct tc_action *a)
{
- struct tcf_pedit *p;
+ struct tcf_pedit *p = PRIV(a, pedit);
struct sk_buff *skb = *pskb;
int i, munged = 0;
u8 *pptr;
- p = PRIV(a,pedit);
-
- if (NULL == p) {
- printk("BUG: tcf_pedit called with NULL params\n");
- return -1; /* change to something symbolic */
- }
-
if (!(skb->tc_verd & TC_OK2MUNGE)) {
/* should we set skb->cloned? */
if (pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
p->tm.lastuse = jiffies;
- if (0 < p->nkeys) {
+ if (p->nkeys > 0) {
struct tc_pedit_key *tkey = p->keys;
for (i = p->nkeys; i > 0; i--, tkey++) {
- u32 *ptr ;
-
+ u32 *ptr;
int offset = tkey->off;
+
if (tkey->offmask) {
if (skb->len > tkey->at) {
- char *j = pptr+tkey->at;
- offset +=((*j&tkey->offmask)>>tkey->shift);
+ char *j = pptr + tkey->at;
+ offset += ((*j & tkey->offmask) >>
+ tkey->shift);
} else {
goto bad;
}
printk("offset must be on 32 bit boundaries\n");
goto bad;
}
-
if (skb->len < 0 || (offset > 0 && offset > skb->len)) {
printk("offset %d cant exceed pkt length %d\n",
- offset, skb->len);
+ offset, skb->len);
goto bad;
}
-
ptr = (u32 *)(pptr+offset);
/* just do it, baby */
*ptr = ((*ptr & tkey->mask) ^ tkey->val);
{
unsigned char *b = skb->tail;
struct tc_pedit *opt;
- struct tcf_pedit *p;
+ struct tcf_pedit *p = PRIV(a, pedit);
struct tcf_t t;
int s;
+ s = sizeof(*opt) + p->nkeys * sizeof(struct tc_pedit_key);
- p = PRIV(a,pedit);
-
- if (NULL == p) {
- printk("BUG: tcf_pedit_dump called with NULL params\n");
- goto rtattr_failure;
- }
-
- s = sizeof (*opt)+(p->nkeys*sizeof(struct tc_pedit_key));
-
- /* netlink spinlocks held above us - must use ATOMIC
- * */
+ /* netlink spinlocks held above us - must use ATOMIC */
opt = kmalloc(s, GFP_ATOMIC);
if (opt == NULL)
return -ENOBUFS;
-
memset(opt, 0, s);
- memcpy(opt->keys,p->keys,p->nkeys*(sizeof(struct tc_pedit_key)));
+ memcpy(opt->keys, p->keys, p->nkeys * sizeof(struct tc_pedit_key));
opt->index = p->index;
opt->nkeys = p->nkeys;
opt->flags = p->flags;
(unsigned int)key->off,
(unsigned int)key->val,
(unsigned int)key->mask);
- }
- }
+ }
+ }
#endif
RTA_PUT(skb, TCA_PEDIT_PARMS, s, opt);
t.install = jiffies_to_clock_t(jiffies - p->tm.install);
t.lastuse = jiffies_to_clock_t(jiffies - p->tm.lastuse);
t.expires = jiffies_to_clock_t(p->tm.expires);
- RTA_PUT(skb, TCA_PEDIT_TM, sizeof (t), &t);
+ RTA_PUT(skb, TCA_PEDIT_TM, sizeof(t), &t);
return skb->len;
rtattr_failure:
#include <linux/rtnetlink.h>
#include <net/pkt_sched.h>
#include <net/dsfield.h>
+#include <net/inet_ecn.h>
#include <asm/byteorder.h>
"arg 0x%lx\n",sch,p,classid,parent,*arg);
if (*arg > p->indices)
return -ENOENT;
- if (!opt || rtattr_parse(tb, TCA_DSMARK_MAX, RTA_DATA(opt),
- RTA_PAYLOAD(opt)))
+ if (!opt || rtattr_parse_nested(tb, TCA_DSMARK_MAX, opt))
return -EINVAL;
if (tb[TCA_DSMARK_MASK-1]) {
if (!RTA_PAYLOAD(tb[TCA_DSMARK_MASK-1]))
/* FIXME: Safe with non-linear skbs? --RR */
switch (skb->protocol) {
case __constant_htons(ETH_P_IP):
- skb->tc_index = ipv4_get_dsfield(skb->nh.iph);
+ skb->tc_index = ipv4_get_dsfield(skb->nh.iph)
+ & ~INET_ECN_MASK;
break;
case __constant_htons(ETH_P_IPV6):
- skb->tc_index = ipv6_get_dsfield(skb->nh.ipv6h);
+ skb->tc_index = ipv6_get_dsfield(skb->nh.ipv6h)
+ & ~INET_ECN_MASK;
break;
default:
skb->tc_index = 0;
}
-int dsmark_init(struct Qdisc *sch,struct rtattr *opt)
+static int dsmark_init(struct Qdisc *sch,struct rtattr *opt)
{
struct dsmark_qdisc_data *p = PRIV(sch);
struct rtattr *tb[TCA_DSMARK_MAX];
if (q->loss && q->loss >= get_crandom(&q->loss_cor)) {
pr_debug("netem_enqueue: random loss\n");
sch->qstats.drops++;
+ kfree_skb(skb);
return 0; /* lie about loss so TCP doesn't know */
}
return error;
}
-/* Create a new SCTP_bind_addr from nothing. */
-struct sctp_bind_addr *sctp_bind_addr_new(int gfp)
-{
- struct sctp_bind_addr *retval;
-
- retval = t_new(struct sctp_bind_addr, gfp);
- if (!retval)
- goto nomem;
-
- sctp_bind_addr_init(retval, 0);
- retval->malloced = 1;
- SCTP_DBG_OBJCNT_INC(bind_addr);
-
-nomem:
- return retval;
-}
-
/* Initialize the SCTP_bind_addr structure for either an endpoint or
* an association.
*/
/* Does this contain a specified address? Allow wildcarding. */
int sctp_bind_addr_match(struct sctp_bind_addr *bp,
const union sctp_addr *addr,
- struct sctp_opt *opt)
+ struct sctp_sock *opt)
{
struct sctp_sockaddr_entry *laddr;
struct list_head *pos;
union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr *bp,
const union sctp_addr *addrs,
int addrcnt,
- struct sctp_opt *opt)
+ struct sctp_sock *opt)
{
struct sctp_sockaddr_entry *laddr;
union sctp_addr *addr;
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
-/* Create a new sctp_command_sequence. */
-sctp_cmd_seq_t *sctp_new_cmd_seq(int gfp)
-{
- sctp_cmd_seq_t *retval = t_new(sctp_cmd_seq_t, gfp);
-
- if (retval)
- sctp_init_cmd_seq(retval);
-
- return retval;
-}
-
/* Initialize a block of memory as a command sequence. */
int sctp_init_cmd_seq(sctp_cmd_seq_t *seq)
{
return 0;
}
-/* Rewind an sctp_cmd_seq_t to iterate from the start. */
-int sctp_rewind_sequence(sctp_cmd_seq_t *seq)
-{
- seq->next_cmd = 0;
- return 1; /* We always succeed. */
-}
-
/* Return the next command structure in a sctp_cmd_seq.
* Returns NULL at the end of the sequence.
*/
return retval;
}
-/* Dispose of a command sequence. */
-void sctp_free_cmd_seq(sctp_cmd_seq_t *seq)
-{
- kfree(seq);
-}
return "unknown chunk";
}
-/* These are printable form of variable-length parameters. */
-const char *sctp_param_tbl[SCTP_PARAM_ECN_CAPABLE + 1] = {
- "",
- "PARAM_HEARTBEAT_INFO",
- "",
- "",
- "",
- "PARAM_IPV4_ADDRESS",
- "PARAM_IPV6_ADDRESS",
- "PARAM_STATE_COOKIE",
- "PARAM_UNRECOGNIZED_PARAMETERS",
- "PARAM_COOKIE_PRESERVATIVE",
- "",
- "PARAM_HOST_NAME_ADDRESS",
- "PARAM_SUPPORTED_ADDRESS_TYPES",
-};
-
/* These are printable forms of the states. */
const char *sctp_state_tbl[SCTP_STATE_NUM_STATES] = {
"STATE_EMPTY",
static const char *sctp_other_tbl[] = {
"NO_PENDING_TSN",
+ "ICMP_PROTO_UNREACH",
};
/* Lookup "other" debug name. */
{
if (id.other < 0)
return "illegal 'other' event";
- if (id.other < SCTP_EVENT_OTHER_MAX)
+ if (id.other <= SCTP_EVENT_OTHER_MAX)
return sctp_other_tbl[id.other];
return "unknown 'other' event";
}
/* Forward declarations for internal helpers. */
static int sctp_rcv_ootb(struct sk_buff *);
-struct sctp_association *__sctp_rcv_lookup(struct sk_buff *skb,
+static struct sctp_association *__sctp_rcv_lookup(struct sk_buff *skb,
const union sctp_addr *laddr,
const union sctp_addr *paddr,
struct sctp_transport **transportp);
-struct sctp_endpoint *__sctp_rcv_lookup_endpoint(const union sctp_addr *laddr);
+static struct sctp_endpoint *__sctp_rcv_lookup_endpoint(const union sctp_addr *laddr);
+static struct sctp_association *__sctp_lookup_association(
+ const union sctp_addr *local,
+ const union sctp_addr *peer,
+ struct sctp_transport **pt);
/* Calculate the SCTP checksum of an SCTP packet. */
skb_pull(skb, sizeof(struct sctphdr));
+ /* Make sure we at least have chunk headers worth of data left. */
+ if (skb->len < sizeof(struct sctp_chunkhdr))
+ goto discard_it;
+
family = ipver2af(skb->nh.iph->version);
af = sctp_get_af_specific(family);
if (unlikely(!af))
}
}
+/*
+ * SCTP Implementer's Guide, 2.37 ICMP handling procedures
+ *
+ * ICMP8) If the ICMP code is an "Unrecognized next header type encountered"
+ * or a "Protocol Unreachable" treat this message as an abort
+ * with the T bit set.
+ *
+ * This function sends an event to the state machine, which will abort the
+ * association.
+ *
+ */
+void sctp_icmp_proto_unreachable(struct sock *sk,
+ struct sctp_endpoint *ep,
+ struct sctp_association *asoc,
+ struct sctp_transport *t)
+{
+ SCTP_DEBUG_PRINTK("%s\n", __FUNCTION__);
+
+ sctp_do_sm(SCTP_EVENT_T_OTHER,
+ SCTP_ST_OTHER(SCTP_EVENT_ICMP_PROTO_UNREACH),
+ asoc->state, asoc->ep, asoc, NULL,
+ GFP_ATOMIC);
+
+}
+
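
(Editorial aside, not part of the patch.) The comment above notes that this ICMP handler only injects an SCTP_EVENT_ICMP_PROTO_UNREACH event; the function that actually runs is picked per association state from a table (see the other_event_table addition later in this patch). A small, self-contained sketch of that kind of (event, state) dispatch, with invented event, state, and handler names:

	#include <stdio.h>

	enum state       { ST_CLOSED, ST_COOKIE_WAIT, ST_ESTABLISHED, ST_NUM };
	enum other_event { EV_NO_PENDING_TSN, EV_ICMP_PROTO_UNREACH, EV_NUM };

	typedef void (*handler_t)(void);

	static void do_ignore(void)      { puts("ignore"); }
	static void do_abort_assoc(void) { puts("abort association"); }

	/* One handler per (event, state) pair, in the spirit of
	 * other_event_table: only COOKIE_WAIT reacts to the ICMP event. */
	static const handler_t other_table[EV_NUM][ST_NUM] = {
		[EV_NO_PENDING_TSN]     = { do_ignore, do_ignore,      do_ignore },
		[EV_ICMP_PROTO_UNREACH] = { do_ignore, do_abort_assoc, do_ignore },
	};

	int main(void)
	{
		other_table[EV_ICMP_PROTO_UNREACH][ST_COOKIE_WAIT]();	/* abort association */
		other_table[EV_ICMP_PROTO_UNREACH][ST_ESTABLISHED]();	/* ignore */
		return 0;
	}
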
/* Common lookup code for icmp/icmpv6 error handler. */
struct sock *sctp_err_lookup(int family, struct sk_buff *skb,
struct sctphdr *sctphdr,
}
if (asoc) {
+ sk = asoc->base.sk;
+
if (ntohl(sctphdr->vtag) != asoc->c.peer_vtag) {
ICMP_INC_STATS_BH(ICMP_MIB_INERRORS);
goto out;
}
- sk = asoc->base.sk;
} else
sk = ep->base.sk;
struct sctp_endpoint *ep;
struct sctp_association *asoc;
struct sctp_transport *transport;
- struct inet_opt *inet;
+ struct inet_sock *inet;
char *saveip, *savesctp;
int err;
sctp_icmp_frag_needed(sk, asoc, transport, info);
goto out_unlock;
}
-
+ else {
+ if (ICMP_PROT_UNREACH == code) {
+ sctp_icmp_proto_unreachable(sk, ep, asoc,
+ transport);
+ goto out_unlock;
+ }
+ }
err = icmp_err_convert[code].errno;
break;
case ICMP_TIME_EXCEEDED:
sctp_errhdr_t *err;
ch = (sctp_chunkhdr_t *) skb->data;
+ ch_end = ((__u8 *) ch) + WORD_ROUND(ntohs(ch->length));
/* Scan through all the chunks in the packet. */
- do {
- ch_end = ((__u8 *) ch) + WORD_ROUND(ntohs(ch->length));
+ while (ch_end > (__u8 *)ch && ch_end < skb->tail) {
/* RFC 8.4, 2) If the OOTB packet contains an ABORT chunk, the
* receiver MUST silently discard the OOTB packet and take no
}
ch = (sctp_chunkhdr_t *) ch_end;
- } while (ch_end < skb->tail);
+ ch_end = ((__u8 *) ch) + WORD_ROUND(ntohs(ch->length));
+ }
return 0;
}
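
(Editorial aside, not part of the patch.) The rewritten loop above computes the next chunk boundary before examining the chunk and stops as soon as that boundary would pass skb->tail. A self-contained sketch of the same bounded walk over length-prefixed records; the record format (and host byte order for the length field) is invented for the example and is not SCTP's chunk layout:

	#include <stddef.h>
	#include <stdint.h>

	struct rec_hdr { uint8_t type; uint8_t flags; uint16_t length; };

	#define REC_ALIGN(len) (((len) + 3U) & ~3U)	/* 4-byte rounding, like WORD_ROUND */

	/* Walk length-prefixed records without reading past 'end'.
	 * Returns how many well-formed records were seen. */
	static int walk_records(const uint8_t *buf, const uint8_t *end)
	{
		const struct rec_hdr *h = (const struct rec_hdr *)buf;
		int count = 0;

		while ((const uint8_t *)h + sizeof(*h) <= end) {
			size_t len = REC_ALIGN(h->length);
			const uint8_t *next = (const uint8_t *)h + len;

			if (len < sizeof(*h))	/* malformed: would never advance */
				break;
			if (next > end)		/* record claims more data than we have */
				break;
			count++;
			h = (const struct rec_hdr *)next;
		}
		return count;
	}
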
/* Insert endpoint into the hash table. */
-void __sctp_hash_endpoint(struct sctp_endpoint *ep)
+static void __sctp_hash_endpoint(struct sctp_endpoint *ep)
{
struct sctp_ep_common **epp;
struct sctp_ep_common *epb;
}
/* Remove endpoint from the hash table. */
-void __sctp_unhash_endpoint(struct sctp_endpoint *ep)
+static void __sctp_unhash_endpoint(struct sctp_endpoint *ep)
{
struct sctp_hashbucket *head;
struct sctp_ep_common *epb;
}
/* Look up an endpoint. */
-struct sctp_endpoint *__sctp_rcv_lookup_endpoint(const union sctp_addr *laddr)
+static struct sctp_endpoint *__sctp_rcv_lookup_endpoint(const union sctp_addr *laddr)
{
struct sctp_hashbucket *head;
struct sctp_ep_common *epb;
return ep;
}
-/* Add an association to the hash. Local BH-safe. */
-void sctp_hash_established(struct sctp_association *asoc)
-{
- sctp_local_bh_disable();
- __sctp_hash_established(asoc);
- sctp_local_bh_enable();
-}
-
/* Insert association into the hash table. */
-void __sctp_hash_established(struct sctp_association *asoc)
+static void __sctp_hash_established(struct sctp_association *asoc)
{
struct sctp_ep_common **epp;
struct sctp_ep_common *epb;
sctp_write_unlock(&head->lock);
}
-/* Remove association from the hash table. Local BH-safe. */
-void sctp_unhash_established(struct sctp_association *asoc)
+/* Add an association to the hash. Local BH-safe. */
+void sctp_hash_established(struct sctp_association *asoc)
{
sctp_local_bh_disable();
- __sctp_unhash_established(asoc);
+ __sctp_hash_established(asoc);
sctp_local_bh_enable();
}
/* Remove association from the hash table. */
-void __sctp_unhash_established(struct sctp_association *asoc)
+static void __sctp_unhash_established(struct sctp_association *asoc)
{
struct sctp_hashbucket *head;
struct sctp_ep_common *epb;
sctp_write_unlock(&head->lock);
}
+/* Remove association from the hash table. Local BH-safe. */
+void sctp_unhash_established(struct sctp_association *asoc)
+{
+ sctp_local_bh_disable();
+ __sctp_unhash_established(asoc);
+ sctp_local_bh_enable();
+}
+
/* Look up an association. */
-struct sctp_association *__sctp_lookup_association(
+static struct sctp_association *__sctp_lookup_association(
const union sctp_addr *local,
const union sctp_addr *peer,
struct sctp_transport **pt)
}
/* Look up an association. BH-safe. */
+SCTP_STATIC
struct sctp_association *sctp_lookup_association(const union sctp_addr *laddr,
- const union sctp_addr *paddr,
+ const union sctp_addr *paddr,
struct sctp_transport **transportp)
{
struct sctp_association *asoc;
return NULL;
}
+ /* The code below will attempt to walk the chunk and extract
+ * parameter information. Before we do that, we need to verify
+ * that the chunk length doesn't cause overflow. Otherwise, we'll
+ * walk off the end.
+ */
+ if (WORD_ROUND(ntohs(ch->length)) > skb->len)
+ return NULL;
+
/*
* This code will NOT touch anything inside the chunk--it is
* strictly READ-ONLY.
}
/* Lookup an association for an inbound skb. */
-struct sctp_association *__sctp_rcv_lookup(struct sk_buff *skb,
+static struct sctp_association *__sctp_rcv_lookup(struct sk_buff *skb,
const union sctp_addr *paddr,
const union sctp_addr *laddr,
struct sctp_transport **transportp)
/* An array to make it easy to pretty print the debug information
* to the proc fs.
*/
-sctp_dbg_objcnt_entry_t sctp_dbg_objcnt[] = {
+static sctp_dbg_objcnt_entry_t sctp_dbg_objcnt[] = {
SCTP_DBG_OBJCNT_ENTRY(sock),
SCTP_DBG_OBJCNT_ENTRY(ep),
SCTP_DBG_OBJCNT_ENTRY(assoc),
#include <linux/init.h>
#include <net/sctp/sctp.h>
-struct snmp_mib sctp_snmp_list[] = {
+static struct snmp_mib sctp_snmp_list[] = {
SNMP_MIB_ITEM("SctpCurrEstab", SCTP_MIB_CURRESTAB),
SNMP_MIB_ITEM("SctpActiveEstabs", SCTP_MIB_ACTIVEESTABS),
SNMP_MIB_ITEM("SctpPassiveEstabs", SCTP_MIB_PASSIVEESTABS),
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
+static int sctp_cmd_interpreter(sctp_event_t event_type,
+ sctp_subtype_t subtype,
+ sctp_state_t state,
+ struct sctp_endpoint *ep,
+ struct sctp_association *asoc,
+ void *event_arg,
+ sctp_disposition_t status,
+ sctp_cmd_seq_t *commands,
+ int gfp);
+static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
+ sctp_state_t state,
+ struct sctp_endpoint *ep,
+ struct sctp_association *asoc,
+ void *event_arg,
+ sctp_disposition_t status,
+ sctp_cmd_seq_t *commands,
+ int gfp);
+
/********************************************************************
* Helper functions
********************************************************************/
}
/* Generate SACK if necessary. We call this at the end of a packet. */
-int sctp_gen_sack(struct sctp_association *asoc, int force,
- sctp_cmd_seq_t *commands)
+static int sctp_gen_sack(struct sctp_association *asoc, int force,
+ sctp_cmd_seq_t *commands)
{
__u32 ctsn, max_tsn_seen;
struct sctp_chunk *sack;
sctp_association_put(asoc);
}
-void sctp_generate_t1_cookie_event(unsigned long data)
+static void sctp_generate_t1_cookie_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T1_COOKIE);
}
-void sctp_generate_t1_init_event(unsigned long data)
+static void sctp_generate_t1_init_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T1_INIT);
}
-void sctp_generate_t2_shutdown_event(unsigned long data)
+static void sctp_generate_t2_shutdown_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T2_SHUTDOWN);
}
-void sctp_generate_t4_rto_event(unsigned long data)
+static void sctp_generate_t4_rto_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_T4_RTO);
}
-void sctp_generate_t5_shutdown_guard_event(unsigned long data)
+static void sctp_generate_t5_shutdown_guard_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *)data;
sctp_generate_timeout_event(asoc,
} /* sctp_generate_t5_shutdown_guard_event() */
-void sctp_generate_autoclose_event(unsigned long data)
+static void sctp_generate_autoclose_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_AUTOCLOSE);
}
/* Inject a SACK Timeout event into the state machine. */
-void sctp_generate_sack_event(unsigned long data)
+static void sctp_generate_sack_event(unsigned long data)
{
struct sctp_association *asoc = (struct sctp_association *) data;
sctp_generate_timeout_event(asoc, SCTP_EVENT_TIMEOUT_SACK);
asoc->overall_error_count++;
if (transport->active &&
- (transport->error_count++ >= transport->error_threshold)) {
+ (transport->error_count++ >= transport->max_retrans)) {
SCTP_DEBUG_PRINTK("transport_strike: transport "
"IP:%d.%d.%d.%d failed.\n",
NIPQUAD(transport->ipaddr.v4.sin_addr));
/*****************************************************************
* This the master state function side effect processing function.
*****************************************************************/
-int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
- sctp_state_t state,
- struct sctp_endpoint *ep,
- struct sctp_association *asoc,
- void *event_arg,
- sctp_disposition_t status,
- sctp_cmd_seq_t *commands,
- int gfp)
+static int sctp_side_effects(sctp_event_t event_type, sctp_subtype_t subtype,
+ sctp_state_t state,
+ struct sctp_endpoint *ep,
+ struct sctp_association *asoc,
+ void *event_arg,
+ sctp_disposition_t status,
+ sctp_cmd_seq_t *commands,
+ int gfp)
{
int error;
********************************************************************/
/* This is the side-effect interpreter. */
-int sctp_cmd_interpreter(sctp_event_t event_type, sctp_subtype_t subtype,
- sctp_state_t state, struct sctp_endpoint *ep,
- struct sctp_association *asoc, void *event_arg,
- sctp_disposition_t status, sctp_cmd_seq_t *commands,
- int gfp)
+static int sctp_cmd_interpreter(sctp_event_t event_type,
+ sctp_subtype_t subtype,
+ sctp_state_t state,
+ struct sctp_endpoint *ep,
+ struct sctp_association *asoc,
+ void *event_arg,
+ sctp_disposition_t status,
+ sctp_cmd_seq_t *commands,
+ int gfp)
{
int error = 0;
int force;
#include <net/sctp/sctp.h>
#include <net/sctp/sm.h>
+static const sctp_sm_table_entry_t
+primitive_event_table[SCTP_NUM_PRIMITIVE_TYPES][SCTP_STATE_NUM_STATES];
+static const sctp_sm_table_entry_t
+other_event_table[SCTP_NUM_OTHER_TYPES][SCTP_STATE_NUM_STATES];
+static const sctp_sm_table_entry_t
+timeout_event_table[SCTP_NUM_TIMEOUT_TYPES][SCTP_STATE_NUM_STATES];
+
+static const sctp_sm_table_entry_t *sctp_chunk_event_lookup(sctp_cid_t cid,
+ sctp_state_t state);
+
+
static const sctp_sm_table_entry_t bug = {
.fn = sctp_sf_bug,
.name = "sctp_sf_bug"
*
* For base protocol (RFC 2960).
*/
-const sctp_sm_table_entry_t chunk_event_table[SCTP_NUM_BASE_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
+static const sctp_sm_table_entry_t chunk_event_table[SCTP_NUM_BASE_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_DATA,
TYPE_SCTP_INIT,
TYPE_SCTP_INIT_ACK,
/* The primary index for this table is the chunk type.
* The secondary index for this table is the state.
*/
-const sctp_sm_table_entry_t addip_chunk_event_table[SCTP_NUM_ADDIP_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
+static const sctp_sm_table_entry_t addip_chunk_event_table[SCTP_NUM_ADDIP_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_ASCONF,
TYPE_SCTP_ASCONF_ACK,
}; /*state_fn_t addip_chunk_event_table[][] */
/* The primary index for this table is the chunk type.
* The secondary index for this table is the state.
*/
-const sctp_sm_table_entry_t prsctp_chunk_event_table[SCTP_NUM_PRSCTP_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
+static const sctp_sm_table_entry_t prsctp_chunk_event_table[SCTP_NUM_PRSCTP_CHUNK_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_FWD_TSN,
}; /*state_fn_t prsctp_chunk_event_table[][] */
/* The primary index for this table is the primitive type.
* The secondary index for this table is the state.
*/
-const sctp_sm_table_entry_t primitive_event_table[SCTP_NUM_PRIMITIVE_TYPES][SCTP_STATE_NUM_STATES] = {
+static const sctp_sm_table_entry_t primitive_event_table[SCTP_NUM_PRIMITIVE_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_PRIMITIVE_ASSOCIATE,
TYPE_SCTP_PRIMITIVE_SHUTDOWN,
TYPE_SCTP_PRIMITIVE_ABORT,
{.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
}
-const sctp_sm_table_entry_t other_event_table[SCTP_NUM_OTHER_TYPES][SCTP_STATE_NUM_STATES] = {
+#define TYPE_SCTP_OTHER_ICMP_PROTO_UNREACH { \
+ /* SCTP_STATE_EMPTY */ \
+ {.fn = sctp_sf_bug, .name = "sctp_sf_bug"}, \
+ /* SCTP_STATE_CLOSED */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_COOKIE_WAIT */ \
+ {.fn = sctp_sf_cookie_wait_icmp_abort, \
+ .name = "sctp_sf_cookie_wait_icmp_abort"}, \
+ /* SCTP_STATE_COOKIE_ECHOED */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_ESTABLISHED */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_SHUTDOWN_PENDING */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_SHUTDOWN_SENT */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_SHUTDOWN_RECEIVED */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+ /* SCTP_STATE_SHUTDOWN_ACK_SENT */ \
+ {.fn = sctp_sf_ignore_other, .name = "sctp_sf_ignore_other"}, \
+}
+
+static const sctp_sm_table_entry_t other_event_table[SCTP_NUM_OTHER_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_OTHER_NO_PENDING_TSN,
+ TYPE_SCTP_OTHER_ICMP_PROTO_UNREACH,
};
#define TYPE_SCTP_EVENT_TIMEOUT_NONE { \
{.fn = sctp_sf_timer_ignore, .name = "sctp_sf_timer_ignore"}, \
}
-const sctp_sm_table_entry_t timeout_event_table[SCTP_NUM_TIMEOUT_TYPES][SCTP_STATE_NUM_STATES] = {
+static const sctp_sm_table_entry_t timeout_event_table[SCTP_NUM_TIMEOUT_TYPES][SCTP_STATE_NUM_STATES] = {
TYPE_SCTP_EVENT_TIMEOUT_NONE,
TYPE_SCTP_EVENT_TIMEOUT_T1_COOKIE,
TYPE_SCTP_EVENT_TIMEOUT_T1_INIT,
TYPE_SCTP_EVENT_TIMEOUT_AUTOCLOSE,
};
-const sctp_sm_table_entry_t *sctp_chunk_event_lookup(sctp_cid_t cid,
- sctp_state_t state)
+static const sctp_sm_table_entry_t *sctp_chunk_event_lookup(sctp_cid_t cid,
+ sctp_state_t state)
{
if (state > SCTP_STATE_MAX)
return &bug;
#define MAX_KMALLOC_SIZE 131072
+static struct sctp_ssnmap *sctp_ssnmap_init(struct sctp_ssnmap *map, __u16 in,
+ __u16 out);
+
/* Storage size needed for map includes 2 headers and then the
* specific needs of in or out streams.
*/
/* Initialize a block of memory as a ssnmap. */
-struct sctp_ssnmap *sctp_ssnmap_init(struct sctp_ssnmap *map, __u16 in,
- __u16 out)
+static struct sctp_ssnmap *sctp_ssnmap_init(struct sctp_ssnmap *map, __u16 in,
+ __u16 out)
{
memset(map, 0x00, sctp_ssnmap_size(in, out));
/* 1st Level Abstractions. */
-/* Allocate and initialize a new transport. */
-struct sctp_transport *sctp_transport_new(const union sctp_addr *addr, int gfp)
-{
- struct sctp_transport *transport;
-
- transport = t_new(struct sctp_transport, gfp);
- if (!transport)
- goto fail;
-
- if (!sctp_transport_init(transport, addr, gfp))
- goto fail_init;
-
- transport->malloced = 1;
- SCTP_DBG_OBJCNT_INC(transport);
-
- return transport;
-
-fail_init:
- kfree(transport);
-
-fail:
- return NULL;
-}
-
/* Initialize a new transport from provided memory. */
-struct sctp_transport *sctp_transport_init(struct sctp_transport *peer,
- const union sctp_addr *addr,
- int gfp)
+static struct sctp_transport *sctp_transport_init(struct sctp_transport *peer,
+ const union sctp_addr *addr,
+ int gfp)
{
/* Copy in the address. */
peer->ipaddr = *addr;
/* Initialize the default path max_retrans. */
peer->max_retrans = sctp_max_retrans_path;
- peer->error_threshold = 0;
peer->error_count = 0;
INIT_LIST_HEAD(&peer->transmitted);
return peer;
}
+/* Allocate and initialize a new transport. */
+struct sctp_transport *sctp_transport_new(const union sctp_addr *addr, int gfp)
+{
+ struct sctp_transport *transport;
+
+ transport = t_new(struct sctp_transport, gfp);
+ if (!transport)
+ goto fail;
+
+ if (!sctp_transport_init(transport, addr, gfp))
+ goto fail_init;
+
+ transport->malloced = 1;
+ SCTP_DBG_OBJCNT_INC(transport);
+
+ return transport;
+
+fail_init:
+ kfree(transport);
+
+fail:
+ return NULL;
+}
+
/* This transport is no longer needed. Free up if possible, or
 * delay until its last reference is dropped.
*/
if (del_timer(&transport->hb_timer))
sctp_transport_put(transport);
+ /* Delete the T3_rtx timer if it's active.
+ * There is no point in not doing this now and letting the
+ * structure hang around in memory since we know
+ * the transport is going away.
+ */
+ if (timer_pending(&transport->T3_rtx_timer) &&
+ del_timer(&transport->T3_rtx_timer))
+ sctp_transport_put(transport);
+
+
sctp_transport_put(transport);
}
/* Destroy the transport data structure.
* Assumes there are no more users of this structure.
*/
-void sctp_transport_destroy(struct sctp_transport *transport)
+static void sctp_transport_destroy(struct sctp_transport *transport)
{
SCTP_ASSERT(transport->dead, "Transport is not dead", return);
* address.
*/
void sctp_transport_route(struct sctp_transport *transport,
- union sctp_addr *saddr, struct sctp_opt *opt)
+ union sctp_addr *saddr, struct sctp_sock *opt)
{
struct sctp_association *asoc = transport->asoc;
struct sctp_af *af = transport->af_specific;
int *started, __u16 *start,
int *ended, __u16 *end);
-/* Create a new sctp_tsnmap.
- * Allocate room to store at least 'len' contiguous TSNs.
- */
-struct sctp_tsnmap *sctp_tsnmap_new(__u16 len, __u32 initial_tsn, int gfp)
-{
- struct sctp_tsnmap *retval;
-
- retval = kmalloc(sizeof(struct sctp_tsnmap) +
- sctp_tsnmap_storage_size(len), gfp);
- if (!retval)
- goto fail;
-
- if (!sctp_tsnmap_init(retval, len, initial_tsn))
- goto fail_map;
- retval->malloced = 1;
- return retval;
-
-fail_map:
- kfree(retval);
-fail:
- return NULL;
-}
-
/* Initialize a block of memory as a tsnmap. */
struct sctp_tsnmap *sctp_tsnmap_init(struct sctp_tsnmap *map, __u16 len,
__u32 initial_tsn)
}
-/* Dispose of a tsnmap. */
-void sctp_tsnmap_free(struct sctp_tsnmap *map)
-{
- if (map->malloced)
- kfree(map);
-}
-
/* Initialize a Gap Ack Block iterator from memory being provided. */
-void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map,
- struct sctp_tsnmap_iter *iter)
+SCTP_STATIC void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map,
+ struct sctp_tsnmap_iter *iter)
{
/* Only start looking one past the Cumulative TSN Ack Point. */
iter->start = map->cumulative_tsn_ack_point + 1;
/* Get the next Gap Ack Blocks. Returns 0 if there was not another block
* to get.
*/
-int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *map,
- struct sctp_tsnmap_iter *iter, __u16 *start, __u16 *end)
+SCTP_STATIC int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *map,
+ struct sctp_tsnmap_iter *iter,
+ __u16 *start, __u16 *end)
{
int started, ended;
__u16 _start, _end, offset;
# define RPCDBG_FACILITY RPCDBG_AUTH
#endif
-struct xdr_netobj gss_mech_spkm3_oid =
- {7, "\053\006\001\005\005\001\003"};
-
static inline int
get_bytes(char **ptr, const char *end, void *res, int len)
{
return GSS_S_FAILURE;
}
-void
+static void
gss_delete_sec_context_spkm3(void *internal_ctx) {
struct spkm3_ctx *sctx = internal_ctx;
kfree(sctx);
}
-u32
+static u32
gss_verify_mic_spkm3(struct gss_ctx *ctx,
struct xdr_buf *signbuf,
struct xdr_netobj *checksum,
return maj_stat;
}
-u32
+static u32
gss_get_mic_spkm3(struct gss_ctx *ctx,
u32 qop,
struct xdr_buf *message_buffer,
static int
nul_refresh(struct rpc_task *task)
{
- return task->tk_status = -EACCES;
+ task->tk_msg.rpc_cred->cr_flags |= RPCAUTH_CRED_UPTODATE;
+ return 0;
}
static u32 *
#include <linux/smp.h>
#include <linux/smp_lock.h>
#include <linux/spinlock.h>
-#include <linux/suspend.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/xprt.h>
#ifdef RPC_DEBUG
#define RPCDBG_FACILITY RPCDBG_SCHED
+#define RPC_TASK_MAGIC_ID 0xf00baa
static int rpc_task_id;
#endif
static void __rpc_default_timer(struct rpc_task *task);
static void rpciod_killall(void);
+static void rpc_free(struct rpc_task *task);
-/*
- * When an asynchronous RPC task is activated within a bottom half
- * handler, or while executing another RPC task, it is put on
- * schedq, and rpciod is woken up.
- */
-static RPC_WAITQ(schedq, "schedq");
+static void rpc_async_schedule(void *);
/*
* RPC tasks that create another task (e.g. for contacting the portmapper)
/*
* rpciod-related stuff
*/
-static DECLARE_WAIT_QUEUE_HEAD(rpciod_idle);
-static DECLARE_COMPLETION(rpciod_killer);
static DECLARE_MUTEX(rpciod_sema);
static unsigned int rpciod_users;
-static pid_t rpciod_pid;
-static int rpc_inhibit;
+static struct workqueue_struct *rpciod_workqueue;
-/*
- * Spinlock for wait queues. Access to the latter also has to be
- * interrupt-safe in order to allow timers to wake up sleeping tasks.
- */
-static spinlock_t rpc_queue_lock = SPIN_LOCK_UNLOCKED;
/*
* Spinlock for other critical sections of code.
*/
-static spinlock_t rpc_sched_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(rpc_sched_lock);
/*
* Disable the timer for a given RPC task. Should be called with
- * rpc_queue_lock and bh_disabled in order to avoid races within
+ * queue->lock and bh_disabled in order to avoid races within
* rpc_run_timer().
*/
static inline void
* without calling del_timer_sync(). The latter could cause a
* deadlock if called while we're holding spinlocks...
*/
-static void
-rpc_run_timer(struct rpc_task *task)
+static void rpc_run_timer(struct rpc_task *task)
{
void (*callback)(struct rpc_task *);
- spin_lock_bh(&rpc_queue_lock);
callback = task->tk_timeout_fn;
task->tk_timeout_fn = NULL;
- spin_unlock_bh(&rpc_queue_lock);
- if (callback) {
+ if (callback && RPC_IS_QUEUED(task)) {
dprintk("RPC: %4d running timer\n", task->tk_pid);
callback(task);
}
+ smp_mb__before_clear_bit();
+ clear_bit(RPC_TASK_HAS_TIMER, &task->tk_runstate);
+ smp_mb__after_clear_bit();
}
/*
task->tk_timeout_fn = timer;
else
task->tk_timeout_fn = __rpc_default_timer;
+ set_bit(RPC_TASK_HAS_TIMER, &task->tk_runstate);
mod_timer(&task->tk_timer, jiffies + task->tk_timeout);
}
-/*
- * Set up a timer for an already sleeping task.
- */
-void rpc_add_timer(struct rpc_task *task, rpc_action timer)
-{
- spin_lock_bh(&rpc_queue_lock);
- if (!RPC_IS_RUNNING(task))
- __rpc_add_timer(task, timer);
- spin_unlock_bh(&rpc_queue_lock);
-}
-
/*
* Delete any timer for the current task. Because we use del_timer_sync(),
- * this function should never be called while holding rpc_queue_lock.
+ * this function should never be called while holding queue->lock.
*/
static inline void
rpc_delete_timer(struct rpc_task *task)
{
- if (del_timer_sync(&task->tk_timer))
+ if (test_and_clear_bit(RPC_TASK_HAS_TIMER, &task->tk_runstate)) {
+ del_singleshot_timer_sync(&task->tk_timer);
dprintk("RPC: %4d deleting timer\n", task->tk_pid);
+ }
}
/*
struct list_head *q;
struct rpc_task *t;
+ INIT_LIST_HEAD(&task->u.tk_wait.links);
q = &queue->tasks[task->tk_priority];
if (unlikely(task->tk_priority > queue->maxpriority))
q = &queue->tasks[queue->maxpriority];
- list_for_each_entry(t, q, tk_list) {
+ list_for_each_entry(t, q, u.tk_wait.list) {
if (t->tk_cookie == task->tk_cookie) {
- list_add_tail(&task->tk_list, &t->tk_links);
+ list_add_tail(&task->u.tk_wait.list, &t->u.tk_wait.links);
return;
}
}
- list_add_tail(&task->tk_list, q);
+ list_add_tail(&task->u.tk_wait.list, q);
}
/*
* improve overall performance.
* Everyone else gets appended to the queue to ensure proper FIFO behavior.
*/
-static int __rpc_add_wait_queue(struct rpc_wait_queue *queue, struct rpc_task *task)
+static void __rpc_add_wait_queue(struct rpc_wait_queue *queue, struct rpc_task *task)
{
- if (task->tk_rpcwait == queue)
- return 0;
+ BUG_ON (RPC_IS_QUEUED(task));
- if (task->tk_rpcwait) {
- printk(KERN_WARNING "RPC: doubly enqueued task!\n");
- return -EWOULDBLOCK;
- }
if (RPC_IS_PRIORITY(queue))
__rpc_add_wait_queue_priority(queue, task);
else if (RPC_IS_SWAPPER(task))
- list_add(&task->tk_list, &queue->tasks[0]);
+ list_add(&task->u.tk_wait.list, &queue->tasks[0]);
else
- list_add_tail(&task->tk_list, &queue->tasks[0]);
- task->tk_rpcwait = queue;
+ list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
+ task->u.tk_wait.rpc_waitq = queue;
+ rpc_set_queued(task);
dprintk("RPC: %4d added to queue %p \"%s\"\n",
task->tk_pid, queue, rpc_qname(queue));
-
- return 0;
-}
-
-int rpc_add_wait_queue(struct rpc_wait_queue *q, struct rpc_task *task)
-{
- int result;
-
- spin_lock_bh(&rpc_queue_lock);
- result = __rpc_add_wait_queue(q, task);
- spin_unlock_bh(&rpc_queue_lock);
- return result;
}
/*
{
struct rpc_task *t;
- if (!list_empty(&task->tk_links)) {
- t = list_entry(task->tk_links.next, struct rpc_task, tk_list);
- list_move(&t->tk_list, &task->tk_list);
- list_splice_init(&task->tk_links, &t->tk_links);
+ if (!list_empty(&task->u.tk_wait.links)) {
+ t = list_entry(task->u.tk_wait.links.next, struct rpc_task, u.tk_wait.list);
+ list_move(&t->u.tk_wait.list, &task->u.tk_wait.list);
+ list_splice_init(&task->u.tk_wait.links, &t->u.tk_wait.links);
}
- list_del(&task->tk_list);
+ list_del(&task->u.tk_wait.list);
}
/*
*/
static void __rpc_remove_wait_queue(struct rpc_task *task)
{
- struct rpc_wait_queue *queue = task->tk_rpcwait;
-
- if (!queue)
- return;
+ struct rpc_wait_queue *queue;
+ queue = task->u.tk_wait.rpc_waitq;
if (RPC_IS_PRIORITY(queue))
__rpc_remove_wait_queue_priority(task);
else
- list_del(&task->tk_list);
- task->tk_rpcwait = NULL;
-
+ list_del(&task->u.tk_wait.list);
dprintk("RPC: %4d removed from queue %p \"%s\"\n",
task->tk_pid, queue, rpc_qname(queue));
}
-void
-rpc_remove_wait_queue(struct rpc_task *task)
-{
- if (!task->tk_rpcwait)
- return;
- spin_lock_bh(&rpc_queue_lock);
- __rpc_remove_wait_queue(task);
- spin_unlock_bh(&rpc_queue_lock);
-}
-
static inline void rpc_set_waitqueue_priority(struct rpc_wait_queue *queue, int priority)
{
queue->priority = priority;
{
int i;
+ spin_lock_init(&queue->lock);
for (i = 0; i < ARRAY_SIZE(queue->tasks); i++)
INIT_LIST_HEAD(&queue->tasks[i]);
queue->maxpriority = maxprio;
* Note: If the task is ASYNC, this must be called with
* the spinlock held to protect the wait queue operation.
*/
-static inline void
-rpc_make_runnable(struct rpc_task *task)
+static void rpc_make_runnable(struct rpc_task *task)
{
- if (task->tk_timeout_fn) {
- printk(KERN_ERR "RPC: task w/ running timer in rpc_make_runnable!!\n");
+ int do_ret;
+
+ BUG_ON(task->tk_timeout_fn);
+ do_ret = rpc_test_and_set_running(task);
+ rpc_clear_queued(task);
+ if (do_ret)
return;
- }
- rpc_set_running(task);
if (RPC_IS_ASYNC(task)) {
- if (RPC_IS_SLEEPING(task)) {
- int status;
- status = __rpc_add_wait_queue(&schedq, task);
- if (status < 0) {
- printk(KERN_WARNING "RPC: failed to add task to queue: error: %d!\n", status);
- task->tk_status = status;
- return;
- }
- rpc_clear_sleeping(task);
- wake_up(&rpciod_idle);
+ int status;
+
+ INIT_WORK(&task->u.tk_work, rpc_async_schedule, (void *)task);
+ status = queue_work(task->tk_workqueue, &task->u.tk_work);
+ if (status < 0) {
+ printk(KERN_WARNING "RPC: failed to add task to queue: error: %d!\n", status);
+ task->tk_status = status;
+ return;
}
- } else {
- rpc_clear_sleeping(task);
- wake_up(&task->tk_wait);
- }
+ } else
+ wake_up(&task->u.tk_wait.waitq);
}
/*
- * Place a newly initialized task on the schedq.
+ * Place a newly initialized task on the workqueue.
*/
static inline void
rpc_schedule_run(struct rpc_task *task)
if (RPC_IS_ACTIVATED(task))
return;
task->tk_active = 1;
- rpc_set_sleeping(task);
rpc_make_runnable(task);
}
-/*
- * For other people who may need to wake the I/O daemon
- * but should (for now) know nothing about its innards
- */
-void rpciod_wake_up(void)
-{
- if(rpciod_pid==0)
- printk(KERN_ERR "rpciod: wot no daemon?\n");
- wake_up(&rpciod_idle);
-}
-
/*
* Prepare for sleeping on a wait queue.
* By always appending tasks to the list we ensure FIFO behavior.
* NB: An RPC task will only receive interrupt-driven events as long
* as it's on a wait queue.
*/
-static void
-__rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
+static void __rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
rpc_action action, rpc_action timer)
{
- int status;
-
dprintk("RPC: %4d sleep_on(queue \"%s\" time %ld)\n", task->tk_pid,
rpc_qname(q), jiffies);
}
/* Mark the task as being activated if so needed */
- if (!RPC_IS_ACTIVATED(task)) {
+ if (!RPC_IS_ACTIVATED(task))
task->tk_active = 1;
- rpc_set_sleeping(task);
- }
- status = __rpc_add_wait_queue(q, task);
- if (status) {
- printk(KERN_WARNING "RPC: failed to add task to queue: error: %d!\n", status);
- task->tk_status = status;
- } else {
- rpc_clear_running(task);
- if (task->tk_callback) {
- dprintk(KERN_ERR "RPC: %4d overwrites an active callback\n", task->tk_pid);
- BUG();
- }
- task->tk_callback = action;
- __rpc_add_timer(task, timer);
- }
+ __rpc_add_wait_queue(q, task);
+
+ BUG_ON(task->tk_callback != NULL);
+ task->tk_callback = action;
+ __rpc_add_timer(task, timer);
}
-void
-rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
+void rpc_sleep_on(struct rpc_wait_queue *q, struct rpc_task *task,
rpc_action action, rpc_action timer)
{
/*
* Protect the queue operations.
*/
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&q->lock);
__rpc_sleep_on(q, task, action, timer);
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&q->lock);
}
/**
- * __rpc_wake_up_task - wake up a single rpc_task
+ * __rpc_do_wake_up_task - wake up a single rpc_task
* @task: task to be woken up
*
- * Caller must hold rpc_queue_lock
+ * Caller must hold queue->lock, and have cleared the task queued flag.
*/
-static void
-__rpc_wake_up_task(struct rpc_task *task)
+static void __rpc_do_wake_up_task(struct rpc_task *task)
{
- dprintk("RPC: %4d __rpc_wake_up_task (now %ld inh %d)\n",
- task->tk_pid, jiffies, rpc_inhibit);
+ dprintk("RPC: %4d __rpc_wake_up_task (now %ld)\n", task->tk_pid, jiffies);
#ifdef RPC_DEBUG
- if (task->tk_magic != 0xf00baa) {
- printk(KERN_ERR "RPC: attempt to wake up non-existing task!\n");
- rpc_debug = ~0;
- rpc_show_tasks();
- return;
- }
+ BUG_ON(task->tk_magic != RPC_TASK_MAGIC_ID);
#endif
/* Has the task been executed yet? If not, we cannot wake it up! */
if (!RPC_IS_ACTIVATED(task)) {
printk(KERN_ERR "RPC: Inactive task (%p) being woken up!\n", task);
return;
}
- if (RPC_IS_RUNNING(task))
- return;
__rpc_disable_timer(task);
- if (task->tk_rpcwait != &schedq)
- __rpc_remove_wait_queue(task);
+ __rpc_remove_wait_queue(task);
rpc_make_runnable(task);
dprintk("RPC: __rpc_wake_up_task done\n");
}
+/*
+ * Wake up the specified task
+ */
+static void __rpc_wake_up_task(struct rpc_task *task)
+{
+ if (rpc_start_wakeup(task)) {
+ if (RPC_IS_QUEUED(task))
+ __rpc_do_wake_up_task(task);
+ rpc_finish_wakeup(task);
+ }
+}
+
/*
* Default timeout handler if none specified by user
*/
/*
* Wake up the specified task
*/
-void
-rpc_wake_up_task(struct rpc_task *task)
+void rpc_wake_up_task(struct rpc_task *task)
{
- if (RPC_IS_RUNNING(task))
- return;
- spin_lock_bh(&rpc_queue_lock);
- __rpc_wake_up_task(task);
- spin_unlock_bh(&rpc_queue_lock);
+ if (rpc_start_wakeup(task)) {
+ if (RPC_IS_QUEUED(task)) {
+ struct rpc_wait_queue *queue = task->u.tk_wait.rpc_waitq;
+
+ spin_lock_bh(&queue->lock);
+ __rpc_do_wake_up_task(task);
+ spin_unlock_bh(&queue->lock);
+ }
+ rpc_finish_wakeup(task);
+ }
}
/*
*/
q = &queue->tasks[queue->priority];
if (!list_empty(q)) {
- task = list_entry(q->next, struct rpc_task, tk_list);
+ task = list_entry(q->next, struct rpc_task, u.tk_wait.list);
if (queue->cookie == task->tk_cookie) {
if (--queue->nr)
goto out;
- list_move_tail(&task->tk_list, q);
+ list_move_tail(&task->u.tk_wait.list, q);
}
/*
* Check if we need to switch queues.
else
q = q - 1;
if (!list_empty(q)) {
- task = list_entry(q->next, struct rpc_task, tk_list);
+ task = list_entry(q->next, struct rpc_task, u.tk_wait.list);
goto new_queue;
}
} while (q != &queue->tasks[queue->priority]);
struct rpc_task *task = NULL;
dprintk("RPC: wake_up_next(%p \"%s\")\n", queue, rpc_qname(queue));
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&queue->lock);
if (RPC_IS_PRIORITY(queue))
task = __rpc_wake_up_next_priority(queue);
else {
task_for_first(task, &queue->tasks[0])
__rpc_wake_up_task(task);
}
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&queue->lock);
return task;
}
* rpc_wake_up - wake up all rpc_tasks
* @queue: rpc_wait_queue on which the tasks are sleeping
*
- * Grabs rpc_queue_lock
+ * Grabs queue->lock
*/
void rpc_wake_up(struct rpc_wait_queue *queue)
{
struct rpc_task *task;
struct list_head *head;
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&queue->lock);
head = &queue->tasks[queue->maxpriority];
for (;;) {
while (!list_empty(head)) {
- task = list_entry(head->next, struct rpc_task, tk_list);
+ task = list_entry(head->next, struct rpc_task, u.tk_wait.list);
__rpc_wake_up_task(task);
}
if (head == &queue->tasks[0])
break;
head--;
}
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&queue->lock);
}
/**
* @queue: rpc_wait_queue on which the tasks are sleeping
* @status: status value to set
*
- * Grabs rpc_queue_lock
+ * Grabs queue->lock
*/
void rpc_wake_up_status(struct rpc_wait_queue *queue, int status)
{
struct list_head *head;
struct rpc_task *task;
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&queue->lock);
head = &queue->tasks[queue->maxpriority];
for (;;) {
while (!list_empty(head)) {
- task = list_entry(head->next, struct rpc_task, tk_list);
+ task = list_entry(head->next, struct rpc_task, u.tk_wait.list);
task->tk_status = status;
__rpc_wake_up_task(task);
}
break;
head--;
}
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&queue->lock);
}
/*
/*
* This is the RPC `scheduler' (or rather, the finite state machine).
*/
-static int
-__rpc_execute(struct rpc_task *task)
+static int __rpc_execute(struct rpc_task *task)
{
int status = 0;
dprintk("RPC: %4d rpc_execute flgs %x\n",
task->tk_pid, task->tk_flags);
- if (!RPC_IS_RUNNING(task)) {
- printk(KERN_WARNING "RPC: rpc_execute called for sleeping task!!\n");
- return 0;
- }
+ BUG_ON(RPC_IS_QUEUED(task));
restarted:
while (1) {
+ /*
+ * Garbage collection of pending timers...
+ */
+ rpc_delete_timer(task);
+
/*
* Execute any pending callback.
*/
*/
save_callback=task->tk_callback;
task->tk_callback=NULL;
+ lock_kernel();
save_callback(task);
+ unlock_kernel();
}
/*
* tk_action may be NULL when the task has been killed
* by someone else.
*/
- if (RPC_IS_RUNNING(task)) {
- /*
- * Garbage collection of pending timers...
- */
- rpc_delete_timer(task);
+ if (!RPC_IS_QUEUED(task)) {
if (!task->tk_action)
break;
+ lock_kernel();
task->tk_action(task);
- /* micro-optimization to avoid spinlock */
- if (RPC_IS_RUNNING(task))
- continue;
+ unlock_kernel();
}
/*
- * Check whether task is sleeping.
+ * Lockless check for whether task is sleeping or not.
*/
- spin_lock_bh(&rpc_queue_lock);
- if (!RPC_IS_RUNNING(task)) {
- rpc_set_sleeping(task);
- if (RPC_IS_ASYNC(task)) {
- spin_unlock_bh(&rpc_queue_lock);
+ if (!RPC_IS_QUEUED(task))
+ continue;
+ rpc_clear_running(task);
+ if (RPC_IS_ASYNC(task)) {
+ /* Careful! we may have raced... */
+ if (RPC_IS_QUEUED(task))
return 0;
- }
+ if (rpc_test_and_set_running(task))
+ return 0;
+ continue;
}
- spin_unlock_bh(&rpc_queue_lock);
- if (!RPC_IS_SLEEPING(task))
- continue;
/* sync task: sleep here */
dprintk("RPC: %4d sync task going to sleep\n", task->tk_pid);
- if (current->pid == rpciod_pid)
- printk(KERN_ERR "RPC: rpciod waiting on sync task!\n");
-
if (RPC_TASK_UNINTERRUPTIBLE(task)) {
- __wait_event(task->tk_wait, !RPC_IS_SLEEPING(task));
+ __wait_event(task->u.tk_wait.waitq, !RPC_IS_QUEUED(task));
} else {
- __wait_event_interruptible(task->tk_wait, !RPC_IS_SLEEPING(task), status);
+ __wait_event_interruptible(task->u.tk_wait.waitq, !RPC_IS_QUEUED(task), status);
/*
* When a sync task receives a signal, it exits with
* -ERESTARTSYS. In order to catch any callbacks that
rpc_wake_up_task(task);
}
}
+ rpc_set_running(task);
dprintk("RPC: %4d sync task resuming\n", task->tk_pid);
}
if (task->tk_exit) {
+ lock_kernel();
task->tk_exit(task);
+ unlock_kernel();
/* If tk_action is non-null, the user wants us to restart */
if (task->tk_action) {
if (!RPC_ASSASSINATED(task)) {
/* Release all resources associated with the task */
rpc_release_task(task);
-
return status;
}
int
rpc_execute(struct rpc_task *task)
{
- int status = -EIO;
- if (rpc_inhibit) {
- printk(KERN_INFO "RPC: execution inhibited!\n");
- goto out_release;
- }
-
- status = -EWOULDBLOCK;
- if (task->tk_active) {
- printk(KERN_ERR "RPC: active task was run twice!\n");
- goto out_err;
- }
+ BUG_ON(task->tk_active);
task->tk_active = 1;
rpc_set_running(task);
return __rpc_execute(task);
- out_release:
- rpc_release_task(task);
- out_err:
- return status;
}
-/*
- * This is our own little scheduler for async RPC tasks.
- */
-static void
-__rpc_schedule(void)
+static void rpc_async_schedule(void *arg)
{
- struct rpc_task *task;
- int count = 0;
-
- dprintk("RPC: rpc_schedule enter\n");
- while (1) {
-
- task_for_first(task, &schedq.tasks[0]) {
- __rpc_remove_wait_queue(task);
- spin_unlock_bh(&rpc_queue_lock);
-
- __rpc_execute(task);
- spin_lock_bh(&rpc_queue_lock);
- } else {
- break;
- }
-
- if (++count >= 200 || need_resched()) {
- count = 0;
- spin_unlock_bh(&rpc_queue_lock);
- schedule();
- spin_lock_bh(&rpc_queue_lock);
- }
- }
- dprintk("RPC: rpc_schedule leave\n");
+ __rpc_execute((struct rpc_task *)arg);
}
/*
return task->tk_buffer;
}
-void
+static void
rpc_free(struct rpc_task *task)
{
if (task->tk_buffer) {
task->tk_client = clnt;
task->tk_flags = flags;
task->tk_exit = callback;
- init_waitqueue_head(&task->tk_wait);
if (current->uid != current->fsuid || current->gid != current->fsgid)
task->tk_flags |= RPC_TASK_SETUID;
task->tk_priority = RPC_PRIORITY_NORMAL;
task->tk_cookie = (unsigned long)current;
- INIT_LIST_HEAD(&task->tk_links);
- /* Add to global list of all tasks */
- spin_lock(&rpc_sched_lock);
- list_add(&task->tk_task, &all_tasks);
- spin_unlock(&rpc_sched_lock);
+ /* Initialize workqueue for async tasks */
+ task->tk_workqueue = rpciod_workqueue;
+ if (!RPC_IS_ASYNC(task))
+ init_waitqueue_head(&task->u.tk_wait.waitq);
if (clnt) {
atomic_inc(&clnt->cl_users);
}
#ifdef RPC_DEBUG
- task->tk_magic = 0xf00baa;
+ task->tk_magic = RPC_TASK_MAGIC_ID;
task->tk_pid = rpc_task_id++;
#endif
+ /* Add to global list of all tasks */
+ spin_lock(&rpc_sched_lock);
+ list_add_tail(&task->tk_task, &all_tasks);
+ spin_unlock(&rpc_sched_lock);
+
dprintk("RPC: %4d new task procpid %d\n", task->tk_pid,
current->pid);
}
goto out;
}
-void
-rpc_release_task(struct rpc_task *task)
+void rpc_release_task(struct rpc_task *task)
{
dprintk("RPC: %4d release task\n", task->tk_pid);
#ifdef RPC_DEBUG
- if (task->tk_magic != 0xf00baa) {
- printk(KERN_ERR "RPC: attempt to release a non-existing task!\n");
- rpc_debug = ~0;
- rpc_show_tasks();
- return;
- }
+ BUG_ON(task->tk_magic != RPC_TASK_MAGIC_ID);
#endif
/* Remove from global task list */
list_del(&task->tk_task);
spin_unlock(&rpc_sched_lock);
- /* Protect the execution below. */
- spin_lock_bh(&rpc_queue_lock);
-
- /* Disable timer to prevent zombie wakeup */
- __rpc_disable_timer(task);
-
- /* Remove from any wait queue we're still on */
- __rpc_remove_wait_queue(task);
-
+ BUG_ON (RPC_IS_QUEUED(task));
task->tk_active = 0;
- spin_unlock_bh(&rpc_queue_lock);
-
/* Synchronously delete any running timer */
rpc_delete_timer(task);
* queue 'childq'. If so returns a pointer to the parent.
* Upon failure returns NULL.
*
- * Caller must hold rpc_queue_lock
+ * Caller must hold childq.lock
*/
-static inline struct rpc_task *
-rpc_find_parent(struct rpc_task *child)
+static inline struct rpc_task *rpc_find_parent(struct rpc_task *child)
{
struct rpc_task *task, *parent;
struct list_head *le;
return NULL;
}
-static void
-rpc_child_exit(struct rpc_task *child)
+static void rpc_child_exit(struct rpc_task *child)
{
struct rpc_task *parent;
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&childq.lock);
if ((parent = rpc_find_parent(child)) != NULL) {
parent->tk_status = child->tk_status;
__rpc_wake_up_task(parent);
}
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&childq.lock);
}
/*
return NULL;
}
-void
-rpc_run_child(struct rpc_task *task, struct rpc_task *child, rpc_action func)
+void rpc_run_child(struct rpc_task *task, struct rpc_task *child, rpc_action func)
{
- spin_lock_bh(&rpc_queue_lock);
+ spin_lock_bh(&childq.lock);
/* N.B. Is it possible for the child to have already finished? */
__rpc_sleep_on(&childq, task, func, NULL);
rpc_schedule_run(child);
- spin_unlock_bh(&rpc_queue_lock);
+ spin_unlock_bh(&childq.lock);
}
/*
* Kill all tasks for the given client.
* XXX: kill their descendants as well?
*/
-void
-rpc_killall_tasks(struct rpc_clnt *clnt)
+void rpc_killall_tasks(struct rpc_clnt *clnt)
{
struct rpc_task *rovr;
struct list_head *le;
* Spin lock all_tasks to prevent changes...
*/
spin_lock(&rpc_sched_lock);
- alltask_for_each(rovr, le, &all_tasks)
+ alltask_for_each(rovr, le, &all_tasks) {
+ if (! RPC_IS_ACTIVATED(rovr))
+ continue;
if (!clnt || rovr->tk_client == clnt) {
rovr->tk_flags |= RPC_TASK_KILLED;
rpc_exit(rovr, -EIO);
rpc_wake_up_task(rovr);
}
+ }
spin_unlock(&rpc_sched_lock);
}
static DECLARE_MUTEX_LOCKED(rpciod_running);
-static inline int
-rpciod_task_pending(void)
-{
- return !list_empty(&schedq.tasks[0]);
-}
-
-
-/*
- * This is the rpciod kernel thread
- */
-static int
-rpciod(void *ptr)
-{
- int rounds = 0;
-
- lock_kernel();
- /*
- * Let our maker know we're running ...
- */
- rpciod_pid = current->pid;
- up(&rpciod_running);
-
- daemonize("rpciod");
- allow_signal(SIGKILL);
-
- dprintk("RPC: rpciod starting (pid %d)\n", rpciod_pid);
- spin_lock_bh(&rpc_queue_lock);
- while (rpciod_users) {
- DEFINE_WAIT(wait);
- if (signalled()) {
- spin_unlock_bh(&rpc_queue_lock);
- rpciod_killall();
- flush_signals(current);
- spin_lock_bh(&rpc_queue_lock);
- }
- __rpc_schedule();
- if (current->flags & PF_FREEZE) {
- spin_unlock_bh(&rpc_queue_lock);
- refrigerator(PF_FREEZE);
- spin_lock_bh(&rpc_queue_lock);
- }
-
- if (++rounds >= 64) { /* safeguard */
- spin_unlock_bh(&rpc_queue_lock);
- schedule();
- rounds = 0;
- spin_lock_bh(&rpc_queue_lock);
- }
-
- dprintk("RPC: rpciod back to sleep\n");
- prepare_to_wait(&rpciod_idle, &wait, TASK_INTERRUPTIBLE);
- if (!rpciod_task_pending() && !signalled()) {
- spin_unlock_bh(&rpc_queue_lock);
- schedule();
- rounds = 0;
- spin_lock_bh(&rpc_queue_lock);
- }
- finish_wait(&rpciod_idle, &wait);
- dprintk("RPC: switch to rpciod\n");
- }
- spin_unlock_bh(&rpc_queue_lock);
-
- dprintk("RPC: rpciod shutdown commences\n");
- if (!list_empty(&all_tasks)) {
- printk(KERN_ERR "rpciod: active tasks at shutdown?!\n");
- rpciod_killall();
- }
-
- dprintk("RPC: rpciod exiting\n");
- unlock_kernel();
-
- rpciod_pid = 0;
- complete_and_exit(&rpciod_killer, 0);
- return 0;
-}
-
-static void
-rpciod_killall(void)
+static void rpciod_killall(void)
{
unsigned long flags;
while (!list_empty(&all_tasks)) {
clear_thread_flag(TIF_SIGPENDING);
rpc_killall_tasks(NULL);
- spin_lock_bh(&rpc_queue_lock);
- __rpc_schedule();
- spin_unlock_bh(&rpc_queue_lock);
+ flush_workqueue(rpciod_workqueue);
if (!list_empty(&all_tasks)) {
dprintk("rpciod_killall: waiting for tasks to exit\n");
yield();
int
rpciod_up(void)
{
+ struct workqueue_struct *wq;
int error = 0;
down(&rpciod_sema);
- dprintk("rpciod_up: pid %d, users %d\n", rpciod_pid, rpciod_users);
+ dprintk("rpciod_up: users %d\n", rpciod_users);
rpciod_users++;
- if (rpciod_pid)
+ if (rpciod_workqueue)
goto out;
/*
 * If there's no workqueue, we should be the first user.
*/
if (rpciod_users > 1)
- printk(KERN_WARNING "rpciod_up: no pid, %d users??\n", rpciod_users);
+ printk(KERN_WARNING "rpciod_up: no workqueue, %d users??\n", rpciod_users);
/*
 * Create the rpciod workqueue.
*/
- error = kernel_thread(rpciod, NULL, 0);
- if (error < 0) {
- printk(KERN_WARNING "rpciod_up: create thread failed, error=%d\n", error);
+ error = -ENOMEM;
+ wq = create_workqueue("rpciod");
+ if (wq == NULL) {
+ printk(KERN_WARNING "rpciod_up: create workqueue failed, error=%d\n", error);
rpciod_users--;
goto out;
}
- down(&rpciod_running);
+ rpciod_workqueue = wq;
error = 0;
out:
up(&rpciod_sema);
rpciod_down(void)
{
down(&rpciod_sema);
- dprintk("rpciod_down pid %d sema %d\n", rpciod_pid, rpciod_users);
+ dprintk("rpciod_down sema %d\n", rpciod_users);
if (rpciod_users) {
if (--rpciod_users)
goto out;
} else
- printk(KERN_WARNING "rpciod_down: pid=%d, no users??\n", rpciod_pid);
+ printk(KERN_WARNING "rpciod_down: no users??\n");
- if (!rpciod_pid) {
+ if (!rpciod_workqueue) {
dprintk("rpciod_down: Nothing to do!\n");
goto out;
}
+ rpciod_killall();
- kill_proc(rpciod_pid, SIGKILL, 1);
- wait_for_completion(&rpciod_killer);
+ destroy_workqueue(rpciod_workqueue);
+ rpciod_workqueue = NULL;
out:
up(&rpciod_sema);
}
}
printk("-pid- proc flgs status -client- -prog- --rqstp- -timeout "
"-rpcwait -action- --exit--\n");
- alltask_for_each(t, le, &all_tasks)
+ alltask_for_each(t, le, &all_tasks) {
+ const char *rpc_waitq = "none";
+
+ if (RPC_IS_QUEUED(t))
+ rpc_waitq = rpc_qname(t->u.tk_wait.rpc_waitq);
+
printk("%05d %04d %04x %06d %8p %6d %8p %08ld %8s %8p %8p\n",
t->tk_pid,
(t->tk_msg.rpc_proc ? t->tk_msg.rpc_proc->p_proc : -1),
t->tk_client,
(t->tk_client ? t->tk_client->cl_prog : 0),
t->tk_rqstp, t->tk_timeout,
- rpc_qname(t->tk_rpcwait),
+ rpc_waitq,
t->tk_action, t->tk_exit);
+ }
spin_unlock(&rpc_sched_lock);
}
#endif
EXPORT_SYMBOL(auth_unix_forget_old);
EXPORT_SYMBOL(auth_unix_lookup);
EXPORT_SYMBOL(cache_check);
-EXPORT_SYMBOL(cache_clean);
EXPORT_SYMBOL(cache_flush);
EXPORT_SYMBOL(cache_purge);
EXPORT_SYMBOL(cache_fresh);
extern struct auth_ops svcauth_null;
extern struct auth_ops svcauth_unix;
-static spinlock_t authtab_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(authtab_lock);
static struct auth_ops *authtab[RPC_AUTH_MAXFLAVOR] = {
[0] = &svcauth_null,
[1] = &svcauth_unix,
#define DN_HASHMASK (DN_HASHMAX-1)
static struct cache_head *auth_domain_table[DN_HASHMAX];
-void auth_domain_drop(struct cache_head *item, struct cache_detail *cd)
+
+static void auth_domain_drop(struct cache_head *item, struct cache_detail *cd)
{
struct auth_domain *dom = container_of(item, struct auth_domain, h);
if (cache_put(item,cd))
return p + XDR_QUADLEN(obj->len);
}
-u32 *
-xdr_decode_netobj_fixed(u32 *p, void *obj, unsigned int len)
-{
- if (ntohl(*p++) != len)
- return NULL;
- memcpy(obj, p, len);
- return p + XDR_QUADLEN(len);
-}
-
u32 *
xdr_decode_netobj(u32 *p, struct xdr_netobj *obj)
{
xdr->buflen += len;
}
-/*
- * Realign the kvec if the server missed out some reply elements
- * (such as post-op attributes,...)
- * Note: This is a simple implementation that assumes that
- * len <= iov->iov_len !!!
- * The RPC header (assumed to be the 1st element in the iov array)
- * is not shifted.
- */
-void xdr_shift_iovec(struct kvec *iov, int nr, size_t len)
-{
- struct kvec *pvec;
-
- for (pvec = iov + nr - 1; nr > 1; nr--, pvec--) {
- struct kvec *svec = pvec - 1;
-
- if (len > pvec->iov_len) {
- printk(KERN_DEBUG "RPC: Urk! Large shift of short iovec.\n");
- return;
- }
- memmove((char *)pvec->iov_base + len, pvec->iov_base,
- pvec->iov_len - len);
-
- if (len > svec->iov_len) {
- printk(KERN_DEBUG "RPC: Urk! Large shift of short iovec.\n");
- return;
- }
- memcpy(pvec->iov_base,
- (char *)svec->iov_base + svec->iov_len - len, len);
- }
-}
-
-/*
- * Map a struct xdr_buf into an kvec array.
- */
-int xdr_kmap(struct kvec *iov_base, struct xdr_buf *xdr, size_t base)
-{
- struct kvec *iov = iov_base;
- struct page **ppage = xdr->pages;
- unsigned int len, pglen = xdr->page_len;
-
- len = xdr->head[0].iov_len;
- if (base < len) {
- iov->iov_len = len - base;
- iov->iov_base = (char *)xdr->head[0].iov_base + base;
- iov++;
- base = 0;
- } else
- base -= len;
-
- if (pglen == 0)
- goto map_tail;
- if (base >= pglen) {
- base -= pglen;
- goto map_tail;
- }
- if (base || xdr->page_base) {
- pglen -= base;
- base += xdr->page_base;
- ppage += base >> PAGE_CACHE_SHIFT;
- base &= ~PAGE_CACHE_MASK;
- }
- do {
- len = PAGE_CACHE_SIZE;
- iov->iov_base = kmap(*ppage);
- if (base) {
- iov->iov_base += base;
- len -= base;
- base = 0;
- }
- if (pglen < len)
- len = pglen;
- iov->iov_len = len;
- iov++;
- ppage++;
- } while ((pglen -= len) != 0);
-map_tail:
- if (xdr->tail[0].iov_len) {
- iov->iov_len = xdr->tail[0].iov_len - base;
- iov->iov_base = (char *)xdr->tail[0].iov_base + base;
- iov++;
- }
- return (iov - iov_base);
-}
-
-void xdr_kunmap(struct xdr_buf *xdr, size_t base)
-{
- struct page **ppage = xdr->pages;
- unsigned int pglen = xdr->page_len;
-
- if (!pglen)
- return;
- if (base > xdr->head[0].iov_len)
- base -= xdr->head[0].iov_len;
- else
- base = 0;
-
- if (base >= pglen)
- return;
- if (base || xdr->page_base) {
- pglen -= base;
- base += xdr->page_base;
- ppage += base >> PAGE_CACHE_SHIFT;
- /* Note: The offset means that the length of the first
- * page is really (PAGE_CACHE_SIZE - (base & ~PAGE_CACHE_MASK)).
- * In order to avoid an extra test inside the loop,
- * we bump pglen here, and just subtract PAGE_CACHE_SIZE... */
- pglen += base & ~PAGE_CACHE_MASK;
- }
- for (;;) {
- flush_dcache_page(*ppage);
- kunmap(*ppage);
- if (pglen <= PAGE_CACHE_SIZE)
- break;
- pglen -= PAGE_CACHE_SIZE;
- ppage++;
- }
-}
-
void
xdr_partial_copy_from_skb(struct xdr_buf *xdr, unsigned int base,
skb_reader_t *desc,
do {
/* Are any pointers crossing a page boundary? */
if (pgto_base == 0) {
+ flush_dcache_page(*pgto);
pgto_base = PAGE_CACHE_SIZE;
pgto--;
}
kunmap_atomic(vto, KM_USER0);
} while ((len -= copy) != 0);
+ flush_dcache_page(*pgto);
}
/*
pgbase += copy;
if (pgbase == PAGE_CACHE_SIZE) {
+ flush_dcache_page(*pgto);
pgbase = 0;
pgto++;
}
p += copy;
} while ((len -= copy) != 0);
+ flush_dcache_page(*pgto);
}
/*
* Copies data into an arbitrary memory location from an array of pages
* The copy is assumed to be non-overlapping.
*/
-void
+static void
_copy_from_pages(char *p, struct page **pages, size_t pgbase, size_t len)
{
struct page **pgfrom;
* 'len' bytes. The extra data is not lost, but is instead
* moved into the inlined pages and/or the tail.
*/
-void
+static void
xdr_shrink_bufhead(struct xdr_buf *buf, size_t len)
{
struct kvec *head, *tail;
* 'len' bytes. The extra data is not lost, but is instead
* moved into the tail.
*/
-void
+static void
xdr_shrink_pagelen(struct xdr_buf *buf, size_t len)
{
struct kvec *tail;
extern int sysctl_unix_max_dgram_qlen;
-ctl_table unix_table[] = {
+static ctl_table unix_table[] = {
{
.ctl_name = NET_UNIX_MAX_DGRAM_QLEN,
.procname = "max_dgram_qlen",
* Added /proc/sys/net/x25 directory entry (empty =) ). [MS]
*/
-#include <linux/mm.h>
#include <linux/sysctl.h>
#include <linux/skbuff.h>
#include <linux/socket.h>
*/
#include <linux/config.h>
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
-#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/stat.h>
-#include <linux/inet.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>
-#include <asm/system.h>
-#include <asm/uaccess.h>
-#include <linux/fcntl.h>
-#include <linux/termios.h> /* For TIOCINQ/OUTQ */
-#include <linux/mm.h>
-#include <linux/interrupt.h>
-#include <linux/notifier.h>
-#include <linux/proc_fs.h>
#include <linux/if_arp.h>
#include <net/x25.h>
return 0;
}
-int x25_llc_receive_frame(struct sk_buff *skb, struct net_device *dev,
- struct packet_type *ptype)
-{
- struct x25_neigh *nb;
- int rc = 0;
-
- skb->sk = NULL;
-
- /*
- * Packet received from unrecognised device, throw it away.
- */
- nb = x25_get_neigh(dev);
- if (!nb) {
- printk(KERN_DEBUG "X.25: unknown_neighbour - %s\n", dev->name);
- kfree_skb(skb);
- } else {
- rc = x25_receive_data(skb, nb);
- x25_neigh_put(nb);
- }
-
- return rc;
-}
-
void x25_establish_link(struct x25_neigh *nb)
{
struct sk_buff *skb;
* negotiation.
*/
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>
-#include <asm/system.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <net/x25.h>
/*
*/
#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>
-#include <net/ip.h> /* For ip_rcv */
#include <net/tcp.h>
-#include <asm/system.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <net/x25.h>
static int x25_queue_rx_frame(struct sock *sk, struct sk_buff *skb, int more)
* 2000-09-04 Henner Eisen dev_hold() / dev_put() for x25_neigh.
*/
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/timer.h>
-#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
-#include <net/sock.h>
-#include <asm/system.h>
#include <asm/uaccess.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <linux/init.h>
#include <net/x25.h>
static struct list_head x25_neigh_list = LIST_HEAD_INIT(x25_neigh_list);
-static rwlock_t x25_neigh_list_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(x25_neigh_list_lock);
static void x25_t20timer_expiry(unsigned long);
+static void x25_transmit_restart_confirmation(struct x25_neigh *nb);
+static void x25_transmit_restart_request(struct x25_neigh *nb);
+
/*
* Linux set/reset timer routines
*/
/*
* This routine is called when a Restart Request is needed
*/
-void x25_transmit_restart_request(struct x25_neigh *nb)
+static void x25_transmit_restart_request(struct x25_neigh *nb)
{
unsigned char *dptr;
int len = X25_MAX_L2_LEN + X25_STD_MIN_LEN + 2;
/*
* This routine is called when a Restart Confirmation is needed
*/
-void x25_transmit_restart_confirmation(struct x25_neigh *nb)
+static void x25_transmit_restart_confirmation(struct x25_neigh *nb)
{
unsigned char *dptr;
int len = X25_MAX_L2_LEN + X25_STD_MIN_LEN;
x25_send_frame(skb, nb);
}
-/*
- * This routine is called when a Diagnostic is required.
- */
-void x25_transmit_diagnostic(struct x25_neigh *nb, unsigned char diag)
-{
- unsigned char *dptr;
- int len = X25_MAX_L2_LEN + X25_STD_MIN_LEN + 1;
- struct sk_buff *skb = alloc_skb(len, GFP_ATOMIC);
-
- if (!skb)
- return;
-
- skb_reserve(skb, X25_MAX_L2_LEN);
-
- dptr = skb_put(skb, X25_STD_MIN_LEN + 1);
-
- *dptr++ = nb->extended ? X25_GFI_EXTSEQ : X25_GFI_STDSEQ;
- *dptr++ = 0x00;
- *dptr++ = X25_DIAGNOSTIC;
- *dptr++ = diag;
-
- skb->sk = NULL;
-
- x25_send_frame(skb, nb);
-}
-
/*
* This routine is called when a Clear Request is needed outside of the context
* of a connected socket.
* needed cleaned seq-number fields.
*/
-#include <linux/errno.h>
-#include <linux/types.h>
#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>
-#include <asm/system.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <net/x25.h>
static int x25_pacsize_to_bytes(unsigned int pacsize)
return 0;
}
-struct seq_operations x25_seq_route_ops = {
+static struct seq_operations x25_seq_route_ops = {
.start = x25_seq_route_start,
.next = x25_seq_route_next,
.stop = x25_seq_route_stop,
.show = x25_seq_route_show,
};
-struct seq_operations x25_seq_socket_ops = {
+static struct seq_operations x25_seq_socket_ops = {
.start = x25_seq_socket_start,
.next = x25_seq_socket_next,
.stop = x25_seq_socket_stop,
#include <net/x25.h>
struct list_head x25_route_list = LIST_HEAD_INIT(x25_route_list);
-rwlock_t x25_route_list_lock = RW_LOCK_UNLOCKED;
+DEFINE_RWLOCK(x25_route_list_lock);
/*
* Add a new route.
* jun/24/01 Arnaldo C. Melo use skb_queue_purge, cleanups
*/
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/timer.h>
#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/tcp.h>
-#include <asm/system.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <net/x25.h>
/*
x25_stop_timer(sk);
}
}
+
+/*
+ * Compare 2 calluserdata structures, used to find correct listening sockets
+ * when call user data is used.
+ */
+int x25_check_calluserdata(struct x25_calluserdata *ours, struct x25_calluserdata *theirs)
+{
+ int i;
+ if (ours->cudlength != theirs->cudlength)
+ return 0;
+
+ for (i=0;i<ours->cudlength;i++) {
+ if (ours->cuddata[i] != theirs->cuddata[i]) {
+ return 0;
+ }
+ }
+ return 1;
+}
+
*/
#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/socket.h>
-#include <linux/in.h>
-#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/timer.h>
-#include <linux/string.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/inet.h>
-#include <linux/netdevice.h>
-#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/tcp.h>
-#include <asm/system.h>
-#include <linux/fcntl.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
#include <net/x25.h>
static void x25_heartbeat_expiry(unsigned long);
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/pfkeyv2.h>
+#include <linux/crypto.h>
#include <net/xfrm.h>
#if defined(CONFIG_INET_AH) || defined(CONFIG_INET_AH_MODULE) || defined(CONFIG_INET6_AH) || defined(CONFIG_INET6_AH_MODULE)
#include <net/ah.h>
}
return NULL;
}
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byid);
struct xfrm_algo_desc *xfrm_ealg_get_byid(int alg_id)
{
}
return NULL;
}
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byid);
struct xfrm_algo_desc *xfrm_calg_get_byid(int alg_id)
{
}
return NULL;
}
+EXPORT_SYMBOL_GPL(xfrm_calg_get_byid);
-struct xfrm_algo_desc *xfrm_aalg_get_byname(char *name)
+static struct xfrm_algo_desc *xfrm_get_byname(struct xfrm_algo_desc *list,
+ int entries, char *name,
+ int probe)
{
- int i;
+ int i, status;
if (!name)
return NULL;
- for (i=0; i < aalg_entries(); i++) {
- if (strcmp(name, aalg_list[i].name) == 0) {
- if (aalg_list[i].available)
- return &aalg_list[i];
- else
- break;
- }
- }
- return NULL;
-}
+ for (i = 0; i < entries; i++) {
+ if (strcmp(name, list[i].name))
+ continue;
-struct xfrm_algo_desc *xfrm_ealg_get_byname(char *name)
-{
- int i;
+ if (list[i].available)
+ return &list[i];
- if (!name)
- return NULL;
+ if (!probe)
+ break;
- for (i=0; i < ealg_entries(); i++) {
- if (strcmp(name, ealg_list[i].name) == 0) {
- if (ealg_list[i].available)
- return &ealg_list[i];
- else
- break;
- }
+ status = crypto_alg_available(name, 0);
+ if (!status)
+ break;
+
+ list[i].available = status;
+ return &list[i];
}
return NULL;
}
-struct xfrm_algo_desc *xfrm_calg_get_byname(char *name)
+struct xfrm_algo_desc *xfrm_aalg_get_byname(char *name, int probe)
{
- int i;
+ return xfrm_get_byname(aalg_list, aalg_entries(), name, probe);
+}
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byname);
- if (!name)
- return NULL;
+struct xfrm_algo_desc *xfrm_ealg_get_byname(char *name, int probe)
+{
+ return xfrm_get_byname(ealg_list, ealg_entries(), name, probe);
+}
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byname);
- for (i=0; i < calg_entries(); i++) {
- if (strcmp(name, calg_list[i].name) == 0) {
- if (calg_list[i].available)
- return &calg_list[i];
- else
- break;
- }
- }
- return NULL;
+struct xfrm_algo_desc *xfrm_calg_get_byname(char *name, int probe)
+{
+ return xfrm_get_byname(calg_list, calg_entries(), name, probe);
}
+EXPORT_SYMBOL_GPL(xfrm_calg_get_byname);
struct xfrm_algo_desc *xfrm_aalg_get_byidx(unsigned int idx)
{
return &aalg_list[idx];
}
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byidx);
struct xfrm_algo_desc *xfrm_ealg_get_byidx(unsigned int idx)
{
return &ealg_list[idx];
}
-
-struct xfrm_algo_desc *xfrm_calg_get_byidx(unsigned int idx)
-{
- if (idx >= calg_entries())
- return NULL;
-
- return &calg_list[idx];
-}
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byidx);
/*
* Probe for the availability of crypto algorithms, and set the available
}
#endif
}
+EXPORT_SYMBOL_GPL(xfrm_probe_algs);
int xfrm_count_auth_supported(void)
{
n++;
return n;
}
+EXPORT_SYMBOL_GPL(xfrm_count_auth_supported);
int xfrm_count_enc_supported(void)
{
n++;
return n;
}
+EXPORT_SYMBOL_GPL(xfrm_count_enc_supported);
/* Move to common area: it is shared with AH. */
if (len)
BUG();
}
+EXPORT_SYMBOL_GPL(skb_icv_walk);
#if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE)
*/
#include <linux/slab.h>
+#include <linux/module.h>
#include <net/ip.h>
#include <net/xfrm.h>
xfrm_state_put(sp->x[i].xvec);
kmem_cache_free(secpath_cachep, sp);
}
+EXPORT_SYMBOL(__secpath_destroy);
struct sec_path *secpath_dup(struct sec_path *src)
{
atomic_set(&sp->refcnt, 1);
return sp;
}
+EXPORT_SYMBOL(secpath_dup);
/* Fetch spi and seq from ipsec header */
*seq = *(u32*)(skb->h.raw + offset_seq);
return 0;
}
+EXPORT_SYMBOL(xfrm_parse_spi);
void __init xfrm_input_init(void)
{
# Linus' kernel sanity checking tool
ifneq ($(KBUILD_CHECKSRC),0)
- CHECKFLAGS += -I$(shell $(CC) -print-file-name=include)
ifeq ($(KBUILD_CHECKSRC),2)
quiet_cmd_force_checksrc = CHECK $<
cmd_force_checksrc = $(CHECK) $(CHECKFLAGS) $(c_flags) $< ;
# Copyright (C) Martin Schlemmer <azarah@nosferatu.za.org>
# Released under the terms of the GNU GPL
#
-# Generate a newline separated list of entries from the file/directory pointed
-# out by the environment variable: CONFIG_INITRAMFS_SOURCE
+# Generate a newline separated list of entries from the file/directory
+# supplied as an argument.
#
-# If CONFIG_INITRAMFS_SOURCE is non-existing then generate a small dummy file.
+# If a file/directory is not supplied then generate a small dummy file.
#
-# The output is suitable for gen_init_cpio as found in usr/Makefile.
+# The output is suitable for gen_init_cpio built from usr/gen_init_cpio.c.
#
-# TODO: Add support for symlinks, sockets and pipes when gen_init_cpio
-# supports them.
-simple_initramfs() {
+default_initramfs() {
cat <<-EOF
- # This is a very simple initramfs
+ # This is a very simple, default initramfs
dir /dev 0755 0 0
nod /dev/console 0600 0 0 c 5 1
filetype() {
local argv1="$1"
- if [ -f "${argv1}" ]; then
+ # symlink test must come before file test
+ if [ -L "${argv1}" ]; then
+ echo "slink"
+ elif [ -f "${argv1}" ]; then
echo "file"
elif [ -d "${argv1}" ]; then
echo "dir"
elif [ -b "${argv1}" -o -c "${argv1}" ]; then
echo "nod"
+ elif [ -p "${argv1}" ]; then
+ echo "pipe"
+ elif [ -S "${argv1}" ]; then
+ echo "sock"
else
echo "invalid"
fi
parse() {
local location="$1"
local name="${location/${srcdir}//}"
+ # change '//' into '/'
+ name="${name//\/\///}"
local mode="$2"
local uid="$3"
local gid="$4"
local ftype=$(filetype "${location}")
+ # remap uid/gid to 0 if necessary
+ [ "$uid" -eq "$root_uid" ] && uid=0
+ [ "$gid" -eq "$root_gid" ] && gid=0
local str="${mode} ${uid} ${gid}"
[ "${ftype}" == "invalid" ] && return 0
fi
str="${ftype} ${name} ${str} ${dev_type} ${maj} ${min}"
;;
+ "slink")
+ local target=$(LC_ALL=C ls -l "${location}" | \
+ gawk '{print $11}')
+ str="${ftype} ${name} ${target} ${str}"
+ ;;
*)
str="${ftype} ${name} ${str}"
;;
return 0
}
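
With the new "slink", "pipe" and "sock" checks in filetype() and the "slink" case in parse() above, symlinks found in the source directory are written out as gen_init_cpio entries of the form "slink <name> <target> <mode> <uid> <gid>". A minimal sketch of the resulting list, assuming a hypothetical source tree with one symlink and one device node (none of these paths or values come from the patch itself):

    # rootfs/init -> /sbin/init and rootfs/dev/console (char 5,1) would yield:
    slink /init /sbin/init 777 0 0
    nod /dev/console 0600 0 0 c 5 1
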
-if [ -z "$1" ]; then
- simple_initramfs
-elif [ -f "$1" ]; then
- print_mtime "$1"
- cat "$1"
-elif [ -d "$1" ]; then
- srcdir=$(echo "$1" | sed -e 's://*:/:g')
- dirlist=$(find "${srcdir}" -printf "%p %m %U %G\n" 2>/dev/null)
-
- # If $dirlist is only one line, then the directory is empty
- if [ "$(echo "${dirlist}" | wc -l)" -gt 1 ]; then
- print_mtime "$1"
+usage() {
+ printf "Usage:\n"
+ printf "$0 [ [-u <root_uid>] [-g <root_gid>] [-d | <cpio_source>] ] . . .\n"
+ printf "\n"
+ printf -- "-u <root_uid> User ID to map to user ID 0 (root).\n"
+ printf " <root_uid> is only meaningful if <cpio_source>\n"
+ printf " is a directory.\n"
+ printf -- "-g <root_gid> Group ID to map to group ID 0 (root).\n"
+ printf " <root_gid> is only meaningful if <cpio_source>\n"
+ printf " is a directory.\n"
+ printf "<cpio_source> File list or directory for cpio archive.\n"
+ printf " If <cpio_source> is not provided then a\n"
+ printf " a default list will be output.\n"
+ printf -- "-d Output the default cpio list. If no <cpio_source>\n"
+ printf " is given then the default cpio list will be output.\n"
+ printf "\n"
+ printf "All options may be repeated and are interpreted sequentially\n"
+ printf "and immediately. -u and -g states are preserved across\n"
+ printf "<cpio_source> options so an explicit \"-u 0 -g 0\" is required\n"
+ printf "to reset the root/group mapping.\n"
+}
+
+build_list() {
+ printf "\n#####################\n# $cpio_source\n"
+
+ if [ -f "$cpio_source" ]; then
+ print_mtime "$cpio_source"
+ cat "$cpio_source"
+ elif [ -d "$cpio_source" ]; then
+ srcdir=$(echo "$cpio_source" | sed -e 's://*:/:g')
+ dirlist=$(find "${srcdir}" -printf "%p %m %U %G\n" 2>/dev/null)
+
+ # If $dirlist is only one line, then the directory is empty
+ if [ "$(echo "${dirlist}" | wc -l)" -gt 1 ]; then
+ print_mtime "$cpio_source"
- echo "${dirlist}" | \
- while read x; do
- parse ${x}
- done
+ echo "${dirlist}" | \
+ while read x; do
+ parse ${x}
+ done
+ else
+ # Failsafe in case directory is empty
+ default_initramfs
+ fi
else
- # Failsafe in case directory is empty
- simple_initramfs
+ echo " $0: Cannot open '$cpio_source'" >&2
+ exit 1
fi
-else
- echo " $0: Cannot open '$1' (CONFIG_INITRAMFS_SOURCE)" >&2
- exit 1
-fi
+}
+
+
+root_uid=0
+root_gid=0
+
+while [ $# -gt 0 ]; do
+ arg="$1"
+ shift
+ case "$arg" in
+ "-u")
+ root_uid="$1"
+ shift
+ ;;
+ "-g")
+ root_gid="$1"
+ shift
+ ;;
+ "-d")
+ default_list="$arg"
+ default_initramfs
+ ;;
+ "-h")
+ usage
+ exit 0
+ ;;
+ *)
+ case "$arg" in
+ "-"*)
+ printf "ERROR: unknown option \"$arg\"\n" >&2
+ printf "If the filename validly begins with '-', then it must be prefixed\n" >&2
+ printf "by './' so that it won't be interpreted as an option." >&2
+ printf "\n" >&2
+ usage >&2
+ exit 1
+ ;;
+ *)
+ cpio_source="$arg"
+ build_list
+ ;;
+ esac
+ ;;
+ esac
+done
+
+# spit out the default cpio list if a source hasn't been specified
+[ -z "$cpio_source" -a -z "$default_list" ] && default_initramfs
exit 0
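
For reference, a hypothetical invocation of the reworked script, based on its usage text (the script name gen_initramfs_list.sh, the source directory and the uid/gid values are assumptions for illustration, not taken from the patch):

    # Build a cpio list from ./rootfs, remapping uid/gid 1000 to root (0):
    sh gen_initramfs_list.sh -u 1000 -g 1000 ./rootfs > initramfs_list

    # Print only the built-in default list:
    sh gen_initramfs_list.sh -d
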
{
expr_print(e, expr_print_file_helper, out, E_NONE);
}
+
+static void expr_print_gstr_helper(void *data, const char *str)
+{
+ str_append((struct gstr*)data, str);
+}
+
+void expr_gstr_print(struct expr *e, struct gstr *gs)
+{
+ expr_print(e, expr_print_gstr_helper, gs, E_NONE);
+}
struct expr *expr_trans_compare(struct expr *e, enum expr_type type, struct symbol *sym);
void expr_fprint(struct expr *e, FILE *out);
+struct gstr; /* forward */
+void expr_gstr_print(struct expr *e, struct gstr *gs);
static inline int expr_is_yes(struct expr *e)
{
GtkWidget *widget;
GtkTextBuffer *txtbuf;
char title[256];
- GdkPixmap *pixmap;
- GdkBitmap *mask;
GtkStyle *style;
xml = glade_xml_new(glade_file, "window1", NULL);
style = gtk_widget_get_style(main_wnd);
widget = glade_xml_get_widget(xml, "toolbar1");
- pixmap = gdk_pixmap_create_from_xpm_d(main_wnd->window, &mask,
- &style->bg[GTK_STATE_NORMAL],
- (gchar **) xpm_single_view);
- gtk_image_set_from_pixmap(GTK_IMAGE
- (((GtkToolbarChild
- *) (g_list_nth(GTK_TOOLBAR(widget)->
- children,
- 5)->data))->icon),
- pixmap, mask);
- pixmap =
- gdk_pixmap_create_from_xpm_d(main_wnd->window, &mask,
- &style->bg[GTK_STATE_NORMAL],
- (gchar **) xpm_split_view);
- gtk_image_set_from_pixmap(GTK_IMAGE
- (((GtkToolbarChild
- *) (g_list_nth(GTK_TOOLBAR(widget)->
- children,
- 6)->data))->icon),
- pixmap, mask);
- pixmap =
- gdk_pixmap_create_from_xpm_d(main_wnd->window, &mask,
- &style->bg[GTK_STATE_NORMAL],
- (gchar **) xpm_tree_view);
- gtk_image_set_from_pixmap(GTK_IMAGE
- (((GtkToolbarChild
- *) (g_list_nth(GTK_TOOLBAR(widget)->
- children,
- 7)->data))->icon),
- pixmap, mask);
-
switch (view_mode) {
case SINGLE_VIEW:
widget = glade_xml_get_widget(xml, "button4");
<property name="tooltips">True</property>
<child>
- <widget class="button" id="button1">
+ <widget class="GtkToolButton" id="button1">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Goes up of one level (single view)</property>
<property name="label" translatable="yes">Back</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-undo</property>
- <signal name="pressed" handler="on_back_pressed"/>
+ <property name="stock-id">gtk-undo</property>
+ <signal name="clicked" handler="on_back_pressed"/>
</widget>
</child>
</child>
<child>
- <widget class="button" id="button2">
+ <widget class="GtkToolButton" id="button2">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Load a config file</property>
<property name="label" translatable="yes">Load</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-open</property>
- <signal name="pressed" handler="on_load_pressed"/>
+ <property name="stock-id">gtk-open</property>
+ <signal name="clicked" handler="on_load_pressed"/>
</widget>
</child>
<child>
- <widget class="button" id="button3">
+ <widget class="GtkToolButton" id="button3">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Save a config file</property>
<property name="label" translatable="yes">Save</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-save</property>
- <signal name="pressed" handler="on_save_pressed"/>
+ <property name="stock-id">gtk-save</property>
+ <signal name="clicked" handler="on_save_pressed"/>
</widget>
</child>
</child>
<child>
- <widget class="button" id="button4">
+ <widget class="GtkToolButton" id="button4">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Single view</property>
<property name="label" translatable="yes">Single</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-missing-image</property>
+ <property name="stock-id">gtk-indent</property>
<signal name="clicked" handler="on_single_clicked" last_modification_time="Sun, 12 Jan 2003 14:28:39 GMT"/>
</widget>
</child>
<child>
- <widget class="button" id="button5">
+ <widget class="GtkToolButton" id="button5">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Split view</property>
<property name="label" translatable="yes">Split</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-missing-image</property>
+ <property name="stock-id">gtk-copy</property>
<signal name="clicked" handler="on_split_clicked" last_modification_time="Sun, 12 Jan 2003 14:28:45 GMT"/>
</widget>
</child>
<child>
- <widget class="button" id="button6">
+ <widget class="GtkToolButton" id="button6">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Full view</property>
<property name="label" translatable="yes">Full</property>
<property name="use_underline">True</property>
- <property name="stock_pixmap">gtk-missing-image</property>
+ <property name="stock-id">gtk-justify-left</property>
<signal name="clicked" handler="on_full_clicked" last_modification_time="Sun, 12 Jan 2003 14:28:50 GMT"/>
</widget>
</child>
</child>
<child>
- <widget class="button" id="button7">
+ <widget class="GtkToolButton" id="button7">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Collapse the whole tree in the right frame</property>
<property name="label" translatable="yes">Collapse</property>
<property name="use_underline">True</property>
- <signal name="pressed" handler="on_collapse_pressed"/>
+ <property name="stock-id">gtk-zoom-out</property>
+ <signal name="clicked" handler="on_collapse_pressed"/>
</widget>
</child>
<child>
- <widget class="button" id="button8">
+ <widget class="GtkToolButton" id="button8">
<property name="visible">True</property>
<property name="tooltip" translatable="yes">Expand the whole tree in the right frame</property>
<property name="label" translatable="yes">Expand</property>
<property name="use_underline">True</property>
- <signal name="pressed" handler="on_expand_pressed"/>
+ <property name="stock-id">gtk-zoom-in</property>
+ <signal name="clicked" handler="on_expand_pressed"/>
</widget>
</child>
</widget>
void menu_add_symbol(enum prop_type type, struct symbol *sym, struct expr *dep);
void menu_finalize(struct menu *parent);
void menu_set_type(int type);
+
+/* util.c */
struct file *file_lookup(const char *name);
int file_write_dep(const char *name);
+struct gstr {
+ size_t len;
+ char *s;
+};
+struct gstr str_new(void);
+struct gstr str_assign(const char *s);
+void str_free(struct gstr *gs);
+void str_append(struct gstr *gs, const char *s);
+void str_printf(struct gstr *gs, const char *fmt, ...);
+const char *str_get(struct gstr *gs);
+
/* symbol.c */
void sym_init(void);
void sym_clear_all_valid(void);
P(sym_lookup,struct symbol *,(const char *name, int isconst));
P(sym_find,struct symbol *,(const char *name));
+P(sym_re_search,struct symbol **,(const char *pattern));
P(sym_type_name,const char *,(enum symbol_type type));
P(sym_calc_value,void,(struct symbol *sym));
P(sym_get_type,enum symbol_type,(struct symbol *sym));
return menu;
}
-struct file *file_lookup(const char *name)
-{
- struct file *file;
-
- for (file = file_list; file; file = file->next) {
- if (!strcmp(name, file->name))
- return file;
- }
-
- file = malloc(sizeof(*file));
- memset(file, 0, sizeof(*file));
- file->name = strdup(name);
- file->next = file_list;
- file_list = file;
- return file;
-}
-
-int file_write_dep(const char *name)
-{
- struct file *file;
- FILE *out;
-
- if (!name)
- name = ".config.cmd";
- out = fopen("..config.tmp", "w");
- if (!out)
- return 1;
- fprintf(out, "deps_config := \\\n");
- for (file = file_list; file; file = file->next) {
- if (file->next)
- fprintf(out, "\t%s \\\n", file->name);
- else
- fprintf(out, "\t%s\n", file->name);
- }
- fprintf(out, "\n.config include/linux/autoconf.h: $(deps_config)\n\n$(deps_config):\n");
- fclose(out);
- rename("..config.tmp", name);
- return 0;
-}
-
#include <ctype.h>
#include <stdlib.h>
#include <string.h>
+#include <regex.h>
#include <sys/utsname.h>
#define LKC_DIRECT_LINK
return symbol;
}
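+/* Return a NULL-terminated array of all named, non-constant symbols whose
+ * names match the given POSIX extended regex (case-insensitive); the caller
+ * is responsible for freeing the array. */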
+struct symbol **sym_re_search(const char *pattern)
+{
+ struct symbol *sym, **sym_arr = NULL;
+ int i, cnt, size;
+ regex_t re;
+
+ cnt = size = 0;
+ /* Skip if empty */
+ if (strlen(pattern) == 0)
+ return NULL;
+ if (regcomp(&re, pattern, REG_EXTENDED|REG_NOSUB|REG_ICASE))
+ return NULL;
+
+ for_all_symbols(i, sym) {
+ if (sym->flags & SYMBOL_CONST || !sym->name)
+ continue;
+ if (regexec(&re, sym->name, 0, NULL, 0))
+ continue;
+ if (cnt + 1 >= size) {
+ void *tmp = sym_arr;
+ size += 16;
+ sym_arr = realloc(sym_arr, size * sizeof(struct symbol *));
+ if (!sym_arr) {
+ free(tmp);
+ return NULL;
+ }
+ }
+ sym_arr[cnt++] = sym;
+ }
+ if (sym_arr)
+ sym_arr[cnt] = NULL;
+ regfree(&re);
+
+ return sym_arr;
+}
+
+
struct symbol *sym_check_deps(struct symbol *sym);
static struct symbol *sym_check_expr_deps(struct expr *e)
--- /dev/null
+/*
+ * Copyright (C) 2002-2005 Roman Zippel <zippel@linux-m68k.org>
+ * Copyright (C) 2002-2005 Sam Ravnborg <sam@ravnborg.org>
+ *
+ * Released under the terms of the GNU GPL v2.0.
+ */
+
+#include <string.h>
+#include "lkc.h"
+
+/* file already present in list? If not add it */
+struct file *file_lookup(const char *name)
+{
+ struct file *file;
+
+ for (file = file_list; file; file = file->next) {
+ if (!strcmp(name, file->name))
+ return file;
+ }
+
+ file = malloc(sizeof(*file));
+ memset(file, 0, sizeof(*file));
+ file->name = strdup(name);
+ file->next = file_list;
+ file_list = file;
+ return file;
+}
+
+/* write a dependency file as used by kbuild to track dependencies */
+int file_write_dep(const char *name)
+{
+ struct file *file;
+ FILE *out;
+
+ if (!name)
+ name = ".config.cmd";
+ out = fopen("..config.tmp", "w");
+ if (!out)
+ return 1;
+ fprintf(out, "deps_config := \\\n");
+ for (file = file_list; file; file = file->next) {
+ if (file->next)
+ fprintf(out, "\t%s \\\n", file->name);
+ else
+ fprintf(out, "\t%s\n", file->name);
+ }
+ fprintf(out, "\n.config include/linux/autoconf.h: $(deps_config)\n\n$(deps_config):\n");
+ fclose(out);
+ rename("..config.tmp", name);
+ return 0;
+}
+
+
+/* Allocate initial growable string */
+struct gstr str_new(void)
+{
+ struct gstr gs;
+ gs.s = malloc(sizeof(char) * 64);
+	gs.len = 64;
+ strcpy(gs.s, "\0");
+ return gs;
+}
+
+/* Allocate and assign growable string */
+struct gstr str_assign(const char *s)
+{
+ struct gstr gs;
+ gs.s = strdup(s);
+ gs.len = strlen(s) + 1;
+ return gs;
+}
+
+/* Free storage for growable string */
+void str_free(struct gstr *gs)
+{
+ if (gs->s)
+ free(gs->s);
+ gs->s = NULL;
+ gs->len = 0;
+}
+
+/* Append to growable string */
+void str_append(struct gstr *gs, const char *s)
+{
+ size_t l = strlen(gs->s) + strlen(s) + 1;
+ if (l > gs->len) {
+ gs->s = realloc(gs->s, l);
+ gs->len = l;
+ }
+ strcat(gs->s, s);
+}
+
+/* Append printf formatted string to growable string */
+void str_printf(struct gstr *gs, const char *fmt, ...)
+{
+ va_list ap;
+ char s[10000]; /* big enough... */
+ va_start(ap, fmt);
+ vsnprintf(s, sizeof(s), fmt, ap);
+ str_append(gs, s);
+ va_end(ap);
+}
+
+/* Retrieve value of growable string */
+const char *str_get(struct gstr *gs)
+{
+ return gs->s;
+}
+
}
#include "lex.zconf.c"
+#include "util.c"
#include "confdata.c"
#include "expr.c"
#include "symbol.c"
}
#include "lex.zconf.c"
+#include "util.c"
#include "confdata.c"
#include "expr.c"
#include "symbol.c"
while (key != ESC) {
key = wgetch(menu);
- if ( key == '/' ) {
- int ret = dialog_inputbox("Search Configuration Parameter",
- "Enter Keyword", height, width,
- (char *) NULL);
- if (ret == 0) {
- fprintf(stderr, "%s", dialog_input_result);
- return 26;
- }
- }
if (key < 256 && isalpha(key)) key = tolower(key);
case 'y':
case 'n':
case 'm':
+ case '/':
/* save scroll info */
if ( (f=fopen("lxdialog.scrltmp","w")) != NULL ) {
fprintf(f,"%d\n",scroll);
case 'n': return 4;
case 'm': return 5;
case ' ': return 6;
+ case '/': return 7;
}
return 0;
case 'h':
$l = read(OBJECT, $comment, $size);
die "read $size bytes from $object .comment failed" if ($l != $size);
close(OBJECT);
- if ($comment =~ /GCC\:.*GCC\:/m) {
+ if ($comment =~ /GCC\:.*GCC\:/m || $object =~ /built-in\.o/) {
++$ignore;
delete($object{$object});
}
$from !~ /\.data\.exit$/ &&
$from !~ /\.exit\.data$/ &&
$from !~ /\.altinstructions$/ &&
+ $from !~ /\.pdr$/ &&
$from !~ /\.debug_info$/ &&
$from !~ /\.debug_aranges$/ &&
$from !~ /\.debug_ranges$/ &&
expr --v 2>&1 | awk 'NR==1{print "Sh-utils ", $NF}'
+udevinfo -V | awk '{print "udev ", $3}'
+
if [ -e /proc/modules ]; then
X=`cat /proc/modules | sed -e "s/ .*$//"`
echo "Modules Loaded "$X
config SECURITY_SECLVL
tristate "BSD Secure Levels"
depends on SECURITY
+ select CRYPTO
select CRYPTO_SHA1
help
Implements BSD Secure Levels as an LSM. See
- Documentation/seclvl.txt for instructions on how to use this
+ <file:Documentation/seclvl.txt> for instructions on how to use this
module.
If you are unsure how to answer this question, answer N.
return keyctl_get_keyring_ID(arg2, arg3);
case KEYCTL_JOIN_SESSION_KEYRING:
- return keyctl_join_session_keyring(compat_ptr(arg3));
+ return keyctl_join_session_keyring(compat_ptr(arg2));
case KEYCTL_UPDATE:
return keyctl_update_key(arg2, compat_ptr(arg3), arg4);
static kmem_cache_t *key_jar;
static key_serial_t key_serial_next = 3;
struct rb_root key_serial_tree; /* tree of keys indexed by serial */
-spinlock_t key_serial_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(key_serial_lock);
struct rb_root key_user_tree; /* tree of quota records indexed by UID */
-spinlock_t key_user_lock = SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(key_user_lock);
static LIST_HEAD(key_types_list);
static DECLARE_RWSEM(key_types_sem);
(int) arg3);
case KEYCTL_JOIN_SESSION_KEYRING:
- return keyctl_join_session_keyring((const char __user *) arg3);
+ return keyctl_join_session_keyring((const char __user *) arg2);
case KEYCTL_UPDATE:
return keyctl_update_key((key_serial_t) arg2,
#define KEYRING_NAME_HASH_SIZE (1 << 5)
static struct list_head keyring_name_hash[KEYRING_NAME_HASH_SIZE];
-static rwlock_t keyring_name_lock = RW_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(keyring_name_lock);
static inline unsigned keyring_hash(const char *desc)
{
/* This file is automatically generated. Do not edit. */
-/* FLASK */
+TB_(common_file_perm_to_string)
+ S_("ioctl")
+ S_("read")
+ S_("write")
+ S_("create")
+ S_("getattr")
+ S_("setattr")
+ S_("lock")
+ S_("relabelfrom")
+ S_("relabelto")
+ S_("append")
+ S_("unlink")
+ S_("link")
+ S_("rename")
+ S_("execute")
+ S_("swapon")
+ S_("quotaon")
+ S_("mounton")
+TE_(common_file_perm_to_string)
-static char *common_file_perm_to_string[] =
-{
- "ioctl",
- "read",
- "write",
- "create",
- "getattr",
- "setattr",
- "lock",
- "relabelfrom",
- "relabelto",
- "append",
- "unlink",
- "link",
- "rename",
- "execute",
- "swapon",
- "quotaon",
- "mounton",
-};
+TB_(common_socket_perm_to_string)
+ S_("ioctl")
+ S_("read")
+ S_("write")
+ S_("create")
+ S_("getattr")
+ S_("setattr")
+ S_("lock")
+ S_("relabelfrom")
+ S_("relabelto")
+ S_("append")
+ S_("bind")
+ S_("connect")
+ S_("listen")
+ S_("accept")
+ S_("getopt")
+ S_("setopt")
+ S_("shutdown")
+ S_("recvfrom")
+ S_("sendto")
+ S_("recv_msg")
+ S_("send_msg")
+ S_("name_bind")
+TE_(common_socket_perm_to_string)
-static char *common_socket_perm_to_string[] =
-{
- "ioctl",
- "read",
- "write",
- "create",
- "getattr",
- "setattr",
- "lock",
- "relabelfrom",
- "relabelto",
- "append",
- "bind",
- "connect",
- "listen",
- "accept",
- "getopt",
- "setopt",
- "shutdown",
- "recvfrom",
- "sendto",
- "recv_msg",
- "send_msg",
- "name_bind",
-};
+TB_(common_ipc_perm_to_string)
+ S_("create")
+ S_("destroy")
+ S_("getattr")
+ S_("setattr")
+ S_("read")
+ S_("write")
+ S_("associate")
+ S_("unix_read")
+ S_("unix_write")
+TE_(common_ipc_perm_to_string)
-static char *common_ipc_perm_to_string[] =
-{
- "create",
- "destroy",
- "getattr",
- "setattr",
- "read",
- "write",
- "associate",
- "unix_read",
- "unix_write",
-};
-
-
-/* FLASK */
{ RTM_NEWTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_DELTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_GETTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWACTION, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELACTION, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETACTION, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_NEWPREFIX, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_GETPREFIX, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_GETMULTICAST, NETLINK_ROUTE_SOCKET__NLMSG_READ },
struct ebitmap trustedwriters;
struct ebitmap trustedobjects;
#endif
+
+ unsigned int policyvers;
};
extern int policydb_init(struct policydb *p);
size_t len;
};
-static inline void *next_entry(struct policy_file *fp, size_t bytes)
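+/* Copy the next 'bytes' bytes of the policy image into the caller-supplied
+ * buffer rather than returning a pointer into it; -EINVAL on a short image. */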
+static inline int next_entry(void *buf, struct policy_file *fp, size_t bytes)
{
- void *buf;
-
if (bytes > fp->len)
- return NULL;
+ return -EINVAL;
- buf = fp->data;
+ memcpy(buf, fp->data, bytes);
fp->data += bytes;
fp->len -= bytes;
- return buf;
+ return 0;
}
#endif /* _SS_POLICYDB_H_ */
source "sound/arm/Kconfig"
+source "sound/mips/Kconfig"
+
# the following will depend on the order of config.
# here assuming USB is defined before ALSA
source "sound/usb/Kconfig"
endmenu
menu "Open Sound System"
- depends on SOUND!=n && (BROKEN || !(SPARC32 || SPARC64))
+ depends on SOUND!=n && (BROKEN || (!SPARC32 && !SPARC64))
config SOUND_PRIME
tristate "Open Sound System (DEPRECATED)"
obj-$(CONFIG_SOUND) += soundcore.o
obj-$(CONFIG_SOUND_PRIME) += oss/
obj-$(CONFIG_DMASOUND) += oss/
-obj-$(CONFIG_SND) += core/ i2c/ drivers/ isa/ pci/ ppc/ arm/ synth/ usb/ sparc/ parisc/ pcmcia/
+obj-$(CONFIG_SND) += core/ i2c/ drivers/ isa/ pci/ ppc/ arm/ synth/ usb/ sparc/ parisc/ pcmcia/ mips/
ifeq ($(CONFIG_SND),y)
obj-y += last.o
MODULE_DESCRIPTION("Hardware dependent layer");
MODULE_LICENSE("GPL");
-snd_hwdep_t *snd_hwdep_devices[SNDRV_CARDS * SNDRV_MINOR_HWDEPS];
+static snd_hwdep_t *snd_hwdep_devices[SNDRV_CARDS * SNDRV_MINOR_HWDEPS];
static DECLARE_MUTEX(register_mutex);
*/
#include <sound/driver.h>
-#include <linux/version.h>
#include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/time.h>
#include <linux/smp_lock.h>
-#include <linux/utsname.h>
-#include <linux/config.h>
-
#include <sound/core.h>
-#include <sound/version.h>
#include <sound/minors.h>
#include <sound/info.h>
+#include <sound/version.h>
#include <linux/proc_fs.h>
#include <linux/devfs_fs_kernel.h>
#include <stdarg.h>
de->owner = THIS_MODULE;
}
-void snd_remove_proc_entry(struct proc_dir_entry *parent,
- struct proc_dir_entry *de)
+static void snd_remove_proc_entry(struct proc_dir_entry *parent,
+ struct proc_dir_entry *de)
{
if (de)
remove_proc_entry(de->name, parent);
*
* Returns the pointer of new instance or NULL on failure.
*/
-struct proc_dir_entry *snd_create_proc_entry(const char *name, mode_t mode,
- struct proc_dir_entry *parent)
+static struct proc_dir_entry *snd_create_proc_entry(const char *name, mode_t mode,
+ struct proc_dir_entry *parent)
{
struct proc_dir_entry *p;
p = create_proc_entry(name, mode, parent);
{
if (entry == NULL)
return;
- if (entry->name)
- kfree((char *)entry->name);
+ kfree(entry->name);
if (entry->private_free)
entry->private_free(entry);
kfree(entry);
static void snd_info_version_read(snd_info_entry_t *entry, snd_info_buffer_t * buffer)
{
- static char *kernel_version = system_utsname.release;
-
snd_iprintf(buffer,
- "Advanced Linux Sound Architecture Driver Version " CONFIG_SND_VERSION CONFIG_SND_DATE ".\n"
- "Compiled on " __DATE__ " for kernel %s"
-#ifdef CONFIG_SMP
- " (SMP)"
-#endif
-#ifdef MODVERSIONS
- " with versioned symbols"
-#endif
- ".\n", kernel_version);
+ "Advanced Linux Sound Architecture Driver Version "
+ CONFIG_SND_VERSION CONFIG_SND_DATE ".\n"
+ );
}
static int __init snd_info_version_init(void)
}
/* FIXME: need to unlock BKL to allow preemption */
-int snd_mixer_oss_ioctl(struct inode *inode, struct file *file,
- unsigned int cmd, unsigned long arg)
+static int snd_mixer_oss_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
{
int err;
/* FIXME: need to unlock BKL to allow preemption */
*right = snd_mixer_oss_conv1(uctl->value.integer.value[1], uinfo->value.integer.min, uinfo->value.integer.max, &pslot->volume[1]);
__unalloc:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
}
static void snd_mixer_oss_get_volume1_sw(snd_mixer_oss_file_t *fmixer,
*right = 0;
__unalloc:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
}
static int snd_mixer_oss_get_volume1(snd_mixer_oss_file_t *fmixer,
snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE, &kctl->id);
__unalloc:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
}
static void snd_mixer_oss_put_volume1_sw(snd_mixer_oss_file_t *fmixer,
snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE, &kctl->id);
__unalloc:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
}
static int snd_mixer_oss_put_volume1(snd_mixer_oss_file_t *fmixer,
err = 0;
__unlock:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
return err;
}
err = 0;
__unlock:
up_read(&card->controls_rwsem);
- if (uctl)
- kfree(uctl);
- if (uinfo)
- kfree(uinfo);
+ kfree(uctl);
+ kfree(uinfo);
return err;
}
data = (mulaw_t*)plugin->extra_data;
data->func = func;
data->conv = getput_index(format->format);
+ snd_assert(data->conv >= 0 && data->conv < 4*2*2, return -EINVAL);
plugin->transfer = mulaw_transfer;
*r_plugin = plugin;
return 0;
snd_pcm_sframes_t snd_pcm_plug_client_size(snd_pcm_plug_t *handle, snd_pcm_uframes_t drv_size);
snd_pcm_sframes_t snd_pcm_plug_slave_size(snd_pcm_plug_t *handle, snd_pcm_uframes_t clt_size);
-#define ROUTE_PLUGIN_USE_FLOAT 0
#define FULL ROUTE_PLUGIN_RESOLUTION
#define HALF ROUTE_PLUGIN_RESOLUTION / 2
typedef int route_ttable_entry_t;
return err;
data = (rate_t *)plugin->extra_data;
data->get = getput_index(src_format->format);
+ snd_assert(data->get >= 0 && data->get < 4*2*2, return -EINVAL);
data->put = getput_index(dst_format->format);
+ snd_assert(data->put >= 0 && data->put < 4*2*2, return -EINVAL);
if (src_format->rate < dst_format->rate) {
data->pitch = ((src_format->rate << SHIFT) + (dst_format->rate >> 1)) / dst_format->rate;
#define FORMAT(v) [SNDRV_PCM_FORMAT_##v] = #v
#define SUBFORMAT(v) [SNDRV_PCM_SUBFORMAT_##v] = #v
-char *snd_pcm_stream_names[] = {
+static char *snd_pcm_stream_names[] = {
STREAM(PLAYBACK),
STREAM(CAPTURE),
};
-char *snd_pcm_state_names[] = {
+static char *snd_pcm_state_names[] = {
STATE(OPEN),
STATE(SETUP),
STATE(PREPARED),
STATE(SUSPENDED),
};
-char *snd_pcm_access_names[] = {
+static char *snd_pcm_access_names[] = {
ACCESS(MMAP_INTERLEAVED),
ACCESS(MMAP_NONINTERLEAVED),
ACCESS(MMAP_COMPLEX),
ACCESS(RW_NONINTERLEAVED),
};
-char *snd_pcm_format_names[] = {
+static char *snd_pcm_format_names[] = {
FORMAT(S8),
FORMAT(U8),
FORMAT(S16_LE),
FORMAT(U18_3BE),
};
-char *snd_pcm_subformat_names[] = {
+static char *snd_pcm_subformat_names[] = {
SUBFORMAT(STD),
};
-char *snd_pcm_tstamp_mode_names[] = {
+static char *snd_pcm_tstamp_mode_names[] = {
TSTAMP(NONE),
TSTAMP(MMAP),
};
-const char *snd_pcm_stream_name(snd_pcm_stream_t stream)
+static const char *snd_pcm_stream_name(snd_pcm_stream_t stream)
{
snd_assert(stream <= SNDRV_PCM_STREAM_LAST, return NULL);
return snd_pcm_stream_names[stream];
}
-const char *snd_pcm_access_name(snd_pcm_access_t access)
+static const char *snd_pcm_access_name(snd_pcm_access_t access)
{
snd_assert(access <= SNDRV_PCM_ACCESS_LAST, return NULL);
return snd_pcm_access_names[access];
return snd_pcm_format_names[format];
}
-const char *snd_pcm_subformat_name(snd_pcm_subformat_t subformat)
+static const char *snd_pcm_subformat_name(snd_pcm_subformat_t subformat)
{
snd_assert(subformat <= SNDRV_PCM_SUBFORMAT_LAST, return NULL);
return snd_pcm_subformat_names[subformat];
}
-const char *snd_pcm_tstamp_mode_name(snd_pcm_tstamp_t mode)
+static const char *snd_pcm_tstamp_mode_name(snd_pcm_tstamp_t mode)
{
snd_assert(mode <= SNDRV_PCM_TSTAMP_LAST, return NULL);
return snd_pcm_tstamp_mode_names[mode];
}
-const char *snd_pcm_state_name(snd_pcm_state_t state)
+static const char *snd_pcm_state_name(snd_pcm_state_t state)
{
snd_assert(state <= SNDRV_PCM_STATE_LAST, return NULL);
return snd_pcm_state_names[state];
#if defined(CONFIG_SND_PCM_OSS) || defined(CONFIG_SND_PCM_OSS_MODULE)
#include <linux/soundcard.h>
-const char *snd_pcm_oss_format_name(int format)
+static const char *snd_pcm_oss_format_name(int format)
{
switch (format) {
case AFMT_MU_LAW:
runtime->private_free(runtime);
snd_free_pages((void*)runtime->status, PAGE_ALIGN(sizeof(snd_pcm_mmap_status_t)));
snd_free_pages((void*)runtime->control, PAGE_ALIGN(sizeof(snd_pcm_mmap_control_t)));
- if (runtime->hw_constraints.rules)
- kfree(runtime->hw_constraints.rules);
+ kfree(runtime->hw_constraints.rules);
kfree(runtime);
substream->runtime = NULL;
substream->pstr->substream_opened--;
EXPORT_SYMBOL(snd_pcm_open_substream);
EXPORT_SYMBOL(snd_pcm_release_substream);
EXPORT_SYMBOL(snd_pcm_format_name);
-EXPORT_SYMBOL(snd_pcm_subformat_name);
/* pcm_native.c */
EXPORT_SYMBOL(snd_pcm_link_rwlock);
EXPORT_SYMBOL(snd_pcm_start);
EXPORT_SYMBOL(snd_pcm_kernel_playback_ioctl);
EXPORT_SYMBOL(snd_pcm_kernel_capture_ioctl);
EXPORT_SYMBOL(snd_pcm_kernel_ioctl);
-EXPORT_SYMBOL(snd_pcm_open);
-EXPORT_SYMBOL(snd_pcm_release);
-EXPORT_SYMBOL(snd_pcm_playback_poll);
-EXPORT_SYMBOL(snd_pcm_capture_poll);
EXPORT_SYMBOL(snd_pcm_mmap_data);
#if SNDRV_PCM_INFO_MMAP_IOMEM
EXPORT_SYMBOL(snd_pcm_lib_mmap_iomem);
EXPORT_SYMBOL(snd_pcm_format_big_endian);
EXPORT_SYMBOL(snd_pcm_format_width);
EXPORT_SYMBOL(snd_pcm_format_physical_width);
-EXPORT_SYMBOL(snd_pcm_format_size);
EXPORT_SYMBOL(snd_pcm_format_silence_64);
EXPORT_SYMBOL(snd_pcm_format_set_silence);
EXPORT_SYMBOL(snd_pcm_build_linear_format);
MODULE_DESCRIPTION("Advanced Linux Sound Architecture FM Instrument support.");
MODULE_LICENSE("GPL");
-char *snd_seq_fm_id = SNDRV_SEQ_INSTR_ID_OPL2_3;
-
static int snd_seq_fm_put(void *private_data, snd_seq_kinstr_t *instr,
char __user *instr_data, long len, int atomic, int cmd)
{
memset(ops, 0, sizeof(*ops));
// ops->private_data = private_data;
ops->add_len = sizeof(fm_instrument_t);
- ops->instr_type = snd_seq_fm_id;
+ ops->instr_type = SNDRV_SEQ_INSTR_ID_OPL2_3;
ops->put = snd_seq_fm_put;
ops->get = snd_seq_fm_get;
ops->get_size = snd_seq_fm_get_size;
module_init(alsa_ainstr_fm_init)
module_exit(alsa_ainstr_fm_exit)
-EXPORT_SYMBOL(snd_seq_fm_id);
EXPORT_SYMBOL(snd_seq_fm_init);
MODULE_DESCRIPTION("Advanced Linux Sound Architecture GF1 (GUS) Patch support.");
MODULE_LICENSE("GPL");
-char *snd_seq_gf1_id = SNDRV_SEQ_INSTR_ID_GUS_PATCH;
-
static unsigned int snd_seq_gf1_size(unsigned int size, unsigned int format)
{
unsigned int result = size;
ops->private_data = private_data;
ops->kops.private_data = ops;
ops->kops.add_len = sizeof(gf1_instrument_t);
- ops->kops.instr_type = snd_seq_gf1_id;
+ ops->kops.instr_type = SNDRV_SEQ_INSTR_ID_GUS_PATCH;
ops->kops.put = snd_seq_gf1_put;
ops->kops.get = snd_seq_gf1_get;
ops->kops.get_size = snd_seq_gf1_get_size;
module_init(alsa_ainstr_gf1_init)
module_exit(alsa_ainstr_gf1_exit)
-EXPORT_SYMBOL(snd_seq_gf1_id);
EXPORT_SYMBOL(snd_seq_gf1_init);
MODULE_DESCRIPTION("Advanced Linux Sound Architecture IWFFFF support.");
MODULE_LICENSE("GPL");
-char *snd_seq_iwffff_id = SNDRV_SEQ_INSTR_ID_INTERWAVE;
-
static unsigned int snd_seq_iwffff_size(unsigned int size, unsigned int format)
{
unsigned int result = size;
ops->private_data = private_data;
ops->kops.private_data = ops;
ops->kops.add_len = sizeof(iwffff_instrument_t);
- ops->kops.instr_type = snd_seq_iwffff_id;
+ ops->kops.instr_type = SNDRV_SEQ_INSTR_ID_INTERWAVE;
ops->kops.put = snd_seq_iwffff_put;
ops->kops.get = snd_seq_iwffff_get;
ops->kops.get_size = snd_seq_iwffff_get_size;
module_init(alsa_ainstr_iw_init)
module_exit(alsa_ainstr_iw_exit)
-EXPORT_SYMBOL(snd_seq_iwffff_id);
EXPORT_SYMBOL(snd_seq_iwffff_init);
MODULE_DESCRIPTION("Advanced Linux Sound Architecture Simple Instrument support.");
MODULE_LICENSE("GPL");
-char *snd_seq_simple_id = SNDRV_SEQ_INSTR_ID_SIMPLE;
-
static unsigned int snd_seq_simple_size(unsigned int size, unsigned int format)
{
unsigned int result = size;
ops->private_data = private_data;
ops->kops.private_data = ops;
ops->kops.add_len = sizeof(simple_instrument_t);
- ops->kops.instr_type = snd_seq_simple_id;
+ ops->kops.instr_type = SNDRV_SEQ_INSTR_ID_SIMPLE;
ops->kops.put = snd_seq_simple_put;
ops->kops.get = snd_seq_simple_get;
ops->kops.get_size = snd_seq_simple_get_size;
module_init(alsa_ainstr_simple_init)
module_exit(alsa_ainstr_simple_exit)
-EXPORT_SYMBOL(snd_seq_simple_id);
EXPORT_SYMBOL(snd_seq_simple_init);
/* misc. functions for proc interface */
char *enabled_str(int bool);
-char *filemode_str(int fmode);
/* for debug */
snd_seq_oss_readq_delete(seq_oss_readq_t *q)
{
if (q) {
- if (q->q)
- kfree(q->q);
+ kfree(q->q);
kfree(q);
}
}
ev.queue = dp->queue;
ev.data.queue.queue = dp->queue;
ev.data.queue.param.value = value;
- return snd_seq_kernel_client_dispatch(dp->cseq, &ev, 0, 0);
+ return snd_seq_kernel_client_dispatch(dp->cseq, &ev, 1, 0);
}
/*
int snd_seq_kernel_client_enqueue_blocking(int client, snd_seq_event_t * ev, struct file *file, int atomic, int hop);
int snd_seq_kernel_client_write_poll(int clientid, struct file *file, poll_table *wait);
int snd_seq_client_notify_subscription(int client, int port, snd_seq_port_subscribe_t *info, int evtype);
-int snd_seq_deliver_event(client_t *client, snd_seq_event_t *event, int atomic, int hop);
#endif
}
}
-snd_seq_kcluster_t *snd_seq_cluster_new(int atomic)
-{
- return kcalloc(1, sizeof(snd_seq_kcluster_t), atomic ? GFP_ATOMIC : GFP_KERNEL);
-}
-
-void snd_seq_cluster_free(snd_seq_kcluster_t *cluster, int atomic)
-{
- if (cluster == NULL)
- return;
- kfree(cluster);
-}
-
-snd_seq_kinstr_t *snd_seq_instr_new(int add_len, int atomic)
+static snd_seq_kinstr_t *snd_seq_instr_new(int add_len, int atomic)
{
snd_seq_kinstr_t *instr;
return instr;
}
-int snd_seq_instr_free(snd_seq_kinstr_t *instr, int atomic)
+static int snd_seq_instr_free(snd_seq_kinstr_t *instr, int atomic)
{
int result = 0;
while ((cluster = list->chash[idx]) != NULL) {
list->chash[idx] = cluster->next;
list->ccount--;
- snd_seq_cluster_free(cluster, 0);
+ kfree(cluster);
}
}
kfree(list);
/*
* allocate an event cell.
*/
-int snd_seq_cell_alloc(pool_t *pool, snd_seq_event_cell_t **cellp, int nonblock, struct file *file)
+static int snd_seq_cell_alloc(pool_t *pool, snd_seq_event_cell_t **cellp, int nonblock, struct file *file)
{
snd_seq_event_cell_t *cell;
unsigned long flags;
pool->total_elements = 0;
spin_unlock_irqrestore(&pool->lock, flags);
- if (ptr)
- vfree(ptr);
+ vfree(ptr);
spin_lock_irqsave(&pool->lock, flags);
pool->closing = 0;
};
extern void snd_seq_cell_free(snd_seq_event_cell_t* cell);
-int snd_seq_cell_alloc(pool_t *pool, snd_seq_event_cell_t **cellp, int nonblock, struct file *file);
int snd_seq_event_dup(pool_t *pool, snd_seq_event_t *event, snd_seq_event_cell_t **cellp, int nonblock, struct file *file);
static void sysex(snd_midi_op_t *ops, void *private, unsigned char *sysex, int len, snd_midi_channel_set_t *chset);
static void all_sounds_off(snd_midi_op_t *ops, void *private, snd_midi_channel_t *chan);
static void all_notes_off(snd_midi_op_t *ops, void *private, snd_midi_channel_t *chan);
-void snd_midi_reset_controllers(snd_midi_channel_t *chan);
+static void snd_midi_reset_controllers(snd_midi_channel_t *chan);
static void reset_all_channels(snd_midi_channel_set_t *chset);
} else if (buf[5] == 0x01 && buf[6] == 0x30) {
/* reverb mode */
- parsed = SNDRV_MIDI_SYSEX_GS_CHORUS_MODE;
+ parsed = SNDRV_MIDI_SYSEX_GS_REVERB_MODE;
chset->gs_reverb_mode = buf[7];
} else if (buf[5] == 0x01 && buf[6] == 0x38) {
/* chorus mode */
- parsed = SNDRV_MIDI_SYSEX_GS_REVERB_MODE;
+ parsed = SNDRV_MIDI_SYSEX_GS_CHORUS_MODE;
chset->gs_chorus_mode = buf[7];
} else if (buf[5] == 0x00 && buf[6] == 0x04) {
/*
* Initialise a single midi channel control block.
*/
-void snd_midi_channel_init(snd_midi_channel_t *p, int n)
+static void snd_midi_channel_init(snd_midi_channel_t *p, int n)
{
if (p == NULL)
return;
/*
* Allocate and initialise a set of midi channel control blocks.
*/
-snd_midi_channel_t *snd_midi_channel_init_set(int n)
+static snd_midi_channel_t *snd_midi_channel_init_set(int n)
{
snd_midi_channel_t *chan;
int i;
/*
* Reset the midi controllers on a particular channel to default values.
*/
-void snd_midi_reset_controllers(snd_midi_channel_t *chan)
+static void snd_midi_reset_controllers(snd_midi_channel_t *chan)
{
memset(chan->control, 0, sizeof(chan->control));
chan->gm_volume = 127;
{
if (chset == NULL)
return;
- if (chset->channels != NULL)
- kfree(chset->channels);
+ kfree(chset->channels);
kfree(chset);
}
EXPORT_SYMBOL(snd_midi_process_event);
EXPORT_SYMBOL(snd_midi_channel_set_clear);
-EXPORT_SYMBOL(snd_midi_channel_init);
-EXPORT_SYMBOL(snd_midi_channel_init_set);
EXPORT_SYMBOL(snd_midi_channel_alloc_set);
EXPORT_SYMBOL(snd_midi_channel_free_set);
void snd_midi_event_free(snd_midi_event_t *dev)
{
if (dev != NULL) {
- if (dev->buf)
- kfree(dev->buf);
+ kfree(dev->buf);
kfree(dev);
}
}
dev->bufsize = bufsize;
reset_encode(dev);
spin_unlock_irqrestore(&dev->lock, flags);
- if (old_buf)
- kfree(old_buf);
+ kfree(old_buf);
return 0;
}
vunmap(dmab->area);
dmab->area = NULL;
- if (sgbuf->table)
- kfree(sgbuf->table);
- if (sgbuf->page_table)
- kfree(sgbuf->page_table);
+ kfree(sgbuf->table);
+ kfree(sgbuf->page_table);
kfree(sgbuf);
dmab->private_data = NULL;
/*
* set drum voice characteristics
*/
-void snd_opl3_drum_voice_set(opl3_t *opl3, snd_opl3_drum_voice_t *data)
+static void snd_opl3_drum_voice_set(opl3_t *opl3, snd_opl3_drum_voice_t *data)
{
unsigned char op_offset = snd_opl3_regmap[data->voice][data->op];
unsigned char voice_offset = data->voice;
/*
* Set drum voice pitch
*/
-void snd_opl3_drum_note_set(opl3_t *opl3, snd_opl3_drum_note_t *data)
+static void snd_opl3_drum_note_set(opl3_t *opl3, snd_opl3_drum_note_t *data)
{
unsigned char voice_offset = data->voice;
unsigned short opl3_reg;
/*
* Set drum voice volume and position
*/
-void snd_opl3_drum_vol_set(opl3_t *opl3, snd_opl3_drum_voice_t *data, int vel,
- snd_midi_channel_t *chan)
+static void snd_opl3_drum_vol_set(opl3_t *opl3, snd_opl3_drum_voice_t *data,
+ int vel, snd_midi_channel_t *chan)
{
unsigned char op_offset = snd_opl3_regmap[data->voice][data->op];
unsigned char voice_offset = data->voice;
/*
* Start system timer
*/
-void snd_opl3_start_timer(opl3_t *opl3)
+static void snd_opl3_start_timer(opl3_t *opl3)
{
unsigned long flags;
spin_lock_irqsave(&opl3->sys_timer_lock, flags);
return err;
}
- err = snd_opl3_create(card, fm_port, fm_port + 2, opl4->hardware, 1, &opl3);
+ err = snd_device_new(card, SNDRV_DEV_CODEC, opl4, &ops);
if (err < 0) {
snd_opl4_free(opl4);
return err;
}
- /* opl3 initialization disabled opl4, so reenable */
- snd_opl4_enable_opl4(opl4);
-
- err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, opl4, &ops);
+ err = snd_opl3_create(card, fm_port, fm_port + 2, opl4->hardware, 1, &opl3);
if (err < 0) {
- snd_device_free(card, opl3);
- snd_opl4_free(opl4);
+ snd_device_free(card, opl4);
return err;
}
+ /* opl3 initialization disabled opl4, so reenable */
+ snd_opl4_enable_opl4(opl4);
+
snd_opl4_create_mixer(opl4);
#ifdef CONFIG_PROC_FS
snd_opl4_create_proc(opl4);
/*
* Array of DSP commands
*/
-struct vx_cmd_info vx_dsp_cmds[] = {
+static struct vx_cmd_info vx_dsp_cmds[] = {
[CMD_VERSION] = { 0x010000, 2, RMH_SSIZE_FIXED, 1 },
[CMD_SUPPORTED] = { 0x020000, 1, RMH_SSIZE_FIXED, 2 },
[CMD_TEST_IT] = { 0x040000, 1, RMH_SSIZE_FIXED, 1 },
/*
*
*/
-extern struct vx_cmd_info vx_dsp_cmds[];
-
void vx_init_rmh(struct vx_rmh *rmh, unsigned int cmd);
/**
/*
* Driver for Digigram VX soundcards
*
- * hwdep device manager
+ * DSP firmware management
*
* Copyright (c) 2002 by Takashi Iwai <tiwai@suse.de>
*
*/
#include <sound/driver.h>
+#include <linux/firmware.h>
#include <sound/core.h>
#include <sound/hwdep.h>
#include <sound/vx_core.h>
+#ifdef SND_VX_FW_LOADER
+
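+/* Load every firmware stage listed in fw_files[] for this card type via
+ * request_firmware(), feed it to the chip's load_dsp() callback, then create
+ * the PCM/mixer devices and register the card. */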
+int snd_vx_setup_firmware(vx_core_t *chip)
+{
+ static char *fw_files[VX_TYPE_NUMS][4] = {
+ [VX_TYPE_BOARD] = {
+ NULL, "x1_1_vx2.xlx", "bd56002.boot", "l_1_vx2.d56",
+ },
+ [VX_TYPE_V2] = {
+ NULL, "x1_2_v22.xlx", "bd563v2.boot", "l_1_v22.d56",
+ },
+ [VX_TYPE_MIC] = {
+ NULL, "x1_2_v22.xlx", "bd563v2.boot", "l_1_v22.d56",
+ },
+ [VX_TYPE_VXPOCKET] = {
+ "bx_1_vxp.b56", "x1_1_vxp.xlx", "bd563s3.boot", "l_1_vxp.d56"
+ },
+ [VX_TYPE_VXP440] = {
+ "bx_1_vp4.b56", "x1_1_vp4.xlx", "bd563s3.boot", "l_1_vp4.d56"
+ },
+ };
+
+ int i, err;
+
+ for (i = 0; i < 4; i++) {
+ char path[32];
+ const struct firmware *fw;
+ if (! fw_files[chip->type][i])
+ continue;
+ sprintf(path, "vx/%s", fw_files[chip->type][i]);
+ if (request_firmware(&fw, path, chip->dev)) {
+ snd_printk(KERN_ERR "vx: can't load firmware %s\n", path);
+ return -ENOENT;
+ }
+ err = chip->ops->load_dsp(chip, i, fw);
+ if (err < 0) {
+ release_firmware(fw);
+ return err;
+ }
+ if (i == 1)
+ chip->chip_status |= VX_STAT_XILINX_LOADED;
+#ifdef CONFIG_PM
+ chip->firmware[i] = fw;
+#else
+ release_firmware(fw);
+#endif
+ }
+
+	/* ok, we reached the last one */
+ /* create the devices if not built yet */
+ if ((err = snd_vx_pcm_new(chip)) < 0)
+ return err;
+
+ if ((err = snd_vx_mixer_new(chip)) < 0)
+ return err;
+
+ if (chip->ops->add_controls)
+ if ((err = chip->ops->add_controls(chip)) < 0)
+ return err;
+
+ chip->chip_status |= VX_STAT_DEVICE_INIT;
+ chip->chip_status |= VX_STAT_CHIP_INIT;
+
+ return snd_card_register(chip->card);
+}
+
+/* exported */
+void snd_vx_free_firmware(vx_core_t *chip)
+{
+#ifdef CONFIG_PM
+ int i;
+ for (i = 0; i < 4; i++)
+ release_firmware(chip->firmware[i]);
+#endif
+}
+
+#else /* old style firmware loading */
+
static int vx_hwdep_open(snd_hwdep_t *hw, struct file *file)
{
return 0;
return 0;
}
+static void free_fw(const struct firmware *fw)
+{
+ if (fw) {
+ vfree(fw->data);
+ kfree(fw);
+ }
+}
+
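+/* Old-style loader: copy the user-supplied DSP image into a locally
+ * allocated struct firmware so the same load_dsp() callback can be used. */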
static int vx_hwdep_dsp_load(snd_hwdep_t *hw, snd_hwdep_dsp_image_t *dsp)
{
vx_core_t *vx = hw->private_data;
int index, err;
+ struct firmware *fw;
snd_assert(vx->ops->load_dsp, return -ENXIO);
- err = vx->ops->load_dsp(vx, dsp);
- if (err < 0)
- return err;
+
+ fw = kmalloc(sizeof(*fw), GFP_KERNEL);
+ if (! fw) {
+ snd_printk(KERN_ERR "cannot allocate firmware\n");
+ return -ENOMEM;
+ }
+ fw->size = dsp->length;
+ fw->data = vmalloc(fw->size);
+ if (! fw->data) {
+ snd_printk(KERN_ERR "cannot allocate firmware image (length=%d)\n",
+ (int)fw->size);
+ kfree(fw);
+ return -ENOMEM;
+ }
+ if (copy_from_user(fw->data, dsp->image, dsp->length)) {
+ free_fw(fw);
+ return -EFAULT;
+ }
index = dsp->index;
if (! vx_is_pcmcia(vx))
index++;
+ err = vx->ops->load_dsp(vx, index, fw);
+ if (err < 0) {
+ free_fw(fw);
+ return err;
+ }
+#ifdef CONFIG_PM
+ vx->firmware[index] = fw;
+#else
+ free_fw(fw);
+#endif
+
if (index == 1)
vx->chip_status |= VX_STAT_XILINX_LOADED;
if (index < 3)
/* exported */
-int snd_vx_hwdep_new(vx_core_t *chip)
+int snd_vx_setup_firmware(vx_core_t *chip)
{
int err;
snd_hwdep_t *hw;
sprintf(hw->name, "VX Loader (%s)", chip->card->driver);
chip->hwdep = hw;
- return 0;
+ return snd_card_register(chip->card);
}
+
+/* exported */
+void snd_vx_free_firmware(vx_core_t *chip)
+{
+#ifdef CONFIG_PM
+ int i;
+ for (i = 0; i < 4; i++)
+ free_fw(chip->firmware[i]);
+#endif
+}
+
+#endif /* SND_VX_FW_LOADER */
static snd_pcm_hardware_t vx_pcm_playback_hw = {
.info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
- SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_MMAP_VALID),
+ SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_MMAP_VALID |
+ SNDRV_PCM_INFO_RESUME),
.formats = /*SNDRV_PCM_FMTBIT_U8 |*/ SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_3LE,
.rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_8000_48000,
.rate_min = 5000,
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
if (! pipe->is_capture)
vx_pcm_playback_transfer(chip, subs, pipe, 2);
/* FIXME:
pipe->running = 1;
break;
case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
vx_toggle_pipe(chip, pipe, 0);
vx_stop_pipe(chip, pipe);
vx_stop_stream(chip, pipe);
static snd_pcm_hardware_t vx_pcm_capture_hw = {
.info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
- SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_MMAP_VALID),
+ SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_MMAP_VALID |
+ SNDRV_PCM_INFO_RESUME),
.formats = /*SNDRV_PCM_FMTBIT_U8 |*/ SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_3LE,
.rates = SNDRV_PCM_RATE_CONTINUOUS | SNDRV_PCM_RATE_8000_48000,
.rate_min = 5000,
* vx_change_clock_source - change the clock source
* @source: the new source
*/
-void vx_change_clock_source(vx_core_t *chip, int source)
+static void vx_change_clock_source(vx_core_t *chip, int source)
{
unsigned long flags;
#include <sound/cs8427.h>
#include <sound/asoundef.h>
+static void snd_cs8427_reset(snd_i2c_device_t *cs8427);
+
MODULE_AUTHOR("Jaroslav Kysela <perex@suse.cz>");
MODULE_DESCRIPTION("IEC958 (S/PDIF) receiver & transmitter by Cirrus Logic");
MODULE_LICENSE("GPL");
return res;
}
-int snd_cs8427_detect(snd_i2c_bus_t *bus, unsigned char addr)
-{
- int res;
-
- snd_i2c_lock(bus);
- res = snd_i2c_probeaddr(bus, CS8427_ADDR | (addr & 7));
- snd_i2c_unlock(bus);
- return res;
-}
-
int snd_cs8427_reg_write(snd_i2c_device_t *device, unsigned char reg, unsigned char val)
{
int err;
return 0;
}
-int snd_cs8427_reg_read(snd_i2c_device_t *device, unsigned char reg)
+static int snd_cs8427_reg_read(snd_i2c_device_t *device, unsigned char reg)
{
int err;
unsigned char buf;
static void snd_cs8427_free(snd_i2c_device_t *device)
{
- if (device->private_data)
- kfree(device->private_data);
+ kfree(device->private_data);
}
int snd_cs8427_create(snd_i2c_bus_t *bus,
* put back AES3INPUT. This workaround is described in latest
* CS8427 datasheet, otherwise TXDSERIAL will not work.
*/
-void snd_cs8427_reset(snd_i2c_device_t *cs8427)
+static void snd_cs8427_reset(snd_i2c_device_t *cs8427)
{
cs8427_t *chip;
unsigned long end_time;
module_init(alsa_cs8427_module_init)
module_exit(alsa_cs8427_module_exit)
-EXPORT_SYMBOL(snd_cs8427_detect);
EXPORT_SYMBOL(snd_cs8427_create);
EXPORT_SYMBOL(snd_cs8427_reset);
EXPORT_SYMBOL(snd_cs8427_reg_write);
-EXPORT_SYMBOL(snd_cs8427_reg_read);
EXPORT_SYMBOL(snd_cs8427_iec958_build);
EXPORT_SYMBOL(snd_cs8427_iec958_active);
EXPORT_SYMBOL(snd_cs8427_iec958_pcm);
bus->master = master;
}
strlcpy(bus->name, name, sizeof(bus->name));
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, bus, &ops)) < 0) {
+ if ((err = snd_device_new(card, SNDRV_DEV_BUS, bus, &ops)) < 0) {
snd_i2c_bus_free(bus);
return err;
}
* 2002-05-12 Tomas Kasparek another code cleanup
*/
-/* $Id: uda1341.c,v 1.13 2004/07/20 15:54:13 cladisch Exp $ */
+/* $Id: uda1341.c,v 1.15 2005/01/03 12:05:20 tiwai Exp $ */
#include <sound/driver.h>
#include <linux/module.h>
return err;
}
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, uda1341, &ops)) < 0) {
+ if ((err = snd_device_new(card, SNDRV_DEV_CODEC, uda1341, &ops)) < 0) {
l3_detach_client(uda1341);
kfree(uda1341);
return err;
static void uda1341_detach(struct l3_client *clnt)
{
- if (clnt->driver_data)
- kfree(clnt->driver_data);
+ kfree(clnt->driver_data);
}
static int
chip->rcs1 = reg_read(chip, AK4117_REG_RCS1);
chip->rcs2 = reg_read(chip, AK4117_REG_RCS2);
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops)) < 0)
+ if ((err = snd_device_new(card, SNDRV_DEV_CODEC, chip, &ops)) < 0)
goto __fail;
if (r_ak4117)
return -EBUSY;
}
-inline unsigned char snd_ad1816a_in(ad1816a_t *chip, unsigned char reg)
+static inline unsigned char snd_ad1816a_in(ad1816a_t *chip, unsigned char reg)
{
snd_ad1816a_busy_wait(chip);
return inb(AD1816A_REG(reg));
}
-inline void snd_ad1816a_out(ad1816a_t *chip, unsigned char reg,
+static inline void snd_ad1816a_out(ad1816a_t *chip, unsigned char reg,
unsigned char value)
{
snd_ad1816a_busy_wait(chip);
outb(value, AD1816A_REG(reg));
}
-inline void snd_ad1816a_out_mask(ad1816a_t *chip, unsigned char reg,
+static inline void snd_ad1816a_out_mask(ad1816a_t *chip, unsigned char reg,
unsigned char mask, unsigned char value)
{
snd_ad1816a_out(chip, reg,
.fifo_size = 0,
};
+#if 0 /* not used now */
static int snd_ad1816a_timer_close(snd_timer_t *timer)
{
ad1816a_t *chip = snd_timer_chip(timer);
.start = snd_ad1816a_timer_start,
.stop = snd_ad1816a_timer_stop,
};
+#endif /* not used now */
static int snd_ad1816a_playback_open(snd_pcm_substream_t *substream)
return 0;
}
+#if 0 /* not used now */
static void snd_ad1816a_timer_free(snd_timer_t *timer)
{
ad1816a_t *chip = timer->private_data;
*rtimer = timer;
return 0;
}
+#endif /* not used now */
/*
*
return snd_es1688_dsp_command(chip, data);
}
-int snd_es1688_read(es1688_t *chip, unsigned char reg)
+static int snd_es1688_read(es1688_t *chip, unsigned char reg)
{
/* Read a byte from an extended mode register of ES1688 */
if (!snd_es1688_dsp_command(chip, 0xc0))
udelay(10);
}
-unsigned char snd_es1688_mixer_read(es1688_t *chip, unsigned char reg)
+static unsigned char snd_es1688_mixer_read(es1688_t *chip, unsigned char reg)
{
unsigned char result;
return snd_es1688_trigger(chip, cmd, 0x0f);
}
-irqreturn_t snd_es1688_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static irqreturn_t snd_es1688_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
es1688_t *chip = dev_id;
}
EXPORT_SYMBOL(snd_es1688_mixer_write);
-EXPORT_SYMBOL(snd_es1688_mixer_read);
-EXPORT_SYMBOL(snd_es1688_interrupt);
EXPORT_SYMBOL(snd_es1688_create);
EXPORT_SYMBOL(snd_es1688_pcm);
EXPORT_SYMBOL(snd_es1688_mixer);
#include <sound/core.h>
#include <sound/gus.h>
-void snd_gf1_dma_ack(snd_gus_card_t * gus)
+static void snd_gf1_dma_ack(snd_gus_card_t * gus)
{
unsigned long flags;
spin_unlock_irqrestore(&gus->reg_lock, flags);
}
-void snd_gf1_dma_program(snd_gus_card_t * gus,
- unsigned int addr,
- unsigned long buf_addr,
- unsigned int count,
- unsigned int cmd)
+static void snd_gf1_dma_program(snd_gus_card_t * gus,
+ unsigned int addr,
+ unsigned long buf_addr,
+ unsigned int count,
+ unsigned int cmd)
{
unsigned long flags;
unsigned int address;
if (block->prev)
block->prev->next = block->next;
}
- if (block->name)
- kfree(block->name);
+ kfree(block->name);
kfree(block);
return 0;
}
instr = snd_seq_instr_find(gus->gf1.ilist, &v->instr, 0, 1);
if (instr != NULL) {
if (instr->ops) {
- if (instr->ops->instr_type == snd_seq_simple_id)
+ if (!strcmp(instr->ops->instr_type, SNDRV_SEQ_INSTR_ID_SIMPLE))
snd_gf1_simple_init(v);
}
snd_seq_instr_free_use(gus->gf1.ilist, instr);
return err;
}
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, hw, &ops)) < 0) {
+ if ((err = snd_device_new(card, SNDRV_DEV_CODEC, hw, &ops)) < 0) {
snd_emu8000_free(hw);
return err;
}
emu8000_t *hw;
hw = emu->hw;
+ /* skip header */
+ buf += 16;
+ len -= 16;
+
switch (type) {
case SNDRV_EMU8000_LOAD_CHORUS_FX:
return snd_emu8000_load_chorus_fx(hw, mode, buf, len);
static int emu8k_pcm_close(snd_pcm_substream_t *subs)
{
emu8k_pcm_t *rec = subs->runtime->private_data;
- if (rec)
- kfree(rec);
+ kfree(rec);
subs->runtime->private_data = NULL;
return 0;
}
* open/close
*/
-int snd_sb16_playback_open(snd_pcm_substream_t * substream)
+static int snd_sb16_playback_open(snd_pcm_substream_t * substream)
{
unsigned long flags;
sb_t *chip = snd_pcm_substream_chip(substream);
return 0;
}
-int snd_sb16_playback_close(snd_pcm_substream_t * substream)
+static int snd_sb16_playback_close(snd_pcm_substream_t * substream)
{
unsigned long flags;
sb_t *chip = snd_pcm_substream_chip(substream);
return 0;
}
-int snd_sb16_capture_open(snd_pcm_substream_t * substream)
+static int snd_sb16_capture_open(snd_pcm_substream_t * substream)
{
unsigned long flags;
sb_t *chip = snd_pcm_substream_chip(substream);
return 0;
}
-int snd_sb16_capture_close(snd_pcm_substream_t * substream)
+static int snd_sb16_capture_close(snd_pcm_substream_t * substream)
{
unsigned long flags;
sb_t *chip = snd_pcm_substream_chip(substream);
return change;
}
-snd_kcontrol_new_t snd_sb16_dma_control = {
+static snd_kcontrol_new_t snd_sb16_dma_control = {
.iface = SNDRV_CTL_ELEM_IFACE_PCM,
.name = "16-bit DMA Allocation",
.info = snd_sb16_dma_control_info,
*
*/
-int snd_sb8_open(snd_pcm_substream_t *substream)
+static int snd_sb8_open(snd_pcm_substream_t *substream)
{
sb_t *chip = snd_pcm_substream_chip(substream);
snd_pcm_runtime_t *runtime = substream->runtime;
return 0;
}
-int snd_sb8_close(snd_pcm_substream_t *substream)
+static int snd_sb8_close(snd_pcm_substream_t *substream)
{
unsigned long flags;
sb_t *chip = snd_pcm_substream_chip(substream);
return -ENODEV;
}
-int snd_sbdsp_version(sb_t * chip)
+static int snd_sbdsp_version(sb_t * chip)
{
unsigned int result = -ENODEV;
--- /dev/null
+/*
+ * BRIEF MODULE DESCRIPTION
+ * Driver for AMD Au1000 MIPS Processor, AC'97 Sound Port
+ *
+ * Copyright 2004 Cooper Street Innovations Inc.
+ * Author: Charles Eidsness <charles@cooper-street.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * History:
+ *
+ * 2004-09-09 Charles Eidsness	-- Original version -- based on
+ * sa11xx-uda1341.c ALSA driver and the
+ * au1000.c OSS driver.
+ * 2004-09-09 Matt Porter -- Added support for ALSA 1.0.6
+ *
+ */
+
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <sound/driver.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/version.h>
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/ac97_codec.h>
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/au1000_dma.h>
+
+MODULE_AUTHOR("Charles Eidsness <charles@cooper-street.com>");
+MODULE_DESCRIPTION("Au1000 AC'97 ALSA Driver");
+MODULE_LICENSE("GPL");
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,8)
+MODULE_SUPPORTED_DEVICE("{{AMD,Au1000 AC'97}}");
+#else
+MODULE_CLASSES("{sound}");
+MODULE_DEVICES("{{AMD,Au1000 AC'97}}");
+#endif
+
+#define chip_t au1000_t
+
+#define PLAYBACK 0
+#define CAPTURE 1
+#define AC97_SLOT_3 0x01
+#define AC97_SLOT_4 0x02
+#define AC97_SLOT_6 0x08
+#define AC97_CMD_IRQ 31
+#define READ 0
+#define WRITE 1
+#define READ_WAIT 2
+#define RW_DONE 3
+
+DECLARE_WAIT_QUEUE_HEAD(ac97_command_wq);
+
+typedef struct au1000_period au1000_period_t;
+struct au1000_period
+{
+ u32 start;
+	u32 relative_end; /* relative to start of buffer */
+ au1000_period_t * next;
+};
+
+/* Au1000 AC97 Port Control Registers */
+typedef struct au1000_ac97_reg au1000_ac97_reg_t;
+struct au1000_ac97_reg {
+ u32 volatile config;
+ u32 volatile status;
+ u32 volatile data;
+ u32 volatile cmd;
+ u32 volatile cntrl;
+};
+
+typedef struct audio_stream audio_stream_t;
+struct audio_stream {
+ snd_pcm_substream_t * substream;
+ int dma;
+ spinlock_t dma_lock;
+ au1000_period_t * buffer;
+ unsigned long period_size;
+};
+
+typedef struct snd_card_au1000 {
+ snd_card_t *card;
+ au1000_ac97_reg_t volatile *ac97_ioport;
+
+ struct resource *ac97_res_port;
+ spinlock_t ac97_lock;
+ ac97_t *ac97;
+
+ snd_pcm_t *pcm;
+ audio_stream_t *stream[2]; /* playback & capture */
+} au1000_t;
+
+static au1000_t *au1000 = NULL;
+
+/*--------------------------- Local Functions --------------------------------*/
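+/* Update the transmit/receive slot masks in the AC'97 controller config
+ * register; called from the PCM prepare callbacks below. */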
+static void
+au1000_set_ac97_xmit_slots(long xmit_slots)
+{
+ u32 volatile ac97_config;
+
+ spin_lock(&au1000->ac97_lock);
+ ac97_config = au1000->ac97_ioport->config;
+ ac97_config = ac97_config & ~AC97C_XMIT_SLOTS_MASK;
+ ac97_config |= (xmit_slots << AC97C_XMIT_SLOTS_BIT);
+ au1000->ac97_ioport->config = ac97_config;
+ spin_unlock(&au1000->ac97_lock);
+}
+
+static void
+au1000_set_ac97_recv_slots(long recv_slots)
+{
+ u32 volatile ac97_config;
+
+ spin_lock(&au1000->ac97_lock);
+ ac97_config = au1000->ac97_ioport->config;
+ ac97_config = ac97_config & ~AC97C_RECV_SLOTS_MASK;
+ ac97_config |= (recv_slots << AC97C_RECV_SLOTS_BIT);
+ au1000->ac97_ioport->config = ac97_config;
+ spin_unlock(&au1000->ac97_lock);
+}
+
+
+static void
+au1000_dma_stop(audio_stream_t *stream)
+{
+ unsigned long flags;
+ au1000_period_t * pointer;
+ au1000_period_t * pointer_next;
+
+ if (stream->buffer != NULL) {
+ spin_lock_irqsave(&stream->dma_lock, flags);
+ disable_dma(stream->dma);
+ spin_unlock_irqrestore(&stream->dma_lock, flags);
+
+		pointer = stream->buffer;
+		do {
+			/* save the next node before freeing the current one */
+			pointer_next = pointer->next;
+			kfree(pointer);
+			pointer = pointer_next;
+		} while (pointer != stream->buffer);
+
+ stream->buffer = NULL;
+ }
+}
+
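+/* Build a circular list of period descriptors covering the DMA buffer and
+ * prime both Au1000 DMA buffer registers before starting the transfer. */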
+static void
+au1000_dma_start(audio_stream_t *stream)
+{
+ snd_pcm_substream_t *substream = stream->substream;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ unsigned long flags, dma_start;
+ int i;
+ au1000_period_t * pointer;
+
+ if (stream->buffer == NULL) {
+ dma_start = virt_to_phys(runtime->dma_area);
+
+ stream->period_size = frames_to_bytes(runtime,
+ runtime->period_size);
+ stream->buffer = kmalloc(sizeof(au1000_period_t), GFP_KERNEL);
+ pointer = stream->buffer;
+ for (i = 0 ; i < runtime->periods ; i++) {
+ pointer->start = (u32)(dma_start +
+ (i * stream->period_size));
+ pointer->relative_end = (u32)
+ (((i+1) * stream->period_size) - 0x1);
+ if ( i < runtime->periods - 1) {
+ pointer->next = kmalloc(sizeof(au1000_period_t)
+ , GFP_KERNEL);
+ pointer = pointer->next;
+ }
+ }
+ pointer->next = stream->buffer;
+
+ spin_lock_irqsave(&stream->dma_lock, flags);
+ init_dma(stream->dma);
+ if (get_dma_active_buffer(stream->dma) == 0) {
+ clear_dma_done0(stream->dma);
+ set_dma_addr0(stream->dma, stream->buffer->start);
+ set_dma_count0(stream->dma, stream->period_size >> 1);
+ set_dma_addr1(stream->dma, stream->buffer->next->start);
+ set_dma_count1(stream->dma, stream->period_size >> 1);
+ } else {
+ clear_dma_done1(stream->dma);
+ set_dma_addr1(stream->dma, stream->buffer->start);
+ set_dma_count1(stream->dma, stream->period_size >> 1);
+ set_dma_addr0(stream->dma, stream->buffer->next->start);
+ set_dma_count0(stream->dma, stream->period_size >> 1);
+ }
+ enable_dma_buffers(stream->dma);
+ start_dma(stream->dma);
+ spin_unlock_irqrestore(&stream->dma_lock, flags);
+ }
+}
+
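+/* DMA completion handler: requeue the just-finished half, advance to the
+ * next period and report progress to the PCM core. */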
+static irqreturn_t
+au1000_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ audio_stream_t *stream = (audio_stream_t *) dev_id;
+ snd_pcm_substream_t *substream = stream->substream;
+
+ spin_lock(&stream->dma_lock);
+ switch (get_dma_buffer_done(stream->dma)) {
+ case DMA_D0:
+ stream->buffer = stream->buffer->next;
+ clear_dma_done0(stream->dma);
+ set_dma_addr0(stream->dma, stream->buffer->next->start);
+ set_dma_count0(stream->dma, stream->period_size >> 1);
+ enable_dma_buffer0(stream->dma);
+ break;
+ case DMA_D1:
+ stream->buffer = stream->buffer->next;
+ clear_dma_done1(stream->dma);
+ set_dma_addr1(stream->dma, stream->buffer->next->start);
+ set_dma_count1(stream->dma, stream->period_size >> 1);
+ enable_dma_buffer1(stream->dma);
+ break;
+ case (DMA_D0 | DMA_D1):
+ spin_unlock(&stream->dma_lock);
+ printk(KERN_ERR "DMA %d missed interrupt.\n",stream->dma);
+ au1000_dma_stop(stream);
+ au1000_dma_start(stream);
+ spin_lock(&stream->dma_lock);
+ break;
+ case (~DMA_D0 & ~DMA_D1):
+ printk(KERN_ERR "DMA %d empty irq.\n",stream->dma);
+ }
+ spin_unlock(&stream->dma_lock);
+ snd_pcm_period_elapsed(substream);
+ return IRQ_HANDLED;
+}
+
+/*-------------------------- PCM Audio Streams -------------------------------*/
+
+static unsigned int rates[] = {8000, 11025, 16000, 22050};
+static snd_pcm_hw_constraint_list_t hw_constraints_rates = {
+ .count = sizeof(rates) / sizeof(rates[0]),
+ .list = rates,
+ .mask = 0,
+};
+
+static snd_pcm_hardware_t snd_au1000 =
+{
+ .info = (SNDRV_PCM_INFO_INTERLEAVED | \
+ SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = (SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_11025 |
+ SNDRV_PCM_RATE_16000 | SNDRV_PCM_RATE_22050),
+ .rate_min = 8000,
+ .rate_max = 22050,
+ .channels_min = 1,
+ .channels_max = 2,
+ .buffer_bytes_max = 128*1024,
+ .period_bytes_min = 32,
+ .period_bytes_max = 16*1024,
+ .periods_min = 8,
+ .periods_max = 255,
+ .fifo_size = 16,
+};
+
+static int
+snd_au1000_playback_open(snd_pcm_substream_t * substream)
+{
+ au1000->stream[PLAYBACK]->substream = substream;
+ au1000->stream[PLAYBACK]->buffer = NULL;
+ substream->private_data = au1000->stream[PLAYBACK];
+ substream->runtime->hw = snd_au1000;
+ return (snd_pcm_hw_constraint_list(substream->runtime, 0,
+ SNDRV_PCM_HW_PARAM_RATE, &hw_constraints_rates) < 0);
+}
+
+static int
+snd_au1000_capture_open(snd_pcm_substream_t * substream)
+{
+ au1000->stream[CAPTURE]->substream = substream;
+ au1000->stream[CAPTURE]->buffer = NULL;
+ substream->private_data = au1000->stream[CAPTURE];
+ substream->runtime->hw = snd_au1000;
+ return (snd_pcm_hw_constraint_list(substream->runtime, 0,
+ SNDRV_PCM_HW_PARAM_RATE, &hw_constraints_rates) < 0);
+
+}
+
+static int
+snd_au1000_playback_close(snd_pcm_substream_t * substream)
+{
+ au1000->stream[PLAYBACK]->substream = NULL;
+ return 0;
+}
+
+static int
+snd_au1000_capture_close(snd_pcm_substream_t * substream)
+{
+ au1000->stream[CAPTURE]->substream = NULL;
+ return 0;
+}
+
+static int
+snd_au1000_hw_params(snd_pcm_substream_t * substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ return snd_pcm_lib_malloc_pages(substream,
+ params_buffer_bytes(hw_params));
+}
+
+static int
+snd_au1000_hw_free(snd_pcm_substream_t * substream)
+{
+ return snd_pcm_lib_free_pages(substream);
+}
+
+static int
+snd_au1000_playback_prepare(snd_pcm_substream_t * substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ if (runtime->channels == 1 )
+ au1000_set_ac97_xmit_slots(AC97_SLOT_4);
+ else
+ au1000_set_ac97_xmit_slots(AC97_SLOT_3 | AC97_SLOT_4);
+ snd_ac97_set_rate(au1000->ac97, AC97_PCM_FRONT_DAC_RATE, runtime->rate);
+ return 0;
+}
+
+static int
+snd_au1000_capture_prepare(snd_pcm_substream_t * substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ if (runtime->channels == 1 )
+ au1000_set_ac97_recv_slots(AC97_SLOT_4);
+ else
+ au1000_set_ac97_recv_slots(AC97_SLOT_3 | AC97_SLOT_4);
+ snd_ac97_set_rate(au1000->ac97, AC97_PCM_LR_ADC_RATE, runtime->rate);
+ return 0;
+}
+
+static int
+snd_au1000_trigger(snd_pcm_substream_t * substream, int cmd)
+{
+ audio_stream_t *stream = substream->private_data;
+ int err = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ au1000_dma_start(stream);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ au1000_dma_stop(stream);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ return err;
+}
+
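+/* Current position is the end of the running period minus the DMA residue,
+ * converted from bytes to frames. */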
+static snd_pcm_uframes_t
+snd_au1000_pointer(snd_pcm_substream_t * substream)
+{
+ audio_stream_t *stream = substream->private_data;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ unsigned long flags;
+ long location;
+
+ spin_lock_irqsave(&stream->dma_lock, flags);
+ location = get_dma_residue(stream->dma);
+ spin_unlock_irqrestore(&stream->dma_lock, flags);
+ location = stream->buffer->relative_end - location;
+ if (location == -1)
+ location = 0;
+ return bytes_to_frames(runtime,location);
+}
+
+static snd_pcm_ops_t snd_card_au1000_playback_ops = {
+ .open = snd_au1000_playback_open,
+ .close = snd_au1000_playback_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_au1000_hw_params,
+ .hw_free = snd_au1000_hw_free,
+ .prepare = snd_au1000_playback_prepare,
+ .trigger = snd_au1000_trigger,
+ .pointer = snd_au1000_pointer,
+};
+
+static snd_pcm_ops_t snd_card_au1000_capture_ops = {
+ .open = snd_au1000_capture_open,
+ .close = snd_au1000_capture_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_au1000_hw_params,
+ .hw_free = snd_au1000_hw_free,
+ .prepare = snd_au1000_capture_prepare,
+ .trigger = snd_au1000_trigger,
+ .pointer = snd_au1000_pointer,
+};
+
+static int __devinit
+snd_au1000_pcm_new(void)
+{
+ snd_pcm_t *pcm;
+ int err;
+ unsigned long flags;
+
+ if ((err = snd_pcm_new(au1000->card, "AU1000 AC97 PCM", 0, 1, 1, &pcm)) < 0)
+ return err;
+
+ snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_CONTINUOUS,
+ snd_dma_continuous_data(GFP_KERNEL), 128*1024, 128*1024);
+
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK,
+ &snd_card_au1000_playback_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE,
+ &snd_card_au1000_capture_ops);
+
+ pcm->private_data = au1000;
+ pcm->info_flags = 0;
+ strcpy(pcm->name, "Au1000 AC97 PCM");
+
+ flags = claim_dma_lock();
+ if ((au1000->stream[PLAYBACK]->dma = request_au1000_dma(DMA_ID_AC97C_TX,
+ "AC97 TX", au1000_dma_interrupt, SA_INTERRUPT,
+ au1000->stream[PLAYBACK])) < 0) {
+ release_dma_lock(flags);
+ return -EBUSY;
+ }
+ if ((au1000->stream[CAPTURE]->dma = request_au1000_dma(DMA_ID_AC97C_RX,
+ "AC97 RX", au1000_dma_interrupt, SA_INTERRUPT,
+ au1000->stream[CAPTURE])) < 0){
+ release_dma_lock(flags);
+ return -EBUSY;
+ }
+ /* enable DMA coherency in read/write DMA channels */
+ set_dma_mode(au1000->stream[PLAYBACK]->dma,
+ get_dma_mode(au1000->stream[PLAYBACK]->dma) & ~DMA_NC);
+ set_dma_mode(au1000->stream[CAPTURE]->dma,
+ get_dma_mode(au1000->stream[CAPTURE]->dma) & ~DMA_NC);
+ release_dma_lock(flags);
+ spin_lock_init(&au1000->stream[PLAYBACK]->dma_lock);
+ spin_lock_init(&au1000->stream[CAPTURE]->dma_lock);
+ au1000->pcm = pcm;
+ return 0;
+}
+
+
+/*-------------------------- AC97 CODEC Control ------------------------------*/
+
+static unsigned short
+snd_au1000_ac97_read(ac97_t *ac97, unsigned short reg)
+{
+ u32 volatile cmd;
+ u16 volatile data;
+ int i;
+	spin_lock(&au1000->ac97_lock);
+/* would rather use the interrupt than this polling but it works and I can't
+get the interrupt-driven case to work efficiently */
+ for (i = 0; i < 0x5000; i++)
+ if (!(au1000->ac97_ioport->status & AC97C_CP))
+ break;
+ if (i == 0x5000)
+ printk(KERN_ERR "au1000 AC97: AC97 command read timeout\n");
+
+ cmd = (u32) reg & AC97C_INDEX_MASK;
+ cmd |= AC97C_READ;
+ au1000->ac97_ioport->cmd = cmd;
+
+ /* now wait for the data */
+ for (i = 0; i < 0x5000; i++)
+ if (!(au1000->ac97_ioport->status & AC97C_CP))
+ break;
+ if (i == 0x5000) {
+ printk(KERN_ERR "au1000 AC97: AC97 command read timeout\n");
+ return 0;
+ }
+
+ data = au1000->ac97_ioport->cmd & 0xffff;
+	spin_unlock(&au1000->ac97_lock);
+
+ return data;
+
+}
+
+
+static void
+snd_au1000_ac97_write(ac97_t *ac97, unsigned short reg, unsigned short val)
+{
+ u32 cmd;
+ int i;
+	spin_lock(&au1000->ac97_lock);
+/* would rather use the interrupt than this polling but it works and I can't
+get the interrupt-driven case to work efficiently */
+ for (i = 0; i < 0x5000; i++)
+ if (!(au1000->ac97_ioport->status & AC97C_CP))
+ break;
+ if (i == 0x5000)
+ printk(KERN_ERR "au1000 AC97: AC97 command write timeout\n");
+
+ cmd = (u32) reg & AC97C_INDEX_MASK;
+ cmd &= ~AC97C_READ;
+ cmd |= ((u32) val << AC97C_WD_BIT);
+ au1000->ac97_ioport->cmd = cmd;
+	spin_unlock(&au1000->ac97_lock);
+}
+
+static void
+snd_au1000_ac97_free(ac97_t *ac97)
+{
+ au1000->ac97 = NULL;
+}
+
+static int __devinit
+snd_au1000_ac97_new(void)
+{
+ int err;
+
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,8)
+ ac97_bus_t *pbus;
+ ac97_template_t ac97;
+ static ac97_bus_ops_t ops = {
+ .write = snd_au1000_ac97_write,
+ .read = snd_au1000_ac97_read,
+ };
+#else
+ ac97_bus_t bus, *pbus;
+ ac97_t ac97;
+#endif
+
+ if ((au1000->ac97_res_port = request_region(AC97C_CONFIG,
+ sizeof(au1000_ac97_reg_t), "Au1x00 AC97")) == NULL) {
+		snd_printk(KERN_ERR "ALSA AC97: can't grab AC97 port\n");
+ return -EBUSY;
+ }
+ au1000->ac97_ioport = (au1000_ac97_reg_t *) au1000->ac97_res_port->start;
+
+ spin_lock_init(&au1000->ac97_lock);
+
+ spin_lock(&au1000->ac97_lock);
+
+ /* configure pins for AC'97
+ TODO: move to board_setup.c */
+ au_writel(au_readl(SYS_PINFUNC) & ~0x02, SYS_PINFUNC);
+
+ /* Initialise Au1000's AC'97 Control Block */
+ au1000->ac97_ioport->cntrl = AC97C_RS | AC97C_CE;
+ udelay(10);
+ au1000->ac97_ioport->cntrl = AC97C_CE;
+ udelay(10);
+
+ /* Initialise External CODEC -- cold reset */
+ au1000->ac97_ioport->config = AC97C_RESET;
+ udelay(10);
+ au1000->ac97_ioport->config = 0x0;
+ mdelay(5);
+
+ spin_unlock(&au1000->ac97_lock);
+
+ /* Initialise AC97 middle-layer */
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,8)
+ if ((err = snd_ac97_bus(au1000->card, 0, &ops, au1000, &pbus)) < 0)
+ return err;
+#else
+ memset(&bus, 0, sizeof(bus));
+ bus.write = snd_au1000_ac97_write;
+ bus.read = snd_au1000_ac97_read;
+ if ((err = snd_ac97_bus(au1000->card, &bus, &pbus)) < 0)
+ return err;
+#endif
+ memset(&ac97, 0, sizeof(ac97));
+ ac97.private_data = au1000;
+ ac97.private_free = snd_au1000_ac97_free;
+ if ((err = snd_ac97_mixer(pbus, &ac97, &au1000->ac97)) < 0)
+ return err;
+ return 0;
+
+}
+
+/*------------------------------ Setup / Destroy ----------------------------*/
+
+void
+snd_au1000_free(snd_card_t *card)
+{
+
+ if (au1000->ac97_res_port) {
+ /* put internal AC97 block into reset */
+ au1000->ac97_ioport->cntrl = AC97C_RS;
+ au1000->ac97_ioport = NULL;
+ release_resource(au1000->ac97_res_port);
+ kfree_nocheck(au1000->ac97_res_port);
+ }
+
+ if (au1000->stream[PLAYBACK]->dma >= 0)
+ free_au1000_dma(au1000->stream[PLAYBACK]->dma);
+
+ if (au1000->stream[CAPTURE]->dma >= 0)
+ free_au1000_dma(au1000->stream[CAPTURE]->dma);
+
+ kfree(au1000->stream[PLAYBACK]);
+ au1000->stream[PLAYBACK] = NULL;
+ kfree(au1000->stream[CAPTURE]);
+ au1000->stream[CAPTURE] = NULL;
+ kfree(au1000);
+ au1000 = NULL;
+
+}
+
+static int __init
+au1000_init(void)
+{
+ int err;
+
+ au1000 = kmalloc(sizeof(au1000_t), GFP_KERNEL);
+ if (au1000 == NULL)
+ return -ENOMEM;
+ au1000->stream[PLAYBACK] = kmalloc(sizeof(audio_stream_t), GFP_KERNEL);
+ if (au1000->stream[PLAYBACK] == NULL)
+ return -ENOMEM;
+ au1000->stream[CAPTURE] = kmalloc(sizeof(audio_stream_t), GFP_KERNEL);
+ if (au1000->stream[CAPTURE] == NULL)
+ return -ENOMEM;
+ /* so that snd_au1000_free will work as intended */
+ au1000->stream[PLAYBACK]->dma = -1;
+ au1000->stream[CAPTURE]->dma = -1;
+ au1000->ac97_res_port = NULL;
+
+ au1000->card = snd_card_new(-1, "AC97", THIS_MODULE, sizeof(au1000_t));
+ if (au1000->card == NULL) {
+ snd_au1000_free(au1000->card);
+ return -ENOMEM;
+ }
+
+ au1000->card->private_data = (au1000_t *)au1000;
+ au1000->card->private_free = snd_au1000_free;
+
+ if ((err = snd_au1000_ac97_new()) < 0 ) {
+ snd_card_free(au1000->card);
+ return err;
+ }
+
+ if ((err = snd_au1000_pcm_new()) < 0) {
+ snd_card_free(au1000->card);
+ return err;
+ }
+
+ strcpy(au1000->card->driver, "AMD-Au1000-AC97");
+ strcpy(au1000->card->shortname, "Au1000-AC97");
+ sprintf(au1000->card->longname, "AMD Au1000--AC97 ALSA Driver");
+
+ if ((err = snd_card_register(au1000->card)) < 0) {
+ snd_card_free(au1000->card);
+ return err;
+ }
+
+ printk( KERN_INFO "ALSA AC97: Driver Initialized\n" );
+ return 0;
+}
+
+static void __exit au1000_exit(void)
+{
+ snd_card_free(au1000->card);
+}
+
+module_init(au1000_init);
+module_exit(au1000_exit);
+
static int __initdata io = -1;
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
static int __init init_adlib(void)
{
int cdrombase;
};
-struct d_hcfg decoded_hcfg __initdata = {0, };
+static struct d_hcfg decoded_hcfg __initdata = {0, };
#endif /* CONFIG_SC6600 */
{0x00, 0x00}
};
-static struct aedsp16_info ae_config __initdata = {
+static struct aedsp16_info ae_config = {
DEF_AEDSP16_IOB,
DEF_AEDSP16_IRQ,
DEF_AEDSP16_MRQ,
}
#endif
-void __init aedsp16_hard_decode(void) {
+static void __init aedsp16_hard_decode(void) {
DBG((" aedsp16_hard_decode: 0x%x, 0x%x\n", hard_cfg[0], hard_cfg[1]));
DBG(("success.\n"));
}
-void __init aedsp16_hard_encode(void) {
+static void __init aedsp16_hard_encode(void) {
DBG((" aedsp16_hard_encode: 0x%x, 0x%x\n", hard_cfg[0], hard_cfg[1]));
return TRUE;
}
-static void __init uninit_aedsp16_sb(void)
+static void uninit_aedsp16_sb(void)
{
DBG(("uninit_aedsp16_sb: "));
return TRUE;
}
-static void __init uninit_aedsp16_mss(void)
+static void uninit_aedsp16_mss(void)
{
DBG(("uninit_aedsp16_mss: "));
return TRUE;
}
-static void __init uninit_aedsp16_mpu(void)
+static void uninit_aedsp16_mpu(void)
{
DBG(("uninit_aedsp16_mpu: "));
DBG(("done.\n"));
}
-int __init init_aedsp16(void)
+static int __init init_aedsp16(void)
{
int initialized = FALSE;
return initialized;
}
-void __init uninit_aedsp16(void)
+static void __exit uninit_aedsp16(void)
{
if (ae_config.mss_base != -1)
uninit_aedsp16_mss();
static int __initdata mss_base = -1;
static int __initdata mpu_base = -1;
-MODULE_PARM(io, "i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io, "I/O base address (0x220 0x240)");
-MODULE_PARM(irq, "i");
+module_param(irq, int, 0);
MODULE_PARM_DESC(irq, "IRQ line (5 7 9 10 11)");
-MODULE_PARM(dma, "i");
+module_param(dma, int, 0);
MODULE_PARM_DESC(dma, "dma line (0 1 3)");
-MODULE_PARM(mpu_irq, "i");
+module_param(mpu_irq, int, 0);
MODULE_PARM_DESC(mpu_irq, "MPU-401 IRQ line (5 7 9 10 0)");
-MODULE_PARM(mss_base, "i");
+module_param(mss_base, int, 0);
MODULE_PARM_DESC(mss_base, "MSS emulation I/O base address (0x530 0xE80)");
-MODULE_PARM(mpu_base, "i");
+module_param(mpu_base, int, 0);
MODULE_PARM_DESC(mpu_base,"MPU-401 I/O base address (0x300 0x310 0x320 0x330)");
MODULE_AUTHOR("Riccardo Facchetti <fizban@tin.it>");
MODULE_DESCRIPTION("Audio Excel DSP 16 Driver Version " VERSION);
MODULE_AUTHOR("");
MODULE_DESCRIPTION("ALI 5455 audio support");
MODULE_LICENSE("GPL");
-MODULE_PARM(clocking, "i");
-MODULE_PARM(strict_clocking, "i");
-MODULE_PARM(codec_pcmout_share_spdif_locked, "i");
-MODULE_PARM(codec_independent_spdif_locked, "i");
-MODULE_PARM(controller_pcmout_share_spdif_locked, "i");
-MODULE_PARM(controller_independent_spdif_locked, "i");
+module_param(clocking, int, 0);
+/* FIXME: bool? */
+module_param(strict_clocking, uint, 0);
+module_param(codec_pcmout_share_spdif_locked, uint, 0);
+module_param(codec_independent_spdif_locked, uint, 0);
+module_param(controller_pcmout_share_spdif_locked, uint, 0);
+module_param(controller_independent_spdif_locked, uint, 0);
#define ALI5455_MODULE_NAME "ali5455"
static struct pci_driver ali_pci_driver = {
.name = ALI5455_MODULE_NAME,
#include <linux/slab.h>
#include <linux/soundcard.h>
#include <linux/init.h>
+#include <linux/page-flags.h>
#include <linux/poll.h>
#include <linux/pci.h>
#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/smp_lock.h>
#include <linux/ac97_codec.h>
-#include <linux/wrapper.h>
#include <linux/interrupt.h>
#include <asm/io.h>
#include <asm/uaccess.h>
-#include <asm/au1000.h>
-#include <asm/au1000_dma.h>
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/au1000_dma.h>
/* --------------------------------------------------------------------- */
#undef OSS_DOCUMENTED_MIXER_SEMANTICS
-#define AU1000_DEBUG
+#undef AU1000_DEBUG
#undef AU1000_VERBOSE_DEBUG
-#define USE_COHERENT_DMA
-
#define AU1000_MODULE_NAME "Au1000 audio"
#define PFX AU1000_MODULE_NAME
struct proc_dir_entry *ac97_ps;
#endif /* AU1000_DEBUG */
- struct ac97_codec *codec;
+ struct ac97_codec codec;
unsigned codec_base_caps;// AC'97 reg 00h, "Reset Register"
unsigned codec_ext_caps; // AC'97 reg 28h, "Extended Audio ID"
int no_vra; // do not use VRA
return r;
}
-
-#ifdef USE_COHERENT_DMA
-static inline void * dma_alloc(size_t size, dma_addr_t * dma_handle)
-{
- void* ret = (void *)__get_free_pages(GFP_ATOMIC | GFP_DMA,
- get_order(size));
- if (ret != NULL) {
- memset(ret, 0, size);
- *dma_handle = virt_to_phys(ret);
- }
- return ret;
-}
-
-static inline void dma_free(size_t size, void* va, dma_addr_t dma_handle)
-{
- free_pages((unsigned long)va, get_order(size));
-}
-#else
-static inline void * dma_alloc(size_t size, dma_addr_t * dma_handle)
-{
- return pci_alloc_consistent(NULL, size, dma_handle);
-}
-
-static inline void dma_free(size_t size, void* va, dma_addr_t dma_handle)
-{
- pci_free_consistent(NULL, size, va, dma_handle);
-}
-#endif
-
/* --------------------------------------------------------------------- */
static void au1000_delay(int msec)
adc->src_factor = 1;
- ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
+ ac97_extstat = rdcodec(&s->codec, AC97_EXTENDED_STATUS);
rate = rate > 48000 ? 48000 : rate;
// enable VRA
- wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ wrcodec(&s->codec, AC97_EXTENDED_STATUS,
ac97_extstat | AC97_EXTSTAT_VRA);
// now write the sample rate
- wrcodec(s->codec, AC97_PCM_LR_ADC_RATE, (u16) rate);
+ wrcodec(&s->codec, AC97_PCM_LR_ADC_RATE, (u16) rate);
// read it back for actual supported rate
- adc_rate = rdcodec(s->codec, AC97_PCM_LR_ADC_RATE);
+ adc_rate = rdcodec(&s->codec, AC97_PCM_LR_ADC_RATE);
#ifdef AU1000_VERBOSE_DEBUG
dbg("%s: set to %d Hz", __FUNCTION__, adc_rate);
// some codec's don't allow unequal DAC and ADC rates, in which case
// writing one rate reg actually changes both.
- dac_rate = rdcodec(s->codec, AC97_PCM_FRONT_DAC_RATE);
+ dac_rate = rdcodec(&s->codec, AC97_PCM_FRONT_DAC_RATE);
if (dac->num_channels > 2)
- wrcodec(s->codec, AC97_PCM_SURR_DAC_RATE, dac_rate);
+ wrcodec(&s->codec, AC97_PCM_SURR_DAC_RATE, dac_rate);
if (dac->num_channels > 4)
- wrcodec(s->codec, AC97_PCM_LFE_DAC_RATE, dac_rate);
+ wrcodec(&s->codec, AC97_PCM_LFE_DAC_RATE, dac_rate);
adc->sample_rate = adc_rate;
dac->sample_rate = dac_rate;
dac->src_factor = 1;
- ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
+ ac97_extstat = rdcodec(&s->codec, AC97_EXTENDED_STATUS);
rate = rate > 48000 ? 48000 : rate;
// enable VRA
- wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ wrcodec(&s->codec, AC97_EXTENDED_STATUS,
ac97_extstat | AC97_EXTSTAT_VRA);
// now write the sample rate
- wrcodec(s->codec, AC97_PCM_FRONT_DAC_RATE, (u16) rate);
+ wrcodec(&s->codec, AC97_PCM_FRONT_DAC_RATE, (u16) rate);
// I don't support different sample rates for multichannel,
// so make these channels the same.
if (dac->num_channels > 2)
- wrcodec(s->codec, AC97_PCM_SURR_DAC_RATE, (u16) rate);
+ wrcodec(&s->codec, AC97_PCM_SURR_DAC_RATE, (u16) rate);
if (dac->num_channels > 4)
- wrcodec(s->codec, AC97_PCM_LFE_DAC_RATE, (u16) rate);
+ wrcodec(&s->codec, AC97_PCM_LFE_DAC_RATE, (u16) rate);
// read it back for actual supported rate
- dac_rate = rdcodec(s->codec, AC97_PCM_FRONT_DAC_RATE);
+ dac_rate = rdcodec(&s->codec, AC97_PCM_FRONT_DAC_RATE);
#ifdef AU1000_VERBOSE_DEBUG
dbg("%s: set to %d Hz", __FUNCTION__, dac_rate);
// some codec's don't allow unequal DAC and ADC rates, in which case
// writing one rate reg actually changes both.
- adc_rate = rdcodec(s->codec, AC97_PCM_LR_ADC_RATE);
+ adc_rate = rdcodec(&s->codec, AC97_PCM_LR_ADC_RATE);
dac->sample_rate = dac_rate;
adc->sample_rate = adc_rate;
pend = virt_to_page(db->rawbuf +
(PAGE_SIZE << db->buforder) - 1);
for (page = virt_to_page(db->rawbuf); page <= pend; page++)
- mem_map_unreserve(page);
- dma_free(PAGE_SIZE << db->buforder, db->rawbuf, db->dmaaddr);
+ ClearPageReserved(page);
+ dma_free_noncoherent(NULL,
+ PAGE_SIZE << db->buforder,
+ db->rawbuf,
+ db->dmaaddr);
}
db->rawbuf = db->nextIn = db->nextOut = NULL;
db->mapped = db->ready = 0;
db->ready = db->mapped = 0;
for (order = DMABUF_DEFAULTORDER;
order >= DMABUF_MINORDER; order--)
- if ((db->rawbuf = dma_alloc(PAGE_SIZE << order,
- &db->dmaaddr)))
+ if ((db->rawbuf = dma_alloc_noncoherent(NULL,
+ PAGE_SIZE << order,
+ &db->dmaaddr,
+ 0)))
break;
if (!db->rawbuf)
return -ENOMEM;
pend = virt_to_page(db->rawbuf +
(PAGE_SIZE << db->buforder) - 1);
for (page = virt_to_page(db->rawbuf); page <= pend; page++)
- mem_map_reserve(page);
+ SetPageReserved(page);
}
db->cnt_factor = 1;
/* hold spinlock for the following */
-static void dac_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static irqreturn_t dac_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
struct au1000_state *s = (struct au1000_state *) dev_id;
struct dmabuf *dac = &s->dma_dac;
if ((buff_done = get_dma_buffer_done(dac->dmanr)) == 0) {
/* fastpath out, to ease interrupt sharing */
- return;
+ return IRQ_HANDLED;
}
spin_lock(&s->lock);
wake_up(&dac->wait);
spin_unlock(&s->lock);
+
+ return IRQ_HANDLED;
}
-static void adc_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static irqreturn_t adc_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
struct au1000_state *s = (struct au1000_state *) dev_id;
struct dmabuf *adc = &s->dma_adc;
if ((buff_done = get_dma_buffer_done(adc->dmanr)) == 0) {
/* fastpath out, to ease interrupt sharing */
- return;
+ return IRQ_HANDLED;
}
spin_lock(&s->lock);
stop_adc(s);
adc->error++;
err("adc overrun");
- return;
+ return IRQ_NONE;
}
adc->nextIn += adc->dma_fragsize;
adc->error++;
err("adc overrun");
spin_unlock(&s->lock);
- return;
+ return IRQ_NONE;
}
adc->nextIn += 2*adc->dma_fragsize;
wake_up(&adc->wait);
spin_unlock(&s->lock);
+
+ return IRQ_HANDLED;
}
/* --------------------------------------------------------------------- */
unsigned int cmd, unsigned long arg)
{
struct au1000_state *s = (struct au1000_state *)file->private_data;
- struct ac97_codec *codec = s->codec;
+ struct ac97_codec *codec = &s->codec;
return mixdev_ioctl(codec, cmd, arg);
}
ret = -EINVAL;
goto out;
}
- if (remap_pfn_range(vma->vm_start,
- virt_to_phys(db->rawbuf) >> PAGE_SHIFT,
+	if (remap_pfn_range(vma, vma->vm_start, virt_to_phys(db->rawbuf) >> PAGE_SHIFT,
size, vma->vm_page_prot)) {
ret = -EAGAIN;
goto out;
s->dma_dac.num_channels = val ? 2 : 1;
if (s->codec_ext_caps & AC97_EXT_DACS) {
// disable surround and center/lfe in AC'97
- u16 ext_stat = rdcodec(s->codec,
+ u16 ext_stat = rdcodec(&s->codec,
AC97_EXTENDED_STATUS);
- wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ wrcodec(&s->codec, AC97_EXTENDED_STATUS,
ext_stat | (AC97_EXTSTAT_PRI |
AC97_EXTSTAT_PRJ |
AC97_EXTSTAT_PRK));
// disable surround and center/lfe
// channels in AC'97
u16 ext_stat =
- rdcodec(s->codec,
+ rdcodec(&s->codec,
AC97_EXTENDED_STATUS);
- wrcodec(s->codec,
+ wrcodec(&s->codec,
AC97_EXTENDED_STATUS,
ext_stat | (AC97_EXTSTAT_PRI |
AC97_EXTSTAT_PRJ |
// enable surround, center/lfe
// channels in AC'97
u16 ext_stat =
- rdcodec(s->codec,
+ rdcodec(&s->codec,
AC97_EXTENDED_STATUS);
ext_stat &= ~AC97_EXTSTAT_PRJ;
if (val == 6)
ext_stat &=
~(AC97_EXTSTAT_PRI |
AC97_EXTSTAT_PRK);
- wrcodec(s->codec,
+ wrcodec(&s->codec,
AC97_EXTENDED_STATUS,
ext_stat);
}
return -EINVAL;
}
- return mixdev_ioctl(s->codec, cmd, arg);
+ return mixdev_ioctl(&s->codec, cmd, arg);
}
len += sprintf(buf + len, "----------------------\n");
for (cnt = 0; cnt <= 0x7e; cnt += 2)
len += sprintf(buf + len, "reg %02x = %04x\n",
- cnt, rdcodec(s->codec, cnt));
+ cnt, rdcodec(&s->codec, cnt));
if (fpos >= len) {
*start = buf;
{
struct au1000_state *s = &au1000_state;
int val;
+#ifdef AU1000_DEBUG
char proc_str[80];
+#endif
memset(s, 0, sizeof(struct au1000_state));
init_waitqueue_head(&s->open_wait);
init_MUTEX(&s->open_sem);
spin_lock_init(&s->lock);
-
- s->codec = ac97_alloc_codec();
- if(s->codec == NULL)
- {
- error("Out of memory");
- return -1;
- }
- s->codec->private_data = s;
- s->codec->id = 0;
- s->codec->codec_read = rdcodec;
- s->codec->codec_write = wrcodec;
- s->codec->codec_wait = waitcodec;
+ s->codec.private_data = s;
+ s->codec.id = 0;
+ s->codec.codec_read = rdcodec;
+ s->codec.codec_write = wrcodec;
+ s->codec.codec_wait = waitcodec;
- if (!request_region(virt_to_phys((void *) AC97C_CONFIG),
+ if (!request_mem_region(CPHYSADDR(AC97C_CONFIG),
0x14, AU1000_MODULE_NAME)) {
err("AC'97 ports in use");
- goto err_codec;
+ return -1;
}
// Allocate the DMA Channels
if ((s->dma_dac.dmanr = request_au1000_dma(DMA_ID_AC97C_TX,
s->dma_dac.dmanr, get_dma_done_irq(s->dma_dac.dmanr),
s->dma_adc.dmanr, get_dma_done_irq(s->dma_adc.dmanr));
-#ifdef USE_COHERENT_DMA
// enable DMA coherency in read/write DMA channels
set_dma_mode(s->dma_dac.dmanr,
get_dma_mode(s->dma_dac.dmanr) & ~DMA_NC);
set_dma_mode(s->dma_adc.dmanr,
get_dma_mode(s->dma_adc.dmanr) & ~DMA_NC);
-#else
- // disable DMA coherency in read/write DMA channels
- set_dma_mode(s->dma_dac.dmanr,
- get_dma_mode(s->dma_dac.dmanr) | DMA_NC);
- set_dma_mode(s->dma_adc.dmanr,
- get_dma_mode(s->dma_adc.dmanr) | DMA_NC);
-#endif
/* register devices */
if ((s->dev_audio = register_sound_dsp(&au1000_audio_fops, -1)) < 0)
goto err_dev1;
- if ((s->codec->dev_mixer =
+ if ((s->codec.dev_mixer =
register_sound_mixer(&au1000_mixer_fops, -1)) < 0)
goto err_dev2;
au_writel(0, AC97C_CONFIG);
/* codec init */
- if (!ac97_probe_codec(s->codec))
+ if (!ac97_probe_codec(&s->codec))
goto err_dev3;
- s->codec_base_caps = rdcodec(s->codec, AC97_RESET);
- s->codec_ext_caps = rdcodec(s->codec, AC97_EXTENDED_ID);
+ s->codec_base_caps = rdcodec(&s->codec, AC97_RESET);
+ s->codec_ext_caps = rdcodec(&s->codec, AC97_EXTENDED_ID);
info("AC'97 Base/Extended ID = %04x/%04x",
s->codec_base_caps, s->codec_ext_caps);
* ALTPCM). ac97_codec.c does not handle detection
* of this channel correctly.
*/
- s->codec->supported_mixers |= SOUND_MASK_ALTPCM;
+ s->codec.supported_mixers |= SOUND_MASK_ALTPCM;
/*
* Now set AUX_OUT's default volume.
*/
val = 0x4343;
- mixdev_ioctl(s->codec, SOUND_MIXER_WRITE_ALTPCM,
+ mixdev_ioctl(&s->codec, SOUND_MIXER_WRITE_ALTPCM,
(unsigned long) &val);
if (!(s->codec_ext_caps & AC97_EXTID_VRA)) {
s->no_vra = 1;
} else if (!vra) {
// Boot option says disable VRA
- u16 ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
- wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ u16 ac97_extstat = rdcodec(&s->codec, AC97_EXTENDED_STATUS);
+ wrcodec(&s->codec, AC97_EXTENDED_STATUS,
ac97_extstat & ~AC97_EXTSTAT_VRA);
s->no_vra = 1;
}
/* set mic to be the recording source */
val = SOUND_MASK_MIC;
- mixdev_ioctl(s->codec, SOUND_MIXER_WRITE_RECSRC,
+ mixdev_ioctl(&s->codec, SOUND_MIXER_WRITE_RECSRC,
(unsigned long) &val);
#ifdef AU1000_DEBUG
sprintf(proc_str, "driver/%s/%d/ac97", AU1000_MODULE_NAME,
- s->codec->id);
+ s->codec.id);
s->ac97_ps = create_proc_read_entry (proc_str, 0, NULL,
- ac97_read_proc, s->codec);
+ ac97_read_proc, &s->codec);
+#endif
+
+#ifdef CONFIG_MIPS_XXS1500
+ /* deassert eapd */
+ wrcodec(&s->codec, AC97_POWER_CONTROL,
+ rdcodec(&s->codec, AC97_POWER_CONTROL) & ~0x8000);
+ /* mute a number of signals which seem to be causing problems
+ * if not muted.
+ */
+ wrcodec(&s->codec, AC97_PCBEEP_VOL, 0x8000);
+ wrcodec(&s->codec, AC97_PHONE_VOL, 0x8008);
+ wrcodec(&s->codec, AC97_MIC_VOL, 0x8008);
+ wrcodec(&s->codec, AC97_LINEIN_VOL, 0x8808);
+ wrcodec(&s->codec, AC97_CD_VOL, 0x8808);
+ wrcodec(&s->codec, AC97_VIDEO_VOL, 0x8808);
+ wrcodec(&s->codec, AC97_AUX_VOL, 0x8808);
+ wrcodec(&s->codec, AC97_PCMOUT_VOL, 0x0808);
+ wrcodec(&s->codec, AC97_GENERAL_PURPOSE, 0x2000);
#endif
return 0;
err_dev3:
- unregister_sound_mixer(s->codec->dev_mixer);
+ unregister_sound_mixer(s->codec.dev_mixer);
err_dev2:
unregister_sound_dsp(s->dev_audio);
err_dev1:
err_dma2:
free_au1000_dma(s->dma_dac.dmanr);
err_dma1:
- release_region(virt_to_phys((void *) AC97C_CONFIG), 0x14);
- err_codec:
- ac97_release_codec(s->codec);
+ release_mem_region(CPHYSADDR(AC97C_CONFIG), 0x14);
return -1;
}
synchronize_irq();
free_au1000_dma(s->dma_adc.dmanr);
free_au1000_dma(s->dma_dac.dmanr);
- release_region(virt_to_phys((void *) AC97C_CONFIG), 0x14);
+ release_mem_region(CPHYSADDR(AC97C_CONFIG), 0x14);
unregister_sound_dsp(s->dev_audio);
- unregister_sound_mixer(s->codec->dev_mixer);
- ac97_release_codec(s->codec);
+ unregister_sound_mixer(s->codec.dev_mixer);
}
static int __init init_au1000(void)
if (!options || !*options)
return 0;
- while (this_opt = strsep(&options, ",")) {
+ while ((this_opt = strsep(&options, ","))) {
if (!*this_opt)
continue;
if (!strncmp(this_opt, "vra", 3)) {
--- /dev/null
+/*
+ * au1550_ac97.c -- Sound driver for Alchemy Au1550 MIPS Internet Edge
+ * Processor.
+ *
+ * Copyright 2004 Embedded Edge, LLC
+ * dan@embeddededge.com
+ *
+ * Mostly copied from the au1000.c driver and some from the
+ * PowerMac dbdma driver.
+ * We assume the processor can do memory coherent DMA.
+ *
+ * Ported to 2.6 by Matt Porter <mporter@kernel.crashing.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#undef DEBUG
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/ioport.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/sound.h>
+#include <linux/slab.h>
+#include <linux/soundcard.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/poll.h>
+#include <linux/pci.h>
+#include <linux/bitops.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+#include <linux/ac97_codec.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/hardirq.h>
+#include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/au1xxx_psc.h>
+#include <asm/mach-au1x00/au1xxx_dbdma.h>
+
+#undef OSS_DOCUMENTED_MIXER_SEMANTICS
+
+/* misc stuff */
+#define POLL_COUNT 0x50000
+#define AC97_EXT_DACS (AC97_EXTID_SDAC | AC97_EXTID_CDAC | AC97_EXTID_LDAC)
+
+/* The number of DBDMA ring descriptors to allocate. No sense making
+ * this too large; if you can't keep up with a few, you aren't likely
+ * to be able to keep up with a lot of them either.
+ */
+#define NUM_DBDMA_DESCRIPTORS 4
+
+#define err(format, arg...) printk(KERN_ERR format "\n" , ## arg)
+
+/* Boot options
+ * 0 = no VRA, 1 = use VRA if codec supports it
+ */
+static int vra = 1;
+module_param(vra, int, 0);
+MODULE_PARM_DESC(vra, "if 1 use VRA if codec supports it");
+
+static struct au1550_state {
+ /* soundcore stuff */
+ int dev_audio;
+
+ struct ac97_codec *codec;
+ unsigned codec_base_caps; /* AC'97 reg 00h, "Reset Register" */
+ unsigned codec_ext_caps; /* AC'97 reg 28h, "Extended Audio ID" */
+ int no_vra; /* do not use VRA */
+
+ spinlock_t lock;
+ struct semaphore open_sem;
+ struct semaphore sem;
+ mode_t open_mode;
+ wait_queue_head_t open_wait;
+
+ struct dmabuf {
+ u32 dmanr;
+ unsigned sample_rate;
+ unsigned src_factor;
+ unsigned sample_size;
+ int num_channels;
+ int dma_bytes_per_sample;
+ int user_bytes_per_sample;
+ int cnt_factor;
+
+ void *rawbuf;
+ unsigned buforder;
+ unsigned numfrag;
+ unsigned fragshift;
+ void *nextIn;
+ void *nextOut;
+ int count;
+ unsigned total_bytes;
+ unsigned error;
+ wait_queue_head_t wait;
+
+ /* redundant, but makes calculations easier */
+ unsigned fragsize;
+ unsigned dma_fragsize;
+ unsigned dmasize;
+ unsigned dma_qcount;
+
+ /* OSS stuff */
+ unsigned mapped:1;
+ unsigned ready:1;
+ unsigned stopped:1;
+ unsigned ossfragshift;
+ int ossmaxfrags;
+ unsigned subdivision;
+ } dma_dac, dma_adc;
+} au1550_state;
+
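+/* Integer base-2 logarithm (position of the highest set bit); used below to
+ * round byte rates down to power-of-two OSS fragment sizes.
+ */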
+static unsigned
+ld2(unsigned int x)
+{
+ unsigned r = 0;
+
+ if (x >= 0x10000) {
+ x >>= 16;
+ r += 16;
+ }
+ if (x >= 0x100) {
+ x >>= 8;
+ r += 8;
+ }
+ if (x >= 0x10) {
+ x >>= 4;
+ r += 4;
+ }
+ if (x >= 4) {
+ x >>= 2;
+ r += 2;
+ }
+ if (x >= 2)
+ r++;
+ return r;
+}
+
+static void
+au1550_delay(int msec)
+{
+ unsigned long tmo;
+ signed long tmo2;
+
+ if (in_interrupt())
+ return;
+
+ tmo = jiffies + (msec * HZ) / 1000;
+ for (;;) {
+ tmo2 = tmo - jiffies;
+ if (tmo2 <= 0)
+ break;
+ schedule_timeout(tmo2);
+ }
+}
+
+static u16
+rdcodec(struct ac97_codec *codec, u8 addr)
+{
+ struct au1550_state *s = (struct au1550_state *)codec->private_data;
+ unsigned long flags;
+ u32 cmd, val;
+ u16 data;
+ int i;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97STAT);
+ au_sync();
+ if (!(val & PSC_AC97STAT_CP))
+ break;
+ }
+ if (i == POLL_COUNT)
+ err("rdcodec: codec cmd pending expired!");
+
+ cmd = (u32)PSC_AC97CDC_INDX(addr);
+ cmd |= PSC_AC97CDC_RD; /* read command */
+ au_writel(cmd, PSC_AC97CDC);
+ au_sync();
+
+ /* now wait for the data
+ */
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97STAT);
+ au_sync();
+ if (!(val & PSC_AC97STAT_CP))
+ break;
+ }
+ if (i == POLL_COUNT) {
+ err("rdcodec: read poll expired!");
+ return 0;
+ }
+
+ /* wait for command done?
+ */
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97EVNT);
+ au_sync();
+ if (val & PSC_AC97EVNT_CD)
+ break;
+ }
+ if (i == POLL_COUNT) {
+ err("rdcodec: read cmdwait expired!");
+ return 0;
+ }
+
+ data = au_readl(PSC_AC97CDC) & 0xffff;
+ au_sync();
+
+ /* Clear command done event.
+ */
+ au_writel(PSC_AC97EVNT_CD, PSC_AC97EVNT);
+ au_sync();
+
+ spin_unlock_irqrestore(&s->lock, flags);
+
+ return data;
+}
+
+
+static void
+wrcodec(struct ac97_codec *codec, u8 addr, u16 data)
+{
+ struct au1550_state *s = (struct au1550_state *)codec->private_data;
+ unsigned long flags;
+ u32 cmd, val;
+ int i;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97STAT);
+ au_sync();
+ if (!(val & PSC_AC97STAT_CP))
+ break;
+ }
+ if (i == POLL_COUNT)
+ err("wrcodec: codec cmd pending expired!");
+
+ cmd = (u32)PSC_AC97CDC_INDX(addr);
+ cmd |= (u32)data;
+ au_writel(cmd, PSC_AC97CDC);
+ au_sync();
+
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97STAT);
+ au_sync();
+ if (!(val & PSC_AC97STAT_CP))
+ break;
+ }
+ if (i == POLL_COUNT)
+ err("wrcodec: codec cmd pending expired!");
+
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97EVNT);
+ au_sync();
+ if (val & PSC_AC97EVNT_CD)
+ break;
+ }
+ if (i == POLL_COUNT)
+ err("wrcodec: read cmdwait expired!");
+
+ /* Clear command done event.
+ */
+ au_writel(PSC_AC97EVNT_CD, PSC_AC97EVNT);
+ au_sync();
+
+ spin_unlock_irqrestore(&s->lock, flags);
+}
+
+static void
+waitcodec(struct ac97_codec *codec)
+{
+ u16 temp;
+ u32 val;
+ int i;
+
+ /* codec_wait is used to wait for a ready state after
+ * an AC97C_RESET.
+ */
+ au1550_delay(10);
+
+ /* first poll the CODEC_READY tag bit
+ */
+ for (i = 0; i < POLL_COUNT; i++) {
+ val = au_readl(PSC_AC97STAT);
+ au_sync();
+ if (val & PSC_AC97STAT_CR)
+ break;
+ }
+ if (i == POLL_COUNT) {
+ err("waitcodec: CODEC_READY poll expired!");
+ return;
+ }
+
+ /* get AC'97 powerdown control/status register
+ */
+ temp = rdcodec(codec, AC97_POWER_CONTROL);
+
+ /* If anything is powered down, power'em up
+ */
+ if (temp & 0x7f00) {
+ /* Power on
+ */
+ wrcodec(codec, AC97_POWER_CONTROL, 0);
+ au1550_delay(100);
+
+ /* Reread
+ */
+ temp = rdcodec(codec, AC97_POWER_CONTROL);
+ }
+
+ /* Check if Codec REF,ANL,DAC,ADC ready
+ */
+ if ((temp & 0x7f0f) != 0x000f)
+ err("codec reg 26 status (0x%x) not ready!!", temp);
+}
+
+/* stop the ADC before calling */
+static void
+set_adc_rate(struct au1550_state *s, unsigned rate)
+{
+ struct dmabuf *adc = &s->dma_adc;
+ struct dmabuf *dac = &s->dma_dac;
+ unsigned adc_rate, dac_rate;
+ u16 ac97_extstat;
+
+ if (s->no_vra) {
+ /* calc SRC factor
+ */
+ adc->src_factor = ((96000 / rate) + 1) >> 1;
+ adc->sample_rate = 48000 / adc->src_factor;
+ return;
+ }
+
+ adc->src_factor = 1;
+
+ ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
+
+ rate = rate > 48000 ? 48000 : rate;
+
+ /* enable VRA
+ */
+ wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ ac97_extstat | AC97_EXTSTAT_VRA);
+
+ /* now write the sample rate
+ */
+ wrcodec(s->codec, AC97_PCM_LR_ADC_RATE, (u16) rate);
+
+ /* read it back for actual supported rate
+ */
+ adc_rate = rdcodec(s->codec, AC97_PCM_LR_ADC_RATE);
+
+ pr_debug("set_adc_rate: set to %d Hz\n", adc_rate);
+
+	/* some codecs don't allow unequal DAC and ADC rates, in which case
+ * writing one rate reg actually changes both.
+ */
+ dac_rate = rdcodec(s->codec, AC97_PCM_FRONT_DAC_RATE);
+ if (dac->num_channels > 2)
+ wrcodec(s->codec, AC97_PCM_SURR_DAC_RATE, dac_rate);
+ if (dac->num_channels > 4)
+ wrcodec(s->codec, AC97_PCM_LFE_DAC_RATE, dac_rate);
+
+ adc->sample_rate = adc_rate;
+ dac->sample_rate = dac_rate;
+}
+
+/* stop the DAC before calling */
+static void
+set_dac_rate(struct au1550_state *s, unsigned rate)
+{
+ struct dmabuf *dac = &s->dma_dac;
+ struct dmabuf *adc = &s->dma_adc;
+ unsigned adc_rate, dac_rate;
+ u16 ac97_extstat;
+
+ if (s->no_vra) {
+ /* calc SRC factor
+ */
+ dac->src_factor = ((96000 / rate) + 1) >> 1;
+ dac->sample_rate = 48000 / dac->src_factor;
+ return;
+ }
+
+ dac->src_factor = 1;
+
+ ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
+
+ rate = rate > 48000 ? 48000 : rate;
+
+ /* enable VRA
+ */
+ wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ ac97_extstat | AC97_EXTSTAT_VRA);
+
+ /* now write the sample rate
+ */
+ wrcodec(s->codec, AC97_PCM_FRONT_DAC_RATE, (u16) rate);
+
+ /* I don't support different sample rates for multichannel,
+ * so make these channels the same.
+ */
+ if (dac->num_channels > 2)
+ wrcodec(s->codec, AC97_PCM_SURR_DAC_RATE, (u16) rate);
+ if (dac->num_channels > 4)
+ wrcodec(s->codec, AC97_PCM_LFE_DAC_RATE, (u16) rate);
+ /* read it back for actual supported rate
+ */
+ dac_rate = rdcodec(s->codec, AC97_PCM_FRONT_DAC_RATE);
+
+ pr_debug("set_dac_rate: set to %d Hz\n", dac_rate);
+
+	/* some codecs don't allow unequal DAC and ADC rates, in which case
+ * writing one rate reg actually changes both.
+ */
+ adc_rate = rdcodec(s->codec, AC97_PCM_LR_ADC_RATE);
+
+ dac->sample_rate = dac_rate;
+ adc->sample_rate = adc_rate;
+}
+
+static void
+stop_dac(struct au1550_state *s)
+{
+ struct dmabuf *db = &s->dma_dac;
+ u32 stat;
+ unsigned long flags;
+
+ if (db->stopped)
+ return;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ au_writel(PSC_AC97PCR_TP, PSC_AC97PCR);
+ au_sync();
+
+ /* Wait for Transmit Busy to show disabled.
+ */
+ do {
+ stat = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((stat & PSC_AC97STAT_TB) != 0);
+
+ au1xxx_dbdma_reset(db->dmanr);
+
+ db->stopped = 1;
+
+ spin_unlock_irqrestore(&s->lock, flags);
+}
+
+static void
+stop_adc(struct au1550_state *s)
+{
+ struct dmabuf *db = &s->dma_adc;
+ unsigned long flags;
+ u32 stat;
+
+ if (db->stopped)
+ return;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ au_writel(PSC_AC97PCR_RP, PSC_AC97PCR);
+ au_sync();
+
+ /* Wait for Receive Busy to show disabled.
+ */
+ do {
+ stat = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((stat & PSC_AC97STAT_RB) != 0);
+
+ au1xxx_dbdma_reset(db->dmanr);
+
+ db->stopped = 1;
+
+ spin_unlock_irqrestore(&s->lock, flags);
+}
+
+
+static void
+set_xmit_slots(int num_channels)
+{
+ u32 ac97_config, stat;
+
+ ac97_config = au_readl(PSC_AC97CFG);
+ au_sync();
+ ac97_config &= ~(PSC_AC97CFG_TXSLOT_MASK | PSC_AC97CFG_DE_ENABLE);
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
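+	/* Cases below intentionally fall through: each wider channel mode
+	 * also enables the slots used by the narrower ones.
+	 */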
+ switch (num_channels) {
+ case 6: /* stereo with surround and center/LFE,
+ * slots 3,4,6,7,8,9
+ */
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(6);
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(9);
+
+ case 4: /* stereo with surround, slots 3,4,7,8 */
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(7);
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(8);
+
+ case 2: /* stereo, slots 3,4 */
+ case 1: /* mono */
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(3);
+ ac97_config |= PSC_AC97CFG_TXSLOT_ENA(4);
+ }
+
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
+ ac97_config |= PSC_AC97CFG_DE_ENABLE;
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
+ /* Wait for Device ready.
+ */
+ do {
+ stat = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((stat & PSC_AC97STAT_DR) == 0);
+}
+
+static void
+set_recv_slots(int num_channels)
+{
+ u32 ac97_config, stat;
+
+ ac97_config = au_readl(PSC_AC97CFG);
+ au_sync();
+ ac97_config &= ~(PSC_AC97CFG_RXSLOT_MASK | PSC_AC97CFG_DE_ENABLE);
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
+ /* Always enable slots 3 and 4 (stereo). Slot 6 is
+ * optional Mic ADC, which we don't support yet.
+ */
+ ac97_config |= PSC_AC97CFG_RXSLOT_ENA(3);
+ ac97_config |= PSC_AC97CFG_RXSLOT_ENA(4);
+
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
+ ac97_config |= PSC_AC97CFG_DE_ENABLE;
+ au_writel(ac97_config, PSC_AC97CFG);
+ au_sync();
+
+ /* Wait for Device ready.
+ */
+ do {
+ stat = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((stat & PSC_AC97STAT_DR) == 0);
+}
+
+static void
+start_dac(struct au1550_state *s)
+{
+ struct dmabuf *db = &s->dma_dac;
+ unsigned long flags;
+
+ if (!db->stopped)
+ return;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ set_xmit_slots(db->num_channels);
+ au_writel(PSC_AC97PCR_TC, PSC_AC97PCR);
+ au_sync();
+ au_writel(PSC_AC97PCR_TS, PSC_AC97PCR);
+ au_sync();
+
+ au1xxx_dbdma_start(db->dmanr);
+
+ db->stopped = 0;
+
+ spin_unlock_irqrestore(&s->lock, flags);
+}
+
+static void
+start_adc(struct au1550_state *s)
+{
+ struct dmabuf *db = &s->dma_adc;
+ int i;
+
+ if (!db->stopped)
+ return;
+
+ /* Put two buffers on the ring to get things started.
+ */
+ for (i=0; i<2; i++) {
+ au1xxx_dbdma_put_dest(db->dmanr, db->nextIn, db->dma_fragsize);
+
+ db->nextIn += db->dma_fragsize;
+ if (db->nextIn >= db->rawbuf + db->dmasize)
+ db->nextIn -= db->dmasize;
+ }
+
+ set_recv_slots(db->num_channels);
+ au1xxx_dbdma_start(db->dmanr);
+ au_writel(PSC_AC97PCR_RC, PSC_AC97PCR);
+ au_sync();
+ au_writel(PSC_AC97PCR_RS, PSC_AC97PCR);
+ au_sync();
+
+ db->stopped = 0;
+}
+
+static int
+prog_dmabuf(struct au1550_state *s, struct dmabuf *db)
+{
+ unsigned user_bytes_per_sec;
+ unsigned bufs;
+ unsigned rate = db->sample_rate;
+
+ if (!db->rawbuf) {
+ db->ready = db->mapped = 0;
+ db->buforder = 5; /* 32 * PAGE_SIZE */
+ db->rawbuf = kmalloc((PAGE_SIZE << db->buforder), GFP_KERNEL);
+ if (!db->rawbuf)
+ return -ENOMEM;
+ }
+
+ db->cnt_factor = 1;
+ if (db->sample_size == 8)
+ db->cnt_factor *= 2;
+ if (db->num_channels == 1)
+ db->cnt_factor *= 2;
+ db->cnt_factor *= db->src_factor;
+
+ db->count = 0;
+ db->dma_qcount = 0;
+ db->nextIn = db->nextOut = db->rawbuf;
+
+ db->user_bytes_per_sample = (db->sample_size>>3) * db->num_channels;
+ db->dma_bytes_per_sample = 2 * ((db->num_channels == 1) ?
+ 2 : db->num_channels);
+
+ user_bytes_per_sec = rate * db->user_bytes_per_sample;
+ bufs = PAGE_SIZE << db->buforder;
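+	/* Pick a fragment size of roughly 10 ms of audio (rounded down to a
+	 * power of two, divided by any OSS subdivision), unless the
+	 * application requested a specific size via SNDCTL_DSP_SETFRAGMENT.
+	 */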
+ if (db->ossfragshift) {
+ if ((1000 << db->ossfragshift) < user_bytes_per_sec)
+ db->fragshift = ld2(user_bytes_per_sec/1000);
+ else
+ db->fragshift = db->ossfragshift;
+ } else {
+ db->fragshift = ld2(user_bytes_per_sec / 100 /
+ (db->subdivision ? db->subdivision : 1));
+ if (db->fragshift < 3)
+ db->fragshift = 3;
+ }
+
+ db->fragsize = 1 << db->fragshift;
+ db->dma_fragsize = db->fragsize * db->cnt_factor;
+ db->numfrag = bufs / db->dma_fragsize;
+
+ while (db->numfrag < 4 && db->fragshift > 3) {
+ db->fragshift--;
+ db->fragsize = 1 << db->fragshift;
+ db->dma_fragsize = db->fragsize * db->cnt_factor;
+ db->numfrag = bufs / db->dma_fragsize;
+ }
+
+ if (db->ossmaxfrags >= 4 && db->ossmaxfrags < db->numfrag)
+ db->numfrag = db->ossmaxfrags;
+
+ db->dmasize = db->dma_fragsize * db->numfrag;
+ memset(db->rawbuf, 0, bufs);
+
+ pr_debug("prog_dmabuf: rate=%d, samplesize=%d, channels=%d\n",
+ rate, db->sample_size, db->num_channels);
+ pr_debug("prog_dmabuf: fragsize=%d, cnt_factor=%d, dma_fragsize=%d\n",
+ db->fragsize, db->cnt_factor, db->dma_fragsize);
+ pr_debug("prog_dmabuf: numfrag=%d, dmasize=%d\n", db->numfrag, db->dmasize);
+
+ db->ready = 1;
+ return 0;
+}
+
+static int
+prog_dmabuf_adc(struct au1550_state *s)
+{
+ stop_adc(s);
+ return prog_dmabuf(s, &s->dma_adc);
+
+}
+
+static int
+prog_dmabuf_dac(struct au1550_state *s)
+{
+ stop_dac(s);
+ return prog_dmabuf(s, &s->dma_dac);
+}
+
+
+/* hold spinlock for the following */
+static void
+dac_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct au1550_state *s = (struct au1550_state *) dev_id;
+ struct dmabuf *db = &s->dma_dac;
+ u32 ac97c_stat;
+
+ ac97c_stat = au_readl(PSC_AC97STAT);
+ if (ac97c_stat & (AC97C_XU | AC97C_XO | AC97C_TE))
+ pr_debug("AC97C status = 0x%08x\n", ac97c_stat);
+ db->dma_qcount--;
+
+ if (db->count >= db->fragsize) {
+ if (au1xxx_dbdma_put_source(db->dmanr, db->nextOut,
+ db->fragsize) == 0) {
+ err("qcount < 2 and no ring room!");
+ }
+ db->nextOut += db->fragsize;
+ if (db->nextOut >= db->rawbuf + db->dmasize)
+ db->nextOut -= db->dmasize;
+ db->count -= db->fragsize;
+ db->total_bytes += db->dma_fragsize;
+ db->dma_qcount++;
+ }
+
+ /* wake up anybody listening */
+ if (waitqueue_active(&db->wait))
+ wake_up(&db->wait);
+}
+
+
+static void
+adc_dma_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct au1550_state *s = (struct au1550_state *)dev_id;
+ struct dmabuf *dp = &s->dma_adc;
+ u32 obytes;
+ char *obuf;
+
+ /* Pull the buffer from the dma queue.
+ */
+ au1xxx_dbdma_get_dest(dp->dmanr, (void *)(&obuf), &obytes);
+
+ if ((dp->count + obytes) > dp->dmasize) {
+ /* Overrun. Stop ADC and log the error
+ */
+ stop_adc(s);
+ dp->error++;
+ err("adc overrun");
+ return;
+ }
+
+ /* Put a new empty buffer on the destination DMA.
+ */
+ au1xxx_dbdma_put_dest(dp->dmanr, dp->nextIn, dp->dma_fragsize);
+
+ dp->nextIn += dp->dma_fragsize;
+ if (dp->nextIn >= dp->rawbuf + dp->dmasize)
+ dp->nextIn -= dp->dmasize;
+
+ dp->count += obytes;
+ dp->total_bytes += obytes;
+
+ /* wake up anybody listening
+ */
+ if (waitqueue_active(&dp->wait))
+ wake_up(&dp->wait);
+
+}
+
+static loff_t
+au1550_llseek(struct file *file, loff_t offset, int origin)
+{
+ return -ESPIPE;
+}
+
+
+static int
+au1550_open_mixdev(struct inode *inode, struct file *file)
+{
+ file->private_data = &au1550_state;
+ return 0;
+}
+
+static int
+au1550_release_mixdev(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static int
+mixdev_ioctl(struct ac97_codec *codec, unsigned int cmd,
+ unsigned long arg)
+{
+ return codec->mixer_ioctl(codec, cmd, arg);
+}
+
+static int
+au1550_ioctl_mixdev(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ struct ac97_codec *codec = s->codec;
+
+ return mixdev_ioctl(codec, cmd, arg);
+}
+
+static /*const */ struct file_operations au1550_mixer_fops = {
+ owner:THIS_MODULE,
+ llseek:au1550_llseek,
+ ioctl:au1550_ioctl_mixdev,
+ open:au1550_open_mixdev,
+ release:au1550_release_mixdev,
+};
+
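+/* Sleep until no more than one fragment of queued playback data remains, so
+ * that SNDCTL_DSP_SYNC does not cut off audio still in the ring; returns
+ * -EBUSY instead of sleeping when nonblock is set.
+ */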
+static int
+drain_dac(struct au1550_state *s, int nonblock)
+{
+ unsigned long flags;
+ int count, tmo;
+
+ if (s->dma_dac.mapped || !s->dma_dac.ready || s->dma_dac.stopped)
+ return 0;
+
+ for (;;) {
+ spin_lock_irqsave(&s->lock, flags);
+ count = s->dma_dac.count;
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count <= s->dma_dac.fragsize)
+ break;
+ if (signal_pending(current))
+ break;
+ if (nonblock)
+ return -EBUSY;
+ tmo = 1000 * count / (s->no_vra ?
+ 48000 : s->dma_dac.sample_rate);
+ tmo /= s->dma_dac.dma_bytes_per_sample;
+ au1550_delay(tmo);
+ }
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+ return 0;
+}
+
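+/* Helpers to convert between signed 16-bit and unsigned 8-bit sample
+ * formats (AFMT_S16_LE <-> AFMT_U8).
+ */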
+static inline u8 S16_TO_U8(s16 ch)
+{
+ return (u8) (ch >> 8) + 0x80;
+}
+static inline s16 U8_TO_S16(u8 ch)
+{
+ return (s16) (ch - 0x80) << 8;
+}
+
+/*
+ * Translates user samples to dma buffer suitable for AC'97 DAC data:
+ * If mono, copy left channel to right channel in dma buffer.
+ * If 8 bit samples, cvt to 16-bit before writing to dma buffer.
+ * If interpolating (no VRA), duplicate every audio frame src_factor times.
+ */
+static int
+translate_from_user(struct dmabuf *db, char* dmabuf, char* userbuf,
+ int dmacount)
+{
+ int sample, i;
+ int interp_bytes_per_sample;
+ int num_samples;
+ int mono = (db->num_channels == 1);
+ char usersample[12];
+ s16 ch, dmasample[6];
+
+ if (db->sample_size == 16 && !mono && db->src_factor == 1) {
+ /* no translation necessary, just copy
+ */
+ if (copy_from_user(dmabuf, userbuf, dmacount))
+ return -EFAULT;
+ return dmacount;
+ }
+
+ interp_bytes_per_sample = db->dma_bytes_per_sample * db->src_factor;
+ num_samples = dmacount / interp_bytes_per_sample;
+
+ for (sample = 0; sample < num_samples; sample++) {
+ if (copy_from_user(usersample, userbuf,
+ db->user_bytes_per_sample)) {
+ return -EFAULT;
+ }
+
+ for (i = 0; i < db->num_channels; i++) {
+ if (db->sample_size == 8)
+ ch = U8_TO_S16(usersample[i]);
+ else
+ ch = *((s16 *) (&usersample[i * 2]));
+ dmasample[i] = ch;
+ if (mono)
+ dmasample[i + 1] = ch; /* right channel */
+ }
+
+ /* duplicate every audio frame src_factor times
+ */
+ for (i = 0; i < db->src_factor; i++)
+ memcpy(dmabuf, dmasample, db->dma_bytes_per_sample);
+
+ userbuf += db->user_bytes_per_sample;
+ dmabuf += interp_bytes_per_sample;
+ }
+
+ return num_samples * interp_bytes_per_sample;
+}
+
+/*
+ * Translates AC'97 ADC samples to user buffer:
+ * If mono, send only left channel to user buffer.
+ * If 8 bit samples, cvt from 16 to 8 bit before writing to user buffer.
+ * If decimating (no VRA), skip over src_factor audio frames.
+ */
+static int
+translate_to_user(struct dmabuf *db, char* userbuf, char* dmabuf,
+ int dmacount)
+{
+ int sample, i;
+ int interp_bytes_per_sample;
+ int num_samples;
+ int mono = (db->num_channels == 1);
+ char usersample[12];
+
+ if (db->sample_size == 16 && !mono && db->src_factor == 1) {
+ /* no translation necessary, just copy
+ */
+ if (copy_to_user(userbuf, dmabuf, dmacount))
+ return -EFAULT;
+ return dmacount;
+ }
+
+ interp_bytes_per_sample = db->dma_bytes_per_sample * db->src_factor;
+ num_samples = dmacount / interp_bytes_per_sample;
+
+ for (sample = 0; sample < num_samples; sample++) {
+ for (i = 0; i < db->num_channels; i++) {
+ if (db->sample_size == 8)
+ usersample[i] =
+ S16_TO_U8(*((s16 *) (&dmabuf[i * 2])));
+ else
+ *((s16 *) (&usersample[i * 2])) =
+ *((s16 *) (&dmabuf[i * 2]));
+ }
+
+ if (copy_to_user(userbuf, usersample,
+ db->user_bytes_per_sample)) {
+ return -EFAULT;
+ }
+
+ userbuf += db->user_bytes_per_sample;
+ dmabuf += interp_bytes_per_sample;
+ }
+
+ return num_samples * interp_bytes_per_sample;
+}
+
+/*
+ * Copy audio data to/from user buffer from/to dma buffer, taking care
+ * that we wrap when reading/writing the dma buffer. Returns actual byte
+ * count written to or read from the dma buffer.
+ */
+static int
+copy_dmabuf_user(struct dmabuf *db, char* userbuf, int count, int to_user)
+{
+ char *bufptr = to_user ? db->nextOut : db->nextIn;
+ char *bufend = db->rawbuf + db->dmasize;
+ int cnt, ret;
+
+ if (bufptr + count > bufend) {
+ int partial = (int) (bufend - bufptr);
+ if (to_user) {
+ if ((cnt = translate_to_user(db, userbuf,
+ bufptr, partial)) < 0)
+ return cnt;
+ ret = cnt;
+ if ((cnt = translate_to_user(db, userbuf + partial,
+ db->rawbuf,
+ count - partial)) < 0)
+ return cnt;
+ ret += cnt;
+ } else {
+ if ((cnt = translate_from_user(db, bufptr, userbuf,
+ partial)) < 0)
+ return cnt;
+ ret = cnt;
+ if ((cnt = translate_from_user(db, db->rawbuf,
+ userbuf + partial,
+ count - partial)) < 0)
+ return cnt;
+ ret += cnt;
+ }
+ } else {
+ if (to_user)
+ ret = translate_to_user(db, userbuf, bufptr, count);
+ else
+ ret = translate_from_user(db, bufptr, userbuf, count);
+ }
+
+ return ret;
+}
+
+
+static ssize_t
+au1550_read(struct file *file, char *buffer, size_t count, loff_t *ppos)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ struct dmabuf *db = &s->dma_adc;
+ DECLARE_WAITQUEUE(wait, current);
+ ssize_t ret;
+ unsigned long flags;
+ int cnt, usercnt, avail;
+
+ if (db->mapped)
+ return -ENXIO;
+ if (!access_ok(VERIFY_WRITE, buffer, count))
+ return -EFAULT;
+ ret = 0;
+
+ count *= db->cnt_factor;
+
+ down(&s->sem);
+ add_wait_queue(&db->wait, &wait);
+
+ while (count > 0) {
+ /* wait for samples in ADC dma buffer
+ */
+ do {
+ if (db->stopped)
+ start_adc(s);
+ spin_lock_irqsave(&s->lock, flags);
+ avail = db->count;
+ if (avail <= 0)
+ __set_current_state(TASK_INTERRUPTIBLE);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (avail <= 0) {
+ if (file->f_flags & O_NONBLOCK) {
+ if (!ret)
+ ret = -EAGAIN;
+ goto out;
+ }
+ up(&s->sem);
+ schedule();
+ if (signal_pending(current)) {
+ if (!ret)
+ ret = -ERESTARTSYS;
+ goto out2;
+ }
+ down(&s->sem);
+ }
+ } while (avail <= 0);
+
+ /* copy from nextOut to user
+ */
+ if ((cnt = copy_dmabuf_user(db, buffer,
+ count > avail ?
+ avail : count, 1)) < 0) {
+ if (!ret)
+ ret = -EFAULT;
+ goto out;
+ }
+
+ spin_lock_irqsave(&s->lock, flags);
+ db->count -= cnt;
+ db->nextOut += cnt;
+ if (db->nextOut >= db->rawbuf + db->dmasize)
+ db->nextOut -= db->dmasize;
+ spin_unlock_irqrestore(&s->lock, flags);
+
+ count -= cnt;
+ usercnt = cnt / db->cnt_factor;
+ buffer += usercnt;
+ ret += usercnt;
+ } /* while (count > 0) */
+
+out:
+ up(&s->sem);
+out2:
+ remove_wait_queue(&db->wait, &wait);
+ set_current_state(TASK_RUNNING);
+ return ret;
+}
+
+static ssize_t
+au1550_write(struct file *file, const char *buffer, size_t count, loff_t * ppos)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ struct dmabuf *db = &s->dma_dac;
+ DECLARE_WAITQUEUE(wait, current);
+ ssize_t ret = 0;
+ unsigned long flags;
+ int cnt, usercnt, avail;
+
+ pr_debug("write: count=%d\n", count);
+
+ if (db->mapped)
+ return -ENXIO;
+ if (!access_ok(VERIFY_READ, buffer, count))
+ return -EFAULT;
+
+ count *= db->cnt_factor;
+
+ down(&s->sem);
+ add_wait_queue(&db->wait, &wait);
+
+ while (count > 0) {
+ /* wait for space in playback buffer
+ */
+ do {
+ spin_lock_irqsave(&s->lock, flags);
+ avail = (int) db->dmasize - db->count;
+ if (avail <= 0)
+ __set_current_state(TASK_INTERRUPTIBLE);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (avail <= 0) {
+ if (file->f_flags & O_NONBLOCK) {
+ if (!ret)
+ ret = -EAGAIN;
+ goto out;
+ }
+ up(&s->sem);
+ schedule();
+ if (signal_pending(current)) {
+ if (!ret)
+ ret = -ERESTARTSYS;
+ goto out2;
+ }
+ down(&s->sem);
+ }
+ } while (avail <= 0);
+
+ /* copy from user to nextIn
+ */
+ if ((cnt = copy_dmabuf_user(db, (char *) buffer,
+ count > avail ?
+ avail : count, 0)) < 0) {
+ if (!ret)
+ ret = -EFAULT;
+ goto out;
+ }
+
+ spin_lock_irqsave(&s->lock, flags);
+ db->count += cnt;
+ db->nextIn += cnt;
+ if (db->nextIn >= db->rawbuf + db->dmasize)
+ db->nextIn -= db->dmasize;
+
+ /* If the data is available, we want to keep two buffers
+ * on the dma queue. If the queue count reaches zero,
+ * we know the dma has stopped.
+ */
+ while ((db->dma_qcount < 2) && (db->count >= db->fragsize)) {
+ if (au1xxx_dbdma_put_source(db->dmanr, db->nextOut,
+ db->fragsize) == 0) {
+ err("qcount < 2 and no ring room!");
+ }
+ db->nextOut += db->fragsize;
+ if (db->nextOut >= db->rawbuf + db->dmasize)
+ db->nextOut -= db->dmasize;
+ db->total_bytes += db->dma_fragsize;
+ if (db->dma_qcount == 0)
+ start_dac(s);
+ db->dma_qcount++;
+ }
+ spin_unlock_irqrestore(&s->lock, flags);
+
+ count -= cnt;
+ usercnt = cnt / db->cnt_factor;
+ buffer += usercnt;
+ ret += usercnt;
+ } /* while (count > 0) */
+
+out:
+ up(&s->sem);
+out2:
+ remove_wait_queue(&db->wait, &wait);
+ set_current_state(TASK_RUNNING);
+ return ret;
+}
+
+
+/* No kernel lock - we have our own spinlock */
+static unsigned int
+au1550_poll(struct file *file, struct poll_table_struct *wait)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ unsigned long flags;
+ unsigned int mask = 0;
+
+ if (file->f_mode & FMODE_WRITE) {
+ if (!s->dma_dac.ready)
+ return 0;
+ poll_wait(file, &s->dma_dac.wait, wait);
+ }
+ if (file->f_mode & FMODE_READ) {
+ if (!s->dma_adc.ready)
+ return 0;
+ poll_wait(file, &s->dma_adc.wait, wait);
+ }
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ if (file->f_mode & FMODE_READ) {
+ if (s->dma_adc.count >= (signed)s->dma_adc.dma_fragsize)
+ mask |= POLLIN | POLLRDNORM;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ if (s->dma_dac.mapped) {
+ if (s->dma_dac.count >=
+ (signed)s->dma_dac.dma_fragsize)
+ mask |= POLLOUT | POLLWRNORM;
+ } else {
+ if ((signed) s->dma_dac.dmasize >=
+ s->dma_dac.count + (signed)s->dma_dac.dma_fragsize)
+ mask |= POLLOUT | POLLWRNORM;
+ }
+ }
+ spin_unlock_irqrestore(&s->lock, flags);
+ return mask;
+}
+
+static int
+au1550_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ struct dmabuf *db;
+ unsigned long size;
+ int ret = 0;
+
+ lock_kernel();
+ down(&s->sem);
+ if (vma->vm_flags & VM_WRITE)
+ db = &s->dma_dac;
+ else if (vma->vm_flags & VM_READ)
+ db = &s->dma_adc;
+ else {
+ ret = -EINVAL;
+ goto out;
+ }
+ if (vma->vm_pgoff != 0) {
+ ret = -EINVAL;
+ goto out;
+ }
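+	/* The raw DMA buffer can only be mapped from offset 0, and the
+	 * requested size must not exceed the buffer itself.
+	 */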
+ size = vma->vm_end - vma->vm_start;
+ if (size > (PAGE_SIZE << db->buforder)) {
+ ret = -EINVAL;
+ goto out;
+ }
+ if (remap_pfn_range(vma, vma->vm_start, page_to_pfn(virt_to_page(db->rawbuf)),
+ size, vma->vm_page_prot)) {
+ ret = -EAGAIN;
+ goto out;
+ }
+ vma->vm_flags &= ~VM_IO;
+ db->mapped = 1;
+out:
+ up(&s->sem);
+ unlock_kernel();
+ return ret;
+}
+
+#ifdef DEBUG
+static struct ioctl_str_t {
+ unsigned int cmd;
+ const char *str;
+} ioctl_str[] = {
+ {SNDCTL_DSP_RESET, "SNDCTL_DSP_RESET"},
+ {SNDCTL_DSP_SYNC, "SNDCTL_DSP_SYNC"},
+ {SNDCTL_DSP_SPEED, "SNDCTL_DSP_SPEED"},
+ {SNDCTL_DSP_STEREO, "SNDCTL_DSP_STEREO"},
+ {SNDCTL_DSP_GETBLKSIZE, "SNDCTL_DSP_GETBLKSIZE"},
+ {SNDCTL_DSP_SAMPLESIZE, "SNDCTL_DSP_SAMPLESIZE"},
+ {SNDCTL_DSP_CHANNELS, "SNDCTL_DSP_CHANNELS"},
+ {SOUND_PCM_WRITE_CHANNELS, "SOUND_PCM_WRITE_CHANNELS"},
+ {SOUND_PCM_WRITE_FILTER, "SOUND_PCM_WRITE_FILTER"},
+ {SNDCTL_DSP_POST, "SNDCTL_DSP_POST"},
+ {SNDCTL_DSP_SUBDIVIDE, "SNDCTL_DSP_SUBDIVIDE"},
+ {SNDCTL_DSP_SETFRAGMENT, "SNDCTL_DSP_SETFRAGMENT"},
+ {SNDCTL_DSP_GETFMTS, "SNDCTL_DSP_GETFMTS"},
+ {SNDCTL_DSP_SETFMT, "SNDCTL_DSP_SETFMT"},
+ {SNDCTL_DSP_GETOSPACE, "SNDCTL_DSP_GETOSPACE"},
+ {SNDCTL_DSP_GETISPACE, "SNDCTL_DSP_GETISPACE"},
+ {SNDCTL_DSP_NONBLOCK, "SNDCTL_DSP_NONBLOCK"},
+ {SNDCTL_DSP_GETCAPS, "SNDCTL_DSP_GETCAPS"},
+ {SNDCTL_DSP_GETTRIGGER, "SNDCTL_DSP_GETTRIGGER"},
+ {SNDCTL_DSP_SETTRIGGER, "SNDCTL_DSP_SETTRIGGER"},
+ {SNDCTL_DSP_GETIPTR, "SNDCTL_DSP_GETIPTR"},
+ {SNDCTL_DSP_GETOPTR, "SNDCTL_DSP_GETOPTR"},
+ {SNDCTL_DSP_MAPINBUF, "SNDCTL_DSP_MAPINBUF"},
+ {SNDCTL_DSP_MAPOUTBUF, "SNDCTL_DSP_MAPOUTBUF"},
+ {SNDCTL_DSP_SETSYNCRO, "SNDCTL_DSP_SETSYNCRO"},
+ {SNDCTL_DSP_SETDUPLEX, "SNDCTL_DSP_SETDUPLEX"},
+ {SNDCTL_DSP_GETODELAY, "SNDCTL_DSP_GETODELAY"},
+ {SNDCTL_DSP_GETCHANNELMASK, "SNDCTL_DSP_GETCHANNELMASK"},
+ {SNDCTL_DSP_BIND_CHANNEL, "SNDCTL_DSP_BIND_CHANNEL"},
+ {OSS_GETVERSION, "OSS_GETVERSION"},
+ {SOUND_PCM_READ_RATE, "SOUND_PCM_READ_RATE"},
+ {SOUND_PCM_READ_CHANNELS, "SOUND_PCM_READ_CHANNELS"},
+ {SOUND_PCM_READ_BITS, "SOUND_PCM_READ_BITS"},
+ {SOUND_PCM_READ_FILTER, "SOUND_PCM_READ_FILTER"}
+};
+#endif
+
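+/* Number of bytes the DMA engine has already transferred within the fragment
+ * it is currently working on (0 when the stream is stopped).
+ */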
+static int
+dma_count_done(struct dmabuf *db)
+{
+ if (db->stopped)
+ return 0;
+
+ return db->dma_fragsize - au1xxx_get_dma_residue(db->dmanr);
+}
+
+
+static int
+au1550_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+ unsigned long flags;
+ audio_buf_info abinfo;
+ count_info cinfo;
+ int count;
+ int val, mapped, ret, diff;
+
+ mapped = ((file->f_mode & FMODE_WRITE) && s->dma_dac.mapped) ||
+ ((file->f_mode & FMODE_READ) && s->dma_adc.mapped);
+
+#ifdef DEBUG
+ for (count=0; count<sizeof(ioctl_str)/sizeof(ioctl_str[0]); count++) {
+ if (ioctl_str[count].cmd == cmd)
+ break;
+ }
+ if (count < sizeof(ioctl_str) / sizeof(ioctl_str[0]))
+		pr_debug("ioctl %s, arg=0x%lx\n", ioctl_str[count].str, arg);
+ else
+ pr_debug("ioctl 0x%x unknown, arg=0x%lx\n", cmd, arg);
+#endif
+
+ switch (cmd) {
+ case OSS_GETVERSION:
+ return put_user(SOUND_VERSION, (int *) arg);
+
+ case SNDCTL_DSP_SYNC:
+ if (file->f_mode & FMODE_WRITE)
+ return drain_dac(s, file->f_flags & O_NONBLOCK);
+ return 0;
+
+ case SNDCTL_DSP_SETDUPLEX:
+ return 0;
+
+ case SNDCTL_DSP_GETCAPS:
+ return put_user(DSP_CAP_DUPLEX | DSP_CAP_REALTIME |
+ DSP_CAP_TRIGGER | DSP_CAP_MMAP, (int *)arg);
+
+ case SNDCTL_DSP_RESET:
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ synchronize_irq();
+ s->dma_dac.count = s->dma_dac.total_bytes = 0;
+ s->dma_dac.nextIn = s->dma_dac.nextOut =
+ s->dma_dac.rawbuf;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ synchronize_irq();
+ s->dma_adc.count = s->dma_adc.total_bytes = 0;
+ s->dma_adc.nextIn = s->dma_adc.nextOut =
+ s->dma_adc.rawbuf;
+ }
+ return 0;
+
+ case SNDCTL_DSP_SPEED:
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (val >= 0) {
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ set_adc_rate(s, val);
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ set_dac_rate(s, val);
+ }
+ if (s->open_mode & FMODE_READ)
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ if (s->open_mode & FMODE_WRITE)
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ return put_user((file->f_mode & FMODE_READ) ?
+ s->dma_adc.sample_rate :
+ s->dma_dac.sample_rate,
+ (int *)arg);
+
+ case SNDCTL_DSP_STEREO:
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ s->dma_adc.num_channels = val ? 2 : 1;
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ s->dma_dac.num_channels = val ? 2 : 1;
+ if (s->codec_ext_caps & AC97_EXT_DACS) {
+ /* disable surround and center/lfe in AC'97
+ */
+ u16 ext_stat = rdcodec(s->codec,
+ AC97_EXTENDED_STATUS);
+ wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ ext_stat | (AC97_EXTSTAT_PRI |
+ AC97_EXTSTAT_PRJ |
+ AC97_EXTSTAT_PRK));
+ }
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ return 0;
+
+ case SNDCTL_DSP_CHANNELS:
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (val != 0) {
+ if (file->f_mode & FMODE_READ) {
+ if (val < 0 || val > 2)
+ return -EINVAL;
+ stop_adc(s);
+ s->dma_adc.num_channels = val;
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ switch (val) {
+ case 1:
+ case 2:
+ break;
+ case 3:
+ case 5:
+ return -EINVAL;
+ case 4:
+ if (!(s->codec_ext_caps &
+ AC97_EXTID_SDAC))
+ return -EINVAL;
+ break;
+ case 6:
+ if ((s->codec_ext_caps &
+ AC97_EXT_DACS) != AC97_EXT_DACS)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ stop_dac(s);
+ if (val <= 2 &&
+ (s->codec_ext_caps & AC97_EXT_DACS)) {
+ /* disable surround and center/lfe
+ * channels in AC'97
+ */
+ u16 ext_stat =
+ rdcodec(s->codec,
+ AC97_EXTENDED_STATUS);
+ wrcodec(s->codec,
+ AC97_EXTENDED_STATUS,
+ ext_stat | (AC97_EXTSTAT_PRI |
+ AC97_EXTSTAT_PRJ |
+ AC97_EXTSTAT_PRK));
+ } else if (val >= 4) {
+ /* enable surround, center/lfe
+ * channels in AC'97
+ */
+ u16 ext_stat =
+ rdcodec(s->codec,
+ AC97_EXTENDED_STATUS);
+ ext_stat &= ~AC97_EXTSTAT_PRJ;
+ if (val == 6)
+ ext_stat &=
+ ~(AC97_EXTSTAT_PRI |
+ AC97_EXTSTAT_PRK);
+ wrcodec(s->codec,
+ AC97_EXTENDED_STATUS,
+ ext_stat);
+ }
+
+ s->dma_dac.num_channels = val;
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ }
+ return put_user(val, (int *) arg);
+
+ case SNDCTL_DSP_GETFMTS: /* Returns a mask */
+ return put_user(AFMT_S16_LE | AFMT_U8, (int *) arg);
+
+ case SNDCTL_DSP_SETFMT: /* Selects ONE fmt */
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (val != AFMT_QUERY) {
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ if (val == AFMT_S16_LE)
+ s->dma_adc.sample_size = 16;
+ else {
+ val = AFMT_U8;
+ s->dma_adc.sample_size = 8;
+ }
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ if (val == AFMT_S16_LE)
+ s->dma_dac.sample_size = 16;
+ else {
+ val = AFMT_U8;
+ s->dma_dac.sample_size = 8;
+ }
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ } else {
+ if (file->f_mode & FMODE_READ)
+ val = (s->dma_adc.sample_size == 16) ?
+ AFMT_S16_LE : AFMT_U8;
+ else
+ val = (s->dma_dac.sample_size == 16) ?
+ AFMT_S16_LE : AFMT_U8;
+ }
+ return put_user(val, (int *) arg);
+
+ case SNDCTL_DSP_POST:
+ return 0;
+
+ case SNDCTL_DSP_GETTRIGGER:
+ val = 0;
+ spin_lock_irqsave(&s->lock, flags);
+ if (file->f_mode & FMODE_READ && !s->dma_adc.stopped)
+ val |= PCM_ENABLE_INPUT;
+ if (file->f_mode & FMODE_WRITE && !s->dma_dac.stopped)
+ val |= PCM_ENABLE_OUTPUT;
+ spin_unlock_irqrestore(&s->lock, flags);
+ return put_user(val, (int *) arg);
+
+ case SNDCTL_DSP_SETTRIGGER:
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (file->f_mode & FMODE_READ) {
+ if (val & PCM_ENABLE_INPUT)
+ start_adc(s);
+ else
+ stop_adc(s);
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ if (val & PCM_ENABLE_OUTPUT)
+ start_dac(s);
+ else
+ stop_dac(s);
+ }
+ return 0;
+
+ case SNDCTL_DSP_GETOSPACE:
+ if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ abinfo.fragsize = s->dma_dac.fragsize;
+ spin_lock_irqsave(&s->lock, flags);
+ count = s->dma_dac.count;
+ count -= dma_count_done(&s->dma_dac);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count < 0)
+ count = 0;
+ abinfo.bytes = (s->dma_dac.dmasize - count) /
+ s->dma_dac.cnt_factor;
+ abinfo.fragstotal = s->dma_dac.numfrag;
+ abinfo.fragments = abinfo.bytes >> s->dma_dac.fragshift;
+ pr_debug("ioctl SNDCTL_DSP_GETOSPACE: bytes=%d, fragments=%d\n", abinfo.bytes, abinfo.fragments);
+ return copy_to_user((void *) arg, &abinfo,
+ sizeof(abinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_GETISPACE:
+ if (!(file->f_mode & FMODE_READ))
+ return -EINVAL;
+ abinfo.fragsize = s->dma_adc.fragsize;
+ spin_lock_irqsave(&s->lock, flags);
+ count = s->dma_adc.count;
+ count += dma_count_done(&s->dma_adc);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count < 0)
+ count = 0;
+ abinfo.bytes = count / s->dma_adc.cnt_factor;
+ abinfo.fragstotal = s->dma_adc.numfrag;
+ abinfo.fragments = abinfo.bytes >> s->dma_adc.fragshift;
+ return copy_to_user((void *) arg, &abinfo,
+ sizeof(abinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_NONBLOCK:
+ file->f_flags |= O_NONBLOCK;
+ return 0;
+
+ case SNDCTL_DSP_GETODELAY:
+ if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ spin_lock_irqsave(&s->lock, flags);
+ count = s->dma_dac.count;
+ count -= dma_count_done(&s->dma_dac);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count < 0)
+ count = 0;
+ count /= s->dma_dac.cnt_factor;
+ return put_user(count, (int *) arg);
+
+ case SNDCTL_DSP_GETIPTR:
+ if (!(file->f_mode & FMODE_READ))
+ return -EINVAL;
+ spin_lock_irqsave(&s->lock, flags);
+ cinfo.bytes = s->dma_adc.total_bytes;
+ count = s->dma_adc.count;
+ if (!s->dma_adc.stopped) {
+ diff = dma_count_done(&s->dma_adc);
+ count += diff;
+ cinfo.bytes += diff;
+ cinfo.ptr = virt_to_phys(s->dma_adc.nextIn) + diff -
+ virt_to_phys(s->dma_adc.rawbuf);
+ } else
+ cinfo.ptr = virt_to_phys(s->dma_adc.nextIn) -
+ virt_to_phys(s->dma_adc.rawbuf);
+ if (s->dma_adc.mapped)
+ s->dma_adc.count &= (s->dma_adc.dma_fragsize-1);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count < 0)
+ count = 0;
+ cinfo.blocks = count >> s->dma_adc.fragshift;
+		return copy_to_user((void *) arg, &cinfo,
+				    sizeof(cinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_GETOPTR:
+		if (!(file->f_mode & FMODE_WRITE))
+ return -EINVAL;
+ spin_lock_irqsave(&s->lock, flags);
+ cinfo.bytes = s->dma_dac.total_bytes;
+ count = s->dma_dac.count;
+ if (!s->dma_dac.stopped) {
+ diff = dma_count_done(&s->dma_dac);
+ count -= diff;
+ cinfo.bytes += diff;
+ cinfo.ptr = virt_to_phys(s->dma_dac.nextOut) + diff -
+ virt_to_phys(s->dma_dac.rawbuf);
+ } else
+ cinfo.ptr = virt_to_phys(s->dma_dac.nextOut) -
+ virt_to_phys(s->dma_dac.rawbuf);
+ if (s->dma_dac.mapped)
+ s->dma_dac.count &= (s->dma_dac.dma_fragsize-1);
+ spin_unlock_irqrestore(&s->lock, flags);
+ if (count < 0)
+ count = 0;
+ cinfo.blocks = count >> s->dma_dac.fragshift;
+		return copy_to_user((void *) arg, &cinfo,
+				    sizeof(cinfo)) ? -EFAULT : 0;
+
+ case SNDCTL_DSP_GETBLKSIZE:
+ if (file->f_mode & FMODE_WRITE)
+ return put_user(s->dma_dac.fragsize, (int *) arg);
+ else
+ return put_user(s->dma_adc.fragsize, (int *) arg);
+
+ case SNDCTL_DSP_SETFRAGMENT:
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ s->dma_adc.ossfragshift = val & 0xffff;
+ s->dma_adc.ossmaxfrags = (val >> 16) & 0xffff;
+ if (s->dma_adc.ossfragshift < 4)
+ s->dma_adc.ossfragshift = 4;
+ if (s->dma_adc.ossfragshift > 15)
+ s->dma_adc.ossfragshift = 15;
+ if (s->dma_adc.ossmaxfrags < 4)
+ s->dma_adc.ossmaxfrags = 4;
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ s->dma_dac.ossfragshift = val & 0xffff;
+ s->dma_dac.ossmaxfrags = (val >> 16) & 0xffff;
+ if (s->dma_dac.ossfragshift < 4)
+ s->dma_dac.ossfragshift = 4;
+ if (s->dma_dac.ossfragshift > 15)
+ s->dma_dac.ossfragshift = 15;
+ if (s->dma_dac.ossmaxfrags < 4)
+ s->dma_dac.ossmaxfrags = 4;
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ return 0;
+
+ case SNDCTL_DSP_SUBDIVIDE:
+ if ((file->f_mode & FMODE_READ && s->dma_adc.subdivision) ||
+ (file->f_mode & FMODE_WRITE && s->dma_dac.subdivision))
+ return -EINVAL;
+ if (get_user(val, (int *) arg))
+ return -EFAULT;
+ if (val != 1 && val != 2 && val != 4)
+ return -EINVAL;
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ s->dma_adc.subdivision = val;
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ s->dma_dac.subdivision = val;
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+ return 0;
+
+ case SOUND_PCM_READ_RATE:
+ return put_user((file->f_mode & FMODE_READ) ?
+ s->dma_adc.sample_rate :
+ s->dma_dac.sample_rate,
+ (int *)arg);
+
+ case SOUND_PCM_READ_CHANNELS:
+ if (file->f_mode & FMODE_READ)
+ return put_user(s->dma_adc.num_channels, (int *)arg);
+ else
+ return put_user(s->dma_dac.num_channels, (int *)arg);
+
+ case SOUND_PCM_READ_BITS:
+ if (file->f_mode & FMODE_READ)
+ return put_user(s->dma_adc.sample_size, (int *)arg);
+ else
+ return put_user(s->dma_dac.sample_size, (int *)arg);
+
+ case SOUND_PCM_WRITE_FILTER:
+ case SNDCTL_DSP_SETSYNCRO:
+ case SOUND_PCM_READ_FILTER:
+ return -EINVAL;
+ }
+
+ return mixdev_ioctl(s->codec, cmd, arg);
+}
+
+
+static int
+au1550_open(struct inode *inode, struct file *file)
+{
+ int minor = MINOR(inode->i_rdev);
+ DECLARE_WAITQUEUE(wait, current);
+ struct au1550_state *s = &au1550_state;
+ int ret;
+
+#ifdef DEBUG
+ if (file->f_flags & O_NONBLOCK)
+ pr_debug("open: non-blocking\n");
+ else
+ pr_debug("open: blocking\n");
+#endif
+
+ file->private_data = s;
+ /* wait for device to become free */
+ down(&s->open_sem);
+ while (s->open_mode & file->f_mode) {
+ if (file->f_flags & O_NONBLOCK) {
+ up(&s->open_sem);
+ return -EBUSY;
+ }
+ add_wait_queue(&s->open_wait, &wait);
+ __set_current_state(TASK_INTERRUPTIBLE);
+ up(&s->open_sem);
+ schedule();
+ remove_wait_queue(&s->open_wait, &wait);
+ set_current_state(TASK_RUNNING);
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+ down(&s->open_sem);
+ }
+
+ stop_dac(s);
+ stop_adc(s);
+
+ if (file->f_mode & FMODE_READ) {
+ s->dma_adc.ossfragshift = s->dma_adc.ossmaxfrags =
+ s->dma_adc.subdivision = s->dma_adc.total_bytes = 0;
+ s->dma_adc.num_channels = 1;
+ s->dma_adc.sample_size = 8;
+ set_adc_rate(s, 8000);
+ if ((minor & 0xf) == SND_DEV_DSP16)
+ s->dma_adc.sample_size = 16;
+ }
+
+ if (file->f_mode & FMODE_WRITE) {
+ s->dma_dac.ossfragshift = s->dma_dac.ossmaxfrags =
+ s->dma_dac.subdivision = s->dma_dac.total_bytes = 0;
+ s->dma_dac.num_channels = 1;
+ s->dma_dac.sample_size = 8;
+ set_dac_rate(s, 8000);
+ if ((minor & 0xf) == SND_DEV_DSP16)
+ s->dma_dac.sample_size = 16;
+ }
+
+ if (file->f_mode & FMODE_READ) {
+ if ((ret = prog_dmabuf_adc(s)))
+ return ret;
+ }
+ if (file->f_mode & FMODE_WRITE) {
+ if ((ret = prog_dmabuf_dac(s)))
+ return ret;
+ }
+
+ s->open_mode |= file->f_mode & (FMODE_READ | FMODE_WRITE);
+ up(&s->open_sem);
+ init_MUTEX(&s->sem);
+ return 0;
+}
+
+static int
+au1550_release(struct inode *inode, struct file *file)
+{
+ struct au1550_state *s = (struct au1550_state *)file->private_data;
+
+ lock_kernel();
+
+ if (file->f_mode & FMODE_WRITE) {
+ unlock_kernel();
+ drain_dac(s, file->f_flags & O_NONBLOCK);
+ lock_kernel();
+ }
+
+ down(&s->open_sem);
+ if (file->f_mode & FMODE_WRITE) {
+ stop_dac(s);
+ kfree(s->dma_dac.rawbuf);
+ s->dma_dac.rawbuf = NULL;
+ }
+ if (file->f_mode & FMODE_READ) {
+ stop_adc(s);
+ kfree(s->dma_adc.rawbuf);
+ s->dma_adc.rawbuf = NULL;
+ }
+ s->open_mode &= ((~file->f_mode) & (FMODE_READ|FMODE_WRITE));
+ up(&s->open_sem);
+ wake_up(&s->open_wait);
+ unlock_kernel();
+ return 0;
+}
+
+static /*const */ struct file_operations au1550_audio_fops = {
+	.owner		= THIS_MODULE,
+	.llseek		= au1550_llseek,
+	.read		= au1550_read,
+	.write		= au1550_write,
+	.poll		= au1550_poll,
+	.ioctl		= au1550_ioctl,
+	.mmap		= au1550_mmap,
+	.open		= au1550_open,
+	.release	= au1550_release,
+};
+
+MODULE_AUTHOR("Advanced Micro Devices (AMD), dan@embeddededge.com");
+MODULE_DESCRIPTION("Au1550 AC97 Audio Driver");
+
+static int __devinit
+au1550_probe(void)
+{
+ struct au1550_state *s = &au1550_state;
+ int val;
+
+ memset(s, 0, sizeof(struct au1550_state));
+
+ init_waitqueue_head(&s->dma_adc.wait);
+ init_waitqueue_head(&s->dma_dac.wait);
+ init_waitqueue_head(&s->open_wait);
+ init_MUTEX(&s->open_sem);
+ spin_lock_init(&s->lock);
+
+ s->codec = ac97_alloc_codec();
+ if(s->codec == NULL) {
+ err("Out of memory");
+ return -1;
+ }
+ s->codec->private_data = s;
+ s->codec->id = 0;
+ s->codec->codec_read = rdcodec;
+ s->codec->codec_write = wrcodec;
+ s->codec->codec_wait = waitcodec;
+
+ if (!request_mem_region(CPHYSADDR(AC97_PSC_SEL),
+ 0x30, "Au1550 AC97")) {
+ err("AC'97 ports in use");
+ }
+
+ /* Allocate the DMA Channels
+ */
+ if ((s->dma_dac.dmanr = au1xxx_dbdma_chan_alloc(DBDMA_MEM_CHAN,
+ DBDMA_AC97_TX_CHAN, dac_dma_interrupt, (void *)s)) == 0) {
+ err("Can't get DAC DMA");
+ goto err_dma1;
+ }
+ au1xxx_dbdma_set_devwidth(s->dma_dac.dmanr, 16);
+ if (au1xxx_dbdma_ring_alloc(s->dma_dac.dmanr,
+ NUM_DBDMA_DESCRIPTORS) == 0) {
+ err("Can't get DAC DMA descriptors");
+ goto err_dma1;
+ }
+
+ if ((s->dma_adc.dmanr = au1xxx_dbdma_chan_alloc(DBDMA_AC97_RX_CHAN,
+ DBDMA_MEM_CHAN, adc_dma_interrupt, (void *)s)) == 0) {
+ err("Can't get ADC DMA");
+ goto err_dma2;
+ }
+ au1xxx_dbdma_set_devwidth(s->dma_adc.dmanr, 16);
+ if (au1xxx_dbdma_ring_alloc(s->dma_adc.dmanr,
+ NUM_DBDMA_DESCRIPTORS) == 0) {
+ err("Can't get ADC DMA descriptors");
+ goto err_dma2;
+ }
+
+ pr_info("DAC: DMA%d, ADC: DMA%d", DBDMA_AC97_TX_CHAN, DBDMA_AC97_RX_CHAN);
+
+ /* register devices */
+
+ if ((s->dev_audio = register_sound_dsp(&au1550_audio_fops, -1)) < 0)
+ goto err_dev1;
+ if ((s->codec->dev_mixer =
+ register_sound_mixer(&au1550_mixer_fops, -1)) < 0)
+ goto err_dev2;
+
+	/* The GPIO for the appropriate PSC was configured by the
+	 * board-specific startup code.
+	 *
+	 * Configure the PSC for AC'97 mode.
+	 */
+ au_writel(0, AC97_PSC_CTRL); /* Disable PSC */
+ au_sync();
+ au_writel((PSC_SEL_CLK_SERCLK | PSC_SEL_PS_AC97MODE), AC97_PSC_SEL);
+ au_sync();
+
+ /* cold reset the AC'97
+ */
+ au_writel(PSC_AC97RST_RST, PSC_AC97RST);
+ au_sync();
+ au1550_delay(10);
+ au_writel(0, PSC_AC97RST);
+ au_sync();
+
+	/* need to delay around 500 msec (bleech) to give
+	   some CODECs enough time to wake up */
+ au1550_delay(500);
+
+ /* warm reset the AC'97 to start the bitclk
+ */
+ au_writel(PSC_AC97RST_SNC, PSC_AC97RST);
+ au_sync();
+ udelay(100);
+ au_writel(0, PSC_AC97RST);
+ au_sync();
+
+ /* Enable PSC
+ */
+ au_writel(PSC_CTRL_ENABLE, AC97_PSC_CTRL);
+ au_sync();
+
+ /* Wait for PSC ready.
+ */
+ do {
+ val = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((val & PSC_AC97STAT_SR) == 0);
+
+ /* Configure AC97 controller.
+ * Deep FIFO, 16-bit sample, DMA, make sure DMA matches fifo size.
+ */
+ val = PSC_AC97CFG_SET_LEN(16);
+ val |= PSC_AC97CFG_RT_FIFO8 | PSC_AC97CFG_TT_FIFO8;
+
+ /* Enable device so we can at least
+ * talk over the AC-link.
+ */
+ au_writel(val, PSC_AC97CFG);
+ au_writel(PSC_AC97MSK_ALLMASK, PSC_AC97MSK);
+ au_sync();
+ val |= PSC_AC97CFG_DE_ENABLE;
+ au_writel(val, PSC_AC97CFG);
+ au_sync();
+
+ /* Wait for Device ready.
+ */
+ do {
+ val = readl((void *)PSC_AC97STAT);
+ au_sync();
+ } while ((val & PSC_AC97STAT_DR) == 0);
+
+ /* codec init */
+ if (!ac97_probe_codec(s->codec))
+ goto err_dev3;
+
+ s->codec_base_caps = rdcodec(s->codec, AC97_RESET);
+ s->codec_ext_caps = rdcodec(s->codec, AC97_EXTENDED_ID);
+ pr_info("AC'97 Base/Extended ID = %04x/%04x",
+ s->codec_base_caps, s->codec_ext_caps);
+
+ if (!(s->codec_ext_caps & AC97_EXTID_VRA)) {
+ /* codec does not support VRA
+ */
+ s->no_vra = 1;
+ } else if (!vra) {
+ /* Boot option says disable VRA
+ */
+ u16 ac97_extstat = rdcodec(s->codec, AC97_EXTENDED_STATUS);
+ wrcodec(s->codec, AC97_EXTENDED_STATUS,
+ ac97_extstat & ~AC97_EXTSTAT_VRA);
+ s->no_vra = 1;
+ }
+ if (s->no_vra)
+ pr_info("no VRA, interpolating and decimating");
+
+ /* set mic to be the recording source */
+ val = SOUND_MASK_MIC;
+ mixdev_ioctl(s->codec, SOUND_MIXER_WRITE_RECSRC,
+ (unsigned long) &val);
+
+ return 0;
+
+ err_dev3:
+ unregister_sound_mixer(s->codec->dev_mixer);
+ err_dev2:
+ unregister_sound_dsp(s->dev_audio);
+ err_dev1:
+ au1xxx_dbdma_chan_free(s->dma_adc.dmanr);
+ err_dma2:
+ au1xxx_dbdma_chan_free(s->dma_dac.dmanr);
+ err_dma1:
+ release_mem_region(CPHYSADDR(AC97_PSC_SEL), 0x30);
+
+ ac97_release_codec(s->codec);
+ return -1;
+}
+
+static void __devexit
+au1550_remove(void)
+{
+ struct au1550_state *s = &au1550_state;
+
+ if (!s)
+ return;
+ synchronize_irq();
+ au1xxx_dbdma_chan_free(s->dma_adc.dmanr);
+ au1xxx_dbdma_chan_free(s->dma_dac.dmanr);
+ release_mem_region(CPHYSADDR(AC97_PSC_SEL), 0x30);
+ unregister_sound_dsp(s->dev_audio);
+ unregister_sound_mixer(s->codec->dev_mixer);
+ ac97_release_codec(s->codec);
+}
+
+static int __init
+init_au1550(void)
+{
+ return au1550_probe();
+}
+
+static void __exit
+cleanup_au1550(void)
+{
+ au1550_remove();
+}
+
+module_init(init_au1550);
+module_exit(cleanup_au1550);
+
+#ifndef MODULE
+
+static int __init
+au1550_setup(char *options)
+{
+ char *this_opt;
+
+ if (!options || !*options)
+ return 0;
+
+ while ((this_opt = strsep(&options, ","))) {
+ if (!*this_opt)
+ continue;
+ if (!strncmp(this_opt, "vra", 3)) {
+ vra = 1;
+ }
+ }
+
+ return 1;
+}
+
+__setup("au1550_audio=", au1550_setup);
+
+#endif /* MODULE */
module_init(btaudio_init_module);
module_exit(btaudio_cleanup_module);
-MODULE_PARM(dsp1,"i");
-MODULE_PARM(dsp2,"i");
-MODULE_PARM(mixer,"i");
-MODULE_PARM(debug,"i");
-MODULE_PARM(irq_debug,"i");
-MODULE_PARM(digital,"i");
-MODULE_PARM(analog,"i");
-MODULE_PARM(rate,"i");
-MODULE_PARM(latency,"i");
+module_param(dsp1, int, S_IRUGO);
+module_param(dsp2, int, S_IRUGO);
+module_param(mixer, int, S_IRUGO);
+module_param(debug, int, S_IRUGO | S_IWUSR);
+module_param(irq_debug, int, S_IRUGO | S_IWUSR);
+module_param(digital, int, S_IRUGO);
+module_param(analog, int, S_IRUGO);
+module_param(rate, int, S_IRUGO);
+module_param(latency, int, S_IRUGO);
MODULE_PARM_DESC(latency,"pci latency timer");
MODULE_DEVICE_TABLE(pci, btaudio_pci_tbl);
#include "sound_config.h"
-#include "cs4232.h"
#include "ad1848.h"
#include "mpu401.h"
static int synth_base, synth_irq;
static int mpu_detected;
-int probe_cs4232_mpu(struct address_info *hw_config)
+static int probe_cs4232_mpu(struct address_info *hw_config)
{
/*
* Just write down the config values.
MODULE_AUTHOR("Hannu Savolainen, Paul Barton-Davis");
MODULE_LICENSE("GPL");
-MODULE_PARM(io,"i");
+module_param(io, int, 0);
MODULE_PARM_DESC(io,"base I/O port for AD1848");
-MODULE_PARM(irq,"i");
+module_param(irq, int, 0);
MODULE_PARM_DESC(irq,"IRQ for AD1848 chip");
-MODULE_PARM(dma,"i");
+module_param(dma, int, 0);
MODULE_PARM_DESC(dma,"8 bit DMA for AD1848 chip");
-MODULE_PARM(dma2,"i");
+module_param(dma2, int, 0);
MODULE_PARM_DESC(dma2,"16 bit DMA for AD1848 chip");
-MODULE_PARM(mpuio,"i");
+module_param(mpuio, int, 0);
MODULE_PARM_DESC(mpuio,"MPU 401 base address");
-MODULE_PARM(mpuirq,"i");
+module_param(mpuirq, int, 0);
MODULE_PARM_DESC(mpuirq,"MPU 401 IRQ");
-MODULE_PARM(synthio,"i");
+module_param(synthio, int, 0);
MODULE_PARM_DESC(synthio,"Maui WaveTable base I/O port");
-MODULE_PARM(synthirq,"i");
+module_param(synthirq, int, 0);
MODULE_PARM_DESC(synthirq,"Maui WaveTable IRQ");
-MODULE_PARM(isapnp,"i");
+module_param(isapnp, bool, 0);
MODULE_PARM_DESC(isapnp,"Enable ISAPnP probing (default 1)");
-MODULE_PARM(bss,"i");
+module_param(bss, bool, 0);
MODULE_PARM_DESC(bss,"Enable Bose Sound System Support (default 0)");
/*
#include <linux/spinlock.h>
-int cs4281_resume_null(struct pci_dev *pcidev) { return 0; }
-int cs4281_suspend_null(struct pci_dev *pcidev, u32 state) { return 0; }
+static int cs4281_resume_null(struct pci_dev *pcidev) { return 0; }
+static int cs4281_suspend_null(struct pci_dev *pcidev, u32 state) { return 0; }
#define free_dmabuf(state, dmabuf) \
pci_free_consistent(state->pcidev, \
// rather than 64k as some of the games work more responsively.
// log base 2( buff sz = 32k).
static unsigned long defaultorder = 3;
-MODULE_PARM(defaultorder, "i");
+module_param(defaultorder, ulong, 0);
//
// Turn on/off debugging compilation by commenting out "#define CSDEBUG"
#if CSDEBUG
static unsigned long cs_debuglevel = 1; // levels range from 1-9
static unsigned long cs_debugmask = CS_INIT | CS_ERROR; // use CS_DBGOUT with various mask values
-MODULE_PARM(cs_debuglevel, "i");
-MODULE_PARM(cs_debugmask, "i");
+module_param(cs_debuglevel, ulong, 0);
+module_param(cs_debugmask, ulong, 0);
#endif
#define CS_TRUE 1
#define CS_FALSE 0
})
//LIST_HEAD(cs4281_devs);
-struct list_head cs4281_devs = { &cs4281_devs, &cs4281_devs };
+static struct list_head cs4281_devs = { &cs4281_devs, &cs4281_devs };
struct cs4281_state;
* Suspend - save the ac97 regs, mute the outputs and power down the part.
*
****************************************************************************/
-void cs4281_ac97_suspend(struct cs4281_state *s)
+static void cs4281_ac97_suspend(struct cs4281_state *s)
{
int Count,i;
* Resume - power up the part and restore its registers..
*
****************************************************************************/
-void cs4281_ac97_resume(struct cs4281_state *s)
+static void cs4281_ac97_resume(struct cs4281_state *s)
{
int Count,i;
} // SavePowerState
*/
-void cs4281_SuspendFIFO(struct cs4281_state *s, struct cs4281_pipeline *pl)
+static void cs4281_SuspendFIFO(struct cs4281_state *s, struct cs4281_pipeline *pl)
{
/*
* We need to save the contents of the BASIC FIFO Registers.
pl->u32FCRn_Save = readl(s->pBA0 + pl->u32FCRnAddress);
pl->u32FSICn_Save = readl(s->pBA0 + pl->u32FSICnAddress);
}
-void cs4281_ResumeFIFO(struct cs4281_state *s, struct cs4281_pipeline *pl)
+static void cs4281_ResumeFIFO(struct cs4281_state *s, struct cs4281_pipeline *pl)
{
/*
* We need to restore the contents of the BASIC FIFO Registers.
writel(pl->u32FCRn_Save,s->pBA0 + pl->u32FCRnAddress);
writel(pl->u32FSICn_Save,s->pBA0 + pl->u32FSICnAddress);
}
-void cs4281_SuspendDMAengine(struct cs4281_state *s, struct cs4281_pipeline *pl)
+static void cs4281_SuspendDMAengine(struct cs4281_state *s, struct cs4281_pipeline *pl)
{
//
// We need to save the contents of the BASIC DMA Registers.
pl->u32DCCn_Save = readl(s->pBA0 + pl->u32DCCnAddress);
pl->u32DCAn_Save = readl(s->pBA0 + pl->u32DCAnAddress);
}
-void cs4281_ResumeDMAengine(struct cs4281_state *s, struct cs4281_pipeline *pl)
+static void cs4281_ResumeDMAengine(struct cs4281_state *s, struct cs4281_pipeline *pl)
{
//
// We need to save the contents of the BASIC DMA Registers.
writel( pl->u32DCAn_Save, s->pBA0 + pl->u32DCAnAddress);
}
-int cs4281_suspend(struct cs4281_state *s)
+static int cs4281_suspend(struct cs4281_state *s)
{
int i;
u32 u32CLKCR1;
return 0;
}
-int cs4281_resume(struct cs4281_state *s)
+static int cs4281_resume(struct cs4281_state *s)
{
int i;
unsigned temp1;
#define DMABUF_MINORDER 1 // ==> min buffer size = 8K.
-void dealloc_dmabuf(struct cs4281_state *s, struct dmabuf *db)
+static void dealloc_dmabuf(struct cs4281_state *s, struct dmabuf *db)
{
struct page *map, *mapend;
#ifndef NOT_CS4281_PM
-void __devinit cs4281_BuildFIFO(
+static void __devinit cs4281_BuildFIFO(
struct cs4281_pipeline *p,
struct cs4281_state *s)
{
}
-void __devinit cs4281_BuildDMAengine(
+static void __devinit cs4281_BuildDMAengine(
struct cs4281_pipeline *p,
struct cs4281_state *s)
{
}
-void __devinit cs4281_InitPM(struct cs4281_state *s)
+static void __devinit cs4281_InitPM(struct cs4281_state *s)
{
int i;
struct cs4281_pipeline *p;
MODULE_DEVICE_TABLE(pci, cs4281_pci_tbl);
-struct pci_driver cs4281_pci_driver = {
+static struct pci_driver cs4281_pci_driver = {
.name = "cs4281",
.id_table = cs4281_pci_tbl,
.probe = cs4281_probe,
.resume = CS4281_RESUME_TBL,
};
-int __init cs4281_init_module(void)
+static int __init cs4281_init_module(void)
{
int rtn = 0;
CS_DBGOUT(CS_INIT | CS_FUNCTION, 2, printk(KERN_INFO
return rtn;
}
-void __exit cs4281_cleanup_module(void)
+static void __exit cs4281_cleanup_module(void)
{
pci_unregister_driver(&cs4281_pci_driver);
#ifndef NOT_CS4281_PM
module_init(cs4281_init_module);
module_exit(cs4281_cleanup_module);
-#ifndef MODULE
-int __init init_cs4281(void)
-{
- return cs4281_init_module();
-}
-#endif
#define cs_pm_register(a, b, c) pm_register((a), (b), (c));
#define cs_pm_unregister_all(a) pm_unregister_all((a));
-int cs4281_suspend(struct cs4281_state *s);
-int cs4281_resume(struct cs4281_state *s);
+static int cs4281_suspend(struct cs4281_state *s);
+static int cs4281_resume(struct cs4281_state *s);
/*
* for now (12/22/00) only enable the pm_register PM support.
* allow these table entries to be null.
#define CS4281_SUSPEND_TBL cs4281_suspend_null
#define CS4281_RESUME_TBL cs4281_resume_null
-int cs4281_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
+static int cs4281_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
{
struct cs4281_state *state;
#if CSDEBUG
static unsigned long cs_debuglevel=1; /* levels range from 1-9 */
-MODULE_PARM(cs_debuglevel, "i");
+module_param(cs_debuglevel, ulong, 0644);
static unsigned long cs_debugmask=CS_INIT | CS_ERROR; /* use CS_DBGOUT with various mask values */
-MODULE_PARM(cs_debugmask, "i");
+module_param(cs_debugmask, ulong, 0644);
#endif
static unsigned long hercules_egpio_disable; /* if non-zero set all EGPIO to 0 */
-MODULE_PARM(hercules_egpio_disable, "i");
+module_param(hercules_egpio_disable, ulong, 0);
static unsigned long initdelay=700; /* PM delay in millisecs */
-MODULE_PARM(initdelay, "i");
+module_param(initdelay, ulong, 0);
static unsigned long powerdown=-1; /* turn on/off powerdown processing in driver */
-MODULE_PARM(powerdown, "i");
+module_param(powerdown, ulong, 0);
#define DMABUF_DEFAULTORDER 3
static unsigned long defaultorder=DMABUF_DEFAULTORDER;
-MODULE_PARM(defaultorder, "i");
+module_param(defaultorder, ulong, 0);
static int external_amp;
-MODULE_PARM(external_amp, "i");
+module_param(external_amp, bool, 0);
static int thinkpad;
-MODULE_PARM(thinkpad, "i");
+module_param(thinkpad, bool, 0);
/*
* set the powerdown module parm to 0 to disable all
#define CS46XX_ARCH "32" //architecture key
#endif
-struct list_head cs46xx_devs = { &cs46xx_devs, &cs46xx_devs };
+static struct list_head cs46xx_devs = { &cs46xx_devs, &cs46xx_devs };
/* magic numbers to protect our data structures */
#define CS_CARD_MAGIC 0x43525553 /* "CRUS" */
static int cs46xx_suspend_tbl(struct pci_dev *pcidev, u32 state);
static int cs46xx_resume_tbl(struct pci_dev *pcidev);
-static inline unsigned ld2(unsigned int x)
-{
- unsigned r = 0;
-
- if (x >= 0x10000) {
- x >>= 16;
- r += 16;
- }
- if (x >= 0x100) {
- x >>= 8;
- r += 8;
- }
- if (x >= 0x10) {
- x >>= 4;
- r += 4;
- }
- if (x >= 4) {
- x >>= 2;
- r += 2;
- }
- if (x >= 2)
- r++;
- return r;
-}
+#ifndef CS46XX_ACPI_SUPPORT
+static int cs46xx_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data);
+#endif
#if CSDEBUG
#define SOUND_MIXER_CS_SETDBGMASK _SIOWR('M',123, int)
#define SOUND_MIXER_CS_APM _SIOWR('M',124, int)
-void printioctl(unsigned int x)
+static void printioctl(unsigned int x)
{
unsigned int i;
unsigned char vidx;
* "SetCaptureSPValues()" -- Initialize record task values before each
* capture startup.
*/
-void SetCaptureSPValues(struct cs_card *card)
+static void SetCaptureSPValues(struct cs_card *card)
{
unsigned i, offset;
CS_DBGOUT(CS_FUNCTION, 8, printk("cs46xx: SetCaptureSPValues()+\n") );
* Suspend - save the ac97 regs, mute the outputs and power down the part.
*
****************************************************************************/
-void cs46xx_ac97_suspend(struct cs_card *card)
+static void cs46xx_ac97_suspend(struct cs_card *card)
{
int Count,i;
struct ac97_codec *dev=card->ac97_codec[0];
* Resume - power up the part and restore its registers..
*
****************************************************************************/
-void cs46xx_ac97_resume(struct cs_card *card)
+static void cs46xx_ac97_resume(struct cs_card *card)
{
int Count,i;
struct ac97_codec *dev=card->ac97_codec[0];
return 0;
}
-void __exit cs46xx_cleanup_module(void);
static int cs_ioctl_mixdev(struct inode *inode, struct file *file, unsigned int cmd,
unsigned long arg)
{
MODULE_DEVICE_TABLE(pci, cs46xx_pci_tbl);
-struct pci_driver cs46xx_pci_driver = {
+static struct pci_driver cs46xx_pci_driver = {
.name = "cs46xx",
.id_table = cs46xx_pci_tbl,
.probe = cs46xx_probe,
.resume = CS46XX_RESUME_TBL,
};
-int __init cs46xx_init_module(void)
+static int __init cs46xx_init_module(void)
{
int rtn = 0;
CS_DBGOUT(CS_INIT | CS_FUNCTION, 2, printk(KERN_INFO
return rtn;
}
-void __exit cs46xx_cleanup_module(void)
+static void __exit cs46xx_cleanup_module(void)
{
pci_unregister_driver(&cs46xx_pci_driver);
cs_pm_unregister_all(cs46xx_pm_callback);
module_init(cs46xx_init_module);
module_exit(cs46xx_cleanup_module);
-int cs46xx_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
+#ifndef CS46XX_ACPI_SUPPORT
+static int cs46xx_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data)
{
struct cs_card *card;
return 0;
}
+#endif
#if CS46XX_ACPI_SUPPORT
static int cs46xx_suspend_tbl(struct pci_dev *pcidev, u32 state)
#define CS_OWNER .owner =
#define CS_THIS_MODULE THIS_MODULE,
-void cs46xx_null(struct pci_dev *pcidev) { return; }
+static inline void cs46xx_null(struct pci_dev *pcidev) { return; }
#define cs4x_mem_map_reserve(page) SetPageReserved(page)
#define cs4x_mem_map_unreserve(page) ClearPageReserved(page)
#define CS46XX_SUSPEND_TBL cs46xx_null
#define CS46XX_RESUME_TBL cs46xx_null
#endif
-int cs46xx_pm_callback(struct pm_dev *dev, pm_request_t rqst, void *data);
#endif
static void calculate_ofrag(struct woinst *);
static void calculate_ifrag(struct wiinst *);
+static void emu10k1_waveout_bh(unsigned long refdata);
+static void emu10k1_wavein_bh(unsigned long refdata);
+
/* Audio file operations */
static ssize_t emu10k1_audio_read(struct file *file, char __user *buffer, size_t count, loff_t * ppos)
{
return dmapage;
}
-struct vm_operations_struct emu10k1_mm_ops = {
+static struct vm_operations_struct emu10k1_mm_ops = {
.nopage = emu10k1_mm_nopage,
};
wiinst->mmapped = 0;
wiinst->total_recorded = 0;
wiinst->blocks = 0;
- wiinst->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&wiinst->lock);
tasklet_init(&wiinst->timer.tasklet, emu10k1_wavein_bh, (unsigned long) wave_dev);
wave_dev->wiinst = wiinst;
emu10k1_wavein_setformat(wave_dev, &wiinst->format);
woinst->total_copied = 0;
woinst->total_played = 0;
woinst->blocks = 0;
- woinst->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&woinst->lock);
tasklet_init(&woinst->timer.tasklet, emu10k1_waveout_bh, (unsigned long) wave_dev);
wave_dev->woinst = woinst;
emu10k1_waveout_setformat(wave_dev, &woinst->format);
return;
}
-void emu10k1_wavein_bh(unsigned long refdata)
+static void emu10k1_wavein_bh(unsigned long refdata)
{
struct emu10k1_wavedevice *wave_dev = (struct emu10k1_wavedevice *) refdata;
struct wiinst *wiinst = wave_dev->wiinst;
return;
}
-void emu10k1_waveout_bh(unsigned long refdata)
+static void emu10k1_waveout_bh(unsigned long refdata)
{
struct emu10k1_wavedevice *wave_dev = (struct emu10k1_wavedevice *) refdata;
struct woinst *woinst = wave_dev->woinst;
u16 enablebits;
};
-void emu10k1_waveout_bh(unsigned long);
-void emu10k1_wavein_bh(unsigned long);
-
#endif /* _AUDIO_H */
#include "cardmi.h"
#include "irqmgr.h"
+
+static int emu10k1_mpuin_callback(struct emu10k1_mpuin *card_mpuin, u32 msg, unsigned long data, u32 bytesvalid);
+
+static int sblive_miStateInit(struct emu10k1_mpuin *);
+static int sblive_miStateEntry(struct emu10k1_mpuin *, u8);
+static int sblive_miStateParse(struct emu10k1_mpuin *, u8);
+static int sblive_miState3Byte(struct emu10k1_mpuin *, u8);
+static int sblive_miState3ByteKey(struct emu10k1_mpuin *, u8);
+static int sblive_miState3ByteVel(struct emu10k1_mpuin *, u8);
+static int sblive_miState2Byte(struct emu10k1_mpuin *, u8);
+static int sblive_miState2ByteKey(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysCommon2(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysCommon2Key(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysCommon3(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysCommon3Key(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysCommon3Vel(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysExNorm(struct emu10k1_mpuin *, u8);
+static int sblive_miStateSysReal(struct emu10k1_mpuin *, u8);
+
+
static struct {
int (*Fn) (struct emu10k1_mpuin *, u8);
} midistatefn[] = {
sblive_miStateSysReal} /* 0xF4 - 0xF6 ,0xF8 - 0xFF */
};
+
/* Installs the IRQ handler for the MPU in port */
/* and initialize parameters */
/* Passes the message with the data back to the client */
/* via IRQ & DPC callbacks to Ring 3 */
-int emu10k1_mpuin_callback(struct emu10k1_mpuin *card_mpuin, u32 msg, unsigned long data, u32 bytesvalid)
+static int emu10k1_mpuin_callback(struct emu10k1_mpuin *card_mpuin, u32 msg, unsigned long data, u32 bytesvalid)
{
unsigned long timein;
struct midi_queue *midiq;
/*****************************************************************************/
/* FIXME: This should be a macro */
-int sblive_miStateInit(struct emu10k1_mpuin *card_mpuin)
+static int sblive_miStateInit(struct emu10k1_mpuin *card_mpuin)
{
card_mpuin->status = 0; /* For MIDI running status */
card_mpuin->fstatus = 0; /* For 0xFn status only */
}
/* FIXME: This should be a macro */
-int sblive_miStateEntry(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateEntry(struct emu10k1_mpuin *card_mpuin, u8 data)
{
return midistatefn[card_mpuin->curstate].Fn(card_mpuin, data);
}
-int sblive_miStateParse(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateParse(struct emu10k1_mpuin *card_mpuin, u8 data)
{
switch (data & 0xf0) {
case 0x80:
return midistatefn[card_mpuin->curstate].Fn(card_mpuin, data);
}
-int sblive_miState3Byte(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miState3Byte(struct emu10k1_mpuin *card_mpuin, u8 data)
{
u8 temp = data & 0xf0;
return midistatefn[STIN_PARSE].Fn(card_mpuin, data);
}
-int sblive_miState3ByteKey(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miState3ByteKey(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 1 */
{
unsigned long tmp;
return CTSTATUS_NEXT_BYTE;
}
-int sblive_miState3ByteVel(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miState3ByteVel(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 2 */
{
unsigned long tmp;
return 0;
}
-int sblive_miState2Byte(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miState2Byte(struct emu10k1_mpuin *card_mpuin, u8 data)
{
u8 temp = data & 0xf0;
return midistatefn[STIN_PARSE].Fn(card_mpuin, data);
}
-int sblive_miState2ByteKey(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miState2ByteKey(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 1 */
{
unsigned long tmp;
return 0;
}
-int sblive_miStateSysCommon2(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateSysCommon2(struct emu10k1_mpuin *card_mpuin, u8 data)
{
card_mpuin->fstatus = data;
card_mpuin->curstate = STIN_SYS_COMMON_2_KEY;
return CTSTATUS_NEXT_BYTE;
}
-int sblive_miStateSysCommon2Key(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miStateSysCommon2Key(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 1 */
{
unsigned long tmp;
return 0;
}
-int sblive_miStateSysCommon3(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateSysCommon3(struct emu10k1_mpuin *card_mpuin, u8 data)
{
card_mpuin->fstatus = data;
card_mpuin->curstate = STIN_SYS_COMMON_3_KEY;
return CTSTATUS_NEXT_BYTE;
}
-int sblive_miStateSysCommon3Key(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miStateSysCommon3Key(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 1 */
{
unsigned long tmp;
return CTSTATUS_NEXT_BYTE;
}
-int sblive_miStateSysCommon3Vel(struct emu10k1_mpuin *card_mpuin, u8 data)
-
+static int sblive_miStateSysCommon3Vel(struct emu10k1_mpuin *card_mpuin, u8 data)
/* byte 2 */
{
unsigned long tmp;
return 0;
}
-int sblive_miStateSysExNorm(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateSysExNorm(struct emu10k1_mpuin *card_mpuin, u8 data)
{
unsigned long flags;
return CTSTATUS_NEXT_BYTE;
}
-int sblive_miStateSysReal(struct emu10k1_mpuin *card_mpuin, u8 data)
+static int sblive_miStateSysReal(struct emu10k1_mpuin *card_mpuin, u8 data)
{
emu10k1_mpuin_callback(card_mpuin, ICARDMIDI_INDATA, data, 1);
int emu10k1_mpuin_stop(struct emu10k1_card *);
int emu10k1_mpuin_reset(struct emu10k1_card *);
-int sblive_miStateInit(struct emu10k1_mpuin *);
-int sblive_miStateEntry(struct emu10k1_mpuin *, u8);
-int sblive_miStateParse(struct emu10k1_mpuin *, u8);
-int sblive_miState3Byte(struct emu10k1_mpuin *, u8);
-int sblive_miState3ByteKey(struct emu10k1_mpuin *, u8);
-int sblive_miState3ByteVel(struct emu10k1_mpuin *, u8);
-int sblive_miState2Byte(struct emu10k1_mpuin *, u8);
-int sblive_miState2ByteKey(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysCommon2(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysCommon2Key(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysCommon3(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysCommon3Key(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysCommon3Vel(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysExNorm(struct emu10k1_mpuin *, u8);
-int sblive_miStateSysReal(struct emu10k1_mpuin *, u8);
-
int emu10k1_mpuin_irqhandler(struct emu10k1_card *);
void emu10k1_mpuin_bh(unsigned long);
-int emu10k1_mpuin_callback(struct emu10k1_mpuin *card_mpuin, u32 msg, unsigned long data, u32 bytesvalid);
#endif /* _CARDMI_H */
* This function will return a valid sound format as close
* to the requested one as possible.
*/
-void query_format(int recsrc, struct wave_format *wave_fmt)
+static void query_format(int recsrc, struct wave_format *wave_fmt)
{
switch (recsrc) {
#define VOLCTRL_STEP_SIZE 5
//An internal function for setting OSS mixer controls.
-void emu10k1_set_oss_vol(struct emu10k1_card *card, int oss_mixer,
- unsigned int left, unsigned int right)
+static void emu10k1_set_oss_vol(struct emu10k1_card *card, int oss_mixer,
+ unsigned int left, unsigned int right)
{
extern char volume_params[SOUND_MIXER_NRDEVICES];
return 0; /* Should never reach this point */
}
-/* Returns an attenuation based upon a cumulative volume value */
-
-/* Algorithm calculates 0x200 - 0x10 log2 (input) */
-u8 sumVolumeToAttenuation(u32 value)
-{
- u16 count = 16;
- s16 ans;
-
- if (value == 0)
- return 0xFF;
-
- /* Find first SET bit. This is the integer part of the value */
- while ((value & 0x10000) == 0) {
- value <<= 1;
- count--;
- }
-
- /* The REST of the data is the fractional part. */
- ans = (s16) (0x110 - ((count << 4) + ((value & 0x0FFFFL) >> 12)));
- if (ans > 0xFF)
- ans = 0xFF;
-
- return (u8) ans;
-}
-
/*******************************************
* write/read PCI function 0 registers *
********************************************/
return;
}
+#ifdef DBGEMU
void emu10k1_writefn0_2(struct emu10k1_card *card, u32 reg, u32 data, int size)
{
unsigned long flags;
return;
}
+#endif /* DBGEMU */
u32 emu10k1_readfn0(struct emu10k1_card * card, u32 reg)
{
return;
}
-void emu10k1_set_stop_on_loop(struct emu10k1_card *card, u32 voicenum)
-{
- /* Voice interrupt */
- if (voicenum >= 32)
- sblive_writeptr(card, SOLEH | ((0x0100 | (voicenum - 32)) << 16), 0, 1);
- else
- sblive_writeptr(card, SOLEL | ((0x0100 | voicenum) << 16), 0, 1);
-
- return;
-}
-
void emu10k1_clear_stop_on_loop(struct emu10k1_card *card, u32 voicenum)
{
/* Voice interrupt */
#define TIMEOUT 16384
u32 srToPitch(u32);
-u8 sumVolumeToAttenuation(u32);
extern struct list_head emu10k1_devs;
void emu10k1_irq_enable(struct emu10k1_card *, u32);
void emu10k1_irq_disable(struct emu10k1_card *, u32);
-void emu10k1_set_stop_on_loop(struct emu10k1_card *, u32);
void emu10k1_clear_stop_on_loop(struct emu10k1_card *, u32);
/* AC97 Codec register access function */
unregister_sound_dsp(card->audio_dev);
}
-int emu10k1_info_proc (char *page, char **start, off_t off,
- int count, int *eof, void *data)
+static int emu10k1_info_proc (char *page, char **start, off_t off,
+ int count, int *eof, void *data)
{
struct emu10k1_card *card = data;
int len = 0;
{
INIT_LIST_HEAD(&card->timers);
card->timer_delay = TIMER_STOPPED;
- card->timer_lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&card->timer_lock);
}
static void __devinit addxmgr_init(struct emu10k1_card *card)
sblive_writeptr(card, DBG, 0, 0);
}
- mgr->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&mgr->lock);
// Set up Volume controls, try to keep this the same for both Audigy and Live
#include "../sound_config.h"
#endif
-static spinlock_t midi_spinlock __attribute((unused)) = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(midi_spinlock __attribute((unused)));
static void init_midi_hdr(struct midi_hdr *midihdr)
{
#define PITCH_67882 0x00005a82
#define PITCH_57081 0x00004c1c
-u32 emu10k1_select_interprom(struct emu10k1_card *card, struct emu_voice *voice)
+static u32 emu10k1_select_interprom(struct emu10k1_card *card,
+ struct emu_voice *voice)
{
if(voice->pitch_target==PITCH_48000)
return CCCA_INTERPROM_0;
static unsigned int devindex;
-MODULE_PARM(lineout, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(lineout, bool, NULL, 0);
MODULE_PARM_DESC(lineout, "if 1 the LINE input is converted to LINE out");
-MODULE_PARM(micbias, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(micbias, bool, NULL, 0);
MODULE_PARM_DESC(micbias, "sets the +5V bias for an electret microphone");
MODULE_AUTHOR("Thomas M. Sailer, sailer@ife.ee.ethz.ch, hb9jnx@hb9w.che.eu");
static unsigned int devindex;
-MODULE_PARM(spdif, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(spdif, bool, NULL, 0);
MODULE_PARM_DESC(spdif, "if 1 the output is in S/PDIF digital mode");
-MODULE_PARM(nomix, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(nomix, bool, NULL, 0);
MODULE_PARM_DESC(nomix, "if 1 no analog audio is mixed to the digital output");
-MODULE_PARM(amplifier, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(amplifier, bool, NULL, 0);
MODULE_PARM_DESC(amplifier, "Set to 1 if the machine needs the amp control enabling (many laptops)");
MODULE_AUTHOR("Thomas M. Sailer, sailer@ife.ee.ethz.ch, hb9jnx@hb9w.che.eu");
* @reg: register to read
*/
-u16
+static u16
forte_ac97_read (struct ac97_codec *codec, u8 reg)
{
u16 ret = 0;
* @val: value to write
*/
-void
+static void
forte_ac97_write (struct ac97_codec *codec, u8 reg, u16 val)
{
struct forte_chip *chip = codec->private_data;
static int __initdata dma16 = -1; /* Set this for modules that need it */
static int __initdata type = 0; /* 1 for PnP */
-MODULE_PARM(io, "i");
-MODULE_PARM(irq, "i");
-MODULE_PARM(dma, "i");
-MODULE_PARM(dma16, "i");
-MODULE_PARM(type, "i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(dma, int, 0);
+module_param(dma16, int, 0);
+module_param(type, int, 0);
#ifdef CONFIG_SOUND_GUSMAX
-MODULE_PARM(no_wave_dma, "i");
+module_param(no_wave_dma, int, 0);
#endif
#ifdef CONFIG_SOUND_GUS16
-MODULE_PARM(db16, "i");
-MODULE_PARM(gus16, "i");
+module_param(db16, int, 0);
+module_param(gus16, int, 0);
#endif
MODULE_LICENSE("GPL");
#define warn(format, arg...) printk(KERN_WARNING PFX ": " format "\n" , ## arg)
+#define IT8172_MODULE_NAME "IT8172 audio"
+#define PFX IT8172_MODULE_NAME
+
+#ifdef IT8172_DEBUG
+#define dbg(format, arg...) printk(KERN_DEBUG PFX ": " format "\n" , ## arg)
+#else
+#define dbg(format, arg...) do {} while (0)
+#endif
+#define err(format, arg...) printk(KERN_ERR PFX ": " format "\n" , ## arg)
+#define info(format, arg...) printk(KERN_INFO PFX ": " format "\n" , ## arg)
+#define warn(format, arg...) printk(KERN_WARNING PFX ": " format "\n" , ## arg)
+
+
static const unsigned sample_shift[] = { 0, 1, 1, 2 };
struct proc_dir_entry *ac97_ps;
#endif /* IT8172_DEBUG */
- struct ac97_codec *codec;
+ struct ac97_codec codec;
unsigned short pcc, capcc;
unsigned dacrate, adcrate;
pend = virt_to_page(db->rawbuf +
(PAGE_SIZE << db->buforder) - 1);
for (page = virt_to_page(db->rawbuf); page <= pend; page++)
- mem_map_unreserve(page);
+ ClearPageReserved(page);
pci_free_consistent(s->dev, PAGE_SIZE << db->buforder,
db->rawbuf, db->dmaaddr);
}
pend = virt_to_page(db->rawbuf +
(PAGE_SIZE << db->buforder) - 1);
for (page = virt_to_page(db->rawbuf); page <= pend; page++)
- mem_map_reserve(page);
+ SetPageReserved(page);
}
db->count = 0;
/* --------------------------------------------------------------------- */
-static loff_t it8172_llseek(struct file *file, loff_t offset, int origin)
-{
- return -ESPIPE;
-}
-
-
static int it8172_open_mixdev(struct inode *inode, struct file *file)
{
int minor = iminor(inode);
if (list == &devs)
return -ENODEV;
s = list_entry(list, struct it8172_state, devs);
- if (s->codec->dev_mixer == minor)
+ if (s->codec.dev_mixer == minor)
break;
}
file->private_data = s;
unsigned int cmd, unsigned long arg)
{
struct it8172_state *s = (struct it8172_state *)file->private_data;
- struct ac97_codec *codec = s->codec;
+ struct ac97_codec *codec = &s->codec;
return mixdev_ioctl(codec, cmd, arg);
}
static /*const*/ struct file_operations it8172_mixer_fops = {
.owner = THIS_MODULE,
- .llseek = it8172_llseek,
+ .llseek = no_llseek,
.ioctl = it8172_ioctl_mixdev,
.open = it8172_open_mixdev,
.release = it8172_release_mixdev,
case SNDCTL_DSP_RESET:
if (file->f_mode & FMODE_WRITE) {
stop_dac(s);
- synchronize_irq();
+ synchronize_irq(s->irq);
s->dma_dac.count = s->dma_dac.total_bytes = 0;
s->dma_dac.nextIn = s->dma_dac.nextOut =
s->dma_dac.rawbuf;
}
if (file->f_mode & FMODE_READ) {
stop_adc(s);
- synchronize_irq();
+ synchronize_irq(s->irq);
s->dma_adc.count = s->dma_adc.total_bytes = 0;
s->dma_adc.nextIn = s->dma_adc.nextOut =
s->dma_adc.rawbuf;
if (count < 0)
count = 0;
cinfo.blocks = count >> s->dma_adc.fragshift;
- return copy_to_user((void *)arg, &cinfo, sizeof(cinfo)) ? -EFAULT : 0;
+ if (copy_to_user((void *)arg, &cinfo, sizeof(cinfo)))
+ return -EFAULT;
+ return 0;
case SNDCTL_DSP_GETOPTR:
if (!(file->f_mode & FMODE_READ))
if (count < 0)
count = 0;
cinfo.blocks = count >> s->dma_dac.fragshift;
- return copy_to_user((void *)arg, &cinfo, sizeof(cinfo)) ? -EFAULT : 0;
+ if (copy_to_user((void *)arg, &cinfo, sizeof(cinfo)))
+ return -EFAULT;
+ return 0;
case SNDCTL_DSP_GETBLKSIZE:
if (file->f_mode & FMODE_WRITE)
return -EINVAL;
}
- return mixdev_ioctl(s->codec, cmd, arg);
+ return mixdev_ioctl(&s->codec, cmd, arg);
}
static /*const*/ struct file_operations it8172_audio_fops = {
.owner = THIS_MODULE,
- .llseek = it8172_llseek,
+ .llseek = no_llseek,
.read = it8172_read,
.write = it8172_write,
.poll = it8172_poll,
len += sprintf (buf + len, "----------------------\n");
for (cnt=0; cnt <= 0x7e; cnt = cnt +2)
len+= sprintf (buf + len, "reg %02x = %04x\n",
- cnt, rdcodec(s->codec, cnt));
+ cnt, rdcodec(&s->codec, cnt));
if (fpos >=len){
*start = buf;
s->vendor = pcidev->vendor;
s->device = pcidev->device;
pci_read_config_byte(pcidev, PCI_REVISION_ID, &s->rev);
-
- s->codec = ac97_alloc_codec();
- if(s->codec == NULL)
- goto err_codec;
-
- s->codec->private_data = s;
- s->codec->id = 0;
- s->codec->codec_read = rdcodec;
- s->codec->codec_write = wrcodec;
- s->codec->codec_wait = waitcodec;
+ s->codec.private_data = s;
+ s->codec.id = 0;
+ s->codec.codec_read = rdcodec;
+ s->codec.codec_write = wrcodec;
+ s->codec.codec_wait = waitcodec;
if (!request_region(s->io, pci_resource_len(pcidev,0),
IT8172_MODULE_NAME)) {
/* register devices */
if ((s->dev_audio = register_sound_dsp(&it8172_audio_fops, -1)) < 0)
goto err_dev1;
- if ((s->codec->dev_mixer =
+ if ((s->codec.dev_mixer =
register_sound_mixer(&it8172_mixer_fops, -1)) < 0)
goto err_dev2;
#ifdef IT8172_DEBUG
- /* intialize the debug proc device */
+ /* initialize the debug proc device */
s->ps = create_proc_read_entry(IT8172_MODULE_NAME, 0, NULL,
proc_it8172_dump, NULL);
#endif /* IT8172_DEBUG */
outw(0, s->io+IT_AC_CODECC);
/* codec init */
- if (!ac97_probe_codec(s->codec))
+ if (!ac97_probe_codec(&s->codec))
goto err_dev3;
/* add I2S as allowable recording source */
- s->codec->record_sources |= SOUND_MASK_I2S;
+ s->codec.record_sources |= SOUND_MASK_I2S;
/* Enable Volume button interrupts */
imc = inb(s->io+IT_AC_IMC);
/* set mic to be the recording source */
val = SOUND_MASK_MIC;
- mixdev_ioctl(s->codec, SOUND_MIXER_WRITE_RECSRC,
+ mixdev_ioctl(&s->codec, SOUND_MIXER_WRITE_RECSRC,
(unsigned long)&val);
/* mute AC'97 master and PCM when in S/PDIF mode */
if (s->spdif_volume != -1) {
val = 0x0000;
- s->codec->mixer_ioctl(s->codec, SOUND_MIXER_WRITE_VOLUME,
+ s->codec.mixer_ioctl(&s->codec, SOUND_MIXER_WRITE_VOLUME,
(unsigned long)&val);
- s->codec->mixer_ioctl(s->codec, SOUND_MIXER_WRITE_PCM,
+ s->codec.mixer_ioctl(&s->codec, SOUND_MIXER_WRITE_PCM,
(unsigned long)&val);
}
#ifdef IT8172_DEBUG
sprintf(proc_str, "driver/%s/%d/ac97", IT8172_MODULE_NAME,
- s->codec->id);
+ s->codec.id);
s->ac97_ps = create_proc_read_entry (proc_str, 0, NULL,
- ac97_read_proc, s->codec);
+ ac97_read_proc, &s->codec);
#endif
/* store it in the driver field */
return 0;
err_dev3:
- unregister_sound_mixer(s->codec->dev_mixer);
+ unregister_sound_mixer(s->codec.dev_mixer);
err_dev2:
unregister_sound_dsp(s->dev_audio);
err_dev1:
err_irq:
release_region(s->io, pci_resource_len(pcidev,0));
err_region:
- ac97_release_codec(s->codec);
- err_codec:
kfree(s);
return -1;
}
if (s->ps)
remove_proc_entry(IT8172_MODULE_NAME, NULL);
#endif /* IT8172_DEBUG */
- synchronize_irq();
+ synchronize_irq(s->irq);
free_irq(s->irq, s);
release_region(s->io, pci_resource_len(dev,0));
unregister_sound_dsp(s->dev_audio);
- unregister_sound_mixer(s->codec->dev_mixer);
- ac97_codec_release(s->codec);
+ unregister_sound_mixer(s->codec.dev_mixer);
kfree(s);
pci_set_drvdata(dev, NULL);
}
static int __init init_it8172(void)
{
- if (!pci_present()) /* No PCI bus in this machine! */
- return -ENODEV;
info("version v0.5 time " __TIME__ " " __DATE__);
return pci_module_init(&it8172_driver);
}
static int mad16_conf;
static int mad16_cdsel;
static struct gameport gameport;
-static spinlock_t lock=SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(lock);
#define C928 1
#define MOZART 2
static int __initdata opl4 = 0;
static int __initdata joystick = 0;
-MODULE_PARM(mpu_io, "i");
-MODULE_PARM(mpu_irq, "i");
-MODULE_PARM(io,"i");
-MODULE_PARM(dma,"i");
-MODULE_PARM(dma16,"i");
-MODULE_PARM(irq,"i");
-MODULE_PARM(cdtype,"i");
-MODULE_PARM(cdirq,"i");
-MODULE_PARM(cdport,"i");
-MODULE_PARM(cddma,"i");
-MODULE_PARM(opl4,"i");
-MODULE_PARM(joystick,"i");
-MODULE_PARM(debug,"i");
+module_param(mpu_io, int, 0);
+module_param(mpu_irq, int, 0);
+module_param(io, int, 0);
+module_param(dma, int, 0);
+module_param(dma16, int, 0);
+module_param(irq, int, 0);
+module_param(cdtype, int, 0);
+module_param(cdirq, int, 0);
+module_param(cdport, int, 0);
+module_param(cddma, int, 0);
+module_param(opl4, int, 0);
+module_param(joystick, bool, 0);
+module_param(debug, bool, 0644);
static int __initdata dma_map[2][8] =
{
MODULE_LICENSE("GPL");
#ifdef M_DEBUG
-MODULE_PARM(debug,"i");
+module_param(debug, bool, 0644);
#endif
-MODULE_PARM(dsps_order,"i");
-MODULE_PARM(use_pm,"i");
-MODULE_PARM(clocking, "i");
+module_param(dsps_order, int, 0);
+module_param(use_pm, int, 0);
+module_param(clocking, int, 0);
/* --------------------------------------------------------------------- */
#define DRIVER_VERSION "0.15"
/* these masks indicate which units we care about at
which states */
-u16 acpi_state_mask[] = {
+static u16 acpi_state_mask[] = {
[ACPI_D0] = ACPI_ALL,
[ACPI_D1] = ACPI_SLEEP,
[ACPI_D2] = ACPI_SLEEP,
be sure to fill it in if you add oss mixers
to anyone's supported mixer defines */
- unsigned int mixer_defaults[SOUND_MIXER_NRDEVICES] = {
+static unsigned int mixer_defaults[SOUND_MIXER_NRDEVICES] = {
[SOUND_MIXER_VOLUME] = 0x3232,
[SOUND_MIXER_BASS] = 0x3232,
[SOUND_MIXER_TREBLE] = 0x3232,
/* this guy tries to find the pci power management
* register bank. this should really be in core
* code somewhere. 1 on success. */
-int
+static int
parse_power(struct ess_card *card, struct pci_dev *pcidev)
{
u32 n;
.remove = maestro_remove,
};
-int __init init_maestro(void)
+static int __init init_maestro(void)
{
int rc;
/* --------------------------------------------------------------------- */
-void cleanup_maestro(void) {
+static void cleanup_maestro(void) {
M_printk("maestro: unloading\n");
pci_unregister_driver(&maestro_pci_driver);
pm_unregister_all(maestro_pm_callback);
static int m3_suspend(struct pci_dev *pci_dev, u32 state);
static void check_suspend(struct m3_card *card);
-struct notifier_block m3_reboot_nb = {
+static struct notifier_block m3_reboot_nb = {
.notifier_call = m3_notifier,
};
DPRINTK(DPINT,"set_dmac??\n");
}
-u32 get_dma_pos(struct m3_card *card,
- int instance_addr)
+static u32 get_dma_pos(struct m3_card *card,
+ int instance_addr)
{
u16 hi = 0, lo = 0;
int retry = 10;
return lo | (hi<<16);
}
-u32 get_dmaa(struct m3_state *s)
+static u32 get_dmaa(struct m3_state *s)
{
u32 offset;
return offset;
}
-u32 get_dmac(struct m3_state *s)
+static u32 get_dmac(struct m3_state *s)
{
u32 offset;
return i == 0;
}
-u16 m3_ac97_read(struct ac97_codec *codec, u8 reg)
+static u16 m3_ac97_read(struct ac97_codec *codec, u8 reg)
{
u16 ret = 0;
struct m3_card *card = codec->private_data;
return ret;
}
-void m3_ac97_write(struct ac97_codec *codec, u8 reg, u16 val)
+static void m3_ac97_write(struct ac97_codec *codec, u8 reg, u16 val)
{
struct m3_card *card = codec->private_data;
.release = m3_release_mixdev,
};
-void remote_codec_config(int io, int isremote)
+static void remote_codec_config(int io, int isremote)
{
isremote = isremote ? 1 : 0;
};
#ifdef CONFIG_PM
-int alloc_dsp_suspendmem(struct m3_card *card)
+static int alloc_dsp_suspendmem(struct m3_card *card)
{
int len = sizeof(u16) * (REV_B_CODE_MEMORY_LENGTH + REV_B_DATA_MEMORY_LENGTH);
return 0;
}
-void free_dsp_suspendmem(struct m3_card *card)
+static void free_dsp_suspendmem(struct m3_card *card)
{
if(card->suspend_mem)
vfree(card->suspend_mem);
MODULE_LICENSE("GPL");
#ifdef M_DEBUG
-MODULE_PARM(debug,"i");
+module_param(debug, int, 0);
#endif
-MODULE_PARM(external_amp,"i");
-MODULE_PARM(gpio_pin, "i");
+module_param(external_amp, int, 0);
+module_param(gpio_pin, int, 0);
static struct pci_driver m3_pci_driver = {
.name = "ess_m3_audio",
* DSP Code images
*/
-u16 assp_kernel_image[] = {
+static u16 assp_kernel_image[] = {
0x7980, 0x0030, 0x7980, 0x03B4, 0x7980, 0x03B4, 0x7980, 0x00FB, 0x7980, 0x00DD, 0x7980, 0x03B4,
0x7980, 0x0332, 0x7980, 0x0287, 0x7980, 0x03B4, 0x7980, 0x03B4, 0x7980, 0x03B4, 0x7980, 0x03B4,
0x7980, 0x031A, 0x7980, 0x03B4, 0x7980, 0x022F, 0x7980, 0x03B4, 0x7980, 0x03B4, 0x7980, 0x03B4,
* Mini sample rate converter code image
* that is to be loaded at 0x400 on the DSP.
*/
-u16 assp_minisrc_image[] = {
+static u16 assp_minisrc_image[] = {
0xBF80, 0x101E, 0x906E, 0x006E, 0x8B88, 0x6980, 0xEF88, 0x906F, 0x0D6F, 0x6900, 0xEB08, 0x0412,
0xBC20, 0x696E, 0xB801, 0x906E, 0x7980, 0x0403, 0xB90E, 0x8807, 0xBE43, 0xBF01, 0xBE47, 0xBE41,
MODULE_DESCRIPTION ("Turtle Beach " LONGNAME " Linux Driver");
MODULE_LICENSE("GPL");
-MODULE_PARM (io, "i");
-MODULE_PARM (irq, "i");
-MODULE_PARM (mem, "i");
-MODULE_PARM (write_ndelay, "i");
-MODULE_PARM (fifosize, "i");
-MODULE_PARM (calibrate_signal, "i");
-#ifndef MSND_CLASSIC
-MODULE_PARM (digital, "i");
-MODULE_PARM (cfg, "i");
-MODULE_PARM (reset, "i");
-MODULE_PARM (mpu_io, "i");
-MODULE_PARM (mpu_irq, "i");
-MODULE_PARM (ide_io0, "i");
-MODULE_PARM (ide_io1, "i");
-MODULE_PARM (ide_irq, "i");
-MODULE_PARM (joystick_io, "i");
-#endif
-
static int io __initdata = -1;
static int irq __initdata = -1;
static int mem __initdata = -1;
calibrate_signal __initdata = CONFIG_MSND_CALSIGNAL;
#endif /* MODULE */
+module_param (io, int, 0);
+module_param (irq, int, 0);
+module_param (mem, int, 0);
+module_param (write_ndelay, int, 0);
+module_param (fifosize, int, 0);
+module_param (calibrate_signal, int, 0);
+#ifndef MSND_CLASSIC
+module_param (digital, bool, 0);
+module_param (cfg, int, 0);
+module_param (reset, int, 0);
+module_param (mpu_io, int, 0);
+module_param (mpu_irq, int, 0);
+module_param (ide_io0, int, 0);
+module_param (ide_io1, int, 0);
+module_param (ide_irq, int, 0);
+module_param (joystick_io, int, 0);
+#endif
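The MODULE_PARM() conversions in this and the surrounding hunks follow the module_param(name, type, perm) interface, where the third argument is the sysfs permission mask. A minimal editorial sketch of the new form, with illustrative parameter names (not taken from any driver in this patch):

/*
 * Sketch only: module_param() replaces MODULE_PARM(name, "i").
 * A permission of 0 keeps the parameter out of
 * /sys/module/<module>/parameters/; 0444 makes it world-readable there.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>

static int io = -1;
static int irq = -1;

module_param(io, int, 0444);		/* exported read-only via sysfs */
module_param(irq, int, 0);		/* not exported to sysfs */
MODULE_PARM_DESC(io, "I/O base address");
MODULE_PARM_DESC(irq, "interrupt line");
MODULE_LICENSE("GPL");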
static int __init msnd_init(void)
{
#include <linux/spinlock.h>
#include <linux/smp_lock.h>
#include <linux/ac97_codec.h>
-#include <linux/interrupt.h>
#include <asm/io.h>
#include <asm/dma.h>
#include <asm/uaccess.h>
}
}
+static int ac97_codec_not_present(struct ac97_codec *codec)
+{
+ struct vrc5477_ac97_state *s =
+ (struct vrc5477_ac97_state *)codec->private_data;
+ unsigned long flags;
+ unsigned short count = 0xffff;
+
+ spin_lock_irqsave(&s->lock, flags);
+
+ /* wait until we can access codec registers */
+ do {
+ if (!(inl(s->io + VRC5477_CODEC_WR) & 0x80000000))
+ break;
+ } while (--count);
+
+ if (count == 0) {
+ spin_unlock_irqrestore(&s->lock, flags);
+ return -1;
+ }
+
+ /* write 0 to reset */
+ outl((AC97_RESET << 16) | 0, s->io + VRC5477_CODEC_WR);
+
+ /* test whether we get a response from ac97 chip */
+ count = 0xffff;
+ do {
+ if (!(inl(s->io + VRC5477_CODEC_WR) & 0x80000000))
+ break;
+ } while (--count);
+
+ if (count == 0) {
+ spin_unlock_irqrestore(&s->lock, flags);
+ return -1;
+ }
+ spin_unlock_irqrestore(&s->lock, flags);
+ return 0;
+}
/* --------------------------------------------------------------------- */
-static inline void
+extern inline void
stop_dac(struct vrc5477_ac97_state *s)
{
struct dmabuf* db = &s->dma_dac;
spin_unlock_irqrestore(&s->lock, flags);
}
-static inline void stop_adc(struct vrc5477_ac97_state *s)
+extern inline void stop_adc(struct vrc5477_ac97_state *s)
{
struct dmabuf* db = &s->dma_adc;
unsigned long flags;
#define DMABUF_DEFAULTORDER (16-PAGE_SHIFT)
#define DMABUF_MINORDER 1
-static inline void dealloc_dmabuf(struct vrc5477_ac97_state *s,
+extern inline void dealloc_dmabuf(struct vrc5477_ac97_state *s,
struct dmabuf *db)
{
if (db->lbuf) {
}
+	/* test whether we get a response from the ac97; if not, bail out */
+ if (ac97_codec_not_present(&(s->codec))) {
+ printk(KERN_ERR PFX "no ac97 codec\n");
+ goto err_region;
+
+ }
+
if (!request_region(s->io, pci_resource_len(pcidev,0),
VRC5477_AC97_MODULE_NAME)) {
printk(KERN_ERR PFX "io ports %#lx->%#lx in use\n",
remove_proc_entry(VRC5477_AC97_MODULE_NAME, NULL);
#endif /* VRC5477_AC97_DEBUG */
- synchronize_irq(s->irq);
+ synchronize_irq();
free_irq(s->irq, s);
release_region(s->io, pci_resource_len(dev,0));
unregister_sound_dsp(s->dev_audio);
.name = VRC5477_AC97_MODULE_NAME,
.id_table = id_table,
.probe = vrc5477_ac97_probe,
- .remove = __devexit_p(vrc5477_ac97_remove),
+ .remove = __devexit_p(vrc5477_ac97_remove)
};
static int __init init_vrc5477_ac97(void)
static int sb_initialized;
#endif
-static spinlock_t lock=SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(lock);
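The SPIN_LOCK_UNLOCKED initializers removed here and in the later hunks give way to the two supported forms: DEFINE_SPINLOCK() for locks defined at file scope and spin_lock_init() for locks embedded in structures that are set up at run time. A short sketch with illustrative names:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* static lock, declared and initialized at file scope */

struct example_dev {
	spinlock_t lock;
};

static void example_dev_setup(struct example_dev *dev)
{
	spin_lock_init(&dev->lock);	/* run-time init for a lock inside a structure */
}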
static unsigned char opl3sa_read(int addr)
{
static int __initdata mpu_io = -1;
static int __initdata mpu_irq = -1;
-MODULE_PARM(io,"i");
-MODULE_PARM(irq,"i");
-MODULE_PARM(dma,"i");
-MODULE_PARM(dma2,"i");
-MODULE_PARM(mpu_io,"i");
-MODULE_PARM(mpu_irq,"i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(dma, int, 0);
+module_param(dma2, int, 0);
+module_param(mpu_io, int, 0);
+module_param(mpu_irq, int, 0);
static int __init init_opl3sa(void)
{
static int pas_intr_mask;
static int pas_irq;
static int pas_sb_base;
-spinlock_t pas_lock=SPIN_LOCK_UNLOCKED;
+DEFINE_SPINLOCK(pas_lock);
#ifndef CONFIG_PAS_JOYSTICK
static int joystick;
#else
mix_write(0x80 | 5, 0x078B);
mix_write(5, 0x078B);
-#if !defined(DISABLE_SB_EMULATION)
-
{
struct address_info *sb_config;
else
pas_write(0x00, 0xF788);
}
-#else
- pas_write(0x00, 0xF788);
-#endif
if (!ok)
printk(KERN_WARNING "PAS16: Driver not enabled\n");
if (config_pas_hw(hw_config))
{
pas_pcm_init(hw_config);
-
-#if !defined(MODULE) && !defined(DISABLE_SB_EMULATION)
- sb_dsp_disable_midi(pas_sb_base); /* No MIDI capability */
-#endif
-
pas_midi_init();
pas_init_mixer();
}
static int __initdata sb_dma = -1;
static int __initdata sb_dma16 = -1;
-MODULE_PARM(io,"i");
-MODULE_PARM(irq,"i");
-MODULE_PARM(dma,"i");
-MODULE_PARM(dma16,"i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(dma, int, 0);
+module_param(dma16, int, 0);
-MODULE_PARM(sb_io,"i");
-MODULE_PARM(sb_irq,"i");
-MODULE_PARM(sb_dma,"i");
-MODULE_PARM(sb_dma16,"i");
+module_param(sb_io, int, 0);
+module_param(sb_irq, int, 0);
+module_param(sb_dma, int, 0);
+module_param(sb_dma16, int, 0);
-MODULE_PARM(joystick,"i");
-MODULE_PARM(symphony,"i");
-MODULE_PARM(broken_bus_clock,"i");
+module_param(joystick, bool, 0);
+module_param(symphony, bool, 0);
+module_param(broken_bus_clock, bool, 0);
MODULE_LICENSE("GPL");
#define NR_DEVICE 2
static int devices = 1;
-MODULE_PARM(devices, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param(devices, int, 0);
MODULE_PARM_DESC(devices, "number of dsp devices allocated by the driver");
/* fiddling with the card (first level hardware control) */
-inline void rme96xx_set_ctrl(rme96xx_info* s,int mask)
+static inline void rme96xx_set_ctrl(rme96xx_info* s,int mask)
{
s->control_register|=mask;
}
-inline void rme96xx_unset_ctrl(rme96xx_info* s,int mask)
+static inline void rme96xx_unset_ctrl(rme96xx_info* s,int mask)
{
s->control_register&=(~mask);
}
-inline int rme96xx_get_sample_rate_status(rme96xx_info* s)
+static inline int rme96xx_get_sample_rate_status(rme96xx_info* s)
{
int val;
u32 status;
return val;
}
-inline int rme96xx_get_sample_rate_ctrl(rme96xx_info* s)
+static inline int rme96xx_get_sample_rate_ctrl(rme96xx_info* s)
{
int val;
val = (s->control_register & RME96xx_freq) ? 48000 : 44100;
/* the function returns the hardware pointer in bytes */
#define RME96xx_BURSTBYTES -64 /* bytes by which hwptr could be off */
-inline int rme96xx_gethwptr(rme96xx_info* s,int exact)
+static inline int rme96xx_gethwptr(rme96xx_info* s,int exact)
{
unsigned long flags;
if (exact) {
return (s->hwbufid ? s->fragsize : 0);
}
-inline void rme96xx_setlatency(rme96xx_info* s,int l)
+static inline void rme96xx_setlatency(rme96xx_info* s,int l)
{
s->latency = l;
s->fragsize = 1<<(8+l);
}
-inline int rme96xx_getospace(struct dmabuf * dma, unsigned int hwp)
+static inline int rme96xx_getospace(struct dmabuf * dma, unsigned int hwp)
{
int cnt;
int swptr;
return cnt;
}
-inline int rme96xx_getispace(struct dmabuf * dma, unsigned int hwp)
+static inline int rme96xx_getispace(struct dmabuf * dma, unsigned int hwp)
{
int cnt;
int swptr;
}
-inline int rme96xx_copyfromuser(struct dmabuf* dma,const char __user * buffer,int count,int hop)
+static inline int rme96xx_copyfromuser(struct dmabuf* dma,const char __user * buffer,int count,int hop)
{
int swptr = dma->writeptr;
switch (dma->format) {
}
/* The count argument is the number of bytes */
-inline int rme96xx_copytouser(struct dmabuf* dma,const char __user* buffer,int count,int hop)
+static inline int rme96xx_copytouser(struct dmabuf* dma,const char __user* buffer,int count,int hop)
{
int swptr = dma->readptr;
switch (dma->format) {
PCI detection and module initialization stuff
----------------------------------------------------------------------------*/
-void* busmaster_malloc(int size) {
+static void* busmaster_malloc(int size) {
int pg; /* 2 s exponent of memory size */
char *buf;
return NULL;
}
-void busmaster_free(void* ptr,int size) {
+static void busmaster_free(void* ptr,int size) {
int pg;
struct page* page, *last_page;
}
-int rme96xx_init(rme96xx_info* s)
+static int rme96xx_init(rme96xx_info* s)
{
int i;
int status;
int sb_audio_open(int dev, int mode);
void sb_audio_close(int dev);
-extern sb_devc *last_sb;
-
/* From sb_common.c */
void sb_dsp_disable_midi(int port);
-void sb_dsp_disable_recording(int port);
int probe_sbmpu (struct address_info *hw_config, struct module *owner);
void unload_sbmpu (struct address_info *hw_config);
static int jazz16_base; /* Not detected */
static unsigned char jazz16_bits; /* I/O relocation bits */
-static spinlock_t jazz16_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(jazz16_lock);
/*
* Logitech Soundman Wave specific initialization code
#endif
-sb_devc *last_sb; /* Last sb loaded */
+static sb_devc *last_sb; /* Last sb loaded */
int sb_dsp_command(sb_devc * devc, unsigned char val)
{
DDB(printk("sb_dsp_detect(%x) entered\n", hw_config->io_base));
- devc->lock = SPIN_LOCK_UNLOCKED;
+ spin_lock_init(&devc->lock);
devc->type = hw_config->card_subtype;
devc->base = hw_config->io_base;
return 1;
}
-void sb_dsp_disable_midi(int io_base)
-{
-}
-
-void sb_dsp_disable_recording(int io_base)
-{
-}
-
/* if (sbmpu) below we allow mpu401 to manage the midi devs
otherwise we have to unload them. (Andrzej Krzysztofowicz) */
EXPORT_SYMBOL(sb_dsp_init);
EXPORT_SYMBOL(sb_dsp_detect);
EXPORT_SYMBOL(sb_dsp_unload);
-EXPORT_SYMBOL(sb_dsp_disable_midi);
EXPORT_SYMBOL(sb_be_quiet);
EXPORT_SYMBOL(probe_sbmpu);
EXPORT_SYMBOL(unload_sbmpu);
sb_common_mixer_set(devc, dev, left, right);
}
-int es_rec_set_recmask(sb_devc * devc, int mask)
+static int es_rec_set_recmask(sb_devc * devc, int mask)
{
int i, i_mask, cur_mask, diff_mask;
int value, left, right;
static int __initdata dma2 = -1;
static int __initdata sgbase = -1;
-MODULE_PARM(io,"i");
-MODULE_PARM(irq,"i");
-MODULE_PARM(dma,"i");
-MODULE_PARM(dma2,"i");
-MODULE_PARM(sgbase,"i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(dma, int, 0);
+module_param(dma2, int, 0);
+module_param(sgbase, int, 0);
static int __init init_sgalaxy(void)
{
static unsigned int devindex;
-MODULE_PARM(reverb, "1-" __MODULE_STRING(NR_DEVICE) "i");
+module_param_array(reverb, bool, NULL, 0);
MODULE_PARM_DESC(reverb, "if 1 enables the reverb circuitry. NOTE: your card must have the reverb RAM");
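The old "1-N i" array syntax becomes module_param_array(name, type, nump, perm); the patch passes NULL for nump, which simply discards the count of values the user supplied. A hedged sketch of the same declaration in isolation:

#include <linux/module.h>
#include <linux/moduleparam.h>

#define NR_DEVICE 2

static int reverb[NR_DEVICE];

/* The third argument could instead point to an int that receives the number
 * of values actually given on the command line; NULL ignores that count. */
module_param_array(reverb, bool, NULL, 0);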
#if 0
MODULE_PARM(wavetable, "1-" __MODULE_STRING(NR_DEVICE) "i");
static struct initvol {
int mixch;
int vol;
-} initvol[] __initdata = {
+} initvol[] __devinitdata = {
{ SOUND_MIXER_WRITE_RECLEV, 0x4040 },
{ SOUND_MIXER_WRITE_LINE1, 0x4040 },
{ SOUND_MIXER_WRITE_CD, 0x4040 },
static int __devinit sv_probe(struct pci_dev *pcidev, const struct pci_device_id *pciid)
{
- static char __initdata sv_ddma_name[] = "S3 Inc. SonicVibes DDMA Controller";
+ static char __devinitdata sv_ddma_name[] = "S3 Inc. SonicVibes DDMA Controller";
struct sv_state *s;
mm_segment_t fs;
int i, val, ret;
* (audio@crystal.cirrus.com).
* -- adapted from cs4281 PCI driver for cs4297a on
* BCM1250 Synchronous Serial interface
-* (kwalker@broadcom.com)
+* (Kip Walker, Broadcom Corp.)
+* Copyright (C) 2004 Maciej W. Rozycki
+* Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
#include <linux/pci.h>
#include <linux/bitops.h>
#include <linux/interrupt.h>
-#include <asm/io.h>
-#include <asm/dma.h>
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/smp_lock.h>
-#include <linux/wrapper.h>
+
+#include <asm/byteorder.h>
+#include <asm/dma.h>
+#include <asm/io.h>
#include <asm/uaccess.h>
#include <asm/sibyte/sb1250_regs.h>
#include <asm/sibyte/sb1250_syncser.h>
#include <asm/sibyte/sb1250_mac.h>
#include <asm/sibyte/sb1250.h>
-#include <asm/sibyte/64bit.h>
struct cs4297a_state;
CS_DBGOUT(CS_INIT, 2,
printk(KERN_INFO "cs4297a: Setting up serial parameters\n"));
- out64(M_SYNCSER_CMD_RX_RESET | M_SYNCSER_CMD_TX_RESET, SS_CSR(R_SER_CMD));
+ __raw_writeq(M_SYNCSER_CMD_RX_RESET | M_SYNCSER_CMD_TX_RESET, SS_CSR(R_SER_CMD));
- out64(M_SYNCSER_MSB_FIRST, SS_CSR(R_SER_MODE));
- out64(32, SS_CSR(R_SER_MINFRM_SZ));
- out64(32, SS_CSR(R_SER_MAXFRM_SZ));
+ __raw_writeq(M_SYNCSER_MSB_FIRST, SS_CSR(R_SER_MODE));
+ __raw_writeq(32, SS_CSR(R_SER_MINFRM_SZ));
+ __raw_writeq(32, SS_CSR(R_SER_MAXFRM_SZ));
- out64(1, SS_CSR(R_SER_TX_RD_THRSH));
- out64(4, SS_CSR(R_SER_TX_WR_THRSH));
- out64(8, SS_CSR(R_SER_RX_RD_THRSH));
+ __raw_writeq(1, SS_CSR(R_SER_TX_RD_THRSH));
+ __raw_writeq(4, SS_CSR(R_SER_TX_WR_THRSH));
+ __raw_writeq(8, SS_CSR(R_SER_RX_RD_THRSH));
/* This looks good from experimentation */
- out64((M_SYNCSER_TXSYNC_INT | V_SYNCSER_TXSYNC_DLY(0) | M_SYNCSER_TXCLK_EXT |
+ __raw_writeq((M_SYNCSER_TXSYNC_INT | V_SYNCSER_TXSYNC_DLY(0) | M_SYNCSER_TXCLK_EXT |
M_SYNCSER_RXSYNC_INT | V_SYNCSER_RXSYNC_DLY(1) | M_SYNCSER_RXCLK_EXT | M_SYNCSER_RXSYNC_EDGE),
SS_CSR(R_SER_LINE_MODE));
/* This looks good from experimentation */
- out64(V_SYNCSER_SEQ_COUNT(14) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(14) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE,
SS_TXTBL(0));
- out64(V_SYNCSER_SEQ_COUNT(15) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(15) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
SS_TXTBL(1));
- out64(V_SYNCSER_SEQ_COUNT(13) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(13) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
SS_TXTBL(2));
- out64(V_SYNCSER_SEQ_COUNT( 0) | M_SYNCSER_SEQ_ENABLE |
+ __raw_writeq(V_SYNCSER_SEQ_COUNT( 0) | M_SYNCSER_SEQ_ENABLE |
M_SYNCSER_SEQ_STROBE | M_SYNCSER_SEQ_LAST, SS_TXTBL(3));
- out64(V_SYNCSER_SEQ_COUNT(14) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(14) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE,
SS_RXTBL(0));
- out64(V_SYNCSER_SEQ_COUNT(15) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(15) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
SS_RXTBL(1));
- out64(V_SYNCSER_SEQ_COUNT(13) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
+ __raw_writeq(V_SYNCSER_SEQ_COUNT(13) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_BYTE,
SS_RXTBL(2));
- out64(V_SYNCSER_SEQ_COUNT( 0) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE |
+ __raw_writeq(V_SYNCSER_SEQ_COUNT( 0) | M_SYNCSER_SEQ_ENABLE | M_SYNCSER_SEQ_STROBE |
M_SYNCSER_SEQ_LAST, SS_RXTBL(3));
for (i=4; i<16; i++) {
/* Just in case... */
- out64(M_SYNCSER_SEQ_LAST, SS_TXTBL(i));
- out64(M_SYNCSER_SEQ_LAST, SS_RXTBL(i));
+ __raw_writeq(M_SYNCSER_SEQ_LAST, SS_TXTBL(i));
+ __raw_writeq(M_SYNCSER_SEQ_LAST, SS_RXTBL(i));
}
return 0;
memset(dma->descrtab, 0, dma->ringsz * sizeof(serdma_descr_t));
dma->descrtab_end = dma->descrtab + dma->ringsz;
/* XXX bloody mess, use proper DMA API here ... */
- dma->descrtab_phys = PHYSADDR((int)dma->descrtab);
+ dma->descrtab_phys = CPHYSADDR((long)dma->descrtab);
dma->descr_add = dma->descr_rem = dma->descrtab;
/* Frame buffer area */
return -1;
}
memset(dma->dma_buf, 0, DMA_BUF_SIZE);
- dma->dma_buf_phys = PHYSADDR((int)dma->dma_buf);
+ dma->dma_buf_phys = CPHYSADDR((long)dma->dma_buf);
/* Samples buffer area */
dma->sbufsz = SAMPLE_BUF_SIZE;
init_serdma(&s->dma_dac))
return -1;
- if (in64(SS_CSR(R_SER_DMA_DSCR_COUNT_RX))||
- in64(SS_CSR(R_SER_DMA_DSCR_COUNT_TX))) {
+ if (__raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_RX))||
+ __raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_TX))) {
panic("DMA state corrupted?!");
}
s->dma_adc.descrtab[i].descr_b = 0;
}
- out64((M_DMA_EOP_INT_EN | V_DMA_INT_PKTCNT(DMA_INT_CNT) |
+ __raw_writeq((M_DMA_EOP_INT_EN | V_DMA_INT_PKTCNT(DMA_INT_CNT) |
V_DMA_RINGSZ(DMA_DESCR) | M_DMA_TDX_EN),
SS_CSR(R_SER_DMA_CONFIG0_RX));
- out64(M_DMA_L2CA, SS_CSR(R_SER_DMA_CONFIG1_RX));
- out64(s->dma_adc.descrtab_phys, SS_CSR(R_SER_DMA_DSCR_BASE_RX));
+ __raw_writeq(M_DMA_L2CA, SS_CSR(R_SER_DMA_CONFIG1_RX));
+ __raw_writeq(s->dma_adc.descrtab_phys, SS_CSR(R_SER_DMA_DSCR_BASE_RX));
- out64(V_DMA_RINGSZ(DMA_DESCR), SS_CSR(R_SER_DMA_CONFIG0_TX));
- out64(M_DMA_L2CA | M_DMA_NO_DSCR_UPDT, SS_CSR(R_SER_DMA_CONFIG1_TX));
- out64(s->dma_dac.descrtab_phys, SS_CSR(R_SER_DMA_DSCR_BASE_TX));
+ __raw_writeq(V_DMA_RINGSZ(DMA_DESCR), SS_CSR(R_SER_DMA_CONFIG0_TX));
+ __raw_writeq(M_DMA_L2CA | M_DMA_NO_DSCR_UPDT, SS_CSR(R_SER_DMA_CONFIG1_TX));
+ __raw_writeq(s->dma_dac.descrtab_phys, SS_CSR(R_SER_DMA_DSCR_BASE_TX));
/* Prep the receive DMA descriptor ring */
- out64(DMA_DESCR, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
+ __raw_writeq(DMA_DESCR, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
- out64(M_SYNCSER_DMA_RX_EN | M_SYNCSER_DMA_TX_EN, SS_CSR(R_SER_DMA_ENABLE));
+ __raw_writeq(M_SYNCSER_DMA_RX_EN | M_SYNCSER_DMA_TX_EN, SS_CSR(R_SER_DMA_ENABLE));
- out64((M_SYNCSER_RX_SYNC_ERR | M_SYNCSER_RX_OVERRUN | M_SYNCSER_RX_EOP_COUNT),
+ __raw_writeq((M_SYNCSER_RX_SYNC_ERR | M_SYNCSER_RX_OVERRUN | M_SYNCSER_RX_EOP_COUNT),
SS_CSR(R_SER_INT_MASK));
/* Enable the rx/tx; let the codec warm up to the sync and
start sending good frames before the receive FIFO is
enabled */
- out64(M_SYNCSER_CMD_TX_EN, SS_CSR(R_SER_CMD));
+ __raw_writeq(M_SYNCSER_CMD_TX_EN, SS_CSR(R_SER_CMD));
udelay(1000);
- out64(M_SYNCSER_CMD_RX_EN | M_SYNCSER_CMD_TX_EN, SS_CSR(R_SER_CMD));
+ __raw_writeq(M_SYNCSER_CMD_RX_EN | M_SYNCSER_CMD_TX_EN, SS_CSR(R_SER_CMD));
/* XXXKW is this magic? (the "1" part) */
- while ((in64(SS_CSR(R_SER_STATUS)) & 0xf1) != 1)
+ while ((__raw_readq(SS_CSR(R_SER_STATUS)) & 0xf1) != 1)
;
CS_DBGOUT(CS_INIT, 4,
printk(KERN_INFO "cs4297a: status: %08x\n",
- (unsigned int)(in64(SS_CSR(R_SER_STATUS)) & 0xffffffff)));
+ (unsigned int)(__raw_readq(SS_CSR(R_SER_STATUS)) & 0xffffffff)));
return 0;
}
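The out64()/in64() accessors are replaced throughout this driver by the generic __raw_writeq()/__raw_readq() helpers from <asm/io.h>, which perform no byte swapping (hence the explicit cpu_to_be*/be*_to_cpu conversions added elsewhere in these hunks). A minimal editorial sketch of the access pattern, with an invented register:

#include <linux/types.h>
#include <asm/io.h>

/* Sketch only: toggle some bits in a 64-bit memory-mapped register. */
static u64 example_csr_toggle(void __iomem *csr, u64 bits)
{
	u64 val = __raw_readq(csr);	/* raw 64-bit read, no byte swap */

	__raw_writeq(val ^ bits, csr);	/* raw 64-bit write, no byte swap */
	return val;
}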
descr = &d->descrtab[swptr];
data_p = &d->dma_buf[swptr * 4];
- *data_p = data;
- out64(1, SS_CSR(R_SER_DMA_DSCR_COUNT_TX));
+ *data_p = cpu_to_be64(data);
+ __raw_writeq(1, SS_CSR(R_SER_DMA_DSCR_COUNT_TX));
CS_DBGOUT(CS_DESCR, 4,
printk(KERN_INFO "cs4297a: add_tx %p (%x -> %x)\n",
data_p, swptr, d->hwptr));
/* XXXKW what do I really want here? My theory for now is
that I just flip the "ena" bit, and the interrupt handler
will stop processing the xmit channel */
- out64((s->ena & FMODE_READ) ? M_SYNCSER_DMA_RX_EN : 0,
+ __raw_writeq((s->ena & FMODE_READ) ? M_SYNCSER_DMA_RX_EN : 0,
SS_CSR(R_SER_DMA_ENABLE));
#endif
serdma_descr_t *descr;
// update ADC pointer
- status = intflag ? in64(SS_CSR(R_SER_STATUS)) : 0;
+ status = intflag ? __raw_readq(SS_CSR(R_SER_STATUS)) : 0;
if ((s->ena & FMODE_READ) || (status & (M_SYNCSER_RX_EOP_COUNT))) {
d = &s->dma_adc;
- hwptr = (unsigned) (((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
+ hwptr = (unsigned) (((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
d->descrtab_phys) / sizeof(serdma_descr_t));
if (s->ena & FMODE_READ) {
s_ptr = (u32 *)&(d->dma_buf[d->swptr*4]);
descr = &d->descrtab[d->swptr];
while (diff2--) {
- u64 data = *(u64 *)s_ptr;
+ u64 data = be64_to_cpu(*(u64 *)s_ptr);
u64 descr_a;
u16 left, right;
descr_a = descr->descr_a;
descr->descr_a &= ~M_DMA_SERRX_SOP;
- if ((descr_a & M_DMA_DSCRA_A_ADDR) != PHYSADDR((int)s_ptr)) {
+ if ((descr_a & M_DMA_DSCRA_A_ADDR) != CPHYSADDR((long)s_ptr)) {
printk(KERN_ERR "cs4297a: RX Bad address (read)\n");
}
if (((data & 0x9800000000000000) != 0x9800000000000000) ||
continue;
}
good_diff++;
- left = ((s_ptr[1] & 0xff) << 8) | ((s_ptr[2] >> 24) & 0xff);
- right = (s_ptr[2] >> 4) & 0xffff;
- *d->sb_hwptr++ = left;
- *d->sb_hwptr++ = right;
+ left = ((be32_to_cpu(s_ptr[1]) & 0xff) << 8) |
+ ((be32_to_cpu(s_ptr[2]) >> 24) & 0xff);
+ right = (be32_to_cpu(s_ptr[2]) >> 4) & 0xffff;
+ *d->sb_hwptr++ = cpu_to_be16(left);
+ *d->sb_hwptr++ = cpu_to_be16(right);
if (d->sb_hwptr == d->sb_end)
d->sb_hwptr = d->sample_buf;
descr++;
printk(KERN_ERR "cs4297a: bogus receive overflow!!\n");
}
d->swptr = (d->swptr + diff) % d->ringsz;
- out64(diff, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
+ __raw_writeq(diff, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
if (d->mapped) {
if (d->count >= (signed) d->fragsize)
wake_up(&d->wait);
here because of an interrupt, so there must
be a buffer to process. */
do {
- data = *data_p;
- if ((descr->descr_a & M_DMA_DSCRA_A_ADDR) != PHYSADDR((int)data_p)) {
- printk(KERN_ERR "cs4297a: RX Bad address %d (%x %x)\n", d->swptr,
- (int)(descr->descr_a & M_DMA_DSCRA_A_ADDR),
- (int)PHYSADDR((int)data_p));
+ data = be64_to_cpu(*data_p);
+ if ((descr->descr_a & M_DMA_DSCRA_A_ADDR) != CPHYSADDR((long)data_p)) {
+ printk(KERN_ERR "cs4297a: RX Bad address %d (%llx %lx)\n", d->swptr,
+ (long long)(descr->descr_a & M_DMA_DSCRA_A_ADDR),
+ (long)CPHYSADDR((long)data_p));
}
if (!(data & (1LL << 63)) ||
!(descr->descr_a & M_DMA_SERRX_SOP) ||
d->swptr = 0;
data_p = d->dma_buf;
}
- out64(1, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
+ __raw_writeq(1, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
} while (--diff);
d->hwptr = hwptr;
//
if (s->ena & FMODE_WRITE) {
serdma_t *d = &s->dma_dac;
- hwptr = (unsigned) (((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
+ hwptr = (unsigned) (((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
d->descrtab_phys) / sizeof(serdma_descr_t));
diff = (d->ringsz + hwptr - d->hwptr) % d->ringsz;
CS_DBGOUT(CS_WAVE_WRITE, 4, printk(KERN_INFO
if (nonblock)
return -EBUSY;
add_wait_queue(&s->dma_dac.wait, &wait);
- while ((count = in64(SS_CSR(R_SER_DMA_DSCR_COUNT_TX))) ||
+ while ((count = __raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_TX))) ||
(s->dma_dac.count > 0)) {
if (!signal_pending(current)) {
set_current_state(TASK_INTERRUPTIBLE);
}
spin_lock_irqsave(&s->lock, flags);
/* Reset the bookkeeping */
- hwptr = (int)(((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
+ hwptr = (int)(((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
s->dma_dac.descrtab_phys) / sizeof(serdma_descr_t));
s->dma_dac.hwptr = s->dma_dac.swptr = hwptr;
spin_unlock_irqrestore(&s->lock, flags);
u32 *s_tmpl;
u32 *t_tmpl;
u32 left, right;
- /* XXXKW check system endian here ... */
int swap = (s->prop_dac.fmt == AFMT_S16_LE) || (s->prop_dac.fmt == AFMT_U16_LE);
/* XXXXXX this is broken for BLOAT_FACTOR */
}
if (d->underrun) {
d->underrun = 0;
- hwptr = (unsigned) (((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
+ hwptr = (unsigned) (((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
d->descrtab_phys) / sizeof(serdma_descr_t));
d->swptr = d->hwptr = hwptr;
}
/* XXXKW assuming 16-bit stereo! */
do {
- t_tmpl[0] = 0x98000000;
- left = s_tmpl[0] >> 16;
- if (left & 0x8000)
- left |= 0xf0000;
- right = s_tmpl[0] & 0xffff;
- if (right & 0x8000)
- right |= 0xf0000;
- if (swap) {
- t_tmpl[1] = left & 0xff;
- t_tmpl[2] = ((left & 0xff00) << 16) | ((right & 0xff) << 12) |
- ((right & 0xff00) >> 4);
- } else {
- t_tmpl[1] = left >> 8;
- t_tmpl[2] = ((left & 0xff) << 24) | (right << 4);
- }
+ u32 tmp;
+
+ t_tmpl[0] = cpu_to_be32(0x98000000);
+
+ tmp = be32_to_cpu(s_tmpl[0]);
+ left = tmp & 0xffff;
+ right = tmp >> 16;
+ if (swap) {
+ left = swab16(left);
+ right = swab16(right);
+ }
+ t_tmpl[1] = cpu_to_be32(left >> 8);
+ t_tmpl[2] = cpu_to_be32(((left & 0xff) << 24) |
+ (right << 4));
+
s_tmpl++;
t_tmpl += 8;
copy_cnt -= 4;
/* Mux in any pending read/write accesses */
if (s->reg_request) {
- *(u64 *)(d->dma_buf + (swptr * 4)) |= s->reg_request;
+ *(u64 *)(d->dma_buf + (swptr * 4)) |=
+ cpu_to_be64(s->reg_request);
s->reg_request = 0;
wake_up(&s->dma_dac.reg_wait);
}
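The descriptor and sample words in the serializer's DMA buffers are kept in big-endian byte order, so the hunks above wrap every CPU access in the <asm/byteorder.h> conversion helpers rather than using plain loads and stores. A small illustrative pair of helpers (not taken from the driver):

#include <linux/types.h>
#include <asm/byteorder.h>

static inline void frame_word_store(u32 *slot, u32 val)
{
	*slot = cpu_to_be32(val);	/* CPU order -> big-endian memory */
}

static inline u32 frame_word_load(const u32 *slot)
{
	return be32_to_cpu(*slot);	/* big-endian memory -> CPU order */
}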
"cs4297a: copy in %d to swptr %x\n", cnt, swptr));
swptr = (swptr + (cnt/FRAME_SAMPLE_BYTES)) % d->ringsz;
- out64(cnt/FRAME_SAMPLE_BYTES, SS_CSR(R_SER_DMA_DSCR_COUNT_TX));
+ __raw_writeq(cnt/FRAME_SAMPLE_BYTES, SS_CSR(R_SER_DMA_DSCR_COUNT_TX));
spin_lock_irqsave(&s->lock, flags);
d->swptr = swptr;
d->count += cnt;
"cs4297a: cs4297a_ioctl(): DSP_RESET\n"));
if (file->f_mode & FMODE_WRITE) {
stop_dac(s);
- synchronize_irq();
+ synchronize_irq(s->irq);
s->dma_dac.count = s->dma_dac.total_bytes =
s->dma_dac.blocks = s->dma_dac.wakeup = 0;
s->dma_dac.swptr = s->dma_dac.hwptr =
- (int)(((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
+ (int)(((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_TX)) & M_DMA_CURDSCR_ADDR) -
s->dma_dac.descrtab_phys) / sizeof(serdma_descr_t));
}
if (file->f_mode & FMODE_READ) {
stop_adc(s);
- synchronize_irq();
+ synchronize_irq(s->irq);
s->dma_adc.count = s->dma_adc.total_bytes =
s->dma_adc.blocks = s->dma_dac.wakeup = 0;
s->dma_adc.swptr = s->dma_adc.hwptr =
- (int)(((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
+ (int)(((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
s->dma_adc.descrtab_phys) / sizeof(serdma_descr_t));
}
return 0;
"cs4297a: cs4297a_open(): inode=0x%.8x file=0x%.8x f_mode=0x%x\n",
(unsigned) inode, (unsigned) file, file->f_mode));
CS_DBGOUT(CS_FUNCTION | CS_OPEN, 2, printk(KERN_INFO
- "cs4297a: status = %08x\n", (int)in64(SS_CSR(R_SER_STATUS_DEBUG))));
+ "cs4297a: status = %08x\n", (int)__raw_readq(SS_CSR(R_SER_STATUS_DEBUG))));
list_for_each(entry, &cs4297a_devs)
{
return -ENODEV;
}
if (file->f_mode & FMODE_WRITE) {
- if (in64(SS_CSR(R_SER_DMA_DSCR_COUNT_TX)) != 0) {
+ if (__raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_TX)) != 0) {
printk(KERN_ERR "cs4297a: TX pipe needs to drain\n");
- while (in64(SS_CSR(R_SER_DMA_DSCR_COUNT_TX)))
+ while (__raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_TX)))
;
}
.release = cs4297a_release,
};
-static irqreturn_t cs4297a_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static void cs4297a_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
struct cs4297a_state *s = (struct cs4297a_state *) dev_id;
u32 status;
- status = in64(SS_CSR(R_SER_STATUS_DEBUG));
+ status = __raw_readq(SS_CSR(R_SER_STATUS_DEBUG));
CS_DBGOUT(CS_INTERRUPT, 6, printk(KERN_INFO
"cs4297a: cs4297a_interrupt() HISR=0x%.8x\n", status));
#if 0
/* XXXKW what check *should* be done here? */
if (!(status & (M_SYNCSER_RX_EOP_COUNT | M_SYNCSER_RX_OVERRUN | M_SYNCSER_RX_SYNC_ERR))) {
- status = in64(SS_CSR(R_SER_STATUS));
+ status = __raw_readq(SS_CSR(R_SER_STATUS));
printk(KERN_ERR "cs4297a: unexpected interrupt (status %08x)\n", status);
- return IRQ_HANDLED;
+ return;
}
#endif
if (status & M_SYNCSER_RX_SYNC_ERR) {
- status = in64(SS_CSR(R_SER_STATUS));
+ status = __raw_readq(SS_CSR(R_SER_STATUS));
printk(KERN_ERR "cs4297a: rx sync error (status %08x)\n", status);
- return IRQ_HANDLED;
+ return;
}
if (status & M_SYNCSER_RX_OVERRUN) {
/* Fix things up: get the receive descriptor pool
clean and give them back to the hardware */
- while (in64(SS_CSR(R_SER_DMA_DSCR_COUNT_RX)))
+ while (__raw_readq(SS_CSR(R_SER_DMA_DSCR_COUNT_RX)))
;
- newptr = (unsigned) (((in64(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
+ newptr = (unsigned) (((__raw_readq(SS_CSR(R_SER_DMA_CUR_DSCR_ADDR_RX)) & M_DMA_CURDSCR_ADDR) -
s->dma_adc.descrtab_phys) / sizeof(serdma_descr_t));
for (i=0; i<DMA_DESCR; i++) {
s->dma_adc.descrtab[i].descr_a &= ~M_DMA_SERRX_SOP;
s->dma_adc.swptr = s->dma_adc.hwptr = newptr;
s->dma_adc.count = 0;
s->dma_adc.sb_swptr = s->dma_adc.sb_hwptr = s->dma_adc.sample_buf;
- out64(DMA_DESCR, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
+ __raw_writeq(DMA_DESCR, SS_CSR(R_SER_DMA_DSCR_COUNT_RX));
}
spin_lock(&s->lock);
CS_DBGOUT(CS_INTERRUPT, 6, printk(KERN_INFO
"cs4297a: cs4297a_interrupt()-\n"));
- return IRQ_HANDLED;
}
+#if 0
static struct initvol {
int mixch;
int vol;
{SOUND_MIXER_WRITE_SPEAKER, 0x4040},
{SOUND_MIXER_WRITE_MIC, 0x0000}
};
+#endif
static int __init cs4297a_init(void)
{
struct cs4297a_state *s;
- u64 cfg;
- u32 pwr, id;
+ u32 pwr, id;
mm_segment_t fs;
- int rval, mdio_val;
+ int rval;
+#ifndef CONFIG_BCM_CS4297A_CSWARM
+ u64 cfg;
+ int mdio_val;
+#endif
CS_DBGOUT(CS_INIT | CS_FUNCTION, 2, printk(KERN_INFO
"cs4297a: cs4297a_init_module()+ \n"));
- mdio_val = in64(KSEG1 + A_MAC_REGISTER(2, R_MAC_MDIO)) &
+#ifndef CONFIG_BCM_CS4297A_CSWARM
+ mdio_val = __raw_readq(KSEG1 + A_MAC_REGISTER(2, R_MAC_MDIO)) &
(M_MAC_MDIO_DIR|M_MAC_MDIO_OUT);
/* Check syscfg for synchronous serial on port 1 */
- cfg = in64(KSEG1 + A_SCD_SYSTEM_CFG);
+ cfg = __raw_readq(KSEG1 + A_SCD_SYSTEM_CFG);
if (!(cfg & M_SYS_SER1_ENABLE)) {
- out64(cfg | M_SYS_SER1_ENABLE, KSEG1+A_SCD_SYSTEM_CFG);
- cfg = in64(KSEG1 + A_SCD_SYSTEM_CFG);
+ __raw_writeq(cfg | M_SYS_SER1_ENABLE, KSEG1+A_SCD_SYSTEM_CFG);
+ cfg = __raw_readq(KSEG1 + A_SCD_SYSTEM_CFG);
if (!(cfg & M_SYS_SER1_ENABLE)) {
printk(KERN_INFO "cs4297a: serial port 1 not configured for synchronous operation\n");
return -1;
/* Force the codec (on SWARM) to reset by clearing
GENO, preserving MDIO (no effect on CSWARM) */
- out64(mdio_val, KSEG1+A_MAC_REGISTER(2, R_MAC_MDIO));
+ __raw_writeq(mdio_val, KSEG1+A_MAC_REGISTER(2, R_MAC_MDIO));
udelay(10);
}
/* Now set GENO */
- out64(mdio_val | M_MAC_GENC, KSEG1+A_MAC_REGISTER(2, R_MAC_MDIO));
+ __raw_writeq(mdio_val | M_MAC_GENC, KSEG1+A_MAC_REGISTER(2, R_MAC_MDIO));
/* Give the codec some time to finish resetting (start the bit clock) */
udelay(100);
+#endif
if (!(s = kmalloc(sizeof(struct cs4297a_state), GFP_KERNEL))) {
CS_DBGOUT(CS_ERROR, 1, printk(KERN_ERR
} while (!rval && (pwr != 0xf));
if (!rval) {
+ char *sb1250_duart_present;
+
fs = get_fs();
set_fs(KERNEL_DS);
#if 0
list_add(&s->list, &cs4297a_devs);
cs4297a_read_ac97(s, AC97_VENDOR_ID1, &id);
-
+
+ sb1250_duart_present = symbol_get(sb1250_duart_present);
+ if (sb1250_duart_present)
+ sb1250_duart_present[1] = 0;
+
printk(KERN_INFO "cs4297a: initialized (vendor id = %x)\n", id);
CS_DBGOUT(CS_INIT | CS_FUNCTION, 2,
// ---------------------------------------------------------------------
-EXPORT_NO_SYMBOLS;
-
-MODULE_AUTHOR("Kip Walker, kwalker@broadcom.com");
+MODULE_AUTHOR("Kip Walker, Broadcom Corp.");
MODULE_DESCRIPTION("Cirrus Logic CS4297a Driver for Broadcom SWARM board");
// ---------------------------------------------------------------------
static int __initdata mpu_io = -1;
static int __initdata mpu_irq = -1;
-MODULE_PARM(io,"i");
-MODULE_PARM(irq,"i");
-MODULE_PARM(dma,"i");
-MODULE_PARM(dma2,"i");
-MODULE_PARM(sb_io,"i");
-MODULE_PARM(sb_dma,"i");
-MODULE_PARM(sb_irq,"i");
-MODULE_PARM(mpu_io,"i");
-MODULE_PARM(mpu_irq,"i");
-MODULE_PARM(joystick, "i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
+module_param(dma, int, 0);
+module_param(dma2, int, 0);
+module_param(sb_io, int, 0);
+module_param(sb_dma, int, 0);
+module_param(sb_irq, int, 0);
+module_param(mpu_io, int, 0);
+module_param(mpu_irq, int, 0);
+module_param(joystick, bool, 0);
static int __init init_trix(void)
{
static struct address_info cfg_mpu;
-static int __initdata io = -1;
-static int __initdata irq = -1;
+static int io = -1;
+static int irq = -1;
-MODULE_PARM(io, "i");
-MODULE_PARM(irq, "i");
+module_param(io, int, 0444);
+module_param(irq, int, 0444);
static int __init init_uart401(void)
static int uart6850_irq;
static int uart6850_detected;
static int my_dev;
-static spinlock_t lock=SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(lock);
static void (*midi_input_intr) (int dev, unsigned char data);
static void poll_uart6850(unsigned long dummy);
static int __initdata io = -1;
static int __initdata irq = -1;
-MODULE_PARM(io,"i");
-MODULE_PARM(irq,"i");
+module_param(io, int, 0);
+module_param(irq, int, 0);
static int __init init_uart6850(void)
{
static void ymfpci_download_image(ymfpci_t *codec);
static void ymf_memload(ymfpci_t *unit);
-static spinlock_t ymf_devs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(ymf_devs_lock);
static LIST_HEAD(ymf_devs);
/*
* common I/O routines
*/
-static inline u8 ymfpci_readb(ymfpci_t *codec, u32 offset)
-{
- return readb(codec->reg_area_virt + offset);
-}
-
static inline void ymfpci_writeb(ymfpci_t *codec, u32 offset, u8 val)
{
writeb(val, codec->reg_area_virt + offset);
# ifdef MODULE
static int mpu_io;
static int synth_io;
-MODULE_PARM(mpu_io, "i");
-MODULE_PARM(synth_io, "i");
+module_param(mpu_io, int, 0);
+module_param(synth_io, int, 0);
# else
static int mpu_io = 0x330;
static int synth_io = 0x388;
snd-rme96-objs := rme96.o
snd-sonicvibes-objs := sonicvibes.o
snd-via82xx-objs := via82xx.o
+snd-via82xx-modem-objs := via82xx_modem.o
# Toplevel Module Dependency
obj-$(CONFIG_SND_ALS4000) += snd-als4000.o
obj-$(CONFIG_SND_RME96) += snd-rme96.o
obj-$(CONFIG_SND_SONICVIBES) += snd-sonicvibes.o
obj-$(CONFIG_SND_VIA82XX) += snd-via82xx.o
+obj-$(CONFIG_SND_VIA82XX_MODEM) += snd-via82xx-modem.o
obj-$(CONFIG_SND) += \
ac97/ \
ali5451/ \
au88x0/ \
+ ca0106/ \
cs46xx/ \
emu10k1/ \
ice1712/ \
static int snd_ak4531_get_single(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int reg = kcontrol->private_value & 0xff;
int shift = (kcontrol->private_value >> 16) & 0x07;
int mask = (kcontrol->private_value >> 24) & 0xff;
int invert = (kcontrol->private_value >> 22) & 1;
int val;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
val = (ak4531->regs[reg] >> shift) & mask;
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
if (invert) {
val = mask - val;
}
static int snd_ak4531_put_single(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int reg = kcontrol->private_value & 0xff;
int shift = (kcontrol->private_value >> 16) & 0x07;
int mask = (kcontrol->private_value >> 24) & 0xff;
val = mask - val;
}
val <<= shift;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
val = (ak4531->regs[reg] & ~(mask << shift)) | val;
change = val != ak4531->regs[reg];
ak4531->write(ak4531, reg, ak4531->regs[reg] = val);
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
return change;
}
static int snd_ak4531_get_double(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int left_reg = kcontrol->private_value & 0xff;
int right_reg = (kcontrol->private_value >> 8) & 0xff;
int left_shift = (kcontrol->private_value >> 16) & 0x07;
int invert = (kcontrol->private_value >> 22) & 1;
int left, right;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
left = (ak4531->regs[left_reg] >> left_shift) & mask;
right = (ak4531->regs[right_reg] >> right_shift) & mask;
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
if (invert) {
left = mask - left;
right = mask - right;
static int snd_ak4531_put_double(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int left_reg = kcontrol->private_value & 0xff;
int right_reg = (kcontrol->private_value >> 8) & 0xff;
int left_shift = (kcontrol->private_value >> 16) & 0x07;
}
left <<= left_shift;
right <<= right_shift;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
if (left_reg == right_reg) {
left = (ak4531->regs[left_reg] & ~((mask << left_shift) | (mask << right_shift))) | left | right;
change = left != ak4531->regs[left_reg];
ak4531->write(ak4531, left_reg, ak4531->regs[left_reg] = left);
ak4531->write(ak4531, right_reg, ak4531->regs[right_reg] = right);
}
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
return change;
}
static int snd_ak4531_get_input_sw(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int reg1 = kcontrol->private_value & 0xff;
int reg2 = (kcontrol->private_value >> 8) & 0xff;
int left_shift = (kcontrol->private_value >> 16) & 0x0f;
int right_shift = (kcontrol->private_value >> 24) & 0x0f;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
ucontrol->value.integer.value[0] = (ak4531->regs[reg1] >> left_shift) & 1;
ucontrol->value.integer.value[1] = (ak4531->regs[reg2] >> left_shift) & 1;
ucontrol->value.integer.value[2] = (ak4531->regs[reg1] >> right_shift) & 1;
ucontrol->value.integer.value[3] = (ak4531->regs[reg2] >> right_shift) & 1;
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
return 0;
}
static int snd_ak4531_put_input_sw(snd_kcontrol_t * kcontrol, snd_ctl_elem_value_t * ucontrol)
{
ak4531_t *ak4531 = snd_kcontrol_chip(kcontrol);
- unsigned long flags;
int reg1 = kcontrol->private_value & 0xff;
int reg2 = (kcontrol->private_value >> 8) & 0xff;
int left_shift = (kcontrol->private_value >> 16) & 0x0f;
int change;
int val1, val2;
- spin_lock_irqsave(&ak4531->reg_lock, flags);
+ down(&ak4531->reg_mutex);
val1 = ak4531->regs[reg1] & ~((1 << left_shift) | (1 << right_shift));
val2 = ak4531->regs[reg2] & ~((1 << left_shift) | (1 << right_shift));
val1 |= (ucontrol->value.integer.value[0] & 1) << left_shift;
change = val1 != ak4531->regs[reg1] || val2 != ak4531->regs[reg2];
ak4531->write(ak4531, reg1, ak4531->regs[reg1] = val1);
ak4531->write(ak4531, reg2, ak4531->regs[reg2] = val2);
- spin_unlock_irqrestore(&ak4531->reg_lock, flags);
+ up(&ak4531->reg_mutex);
return change;
}
if (ak4531 == NULL)
return -ENOMEM;
*ak4531 = *_ak4531;
- spin_lock_init(&ak4531->reg_lock);
+ init_MUTEX(&ak4531->reg_mutex);
if ((err = snd_component_add(card, "AK4531")) < 0) {
snd_ak4531_free(ak4531);
return err;
}
}
snd_ak4531_proc_init(card, ak4531);
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, ak4531, &ops)) < 0) {
+ if ((err = snd_device_new(card, SNDRV_DEV_CODEC, ak4531, &ops)) < 0) {
snd_ak4531_free(ak4531);
return err;
}
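The ak4531 hunks above replace the reg_lock spinlock with a semaphore (reg_mutex) used as a sleeping mutex around the register cache updates; the hunks themselves do not state the reason for the change. The pattern, sketched with illustrative names:

#include <linux/types.h>
#include <asm/semaphore.h>

struct example_codec {
	struct semaphore reg_mutex;
	unsigned char regs[0x20];
};

static void example_codec_setup(struct example_codec *c)
{
	init_MUTEX(&c->reg_mutex);	/* semaphore initialized to 1 */
}

static void example_codec_update(struct example_codec *c, int reg, unsigned char val)
{
	down(&c->reg_mutex);		/* may sleep, unlike spin_lock_irqsave() */
	c->regs[reg] = val;
	up(&c->reg_mutex);
}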
ac97_t *ac97[NUM_ATI_CODECS];
spinlock_t reg_lock;
- spinlock_t ac97_lock;
atiixp_dma_t dmas[NUM_ATI_DMAS];
struct ac97_pcm *pcms[NUM_ATI_PCMS];
static unsigned short snd_atiixp_ac97_read(ac97_t *ac97, unsigned short reg)
{
atiixp_t *chip = ac97->private_data;
- unsigned short data;
- spin_lock(&chip->ac97_lock);
- data = snd_atiixp_codec_read(chip, ac97->num, reg);
- spin_unlock(&chip->ac97_lock);
- return data;
+ return snd_atiixp_codec_read(chip, ac97->num, reg);
}
static void snd_atiixp_ac97_write(ac97_t *ac97, unsigned short reg, unsigned short val)
{
atiixp_t *chip = ac97->private_data;
- spin_lock(&chip->ac97_lock);
snd_atiixp_codec_write(chip, ac97->num, reg, val);
- spin_unlock(&chip->ac97_lock);
}
/*
pci_set_power_state(chip->pci, 3);
pci_disable_device(chip->pci);
- snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
return 0;
}
if (chip->ac97[i])
snd_ac97_resume(chip->ac97[i]);
- snd_power_change_state(card, SNDRV_CTL_POWER_D0);
return 0;
}
#endif /* CONFIG_PM */
}
spin_lock_init(&chip->reg_lock);
- spin_lock_init(&chip->ac97_lock);
init_MUTEX(&chip->open_mutex);
chip->card = card;
chip->pci = pci;
memset(&ac97, 0, sizeof(ac97));
// Initialize AC97 codec stuff.
ac97.private_data = vortex;
- return snd_ac97_mixer(pbus, &ac97, &vortex->codec);
+ err = snd_ac97_mixer(pbus, &ac97, &vortex->codec);
+ vortex->isquad = ((vortex->codec == NULL) ? 0 : (vortex->codec->ext_id&0x80));
+ return err;
}
--- /dev/null
+/*
+ * Copyright (c) 2004 James Courtier-Dutton <James@superbug.demon.co.uk>
+ * Driver for CA0106 chips, e.g. Sound Blaster Audigy LS and Live 24bit
+ * Version: 0.0.20
+ *
+ * FEATURES currently supported:
+ * See ca0106_main.c for features.
+ *
+ * Changelog:
+ * Support interrupts per period.
+ * Removed noise from Center/LFE channel when in Analog mode.
+ * Rename and remove mixer controls.
+ * 0.0.6
+ * Use separate card based DMA buffer for periods table list.
+ * 0.0.7
+ * Change remove and rename ctrls into lists.
+ * 0.0.8
+ * Try to fix capture sources.
+ * 0.0.9
+ * Fix AC3 output.
+ * Enable S32_LE format support.
+ * 0.0.10
+ * Enable playback 48000 and 96000 rates. (Rates other than these do not work, even with "plug:front".)
+ * 0.0.11
+ * Add Model name recognition.
+ * 0.0.12
+ * Correct interrupt timing: interrupt at end of period instead of in the middle of a playback period.
+ * Remove redundant "voice" handling.
+ * 0.0.13
+ * Single trigger call for multi channels.
+ * 0.0.14
+ * Set limits based on what the sound card hardware can do.
+ * playback periods_min=2, periods_max=8
+ * capture hw constraints require period_size = n * 64 bytes.
+ * playback hw constraints require period_size = n * 64 bytes.
+ * 0.0.15
+ * Separated ca0106.c into separate functional .c files.
+ * 0.0.16
+ * Implement 192000 sample rate.
+ * 0.0.17
+ * Add support for SB0410 and SB0413.
+ * 0.0.18
+ * Modified Copyright message.
+ * 0.0.19
+ * Added I2C and SPI registers. Filled in interrupt enable.
+ * 0.0.20
+ * Added GPIO info for SB Live 24bit.
+ *
+ *
+ * This code was initially based on code from ALSA's emu10k1x.c which is:
+ * Copyright (c) by Francisco Moraes <fmoraes@nc.rr.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/************************************************************************************************/
+/* PCI function 0 registers, address = <val> + PCIBASE0 */
+/************************************************************************************************/
+
+#define PTR 0x00 /* Indexed register set pointer register */
+ /* NOTE: The CHANNELNUM and ADDRESS words can */
+ /* be modified independently of each other. */
+ /* CNL[1:0], ADDR[27:16] */
+
+#define DATA 0x04 /* Indexed register set data register */
+ /* DATA[31:0] */
+
+#define IPR 0x08 /* Global interrupt pending register */
+ /* Clear pending interrupts by writing a 1 to */
+ /* the relevant bits and zero to the other bits */
+#define IPR_MIDI_RX_B 0x00020000 /* MIDI UART-B Receive buffer non-empty */
+#define IPR_MIDI_TX_B 0x00010000 /* MIDI UART-B Transmit buffer empty */
+#define IPR_SPDIF_IN_USER 0x00004000 /* SPDIF input user data has 16 more bits */
+#define IPR_SPDIF_OUT_USER 0x00002000 /* SPDIF output user data needs 16 more bits */
+#define IPR_SPDIF_OUT_FRAME 0x00001000 /* SPDIF frame about to start */
+#define IPR_SPI 0x00000800 /* SPI transaction completed */
+#define IPR_I2C_EEPROM 0x00000400 /* I2C EEPROM transaction completed */
+#define IPR_I2C_DAC 0x00000200 /* I2C DAC transaction completed */
+#define IPR_AI 0x00000100 /* Audio pending register changed. See PTR reg 0x76 */
+#define IPR_GPI 0x00000080 /* General Purpose input changed */
+#define IPR_SRC_LOCKED 0x00000040 /* SRC lock status changed */
+#define IPR_SPDIF_STATUS 0x00000020 /* SPDIF status changed */
+#define IPR_TIMER2 0x00000010 /* 192000Hz Timer */
+#define IPR_TIMER1 0x00000008 /* 44100Hz Timer */
+#define IPR_MIDI_RX_A 0x00000004 /* MIDI UART-A Receive buffer non-empty */
+#define IPR_MIDI_TX_A 0x00000002 /* MIDI UART-A Transmit buffer empty */
+#define IPR_PCI 0x00000001 /* PCI Bus error */
+
+#define INTE 0x0c /* Interrupt enable register */
+
+#define INTE_MIDI_RX_B 0x00020000 /* MIDI UART-B Receive buffer non-empty */
+#define INTE_MIDI_TX_B 0x00010000 /* MIDI UART-B Transmit buffer empty */
+#define INTE_SPDIF_IN_USER 0x00004000 /* SPDIF input user data has 16 more bits */
+#define INTE_SPDIF_OUT_USER 0x00002000 /* SPDIF output user data needs 16 more bits */
+#define INTE_SPDIF_OUT_FRAME 0x00001000 /* SPDIF frame about to start */
+#define INTE_SPI 0x00000800 /* SPI transaction completed */
+#define INTE_I2C_EEPROM 0x00000400 /* I2C EEPROM transaction completed */
+#define INTE_I2C_DAC 0x00000200 /* I2C DAC transaction completed */
+#define INTE_AI 0x00000100 /* Audio pending register changed. See PTR reg 0x75 */
+#define INTE_GPI 0x00000080 /* General Purpose input changed */
+#define INTE_SRC_LOCKED 0x00000040 /* SRC lock status changed */
+#define INTE_SPDIF_STATUS 0x00000020 /* SPDIF status changed */
+#define INTE_TIMER2 0x00000010 /* 192000Hz Timer */
+#define INTE_TIMER1 0x00000008 /* 44100Hz Timer */
+#define INTE_MIDI_RX_A 0x00000004 /* MIDI UART-A Receive buffer non-empty */
+#define INTE_MIDI_TX_A 0x00000002 /* MIDI UART-A Transmit buffer empty */
+#define INTE_PCI 0x00000001 /* PCI Bus error */
+
+#define UNKNOWN10 0x10 /* Unknown ??. Defaults to 0 */
+#define HCFG 0x14 /* Hardware config register */
+ /* 0x1000 causes AC3 to fail. It adds a dither bit. */
+
+#define HCFG_STAC 0x10000000 /* Special mode for STAC9460 Codec. */
+#define HCFG_CAPTURE_I2S_BYPASS 0x08000000 /* 1 = bypass I2S input async SRC. */
+#define HCFG_CAPTURE_SPDIF_BYPASS 0x04000000 /* 1 = bypass SPDIF input async SRC. */
+#define HCFG_PLAYBACK_I2S_BYPASS 0x02000000 /* 0 = I2S IN mixer output, 1 = I2S IN1. */
+#define HCFG_FORCE_LOCK 0x01000000 /* For test only. Force input SRC tracker to lock. */
+#define HCFG_PLAYBACK_ATTENUATION 0x00006000 /* Playback attenuation mask. 0 = 0dB, 1 = 6dB, 2 = 12dB, 3 = Mute. */
+#define HCFG_PLAYBACK_DITHER 0x00001000 /* 1 = Add dither bit to all playback channels. */
+#define HCFG_PLAYBACK_S32_LE 0x00000800 /* 1 = S32_LE, 0 = S16_LE */
+#define HCFG_CAPTURE_S32_LE 0x00000400 /* 1 = S32_LE, 0 = S16_LE (S32_LE currently not working) */
+#define HCFG_8_CHANNEL_PLAY 0x00000200 /* 1 = 8 channels, 0 = 2 channels per substream.*/
+#define HCFG_8_CHANNEL_CAPTURE 0x00000100 /* 1 = 8 channels, 0 = 2 channels per substream.*/
+#define HCFG_MONO 0x00000080 /* 1 = I2S Input mono */
+#define HCFG_I2S_OUTPUT 0x00000010 /* 1 = I2S Output disabled */
+#define HCFG_AC97 0x00000008 /* 0 = AC97 1.0, 1 = AC97 2.0 */
+#define HCFG_LOCK_PLAYBACK_CACHE 0x00000004 /* 1 = Cancel busmaster accesses to soundcache */
+ /* NOTE: This should generally never be used. */
+#define HCFG_LOCK_CAPTURE_CACHE 0x00000002 /* 1 = Cancel busmaster accesses to soundcache */
+ /* NOTE: This should generally never be used. */
+#define HCFG_AUDIOENABLE 0x00000001 /* 0 = CODECs transmit zero-valued samples */
+ /* Should be set to 1 when the EMU10K1 is */
+ /* completely initialized. */
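HCFG_PLAYBACK_ATTENUATION above is a two-bit field (mask 0x00006000, i.e. bits [14:13]) selecting 0 dB, 6 dB, 12 dB or mute. An editorial sketch of placing a level into that field; the helper name is illustrative and relies only on the define above:

static inline u32 ca0106_hcfg_set_attenuation(u32 hcfg, unsigned int level)
{
	hcfg &= ~HCFG_PLAYBACK_ATTENUATION;	/* clear bits [14:13] */
	hcfg |= (level & 0x3) << 13;		/* 0=0dB, 1=6dB, 2=12dB, 3=mute */
	return hcfg;
}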
+#define GPIO 0x18 /* Defaults: 005f03a3-Analog, 005f02a2-SPDIF. */
+ /* Here pins 0,1,2,3,4,6 are output; 5,7 are input */
+ /* For the Audigy LS, pin 0 (or bit 8) controls the SPDIF/Analog jack. */
+ /* SB Live 24bit:
+ * bit 8 0 = SPDIF in and out / 1 = Analog (Mic or Line)-in.
+ * bit 9 0 = Mute / 1 = Analog out.
+ * bit 10 0 = Line-in / 1 = Mic-in.
+ * bit 11 0 = ? / 1 = ?
+ * bit 12 0 = ? / 1 = ?
+ * bit 13 0 = ? / 1 = ?
+ * bit 14 0 = Mute / 1 = Analog out
+ * bit 15 0 = ? / 1 = ?
+ * Both bit 9 and bit 14 have to be set for analog sound to work on the SB Live 24bit.
+ */
+ /* 8 general purpose programmable In/Out pins.
+ * GPI [8:0] Read only. Default 0.
+ * GPO [15:8] Default 0x9. (Default to SPDIF jack enabled for SPDIF)
+ * GPO Enable [23:16] Default 0x0f. Setting a bit to 1 causes the pin to be an output pin.
+ */
+#define AC97DATA 0x1c /* AC97 register set data register (16 bit) */
+
+#define AC97ADDRESS 0x1e /* AC97 register set address register (8 bit) */
+
+/********************************************************************************************************/
+/* CA0106 pointer-offset register set, accessed through the PTR and DATA registers */
+/********************************************************************************************************/
+
+/* Initially all registers from 0x00 to 0x3f have zero contents. */
+#define PLAYBACK_LIST_ADDR 0x00 /* Base DMA address of a list of pointers to each period/size */
+ /* One list entry: 4 bytes for DMA address,
+ * 4 bytes for period_size << 16.
+ * One list entry is 8 bytes long.
+ * One list entry for each period in the buffer.
+ */
+ /* ADDR[31:0], Default: 0x0 */
+#define PLAYBACK_LIST_SIZE 0x01 /* Size of list in bytes << 16. E.g. 8 periods -> 0x00380000 */
+ /* SIZE[21:16], Default: 0x8 */
+#define PLAYBACK_LIST_PTR 0x02 /* Pointer to the current period being played */
+ /* PTR[5:0], Default: 0x0 */
+#define PLAYBACK_UNKNOWN3 0x03 /* Not used ?? */
+#define PLAYBACK_DMA_ADDR 0x04 /* Playback DMA address */
+ /* DMA[31:0], Default: 0x0 */
+#define PLAYBACK_PERIOD_SIZE 0x05 /* Playback period size. win2000 uses 0x04000000 */
+ /* SIZE[31:16], Default: 0x0 */
+#define PLAYBACK_POINTER 0x06 /* Playback period pointer. Used with PLAYBACK_LIST_PTR to determine buffer position currently in DAC */
+ /* POINTER[15:0], Default: 0x0 */
+#define PLAYBACK_PERIOD_END_ADDR 0x07 /* Playback fifo end address */
+ /* END_ADDR[15:0], FLAG[16] 0 = don't stop, 1 = stop */
+#define PLAYBACK_FIFO_OFFSET_ADDRESS 0x08 /* Current fifo offset address [21:16] */
+ /* Cache size valid [5:0] */
+#define PLAYBACK_UNKNOWN9 0x09 /* 0x9 to 0xf Unused */
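As described at PLAYBACK_LIST_ADDR above, each entry in the period list is 8 bytes: 4 bytes of DMA address followed by 4 bytes holding period_size << 16, one entry per period in the buffer. An editorial sketch of that layout (struct and helper names are illustrative; byte-order handling is omitted):

struct ca0106_period_entry {
	u32 addr;			/* DMA address of this period */
	u32 size;			/* period size in bytes << 16 */
};

static void ca0106_fill_period_entry(struct ca0106_period_entry *e,
				     u32 dma_addr, u32 period_bytes)
{
	e->addr = dma_addr;
	e->size = period_bytes << 16;
}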
+#define CAPTURE_DMA_ADDR 0x10 /* Capture DMA address */
+ /* DMA[31:0], Default: 0x0 */
+#define CAPTURE_BUFFER_SIZE 0x11 /* Capture buffer size */
+ /* SIZE[31:16], Default: 0x0 */
+#define CAPTURE_POINTER 0x12 /* Capture buffer pointer. Sample currently in ADC */
+ /* POINTER[15:0], Default: 0x0 */
+#define CAPTURE_FIFO_OFFSET_ADDRESS 0x13 /* Current fifo offset address [21:16] */
+ /* Cache size valid [5:0] */
+#define PLAYBACK_LAST_SAMPLE 0x20 /* The sample currently being played */
+/* 0x21 - 0x3f unused */
+#define BASIC_INTERRUPT 0x40 /* Used by both playback and capture interrupt handler */
+ /* Playback (0x1<<channel_id) */
+ /* Capture (0x100<<channel_id) */
+ /* Playback sample rate 96000 = 0x20000 */
+ /* Start Playback [3:0] (one bit per channel)
+ * Start Capture [11:8] (one bit per channel)
+ * Playback rate [23:16] (2 bits per channel) (0=48kHz, 1=44.1kHz, 2=96kHz, 3=192kHz)
+ * Playback mixer in enable [27:24] (one bit per channel)
+ * Playback mixer out enable [31:28] (one bit per channel)
+ */
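An editorial sketch based only on the BASIC_INTERRUPT bit layout described above (not on driver code): starting playback on channel 0 at 96 kHz with that channel's mixer input and output paths enabled.

static inline u32 ca0106_basic_interrupt_ch0_96k(void)
{
	return (0x1 << 0)	/* Start Playback, channel 0       [3:0]   */
	     | (0x2 << 16)	/* rate code 2 = 96 kHz, channel 0 [23:16] */
	     | (0x1 << 24)	/* mixer in enable, channel 0      [27:24] */
	     | (0x1 << 28);	/* mixer out enable, channel 0     [31:28] */
}

The rate term agrees with the "Playback sample rate 96000 = 0x20000" note above.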
+/* The Digital out jack is shared with the Center/LFE Analogue output.
+ * The jack has 4 poles. I will call 1 - Tip, 2 - Next to 1, 3 - Next to 2, 4 - Next to 3
+ * For Analogue: 1 -> Center Speaker, 2 -> Sub Woofer, 3 -> Ground, 4 -> Ground
+ * For Digital: 1 -> Front SPDIF, 2 -> Rear SPDIF, 3 -> Center/Subwoofer SPDIF, 4 -> Ground.
+ * Standard 4 pole Video A/V cable with RCA outputs: 1 -> White, 2 -> Yellow, 3 -> Shield on all three, 4 -> Red.
+ * So, from this you can see that you cannot use a Standard 4 pole Video A/V cable with the SB Audigy LS card.
+ */
+/* The Front SPDIF PCM gets mixed with samples from the AC97 codec, so can only work for Stereo PCM and not AC3/DTS
+ * The Rear SPDIF can be used for Stereo PCM and also AC3/DTS
+ * The Center/LFE SPDIF cannot be used for AC3/DTS, but can be used for Stereo PCM.
+ * Summary: For ALSA we use the Rear channel for SPDIF Digital AC3/DTS output
+ */
+/* A standard 2 pole mono mini-jack to RCA plug can be used for SPDIF Stereo PCM output from the Front channel.
+ * A standard 3 pole stereo mini-jack to 2 RCA plugs can be used for SPDIF AC3/DTS and Stereo PCM output utilising the Rear channel and just one of the RCA plugs.
+ */
+#define SPCS0 0x41 /* SPDIF output Channel Status 0 register. For Rear. default=0x02108004, non-audio=0x02108006 */
+#define SPCS1 0x42 /* SPDIF output Channel Status 1 register. For Front */
+#define SPCS2 0x43 /* SPDIF output Channel Status 2 register. For Center/LFE */
+#define SPCS3 0x44 /* SPDIF output Channel Status 3 register. Unknown */
+ /* When Channel set to 0: */
+#define SPCS_CLKACCYMASK 0x30000000 /* Clock accuracy */
+#define SPCS_CLKACCY_1000PPM 0x00000000 /* 1000 parts per million */
+#define SPCS_CLKACCY_50PPM 0x10000000 /* 50 parts per million */
+#define SPCS_CLKACCY_VARIABLE 0x20000000 /* Variable accuracy */
+#define SPCS_SAMPLERATEMASK 0x0f000000 /* Sample rate */
+#define SPCS_SAMPLERATE_44 0x00000000 /* 44.1kHz sample rate */
+#define SPCS_SAMPLERATE_48 0x02000000 /* 48kHz sample rate */
+#define SPCS_SAMPLERATE_32 0x03000000 /* 32kHz sample rate */
+#define SPCS_CHANNELNUMMASK 0x00f00000 /* Channel number */
+#define SPCS_CHANNELNUM_UNSPEC 0x00000000 /* Unspecified channel number */
+#define SPCS_CHANNELNUM_LEFT 0x00100000 /* Left channel */
+#define SPCS_CHANNELNUM_RIGHT 0x00200000 /* Right channel */
+#define SPCS_SOURCENUMMASK 0x000f0000 /* Source number */
+#define SPCS_SOURCENUM_UNSPEC 0x00000000 /* Unspecified source number */
+#define SPCS_GENERATIONSTATUS 0x00008000 /* Originality flag (see IEC-958 spec) */
+#define SPCS_CATEGORYCODEMASK 0x00007f00 /* Category code (see IEC-958 spec) */
+#define SPCS_MODEMASK 0x000000c0 /* Mode (see IEC-958 spec) */
+#define SPCS_EMPHASISMASK 0x00000038 /* Emphasis */
+#define SPCS_EMPHASIS_NONE 0x00000000 /* No emphasis */
+#define SPCS_EMPHASIS_50_15 0x00000008 /* 50/15 usec 2 channel */
+#define SPCS_COPYRIGHT 0x00000004 /* Copyright asserted flag -- do not modify */
+#define SPCS_NOTAUDIODATA 0x00000002 /* 0 = Digital audio, 1 = not audio */
+#define SPCS_PROFESSIONAL 0x00000001 /* 0 = Consumer (IEC-958), 1 = pro (AES3-1992) */
+
+ /* When Channel set to 1: */
+#define SPCS_WORD_LENGTH_MASK 0x0000000f /* Word Length Mask */
+#define SPCS_WORD_LENGTH_16 0x00000008 /* Word Length 16 bit */
+#define SPCS_WORD_LENGTH_17 0x00000006 /* Word Length 17 bit */
+#define SPCS_WORD_LENGTH_18 0x00000004 /* Word Length 18 bit */
+#define SPCS_WORD_LENGTH_19 0x00000002 /* Word Length 19 bit */
+#define SPCS_WORD_LENGTH_20A 0x0000000a /* Word Length 20 bit */
+#define SPCS_WORD_LENGTH_20 0x00000009 /* Word Length 20 bit (both 0xa and 0x9 are 20 bit) */
+#define SPCS_WORD_LENGTH_21 0x00000007 /* Word Length 21 bit */
+#define SPCS_WORD_LENGTH_22 0x00000005 /* Word Length 22 bit */
+#define SPCS_WORD_LENGTH_23 0x00000003 /* Word Length 23 bit */
+#define SPCS_WORD_LENGTH_24 0x0000000b /* Word Length 24 bit */
+#define SPCS_ORIGINAL_SAMPLE_RATE_MASK 0x000000f0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_NONE 0x00000000 /* Original Sample rate not indicated */
+#define SPCS_ORIGINAL_SAMPLE_RATE_16000 0x00000010 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_RES1 0x00000020 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_32000 0x00000030 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_12000 0x00000040 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_11025 0x00000050 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_8000 0x00000060 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_RES2 0x00000070 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_192000 0x00000080 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_24000 0x00000090 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_96000 0x000000a0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_48000 0x000000b0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_176400 0x000000c0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_22050 0x000000d0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_88200 0x000000e0 /* Original Sample rate */
+#define SPCS_ORIGINAL_SAMPLE_RATE_44100 0x000000f0 /* Original Sample rate */
+
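As a cross-check of the channel-status bits above, the default SPCS0 value 0x02108004 quoted at the SPCS0 definition decomposes exactly into those fields. Editorial sketch only:

static inline u32 ca0106_spcs_default(void)
{
	return SPCS_CLKACCY_1000PPM	/* 0x00000000 */
	     | SPCS_SAMPLERATE_48	/* 0x02000000 */
	     | SPCS_CHANNELNUM_LEFT	/* 0x00100000 */
	     | SPCS_SOURCENUM_UNSPEC	/* 0x00000000 */
	     | SPCS_GENERATIONSTATUS	/* 0x00008000 */
	     | SPCS_COPYRIGHT;		/* 0x00000004 -> 0x02108004 total */
}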
+#define SPDIF_SELECT1 0x45 /* Enables SPDIF or Analogue outputs 0-SPDIF, 0xf00-Analogue */
+ /* 0x100 - Front, 0x800 - Rear, 0x200 - Center/LFE.
+ * But as the jack is shared, use 0xf00.
+ * The Windows2000 driver uses 0x0000000f for both digital and analog.
+ * 0xf00 introduces interesting noises onto the Center/LFE.
+ * If you turn the volume up, you hear computer noise,
+ * e.g. mouse moving, changing between app windows etc.
+ * So, I am going to set this to 0x0000000f all the time now,
+ * same as the windows driver does.
+ * Use register SPDIF_SELECT2(0x72) to switch between SPDIF and Analog.
+ */
+ /* When Channel = 0:
+ * Wide SPDIF format [3:0] (one bit for each channel) (0=20bit, 1=24bit)
+ * Tristate SPDIF Output [11:8] (one bit for each channel) (0=Not tristate, 1=Tristate)
+ * SPDIF Bypass enable [19:16] (one bit for each channel) (0=Not bypass, 1=Bypass)
+ */
+ /* When Channel = 1:
+ * SPDIF 0 User data [7:0]
+ * SPDIF 1 User data [15:8]
+ * SPDIF 0 User data [23:16]
+ * SPDIF 0 User data [31:24]
+ * User data can be sent by using the SPDIF output frame pending and SPDIF output user bit interrupts.
+ */
+#define WATERMARK 0x46 /* Test bit to indicate cache usage level */
+#define SPDIF_INPUT_STATUS 0x49 /* SPDIF Input status register. Bits the same as SPCS.
+ * When Channel = 0: Bits the same as SPCS channel 0.
+ * When Channel = 1: Bits the same as SPCS channel 1.
+ * When Channel = 2:
+ * SPDIF Input User data [16:0]
+ * SPDIF Input Frame count [21:16]
+ */
+#define CAPTURE_CACHE_DATA 0x50 /* 0x50-0x5f Recorded samples. */
+#define CAPTURE_SOURCE 0x60 /* Capture Source 0 = MIC */
+#define CAPTURE_SOURCE_CHANNEL0 0xf0000000 /* Mask for selecting the Capture sources */
+#define CAPTURE_SOURCE_CHANNEL1 0x0f000000 /* 0 - SPDIF mixer output. */
+#define CAPTURE_SOURCE_CHANNEL2 0x00f00000 /* 1 - What you hear. 2 - ?? */
+#define CAPTURE_SOURCE_CHANNEL3 0x000f0000 /* 3 - Mic in, Line in, TAD in, Aux in. */
+#define CAPTURE_SOURCE_RECORD_MAP 0x0000ffff /* Default 0x00e4 */
+ /* Record Map [7:0] (2 bits per channel) 0=mapped to channel 0, 1=mapped to channel 1, 2=mapped to channel2, 3=mapped to channel3
+ * Record source select for channel 0 [18:16]
+ * Record source select for channel 1 [22:20]
+ * Record source select for channel 2 [26:24]
+ * Record source select for channel 3 [30:28]
+ * 0 - SPDIF mixer output.
+ * 1 - i2s mixer output.
+ * 2 - SPDIF input.
+ * 3 - i2s input.
+ * 4 - AC97 capture.
+ * 5 - SRC output.
+ */
+#define CAPTURE_VOLUME1 0x61 /* Capture volume per channel 0-3 */
+#define CAPTURE_VOLUME2 0x62 /* Capture volume per channel 4-7 */
+
+#define PLAYBACK_ROUTING1 0x63 /* Playback routing of channels 0-7. Affects AC3 output. Default 0x32765410 */
+#define ROUTING1_REAR 0x77000000 /* Channel_id 0 sends to 10, Channel_id 1 sends to 32 */
+#define ROUTING1_NULL 0x00770000 /* Channel_id 2 sends to 54, Channel_id 3 sends to 76 */
+#define ROUTING1_CENTER_LFE 0x00007700 /* 0x32765410 means, send Channel_id 0 to FRONT, Channel_id 1 to REAR */
+#define ROUTING1_FRONT 0x00000077 /* Channel_id 2 to CENTER_LFE, Channel_id 3 to NULL. */
+ /* Channel_id's handle stereo channels. Channel X is a single mono channel */
+ /* Host is input from the PCI bus. */
+ /* Host channel 0 [2:0] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 1 [6:4] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 2 [10:8] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 3 [14:12] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 4 [18:16] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 5 [22:20] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 6 [26:24] -> SPDIF Mixer/Router channel 0-7.
+ * Host channel 7 [30:28] -> SPDIF Mixer/Router channel 0-7.
+ */
+
+#define PLAYBACK_ROUTING2 0x64 /* Playback Routing. Feeds Capture channels back into Playback. Affects AC3 output. Default 0x76767676 */
+ /* SRC is input from the capture inputs. */
+ /* SRC channel 0 [2:0] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 1 [6:4] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 2 [10:8] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 3 [14:12] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 4 [18:16] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 5 [22:20] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 6 [26:24] -> SPDIF Mixer/Router channel 0-7.
+ * SRC channel 7 [30:28] -> SPDIF Mixer/Router channel 0-7.
+ */
+
+#define PLAYBACK_MUTE 0x65 /* Unknown. While playing 0x0, while silent 0x00fc0000 */
+ /* SPDIF Mixer input control:
+ * Invert SRC to SPDIF Mixer [7-0] (One bit per channel)
+ * Invert Host to SPDIF Mixer [15:8] (One bit per channel)
+ * SRC to SPDIF Mixer disable [23:16] (One bit per channel)
+ * Host to SPDIF Mixer disable [31:24] (One bit per channel)
+ */
+#define PLAYBACK_VOLUME1 0x66 /* Playback SPDIF volume per channel. Set to the same value as PLAYBACK_VOLUME2 (0x6a) */
+ /* PLAYBACK_VOLUME1 must be set to 30303030 for SPDIF AC3 Playback */
+ /* SPDIF mixer input volume. 0=12dB, 0x30=0dB, 0xFE=-51.5dB, 0xff=Mute */
+ /* One register for each of the 4 stereo streams. */
+ /* SRC Right volume [7:0]
+ * SRC Left volume [15:8]
+ * Host Right volume [23:16]
+ * Host Left volume [31:24]
+ */
+#define CAPTURE_ROUTING1 0x67 /* Capture Routing. Default 0x32765410 */
+ /* Similar to register 0x63, except that the destination is the I2S mixer instead of the SPDIF mixer. I.E. Outputs to the Analog outputs instead of SPDIF. */
+#define CAPTURE_ROUTING2 0x68 /* Unknown Routing. Default 0x76767676 */
+ /* Similar to register 0x64, except that the destination is the I2S mixer instead of the SPDIF mixer. I.E. Outputs to the Analog outputs instead of SPDIF. */
+#define CAPTURE_MUTE 0x69 /* Unknown. While capturing 0x0, while silent 0x00fc0000 */
+ /* Similar to register 0x65, except that the destination is the I2S mixer instead of the SPDIF mixer. I.E. Outputs to the Analog outputs instead of SPDIF. */
+#define PLAYBACK_VOLUME2 0x6a /* Playback Analog volume per channel. Does not affect AC3 output */
+ /* Similar to register 0x66, except that the destination is the I2S mixer instead of the SPDIF mixer. I.E. Outputs to the Analog outputs instead of SPDIF. */
+#define UNKNOWN6b 0x6b /* Unknown. Readonly. Default 00400000 00400000 00400000 00400000 */
+#define UART_A_DATA 0x6c /* Uart, used in setting sample rates, bits per sample etc. */
+#define UART_A_CMD 0x6d /* Uart, used in setting sample rates, bits per sample etc. */
+#define UART_B_DATA 0x6e /* Uart, Unknown. */
+#define UART_B_CMD 0x6f /* Uart, Unknown. */
+#define SAMPLE_RATE_TRACKER_STATUS 0x70 /* Readonly. Default 00108000 00108000 00500000 00500000 */
+ /* Estimated sample rate [19:0] Relative to 48kHz. 0x8000 = 1.0
+ * Rate Locked [20]
+ * SPDIF Locked [21] For SPDIF channel only.
+ * Valid Audio [22] For SPDIF channel only.
+ */
+#define CAPTURE_CONTROL 0x71 /* Some sort of routing. default = 40c81000 30303030 30300000 00700000 */
+ /* Channel_id 0: 0x40c81000 must be changed to 0x40c80000 for SPDIF AC3 input or output. */
+ /* Channel_id 1: 0xffffffff(mute) 0x30303030(max) controls CAPTURE feedback into PLAYBACK. */
+					/* Sample rate output control register Channel=0
+					 * Sample output rate [1:0] (0=48kHz, 1=44.1kHz, 2=96kHz, 3=192kHz)
+					 * Sample input rate [3:2] (0=48kHz, 1=Not available, 2=96kHz, 3=192kHz)
+					 * SRC input source select [4] 0=Audio from digital mixer, 1=Audio from analog source.
+					 * Record rate [9:8] (0=48kHz, 1=Not available, 2=96kHz, 3=192kHz)
+					 * Record mixer output enable [12:10]
+					 * I2S input rate master mode [15:14] (0=48kHz, 1=44.1kHz, 2=96kHz, 3=192kHz)
+					 * I2S output rate [17:16] (0=48kHz, 1=44.1kHz, 2=96kHz, 3=192kHz)
+					 * I2S output source select [18] (0=Audio from host, 1=Audio from SRC)
+					 * Record mixer I2S enable [20:19] (enable/disable i2sin1 and i2sin0)
+					 * I2S output master clock select [21] (0=256*I2S output rate, 1=512*I2S output rate.)
+					 * I2S input master clock select [22] (0=256*I2S input rate, 1=512*I2S input rate.)
+					 * I2S input mode [23] (0=Slave, 1=Master)
+					 * SPDIF output rate [25:24] (0=48kHz, 1=44.1kHz, 2=96kHz, 3=192kHz)
+ * SPDIF output source select [26] (0=host, 1=SRC)
+ * Not used [27]
+ * Record Source 0 input [29:28] (0=SPDIF in, 1=I2S in, 2=AC97 Mic, 3=AC97 PCM)
+ * Record Source 1 input [31:30] (0=SPDIF in, 1=I2S in, 2=AC97 Mic, 3=AC97 PCM)
+ */
+ /* Sample rate output control register Channel=1
+ * I2S Input 0 volume Right [7:0]
+ * I2S Input 0 volume Left [15:8]
+ * I2S Input 1 volume Right [23:16]
+ * I2S Input 1 volume Left [31:24]
+ */
+ /* Sample rate output control register Channel=2
+ * SPDIF Input volume Right [23:16]
+ * SPDIF Input volume Left [31:24]
+ */
+ /* Sample rate output control register Channel=3
+					 * Not used
+ */
+#define SPDIF_SELECT2 0x72 /* Some sort of routing. Channel_id 0 only. default = 0x0f0f003f. Analog 0x000b0000, Digital 0x0b000000 */
+#define ROUTING2_FRONT_MASK 0x00010000 /* Enable for Front speakers. */
+#define ROUTING2_CENTER_LFE_MASK 0x00020000 /* Enable for Center/LFE speakers. */
+#define ROUTING2_REAR_MASK 0x00080000 /* Enable for Rear speakers. */
+ /* Audio output control
+ * AC97 output enable [5:0]
+ * I2S output enable [19:16]
+ * SPDIF output enable [27:24]
+ */
+#define UNKNOWN73 0x73 /* Unknown. Readonly. Default 0x0 */
+#define CHIP_VERSION 0x74 /* P17 Chip version. Channel_id 0 only. Default 00000071 */
+#define EXTENDED_INT_MASK 0x75 /* Used by both playback and capture interrupt handler */
+ /* Sets which Interrupts are enabled. */
+ /* 0x00000001 = Half period. Playback.
+ * 0x00000010 = Full period. Playback.
+ * 0x00000100 = Half buffer. Playback.
+ * 0x00001000 = Full buffer. Playback.
+ * 0x00010000 = Half buffer. Capture.
+ * 0x00100000 = Full buffer. Capture.
+ * Capture can only do 2 periods.
+ * 0x01000000 = End audio. Playback.
+					 * 0x40000000 = Half buffer Playback,Capture xrun.
+					 * 0x80000000 = Full buffer Playback,Capture xrun.
+ */
+#define EXTENDED_INT 0x76 /* Used by both playback and capture interrupt handler */
+ /* Shows which interrupts are active at the moment. */
+ /* Same bit layout as EXTENDED_INT_MASK */
+#define COUNTER77 0x77 /* Counter range 0 to 0x3fffff, 192000 counts per second. */
+#define COUNTER78 0x78 /* Counter range 0 to 0x3fffff, 44100 counts per second. */
+#define EXTENDED_INT_TIMER 0x79 /* Channel_id 0 only. Used by both playback and capture interrupt handler */
+ /* Causes interrupts based on timer intervals. */
+#define SPI 0x7a /* SPI: Serial Interface Register */
+#define I2C_A 0x7b /* I2C Address. 32 bit */
+#define I2C_0 0x7c /* I2C Data Port 0. 32 bit */
+#define I2C_1 0x7d /* I2C Data Port 1. 32 bit */
+
+
+#define SET_CHANNEL 0 /* Testing channel outputs 0=Front, 1=Center/LFE, 2=Unknown, 3=Rear */
+#define PCM_FRONT_CHANNEL 0
+#define PCM_REAR_CHANNEL 1
+#define PCM_CENTER_LFE_CHANNEL 2
+#define PCM_UNKNOWN_CHANNEL 3
+#define CONTROL_FRONT_CHANNEL 0
+#define CONTROL_REAR_CHANNEL 3
+#define CONTROL_CENTER_LFE_CHANNEL 1
+#define CONTROL_UNKNOWN_CHANNEL 2
+
+typedef struct snd_ca0106_channel ca0106_channel_t;
+typedef struct snd_ca0106 ca0106_t;
+typedef struct snd_ca0106_pcm ca0106_pcm_t;
+
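+/* Per-hardware-channel bookkeeping: the owning chip, the channel number,
+ * an in-use flag, a (currently unused) per-channel interrupt hook and the
+ * PCM substream data currently bound to the channel.
+ */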
+struct snd_ca0106_channel {
+ ca0106_t *emu;
+ int number;
+ int use;
+ void (*interrupt)(ca0106_t *emu, ca0106_channel_t *channel);
+ ca0106_pcm_t *epcm;
+};
+
+struct snd_ca0106_pcm {
+ ca0106_t *emu;
+ snd_pcm_substream_t *substream;
+ int channel_id;
+ unsigned short running;
+};
+
+// definition of the chip-specific record
+struct snd_ca0106 {
+ snd_card_t *card;
+ struct pci_dev *pci;
+
+ unsigned long port;
+ struct resource *res_port;
+ int irq;
+
+ unsigned int revision; /* chip revision */
+ unsigned int serial; /* serial number */
+ unsigned short model; /* subsystem id */
+
+ spinlock_t emu_lock;
+
+ ac97_t *ac97;
+ snd_pcm_t *pcm;
+
+ ca0106_channel_t playback_channels[4];
+ ca0106_channel_t capture_channels[4];
+ u32 spdif_bits[4]; /* s/pdif out setup */
+ int spdif_enable;
+ int capture_source;
+
+ struct snd_dma_buffer buffer;
+};
+
+int __devinit snd_ca0106_mixer(ca0106_t *emu);
+int __devinit snd_ca0106_proc_init(ca0106_t * emu);
+
+unsigned int snd_ca0106_ptr_read(ca0106_t * emu,
+ unsigned int reg,
+ unsigned int chn);
+
+void snd_ca0106_ptr_write(ca0106_t *emu,
+ unsigned int reg,
+ unsigned int chn,
+ unsigned int data);
+
--- /dev/null
+/*
+ * Copyright (c) 2004 James Courtier-Dutton <James@superbug.demon.co.uk>
+ *  Driver for CA0106 chips. e.g. Sound Blaster Audigy LS and Live 24bit
+ * Version: 0.0.21
+ *
+ * FEATURES currently supported:
+ * Front, Rear and Center/LFE.
+ * Surround40 and Surround51.
+ *    Capture from MIC and LINE IN inputs.
+ * SPDIF digital playback of PCM stereo and AC3/DTS works.
+ *    (One can use a standard mono mini-jack to single RCA plug cable,
+ *     or a standard stereo mini-jack to two RCA plugs cable.
+ *     Plug one of the RCA plugs into the Coax input of the external decoder/receiver.)
+ * ( In theory one could output 3 different AC3 streams at once, to 3 different SPDIF outputs. )
+ * Notes on how to capture sound:
+ * The AC97 is used in the PLAYBACK direction.
+ * The output from the AC97 chip, instead of reaching the speakers, is fed into the Philips 1361T ADC.
+ * So, to record from the MIC, set the MIC Playback volume to max,
+ * unmute the MIC and turn up the MASTER Playback volume.
+ * So, to prevent feedback when capturing, minimise the "Capture feedback into Playback" volume.
+ *
+ * The only playback controls that currently do anything are: -
+ * Analog Front
+ * Analog Rear
+ * Analog Center/LFE
+ * SPDIF Front
+ * SPDIF Rear
+ * SPDIF Center/LFE
+ *
+ * For capture from Mic in or Line in.
+ * Digital/Analog ( switch must be in Analog mode for CAPTURE. )
+ *
+ * CAPTURE feedback into PLAYBACK
+ *
+ * Changelog:
+ * Support interrupts per period.
+ * Removed noise from Center/LFE channel when in Analog mode.
+ * Rename and remove mixer controls.
+ * 0.0.6
+ * Use separate card based DMA buffer for periods table list.
+ * 0.0.7
+ * Change remove and rename ctrls into lists.
+ * 0.0.8
+ * Try to fix capture sources.
+ * 0.0.9
+ * Fix AC3 output.
+ * Enable S32_LE format support.
+ * 0.0.10
+ *    Enable playback 48000 and 96000 rates. (Rates other than these do not work, even with "plug:front".)
+ * 0.0.11
+ * Add Model name recognition.
+ * 0.0.12
+ *    Correct interrupt timing: interrupt at end of period, instead of in the middle of a playback period.
+ *    Remove redundant "voice" handling.
+ * 0.0.13
+ * Single trigger call for multi channels.
+ * 0.0.14
+ * Set limits based on what the sound card hardware can do.
+ * playback periods_min=2, periods_max=8
+ * capture hw constraints require period_size = n * 64 bytes.
+ * playback hw constraints require period_size = n * 64 bytes.
+ * 0.0.15
+ * Minor updates.
+ * 0.0.16
+ * Implement 192000 sample rate.
+ * 0.0.17
+ * Add support for SB0410 and SB0413.
+ * 0.0.18
+ * Modified Copyright message.
+ * 0.0.19
+ * Finally fix support for SB Live 24 bit. SB0410 and SB0413.
+ * The output codec needs resetting, otherwise all output is muted.
+ * 0.0.20
+ * Merge "pci_disable_device(pci);" fixes.
+ * 0.0.21
+ * Add 4 capture channels. (SPDIF only comes in on channel 0. )
+ * Add SPDIF capture using optional digital I/O module for SB Live 24bit. (Analog capture does not yet work.)
+ *
+ * BUGS:
+ * Some stability problems when unloading the snd-ca0106 kernel module.
+ * --
+ *
+ * TODO:
+ * 4 Capture channels, only one implemented so far.
+ *    Other capture rates apart from 48kHz not implemented.
+ * MIDI
+ * --
+ * GENERAL INFO:
+ * Model: SB0310
+ * P17 Chip: CA0106-DAT
+ * AC97 Codec: STAC 9721
+ * ADC: Philips 1361T (Stereo 24bit)
+ *   DAC: WM8746EDS (6-channel, 24bit, 192kHz)
+ *
+ * GENERAL INFO:
+ * Model: SB0410
+ * P17 Chip: CA0106-DAT
+ * AC97 Codec: None
+ * ADC: WM8775EDS (4 Channel)
+ * DAC: CS4382 (114 dB, 24-Bit, 192 kHz, 8-Channel D/A Converter with DSD Support)
+ * SPDIF Out control switches between Mic in and SPDIF out.
+ * No sound out or mic input working yet.
+ *
+ * GENERAL INFO:
+ * Model: SB0413
+ * P17 Chip: CA0106-DAT
+ * AC97 Codec: None.
+ * ADC: Unknown
+ * DAC: Unknown
+ * Trying to handle it like the SB0410.
+ *
+ *  This code was initially based on code from ALSA's emu10k1x.c which is:
+ * Copyright (c) by Francisco Moraes <fmoraes@nc.rr.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+#include <sound/driver.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/ac97_codec.h>
+#include <sound/info.h>
+
+MODULE_AUTHOR("James Courtier-Dutton <James@superbug.demon.co.uk>");
+MODULE_DESCRIPTION("CA0106");
+MODULE_LICENSE("GPL");
+MODULE_SUPPORTED_DEVICE("{{Creative,SB CA0106 chip}}");
+
+// module parameters (see "Module Parameters")
+static int index[SNDRV_CARDS] = SNDRV_DEFAULT_IDX;
+static char *id[SNDRV_CARDS] = SNDRV_DEFAULT_STR;
+static int enable[SNDRV_CARDS] = SNDRV_DEFAULT_ENABLE_PNP;
+
+module_param_array(index, int, NULL, 0444);
+MODULE_PARM_DESC(index, "Index value for the CA0106 soundcard.");
+module_param_array(id, charp, NULL, 0444);
+MODULE_PARM_DESC(id, "ID string for the CA0106 soundcard.");
+module_param_array(enable, bool, NULL, 0444);
+MODULE_PARM_DESC(enable, "Enable the CA0106 soundcard.");
+
+#include "ca0106.h"
+
+typedef struct {
+ u32 serial;
+ char * name;
+} ca0106_names_t;
+
+static ca0106_names_t ca0106_chip_names[] = {
+ { 0x10021102, "AudigyLS [SB0310]"} ,
+ { 0x10051102, "AudigyLS [SB0310b]"} , /* Unknown AudigyLS that also says SB0310 on it */
+ { 0x10061102, "Live! 7.1 24bit [SB0410]"} , /* New Sound Blaster Live! 7.1 24bit. This does not have an AC97. 53SB041000001 */
+ { 0x10071102, "Live! 7.1 24bit [SB0413]"} , /* New Dell Sound Blaster Live! 7.1 24bit. This does not have an AC97. */
+ { 0, "AudigyLS [Unknown]" }
+};
+
+/* hardware definition */
+static snd_pcm_hardware_t snd_ca0106_playback_hw = {
+ .info = (SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
+ .rates = SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_96000 | SNDRV_PCM_RATE_192000,
+ .rate_min = 48000,
+ .rate_max = 192000,
+ .channels_min = 2, //1,
+ .channels_max = 2, //6,
+ .buffer_bytes_max = (32*1024),
+ .period_bytes_min = 64,
+ .period_bytes_max = (16*1024),
+ .periods_min = 2,
+ .periods_max = 8,
+ .fifo_size = 0,
+};
+
+static snd_pcm_hardware_t snd_ca0106_capture_hw = {
+ .info = (SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .channels_min = 2,
+ .channels_max = 2,
+ .buffer_bytes_max = (32*1024),
+ .period_bytes_min = 64,
+ .period_bytes_max = (16*1024),
+ .periods_min = 2,
+ .periods_max = 2,
+ .fifo_size = 0,
+};
+
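+/* The chip's registers are not memory mapped; they are reached through an
+ * indexed PTR/DATA port pair in PCI I/O space.  The register number goes in
+ * the upper 16 bits of PTR and the channel_id in the lower 16 bits, then the
+ * value is read or written through DATA.  emu_lock keeps the PTR write and
+ * the DATA access paired against concurrent register accesses.
+ */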
+unsigned int snd_ca0106_ptr_read(ca0106_t * emu,
+ unsigned int reg,
+ unsigned int chn)
+{
+ unsigned long flags;
+ unsigned int regptr, val;
+
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + PTR);
+ val = inl(emu->port + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ return val;
+}
+
+void snd_ca0106_ptr_write(ca0106_t *emu,
+ unsigned int reg,
+ unsigned int chn,
+ unsigned int data)
+{
+ unsigned int regptr;
+ unsigned long flags;
+
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + PTR);
+ outl(data, emu->port + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
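+/* Read-modify-write helper that turns on extra bits in the INTE register
+ * without disturbing the interrupt sources that are already enabled.
+ */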
+static void snd_ca0106_intr_enable(ca0106_t *emu, unsigned int intrenb)
+{
+ unsigned long flags;
+ unsigned int enable;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ enable = inl(emu->port + INTE) | intrenb;
+ outl(enable, emu->port + INTE);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static void snd_ca0106_pcm_free_substream(snd_pcm_runtime_t *runtime)
+{
+ ca0106_pcm_t *epcm = runtime->private_data;
+
+ if (epcm) {
+ kfree(epcm);
+ }
+}
+
+/* open_playback callback */
+static int snd_ca0106_pcm_open_playback_channel(snd_pcm_substream_t *substream, int channel_id)
+{
+ ca0106_t *chip = snd_pcm_substream_chip(substream);
+ ca0106_channel_t *channel = &(chip->playback_channels[channel_id]);
+ ca0106_pcm_t *epcm;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ int err;
+
+ epcm = kcalloc(1, sizeof(*epcm), GFP_KERNEL);
+
+ if (epcm == NULL)
+ return -ENOMEM;
+ epcm->emu = chip;
+ epcm->substream = substream;
+ epcm->channel_id=channel_id;
+
+ runtime->private_data = epcm;
+ runtime->private_free = snd_ca0106_pcm_free_substream;
+
+ runtime->hw = snd_ca0106_playback_hw;
+
+ channel->emu = chip;
+ channel->number = channel_id;
+
+ channel->use=1;
+ //printk("open:channel_id=%d, chip=%p, channel=%p\n",channel_id, chip, channel);
+ //channel->interrupt = snd_ca0106_pcm_channel_interrupt;
+ channel->epcm=epcm;
+ if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0)
+ return err;
+ if ((err = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64)) < 0)
+ return err;
+ return 0;
+}
+
+/* close callback */
+static int snd_ca0106_pcm_close_playback(snd_pcm_substream_t *substream)
+{
+ ca0106_t *chip = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ chip->playback_channels[epcm->channel_id].use=0;
+/* FIXME: maybe zero others */
+ return 0;
+}
+
+static int snd_ca0106_pcm_open_playback_front(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_playback_channel(substream, PCM_FRONT_CHANNEL);
+}
+
+static int snd_ca0106_pcm_open_playback_center_lfe(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_playback_channel(substream, PCM_CENTER_LFE_CHANNEL);
+}
+
+static int snd_ca0106_pcm_open_playback_unknown(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_playback_channel(substream, PCM_UNKNOWN_CHANNEL);
+}
+
+static int snd_ca0106_pcm_open_playback_rear(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_playback_channel(substream, PCM_REAR_CHANNEL);
+}
+
+/* open_capture callback */
+static int snd_ca0106_pcm_open_capture_channel(snd_pcm_substream_t *substream, int channel_id)
+{
+ ca0106_t *chip = snd_pcm_substream_chip(substream);
+ ca0106_channel_t *channel = &(chip->capture_channels[channel_id]);
+ ca0106_pcm_t *epcm;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ int err;
+
+ epcm = kcalloc(1, sizeof(*epcm), GFP_KERNEL);
+ if (epcm == NULL) {
+ snd_printk("open_capture_channel: failed epcm alloc\n");
+ return -ENOMEM;
+ }
+ epcm->emu = chip;
+ epcm->substream = substream;
+ epcm->channel_id=channel_id;
+
+ runtime->private_data = epcm;
+ runtime->private_free = snd_ca0106_pcm_free_substream;
+
+ runtime->hw = snd_ca0106_capture_hw;
+
+ channel->emu = chip;
+ channel->number = channel_id;
+
+ channel->use=1;
+ //printk("open:channel_id=%d, chip=%p, channel=%p\n",channel_id, chip, channel);
+ //channel->interrupt = snd_ca0106_pcm_channel_interrupt;
+ channel->epcm=epcm;
+ if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0)
+ return err;
+ //snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, &hw_constraints_capture_period_sizes);
+ if ((err = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64)) < 0)
+ return err;
+ return 0;
+}
+
+/* close callback */
+static int snd_ca0106_pcm_close_capture(snd_pcm_substream_t *substream)
+{
+ ca0106_t *chip = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ chip->capture_channels[epcm->channel_id].use=0;
+/* FIXME: maybe zero others */
+ return 0;
+}
+
+static int snd_ca0106_pcm_open_0_capture(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_capture_channel(substream, 0);
+}
+
+static int snd_ca0106_pcm_open_1_capture(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_capture_channel(substream, 1);
+}
+
+static int snd_ca0106_pcm_open_2_capture(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_capture_channel(substream, 2);
+}
+
+static int snd_ca0106_pcm_open_3_capture(snd_pcm_substream_t *substream)
+{
+ return snd_ca0106_pcm_open_capture_channel(substream, 3);
+}
+
+/* hw_params callback */
+static int snd_ca0106_pcm_hw_params_playback(snd_pcm_substream_t *substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ return snd_pcm_lib_malloc_pages(substream,
+ params_buffer_bytes(hw_params));
+}
+
+/* hw_free callback */
+static int snd_ca0106_pcm_hw_free_playback(snd_pcm_substream_t *substream)
+{
+ return snd_pcm_lib_free_pages(substream);
+}
+
+/* hw_params callback */
+static int snd_ca0106_pcm_hw_params_capture(snd_pcm_substream_t *substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ return snd_pcm_lib_malloc_pages(substream,
+ params_buffer_bytes(hw_params));
+}
+
+/* hw_free callback */
+static int snd_ca0106_pcm_hw_free_capture(snd_pcm_substream_t *substream)
+{
+ return snd_pcm_lib_free_pages(substream);
+}
+
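+/* Playback uses a small per-channel period table kept in the card's own DMA
+ * buffer (emu->buffer): each channel owns a 128 byte slot, and every period
+ * is described by a pair of 32 bit words, the DMA address of the period and
+ * the period size in bytes shifted left by 16.  prepare() rebuilds this
+ * table, programs the per-channel rate bits in register 0x40, the global
+ * rate bits in register 0x71 and the S32_LE format bit in HCFG, and then
+ * points the hardware at the table.
+ */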
+/* prepare playback callback */
+static int snd_ca0106_pcm_prepare_playback(snd_pcm_substream_t *substream)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ int channel = epcm->channel_id;
+ u32 *table_base = (u32 *)(emu->buffer.area+(8*16*channel));
+ u32 period_size_bytes = frames_to_bytes(runtime, runtime->period_size);
+ u32 hcfg_mask = HCFG_PLAYBACK_S32_LE;
+ u32 hcfg_set = 0x00000000;
+ u32 hcfg;
+ u32 reg40_mask = 0x30000 << (channel<<1);
+ u32 reg40_set = 0;
+ u32 reg40;
+ /* FIXME: Depending on mixer selection of SPDIF out or not, select the spdif rate or the DAC rate. */
+ u32 reg71_mask = 0x03030000 ; /* Global. Set SPDIF rate. We only support 44100 to spdif, not to DAC. */
+ u32 reg71_set = 0;
+ u32 reg71;
+ int i;
+
+ //snd_printk("prepare:channel_number=%d, rate=%d, format=0x%x, channels=%d, buffer_size=%ld, period_size=%ld, periods=%u, frames_to_bytes=%d\n",channel, runtime->rate, runtime->format, runtime->channels, runtime->buffer_size, runtime->period_size, runtime->periods, frames_to_bytes(runtime, 1));
+ //snd_printk("dma_addr=%x, dma_area=%p, table_base=%p\n",runtime->dma_addr, runtime->dma_area, table_base);
+ //snd_printk("dma_addr=%x, dma_area=%p, dma_bytes(size)=%x\n",emu->buffer.addr, emu->buffer.area, emu->buffer.bytes);
+ /* Rate can be set per channel. */
+ /* reg40 control host to fifo */
+ /* reg71 controls DAC rate. */
+ switch (runtime->rate) {
+ case 44100:
+ reg40_set = 0x10000 << (channel<<1);
+ reg71_set = 0x01010000;
+ break;
+ case 48000:
+ reg40_set = 0;
+ reg71_set = 0;
+ break;
+ case 96000:
+ reg40_set = 0x20000 << (channel<<1);
+ reg71_set = 0x02020000;
+ break;
+ case 192000:
+ reg40_set = 0x30000 << (channel<<1);
+ reg71_set = 0x03030000;
+ break;
+ default:
+ reg40_set = 0;
+ reg71_set = 0;
+ break;
+ }
+ /* Format is a global setting */
+ /* FIXME: Only let the first channel accessed set this. */
+ switch (runtime->format) {
+ case SNDRV_PCM_FORMAT_S16_LE:
+ hcfg_set = 0;
+ break;
+ case SNDRV_PCM_FORMAT_S32_LE:
+ hcfg_set = HCFG_PLAYBACK_S32_LE;
+ break;
+ default:
+ hcfg_set = 0;
+ break;
+ }
+ hcfg = inl(emu->port + HCFG) ;
+ hcfg = (hcfg & ~hcfg_mask) | hcfg_set;
+ outl(hcfg, emu->port + HCFG);
+ reg40 = snd_ca0106_ptr_read(emu, 0x40, 0);
+ reg40 = (reg40 & ~reg40_mask) | reg40_set;
+ snd_ca0106_ptr_write(emu, 0x40, 0, reg40);
+ reg71 = snd_ca0106_ptr_read(emu, 0x71, 0);
+ reg71 = (reg71 & ~reg71_mask) | reg71_set;
+ snd_ca0106_ptr_write(emu, 0x71, 0, reg71);
+
+ /* FIXME: Check emu->buffer.size before actually writing to it. */
+ for(i=0; i < runtime->periods; i++) {
+ table_base[i*2]=runtime->dma_addr+(i*period_size_bytes);
+ table_base[(i*2)+1]=period_size_bytes<<16;
+ }
+
+ snd_ca0106_ptr_write(emu, PLAYBACK_LIST_ADDR, channel, emu->buffer.addr+(8*16*channel));
+ snd_ca0106_ptr_write(emu, PLAYBACK_LIST_SIZE, channel, (runtime->periods - 1) << 19);
+ snd_ca0106_ptr_write(emu, PLAYBACK_LIST_PTR, channel, 0);
+ snd_ca0106_ptr_write(emu, PLAYBACK_DMA_ADDR, channel, runtime->dma_addr);
+	snd_ca0106_ptr_write(emu, PLAYBACK_PERIOD_SIZE, channel, frames_to_bytes(runtime, runtime->period_size)<<16); // period size in bytes
+ snd_ca0106_ptr_write(emu, PLAYBACK_POINTER, channel, 0);
+ snd_ca0106_ptr_write(emu, 0x07, channel, 0x0);
+ snd_ca0106_ptr_write(emu, 0x08, channel, 0);
+ snd_ca0106_ptr_write(emu, PLAYBACK_MUTE, 0x0, 0x0); /* Unmute output */
+#if 0
+ snd_ca0106_ptr_write(emu, SPCS0, 0,
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT );
+#endif
+
+ return 0;
+}
+
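+/* Capture is a plain ring buffer: no period table, just the DMA address,
+ * the buffer size (in the top 16 bits of CAPTURE_BUFFER_SIZE) and a reset
+ * of the capture pointer.
+ */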
+/* prepare capture callback */
+static int snd_ca0106_pcm_prepare_capture(snd_pcm_substream_t *substream)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ int channel = epcm->channel_id;
+ //printk("prepare:channel_number=%d, rate=%d, format=0x%x, channels=%d, buffer_size=%ld, period_size=%ld, frames_to_bytes=%d\n",channel, runtime->rate, runtime->format, runtime->channels, runtime->buffer_size, runtime->period_size, frames_to_bytes(runtime, 1));
+ snd_ca0106_ptr_write(emu, 0x13, channel, 0);
+ snd_ca0106_ptr_write(emu, CAPTURE_DMA_ADDR, channel, runtime->dma_addr);
+ snd_ca0106_ptr_write(emu, CAPTURE_BUFFER_SIZE, channel, frames_to_bytes(runtime, runtime->buffer_size)<<16); // buffer size in bytes
+ snd_ca0106_ptr_write(emu, CAPTURE_POINTER, channel, 0);
+
+ return 0;
+}
+
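+/* The trigger walks every substream linked to this one, so front, rear and
+ * center/lfe streams start and stop together with a single pair of register
+ * updates: the (0x1 << channel) bits are set in BASIC_INTERRUPT and the
+ * (0x10 << channel) bits in EXTENDED_INT_MASK on START, and cleared again
+ * on STOP.
+ */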
+/* trigger_playback callback */
+static int snd_ca0106_pcm_trigger_playback(snd_pcm_substream_t *substream,
+ int cmd)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime;
+ ca0106_pcm_t *epcm;
+ int channel;
+ int result = 0;
+ struct list_head *pos;
+ snd_pcm_substream_t *s;
+ u32 basic = 0;
+ u32 extended = 0;
+ int running=0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ running=1;
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ default:
+ running=0;
+ break;
+ }
+ snd_pcm_group_for_each(pos, substream) {
+ s = snd_pcm_group_substream_entry(pos);
+ runtime = s->runtime;
+ epcm = runtime->private_data;
+ channel = epcm->channel_id;
+ //snd_printk("channel=%d\n",channel);
+ epcm->running = running;
+ basic |= (0x1<<channel);
+ extended |= (0x10<<channel);
+ snd_pcm_trigger_done(s, substream);
+ }
+ //snd_printk("basic=0x%x, extended=0x%x\n",basic, extended);
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ snd_ca0106_ptr_write(emu, EXTENDED_INT_MASK, 0, snd_ca0106_ptr_read(emu, EXTENDED_INT_MASK, 0) | (extended));
+ snd_ca0106_ptr_write(emu, BASIC_INTERRUPT, 0, snd_ca0106_ptr_read(emu, BASIC_INTERRUPT, 0)|(basic));
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ snd_ca0106_ptr_write(emu, BASIC_INTERRUPT, 0, snd_ca0106_ptr_read(emu, BASIC_INTERRUPT, 0) & ~(basic));
+ snd_ca0106_ptr_write(emu, EXTENDED_INT_MASK, 0, snd_ca0106_ptr_read(emu, EXTENDED_INT_MASK, 0) & ~(extended));
+ break;
+ default:
+ result = -EINVAL;
+ break;
+ }
+ return result;
+}
+
+/* trigger_capture callback */
+static int snd_ca0106_pcm_trigger_capture(snd_pcm_substream_t *substream,
+ int cmd)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ int channel = epcm->channel_id;
+ int result = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ snd_ca0106_ptr_write(emu, EXTENDED_INT_MASK, 0, snd_ca0106_ptr_read(emu, EXTENDED_INT_MASK, 0) | (0x110000<<channel));
+ snd_ca0106_ptr_write(emu, BASIC_INTERRUPT, 0, snd_ca0106_ptr_read(emu, BASIC_INTERRUPT, 0)|(0x100<<channel));
+ epcm->running = 1;
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ snd_ca0106_ptr_write(emu, BASIC_INTERRUPT, 0, snd_ca0106_ptr_read(emu, BASIC_INTERRUPT, 0) & ~(0x100<<channel));
+ snd_ca0106_ptr_write(emu, EXTENDED_INT_MASK, 0, snd_ca0106_ptr_read(emu, EXTENDED_INT_MASK, 0) & ~(0x110000<<channel));
+ epcm->running = 0;
+ break;
+ default:
+ result = -EINVAL;
+ break;
+ }
+ return result;
+}
+
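+/* The hardware position is pieced together from two registers:
+ * PLAYBACK_POINTER gives the byte offset inside the current period and
+ * PLAYBACK_LIST_PTR the offset into the period table (8 bytes per entry,
+ * hence the >> 3 below gives completed periods).  PLAYBACK_LIST_PTR is read
+ * before and after PLAYBACK_POINTER so that a period boundary crossed
+ * between the two reads can be detected and the byte offset re-read.
+ */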
+/* pointer_playback callback */
+static snd_pcm_uframes_t
+snd_ca0106_pcm_pointer_playback(snd_pcm_substream_t *substream)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+	snd_pcm_uframes_t ptr, ptr1, ptr2, ptr3, ptr4 = 0;
+ int channel = epcm->channel_id;
+
+ if (!epcm->running)
+ return 0;
+
+ ptr3 = snd_ca0106_ptr_read(emu, PLAYBACK_LIST_PTR, channel);
+ ptr1 = snd_ca0106_ptr_read(emu, PLAYBACK_POINTER, channel);
+ ptr4 = snd_ca0106_ptr_read(emu, PLAYBACK_LIST_PTR, channel);
+ if (ptr3 != ptr4) ptr1 = snd_ca0106_ptr_read(emu, PLAYBACK_POINTER, channel);
+ ptr2 = bytes_to_frames(runtime, ptr1);
+ ptr2+= (ptr4 >> 3) * runtime->period_size;
+ ptr=ptr2;
+ if (ptr >= runtime->buffer_size)
+ ptr -= runtime->buffer_size;
+ //printk("ptr1 = 0x%lx, ptr2=0x%lx, ptr=0x%lx, buffer_size = 0x%x, period_size = 0x%x, bits=%d, rate=%d\n", ptr1, ptr2, ptr, (int)runtime->buffer_size, (int)runtime->period_size, (int)runtime->frame_bits, (int)runtime->rate);
+
+ return ptr;
+}
+
+/* pointer_capture callback */
+static snd_pcm_uframes_t
+snd_ca0106_pcm_pointer_capture(snd_pcm_substream_t *substream)
+{
+ ca0106_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ ca0106_pcm_t *epcm = runtime->private_data;
+ snd_pcm_uframes_t ptr, ptr1, ptr2 = 0;
+	int channel = epcm->channel_id;
+
+ if (!epcm->running)
+ return 0;
+
+ ptr1 = snd_ca0106_ptr_read(emu, CAPTURE_POINTER, channel);
+ ptr2 = bytes_to_frames(runtime, ptr1);
+ ptr=ptr2;
+ if (ptr >= runtime->buffer_size)
+ ptr -= runtime->buffer_size;
+ //printk("ptr1 = 0x%lx, ptr2=0x%lx, ptr=0x%lx, buffer_size = 0x%x, period_size = 0x%x, bits=%d, rate=%d\n", ptr1, ptr2, ptr, (int)runtime->buffer_size, (int)runtime->period_size, (int)runtime->frame_bits, (int)runtime->rate);
+
+ return ptr;
+}
+
+/* operators */
+static snd_pcm_ops_t snd_ca0106_playback_front_ops = {
+ .open = snd_ca0106_pcm_open_playback_front,
+ .close = snd_ca0106_pcm_close_playback,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_playback,
+ .hw_free = snd_ca0106_pcm_hw_free_playback,
+ .prepare = snd_ca0106_pcm_prepare_playback,
+ .trigger = snd_ca0106_pcm_trigger_playback,
+ .pointer = snd_ca0106_pcm_pointer_playback,
+};
+
+static snd_pcm_ops_t snd_ca0106_capture_0_ops = {
+ .open = snd_ca0106_pcm_open_0_capture,
+ .close = snd_ca0106_pcm_close_capture,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_capture,
+ .hw_free = snd_ca0106_pcm_hw_free_capture,
+ .prepare = snd_ca0106_pcm_prepare_capture,
+ .trigger = snd_ca0106_pcm_trigger_capture,
+ .pointer = snd_ca0106_pcm_pointer_capture,
+};
+
+static snd_pcm_ops_t snd_ca0106_capture_1_ops = {
+ .open = snd_ca0106_pcm_open_1_capture,
+ .close = snd_ca0106_pcm_close_capture,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_capture,
+ .hw_free = snd_ca0106_pcm_hw_free_capture,
+ .prepare = snd_ca0106_pcm_prepare_capture,
+ .trigger = snd_ca0106_pcm_trigger_capture,
+ .pointer = snd_ca0106_pcm_pointer_capture,
+};
+
+static snd_pcm_ops_t snd_ca0106_capture_2_ops = {
+ .open = snd_ca0106_pcm_open_2_capture,
+ .close = snd_ca0106_pcm_close_capture,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_capture,
+ .hw_free = snd_ca0106_pcm_hw_free_capture,
+ .prepare = snd_ca0106_pcm_prepare_capture,
+ .trigger = snd_ca0106_pcm_trigger_capture,
+ .pointer = snd_ca0106_pcm_pointer_capture,
+};
+
+static snd_pcm_ops_t snd_ca0106_capture_3_ops = {
+ .open = snd_ca0106_pcm_open_3_capture,
+ .close = snd_ca0106_pcm_close_capture,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_capture,
+ .hw_free = snd_ca0106_pcm_hw_free_capture,
+ .prepare = snd_ca0106_pcm_prepare_capture,
+ .trigger = snd_ca0106_pcm_trigger_capture,
+ .pointer = snd_ca0106_pcm_pointer_capture,
+};
+
+static snd_pcm_ops_t snd_ca0106_playback_center_lfe_ops = {
+ .open = snd_ca0106_pcm_open_playback_center_lfe,
+ .close = snd_ca0106_pcm_close_playback,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_playback,
+ .hw_free = snd_ca0106_pcm_hw_free_playback,
+ .prepare = snd_ca0106_pcm_prepare_playback,
+ .trigger = snd_ca0106_pcm_trigger_playback,
+ .pointer = snd_ca0106_pcm_pointer_playback,
+};
+
+static snd_pcm_ops_t snd_ca0106_playback_unknown_ops = {
+ .open = snd_ca0106_pcm_open_playback_unknown,
+ .close = snd_ca0106_pcm_close_playback,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_playback,
+ .hw_free = snd_ca0106_pcm_hw_free_playback,
+ .prepare = snd_ca0106_pcm_prepare_playback,
+ .trigger = snd_ca0106_pcm_trigger_playback,
+ .pointer = snd_ca0106_pcm_pointer_playback,
+};
+
+static snd_pcm_ops_t snd_ca0106_playback_rear_ops = {
+ .open = snd_ca0106_pcm_open_playback_rear,
+ .close = snd_ca0106_pcm_close_playback,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_ca0106_pcm_hw_params_playback,
+ .hw_free = snd_ca0106_pcm_hw_free_playback,
+ .prepare = snd_ca0106_pcm_prepare_playback,
+ .trigger = snd_ca0106_pcm_trigger_playback,
+ .pointer = snd_ca0106_pcm_pointer_playback,
+};
+
+
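+/* The AC97 codec sits behind another index/data pair (AC97ADDRESS/AC97DATA)
+ * in the same PCI I/O region, again serialised by emu_lock.
+ */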
+static unsigned short snd_ca0106_ac97_read(ac97_t *ac97,
+ unsigned short reg)
+{
+ ca0106_t *emu = ac97->private_data;
+ unsigned long flags;
+ unsigned short val;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outb(reg, emu->port + AC97ADDRESS);
+ val = inw(emu->port + AC97DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ return val;
+}
+
+static void snd_ca0106_ac97_write(ac97_t *ac97,
+ unsigned short reg, unsigned short val)
+{
+ ca0106_t *emu = ac97->private_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outb(reg, emu->port + AC97ADDRESS);
+ outw(val, emu->port + AC97DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static int snd_ca0106_ac97(ca0106_t *chip)
+{
+ ac97_bus_t *pbus;
+ ac97_template_t ac97;
+ int err;
+ static ac97_bus_ops_t ops = {
+ .write = snd_ca0106_ac97_write,
+ .read = snd_ca0106_ac97_read,
+ };
+
+ if ((err = snd_ac97_bus(chip->card, 0, &ops, NULL, &pbus)) < 0)
+ return err;
+ pbus->no_vra = 1; /* we don't need VRA */
+
+ memset(&ac97, 0, sizeof(ac97));
+ ac97.private_data = chip;
+ return snd_ac97_mixer(pbus, &ac97, &chip->ac97);
+}
+
+static int snd_ca0106_free(ca0106_t *chip)
+{
+ if (chip->res_port != NULL) { /* avoid access to already used hardware */
+ // disable interrupts
+ snd_ca0106_ptr_write(chip, BASIC_INTERRUPT, 0, 0);
+ outl(0, chip->port + INTE);
+ snd_ca0106_ptr_write(chip, EXTENDED_INT_MASK, 0, 0);
+ udelay(1000);
+ // disable audio
+ //outl(HCFG_LOCKSOUNDCACHE, chip->port + HCFG);
+ outl(0, chip->port + HCFG);
+		/* FIXME: We need to stop any DMA transfers here.
+		 *        But as I am not sure how yet, we cannot free the dma pages.
+		 *        Once that works we can fix: snd-malloc: Memory leak? pages not freed = 8
+		 */
+ }
+ // release the data
+#if 1
+ if (chip->buffer.area)
+ snd_dma_free_pages(&chip->buffer);
+#endif
+
+ // release the i/o port
+ if (chip->res_port) {
+ release_resource(chip->res_port);
+ kfree_nocheck(chip->res_port);
+ }
+ // release the irq
+ if (chip->irq >= 0)
+ free_irq(chip->irq, (void *)chip);
+ pci_disable_device(chip->pci);
+ kfree(chip);
+ return 0;
+}
+
+static int snd_ca0106_dev_free(snd_device_t *device)
+{
+ ca0106_t *chip = device->device_data;
+ return snd_ca0106_free(chip);
+}
+
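+/* Interrupt handler: read IPR to see whether this interrupt is ours, then
+ * read EXTENDED_INT (stat76) to find out which playback/capture channels hit
+ * a half or full period (mask 0x11 covers both bits for playback channel 0
+ * and is shifted once per channel, 0x110000 likewise for capture), call
+ * snd_pcm_period_elapsed() for the channels in use, and finally acknowledge
+ * by writing stat76 back to EXTENDED_INT and status back to IPR.
+ */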
+static irqreturn_t snd_ca0106_interrupt(int irq, void *dev_id,
+ struct pt_regs *regs)
+{
+ unsigned int status;
+
+ ca0106_t *chip = dev_id;
+ int i;
+ int mask;
+ unsigned int stat76;
+ ca0106_channel_t *pchannel;
+
+ spin_lock(&chip->emu_lock);
+
+ status = inl(chip->port + IPR);
+
+ // call updater, unlock before it
+ spin_unlock(&chip->emu_lock);
+
+ if (! status)
+ return IRQ_NONE;
+
+ stat76 = snd_ca0106_ptr_read(chip, EXTENDED_INT, 0);
+ //snd_printk("interrupt status = 0x%08x, stat76=0x%08x\n", status, stat76);
+ //snd_printk("ptr=0x%08x\n",snd_ca0106_ptr_read(chip, PLAYBACK_POINTER, 0));
+ mask = 0x11; /* 0x1 for one half, 0x10 for the other half period. */
+ for(i = 0; i < 4; i++) {
+ pchannel = &(chip->playback_channels[i]);
+ if(stat76 & mask) {
+/* FIXME: Select the correct substream for period elapsed */
+ if(pchannel->use) {
+ snd_pcm_period_elapsed(pchannel->epcm->substream);
+ //printk(KERN_INFO "interrupt [%d] used\n", i);
+ }
+ }
+ //printk(KERN_INFO "channel=%p\n",pchannel);
+ //printk(KERN_INFO "interrupt stat76[%d] = %08x, use=%d, channel=%d\n", i, stat76, pchannel->use, pchannel->number);
+ mask <<= 1;
+ }
+ mask = 0x110000; /* 0x1 for one half, 0x10 for the other half period. */
+ for(i = 0; i < 4; i++) {
+ pchannel = &(chip->capture_channels[i]);
+ if(stat76 & mask) {
+/* FIXME: Select the correct substream for period elapsed */
+ if(pchannel->use) {
+ snd_pcm_period_elapsed(pchannel->epcm->substream);
+ //printk(KERN_INFO "interrupt [%d] used\n", i);
+ }
+ }
+ //printk(KERN_INFO "channel=%p\n",pchannel);
+ //printk(KERN_INFO "interrupt stat76[%d] = %08x, use=%d, channel=%d\n", i, stat76, pchannel->use, pchannel->number);
+ mask <<= 1;
+ }
+
+ snd_ca0106_ptr_write(chip, EXTENDED_INT, 0, stat76);
+ spin_lock(&chip->emu_lock);
+ // acknowledge the interrupt if necessary
+ outl(status, chip->port+IPR);
+
+ spin_unlock(&chip->emu_lock);
+
+ return IRQ_HANDLED;
+}
+
+static void snd_ca0106_pcm_free(snd_pcm_t *pcm)
+{
+ ca0106_t *emu = pcm->private_data;
+ emu->pcm = NULL;
+ snd_pcm_lib_preallocate_free_for_all(pcm);
+}
+
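+/* Four PCM devices are created, one per stereo pair: 0 = front, 1 = rear,
+ * 2 = center/lfe, 3 = unknown, each paired with the capture channel of the
+ * same number.  64kB of DMA memory is preallocated for every substream.
+ */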
+static int __devinit snd_ca0106_pcm(ca0106_t *emu, int device, snd_pcm_t **rpcm)
+{
+ snd_pcm_t *pcm;
+ snd_pcm_substream_t *substream;
+ int err;
+
+ if (rpcm)
+ *rpcm = NULL;
+ if ((err = snd_pcm_new(emu->card, "ca0106", device, 1, 1, &pcm)) < 0)
+ return err;
+
+ pcm->private_data = emu;
+ pcm->private_free = snd_ca0106_pcm_free;
+
+ switch (device) {
+ case 0:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_ca0106_playback_front_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_ca0106_capture_0_ops);
+ break;
+ case 1:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_ca0106_playback_rear_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_ca0106_capture_1_ops);
+ break;
+ case 2:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_ca0106_playback_center_lfe_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_ca0106_capture_2_ops);
+ break;
+ case 3:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_ca0106_playback_unknown_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_ca0106_capture_3_ops);
+ break;
+ }
+
+ pcm->info_flags = 0;
+ pcm->dev_subclass = SNDRV_PCM_SUBCLASS_GENERIC_MIX;
+ strcpy(pcm->name, "CA0106");
+ emu->pcm = pcm;
+
+ for(substream = pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream;
+ substream;
+ substream = substream->next) {
+ if ((err = snd_pcm_lib_preallocate_pages(substream,
+ SNDRV_DMA_TYPE_DEV,
+ snd_dma_pci_data(emu->pci),
+							  64*1024, 64*1024)) < 0) /* FIXME: 32*1024 for sound buffer, between 32 and 64 for Periods table. */
+ return err;
+ }
+
+ for (substream = pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream;
+ substream;
+ substream = substream->next) {
+ if ((err = snd_pcm_lib_preallocate_pages(substream,
+ SNDRV_DMA_TYPE_DEV,
+ snd_dma_pci_data(emu->pci),
+ 64*1024, 64*1024)) < 0)
+ return err;
+ }
+
+ if (rpcm)
+ *rpcm = pcm;
+
+ return 0;
+}
+
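+/* Chip bring-up: enable the PCI device, insist on 32 bit DMA, claim the
+ * 0x20 byte I/O region and the shared IRQ, allocate a 1kB DMA buffer for
+ * the playback period tables, then set defaults: SPDIF channel status,
+ * playback volumes muted, capture volumes at 0dB, routing tables, GPIO
+ * programmed for analog output (the SB0410/SB0413 use different GPIO
+ * values), and finally interrupts and outputs enabled via INTE and HCFG.
+ */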
+static int __devinit snd_ca0106_create(snd_card_t *card,
+ struct pci_dev *pci,
+ ca0106_t **rchip)
+{
+ ca0106_t *chip;
+ int err;
+ int ch;
+ static snd_device_ops_t ops = {
+ .dev_free = snd_ca0106_dev_free,
+ };
+
+ *rchip = NULL;
+
+ if ((err = pci_enable_device(pci)) < 0)
+ return err;
+ if (pci_set_dma_mask(pci, 0xffffffffUL) < 0 ||
+ pci_set_consistent_dma_mask(pci, 0xffffffffUL) < 0) {
+		printk(KERN_ERR "error setting 32bit DMA mask\n");
+ pci_disable_device(pci);
+ return -ENXIO;
+ }
+
+ chip = kcalloc(1, sizeof(*chip), GFP_KERNEL);
+ if (chip == NULL) {
+ pci_disable_device(pci);
+ return -ENOMEM;
+ }
+
+ chip->card = card;
+ chip->pci = pci;
+ chip->irq = -1;
+
+ spin_lock_init(&chip->emu_lock);
+
+ chip->port = pci_resource_start(pci, 0);
+ if ((chip->res_port = request_region(chip->port, 0x20,
+ "snd_ca0106")) == NULL) {
+ snd_ca0106_free(chip);
+ printk(KERN_ERR "cannot allocate the port\n");
+ return -EBUSY;
+ }
+
+ if (request_irq(pci->irq, snd_ca0106_interrupt,
+ SA_INTERRUPT|SA_SHIRQ, "snd_ca0106",
+ (void *)chip)) {
+ snd_ca0106_free(chip);
+ printk(KERN_ERR "cannot grab irq\n");
+ return -EBUSY;
+ }
+ chip->irq = pci->irq;
+
+ /* This stores the periods table. */
+ if(snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(pci), 1024, &chip->buffer) < 0) {
+ snd_ca0106_free(chip);
+ return -ENOMEM;
+ }
+
+ pci_set_master(pci);
+ /* read revision & serial */
+ pci_read_config_byte(pci, PCI_REVISION_ID, (char *)&chip->revision);
+ pci_read_config_dword(pci, PCI_SUBSYSTEM_VENDOR_ID, &chip->serial);
+ pci_read_config_word(pci, PCI_SUBSYSTEM_ID, &chip->model);
+#if 1
+ printk(KERN_INFO "Model %04x Rev %08x Serial %08x\n", chip->model,
+ chip->revision, chip->serial);
+#endif
+
+ outl(0, chip->port + INTE);
+
+ /*
+ * Init to 0x02109204 :
+ * Clock accuracy = 0 (1000ppm)
+ * Sample Rate = 2 (48kHz)
+ * Audio Channel = 1 (Left of 2)
+ * Source Number = 0 (Unspecified)
+ * Generation Status = 1 (Original for Cat Code 12)
+ * Cat Code = 12 (Digital Signal Mixer)
+ * Mode = 0 (Mode 0)
+ * Emphasis = 0 (None)
+ * CP = 1 (Copyright unasserted)
+ * AN = 0 (Audio data)
+ * P = 0 (Consumer)
+ */
+ snd_ca0106_ptr_write(chip, SPCS0, 0,
+ chip->spdif_bits[0] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+ /* Only SPCS1 has been tested */
+ snd_ca0106_ptr_write(chip, SPCS1, 0,
+ chip->spdif_bits[1] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+ snd_ca0106_ptr_write(chip, SPCS2, 0,
+ chip->spdif_bits[2] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+ snd_ca0106_ptr_write(chip, SPCS3, 0,
+ chip->spdif_bits[3] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+
+ snd_ca0106_ptr_write(chip, PLAYBACK_MUTE, 0, 0x00fc0000);
+ snd_ca0106_ptr_write(chip, CAPTURE_MUTE, 0, 0x00fc0000);
+
+ /* Write 0x8000 to AC97_REC_GAIN to mute it. */
+ outb(AC97_REC_GAIN, chip->port + AC97ADDRESS);
+ outw(0x8000, chip->port + AC97DATA);
+#if 0
+ snd_ca0106_ptr_write(chip, SPCS0, 0, 0x2108006);
+ snd_ca0106_ptr_write(chip, 0x42, 0, 0x2108006);
+ snd_ca0106_ptr_write(chip, 0x43, 0, 0x2108006);
+ snd_ca0106_ptr_write(chip, 0x44, 0, 0x2108006);
+#endif
+
+ //snd_ca0106_ptr_write(chip, SPDIF_SELECT2, 0, 0xf0f003f); /* OSS drivers set this. */
+ /* Analog or Digital output */
+ snd_ca0106_ptr_write(chip, SPDIF_SELECT1, 0, 0xf);
+ snd_ca0106_ptr_write(chip, SPDIF_SELECT2, 0, 0x000b0000); /* 0x0b000000 for digital, 0x000b0000 for analog, from win2000 drivers */
+ chip->spdif_enable = 0; /* Set digital SPDIF output off */
+ chip->capture_source = 3; /* Set CAPTURE_SOURCE */
+ //snd_ca0106_ptr_write(chip, 0x45, 0, 0); /* Analogue out */
+ //snd_ca0106_ptr_write(chip, 0x45, 0, 0xf00); /* Digital out */
+
+ snd_ca0106_ptr_write(chip, CAPTURE_CONTROL, 0, 0x40c81000); /* goes to 0x40c80000 when doing SPDIF IN/OUT */
+ snd_ca0106_ptr_write(chip, CAPTURE_CONTROL, 1, 0xffffffff); /* (Mute) CAPTURE feedback into PLAYBACK volume. Only lower 16 bits matter. */
+ snd_ca0106_ptr_write(chip, CAPTURE_CONTROL, 2, 0x30300000); /* SPDIF IN Volume */
+ snd_ca0106_ptr_write(chip, CAPTURE_CONTROL, 3, 0x00700000); /* SPDIF IN Volume, 0x70 = (vol & 0x3f) | 0x40 */
+ snd_ca0106_ptr_write(chip, PLAYBACK_ROUTING1, 0, 0x32765410);
+ snd_ca0106_ptr_write(chip, PLAYBACK_ROUTING2, 0, 0x76767676);
+ snd_ca0106_ptr_write(chip, CAPTURE_ROUTING1, 0, 0x32765410);
+ snd_ca0106_ptr_write(chip, CAPTURE_ROUTING2, 0, 0x76767676);
+ for(ch = 0; ch < 4; ch++) {
+ snd_ca0106_ptr_write(chip, CAPTURE_VOLUME1, ch, 0x30303030); /* Only high 16 bits matter */
+ snd_ca0106_ptr_write(chip, CAPTURE_VOLUME2, ch, 0x30303030);
+ //snd_ca0106_ptr_write(chip, PLAYBACK_VOLUME1, ch, 0x40404040); /* Mute */
+ //snd_ca0106_ptr_write(chip, PLAYBACK_VOLUME2, ch, 0x40404040); /* Mute */
+ snd_ca0106_ptr_write(chip, PLAYBACK_VOLUME1, ch, 0xffffffff); /* Mute */
+ snd_ca0106_ptr_write(chip, PLAYBACK_VOLUME2, ch, 0xffffffff); /* Mute */
+ }
+ snd_ca0106_ptr_write(chip, CAPTURE_SOURCE, 0x0, 0x333300e4); /* Select MIC, Line in, TAD in, AUX in */
+ chip->capture_source = 3; /* Set CAPTURE_SOURCE */
+
+ if ((chip->serial == 0x10061102) || (chip->serial == 0x10071102) ) { /* The SB0410 and SB0413 use GPIO differently. */
+ /* FIXME: Still need to find out what the other GPIO bits do. E.g. For digital spdif out. */
+ outl(0x0, chip->port+GPIO);
+ //outl(0x00f0e000, chip->port+GPIO); /* Analog */
+ outl(0x005f4300, chip->port+GPIO); /* Analog */
+ } else {
+ outl(0x0, chip->port+GPIO);
+ outl(0x005f03a3, chip->port+GPIO); /* Analog */
+ //outl(0x005f02a2, chip->port+GPIO); /* SPDIF */
+ }
+ snd_ca0106_intr_enable(chip, 0x105); /* Win2000 uses 0x1e0 */
+
+ //outl(HCFG_LOCKSOUNDCACHE|HCFG_AUDIOENABLE, chip->port+HCFG);
+	//outl(0x00001409, chip->port+HCFG); /* 0x1000 causes AC3 to fail. Maybe it affects 24 bit output. */
+ //outl(0x00000009, chip->port+HCFG);
+ outl(HCFG_AC97 | HCFG_AUDIOENABLE, chip->port+HCFG); /* AC97 2.0, Enable outputs. */
+
+ if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL,
+ chip, &ops)) < 0) {
+ snd_ca0106_free(chip);
+ return err;
+ }
+ *rchip = chip;
+ return 0;
+}
+
+static int __devinit snd_ca0106_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+{
+ static int dev;
+ snd_card_t *card;
+ ca0106_t *chip;
+ ca0106_names_t *c;
+ int err;
+
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+ if (!enable[dev]) {
+ dev++;
+ return -ENOENT;
+ }
+
+ card = snd_card_new(index[dev], id[dev], THIS_MODULE, 0);
+ if (card == NULL)
+ return -ENOMEM;
+
+ if ((err = snd_ca0106_create(card, pci, &chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ if ((err = snd_ca0106_pcm(chip, 0, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((err = snd_ca0106_pcm(chip, 1, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((err = snd_ca0106_pcm(chip, 2, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((err = snd_ca0106_pcm(chip, 3, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((chip->serial != 0x10061102) && (chip->serial != 0x10071102) ) { /* The SB0410 and SB0413 do not have an ac97 chip. */
+ if ((err = snd_ca0106_ac97(chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ }
+ if ((err = snd_ca0106_mixer(chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ snd_ca0106_proc_init(chip);
+
+ strcpy(card->driver, "CA0106");
+ strcpy(card->shortname, "CA0106");
+
+ for (c=ca0106_chip_names; c->serial; c++) {
+ if (c->serial == chip->serial) break;
+ }
+ sprintf(card->longname, "%s at 0x%lx irq %i",
+ c->name, chip->port, chip->irq);
+
+ if ((err = snd_card_register(card)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
+}
+
+static void __devexit snd_ca0106_remove(struct pci_dev *pci)
+{
+ snd_card_free(pci_get_drvdata(pci));
+ pci_set_drvdata(pci, NULL);
+}
+
+// PCI IDs
+static struct pci_device_id snd_ca0106_ids[] = {
+ { 0x1102, 0x0007, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, /* Audigy LS or Live 24bit */
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, snd_ca0106_ids);
+
+// pci_driver definition
+static struct pci_driver driver = {
+ .name = "CA0106",
+ .id_table = snd_ca0106_ids,
+ .probe = snd_ca0106_probe,
+ .remove = __devexit_p(snd_ca0106_remove),
+};
+
+// initialization of the module
+static int __init alsa_card_ca0106_init(void)
+{
+ int err;
+
+	if ((err = pci_module_init(&driver)) < 0)
+ return err;
+
+ return 0;
+}
+
+// clean up the module
+static void __exit alsa_card_ca0106_exit(void)
+{
+ pci_unregister_driver(&driver);
+}
+
+module_init(alsa_card_ca0106_init)
+module_exit(alsa_card_ca0106_exit)
--- /dev/null
+/*
+ * Copyright (c) 2004 James Courtier-Dutton <James@superbug.demon.co.uk>
+ *  Driver for CA0106 chips. e.g. Sound Blaster Audigy LS and Live 24bit
+ * Version: 0.0.16
+ *
+ * FEATURES currently supported:
+ * See ca0106_main.c for features.
+ *
+ * Changelog:
+ * Support interrupts per period.
+ * Removed noise from Center/LFE channel when in Analog mode.
+ * Rename and remove mixer controls.
+ * 0.0.6
+ * Use separate card based DMA buffer for periods table list.
+ * 0.0.7
+ * Change remove and rename ctrls into lists.
+ * 0.0.8
+ * Try to fix capture sources.
+ * 0.0.9
+ * Fix AC3 output.
+ * Enable S32_LE format support.
+ * 0.0.10
+ *    Enable playback 48000 and 96000 rates. (Rates other than these do not work, even with "plug:front".)
+ * 0.0.11
+ * Add Model name recognition.
+ * 0.0.12
+ *    Correct interrupt timing: interrupt at end of period, instead of in the middle of a playback period.
+ *    Remove redundant "voice" handling.
+ * 0.0.13
+ * Single trigger call for multi channels.
+ * 0.0.14
+ * Set limits based on what the sound card hardware can do.
+ * playback periods_min=2, periods_max=8
+ * capture hw constraints require period_size = n * 64 bytes.
+ * playback hw constraints require period_size = n * 64 bytes.
+ * 0.0.15
+ * Separated ca0106.c into separate functional .c files.
+ * 0.0.16
+ * Modified Copyright message.
+ *
+ *  This code was initially based on code from ALSA's emu10k1x.c which is:
+ * Copyright (c) by Francisco Moraes <fmoraes@nc.rr.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+#include <sound/driver.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/ac97_codec.h>
+#include <sound/info.h>
+
+#include "ca0106.h"
+
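+/* The "SPDIF Out" switch flips the shared jack between analog and digital
+ * output: it rewrites SPDIF_SELECT2, toggles bit 0x1000 of CAPTURE_CONTROL
+ * and two GPIO bits (0x101), matching the analog/digital values observed
+ * from the win2000 driver (see ca0106_main.c).
+ */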
+static int snd_ca0106_shared_spdif_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
+ uinfo->count = 1;
+ uinfo->value.integer.min = 0;
+ uinfo->value.integer.max = 1;
+ return 0;
+}
+
+static int snd_ca0106_shared_spdif_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+
+ ucontrol->value.enumerated.item[0] = emu->spdif_enable;
+ return 0;
+}
+
+static int snd_ca0106_shared_spdif_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int val;
+ int change = 0;
+ u32 mask;
+
+ val = ucontrol->value.enumerated.item[0] ;
+ change = (emu->spdif_enable != val);
+ if (change) {
+ emu->spdif_enable = val;
+ if (val == 1) {
+ /* Digital */
+ snd_ca0106_ptr_write(emu, SPDIF_SELECT1, 0, 0xf);
+ snd_ca0106_ptr_write(emu, SPDIF_SELECT2, 0, 0x0b000000);
+ snd_ca0106_ptr_write(emu, CAPTURE_CONTROL, 0,
+ snd_ca0106_ptr_read(emu, CAPTURE_CONTROL, 0) & ~0x1000);
+ mask = inl(emu->port + GPIO) & ~0x101;
+ outl(mask, emu->port + GPIO);
+
+ } else {
+ /* Analog */
+ snd_ca0106_ptr_write(emu, SPDIF_SELECT1, 0, 0xf);
+ snd_ca0106_ptr_write(emu, SPDIF_SELECT2, 0, 0x000b0000);
+ snd_ca0106_ptr_write(emu, CAPTURE_CONTROL, 0,
+ snd_ca0106_ptr_read(emu, CAPTURE_CONTROL, 0) | 0x1000);
+ mask = inl(emu->port + GPIO) | 0x101;
+ outl(mask, emu->port + GPIO);
+ }
+ }
+ return change;
+}
+
+static snd_kcontrol_new_t snd_ca0106_shared_spdif __devinitdata =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "SPDIF Out",
+ .info = snd_ca0106_shared_spdif_info,
+ .get = snd_ca0106_shared_spdif_get,
+ .put = snd_ca0106_shared_spdif_put
+};
+
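+/* "Capture Source" writes the selected source into all four record source
+ * select fields (bits [31:16]) of the CAPTURE_SOURCE register while leaving
+ * the record map in the low 16 bits untouched.
+ */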
+static int snd_ca0106_capture_source_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ static char *texts[6] = { "SPDIF out", "i2s mixer out", "SPDIF in", "i2s in", "AC97 in", "SRC out" };
+
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_ENUMERATED;
+ uinfo->count = 1;
+ uinfo->value.enumerated.items = 6;
+ if (uinfo->value.enumerated.item > 5)
+ uinfo->value.enumerated.item = 5;
+ strcpy(uinfo->value.enumerated.name, texts[uinfo->value.enumerated.item]);
+ return 0;
+}
+
+static int snd_ca0106_capture_source_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+
+ ucontrol->value.enumerated.item[0] = emu->capture_source;
+ return 0;
+}
+
+static int snd_ca0106_capture_source_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int val;
+ int change = 0;
+ u32 mask;
+ u32 source;
+
+ val = ucontrol->value.enumerated.item[0] ;
+ change = (emu->capture_source != val);
+ if (change) {
+ emu->capture_source = val;
+ source = (val << 28) | (val << 24) | (val << 20) | (val << 16);
+ mask = snd_ca0106_ptr_read(emu, CAPTURE_SOURCE, 0) & 0xffff;
+ snd_ca0106_ptr_write(emu, CAPTURE_SOURCE, 0, source | mask);
+ }
+ return change;
+}
+
+static snd_kcontrol_new_t snd_ca0106_capture_source __devinitdata =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Capture Source",
+ .info = snd_ca0106_capture_source_info,
+ .get = snd_ca0106_capture_source_get,
+ .put = snd_ca0106_capture_source_put
+};
+
+static int snd_ca0106_spdif_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+ uinfo->count = 1;
+ return 0;
+}
+
+static int snd_ca0106_spdif_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int idx = snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+
+ ucontrol->value.iec958.status[0] = (emu->spdif_bits[idx] >> 0) & 0xff;
+ ucontrol->value.iec958.status[1] = (emu->spdif_bits[idx] >> 8) & 0xff;
+ ucontrol->value.iec958.status[2] = (emu->spdif_bits[idx] >> 16) & 0xff;
+ ucontrol->value.iec958.status[3] = (emu->spdif_bits[idx] >> 24) & 0xff;
+ return 0;
+}
+
+static int snd_ca0106_spdif_get_mask(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ucontrol->value.iec958.status[0] = 0xff;
+ ucontrol->value.iec958.status[1] = 0xff;
+ ucontrol->value.iec958.status[2] = 0xff;
+ ucontrol->value.iec958.status[3] = 0xff;
+ return 0;
+}
+
+static int snd_ca0106_spdif_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int idx = snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+ int change;
+ unsigned int val;
+
+ val = (ucontrol->value.iec958.status[0] << 0) |
+ (ucontrol->value.iec958.status[1] << 8) |
+ (ucontrol->value.iec958.status[2] << 16) |
+ (ucontrol->value.iec958.status[3] << 24);
+ change = val != emu->spdif_bits[idx];
+ if (change) {
+ snd_ca0106_ptr_write(emu, SPCS0 + idx, 0, val);
+ emu->spdif_bits[idx] = val;
+ }
+ return change;
+}
+
+static snd_kcontrol_new_t snd_ca0106_spdif_mask_control =
+{
+ .access = SNDRV_CTL_ELEM_ACCESS_READ,
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = SNDRV_CTL_NAME_IEC958("",PLAYBACK,MASK),
+ .count = 4,
+ .info = snd_ca0106_spdif_info,
+ .get = snd_ca0106_spdif_get_mask
+};
+
+static snd_kcontrol_new_t snd_ca0106_spdif_control =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = SNDRV_CTL_NAME_IEC958("",PLAYBACK,DEFAULT),
+ .count = 4,
+ .info = snd_ca0106_spdif_info,
+ .get = snd_ca0106_spdif_get,
+ .put = snd_ca0106_spdif_put
+};
+
+static int snd_ca0106_volume_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = 2;
+ uinfo->value.integer.min = 0;
+ uinfo->value.integer.max = 255;
+ return 0;
+}
+
+static int snd_ca0106_volume_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol, int reg, int channel_id)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int value;
+
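+	/* The register holds left volume in bits 31:24 and right in bits 23:16; invert so that a control value of 255 means maximum volume. */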
+ value = snd_ca0106_ptr_read(emu, reg, channel_id);
+ ucontrol->value.integer.value[0] = 0xff - ((value >> 24) & 0xff); /* Left */
+ ucontrol->value.integer.value[1] = 0xff - ((value >> 16) & 0xff); /* Right */
+ return 0;
+}
+
+static int snd_ca0106_volume_get_spdif_front(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_FRONT_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+
+static int snd_ca0106_volume_get_spdif_center_lfe(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_CENTER_LFE_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_get_spdif_unknown(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_UNKNOWN_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_get_spdif_rear(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_REAR_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_get_analog_front(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_FRONT_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+
+static int snd_ca0106_volume_get_analog_center_lfe(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_CENTER_LFE_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_get_analog_unknown(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_UNKNOWN_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_get_analog_rear(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_REAR_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+
+static int snd_ca0106_volume_get_feedback(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = 1;
+ int reg = CAPTURE_CONTROL;
+ return snd_ca0106_volume_get(kcontrol, ucontrol, reg, channel_id);
+}
+
+static int snd_ca0106_volume_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol, int reg, int channel_id)
+{
+ ca0106_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int value;
+ //value = snd_ca0106_ptr_read(emu, reg, channel_id);
+ //value = value & 0xffff;
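+	/* Write the inverted left/right pair into both the high and low 16-bit halves of the register. */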
+ value = ((0xff - ucontrol->value.integer.value[0]) << 24) | ((0xff - ucontrol->value.integer.value[1]) << 16);
+ value = value | ((0xff - ucontrol->value.integer.value[0]) << 8) | ((0xff - ucontrol->value.integer.value[1]) );
+ snd_ca0106_ptr_write(emu, reg, channel_id, value);
+ return 1;
+}
+static int snd_ca0106_volume_put_spdif_front(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_FRONT_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_spdif_center_lfe(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_CENTER_LFE_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_spdif_unknown(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_UNKNOWN_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_spdif_rear(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_REAR_CHANNEL;
+ int reg = PLAYBACK_VOLUME1;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_analog_front(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_FRONT_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_analog_center_lfe(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_CENTER_LFE_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_analog_unknown(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_UNKNOWN_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+static int snd_ca0106_volume_put_analog_rear(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = CONTROL_REAR_CHANNEL;
+ int reg = PLAYBACK_VOLUME2;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+
+static int snd_ca0106_volume_put_feedback(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ int channel_id = 1;
+ int reg = CAPTURE_CONTROL;
+ return snd_ca0106_volume_put(kcontrol, ucontrol, reg, channel_id);
+}
+
+static snd_kcontrol_new_t snd_ca0106_volume_control_analog_front =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Analog Front Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_analog_front,
+ .put = snd_ca0106_volume_put_analog_front
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_analog_center_lfe =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Analog Center/LFE Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_analog_center_lfe,
+ .put = snd_ca0106_volume_put_analog_center_lfe
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_analog_unknown =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Analog Unknown Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_analog_unknown,
+ .put = snd_ca0106_volume_put_analog_unknown
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_analog_rear =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Analog Rear Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_analog_rear,
+ .put = snd_ca0106_volume_put_analog_rear
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_spdif_front =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "SPDIF Front Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_spdif_front,
+ .put = snd_ca0106_volume_put_spdif_front
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_spdif_center_lfe =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "SPDIF Center/LFE Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_spdif_center_lfe,
+ .put = snd_ca0106_volume_put_spdif_center_lfe
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_spdif_unknown =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "SPDIF Unknown Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_spdif_unknown,
+ .put = snd_ca0106_volume_put_spdif_unknown
+};
+static snd_kcontrol_new_t snd_ca0106_volume_control_spdif_rear =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "SPDIF Rear Volume",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_spdif_rear,
+ .put = snd_ca0106_volume_put_spdif_rear
+};
+
+static snd_kcontrol_new_t snd_ca0106_volume_control_feedback =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "CAPTURE feedback into PLAYBACK",
+ .info = snd_ca0106_volume_info,
+ .get = snd_ca0106_volume_get_feedback,
+ .put = snd_ca0106_volume_put_feedback
+};
+
+
+static int remove_ctl(snd_card_t *card, const char *name)
+{
+ snd_ctl_elem_id_t id;
+ memset(&id, 0, sizeof(id));
+ strcpy(id.name, name);
+ id.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
+ return snd_ctl_remove_id(card, &id);
+}
+
+static snd_kcontrol_t *ctl_find(snd_card_t *card, const char *name)
+{
+ snd_ctl_elem_id_t sid;
+ memset(&sid, 0, sizeof(sid));
+ /* FIXME: strcpy is bad. */
+ strcpy(sid.name, name);
+ sid.iface = SNDRV_CTL_ELEM_IFACE_MIXER;
+ return snd_ctl_find_id(card, &sid);
+}
+
+static int rename_ctl(snd_card_t *card, const char *src, const char *dst)
+{
+ snd_kcontrol_t *kctl = ctl_find(card, src);
+ if (kctl) {
+ strcpy(kctl->id.name, dst);
+ return 0;
+ }
+ return -ENOENT;
+}
+
+int __devinit snd_ca0106_mixer(ca0106_t *emu)
+{
+ int err;
+ snd_kcontrol_t *kctl;
+ snd_card_t *card = emu->card;
+ char **c;
+ static char *ca0106_remove_ctls[] = {
+ "Master Mono Playback Switch",
+ "Master Mono Playback Volume",
+ "3D Control - Switch",
+ "3D Control Sigmatel - Depth",
+ "PCM Playback Switch",
+ "PCM Playback Volume",
+ "CD Playback Switch",
+ "CD Playback Volume",
+ "Phone Playback Switch",
+ "Phone Playback Volume",
+ "Video Playback Switch",
+ "Video Playback Volume",
+ "PC Speaker Playback Switch",
+ "PC Speaker Playback Volume",
+ "Mono Output Select",
+ "Capture Source",
+ "Capture Switch",
+ "Capture Volume",
+ "External Amplifier",
+ "Sigmatel 4-Speaker Stereo Playback Switch",
+ "Sigmatel Surround Phase Inversion Playback ",
+ NULL
+ };
+ static char *ca0106_rename_ctls[] = {
+ "Master Playback Switch", "Capture Switch",
+ "Master Playback Volume", "Capture Volume",
+ "Line Playback Switch", "AC97 Line Capture Switch",
+ "Line Playback Volume", "AC97 Line Capture Volume",
+ "Aux Playback Switch", "AC97 Aux Capture Switch",
+ "Aux Playback Volume", "AC97 Aux Capture Volume",
+ "Mic Playback Switch", "AC97 Mic Capture Switch",
+ "Mic Playback Volume", "AC97 Mic Capture Volume",
+ "Mic Select", "AC97 Mic Select",
+ "Mic Boost (+20dB)", "AC97 Mic Boost (+20dB)",
+ NULL
+ };
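+	/* Drop the AC97 controls listed above, then walk the rename table in src/dst pairs. */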
+#if 1
+ for (c=ca0106_remove_ctls; *c; c++)
+ remove_ctl(card, *c);
+ for (c=ca0106_rename_ctls; *c; c += 2)
+ rename_ctl(card, c[0], c[1]);
+#endif
+
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_analog_front, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_analog_rear, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_analog_center_lfe, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_analog_unknown, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_spdif_front, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_spdif_rear, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_spdif_center_lfe, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_spdif_unknown, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_volume_control_feedback, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_spdif_mask_control, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_shared_spdif, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_ca0106_capture_source, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = ctl_find(card, SNDRV_CTL_NAME_IEC958("",PLAYBACK,DEFAULT))) != NULL) {
+ /* already defined by ac97, remove it */
+ /* FIXME: or do we need both controls? */
+ remove_ctl(card, SNDRV_CTL_NAME_IEC958("",PLAYBACK,DEFAULT));
+ }
+ if ((kctl = snd_ctl_new1(&snd_ca0106_spdif_control, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ return 0;
+}
+
--- /dev/null
+/*
+ * Copyright (c) 2004 James Courtier-Dutton <James@superbug.demon.co.uk>
+ * Driver for CA0106 chips, e.g. Sound Blaster Audigy LS and Live 24bit
+ * Version: 0.0.17
+ *
+ * FEATURES currently supported:
+ * See ca0106_main.c for features.
+ *
+ * Changelog:
+ * Support interrupts per period.
+ * Removed noise from Center/LFE channel when in Analog mode.
+ * Rename and remove mixer controls.
+ * 0.0.6
+ * Use separate card based DMA buffer for periods table list.
+ * 0.0.7
+ * Change remove and rename ctrls into lists.
+ * 0.0.8
+ * Try to fix capture sources.
+ * 0.0.9
+ * Fix AC3 output.
+ * Enable S32_LE format support.
+ * 0.0.10
+ * Enable playback 48000 and 96000 rates. (Rates other than these do not work, even with "plug:front".)
+ * 0.0.11
+ * Add Model name recognition.
+ * 0.0.12
+ * Correct interrupt timing: interrupt at the end of a period, instead of in the middle of a playback period.
+ * Remove redundant "voice" handling.
+ * 0.0.13
+ * Single trigger call for multi channels.
+ * 0.0.14
+ * Set limits based on what the sound card hardware can do.
+ * playback periods_min=2, periods_max=8
+ * capture hw constraints require period_size = n * 64 bytes.
+ * playback hw constraints require period_size = n * 64 bytes.
+ * 0.0.15
+ * Separate ca0106.c into separate functional .c files.
+ * 0.0.16
+ * Modified Copyright message.
+ * 0.0.17
+ * Add iec958 file in proc file system to show status of SPDIF in.
+ *
+ * This code was initially based on code from ALSA's emu10k1x.c which is:
+ * Copyright (c) by Francisco Moraes <fmoraes@nc.rr.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+#include <sound/driver.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/ac97_codec.h>
+#include <sound/info.h>
+#include <sound/asoundef.h>
+
+#include "ca0106.h"
+
+
+struct snd_ca0106_category_str {
+ int val;
+ const char *name;
+};
+
+static struct snd_ca0106_category_str snd_ca0106_con_category[] = {
+ { IEC958_AES1_CON_DAT, "DAT" },
+ { IEC958_AES1_CON_VCR, "VCR" },
+ { IEC958_AES1_CON_MICROPHONE, "microphone" },
+ { IEC958_AES1_CON_SYNTHESIZER, "synthesizer" },
+ { IEC958_AES1_CON_RATE_CONVERTER, "rate converter" },
+ { IEC958_AES1_CON_MIXER, "mixer" },
+ { IEC958_AES1_CON_SAMPLER, "sampler" },
+ { IEC958_AES1_CON_PCM_CODER, "PCM coder" },
+ { IEC958_AES1_CON_IEC908_CD, "CD" },
+ { IEC958_AES1_CON_NON_IEC908_CD, "non-IEC908 CD" },
+ { IEC958_AES1_CON_GENERAL, "general" },
+};
+
+
+void snd_ca0106_proc_dump_iec958( snd_info_buffer_t *buffer, u32 value)
+{
+ int i;
+ u32 status[4];
+ status[0] = value & 0xff;
+ status[1] = (value >> 8) & 0xff;
+ status[2] = (value >> 16) & 0xff;
+ status[3] = (value >> 24) & 0xff;
+
+ if (! (status[0] & IEC958_AES0_PROFESSIONAL)) {
+ /* consumer */
+ snd_iprintf(buffer, "Mode: consumer\n");
+ snd_iprintf(buffer, "Data: ");
+ if (!(status[0] & IEC958_AES0_NONAUDIO)) {
+ snd_iprintf(buffer, "audio\n");
+ } else {
+ snd_iprintf(buffer, "non-audio\n");
+ }
+ snd_iprintf(buffer, "Rate: ");
+ switch (status[3] & IEC958_AES3_CON_FS) {
+ case IEC958_AES3_CON_FS_44100:
+ snd_iprintf(buffer, "44100 Hz\n");
+ break;
+ case IEC958_AES3_CON_FS_48000:
+ snd_iprintf(buffer, "48000 Hz\n");
+ break;
+ case IEC958_AES3_CON_FS_32000:
+ snd_iprintf(buffer, "32000 Hz\n");
+ break;
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ snd_iprintf(buffer, "Copyright: ");
+ if (status[0] & IEC958_AES0_CON_NOT_COPYRIGHT) {
+ snd_iprintf(buffer, "permitted\n");
+ } else {
+ snd_iprintf(buffer, "protected\n");
+ }
+ snd_iprintf(buffer, "Emphasis: ");
+ if ((status[0] & IEC958_AES0_CON_EMPHASIS) != IEC958_AES0_CON_EMPHASIS_5015) {
+ snd_iprintf(buffer, "none\n");
+ } else {
+ snd_iprintf(buffer, "50/15us\n");
+ }
+ snd_iprintf(buffer, "Category: ");
+ for (i = 0; i < ARRAY_SIZE(snd_ca0106_con_category); i++) {
+ if ((status[1] & IEC958_AES1_CON_CATEGORY) == snd_ca0106_con_category[i].val) {
+ snd_iprintf(buffer, "%s\n", snd_ca0106_con_category[i].name);
+ break;
+ }
+ }
+ if (i >= ARRAY_SIZE(snd_ca0106_con_category)) {
+ snd_iprintf(buffer, "unknown 0x%x\n", status[1] & IEC958_AES1_CON_CATEGORY);
+ }
+ snd_iprintf(buffer, "Original: ");
+ if (status[1] & IEC958_AES1_CON_ORIGINAL) {
+ snd_iprintf(buffer, "original\n");
+ } else {
+ snd_iprintf(buffer, "1st generation\n");
+ }
+ snd_iprintf(buffer, "Clock: ");
+ switch (status[3] & IEC958_AES3_CON_CLOCK) {
+ case IEC958_AES3_CON_CLOCK_1000PPM:
+ snd_iprintf(buffer, "1000 ppm\n");
+ break;
+ case IEC958_AES3_CON_CLOCK_50PPM:
+ snd_iprintf(buffer, "50 ppm\n");
+ break;
+ case IEC958_AES3_CON_CLOCK_VARIABLE:
+ snd_iprintf(buffer, "variable pitch\n");
+ break;
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ } else {
+ snd_iprintf(buffer, "Mode: professional\n");
+ snd_iprintf(buffer, "Data: ");
+ if (!(status[0] & IEC958_AES0_NONAUDIO)) {
+ snd_iprintf(buffer, "audio\n");
+ } else {
+ snd_iprintf(buffer, "non-audio\n");
+ }
+ snd_iprintf(buffer, "Rate: ");
+ switch (status[0] & IEC958_AES0_PRO_FS) {
+ case IEC958_AES0_PRO_FS_44100:
+ snd_iprintf(buffer, "44100 Hz\n");
+ break;
+ case IEC958_AES0_PRO_FS_48000:
+ snd_iprintf(buffer, "48000 Hz\n");
+ break;
+ case IEC958_AES0_PRO_FS_32000:
+ snd_iprintf(buffer, "32000 Hz\n");
+ break;
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ snd_iprintf(buffer, "Rate Locked: ");
+ if (status[0] & IEC958_AES0_PRO_FREQ_UNLOCKED)
+ snd_iprintf(buffer, "no\n");
+ else
+ snd_iprintf(buffer, "yes\n");
+ snd_iprintf(buffer, "Emphasis: ");
+ switch (status[0] & IEC958_AES0_PRO_EMPHASIS) {
+ case IEC958_AES0_PRO_EMPHASIS_CCITT:
+ snd_iprintf(buffer, "CCITT J.17\n");
+ break;
+ case IEC958_AES0_PRO_EMPHASIS_NONE:
+ snd_iprintf(buffer, "none\n");
+ break;
+ case IEC958_AES0_PRO_EMPHASIS_5015:
+ snd_iprintf(buffer, "50/15us\n");
+ break;
+ case IEC958_AES0_PRO_EMPHASIS_NOTID:
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ snd_iprintf(buffer, "Stereophonic: ");
+ if ((status[1] & IEC958_AES1_PRO_MODE) == IEC958_AES1_PRO_MODE_STEREOPHONIC) {
+ snd_iprintf(buffer, "stereo\n");
+ } else {
+ snd_iprintf(buffer, "not indicated\n");
+ }
+ snd_iprintf(buffer, "Userbits: ");
+ switch (status[1] & IEC958_AES1_PRO_USERBITS) {
+ case IEC958_AES1_PRO_USERBITS_192:
+ snd_iprintf(buffer, "192bit\n");
+ break;
+ case IEC958_AES1_PRO_USERBITS_UDEF:
+ snd_iprintf(buffer, "user-defined\n");
+ break;
+ default:
+			snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ snd_iprintf(buffer, "Sample Bits: ");
+ switch (status[2] & IEC958_AES2_PRO_SBITS) {
+ case IEC958_AES2_PRO_SBITS_20:
+ snd_iprintf(buffer, "20 bit\n");
+ break;
+ case IEC958_AES2_PRO_SBITS_24:
+ snd_iprintf(buffer, "24 bit\n");
+ break;
+ case IEC958_AES2_PRO_SBITS_UDEF:
+ snd_iprintf(buffer, "user defined\n");
+ break;
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ snd_iprintf(buffer, "Word Length: ");
+ switch (status[2] & IEC958_AES2_PRO_WORDLEN) {
+ case IEC958_AES2_PRO_WORDLEN_22_18:
+ snd_iprintf(buffer, "22 bit or 18 bit\n");
+ break;
+ case IEC958_AES2_PRO_WORDLEN_23_19:
+ snd_iprintf(buffer, "23 bit or 19 bit\n");
+ break;
+ case IEC958_AES2_PRO_WORDLEN_24_20:
+ snd_iprintf(buffer, "24 bit or 20 bit\n");
+ break;
+ case IEC958_AES2_PRO_WORDLEN_20_16:
+ snd_iprintf(buffer, "20 bit or 16 bit\n");
+ break;
+ default:
+ snd_iprintf(buffer, "unknown\n");
+ break;
+ }
+ }
+}
+
+static void snd_ca0106_proc_iec958(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ u32 value;
+
+ value = snd_ca0106_ptr_read(emu, SAMPLE_RATE_TRACKER_STATUS, 0);
+ snd_iprintf(buffer, "Status: %s, %s, %s\n",
+ (value & 0x100000) ? "Rate Locked" : "Not Rate Locked",
+ (value & 0x200000) ? "SPDIF Locked" : "No SPDIF Lock",
+ (value & 0x400000) ? "Audio Valid" : "No valid audio" );
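+	/* The low 20 bits of the tracker are a fixed-point ratio against the 48 kHz clock; 0x8000 corresponds to 48000 Hz. */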
+ snd_iprintf(buffer, "Estimated sample rate: %u\n",
+ ((value & 0xfffff) * 48000) / 0x8000 );
+ if (value & 0x200000) {
+ snd_iprintf(buffer, "IEC958/SPDIF input status:\n");
+ value = snd_ca0106_ptr_read(emu, SPDIF_INPUT_STATUS, 0);
+ snd_ca0106_proc_dump_iec958(buffer, value);
+ }
+
+ snd_iprintf(buffer, "\n");
+}
+
+static void snd_ca0106_proc_reg_write32(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned long flags;
+ char line[64];
+ u32 reg, val;
+ while (!snd_info_get_line(buffer, line, sizeof(line))) {
+		if (sscanf(line, "%x %x", &reg, &val) != 2)
+ continue;
+ if ((reg < 0x40) && (reg >=0) && (val <= 0xffffffff) ) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(val, emu->port + (reg & 0xfffffffc));
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ }
+ }
+}
+
+static void snd_ca0106_proc_reg_read32(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned long value;
+ unsigned long flags;
+ int i;
+ snd_iprintf(buffer, "Registers:\n\n");
+ for(i = 0; i < 0x20; i+=4) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ value = inl(emu->port + i);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ snd_iprintf(buffer, "Register %02X: %08lX\n", i, value);
+ }
+}
+
+static void snd_ca0106_proc_reg_read16(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned int value;
+ unsigned long flags;
+ int i;
+ snd_iprintf(buffer, "Registers:\n\n");
+ for(i = 0; i < 0x20; i+=2) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ value = inw(emu->port + i);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ snd_iprintf(buffer, "Register %02X: %04X\n", i, value);
+ }
+}
+
+static void snd_ca0106_proc_reg_read8(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned int value;
+ unsigned long flags;
+ int i;
+ snd_iprintf(buffer, "Registers:\n\n");
+ for(i = 0; i < 0x20; i+=1) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ value = inb(emu->port + i);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ snd_iprintf(buffer, "Register %02X: %02X\n", i, value);
+ }
+}
+
+static void snd_ca0106_proc_reg_read1(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned long value;
+ int i,j;
+
+ snd_iprintf(buffer, "Registers\n");
+ for(i = 0; i < 0x40; i++) {
+ snd_iprintf(buffer, "%02X: ",i);
+ for (j = 0; j < 4; j++) {
+ value = snd_ca0106_ptr_read(emu, i, j);
+ snd_iprintf(buffer, "%08lX ", value);
+ }
+ snd_iprintf(buffer, "\n");
+ }
+}
+
+static void snd_ca0106_proc_reg_read2(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ unsigned long value;
+ int i,j;
+
+ snd_iprintf(buffer, "Registers\n");
+ for(i = 0x40; i < 0x80; i++) {
+ snd_iprintf(buffer, "%02X: ",i);
+ for (j = 0; j < 4; j++) {
+ value = snd_ca0106_ptr_read(emu, i, j);
+ snd_iprintf(buffer, "%08lX ", value);
+ }
+ snd_iprintf(buffer, "\n");
+ }
+}
+
+static void snd_ca0106_proc_reg_write(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ ca0106_t *emu = entry->private_data;
+ char line[64];
+ unsigned int reg, channel_id , val;
+ while (!snd_info_get_line(buffer, line, sizeof(line))) {
+		if (sscanf(line, "%x %x %x", &reg, &channel_id, &val) != 3)
+ continue;
+ if ((reg < 0x80) && (reg >=0) && (val <= 0xffffffff) && (channel_id >=0) && (channel_id <= 3) )
+ snd_ca0106_ptr_write(emu, reg, channel_id, val);
+ }
+}
+
+
+int __devinit snd_ca0106_proc_init(ca0106_t * emu)
+{
+ snd_info_entry_t *entry;
+
+ if(! snd_card_proc_new(emu->card, "iec958", &entry))
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_iec958);
+ if(! snd_card_proc_new(emu->card, "ca0106_reg32", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_reg_read32);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_ca0106_proc_reg_write32;
+ }
+ if(! snd_card_proc_new(emu->card, "ca0106_reg16", &entry))
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_reg_read16);
+ if(! snd_card_proc_new(emu->card, "ca0106_reg8", &entry))
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_reg_read8);
+ if(! snd_card_proc_new(emu->card, "ca0106_regs1", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_reg_read1);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_ca0106_proc_reg_write;
+// entry->private_data = emu;
+ }
+ if(! snd_card_proc_new(emu->card, "ca0106_regs2", &entry))
+ snd_info_set_text_ops(entry, emu, 1024, snd_ca0106_proc_reg_read2);
+ return 0;
+}
+
#define BA1_DWORD_SIZE (13 * 1024 + 512)
#define BA1_MEMORY_COUNT 3
-extern snd_pcm_ops_t snd_cs46xx_playback_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_indirect_ops;
-extern snd_pcm_ops_t snd_cs46xx_capture_ops;
-extern snd_pcm_ops_t snd_cs46xx_capture_indirect_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_rear_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_indirect_rear_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_iec958_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_indirect_iec958_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_clfe_ops;
-extern snd_pcm_ops_t snd_cs46xx_playback_indirect_clfe_ops;
-
-
/*
* common I/O routines
*/
void cs46xx_dsp_spos_destroy (cs46xx_t * chip);
int cs46xx_dsp_load_module (cs46xx_t * chip,dsp_module_desc_t * module);
symbol_entry_t * cs46xx_dsp_lookup_symbol (cs46xx_t * chip,char * symbol_name,int symbol_type);
-symbol_entry_t * cs46xx_dsp_lookup_symbol_addr (cs46xx_t * chip,u32 address,int symbol_type);
int cs46xx_dsp_proc_init (snd_card_t * card, cs46xx_t *chip);
int cs46xx_dsp_proc_done (cs46xx_t *chip);
int cs46xx_dsp_scb_and_task_init (cs46xx_t *chip);
-int cs46xx_dsp_async_init (cs46xx_t *chip,dsp_scb_descriptor_t * fg_entry);
int snd_cs46xx_download (cs46xx_t *chip,u32 *src,unsigned long offset,
unsigned long len);
int snd_cs46xx_clear_BA1(cs46xx_t *chip,unsigned long offset,unsigned long len);
dsp_scb_descriptor_t * cs46xx_dsp_create_scb (cs46xx_t *chip,char * name, u32 * scb_data,u32 dest);
void cs46xx_dsp_proc_free_scb_desc (dsp_scb_descriptor_t * scb);
void cs46xx_dsp_proc_register_scb_desc (cs46xx_t *chip,dsp_scb_descriptor_t * scb);
-dsp_task_descriptor_t * cs46xx_dsp_create_task_tree (cs46xx_t *chip,char * name,
- u32 * task_data,u32 dest,int size);
dsp_scb_descriptor_t * cs46xx_dsp_create_timing_master_scb (cs46xx_t *chip);
dsp_scb_descriptor_t * cs46xx_dsp_create_codec_out_scb(cs46xx_t * chip,char * codec_name,
u16 channel_disp,u16 fifo_addr,
dsp_scb_descriptor_t * parent_scb,
int scb_child_type);
void cs46xx_dsp_remove_scb (cs46xx_t *chip,dsp_scb_descriptor_t * scb);
-dsp_scb_descriptor_t * cs46xx_dsp_create_generic_scb (cs46xx_t *chip,char * name,
- u32 * scb_data,u32 dest,
- char * task_entry_name,
- dsp_scb_descriptor_t * parent_scb,
- int scb_child_type);
dsp_scb_descriptor_t * cs46xx_dsp_create_codec_in_scb(cs46xx_t * chip,char * codec_name,
u16 channel_disp,u16 fifo_addr,
u16 sample_buffer_addr,
u32 dest,dsp_scb_descriptor_t * parent_scb,
int scb_child_type);
-dsp_scb_descriptor_t * cs46xx_dsp_create_pcm_reader_scb(cs46xx_t * chip,char * scb_name,
- u16 sample_buffer_addr,u32 dest,
- int virtual_channel,u32 playback_hw_addr,
- dsp_scb_descriptor_t * parent_scb,
- int scb_child_type);
dsp_scb_descriptor_t * cs46xx_dsp_create_src_task_scb(cs46xx_t * chip,char * scb_name,
int sample_rate,
u16 src_buffer_addr,
u32 dest,
dsp_scb_descriptor_t * parent_scb,
int scb_child_type);
-dsp_scb_descriptor_t * cs46xx_dsp_create_pcm_serial_input_scb(cs46xx_t * chip,char * scb_name,u32 dest,
- dsp_scb_descriptor_t * input_scb,
- dsp_scb_descriptor_t * parent_scb,
- int scb_child_type);
-dsp_scb_descriptor_t * cs46xx_dsp_create_asynch_fg_tx_scb(cs46xx_t * chip,char * scb_name,u32 dest,
- u16 hfg_scb_address,
- u16 asynch_buffer_address,
- dsp_scb_descriptor_t * parent_scb,
- int scb_child_type);
dsp_scb_descriptor_t * cs46xx_dsp_create_asynch_fg_rx_scb(cs46xx_t * chip,char * scb_name,u32 dest,
u16 hfg_scb_address,
u16 asynch_buffer_address,
u16 mix_buffer_addr,u16 writeback_spb,u32 dest,
dsp_scb_descriptor_t * parent_scb,
int scb_child_type);
-dsp_scb_descriptor_t * cs46xx_dsp_create_output_snoop_scb(cs46xx_t * chip,char * scb_name,u32 dest,
- u16 snoop_buffer_address,
- dsp_scb_descriptor_t * snoop_scb,
- dsp_scb_descriptor_t * parent_scb,
- int scb_child_type);
dsp_scb_descriptor_t * cs46xx_dsp_create_magic_snoop_scb(cs46xx_t * chip,char * scb_name,u32 dest,
u16 snoop_buffer_address,
dsp_scb_descriptor_t * snoop_scb,
#include "cs46xx_lib.h"
#include "dsp_spos.h"
+static int cs46xx_dsp_async_init (cs46xx_t *chip, dsp_scb_descriptor_t * fg_entry);
+
static wide_opcode_t wide_opcodes[] = {
WIDE_FOR_BEGIN_LOOP,
WIDE_FOR_BEGIN_LOOP2,
cs46xx_dsp_proc_free_scb_desc ( (ins->scbs + i) );
}
- if (ins->code.data)
- kfree(ins->code.data);
-
- if (ins->symbol_table.symbols)
- vfree(ins->symbol_table.symbols);
-
- if (ins->modules)
- kfree(ins->modules);
-
+ kfree(ins->code.data);
+ vfree(ins->symbol_table.symbols);
+ kfree(ins->modules);
kfree(ins);
up(&chip->spos_mutex);
}
}
-symbol_entry_t * cs46xx_dsp_lookup_symbol_addr (cs46xx_t * chip, u32 address, int symbol_type)
+static symbol_entry_t * cs46xx_dsp_lookup_symbol_addr (cs46xx_t * chip, u32 address, int symbol_type)
{
int i;
dsp_spos_instance_t * ins = chip->dsp_spos_instance;
int i;
for (i = 0; i < size; ++i) {
- if (debug_tree) printk ("addr %p, val %08x\n", spdst,task_data[i]);
+ if (debug_tree) printk ("addr %p, val %08x\n",spdst,task_data[i]);
writel(task_data[i],spdst);
spdst += sizeof(u32);
}
int i;
for (i = 0; i < 0x10; ++i) {
- if (debug_scb) printk ("addr %p, val %08x\n", spdst,scb_data[i]);
+ if (debug_scb) printk ("addr %p, val %08x\n",spdst,scb_data[i]);
writel(scb_data[i],spdst);
spdst += sizeof(u32);
}
}
-dsp_task_descriptor_t * cs46xx_dsp_create_task_tree (cs46xx_t *chip,char * name, u32 * task_data,u32 dest,int size)
+static dsp_task_descriptor_t * cs46xx_dsp_create_task_tree (cs46xx_t *chip,char * name, u32 * task_data,u32 dest,int size)
{
dsp_task_descriptor_t * desc;
return -EINVAL;
}
-int cs46xx_dsp_async_init (cs46xx_t *chip, dsp_scb_descriptor_t * fg_entry)
+static int cs46xx_dsp_async_init (cs46xx_t *chip, dsp_scb_descriptor_t * fg_entry)
{
dsp_spos_instance_t * ins = chip->dsp_spos_instance;
symbol_entry_t * s16_async_codec_input_task;
#ifndef __HEADER_cwcdma_H__
#define __HEADER_cwcdma_H__
-symbol_entry_t cwcdma_symbols[] = {
+static symbol_entry_t cwcdma_symbols[] = {
{ 0x8000, "EXECCHILD",0x03 },
{ 0x8001, "EXECCHILD_98",0x03 },
{ 0x8003, "EXECCHILD_PUSH1IND",0x03 },
{ 0x0018, "#CODE_END",0x00 },
}; /* cwcdma symbols */
-u32 cwcdma_code[] = {
+static u32 cwcdma_code[] = {
/* OVERLAYBEGINADDRESS */
/* 0000 */ 0x00002731,0x00001400,0x0004c108,0x000e5044,
/* 0002 */ 0x0005f608,0x00000000,0x000007ae,0x000be300,
/* #CODE_END */
-segment_desc_t cwcdma_segments[] = {
+static segment_desc_t cwcdma_segments[] = {
{ SEGTYPE_SP_PROGRAM, 0x00000000, 0x00000030, cwcdma_code },
};
-dsp_module_desc_t cwcdma_module = {
+static dsp_module_desc_t cwcdma_module = {
"cwcdma",
{
27,
snd-emu10k1-objs := emu10k1.o emu10k1_main.o \
irq.o memory.o voice.o emumpu401.o emupcm.o io.o \
- emuproc.o emumixer.o emufx.o
+ emuproc.o emumixer.o emufx.o timer.o
snd-emu10k1-synth-objs := emu10k1_synth.o emu10k1_callback.o emu10k1_patch.o
+snd-emu10k1x-objs := emu10k1x.o
#
# this function returns:
# Toplevel Module Dependency
obj-$(CONFIG_SND_EMU10K1) += snd-emu10k1.o
obj-$(call sequencer,$(CONFIG_SND_EMU10K1)) += snd-emu10k1-synth.o
+obj-$(CONFIG_SND_EMU10K1X) += snd-emu10k1x.o
/*
* create a new hardware dependent device for Emu10k1
*/
-int snd_emu10k1_synth_new_device(snd_seq_device_t *dev)
+static int snd_emu10k1_synth_new_device(snd_seq_device_t *dev)
{
snd_emux_t *emu;
emu10k1_t *hw;
return 0;
}
-int snd_emu10k1_synth_delete_device(snd_seq_device_t *dev)
+static int snd_emu10k1_synth_delete_device(snd_seq_device_t *dev)
{
snd_emux_t *emu;
emu10k1_t *hw;
--- /dev/null
+/*
+ * Copyright (c) by Francisco Moraes <fmoraes@nc.rr.com>
+ * Driver for EMU10K1X chips
+ *
+ * Parts of this code were adapted from audigyls.c driver which is
+ * Copyright (c) by James Courtier-Dutton <James@superbug.demon.co.uk>
+ *
+ * BUGS:
+ * --
+ *
+ * TODO:
+ *
+ * Chips (SB0200 model):
+ * - EMU10K1X-DBQ
+ * - STAC 9708T
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+#include <sound/driver.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/ac97_codec.h>
+#include <sound/info.h>
+#include <sound/rawmidi.h>
+
+MODULE_AUTHOR("Francisco Moraes <fmoraes@nc.rr.com>");
+MODULE_DESCRIPTION("EMU10K1X");
+MODULE_LICENSE("GPL");
+MODULE_SUPPORTED_DEVICE("{{Dell Creative Labs,SB Live!}}");
+
+// module parameters (see "Module Parameters")
+static int index[SNDRV_CARDS] = SNDRV_DEFAULT_IDX;
+static char *id[SNDRV_CARDS] = SNDRV_DEFAULT_STR;
+static int enable[SNDRV_CARDS] = SNDRV_DEFAULT_ENABLE_PNP;
+
+module_param_array(index, int, NULL, 0444);
+MODULE_PARM_DESC(index, "Index value for the EMU10K1X soundcard.");
+module_param_array(id, charp, NULL, 0444);
+MODULE_PARM_DESC(id, "ID string for the EMU10K1X soundcard.");
+module_param_array(enable, bool, NULL, 0444);
+MODULE_PARM_DESC(enable, "Enable the EMU10K1X soundcard.");
+
+
+// some definitions were borrowed from emu10k1 driver as they seem to be the same
+/************************************************************************************************/
+/* PCI function 0 registers, address = <val> + PCIBASE0 */
+/************************************************************************************************/
+
+#define PTR 0x00 /* Indexed register set pointer register */
+ /* NOTE: The CHANNELNUM and ADDRESS words can */
+ /* be modified independently of each other. */
+
+#define DATA 0x04 /* Indexed register set data register */
+
+#define IPR 0x08 /* Global interrupt pending register */
+ /* Clear pending interrupts by writing a 1 to */
+ /* the relevant bits and zero to the other bits */
+#define IPR_MIDITRANSBUFEMPTY 0x00000001 /* MIDI UART transmit buffer empty */
+#define IPR_MIDIRECVBUFEMPTY 0x00000002 /* MIDI UART receive buffer empty */
+#define IPR_CH_0_LOOP 0x00000800 /* Channel 0 loop */
+#define IPR_CH_0_HALF_LOOP 0x00000100 /* Channel 0 half loop */
+#define IPR_CAP_0_LOOP 0x00080000 /* Channel capture loop */
+#define IPR_CAP_0_HALF_LOOP 0x00010000 /* Channel capture half loop */
+
+#define INTE 0x0c /* Interrupt enable register */
+#define INTE_MIDITXENABLE 0x00000001 /* Enable MIDI transmit-buffer-empty interrupts */
+#define INTE_MIDIRXENABLE 0x00000002 /* Enable MIDI receive-buffer-empty interrupts */
+#define INTE_CH_0_LOOP 0x00000800 /* Channel 0 loop */
+#define INTE_CH_0_HALF_LOOP 0x00000100 /* Channel 0 half loop */
+#define INTE_CAP_0_LOOP 0x00080000 /* Channel capture loop */
+#define INTE_CAP_0_HALF_LOOP 0x00010000 /* Channel capture half loop */
+
+#define HCFG 0x14 /* Hardware config register */
+
+#define HCFG_LOCKSOUNDCACHE	0x00000008	/* 1 = Cancel busmaster accesses to soundcache */
+ /* NOTE: This should generally never be used. */
+#define HCFG_AUDIOENABLE 0x00000001 /* 0 = CODECs transmit zero-valued samples */
+ /* Should be set to 1 when the EMU10K1 is */
+ /* completely initialized. */
+#define GPIO 0x18 /* Defaults: 00001080-Analog, 00001000-SPDIF. */
+
+
+#define AC97DATA 0x1c /* AC97 register set data register (16 bit) */
+
+#define AC97ADDRESS 0x1e /* AC97 register set address register (8 bit) */
+
+/********************************************************************************************************/
+/* Emu10k1x pointer-offset register set, accessed through the PTR and DATA registers */
+/********************************************************************************************************/
+#define PLAYBACK_LIST_ADDR 0x00 /* Base DMA address of a list of pointers to each period/size */
+ /* One list entry: 4 bytes for DMA address,
+ * 4 bytes for period_size << 16.
+ * One list entry is 8 bytes long.
+ * One list entry for each period in the buffer.
+ */
+#define PLAYBACK_LIST_SIZE 0x01 /* Size of list in bytes << 16. E.g. 8 periods -> 0x00380000 */
+#define PLAYBACK_LIST_PTR 0x02 /* Pointer to the current period being played */
+#define PLAYBACK_DMA_ADDR	0x04		/* Playback DMA address */
+#define PLAYBACK_PERIOD_SIZE 0x05 /* Playback period size */
+#define PLAYBACK_POINTER 0x06 /* Playback period pointer. Sample currently in DAC */
+#define PLAYBACK_UNKNOWN1 0x07
+#define PLAYBACK_UNKNOWN2 0x08
+
+/* Only one capture channel supported */
+#define CAPTURE_DMA_ADDR 0x10 /* Capture DMA address */
+#define CAPTURE_BUFFER_SIZE 0x11 /* Capture buffer size */
+#define CAPTURE_POINTER 0x12 /* Capture buffer pointer. Sample currently in ADC */
+#define CAPTURE_UNKNOWN 0x13
+
+/* From 0x20 - 0x3f, last samples played on each channel */
+
+#define TRIGGER_CHANNEL 0x40 /* Trigger channel playback */
+#define TRIGGER_CHANNEL_0 0x00000001 /* Trigger channel 0 */
+#define TRIGGER_CHANNEL_1 0x00000002 /* Trigger channel 1 */
+#define TRIGGER_CHANNEL_2 0x00000004 /* Trigger channel 2 */
+#define TRIGGER_CAPTURE 0x00000100 /* Trigger capture channel */
+
+#define ROUTING 0x41 /* Setup sound routing ? */
+#define ROUTING_FRONT_LEFT 0x00000001
+#define ROUTING_FRONT_RIGHT 0x00000002
+#define ROUTING_REAR_LEFT 0x00000004
+#define ROUTING_REAR_RIGHT 0x00000008
+#define ROUTING_CENTER_LFE 0x00010000
+
+#define SPCS0 0x42 /* SPDIF output Channel Status 0 register */
+
+#define SPCS1 0x43 /* SPDIF output Channel Status 1 register */
+
+#define SPCS2 0x44 /* SPDIF output Channel Status 2 register */
+
+#define SPCS_CLKACCYMASK 0x30000000 /* Clock accuracy */
+#define SPCS_CLKACCY_1000PPM 0x00000000 /* 1000 parts per million */
+#define SPCS_CLKACCY_50PPM 0x10000000 /* 50 parts per million */
+#define SPCS_CLKACCY_VARIABLE 0x20000000 /* Variable accuracy */
+#define SPCS_SAMPLERATEMASK 0x0f000000 /* Sample rate */
+#define SPCS_SAMPLERATE_44 0x00000000 /* 44.1kHz sample rate */
+#define SPCS_SAMPLERATE_48 0x02000000 /* 48kHz sample rate */
+#define SPCS_SAMPLERATE_32 0x03000000 /* 32kHz sample rate */
+#define SPCS_CHANNELNUMMASK 0x00f00000 /* Channel number */
+#define SPCS_CHANNELNUM_UNSPEC 0x00000000 /* Unspecified channel number */
+#define SPCS_CHANNELNUM_LEFT 0x00100000 /* Left channel */
+#define SPCS_CHANNELNUM_RIGHT 0x00200000 /* Right channel */
+#define SPCS_SOURCENUMMASK 0x000f0000 /* Source number */
+#define SPCS_SOURCENUM_UNSPEC 0x00000000 /* Unspecified source number */
+#define SPCS_GENERATIONSTATUS 0x00008000 /* Originality flag (see IEC-958 spec) */
+#define SPCS_CATEGORYCODEMASK 0x00007f00 /* Category code (see IEC-958 spec) */
+#define SPCS_MODEMASK 0x000000c0 /* Mode (see IEC-958 spec) */
+#define SPCS_EMPHASISMASK 0x00000038 /* Emphasis */
+#define SPCS_EMPHASIS_NONE 0x00000000 /* No emphasis */
+#define SPCS_EMPHASIS_50_15 0x00000008 /* 50/15 usec 2 channel */
+#define SPCS_COPYRIGHT 0x00000004 /* Copyright asserted flag -- do not modify */
+#define SPCS_NOTAUDIODATA 0x00000002 /* 0 = Digital audio, 1 = not audio */
+#define SPCS_PROFESSIONAL 0x00000001 /* 0 = Consumer (IEC-958), 1 = pro (AES3-1992) */
+
+#define SPDIF_SELECT 0x45 /* Enables SPDIF or Analogue outputs 0-Analogue, 0x700-SPDIF */
+
+/* This is the MPU port on the card */
+#define MUDATA 0x47
+#define MUCMD 0x48
+#define MUSTAT MUCMD
+
+/* From 0x50 - 0x5f, last samples captured */
+
+/**
+ * The hardware has 3 channels for playback and 1 for capture.
+ * - channel 0 is the front channel
+ * - channel 1 is the rear channel
+ * - channel 2 is the center/lfe channel
+ * Volume is controlled by the AC97 for the front and rear channels by
+ * the PCM Playback Volume, Sigmatel Surround Playback Volume and
+ * Surround Playback Volume. The Sigmatel 4-Speaker Stereo switch affects
+ * the front/rear channel mixing in the REAR OUT jack. When using the
+ * 4-Speaker Stereo, both front and rear channels will be mixed in the
+ * REAR OUT.
+ * The center/lfe channel has no volume control and cannot be muted during
+ * playback.
+ */
+
+typedef struct snd_emu10k1x_voice emu10k1x_voice_t;
+typedef struct snd_emu10k1x emu10k1x_t;
+typedef struct snd_emu10k1x_pcm emu10k1x_pcm_t;
+
+struct snd_emu10k1x_voice {
+ emu10k1x_t *emu;
+ int number;
+ int use;
+
+ emu10k1x_pcm_t *epcm;
+};
+
+struct snd_emu10k1x_pcm {
+ emu10k1x_t *emu;
+ snd_pcm_substream_t *substream;
+ emu10k1x_voice_t *voice;
+ unsigned short running;
+};
+
+typedef struct {
+ struct snd_emu10k1x *emu;
+ snd_rawmidi_t *rmidi;
+ snd_rawmidi_substream_t *substream_input;
+ snd_rawmidi_substream_t *substream_output;
+ unsigned int midi_mode;
+ spinlock_t input_lock;
+ spinlock_t output_lock;
+ spinlock_t open_lock;
+ int tx_enable, rx_enable;
+ int port;
+ int ipr_tx, ipr_rx;
+ void (*interrupt)(emu10k1x_t *emu, unsigned int status);
+} emu10k1x_midi_t;
+
+// definition of the chip-specific record
+struct snd_emu10k1x {
+ snd_card_t *card;
+ struct pci_dev *pci;
+
+ unsigned long port;
+ struct resource *res_port;
+ int irq;
+
+ unsigned int revision; /* chip revision */
+ unsigned int serial; /* serial number */
+ unsigned short model; /* subsystem id */
+
+ spinlock_t emu_lock;
+ spinlock_t voice_lock;
+
+ ac97_t *ac97;
+ snd_pcm_t *pcm;
+
+ emu10k1x_voice_t voices[3];
+ emu10k1x_voice_t capture_voice;
+ u32 spdif_bits[3]; // SPDIF out setup
+
+ struct snd_dma_buffer dma_buffer;
+
+ emu10k1x_midi_t midi;
+};
+
+/* hardware definition */
+static snd_pcm_hardware_t snd_emu10k1x_playback_hw = {
+ .info = (SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .channels_min = 2,
+ .channels_max = 2,
+ .buffer_bytes_max = (32*1024),
+ .period_bytes_min = 64,
+ .period_bytes_max = (16*1024),
+ .periods_min = 2,
+ .periods_max = 8,
+ .fifo_size = 0,
+};
+
+static snd_pcm_hardware_t snd_emu10k1x_capture_hw = {
+ .info = (SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_48000,
+ .rate_min = 48000,
+ .rate_max = 48000,
+ .channels_min = 2,
+ .channels_max = 2,
+ .buffer_bytes_max = (32*1024),
+ .period_bytes_min = 64,
+ .period_bytes_max = (16*1024),
+ .periods_min = 2,
+ .periods_max = 2,
+ .fifo_size = 0,
+};
+
+static unsigned int snd_emu10k1x_ptr_read(emu10k1x_t * emu,
+ unsigned int reg,
+ unsigned int chn)
+{
+ unsigned long flags;
+ unsigned int regptr, val;
+
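+	/* Indexed register access: write (reg << 16) | channel to PTR, then transfer the value through DATA. */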
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + PTR);
+ val = inl(emu->port + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ return val;
+}
+
+static void snd_emu10k1x_ptr_write(emu10k1x_t *emu,
+ unsigned int reg,
+ unsigned int chn,
+ unsigned int data)
+{
+ unsigned int regptr;
+ unsigned long flags;
+
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + PTR);
+ outl(data, emu->port + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static void snd_emu10k1x_intr_enable(emu10k1x_t *emu, unsigned int intrenb)
+{
+ unsigned long flags;
+ unsigned int enable;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ enable = inl(emu->port + INTE) | intrenb;
+ outl(enable, emu->port + INTE);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static void snd_emu10k1x_intr_disable(emu10k1x_t *emu, unsigned int intrenb)
+{
+ unsigned long flags;
+ unsigned int enable;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ enable = inl(emu->port + INTE) & ~intrenb;
+ outl(enable, emu->port + INTE);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static void snd_emu10k1x_gpio_write(emu10k1x_t *emu, unsigned int value)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(value, emu->port + GPIO);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static void snd_emu10k1x_pcm_free_substream(snd_pcm_runtime_t *runtime)
+{
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+
+ if (epcm)
+ kfree(epcm);
+}
+
+static void snd_emu10k1x_pcm_interrupt(emu10k1x_t *emu, emu10k1x_voice_t *voice)
+{
+ emu10k1x_pcm_t *epcm;
+
+ if ((epcm = voice->epcm) == NULL)
+ return;
+ if (epcm->substream == NULL)
+ return;
+#if 0
+ snd_printk(KERN_INFO "IRQ: position = 0x%x, period = 0x%x, size = 0x%x\n",
+ epcm->substream->ops->pointer(epcm->substream),
+ snd_pcm_lib_period_bytes(epcm->substream),
+ snd_pcm_lib_buffer_bytes(epcm->substream));
+#endif
+ snd_pcm_period_elapsed(epcm->substream);
+}
+
+/* open callback */
+static int snd_emu10k1x_playback_open(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *chip = snd_pcm_substream_chip(substream);
+ emu10k1x_pcm_t *epcm;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ int err;
+
+ if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0) {
+ return err;
+ }
+ if ((err = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64)) < 0)
+ return err;
+
+ epcm = kcalloc(1, sizeof(*epcm), GFP_KERNEL);
+ if (epcm == NULL)
+ return -ENOMEM;
+ epcm->emu = chip;
+ epcm->substream = substream;
+
+ runtime->private_data = epcm;
+ runtime->private_free = snd_emu10k1x_pcm_free_substream;
+
+ runtime->hw = snd_emu10k1x_playback_hw;
+
+ return 0;
+}
+
+/* close callback */
+static int snd_emu10k1x_playback_close(snd_pcm_substream_t *substream)
+{
+ return 0;
+}
+
+/* hw_params callback */
+static int snd_emu10k1x_pcm_hw_params(snd_pcm_substream_t *substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+
+ if (! epcm->voice) {
+ epcm->voice = &epcm->emu->voices[substream->pcm->device];
+ epcm->voice->use = 1;
+ epcm->voice->epcm = epcm;
+ }
+
+ return snd_pcm_lib_malloc_pages(substream,
+ params_buffer_bytes(hw_params));
+}
+
+/* hw_free callback */
+static int snd_emu10k1x_pcm_hw_free(snd_pcm_substream_t *substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm;
+
+ if (runtime->private_data == NULL)
+ return 0;
+
+ epcm = runtime->private_data;
+
+ if (epcm->voice) {
+ epcm->voice->use = 0;
+ epcm->voice->epcm = NULL;
+ epcm->voice = NULL;
+ }
+
+ return snd_pcm_lib_free_pages(substream);
+}
+
+/* prepare callback */
+static int snd_emu10k1x_pcm_prepare(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+ int voice = epcm->voice->number;
+ u32 *table_base = (u32 *)(emu->dma_buffer.area+1024*voice);
+ u32 period_size_bytes = frames_to_bytes(runtime, runtime->period_size);
+ int i;
+
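+	/* Build the per-voice period list: each 8-byte entry holds the DMA address of one period followed by the period size in bytes shifted left by 16. */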
+ for(i=0; i < runtime->periods; i++) {
+ *table_base++=runtime->dma_addr+(i*period_size_bytes);
+ *table_base++=period_size_bytes<<16;
+ }
+
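+	/* Each voice uses a 1 KB slot of the shared DMA buffer for its list; the size register takes bytes << 16, so (periods - 1) entries of 8 bytes each gives (periods - 1) << 19. */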
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_LIST_ADDR, voice, emu->dma_buffer.addr+1024*voice);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_LIST_SIZE, voice, (runtime->periods - 1) << 19);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_LIST_PTR, voice, 0);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_POINTER, voice, 0);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_UNKNOWN1, voice, 0);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_UNKNOWN2, voice, 0);
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_DMA_ADDR, voice, runtime->dma_addr);
+
+ snd_emu10k1x_ptr_write(emu, PLAYBACK_PERIOD_SIZE, voice, frames_to_bytes(runtime, runtime->period_size)<<16);
+
+ return 0;
+}
+
+/* trigger callback */
+static int snd_emu10k1x_pcm_trigger(snd_pcm_substream_t *substream,
+ int cmd)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+ int channel = epcm->voice->number;
+ int result = 0;
+
+// snd_printk(KERN_INFO "trigger - emu10k1x = 0x%x, cmd = %i, pointer = %d\n", (int)emu, cmd, (int)substream->ops->pointer(substream));
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
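+		/* With only two periods the half-loop interrupt marks the mid-buffer period boundary, so both interrupts are enabled to get one interrupt per period. */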
+ if(runtime->periods == 2)
+ snd_emu10k1x_intr_enable(emu, (INTE_CH_0_LOOP | INTE_CH_0_HALF_LOOP) << channel);
+ else
+ snd_emu10k1x_intr_enable(emu, INTE_CH_0_LOOP << channel);
+ epcm->running = 1;
+ snd_emu10k1x_ptr_write(emu, TRIGGER_CHANNEL, 0, snd_emu10k1x_ptr_read(emu, TRIGGER_CHANNEL, 0)|(TRIGGER_CHANNEL_0<<channel));
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ epcm->running = 0;
+ snd_emu10k1x_intr_disable(emu, (INTE_CH_0_LOOP | INTE_CH_0_HALF_LOOP) << channel);
+ snd_emu10k1x_ptr_write(emu, TRIGGER_CHANNEL, 0, snd_emu10k1x_ptr_read(emu, TRIGGER_CHANNEL, 0) & ~(TRIGGER_CHANNEL_0<<channel));
+ break;
+ default:
+ result = -EINVAL;
+ break;
+ }
+ return result;
+}
+
+/* pointer callback */
+static snd_pcm_uframes_t
+snd_emu10k1x_pcm_pointer(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+ int channel = epcm->voice->number;
+ snd_pcm_uframes_t ptr = 0, ptr1 = 0, ptr2= 0,ptr3 = 0,ptr4 = 0;
+
+ if (!epcm->running)
+ return 0;
+
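+	/* Read the list pointer before and after the within-period pointer; if it moved in between, re-read the position. The list pointer advances 8 bytes per period, hence the >> 3 below to obtain the period index. */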
+ ptr3 = snd_emu10k1x_ptr_read(emu, PLAYBACK_LIST_PTR, channel);
+ ptr1 = snd_emu10k1x_ptr_read(emu, PLAYBACK_POINTER, channel);
+ ptr4 = snd_emu10k1x_ptr_read(emu, PLAYBACK_LIST_PTR, channel);
+
+ if(ptr4 == 0 && ptr1 == frames_to_bytes(runtime, runtime->buffer_size))
+ return 0;
+
+ if (ptr3 != ptr4)
+ ptr1 = snd_emu10k1x_ptr_read(emu, PLAYBACK_POINTER, channel);
+ ptr2 = bytes_to_frames(runtime, ptr1);
+ ptr2 += (ptr4 >> 3) * runtime->period_size;
+ ptr = ptr2;
+
+ if (ptr >= runtime->buffer_size)
+ ptr -= runtime->buffer_size;
+
+ return ptr;
+}
+
+/* operators */
+static snd_pcm_ops_t snd_emu10k1x_playback_ops = {
+ .open = snd_emu10k1x_playback_open,
+ .close = snd_emu10k1x_playback_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_emu10k1x_pcm_hw_params,
+ .hw_free = snd_emu10k1x_pcm_hw_free,
+ .prepare = snd_emu10k1x_pcm_prepare,
+ .trigger = snd_emu10k1x_pcm_trigger,
+ .pointer = snd_emu10k1x_pcm_pointer,
+};
+
+/* open_capture callback */
+static int snd_emu10k1x_pcm_open_capture(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *chip = snd_pcm_substream_chip(substream);
+ emu10k1x_pcm_t *epcm;
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ int err;
+
+ if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0)
+ return err;
+ if ((err = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64)) < 0)
+ return err;
+
+ epcm = kcalloc(1, sizeof(*epcm), GFP_KERNEL);
+ if (epcm == NULL)
+ return -ENOMEM;
+
+ epcm->emu = chip;
+ epcm->substream = substream;
+
+ runtime->private_data = epcm;
+ runtime->private_free = snd_emu10k1x_pcm_free_substream;
+
+ runtime->hw = snd_emu10k1x_capture_hw;
+
+ return 0;
+}
+
+/* close callback */
+static int snd_emu10k1x_pcm_close_capture(snd_pcm_substream_t *substream)
+{
+ return 0;
+}
+
+/* hw_params callback */
+static int snd_emu10k1x_pcm_hw_params_capture(snd_pcm_substream_t *substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+
+ if (! epcm->voice) {
+ if (epcm->emu->capture_voice.use)
+ return -EBUSY;
+ epcm->voice = &epcm->emu->capture_voice;
+ epcm->voice->epcm = epcm;
+ epcm->voice->use = 1;
+ }
+
+ return snd_pcm_lib_malloc_pages(substream,
+ params_buffer_bytes(hw_params));
+}
+
+/* hw_free callback */
+static int snd_emu10k1x_pcm_hw_free_capture(snd_pcm_substream_t *substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ emu10k1x_pcm_t *epcm;
+
+ if (runtime->private_data == NULL)
+ return 0;
+ epcm = runtime->private_data;
+
+ if (epcm->voice) {
+ epcm->voice->use = 0;
+ epcm->voice->epcm = NULL;
+ epcm->voice = NULL;
+ }
+
+ return snd_pcm_lib_free_pages(substream);
+}
+
+/* prepare capture callback */
+static int snd_emu10k1x_pcm_prepare_capture(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ snd_emu10k1x_ptr_write(emu, CAPTURE_DMA_ADDR, 0, runtime->dma_addr);
+ snd_emu10k1x_ptr_write(emu, CAPTURE_BUFFER_SIZE, 0, frames_to_bytes(runtime, runtime->buffer_size)<<16); // buffer size in bytes
+ snd_emu10k1x_ptr_write(emu, CAPTURE_POINTER, 0, 0);
+ snd_emu10k1x_ptr_write(emu, CAPTURE_UNKNOWN, 0, 0);
+
+ return 0;
+}
+
+/* trigger_capture callback */
+static int snd_emu10k1x_pcm_trigger_capture(snd_pcm_substream_t *substream,
+ int cmd)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+ int result = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ snd_emu10k1x_intr_enable(emu, INTE_CAP_0_LOOP |
+ INTE_CAP_0_HALF_LOOP);
+ snd_emu10k1x_ptr_write(emu, TRIGGER_CHANNEL, 0, snd_emu10k1x_ptr_read(emu, TRIGGER_CHANNEL, 0)|TRIGGER_CAPTURE);
+ epcm->running = 1;
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ epcm->running = 0;
+ snd_emu10k1x_intr_disable(emu, INTE_CAP_0_LOOP |
+ INTE_CAP_0_HALF_LOOP);
+ snd_emu10k1x_ptr_write(emu, TRIGGER_CHANNEL, 0, snd_emu10k1x_ptr_read(emu, TRIGGER_CHANNEL, 0) & ~(TRIGGER_CAPTURE));
+ break;
+ default:
+ result = -EINVAL;
+ break;
+ }
+ return result;
+}
+
+/* pointer_capture callback */
+static snd_pcm_uframes_t
+snd_emu10k1x_pcm_pointer_capture(snd_pcm_substream_t *substream)
+{
+ emu10k1x_t *emu = snd_pcm_substream_chip(substream);
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ emu10k1x_pcm_t *epcm = runtime->private_data;
+ snd_pcm_uframes_t ptr;
+
+ if (!epcm->running)
+ return 0;
+
+ ptr = bytes_to_frames(runtime, snd_emu10k1x_ptr_read(emu, CAPTURE_POINTER, 0));
+ if (ptr >= runtime->buffer_size)
+ ptr -= runtime->buffer_size;
+
+ return ptr;
+}
+
+static snd_pcm_ops_t snd_emu10k1x_capture_ops = {
+ .open = snd_emu10k1x_pcm_open_capture,
+ .close = snd_emu10k1x_pcm_close_capture,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_emu10k1x_pcm_hw_params_capture,
+ .hw_free = snd_emu10k1x_pcm_hw_free_capture,
+ .prepare = snd_emu10k1x_pcm_prepare_capture,
+ .trigger = snd_emu10k1x_pcm_trigger_capture,
+ .pointer = snd_emu10k1x_pcm_pointer_capture,
+};
+
+static unsigned short snd_emu10k1x_ac97_read(ac97_t *ac97,
+ unsigned short reg)
+{
+ emu10k1x_t *emu = ac97->private_data;
+ unsigned long flags;
+ unsigned short val;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outb(reg, emu->port + AC97ADDRESS);
+ val = inw(emu->port + AC97DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ return val;
+}
+
+static void snd_emu10k1x_ac97_write(ac97_t *ac97,
+ unsigned short reg, unsigned short val)
+{
+ emu10k1x_t *emu = ac97->private_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outb(reg, emu->port + AC97ADDRESS);
+ outw(val, emu->port + AC97DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+static int snd_emu10k1x_ac97(emu10k1x_t *chip)
+{
+ ac97_bus_t *pbus;
+ ac97_template_t ac97;
+ int err;
+ static ac97_bus_ops_t ops = {
+ .write = snd_emu10k1x_ac97_write,
+ .read = snd_emu10k1x_ac97_read,
+ };
+
+ if ((err = snd_ac97_bus(chip->card, 0, &ops, NULL, &pbus)) < 0)
+ return err;
+ pbus->no_vra = 1; /* we don't need VRA */
+
+ memset(&ac97, 0, sizeof(ac97));
+ ac97.private_data = chip;
+ return snd_ac97_mixer(pbus, &ac97, &chip->ac97);
+}
+
+static int snd_emu10k1x_free(emu10k1x_t *chip)
+{
+ snd_emu10k1x_ptr_write(chip, TRIGGER_CHANNEL, 0, 0);
+ // disable interrupts
+ outl(0, chip->port + INTE);
+ // disable audio
+ outl(HCFG_LOCKSOUNDCACHE, chip->port + HCFG);
+
+ // release the i/o port
+ if (chip->res_port) {
+ release_resource(chip->res_port);
+ kfree_nocheck(chip->res_port);
+ }
+ // release the irq
+ if (chip->irq >= 0)
+ free_irq(chip->irq, (void *)chip);
+
+ // release the DMA
+ if (chip->dma_buffer.area) {
+ snd_dma_free_pages(&chip->dma_buffer);
+ }
+
+ pci_disable_device(chip->pci);
+
+ // release the data
+ kfree(chip);
+ return 0;
+}
+
+static int snd_emu10k1x_dev_free(snd_device_t *device)
+{
+ emu10k1x_t *chip = device->device_data;
+ return snd_emu10k1x_free(chip);
+}
+
+static irqreturn_t snd_emu10k1x_interrupt(int irq, void *dev_id,
+ struct pt_regs *regs)
+{
+ unsigned int status;
+
+ emu10k1x_t *chip = dev_id;
+ emu10k1x_voice_t *pvoice = chip->voices;
+ int i;
+ int mask;
+
+ status = inl(chip->port + IPR);
+
+ if(status) {
+ // capture interrupt
+ if(status & (IPR_CAP_0_LOOP | IPR_CAP_0_HALF_LOOP)) {
+ emu10k1x_voice_t *pvoice = &chip->capture_voice;
+ if(pvoice->use)
+ snd_emu10k1x_pcm_interrupt(chip, pvoice);
+ else
+ snd_emu10k1x_intr_disable(chip,
+ INTE_CAP_0_LOOP |
+ INTE_CAP_0_HALF_LOOP);
+ }
+
+ mask = IPR_CH_0_LOOP|IPR_CH_0_HALF_LOOP;
+ for(i = 0; i < 3; i++) {
+ if(status & mask) {
+ if(pvoice->use)
+ snd_emu10k1x_pcm_interrupt(chip, pvoice);
+ else
+ snd_emu10k1x_intr_disable(chip, mask);
+ }
+ pvoice++;
+ mask <<= 1;
+ }
+
+ if (status & (IPR_MIDITRANSBUFEMPTY|IPR_MIDIRECVBUFEMPTY)) {
+ if (chip->midi.interrupt)
+ chip->midi.interrupt(chip, status);
+ else
+ snd_emu10k1x_intr_disable(chip, INTE_MIDITXENABLE|INTE_MIDIRXENABLE);
+ }
+
+ // acknowledge the interrupt if necessary
+ if(status)
+ outl(status, chip->port+IPR);
+
+// snd_printk(KERN_INFO "interrupt %08x\n", status);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void snd_emu10k1x_pcm_free(snd_pcm_t *pcm)
+{
+ emu10k1x_t *emu = pcm->private_data;
+ emu->pcm = NULL;
+ snd_pcm_lib_preallocate_free_for_all(pcm);
+}
+
+static int __devinit snd_emu10k1x_pcm(emu10k1x_t *emu, int device, snd_pcm_t **rpcm)
+{
+ snd_pcm_t *pcm;
+ int err;
+ int capture = 0;
+
+ if (rpcm)
+ *rpcm = NULL;
+ if (device == 0)
+ capture = 1;
+
+ if ((err = snd_pcm_new(emu->card, "emu10k1x", device, 1, capture, &pcm)) < 0)
+ return err;
+
+ pcm->private_data = emu;
+ pcm->private_free = snd_emu10k1x_pcm_free;
+
+ switch(device) {
+ case 0:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1x_playback_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_emu10k1x_capture_ops);
+ break;
+ case 1:
+ case 2:
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_emu10k1x_playback_ops);
+ break;
+ }
+
+ pcm->info_flags = 0;
+ pcm->dev_subclass = SNDRV_PCM_SUBCLASS_GENERIC_MIX;
+ switch(device) {
+ case 0:
+ strcpy(pcm->name, "EMU10K1X Front");
+ break;
+ case 1:
+ strcpy(pcm->name, "EMU10K1X Rear");
+ break;
+ case 2:
+ strcpy(pcm->name, "EMU10K1X Center/LFE");
+ break;
+ }
+ emu->pcm = pcm;
+
+ snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
+ snd_dma_pci_data(emu->pci),
+ 32*1024, 32*1024);
+
+ if (rpcm)
+ *rpcm = pcm;
+
+ return 0;
+}
+
+static int __devinit snd_emu10k1x_create(snd_card_t *card,
+ struct pci_dev *pci,
+ emu10k1x_t **rchip)
+{
+ emu10k1x_t *chip;
+ int err;
+ int ch;
+ static snd_device_ops_t ops = {
+ .dev_free = snd_emu10k1x_dev_free,
+ };
+
+ *rchip = NULL;
+
+ if ((err = pci_enable_device(pci)) < 0)
+ return err;
+ if (pci_set_dma_mask(pci, 0x0fffffff) < 0 ||
+ pci_set_consistent_dma_mask(pci, 0x0fffffff) < 0) {
+		snd_printk(KERN_ERR "unable to set 28-bit DMA mask\n");
+ pci_disable_device(pci);
+ return -ENXIO;
+ }
+
+ chip = kcalloc(1, sizeof(*chip), GFP_KERNEL);
+ if (chip == NULL) {
+ pci_disable_device(pci);
+ return -ENOMEM;
+ }
+
+ chip->card = card;
+ chip->pci = pci;
+ chip->irq = -1;
+
+ spin_lock_init(&chip->emu_lock);
+ spin_lock_init(&chip->voice_lock);
+
+ chip->port = pci_resource_start(pci, 0);
+ if ((chip->res_port = request_region(chip->port, 8,
+ "EMU10K1X")) == NULL) {
+ snd_printk(KERN_ERR "emu10k1x: cannot allocate the port 0x%lx\n", chip->port);
+ snd_emu10k1x_free(chip);
+ return -EBUSY;
+ }
+
+ if (request_irq(pci->irq, snd_emu10k1x_interrupt,
+ SA_INTERRUPT|SA_SHIRQ, "EMU10K1X",
+ (void *)chip)) {
+ snd_printk(KERN_ERR "emu10k1x: cannot grab irq %d\n", pci->irq);
+ snd_emu10k1x_free(chip);
+ return -EBUSY;
+ }
+ chip->irq = pci->irq;
+
+ if(snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(pci),
+ 4 * 1024, &chip->dma_buffer) < 0) {
+ snd_emu10k1x_free(chip);
+ return -ENOMEM;
+ }
+
+ pci_set_master(pci);
+ /* read revision & serial */
+ pci_read_config_byte(pci, PCI_REVISION_ID, (char *)&chip->revision);
+ pci_read_config_dword(pci, PCI_SUBSYSTEM_VENDOR_ID, &chip->serial);
+ pci_read_config_word(pci, PCI_SUBSYSTEM_ID, &chip->model);
+ snd_printk(KERN_INFO "Model %04x Rev %08x Serial %08x\n", chip->model,
+ chip->revision, chip->serial);
+
+ outl(0, chip->port + INTE);
+
+ for(ch = 0; ch < 3; ch++) {
+ chip->voices[ch].emu = chip;
+ chip->voices[ch].number = ch;
+ }
+
+ /*
+ * Init to 0x02109204 :
+ * Clock accuracy = 0 (1000ppm)
+ * Sample Rate = 2 (48kHz)
+ * Audio Channel = 1 (Left of 2)
+ * Source Number = 0 (Unspecified)
+ * Generation Status = 1 (Original for Cat Code 12)
+ * Cat Code = 12 (Digital Signal Mixer)
+ * Mode = 0 (Mode 0)
+ * Emphasis = 0 (None)
+ * CP = 1 (Copyright unasserted)
+ * AN = 0 (Audio data)
+ * P = 0 (Consumer)
+ */
+ snd_emu10k1x_ptr_write(chip, SPCS0, 0,
+ chip->spdif_bits[0] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+ snd_emu10k1x_ptr_write(chip, SPCS1, 0,
+ chip->spdif_bits[1] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+ snd_emu10k1x_ptr_write(chip, SPCS2, 0,
+ chip->spdif_bits[2] =
+ SPCS_CLKACCY_1000PPM | SPCS_SAMPLERATE_48 |
+ SPCS_CHANNELNUM_LEFT | SPCS_SOURCENUM_UNSPEC |
+ SPCS_GENERATIONSTATUS | 0x00001200 |
+ 0x00000000 | SPCS_EMPHASIS_NONE | SPCS_COPYRIGHT);
+
+ snd_emu10k1x_ptr_write(chip, SPDIF_SELECT, 0, 0x700); // disable SPDIF
+ snd_emu10k1x_ptr_write(chip, ROUTING, 0, 0x1003F); // routing
+ snd_emu10k1x_gpio_write(chip, 0x1080); // analog mode
+
+ outl(HCFG_LOCKSOUNDCACHE|HCFG_AUDIOENABLE, chip->port+HCFG);
+
+ if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL,
+ chip, &ops)) < 0) {
+ snd_emu10k1x_free(chip);
+ return err;
+ }
+ *rchip = chip;
+ return 0;
+}
+
+static void snd_emu10k1x_proc_reg_read(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ emu10k1x_t *emu = entry->private_data;
+ unsigned long value,value1,value2;
+ unsigned long flags;
+ int i;
+
+ snd_iprintf(buffer, "Registers:\n\n");
+ for(i = 0; i < 0x20; i+=4) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ value = inl(emu->port + i);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ snd_iprintf(buffer, "Register %02X: %08lX\n", i, value);
+ }
+ snd_iprintf(buffer, "\nRegisters\n\n");
+ for(i = 0; i <= 0x48; i++) {
+ value = snd_emu10k1x_ptr_read(emu, i, 0);
+ if(i < 0x10 || (i >= 0x20 && i < 0x40)) {
+ value1 = snd_emu10k1x_ptr_read(emu, i, 1);
+ value2 = snd_emu10k1x_ptr_read(emu, i, 2);
+ snd_iprintf(buffer, "%02X: %08lX %08lX %08lX\n", i, value, value1, value2);
+ } else {
+ snd_iprintf(buffer, "%02X: %08lX\n", i, value);
+ }
+ }
+}
+
+static void snd_emu10k1x_proc_reg_write(snd_info_entry_t *entry,
+ snd_info_buffer_t *buffer)
+{
+ emu10k1x_t *emu = entry->private_data;
+ char line[64];
+ unsigned int reg, channel_id , val;
+
+ while (!snd_info_get_line(buffer, line, sizeof(line))) {
+		if (sscanf(line, "%x %x %x", &reg, &channel_id, &val) != 3)
+ continue;
+
+ if ((reg < 0x49) && (reg >=0) && (val <= 0xffffffff)
+ && (channel_id >=0) && (channel_id <= 2) )
+ snd_emu10k1x_ptr_write(emu, reg, channel_id, val);
+ }
+}
+
+static int __devinit snd_emu10k1x_proc_init(emu10k1x_t * emu)
+{
+ snd_info_entry_t *entry;
+
+ if(! snd_card_proc_new(emu->card, "emu10k1x_regs", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu10k1x_proc_reg_read);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu10k1x_proc_reg_write;
+ entry->private_data = emu;
+ }
+
+ return 0;
+}
+
+static int snd_emu10k1x_shared_spdif_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_BOOLEAN;
+ uinfo->count = 1;
+ uinfo->value.integer.min = 0;
+ uinfo->value.integer.max = 1;
+ return 0;
+}
+
+static int snd_emu10k1x_shared_spdif_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ emu10k1x_t *emu = snd_kcontrol_chip(kcontrol);
+
+ ucontrol->value.integer.value[0] = (snd_emu10k1x_ptr_read(emu, SPDIF_SELECT, 0) == 0x700) ? 0 : 1;
+
+ return 0;
+}
+
+static int snd_emu10k1x_shared_spdif_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ emu10k1x_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int val;
+ int change = 0;
+
+	val = ucontrol->value.integer.value[0];
+
+ if (val) {
+ // enable spdif output
+ snd_emu10k1x_ptr_write(emu, SPDIF_SELECT, 0, 0x000);
+ snd_emu10k1x_ptr_write(emu, ROUTING, 0, 0x700);
+ snd_emu10k1x_gpio_write(emu, 0x1000);
+ } else {
+ // disable spdif output
+ snd_emu10k1x_ptr_write(emu, SPDIF_SELECT, 0, 0x700);
+ snd_emu10k1x_ptr_write(emu, ROUTING, 0, 0x1003F);
+ snd_emu10k1x_gpio_write(emu, 0x1080);
+ }
+ return change;
+}
+
+static snd_kcontrol_new_t snd_emu10k1x_shared_spdif __devinitdata =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Analog/Digital Output Jack",
+ .info = snd_emu10k1x_shared_spdif_info,
+ .get = snd_emu10k1x_shared_spdif_get,
+ .put = snd_emu10k1x_shared_spdif_put
+};
+
+static int snd_emu10k1x_spdif_info(snd_kcontrol_t *kcontrol, snd_ctl_elem_info_t * uinfo)
+{
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+ uinfo->count = 1;
+ return 0;
+}
+
+static int snd_emu10k1x_spdif_get(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ emu10k1x_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int idx = snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+
+ ucontrol->value.iec958.status[0] = (emu->spdif_bits[idx] >> 0) & 0xff;
+ ucontrol->value.iec958.status[1] = (emu->spdif_bits[idx] >> 8) & 0xff;
+ ucontrol->value.iec958.status[2] = (emu->spdif_bits[idx] >> 16) & 0xff;
+ ucontrol->value.iec958.status[3] = (emu->spdif_bits[idx] >> 24) & 0xff;
+ return 0;
+}
+
+static int snd_emu10k1x_spdif_get_mask(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ ucontrol->value.iec958.status[0] = 0xff;
+ ucontrol->value.iec958.status[1] = 0xff;
+ ucontrol->value.iec958.status[2] = 0xff;
+ ucontrol->value.iec958.status[3] = 0xff;
+ return 0;
+}
+
+static int snd_emu10k1x_spdif_put(snd_kcontrol_t * kcontrol,
+ snd_ctl_elem_value_t * ucontrol)
+{
+ emu10k1x_t *emu = snd_kcontrol_chip(kcontrol);
+ unsigned int idx = snd_ctl_get_ioffidx(kcontrol, &ucontrol->id);
+ int change;
+ unsigned int val;
+
+ val = (ucontrol->value.iec958.status[0] << 0) |
+ (ucontrol->value.iec958.status[1] << 8) |
+ (ucontrol->value.iec958.status[2] << 16) |
+ (ucontrol->value.iec958.status[3] << 24);
+ change = val != emu->spdif_bits[idx];
+ if (change) {
+ snd_emu10k1x_ptr_write(emu, SPCS0 + idx, 0, val);
+ emu->spdif_bits[idx] = val;
+ }
+ return change;
+}
+
+static snd_kcontrol_new_t snd_emu10k1x_spdif_mask_control =
+{
+ .access = SNDRV_CTL_ELEM_ACCESS_READ,
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = SNDRV_CTL_NAME_IEC958("",PLAYBACK,MASK),
+ .count = 3,
+ .info = snd_emu10k1x_spdif_info,
+ .get = snd_emu10k1x_spdif_get_mask
+};
+
+static snd_kcontrol_new_t snd_emu10k1x_spdif_control =
+{
+ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = SNDRV_CTL_NAME_IEC958("",PLAYBACK,DEFAULT),
+ .count = 3,
+ .info = snd_emu10k1x_spdif_info,
+ .get = snd_emu10k1x_spdif_get,
+ .put = snd_emu10k1x_spdif_put
+};
+
+static int __devinit snd_emu10k1x_mixer(emu10k1x_t *emu)
+{
+ int err;
+ snd_kcontrol_t *kctl;
+ snd_card_t *card = emu->card;
+
+ if ((kctl = snd_ctl_new1(&snd_emu10k1x_spdif_mask_control, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_emu10k1x_shared_spdif, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+ if ((kctl = snd_ctl_new1(&snd_emu10k1x_spdif_control, emu)) == NULL)
+ return -ENOMEM;
+ if ((err = snd_ctl_add(card, kctl)))
+ return err;
+
+ return 0;
+}
+
+#define EMU10K1X_MIDI_MODE_INPUT (1<<0)
+#define EMU10K1X_MIDI_MODE_OUTPUT (1<<1)
+
+static inline unsigned char mpu401_read(emu10k1x_t *emu, emu10k1x_midi_t *mpu, int idx)
+{
+ return (unsigned char)snd_emu10k1x_ptr_read(emu, mpu->port + idx, 0);
+}
+
+static inline void mpu401_write(emu10k1x_t *emu, emu10k1x_midi_t *mpu, int data, int idx)
+{
+ snd_emu10k1x_ptr_write(emu, mpu->port + idx, 0, data);
+}
+
+#define mpu401_write_data(emu, mpu, data) mpu401_write(emu, mpu, data, 0)
+#define mpu401_write_cmd(emu, mpu, data) mpu401_write(emu, mpu, data, 1)
+#define mpu401_read_data(emu, mpu) mpu401_read(emu, mpu, 0)
+#define mpu401_read_stat(emu, mpu) mpu401_read(emu, mpu, 1)
+
+#define mpu401_input_avail(emu,mpu) (!(mpu401_read_stat(emu,mpu) & 0x80))
+#define mpu401_output_ready(emu,mpu) (!(mpu401_read_stat(emu,mpu) & 0x40))
+
+#define MPU401_RESET 0xff
+#define MPU401_ENTER_UART 0x3f
+#define MPU401_ACK 0xfe
+
+static void mpu401_clear_rx(emu10k1x_t *emu, emu10k1x_midi_t *mpu)
+{
+ int timeout = 100000;
+ for (; timeout > 0 && mpu401_input_avail(emu, mpu); timeout--)
+ mpu401_read_data(emu, mpu);
+#ifdef CONFIG_SND_DEBUG
+ if (timeout <= 0)
+ snd_printk(KERN_ERR "cmd: clear rx timeout (status = 0x%x)\n", mpu401_read_stat(emu, mpu));
+#endif
+}
+
+/*
+
+ */
+
+static void do_emu10k1x_midi_interrupt(emu10k1x_t *emu, emu10k1x_midi_t *midi, unsigned int status)
+{
+ unsigned char byte;
+
+ if (midi->rmidi == NULL) {
+ snd_emu10k1x_intr_disable(emu, midi->tx_enable | midi->rx_enable);
+ return;
+ }
+
+ spin_lock(&midi->input_lock);
+ if ((status & midi->ipr_rx) && mpu401_input_avail(emu, midi)) {
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_INPUT)) {
+ mpu401_clear_rx(emu, midi);
+ } else {
+ byte = mpu401_read_data(emu, midi);
+ spin_unlock(&midi->input_lock);
+ if (midi->substream_input)
+ snd_rawmidi_receive(midi->substream_input, &byte, 1);
+ spin_lock(&midi->input_lock);
+ }
+ }
+ spin_unlock(&midi->input_lock);
+
+ spin_lock(&midi->output_lock);
+ if ((status & midi->ipr_tx) && mpu401_output_ready(emu, midi)) {
+ if (midi->substream_output &&
+ snd_rawmidi_transmit(midi->substream_output, &byte, 1) == 1) {
+ mpu401_write_data(emu, midi, byte);
+ } else {
+ snd_emu10k1x_intr_disable(emu, midi->tx_enable);
+ }
+ }
+ spin_unlock(&midi->output_lock);
+}
+
+static void snd_emu10k1x_midi_interrupt(emu10k1x_t *emu, unsigned int status)
+{
+ do_emu10k1x_midi_interrupt(emu, &emu->midi, status);
+}
+
+static void snd_emu10k1x_midi_cmd(emu10k1x_t * emu, emu10k1x_midi_t *midi, unsigned char cmd, int ack)
+{
+ unsigned long flags;
+ int timeout, ok;
+
+ spin_lock_irqsave(&midi->input_lock, flags);
+ mpu401_write_data(emu, midi, 0x00);
+ /* mpu401_clear_rx(emu, midi); */
+
+ mpu401_write_cmd(emu, midi, cmd);
+ if (ack) {
+ ok = 0;
+ timeout = 10000;
+ while (!ok && timeout-- > 0) {
+ if (mpu401_input_avail(emu, midi)) {
+ if (mpu401_read_data(emu, midi) == MPU401_ACK)
+ ok = 1;
+ }
+ }
+ if (!ok && mpu401_read_data(emu, midi) == MPU401_ACK)
+ ok = 1;
+ } else {
+ ok = 1;
+ }
+ spin_unlock_irqrestore(&midi->input_lock, flags);
+ if (!ok)
+ snd_printk(KERN_ERR "midi_cmd: 0x%x failed at 0x%lx (status = 0x%x, data = 0x%x)!!!\n",
+ cmd, emu->port,
+ mpu401_read_stat(emu, midi),
+ mpu401_read_data(emu, midi));
+}
+
+static int snd_emu10k1x_midi_input_open(snd_rawmidi_substream_t * substream)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ unsigned long flags;
+
+ emu = midi->emu;
+ snd_assert(emu, return -ENXIO);
+ spin_lock_irqsave(&midi->open_lock, flags);
+ midi->midi_mode |= EMU10K1X_MIDI_MODE_INPUT;
+ midi->substream_input = substream;
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_OUTPUT)) {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_RESET, 1);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_ENTER_UART, 1);
+ } else {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ }
+ return 0;
+}
+
+static int snd_emu10k1x_midi_output_open(snd_rawmidi_substream_t * substream)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ unsigned long flags;
+
+ emu = midi->emu;
+ snd_assert(emu, return -ENXIO);
+ spin_lock_irqsave(&midi->open_lock, flags);
+ midi->midi_mode |= EMU10K1X_MIDI_MODE_OUTPUT;
+ midi->substream_output = substream;
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_INPUT)) {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_RESET, 1);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_ENTER_UART, 1);
+ } else {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ }
+ return 0;
+}
+
+static int snd_emu10k1x_midi_input_close(snd_rawmidi_substream_t * substream)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ unsigned long flags;
+
+ emu = midi->emu;
+ snd_assert(emu, return -ENXIO);
+ spin_lock_irqsave(&midi->open_lock, flags);
+ snd_emu10k1x_intr_disable(emu, midi->rx_enable);
+ midi->midi_mode &= ~EMU10K1X_MIDI_MODE_INPUT;
+ midi->substream_input = NULL;
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_OUTPUT)) {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_RESET, 0);
+ } else {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ }
+ return 0;
+}
+
+static int snd_emu10k1x_midi_output_close(snd_rawmidi_substream_t * substream)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ unsigned long flags;
+
+ emu = midi->emu;
+ snd_assert(emu, return -ENXIO);
+ spin_lock_irqsave(&midi->open_lock, flags);
+ snd_emu10k1x_intr_disable(emu, midi->tx_enable);
+ midi->midi_mode &= ~EMU10K1X_MIDI_MODE_OUTPUT;
+ midi->substream_output = NULL;
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_INPUT)) {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ snd_emu10k1x_midi_cmd(emu, midi, MPU401_RESET, 0);
+ } else {
+ spin_unlock_irqrestore(&midi->open_lock, flags);
+ }
+ return 0;
+}
+
+static void snd_emu10k1x_midi_input_trigger(snd_rawmidi_substream_t * substream, int up)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ emu = midi->emu;
+ snd_assert(emu, return);
+
+ if (up)
+ snd_emu10k1x_intr_enable(emu, midi->rx_enable);
+ else
+ snd_emu10k1x_intr_disable(emu, midi->rx_enable);
+}
+
+static void snd_emu10k1x_midi_output_trigger(snd_rawmidi_substream_t * substream, int up)
+{
+ emu10k1x_t *emu;
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)substream->rmidi->private_data;
+ unsigned long flags;
+
+ emu = midi->emu;
+ snd_assert(emu, return);
+
+ if (up) {
+ int max = 4;
+ unsigned char byte;
+
+ /* try to send some amount of bytes here before interrupts */
+ spin_lock_irqsave(&midi->output_lock, flags);
+ while (max > 0) {
+ if (mpu401_output_ready(emu, midi)) {
+ if (!(midi->midi_mode & EMU10K1X_MIDI_MODE_OUTPUT) ||
+ snd_rawmidi_transmit(substream, &byte, 1) != 1) {
+ /* no more data */
+ spin_unlock_irqrestore(&midi->output_lock, flags);
+ return;
+ }
+ mpu401_write_data(emu, midi, byte);
+ max--;
+ } else {
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&midi->output_lock, flags);
+ snd_emu10k1x_intr_enable(emu, midi->tx_enable);
+ } else {
+ snd_emu10k1x_intr_disable(emu, midi->tx_enable);
+ }
+}
+
+/*
+
+ */
+
+static snd_rawmidi_ops_t snd_emu10k1x_midi_output =
+{
+ .open = snd_emu10k1x_midi_output_open,
+ .close = snd_emu10k1x_midi_output_close,
+ .trigger = snd_emu10k1x_midi_output_trigger,
+};
+
+static snd_rawmidi_ops_t snd_emu10k1x_midi_input =
+{
+ .open = snd_emu10k1x_midi_input_open,
+ .close = snd_emu10k1x_midi_input_close,
+ .trigger = snd_emu10k1x_midi_input_trigger,
+};
+
+static void snd_emu10k1x_midi_free(snd_rawmidi_t *rmidi)
+{
+ emu10k1x_midi_t *midi = (emu10k1x_midi_t *)rmidi->private_data;
+ midi->interrupt = NULL;
+ midi->rmidi = NULL;
+}
+
+static int __devinit emu10k1x_midi_init(emu10k1x_t *emu, emu10k1x_midi_t *midi, int device, char *name)
+{
+ snd_rawmidi_t *rmidi;
+ int err;
+
+ if ((err = snd_rawmidi_new(emu->card, name, device, 1, 1, &rmidi)) < 0)
+ return err;
+ midi->emu = emu;
+ spin_lock_init(&midi->open_lock);
+ spin_lock_init(&midi->input_lock);
+ spin_lock_init(&midi->output_lock);
+ strcpy(rmidi->name, name);
+ snd_rawmidi_set_ops(rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT, &snd_emu10k1x_midi_output);
+ snd_rawmidi_set_ops(rmidi, SNDRV_RAWMIDI_STREAM_INPUT, &snd_emu10k1x_midi_input);
+ rmidi->info_flags |= SNDRV_RAWMIDI_INFO_OUTPUT |
+ SNDRV_RAWMIDI_INFO_INPUT |
+ SNDRV_RAWMIDI_INFO_DUPLEX;
+ rmidi->private_data = midi;
+ rmidi->private_free = snd_emu10k1x_midi_free;
+ midi->rmidi = rmidi;
+ return 0;
+}
+
+static int __devinit snd_emu10k1x_midi(emu10k1x_t *emu)
+{
+ emu10k1x_midi_t *midi = &emu->midi;
+ int err;
+
+ if ((err = emu10k1x_midi_init(emu, midi, 0, "EMU10K1X MPU-401 (UART)")) < 0)
+ return err;
+
+ midi->tx_enable = INTE_MIDITXENABLE;
+ midi->rx_enable = INTE_MIDIRXENABLE;
+ midi->port = MUDATA;
+ midi->ipr_tx = IPR_MIDITRANSBUFEMPTY;
+ midi->ipr_rx = IPR_MIDIRECVBUFEMPTY;
+ midi->interrupt = snd_emu10k1x_midi_interrupt;
+ return 0;
+}
+
+static int __devinit snd_emu10k1x_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+{
+ static int dev;
+ snd_card_t *card;
+ emu10k1x_t *chip;
+ int err;
+
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+ if (!enable[dev]) {
+ dev++;
+ return -ENOENT;
+ }
+
+ card = snd_card_new(index[dev], id[dev], THIS_MODULE, 0);
+ if (card == NULL)
+ return -ENOMEM;
+
+ if ((err = snd_emu10k1x_create(card, pci, &chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ if ((err = snd_emu10k1x_pcm(chip, 0, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((err = snd_emu10k1x_pcm(chip, 1, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ if ((err = snd_emu10k1x_pcm(chip, 2, NULL)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ if ((err = snd_emu10k1x_ac97(chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ if ((err = snd_emu10k1x_mixer(chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ if ((err = snd_emu10k1x_midi(chip)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ snd_emu10k1x_proc_init(chip);
+
+ strcpy(card->driver, "EMU10K1X");
+ strcpy(card->shortname, "Dell Sound Blaster Live!");
+ sprintf(card->longname, "%s at 0x%lx irq %i",
+ card->shortname, chip->port, chip->irq);
+
+ if ((err = snd_card_register(card)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
+}
+
+static void __devexit snd_emu10k1x_remove(struct pci_dev *pci)
+{
+ snd_card_free(pci_get_drvdata(pci));
+ pci_set_drvdata(pci, NULL);
+}
+
+// PCI IDs
+static struct pci_device_id snd_emu10k1x_ids[] = {
+ { 0x1102, 0x0006, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, /* Dell OEM version (EMU10K1) */
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, snd_emu10k1x_ids);
+
+// pci_driver definition
+static struct pci_driver driver = {
+ .name = "EMU10K1X",
+ .id_table = snd_emu10k1x_ids,
+ .probe = snd_emu10k1x_probe,
+ .remove = __devexit_p(snd_emu10k1x_remove),
+};
+
+// initialization of the module
+static int __init alsa_card_emu10k1x_init(void)
+{
+ int err;
+
+	if ((err = pci_module_init(&driver)) < 0)
+ return err;
+
+ return 0;
+}
+
+// clean up the module
+static void __exit alsa_card_emu10k1x_exit(void)
+{
+ pci_unregister_driver(&driver);
+}
+
+module_init(alsa_card_emu10k1x_init)
+module_exit(alsa_card_emu10k1x_exit)
send_routing[3] = 3;
memset(send_amount, 0, sizeof(send_amount));
} else {
+ /* mono, left, right (master voice = left) */
tmp = stereo ? (master ? 1 : 2) : 0;
memcpy(send_routing, &mix->send_routing[tmp][0], 8);
memcpy(send_amount, &mix->send_volume[tmp][0], 8);
// setup routing
if (emu->audigy) {
snd_emu10k1_ptr_write(emu, A_FXRT1, voice,
- ((unsigned int)send_routing[3] << 24) |
- ((unsigned int)send_routing[2] << 16) |
- ((unsigned int)send_routing[1] << 8) |
- (unsigned int)send_routing[0]);
+ snd_emu10k1_compose_audigy_fxrt1(send_routing));
snd_emu10k1_ptr_write(emu, A_FXRT2, voice,
- ((unsigned int)send_routing[7] << 24) |
- ((unsigned int)send_routing[6] << 16) |
- ((unsigned int)send_routing[5] << 8) |
- (unsigned int)send_routing[4]);
+ snd_emu10k1_compose_audigy_fxrt2(send_routing));
snd_emu10k1_ptr_write(emu, A_SENDAMOUNTS, voice,
((unsigned int)send_amount[4] << 24) |
((unsigned int)send_amount[5] << 16) |
{
emu10k1_pcm_t *epcm = runtime->private_data;
- if (epcm)
- kfree(epcm);
+ kfree(epcm);
}
static int snd_emu10k1_playback_open(snd_pcm_substream_t * substream)
};
emu10k1_t *emu = entry->private_data;
- unsigned int val;
+ unsigned int val, val1;
int nefx = emu->audigy ? 64 : 32;
char **outputs = emu->audigy ? audigy_outs : creative_outs;
int idx;
snd_iprintf(buffer, "EMU10K1\n\n");
- val = emu->audigy ?
- snd_emu10k1_ptr_read(emu, A_FXRT1, 0) :
- snd_emu10k1_ptr_read(emu, FXRT, 0);
snd_iprintf(buffer, "Card : %s\n",
emu->audigy ? "Audigy" : (emu->APS ? "EMU APS" : "Creative"));
snd_iprintf(buffer, "Internal TRAM (words) : 0x%x\n", emu->fx8010.itram_size);
snd_iprintf(buffer, "External TRAM (words) : 0x%x\n", (int)emu->fx8010.etram_pages.bytes);
snd_iprintf(buffer, "\n");
- if (emu->audigy) {
- snd_iprintf(buffer, "Effect Send Routing : A=%i, B=%i, C=%i, D=%i\n",
- val & 0x3f,
- (val >> 8) & 0x3f,
- (val >> 16) & 0x3f,
- (val >> 24) & 0x3f);
- } else {
- snd_iprintf(buffer, "Effect Send Routing : A=%i, B=%i, C=%i, D=%i\n",
- (val >> 16) & 0x0f,
- (val >> 20) & 0x0f,
- (val >> 24) & 0x0f,
- (val >> 28) & 0x0f);
+ snd_iprintf(buffer, "Effect Send Routing :\n");
+ for (idx = 0; idx < NUM_G; idx++) {
+ val = emu->audigy ?
+ snd_emu10k1_ptr_read(emu, A_FXRT1, idx) :
+ snd_emu10k1_ptr_read(emu, FXRT, idx);
+ val1 = emu->audigy ?
+ snd_emu10k1_ptr_read(emu, A_FXRT2, idx) :
+ 0;
+ if (emu->audigy) {
+ snd_iprintf(buffer, "Ch%i: A=%i, B=%i, C=%i, D=%i, ",
+ idx,
+ val & 0x3f,
+ (val >> 8) & 0x3f,
+ (val >> 16) & 0x3f,
+ (val >> 24) & 0x3f);
+ snd_iprintf(buffer, "E=%i, F=%i, G=%i, H=%i\n",
+ val1 & 0x3f,
+ (val1 >> 8) & 0x3f,
+ (val1 >> 16) & 0x3f,
+ (val1 >> 24) & 0x3f);
+ } else {
+ snd_iprintf(buffer, "Ch%i: A=%i, B=%i, C=%i, D=%i\n",
+ idx,
+ (val >> 16) & 0x0f,
+ (val >> 20) & 0x0f,
+ (val >> 24) & 0x0f,
+ (val >> 28) & 0x0f);
+ }
}
snd_iprintf(buffer, "\nCaptured FX Outputs :\n");
for (idx = 0; idx < nefx; idx++) {
return 0;
}
+#ifdef CONFIG_SND_DEBUG
+static void snd_emu_proc_io_reg_read(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ emu10k1_t *emu = entry->private_data;
+ unsigned long value;
+ unsigned long flags;
+ int i;
+ snd_iprintf(buffer, "IO Registers:\n\n");
+ for(i = 0; i < 0x40; i+=4) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ value = inl(emu->port + i);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ snd_iprintf(buffer, "%02X: %08lX\n", i, value);
+ }
+}
+
+static void snd_emu_proc_io_reg_write(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ emu10k1_t *emu = entry->private_data;
+ unsigned long flags;
+ char line[64];
+ u32 reg, val;
+ while (!snd_info_get_line(buffer, line, sizeof(line))) {
+		if (sscanf(line, "%x %x", &reg, &val) != 2)
+ continue;
+ if ((reg < 0x40) && (reg >=0) && (val <= 0xffffffff) ) {
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(val, emu->port + (reg & 0xfffffffc));
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ }
+ }
+}
+
+static unsigned int snd_ptr_read(emu10k1_t * emu,
+ unsigned int iobase,
+ unsigned int reg,
+ unsigned int chn)
+{
+ unsigned long flags;
+ unsigned int regptr, val;
+
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + iobase + PTR);
+ val = inl(emu->port + iobase + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+ return val;
+}
+
+static void snd_ptr_write(emu10k1_t *emu,
+ unsigned int iobase,
+ unsigned int reg,
+ unsigned int chn,
+ unsigned int data)
+{
+ unsigned int regptr;
+ unsigned long flags;
+
+ regptr = (reg << 16) | chn;
+
+ spin_lock_irqsave(&emu->emu_lock, flags);
+ outl(regptr, emu->port + iobase + PTR);
+ outl(data, emu->port + iobase + DATA);
+ spin_unlock_irqrestore(&emu->emu_lock, flags);
+}
+
+
+static void snd_emu_proc_ptr_reg_read(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer, int iobase, int offset, int length)
+{
+ emu10k1_t *emu = entry->private_data;
+ unsigned long value;
+ int i,j;
+ if (offset+length > 0x80) {
+ snd_iprintf(buffer, "Input values out of range\n");
+ return;
+ }
+ snd_iprintf(buffer, "Registers 0x%x\n", iobase);
+ for(i = offset; i < offset+length; i++) {
+ snd_iprintf(buffer, "%02X: ",i);
+ for (j = 0; j < 4; j++) {
+ if(iobase == 0)
+ value = snd_ptr_read(emu, 0, i, j);
+ else
+ value = snd_ptr_read(emu, 0x20, i, j);
+ snd_iprintf(buffer, "%08lX ", value);
+ }
+ snd_iprintf(buffer, "\n");
+ }
+}
+
+static void snd_emu_proc_ptr_reg_write(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer, int iobase)
+{
+ emu10k1_t *emu = entry->private_data;
+ char line[64];
+ unsigned int reg, channel_id , val;
+ while (!snd_info_get_line(buffer, line, sizeof(line))) {
+		if (sscanf(line, "%x %x %x", &reg, &channel_id, &val) != 3)
+ continue;
+ if ((reg < 0x80) && (reg >=0) && (val <= 0xffffffff) && (channel_id >=0) && (channel_id <= 3) )
+ snd_ptr_write(emu, iobase, reg, channel_id, val);
+ }
+}
+
+static void snd_emu_proc_ptr_reg_write00(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_write(entry, buffer, 0);
+}
+
+static void snd_emu_proc_ptr_reg_write20(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_write(entry, buffer, 0x20);
+}
+
+
+static void snd_emu_proc_ptr_reg_read00a(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_read(entry, buffer, 0, 0, 0x40);
+}
+
+static void snd_emu_proc_ptr_reg_read00b(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_read(entry, buffer, 0, 0x40, 0x40);
+}
+
+static void snd_emu_proc_ptr_reg_read20a(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_read(entry, buffer, 0x20, 0, 0x40);
+}
+
+static void snd_emu_proc_ptr_reg_read20b(snd_info_entry_t *entry,
+ snd_info_buffer_t * buffer)
+{
+ snd_emu_proc_ptr_reg_read(entry, buffer, 0x20, 0x40, 0x40);
+}
+#endif
+
static struct snd_info_entry_ops snd_emu10k1_proc_ops_fx8010 = {
.read = snd_emu10k1_fx8010_read,
};
int __devinit snd_emu10k1_proc_init(emu10k1_t * emu)
{
snd_info_entry_t *entry;
+#ifdef CONFIG_SND_DEBUG
+ if (! snd_card_proc_new(emu->card, "io_regs", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu_proc_io_reg_read);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu_proc_io_reg_write;
+ }
+ if (! snd_card_proc_new(emu->card, "ptr_regs00a", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu_proc_ptr_reg_read00a);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu_proc_ptr_reg_write00;
+ }
+ if (! snd_card_proc_new(emu->card, "ptr_regs00b", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu_proc_ptr_reg_read00b);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu_proc_ptr_reg_write00;
+ }
+ if (! snd_card_proc_new(emu->card, "ptr_regs20a", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu_proc_ptr_reg_read20a);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu_proc_ptr_reg_write20;
+ }
+ if (! snd_card_proc_new(emu->card, "ptr_regs20b", &entry)) {
+ snd_info_set_text_ops(entry, emu, 1024, snd_emu_proc_ptr_reg_read20b);
+ entry->c.text.write_size = 64;
+ entry->c.text.write = snd_emu_proc_ptr_reg_write20;
+ }
+#endif
if (! snd_card_proc_new(emu->card, "emu10k1", &entry))
- snd_info_set_text_ops(entry, emu, 1024, snd_emu10k1_proc_read);
+ snd_info_set_text_ops(entry, emu, 2048, snd_emu10k1_proc_read);
if (! snd_card_proc_new(emu->card, "fx8010_gpr", &entry)) {
entry->content = SNDRV_INFO_CONTENT_DATA;
return 0; /* Should never reach this point */
}
-/*
- * Returns an attenuation based upon a cumulative volume value
- * Algorithm calculates 0x200 - 0x10 log2 (input)
- */
-
-unsigned char snd_emu10k1_sum_vol_attn(unsigned int value)
-{
- unsigned short count = 16, ans;
-
- if (value == 0)
- return 0xFF;
-
- /* Find first SET bit. This is the integer part of the value */
- while ((value & 0x10000) == 0) {
- value <<= 1;
- count--;
- }
-
- /* The REST of the data is the fractional part. */
- ans = (unsigned short) (0x110 - ((count << 4) + ((value & 0x0FFFFL) >> 12)));
- if (ans > 0xFF)
- ans = 0xFF;
-
- return (unsigned char) ans;
-}
status &= ~(IPR_A_MIDITRANSBUFEMPTY2|IPR_A_MIDIRECVBUFEMPTY2);
}
if (status & IPR_INTERVALTIMER) {
- if (emu->timer_interrupt)
- emu->timer_interrupt(emu);
+ if (emu->timer)
+ snd_timer_interrupt(emu->timer, emu->timer->sticks);
else
snd_emu10k1_intr_disable(emu, INTE_INTERVALTIMERENB);
status &= ~IPR_INTERVALTIMER;
return;
for (akidx = 0; akidx < ice->akm_codecs; akidx++) {
akm4xxx_t *ak = &ice->akm[akidx];
- if (ak->private_value[0])
- kfree((void *)ak->private_value[0]);
+ kfree((void*)ak->private_value[0]);
}
kfree(ice->akm);
}
static int get_msg(mixart_mgr_t *mgr, mixart_msg_t *resp, u32 msg_frame_address )
{
unsigned long flags;
- u32 headptr, i;
+ u32 headptr;
u32 size;
int err;
+#ifndef __BIG_ENDIAN
+ unsigned int i;
+#endif
spin_lock_irqsave(&mgr->msg_lock, flags);
err = 0;
}
size -= MSG_DESCRIPTOR_SIZE;
- memcpy_fromio(resp->data, (void *)MIXART_MEM(mgr, msg_frame_address + MSG_HEADER_SIZE ), size);
+ memcpy_fromio(resp->data, MIXART_MEM(mgr, msg_frame_address + MSG_HEADER_SIZE ), size);
resp->size = size;
/* swap if necessary */
/*
* Driver for Digigram miXart soundcards
*
- * hwdep device manager
+ * DSP firmware management
*
* Copyright (c) 2003 by Digigram <alsa@digigram.com>
*
#include <sound/driver.h>
#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/firmware.h>
#include <asm/io.h>
#include <sound/core.h>
#include "mixart.h"
#include "mixart_hwdep.h"
-/* miXart hwdep interface id string */
-#define SND_MIXART_HWDEP_ID "miXart Loader"
-
-static int mixart_hwdep_open(snd_hwdep_t *hw, struct file *file)
-{
- return 0;
-}
-
-static int mixart_hwdep_release(snd_hwdep_t *hw, struct file *file)
-{
- return 0;
-}
-
/**
 * wait for a value on a pseudo register, exit with a timeout
*
u32 p_align;
};
-static int mixart_load_elf(mixart_mgr_t *mgr, snd_hwdep_dsp_image_t *dsp )
+static int mixart_load_elf(mixart_mgr_t *mgr, const struct firmware *dsp )
{
char elf32_magic_number[4] = {0x7f,'E','L','F'};
- snd_mixart_elf32_ehdr_t elf_header;
+ snd_mixart_elf32_ehdr_t *elf_header;
int i;
- if ( copy_from_user(&elf_header, dsp->image , sizeof(snd_mixart_elf32_ehdr_t)) )
- return -EFAULT;
-
+ elf_header = (snd_mixart_elf32_ehdr_t *)dsp->data;
for( i=0; i<4; i++ )
- if ( elf32_magic_number[i] != elf_header.e_ident[i] )
+ if ( elf32_magic_number[i] != elf_header->e_ident[i] )
return -EINVAL;
- if( elf_header.e_phoff != 0 ) {
+ if( elf_header->e_phoff != 0 ) {
snd_mixart_elf32_phdr_t elf_programheader;
- for( i=0; i < be16_to_cpu(elf_header.e_phnum); i++ ) {
- u32 pos = be32_to_cpu(elf_header.e_phoff) + (u32)(i * be16_to_cpu(elf_header.e_phentsize));
+ for( i=0; i < be16_to_cpu(elf_header->e_phnum); i++ ) {
+ u32 pos = be32_to_cpu(elf_header->e_phoff) + (u32)(i * be16_to_cpu(elf_header->e_phentsize));
- if( copy_from_user( &elf_programheader, dsp->image + pos, sizeof(elf_programheader) ) )
- return -EFAULT;
+ memcpy( &elf_programheader, dsp->data + pos, sizeof(elf_programheader) );
if(elf_programheader.p_type != 0) {
if( elf_programheader.p_filesz != 0 ) {
- if(copy_from_user_toio( MIXART_MEM( mgr, be32_to_cpu(elf_programheader.p_vaddr)),
- dsp->image + be32_to_cpu( elf_programheader.p_offset ),
- be32_to_cpu( elf_programheader.p_filesz )))
- return -EFAULT;
+ memcpy_toio( MIXART_MEM( mgr, be32_to_cpu(elf_programheader.p_vaddr)),
+ dsp->data + be32_to_cpu( elf_programheader.p_offset ),
+ be32_to_cpu( elf_programheader.p_filesz ));
}
}
}
return 0;
}
-static int mixart_hwdep_dsp_status(snd_hwdep_t *hw, snd_hwdep_dsp_status_t *info)
-{
- mixart_mgr_t *mgr = hw->private_data;
-
- strcpy(info->id, "miXart");
- info->num_dsps = MIXART_HARDW_FILES_MAX_INDEX;
-
- if (mgr->hwdep->dsp_loaded & (1 << MIXART_MOTHERBOARD_ELF_INDEX))
- info->chip_ready = 1;
-
- info->version = MIXART_DRIVER_VERSION;
- return 0;
-}
-
/*
* get basic information and init miXart
*/
/* firmware base addresses (when hard coded) */
#define MIXART_MOTHERBOARD_XLX_BASE_ADDRESS 0x00600000
-static int mixart_hwdep_dsp_load(snd_hwdep_t *hw, snd_hwdep_dsp_image_t *dsp)
+static int mixart_dsp_load(mixart_mgr_t* mgr, int index, const struct firmware *dsp)
{
- mixart_mgr_t* mgr = hw->private_data;
int err, card_index;
u32 status_xilinx, status_elf, status_daught;
u32 val;
return -EAGAIN; /* try again later */
}
- switch (dsp->index) {
+ switch (index) {
case MIXART_MOTHERBOARD_XLX_INDEX:
/* xilinx already loaded ? */
}
/* check xilinx validity */
- snd_assert(((u32*)(dsp->image))[0]==0xFFFFFFFF, return -EINVAL);
- snd_assert(dsp->length % 4 == 0, return -EINVAL);
+ snd_assert(((u32*)(dsp->data))[0]==0xFFFFFFFF, return -EINVAL);
+ snd_assert(dsp->size % 4 == 0, return -EINVAL);
/* set xilinx status to copying */
writel_be( 1, MIXART_MEM( mgr, MIXART_PSEUDOREG_MXLX_STATUS_OFFSET ));
/* setup xilinx base address */
writel_be( MIXART_MOTHERBOARD_XLX_BASE_ADDRESS, MIXART_MEM( mgr,MIXART_PSEUDOREG_MXLX_BASE_ADDR_OFFSET ));
/* setup code size for xilinx file */
- writel_be( dsp->length, MIXART_MEM( mgr, MIXART_PSEUDOREG_MXLX_SIZE_OFFSET ));
+ writel_be( dsp->size, MIXART_MEM( mgr, MIXART_PSEUDOREG_MXLX_SIZE_OFFSET ));
/* copy xilinx code */
- if (copy_from_user_toio( MIXART_MEM( mgr, MIXART_MOTHERBOARD_XLX_BASE_ADDRESS), dsp->image, dsp->length))
- return -EFAULT;
+ memcpy_toio( MIXART_MEM( mgr, MIXART_MOTHERBOARD_XLX_BASE_ADDRESS), dsp->data, dsp->size);
/* set xilinx status to copy finished */
writel_be( 2, MIXART_MEM( mgr, MIXART_PSEUDOREG_MXLX_STATUS_OFFSET ));
writel_be( 1, MIXART_MEM( mgr, MIXART_PSEUDOREG_ELF_STATUS_OFFSET ));
/* process the copying of the elf packets */
- err = mixart_load_elf( mgr, dsp);
+ err = mixart_load_elf( mgr, dsp );
if (err < 0) return err;
/* set elf status to copy finished */
}
/* check daughterboard xilinx validity */
- snd_assert(((u32*)(dsp->image))[0]==0xFFFFFFFF, return -EINVAL);
- snd_assert(dsp->length % 4 == 0, return -EINVAL);
+ snd_assert(((u32*)(dsp->data))[0]==0xFFFFFFFF, return -EINVAL);
+ snd_assert(dsp->size % 4 == 0, return -EINVAL);
/* inform mixart about the size of the file */
- writel_be( dsp->length, MIXART_MEM( mgr, MIXART_PSEUDOREG_DXLX_SIZE_OFFSET ));
+ writel_be( dsp->size, MIXART_MEM( mgr, MIXART_PSEUDOREG_DXLX_SIZE_OFFSET ));
/* set daughterboard status to 1 */
writel_be( 1, MIXART_MEM( mgr, MIXART_PSEUDOREG_DXLX_STATUS_OFFSET ));
snd_assert(val != 0, return -EINVAL);
/* copy daughterboard xilinx code */
- if (copy_from_user_toio( MIXART_MEM( mgr, val), dsp->image, dsp->length))
- return -EFAULT;
+ memcpy_toio( MIXART_MEM( mgr, val), dsp->data, dsp->size);
/* set daughterboard status to 4 */
writel_be( 4, MIXART_MEM( mgr, MIXART_PSEUDOREG_DXLX_STATUS_OFFSET ));
}
-int snd_mixart_hwdep_new(mixart_mgr_t *mgr)
+#if defined(CONFIG_FW_LOADER) || defined(CONFIG_FW_LOADER_MODULE)
+#if !defined(CONFIG_USE_MIXARTLOADER) && !defined(CONFIG_SND_MIXART) /* built-in kernel */
+#define SND_MIXART_FW_LOADER /* use the standard firmware loader */
+#endif
+#endif
+
+#ifdef SND_MIXART_FW_LOADER
+
+int snd_mixart_setup_firmware(mixart_mgr_t *mgr)
+{
+ static char *fw_files[3] = {
+ "miXart8.xlx", "miXart8.elf", "miXart8AES.xlx"
+ };
+ char path[32];
+
+ const struct firmware *fw_entry;
+ int i, err;
+
+ for (i = 0; i < 3; i++) {
+ sprintf(path, "mixart/%s", fw_files[i]);
+ if (request_firmware(&fw_entry, path, &mgr->pci->dev)) {
+ snd_printk(KERN_ERR "miXart: can't load firmware %s\n", path);
+ return -ENOENT;
+ }
+ /* fake hwdep dsp record */
+ err = mixart_dsp_load(mgr, i, fw_entry);
+ release_firmware(fw_entry);
+ if (err < 0)
+ return err;
+ }
+ return 0;
+}
+
+
+#else /* old style firmware loading */
+
+/* miXart hwdep interface id string */
+#define SND_MIXART_HWDEP_ID "miXart Loader"
+
+static int mixart_hwdep_open(snd_hwdep_t *hw, struct file *file)
+{
+ return 0;
+}
+
+static int mixart_hwdep_release(snd_hwdep_t *hw, struct file *file)
+{
+ return 0;
+}
+
+static int mixart_hwdep_dsp_status(snd_hwdep_t *hw, snd_hwdep_dsp_status_t *info)
+{
+ mixart_mgr_t *mgr = hw->private_data;
+
+ strcpy(info->id, "miXart");
+ info->num_dsps = MIXART_HARDW_FILES_MAX_INDEX;
+
+ if (mgr->hwdep->dsp_loaded & (1 << MIXART_MOTHERBOARD_ELF_INDEX))
+ info->chip_ready = 1;
+
+ info->version = MIXART_DRIVER_VERSION;
+ return 0;
+}
+
+static int mixart_hwdep_dsp_load(snd_hwdep_t *hw, snd_hwdep_dsp_image_t *dsp)
+{
+ mixart_mgr_t* mgr = hw->private_data;
+ struct firmware fw;
+ int err;
+
+ fw.size = dsp->length;
+ fw.data = vmalloc(dsp->length);
+ if (! fw.data) {
+ snd_printk(KERN_ERR "miXart: cannot allocate image size %d\n",
+ (int)dsp->length);
+ return -ENOMEM;
+ }
+ if (copy_from_user(fw.data, dsp->image, dsp->length)) {
+ vfree(fw.data);
+ return -EFAULT;
+ }
+ err = mixart_dsp_load(mgr, dsp->index, &fw);
+ vfree(fw.data);
+ return err;
+}
+
+int snd_mixart_setup_firmware(mixart_mgr_t *mgr)
{
int err;
snd_hwdep_t *hw;
sprintf(hw->name, SND_MIXART_HWDEP_ID);
mgr->hwdep = hw;
mgr->hwdep->dsp_loaded = 0;
- return 0;
+
+ return snd_card_register(mgr->chip[0]->card);
}
+
+#endif /* SND_MIXART_FW_LOADER */
#define MIXART_OIDI 0x008 /* 0000 0000 1000 */
-/* exported */
-int snd_mixart_hwdep_new(mixart_mgr_t *mgr);
+int snd_mixart_setup_firmware(mixart_mgr_t *mgr);
#endif /* __SOUND_MIXART_HWDEP_H */
return 0;
}
-/*
- * bzero(blk + offset, size)
- */
-int snd_trident_synth_bzero(trident_t *trident, snd_util_memblk_t *blk, int offset, int size)
-{
- int page, nextofs, end_offset, temp, temp1;
-
- offset += blk->offset;
- end_offset = offset + size;
- page = get_aligned_page(offset) + 1;
- do {
- nextofs = aligned_page_offset(page);
- temp = nextofs - offset;
- temp1 = end_offset - offset;
- if (temp1 < temp)
- temp = temp1;
- memset(offset_ptr(trident, offset), 0, temp);
- offset = nextofs;
- page++;
- } while (offset < end_offset);
- return 0;
-}
-
/*
* copy_from_user(blk + offset, data, size)
*/
instr = snd_seq_instr_find(trident->synth.ilist, &v->instr, 0, 1);
if (instr != NULL) {
if (instr->ops) {
- if (instr->ops->instr_type == snd_seq_simple_id)
+ if (!strcmp(instr->ops->instr_type, SNDRV_SEQ_INSTR_ID_SIMPLE))
snd_trident_simple_init(v);
}
snd_seq_instr_free_use(trident->synth.ilist, instr);
snd_seq_instr_list_free_cond(p->trident->synth.ilist, &ifree, client, 0);
}
-int snd_trident_synth_event_input(snd_seq_event_t * ev, int direct, void *private_data, int atomic, int hop)
+static int snd_trident_synth_event_input(snd_seq_event_t * ev, int direct, void *private_data, int atomic, int hop)
{
snd_trident_port_t *p = (snd_trident_port_t *) private_data;
--- /dev/null
+/*
+ * ALSA modem driver for VIA VT82xx (South Bridge)
+ *
+ * VT82C686A/B/C, VT8233A/C, VT8235
+ *
+ * Copyright (c) 2000 Jaroslav Kysela <perex@suse.cz>
+ * Tjeerd.Mulder <Tjeerd.Mulder@fujitsu-siemens.com>
+ * 2002 Takashi Iwai <tiwai@suse.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/*
+ * Changes:
+ *
+ * Sep. 2, 2004 Sasha Khapyorsky <sashak@smlink.com>
+ * Modified from original audio driver 'via82xx.c' to support AC97
+ * modems.
+ */
+
+#include <sound/driver.h>
+#include <asm/io.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/moduleparam.h>
+#include <sound/core.h>
+#include <sound/pcm.h>
+#include <sound/pcm_params.h>
+#include <sound/info.h>
+#include <sound/ac97_codec.h>
+#include <sound/initval.h>
+
+#if 0
+#define POINTER_DEBUG
+#endif
+
+MODULE_AUTHOR("Jaroslav Kysela <perex@suse.cz>");
+MODULE_DESCRIPTION("VIA VT82xx modem");
+MODULE_LICENSE("GPL");
+MODULE_SUPPORTED_DEVICE("{{VIA,VT82C686A/B/C modem,pci}}");
+
+static int index[SNDRV_CARDS] = SNDRV_DEFAULT_IDX; /* Index 0-MAX */
+static char *id[SNDRV_CARDS] = SNDRV_DEFAULT_STR; /* ID for this card */
+static int enable[SNDRV_CARDS] = SNDRV_DEFAULT_ENABLE_PNP; /* Enable this card */
+static int ac97_clock[SNDRV_CARDS] = {[0 ... (SNDRV_CARDS - 1)] = 48000};
+
+module_param_array(index, int, NULL, 0444);
+MODULE_PARM_DESC(index, "Index value for VIA 82xx bridge.");
+module_param_array(id, charp, NULL, 0444);
+MODULE_PARM_DESC(id, "ID string for VIA 82xx bridge.");
+module_param_array(enable, bool, NULL, 0444);
+MODULE_PARM_DESC(enable, "Enable modem part of VIA 82xx bridge.");
+module_param_array(ac97_clock, int, NULL, 0444);
+MODULE_PARM_DESC(ac97_clock, "AC'97 codec clock (default 48000Hz).");
+
+
+/*
+ * Direct registers
+ */
+
+#define VIAREG(via, x) ((via)->port + VIA_REG_##x)
+#define VIADEV_REG(viadev, x) ((viadev)->port + VIA_REG_##x)
+
+/* common offsets */
+#define VIA_REG_OFFSET_STATUS 0x00 /* byte - channel status */
+#define VIA_REG_STAT_ACTIVE 0x80 /* RO */
+#define VIA_REG_STAT_PAUSED 0x40 /* RO */
+#define VIA_REG_STAT_TRIGGER_QUEUED 0x08 /* RO */
+#define VIA_REG_STAT_STOPPED 0x04 /* RWC */
+#define VIA_REG_STAT_EOL 0x02 /* RWC */
+#define VIA_REG_STAT_FLAG 0x01 /* RWC */
+#define VIA_REG_OFFSET_CONTROL 0x01 /* byte - channel control */
+#define VIA_REG_CTRL_START 0x80 /* WO */
+#define VIA_REG_CTRL_TERMINATE 0x40 /* WO */
+#define VIA_REG_CTRL_AUTOSTART 0x20
+#define VIA_REG_CTRL_PAUSE 0x08 /* RW */
+#define VIA_REG_CTRL_INT_STOP 0x04
+#define VIA_REG_CTRL_INT_EOL 0x02
+#define VIA_REG_CTRL_INT_FLAG 0x01
+#define VIA_REG_CTRL_RESET 0x01 /* RW - probably reset? undocumented */
+#define VIA_REG_CTRL_INT (VIA_REG_CTRL_INT_FLAG | VIA_REG_CTRL_INT_EOL | VIA_REG_CTRL_AUTOSTART)
+#define VIA_REG_OFFSET_TYPE 0x02 /* byte - channel type (686 only) */
+#define VIA_REG_TYPE_AUTOSTART 0x80 /* RW - autostart at EOL */
+#define VIA_REG_TYPE_16BIT 0x20 /* RW */
+#define VIA_REG_TYPE_STEREO 0x10 /* RW */
+#define VIA_REG_TYPE_INT_LLINE 0x00
+#define VIA_REG_TYPE_INT_LSAMPLE 0x04
+#define VIA_REG_TYPE_INT_LESSONE 0x08
+#define VIA_REG_TYPE_INT_MASK 0x0c
+#define VIA_REG_TYPE_INT_EOL 0x02
+#define VIA_REG_TYPE_INT_FLAG 0x01
+#define VIA_REG_OFFSET_TABLE_PTR 0x04 /* dword - channel table pointer */
+#define VIA_REG_OFFSET_CURR_PTR 0x04 /* dword - channel current pointer */
+#define VIA_REG_OFFSET_STOP_IDX 0x08 /* dword - stop index, channel type, sample rate */
+#define VIA_REG_OFFSET_CURR_COUNT 0x0c /* dword - channel current count (24 bit) */
+#define VIA_REG_OFFSET_CURR_INDEX 0x0f /* byte - channel current index (for via8233 only) */
+
+#define DEFINE_VIA_REGSET(name,val) \
+enum {\
+ VIA_REG_##name##_STATUS = (val),\
+ VIA_REG_##name##_CONTROL = (val) + 0x01,\
+ VIA_REG_##name##_TYPE = (val) + 0x02,\
+ VIA_REG_##name##_TABLE_PTR = (val) + 0x04,\
+ VIA_REG_##name##_CURR_PTR = (val) + 0x04,\
+ VIA_REG_##name##_STOP_IDX = (val) + 0x08,\
+ VIA_REG_##name##_CURR_COUNT = (val) + 0x0c,\
+}
+
+/* modem block */
+DEFINE_VIA_REGSET(MO, 0x40);
+DEFINE_VIA_REGSET(MI, 0x50);
+
+/* AC'97 */
+#define VIA_REG_AC97 0x80 /* dword */
+#define VIA_REG_AC97_CODEC_ID_MASK (3<<30)
+#define VIA_REG_AC97_CODEC_ID_SHIFT 30
+#define VIA_REG_AC97_CODEC_ID_PRIMARY 0x00
+#define VIA_REG_AC97_CODEC_ID_SECONDARY 0x01
+#define VIA_REG_AC97_SECONDARY_VALID (1<<27)
+#define VIA_REG_AC97_PRIMARY_VALID (1<<25)
+#define VIA_REG_AC97_BUSY (1<<24)
+#define VIA_REG_AC97_READ (1<<23)
+#define VIA_REG_AC97_CMD_SHIFT 16
+#define VIA_REG_AC97_CMD_MASK 0x7e
+#define VIA_REG_AC97_DATA_SHIFT 0
+#define VIA_REG_AC97_DATA_MASK 0xffff
+
+#define VIA_REG_SGD_SHADOW 0x84 /* dword */
+#define VIA_REG_SGD_STAT_PB_FLAG (1<<0)
+#define VIA_REG_SGD_STAT_CP_FLAG (1<<1)
+#define VIA_REG_SGD_STAT_FM_FLAG (1<<2)
+#define VIA_REG_SGD_STAT_PB_EOL (1<<4)
+#define VIA_REG_SGD_STAT_CP_EOL (1<<5)
+#define VIA_REG_SGD_STAT_FM_EOL (1<<6)
+#define VIA_REG_SGD_STAT_PB_STOP (1<<8)
+#define VIA_REG_SGD_STAT_CP_STOP (1<<9)
+#define VIA_REG_SGD_STAT_FM_STOP (1<<10)
+#define VIA_REG_SGD_STAT_PB_ACTIVE (1<<12)
+#define VIA_REG_SGD_STAT_CP_ACTIVE (1<<13)
+#define VIA_REG_SGD_STAT_FM_ACTIVE (1<<14)
+#define VIA_REG_SGD_STAT_MR_FLAG (1<<16)
+#define VIA_REG_SGD_STAT_MW_FLAG (1<<17)
+#define VIA_REG_SGD_STAT_MR_EOL (1<<20)
+#define VIA_REG_SGD_STAT_MW_EOL (1<<21)
+#define VIA_REG_SGD_STAT_MR_STOP (1<<24)
+#define VIA_REG_SGD_STAT_MW_STOP (1<<25)
+#define VIA_REG_SGD_STAT_MR_ACTIVE (1<<28)
+#define VIA_REG_SGD_STAT_MW_ACTIVE (1<<29)
+
+#define VIA_REG_GPI_STATUS 0x88
+#define VIA_REG_GPI_INTR 0x8c
+
+#define VIA_TBL_BIT_FLAG 0x40000000
+#define VIA_TBL_BIT_EOL 0x80000000
+
+/* pci space */
+#define VIA_ACLINK_STAT 0x40
+#define VIA_ACLINK_C11_READY 0x20
+#define VIA_ACLINK_C10_READY 0x10
+#define VIA_ACLINK_C01_READY 0x04 /* secondary codec ready */
+#define VIA_ACLINK_LOWPOWER 0x02 /* low-power state */
+#define VIA_ACLINK_C00_READY 0x01 /* primary codec ready */
+#define VIA_ACLINK_CTRL 0x41
+#define VIA_ACLINK_CTRL_ENABLE 0x80 /* 0: disable, 1: enable */
+#define VIA_ACLINK_CTRL_RESET 0x40 /* 0: assert, 1: de-assert */
+#define VIA_ACLINK_CTRL_SYNC 0x20 /* 0: release SYNC, 1: force SYNC hi */
+#define VIA_ACLINK_CTRL_SDO 0x10 /* 0: release SDO, 1: force SDO hi */
+#define VIA_ACLINK_CTRL_VRA 0x08 /* 0: disable VRA, 1: enable VRA */
+#define VIA_ACLINK_CTRL_PCM 0x04 /* 0: disable PCM, 1: enable PCM */
+#define VIA_ACLINK_CTRL_FM 0x02 /* via686 only */
+#define VIA_ACLINK_CTRL_SB 0x01 /* via686 only */
+#define VIA_ACLINK_CTRL_INIT (VIA_ACLINK_CTRL_ENABLE|\
+ VIA_ACLINK_CTRL_RESET|\
+ VIA_ACLINK_CTRL_PCM)
+#define VIA_FUNC_ENABLE 0x42
+#define VIA_FUNC_MIDI_PNP 0x80 /* FIXME: it's 0x40 in the datasheet! */
+#define VIA_FUNC_MIDI_IRQMASK 0x40 /* FIXME: not documented! */
+#define VIA_FUNC_RX2C_WRITE 0x20
+#define VIA_FUNC_SB_FIFO_EMPTY 0x10
+#define VIA_FUNC_ENABLE_GAME 0x08
+#define VIA_FUNC_ENABLE_FM 0x04
+#define VIA_FUNC_ENABLE_MIDI 0x02
+#define VIA_FUNC_ENABLE_SB 0x01
+#define VIA_PNP_CONTROL 0x43
+#define VIA_MC97_CTRL 0x44
+#define VIA_MC97_CTRL_ENABLE 0x80
+#define VIA_MC97_CTRL_SECONDARY 0x40
+#define VIA_MC97_CTRL_INIT (VIA_MC97_CTRL_ENABLE|\
+ VIA_MC97_CTRL_SECONDARY)
+
+
+typedef struct _snd_via82xx_modem via82xx_t;
+typedef struct via_dev viadev_t;
+
+/*
+ * pcm stream
+ */
+
+struct snd_via_sg_table {
+ unsigned int offset;
+ unsigned int size;
+} ;
+
+#define VIA_TABLE_SIZE 255
+
+struct via_dev {
+ unsigned int reg_offset;
+ unsigned long port;
+ int direction; /* playback = 0, capture = 1 */
+ snd_pcm_substream_t *substream;
+ int running;
+ unsigned int tbl_entries; /* # descriptors */
+ struct snd_dma_buffer table;
+ struct snd_via_sg_table *idx_table;
+ /* for recovery from the unexpected pointer */
+ unsigned int lastpos;
+ unsigned int bufsize;
+ unsigned int bufsize2;
+};
+
+enum { TYPE_CARD_VIA82XX_MODEM = 1 };
+
+#define VIA_MAX_MODEM_DEVS 2
+
+struct _snd_via82xx_modem {
+ int irq;
+
+ unsigned long port;
+
+ unsigned int intr_mask; /* SGD_SHADOW mask to check interrupts */
+
+ struct pci_dev *pci;
+ snd_card_t *card;
+
+ unsigned int num_devs;
+ unsigned int playback_devno, capture_devno;
+ viadev_t devs[VIA_MAX_MODEM_DEVS];
+
+ snd_pcm_t *pcms[2];
+
+ ac97_bus_t *ac97_bus;
+ ac97_t *ac97;
+ unsigned int ac97_clock;
+ unsigned int ac97_secondary; /* secondary AC'97 codec is present */
+
+ spinlock_t reg_lock;
+ snd_info_entry_t *proc_entry;
+};
+
+static struct pci_device_id snd_via82xx_modem_ids[] = {
+ { 0x1106, 0x3068, PCI_ANY_ID, PCI_ANY_ID, 0, 0, TYPE_CARD_VIA82XX_MODEM, },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, snd_via82xx_modem_ids);
+
+/*
+ */
+
+/*
+ * allocate and initialize the descriptor buffers
+ * periods = number of periods
+ * fragsize = period size in bytes
+ */
+static int build_via_table(viadev_t *dev, snd_pcm_substream_t *substream,
+ struct pci_dev *pci,
+ unsigned int periods, unsigned int fragsize)
+{
+ unsigned int i, idx, ofs, rest;
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ struct snd_sg_buf *sgbuf = snd_pcm_substream_sgbuf(substream);
+
+ if (dev->table.area == NULL) {
+		/* the start of each list must be aligned to 8 bytes,
+ * but the kernel pages are much bigger, so we don't care
+ */
+ if (snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(chip->pci),
+ PAGE_ALIGN(VIA_TABLE_SIZE * 2 * 8),
+ &dev->table) < 0)
+ return -ENOMEM;
+ }
+ if (! dev->idx_table) {
+ dev->idx_table = kmalloc(sizeof(*dev->idx_table) * VIA_TABLE_SIZE, GFP_KERNEL);
+ if (! dev->idx_table)
+ return -ENOMEM;
+ }
+
+ /* fill the entries */
+ idx = 0;
+ ofs = 0;
+ for (i = 0; i < periods; i++) {
+ rest = fragsize;
+ /* fill descriptors for a period.
+ * a period can be split to several descriptors if it's
+ * over page boundary.
+ */
+ do {
+ unsigned int r;
+ unsigned int flag;
+
+ if (idx >= VIA_TABLE_SIZE) {
+				snd_printk(KERN_ERR "via82xx: too many table entries!\n");
+ return -EINVAL;
+ }
+ ((u32 *)dev->table.area)[idx << 1] = cpu_to_le32((u32)snd_pcm_sgbuf_get_addr(sgbuf, ofs));
+ r = PAGE_SIZE - (ofs % PAGE_SIZE);
+ if (rest < r)
+ r = rest;
+ rest -= r;
+ if (! rest) {
+ if (i == periods - 1)
+ flag = VIA_TBL_BIT_EOL; /* buffer boundary */
+ else
+ flag = VIA_TBL_BIT_FLAG; /* period boundary */
+ } else
+ flag = 0; /* period continues to the next */
+ // printk("via: tbl %d: at %d size %d (rest %d)\n", idx, ofs, r, rest);
+ ((u32 *)dev->table.area)[(idx<<1) + 1] = cpu_to_le32(r | flag);
+ dev->idx_table[idx].offset = ofs;
+ dev->idx_table[idx].size = r;
+ ofs += r;
+ idx++;
+ } while (rest > 0);
+ }
+ dev->tbl_entries = idx;
+ dev->bufsize = periods * fragsize;
+ dev->bufsize2 = dev->bufsize / 2;
+ return 0;
+}
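Each entry that build_via_table() writes is a pair of little-endian 32-bit words: the bus address of a data chunk, and its byte count OR'ed with a boundary flag (VIA_TBL_BIT_FLAG at a period boundary, VIA_TBL_BIT_EOL at the end of the buffer). A period that crosses a page boundary is therefore split over several entries. The following stand-alone sketch (plain user-space C with made-up sizes, not driver code) shows the same splitting arithmetic:

#include <stdio.h>

#define EX_PAGE_SIZE 4096	/* assumed page size for the illustration */

/* Sketch: how one period of 'fragsize' bytes starting at byte offset 'ofs'
 * is cut into page-bounded chunks, mirroring the inner loop above. */
static void split_period(unsigned int ofs, unsigned int fragsize)
{
	unsigned int rest = fragsize;

	while (rest > 0) {
		unsigned int r = EX_PAGE_SIZE - (ofs % EX_PAGE_SIZE);

		if (rest < r)
			r = rest;
		printf("  chunk at offset %u, %u bytes\n", ofs, r);
		ofs += r;
		rest -= r;
	}
}

int main(void)
{
	/* a 6000-byte period starting 1000 bytes before a page boundary
	 * needs three descriptors: 1000 + 4096 + 904 bytes */
	split_period(EX_PAGE_SIZE - 1000, 6000);
	return 0;
}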
+
+
+static int clean_via_table(viadev_t *dev, snd_pcm_substream_t *substream,
+ struct pci_dev *pci)
+{
+ if (dev->table.area) {
+ snd_dma_free_pages(&dev->table);
+ dev->table.area = NULL;
+ }
+ if (dev->idx_table) {
+ kfree(dev->idx_table);
+ dev->idx_table = NULL;
+ }
+ return 0;
+}
+
+/*
+ * Basic I/O
+ */
+
+static inline unsigned int snd_via82xx_codec_xread(via82xx_t *chip)
+{
+ return inl(VIAREG(chip, AC97));
+}
+
+static inline void snd_via82xx_codec_xwrite(via82xx_t *chip, unsigned int val)
+{
+ outl(val, VIAREG(chip, AC97));
+}
+
+static int snd_via82xx_codec_ready(via82xx_t *chip, int secondary)
+{
+ unsigned int timeout = 1000; /* 1ms */
+ unsigned int val;
+
+ while (timeout-- > 0) {
+ udelay(1);
+ if (!((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_BUSY))
+ return val & 0xffff;
+ }
+ snd_printk(KERN_ERR "codec_ready: codec %i is not ready [0x%x]\n", secondary, snd_via82xx_codec_xread(chip));
+ return -EIO;
+}
+
+static int snd_via82xx_codec_valid(via82xx_t *chip, int secondary)
+{
+ unsigned int timeout = 1000; /* 1ms */
+ unsigned int val, val1;
+ unsigned int stat = !secondary ? VIA_REG_AC97_PRIMARY_VALID :
+ VIA_REG_AC97_SECONDARY_VALID;
+
+ while (timeout-- > 0) {
+ val = snd_via82xx_codec_xread(chip);
+ val1 = val & (VIA_REG_AC97_BUSY | stat);
+ if (val1 == stat)
+ return val & 0xffff;
+ udelay(1);
+ }
+ return -EIO;
+}
+
+static void snd_via82xx_codec_wait(ac97_t *ac97)
+{
+ via82xx_t *chip = ac97->private_data;
+ int err;
+ err = snd_via82xx_codec_ready(chip, ac97->num);
+	/* here we need to wait for a fairly long time.. */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ/2);
+}
+
+static void snd_via82xx_codec_write(ac97_t *ac97,
+ unsigned short reg,
+ unsigned short val)
+{
+ via82xx_t *chip = ac97->private_data;
+ unsigned int xval;
+
+ xval = !ac97->num ? VIA_REG_AC97_CODEC_ID_PRIMARY : VIA_REG_AC97_CODEC_ID_SECONDARY;
+ xval <<= VIA_REG_AC97_CODEC_ID_SHIFT;
+ xval |= reg << VIA_REG_AC97_CMD_SHIFT;
+ xval |= val << VIA_REG_AC97_DATA_SHIFT;
+ snd_via82xx_codec_xwrite(chip, xval);
+ snd_via82xx_codec_ready(chip, ac97->num);
+}
+
+static unsigned short snd_via82xx_codec_read(ac97_t *ac97, unsigned short reg)
+{
+ via82xx_t *chip = ac97->private_data;
+ unsigned int xval, val = 0xffff;
+ int again = 0;
+
+ xval = ac97->num << VIA_REG_AC97_CODEC_ID_SHIFT;
+ xval |= ac97->num ? VIA_REG_AC97_SECONDARY_VALID : VIA_REG_AC97_PRIMARY_VALID;
+ xval |= VIA_REG_AC97_READ;
+ xval |= (reg & 0x7f) << VIA_REG_AC97_CMD_SHIFT;
+ while (1) {
+ if (again++ > 3) {
+ snd_printk(KERN_ERR "codec_read: codec %i is not valid [0x%x]\n", ac97->num, snd_via82xx_codec_xread(chip));
+ return 0xffff;
+ }
+ snd_via82xx_codec_xwrite(chip, xval);
+ udelay (20);
+ if (snd_via82xx_codec_valid(chip, ac97->num) >= 0) {
+ udelay(25);
+ val = snd_via82xx_codec_xread(chip);
+ break;
+ }
+ }
+ return val & 0xffff;
+}
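snd_via82xx_codec_write() and snd_via82xx_codec_read() funnel every AC'97 register access through a single 32-bit doorbell register: codec id, register index and data are packed into one word, the word is written, and the busy/valid bits are polled. The packing idea can be modelled in isolation as below; the bit positions here are invented for illustration only (the real VIA_REG_AC97_* shifts are defined elsewhere in the driver and are not reproduced):

#include <stdio.h>
#include <stdint.h>

/* Invented bit layout purely for illustration -- not the VIA hardware's
 * actual register format. */
#define EX_CODEC_ID_SHIFT	30
#define EX_CMD_SHIFT		16
#define EX_DATA_SHIFT		0

static uint32_t pack_write_cmd(unsigned int codec, unsigned int reg, unsigned int val)
{
	return ((uint32_t)codec << EX_CODEC_ID_SHIFT) |
	       ((uint32_t)(reg & 0x7f) << EX_CMD_SHIFT) |
	       ((uint32_t)val << EX_DATA_SHIFT);
}

int main(void)
{
	/* e.g. write 0x0808 to AC'97 register 0x02 on the primary codec */
	printf("doorbell word: 0x%08x\n", pack_write_cmd(0, 0x02, 0x0808));
	return 0;
}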
+
+static void snd_via82xx_channel_reset(via82xx_t *chip, viadev_t *viadev)
+{
+ outb(VIA_REG_CTRL_PAUSE | VIA_REG_CTRL_TERMINATE | VIA_REG_CTRL_RESET,
+ VIADEV_REG(viadev, OFFSET_CONTROL));
+ inb(VIADEV_REG(viadev, OFFSET_CONTROL));
+ udelay(50);
+ /* disable interrupts */
+ outb(0x00, VIADEV_REG(viadev, OFFSET_CONTROL));
+ /* clear interrupts */
+ outb(0x03, VIADEV_REG(viadev, OFFSET_STATUS));
+ outb(0x00, VIADEV_REG(viadev, OFFSET_TYPE)); /* for via686 */
+ // outl(0, VIADEV_REG(viadev, OFFSET_CURR_PTR));
+ viadev->lastpos = 0;
+}
+
+
+/*
+ * Interrupt handler
+ */
+
+static irqreturn_t snd_via82xx_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ via82xx_t *chip = dev_id;
+ unsigned int status;
+ unsigned int i;
+
+ status = inl(VIAREG(chip, SGD_SHADOW));
+ if (! (status & chip->intr_mask)) {
+ return IRQ_NONE;
+ }
+// _skip_sgd:
+
+ /* check status for each stream */
+ spin_lock(&chip->reg_lock);
+ for (i = 0; i < chip->num_devs; i++) {
+ viadev_t *viadev = &chip->devs[i];
+ unsigned char c_status = inb(VIADEV_REG(viadev, OFFSET_STATUS));
+ c_status &= (VIA_REG_STAT_EOL|VIA_REG_STAT_FLAG|VIA_REG_STAT_STOPPED);
+ if (! c_status)
+ continue;
+ if (viadev->substream && viadev->running) {
+ spin_unlock(&chip->reg_lock);
+ snd_pcm_period_elapsed(viadev->substream);
+ spin_lock(&chip->reg_lock);
+ }
+ outb(c_status, VIADEV_REG(viadev, OFFSET_STATUS)); /* ack */
+ }
+ spin_unlock(&chip->reg_lock);
+ return IRQ_HANDLED;
+}
+
+/*
+ * PCM callbacks
+ */
+
+/*
+ * trigger callback
+ */
+static int snd_via82xx_pcm_trigger(snd_pcm_substream_t * substream, int cmd)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+ unsigned char val = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ val |= VIA_REG_CTRL_START;
+ viadev->running = 1;
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ val = VIA_REG_CTRL_TERMINATE;
+ viadev->running = 0;
+ break;
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ val |= VIA_REG_CTRL_PAUSE;
+ viadev->running = 0;
+ break;
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ viadev->running = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+ outb(val, VIADEV_REG(viadev, OFFSET_CONTROL));
+ if (cmd == SNDRV_PCM_TRIGGER_STOP)
+ snd_via82xx_channel_reset(chip, viadev);
+ return 0;
+}
+
+static int snd_via82xx_modem_pcm_trigger(snd_pcm_substream_t * substream, int cmd)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ unsigned int val = 0;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ val = snd_ac97_read(chip->ac97, AC97_GPIO_STATUS);
+ outl(val|AC97_GPIO_LINE1_OH, VIAREG(chip, GPI_STATUS));
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ val = snd_ac97_read(chip->ac97, AC97_GPIO_STATUS);
+ outl(val&~AC97_GPIO_LINE1_OH, VIAREG(chip, GPI_STATUS));
+ break;
+ default:
+ break;
+ }
+ return snd_via82xx_pcm_trigger(substream, cmd);
+}
+
+/*
+ * pointer callbacks
+ */
+
+/*
+ * calculate the linear position at the given sg-buffer index and the rest count
+ */
+
+#define check_invalid_pos(viadev,pos) \
+ ((pos) < viadev->lastpos && ((pos) >= viadev->bufsize2 || viadev->lastpos < viadev->bufsize2))
+
+static inline unsigned int calc_linear_pos(viadev_t *viadev, unsigned int idx, unsigned int count)
+{
+ unsigned int size, res;
+
+ size = viadev->idx_table[idx].size;
+ res = viadev->idx_table[idx].offset + size - count;
+
+ /* check the validity of the calculated position */
+ if (size < count) {
+ snd_printd(KERN_ERR "invalid via82xx_cur_ptr (size = %d, count = %d)\n", (int)size, (int)count);
+ res = viadev->lastpos;
+ } else if (check_invalid_pos(viadev, res)) {
+#ifdef POINTER_DEBUG
+ printk("fail: idx = %i/%i, lastpos = 0x%x, bufsize2 = 0x%x, offsize = 0x%x, size = 0x%x, count = 0x%x\n", idx, viadev->tbl_entries, viadev->lastpos, viadev->bufsize2, viadev->idx_table[idx].offset, viadev->idx_table[idx].size, count);
+#endif
+ if (count && size < count) {
+ snd_printd(KERN_ERR "invalid via82xx_cur_ptr, using last valid pointer\n");
+ res = viadev->lastpos;
+ } else {
+ if (! count)
+ /* bogus count 0 on the DMA boundary? */
+ res = viadev->idx_table[idx].offset;
+ else
+ /* count register returns full size when end of buffer is reached */
+ res = viadev->idx_table[idx].offset + size;
+ if (check_invalid_pos(viadev, res)) {
+ snd_printd(KERN_ERR "invalid via82xx_cur_ptr (2), using last valid pointer\n");
+ res = viadev->lastpos;
+ }
+ }
+ }
+ viadev->lastpos = res; /* remember the last position */
+ if (res >= viadev->bufsize)
+ res -= viadev->bufsize;
+ return res;
+}
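calc_linear_pos() above distrusts the hardware-reported position: check_invalid_pos() flags any value that moved backwards unless the old and new positions merely straddle the half-buffer wrap point. A minimal stand-alone sketch of that predicate, with made-up numbers (not driver code):

#include <stdio.h>

/* Sketch of the check_invalid_pos() idea: a new position is suspicious if it
 * lies behind the previous one and the two do not simply straddle the
 * half-buffer boundary (i.e. a normal wrap). */
static int looks_invalid(unsigned int pos, unsigned int lastpos, unsigned int bufsize2)
{
	return pos < lastpos && (pos >= bufsize2 || lastpos < bufsize2);
}

int main(void)
{
	unsigned int bufsize2 = 8192;	/* half of a 16 KiB ring buffer */

	/* moved backwards within the same half: rejected (prints 1) */
	printf("%d\n", looks_invalid(100, 200, bufsize2));
	/* wrapped from the second half back into the first: accepted (prints 0) */
	printf("%d\n", looks_invalid(100, 9000, bufsize2));
	return 0;
}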
+
+/*
+ * get the current pointer on via686
+ */
+static snd_pcm_uframes_t snd_via686_pcm_pointer(snd_pcm_substream_t *substream)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+ unsigned int idx, ptr, count, res;
+
+ snd_assert(viadev->tbl_entries, return 0);
+ if (!(inb(VIADEV_REG(viadev, OFFSET_STATUS)) & VIA_REG_STAT_ACTIVE))
+ return 0;
+
+ spin_lock(&chip->reg_lock);
+ count = inl(VIADEV_REG(viadev, OFFSET_CURR_COUNT)) & 0xffffff;
+ /* The via686a does not have the current index register,
+ * so we need to calculate the index from CURR_PTR.
+ */
+ ptr = inl(VIADEV_REG(viadev, OFFSET_CURR_PTR));
+ if (ptr <= (unsigned int)viadev->table.addr)
+ idx = 0;
+ else /* CURR_PTR holds the address + 8 */
+ idx = ((ptr - (unsigned int)viadev->table.addr) / 8 - 1) % viadev->tbl_entries;
+ res = calc_linear_pos(viadev, idx, count);
+ spin_unlock(&chip->reg_lock);
+
+ return bytes_to_frames(substream->runtime, res);
+}
+
+/*
+ * hw_params callback:
+ * allocate the buffer and build up the buffer description table
+ */
+static int snd_via82xx_hw_params(snd_pcm_substream_t * substream,
+ snd_pcm_hw_params_t * hw_params)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+ int err;
+
+ err = snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(hw_params));
+ if (err < 0)
+ return err;
+ err = build_via_table(viadev, substream, chip->pci,
+ params_periods(hw_params),
+ params_period_bytes(hw_params));
+ if (err < 0)
+ return err;
+
+ snd_ac97_write(chip->ac97, AC97_LINE1_RATE, params_rate(hw_params));
+ snd_ac97_write(chip->ac97, AC97_LINE1_LEVEL, 0);
+
+ return 0;
+}
+
+/*
+ * hw_free callback:
+ * clean up the buffer description table and release the buffer
+ */
+static int snd_via82xx_hw_free(snd_pcm_substream_t * substream)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+
+ clean_via_table(viadev, substream, chip->pci);
+ snd_pcm_lib_free_pages(substream);
+ return 0;
+}
+
+
+/*
+ * set up the table pointer
+ */
+static void snd_via82xx_set_table_ptr(via82xx_t *chip, viadev_t *viadev)
+{
+ snd_via82xx_codec_ready(chip, chip->ac97_secondary);
+ outl((u32)viadev->table.addr, VIADEV_REG(viadev, OFFSET_TABLE_PTR));
+ udelay(20);
+ snd_via82xx_codec_ready(chip, chip->ac97_secondary);
+}
+
+/*
+ * prepare callback for playback and capture
+ */
+static int snd_via82xx_pcm_prepare(snd_pcm_substream_t *substream)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+
+ snd_via82xx_channel_reset(chip, viadev);
+ /* this must be set after channel_reset */
+ snd_via82xx_set_table_ptr(chip, viadev);
+ outb(VIA_REG_TYPE_AUTOSTART|VIA_REG_TYPE_INT_EOL|VIA_REG_TYPE_INT_FLAG,
+ VIADEV_REG(viadev, OFFSET_TYPE));
+ return 0;
+}
+
+/*
+ * pcm hardware definition, identical for both playback and capture
+ */
+static snd_pcm_hardware_t snd_via82xx_hw =
+{
+ .info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID |
+ SNDRV_PCM_INFO_RESUME |
+ SNDRV_PCM_INFO_PAUSE),
+ .formats = SNDRV_PCM_FMTBIT_U8 | SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000 | SNDRV_PCM_RATE_KNOT,
+ .rate_min = 8000,
+ .rate_max = 16000,
+ .channels_min = 1,
+ .channels_max = 1,
+ .buffer_bytes_max = 128 * 1024,
+ .period_bytes_min = 32,
+ .period_bytes_max = 128 * 1024,
+ .periods_min = 2,
+ .periods_max = VIA_TABLE_SIZE / 2,
+ .fifo_size = 0,
+};
+
+
+/*
+ * open callback skeleton
+ */
+static int snd_via82xx_modem_pcm_open(via82xx_t *chip, viadev_t *viadev, snd_pcm_substream_t * substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ int err;
+ static unsigned int rates[] = { 8000, 9600, 12000, 16000 };
+ static snd_pcm_hw_constraint_list_t hw_constraints_rates = {
+ .count = ARRAY_SIZE(rates),
+ .list = rates,
+ .mask = 0,
+ };
+
+ runtime->hw = snd_via82xx_hw;
+
+ if ((err = snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, &hw_constraints_rates)) < 0)
+ return err;
+
+	/* we may remove the following constraint when we modify table entries
+	   in the interrupt handler */
+ if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0)
+ return err;
+
+ runtime->private_data = viadev;
+ viadev->substream = substream;
+
+ return 0;
+}
+
+
+/*
+ * open callback for playback
+ */
+static int snd_via82xx_playback_open(snd_pcm_substream_t * substream)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = &chip->devs[chip->playback_devno + substream->number];
+
+ return snd_via82xx_modem_pcm_open(chip, viadev, substream);
+}
+
+/*
+ * open callback for capture
+ */
+static int snd_via82xx_capture_open(snd_pcm_substream_t * substream)
+{
+ via82xx_t *chip = snd_pcm_substream_chip(substream);
+ viadev_t *viadev = &chip->devs[chip->capture_devno + substream->pcm->device];
+
+ return snd_via82xx_modem_pcm_open(chip, viadev, substream);
+}
+
+/*
+ * close callback
+ */
+static int snd_via82xx_pcm_close(snd_pcm_substream_t * substream)
+{
+ viadev_t *viadev = (viadev_t *)substream->runtime->private_data;
+
+ viadev->substream = NULL;
+ return 0;
+}
+
+
+/* via686 playback callbacks */
+static snd_pcm_ops_t snd_via686_playback_ops = {
+ .open = snd_via82xx_playback_open,
+ .close = snd_via82xx_pcm_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_via82xx_hw_params,
+ .hw_free = snd_via82xx_hw_free,
+ .prepare = snd_via82xx_pcm_prepare,
+ .trigger = snd_via82xx_modem_pcm_trigger,
+ .pointer = snd_via686_pcm_pointer,
+ .page = snd_pcm_sgbuf_ops_page,
+};
+
+/* via686 capture callbacks */
+static snd_pcm_ops_t snd_via686_capture_ops = {
+ .open = snd_via82xx_capture_open,
+ .close = snd_via82xx_pcm_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_via82xx_hw_params,
+ .hw_free = snd_via82xx_hw_free,
+ .prepare = snd_via82xx_pcm_prepare,
+ .trigger = snd_via82xx_modem_pcm_trigger,
+ .pointer = snd_via686_pcm_pointer,
+ .page = snd_pcm_sgbuf_ops_page,
+};
+
+
+static void init_viadev(via82xx_t *chip, int idx, unsigned int reg_offset, int direction)
+{
+ chip->devs[idx].reg_offset = reg_offset;
+ chip->devs[idx].direction = direction;
+ chip->devs[idx].port = chip->port + reg_offset;
+}
+
+/*
+ * create a pcm instance for via686a/b
+ */
+static int __devinit snd_via686_pcm_new(via82xx_t *chip)
+{
+ snd_pcm_t *pcm;
+ int err;
+
+ chip->playback_devno = 0;
+ chip->capture_devno = 1;
+ chip->num_devs = 2;
+ chip->intr_mask = 0x330000; /* FLAGS | EOL for MR, MW */
+
+ err = snd_pcm_new(chip->card, chip->card->shortname, 0, 1, 1, &pcm);
+ if (err < 0)
+ return err;
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_via686_playback_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_via686_capture_ops);
+ pcm->private_data = chip;
+ strcpy(pcm->name, chip->card->shortname);
+ chip->pcms[0] = pcm;
+ init_viadev(chip, 0, VIA_REG_MO_STATUS, 0);
+ init_viadev(chip, 1, VIA_REG_MI_STATUS, 1);
+
+ if ((err = snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV_SG,
+ snd_dma_pci_data(chip->pci), 64*1024, 128*1024)) < 0)
+ return err;
+
+ return 0;
+}
+
+
+/*
+ * Mixer part
+ */
+
+
+static void snd_via82xx_mixer_free_ac97_bus(ac97_bus_t *bus)
+{
+ via82xx_t *chip = bus->private_data;
+ chip->ac97_bus = NULL;
+}
+
+static void snd_via82xx_mixer_free_ac97(ac97_t *ac97)
+{
+ via82xx_t *chip = ac97->private_data;
+ chip->ac97 = NULL;
+}
+
+
+static int __devinit snd_via82xx_mixer_new(via82xx_t *chip)
+{
+ ac97_template_t ac97;
+ int err;
+ static ac97_bus_ops_t ops = {
+ .write = snd_via82xx_codec_write,
+ .read = snd_via82xx_codec_read,
+ .wait = snd_via82xx_codec_wait,
+ };
+
+ if ((err = snd_ac97_bus(chip->card, 0, &ops, chip, &chip->ac97_bus)) < 0)
+ return err;
+ chip->ac97_bus->private_free = snd_via82xx_mixer_free_ac97_bus;
+ chip->ac97_bus->clock = chip->ac97_clock;
+ chip->ac97_bus->shared_type = AC97_SHARED_TYPE_VIA;
+
+ memset(&ac97, 0, sizeof(ac97));
+ ac97.private_data = chip;
+ ac97.private_free = snd_via82xx_mixer_free_ac97;
+ ac97.pci = chip->pci;
+ ac97.scaps = AC97_SCAP_SKIP_AUDIO;
+ ac97.num = chip->ac97_secondary;
+
+ if ((err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97)) < 0)
+ return err;
+
+ return 0;
+}
+
+
+/*
+ * proc interface
+ */
+static void snd_via82xx_proc_read(snd_info_entry_t *entry, snd_info_buffer_t *buffer)
+{
+ via82xx_t *chip = entry->private_data;
+ int i;
+
+ snd_iprintf(buffer, "%s\n\n", chip->card->longname);
+ for (i = 0; i < 0xa0; i += 4) {
+ snd_iprintf(buffer, "%02x: %08x\n", i, inl(chip->port + i));
+ }
+}
+
+static void __devinit snd_via82xx_proc_init(via82xx_t *chip)
+{
+ snd_info_entry_t *entry;
+
+ if (! snd_card_proc_new(chip->card, "via82xx", &entry))
+ snd_info_set_text_ops(entry, chip, 1024, snd_via82xx_proc_read);
+}
+
+/*
+ *
+ */
+
+static int __devinit snd_via82xx_chip_init(via82xx_t *chip)
+{
+ ac97_t ac97;
+ unsigned int val;
+ int max_count;
+ unsigned char pval;
+
+ memset(&ac97, 0, sizeof(ac97));
+ ac97.private_data = chip;
+
+ pci_read_config_byte(chip->pci, VIA_MC97_CTRL, &pval);
+	if ((pval & VIA_MC97_CTRL_INIT) != VIA_MC97_CTRL_INIT) {
+ pci_write_config_byte(chip->pci, 0x44, pval|VIA_MC97_CTRL_INIT);
+ udelay(100);
+ }
+
+ pci_read_config_byte(chip->pci, VIA_ACLINK_STAT, &pval);
+ if (! (pval & VIA_ACLINK_C00_READY)) { /* codec not ready? */
+ /* deassert ACLink reset, force SYNC */
+ pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL,
+ VIA_ACLINK_CTRL_ENABLE |
+ VIA_ACLINK_CTRL_RESET |
+ VIA_ACLINK_CTRL_SYNC);
+ udelay(100);
+#if 1 /* FIXME: should we do full reset here for all chip models? */
+ pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, 0x00);
+ udelay(100);
+#else
+ /* deassert ACLink reset, force SYNC (warm AC'97 reset) */
+ pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL,
+ VIA_ACLINK_CTRL_RESET|VIA_ACLINK_CTRL_SYNC);
+ udelay(2);
+#endif
+ /* ACLink on, deassert ACLink reset, VSR, SGD data out */
+ pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, VIA_ACLINK_CTRL_INIT);
+ udelay(100);
+ }
+
+ pci_read_config_byte(chip->pci, VIA_ACLINK_CTRL, &pval);
+ if ((pval & VIA_ACLINK_CTRL_INIT) != VIA_ACLINK_CTRL_INIT) {
+ /* ACLink on, deassert ACLink reset, VSR, SGD data out */
+ pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, VIA_ACLINK_CTRL_INIT);
+ udelay(100);
+ }
+
+ /* wait until codec ready */
+ max_count = ((3 * HZ) / 4) + 1;
+ do {
+ pci_read_config_byte(chip->pci, VIA_ACLINK_STAT, &pval);
+ if (pval & VIA_ACLINK_C00_READY) /* primary codec ready */
+ break;
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(1);
+ } while (--max_count > 0);
+
+ if ((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_BUSY)
+ snd_printk("AC'97 codec is not ready [0x%x]\n", val);
+
+ /* and then reset codec.. */
+#if 0 /* do we need it? when? */
+ snd_via82xx_codec_ready(chip, 0);
+ snd_via82xx_codec_write(&ac97, AC97_RESET, 0x0000);
+ snd_via82xx_codec_read(&ac97, 0);
+#endif
+
+ snd_via82xx_codec_xwrite(chip, VIA_REG_AC97_READ |
+ VIA_REG_AC97_SECONDARY_VALID |
+ (VIA_REG_AC97_CODEC_ID_SECONDARY << VIA_REG_AC97_CODEC_ID_SHIFT));
+ max_count = ((3 * HZ) / 4) + 1;
+ snd_via82xx_codec_xwrite(chip, VIA_REG_AC97_READ |
+ VIA_REG_AC97_SECONDARY_VALID |
+ (VIA_REG_AC97_CODEC_ID_SECONDARY << VIA_REG_AC97_CODEC_ID_SHIFT));
+ do {
+ if ((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_SECONDARY_VALID) {
+ chip->ac97_secondary = 1;
+ goto __ac97_ok2;
+ }
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(1);
+ } while (--max_count > 0);
+	/* This is ok; most motherboards have only one codec */
+
+ __ac97_ok2:
+
+ /* route FM trap to IRQ, disable FM trap */
+ // pci_write_config_byte(chip->pci, VIA_FM_NMI_CTRL, 0);
+ /* disable all GPI interrupts */
+ outl(0, VIAREG(chip, GPI_INTR));
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+/*
+ * power management
+ */
+static int snd_via82xx_suspend(snd_card_t *card, unsigned int state)
+{
+ via82xx_t *chip = card->pm_private_data;
+ int i;
+
+ for (i = 0; i < 2; i++)
+ if (chip->pcms[i])
+ snd_pcm_suspend_all(chip->pcms[i]);
+ for (i = 0; i < chip->num_devs; i++)
+ snd_via82xx_channel_reset(chip, &chip->devs[i]);
+ synchronize_irq(chip->irq);
+ snd_ac97_suspend(chip->ac97);
+ pci_set_power_state(chip->pci, 3);
+ pci_disable_device(chip->pci);
+ return 0;
+}
+
+static int snd_via82xx_resume(snd_card_t *card, unsigned int state)
+{
+ via82xx_t *chip = card->pm_private_data;
+ int i;
+
+ pci_enable_device(chip->pci);
+ pci_set_power_state(chip->pci, 0);
+ pci_set_master(chip->pci);
+
+ snd_via82xx_chip_init(chip);
+
+ snd_ac97_resume(chip->ac97);
+
+ for (i = 0; i < chip->num_devs; i++)
+ snd_via82xx_channel_reset(chip, &chip->devs[i]);
+
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static int snd_via82xx_free(via82xx_t *chip)
+{
+ unsigned int i;
+
+ if (chip->irq < 0)
+ goto __end_hw;
+ /* disable interrupts */
+ for (i = 0; i < chip->num_devs; i++)
+ snd_via82xx_channel_reset(chip, &chip->devs[i]);
+ synchronize_irq(chip->irq);
+ __end_hw:
+ if (chip->irq >= 0)
+ free_irq(chip->irq, (void *)chip);
+ pci_release_regions(chip->pci);
+ pci_disable_device(chip->pci);
+ kfree(chip);
+ return 0;
+}
+
+static int snd_via82xx_dev_free(snd_device_t *device)
+{
+ via82xx_t *chip = device->device_data;
+ return snd_via82xx_free(chip);
+}
+
+static int __devinit snd_via82xx_create(snd_card_t * card,
+ struct pci_dev *pci,
+ int chip_type,
+ int revision,
+ unsigned int ac97_clock,
+ via82xx_t ** r_via)
+{
+ via82xx_t *chip;
+ int err;
+ static snd_device_ops_t ops = {
+ .dev_free = snd_via82xx_dev_free,
+ };
+
+ if ((err = pci_enable_device(pci)) < 0)
+ return err;
+
+ if ((chip = kcalloc(1, sizeof(*chip), GFP_KERNEL)) == NULL) {
+ pci_disable_device(pci);
+ return -ENOMEM;
+ }
+
+ spin_lock_init(&chip->reg_lock);
+ chip->card = card;
+ chip->pci = pci;
+ chip->irq = -1;
+
+ if ((err = pci_request_regions(pci, card->driver)) < 0) {
+ kfree(chip);
+ pci_disable_device(pci);
+ return err;
+ }
+ chip->port = pci_resource_start(pci, 0);
+ if (request_irq(pci->irq, snd_via82xx_interrupt, SA_INTERRUPT|SA_SHIRQ,
+ card->driver, (void *)chip)) {
+ snd_printk("unable to grab IRQ %d\n", pci->irq);
+ snd_via82xx_free(chip);
+ return -EBUSY;
+ }
+ chip->irq = pci->irq;
+ if (ac97_clock >= 8000 && ac97_clock <= 48000)
+ chip->ac97_clock = ac97_clock;
+ synchronize_irq(chip->irq);
+
+ if ((err = snd_via82xx_chip_init(chip)) < 0) {
+ snd_via82xx_free(chip);
+ return err;
+ }
+
+ if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops)) < 0) {
+ snd_via82xx_free(chip);
+ return err;
+ }
+
+ /* The 8233 ac97 controller does not implement the master bit
+ * in the pci command register. IMHO this is a violation of the PCI spec.
+ * We call pci_set_master here because it does not hurt. */
+ pci_set_master(pci);
+
+ snd_card_set_dev(card, &pci->dev);
+
+ *r_via = chip;
+ return 0;
+}
+
+
+static int __devinit snd_via82xx_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+{
+ static int dev;
+ snd_card_t *card;
+ via82xx_t *chip;
+ unsigned char revision;
+ int chip_type = 0, card_type;
+ unsigned int i;
+ int err;
+
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+ if (!enable[dev]) {
+ dev++;
+ return -ENOENT;
+ }
+
+ card = snd_card_new(index[dev], id[dev], THIS_MODULE, 0);
+ if (card == NULL)
+ return -ENOMEM;
+
+ card_type = pci_id->driver_data;
+ pci_read_config_byte(pci, PCI_REVISION_ID, &revision);
+ switch (card_type) {
+ case TYPE_CARD_VIA82XX_MODEM:
+ strcpy(card->driver, "VIA82XX-MODEM");
+ sprintf(card->shortname, "VIA 82XX modem");
+ break;
+ default:
+ snd_printk(KERN_ERR "invalid card type %d\n", card_type);
+ err = -EINVAL;
+ goto __error;
+ }
+
+ if ((err = snd_via82xx_create(card, pci, chip_type, revision, ac97_clock[dev], &chip)) < 0)
+ goto __error;
+ if ((err = snd_via82xx_mixer_new(chip)) < 0)
+ goto __error;
+
+ if ((err = snd_via686_pcm_new(chip)) < 0 )
+ goto __error;
+
+ snd_card_set_pm_callback(card, snd_via82xx_suspend, snd_via82xx_resume, chip);
+
+ /* disable interrupts */
+ for (i = 0; i < chip->num_devs; i++)
+ snd_via82xx_channel_reset(chip, &chip->devs[i]);
+
+ sprintf(card->longname, "%s at 0x%lx, irq %d",
+ card->shortname, chip->port, chip->irq);
+
+ snd_via82xx_proc_init(chip);
+
+ if ((err = snd_card_register(card)) < 0) {
+ snd_card_free(card);
+ return err;
+ }
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
+
+ __error:
+ snd_card_free(card);
+ return err;
+}
+
+static void __devexit snd_via82xx_remove(struct pci_dev *pci)
+{
+ snd_card_free(pci_get_drvdata(pci));
+ pci_set_drvdata(pci, NULL);
+}
+
+static struct pci_driver driver = {
+ .name = "VIA 82xx Modem",
+ .id_table = snd_via82xx_modem_ids,
+ .probe = snd_via82xx_probe,
+ .remove = __devexit_p(snd_via82xx_remove),
+ SND_PCI_PM_CALLBACKS
+};
+
+static int __init alsa_card_via82xx_init(void)
+{
+ return pci_module_init(&driver);
+}
+
+static void __exit alsa_card_via82xx_exit(void)
+{
+ pci_unregister_driver(&driver);
+}
+
+module_init(alsa_card_via82xx_init)
+module_exit(alsa_card_via82xx_exit)
#include <sound/driver.h>
#include <linux/delay.h>
+#include <linux/firmware.h>
#include <sound/core.h>
#include <sound/control.h>
#include <asm/io.h>
/*
* load the xilinx image
*/
-static int vx2_load_xilinx_binary(vx_core_t *chip, const snd_hwdep_dsp_image_t *xilinx)
+static int vx2_load_xilinx_binary(vx_core_t *chip, const struct firmware *xilinx)
{
unsigned int i;
unsigned int port;
- unsigned char data;
- unsigned char __user *image;
+ unsigned char *image;
	/* XILINX reset (wait at least 1 millisecond between reset on and off). */
vx_outl(chip, CNTRL, VX_CNTRL_REGISTER_VALUE | VX_XILINX_RESET_MASK);
else
port = VX_GPIOC; /* VX222 V2 and VX222_MIC_BOARD with new PLX9030 use this register */
- image = xilinx->image;
- for (i = 0; i < xilinx->length; i++, image++) {
- __get_user(data, image);
- if (put_xilinx_data(chip, port, 8, data) < 0)
+ image = xilinx->data;
+ for (i = 0; i < xilinx->size; i++, image++) {
+ if (put_xilinx_data(chip, port, 8, *image) < 0)
return -EINVAL;
/* don't take too much time in this loop... */
cond_resched();
/*
* load the boot/dsp images
*/
-static int vx2_load_dsp(vx_core_t *vx, const snd_hwdep_dsp_image_t *dsp)
+static int vx2_load_dsp(vx_core_t *vx, int index, const struct firmware *dsp)
{
int err;
- if (*dsp->name)
- snd_printdd("loading dsp [%d] %s, size = %Zd\n",
- dsp->index, dsp->name, dsp->length);
- switch (dsp->index) {
- case 0:
+ switch (index) {
+ case 1:
/* xilinx image */
if ((err = vx2_load_xilinx_binary(vx, dsp)) < 0)
return err;
if ((err = vx2_test_xilinx(vx)) < 0)
return err;
return 0;
- case 1:
+ case 2:
/* DSP boot */
return snd_vx_dsp_boot(vx, dsp);
- case 2:
+ case 3:
/* DSP image */
return snd_vx_dsp_load(vx, dsp);
default:
if (hw)
hw->card_list[vxp->index] = NULL;
chip->card = NULL;
+ if (chip->dev)
+ kfree(chip->dev);
+ snd_vx_free_firmware(chip);
kfree(chip);
return 0;
}
if (! chip)
return NULL;
+#ifdef SND_VX_FW_LOADER
+ /* fake a device here since pcmcia doesn't give a valid device... */
+ chip->dev = kcalloc(1, sizeof(*chip->dev), GFP_KERNEL);
+ if (! chip->dev) {
+ snd_printk(KERN_ERR "vxp: can't malloc chip->dev\n");
+ kfree(chip);
+ snd_card_free(card);
+ return NULL;
+ }
+ device_initialize(chip->dev);
+ sprintf(chip->dev->bus_id, "vxpocket%d", i);
+#endif
+
if (snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops) < 0) {
kfree(chip);
snd_card_free(card);
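The fake struct device set up above exists only so that the firmware loader has a device to attach requests to. For context, a minimal sketch of the usual request/release cycle with the in-kernel loader (the image name is made up; the driver's own path goes through snd_vx_setup_firmware instead):

#include <linux/firmware.h>
#include <linux/device.h>

/* Hedged sketch: fetch an image against a struct device (such as the fake one
 * initialized above), use it, release it.  "vxpocket_boot.bin" is a
 * hypothetical name used only for illustration. */
static int example_fetch_image(struct device *dev)
{
	const struct firmware *fw;
	int err;

	err = request_firmware(&fw, "vxpocket_boot.bin", dev);
	if (err < 0)
		return err;		/* no such image, or the loader failed */
	/* ... hand fw->data / fw->size to the download routine ... */
	release_firmware(fw);
	return 0;
}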
link->irq.Attributes = IRQ_TYPE_EXCLUSIVE | IRQ_HANDLE_PRESENT;
// link->irq.Attributes = IRQ_TYPE_DYNAMIC_SHARING|IRQ_FIRST_SHARED;
- link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
- if (hw->irq_list[0] == -1)
- link->irq.IRQInfo2 = *hw->irq_mask_p;
- else
- for (i = 0; i < 4; i++)
- link->irq.IRQInfo2 |= 1 << hw->irq_list[i];
+ link->irq.IRQInfo1 = IRQ_LEVEL_ID;
link->irq.Handler = &snd_vx_irq_handler;
link->irq.Instance = chip;
link->conf.ConfigIndex = 1;
link->conf.Present = PRESENT_OPTION;
- /* Chain drivers */
- link->next = hw->dev_list;
- hw->dev_list = link;
-
/* Register with Card Services */
+ memset(&client_reg, 0, sizeof(client_reg));
client_reg.dev_info = hw->dev_info;
- client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
client_reg.EventMask =
CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL
#ifdef CONFIG_PM
ret = pcmcia_register_client(&link->handle, &client_reg);
if (ret != CS_SUCCESS) {
cs_error(link->handle, RegisterClient, ret);
- snd_vxpocket_detach(hw, link);
+ snd_card_free(card);
return NULL;
}
+ /* Chain drivers */
+ link->next = hw->dev_list;
+ hw->dev_list = link;
+
+ /* snd_card_set_pm_callback(card, snd_vxpocket_suspend, snd_vxpocket_resume, chip); */
+
return link;
}
sprintf(card->longname, "%s at 0x%x, irq %i",
card->shortname, port, irq);
- if ((err = snd_vx_hwdep_new(chip)) < 0)
- return err;
-
chip->irq = irq;
- if ((err = snd_card_register(chip->card)) < 0)
+ if ((err = snd_vx_setup_firmware(chip)) < 0)
return err;
return 0;
*/
void snd_vxpocket_detach(struct snd_vxp_entry *hw, dev_link_t *link)
{
- vx_core_t *chip = link->priv;
+ vx_core_t *chip;
+
+ if (! link)
+ return;
+
+ chip = link->priv;
snd_printdd(KERN_DEBUG "vxpocket_detach called\n");
/* Remove the interface data from the linked list */
dev_link_t **linkp;
/* Locate device structure */
for (linkp = &hw->dev_list; *linkp; linkp = &(*linkp)->next)
- if (*linkp == link)
+ if (*linkp == link) {
+ *linkp = link->next;
break;
- if (*linkp)
- *linkp = link->next;
+ }
}
chip->chip_status |= VX_STAT_IS_STALE; /* to be sure */
snd_card_disconnect(chip->card);
snd_card_free_in_thread(chip->card);
}
-/*
- * snd_vxpocket_detach_all - detach all instances linked to the hw
- */
-void snd_vxpocket_detach_all(struct snd_vxp_entry *hw)
-{
- while (hw->dev_list != NULL)
- snd_vxpocket_detach(hw, hw->dev_list);
-}
-
/*
* configuration callback
*/
vx_core_t *chip = link->priv;
struct snd_vxpocket *vxp = (struct snd_vxpocket *)chip;
tuple_t tuple;
- cisparse_t parse;
- config_info_t conf;
+ cisparse_t *parse = NULL;
u_short buf[32];
int last_fn, last_ret;
snd_printdd(KERN_DEBUG "vxpocket_config called\n");
- tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
+ parse = kmalloc(sizeof(*parse), GFP_KERNEL);
+ if (! parse) {
+ snd_printk(KERN_ERR "vx: cannot allocate\n");
+ return;
+ }
tuple.Attributes = 0;
tuple.TupleData = (cisdata_t *)buf;
tuple.TupleDataMax = sizeof(buf);
tuple.DesiredTuple = CISTPL_CONFIG;
CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
CS_CHECK(GetTupleData, pcmcia_get_tuple_data(handle, &tuple));
- CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, &parse));
- link->conf.ConfigBase = parse.config.base;
- link->conf.ConfigIndex = 1;
-
- CS_CHECK(GetConfigurationInfo, pcmcia_get_configuration_info(handle, &conf));
- link->conf.Vcc = conf.Vcc;
+ CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, parse));
+ link->conf.ConfigBase = parse->config.base;
+ link->conf.Present = parse->config.rmask[0];
/* Configure card */
link->state |= DEV_CONFIG;
link->dev = &vxp->node;
link->state &= ~DEV_CONFIG_PENDING;
+ kfree(parse);
return;
cs_failed:
pcmcia_release_configuration(link->handle);
pcmcia_release_io(link->handle, &link->io);
pcmcia_release_irq(link->handle, &link->irq);
+ link->state &= ~DEV_CONFIG;
+ kfree(parse);
}
break;
case CS_EVENT_CARD_INSERTION:
snd_printdd(KERN_DEBUG "CARD_INSERTION..\n");
- link->state |= DEV_PRESENT;
+ link->state |= DEV_PRESENT | DEV_CONFIG_PENDING;
vxpocket_config(link);
break;
#ifdef CONFIG_PM
case CS_EVENT_PM_SUSPEND:
snd_printdd(KERN_DEBUG "SUSPEND\n");
link->state |= DEV_SUSPEND;
- if (chip) {
+ if (chip && chip->card->pm_suspend) {
snd_printdd(KERN_DEBUG "snd_vx_suspend calling\n");
- snd_vx_suspend(chip);
+ chip->card->pm_suspend(chip->card, 0);
}
/* Fall through... */
case CS_EVENT_RESET_PHYSICAL:
//struct snd_vxpocket *vxp = (struct snd_vxpocket *)chip;
snd_printdd(KERN_DEBUG "requestconfig...\n");
pcmcia_request_configuration(link->handle, &link->conf);
- if (chip) {
+ if (chip && chip->card->pm_resume) {
snd_printdd(KERN_DEBUG "calling snd_vx_resume\n");
- snd_vx_resume(chip);
+ chip->card->pm_resume(chip->card, 0);
}
}
snd_printdd(KERN_DEBUG "resume done!\n");
EXPORT_SYMBOL(snd_vxpocket_ops);
EXPORT_SYMBOL(snd_vxpocket_attach);
EXPORT_SYMBOL(snd_vxpocket_detach);
-EXPORT_SYMBOL(snd_vxpocket_detach_all);
#include <sound/driver.h>
#include <linux/delay.h>
+#include <linux/firmware.h>
#include <sound/core.h>
#include <asm/io.h>
#include "vxpocket.h"
* vx_load_xilinx_binary - load the xilinx binary image
* the binary image is the binary array converted from the bitstream file.
*/
-static int vxp_load_xilinx_binary(vx_core_t *_chip, const snd_hwdep_dsp_image_t *xilinx)
+static int vxp_load_xilinx_binary(vx_core_t *_chip, const struct firmware *fw)
{
struct snd_vxpocket *chip = (struct snd_vxpocket *)_chip;
unsigned int i;
int c;
int regCSUER, regRUER;
- unsigned char __user *image;
+ unsigned char *image;
unsigned char data;
/* Switch to programmation mode */
/* set HF1 for loading xilinx binary */
vx_outb(chip, ICR, ICR_HF1);
- image = xilinx->image;
- for (i = 0; i < xilinx->length; i++, image++) {
- __get_user(data, image);
+ image = fw->data;
+ for (i = 0; i < fw->size; i++, image++) {
+ data = *image;
if (vx_wait_isr_bit(_chip, ISR_TX_EMPTY) < 0)
goto _error;
vx_outb(chip, TXL, data);
c |= (int)vx_inb(chip, RXM) << 8;
c |= vx_inb(chip, RXL);
- snd_printdd(KERN_DEBUG "xilinx: dsp size received 0x%x, orig 0x%x\n", c, xilinx->length);
+ snd_printdd(KERN_DEBUG "xilinx: dsp size received 0x%x, orig 0x%x\n", c, fw->size);
vx_outb(chip, ICR, ICR_HF0);
/*
* vxp_load_dsp - load_dsp callback
*/
-static int vxp_load_dsp(vx_core_t *vx, const snd_hwdep_dsp_image_t *dsp)
+static int vxp_load_dsp(vx_core_t *vx, int index, const struct firmware *fw)
{
int err;
- if (*dsp->name)
- snd_printdd("loading dsp [%d] %s, size = %d\n", dsp->index, dsp->name, dsp->length);
-
- switch (dsp->index) {
+ switch (index) {
case 0:
/* xilinx boot */
if ((err = vx_check_magic(vx)) < 0)
return err;
- if ((err = snd_vx_load_boot_image(vx, dsp)) < 0)
+ if ((err = snd_vx_load_boot_image(vx, fw)) < 0)
return err;
return 0;
case 1:
/* xilinx image */
- return vxp_load_xilinx_binary(vx, dsp);
+ return vxp_load_xilinx_binary(vx, fw);
case 2:
/* DSP boot */
- return snd_vx_dsp_boot(vx, dsp);
+ return snd_vx_dsp_boot(vx, fw);
case 3:
/* DSP image */
- return snd_vx_dsp_load(vx, dsp);
+ return snd_vx_dsp_load(vx, fw);
default:
snd_BUG();
return -EINVAL;
int *index_table;
char **id_table;
int *enable_table;
- unsigned int *irq_mask_p;
- int *irq_list;
int *ibl;
/* h/w config */
*/
dev_link_t *snd_vxpocket_attach(struct snd_vxp_entry *hw);
void snd_vxpocket_detach(struct snd_vxp_entry *hw, dev_link_t *link);
-void snd_vxpocket_detach_all(struct snd_vxp_entry *hw);
int vxp_add_mic_controls(vx_core_t *chip);
* This lock guards the sound loader list.
*/
-static spinlock_t sound_loader_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(sound_loader_lock);
/*
* Allocate the controlling structure and add it to the sound driver
if (emu->sflist)
snd_sf_free(emu->sflist);
- if (emu->voices)
- kfree(emu->voices);
-
- if (emu->name)
- kfree(emu->name);
-
+ kfree(emu->voices);
+ kfree(emu->name);
kfree(emu);
return 0;
}
#ifdef SNDRV_EMUX_USE_RAW_EFFECT
snd_emux_delete_effect(p);
#endif
- if (p->chset.channels)
- kfree(p->chset.channels);
+ kfree(p->chset.channels);
kfree(p);
}
}
}
-/*
- * for Emu10k1 - release at least 1 voice currently using
- */
-int
-snd_emux_release_voice(snd_emux_t *emu)
-{
- return 0;
-}
-
-
/*
* terminate note - if free flag is true, free the terminated voice
*/
return 0;
}
-/*
- * Return the used memory size (in words)
- */
-int
-snd_soundfont_mem_used(snd_sf_list_t *sflist)
-{
- return sflist->mem_used;
-}
config SND_USB_USX2Y
tristate "Tascam US-122, US-224 and US-428 USB driver"
- depends on SND && USB
+ depends on SND && USB && (X86 || PPC || ALPHA)
select SND_HWDEP
select SND_RAWMIDI
select SND_PCM
__u8 bDescriptorType;
__u8 bDescriptorSubtype;
__u8 bcdMSC[2];
- __u16 wTotalLength;
+ __le16 wTotalLength;
} __attribute__ ((packed));
struct usb_ms_endpoint_descriptor {
static void snd_usbmidi_in_endpoint_delete(snd_usb_midi_in_endpoint_t* ep)
{
if (ep->urb) {
- if (ep->urb->transfer_buffer)
- kfree(ep->urb->transfer_buffer);
+ kfree(ep->urb->transfer_buffer);
usb_free_urb(ep->urb);
}
kfree(ep);
struct usb_host_interface *hostif;
struct usb_interface_descriptor* intfd;
- if (umidi->chip->dev->descriptor.idVendor != 0x0582)
+ if (le16_to_cpu(umidi->chip->dev->descriptor.idVendor) != 0x0582)
return NULL;
intf = umidi->iface;
if (!intf || intf->num_altsetting != 2)
if (ep->tasklet.func)
tasklet_kill(&ep->tasklet);
if (ep->urb) {
- if (ep->urb->transfer_buffer)
- kfree(ep->urb->transfer_buffer);
+ kfree(ep->urb->transfer_buffer);
usb_free_urb(ep->urb);
}
kfree(ep);
/* TODO: read port name from jack descriptor */
name_format = "%s MIDI %d";
- vendor = umidi->chip->dev->descriptor.idVendor;
- product = umidi->chip->dev->descriptor.idProduct;
+ vendor = le16_to_cpu(umidi->chip->dev->descriptor.idVendor);
+ product = le16_to_cpu(umidi->chip->dev->descriptor.idProduct);
for (i = 0; i < ARRAY_SIZE(snd_usbmidi_port_names); ++i) {
if (snd_usbmidi_port_names[i].vendor == vendor &&
snd_usbmidi_port_names[i].product == product &&
-snd-usb-usx2y-objs := usbusx2y.o usbusx2yaudio.o usX2Yhwdep.o
+snd-usb-usx2y-objs := usbusx2y.o usX2Yhwdep.o usx2yhwdeppcm.o
obj-$(CONFIG_SND_USB_USX2Y) += snd-usb-usx2y.o
#include "usbusx2y.h"
#include "usX2Yhwdep.h"
+int usX2Y_hwdep_pcm_new(snd_card_t* card);
+
static struct page * snd_us428ctls_vm_nopage(struct vm_area_struct *area, unsigned long address, int *type)
{
{
unsigned int mask = 0;
usX2Ydev_t *us428 = (usX2Ydev_t*)hw->private_data;
- static unsigned LastN;
-
+ us428ctls_sharedmem_t *shm = us428->us428ctls_sharedmem;
if (us428->chip_status & USX2Y_STAT_CHIP_HUP)
return POLLHUP;
poll_wait(file, &us428->us428ctls_wait_queue_head, wait);
- down(&us428->open_mutex);
- if (us428->us428ctls_sharedmem
- && us428->us428ctls_sharedmem->CtlSnapShotLast != LastN) {
+ if (shm != NULL && shm->CtlSnapShotLast != shm->CtlSnapShotRed)
mask |= POLLIN;
- LastN = us428->us428ctls_sharedmem->CtlSnapShotLast;
- }
- up(&us428->open_mutex);
return mask;
}
};
int id = -1;
- switch (((usX2Ydev_t*)hw->private_data)->chip.dev->descriptor.idProduct) {
+ switch (le16_to_cpu(((usX2Ydev_t*)hw->private_data)->chip.dev->descriptor.idProduct)) {
case USB_ID_US122:
id = USX2Y_TYPE_122;
break;
};
struct usb_device *dev = usX2Y(card)->chip.dev;
struct usb_interface *iface = usb_ifnum_to_if(dev, 0);
- snd_usb_audio_quirk_t *quirk = dev->descriptor.idProduct == USB_ID_US428 ? &quirk_2 : &quirk_1;
+ snd_usb_audio_quirk_t *quirk = le16_to_cpu(dev->descriptor.idProduct) == USB_ID_US428 ? &quirk_2 : &quirk_1;
snd_printdd("usX2Y_create_usbmidi \n");
return snd_usb_create_midi_interface(&usX2Y(card)->chip, iface, quirk);
}
if ((err = usX2Y_audio_create(card)) < 0)
break;
+ if ((err = usX2Y_hwdep_pcm_new(card)) < 0)
+ break;
if ((err = snd_card_register(card)) < 0)
break;
} while (0);
/*
- * usbus428.c - ALSA USB US-428 Driver
+ * usbusy2y.c - ALSA USB US-428 Driver
*
+2004-12-14 Karsten Wiese
+ Version 0.8.7.1:
+ snd_pcm_open for rawusb pcm-devices now returns -EBUSY if called without rawusb's hwdep device being open.
+
+2004-12-02 Karsten Wiese
+ Version 0.8.7:
+ Use macro usb_maxpacket() for portability.
+
+2004-10-26 Karsten Wiese
+ Version 0.8.6:
+ wake_up() process waiting in usX2Y_urbs_start() on error.
+
+2004-10-21 Karsten Wiese
+ Version 0.8.5:
+	nrpacks is now configurable at run time or compile time, with tested values from 1 to 4.
+
+2004-10-03 Karsten Wiese
+ Version 0.8.2:
+ Avoid any possible racing while in prepare callback.
+
+2004-09-30 Karsten Wiese
+ Version 0.8.0:
+ Simplified things and made ohci work again.
+
2004-09-20 Karsten Wiese
Version 0.7.3:
Use usb_kill_urb() instead of deprecated (kernel 2.6.9) usb_unlink_urb().
Version 0.0.2: midi works with snd-usb-midi, audio (only fullduplex now) with i.e. bristol.
The firmware has been sniffed from win2k us-428 driver 3.09.
- * Copyright (c) 2002 Karsten Wiese
+ * Copyright (c) 2002 - 2004 Karsten Wiese
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
MODULE_AUTHOR("Karsten Wiese <annabellesgarden@yahoo.de>");
-MODULE_DESCRIPTION("TASCAM "NAME_ALLCAPS" Version 0.7.3");
+MODULE_DESCRIPTION("TASCAM "NAME_ALLCAPS" Version 0.8.7.1");
MODULE_LICENSE("GPL");
MODULE_SUPPORTED_DEVICE("{{TASCAM(0x1604), "NAME_ALLCAPS"(0x8001)(0x8005)(0x8007) }}");
S->urb[i] = NULL;
}
}
- if (S->buffer)
- kfree(S->buffer);
+ kfree(S->buffer);
}
card->private_free = snd_usX2Y_card_private_free;
usX2Y(card)->chip.dev = device;
usX2Y(card)->chip.card = card;
- init_MUTEX (&usX2Y(card)->open_mutex);
+ init_waitqueue_head(&usX2Y(card)->prepare_wait_queue);
+ init_MUTEX (&usX2Y(card)->prepare_mutex);
INIT_LIST_HEAD(&usX2Y(card)->chip.midi_list);
strcpy(card->driver, "USB "NAME_ALLCAPS"");
sprintf(card->shortname, "TASCAM "NAME_ALLCAPS"");
sprintf(card->longname, "%s (%x:%x if %d at %03d/%03d)",
card->shortname,
- device->descriptor.idVendor, device->descriptor.idProduct,
+ le16_to_cpu(device->descriptor.idVendor),
+ le16_to_cpu(device->descriptor.idProduct),
0,//us428(card)->usbmidi.ifnum,
usX2Y(card)->chip.dev->bus->busnum, usX2Y(card)->chip.dev->devnum
);
{
int err;
snd_card_t* card;
- if (device->descriptor.idVendor != 0x1604 ||
- (device->descriptor.idProduct != USB_ID_US122 &&
- device->descriptor.idProduct != USB_ID_US224 &&
- device->descriptor.idProduct != USB_ID_US428) ||
+ if (le16_to_cpu(device->descriptor.idVendor) != 0x1604 ||
+ (le16_to_cpu(device->descriptor.idProduct) != USB_ID_US122 &&
+ le16_to_cpu(device->descriptor.idProduct) != USB_ID_US224 &&
+ le16_to_cpu(device->descriptor.idProduct) != USB_ID_US428) ||
!(card = usX2Y_create_card(device)))
return NULL;
if ((err = usX2Y_hwdep_new(card, device)) < 0 ||
static void snd_usX2Y_card_private_free(snd_card_t *card)
{
- if (usX2Y(card)->In04Buf)
- kfree(usX2Y(card)->In04Buf);
+ kfree(usX2Y(card)->In04Buf);
usb_free_urb(usX2Y(card)->In04urb);
if (usX2Y(card)->us428ctls_sharedmem)
snd_free_pages(usX2Y(card)->us428ctls_sharedmem, sizeof(*usX2Y(card)->us428ctls_sharedmem));
#include "../usbaudio.h"
#include "usbus428ctldefs.h"
-#define NRURBS 2 /* */
-#define NRPACKS 1 /* FIXME: Currently only 1 works.
- usb-frames/ms per urb: 1 and 2 are supported.
- setting to 2 will PERHAPS make it easier for slow machines.
- Jitter will be higher though.
- On my PIII 500Mhz Laptop setting to 1 is the only way to go
- for PLAYING synths. i.e. Jack & Aeolus sound quit nicely
- at 4 periods 64 frames.
- */
+#define NRURBS 2
+
#define URBS_AsyncSeq 10
#define URB_DataLen_AsyncSeq 32
} snd_usX2Y_urbSeq_t;
typedef struct snd_usX2Y_substream snd_usX2Y_substream_t;
+#include "usx2yhwdeppcm.h"
typedef struct {
snd_usb_audio_t chip;
snd_usX2Y_AsyncSeq_t AS04;
unsigned int rate,
format;
- int refframes;
int chip_status;
- struct semaphore open_mutex;
+ struct semaphore prepare_mutex;
us428ctls_sharedmem_t *us428ctls_sharedmem;
+ int wait_iso_frame;
wait_queue_head_t us428ctls_wait_queue_head;
- snd_usX2Y_substream_t *substream[4];
+ snd_usX2Y_hwdep_pcm_shm_t *hwdep_pcm_shm;
+ snd_usX2Y_substream_t *subs[4];
+ snd_usX2Y_substream_t * volatile prepare_subs;
+ wait_queue_head_t prepare_wait_queue;
} usX2Ydev_t;
+struct snd_usX2Y_substream {
+ usX2Ydev_t *usX2Y;
+ snd_pcm_substream_t *pcm_substream;
+
+ int endpoint;
+ unsigned int maxpacksize; /* max packet size in bytes */
+
+ atomic_t state;
+#define state_STOPPED 0
+#define state_STARTING1 1
+#define state_STARTING2 2
+#define state_STARTING3 3
+#define state_PREPARED 4
+#define state_PRERUNNING 6
+#define state_RUNNING 8
+
+ int hwptr; /* free frame position in the buffer (only for playback) */
+ int hwptr_done; /* processed frame position in the buffer */
+ int transfer_done; /* processed frames since last period update */
+
+ struct urb *urb[NRURBS]; /* data urb table */
+ struct urb *completed_urb;
+ char *tmpbuf; /* temporary buffer for playback */
+};
+
+
#define usX2Y(c) ((usX2Ydev_t*)(c)->private_data)
int usX2Y_audio_create(snd_card_t* card);
/*
- * US-428 AUDIO
-
- * Copyright (c) 2002-2003 by Karsten Wiese
-
+ * US-X2Y AUDIO
+ * Copyright (c) 2002-2004 by Karsten Wiese
+ *
* based on
-
+ *
* (Tentative) USB Audio Driver for ALSA
*
* Main and PCM part
#include "usx2y.h"
#include "usbusx2y.h"
-
-struct snd_usX2Y_substream {
- usX2Ydev_t *usX2Y;
- snd_pcm_substream_t *pcm_substream;
-
- unsigned char endpoint;
- unsigned int datapipe; /* the data i/o pipe */
- unsigned int maxpacksize; /* max packet size in bytes */
-
- char prepared,
- running,
- stalled;
-
- int hwptr; /* free frame position in the buffer (only for playback) */
- int hwptr_done; /* processed frame position in the buffer */
- int transfer_done; /* processed frames since last period update */
-
- struct urb *urb[NRURBS]; /* data urb table */
- int next_urb_complete;
- struct urb *completed_urb;
- char *tmpbuf; /* temporary buffer for playback */
- volatile int submitted_urbs;
- wait_queue_head_t wait_queue;
-};
-
-
-
-
+#define USX2Y_NRPACKS 4 /* Default value used for nr of packs per urb.
+ 1 to 4 have been tested ok on uhci.
+ To use 3 on ohci, you'd need a patch:
+ look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on
+ "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425"
+ .
+ 1, 2 and 4 work out of the box on ohci, if I recall correctly.
+			      Bigger values give safer operation,
+			      smaller ones give lower latency.
+ */
+#define USX2Y_NRPACKS_VARIABLE y /* If your system works ok with this module's parameter
+ nrpacks set to 1, you might as well comment
+ this #define out, and thereby produce smaller, faster code.
+ You'd also set USX2Y_NRPACKS to 1 then.
+ */
+
+#ifdef USX2Y_NRPACKS_VARIABLE
+ static int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */
+ #define nr_of_packs() nrpacks
+ module_param(nrpacks, int, 0444);
+ MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");
+#else
+ #define nr_of_packs() USX2Y_NRPACKS
+#endif
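The #ifdef above lets nr_of_packs() either stay tied to the nrpacks module parameter or collapse to a compile-time constant that the compiler can fold. A tiny stand-alone illustration of the same trick (user-space C, names invented for the example):

#include <stdio.h>

/* Comment the next line out to turn nr_of_packs() into a constant, exactly as
 * the driver above does when USX2Y_NRPACKS_VARIABLE is not defined. */
#define EXAMPLE_NRPACKS_VARIABLE

#ifdef EXAMPLE_NRPACKS_VARIABLE
static int nrpacks = 4;			/* would be the module parameter */
#define nr_of_packs()	nrpacks
#else
#define nr_of_packs()	4
#endif

int main(void)
{
	printf("packets per URB: %d\n", nr_of_packs());
	return 0;
}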
static int usX2Y_urb_capt_retire(snd_usX2Y_substream_t *subs)
int i, len, lens = 0, hwptr_done = subs->hwptr_done;
usX2Ydev_t *usX2Y = subs->usX2Y;
- for (i = 0; i < NRPACKS; i++) {
+ for (i = 0; i < nr_of_packs(); i++) {
cp = (unsigned char*)urb->transfer_buffer + urb->iso_frame_desc[i].offset;
if (urb->iso_frame_desc[i].status) { /* active? hmm, skip this */
- snd_printdd("activ frame status %i\n", urb->iso_frame_desc[i].status);
+			snd_printk("active frame status %i. Most probably some hardware problem.\n", urb->iso_frame_desc[i].status);
return urb->iso_frame_desc[i].status;
}
len = urb->iso_frame_desc[i].actual_length / usX2Y->stride;
if (! len) {
- snd_printk("0 == len ERROR!\n");
+ snd_printd("0 == len ERROR!\n");
continue;
}
snd_pcm_runtime_t *runtime = subs->pcm_substream->runtime;
count = 0;
- for (pack = 0; pack < NRPACKS; pack++) {
+ for (pack = 0; pack < nr_of_packs(); pack++) {
/* calculate the size of a packet */
counts = cap_urb->iso_frame_desc[pack].actual_length / usX2Y->stride;
count += counts;
snd_printk("should not be here with counts=%i\n", counts);
return -EPIPE;
}
-
/* set up descriptor */
- urb->iso_frame_desc[pack].offset = pack ? urb->iso_frame_desc[pack - 1].offset + urb->iso_frame_desc[pack - 1].length : 0;
- urb->iso_frame_desc[pack].length = counts * usX2Y->stride;
+ urb->iso_frame_desc[pack].offset = pack ?
+ urb->iso_frame_desc[pack - 1].offset + urb->iso_frame_desc[pack - 1].length :
+ 0;
+ urb->iso_frame_desc[pack].length = cap_urb->iso_frame_desc[pack].actual_length;
}
- if (subs->hwptr + count > runtime->buffer_size) {
- /* err, the transferred area goes over buffer boundary.
- * copy the data to the temp buffer.
- */
- int len;
- len = runtime->buffer_size - subs->hwptr;
- urb->transfer_buffer = subs->tmpbuf;
- memcpy(subs->tmpbuf, runtime->dma_area + subs->hwptr * usX2Y->stride, len * usX2Y->stride);
- memcpy(subs->tmpbuf + len * usX2Y->stride, runtime->dma_area, (count - len) * usX2Y->stride);
- subs->hwptr += count;
- subs->hwptr -= runtime->buffer_size;
- } else {
- /* set the buffer pointer */
- urb->transfer_buffer = runtime->dma_area + subs->hwptr * usX2Y->stride;
- if ((subs->hwptr += count) >= runtime->buffer_size)
+ if (atomic_read(&subs->state) >= state_PRERUNNING)
+ if (subs->hwptr + count > runtime->buffer_size) {
+ /* err, the transferred area goes over buffer boundary.
+ * copy the data to the temp buffer.
+ */
+ int len;
+ len = runtime->buffer_size - subs->hwptr;
+ urb->transfer_buffer = subs->tmpbuf;
+ memcpy(subs->tmpbuf, runtime->dma_area + subs->hwptr * usX2Y->stride, len * usX2Y->stride);
+ memcpy(subs->tmpbuf + len * usX2Y->stride, runtime->dma_area, (count - len) * usX2Y->stride);
+ subs->hwptr += count;
+ subs->hwptr -= runtime->buffer_size;
+ } else {
+ /* set the buffer pointer */
+ urb->transfer_buffer = runtime->dma_area + subs->hwptr * usX2Y->stride;
+ if ((subs->hwptr += count) >= runtime->buffer_size)
subs->hwptr -= runtime->buffer_size;
- }
+ }
+ else
+ urb->transfer_buffer = subs->tmpbuf;
urb->transfer_buffer_length = count * usX2Y->stride;
return 0;
}
*
* update the current position and call callback if a period is processed.
*/
-inline static int usX2Y_urb_play_retire(snd_usX2Y_substream_t *subs, struct urb *urb)
+static void usX2Y_urb_play_retire(snd_usX2Y_substream_t *subs, struct urb *urb)
{
snd_pcm_runtime_t *runtime = subs->pcm_substream->runtime;
- int len = (urb->iso_frame_desc[0].actual_length
-#if NRPACKS > 1
- + urb->iso_frame_desc[1].actual_length
-#endif
- ) / subs->usX2Y->stride;
+ int len = urb->actual_length / subs->usX2Y->stride;
subs->transfer_done += len;
subs->hwptr_done += len;
subs->transfer_done -= runtime->period_size;
snd_pcm_period_elapsed(subs->pcm_substream);
}
- return 0;
}
-inline static int usX2Y_urb_submit(snd_usX2Y_substream_t *subs, struct urb *urb, int frame)
+static int usX2Y_urb_submit(snd_usX2Y_substream_t *subs, struct urb *urb, int frame)
{
int err;
if (!urb)
return -ENODEV;
- urb->start_frame = (frame + NRURBS*NRPACKS) & (1024 - 1);
+ urb->start_frame = (frame + NRURBS * nr_of_packs()); // let hcd do rollover sanity checks
urb->hcpriv = NULL;
urb->dev = subs->usX2Y->chip.dev; /* we need to set this at each time */
if ((err = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
- snd_printk("%i\n", err);
+ snd_printk("usb_submit_urb() returned %i\n", err);
return err;
- } else {
- subs->submitted_urbs++;
- if (subs->next_urb_complete < 0)
- subs->next_urb_complete = 0;
}
return 0;
}
-
-static inline int frame_distance(int from, int to)
+static inline int usX2Y_usbframe_complete(snd_usX2Y_substream_t *capsubs, snd_usX2Y_substream_t *playbacksubs, int frame)
{
- int distance = to - from;
- if (distance < -512)
- distance += 1024;
- else
- if (distance > 511)
- distance -= 1024;
- return distance;
-}
+ int err, state;
+ {
+ struct urb *urb = playbacksubs->completed_urb;
+ state = atomic_read(&playbacksubs->state);
+ if (NULL != urb) {
+ if (state == state_RUNNING)
+ usX2Y_urb_play_retire(playbacksubs, urb);
+ else
+ if (state >= state_PRERUNNING) {
+ atomic_inc(&playbacksubs->state);
+ }
+ } else {
+ switch (state) {
+ case state_STARTING1:
+ urb = playbacksubs->urb[0];
+ atomic_inc(&playbacksubs->state);
+ break;
+ case state_STARTING2:
+ urb = playbacksubs->urb[1];
+ atomic_inc(&playbacksubs->state);
+ break;
+ }
+ }
+ if (urb) {
+ if ((err = usX2Y_urb_play_prepare(playbacksubs, capsubs->completed_urb, urb)) ||
+ (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
+ return err;
+ }
+ }
-static void usX2Y_subs_set_next_urb_complete(snd_usX2Y_substream_t *subs)
-{
- int next_urb_complete = subs->next_urb_complete + 1;
- int distance;
- if (next_urb_complete >= NRURBS)
- next_urb_complete = 0;
- distance = frame_distance(subs->completed_urb->start_frame,
- subs->urb[next_urb_complete]->start_frame);
- if (1 == distance) {
- subs->next_urb_complete = next_urb_complete;
- } else {
- snd_printdd("distance %i not set_nuc %i %i %i \n", distance, subs->endpoint, next_urb_complete, subs->urb[next_urb_complete]->status);
- subs->next_urb_complete = -1;
+ playbacksubs->completed_urb = NULL;
+ }
+ state = atomic_read(&capsubs->state);
+ if (state >= state_PREPARED) {
+ if (state == state_RUNNING) {
+ if ((err = usX2Y_urb_capt_retire(capsubs)))
+ return err;
+ } else
+ if (state >= state_PRERUNNING) {
+ atomic_inc(&capsubs->state);
+ }
+ if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ return err;
}
+ capsubs->completed_urb = NULL;
+ return 0;
}
-static inline void usX2Y_usbframe_complete(snd_usX2Y_substream_t *capsubs, snd_usX2Y_substream_t *playbacksubs, int frame)
+static void usX2Y_clients_stop(usX2Ydev_t *usX2Y)
{
- {
- struct urb *urb;
- if ((urb = playbacksubs->completed_urb)) {
- if (playbacksubs->prepared)
- usX2Y_urb_play_retire(playbacksubs, urb);
- usX2Y_subs_set_next_urb_complete(playbacksubs);
+ int s, u;
+ for (s = 0; s < 4; s++) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[s];
+ if (subs) {
+ snd_printdd("%i %p state=%i\n", s, subs, atomic_read(&subs->state));
+ atomic_set(&subs->state, state_STOPPED);
}
- if (playbacksubs->running) {
- if (NULL == urb)
- urb = playbacksubs->urb[playbacksubs->next_urb_complete + 1];
- if (urb && 0 == usX2Y_urb_play_prepare(playbacksubs,
- capsubs->completed_urb,
- urb)) {
- if (usX2Y_urb_submit(playbacksubs, urb, frame) < 0)
- return;
- } else
- snd_pcm_stop(playbacksubs->pcm_substream, SNDRV_PCM_STATE_XRUN);
+ }
+ for (s = 0; s < 4; s++) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[s];
+ if (subs) {
+ if (atomic_read(&subs->state) >= state_PRERUNNING) {
+ snd_pcm_stop(subs->pcm_substream, SNDRV_PCM_STATE_XRUN);
+ }
+ for (u = 0; u < NRURBS; u++) {
+ struct urb *urb = subs->urb[u];
+ if (NULL != urb)
+ snd_printdd("%i status=%i start_frame=%i\n", u, urb->status, urb->start_frame);
+ }
}
- playbacksubs->completed_urb = NULL;
}
- if (capsubs->running)
- usX2Y_urb_capt_retire(capsubs);
- usX2Y_subs_set_next_urb_complete(capsubs);
- if (capsubs->prepared)
- usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame);
- capsubs->completed_urb = NULL;
+ usX2Y->prepare_subs = NULL;
+ wake_up(&usX2Y->prepare_wait_queue);
}
-
-static void usX2Y_clients_stop(snd_usX2Y_substream_t *subs)
+static void usX2Y_error_urb_status(usX2Ydev_t *usX2Y, snd_usX2Y_substream_t *subs, struct urb *urb)
{
- usX2Ydev_t *usX2Y = subs->usX2Y;
- int i;
- for (i = 0; i < 4; i++) {
- snd_usX2Y_substream_t *substream = usX2Y->substream[i];
- if (substream && substream->running)
- snd_pcm_stop(substream->pcm_substream, SNDRV_PCM_STATE_XRUN);
- }
+ snd_printk("ep=%i stalled with status=%i\n", subs->endpoint, urb->status);
+ urb->status = 0;
+ usX2Y_clients_stop(usX2Y);
}
+static void usX2Y_error_sequence(usX2Ydev_t *usX2Y, snd_usX2Y_substream_t *subs, struct urb *urb)
+{
+	snd_printk("Sequence Error! (hcd_frame=%i ep=%i%s;wait=%i,frame=%i).\n"
+		   "Most probably some urb of usb-frame %i is still missing.\n"
+		   "The cause could be overly long delays in usb-hcd interrupt handling.\n",
+ usb_get_current_frame_number(usX2Y->chip.dev),
+ subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out", usX2Y->wait_iso_frame, urb->start_frame, usX2Y->wait_iso_frame);
+ usX2Y_clients_stop(usX2Y);
+}
static void i_usX2Y_urb_complete(struct urb *urb, struct pt_regs *regs)
{
snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t*)urb->context;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
- subs->submitted_urbs--;
- if (urb->status) {
- snd_printk("ep=%i stalled with status=%i\n", subs->endpoint, urb->status);
- subs->stalled = 1;
- usX2Y_clients_stop(subs);
- urb->status = 0;
+ if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
+ snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n", usb_get_current_frame_number(usX2Y->chip.dev), subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out", urb->status, urb->start_frame);
+ return;
+ }
+ if (unlikely(urb->status)) {
+ usX2Y_error_urb_status(usX2Y, subs, urb);
return;
}
- if (urb == subs->urb[subs->next_urb_complete]) {
+ if (likely((0xFFFF & urb->start_frame) == usX2Y->wait_iso_frame))
subs->completed_urb = urb;
- } else {
- snd_printk("Sequence Error!(ep=%i;nuc=%i,frame=%i)\n",
- subs->endpoint, subs->next_urb_complete, urb->start_frame);
- subs->stalled = 1;
- usX2Y_clients_stop(subs);
+ else {
+ usX2Y_error_sequence(usX2Y, subs, urb);
return;
}
- if (waitqueue_active(&subs->wait_queue))
- wake_up(&subs->wait_queue);
{
- snd_usX2Y_substream_t *capsubs = subs->usX2Y->substream[SNDRV_PCM_STREAM_CAPTURE],
- *playbacksubs = subs->usX2Y->substream[SNDRV_PCM_STREAM_PLAYBACK];
- if (capsubs->completed_urb &&
- (playbacksubs->completed_urb ||
- !playbacksubs->prepared ||
- (playbacksubs->prepared && (playbacksubs->next_urb_complete < 0 || // not started yet
- frame_distance(capsubs->completed_urb->start_frame,
- playbacksubs->urb[playbacksubs->next_urb_complete]->start_frame)
- > 0 || // other expected later
- playbacksubs->stalled))))
- usX2Y_usbframe_complete(capsubs, playbacksubs, urb->start_frame);
+ snd_usX2Y_substream_t *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE],
+ *playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ if (capsubs->completed_urb && atomic_read(&capsubs->state) >= state_PREPARED &&
+ (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < state_PREPARED)) {
+ if (!usX2Y_usbframe_complete(capsubs, playbacksubs, urb->start_frame)) {
+ if (nr_of_packs() <= urb->start_frame &&
+ urb->start_frame <= (2 * nr_of_packs() - 1)) // uhci and ohci
+ usX2Y->wait_iso_frame = urb->start_frame - nr_of_packs();
+ else
+ usX2Y->wait_iso_frame += nr_of_packs();
+ } else {
+ snd_printdd("\n");
+ usX2Y_clients_stop(usX2Y);
+ }
+ }
}
}
-
-static int usX2Y_urbs_capt_start(snd_usX2Y_substream_t *subs)
+static void usX2Y_urbs_set_complete(usX2Ydev_t * usX2Y, void (*complete)(struct urb *, struct pt_regs *))
{
- int i, err;
-
- for (i = 0; i < NRURBS; i++) {
- unsigned long pack;
- struct urb *urb = subs->urb[i];
- urb->dev = subs->usX2Y->chip.dev;
- urb->transfer_flags = URB_ISO_ASAP;
- for (pack = 0; pack < NRPACKS; pack++) {
- urb->iso_frame_desc[pack].offset = subs->maxpacksize * pack;
- urb->iso_frame_desc[pack].length = subs->maxpacksize;
- }
- urb->transfer_buffer_length = subs->maxpacksize * NRPACKS;
- if ((err = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
- snd_printk (KERN_ERR "cannot submit datapipe for urb %d, err = %d\n", i, err);
- return -EPIPE;
- } else {
- subs->submitted_urbs++;
- }
- urb->transfer_flags = 0;
+ int s, u;
+ for (s = 0; s < 4; s++) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[s];
+ if (NULL != subs)
+ for (u = 0; u < NRURBS; u++) {
+ struct urb * urb = subs->urb[u];
+ if (NULL != urb)
+ urb->complete = complete;
+ }
}
- subs->stalled = 0;
- subs->next_urb_complete = 0;
- subs->prepared = 1;
- return 0;
}
-/*
- * wait until all urbs are processed.
- */
-static int usX2Y_urbs_wait_clear(snd_usX2Y_substream_t *subs)
-{
- int timeout = HZ;
-
- do {
- if (0 == subs->submitted_urbs)
- break;
- set_current_state(TASK_UNINTERRUPTIBLE);
- snd_printdd("snd_usX2Y_urbs_wait_clear waiting\n");
- schedule_timeout(1);
- } while (--timeout > 0);
- if (subs->submitted_urbs)
- snd_printk(KERN_ERR "timeout: still %d active urbs..\n", subs->submitted_urbs);
- return 0;
-}
-/*
- * return the current pcm pointer. just return the hwptr_done value.
- */
-static snd_pcm_uframes_t snd_usX2Y_pcm_pointer(snd_pcm_substream_t *substream)
+static void usX2Y_subs_startup_finish(usX2Ydev_t * usX2Y)
{
- snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)substream->runtime->private_data;
- return subs->hwptr_done;
+ usX2Y_urbs_set_complete(usX2Y, i_usX2Y_urb_complete);
+ usX2Y->prepare_subs = NULL;
}
-/*
- * start/stop substream
- */
-static int snd_usX2Y_pcm_trigger(snd_pcm_substream_t *substream, int cmd)
+
+static void i_usX2Y_subs_startup(struct urb *urb, struct pt_regs *regs)
{
- snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)substream->runtime->private_data;
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t*)urb->context;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ snd_usX2Y_substream_t *prepare_subs = usX2Y->prepare_subs;
+ if (NULL != prepare_subs)
+ if (urb->start_frame == prepare_subs->urb[0]->start_frame) {
+ usX2Y_subs_startup_finish(usX2Y);
+ atomic_inc(&prepare_subs->state);
+ wake_up(&usX2Y->prepare_wait_queue);
+ }
- switch (cmd) {
- case SNDRV_PCM_TRIGGER_START:
- snd_printdd("snd_usX2Y_pcm_trigger(START)\n");
- if (subs->usX2Y->substream[SNDRV_PCM_STREAM_CAPTURE]->stalled)
- return -EPIPE;
- else
- subs->running = 1;
- break;
- case SNDRV_PCM_TRIGGER_STOP:
- snd_printdd("snd_usX2Y_pcm_trigger(STOP)\n");
- subs->running = 0;
- break;
- default:
- return -EINVAL;
- }
- return 0;
+ i_usX2Y_urb_complete(urb, regs);
}
+static void usX2Y_subs_prepare(snd_usX2Y_substream_t *subs)
+{
+ snd_printdd("usX2Y_substream_prepare(%p) ep=%i urb0=%p urb1=%p\n", subs, subs->endpoint, subs->urb[0], subs->urb[1]);
+ /* reset the pointer */
+ subs->hwptr = 0;
+ subs->hwptr_done = 0;
+ subs->transfer_done = 0;
+}
static void usX2Y_urb_release(struct urb** urb, int free_tb)
{
if (*urb) {
+ usb_kill_urb(*urb);
if (free_tb)
kfree((*urb)->transfer_buffer);
usb_free_urb(*urb);
}
}
/*
- * release a substream
+ * release a substream's urbs
*/
static void usX2Y_urbs_release(snd_usX2Y_substream_t *subs)
{
int i;
- snd_printdd("snd_usX2Y_urbs_release() %i\n", subs->endpoint);
- usX2Y_urbs_wait_clear(subs);
+ snd_printdd("usX2Y_urbs_release() %i\n", subs->endpoint);
for (i = 0; i < NRURBS; i++)
- usX2Y_urb_release(subs->urb + i, subs != subs->usX2Y->substream[SNDRV_PCM_STREAM_PLAYBACK]);
+ usX2Y_urb_release(subs->urb + i, subs != subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK]);
if (subs->tmpbuf) {
kfree(subs->tmpbuf);
subs->tmpbuf = NULL;
}
}
-
-static void usX2Y_substream_prepare(snd_usX2Y_substream_t *subs)
-{
- snd_printdd("usX2Y_substream_prepare() ep=%i urb0=%p urb1=%p\n", subs->endpoint, subs->urb[0], subs->urb[1]);
- /* reset the pointer */
- subs->hwptr = 0;
- subs->hwptr_done = 0;
- subs->transfer_done = 0;
-}
-
-
/*
* initialize a substream's urbs
*/
static int usX2Y_urbs_allocate(snd_usX2Y_substream_t *subs)
{
int i;
- int is_playback = subs == subs->usX2Y->substream[SNDRV_PCM_STREAM_PLAYBACK];
+ unsigned int pipe;
+ int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
struct usb_device *dev = subs->usX2Y->chip.dev;
+ struct usb_host_endpoint *ep;
- snd_assert(!subs->prepared, return 0);
+ pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ usb_rcvisocpipe(dev, subs->endpoint);
+ subs->maxpacksize = usb_maxpacket(dev, pipe, is_playback);
+ if (!subs->maxpacksize)
+ return -EINVAL;
- if (is_playback) { /* allocate a temporary buffer for playback */
- subs->datapipe = usb_sndisocpipe(dev, subs->endpoint);
- subs->maxpacksize = dev->epmaxpacketout[subs->endpoint];
+ if (is_playback && NULL == subs->tmpbuf) { /* allocate a temporary buffer for playback */
+ subs->tmpbuf = kcalloc(nr_of_packs(), subs->maxpacksize, GFP_KERNEL);
if (NULL == subs->tmpbuf) {
- subs->tmpbuf = kcalloc(NRPACKS, subs->maxpacksize, GFP_KERNEL);
- if (NULL == subs->tmpbuf) {
- snd_printk(KERN_ERR "cannot malloc tmpbuf\n");
- return -ENOMEM;
- }
+ snd_printk(KERN_ERR "cannot malloc tmpbuf\n");
+ return -ENOMEM;
}
- } else {
- subs->datapipe = usb_rcvisocpipe(dev, subs->endpoint);
- subs->maxpacksize = dev->epmaxpacketin[subs->endpoint];
}
-
/* allocate and initialize data urbs */
for (i = 0; i < NRURBS; i++) {
struct urb** purb = subs->urb + i;
- if (*purb)
+ if (*purb) {
+ usb_kill_urb(*purb);
continue;
- *purb = usb_alloc_urb(NRPACKS, GFP_KERNEL);
+ }
+ *purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
if (NULL == *purb) {
usX2Y_urbs_release(subs);
return -ENOMEM;
}
if (!is_playback && !(*purb)->transfer_buffer) {
/* allocate a capture buffer per urb */
- (*purb)->transfer_buffer = kmalloc(subs->maxpacksize*NRPACKS, GFP_KERNEL);
+ (*purb)->transfer_buffer = kmalloc(subs->maxpacksize * nr_of_packs(), GFP_KERNEL);
if (NULL == (*purb)->transfer_buffer) {
usX2Y_urbs_release(subs);
return -ENOMEM;
}
}
(*purb)->dev = dev;
- (*purb)->pipe = subs->datapipe;
- (*purb)->number_of_packets = NRPACKS;
+ (*purb)->pipe = pipe;
+ (*purb)->number_of_packets = nr_of_packs();
(*purb)->context = subs;
(*purb)->interval = 1;
- (*purb)->complete = snd_usb_complete_callback(i_usX2Y_urb_complete);
+ (*purb)->complete = i_usX2Y_subs_startup;
}
return 0;
}
-static void i_usX2Y_04Int(struct urb* urb, struct pt_regs *regs)
+static void usX2Y_subs_startup(snd_usX2Y_substream_t *subs)
{
- usX2Ydev_t* usX2Y = urb->context;
-
- if (urb->status) {
- snd_printk("snd_usX2Y_04Int() urb->status=%i\n", urb->status);
- return;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ usX2Y->prepare_subs = subs;
+ subs->urb[0]->start_frame = -1;
+ wmb();
+ usX2Y_urbs_set_complete(usX2Y, i_usX2Y_subs_startup);
+}
+
+static int usX2Y_urbs_start(snd_usX2Y_substream_t *subs)
+{
+ int i, err;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+
+ if ((err = usX2Y_urbs_allocate(subs)) < 0)
+ return err;
+ subs->completed_urb = NULL;
+ for (i = 0; i < 4; i++) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[i];
+ if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
+ goto start;
}
- if (0 == --usX2Y->US04->len)
- wake_up(&usX2Y->In04WaitQueue);
+ usX2Y->wait_iso_frame = -1;
+ start:
+ {
+ usX2Y_subs_startup(subs);
+ for (i = 0; i < NRURBS; i++) {
+ struct urb *urb = subs->urb[i];
+ if (usb_pipein(urb->pipe)) {
+ unsigned long pack;
+ if (0 == i)
+ atomic_set(&subs->state, state_STARTING3);
+ urb->dev = usX2Y->chip.dev;
+ urb->transfer_flags = URB_ISO_ASAP;
+ for (pack = 0; pack < nr_of_packs(); pack++) {
+ urb->iso_frame_desc[pack].offset = subs->maxpacksize * pack;
+ urb->iso_frame_desc[pack].length = subs->maxpacksize;
+ }
+ urb->transfer_buffer_length = subs->maxpacksize * nr_of_packs();
+ if ((err = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
+ snd_printk (KERN_ERR "cannot submit datapipe for urb %d, err = %d\n", i, err);
+ err = -EPIPE;
+ goto cleanup;
+ } else {
+ if (0 > usX2Y->wait_iso_frame)
+ usX2Y->wait_iso_frame = urb->start_frame;
+ }
+ urb->transfer_flags = 0;
+ } else {
+ atomic_set(&subs->state, state_STARTING1);
+ break;
+ }
+ }
+ err = 0;
+ wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+ if (atomic_read(&subs->state) != state_PREPARED) {
+ err = -EPIPE;
+ }
+
+ cleanup:
+ if (err) {
+ usX2Y_subs_startup_finish(usX2Y);
+			usX2Y_clients_stop(usX2Y); // something is completely wrong > stop everything
+ }
+ }
+ return err;
+}
+
+/*
+ * return the current pcm pointer. just return the hwptr_done value.
+ */
+static snd_pcm_uframes_t snd_usX2Y_pcm_pointer(snd_pcm_substream_t *substream)
+{
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)substream->runtime->private_data;
+ return subs->hwptr_done;
+}
+/*
+ * start/stop substream
+ */
+static int snd_usX2Y_pcm_trigger(snd_pcm_substream_t *substream, int cmd)
+{
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)substream->runtime->private_data;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ snd_printdd("snd_usX2Y_pcm_trigger(START)\n");
+ if (atomic_read(&subs->state) == state_PREPARED &&
+ atomic_read(&subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]->state) >= state_PREPARED) {
+ atomic_set(&subs->state, state_PRERUNNING);
+ } else {
+ snd_printdd("\n");
+ return -EPIPE;
+ }
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ snd_printdd("snd_usX2Y_pcm_trigger(STOP)\n");
+ if (atomic_read(&subs->state) >= state_PRERUNNING)
+ atomic_set(&subs->state, state_PREPARED);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
}
+
+
/*
* allocate a buffer, setup samplerate
*
};
#define NOOF_SETRATE_URBS ARRAY_SIZE(SetRate48000)
+static void i_usX2Y_04Int(struct urb* urb, struct pt_regs *regs)
+{
+ usX2Ydev_t* usX2Y = urb->context;
+
+ if (urb->status) {
+ snd_printk("snd_usX2Y_04Int() urb->status=%i\n", urb->status);
+ }
+ if (0 == --usX2Y->US04->len)
+ wake_up(&usX2Y->In04WaitQueue);
+}
+
static int usX2Y_rate_set(usX2Ydev_t *usX2Y, int rate)
{
int err = 0, i;
snd_usX2Y_urbSeq_t *us = NULL;
int *usbdata = NULL;
- DECLARE_WAITQUEUE(wait, current);
struct s_c2 *ra = rate == 48000 ? SetRate48000 : SetRate44100;
if (usX2Y->rate != rate) {
- do {
- us = kmalloc(sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS, GFP_KERNEL);
- if (NULL == us) {
- err = -ENOMEM;
- break;
- }
- memset(us, 0, sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS);
- usbdata = kmalloc(sizeof(int)*NOOF_SETRATE_URBS, GFP_KERNEL);
- if (NULL == usbdata) {
+ us = kmalloc(sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS, GFP_KERNEL);
+ if (NULL == us) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+ memset(us, 0, sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS);
+ usbdata = kmalloc(sizeof(int)*NOOF_SETRATE_URBS, GFP_KERNEL);
+ if (NULL == usbdata) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+ for (i = 0; i < NOOF_SETRATE_URBS; ++i) {
+ if (NULL == (us->urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
err = -ENOMEM;
- break;
+ goto cleanup;
}
- for (i = 0; i < NOOF_SETRATE_URBS; ++i) {
- if (NULL == (us->urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
- err = -ENOMEM;
- break;
- }
- ((char*)(usbdata + i))[0] = ra[i].c1;
- ((char*)(usbdata + i))[1] = ra[i].c2;
- usb_fill_bulk_urb(us->urb[i], usX2Y->chip.dev, usb_sndbulkpipe(usX2Y->chip.dev, 4),
- usbdata + i, 2, i_usX2Y_04Int, usX2Y);
+ ((char*)(usbdata + i))[0] = ra[i].c1;
+ ((char*)(usbdata + i))[1] = ra[i].c2;
+ usb_fill_bulk_urb(us->urb[i], usX2Y->chip.dev, usb_sndbulkpipe(usX2Y->chip.dev, 4),
+ usbdata + i, 2, i_usX2Y_04Int, usX2Y);
#ifdef OLD_USB
- us->urb[i]->transfer_flags = USB_QUEUE_BULK;
+ us->urb[i]->transfer_flags = USB_QUEUE_BULK;
#endif
- }
- if (err)
- break;
-
- add_wait_queue(&usX2Y->In04WaitQueue, &wait);
- set_current_state(TASK_INTERRUPTIBLE);
- us->submitted = 0;
- us->len = NOOF_SETRATE_URBS;
- usX2Y->US04 = us;
-
- do {
- signed long timeout = schedule_timeout(HZ/2);
-
- if (signal_pending(current)) {
- err = -ERESTARTSYS;
- break;
- }
- if (0 == timeout) {
- err = -ENODEV;
- break;
- }
- usX2Y->rate = rate;
- usX2Y->refframes = rate == 48000 ? 47 : 44;
- } while (0);
-
- remove_wait_queue(&usX2Y->In04WaitQueue, &wait);
- } while (0);
-
+ }
+ us->submitted = 0;
+ us->len = NOOF_SETRATE_URBS;
+ usX2Y->US04 = us;
+ wait_event_timeout(usX2Y->In04WaitQueue, 0 == us->len, HZ);
+ usX2Y->US04 = NULL;
+ if (us->len)
+ err = -ENODEV;
+ cleanup:
if (us) {
us->submitted = 2*NOOF_SETRATE_URBS;
for (i = 0; i < NOOF_SETRATE_URBS; ++i) {
- usb_kill_urb(us->urb[i]);
- usb_free_urb(us->urb[i]);
+ struct urb *urb = us->urb[i];
+ if (urb->status) {
+ if (!err)
+ err = -ENODEV;
+ usb_kill_urb(urb);
+ }
+ usb_free_urb(urb);
}
usX2Y->US04 = NULL;
kfree(usbdata);
kfree(us);
+ if (!err) {
+ usX2Y->rate = rate;
+ }
}
}
{
snd_pcm_runtime_t *runtime = substream->runtime;
snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)runtime->private_data;
+ down(&subs->usX2Y->prepare_mutex);
snd_printdd("snd_usX2Y_hw_free(%p)\n", substream);
if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
- snd_usX2Y_substream_t *cap_subs = subs->usX2Y->substream[SNDRV_PCM_STREAM_CAPTURE];
- subs->prepared = 0;
+ snd_usX2Y_substream_t *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ atomic_set(&subs->state, state_STOPPED);
usX2Y_urbs_release(subs);
if (!cap_subs->pcm_substream ||
!cap_subs->pcm_substream->runtime ||
!cap_subs->pcm_substream->runtime->status ||
cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
- cap_subs->prepared = 0;
+ atomic_set(&cap_subs->state, state_STOPPED);
usX2Y_urbs_release(cap_subs);
}
} else {
- snd_usX2Y_substream_t *playback_subs = subs->usX2Y->substream[SNDRV_PCM_STREAM_PLAYBACK];
- if (!playback_subs->prepared) {
- subs->prepared = 0;
+ snd_usX2Y_substream_t *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ if (atomic_read(&playback_subs->state) < state_PREPARED) {
+ atomic_set(&subs->state, state_STOPPED);
usX2Y_urbs_release(subs);
}
}
-
+ up(&subs->usX2Y->prepare_mutex);
return snd_pcm_lib_free_pages(substream);
}
/*
{
snd_pcm_runtime_t *runtime = substream->runtime;
snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)runtime->private_data;
- snd_usX2Y_substream_t *capsubs = subs->usX2Y->substream[SNDRV_PCM_STREAM_CAPTURE];
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ snd_usX2Y_substream_t *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
int err = 0;
snd_printdd("snd_usX2Y_pcm_prepare(%p)\n", substream);
+ down(&usX2Y->prepare_mutex);
+ usX2Y_subs_prepare(subs);
// Start hardware streams
// SyncStream first....
- if (! capsubs->prepared) {
- if (subs->usX2Y->format != runtime->format)
- if ((err = usX2Y_format_set(subs->usX2Y, runtime->format)) < 0)
- return err;
- if (subs->usX2Y->rate != runtime->rate)
- if ((err = usX2Y_rate_set(subs->usX2Y, runtime->rate)) < 0)
- return err;
- snd_printdd("starting capture pipe for playpipe\n");
- usX2Y_urbs_allocate(capsubs);
- capsubs->completed_urb = NULL;
- {
- DECLARE_WAITQUEUE(wait, current);
- add_wait_queue(&capsubs->wait_queue, &wait);
- if (0 <= (err = usX2Y_urbs_capt_start(capsubs))) {
- signed long timeout;
- set_current_state(TASK_INTERRUPTIBLE);
- timeout = schedule_timeout(HZ/4);
- if (signal_pending(current))
- err = -ERESTARTSYS;
- else {
- snd_printdd("%li\n", HZ/4 - timeout);
- if (0 == timeout)
- err = -EPIPE;
- }
- }
- remove_wait_queue(&capsubs->wait_queue, &wait);
- if (0 > err)
- return err;
- }
+ if (atomic_read(&capsubs->state) < state_PREPARED) {
+ if (usX2Y->format != runtime->format)
+ if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
+ goto up_prepare_mutex;
+ if (usX2Y->rate != runtime->rate)
+ if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
+ goto up_prepare_mutex;
+ snd_printdd("starting capture pipe for %s\n", subs == capsubs ? "self" : "playpipe");
+ if (0 > (err = usX2Y_urbs_start(capsubs)))
+ goto up_prepare_mutex;
}
- if (subs != capsubs) {
- int u;
- if (!subs->prepared) {
- if ((err = usX2Y_urbs_allocate(subs)) < 0)
- return err;
- subs->prepared = 1;
- }
- while (subs->submitted_urbs)
- for (u = 0; u < NRURBS; u++) {
- snd_printdd("%i\n", subs->urb[u]->status);
- while(subs->urb[u]->status || NULL != subs->urb[u]->hcpriv) {
- signed long timeout;
- snd_printdd("ep=%i waiting for urb=%p status=%i hcpriv=%p\n",
- subs->endpoint, subs->urb[u],
- subs->urb[u]->status, subs->urb[u]->hcpriv);
- set_current_state(TASK_INTERRUPTIBLE);
- timeout = schedule_timeout(HZ/10);
- if (signal_pending(current)) {
- return -ERESTARTSYS;
- }
- }
- }
- subs->completed_urb = NULL;
- subs->next_urb_complete = -1;
- subs->stalled = 0;
- }
+ if (subs != capsubs && atomic_read(&subs->state) < state_PREPARED)
+ err = usX2Y_urbs_start(subs);
- usX2Y_substream_prepare(subs);
+ up_prepare_mutex:
+ up(&usX2Y->prepare_mutex);
return err;
}
snd_pcm_substream_chip(substream))[substream->stream];
snd_pcm_runtime_t *runtime = substream->runtime;
+ if (subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS)
+ return -EBUSY;
+
runtime->hw = snd_usX2Y_2c;
runtime->private_data = subs;
subs->pcm_substream = substream;
snd_pcm_t *pcm;
int err, i;
snd_usX2Y_substream_t **usX2Y_substream =
- usX2Y(card)->substream + 2 * usX2Y(card)->chip.pcm_devs;
+ usX2Y(card)->subs + 2 * usX2Y(card)->chip.pcm_devs;
for (i = playback_endpoint ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
i <= SNDRV_PCM_STREAM_CAPTURE; ++i) {
snd_printk(KERN_ERR "cannot malloc\n");
return -ENOMEM;
}
- init_waitqueue_head(&usX2Y_substream[i]->wait_queue);
usX2Y_substream[i]->usX2Y = usX2Y(card);
}
return 0;
}
-/*
- * free the chip instance
- *
- * here we have to do not much, since pcm and controls are already freed
- *
- */
-static int snd_usX2Y_device_dev_free(snd_device_t *device)
-{
- return 0;
-}
-
-
/*
* create a chip instance and set its names.
*/
int usX2Y_audio_create(snd_card_t* card)
{
int err = 0;
- static snd_device_ops_t ops = {
- .dev_free = snd_usX2Y_device_dev_free,
- };
INIT_LIST_HEAD(&usX2Y(card)->chip.pcm_list);
- if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, usX2Y(card), &ops)) < 0) {
-// snd_usX2Y_audio_free(usX2Y(card));
- return err;
- }
-
if (0 > (err = usX2Y_audio_stream_new(card, 0xA, 0x8)))
return err;
- if (usX2Y(card)->chip.dev->descriptor.idProduct == USB_ID_US428)
+ if (le16_to_cpu(usX2Y(card)->chip.dev->descriptor.idProduct) == USB_ID_US428)
if (0 > (err = usX2Y_audio_stream_new(card, 0, 0xA)))
return err;
- if (usX2Y(card)->chip.dev->descriptor.idProduct != USB_ID_US122)
+ if (le16_to_cpu(usX2Y(card)->chip.dev->descriptor.idProduct) != USB_ID_US122)
err = usX2Y_rate_set(usX2Y(card), 44100); // Lets us428 recognize output-volume settings, disturbs us122.
return err;
}
/* hwdep id string */
#define SND_USX2Y_LOADER_ID "USX2Y Loader"
+#define SND_USX2Y_USBPCM_ID "USX2Y USBPCM"
/* hardware type */
enum {
/* chip status */
enum {
- USX2Y_STAT_CHIP_INIT = (1 << 0), /* all operational */
- USX2Y_STAT_CHIP_HUP = (1 << 31), /* all operational */
+ USX2Y_STAT_CHIP_INIT = (1 << 0), /* all operational */
+ USX2Y_STAT_CHIP_MMAP_PCM_URBS = (1 << 1), /* pcm transport over mmaped urbs */
+ USX2Y_STAT_CHIP_HUP = (1 << 31), /* all operational */
};
#endif /* __SOUND_USX2Y_COMMON_H */
--- /dev/null
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+/* USX2Y "rawusb" aka hwdep_pcm implementation
+
+ USB's inability to atomically handle power-of-2 period sized data chunks
+ at standard sample rates is what led to this part of the usx2y module:
+ it provides the alsa kernel half of the usx2y-alsa-jack driver pair.
+ The pair uses a hardware dependent alsa device for mmapped pcm transport.
+ Advantage achieved:
+ The usb_hc moves pcm data from/into memory via DMA.
+ That memory is mmapped by jack's usx2y driver.
+ Jack's usx2y driver is the first/last to read/write pcm data.
+ Read/write is a combination of power-of-2 period shaping and
+ float/int conversion.
+ Compared to mainline alsa/jack we leave out power-of-2 period shaping inside
+ snd-usb-usx2y, which needs memcpy() and additional buffers.
+ As a side effect, possible unwanted pcm-data corruption resulting from
+ standard alsa's snd-usb-usx2y period shaping scheme falls away.
+ The result is sane jack operation with buffering schemes down to 128 frames,
+ 2 periods.
+ Plain usx2y alsa mode is able to achieve 64 frames, 4 periods, but only at the
+ cost of more easily triggered xruns (e.g. with aeolus); 128 or 256 frames with
+ 2 periods works but is useless because of crackling.
+
+ This is a first "proof of concept" implementation.
+ Later, functionality should migrate to more appropriate places:
+ Userland:
+ - jackd could mmap its float-pcm buffers directly from alsa-lib.
+ - alsa-lib could provide power-of-2 period sized shaping combined with int/float
+ conversion.
+ Currently the usx2y jack driver provides the above two services.
+ Kernel:
+ - rawusb dma pcm buffer transport should go to snd-usb-lib, so snd-usb-audio
+ devices can use it as well.
+ Currently rawusb dma pcm buffer transport (this file) is only available to snd-usb-usx2y.
+*/
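+
+/* A minimal userland sketch (illustration only, not part of this driver) of how
+ * the jack-side counterpart is expected to reach the shared pcm memory: it opens
+ * the hwdep pcm device and mmaps the area exported by snd_usX2Y_hwdep_pcm_mmap()
+ * below.  The device path is an assumption and depends on card/device numbering;
+ * error handling is omitted.
+ *
+ *	int fd = open("/dev/snd/hwC0D1", O_RDWR);	// hwdep pcm node; path may differ
+ *	void *shm = mmap(NULL, sizeof(snd_usX2Y_hwdep_pcm_shm_t),
+ *			 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ *	// shm now aliases the playback/capture0x8/capture0xA buffers that the
+ *	// isochronous urbs below DMA into; munmap() and close() when done.
+ */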
+
+#include "usbusx2yaudio.c"
+
+#if defined(USX2Y_NRPACKS_VARIABLE) || (!defined(USX2Y_NRPACKS_VARIABLE) && USX2Y_NRPACKS == 1)
+
+#include <sound/hwdep.h>
+
+
+static int usX2Y_usbpcm_urb_capt_retire(snd_usX2Y_substream_t *subs)
+{
+ struct urb *urb = subs->completed_urb;
+ snd_pcm_runtime_t *runtime = subs->pcm_substream->runtime;
+ int i, lens = 0, hwptr_done = subs->hwptr_done;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ if (0 > usX2Y->hwdep_pcm_shm->capture_iso_start) { //FIXME
+ int head = usX2Y->hwdep_pcm_shm->captured_iso_head + 1;
+ if (head >= ARRAY_SIZE(usX2Y->hwdep_pcm_shm->captured_iso))
+ head = 0;
+ usX2Y->hwdep_pcm_shm->capture_iso_start = head;
+ snd_printdd("cap start %i\n", head);
+ }
+ for (i = 0; i < nr_of_packs(); i++) {
+ if (urb->iso_frame_desc[i].status) { /* active? hmm, skip this */
+			snd_printk("active frame status %i. Most probably some hardware problem.\n", urb->iso_frame_desc[i].status);
+ return urb->iso_frame_desc[i].status;
+ }
+ lens += urb->iso_frame_desc[i].actual_length / usX2Y->stride;
+ }
+ if ((hwptr_done += lens) >= runtime->buffer_size)
+ hwptr_done -= runtime->buffer_size;
+ subs->hwptr_done = hwptr_done;
+ subs->transfer_done += lens;
+ /* update the pointer, call callback if necessary */
+ if (subs->transfer_done >= runtime->period_size) {
+ subs->transfer_done -= runtime->period_size;
+ snd_pcm_period_elapsed(subs->pcm_substream);
+ }
+ return 0;
+}
+
+static inline int usX2Y_iso_frames_per_buffer(snd_pcm_runtime_t *runtime, usX2Ydev_t * usX2Y)
+{
+	return (runtime->buffer_size * 1000) / usX2Y->rate + 1;	//FIXME: so far only correct for period_size == 2^x?
+}
+
+/*
+ * prepare urb for playback data pipe
+ *
+ * we copy the data directly from the pcm buffer.
+ * the current position to be copied is held in hwptr field.
+ * since a urb can handle only a single linear buffer, if the total
+ * transferred area overflows the buffer boundary, we cannot send
+ * it directly from the buffer. thus the data is once copied to
+ * a temporary buffer and urb points to that.
+ */
+static int usX2Y_hwdep_urb_play_prepare(snd_usX2Y_substream_t *subs,
+ struct urb *urb)
+{
+ int count, counts, pack;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ struct snd_usX2Y_hwdep_pcm_shm *shm = usX2Y->hwdep_pcm_shm;
+ snd_pcm_runtime_t *runtime = subs->pcm_substream->runtime;
+
+ if (0 > shm->playback_iso_start) {
+ shm->playback_iso_start = shm->captured_iso_head -
+ usX2Y_iso_frames_per_buffer(runtime, usX2Y);
+ if (0 > shm->playback_iso_start)
+ shm->playback_iso_start += ARRAY_SIZE(shm->captured_iso);
+ shm->playback_iso_head = shm->playback_iso_start;
+ }
+
+ count = 0;
+ for (pack = 0; pack < nr_of_packs(); pack++) {
+ /* calculate the size of a packet */
+ counts = shm->captured_iso[shm->playback_iso_head].length / usX2Y->stride;
+ if (counts < 43 || counts > 50) {
+ snd_printk("should not be here with counts=%i\n", counts);
+ return -EPIPE;
+ }
+ /* set up descriptor */
+ urb->iso_frame_desc[pack].offset = shm->captured_iso[shm->playback_iso_head].offset;
+ urb->iso_frame_desc[pack].length = shm->captured_iso[shm->playback_iso_head].length;
+ if (atomic_read(&subs->state) != state_RUNNING)
+ memset((char *)urb->transfer_buffer + urb->iso_frame_desc[pack].offset, 0,
+ urb->iso_frame_desc[pack].length);
+ if (++shm->playback_iso_head >= ARRAY_SIZE(shm->captured_iso))
+ shm->playback_iso_head = 0;
+ count += counts;
+ }
+ urb->transfer_buffer_length = count * usX2Y->stride;
+ return 0;
+}
+
+
+static inline void usX2Y_usbpcm_urb_capt_iso_advance(snd_usX2Y_substream_t *subs, struct urb *urb)
+{
+ int pack;
+ for (pack = 0; pack < nr_of_packs(); ++pack) {
+ struct usb_iso_packet_descriptor *desc = urb->iso_frame_desc + pack;
+ if (NULL != subs) {
+ snd_usX2Y_hwdep_pcm_shm_t *shm = subs->usX2Y->hwdep_pcm_shm;
+ int head = shm->captured_iso_head + 1;
+ if (head >= ARRAY_SIZE(shm->captured_iso))
+ head = 0;
+ shm->captured_iso[head].frame = urb->start_frame + pack;
+ shm->captured_iso[head].offset = desc->offset;
+ shm->captured_iso[head].length = desc->actual_length;
+ shm->captured_iso_head = head;
+ shm->captured_iso_frames++;
+ }
+ if ((desc->offset += desc->length * NRURBS*nr_of_packs()) +
+ desc->length >= SSS)
+ desc->offset -= (SSS - desc->length);
+ }
+}
+
+static inline int usX2Y_usbpcm_usbframe_complete(snd_usX2Y_substream_t *capsubs,
+ snd_usX2Y_substream_t *capsubs2,
+ snd_usX2Y_substream_t *playbacksubs, int frame)
+{
+ int err, state;
+ struct urb *urb = playbacksubs->completed_urb;
+
+ state = atomic_read(&playbacksubs->state);
+ if (NULL != urb) {
+ if (state == state_RUNNING)
+ usX2Y_urb_play_retire(playbacksubs, urb);
+ else
+ if (state >= state_PRERUNNING) {
+ atomic_inc(&playbacksubs->state);
+ }
+ } else {
+ switch (state) {
+ case state_STARTING1:
+ urb = playbacksubs->urb[0];
+ atomic_inc(&playbacksubs->state);
+ break;
+ case state_STARTING2:
+ urb = playbacksubs->urb[1];
+ atomic_inc(&playbacksubs->state);
+ break;
+ }
+ }
+ if (urb) {
+ if ((err = usX2Y_hwdep_urb_play_prepare(playbacksubs, urb)) ||
+ (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
+ return err;
+ }
+ }
+
+ playbacksubs->completed_urb = NULL;
+
+ state = atomic_read(&capsubs->state);
+ if (state >= state_PREPARED) {
+ if (state == state_RUNNING) {
+ if ((err = usX2Y_usbpcm_urb_capt_retire(capsubs)))
+ return err;
+ } else {
+ if (state >= state_PRERUNNING)
+ atomic_inc(&capsubs->state);
+ }
+ usX2Y_usbpcm_urb_capt_iso_advance(capsubs, capsubs->completed_urb);
+ if (NULL != capsubs2)
+ usX2Y_usbpcm_urb_capt_iso_advance(NULL, capsubs2->completed_urb);
+ if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ return err;
+ if (NULL != capsubs2)
+ if ((err = usX2Y_urb_submit(capsubs2, capsubs2->completed_urb, frame)))
+ return err;
+ }
+ capsubs->completed_urb = NULL;
+ if (NULL != capsubs2)
+ capsubs2->completed_urb = NULL;
+ return 0;
+}
+
+
+static void i_usX2Y_usbpcm_urb_complete(struct urb *urb, struct pt_regs *regs)
+{
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t*)urb->context;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ snd_usX2Y_substream_t *capsubs, *capsubs2, *playbacksubs;
+
+ if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
+ snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n", usb_get_current_frame_number(usX2Y->chip.dev), subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out", urb->status, urb->start_frame);
+ return;
+ }
+ if (unlikely(urb->status)) {
+ usX2Y_error_urb_status(usX2Y, subs, urb);
+ return;
+ }
+ if (likely((0xFFFF & urb->start_frame) == usX2Y->wait_iso_frame))
+ subs->completed_urb = urb;
+ else {
+ usX2Y_error_sequence(usX2Y, subs, urb);
+ return;
+ }
+
+ capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+ playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ if (capsubs->completed_urb && atomic_read(&capsubs->state) >= state_PREPARED &&
+ (NULL == capsubs2 || capsubs2->completed_urb) &&
+ (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < state_PREPARED)) {
+ if (!usX2Y_usbpcm_usbframe_complete(capsubs, capsubs2, playbacksubs, urb->start_frame)) {
+ if (nr_of_packs() <= urb->start_frame &&
+ urb->start_frame <= (2 * nr_of_packs() - 1)) // uhci and ohci
+ usX2Y->wait_iso_frame = urb->start_frame - nr_of_packs();
+ else
+ usX2Y->wait_iso_frame += nr_of_packs();
+ } else {
+ snd_printdd("\n");
+ usX2Y_clients_stop(usX2Y);
+ }
+ }
+}
+
+
+static void usX2Y_hwdep_urb_release(struct urb** urb)
+{
+ usb_kill_urb(*urb);
+ usb_free_urb(*urb);
+ *urb = NULL;
+}
+
+/*
+ * release a substream
+ */
+static void usX2Y_usbpcm_urbs_release(snd_usX2Y_substream_t *subs)
+{
+ int i;
+	snd_printdd("usX2Y_usbpcm_urbs_release() %i\n", subs->endpoint);
+ for (i = 0; i < NRURBS; i++)
+ usX2Y_hwdep_urb_release(subs->urb + i);
+}
+
+static void usX2Y_usbpcm_subs_startup_finish(usX2Ydev_t * usX2Y)
+{
+ usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_urb_complete);
+ usX2Y->prepare_subs = NULL;
+}
+
+static void i_usX2Y_usbpcm_subs_startup(struct urb *urb, struct pt_regs *regs)
+{
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t*)urb->context;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ snd_usX2Y_substream_t *prepare_subs = usX2Y->prepare_subs;
+ if (NULL != prepare_subs &&
+ urb->start_frame == prepare_subs->urb[0]->start_frame) {
+ atomic_inc(&prepare_subs->state);
+ if (prepare_subs == usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]) {
+ snd_usX2Y_substream_t *cap_subs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+ if (cap_subs2 != NULL)
+ atomic_inc(&cap_subs2->state);
+ }
+ usX2Y_usbpcm_subs_startup_finish(usX2Y);
+ wake_up(&usX2Y->prepare_wait_queue);
+ }
+
+ i_usX2Y_usbpcm_urb_complete(urb, regs);
+}
+
+/*
+ * initialize a substream's urbs
+ */
+static int usX2Y_usbpcm_urbs_allocate(snd_usX2Y_substream_t *subs)
+{
+ int i;
+ unsigned int pipe;
+ int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ struct usb_device *dev = subs->usX2Y->chip.dev;
+
+ pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ usb_rcvisocpipe(dev, subs->endpoint);
+ subs->maxpacksize = usb_maxpacket(dev, pipe, is_playback);
+ if (!subs->maxpacksize)
+ return -EINVAL;
+
+ /* allocate and initialize data urbs */
+ for (i = 0; i < NRURBS; i++) {
+ struct urb** purb = subs->urb + i;
+ if (*purb) {
+ usb_kill_urb(*purb);
+ continue;
+ }
+ *purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
+ if (NULL == *purb) {
+ usX2Y_usbpcm_urbs_release(subs);
+ return -ENOMEM;
+ }
+ (*purb)->transfer_buffer = is_playback ?
+ subs->usX2Y->hwdep_pcm_shm->playback : (
+ subs->endpoint == 0x8 ?
+ subs->usX2Y->hwdep_pcm_shm->capture0x8 :
+ subs->usX2Y->hwdep_pcm_shm->capture0xA);
+
+ (*purb)->dev = dev;
+ (*purb)->pipe = pipe;
+ (*purb)->number_of_packets = nr_of_packs();
+ (*purb)->context = subs;
+ (*purb)->interval = 1;
+ (*purb)->complete = i_usX2Y_usbpcm_subs_startup;
+ }
+ return 0;
+}
+
+/*
+ * free the buffer
+ */
+static int snd_usX2Y_usbpcm_hw_free(snd_pcm_substream_t *substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)runtime->private_data,
+ *cap_subs2 = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+ down(&subs->usX2Y->prepare_mutex);
+ snd_printdd("snd_usX2Y_usbpcm_hw_free(%p)\n", substream);
+
+ if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
+ snd_usX2Y_substream_t *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ atomic_set(&subs->state, state_STOPPED);
+ usX2Y_usbpcm_urbs_release(subs);
+ if (!cap_subs->pcm_substream ||
+ !cap_subs->pcm_substream->runtime ||
+ !cap_subs->pcm_substream->runtime->status ||
+ cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
+ atomic_set(&cap_subs->state, state_STOPPED);
+ if (NULL != cap_subs2)
+ atomic_set(&cap_subs2->state, state_STOPPED);
+ usX2Y_usbpcm_urbs_release(cap_subs);
+ if (NULL != cap_subs2)
+ usX2Y_usbpcm_urbs_release(cap_subs2);
+ }
+ } else {
+ snd_usX2Y_substream_t *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ if (atomic_read(&playback_subs->state) < state_PREPARED) {
+ atomic_set(&subs->state, state_STOPPED);
+ if (NULL != cap_subs2)
+ atomic_set(&cap_subs2->state, state_STOPPED);
+ usX2Y_usbpcm_urbs_release(subs);
+ if (NULL != cap_subs2)
+ usX2Y_usbpcm_urbs_release(cap_subs2);
+ }
+ }
+ up(&subs->usX2Y->prepare_mutex);
+ return snd_pcm_lib_free_pages(substream);
+}
+
+static void usX2Y_usbpcm_subs_startup(snd_usX2Y_substream_t *subs)
+{
+ usX2Ydev_t * usX2Y = subs->usX2Y;
+ usX2Y->prepare_subs = subs;
+ subs->urb[0]->start_frame = -1;
+	smp_wmb(); // Make sure the above modifications are seen by i_usX2Y_subs_startup()
+ usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_subs_startup);
+}
+
+static int usX2Y_usbpcm_urbs_start(snd_usX2Y_substream_t *subs)
+{
+ int p, u, err,
+ stream = subs->pcm_substream->stream;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+
+ if (SNDRV_PCM_STREAM_CAPTURE == stream) {
+ usX2Y->hwdep_pcm_shm->captured_iso_head = -1;
+ usX2Y->hwdep_pcm_shm->captured_iso_frames = 0;
+ }
+
+ for (p = 0; 3 >= (stream + p); p += 2) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[stream + p];
+ if (subs != NULL) {
+ if ((err = usX2Y_usbpcm_urbs_allocate(subs)) < 0)
+ return err;
+ subs->completed_urb = NULL;
+ }
+ }
+
+ for (p = 0; p < 4; p++) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[p];
+ if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
+ goto start;
+ }
+ usX2Y->wait_iso_frame = -1;
+
+ start:
+ usX2Y_usbpcm_subs_startup(subs);
+ for (u = 0; u < NRURBS; u++) {
+ for (p = 0; 3 >= (stream + p); p += 2) {
+ snd_usX2Y_substream_t *subs = usX2Y->subs[stream + p];
+ if (subs != NULL) {
+ struct urb *urb = subs->urb[u];
+ if (usb_pipein(urb->pipe)) {
+ unsigned long pack;
+ if (0 == u)
+ atomic_set(&subs->state, state_STARTING3);
+ urb->dev = usX2Y->chip.dev;
+ urb->transfer_flags = URB_ISO_ASAP;
+ for (pack = 0; pack < nr_of_packs(); pack++) {
+ urb->iso_frame_desc[pack].offset = subs->maxpacksize * (pack + u * nr_of_packs());
+ urb->iso_frame_desc[pack].length = subs->maxpacksize;
+ }
+ urb->transfer_buffer_length = subs->maxpacksize * nr_of_packs();
+ if ((err = usb_submit_urb(urb, GFP_KERNEL)) < 0) {
+ snd_printk (KERN_ERR "cannot usb_submit_urb() for urb %d, err = %d\n", u, err);
+ err = -EPIPE;
+ goto cleanup;
+ } else {
+ snd_printdd("%i\n", urb->start_frame);
+ if (0 > usX2Y->wait_iso_frame)
+ usX2Y->wait_iso_frame = urb->start_frame;
+ }
+ urb->transfer_flags = 0;
+ } else {
+ atomic_set(&subs->state, state_STARTING1);
+ break;
+ }
+ }
+ }
+ }
+ err = 0;
+ wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+ if (atomic_read(&subs->state) != state_PREPARED)
+ err = -EPIPE;
+
+ cleanup:
+ if (err) {
+ usX2Y_subs_startup_finish(usX2Y); // Call it now
+		usX2Y_clients_stop(usX2Y); // something is completely wrong > stop everything
+ }
+ return err;
+}
+
+/*
+ * prepare callback
+ *
+ * set format and initialize urbs
+ */
+static int snd_usX2Y_usbpcm_prepare(snd_pcm_substream_t *substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)runtime->private_data;
+ usX2Ydev_t *usX2Y = subs->usX2Y;
+ snd_usX2Y_substream_t *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ int err = 0;
+	snd_printdd("snd_usX2Y_usbpcm_prepare(%p)\n", substream);
+
+ if (NULL == usX2Y->hwdep_pcm_shm) {
+ if (NULL == (usX2Y->hwdep_pcm_shm = snd_malloc_pages(sizeof(snd_usX2Y_hwdep_pcm_shm_t), GFP_KERNEL)))
+ return -ENOMEM;
+ memset(usX2Y->hwdep_pcm_shm, 0, sizeof(snd_usX2Y_hwdep_pcm_shm_t));
+ }
+
+ down(&usX2Y->prepare_mutex);
+ usX2Y_subs_prepare(subs);
+// Start hardware streams
+// SyncStream first....
+ if (atomic_read(&capsubs->state) < state_PREPARED) {
+ if (usX2Y->format != runtime->format)
+ if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
+ goto up_prepare_mutex;
+ if (usX2Y->rate != runtime->rate)
+ if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
+ goto up_prepare_mutex;
+ snd_printdd("starting capture pipe for %s\n", subs == capsubs ? "self" : "playpipe");
+ if (0 > (err = usX2Y_usbpcm_urbs_start(capsubs)))
+ goto up_prepare_mutex;
+ }
+
+ if (subs != capsubs) {
+ usX2Y->hwdep_pcm_shm->playback_iso_start = -1;
+ if (atomic_read(&subs->state) < state_PREPARED) {
+ while (usX2Y_iso_frames_per_buffer(runtime, usX2Y) > usX2Y->hwdep_pcm_shm->captured_iso_frames) {
+ signed long timeout;
+ snd_printd("Wait: iso_frames_per_buffer=%i,captured_iso_frames=%i\n", usX2Y_iso_frames_per_buffer(runtime, usX2Y), usX2Y->hwdep_pcm_shm->captured_iso_frames);
+ set_current_state(TASK_INTERRUPTIBLE);
+ timeout = schedule_timeout(HZ/100 + 1);
+ if (signal_pending(current)) {
+ err = -ERESTARTSYS;
+ goto up_prepare_mutex;
+ }
+ }
+ if (0 > (err = usX2Y_usbpcm_urbs_start(subs)))
+ goto up_prepare_mutex;
+ }
+ snd_printd("Ready: iso_frames_per_buffer=%i,captured_iso_frames=%i\n", usX2Y_iso_frames_per_buffer(runtime, usX2Y), usX2Y->hwdep_pcm_shm->captured_iso_frames);
+ } else
+ usX2Y->hwdep_pcm_shm->capture_iso_start = -1;
+
+ up_prepare_mutex:
+ up(&usX2Y->prepare_mutex);
+ return err;
+}
+
+static snd_pcm_hardware_t snd_usX2Y_4c =
+{
+ .info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_3LE,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
+ .rate_min = 44100,
+ .rate_max = 48000,
+ .channels_min = 2,
+ .channels_max = 4,
+ .buffer_bytes_max = (2*128*1024),
+ .period_bytes_min = 64,
+ .period_bytes_max = (128*1024),
+ .periods_min = 2,
+ .periods_max = 1024,
+ .fifo_size = 0
+};
+
+
+
+static int snd_usX2Y_usbpcm_open(snd_pcm_substream_t *substream)
+{
+ snd_usX2Y_substream_t *subs = ((snd_usX2Y_substream_t **)
+ snd_pcm_substream_chip(substream))[substream->stream];
+ snd_pcm_runtime_t *runtime = substream->runtime;
+
+ if (!(subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS))
+ return -EBUSY;
+
+ runtime->hw = SNDRV_PCM_STREAM_PLAYBACK == substream->stream ? snd_usX2Y_2c :
+ (subs->usX2Y->subs[3] ? snd_usX2Y_4c : snd_usX2Y_2c);
+ runtime->private_data = subs;
+ subs->pcm_substream = substream;
+ snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_TIME, 1000, 200000);
+ return 0;
+}
+
+
+static int snd_usX2Y_usbpcm_close(snd_pcm_substream_t *substream)
+{
+ snd_pcm_runtime_t *runtime = substream->runtime;
+ snd_usX2Y_substream_t *subs = (snd_usX2Y_substream_t *)runtime->private_data;
+ int err = 0;
+ snd_printd("\n");
+ subs->pcm_substream = NULL;
+ return err;
+}
+
+
+static snd_pcm_ops_t snd_usX2Y_usbpcm_ops =
+{
+ .open = snd_usX2Y_usbpcm_open,
+ .close = snd_usX2Y_usbpcm_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = snd_usX2Y_pcm_hw_params,
+ .hw_free = snd_usX2Y_usbpcm_hw_free,
+ .prepare = snd_usX2Y_usbpcm_prepare,
+ .trigger = snd_usX2Y_pcm_trigger,
+ .pointer = snd_usX2Y_pcm_pointer,
+};
+
+
+static int usX2Y_pcms_lock_check(snd_card_t *card)
+{
+ struct list_head *list;
+ snd_device_t *dev;
+ snd_pcm_t *pcm;
+ int err = 0;
+ list_for_each(list, &card->devices) {
+ dev = snd_device(list);
+ if (dev->type != SNDRV_DEV_PCM)
+ continue;
+ pcm = dev->device_data;
+ down(&pcm->open_mutex);
+ }
+ list_for_each(list, &card->devices) {
+ int s;
+ dev = snd_device(list);
+ if (dev->type != SNDRV_DEV_PCM)
+ continue;
+ pcm = dev->device_data;
+ for (s = 0; s < 2; ++s) {
+ snd_pcm_substream_t *substream;
+ substream = pcm->streams[s].substream;
+ if (substream && substream->open_flag)
+ err = -EBUSY;
+ }
+ }
+ return err;
+}
+
+
+static void usX2Y_pcms_unlock(snd_card_t *card)
+{
+ struct list_head *list;
+ snd_device_t *dev;
+ snd_pcm_t *pcm;
+ list_for_each(list, &card->devices) {
+ dev = snd_device(list);
+ if (dev->type != SNDRV_DEV_PCM)
+ continue;
+ pcm = dev->device_data;
+ up(&pcm->open_mutex);
+ }
+}
+
+
+static int snd_usX2Y_hwdep_pcm_open(snd_hwdep_t *hw, struct file *file)
+{
+ // we need to be the first
+ snd_card_t *card = hw->card;
+ int err = usX2Y_pcms_lock_check(card);
+ if (0 == err)
+ usX2Y(card)->chip_status |= USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+ usX2Y_pcms_unlock(card);
+ return err;
+}
+
+
+static int snd_usX2Y_hwdep_pcm_release(snd_hwdep_t *hw, struct file *file)
+{
+ snd_card_t *card = hw->card;
+ int err = usX2Y_pcms_lock_check(card);
+ if (0 == err)
+ usX2Y(hw->card)->chip_status &= ~USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+ usX2Y_pcms_unlock(card);
+ return err;
+}
+
+
+static void snd_usX2Y_hwdep_pcm_vm_open(struct vm_area_struct *area)
+{
+}
+
+
+static void snd_usX2Y_hwdep_pcm_vm_close(struct vm_area_struct *area)
+{
+}
+
+
+static struct page * snd_usX2Y_hwdep_pcm_vm_nopage(struct vm_area_struct *area, unsigned long address, int *type)
+{
+ unsigned long offset;
+ struct page *page;
+ void *vaddr;
+
+ offset = area->vm_pgoff << PAGE_SHIFT;
+ offset += address - area->vm_start;
+ snd_assert((offset % PAGE_SIZE) == 0, return NOPAGE_OOM);
+ vaddr = (char*)((usX2Ydev_t*)area->vm_private_data)->hwdep_pcm_shm + offset;
+ page = virt_to_page(vaddr);
+
+ if (type)
+ *type = VM_FAULT_MINOR;
+
+ return page;
+}
+
+
+static struct vm_operations_struct snd_usX2Y_hwdep_pcm_vm_ops = {
+ .open = snd_usX2Y_hwdep_pcm_vm_open,
+ .close = snd_usX2Y_hwdep_pcm_vm_close,
+ .nopage = snd_usX2Y_hwdep_pcm_vm_nopage,
+};
+
+
+static int snd_usX2Y_hwdep_pcm_mmap(snd_hwdep_t * hw, struct file *filp, struct vm_area_struct *area)
+{
+ unsigned long size = (unsigned long)(area->vm_end - area->vm_start);
+ usX2Ydev_t *usX2Y = (usX2Ydev_t*)hw->private_data;
+
+ if (!(((usX2Ydev_t*)hw->private_data)->chip_status & USX2Y_STAT_CHIP_INIT))
+ return -EBUSY;
+
+ /* if userspace tries to mmap beyond end of our buffer, fail */
+ if (size > PAGE_ALIGN(sizeof(snd_usX2Y_hwdep_pcm_shm_t))) {
+ snd_printd("%lu > %lu\n", size, (unsigned long)sizeof(snd_usX2Y_hwdep_pcm_shm_t));
+ return -EINVAL;
+ }
+
+ if (!usX2Y->hwdep_pcm_shm) {
+ return -ENODEV;
+ }
+ area->vm_ops = &snd_usX2Y_hwdep_pcm_vm_ops;
+ area->vm_flags |= VM_RESERVED;
+ snd_printd("vm_flags=0x%lX\n", area->vm_flags);
+ area->vm_private_data = hw->private_data;
+ return 0;
+}
+
+
+static void snd_usX2Y_hwdep_pcm_private_free(snd_hwdep_t *hwdep)
+{
+ usX2Ydev_t *usX2Y = (usX2Ydev_t *)hwdep->private_data;
+ if (NULL != usX2Y->hwdep_pcm_shm)
+ snd_free_pages(usX2Y->hwdep_pcm_shm, sizeof(snd_usX2Y_hwdep_pcm_shm_t));
+}
+
+
+static void snd_usX2Y_usbpcm_private_free(snd_pcm_t *pcm)
+{
+ snd_pcm_lib_preallocate_free_for_all(pcm);
+}
+
+
+int usX2Y_hwdep_pcm_new(snd_card_t* card)
+{
+ int err;
+ snd_hwdep_t *hw;
+ snd_pcm_t *pcm;
+ struct usb_device *dev = usX2Y(card)->chip.dev;
+ if (1 != nr_of_packs())
+ return 0;
+
+ if ((err = snd_hwdep_new(card, SND_USX2Y_USBPCM_ID, 1, &hw)) < 0) {
+ snd_printd("\n");
+ return err;
+ }
+ hw->iface = SNDRV_HWDEP_IFACE_USX2Y_PCM;
+ hw->private_data = usX2Y(card);
+ hw->private_free = snd_usX2Y_hwdep_pcm_private_free;
+ hw->ops.open = snd_usX2Y_hwdep_pcm_open;
+ hw->ops.release = snd_usX2Y_hwdep_pcm_release;
+ hw->ops.mmap = snd_usX2Y_hwdep_pcm_mmap;
+ hw->exclusive = 1;
+ sprintf(hw->name, "/proc/bus/usb/%03d/%03d/hwdeppcm", dev->bus->busnum, dev->devnum);
+
+ err = snd_pcm_new(card, NAME_ALLCAPS" hwdep Audio", 2, 1, 1, &pcm);
+ if (err < 0) {
+ return err;
+ }
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usX2Y_usbpcm_ops);
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usX2Y_usbpcm_ops);
+
+ pcm->private_data = usX2Y(card)->subs;
+ pcm->private_free = snd_usX2Y_usbpcm_private_free;
+ pcm->info_flags = 0;
+
+ sprintf(pcm->name, NAME_ALLCAPS" hwdep Audio");
+ if (0 > (err = snd_pcm_lib_preallocate_pages(pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream,
+ SNDRV_DMA_TYPE_CONTINUOUS,
+ snd_dma_continuous_data(GFP_KERNEL),
+ 64*1024, 128*1024)) ||
+ 0 > (err = snd_pcm_lib_preallocate_pages(pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream,
+ SNDRV_DMA_TYPE_CONTINUOUS,
+ snd_dma_continuous_data(GFP_KERNEL),
+ 64*1024, 128*1024))) {
+ snd_usX2Y_usbpcm_private_free(pcm);
+ return err;
+ }
+
+
+ return 0;
+}
+
+#else
+
+int usX2Y_hwdep_pcm_new(snd_card_t* card)
+{
+ return 0;
+}
+
+#endif
clean-files := initramfs_data.cpio.gz initramfs_list
-# If you want a different list of files in the initramfs_data.cpio
-# then you can either overwrite the cpio_list in this directory
-# or set INITRAMFS_LIST to another filename.
-INITRAMFS_LIST := $(obj)/initramfs_list
-
# initramfs_data.o contains the initramfs_data.cpio.gz image.
# The image is included using .incbin, a dependency which is not
# tracked automatically.
$(obj)/initramfs_data.o: $(obj)/initramfs_data.cpio.gz FORCE
-# initramfs-y are the programs which will be copied into the CPIO
-# archive. Currently, the filenames are hardcoded in gen_init_cpio,
-# but we need the information for the build as well, so it's duplicated
-# here.
+ifdef CONFIG_INITRAMFS_ROOT_UID
+gen_initramfs_args += -u $(CONFIG_INITRAMFS_ROOT_UID)
+endif
+
+ifdef CONFIG_INITRAMFS_ROOT_GID
+gen_initramfs_args += -g $(CONFIG_INITRAMFS_ROOT_GID)
+endif
-# Commented out for now
-# initramfs-y := $(obj)/root/hello
+# The $(shell echo $(CONFIG_INITRAMFS_SOURCE)) is to remove the
+# gratuitous begin and end quotes from the Kconfig string type.
+# Internal, escaped quotes in the Kconfig string will lose the
+# escape and become active quotes.
+quotefixed_initramfs_source := $(shell echo $(CONFIG_INITRAMFS_SOURCE))
filechk_initramfs_list = $(CONFIG_SHELL) \
- $(srctree)/scripts/gen_initramfs_list.sh $(CONFIG_INITRAMFS_SOURCE)
-
+ $(srctree)/scripts/gen_initramfs_list.sh $(gen_initramfs_args) $(quotefixed_initramfs_source)
+
$(obj)/initramfs_list: FORCE
$(call filechk,initramfs_list)
quiet_cmd_cpio = CPIO $@
cmd_cpio = ./$< $(obj)/initramfs_list > $@
+
+# Check if the INITRAMFS_SOURCE is a cpio archive
+ifneq (,$(findstring .cpio,$(quotefixed_initramfs_source)))
+
+# INITRAMFS_SOURCE has a cpio archive - verify that it's a single file
+ifneq (1,$(words $(quotefixed_initramfs_source)))
+$(error Only a single file may be specified in CONFIG_INITRAMFS_SOURCE (="$(quotefixed_initramfs_source)") when a cpio archive is directly specified.)
+endif
+# Now use the cpio archive directly
+initramfs_data_cpio = $(quotefixed_initramfs_source)
+targets += $(quotefixed_initramfs_source)
+
+else
+
+# INITRAMFS_SOURCE is not a cpio archive - create one
$(obj)/initramfs_data.cpio: $(obj)/gen_init_cpio \
$(initramfs-y) $(obj)/initramfs_list FORCE
$(call if_changed,cpio)
targets += initramfs_data.cpio
+initramfs_data_cpio = $(obj)/initramfs_data.cpio
+
+endif
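+
+# Illustration only (not used by the build itself): CONFIG_INITRAMFS_SOURCE can
+# point at a single pre-built cpio archive, which is then gzipped directly, e.g.
+#	CONFIG_INITRAMFS_SOURCE="/path/to/rootfs.cpio"
+# or at a source handed to gen_initramfs_list.sh, e.g. a directory to be
+# packed up:
+#	CONFIG_INITRAMFS_SOURCE="/path/to/rootfs_dir"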
+
-$(obj)/initramfs_data.cpio.gz: $(obj)/initramfs_data.cpio FORCE
+$(obj)/initramfs_data.cpio.gz: $(initramfs_data_cpio) FORCE
$(call if_changed,gzip)
targets += initramfs_data.cpio.gz
#include <ctype.h>
#include <limits.h>
+/*
+ * Original work by Jeff Garzik
+ *
+ * External file lists, symlink, pipe and fifo support by Thayne Harbaugh
+ */
+
#define xstr(s) #s
#define str(s) xstr(s)
static unsigned int offset;
static unsigned int ino = 721;
-struct file_type {
+struct file_handler {
const char *type;
int (*handler)(const char *line);
};
0, /* minor */
0, /* rmajor */
0, /* rminor */
- (unsigned)strlen(name) + 1, /* namesize */
+ (unsigned)strlen(name)+1, /* namesize */
0); /* chksum */
push_hdr(s);
push_rest(name);
}
}
-static int cpio_mkdir(const char *name, unsigned int mode,
+static int cpio_mkslink(const char *name, const char *target,
+ unsigned int mode, uid_t uid, gid_t gid)
+{
+ char s[256];
+ time_t mtime = time(NULL);
+
+ sprintf(s,"%s%08X%08X%08lX%08lX%08X%08lX"
+ "%08X%08X%08X%08X%08X%08X%08X",
+ "070701", /* magic */
+ ino++, /* ino */
+ S_IFLNK | mode, /* mode */
+ (long) uid, /* uid */
+ (long) gid, /* gid */
+ 1, /* nlink */
+ (long) mtime, /* mtime */
+ (unsigned)strlen(target)+1, /* filesize */
+ 3, /* major */
+ 1, /* minor */
+ 0, /* rmajor */
+ 0, /* rminor */
+ (unsigned)strlen(name) + 1,/* namesize */
+ 0); /* chksum */
+ push_hdr(s);
+ push_string(name);
+ push_pad();
+ push_string(target);
+ push_pad();
+ return 0;
+}
+
+static int cpio_mkslink_line(const char *line)
+{
+ char name[PATH_MAX + 1];
+ char target[PATH_MAX + 1];
+ unsigned int mode;
+ int uid;
+ int gid;
+ int rc = -1;
+
+ if (5 != sscanf(line, "%" str(PATH_MAX) "s %" str(PATH_MAX) "s %o %d %d", name, target, &mode, &uid, &gid)) {
+		fprintf(stderr, "Unrecognized slink format '%s'", line);
+ goto fail;
+ }
+ rc = cpio_mkslink(name, target, mode, uid, gid);
+ fail:
+ return rc;
+}
+
+static int cpio_mkgeneric(const char *name, unsigned int mode,
uid_t uid, gid_t gid)
{
char s[256];
"%08X%08X%08X%08X%08X%08X%08X",
"070701", /* magic */
ino++, /* ino */
- S_IFDIR | mode, /* mode */
+ mode, /* mode */
(long) uid, /* uid */
(long) gid, /* gid */
2, /* nlink */
return 0;
}
-static int cpio_mkdir_line(const char *line)
+enum generic_types {
+ GT_DIR,
+ GT_PIPE,
+ GT_SOCK
+};
+
+struct generic_type {
+ const char *type;
+ mode_t mode;
+};
+
+static struct generic_type generic_type_table[] = {
+ [GT_DIR] = {
+ .type = "dir",
+ .mode = S_IFDIR
+ },
+ [GT_PIPE] = {
+ .type = "pipe",
+ .mode = S_IFIFO
+ },
+ [GT_SOCK] = {
+ .type = "sock",
+ .mode = S_IFSOCK
+ }
+};
+
+static int cpio_mkgeneric_line(const char *line, enum generic_types gt)
{
char name[PATH_MAX + 1];
unsigned int mode;
int rc = -1;
if (4 != sscanf(line, "%" str(PATH_MAX) "s %o %d %d", name, &mode, &uid, &gid)) {
- fprintf(stderr, "Unrecognized dir format '%s'", line);
+		fprintf(stderr, "Unrecognized %s format '%s'",
+			generic_type_table[gt].type, line);
goto fail;
}
- rc = cpio_mkdir(name, mode, uid, gid);
+ mode |= generic_type_table[gt].mode;
+ rc = cpio_mkgeneric(name, mode, uid, gid);
fail:
return rc;
}
+static int cpio_mkdir_line(const char *line)
+{
+ return cpio_mkgeneric_line(line, GT_DIR);
+}
+
+static int cpio_mkpipe_line(const char *line)
+{
+ return cpio_mkgeneric_line(line, GT_PIPE);
+}
+
+static int cpio_mksock_line(const char *line)
+{
+ return cpio_mkgeneric_line(line, GT_SOCK);
+}
+
static int cpio_mknod(const char *name, unsigned int mode,
uid_t uid, gid_t gid, char dev_type,
unsigned int maj, unsigned int min)
struct stat buf;
int file = -1;
int retval;
- int i;
int rc = -1;
mode |= S_IFREG;
push_string(name);
push_pad();
- for (i = 0; i < buf.st_size; ++i)
- fputc(filebuf[i], stdout);
+ fwrite(filebuf, buf.st_size, 1, stdout);
offset += buf.st_size;
push_pad();
rc = 0;
"describe the files to be included in the initramfs archive:\n"
"\n"
"# a comment\n"
- "file <name> <location> <mode> <uid> <gid> \n"
+ "file <name> <location> <mode> <uid> <gid>\n"
"dir <name> <mode> <uid> <gid>\n"
"nod <name> <mode> <uid> <gid> <dev_type> <maj> <min>\n"
+ "slink <name> <target> <mode> <uid> <gid>\n"
+ "pipe <name> <mode> <uid> <gid>\n"
+ "sock <name> <mode> <uid> <gid>\n"
"\n"
- "<name> name of the file/dir/nod in the archive\n"
+ "<name> name of the file/dir/nod/etc in the archive\n"
"<location> location of the file in the current filesystem\n"
+ "<target> link target\n"
"<mode> mode/permissions of the file\n"
"<uid> user id (0=root)\n"
"<gid> group id (0=root)\n"
prog);
}
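+
+/* Illustration only, a hypothetical input file for this program: a description
+ * file using the syntax documented in usage() above, including the newly added
+ * slink, pipe and sock entries, might look like:
+ *
+ *	# initramfs file list
+ *	dir /dev 0755 0 0
+ *	nod /dev/console 0600 0 0 c 5 1
+ *	dir /bin 0755 0 0
+ *	file /bin/busybox initramfs/busybox 0755 0 0
+ *	slink /bin/sh busybox 0777 0 0
+ *	pipe /tmp/initpipe 0600 0 0
+ *	sock /tmp/initsock 0600 0 0
+ */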
-struct file_type file_type_table[] = {
+struct file_handler file_handler_table[] = {
{
.type = "file",
.handler = cpio_mkfile_line,
}, {
.type = "dir",
.handler = cpio_mkdir_line,
+ }, {
+ .type = "slink",
+ .handler = cpio_mkslink_line,
+ }, {
+ .type = "pipe",
+ .handler = cpio_mkpipe_line,
+ }, {
+ .type = "sock",
+ .handler = cpio_mksock_line,
}, {
.type = NULL,
.handler = NULL,
ec = -1;
}
- for (type_idx = 0; file_type_table[type_idx].type; type_idx++) {
+ for (type_idx = 0; file_handler_table[type_idx].type; type_idx++) {
int rc;
- if (! strcmp(line, file_type_table[type_idx].type)) {
- if ((rc = file_type_table[type_idx].handler(args))) {
+ if (! strcmp(line, file_handler_table[type_idx].type)) {
+ if ((rc = file_handler_table[type_idx].handler(args))) {
ec = rc;
fprintf(stderr, " line %d\n", line_nr);
}
}
}
- if (NULL == file_type_table[type_idx].type) {
+ if (NULL == file_handler_table[type_idx].type) {
fprintf(stderr, "unknown file type line %d: '%s'\n",
line_nr, line);
}