N: Jeff Garzik
E: jgarzik@pobox.com
+N: Kumar Gala
+E: kumar.gala@freescale.com
+D: Embedded PowerPC 6xx/7xx/74xx/82xx/85xx support
+S: Austin, Texas 78729
+S: USA
+
N: Jacques Gelinas
E: jacques@solucorp.qc.ca
D: Author of the Umsdos file system
D: SRM environment driver (for Alpha systems)
P: 1024D/8399E1BB 250D 3BCF 7127 0D8C A444 A961 1DBD 5E75 8399 E1BB
+N: Thomas Gleixner
+E: tglx@linutronix.de
+D: NAND flash hardware support, JFFS2 on NAND flash
+
N: Richard E. Gooch
E: rgooch@atnf.csiro.au
D: parent process death signal to children
D: Maintainer of the Linux Bluetooth Subsystem
D: Author and maintainer of the various Bluetooth HCI drivers
D: Author and maintainer of the CAPI message transport protocol driver
+D: Author and maintainer of the Bluetooth HID protocol driver
D: Various other Bluetooth related patches, cleanups and fixes
S: Germany
N: Eberhard Moenkeberg
E: emoenke@gwdg.de
D: CDROM driver "sbpcd" (Matsushita/Panasonic/Soundblaster)
-S: Reinholdstrasse 14
-S: D-37083 Goettingen
+S: Ruhstrathoehe 2 b.
+S: D-37085 Goettingen
S: Germany
N: Thomas Molina
D: sonypi, meye drivers, mct_u232 usb serial hacks
S: Paris, France
+N: Matt Porter
+E: mporter@kernel.crashing.org
+D: Motorola PowerPC PReP support
+D: cPCI PowerPC support
+D: Embedded PowerPC 4xx/6xx/7xx/74xx support
+S: Chandler, Arizona 85249
+S: USA
+
N: Frederic Potter
E: fpotter@cirpack.com
D: Some PCI kernel support
N: Luca Risolia
E: luca.risolia@studio.unibo.it
+P: 1024D/FCE635A4 88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4
D: V4L driver for W996[87]CF JPEG USB Dual Mode Camera Chips
+D: V4L2 driver for SN9C10[12] PC Camera Controllers
S: Via Liberta' 41/A
S: Osio Sotto, 24046, Bergamo
S: Italy
N: Marcelo W. Tosatti
-E: marcelo@conectiva.com.br
-W: http://bazar.conectiva.com.br/~marcelo/
-D: Miscellaneous kernel hacker (mostly VM/MM work)
-S: Conectiva S.A.
-S: R. Tocantins, 89 - Cristo Rei
-S: 80050-430 - Curitiba - Paraná
+E: marcelo.tosatti@cyclades.com
+D: Miscellaneous kernel hacker
+D: v2.4 kernel maintainer
+D: Current pc300/cyclades maintainer
+S: Cyclades Corporation
+S: Av Cristovao Colombo, 462. Floresta.
+S: Porto Alegre
S: Brazil
N: Stefan Traby
D: EISA/sysfs subsystem
S: France
+N: Luiz Fernando N. Capitulino
+E: lcapitulino@terra.com.br
+E: lcapitulino@prefeitura.sp.gov.br
+W: http://www.telecentros.sp.gov.br
+D: Little fixes and a lot of janitorial work
+S: E-GOV Telecentros SP
+S: Brazil
+
# Don't add your name here, unless you really _are_ after Marc
# alphabetically. Leonard used to be very proud of being the
# last entry, and he'll get positively pissed if he can't even
Returns: 1 if successful and 0 if not
+u64
+dma_get_required_mask(struct device *dev)
+
+After setting the mask with dma_set_mask(), this API returns the
+actual mask (within that already set) that the platform
+requires to operate efficiently. Usually this means the returned mask
+is the minimum required to cover all of memory. Examining the
+required mask gives drivers with variable descriptor sizes the
+opportunity to use smaller descriptors as necessary.
+
+Requesting the required mask does not alter the current mask. If you
+wish to take advantage of it, you should issue another dma_set_mask()
+call to lower the mask again.
+
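The relationship between the top of memory and the required mask can be illustrated with a small user-space sketch (this shows only the mask arithmetic, not the kernel implementation; required_mask is a hypothetical helper):

```shell
#!/bin/sh
# Smallest all-ones mask covering a given highest physical address.
# Illustrates why, with e.g. 4GB of memory, dma_get_required_mask()
# would typically report a 32-bit mask.
required_mask() {
	top=$1
	mask=0
	while [ "$mask" -lt "$top" ]; do
		mask=$(( (mask << 1) | 1 ))
	done
	echo "$mask"
}

required_mask 4294967295   # highest address with 4GB of memory -> 4294967295
```

A driver with variable descriptor sizes could compare this value against its mask to decide whether smaller descriptors suffice.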
Part Id - Streaming DMA mappings
--------------------------------
This document is intended to explain how to submit device drivers to the
various kernel trees. Note that if you are interested in video card drivers
-you should probably talk to XFree86 (http://www.xfree86.org) instead.
+you should probably talk to XFree86 (http://www.xfree86.org/) and/or X.Org
+(http://x.org/) instead.
Also read the Documentation/SubmittingPatches document.
<test> is the thing you're trying to measure.
Make sure you have the correct System.map / vmlinux referenced!
-IMHO it's easier to use "make install" for linux and hack /sbin/installkernel
-to copy config files, system.map, vmlinux to /boot.
+
+It is probably easiest to use "make install" for linux and hack
+/sbin/installkernel to copy vmlinux to /boot, in addition to vmlinuz,
+config, System.map, which are usually installed by default.
Readprofile
-----------
-You need a fixed readprofile command for 2.5 ... either get hold of
-a current version from:
+A recent readprofile command is needed for 2.6, such as found in util-linux
+2.12a, which can be downloaded from:
+
http://www.kernel.org/pub/linux/utils/util-linux/
-or get readprofile binary fixed for 2.5 / akpm's 2.5 patch from
-ftp://ftp.kernel.org/pub/linux/kernel/people/mbligh/tools/readprofile/
+
+Most distributions already ship it.
Add "profile=2" to the kernel command line.
Oprofile
--------
-get source (I use 0.5) from http://oprofile.sourceforge.net/
-add "idle=poll" to the kernel command line
+Get the source (I use 0.8) from http://oprofile.sourceforge.net/
+and add "idle=poll" to the kernel command line
Configure with CONFIG_PROFILING=y and CONFIG_OPROFILE=y & reboot on new kernel
./configure --with-kernel-support
make install
-One time setup (pick appropriate one for your CPU):
-P3 opcontrol --setup --vmlinux=/boot/vmlinux \
- --ctr0-event=CPU_CLK_UNHALTED --ctr0-count=100000
-Athlon/x86-64 opcontrol --setup --vmlinux=/boot/vmlinux \
- --ctr0-event=RETIRED_INSNS --ctr0-count=100000
-P4 opcontrol --setup --vmlinux=/boot/vmlinux \
- --ctr0-event=GLOBAL_POWER_EVENTS \
- --ctr0-unit-mask=1 --ctr0-count=100000
+For superior results, be sure to enable the local APIC. If opreport sees
+a 0Hz CPU, APIC was not on. Be aware that idle=poll may mean a performance
+penalty.
+
+One time setup:
+ opcontrol --setup --vmlinux=/boot/vmlinux
-start daemon opcontrol --start-daemon
clear opcontrol --reset
start opcontrol --start
<test>
stop opcontrol --stop
-dump output oprofpp -dl -i /boot/vmlinux > output_file
+dump output opreport > output_file
+
+To only report on the kernel, run opreport /boot/vmlinux > output_file
+
+A reset is needed to clear old statistics, which survive a reboot.
mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
To actually register a new binary type, you have to set up a string looking like
-:name:type:offset:magic:mask:interpreter: (where you can choose the ':' upon
+:name:type:offset:magic:mask:interpreter:flags (where you can choose the ':' to suit
your needs) and echo it to /proc/sys/fs/binfmt_misc/register.
Here is what the fields mean:
- 'name' is an identifier string. A new /proc file will be created with this
The mask is anded with the byte sequence of the file.
- 'interpreter' is the program that should be invoked with the binary as first
argument (specify the full path)
+ - 'flags' is an optional field that controls several aspects of the invocation
+   of the interpreter. It is a string of capital letters, each controlling a certain
+ aspect. The following flags are supported -
+ 'P' - preserve-argv[0]. Legacy behavior of binfmt_misc is to overwrite the
+ original argv[0] with the full path to the binary. When this flag is
+ included, binfmt_misc will add an argument to the argument vector for
+ this purpose, thus preserving the original argv[0].
+ 'O' - open-binary. Legacy behavior of binfmt_misc is to pass the full path
+ of the binary to the interpreter as an argument. When this flag is
+ included, binfmt_misc will open the file for reading and pass its
+ descriptor as an argument, instead of the full path, thus allowing
+ the interpreter to execute non-readable binaries. This feature should
+ be used with care - the interpreter has to be trusted not to emit
+ the contents of the non-readable binary.
+ 'C' - credentials. Currently, the behavior of binfmt_misc is to calculate
+ the credentials and security token of the new process according to
+ the interpreter. When this flag is included, these attributes are
+ calculated according to the binary. It also implies the 'O' flag.
+ This feature should be used with care as the interpreter
+ will run with root permissions when a setuid binary owned by root
+ is run with binfmt_misc.
+
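As a sketch, a register string using the 'P' flag might be built like this (the interpreter path and the "MZ" magic are illustrative, not requirements):

```shell
#!/bin/sh
# Build a binfmt_misc register string: recognize DOS/Windows binaries
# by their "MZ" magic and hand them to wine, preserving argv[0] ('P').
# The path /usr/local/bin/wine is an example only.
reg=':DOSWin:M::MZ::/usr/local/bin/wine:P'
echo "$reg"
# As root, it would then be registered with:
#   echo "$reg" > /proc/sys/fs/binfmt_misc/register
```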
There are some restrictions:
- the whole register string may not exceed 255 characters
write a wrapper script for it. See Documentation/java.txt for an
example.
-Your interpreter should NOT look in the PATH for the filename; the
-kernel passes it the full filename to use. Using the PATH can cause
-unexpected behaviour and be a security hazard.
+Your interpreter should NOT look in the PATH for the filename; the kernel
+passes it the full filename (or the file descriptor) to use. Using $PATH can
+cause unexpected behaviour and can be a security hazard.
There is a web page about binfmt_misc at
translations for software managed TLB configurations.
The sparc64 port currently does this.
+7) void tlb_migrate_finish(struct mm_struct *mm)
+
+ This interface is called at the end of an explicit
+ process migration. This interface provides a hook
+ to allow a platform to update TLB or context-specific
+ information for the address space.
+
+ The ia64 sn2 platform is one example of a platform
+ that uses this interface.
+
+
Next, we have the cache flushing interfaces. In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:
225 = /dev/pps Pulse Per Second driver
226 = /dev/systrace Systrace device
227 = /dev/mcelog X86_64 Machine Check Exception driver
+ 228 = /dev/hpet HPET driver
240-254 Reserved for local use
255 Reserved for MISC_DYNAMIC_MINOR
1 = /dev/gpib1 Second GPIB bus
...
-160 block Carmel 8-port SATA Disks on First Controller
- 0 = /dev/carmel/0 SATA disk 0 whole disk
- 1 = /dev/carmel/0p1 SATA disk 0 partition 1
+160 block Promise SX8 8-port SATA Disks on First Controller
+ 0 = /dev/sx8/0 SATA disk 0 whole disk
+ 1 = /dev/sx8/0p1 SATA disk 0 partition 1
...
- 31 = /dev/carmel/0p31 SATA disk 0 partition 31
+ 31 = /dev/sx8/0p31 SATA disk 0 partition 31
- 32 = /dev/carmel/1 SATA disk 1 whole disk
- 64 = /dev/carmel/2 SATA disk 2 whole disk
+ 32 = /dev/sx8/1 SATA disk 1 whole disk
+ 64 = /dev/sx8/2 SATA disk 2 whole disk
...
- 224 = /dev/carmel/7 SATA disk 7 whole disk
+ 224 = /dev/sx8/7 SATA disk 7 whole disk
Partitions are handled in the same way as for IDE
disks (see major number 3) except that the limit on
17 = /dev/irlpt1 Second IrLPT device
...
-161 block Carmel 8-port SATA Disks on Second Controller
- 0 = /dev/carmel/8 SATA disk 8 whole disk
- 1 = /dev/carmel/8p1 SATA disk 8 partition 1
+161 block Promise SX8 8-port SATA Disks on Second Controller
+ 0 = /dev/sx8/8 SATA disk 8 whole disk
+ 1 = /dev/sx8/8p1 SATA disk 8 partition 1
...
- 31 = /dev/carmel/8p31 SATA disk 8 partition 31
+ 31 = /dev/sx8/8p31 SATA disk 8 partition 31
- 32 = /dev/carmel/9 SATA disk 9 whole disk
- 64 = /dev/carmel/10 SATA disk 10 whole disk
+ 32 = /dev/sx8/9 SATA disk 9 whole disk
+ 64 = /dev/sx8/10 SATA disk 10 whole disk
...
- 224 = /dev/carmel/15 SATA disk 15 whole disk
+ 224 = /dev/sx8/15 SATA disk 15 whole disk
Partitions are handled in the same way as for IDE
disks (see major number 3) except that the limit on
-Read/Write HPFS 2.05
-1998-2001, Mikulas Patocka
+Read/Write HPFS 2.09
+1998-2004, Mikulas Patocka
email: mikulas@artax.karlin.mff.cuni.cz
homepage: http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi
2.05 Fixed crash when got mount parameters without =
Fixed crash when allocation of anode failed due to full disk
Fixed some crashes when block io or inode allocation failed
+2.06	Fixed some crashes on corrupted disk structures
+	Better allocation strategy
+	Reschedule points added so that it doesn't lock the CPU for a long time
+ It should work in read-only mode on Warp Server
+2.07 More fixes for Warp Server. Now it really works
+2.08 Creating new files is not so slow on large disks
+ An attempt to sync deleted file does not generate filesystem error
+2.09	Fixed error on extremely fragmented files
vim: set textwidth=80:
Note, a technical ChangeLog aimed at kernel hackers is in fs/ntfs/ChangeLog.
+2.1.15:
+ - Invalidate quotas when (re)mounting read-write.
+	NOTE: This now only leaves user space journalling on the side. (See
+ note for version 2.1.13, below.)
2.1.14:
- Fix an NFSd caused deadlock reported by several users.
2.1.13:
devices Available devices (block and character)
dma Used DMS channels
filesystems Supported filesystems
- driver Various drivers grouped here, currently rtc (2.4)
+ driver Various drivers grouped here, currently rtc (2.4)
execdomains Execdomains, related to security (2.4)
fb Frame Buffer devices (2.4)
fs File system parameters, currently nfs/exports (2.4)
The files in this directory can be used to tune the operation of the virtual
memory (VM) subsystem of the Linux kernel.
+vfs_cache_pressure
+------------------
+
+Controls the tendency of the kernel to reclaim the memory which is used for
+caching of directory and inode objects.
+
+At the default value of vfs_cache_pressure=100 the kernel will attempt to
+reclaim dentries and inodes at a "fair" rate with respect to pagecache and
+swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
+to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100
+causes the kernel to prefer to reclaim dentries and inodes.
+
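For example, to make the kernel reclaim dentries and inodes more aggressively across reboots, a sysctl configuration fragment could be used (the value 200 is purely illustrative):

```
# /etc/sysctl.conf fragment (illustrative value)
vm.vfs_cache_pressure = 200
```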
dirty_background_ratio
----------------------
and thrash the system to death, so large and/or important servers will want to
set this value to 0.
-nr_hugepages and hugetlb_shm_group
-----------------------------------
-
-nr_hugepages configures number of hugetlb page reserved for the system.
-
-hugetlb_shm_group contains group id that is allowed to create SysV shared
-memory segment using hugetlb page.
-
2.5 /proc/sys/dev - Device specific parameters
----------------------------------------------
Writing to this file results in a flush of the routing cache.
-gc_elastic, gc_interval, gc_min_interval, gc_tresh, gc_timeout
+gc_elasticity, gc_interval, gc_min_interval, gc_timeout,
+gc_thresh, gc_thresh1, gc_thresh2, gc_thresh3
--------------------------------------------------------------
Values to control the frequency and behavior of the garbage collection
command to write value into these files, thereby changing the default settings
of the kernel.
------------------------------------------------------------------------------
+
+
+
+
+
+
+
----------------------------------------------------------------------
umask=### -- The permission mask (for files and directories, see umask(1)).
The default is the umask of current process.
+
dmask=### -- The permission mask for the directory.
The default is the umask of current process.
+
fmask=### -- The permission mask for files.
The default is the umask of current process.
-codepage=### -- Sets the codepage for converting to shortname characters
- on FAT and VFAT filesystems. By default, codepage 437
- is used. This is the default for the U.S. and some
- European countries.
-iocharset=name -- Character set to use for converting between 8 bit characters
- and 16 bit Unicode characters. Long filenames are stored on
- disk in Unicode format, but Unix for the most part doesn't
- know how to deal with Unicode. There is also an option of
- doing UTF8 translations with the utf8 option.
+
+codepage=### -- Sets the codepage number for converting to shortname
+ characters on FAT filesystem.
+ By default, FAT_DEFAULT_CODEPAGE setting is used.
+
+iocharset=name -- Character set to use for converting between the
+		encoding used for user visible filenames and 16 bit
+ Unicode characters. Long filenames are stored on disk
+ in Unicode format, but Unix for the most part doesn't
+ know how to deal with Unicode.
+ By default, FAT_DEFAULT_IOCHARSET setting is used.
+
+ There is also an option of doing UTF8 translations
+ with the utf8 option.
+
+ NOTE: "iocharset=utf8" is not recommended. If unsure,
+ you should consider the following option instead.
+
utf8=<bool> -- UTF8 is the filesystem safe version of Unicode that
is used by the console. It can be be enabled for the
filesystem with this option. If 'uni_xlate' gets set,
UTF8 gets disabled.
+
uni_xlate=<bool> -- Translate unhandled Unicode characters to special
escaped sequences. This would let you backup and
restore filenames that are created with any Unicode
illegal on the vfat filesystem. The escape sequence
that gets used is ':' and the four digits of hexadecimal
unicode.
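The escape form described above (':' followed by the four hexadecimal digits of the code point) can be reproduced with printf; for example, for U+2260 (the "not equal" sign):

```shell
#!/bin/sh
# Encode a Unicode code point the way uni_xlate escapes it:
# a ':' followed by the four hex digits of the code point.
escape_unicode() {
	printf ':%04x\n' "$1"
}

escape_unicode 0x2260    # prints :2260
```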
+
nonumtail=<bool> -- When creating 8.3 aliases, normally the alias will
end in '~1' or tilde followed by some number. If this
option is set, then if the filename is
be the short alias instead of 'longfi~1.txt'.
quiet -- Stops printing certain warning messages.
+
check=s|r|n -- Case sensitivity checking setting.
s: strict, case sensitive
r: relaxed, case insensitive
return;
}
- fd = open(argv[0], O_RDWR);
+ fd = open(argv[0], O_RDONLY);
if (fd < 0)
fprintf(stderr, "hpet_open_close: open failed\n");
else
freq = atoi(argv[1]);
iterations = atoi(argv[2]);
- fd = open(argv[0], O_RDWR);
+ fd = open(argv[0], O_RDONLY);
if (fd < 0) {
fprintf(stderr, "hpet_poll: open of %s failed\n", argv[0]);
goto out;
}
- fd = open(argv[0], O_RDWR);
+ fd = open(argv[0], O_RDONLY);
if (fd < 0) {
fprintf(stderr, "hpet_fasync: failed to open %s\n", argv[0]);
0x1e0 unsigned long ALT_MEM_K, alternative mem check, in Kb
0x1e8 char number of entries in E820MAP (below)
0x1e9 unsigned char number of entries in EDDBUF (below)
+0x1ea unsigned char number of entries in EDD_MBR_SIG_BUFFER (below)
0x1f1 char size of setup.S, number of sectors
0x1f2 unsigned short MOUNT_ROOT_RDONLY (if !=0)
0x1f4 unsigned short size of compressed kernel-part in the
0x21c unsigned long INITRD_SIZE, size in bytes of ramdisk image
0x220 4 bytes (setup.S)
0x224 unsigned short setup.S heap end pointer
-0x2cc 4 bytes DISK80_SIG_BUFFER (setup.S)
+0x290 - 0x2cf EDD_MBR_SIG_BUFFER (edd.S)
0x2d0 - 0x600 E820MAP
-0x600 - 0x7ff EDDBUF (setup.S) for disk signature read sector
-0x600 - 0x7eb EDDBUF (setup.S) for edd data
+0x600 - 0x7ff EDDBUF (edd.S) for disk signature read sector
+0x600 - 0x7eb EDDBUF (edd.S) for edd data
<mailto:michael.klein@puffin.lb.shuttle.de>
0xDD 00-3F ZFCP device driver see drivers/s390/scsi/
<mailto:aherrman@de.ibm.com>
+0xF3 00-3F video/sisfb.h sisfb (in development)
+ <mailto:thomas@winischhofer.net>
aic79xx= [HW,SCSI]
See Documentation/scsi/aic79xx.txt.
- allowdma0 [ISAPNP]
-
AM53C974= [HW,SCSI]
Format: <host-scsi-id>,<target-scsi-id>,<max-rate>,<max-offset>
See also header of drivers/scsi/AM53C974.c.
noexec [IA-64]
+ noexec [i386]
+ noexec=on: enable non-executable mappings (default)
+	noexec=off: disable non-executable mappings
+
nofxsr [BUGS=IA-32]
nohighio [BUGS=IA-32] Disable highmem block I/O.
The short story
---------------
-If you just want to use it, run the laptop_mode control script (which is included
-at the end of this document) as follows:
+To use laptop mode, you don't need to set any kernel configuration options
+or anything. You simply need to run the laptop_mode control script (which
+is included in this document) as follows:
# laptop_mode start
The details
-----------
-Laptop-mode is controlled by the flag /proc/sys/vm/laptop_mode. When this
-flag is set, any physical disk read operation (that might have caused the
-hard disk to spin up) causes Linux to flush all dirty blocks. The result
-of this is that after a disk has spun down, it will not be spun up anymore
-to write dirty blocks, because those blocks had already been written
-immediately after the most recent read operation
+Laptop-mode is controlled by the flag /proc/sys/vm/laptop_mode. This flag is
+present for all kernels that have the laptop mode patch, regardless of any
+configuration options. When the flag is set, any physical disk read operation
+(that might have caused the hard disk to spin up) causes Linux to flush all dirty
+blocks. The result of this is that after a disk has spun down, it will not be spun
+up anymore to write dirty blocks, because those blocks had already been written
+immediately after the most recent read operation.
To increase the effectiveness of the laptop_mode strategy, the laptop_mode
control script increases dirty_expire_centisecs and dirty_writeback_centisecs in
Please note that this control script works for the Linux 2.4 and 2.6 series.
--------------------CONTROL SCRIPT BEGIN------------------------------------------
-#! /bin/sh
+#!/bin/bash
# start or stop laptop_mode, best run by a power management daemon when
# ac gets connected/disconnected from a laptop
# Bart Samwel
# Micha Feigin
# Andrew Morton
+# Herve Eychenne
# Dax Kelson
#
# Original Linux 2.4 version by: Jens Axboe
+#############################################################################
+
+# Age time, in seconds. Should be put into a sysconfig file
+MAX_AGE=600
+
+# Read-ahead, in kilobytes
+READAHEAD=4096
+
+# Shall we remount journaled fs. with appropriate commit interval? (1=yes)
+DO_REMOUNTS=1
+
+# And shall we add the "noatime" option to that as well? (1=yes)
+DO_REMOUNT_NOATIME=1
+
+# Dirty synchronous ratio. At this percentage of dirty pages the process which
+# calls write() does its own writeback
+DIRTY_RATIO=40
+
+#
+# Allowed dirty background ratio, in percent. Once DIRTY_RATIO has been
+# exceeded, the kernel will wake pdflush which will then reduce the amount
+# of dirty memory to dirty_background_ratio. Set this nice and low, so once
+# some writeout has commenced, we do a lot of it.
+#
+DIRTY_BACKGROUND_RATIO=5
+
+# kernel default dirty buffer age
+DEF_AGE=30
+DEF_UPDATE=5
+DEF_DIRTY_BACKGROUND_RATIO=10
+DEF_DIRTY_RATIO=40
+DEF_XFS_AGE_BUFFER=15
+DEF_XFS_SYNC_INTERVAL=30
+DEF_XFS_BUFD_INTERVAL=1
+
+# This must be adjusted manually to the value of HZ in the running kernel
+# on 2.4, until the XFS people change their 2.4 external interfaces to work in
+# centisecs. This can be automated, but it's a work in progress that still needs
+# some fixes. On 2.6 kernels, XFS uses USER_HZ instead of HZ for external
+# interfaces, and that is currently always set to 100. So you don't need to
+# change this on 2.6.
+XFS_HZ=100
+
+#############################################################################
+
+KLEVEL="$(uname -r |
+ {
+ IFS='.' read a b c
+ echo $a.$b
+ }
+)"
+case "$KLEVEL" in
+ "2.4"|"2.6")
+ ;;
+ *)
+ echo "Unhandled kernel version: $KLEVEL ('uname -r' = '$(uname -r)')" >&2
+ exit 1
+ ;;
+esac
+
+if [ ! -e /proc/sys/vm/laptop_mode ] ; then
+ echo "Kernel is not patched with laptop_mode patch." >&2
+ exit 1
+fi
+
+if [ ! -w /proc/sys/vm/laptop_mode ] ; then
+ echo "You do not have enough privileges to enable laptop_mode." >&2
+ exit 1
+fi
+
# Remove an option (the first parameter) of the form option=<number> from
# a mount options string (the rest of the parameters).
parse_mount_opts () {
OPT="$1"
shift
- echo "$*" | \
- sed 's/.*/,&,/' | \
- sed 's/,'"$OPT"'=[0-9]*,/,/g' | \
- sed 's/,,*/,/g' | \
- sed 's/^,//' | \
- sed 's/,$//' | \
- cat -
+ echo ",$*," | sed \
+ -e 's/,'"$OPT"'=[0-9]*,/,/g' \
+ -e 's/,,*/,/g' \
+ -e 's/^,//' \
+ -e 's/,$//'
}
# Remove an option (the first parameter) without any arguments from
parse_nonumber_mount_opts () {
OPT="$1"
shift
- echo "$*" | \
- sed 's/.*/,&,/' | \
- sed 's/,'"$OPT"',/,/g' | \
- sed 's/,,*/,/g' | \
- sed 's/^,//' | \
- sed 's/,$//' | \
- cat -
+ echo ",$*," | sed \
+ -e 's/,'"$OPT"',/,/g' \
+ -e 's/,,*/,/g' \
+ -e 's/^,//' \
+ -e 's/,$//'
}
# Find out the state of a yes/no option (e.g. "atime"/"noatime") in
# If fstab contains, say, "rw" for this filesystem, then the result
# will be "defaults,atime".
parse_yesno_opts_wfstab () {
- L_DEV=$1
- shift
- OPT=$1
- shift
- DEF_OPT=$1
- shift
+ L_DEV="$1"
+ OPT="$2"
+ DEF_OPT="$3"
+ shift 3
L_OPTS="$*"
PARSEDOPTS1="$(parse_nonumber_mount_opts $OPT $L_OPTS)"
PARSEDOPTS1="$(parse_nonumber_mount_opts no$OPT $PARSEDOPTS1)"
# Watch for a default atime in fstab
- FSTAB_OPTS="$(cat /etc/fstab | sed 's/ / /g' | grep ^\ *"$L_DEV " | awk '{ print $4 }')"
- if [ -z "$(echo "$FSTAB_OPTS" | grep "$OPT")" ] ; then
- # option not specified in fstab -- choose the default.
- echo "$PARSEDOPTS1,$DEF_OPT"
- else
+ FSTAB_OPTS="$(awk '$1 == "'$L_DEV'" { print $4 }' /etc/fstab)"
+ if echo "$FSTAB_OPTS" | grep "$OPT" > /dev/null ; then
# option specified in fstab: extract the value and use it
- if [ -z "$(echo "$FSTAB_OPTS" | grep "no$OPT")" ] ; then
+ if echo "$FSTAB_OPTS" | grep "no$OPT" > /dev/null ; then
+ echo "$PARSEDOPTS1,no$OPT"
+ else
# no$OPT not found -- so we must have $OPT.
echo "$PARSEDOPTS1,$OPT"
- else
- echo "$PARSEDOPTS1,no$OPT"
fi
+ else
+ # option not specified in fstab -- choose the default.
+ echo "$PARSEDOPTS1,$DEF_OPT"
fi
}
# If fstab contains, say, "commit=3,rw" for this filesystem, then the
# result will be "rw,commit=3".
parse_mount_opts_wfstab () {
- L_DEV=$1
- shift
- OPT=$1
- shift
+ L_DEV="$1"
+ OPT="$2"
+ shift 2
L_OPTS="$*"
-
PARSEDOPTS1="$(parse_mount_opts $OPT $L_OPTS)"
# Watch for a default commit in fstab
- FSTAB_OPTS="$(cat /etc/fstab | sed 's/ / /g' | grep ^\ *"$L_DEV " | awk '{ print $4 }')"
- if [ -z "$(echo "$FSTAB_OPTS" | grep "$OPT=")" ] ; then
- # option not specified in fstab: set it to 0
- echo "$PARSEDOPTS1,$OPT=0"
- else
+ FSTAB_OPTS="$(awk '$1 == "'$L_DEV'" { print $4 }' /etc/fstab)"
+ if echo "$FSTAB_OPTS" | grep "$OPT=" > /dev/null ; then
# option specified in fstab: extract the value, and use it
echo -n "$PARSEDOPTS1,$OPT="
- echo "$FSTAB_OPTS" | \
- sed 's/.*/,&,/' | \
- sed 's/.*,'"$OPT"'=//' | \
- sed 's/,.*//' | \
- cat -
+ echo ",$FSTAB_OPTS," | sed \
+ -e 's/.*,'"$OPT"'=//' \
+ -e 's/,.*//'
+ else
+ # option not specified in fstab: set it to 0
+ echo "$PARSEDOPTS1,$OPT=0"
fi
}
-KLEVEL=$(
- uname -r |
- (
- IFS="." read a b c
- echo $a.$b
- )
- )
-case "$KLEVEL" in
- "2.4"|"2.6")
- true
- ;;
- *)
- echo "Unhandled kernel version: $KLEVEL ('uname -r' = '$(uname -r)')"
- exit 1
- ;;
-esac
-
-# Shall we remount journaled fs. with appropiate commit interval? (1=yes)
-DO_REMOUNTS=1
-
-# And shall we add the "noatime" option to that as well? (1=yes)
-DO_REMOUNT_NOATIME=1
-
-# age time, in seconds. should be put into a sysconfig file
-MAX_AGE=600
-
-# Dirty synchronous ratio. At this percentage of dirty pages the process which
-# calls write() does its own writeback
-DIRTY_RATIO=40
-
-#
-# Allowed dirty background ratio, in percent. Once DIRTY_RATIO has been
-# exceeded, the kernel will wake pdflush which will then reduce the amount
-# of dirty memory to dirty_background_ratio. Set this nice and low, so once
-# some writeout has commenced, we do a lot of it.
-#
-DIRTY_BACKGROUND_RATIO=5
-
-READAHEAD=4096 # kilobytes
-# kernel default dirty buffer age
-DEF_AGE=30
-DEF_UPDATE=5
-DEF_DIRTY_BACKGROUND_RATIO=10
-DEF_DIRTY_RATIO=40
-DEF_XFS_AGE_BUFFER=15
-DEF_XFS_SYNC_INTERVAL=30
-DEF_XFS_BUFD_INTERVAL=1
-
-# This must be adjusted manually to the value of HZ in the running kernel
-# on 2.4, until the XFS people change their 2.4 external interfaces to work in
-# centisecs. This can be automated, but it's a work in progress that still needs
-# some fixes. On 2.6 kernels, XFS uses USER_HZ instead of HZ for external
-# interfaces, and that is currently always set to 100. So you don't need to
-# change this on 2.6.
-XFS_HZ=100
-
-if [ ! -e /proc/sys/vm/laptop_mode ]; then
- echo "Kernel is not patched with laptop_mode patch."
- exit 1
-fi
-
-if [ ! -w /proc/sys/vm/laptop_mode ]; then
- echo "You do not have enough privileges to enable laptop_mode."
- exit 1
-fi
-
-if [ $DO_REMOUNT_NOATIME -eq 1 ]; then
+if [ $DO_REMOUNT_NOATIME -eq 1 ] ; then
NOATIME_OPT=",noatime"
fi
case "$KLEVEL" in
"2.4")
- echo "1" > /proc/sys/vm/laptop_mode
+ echo 1 > /proc/sys/vm/laptop_mode
echo "30 500 0 0 $AGE $AGE 60 20 0" > /proc/sys/vm/bdflush
;;
"2.6")
- echo "5" > /proc/sys/vm/laptop_mode
+ echo 5 > /proc/sys/vm/laptop_mode
echo "$AGE" > /proc/sys/vm/dirty_writeback_centisecs
echo "$AGE" > /proc/sys/vm/dirty_expire_centisecs
echo "$DIRTY_RATIO" > /proc/sys/vm/dirty_ratio
U_AGE=$((100*$DEF_UPDATE))
B_AGE=$((100*$DEF_AGE))
echo -n "Stopping laptop_mode"
- echo "0" > /proc/sys/vm/laptop_mode
- if [ -f /proc/sys/fs/xfs/age_buffer ] && [ ! -f /proc/sys/fs/xfs/lm_age_buffer ] ; then
+ echo 0 > /proc/sys/vm/laptop_mode
+ if [ -f /proc/sys/fs/xfs/age_buffer -a ! -f /proc/sys/fs/xfs/lm_age_buffer ] ; then
# These need to be restored, if there are no lm_*.
- echo "$(($XFS_HZ*$DEF_XFS_AGE_BUFFER))" > /proc/sys/fs/xfs/age_buffer
- echo "$(($XFS_HZ*$DEF_XFS_SYNC_INTERVAL))" > /proc/sys/fs/xfs/sync_interval
+ echo $(($XFS_HZ*$DEF_XFS_AGE_BUFFER)) > /proc/sys/fs/xfs/age_buffer
+ echo $(($XFS_HZ*$DEF_XFS_SYNC_INTERVAL)) > /proc/sys/fs/xfs/sync_interval
elif [ -f /proc/sys/fs/xfs/age_buffer_centisecs ] ; then
# These need to be restored as well.
- echo "$((100*$DEF_XFS_AGE_BUFFER))" > /proc/sys/fs/xfs/age_buffer_centisecs
- echo "$((100*$DEF_XFS_SYNC_INTERVAL))" > /proc/sys/fs/xfs/xfssyncd_centisecs
- echo "$((100*$DEF_XFS_BUFD_INTERVAL))" > /proc/sys/fs/xfs/xfsbufd_centisecs
+ echo $((100*$DEF_XFS_AGE_BUFFER)) > /proc/sys/fs/xfs/age_buffer_centisecs
+ echo $((100*$DEF_XFS_SYNC_INTERVAL)) > /proc/sys/fs/xfs/xfssyncd_centisecs
+ echo $((100*$DEF_XFS_BUFD_INTERVAL)) > /proc/sys/fs/xfs/xfsbufd_centisecs
fi
case "$KLEVEL" in
"2.4")
echo "$DEF_DIRTY_BACKGROUND_RATIO" > /proc/sys/vm/dirty_background_ratio
;;
esac
- if [ $DO_REMOUNTS -eq 1 ]; then
+ if [ $DO_REMOUNTS -eq 1 ] ; then
cat /etc/mtab | while read DEV MP FST OPTS DUMP PASS ; do
# Reset commit and atime options to defaults.
case "$FST" in
echo "."
;;
*)
- echo "Usage: $0 {start|stop}"
+        echo "Usage: $0 {start|stop}" >&2
+ exit 1
;;
esac
exit 0
-
--------------------CONTROL SCRIPT END--------------------------------------------
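The rewritten parse_mount_opts helper in the script can be exercised on its own; for example, stripping a commit=N option from a mount option string (the function is reproduced here verbatim for illustration):

```shell
#!/bin/sh
# Strip "commit=<number>" from a comma-separated mount option string,
# as the laptop_mode control script's parse_mount_opts does.
parse_mount_opts () {
	OPT="$1"
	shift
	echo ",$*," | sed \
		-e 's/,'"$OPT"'=[0-9]*,/,/g' \
		-e 's/,,*/,/g' \
		-e 's/^,//' \
		-e 's/,$//'
}

parse_mount_opts commit "rw,commit=3,noatime"   # prints rw,noatime
```

Wrapping the string in commas before the sed pass is what lets a single pattern handle the option at the start, middle, or end of the list.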
- information on the 3Com EtherLink Plus (3c505) driver.
6pack.txt
- info on the 6pack protocol, an alternative to KISS for AX.25
-8139too.txt
- - info on the 8139too driver for RTL-8139 based network cards.
Configurable
- info on some of the configurable network parameters
DLINK.txt
Disable Path MTU Discovery.
default FALSE
+min_pmtu - INTEGER
+ default 562 - minimum discovered Path MTU
+
+mtu_expires - INTEGER
+ Time, in seconds, that cached PMTU information is kept.
+
+min_adv_mss - INTEGER
+ The advertised MSS depends on the first hop route MTU, but will
+ never be lower than this setting.
+
IP Fragmentation:
ipfrag_high_thresh - INTEGER
connections.
Default: 7
+
+tcp_frto - BOOLEAN
+ Enables F-RTO, an enhanced recovery algorithm for TCP retransmission
+ timeouts. It is particularly beneficial in wireless environments
+ where packet loss is typically due to random radio interference
+ rather than intermediate router congestion.
+
+somaxconn - INTEGER
+ Limit of socket listen() backlog, known in userspace as SOMAXCONN.
+ Defaults to 128. See also tcp_max_syn_backlog for additional tuning
+ for TCP sockets.
+
+IP Variables:
+
ip_local_port_range - 2 INTEGERS
Defines the local port range that is used by TCP and UDP to
choose the local port. The first number is the first, the
The max value from conf/{all,interface}/arp_ignore is used
when ARP request is received on the {interface}
+app_solicit - INTEGER
+ The maximum number of probes to send to the user space ARP daemon
+ via netlink before dropping back to multicast probes (see
+ mcast_solicit). Defaults to 0.
+
+disable_policy - BOOLEAN
+ Disable IPSEC policy (SPD) for this interface
+
+disable_xfrm - BOOLEAN
+ Disable IPSEC encryption on this interface, whatever the policy
+
+
+
tag - INTEGER
Allows you to write a number, which can be used as required.
Default value is 0.
disabled if local forwarding is enabled.
autoconf - BOOLEAN
- Configure link-local addresses using L2 hardware addresses.
+ Autoconfigure addresses using Prefix Information in Router
+ Advertisements.
- Default: TRUE
+ Functional default: enabled if accept_ra is enabled.
+ disabled if accept_ra is disabled.
dad_transmits - INTEGER
The amount of Duplicate Address Detection probes to send.
Default: 1
+UNDOCUMENTED:
+
+dev_weight FIXME
+discovery_slots FIXME
+discovery_timeout FIXME
+fast_poll_increase FIXME
+ip6_queue_maxlen FIXME
+lap_keepalive_time FIXME
+lo_cong FIXME
+max_baud_rate FIXME
+max_dgram_qlen FIXME
+max_noreply_time FIXME
+max_tx_data_size FIXME
+max_tx_window FIXME
+min_tx_turn_time FIXME
+mod_cong FIXME
+no_cong FIXME
+no_cong_thresh FIXME
+slot_timeout FIXME
+warn_noreply_time FIXME
+
$Id: ip-sysctl.txt,v 1.20 2001/12/13 09:00:18 davem Exp $
5. After this two commands are defined:
A. "pg" to start generator and to get results.
B. "pgset" to change generator parameters. F.e.
- pgset "clone_skb 100" sets the number of coppies of the same packet
+ pgset "clone_skb 100" sets the number of copies of the same packet
will be sent before a new packet is allocated
pgset "clone_skb 0" use multiple SKBs for packet generation
pgset "pkt_size 9014" sets packet size to 9014
pgset "frags 5" packet will consist of 5 fragments
pgset "count 200000" sets number of packets to send, set to zero
- for continious sends untill explicitly
+ for continuous sends until explicitly
stopped.
pgset "ipg 5000" sets artificial gap inserted between packets
to 5000 nanoseconds
~~~~~~~~~~~~~~~~~~~
Before you do anything with the device you've found, you need to enable
it by calling pci_enable_device() which enables I/O and memory regions of
-the device, assigns missing resources if needed and wakes up the device
-if it was in suspended state. Please note that this function can fail.
+the device, allocates an IRQ if necessary, assigns missing resources if
+needed and wakes up the device if it was in suspended state. Please note
+that this function can fail.
If you want to use the device in bus mastering mode, call pci_set_master()
which enables the bus master bit in PCI_COMMAND register and also fixes
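A minimal probe routine using the two calls just described might look like the following sketch (an editor's illustration, not from the patch; the hypo_ names are hypothetical):

```c
/* Sketch of a PCI probe routine; device IDs and names are made up. */
static int hypo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int err;

	/* Enable I/O and memory regions, assign missing resources,
	 * wake the device from a suspended state. May fail. */
	err = pci_enable_device(pdev);
	if (err)
		return err;

	/* Only needed if the device does DMA as a bus master. */
	pci_set_master(pdev);

	return 0;
}
```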
dma 2
also there are a series of kernel parameters:
-allowdma0
pnp_reserve_irq=irq1[,irq2] ....
pnp_reserve_dma=dma1[,dma2] ....
pnp_reserve_io=io1,size1[,io2,size2] ....
+------------------+
| Bit | State |
+------------------+
-| 15 | D0 |
-| 14 | D1 |
+| 11 | D0 |
+| 12 | D1 |
| 13 | D2 |
-| 12 | D3hot |
-| 11 | D3cold |
+| 14 | D3hot |
+| 15 | D3cold |
+------------------+
A device can use this to enable wake events:
* ...you'd better find out how to get along
* without your data.
*
+ * If you change kernel command line between suspend and resume...
+ * ...prepare for nasty fsck or worse.
+ *
* (*) pm interface support is needed to make it safe.
You need to append resume=/dev/your_swap_partition to kernel command
patched X, and plain text console (no vesafb or radeonfb), see
http://www.doesi.gmxhome.de/linux/tm800s3/s3.html. (Acer TM 800)
-* radeon systems, where X can soft-boot your video card. You'll need
- patched X, and plain text console (no vesafb or radeonfb), see
- http://www.doesi.gmxhome.de/linux/tm800s3/s3.html. (Acer TM 800)
-
Now, if you pass acpi_sleep=something and it does not work with your
BIOS, you'll get a hard crash during resume. Be careful.
Summary:
bios_param - fetch head, sector, cylinder info for a disk
detect - detects HBAs this driver wants to control
+ eh_timed_out - notify the host that a command timer expired
eh_abort_handler - abort given command
eh_bus_reset_handler - issue SCSI bus reset
eh_device_reset_handler - issue SCSI device reset
int detect(struct scsi_host_template * shtp)
+/**
+ * eh_timed_out - The timer for the command has just fired
+ * @scp: identifies command timing out
+ *
+ * Returns:
+ *
+ * EH_HANDLED: I fixed the error, please complete the command
+ * EH_RESET_TIMER: I need more time, reset the timer and
+ * begin counting again
+ *      EH_NOT_HANDLED:         Begin normal error recovery
+ *
+ * Locks: None held
+ *
+ * Calling context: interrupt
+ *
+ * Notes: This is to give the LLD an opportunity to do local recovery.
+ * This recovery is limited to determining if the outstanding command
+ * will ever complete. You may not abort and restart the command from
+ * this callback.
+ *
+ * Optionally defined in: LLD
+ **/
+ int eh_timed_out(struct scsi_cmnd * scp)
+
+
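As a sketch of how an LLD might use the three return values above (an editor's illustration, not from the patch; the hypo_ helpers are hypothetical), note the callback only decides whether the command may still complete, it must not abort or restart it:

```c
/* Hypothetical LLD eh_timed_out callback; hypo_* names are made up. */
static int hypo_eh_timed_out(struct scsi_cmnd *scp)
{
	struct hypo_hba *hba = (struct hypo_hba *) scp->device->host->hostdata;

	/* Firmware is still working on this command: ask for more time. */
	if (hypo_tag_outstanding(hba, scp))
		return EH_RESET_TIMER;

	/* Command actually completed; tell the midlayer to finish it. */
	if (hypo_tag_done(hba, scp)) {
		scp->result = DID_OK << 16;
		return EH_HANDLED;
	}

	/* Otherwise begin normal error recovery. */
	return EH_NOT_HANDLED;
}
```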
/**
* eh_abort_handler - abort command associated with scp
* @scp: identifies command to be aborted
- add a new file include/asm-sh/vapor/io.h which contains prototypes for
any machine specific IO functions prefixed with the machine name, for
example vapor_inb. These will be needed when filling out the machine
- vector. In addition, a section is required which defines what to do when
- building a machine specific version. For example:
-
- #ifdef __WANT_IO_DEF
- #define inb vapor_inb
- ...
- #endif
+ vector.
This is the minimum that is required, however there are ample
opportunities to optimise this. In particular, by making the prototypes
==============================================================
dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
-dirty_writeback_centisecs:
+dirty_writeback_centisecs, vfs_cache_pressure:
See Documentation/filesystems/proc.txt
'b' - Will immediately reboot the system without syncing or unmounting
your disks.
+'c' - Intentionally crash the system without syncing or unmounting
+ your disks. This is most useful if the NETDUMP client package
+ has been installed.
+
'o' - Will shut your system off (if configured and supported).
's' - Will attempt to sync all mounted filesystems.
re'B'oot is good when you're unable to shut down. But you should also 'S'ync
and 'U'mount first.
+'C'rash immediately crashes your system. This is most useful if the machine
+has been configured as a NETDUMP client because an OOPS report is generated
+and a kernel crash dump is sent to the NETDUMP server.
+
'S'ync is great when your system is locked up, it allows you to sync your
disks and will certainly lessen the chance of data loss and fscking. Note
that the sync hasn't taken place until you see the "OK" and "Done" appear
-ESHUTDOWN The host controller has been disabled due to some
problem that could not be worked around.
+-EPERM Submission failed because urb->reject was set.
+
**************************************************************************
* Error codes returned by in urb->status *
-This file contains some additional information for the Philips webcams.
-E-mail: webcam@smcc.demon.nl Last updated: 2001-09-24
-
-The main webpage for the Philips driver is http://www.smcc.demon.nl/webcam/.
-It contains a lot of extra information, a FAQ, and the binary plugin
-'PWCX'. This plugin contains decompression routines that allow you to
-use higher image sizes and framerates; in addition the webcam uses less
-bandwidth on the USB bus (handy if you want to run more than 1 camera
-simultaneously). These routines fall under an NDA, and may therefor not be
-distributed as source; however, its use is completely optional.
+This file contains some additional information for the Philips and OEM webcams.
+E-mail: webcam@smcc.demon.nl Last updated: 2004-01-19
+Site: http://www.smcc.demon.nl/webcam/
+
+As of this moment, the following cameras are supported:
+ * Philips PCA645
+ * Philips PCA646
+ * Philips PCVC675
+ * Philips PCVC680
+ * Philips PCVC690
+ * Philips PCVC720/40
+ * Philips PCVC730
+ * Philips PCVC740
+ * Philips PCVC750
+ * Askey VC010
+ * Creative Labs Webcam 5
+ * Creative Labs Webcam Pro Ex
+ * Logitech QuickCam 3000 Pro
+ * Logitech QuickCam 4000 Pro
+ * Logitech QuickCam Notebook Pro
+ * Logitech QuickCam Zoom
+ * Logitech QuickCam Orbit
+ * Logitech QuickCam Sphere
+ * Samsung MPC-C10
+ * Samsung MPC-C30
+ * Sotec Afina Eye
+ * AME CU-001
+ * Visionite VCS-UM100
+ * Visionite VCS-UC300
+
+The main webpage for the Philips driver is at the address above. It contains
+a lot of extra information, a FAQ, and the binary plugin 'PWCX'. This plugin
+contains decompression routines that allow you to use higher image sizes and
+framerates; in addition the webcam uses less bandwidth on the USB bus (handy
+if you want to run more than 1 camera simultaneously). These routines fall
+under an NDA, and may therefore not be distributed as source; however, its use
+is completely optional.
You can build this code either into your kernel, or as a module. I recommend
the latter, since it makes troubleshooting a lot easier. The built-in
   Specifies the desired framerate; an integer in the range of 4-30.
fbufs
- This parameter specifies the number of internal buffers to use for storing
+   This parameter specifies the number of internal buffers to use for storing
frames from the cam. This will help if the process that reads images from
- the cam is a bit slow or momentarily busy. However, on slow machines it
+   the cam is a bit slow or momentarily busy. However, on slow machines it
only introduces lag, so choose carefully. The default is 3, which is
reasonable. You can set it between 2 and 5.
mbufs
- This is an integer between 1 and 4. It will tell the module the number of
+ This is an integer between 1 and 10. It will tell the module the number of
buffers to reserve for mmap(), VIDIOCCGMBUF, VIDIOCMCAPTURE and friends.
The default is 2, which is adequate for most applications (double
buffering).
slack when your program is behind. But you need a multi-threaded or
forked program to really take advantage of these buffers.
- The absolute maximum is 4, but don't set it too high! Every buffer takes
- up 1.22 MB of RAM, so unless you have a lot of memory setting this to
- something more than 2 is an absolute waste. This memory is only
+ The absolute maximum is 10, but don't set it too high! Every buffer takes
+ up 460 KB of RAM, so unless you have a lot of memory setting this to
+ something more than 4 is an absolute waste. This memory is only
allocated during open(), so nothing is wasted when the camera is not in
use.
introduce some unwanted artefacts. The default is 2, medium compression.
See the FAQ on the website for an overview of which modes require
compression.
-
- The compression parameter only applies to the Vesta & ToUCam cameras.
- The 645 and 646 have fixed compression parameters.
+
+ The compression parameter does not apply to the 645 and 646 cameras
+ and OEM models derived from those (only a few). Most cams honour this
+ parameter.
leds
   This setting takes 2 integers that define the on/off time for the LED
leds=0,0
- the LED never goes on, making it suitable for silent survaillance.
+ the LED never goes on, making it suitable for silent surveillance.
By default the camera's LED is on solid while in use, and turned off
when the camera is not used anymore.
- This parameter works only with the ToUCam range of cameras (730, 740,
- 750). For other cameras this command is silently ignored, and the LED
- cannot be controlled.
+ This parameter works only with the ToUCam range of cameras (720, 730, 740,
+ 750) and OEMs. For other cameras this command is silently ignored, and
+ the LED cannot be controlled.
+
+   Finally: this parameter does not take effect UNTIL the first time you
+ open the camera device. Until then, the LED remains on.
dev_hint
A long standing problem with USB devices is their dynamic nature: you
other cameras will get the first free
available slot (see below).
- dev_hint=645:1,680=2 The PCA645 camera will get /dev/video1,
+ dev_hint=645:1,680:2 The PCA645 camera will get /dev/video1,
and a PCVC680 /dev/video2.
dev_hint=645.0123:3,645.4567:0 The PCA645 camera with serialnumber
64 0x40 Show viewport and image sizes Off
+ 128 0x80 PWCX debugging Off
   For example, to trace the open() & read() functions, sum 8 + 4 = 12,
so you would supply trace=12 during insmod or modprobe. If
you want to turn the initialization and probing tracing off, set trace=0.
The default value for trace is 35 (0x23).
- Example:
+
+
+Example:
# modprobe pwc size=cif fps=15 power_save=1
size and fps only specify defaults when you open() the device; this is to
accommodate some tools that don't set the size. You can change these
settings after open() with the Video4Linux ioctl() calls. The default of
-defaults is QCIF size at 10 fps, BGR order.
+defaults is QCIF size at 10 fps.
The compression parameter is semiglobal; it sets the initial compression
preference for all cameras, but this parameter can be set per camera with
All parameters are optional.
-
============
Copyright (C) 2002-2004 by Luca Risolia <luca.risolia@studio.unibo.it>
+Winbond is a trademark of Winbond Electronics Corporation.
+This driver is not sponsored or developed by Winbond.
+
2. License
==========
3. Overview
===========
This driver supports the video streaming capabilities of the devices mounting
-Winbond W9967CF and Winbond W9968CF JPEG USB Dual Mode Camera Chips, when they
-are being commanded by USB. OV681 based cameras should be supported as well.
+Winbond W9967CF and Winbond W9968CF JPEG USB Dual Mode Camera Chips. OV681
+based cameras should be supported as well.
The driver is divided into two modules: the basic one, "w9968cf", is needed for
the supported devices to work; the second one, "w9968cf-vpp", is an optional
performance purposes. However it is always recommended to download and install
the latest and complete release of the driver, replacing the existing one, if
present: it will still be possible not to load the "w9968cf-vpp" module at
-all, if you ever want to.
+all, if you ever want to. Another important missing feature of the version in
+the official Linux 2.4 kernels is the writeable /proc filesystem interface.
The latest and full-featured version of the W996[87]CF driver can be found at:
http://go.lamarinapunto.com/ . Please refer to the documentation included in
disconnected from the host many times without turning off the computer, if
your system supports the hotplug facility.
-To change the default settings for each camera, many paramaters can be passed
+To change the default settings for each camera, many parameters can be passed
through command line when the module is loaded into memory.
-The driver relies on the Video4Linux, USB and I2C core modules of the official
-Linux kernels. It has been designed to run properly on SMP systems as well.
-At the moment, an additional module, "ovcamchip", is mandatory; it provides
-support for some OmniVision CMOS sensors connected to the W996[87]CF chips.
-
-The "ovcamchip" module is part of the OV511 driver, version 2.27, which can be
-downloaded from internet:
-http://alpha.dyndns.org/ov511/
-To know how to compile it, read the documentation included in the OV511
-package.
+The driver relies on the Video4Linux, USB and I2C core modules. It has been
+designed to run properly on SMP systems as well. An additional module,
+"ovcamchip", is mandatory; it provides support for some OmniVision image
+sensors connected to the W996[87]CF chips; if found in the system, the module
+will be automatically loaded by default (provided that the kernel has been
+compiled with the automatic module loading option).
4. Supported devices
====================
At the moment, known W996[87]CF and OV681 based devices are:
-- Aroma Digi Pen ADG-5000 Refurbished
-- AVerTV USB
-- Creative Labs Video Blaster WebCam Go
-- Creative Labs Video Blaster WebCam Go Plus
-- Die Lebon LDC-D35A Digital Kamera
-- Ezonics EZ-802 EZMega Cam
-- OPCOM Digi Pen VGA Dual Mode Pen Camera
+- Aroma Digi Pen VGA Dual Mode ADG-5000 (unknown image sensor)
+- AVerMedia AVerTV USB (SAA7111A, Philips FI1216Mk2 tuner, PT2313L audio chip)
+- Creative Labs Video Blaster WebCam Go (OmniVision OV7610 sensor)
+- Creative Labs Video Blaster WebCam Go Plus (OmniVision OV7620 sensor)
+- Lebon LDC-035A (unknown image sensor)
+- Ezonics EZ-802 EZMega Cam (OmniVision OV8610C sensor)
+- OmniVision OV8610-EDE (OmniVision OV8610 sensor)
+- OPCOM Digi Pen VGA Dual Mode Pen Camera (unknown image sensor)
+- Pretec Digi Pen-II (OmniVision OV7620 sensor)
+- Pretec DigiPen-480 (OmniVision OV8610 sensor)
If you know any other W996[87]CF or OV681 based cameras, please contact me.
The list above does NOT imply that all those devices work with this driver: up
-until now only webcams that have a CMOS sensor supported by the "ovcamchip"
+until now only webcams that have an image sensor supported by the "ovcamchip"
module work.
-For a list of supported CMOS sensors, please visit the author's homepage on
+For a list of supported image sensors, please visit the author's homepage on
this module: http://alpha.dyndns.org/ov511/
Possible external microcontrollers of those webcams are not supported: this
5. Module dependencies
======================
-For it to work properly, the driver needs kernel support for Video4Linux,
-USB and I2C, and a third-party module for the CMOS sensor.
+For it to work properly, the driver needs kernel support for Video4Linux, USB
+and I2C, and the "ovcamchip" module for the image sensor. Make sure you are not
+actually using any external "ovcamchip" module, given that the W996[87]CF
+driver depends on the version of the module present in the official kernels.
The following options of the kernel configuration file must be enabled and
corresponding modules must be compiled:
The I2C core module can be compiled statically in the kernel as well.
+ # OmniVision Camera Chip support
+ #
+ CONFIG_VIDEO_OVCAMCHIP=m
+
# USB support
#
CONFIG_USB=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
-Also, make sure "Enforce bandwidth allocation" is NOT enabled.
-
And finally:
# USB Multimedia devices
#
CONFIG_USB_W9968CF=m
-The last module we need is "ovcamchip.o". To obtain it, you have to download
-the OV511 package, version 2.27 - don't use other versions - and compile it
-according to its documentation.
-The package is available at http://alpha.dyndns.org/ov511/ .
-
6. Module loading
=================
[root@localhost home]# modprobe usbcore
[root@localhost home]# modprobe i2c-core
- [root@localhost ov511-x.xx]# insmod ./ovcamchip.ko
[root@localhost home]# modprobe w9968cf
-At this point the devices should be recognized: "dmesg" can be used to analyze
-kernel messages:
+At this point the pertinent devices should be recognized: "dmesg" can be used
+to analyze kernel messages:
[user@localhost home]$ dmesg
[root@locahost home]# modinfo w9968cf
-7. Module paramaters
+7. Module parameters
====================
-Module paramaters are listed below:
+Module parameters are listed below:
+-------------------------------------------------------------------------------
+Name: ovmod_load
+Type: bool
+Syntax: <0|1>
+Description: Automatic 'ovcamchip' module loading: 0 disabled, 1 enabled.
+ If enabled, 'insmod' searches for the required 'ovcamchip'
+ module in the system, according to its configuration, and
+                loads that module automatically. This action is performed as
+                soon as the 'w9968cf' module is loaded into memory.
+Default: 1
+Note: The kernel must be compiled with the CONFIG_KMOD option
+ enabled for the 'ovcamchip' module to be loaded and for
+ this parameter to be present.
-------------------------------------------------------------------------------
Name: vppmod_load
Type: bool
If enabled, every time an application attempts to open a
camera, 'insmod' searches for the video post-processing module
in the system and loads it automatically (if present).
- The 'w9968cf-vpp' module adds extra image manipulation
+ The optional 'w9968cf-vpp' module adds extra image manipulation
                capabilities to the 'w9968cf' module, like software up-scaling,
- colour conversions and video decoding.
+ colour conversions and video decompression for very high frame
+ rates.
Default: 1
+Note: The kernel must be compiled with the CONFIG_KMOD option
+ enabled for the 'w9968cf-vpp' module to be loaded and for
+ this parameter to be present.
-------------------------------------------------------------------------------
Name: simcams
Type: int
Description: Hardware double buffering: 0 disabled, 1 enabled.
It should be enabled if you want smooth video output: if you
obtain out of sync. video, disable it, or try to
- decrease the 'clockdiv' module paramater value.
+ decrease the 'clockdiv' module parameter value.
Default: 1 for every device.
-------------------------------------------------------------------------------
Name: clamping
Description: Video filter type.
0 none, 1 (1-2-1) 3-tap filter, 2 (2-3-6-3-2) 5-tap filter.
The filter is used to reduce noise and aliasing artifacts
- produced by the CCD or CMOS sensor.
+ produced by the CCD or CMOS image sensor.
Default: 0 for every device.
-------------------------------------------------------------------------------
Name: largeview
Disable it if you have a slow CPU or you don't have enough
memory.
Default: 0 for every device.
-Note: If 'w9968cf-vpp' is not loaded, this paramater is set to 0.
+Note: If 'w9968cf-vpp' is not present, this parameter is set to 0.
-------------------------------------------------------------------------------
Name: decompression
Type: int array (min = 0, max = 32)
YUV420P/YUV420 in any resolutions where width and height are
multiples of 16.
Default: 2 for every device.
-Note: If 'w9968cf-vpp' is not loaded, forcing decompression is not
- allowed; in this case this paramater is set to 2.
+Note: If 'w9968cf-vpp' is not present, forcing decompression is not
+ allowed; in this case this parameter is set to 2.
-------------------------------------------------------------------------------
Name: force_palette
Type: int array (min = 0, max = 32)
3 = RGB565 16 bpp - Software conversion from UYVY
4 = RGB24 24 bpp - Software conversion from UYVY
5 = RGB32 32 bpp - Software conversion from UYVY
- When not 0, this paramater will override 'decompression'.
+ When not 0, this parameter will override 'decompression'.
Default: 0 for every device. Initial palette is 9 (UYVY).
-Note: If 'w9968cf-vpp' is not loaded, this paramater is set to 9.
+Note: If 'w9968cf-vpp' is not present, this parameter is set to 9.
-------------------------------------------------------------------------------
Name: force_rgb
Type: bool array (min = 0, max = 32)
Name: autobright
Type: bool array (min = 0, max = 32)
Syntax: <0|1[,...]>
-Description: CMOS sensor automatically changes brightness:
+Description: Image sensor automatically changes brightness:
0 = no, 1 = yes
Default: 0 for every device.
-------------------------------------------------------------------------------
Name: autoexp
Type: bool array (min = 0, max = 32)
Syntax: <0|1[,...]>
-Description: CMOS sensor automatically changes exposure:
+Description: Image sensor automatically changes exposure:
0 = no, 1 = yes
Default: 1 for every device.
-------------------------------------------------------------------------------
Description: Force pixel clock divisor to a specific value (for experts):
n may vary from 0 to 127.
-1 for automatic value.
- See also the 'double_buffer' module paramater.
+ See also the 'double_buffer' module parameter.
Default: -1 for every device.
-------------------------------------------------------------------------------
Name: backlight
Name: monochrome
Type: bool array (min = 0, max = 32)
Syntax: <0|1[,...]>
-Description: The CMOS sensor is monochrome:
+Description: The image sensor is monochrome:
0 = no, 1 = yes
Default: 0 for every device.
-------------------------------------------------------------------------------
4 = warnings
5 = called functions
6 = function internals
- Level 5 and 6 are useful for testing only, when just one
+                Levels 5 and 6 are useful for testing only, when only one
device is used.
Default: 2
-------------------------------------------------------------------------------
the source code of other drivers and without the help of several persons; in
particular:
-- the I2C interface to kernel and high-level CMOS sensor control routines have
+- the I2C interface to kernel and high-level image sensor control routines have
been taken from the OV511 driver by Mark McClelland;
- memory management code has been copied from the bttv driver by Ralph Metzler,
used to change the file attributes on hugetlbfs.
Also, it is important to note that no such mount command is required if the
-applications are going to use only shmat/shmget system calls. Users who
-wish to use hugetlb page via shared memory segment should be a member of
-a supplementary group and system admin needs to configure that gid into
-/proc/sys/vm/hugetlb_shm_group. It is possible for same or different
-applications to use any combination of mmaps and shm* calls. Though the
-mount of filesystem will be required for using mmaps.
+applications are going to use only shmat/shmget system calls. It is possible
+for the same or different applications to use any combination of mmaps and
+shm* calls, though mounting the filesystem will be required for using mmaps.
/* Example of using hugepage in user application using Sys V shared memory
* system calls. In this example, app is requesting memory of size 256MB that
L: linux-net@vger.kernel.org
S: Maintained
+3W-XXXX ATA-RAID CONTROLLER DRIVER
+P: Adam Radford
+M: linuxraid@amcc.com
+L: linux-scsi@vger.kernel.org
+W: http://www.amcc.com
+S: Supported
+
+3W-9XXX SATA-RAID CONTROLLER DRIVER
+P: Adam Radford
+M: linuxraid@amcc.com
+L: linux-scsi@vger.kernel.org
+W: http://www.amcc.com
+S: Supported
+
53C700 AND 53C700-66 SCSI DRIVER
P: James E.J. Bottomley
M: James.Bottomley@HansenPartnership.com
ALPHA PORT
P: Richard Henderson
M: rth@twiddle.net
-S: Odd Fixes for 2.4; Maintained for 2.5.
+S: Odd Fixes for 2.4; Maintained for 2.6.
P: Ivan Kokshaysky
M: ink@jurassic.park.msu.ru
-S: Maintained for 2.4; PCI support for 2.5.
+S: Maintained for 2.4; PCI support for 2.6.
APM DRIVER
P: Stephen Rothwell
M: marcel@holtmann.org
S: Maintained
+BLUETOOTH HIDP LAYER
+P: Marcel Holtmann
+M: marcel@holtmann.org
+S: Maintained
+
BLUETOOTH HCI UART DRIVER
P: Marcel Holtmann
M: marcel@holtmann.org
L: blinux-list@redhat.com
S: Maintained
+DRIVER CORE, KOBJECTS, AND SYSFS
+P: Greg Kroah-Hartman
+M: greg@kroah.com
+L: linux-kernel@vger.kernel.org
+S: Supported
+
DRM DRIVERS
L: dri-devel@lists.sourceforge.net
S: Supported
IOC3 DRIVER
P: Ralf Baechle
-M: ralf@oss.sgi.com
+M: ralf@linux-mips.org
L: linux-mips@linux-mips.org
S: Maintained
L: linuxppc-dev@lists.linuxppc.org
S: Maintained
+LINUX FOR POWERPC EMBEDDED PPC4XX
+P: Matt Porter
+M: mporter@kernel.crashing.org
+W: http://www.penguinppc.org/
+L: linuxppc-embedded@lists.linuxppc.org
+S: Maintained
+
+LINUX FOR POWERPC EMBEDDED PPC85XX
+P: Kumar Gala
+M: kumar.gala@freescale.com
+W: http://www.penguinppc.org/
+L: linuxppc-embedded@lists.linuxppc.org
+S: Maintained
+
LLC (802.2)
P: Arnaldo Carvalho de Melo
M: acme@conectiva.com.br
M: zab@zabbo.net
S: Odd Fixes
+MARVELL MV64340 ETHERNET DRIVER
+P: Manish Lachwani
+M: Manish_Lachwani@pmc-sierra.com
+L: linux-mips@linux-mips.org
+L: netdev@oss.sgi.com
+S: Supported
+
MATROX FRAMEBUFFER DRIVER
P: Petr Vandrovec
M: vandrove@vc.cvut.cz
MIPS
P: Ralf Baechle
-M: ralf@gnu.org
+M: ralf@linux-mips.org
W: http://oss.sgi.com/mips/mips-howto.html
L: linux-mips@linux-mips.org
S: Maintained
M: pazke@donpac.ru
L: linux-visws-devel@lists.sf.net
W: http://linux-visws.sf.net
-S: Maintained for 2.5.
+S: Maintained for 2.6.
SIS 5513 IDE CONTROLLER DRIVER
P: Lionel Bouton
L: linux-kernel@vger.kernel.org
S: Maintained
-STALLION TECHNOLOGIES MULTIPORT SERIAL BOARDS
-M: support@stallion.oz.au
-W: http://www.stallion.com
-S: Supported
-
STARFIRE/DURALAN NETWORK DRIVER
P: Ion Badulescu
M: ionut@cs.columbia.edu
W: http://www.stradis.com/
S: Maintained
-SUPERH
-P: Niibe Yutaka
-M: gniibe@m17n.org
+SUPERH (sh)
+P: Paul Mundt
+M: lethal@linux-sh.org
P: Kazumoto Kojima
M: kkojima@rr.iij4u.or.jp
L: linux-sh@m17n.org
+W: http://www.linux-sh.org
W: http://www.m17n.org/linux-sh/
W: http://www.rr.iij4u.or.jp/~kkojima/linux-sh4.html
S: Maintained
+SUPERH64 (sh64)
+P: Paul Mundt
+M: lethal@linux-sh.org
+P: Richard Curnow
+M: richard.curnow@superh.com
+L: linuxsh-shmedia-dev@lists.sourceforge.net
+W: http://www.linux-sh.org
+S: Maintained
+
SUN3/3X
P: Sam Creasey
M: sammy@sammy.net
M: kraxel@bytesex.org
S: Maintained
+W1 DALLAS'S 1-WIRE BUS
+P: Evgeniy Polyakov
+M: johnpol@2ka.mipt.ru
+L: sensors@stimpy.netroedge.com
+S: Maintained
+
WAN ROUTER & SANGOMA WANPIPE DRIVERS & API (X.25, FRAME RELAY, PPP, CISCO HDLC)
P: Nenad Corbic
M: ncorbic@sangoma.com
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 7
-EXTRAVERSION =
+EXTRAVERSION = -1.planetlab
NAME=Zonked Quokka
# *DOCUMENTATION*
OBJCOPY = $(CROSS_COMPILE)objcopy
OBJDUMP = $(CROSS_COMPILE)objdump
AWK = awk
-RPM := $(shell if [ -x "/usr/bin/rpmbuild" ]; then echo rpmbuild; \
- else echo rpm; fi)
GENKSYMS = scripts/genksyms/genksyms
DEPMOD = /sbin/depmod
KALLSYMS = scripts/kallsyms
# Files to ignore in find ... statements
-RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o -name CVS \) -prune -o
-RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS
+RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o -name CVS -o -name .pc \) -prune -o
+RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS --exclude .pc
# ===========================================================================
# Rules shared between *config targets and build targets
scripts_basic: include/linux/autoconf.h
-
-# That's our default target when none is given on the command line
-# Note that 'modules' will be added as a prerequisite as well,
-# in the CONFIG_MODULES part below
-
-all: vmlinux
-
# Objects we will link into vmlinux / subdirs we need to visit
init-y := init/
drivers-y := drivers/ sound/
include $(srctree)/arch/$(ARCH)/Makefile
+# Default kernel image to build when no specific target is given.
+# KBUILD_IMAGE may be overruled on the commandline or
+# set in the environment
+# Also any assignments in arch/$(ARCH)/Makefile take precedence over
+# this default value
+export KBUILD_IMAGE ?= vmlinux
+
+# The all: target is the default when no target is given on the
+# command line.
+# This allows a user to issue only 'make' to build a kernel including modules.
+# Defaults to vmlinux, but it is usually overridden in the arch makefile.
+all: vmlinux
+
ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
CFLAGS += -Os
else
echo 'cmd_$@ := $(cmd_vmlinux__)' > $(@D)/.$(@F).cmd
endef
-define rule_vmlinux
- $(rule_vmlinux__); \
- $(NM) $@ | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > System.map
-endef
+do_system_map = $(NM) $(1) | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' | sort > $(2)
LDFLAGS_vmlinux += -T arch/$(ARCH)/kernel/vmlinux.lds.s
# but due to the added section, some addresses have shifted
# From here, we generate a correct .tmp_kallsyms2.o
# o The correct .tmp_kallsyms2.o is linked into the final vmlinux.
+# o Verify that the System.map from vmlinux matches the map from
+# .tmp_vmlinux2, just in case we did not generate kallsyms correctly.
+# o If CONFIG_KALLSYMS_EXTRA_PASS is set, do an extra pass using
+# .tmp_vmlinux3 and .tmp_kallsyms3.o. This is only meant as a
+# temporary bypass to allow the kernel to be built while the
+# maintainers work out what went wrong with kallsyms.
ifdef CONFIG_KALLSYMS
-kallsyms.o := .tmp_kallsyms2.o
+ifdef CONFIG_KALLSYMS_EXTRA_PASS
+last_kallsyms := 3
+else
+last_kallsyms := 2
+endif
+
+kallsyms.o := .tmp_kallsyms$(last_kallsyms).o
+
+define rule_verify_kallsyms
+ @$(call do_system_map, .tmp_vmlinux$(last_kallsyms), .tmp_System.map)
+ @cmp -s System.map .tmp_System.map || \
+ (echo Inconsistent kallsyms data, try setting CONFIG_KALLSYMS_EXTRA_PASS ; rm .tmp_kallsyms* ; false)
+endef
quiet_cmd_kallsyms = KSYM $@
cmd_kallsyms = $(NM) -n $< | $(KALLSYMS) $(foreach x,$(CONFIG_KALLSYMS_ALL),--all-symbols) > $@
-.tmp_kallsyms1.o .tmp_kallsyms2.o: %.o: %.S scripts FORCE
+.tmp_kallsyms1.o .tmp_kallsyms2.o .tmp_kallsyms3.o: %.o: %.S scripts FORCE
$(call if_changed_dep,as_o_S)
.tmp_kallsyms%.S: .tmp_vmlinux%
$(call cmd,kallsyms)
.tmp_vmlinux1: $(vmlinux-objs) arch/$(ARCH)/kernel/vmlinux.lds.s FORCE
- +$(call if_changed_rule,vmlinux__)
+ $(call if_changed_rule,vmlinux__)
.tmp_vmlinux2: $(vmlinux-objs) .tmp_kallsyms1.o arch/$(ARCH)/kernel/vmlinux.lds.s FORCE
$(call if_changed_rule,vmlinux__)
+.tmp_vmlinux3: $(vmlinux-objs) .tmp_kallsyms2.o arch/$(ARCH)/kernel/vmlinux.lds.s FORCE
+ $(call if_changed_rule,vmlinux__)
+
endif
# Finally the vmlinux rule
+define rule_vmlinux
+ $(rule_vmlinux__); \
+ $(call do_system_map, $@, System.map)
+ $(rule_verify_kallsyms)
+endef
+
vmlinux: $(vmlinux-objs) $(kallsyms.o) arch/$(ARCH)/kernel/vmlinux.lds.s FORCE
$(call if_changed_rule,vmlinux)
# Directories & files removed with 'make clean'
CLEAN_DIRS += $(MODVERDIR)
-CLEAN_FILES += vmlinux System.map kernel.spec \
- .tmp_kallsyms* .tmp_version .tmp_vmlinux*
+CLEAN_FILES += vmlinux System.map \
+ .tmp_kallsyms* .tmp_version .tmp_vmlinux* .tmp_System.map
# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config include2
.PHONY: distclean
distclean: mrproper
- @find . $(RCS_FIND_IGNORE) \
+ @find $(srctree) $(RCS_FIND_IGNORE) \
\( -name '*.orig' -o -name '*.rej' -o -name '*~' \
-o -name '*.bak' -o -name '#*#' -o -name '.*.orig' \
-o -name '.*.rej' -o -size 0 \
-o -name '*%' -o -name '.*.cmd' -o -name 'core' \) \
-type f -print | xargs rm -f
-# RPM target
-# ---------------------------------------------------------------------------
-
-.PHONY: rpm
-
-# Remove hyphens since they have special meaning in RPM filenames
-KERNELPATH=kernel-$(subst -,,$(KERNELRELEASE))
-# If you do a make spec before packing the tarball you can rpm -ta it
-
-spec:
- $(CONFIG_SHELL) $(srctree)/scripts/mkspec > $(objtree)/kernel.spec
-
-# a) Build a tar ball
-# b) generate an rpm from it
-# c) and pack the result
-# - Use /. to avoid tar packing just the symlink
+# Packaging of the kernel to various formats
+# ---------------------------------------------------------------------------
+# rpm target kept for backward compatibility
+package-dir := $(srctree)/scripts/package
-rpm: clean spec
- set -e; \
- cd .. ; \
- ln -sf $(srctree) $(KERNELPATH) ; \
- tar -cvz $(RCS_TAR_IGNORE) -f $(KERNELPATH).tar.gz $(KERNELPATH)/. ; \
- rm $(KERNELPATH)
+.PHONY: %-pkg rpm
- set -e; \
- $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version;\
- mv -f $(objtree)/.tmp_version $(objtree)/.version;
+%pkg: FORCE
+ $(Q)$(MAKE) -f $(package-dir)/Makefile $@
+rpm: FORCE
+ $(Q)$(MAKE) -f $(package-dir)/Makefile $@
- $(RPM) --target $(UTS_MACHINE) -ta ../$(KERNELPATH).tar.gz
- rm ../$(KERNELPATH).tar.gz
# Brief documentation of the typical targets used
# ---------------------------------------------------------------------------
@echo ' tags/TAGS - Generate tags file for editors'
@echo ' cscope - Generate cscope index'
@echo ' checkstack - Generate a list of stack hogs'
+ @echo 'Kernel packaging:'
+ @$(MAKE) -f $(package-dir)/Makefile help
@echo ''
@echo 'Documentation targets:'
@$(MAKE) -f $(srctree)/Documentation/DocBook/Makefile dochelp
.PHONY: checkstack
checkstack:
$(OBJDUMP) -d vmlinux $$(find . -name '*.ko') | \
- $(PERL) scripts/checkstack.pl $(ARCH)
+ $(PERL) $(src)/scripts/checkstack.pl $(ARCH)
# FIXME Should go into a make.lib or something
# ===========================================================================
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
NM := $(NM) -B
LDFLAGS_vmlinux := -static -N #-relax
+CHECK := $(CHECK) -D__alpha__=1
cflags-y := -pipe -mno-fp-regs -ffixed-8
# Determine if we can use the BWX instructions with GAS.
CONFIG_NFSD=m
CONFIG_NFSD_V3=y
# CONFIG_NFSD_V4 is not set
-# CONFIG_NFSD_TCP is not set
+CONFIG_NFSD_TCP=y
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=m
hose->sparse_mem_base = 0;
hose->sparse_io_base = 0;
hose->dense_mem_base
- = (TSUNAMI_MEM(index) & 0xffffffffff) | 0x80000000000;
+ = (TSUNAMI_MEM(index) & 0xffffffffffL) | 0x80000000000L;
hose->dense_io_base
- = (TSUNAMI_IO(index) & 0xffffffffff) | 0x80000000000;
+ = (TSUNAMI_IO(index) & 0xffffffffffL) | 0x80000000000L;
hose->config_space_base = TSUNAMI_CONF(index);
hose->index = index;
jmp $31, do_sys_ptrace
.end sys_ptrace
+ .align 4
+ .globl sys_execve
+ .ent sys_execve
+sys_execve:
+ .prologue 0
+ mov $sp, $19
+ jmp $31, do_sys_execve
+.end sys_execve
+
+ .align 4
+ .globl osf_sigprocmask
+ .ent osf_sigprocmask
+osf_sigprocmask:
+ .prologue 0
+ mov $sp, $18
+ jmp $31, do_osf_sigprocmask
+.end osf_sigprocmask
+
.align 4
.globl alpha_ni_syscall
.ent alpha_ni_syscall
#include <linux/init.h>
#include <linux/init_task.h>
#include <linux/fs.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#ifdef CONFIG_SMP
static struct proc_dir_entry * smp_affinity_entry[NR_IRQS];
static char irq_user_affinity[NR_IRQS];
-static unsigned long irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = ~0UL };
+static cpumask_t irq_affinity[NR_IRQS] = { [0 ... NR_IRQS-1] = CPU_MASK_ALL };
static void
select_smp_affinity(int irq)
if (! irq_desc[irq].handler->set_affinity || irq_user_affinity[irq])
return;
- while (((cpu_present_mask >> cpu) & 1) == 0)
+ while (!cpu_possible(cpu))
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu;
- irq_affinity[irq] = 1UL << cpu;
- irq_desc[irq].handler->set_affinity(irq, 1UL << cpu);
+ irq_affinity[irq] = cpumask_of_cpu(cpu);
+ irq_desc[irq].handler->set_affinity(irq, cpumask_of_cpu(cpu));
}
-#define HEX_DIGITS 16
-
static int
irq_affinity_read_proc (char *page, char **start, off_t off,
int count, int *eof, void *data)
return len;
}
-static unsigned int
-parse_hex_value (const char __user *buffer,
- unsigned long count, unsigned long *ret)
-{
- unsigned char hexnum [HEX_DIGITS];
- unsigned long value;
- unsigned long i;
-
- if (!count)
- return -EINVAL;
- if (count > HEX_DIGITS)
- count = HEX_DIGITS;
- if (copy_from_user(hexnum, buffer, count))
- return -EFAULT;
-
- /*
- * Parse the first 8 characters as a hex string, any non-hex char
- * is end-of-string. '00e1', 'e1', '00E1', 'E1' are all the same.
- */
- value = 0;
-
- for (i = 0; i < count; i++) {
- unsigned int c = hexnum[i];
-
- switch (c) {
- case '0' ... '9': c -= '0'; break;
- case 'a' ... 'f': c -= 'a'-10; break;
- case 'A' ... 'F': c -= 'A'-10; break;
- default:
- goto out;
- }
- value = (value << 4) | c;
- }
-out:
- *ret = value;
- return 0;
-}
-
static int
irq_affinity_write_proc(struct file *file, const char __user *buffer,
unsigned long count, void *data)
{
int irq = (long) data, full_count = count, err;
- unsigned long new_value;
+ cpumask_t new_value;
if (!irq_desc[irq].handler->set_affinity)
return -EIO;
- err = parse_hex_value(buffer, count, &new_value);
+ err = cpumask_parse(buffer, count, new_value);
/* The special value 0 means release control of the
affinity to kernel. */
- if (new_value == 0) {
+ cpus_and(new_value, new_value, cpu_online_map);
+ if (cpus_empty(new_value)) {
irq_user_affinity[irq] = 0;
select_smp_affinity(irq);
}
/* Do not allow disabling IRQs completely - it's a too easy
way to make the system unusable accidentally :-) At least
one online CPU still has to be targeted. */
- else if (!(new_value & cpu_present_mask))
- return -EINVAL;
else {
irq_affinity[irq] = new_value;
irq_user_affinity[irq] = 1;
prof_cpu_mask_write_proc(struct file *file, const char __user *buffer,
unsigned long count, void *data)
{
- unsigned long *mask = (unsigned long *) data, full_count = count, err;
- unsigned long new_value;
+ unsigned long full_count = count, err;
+ cpumask_t new_value, *mask = (cpumask_t *)data;
- err = parse_hex_value(buffer, count, &new_value);
+ err = cpumask_parse(buffer, count, new_value);
if (err)
return err;
int i;
/* create /proc/irq */
- root_irq_dir = proc_mkdir("irq", 0);
+ root_irq_dir = proc_mkdir("irq", NULL);
#ifdef CONFIG_SMP
/* create /proc/irq/prof_cpu_mask */
action->handler = handler;
action->flags = irqflags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
int error;
if (uss) {
- void *ss_sp;
+ void __user *ss_sp;
error = -EFAULT;
if (get_user(ss_sp, &uss->ss_sp))
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = si_code;
- info.si_addr = 0; /* FIXME */
+ info.si_addr = NULL; /* FIXME */
send_sig_info(SIGFPE, &info, current);
}
return -EFAULT;
}
- return do_utimes(filename, tvs ? ktvs : 0);
+ return do_utimes(filename, tvs ? ktvs : NULL);
}
#define MAX_SELECT_SECONDS \
unsigned long i;
for (i = 0 ; i < count ; i++) {
- int *iov_len_high = (int __user *)&iov[i].iov_len + 1;
+ int __user *iov_len_high = (int __user *)&iov[i].iov_len + 1;
if (put_user(0, iov_len_high))
return -EFAULT;
#ifdef CONFIG_SMP
/* Wait for the secondaries to halt. */
- clear_bit(boot_cpuid, &cpu_present_mask);
- while (cpu_present_mask)
+ cpu_clear(boot_cpuid, cpu_possible_map);
+ while (cpus_weight(cpu_possible_map))
barrier();
#endif
void
show_regs(struct pt_regs *regs)
{
- dik_show_regs(regs, 0);
+ dik_show_regs(regs, NULL);
}
/*
/*
* sys_execve() executes a new program.
- *
- * This works due to the alpha calling sequence: the first 6 args
- * are gotten from registers, while the rest is on the stack, so
- * we get a0-a5 for free, and then magically find "struct pt_regs"
- * on the stack for us..
- *
- * Don't do this at home.
*/
asmlinkage int
-sys_execve(char __user *ufilename, char __user * __user *argv,
- char __user * __user *envp,
- unsigned long a3, unsigned long a4, unsigned long a5,
- struct pt_regs regs)
+do_sys_execve(char __user *ufilename, char __user * __user *argv,
+ char __user * __user *envp, struct pt_regs *regs)
{
int error;
char *filename;
error = PTR_ERR(filename);
if (IS_ERR(filename))
goto out;
-	error = do_execve(filename, argv, envp, &regs);
+ error = do_execve(filename, argv, envp, regs);
putname(filename);
out:
return error;
read_unlock(&tasklist_lock);
if (!child)
goto out_notsk;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out;
if (request == PTRACE_ATTACH) {
ret = ptrace_attach(child);
#include <linux/reboot.h>
#endif
#include <linux/notifier.h>
+#include <asm/setup.h>
#include <asm/io.h>
extern struct notifier_block *panic_notifier_list;
static void determine_cpu_caches (unsigned int);
static char command_line[COMMAND_LINE_SIZE];
-char saved_command_line[COMMAND_LINE_SIZE];
/*
* The format of "screen_info" is strange, and due to early
platform_string(), nr_processors);
#ifdef CONFIG_SMP
- seq_printf(f, "cpus active\t\t: %ld\n"
+ seq_printf(f, "cpus active\t\t: %d\n"
"cpu active mask\t\t: %016lx\n",
- num_online_cpus(), cpu_present_mask);
+ num_online_cpus(), cpus_addr(cpu_possible_map)[0]);
#endif
show_cache_size (f, "L1 Icache", alpha_l1i_cacheshape);
* operation, as all of this is local to this thread.
*/
asmlinkage unsigned long
-osf_sigprocmask(int how, unsigned long newmask, long a2, long a3,
- long a4, long a5, struct pt_regs regs)
+do_osf_sigprocmask(int how, unsigned long newmask, struct pt_regs *regs)
{
unsigned long oldmask = -EINVAL;
recalc_sigpending();
	spin_unlock_irq(&current->sighand->siglock);
-	(&regs)->r0 = 0;	/* special no error return */
+ regs->r0 = 0; /* special no error return */
}
return oldmask;
}
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
info.si_trapno = 0;
send_sig_info(SIGTRAP, &info, current);
}
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
info.si_trapno = 0;
send_sig_info(SIGTRAP, &info, current);
}
** and standard ISA IRQs.
**
*/
-static SMC37c669_IRQ_TRANSLATION_ENTRY *SMC37c669_irq_table __initdata = 0;
+static SMC37c669_IRQ_TRANSLATION_ENTRY *SMC37c669_irq_table __initdata;
/*
** The following definition is for the default IRQ
** ISA DMA channels.
**
*/
-static SMC37c669_DRQ_TRANSLATION_ENTRY *SMC37c669_drq_table __initdata = 0;
+static SMC37c669_DRQ_TRANSLATION_ENTRY *SMC37c669_drq_table __initdata;
/*
** The following definition is the default DRQ
static int smp_secondary_alive __initdata = 0;
/* Which cpus ids came online. */
-unsigned long cpu_present_mask;
+cpumask_t cpu_present_mask;
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
smp_num_probed = 1;
hwrpb_cpu_present_mask = (1UL << boot_cpuid);
}
- cpu_present_mask = 1UL << boot_cpuid;
+ cpu_present_mask = cpumask_of_cpu(boot_cpuid);
printk(KERN_INFO "SMP: %d CPUs probed -- cpu_present_mask = %lx\n",
smp_num_probed, hwrpb_cpu_present_mask);
/* Nothing to do on a UP box, or when told not to. */
if (smp_num_probed == 1 || max_cpus == 0) {
- cpu_present_mask = 1UL << boot_cpuid;
+ cpu_present_mask = cpumask_of_cpu(boot_cpuid);
printk(KERN_INFO "SMP mode deactivated.\n");
return;
}
if (((hwrpb_cpu_present_mask >> i) & 1) == 0)
continue;
- cpu_present_mask |= 1UL << i;
+ cpu_set(i, cpu_possible_map);
cpu_count++;
}
if (cpu_online(cpu))
bogosum += cpu_data[cpu].loops_per_jiffy;
- printk(KERN_INFO "SMP: Total of %ld processors activated "
+ printk(KERN_INFO "SMP: Total of %d processors activated "
"(%lu.%02lu BogoMIPS).\n",
num_online_cpus(),
(bogosum + 2500) / (500000/HZ),
static void
-send_ipi_message(unsigned long to_whom, enum ipi_message_type operation)
+send_ipi_message(cpumask_t to_whom, enum ipi_message_type operation)
{
- unsigned long i, set, n;
+ int i;
mb();
- for (i = to_whom; i ; i &= ~set) {
- set = i & -i;
- n = __ffs(set);
- set_bit(operation, &ipi_data[n].bits);
- }
+ for_each_cpu_mask(i, to_whom)
+ set_bit(operation, &ipi_data[i].bits);
mb();
- for (i = to_whom; i ; i &= ~set) {
- set = i & -i;
- n = __ffs(set);
- wripir(n);
- }
+ for_each_cpu_mask(i, to_whom)
+ wripir(i);
}
/* Structure and data for smp_call_function. This is designed to
printk(KERN_WARNING
"smp_send_reschedule: Sending IPI to self.\n");
#endif
- send_ipi_message(1UL << cpu, IPI_RESCHEDULE);
+ send_ipi_message(cpumask_of_cpu(cpu), IPI_RESCHEDULE);
}
void
smp_send_stop(void)
{
- unsigned long to_whom = cpu_present_mask & ~(1UL << smp_processor_id());
+ cpumask_t to_whom = cpu_possible_map;
+ cpu_clear(smp_processor_id(), to_whom);
#ifdef DEBUG_IPI_MSG
if (hard_smp_processor_id() != boot_cpu_id)
printk(KERN_WARNING "smp_send_stop: Not on boot cpu.\n");
int
smp_call_function_on_cpu (void (*func) (void *info), void *info, int retry,
- int wait, unsigned long to_whom)
+ int wait, cpumask_t to_whom)
{
struct smp_call_struct data;
unsigned long timeout;
data.info = info;
data.wait = wait;
- to_whom &= ~(1L << smp_processor_id());
- num_cpus_to_call = hweight64(to_whom);
+ cpu_clear(smp_processor_id(), to_whom);
+ num_cpus_to_call = cpus_weight(to_whom);
atomic_set(&data.unstarted_count, num_cpus_to_call);
atomic_set(&data.unfinished_count, num_cpus_to_call);
/* We either got one or timed out -- clear the lock. */
mb();
- smp_call_function_data = 0;
+ smp_call_function_data = NULL;
/*
* If after both the initial and long timeout periods we still don't
register int bcpu = boot_cpuid;
#ifdef CONFIG_SMP
- register unsigned long cpm = cpu_present_mask;
volatile unsigned long *dim0, *dim1, *dim2, *dim3;
unsigned long mask0, mask1, mask2, mask3, dummy;
dim1 = &cchip->dim1.csr;
dim2 = &cchip->dim2.csr;
dim3 = &cchip->dim3.csr;
- if ((cpm & 1) == 0) dim0 = &dummy;
- if ((cpm & 2) == 0) dim1 = &dummy;
- if ((cpm & 4) == 0) dim2 = &dummy;
- if ((cpm & 8) == 0) dim3 = &dummy;
+	if (!cpu_possible(0)) dim0 = &dummy;
+	if (!cpu_possible(1)) dim1 = &dummy;
+	if (!cpu_possible(2)) dim2 = &dummy;
+	if (!cpu_possible(3)) dim3 = &dummy;
*dim0 = mask0;
*dim1 = mask1;
}
static void
-cpu_set_irq_affinity(unsigned int irq, unsigned long affinity)
+cpu_set_irq_affinity(unsigned int irq, cpumask_t affinity)
{
int cpu;
for (cpu = 0; cpu < 4; cpu++) {
unsigned long aff = cpu_irq_affinity[cpu];
- if (affinity & (1UL << cpu))
+ if (cpu_isset(cpu, affinity))
aff |= 1UL << irq;
else
aff &= ~(1UL << irq);
}
static void
-dp264_set_affinity(unsigned int irq, unsigned long affinity)
+dp264_set_affinity(unsigned int irq, cpumask_t affinity)
{
spin_lock(&dp264_irq_lock);
cpu_set_irq_affinity(irq, affinity);
}
static void
-clipper_set_affinity(unsigned int irq, unsigned long affinity)
+clipper_set_affinity(unsigned int irq, cpumask_t affinity)
{
spin_lock(&dp264_irq_lock);
cpu_set_irq_affinity(irq - 16, affinity);
.quad alpha_ni_syscall /* 270 */
.quad alpha_ni_syscall
.quad alpha_ni_syscall
- .quad alpha_ni_syscall
+ .quad sys_vserver /* 273 sys_vserver */
.quad alpha_ni_syscall
.quad alpha_ni_syscall /* 275 */
.quad alpha_ni_syscall
* sets the minutes. Usually you won't notice until after reboot!
*/
-extern int abs(int);
static int
set_rtc_mmss(unsigned long nowtime)
#include <linux/smp_lock.h>
#include <linux/module.h>
#include <linux/init.h>
+#include <linux/kallsyms.h>
#include <asm/gentrap.h>
#include <asm/uaccess.h>
dik_show_trace(unsigned long *sp)
{
long i = 0;
- printk("Trace:");
+ printk("Trace:\n");
while (0x1ff8 & (unsigned long) sp) {
extern char _stext[], _etext[];
unsigned long tmp = *sp;
continue;
if (tmp >= (unsigned long) &_etext)
continue;
- printk("%lx%c", tmp, ' ');
+ printk("[<%lx>]", tmp);
+ print_symbol(" %s", tmp);
+ printk("\n");
if (i > 40) {
printk(" ...");
break;
if (si_code == 0)
return;
}
- die_if_kernel("Arithmetic fault", regs, 0, 0);
+ die_if_kernel("Arithmetic fault", regs, 0, NULL);
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = si_code;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
send_sig_info(SIGFPE, &info, current);
}
data[0]);
}
die_if_kernel((type == 1 ? "Kernel Bug" : "Instruction fault"),
- regs, type, 0);
+ regs, type, NULL);
}
switch (type) {
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
info.si_trapno = 0;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
if (ptrace_cancel_bpt(current)) {
regs->pc -= 4; /* make pc point to former bpt */
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = __SI_FAULT;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
info.si_trapno = 0;
send_sig_info(SIGTRAP, &info, current);
return;
case 2: /* gentrap */
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
info.si_trapno = regs->r16;
switch ((long) regs->r16) {
case GEN_INTOVF:
info.si_signo = signo;
info.si_errno = 0;
info.si_code = code;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
send_sig_info(signo, &info, current);
return;
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_code = si_code;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
send_sig_info(SIGFPE, &info, current);
return;
}
info.si_signo = SIGILL;
info.si_errno = 0;
info.si_code = ILL_ILLOPC;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
send_sig_info(SIGILL, &info, current);
}
{
siginfo_t info;
- die_if_kernel("Instruction fault", regs, 0, 0);
+ die_if_kernel("Instruction fault", regs, 0, NULL);
info.si_signo = SIGILL;
info.si_errno = 0;
info.si_code = ILL_ILLOPC;
- info.si_addr = (void *) regs->pc;
+ info.si_addr = (void __user *) regs->pc;
force_sig_info(SIGILL, &info, current);
}
#undef R
asmlinkage void
-do_entUnaUser(void * va, unsigned long opcode,
+do_entUnaUser(void __user * va, unsigned long opcode,
unsigned long reg, struct pt_regs *regs)
{
static int cnt = 0;
info.si_signo = SIGBUS;
info.si_errno = 0;
info.si_code = BUS_ADRERR;
- info.si_addr = (void *) address;
+ info.si_addr = (void __user *) address;
force_sig_info(SIGBUS, &info, current);
if (!user_mode(regs))
goto no_context;
info.si_signo = SIGSEGV;
info.si_errno = 0;
info.si_code = si_code;
- info.si_addr = (void *) address;
+ info.si_addr = (void __user *) address;
force_sig_info(SIGSEGV, &info, current);
return;
printk("\nMem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
i = max_mapnr;
while (i-- > 0) {
total++;
initrd_end,
phys_to_virt(PFN_PHYS(max_low_pfn)));
} else {
- nid = NODE_DATA(kvaddr_to_nid(initrd_start));
- reserve_bootmem_node(nid,
+ nid = kvaddr_to_nid(initrd_start);
+ reserve_bootmem_node(NODE_DATA(nid),
virt_to_phys((void *)initrd_start),
INITRD_SIZE);
}
printk("\nMem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for (nid = 0; nid < numnodes; nid++) {
struct page * lmem_map = node_mem_map(nid);
i = node_spanned_pages(nid);
prompt "ARM system type"
default ARCH_RPC
-config ARCH_ADIFCC
- bool "ADIFCC-based"
-
config ARCH_CLPS7500
bool "Cirrus-CL-PS7500FE"
depends on ARCH_RPC
default y
+config TIMER_ACORN
+ bool
+ depends on ARCH_ACORN || ARCH_CLPS7500
+ default y
+
#####################################################################
# Footbridge support
config FOOTBRIDGE
depends on ASSABET_NEPONSET || SA1100_ADSBITSY || SA1100_BADGE4 || SA1100_CONSUS || SA1100_GRAPHICSMASTER || SA1100_JORNADA720 || ARCH_LUBBOCK || SA1100_PFS168 || SA1100_PT_SYSTEM3 || SA1100_XP860
default y
+config SHARP_LOCOMO
+ bool
+ depends on SA1100_COLLIE
+ default y
+
config FORCE_MAX_ZONEORDER
int
depends on SA1111
# Select various configuration options depending on the machine type
config DISCONTIGMEM
bool
- depends on ARCH_EDB7211 || ARCH_SA1100 || ARCH_LH7A40X
+ depends on ARCH_EDB7211 || ARCH_SA1100 || (ARCH_LH7A40X && !LH7A40X_SROMLL)
default y
help
- Say Y to upport efficient handling of discontiguous physical memory,
+ Say Y to support efficient handling of discontiguous physical memory,
for architectures which are either NUMA (Non-Uniform Memory Access)
or have huge holes in the physical address space for other reasons.
See <file:Documentation/vm/numa> for more.
# Now handle the bus types
config PCI
bool "PCI support" if ARCH_INTEGRATOR_AP
- default y if ARCH_FTVPCI || ARCH_SHARK || FOOTBRIDGE_HOST || ARCH_IOP3XX || ARCH_IXP4XX
+ default y if ARCH_SHARK || FOOTBRIDGE_HOST || ARCH_IOP3XX || ARCH_IXP4XX
help
Find out whether you have a PCI motherboard. PCI is the name of a
bus system, i.e. the way the CPU talks to the other stuff inside
doesn't.
# Select the host bridge type
-config PCI_HOST_PLX90X0
- bool
- depends on PCI && ARCH_FTVPCI
- default y
-
config PCI_HOST_VIA82C505
bool
depends on PCI && ARCH_SHARK
If you do not feel you need a faster FP emulation you should better
choose NWFPE.
+config VFP
+ bool "VFP-format floating point maths"
+ help
+ Say Y to include VFP support code in the kernel. This is needed
+ if your hardware includes a VFP unit.
+
+ Please see <file:Documentation/arm/VFP/release-notes.txt> for
+ release notes and additional status information.
+
+ Say N if your target does not have VFP hardware.
+
source "fs/Kconfig.binfmt"
source "drivers/base/Kconfig"
config LEDS
bool "Timer and CPU usage LEDs"
- depends on ARCH_NETWINDER || ARCH_EBSA110 || ARCH_EBSA285 || ARCH_FTVPCI || ARCH_SHARK || ARCH_CO285 || ARCH_SA1100 || ARCH_LUBBOCK || MACH_MAINSTONE || ARCH_PXA_IDP || ARCH_INTEGRATOR || ARCH_CDB89712 || ARCH_P720T || ARCH_OMAP || ARCH_VERSATILE_PB
+ depends on ARCH_NETWINDER || ARCH_EBSA110 || ARCH_EBSA285 || ARCH_SHARK || ARCH_CO285 || ARCH_SA1100 || ARCH_LUBBOCK || MACH_MAINSTONE || ARCH_PXA_IDP || ARCH_INTEGRATOR || ARCH_CDB89712 || ARCH_P720T || ARCH_OMAP || ARCH_VERSATILE_PB
help
If you say Y here, the LEDs on your machine will be used
to provide useful information about your current system status.
config LEDS_TIMER
bool "Timer LED" if LEDS && (ARCH_NETWINDER || ARCH_EBSA285 || ARCH_SHARK || MACH_MAINSTONE || ARCH_CO285 || ARCH_SA1100 || ARCH_LUBBOCK || ARCH_PXA_IDP || ARCH_INTEGRATOR || ARCH_P720T || ARCH_VERSATILE_PB)
- depends on ARCH_NETWINDER || ARCH_EBSA110 || ARCH_EBSA285 || ARCH_FTVPCI || ARCH_SHARK || ARCH_CO285 || ARCH_SA1100 || ARCH_LUBBOCK || MACH_MAINSTONE || ARCH_PXA_IDP || ARCH_INTEGRATOR || ARCH_CDB89712 || ARCH_P720T || ARCH_OMAP || ARCH_VERSATILE_PB
+ depends on ARCH_NETWINDER || ARCH_EBSA110 || ARCH_EBSA285 || ARCH_SHARK || ARCH_CO285 || ARCH_SA1100 || ARCH_LUBBOCK || MACH_MAINSTONE || ARCH_PXA_IDP || ARCH_INTEGRATOR || ARCH_CDB89712 || ARCH_P720T || ARCH_OMAP || ARCH_VERSATILE_PB
default y if ARCH_EBSA110
help
If you say Y here, one of the system LEDs (the green one on the
source "net/Kconfig"
-if ARCH_CLPS7500 || ARCH_IOP3XX || ARCH_IXP4XX || ARCH_L7200 || ARCH_LH7A40X || ARCH_FTVPCI || ARCH_PXA || ARCH_RPC || ARCH_S3C2410 || ARCH_SA1100 || ARCH_SHARK || FOOTBRIDGE
+if ARCH_CLPS7500 || ARCH_IOP3XX || ARCH_IXP4XX || ARCH_L7200 || ARCH_LH7A40X || ARCH_PXA || ARCH_RPC || ARCH_S3C2410 || ARCH_SA1100 || ARCH_SHARK || FOOTBRIDGE
source "drivers/ide/Kconfig"
endif
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
head-y := arch/arm/kernel/head.o arch/arm/kernel/init_task.o
textaddr-y := 0xC0008000
- machine-$(CONFIG_ARCH_ARCA5K) := arc
machine-$(CONFIG_ARCH_RPC) := rpc
machine-$(CONFIG_ARCH_EBSA110) := ebsa110
machine-$(CONFIG_ARCH_CLPS7500) := clps7500
textaddr-$(CONFIG_ARCH_CO285) := 0x60008000
machine-$(CONFIG_ARCH_CO285) := footbridge
incdir-$(CONFIG_ARCH_CO285) := ebsa285
- machine-$(CONFIG_ARCH_FTVPCI) := ftvpci
- incdir-$(CONFIG_ARCH_FTVPCI) := nexuspci
- machine-$(CONFIG_ARCH_TBOX) := tbox
machine-$(CONFIG_ARCH_SHARK) := shark
machine-$(CONFIG_ARCH_SA1100) := sa1100
ifeq ($(CONFIG_ARCH_SA1100),y)
machine-$(CONFIG_ARCH_CLPS711X) := clps711x
textaddr-$(CONFIG_ARCH_FORTUNET) := 0xc0008000
machine-$(CONFIG_ARCH_IOP3XX) := iop3xx
- machine-$(CONFIG_ARCH_ADIFCC) := adifcc
machine-$(CONFIG_ARCH_IXP4XX) := ixp4xx
machine-$(CONFIG_ARCH_OMAP) := omap
machine-$(CONFIG_ARCH_S3C2410) := s3c2410
machine-$(CONFIG_ARCH_LH7A40X) := lh7a40x
machine-$(CONFIG_ARCH_VERSATILE_PB) := versatile
+ifeq ($(CONFIG_ARCH_EBSA110),y)
+# This is what happens if you forget the IOCS16 line.
+# PCMCIA cards stop working.
+CFLAGS_3c589_cs.o :=-DISA_SIXTEEN_BIT_PERIPHERAL
+export CFLAGS_3c589_cs.o
+endif
+
TEXTADDR := $(textaddr-y)
ifeq ($(incdir-y),)
incdir-y := $(machine-y)
endif
core-$(CONFIG_FPE_NWFPE) += arch/arm/nwfpe/
core-$(CONFIG_FPE_FASTFPE) += $(FASTFPE_OBJ)
+core-$(CONFIG_VFP) += arch/arm/vfp/
drivers-$(CONFIG_OPROFILE) += arch/arm/oprofile/
drivers-$(CONFIG_ARCH_CLPS7500) += drivers/acorn/char/
$(Q)$(MAKE) $(build)=arch/arm/tools include/asm-arm/mach-types.h
# Convert bzImage to zImage
-bzImage: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) $(boot)/zImage
+bzImage: zImage
zImage Image bootpImage uImage: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
-# My testing targets (that short circuit a few dependencies)
-zImg:; $(Q)$(MAKE) $(build)=$(boot) $(boot)/zImage
-Img:; $(Q)$(MAKE) $(build)=$(boot) $(boot)/Image
+# My testing targets (bypass a few dependencies)
bp:; $(Q)$(MAKE) $(build)=$(boot) $(boot)/bootpImage
-i:; $(Q)$(MAKE) $(build)=$(boot) install
-zi:; $(Q)$(MAKE) $(build)=$(boot) zinstall
+i zi:; $(Q)$(MAKE) $(build)=$(boot) $@
arch/$(ARCH)/kernel/asm-offsets.s: include/asm include/linux/version.h \
include/asm-arm/.arch
echo '* zImage - Compressed kernel image (arch/$(ARCH)/boot/zImage)'
echo ' Image - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
echo ' bootpImage - Combined zImage and initial RAM disk'
+ echo ' (supply initrd image via make variable INITRD=<path>)'
echo ' install - Install uncompressed kernel'
echo ' zinstall - Install compressed kernel'
echo ' Install using (your) ~/bin/installkernel or'
zreladdr-$(CONFIG_ARCH_PXA) := 0xa0008000
zreladdr-$(CONFIG_ARCH_IOP3XX) := 0xa0008000
params_phys-$(CONFIG_ARCH_IOP3XX) := 0xa0000100
- zreladdr-$(CONFIG_ARCH_ADIFCC) := 0xc0008000
-params_phys-$(CONFIG_ARCH_ADIFCC) := 0xc0000100
zreladdr-$(CONFIG_ARCH_IXP4XX) := 0x00008000
params-phys-$(CONFIG_ARCH_IXP4XX) := 0x00000100
zreladdr-$(CONFIG_ARCH_OMAP) := 0x10008000
initrd_phys-$(CONFIG_ARCH_VERSATILE_PB) := 0x00800000
ZRELADDR := $(zreladdr-y)
-ZTEXTADDR := $(ztextaddr-y)
PARAMS_PHYS := $(params_phys-y)
INITRD_PHYS := $(initrd_phys-y)
-#
-# We now have a PIC decompressor implementation. Decompressors running
-# from RAM should not define ZTEXTADDR. Decompressors running directly
-# from ROM or Flash must define ZTEXTADDR (preferably via the config)
-# FIXME: Previous assignment to ztextaddr-y is lost here. See SHARK
-ifeq ($(CONFIG_ZBOOT_ROM),y)
-ZTEXTADDR := $(CONFIG_ZBOOT_ROM_TEXT)
-ZBSSADDR := $(CONFIG_ZBOOT_ROM_BSS)
-else
-ZTEXTADDR := 0
-ZBSSADDR := ALIGN(4)
-endif
-export ZTEXTADDR ZBSSADDR ZRELADDR INITRD_PHYS PARAMS_PHYS
+export ZRELADDR INITRD_PHYS PARAMS_PHYS
-targets := Image zImage bootpImage
+targets := Image zImage bootpImage uImage
$(obj)/Image: vmlinux FORCE
$(call if_changed,objcopy)
@echo ' Kernel: $@ is ready'
+$(obj)/compressed/vmlinux: $(obj)/Image FORCE
+ $(Q)$(MAKE) $(build)=$(obj)/compressed $@
+
$(obj)/zImage: $(obj)/compressed/vmlinux FORCE
$(call if_changed,objcopy)
@echo ' Kernel: $@ is ready'
-C none -a $(ZRELADDR) -e $(ZRELADDR) \
-n 'Linux-$(KERNELRELEASE)' -d $< $@
-targets += uImage
-$(obj)/uImage: $(obj)/zImage
+$(obj)/uImage: $(obj)/zImage FORCE
$(call if_changed,uimage)
@echo ' Image $@ is ready'
+$(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
+ $(Q)$(MAKE) $(build)=$(obj)/bootp $@
+ @:
+
$(obj)/bootpImage: $(obj)/bootp/bootp FORCE
$(call if_changed,objcopy)
@echo ' Kernel: $@ is ready'
-$(obj)/compressed/vmlinux: vmlinux FORCE
- $(Q)$(MAKE) $(build)=$(obj)/compressed $@
-
-$(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
- $(Q)$(MAKE) $(build)=$(obj)/bootp $@
-
-.PHONY: initrd
+.PHONY: initrd FORCE
initrd:
@test "$(INITRD_PHYS)" != "" || \
(echo This machine does not support INITRD; exit -1)
(echo You must specify INITRD; exit -1)
install: $(obj)/Image
- $(CONFIG_SHELL) $(obj)/install.sh \
- $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) \
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image System.map "$(INSTALL_PATH)"
zinstall: $(obj)/zImage
- $(CONFIG_SHELL) $(obj)/install.sh \
- $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION) \
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/zImage System.map "$(INSTALL_PATH)"
subdir- := bootp compressed
# linux/arch/arm/boot/bootp/Makefile
#
-ZSYSTEM = arch/arm/boot/zImage
-ZLDFLAGS =-p -X -T $(obj)/bootp.lds \
- --defsym initrd_addr=$(INITRD_PHYS) \
- --defsym params=$(PARAMS_PHYS)
+LDFLAGS_bootp :=-p --no-undefined -X \
+ --defsym initrd_phys=$(INITRD_PHYS) \
+ --defsym params_phys=$(PARAMS_PHYS) -T
+AFLAGS_initrd.o :=-DINITRD=\"$(INITRD)\"
-extra-y := bootp
+targets := bootp bootp.lds init.o kernel.o initrd.o
# Note that bootp.lds picks up kernel.o and initrd.o
-$(obj)/bootp: $(addprefix $(obj)/,init.o kernel.o initrd.o bootp.lds)
- $(LD) $(ZLDFLAGS) -o $@ $(obj)/init.o
+$(obj)/bootp: $(addprefix $(obj)/,bootp.lds init.o kernel.o initrd.o) FORCE
+ $(call if_changed,ld)
+ @:
-$(obj)/kernel.o: $(ZSYSTEM)
- $(LD) -r -s -o $@ -b binary $(ZSYSTEM)
+# kernel.o and initrd.o include a binary image using
+# .incbin, a dependency which is not tracked automatically
-$(obj)/initrd.o: $(INITRD)
- $(LD) -r -s -o $@ -b binary $(INITRD)
+$(obj)/kernel.o: arch/arm/boot/zImage FORCE
-.PHONY: $(INITRD) $(ZSYSTEM)
+$(obj)/initrd.o: $(INITRD) FORCE
+
+.PHONY: $(INITRD) FORCE
SECTIONS
{
. = 0;
- _text = .;
.text : {
_stext = .;
*(.start)
- arch/arm/boot/bootp/kernel.o
- . = ALIGN(32);
- initrd_start = .;
- arch/arm/boot/bootp/initrd.o
- initrd_len = . - initrd_start;
- . = ALIGN(32);
+ *(.text)
+ initrd_size = initrd_end - initrd_start;
_etext = .;
}
.type _start, #function
.globl _start
-_start: adr r12, kernel_start @ offset of kernel zImage
- ldr r4, [r12, #0x2c] @ length of zImage
- adr r13, data
- add r4, r4, r12 @ end of zImage, start of initrd
- ldmia r13!, {r5-r6} @ r5 = dest, r6 = length
+_start: adr r13, data
+	ldmia	r13!, {r4-r6}		@ r4 = src, r5 = dest, r6 = length
bl move @ move the initrd
/*
*/
movne r10, #0 @ terminator
movne r4, #2 @ Size of this entry (2 words)
- stmneia r8, {r4, r5, r10} @ Size, ATAG_CORE, terminator
+ stmneia r9, {r4, r5, r10} @ Size, ATAG_CORE, terminator
/*
* find the end of the tag list, and then add an INITRD tag on the end.
mov r5, #4 @ Size of initrd tag (4 words)
stmia r9, {r5, r6, r7, r8, r10}
- mov pc, r12 @ call kernel
+ b kernel_start @ call kernel
/*
* Move the block of memory length r6 from address r4 to address r5
.size _start, . - _start
.type data,#object
-data: .word initrd_addr @ destination initrd address
- .word initrd_len @ initrd size
+data: .word initrd_start @ source initrd address
+ .word initrd_phys @ destination initrd address
+ .word initrd_size @ initrd size
- .word 0x54410001 @ r4 = ATAG_CORE
- .word 0x54420005 @ r5 = ATAG_INITRD2
- .word initrd_addr @ r6
- .word initrd_len @ r7
- .word params @ r8
- .size data, . - _data
-
- .type initrd_start,#object
-
-kernel_start:
+ .word 0x54410001 @ r5 = ATAG_CORE
+ .word 0x54420005 @ r6 = ATAG_INITRD2
+ .word initrd_phys @ r7
+ .word initrd_size @ r8
+ .word params_phys @ r9
+ .size data, . - data
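The magic numbers stored above are the conventional ARM boot-tag identifiers (ATAG_CORE = 0x54410001, ATAG_INITRD2 = 0x54420005). A minimal C sketch of the tag layout the shim constructs; the struct and field names here are illustrative, not taken from this patch:

```c
#include <stdint.h>

/* Conventional ARM ATAG identifiers, as stored in the data block above. */
#define ATAG_CORE    0x54410001u
#define ATAG_INITRD2 0x54420005u
#define ATAG_NONE    0x00000000u

/* Each tag starts with a header; size is counted in 32-bit words
 * and includes the header itself. */
struct tag_header {
	uint32_t size;	/* in words: 4 for an initrd tag, as above */
	uint32_t tag;
};

/* ATAG_INITRD2 payload: physical start address and byte size,
 * matching the initrd_phys/initrd_size words the shim stores. */
struct tag_initrd {
	struct tag_header hdr;
	uint32_t start;
	uint32_t size;
};
```

The shim's `stmia r9, {r5, r6, r7, r8, r10}` writes exactly such a 4-word tag followed by the zero terminator.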
#
# create a compressed vmlinuz image from the original vmlinux
#
-# Note! ZTEXTADDR, ZBSSADDR and ZRELADDR are now exported
-# from arch/arm/boot/Makefile
-#
HEAD = head.o
OBJS = misc.o
#
ifeq ($(CONFIG_ARCH_ACORN),y)
OBJS += ll_char_wr.o font.o
-CFLAGS_misc.o := -DPARAMS_PHYS=$(PARAMS_PHYS)
endif
ifeq ($(CONFIG_ARCH_SHARK),y)
OBJS += head-epxa10db.o
endif
-ifeq ($(CONFIG_ARCH_FTVPCI),y)
-OBJS += head-ftvpci.o
-endif
-
ifeq ($(CONFIG_ARCH_L7200),y)
OBJS += head-l7200.o
endif
OBJS += ice-dcc.o
endif
-SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/LOAD_ADDR/$(ZRELADDR)/;s/BSS_START/$(ZBSSADDR)/
+#
+# We now have a PIC decompressor implementation. Decompressors running
+# from RAM should not define ZTEXTADDR. Decompressors running directly
+# from ROM or Flash must define ZTEXTADDR (preferably via the config)
+# FIXME: Previous assignment to ztextaddr-y is lost here. See SHARK
+ifeq ($(CONFIG_ZBOOT_ROM),y)
+ZTEXTADDR := $(CONFIG_ZBOOT_ROM_TEXT)
+ZBSSADDR := $(CONFIG_ZBOOT_ROM_BSS)
+else
+ZTEXTADDR := 0
+ZBSSADDR := ALIGN(4)
+endif
+
+SEDFLAGS = s/TEXT_START/$(ZTEXTADDR)/;s/BSS_START/$(ZBSSADDR)/
-targets := vmlinux vmlinux.lds piggy piggy.gz piggy.o \
- font.o head.o $(OBJS)
+targets := vmlinux vmlinux.lds piggy.gz piggy.o font.o \
+ head.o misc.o $(OBJS)
EXTRA_CFLAGS := -fpic
EXTRA_AFLAGS :=
-LDFLAGS_vmlinux := -p --no-undefined -X \
+# Supply ZRELADDR, INITRD_PHYS and PARAMS_PHYS to the decompressor via
+# linker symbols. We only define initrd_phys and params_phys if the
+# machine class defined the corresponding makefile variable.
+LDFLAGS_vmlinux := --defsym zreladdr=$(ZRELADDR)
+ifneq ($(INITRD_PHYS),)
+LDFLAGS_vmlinux += --defsym initrd_phys=$(INITRD_PHYS)
+endif
+ifneq ($(PARAMS_PHYS),)
+LDFLAGS_vmlinux += --defsym params_phys=$(PARAMS_PHYS)
+endif
+LDFLAGS_vmlinux += -p --no-undefined -X \
$(shell $(CC) $(CFLAGS) --print-libgcc-file-name) -T
+# Don't allow any static data in misc.o, which
+# would otherwise mess up our GOT table
+CFLAGS_misc.o := -Dstatic=
+
$(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/$(HEAD) $(obj)/piggy.o \
$(addprefix $(obj)/, $(OBJS)) FORCE
$(call if_changed,ld)
@:
-
-$(obj)/piggy: vmlinux FORCE
- $(call if_changed,objcopy)
-
-$(obj)/piggy.gz: $(obj)/piggy FORCE
+$(obj)/piggy.gz: $(obj)/../Image FORCE
$(call if_changed,gzip)
-LDFLAGS_piggy.o := -r -b binary
$(obj)/piggy.o: $(obj)/piggy.gz FORCE
- $(call if_changed,ld)
CFLAGS_font.o := -Dstatic=
$(obj)/font.o: $(FONTC)
mov r8, #0
#endif
+#ifdef CONFIG_SA1100_COLLIE
+ mov r7, #MACH_TYPE_COLLIE
+#endif
#ifdef CONFIG_SA1100_PFS168
@ REVISIT_PFS168: Temporary until firmware updated to use assigned machine number
mov r7, #MACH_TYPE_PFS168
mov r7, #MACH_TYPE_IQ80310
#endif
-#ifdef CONFIG_ARCH_ADI_EVB
- mov r7, #MACH_TYPE_ADI_EVB
-#endif
LC0: .word LC0 @ r1
.word __bss_start @ r2
.word _end @ r3
- .word _load_addr @ r4
+ .word zreladdr @ r4
.word _start @ r5
.word _got_start @ r6
.word _got_end @ ip
ENTRY(_start)
SECTIONS
{
- . = LOAD_ADDR;
- _load_addr = .;
-
. = TEXT_START;
_text = .;
*(.rodata.*)
*(.glue_7)
*(.glue_7t)
- input_data = .;
- arch/arm/boot/compressed/piggy.o
- input_data_end = .;
+ *(.piggydata)
. = ALIGN(4);
}
#
# User may have a custom install script
+if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
+if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
-if [ -x /sbin/installkernel ]; then
- exec /sbin/installkernel "$@"
-fi
-
-if [ "$2" = "zImage" ]; then
+if [ "$(basename $2)" = "zImage" ]; then
# Compressed install
echo "Installing compressed kernel"
- if [ -f $4/vmlinuz-$1 ]; then
- mv $4/vmlinuz-$1 $4/vmlinuz.old
- fi
-
- if [ -f $4/System.map-$1 ]; then
- mv $4/System.map-$1 $4/System.old
- fi
-
- cat $2 > $4/vmlinuz-$1
- cp $3 $4/System.map-$1
+ base=vmlinuz
else
# Normal install
echo "Installing normal kernel"
- if [ -f $4/vmlinux-$1 ]; then
- mv $4/vmlinux-$1 $4/vmlinux.old
- fi
+ base=vmlinux
+fi
- if [ -f $4/System.map ]; then
- mv $4/System.map $4/System.old
- fi
+if [ -f $4/$base-$1 ]; then
+ mv $4/$base-$1 $4/$base-$1.old
+fi
+cat $2 > $4/$base-$1
- cat $2 > $4/vmlinux-$1
- cp $3 $4/System.map
+# Install system map file
+if [ -f $4/System.map-$1 ]; then
+ mv $4/System.map-$1 $4/System.map-$1.old
fi
+cp $3 $4/System.map-$1
if [ -x /sbin/loadmap ]; then
- /sbin/loadmap --rdev /dev/ima
+ /sbin/loadmap
else
echo "You have to install it yourself"
fi
# Makefile for the linux kernel.
#
-obj-y += platform.o
obj-$(CONFIG_ARM_AMBA) += amba.o
obj-$(CONFIG_ICST525) += icst525.o
obj-$(CONFIG_SA1111) += sa1111.o
-obj-$(CONFIG_PCI_HOST_PLX90X0) += plx90x0.o
obj-$(CONFIG_PCI_HOST_VIA82C505) += via82c505.o
obj-$(CONFIG_DMABOUNCE) += dmabounce.o
+obj-$(CONFIG_TIMER_ACORN) += time-acorn.o
+obj-$(CONFIG_SHARP_LOCOMO) += locomo.o
}
}
- dma_addr = virt_to_bus(ptr);
+ dma_addr = virt_to_dma(dev, ptr);
if (device_info && dma_needs_bounce(dev, dma_addr, size)) {
struct safe_buffer *buf;
dev_dbg(dev,
"%s: unsafe buffer %p (phy=%p) mapped to %p (phy=%p)\n",
- __func__, buf->ptr, (void *) virt_to_bus(buf->ptr),
+ __func__, buf->ptr, (void *) virt_to_dma(dev, buf->ptr),
buf->safe, (void *) buf->safe_dma_addr);
if ((dir == DMA_TO_DEVICE) ||
dev_dbg(dev,
"%s: unsafe buffer %p (phy=%p) mapped to %p (phy=%p)\n",
- __func__, buf->ptr, (void *) virt_to_bus(buf->ptr),
+ __func__, buf->ptr, (void *) virt_to_dma(dev, buf->ptr),
buf->safe, (void *) buf->safe_dma_addr);
dev_dbg(dev,
"%s: unsafe buffer %p (phy=%p) mapped to %p (phy=%p)\n",
- __func__, buf->ptr, (void *) virt_to_bus(buf->ptr),
+ __func__, buf->ptr, (void *) virt_to_dma(dev, buf->ptr),
buf->safe, (void *) buf->safe_dma_addr);
DO_STATS ( device_info->bounce_count++ );
}
consistent_sync(buf->safe, size, dir);
} else {
- consistent_sync(bus_to_virt(dma_addr), size, dir);
+ consistent_sync(dma_to_virt(dev, dma_addr), size, dir);
}
}
* %-EBUSY physical address already marked in-use.
* %0 successful.
*/
-static int __init
+static int
__sa1111_probe(struct device *me, struct resource *mem, int irq)
{
struct sa1111 *sachip;
static int sa1111_probe(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
- struct resource *mem = NULL, *irq = NULL;
- int i;
+ struct resource *mem;
+ int irq;
- for (i = 0; i < pdev->num_resources; i++) {
- if (pdev->resource[i].flags & IORESOURCE_MEM)
- mem = &pdev->resource[i];
- if (pdev->resource[i].flags & IORESOURCE_IRQ)
- irq = &pdev->resource[i];
- }
- return __sa1111_probe(dev, mem, irq ? irq->start : NO_IRQ);
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!mem)
+ return -EINVAL;
+ irq = platform_get_irq(pdev, 0);
+
+ return __sa1111_probe(dev, mem, irq);
}
static int sa1111_remove(struct device *dev)
# CONFIG_IKCONFIG is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_IOSCHED_NOOP=y
# CONFIG_ARCH_CLPS7500 is not set
# CONFIG_ARCH_CLPS711X is not set
# CONFIG_ARCH_CO285 is not set
-CONFIG_ARCH_PXA=y
# CONFIG_ARCH_EBSA110 is not set
# CONFIG_ARCH_CAMELOT is not set
# CONFIG_ARCH_FOOTBRIDGE is not set
# CONFIG_ARCH_INTEGRATOR is not set
# CONFIG_ARCH_IOP3XX is not set
+# CONFIG_ARCH_IXP4XX is not set
# CONFIG_ARCH_L7200 is not set
+CONFIG_ARCH_PXA=y
# CONFIG_ARCH_RPC is not set
# CONFIG_ARCH_SA1100 is not set
-# CONFIG_ARCH_SHARK is not set
# CONFIG_ARCH_S3C2410 is not set
-# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_SHARK is not set
# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_OMAP is not set
# CONFIG_ARCH_VERSATILE_PB is not set
-#
-# CLPS711X/EP721X Implementations
-#
-
-#
-# Epxa10db
-#
-
-#
-# Footbridge Implementations
-#
-
-#
-# IOP3xx Implementation Options
-#
-# CONFIG_ARCH_IOP310 is not set
-# CONFIG_ARCH_IOP321 is not set
-
-#
-# IOP3xx Chipset Features
-#
-
#
# Intel PXA2xx Implementations
#
CONFIG_PXA27x=y
CONFIG_IWMMXT=y
-#
-# SA11x0 Implementations
-#
-
-#
-# TI OMAP Implementations
-#
-
-#
-# OMAP Core Type
-#
-
-#
-# OMAP Board Type
-#
-
-#
-# OMAP Feature Selections
-#
-
-#
-# S3C2410 Implementations
-#
-
-#
-# LH7A40X Implementations
-#
-
#
# Processor Type
#
CONFIG_FPE_NWFPE=y
# CONFIG_FPE_NWFPE_XP is not set
# CONFIG_FPE_FASTFPE is not set
+# CONFIG_VFP is not set
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_AOUT is not set
# CONFIG_BINFMT_MISC is not set
#
# Generic Driver Options
#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_FW_LOADER is not set
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_PM is not set
#
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
-# CONFIG_IDEDISK_STROKE is not set
CONFIG_BLK_DEV_IDECS=y
# CONFIG_BLK_DEV_IDECD is not set
# CONFIG_BLK_DEV_IDETAPE is not set
# IDE chipset support/bugfixes
#
# CONFIG_IDE_GENERIC is not set
+# CONFIG_IDE_ARM is not set
# CONFIG_BLK_DEV_IDEDMA is not set
# CONFIG_IDEDMA_AUTO is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
# CONFIG_VFAT_FS is not set
+CONFIG_FAT_DEFAULT_CODEPAGE=437
# CONFIG_NTFS_FS is not set
#
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
-# CONFIG_INTERMEZZO_FS is not set
# CONFIG_AFS_FS is not set
#
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
#
# Graphics support
#
-# CONFIG_FB is not set
+CONFIG_FB=y
+CONFIG_FB_PXA=y
+# CONFIG_FB_PXA_PARAMETERS is not set
+# CONFIG_FB_VIRTUAL is not set
#
# Console display driver support
# CONFIG_VGA_CONSOLE is not set
# CONFIG_MDA_CONSOLE is not set
CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_PCI_CONSOLE=y
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+CONFIG_LOGO_LINUX_MONO=y
+CONFIG_LOGO_LINUX_VGA16=y
+CONFIG_LOGO_LINUX_CLUT224=y
#
# Sound
time.o traps.o
obj-$(CONFIG_APM) += apm.o
-obj-$(CONFIG_ARCH_ACORN) += ecard.o time-acorn.o
-obj-$(CONFIG_ARCH_CLPS7500) += time-acorn.o
+obj-$(CONFIG_ARCH_ACORN) += ecard.o
obj-$(CONFIG_FOOTBRIDGE) += isa.o
obj-$(CONFIG_FIQ) += fiq.o
obj-$(CONFIG_MODULES) += armksyms.o module.o
*/
#include <linux/sched.h>
#include <linux/mm.h>
+#include <asm/mach/arch.h>
/*
* Make sure that the compiler and target are compatible.
DEFINE(PAGE_SZ, PAGE_SIZE);
BLANK();
DEFINE(SYS_ERROR0, 0x9f0000);
+ BLANK();
+ DEFINE(SIZEOF_MACHINE_DESC, sizeof(struct machine_desc));
return 0;
}
#endif
.endm
-#elif defined(CONFIG_ARCH_ADI_EVB)
-
- .macro addruart,rx
- mrc p15, 0, \rx, c1, c0
- tst \rx, #1 @ MMU enabled?
- mov \rx, #0x00400000 @ physical base address
- orrne \rx, \rx, #0xff000000 @ virtual base
- .endm
-
- .macro senduart,rd,rx
- strb \rd, [\rx]
- .endm
-
- .macro busyuart,rd,rx
-1002: ldrb \rd, [\rx, #0x5]
- and \rd, \rd, #0x60
- teq \rd, #0x60
- bne 1002b
- .endm
-
- .macro waituart,rd,rx
-1001: ldrb \rd, [\rx, #0x6]
- tst \rd, #0x10
- beq 1001b
- .endm
-
#elif defined(CONFIG_ARCH_IXP4XX)
.macro addruart,rx
.macro addruart,rx
mrc p15, 0, \rx, c1, c0
tst \rx, #1 @ MMU enabled?
- ldr \rx, =0x80000700 @ physical base address
+ mov \rx, #0x00000700 @ offset from base
+ orreq \rx, \rx, #0x80000000 @ physical base
orrne \rx, \rx, #0xf8000000 @ virtual base
.endm
} else
#endif
- for (i = 0; i < ECARD_RES_IOCSYNC - ECARD_RES_IOCSLOW; i++) {
+ for (i = 0; i <= ECARD_RES_IOCSYNC - ECARD_RES_IOCSLOW; i++) {
ec_set_resource(ec, i + ECARD_RES_IOCSLOW,
base + (slot << 14) + (i << 19),
PODSLOT_IOC_SIZE, IORESOURCE_MEM);
#include <asm/thread_info.h>
#include <asm/glue.h>
#include <asm/ptrace.h>
+#include <asm/vfpmacros.h>
#include "entry-header.S"
.macro irq_prio_table
.endm
-#elif defined(CONFIG_ARCH_IOP310) || defined(CONFIG_ARCH_ADIFCC)
+#elif defined(CONFIG_ARCH_IOP310)
.macro disable_fiq
.endm
mov pc, lr @ CP#7
mov pc, lr @ CP#8
mov pc, lr @ CP#9
+#ifdef CONFIG_VFP
+ b do_vfp @ CP#10 (VFP)
+ b do_vfp @ CP#11 (VFP)
+#else
mov pc, lr @ CP#10 (VFP)
mov pc, lr @ CP#11 (VFP)
+#endif
mov pc, lr @ CP#12
mov pc, lr @ CP#13
mov pc, lr @ CP#14 (Debug)
ldr r3, [r2, #TI_CPU_DOMAIN]!
stmia ip, {r4 - sl, fp, sp, lr} @ Store most regs on stack
mcr p15, 0, r3, c3, c0, 0 @ Set domain register
+#ifdef CONFIG_VFP
+ @ Always disable VFP so we can lazily save/restore the old
+ @ state. This occurs in the context of the previous thread.
+ VFPFMRX r4, FPEXC
+ bic r4, r4, #FPEXC_ENABLE
+ VFPFMXR FPEXC, r4
+#endif
ldmib r2, {r4 - sl, fp, sp, pc} @ Load all regs saved previously
__INIT
#include <asm/mach-types.h>
#include <asm/procinfo.h>
#include <asm/ptrace.h>
-#include <asm/mach/arch.h>
+#include <asm/constants.h>
/*
* We place the page tables 16K below TEXTADDR. Therefore, we must make sure
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/init_task.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
action->handler = handler;
action->flags = irq_flags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
memset(thread->used_cp, 0, sizeof(thread->used_cp));
memset(&tsk->thread.debug, 0, sizeof(struct debug_info));
fp_init(&thread->fpstate);
+#if defined(CONFIG_VFP)
+ vfp_flush_thread(&thread->vfpstate);
+#endif
}
void release_thread(struct task_struct *dead_task)
{
+#if defined(CONFIG_VFP)
+ vfp_release_thread(&dead_task->thread_info->vfpstate);
+#endif
}
asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
read_unlock(&tasklist_lock);
if (!child)
goto out;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out_tsk;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
#include <linux/init.h>
#include <linux/root_dev.h>
#include <linux/cpu.h>
+#include <linux/interrupt.h>
#include <asm/elf.h>
#include <asm/hardware.h>
#include <asm/mach/arch.h>
#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
#ifndef MEM_SIZE
#define MEM_SIZE (16*1024*1024)
char elf_platform[ELF_PLATFORM_SIZE];
EXPORT_SYMBOL(elf_platform);
-char saved_command_line[COMMAND_LINE_SIZE];
unsigned long phys_initrd_start __initdata = 0;
unsigned long phys_initrd_size __initdata = 0;
* Set up various architecture-specific pointers
*/
init_arch_irq = mdesc->init_irq;
+ init_arch_time = mdesc->init_time;
init_machine = mdesc->init_machine;
#ifdef CONFIG_VT
#include <linux/errno.h>
#include <linux/profile.h>
#include <linux/sysdev.h>
+#include <linux/timer.h>
#include <asm/hardware.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/leds.h>
+#include <asm/mach/time.h>
+
u64 jiffies_64 = INITIAL_JIFFIES;
EXPORT_SYMBOL(jiffies_64);
/* change this if you have some constant time drift */
#define USECS_PER_JIFFY (1000000/HZ)
-static int dummy_set_rtc(void)
-{
- return 0;
-}
/*
* hook for setting the RTC's idea of the current time.
*/
-int (*set_rtc)(void) = dummy_set_rtc;
+int (*set_rtc)(void);
static unsigned long dummy_gettimeoffset(void)
{
#endif
#ifdef CONFIG_LEDS_TIMER
-static void do_leds(void)
+static inline void do_leds(void)
{
static unsigned int count = 50;
}
}
#else
-#define do_leds()
+#define do_leds()
#endif
void do_gettimeofday(struct timeval *tv)
EXPORT_SYMBOL(do_settimeofday);
-static struct irqaction timer_irq = {
- .name = "timer",
- .flags = SA_INTERRUPT,
-};
+void timer_tick(struct pt_regs *regs)
+{
+ do_profile(regs);
+ do_leds();
+ do_set_rtc();
+ do_timer(regs);
+}
+
+void (*init_arch_time)(void);
+
+void __init time_init(void)
+{
+ init_arch_time();
+}
-/*
- * Include architecture specific code
- */
-#include <asm/arch/time.h>
/*
* Flush a region from virtual address 'r0' to virtual address 'r1'
- * _inclusive_. There is no alignment requirement on either address;
+ * _exclusive_. There is no alignment requirement on either address;
* user space does not need to know the hardware cache layout.
*
* r2 contains flags. It should ALWAYS be passed as ZERO until it
extern void clps711x_map_io(void);
extern void clps711x_init_irq(void);
+extern void clps711x_init_time(void);
/*
* The on-chip registers are given a size of 1MB so that a section can
BOOT_PARAMS(0xc0020000)
MAPIO(autcpu12_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
extern void clps711x_init_irq(void);
extern void clps711x_map_io(void);
+extern void clps711x_init_time(void);
/*
* Map the CS89712 Ethernet port. That should be moved to the
BOOT_PARAMS(0xc0000100)
MAPIO(cdb89712_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
static int cdb89712_hw_init(void)
#include <asm/mach/map.h>
extern void clps711x_init_irq(void);
+extern void clps711x_init_time(void);
static struct map_desc ceiva_io_desc[] __initdata = {
/* virtual, physical, length, type */
BOOT_PARAMS(0xc0000100)
MAPIO(ceiva_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
extern void clps711x_init_irq(void);
extern void clps711x_map_io(void);
+extern void clps711x_init_time(void);
static void __init
fixup_clep7312(struct machine_desc *desc, struct tag *tags,
FIXUP(fixup_clep7312)
MAPIO(clps711x_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
extern void clps711x_init_irq(void);
extern void edb7211_map_io(void);
+extern void clps711x_init_time(void);
static void __init
fixup_edb7211(struct machine_desc *desc, struct tag *tags,
FIXUP(fixup_edb7211)
MAPIO(edb7211_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
extern void clps711x_map_io(void);
extern void clps711x_init_irq(void);
+extern void clps711x_init_time(void);
struct meminfo memmap = {
.nr_banks = 1,
FIXUP(fortunet_fixup)
MAPIO(clps711x_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
extern void clps711x_init_irq(void);
extern void clps711x_map_io(void);
+extern void clps711x_init_time(void);
/*
* Map the P720T system PLD. It occupies two address spaces:
FIXUP(fixup_p720t)
MAPIO(p720t_map_io)
INITIRQ(clps711x_init_irq)
+ INITTIME(clps711x_init_time)
MACHINE_END
static int p720t_hw_init(void)
*/
#include <linux/timex.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/leds.h>
#include <asm/io.h>
#include <asm/hardware/clps7111.h>
-extern unsigned long (*gettimeoffset)(void);
+#include <asm/mach/time.h>
+
/*
* gettimeoffset() returns time since last timer tick, in usecs.
return (hwticks * (tick_nsec / 1000)) / LATCH;
}
-void __init clps711x_setup_timer(void)
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t
+p720t_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction clps711x_timer_irq = {
+ .name = "CLPS711x Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = p720t_timer_interrupt
+};
+
+void __init clps711x_init_time(void)
{
struct timespec tv;
unsigned int syscon;
- gettimeoffset = clps711x_gettimeoffset;
-
syscon = clps_readl(SYSCON1);
syscon |= SYSCON1_TC2S | SYSCON1_TC2M;
clps_writel(syscon, SYSCON1);
clps_writel(LATCH-1, TC2D); /* 512kHz / 100Hz - 1 */
+ setup_irq(IRQ_TC2OI, &clps711x_timer_irq);
+ gettimeoffset = clps711x_gettimeoffset;
+
tv.tv_nsec = 0;
tv.tv_sec = clps_readl(RTCDR);
do_settimeofday(&tv);
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/list.h>
-#include <linux/timer.h>
+#include <linux/sched.h>
#include <linux/init.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
#include <asm/hardware.h>
#include <asm/hardware/iomd.h>
.unmask = cl7500_no_action,
};
-static struct irqaction irq_isa = { no_action, 0, 0, "isa", NULL, NULL };
+static struct irqaction irq_isa = { no_action, 0, CPU_MASK_NONE, "isa", NULL, NULL };
static void __init clps7500_init_irq(void)
{
iotable_init(cl7500_io_desc, ARRAY_SIZE(cl7500_io_desc));
}
+extern void ioctime_init(void);
+
+static irqreturn_t
+clps7500_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+
+ /* Why not use the do_leds() interface? */
+ {
+ /* Twinkle the lights. */
+ static int count, state = 0xff00;
+ if (count-- == 0) {
+ state ^= 0x100;
+ count = 25;
+ *((volatile unsigned int *)LED_ADDRESS) = state;
+ }
+ }
+ return IRQ_HANDLED;
+}
+
+static struct irqaction clps7500_timer_irq = {
+ .name = "CLPS7500 Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = clps7500_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt.
+ */
+void __init clps7500_init_time(void)
+{
+ ioctime_init();
+
+ setup_irq(IRQ_TIMER, &clps7500_timer_irq);
+}
+
MACHINE_START(CLPS7500, "CL-PS7500")
MAINTAINER("Philip Blundell")
BOOT_MEM(0x10000000, 0x03000000, 0xe0000000)
MAPIO(clps7500_map_io)
INITIRQ(clps7500_init_irq)
+ INITTIME(clps7500_init_time)
MACHINE_END
*/
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/interrupt.h>
#include <linux/init.h>
#include <asm/hardware.h>
#include <asm/mach/irq.h>
#include <asm/mach/map.h>
+#include <asm/mach/time.h>
+
#define IRQ_MASK 0xfe000000 /* read */
#define IRQ_MSET 0xfe000000 /* write */
#define IRQ_STAT 0xff000000 /* read */
iotable_init(ebsa110_io_desc, ARRAY_SIZE(ebsa110_io_desc));
}
+
+#define PIT_CTRL (PIT_BASE + 0x0d)
+#define PIT_T2 (PIT_BASE + 0x09)
+#define PIT_T1 (PIT_BASE + 0x05)
+#define PIT_T0 (PIT_BASE + 0x01)
+
+/*
+ * This is the rate at which your MCLK signal toggles (in Hz)
+ * This was measured on a 10 digit frequency counter sampling
+ * over 1 second.
+ */
+#define MCLK 47894000
+
+/*
+ * This is the rate at which the PIT timers get clocked
+ */
+#define CLKBY7 (MCLK / 7)
+
+/*
+ * This is the counter value. We tick at 200Hz on this platform.
+ */
+#define COUNT ((CLKBY7 + (HZ / 2)) / HZ)
+
+/*
+ * Get the time offset from the system PIT. Note that if we have missed an
+ * interrupt, then the PIT counter will roll over (ie, be negative).
+ * This actually works out to be convenient.
+ */
+static unsigned long ebsa110_gettimeoffset(void)
+{
+ unsigned long offset, count;
+
+ __raw_writeb(0x40, PIT_CTRL);
+ count = __raw_readb(PIT_T1);
+ count |= __raw_readb(PIT_T1) << 8;
+
+ /*
+ * If count > COUNT, make the number negative.
+ */
+ if (count > COUNT)
+ count |= 0xffff0000;
+
+ offset = COUNT;
+ offset -= count;
+
+ /*
+ * `offset' is in units of timer counts. Convert
+ * offset to units of microseconds.
+ */
+ offset = offset * (1000000 / HZ) / COUNT;
+
+ return offset;
+}
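The roll-over handling above can be exercised in isolation. A standalone sketch using 32-bit arithmetic; the constants mirror (MCLK / 7) / HZ from the code above (34210 at HZ = 200), and the helper name is ours:

```c
#include <stdint.h>

#define MODEL_HZ    200u
#define MODEL_COUNT 34210u	/* (47894000 / 7) / 200, as in the code above */

/* Mirror of ebsa110_gettimeoffset(): a raw 16-bit read greater than
 * COUNT means the down-counter wrapped, so sign-extend it; the
 * unsigned subtraction then yields COUNT plus the ticks past reload. */
static uint32_t model_offset(uint32_t count)
{
	if (count > MODEL_COUNT)
		count |= 0xffff0000u;
	return (uint32_t)(MODEL_COUNT - count) * (1000000u / MODEL_HZ) / MODEL_COUNT;
}
```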
+
+static irqreturn_t
+ebsa110_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ u32 count;
+
+ /* latch and read timer 1 */
+ __raw_writeb(0x40, PIT_CTRL);
+ count = __raw_readb(PIT_T1);
+ count |= __raw_readb(PIT_T1) << 8;
+
+ count += COUNT;
+
+ __raw_writeb(count & 0xff, PIT_T1);
+ __raw_writeb(count >> 8, PIT_T1);
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction ebsa110_timer_irq = {
+ .name = "EBSA110 Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = ebsa110_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt.
+ */
+static void __init ebsa110_init_time(void)
+{
+ /*
+ * Timer 1, mode 2, LSB/MSB
+ */
+ __raw_writeb(0x70, PIT_CTRL);
+ __raw_writeb(COUNT & 0xff, PIT_T1);
+ __raw_writeb(COUNT >> 8, PIT_T1);
+
+ gettimeoffset = ebsa110_gettimeoffset;
+
+ setup_irq(IRQ_EBSA110_TIMER0, &ebsa110_timer_irq);
+}
+
MACHINE_START(EBSA110, "EBSA110")
MAINTAINER("Russell King")
BOOT_MEM(0x00000000, 0xe0000000, 0xe0000000)
SOFT_REBOOT
MAPIO(ebsa110_map_io)
INITIRQ(ebsa110_init_irq)
+ INITTIME(ebsa110_init_time)
MACHINE_END
((p) >> 3) == (0x2f8 >> 3) || \
((p) >> 3) == (0x378 >> 3))
-u8 __inb(int port)
+/*
+ * We're addressing an 8 or 16-bit peripheral which transfers
+ * odd addresses on the low ISA byte lane.
+ */
+u8 __inb8(unsigned int port)
{
u32 ret;
return ret;
}
-u16 __inw(int port)
+/*
+ * We're addressing a 16-bit peripheral which transfers odd
+ * addresses on the high ISA byte lane.
+ */
+u8 __inb16(unsigned int port)
+{
+ u32 ret;
+
+ /*
+ * The SuperIO registers use sane addressing techniques...
+ */
+ if (SUPERIO_PORT(port))
+ ret = __raw_readb(ISAIO_BASE + (port << 2));
+ else {
+ u32 a = ISAIO_BASE + ((port & ~1) << 1);
+
+ /*
+ * Shame nothing else does
+ */
+ ret = __raw_readb(a + (port & 1));
+ }
+ return ret;
+}
+
+u16 __inw(unsigned int port)
{
u32 ret;
return ret;
}
-u32 __inl(int port)
+/*
+ * Fake a 32-bit read with two 16-bit reads. Needed for 3c589.
+ */
+u32 __inl(unsigned int port)
{
- BUG();
- return 0;
+ u32 a;
+
+ if (SUPERIO_PORT(port) || port & 3)
+ BUG();
+
+ a = ISAIO_BASE + (port << 1);
+
+ return __raw_readw(a) | __raw_readw(a + 4) << 16;
}
-EXPORT_SYMBOL(__inb);
+EXPORT_SYMBOL(__inb8);
+EXPORT_SYMBOL(__inb16);
EXPORT_SYMBOL(__inw);
EXPORT_SYMBOL(__inl);
-void __outb(u8 val, int port)
+void __outb8(u8 val, unsigned int port)
{
/*
* The SuperIO registers use sane addressing techniques...
}
}
-void __outw(u16 val, int port)
+void __outb16(u8 val, unsigned int port)
+{
+ /*
+ * The SuperIO registers use sane addressing techniques...
+ */
+ if (SUPERIO_PORT(port))
+ __raw_writeb(val, ISAIO_BASE + (port << 2));
+ else {
+ u32 a = ISAIO_BASE + ((port & ~1) << 1);
+
+ /*
+ * Shame nothing else does
+ */
+ __raw_writeb(val, a + (port & 1));
+ }
+}
+
+void __outw(u16 val, unsigned int port)
{
u32 off;
if (SUPERIO_PORT(port))
off = port << 2;
else {
- off = (port & ~1) << 1;
+ off = port << 1;
if (port & 1)
BUG();
__raw_writew(val, ISAIO_BASE + off);
}
-void __outl(u32 val, int port)
+void __outl(u32 val, unsigned int port)
{
BUG();
}
-EXPORT_SYMBOL(__outb);
+EXPORT_SYMBOL(__outb8);
+EXPORT_SYMBOL(__outb16);
EXPORT_SYMBOL(__outw);
EXPORT_SYMBOL(__outl);
EXPORT_SYMBOL(outsw);
EXPORT_SYMBOL(insw);
+/*
+ * We implement these as 16-bit insw/outsw, mainly for
+ * 3c589 cards.
+ */
void outsl(unsigned int port, const void *from, int len)
{
- panic("outsl not supported on this architecture");
+ u32 off = port << 1;
+
+ if (SUPERIO_PORT(port) || port & 3)
+ BUG();
+
+ __raw_writesw(ISAIO_BASE + off, from, len << 1);
}
void insl(unsigned int port, void *from, int len)
{
- panic("insl not supported on this architecture");
+ u32 off = port << 1;
+
+ if (SUPERIO_PORT(port) || port & 3)
+ BUG();
+
+ __raw_readsw(ISAIO_BASE + off, from, len << 1);
}
+
+EXPORT_SYMBOL(outsl);
+EXPORT_SYMBOL(insl);
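The 3c589 workaround above builds one 32-bit value out of two 16-bit bus reads. A trivial C model of that combination (helper name ours); the low half-word comes from the lower address, as in __inl():

```c
#include <stdint.h>

/* Combine two 16-bit reads into the 32-bit value __inl() fakes:
 * low half from the lower address, high half shifted up. */
static uint32_t combine16(uint16_t lo, uint16_t hi)
{
	return (uint32_t)lo | ((uint32_t)hi << 16);
}
```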
extern void epxa10db_map_io(void);
extern void epxa10db_init_irq(void);
+extern void epxa10db_init_time(void);
MACHINE_START(CAMELOT, "Altera Epxa10db")
MAINTAINER("Altera Corporation")
BOOT_MEM(0x00000000, 0x7fffc000, 0xffffc000)
MAPIO(epxa10db_map_io)
INITIRQ(epxa10db_init_irq)
+ INITTIME(epxa10db_init_time)
MACHINE_END
+
*/
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
#include <asm/hardware.h>
+#include <asm/system.h>
+#include <asm/leds.h>
+#include <asm/mach/time.h>
-extern int (*set_rtc)(void);
+#define TIMER00_TYPE (volatile unsigned int*)
+#include <asm/arch/timer00.h>
static int epxa10db_set_rtc(void)
{
}
__initcall(epxa10db_rtc_init);
+
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t
+epxa10db_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+
+ // ...clear the interrupt
+ *TIMER0_CR(IO_ADDRESS(EXC_TIMER00_BASE))|=TIMER0_CR_CI_MSK;
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction epxa10db_timer_irq = {
+ .name = "Excalibur Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = epxa10db_timer_interrupt
+};
+
+/*
+ * Set up the timer interrupt.
+ */
+void __init epxa10db_init_time(void)
+{
+ /* Start the timer */
+ *TIMER0_LIMIT(IO_ADDRESS(EXC_TIMER00_BASE))=(unsigned int)(EXC_AHB2_CLK_FREQUENCY/200);
+ *TIMER0_PRESCALE(IO_ADDRESS(EXC_TIMER00_BASE))=1;
+ *TIMER0_CR(IO_ADDRESS(EXC_TIMER00_BASE))=TIMER0_CR_IE_MSK | TIMER0_CR_S_MSK;
+
+ setup_irq(IRQ_TIMER0, &epxa10db_timer_irq);
+}
+
# Object file lists.
-obj-y := arch.o dc21285.o dma.o irq.o isa-irq.o mm.o
+obj-y := arch.o dc21285.o dma.o irq.o isa-irq.o mm.o time.o
obj-m :=
obj-n :=
obj- :=
extern void footbridge_map_io(void);
extern void footbridge_init_irq(void);
+extern void footbridge_init_time(void);
unsigned int mem_fclk_21285 = 50000000;
VIDEO(0x000a0000, 0x000bffff)
MAPIO(footbridge_map_io)
INITIRQ(footbridge_init_irq)
+ INITTIME(footbridge_init_time)
MACHINE_END
#endif
FIXUP(fixup_netwinder)
MAPIO(footbridge_map_io)
INITIRQ(footbridge_init_irq)
+ INITTIME(footbridge_init_time)
MACHINE_END
#endif
FIXUP(fixup_cats)
MAPIO(footbridge_map_io)
INITIRQ(footbridge_init_irq)
+ INITTIME(footbridge_init_time)
MACHINE_END
#endif
FIXUP(fixup_coebsa285)
MAPIO(footbridge_map_io)
INITIRQ(footbridge_init_irq)
+ INITTIME(footbridge_init_time)
MACHINE_END
#endif
BOOT_PARAMS(0x00000100)
MAPIO(footbridge_map_io)
INITIRQ(footbridge_init_irq)
+ INITTIME(footbridge_init_time)
MACHINE_END
#endif
# Object file lists.
-obj-y := core.o lm.o time.o
+obj-y := clock.o core.o lm.o time.o
obj-$(CONFIG_ARCH_INTEGRATOR_AP) += integrator_ap.o
obj-$(CONFIG_ARCH_INTEGRATOR_CP) += integrator_cp.o
#include <linux/init.h>
#include <linux/device.h>
#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
#include <asm/hardware.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <asm/hardware/amba.h>
#include <asm/arch/cm.h>
+#include <asm/system.h>
+#include <asm/leds.h>
+#include <asm/mach/time.h>
static struct amba_device rtc_device = {
.dev = {
}
EXPORT_SYMBOL(cm_control);
+
+/*
+ * Where is the timer (VA)?
+ */
+#define TIMER0_VA_BASE (IO_ADDRESS(INTEGRATOR_CT_BASE)+0x00000000)
+#define TIMER1_VA_BASE (IO_ADDRESS(INTEGRATOR_CT_BASE)+0x00000100)
+#define TIMER2_VA_BASE (IO_ADDRESS(INTEGRATOR_CT_BASE)+0x00000200)
+#define VA_IC_BASE IO_ADDRESS(INTEGRATOR_IC_BASE)
+
+/*
+ * How long is the timer interval?
+ */
+#define TIMER_INTERVAL (TICKS_PER_uSEC * mSEC_10)
+#if TIMER_INTERVAL >= 0x100000
+#define TICKS2USECS(x) (256 * (x) / TICKS_PER_uSEC)
+#elif TIMER_INTERVAL >= 0x10000
+#define TICKS2USECS(x) (16 * (x) / TICKS_PER_uSEC)
+#else
+#define TICKS2USECS(x) ((x) / TICKS_PER_uSEC)
+#endif
+
+/*
+ * What does it look like?
+ */
+typedef struct TimerStruct {
+ unsigned long TimerLoad;
+ unsigned long TimerValue;
+ unsigned long TimerControl;
+ unsigned long TimerClear;
+} TimerStruct_t;
+
+extern unsigned long (*gettimeoffset)(void);
+
+static unsigned long timer_reload;
+
+/*
+ * Returns number of usecs since last clock interrupt. Note that interrupts
+ * will have been disabled by do_gettimeoffset()
+ */
+static unsigned long integrator_gettimeoffset(void)
+{
+ volatile TimerStruct_t *timer1 = (TimerStruct_t *)TIMER1_VA_BASE;
+ unsigned long ticks1, ticks2, status;
+
+ /*
+ * Get the current number of ticks. Note that there is a race
+ * condition between us reading the timer and checking for
+ * an interrupt. We get around this by ensuring that the
+ * counter has not reloaded between our two reads.
+ */
+ ticks2 = timer1->TimerValue & 0xffff;
+ do {
+ ticks1 = ticks2;
+ status = __raw_readl(VA_IC_BASE + IRQ_RAW_STATUS);
+ ticks2 = timer1->TimerValue & 0xffff;
+ } while (ticks2 > ticks1);
+
+ /*
+ * Number of ticks since last interrupt.
+ */
+ ticks1 = timer_reload - ticks2;
+
+ /*
+ * Interrupt pending? If so, we've reloaded once already.
+ */
+ if (status & (1 << IRQ_TIMERINT1))
+ ticks1 += timer_reload;
+
+ /*
+ * Convert the ticks to usecs
+ */
+ return TICKS2USECS(ticks1);
+}
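The re-read loop above is a generic pattern for sampling a free-running down-counter that may reload between reads. A toy model with a hypothetical read sequence (a reload happens between the first two reads):

```c
#include <stdint.h>

/* Hypothetical successive counter reads: 5, then 9000 (the counter
 * reloaded in between), then 8990. */
static uint16_t fake_reads[] = { 5, 9000, 8990 };
static int read_idx;

static uint16_t read_counter(void)
{
	return fake_reads[read_idx++];
}

/* Same shape as the ticks1/ticks2 loop above: keep reading until
 * the counter did not jump upwards between two samples. */
static uint16_t stable_read(void)
{
	uint16_t t1, t2 = read_counter();

	do {
		t1 = t2;
		t2 = read_counter();
	} while (t2 > t1);

	return t2;
}
```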
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t
+integrator_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ volatile TimerStruct_t *timer1 = (volatile TimerStruct_t *)TIMER1_VA_BASE;
+
+ // ...clear the interrupt
+ timer1->TimerClear = 1;
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction integrator_timer_irq = {
+ .name = "Integrator Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = integrator_timer_interrupt
+};
+
+/*
+ * Set up the timer interrupt.
+ */
+void __init integrator_time_init(unsigned long reload, unsigned int ctrl)
+{
+ volatile TimerStruct_t *timer0 = (volatile TimerStruct_t *)TIMER0_VA_BASE;
+ volatile TimerStruct_t *timer1 = (volatile TimerStruct_t *)TIMER1_VA_BASE;
+ volatile TimerStruct_t *timer2 = (volatile TimerStruct_t *)TIMER2_VA_BASE;
+ unsigned int timer_ctrl = 0x80 | 0x40; /* periodic */
+
+ timer_reload = reload;
+ timer_ctrl |= ctrl;
+
+ if (timer_reload > 0x100000) {
+ timer_reload >>= 8;
+ timer_ctrl |= 0x08; /* /256 */
+ } else if (timer_reload > 0x010000) {
+ timer_reload >>= 4;
+ timer_ctrl |= 0x04; /* /16 */
+ }
+
+ /*
+ * Initialise to a known state (all timers off)
+ */
+ timer0->TimerControl = 0;
+ timer1->TimerControl = 0;
+ timer2->TimerControl = 0;
+
+ timer1->TimerLoad = timer_reload;
+ timer1->TimerValue = timer_reload;
+ timer1->TimerControl = timer_ctrl;
+
+ /*
+ * Make irqs happen for the system timer
+ */
+ setup_irq(IRQ_TIMERINT1, &integrator_timer_irq);
+ gettimeoffset = integrator_gettimeoffset;
+}
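integrator_time_init() shifts the reload value down and selects a /16 or /256 prescaler so the count fits the 16-bit timer. The selection logic can be lifted out and checked on its own (a sketch; the 0x04/0x08 control bits mirror the code above, and the helper name is illustrative):

```c
#include <assert.h>

/* Pick a prescaler so "reload" fits the 16-bit timer, mirroring
 * integrator_time_init(); returns the divisor that was applied. */
static unsigned pick_prescale(unsigned long *reload, unsigned *ctrl)
{
	if (*reload > 0x100000) {
		*reload >>= 8;		/* /256 */
		*ctrl |= 0x08;
		return 256;
	} else if (*reload > 0x010000) {
		*reload >>= 4;		/* /16 */
		*ctrl |= 0x04;
		return 16;
	}
	return 1;			/* no prescaling needed */
}
```

The thresholds deliberately leave headroom: a value just over 0x10000 divided by 16 is well inside the 16-bit range.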
#include <asm/arch/impd1.h>
#include <asm/sizes.h>
+#include "clock.h"
+
static int module_id;
module_param_named(lmid, module_id, int, 0444);
MODULE_PARM_DESC(lmid, "logic module stack position");
struct impd1_module {
- void *base;
+ void *base;
+ struct clk vcos[2];
};
static const struct icst525_params impd1_vco_params = {
.rd_max = 120,
};
-void impd1_set_vco(struct device *dev, int vconr, unsigned long period)
+static void impd1_setvco(struct clk *clk, struct icst525_vco vco)
{
- struct impd1_module *impd1 = dev_get_drvdata(dev);
- struct icst525_vco vco;
+ struct impd1_module *impd1 = clk->data;
+ int vconr = clk - impd1->vcos;
u32 val;
- vco = icst525_ps_to_vco(&impd1_vco_params, period);
-
- pr_debug("Guessed VCO reg params: S=%d R=%d V=%d\n",
- vco.s, vco.r, vco.v);
-
val = vco.v | (vco.r << 9) | (vco.s << 16);
writel(0xa05f, impd1->base + IMPD1_LOCK);
switch (vconr) {
- case 1:
+ case 0:
writel(val, impd1->base + IMPD1_OSC1);
break;
- case 2:
+ case 1:
writel(val, impd1->base + IMPD1_OSC2);
break;
}
#endif
}
-EXPORT_SYMBOL(impd1_set_vco);
-
void impd1_tweak_control(struct device *dev, u32 mask, u32 val)
{
struct impd1_module *impd1 = dev_get_drvdata(dev);
}
};
+static const char *impd1_vconames[2] = {
+ "CLCDCLK",
+ "AUXVCO2",
+};
+
static int impd1_probe(struct lm_device *dev)
{
struct impd1_module *impd1;
printk("IM-PD1 found at 0x%08lx\n", dev->resource.start);
+ for (i = 0; i < ARRAY_SIZE(impd1->vcos); i++) {
+ impd1->vcos[i].owner = THIS_MODULE;
+ impd1->vcos[i].name = impd1_vconames[i];
+ impd1->vcos[i].params = &impd1_vco_params;
+ impd1->vcos[i].data = impd1;
+ impd1->vcos[i].setvco = impd1_setvco;
+
+ clk_register(&impd1->vcos[i]);
+ }
+
for (i = 0; i < ARRAY_SIZE(impd1_devs); i++) {
struct impd1_device *idev = impd1_devs + i;
struct amba_device *d;
{
struct impd1_module *impd1 = lm_get_drvdata(dev);
struct list_head *l, *n;
+ int i;
list_for_each_safe(l, n, &dev->dev.children) {
struct device *d = list_to_dev(l);
device_unregister(d);
}
+ for (i = 0; i < ARRAY_SIZE(impd1->vcos); i++)
+ clk_unregister(&impd1->vcos[i]);
+
lm_set_drvdata(dev, NULL);
iounmap(impd1->base);
unsigned long sc_dec;
int i;
- platform_add_device(&cfi_flash_device);
+ platform_device_register(&cfi_flash_device);
sc_dec = readl(VA_SC_BASE + INTEGRATOR_SC_DEC_OFFSET);
for (i = 0; i < 4; i++) {
}
}
+static void ap_time_init(void)
+{
+ integrator_time_init(1000000 * TICKS_PER_uSEC / HZ, 0);
+}
+
MACHINE_START(INTEGRATOR, "ARM-Integrator")
MAINTAINER("ARM Ltd/Deep Blue Solutions Ltd")
BOOT_MEM(0x00000000, 0x16000000, 0xf1600000)
BOOT_PARAMS(0x00000100)
MAPIO(ap_map_io)
INITIRQ(ap_init_irq)
+ INITTIME(ap_time_init)
INIT_MACHINE(ap_init)
MACHINE_END
#include <asm/mach-types.h>
#include <asm/hardware/amba.h>
#include <asm/hardware/amba_kmi.h>
+#include <asm/hardware/icst525.h>
#include <asm/arch/lm.h>
#include <asm/mach/mmc.h>
#include <asm/mach/map.h>
+#include "clock.h"
+
#define INTCP_PA_MMC_BASE 0x1c000000
#define INTCP_PA_AACI_BASE 0x1d000000
#define INTCP_PA_FLASH_BASE 0x24000000
#define INTCP_FLASH_SIZE SZ_32M
+#define INTCP_PA_CLCD_BASE 0xc0000000
+
#define INTCP_VA_CIC_BASE 0xf1000040
#define INTCP_VA_PIC_BASE 0xf1400000
#define INTCP_VA_SIC_BASE 0xfca00000
pic_unmask_irq(IRQ_CP_CPPLDINT);
}
+/*
+ * Clock handling
+ */
+#define CM_LOCK (IO_ADDRESS(INTEGRATOR_HDR_BASE)+INTEGRATOR_HDR_LOCK_OFFSET)
+#define CM_AUXOSC (IO_ADDRESS(INTEGRATOR_HDR_BASE)+0x1c)
+
+static const struct icst525_params cp_auxvco_params = {
+ .ref = 24000,
+ .vco_max = 320000,
+ .vd_min = 8,
+ .vd_max = 263,
+ .rd_min = 3,
+ .rd_max = 65,
+};
+
+static void cp_auxvco_set(struct clk *clk, struct icst525_vco vco)
+{
+ u32 val;
+
+ val = readl(CM_AUXOSC) & ~0x7ffff;
+ val |= vco.v | (vco.r << 9) | (vco.s << 16);
+
+ writel(0xa05f, CM_LOCK);
+ writel(val, CM_AUXOSC);
+ writel(0, CM_LOCK);
+}
+
+static struct clk cp_clcd_clk = {
+ .name = "CLCDCLK",
+ .params = &cp_auxvco_params,
+ .setvco = cp_auxvco_set,
+};
+
+static struct clk cp_mmci_clk = {
+ .name = "MCLK",
+ .rate = 14745600,
+};
+
/*
* Flash handling.
*/
}
static struct mmc_platform_data mmc_data = {
- .mclk = 33000000,
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
.status = mmc_status,
};
.periphid = 0,
};
+static struct amba_device clcd_device = {
+ .dev = {
+ .bus_id = "mb:c0",
+ .coherent_dma_mask = ~0,
+ },
+ .res = {
+ .start = INTCP_PA_CLCD_BASE,
+ .end = INTCP_PA_CLCD_BASE + SZ_4K - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ .dma_mask = ~0,
+ .irq = { IRQ_CP_CLCDCINT, NO_IRQ },
+ .periphid = 0,
+};
+
static struct amba_device *amba_devs[] __initdata = {
&mmc_device,
&aaci_device,
+ &clcd_device,
};
static void __init intcp_init(void)
{
int i;
+ clk_register(&cp_clcd_clk);
+ clk_register(&cp_mmci_clk);
+
platform_add_devices(intcp_devs, ARRAY_SIZE(intcp_devs));
for (i = 0; i < ARRAY_SIZE(amba_devs); i++) {
}
}
+#define TIMER_CTRL_IE (1 << 5) /* Interrupt Enable */
+
+static void __init intcp_init_time(void)
+{
+ integrator_time_init(1000000 / HZ, TIMER_CTRL_IE);
+}
+
MACHINE_START(CINTEGRATOR, "ARM-IntegratorCP")
MAINTAINER("ARM Ltd/Deep Blue Solutions Ltd")
BOOT_MEM(0x00000000, 0x16000000, 0xf1600000)
BOOT_PARAMS(0x00000100)
MAPIO(intcp_map_io)
INITIRQ(intcp_init_irq)
+ INITTIME(intcp_init_time)
INIT_MACHINE(intcp_init)
MACHINE_END
#ifdef CONFIG_ARCH_IQ80321
extern void iq80321_map_io(void);
extern void iop321_init_irq(void);
+extern void iop321_init_time(void);
#endif
#ifdef CONFIG_ARCH_IQ80310
FIXUP(fixup_iop321)
MAPIO(iq80321_map_io)
INITIRQ(iop321_init_irq)
+ INITTIME(iop321_init_time)
MACHINE_END
#else
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
+
#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
static unsigned long iop321_gettimeoffset(void)
{
asm volatile("mcr p6, 0, %0, c6, c1, 0" : : "r" (tisr));
- do_timer(regs);
+ timer_tick(regs);
return IRQ_HANDLED;
}
-extern unsigned long (*gettimeoffset)(void);
-
-static struct irqaction timer_irq = {
- .name = "timer",
+static struct irqaction iop321_timer_irq = {
+ .name = "IOP321 Timer Tick",
.handler = iop321_timer_interrupt,
+ .flags = SA_INTERRUPT
};
extern int setup_arm_irq(int, struct irqaction*);
-void __init time_init(void)
+void __init iop321_init_time(void)
{
u32 timer_ctl;
u32 latch = LATCH;
gettimeoffset = iop321_gettimeoffset;
- setup_irq(IRQ_IOP321_TIMER0, &timer_irq);
+ setup_irq(IRQ_IOP321_TIMER0, &iop321_timer_irq);
timer_ctl = IOP321_TMR_EN | IOP321_TMR_PRIVILEGED | IOP321_TMR_RELOAD |
IOP321_TMR_RATIO_1_1;
EXPORT_SYMBOL(pci_set_dma_mask);
EXPORT_SYMBOL(pci_dac_set_dma_mask);
EXPORT_SYMBOL(pci_set_consistent_dma_mask);
+EXPORT_SYMBOL(ixp4xx_pci_read);
+EXPORT_SYMBOL(ixp4xx_pci_write);
#include <asm/mach/map.h>
#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
/*************************************************************************
* Catch up with the real idea of time
*/
do {
- do_timer(regs);
+ timer_tick(regs);
last_jiffy_time += LATCH;
} while((*IXP4XX_OSTS - last_jiffy_time) > LATCH);
return IRQ_HANDLED;
}
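The catch-up loop in the IXP4xx timer handler accounts one tick, then keeps ticking while the free-running counter shows more than one full period has elapsed since last_jiffy_time. A standalone model of that loop (the LATCH value and variable names here are illustrative, not the IXP4xx ones):

```c
#include <assert.h>

#define LATCH 59659UL			/* illustrative period, in counter ticks */

static unsigned long now;		/* free-running counter (simulated) */
static unsigned long last_jiffy_time;	/* counter value at the last tick */
static unsigned long jiffies;

/* Same shape as the handler's loop: account one tick, then keep
 * going while we are still more than one period behind "now". */
static void timer_catch_up(void)
{
	do {
		jiffies++;
		last_jiffy_time += LATCH;
	} while ((now - last_jiffy_time) > LATCH);
}
```

Because the comparison uses unsigned subtraction, it stays correct across counter wrap-around, which is why last_jiffy_time is advanced by LATCH rather than reset.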
-extern unsigned long (*gettimeoffset)(void);
-
-static struct irqaction timer_irq = {
- .name = "IXP4xx Timer Tick",
- .flags = SA_INTERRUPT
+static struct irqaction ixp4xx_timer_irq = {
+ .name = "IXP4xx Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = ixp4xx_timer_interrupt
};
-void __init time_init(void)
+void __init ixp4xx_init_time(void)
{
gettimeoffset = ixp4xx_gettimeoffset;
- timer_irq.handler = ixp4xx_timer_interrupt;
/* Clear Pending Interrupt by writing '1' to it */
*IXP4XX_OSST = IXP4XX_OSST_TIMER_1_PEND;
last_jiffy_time = 0;
/* Connect the interrupt handler and enable the interrupt */
- setup_irq(IRQ_IXP4XX_TIMER1, &timer_irq);
+ setup_irq(IRQ_IXP4XX_TIMER1, &ixp4xx_timer_irq);
}
.flags = IORESOURCE_MEM,
};
-static struct platform_device coyote_flash_device = {
+static struct platform_device coyote_flash = {
.name = "IXP4XX-Flash",
.id = 0,
.dev = {
.resource = &coyote_flash_resource,
};
+static struct platform_device *coyote_devices[] __initdata = {
+ &coyote_flash
+};
+
static void __init coyote_init(void)
{
- platform_add_device(&coyote_flash_device);
+ platform_add_devices(coyote_devices, ARRAY_SIZE(coyote_devices));
}
MACHINE_START(ADI_COYOTE, "ADI Engineering IXP4XX Coyote Development Platform")
IXP4XX_PERIPHERAL_BASE_VIRT)
MAPIO(coyote_map_io)
INITIRQ(ixp4xx_init_irq)
+ INITTIME(ixp4xx_init_time)
BOOT_PARAMS(0x0100)
INIT_MACHINE(coyote_init)
MACHINE_END
.flags = IORESOURCE_MEM,
};
-static struct platform_device ixdp425_flash_device = {
+static struct platform_device ixdp425_flash = {
.name = "IXP4XX-Flash",
.id = 0,
.dev = {
.num_resources = 0
};
+static struct platform_device *ixdp425_devices[] __initdata = {
+ &ixdp425_i2c_controller,
+ &ixdp425_flash
+};
+
static void __init ixdp425_init(void)
{
- platform_add_device(&ixdp425_flash_device);
- platform_add_device(&ixdp425_i2c_controller);
+ platform_add_devices(ixdp425_devices, ARRAY_SIZE(ixdp425_devices));
}
MACHINE_START(IXDP425, "Intel IXDP425 Development Platform")
IXP4XX_PERIPHERAL_BASE_VIRT)
MAPIO(ixdp425_map_io)
INITIRQ(ixp4xx_init_irq)
+ INITTIME(ixp4xx_init_time)
BOOT_PARAMS(0x0100)
INIT_MACHINE(ixdp425_init)
MACHINE_END
IXP4XX_PERIPHERAL_BASE_VIRT)
MAPIO(ixdp425_map_io)
INITIRQ(ixp4xx_init_irq)
+ INITTIME(ixp4xx_init_time)
BOOT_PARAMS(0x0100)
INIT_MACHINE(ixdp425_init)
MACHINE_END
IXP4XX_PERIPHERAL_BASE_VIRT)
MAPIO(ixdp425_map_io)
INITIRQ(ixp4xx_init_irq)
+ INITTIME(ixp4xx_init_time)
BOOT_PARAMS(0x0100)
INIT_MACHINE(ixdp425_init)
MACHINE_END
.flags = IORESOURCE_MEM,
};
-static struct platform_device prpmc1100_flash_device = {
+static struct platform_device prpmc1100_flash = {
.name = "IXP4XX-Flash",
.id = 0,
.dev = {
.resource = &prpmc1100_flash_resource,
};
+static struct platform_device *prpmc1100_devices[] __initdata = {
+ &prpmc1100_flash
+};
+
static void __init prpmc1100_init(void)
{
- platform_add_device(&prpmc1100_flash_device);
+ platform_add_devices(prpmc1100_devices, ARRAY_SIZE(prpmc1100_devices));
}
MACHINE_START(PRPMC1100, "Motorola PrPMC1100")
IXP4XX_PERIPHERAL_BASE_VIRT)
MAPIO(prpmc1100_map_io)
INITIRQ(ixp4xx_init_irq)
+ INITTIME(ixp4xx_init_time)
BOOT_PARAMS(0x0100)
INIT_MACHINE(prpmc1100_init)
MACHINE_END
config ARCH_LH7A404
bool
+config LH7A40X_CONTIGMEM
+ bool "Disable NUMA Support"
+ depends on ARCH_LH7A40X
+ help
+ Say Y here if your bootloader sets the SROMLL bit(s) in
+ the SDRAM controller, organizing memory as a contiguous
+ array. This option will disable CONFIG_DISCONTIGMEM and
+ force the kernel to manage all memory in one node.
+
+ Setting this option incorrectly may prevent the kernel from
+ booting. It is OK to leave it N.
+
+ For more information, consult
+ <file:Documentation/arm/Sharp-LH/SDRAM>.
+
+config LH7A40X_ONE_BANK_PER_NODE
+ bool "Optimize NUMA Node Tables for Size"
+ depends on ARCH_LH7A40X && !LH7A40X_CONTIGMEM
+ help
+ Say Y here to produce compact memory node tables. By
+ default, pairs of adjacent physical RAM banks are managed
+ together in a single node, which wastes some space in the
+ node tables but preserves compatibility with systems where
+ physical memory is truly contiguous.
+
+ Setting this option incorrectly may prevent the kernel from
+ booting. It is OK to leave it N.
+
+ For more information, consult
+ <file:Documentation/arm/Sharp-LH/SDRAM>.
+
endmenu
endif
# Object file lists.
-obj-y := fiq.o
+obj-y := fiq.o time.o
# generic.o
obj-$(CONFIG_MACH_KEV7A400) += arch-kev7a400.o irq-lh7a400.o
obj-$(CONFIG_MACH_LPD7A400) += arch-lpd7a40x.o ide-lpd7a40x.o irq-lh7a400.o
/* This function calls the board specific IRQ initialization function. */
extern void lh7a400_init_irq (void);
+extern void lh7a40x_init_time (void);
static struct map_desc kev7a400_io_desc[] __initdata = {
{ IO_VIRT, IO_PHYS, IO_SIZE, MT_DEVICE },
BOOT_PARAMS (0xc0000100)
MAPIO (kev7a400_map_io)
INITIRQ (lh7a400_init_irq)
+ INITTIME (lh7a40x_init_time)
MACHINE_END
#ifdef CONFIG_MACH_LPD7A404
extern void lh7a404_init_irq (void);
+extern void lh7a40x_init_time (void);
MACHINE_START (LPD7A404, "Logic Product Development LPD7A404-10")
MAINTAINER ("Marc Singer")
BOOT_PARAMS (0xc0000100)
MAPIO (lpd7a400_map_io)
INITIRQ (lh7a404_init_irq)
+ INITTIME (lh7a40x_init_time)
INIT_MACHINE (lpd7a40x_init)
MACHINE_END
#
# Common support
-obj-y := common.o irq.o dma.o clocks.o mux.o bus.o gpio.o
+obj-y := common.o irq.o dma.o clocks.o mux.o bus.o gpio.o time.o
obj-m :=
obj-n :=
obj- :=
omap_map_io();
}
+static void __init omap_generic_init_time(void)
+{
+ omap_init_time();
+}
+
MACHINE_START(OMAP_GENERIC, "Generic OMAP-1510/1610/1710")
MAINTAINER("Tony Lindgren <tony@atomide.com>")
BOOT_MEM(0x10000000, 0xfff00000, 0xfef00000)
MAPIO(omap_generic_map_io)
INITIRQ(omap_generic_init_irq)
INIT_MACHINE(omap_generic_init)
+ INITTIME(omap_generic_init_time)
MACHINE_END
+
BOOT_PARAMS(0x10000100)
MAPIO(innovator_map_io)
INITIRQ(innovator_init_irq)
+ INITTIME(omap_init_time)
INIT_MACHINE(innovator_init)
MACHINE_END
BOOT_PARAMS(0x10000100)
MAPIO(osk_map_io)
INITIRQ(osk_init_irq)
+ INITTIME(omap_init_time)
INIT_MACHINE(osk_init)
MACHINE_END
BOOT_PARAMS(0x10000100)
MAPIO(omap_perseus2_map_io)
INITIRQ(omap_perseus2_init_irq)
+ INITTIME(omap_init_time)
INIT_MACHINE(omap_perseus2_init)
MACHINE_END
},
};
-#ifdef CONFIG_ARCH_OMAP1510
-/*
- * NOTE: This code _should_ go somewhere else. But let's wait for the
- * dma-mapping code to settle down first.
- */
-
-/*
- * Test for Local Bus device in order to do address translation between
- * dma_handle and Local Bus address.
- */
-inline int dmadev_uses_omap_lbus(struct device * dev)
-{
- if (dev == NULL || !cpu_is_omap1510())
- return 0;
- return dev->bus == &omap_bus_types[OMAP_BUS_LBUS] ? 1 : 0;
-}
-
-/*
- * Translate bus address to Local Bus address for dma-mapping
- */
-inline int dmadev_to_lbus(dma_addr_t addr)
-{
- return bus_to_lbus(addr);
-}
-
-/*
- * Translate Local Bus address to bus address for dma-mapping
- */
-inline int lbus_to_dmadev(dma_addr_t addr)
-{
- return lbus_to_bus(addr);
-}
-#endif
-
static int omap_bus_match(struct device *dev, struct device_driver *drv)
{
struct omap_dev *omapdev = OMAP_DEV(dev);
EXPORT_SYMBOL(omap_device_register);
EXPORT_SYMBOL(omap_device_unregister);
-#ifdef CONFIG_ARCH_OMAP1510
-EXPORT_SYMBOL(dmadev_uses_omap_lbus);
-EXPORT_SYMBOL(dmadev_to_lbus);
-EXPORT_SYMBOL(lbus_to_dmadev);
-#endif
#define __ARCH_ARM_MACH_OMAP_COMMON_H
extern void omap_map_io(void);
+extern void omap_init_time(void);
#endif /* __ARCH_ARM_MACH_OMAP_COMMON_H */
#
# Common support (must be linked before board specific support)
-obj-y += generic.o irq.o dma.o
+obj-y += generic.o irq.o dma.o time.o
obj-$(CONFIG_PXA25x) += pxa25x.o
obj-$(CONFIG_PXA27x) += pxa27x.o
EXPORT_SYMBOL(pxa_gpio_mode);
+/*
+ * Routine to safely enable or disable a clock in the CKEN
+ */
+void pxa_set_cken(int clock, int enable)
+{
+ unsigned long flags;
+ local_irq_save(flags);
+
+ if (enable)
+ CKEN |= clock;
+ else
+ CKEN &= ~clock;
+
+ local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL(pxa_set_cken);
+
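pxa_set_cken() masks interrupts around a read-modify-write of the shared CKEN register so a concurrent update cannot be lost between the read and the write-back. The bit manipulation itself looks like this (a sketch against a plain variable; the irq masking and the real register are deliberately omitted):

```c
#include <assert.h>

static unsigned long fake_cken;		/* stands in for the CKEN register */

/* Same read-modify-write as pxa_set_cken(), minus local_irq_save(). */
static void set_cken(unsigned long clock, int enable)
{
	if (enable)
		fake_cken |= clock;
	else
		fake_cken &= ~clock;
}
```

On the real hardware the |= and &= are each a read, a modify, and a write, which is why the kernel version brackets them with local_irq_save()/local_irq_restore().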
/*
* Intel PXA2xx internal register mapping.
*
static u64 pxamci_dmamask = 0xffffffffUL;
static struct platform_device pxamci_device = {
- .name = "pxamci",
- .id = 0,
+ .name = "pxa2xx-mci",
+ .id = -1,
.dev = {
.dma_mask = &pxamci_dmamask,
.coherent_dma_mask = 0xffffffff,
static u64 udc_dma_mask = ~(u32)0;
static struct platform_device udc_device = {
- .name = "pxa2xx_udc",
- .id = 0,
+ .name = "pxa2xx-udc",
+ .id = -1,
.resource = pxa2xx_udc_resources,
.num_resources = ARRAY_SIZE(pxa2xx_udc_resources),
.dev = {
static u64 fb_dma_mask = ~(u64)0;
static struct platform_device pxafb_device = {
- .name = "pxafb",
- .id = 0,
+ .name = "pxa2xx-fb",
+ .id = -1,
.dev = {
.platform_data = &pxa_fb_info,
.dma_mask = &fb_dma_mask,
.resource = pxafb_resources,
};
+static struct platform_device ffuart_device = {
+ .name = "pxa2xx-uart",
+ .id = 0,
+};
+static struct platform_device btuart_device = {
+ .name = "pxa2xx-uart",
+ .id = 1,
+};
+static struct platform_device stuart_device = {
+ .name = "pxa2xx-uart",
+ .id = 2,
+};
+
static struct platform_device *devices[] __initdata = {
&pxamci_device,
&udc_device,
&pxafb_device,
+ &ffuart_device,
+ &btuart_device,
+ &stuart_device,
};
static int __init pxa_init(void)
extern void __init pxa_map_io(void);
extern void __init pxa_init_irq(void);
+extern void __init pxa_init_time(void);
+
+extern unsigned int get_clk_frequency_khz(int info);
#define SET_BANK(__nr,__start,__size) \
mi->bank[__nr].start = (__start), \
BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
MAPIO(idp_map_io)
INITIRQ(idp_init_irq)
+ INITTIME(pxa_init_time)
INIT_MACHINE(idp_init)
MACHINE_END
static struct platform_device sa1111_device = {
.name = "sa1111",
- .id = 0,
+ .id = -1,
.num_resources = ARRAY_SIZE(sa1111_resources),
.resource = sa1111_resources,
};
static struct platform_device smc91x_device = {
.name = "smc91x",
- .id = 0,
+ .id = -1,
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
};
BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
MAPIO(lubbock_map_io)
INITIRQ(lubbock_init_irq)
+ INITTIME(pxa_init_time)
INIT_MACHINE(lubbock_init)
MACHINE_END
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/bitops.h>
+#include <linux/fb.h>
#include <asm/types.h>
#include <asm/setup.h>
#include <asm/mach/irq.h>
#include <asm/arch/mainstone.h>
+#include <asm/arch/pxafb.h>
#include "generic.h"
.resource = smc91x_resources,
};
+
+static void mainstone_backlight_power(int on)
+{
+ if (on) {
+ pxa_gpio_mode(GPIO16_PWM0_MD);
+ pxa_set_cken(CKEN0_PWM0, 1);
+ PWM_CTRL0 = 0;
+ PWM_PWDUTY0 = 0x3ff;
+ PWM_PERVAL0 = 0x3ff;
+ } else {
+ PWM_CTRL0 = 0;
+ PWM_PWDUTY0 = 0x0;
+ PWM_PERVAL0 = 0x3FF;
+ pxa_set_cken(CKEN0_PWM0, 0);
+ }
+}
+
+static struct pxafb_mach_info toshiba_ltm04c380k __initdata = {
+ .pixclock = 50000,
+ .xres = 640,
+ .yres = 480,
+ .bpp = 16,
+ .hsync_len = 1,
+ .left_margin = 0x9f,
+ .right_margin = 1,
+ .vsync_len = 44,
+ .upper_margin = 0,
+ .lower_margin = 0,
+ .sync = FB_SYNC_HOR_HIGH_ACT|FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = LCCR0_Act,
+ .lccr3 = LCCR3_PCP,
+ .pxafb_backlight_power = mainstone_backlight_power,
+};
+
+static struct pxafb_mach_info toshiba_ltm035a776c __initdata = {
+ .pixclock = 110000,
+ .xres = 240,
+ .yres = 320,
+ .bpp = 16,
+ .hsync_len = 4,
+ .left_margin = 8,
+ .right_margin = 20,
+ .vsync_len = 3,
+ .upper_margin = 1,
+ .lower_margin = 10,
+ .sync = FB_SYNC_HOR_HIGH_ACT|FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = LCCR0_Act,
+ .lccr3 = LCCR3_PCP,
+ .pxafb_backlight_power = mainstone_backlight_power,
+};
+
static void __init mainstone_init(void)
{
- platform_add_device(&smc91x_device);
+ platform_device_register(&smc91x_device);
+
+ /*
+  * Reading Mainstone's "Virtual Configuration Register" might be
+  * handy here to select the LCD type.
+  */
+ if (0)
+ set_pxa_fb_info(&toshiba_ltm04c380k);
+ else
+ set_pxa_fb_info(&toshiba_ltm035a776c);
}
BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
MAPIO(mainstone_map_io)
INITIRQ(mainstone_init_irq)
+ INITTIME(pxa_init_time)
INIT_MACHINE(mainstone_init)
MACHINE_END
#include <linux/delay.h>
#include <linux/pm.h>
#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
#include <asm/elf.h>
#include <asm/io.h>
#include <asm/mach/map.h>
#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
extern void rpc_init_irq(void);
elf_hwcap &= ~HWCAP_HALF;
}
+static irqreturn_t
+rpc_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction rpc_timer_irq = {
+ .name = "RiscPC Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = rpc_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt.
+ */
+void __init rpc_init_time(void)
+{
+ extern void ioctime_init(void);
+ ioctime_init();
+
+ setup_irq(IRQ_TIMER, &rpc_timer_irq);
+}
+
MACHINE_START(RISCPC, "Acorn-RiscPC")
MAINTAINER("Russell King")
BOOT_MEM(0x10000000, 0x03000000, 0xe0000000)
DISABLE_PARPORT(1)
MAPIO(rpc_map_io)
INITIRQ(rpc_init_irq)
+ INITTIME(rpc_init_time)
MACHINE_END
# Object file lists.
-obj-y := s3c2410.o irq.o
+obj-y := s3c2410.o irq.o time.o
obj-m :=
obj-n :=
obj- :=
}
+void __init bast_init_time(void)
+{
+ s3c2410_init_time();
+}
+
MACHINE_START(BAST, "Simtec-BAST")
MAINTAINER("Ben Dooks <ben@simtec.co.uk>")
BOOT_MEM(S3C2410_SDRAM_PA, S3C2410_PA_UART, S3C2410_VA_UART)
BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
MAPIO(bast_map_io)
INITIRQ(bast_init_irq)
+ INITTIME(bast_init_time)
MACHINE_END
}
+void __init ipaq_init_time(void)
+{
+ s3c2410_init_time();
+}
+
MACHINE_START(H1940, "IPAQ-H1940")
MAINTAINER("Ben Dooks <ben@fluff.org>")
BOOT_MEM(S3C2410_SDRAM_PA, S3C2410_PA_UART, S3C2410_VA_UART)
BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
MAPIO(ipaq_map_io)
INITIRQ(ipaq_init_irq)
+ INITTIME(ipaq_init_time)
MACHINE_END
s3c2410_init_irq();
}
+void __init smdk2410_init_time(void)
+{
+ s3c2410_init_time();
+}
+
MACHINE_START(SMDK2410, "SMDK2410") /* @TODO: request a new identifier and switch
* to SMDK2410 */
MAINTAINER("Jonas Dietsche")
BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
MAPIO(smdk2410_map_io)
INITIRQ(smdk2410_init_irq)
+ INITTIME(smdk2410_init_time)
MACHINE_END
/* linux/arch/arm/mach-s3c2410/mach-vr1000.c
*
- * Copyright (c) 2003 Simtec Electronics
+ * Copyright (c) 2003,2004 Simtec Electronics
* Ben Dooks <ben@simtec.co.uk>
*
- * http://www.simtec.co.uk/
+ * Machine support for Thorcom VR1000 board. Designed for Thorcom by
+ * Simtec Electronics, http://www.simtec.co.uk/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Modifications:
+ * 12-Jul-2004 BJD Renamed machine
* 16-May-2003 BJD Created initial version
* 16-Aug-2003 BJD Fixed header files and copyright, added URL
* 05-Sep-2003 BJD Moved to v2.6 kernel
}
-MACHINE_START(VR1000, "Simtec-VR1000")
+void __init vr1000_init_time(void)
+{
+ s3c2410_init_time();
+}
+
+MACHINE_START(VR1000, "Thorcom-VR1000")
MAINTAINER("Ben Dooks <ben@simtec.co.uk>")
BOOT_MEM(S3C2410_SDRAM_PA, S3C2410_PA_UART, S3C2410_VA_UART)
BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
MAPIO(vr1000_map_io)
INITIRQ(vr1000_init_irq)
+ INITTIME(vr1000_init_time)
MACHINE_END
extern void s3c2410_init_irq(void);
+extern void s3c2410_init_time(void);
+
endchoice
+config SA1100_COLLIE
+ bool "Sharp Zaurus SL5500"
+ depends on ARCH_SA1100
+ help
+ Say Y here to support the Sharp Zaurus SL5500 PDA.
+
config SA1100_H3100
bool "Compaq iPAQ H3100"
help
#
# Common support
-obj-y := generic.o irq.o dma.o
+obj-y := generic.o irq.o dma.o time.o
obj-m :=
obj-n :=
obj- :=
obj-$(CONFIG_SA1100_CERF) += cerf.o
led-$(CONFIG_SA1100_CERF) += leds-cerf.o
+obj-$(CONFIG_SA1100_COLLIE) += collie.o
+
obj-$(CONFIG_SA1100_EMPEG) += empeg.o
obj-$(CONFIG_SA1100_FLEXANET) += flexanet.o
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(adsbitsy_map_io)
INITIRQ(adsbitsy_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
FIXUP(fixup_assabet)
MAPIO(assabet_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
INIT_MACHINE(assabet_init)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(badge4_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(brutus_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(cerf_map_io)
INITIRQ(cerf_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
MAPIO(collie_map_io)
INITIRQ(sa1100_init_irq)
INIT_MACHINE(collie_init)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(empeg_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(flexanet_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
#endif
MAPIO(freebird_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
extern void __init sa1100_map_io(void);
extern void __init sa1100_init_irq(void);
+extern void __init sa1100_init_time(void);
#define SET_BANK(__nr,__start,__size) \
mi->bank[__nr].start = (__start), \
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(graphicsclient_map_io)
INITIRQ(graphicsclient_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(graphicsmaster_map_io)
INITIRQ(graphicsmaster_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(h3100_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
#endif /* CONFIG_SA1100_H3100 */
BOOT_PARAMS(0xc0000100)
MAPIO(h3600_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
#endif /* CONFIG_SA1100_H3600 */
BOOT_PARAMS(0xc0000100)
MAPIO(h3800_map_io)
INITIRQ(h3800_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
#endif /* CONFIG_SA1100_H3800 */
BOOT_PARAMS(0xc0000100)
MAPIO(hackkit_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(huw_webpanel_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(itsy_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(jornada720_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(lart_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
FIXUP(fixup_nanoengine)
MAPIO(nanoengine_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(omnimeter_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(pangolin_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(pfs168_map_io)
INITIRQ(pfs168_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(pleb_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(shannon_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(sherman_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(simpad_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(stork_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(system3_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(trizeps_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
MAPIO(xp860_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
BOOT_PARAMS(0xc0000100)
MAPIO(yopy_map_io)
INITIRQ(sa1100_init_irq)
+ INITTIME(sa1100_init_time)
MACHINE_END
*/
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
#include <asm/setup.h>
#include <asm/mach-types.h>
#include <asm/io.h>
+#include <asm/leds.h>
+#include <asm/param.h>
#include <asm/mach/map.h>
#include <asm/mach/arch.h>
+#include <asm/mach/time.h>
extern void shark_init_irq(void);
iotable_init(shark_io_desc, ARRAY_SIZE(shark_io_desc));
}
+#define IRQ_TIMER 0
+#define HZ_TIME ((1193180 + HZ/2) / HZ)
+
+static irqreturn_t
+shark_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction shark_timer_irq = {
+ .name = "Shark Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = shark_timer_interrupt
+};
+
+/*
+ * Set up the timer interrupt.
+ */
+void __init shark_init_time(void)
+{
+ outb(0x34, 0x43); /* binary, mode 2 (rate generator), LSB/MSB, Ch 0 */
+ outb(HZ_TIME & 0xff, 0x40); /* LSB of count */
+ outb(HZ_TIME >> 8, 0x40); /* MSB of count */
+
+ setup_irq(IRQ_TIMER, &shark_timer_irq);
+}
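HZ_TIME rounds the PIT divisor to the nearest integer by adding HZ/2 before the divide; 1193180 Hz is the i8254 input clock. A quick standalone check of that arithmetic:

```c
#include <assert.h>

#define PIT_HZ 1193180UL	/* i8254 input clock, in Hz */

/* Nearest-integer divisor for a desired tick rate, as HZ_TIME does. */
static unsigned long pit_divisor(unsigned long hz)
{
	return (PIT_HZ + hz / 2) / hz;
}
```

Without the + hz/2 term the division would always truncate downward, making every tick slightly short and the clock drift fast.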
+
+
MACHINE_START(SHARK, "Shark")
MAINTAINER("Alexander Schulz")
BOOT_MEM(0x08000000, 0x40000000, 0xe0000000)
BOOT_PARAMS(0x08003000)
MAPIO(shark_map_io)
INITIRQ(shark_init_irq)
+ INITTIME(shark_init_time)
MACHINE_END
# Makefile for the linux kernel.
#
-obj-y := core.o
+obj-y := core.o clock.o
ret = 0;
}
#endif
- return 0;
+ return ret;
}
EXPORT_SYMBOL(clk_set_rate);
#include <linux/init.h>
#include <linux/device.h>
#include <linux/sysdev.h>
+#include <linux/interrupt.h>
+#include <asm/system.h>
#include <asm/hardware.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach/arch.h>
#include <asm/mach/flash.h>
#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
#include <asm/mach/map.h>
#ifdef CONFIG_MMC
#include <asm/mach/mmc.h>
}
static struct mmc_platform_data mmc0_plat_data = {
- .mclk = 33000000,
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
.status = mmc_status,
};
static struct mmc_platform_data mmc1_plat_data = {
- .mclk = 33000000,
.ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
.status = mmc_status,
};
{
int i;
- platform_add_device(&versatile_flash_device);
- platform_add_device(&smc91x_device);
+ platform_device_register(&versatile_flash_device);
+ platform_device_register(&smc91x_device);
for (i = 0; i < ARRAY_SIZE(amba_devs); i++) {
struct amba_device *d = amba_devs[i];
leds_event = versatile_leds_event;
}
+/*
+ * Where is the timer (VA)?
+ */
+#define TIMER0_VA_BASE IO_ADDRESS(VERSATILE_TIMER0_1_BASE)
+#define TIMER1_VA_BASE (IO_ADDRESS(VERSATILE_TIMER0_1_BASE) + 0x20)
+#define TIMER2_VA_BASE IO_ADDRESS(VERSATILE_TIMER2_3_BASE)
+#define TIMER3_VA_BASE (IO_ADDRESS(VERSATILE_TIMER2_3_BASE) + 0x20)
+#define VA_IC_BASE IO_ADDRESS(VERSATILE_VIC_BASE)
+
+/*
+ * How long is the timer interval?
+ */
+#define TIMER_INTERVAL (TICKS_PER_uSEC * mSEC_10)
+#if TIMER_INTERVAL >= 0x100000
+#define TIMER_RELOAD (TIMER_INTERVAL >> 8) /* Divide by 256 */
+#define TIMER_CTRL 0x88 /* Enable, Clock / 256 */
+#define TICKS2USECS(x) (256 * (x) / TICKS_PER_uSEC)
+#elif TIMER_INTERVAL >= 0x10000
+#define TIMER_RELOAD (TIMER_INTERVAL >> 4) /* Divide by 16 */
+#define TIMER_CTRL 0x84 /* Enable, Clock / 16 */
+#define TICKS2USECS(x) (16 * (x) / TICKS_PER_uSEC)
+#else
+#define TIMER_RELOAD (TIMER_INTERVAL)
+#define TIMER_CTRL 0x80 /* Enable */
+#define TICKS2USECS(x) ((x) / TICKS_PER_uSEC)
+#endif
+
+#define TIMER_CTRL_IE (1 << 5) /* Interrupt Enable */
+
+/*
+ * What does it look like?
+ */
+typedef struct TimerStruct {
+ unsigned long TimerLoad;
+ unsigned long TimerValue;
+ unsigned long TimerControl;
+ unsigned long TimerClear;
+} TimerStruct_t;
+
+extern unsigned long (*gettimeoffset)(void);
+
+/*
+ * Returns number of usecs since the last timer interrupt.  Note that
+ * interrupts will have been disabled by do_gettimeoffset().
+ */
+static unsigned long versatile_gettimeoffset(void)
+{
+ volatile TimerStruct_t *timer0 = (volatile TimerStruct_t *)TIMER0_VA_BASE;
+ unsigned long ticks1, ticks2, status;
+
+ /*
+ * Get the current number of ticks. Note that there is a race
+ * condition between us reading the timer and checking for
+ * an interrupt. We get around this by ensuring that the
+ * counter has not reloaded between our two reads.
+ */
+ ticks2 = timer0->TimerValue & 0xffff;
+ do {
+ ticks1 = ticks2;
+ status = __raw_readl(VA_IC_BASE + VIC_IRQ_RAW_STATUS);
+ ticks2 = timer0->TimerValue & 0xffff;
+ } while (ticks2 > ticks1);
+
+ /*
+ * Number of ticks since last interrupt.
+ */
+ ticks1 = TIMER_RELOAD - ticks2;
+
+ /*
+ * Interrupt pending? If so, we've reloaded once already.
+ *
+ * FIXME: Need to check this is effectively timer 0 that expires
+ */
+ if (status & IRQMASK_TIMERINT0_1)
+ ticks1 += TIMER_RELOAD;
+
+ /*
+ * Convert the ticks to usecs
+ */
+ return TICKS2USECS(ticks1);
+}
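The double-read loop above is a classic race-avoidance pattern for a down-counting timer: if the counter reloaded between the two reads, the second value is higher than the first and the loop retries. A minimal user-space sketch of the same pattern, with a simulated counter standing in for the Versatile hardware (the reader function and the TIMER_RELOAD value are assumptions for illustration):

```c
#define TIMER_RELOAD 10000

/* Simulated down-counter: two consecutive reads with no reload in
 * between (the second value is lower than the first). */
static unsigned long fake_reads[] = { 42, 40 };
static int read_idx;
static unsigned long read_timer(void)
{
	return fake_reads[read_idx++];
}

static unsigned long ticks_since_tick(void)
{
	unsigned long ticks1, ticks2;

	ticks2 = read_timer() & 0xffff;
	do {
		ticks1 = ticks2;
		/* the raw interrupt status would be sampled here */
		ticks2 = read_timer() & 0xffff;
	} while (ticks2 > ticks1);	/* counter jumped up => it reloaded */

	/* ticks elapsed since the counter was last reloaded */
	return TIMER_RELOAD - ticks2;
}
```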
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t versatile_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ volatile TimerStruct_t *timer0 = (volatile TimerStruct_t *)TIMER0_VA_BASE;
+
+	/* ...clear the interrupt */
+ timer0->TimerClear = 1;
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction versatile_timer_irq = {
+ .name = "Versatile Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = versatile_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ */
+void __init versatile_init_time(void)
+{
+ volatile TimerStruct_t *timer0 = (volatile TimerStruct_t *)TIMER0_VA_BASE;
+ volatile TimerStruct_t *timer1 = (volatile TimerStruct_t *)TIMER1_VA_BASE;
+ volatile TimerStruct_t *timer2 = (volatile TimerStruct_t *)TIMER2_VA_BASE;
+ volatile TimerStruct_t *timer3 = (volatile TimerStruct_t *)TIMER3_VA_BASE;
+
+ /*
+ * set clock frequency:
+ * VERSATILE_REFCLK is 32KHz
+ * VERSATILE_TIMCLK is 1MHz
+ */
+ *(volatile unsigned int *)IO_ADDRESS(VERSATILE_SCTL_BASE) |=
+ ((VERSATILE_TIMCLK << VERSATILE_TIMER1_EnSel) | (VERSATILE_TIMCLK << VERSATILE_TIMER2_EnSel) |
+ (VERSATILE_TIMCLK << VERSATILE_TIMER3_EnSel) | (VERSATILE_TIMCLK << VERSATILE_TIMER4_EnSel));
+
+ /*
+ * Initialise to a known state (all timers off)
+ */
+ timer0->TimerControl = 0;
+ timer1->TimerControl = 0;
+ timer2->TimerControl = 0;
+ timer3->TimerControl = 0;
+
+ timer0->TimerLoad = TIMER_RELOAD;
+ timer0->TimerValue = TIMER_RELOAD;
+ timer0->TimerControl = TIMER_CTRL | 0x40 | TIMER_CTRL_IE; /* periodic + IE */
+
+ /*
+ * Make irqs happen for the system timer
+ */
+ setup_irq(IRQ_TIMERINT0_1, &versatile_timer_irq);
+ gettimeoffset = versatile_gettimeoffset;
+}
+
MACHINE_START(VERSATILE_PB, "ARM-Versatile PB")
MAINTAINER("ARM Ltd/Deep Blue Solutions Ltd")
BOOT_MEM(0x00000000, 0x101f1000, 0xf11f1000)
BOOT_PARAMS(0x00000100)
MAPIO(versatile_map_io)
INITIRQ(versatile_init_irq)
+ INITTIME(versatile_init_time)
INIT_MACHINE(versatile_init)
MACHINE_END
# XScale
config CPU_XSCALE
bool
- depends on ARCH_IOP3XX || ARCH_ADIFCC || ARCH_PXA || ARCH_IXP4XX
+ depends on ARCH_IOP3XX || ARCH_PXA || ARCH_IXP4XX
default y
select CPU_32v5
select CPU_ABRT_EV5T
/*
* Set the "dma handle"
*/
- *handle = page_to_bus(page);
+ *handle = page_to_dma(dev, page);
do {
BUG_ON(!pte_none(*pte));
printk("Mem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for (node = 0; node < numnodes; node++) {
struct page *page, *end;
#include <asm/procinfo.h>
#include <asm/hardware.h>
#include <asm/pgtable.h>
-#include <asm/ptrace.h>
/*
* the cache line size of the I and D cache
# To add an entry into this database, please see Documentation/arm/README,
# or contact rmk@arm.linux.org.uk
#
-# Last update: Fri May 28 13:17:46 2004
+# Last update: Fri Jul 2 11:58:36 2004
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
pdb ARCH_PDB PDB 239
blue_2g SA1100_BLUE_2G BLUE_2G 240
bluearch SA1100_BLUEARCH BLUEARCH 241
-ixdp2400 ARCH_IXMB2400 IXMB2400 242
-ixdp2800 ARCH_IXMB2800 IXMB2800 243
+ixdp2400 ARCH_IXDB2400 IXDB2400 242
+ixdp2800 ARCH_IXDB2800 IXDB2800 243
explorer SA1100_EXPLORER EXPLORER 244
ixdp425 ARCH_IXDP425 IXDP425 245
chimp ARCH_CHIMP CHIMP 246
skyminder MACH_SKYMINDER SKYMINDER 536
lpd79520 MACH_LPD79520 LPD79520 537
edb9302 MACH_EDB9302 EDB9302 538
+hw90340 MACH_HW90340 HW90340 539
+cip_box MACH_CIP_BOX CIP_BOX 540
+ivpn MACH_IVPN IVPN 541
+rsoc2 MACH_RSOC2 RSOC2 542
+husky MACH_HUSKY HUSKY 543
+boxer MACH_BOXER BOXER 544
+shepherd MACH_SHEPHERD SHEPHERD 545
+aml42800aa MACH_AML42800AA AML42800AA 546
+ml674001 MACH_MACH_TYPE_ML674001 MACH_TYPE_ML674001 547
+lpc2294 MACH_LPC2294 LPC2294 548
+switchgrass MACH_SWITCHGRASS SWITCHGRASS 549
+ens_cmu MACH_ENS_CMU ENS_CMU 550
+mm6_sdb MACH_MM6_SDB MM6_SDB 551
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
#include <linux/sched.h>
#include <linux/init.h>
#include <linux/init_task.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
action->handler = handler;
action->flags = irq_flags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
read_unlock(&tasklist_lock);
if (!child)
goto out;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out_tsk;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
unsigned char aux_device_present;
char elf_platform[ELF_PLATFORM_SIZE];
-char saved_command_line[COMMAND_LINE_SIZE];
unsigned long phys_initrd_start __initdata = 0;
unsigned long phys_initrd_size __initdata = 0;
printk("Mem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
page = NODE_MEM_MAP(0);
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
*/
static struct irqaction irq2 = { timer_interrupt, SA_SHIRQ | SA_INTERRUPT,
- 0, "timer", NULL, NULL};
+ CPU_MASK_NONE, "timer", NULL, NULL};
void __init
time_init(void)
action->handler = handler;
action->flags = irqflags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
#include <linux/fs.h>
#include <linux/user.h>
#include <linux/elfcore.h>
+#include <linux/mqueue.h>
//#define DEBUG
#include <linux/seq_file.h>
#include <linux/tty.h>
+#include <asm/setup.h>
+
/*
* Setup options
*/
extern int root_mountflags;
extern char _etext, _edata, _end;
-#define COMMAND_LINE_SIZE 256
-
static char command_line[COMMAND_LINE_SIZE] = { 0, };
- char saved_command_line[COMMAND_LINE_SIZE];
extern const unsigned long text_start, edata; /* set by the linker script */
extern unsigned long dram_start, dram_end;
printk("\nMem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
i = max_mapnr;
while (i-- > 0) {
total++;
config CONFIG_SH_STANDARD_BIOS
bool "Use gdb protocol serial console"
- depends on (!H8300H_SIM && H8S_SIM)
+ depends on (!H8300H_SIM && !H8S_SIM)
help
	  Serial console output using the GDB protocol.
	  Requires eCos/RedBoot
BLKDEV start address.
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
#include <linux/init.h>
#include <linux/init_task.h>
#include <linux/fs.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
read_unlock(&tasklist_lock);
if (!child)
goto out;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out_tsk;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
case PTRACE_PEEKUSR: {
unsigned long tmp;
- if ((addr & 3) || addr < 0 || addr >= sizeof(struct user))
+ if ((addr & 3) || addr < 0 || addr >= sizeof(struct user)) {
ret = -EIO;
+ break ;
+ }
- tmp = 0; /* Default return condition */
+ ret = 0; /* Default return condition */
addr = addr >> 2; /* temporary hack. */
+
if (addr < H8300_REGS_NO)
tmp = h8300_get_reg(child, addr);
else {
- ret = -EIO;
- break ;
+ switch(addr) {
+ case 49:
+ tmp = child->mm->start_code;
+ break ;
+ case 50:
+ tmp = child->mm->start_data;
+ break ;
+ case 51:
+ tmp = child->mm->end_code;
+ break ;
+ case 52:
+ tmp = child->mm->end_data;
+ break ;
+ default:
+ ret = -EIO;
+ }
}
- ret = put_user(tmp,(unsigned long *) data);
+ if (!ret)
+ ret = put_user(tmp,(unsigned long *) data);
break ;
}
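The fix above changes the error flow of PTRACE_PEEKUSR: ret now defaults to 0, becomes -EIO only for an address that is neither a register nor one of the mm fields (49-52), and put_user() runs only when ret stayed 0. A user-space sketch of that control flow (the register contents, H8300_REGS_NO value and mm addresses below are made up for illustration):

```c
#include <errno.h>

#define H8300_REGS_NO 10

static long regs[H8300_REGS_NO] = { 0, 11, 22 };	/* fake registers */
static long start_code = 0x1000, start_data = 0x2000;	/* fake mm fields */

/* Mirrors the fixed control flow: 0 on success, -EIO otherwise;
 * the value is copied out only when ret stayed 0. */
static int peekusr(unsigned long addr, long *out)
{
	long tmp = 0;
	int ret = 0;			/* default return condition */

	if (addr < H8300_REGS_NO)
		tmp = regs[addr];
	else {
		switch (addr) {
		case 49: tmp = start_code; break;
		case 50: tmp = start_data; break;
		default: ret = -EIO;
		}
	}
	if (!ret)
		*out = tmp;		/* stand-in for put_user() */
	return ret;
}

/* convenience wrapper for testing: value on success, -1 on -EIO */
static long peek_or_fail(unsigned long addr)
{
	long v = -1;
	return peekusr(addr, &v) ? -1 : v;
}
```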
#include <linux/major.h>
#include <linux/bootmem.h>
#include <linux/seq_file.h>
+#include <linux/init.h>
#include <asm/setup.h>
#include <asm/irq.h>
unsigned long memory_start;
unsigned long memory_end;
-char command_line[512];
-char saved_command_line[512];
+char command_line[COMMAND_LINE_SIZE];
extern int _stext, _etext, _sdata, _edata, _sbss, _ebss, _end;
extern int _ramstart, _ramend;
#endif
/* Keep a copy of command line */
*cmdline_p = &command_line[0];
- memcpy(saved_command_line, command_line, sizeof(saved_command_line));
- saved_command_line[sizeof(saved_command_line)-1] = 0;
+ memcpy(saved_command_line, command_line, COMMAND_LINE_SIZE);
+ saved_command_line[COMMAND_LINE_SIZE-1] = 0;
#ifdef DEBUG
if (strlen(*cmdline_p))
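The change above replaces the hard-coded 512-byte buffers with COMMAND_LINE_SIZE and forces NUL termination of the saved copy. A user-space sketch of the same bounded-copy idea (the tiny COMMAND_LINE_SIZE is purely for demonstration; strncpy stands in for the kernel's memcpy from an equally sized buffer):

```c
#include <string.h>

#define COMMAND_LINE_SIZE 8	/* tiny on purpose, to show truncation */

static char saved_command_line[COMMAND_LINE_SIZE];

static const char *save_cmdline(const char *command_line)
{
	strncpy(saved_command_line, command_line, COMMAND_LINE_SIZE);
	saved_command_line[COMMAND_LINE_SIZE - 1] = 0;	/* always NUL */
	return saved_command_line;
}

/* helper for testing: does saving 'in' yield exactly 'want'? */
static int cmdline_matches(const char *in, const char *want)
{
	return strcmp(save_cmdline(in), want) == 0;
}
```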
mov.l @(LER0-LER1:16,sp),er1 /* restore ER0 */
mov.l er1,@er0
mov.w @(LEXR-LER1:16,sp),r1 /* restore EXR */
+ mov.b r1l,r1h
mov.w r1,@(8:16,er0)
mov.w @(LCCR-LER1:16,sp),r1 /* restore the RET addr */
mov.b r1l,r1h
jsr @SYMBOL_NAME(syscall_trace)
bra SYMBOL_NAME(ret_from_exception):8
-
SYMBOL_NAME_LABEL(ret_from_fork)
mov.l er2,er0
jsr @SYMBOL_NAME(schedule_tail)
static const int h8300_register_offset[] = {
PT_REG(er1), PT_REG(er2), PT_REG(er3), PT_REG(er4),
PT_REG(er5), PT_REG(er6), PT_REG(er0), PT_REG(orig_er0),
- PT_REG(ccr), PT_REG(pc), PT_REG(exr)
+ PT_REG(ccr), PT_REG(pc), 0, PT_REG(exr)
};
/* read register */
depends on (MWINCHIP3D || MWINCHIP2 || MWINCHIPC6) && MTRR
default y
+config X86_4G
+ bool "4 GB kernel-space and 4 GB user-space virtual memory support"
+ help
+ This option is only useful for systems that have more than 1 GB
+ of RAM.
+
+ The default kernel VM layout leaves 1 GB of virtual memory for
+ kernel-space mappings, and 3 GB of VM for user-space applications.
+ This option ups both the kernel-space VM and the user-space VM to
+ 4 GB.
+
+	  The cost of this option is additional TLB flushes done at
+	  system-entry points that transition from user-mode into kernel-mode:
+	  system calls, page faults, and IRQs that interrupt user-mode
+	  code. There's also additional overhead to kernel operations that copy
+	  memory to/from user-space. The overhead is hard to quantify and
+	  depends on the workload - it can be anything from no visible overhead
+	  to 20-30% overhead. A good rule of thumb is to expect a runtime
+	  overhead of about 20%.
+
+ The upside is the much increased kernel-space VM, which more than
+ quadruples the maximum amount of RAM supported. Kernels compiled with
+ this option boot on 64GB of RAM and still have more than 3.1 GB of
+ 'lowmem' left. Another bonus is that highmem IO bouncing decreases,
+ if used with drivers that still use bounce-buffers.
+
+ There's also a 33% increase in user-space VM size - database
+ applications might see a boost from this.
+
+	  But the cost of the TLB flushes and the runtime overhead has to be
+	  weighed against the benefits of the larger VM spaces. The dividing
+	  line depends on the actual workload - there might be 4 GB systems
+	  that benefit from this option. Systems with less than 4 GB of RAM
+	  will rarely see a benefit - but it's not out of the question; the
+	  exact circumstances have to be considered.
+
+config X86_SWITCH_PAGETABLES
+ def_bool X86_4G
+
+config X86_4G_VM_LAYOUT
+ def_bool X86_4G
+
+config X86_UACCESS_INDIRECT
+ def_bool X86_4G
+
+config X86_HIGH_ENTRY
+ def_bool X86_4G
+
config HPET_TIMER
bool "HPET Timer Support"
help
Choose N to continue using the legacy 8254 timer.
config HPET_EMULATE_RTC
- def_bool HPET_TIMER && RTC=y
+ bool "Provide RTC interrupt"
+ depends on HPET_TIMER && RTC=y
config SMP
bool "Symmetric multi-processing support"
Say Y here if you are building a kernel for a desktop, embedded
or real-time system. Say N if you are unsure.
+config PREEMPT_VOLUNTARY
+ bool "Voluntary Kernel Preemption"
+ depends on !PREEMPT
+ default y
+ help
+ This option reduces the latency of the kernel by adding more
+ "explicit preemption points" to the kernel code. These new
+ preemption points have been selected to minimize the maximum
+ latency of rescheduling, providing faster application reactions.
+
+ Say Y here if you are building a kernel for a desktop system.
+ Say N if you are unsure.
+
config X86_UP_APIC
bool "Local APIC support on uniprocessors" if !SMP
depends on !(X86_VISWS || X86_VOYAGER)
menu "Kernel hacking"
+config CRASH_DUMP
+ tristate "Crash dump support (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ default n
+ ---help---
+ Say Y here to enable saving an image of system memory when a panic
+ or other error occurs. Dumps can also be forced with the SysRq+d
+ key if MAGIC_SYSRQ is enabled.
+
+config CRASH_DUMP_BLOCKDEV
+ tristate "Crash dump block device driver"
+ depends on CRASH_DUMP
+ help
+ Say Y to allow saving crash dumps directly to a disk device.
+
+config CRASH_DUMP_NETDEV
+ tristate "Crash dump network device driver"
+ depends on CRASH_DUMP
+ help
+ Say Y to allow saving crash dumps over a network device.
+
+config CRASH_DUMP_MEMDEV
+ bool "Crash dump staged memory driver"
+ depends on CRASH_DUMP
+ help
+	  Say Y to allow crash dumps to be saved temporarily in spare
+	  memory pages and written out to disk later.
+
+config CRASH_DUMP_SOFTBOOT
+ bool "Save crash dump across a soft reboot"
+ depends on CRASH_DUMP_MEMDEV
+ help
+ Say Y to allow a crash dump to be preserved in memory
+ pages across a soft reboot and written out to disk
+ thereafter. For this to work, CRASH_DUMP must be
+ configured as part of the kernel (not as a module).
+
+config CRASH_DUMP_COMPRESS_RLE
+ tristate "Crash dump RLE compression"
+ depends on CRASH_DUMP
+ help
+ Say Y to allow saving dumps with Run Length Encoding compression.
+
+config CRASH_DUMP_COMPRESS_GZIP
+ tristate "Crash dump GZIP compression"
+ select ZLIB_INFLATE
+ select ZLIB_DEFLATE
+ depends on CRASH_DUMP
+ help
+ Say Y to allow saving dumps with Gnu Zip compression.
+
config DEBUG_KERNEL
bool "Kernel debugging"
help
If you don't debug the kernel, you can say N, but we may not be able
to solve problems without frame pointers.
-config 4KSTACKS
- bool "Use 4Kb for kernel stacks instead of 8Kb"
- help
- If you say Y here the kernel will use a 4Kb stacksize for the
- kernel stack attached to each process/thread. This facilitates
- running more threads on a system and also reduces the pressure
- on the VM subsystem for higher order allocations. This option
- will also use IRQ stacks to compensate for the reduced stackspace.
-
config X86_FIND_SMP_CONFIG
bool
depends on X86_LOCAL_APIC || X86_VOYAGER
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
depends on X86_SMP || (X86_VOYAGER && SMP)
default y
-# std_resources is overridden for pc9800, but that's not
-# a currently selectable arch choice
-config X86_STD_RESOURCES
- bool
- default y
-
config PC
bool
depends on X86 && !EMBEDDED
LDFLAGS_vmlinux :=
CHECK := $(CHECK) -D__i386__=1
-CFLAGS += -pipe -msoft-float
+CFLAGS += -pipe -msoft-float -m32 -fno-builtin-sprintf -fno-builtin-log2 -fno-builtin-puts
# prevent gcc from keeping the stack 16 byte aligned
CFLAGS += $(call check_gcc,-mpreferred-stack-boundary=2,)
drivers-$(CONFIG_PM) += arch/i386/power/
CFLAGS += $(mflags-y)
-AFLAGS += $(mflags-y)
+AFLAGS += $(mflags-y) -m32
boot := arch/i386/boot
all: bzImage
-BOOTIMAGE=arch/i386/boot/bzImage
-zImage zlilo zdisk: BOOTIMAGE=arch/i386/boot/zImage
+# KBUILD_IMAGE specify target image being built
+ KBUILD_IMAGE := $(boot)/bzImage
+zImage zlilo zdisk: KBUILD_IMAGE := arch/i386/boot/zImage
zImage bzImage: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) $(BOOTIMAGE)
+ $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
compressed: zImage
zlilo bzlilo: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zlilo
+ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zlilo
zdisk bzdisk: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) zdisk
+ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) zdisk
install fdimage fdimage144 fdimage288: vmlinux
- $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@
+ $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(KBUILD_IMAGE) $@
prepare: include/asm-$(ARCH)/asm_offsets.h
CLEAN_FILES += include/asm-$(ARCH)/asm_offsets.h
install: $(BOOTIMAGE)
sh $(srctree)/$(src)/install.sh $(KERNELRELEASE) $< System.map "$(INSTALL_PATH)"
+ if [ -f init/kerntypes.o ]; then cp init/kerntypes.o $(INSTALL_PATH)/Kerntypes; fi
*/
static unsigned char *real_mode; /* Pointer to real-mode data */
-#define EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
+#define RM_EXT_MEM_K (*(unsigned short *)(real_mode + 0x2))
#ifndef STANDARD_MEMORY_BIOS_CALL
-#define ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
+#define RM_ALT_MEM_K (*(unsigned long *)(real_mode + 0x1e0))
#endif
-#define SCREEN_INFO (*(struct screen_info *)(real_mode+0))
-#define EDID_INFO (*(struct edid_info *)(real_mode+0x440))
+#define RM_SCREEN_INFO (*(struct screen_info *)(real_mode+0))
extern char input_data[];
extern int input_len;
int x,y,pos;
char c;
- x = SCREEN_INFO.orig_x;
- y = SCREEN_INFO.orig_y;
+ x = RM_SCREEN_INFO.orig_x;
+ y = RM_SCREEN_INFO.orig_y;
while ( ( c = *s++ ) != '\0' ) {
if ( c == '\n' ) {
}
}
- SCREEN_INFO.orig_x = x;
- SCREEN_INFO.orig_y = y;
+ RM_SCREEN_INFO.orig_x = x;
+ RM_SCREEN_INFO.orig_y = y;
pos = (x + cols * y) * 2; /* Update cursor position */
outb_p(14, vidport);
static void setup_normal_output_buffer(void)
{
#ifdef STANDARD_MEMORY_BIOS_CALL
- if (EXT_MEM_K < 1024) error("Less than 2MB of memory");
+ if (RM_EXT_MEM_K < 1024) error("Less than 2MB of memory");
#else
- if ((ALT_MEM_K > EXT_MEM_K ? ALT_MEM_K : EXT_MEM_K) < 1024) error("Less than 2MB of memory");
+ if ((RM_ALT_MEM_K > RM_EXT_MEM_K ? RM_ALT_MEM_K : RM_EXT_MEM_K) < 1024) error("Less than 2MB of memory");
#endif
output_data = (char *)0x100000; /* Points to 1M */
free_mem_end_ptr = (long)real_mode;
{
high_buffer_start = (uch *)(((ulg)&end) + HEAP_SIZE);
#ifdef STANDARD_MEMORY_BIOS_CALL
- if (EXT_MEM_K < (3*1024)) error("Less than 4MB of memory");
+ if (RM_EXT_MEM_K < (3*1024)) error("Less than 4MB of memory");
#else
- if ((ALT_MEM_K > EXT_MEM_K ? ALT_MEM_K : EXT_MEM_K) < (3*1024)) error("Less than 4MB of memory");
+ if ((RM_ALT_MEM_K > RM_EXT_MEM_K ? RM_ALT_MEM_K : RM_EXT_MEM_K) <
+ (3*1024))
+ error("Less than 4MB of memory");
#endif
mv->low_buffer_start = output_data = (char *)LOW_BUFFER_START;
low_buffer_end = ((unsigned int)real_mode > LOW_BUFFER_MAX
{
real_mode = rmode;
- if (SCREEN_INFO.orig_video_mode == 7) {
+ if (RM_SCREEN_INFO.orig_video_mode == 7) {
vidmem = (char *) 0xb0000;
vidport = 0x3b4;
} else {
vidport = 0x3d4;
}
- lines = SCREEN_INFO.orig_video_lines;
- cols = SCREEN_INFO.orig_video_cols;
+ lines = RM_SCREEN_INFO.orig_video_lines;
+ cols = RM_SCREEN_INFO.orig_video_cols;
if (free_mem_ptr < 0x100000) setup_normal_output_buffer();
else setup_output_buffer_if_we_run_high(mv);
* conformant to T13 Committee www.t13.org
* projects 1572D, 1484D, 1386D, 1226DT
* disk signature read by Matt Domsch <Matt_Domsch@dell.com>
- * and Andrew Wilks <Andrew_Wilks@dell.com> September 2003
+ * and Andrew Wilks <Andrew_Wilks@dell.com> September 2003, June 2004
 * legacy CHS retrieval by Patrick J. LoPresti <patl@users.sourceforge.net>
* March 2004
*/
#include <linux/edd.h>
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
-# Read the first sector of device 80h and store the 4-byte signature
+# Read the first sector of each BIOS disk device and store the 4-byte signature
+edd_mbr_sig_start:
+ movb $0, (EDD_MBR_SIG_NR_BUF) # zero value at EDD_MBR_SIG_NR_BUF
+ movb $0x80, %dl # from device 80
+ movw $EDD_MBR_SIG_BUF, %bx # store buffer ptr in bx
+edd_mbr_sig_read:
movl $0xFFFFFFFF, %eax
- movl %eax, (DISK80_SIG_BUFFER) # assume failure
+ movl %eax, (%bx) # assume failure
+ pushw %bx
movb $READ_SECTORS, %ah
movb $1, %al # read 1 sector
- movb $0x80, %dl # from device 80
movb $0, %dh # at head 0
movw $1, %cx # cylinder 0, sector 0
pushw %es
pushw %ds
popw %es
- movw $EDDBUF, %bx
- pushw %dx # work around buggy BIOSes
+ movw $EDDBUF, %bx # disk's data goes into EDDBUF
+ pushw %dx # work around buggy BIOSes
stc # work around buggy BIOSes
- int $0x13
+ int $0x13
sti # work around buggy BIOSes
- popw %dx
- jc disk_sig_done
- movl (EDDBUF+MBR_SIG_OFFSET), %eax
- movl %eax, (DISK80_SIG_BUFFER) # store success
-disk_sig_done:
+ popw %dx
popw %es
+ popw %bx
+ jc edd_mbr_sig_done # on failure, we're done.
+ movl (EDDBUF+EDD_MBR_SIG_OFFSET), %eax # read sig out of the MBR
+ movl %eax, (%bx) # store success
+ incb (EDD_MBR_SIG_NR_BUF) # note that we stored something
+ incb %dl # increment to next device
+ addw $4, %bx # increment sig buffer ptr
+ cmpb $EDD_MBR_SIG_MAX, (EDD_MBR_SIG_NR_BUF) # Out of space?
+ jb edd_mbr_sig_read # keep looping
+edd_mbr_sig_done:
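The new assembly loop reads one sector from each BIOS disk starting at device 80h, stores each 4-byte MBR signature, counts the stored entries, and stops at the first read failure or once EDD_MBR_SIG_MAX signatures are buffered. The same logic rendered as a C sketch (fake_read_sig() is a made-up stand-in for the INT 13h read, and the signatures are invented):

```c
#define EDD_MBR_SIG_MAX 16

static unsigned int sig_buf[EDD_MBR_SIG_MAX];

/* Fake INT 13h: devices 0x80 and 0x81 exist, higher ones fail. */
static int fake_read_sig(unsigned char dev, unsigned int *sig)
{
	if (dev > 0x81)
		return -1;
	*sig = 0xAA550000u | dev;	/* made-up signature */
	return 0;
}

static int collect_mbr_sigs(void)
{
	int nr = 0;
	unsigned char dev = 0x80;	/* first BIOS hard disk */

	while (nr < EDD_MBR_SIG_MAX) {
		unsigned int sig;
		if (fake_read_sig(dev, &sig) != 0)
			break;			/* on failure, we're done */
		sig_buf[nr++] = sig;		/* store success */
		dev++;				/* increment to next device */
	}
	return nr;	/* the count the loop keeps in EDD_MBR_SIG_NR_BUF */
}
```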
# Do the BIOS Enhanced Disk Drive calls
# This consists of two calls:
# can be located anywhere in
# low memory 0x10000 or higher.
-ramdisk_max: .long (MAXMEM-1) & 0x7fffffff
+ramdisk_max: .long (__MAXMEM-1) & 0x7fffffff
# (Header version 0x0203 or later)
# The highest safe address for
# the contents of an initrd
call mode_set # Set the mode
jc vid1
+#if 0
leaw badmdt, %si # Invalid mode ID
call prtstr
+#else
+ jmp vid1
+#endif /* CONFIG_VIDEO_IGNORE_BAD_MODE */
vid2: call mode_menu
vid1:
#ifdef CONFIG_VIDEO_RETAIN
# CONFIG_NFS_DIRECTIO is not set
CONFIG_NFSD=y
# CONFIG_NFSD_V3 is not set
-# CONFIG_NFSD_TCP is not set
+CONFIG_NFSD_TCP=y
CONFIG_LOCKD=y
CONFIG_EXPORTFS=y
CONFIG_SUNRPC=y
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o vm86.o \
ptrace.o i8259.o ioport.o ldt.o setup.o time.o sys_i386.o \
pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \
- doublefault.o
+ doublefault.o entry_trampoline.o
obj-y += cpu/
obj-y += timers/
obj-$(CONFIG_HPET_TIMER) += time_hpet.o
obj-$(CONFIG_EFI) += efi.o efi_stub.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
-obj-$(CONFIG_X86_STD_RESOURCES) += std_resources.o
-EXTRA_AFLAGS := -traditional
+EXTRA_AFLAGS := -traditional -m32
obj-$(CONFIG_SCx200) += scx200.o
cmd_syscall = $(CC) -nostdlib $(SYSCFLAGS_$(@F)) \
-Wl,-T,$(filter-out FORCE,$^) -o $@
-vsyscall-flags = -shared -s -Wl,-soname=linux-gate.so.1
+vsyscall-flags = -m32 -shared -s -Wl,-soname=linux-gate.so.1
SYSCFLAGS_vsyscall-sysenter.so = $(vsyscall-flags)
SYSCFLAGS_vsyscall-int80.so = $(vsyscall-flags)
$(obj)/built-in.o: $(obj)/vsyscall-syms.o
$(obj)/built-in.o: ld_flags += -R $(obj)/vsyscall-syms.o
-SYSCFLAGS_vsyscall-syms.o = -r
+SYSCFLAGS_vsyscall-syms.o = -m32 -r
$(obj)/vsyscall-syms.o: $(src)/vsyscall.lds $(obj)/vsyscall-sysenter.o FORCE
$(call if_changed,syscall)
#include <linux/acpi.h>
#include <linux/efi.h>
#include <linux/irq.h>
-#include <asm/pgalloc.h>
+#include <linux/module.h>
+
+#include <asm/pgtable.h>
#include <asm/io_apic.h>
#include <asm/apic.h>
#include <asm/io.h>
idx = FIX_ACPI_END;
while (mapped_size < size) {
if (--idx < FIX_ACPI_BEGIN)
- return 0; /* cannot handle this */
+ return NULL; /* cannot handle this */
phys += PAGE_SIZE;
set_fixmap(idx, phys);
mapped_size += PAGE_SIZE;
return 0;
}
+unsigned int acpi_register_gsi(u32 gsi, int edge_level, int active_high_low)
+{
+ unsigned int irq;
+
+#ifdef CONFIG_PCI
+ /*
+ * Make sure all (legacy) PCI IRQs are set as level-triggered.
+ */
+ if (acpi_irq_model == ACPI_IRQ_MODEL_PIC) {
+ static u16 irq_mask;
+ extern void eisa_set_level_irq(unsigned int irq);
+
+ if (edge_level == ACPI_LEVEL_SENSITIVE) {
+ if ((gsi < 16) && !((1 << gsi) & irq_mask)) {
+ Dprintk(KERN_DEBUG PREFIX "Setting GSI %u as level-triggered\n", gsi);
+ irq_mask |= (1 << gsi);
+ eisa_set_level_irq(gsi);
+ }
+ }
+ }
+#endif
+
+#ifdef CONFIG_X86_IO_APIC
+ if (acpi_irq_model == ACPI_IRQ_MODEL_IOAPIC) {
+ mp_register_gsi(gsi, edge_level, active_high_low);
+ }
+#endif
+ acpi_gsi_to_irq(gsi, &irq);
+ return irq;
+}
+EXPORT_SYMBOL(acpi_register_gsi);
+
static unsigned long __init
acpi_scan_rsdp (
unsigned long start,
* RSDP signature.
*/
for (offset = 0; offset < length; offset += 16) {
- if (strncmp((char *) (start + offset), "RSD PTR ", sig_len))
+ if (strncmp((char *) __va(start + offset), "RSD PTR ", sig_len))
continue;
return (start + offset);
}
static int __init acpi_parse_fadt(unsigned long phys, unsigned long size)
{
- struct fadt_descriptor_rev2 *fadt =0;
+ struct fadt_descriptor_rev2 *fadt = NULL;
fadt = (struct fadt_descriptor_rev2*) __acpi_map_table(phys,size);
if(!fadt) {
extern unsigned long FASTCALL(acpi_copy_wakeup_routine(unsigned long));
-static void init_low_mapping(pgd_t *pgd, int pgd_limit)
+static void map_low(pgd_t *pgd_base, unsigned long start, unsigned long end)
{
- int pgd_ofs = 0;
+ unsigned long vaddr;
+ pmd_t *pmd;
+ pgd_t *pgd;
+ int i, j;
- while ((pgd_ofs < pgd_limit) && (pgd_ofs + USER_PTRS_PER_PGD < PTRS_PER_PGD)) {
- set_pgd(pgd, *(pgd+USER_PTRS_PER_PGD));
- pgd_ofs++, pgd++;
+ pgd = pgd_base;
+
+ for (i = 0; i < PTRS_PER_PGD; pgd++, i++) {
+ vaddr = i*PGDIR_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ pmd = pmd_offset(pgd, 0);
+ for (j = 0; j < PTRS_PER_PMD; pmd++, j++) {
+ vaddr = i*PGDIR_SIZE + j*PMD_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ if (vaddr < start)
+ continue;
+ set_pmd(pmd, __pmd(_KERNPG_TABLE + _PAGE_PSE +
+ vaddr - start));
+ }
}
}
{
if (!acpi_wakeup_address)
return 1;
- init_low_mapping(swapper_pg_dir, USER_PTRS_PER_PGD);
+ if (!cpu_has_pse)
+ return 1;
+ map_low(swapper_pg_dir, 0, LOW_MAPPINGS_SIZE);
memcpy((void *) acpi_wakeup_address, &wakeup_start, &wakeup_end - &wakeup_start);
acpi_copy_wakeup_routine(acpi_wakeup_address);
movw $0x0e00 + 'i', %fs:(0x12)
# need a gdt
+	# use the gdt copied into low memory
+ lea temp_gdt_table - wakeup_code, %eax
+ xor %ebx, %ebx
+ movw %ds, %bx
+ shll $4, %ebx
+ addl %ebx, %eax
+ movl %eax, real_save_gdt + 2 - wakeup_code
lgdt real_save_gdt - wakeup_code
movl real_save_cr0 - wakeup_code, %eax
real_magic: .long 0
video_mode: .long 0
video_flags: .long 0
+temp_gdt_table: .fill GDT_ENTRIES, 8, 0
bogus_real_magic:
movw $0x0e00 + 'B', %fs:(0x12)
movl %edx, real_save_cr0 - wakeup_start (%eax)
sgdt real_save_gdt - wakeup_start (%eax)
+	# the gdt won't be addressable from real mode in the 4g/4g split,
+	# so copy it to low memory
+ xor %ecx, %ecx
+ movw saved_gdt, %cx
+ movl saved_gdt + 2, %esi
+ lea temp_gdt_table - wakeup_start (%eax), %edi
+ rep movsb
movl saved_videomode, %edx
movl %edx, video_mode - wakeup_start (%eax)
movl acpi_video_flags, %edx
#include <asm/smp.h>
#include <asm/mtrr.h>
#include <asm/mpspec.h>
-#include <asm/pgalloc.h>
#include <asm/desc.h>
#include <asm/arch_hooks.h>
#include <asm/hpet.h>
apic_write_around(APIC_LVT0, v);
}
+int get_physical_broadcast(void)
+{
+ unsigned int lvr, version;
+ lvr = apic_read(APIC_LVR);
+ version = GET_APIC_VERSION(lvr);
+ if (version >= 0x14)
+ return 0xff;
+ else
+ return 0xf;
+}
+
int get_maxlvt(void)
{
unsigned int v, ver, maxlvt;
#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/smp_lock.h>
+#include <linux/dmi.h>
+#include <linux/suspend.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/desc.h>
-#include <asm/suspend.h>
#include "io_ports.h"
&apm_bios_fops
};
+
+/* Simple "print if true" callback */
+static int __init print_if_true(struct dmi_system_id *d)
+{
+ printk("%s\n", d->ident);
+ return 0;
+}
+
+/*
+ * Some BIOSes enable the PS/2 mouse (touchpad) at resume, even if it was
+ * disabled before the suspend. Linux used to get terribly confused by that.
+ */
+static int __init broken_ps2_resume(struct dmi_system_id *d)
+{
+ printk(KERN_INFO "%s machine detected. Mousepad Resume Bug workaround hopefully not needed.\n", d->ident);
+ return 0;
+}
+
+/* Some BIOSes have a broken protected-mode poweroff and need to use realmode */
+static int __init set_realmode_power_off(struct dmi_system_id *d)
+{
+ if (apm_info.realmode_power_off == 0) {
+ apm_info.realmode_power_off = 1;
+ printk(KERN_INFO "%s bios detected. Using realmode poweroff only.\n", d->ident);
+ }
+ return 0;
+}
+
+/* Some laptops require interrupts to be enabled during APM calls */
+static int __init set_apm_ints(struct dmi_system_id *d)
+{
+ if (apm_info.allow_ints == 0) {
+ apm_info.allow_ints = 1;
+ printk(KERN_INFO "%s machine detected. Enabling interrupts during APM calls.\n", d->ident);
+ }
+ return 0;
+}
+
+/* Some APM bioses corrupt memory or just plain do not work */
+static int __init apm_is_horked(struct dmi_system_id *d)
+{
+ if (apm_info.disabled == 0) {
+ apm_info.disabled = 1;
+ printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
+ }
+ return 0;
+}
+
+static int __init apm_is_horked_d850md(struct dmi_system_id *d)
+{
+ if (apm_info.disabled == 0) {
+ apm_info.disabled = 1;
+ printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
+		printk(KERN_INFO "This bug is fixed in BIOS P15, which is available for\n");
+		printk(KERN_INFO "download from support.intel.com\n");
+ }
+ return 0;
+}
+
+/* Some APM bioses hang on APM idle calls */
+static int __init apm_likes_to_melt(struct dmi_system_id *d)
+{
+ if (apm_info.forbid_idle == 0) {
+ apm_info.forbid_idle = 1;
+ printk(KERN_INFO "%s machine detected. Disabling APM idle calls.\n", d->ident);
+ }
+ return 0;
+}
+
+/*
+ * Check for clue-free BIOS implementations that use
+ * the following QA technique:
+ *
+ * [ Write BIOS Code ]<------
+ * | ^
+ * < Does it Compile >----N--
+ * |Y ^
+ * < Does it Boot Win98 >-N--
+ * |Y
+ * [Ship It]
+ *
+ * Phoenix A04 08/24/2000 is known bad (Dell Inspiron 5000e)
+ * Phoenix A07 09/29/2000 is known good (Dell Inspiron 5000)
+ */
+static int __init broken_apm_power(struct dmi_system_id *d)
+{
+ apm_info.get_power_status_broken = 1;
+ printk(KERN_WARNING "BIOS strings suggest APM bugs, disabling power status reporting.\n");
+ return 0;
+}
+
+/*
+ * This BIOS swaps the APM minute reporting bytes over (many Sony laptops
+ * have this problem).
+ */
+static int __init swab_apm_power_in_minutes(struct dmi_system_id *d)
+{
+ apm_info.get_power_status_swabinminutes = 1;
+ printk(KERN_WARNING "BIOS strings suggest APM reports battery life in minutes and wrong byte order.\n");
+ return 0;
+}
+
+static struct dmi_system_id __initdata apm_dmi_table[] = {
+ {
+ print_if_true,
+ KERN_WARNING "IBM T23 - BIOS 1.03b+ and controller firmware 1.02+ may be needed for Linux APM.",
+ { DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
+ DMI_MATCH(DMI_BIOS_VERSION, "1AET38WW (1.01b)"), },
+ },
+ { /* Handle problems with APM on the C600 */
+ broken_ps2_resume, "Dell Latitude C600",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Latitude C600"), },
+ },
+ { /* Allow interrupts during suspend on Dell Latitude laptops*/
+ set_apm_ints, "Dell Latitude",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Latitude C510"), }
+ },
+ { /* APM crashes */
+ apm_is_horked, "Dell Inspiron 2500",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
+ DMI_MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION,"A11"), },
+ },
+ { /* Allow interrupts during suspend on Dell Inspiron laptops*/
+ set_apm_ints, "Dell Inspiron", {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 4000"), },
+ },
+ { /* Handle problems with APM on Inspiron 5000e */
+ broken_apm_power, "Dell Inspiron 5000e",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "A04"),
+ DMI_MATCH(DMI_BIOS_DATE, "08/24/2000"), },
+ },
+ { /* Handle problems with APM on Inspiron 2500 */
+ broken_apm_power, "Dell Inspiron 2500",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "A12"),
+ DMI_MATCH(DMI_BIOS_DATE, "02/04/2002"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Dell Dimension 4100",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS-Z"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
+ DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
+ },
+ { /* Allow interrupts during suspend on Compaq laptops */
+ set_apm_ints, "Compaq 12XL125",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Compaq"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Compaq PC"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "4.06"), },
+ },
+ { /* Allow interrupts during APM or the clock goes slow */
+ set_apm_ints, "ASUSTeK",
+ { DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "L8400K series Notebook PC"), },
+ },
+ { /* APM blows on shutdown */
+ apm_is_horked, "ABIT KX7-333[R]",
+ { DMI_MATCH(DMI_BOARD_VENDOR, "ABIT"),
+ DMI_MATCH(DMI_BOARD_NAME, "VT8367-8233A (KX7-333[R])"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Trigem Delhi3",
+ { DMI_MATCH(DMI_SYS_VENDOR, "TriGem Computer, Inc"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Delhi3"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Fujitsu-Siemens",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "hoenix/FUJITSU SIEMENS"),
+ DMI_MATCH(DMI_BIOS_VERSION, "Version1.01"), },
+ },
+ { /* APM crashes */
+ apm_is_horked_d850md, "Intel D850MD",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
+ DMI_MATCH(DMI_BIOS_VERSION, "MV85010A.86A.0016.P07.0201251536"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Intel D810EMO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
+ DMI_MATCH(DMI_BIOS_VERSION, "MO81010A.86A.0008.P04.0004170800"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Dell XPS-Z",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
+ DMI_MATCH(DMI_BIOS_VERSION, "A11"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS-Z"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Sharp PC-PJ/AX",
+ { DMI_MATCH(DMI_SYS_VENDOR, "SHARP"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PC-PJ/AX"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "SystemSoft"),
+ DMI_MATCH(DMI_BIOS_VERSION, "Version R2.08"), },
+ },
+ { /* APM crashes */
+ apm_is_horked, "Dell Inspiron 2500",
+ { DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "A11"), },
+ },
+ { /* APM idle hangs */
+ apm_likes_to_melt, "Jabil AMD",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
+ DMI_MATCH(DMI_BIOS_VERSION, "0AASNP06"), },
+ },
+ { /* APM idle hangs */
+ apm_likes_to_melt, "AMI Bios",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
+ DMI_MATCH(DMI_BIOS_VERSION, "0AASNP05"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-N505X(DE) */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0206H"),
+ DMI_MATCH(DMI_BIOS_DATE, "08/23/99"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-N505VX */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "W2K06H0"),
+ DMI_MATCH(DMI_BIOS_DATE, "02/03/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-XG29 */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0117A0"),
+ DMI_MATCH(DMI_BIOS_DATE, "04/25/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z600NE */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0121Z1"),
+ DMI_MATCH(DMI_BIOS_DATE, "05/11/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z600NE */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "WME01Z1"),
+ DMI_MATCH(DMI_BIOS_DATE, "08/11/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z600LEK(DE) */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0206Z3"),
+ DMI_MATCH(DMI_BIOS_DATE, "12/25/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z505LS */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0203D0"),
+ DMI_MATCH(DMI_BIOS_DATE, "05/12/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z505LS */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0203Z3"),
+ DMI_MATCH(DMI_BIOS_DATE, "08/25/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-Z505LS (with updated BIOS) */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0209Z3"),
+ DMI_MATCH(DMI_BIOS_DATE, "05/12/01"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-F104K */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0204K2"),
+ DMI_MATCH(DMI_BIOS_DATE, "08/28/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-C1VN/C1VE */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0208P1"),
+ DMI_MATCH(DMI_BIOS_DATE, "11/09/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-C1VE */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "R0204P1"),
+ DMI_MATCH(DMI_BIOS_DATE, "09/12/00"), },
+ },
+ { /* Handle problems with APM on Sony Vaio PCG-C1VE */
+ swab_apm_power_in_minutes, "Sony VAIO",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
+ DMI_MATCH(DMI_BIOS_VERSION, "WXPO1Z3"),
+ DMI_MATCH(DMI_BIOS_DATE, "10/26/01"), },
+ },
+ { /* broken PM poweroff bios */
+ set_realmode_power_off, "Award Software v4.60 PGMA",
+ { DMI_MATCH(DMI_BIOS_VENDOR, "Award Software International, Inc."),
+ DMI_MATCH(DMI_BIOS_VERSION, "4.60 PGMA"),
+ DMI_MATCH(DMI_BIOS_DATE, "134526184"), },
+ },
+
+ /* Generic per vendor APM settings */
+
+ { /* Allow interrupts during suspend on IBM laptops */
+ set_apm_ints, "IBM",
+ { DMI_MATCH(DMI_SYS_VENDOR, "IBM"), },
+ },
+
+ { }
+};
+
/*
* Just start the APM thread. We do NOT want to do APM BIOS
* calls from anything but the APM thread, if for no other reason
int ret;
int i;
+ dmi_check_system(apm_dmi_table);
+
if (apm_info.bios.version == 0) {
printk(KERN_INFO "apm: BIOS not found.\n");
return -ENODEV;
OFFSET(TI_preempt_count, thread_info, preempt_count);
OFFSET(TI_addr_limit, thread_info, addr_limit);
OFFSET(TI_restart_block, thread_info, restart_block);
+ OFFSET(TI_sysenter_return, thread_info, sysenter_return);
BLANK();
OFFSET(EXEC_DOMAIN_handler, exec_domain, handler);
DEFINE(TSS_sysenter_esp0, offsetof(struct tss_struct, esp0) -
sizeof(struct tss_struct));
+ DEFINE(TI_task, offsetof (struct thread_info, task));
+ DEFINE(TI_exec_domain, offsetof (struct thread_info, exec_domain));
+ DEFINE(TI_flags, offsetof (struct thread_info, flags));
+ DEFINE(TI_preempt_count, offsetof (struct thread_info, preempt_count));
+ DEFINE(TI_addr_limit, offsetof (struct thread_info, addr_limit));
+ DEFINE(TI_sysenter_return,
+ offsetof (struct thread_info, sysenter_return));
+ DEFINE(TI_real_stack, offsetof (struct thread_info, real_stack));
+ DEFINE(TI_virtual_stack, offsetof (struct thread_info, virtual_stack));
+ DEFINE(TI_user_pgd, offsetof (struct thread_info, user_pgd));
+
+ DEFINE(FIX_ENTRY_TRAMPOLINE_0_addr,
+ __fix_to_virt(FIX_ENTRY_TRAMPOLINE_0));
+ DEFINE(FIX_VSYSCALL_addr, __fix_to_virt(FIX_VSYSCALL));
DEFINE(PAGE_SIZE_asm, PAGE_SIZE);
+ DEFINE(task_thread_db7,
+ offsetof (struct task_struct, thread.debugreg[7]));
}
generic_identify(c);
- printk(KERN_DEBUG "CPU: After generic identify, caps: %08lx %08lx %08lx %08lx\n",
+ printk(KERN_DEBUG "CPU: After generic identify, caps: %08lx %08lx %08lx %08lx\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
if (this_cpu->c_identify) {
this_cpu->c_identify(c);
- printk(KERN_DEBUG "CPU: After vendor identify, caps: %08lx %08lx %08lx %08lx\n",
+ printk(KERN_DEBUG "CPU: After vendor identify, caps: %08lx %08lx %08lx %08lx\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
if (disable_pse)
clear_bit(X86_FEATURE_PSE, c->x86_capability);
+ /* hack: disable SEP for non-NX cpus; SEP breaks Execshield. */
+ if (!test_bit(X86_FEATURE_NX, c->x86_capability))
+ clear_bit(X86_FEATURE_SEP, c->x86_capability);
+
/* If the model name is still unset, do table lookup. */
if ( !c->x86_model_id[0] ) {
char *p;
/* Now the feature flags better reflect actual CPU features! */
- printk(KERN_DEBUG "CPU: After all inits, caps: %08lx %08lx %08lx %08lx\n",
+ printk(KERN_DEBUG "CPU: After all inits, caps: %08lx %08lx %08lx %08lx\n",
c->x86_capability[0],
c->x86_capability[1],
c->x86_capability[2],
void __init early_cpu_init(void)
{
- early_cpu_detect();
intel_cpu_init();
cyrix_init_cpu();
nsc_init_cpu();
rise_init_cpu();
nexgen_init_cpu();
umc_init_cpu();
+ early_cpu_detect();
#ifdef CONFIG_DEBUG_PAGEALLOC
/* pse is not compatible with on-the-fly unmapping,
set_tss_desc(cpu,t);
cpu_gdt_table[cpu][GDT_ENTRY_TSS].b &= 0xfffffdff;
load_TR_desc();
- load_LDT(&init_mm.context);
+ if (cpu)
+ load_LDT(&init_mm.context);
/* Set up doublefault TSS pointer in the GDT */
__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
cpu_gdt_table[cpu][GDT_ENTRY_DOUBLEFAULT_TSS].b &= 0xfffffdff;
+ if (cpu)
+ trap_init_virtual_GDT();
+
/* Clear %fs and %gs. */
asm volatile ("xorl %eax, %eax; movl %eax, %fs; movl %eax, %gs");
#include <linux/cpufreq.h>
#include <linux/slab.h>
#include <linux/string.h>
+#include <linux/dmi.h>
#include <asm/msr.h>
#include <asm/timex.h>
}
+static int __init acer_cpufreq_pst(struct dmi_system_id *d)
+{
+ printk(KERN_WARNING "%s laptop with broken PST tables in BIOS detected.\n", d->ident);
+ printk(KERN_WARNING "You need to downgrade to 3A21 (09/09/2002), or try a newer BIOS than 3A71 (01/20/2003)\n");
+ printk(KERN_WARNING "cpufreq scaling has been disabled as a result of this.\n");
+ return 0;
+}
+
+/*
+ * Some Athlon laptops have badly broken PST tables in the BIOS.
+ * A BIOS update is all that can save them.
+ * Mention this, and disable cpufreq.
+ */
+static struct dmi_system_id __initdata powernow_dmi_table[] = {
+ {
+ .callback = acer_cpufreq_pst,
+ .ident = "Acer Aspire",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Insyde Software"),
+ DMI_MATCH(DMI_BIOS_VERSION, "3A71"),
+ },
+ },
+ { }
+};
+
static int __init powernow_cpu_init (struct cpufreq_policy *policy)
{
union msr_fidvidstatus fidvidstatus;
}
dprintk(KERN_INFO PFX "FSB: %3d.%03d MHz\n", fsb/1000, fsb%1000);
- if ((dmi_broken & BROKEN_CPUFREQ) || acpi_force) {
+ if (dmi_check_system(powernow_dmi_table) || acpi_force) {
printk (KERN_INFO PFX "PSB/PST known to be broken. Trying ACPI instead\n");
result = powernow_acpi_init();
} else {
return -ENODEV;
}
-static int __exit powernowk8_cpu_exit (struct cpufreq_policy *pol)
+static int powernowk8_cpu_exit (struct cpufreq_policy *pol)
{
struct powernow_k8_data *data = powernow_data[pol->cpu];
.verify = powernowk8_verify,
.target = powernowk8_target,
.init = powernowk8_cpu_init,
- .exit = powernowk8_cpu_exit,
+ .exit = __devexit_p(powernowk8_cpu_exit),
.get = powernowk8_get,
.name = "powernow-k8",
.owner = THIS_MODULE,
}
/* driver entry point for term */
-static void __exit powernowk8_exit(void)
+static void powernowk8_exit(void)
{
dprintk(KERN_INFO PFX "exit\n");
BANIAS(1500),
BANIAS(1600),
BANIAS(1700),
- { 0, }
+ { NULL, }
};
#undef _BANIAS
#undef BANIAS
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/uaccess.h>
+#include <asm/desc.h>
#include "cpu.h"
#include <mach_apic.h>
#endif
-extern int trap_init_f00f_bug(void);
-
#ifdef CONFIG_X86_INTEL_USERCOPY
/*
* Alignment at which movsl is preferred for bulk memory copies.
c->f00f_bug = 1;
if ( !f00f_workaround_enabled ) {
- trap_init_f00f_bug();
+ trap_init_virtual_IDT();
printk(KERN_NOTICE "Intel Pentium with F0 0F bug - workaround enabled.\n");
f00f_workaround_enabled = 1;
}
mtrr_type def_type;
};
-static unsigned long smp_changes_mask __initdata = 0;
+static unsigned long smp_changes_mask;
struct mtrr_state mtrr_state = {};
/* AMD-defined */
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, "syscall", NULL, NULL, NULL, NULL,
- NULL, NULL, NULL, "mp", NULL, NULL, "mmxext", NULL,
+ NULL, NULL, NULL, "mp", "nx", NULL, "mmxext", NULL,
NULL, NULL, NULL, NULL, NULL, "lm", "3dnowext", "3dnow",
/* Transmeta-defined */
#include <linux/fs.h>
#include <linux/smp_lock.h>
#include <linux/fs.h>
+#include <linux/device.h>
+#include <linux/cpu.h>
+#include <linux/notifier.h>
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/uaccess.h>
#include <asm/system.h>
+static struct class_simple *cpuid_class;
+
#ifdef CONFIG_SMP
struct cpuid_command {
.open = cpuid_open,
};
+static int cpuid_class_simple_device_add(int i)
+{
+ int err = 0;
+ struct class_device *class_err;
+
+ class_err = class_simple_device_add(cpuid_class,
+ MKDEV(CPUID_MAJOR, i), NULL, "cpu%d", i);
+ if (IS_ERR(class_err))
+ err = PTR_ERR(class_err);
+ return err;
+}
+
+static int __devinit cpuid_class_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+ unsigned int cpu = (unsigned long)hcpu;
+
+ switch (action) {
+ case CPU_ONLINE:
+ cpuid_class_simple_device_add(cpu);
+ break;
+ case CPU_DEAD:
+ class_simple_device_remove(MKDEV(CPUID_MAJOR, cpu));
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block cpuid_class_cpu_notifier =
+{
+ .notifier_call = cpuid_class_cpu_callback,
+};
+
int __init cpuid_init(void)
{
+ int i, err = 0;
+
if (register_chrdev(CPUID_MAJOR, "cpu/cpuid", &cpuid_fops)) {
printk(KERN_ERR "cpuid: unable to get major %d for cpuid\n",
CPUID_MAJOR);
- return -EBUSY;
+ err = -EBUSY;
+ goto out;
+ }
+ cpuid_class = class_simple_create(THIS_MODULE, "cpuid");
+ if (IS_ERR(cpuid_class)) {
+ err = PTR_ERR(cpuid_class);
+ goto out_chrdev;
}
+ for_each_online_cpu(i) {
+ err = cpuid_class_simple_device_add(i);
+ if (err != 0)
+ goto out_class;
+ }
+ register_cpu_notifier(&cpuid_class_cpu_notifier);
- return 0;
+ err = 0;
+ goto out;
+
+out_class:
+ for_each_online_cpu(i)
+ class_simple_device_remove(MKDEV(CPUID_MAJOR, i));
+ class_simple_destroy(cpuid_class);
+out_chrdev:
+ unregister_chrdev(CPUID_MAJOR, "cpu/cpuid");
+out:
+ return err;
}
void __exit cpuid_exit(void)
{
+ int cpu = 0;
+
+ for_each_online_cpu(cpu)
+ class_simple_device_remove(MKDEV(CPUID_MAJOR, cpu));
+ class_simple_destroy(cpuid_class);
unregister_chrdev(CPUID_MAJOR, "cpu/cpuid");
+ unregister_cpu_notifier(&cpuid_class_cpu_notifier);
}
module_init(cpuid_init);
-#include <linux/config.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/module.h>
-#include <linux/apm_bios.h>
#include <linux/slab.h>
#include <asm/acpi.h>
#include <asm/io.h>
#include <linux/pm.h>
#include <asm/system.h>
+#include <linux/dmi.h>
#include <linux/bootmem.h>
-unsigned long dmi_broken;
-EXPORT_SYMBOL(dmi_broken);
-int is_sony_vaio_laptop;
-int is_unsafe_smbus;
int es7000_plat = 0;
struct dmi_header
return -1;
}
-
-enum
-{
- DMI_BIOS_VENDOR,
- DMI_BIOS_VERSION,
- DMI_BIOS_DATE,
- DMI_SYS_VENDOR,
- DMI_PRODUCT_NAME,
- DMI_PRODUCT_VERSION,
- DMI_BOARD_VENDOR,
- DMI_BOARD_NAME,
- DMI_BOARD_VERSION,
- DMI_STRING_MAX
-};
-
static char *dmi_ident[DMI_STRING_MAX];
/*
}
/*
- * DMI callbacks for problem boards
- */
-
-struct dmi_strmatch
-{
- u8 slot;
- char *substr;
-};
-
-#define NONE 255
-
-struct dmi_blacklist
-{
- int (*callback)(struct dmi_blacklist *);
- char *ident;
- struct dmi_strmatch matches[4];
-};
-
-#define NO_MATCH { NONE, NULL}
-#define MATCH(a,b) { a, b }
-
-/*
- * Reboot options and system auto-detection code provided by
- * Dell Inc. so their systems "just work". :-)
- */
-
-/*
- * Some machines require the "reboot=b" commandline option, this quirk makes that automatic.
- */
-static __init int set_bios_reboot(struct dmi_blacklist *d)
-{
- extern int reboot_thru_bios;
- if (reboot_thru_bios == 0)
- {
- reboot_thru_bios = 1;
- printk(KERN_INFO "%s series board detected. Selecting BIOS-method for reboots.\n", d->ident);
- }
- return 0;
-}
-
-/*
- * Some machines require the "reboot=s" commandline option, this quirk makes that automatic.
- */
-static __init int set_smp_reboot(struct dmi_blacklist *d)
-{
-#ifdef CONFIG_SMP
- extern int reboot_smp;
- if (reboot_smp == 0)
- {
- reboot_smp = 1;
- printk(KERN_INFO "%s series board detected. Selecting SMP-method for reboots.\n", d->ident);
- }
-#endif
- return 0;
-}
-
-/*
- * Some machines require the "reboot=b,s" commandline option, this quirk makes that automatic.
+ * Ugly compatibility crap.
*/
-static __init int set_smp_bios_reboot(struct dmi_blacklist *d)
-{
- set_smp_reboot(d);
- set_bios_reboot(d);
- return 0;
-}
-
-/*
- * Some bioses have a broken protected mode poweroff and need to use realmode
- */
-
-static __init int set_realmode_power_off(struct dmi_blacklist *d)
-{
- if (apm_info.realmode_power_off == 0)
- {
- apm_info.realmode_power_off = 1;
- printk(KERN_INFO "%s bios detected. Using realmode poweroff only.\n", d->ident);
- }
- return 0;
-}
-
-
-/*
- * Some laptops require interrupts to be enabled during APM calls
- */
-
-static __init int set_apm_ints(struct dmi_blacklist *d)
-{
- if (apm_info.allow_ints == 0)
- {
- apm_info.allow_ints = 1;
- printk(KERN_INFO "%s machine detected. Enabling interrupts during APM calls.\n", d->ident);
- }
- return 0;
-}
-
-/*
- * Some APM bioses corrupt memory or just plain do not work
- */
-
-static __init int apm_is_horked(struct dmi_blacklist *d)
-{
- if (apm_info.disabled == 0)
- {
- apm_info.disabled = 1;
- printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
- }
- return 0;
-}
-
-static __init int apm_is_horked_d850md(struct dmi_blacklist *d)
-{
- if (apm_info.disabled == 0) {
- apm_info.disabled = 1;
- printk(KERN_INFO "%s machine detected. Disabling APM.\n", d->ident);
- printk(KERN_INFO "This bug is fixed in bios P15 which is available for \n");
- printk(KERN_INFO "download from support.intel.com \n");
- }
- return 0;
-}
-
-/*
- * Some APM bioses hang on APM idle calls
- */
-
-static __init int apm_likes_to_melt(struct dmi_blacklist *d)
-{
- if (apm_info.forbid_idle == 0) {
- apm_info.forbid_idle = 1;
- printk(KERN_INFO "%s machine detected. Disabling APM idle calls.\n", d->ident);
- }
- return 0;
-}
+#define dmi_blacklist dmi_system_id
+#define NO_MATCH { DMI_NONE, NULL }
+#define MATCH DMI_MATCH
/*
* Some machines, usually laptops, can't handle an enabled local APIC.
return 0;
}
-/*
- * Don't access SMBus on IBM systems which get corrupted eeproms
- */
-
-static __init int disable_smbus(struct dmi_blacklist *d)
-{
- if (is_unsafe_smbus == 0) {
- is_unsafe_smbus = 1;
- printk(KERN_INFO "%s machine detected. Disabling SMBus accesses.\n", d->ident);
- }
- return 0;
-}
-
-/*
- * Work around broken HP Pavilion Notebooks which assign USB to
- * IRQ 9 even though it is actually wired to IRQ 11
- */
-static __init int fix_broken_hp_bios_irq9(struct dmi_blacklist *d)
-{
-#ifdef CONFIG_PCI
- extern int broken_hp_bios_irq9;
- if (broken_hp_bios_irq9 == 0)
- {
- broken_hp_bios_irq9 = 1;
- printk(KERN_INFO "%s detected - fixing broken IRQ routing\n", d->ident);
- }
-#endif
- return 0;
-}
-
-/*
- * Check for clue free BIOS implementations who use
- * the following QA technique
- *
- * [ Write BIOS Code ]<------
- * | ^
- * < Does it Compile >----N--
- * |Y ^
- * < Does it Boot Win98 >-N--
- * |Y
- * [Ship It]
- *
- * Phoenix A04 08/24/2000 is known bad (Dell Inspiron 5000e)
- * Phoenix A07 09/29/2000 is known good (Dell Inspiron 5000)
- */
-
-static __init int broken_apm_power(struct dmi_blacklist *d)
-{
- apm_info.get_power_status_broken = 1;
- printk(KERN_WARNING "BIOS strings suggest APM bugs, disabling power status reporting.\n");
- return 0;
-}
-
-/*
- * Check for a Sony Vaio system
- *
- * On a Sony system we want to enable the use of the sonypi
- * driver for Sony-specific goodies like the camera and jogdial.
- * We also want to avoid using certain functions of the PnP BIOS.
- */
-
-static __init int sony_vaio_laptop(struct dmi_blacklist *d)
-{
- if (is_sony_vaio_laptop == 0)
- {
- is_sony_vaio_laptop = 1;
- printk(KERN_INFO "%s laptop detected.\n", d->ident);
- }
- return 0;
-}
-
-/*
- * This bios swaps the APM minute reporting bytes over (Many sony laptops
- * have this problem).
- */
-
-static __init int swab_apm_power_in_minutes(struct dmi_blacklist *d)
-{
- apm_info.get_power_status_swabinminutes = 1;
- printk(KERN_WARNING "BIOS strings suggest APM reports battery life in minutes and wrong byte order.\n");
- return 0;
-}
-
-/*
- * ASUS K7V-RM has broken ACPI table defining sleep modes
- */
-
-static __init int broken_acpi_Sx(struct dmi_blacklist *d)
-{
- printk(KERN_WARNING "Detected ASUS mainboard with broken ACPI sleep table\n");
- dmi_broken |= BROKEN_ACPI_Sx;
- return 0;
-}
/*
* Toshiba keyboard likes to repeat keys when they are not repeated.
return 0;
}
-/*
- * Toshiba fails to preserve interrupts over S1
- */
-
-static __init int init_ints_after_s1(struct dmi_blacklist *d)
-{
- printk(KERN_WARNING "Toshiba with broken S1 detected.\n");
- dmi_broken |= BROKEN_INIT_AFTER_S1;
- return 0;
-}
#ifdef CONFIG_ACPI_SLEEP
static __init int reset_videomode_after_s3(struct dmi_blacklist *d)
}
#endif
-/*
- * Some Bioses enable the PS/2 mouse (touchpad) at resume, even if it was
- * disabled before the suspend. Linux used to get terribly confused by that.
- */
-
-static __init int broken_ps2_resume(struct dmi_blacklist *d)
-{
- printk(KERN_INFO "%s machine detected. Mousepad Resume Bug workaround hopefully not needed.\n", d->ident);
- return 0;
-}
-
-/*
- * Exploding PnPBIOS. Don't yet know if its the BIOS or us for
- * some entries
- */
-
-static __init int exploding_pnp_bios(struct dmi_blacklist *d)
-{
- printk(KERN_WARNING "%s detected. Disabling PnPBIOS\n", d->ident);
- dmi_broken |= BROKEN_PNP_BIOS;
- return 0;
-}
-
-static __init int acer_cpufreq_pst(struct dmi_blacklist *d)
-{
- printk(KERN_WARNING "%s laptop with broken PST tables in BIOS detected.\n", d->ident);
- printk(KERN_WARNING "You need to downgrade to 3A21 (09/09/2002), or try a newer BIOS than 3A71 (01/20/2003)\n");
- printk(KERN_WARNING "cpufreq scaling has been disabled as a result of this.\n");
- dmi_broken |= BROKEN_CPUFREQ;
- return 0;
-}
-
-
-/*
- * Simple "print if true" callback
- */
-
-static __init int print_if_true(struct dmi_blacklist *d)
-{
- printk("%s\n", d->ident);
- return 0;
-}
-
#ifdef CONFIG_ACPI_BOOT
extern int acpi_force;
#ifdef CONFIG_ACPI_PCI
static __init int disable_acpi_irq(struct dmi_blacklist *d)
-{
- printk(KERN_NOTICE "%s detected: force use of acpi=noirq\n", d->ident);
- acpi_noirq_set();
+{
+ if (!acpi_force) {
+ printk(KERN_NOTICE "%s detected: force use of acpi=noirq\n",
+ d->ident);
+ acpi_noirq_set();
+ }
return 0;
}
static __init int disable_acpi_pci(struct dmi_blacklist *d)
-{
- printk(KERN_NOTICE "%s detected: force use of pci=noacpi\n", d->ident);
- acpi_disable_pci();
+{
+ if (!acpi_force) {
+ printk(KERN_NOTICE "%s detected: force use of pci=noacpi\n",
+ d->ident);
+ acpi_disable_pci();
+ }
return 0;
}
#endif
*/
static __initdata struct dmi_blacklist dmi_blacklist[]={
- { broken_ps2_resume, "Dell Latitude C600", { /* Handle problems with APM on the C600 */
- MATCH(DMI_SYS_VENDOR, "Dell"),
- MATCH(DMI_PRODUCT_NAME, "Latitude C600"),
- NO_MATCH, NO_MATCH
- } },
- { set_apm_ints, "Dell Latitude", { /* Allow interrupts during suspend on Dell Latitude laptops*/
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "Latitude C510"),
- NO_MATCH, NO_MATCH
- } },
- { apm_is_horked, "Dell Inspiron 2500", { /* APM crashes */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
- MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION,"A11")
- } },
- { set_apm_ints, "Dell Inspiron", { /* Allow interrupts during suspend on Dell Inspiron laptops*/
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "Inspiron 4000"),
- NO_MATCH, NO_MATCH
- } },
- { broken_apm_power, "Dell Inspiron 5000e", { /* Handle problems with APM on Inspiron 5000e */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "A04"),
- MATCH(DMI_BIOS_DATE, "08/24/2000"), NO_MATCH
- } },
- { broken_apm_power, "Dell Inspiron 2500", { /* Handle problems with APM on Inspiron 2500 */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "A12"),
- MATCH(DMI_BIOS_DATE, "02/04/2002"), NO_MATCH
- } },
- { apm_is_horked, "Dell Dimension 4100", { /* APM crashes */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "XPS-Z"),
- MATCH(DMI_BIOS_VENDOR,"Intel Corp."),
- MATCH(DMI_BIOS_VERSION,"A11")
- } },
- { set_realmode_power_off, "Award Software v4.60 PGMA", { /* broken PM poweroff bios */
- MATCH(DMI_BIOS_VENDOR, "Award Software International, Inc."),
- MATCH(DMI_BIOS_VERSION, "4.60 PGMA"),
- MATCH(DMI_BIOS_DATE, "134526184"), NO_MATCH
- } },
- { set_smp_bios_reboot, "Dell PowerEdge 1300", { /* Handle problems with rebooting on Dell 1300's */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "PowerEdge 1300/"),
- NO_MATCH, NO_MATCH
- } },
- { set_bios_reboot, "Dell PowerEdge 300", { /* Handle problems with rebooting on Dell 300's */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "PowerEdge 300/"),
- NO_MATCH, NO_MATCH
- } },
- { set_bios_reboot, "Dell PowerEdge 2400", { /* Handle problems with rebooting on Dell 2400's */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "PowerEdge 2400"),
- NO_MATCH, NO_MATCH
- } },
- { set_apm_ints, "Compaq 12XL125", { /* Allow interrupts during suspend on Compaq Laptops*/
- MATCH(DMI_SYS_VENDOR, "Compaq"),
- MATCH(DMI_PRODUCT_NAME, "Compaq PC"),
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION,"4.06")
- } },
- { set_apm_ints, "ASUSTeK", { /* Allow interrupts during APM or the clock goes slow */
- MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
- MATCH(DMI_PRODUCT_NAME, "L8400K series Notebook PC"),
- NO_MATCH, NO_MATCH
- } },
- { apm_is_horked, "ABIT KX7-333[R]", { /* APM blows on shutdown */
- MATCH(DMI_BOARD_VENDOR, "ABIT"),
- MATCH(DMI_BOARD_NAME, "VT8367-8233A (KX7-333[R])"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_is_horked, "Trigem Delhi3", { /* APM crashes */
- MATCH(DMI_SYS_VENDOR, "TriGem Computer, Inc"),
- MATCH(DMI_PRODUCT_NAME, "Delhi3"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_is_horked, "Fujitsu-Siemens", { /* APM crashes */
- MATCH(DMI_BIOS_VENDOR, "hoenix/FUJITSU SIEMENS"),
- MATCH(DMI_BIOS_VERSION, "Version1.01"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_is_horked_d850md, "Intel D850MD", { /* APM crashes */
- MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
- MATCH(DMI_BIOS_VERSION, "MV85010A.86A.0016.P07.0201251536"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_is_horked, "Intel D810EMO", { /* APM crashes */
- MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
- MATCH(DMI_BIOS_VERSION, "MO81010A.86A.0008.P04.0004170800"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_is_horked, "Dell XPS-Z", { /* APM crashes */
- MATCH(DMI_BIOS_VENDOR, "Intel Corp."),
- MATCH(DMI_BIOS_VERSION, "A11"),
- MATCH(DMI_PRODUCT_NAME, "XPS-Z"),
- NO_MATCH,
- } },
- { apm_is_horked, "Sharp PC-PJ/AX", { /* APM crashes */
- MATCH(DMI_SYS_VENDOR, "SHARP"),
- MATCH(DMI_PRODUCT_NAME, "PC-PJ/AX"),
- MATCH(DMI_BIOS_VENDOR,"SystemSoft"),
- MATCH(DMI_BIOS_VERSION,"Version R2.08")
- } },
- { apm_is_horked, "Dell Inspiron 2500", { /* APM crashes */
- MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
- MATCH(DMI_PRODUCT_NAME, "Inspiron 2500"),
- MATCH(DMI_BIOS_VENDOR,"Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION,"A11")
- } },
- { apm_likes_to_melt, "Jabil AMD", { /* APM idle hangs */
- MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
- MATCH(DMI_BIOS_VERSION, "0AASNP06"),
- NO_MATCH, NO_MATCH,
- } },
- { apm_likes_to_melt, "AMI Bios", { /* APM idle hangs */
- MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
- MATCH(DMI_BIOS_VERSION, "0AASNP05"),
- NO_MATCH, NO_MATCH,
- } },
- { sony_vaio_laptop, "Sony Vaio", { /* This is a Sony Vaio laptop */
- MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- MATCH(DMI_PRODUCT_NAME, "PCG-"),
- NO_MATCH, NO_MATCH,
- } },
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-N505X(DE) */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0206H"),
- MATCH(DMI_BIOS_DATE, "08/23/99"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-N505VX */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "W2K06H0"),
- MATCH(DMI_BIOS_DATE, "02/03/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-XG29 */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0117A0"),
- MATCH(DMI_BIOS_DATE, "04/25/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z600NE */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0121Z1"),
- MATCH(DMI_BIOS_DATE, "05/11/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z600NE */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "WME01Z1"),
- MATCH(DMI_BIOS_DATE, "08/11/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z600LEK(DE) */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0206Z3"),
- MATCH(DMI_BIOS_DATE, "12/25/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z505LS */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0203D0"),
- MATCH(DMI_BIOS_DATE, "05/12/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z505LS */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0203Z3"),
- MATCH(DMI_BIOS_DATE, "08/25/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-Z505LS (with updated BIOS) */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0209Z3"),
- MATCH(DMI_BIOS_DATE, "05/12/01"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-F104K */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0204K2"),
- MATCH(DMI_BIOS_DATE, "08/28/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-C1VN/C1VE */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0208P1"),
- MATCH(DMI_BIOS_DATE, "11/09/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-C1VE */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "R0204P1"),
- MATCH(DMI_BIOS_DATE, "09/12/00"), NO_MATCH
- } },
-
- { swab_apm_power_in_minutes, "Sony VAIO", { /* Handle problems with APM on Sony Vaio PCG-C1VE */
- MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
- MATCH(DMI_BIOS_VERSION, "WXPO1Z3"),
- MATCH(DMI_BIOS_DATE, "10/26/01"), NO_MATCH
- } },
-
- { exploding_pnp_bios, "Higraded P14H", { /* PnPBIOS GPF on boot */
- MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
- MATCH(DMI_BIOS_VERSION, "07.00T"),
- MATCH(DMI_SYS_VENDOR, "Higraded"),
- MATCH(DMI_PRODUCT_NAME, "P14H")
- } },
- { exploding_pnp_bios, "ASUS P4P800", { /* PnPBIOS GPF on boot */
- MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc."),
- MATCH(DMI_BOARD_NAME, "P4P800"),
- NO_MATCH, NO_MATCH
- } },
/* Machines which have problems handling enabled local APICs */
NO_MATCH, NO_MATCH
} },
- { broken_acpi_Sx, "ASUS K7V-RM", { /* Bad ACPI Sx table */
- MATCH(DMI_BIOS_VERSION,"ASUS K7V-RM ACPI BIOS Revision 1003A"),
- MATCH(DMI_BOARD_NAME, "<K7V-RM>"),
- NO_MATCH, NO_MATCH
- } },
-
{ broken_toshiba_keyboard, "Toshiba Satellite 4030cdt", { /* Keyboard generates spurious repeats */
MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
NO_MATCH, NO_MATCH, NO_MATCH
} },
- { init_ints_after_s1, "Toshiba Satellite 4030cdt", { /* Reinitialization of 8259 is needed after S1 resume */
- MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
- NO_MATCH, NO_MATCH, NO_MATCH
- } },
#ifdef CONFIG_ACPI_SLEEP
{ reset_videomode_after_s3, "Toshiba Satellite 4030cdt", { /* Reset video mode after returning from ACPI S3 sleep */
MATCH(DMI_PRODUCT_NAME, "S4030CDT/4.3"),
} },
#endif
- { print_if_true, KERN_WARNING "IBM T23 - BIOS 1.03b+ and controller firmware 1.02+ may be needed for Linux APM.", {
- MATCH(DMI_SYS_VENDOR, "IBM"),
- MATCH(DMI_BIOS_VERSION, "1AET38WW (1.01b)"),
- NO_MATCH, NO_MATCH
- } },
-
- { fix_broken_hp_bios_irq9, "HP Pavilion N5400 Series Laptop", {
- MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
- MATCH(DMI_BIOS_VERSION, "GE.M1.03"),
- MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook Model GE"),
- MATCH(DMI_BOARD_VERSION, "OmniBook N32N-736")
- } },
-
-
- /*
- * Generic per vendor APM settings
- */
-
- { set_apm_ints, "IBM", { /* Allow interrupts during suspend on IBM laptops */
- MATCH(DMI_SYS_VENDOR, "IBM"),
- NO_MATCH, NO_MATCH, NO_MATCH
- } },
-
- /*
- * SMBus / sensors settings
- */
-
- { disable_smbus, "IBM", {
- MATCH(DMI_SYS_VENDOR, "IBM"),
- NO_MATCH, NO_MATCH, NO_MATCH
- } },
-
- /*
- * Some Athlon laptops have really fucked PST tables.
- * A BIOS update is all that can save them.
- * Mention this, and disable cpufreq.
- */
- { acer_cpufreq_pst, "Acer Aspire", {
- MATCH(DMI_SYS_VENDOR, "Insyde Software"),
- MATCH(DMI_BIOS_VERSION, "3A71"),
- NO_MATCH, NO_MATCH,
- } },
-
#ifdef CONFIG_ACPI_BOOT
/*
* If your system is blacklisted here, but you find that acpi=force
MATCH(DMI_BOARD_NAME, "PR-DLS"),
MATCH(DMI_BIOS_VERSION, "ASUS PR-DLS ACPI BIOS Revision 1010"),
MATCH(DMI_BIOS_DATE, "03/21/2003") }},
+
+ { disable_acpi_pci, "Acer TravelMate 36x Laptop", {
+ MATCH(DMI_SYS_VENDOR, "Acer"),
+ MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
+ NO_MATCH, NO_MATCH
+ } },
+
#endif
{ NULL, }
static __init void dmi_check_blacklist(void)
{
- struct dmi_blacklist *d;
- int i;
-
#ifdef CONFIG_ACPI_BOOT
#define ACPI_BLACKLIST_CUTOFF_YEAR 2001
}
}
#endif
-
- d=&dmi_blacklist[0];
- while(d->callback)
- {
- for(i=0;i<4;i++)
- {
- int s = d->matches[i].slot;
- if(s==NONE)
- continue;
- if(dmi_ident[s] && strstr(dmi_ident[s], d->matches[i].substr))
- continue;
- /* No match */
- goto fail;
- }
- if(d->callback(d))
- return;
-fail:
- d++;
- }
+ dmi_check_system(dmi_blacklist);
}
printk(KERN_INFO "DMI not present.\n");
}
-EXPORT_SYMBOL(is_unsafe_smbus);
+
+/**
+ * dmi_check_system - check system DMI data
+ * @list: array of dmi_system_id structures to match against
+ *
+ * Walk the given table running matching functions until someone
+ * returns non-zero or we hit the end. The callback function is
+ * called for each successful match. Returns the number of matches.
+ */
+int dmi_check_system(struct dmi_system_id *list)
+{
+ int i, count = 0;
+ struct dmi_system_id *d = list;
+
+ while (d->ident) {
+ for (i = 0; i < ARRAY_SIZE(d->matches); i++) {
+ int s = d->matches[i].slot;
+ if (s == DMI_NONE)
+ continue;
+ if (dmi_ident[s] && strstr(dmi_ident[s], d->matches[i].substr))
+ continue;
+ /* No match */
+ goto fail;
+ }
+ if (d->callback && d->callback(d))
+ break;
+ count++;
+fail: d++;
+ }
+
+ return count;
+}
+
+EXPORT_SYMBOL(dmi_check_system);
+
+/**
+ * dmi_get_system_info - return DMI data value
+ * @field: data index (see enum dmi_field)
+ *
+ * Returns one DMI data value; can be used to perform
+ * complex DMI data checks.
+ */
+char *dmi_get_system_info(int field)
+{
+ return dmi_ident[field];
+}
+
+EXPORT_SYMBOL(dmi_get_system_info);
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/desc.h>
+#include <asm/fixmap.h>
#define DOUBLEFAULT_STACKSIZE (1024)
static unsigned long doublefault_stack[DOUBLEFAULT_STACKSIZE];
#define STACK_START (unsigned long)(doublefault_stack+DOUBLEFAULT_STACKSIZE)
-#define ptr_ok(x) ((x) > 0xc0000000 && (x) < 0xc1000000)
+#define ptr_ok(x) (((x) > __PAGE_OFFSET && (x) < (__PAGE_OFFSET + 0x01000000)) || ((x) >= FIXADDR_START))
static void doublefault_fn(void)
{
printk("eax = %08lx, ebx = %08lx, ecx = %08lx, edx = %08lx\n",
t->eax, t->ebx, t->ecx, t->edx);
- printk("esi = %08lx, edi = %08lx\n",
- t->esi, t->edi);
+ printk("esi = %08lx, edi = %08lx, ebp = %08lx\n",
+ t->esi, t->edi, t->ebp);
}
}
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/desc.h>
-#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#define EFI_DEBUG 0
#include <linux/config.h>
#include <linux/linkage.h>
#include <asm/thread_info.h>
+#include <asm/asm_offsets.h>
#include <asm/errno.h>
#include <asm/segment.h>
+#include <asm/page.h>
#include <asm/smp.h>
#include <asm/page.h>
#include "irq_vectors.h"
#define resume_kernel restore_all
#endif
-#define SAVE_ALL \
+#ifdef CONFIG_X86_HIGH_ENTRY
+
+#ifdef CONFIG_X86_SWITCH_PAGETABLES
+
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+/*
+ * If task is preempted in __SWITCH_KERNELSPACE, and moved to another cpu,
+ * __switch_to repoints %esp to the appropriate virtual stack; but %ebp is
+ * left stale, so we must check whether to repeat the real stack calculation.
+ */
+#define repeat_if_esp_changed \
+ xorl %esp, %ebp; \
+ testl $-THREAD_SIZE, %ebp; \
+ jnz 0b
+#else
+#define repeat_if_esp_changed
+#endif
+
+/* clobbers ebx, edx and ebp */
+
+#define __SWITCH_KERNELSPACE \
+ cmpl $0xff000000, %esp; \
+ jb 1f; \
+ \
+ /* \
+ * switch pagetables and load the real stack, \
+ * keep the stack offset: \
+ */ \
+ \
+ movl $swapper_pg_dir-__PAGE_OFFSET, %edx; \
+ \
+ /* GET_THREAD_INFO(%ebp) intermixed */ \
+0: \
+ movl %esp, %ebp; \
+ movl %esp, %ebx; \
+ andl $(-THREAD_SIZE), %ebp; \
+ andl $(THREAD_SIZE-1), %ebx; \
+ orl TI_real_stack(%ebp), %ebx; \
+ repeat_if_esp_changed; \
+ \
+ movl %edx, %cr3; \
+ movl %ebx, %esp; \
+1:
+
+#endif
+
+
+#define __SWITCH_USERSPACE \
+ /* interrupted any of the user return paths? */ \
+ \
+ movl EIP(%esp), %eax; \
+ \
+ cmpl $int80_ret_start_marker, %eax; \
+ jb 33f; /* nope - continue with sysexit check */\
+ cmpl $int80_ret_end_marker, %eax; \
+ jb 22f; /* yes - switch to virtual stack */ \
+33: \
+ cmpl $sysexit_ret_start_marker, %eax; \
+ jb 44f; /* nope - continue with user check */ \
+ cmpl $sysexit_ret_end_marker, %eax; \
+ jb 22f; /* yes - switch to virtual stack */ \
+ /* return to userspace? */ \
+44: \
+ movl EFLAGS(%esp),%ecx; \
+ movb CS(%esp),%cl; \
+ testl $(VM_MASK | 3),%ecx; \
+ jz 2f; \
+22: \
+ /* \
+ * switch to the virtual stack, then switch to \
+ * the userspace pagetables. \
+ */ \
+ \
+ GET_THREAD_INFO(%ebp); \
+ movl TI_virtual_stack(%ebp), %edx; \
+ movl TI_user_pgd(%ebp), %ecx; \
+ \
+ movl %esp, %ebx; \
+ andl $(THREAD_SIZE-1), %ebx; \
+ orl %ebx, %edx; \
+int80_ret_start_marker: \
+ movl %edx, %esp; \
+ movl %ecx, %cr3; \
+ \
+ __RESTORE_ALL; \
+int80_ret_end_marker: \
+2:
+
+#else /* !CONFIG_X86_HIGH_ENTRY */
+
+#define __SWITCH_KERNELSPACE
+#define __SWITCH_USERSPACE
+
+#endif
+
+#define __SAVE_ALL \
cld; \
pushl %es; \
pushl %ds; \
movl %edx, %ds; \
movl %edx, %es;
-#define RESTORE_INT_REGS \
+#define __RESTORE_INT_REGS \
popl %ebx; \
popl %ecx; \
popl %edx; \
popl %ebp; \
popl %eax
-#define RESTORE_REGS \
- RESTORE_INT_REGS; \
-1: popl %ds; \
-2: popl %es; \
+#define __RESTORE_REGS \
+ __RESTORE_INT_REGS; \
+111: popl %ds; \
+222: popl %es; \
.section .fixup,"ax"; \
-3: movl $0,(%esp); \
- jmp 1b; \
-4: movl $0,(%esp); \
- jmp 2b; \
+444: movl $0,(%esp); \
+ jmp 111b; \
+555: movl $0,(%esp); \
+ jmp 222b; \
.previous; \
.section __ex_table,"a";\
.align 4; \
- .long 1b,3b; \
- .long 2b,4b; \
+ .long 111b,444b;\
+ .long 222b,555b;\
.previous
-
-#define RESTORE_ALL \
- RESTORE_REGS \
+#define __RESTORE_ALL \
+ __RESTORE_REGS \
addl $4, %esp; \
-1: iret; \
+333: iret; \
.section .fixup,"ax"; \
-2: sti; \
+666: sti; \
movl $(__USER_DS), %edx; \
movl %edx, %ds; \
movl %edx, %es; \
.previous; \
.section __ex_table,"a";\
.align 4; \
- .long 1b,2b; \
+ .long 333b,666b;\
.previous
+#define SAVE_ALL \
+ __SAVE_ALL; \
+ __SWITCH_KERNELSPACE;
+
+#define RESTORE_ALL \
+ __SWITCH_USERSPACE; \
+ __RESTORE_ALL;
+.section .entry.text,"ax"
ENTRY(lcall7)
pushfl # We get a different stack layout with call
pushl %ebp
pushfl
pushl $(__USER_CS)
- pushl $SYSENTER_RETURN
-
-/*
- * Load the potential sixth argument from user stack.
- * Careful about security.
- */
- cmpl $__PAGE_OFFSET-3,%ebp
- jae syscall_fault
-1: movl (%ebp),%ebp
-.section __ex_table,"a"
- .align 4
- .long 1b,syscall_fault
-.previous
-
+ /*
+ * Push current_thread_info()->sysenter_return to the stack.
+ * A tiny bit of offset fixup is necessary - 4*4 means the 4 words
+ * pushed above, and the word being pushed now:
+ */
+ pushl (TI_sysenter_return-THREAD_SIZE+4*4)(%esp)
+ /*
+ * No six-argument syscall is ever used with sysenter.
+ */
pushl %eax
SAVE_ALL
GET_THREAD_INFO(%ebp)
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx
jne syscall_exit_work
+
+#ifdef CONFIG_X86_SWITCH_PAGETABLES
+
+ GET_THREAD_INFO(%ebp)
+ movl TI_virtual_stack(%ebp), %edx
+ movl TI_user_pgd(%ebp), %ecx
+ movl %esp, %ebx
+ andl $(THREAD_SIZE-1), %ebx
+ orl %ebx, %edx
+sysexit_ret_start_marker:
+ movl %edx, %esp
+ movl %ecx, %cr3
+ /*
+ * only ebx is not restored by the userspace sysenter vsyscall
+ * code, it assumes it to be callee-saved.
+ */
+ movl EBX(%esp), %ebx
+#endif
+
/* if something modifies registers it must also disable sysexit */
movl EIP(%esp), %edx
movl OLDESP(%esp), %ecx
sti
sysexit
-
+#ifdef CONFIG_X86_SWITCH_PAGETABLES
+sysexit_ret_end_marker:
+ nop
+#endif
# system call handler stub
ENTRY(system_call)
# vm86-space
xorl %edx, %edx
call do_notify_resume
+
+#ifdef CONFIG_X86_HIGH_ENTRY
+ /*
+ * Reload db7 if necessary:
+ */
+ movl TI_flags(%ebp), %ecx
+ testb $_TIF_DB7, %cl
+ jnz work_db7
+
+ jmp restore_all
+
+work_db7:
+ movl TI_task(%ebp), %edx;
+ movl task_thread_db7(%edx), %edx;
+ movl %edx, %db7;
+#endif
jmp restore_all
ALIGN
call do_syscall_trace
jmp resume_userspace
- ALIGN
-syscall_fault:
- pushl %eax # save orig_eax
- SAVE_ALL
- GET_THREAD_INFO(%ebp)
- movl $-EFAULT,EAX(%esp)
- jmp resume_userspace
-
ALIGN
syscall_badsys:
movl $-ENOSYS,EAX(%esp)
*/
.data
ENTRY(interrupt)
-.text
+.previous
vector=0
ENTRY(irq_entries_start)
jmp common_interrupt
.data
.long 1b
-.text
+.previous
vector=vector+1
.endr
movl ES(%esp), %edi # get the function address
movl %eax, ORIG_EAX(%esp)
movl %ecx, ES(%esp)
- movl %esp, %edx
pushl %esi # push the error code
- pushl %edx # push the pt_regs pointer
movl $(__USER_DS), %edx
movl %edx, %ds
movl %edx, %es
+
+/* clobbers edx, ebx and ebp */
+ __SWITCH_KERNELSPACE
+
+ leal 4(%esp), %edx # prepare pt_regs
+ pushl %edx # push pt_regs
+
call *%edi
addl $8, %esp
jmp ret_from_exception
pushl %edx
call do_nmi
addl $8, %esp
- RESTORE_ALL
+ jmp restore_all
nmi_stack_fixup:
FIX_STACK(12,nmi_stack_correct, 1)
pushl $do_spurious_interrupt_bug
jmp error_code
+.previous
+
.data
ENTRY(sys_call_table)
.long sys_restart_syscall /* 0 - old "setup()" system call, used for restarting */
.long sys_tgkill /* 270 */
.long sys_utimes
.long sys_fadvise64_64
- .long sys_ni_syscall /* sys_vserver */
+ .long sys_vserver
.long sys_mbind
.long sys_get_mempolicy
.long sys_set_mempolicy
.long sys_mq_notify
.long sys_mq_getsetattr
.long sys_ni_syscall /* reserved for kexec */
- .long sys_ioprio_set
- .long sys_ioprio_get /* 285 */
syscall_table_size=(.-sys_call_table)
*/
trap_init_virtual_IDT();
- __set_fixmap(FIX_ENTRY_TRAMPOLINE_0, __pa((unsigned long)&__entry_tramp_start), PAGE_KERNEL);
- __set_fixmap(FIX_ENTRY_TRAMPOLINE_1, __pa((unsigned long)&__entry_tramp_start) + PAGE_SIZE, PAGE_KERNEL);
+ __set_fixmap(FIX_ENTRY_TRAMPOLINE_0, __pa((unsigned long)&__entry_tramp_start), PAGE_KERNEL_EXEC);
+ __set_fixmap(FIX_ENTRY_TRAMPOLINE_1, __pa((unsigned long)&__entry_tramp_start) + PAGE_SIZE, PAGE_KERNEL_EXEC);
tramp = (void *)fix_to_virt(FIX_ENTRY_TRAMPOLINE_0);
printk("mapped 4G/4G trampoline to %p.\n", tramp);
orl %edx,%eax
movl %eax,%cr4
+ btl $5, %eax # check if PAE is enabled
+ jnc 6f
+
+ /* Check if extended functions are implemented */
+ movl $0x80000000, %eax
+ cpuid
+ cmpl $0x80000000, %eax
+ jbe 6f
+ mov $0x80000001, %eax
+ cpuid
+ /* Execute Disable bit supported? */
+ btl $20, %edx
+ jnc 6f
+
+ /* Setup EFER (Extended Feature Enable Register) */
+ movl $0xc0000080, %ecx
+ rdmsr
+
+ btsl $11, %eax
+ /* Make changes effective */
+ wrmsr
+
+6:
+ /* cpuid clobbered ebx, set it up again: */
+ xorl %ebx,%ebx
+ incl %ebx
3:
#endif /* CONFIG_SMP */
#include <linux/tty.h>
#include <linux/highmem.h>
#include <linux/time.h>
+#include <linux/nmi.h>
#include <asm/semaphore.h>
#include <asm/processor.h>
#include <asm/mmx.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
-#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/nmi.h>
#include <asm/ist.h>
+#include <asm/e820.h>
extern void dump_thread(struct pt_regs *, struct user *);
extern spinlock_t rtc_lock;
EXPORT_SYMBOL_NOVERS(__down_failed_trylock);
EXPORT_SYMBOL_NOVERS(__up_wakeup);
/* Networking helper routines. */
-EXPORT_SYMBOL(csum_partial_copy_generic);
/* Delay loops */
EXPORT_SYMBOL(__ndelay);
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(strpbrk);
EXPORT_SYMBOL(strstr);
+#if !defined(CONFIG_X86_UACCESS_INDIRECT)
EXPORT_SYMBOL(strncpy_from_user);
-EXPORT_SYMBOL(__strncpy_from_user);
+EXPORT_SYMBOL(__direct_strncpy_from_user);
EXPORT_SYMBOL(clear_user);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(__copy_from_user_ll);
EXPORT_SYMBOL(__copy_to_user_ll);
EXPORT_SYMBOL(strnlen_user);
+#else /* CONFIG_X86_UACCESS_INDIRECT */
+EXPORT_SYMBOL(direct_csum_partial_copy_generic);
+#endif
EXPORT_SYMBOL(dma_alloc_coherent);
EXPORT_SYMBOL(dma_free_coherent);
EXPORT_SYMBOL_GPL(set_nmi_callback);
EXPORT_SYMBOL_GPL(unset_nmi_callback);
-
-#undef memcpy
-#undef memset
+
#undef memcmp
-extern void * memset(void *,int,__kernel_size_t);
-extern void * memcpy(void *,const void *,__kernel_size_t);
extern int memcmp(const void *,const void *,__kernel_size_t);
-EXPORT_SYMBOL_NOVERS(memcpy);
-EXPORT_SYMBOL_NOVERS(memset);
EXPORT_SYMBOL_NOVERS(memcmp);
#ifdef CONFIG_HAVE_DEC_LOCK
EXPORT_SYMBOL(atomic_dec_and_lock);
#endif
-extern int is_sony_vaio_laptop;
-EXPORT_SYMBOL(is_sony_vaio_laptop);
-
EXPORT_SYMBOL(__PAGE_KERNEL);
#ifdef CONFIG_HIGHMEM
#endif
EXPORT_SYMBOL(csum_partial);
+
+EXPORT_SYMBOL_GPL(empty_zero_page);
+
+#ifdef CONFIG_CRASH_DUMP_MODULE
+#ifdef CONFIG_SMP
+extern irq_desc_t irq_desc[NR_IRQS];
+extern unsigned long irq_affinity[NR_IRQS];
+extern void stop_this_cpu(void *);
+EXPORT_SYMBOL(irq_desc);
+EXPORT_SYMBOL(irq_affinity);
+EXPORT_SYMBOL(stop_this_cpu);
+EXPORT_SYMBOL(dump_send_ipi);
+#endif
+extern int pfn_is_ram(unsigned long);
+EXPORT_SYMBOL(pfn_is_ram);
+#ifdef ARCH_HAS_NMI_WATCHDOG
+EXPORT_SYMBOL(touch_nmi_watchdog);
+#endif
+#endif
static int convert_fxsr_to_user( struct _fpstate __user *buf,
struct i387_fxsave_struct *fxsave )
{
+ struct _fpreg tmp[8]; /* 80 bytes scratch area */
unsigned long env[7];
struct _fpreg __user *to;
struct _fpxreg *from;
if ( __copy_to_user( buf, env, 7 * sizeof(unsigned long) ) )
return 1;
- to = &buf->_st[0];
+ to = tmp;
from = (struct _fpxreg *) &fxsave->st_space[0];
for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
unsigned long __user *t = (unsigned long __user *)to;
unsigned long *f = (unsigned long *)from;
- if (__put_user(*f, t) ||
- __put_user(*(f + 1), t + 1) ||
- __put_user(from->exponent, &to->exponent))
- return 1;
+ *t = *f;
+ *(t + 1) = *(f+1);
+ to->exponent = from->exponent;
}
+ if (copy_to_user(buf->_st, tmp, sizeof(struct _fpreg [8])))
+ return 1;
return 0;
}
static int convert_fxsr_from_user( struct i387_fxsave_struct *fxsave,
struct _fpstate __user *buf )
{
+ struct _fpreg tmp[8]; /* 80 bytes scratch area */
unsigned long env[7];
struct _fpxreg *to;
struct _fpreg __user *from;
if ( __copy_from_user( env, buf, 7 * sizeof(long) ) )
return 1;
+ if (copy_from_user(tmp, buf->_st, sizeof(struct _fpreg [8])))
+ return 1;
fxsave->cwd = (unsigned short)(env[0] & 0xffff);
fxsave->swd = (unsigned short)(env[1] & 0xffff);
fxsave->fos = env[6];
to = (struct _fpxreg *) &fxsave->st_space[0];
- from = &buf->_st[0];
+ from = tmp;
for ( i = 0 ; i < 8 ; i++, to++, from++ ) {
unsigned long *t = (unsigned long *)to;
unsigned long __user *f = (unsigned long __user *)from;
- if (__get_user(*t, f) ||
- __get_user(*(t + 1), f + 1) ||
- __get_user(to->exponent, &from->exponent))
- return 1;
+ *t = *f;
+ *(t + 1) = *(f + 1);
+ to->exponent = from->exponent;
}
return 0;
}
* be shot.
*/
-/*
- * =PC9800NOTE= In NEC PC-9800, we use irq8 instead of irq13!
- */
static irqreturn_t math_error_irq(int cpl, void *dev_id, struct pt_regs *regs)
{
- extern void math_error(void *);
-#ifndef CONFIG_X86_PC9800
+ extern void math_error(void __user *);
outb(0,0xF0);
-#endif
if (ignore_fpu_irq || !boot_cpu_data.hard_math)
return IRQ_NONE;
- math_error((void *)regs->eip);
+ math_error((void __user *)regs->eip);
return IRQ_HANDLED;
}
* New motherboards sometimes make IRQ 13 be a PCI interrupt,
* so allow interrupt sharing.
*/
-static struct irqaction fpu_irq = { math_error_irq, 0, 0, "fpu", NULL, NULL };
+static struct irqaction fpu_irq = { math_error_irq, 0, CPU_MASK_NONE, "fpu", NULL, NULL };
void __init init_ISA_irqs (void)
{
for (i = 0; i < NR_IRQS; i++) {
irq_desc[i].status = IRQ_DISABLED;
- irq_desc[i].action = 0;
+ irq_desc[i].action = NULL;
irq_desc[i].depth = 1;
if (i < 16) {
#include <linux/init.h>
#include <linux/init_task.h>
#include <linux/fs.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
*/
union thread_union init_thread_union
__attribute__((__section__(".data.init_task"))) =
- { INIT_THREAD_INFO(init_task) };
+ { INIT_THREAD_INFO(init_task, init_thread_union) };
/*
* Initial task structure.
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
-struct tss_struct init_tss[NR_CPUS] __cacheline_aligned = { [0 ... NR_CPUS-1] = INIT_TSS };
+struct tss_struct init_tss[NR_CPUS] __attribute__((__section__(".data.tss"))) = { [0 ... NR_CPUS-1] = INIT_TSS };
#include "io_ports.h"
-#undef APIC_LOCKUP_DEBUG
-
-#define APIC_LOCKUP_DEBUG
-
static spinlock_t ioapic_lock = SPIN_LOCK_UNLOCKED;
/*
}
}
-/* mask = 1 */
-static void __mask_IO_APIC_irq (unsigned int irq)
+static void __modify_IO_APIC_irq (unsigned int irq, unsigned long enable, unsigned long disable)
{
- int pin;
struct irq_pin_list *entry = irq_2_pin + irq;
+ unsigned int pin, reg;
for (;;) {
- unsigned int reg;
pin = entry->pin;
if (pin == -1)
break;
reg = io_apic_read(entry->apic, 0x10 + pin*2);
- io_apic_modify(entry->apic, 0x10 + pin*2, reg |= 0x00010000);
+ reg &= ~disable;
+ reg |= enable;
+ io_apic_modify(entry->apic, 0x10 + pin*2, reg);
if (!entry->next)
break;
entry = irq_2_pin + entry->next;
}
- io_apic_sync(entry->apic);
+}
+
+/* mask = 1 */
+static void __mask_IO_APIC_irq (unsigned int irq)
+{
+ __modify_IO_APIC_irq(irq, 0x00010000, 0);
}
/* mask = 0 */
static void __unmask_IO_APIC_irq (unsigned int irq)
{
- int pin;
- struct irq_pin_list *entry = irq_2_pin + irq;
-
- for (;;) {
- unsigned int reg;
- pin = entry->pin;
- if (pin == -1)
- break;
- reg = io_apic_read(entry->apic, 0x10 + pin*2);
- io_apic_modify(entry->apic, 0x10 + pin*2, reg &= 0xfffeffff);
- if (!entry->next)
- break;
- entry = irq_2_pin + entry->next;
- }
+ __modify_IO_APIC_irq(irq, 0, 0x00010000);
}
/* mask = 1, trigger = 0 */
static void __mask_and_edge_IO_APIC_irq (unsigned int irq)
{
- int pin;
- struct irq_pin_list *entry = irq_2_pin + irq;
-
- for (;;) {
- unsigned int reg;
- pin = entry->pin;
- if (pin == -1)
- break;
- reg = io_apic_read(entry->apic, 0x10 + pin*2);
- reg = (reg & 0xffff7fff) | 0x00010000;
- io_apic_modify(entry->apic, 0x10 + pin*2, reg);
- if (!entry->next)
- break;
- entry = irq_2_pin + entry->next;
- }
+ __modify_IO_APIC_irq(irq, 0x00010000, 0x00008000);
}
/* mask = 0, trigger = 1 */
static void __unmask_and_level_IO_APIC_irq (unsigned int irq)
{
- int pin;
- struct irq_pin_list *entry = irq_2_pin + irq;
-
- for (;;) {
- unsigned int reg;
- pin = entry->pin;
- if (pin == -1)
- break;
- reg = io_apic_read(entry->apic, 0x10 + pin*2);
- reg = (reg & 0xfffeffff) | 0x00008000;
- io_apic_modify(entry->apic, 0x10 + pin*2, reg);
- if (!entry->next)
- break;
- entry = irq_2_pin + entry->next;
- }
+ __modify_IO_APIC_irq(irq, 0x00008000, 0x00010000);
}
static void mask_IO_APIC_irq (unsigned int irq)
struct irq_pin_list *entry = irq_2_pin + irq;
unsigned int apicid_value;
- apicid_value = cpu_mask_to_apicid(mk_cpumask_const(cpumask));
+ apicid_value = cpu_mask_to_apicid(cpumask);
/* Prepare to do the io_apic_write */
apicid_value = apicid_value << 24;
spin_lock_irqsave(&ioapic_lock, flags);
return;
}
-int balanced_irq(void *unused)
+static int balanced_irq(void *unused)
{
int i;
unsigned long prev_balance_time = jiffies;
pending_irq_balance_cpumask[i] = cpumask_of_cpu(0);
}
-repeat:
- set_current_state(TASK_INTERRUPTIBLE);
- time_remaining = schedule_timeout(time_remaining);
- if (time_after(jiffies, prev_balance_time+balanced_irq_interval)) {
- Dprintk("balanced_irq: calling do_irq_balance() %lu\n",
- jiffies);
- do_irq_balance();
- prev_balance_time = jiffies;
- time_remaining = balanced_irq_interval;
+ for ( ; ; ) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ time_remaining = schedule_timeout(time_remaining);
+ if (time_after(jiffies,
+ prev_balance_time+balanced_irq_interval)) {
+ do_irq_balance();
+ prev_balance_time = jiffies;
+ time_remaining = balanced_irq_interval;
+ }
}
- goto repeat;
+ return 0;
}
static int __init balanced_irq_init(void)
printk(KERN_DEBUG "....... : physical APIC id: %02X\n", reg_00.bits.ID);
printk(KERN_DEBUG "....... : Delivery Type: %X\n", reg_00.bits.delivery_type);
printk(KERN_DEBUG "....... : LTS : %X\n", reg_00.bits.LTS);
- if (reg_00.bits.ID >= APIC_BROADCAST_ID)
+ if (reg_00.bits.ID >= get_physical_broadcast())
UNEXPECTED_IO_APIC();
if (reg_00.bits.__reserved_1 || reg_00.bits.__reserved_2)
UNEXPECTED_IO_APIC();
);
}
}
+ if (use_pci_vector())
+ printk(KERN_INFO "Using vector-based indexing\n");
printk(KERN_DEBUG "IRQ to pin mappings:\n");
for (i = 0; i < NR_IRQS; i++) {
struct irq_pin_list *entry = irq_2_pin + i;
if (entry->pin < 0)
continue;
- printk(KERN_DEBUG "IRQ%d ", i);
+ if (use_pci_vector() && !platform_legacy_irq(i))
+ printk(KERN_DEBUG "IRQ%d ", IO_APIC_VECTOR(i));
+ else
+ printk(KERN_DEBUG "IRQ%d ", i);
for (;;) {
printk("-> %d:%d", entry->apic, entry->pin);
if (!entry->next)
old_id = mp_ioapics[apic].mpc_apicid;
- if (mp_ioapics[apic].mpc_apicid >= APIC_BROADCAST_ID) {
+ if (mp_ioapics[apic].mpc_apicid >= get_physical_broadcast()) {
printk(KERN_ERR "BIOS bug, IO-APIC#%d ID is %d in the MPC table!...\n",
apic, mp_ioapics[apic].mpc_apicid);
printk(KERN_ERR "... fixing up to %d. (tell your hw vendor)\n",
mp_ioapics[apic].mpc_apicid)) {
printk(KERN_ERR "BIOS bug, IO-APIC#%d ID %d is already used!...\n",
apic, mp_ioapics[apic].mpc_apicid);
- for (i = 0; i < APIC_BROADCAST_ID; i++)
+ for (i = 0; i < get_physical_broadcast(); i++)
if (!physid_isset(i, phys_id_present_map))
break;
- if (i >= APIC_BROADCAST_ID)
+ if (i >= get_physical_broadcast())
panic("Max APIC ID exceeded!\n");
printk(KERN_ERR "... fixing up to %d. (tell your hw vendor)\n",
i);
ack_APIC_irq();
if (!(v & (1 << (i & 0x1f)))) {
-#ifdef APIC_LOCKUP_DEBUG
- struct irq_pin_list *entry;
-#endif
-
#ifdef APIC_MISMATCH_DEBUG
atomic_inc(&irq_mis_count);
#endif
spin_lock(&ioapic_lock);
__mask_and_edge_IO_APIC_irq(irq);
-#ifdef APIC_LOCKUP_DEBUG
- for (entry = irq_2_pin + irq;;) {
- unsigned int reg;
-
- if (entry->pin == -1)
- break;
- reg = io_apic_read(entry->apic, 0x10 + entry->pin * 2);
- if (reg & 0x00004000)
- printk(KERN_CRIT "Aieee!!! Remote IRR"
- " still set after unlock!\n");
- if (!entry->next)
- break;
- entry = irq_2_pin + entry->next;
- }
-#endif
__unmask_and_level_IO_APIC_irq(irq);
spin_unlock(&ioapic_lock);
}
#ifdef CONFIG_ACPI_BOOT
-#define IO_APIC_MAX_ID APIC_BROADCAST_ID
-
int __init io_apic_get_unique_id (int ioapic, int apic_id)
{
union IO_APIC_reg_00 reg_00;
reg_00.raw = io_apic_read(ioapic, 0);
spin_unlock_irqrestore(&ioapic_lock, flags);
- if (apic_id >= IO_APIC_MAX_ID) {
+ if (apic_id >= get_physical_broadcast()) {
printk(KERN_WARNING "IOAPIC[%d]: Invalid apic_id %d, trying "
"%d\n", ioapic, apic_id, reg_00.bits.ID);
apic_id = reg_00.bits.ID;
*/
if (check_apicid_used(apic_id_map, apic_id)) {
- for (i = 0; i < IO_APIC_MAX_ID; i++) {
+ for (i = 0; i < get_physical_broadcast(); i++) {
if (!check_apicid_used(apic_id_map, i))
break;
}
- if (i == IO_APIC_MAX_ID)
+ if (i == get_physical_broadcast())
panic("Max apic_id exceeded!\n");
printk(KERN_WARNING "IOAPIC[%d]: apic_id %d already used, "
#include <asm/system.h>
#include <asm/bitops.h>
#include <asm/uaccess.h>
-#include <asm/pgalloc.h>
#include <asm/delay.h>
#include <asm/desc.h>
#include <asm/irq.h>
/*
* per-CPU IRQ handling stacks
*/
-#ifdef CONFIG_4KSTACKS
union irq_ctx *hardirq_ctx[NR_CPUS];
union irq_ctx *softirq_ctx[NR_CPUS];
-#endif
/*
* Special irq handlers.
int status = 1; /* Force the "do bottom halves" bit */
int retval = 0;
- if (!(action->flags & SA_INTERRUPT))
- local_irq_enable();
-
do {
status |= action->flags;
retval |= action->handler(irq, action->dev_id, regs);
printk(KERN_ERR "irq event %d: bogus return value %x\n",
irq, action_ret);
} else {
- printk(KERN_ERR "irq %d: nobody cared!\n", irq);
+ printk(KERN_ERR "irq %d: nobody cared! (screaming interrupt?)\n", irq);
+ printk(KERN_ERR "irq %d: Please try booting with acpi=off and report a bug\n", irq);
}
dump_stack();
printk(KERN_ERR "handlers:\n");
* useful for irq hardware that does not mask cleanly in an
* SMP environment.
*/
-#ifdef CONFIG_4KSTACKS
for (;;) {
irqreturn_t action_ret;
/* build the stack frame on the IRQ stack */
isp = (u32*) ((char*)irqctx + sizeof(*irqctx));
irqctx->tinfo.task = curctx->tinfo.task;
+ irqctx->tinfo.real_stack = curctx->tinfo.real_stack;
+ irqctx->tinfo.virtual_stack = curctx->tinfo.virtual_stack;
irqctx->tinfo.previous_esp = current_stack_pointer();
*--isp = (u32) action;
desc->status &= ~IRQ_PENDING;
}
-#else
-
- for (;;) {
- irqreturn_t action_ret;
-
- spin_unlock(&desc->lock);
-
- action_ret = handle_IRQ_event(irq, ®s, action);
-
- spin_lock(&desc->lock);
- if (!noirqdebug)
- note_interrupt(irq, desc, action_ret);
- if (likely(!(desc->status & IRQ_PENDING)))
- break;
- desc->status &= ~IRQ_PENDING;
- }
-#endif
desc->status &= ~IRQ_INPROGRESS;
out:
action->handler = handler;
action->flags = irqflags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
int i;
/* create /proc/irq */
- root_irq_dir = proc_mkdir("irq", 0);
+ root_irq_dir = proc_mkdir("irq", NULL);
/* create /proc/irq/prof_cpu_mask */
entry = create_proc_entry("prof_cpu_mask", 0600, root_irq_dir);
}
-#ifdef CONFIG_4KSTACKS
static char softirq_stack[NR_CPUS * THREAD_SIZE] __attribute__((__aligned__(THREAD_SIZE), __section__(".bss.page_aligned")));
static char hardirq_stack[NR_CPUS * THREAD_SIZE] __attribute__((__aligned__(THREAD_SIZE), __section__(".bss.page_aligned")));
curctx = current_thread_info();
irqctx = softirq_ctx[smp_processor_id()];
irqctx->tinfo.task = curctx->task;
+ irqctx->tinfo.real_stack = curctx->real_stack;
+ irqctx->tinfo.virtual_stack = curctx->virtual_stack;
irqctx->tinfo.previous_esp = current_stack_pointer();
/* build the stack frame on the softirq stack */
}
EXPORT_SYMBOL(do_softirq);
-#endif
* linux/kernel/ldt.c
*
* Copyright (C) 1992 Krishna Balasubramanian and Linus Torvalds
- * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 1999, 2003 Ingo Molnar <mingo@redhat.com>
*/
#include <linux/errno.h>
#include <asm/system.h>
#include <asm/ldt.h>
#include <asm/desc.h>
+#include <linux/highmem.h>
+#include <asm/atomic_kmap.h>
#ifdef CONFIG_SMP /* avoids "defined but not used" warning */
static void flush_ldt(void *null)
static int alloc_ldt(mm_context_t *pc, int mincount, int reload)
{
- void *oldldt;
- void *newldt;
- int oldsize;
+ int oldsize, newsize, i;
if (mincount <= pc->size)
return 0;
+ /*
+ * LDT got larger - reallocate if necessary.
+ */
oldsize = pc->size;
mincount = (mincount+511)&(~511);
- if (mincount*LDT_ENTRY_SIZE > PAGE_SIZE)
- newldt = vmalloc(mincount*LDT_ENTRY_SIZE);
- else
- newldt = kmalloc(mincount*LDT_ENTRY_SIZE, GFP_KERNEL);
-
- if (!newldt)
- return -ENOMEM;
-
- if (oldsize)
- memcpy(newldt, pc->ldt, oldsize*LDT_ENTRY_SIZE);
- oldldt = pc->ldt;
- memset(newldt+oldsize*LDT_ENTRY_SIZE, 0, (mincount-oldsize)*LDT_ENTRY_SIZE);
- pc->ldt = newldt;
- wmb();
+ newsize = mincount*LDT_ENTRY_SIZE;
+ for (i = 0; i < newsize; i += PAGE_SIZE) {
+ int nr = i/PAGE_SIZE;
+ BUG_ON(i >= 64*1024);
+ if (!pc->ldt_pages[nr]) {
+ pc->ldt_pages[nr] = alloc_page(GFP_HIGHUSER);
+ if (!pc->ldt_pages[nr])
+ return -ENOMEM;
+ clear_highpage(pc->ldt_pages[nr]);
+ }
+ }
pc->size = mincount;
- wmb();
-
if (reload) {
#ifdef CONFIG_SMP
cpumask_t mask;
+
preempt_disable();
load_LDT(pc);
mask = cpumask_of_cpu(smp_processor_id());
if (!cpus_equal(current->mm->cpu_vm_mask, mask))
- smp_call_function(flush_ldt, 0, 1, 1);
+ smp_call_function(flush_ldt, NULL, 1, 1);
preempt_enable();
#else
load_LDT(pc);
#endif
}
- if (oldsize) {
- if (oldsize*LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(oldldt);
- else
- kfree(oldldt);
- }
return 0;
}
static inline int copy_ldt(mm_context_t *new, mm_context_t *old)
{
- int err = alloc_ldt(new, old->size, 0);
- if (err < 0)
+ int i, err, size = old->size, nr_pages = (size*LDT_ENTRY_SIZE + PAGE_SIZE-1)/PAGE_SIZE;
+
+ err = alloc_ldt(new, size, 0);
+ if (err < 0) {
+ new->size = 0;
return err;
- memcpy(new->ldt, old->ldt, old->size*LDT_ENTRY_SIZE);
+ }
+ for (i = 0; i < nr_pages; i++)
+ copy_user_highpage(new->ldt_pages[i], old->ldt_pages[i], 0);
return 0;
}
init_MUTEX(&mm->context.sem);
mm->context.size = 0;
+ memset(mm->context.ldt_pages, 0, sizeof(struct page *) * MAX_LDT_PAGES);
old_mm = current->mm;
if (old_mm && old_mm->context.size > 0) {
down(&old_mm->context.sem);
/*
* No need to lock the MM as we are the last user
+ * Do not touch the ldt register, we are already
+ * in the next thread.
*/
void destroy_context(struct mm_struct *mm)
{
- if (mm->context.size) {
- if (mm == current->active_mm)
- clear_LDT();
- if (mm->context.size*LDT_ENTRY_SIZE > PAGE_SIZE)
- vfree(mm->context.ldt);
- else
- kfree(mm->context.ldt);
- mm->context.size = 0;
- }
+ int i, nr_pages = (mm->context.size*LDT_ENTRY_SIZE + PAGE_SIZE-1) / PAGE_SIZE;
+
+ for (i = 0; i < nr_pages; i++)
+ __free_page(mm->context.ldt_pages[i]);
+ mm->context.size = 0;
}
static int read_ldt(void __user * ptr, unsigned long bytecount)
{
- int err;
+ int err, i;
unsigned long size;
struct mm_struct * mm = current->mm;
size = bytecount;
err = 0;
- if (copy_to_user(ptr, mm->context.ldt, size))
- err = -EFAULT;
+ /*
+ * This is necessary just in case we got here straight from a
+ * context-switch where the ptes were set but no tlb flush
+ * was done yet. We would rather avoid doing a TLB flush in the
+ * context-switch path, so we do it here instead.
+ */
+ __flush_tlb_global();
+
+ for (i = 0; i < size; i += PAGE_SIZE) {
+ int nr = i / PAGE_SIZE, bytes;
+ char *kaddr = kmap(mm->context.ldt_pages[nr]);
+
+ bytes = size - i;
+ if (bytes > PAGE_SIZE)
+ bytes = PAGE_SIZE;
+ if (copy_to_user(ptr + i, kaddr, bytes))
+ err = -EFAULT;
+ kunmap(mm->context.ldt_pages[nr]);
+ }
up(&mm->context.sem);
if (err < 0)
return err;
err = 0;
address = &default_ldt[0];
- size = 5*sizeof(struct desc_struct);
+ size = 5*LDT_ENTRY_SIZE;
if (size > bytecount)
size = bytecount;
goto out_unlock;
}
- lp = (__u32 *) ((ldt_info.entry_number << 3) + (char *) mm->context.ldt);
+ /*
+ * No rescheduling allowed from this point to the install.
+ *
+ * We do a TLB flush for the same reason as in the read_ldt() path.
+ */
+ preempt_disable();
+ __flush_tlb_global();
+ lp = (__u32 *) ((ldt_info.entry_number << 3) +
+ (char *) __kmap_atomic_vaddr(KM_LDT_PAGE0));
/* Allow LDTs to be cleared by the user. */
if (ldt_info.base_addr == 0 && ldt_info.limit == 0) {
*lp = entry_1;
*(lp+1) = entry_2;
error = 0;
+ preempt_enable();
out_unlock:
up(&mm->context.sem);
}
return ret;
}
+
+/*
+ * load one particular LDT into the current CPU
+ */
+void load_LDT_nolock(mm_context_t *pc, int cpu)
+{
+ struct page **pages = pc->ldt_pages;
+ int count = pc->size;
+ int nr_pages, i;
+
+ if (likely(!count)) {
+ pages = &default_ldt_page;
+ count = 5;
+ }
+ nr_pages = (count*LDT_ENTRY_SIZE + PAGE_SIZE-1) / PAGE_SIZE;
+
+ for (i = 0; i < nr_pages; i++) {
+ __kunmap_atomic_type(KM_LDT_PAGE0 - i);
+ __kmap_atomic(pages[i], KM_LDT_PAGE0 - i);
+ }
+ set_ldt_desc(cpu, (void *)__kmap_atomic_vaddr(KM_LDT_PAGE0), count);
+ load_LDT_desc();
+}
/*
- * Intel CPU Microcode Update driver for Linux
+ * Intel CPU Microcode Update Driver for Linux
*
- * Copyright (C) 2000 Tigran Aivazian
+ * Copyright (C) 2000-2004 Tigran Aivazian
*
 * This driver allows you to upgrade microcode on Intel processors
 * belonging to the IA-32 family - PentiumPro, Pentium II,
* Added misc device support (now uses both devfs and misc).
* Added MICROCODE_IOCFREE ioctl to clear memory.
* 1.05 09 Jun 2000, Simon Trimmer <simon@veritas.com>
- * Messages for error cases (non intel & no suitable microcode).
+ * Messages for error cases (non Intel & no suitable microcode).
* 1.06 03 Aug 2000, Tigran Aivazian <tigran@veritas.com>
* Removed ->release(). Removed exclusive open and status bitmap.
* Added microcode_rwsem to serialize read()/write()/ioctl().
* Removed ->read() method and obsoleted MICROCODE_IOCFREE ioctl
* because we no longer hold a copy of applied microcode
* in kernel memory.
+ * 1.14 25 Jun 2004 Tigran Aivazian <tigran@veritas.com>
+ * Fix sigmatch() macro to handle old CPUs with pf == 0.
+ * Thanks to Stuart Swales for pointing out this bug.
*/
#include <asm/uaccess.h>
#include <asm/processor.h>
-MODULE_DESCRIPTION("Intel CPU (IA-32) microcode update driver");
+MODULE_DESCRIPTION("Intel CPU (IA-32) Microcode Update Driver");
MODULE_AUTHOR("Tigran Aivazian <tigran@veritas.com>");
MODULE_LICENSE("GPL");
-#define MICROCODE_VERSION "1.13"
+#define MICROCODE_VERSION "1.14"
#define MICRO_DEBUG 0
#if MICRO_DEBUG
#define dprintk(x...) printk(KERN_INFO x)
#define get_datasize(mc) \
(((microcode_t *)mc)->hdr.datasize ? \
((microcode_t *)mc)->hdr.datasize : DEFAULT_UCODE_DATASIZE)
-#define sigmatch(s1, s2, p1, p2) (((s1) == (s2)) && ((p1) & (p2)))
+
+#define sigmatch(s1, s2, p1, p2) \
+ (((s1) == (s2)) && (((p1) & (p2)) || (((p1) == 0) && ((p2) == 0))))
+
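The reworked `sigmatch()` accepts a match either when the processor-flag masks overlap or when both are zero, which is how old CPUs that report `pf == 0` are handled. A small check of the macro's truth table (signature values are illustrative):

```c
#include <assert.h>

/* Mirror of the fixed sigmatch(): signatures must be equal, and the
 * processor-flag masks must overlap -- or both be zero, as reported
 * by old CPUs with pf == 0. */
#define sigmatch(s1, s2, p1, p2) \
	(((s1) == (s2)) && (((p1) & (p2)) || (((p1) == 0) && ((p2) == 0))))
```

The old form `((p1) & (p2))` was always false when both masks were zero, so updates for those CPUs were silently rejected.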
#define exttable_size(et) ((et)->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE)
/* serialize access to the physical write to MSR 0x79 */
struct ucode_cpu_info *uci = ucode_cpu_info + cpu_num;
if (uci->mc == NULL) {
- printk(KERN_INFO "microcode: No suitable data for cpu %d\n", cpu_num);
+ printk(KERN_INFO "microcode: No new microcode data for cpu %d\n", cpu_num);
return;
}
}
printk(KERN_INFO
- "IA-32 Microcode Update Driver: v%s <tigran@veritas.com>\n",
- MICROCODE_VERSION);
+ "IA-32 Microcode Update Driver: v" MICROCODE_VERSION " <tigran@veritas.com>\n");
return 0;
}
static void __exit microcode_exit (void)
{
misc_deregister(&microcode_dev);
- printk(KERN_INFO "IA-32 Microcode Update Driver v%s unregistered\n",
- MICROCODE_VERSION);
+ printk(KERN_INFO "IA-32 Microcode Update Driver v" MICROCODE_VERSION " unregistered\n");
}
module_init(microcode_init)
{
if (size == 0)
return NULL;
- return vmalloc(size);
+ return vmalloc_exec(size);
}
#include <linux/smp_lock.h>
#include <linux/kernel_stat.h>
#include <linux/mc146818rtc.h>
+#include <linux/bitops.h>
#include <asm/smp.h>
#include <asm/acpi.h>
#include <asm/mtrr.h>
#include <asm/mpspec.h>
-#include <asm/pgalloc.h>
#include <asm/io_apic.h>
#include <mach_apic.h>
static int mpc_record;
static struct mpc_config_translation *translation_table[MAX_MPC_ENTRY] __initdata;
+#ifdef CONFIG_X86_NUMAQ
+static int MP_valid_apicid(int apicid, int version)
+{
+ return hweight_long(apicid & 0xf) == 1 && (apicid >> 4) != 0xf;
+}
+#else
+static int MP_valid_apicid(int apicid, int version)
+{
+ if (version >= 0x14)
+ return apicid < 0xff;
+ else
+ return apicid < 0xf;
+}
+#endif
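On NUMA-Q, a valid APIC ID encodes exactly one CPU bit in the low nibble plus a quad number that must not be 0xf. A deterministic sketch of that check, with a software popcount standing in for the kernel's `hweight_long()`:

```c
#include <assert.h>

/* Software popcount standing in for hweight_long(). */
static int popcount_ul(unsigned long x)
{
	int n = 0;
	while (x) {
		n += x & 1;
		x >>= 1;
	}
	return n;
}

/* NUMA-Q form of MP_valid_apicid(): exactly one bit set in the low
 * nibble (the logical CPU within the quad), and quad != 0xf. */
static int numaq_valid_apicid(int apicid)
{
	return popcount_ul(apicid & 0xf) == 1 && (apicid >> 4) != 0xf;
}
```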
+
void __init MP_processor_info (struct mpc_config_processor *m)
{
int ver, apicid;
return;
}
num_processors++;
+ ver = m->mpc_apicver;
- if (MAX_APICS - m->mpc_apicid <= 0) {
+ if (!MP_valid_apicid(apicid, ver)) {
printk(KERN_WARNING "Processor #%d INVALID. (Max ID: %d).\n",
m->mpc_apicid, MAX_APICS);
--num_processors;
return;
}
- ver = m->mpc_apicver;
tmp = apicid_to_cpu_present(apicid);
physids_or(phys_cpu_present_map, phys_cpu_present_map, tmp);
* Read the physical hardware table. Anything here will
* override the defaults.
*/
- if (!smp_read_mpc((void *)mpf->mpf_physptr)) {
+ if (!smp_read_mpc((void *)phys_to_virt(mpf->mpf_physptr))) {
smp_found_config = 0;
printk(KERN_ERR "BIOS bug, MP table errors detected!...\n");
printk(KERN_ERR "... disabling SMP support. (tell your hw vendor)\n");
MP_processor_info(&processor);
}
-#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_ACPI_INTERPRETER)
+#if defined(CONFIG_X86_IO_APIC) && (defined(CONFIG_ACPI_INTERPRETER) || defined(CONFIG_ACPI_BOOT))
#define MP_ISA_BUS 0
#define MP_MAX_IOAPIC_PIN 127
} mp_ioapic_routing[MAX_IO_APICS];
-static int __init mp_find_ioapic (
+static int mp_find_ioapic (
int gsi)
{
int i = 0;
for (i = 0; i < 16; i++) {
int idx;
- for (idx = 0; idx < mp_irq_entries; idx++)
- if (mp_irqs[idx].mpc_srcbus == MP_ISA_BUS &&
- (mp_irqs[idx].mpc_srcbusirq == i ||
- mp_irqs[idx].mpc_dstirq == i))
- break;
+ for (idx = 0; idx < mp_irq_entries; idx++) {
+ struct mpc_config_intsrc *irq = mp_irqs + idx;
- if (idx != mp_irq_entries)
- continue; /* IRQ already used */
+ /* Do we already have a mapping for this ISA IRQ? */
+ if (irq->mpc_srcbus == MP_ISA_BUS && irq->mpc_srcbusirq == i)
+ break;
+
+ /* Do we already have a mapping for this IOAPIC pin */
+ if ((irq->mpc_dstapic == intsrc.mpc_dstapic) &&
+ (irq->mpc_dstirq == i))
+ break;
+ }
+
+ if (idx != mp_irq_entries) {
+ printk(KERN_DEBUG "ACPI: IRQ%d used by override.\n", i);
+ continue; /* IRQ already used */
+ }
intsrc.mpc_irqtype = mp_INT;
intsrc.mpc_srcbusirq = i; /* Identity mapped */
}
}
-extern FADT_DESCRIPTOR acpi_fadt;
-
-#ifdef CONFIG_ACPI_PCI
-
int (*platform_rename_gsi)(int ioapic, int gsi);
-void __init mp_parse_prt (void)
+void mp_register_gsi (u32 gsi, int edge_level, int active_high_low)
{
- struct list_head *node = NULL;
- struct acpi_prt_entry *entry = NULL;
int ioapic = -1;
int ioapic_pin = 0;
- int gsi = 0;
int idx, bit = 0;
- int edge_level = 0;
- int active_high_low = 0;
- /*
- * Parsing through the PCI Interrupt Routing Table (PRT) and program
- * routing for all entries.
- */
- list_for_each(node, &acpi_prt.entries) {
- entry = list_entry(node, struct acpi_prt_entry, node);
-
- /* Need to get gsi for dynamic entry */
- if (entry->link.handle) {
- gsi = acpi_pci_link_get_irq(entry->link.handle, entry->link.index, &edge_level, &active_high_low);
- if (!gsi)
- continue;
- }
- else {
- /* Hardwired GSI. Assume PCI standard settings */
- gsi = entry->link.index;
- edge_level = 1;
- active_high_low = 1;
- }
+#ifdef CONFIG_ACPI_BUS
+ /* Don't set up the ACPI SCI because it's already set up */
+ if (acpi_fadt.sci_int == gsi)
+ return;
+#endif
- /* Don't set up the ACPI SCI because it's already set up */
- if (acpi_fadt.sci_int == gsi) {
- /* we still need to set entry's irq */
- acpi_gsi_to_irq(gsi, &entry->irq);
- continue;
- }
-
- ioapic = mp_find_ioapic(gsi);
- if (ioapic < 0)
- continue;
- ioapic_pin = gsi - mp_ioapic_routing[ioapic].gsi_base;
-
- if (platform_rename_gsi)
- gsi = platform_rename_gsi(ioapic, gsi);
-
- /*
- * Avoid pin reprogramming. PRTs typically include entries
- * with redundant pin->gsi mappings (but unique PCI devices);
- * we only only program the IOAPIC on the first.
- */
- bit = ioapic_pin % 32;
- idx = (ioapic_pin < 32) ? 0 : (ioapic_pin / 32);
- if (idx > 3) {
- printk(KERN_ERR "Invalid reference to IOAPIC pin "
- "%d-%d\n", mp_ioapic_routing[ioapic].apic_id,
- ioapic_pin);
- continue;
- }
- if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) {
- Dprintk(KERN_DEBUG "Pin %d-%d already programmed\n",
- mp_ioapic_routing[ioapic].apic_id, ioapic_pin);
- acpi_gsi_to_irq(gsi, &entry->irq);
- continue;
- }
+ ioapic = mp_find_ioapic(gsi);
+ if (ioapic < 0) {
+ printk(KERN_WARNING "No IOAPIC for GSI %u\n", gsi);
+ return;
+ }
- mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
+ ioapic_pin = gsi - mp_ioapic_routing[ioapic].gsi_base;
- if (!io_apic_set_pci_routing(ioapic, ioapic_pin, gsi, edge_level, active_high_low)) {
- acpi_gsi_to_irq(gsi, &entry->irq);
- }
- printk(KERN_DEBUG "%02x:%02x:%02x[%c] -> %d-%d -> IRQ %d %s %s\n",
- entry->id.segment, entry->id.bus,
- entry->id.device, ('A' + entry->pin),
- mp_ioapic_routing[ioapic].apic_id, ioapic_pin,
- entry->irq, edge_level ? "level" : "edge",
- active_high_low ? "low" : "high");
+ if (platform_rename_gsi)
+ gsi = platform_rename_gsi(ioapic, gsi);
+
+ /*
+ * Avoid pin reprogramming. PRTs typically include entries
+ * with redundant pin->gsi mappings (but unique PCI devices);
+ * we only program the IOAPIC on the first.
+ */
+ bit = ioapic_pin % 32;
+ idx = (ioapic_pin < 32) ? 0 : (ioapic_pin / 32);
+ if (idx > 3) {
+ printk(KERN_ERR "Invalid reference to IOAPIC pin "
+ "%d-%d\n", mp_ioapic_routing[ioapic].apic_id,
+ ioapic_pin);
+ return;
+ }
+ if ((1<<bit) & mp_ioapic_routing[ioapic].pin_programmed[idx]) {
+ Dprintk(KERN_DEBUG "Pin %d-%d already programmed\n",
+ mp_ioapic_routing[ioapic].apic_id, ioapic_pin);
+ return;
}
- print_IO_APIC();
+ mp_ioapic_routing[ioapic].pin_programmed[idx] |= (1<<bit);
- return;
+ io_apic_set_pci_routing(ioapic, ioapic_pin, gsi,
+ edge_level == ACPI_EDGE_SENSITIVE ? 0 : 1,
+ active_high_low == ACPI_ACTIVE_HIGH ? 0 : 1);
}
-#endif /*CONFIG_ACPI_PCI*/
-#endif /*CONFIG_X86_IO_APIC && CONFIG_ACPI_INTERPRETER*/
+#endif /*CONFIG_X86_IO_APIC && (CONFIG_ACPI_INTERPRETER || CONFIG_ACPI_BOOT)*/
#endif /*CONFIG_ACPI_BOOT*/
#include <linux/smp_lock.h>
#include <linux/major.h>
#include <linux/fs.h>
+#include <linux/device.h>
+#include <linux/cpu.h>
+#include <linux/notifier.h>
#include <asm/processor.h>
#include <asm/msr.h>
#include <asm/uaccess.h>
#include <asm/system.h>
+static struct class_simple *msr_class;
+
/* Note: "err" is handled in a funny way below. Otherwise one version
of gcc or another breaks. */
.open = msr_open,
};
+static int msr_class_simple_device_add(int i)
+{
+ int err = 0;
+ struct class_device *class_err;
+
+ class_err = class_simple_device_add(msr_class, MKDEV(MSR_MAJOR, i), NULL, "msr%d",i);
+ if (IS_ERR(class_err))
+ err = PTR_ERR(class_err);
+ return err;
+}
+
+static int __devinit msr_class_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+ unsigned int cpu = (unsigned long)hcpu;
+
+ switch (action) {
+ case CPU_ONLINE:
+ msr_class_simple_device_add(cpu);
+ break;
+ case CPU_DEAD:
+ class_simple_device_remove(MKDEV(MSR_MAJOR, cpu));
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block msr_class_cpu_notifier =
+{
+ .notifier_call = msr_class_cpu_callback,
+};
+
int __init msr_init(void)
{
+ int i, err = 0;
+
if (register_chrdev(MSR_MAJOR, "cpu/msr", &msr_fops)) {
printk(KERN_ERR "msr: unable to get major %d for msr\n",
MSR_MAJOR);
- return -EBUSY;
+ err = -EBUSY;
+ goto out;
+ }
+ msr_class = class_simple_create(THIS_MODULE, "msr");
+ if (IS_ERR(msr_class)) {
+ err = PTR_ERR(msr_class);
+ goto out_chrdev;
}
+ for_each_online_cpu(i) {
+ err = msr_class_simple_device_add(i);
+ if (err != 0)
+ goto out_class;
+ }
+ register_cpu_notifier(&msr_class_cpu_notifier);
- return 0;
+ err = 0;
+ goto out;
+
+out_class:
+ for_each_online_cpu(i)
+ class_simple_device_remove(MKDEV(MSR_MAJOR, i));
+ class_simple_destroy(msr_class);
+out_chrdev:
+ unregister_chrdev(MSR_MAJOR, "cpu/msr");
+out:
+ return err;
}
void __exit msr_exit(void)
{
+ int cpu = 0;
+ for_each_online_cpu(cpu)
+ class_simple_device_remove(MKDEV(MSR_MAJOR, cpu));
+ class_simple_destroy(msr_class);
unregister_chrdev(MSR_MAJOR, "cpu/msr");
+ unregister_cpu_notifier(&msr_class_cpu_notifier);
}
module_init(msr_init);
#include <linux/module.h>
#include <linux/nmi.h>
#include <linux/sysdev.h>
+#include <linux/dump.h>
#include <asm/smp.h>
#include <asm/mtrr.h>
* wait a few IRQs (5 seconds) before doing the oops ...
*/
alert_counter[cpu]++;
- if (alert_counter[cpu] == 5*nmi_hz) {
+ if (alert_counter[cpu] == 60*nmi_hz) {
spin_lock(&nmi_print_lock);
/*
* We are in trouble anyway, lets at least try
bust_spinlocks(1);
printk("NMI Watchdog detected LOCKUP on CPU%d, eip %08lx, registers:\n", cpu, regs->eip);
show_registers(regs);
+ dump("NMI Watchdog detected LOCKUP", regs);
printk("console shuts up ...\n");
console_silent();
spin_unlock(&nmi_print_lock);
EXPORT_SYMBOL(release_lapic_nmi);
EXPORT_SYMBOL(disable_timer_nmi_watchdog);
EXPORT_SYMBOL(enable_timer_nmi_watchdog);
+EXPORT_SYMBOL_GPL(touch_nmi_watchdog);
}
}
-/*
- * for each node mark the regions
- * TOPOFMEM = hi_shrd_mem_start + hi_shrd_mem_size
- *
- * need to be very careful to not mark 1024+ as belonging
- * to node 0. will want 1027 to show as belonging to node 1
- * example:
- * TOPOFMEM = 1024
- * 1024 >> 8 = 4 (subtract 1 for starting at 0]
- * tmpvar = TOPOFMEM - 256 = 768
- * 1024 >> 8 = 4 (subtract 1 for starting at 0]
- *
- */
-static void __init initialize_physnode_map(void)
-{
- int nid;
- unsigned int topofmem, cur;
- struct eachquadmem *eq;
- struct sys_cfg_data *scd =
- (struct sys_cfg_data *)__va(SYS_CFG_DATA_PRIV_ADDR);
-
-
- for(nid = 0; nid < numnodes; nid++) {
- if(scd->quads_present31_0 & (1 << nid)) {
- eq = &scd->eq[nid];
- cur = eq->hi_shrd_mem_start;
- topofmem = eq->hi_shrd_mem_start + eq->hi_shrd_mem_size;
- while (cur < topofmem) {
- physnode_map[cur >> 8] = nid;
- cur ++;
- }
- }
- }
-}
-
/*
* Unlike Summit, we don't really care to let the NUMA-Q
* fall back to flat mode. Don't compile for NUMA-Q
int __init get_memcfg_numaq(void)
{
smp_dump_qct();
- initialize_physnode_map();
return 1;
}
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/random.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/i387.h>
#include <asm/irq.h>
#include <asm/desc.h>
+#include <asm/atomic_kmap.h>
#ifdef CONFIG_MATH_EMULATION
#include <asm/math_emu.h>
#endif
show_trace(NULL, &regs->esp);
}
+EXPORT_SYMBOL_GPL(show_regs);
+
/*
* This gets run with %ebx containing the
* function to call, and %edx containing
struct task_struct *tsk = current;
memset(tsk->thread.debugreg, 0, sizeof(unsigned long)*8);
+#ifdef CONFIG_X86_HIGH_ENTRY
+ clear_thread_flag(TIF_DB7);
+#endif
memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
/*
* Forget coprocessor state..
if (dead_task->mm) {
// temporary debugging check
if (dead_task->mm->context.size) {
- printk("WARNING: dead process %8s still has LDT? <%p/%d>\n",
+ printk("WARNING: dead process %8s still has LDT? <%d>\n",
dead_task->comm,
- dead_task->mm->context.ldt,
dead_task->mm->context.size);
BUG();
}
{
struct pt_regs * childregs;
struct task_struct *tsk;
- int err;
+ int err, i;
childregs = ((struct pt_regs *) (THREAD_SIZE + (unsigned long) p->thread_info)) - 1;
- struct_cpy(childregs, regs);
+ *childregs = *regs;
childregs->eax = 0;
childregs->esp = esp;
p->set_child_tid = p->clear_child_tid = NULL;
p->thread.esp = (unsigned long) childregs;
p->thread.esp0 = (unsigned long) (childregs+1);
+ /*
+ * get the two stack pages, for the virtual stack.
+ *
+ * IMPORTANT: this code relies on the fact that the task
+ * structure is a THREAD_SIZE-aligned piece of physical memory.
+ */
+ for (i = 0; i < ARRAY_SIZE(p->thread.stack_page); i++)
+ p->thread.stack_page[i] =
+ virt_to_page((unsigned long)p->thread_info + (i*PAGE_SIZE));
+
p->thread.eip = (unsigned long) ret_from_fork;
+ p->thread_info->real_stack = p->thread_info;
savesegment(fs,p->thread.fs);
savesegment(gs,p->thread.gs);
/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
__unlazy_fpu(prev_p);
+ if (next_p->mm)
+ load_user_cs_desc(cpu, next_p->mm);
+#ifdef CONFIG_X86_HIGH_ENTRY
+{
+ int i;
+ /*
+ * Set the ptes of the virtual stack. (NOTE: a one-page TLB flush is
+ * needed because otherwise NMIs could interrupt the
+ * user-return code with a virtual stack and stale TLBs.)
+ */
+ for (i = 0; i < ARRAY_SIZE(next->stack_page); i++) {
+ __kunmap_atomic_type(KM_VSTACK_TOP-i);
+ __kmap_atomic(next->stack_page[i], KM_VSTACK_TOP-i);
+ }
+ /*
+ * NOTE: here we rely on the task being the stack as well
+ */
+ next_p->thread_info->virtual_stack =
+ (void *)__kmap_atomic_vaddr(KM_VSTACK_TOP);
+}
+#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+ /*
+ * If next was preempted on entry from userspace to kernel,
+ * and now it's on a different cpu, we need to adjust %esp.
+ * This assumes that entry.S does not copy %esp while on the
+ * virtual stack (with interrupts enabled): which is so,
+ * except within __SWITCH_KERNELSPACE itself.
+ */
+ if (unlikely(next->esp >= TASK_SIZE)) {
+ next->esp &= THREAD_SIZE - 1;
+ next->esp |= (unsigned long) next_p->thread_info->virtual_stack;
+ }
+#endif
+#endif
/*
* Reload esp0, LDT and the page table pointer:
*/
- load_esp0(tss, next);
+ load_virtual_esp0(tss, next_p);
/*
* Load the per-thread Thread-Local Storage descriptor.
return 0;
}
+/*
+ * Get a random word:
+ */
+static inline unsigned int get_random_int(void)
+{
+ unsigned int val = 0;
+
+ if (!exec_shield_randomize)
+ return 0;
+
+#ifdef CONFIG_X86_HAS_TSC
+ rdtscl(val);
+#endif
+ val += current->pid + jiffies + (int)&val;
+
+ /*
+ * Use IP's RNG. It suits our purpose perfectly: it re-keys itself
+ * every second, from the entropy pool (and thus creates a limited
+ * drain on it), and uses halfMD4Transform within the second. We
+ * also spice it with the TSC (if available), jiffies, PID and the
+ * stack address:
+ */
+ return secure_ip_id(val);
+}
+
+unsigned long arch_align_stack(unsigned long sp)
+{
+ if (current->flags & PF_RELOCEXEC)
+ sp -= ((get_random_int() % 65536) << 4);
+ return sp & ~0xf;
+}
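`arch_align_stack()` subtracts a random multiple of 16 bytes (up to 65536 steps, about 1 MB) and then rounds down to 16-byte alignment. A deterministic sketch with the random source factored out as a parameter:

```c
#include <assert.h>

/* Sketch of arch_align_stack()'s arithmetic, with the random value
 * passed in so the result is reproducible here. */
static unsigned long align_stack(unsigned long sp, unsigned int rnd)
{
	sp -= (unsigned long)(rnd % 65536) << 4;	/* up to ~1 MB shift */
	return sp & ~0xfUL;				/* 16-byte aligned */
}
```

With `rnd == 0` only the alignment mask applies; any other value still yields a 16-byte-aligned stack pointer below the original.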
+
+#if SHLIB_BASE >= 0x01000000
+# error SHLIB_BASE must be under 16MB!
+#endif
+
+static unsigned long
+arch_get_unmapped_nonexecutable_area(struct mm_struct *mm, unsigned long addr, unsigned long len)
+{
+ struct vm_area_struct *vma, *prev_vma;
+ unsigned long stack_limit;
+ int first_time = 1;
+
+ if (!mm->mmap_top) {
+ printk("hm, %s:%d, !mmap_top.\n", current->comm, current->pid);
+ mm->mmap_top = mmap_top();
+ }
+ stack_limit = mm->mmap_top;
+
+ /* requested length too big for entire address space */
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ /* don't allow allocations above current stack limit */
+ if (mm->non_executable_cache > stack_limit)
+ mm->non_executable_cache = stack_limit;
+
+ /* requesting a specific address */
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+
+ /* make sure it can fit in the remaining address space */
+ if (mm->non_executable_cache < len)
+ return -ENOMEM;
+
+ /* either no address requested or can't fit in requested address hole */
+try_again:
+ addr = (mm->non_executable_cache - len)&PAGE_MASK;
+ do {
+ if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
+ return -ENOMEM;
+
+ /* new region fits between prev_vma->vm_end and vma->vm_start, use it */
+ if (addr+len <= vma->vm_start && (!prev_vma || (addr >= prev_vma->vm_end))) {
+ /* remember the address as a hint for next time */
+ mm->non_executable_cache = addr;
+ return addr;
+
+ /* pull non_executable_cache down to the first hole */
+ } else if (mm->non_executable_cache == vma->vm_end)
+ mm->non_executable_cache = vma->vm_start;
+
+ /* try just below the current vma->vm_start */
+ addr = vma->vm_start-len;
+ } while (len <= vma->vm_start);
+ /* if hint left us with no space for the requested mapping, try again */
+ if (first_time) {
+ first_time = 0;
+ mm->non_executable_cache = stack_limit;
+ goto try_again;
+ }
+ return -ENOMEM;
+}
+
+static unsigned long randomize_range(unsigned long start, unsigned long end, unsigned long len)
+{
+ unsigned long range = end - len - start;
+ if (end <= start + len)
+ return 0;
+ return PAGE_ALIGN(get_random_int() % range + start);
+}
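`randomize_range()` picks a page-aligned address so that `[addr, addr+len)` still fits in `[start, end)`, returning 0 when the window is too small. A deterministic mirror with the random value as a parameter (constants are stand-ins for the kernel's `PAGE_ALIGN`):

```c
#include <assert.h>

/* Stand-in for the kernel's PAGE_ALIGN with 4 KB pages. */
#define PAGE_ALIGN_DEMO(x) (((x) + 4095UL) & ~4095UL)

/* Deterministic mirror of randomize_range(): choose a page-aligned
 * address such that [addr, addr+len) fits inside [start, end),
 * or return 0 if the window cannot hold len bytes. */
static unsigned long randomize_range_demo(unsigned long start,
					  unsigned long end,
					  unsigned long len,
					  unsigned long rnd)
{
	unsigned long range = end - len - start;

	if (end <= start + len)
		return 0;
	return PAGE_ALIGN_DEMO(rnd % range + start);
}
```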
+
+static inline unsigned long
+stock_arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ unsigned long start_addr;
+
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+ start_addr = addr = mm->free_area_cache;
+
+full_search:
+ for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
+ /* At this point: (!vma || addr < vma->vm_end). */
+ if (TASK_SIZE - len < addr) {
+ /*
+ * Start a new search - just in case we missed
+ * some holes.
+ */
+ if (start_addr != TASK_UNMAPPED_BASE) {
+ start_addr = addr = TASK_UNMAPPED_BASE;
+ goto full_search;
+ }
+ return -ENOMEM;
+ }
+ if (!vma || addr + len <= vma->vm_start) {
+ /*
+ * Remember the place where we stopped the search:
+ */
+ mm->free_area_cache = addr + len;
+ return addr;
+ }
+ addr = vma->vm_end;
+ }
+}
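The `full_search` loop above is a first-fit scan: walk the sorted VMAs, and return the first gap large enough for the request. A simplified sketch over a plain array of intervals, without the `mm` structures (names are illustrative):

```c
#include <assert.h>

/* A mapping occupying [start, end). */
struct map {
	unsigned long start, end;
};

/* First-fit hole search over mappings sorted by address, mirroring
 * the full_search loop: return the lowest addr >= the hint where
 * [addr, addr+len) overlaps no mapping and stays below limit. */
static unsigned long find_hole(const struct map *v, int n,
			       unsigned long addr, unsigned long len,
			       unsigned long limit)
{
	int i;

	for (i = 0; i < n; i++) {
		if (addr + len <= v[i].start)
			return addr;		/* fits before this mapping */
		if (v[i].end > addr)
			addr = v[i].end;	/* skip past it */
	}
	return addr + len <= limit ? addr : 0;
}
```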
+
+unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr0,
+ unsigned long len0, unsigned long pgoff, unsigned long flags,
+ unsigned long prot)
+{
+ unsigned long addr = addr0, len = len0;
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ int ascii_shield = 0;
+ unsigned long tmp;
+
+ /*
+ * Fall back to the old layout:
+ */
+ if (!(current->flags & PF_RELOCEXEC))
+ return stock_arch_get_unmapped_area(filp, addr0, len0, pgoff, flags);
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ if (!addr && (prot & PROT_EXEC) && !(flags & MAP_FIXED))
+ addr = randomize_range(SHLIB_BASE, 0x01000000, len);
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start)) {
+ return addr;
+ }
+ }
+
+ if (prot & PROT_EXEC) {
+ ascii_shield = 1;
+ addr = SHLIB_BASE;
+ } else {
+ /* this can fail if the stack was unlimited */
+ if ((tmp = arch_get_unmapped_nonexecutable_area(mm, addr, len)) != -ENOMEM)
+ return tmp;
+search_upper:
+ addr = PAGE_ALIGN(arch_align_stack(TASK_UNMAPPED_BASE));
+ }
+
+ for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
+ /* At this point: (!vma || addr < vma->vm_end). */
+ if (TASK_SIZE - len < addr) {
+ return -ENOMEM;
+ }
+ if (!vma || addr + len <= vma->vm_start) {
+ /*
+ * Must not let a PROT_EXEC mapping get into the
+ * brk area:
+ */
+ if (ascii_shield && (addr + len > mm->brk)) {
+ ascii_shield = 0;
+ goto search_upper;
+ }
+ /*
+ * Up until the brk area we randomize addresses
+ * as much as possible:
+ */
+ if (ascii_shield && (addr >= 0x01000000)) {
+ tmp = randomize_range(0x01000000, mm->brk, len);
+ vma = find_vma(mm, tmp);
+ if (TASK_SIZE - len >= tmp &&
+ (!vma || tmp + len <= vma->vm_start))
+ return tmp;
+ }
+ /*
+ * Ok, randomization didn't work out - return
+ * the result of the linear search:
+ */
+ return addr;
+ }
+ addr = vma->vm_end;
+ }
+}
+
+void arch_add_exec_range(struct mm_struct *mm, unsigned long limit)
+{
+ if (limit > mm->context.exec_limit) {
+ mm->context.exec_limit = limit;
+ set_user_cs(&mm->context.user_cs, limit);
+ if (mm == current->mm)
+ load_user_cs_desc(smp_processor_id(), mm);
+ }
+}
+
+void arch_remove_exec_range(struct mm_struct *mm, unsigned long old_end)
+{
+ struct vm_area_struct *vma;
+ unsigned long limit = 0;
+
+ if (old_end == mm->context.exec_limit) {
+ for (vma = mm->mmap; vma; vma = vma->vm_next)
+ if ((vma->vm_flags & VM_EXEC) && (vma->vm_end > limit))
+ limit = vma->vm_end;
+
+ mm->context.exec_limit = limit;
+ set_user_cs(&mm->context.user_cs, limit);
+ if (mm == current->mm)
+ load_user_cs_desc(smp_processor_id(), mm);
+ }
+}
+
+void arch_flush_exec_range(struct mm_struct *mm)
+{
+ mm->context.exec_limit = 0;
+ set_user_cs(&mm->context.user_cs, 0);
+}
+
+/*
+ * Generate random brk address between 128MB and 196MB. (if the layout
+ * allows it.)
+ */
+void randomize_brk(unsigned long old_brk)
+{
+ unsigned long new_brk, range_start, range_end;
+
+ range_start = 0x08000000;
+ if (current->mm->brk >= range_start)
+ range_start = current->mm->brk;
+ range_end = range_start + 0x02000000;
+ new_brk = randomize_range(range_start, range_end, 0);
+ if (new_brk)
+ current->mm->brk = new_brk;
+}
+
+/*
+ * Top of mmap area (just below the process stack).
+ * Leave at least a ~128 MB hole. Randomize it.
+ */
+#define MIN_GAP (128*1024*1024)
+#define MAX_GAP (TASK_SIZE/6*5)
+
+unsigned long mmap_top(void)
+{
+ unsigned long gap = 0;
+
+ gap = current->rlim[RLIMIT_STACK].rlim_cur;
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+ gap = arch_align_stack(gap) & PAGE_MASK;
+
+ return TASK_SIZE - gap;
+}
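`mmap_top()` first clamps the stack rlimit into `[MIN_GAP, MAX_GAP]` before subtracting it from `TASK_SIZE`. A standalone sketch of the clamp, assuming the usual i386 3 GB `TASK_SIZE`:

```c
#include <assert.h>

#define MIN_GAP_DEMO (128UL * 1024 * 1024)
#define TASK_SIZE_DEMO 0xc0000000UL		/* assumed 3 GB user space */
#define MAX_GAP_DEMO (TASK_SIZE_DEMO / 6 * 5)

/* Clamp the stack rlimit into [MIN_GAP, MAX_GAP], as mmap_top()
 * does before computing TASK_SIZE - gap. */
static unsigned long clamp_gap(unsigned long gap)
{
	if (gap < MIN_GAP_DEMO)
		gap = MIN_GAP_DEMO;
	else if (gap > MAX_GAP_DEMO)
		gap = MAX_GAP_DEMO;
	return gap;
}
```

So even an 8 MB stack rlimit still reserves the full 128 MB hole, while `RLIM_INFINITY` is capped at five sixths of the address space.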
+
read_unlock(&tasklist_lock);
if (!child)
goto out;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out_tsk;
ret = -EPERM;
if (pid == 1) /* you may not mess with init */
#include <linux/interrupt.h>
#include <linux/mc146818rtc.h>
#include <linux/efi.h>
+#include <linux/dmi.h>
#include <asm/uaccess.h>
#include <asm/apic.h>
#include "mach_reboot.h"
__setup("reboot=", reboot_setup);
+/*
+ * Reboot options and system auto-detection code provided by
+ * Dell Inc. so their systems "just work". :-)
+ */
+
+/*
+ * Some machines require the "reboot=b" command-line option; this quirk makes that automatic.
+ */
+static int __init set_bios_reboot(struct dmi_system_id *d)
+{
+ if (!reboot_thru_bios) {
+ reboot_thru_bios = 1;
+ printk(KERN_INFO "%s series board detected. Selecting BIOS-method for reboots.\n", d->ident);
+ }
+ return 0;
+}
+
+/*
+ * Some machines require the "reboot=s" command-line option; this quirk makes that automatic.
+ */
+static int __init set_smp_reboot(struct dmi_system_id *d)
+{
+#ifdef CONFIG_SMP
+ if (!reboot_smp) {
+ reboot_smp = 1;
+ printk(KERN_INFO "%s series board detected. Selecting SMP-method for reboots.\n", d->ident);
+ }
+#endif
+ return 0;
+}
+
+/*
+ * Some machines require the "reboot=b,s" command-line option; this quirk makes that automatic.
+ */
+static int __init set_smp_bios_reboot(struct dmi_system_id *d)
+{
+ set_smp_reboot(d);
+ set_bios_reboot(d);
+ return 0;
+}
+
+static struct dmi_system_id __initdata reboot_dmi_table[] = {
+ { /* Handle problems with rebooting on Dell 1300's */
+ .callback = set_smp_bios_reboot,
+ .ident = "Dell PowerEdge 1300",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 1300/"),
+ },
+ },
+ { /* Handle problems with rebooting on Dell 300's */
+ .callback = set_bios_reboot,
+ .ident = "Dell PowerEdge 300",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 300/"),
+ },
+ },
+ { /* Handle problems with rebooting on Dell 2400's */
+ .callback = set_bios_reboot,
+ .ident = "Dell PowerEdge 2400",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Computer Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge 2400"),
+ },
+ },
+ { }
+};
+
+static int reboot_init(void)
+{
+ dmi_check_system(reboot_dmi_table);
+ return 0;
+}
+
+core_initcall(reboot_init);
+
/* The following code and data reboots the machine by switching to real
mode and jumping to the BIOS reset entry point, as if the CPU has
really been reset. The previous version asked the keyboard
unsigned long long * base __attribute__ ((packed));
}
real_mode_gdt = { sizeof (real_mode_gdt_entries) - 1, real_mode_gdt_entries },
-real_mode_idt = { 0x3ff, 0 },
-no_idt = { 0, 0 };
+real_mode_idt = { 0x3ff, NULL },
+no_idt = { 0, NULL };
/* This is 16-bit protected mode code to disable paging and the cache,
CMOS_WRITE(0x00, 0x8f);
spin_unlock_irqrestore(&rtc_lock, flags);
- /* Remap the kernel at virtual address zero, as well as offset zero
- from the kernel segment. This assumes the kernel segment starts at
- virtual address PAGE_OFFSET. */
-
- memcpy (swapper_pg_dir, swapper_pg_dir + USER_PGD_PTRS,
- sizeof (swapper_pg_dir [0]) * KERNEL_PGD_PTRS);
+ /*
+ * Remap the first 16 MB of RAM (which includes the kernel image)
+ * at virtual address zero:
+ */
+ setup_identity_mappings(swapper_pg_dir, 0, LOW_MAPPINGS_SIZE);
/*
* Use `swapper_pg_dir' as our page directory.
* Stop all CPUs and turn off local APICs and the IO-APIC, so
* other OSs see a clean IRQ state.
*/
- smp_send_stop();
+ if (!netdump_mode)
+ smp_send_stop();
#elif defined(CONFIG_X86_LOCAL_APIC)
if (cpu_has_apic) {
local_irq_disable();
if (!reboot_thru_bios) {
if (efi_enabled) {
- efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, 0);
+ efi.reset_system(EFI_RESET_COLD, EFI_SUCCESS, 0, NULL);
__asm__ __volatile__("lidt %0": :"m" (no_idt));
__asm__ __volatile__("int3");
}
}
}
if (efi_enabled)
- efi.reset_system(EFI_RESET_WARM, EFI_SUCCESS, 0, 0);
+ efi.reset_system(EFI_RESET_WARM, EFI_SUCCESS, 0, NULL);
machine_real_restart(jump_to_bios, sizeof(jump_to_bios));
}
void machine_power_off(void)
{
if (efi_enabled)
- efi.reset_system(EFI_RESET_SHUTDOWN, EFI_SUCCESS, 0, 0);
+ efi.reset_system(EFI_RESET_SHUTDOWN, EFI_SUCCESS, 0, NULL);
if (pm_power_off)
pm_power_off();
}
#include <linux/init.h>
#include <linux/pci.h>
-#include <linux/scx200.h>
#include <linux/scx200.h>
#define NAME "scx200"
#include <asm/sections.h>
#include <asm/io_apic.h>
#include <asm/ist.h>
-#include <asm/std_resources.h>
+#include <asm/io.h>
#include "setup_arch_pre.h"
/* This value is set up by the early boot code to point to the value
int disable_pse __initdata = 0;
-static inline char * __init machine_specific_memory_setup(void);
-
/*
* Machine setup..
*/
#define RAMDISK_LOAD_FLAG 0x4000
static char command_line[COMMAND_LINE_SIZE];
- char saved_command_line[COMMAND_LINE_SIZE];
unsigned char __initdata boot_params[PARAM_SIZE];
-static struct resource code_resource = { "Kernel code", 0x100000, 0 };
-static struct resource data_resource = { "Kernel data", 0, 0 };
+static struct resource data_resource = {
+ .name = "Kernel data",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+};
+
+static struct resource code_resource = {
+ .name = "Kernel code",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+};
+
+static struct resource system_rom_resource = {
+ .name = "System ROM",
+ .start = 0xf0000,
+ .end = 0xfffff,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+};
+
+static struct resource extension_rom_resource = {
+ .name = "Extension ROM",
+ .start = 0xe0000,
+ .end = 0xeffff,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+};
+
+static struct resource adapter_rom_resources[] = { {
+ .name = "Adapter ROM",
+ .start = 0xc8000,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+}, {
+ .name = "Adapter ROM",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+}, {
+ .name = "Adapter ROM",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+}, {
+ .name = "Adapter ROM",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+}, {
+ .name = "Adapter ROM",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+}, {
+ .name = "Adapter ROM",
+ .start = 0,
+ .end = 0,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+} };
+
+#define ADAPTER_ROM_RESOURCES \
+ (sizeof adapter_rom_resources / sizeof adapter_rom_resources[0])
+
+static struct resource video_rom_resource = {
+ .name = "Video ROM",
+ .start = 0xc0000,
+ .end = 0xc7fff,
+ .flags = IORESOURCE_BUSY | IORESOURCE_READONLY | IORESOURCE_MEM
+};
+
+static struct resource video_ram_resource = {
+ .name = "Video RAM area",
+ .start = 0xa0000,
+ .end = 0xbffff,
+ .flags = IORESOURCE_BUSY | IORESOURCE_MEM
+};
+
+static struct resource standard_io_resources[] = { {
+ .name = "dma1",
+ .start = 0x0000,
+ .end = 0x001f,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "pic1",
+ .start = 0x0020,
+ .end = 0x0021,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "timer",
+ .start = 0x0040,
+ .end = 0x005f,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "keyboard",
+ .start = 0x0060,
+ .end = 0x006f,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "dma page reg",
+ .start = 0x0080,
+ .end = 0x008f,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "pic2",
+ .start = 0x00a0,
+ .end = 0x00a1,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "dma2",
+ .start = 0x00c0,
+ .end = 0x00df,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "fpu",
+ .start = 0x00f0,
+ .end = 0x00ff,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+} };
+
+#define STANDARD_IO_RESOURCES \
+ (sizeof standard_io_resources / sizeof standard_io_resources[0])
+
+#define romsignature(x) (*(unsigned short *)(x) == 0xaa55)
+
+static int __init romchecksum(unsigned char *rom, unsigned long length)
+{
+ unsigned char *p, sum = 0;
+
+ for (p = rom; p < rom + length; p++)
+ sum += *p;
+ return sum == 0;
+}
+
+static void __init probe_roms(void)
+{
+ unsigned long start, length, upper;
+ unsigned char *rom;
+ int i;
+
+ /* video rom */
+ upper = adapter_rom_resources[0].start;
+ for (start = video_rom_resource.start; start < upper; start += 2048) {
+ rom = isa_bus_to_virt(start);
+ if (!romsignature(rom))
+ continue;
+
+ video_rom_resource.start = start;
+
+ /* 0 < length <= 0x7f * 512, historically */
+ length = rom[2] * 512;
+
+ /* if checksum okay, trust length byte */
+ if (length && romchecksum(rom, length))
+ video_rom_resource.end = start + length - 1;
+
+ request_resource(&iomem_resource, &video_rom_resource);
+ break;
+ }
+
+ start = (video_rom_resource.end + 1 + 2047) & ~2047UL;
+ if (start < upper)
+ start = upper;
+
+ /* system rom */
+ request_resource(&iomem_resource, &system_rom_resource);
+ upper = system_rom_resource.start;
+
+ /* check for extension rom (ignore length byte!) */
+ rom = isa_bus_to_virt(extension_rom_resource.start);
+ if (romsignature(rom)) {
+ length = extension_rom_resource.end - extension_rom_resource.start + 1;
+ if (romchecksum(rom, length)) {
+ request_resource(&iomem_resource, &extension_rom_resource);
+ upper = extension_rom_resource.start;
+ }
+ }
+
+ /* check for adapter roms on 2k boundaries */
+ for (i = 0; i < ADAPTER_ROM_RESOURCES && start < upper; start += 2048) {
+ rom = isa_bus_to_virt(start);
+ if (!romsignature(rom))
+ continue;
+
+ /* 0 < length <= 0x7f * 512, historically */
+ length = rom[2] * 512;
+
+ /* but accept any length that fits if checksum okay */
+ if (!length || start + length > upper || !romchecksum(rom, length))
+ continue;
+
+ adapter_rom_resources[i].start = start;
+ adapter_rom_resources[i].end = start + length - 1;
+ request_resource(&iomem_resource, &adapter_rom_resources[i]);
+
+ start = adapter_rom_resources[i++].end & ~2047UL;
+ }
+}
static void __init limit_regions(unsigned long long size)
{
}
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
-unsigned char eddnr;
-struct edd_info edd[EDDMAXNR];
-unsigned int edd_disk80_sig;
+struct edd edd;
#ifdef CONFIG_EDD_MODULE
-EXPORT_SYMBOL(eddnr);
EXPORT_SYMBOL(edd);
-EXPORT_SYMBOL(edd_disk80_sig);
#endif
/**
* copy_edd() - Copy the BIOS EDD information
*/
static inline void copy_edd(void)
{
- eddnr = EDD_NR;
- memcpy(edd, EDD_BUF, sizeof(edd));
- edd_disk80_sig = DISK80_SIGNATURE;
+ memcpy(edd.mbr_signature, EDD_MBR_SIGNATURE, sizeof(edd.mbr_signature));
+ memcpy(edd.edd_info, EDD_BUF, sizeof(edd.edd_info));
+ edd.mbr_signature_nr = EDD_MBR_SIG_NR;
+ edd.edd_info_nr = EDD_NR;
}
#else
-#define copy_edd() do {} while (0)
+static inline void copy_edd(void)
+{
+}
#endif
/*
*/
#define LOWMEMSIZE() (0x9f000)
-static void __init setup_memory_region(void)
-{
- char *who = machine_specific_memory_setup();
- printk(KERN_INFO "BIOS-provided physical RAM map:\n");
- print_memory_map(who);
-} /* setup_memory_region */
-
+unsigned long crashdump_addr = 0xdeadbeef;
static void __init parse_cmdline_early (char ** cmdline_p)
{
if (c == ' ' && !memcmp(from, "highmem=", 8))
highmem_pages = memparse(from+8, &from) >> PAGE_SHIFT;
+ if (c == ' ' && !memcmp(from, "crashdump=", 10))
+ crashdump_addr = memparse(from+10, &from);
+
c = *(from++);
if (!c)
break;
static void __init register_memory(unsigned long max_low_pfn)
{
unsigned long low_mem_size;
+ int i;
if (efi_enabled)
efi_initialize_iomem_resources(&code_resource, &data_resource);
legacy_init_iomem_resources(&code_resource, &data_resource);
/* EFI systems may still have VGA */
- request_graphics_resource();
+ request_resource(&iomem_resource, &video_ram_resource);
/* request I/O space for devices used on all i[345]86 PCs */
- request_standard_io_resources();
+ for (i = 0; i < STANDARD_IO_RESOURCES; i++)
+ request_resource(&ioport_resource, &standard_io_resources[i]);
/* Tell the PCI layer not to allocate too close to the RAM area.. */
low_mem_size = ((max_low_pfn << PAGE_SHIFT) + 0xfffff) & ~0xfffff;
} noptypes[] = {
{ X86_FEATURE_K8, k8_nops },
{ X86_FEATURE_K7, k7_nops },
- { -1, 0 }
+ { -1, NULL }
};
/* Replace instructions with better alternatives for this CPU type.
__setup("noreplacement", noreplacement_setup);
+static char * __init machine_specific_memory_setup(void);
+
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+extern void crashdump_reserve(void);
+#endif
+
/*
* Determine if we were loaded by an EFI loader. If so, then we have also been
* passed the efi memmap, systab, etc., so we should use these data structures
ARCH_SETUP
if (efi_enabled)
efi_init();
- else
- setup_memory_region();
+ else {
+ printk(KERN_INFO "BIOS-provided physical RAM map:\n");
+ print_memory_map(machine_specific_memory_setup());
+ }
copy_edd();
#endif
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+ crashdump_reserve(); /* Preserve crash dump state from prev boot */
+#endif
+
dmi_scan_machine();
#ifdef CONFIG_X86_GENERICARCH
}
asmlinkage int
-sys_rt_sigsuspend(sigset_t __user *unewset, size_t sigsetsize)
+sys_rt_sigsuspend(struct pt_regs regs)
{
- struct pt_regs * regs = (struct pt_regs *) &unewset;
sigset_t saveset, newset;
/* XXX: Don't preclude handling different sized sigset_t's. */
- if (sigsetsize != sizeof(sigset_t))
+ if (regs.ecx != sizeof(sigset_t))
return -EINVAL;
- if (copy_from_user(&newset, unewset, sizeof(newset)))
+ if (copy_from_user(&newset, (sigset_t __user *)regs.ebx, sizeof(newset)))
return -EFAULT;
sigdelsetmask(&newset, ~_BLOCKABLE);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
- regs->eax = -EINTR;
+ regs.eax = -EINTR;
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
- if (do_signal(regs, &saveset))
- if (do_signal(&regs, &saveset))
return -EINTR;
}
}
}
asmlinkage int
-sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss)
+sys_sigaltstack(unsigned long ebx)
{
- struct pt_regs *regs = (struct pt_regs *) &uss;
+ /* This is needed to make gcc realize it doesn't own the "struct pt_regs" */
+ struct pt_regs *regs = (struct pt_regs *)&ebx;
+ const stack_t __user *uss = (const stack_t __user *)ebx;
+ stack_t __user *uoss = (stack_t __user *)regs->ecx;
+
return do_sigaltstack(uss, uoss, regs->esp);
}
*/
static int
-restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *peax)
+restore_sigcontext(struct pt_regs *regs,
+ struct sigcontext __user *__sc, int *peax)
{
- unsigned int err = 0;
+ struct sigcontext scratch; /* 88 bytes of scratch area */
/* Always make any pending restarted system calls return -EINTR */
current_thread_info()->restart_block.fn = do_no_restart_syscall;
-#define COPY(x) err |= __get_user(regs->x, &sc->x)
+ if (copy_from_user(&scratch, __sc, sizeof(scratch)))
+ return -EFAULT;
+
+#define COPY(x) regs->x = scratch.x
#define COPY_SEG(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
+ { unsigned short tmp = scratch.seg; \
regs->x##seg = tmp; }
#define COPY_SEG_STRICT(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
+ { unsigned short tmp = scratch.seg; \
regs->x##seg = tmp|3; }
#define GET_SEG(seg) \
- { unsigned short tmp; \
- err |= __get_user(tmp, &sc->seg); \
+ { unsigned short tmp = scratch.seg; \
loadsegment(seg,tmp); }
#define FIX_EFLAGS (X86_EFLAGS_AC | X86_EFLAGS_OF | X86_EFLAGS_DF | \
COPY_SEG_STRICT(ss);
{
- unsigned int tmpflags;
- err |= __get_user(tmpflags, &sc->eflags);
+ unsigned int tmpflags = scratch.eflags;
regs->eflags = (regs->eflags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
regs->orig_eax = -1; /* disable syscall checks */
}
{
- struct _fpstate __user * buf;
- err |= __get_user(buf, &sc->fpstate);
+ struct _fpstate * buf = scratch.fpstate;
if (buf) {
if (verify_area(VERIFY_READ, buf, sizeof(*buf)))
- goto badframe;
- err |= restore_i387(buf);
+ return -EFAULT;
+ if (restore_i387(buf))
+ return -EFAULT;
}
}
- err |= __get_user(*peax, &sc->eax);
- return err;
-
-badframe:
- return 1;
+ *peax = scratch.eax;
+ return 0;
}
asmlinkage int sys_sigreturn(unsigned long __unused)
*/
static int
-setup_sigcontext(struct sigcontext __user *sc, struct _fpstate __user *fpstate,
+setup_sigcontext(struct sigcontext __user *__sc, struct _fpstate __user *fpstate,
struct pt_regs *regs, unsigned long mask)
{
- int tmp, err = 0;
+ struct sigcontext sc; /* 88 bytes of scratch area */
+ int tmp;
tmp = 0;
__asm__("movl %%gs,%0" : "=r"(tmp): "0"(tmp));
- err |= __put_user(tmp, (unsigned int __user *)&sc->gs);
+ *(unsigned int *)&sc.gs = tmp;
__asm__("movl %%fs,%0" : "=r"(tmp): "0"(tmp));
- err |= __put_user(tmp, (unsigned int __user *)&sc->fs);
-
- err |= __put_user(regs->xes, (unsigned int __user *)&sc->es);
- err |= __put_user(regs->xds, (unsigned int __user *)&sc->ds);
- err |= __put_user(regs->edi, &sc->edi);
- err |= __put_user(regs->esi, &sc->esi);
- err |= __put_user(regs->ebp, &sc->ebp);
- err |= __put_user(regs->esp, &sc->esp);
- err |= __put_user(regs->ebx, &sc->ebx);
- err |= __put_user(regs->edx, &sc->edx);
- err |= __put_user(regs->ecx, &sc->ecx);
- err |= __put_user(regs->eax, &sc->eax);
- err |= __put_user(current->thread.trap_no, &sc->trapno);
- err |= __put_user(current->thread.error_code, &sc->err);
- err |= __put_user(regs->eip, &sc->eip);
- err |= __put_user(regs->xcs, (unsigned int __user *)&sc->cs);
- err |= __put_user(regs->eflags, &sc->eflags);
- err |= __put_user(regs->esp, &sc->esp_at_signal);
- err |= __put_user(regs->xss, (unsigned int __user *)&sc->ss);
+ *(unsigned int *)&sc.fs = tmp;
+ *(unsigned int *)&sc.es = regs->xes;
+ *(unsigned int *)&sc.ds = regs->xds;
+ sc.edi = regs->edi;
+ sc.esi = regs->esi;
+ sc.ebp = regs->ebp;
+ sc.esp = regs->esp;
+ sc.ebx = regs->ebx;
+ sc.edx = regs->edx;
+ sc.ecx = regs->ecx;
+ sc.eax = regs->eax;
+ sc.trapno = current->thread.trap_no;
+ sc.err = current->thread.error_code;
+ sc.eip = regs->eip;
+ *(unsigned int *)&sc.cs = regs->xcs;
+ sc.eflags = regs->eflags;
+ sc.esp_at_signal = regs->esp;
+ *(unsigned int *)&sc.ss = regs->xss;
tmp = save_i387(fpstate);
if (tmp < 0)
- err = 1;
- else
- err |= __put_user(tmp ? fpstate : NULL, &sc->fpstate);
+ return 1;
+ sc.fpstate = tmp ? fpstate : NULL;
/* non-iBCS2 extensions.. */
- err |= __put_user(mask, &sc->oldmask);
- err |= __put_user(current->thread.cr2, &sc->cr2);
+ sc.oldmask = mask;
+ sc.cr2 = current->thread.cr2;
- return err;
+ if (copy_to_user(__sc, &sc, sizeof(sc)))
+ return 1;
+ return 0;
}
/*
/* These symbols are defined with the addresses in the vsyscall page.
See vsyscall-sigreturn.S. */
-extern void __kernel_sigreturn, __kernel_rt_sigreturn;
+extern void __user __kernel_sigreturn;
+extern void __user __kernel_rt_sigreturn;
+extern SYSENTER_RETURN;
static void setup_frame(int sig, struct k_sigaction *ka,
sigset_t *set, struct pt_regs * regs)
{
- void *restorer;
+ void __user *restorer;
struct sigframe __user *frame;
int err = 0;
if (err)
goto give_sigsegv;
- restorer = &__kernel_sigreturn;
+ restorer = current->mm->context.vdso + (long)&__kernel_sigreturn;
if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer;
static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sigset_t *set, struct pt_regs * regs)
{
- void *restorer;
+ void __user *restorer;
struct rt_sigframe __user *frame;
int err = 0;
/* Create the ucontext. */
err |= __put_user(0, &frame->uc.uc_flags);
err |= __put_user(0, &frame->uc.uc_link);
- err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(current->sas_ss_sp, (unsigned long *)&frame->uc.uc_stack.ss_sp);
err |= __put_user(sas_ss_flags(regs->esp),
&frame->uc.uc_stack.ss_flags);
err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
goto give_sigsegv;
/* Set up to return from userspace. */
- restorer = &__kernel_rt_sigreturn;
+ restorer = current->mm->context.vdso + (long)&__kernel_rt_sigreturn;
if (ka->sa.sa_flags & SA_RESTORER)
restorer = ka->sa.sa_restorer;
+
err |= __put_user(restorer, &frame->pretcode);
/*
#include <linux/mc146818rtc.h>
#include <linux/cache.h>
#include <linux/interrupt.h>
+#include <linux/dump.h>
#include <asm/mtrr.h>
-#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <mach_ipi.h>
#include <mach_apic.h>
*/
cfg = __prepare_ICR(shortcut, vector);
+ if (vector == DUMP_VECTOR) {
+ /*
+ * Setup DUMP IPI to be delivered as an NMI
+ */
+ cfg = (cfg&~APIC_VECTOR_MASK)|APIC_DM_NMI;
+ }
+
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
*/
inline void send_IPI_mask_bitmask(cpumask_t cpumask, int vector)
{
- unsigned long mask = cpus_coerce(cpumask);
+ unsigned long mask = cpus_addr(cpumask)[0];
unsigned long cfg;
unsigned long flags;
* program the ICR
*/
cfg = __prepare_ICR(0, vector);
-
+
+ if (vector == DUMP_VECTOR) {
+ /*
+ * Setup DUMP IPI to be delivered as an NMI
+ */
+ cfg = (cfg&~APIC_VECTOR_MASK)|APIC_DM_NMI;
+ }
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
if (flush_mm == cpu_tlbstate[cpu].active_mm) {
if (cpu_tlbstate[cpu].state == TLBSTATE_OK) {
+#ifndef CONFIG_X86_SWITCH_PAGETABLES
if (flush_va == FLUSH_ALL)
local_flush_tlb();
else
__flush_tlb_one(flush_va);
+#endif
} else
leave_mm(cpu);
}
spin_unlock(&tlbstate_lock);
}
-void flush_tlb_current_task(void)
-{
- struct mm_struct *mm = current->mm;
- cpumask_t cpu_mask;
-
- preempt_disable();
- cpu_mask = mm->cpu_vm_mask;
- cpu_clear(smp_processor_id(), cpu_mask);
-
- local_flush_tlb();
- if (!cpus_empty(cpu_mask))
- flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
- preempt_enable();
-}
-
void flush_tlb_mm (struct mm_struct * mm)
{
cpumask_t cpu_mask;
if (current->active_mm == mm) {
if(current->mm)
- __flush_tlb_one(va);
+#ifndef CONFIG_X86_SWITCH_PAGETABLES
+ __flush_tlb_one(va)
+#endif
+ ;
else
leave_mm(smp_processor_id());
}
void flush_tlb_all(void)
{
- on_each_cpu(do_flush_tlb_all, 0, 1, 1);
+ on_each_cpu(do_flush_tlb_all, NULL, 1, 1);
+}
+
+void dump_send_ipi(void)
+{
+ send_IPI_allbutself(DUMP_VECTOR);
}
/*
* <func> The function to run. This must be fast and non-blocking.
* <info> An arbitrary pointer to pass to the function.
* <nonatomic> currently unused.
- * <wait> If true, wait (atomically) until function has completed on other CPUs.
+ * <wait> If 1, wait (atomically) until function has completed on other CPUs.
+ * If 0, wait for the IPI to be received by other CPUs, but do not wait
+ * for the completion of the function on each CPU.
+ * If -1, do not wait for other CPUs to receive IPI.
* [RETURNS] 0 on success, else a negative status code. Does not return until
* remote CPUs are nearly ready to execute <<func>> or are or have executed.
*
return 0;
/* Can deadlock when called with interrupts disabled */
- WARN_ON(irqs_disabled());
+ /* Only if we are waiting for other CPU to ack */
+ WARN_ON(irqs_disabled() && wait >= 0);
data.func = func;
data.info = info;
atomic_set(&data.started, 0);
- data.wait = wait;
- if (wait)
+ data.wait = wait > 0 ? wait : 0;
+ if (wait > 0)
atomic_set(&data.finished, 0);
spin_lock(&call_lock);
send_IPI_allbutself(CALL_FUNCTION_VECTOR);
/* Wait for response */
- while (atomic_read(&data.started) != cpus)
- barrier();
+ if (wait >= 0)
+ while (atomic_read(&data.started) != cpus)
+ barrier();
- if (wait)
+ if (wait > 0)
while (atomic_read(&data.finished) != cpus)
barrier();
spin_unlock(&call_lock);
return 0;
}
-static void stop_this_cpu (void * dummy)
+void stop_this_cpu (void * dummy)
{
/*
* Remove this CPU:
local_irq_enable();
}
+EXPORT_SYMBOL(smp_send_stop);
+
/*
* Reschedule call back. Nothing to do,
* all the work is done automatically when
atomic_inc(&call_data->finished);
}
}
-
#include <linux/delay.h>
#include <linux/mc146818rtc.h>
-#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/desc.h>
#include <asm/arch_hooks.h>
extern unsigned char trampoline_data [];
extern unsigned char trampoline_end [];
static unsigned char *trampoline_base;
+static int trampoline_exec;
/*
* Currently trivial. Write the real->protected mode
*/
if (__pa(trampoline_base) >= 0x9F000)
BUG();
+ /*
+ * Make the SMP trampoline executable:
+ */
+ trampoline_exec = set_kernel_exec((unsigned long)trampoline_base, 1);
}
/*
int j;
cpumask_t nodemask;
struct sched_group *node = &sched_group_nodes[i];
- cpus_and(nodemask, node_to_cpumask(i), cpu_possible_map);
+ cpumask_t node_cpumask = node_to_cpumask(i);
+
+ cpus_and(nodemask, node_cpumask, cpu_possible_map);
if (cpus_empty(nodemask))
continue;
for (i = 0; i < MAX_NUMNODES; i++) {
struct sched_group *cpu = &sched_group_nodes[i];
cpumask_t nodemask;
- cpus_and(nodemask, node_to_cpumask(i), cpu_possible_map);
+ cpumask_t node_cpumask = node_to_cpumask(i);
+
+ cpus_and(nodemask, node_cpumask, cpu_possible_map);
if (cpus_empty(nodemask))
continue;
setup_ioapic_dest();
#endif
zap_low_mappings();
+ /*
+ * Disable executability of the SMP trampoline:
+ */
+ set_kernel_exec((unsigned long)trampoline_base, trampoline_exec);
}
void __init smp_intr_init(void)
}
}
-static void __init initialize_physnode_map(void)
-{
- int i;
- unsigned long pfn;
- struct node_memory_chunk_s *nmcp;
-
- /* Run the list of memory chunks and fill in the phymap. */
- nmcp = node_memory_chunk;
- for (i = num_memory_chunks; --i >= 0; nmcp++) {
- for (pfn = nmcp->start_pfn; pfn <= nmcp->end_pfn;
- pfn += PAGES_PER_ELEMENT)
- {
- physnode_map[pfn / PAGES_PER_ELEMENT] = (int)nmcp->nid;
- }
- }
-}
-
/* Parse the ACPI Static Resource Affinity Table */
static int __init acpi20_parse_srat(struct acpi_table_srat *sratp)
{
for (i = 0; i < num_memory_chunks; i++)
node_memory_chunk[i].nid = pxm_to_nid_map[node_memory_chunk[i].pxm];
- initialize_physnode_map();
-
printk("pxm bitmap: ");
for (i = 0; i < sizeof(pxm_bitmap); i++) {
printk("%02X ", pxm_bitmap[i]);
#include <linux/mman.h>
#include <linux/file.h>
#include <linux/utsname.h>
+#include <linux/vs_cvirt.h>
#include <asm/uaccess.h>
#include <asm/ipc.h>
}
down_write(&current->mm->mmap_sem);
- error = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+ error = do_mmap_pgoff(current->mm, file, addr, len, prot, flags, pgoff);
up_write(&current->mm->mmap_sem);
if (file)
if (!name)
return -EFAULT;
down_read(&uts_sem);
- err=copy_to_user(name, &system_utsname, sizeof (*name));
+ err=copy_to_user(name, vx_new_utsname(), sizeof (*name));
up_read(&uts_sem);
return err?-EFAULT:0;
}
asmlinkage int sys_olduname(struct oldold_utsname __user * name)
{
int error;
+ struct new_utsname *ptr;
if (!name)
return -EFAULT;
down_read(&uts_sem);
- error = __copy_to_user(&name->sysname,&system_utsname.sysname,__OLD_UTS_LEN);
+ ptr = vx_new_utsname();
+ error = __copy_to_user(&name->sysname,ptr->sysname,__OLD_UTS_LEN);
error |= __put_user(0,name->sysname+__OLD_UTS_LEN);
- error |= __copy_to_user(&name->nodename,&system_utsname.nodename,__OLD_UTS_LEN);
+ error |= __copy_to_user(&name->nodename,ptr->nodename,__OLD_UTS_LEN);
error |= __put_user(0,name->nodename+__OLD_UTS_LEN);
- error |= __copy_to_user(&name->release,&system_utsname.release,__OLD_UTS_LEN);
+ error |= __copy_to_user(&name->release,ptr->release,__OLD_UTS_LEN);
error |= __put_user(0,name->release+__OLD_UTS_LEN);
- error |= __copy_to_user(&name->version,&system_utsname.version,__OLD_UTS_LEN);
+ error |= __copy_to_user(&name->version,ptr->version,__OLD_UTS_LEN);
error |= __put_user(0,name->version+__OLD_UTS_LEN);
- error |= __copy_to_user(&name->machine,&system_utsname.machine,__OLD_UTS_LEN);
+ error |= __copy_to_user(&name->machine,ptr->machine,__OLD_UTS_LEN);
error |= __put_user(0,name->machine+__OLD_UTS_LEN);
up_read(&uts_sem);
#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/elf.h>
+#include <linux/mman.h>
#include <asm/cpufeature.h>
#include <asm/msr.h>
#include <asm/pgtable.h>
#include <asm/unistd.h>
+#include <linux/highmem.h>
extern asmlinkage void sysenter_entry(void);
void enable_sep_cpu(void *info)
{
int cpu = get_cpu();
+#ifdef CONFIG_X86_HIGH_ENTRY
+ struct tss_struct *tss = (struct tss_struct *) __fix_to_virt(FIX_TSS_0) + cpu;
+#else
struct tss_struct *tss = init_tss + cpu;
+#endif
tss->ss1 = __KERNEL_CS;
tss->esp1 = sizeof(struct tss_struct) + (unsigned long) tss;
extern const char vsyscall_int80_start, vsyscall_int80_end;
extern const char vsyscall_sysenter_start, vsyscall_sysenter_end;
+struct page *sysenter_page;
+
static int __init sysenter_setup(void)
{
unsigned long page = get_zeroed_page(GFP_ATOMIC);
- __set_fixmap(FIX_VSYSCALL, __pa(page), PAGE_READONLY);
+ __set_fixmap(FIX_VSYSCALL, __pa(page), PAGE_KERNEL_RO);
+ sysenter_page = virt_to_page(page);
if (!boot_cpu_has(X86_FEATURE_SEP)) {
memcpy((void *) page,
&vsyscall_sysenter_end - &vsyscall_sysenter_start);
on_each_cpu(enable_sep_cpu, NULL, 1, 1);
+
return 0;
}
__initcall(sysenter_setup);
+
+extern void SYSENTER_RETURN_OFFSET;
+
+unsigned int vdso_enabled = 1;
+
+void map_vsyscall(void)
+{
+ struct thread_info *ti = current_thread_info();
+ struct vm_area_struct *vma;
+ unsigned long addr;
+
+ if (unlikely(!vdso_enabled)) {
+ current->mm->context.vdso = NULL;
+ return;
+ }
+
+ /*
+ * Map the vDSO (it will be randomized):
+ */
+ down_write(&current->mm->mmap_sem);
+ addr = do_mmap(NULL, 0, 4096, PROT_READ | PROT_EXEC, MAP_PRIVATE, 0);
+ current->mm->context.vdso = (void *)addr;
+ ti->sysenter_return = (void *)addr + (long)&SYSENTER_RETURN_OFFSET;
+ if (addr != -1) {
+ vma = find_vma(current->mm, addr);
+ if (vma) {
+ pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW;
+ get_page(sysenter_page);
+ install_page(current->mm, vma, addr,
+ sysenter_page, vma->vm_page_prot);
+
+ }
+ }
+ up_write(&current->mm->mmap_sem);
+}
+
+static int __init vdso_setup(char *str)
+{
+ vdso_enabled = simple_strtoul(str, NULL, 0);
+ return 1;
+}
+__setup("vdso=", vdso_setup);
+
#include <linux/config.h>
#include <asm/hpet.h>
+#include <linux/hpet.h>
unsigned long hpet_period; /* fsecs / HPET clock */
unsigned long hpet_tick; /* hpet clks count per tick */
hpet_writel(cfg, HPET_CFG);
use_hpet = 1;
+
+#ifdef CONFIG_HPET
+ {
+ struct hpet_data hd;
+ unsigned int ntimer;
+
+ memset(&hd, 0, sizeof (hd));
+
+ ntimer = hpet_readl(HPET_ID);
+ ntimer = (ntimer & HPET_ID_NUMBER) >> HPET_ID_NUMBER_SHIFT;
+ ntimer++;
+
+ /*
+ * Register with driver.
+ * Timer0 and Timer1 are used by the platform.
+ */
+ hd.hd_address = hpet_virt_address;
+ hd.hd_nirqs = ntimer;
+ hd.hd_flags = HPET_DATA_PLATFORM;
+ hpet_reserve_timer(&hd, 0);
+#ifdef CONFIG_HPET_EMULATE_RTC
+ hpet_reserve_timer(&hd, 1);
+#endif
+ hd.hd_irq[0] = HPET_LEGACY_8254;
+ hd.hd_irq[1] = HPET_LEGACY_RTC;
+ if (ntimer > 2) {
+ struct hpet *hpet;
+ struct hpet_timer *timer;
+ int i;
+
+ hpet = (struct hpet *) hpet_virt_address;
+
+ for (i = 2, timer = &hpet->hpet_timers[2]; i < ntimer;
+ timer++, i++)
+ hd.hd_irq[i] = (timer->hpet_config &
+ Tn_INT_ROUTE_CNF_MASK) >>
+ Tn_INT_ROUTE_CNF_SHIFT;
+
+ }
+
+ hpet_alloc(&hd);
+ }
+#endif
+
#ifdef CONFIG_X86_LOCAL_APIC
wait_timer_tick = wait_hpet_tick;
#endif
#ifdef CONFIG_HPET_TIMER
&timer_hpet,
#endif
+ &timer_tsc,
#ifdef CONFIG_X86_PM_TIMER
&timer_pmtmr,
#endif
- &timer_tsc,
&timer_pit,
NULL,
};
:"0" (loops));
}
-/* tsc timer_opts struct */
+/* none timer_opts struct */
struct timer_opts timer_none = {
.name = "none",
.init = init_none,
#include <asm/io.h>
#include <asm/arch_hooks.h>
+#include <linux/timex.h>
+#include "mach_timer.h"
+
+/* Number of PMTMR ticks expected during calibration run */
+#define PMTMR_TICKS_PER_SEC 3579545
+#define PMTMR_EXPECTED_RATE \
+ ((CALIBRATE_LATCH * (PMTMR_TICKS_PER_SEC >> 10)) / (CLOCK_TICK_RATE>>10))
+
/* The I/O port the PMTMR resides at.
* The location is detected during setup_arch(),
return v2 & ACPI_PM_MASK;
}
+
+/*
+ * Some boards have the PMTMR running way too fast. We check
+ * the PMTMR rate against PIT channel 2 to catch these cases.
+ */
+static int verify_pmtmr_rate(void)
+{
+ u32 value1, value2;
+ unsigned long count, delta;
+
+ mach_prepare_counter();
+ value1 = read_pmtmr();
+ mach_countup(&count);
+ value2 = read_pmtmr();
+ delta = (value2 - value1) & ACPI_PM_MASK;
+
+ /* Check that the PMTMR delta is within 5% of what we expect */
+ if (delta < (PMTMR_EXPECTED_RATE * 19) / 20 ||
+ delta > (PMTMR_EXPECTED_RATE * 21) / 20) {
+ printk(KERN_INFO "PM-Timer running at invalid rate: %lu%% of normal - aborting.\n", 100UL * delta / PMTMR_EXPECTED_RATE);
+ return -1;
+ }
+
+ return 0;
+}
+
+
static int init_pmtmr(char* override)
{
u32 value1, value2;
return -ENODEV;
pm_good:
+ if (verify_pmtmr_rate() != 0)
+ return -ENODEV;
+
init_cpu_khz();
return 0;
}
/*
* This code largely moved from arch/i386/kernel/time.c.
* See comments there for proper credits.
+ *
+ * 2004-06-25 Jesper Juhl
+ * moved mark_offset_tsc below cpufreq_delayed_get to avoid gcc 3.4
+ * failing to inline.
*/
#include <linux/spinlock.h>
return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;
}
-
static int count2; /* counter for mark_offset_tsc() */
/* Cached *multiplier* to convert TSC counts to microseconds.
return cycles_2_ns(this_offset);
}
-
-static void mark_offset_tsc(void)
-{
- unsigned long lost,delay;
- unsigned long delta = last_tsc_low;
- int count;
- int countmp;
- static int count1 = 0;
- unsigned long long this_offset, last_offset;
- static int lost_count = 0;
-
- write_seqlock(&monotonic_lock);
- last_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
- /*
- * It is important that these two operations happen almost at
- * the same time. We do the RDTSC stuff first, since it's
- * faster. To avoid any inconsistencies, we need interrupts
- * disabled locally.
- */
-
- /*
- * Interrupts are just disabled locally since the timer irq
- * has the SA_INTERRUPT flag set. -arca
- */
-
- /* read Pentium cycle counter */
-
- rdtsc(last_tsc_low, last_tsc_high);
-
- spin_lock(&i8253_lock);
- outb_p(0x00, PIT_MODE); /* latch the count ASAP */
-
- count = inb_p(PIT_CH0); /* read the latched count */
- count |= inb(PIT_CH0) << 8;
-
- /*
- * VIA686a test code... reset the latch if count > max + 1
- * from timer_pit.c - cjb
- */
- if (count > LATCH) {
- outb_p(0x34, PIT_MODE);
- outb_p(LATCH & 0xff, PIT_CH0);
- outb(LATCH >> 8, PIT_CH0);
- count = LATCH - 1;
- }
-
- spin_unlock(&i8253_lock);
-
- if (pit_latch_buggy) {
- /* get center value of last 3 time lutch */
- if ((count2 >= count && count >= count1)
- || (count1 >= count && count >= count2)) {
- count2 = count1; count1 = count;
- } else if ((count1 >= count2 && count2 >= count)
- || (count >= count2 && count2 >= count1)) {
- countmp = count;count = count2;
- count2 = count1;count1 = countmp;
- } else {
- count2 = count1; count1 = count; count = count1;
- }
- }
-
- /* lost tick compensation */
- delta = last_tsc_low - delta;
- {
- register unsigned long eax, edx;
- eax = delta;
- __asm__("mull %2"
- :"=a" (eax), "=d" (edx)
- :"rm" (fast_gettimeoffset_quotient),
- "0" (eax));
- delta = edx;
- }
- delta += delay_at_last_interrupt;
- lost = delta/(1000000/HZ);
- delay = delta%(1000000/HZ);
- if (lost >= 2) {
- jiffies_64 += lost-1;
-
- /* sanity check to ensure we're not always losing ticks */
- if (lost_count++ > 100) {
- printk(KERN_WARNING "Losing too many ticks!\n");
- printk(KERN_WARNING "TSC cannot be used as a timesource. \n");
- printk(KERN_WARNING "Possible reasons for this are:\n");
- printk(KERN_WARNING " You're running with Speedstep,\n");
- printk(KERN_WARNING " You don't have DMA enabled for your hard disk (see hdparm),\n");
- printk(KERN_WARNING " Incorrect TSC synchronization on an SMP system (see dmesg).\n");
- printk(KERN_WARNING "Falling back to a sane timesource now.\n");
-
- clock_fallback();
- }
- /* ... but give the TSC a fair chance */
- if (lost_count > 25)
- cpufreq_delayed_get();
- } else
- lost_count = 0;
- /* update the monotonic base value */
- this_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
- monotonic_base += cycles_2_ns(this_offset - last_offset);
- write_sequnlock(&monotonic_lock);
-
- /* calculate delay_at_last_interrupt */
- count = ((LATCH-1) - count) * TICK_SIZE;
- delay_at_last_interrupt = (count + LATCH/2) / LATCH;
-
- /* catch corner case where tick rollover occured
- * between tsc and pit reads (as noted when
- * usec delta is > 90% # of usecs/tick)
- */
- if (lost && abs(delay - delay_at_last_interrupt) > (900000/HZ))
- jiffies_64++;
-}
-
static void delay_tsc(unsigned long loops)
{
unsigned long bclock, now;
{
int ret;
INIT_WORK(&cpufreq_delayed_get_work, handle_cpufreq_delayed_get, NULL);
- ret = cpufreq_register_notifier(&time_cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ ret = cpufreq_register_notifier(&time_cpufreq_notifier_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
if (!ret)
cpufreq_init = 1;
return ret;
static inline void cpufreq_delayed_get(void) { return; }
#endif
+static void mark_offset_tsc(void)
+{
+ unsigned long lost,delay;
+ unsigned long delta = last_tsc_low;
+ int count;
+ int countmp;
+ static int count1 = 0;
+ unsigned long long this_offset, last_offset;
+ static int lost_count = 0;
+
+ write_seqlock(&monotonic_lock);
+ last_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
+ /*
+ * It is important that these two operations happen almost at
+ * the same time. We do the RDTSC stuff first, since it's
+ * faster. To avoid any inconsistencies, we need interrupts
+ * disabled locally.
+ */
+
+ /*
+ * Interrupts are just disabled locally since the timer irq
+ * has the SA_INTERRUPT flag set. -arca
+ */
+
+ /* read Pentium cycle counter */
+
+ rdtsc(last_tsc_low, last_tsc_high);
+
+ spin_lock(&i8253_lock);
+ outb_p(0x00, PIT_MODE); /* latch the count ASAP */
+
+ count = inb_p(PIT_CH0); /* read the latched count */
+ count |= inb(PIT_CH0) << 8;
+
+ /*
+ * VIA686a test code... reset the latch if count > max + 1
+ * from timer_pit.c - cjb
+ */
+ if (count > LATCH) {
+ outb_p(0x34, PIT_MODE);
+ outb_p(LATCH & 0xff, PIT_CH0);
+ outb(LATCH >> 8, PIT_CH0);
+ count = LATCH - 1;
+ }
+
+ spin_unlock(&i8253_lock);
+
+ if (pit_latch_buggy) {
+ /* get center value of the last 3 timer latch reads */
+ if ((count2 >= count && count >= count1)
+ || (count1 >= count && count >= count2)) {
+ count2 = count1; count1 = count;
+ } else if ((count1 >= count2 && count2 >= count)
+ || (count >= count2 && count2 >= count1)) {
+ countmp = count;count = count2;
+ count2 = count1;count1 = countmp;
+ } else {
+ count2 = count1; count1 = count; count = count1;
+ }
+ }
+
+ /* lost tick compensation */
+ delta = last_tsc_low - delta;
+ {
+ register unsigned long eax, edx;
+ eax = delta;
+ __asm__("mull %2"
+ :"=a" (eax), "=d" (edx)
+ :"rm" (fast_gettimeoffset_quotient),
+ "0" (eax));
+ delta = edx;
+ }
+ delta += delay_at_last_interrupt;
+ lost = delta/(1000000/HZ);
+ delay = delta%(1000000/HZ);
+ if (lost >= 2) {
+ jiffies_64 += lost-1;
+
+ /* sanity check to ensure we're not always losing ticks */
+ if (lost_count++ > 100) {
+ printk(KERN_WARNING "Losing too many ticks!\n");
+ printk(KERN_WARNING "TSC cannot be used as a timesource.\n");
+ printk(KERN_WARNING "Possible reasons for this are:\n");
+ printk(KERN_WARNING " You're running with Speedstep,\n");
+ printk(KERN_WARNING " You don't have DMA enabled for your hard disk (see hdparm),\n");
+ printk(KERN_WARNING " Incorrect TSC synchronization on an SMP system (see dmesg).\n");
+ printk(KERN_WARNING "Falling back to a sane timesource now.\n");
+
+ clock_fallback();
+ }
+ /* ... but give the TSC a fair chance */
+ if (lost_count > 25)
+ cpufreq_delayed_get();
+ } else
+ lost_count = 0;
+ /* update the monotonic base value */
+ this_offset = ((unsigned long long)last_tsc_high<<32)|last_tsc_low;
+ monotonic_base += cycles_2_ns(this_offset - last_offset);
+ write_sequnlock(&monotonic_lock);
+
+ /* calculate delay_at_last_interrupt */
+ count = ((LATCH-1) - count) * TICK_SIZE;
+ delay_at_last_interrupt = (count + LATCH/2) / LATCH;
+
+ /* catch corner case where tick rollover occurred
+ * between tsc and pit reads (as noted when
+ * usec delta is > 90% # of usecs/tick)
+ */
+ if (lost && abs(delay - delay_at_last_interrupt) > (900000/HZ))
+ jiffies_64++;
+}
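The lost-tick compensation above scales a TSC cycle delta by `fast_gettimeoffset_quotient` (a 0.32 fixed-point microseconds-per-cycle value) with a single `mull`, then splits the elapsed microseconds into whole lost ticks and a leftover delay. A minimal user-space sketch of that arithmetic (the helper names, HZ value, and the 2-cycles-per-microsecond test quotient are illustrative assumptions):

```c
#include <stdint.h>

#define HZ 1000		/* assumed tick rate for this sketch */

/* Scale a cycle delta to microseconds: (delta * quotient) >> 32,
 * the same multiply the inline "mull" performs, where quotient is
 * usecs-per-cycle in 0.32 fixed point. */
static uint32_t cycles_to_usecs(uint32_t delta, uint32_t quotient)
{
	return (uint32_t)(((uint64_t)delta * quotient) >> 32);
}

/* Split elapsed microseconds into whole ticks and remainder delay,
 * as done for "lost" and "delay" above. */
static void split_ticks(uint32_t usecs, uint32_t *lost, uint32_t *delay)
{
	*lost  = usecs / (1000000 / HZ);
	*delay = usecs % (1000000 / HZ);
}
```

With a quotient of 0x80000000 (0.5 usec per cycle), 10 cycles scale to exactly 5 microseconds, and 2500 elapsed microseconds split into 2 lost ticks plus a 500 usec delay at HZ=1000.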
static int __init init_tsc(char* override)
{
#include <linux/kallsyms.h>
#include <linux/ptrace.h>
#include <linux/version.h>
+#include <linux/dump.h>
#ifdef CONFIG_EISA
#include <linux/ioport.h>
#include <asm/nmi.h>
#include <asm/smp.h>
-#include <asm/pgalloc.h>
#include <asm/arch_hooks.h>
#include <linux/irq.h>
#include "mach_traps.h"
-asmlinkage int system_call(void);
-asmlinkage void lcall7(void);
-asmlinkage void lcall27(void);
-
-struct desc_struct default_ldt[] = { { 0, 0 }, { 0, 0 }, { 0, 0 },
- { 0, 0 }, { 0, 0 } };
+struct desc_struct default_ldt[] __attribute__((__section__(".data.default_ldt"))) = { { 0, 0 }, { 0, 0 }, { 0, 0 }, { 0, 0 }, { 0, 0 } };
+struct page *default_ldt_page;
/* Do we ignore FPU interrupts ? */
char ignore_fpu_irq = 0;
}
#ifdef CONFIG_FRAME_POINTER
-void print_context_stack(struct task_struct *task, unsigned long *stack,
+static void print_context_stack(struct task_struct *task, unsigned long *stack,
unsigned long ebp)
{
unsigned long addr;
}
}
#else
-void print_context_stack(struct task_struct *task, unsigned long *stack,
+static void print_context_stack(struct task_struct *task, unsigned long *stack,
unsigned long ebp)
{
unsigned long addr;
while (!kstack_end(stack)) {
addr = *stack++;
- if (kernel_text_address(addr)) {
- printk(" [<%08lx>] ", addr);
- print_symbol("%s\n", addr);
+ if (__kernel_text_address(addr)) {
+ printk(" [<%08lx>]", addr);
+ print_symbol(" %s", addr);
+ printk("\n");
}
}
}
break;
printk(" =======================\n");
}
- printk("\n");
}
void show_stack(struct task_struct *task, unsigned long *esp)
for(i=0;i<20;i++)
{
- unsigned char c;
- if(__get_user(c, &((unsigned char*)regs->eip)[i])) {
+ unsigned char c = 0;
+ if ((user_mode(regs) && get_user(c, &((unsigned char*)regs->eip)[i])) ||
+ (!user_mode(regs) && __direct_get_user(c, &((unsigned char*)regs->eip)[i]))) {
+
bad:
printk(" Bad EIP value.");
break;
eip = regs->eip;
- if (eip < PAGE_OFFSET)
- goto no_bug;
- if (__get_user(ud2, (unsigned short *)eip))
+ if (__direct_get_user(ud2, (unsigned short *)eip))
goto no_bug;
if (ud2 != 0x0b0f)
goto no_bug;
- if (__get_user(line, (unsigned short *)(eip + 2)))
+ if (__direct_get_user(line, (unsigned short *)(eip + 2)))
goto bug;
- if (__get_user(file, (char **)(eip + 4)) ||
- (unsigned long)file < PAGE_OFFSET || __get_user(c, file))
+ if (__direct_get_user(file, (char **)(eip + 4)) ||
+ __direct_get_user(c, file))
file = "<bad filename>";
printk("------------[ cut here ]------------\n");
}
spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+static int die_owner = -1;
void die(const char * str, struct pt_regs * regs, long err)
{
int nl = 0;
console_verbose();
- spin_lock_irq(&die_lock);
+ local_irq_disable();
+ if (!spin_trylock(&die_lock)) {
+ if (smp_processor_id() != die_owner)
+ spin_lock(&die_lock);
+ /* allow recursive die to fall through */
+ }
+ die_owner = smp_processor_id();
bust_spinlocks(1);
handle_BUG(regs);
printk(KERN_ALERT "%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
if (nl)
printk("\n");
show_registers(regs);
+ if (netdump_func)
+ netdump_func(regs);
+ dump((char *)str, regs);
bust_spinlocks(0);
+ die_owner = -1;
spin_unlock_irq(&die_lock);
if (in_interrupt())
panic("Fatal exception in interrupt");
if (panic_on_oops) {
+ if (netdump_func)
+ netdump_func = NULL;
printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(5 * HZ);
info.si_signo = signr; \
info.si_errno = 0; \
info.si_code = sicode; \
- info.si_addr = (void *)siaddr; \
+ info.si_addr = (void __user *)siaddr; \
do_trap(trapnr, signr, str, 0, regs, error_code, &info); \
}
info.si_signo = signr; \
info.si_errno = 0; \
info.si_code = sicode; \
- info.si_addr = (void *)siaddr; \
+ info.si_addr = (void __user *)siaddr; \
do_trap(trapnr, signr, str, 1, regs, error_code, &info); \
}
DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, get_cr2())
+/*
+ * the original non-exec stack patch was written by
+ * Solar Designer <solar at openwall.com>. Thanks!
+ */
asmlinkage void do_general_protection(struct pt_regs * regs, long error_code)
{
if (regs->eflags & X86_EFLAGS_IF)
if (!(regs->xcs & 3))
goto gp_in_kernel;
+ /*
+ * lazy-check for CS validity on exec-shield binaries:
+ */
+ if (current->mm) {
+ int cpu = smp_processor_id();
+ struct desc_struct *desc1, *desc2;
+ struct vm_area_struct *vma;
+ unsigned long limit = 0;
+
+ spin_lock(&current->mm->page_table_lock);
+ for (vma = current->mm->mmap; vma; vma = vma->vm_next)
+ if ((vma->vm_flags & VM_EXEC) && (vma->vm_end > limit))
+ limit = vma->vm_end;
+ spin_unlock(&current->mm->page_table_lock);
+
+ current->mm->context.exec_limit = limit;
+ set_user_cs(&current->mm->context.user_cs, limit);
+
+ desc1 = &current->mm->context.user_cs;
+ desc2 = cpu_gdt_table[cpu] + GDT_ENTRY_DEFAULT_USER_CS;
+
+ /*
+ * The CS was not in sync - reload it and retry the
+ * instruction. If the instruction still faults then
+ * we won't hit this branch next time around.
+ */
+ if (desc1->a != desc2->a || desc1->b != desc2->b) {
+ if (print_fatal_signals >= 2) {
+ printk("#GPF fixup (%ld[seg:%lx]) at %08lx, CPU#%d.\n", error_code, error_code/8, regs->eip, smp_processor_id());
+ printk(" exec_limit: %08lx, user_cs: %08lx/%08lx, CPU_cs: %08lx/%08lx.\n", current->mm->context.exec_limit, desc1->a, desc1->b, desc2->a, desc2->b);
+ }
+ load_user_cs_desc(cpu, current->mm);
+ return;
+ }
+ }
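The exec-shield lazy check above walks the mmap list to find the highest end address of any executable mapping and uses it as the CS segment limit. A simplified sketch of that scan, with a stand-in `struct vma` instead of the kernel's `struct vm_area_struct` (names and the VM_EXEC value here are illustrative):

```c
#include <stddef.h>

#define VM_EXEC 0x4	/* stand-in for the kernel flag */

/* Simplified VMA: end address, protection flags, singly linked list. */
struct vma {
	unsigned long vm_end;
	unsigned long vm_flags;
	struct vma *vm_next;
};

/* Walk the list and record the highest end address of any
 * executable mapping -- the value stored in context.exec_limit. */
static unsigned long exec_limit(struct vma *mmap)
{
	unsigned long limit = 0;
	struct vma *v;

	for (v = mmap; v; v = v->vm_next)
		if ((v->vm_flags & VM_EXEC) && v->vm_end > limit)
			limit = v->vm_end;
	return limit;
}
```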
+ if (print_fatal_signals) {
+ printk("#GPF(%ld[seg:%lx]) at %08lx, CPU#%d.\n", error_code, error_code/8, regs->eip, smp_processor_id());
+ printk(" exec_limit: %08lx, user_cs: %08lx/%08lx.\n", current->mm->context.exec_limit, current->mm->context.user_cs.a, current->mm->context.user_cs.b);
+ }
+
current->thread.error_code = error_code;
current->thread.trap_no = 13;
force_sig(SIGSEGV, current);
if (regs->eflags & X86_EFLAGS_IF)
local_irq_enable();
- /* Mask out spurious debug traps due to lazy DR7 setting */
+ /*
+ * Mask out spurious debug traps due to lazy DR7 setting or
+ * due to 4G/4G kernel mode:
+ */
if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
if (!tsk->thread.debugreg[7])
goto clear_dr7;
+ if (!user_mode(regs)) {
+ /* restore upon return-to-userspace: */
+ set_thread_flag(TIF_DB7);
+ goto clear_dr7;
+ }
}
if (regs->eflags & VM_MASK)
/* If this is a kernel mode trap, save the user PC on entry to
* the kernel, that's what the debugger can make sense of.
*/
- info.si_addr = ((regs->xcs & 3) == 0) ? (void *)tsk->thread.eip :
- (void *)regs->eip;
+ info.si_addr = ((regs->xcs & 3) == 0) ? (void __user *)tsk->thread.eip
+ : (void __user *)regs->eip;
force_sig_info(SIGTRAP, &info, tsk);
/* Disable additional traps. They'll be re-enabled when
* the correct behaviour even in the presence of the asynchronous
* IRQ13 behaviour
*/
-void math_error(void *eip)
+void math_error(void __user *eip)
{
struct task_struct * task;
siginfo_t info;
asmlinkage void do_coprocessor_error(struct pt_regs * regs, long error_code)
{
ignore_fpu_irq = 1;
- math_error((void *)regs->eip);
+ math_error((void __user *)regs->eip);
}
-void simd_math_error(void *eip)
+void simd_math_error(void __user *eip)
{
struct task_struct * task;
siginfo_t info;
if (cpu_has_xmm) {
/* Handle SIMD FPU exceptions on PIII+ processors. */
ignore_fpu_irq = 1;
- simd_math_error((void *)regs->eip);
+ simd_math_error((void __user *)regs->eip);
} else {
/*
* Handle strange cache flush from user space exception
#endif /* CONFIG_MATH_EMULATION */
-#ifdef CONFIG_X86_F00F_BUG
-void __init trap_init_f00f_bug(void)
+void __init trap_init_virtual_IDT(void)
{
- __set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO);
-
/*
- * Update the IDT descriptor and reload the IDT so that
- * it uses the read-only mapped virtual address.
+	 * Map the IDT read-only at a fixmap virtual address and
+	 * point idt_descr at that mapping, so the lidt below
+	 * reloads the IDT from its read-only alias.
*/
- idt_descr.address = fix_to_virt(FIX_F00F_IDT);
+ __set_fixmap(FIX_IDT, __pa(&idt_table), PAGE_KERNEL_RO);
+ idt_descr.address = __fix_to_virt(FIX_IDT);
+
__asm__ __volatile__("lidt %0" : : "m" (idt_descr));
}
+
+void __init trap_init_virtual_GDT(void)
+{
+ int cpu = smp_processor_id();
+ struct Xgt_desc_struct *gdt_desc = cpu_gdt_descr + cpu;
+ struct Xgt_desc_struct tmp_desc = {0, 0};
+ struct tss_struct * t;
+
+ __asm__ __volatile__("sgdt %0": "=m" (tmp_desc): :"memory");
+
+#ifdef CONFIG_X86_HIGH_ENTRY
+ if (!cpu) {
+ __set_fixmap(FIX_GDT_0, __pa(cpu_gdt_table), PAGE_KERNEL);
+ __set_fixmap(FIX_GDT_1, __pa(cpu_gdt_table) + PAGE_SIZE, PAGE_KERNEL);
+ __set_fixmap(FIX_TSS_0, __pa(init_tss), PAGE_KERNEL);
+ __set_fixmap(FIX_TSS_1, __pa(init_tss) + 1*PAGE_SIZE, PAGE_KERNEL);
+ __set_fixmap(FIX_TSS_2, __pa(init_tss) + 2*PAGE_SIZE, PAGE_KERNEL);
+ __set_fixmap(FIX_TSS_3, __pa(init_tss) + 3*PAGE_SIZE, PAGE_KERNEL);
+ }
+
+ gdt_desc->address = __fix_to_virt(FIX_GDT_0) + sizeof(cpu_gdt_table[0]) * cpu;
+#else
+ gdt_desc->address = (unsigned long)cpu_gdt_table[cpu];
+#endif
+ __asm__ __volatile__("lgdt %0": "=m" (*gdt_desc));
+
+#ifdef CONFIG_X86_HIGH_ENTRY
+ t = (struct tss_struct *) __fix_to_virt(FIX_TSS_0) + cpu;
+#else
+ t = init_tss + cpu;
#endif
+ set_tss_desc(cpu, t);
+ cpu_gdt_table[cpu][GDT_ENTRY_TSS].b &= 0xfffffdff;
+ load_TR_desc();
+}
#define _set_gate(gate_addr,type,dpl,addr,seg) \
do { \
_set_gate(idt_table+n,14,0,addr,__KERNEL_CS);
}
-static void __init set_trap_gate(unsigned int n, void *addr)
+void __init set_trap_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n,15,0,addr,__KERNEL_CS);
}
-static void __init set_system_gate(unsigned int n, void *addr)
+void __init set_system_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n,15,3,addr,__KERNEL_CS);
}
-static void __init set_call_gate(void *a, void *addr)
+void __init set_call_gate(void *a, void *addr)
{
_set_gate(a,12,3,addr,__KERNEL_CS);
}
#ifdef CONFIG_X86_LOCAL_APIC
init_apic_mappings();
#endif
+ init_entry_mappings();
set_trap_gate(0,&divide_error);
set_intr_gate(1,&debug);
* default LDT is a single-entry callgate to lcall7 for iBCS
* and a callgate to lcall27 for Solaris/x86 binaries
*/
+#if 0
set_call_gate(&default_ldt[0],lcall7);
set_call_gate(&default_ldt[4],lcall27);
-
+#endif
/*
* Should be a barrier for any external CPU state.
*/
#include <linux/ptrace.h>
#include <asm/uaccess.h>
-#include <asm/pgalloc.h>
#include <asm/io.h>
#include <asm/tlbflush.h>
#include <asm/irq.h>
tss = init_tss + get_cpu();
current->thread.esp0 = current->thread.saved_esp0;
current->thread.sysenter_cs = __KERNEL_CS;
- load_esp0(tss, &current->thread);
+ load_virtual_esp0(tss, current);
current->thread.saved_esp0 = 0;
put_cpu();
static int do_vm86_irq_handling(int subfunction, int irqnumber);
static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk);
-asmlinkage int sys_vm86old(struct vm86_struct __user * v86)
+asmlinkage int sys_vm86old(struct pt_regs regs)
{
+ struct vm86_struct __user *v86 = (struct vm86_struct __user *)regs.ebx;
struct kernel_vm86_struct info; /* declare this _on top_,
* this avoids wasting of stack space.
* This remains on the stack until we
if (tmp)
goto out;
memset(&info.vm86plus, 0, (int)&info.regs32 - (int)&info.vm86plus);
- info.regs32 = (struct pt_regs *) &v86;
+ info.regs32 = &regs;
tsk->thread.vm86_info = v86;
do_sys_vm86(&info, tsk);
ret = 0; /* we never return here */
}
-asmlinkage int sys_vm86(unsigned long subfunction, struct vm86plus_struct __user * v86)
+asmlinkage int sys_vm86(struct pt_regs regs)
{
struct kernel_vm86_struct info; /* declare this _on top_,
* this avoids wasting of stack space.
*/
struct task_struct *tsk;
int tmp, ret;
+ struct vm86plus_struct __user *v86;
tsk = current;
- switch (subfunction) {
+ switch (regs.ebx) {
case VM86_REQUEST_IRQ:
case VM86_FREE_IRQ:
case VM86_GET_IRQ_BITS:
case VM86_GET_AND_RESET_IRQ:
- ret = do_vm86_irq_handling(subfunction,(int)v86);
+ ret = do_vm86_irq_handling(regs.ebx, (int)regs.ecx);
goto out;
case VM86_PLUS_INSTALL_CHECK:
/* NOTE: on old vm86 stuff this will return the error
ret = -EPERM;
if (tsk->thread.saved_esp0)
goto out;
+ v86 = (struct vm86plus_struct __user *)regs.ecx;
tmp = copy_from_user(&info, v86, VM86_REGS_SIZE1);
tmp += copy_from_user(&info.regs.VM86_REGS_PART2, &v86->regs.VM86_REGS_PART2,
(long)&info.regs32 - (long)&info.regs.VM86_REGS_PART2);
ret = -EFAULT;
if (tmp)
goto out;
- info.regs32 = (struct pt_regs *) &subfunction;
+ info.regs32 = &regs;
info.vm86plus.is_vm86pus = 1;
tsk->thread.vm86_info = (struct vm86_struct __user *)v86;
do_sys_vm86(&info, tsk);
tsk->thread.esp0 = (unsigned long) &info->VM86_TSS_ESP0;
if (cpu_has_sep)
tsk->thread.sysenter_cs = 0;
- load_esp0(tss, &tsk->thread);
+ load_virtual_esp0(tss, tsk);
put_cpu();
tsk->thread.screen_bitmap = info->screen_bitmap;
if (VEFLAGS & VIF_MASK)
flags |= IF_MASK;
+ flags |= IOPL_MASK;
return flags | (VEFLAGS & current->thread.v86mask);
}
* in userspace is always better than an Oops anyway.) [KD]
*/
static void do_int(struct kernel_vm86_regs *regs, int i,
- unsigned char * ssp, unsigned short sp)
+ unsigned char __user * ssp, unsigned short sp)
{
- unsigned long *intr_ptr, segoffs;
+ unsigned long __user *intr_ptr;
+ unsigned long segoffs;
if (regs->cs == BIOSSEG)
goto cannot_handle;
goto cannot_handle;
if (i==0x21 && is_revectored(AH(regs),&KVM86->int21_revectored))
goto cannot_handle;
- intr_ptr = (unsigned long *) (i << 2);
+ intr_ptr = (unsigned long __user *) (i << 2);
if (get_user(segoffs, intr_ptr))
goto cannot_handle;
if ((segoffs >> 16) == BIOSSEG)
if (VMPI.is_vm86pus) {
if ( (trapno==3) || (trapno==1) )
return_to_32bit(regs, VM86_TRAP + (trapno << 8));
- do_int(regs, trapno, (unsigned char *) (regs->ss << 4), SP(regs));
+ do_int(regs, trapno, (unsigned char __user *) (regs->ss << 4), SP(regs));
return 0;
}
if (trapno !=1)
void handle_vm86_fault(struct kernel_vm86_regs * regs, long error_code)
{
- unsigned char *csp, *ssp, opcode;
+ unsigned char opcode;
+ unsigned char __user *csp;
+ unsigned char __user *ssp;
unsigned short ip, sp;
int data32, pref_done;
return_to_32bit(regs, VM86_PICRETURN); \
return; } while (0)
- csp = (unsigned char *) (regs->cs << 4);
- ssp = (unsigned char *) (regs->ss << 4);
+ csp = (unsigned char __user *) (regs->cs << 4);
+ ssp = (unsigned char __user *) (regs->ss << 4);
sp = SP(regs);
ip = IP(regs);
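Both `csp` and `ssp` above are formed with the standard real-mode rule: the 16-bit segment shifted left by 4, plus a 16-bit offset; likewise `do_int()` finds interrupt `i`'s vector at linear address `i << 2` in the real-mode IVT. A sketch of that address arithmetic:

```c
#include <stdint.h>

/* Real-mode linear address: (segment << 4) + offset, giving a
 * 20-bit address, exactly as the cs/ss pointers are built above. */
static uint32_t real_mode_linear(uint16_t seg, uint16_t off)
{
	return ((uint32_t)seg << 4) + off;
}
```

For example, the reset vector F000:FFF0 maps to linear 0xFFFF0, and INT 0x21's IVT slot sits at linear 0x21 << 2 = 0x84.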
{
unsigned long flags;
- free_irq(irqnumber,0);
- vm86_irqs[irqnumber].tsk = 0;
+ free_irq(irqnumber, NULL);
+ vm86_irqs[irqnumber].tsk = NULL;
spin_lock_irqsave(&irqbits_lock, flags);
irqbits &= ~(1 << irqnumber);
if (!((1 << sig) & ALLOWED_SIGS)) return -EPERM;
if (invalid_vm86_irq(irq)) return -EPERM;
if (vm86_irqs[irq].tsk) return -EPERM;
- ret = request_irq(irq, &irq_handler, 0, VM86_IRQNAME, 0);
+ ret = request_irq(irq, &irq_handler, 0, VM86_IRQNAME, NULL);
if (ret) return ret;
vm86_irqs[irq].sig = sig;
vm86_irqs[irq].tsk = current;
#include <asm-generic/vmlinux.lds.h>
#include <asm/thread_info.h>
+#include <linux/config.h>
+#include <asm/page.h>
+#include <asm/asm_offsets.h>
+
OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
OUTPUT_ARCH(i386)
ENTRY(startup_32)
jiffies = jiffies_64;
SECTIONS
{
- . = 0xC0000000 + 0x100000;
+ . = __PAGE_OFFSET + 0x100000;
/* read-only */
_text = .; /* Text and read-only data */
.text : {
*(.gnu.warning)
} = 0x9090
+#ifdef CONFIG_X86_4G
+ . = ALIGN(PAGE_SIZE_asm);
+ __entry_tramp_start = .;
+ . = FIX_ENTRY_TRAMPOLINE_0_addr;
+ __start___entry_text = .;
+ .entry.text : AT (__entry_tramp_start) { *(.entry.text) }
+ __entry_tramp_end = __entry_tramp_start + SIZEOF(.entry.text);
+ . = __entry_tramp_end;
+ . = ALIGN(PAGE_SIZE_asm);
+#else
+ .entry.text : { *(.entry.text) }
+#endif
+
_etext = .; /* End of text section */
. = ALIGN(16); /* Exception table */
CONSTRUCTORS
}
- . = ALIGN(4096);
+ . = ALIGN(PAGE_SIZE_asm);
__nosave_begin = .;
.data_nosave : { *(.data.nosave) }
- . = ALIGN(4096);
+ . = ALIGN(PAGE_SIZE_asm);
__nosave_end = .;
- . = ALIGN(4096);
- .data.page_aligned : { *(.data.idt) }
-
. = ALIGN(32);
.data.cacheline_aligned : { *(.data.cacheline_aligned) }
.data.init_task : { *(.data.init_task) }
/* will be freed after init */
- . = ALIGN(4096); /* Init code and data */
+ . = ALIGN(PAGE_SIZE_asm); /* Init code and data */
__init_begin = .;
.init.text : {
_sinittext = .;
from .altinstructions and .eh_frame */
.exit.text : { *(.exit.text) }
.exit.data : { *(.exit.data) }
- . = ALIGN(4096);
+ . = ALIGN(PAGE_SIZE_asm);
__initramfs_start = .;
.init.ramfs : { *(.init.ramfs) }
__initramfs_end = .;
__per_cpu_start = .;
.data.percpu : { *(.data.percpu) }
__per_cpu_end = .;
- . = ALIGN(4096);
+ . = ALIGN(PAGE_SIZE_asm);
__init_end = .;
/* freed after init ends here */
-
+
+ . = ALIGN(PAGE_SIZE_asm);
+ .data.page_aligned_tss : { *(.data.tss) }
+
+ . = ALIGN(PAGE_SIZE_asm);
+ .data.page_aligned_default_ldt : { *(.data.default_ldt) }
+
+ . = ALIGN(PAGE_SIZE_asm);
+ .data.page_aligned_idt : { *(.data.idt) }
+
+ . = ALIGN(PAGE_SIZE_asm);
+ .data.page_aligned_gdt : { *(.data.gdt) }
+
__bss_start = .; /* BSS */
.bss : {
*(.bss.page_aligned)
.stab.index 0 : { *(.stab.index) }
.stab.indexstr 0 : { *(.stab.indexstr) }
.comment 0 : { *(.comment) }
+
+
}
.type __kernel_vsyscall,@function
__kernel_vsyscall:
.LSTART_vsyscall:
+ cmpl $192, %eax
+ jne 1f
+ int $0x80
+ ret
+1:
push %ecx
.Lpush_ecx:
push %edx
/* 7: align return point with nop's to make disassembly easier */
.space 7,0x90
- /* 14: System call restart point is here! (SYSENTER_RETURN - 2) */
+ /* 14: System call restart point is here! (SYSENTER_RETURN_OFFSET-2) */
jmp .Lenter_kernel
/* 16: System call normal return point is here! */
- .globl SYSENTER_RETURN /* Symbol used by entry.S. */
-SYSENTER_RETURN:
+ .globl SYSENTER_RETURN_OFFSET /* Symbol used by sysenter.c */
+SYSENTER_RETURN_OFFSET:
pop %ebp
.Lpop_ebp:
pop %edx
/*
* Linker script for vsyscall DSO. The vsyscall page is an ELF shared
- * object prelinked to its virtual address, and with only one read-only
- * segment (that fits in one page). This script controls its layout.
+ * object with only one read-only segment (that fits in one page).
+ * This script controls its layout.
*/
-/* This must match <asm/fixmap.h>. */
-VSYSCALL_BASE = 0xffffe000;
-
SECTIONS
{
- . = VSYSCALL_BASE + SIZEOF_HEADERS;
+ . = SIZEOF_HEADERS;
.hash : { *(.hash) } :text
.dynsym : { *(.dynsym) }
For the layouts to match, we need to skip more than enough
space for the dynamic symbol table et al. If this amount
is insufficient, ld -shared will barf. Just increase it here. */
- . = VSYSCALL_BASE + 0x400;
+ . = 0x400;
.text : { *(.text) } :text =0x90909090
#
-lib-y = checksum.o delay.o \
- usercopy.o getuser.o \
- memcpy.o strstr.o
+lib-y = checksum.o delay.o usercopy.o getuser.o memcpy.o strstr.o \
+ bitops.o
lib-$(CONFIG_X86_USE_3DNOW) += mmx.o
lib-$(CONFIG_HAVE_DEC_LOCK) += dec_and_lock.o
.previous
.align 4
-.globl csum_partial_copy_generic
+.globl direct_csum_partial_copy_generic
#ifndef CONFIG_X86_USE_PPRO_CHECKSUM
#define ARGBASE 16
#define FP 12
-csum_partial_copy_generic:
+direct_csum_partial_copy_generic:
subl $4,%esp
pushl %edi
pushl %esi
#define ARGBASE 12
-csum_partial_copy_generic:
+direct_csum_partial_copy_generic:
pushl %ebx
pushl %edi
pushl %esi
inline void __const_udelay(unsigned long xloops)
{
int d0;
+ xloops *= 4;
__asm__("mull %0"
:"=d" (xloops), "=&a" (d0)
- :"1" (xloops),"0" (current_cpu_data.loops_per_jiffy));
- __delay(xloops * HZ);
+ :"1" (xloops),"0" (current_cpu_data.loops_per_jiffy * (HZ/4)));
+ __delay(++xloops);
}
void __udelay(unsigned long usecs)
{
- __const_udelay(usecs * 0x000010c6); /* 2**32 / 1000000 */
+ __const_udelay(usecs * 0x000010c7); /* 2**32 / 1000000 (rounded up) */
}
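The constant change from 0x10c6 to 0x10c7 matters because 2^32/10^6 = 4294.967..., so rounding down (0x10c6 = 4294) makes `udelay(n)` come out slightly shorter than `n` microseconds, while rounding up (0x10c7 = 4295) never undershoots. A user-space sketch of the 0.32 fixed-point conversion (helper names are illustrative):

```c
#include <stdint.h>

/* usecs -> xloops: multiply by 2^32/10^6 rounded up (0x10c7),
 * yielding a fraction-of-a-second value in 0.32 fixed point. */
static uint32_t usecs_to_xloops(uint32_t usecs)
{
	return usecs * 0x000010c7u;
}

/* Convert back for checking: (xloops * 10^6) >> 32. */
static uint32_t xloops_to_usecs(uint32_t xloops)
{
	return (uint32_t)(((uint64_t)xloops * 1000000u) >> 32);
}
```

Round-tripping 100 usecs through the rounded-up constant recovers 100, while the old rounded-down constant loses a microsecond.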
void __ndelay(unsigned long nsecs)
* return value.
*/
#include <asm/thread_info.h>
+#include <asm/asm_offsets.h>
/*
#include <linux/config.h>
#include <linux/string.h>
+#include <linux/module.h>
#undef memcpy
#undef memset
-void * memcpy(void * to, const void * from, size_t n)
+void *memcpy(void *to, const void *from, size_t n)
{
#ifdef CONFIG_X86_USE_3DNOW
return __memcpy3d(to, from, n);
return __memcpy(to, from, n);
#endif
}
+EXPORT_SYMBOL_NOVERS(memcpy);
-void * memset(void * s, int c, size_t count)
+void *memset(void *s, int c, size_t count)
{
return __memset(s, c, count);
}
+EXPORT_SYMBOL_NOVERS(memset);
+
+void *memmove(void *dest, const void *src, size_t n)
+{
+ int d0, d1, d2;
+
+ if (dest < src) {
+ memcpy(dest,src,n);
+ } else {
+ __asm__ __volatile__(
+ "std\n\t"
+ "rep\n\t"
+ "movsb\n\t"
+ "cld"
+ : "=&c" (d0), "=&S" (d1), "=&D" (d2)
+ :"0" (n),
+ "1" (n-1+(const char *)src),
+ "2" (n-1+(char *)dest)
+ :"memory");
+ }
+ return dest;
+}
+EXPORT_SYMBOL_NOVERS(memmove);
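The `memmove()` added above copies backward (`std` + `rep movsb`, starting from the last byte) whenever the destination does not precede the source, so overlapping regions are handled correctly in both directions. The same direction rule in portable C (a sketch, not the kernel implementation):

```c
#include <stddef.h>

/* Overlap-safe copy: forward when dest precedes src, otherwise
 * backward from the last byte -- the rule the asm above encodes. */
static void *memmove_sketch(void *dest, const void *src, size_t n)
{
	unsigned char *d = dest;
	const unsigned char *s = src;

	if (d < s) {
		while (n--)
			*d++ = *s++;	/* forward copy */
	} else {
		while (n--)
			d[n] = s[n];	/* backward copy */
	}
	return dest;
}
```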
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/blkdev.h>
-#include <linux/module.h>
#include <asm/uaccess.h>
#include <asm/mmx.h>
#define __do_strncpy_from_user(dst,src,count,res) \
do { \
int __d0, __d1, __d2; \
+ might_sleep(); \
__asm__ __volatile__( \
" testl %1,%1\n" \
" jz 2f\n" \
* and returns @count.
*/
long
-__strncpy_from_user(char *dst, const char __user *src, long count)
+__direct_strncpy_from_user(char *dst, const char __user *src, long count)
{
long res;
__do_strncpy_from_user(dst, src, count, res);
* and returns @count.
*/
long
-strncpy_from_user(char *dst, const char __user *src, long count)
+direct_strncpy_from_user(char *dst, const char __user *src, long count)
{
long res = -EFAULT;
if (access_ok(VERIFY_READ, src, 1))
#define __do_clear_user(addr,size) \
do { \
int __d0; \
+ might_sleep(); \
__asm__ __volatile__( \
"0: rep; stosl\n" \
" movl %2,%0\n" \
* On success, this will be zero.
*/
unsigned long
-clear_user(void __user *to, unsigned long n)
+direct_clear_user(void __user *to, unsigned long n)
{
might_sleep();
if (access_ok(VERIFY_WRITE, to, n))
* On success, this will be zero.
*/
unsigned long
-__clear_user(void __user *to, unsigned long n)
+__direct_clear_user(void __user *to, unsigned long n)
{
__do_clear_user(to, n);
return n;
* On exception, returns 0.
* If the string is too long, returns a value greater than @n.
*/
-long strnlen_user(const char __user *s, long n)
+long direct_strnlen_user(const char __user *s, long n)
{
unsigned long mask = -__addr_ok(s);
unsigned long res, tmp;
#ifdef CONFIG_X86_INTEL_USERCOPY
static unsigned long
-__copy_user_intel(void *to, const void *from,unsigned long size)
+__copy_user_intel(void __user *to, const void *from, unsigned long size)
{
int d0, d1;
__asm__ __volatile__(
}
static unsigned long
-__copy_user_zeroing_intel(void *to, const void *from, unsigned long size)
+__copy_user_zeroing_intel(void *to, const void __user *from, unsigned long size)
{
int d0, d1;
__asm__ __volatile__(
* them
*/
unsigned long
-__copy_user_zeroing_intel(void *to, const void *from, unsigned long size);
+__copy_user_zeroing_intel(void *to, const void __user *from, unsigned long size);
unsigned long
-__copy_user_intel(void *to, const void *from,unsigned long size);
+__copy_user_intel(void __user *to, const void *from, unsigned long size);
#endif /* CONFIG_X86_INTEL_USERCOPY */
/* Generic arbitrary sized copy. */
}
#endif
if (movsl_is_ok(to, from, n))
- __copy_user((void *)to, from, n);
+ __copy_user(to, from, n);
else
- n = __copy_user_intel((void *)to, from, n);
+ n = __copy_user_intel(to, from, n);
return n;
}
-unsigned long
-__copy_from_user_ll(void *to, const void __user *from, unsigned long n)
+unsigned long __copy_from_user_ll(void *to, const void __user *from, unsigned long n)
{
if (movsl_is_ok(to, from, n))
- __copy_user_zeroing(to, (const void *) from, n);
+ __copy_user_zeroing(to, from, n);
else
- n = __copy_user_zeroing_intel(to, (const void *) from, n);
- return n;
-}
-
-/**
- * copy_to_user: - Copy a block of data into user space.
- * @to: Destination address, in user space.
- * @from: Source address, in kernel space.
- * @n: Number of bytes to copy.
- *
- * Context: User context only. This function may sleep.
- *
- * Copy data from kernel space to user space.
- *
- * Returns number of bytes that could not be copied.
- * On success, this will be zero.
- */
-unsigned long
-copy_to_user(void __user *to, const void *from, unsigned long n)
-{
- might_sleep();
- if (access_ok(VERIFY_WRITE, to, n))
- n = __copy_to_user(to, from, n);
+ n = __copy_user_zeroing_intel(to, from, n);
return n;
}
-EXPORT_SYMBOL(copy_to_user);
-/**
- * copy_from_user: - Copy a block of data from user space.
- * @to: Destination address, in kernel space.
- * @from: Source address, in user space.
- * @n: Number of bytes to copy.
- *
- * Context: User context only. This function may sleep.
- *
- * Copy data from user space to kernel space.
- *
- * Returns number of bytes that could not be copied.
- * On success, this will be zero.
- *
- * If some data could not be copied, this function will pad the copied
- * data to the requested size using zero bytes.
- */
-unsigned long
-copy_from_user(void *to, const void __user *from, unsigned long n)
-{
- might_sleep();
- if (access_ok(VERIFY_READ, from, n))
- n = __copy_from_user(to, from, n);
- else
- memset(to, 0, n);
- return n;
-}
-EXPORT_SYMBOL(copy_from_user);
/*
* IRQ2 is cascade interrupt to second interrupt controller
*/
-static struct irqaction irq2 = { no_action, 0, 0, "cascade", NULL, NULL};
+static struct irqaction irq2 = { no_action, 0, CPU_MASK_NONE, "cascade", NULL, NULL};
/**
* intr_init_hook - post gate setup interrupt initialisation
{
}
-static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, 0, "timer", NULL, NULL};
+static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
/**
* time_init_hook - do any specific initialisations for the system timer.
boot_cpu_logical_apicid = logical_apicid;
}
- if (m->mpc_apicid > MAX_APICS) {
+ ver = m->mpc_apicver;
+ if ((ver >= 0x14 && m->mpc_apicid >= 0xff) || m->mpc_apicid >= 0xf) {
printk(KERN_ERR "Processor #%d INVALID. (Max ID: %d).\n",
m->mpc_apicid, MAX_APICS);
return;
}
- ver = m->mpc_apicver;
apic_cpus = apicid_to_cpu_present(m->mpc_apicid);
physids_or(phys_cpu_present_map, phys_cpu_present_map, apic_cpus);
#include <linux/pci_ids.h>
#include <asm/io.h>
-#include <asm/pgalloc.h>
#include <asm/arch_hooks.h>
#include <asm/apic.h>
#include "cobalt.h"
/*
* IRQ2 is cascade interrupt to second interrupt controller
*/
-static struct irqaction irq2 = { no_action, 0, 0, "cascade", NULL, NULL};
+static struct irqaction irq2 = { no_action, 0, CPU_MASK_NONE, "cascade", NULL, NULL};
void __init intr_init_hook(void)
{
{
}
-static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, 0, "timer", NULL, NULL};
+static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
void __init time_init_hook(void)
{
#include <linux/reboot.h>
#include <linux/sysrq.h>
#include <asm/io.h>
-#include <asm/pgalloc.h>
#include <asm/voyager.h>
#include <asm/vic.h>
#include <linux/pm.h>
#include <asm/desc.h>
#include <asm/voyager.h>
#include <asm/vic.h>
-#include <asm/pgalloc.h>
#include <asm/mtrr.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
send_CPI_allbutself(__u8 cpi)
{
__u8 cpu = smp_processor_id();
- __u32 mask = cpus_coerce(cpu_online_map) & ~(1 << cpu);
+ __u32 mask = cpus_addr(cpu_online_map)[0] & ~(1 << cpu);
send_CPI(mask, cpi);
}
/* set up everything for just this CPU, we can alter
* this as we start the other CPUs later */
/* now get the CPU disposition from the extended CMOS */
- phys_cpu_present_map = cpus_promote(voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK));
- cpus_coerce(phys_cpu_present_map) |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 1) << 8;
- cpus_coerce(phys_cpu_present_map) |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 2) << 16;
- cpus_coerce(phys_cpu_present_map) |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 3) << 24;
- printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n", cpus_coerce(phys_cpu_present_map));
+ cpus_addr(phys_cpu_present_map)[0] = voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK);
+ cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 1) << 8;
+ cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 2) << 16;
+ cpus_addr(phys_cpu_present_map)[0] |= voyager_extended_cmos_read(VOYAGER_PROCESSOR_PRESENT_MASK + 3) << 24;
+ printk("VOYAGER SMP: phys_cpu_present_map = 0x%lx\n", cpus_addr(phys_cpu_present_map)[0]);
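The four CMOS reads above assemble a 32-bit CPU-present mask one byte at a time, little-endian, into `cpus_addr(phys_cpu_present_map)[0]`. The byte assembly can be sketched as (the helper name is hypothetical; the kernel reads each byte via `voyager_extended_cmos_read()`):

```c
#include <stdint.h>

/* Assemble four consecutive byte registers, little-endian, into
 * one 32-bit mask -- the shape of the CMOS read-out above. */
static uint32_t assemble_mask(const uint8_t bytes[4])
{
	return (uint32_t)bytes[0]
	     | ((uint32_t)bytes[1] << 8)
	     | ((uint32_t)bytes[2] << 16)
	     | ((uint32_t)bytes[3] << 24);
}
```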
/* Here we set up the VIC to enable SMP */
/* enable the CPIs by writing the base vector to their register */
outb(VIC_DEFAULT_CPI_BASE, VIC_CPI_BASE_REGISTER);
/* now that the cat has probed the Voyager System Bus, sanity
* check the cpu map */
if( ((voyager_quad_processors | voyager_extended_vic_processors)
- & cpus_coerce(phys_cpu_present_map)) != cpus_coerce(phys_cpu_present_map)) {
+ & cpus_addr(phys_cpu_present_map)[0]) != cpus_addr(phys_cpu_present_map)[0]) {
/* should panic */
printk("\n\n***WARNING*** Sanity check of CPU present map FAILED\n");
}
} else if(voyager_level == 4)
- voyager_extended_vic_processors = cpus_coerce(phys_cpu_present_map);
+ voyager_extended_vic_processors = cpus_addr(phys_cpu_present_map)[0];
/* this sets up the idle task to run on the current cpu */
voyager_extended_cpus = 1;
if (!cpumask)
BUG();
- if ((cpumask & cpus_coerce(cpu_online_map)) != cpumask)
+ if ((cpumask & cpus_addr(cpu_online_map)[0]) != cpumask)
BUG();
if (cpumask & (1 << smp_processor_id()))
BUG();
preempt_disable();
- cpu_mask = cpus_coerce(mm->cpu_vm_mask) & ~(1 << smp_processor_id());
+ cpu_mask = cpus_addr(mm->cpu_vm_mask)[0] & ~(1 << smp_processor_id());
local_flush_tlb();
if (cpu_mask)
flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
preempt_disable();
- cpu_mask = cpus_coerce(mm->cpu_vm_mask) & ~(1 << smp_processor_id());
+ cpu_mask = cpus_addr(mm->cpu_vm_mask)[0] & ~(1 << smp_processor_id());
if (current->active_mm == mm) {
if (current->mm)
preempt_disable();
- cpu_mask = cpus_coerce(mm->cpu_vm_mask) & ~(1 << smp_processor_id());
+ cpu_mask = cpus_addr(mm->cpu_vm_mask)[0] & ~(1 << smp_processor_id());
if (current->active_mm == mm) {
if(current->mm)
__flush_tlb_one(va);
int wait)
{
struct call_data_struct data;
- __u32 mask = cpus_coerce(cpu_online_map);
+ __u32 mask = cpus_addr(cpu_online_map)[0];
mask &= ~(1<<smp_processor_id());
unsigned long irq_mask = 1 << irq;
int cpu;
- real_mask = cpus_coerce(mask) & voyager_extended_vic_processors;
+ real_mask = cpus_addr(mask)[0] & voyager_extended_vic_processors;
- if(cpus_coerce(mask) == 0)
+ if(cpus_addr(mask)[0] == 0)
/* can't have no cpu's to accept the interrupt -- extremely
* bad things will happen */
return;
#include <asm/desc.h>
#include <asm/voyager.h>
#include <asm/vic.h>
-#include <asm/pgalloc.h>
#include <asm/mtrr.h>
#include <asm/msr.h>
RE_ENTRANT_CHECK_OFF;
/* No need to verify_area(), we have previously fetched these bytes. */
- printk("Unimplemented FPU Opcode at eip=%p : ", (void *) address);
+ printk("Unimplemented FPU Opcode at eip=%p : ", (void __user *) address);
if ( FPU_CS == __USER_CS )
{
while ( 1 )
{
- FPU_get_user(byte1, (u_char *) address);
+ FPU_get_user(byte1, (u_char __user *) address);
if ( (byte1 & 0xf8) == 0xd8 ) break;
printk("[%02x]", byte1);
address++;
}
printk("%02x ", byte1);
- FPU_get_user(FPU_modrm, 1 + (u_char *) address);
+ FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
if (FPU_modrm >= 0300)
printk("%02x (%02x+%d)\n", FPU_modrm, FPU_modrm & 0xf8, FPU_modrm & 7);
#define MAX_PRINTED_BYTES 20
for ( i = 0; i < MAX_PRINTED_BYTES; i++ )
{
- FPU_get_user(byte1, (u_char *) address);
+ FPU_get_user(byte1, (u_char __user *) address);
if ( (byte1 & 0xf8) == 0xd8 )
{
printk(" %02x", byte1);
printk(" [more..]\n");
else
{
- FPU_get_user(FPU_modrm, 1 + (u_char *) address);
+ FPU_get_user(FPU_modrm, 1 + (u_char __user *) address);
if (FPU_modrm >= 0300)
printk(" %02x (%02x+%d)\n", FPU_modrm, FPU_modrm & 0xf8, FPU_modrm & 7);
#include "status_w.h"
-void fadd__()
+void fadd__(void)
{
/* fadd st,st(i) */
int i = FPU_rm;
}
-void fmul__()
+void fmul__(void)
{
/* fmul st,st(i) */
int i = FPU_rm;
-void fsub__()
+void fsub__(void)
{
/* fsub st,st(i) */
clear_C1();
}
-void fsubr_()
+void fsubr_(void)
{
/* fsubr st,st(i) */
clear_C1();
}
-void fdiv__()
+void fdiv__(void)
{
/* fdiv st,st(i) */
clear_C1();
}
-void fdivr_()
+void fdivr_(void)
{
/* fdivr st,st(i) */
clear_C1();
-void fadd_i()
+void fadd_i(void)
{
/* fadd st(i),st */
int i = FPU_rm;
}
-void fmul_i()
+void fmul_i(void)
{
/* fmul st(i),st */
clear_C1();
}
-void fsubri()
+void fsubri(void)
{
/* fsubr st(i),st */
clear_C1();
}
-void fsub_i()
+void fsub_i(void)
{
/* fsub st(i),st */
clear_C1();
}
-void fdivri()
+void fdivri(void)
{
/* fdivr st(i),st */
clear_C1();
}
-void fdiv_i()
+void fdiv_i(void)
{
/* fdiv st(i),st */
clear_C1();
-void faddp_()
+void faddp_(void)
{
/* faddp st(i),st */
int i = FPU_rm;
}
-void fmulp_()
+void fmulp_(void)
{
/* fmulp st(i),st */
clear_C1();
-void fsubrp()
+void fsubrp(void)
{
/* fsubrp st(i),st */
clear_C1();
}
-void fsubp_()
+void fsubp_(void)
{
/* fsubp st(i),st */
clear_C1();
}
-void fdivrp()
+void fdivrp(void)
{
/* fdivrp st(i),st */
clear_C1();
}
-void fdivp_()
+void fdivp_(void)
{
/* fdivp st(i),st */
clear_C1();
}
/* Needs to be externally visible */
-void finit()
+void finit(void)
{
control_word = 0x037f;
partial_status = 0;
fsetpm, FPU_illegal, FPU_illegal, FPU_illegal
};
-void finit_()
+void finit_(void)
{
(finit_table[FPU_rm])();
}
FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
};
-void fstsw_()
+void fstsw_(void)
{
(fstsw_table[FPU_rm])();
}
FPU_illegal, FPU_illegal, FPU_illegal, FPU_illegal
};
-void fp_nop()
+void fp_nop(void)
{
(fp_nop_table[FPU_rm])();
}
-void fld_i_()
+void fld_i_(void)
{
FPU_REG *st_new_ptr;
int i;
}
-void fxch_i()
+void fxch_i(void)
{
/* fxch st(i) */
FPU_REG t;
}
-void ffree_()
+void ffree_(void)
{
/* ffree st(i) */
FPU_settagi(FPU_rm, TAG_Empty);
}
-void ffreep()
+void ffreep(void)
{
/* ffree st(i) + pop - unofficial code */
FPU_settagi(FPU_rm, TAG_Empty);
}
-void fst_i_()
+void fst_i_(void)
{
/* fst st(i) */
FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
}
-void fstp_i()
+void fstp_i(void)
{
/* fstp st(i) */
FPU_copy_to_regi(&st(0), FPU_gettag0(), FPU_rm);
u_char emulating=0;
#endif /* RE_ENTRANT_CHECKING */
-static int valid_prefix(u_char *Byte, u_char **fpu_eip,
+static int valid_prefix(u_char *Byte, u_char __user **fpu_eip,
overrides *override);
asmlinkage void math_emulate(long arg)
FPU_REG loaded_data;
FPU_REG *st0_ptr;
u_char loaded_tag, st0_tag;
- void *data_address;
+ void __user *data_address;
struct address data_sel_off;
struct address entry_sel_off;
unsigned long code_base = 0;
math_abort(FPU_info, SIGILL);
}
- if ( SEG_D_SIZE(code_descriptor = LDT_DESCRIPTOR(FPU_CS)) )
+ code_descriptor = LDT_DESCRIPTOR(FPU_CS);
+ if ( SEG_D_SIZE(code_descriptor) )
{
/* The above test may be wrong, the book is not clear */
/* Segmented 32 bit protected mode */
if (current->ptrace & PT_PTRACED)
FPU_lookahead = 0;
- if ( !valid_prefix(&byte1, (u_char **)&FPU_EIP,
+ if ( !valid_prefix(&byte1, (u_char __user **)&FPU_EIP,
&addr_modes.override) )
{
RE_ENTRANT_CHECK_OFF;
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(1);
- FPU_get_user(FPU_modrm, (u_char *) FPU_EIP);
+ FPU_get_user(FPU_modrm, (u_char __user *) FPU_EIP);
RE_ENTRANT_CHECK_ON;
FPU_EIP++;
switch ( (byte1 >> 1) & 3 )
{
case 0:
- unmasked = FPU_load_single((float *)data_address,
+ unmasked = FPU_load_single((float __user *)data_address,
&loaded_data);
loaded_tag = unmasked & 0xff;
unmasked &= ~0xff;
break;
case 1:
- loaded_tag = FPU_load_int32((long *)data_address, &loaded_data);
+ loaded_tag = FPU_load_int32((long __user *)data_address, &loaded_data);
break;
case 2:
- unmasked = FPU_load_double((double *)data_address,
+ unmasked = FPU_load_double((double __user *)data_address,
&loaded_data);
loaded_tag = unmasked & 0xff;
unmasked &= ~0xff;
break;
case 3:
default: /* Used here to suppress gcc warnings. */
- loaded_tag = FPU_load_int16((short *)data_address, &loaded_data);
+ loaded_tag = FPU_load_int16((short __user *)data_address, &loaded_data);
break;
}
if (FPU_lookahead && !need_resched())
{
FPU_ORIG_EIP = FPU_EIP - code_base;
- if ( valid_prefix(&byte1, (u_char **)&FPU_EIP,
+ if ( valid_prefix(&byte1, (u_char __user **)&FPU_EIP,
&addr_modes.override) )
goto do_another_FPU_instruction;
}
all prefix bytes, further changes are needed in the emulator code
which accesses user address space. Access to separate segments is
important for msdos emulation. */
-static int valid_prefix(u_char *Byte, u_char **fpu_eip,
+static int valid_prefix(u_char *Byte, u_char __user **fpu_eip,
overrides *override)
{
u_char byte;
- u_char *ip = *fpu_eip;
+ u_char __user *ip = *fpu_eip;
*override = (overrides) { 0, 0, PREFIX_DEFAULT }; /* defaults */
#define sstatus_word() \
((S387->swd & ~SW_Top & 0xffff) | ((S387->ftop << SW_Top_Shift) & SW_Top))
-int restore_i387_soft(void *s387, struct _fpstate *buf)
+int restore_i387_soft(void *s387, struct _fpstate __user *buf)
{
- u_char *d = (u_char *)buf;
+ u_char __user *d = (u_char __user *)buf;
int offset, other, i, tags, regnr, tag, newtop;
RE_ENTRANT_CHECK_OFF;
}
-int save_i387_soft(void *s387, struct _fpstate * buf)
+int save_i387_soft(void *s387, struct _fpstate __user * buf)
{
- u_char *d = (u_char *)buf;
+ u_char __user *d = (u_char __user *)buf;
int offset = (S387->ftop & 7) * 10, other = 80 - offset;
RE_ENTRANT_CHECK_OFF;
ftst_, fxam, (FUNC_ST0)FPU_illegal, (FUNC_ST0)FPU_illegal
};
-void FPU_etc()
+void FPU_etc(void)
{
(fp_etc_table[FPU_rm])(&st(0), FPU_gettag0());
}
extern void FPU_triga(void);
extern void FPU_trigb(void);
/* get_address.c */
-extern void *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
+extern void __user *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
struct address *addr, fpu_addr_modes addr_modes);
-extern void *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
+extern void __user *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
struct address *addr, fpu_addr_modes addr_modes);
/* load_store.c */
extern int FPU_load_store(u_char type, fpu_addr_modes addr_modes,
- void *data_address);
+ void __user *data_address);
/* poly_2xm1.c */
extern int poly_2xm1(u_char sign, FPU_REG *arg, FPU_REG *result);
/* poly_atan.c */
/* reg_constant.c */
extern void fconst(void);
/* reg_ld_str.c */
-extern int FPU_load_extended(long double *s, int stnr);
-extern int FPU_load_double(double *dfloat, FPU_REG *loaded_data);
-extern int FPU_load_single(float *single, FPU_REG *loaded_data);
-extern int FPU_load_int64(long long *_s);
-extern int FPU_load_int32(long *_s, FPU_REG *loaded_data);
-extern int FPU_load_int16(short *_s, FPU_REG *loaded_data);
-extern int FPU_load_bcd(u_char *s);
+extern int FPU_load_extended(long double __user *s, int stnr);
+extern int FPU_load_double(double __user *dfloat, FPU_REG *loaded_data);
+extern int FPU_load_single(float __user *single, FPU_REG *loaded_data);
+extern int FPU_load_int64(long long __user *_s);
+extern int FPU_load_int32(long __user *_s, FPU_REG *loaded_data);
+extern int FPU_load_int16(short __user *_s, FPU_REG *loaded_data);
+extern int FPU_load_bcd(u_char __user *s);
extern int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag,
- long double *d);
-extern int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double *dfloat);
-extern int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float *single);
-extern int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long *d);
-extern int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long *d);
-extern int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short *d);
-extern int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char *d);
+ long double __user *d);
+extern int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double __user *dfloat);
+extern int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float __user *single);
+extern int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long __user *d);
+extern int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long __user *d);
+extern int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short __user *d);
+extern int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char __user *d);
extern int FPU_round_to_int(FPU_REG *r, u_char tag);
-extern u_char *fldenv(fpu_addr_modes addr_modes, u_char *s);
-extern void frstor(fpu_addr_modes addr_modes, u_char *data_address);
-extern u_char *fstenv(fpu_addr_modes addr_modes, u_char *d);
-extern void fsave(fpu_addr_modes addr_modes, u_char *data_address);
+extern u_char __user *fldenv(fpu_addr_modes addr_modes, u_char __user *s);
+extern void frstor(fpu_addr_modes addr_modes, u_char __user *data_address);
+extern u_char __user *fstenv(fpu_addr_modes addr_modes, u_char __user *d);
+extern void fsave(fpu_addr_modes addr_modes, u_char __user *data_address);
extern int FPU_tagof(FPU_REG *ptr);
/* reg_mul.c */
extern int FPU_mul(FPU_REG const *b, u_char tagb, int deststnr, int control_w);
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <asm/atomic_kmap.h>
/* This sets the pointer FPU_info to point to the argument part
of the stack frame of math_emulate() */
/* s is always from a cpu register, and the cpu does bounds checking
* during register load --> no further bounds checks needed */
-#define LDT_DESCRIPTOR(s) (((struct desc_struct *)current->mm->context.ldt)[(s) >> 3])
+#define LDT_DESCRIPTOR(s) (((struct desc_struct *)__kmap_atomic_vaddr(KM_LDT_PAGE0))[(s) >> 3])
#define SEG_D_SIZE(x) ((x).b & (3 << 21))
#define SEG_G_BIT(x) ((x).b & (1 << 23))
#define SEG_GRANULARITY(x) (((x).b & (1 << 23)) ? 4096 : 1)
/* A simpler test than verify_area() can probably be done for
FPU_code_verify_area() because the only possible error is to step
past the upper boundary of a legal code area. */
-#define FPU_code_verify_area(z) FPU_verify_area(VERIFY_READ,(void *)FPU_EIP,z)
+#define FPU_code_verify_area(z) FPU_verify_area(VERIFY_READ,(void __user *)FPU_EIP,z)
#endif
#define FPU_get_user(x,y) get_user((x),(y))
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(1);
- FPU_get_user(base, (u_char *) (*fpu_eip)); /* The SIB byte */
+ FPU_get_user(base, (u_char __user *) (*fpu_eip)); /* The SIB byte */
RE_ENTRANT_CHECK_ON;
(*fpu_eip)++;
ss = base >> 6;
long displacement;
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(1);
- FPU_get_user(displacement, (signed char *) (*fpu_eip));
+ FPU_get_user(displacement, (signed char __user *) (*fpu_eip));
offset += displacement;
RE_ENTRANT_CHECK_ON;
(*fpu_eip)++;
long displacement;
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(4);
- FPU_get_user(displacement, (long *) (*fpu_eip));
+ FPU_get_user(displacement, (long __user *) (*fpu_eip));
offset += displacement;
RE_ENTRANT_CHECK_ON;
(*fpu_eip) += 4;
*/
-void *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
+void __user *FPU_get_address(u_char FPU_modrm, unsigned long *fpu_eip,
struct address *addr,
fpu_addr_modes addr_modes)
{
/* Special case: disp32 */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(4);
- FPU_get_user(address, (unsigned long *) (*fpu_eip));
+ FPU_get_user(address, (unsigned long __user *) (*fpu_eip));
(*fpu_eip) += 4;
RE_ENTRANT_CHECK_ON;
addr->offset = address;
- return (void *) address;
+ return (void __user *) address;
}
else
{
address = *cpu_reg_ptr; /* Just return the contents
of the cpu register */
addr->offset = address;
- return (void *) address;
+ return (void __user *) address;
}
case 1:
/* 8 bit signed displacement */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(1);
- FPU_get_user(address, (signed char *) (*fpu_eip));
+ FPU_get_user(address, (signed char __user *) (*fpu_eip));
RE_ENTRANT_CHECK_ON;
(*fpu_eip)++;
break;
/* 32 bit displacement */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(4);
- FPU_get_user(address, (long *) (*fpu_eip));
+ FPU_get_user(address, (long __user *) (*fpu_eip));
(*fpu_eip) += 4;
RE_ENTRANT_CHECK_ON;
break;
EXCEPTION(EX_INTERNAL|0x133);
}
- return (void *)address;
+ return (void __user *)address;
}
-void *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
+void __user *FPU_get_address_16(u_char FPU_modrm, unsigned long *fpu_eip,
struct address *addr,
fpu_addr_modes addr_modes)
{
/* Special case: disp16 */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(2);
- FPU_get_user(address, (unsigned short *) (*fpu_eip));
+ FPU_get_user(address, (unsigned short __user *) (*fpu_eip));
(*fpu_eip) += 2;
RE_ENTRANT_CHECK_ON;
goto add_segment;
/* 8 bit signed displacement */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(1);
- FPU_get_user(address, (signed char *) (*fpu_eip));
+ FPU_get_user(address, (signed char __user *) (*fpu_eip));
RE_ENTRANT_CHECK_ON;
(*fpu_eip)++;
break;
/* 16 bit displacement */
RE_ENTRANT_CHECK_OFF;
FPU_code_verify_area(2);
- FPU_get_user(address, (unsigned short *) (*fpu_eip));
+ FPU_get_user(address, (unsigned short __user *) (*fpu_eip));
(*fpu_eip) += 2;
RE_ENTRANT_CHECK_ON;
break;
EXCEPTION(EX_INTERNAL|0x131);
}
- return (void *)address ;
+ return (void __user *)address;
}
};
int FPU_load_store(u_char type, fpu_addr_modes addr_modes,
- void *data_address)
+ void __user *data_address)
{
FPU_REG loaded_data;
FPU_REG *st0_ptr;
{
case 000: /* fld m32real */
clear_C1();
- loaded_tag = FPU_load_single((float *)data_address, &loaded_data);
+ loaded_tag = FPU_load_single((float __user *)data_address, &loaded_data);
if ( (loaded_tag == TAG_Special)
&& isNaN(&loaded_data)
&& (real_1op_NaN(&loaded_data) < 0) )
break;
case 001: /* fild m32int */
clear_C1();
- loaded_tag = FPU_load_int32((long *)data_address, &loaded_data);
+ loaded_tag = FPU_load_int32((long __user *)data_address, &loaded_data);
FPU_copy_to_reg0(&loaded_data, loaded_tag);
break;
case 002: /* fld m64real */
clear_C1();
- loaded_tag = FPU_load_double((double *)data_address, &loaded_data);
+ loaded_tag = FPU_load_double((double __user *)data_address, &loaded_data);
if ( (loaded_tag == TAG_Special)
&& isNaN(&loaded_data)
&& (real_1op_NaN(&loaded_data) < 0) )
break;
case 003: /* fild m16int */
clear_C1();
- loaded_tag = FPU_load_int16((short *)data_address, &loaded_data);
+ loaded_tag = FPU_load_int16((short __user *)data_address, &loaded_data);
FPU_copy_to_reg0(&loaded_data, loaded_tag);
break;
case 010: /* fst m32real */
clear_C1();
- FPU_store_single(st0_ptr, st0_tag, (float *)data_address);
+ FPU_store_single(st0_ptr, st0_tag, (float __user *)data_address);
break;
case 011: /* fist m32int */
clear_C1();
- FPU_store_int32(st0_ptr, st0_tag, (long *)data_address);
+ FPU_store_int32(st0_ptr, st0_tag, (long __user *)data_address);
break;
case 012: /* fst m64real */
clear_C1();
- FPU_store_double(st0_ptr, st0_tag, (double *)data_address);
+ FPU_store_double(st0_ptr, st0_tag, (double __user *)data_address);
break;
case 013: /* fist m16int */
clear_C1();
- FPU_store_int16(st0_ptr, st0_tag, (short *)data_address);
+ FPU_store_int16(st0_ptr, st0_tag, (short __user *)data_address);
break;
case 014: /* fstp m32real */
clear_C1();
- if ( FPU_store_single(st0_ptr, st0_tag, (float *)data_address) )
+ if ( FPU_store_single(st0_ptr, st0_tag, (float __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 015: /* fistp m32int */
clear_C1();
- if ( FPU_store_int32(st0_ptr, st0_tag, (long *)data_address) )
+ if ( FPU_store_int32(st0_ptr, st0_tag, (long __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 016: /* fstp m64real */
clear_C1();
- if ( FPU_store_double(st0_ptr, st0_tag, (double *)data_address) )
+ if ( FPU_store_double(st0_ptr, st0_tag, (double __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 017: /* fistp m16int */
clear_C1();
- if ( FPU_store_int16(st0_ptr, st0_tag, (short *)data_address) )
+ if ( FPU_store_int16(st0_ptr, st0_tag, (short __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 020: /* fldenv m14/28byte */
- fldenv(addr_modes, (u_char *)data_address);
+ fldenv(addr_modes, (u_char __user *)data_address);
/* Ensure that the values just loaded are not changed by
fix-up operations. */
return 1;
case 022: /* frstor m94/108byte */
- frstor(addr_modes, (u_char *)data_address);
+ frstor(addr_modes, (u_char __user *)data_address);
/* Ensure that the values just loaded are not changed by
fix-up operations. */
return 1;
case 023: /* fbld m80dec */
clear_C1();
- loaded_tag = FPU_load_bcd((u_char *)data_address);
+ loaded_tag = FPU_load_bcd((u_char __user *)data_address);
FPU_settag0(loaded_tag);
break;
case 024: /* fldcw */
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_READ, data_address, 2);
- FPU_get_user(control_word, (unsigned short *) data_address);
+ FPU_get_user(control_word, (unsigned short __user *) data_address);
RE_ENTRANT_CHECK_ON;
if ( partial_status & ~control_word & CW_Exceptions )
partial_status |= (SW_Summary | SW_Backward);
return 1;
case 025: /* fld m80real */
clear_C1();
- loaded_tag = FPU_load_extended((long double *)data_address, 0);
+ loaded_tag = FPU_load_extended((long double __user *)data_address, 0);
FPU_settag0(loaded_tag);
break;
case 027: /* fild m64int */
clear_C1();
- loaded_tag = FPU_load_int64((long long *)data_address);
+ loaded_tag = FPU_load_int64((long long __user *)data_address);
FPU_settag0(loaded_tag);
break;
case 030: /* fstenv m14/28byte */
- fstenv(addr_modes, (u_char *)data_address);
+ fstenv(addr_modes, (u_char __user *)data_address);
return 1;
case 032: /* fsave */
- fsave(addr_modes, (u_char *)data_address);
+ fsave(addr_modes, (u_char __user *)data_address);
return 1;
case 033: /* fbstp m80dec */
clear_C1();
- if ( FPU_store_bcd(st0_ptr, st0_tag, (u_char *)data_address) )
+ if ( FPU_store_bcd(st0_ptr, st0_tag, (u_char __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 034: /* fstcw m16int */
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,data_address,2);
- FPU_put_user(control_word, (unsigned short *) data_address);
+ FPU_put_user(control_word, (unsigned short __user *) data_address);
RE_ENTRANT_CHECK_ON;
return 1;
case 035: /* fstp m80real */
clear_C1();
- if ( FPU_store_extended(st0_ptr, st0_tag, (long double *)data_address) )
+ if ( FPU_store_extended(st0_ptr, st0_tag, (long double __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
case 036: /* fstsw m2byte */
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,data_address,2);
- FPU_put_user(status_word(),(unsigned short *) data_address);
+ FPU_put_user(status_word(),(unsigned short __user *) data_address);
RE_ENTRANT_CHECK_ON;
return 1;
case 037: /* fistp m64int */
clear_C1();
- if ( FPU_store_int64(st0_ptr, st0_tag, (long long *)data_address) )
+ if ( FPU_store_int64(st0_ptr, st0_tag, (long long __user *)data_address) )
pop_0(); /* pop only if the number was actually stored
(see the 80486 manual p16-28) */
break;
/*---------------------------------------------------------------------------*/
-void fcom_st()
+void fcom_st(void)
{
/* fcom st(i) */
compare_st_st(FPU_rm);
}
-void fcompst()
+void fcompst(void)
{
/* fcomp st(i) */
if ( !compare_st_st(FPU_rm) )
}
-void fcompp()
+void fcompp(void)
{
/* fcompp */
if (FPU_rm != 1)
}
-void fucom_()
+void fucom_(void)
{
/* fucom st(i) */
compare_u_st_st(FPU_rm);
}
-void fucomp()
+void fucomp(void)
{
/* fucomp st(i) */
if ( !compare_u_st_st(FPU_rm) )
}
-void fucompp()
+void fucompp(void)
{
/* fucompp */
if (FPU_rm == 1)
/* Get a long double from user memory */
-int FPU_load_extended(long double *s, int stnr)
+int FPU_load_extended(long double __user *s, int stnr)
{
FPU_REG *sti_ptr = &st(stnr);
/* Get a double from user memory */
-int FPU_load_double(double *dfloat, FPU_REG *loaded_data)
+int FPU_load_double(double __user *dfloat, FPU_REG *loaded_data)
{
int exp, tag, negative;
unsigned m64, l64;
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_READ, dfloat, 8);
- FPU_get_user(m64, 1 + (unsigned long *) dfloat);
- FPU_get_user(l64, (unsigned long *) dfloat);
+ FPU_get_user(m64, 1 + (unsigned long __user *) dfloat);
+ FPU_get_user(l64, (unsigned long __user *) dfloat);
RE_ENTRANT_CHECK_ON;
negative = (m64 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
/* Get a float from user memory */
-int FPU_load_single(float *single, FPU_REG *loaded_data)
+int FPU_load_single(float __user *single, FPU_REG *loaded_data)
{
unsigned m32;
int exp, tag, negative;
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_READ, single, 4);
- FPU_get_user(m32, (unsigned long *) single);
+ FPU_get_user(m32, (unsigned long __user *) single);
RE_ENTRANT_CHECK_ON;
negative = (m32 & 0x80000000) ? SIGN_Negative : SIGN_Positive;
/* Get a long long from user memory */
-int FPU_load_int64(long long *_s)
+int FPU_load_int64(long long __user *_s)
{
long long s;
int sign;
/* Get a long from user memory */
-int FPU_load_int32(long *_s, FPU_REG *loaded_data)
+int FPU_load_int32(long __user *_s, FPU_REG *loaded_data)
{
long s;
int negative;
/* Get a short from user memory */
-int FPU_load_int16(short *_s, FPU_REG *loaded_data)
+int FPU_load_int16(short __user *_s, FPU_REG *loaded_data)
{
int s, negative;
/* Get a packed bcd array from user memory */
-int FPU_load_bcd(u_char *s)
+int FPU_load_bcd(u_char __user *s)
{
FPU_REG *st0_ptr = &st(0);
int pos;
{
l *= 10;
RE_ENTRANT_CHECK_OFF;
- FPU_get_user(bcd, (u_char *) s+pos);
+ FPU_get_user(bcd, s+pos);
RE_ENTRANT_CHECK_ON;
l += bcd >> 4;
l *= 10;
}
RE_ENTRANT_CHECK_OFF;
- FPU_get_user(sign, (u_char *) s+9);
+ FPU_get_user(sign, s+9);
sign = sign & 0x80 ? SIGN_Negative : SIGN_Positive;
RE_ENTRANT_CHECK_ON;
/*===========================================================================*/
/* Put a long double into user memory */
-int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag, long double *d)
+int FPU_store_extended(FPU_REG *st0_ptr, u_char st0_tag, long double __user *d)
{
/*
The only exception raised by an attempt to store to an
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE, d, 10);
- FPU_put_user(st0_ptr->sigl, (unsigned long *) d);
- FPU_put_user(st0_ptr->sigh, (unsigned long *) ((u_char *)d + 4));
- FPU_put_user(exponent16(st0_ptr), (unsigned short *) ((u_char *)d + 8));
+ FPU_put_user(st0_ptr->sigl, (unsigned long __user *) d);
+ FPU_put_user(st0_ptr->sigh, (unsigned long __user *) ((u_char __user *)d + 4));
+ FPU_put_user(exponent16(st0_ptr), (unsigned short __user *) ((u_char __user *)d + 8));
RE_ENTRANT_CHECK_ON;
return 1;
/* Put out the QNaN indefinite */
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,d,10);
- FPU_put_user(0, (unsigned long *) d);
- FPU_put_user(0xc0000000, 1 + (unsigned long *) d);
- FPU_put_user(0xffff, 4 + (short *) d);
+ FPU_put_user(0, (unsigned long __user *) d);
+ FPU_put_user(0xc0000000, 1 + (unsigned long __user *) d);
+ FPU_put_user(0xffff, 4 + (short __user *) d);
RE_ENTRANT_CHECK_ON;
return 1;
}
/* Put a double into user memory */
-int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double *dfloat)
+int FPU_store_double(FPU_REG *st0_ptr, u_char st0_tag, double __user *dfloat)
{
unsigned long l[2];
unsigned long increment = 0; /* avoid gcc warnings */
/* The masked response */
/* Put out the QNaN indefinite */
RE_ENTRANT_CHECK_OFF;
- FPU_verify_area(VERIFY_WRITE,(void *)dfloat,8);
- FPU_put_user(0, (unsigned long *) dfloat);
- FPU_put_user(0xfff80000, 1 + (unsigned long *) dfloat);
+ FPU_verify_area(VERIFY_WRITE,dfloat,8);
+ FPU_put_user(0, (unsigned long __user *) dfloat);
+ FPU_put_user(0xfff80000, 1 + (unsigned long __user *) dfloat);
RE_ENTRANT_CHECK_ON;
return 1;
}
l[1] |= 0x80000000;
RE_ENTRANT_CHECK_OFF;
- FPU_verify_area(VERIFY_WRITE,(void *)dfloat,8);
- FPU_put_user(l[0], (unsigned long *)dfloat);
- FPU_put_user(l[1], 1 + (unsigned long *)dfloat);
+ FPU_verify_area(VERIFY_WRITE,dfloat,8);
+ FPU_put_user(l[0], (unsigned long __user *)dfloat);
+ FPU_put_user(l[1], 1 + (unsigned long __user *)dfloat);
RE_ENTRANT_CHECK_ON;
return 1;
/* Put a float into user memory */
-int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float *single)
+int FPU_store_single(FPU_REG *st0_ptr, u_char st0_tag, float __user *single)
{
long templ = 0;
unsigned long increment = 0; /* avoid gcc warnings */
/* The masked response */
/* Put out the QNaN indefinite */
RE_ENTRANT_CHECK_OFF;
- FPU_verify_area(VERIFY_WRITE,(void *)single,4);
- FPU_put_user(0xffc00000, (unsigned long *) single);
+ FPU_verify_area(VERIFY_WRITE,single,4);
+ FPU_put_user(0xffc00000, (unsigned long __user *) single);
RE_ENTRANT_CHECK_ON;
return 1;
}
templ |= 0x80000000;
RE_ENTRANT_CHECK_OFF;
- FPU_verify_area(VERIFY_WRITE,(void *)single,4);
- FPU_put_user(templ,(unsigned long *) single);
+ FPU_verify_area(VERIFY_WRITE,single,4);
+ FPU_put_user(templ,(unsigned long __user *) single);
RE_ENTRANT_CHECK_ON;
return 1;
/* Put a long long into user memory */
-int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long *d)
+int FPU_store_int64(FPU_REG *st0_ptr, u_char st0_tag, long long __user *d)
{
FPU_REG t;
long long tll;
}
RE_ENTRANT_CHECK_OFF;
- FPU_verify_area(VERIFY_WRITE,(void *)d,8);
+ FPU_verify_area(VERIFY_WRITE,d,8);
copy_to_user(d, &tll, 8);
RE_ENTRANT_CHECK_ON;
/* Put a long into user memory */
-int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long *d)
+int FPU_store_int32(FPU_REG *st0_ptr, u_char st0_tag, long __user *d)
{
FPU_REG t;
int precision_loss;
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,d,4);
- FPU_put_user(t.sigl, (unsigned long *) d);
+ FPU_put_user(t.sigl, (unsigned long __user *) d);
RE_ENTRANT_CHECK_ON;
return 1;
/* Put a short into user memory */
-int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short *d)
+int FPU_store_int16(FPU_REG *st0_ptr, u_char st0_tag, short __user *d)
{
FPU_REG t;
int precision_loss;
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,d,2);
- FPU_put_user((short)t.sigl,(short *) d);
+ FPU_put_user((short)t.sigl, d);
RE_ENTRANT_CHECK_ON;
return 1;
/* Put a packed bcd array into user memory */
-int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char *d)
+int FPU_store_bcd(FPU_REG *st0_ptr, u_char st0_tag, u_char __user *d)
{
FPU_REG t;
unsigned long long ll;
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,d,10);
for ( i = 0; i < 7; i++)
- FPU_put_user(0, (u_char *) d+i); /* These bytes "undefined" */
- FPU_put_user(0xc0, (u_char *) d+7); /* This byte "undefined" */
- FPU_put_user(0xff, (u_char *) d+8);
- FPU_put_user(0xff, (u_char *) d+9);
+ FPU_put_user(0, d+i); /* These bytes "undefined" */
+ FPU_put_user(0xc0, d+7); /* This byte "undefined" */
+ FPU_put_user(0xff, d+8);
+ FPU_put_user(0xff, d+9);
RE_ENTRANT_CHECK_ON;
return 1;
}
b = FPU_div_small(&ll, 10);
b |= (FPU_div_small(&ll, 10)) << 4;
RE_ENTRANT_CHECK_OFF;
- FPU_put_user(b,(u_char *) d+i);
+ FPU_put_user(b, d+i);
RE_ENTRANT_CHECK_ON;
}
RE_ENTRANT_CHECK_OFF;
- FPU_put_user(sign,(u_char *) d+9);
+ FPU_put_user(sign, d+9);
RE_ENTRANT_CHECK_ON;
return 1;
/*===========================================================================*/
-u_char *fldenv(fpu_addr_modes addr_modes, u_char *s)
+u_char __user *fldenv(fpu_addr_modes addr_modes, u_char __user *s)
{
unsigned short tag_word = 0;
u_char tag;
{
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_READ, s, 0x0e);
- FPU_get_user(control_word, (unsigned short *) s);
- FPU_get_user(partial_status, (unsigned short *) (s+2));
- FPU_get_user(tag_word, (unsigned short *) (s+4));
- FPU_get_user(instruction_address.offset, (unsigned short *) (s+6));
- FPU_get_user(instruction_address.selector, (unsigned short *) (s+8));
- FPU_get_user(operand_address.offset, (unsigned short *) (s+0x0a));
- FPU_get_user(operand_address.selector, (unsigned short *) (s+0x0c));
+ FPU_get_user(control_word, (unsigned short __user *) s);
+ FPU_get_user(partial_status, (unsigned short __user *) (s+2));
+ FPU_get_user(tag_word, (unsigned short __user *) (s+4));
+ FPU_get_user(instruction_address.offset, (unsigned short __user *) (s+6));
+ FPU_get_user(instruction_address.selector, (unsigned short __user *) (s+8));
+ FPU_get_user(operand_address.offset, (unsigned short __user *) (s+0x0a));
+ FPU_get_user(operand_address.selector, (unsigned short __user *) (s+0x0c));
RE_ENTRANT_CHECK_ON;
s += 0x0e;
if ( addr_modes.default_mode == VM86 )
{
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_READ, s, 0x1c);
- FPU_get_user(control_word, (unsigned short *) s);
- FPU_get_user(partial_status, (unsigned short *) (s+4));
- FPU_get_user(tag_word, (unsigned short *) (s+8));
- FPU_get_user(instruction_address.offset, (unsigned long *) (s+0x0c));
- FPU_get_user(instruction_address.selector, (unsigned short *) (s+0x10));
- FPU_get_user(instruction_address.opcode, (unsigned short *) (s+0x12));
- FPU_get_user(operand_address.offset, (unsigned long *) (s+0x14));
- FPU_get_user(operand_address.selector, (unsigned long *) (s+0x18));
+ FPU_get_user(control_word, (unsigned short __user *) s);
+ FPU_get_user(partial_status, (unsigned short __user *) (s+4));
+ FPU_get_user(tag_word, (unsigned short __user *) (s+8));
+ FPU_get_user(instruction_address.offset, (unsigned long __user *) (s+0x0c));
+ FPU_get_user(instruction_address.selector, (unsigned short __user *) (s+0x10));
+ FPU_get_user(instruction_address.opcode, (unsigned short __user *) (s+0x12));
+ FPU_get_user(operand_address.offset, (unsigned long __user *) (s+0x14));
+ FPU_get_user(operand_address.selector, (unsigned long __user *) (s+0x18));
RE_ENTRANT_CHECK_ON;
s += 0x1c;
}
}
-void frstor(fpu_addr_modes addr_modes, u_char *data_address)
+void frstor(fpu_addr_modes addr_modes, u_char __user *data_address)
{
int i, regnr;
- u_char *s = fldenv(addr_modes, data_address);
+ u_char __user *s = fldenv(addr_modes, data_address);
int offset = (top & 7) * 10, other = 80 - offset;
/* Copy all registers in stack order. */
}
-u_char *fstenv(fpu_addr_modes addr_modes, u_char *d)
+u_char __user *fstenv(fpu_addr_modes addr_modes, u_char __user *d)
{
if ( (addr_modes.default_mode == VM86) ||
((addr_modes.default_mode == PM16)
RE_ENTRANT_CHECK_OFF;
FPU_verify_area(VERIFY_WRITE,d,14);
#ifdef PECULIAR_486
- FPU_put_user(control_word & ~0xe080, (unsigned long *) d);
+ FPU_put_user(control_word & ~0xe080, (unsigned long __user *) d);
#else
- FPU_put_user(control_word, (unsigned short *) d);
+ FPU_put_user(control_word, (unsigned short __user *) d);
#endif /* PECULIAR_486 */
- FPU_put_user(status_word(), (unsigned short *) (d+2));
- FPU_put_user(fpu_tag_word, (unsigned short *) (d+4));
- FPU_put_user(instruction_address.offset, (unsigned short *) (d+6));
- FPU_put_user(operand_address.offset, (unsigned short *) (d+0x0a));
+ FPU_put_user(status_word(), (unsigned short __user *) (d+2));
+ FPU_put_user(fpu_tag_word, (unsigned short __user *) (d+4));
+ FPU_put_user(instruction_address.offset, (unsigned short __user *) (d+6));
+ FPU_put_user(operand_address.offset, (unsigned short __user *) (d+0x0a));
if ( addr_modes.default_mode == VM86 )
{
FPU_put_user((instruction_address.offset & 0xf0000) >> 4,
- (unsigned short *) (d+8));
+ (unsigned short __user *) (d+8));
FPU_put_user((operand_address.offset & 0xf0000) >> 4,
- (unsigned short *) (d+0x0c));
+ (unsigned short __user *) (d+0x0c));
}
else
{
- FPU_put_user(instruction_address.selector, (unsigned short *) (d+8));
- FPU_put_user(operand_address.selector, (unsigned short *) (d+0x0c));
+ FPU_put_user(instruction_address.selector, (unsigned short __user *) (d+8));
+ FPU_put_user(operand_address.selector, (unsigned short __user *) (d+0x0c));
}
RE_ENTRANT_CHECK_ON;
d += 0x0e;
}
-void fsave(fpu_addr_modes addr_modes, u_char *data_address)
+void fsave(fpu_addr_modes addr_modes, u_char __user *data_address)
{
- u_char *d;
+ u_char __user *d;
int offset = (top & 7) * 10, other = 80 - offset;
d = fstenv(addr_modes, data_address);
.text
.globl fpu_reg_round
-.globl fpu_reg_round_sqrt
.globl fpu_Arith_exit
/* Entry point when called from C */
* physnode_map[4-7] = 1;
* physnode_map[8- ] = -1;
*/
-u8 physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
+s8 physnode_map[MAX_ELEMENTS] = { [0 ... (MAX_ELEMENTS - 1)] = -1};
unsigned long node_start_pfn[MAX_NUMNODES];
unsigned long node_end_pfn[MAX_NUMNODES];
*/
int __init get_memcfg_numa_flat(void)
{
- int pfn;
-
printk("NUMA - single node, flat memory mode\n");
/* Run the memory configuration and find the top of memory. */
node_start_pfn[0] = 0;
node_end_pfn[0] = max_pfn;
- /* Fill in the physnode_map with our simplistic memory model,
- * all memory is in node 0.
- */
- for (pfn = node_start_pfn[0]; pfn <= node_end_pfn[0];
- pfn += PAGES_PER_ELEMENT)
- {
- physnode_map[pfn / PAGES_PER_ELEMENT] = 0;
- }
-
- /* Indicate there is one node available. */
+ /* Indicate there is one node available. */
node_set_online(0);
numnodes = 1;
return 1;
}
/*
- * Allocate memory for the pg_data_t via a crude pre-bootmem method
- * We ought to relocate these onto their own node later on during boot.
+ * Allocate memory for the pg_data_t for this node via a crude pre-bootmem
+ * method. For node zero take this from the bottom of memory, for
+ * subsequent nodes place them at node_remap_start_vaddr which contains
+ * node local data in physically node local memory. See setup_memory()
+ * for details.
*/
static void __init allocate_pgdat(int nid)
{
{
int nid;
unsigned long bootmap_size, system_start_pfn, system_max_low_pfn;
- unsigned long reserve_pages;
+ unsigned long reserve_pages, pfn;
+ /*
+ * When mapping a NUMA machine we allocate the node_mem_map arrays
+ * from node local memory. They are then mapped directly into KVA
+ * between zone normal and vmalloc space. Calculate the size of
+ * this space and use it to adjust the boundary between ZONE_NORMAL
+ * and ZONE_HIGHMEM.
+ */
get_memcfg_numa();
+
+ /* Fill in the physnode_map */
+ for (nid = 0; nid < numnodes; nid++) {
+ printk("Node: %d, start_pfn: %ld, end_pfn: %ld\n",
+ nid, node_start_pfn[nid], node_end_pfn[nid]);
+ printk(" Setting physnode_map array to node %d for pfns:\n ",
+ nid);
+ for (pfn = node_start_pfn[nid]; pfn < node_end_pfn[nid];
+ pfn += PAGES_PER_ELEMENT) {
+ physnode_map[pfn / PAGES_PER_ELEMENT] = nid;
+ printk("%ld ", pfn);
+ }
+ printk("\n");
+ }
+
reserve_pages = calculate_numa_remap_pages();
/* partially used pages are not usable - thus round upwards */
system_start_pfn = min_low_pfn = PFN_UP(init_pg_tables_end);
find_max_pfn();
- system_max_low_pfn = max_low_pfn = find_max_low_pfn();
+ system_max_low_pfn = max_low_pfn = find_max_low_pfn() - reserve_pages;
+ printk("reserve_pages = %ld find_max_low_pfn() ~ %ld\n",
+ reserve_pages, max_low_pfn + reserve_pages);
+ printk("max_pfn = %ld\n", max_pfn);
#ifdef CONFIG_HIGHMEM
highstart_pfn = highend_pfn = max_pfn;
if (max_pfn > system_max_low_pfn)
printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
pages_to_mb(highend_pfn - highstart_pfn));
#endif
- system_max_low_pfn = max_low_pfn = max_low_pfn - reserve_pages;
printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
pages_to_mb(system_max_low_pfn));
printk("min_low_pfn = %ld, max_low_pfn = %ld, highstart_pfn = %ld\n",
(ulong) pfn_to_kaddr(max_low_pfn));
for (nid = 0; nid < numnodes; nid++) {
node_remap_start_vaddr[nid] = pfn_to_kaddr(
- highstart_pfn - node_remap_offset[nid]);
+ (highstart_pfn + reserve_pages) - node_remap_offset[nid]);
allocate_pgdat(nid);
printk ("node %d will remap to vaddr %08lx - %08lx\n", nid,
(ulong) node_remap_start_vaddr[nid],
- (ulong) pfn_to_kaddr(highstart_pfn
+ (ulong) pfn_to_kaddr(highstart_pfn + reserve_pages
- node_remap_offset[nid] + node_remap_size[nid]));
}
printk("High memory starts at vaddr %08lx\n",
(ulong) pfn_to_kaddr(highstart_pfn));
+ vmalloc_earlyreserve = reserve_pages * PAGE_SIZE;
for (nid = 0; nid < numnodes; nid++)
find_max_pfn_node(nid);
void __init set_highmem_pages_init(int bad_ppro)
{
#ifdef CONFIG_HIGHMEM
- int nid;
+ struct zone *zone;
- for (nid = 0; nid < numnodes; nid++) {
+ for_each_zone(zone) {
unsigned long node_pfn, node_high_size, zone_start_pfn;
struct page * zone_mem_map;
- node_high_size = NODE_DATA(nid)->node_zones[ZONE_HIGHMEM].spanned_pages;
- zone_mem_map = NODE_DATA(nid)->node_zones[ZONE_HIGHMEM].zone_mem_map;
- zone_start_pfn = NODE_DATA(nid)->node_zones[ZONE_HIGHMEM].zone_start_pfn;
+ if (!is_highmem(zone))
+ continue;
+
+ printk("Initializing %s for node %d\n", zone->name,
+ zone->zone_pgdat->node_id);
+
+ node_high_size = zone->spanned_pages;
+ zone_mem_map = zone->zone_mem_map;
+ zone_start_pfn = zone->zone_start_pfn;
- printk("Initializing highpages for node %d\n", nid);
for (node_pfn = 0; node_pfn < node_high_size; node_pfn++) {
one_highpage_init((struct page *)(zone_mem_map + node_pfn),
zone_start_pfn + node_pfn, bad_ppro);
#include <asm/system.h>
#include <asm/uaccess.h>
-#include <asm/pgalloc.h>
#include <asm/hardirq.h>
#include <asm/desc.h>
+#include <asm/tlbflush.h>
extern void die(const char *,struct pt_regs *,long);
if (seg & (1<<2)) {
/* Must lock the LDT while reading it. */
down(&current->mm->context.sem);
+#if 1
+ /* Horrible hack for 4/4 disabled kernels.
+ I'm not quite sure what the TLB flush is good for;
+ it's mindlessly copied from the read_ldt code */
+ __flush_tlb_global();
+ desc = kmap(current->mm->context.ldt_pages[(seg&~7)/PAGE_SIZE]);
+ desc = (void *)desc + ((seg & ~7) % PAGE_SIZE);
+#else
desc = current->mm->context.ldt;
desc = (void *)desc + (seg & ~7);
+#endif
} else {
/* Must disable preemption while reading the GDT. */
desc = (u32 *)&cpu_gdt_table[get_cpu()];
(desc[1] & 0xff000000);
if (seg & (1<<2)) {
+#if 1
+ kunmap((void *)((unsigned long)desc & PAGE_MASK));
+#endif
up(&current->mm->context.sem);
} else
put_cpu();
return prefetch;
}
-static inline int is_prefetch(struct pt_regs *regs, unsigned long addr)
+static inline int is_prefetch(struct pt_regs *regs, unsigned long addr,
+ unsigned long error_code)
{
if (unlikely(boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
- boot_cpu_data.x86 >= 6))
+ boot_cpu_data.x86 >= 6)) {
+ /* Catch an obscure case of prefetch inside an NX page. */
+ if (nx_enabled && (error_code & 16))
+ return 0;
return __is_prefetch(regs, addr);
+ }
return 0;
}
* (error_code & 4) == 0, and that the fault was not a
* protection error (error_code & 1) == 0.
*/
+#ifdef CONFIG_X86_4G
+ /*
+ * On 4/4 all kernel faults are either bugs, vmalloc or prefetch
+ */
+ /* If it's vm86 fall through */
+ if (unlikely(!(regs->eflags & VM_MASK) && ((regs->xcs & 3) == 0))) {
+ if (error_code & 3)
+ goto bad_area_nosemaphore;
+ goto vmalloc_fault;
+ }
+#else
if (unlikely(address >= TASK_SIZE)) {
if (!(error_code & 5))
goto vmalloc_fault;
*/
goto bad_area_nosemaphore;
}
+#endif
mm = tsk->mm;
if (in_atomic() || !mm)
goto bad_area_nosemaphore;
- down_read(&mm->mmap_sem);
+ /* When running in the kernel we expect faults to occur only to
+ * addresses in user space. All other faults represent errors in the
+ * kernel and should generate an OOPS. Unfortunately, in the case of an
+ * erroneous fault occurring in a code path which already holds mmap_sem
+ * we will deadlock attempting to validate the fault against the
+ * address space. Luckily the kernel only validly references user
+ * space from well defined areas of code, which are listed in the
+ * exceptions table.
+ *
+ * As the vast majority of faults will be valid we will only perform
+ * the source reference check when there is a possibility of a deadlock.
+ * Attempt to lock the address space, if we cannot we then validate the
+ * source. If this is invalid we can skip the address space check,
+ * thus avoiding the deadlock.
+ */
+ if (!down_read_trylock(&mm->mmap_sem)) {
+ if ((error_code & 4) == 0 &&
+ !search_exception_tables(regs->eip))
+ goto bad_area_nosemaphore;
+ down_read(&mm->mmap_sem);
+ }
vma = find_vma(mm, address);
if (!vma)
* Valid to do another page fault here because this one came
* from user space.
*/
- if (is_prefetch(regs, address))
+ if (is_prefetch(regs, address, error_code))
return;
tsk->thread.cr2 = address;
info.si_signo = SIGSEGV;
info.si_errno = 0;
/* info.si_code has been set above */
- info.si_addr = (void *)address;
+ info.si_addr = (void __user *)address;
force_sig_info(SIGSEGV, &info, tsk);
return;
}
* had been triggered by is_prefetch fixup_exception would have
* handled it.
*/
- if (is_prefetch(regs, address))
+ if (is_prefetch(regs, address, error_code))
return;
/*
bust_spinlocks(1);
+#ifdef CONFIG_X86_PAE
+ if (error_code & 16) {
+ pte_t *pte = lookup_address(address);
+
+ if (pte && pte_present(*pte) && !pte_exec_kernel(*pte))
+ printk(KERN_CRIT "kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", current->uid);
+ }
+#endif
if (address < PAGE_SIZE)
printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
else
goto no_context;
/* User space => ok to do another page fault */
- if (is_prefetch(regs, address))
+ if (is_prefetch(regs, address, error_code))
return;
tsk->thread.cr2 = address;
info.si_signo = SIGBUS;
info.si_errno = 0;
info.si_code = BUS_ADRERR;
- info.si_addr = (void *)address;
+ info.si_addr = (void __user *)address;
force_sig_info(SIGBUS, &info, tsk);
return;
if (!pte_none(*(kmap_pte-idx)))
BUG();
#endif
- set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ /*
+ * If the page is not a normal RAM page, then map it
+ * uncached to be on the safe side - it could be device
+ * memory that must not be prefetched:
+ */
+ if (PageReserved(page))
+ set_pte(kmap_pte-idx, mk_pte(page, kmap_prot_nocache));
+ else
+ set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
__flush_tlb_one(vaddr);
return (void*) vaddr;
}
+/*
+ * page frame number based kmaps - useful for PCI mappings.
+ * NOTE: we map the page with the dont-cache flag.
+ */
+void *kmap_atomic_nocache_pfn(unsigned long pfn, enum km_type type)
+{
+ enum fixed_addresses idx;
+ unsigned long vaddr;
+
+ /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
+ inc_preempt_count();
+ if (pfn < highstart_pfn)
+ return pfn_to_kaddr(pfn);
+
+ idx = type + KM_TYPE_NR*smp_processor_id();
+ vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+#ifdef CONFIG_DEBUG_HIGHMEM
+ if (!pte_none(*(kmap_pte-idx)))
+ BUG();
+#endif
+ set_pte(kmap_pte-idx, pfn_pte(pfn, kmap_prot_nocache));
+ __flush_tlb_one(vaddr);
+
+ return (void*) vaddr;
+}
+
+
void kunmap_atomic(void *kvaddr, enum km_type type)
{
#ifdef CONFIG_DEBUG_HIGHMEM
#include <linux/err.h>
#include <linux/sysctl.h>
#include <asm/mman.h>
-#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
{
pte_t entry;
- mm->rss += (HPAGE_SIZE / PAGE_SIZE);
+ // mm->rss += (HPAGE_SIZE / PAGE_SIZE);
+ vx_rsspages_add(mm, HPAGE_SIZE / PAGE_SIZE);
if (write_access) {
entry =
pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
ptepage = pte_page(entry);
get_page(ptepage);
set_pte(dst_pte, entry);
- dst->rss += (HPAGE_SIZE / PAGE_SIZE);
+ // dst->rss += (HPAGE_SIZE / PAGE_SIZE);
+ vx_rsspages_add(dst, HPAGE_SIZE / PAGE_SIZE);
addr += HPAGE_SIZE;
}
return 0;
struct page *page;
struct vm_area_struct *vma;
- if (! mm->used_hugetlb)
- return ERR_PTR(-EINVAL);
-
vma = find_vma(mm, addr);
if (!vma || !is_vm_hugetlb_page(vma))
return ERR_PTR(-EINVAL);
page = pte_page(pte);
put_page(page);
}
- mm->rss -= (end - start) >> PAGE_SHIFT;
+ // mm->rss -= (end - start) >> PAGE_SHIFT;
+ vx_rsspages_sub(mm, (end - start) >> PAGE_SHIFT);
flush_tlb_range(vma, start, end);
}
ret = -ENOMEM;
goto out;
}
- if (!pte_none(*pte))
- continue;
+
+ if (!pte_none(*pte)) {
+ pmd_t *pmd = (pmd_t *) pte;
+
+ page = pmd_page(*pmd);
+ pmd_clear(pmd);
+ dec_page_state(nr_page_table_pages);
+ page_cache_release(page);
+ }
idx = ((addr - vma->vm_start) >> HPAGE_SHIFT)
+ (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
-#include <asm/pgalloc.h>
#include <asm/dma.h>
#include <asm/fixmap.h>
#include <asm/e820.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/sections.h>
+#include <asm/desc.h>
DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
unsigned long highstart_pfn, highend_pfn;
static int do_test_wp_bit(void);
-/*
- * Creates a middle page table and puts a pointer to it in the
- * given global directory entry. This only returns the gd entry
- * in non-PAE compilation mode, since the middle layer is folded.
- */
-static pmd_t * __init one_md_table_init(pgd_t *pgd)
-{
- pmd_t *pmd_table;
-
-#ifdef CONFIG_X86_PAE
- pmd_table = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
- set_pgd(pgd, __pgd(__pa(pmd_table) | _PAGE_PRESENT));
- if (pmd_table != pmd_offset(pgd, 0))
- BUG();
-#else
- pmd_table = pmd_offset(pgd, 0);
-#endif
-
- return pmd_table;
-}
-
-/*
- * Create a page table and place a pointer to it in a middle page
- * directory entry.
- */
-static pte_t * __init one_page_table_init(pmd_t *pmd)
-{
- if (pmd_none(*pmd)) {
- pte_t *page_table = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
- set_pmd(pmd, __pmd(__pa(page_table) | _PAGE_TABLE));
- if (page_table != pte_offset_kernel(pmd, 0))
- BUG();
-
- return page_table;
- }
-
- return pte_offset_kernel(pmd, 0);
-}
-
-/*
- * This function initializes a certain range of kernel virtual memory
- * with new bootmem page tables, everywhere page tables are missing in
- * the given range.
- */
-
-/*
- * NOTE: The pagetables are allocated contiguous on the physical space
- * so we can cache the place of the first one and move around without
- * checking the pgd every time.
- */
-static void __init page_table_range_init (unsigned long start, unsigned long end, pgd_t *pgd_base)
-{
- pgd_t *pgd;
- pmd_t *pmd;
- int pgd_idx, pmd_idx;
- unsigned long vaddr;
-
- vaddr = start;
- pgd_idx = pgd_index(vaddr);
- pmd_idx = pmd_index(vaddr);
- pgd = pgd_base + pgd_idx;
-
- for ( ; (pgd_idx < PTRS_PER_PGD) && (vaddr != end); pgd++, pgd_idx++) {
- if (pgd_none(*pgd))
- one_md_table_init(pgd);
-
- pmd = pmd_offset(pgd, vaddr);
- for (; (pmd_idx < PTRS_PER_PMD) && (vaddr != end); pmd++, pmd_idx++) {
- if (pmd_none(*pmd))
- one_page_table_init(pmd);
-
- vaddr += PMD_SIZE;
- }
- pmd_idx = 0;
- }
-}
-
-/*
- * This maps the physical memory to kernel virtual address space, a total
- * of max_low_pfn pages, by creating page tables starting from address
- * PAGE_OFFSET.
- */
-static void __init kernel_physical_mapping_init(pgd_t *pgd_base)
-{
- unsigned long pfn;
- pgd_t *pgd;
- pmd_t *pmd;
- pte_t *pte;
- int pgd_idx, pmd_idx, pte_ofs;
-
- pgd_idx = pgd_index(PAGE_OFFSET);
- pgd = pgd_base + pgd_idx;
- pfn = 0;
-
- for (; pgd_idx < PTRS_PER_PGD; pgd++, pgd_idx++) {
- pmd = one_md_table_init(pgd);
- if (pfn >= max_low_pfn)
- continue;
- for (pmd_idx = 0; pmd_idx < PTRS_PER_PMD && pfn < max_low_pfn; pmd++, pmd_idx++) {
- /* Map with big pages if possible, otherwise create normal page tables. */
- if (cpu_has_pse) {
- set_pmd(pmd, pfn_pmd(pfn, PAGE_KERNEL_LARGE));
- pfn += PTRS_PER_PTE;
- } else {
- pte = one_page_table_init(pmd);
-
- for (pte_ofs = 0; pte_ofs < PTRS_PER_PTE && pfn < max_low_pfn; pte++, pfn++, pte_ofs++)
- set_pte(pte, pfn_pte(pfn, PAGE_KERNEL));
- }
- }
- }
-}
-
static inline int page_kills_ppro(unsigned long pagenr)
{
if (pagenr >= 0x70000 && pagenr <= 0x7003F)
extern int is_available_memory(efi_memory_desc_t *);
-static inline int page_is_ram(unsigned long pagenr)
+int page_is_ram(unsigned long pagenr)
{
int i;
unsigned long addr, end;
return 0;
}
-#ifdef CONFIG_HIGHMEM
+/* To enable modules to check if a page is in RAM */
+int pfn_is_ram(unsigned long pfn)
+{
+ return page_is_ram(pfn);
+}
+
+
pte_t *kmap_pte;
-pgprot_t kmap_prot;
-EXPORT_SYMBOL(kmap_prot);
EXPORT_SYMBOL(kmap_pte);
#define kmap_get_fixmap_pte(vaddr) \
void __init kmap_init(void)
{
- unsigned long kmap_vstart;
-
- /* cache the first kmap pte */
- kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
- kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
-
- kmap_prot = PAGE_KERNEL;
-}
-
-void __init permanent_kmaps_init(pgd_t *pgd_base)
-{
- pgd_t *pgd;
- pmd_t *pmd;
- pte_t *pte;
- unsigned long vaddr;
-
- vaddr = PKMAP_BASE;
- page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
-
- pgd = swapper_pg_dir + pgd_index(vaddr);
- pmd = pmd_offset(pgd, vaddr);
- pte = pte_offset_kernel(pmd, vaddr);
- pkmap_page_table = pte;
+ kmap_pte = kmap_get_fixmap_pte(__fix_to_virt(FIX_KMAP_BEGIN));
}
void __init one_highpage_init(struct page *page, int pfn, int bad_ppro)
SetPageReserved(page);
}
+EXPORT_SYMBOL_GPL(page_is_ram);
+
+#ifdef CONFIG_HIGHMEM
+
#ifndef CONFIG_DISCONTIGMEM
void __init set_highmem_pages_init(int bad_ppro)
{
#else
extern void set_highmem_pages_init(int);
#endif /* !CONFIG_DISCONTIGMEM */
-
#else
-#define kmap_init() do { } while (0)
-#define permanent_kmaps_init(pgd_base) do { } while (0)
-#define set_highmem_pages_init(bad_ppro) do { } while (0)
-#endif /* CONFIG_HIGHMEM */
+# define set_highmem_pages_init(bad_ppro) do { } while (0)
+#endif
-unsigned long __PAGE_KERNEL = _PAGE_KERNEL;
+unsigned long long __PAGE_KERNEL = _PAGE_KERNEL;
+unsigned long long __PAGE_KERNEL_EXEC = _PAGE_KERNEL_EXEC;
#ifndef CONFIG_DISCONTIGMEM
#define remap_numa_kva() do {} while (0)
extern void __init remap_numa_kva(void);
#endif
-static void __init pagetable_init (void)
+static __init void prepare_pagetables(pgd_t *pgd_base, unsigned long address)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pgd = pgd_base + pgd_index(address);
+ pmd = pmd_offset(pgd, address);
+ if (!pmd_present(*pmd)) {
+ pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+ set_pmd(pmd, __pmd(_PAGE_TABLE + __pa(pte)));
+ }
+}
+
+static void __init fixrange_init (unsigned long start, unsigned long end, pgd_t *pgd_base)
{
unsigned long vaddr;
- pgd_t *pgd_base = swapper_pg_dir;
+ for (vaddr = start; vaddr != end; vaddr += PAGE_SIZE)
+ prepare_pagetables(pgd_base, vaddr);
+}
+
+void setup_identity_mappings(pgd_t *pgd_base, unsigned long start, unsigned long end)
+{
+ unsigned long vaddr;
+ pgd_t *pgd;
+ int i, j, k;
+ pmd_t *pmd;
+ pte_t *pte, *pte_base;
+
+ pgd = pgd_base;
+
+ for (i = 0; i < PTRS_PER_PGD; pgd++, i++) {
+ vaddr = i*PGDIR_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ pmd = pmd_offset(pgd, 0);
+ for (j = 0; j < PTRS_PER_PMD; pmd++, j++) {
+ vaddr = i*PGDIR_SIZE + j*PMD_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ if (vaddr < start)
+ continue;
+ if (cpu_has_pse) {
+ unsigned long __pe;
+
+ set_in_cr4(X86_CR4_PSE);
+ boot_cpu_data.wp_works_ok = 1;
+ __pe = _KERNPG_TABLE + _PAGE_PSE + vaddr - start;
+ /* Make it "global" too if supported */
+ if (cpu_has_pge) {
+ set_in_cr4(X86_CR4_PGE);
+#if !defined(CONFIG_X86_SWITCH_PAGETABLES)
+ __pe += _PAGE_GLOBAL;
+ __PAGE_KERNEL |= _PAGE_GLOBAL;
+#endif
+ }
+ set_pmd(pmd, __pmd(__pe));
+ continue;
+ }
+ if (!pmd_present(*pmd))
+ pte_base = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+ else
+ pte_base = pte_offset_kernel(pmd, 0);
+ pte = pte_base;
+ for (k = 0; k < PTRS_PER_PTE; pte++, k++) {
+ vaddr = i*PGDIR_SIZE + j*PMD_SIZE + k*PAGE_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ if (vaddr < start)
+ continue;
+ *pte = mk_pte_phys(vaddr-start, PAGE_KERNEL);
+ }
+ set_pmd(pmd, __pmd(_KERNPG_TABLE + __pa(pte_base)));
+ }
+ }
+}
+
+static void __init pagetable_init (void)
+{
+ unsigned long vaddr, end;
+ pgd_t *pgd_base;
#ifdef CONFIG_X86_PAE
int i;
- /* Init entries of the first-level page table to the zero page */
- for (i = 0; i < PTRS_PER_PGD; i++)
- set_pgd(pgd_base + i, __pgd(__pa(empty_zero_page) | _PAGE_PRESENT));
#endif
- /* Enable PSE if available */
- if (cpu_has_pse) {
- set_in_cr4(X86_CR4_PSE);
- }
+ /*
+ * This can be zero as well - no problem, in that case we exit
+ * the loops anyway due to the PTRS_PER_* conditions.
+ */
+ end = (unsigned long)__va(max_low_pfn*PAGE_SIZE);
- /* Enable PGE if available */
- if (cpu_has_pge) {
- set_in_cr4(X86_CR4_PGE);
- __PAGE_KERNEL |= _PAGE_GLOBAL;
+ pgd_base = swapper_pg_dir;
+#ifdef CONFIG_X86_PAE
+ /*
+ * It causes too many problems if there's no proper pmd set up
+ * for all 4 entries of the PGD - so we allocate all of them.
+ * PAE systems will not miss this extra 4-8K anyway ...
+ */
+ for (i = 0; i < PTRS_PER_PGD; i++) {
+ pmd_t *pmd = (pmd_t *) alloc_bootmem_low_pages(PAGE_SIZE);
+ set_pgd(pgd_base + i, __pgd(__pa(pmd) + 0x1));
}
+#endif
+ /*
+ * Set up lowmem-sized identity mappings at PAGE_OFFSET:
+ */
+ setup_identity_mappings(pgd_base, PAGE_OFFSET, end);
- kernel_physical_mapping_init(pgd_base);
+ /*
+ * Add flat-mode identity-mappings - SMP needs it when
+ * starting up on an AP from real-mode. (In the non-PAE
+ * case we already have these mappings through head.S.)
+ * All user-space mappings are explicitly cleared after
+ * SMP startup.
+ */
+#if defined(CONFIG_SMP) && defined(CONFIG_X86_PAE)
+ setup_identity_mappings(pgd_base, 0, 16*1024*1024);
+#endif
remap_numa_kva();
/*
* created - mappings will be set by set_fixmap():
*/
vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
- page_table_range_init(vaddr, 0, pgd_base);
+ fixrange_init(vaddr, 0, pgd_base);
- permanent_kmaps_init(pgd_base);
+#ifdef CONFIG_HIGHMEM
+ {
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
-#ifdef CONFIG_X86_PAE
- /*
- * Add low memory identity-mappings - SMP needs it when
- * starting up on an AP from real-mode. In the non-PAE
- * case we already have these mappings through head.S.
- * All user-space mappings are explicitly cleared after
- * SMP startup.
- */
- pgd_base[0] = pgd_base[USER_PTRS_PER_PGD];
+ /*
+ * Permanent kmaps:
+ */
+ vaddr = PKMAP_BASE;
+ fixrange_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
+
+ pgd = swapper_pg_dir + pgd_index(vaddr);
+ pmd = pmd_offset(pgd, vaddr);
+ pte = pte_offset_kernel(pmd, vaddr);
+ pkmap_page_table = pte;
+ }
#endif
}
-#if defined(CONFIG_PM_DISK) || defined(CONFIG_SOFTWARE_SUSPEND)
/*
- * Swap suspend & friends need this for resume because things like the intel-agp
- * driver might have split up a kernel 4MB mapping.
+ * Clear kernel pagetables in a PMD_SIZE-aligned range.
*/
-char __nosavedata swsusp_pg_dir[PAGE_SIZE]
- __attribute__ ((aligned (PAGE_SIZE)));
-
-static inline void save_pg_dir(void)
-{
- memcpy(swsusp_pg_dir, swapper_pg_dir, PAGE_SIZE);
-}
-#else
-static inline void save_pg_dir(void)
+static void clear_mappings(pgd_t *pgd_base, unsigned long start, unsigned long end)
{
+ unsigned long vaddr;
+ pgd_t *pgd;
+ pmd_t *pmd;
+ int i, j;
+
+ pgd = pgd_base;
+
+ for (i = 0; i < PTRS_PER_PGD; pgd++, i++) {
+ vaddr = i*PGDIR_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ pmd = pmd_offset(pgd, 0);
+ for (j = 0; j < PTRS_PER_PMD; pmd++, j++) {
+ vaddr = i*PGDIR_SIZE + j*PMD_SIZE;
+ if (end && (vaddr >= end))
+ break;
+ if (vaddr < start)
+ continue;
+ pmd_clear(pmd);
+ }
+ }
+ flush_tlb_all();
}
-#endif
-void zap_low_mappings (void)
+void zap_low_mappings(void)
{
- int i;
-
- save_pg_dir();
-
+ printk("zapping low mappings.\n");
/*
* Zap initial low-memory mappings.
- *
- * Note that "pgd_clear()" doesn't do it for
- * us, because pgd_clear() is a no-op on i386.
*/
- for (i = 0; i < USER_PTRS_PER_PGD; i++)
-#ifdef CONFIG_X86_PAE
- set_pgd(swapper_pg_dir+i, __pgd(1 + __pa(empty_zero_page)));
-#else
- set_pgd(swapper_pg_dir+i, __pgd(0));
-#endif
- flush_tlb_all();
+ clear_mappings(swapper_pg_dir, 0, 16*1024*1024);
}
#ifndef CONFIG_DISCONTIGMEM
extern void zone_sizes_init(void);
#endif /* !CONFIG_DISCONTIGMEM */
+static int disable_nx __initdata = 0;
+u64 __supported_pte_mask = ~_PAGE_NX;
+
+/*
+ * noexec = on|off
+ *
+ * Control non executable mappings.
+ *
+ * on Enable
+ * off Disable (disables exec-shield too)
+ */
+static int __init noexec_setup(char *str)
+{
+ if (!strncmp(str, "on",2) && cpu_has_nx) {
+ __supported_pte_mask |= _PAGE_NX;
+ disable_nx = 0;
+ } else if (!strncmp(str,"off",3)) {
+ disable_nx = 1;
+ __supported_pte_mask &= ~_PAGE_NX;
+ exec_shield = 0;
+ }
+ return 1;
+}
+
+__setup("noexec=", noexec_setup);
+
+#ifdef CONFIG_X86_PAE
+int nx_enabled = 0;
+
+static void __init set_nx(void)
+{
+ unsigned int v[4], l, h;
+
+ if (cpu_has_pae && (cpuid_eax(0x80000000) > 0x80000001)) {
+ cpuid(0x80000001, &v[0], &v[1], &v[2], &v[3]);
+ if ((v[3] & (1 << 20)) && !disable_nx) {
+ rdmsr(MSR_EFER, l, h);
+ l |= EFER_NX;
+ wrmsr(MSR_EFER, l, h);
+ nx_enabled = 1;
+ __supported_pte_mask |= _PAGE_NX;
+ }
+ }
+}
+/*
+ * Enables/disables executability of a given kernel page and
+ * returns the previous setting.
+ */
+int __init set_kernel_exec(unsigned long vaddr, int enable)
+{
+ pte_t *pte;
+ int ret = 1;
+
+ if (!nx_enabled)
+ goto out;
+
+ pte = lookup_address(vaddr);
+ BUG_ON(!pte);
+
+ if (!pte_exec_kernel(*pte))
+ ret = 0;
+
+ if (enable)
+ pte->pte_high &= ~(1 << (_PAGE_BIT_NX - 32));
+ else
+ pte->pte_high |= 1 << (_PAGE_BIT_NX - 32);
+ __flush_tlb_all();
+out:
+ return ret;
+}
+
+#endif
+
/*
* paging_init() sets up the page tables - note that the first 8MB are
* already mapped by head.S.
*/
void __init paging_init(void)
{
+#ifdef CONFIG_X86_PAE
+ set_nx();
+ if (nx_enabled)
+ printk("NX (Execute Disable) protection: active\n");
+#endif
+
pagetable_init();
load_cr3(swapper_pg_dir);
set_in_cr4(X86_CR4_PAE);
#endif
__flush_tlb_all();
-
+ /*
+ * Subtle. SMP is doing its boot stuff late (because it has to
+ * fork idle threads) - but it also needs low mappings for the
+ * protected-mode entry to work. We zap these entries only after
+ * the WP-bit has been tested.
+ */
+#ifndef CONFIG_SMP
+ zap_low_mappings();
+#endif
kmap_init();
zone_sizes_init();
}
if (boot_cpu_data.wp_works_ok < 0)
test_wp_bit();
- /*
- * Subtle. SMP is doing it's boot stuff late (because it has to
- * fork idle threads) - but it also needs low mappings for the
- * protected-mode entry to work. We zap these entries only after
- * the WP-bit has been tested.
- */
-#ifndef CONFIG_SMP
- zap_low_mappings();
-#endif
+ entry_trampoline_setup();
+ default_ldt_page = virt_to_page(default_ldt);
+ load_LDT(&init_mm.context);
}
-kmem_cache_t *pgd_cache;
-kmem_cache_t *pmd_cache;
+kmem_cache_t *pgd_cache, *pmd_cache, *kpmd_cache;
void __init pgtable_cache_init(void)
{
+ void (*ctor)(void *, kmem_cache_t *, unsigned long);
+ void (*dtor)(void *, kmem_cache_t *, unsigned long);
+
if (PTRS_PER_PMD > 1) {
pmd_cache = kmem_cache_create("pmd",
PTRS_PER_PMD*sizeof(pmd_t),
NULL);
if (!pmd_cache)
panic("pgtable_cache_init(): cannot create pmd cache");
+
+ if (TASK_SIZE > PAGE_OFFSET) {
+ kpmd_cache = kmem_cache_create("kpmd",
+ PTRS_PER_PMD*sizeof(pmd_t),
+ PTRS_PER_PMD*sizeof(pmd_t),
+ 0,
+ kpmd_ctor,
+ NULL);
+ if (!kpmd_cache)
+ panic("pgtable_cache_init(): "
+ "cannot create kpmd cache");
+ }
}
+
+ if (PTRS_PER_PMD == 1 || TASK_SIZE <= PAGE_OFFSET)
+ ctor = pgd_ctor;
+ else
+ ctor = NULL;
+
+ if (PTRS_PER_PMD == 1 && TASK_SIZE <= PAGE_OFFSET)
+ dtor = pgd_dtor;
+ else
+ dtor = NULL;
+
pgd_cache = kmem_cache_create("pgd",
PTRS_PER_PGD*sizeof(pgd_t),
PTRS_PER_PGD*sizeof(pgd_t),
0,
- pgd_ctor,
- PTRS_PER_PMD == 1 ? pgd_dtor : NULL);
+ ctor,
+ dtor);
if (!pgd_cache)
panic("pgtable_cache_init(): Cannot create pgd cache");
}
* This function cannot be __init, since exceptions don't work in that
* section. Put this after the callers, so that it cannot be inlined.
*/
-static int do_test_wp_bit(void)
+static int noinline do_test_wp_bit(void)
{
char tmp_reg;
int flag;
#include <linux/init.h>
#include <linux/slab.h>
#include <asm/io.h>
-#include <asm/pgalloc.h>
#include <asm/fixmap.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
static struct list_head df_list = LIST_HEAD_INIT(df_list);
-static inline pte_t *lookup_address(unsigned long address)
+pte_t *lookup_address(unsigned long address)
{
pgd_t *pgd = pgd_offset_k(address);
pmd_t *pmd;
static void set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
{
- struct page *page;
- unsigned long flags;
-
set_pte_atomic(kpte, pte); /* change init_mm */
- if (PTRS_PER_PMD > 1)
- return;
-
- spin_lock_irqsave(&pgd_lock, flags);
- for (page = pgd_list; page; page = (struct page *)page->index) {
- pgd_t *pgd;
- pmd_t *pmd;
- pgd = (pgd_t *)page_address(page) + pgd_index(address);
- pmd = pmd_offset(pgd, address);
- set_pte_atomic((pte_t *)pmd, pte);
+#ifndef CONFIG_X86_PAE
+ {
+ struct list_head *l;
+ if (TASK_SIZE > PAGE_OFFSET)
+ return;
+ spin_lock(&mmlist_lock);
+ list_for_each(l, &init_mm.mmlist) {
+ struct mm_struct *mm = list_entry(l, struct mm_struct, mmlist);
+ pmd_t *pmd = pmd_offset(pgd_offset(mm, address), address);
+ set_pte_atomic((pte_t *)pmd, pte);
+ }
+ spin_unlock(&mmlist_lock);
}
- spin_unlock_irqrestore(&pgd_lock, flags);
+#endif
}
/*
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/spinlock.h>
+#include <linux/module.h>
#include <asm/system.h>
#include <asm/pgtable.h>
#include <asm/e820.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
+#include <asm/atomic_kmap.h>
void show_mem(void)
{
printk("Mem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for_each_pgdat(pgdat) {
for (i = 0; i < pgdat->node_spanned_pages; ++i) {
page = pgdat->node_mem_map + i;
printk("%d pages swap cached\n",cached);
}
+EXPORT_SYMBOL_GPL(show_mem);
+
/*
* Associate a virtual page frame with a given physical page frame
* and protection flags for that frame.
memset(pmd, 0, PTRS_PER_PMD*sizeof(pmd_t));
}
+void kpmd_ctor(void *__pmd, kmem_cache_t *cache, unsigned long flags)
+{
+ pmd_t *kpmd, *pmd;
+ kpmd = pmd_offset(&swapper_pg_dir[PTRS_PER_PGD-1],
+ (PTRS_PER_PMD - NR_SHARED_PMDS)*PMD_SIZE);
+ pmd = (pmd_t *)__pmd + (PTRS_PER_PMD - NR_SHARED_PMDS);
+
+ memset(__pmd, 0, (PTRS_PER_PMD - NR_SHARED_PMDS)*sizeof(pmd_t));
+ memcpy(pmd, kpmd, NR_SHARED_PMDS*sizeof(pmd_t));
+}
+
/*
- * List of all pgd's needed for non-PAE so it can invalidate entries
- * in both cached and uncached pgd's; not needed for PAE since the
- * kernel pmd is shared. If PAE were not to share the pmd a similar
- * tactic would be needed. This is essentially codepath-based locking
+ * List of all pgd's needed so it can invalidate entries in both cached
+ * and uncached pgd's. This is essentially codepath-based locking
* against pageattr.c; it is the unique case in which a valid change
* of kernel pagetables can't be lazily synchronized by vmalloc faults.
* vmalloc faults work because attached pagetables are never freed.
* checks at dup_mmap(), exec(), and other mmlist addition points
* could be used. The locking scheme was chosen on the basis of
* manfred's recommendations and having no core impact whatsoever.
+ *
+ * Lexicon for #ifdefless conditions to config options:
+ * (a) PTRS_PER_PMD == 1 means non-PAE.
+ * (b) PTRS_PER_PMD > 1 means PAE.
+ * (c) TASK_SIZE > PAGE_OFFSET means 4:4.
+ * (d) TASK_SIZE <= PAGE_OFFSET means non-4:4.
* -- wli
*/
spinlock_t pgd_lock = SPIN_LOCK_UNLOCKED;
next->private = (unsigned long)pprev;
}
-void pgd_ctor(void *pgd, kmem_cache_t *cache, unsigned long unused)
+void pgd_ctor(void *__pgd, kmem_cache_t *cache, unsigned long unused)
{
+ pgd_t *pgd = __pgd;
unsigned long flags;
- if (PTRS_PER_PMD == 1)
- spin_lock_irqsave(&pgd_lock, flags);
+ if (PTRS_PER_PMD == 1) {
+ if (TASK_SIZE <= PAGE_OFFSET)
+ spin_lock_irqsave(&pgd_lock, flags);
+ else
+ memcpy(&pgd[PTRS_PER_PGD - NR_SHARED_PMDS],
+ &swapper_pg_dir[PTRS_PER_PGD - NR_SHARED_PMDS],
+ NR_SHARED_PMDS*sizeof(pgd_t));
+ }
- memcpy((pgd_t *)pgd + USER_PTRS_PER_PGD,
- swapper_pg_dir + USER_PTRS_PER_PGD,
- (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
+ if (TASK_SIZE <= PAGE_OFFSET)
+ memcpy(&pgd[USER_PTRS_PER_PGD],
+ &swapper_pg_dir[USER_PTRS_PER_PGD],
+ (PTRS_PER_PGD - USER_PTRS_PER_PGD)*sizeof(pgd_t));
if (PTRS_PER_PMD > 1)
return;
- pgd_list_add(pgd);
- spin_unlock_irqrestore(&pgd_lock, flags);
- memset(pgd, 0, USER_PTRS_PER_PGD*sizeof(pgd_t));
+ if (TASK_SIZE > PAGE_OFFSET)
+ memset(pgd, 0, (PTRS_PER_PGD - NR_SHARED_PMDS)*sizeof(pgd_t));
+ else {
+ pgd_list_add(pgd);
+ spin_unlock_irqrestore(&pgd_lock, flags);
+ memset(pgd, 0, USER_PTRS_PER_PGD*sizeof(pgd_t));
+ }
}
-/* never called when PTRS_PER_PMD > 1 */
+/* Never called when PTRS_PER_PMD > 1 || TASK_SIZE > PAGE_OFFSET */
void pgd_dtor(void *pgd, kmem_cache_t *cache, unsigned long unused)
{
unsigned long flags; /* can be called from interrupt context */
if (PTRS_PER_PMD == 1 || !pgd)
return pgd;
+ /*
+ * In the 4G userspace case alias the top 16 MB virtual
+ * memory range into the user mappings as well (these
+ * include the trampoline and CPU data structures).
+ */
for (i = 0; i < USER_PTRS_PER_PGD; ++i) {
- pmd_t *pmd = kmem_cache_alloc(pmd_cache, GFP_KERNEL);
+ pmd_t *pmd;
+
+ if (TASK_SIZE > PAGE_OFFSET && i == USER_PTRS_PER_PGD - 1)
+ pmd = kmem_cache_alloc(kpmd_cache, GFP_KERNEL);
+ else
+ pmd = kmem_cache_alloc(pmd_cache, GFP_KERNEL);
+
if (!pmd)
goto out_oom;
set_pgd(&pgd[i], __pgd(1 + __pa((u64)((u32)pmd))));
}
- return pgd;
+ return pgd;
out_oom:
+ /*
+ * we don't have to handle the kpmd_cache here, since it's the
+ * last allocation; either there is nothing to free, or the
+ * allocation succeeded and the whole operation succeeds with it.
+ */
for (i--; i >= 0; i--)
kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
kmem_cache_free(pgd_cache, pgd);
{
int i;
- /* in the PAE case user pgd entries are overwritten before usage */
- if (PTRS_PER_PMD > 1)
- for (i = 0; i < USER_PTRS_PER_PGD; ++i)
- kmem_cache_free(pmd_cache, (void *)__va(pgd_val(pgd[i])-1));
/* in the non-PAE case, clear_page_tables() clears user pgd entries */
+ if (PTRS_PER_PMD == 1)
+ goto out_free;
+
+ /* in the PAE case user pgd entries are overwritten before usage */
+ for (i = 0; i < USER_PTRS_PER_PGD; ++i) {
+ pmd_t *pmd = __va(pgd_val(pgd[i]) - 1);
+
+ /*
+ * only userspace pmd's are cleared for us
+ * by mm/memory.c; it's a slab cache invariant
+ * that we must keep the kernel pmd slab
+ * separate at all times, else we'll have bad pmd's.
+ */
+ if (TASK_SIZE > PAGE_OFFSET && i == USER_PTRS_PER_PGD - 1)
+ kmem_cache_free(kpmd_cache, pmd);
+ else
+ kmem_cache_free(pmd_cache, pmd);
+ }
+out_free:
kmem_cache_free(pgd_cache, pgd);
}
+
unsigned int escr = 0;
unsigned int high = 0;
unsigned int counter_bit;
- struct p4_event_binding * ev = 0;
+ struct p4_event_binding *ev = NULL;
unsigned int stag;
stag = get_stagger();
#include <linux/pci.h>
#include <linux/acpi.h>
#include <linux/init.h>
+#include <linux/irq.h>
+#include <asm/hw_irq.h>
#include "pci.h"
struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_device *device, int domain, int busnum)
static int __init pci_acpi_init(void)
{
+ struct pci_dev *dev = NULL;
+
if (pcibios_scanned)
return 0;
- if (!acpi_noirq) {
- if (!acpi_pci_irq_init()) {
- printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
- pcibios_scanned++;
- pcibios_enable_irq = acpi_pci_irq_enable;
- } else
- printk(KERN_WARNING "PCI: Invalid ACPI-PCI IRQ routing table\n");
+ if (acpi_noirq)
+ return 0;
- }
+ printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
+ acpi_irq_penalty_init();
+ pcibios_scanned++;
+ pcibios_enable_irq = acpi_pci_irq_enable;
+
+ /*
+ * PCI IRQ routing is set up by pci_enable_device(), but we
+ * also do it here in case there are still broken drivers that
+ * don't use pci_enable_device().
+ */
+ while ((dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL)
+ acpi_pci_irq_enable(dev);
+
+#ifdef CONFIG_X86_IO_APIC
+ if (acpi_ioapic)
+ print_IO_APIC();
+#endif
return 0;
}
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
+#include <linux/dmi.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/io_apic.h>
#define PIRQ_SIGNATURE (('$' << 0) + ('P' << 8) + ('I' << 16) + ('R' << 24))
#define PIRQ_VERSION 0x0100
-int broken_hp_bios_irq9;
+static int broken_hp_bios_irq9;
+static int acer_tm360_irqrouting;
static struct irq_routing_table *pirq_table;
r->set(pirq_router_dev, dev, pirq, 11);
}
+ /* same for Acer Travelmate 360, but with CB and irq 11 -> 10 */
+ if (acer_tm360_irqrouting && dev->irq == 11 && dev->vendor == PCI_VENDOR_ID_O2) {
+ pirq = 0x68;
+ mask = 0x400;
+ dev->irq = r->get(pirq_router_dev, dev, pirq);
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
+ }
+
/*
* Find the best IRQ to assign: use the one
* reported by the device if possible.
}
}
+/*
+ * Work around broken HP Pavilion Notebooks which assign USB to
+ * IRQ 9 even though it is actually wired to IRQ 11
+ */
+static int __init fix_broken_hp_bios_irq9(struct dmi_system_id *d)
+{
+ if (!broken_hp_bios_irq9) {
+ broken_hp_bios_irq9 = 1;
+ printk(KERN_INFO "%s detected - fixing broken IRQ routing\n", d->ident);
+ }
+ return 0;
+}
+
+/*
+ * Work around broken Acer TravelMate 360 Notebooks which assign
+ * Cardbus to IRQ 11 even though it is actually wired to IRQ 10
+ */
+static int __init fix_acer_tm360_irqrouting(struct dmi_system_id *d)
+{
+ if (!acer_tm360_irqrouting) {
+ acer_tm360_irqrouting = 1;
+ printk(KERN_INFO "%s detected - fixing broken IRQ routing\n", d->ident);
+ }
+ return 0;
+}
+
+static struct dmi_system_id __initdata pciirq_dmi_table[] = {
+ {
+ .callback = fix_broken_hp_bios_irq9,
+ .ident = "HP Pavilion N5400 Series Laptop",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+ DMI_MATCH(DMI_BIOS_VERSION, "GE.M1.03"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "HP Pavilion Notebook Model GE"),
+ DMI_MATCH(DMI_BOARD_VERSION, "OmniBook N32N-736"),
+ },
+ },
+ {
+ .callback = fix_acer_tm360_irqrouting,
+ .ident = "Acer TravelMate 36x Laptop",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
+ },
+ },
+ { }
+};
+
static int __init pcibios_irq_init(void)
{
DBG("PCI: IRQ init\n");
if (pcibios_enable_irq || raw_pci_ops == NULL)
return 0;
+ dmi_check_system(pciirq_dmi_table);
+
pirq_table = pirq_find_routing_table();
#ifdef CONFIG_PCI_BIOS
/* VIA bridges use interrupt line for apic/pci steering across
the V-Link */
else if (interrupt_line_quirk)
- pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq & 15);
return 0;
}
#include "pci.h"
-int broken_hp_bios_irq9;
-
extern struct pci_raw_ops pci_direct_conf1;
static int pci_visws_enable_irq(struct pci_dev *dev) { return 0; }
#include <asm/tlbflush.h>
static struct saved_context saved_context;
-static void fix_processor_context(void);
unsigned long saved_context_eax, saved_context_ebx;
unsigned long saved_context_ecx, saved_context_edx;
extern void enable_sep_cpu(void *);
-void save_processor_state(void)
+void __save_processor_state(struct saved_context *ctxt)
{
kernel_fpu_begin();
/*
* descriptor tables
*/
- asm volatile ("sgdt %0" : "=m" (saved_context.gdt_limit));
- asm volatile ("sidt %0" : "=m" (saved_context.idt_limit));
- asm volatile ("sldt %0" : "=m" (saved_context.ldt));
- asm volatile ("str %0" : "=m" (saved_context.tr));
+ asm volatile ("sgdt %0" : "=m" (ctxt->gdt_limit));
+ asm volatile ("sidt %0" : "=m" (ctxt->idt_limit));
+ asm volatile ("sldt %0" : "=m" (ctxt->ldt));
+ asm volatile ("str %0" : "=m" (ctxt->tr));
/*
* segment registers
*/
- asm volatile ("movw %%es, %0" : "=m" (saved_context.es));
- asm volatile ("movw %%fs, %0" : "=m" (saved_context.fs));
- asm volatile ("movw %%gs, %0" : "=m" (saved_context.gs));
- asm volatile ("movw %%ss, %0" : "=m" (saved_context.ss));
+ asm volatile ("movw %%es, %0" : "=m" (ctxt->es));
+ asm volatile ("movw %%fs, %0" : "=m" (ctxt->fs));
+ asm volatile ("movw %%gs, %0" : "=m" (ctxt->gs));
+ asm volatile ("movw %%ss, %0" : "=m" (ctxt->ss));
/*
* control registers
*/
- asm volatile ("movl %%cr0, %0" : "=r" (saved_context.cr0));
- asm volatile ("movl %%cr2, %0" : "=r" (saved_context.cr2));
- asm volatile ("movl %%cr3, %0" : "=r" (saved_context.cr3));
- asm volatile ("movl %%cr4, %0" : "=r" (saved_context.cr4));
+ asm volatile ("movl %%cr0, %0" : "=r" (ctxt->cr0));
+ asm volatile ("movl %%cr2, %0" : "=r" (ctxt->cr2));
+ asm volatile ("movl %%cr3, %0" : "=r" (ctxt->cr3));
+ asm volatile ("movl %%cr4, %0" : "=r" (ctxt->cr4));
+}
+
+void save_processor_state(void)
+{
+ __save_processor_state(&saved_context);
}
static void
mxcsr_feature_mask_init();
}
-void restore_processor_state(void)
+
+static void fix_processor_context(void)
+{
+ int cpu = smp_processor_id();
+
+ cpu_gdt_table[cpu][GDT_ENTRY_TSS].b &= 0xfffffdff;
+
+ load_TR_desc(); /* This does ltr */
+ load_LDT(&current->active_mm->context); /* This does lldt */
+
+ /*
+ * Now maybe reload the debug registers
+ */
+ if (current->thread.debugreg[7]){
+ loaddebug(&current->thread, 0);
+ loaddebug(&current->thread, 1);
+ loaddebug(&current->thread, 2);
+ loaddebug(&current->thread, 3);
+ /* no 4 and 5 */
+ loaddebug(&current->thread, 6);
+ loaddebug(&current->thread, 7);
+ }
+
+}
+
+void __restore_processor_state(struct saved_context *ctxt)
{
/*
* control registers
*/
- asm volatile ("movl %0, %%cr4" :: "r" (saved_context.cr4));
- asm volatile ("movl %0, %%cr3" :: "r" (saved_context.cr3));
- asm volatile ("movl %0, %%cr2" :: "r" (saved_context.cr2));
- asm volatile ("movl %0, %%cr0" :: "r" (saved_context.cr0));
+ asm volatile ("movl %0, %%cr4" :: "r" (ctxt->cr4));
+ asm volatile ("movl %0, %%cr3" :: "r" (ctxt->cr3));
+ asm volatile ("movl %0, %%cr2" :: "r" (ctxt->cr2));
+ asm volatile ("movl %0, %%cr0" :: "r" (ctxt->cr0));
/*
* segment registers
*/
- asm volatile ("movw %0, %%es" :: "r" (saved_context.es));
- asm volatile ("movw %0, %%fs" :: "r" (saved_context.fs));
- asm volatile ("movw %0, %%gs" :: "r" (saved_context.gs));
- asm volatile ("movw %0, %%ss" :: "r" (saved_context.ss));
+ asm volatile ("movw %0, %%es" :: "r" (ctxt->es));
+ asm volatile ("movw %0, %%fs" :: "r" (ctxt->fs));
+ asm volatile ("movw %0, %%gs" :: "r" (ctxt->gs));
+ asm volatile ("movw %0, %%ss" :: "r" (ctxt->ss));
/*
* now restore the descriptor tables to their proper values
* ltr is done in fix_processor_context().
*/
- asm volatile ("lgdt %0" :: "m" (saved_context.gdt_limit));
- asm volatile ("lidt %0" :: "m" (saved_context.idt_limit));
- asm volatile ("lldt %0" :: "m" (saved_context.ldt));
+ asm volatile ("lgdt %0" :: "m" (ctxt->gdt_limit));
+ asm volatile ("lidt %0" :: "m" (ctxt->idt_limit));
+ asm volatile ("lldt %0" :: "m" (ctxt->ldt));
/*
* sysenter MSRs
do_fpu_end();
}
-static void fix_processor_context(void)
+void restore_processor_state(void)
{
- int cpu = smp_processor_id();
- struct tss_struct * t = init_tss + cpu;
-
- set_tss_desc(cpu,t); /* This just modifies memory; should not be necessary. But... This is necessary, because 386 hardware has concept of busy TSS or some similar stupidity. */
- cpu_gdt_table[cpu][GDT_ENTRY_TSS].b &= 0xfffffdff;
-
- load_TR_desc(); /* This does ltr */
- load_LDT(&current->active_mm->context); /* This does lldt */
-
- /*
- * Now maybe reload the debug registers
- */
- if (current->thread.debugreg[7]){
- loaddebug(&current->thread, 0);
- loaddebug(&current->thread, 1);
- loaddebug(&current->thread, 2);
- loaddebug(&current->thread, 3);
- /* no 4 and 5 */
- loaddebug(&current->thread, 6);
- loaddebug(&current->thread, 7);
- }
-
+ __restore_processor_state(&saved_context);
}
+
EXPORT_SYMBOL(save_processor_state);
EXPORT_SYMBOL(restore_processor_state);
ENTRY(do_magic)
pushl %ebx
cmpl $0,8(%esp)
- jne .L1450
+ jne resume
call do_magic_suspend_1
call save_processor_state
pushfl ; popl saved_context_eflags
call do_magic_suspend_2
- jmp .L1449
- .p2align 4,,7
-.L1450:
+ popl %ebx
+ ret
+
+resume:
movl $swsusp_pg_dir-__PAGE_OFFSET,%ecx
movl %ecx,%cr3
call do_magic_resume_1
movl $0,loop
cmpl $0,nr_copy_pages
- je .L1453
- .p2align 4,,7
-.L1455:
+ je copy_done
+copy_loop:
movl $0,loop2
.p2align 4,,7
-.L1459:
+copy_one_page:
movl pagedir_nosave,%ecx
movl loop,%eax
movl loop2,%edx
movl (%ecx,%eax),%eax
movb (%edx,%eax),%al
movb %al,(%edx,%ebx)
- movl %cr3, %eax;
- movl %eax, %cr3; # flush TLB
movl loop2,%eax
leal 1(%eax),%edx
movl %edx,loop2
movl %edx,%eax
cmpl $4095,%eax
- jbe .L1459
+ jbe copy_one_page
movl loop,%eax
leal 1(%eax),%edx
movl %edx,loop
movl %edx,%eax
cmpl nr_copy_pages,%eax
- jb .L1455
- .p2align 4,,7
-.L1453:
+ jb copy_loop
+
+copy_done:
movl $__USER_DS,%eax
movw %ax, %ds
call restore_processor_state
pushl saved_context_eflags ; popfl
call do_magic_resume_2
-.L1449:
popl %ebx
ret
config IA64_GENERIC
bool "generic"
- select NUMA
- select ACPI_NUMA
select VIRTUAL_MEM_MAP
- select DISCONTIGMEM
help
This selects the system type of your hardware. A "generic" kernel
will run on any supported IA-64 system. However, if you configure
default "6" if ITANIUM
# align cache-sensitive data to 64 bytes
-config MCKINLEY_ASTEP_SPECIFIC
- bool "McKinley A-step specific code"
- depends on MCKINLEY
- help
- Select this option to build a kernel for an IA-64 McKinley prototype
- system with any A-stepping CPU.
-
-config MCKINLEY_A0_SPECIFIC
- bool "McKinley A0/A1-step specific code"
- depends on MCKINLEY_ASTEP_SPECIFIC
- help
- Select this option to build a kernel for an IA-64 McKinley prototype
- system with an A0 or A1 stepping CPU.
-
config NUMA
bool "NUMA support"
depends on !IA64_HP_SIM
config IA64_GRANULE_64MB
bool "64MB"
- depends on !(IA64_GENERIC || IA64_HP_ZX1)
+ depends on !(IA64_GENERIC || IA64_HP_ZX1 || IA64_SGI_SN2)
endchoice
default y
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
ifeq ($(GCC_VERSION),3)
ifeq ($(GCC_MINOR_VERSION),4)
- cflags-$(CONFIG_ITANIUM) += -mtune=merced
+# Work around Itanium 1 bugs in gcc 3.4.
+# cflags-$(CONFIG_ITANIUM) += -mtune=merced
cflags-$(CONFIG_MCKINLEY) += -mtune=mckinley
endif
endif
core-$(CONFIG_IA64_DIG) += arch/ia64/dig/
core-$(CONFIG_IA64_GENERIC) += arch/ia64/dig/
core-$(CONFIG_IA64_HP_ZX1) += arch/ia64/dig/
-core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/
-
+ifeq ($(CONFIG_DISCONTIGMEM),y)
+ core-$(CONFIG_IA64_SGI_SN2) += arch/ia64/sn/
+endif
drivers-$(CONFIG_PCI) += arch/ia64/pci/
drivers-$(CONFIG_IA64_HP_SIM) += arch/ia64/hp/sim/
drivers-$(CONFIG_IA64_HP_ZX1) += arch/ia64/hp/common/ arch/ia64/hp/zx1/
-drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/ arch/ia64/sn/
+drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/hp/common/ arch/ia64/hp/zx1/ arch/ia64/hp/sim/
+ifeq ($(CONFIG_DISCONTIGMEM),y)
+drivers-$(CONFIG_IA64_GENERIC) += arch/ia64/sn/
+endif
drivers-$(CONFIG_OPROFILE) += arch/ia64/oprofile/
boot := arch/ia64/hp/sim/boot
all: compressed unwcheck
+bzImage: compressed
+ mkdir -p arch/ia64/boot
+ cp vmlinux.gz arch/ia64/boot/bzImage
+
compressed: vmlinux.gz
vmlinux.gz: vmlinux
#
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_HOTPLUG=y
# CONFIG_IKCONFIG is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
#
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_IA64_L1_CACHE_SHIFT=7
-# CONFIG_MCKINLEY_ASTEP_SPECIFIC is not set
CONFIG_NUMA=y
CONFIG_VIRTUAL_MEM_MAP=y
CONFIG_DISCONTIGMEM=y
CONFIG_FORCE_MAX_ZONEORDER=18
CONFIG_SMP=y
CONFIG_NR_CPUS=512
+# CONFIG_HOTPLUG_CPU is not set
# CONFIG_PREEMPT is not set
CONFIG_HAVE_DEC_LOCK=y
CONFIG_IA32_SUPPORT=y
CONFIG_COMPAT=y
CONFIG_PERFMON=y
CONFIG_IA64_PALINFO=y
+
+#
+# Firmware Drivers
+#
CONFIG_EFI_VARS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
+# CONFIG_PCI_USE_VECTOR is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
#
# Generic Driver Options
#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=m
# CONFIG_DEBUG_DRIVER is not set
#
# CONFIG_BLK_CPQ_DA is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
-CONFIG_BLK_DEV_DAC960=m
-CONFIG_BLK_DEV_UMEM=m
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_NBD=m
-# CONFIG_BLK_DEV_CARMEL is not set
+# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_INITRD=y
#
# Please see Documentation/ide.txt for help/info on IDE drives
#
-CONFIG_BLK_DEV_IDEDISK=y
-# CONFIG_IDEDISK_MULTI_MODE is not set
-# CONFIG_IDEDISK_STROKE is not set
-CONFIG_BLK_DEV_IDECD=m
-CONFIG_BLK_DEV_IDETAPE=m
-CONFIG_BLK_DEV_IDEFLOPPY=y
-CONFIG_BLK_DEV_IDESCSI=m
+# CONFIG_BLK_DEV_IDE_SATA is not set
+# CONFIG_BLK_DEV_IDEDISK is not set
+CONFIG_BLK_DEV_IDECD=y
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_BLK_DEV_IDESCSI is not set
# CONFIG_IDE_TASK_IOCTL is not set
# CONFIG_IDE_TASKFILE_IO is not set
#
CONFIG_IDE_GENERIC=y
CONFIG_BLK_DEV_IDEPCI=y
-CONFIG_IDEPCI_SHARE_IRQ=y
+# CONFIG_IDEPCI_SHARE_IRQ is not set
# CONFIG_BLK_DEV_OFFBOARD is not set
-CONFIG_BLK_DEV_GENERIC=y
+# CONFIG_BLK_DEV_GENERIC is not set
# CONFIG_BLK_DEV_OPTI621 is not set
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
# CONFIG_BLK_DEV_AEC62XX is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD74XX is not set
-CONFIG_BLK_DEV_CMD64X=m
+# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_TRIFLEX is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5520 is not set
# CONFIG_BLK_DEV_CS5530 is not set
-CONFIG_BLK_DEV_HPT34X=m
-CONFIG_HPT34X_AUTODMA=y
-CONFIG_BLK_DEV_HPT366=m
+# CONFIG_BLK_DEV_HPT34X is not set
+# CONFIG_BLK_DEV_HPT366 is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_PDC202XX_NEW is not set
-CONFIG_BLK_DEV_SVWKS=m
+# CONFIG_BLK_DEV_SVWKS is not set
CONFIG_BLK_DEV_SGIIOC4=y
# CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
CONFIG_IDEDMA_AUTO=y
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
-CONFIG_CHR_DEV_OSST=m
-CONFIG_BLK_DEV_SR=y
+# CONFIG_CHR_DEV_OSST is not set
+CONFIG_BLK_DEV_SR=m
# CONFIG_BLK_DEV_SR_VENDOR is not set
CONFIG_CHR_DEV_SG=m
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
# CONFIG_SCSI_MULTI_LUN is not set
-CONFIG_SCSI_REPORT_LUNS=y
CONFIG_SCSI_CONSTANTS=y
# CONFIG_SCSI_LOGGING is not set
# SCSI low-level drivers
#
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
-CONFIG_SCSI_AIC7XXX=m
-CONFIG_AIC7XXX_CMDS_PER_DEVICE=32
-CONFIG_AIC7XXX_RESET_DELAY_MS=15000
-# CONFIG_AIC7XXX_BUILD_FIRMWARE is not set
-CONFIG_AIC7XXX_DEBUG_ENABLE=y
-CONFIG_AIC7XXX_DEBUG_MASK=0
-CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
-CONFIG_SCSI_AIC7XXX_OLD=m
-CONFIG_SCSI_AIC79XX=m
-CONFIG_AIC79XX_CMDS_PER_DEVICE=32
-CONFIG_AIC79XX_RESET_DELAY_MS=15000
-# CONFIG_AIC79XX_BUILD_FIRMWARE is not set
-# CONFIG_AIC79XX_ENABLE_RD_STRM is not set
-CONFIG_AIC79XX_DEBUG_ENABLE=y
-CONFIG_AIC79XX_DEBUG_MASK=0
-CONFIG_AIC79XX_REG_PRETTY_PRINT=y
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_ADVANSYS is not set
-CONFIG_SCSI_MEGARAID=m
+# CONFIG_SCSI_MEGARAID is not set
CONFIG_SCSI_SATA=y
-CONFIG_SCSI_SATA_SVW=m
-CONFIG_SCSI_ATA_PIIX=m
-CONFIG_SCSI_SATA_PROMISE=m
+# CONFIG_SCSI_SATA_SVW is not set
+# CONFIG_SCSI_ATA_PIIX is not set
+# CONFIG_SCSI_SATA_NV is not set
+# CONFIG_SCSI_SATA_PROMISE is not set
+# CONFIG_SCSI_SATA_SX4 is not set
# CONFIG_SCSI_SATA_SIL is not set
-CONFIG_SCSI_SATA_VIA=m
+# CONFIG_SCSI_SATA_SIS is not set
+# CONFIG_SCSI_SATA_VIA is not set
CONFIG_SCSI_SATA_VITESSE=y
# CONFIG_SCSI_BUSLOGIC is not set
-# CONFIG_SCSI_CPQFCTS is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INIA100 is not set
-CONFIG_SCSI_SYM53C8XX_2=m
-CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
-CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
-CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
-# CONFIG_SCSI_SYM53C8XX_IOMAPPED is not set
+# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_SCSI_QLA22XX=y
CONFIG_SCSI_QLA2300=y
CONFIG_SCSI_QLA2322=y
-CONFIG_SCSI_QLA6312=y
-CONFIG_SCSI_QLA6322=y
+# CONFIG_SCSI_QLA6312 is not set
+# CONFIG_SCSI_QLA6322 is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
CONFIG_MD_MULTIPATH=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=m
+CONFIG_DM_MIRROR=m
+CONFIG_DM_ZERO=m
#
# Fusion MPT device support
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_ISENSE=m
CONFIG_FUSION_CTL=m
-# CONFIG_FUSION_LAN is not set
#
# IEEE 1394 (FireWire) support
#
# I2O device support
#
+# CONFIG_I2O is not set
#
# Networking support
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-# CONFIG_IP_ROUTE_FWMARK is not set
-CONFIG_IP_ROUTE_NAT=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_TOS=y
-CONFIG_IP_ROUTE_VERBOSE=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
-CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE=m
-# CONFIG_NET_IPGRE_BROADCAST is not set
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_ARPD=y
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
-CONFIG_INET_AH=m
-CONFIG_INET_ESP=m
-CONFIG_INET_IPCOMP=m
-
-#
-# IP: Virtual Server Configuration
-#
-# CONFIG_IP_VS is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
CONFIG_IPV6=m
-CONFIG_IPV6_PRIVACY=y
-CONFIG_INET6_AH=m
-CONFIG_INET6_ESP=m
-CONFIG_INET6_IPCOMP=m
-CONFIG_IPV6_TUNNEL=m
-# CONFIG_DECNET is not set
-CONFIG_BRIDGE=m
-CONFIG_NETFILTER=y
-# CONFIG_NETFILTER_DEBUG is not set
-CONFIG_BRIDGE_NETFILTER=y
-
-#
-# IP: Netfilter Configuration
-#
-CONFIG_IP_NF_CONNTRACK=m
-CONFIG_IP_NF_FTP=m
-CONFIG_IP_NF_IRC=m
-CONFIG_IP_NF_TFTP=m
-CONFIG_IP_NF_AMANDA=m
-CONFIG_IP_NF_QUEUE=m
-CONFIG_IP_NF_IPTABLES=m
-CONFIG_IP_NF_MATCH_LIMIT=m
-CONFIG_IP_NF_MATCH_IPRANGE=m
-CONFIG_IP_NF_MATCH_MAC=m
-CONFIG_IP_NF_MATCH_PKTTYPE=m
-CONFIG_IP_NF_MATCH_MARK=m
-CONFIG_IP_NF_MATCH_MULTIPORT=m
-CONFIG_IP_NF_MATCH_TOS=m
-CONFIG_IP_NF_MATCH_RECENT=m
-CONFIG_IP_NF_MATCH_ECN=m
-CONFIG_IP_NF_MATCH_DSCP=m
-CONFIG_IP_NF_MATCH_AH_ESP=m
-CONFIG_IP_NF_MATCH_LENGTH=m
-CONFIG_IP_NF_MATCH_TTL=m
-CONFIG_IP_NF_MATCH_TCPMSS=m
-CONFIG_IP_NF_MATCH_HELPER=m
-CONFIG_IP_NF_MATCH_STATE=m
-CONFIG_IP_NF_MATCH_CONNTRACK=m
-CONFIG_IP_NF_MATCH_OWNER=m
-CONFIG_IP_NF_MATCH_PHYSDEV=m
-CONFIG_IP_NF_FILTER=m
-CONFIG_IP_NF_TARGET_REJECT=m
-CONFIG_IP_NF_NAT=m
-CONFIG_IP_NF_NAT_NEEDED=y
-CONFIG_IP_NF_TARGET_MASQUERADE=m
-CONFIG_IP_NF_TARGET_REDIRECT=m
-CONFIG_IP_NF_TARGET_NETMAP=m
-CONFIG_IP_NF_TARGET_SAME=m
-# CONFIG_IP_NF_NAT_LOCAL is not set
-CONFIG_IP_NF_NAT_SNMP_BASIC=m
-CONFIG_IP_NF_NAT_IRC=m
-CONFIG_IP_NF_NAT_FTP=m
-CONFIG_IP_NF_NAT_TFTP=m
-CONFIG_IP_NF_NAT_AMANDA=m
-CONFIG_IP_NF_MANGLE=m
-CONFIG_IP_NF_TARGET_TOS=m
-CONFIG_IP_NF_TARGET_ECN=m
-CONFIG_IP_NF_TARGET_DSCP=m
-CONFIG_IP_NF_TARGET_MARK=m
-CONFIG_IP_NF_TARGET_CLASSIFY=m
-CONFIG_IP_NF_TARGET_LOG=m
-CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
-CONFIG_IP_NF_ARPTABLES=m
-CONFIG_IP_NF_ARPFILTER=m
-CONFIG_IP_NF_ARP_MANGLE=m
-# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
-# CONFIG_IP_NF_COMPAT_IPFWADM is not set
-
-#
-# IPv6: Netfilter Configuration
-#
-# CONFIG_IP6_NF_QUEUE is not set
-# CONFIG_IP6_NF_IPTABLES is not set
-
-#
-# Bridge: Netfilter Configuration
-#
-# CONFIG_BRIDGE_NF_EBTABLES is not set
-CONFIG_XFRM=y
-CONFIG_XFRM_USER=m
+# CONFIG_IPV6_PRIVACY is not set
+# CONFIG_INET6_AH is not set
+# CONFIG_INET6_ESP is not set
+# CONFIG_INET6_IPCOMP is not set
+# CONFIG_IPV6_TUNNEL is not set
+# CONFIG_NETFILTER is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
-CONFIG_VLAN_8021Q=m
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
-CONFIG_NET_DIVERT=y
+# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_NET_FASTROUTE is not set
#
# QoS and/or fair queueing
#
-CONFIG_NET_SCHED=y
-CONFIG_NET_SCH_CBQ=m
-CONFIG_NET_SCH_HTB=m
-CONFIG_NET_SCH_HFSC=m
-CONFIG_NET_SCH_CSZ=m
-CONFIG_NET_SCH_PRIO=m
-CONFIG_NET_SCH_RED=m
-CONFIG_NET_SCH_SFQ=m
-CONFIG_NET_SCH_TEQL=m
-CONFIG_NET_SCH_TBF=m
-CONFIG_NET_SCH_GRED=m
-CONFIG_NET_SCH_DSMARK=m
-# CONFIG_NET_SCH_DELAY is not set
-# CONFIG_NET_SCH_INGRESS is not set
-CONFIG_NET_QOS=y
-CONFIG_NET_ESTIMATOR=y
-CONFIG_NET_CLS=y
-CONFIG_NET_CLS_TCINDEX=m
-CONFIG_NET_CLS_ROUTE4=m
-CONFIG_NET_CLS_ROUTE=y
-CONFIG_NET_CLS_FW=m
-CONFIG_NET_CLS_U32=m
-CONFIG_NET_CLS_RSVP=m
-CONFIG_NET_CLS_RSVP6=m
-CONFIG_NET_CLS_POLICE=y
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
+CONFIG_NETPOLL=y
+# CONFIG_NETPOLL_RX is not set
+# CONFIG_NETPOLL_TRAP is not set
+CONFIG_NET_POLL_CONTROLLER=y
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
#
# ARCnet devices
#
# CONFIG_ARCNET is not set
-CONFIG_DUMMY=m
-CONFIG_BONDING=m
-CONFIG_EQUALIZER=m
-CONFIG_TUN=m
-# CONFIG_ETHERTAP is not set
#
# Ethernet (10 or 100Mbit)
#
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=m
-# CONFIG_HAPPYMEAL is not set
-# CONFIG_SUNGEM is not set
-# CONFIG_NET_VENDOR_3COM is not set
-
-#
-# Tulip family network device support
-#
-CONFIG_NET_TULIP=y
-# CONFIG_DE2104X is not set
-CONFIG_TULIP=m
-# CONFIG_TULIP_MWI is not set
-# CONFIG_TULIP_MMIO is not set
-# CONFIG_TULIP_NAPI is not set
-# CONFIG_DE4X5 is not set
-# CONFIG_WINBOND_840 is not set
-# CONFIG_DM9102 is not set
-# CONFIG_HP100 is not set
-CONFIG_NET_PCI=y
-# CONFIG_PCNET32 is not set
-# CONFIG_AMD8111_ETH is not set
-# CONFIG_ADAPTEC_STARFIRE is not set
-# CONFIG_B44 is not set
-# CONFIG_FORCEDETH is not set
-# CONFIG_DGRS is not set
-CONFIG_EEPRO100=m
-# CONFIG_EEPRO100_PIO is not set
-# CONFIG_E100 is not set
-# CONFIG_FEALNX is not set
-# CONFIG_NATSEMI is not set
-# CONFIG_NE2K_PCI is not set
-# CONFIG_8139CP is not set
-# CONFIG_8139TOO is not set
-# CONFIG_SIS900 is not set
-# CONFIG_EPIC100 is not set
-# CONFIG_SUNDANCE is not set
-# CONFIG_VIA_RHINE is not set
+# CONFIG_NET_ETHERNET is not set
#
# Ethernet (1000 Mbit)
#
-CONFIG_ACENIC=m
-# CONFIG_ACENIC_OMIT_TIGON_I is not set
-CONFIG_DL2K=m
-CONFIG_E1000=y
-# CONFIG_E1000_NAPI is not set
-CONFIG_NS83820=m
-CONFIG_HAMACHI=m
-CONFIG_YELLOWFIN=m
-CONFIG_R8169=m
-CONFIG_SIS190=m
-CONFIG_SK98LIN=m
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
CONFIG_TIGON3=y
#
# Ethernet (10000 Mbit)
#
-CONFIG_IXGB=m
-# CONFIG_IXGB_NAPI is not set
-# CONFIG_FDDI is not set
-# CONFIG_HIPPI is not set
-CONFIG_PPP=m
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_ASYNC=m
-CONFIG_PPP_SYNC_TTY=m
-CONFIG_PPP_DEFLATE=m
-# CONFIG_PPP_BSDCOMP is not set
-# CONFIG_PPPOE is not set
-# CONFIG_SLIP is not set
-
-#
-# Wireless LAN (non-hamradio)
-#
-# CONFIG_NET_RADIO is not set
+# CONFIG_IXGB is not set
+CONFIG_S2IO=m
+# CONFIG_S2IO_NAPI is not set
#
# Token Ring devices
#
# CONFIG_TR is not set
-CONFIG_NET_FC=y
-# CONFIG_SHAPER is not set
-CONFIG_NETCONSOLE=y
-
-#
-# Wan interfaces
-#
-# CONFIG_WAN is not set
-
-#
-# Amateur Radio support
-#
-# CONFIG_HAMRADIO is not set
#
-# IrDA (infrared) support
+# Wireless LAN (non-hamradio)
#
-# CONFIG_IRDA is not set
+# CONFIG_NET_RADIO is not set
#
-# Bluetooth support
+# Wan interfaces
#
-# CONFIG_BT is not set
-CONFIG_NETPOLL=y
-# CONFIG_NETPOLL_RX is not set
-# CONFIG_NETPOLL_TRAP is not set
-CONFIG_NET_POLL_CONTROLLER=y
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_NET_FC is not set
+# CONFIG_SHAPER is not set
+CONFIG_NETCONSOLE=y
#
# ISDN subsystem
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
-CONFIG_INPUT_MOUSEDEV_PSAUX=y
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
#
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
-CONFIG_SERIO=y
-CONFIG_SERIO_I8042=y
-# CONFIG_SERIO_SERPORT is not set
-# CONFIG_SERIO_CT82C710 is not set
-# CONFIG_SERIO_PCIPS2 is not set
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
#
# Input Device Drivers
#
-CONFIG_INPUT_KEYBOARD=y
-CONFIG_KEYBOARD_ATKBD=y
-# CONFIG_KEYBOARD_SUNKBD is not set
-# CONFIG_KEYBOARD_LKKBD is not set
-# CONFIG_KEYBOARD_XTKBD is not set
-# CONFIG_KEYBOARD_NEWTON is not set
-CONFIG_INPUT_MOUSE=y
-CONFIG_MOUSE_PS2=y
-# CONFIG_MOUSE_SERIAL is not set
-# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
CONFIG_HW_CONSOLE=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
+# CONFIG_CYCLADES is not set
# CONFIG_SYNCLINK is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_N_HDLC is not set
#
# Serial drivers
#
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_HCDP=y
-CONFIG_SERIAL_8250_ACPI=y
-CONFIG_SERIAL_8250_NR_UARTS=4
-# CONFIG_SERIAL_8250_EXTENDED is not set
+# CONFIG_SERIAL_8250 is not set
#
# Non-8250 serial port support
#
-CONFIG_SERIAL_CORE=y
-CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
#
# CONFIG_WATCHDOG is not set
# CONFIG_HW_RANDOM is not set
-# CONFIG_GEN_RTC is not set
CONFIG_EFI_RTC=y
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
# CONFIG_AGP is not set
# CONFIG_DRM is not set
CONFIG_RAW_DRIVER=m
+# CONFIG_HPET is not set
CONFIG_MAX_RAW_DEVS=256
#
#
# USB support
#
-# CONFIG_USB is not set
+CONFIG_USB=m
+# CONFIG_USB_DEBUG is not set
+
+#
+# Miscellaneous USB options
+#
+# CONFIG_USB_DEVICEFS is not set
+# CONFIG_USB_BANDWIDTH is not set
+# CONFIG_USB_DYNAMIC_MINORS is not set
+
+#
+# USB Host Controller Drivers
+#
+CONFIG_USB_EHCI_HCD=m
+# CONFIG_USB_EHCI_SPLIT_ISO is not set
+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
+CONFIG_USB_OHCI_HCD=m
+CONFIG_USB_UHCI_HCD=m
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_BLUETOOTH_TTY is not set
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+# CONFIG_USB_STORAGE is not set
+
+#
+# USB Human Interface Devices (HID)
+#
+CONFIG_USB_HID=m
+CONFIG_USB_HIDINPUT=y
+# CONFIG_HID_FF is not set
+# CONFIG_USB_HIDDEV is not set
+
+#
+# USB HID Boot Protocol drivers
+#
+# CONFIG_USB_KBD is not set
+# CONFIG_USB_MOUSE is not set
+# CONFIG_USB_AIPTEK is not set
+# CONFIG_USB_WACOM is not set
+# CONFIG_USB_KBTAB is not set
+# CONFIG_USB_POWERMATE is not set
+# CONFIG_USB_MTOUCH is not set
+# CONFIG_USB_EGALAX is not set
+# CONFIG_USB_XPAD is not set
+# CONFIG_USB_ATI_REMOTE is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+# CONFIG_USB_HPUSBSCSI is not set
+
+#
+# USB Multimedia devices
+#
+# CONFIG_USB_DABUSB is not set
+
+#
+# Video4Linux support is needed for USB Multimedia device support
+#
+
+#
+# USB Network adaptors
+#
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_RTL8150 is not set
+# CONFIG_USB_USBNET is not set
+
+#
+# USB port drivers
+#
+
+#
+# USB Serial Converter support
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_TIGL is not set
+# CONFIG_USB_AUERSWALD is not set
+# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_LED is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETSERVO is not set
#
# USB Gadget Support
#
# File systems
#
-CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
-# CONFIG_EXT2_FS_SECURITY is not set
+CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
-# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_EXT3_FS_SECURITY=y
CONFIG_JBD=y
# CONFIG_JBD_DEBUG is not set
CONFIG_FS_MBCACHE=y
-# CONFIG_REISERFS_FS is not set
+CONFIG_REISERFS_FS=y
+# CONFIG_REISERFS_CHECK is not set
+# CONFIG_REISERFS_PROC_INFO is not set
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_XFS_FS=y
# CONFIG_QFMT_V1 is not set
# CONFIG_QFMT_V2 is not set
CONFIG_QUOTACTL=y
-CONFIG_AUTOFS_FS=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=m
+CONFIG_AUTOFS4_FS=m
#
# CD-ROM/DVD Filesystems
CONFIG_FAT_FS=y
# CONFIG_MSDOS_FS is not set
CONFIG_VFAT_FS=y
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_NTFS_FS is not set
#
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
# CONFIG_DEVFS_FS is not set
# CONFIG_DEVPTS_FS_XATTR is not set
CONFIG_TMPFS=y
#
# Network File Systems
#
-CONFIG_NFS_FS=y
+CONFIG_NFS_FS=m
CONFIG_NFS_V3=y
-# CONFIG_NFS_V4 is not set
-# CONFIG_NFS_DIRECTIO is not set
-CONFIG_NFSD=y
+CONFIG_NFS_V4=y
+CONFIG_NFS_DIRECTIO=y
+CONFIG_NFSD=m
CONFIG_NFSD_V3=y
-# CONFIG_NFSD_V4 is not set
-# CONFIG_NFSD_TCP is not set
-CONFIG_LOCKD=y
+CONFIG_NFSD_V4=y
+CONFIG_NFSD_TCP=y
+CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
-CONFIG_EXPORTFS=y
-CONFIG_SUNRPC=y
-# CONFIG_RPCSEC_GSS_KRB5 is not set
+CONFIG_EXPORTFS=m
+CONFIG_SUNRPC=m
+CONFIG_SUNRPC_GSS=m
+CONFIG_RPCSEC_GSS_KRB5=m
CONFIG_SMB_FS=m
# CONFIG_SMB_NLS_DEFAULT is not set
-# CONFIG_CIFS is not set
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS is not set
+# CONFIG_CIFS_POSIX is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
-# CONFIG_INTERMEZZO_FS is not set
# CONFIG_AFS_FS is not set
#
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
# CONFIG_LDM_PARTITION is not set
-# CONFIG_NEC98_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
#
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-1"
-# CONFIG_NLS_CODEPAGE_437 is not set
+CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
-# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ASCII is not set
+CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
-# CONFIG_NLS_UTF8 is not set
+CONFIG_NLS_UTF8=y
#
# Library routines
#
CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=m
CONFIG_ZLIB_DEFLATE=m
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_IA64_DEBUG_CMPXCHG is not set
# CONFIG_IA64_DEBUG_IRQ is not set
-# CONFIG_DEBUG_INFO is not set
+CONFIG_DEBUG_INFO=y
CONFIG_SYSVIPC_COMPAT=y
#
# CONFIG_CRYPTO_ARC4 is not set
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_MICHAEL_MIC is not set
+# CONFIG_CRYPTO_CRC32C is not set
# CONFIG_CRYPTO_TEST is not set
CONFIG_IKCONFIG_PROC=y
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_IOSCHED_NOOP=y
CONFIG_IA64_PAGE_SIZE_16KB=y
# CONFIG_IA64_PAGE_SIZE_64KB is not set
CONFIG_IA64_L1_CACHE_SHIFT=7
-# CONFIG_MCKINLEY_ASTEP_SPECIFIC is not set
# CONFIG_NUMA is not set
CONFIG_VIRTUAL_MEM_MAP=y
# CONFIG_IA64_CYCLONE is not set
CONFIG_FORCE_MAX_ZONEORDER=18
CONFIG_SMP=y
CONFIG_NR_CPUS=16
+# CONFIG_HOTPLUG_CPU is not set
# CONFIG_PREEMPT is not set
CONFIG_HAVE_DEC_LOCK=y
CONFIG_IA32_SUPPORT=y
#
CONFIG_BLK_DEV_IDEDISK=y
CONFIG_IDEDISK_MULTI_MODE=y
-# CONFIG_IDEDISK_STROKE is not set
CONFIG_BLK_DEV_IDECD=y
# CONFIG_BLK_DEV_IDETAPE is not set
CONFIG_BLK_DEV_IDEFLOPPY=m
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
+# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
CONFIG_IDEDMA_AUTO=y
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
CONFIG_SCSI_MULTI_LUN=y
-CONFIG_SCSI_REPORT_LUNS=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
# CONFIG_SCSI_SYM53C8XX_IOMAPPED is not set
+# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_PCI2000 is not set
# CONFIG_SCSI_PCI2220I is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
#
# I2O device support
#
+# CONFIG_I2O is not set
#
# Networking support
#
# IPMI
#
-# CONFIG_IPMI_HANDLER is not set
+CONFIG_IPMI_HANDLER=m
+CONFIG_IPMI_PANIC_EVENT=y
+CONFIG_IPMI_PANIC_STRING=y
+CONFIG_IPMI_DEVICE_INTERFACE=m
+CONFIG_IPMI_SI=m
+CONFIG_IPMI_WATCHDOG=m
#
# Watchdog Cards
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_EEPROM is not set
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
CONFIG_FB_RIVA=m
# CONFIG_FB_MATROX is not set
# CONFIG_USB_KBTAB is not set
# CONFIG_USB_POWERMATE is not set
# CONFIG_USB_MTOUCH is not set
+# CONFIG_USB_EGALAX is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_PHIDGETSERVO is not set
# CONFIG_USB_TEST is not set
#
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
# CONFIG_NFSD_V4 is not set
-# CONFIG_NFSD_TCP is not set
+CONFIG_NFSD_TCP=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=y
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
-# CONFIG_INTERMEZZO_FS is not set
# CONFIG_AFS_FS is not set
#
# CONFIG_IA64_GRANULE_64MB is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_IA64_PRINT_HAZARDS=y
-CONFIG_DISABLE_VHPT=y
+# CONFIG_DISABLE_VHPT is not set
CONFIG_MAGIC_SYSRQ=y
# CONFIG_IA64_EARLY_PRINTK_VGA is not set
# CONFIG_DEBUG_SLAB is not set
int
ia32_setup_arg_pages (struct linux_binprm *bprm, int executable_stack)
{
- unsigned long stack_base;
+ unsigned long stack_base, grow;
struct vm_area_struct *mpnt;
struct mm_struct *mm = current->mm;
int i;
if (!mpnt)
return -ENOMEM;
- if (security_vm_enough_memory((IA32_STACK_TOP - (PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
+ grow = (IA32_STACK_TOP - (PAGE_MASK & (unsigned long) bprm->p))
+ >> PAGE_SHIFT;
+ if (security_vm_enough_memory(grow) ||
+ !vx_vmpages_avail(mm, grow)) {
kmem_cache_free(vm_area_cachep, mpnt);
return -ENOMEM;
}
mpnt->vm_page_prot = (mpnt->vm_flags & VM_EXEC)?
PAGE_COPY_EXEC: PAGE_COPY;
insert_vm_struct(current->mm, mpnt);
- current->mm->total_vm = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
+ // current->mm->total_vm = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
+ vx_vmpages_sub(current->mm, current->mm->total_vm -
+ ((mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT));
}
for (i = 0 ; i < MAX_ARG_PAGES ; i++) {
}
up_write(&current->mm->mmap_sem);
+ /* Can't do it in ia64_elf32_init(). Needs to be done before calls to
+ elf32_map() */
+ current->thread.ppl = ia32_init_pp_list();
+
return 0;
}
}
static unsigned long
-elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type)
+elf32_map (struct file *filep, unsigned long addr, struct elf_phdr *eppnt, int prot, int type, unsigned long unused)
{
unsigned long pgoff = (eppnt->p_vaddr) & ~IA32_PAGE_MASK;
END(ia32_execve)
ENTRY(ia32_clone)
- .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2)
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
alloc r16=ar.pfs,5,2,6,0
DO_SAVE_SWITCH_STACK
mov loc0=rp
ld4 r2=[r2]
;;
mov r8=0
- tbit.nz p6,p0=r2,TIF_SYSCALL_TRACE
+ and r2=_TIF_SYSCALL_TRACEAUDIT,r2
+ ;;
+ cmp.ne p6,p0=r2,r0
(p6) br.cond.spnt .ia32_strace_check_retval
;; // prevent RAW on r8
END(ia32_ret_from_clone)
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp
;;
st8 [r2]=r3 // initialize return code to -ENOSYS
- br.call.sptk.few rp=syscall_trace // give parent a chance to catch syscall args
+ br.call.sptk.few rp=syscall_trace_enter // give parent a chance to catch syscall args
.ret2: // Need to reload arguments (they may be changed by the tracing process)
adds r2=IA64_PT_REGS_R1_OFFSET+16,sp // r2 = &pt_regs.r1
adds r3=IA64_PT_REGS_R13_OFFSET+16,sp // r3 = &pt_regs.r13
adds r2=IA64_PT_REGS_R8_OFFSET+16,sp // r2 = &pt_regs.r8
;;
st8.spill [r2]=r8 // store return value in slot for r8
- br.call.sptk.few rp=syscall_trace // give parent a chance to catch return value
+ br.call.sptk.few rp=syscall_trace_leave // give parent a chance to catch return value
.ret4: alloc r2=ar.pfs,0,0,0,0 // drop the syscall argument frame
br.cond.sptk.many ia64_leave_kernel
END(ia32_trace_syscall)
data8 sys_setfsuid /* 16-bit version */
data8 sys_setfsgid /* 16-bit version */
data8 sys_llseek /* 140 */
- data8 sys32_getdents
+ data8 compat_sys_getdents
data8 compat_sys_select
data8 sys_flock
data8 sys32_msync
data8 sys_sched_get_priority_min /* 160 */
data8 sys32_sched_rr_get_interval
data8 compat_sys_nanosleep
- data8 sys_mremap
+ data8 sys32_mremap
data8 sys_setresuid /* 16-bit version */
data8 sys32_getresuid16 /* 16-bit version */ /* 165 */
data8 sys_ni_syscall /* vm86 */
data8 sys_pivot_root
data8 sys_mincore
data8 sys_madvise
- data8 sys_getdents64 /* 220 */
+ data8 compat_sys_getdents64 /* 220 */
data8 compat_sys_fcntl64
data8 sys_ni_syscall /* reserved for TUX */
data8 sys_ni_syscall /* reserved for Security */
ia32_exec_domain.signal_map = default_exec_domain.signal_map;
ia32_exec_domain.signal_invmap = default_exec_domain.signal_invmap;
register_exec_domain(&ia32_exec_domain);
+
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+ {
+ extern kmem_cache_t *partial_page_cachep;
+
+ partial_page_cachep = kmem_cache_create("partial_page_cache",
+ sizeof(struct partial_page), 0, 0,
+ NULL, NULL);
+ if (!partial_page_cachep)
+ panic("Cannot create partial page SLAB cache");
+ }
+#endif
return 0;
}
-#ifndef _ASM_IA64_IA32_H
-#define _ASM_IA64_IA32_H
+#ifndef _ASM_IA64_IA32_PRIV_H
+#define _ASM_IA64_IA32_PRIV_H
#include <linux/config.h>
#include <linux/binfmts.h>
#include <linux/compat.h>
+#include <linux/rbtree.h>
#include <asm/processor.h>
* 32 bit structures for IA32 support.
*/
-#define IA32_PAGE_SHIFT 12 /* 4KB pages */
#define IA32_PAGE_SIZE (1UL << IA32_PAGE_SHIFT)
#define IA32_PAGE_MASK (~(IA32_PAGE_SIZE - 1))
#define IA32_PAGE_ALIGN(addr) (((addr) + IA32_PAGE_SIZE - 1) & IA32_PAGE_MASK)
#define IA32_CLOCKS_PER_SEC 100 /* Cast in stone for IA32 Linux */
+/*
+ * Partially mapped pages provide precise accounting of which 4K sub-pages
+ * are mapped and which ones are not, thereby improving IA-32 compatibility.
+ */
+struct partial_page {
+ struct partial_page *next; /* linked list, sorted by address */
+ struct rb_node pp_rb;
+ /* 64K is the largest "normal" page supported by ia64 ABI. So 4K*32
+	 * should suffice. */
+ unsigned int bitmap;
+ unsigned int base;
+};
+
+struct partial_page_list {
+ struct partial_page *pp_head; /* list head, points to the lowest
+ * addressed partial page */
+ struct rb_root ppl_rb;
+ struct partial_page *pp_hint; /* pp_hint->next is the last
+ * accessed partial page */
+ atomic_t pp_count; /* reference count */
+};
+
+#if PAGE_SHIFT > IA32_PAGE_SHIFT
+struct partial_page_list* ia32_init_pp_list (void);
+#else
+# define ia32_init_pp_list() 0
+#endif
+
/* sigcontext.h */
/*
* As documented in the iBCS2 standard..
#endif /* !CONFIG_IA32_SUPPORT */
-#endif /* _ASM_IA64_IA32_H */
+#endif /* _ASM_IA64_IA32_PRIV_H */
* Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
* Copyright (C) 2000-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 2004 Gordon Jin <gordon.jin@intel.com>
*
* These routines maintain argument size conversion between 32bit and 64bit
* environment.
#include <linux/ipc.h>
#include <linux/compat.h>
#include <linux/vfs.h>
+#include <linux/mman.h>
#include <asm/intrinsics.h>
#include <asm/semaphore.h>
return ret;
}
+/* SLAB cache for partial_page structures */
+kmem_cache_t *partial_page_cachep;
+
+/*
+ * Initialize a partial_page_list.
+ * Returns 0 (NULL) if kmalloc fails.
+ */
+struct partial_page_list*
+ia32_init_pp_list(void)
+{
+ struct partial_page_list *p;
+
+ if ((p = kmalloc(sizeof(*p), GFP_KERNEL)) == NULL)
+ return p;
+ p->pp_head = 0;
+ p->ppl_rb = RB_ROOT;
+ p->pp_hint = 0;
+ atomic_set(&p->pp_count, 1);
+ return p;
+}
+
+/*
+ * Search for the partial page with @start in partial page list @ppl.
+ * If the partial page is found, return it.
+ * Otherwise, return 0 and provide @pprev, @rb_link, @rb_parent to
+ * be used by later __ia32_insert_pp().
+ */
+static struct partial_page *
+__ia32_find_pp(struct partial_page_list *ppl, unsigned int start,
+ struct partial_page **pprev, struct rb_node ***rb_link,
+ struct rb_node **rb_parent)
+{
+ struct partial_page *pp;
+ struct rb_node **__rb_link, *__rb_parent, *rb_prev;
+
+ pp = ppl->pp_hint;
+ if (pp && pp->base == start)
+ return pp;
+
+ __rb_link = &ppl->ppl_rb.rb_node;
+ rb_prev = __rb_parent = NULL;
+
+ while (*__rb_link) {
+ __rb_parent = *__rb_link;
+ pp = rb_entry(__rb_parent, struct partial_page, pp_rb);
+
+ if (pp->base == start) {
+ ppl->pp_hint = pp;
+ return pp;
+ } else if (pp->base < start) {
+ rb_prev = __rb_parent;
+ __rb_link = &__rb_parent->rb_right;
+ } else {
+ __rb_link = &__rb_parent->rb_left;
+ }
+ }
+
+ *rb_link = __rb_link;
+ *rb_parent = __rb_parent;
+ *pprev = NULL;
+ if (rb_prev)
+ *pprev = rb_entry(rb_prev, struct partial_page, pp_rb);
+ return NULL;
+}
+
+/*
+ * insert @pp into @ppl.
+ */
+static void
+__ia32_insert_pp(struct partial_page_list *ppl, struct partial_page *pp,
+ struct partial_page *prev, struct rb_node **rb_link,
+ struct rb_node *rb_parent)
+{
+ /* link list */
+ if (prev) {
+ pp->next = prev->next;
+ prev->next = pp;
+ } else {
+ ppl->pp_head = pp;
+ if (rb_parent)
+ pp->next = rb_entry(rb_parent,
+ struct partial_page, pp_rb);
+ else
+ pp->next = NULL;
+ }
+
+ /* link rb */
+ rb_link_node(&pp->pp_rb, rb_parent, rb_link);
+ rb_insert_color(&pp->pp_rb, &ppl->ppl_rb);
+
+ ppl->pp_hint = pp;
+}
+
+/*
+ * delete @pp from partial page list @ppl.
+ */
+static void
+__ia32_delete_pp(struct partial_page_list *ppl, struct partial_page *pp,
+ struct partial_page *prev)
+{
+ if (prev) {
+ prev->next = pp->next;
+ if (ppl->pp_hint == pp)
+ ppl->pp_hint = prev;
+ } else {
+ ppl->pp_head = pp->next;
+ if (ppl->pp_hint == pp)
+ ppl->pp_hint = pp->next;
+ }
+ rb_erase(&pp->pp_rb, &ppl->ppl_rb);
+ kmem_cache_free(partial_page_cachep, pp);
+}
+
+static struct partial_page *
+__pp_prev(struct partial_page *pp)
+{
+ struct rb_node *prev = rb_prev(&pp->pp_rb);
+ if (prev)
+ return rb_entry(prev, struct partial_page, pp_rb);
+ else
+ return NULL;
+}
+
+/*
+ * Delete partial pages with address between @start and @end.
+ * @start and @end are page aligned.
+ */
+static void
+__ia32_delete_pp_range(unsigned int start, unsigned int end)
+{
+ struct partial_page *pp, *prev;
+ struct rb_node **rb_link, *rb_parent;
+
+ if (start >= end)
+ return;
+
+ pp = __ia32_find_pp(current->thread.ppl, start, &prev,
+ &rb_link, &rb_parent);
+ if (pp)
+ prev = __pp_prev(pp);
+ else {
+ if (prev)
+ pp = prev->next;
+ else
+ pp = current->thread.ppl->pp_head;
+ }
+
+ while (pp && pp->base < end) {
+ struct partial_page *tmp = pp->next;
+ __ia32_delete_pp(current->thread.ppl, pp, prev);
+ pp = tmp;
+ }
+}
+
+/*
+ * Set the range between @start and @end in bitmap.
+ * @start and @end should be IA32 page aligned and in the same IA64 page.
+ */
+static int
+__ia32_set_pp(unsigned int start, unsigned int end, int flags)
+{
+ struct partial_page *pp, *prev;
+ struct rb_node ** rb_link, *rb_parent;
+ unsigned int pstart, start_bit, end_bit, i;
+
+ pstart = PAGE_START(start);
+ start_bit = (start % PAGE_SIZE) / IA32_PAGE_SIZE;
+ end_bit = (end % PAGE_SIZE) / IA32_PAGE_SIZE;
+ if (end_bit == 0)
+ end_bit = PAGE_SIZE / IA32_PAGE_SIZE;
+ pp = __ia32_find_pp(current->thread.ppl, pstart, &prev,
+ &rb_link, &rb_parent);
+ if (pp) {
+ for (i = start_bit; i < end_bit; i++)
+ set_bit(i, &pp->bitmap);
+ /*
+ * Check: if this partial page has been set to a full page,
+ * then delete it.
+ */
+ if (find_first_zero_bit(&pp->bitmap, sizeof(pp->bitmap)*8) >=
+ PAGE_SIZE/IA32_PAGE_SIZE) {
+ __ia32_delete_pp(current->thread.ppl, pp, __pp_prev(pp));
+ }
+ return 0;
+ }
+
+ /*
+ * MAP_FIXED may lead to overlapping mmap.
+ * In this case, the requested mmap area may already be mmapped as a full
+ * page. So check vma before adding a new partial page.
+ */
+ if (flags & MAP_FIXED) {
+ struct vm_area_struct *vma = find_vma(current->mm, pstart);
+ if (vma && vma->vm_start <= pstart)
+ return 0;
+ }
+
+ /* allocate a new partial_page */
+ pp = kmem_cache_alloc(partial_page_cachep, GFP_KERNEL);
+ if (!pp)
+ return -ENOMEM;
+ pp->base = pstart;
+ pp->bitmap = 0;
+ for (i=start_bit; i<end_bit; i++)
+ set_bit(i, &(pp->bitmap));
+ pp->next = NULL;
+ __ia32_insert_pp(current->thread.ppl, pp, prev, rb_link, rb_parent);
+ return 0;
+}
+
+/*
+ * @start and @end should be IA32 page aligned, but don't need to be in the
+ * same IA64 page. Split @start and @end to make sure they're in the same IA64
+ * page, then call __ia32_set_pp().
+ */
+static void
+ia32_set_pp(unsigned int start, unsigned int end, int flags)
+{
+ down_write(&current->mm->mmap_sem);
+ if (flags & MAP_FIXED) {
+ /*
+ * MAP_FIXED may lead to overlapping mmap. When this happens,
+ * any complete IA64 pages in the new mapping cause the old
+ * partial pages in that range to be deleted.
+ */
+ __ia32_delete_pp_range(PAGE_ALIGN(start), PAGE_START(end));
+ }
+
+ if (end < PAGE_ALIGN(start)) {
+ __ia32_set_pp(start, end, flags);
+ } else {
+ if (offset_in_page(start))
+ __ia32_set_pp(start, PAGE_ALIGN(start), flags);
+ if (offset_in_page(end))
+ __ia32_set_pp(PAGE_START(end), end, flags);
+ }
+ up_write(&current->mm->mmap_sem);
+}
+
+/*
+ * Unset the range between @start and @end in bitmap.
+ * @start and @end should be IA32 page aligned and in the same IA64 page.
+ * After doing so, if the bitmap is 0, free the partial page and return 1;
+ * otherwise return 0.
+ * If the partial page is not found in the list, then:
+ * if a vma covers the page, turn the full page into a partial page;
+ * else return -ENOMEM.
+ */
+static int
+__ia32_unset_pp(unsigned int start, unsigned int end)
+{
+ struct partial_page *pp, *prev;
+ struct rb_node ** rb_link, *rb_parent;
+ unsigned int pstart, start_bit, end_bit, i;
+ struct vm_area_struct *vma;
+
+ pstart = PAGE_START(start);
+ start_bit = (start % PAGE_SIZE) / IA32_PAGE_SIZE;
+ end_bit = (end % PAGE_SIZE) / IA32_PAGE_SIZE;
+ if (end_bit == 0)
+ end_bit = PAGE_SIZE / IA32_PAGE_SIZE;
+
+ pp = __ia32_find_pp(current->thread.ppl, pstart, &prev,
+ &rb_link, &rb_parent);
+ if (pp) {
+ for (i = start_bit; i < end_bit; i++)
+ clear_bit(i, &pp->bitmap);
+ if (pp->bitmap == 0) {
+ __ia32_delete_pp(current->thread.ppl, pp, __pp_prev(pp));
+ return 1;
+ }
+ return 0;
+ }
+
+ vma = find_vma(current->mm, pstart);
+ if (!vma || vma->vm_start > pstart) {
+ return -ENOMEM;
+ }
+
+ /* allocate a new partial_page */
+ pp = kmem_cache_alloc(partial_page_cachep, GFP_KERNEL);
+ if (!pp)
+ return -ENOMEM;
+ pp->base = pstart;
+ pp->bitmap = 0;
+ for (i = 0; i < start_bit; i++)
+ set_bit(i, &(pp->bitmap));
+ for (i = end_bit; i < PAGE_SIZE / IA32_PAGE_SIZE; i++)
+ set_bit(i, &(pp->bitmap));
+ pp->next = NULL;
+ __ia32_insert_pp(current->thread.ppl, pp, prev, rb_link, rb_parent);
+ return 0;
+}
+
+/*
+ * Delete pp between PAGE_ALIGN(start) and PAGE_START(end) by calling
+ * __ia32_delete_pp_range(). Unset possible partial pages by calling
+ * __ia32_unset_pp().
+ * For the return value, see __ia32_unset_pp().
+ */
+static int
+ia32_unset_pp(unsigned int *startp, unsigned int *endp)
+{
+ unsigned int start = *startp, end = *endp;
+ int ret = 0;
+
+ down_write(&current->mm->mmap_sem);
+
+ __ia32_delete_pp_range(PAGE_ALIGN(start), PAGE_START(end));
+
+ if (end < PAGE_ALIGN(start)) {
+ ret = __ia32_unset_pp(start, end);
+ if (ret == 1) {
+ *startp = PAGE_START(start);
+ *endp = PAGE_ALIGN(end);
+ }
+ if (ret == 0) {
+ /* to shortcut sys_munmap() in sys32_munmap() */
+ *startp = PAGE_START(start);
+ *endp = PAGE_START(end);
+ }
+ } else {
+ if (offset_in_page(start)) {
+ ret = __ia32_unset_pp(start, PAGE_ALIGN(start));
+ if (ret == 1)
+ *startp = PAGE_START(start);
+ if (ret == 0)
+ *startp = PAGE_ALIGN(start);
+ if (ret < 0)
+ goto out;
+ }
+ if (offset_in_page(end)) {
+ ret = __ia32_unset_pp(PAGE_START(end), end);
+ if (ret == 1)
+ *endp = PAGE_ALIGN(end);
+ if (ret == 0)
+ *endp = PAGE_START(end);
+ }
+ }
+
+ out:
+ up_write(&current->mm->mmap_sem);
+ return ret;
+}
+
+/*
+ * Compare the range between @start and @end with bitmap in partial page.
+ * @start and @end should be IA32 page aligned and in the same IA64 page.
+ */
+static int
+__ia32_compare_pp(unsigned int start, unsigned int end)
+{
+ struct partial_page *pp, *prev;
+ struct rb_node ** rb_link, *rb_parent;
+ unsigned int pstart, start_bit, end_bit, size;
+ unsigned int first_bit, next_zero_bit; /* the first range in bitmap */
+
+ pstart = PAGE_START(start);
+
+ pp = __ia32_find_pp(current->thread.ppl, pstart, &prev,
+ &rb_link, &rb_parent);
+ if (!pp)
+ return 1;
+
+ start_bit = (start % PAGE_SIZE) / IA32_PAGE_SIZE;
+ end_bit = (end % PAGE_SIZE) / IA32_PAGE_SIZE;
+ size = sizeof(pp->bitmap) * 8;
+ first_bit = find_first_bit(&pp->bitmap, size);
+ next_zero_bit = find_next_zero_bit(&pp->bitmap, size, first_bit);
+ if ((start_bit < first_bit) || (end_bit > next_zero_bit)) {
+ /* exceeds the first range in bitmap */
+ return -ENOMEM;
+ } else if ((start_bit == first_bit) && (end_bit == next_zero_bit)) {
+ first_bit = find_next_bit(&pp->bitmap, size, next_zero_bit);
+ if ((next_zero_bit < first_bit) && (first_bit < size))
+ return 1; /* has next range */
+ else
+ return 0; /* no next range */
+ } else
+ return 1;
+}
+
+/*
+ * @start and @end should be IA32 page aligned, but don't need to be in the
+ * same IA64 page. Split @start and @end to make sure they're in the same IA64
+ * page, then call __ia32_compare_pp().
+ *
+ * Take this as an example: the range is the 1st and 2nd 4K pages.
+ * Return 0 if they fit bitmap exactly, i.e. bitmap = 00000011;
+ * Return 1 if the range doesn't cover whole bitmap, e.g. bitmap = 00001111;
+ * Return -ENOMEM if the range exceeds the bitmap, e.g. bitmap = 00000001 or
+ * bitmap = 00000101.
+ */
+static int
+ia32_compare_pp(unsigned int *startp, unsigned int *endp)
+{
+ unsigned int start = *startp, end = *endp;
+ int retval = 0;
+
+ down_write(&current->mm->mmap_sem);
+
+ if (end < PAGE_ALIGN(start)) {
+ retval = __ia32_compare_pp(start, end);
+ if (retval == 0) {
+ *startp = PAGE_START(start);
+ *endp = PAGE_ALIGN(end);
+ }
+ } else {
+ if (offset_in_page(start)) {
+ retval = __ia32_compare_pp(start,
+ PAGE_ALIGN(start));
+ if (retval == 0)
+ *startp = PAGE_START(start);
+ if (retval < 0)
+ goto out;
+ }
+ if (offset_in_page(end)) {
+ retval = __ia32_compare_pp(PAGE_START(end), end);
+ if (retval == 0)
+ *endp = PAGE_ALIGN(end);
+ }
+ }
+
+ out:
+ up_write(&current->mm->mmap_sem);
+ return retval;
+}
+
+static void
+__ia32_drop_pp_list(struct partial_page_list *ppl)
+{
+ struct partial_page *pp = ppl->pp_head;
+
+ while (pp) {
+ struct partial_page *next = pp->next;
+ kmem_cache_free(partial_page_cachep, pp);
+ pp = next;
+ }
+
+ kfree(ppl);
+}
+
+void
+ia32_drop_partial_page_list(struct task_struct *task)
+{
+ struct partial_page_list* ppl = task->thread.ppl;
+
+ if (ppl && atomic_dec_and_test(&ppl->pp_count))
+ __ia32_drop_pp_list(ppl);
+}
+
+/*
+ * Copy current->thread.ppl to ppl (already initialized).
+ */
+static int
+__ia32_copy_pp_list(struct partial_page_list *ppl)
+{
+ struct partial_page *pp, *tmp, *prev;
+ struct rb_node **rb_link, *rb_parent;
+
+ ppl->pp_head = NULL;
+ ppl->pp_hint = NULL;
+ ppl->ppl_rb = RB_ROOT;
+ rb_link = &ppl->ppl_rb.rb_node;
+ rb_parent = NULL;
+ prev = NULL;
+
+ for (pp = current->thread.ppl->pp_head; pp; pp = pp->next) {
+ tmp = kmem_cache_alloc(partial_page_cachep, GFP_KERNEL);
+ if (!tmp)
+ return -ENOMEM;
+ *tmp = *pp;
+ __ia32_insert_pp(ppl, tmp, prev, rb_link, rb_parent);
+ prev = tmp;
+ rb_link = &tmp->pp_rb.rb_right;
+ rb_parent = &tmp->pp_rb;
+ }
+ return 0;
+}
+
+int
+ia32_copy_partial_page_list(struct task_struct *p, unsigned long clone_flags)
+{
+ int retval = 0;
+
+ if (clone_flags & CLONE_VM) {
+ atomic_inc(&current->thread.ppl->pp_count);
+ p->thread.ppl = current->thread.ppl;
+ } else {
+ p->thread.ppl = ia32_init_pp_list();
+ if (!p->thread.ppl)
+ return -ENOMEM;
+ down_write(&current->mm->mmap_sem);
+ {
+ retval = __ia32_copy_pp_list(p->thread.ppl);
+ }
+ up_write(&current->mm->mmap_sem);
+ }
+
+ return retval;
+}
+
static unsigned long
emulate_mmap (struct file *file, unsigned long start, unsigned long len, int prot, int flags,
loff_t off)
pend = PAGE_ALIGN(end);
if (flags & MAP_FIXED) {
+ ia32_set_pp((unsigned int)start, (unsigned int)end, flags);
if (start > pstart) {
if (flags & MAP_SHARED)
printk(KERN_INFO
return ret;
pstart += PAGE_SIZE;
if (pstart >= pend)
- return start; /* done */
+ goto out; /* done */
}
if (end < pend) {
if (flags & MAP_SHARED)
return ret;
pend -= PAGE_SIZE;
if (pstart >= pend)
- return start; /* done */
+ goto out; /* done */
}
} else {
/*
if (!(prot & PROT_WRITE) && sys_mprotect(pstart, pend - pstart, prot) < 0)
return -EINVAL;
}
+
+ if (!(flags & MAP_FIXED))
+ ia32_set_pp((unsigned int)start, (unsigned int)end, flags);
+out:
return start;
}
#if PAGE_SHIFT <= IA32_PAGE_SHIFT
ret = sys_munmap(start, end - start);
#else
+ if (OFFSET4K(start))
+ return -EINVAL;
+
+ end = IA32_PAGE_ALIGN(end);
if (start >= end)
return -EINVAL;
- start = PAGE_ALIGN(start);
- end = PAGE_START(end);
+ ret = ia32_unset_pp(&start, &end);
+ if (ret < 0)
+ return ret;
if (start >= end)
return 0;
asmlinkage long
sys32_mprotect (unsigned int start, unsigned int len, int prot)
{
- unsigned long end = start + len;
+ unsigned int end = start + len;
#if PAGE_SHIFT > IA32_PAGE_SHIFT
long retval = 0;
#endif
if (end < start)
return -EINVAL;
+ retval = ia32_compare_pp(&start, &end);
+
+ if (retval < 0)
+ return retval;
+
down(&ia32_mmap_sem);
{
if (offset_in_page(start)) {
#endif
}
+asmlinkage long
+sys32_mremap (unsigned int addr, unsigned int old_len, unsigned int new_len,
+ unsigned int flags, unsigned int new_addr)
+{
+ long ret;
+
+#if PAGE_SHIFT <= IA32_PAGE_SHIFT
+ ret = sys_mremap(addr, old_len, new_len, flags, new_addr);
+#else
+ unsigned int old_end, new_end;
+
+ if (OFFSET4K(addr))
+ return -EINVAL;
+
+ old_len = IA32_PAGE_ALIGN(old_len);
+ new_len = IA32_PAGE_ALIGN(new_len);
+ old_end = addr + old_len;
+ new_end = addr + new_len;
+
+ if (!new_len)
+ return -EINVAL;
+
+ if ((flags & MREMAP_FIXED) && (OFFSET4K(new_addr)))
+ return -EINVAL;
+
+ if (old_len >= new_len) {
+ ret = sys32_munmap(addr + new_len, old_len - new_len);
+ if (ret && old_len != new_len)
+ return ret;
+ ret = addr;
+ if (!(flags & MREMAP_FIXED) || (new_addr == addr))
+ return ret;
+ old_len = new_len;
+ }
+
+ addr = PAGE_START(addr);
+ old_len = PAGE_ALIGN(old_end) - addr;
+ new_len = PAGE_ALIGN(new_end) - addr;
+
+ down(&ia32_mmap_sem);
+ {
+ ret = sys_mremap(addr, old_len, new_len, flags, new_addr);
+ }
+ up(&ia32_mmap_sem);
+
+ if ((ret >= 0) && (old_len < new_len)) {
+ /* mremap expanded successfully */
+ ia32_set_pp(old_end, new_end, flags);
+ }
+#endif
+ return ret;
+}
+
asmlinkage long
sys32_pipe (int *fd)
{
int ret;
mm_segment_t old_fs = get_fs();
- if (uss32)
+ if (uss32) {
if (copy_from_user(&buf32, uss32, sizeof(ia32_stack_t)))
return -EFAULT;
- uss.ss_sp = (void *) (long) buf32.ss_sp;
- uss.ss_flags = buf32.ss_flags;
- /* MINSIGSTKSZ is different for ia32 vs ia64. We lie here to pass the
- check and set it to the user requested value later */
- if ((buf32.ss_flags != SS_DISABLE) && (buf32.ss_size < MINSIGSTKSZ_IA32)) {
- ret = -ENOMEM;
- goto out;
+ uss.ss_sp = (void *) (long) buf32.ss_sp;
+ uss.ss_flags = buf32.ss_flags;
+ /* MINSIGSTKSZ is different for ia32 vs ia64. We lie here to pass the
+ check and set it to the user requested value later */
+ if ((buf32.ss_flags != SS_DISABLE) && (buf32.ss_size < MINSIGSTKSZ_IA32)) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ uss.ss_size = MINSIGSTKSZ;
}
- uss.ss_size = MINSIGSTKSZ;
set_fs(KERNEL_DS);
ret = do_sigaltstack(uss32 ? &uss : NULL, &uoss, pt->r12);
current->sas_ss_size = buf32.ss_size;
#endif /* CONFIG_ACPI_NUMA */
unsigned int
-acpi_register_gsi (u32 gsi, int polarity, int trigger)
+acpi_register_gsi (u32 gsi, int edge_level, int active_high_low)
{
- return acpi_register_irq(gsi, polarity, trigger);
+ if (has_8259 && gsi < 16)
+ return isa_irq_to_vector(gsi);
+
+ return iosapic_register_intr(gsi,
+ (active_high_low == ACPI_ACTIVE_HIGH) ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW,
+ (edge_level == ACPI_EDGE_SENSITIVE) ? IOSAPIC_EDGE : IOSAPIC_LEVEL);
}
EXPORT_SYMBOL(acpi_register_gsi);
if (fadt->iapc_boot_arch & BAF_LEGACY_DEVICES)
acpi_legacy_devices = 1;
- acpi_register_gsi(fadt->sci_int, ACPI_ACTIVE_LOW, ACPI_LEVEL_SENSITIVE);
+ acpi_register_gsi(fadt->sci_int, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW);
return 0;
}
return 0;
}
-int
-acpi_register_irq (u32 gsi, u32 polarity, u32 trigger)
-{
- if (has_8259 && gsi < 16)
- return isa_irq_to_vector(gsi);
-
- return iosapic_register_intr(gsi,
- (polarity == ACPI_ACTIVE_HIGH) ? IOSAPIC_POL_HIGH : IOSAPIC_POL_LOW,
- (trigger == ACPI_EDGE_SENSITIVE) ? IOSAPIC_EDGE : IOSAPIC_LEVEL);
-}
-EXPORT_SYMBOL(acpi_register_irq);
-
#endif /* CONFIG_ACPI_BOOT */
#define efi_call_virt(f, args...) (*(f))(args)
-#define STUB_GET_TIME(prefix, adjust_arg) \
-static efi_status_t \
-prefix##_get_time (efi_time_t *tm, efi_time_cap_t *tc) \
-{ \
- struct ia64_fpreg fr[6]; \
- efi_status_t ret; \
- \
- ia64_save_scratch_fpregs(fr); \
- ret = efi_call_##prefix((efi_get_time_t *) __va(runtime->get_time), adjust_arg(tm), \
- adjust_arg(tc)); \
- ia64_load_scratch_fpregs(fr); \
- return ret; \
+#define STUB_GET_TIME(prefix, adjust_arg) \
+static efi_status_t \
+prefix##_get_time (efi_time_t *tm, efi_time_cap_t *tc) \
+{ \
+ struct ia64_fpreg fr[6]; \
+ efi_time_cap_t *atc = 0; \
+ efi_status_t ret; \
+ \
+ if (tc) \
+ atc = adjust_arg(tc); \
+ ia64_save_scratch_fpregs(fr); \
+ ret = efi_call_##prefix((efi_get_time_t *) __va(runtime->get_time), adjust_arg(tm), atc); \
+ ia64_load_scratch_fpregs(fr); \
+ return ret; \
}
#define STUB_SET_TIME(prefix, adjust_arg) \
prefix##_set_wakeup_time (efi_bool_t enabled, efi_time_t *tm) \
{ \
struct ia64_fpreg fr[6]; \
+ efi_time_t *atm = 0; \
efi_status_t ret; \
\
+ if (tm) \
+ atm = adjust_arg(tm); \
ia64_save_scratch_fpregs(fr); \
ret = efi_call_##prefix((efi_set_wakeup_time_t *) __va(runtime->set_wakeup_time), \
- enabled, adjust_arg(tm)); \
+ enabled, atm); \
ia64_load_scratch_fpregs(fr); \
return ret; \
}
unsigned long *data_size, void *data) \
{ \
struct ia64_fpreg fr[6]; \
+ u32 *aattr = 0; \
efi_status_t ret; \
\
+ if (attr) \
+ aattr = adjust_arg(attr); \
ia64_save_scratch_fpregs(fr); \
ret = efi_call_##prefix((efi_get_variable_t *) __va(runtime->get_variable), \
- adjust_arg(name), adjust_arg(vendor), adjust_arg(attr), \
+ adjust_arg(name), adjust_arg(vendor), aattr, \
adjust_arg(data_size), adjust_arg(data)); \
ia64_load_scratch_fpregs(fr); \
return ret; \
unsigned long data_size, efi_char16_t *data) \
{ \
struct ia64_fpreg fr[6]; \
+ efi_char16_t *adata = 0; \
+ \
+ if (data) \
+ adata = adjust_arg(data); \
\
ia64_save_scratch_fpregs(fr); \
efi_call_##prefix((efi_reset_system_t *) __va(runtime->reset_system), \
- reset_type, status, data_size, adjust_arg(data)); \
+ reset_type, status, data_size, adata); \
/* should not return, but just in case... */ \
ia64_load_scratch_fpregs(fr); \
}
-STUB_GET_TIME(phys, __pa)
-STUB_SET_TIME(phys, __pa)
-STUB_GET_WAKEUP_TIME(phys, __pa)
-STUB_SET_WAKEUP_TIME(phys, __pa)
-STUB_GET_VARIABLE(phys, __pa)
-STUB_GET_NEXT_VARIABLE(phys, __pa)
-STUB_SET_VARIABLE(phys, __pa)
-STUB_GET_NEXT_HIGH_MONO_COUNT(phys, __pa)
-STUB_RESET_SYSTEM(phys, __pa)
-
-STUB_GET_TIME(virt, )
-STUB_SET_TIME(virt, )
-STUB_GET_WAKEUP_TIME(virt, )
-STUB_SET_WAKEUP_TIME(virt, )
-STUB_GET_VARIABLE(virt, )
-STUB_GET_NEXT_VARIABLE(virt, )
-STUB_SET_VARIABLE(virt, )
-STUB_GET_NEXT_HIGH_MONO_COUNT(virt, )
-STUB_RESET_SYSTEM(virt, )
+#define phys_ptr(arg) ((__typeof__(arg)) ia64_tpa(arg))
+
+STUB_GET_TIME(phys, phys_ptr)
+STUB_SET_TIME(phys, phys_ptr)
+STUB_GET_WAKEUP_TIME(phys, phys_ptr)
+STUB_SET_WAKEUP_TIME(phys, phys_ptr)
+STUB_GET_VARIABLE(phys, phys_ptr)
+STUB_GET_NEXT_VARIABLE(phys, phys_ptr)
+STUB_SET_VARIABLE(phys, phys_ptr)
+STUB_GET_NEXT_HIGH_MONO_COUNT(phys, phys_ptr)
+STUB_RESET_SYSTEM(phys, phys_ptr)
+
+#define id(arg) arg
+
+STUB_GET_TIME(virt, id)
+STUB_SET_TIME(virt, id)
+STUB_GET_WAKEUP_TIME(virt, id)
+STUB_SET_WAKEUP_TIME(virt, id)
+STUB_GET_VARIABLE(virt, id)
+STUB_GET_NEXT_VARIABLE(virt, id)
+STUB_SET_VARIABLE(virt, id)
+STUB_GET_NEXT_HIGH_MONO_COUNT(virt, id)
+STUB_RESET_SYSTEM(virt, id)
void
efi_gettimeofday (struct timespec *ts)
GLOBAL_ENTRY(efi_call_phys)
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
- alloc loc1=ar.pfs,8,5,7,0
+ alloc loc1=ar.pfs,8,7,7,0
ld8 r2=[in0],8 // load EFI function's entry point
mov loc0=rp
.body
mov out3=in4
mov out5=in6
mov out6=in7
+ mov loc5=r19
+ mov loc6=r20
br.call.sptk.many rp=b6 // call the EFI function
.ret1: mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3
+ mov r19=loc5
+ mov r20=loc6
br.call.sptk.many rp=ia64_switch_mode_virt // return to virtual mode
.ret2: mov ar.rsc=loc4 // restore RSE configuration
mov ar.pfs=loc1
.body
adds r22=IA64_TASK_THREAD_KSP_OFFSET,r13
+ movl r25=init_task
mov r27=IA64_KR(CURRENT_STACK)
- dep r20=0,in0,61,3 // physical address of "current"
+ adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
+ dep r20=0,in0,61,3 // physical address of "next"
;;
st8 [r22]=sp // save kernel stack pointer of old task
shr.u r26=r20,IA64_GRANULE_SHIFT
- adds r21=IA64_TASK_THREAD_KSP_OFFSET,in0
+ cmp.eq p7,p6=r25,in0
;;
/*
* If we've already mapped this task's page, we can skip doing it again.
*/
- cmp.eq p7,p6=r26,r27
+(p6) cmp.eq p7,p6=r26,r27
(p6) br.cond.dpnt .map
;;
.done:
-(p6) ssm psr.ic // if we we had to map, renable the psr.ic bit FIRST!!!
+(p6) ssm psr.ic // if we had to map, reenable the psr.ic bit FIRST!!!
;;
(p6) srlz.d
ld8 sp=[r21] // load kernel stack pointer of new task
;;
stf.spill [r16]=f10
stf.spill [r17]=f11
- br.call.sptk.many rp=syscall_trace // give parent a chance to catch syscall args
+ br.call.sptk.many rp=syscall_trace_enter // give parent a chance to catch syscall args
adds r16=PT(F6)+16,sp
adds r17=PT(F7)+16,sp
;;
.strace_save_retval:
.mem.offset 0,0; st8.spill [r2]=r8 // store return value in slot for r8
.mem.offset 8,0; st8.spill [r3]=r10 // clear error indication in slot for r10
- br.call.sptk.many rp=syscall_trace // give parent a chance to catch return value
+ br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value
.ret3: br.cond.sptk ia64_leave_syscall
strace_error:
*/
nop.m 0
nop.i 0
- br.call.sptk.many rp=syscall_trace // give parent a chance to catch return value
+ br.call.sptk.many rp=syscall_trace_leave // give parent a chance to catch return value
}
.ret4: br.cond.sptk ia64_leave_kernel
END(ia64_strace_leave_kernel)
ld4 r2=[r2]
;;
mov r8=0
- tbit.nz p6,p0=r2,TIF_SYSCALL_TRACE
+ and r2=_TIF_SYSCALL_TRACEAUDIT,r2
+ ;;
+ cmp.ne p6,p0=r2,r0
(p6) br.cond.spnt .strace_check_retval
;; // added stop bits to prevent r8 dependency
END(ia64_ret_from_clone)
PT_REGS_UNWIND_INFO(0)
/*
* work.need_resched etc. mustn't get changed by this CPU before it returns to
- * user- or fsys-mode, hence we disable interrupts early on:
+ * user- or fsys-mode, hence we disable interrupts early on.
+ *
+ * p6 controls whether current_thread_info()->flags needs to be checked for
+ * extra work. We always check for extra work when returning to user-level.
+ * With CONFIG_PREEMPT, we also check for extra work when the preempt_count
+ * is 0. After extra work processing has been completed, execution
+ * resumes at .work_processed_syscall with p6 set to 1 if the extra-work-check
+ * needs to be redone.
*/
#ifdef CONFIG_PREEMPT
rsm psr.i // disable interrupts
-#else
-(pUStk) rsm psr.i
-#endif
cmp.eq pLvSys,p0=r0,r0 // pLvSys=1: leave from syscall
-(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
-.work_processed_syscall:
-#ifdef CONFIG_PREEMPT
(pKStk) adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
;;
.pred.rel.mutex pUStk,pKStk
(pKStk) ld4 r21=[r20] // r21 <- preempt_count
(pUStk) mov r21=0 // r21 <- 0
;;
-(p6) cmp.eq.unc p6,p0=r21,r0 // p6 <- p6 && (r21 == 0)
-#endif /* CONFIG_PREEMPT */
+ cmp.eq p6,p0=r21,r0 // p6 <- pUStk || (preempt_count == 0)
+#else /* !CONFIG_PREEMPT */
+(pUStk) rsm psr.i
+ cmp.eq pLvSys,p0=r0,r0 // pLvSys=1: leave from syscall
+(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
+#endif
+.work_processed_syscall:
adds r16=PT(LOADRS)+16,r12
adds r17=PT(AR_BSPSTORE)+16,r12
adds r18=TI_FLAGS+IA64_TASK_SIZE,r13
PT_REGS_UNWIND_INFO(0)
/*
* work.need_resched etc. mustn't get changed by this CPU before it returns to
- * user- or fsys-mode, hence we disable interrupts early on:
+ * user- or fsys-mode, hence we disable interrupts early on.
+ *
+ * p6 controls whether current_thread_info()->flags needs to be checked for
+ * extra work. We always check for extra work when returning to user-level.
+ * With CONFIG_PREEMPT, we also check for extra work when the preempt_count
+ * is 0. After extra work processing has been completed, execution
+ * resumes at .work_processed_kernel with p6 set to 1 if the extra-work-check
+ * needs to be redone.
*/
#ifdef CONFIG_PREEMPT
rsm psr.i // disable interrupts
-#else
-(pUStk) rsm psr.i
-#endif
cmp.eq p0,pLvSys=r0,r0 // pLvSys=0: leave from kernel
-(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
- ;;
-.work_processed_kernel:
-#ifdef CONFIG_PREEMPT
- adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
+(pKStk) adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
;;
.pred.rel.mutex pUStk,pKStk
(pKStk) ld4 r21=[r20] // r21 <- preempt_count
(pUStk) mov r21=0 // r21 <- 0
;;
-(p6) cmp.eq.unc p6,p0=r21,r0 // p6 <- p6 && (r21 == 0)
-#endif /* CONFIG_PREEMPT */
+ cmp.eq p6,p0=r21,r0 // p6 <- pUStk || (preempt_count == 0)
+#else
+(pUStk) rsm psr.i
+ cmp.eq p0,pLvSys=r0,r0 // pLvSys=0: leave from kernel
+(pUStk) cmp.eq.unc p6,p0=r0,r0 // p6 <- pUStk
+#endif
+.work_processed_kernel:
adds r17=TI_FLAGS+IA64_TASK_SIZE,r13
;;
(p6) ld4 r31=[r17] // load current_thread_info()->flags
br.cond.sptk.many .work_processed_kernel // re-check
.notify:
- br.call.spnt.many rp=notify_resume_user
+(pUStk) br.call.spnt.many rp=notify_resume_user
.ret10: cmp.ne p6,p0=r0,r0 // p6 <- 0
(pLvSys)br.cond.sptk.many .work_processed_syscall // don't re-check
br.cond.sptk.many .work_processed_kernel // don't re-check
data8 sys_mq_notify
data8 sys_mq_getsetattr
data8 sys_ni_syscall // reserved for kexec_load
- data8 sys_ni_syscall
+ data8 sys_vserver
data8 sys_ni_syscall // 1270
data8 sys_ni_syscall
data8 sys_ni_syscall
add r9=TI_FLAGS+IA64_TASK_SIZE,r16
addl r3=THIS_CPU(cpu_info),r0
- mov.m r31=ar.itc // put time stamp into r31 (ITC) == now (35 cyc)
#ifdef CONFIG_SMP
movl r10=__per_cpu_offset
movl r2=sal_platform_features
;;
ldf8 f8=[r21] // f8 now contains itm_next
+ mov.m r31=ar.itc // put time stamp into r31 (ITC) == now
sub r28=r29, r28, 1 // r28 now contains "-(lost + 1)"
- tbit.nz p9, p10=r23, 0 // p9 <- is_odd(r23), p10 <- is_even(r23)
;;
ld8 r2=[r19] // r2 = sec = xtime.tv_sec
ld8 r29=[r20] // r29 = nsec = xtime.tv_nsec
+ tbit.nz p9, p10=r23, 0 // p9 <- is_odd(r23), p10 <- is_even(r23)
setf.sig f6=r28 // f6 <- -(lost + 1) (6 cyc)
;;
nop 0
;;
- mov r31=ar.itc // re-read ITC in case we .retry (35 cyc)
xma.l f8=f11, f8, f12 // f8 (elapsed_cycles) <- (-1*last_tick + now) = (now - last_tick)
nop 0
;;
* Initialize kernel region registers:
* rr[5]: VHPT enabled, page size = PAGE_SHIFT
* rr[6]: VHPT disabled, page size = IA64_GRANULE_SHIFT
- * rr[5]: VHPT disabled, page size = IA64_GRANULE_SHIFT
+ * rr[7]: VHPT disabled, page size = IA64_GRANULE_SHIFT
*/
mov r16=((ia64_rid(IA64_REGION_ID_KERNEL, (5<<61)) << 8) | (PAGE_SHIFT << 2) | 1)
movl r17=(5<<61)
#endif
;;
tpa r3=r2 // r3 == phys addr of task struct
+ mov r16=-1
+(isBP) br.cond.dpnt .load_current // BP stack is on region 5 --- no need to map it
+
// load mapping for stack (virtaddr in r2, physaddr in r3)
rsm psr.ic
movl r17=PAGE_KERNEL
srlz.d
;;
+.load_current:
// load the "current" pointer (r13) and ar.k6 with the current task
mov IA64_KR(CURRENT)=r2 // virtual address
mov IA64_KR(CURRENT_STACK)=r16
*
* Inputs:
* r16 = new psr to establish
+ * Output:
+ * r19 = old virtual address of ar.bsp
+ * r20 = old virtual address of sp
*
* Note: RSE must already be in enforced lazy mode
*/
mov cr.ipsr=r16 // set new PSR
add r3=1f-ia64_switch_mode_phys,r15
- mov r17=ar.bsp
+ mov r19=ar.bsp
+ mov r20=sp
mov r14=rp // get return address into a general register
;;
// going to physical mode, use tpa to translate virt->phys
- tpa r17=r17
+ tpa r17=r19
tpa r3=r3
tpa sp=sp
tpa r14=r14
*
* Inputs:
* r16 = new psr to establish
+ * r19 = new bspstore to establish
+ * r20 = new sp to establish
*
* Note: RSE must already be in enforced lazy mode
*/
mov cr.ipsr=r16 // set new PSR
add r3=1f-ia64_switch_mode_virt,r15
- mov r17=ar.bsp
mov r14=rp // get return address into a general register
;;
movl r18=KERNEL_START
dep r3=0,r3,KERNEL_TR_PAGE_SHIFT,64-KERNEL_TR_PAGE_SHIFT
dep r14=0,r14,KERNEL_TR_PAGE_SHIFT,64-KERNEL_TR_PAGE_SHIFT
- dep r17=-1,r17,61,3
- dep sp=-1,sp,61,3
+ mov sp=r20
;;
or r3=r3,r18
or r14=r14,r18
;;
mov r18=ar.rnat // save ar.rnat
- mov ar.bspstore=r17 // this steps on ar.rnat
+ mov ar.bspstore=r19 // this steps on ar.rnat
mov cr.iip=r3
mov cr.ifs=r0
;;
#include <asm/unwind.h>
EXPORT_SYMBOL(unw_init_running);
+#include <linux/efi.h>
+EXPORT_SYMBOL(efi_mem_type);
+
#ifdef ASM_SUPPORTED
# ifdef CONFIG_SMP
# if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 3)
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/init_task.h>
+#include <linux/mqueue.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#endif
static spinlock_t iosapic_lock = SPIN_LOCK_UNLOCKED;
-extern cpumask_t __cacheline_aligned pending_irq_cpumask[NR_IRQS];
/* These tables map IA-64 vectors to the IOSAPIC pin that generates this vector. */
spin_lock_irqsave(&iosapic_lock, flags);
{
- writel(IOSAPIC_RTE_HIGH(rte_index), addr + IOSAPIC_REG_SELECT);
- writel(high32, addr + IOSAPIC_WINDOW);
- writel(IOSAPIC_RTE_LOW(rte_index), addr + IOSAPIC_REG_SELECT);
- writel(low32, addr + IOSAPIC_WINDOW);
+ iosapic_write(addr, IOSAPIC_RTE_HIGH(rte_index), high32);
+ iosapic_write(addr, IOSAPIC_RTE_LOW(rte_index), low32);
iosapic_intr_info[vector].low32 = low32;
}
spin_unlock_irqrestore(&iosapic_lock, flags);
spin_lock_irqsave(&iosapic_lock, flags);
{
- writel(IOSAPIC_RTE_LOW(rte_index), addr + IOSAPIC_REG_SELECT);
-
/* set only the mask bit */
low32 = iosapic_intr_info[vec].low32 |= IOSAPIC_MASK;
-
- writel(low32, addr + IOSAPIC_WINDOW);
+ iosapic_write(addr, IOSAPIC_RTE_LOW(rte_index), low32);
}
spin_unlock_irqrestore(&iosapic_lock, flags);
}
spin_lock_irqsave(&iosapic_lock, flags);
{
- writel(IOSAPIC_RTE_LOW(rte_index), addr + IOSAPIC_REG_SELECT);
low32 = iosapic_intr_info[vec].low32 &= ~IOSAPIC_MASK;
- writel(low32, addr + IOSAPIC_WINDOW);
+ iosapic_write(addr, IOSAPIC_RTE_LOW(rte_index), low32);
}
spin_unlock_irqrestore(&iosapic_lock, flags);
}
low32 |= (IOSAPIC_FIXED << IOSAPIC_DELIVERY_SHIFT);
iosapic_intr_info[vec].low32 = low32;
- writel(IOSAPIC_RTE_HIGH(rte_index), addr + IOSAPIC_REG_SELECT);
- writel(high32, addr + IOSAPIC_WINDOW);
- writel(IOSAPIC_RTE_LOW(rte_index), addr + IOSAPIC_REG_SELECT);
- writel(low32, addr + IOSAPIC_WINDOW);
+ iosapic_write(addr, IOSAPIC_RTE_HIGH(rte_index), high32);
+ iosapic_write(addr, IOSAPIC_RTE_LOW(rte_index), low32);
}
spin_unlock_irqrestore(&iosapic_lock, flags);
#endif
}
-static inline void move_irq(int irq)
-{
- /* note - we hold desc->lock */
- cpumask_t tmp;
- irq_desc_t *desc = irq_descp(irq);
-
- if (!cpus_empty(pending_irq_cpumask[irq])) {
- cpus_and(tmp, pending_irq_cpumask[irq], cpu_online_map);
- if (unlikely(!cpus_empty(tmp))) {
- desc->handler->set_affinity(irq, pending_irq_cpumask[irq]);
- }
- cpus_clear(pending_irq_cpumask[irq]);
- }
-}
-
/*
* Handlers for level-triggered interrupts.
*/
ia64_vector vec = irq_to_vector(irq);
move_irq(irq);
- writel(vec, iosapic_intr_info[vec].addr + IOSAPIC_EOI);
+ iosapic_eoi(iosapic_intr_info[vec].addr, vec);
}
#define iosapic_shutdown_level_irq mask_irq
* unsigned int reserved2 : 8;
* }
*/
- writel(IOSAPIC_VERSION, addr + IOSAPIC_REG_SELECT);
- return readl(IOSAPIC_WINDOW + addr);
+ return iosapic_read(addr, IOSAPIC_VERSION);
}
/*
index = find_iosapic(gsi);
if (index < 0) {
- printk(KERN_WARNING "%s: No IOSAPIC for GSI 0x%x\n", __FUNCTION__, gsi);
+ printk(KERN_WARNING "%s: No IOSAPIC for GSI %u\n", __FUNCTION__, gsi);
return;
}
}
}
+static unsigned int
+get_target_cpu (void)
+{
+#ifdef CONFIG_SMP
+ static int cpu = -1;
+
+ /*
+ * If the platform supports redirection via XTP, let it
+ * distribute interrupts.
+ */
+ if (smp_int_redirect & SMP_IRQ_REDIRECTION)
+ return hard_smp_processor_id();
+
+ /*
+ * Some interrupts (ACPI SCI, for instance) are registered
+ * before the BSP is marked as online.
+ */
+ if (!cpu_online(smp_processor_id()))
+ return hard_smp_processor_id();
+
+ /*
+ * Otherwise, round-robin interrupt vectors across all the
+ * processors. (It'd be nice if we could be smarter in the
+ * case of NUMA.)
+ */
+ do {
+ if (++cpu >= NR_CPUS)
+ cpu = 0;
+ } while (!cpu_online(cpu));
+
+ return cpu_physical_id(cpu);
+#else
+ return hard_smp_processor_id();
+#endif
+}
+
/*
* ACPI can describe IOSAPIC interrupts via static tables and namespace
* methods. This provides an interface to register those interrupts and
unsigned long polarity, unsigned long trigger)
{
int vector;
- unsigned int dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
+ unsigned int dest;
+ unsigned long flags;
- vector = gsi_to_vector(gsi);
- if (vector < 0)
- vector = assign_irq_vector(AUTO_ASSIGN);
+ /*
+ * If this GSI has already been registered (i.e., it's a
+ * shared interrupt, or we lost a race to register it),
+ * don't touch the RTE.
+ */
+ spin_lock_irqsave(&iosapic_lock, flags);
+ {
+ vector = gsi_to_vector(gsi);
+ if (vector > 0) {
+ spin_unlock_irqrestore(&iosapic_lock, flags);
+ return vector;
+ }
- register_intr(gsi, vector, IOSAPIC_LOWEST_PRIORITY,
- polarity, trigger);
+ vector = assign_irq_vector(AUTO_ASSIGN);
+ dest = get_target_cpu();
+ register_intr(gsi, vector, IOSAPIC_LOWEST_PRIORITY,
+ polarity, trigger);
+ }
+ spin_unlock_irqrestore(&iosapic_lock, flags);
- printk(KERN_INFO "GSI 0x%x(%s,%s) -> CPU 0x%04x vector %d\n",
- gsi, (polarity == IOSAPIC_POL_HIGH ? "high" : "low"),
- (trigger == IOSAPIC_EDGE ? "edge" : "level"), dest, vector);
+ printk(KERN_INFO "GSI %u (%s, %s) -> CPU %d (0x%04x) vector %d\n",
+ gsi, (trigger == IOSAPIC_EDGE ? "edge" : "level"),
+ (polarity == IOSAPIC_POL_HIGH ? "high" : "low"),
+ cpu_logical_id(dest), dest, vector);
- /* program the IOSAPIC routing table */
- set_rte(vector, dest, 0);
+ set_rte(vector, dest, 1);
return vector;
}
int iosapic_vector, u16 eid, u16 id,
unsigned long polarity, unsigned long trigger)
{
+ static const char * const name[] = {"unknown", "PMI", "INIT", "CPEI"};
unsigned char delivery;
- int vector;
+ int vector, mask = 0;
unsigned int dest = ((id << 8) | eid) & 0xffff;
switch (int_type) {
case ACPI_INTERRUPT_CPEI:
vector = IA64_CPE_VECTOR;
delivery = IOSAPIC_LOWEST_PRIORITY;
+ mask = 1;
break;
default:
- printk(KERN_ERR "iosapic_register_platform_irq(): invalid int type\n");
+ printk(KERN_ERR "iosapic_register_platform_irq(): invalid int type 0x%x\n", int_type);
return -1;
}
- register_intr(gsi, vector, delivery, polarity,
- trigger);
+ register_intr(gsi, vector, delivery, polarity, trigger);
- printk(KERN_INFO "PLATFORM int 0x%x: GSI 0x%x(%s,%s) -> CPU 0x%04x vector %d\n",
- int_type, gsi, (polarity == IOSAPIC_POL_HIGH ? "high" : "low"),
- (trigger == IOSAPIC_EDGE ? "edge" : "level"), dest, vector);
+ printk(KERN_INFO "PLATFORM int %s (0x%x): GSI %u (%s, %s) -> CPU %d (0x%04x) vector %d\n",
+ int_type < ARRAY_SIZE(name) ? name[int_type] : "unknown",
+ int_type, gsi, (trigger == IOSAPIC_EDGE ? "edge" : "level"),
+ (polarity == IOSAPIC_POL_HIGH ? "high" : "low"),
+ cpu_logical_id(dest), dest, vector);
- /* program the IOSAPIC routing table */
- set_rte(vector, dest, 0);
+ set_rte(vector, dest, mask);
return vector;
}
unsigned long trigger)
{
int vector;
- unsigned int dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
+ unsigned int dest = hard_smp_processor_id();
vector = isa_irq_to_vector(isa_irq);
register_intr(gsi, vector, IOSAPIC_LOWEST_PRIORITY, polarity, trigger);
- DBG("ISA: IRQ %u -> GSI 0x%x (%s,%s) -> CPU 0x%04x vector %d\n",
- isa_irq, gsi, polarity == IOSAPIC_POL_HIGH ? "high" : "low",
- trigger == IOSAPIC_EDGE ? "edge" : "level", dest, vector);
+ DBG("ISA: IRQ %u -> GSI %u (%s,%s) -> CPU %d (0x%04x) vector %d\n",
+ isa_irq, gsi, trigger == IOSAPIC_EDGE ? "edge" : "level",
+ polarity == IOSAPIC_POL_HIGH ? "high" : "low",
+ cpu_logical_id(dest), dest, vector);
- /* program the IOSAPIC routing table */
- set_rte(vector, dest, 0);
+ set_rte(vector, dest, 1);
}
void __init
iosapic_override_isa_irq(isa_irq, isa_irq, IOSAPIC_POL_HIGH, IOSAPIC_EDGE);
}
}
-
-void
-iosapic_enable_intr (unsigned int vector)
-{
- unsigned int dest;
- irq_desc_t *desc;
-
- /*
- * In the case of a shared interrupt, do not re-route the vector, and
- * especially do not mask a running interrupt (startup will not get
- * called for a shared interrupt).
- */
- desc = irq_descp(vector);
- if (desc->action)
- return;
-
-#ifdef CONFIG_SMP
- /*
- * For platforms that do not support interrupt redirect via the XTP interface, we
- * can round-robin the PCI device interrupts to the processors
- */
- if (!(smp_int_redirect & SMP_IRQ_REDIRECTION)) {
- static int cpu_index = -1;
-
- do
- if (++cpu_index >= NR_CPUS)
- cpu_index = 0;
- while (!cpu_online(cpu_index));
-
- dest = cpu_physical_id(cpu_index) & 0xffff;
- } else {
- /*
- * Direct the interrupt vector to the current cpu, platform redirection
- * will distribute them.
- */
- dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
- }
-#else
- /* direct the interrupt vector to the running cpu id */
- dest = (ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff;
-#endif
- set_rte(vector, dest, 1);
-
- printk(KERN_INFO "IOSAPIC: vector %d -> CPU 0x%04x, enabled\n",
- vector, dest);
-}
-
-#ifdef CONFIG_ACPI_PCI
-
-void __init
-iosapic_parse_prt (void)
-{
- struct acpi_prt_entry *entry;
- struct list_head *node;
- unsigned int gsi;
- int vector;
- char pci_id[16];
- struct hw_interrupt_type *irq_type = &irq_type_iosapic_level;
- irq_desc_t *idesc;
-
- list_for_each(node, &acpi_prt.entries) {
- entry = list_entry(node, struct acpi_prt_entry, node);
-
- /* We're only interested in static (non-link) entries. */
- if (entry->link.handle)
- continue;
-
- gsi = entry->link.index;
-
- vector = gsi_to_vector(gsi);
- if (vector < 0) {
- if (find_iosapic(gsi) < 0)
- continue;
-
- /* allocate a vector for this interrupt line */
- if (pcat_compat && (gsi < 16))
- vector = isa_irq_to_vector(gsi);
- else
- /* new GSI; allocate a vector for it */
- vector = assign_irq_vector(AUTO_ASSIGN);
-
- register_intr(gsi, vector, IOSAPIC_LOWEST_PRIORITY, IOSAPIC_POL_LOW,
- IOSAPIC_LEVEL);
- }
- entry->irq = vector;
- snprintf(pci_id, sizeof(pci_id), "%02x:%02x:%02x[%c]",
- entry->id.segment, entry->id.bus, entry->id.device, 'A' + entry->pin);
-
- /*
- * If vector was previously initialized to a different
- * handler, re-initialize.
- */
- idesc = irq_descp(vector);
- if (idesc->handler != irq_type)
- register_intr(gsi, vector, IOSAPIC_LOWEST_PRIORITY, IOSAPIC_POL_LOW,
- IOSAPIC_LEVEL);
-
- }
-}
-
-#endif /* CONFIG_ACPI */
action->handler = handler;
action->flags = irqflags;
- action->mask = 0;
+ cpus_clear(action->mask);
action->name = devname;
action->next = NULL;
action->dev_id = dev_id;
return full_count;
}
+void move_irq(int irq)
+{
+ /* note - we hold desc->lock */
+ cpumask_t tmp;
+ irq_desc_t *desc = irq_descp(irq);
+
+ if (!cpus_empty(pending_irq_cpumask[irq])) {
+ cpus_and(tmp, pending_irq_cpumask[irq], cpu_online_map);
+ if (unlikely(!cpus_empty(tmp))) {
+ desc->handler->set_affinity(irq, pending_irq_cpumask[irq]);
+ }
+ cpus_clear(pending_irq_cpumask[irq]);
+ }
+}
+
+
#endif /* CONFIG_SMP */
#ifdef CONFIG_HOTPLUG_CPU
;;
ld4 r2=[r2] // r2 = current_thread_info()->flags
;;
- tbit.z p8,p0=r2,TIF_SYSCALL_TRACE
+ and r2=_TIF_SYSCALL_TRACEAUDIT,r2 // mask trace or audit
+ ;;
+ cmp.eq p8,p0=r2,r0
mov b6=r20
;;
(p8) br.call.sptk.many b6=b6 // ignore this return addr
ld4 r2=[r2] // r2 = current_thread_info()->flags
;;
ld8 r16=[r16]
- tbit.z p8,p0=r2,TIF_SYSCALL_TRACE
+ and r2=_TIF_SYSCALL_TRACEAUDIT,r2 // mask trace or audit
;;
mov b6=r16
movl r15=ia32_ret_from_syscall
+ cmp.eq p8,p0=r2,r0
;;
mov rp=r15
(p8) br.call.sptk.many b6=b6
#endif /* CONFIG_IA64_GENERIC */
-void
-machvec_noop (void)
-{
-}
-EXPORT_SYMBOL(machvec_noop);
-
void
machvec_setup (char **arg)
{
#define MAX_CPE_POLL_INTERVAL (15*60*HZ) /* 15 minutes */
#define MIN_CPE_POLL_INTERVAL (2*60*HZ) /* 2 minutes */
#define CMC_POLL_INTERVAL (1*60*HZ) /* 1 minute */
+#define CPE_HISTORY_LENGTH 5
#define CMC_HISTORY_LENGTH 5
static struct timer_list cpe_poll_timer;
*/
static int cpe_poll_enabled = 1;
+static int cpe_vector = -1;
+
extern void salinfo_log_wakeup(int type, u8 *buffer, u64 size, int irqsafe);
/*
u8 *buffer;
u64 size;
int irq_safe = sal_info_type != SAL_INFO_TYPE_MCA && sal_info_type != SAL_INFO_TYPE_INIT;
+#ifdef IA64_MCA_DEBUG_INFO
static const char * const rec_name[] = { "MCA", "INIT", "CMC", "CPE" };
+#endif
size = ia64_log_get(sal_info_type, &buffer, irq_safe);
if (!size)
salinfo_log_wakeup(sal_info_type, buffer, size, irq_safe);
if (irq_safe)
- printk(KERN_INFO "CPU %d: SAL log contains %s error record\n",
+ IA64_MCA_DEBUG("CPU %d: SAL log contains %s error record\n",
smp_processor_id(),
sal_info_type < ARRAY_SIZE(rec_name) ? rec_name[sal_info_type] : "UNKNOWN");
*/
#ifndef PLATFORM_MCA_HANDLERS
+#ifdef CONFIG_ACPI
+
static irqreturn_t
ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs)
{
- IA64_MCA_DEBUG("%s: received interrupt. CPU:%d vector = %#x\n",
- __FUNCTION__, smp_processor_id(), cpe_irq);
+ static unsigned long cpe_history[CPE_HISTORY_LENGTH];
+ static int index;
+ static spinlock_t cpe_history_lock = SPIN_LOCK_UNLOCKED;
+
+ IA64_MCA_DEBUG("%s: received interrupt vector = %#x on CPU %d\n",
+ __FUNCTION__, cpe_irq, smp_processor_id());
/* SAL spec states this should run w/ interrupts enabled */
local_irq_enable();
- /* Get the CMC error record and log it */
+ /* Get the CPE error record and log it */
ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
+
+ spin_lock(&cpe_history_lock);
+ if (!cpe_poll_enabled && cpe_vector >= 0) {
+
+ int i, count = 1; /* we know 1 happened now */
+ unsigned long now = jiffies;
+
+ for (i = 0; i < CPE_HISTORY_LENGTH; i++) {
+ if (now - cpe_history[i] <= HZ)
+ count++;
+ }
+
+ IA64_MCA_DEBUG(KERN_INFO "CPE threshold %d/%d\n", count, CPE_HISTORY_LENGTH);
+ if (count >= CPE_HISTORY_LENGTH) {
+
+ cpe_poll_enabled = 1;
+ spin_unlock(&cpe_history_lock);
+ disable_irq_nosync(local_vector_to_irq(IA64_CPE_VECTOR));
+
+ /*
+ * Corrected errors will still be corrected, but
+ * make sure there's a log somewhere that indicates
+ * something is generating more than we can handle.
+ */
+ printk(KERN_WARNING "WARNING: Switching to polling CPE handler; error records may be lost\n");
+
+ mod_timer(&cpe_poll_timer, jiffies + MIN_CPE_POLL_INTERVAL);
+
+ /* lock already released, get out now */
+ return IRQ_HANDLED;
+ } else {
+ cpe_history[index++] = now;
+ if (index == CPE_HISTORY_LENGTH)
+ index = 0;
+ }
+ }
+ spin_unlock(&cpe_history_lock);
return IRQ_HANDLED;
}
+#endif /* CONFIG_ACPI */
+
static void
show_min_state (pal_min_state_area_t *minstate)
{
cmcv = (cmcv_reg_t)ia64_getreg(_IA64_REG_CR_CMCV);
cmcv.cmcv_mask = 1; /* Mask/disable interrupt */
- ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval)
+ ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval);
IA64_MCA_DEBUG("%s: CPU %d corrected "
"machine check vector %#x disabled.\n",
cmcv = (cmcv_reg_t)ia64_getreg(_IA64_REG_CR_CMCV);
cmcv.cmcv_mask = 0; /* Unmask/enable interrupt */
- ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval)
+ ia64_setreg(_IA64_REG_CR_CMCV, cmcv.cmcv_regval);
IA64_MCA_DEBUG("%s: CPU %d corrected "
"machine check vector %#x enabled.\n",
* handled
*/
static irqreturn_t
-ia64_mca_cmc_int_caller(int cpe_irq, void *arg, struct pt_regs *ptregs)
+ia64_mca_cmc_int_caller(int cmc_irq, void *arg, struct pt_regs *ptregs)
{
static int start_count = -1;
unsigned int cpuid;
if (start_count == -1)
start_count = IA64_LOG_COUNT(SAL_INFO_TYPE_CMC);
- ia64_mca_cmc_int_handler(cpe_irq, arg, ptregs);
+ ia64_mca_cmc_int_handler(cmc_irq, arg, ptregs);
for (++cpuid ; cpuid < NR_CPUS && !cpu_online(cpuid) ; cpuid++);
* Outputs
* handled
*/
+#ifdef CONFIG_ACPI
+
static irqreturn_t
ia64_mca_cpe_int_caller(int cpe_irq, void *arg, struct pt_regs *ptregs)
{
static int start_count = -1;
- static int poll_time = MAX_CPE_POLL_INTERVAL;
+ static int poll_time = MIN_CPE_POLL_INTERVAL;
unsigned int cpuid;
cpuid = smp_processor_id();
} else {
/*
* If a log was recorded, increase our polling frequency,
- * otherwise, backoff.
+ * otherwise, backoff or return to interrupt mode.
*/
if (start_count != IA64_LOG_COUNT(SAL_INFO_TYPE_CPE)) {
poll_time = max(MIN_CPE_POLL_INTERVAL, poll_time / 2);
- } else {
+ } else if (cpe_vector < 0) {
poll_time = min(MAX_CPE_POLL_INTERVAL, poll_time * 2);
+ } else {
+ poll_time = MIN_CPE_POLL_INTERVAL;
+
+ printk(KERN_WARNING "Returning to interrupt driven CPE handler\n");
+ enable_irq(local_vector_to_irq(IA64_CPE_VECTOR));
+ cpe_poll_enabled = 0;
}
+
+ if (cpe_poll_enabled)
+ mod_timer(&cpe_poll_timer, jiffies + poll_time);
start_count = -1;
- mod_timer(&cpe_poll_timer, jiffies + poll_time);
}
return IRQ_HANDLED;
}
+#endif /* CONFIG_ACPI */
+
/*
* ia64_mca_cpe_poll
*
register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction);
#ifdef CONFIG_ACPI
- /* Setup the CPE interrupt vector */
+ /* Setup the CPEI/P vector and handler */
{
irq_desc_t *desc;
unsigned int irq;
- int cpev = acpi_request_vector(ACPI_INTERRUPT_CPEI);
- if (cpev >= 0) {
+ cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI);
+
+ if (cpe_vector >= 0) {
for (irq = 0; irq < NR_IRQS; ++irq)
- if (irq_to_vector(irq) == cpev) {
+ if (irq_to_vector(irq) == cpe_vector) {
desc = irq_descp(irq);
desc->status |= IRQ_PER_CPU;
setup_irq(irq, &mca_cpe_irqaction);
}
- ia64_mca_register_cpev(cpev);
+ ia64_mca_register_cpev(cpe_vector);
}
+ register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction);
}
#endif
#ifdef CONFIG_ACPI
/* If platform doesn't support CPEI, get the timer going. */
- if (acpi_request_vector(ACPI_INTERRUPT_CPEI) < 0 && cpe_poll_enabled) {
- register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction);
+ if (cpe_vector < 0 && cpe_poll_enabled) {
ia64_mca_cpe_poll(0UL);
+ } else {
+ cpe_poll_enabled = 0;
}
#endif
case RV_PCREL:
switch (r_type) {
case R_IA64_PCREL21B:
- /* special because it can cross into other module/kernel-core. */
- if (!is_internal(mod, val))
+ if ((in_init(mod, val) && in_core(mod, (uint64_t)location)) ||
+ (in_core(mod, val) && in_init(mod, (uint64_t)location))) {
+ /*
+ * Init section may have been allocated far away from core,
+ * if the branch won't reach, then allocate a plt for it.
+ */
+ uint64_t delta = ((int64_t)val - (int64_t)location) / 16;
+ if (delta + (1 << 20) >= (1 << 21)) {
+ val = get_fdesc(mod, val, &ok);
+ val = get_plt(mod, location, val, &ok);
+ }
+ } else if (!is_internal(mod, val))
val = get_plt(mod, location, val, &ok);
/* FALL THROUGH */
default:
*
*/
GLOBAL_ENTRY(ia64_pal_call_static)
- .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6)
- alloc loc1 = ar.pfs,6,90,0,0
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
+ alloc loc1 = ar.pfs,5,5,0,0
movl loc2 = pal_entry_point
1: {
mov r28 = in0
ld8 loc2 = [loc2] // loc2 <- entry point
tbit.nz p6,p7 = in4, 0
adds r8 = 1f-1b,r8
+ mov loc4=ar.rsc // save RSE configuration
;;
+ mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov loc3 = psr
mov loc0 = rp
.body
mov rp = r8
br.cond.sptk.many b7
1: mov psr.l = loc3
+ mov ar.rsc = loc4 // restore RSE configuration
mov ar.pfs = loc1
mov rp = loc0
;;
 * in2 - in3 Remaining PAL arguments
*/
GLOBAL_ENTRY(ia64_pal_call_stacked)
- .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
- alloc loc1 = ar.pfs,5,4,87,0
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(4)
+ alloc loc1 = ar.pfs,4,4,4,0
movl loc2 = pal_entry_point
mov r28 = in0 // Index MUST be copied to r28
GLOBAL_ENTRY(ia64_pal_call_phys_static)
- .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6)
- alloc loc1 = ar.pfs,6,90,0,0
+ .prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(4)
+ alloc loc1 = ar.pfs,4,7,0,0
movl loc2 = pal_entry_point
1: {
mov r28 = in0 // copy procedure index
andcm r16=loc3,r16 // removes bits to clear from psr
br.call.sptk.many rp=ia64_switch_mode_phys
.ret1: mov rp = r8 // install return address (physical)
+ mov loc5 = r19
+ mov loc6 = r20
br.cond.sptk.many b7
1:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
+ mov r19=loc5
+ mov r20=loc6
br.call.sptk.many rp=ia64_switch_mode_virt // return to virtual mode
.ret2:
mov psr.l = loc3 // restore init PSR
*/
GLOBAL_ENTRY(ia64_pal_call_phys_stacked)
.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(5)
- alloc loc1 = ar.pfs,5,5,86,0
+ alloc loc1 = ar.pfs,5,7,4,0
movl loc2 = pal_entry_point
1: {
mov r28 = in0 // copy procedure index
andcm r16=loc3,r16 // removes bits to clear from psr
br.call.sptk.many rp=ia64_switch_mode_phys
.ret6:
+ mov loc5 = r19
+ mov loc6 = r20
br.call.sptk.many rp=b7 // now make the call
.ret7:
mov ar.rsc=0 // put RSE in enforced lazy, LE mode
mov r16=loc3 // r16= original psr
+ mov r19=loc5
+ mov r20=loc6
br.call.sptk.many rp=ia64_switch_mode_virt // return to virtual mode
.ret8: mov psr.l = loc3 // restore init PSR
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
- "Enable Cache Line Repl. Exclusive",
"Enable Cache Line Repl. Shared",
+ "Enable Cache Line Repl. Exclusive",
"Disable Transaction Queuing",
- "Disable Reponse Error Checking",
+ "Disable Response Error Checking",
"Disable Bus Error Checking",
"Disable Bus Requester Internal Error Signalling",
"Disable Bus Requester Error Signalling",
unsigned int ctx_cpu; /* cpu to which perfmon is applied (system wide) */
int ctx_fd; /* file descriptor used my this context */
+ pfm_ovfl_arg_t ctx_ovfl_arg; /* argument to custom buffer format handler */
pfm_buffer_fmt_t *ctx_buf_fmt; /* buffer format callbacks */
void *ctx_smpl_hdr; /* points to sampling buffer header kernel vaddr */
static inline unsigned long
pfm_get_unmapped_area(struct file *file, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags, unsigned long exec)
{
- return get_unmapped_area(file, addr, len, pgoff, flags);
+ return get_unmapped_area(file, addr, len, pgoff, flags, 0);
}
*/
insert_vm_struct(mm, vma);
- mm->total_vm += size >> PAGE_SHIFT;
+ // mm->total_vm += size >> PAGE_SHIFT;
+ vx_vmpages_add(mm, size >> PAGE_SHIFT);
up_write(&task->mm->mmap_sem);
return 0;
}
-static void
-pfm_force_cleanup(pfm_context_t *ctx, struct pt_regs *regs)
-{
- struct task_struct *task = ctx->ctx_task;
-
- ia64_psr(regs)->up = 0;
- ia64_psr(regs)->sp = 1;
-
- if (GET_PMU_OWNER() == task) {
- DPRINT(("cleared ownership for [%d]\n", ctx->ctx_task->pid));
- SET_PMU_OWNER(NULL, NULL);
- }
-
- /*
- * disconnect the task from the context and vice-versa
- */
- PFM_SET_WORK_PENDING(task, 0);
-
- task->thread.pfm_context = NULL;
- task->thread.flags &= ~IA64_THREAD_PM_VALID;
-
- DPRINT(("force cleanupf for [%d]\n", task->pid));
-}
-
-
/*
* called only from exit_thread(): task == current
pfm_check_task_state(pfm_context_t *ctx, int cmd, unsigned long flags)
{
struct task_struct *task;
- int state;
+ int state, old_state;
+recheck:
state = ctx->ctx_state;
+ task = ctx->ctx_task;
- task = PFM_CTX_TASK(ctx);
if (task == NULL) {
DPRINT(("context %d no task, state=%d\n", ctx->ctx_fd, state));
return 0;
}
DPRINT(("context %d state=%d [%d] task_state=%ld must_stop=%d\n",
- ctx->ctx_fd,
- state,
- task->pid,
- task->state, PFM_CMD_STOPPED(cmd)));
+ ctx->ctx_fd,
+ state,
+ task->pid,
+ task->state, PFM_CMD_STOPPED(cmd)));
/*
* self-monitoring always ok.
if (task == current || ctx->ctx_fl_system) return 0;
/*
- * context is UNLOADED, MASKED we are safe to go
+ * no command can operate on a zombie context
*/
- if (state != PFM_CTX_LOADED) return 0;
+ if (state == PFM_CTX_ZOMBIE) {
+ DPRINT(("cmd %d state zombie cannot operate on context\n", cmd));
+ return -EINVAL;
+ }
- if (state == PFM_CTX_ZOMBIE) return -EINVAL;
+ /*
+ * if context is UNLOADED, MASKED we are safe to go
+ */
+ if (state != PFM_CTX_LOADED) return 0;
/*
- * context is loaded, we must make sure the task is stopped
+ * context is LOADED, we must make sure the task is stopped
* We could lift this restriction for UP but it would mean that
* the user has no guarantee the task would not run between
* two successive calls to perfmonctl(). That's probably OK.
* If this user wants to ensure the task does not run, then
* the task must be stopped.
*/
- if (PFM_CMD_STOPPED(cmd) && task->state != TASK_STOPPED) {
- DPRINT(("[%d] task not in stopped state\n", task->pid));
- return -EBUSY;
- }
+ if (PFM_CMD_STOPPED(cmd)) {
+ if (task->state != TASK_STOPPED) {
+ DPRINT(("[%d] task not in stopped state\n", task->pid));
+ return -EBUSY;
+ }
+ /*
+ * task is now stopped, wait for ctxsw out
+ *
+ * This is an interesting point in the code.
+ * We need to unprotect the context because
+	 * the pfm_save_regs() routine needs to grab
+	 * the same lock. There is a danger in doing
+ * this because it leaves a window open for
+ * another task to get access to the context
+ * and possibly change its state. The one thing
+ * that is not possible is for the context to disappear
+ * because we are protected by the VFS layer, i.e.,
+ * get_fd()/put_fd().
+ */
+ old_state = state;
- UNPROTECT_CTX(ctx, flags);
+ UNPROTECT_CTX(ctx, flags);
- wait_task_inactive(task);
+ wait_task_inactive(task);
- PROTECT_CTX(ctx, flags);
+ PROTECT_CTX(ctx, flags);
+ /*
+ * we must recheck to verify if state has changed
+ */
+ if (ctx->ctx_state != old_state) {
+ DPRINT(("old_state=%d new_state=%d\n", old_state, ctx->ctx_state));
+ goto recheck;
+ }
+ }
return 0;
}
static void
pfm_overflow_handler(struct task_struct *task, pfm_context_t *ctx, u64 pmc0, struct pt_regs *regs)
{
- pfm_ovfl_arg_t ovfl_arg;
+ pfm_ovfl_arg_t *ovfl_arg;
unsigned long mask;
unsigned long old_val, ovfl_val, new_val;
unsigned long ovfl_notify = 0UL, ovfl_pmds = 0UL, smpl_pmds = 0UL, reset_pmds;
int j, k, ret = 0;
int this_cpu = smp_processor_id();
- pmd_mask = ovfl_pmds >> PMU_FIRST_COUNTER;
+ pmd_mask = ovfl_pmds >> PMU_FIRST_COUNTER;
+ ovfl_arg = &ctx->ctx_ovfl_arg;
prefetch(ctx->ctx_smpl_hdr);
if ((pmd_mask & 0x1) == 0) continue;
- ovfl_arg.ovfl_pmd = (unsigned char )i;
- ovfl_arg.ovfl_notify = ovfl_notify & mask ? 1 : 0;
- ovfl_arg.active_set = 0;
- ovfl_arg.ovfl_ctrl.val = 0; /* module must fill in all fields */
- ovfl_arg.smpl_pmds[0] = smpl_pmds = ctx->ctx_pmds[i].smpl_pmds[0];
+ ovfl_arg->ovfl_pmd = (unsigned char )i;
+ ovfl_arg->ovfl_notify = ovfl_notify & mask ? 1 : 0;
+ ovfl_arg->active_set = 0;
+ ovfl_arg->ovfl_ctrl.val = 0; /* module must fill in all fields */
+ ovfl_arg->smpl_pmds[0] = smpl_pmds = ctx->ctx_pmds[i].smpl_pmds[0];
- ovfl_arg.pmd_value = ctx->ctx_pmds[i].val;
- ovfl_arg.pmd_last_reset = ctx->ctx_pmds[i].lval;
- ovfl_arg.pmd_eventid = ctx->ctx_pmds[i].eventid;
+ ovfl_arg->pmd_value = ctx->ctx_pmds[i].val;
+ ovfl_arg->pmd_last_reset = ctx->ctx_pmds[i].lval;
+ ovfl_arg->pmd_eventid = ctx->ctx_pmds[i].eventid;
/*
* copy values of pmds of interest. Sampling format may copy them
if (smpl_pmds) {
for(j=0, k=0; smpl_pmds; j++, smpl_pmds >>=1) {
if ((smpl_pmds & 0x1) == 0) continue;
- ovfl_arg.smpl_pmds_values[k++] = PMD_IS_COUNTING(j) ? pfm_read_soft_counter(ctx, j) : ia64_get_pmd(j);
- DPRINT_ovfl(("smpl_pmd[%d]=pmd%u=0x%lx\n", k-1, j, ovfl_arg.smpl_pmds_values[k-1]));
+ ovfl_arg->smpl_pmds_values[k++] = PMD_IS_COUNTING(j) ? pfm_read_soft_counter(ctx, j) : ia64_get_pmd(j);
+ DPRINT_ovfl(("smpl_pmd[%d]=pmd%u=0x%lx\n", k-1, j, ovfl_arg->smpl_pmds_values[k-1]));
}
}
/*
* call custom buffer format record (handler) routine
*/
- ret = (*ctx->ctx_buf_fmt->fmt_handler)(task, ctx->ctx_smpl_hdr, &ovfl_arg, regs, tstamp);
+ ret = (*ctx->ctx_buf_fmt->fmt_handler)(task, ctx->ctx_smpl_hdr, ovfl_arg, regs, tstamp);
end_cycles = ia64_get_itc();
* For those controls, we take the union because they have
* an all or nothing behavior.
*/
- ovfl_ctrl.bits.notify_user |= ovfl_arg.ovfl_ctrl.bits.notify_user;
- ovfl_ctrl.bits.block_task |= ovfl_arg.ovfl_ctrl.bits.block_task;
- ovfl_ctrl.bits.mask_monitoring |= ovfl_arg.ovfl_ctrl.bits.mask_monitoring;
+ ovfl_ctrl.bits.notify_user |= ovfl_arg->ovfl_ctrl.bits.notify_user;
+ ovfl_ctrl.bits.block_task |= ovfl_arg->ovfl_ctrl.bits.block_task;
+ ovfl_ctrl.bits.mask_monitoring |= ovfl_arg->ovfl_ctrl.bits.mask_monitoring;
/*
* build the bitmask of pmds to reset now
*/
- if (ovfl_arg.ovfl_ctrl.bits.reset_ovfl_pmds) reset_pmds |= mask;
+ if (ovfl_arg->ovfl_ctrl.bits.reset_ovfl_pmds) reset_pmds |= mask;
pfm_stats[this_cpu].pfm_smpl_handler_cycles += end_cycles - start_cycles;
}
}
#ifdef CONFIG_SMP
+
+static void
+pfm_force_cleanup(pfm_context_t *ctx, struct pt_regs *regs)
+{
+ struct task_struct *task = ctx->ctx_task;
+
+ ia64_psr(regs)->up = 0;
+ ia64_psr(regs)->sp = 1;
+
+ if (GET_PMU_OWNER() == task) {
+ DPRINT(("cleared ownership for [%d]\n", ctx->ctx_task->pid));
+ SET_PMU_OWNER(NULL, NULL);
+ }
+
+ /*
+ * disconnect the task from the context and vice-versa
+ */
+ PFM_SET_WORK_PENDING(task, 0);
+
+ task->thread.pfm_context = NULL;
+ task->thread.flags &= ~IA64_THREAD_PM_VALID;
+
+ DPRINT(("force cleanup for [%d]\n", task->pid));
+}
+
+
/*
* in 2.6, interrupts are masked when we come here and the runqueue lock is held
*/
ia32_save_state(p);
if (clone_flags & CLONE_SETTLS)
retval = ia32_clone_tls(p, child_ptregs);
+
+ /* Copy partially mapped page list */
+ if (!retval)
+ retval = ia32_copy_partial_page_list(p, clone_flags);
}
#endif
/* drop floating-point and debug-register state if it exists: */
current->thread.flags &= ~(IA64_THREAD_FPH_VALID | IA64_THREAD_DBG_VALID);
ia64_drop_fpu(current);
+ if (IS_IA32_PROCESS(ia64_task_regs(current)))
+ ia32_drop_partial_page_list(current);
}
/*
if (current->thread.flags & IA64_THREAD_DBG_VALID)
pfm_release_debug_registers(current);
#endif
+ if (IS_IA32_PROCESS(ia64_task_regs(current)))
+ ia32_drop_partial_page_list(current);
}
unsigned long
read_unlock(&tasklist_lock);
if (!child)
goto out;
+ if (!vx_check(vx_task_xid(child), VX_WATCH|VX_IDENT))
+ goto out_tsk;
+
ret = -EPERM;
if (pid == 1) /* no messing around with init! */
goto out_tsk;
return ret;
}
-/* "asmlinkage" so the input arguments are preserved... */
-asmlinkage void
+void
syscall_trace (void)
{
if (!test_thread_flag(TIF_SYSCALL_TRACE))
current->exit_code = 0;
}
}
+
+/* "asmlinkage" so the input arguments are preserved... */
+
+asmlinkage void
+syscall_trace_enter (long arg0, long arg1, long arg2, long arg3,
+ long arg4, long arg5, long arg6, long arg7, long stack)
+{
+ struct pt_regs *regs = (struct pt_regs *) &stack;
+ long syscall;
+
+ if (unlikely(current->audit_context)) {
+ if (IS_IA32_PROCESS(regs))
+ syscall = regs->r1;
+ else
+ syscall = regs->r15;
+
+ audit_syscall_entry(current, syscall, arg0, arg1, arg2, arg3);
+ }
+
+ if (test_thread_flag(TIF_SYSCALL_TRACE) && (current->ptrace & PT_PTRACED))
+ syscall_trace();
+}
+
+/* "asmlinkage" so the input arguments are preserved... */
+
+asmlinkage void
+syscall_trace_leave (long arg0, long arg1, long arg2, long arg3,
+ long arg4, long arg5, long arg6, long arg7, long stack)
+{
+ if (unlikely(current->audit_context))
+ audit_syscall_exit(current, ((struct pt_regs *) &stack)->r8);
+
+ if (test_thread_flag(TIF_SYSCALL_TRACE) && (current->ptrace & PT_PTRACED))
+ syscall_trace();
+}
break;
}
}
+
+static void __init
+chk_nointroute_opt(void)
+{
+ char *cp;
+ extern char saved_command_line[];
+
+ for (cp = saved_command_line; *cp; ) {
+ if (memcmp(cp, "nointroute", 10) == 0) {
+ no_int_routing = 1;
+ printk ("no_int_routing on\n");
+ break;
+ } else {
+ while (*cp != ' ' && *cp)
+ ++cp;
+ while (*cp == ' ')
+ ++cp;
+ }
+ }
+}
+
#else
static void __init sal_desc_ap_wakeup(void *p) { }
#endif
printk(KERN_ERR "bad signature in system table!");
check_versions(systab);
+#ifdef CONFIG_SMP
+ chk_nointroute_opt();
+#endif
/* revisions are coded in BCD, so %x does the job for us */
printk(KERN_INFO "SAL %x.%x: %.32s %.32s%sversion %x.%x\n",
#include <asm/sal.h>
#include <asm/sections.h>
#include <asm/serial.h>
+#include <asm/setup.h>
#include <asm/smp.h>
#include <asm/system.h>
#include <asm/unistd.h>
unsigned long ia64_max_iommu_merge_mask = ~0UL;
EXPORT_SYMBOL(ia64_max_iommu_merge_mask);
-#define COMMAND_LINE_SIZE 512
-
-char saved_command_line[COMMAND_LINE_SIZE]; /* used in proc filesystem */
-
/*
* We use a special marker for the end of memory and it uses the extra (+1) slot
*/
}
#endif
+/**
+ * early_console_setup - setup debugging console
+ *
+ * Consoles started here require little enough setup that we can start using
+ * them very early in the boot process, either right after the machine
+ * vector initialization, or even before if the drivers can detect their hw.
+ *
+ * Returns non-zero if a console couldn't be set up.
+ */
+static inline int __init
+early_console_setup (void)
+{
+#ifdef CONFIG_SERIAL_SGI_L1_CONSOLE
+ {
+ extern int sn_serial_console_early_setup(void);
+ if(!sn_serial_console_early_setup())
+ return 0;
+ }
+#endif
+
+ return -1;
+}
+
void __init
setup_arch (char **cmdline_p)
{
ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist);
*cmdline_p = __va(ia64_boot_param->command_line);
- strlcpy(saved_command_line, *cmdline_p, sizeof(saved_command_line));
+ strlcpy(saved_command_line, *cmdline_p, COMMAND_LINE_SIZE);
efi_init();
io_port_init();
machvec_init(acpi_get_sysname());
#endif
+#ifdef CONFIG_SMP
+ /* If we register an early console, allow CPU 0 to printk */
+ if (!early_console_setup())
+ cpu_set(smp_processor_id(), cpu_online_map);
+#endif
+
#ifdef CONFIG_ACPI_BOOT
/* Initialize the ACPI boot-time table parser */
acpi_table_init();
#ifdef CONFIG_ACPI_BOOT
acpi_boot_init();
#endif
-#ifdef CONFIG_SERIAL_8250_CONSOLE
-#ifdef CONFIG_SERIAL_8250_HCDP
- if (efi.hcdp) {
- void setup_serial_hcdp(void *);
- setup_serial_hcdp(efi.hcdp);
- }
+#ifdef CONFIG_EFI_PCDP
+ efi_setup_pcdp_console(*cmdline_p);
#endif
+#ifdef CONFIG_SERIAL_8250_CONSOLE
if (!efi.hcdp)
setup_serial_legacy();
#endif
#ifdef CONFIG_VT
+ if (!conswitchp) {
# if defined(CONFIG_DUMMY_CONSOLE)
- conswitchp = &dummy_con;
+ conswitchp = &dummy_con;
# endif
# if defined(CONFIG_VGA_CONSOLE)
- /*
- * Non-legacy systems may route legacy VGA MMIO range to system
- * memory. vga_con probes the MMIO hole, so memory looks like
- * a VGA device to it. The EFI memory map can tell us if it's
- * memory so we can avoid this problem.
- */
- if (efi_mem_type(0xA0000) != EFI_CONVENTIONAL_MEMORY)
- conswitchp = &vga_con;
+ /*
+ * Non-legacy systems may route legacy VGA MMIO range to system
+ * memory. vga_con probes the MMIO hole, so memory looks like
+ * a VGA device to it. The EFI memory map can tell us if it's
+ * memory so we can avoid this problem.
+ */
+ if (efi_mem_type(0xA0000) != EFI_CONVENTIONAL_MEMORY)
+ conswitchp = &vga_con;
# endif
+ }
#endif
/* enable IA-64 Machine Check Abort Handling */
} else
printk(KERN_ERR "Recursive die() failure, output suppressed\n");
+ if (netdump_func)
+ netdump_func(regs);
+ if (panic_on_oops) {
+ if (netdump_func)
+ netdump_func = NULL;
+ panic("Fatal exception");
+ }
bust_spinlocks(0);
die.lock_owner = -1;
spin_unlock_irq(&die.lock);
}
if (write) {
- if (read_only(addr))
- UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n");
- else {
+ if (read_only(addr)) {
+ UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n",
+ __FUNCTION__);
+ } else {
*addr = *val;
if (*nat)
*nat_addr |= nat_mask;
return -1;
}
if (write)
- if (read_only(addr))
- UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n");
- else
+ if (read_only(addr)) {
+ UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n",
+ __FUNCTION__);
+ } else
*addr = *val;
else
*val = *addr;
}
if (write)
- if (read_only(addr))
- UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n");
- else
+ if (read_only(addr)) {
+ UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n",
+ __FUNCTION__);
+ } else
*addr = *val;
else
*val = *addr;
}
if (write) {
- if (read_only(addr))
- UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n");
- else
+ if (read_only(addr)) {
+ UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n",
+ __FUNCTION__);
+ } else
*addr = *val;
} else
*val = *addr;
addr = &info->sw->pr;
if (write) {
- if (read_only(addr))
- UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n");
- else
+ if (read_only(addr)) {
+ UNW_DPRINT(0, "unwind.%s: ignoring attempt to write read-only location\n",
+ __FUNCTION__);
+ } else
*addr = *val;
} else
*val = *addr;
printk("Mem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
i = max_mapnr;
while (i-- > 0) {
if (!pfn_valid(i))
memcpy(numa_slit, numa_slit_fix, sizeof (numa_slit));
+ for (i = nnode; i < numnodes; i++)
+ node_set_offline(i);
+
numnodes = nnode;
return;
printk("Mem-info:\n");
show_free_areas();
- printk("Free swap: %6dkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
+ printk("Free swap: %6ldkB\n", nr_swap_pages<<(PAGE_SHIFT-10));
for_each_pgdat(pgdat) {
printk("Node ID: %d\n", pgdat->node_id);
for(i = 0; i < pgdat->node_spanned_pages; i++) {
if (!num_node_memblks) {
/* No SRAT table, so assume one node (node 0) */
if (start < end)
- (*func)(start, len, 0);
+ (*func)(start, end - start, 0);
return;
}
grow = PAGE_SIZE >> PAGE_SHIFT;
if (address - vma->vm_start > current->rlim[RLIMIT_STACK].rlim_cur
- || (((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) > current->rlim[RLIMIT_AS].rlim_cur))
+ || (((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) >
+ current->rlim[RLIMIT_AS].rlim_cur))
+ return -ENOMEM;
+ if (!vx_vmpages_avail(vma->vm_mm, grow) ||
+ ((vma->vm_flags & VM_LOCKED) &&
+ !vx_vmlocked_avail(vma->vm_mm, grow)))
return -ENOMEM;
vma->vm_end += PAGE_SIZE;
- vma->vm_mm->total_vm += grow;
+ // vma->vm_mm->total_vm += grow;
+ vx_vmpages_add(vma->vm_mm, grow);
if (vma->vm_flags & VM_LOCKED)
- vma->vm_mm->locked_vm += grow;
+ // vma->vm_mm->locked_vm += grow;
+ vx_vmlocked_add(vma->vm_mm, grow);
return 0;
}
{
pte_t entry;
- mm->rss += (HPAGE_SIZE / PAGE_SIZE);
+ // mm->rss += (HPAGE_SIZE / PAGE_SIZE);
+ vx_rsspages_add(mm, HPAGE_SIZE / PAGE_SIZE);
if (write_access) {
entry =
pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
ptepage = pte_page(entry);
get_page(ptepage);
set_pte(dst_pte, entry);
- dst->rss += (HPAGE_SIZE / PAGE_SIZE);
+ // dst->rss += (HPAGE_SIZE / PAGE_SIZE);
+ vx_rsspages_add(dst, HPAGE_SIZE / PAGE_SIZE);
addr += HPAGE_SIZE;
}
return 0;
struct page *page;
pte_t *ptep;
- if (! mm->used_hugetlb)
- return ERR_PTR(-EINVAL);
if (REGION_NUMBER(addr) != REGION_HPAGE)
return ERR_PTR(-EINVAL);
put_page(page);
pte_clear(pte);
}
- mm->rss -= (end - start) >> PAGE_SHIFT;
+ // mm->rss -= (end - start) >> PAGE_SHIFT;
+ vx_rsspages_sub(mm, (end - start) >> PAGE_SHIFT);
flush_tlb_range(vma, start, end);
}
unlock_page(page);
} else {
hugetlb_put_quota(mapping);
- free_huge_page(page);
+ page_cache_release(page);
goto out;
}
}
}
}
+int page_is_ram(unsigned long pagenr)
+{
+ //FIXME: implement w/efi walk
+ printk("page is ram is called!!!!!\n");
+ return 1;
+}
+
/*
* This installs a clean page in the kernel's page table.
*/
static int __init
pci_acpi_init (void)
{
- if (!acpi_pci_irq_init())
- printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
- else
- printk(KERN_WARNING "PCI: Invalid ACPI-PCI IRQ routing table\n");
+ struct pci_dev *dev = NULL;
+
+ printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
+
+ /*
+ * PCI IRQ routing is set up by pci_enable_device(), but we
+ * also do it here in case there are still broken drivers that
+ * don't use pci_enable_device().
+ */
+ while ((dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL)
+ acpi_pci_irq_enable(dev);
+
return 0;
}
*/
#include <linux/config.h>
#include <linux/efi.h>
+#include <linux/kernel.h>
#include <asm/pal.h>
#include <asm/sal.h>
#include <asm/sn/sn_sal.h>
#define BOOT_PARAM_ADDR 0x40000
#define MAX(i,j) ((i) > (j) ? (i) : (j))
#define MIN(i,j) ((i) < (j) ? (i) : (j))
-#define ABS(i) ((i) > 0 ? (i) : -(i))
#define ALIGN8(p) (((long)(p) +7) & ~7)
#define FPROM_BUG() do {while (1);} while (0)
for (i=0; i<=max_nasid; i++)
for (j=0; j<=max_nasid; j++)
if (nasid_present(i) && nasid_present(j))
- *(cp+PROXIMITY_DOMAIN(i)*acpi_slit->localities+PROXIMITY_DOMAIN(j)) = 10 + MIN(254, 5*ABS(i-j));
+ *(cp+PROXIMITY_DOMAIN(i)*acpi_slit->localities+PROXIMITY_DOMAIN(j)) = 10 + MIN(254, 5*abs(i-j));
cp = acpi_slit->entry + acpi_slit->localities*acpi_slit->localities;
acpi_checksum(&acpi_slit->header, cp - (char*)acpi_slit);
/* Is this PCI bus associated with this moduleid? */
moduleid = NODE_MODULEID(
nasid_to_cnodeid(pcibr_soft->bs_nasid));
- if (modules[i]->id == moduleid) {
+ if (sn_modules[i]->id == moduleid) {
struct pcibr_list_s *new_element;
new_element = kmalloc(sizeof (struct pcibr_soft_s), GFP_KERNEL);
/*
* We now have a list of all the pci bridges associated with
- * the module_id, modules[i]. Call pci_bus_map_create() for
+ * the module_id, sn_modules[i]. Call pci_bus_map_create() for
* each pci bridge
*/
softlistp = first_in_list;
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
*/
err_nodepda->bte_if[i].cleanup_active = 0;
BTE_PRINTK(("eh:%p:%d Unlocked %d\n", err_nodepda,
smp_processor_id(), i));
- spin_unlock(&pda->cpu_bte_if[i]->spinlock);
+ spin_unlock(&err_nodepda->bte_if[i].spinlock);
}
del_timer(recovery_timer);
/* Use module as module vertex fastinfo */
memset(buffer, 0, 16);
- format_module_id(buffer, modules[cm]->id, MODULE_FORMAT_BRIEF);
+ format_module_id(buffer, sn_modules[cm]->id, MODULE_FORMAT_BRIEF);
sprintf(name, EDGE_LBL_MODULE "/%s", buffer);
rc = hwgraph_path_add(hwgraph_root, name, &module_vhdl);
rc = rc;
HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, module_vhdl, NULL, "Created module path.\n");
- hwgraph_fastinfo_set(module_vhdl, (arbitrary_info_t) modules[cm]);
+ hwgraph_fastinfo_set(module_vhdl, (arbitrary_info_t) sn_modules[cm]);
/* Add system controller */
sprintf(name,
ASSERT(hubv != GRAPH_VERTEX_NONE);
+ /*
+ * attach our hub_provider information to hubv,
+ * so we can use it as a crosstalk provider "master"
+ * vertex.
+ */
+ xtalk_provider_register(hubv, &hub_provider);
+ xtalk_provider_startup(hubv);
+
/*
* If nothing connected to this hub's xtalk port, we're done.
*/
/* NOTREACHED */
}
- /*
- * attach our hub_provider information to hubv,
- * so we can use it as a crosstalk provider "master"
- * vertex.
- */
- xtalk_provider_register(hubv, &hub_provider);
- xtalk_provider_startup(hubv);
-
/*
* Create a vertex to represent the crosstalk bus
* attached to this hub, and a vertex to be used
#define DPRINTF(x...)
#endif
-module_t *modules[MODULE_MAX];
+module_t *sn_modules[MODULE_MAX];
int nummodules;
#define SN00_SERIAL_FUDGE 0x3b1af409d513c2
int i;
for (i = 0; i < nummodules; i++)
- if (modules[i]->id == id) {
- DPRINTF("module_lookup: found m=0x%p\n", modules[i]);
- return modules[i];
+ if (sn_modules[i]->id == id) {
+ DPRINTF("module_lookup: found m=0x%p\n", sn_modules[i]);
+ return sn_modules[i];
}
return NULL;
/* Insert in sorted order by module number */
- for (i = nummodules; i > 0 && modules[i - 1]->id > moduleid; i--)
- modules[i] = modules[i - 1];
+ for (i = nummodules; i > 0 && sn_modules[i - 1]->id > moduleid; i--)
+ sn_modules[i] = sn_modules[i - 1];
- modules[i] = m;
+ sn_modules[i] = m;
nummodules++;
}
*/
#include <linux/config.h>
+#include <linux/module.h>
#include <asm/sn/sgi.h>
#include <asm/sn/nodepda.h>
#include <asm/sn/addrs.h>
#define L1_CACHE_MASK (L1_CACHE_BYTES - 1)
#endif
-/*
- * The base address of for each set of bte registers.
- */
-static int bte_offsets[] = { IIO_IBLS0, IIO_IBLS1 };
+/* two interfaces on two btes */
+#define MAX_INTERFACES_TO_TRY 4
+
+static struct bteinfo_s *
+bte_if_on_node(nasid_t nasid, int interface)
+{
+ nodepda_t *tmp_nodepda;
+
+ tmp_nodepda = NODEPDA(nasid_to_cnodeid(nasid));
+ return &tmp_nodepda->bte_if[interface];
+
+}
/************************************************************************
bte_result_t
bte_copy(u64 src, u64 dest, u64 len, u64 mode, void *notification)
{
- int bte_to_use;
u64 transfer_size;
struct bteinfo_s *bte;
bte_result_t bte_status;
unsigned long irq_flags;
+ struct bteinfo_s *btes_to_try[MAX_INTERFACES_TO_TRY];
+ int bte_if_index;
BTE_PRINTK(("bte_copy(0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%p)\n",
(src & L1_CACHE_MASK) || (dest & L1_CACHE_MASK)));
ASSERT(len < ((BTE_LEN_MASK + 1) << L1_CACHE_SHIFT));
+ if (mode & BTE_USE_DEST) {
+ /* try remote then local */
+ btes_to_try[0] = bte_if_on_node(NASID_GET(dest), 0);
+ btes_to_try[1] = bte_if_on_node(NASID_GET(dest), 1);
+ if (mode & BTE_USE_ANY) {
+ btes_to_try[2] = bte_if_on_node(get_nasid(), 0);
+ btes_to_try[3] = bte_if_on_node(get_nasid(), 1);
+ } else {
+ btes_to_try[2] = NULL;
+ btes_to_try[3] = NULL;
+ }
+ } else {
+ /* try local then remote */
+ btes_to_try[0] = bte_if_on_node(get_nasid(), 0);
+ btes_to_try[1] = bte_if_on_node(get_nasid(), 1);
+ if (mode & BTE_USE_ANY) {
+ btes_to_try[2] = bte_if_on_node(NASID_GET(dest), 0);
+ btes_to_try[3] = bte_if_on_node(NASID_GET(dest), 1);
+ } else {
+ btes_to_try[2] = NULL;
+ btes_to_try[3] = NULL;
+ }
+ }
+
do {
local_irq_save(irq_flags);
- bte_to_use = 0;
+ bte_if_index = 0;
+
/* Attempt to lock one of the BTE interfaces. */
- while ((bte_to_use < BTES_PER_NODE) &&
- BTE_LOCK_IF_AVAIL(bte_to_use)) {
- bte_to_use++;
+ while (bte_if_index < MAX_INTERFACES_TO_TRY) {
+ bte = btes_to_try[bte_if_index++];
+
+ if (bte == NULL) {
+ continue;
+ }
+
+ if (spin_trylock(&bte->spinlock)) {
+ if ((*bte->most_rcnt_na & BTE_ACTIVE) ||
+ (BTE_LNSTAT_LOAD(bte) & BTE_ACTIVE)) {
+ /* Got the lock but BTE still busy */
+ spin_unlock(&bte->spinlock);
+ bte = NULL;
+ } else {
+ /* we got the lock and it's not busy */
+ break;
+ }
+ }
}
- if (bte_to_use < BTES_PER_NODE) {
+ if (bte != NULL) {
break;
}
}
/* Wait until a bte is available. */
- udelay(10);
+ udelay(1);
} while (1);
- bte = pda->cpu_bte_if[bte_to_use];
- BTE_PRINTKV(("Got a lock on bte %d\n", bte_to_use));
-
if (notification == NULL) {
/* User does not want to be notified. */
*bte->most_rcnt_na = -1L;
/* Set the status reg busy bit and transfer length */
- BTE_PRINTKV(("IBLS - HUB_S(0x%p, 0x%lx)\n",
- BTEREG_LNSTAT_ADDR, IBLS_BUSY | transfer_size));
- HUB_S(BTEREG_LNSTAT_ADDR, (IBLS_BUSY | transfer_size));
+ BTE_PRINTKV(("IBLS = 0x%lx\n", IBLS_BUSY | transfer_size));
+ BTE_LNSTAT_STORE(bte, IBLS_BUSY | transfer_size);
/* Set the source and destination registers */
- BTE_PRINTKV(("IBSA - HUB_S(0x%p, 0x%lx)\n", BTEREG_SRC_ADDR,
- (TO_PHYS(src))));
- HUB_S(BTEREG_SRC_ADDR, (TO_PHYS(src)));
- BTE_PRINTKV(("IBDA - HUB_S(0x%p, 0x%lx)\n", BTEREG_DEST_ADDR,
- (TO_PHYS(dest))));
- HUB_S(BTEREG_DEST_ADDR, (TO_PHYS(dest)));
+ BTE_PRINTKV(("IBSA = 0x%lx)\n", (TO_PHYS(src))));
+ BTE_SRC_STORE(bte, TO_PHYS(src));
+ BTE_PRINTKV(("IBDA = 0x%lx)\n", (TO_PHYS(dest))));
+ BTE_DEST_STORE(bte, TO_PHYS(dest));
/* Set the notification register */
- BTE_PRINTKV(("IBNA - HUB_S(0x%p, 0x%lx)\n", BTEREG_NOTIF_ADDR,
- (TO_PHYS(ia64_tpa((unsigned long)bte->most_rcnt_na)))));
- HUB_S(BTEREG_NOTIF_ADDR, (TO_PHYS(ia64_tpa((unsigned long)bte->most_rcnt_na))));
+ BTE_PRINTKV(("IBNA = 0x%lx)\n",
+ TO_PHYS(ia64_tpa((unsigned long)bte->most_rcnt_na))));
+ BTE_NOTIF_STORE(bte, TO_PHYS(ia64_tpa((unsigned long)bte->most_rcnt_na)));
/* Initiate the transfer */
- BTE_PRINTK(("IBCT - HUB_S(0x%p, 0x%lx)\n", BTEREG_CTRL_ADDR,
- BTE_VALID_MODE(mode)));
- HUB_S(BTEREG_CTRL_ADDR, BTE_VALID_MODE(mode));
+ BTE_PRINTK(("IBCT = 0x%lx)\n", BTE_VALID_MODE(mode)));
+ BTE_CTRL_STORE(bte, BTE_VALID_MODE(mode));
spin_unlock_irqrestore(&bte->spinlock, irq_flags);
BTE_PRINTKV((" Delay Done. IBLS = 0x%lx, most_rcnt_na = 0x%lx\n",
- HUB_L(BTEREG_LNSTAT_ADDR), *bte->most_rcnt_na));
+ BTE_LNSTAT_LOAD(bte), *bte->most_rcnt_na));
if (*bte->most_rcnt_na & IBLS_ERROR) {
bte_status = *bte->most_rcnt_na & ~IBLS_ERROR;
bte_status = BTE_SUCCESS;
}
BTE_PRINTK(("Returning status is 0x%lx and most_rcnt_na is 0x%lx\n",
- HUB_L(BTEREG_LNSTAT_ADDR), *bte->most_rcnt_na));
+ BTE_LNSTAT_LOAD(bte), *bte->most_rcnt_na));
return bte_status;
}
+EXPORT_SYMBOL(bte_copy);
/*
u64 footBcopyDest;
u64 footBcopyLen;
bte_result_t rv;
- char *bteBlock;
+ char *bteBlock, *bteBlock_unaligned;
if (len == 0) {
return BTE_SUCCESS;
}
/* temporary buffer used during unaligned transfers */
- bteBlock = pda->cpu_bte_if[0]->scratch_buf;
+ bteBlock_unaligned = kmalloc(len + 3 * L1_CACHE_BYTES,
+ GFP_KERNEL | GFP_DMA);
+ if (bteBlock_unaligned == NULL) {
+ return BTEFAIL_NOTAVAIL;
+ }
+ bteBlock = (char *) L1_CACHE_ALIGN((u64) bteBlock_unaligned);
headBcopySrcOffset = src & L1_CACHE_MASK;
destFirstCacheOffset = dest & L1_CACHE_MASK;
ia64_tpa((unsigned long)bteBlock),
footBteLen, mode, NULL);
if (rv != BTE_SUCCESS) {
+ kfree(bteBlock_unaligned);
return rv;
}
(len - headBcopyLen -
footBcopyLen), mode, NULL);
if (rv != BTE_SUCCESS) {
+ kfree(bteBlock_unaligned);
return rv;
}
rv = bte_copy(headBteSource,
ia64_tpa((unsigned long)bteBlock), headBteLen, mode, NULL);
if (rv != BTE_SUCCESS) {
+ kfree(bteBlock_unaligned);
return rv;
}
headBcopySrcOffset),
headBcopyLen);
}
+ kfree(bteBlock_unaligned);
return BTE_SUCCESS;
}
+EXPORT_SYMBOL(bte_unaligned_copy);
/************************************************************************
mynodepda->bte_recovery_timer.data = (unsigned long) mynodepda;
for (i = 0; i < BTES_PER_NODE; i++) {
- /* >>> Don't know why the 0x1800000L is here. Robin */
- mynodepda->bte_if[i].bte_base_addr =
- (char *) LOCAL_MMR_ADDR(bte_offsets[i] | 0x1800000L);
+ (u64) mynodepda->bte_if[i].bte_base_addr =
+ REMOTE_HUB_ADDR(cnodeid_to_nasid(cnode),
+ (i == 0 ? IIO_IBLS0 : IIO_IBLS1));
/*
* Initialize the notification and spinlock
mynodepda->bte_if[i].notify = 0L;
spin_lock_init(&mynodepda->bte_if[i].spinlock);
- mynodepda->bte_if[i].scratch_buf =
- alloc_bootmem_node(NODE_DATA(cnode), BTE_MAX_XFER);
mynodepda->bte_if[i].bte_cnode = cnode;
mynodepda->bte_if[i].bte_error_count = 0;
mynodepda->bte_if[i].bte_num = i;
}
}
-
-/*
- * bte_init_cpu()
- *
- * Initialize the cpupda structure with pointers to the
- * nodepda bte blocks.
- *
- */
-void
-bte_init_cpu(void)
-{
- /* Called by setup.c as each cpu is being added to the nodepda */
- if (local_node_data->active_cpu_count & 0x1) {
- pda->cpu_bte_if[0] = &(nodepda->bte_if[0]);
- pda->cpu_bte_if[1] = &(nodepda->bte_if[1]);
- } else {
- pda->cpu_bte_if[0] = &(nodepda->bte_if[1]);
- pda->cpu_bte_if[1] = &(nodepda->bte_if[0]);
- }
-}
extern void pcibr_force_interrupt(pcibr_intr_t intr);
extern int sn_force_interrupt_flag;
struct irq_desc * sn_irq_desc(unsigned int irq);
+extern cpumask_t __cacheline_aligned pending_irq_cpumask[NR_IRQS];
struct sn_intr_list_t {
struct sn_intr_list_t *next;
{
}
+static inline void sn_move_irq(int irq)
+{
+ /* note - we hold desc->lock */
+ cpumask_t tmp;
+ irq_desc_t *desc = irq_descp(irq);
+
+ if (!cpus_empty(pending_irq_cpumask[irq])) {
+ cpus_and(tmp, pending_irq_cpumask[irq], cpu_online_map);
+ if (unlikely(!cpus_empty(tmp))) {
+ desc->handler->set_affinity(irq, pending_irq_cpumask[irq]);
+ }
+ cpus_clear(pending_irq_cpumask[irq]);
+ }
+}
+
static void
sn_ack_irq(unsigned int irq)
{
}
HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_EVENT_OCCURRED_ALIAS), mask );
__set_bit(irq, (volatile void *)pda->sn_in_service_ivecs);
+ sn_move_irq(irq);
}
static void
#define MAX_PHYS_MEMORY (1UL << 49) /* 1 TB */
extern void bte_init_node (nodepda_t *, cnodeid_t);
-extern void bte_init_cpu (void);
extern void sn_timer_init(void);
extern unsigned long last_time_offset;
extern void init_platform_hubinfo(nodepda_t **nodepdaindr);
shub_1_1_found = 1;
}
-
+/**
+ * sn_set_error_handling_features - Tell the SN prom how to handle certain
+ * error types.
+ */
+static void __init
+sn_set_error_handling_features(void)
+{
+ u64 ret;
+ u64 sn_ehf_bits[7]; /* see ia64_sn_set_error_handling_features */
+ memset(sn_ehf_bits, 0, sizeof(sn_ehf_bits));
+#define EHF(x) __set_bit(SN_SAL_EHF_ ## x, sn_ehf_bits)
+ EHF(MCA_SLV_TO_OS_INIT_SLV);
+ EHF(NO_RZ_TLBC);
+ // Uncomment once Jesse's code goes in - EHF(NO_RZ_IO_READ);
+#undef EHF
+ ret = ia64_sn_set_error_handling_features(sn_ehf_bits);
+ if (ret)
+ printk(KERN_ERR "%s: failed, return code %ld\n", __FUNCTION__, ret);
+}
/**
* sn_setup - SN platform setup routine
master_node_bedrock_address);
}
+ /* Tell the prom how to handle certain error types */
+ sn_set_error_handling_features();
+
/*
* we set the default root device to /dev/hda
* to make simulation easy
buddy_nasid = cnodeid_to_nasid(numa_node_id() == numnodes-1 ? 0 : numa_node_id()+ 1);
pda->pio_shub_war_cam_addr = (volatile unsigned long*)GLOBAL_MMR_ADDR(nasid, SH_PI_CAM_CONTROL);
}
-
- bte_init_cpu();
}
/*
#include <asm/delay.h>
#include <asm/io.h>
#include <asm/smp.h>
+#include <asm/tlb.h>
#include <asm/numa.h>
#include <asm/bitops.h>
#include <asm/hw_irq.h>
}
+void
+sn_tlb_migrate_finish(struct mm_struct *mm)
+{
+ if (mm == current->mm)
+ flush_tlb_mm(mm);
+}
+
/**
* sn2_global_tlb_purge - globally purge translation cache of virtual address range
return;
}
+ if (atomic_read(&mm->mm_users) == 1) {
+ flush_tlb_mm(mm);
+ preempt_enable();
+ return;
+ }
+
nix = 0;
for (cnode=find_first_bit(&nodes_flushed, NR_NODES); cnode < NR_NODES;
cnode=find_next_bit(&nodes_flushed, NR_NODES, ++cnode))
endmenu
+source "kernel/vserver/Kconfig"
+
source "security/Kconfig"
source "crypto/Kconfig"
LDFLAGS_vmlinux = -N
endif
+CHECK := $(CHECK) -D__mc68000__=1 -I$(shell $(CC) -print-file-name=include)
+
# without -fno-strength-reduce the 53c7xx.c driver fails ;-(
CFLAGS += -pipe -fno-strength-reduce -ffixed-a2
set_pte(dir, pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
swap_free(entry);
get_page(page);
- ++vma->vm_mm->rss;
+ vx_rsspages_inc(vma->vm_mm);
}
static inline void unswap_pmd(struct vm_area_struct * vma, pmd_t *dir,
if (map[i]) {
entry = swp_entry(stram_swap_type, i);
- DPRINTK("unswap: map[i=%lu]=%u nr_swap=%u\n",
+ DPRINTK("unswap: map[i=%lu]=%u nr_swap=%ld\n",
i, map[i], nr_swap_pages);
swap_device_lock(stram_swap_info);
#endif
}
- DPRINTK( "unswap: map[i=%lu]=%u nr_swap=%u\n",
+ DPRINTK( "unswap: map[i=%lu]=%u nr_swap=%ld\n",
i, map[i], nr_swap_pages );
swap_list_lock();
swap_device_lock(stram_swap_info);
| Expected outputs:
| d0 = 0 -> success; non-zero -> failure
|
-| Linux/68k: As long as ints are disabled, no swapping out should
-| occur (hopefully...)
+| Linux/m68k: Make sure the page is properly paged in, so we use
+| plpaw and handle any exception here. The kernel must not be
+| preempted until _060_unlock_page(), so that the page stays mapped.
|
.global _060_real_lock_page
_060_real_lock_page:
- clr.l %d0
+ move.l %d2,-(%sp)
+ | load sfc/dfc
+ tst.b %d0
+ jne 1f
+ moveq #1,%d0
+ jra 2f
+1: moveq #5,%d0
+2: movec.l %dfc,%d2
+ movec.l %d0,%dfc
+ movec.l %d0,%sfc
+
+ clr.l %d0
+ | prefetch address
+ .chip 68060
+ move.l %a0,%a1
+1: plpaw (%a1)
+ addq.w #1,%a0
+ tst.b %d1
+ jeq 2f
+ addq.w #2,%a0
+2: plpaw (%a0)
+3: .chip 68k
+
+ | restore sfc/dfc
+ movec.l %d2,%dfc
+ movec.l %d2,%sfc
+ move.l (%sp)+,%d2
rts
+.section __ex_table,"a"
+ .align 4
+ .long 1b,11f
+ .long 2b,21f
+.previous
+.section .fixup,"ax"
+ .even
+11: move.l #0x020003c0,%d0
+ or.l %d2,%d0
+ swap %d0
+ jra 3b
+21: move.l #0x02000bc0,%d0
+ or.l %d2,%d0
+ swap %d0
+ jra 3b
+.previous
+
|
| _060_unlock_page():
|
| d0 = `xxxxxxff -> supervisor; `xxxxxx00 -> user
| d1 = `xxxxxxff -> longword; `xxxxxx00 -> word
|
-| Linux/68k: As we do no special locking operation, also no unlocking
-| is needed...
+| Linux/m68k: perhaps reenable preemption here...
.global _060_real_unlock_page
_060_real_unlock_page: